VAST Data collaborates with Microsoft to accelerate agentic AI innovation

VAST Data, the AI Operating System company, announced today at Microsoft Ignite a collaboration with Microsoft to power the next wave of agentic AI. Available soon to Azure customers, the VAST AI OS provides a simple way to deploy high-performance, scalable AI infrastructure in the cloud.

Enterprises will be able to access VAST’s complete suite of data services in Azure, including unified storage, data cataloging, and database capabilities to support complex AI workflows. This integration will enable organizations to manage data seamlessly across on-premises, hybrid, and multi-cloud environments, delivering the scale, intelligence, and automation required to accelerate AI innovation.

The VAST AI Operating System will run on Azure infrastructure, enabling customers to deploy and operate it using the same tools, governance, security, and billing frameworks they have become accustomed to. The solution will deliver unified management, consistent performance, and Azure-grade reliability.

“This collaboration with Microsoft reflects our shared vision for the future of AI infrastructure, where performance, scale, and simplicity converge to enable enterprises to transform their business with agentic AI,” said Jeff Denworth, Co-Founder at VAST Data. “Becoming an Azure Partner represents the first milestone in that journey. Customers will be able to unify their data and AI pipelines across environments with the same power, simplicity, and performance they expect from VAST, now with the reach, elasticity, and reliability of Microsoft’s global cloud.”

Azure customers will soon be able to fully leverage the capabilities of the VAST AI OS on Azure, gaining a comprehensive platform built for agentic AI and high-performance workflows. With VAST InsightEngine and AgentEngine, organizations can run intelligent, data-driven operations directly where their data lives, accelerating vector search, RAG pipelines, real-time reasoning, and autonomous agent orchestration across hybrid and multi-cloud environments.

Designed for large-scale model training and inference, the VAST AI OS keeps Azure GPU and CPU clusters saturated with high-throughput data services, intelligent caching, and metadata-optimized I/O, benefiting from Azure's latest infrastructure, including the Laos VM Series with Azure Boost Accelerated Networking. An exabyte-scale DataSpace creates a unified global namespace that eliminates silos and allows seamless bursting from on-premises environments to Azure without migration or reconfiguration.

VAST also unifies data access through its DataStore, which supports file, object, and block protocols, and through the VAST DataBase, which blends transactional performance with warehouse-class query speeds and data lake economics. Together with a DASE (Disaggregated Shared-Everything) architecture that scales compute and storage independently and reduces storage footprint through Similarity Reduction, Azure customers gain a scalable, cost-efficient foundation for modern AI workloads.

“VAST’s AI Operating System running on Azure will give Azure customers a high-performance, scalable platform built on the Laos VM Series using Azure Boost that seamlessly extends on-premises AI pipelines into Azure’s GPU-accelerated infrastructure,” said Aung Oo, Vice President, Azure Storage at Microsoft. “Many AI model builders in the world leverage VAST for its scalability, breakthrough performance, and AI-native capabilities. This collaboration can help our mutual customers streamline operations, reduce costs, and accelerate time-to-insight for AI workloads of every size.”

As Microsoft continues to invest in the future of AI infrastructure, including its own custom silicon initiatives, VAST will work closely with the Azure team to align on next-generation platform requirements. This collaboration positions VAST as a strategic element of Microsoft’s broader AI computing strategy, helping to unlock the full potential of emerging innovations in compute. Together, the companies will aim to ensure that future AI systems, regardless of the processor or model architecture, are fueled by an AI operating system built for scale, performance, and simplicity.
