VAST Data - Experts & Thought Leaders
Latest VAST Data news & announcements
VAST Data, the AI Operating System company, announced at Microsoft Ignite a collaboration with Microsoft to power the next wave of agentic AI. Available soon to Azure customers, the VAST AI OS provides a simple way to deploy high-performance, scalable AI infrastructure in the cloud. Enterprises will be able to access VAST’s complete suite of data services in Azure, including unified storage, data cataloging, and database capabilities to support complex AI workflows. This integration will enable organisations to manage data seamlessly across on-premises, hybrid, and multi-cloud environments, delivering the scale, intelligence, and automation required to accelerate AI innovation.

VAST AI Operating System
The VAST AI Operating System will run on Azure infrastructure, enabling customers to deploy and operate it using the same tools, governance, security, and billing frameworks they are accustomed to. The solution will deliver unified management, consistent performance, and Azure-grade reliability.

“This collaboration with Microsoft reflects our shared vision for the future of AI infrastructure, where performance, scale, and simplicity converge to enable enterprises to transform their business with agentic AI,” said Jeff Denworth, Co-Founder at VAST Data. “Becoming an Azure Partner represents the first milestone in that journey. Customers will be able to unify their data and AI pipelines across environments with the same power, simplicity, and performance they expect from VAST, now with the reach, elasticity, and reliability of Microsoft’s global cloud.”

Advantages of the VAST AI OS
Azure customers will be able to take full advantage of the capabilities of the VAST AI OS running on Azure, including:

Built for Agentic AI: Leverage VAST InsightEngine and AgentEngine to run intelligent, data-driven workflows directly where data lives. InsightEngine delivers stateless, high-performance compute and database services that accelerate vector search, RAG pipelines, and data preparation. AgentEngine orchestrates autonomous agents operating on real-time data streams, enabling continuous AI reasoning across hybrid and multi-cloud environments.

Performance at Scale for Model Builders: Designed for the demands of model training and inference, the VAST AI OS keeps Azure GPU and CPU clusters saturated with high-throughput data services, intelligent caching, and metadata-optimised I/O to ensure predictable performance from pilot to multi-region scale. VAST benefits from the latest Azure infrastructure solutions, including the Laos VM Series using Azure Boost Accelerated Networking.

Seamless Hybrid AI Workflows: An exabyte-scale DataSpace creates a unified global namespace that eliminates data silos and enables effortless data mobility. Customers can instantly burst from on-premises to Azure for GPU-accelerated workloads without migration or reconfiguration.

Unified Data Access: VAST’s DataStore supports file (NFS, SMB), object (S3), and block protocols, while the VAST DataBase combines transactional performance with the query speed of a warehouse and the economics of a data lake, allowing diverse workloads to run on one platform without compromise.

Elastic, Cost-Efficient Architecture: VAST’s Disaggregated, Shared-Everything (DASE) design enables independent scaling of compute and storage resources within Azure. Combined with built-in Similarity Reduction, the platform minimises storage footprint and reduces cost for large-scale AI infrastructure.

Azure’s GPU-accelerated infrastructure
“VAST’s AI Operating System running on Azure will give Azure customers a high-performance, scalable platform built on the Laos VM Series using Azure Boost that seamlessly extends on-premises AI pipelines into Azure’s GPU-accelerated infrastructure,” said Aung Oo, Vice President, Azure Storage at Microsoft. “Many AI model builders in the world leverage VAST for its scalability, breakthrough performance, and AI-native capabilities. This collaboration can help our mutual customers streamline operations, reduce costs, and accelerate time-to-insight for AI workloads of every size.”

Future of AI infrastructure
As Microsoft continues to invest in the future of AI infrastructure, including its own custom silicon initiatives, VAST will work closely with the Azure team to align on next-generation platform requirements. This collaboration positions VAST as a strategic element of Microsoft’s broader AI computing strategy, helping to unlock the full potential of emerging innovations in compute. Together, the companies will aim to ensure that future AI systems, regardless of processor or model architecture, are fuelled by an AI operating system built for scale, performance, and simplicity.

Upcoming joint appearances
Renen Hallak, VAST Data Founder and CEO, will be at Microsoft Ignite in San Francisco and available for joint customer meetings to discuss how Azure and the VAST AI Operating System will enable enterprises to operationalise agentic AI at global scale. At Supercomputing 2025 in St. Louis, VAST Data will host Andrew Jones, Engineering Leader, Future Supercomputing & AI Capabilities, on November 19 in a conversation exploring how Azure AI and modern data strategies are shaping the AI cloud. Register to join the breakfast session and be part of the discussion on the future of AI infrastructure. Representatives from both VAST and Microsoft will also deliver technical presentations and demos in their respective booths throughout the event. Learn more by visiting VAST Booth #3204 and Microsoft Booth #1627.
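The announcement above mentions InsightEngine accelerating vector search and RAG pipelines. The retrieval step at the heart of such a pipeline can be sketched generically: embed a query, score it against stored document embeddings, and return the closest matches. This is a minimal, engine-agnostic illustration with toy hand-written embeddings; the function names, vectors, and document IDs are illustrative assumptions, not the VAST API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query, corpus, k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query vector."""
    scored = [(doc_id, cosine(query, emb)) for doc_id, emb in corpus]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy 3-dimensional embeddings standing in for vectors a real pipeline
# would produce with an embedding model over stored documents.
corpus = [
    ("doc-a", [0.9, 0.1, 0.0]),
    ("doc-b", [0.0, 1.0, 0.2]),
    ("doc-c", [0.8, 0.2, 0.1]),
]
hits = top_k([1.0, 0.0, 0.0], corpus, k=2)
print(hits)  # doc-a and doc-c score highest against this query vector
```

In a production RAG pipeline the retrieved documents would then be passed to a model as context; a dedicated vector engine replaces this linear scan with an index so the search scales past in-memory corpora.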
VAST Data, the AI Operating System company, announced an expanded partnership with Google Cloud, enabling customers to deploy the VAST AI Operating System (AI OS) as a fully managed service and extend a unified global namespace across hybrid environments. Powered by the VAST DataSpace, enterprises can seamlessly connect clusters running in Google Cloud and on-premises locations, eliminating complex migrations and making data instantly available wherever AI runs.

Fragmented storage and siloed data pipelines
Enterprises want to run AI where it performs best, but data rarely lives in one place, and migrating it can take months and cost millions. Fragmented storage and siloed data pipelines make it hard to feed AI accelerators with consistent, high-throughput access, and every environment change multiplies governance and compliance burdens.

VAST DataSpace to connect clusters
VAST and Google Cloud address this challenge by making data placement a choice rather than a constraint. In a recorded demonstration, VAST showcased the power of the VAST DataSpace to connect clusters across more than 10,000 kilometres, linking one in the United States with another in Japan. This configuration delivered seamless, near real-time access to the same data in both locations while running inference workloads with vLLM, enabling intelligent workload placement so organisations can run AI models on TPUs in the US and GPUs in Japan without duplicating data or managing separate environments.

Fully managed AI OS
“Through our partnership with Google Cloud, we’re meeting customers where they are by delivering a fully managed AI OS,” said Jeff Denworth, Co-Founder at VAST Data. “When leveraging our global namespace with intelligent streaming, Google Cloud customers can auto-deploy a VAST-managed cluster via Google Cloud Marketplace and start production in minutes, providing integrated governance and billing, elastic scale, and supporting the hybrid cloud mission – and it’s all handled by VAST, making enterprise data instantly usable for agentic workloads.”

Digital transformation journeys
“Bringing the VAST AI Operating System to Google Cloud Marketplace will help customers quickly deploy, manage, and grow the data solution on Google Cloud’s trusted, global infrastructure,” said Nirav Mehta, Vice President, Compute Platform at Google Cloud. “VAST can now securely scale and support customers on their digital transformation journeys.”

Powering Google Cloud TPUs with data access and near-local performance
Recent performance results also show how the VAST AI Operating System connects seamlessly to Google Cloud Tensor Processing Unit (TPU) virtual machines, integrating directly with Google Cloud’s platform for large-scale AI. In testing with Meta’s Llama-3.1-8B-Instruct model, the VAST AI Operating System delivered model load speeds on par with local NVMe disks, while maintaining predictable performance during cold starts. These results confirm that the VAST AI OS is not just a data platform but a performance engine designed to keep accelerators fully utilised and AI pipelines continuously in motion.

Advanced platform features
“The VAST AI OS is redefining what it means to move fast in AI, loading models at speeds comparable to local NVMe while delivering a full suite of advanced platform features,” said Subramanian Kartik, Chief Scientist at VAST Data. “This is the kind of acceleration that turns idle accelerators into active intelligence, driving higher efficiency and faster time to insight for every AI workload.”

With VAST on Google Cloud, customers can:

Deploy AI in Minutes, Not Months: Organisations can run production AI workloads on Google Cloud today against existing on-premises datasets without migration planning, transfer delays, or extended compliance cycles. Using VAST DataSpace and intelligent streaming, they can present a consistent global namespace of data across on-prem and Google Cloud instantly.

Reduce Data-Movement Costs: Stream only the subsets that models require to avoid full replication and reduce egress – cutting footprint and redirecting budget from data movement to AI innovation with infrastructure that is future-ready for demanding AI pipelines in genomics, structural biology, and financial services.

Maximise Google Cloud Innovation with Flexible Data Placement: Choose what to migrate, replicate, or cache to Google Cloud while keeping one namespace and consistent governance, applying unified access controls, audit, and retention policies everywhere to simplify compliance and reduce operational risk. Leverage the VAST DataStore and VAST DataBase to unify prep, training, inference, and analytics without rewiring pipelines.

TPU-Ready Data Path: Feed TPU VMs over validated NFS paths with optimised model-load and small-file/metadata-aware I/O, achieving warm-start load times on par with local NVMe while maintaining predictable, steady behaviour during cold starts.

Build on a Unified Platform: The VAST AI Operating System delivers a DataStore, DataBase, InsightEngine, AgentEngine, and DataSpace that scale across on-premises and Google Cloud environments and adapt to changing business needs without architectural rewrites, enabling data scientists to use a variety of access protocols with a single solution.

VAST can be deployed today in Google Cloud.
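The "stream only the subsets that models require" idea described above is, at its core, a read-through cache: byte ranges cross the wire only the first time they are touched, and repeated reads are served locally. This is a minimal in-memory sketch of that pattern; the class, chunk size, and in-memory "remote" are illustrative assumptions, not VAST's implementation.

```python
class StreamingReader:
    """Read-through cache: fetch remote byte ranges only on first access.

    `_fetch_chunk` stands in for a read over a remote namespace; here it
    just slices an in-memory bytes object for illustration.
    """

    def __init__(self, remote: bytes, chunk: int = 4):
        self.remote = remote
        self.chunk = chunk
        self.cache = {}        # chunk index -> bytes
        self.remote_reads = 0  # chunks that actually crossed the "wire"

    def _fetch_chunk(self, idx: int) -> bytes:
        self.remote_reads += 1
        start = idx * self.chunk
        return self.remote[start:start + self.chunk]

    def read(self, offset: int, size: int) -> bytes:
        out = bytearray()
        first = offset // self.chunk
        last = (offset + size - 1) // self.chunk
        for idx in range(first, last + 1):
            if idx not in self.cache:          # remote fetch on miss only
                self.cache[idx] = self._fetch_chunk(idx)
            out += self.cache[idx]
        lo = offset - first * self.chunk       # trim to the requested range
        return bytes(out[lo:lo + size])

reader = StreamingReader(b"0123456789abcdef")
print(reader.read(6, 4))    # b"6789" -> pulls only chunks 1 and 2
print(reader.read(6, 4))    # same bytes, no additional remote reads
print(reader.remote_reads)  # 2
```

Only the two 4-byte chunks overlapping the request are fetched, never the full 16-byte object, which is the same property that lets a model read a slice of a large dataset without replicating the whole thing.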
Joint validation and reference guidance for establishing a VAST DataSpace spanning Google Cloud and external clusters are available to qualified customers and partners.
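The NVMe-parity claim above comes down to measured load throughput: how fast checkpoint bytes can be streamed from a given path, cold versus warm. A generic harness for that kind of comparison can be sketched as below; this is not VAST's or Google's benchmark, just a stand-in that reads a small temporary file (a real test would point `path` at a multi-gigabyte checkpoint on the mount under test versus local NVMe).

```python
import os
import tempfile
import time

def load_checkpoint(path: str, block: int = 1 << 20) -> int:
    """Stream a file in fixed-size blocks, returning total bytes read."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    return total

def time_load(path: str) -> tuple[int, float]:
    """Return (bytes_read, elapsed_seconds) for one full pass."""
    t0 = time.perf_counter()
    n = load_checkpoint(path)
    return n, time.perf_counter() - t0

# Write a small stand-in "checkpoint" to a temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(8 << 20))  # 8 MiB of random bytes
    path = f.name

n, cold = time_load(path)  # first pass: includes any cache misses
_, warm = time_load(path)  # second pass: likely served from page cache
print(f"read {n} bytes; cold {cold:.4f}s, warm {warm:.4f}s")
os.remove(path)
```

Comparing the cold-pass figure across storage paths, at realistic file sizes and with the OS page cache dropped between runs, is the usual way such "on par with local NVMe" results are substantiated.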
VAST Data, the AI Operating System company, announced it has signed a commercial agreement valued at $1.17 billion with CoreWeave, the essential cloud for AI. The expanded partnership reinforces CoreWeave’s long-standing commitment to the VAST AI OS as its primary data foundation, solidifying VAST as a key component of CoreWeave’s AI cloud.

CoreWeave’s infrastructure
Powered by the VAST AI OS, CoreWeave’s infrastructure delivers instant access to massive datasets, breakthrough performance, and cloud-scale economics for both training and inference workloads. Built on an infinitely scalable system architecture, CoreWeave can deploy VAST in any data centre for any of its customers without having to worry about platform reliability or scale. These are some of the most intensive and demanding computing environments in the world, and together VAST and CoreWeave are making sure their customers are always computing.

Next generation of AI infrastructure
As part of this expansion, CoreWeave is working with VAST to deliver sophisticated data services to their shared customers that extend across the full stack. This optimises data pipelines and unlocks the advanced design capabilities that model builders require. Together, the companies are building the next generation of AI infrastructure, enabling customers to move faster, scale seamlessly, and operate with unmatched efficiency.

“At VAST, we are building the data foundation for the most ambitious AI initiatives in the world,” said Renen Hallak, Founder and CEO of VAST Data. “Our deep integration with CoreWeave is the result of a long-term commitment to working side by side at both the business and technical levels. By aligning our roadmaps, we are delivering an AI platform that organisations cannot find anywhere else in the market.”

CoreWeave’s GPU-accelerated infrastructure
“The VAST AI Operating System underpins key aspects of how we design and deliver our AI cloud,” said Brian Venturo, co-founder and Chief Strategy Officer of CoreWeave. “This partnership enables us to deliver AI infrastructure that is the most performant, scalable, and cost-efficient in the market, while reinforcing the trust and reliability of a data platform that our customers depend on for their most demanding workloads.”

The agreement advances a shared mission to redefine the data and compute architecture for AI. By combining CoreWeave’s GPU-accelerated infrastructure with the VAST AI Operating System, the companies are building a new class of intelligent data architecture designed to support continuous training, real-time inference, and large-scale data processing for mission-critical industries.