At this year’s Google Cloud Next 2025 conference, the message was crystal clear: AI is evolving fast, and Google is leading that transformation with technology that redefines the enterprise landscape. The keynote unveiled some of the most powerful advancements to date, aimed at making AI not only faster and more intelligent, but also more accessible and integrated into every part of a business.
One of the biggest announcements was the introduction of Cloud WAN, a breakthrough in global networking infrastructure. Google is opening its private fiber network, the same one that carries Search, YouTube, and Workspace traffic, to enterprises everywhere, which means companies will be able to run their operations on the same infrastructure as Google itself. Early users like Citadel Securities and Nestlé have already reported significant gains: up to 40% faster performance and up to 40% lower total cost of ownership. Google Cloud customers around the world will be able to tap into Cloud WAN by the end of the month.
Hardware also took center stage with the unveiling of Ironwood, Google’s most advanced Tensor Processing Unit to date. As the seventh generation of TPUs, Ironwood offers a jaw-dropping 3,600x performance improvement over the first publicly available TPU. The scale of this innovation is staggering: an Ironwood pod contains more than 9,000 chips and delivers over 42 exaflops of compute power, 24 times more than the world’s fastest supercomputer today. This level of power is specifically designed to train and serve the next generation of AI models, including the latest versions of Gemini.
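Those pod figures can be sanity-checked with quick back-of-the-envelope arithmetic. The sketch below uses the announced figures of 9,216 chips per pod and 42.5 exaflops of aggregate compute (the precise values behind the "more than 9,000" and "over 42" rounding above):

```python
# Back-of-the-envelope check of the Ironwood pod figures quoted above.
# Assumes the announced numbers: 9,216 chips per pod and 42.5 exaflops
# of aggregate compute.
CHIPS_PER_POD = 9_216
POD_EXAFLOPS = 42.5

pod_flops = POD_EXAFLOPS * 1e18                    # total FLOP/s per pod
per_chip_petaflops = pod_flops / CHIPS_PER_POD / 1e15

print(f"~{per_chip_petaflops:.1f} petaflops per chip")  # ~4.6 petaflops per chip
```

That works out to roughly 4.6 petaflops per chip, which is the per-chip scale required to reach exaflop-class pods at this chip count.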
To complement this hardware, Google introduced upgrades to its software stack. Google Kubernetes Engine (GKE) now supports features designed specifically for AI inference at scale, reducing serving costs, lowering latency, and dramatically increasing throughput. At the same time, Google is making Pathways, its powerful internal ML runtime built by Google DeepMind, available to cloud customers. Pathways dynamically routes tasks across hundreds of accelerators for unmatched performance.
With all this computing firepower now available, Gemini 2.0 becomes even more effective. Already Google’s most advanced model, Gemini 2.0 delivers, by Google’s own measure, 24 times more intelligence per dollar than GPT-4, and on the same metric it outperforms other top contenders like DeepSeek R1 by a factor of five. Gemini can now also run locally through Google Distributed Cloud, whether in connected environments or in secure, air-gapped locations, including government systems operating at secret-level clearance.
Google is bringing this AI power directly into the tools people use every day. In Google Workspace, Gemini is getting even smarter. Users can now analyze data in Sheets using a guided assistant, turn documents into spoken audio summaries in Docs, and automate workflows with Workspace Flows. These updates turn traditional productivity tools into intelligent, collaborative platforms.
Creativity got a major boost too. Google is now the first cloud provider to offer Lyria, a new model that turns text into music: artists and creators can generate audio tracks up to 30 seconds long just by writing a prompt. Google is also adding support for Meta’s Llama 4 and Ai2’s full range of open models, all of which are now available in the Vertex AI Model Garden.
With these models integrated into Vertex AI, teams can access enterprise data sources across multiple clouds without duplicating or moving data. This simplifies access and control while keeping information secure and centralized.
But perhaps the most futuristic vision came with the introduction of the Agent Development Kit. This open-source tool allows developers to build complex multi-agent systems. These aren’t just chatbots; they’re intelligent agents capable of reasoning, collaborating, and interacting with each other. Google is standardizing this capability through a new open protocol called Agent2Agent (A2A), which lets agents work together across different platforms. Even open frameworks like LangGraph and CrewAI are jumping on board, helping to build an open ecosystem of AI agents that can communicate seamlessly.
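To give a feel for what a multi-agent system involves, here is a minimal, self-contained sketch. The class and message names are hypothetical stand-ins, not the Agent Development Kit’s actual API: two specialized agents advertise skills, and a router dispatches structured messages to whichever agent can handle each task.

```python
from dataclasses import dataclass, field

# Toy illustration of agents cooperating over a shared message format.
# All names here are hypothetical; the real Agent Development Kit and
# Agent2Agent protocol define their own, much richer interfaces.

@dataclass
class Message:
    sender: str
    task: str
    payload: dict = field(default_factory=dict)

class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = skills  # maps a task name to a handler function

    def can_handle(self, task):
        return task in self.skills

    def handle(self, msg):
        return self.skills[msg.task](msg.payload)

class Router:
    """Delivers each message to the first agent advertising that skill."""
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, msg):
        for agent in self.agents:
            if agent.can_handle(msg.task):
                return agent.handle(msg)
        raise LookupError(f"no agent can handle {msg.task!r}")

# A "research" agent and a "writer" agent collaborating on one request.
research = Agent("research", {"lookup": lambda p: f"facts about {p['topic']}"})
writer = Agent("writer", {"summarize": lambda p: p["text"].upper()})
router = Router([research, writer])

facts = router.dispatch(Message("user", "lookup", {"topic": "TPUs"}))
summary = router.dispatch(Message("user", "summarize", {"text": facts}))
print(summary)  # FACTS ABOUT TPUS
```

The point of the sketch is the division of labor: neither agent knows about the other, and the shared message format is what lets them be composed, which is the role an open protocol like A2A plays across platforms.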
Taking things a step further, Google launched Agentspace, a platform that gives every employee in an organization a personal workspace powered by AI agents. These agents understand enterprise data, integrate with third-party tools, and provide smart assistance, all while meeting strict privacy and security standards.
Customer support is also entering a new phase. Google’s updated customer engagement suite now allows AI to understand spoken language with near-human accuracy, detect emotion in real time, and even respond to live video feeds. This means virtual agents can diagnose issues by watching a device on a customer’s screen and suggest solutions instantly.
Data teams are also getting personalized AI support. Data engineers can automate metadata creation and pipeline management. Data scientists now have an AI pair programmer that helps build models faster. Business analysts can ask natural language questions about their data and get answers they can embed directly into their dashboards.
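To make the natural-language-question idea concrete, here is a deliberately tiny sketch. Real systems translate the question into a query with an LLM; the keyword matching below is only a hypothetical stand-in for that translation step, applied to made-up sample data:

```python
# Toy illustration of "ask a question, get an answer from your data".
# The data and the keyword "parser" are invented for this example; a
# production system would generate SQL from the question with an LLM.
rows = [
    {"region": "EMEA", "revenue": 120},
    {"region": "APAC", "revenue": 95},
    {"region": "AMER", "revenue": 210},
]

def answer(question, data):
    q = question.lower()
    if "total revenue" in q:
        return sum(r["revenue"] for r in data)
    if "highest revenue" in q:
        return max(data, key=lambda r: r["revenue"])["region"]
    raise ValueError("question not understood by this toy parser")

print(answer("What is the total revenue?", rows))             # 425
print(answer("Which region has the highest revenue?", rows))  # AMER
```

The answer is computed directly from the data rather than generated as free text, which is why results like these can be embedded into a dashboard with confidence.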
And for software developers, Google introduced Code Assist agents that act as collaborative teammates. These agents understand the entire development cycle, from bug tracking to versioning to deployment, and even integrate with tools from Atlassian, Sentry, and Snyk to create a truly connected development environment.
The pace of innovation on display at Google Cloud Next 2025 wasn’t just fast—it felt like a quantum leap. From foundational infrastructure to intelligent agents that think, act, and collaborate, Google isn’t just keeping up with AI’s rapid evolution—they’re defining its future.