Google has introduced Gemini 2.0, its most advanced AI model to date, which will power AI Overviews and Search updates in 2025.
Google says, “Gemini 2.0 will enable us to build new AI agents that bring us closer to our vision of a universal assistant.”
Google’s most capable AI model yet
In a recent New York Times interview, Google CEO Sundar Pichai said that “Search will profoundly change in 2025,” and the introduction of Gemini 2.0, which Google describes as its “most capable AI model yet that’s built for the agentic era, bringing enhanced performance, more multimodality, and new native tool use,” shows he meant it.
The tech giant announced the release of Gemini 2.0 on its Keyword blog, where Pichai had this to say:
- “Information is at the core of human progress. It’s why we’ve focused for more than 26 years on our mission to organize the world’s information and make it accessible and useful.”
According to Google, Gemini 2.0 is its “most capable AI model yet,” with advanced features including native image and audio generation, advanced multimodality, the ability to answer complex math equations and multi-step questions, coding, and integration with Google tools like Search and Maps.
Sundar Pichai, Google’s CEO, wrote:
- “Today, we’re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.”
AI transforms Search
Since the release of AI Overviews, Search has changed considerably: AI-generated snippets now provide concise answers directly on the results page, driving a rise in zero-click searches and hurting many sites that rely on organic traffic.
Now Google says its latest Search update, powered by Gemini 2.0, will change it even more. CEO Sundar Pichai said:
- “No product has been transformed more by AI than Search.”
He added:
- “Our AI Overviews now reach 1 billion people, enabling them to ask entirely new types of questions — quickly becoming one of our most popular Search features ever.”
AI Overviews gets advanced reasoning capabilities
Google has already begun testing Gemini 2.0 in AI Overviews, and the experimental model will soon be available to Gemini users.
Pichai said this about Google’s next step:
- “We’re bringing the advanced reasoning capabilities of Gemini 2.0 to AI Overviews to tackle more complex topics and multi-step questions, including advanced math equations, multimodal queries, and coding.”
Alongside Gemini 2.0, Google also announced the launch of Deep Research, a new feature that acts as a research assistant using long-context capabilities and advanced reasoning. It is already available to Gemini Advanced users.
Integrates with all platforms
Besides powering AI Overviews, Gemini 2.0 will integrate with other Google platforms, including Maps and Search.
Google says the integration will enable more intuitive navigation, enhancing both platform functionality and the user experience.
Pichai wrote:
- “It’s why we continue to push the frontiers of AI to organize that information across every input and make it accessible via any output so that it can be truly useful for you.”
Other features Gemini 2.0 will bring include:
- “In addition to supporting multimodal inputs like images, video, and audio, 2.0 Flash now supports multimodal output like natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio.”
Rolling out in 2025
Google said that once initial testing of Gemini 2.0 in AI Overviews is complete, it will begin integrating the model into other Google products and expand Gemini-powered AI Overviews to more countries and languages in 2025.