In this blog, you'll find all the latest updates and announcements from the Google I/O 2025 event, from the opening AI-influenced welcome video set to “You Get What You Give” by New Radicals through to CEO Sundar Pichai’s sign-off. Google I/O 2025 was bursting with news about the technology giant and its products.
Google highlighted some significant wins for its AI technologies, including Gemini topping several LMArena leaderboard categories. Another moment Google seemed particularly proud of was Gemini finishing Pokémon Blue a few weeks ago.
From the full two-hour event, we'll break down the product changes and announcements you need to know, so you can walk away with all the takeaways without spending the runtime of a feature film.
Before we dive in, here's the most stunning news out of Google I/O: the price of Google's new AI Ultra subscription. While Google offers a basic plan for $19.99 per month, the Ultra plan comes in at a staggering $249.99 per month for its full suite of products with the highest rate limits.
Google announced so much. Here’s what you shouldn’t miss.
Google Search AI Mode
Google saved its biggest announcement for deep into the event, but we'll cover it first.
Google announced at I/O that Google Search's AI Mode is rolling out to all US users starting today. It lets users run longer, more complex queries through Google's search tool. Using a “query fan-out technique,” AI Mode breaks a search into numerous parts, processes each part, and aggregates the results for the user. Google says AI Mode “checks its work,” though it doesn't specify how.
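Google hasn't published implementation details for the query fan-out technique, but the description above (decompose, process in parallel, aggregate) maps onto a familiar pattern. Here is a minimal illustrative sketch in Python; the `decompose` and `run_subquery` functions are toy stand-ins, not real Google APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(query: str) -> list[str]:
    """Toy stand-in for a model that splits a complex query into subtopics."""
    return [f"{query} - pricing", f"{query} - reviews", f"{query} - alternatives"]

def run_subquery(subquery: str) -> str:
    """Toy stand-in for issuing one search and summarizing its results."""
    return f"summary of results for '{subquery}'"

def fan_out_search(query: str) -> str:
    """Break the query apart, process each piece concurrently, aggregate."""
    subqueries = decompose(query)
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(run_subquery, subqueries))
    return "\n".join(partials)

print(fan_out_search("foldable phones"))
```

The real system presumably issues live web searches and uses a language model for both the decomposition and the final synthesis; the sketch only shows the control flow.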
AI Mode is now available. Later this summer, Google will launch Personal Context in AI Mode, which will make recommendations based on past searches and other contextual data from Gmail.
AI Mode will also gain data visualization tools, which can display search results as a graph when applicable, and Deep Search, which can browse many websites on the user's behalf.
Google says 1.5 billion users see its AI Overviews in Search each month, so AI Mode has the largest potential user base of any of Google's offerings.
AI Shopping
Google's AI shopping tools drew some of the biggest reactions from the live audience at Google I/O.
Google's Shopping Graph, now linked to AI Mode, contains approximately 50 billion product listings. When a user searches for, say, a specific style of couch, Google will surface options matching that description.
Google also ran a demo in which the presenter uploaded a photo of herself so the AI could show how outfits would look on her. Google Labs' virtual try-on tool is essentially Cher's closet from Clueless in real life.
The presenter then used an AI shopping agent to track a product's price and availability; when the price dropped, the agent sent her a notification.
Google said that starting Wednesday, Google Labs users can virtually try on numerous outfits using AI.
Android XR
At Google I/O, Google revealed its post-Google Glass AR/VR plans, unveiling several wearable devices running Android XR.
Google appears to have designed Android XR with both immersive headsets and smart glasses in mind, given their very different use cases.
Samsung has previously teased its Project Moohan XR headset, but Google I/O was the first time Google showed it off, built in collaboration with Qualcomm. Google said the Project Moohan headset will be available later this year.
In addition to the XR headset, Google introduced Android XR smart glasses with cameras, speakers, and an in-lens display that pairs with a smartphone. Partnerships with Gentle Monster and Warby Parker should make these smart glasses more stylish than Google Glass ever was.
Google said developers will be able to start building for the glasses next year, so the smart glasses' release date is likely sometime after that.

Gemini
Gemini, Google's AI model, took center stage at I/O 2025. The upgraded Gemini 2.5 Pro is the company's most powerful variant, and Google demoed it turning ideas into working apps. Google also presented Gemini 2.5 Flash, a less expensive version of the robust Pro model. Flash ships in early June, with Pro following shortly after. Google also revealed Gemini 2.5 Pro Deep Think, a mode for challenging math and coding problems, available first to “trusted testers.”
Google also made Jules, an asynchronous coding agent, available in public beta. Jules lets developers delegate changes to files and codebases.
Developers will be able to replicate the same voice in several languages using new Native Audio Output text-to-speech models.
Agent Mode will put an AI agent inside the Gemini app that can complete tasks based on a user's instructions.
Gemini will be integrated into Google products such as Workspace, starting with Personalized Smart Replies. Gemini will draw on personal context from documents, emails, and other content in a user's Google apps to match their tone, voice, and style in automatic replies. Personalized Smart Replies arrives in Gmail this summer.
Google also revealed Gemini in Chrome, an AI assistant that answers questions about the web page a user is viewing, and Deep Research, which lets users upload files to guide the AI agent when posing questions. The latter feature reaches U.S. Gemini subscribers this week.
Google also wants to bring Gemini to its TVs, wearables, and cars.
Generative AI updates
Gemini's language model and AI assistant enhancements were just one part of Google's AI strategy. The company also made a slew of generative AI announcements.
Google unveiled Imagen 4, its latest image-generation model. Google says Imagen 4 produces better visuals and finer detail, and it also renders superior text and typography. AI models are famously bad at rendering text, which makes Imagen 4 especially interesting.
Google also debuted Veo 3, a new video-generation model. Google says Veo 3 can create scenes with sound effects, background noise, and dialogue, with a greater understanding of physics.
Veo 3, the Flow filmmaking tool, and the Lyria 2 music model are all available now.
Google also debuted Gemini Canvas, a co-creation tool, at I/O.
Project Starline aka Google Beam
Another noteworthy Google I/O disclosure: Project Starline, Google's immersive communication project, is no more. It is being replaced by Google Beam, an AI-first communication platform.
Google Beam features Google Meet translations, enabling real-time speech translation in meetings. The AI can replicate a speaker's voice and tone, making the translation sound natural. English and Spanish translations are available in Google Meet today, with more languages coming in the next few weeks.
Google also previewed a 3D conferencing system under Google Beam that uses multiple cameras to capture a user from different angles and project them onto a 3D light-field display.
Project Astra
Project Starline may have changed names, but Project Astra remains a focus for Google.
Google revealed a lot about Project Astra, its real-world universal AI assistant.
Gemini Live, a new AI assistant feature, uses a phone's camera and voice input to interact with the user's surroundings, answering questions about what the camera sees. Gemini Live is available today, according to Google.
Google also suggested that Project Astra's live AI capabilities could come to Google Search's AI Mode via Google Lens.
Google also hopes Gemini Live will serve as an accessibility aid for people with disabilities.
Project Mariner
Project Mariner is another Google AI effort: an agent that interacts with the web to complete tasks on a user's behalf.
Project Mariner was unveiled late last year; Google has since added a multitasking capability that lets the AI agent work on up to 10 tasks concurrently. A Teach and Repeat feature lets the agent learn from past tasks and complete similar ones without exact directions.
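Google hasn't said how Mariner schedules its concurrent tasks, but "up to 10 at once" describes a standard bounded-concurrency pattern. This illustrative sketch caps a worker pool at 10; `run_task` is a stand-in for an agent carrying out one web task, not a real Mariner API.

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_TASKS = 10  # Mariner reportedly handles up to 10 tasks at once

def run_task(task: str) -> str:
    """Stand-in for an agent performing a single web task."""
    return f"done: {task}"

def run_agent_tasks(tasks: list[str]) -> list[str]:
    # Cap concurrency at 10; extra tasks queue until a worker frees up.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_TASKS) as pool:
        return list(pool.map(run_task, tasks))

results = run_agent_tasks([f"task-{i}" for i in range(12)])
```

With 12 tasks submitted, the pool runs at most 10 simultaneously and queues the remaining two, which matches the behavior the announcement describes.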
Google plans to bring agentic AI to Gemini, to Google Search via AI Mode, and to Chrome.
Read More At: JassInsights
References & Citations: Binder, M. (2025, May 20). Everything you need to know from Google I/O 2025. Mashable India.