
How Ray-Ban Meta glasses shaped Google's smart glasses strategy
The other thing to note is that Google seems to be leaning on partners for XR hardware, too. Shahram Izadi, Google's VP of XR, said the company is working with Samsung to go beyond headsets to glasses, while the first pair of Project Aura glasses will be made by Xreal.
The partnerships indicate that the internet giant is taking style seriously this time around. Warby Parker is well known as a direct-to-consumer eyewear brand that makes trendy glasses relatively accessible, while Gentle Monster is currently one of the buzziest eyewear brands not owned by EssilorLuxottica. The Korean brand is popular among Gen Z, thanks in part to its edgy silhouettes and its favor among fashion-forward celebrities like Kendrick Lamar, Beyoncé, Rihanna, Gigi Hadid, and Billie Eilish. Partnering with both brands hints that Android XR is aimed at both versatile everyday glasses and bolder, trendsetting options.
As for what these glasses will do, Google emphasized that they're a natural vehicle for Gemini. The prototype glasses are fitted with cameras, microphones, and speakers so the AI assistant can help you understand the world around you. The demos included turn-by-turn directions, photo taking, and live language translation. That pretty much lines up with what I saw at my Android XR hands-on in December, though Google has slowly been rolling out these demos more publicly over the past few months.
The latter remains to be seen, but one thing the Ray-Ban Meta glasses have convincingly argued is that for smart glasses to go mainstream, they need to look cool. Not only do Meta's glasses look like an ordinary pair of Ray-Bans, but Ray-Ban itself is an iconic brand known for its Wayfarer shape, so the average person wouldn't feel put off wearing them. Meta has started putting out limited-edition versions of its second-gen smart glasses, borrowing a page from sneaker fashion, and is rumored to be releasing versions of its smart glasses for athletes.
The 15 biggest announcements at Google I/O 2025
When Chrome detects that a password has been compromised, it will be able to update it for you on supported websites. The feature launches later this year, and Google says it will always ask for consent before changing your passwords.
Google is testing a new feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. It uses an AI model that “understands the human body and nuances of clothing.”
Gmail's AI-suggested replies will now draw on information from your inbox and Drive to sound more like you. By taking your recipient's tone into account, the feature can, for example, suggest more formal responses in conversations with your boss.
Google Meet is launching a new feature that translates your speech into your conversation partner’s preferred language in near real-time. The feature only supports English and Spanish for now. It’s rolling out in beta to Google AI Pro and Ultra subscribers.
Source: The 15 biggest announcements at Google I/O 2025
Stitch is a new tool that can generate user interfaces from a description and a chosen theme. You can use wireframes, rough sketches, and examples of other designs to guide Stitch's output. The experiment is currently available through Google Labs.
Google also announced that Gemini Live's screen sharing feature will be free for all users.
Speaking of Project Astra, Google is launching Search Live, a feature that incorporates capabilities from the AI assistant. A new Live icon in Lens lets you show Search what your camera sees and talk back and forth with it.
Google is also building Gemini into Chrome. Starting May 21st, Google AI Pro and Ultra subscribers will be able to select the Gemini button in Chrome to clarify or summarize information across webpages and navigate sites on their behalf. The feature works with up to two tabs for now, but Google plans to add support for more later this year.
In addition to updating its AI models, Google is launching a new AI filmmaking app called Flow. The tool can generate eight-second video clips from text and images, and it comes with tools for stitching clips together into longer videos.
Gemini's new Deep Think mode is meant for complex queries; it can consider several hypotheses before responding, and it will be available to trusted testers first.
Google has also made its Gemini 2.5 Flash model available to everyone on its Gemini app and is bringing improvements to the cost-efficient model in Google AI Studio ahead of a wider rollout.
Project Astra could already use your phone’s camera to “see” the objects around you, but the latest prototype will let it complete tasks on your behalf, even if you don’t explicitly ask it to. The model can choose to speak based on what it’s seeing, such as pointing out a mistake on your homework.
Project Starline is taking a huge step forward: it will launch in an HP-branded device with a light field display and six cameras that create a 3D image of the person on a video call.
Google just wrapped up its big keynote at I/O 2025. As expected, it was full of AI-related announcements, ranging from updates across Google’s image and video generation models to new features in Search and Gmail.
But there were some surprises, too, like a new AI filmmaking app and an update to Project Starline. If you missed it, you can find out what else happened in the roundup below.
Deep Search, charts, and other new features in Search's AI Mode will be tested this summer, and the ability to shop in AI Mode is rolling out in the "coming months."
Here in sunny Mountain View, California, I am sequestered in a teeny-tiny box. Outside, there’s a long line of tech journalists, and we are all here for one thing: to try out Project Moohan and Google’s Android XR smart glasses prototypes. The Project Mariner booth is 10 feet away and completely empty.
While nothing was going to steal AI's spotlight at this year's keynote (95 mentions!), there's plenty of buzz around the XR hardware. But the demos we got to see here were notably shorter, with more guardrails, than what I got to see back in December. Probably because, unlike a few months ago, there are cameras everywhere, and these are "risky" demos.
The first stop is Project Moohan. Not much has changed since I first slipped on the headset. It's still an Android-flavored Apple Vision Pro, albeit much lighter and more comfortable to wear. You can adjust the fit with a dial in the back, and pressing the top button brings up Gemini. You can then ask the assistant to do things for you. Specifically, I ask it to take me to my old college stomping grounds in Tokyo without having to open the Google Maps app. Natural language and context, baby.
That's a demo I've gotten before, but Google has something new to show me today: you can now get 3D depth in a regular old video you've filmed, without any special equipment. (Never mind that the example video I'm shown was most certainly filmed by someone with an eye for dramatic perspectives.)
With a crowd gathering outside, I get only a quick tour of the prototype glasses. The emphasis is on prototype. They're simple; it's actually hard to spot the camera in the frame and the discreet display in the right lens. When I put them on, I see a translucent screen showing the time and weather. Pressing the temple brings up — you guessed it — Gemini. I ask it to identify one of the two paintings in front of me. At first, it fails because I'm too far away. (These demos are risky.) I ask it to compare the two paintings, and it offers some obvious conclusions: the one on the right uses brighter colors, while the one on the left is more subdued.
On a nearby shelf are a few travel guidebooks. I tell Gemini a lie, that I'm not an outdoorsy type, and ask which book would be best for planning a trip to Japan. It picks one. I'm then prompted to take a photo with the glasses. I do, and a little preview pops up on the display. Now that's something the Ray-Ban Meta glasses can't do, and arguably one of their biggest weaknesses for the content creators that make up a huge chunk of their audience. Because you can use the display to frame your shots, you're more likely to get the photo you want and less likely to accidentally tilt your head into a Dutch angle.
These were the safest of demos. The things I saw behind closed doors in December made a more convincing case for why someone might want this tech: prototypes with not one but two built-in displays for a more expansive view, and a chance to try live translation. The whole "Gemini can identify things in your surroundings and remember them for you" demo felt personalized, proactive, powerful, and pretty dang creepy. But those demos ran on tightly controlled guardrails, and at this point in Google's story of smart glasses redemption, it can't afford a throng of tech journalists all saying, "Hey, this stuff? It does not work."
Meta's presence looms here at Shoreline, even if you won't hear its name in the introduction of the new operating system. You can see it in the way eyewear brands like Gentle Monster are partnered on the glasses launching later: Google's answer to Meta's partnership with EssilorLuxottica and Ray-Ban. You can see it in the way Google is pitching AI as the future of smart glasses and headsets. Meta, for its part, has been preaching the same for months, and why shouldn't it? It's already sold 2 million pairs of Ray-Ban Meta glasses.