
Xreal and Google are teaming up on Project Aura: A pair of smart glasses that use Android XR for mixed-reality devices
Google appears to be taking a page out of Meta’s smart glasses playbook. That’s telling, because it shows just how successful Meta has been with its Ray-Ban smart glasses: the company said in February that it had sold over two million pairs and is positioning them as the ideal hardware for AI assistants.
The partnership hints that Google is taking style a lot more seriously this time around. Warby Parker is well known as a direct-to-consumer eyewear brand that makes it easy to get trendy glasses at a relatively accessible price. Gentle Monster, meanwhile, isn’t owned by EssilorLuxottica; the Korean brand is popular among Gen Z for its provocative silhouettes and for being favored by celebrities such as Britney Spears, Serena Williams, and Ariana Grande. Between the two brands, it’s clear that Android XR glasses are meant to span both everyday frames and bolder, trendsetting options.
Xreal and Google are teaming up on Project Aura, a new pair of smart glasses that use the Android XR platform for mixed-reality devices. We don’t know much about the glasses just yet, but they’ll come with Gemini integration and a large field of view, along with what appear to be built-in cameras and microphones.
Whether Google can replicate that success remains to be seen, but one thing the Ray-Ban Meta glasses have convincingly argued is that for smart glasses to go mainstream, they need to look cool. Ray-Ban is a brand known for its iconic Wayfarer shape, and Meta’s glasses look like ordinary Ray-Bans. In other words, they’re glasses the average person wouldn’t feel put off wearing. Since launching its second-gen smart glasses in late 2023, Meta has also put out a few limited-edition versions, playing into the same fashion strategy as sneaker drops. Meta is also reportedly planning versions of its smart glasses aimed at athletes.
That hints at a hardware evolution compared to Xreal’s current devices. We don’t know which of the three chipsets Project Aura will use. Like Project Moohan, Project Aura is counting on developers to start building apps and use cases now, and in their announcement, Xreal and Google note that apps built for the headset can easily be brought over to Project Aura’s different form factor.
I was told that Xreal will be at the Augmented World Expo next month. For now, we know the glasses will have Gemini built in, as well as a large field of view. In the product render, you can also see what look like cameras in the hinges and nose bridge, plus microphones and buttons in the temples.
If a password is compromised, Chrome will be able to generate a strong replacement and automatically update it on supported websites. The feature launches later this year, and Google says that it will always ask for consent before changing your passwords.
A new feature in the works lets you see what clothing looks like on you by uploading a full-length photo of yourself. The model it uses understands the human body and the nuances of clothing.
Gmail’s smart reply feature, which uses AI to suggest replies to your emails, will now use information from your inbox and Google Drive to prewrite responses that sound more like you. The feature also takes your recipient’s tone into consideration, allowing it to suggest more formal responses in a conversation with your boss, for example.
Google Meet is launching a new feature that translates your speech into your conversation partner’s preferred language in near real-time. The feature only supports English and Spanish for now. It’s rolling out in beta to Google AI Pro and Ultra subscribers.
Stitch is a new AI-powered tool that can generate user interfaces from a selected theme and a description. You can also incorporate wireframes, rough sketches, and screenshots of other UI designs to guide Stitch’s output. The experiment is currently available on Google Labs.
After announcing that Gemini Live’s camera and screensharing capabilities, powered by Project Astra, would be free for all Android users, Google now says they’ll be free for iOS users as well.
Speaking of Project Astra, Google is launching Search Live, a feature that incorporates the AI assistant’s capabilities into Search. In AI Mode, you can tap the new Live icon to show Search what your camera sees while you talk to it.
A new subscription called “AI Ultra” gives access to the most advanced AI models, higher usage limits, and other perks. It includes access to Project Mariner, an AI agent that can complete up to 10 tasks at once.
Google has announced Imagen 4, the latest version of its AI text-to-image generator, which the company says is better at generating text and offers the ability to export images in more formats, like square and landscape. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools like camera controls and object removal.
Gemini 2.5 Pro’s experimental Deep Think mode is meant for complex queries related to math and coding. It’s capable of considering “multiple hypotheses before responding” and will be available only to trusted testers at first.
Source: The 15 biggest announcements at Google I/O 2025
Google has also made its Gemini 2.5 Flash model available to everyone on its Gemini app and is bringing improvements to the cost-efficient model in Google AI Studio ahead of a wider rollout.
Project Astra could already use your phone’s camera to “see” the objects around you, but the latest prototype will let it complete tasks on your behalf, even if you don’t explicitly ask it to. The model can choose to speak based on what it’s seeing, such as pointing out a mistake on your homework.
Project Starline began as a 3D video chat booth, but it’s taking a huge step forward. It will launch inside an HP device with six cameras and a light field display, which combine to create a 3D image of the person you’re chatting with on a video call.
Google just wrapped up its big keynote at I/O 2025. There were a lot of announcements related to artificial intelligence, including updates across various models and new features for Search and Gmail.
There were some surprises, too, like a new app and an update to Project Starline. If you didn’t catch the event live, you can catch up on everything you missed in the rundown below.
Google will test new features in AI Mode this summer, such as Deep Search and a way to generate charts for finance and sports queries. In the coming months, it will also give shoppers the ability to shop directly within AI Mode.