I saw how the search giant plans to put a device on your face

How I Tried XR Glasses: A Demonstration of the New Mixed Reality Operating System

It’s an ordinary Tuesday. I’m sitting in a room surrounded by representatives from both companies, wearing a pair of glasses that look perfectly normal. One of them stands in front of me and starts speaking in Spanish. I don’t speak Spanish. Yet I can clearly see her words being translated into English subtitles, and reading them, I realize she’s describing what I’m seeing in real time.

My first experience with the new mixed reality OS came through a pair of prototype smart glasses. If a new generation of augmented reality devices is going to embody our wildest dreams of what smart glasses can be, it will be thanks in part to the bet Google is making.

Google wants everyone to know the time is finally right for XR, and it’s pointing to Gemini as its north star. The pitch is that a layer of AI software will help you interact with your environment in richer ways. In one demo, Google had me prompt Gemini to name the title of a yellow book sitting on a shelf behind me. I’d briefly glanced at it earlier but hadn’t taken a photo. Gemini took a second, then offered up an answer. I checked; it was correct.

Inside the Project Moohan Headset: Google Maps, JYP Entertainment, and Prototype Smart Glasses

The Project Moohan headset feels like a blend of the Meta Quest 3 and the Vision Pro. An optional light seal gives you the choice of letting the real world bleed in. It’s lightweight and doesn’t pinch my face too tightly, and I don’t have to redo my hair because my ponytail slots through the top. At first, the resolution doesn’t feel quite as sharp as the Vision Pro’s, until the headset automatically calibrates to my pupillary distance.

From there, it starts to feel like déjà vu. I’m walked through pinching to select items and how to open the app launcher. There’s an eye calibration process that feels awfully similar to the Vision Pro’s. If I want, I can retreat into an immersive mode and watch YouTube or Google TV from a distant mountaintop. I can open apps, resize them, and place them at various points around the room. I’ve done all of this before. This version just happens to be Google-flavored.

It would be easy to dismiss the idea that we can crack the augmented reality puzzle with the help of something called Gemini. Generative AI is having a moment right now, but not always in a positive way; outside of conferences filled with tech evangelists, AI is often viewed with derision and suspicion. But inside the Project Moohan headset, or wearing a pair of prototype smart glasses? I can see why both companies think Gemini is the best app for XR.

For me, it’s the fact that I don’t have to be specific when I ask for things. I usually get flustered talking to AI assistants because I have to remember the wake word, phrase my request clearly, and sometimes even specify which app I want it to use.

In the Moohan headset, I can say, “Take me to JYP Entertainment in Seoul,” and it automatically opens Google Maps and shows me that building. I can ask for my windows to be reorganized when they get cluttered. I don’t have to hold anything up. Wearing the prototype glasses, I watch and listen as Gemini summarizes a long text message about lemons, ginger, and olive oil from the store for one of the programmers. I switch from speaking English to Japanese to ask about the weather, and get an answer in written and spoken Japanese.

It’s not just the interactions with Gemini that linger in my mind, either. It’s how experiences can be built on top of them. I asked Gemini how to get somewhere and saw turn-by-turn text directions. When I looked down, the text turned into a map of my surroundings. It’s very easy to imagine myself using something like that in real life.

It can be hard to sell headsets to the average person. Personally, I’m more enamored with the glasses demo, but those have no concrete timeline. (Google made the prototypes, but it’s focusing on working with other partners to bring hardware to market.) There are still cultural cues to be established with either form factor, and there will need to be more than a handful of apps and experiences before the average person takes notice.

Listening to Kim and Izadi talk, I want to believe. But I’m also acutely aware that all of my experiences were tightly controlled. I wasn’t given free rein to try to break things. I couldn’t take photos of the headset or glasses. At every point, I was carefully guided through preapproved demos that Google and Samsung were reasonably sure would work. I — and every other consumer — can’t fully believe until we can play with these things without guardrails.

But even knowing that, I can’t deny that, for an hour, I felt like Tony Stark with Gemini as my Jarvis. For better or worse, that example has molded so many of our expectations for how XR and AI assistants should work. I’ve tried dozens of headsets and smart glasses that promised to make what I see in the movies real, and utterly failed. For the first time, I experienced something relatively close.

The choice of the term “XR” is an interesting one. There are a million terms and acronyms for this space: virtual reality, augmented reality, mixed reality, extended reality, and more, all of which mean different but overlapping things. XR is the broadest of them, which is presumably why the search giant chose it. “When we talk about extended reality or XR, we mean a whole spectrum of experiences from virtual reality to augmented reality and everything in between,” said Samat.
