It’s the end of the search engine as we know it

Why Live Blogs? The I/O Keynote Is a Two-Hour Infomercial for All Things Google

We have a few reporters at I/O in Mountain View, California, including senior writers Lauren Goode and Paresh Dave, senior reviews editor Julian Chokkattu, and staff writer Reece Rogers. Will Knight and editorial director Michael Calore will be watching from their desks while the team provides live updates and commentary.

The presentation starts at 1 pm Eastern (10 am Pacific). We’ll start our live feed about 30 minutes before that, so if you want to follow along, you can do it right here. This text will disappear and be replaced by the feed of live updates, just like magic.

You may be asking: Why are we publishing a live blog when the whole show is also being streamed on YouTube? The short answer is easy: We love live blogs! The longer answer is that the I/O keynote is basically a two-hour infomercial for all things Google. It’s a marketing presentation, and while we expect Google to break some news during the keynote, the content on your screen will of course be missing a lot of the necessary context: how Google’s AI-powered search offerings compare to the competition, how the new Gemini chatbot features stack up against OpenAI’s new GPT-4o, or what Android’s new security features mean for users like you. Our goal is to distill and analyze the news and give you the context you need to understand it. Also, live blogs are fun for us to do. Okay? Let us have this.

What’s New in the World of Android, and What It All Holds for the Future of the OS

I got a chance to speak with Android engineering VP Dave Burke and Sameer Samat, president of the Android ecosystem at Google, about what’s new in the world of Android, the company’s new AI assistant Gemini, and what it all holds for the future of the OS. Samat referred to these updates as a “once-in-a-generation opportunity to reimagine what the phone can do, and to rethink all of Android.”

You can read our full rundown of what to expect from Google I/O 2024. We’ll be here at 9:30 am Pacific with our commentary; the live event begins 30 minutes later.

Nearly a decade ago, Google showed off a feature called Now on Tap in Android Marshmallow: Tap and hold the home button, and Google would surface helpful contextual information related to what’s on the screen. Talking about a movie with a friend? Now on Tap could get you details about the title without having to leave the messaging app. Looking at a restaurant’s website? A tap could surface OpenTable.

I was fresh out of college, and these improvements felt exciting and magical: The feature’s ability to understand what was on the screen and predict the actions you might want to take felt future-facing. It was one of my favorite Android features. It slowly morphed into Google Assistant, which was great in its own right, but not quite the same.

Samat says Google has received positive feedback from consumers broadly, but Circle to Search’s latest feature comes specifically from student feedback. Circle to Search can now be used on physics and math problems: Circle one, and Google will spit out step-by-step instructions for completing it, without the user having to leave the app they’re in.

In an interview published ahead of the event, Samat said the generative AI model Google launched late last year can dramatically improve search. People’s time is valuable, he argued, and they deal with hard problems; if technology can give people answers to their questions and take some of the work out of the process, Google is all for it.

According to Samat, the new assistant is an opt-in experience on the phone. Gemini is getting more advanced and changing over time, he said, and consumers have a choice about whether they want to use it. People are trying it out, he added, and Google is getting a lot of feedback.

Ask Photos, AI Answers, and the Northern Lights: Google Rewrites Search

It’s as though Google took the index cards for the screenplay it’s been writing for the past 25 years and tossed them into the air to see where the cards might fall. Also: The screenplay was written by AI.

These changes to Google Search have been long in the making. The company’s Search Labs section has let users try experimental new search features as part of something called the Search Generative Experience. The big question was when those features would become part of the search engine proper. The answer is, well, now.

Google says it has made a customized version of its Gemini AI model for these new Search features, though it declined to share any information about the model’s size, its speed, or the guardrails it has put in place around the technology.

In response to a query about where best to view the northern lights, the AI-generated answer says the best places to see them, also known as the aurora borealis, are at the top of the world, in spots with minimal light pollution. It offers a link to NordicVisitor.com. But then the AI continues yapping on below that, saying “Other places to see the northern lights include Russia and Canada’s northwest territories.”

Google Lens already lets you search for something based on images, but now Google’s taking things a step further with the ability to search with a video. That means you can take a video of something you want to search for, ask a question during the video, and Google’s AI will attempt to pull up relevant answers from the web.

A new feature coming this summer could be very helpful for anyone who has been taking photos for a long time. “Ask Photos” lets Gemini pore over your Google Photos library in response to your questions, and the feature goes beyond just pulling up pictures of dogs and cats. In a demo, Google CEO Sundar Pichai asked Photos what his license plate number was. The response was the number itself, followed by a picture of the plate so he could make sure it was right.

The goal of Project Astra is for the assistant to watch and understand what it sees through your device’s camera, remember where you are, and do things for you. Many of the most impressive demos at I/O this year were powered by it, and the company’s intention is for it to be an honest-to-goodness AI agent that can not just talk to you but also do things on your behalf.

Google’s answer to OpenAI’s Sora is Veo, a new generative AI model that can output 1080p video based on text, image, and video prompts. Videos can be produced in a variety of styles, like aerial shots or time-lapses, and can be tweaked with further prompts. The company is already offering Veo to some creators for use in YouTube videos and is also pitching it to Hollywood for use in films.

Gems: Custom Chatbots, Plus More Gemini Updates Across Android and Chrome

Google is rolling out a new custom chatbot creator called Gems. Like OpenAI’s GPTs, Gems lets users give Gemini instructions that tailor its responses to how it’s being used. If you’re a Gemini Advanced subscriber, you’ll be able to, say, set up a positive, encouraging running coach that offers daily goals and plans.

If you use Circle to Search on an Android device, you can circle a math problem on your screen to get help figuring it out. It won’t solve the problem outright, so it won’t help students cheat on their homework, but it will break the problem down into steps to make it easier to complete.

To help you avoid scam calls, real-time warnings will pop up on the phone when it catches red flags, such as common scammer conversation patterns. The company promises to offer more details on the feature later this year.

Users will soon be able to ask Gemini questions about a video playing on-screen, with answers drawn from the video’s automatic captions; Gemini Advanced subscribers will get additional capabilities on top of that. More Gemini updates are expected over the next few months.

Gemini Nano, the lightest version of the Gemini model, is being added to Chrome on desktop. Using the built-in assistant, you’ll be able to generate text for product reviews, social media posts, and more, directly within the browser.

The company also says it is expanding what its SynthID watermarking technology can do: It is incorporating watermarks into content generated with the new Veo video generator, and it can now detect AI-generated videos.
