Google I/O 2024 will once again be all about artificial intelligence

Google vs. Perplexity in search, possible Pixel 9 and Pixel Tablet 2 teases, and other AI-powered product plans

Sources tell The Verge this week that the company is aggressively recruiting employees overseas to work on its own search offering. Perplexity, a $1 billion startup founded by a former OpenAI researcher, said in January that its AI-based search product had attracted 10 million monthly active users.

OpenAI previously attempted to give ChatGPT access to live web data via ChatGPT plugins, which have since been retired in favor of GPTs. People rapidly adopted ChatGPT for information-gathering tasks after it launched in November 2022, but the chatbot — like all bots built on LLMs — has a spotty reputation when it comes to providing accurate or up-to-date information.

Raghavan said people come to Google because they trust it. Even when something is new, people go there to check it out because it is a trusted source, and that trust becomes even more important in the era of generative AI.

It seems unlikely that Google will focus much on new hardware this year, given that the Pixel 8A is already available for preorder and you can now buy a relaunched, cheaper Pixel Tablet — unchanged except that the magnetic speaker dock is now a separate purchase. The company could still tease new products like the Pixel 9, which is already leaking all over the place, and the Pixel Tablet 2, of course.

That kind of thing could make the Rabbit R1 and Humane AI Pin less likely to succeed, as each device has struggled to justify its existence. At the moment, the only advantage they maybe sort of have is that it’s kind of hard (though not impossible) to pull off using a smartphone as an AI wearable.

I/O could also see the debut of a new, more personal version of its digital assistant, rumored to be called “Pixie.” The Gemini-powered assistant is expected to integrate multimodal features like the ability to take pictures of objects to learn how to use them or get directions to a place to buy them.

Google will probably also focus on ways it plans to turn your smartphone into more of an AI gadget. That means more generative AI features for its apps; it has been working on features that help with eating out or shopping, for example. Google is also testing a feature that uses AI to call a business and wait on hold for you until there’s actually a human being available to talk to.

The keynote speech at I/O will be held on Tuesday, May 14th at 10:00AM Pacific/1:00PM Eastern. You can catch that on Google’s site or its YouTube channel, via the livestream link that’s also embedded at the top of this page. There is a version with an American Sign Language interpreter. Set a good amount of time aside for that; I/O tends to go on for a couple of hours.
