Expect to hear a lot more about the use of artificial intelligence, or "AI," in 2024.
Google I/O Arrives at a Pivotal Time: Android's Re-Organization and the Birth of the AI Era
We didn't expect the Humane AI Pin or the Rabbit R1 to do much to help us manage our personal tech, and both turned out to be woefully underbaked. Hot Gadget Spring is over, and this coming Tuesday marks the first day of developer season.
It also happens to be a pivotal time for Android. The re-org puts the Android team together with the hardware team for the first time, with a directive to run full speed ahead and put more artificial intelligence into everything. Not favoring Google's own products was a foundational principle of Android, though that model started shifting years ago as the hardware and software teams collaborated more closely. Now the wall is gone and the era of artificial intelligence is here. And if the past 12 months have been any indication, it's going to be a little messy.
The success of the AI era depends on those integrations. Google's rivals can't read your email and calendar, and they don't have a history of every place you've visited over the past decade. Those are real advantages, and Google needs every advantage right now. We've seen plenty of signals that Apple will introduce a smarter version of its virtual assistant at its developer conference this year, and Microsoft and OpenAI aren't sitting still. Those advantages could be how Google delivers AI that's more than a party trick.
Gemini launched as an AI-fueled alternative to the standard Google Assistant a little over three months ago, and it didn't feel quite ready. On day one, it couldn't access your calendar or set a reminder, which wasn't super helpful. Google has added those functions since then, but Gemini still doesn't support third-party media apps like Spotify, something Google Assistant has done for most of the last decade.
Google I/O Kicks Off: Artificial Intelligence Features for the Pixel and Pixel Tablet, the Humane AI Pin, and Android and iOS
It seems unlikely that Google will focus much on new hardware this year, given that the Pixel 8A is already available for preorder and you can now buy a relaunched, cheaper Pixel Tablet, unchanged apart from the fact that the magnetic speaker dock is now a separate purchase. The company could still tease new products like the Pixel 9 — which, in typical Google fashion, is already leaking all over the place — and the Pixel Tablet 2, of course.
That kind of thing could be bad news for devices like the Rabbit R1 and the Humane AI Pin, which both launched recently and have struggled to justify their existence. Their only advantage at the moment is that a phone doesn't yet work well as a dedicated artificial intelligence device.
I/O could also see the debut of a new, more personal version of Google's digital assistant, rumored to be called "Pixie." The Gemini-powered assistant is expected to integrate multimodal features, like the ability to take pictures of objects to learn how to use them or get directions to a place to buy them.
Google will probably also focus on ways it plans to turn your smartphone into more of an AI gadget, with more AI features across its apps. It's been working on AI features that help with dining, shopping, and finding EV chargers in Google Maps, for instance. Google is also testing a feature that uses AI to call a business and wait on hold for you until there's actually a human being available to talk to.
The I/O keynote takes place on May 14th at 1PM. You can watch it on Google's I/O website or on the company's YouTube channel. (There's also a version with an American Sign Language interpreter.) Set a good amount of time aside for it; I/O keynotes tend to go on for a couple of hours.