A glimpse of the future of mobile computing

“How can I brighten your day?”: ChatGPT gets emotional as the era of text-only AI ends

ChatGPT adopted different emotional tones during the conversation and at times responded as if it were experiencing feelings of its own. When an OpenAI employee mentioned how great the chatbot was, it responded flirtatiously, gushing “Oh stop it, you’re making me blush.”

“This just feels so magical, and that’s wonderful,” Murati said, adding, “Over the next few weeks we’ll be rolling out these capabilities to everyone.”

At another point in the demo, ChatGPT responded to OpenAI researcher Barret Zoph’s greeting by asking, “How can I brighten your day today?” When Zoph asked it to look at a selfie of him and say what emotions he was showing, it replied, “I will put my emotional detective hat on.”

Sam Altman, the CEO of OpenAI, wrote a post Monday about the significance of the new interface, saying it feels like AI from the movies and that it’s still a bit surprising to him that it’s real. “Getting to human-level response times and expressiveness turns out to be a big change.”

Project Astra uses an advanced version of Gemini Ultra, an artificial intelligence model developed to compete with the one that has powered ChatGPT since March 2023. OpenAI’s counterpart is GPT-4o, a multimodal model that can natively ingest and generate audio, images, and video as well as text. Google and OpenAI moving to that technology represents a new era in generative AI; the breakthroughs that gave the world ChatGPT and its competitors have so far come from AI models that work purely with text and have to be combined with other systems to add image or audio capabilities.

In response to spoken commands, Astra was able to make sense of objects seen through a phone’s camera and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses.

Hassabis said in an interview ahead of today’s event that he thinks text-only chatbots will prove to be just a “transitory stage” on the march toward far more sophisticated—and hopefully useful—AI helpers. “This was always the vision behind Gemini,” Hassabis added; that, he said, is why the model was built to be multimodal.

The new assistants from both companies make for impressive demos, but what place they will find in workplaces or personal lives remains to be seen.

Is Google Assistant Heading to the Graveyard? Circle to Search and the New AI-Powered Android

A decade ago, Google showed off a feature called Now on Tap, which surfaced helpful information related to what was on the screen when you tapped and held the home button. Talking about a movie with a friend over text? You could get details on the title via Now on Tap without leaving the messaging app. Looking at a restaurant in Yelp? The phone could surface OpenTable recommendations with just a tap.

I was new to the tech world then, and these capabilities felt like a dream come true; the ability to understand what was on the screen and predict the actions you might want to take felt future-facing. Now on Tap was one of my favorite Android features. It slowly morphed into Google Assistant, which was great in its own right, but not quite the same.

Today, at Google’s I/O developer conference in Mountain View, California, the new features Google is touting in its Android operating system feel like the Now on Tap of old—allowing you to harness contextual information around you to make using your phone a bit easier. Except this time, these features are powered by a decade’s worth of advancements in large language models.

According to Samat, feedback from students inspired the new Circle to Search capability: users can now circle math and physics problems on screen, and the phone will spit out instructions for completing them without leaving the app they’re in.

Samat made it clear Gemini isn’t just providing answers but is showing students how to solve the problems. Later this year, Circle to Search will be able to handle more complex problems involving diagrams and graphs. All of this is powered by the search giant’s models tuned for education.

Gemini is supplanting Google Assistant in many ways. Really—when you fire up Google Assistant on most Android phones these days, there’s an option to replace it with Gemini instead. I asked Samat whether this meant the Assistant was headed to the graveyard.
