Google hopes the artificial intelligence model it has built is capable of taking down GPT-4.

OpenAI’s GPT-4 vs. Google’s Gemini: An Artificial Intelligence Duel Built on Tensor G3 and AICore

Right now, Google’s Tensor G3 processor seems to be the only mobile chip capable of running the model on-device. Developers can use a new system service called AICore to build Gemini-powered features into their apps. Your phone will still need a high-end chip for this to work, but several companies are making compatible ones, and developers can sign up for Google’s early access program now.

Google, which declared a “code red” after ChatGPT’s launch and has been perceived as playing catch-up ever since, still seems to be holding fast to its “bold and responsible” mantra. Hassabis and Pichai both say they’re not willing to move too fast just to keep up, especially as we get closer to the ultimate AI dream: artificial general intelligence, the term for an AI that is self-improving, smarter than humans, and poised to change the world. “As we approach AGI, things are going to be different,” Hassabis says. “I think we need to approach that cautiously, because it is kind of an active technology. Cautiously, but optimistically.”

So, let’s just get to the important question, shall we? How does Gemini stack up against OpenAI’s GPT-4? This has very clearly been on Google’s mind for a while. Hassabis says his team ran a thorough side-by-side analysis of the two systems, comparing the models on multiple tests such as the Massive Multitask Language Understanding (MMLU) benchmark. “I think we’re substantially ahead on 30 out of 32” of those benchmarks, Hassabis says, with a bit of a smile on his face. Some of those leads are very slim; others are larger.

Right now, Gemini’s most basic models take text in and put text out, but more powerful models like Gemini Ultra can work with images, video, and audio. Hassabis says it is going to get even more general than that, eventually taking in things like action and touch. Over time, he says, Gemini will gain more senses, become more aware, and become more accurate and grounded in the process. These models still hallucinate and still have biases, but Hassabis says the better they understand the world, the better they will get.

It is important to realize that benchmarks are just a sample of what a model can do, and that the real test will come from everyday users. Google seems to see coding in particular as a killer app for Gemini: it powers a new code-generating system called AlphaCode 2, which Google says outperforms 85 percent of coding-competition participants, up from 50 percent for the original AlphaCode. Users will notice an improvement when they use the model.
