The future of artificial intelligence is boring

Unfettered AI Language Models Are Silver-Tongued Agents of Chaos: Google's Cautious Search Rollout and Anthropic's Longer-Context Claude

Over the past several years, Google has invested heavily in artificial intelligence, and the company is eager to show that it can advance the technology faster than OpenAI. The high-level message from its stream of announcements: Google will no longer hold back the way it did with LaMDA, the chatbot it announced long before making it available to the public.

And yet unfettered AI language models are also silver-tongued agents of chaos: they will fabricate facts, express unpleasant biases, and say disturbing things. Bing chat had to be reined in shortly after launch, after the bot revealed its secret codename and accused a New York Times columnist of not loving his spouse.

To keep its experimental search feature clean, Google has worked hard to tame the chaotic nature of text-generation technology.

The smarter version of search won't use the first person or talk about thoughts or feelings. It steers well clear of anything that might be considered risky, refusing to dispense medical advice or offer answers on potentially controversial topics such as US politics.

In March, some big names in AI research signed an open letter calling for a six-month pause on the development of machine learning systems more powerful than GPT-4, the model that powers ChatGPT. Pichai, who was not a signatory, said in his keynote yesterday that the company is training a new, more powerful language model called "Gemini".

A source at Google tells me this new system will incorporate a range of recent advances from different large language models and may eclipse GPT-4. But don't expect Gemini to show off its full power, or much charisma: if the same chaos-taming methods seen in Google's search experiment are applied, it may come across as just another clever autocomplete.

Anthropic, meanwhile, has increased the context window of its own chatbot, Claude, to 100,000 tokens, or around 75,000 words. The company said in a blog post that this is enough to process an entire novel in one go. Anthropic tested the feature by editing a single sentence in a full-length novel and asking Claude to spot the change, which it did in 22 seconds.

Right now, Claude's new capacity is only available to Anthropic's business partners, who tap into the chatbot via the company's API. Pricing hasn't been announced, but it is sure to rise significantly: processing more text means spending more on compute.
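For developers with that kind of API access, the novel-editing test described above could look roughly like the sketch below. It's a minimal illustration using Anthropic's current Python SDK; the model name, file path, and prompt wording are placeholder assumptions, not details confirmed by Anthropic or this article.

```python
# Minimal sketch: send a whole book to a long-context Claude model and ask it
# to find the one altered sentence. Model name and file path are hypothetical.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load a full-length novel (hypothetical local file) as one large string.
with open("novel.txt", "r", encoding="utf-8") as f:
    book_text = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder long-context model name
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": (
                "Here is the full text of a novel:\n\n"
                f"{book_text}\n\n"
                "One sentence in this text has been altered from the original. "
                "Quote the altered sentence."
            ),
        }
    ],
)

print(response.content[0].text)
```

The whole document simply rides along in a single request, which is also why cost scales with how much text you push through the model.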
