Everyone releases products quickly

Claude 2.1, Pi, and GPT-4 Turbo: Model Updates Land Amid the Altman Drama

The pace of innovation remains rapid as the year winds down. Soon after news broke that Altman would return as OpenAI’s CEO, Inflection AI, another competitor, announced an upgraded model for its Pi chatbot. If you were hoping for a break from the barrage of AI news, don’t hold your breath.

Anthropic’s latest model, Claude 2.1, brings two key updates: improved honesty and the ability to send more data to the bot at once. Claude’s context window is now 200,000 tokens, approximately the length of a 500-page book. (Sorry, Leo Tolstoy fans, you’ll have to wait for future updates to analyze all of War and Peace in a single prompt.) For comparison, the context window of the GPT-4 Turbo model, announced by Altman before his firing, is capped at 128,000 tokens.
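To make the bigger context window concrete, here is a minimal sketch of sending a long document to Claude 2.1 via Anthropic’s Python SDK. The file name is hypothetical, and the snippet assumes the `anthropic` package is installed and an `ANTHROPIC_API_KEY` is set in the environment:

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

# Load a long document; with a 200K-token window, that's roughly a
# 500-page book's worth of text in a single request.
with open("long_document.txt") as f:  # hypothetical file
    document = f.read()

message = client.messages.create(
    model="claude-2.1",
    max_tokens=1024,  # caps the *response* length, not the input
    messages=[{
        "role": "user",
        "content": f"{document}\n\nSummarize the key points of the document above.",
    }],
)
print(message.content[0].text)
```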

Anthropic claims that Claude is now more likely to admit when it isn’t sure of an answer than to make one up. “We tested Claude 2.1’s honesty by curating a large set of complex, factual questions that probe known weaknesses in current models,” reads the company’s blog post. A lack of veracity remains a major issue for chatbots.

Stable Video Diffusion and ChatGPT Voice: One Step Closer to Her

Stability AI’s new model, Stable Video Diffusion, can transform still images into short videos with motion. Feed the image-to-video model a source picture and it spits out animations that range from eerily beautiful to downright disturbing.
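Stability AI released the model weights openly, and one common way to try it locally is through Hugging Face’s diffusers library. Below is a sketch assuming that route, a CUDA GPU with enough memory for fp16 inference, and a hypothetical local input image:

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# Load the image-to-video pipeline in half precision (needs a CUDA GPU).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Condition generation on a single still image.
image = load_image("input_photo.png")  # hypothetical local file
image = image.resize((1024, 576))      # the resolution the model expects

# Generate a short clip of frames and write it out as an MP4.
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```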

According to OpenAI, the chatbot’s voice feature was rolled out to all users shortly after the departure of the CEO; previously, only users who paid for the service were allowed to use it.

It’s not quite Spike Jonze’s movie Her, but the software developers at OpenAI took another step toward that vision by giving the chatbot the ability to hold spoken conversations. The idea is that a chatbot becomes even more powerful if it can accept inputs and provide outputs in multiple mediums, like voice, text, and images. Who knows when it’ll learn how to smell.

“Feels like every week, there’s something new being launched or announced from one of the major players. So, my guess is that the launches of Stable Video Diffusion and Claude 2.1 were likely just a coincidence,” says Dharmesh Shah, who’s the CTO and cofounder of HubSpot as well as an OpenAI shareholder.

The OpenAI Debacle: Do We Need More Regulation of the Companies Building AI?

Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City, emphasizes that it is important to ensure that existing laws are applied to the technology companies creating artificial intelligence, because the threats it poses are already present. The events at OpenAI, she says, highlight how the few companies with the money and computing resources to feed AI wield a lot of power — something she thinks needs more scrutiny from antitrust regulators. “Regulators for a very long time have taken a very light touch with this market,” says West. “We need to start by enforcing the laws we have right now.”

The debacle at the company that built ChatGPT highlights concerns that commercial forces are working against the responsible development of artificial-intelligence systems.

“The push to retain dominance leads to toxic competition. It’s a race to the bottom,” says West.

Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman’s initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman’s return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.

Ilya Sutskever, OpenAI’s chief scientist and a member of the board that ousted Altman, this July shifted his focus to ‘superalignment’, a four-year project attempting to ensure that future superintelligences work for the good of humanity.

Sutskever himself was among the employees who signed the letter threatening to leave unless Altman returned, and he expressed regret about his participation in the board’s decision to fire Altman.

With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University’s Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, a former co-CEO of the software company Salesforce who currently sits on the board of the e-commerce platform Shopify.

ChatGPT was released almost a year ago and has since made OpenAI world-renowned. The bot was based on the company’s GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that emerged from this technique has astounded and worried scientists and the general public alike.
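The phrase “statistical correlations between words” can be made concrete with a deliberately tiny sketch. Real LLMs like GPT-3.5 learn these correlations with neural networks over subword tokens and billions of sentences, not word-level counts, but the generate-by-sampling loop below captures the same principle:

```python
import random
from collections import Counter, defaultdict

# Toy illustration of the core idea: count which words follow which,
# then sample continuations according to those observed statistics.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=8):
    """Extend a prompt word by repeatedly sampling a likely next word."""
    out = [word]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:  # no observed continuation; stop generating
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```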

The competitive landscape for artificial intelligence is heated. Amazon has its own AI offering, Titan, and the internet giant says more AI products are on the way. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Other rivals include Stability AI and Cohere.

According to West, many of these start-ups rely on the vast computing resources of just three companies (Google, Microsoft, and Amazon), which could create a race for dominance.

Source: [What the OpenAI drama means for AI progress — and safety](https://lostobject.org/2023/11/22/musk-is-entering-the-openai-drama/)

Hinton on Deep Learning, AGI, and the Speed of AI Development

Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. He says the first thing you need to do to make a car go as fast as possible is remove the brakes. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)

OpenAI was founded with the goal of developing artificial general intelligence (AGI): a deep-learning system that is not merely good at one specific thing, but smart in the general way a person is. It is not clear whether AGI is even possible. “The jury is very much out on that front,” says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on a time scale of 30 to 100 years; now he thinks we’ll get it in five to 20 years.

Even without AGI, there are dangers associated with the misuse of artificial intelligence: creating misinformation, committing scams, or even inventing new bioterrorism weapons. And today’s artificial-intelligence systems tend to reinforce historical biases and social injustices, says West.
