In sudden alarm, tech doyens call for a pause on AI
An Open Letter Signed by Elon Musk, Bill Gates and Steve Wozniak Concerning the Existential Risks of Artificial Intelligence
The letter says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Elon Musk, Bill Gates and Steve Wozniak are among the dozens of tech leaders, professors and researchers who signed the letter, which was published by the Future of Life Institute, a nonprofit backed by Musk.
An open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks they may pose can be properly studied.
Artificial intelligence experts have become concerned about the potential for bias in responses, as well as the fact that some artificial intelligence tools can be used to spread misinformation. These tools have also sparked questions around how AI can upend professions, enable students to cheat, and shift our relationship with technology.
Gary Marcus, a New York University professor emeritus who signed the letter, said in a blog post that he disagrees with others who are worried about the near-term prospect of intelligent machines so smart they can improve themselves beyond humanity’s control. He’s more concerned about “mediocre AI” that is widely deployed and can be used to trick people or spread dangerous misinformation.
Musk, who runs Tesla, Twitter and SpaceX and was an OpenAI co-founder and early investor, has long expressed concerns about AI’s existential risks. A more surprising inclusion is Emad Mostaque, CEO of Stability AI, the company behind the AI image generator Stable Diffusion, which partners with Amazon and competes with OpenAI’s similar generator, DALL-E.
“Corporate ambitions and desire for dominance often triumph over ethical concerns,” Su said. “I am not surprised that these organizations are already testing something more advanced than before.”
With the rapid pace of advancement in artificial intelligence, the letter hints at broader unease inside and outside the industry. China, the EU and Singapore have previously introduced early versions of governance frameworks for artificial intelligence.
The call comes from a group of prominent computer scientists and other tech industry notables, including Elon Musk and Apple co-founder Steve Wozniak, who are asking for a six-month pause to consider the risks.
James Grimmelmann, a professor of digital and information law at Cornell University, says a pause would be a good idea but that the letter doesn’t take the regulatory problems seriously. “It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars.”
CAIDP Asks the FTC to Investigate the Commercial Deployment of GPT-Based Language Models
The open letter warns that language models can be used to automate jobs and spread misinformation, and that models like GPT-4 are already competitive with humans. It raises the possibility that artificial intelligence systems could replace humans and remake civilization.
The pace of change—and scale of investment—is significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its search engine Bing as well as other applications. Although Google developed some of the AI needed to build GPT-4, and previously created powerful language models of its own, until this year it chose not to release them due to ethical concerns.
An artificial intelligence-focused tech ethics group, the Center for AI and Digital Policy (CAIDP), has asked the Federal Trade Commission to investigate OpenAI for violating consumer protection rules, arguing that the organization’s rollout of AI text generation tools has been “biased, deceptive, and a risk to public safety.”
In the complaint, CAIDP asks the FTC to halt any further commercial deployment of GPT models and to require independent assessments of the models before any future rollouts. It also asks for a publicly accessible reporting tool similar to the one that allows consumers to file fraud complaints. And it seeks formal FTC rulemaking on generative AI systems, building on the agency’s ongoing but still relatively informal research and evaluation of AI tools.