Bill Gates, along with other tech leaders, wants to stop the artificial intelligence race
How OpenAI, Microsoft, and Google Are Racing to Release Advanced Language Models to Maximize Profit: The View from Google, ChatGPT, and OpenAI
Part of the letter’s concern is that OpenAI, Microsoft, and Google have begun a race to release new models quickly in order to make money. At such a pace, the letter argues, developments are happening faster than society and regulators can come to terms with.
Yuval Noah Harari, Apple co-founder Steve Wozniak, and Jaan Tallinn are just a few of the signatories. The full list of signatories can be seen here, though new names should be treated with caution, as there are reports of names being added to the list as a joke (e.g. OpenAI CEO Sam Altman, an individual who is partly responsible for the current race dynamic in AI).
But excitement around ChatGPT and Microsoft’s maneuvers in search appear to have pushed Google into rushing its own plans. The company recently debuted Bard, a competitor to ChatGPT, and it has made a language model called PaLM, which is similar to OpenAI’s offerings, available through an API. “It feels like we are moving too quickly,” says Peter Stone, a professor at the University of Texas at Austin, and the chair of the One Hundred Year Study on AI, a report aimed at understanding the long-term implications of AI.
Artificial intelligence experts are increasingly worried about the software’s ability to spread misinformation, as well as its impact on consumer privacy. Questions have also been raised about how AI can help students cheat and how it might shift our relationship with technology.
The companies did not respond to requests for comment on the letter. The signatories appear to include employees of several tech companies that are building advanced language models, including Microsoft and Google.
Musk co-founded OpenAI but left three years later and has since criticized the company. Gates co-founded Microsoft, which has invested billions of dollars in OpenAI.
Ethical concerns often take a back seat to corporate ambition and the desire for dominance. “I won’t be surprised if these organizations are already testing something more advanced than ChatGPT or [Google’s] Bard as we speak.”
Governments are already weighing how to regulate high-risk artificial intelligence tools. The United Kingdom said Wednesday that it would avoid heavy-handed legislation that could stifle innovation, while lawmakers in the European Union have been working to put sweeping rules for artificial intelligence in place.
A group of prominent computer scientists and other tech industry notables, including Apple co-founder Steve Wozniak, are calling for a six-month pause to consider the risks.
James Grimmelmann, a Cornell University professor of digital and information law, says a pause is a good idea, but the letter is vague and does not take the regulatory problems seriously. “It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars.”
AI for the 21st Century: Ethical Aspects of Artificial Intelligence and OpenAI’s GPT-4 Language Model
The pace of change, and the scale of investment, is significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its Bing search engine as well as in other applications. Google developed some of the artificial intelligence needed to build GPT-4, and until this year it had declined to release its own powerful language models, citing ethical concerns.