Experts warn of a risk of extinction from artificial intelligence

OpenAI and the Student Essay Competition: On the Importance of AI-Generated Links, Citations, and References for Fact-Finding

The media is partly to blame. In many reports, these systems are portrayed as having emotions and desires. Too often, journalists fail to emphasize their unreliability, or to make clear the contingent nature of the information they offer.

But, as I hope the beginning of this piece made clear, OpenAI could certainly help matters, too. Although chatbots are being presented as a new type of technology, it's clear that people use them as search engines. Much of the confusion arises precisely because they are launched as search engines: a generation of internet users has been trained to type questions into a box and get answers back. But while sources like Google and DuckDuckGo provide links that invite scrutiny, chatbots muddle their information into regenerated text and speak in the chipper tone of an all-knowing digital assistant. A sentence or two of disclaimer is not enough to undo this kind of priming.

Bing tends to search the web in response to factual queries and supplies users with links as sources, and I think it does slightly better on these sorts of fact-finding tasks as a result. ChatGPT, by contrast, can only search the web if you are paying for the Plus version and using the beta plug-ins. Otherwise, the system is self-contained and more likely to be deceptive.

Interventions don't need to be complex, but they need to be there. Why can't ChatGPT recognize when it's being asked to generate factual citations and caution the user to check its sources? Why can't it respond to someone asking "is this text AI-generated?" with a clear "I'm sorry, I'm not capable of making that judgment"? We have reached out to OpenAI and will update this story if the company responds.

In May, a Texas A&M professor used the chatbot to check whether students had written an essay with the help of artificial intelligence. Ever obliging, ChatGPT said yes, all of the students' essays were AI-generated, even though it has no reliable capability to make this assessment. The professor threatened to withhold the class's diplomas until his mistake was pointed out. And in April, a law professor described how the system generated false stories about him, something he only discovered when a colleague doing research alerted him to it. "It was quite chilling," the professor told The Washington Post, adding that an allegation of this kind is incredibly harmful.

Schwartz, the lawyer who submitted a court filing full of citations ChatGPT had invented, deserves plenty of blame in this scenario. But the frequency with which cases like this are occurring, with users of ChatGPT treating the system as a reliable source of information, suggests there also needs to be a wider reckoning.

Because when it comes to preparing people to use technology as powerful, as hyped, and as misunderstood as ChatGPT, it’s clear OpenAI isn’t doing enough.

Source: https://www.theverge.com/2023/5/30/23741996/openai-chatgpt-false-information-misinformation-responsibility

Warning on Artificial Intelligence: Experts Call for Limits on AI Systems More Powerful than GPT-4

This is the warning OpenAI pins to the homepage of its AI chatbot ChatGPT, one point among nine that detail the system's capabilities and limitations.

It's a warning you could tack on to just about any information source, from Wikipedia to Google to the front page of The New York Times, and it would be more or less correct.

The director of the Center for Artificial Intelligence Safety warns that Artificial Intelligence poses immediate risks of bias, misinformation, and cyberattacks.

“I thought for a long time that we were, like, 30 to 50 years away from that. … Now, I think we may be much closer, maybe only five years away from that,” he estimated.

In a recent interview with NPR, Hinton, who was instrumental in AI’s development, said AI programs are on track to outperform their creators sooner than anyone anticipated.

In a separate open letter, published in March and now signed by more than 30,000 people, tech executives and researchers called for a six-month pause on the training of AI systems more powerful than GPT-4, the latest of the models that power ChatGPT.

In recent months, calls for guardrails have grown louder as public and private enterprises embrace new generations of AI programs.

Sam Altman, CEO of OpenAI, the company behind the generated-text juggernaut ChatGPT, and Geoffrey Hinton, the so-called godfather of AI who recently left Google, were among the hundreds of leading figures who signed the we're-on-the-brink-of-crisis statement.

AI experts issued a dire warning on Tuesday: artificial intelligence models could soon be smarter and more powerful than humans, and it is time to impose limits to ensure they don't take control or destroy the world.
