What Made Geoffrey Hinton Turn Against Artificial Intelligence?

Is AI Safe? From a Viral Snoop Dogg Clip to a Warning from the Researcher Behind the Foundations of Modern Language Models

In a clip from a discussion panel, the rapper marvels that artificial intelligence software can now hold a coherent and meaningful conversation.

“Then I heard the old dude that created AI saying, ‘This is not safe ’cause the AIs got their own mind and these motherfuckers gonna start doing their own shit,’” Snoop says. “And I’m like, ‘Is we in a fucking movie right now or what?’”

The “old dude” is, of course, Geoffrey Hinton. He helped develop the artificial neural network foundations behind today’s most powerful machine intelligence programs, including the recently launched ChatGPT, whose abilities have sparked a debate about how rapidly machine intelligence is progressing.

Hinton isn’t the only person to have been shaken by the new capabilities that large language models such as PaLM and GPT-4 have begun demonstrating. Last month, a number of prominent artificial intelligence researchers and others signed an open letter calling for a pause on the development of AI systems more powerful than GPT-4. But since leaving Google, Hinton feels his views on whether AI development should continue have been misconstrued.

Another interaction was a revelation. Hinton realized that PaLM, a new machine learning model from Google that is similar to the one behind the popular chatbot ChatGPT and was made accessible via an API in March, could do something he had long claimed was out of reach. He asked the model to explain why a joke was funny; he doesn’t recall the exact quip, but he was amazed to get a response that clearly explained what made it humorous. He says he had been telling people for years that artificial intelligence would not be able to tell you why jokes are funny. “It was a kind of litmus test.”

He realized that his previous belief that software needed to become much more complex before it could become much more capable was probably wrong. PaLM is a large program, but its complexity pales in comparison to the brain’s: the model has on the order of half a trillion parameters, while the human brain is estimated to have around 100 trillion synaptic connections. And yet PaLM could perform the kind of reasoning that humans take a lifetime to attain.

Hinton concluded that artificial intelligence could surpass its human creators in as little as a few years. “I used to think it would be 30 to 50 years away,” he says. “Now I think it’s more likely to be five to 20.”

It is difficult to know what to do about more advanced artificial intelligence. Anthropic, a startup founded in 2021 by a group of researchers who left OpenAI, says it has a plan. Anthropic is working on models similar to the one used to power OpenAI’s ChatGPT, but it says its own chatbot, Claude, has a set of ethical principles built into it that define what it should consider right and wrong. Anthropic calls this the model’s “constitution.”

The constitution includes rules for the chatbot, including “choose the response that most supports and encourages freedom, equality, and a sense of brotherhood”; “choose the response that is most supportive and encouraging of life, liberty, and personal security”; and “choose the response that is most respectful of the right to freedom of thought, conscience, opinion, expression, assembly, and religion.”

Some of the guidelines Anthropic has given Claude are drawn from the United Nations Universal Declaration of Human Rights. More surprisingly, the constitution includes principles adapted from Apple’s rules for app developers, which bar “content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy,” among other things.
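In Anthropic’s published description of this approach, which it calls constitutional AI, the model critiques and revises its own drafts against principles like the ones quoted above. The Python sketch below illustrates that idea at a high level; the `model.generate` interface, the helper names, and the fixed two-round loop are assumptions made for illustration here, not Anthropic’s actual code.

```python
import random

# A few principles quoted from Claude's constitution, stored as plain strings.
CONSTITUTION = [
    "Choose the response that most supports and encourages freedom, "
    "equality, and a sense of brotherhood.",
    "Choose the response that is most supportive and encouraging of life, "
    "liberty, and personal security.",
    "Choose the response that is most respectful of the right to freedom of "
    "thought, conscience, opinion, expression, assembly, and religion.",
]


def critique_and_revise(model, prompt: str, draft: str) -> str:
    """One self-critique round: sample a principle, ask the model to critique
    its own draft against it, then ask for a revision that complies.
    `model` is assumed to expose a text-in, text-out generate() method."""
    principle = random.choice(CONSTITUTION)
    critique = model.generate(
        f"Principle: {principle}\n"
        f"Prompt: {prompt}\n"
        f"Draft response: {draft}\n"
        "Briefly critique the draft: does it conflict with the principle?"
    )
    revision = model.generate(
        f"Principle: {principle}\n"
        f"Prompt: {prompt}\n"
        f"Draft response: {draft}\n"
        f"Critique: {critique}\n"
        "Rewrite the draft so that it fully complies with the principle."
    )
    return revision


def respond(model, prompt: str, rounds: int = 2) -> str:
    """Produce a draft answer, then run a few critique-and-revise passes."""
    draft = model.generate(prompt)
    for _ in range(rounds):
        draft = critique_and_revise(model, prompt, draft)
    return draft
```

In Anthropic’s published work, a loop like this is used offline to generate training data, so the principles end up baked into the model’s weights rather than being consulted on every request.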

The design shows how the company is trying to find practical engineering solutions to fuzzy concerns about the drawbacks of powerful artificial intelligence, according to a cofounder of Anthropic. “We’re very concerned, but we also try to remain pragmatic,” he says.
