GPT-4 is here: what scientists think

Conversations with Bindu Reddy, OpenAI and researchers about the GPT-4 text-generation system and its impact on science and on AI language tools

Like many other people over the past week, Bindu Reddy recently fell under the spell of ChatGPT, a free chatbot that can answer all manner of questions with stunning and unprecedented eloquence.

Reddy, CEO of Abacus.AI, which develops tools for coders who use artificial intelligence, was charmed by ChatGPT’s ability to answer requests for definitions of love or creative new cocktail recipes. She and her company are now exploring ways to use ChatGPT to write technical documents, and she says it works well in their tests.

OpenAI has not released full details of how it gave its text-generation software a naturalistic new interface, but the company shared some information in a blog post. Its team fed human-written answers to GPT-3.5 as training data, then used a form of reward and punishment called reinforcement learning that pushed the model to provide better answers.
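The blog post stops short of code, but the two-stage recipe can be sketched in miniature. In the toy example below, everything (the candidate answers, the hand-written reward function, the learning rates) is invented for illustration; it is a minimal sketch of supervised fine-tuning followed by policy-gradient reinforcement learning, not OpenAI’s implementation, which in practice learns its reward signal from human preference rankings.

    # Toy two-stage setup: the "model" is just a distribution over three
    # canned answers; all names and numbers here are illustrative.
    import math
    import random

    random.seed(0)

    ANSWERS = ["helpful answer", "evasive answer", "rude answer"]
    logits = {a: 0.0 for a in ANSWERS}  # stand-in for model parameters

    def probs():
        z = sum(math.exp(v) for v in logits.values())
        return {a: math.exp(v) / z for a, v in logits.items()}

    # Stage 1: supervised fine-tuning on a human-written answer.
    HUMAN_DEMO = "helpful answer"
    for _ in range(20):
        p = probs()
        for a in ANSWERS:
            target = 1.0 if a == HUMAN_DEMO else 0.0
            logits[a] += 0.5 * (target - p[a])  # cross-entropy gradient step

    # Stage 2: reinforcement learning. A hand-written reward function
    # stands in for the learned reward model used in practice.
    def reward(answer):
        return {"helpful answer": 1.0, "evasive answer": 0.2,
                "rude answer": -1.0}[answer]

    for _ in range(200):
        p = probs()
        a = random.choices(ANSWERS, weights=[p[x] for x in ANSWERS])[0]
        for b in ANSWERS:  # REINFORCE: raise log-probability of rewarded answers
            logits[b] += 0.1 * reward(a) * ((1.0 if b == a else 0.0) - p[b])

    print(probs())  # probability mass concentrates on "helpful answer"

In this sketch the supervised stage pulls the policy towards a human-written demonstration, and the reinforcement stage then raises the probability of whatever the reward signal favours; run it and the printed distribution concentrates on the helpful answer.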

Jacob Andreas, an assistant professor who works on AI and language at MIT, says the system seems likely to widen the pool of people able to tap into AI language tools. He says, “Here’s a thing that you can use to apply a mental model to other agents that you interact with.”

GPT-4 will shake up science, in both its current and future iterations, says Andrew White, a chemical engineer at the University of Rochester. “I think it’s actually going to be a huge infrastructure change in science, almost like the internet was a big change,” he says. It will not replace scientists, he adds, but it might help with some tasks. “I think we’re going to start realizing we can connect papers, data, programmes, libraries that we use and computational work or even robotic experiments.”

Outputting false information is another problem. Hallucination is a consequence of the design of models such as GPT-4, which work by predicting the next word in a sentence, says Sasha Luccioni, a research scientist at the AI company Hugging Face. Models like these cannot be relied on because of their hallucinating, she says, and that remains a concern in the latest version, although OpenAI says it has improved safety in GPT-4.
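To see why a pure next-word predictor can produce fluent falsehoods, consider the minimal sketch below. The two-sentence training corpus is hypothetical and the model is a crude bigram counter, nothing like GPT-4 in scale, but the failure mode is the same in kind: each next word is chosen because it is locally plausible, with no check on whether the resulting sentence is true.

    # A bigram "language model" trained on two true sentences can
    # recombine them into a false one. Corpus and prompt are invented.
    import random
    from collections import defaultdict

    random.seed(1)

    corpus = [
        "the capital of france is paris",
        "the capital of italy is rome",
    ]

    # Count which word follows which in the training text.
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev].append(nxt)

    def complete(prompt, steps=1):
        """Extend the prompt by sampling a plausible next word."""
        words = prompt.split()
        for _ in range(steps):
            words.append(random.choice(follows[words[-1]]))
        return " ".join(words)

    for _ in range(5):
        print(complete("the capital of france is"))

Because “is” was followed by both “paris” and “rome” in training, the model sometimes completes the prompt with “rome”: a grammatical, confident and false answer.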

Luccioni was disappointed by the safety assurances OpenAI made without providing access to the data used for training. You do not know what the data is, so you cannot improve it, she says, and it is impossible to do science with a model like this.

The mystery of how GPT-4 was trained is also a concern for van Dis’s colleague in Amsterdam, psychologist Claudi Bockting. “It’s very hard as a human being to be accountable for something that you cannot oversee,” she says. One of her worries is that such models could be much more biased than human beings are. And without access to the code behind GPT-4, Luccioni explains, it is difficult to see where that bias might come from.

Van Dis, Bockting and colleagues argued earlier this year that there is a need to develop a set of living guidelines for the use and development of tools such as GPT-4. They are concerned that any legislation around AI technologies will struggle to keep up with the pace of development. The University of Amsterdam will hold an invitational summit on April 11 to discuss these concerns with representatives from organizations including UNESCO, the World Economic Forum and the Organisation for Economic Co-operation and Development.
