Large language models have a dark risk

Why Chatbots Matter: Ethical and Social Risks from Language Models, According to DeepMind and The Next Web

Chatbots will give bad advice or break someone’s heart at some point. Hence my dark but confident prediction that 2023 will bear witness to the first death publicly tied to a chatbot.

GPT-3, the most well-known “large language model,” has already urged at least one user to commit suicide, albeit under controlled circumstances in which the French startup Nabla (rather than a naive user) assessed the utility of the system for health care purposes. Things started off well, but quickly deteriorated.

There is a lot of talk about “AI alignment” these days—getting machines to behave in ethical ways—but no convincing way to do it. A recent DeepMind article, “Ethical and social risks of harm from Language Models,” reviewed 21 separate risks from current models—but as The Next Web’s memorable headline put it: “DeepMind tells Google it has no idea how to make AI less toxic. To be fair, neither does any other lab.” Jacob Steinhardt, a Berkeley professor, recently reported the results of an artificial intelligence forecasting contest: by some measures, AI is moving faster than people predicted; on safety, it is moving slower.

Large language models are better than any previous technology at fooling humans, yet they are extremely difficult to corral. They are also becoming cheaper and more pervasive; Meta just released a massive language model for free. 2023 is likely to see widespread adoption of such systems—despite their flaws.

Meanwhile, there is essentially no regulation on how these systems are used; we may see product liability lawsuits after the fact, but nothing precludes them from being used widely, even in their current, shaky condition.

Source: https://www.wired.com/story/large-language-models-artificial-intelligence/

The Impact of Theory of Mind on Human Intelligence and Cognition

Mind reading is common among humans. Not in the ways that psychics claim to do it, by gaining access to the warm streams of consciousness that fill every individual’s experience, or in the ways that mentalists claim to do it, by pulling a thought out of your head at will. We take in people’s faces and movements, listen to their words, and then figure out what is going on in their heads.

Among psychologists, such intuitive psychology — the ability to attribute to other people mental states different from our own — is called theory of mind, and its absence or impairment has been linked to autism, schizophrenia and other developmental disorders. Theory of mind lets us play games, comprehend one another, and enjoy literature and movies. In many ways, the capacity is an essential part of being human.

One psychologist recently argued that large language models like OpenAI’s ChatGPT have developed a theory of mind. His studies drew scrutiny and discussion from many cognitive scientists, who want to take the question of whether a chatbot can really do this and move it into the realm of more robust scientific inquiry. How might these models change our view of ourselves?

“Psychologists wouldn’t accept any claim about the capacities of young children just based on anecdotes about your interactions with them, which is what seems to be happening with ChatGPT,” said Alison Gopnik, a psychologist at the University of California, Berkeley and one of the first researchers to look into theory of mind in the 1980s. “You have to do quite careful and rigorous tests.”
