Humans are trying to keep themselves safe from artificial intelligence

AI as a Threat of Human Extinction: OpenAI, DeepMind, Yejin Choi, and the Global AI Safety Summit

In March, an open letter signed by Elon Musk and other technologists warned that giant AI systems pose profound risks to humanity. Weeks later, Geoffrey Hinton, a pioneer in developing AI tools, quit his research role at Google, warning of the grave risks posed by the technology. More than 500 business and science leaders, including representatives of OpenAI and Google DeepMind, have put their names to a 22-word statement saying that mitigating the risk of human extinction from AI “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. The UK government invoked the potential threat of artificial intelligence when it announced that it would host the first global safety summit for the technology.

Yejin Choi, a professor in the School of Computer Science & Engineering at the University of Washington, is developing an open-source model called Delphi, designed to have a sense of right and wrong. She is interested in how humans perceive Delphi’s moral pronouncements, and in building smaller systems that don’t require massive resources. “The current focus on scale is very unhealthy for a variety of reasons,” she says. “It’s a total concentration of power, just too expensive, and unlikely to be the only way.” —W.K.

Conversations with AI: Who Gets to Write the Plot of Our Relationship with This Technology

“I wanted to use generative AI to capture the potential and unease felt as we explore our relationship with this new technology,” says artist Sam Cannon, who worked alongside four photographers to enhance portraits with AI-crafted backgrounds. “I fed images and ideas to the AI, and it gave its own in return.”

A year ago, the idea of holding a meaningful conversation with a computer was the stuff of science fiction. But since OpenAI’s ChatGPT launched last November, life has started to feel more like a techno-thriller with a fast-moving plot. Artificial-intelligence tools are already changing how people live and work; how the story ends depends on who is helping to write the plot.

So far, a narrow group of company executives and technologists has dominated that conversation, while other communities have been left out. The head of a New York City-based institute that studies the social consequences of artificial intelligence says that letters from tech-industry leaders draw boundaries around who counts as an expert in the conversation.

Artificial-intelligence systems and tools have the potential to benefit many different areas. But they can also cause well-documented harms, from biased decision-making to the elimination of jobs. AI-powered facial recognition is already being abused by autocratic states to track and oppress people. People in marginalized communities are likely to be affected most by biases in the technology, which can deny them welfare benefits, medical care or asylum. These debates are not getting enough oxygen.

One of the biggest concerns surrounding the latest breed of generative AI is its potential to boost misinformation. The technology makes it easier to produce fraudulent text, photos and videos, which could be used to influence elections or undermine people’s ability to trust any information. Tech companies that want to avoid or reduce these risks must put ethics and safety at the center of their work; at present, they seem reluctant to do so. OpenAI stress-tested GPT-4 by prompting it to produce harmful content and then putting safeguards in place, but although the company described what it did, the full details of the testing and the data that the model was trained on were not made public.

Tech firms must make sure that their tools and systems are safe before they are released. They should also submit their data in full to independent regulatory bodies that can verify it, much as drug companies must submit clinical-trial data to medical authorities before drugs can go on sale.
