Biden has a secret weapon against killer artificial intelligence
Predicting and Warning about Artificial Intelligence: The Case of Biden, Sunak, Dark Sides, Dead Reckoning, and The Terminator
Predicting and warning about the future have been themes of science fiction for decades. Even as Star Trek envisioned the wonders of flip phones and iPads, Neal Stephenson’s Snow Crash warned of the dystopian nature of the metaverse.
Too often, it seems like the minds pushing AI watched too much Trek and not enough Kubrick. Throughout Silicon Valley, the hype is focused on all the wonderful things AI can create, from art to music to term papers. Others are left to warn that AI may be built on other people’s work used without authorization, or that it may be evolving too quickly. Never before have optimism and pessimism coexisted so uncomfortably.
The large language model that the world is enamored with is getting some presents as its first birthday approaches. President Joe Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. And UK prime minister Rishi Sunak threw a party with a cool extinction-of-the-human-race theme, wrapped up with a 28-country agreement (counting the EU as a single country) promising international cooperation to develop AI responsibly. Happy birthday!
Before anyone gets too excited, let’s remember that it has been over half a century since credible studies first predicted disastrous climate change. Now that the water is literally lapping at our feet and heat is making whole chunks of civilization uninhabitable, the international order has hardly made a dent in the gigatons of fossil-fuel carbon dioxide spewing into the atmosphere. In the United States, the second in line to the presidency is a climate denier. Will AI regulation fare any better?
Given that, it almost seems petty to nitpick about the actual content. I will anyway. Let’s start with Biden’s executive order. You won’t have to read it; I waded through all the government-speak, and by the end I was jonesing for Dramamine. How will the president deal with the dark side of artificial intelligence? By unleashing a human wave of bureaucracy. The document wantonly calls for the creation of new committees, working groups, boards, and task forces. There’s also a consistent call to add AI oversight to the duties of current civil servants and political appointees.
What the document doesn’t provide is firm legal backing for the regulations and mandates that may result from the plan; that would take an act of Congress. (Don’t hold your breath, as a government shutdown looms.) Many of Biden’s solutions depend on self-regulation by the very industry under examination, an industry that had substantial input into the initiative.
Dead Reckoning has been called the perfect artificial-intelligence panic movie. An AI called The Entity becomes sentient and threatens to ruin the world with its all-knowing intelligence. It is, as Marah Eakin wrote for WIRED earlier this year, the ideal “paranoia litmus test”: when something rises to the level of Big Bad in a summer blockbuster, you know it’s the thing people are most freaked out by right now. The Entity must seem frightening to someone like President Biden, who is familiar with the brinkmanship occurring around the world. It also raises the question: Did no one watch The Terminator?
A national AI research resource is a good idea, but the US version will take a lot longer to arrive than the UK’s, and both need sustained funding if access to frontier AI is to be democratized
The UK plans a national AI Research Resource (AIRR) to provide supercomputer-level computing power to diverse researchers keen to study frontier AI; Biden’s executive order calls for a similar National AI Research Resource (NAIRR) in the US.
Training a frontier artificial-intelligence system can take months and cost hundreds of millions of dollars; “in academia, this is currently impossible.” The research resources aim to democratize those capabilities.
“It’s a good thing,” says Yoshua Bengio, an AI pioneer and scientific director of Mila, the Quebec AI Institute in Canada, who attended the summit. “All of these systems are owned by companies that want to make money from them. We need academics and government-funded organizations that are really working to protect the public to be able to understand these systems better.”
The executive order directs agencies that fund life-sciences research to establish standards to protect against using AI to engineer dangerous biological materials.
Agencies are also encouraged to help skilled immigrants with AI expertise to study, stay, and work in the United States. Within the next 1.5 years, the National Science Foundation must fund and launch at least one regional innovation engine that prioritizes AI, and establish at least four national AI research institutes, adding to the 25 currently funded.
In 2021, Russell Wald, who studies AI policy at Stanford University in California, and his colleagues published a white paper with a blueprint of what such a resource might look like. According to the NAIRR task force report, it was supposed to have a budget of $2.6 billion over six years. “That is peanuts. In my view it should be substantially larger,” says Wald. Lawmakers will have to pass the CREATE AI Act, a bill introduced in July 2023, to release funds for a full-scale NAIRR, he says. “We need Congress to step up and take this seriously, and fund and invest,” says Wald. “If they don’t, we’re leaving it to the companies.”
The UK government announced plans for the UK AIRR in March. It has since promised to triple the funding pot for the AIRR to £300 million, as part of a £900 million investment to transform UK computing capacity. Given its population and gross domestic product, the UK investment is much more substantial than the US proposal, says Wald.
The plan is backed by two new supercomputers: Dawn in Cambridge, which aims to be running in the next two months; and the Isambard-AI cluster in Bristol, which is expected to come online next summer.
Source: “The world’s week on AI safety: powerful computing efforts launched to boost research”
“We’re talking about AI that doesn’t yet exist — the things that are going to come out next year,” says Bengio. “What are we going to do?” We are on our way to building systems that are useful and potentially dangerous. “We already ask pharma to spend a huge chunk of their money to prove that their drugs aren’t toxic. We should do the same thing.”