The White House has some rules for how the government can use artificial intelligence
The guidance to agencies says that any AI technology they use has to have proper safeguards in place by Dec. 1. If they can’t provide those safeguards, they have to stop using the technology, unless they can prove using it is necessary for the function of the agency.
In November, the US joined the UK, China and members of the EU in signing a declaration that acknowledged the dangers of rapid AI advances and called for international collaboration. Separately, Harris unveiled a nonbinding declaration on the military use of artificial intelligence, signed by 31 nations. It sets up rudimentary guardrails and calls for the deactivation of systems that engage in “unintended behavior.”
Around the world, countries are trying to regulate artificial intelligence. The EU voted in December to pass its AI Act, a measure that governs the creation and use of AI technologies, and formally adopted it earlier this month. China is also working on artificial intelligence regulation.
A draft of the guidance was released last fall, ahead of Vice President Harris’ trip to the first global AI summit, in the United Kingdom. The draft was then opened up for public comment before being released in its final form Thursday.
It also requires that each agency appoint a chief artificial intelligence officer, a senior role that will oversee implementation of AI. And it outlines how the government is trying to grow the workforce focused on AI, including by hiring at least 100 professionals in the field by this summer.
The public deserves confidence that the federal government will use technology responsibly, said Shalanda Young, director of the Office of Management and Budget.
The guidance does not spell out the details of how agencies will assess, test and monitor the technology's impacts.
Alexandra Reeve Givens, the president and CEO of the Center for Democracy & Technology, told NPR that she still has questions about what the testing requirements are and who in the government has the expertise to greenlight the technology.
“I see this as the first step. What’s going to come after is very detailed practice guides and expectations around what effective auditing looks like, for example,” Reeve Givens said. “There’s a lot more work to be done.”
One of the next steps that Reeve Givens is eyeing is the guidance that the administration will release on the procurement process and what requirements will be in place for companies whose AI technology the government wants to buy.
“That is the moment when a lot of decisions and values can be made and a lot of testing can be done before the government spends money on the system,” she said.
“We can then ask questions about, well, ‘What testing did you do? What did that look like?’ There can be more eyes and more public scrutiny on those use cases, but this gives us the hook to be able to start that public conversation,” she said.
The guidance from the Office of Management and Budget (OMB) also encourages agencies to innovate with artificial intelligence
The OMB guidance encourages innovation through the use of artificial intelligence. Law professor Ifeoma Ajunwa said the guidance sends a signal that it's OK for agencies to look at using artificial intelligence.
Several government agencies already use artificial intelligence, but the Biden administration's memo outlines other ways the technology could make an impact, from forecasting extreme weather events to tracking the spread of disease and opioid use.