Joe Biden Wants US Government Algorithms Tested for Potential Harm to Citizens
The Office of Management and Budget (OMB), which coordinates federal agencies' work with the president's priorities, has the authority to add testing and independent evaluation requirements to its rules. It would ask government agencies to evaluate and monitor algorithms already in use, as well as any acquired in the future, for negative impacts on privacy, democracy, market concentration, and access to government services.
Once in effect, the rules could force changes in US government activity dependent on AI, such as the FBI’s use of face recognition technology, which has been criticized for not taking steps called for by Congress to protect civil liberties. The new rules would require government agencies to assess existing algorithms by August 2024 and stop using any that don’t comply.
According to the draft memo, external tests of generative AI models would have to be carried out by people with no direct involvement in a system's development. The memo also instructs the leaders of federal agencies to explore ways they can use generative AI, such as OpenAI's ChatGPT, “without imposing undue risk.”
Biden’s AI executive order requires the OMB to provide its guidance to federal agencies in the next five months. The draft policy will be open for public comment until December 5.
The AI hype train has a knack for turning even close allies into competitors, though. The ink on the summit's forward-looking declaration was barely dry before the US asserted its own leadership role in developing and guiding AI: Vice President Kamala Harris delivered a speech warning that AI hazards, including deepfakes and biased algorithms, are already here. Earlier this week, the White House announced a sweeping executive order designed to lay out rules for governing and regulating AI, and yesterday it outlined new rules to prevent government algorithms from doing harm.
The venue for the Summit paid homage to Alan Turing, the British mathematician who did foundational work on both computing and AI, and who helped the Allies break Nazi codes during the Second World War by developing early computing devices. (A previous UK government apologized in 2009 for the way Turing was prosecuted for being gay in 1952.)
When a senior is kicked off his healthcare plan because of a faulty artificial intelligence system, is that not a serious harm to him? When a woman is threatened by her partner with explicit photos, is that not a serious harm to her?