Biden’s big artificial intelligence plan sounds dangerous, but it lacks bite

Artificial intelligence safety: what governments and organisations can learn from regulation, from banking and health care to road safety

Sunak last week announced more than £100 million in funding for research on artificial intelligence safety, along with plans to set up an Artificial Intelligence Safety Institute. He is making another £100 million available to develop AI for use in health care. These announcements are welcome. Opinions on the benefits and risks of artificial intelligence differ widely, and stakeholders must work towards a consensus. That effort needs to be led by researchers and mediated by international organizations, allowing evidence to guide decision-making and giving all countries an equal voice and an equal stake in the outcome.

Governments and corporations should not fear regulation. It enables technologies to develop while protecting people from harm, and it needn’t come at the expense of innovation: setting firm boundaries can lead to safer innovation within them.

Fortunately, there is a wealth of literature on regulation — from banking to medicines to food to road safety — on which governments can draw. The International Atomic Energy Agency, which oversees civil nuclear technology, is one model for how AI might be governed (see https://doi.org/k3h2; 2023).

Every technology is different, but some fundamental principles have emerged over decades of regulatory experience. One is the need for transparency: regulators must have access to complete data. Another is the need for legally binding standards for monitoring, compliance and liability.

The global financial crisis shows what can happen when regulators cannot see relevant data. Until it was too late, regulators were unaware of the extent to which banks and insurance companies had become dependent on risky credit and had invested many billions of dollars in opaque ‘black box’ financial products. Few people understood how these products had been constructed or what systemic risks they carried.

Other mainstays of regulation include registration, regular monitoring, reporting of incidents that could cause harm, and continuing education for both users and regulators. Road safety offers lessons here. The car has changed the lives of billions, but it also causes harm. To reduce risks, manufacturers must comply with safety standards, vehicles must be tested regularly, and an insurance-based framework assesses and apportions liability in accidents. Regulation can even spur innovation: the introduction of emissions standards inspired the development of cleaner vehicles.

Sunak is right that companies should not, as he put it, be “marking their own homework”, especially for a technology that poses known risks to jobs and relies on algorithms that often reinforce bias and discrimination. Governments want to attract flagship companies — Elon Musk, the owner of X (formerly Twitter), is reported to be attending the summit. But when it comes to AI safety, independent scrutiny is required, and that means regulation — a word that governments are reluctant to use.

Tiny superheroes, fun-size dinosaurs, and overgrown insects squealed at the White House on Monday. The costumed children celebrating Halloween with President Biden weren’t there for the unveiling of a sweeping new executive order on artificial intelligence. But with the US government now holding a lengthy new to-do list and Vice President Harris heading to a UK summit to sell the president’s vision, leaders in Congress may be asking themselves: trick or treat?

Biden even aims to gain a measure of control over private AI projects. He plans to deploy the Defense Production Act—written to allow government control of industries during wartime—to force private US technology firms to report sensitive details of their most secretive AI development projects to the federal government.

“This executive order will use the same authority to make companies prove—prove—that their most powerful systems are safe before allowing them to be used,” Biden said.

Far-off AI risks: Vice President Harris heads to the UK’s Summit on AI Safety

Vice President Harris was on hand for the announcement and is taking Biden’s artificial intelligence vision on the road for the rest of the week. She’ll bring her own agenda to the UK’s Summit on AI Safety, hosted by Prime Minister Rishi Sunak and focused primarily on far-off AI risks.
