The hearing on the regulation of artificial intelligence was very friendly

Future harms: a rhetorical sleight of hand for AI firms and automakers

This focus on future harms is a common rhetorical sleight of hand among industry figures, says Sarah Myers West, managing director of the AI Now Institute. These individuals “position accountability right out into the future,” she said, generally by talking about artificial general intelligence, or AGI: a hypothetical AI system smarter than humans across a range of tasks. Some experts suggest we’re getting closer to creating such systems, but this conclusion is strongly contested.

Missy Cummings, a former senior safety advisor at the NHTSA, told me this week that she left the agency with a sense of profound concern about the autonomous systems being deployed by many car manufacturers. “We’re in serious trouble in terms of the capabilities of these cars,” she said. “They’re not even close to being as capable as people think they are.”

Also like ChatGPT, Tesla’s Autopilot and other autonomous driving projects have been elevated by absurd amounts of hype. Heady dreams of a transportation revolution led automakers, startups, and investors to pour huge sums into developing and deploying a technology that still has many unsolved problems. In the mid-2010s, government officials took a laissez-faire approach to autonomous cars and showed little appetite for reining in their deployment.

Some auto companies have stopped developing self-driving car projects because of issues with the technology. Meanwhile, as Cummings says, the public is often unclear about how capable semiautonomous technology really is.

Lawmakers and governments have begun to propose regulation of generative AI tools and large language models. The current panic centers on these models: tools that are remarkably good at answering questions and solving problems, even if they still have significant flaws.

This rhetorical feint was obvious at the hearing. Discussing government licensing, OpenAI’s Altman quietly suggested that any licenses need only apply to future systems. “Where I think the licensing scheme comes in is not for what these models are capable of today,” he said. “As we head towards artificial general intelligence, that is where I think we need such a scheme.”

Focusing on hypothetical future harms also diverts attention from damage happening right now. Researchers like Joy Buolamwini, for example, have repeatedly identified problems with bias in facial recognition, which remains inaccurate at identifying Black faces and has already produced numerous cases of wrongful arrest in the US. Yet during the hearing, facial recognition and its flaws were barely mentioned at all.

AI researcher Margaret Mitchell says that good regulation depends on setting standards that firms can’t easily bend to their advantage, and that this requires a nuanced understanding of the technology being assessed. She gives the example of facial recognition firms that market their products to police forces on the strength of impressive accuracy claims. Such claims sound reassuring, but experts say these companies have used skewed tests to produce their figures. Mitchell added that she generally does not trust Big Tech to act in the public interest: technology companies, she said, have shown that they don’t see respecting people as a part of running a company.

Those running their own companies stressed the potential threat to competition. “Regulation invariably favours incumbents and can stifle innovation,” Emad Mostaque, founder and CEO of Stability AI, told The Verge. Clem Delangue, CEO of AI startup Hugging Face, tweeted a similar reaction: “Requiring a license to train models would be like requiring a license to write code. IMO, it would further concentrate power in the hands of a few & drastically slow down progress, fairness & transparency.”

Some experts, though, think some form of licensing could be effective. Mitchell, who was forced out of Google alongside Timnit Gebru after coauthoring a research paper on the potential harms of AI language models, describes herself as “a proponent of some amount of self-regulation, paired with top-down regulation.” She told The Verge that she could see the appeal of certification, but perhaps for individuals rather than companies.

At the hearing this week, Altman was not so grandiose. While he raised the issue of regulatory capture, he was less clear about whether licensing would apply to smaller entities. “We don’t want to slow down smaller startups. We don’t want to slow down open source efforts,” he said, though he added that such players would still need to comply.

Although Altman’s OpenAI is still called a “startup” by some, it’s arguably the most influential AI company in the world. Its launch of image and text generation tools like ChatGPT and deals with Microsoft to remake Bing have sent shockwaves through the entire tech industry. Altman himself is well positioned: able to appeal to both the imaginations of the VC class and hardcore AI boosters with grand promises to build superintelligent AI and, maybe one day, in his own words, “capture the light cone of all future value in the universe.”
