The European Commission’s Proposed Artificial Intelligence Act: Benchmarks, Fines, and the Right to Complain about AI Systems
Following a round of intense negotiations this week, lawmakers in Brussels have now reached a “provisional agreement” on the European Union’s proposed Artificial Intelligence Act (AI Act). The AI Act is expected to be the world’s first comprehensive set of rules governing artificial intelligence and could become a benchmark for other regions looking to pass similar laws.
According to the press release, negotiators established obligations for “high-impact” general-purpose AI (GPAI) systems that meet certain benchmarks, including risk assessments, adversarial testing, and incident reporting. The agreement also mandates transparency for those systems, including the creation of technical documentation and “detailed summaries about the content used for training,” a disclosure that companies like OpenAI have so far declined to make.
Another element is that citizens will have the right to lodge complaints about AI systems and to receive explanations of decisions made by “high-risk” systems that affect their rights.
The press release did not specify what the benchmarks are or how they would work, but it did outline a framework of fines for companies that break the rules. Penalties vary with the violation and the size of the company, ranging from 7.5 million euros or 1.5 percent of global turnover up to 35 million euros or 7 percent of global turnover.
EU lawmakers have pushed for a complete ban on the use of artificial intelligence in criminal investigations, but member governments are seeking exceptions for military, law enforcement, and national security uses. Late proposals from France, Germany, and Italy are thought to have contributed to the delays.
A final deal is expected to be formalized by the end of the year, but the law probably won’t come into force until at least 2025.
Now that a provisional agreement has been reached, further steps are still required before the law is finalized, including votes by Parliament’s Internal Market and Civil Liberties committees.
Negotiations over rules regulating live biometric monitoring (such as facial recognition) and “general-purpose” foundation models like the one behind OpenAI’s ChatGPT have been highly divisive. These points were reportedly still being debated this week ahead of Friday’s announcement, delaying the press conference at which the agreement was unveiled.
Over the course of 36 hours, members of the EU Parliament, Council, and Commission thrashed out new legislation for companies like OpenAI after months of debate. Lawmakers are trying to strike a deal ahead of the EU Parliament election campaign.
Companies that don’t comply with the rules can be fined up to 7 percent of their global turnover. Bans on prohibited AI practices will take effect after six months, the transparency requirements after 12 months, and the full set of rules in around two years.
The agreement also includes measures to make it easier to protect copyrighted works from generative artificial intelligence and to require general-purpose AI systems to be more transparent about their energy use.
Speaking at a press conference on Friday night, the European Commissioner said that Europe had positioned itself as a pioneer and understood the importance of its standard-setting role.