The EU has paved the way for a landmark law on artificial intelligence

A press release details fines for violations of the EU Artificial Intelligence Act, which bars companies from using facial recognition software to determine people’s sexual orientation and race

The law itself is not a world first; China’s new rules for generative AI went into effect in August. But the EU Artificial Intelligence Act is the most comprehensive of its kind. It prohibits the use of facial recognition software to identify people by sensitive characteristics, such as sexual orientation and race, as well as indiscriminate data scraping from the internet. Lawmakers decided, however, that law enforcement should be able to use facial recognition systems for certain serious crimes.

According to the press release, the negotiators established obligations for high-impact General-Purpose Artificial Intelligence systems that meet certain benchmarks. The act also imposes transparency requirements on those systems, including the creation of technical documentation and detailed summaries of the content used for training, which companies such as OpenAI have not provided so far.

Another element is that citizens should have a right to lodge complaints about AI systems and receive explanations about decisions made by “high-risk” systems that affect their rights.

The press release didn’t go into detail about how all that would work or what the benchmarks are, but it did set out a framework for fines if companies break the rules. They range from 1.5 percent to 7 percent of global revenue, depending on the violation and the size of the company.

The European Parliament voted to ban the use of artificial intelligence in biometric surveillance and other biometric systems: the background to the new EU law

EU lawmakers have pushed to completely ban the use of AI in biometric surveillance, but governments have sought exceptions for military, law enforcement, and national security. Late proposals from France, Germany, and Italy to allow makers of generative AI models to self-regulate are also believed to have contributed to the delays.

“It’s very very good,” Brando Benifei, an Italian lawmaker co-leading Parliament’s negotiating efforts, said by text message when asked if the deal included everything he wanted. “We had to accept some compromises, but overall very good.” The eventual law wouldn’t fully take effect until 2025 at the earliest, and it threatens stiff financial penalties for violations of up to 35 million euros ($38 million) or 7% of a company’s global turnover.
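As a rough illustration of how a turnover-based cap of that kind works, the sketch below computes the maximum possible penalty as the higher of a fixed amount and a percentage of global turnover. Only the 35 million euro and 7% figures come from the article; the “whichever is higher” reading and the example turnover are assumptions made for illustration.

```python
def max_fine_eur(global_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,    # figure cited in the article
                 turnover_pct: float = 0.07) -> float:  # 7% of global turnover
    """Upper bound on a penalty, read as the higher of the two caps.

    The "whichever is higher" interpretation is an assumption for
    illustration, not a statement of the final legal text.
    """
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Hypothetical company with 2 billion euros in global turnover:
# 7% of turnover (140 million euros) exceeds the 35 million euro figure.
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")
```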

Now that a provisional agreement has been reached, more negotiations will still be required, including votes by Parliament’s Internal Market and Civil Liberties committees.

Officials were eager to claim victory for the legislation, but civil society groups gave it a cool reception as they wait for the technical details to be worked out in the coming weeks. They said the deal didn’t go far enough in protecting people from harm caused by AI systems.

After months of debate over how to regulate companies, EU lawmakers spent more than 36 hours negotiating before reaching the agreement on Friday. Lawmakers were under pressure to strike a deal before the European election campaign starts in the new year.

Companies that don’t comply with the rules can be fined up to 7 percent of their global turnover. The bans on prohibited AI will take effect in six months, the transparency requirements in 12 months, and the full set of rules in around two years.
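A minimal sketch of that phase-in schedule, assuming a hypothetical entry-into-force date (the article does not give one); only the six-month, 12-month, and roughly two-year offsets come from the text.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (simple calendar arithmetic)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, min(d.day, 28))

# Hypothetical entry-into-force date; the article gives no exact date.
entry_into_force = date(2024, 6, 1)

milestones = {
    "bans on prohibited AI": 6,        # months after entry into force
    "transparency requirements": 12,
    "full set of rules": 24,           # "around two years"
}

for name, months in milestones.items():
    print(f"{name}: ~{add_months(entry_into_force, months)}")
```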

Measures designed to make it easier to protect copyright holders from generative AI and require general purpose AI systems to be more transparent about their energy use were also included.

The European Commission reaches a deal on the world’s first comprehensive rules for Generative AI: Thierry Breton in a press conference on Friday night

“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter,” said European Commissioner Thierry Breton in a press conference on Friday night.

The European office of the Computer and Communications Industry Association said the political deal marks the beginning of important technical work on the missing details of the Artificial Intelligence Act.

The European Parliament will still need to vote on the act early next year, but with the deal done, that’s a formality, Benifei told The Associated Press late Friday.

Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.

Strong and comprehensive rules from the EU “can set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who’s an expert on EU law and digital regulation. Many aspects of its provisions will probably be copied by other countries.

AI companies subject to the EU’s rules will also likely extend some of those obligations outside the continent, she said. “After all, it is not efficient to re-train separate models for different markets,” she said.

Foundation models were set to be one of the biggest sticking points for Europe. Despite opposition, a compromise was reached early in the talks; that opposition had called for self-regulation to help Europe’s generative artificial intelligence companies compete with the US.

Source: Europe reaches a deal on the world’s first comprehensive AI rules

The EU ban on face scanning and other remote identification systems: the exemptions sought for law enforcement

Foundation models, also known as large language models, are trained on vast troves of written works and images scraped off the internet. Unlike traditional artificial intelligence systems, which use predetermined rules to complete tasks, generative AI systems are able to create something new.
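To make that distinction concrete, here is a deliberately toy contrast, far removed from the large language models the article describes: a rule-based function whose output is fixed in advance by hand-written rules, next to a tiny Markov-chain generator that samples new word sequences it was never given verbatim. All names and the miniature training corpus are invented for illustration.

```python
import random

# Traditional, rule-based approach: the behaviour is fully determined
# by rules the developer wrote by hand.
def rule_based_sentiment(text: str) -> str:
    return "positive" if "good" in text.lower() else "negative"

# Toy generative approach: learn word-to-word transitions from training
# text, then sample new sequences that may never appear verbatim in it.
def train_markov(corpus: str) -> dict[str, list[str]]:
    words = corpus.split()
    model: dict[str, list[str]] = {}
    for current_word, next_word in zip(words, words[1:]):
        model.setdefault(current_word, []).append(next_word)
    return model

def generate(model: dict[str, list[str]], start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the law is new and the law is comprehensive and the rules are new"
model = train_markov(corpus)
print(rule_based_sentiment("this is a good law"))  # predetermined answer
print(generate(model, "the"))                      # newly sampled sequence
```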

A few big tech companies have built powerful foundation models that researchers warn could be used to amplify online propaganda and cyberattacks, or to create bioweapons.

Because these models act as basic structures for software developers building artificial intelligence-powered services, the lack of transparency about the data used to train them poses risks to daily life.

Europeans wanted a full ban on face scanning and other remote identification systems because of privacy concerns. But governments of member countries succeeded in negotiating exemptions so law enforcement could use them to tackle serious crimes like child sexual exploitation or terrorist attacks.

Daniel Leufer, a senior policy analyst at the digital rights group Access Now, said that there are still huge flaws in the final text.
