Europe agrees AI rules set to take effect in 2025, with penalties for infractions of up to $38m or 7% of a company’s global turnover.

European Union negotiators reached a deal on Friday on the world’s first comprehensive artificial intelligence regulations, paving the way for legal oversight of a technology that promises to transform daily life and has stirred fears of existential threats to humankind.

Just before midnight, European Commissioner Thierry Breton tweeted, “Deal!”, adding: “The EU is the first continent to establish explicit guidelines for the application of AI.”

The outcome followed protracted closed-door negotiations this week: the first round lasted 22 hours, and a second round began on Friday morning.

Officials were under pressure to secure a political win for the centerpiece legislation. Civil society organizations, however, gave it a cold reception as they awaited the technical details that still need to be resolved in the coming weeks, saying the deal did not do enough to shield people from the dangers of artificial intelligence.

Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobby group, said that “today’s political deal marks the beginning of important and necessary technical work on crucial details of the AI Act, which are still missing.”

When the EU released the first draft of its rulebook in 2021, it jumped ahead of the rest of the world in the race to develop AI safeguards. The recent surge in generative AI, however, forced European officials to hurriedly amend a proposal positioned to become a global model.

Brando Benifei, an Italian legislator co-leading the European Parliament’s negotiating efforts, told reporters late Friday that while Parliament still needs to vote on the act early next year, the vote is now merely a formality since the accord has been reached.

When asked whether the deal included everything he wanted, he replied by text message: “It’s very good. Overall, very good, but we had to accept some compromises.” The proposed law, which would not take full effect until 2025 at the earliest, imposes severe fines for infractions of up to 35 million euros ($38 million), or 7% of a company’s worldwide turnover.

The ability of generative AI systems such as OpenAI’s ChatGPT to produce human-like text, images, and music has taken the world by storm. But concerns have been raised about the risks this quickly advancing technology poses to jobs, privacy, copyright protection, and even human life itself.

The United States, the United Kingdom, China, and international coalitions such as the Group of Seven major democracies have now weighed in with their own proposals to govern AI, though they are still catching up to Europe.

Anu Bradford, a professor at Columbia Law School and an authority on EU law and digital regulation, said strict and all-encompassing regulations from the EU “may set a powerful example for many governments considering regulation.” While other countries “may not copy every provision,” they “will probably emulate many of its features.”

AI businesses that must abide by the EU’s regulations will probably extend some of those obligations beyond the bloc, she said. “Retraining distinct models for disparate markets is inefficient.”

The original intent of the AI Act was to mitigate the risks posed by specific AI functions according to a risk scale running from low to unacceptable. Legislators, however, pushed to expand it to cover foundation models, the sophisticated systems that underpin general-purpose AI services such as ChatGPT and Google’s Bard chatbot.

Foundation models had appeared set to be a major sticking point for Europe. Nonetheless, negotiators struck a provisional agreement early in the discussions, despite resistance from France, which advocated self-regulation to help domestic European generative AI businesses compete with major US rivals such as Microsoft, OpenAI’s backer.

These systems, often referred to as large language models, are trained on enormous collections of text and images scraped from the internet. They give generative AI the ability to produce something original, unlike classical AI, which processes data and performs tasks according to preset rules.

Businesses creating foundation models will need to draw up technical documentation, comply with EU copyright rules, and detail the content used for training. The most sophisticated foundation models that pose “systemic risks” will face extra scrutiny, including requirements to assess and mitigate those risks, report serious incidents, put cybersecurity safeguards in place, and disclose their energy efficiency.

Researchers have warned that these foundation models, built by a handful of large tech corporations, could be used to enhance cyberattacks, the development of bioweapons, and online disinformation and manipulation.

Rights organizations also warn that because the models serve as foundational frameworks for software developers creating AI-powered applications, the lack of transparency surrounding the data used to train them poses hazards to everyday life.

AI-powered facial recognition surveillance systems turned out to be the most difficult subject to negotiate, but after much haggling, a solution was reached.

Because of privacy concerns, European politicians wanted to outlaw entirely the use of face scanning and other “remote biometric identification” systems in public places. Member-nation governments, however, negotiated exemptions allowing law enforcement to use them to combat serious crimes such as child sexual abuse and terrorist attacks.

Rights groups are alarmed by the AI Act’s exemptions and other significant loopholes, including the lack of protection for AI systems used in migration and border control and the option for developers to opt out of having their systems classified as high-risk.

Daniel Leufer, a senior policy analyst at the digital rights organization Access Now, said: “The fact remains that huge flaws will remain in this final text, whatever the victories may have been in these final negotiations.”
