The European Union's Artificial Intelligence Act (“AI Act”) has now gone into effect. In Part One of this series, the Internet Law Attorneys at Revision Legal summarized the applicability of the AI Act, some exemptions from coverage, and the effective-date timetable. In this Part Two, we discuss the Act's enforcement mechanisms, its risk-level framework, and the types of AI programs and systems that are banned. Part Three will summarize the AI Act's regulation of high-risk, limited-risk, and low-risk AI systems.
Violations could be expensive
Before discussing the AI Act’s risk framework, let’s summarize enforcement and the potential costs for those that violate the Act.
First, the AI Act does not create a private right of action for natural persons or artificial entities. Enforcement will be handled by the European AI Office and, at the national level, by the member states' market surveillance authorities. Because the Act only recently went into effect, the establishment of these enforcement bodies is just beginning.
The enforcement authorities will have the power to investigate, demand documentation, require and enforce corrective actions, and impose civil fines. The AI Act establishes several tiers of potential penalties. Violations of the rules regarding banned AI systems can result in civil fines of up to the greater of €35 million or 7% of worldwide annual turnover. Lesser violations can result in civil fines of up to the greater of €15 million or 3% of worldwide annual turnover. Supplying false, incomplete, or misleading information to authorities can result in civil fines of up to the greater of €7.5 million or 1% of worldwide annual turnover.
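To make the “greater of” structure concrete, here is a minimal sketch, in Python and for illustration only, of how the maximum fine ceilings scale with a company's worldwide annual turnover. The function name and tier labels are hypothetical; the euro figures and percentages reflect the Act's penalty tiers, but actual fines are set by the enforcement authorities within these ceilings based on case-specific factors.

```python
# Rough illustration only: computes the MAXIMUM fine ceiling under the AI
# Act's three penalty tiers. Each ceiling is the greater of a fixed euro
# amount or a percentage of worldwide annual turnover. Actual fines are
# determined by enforcement authorities and depend on many factors.

def max_fine_cap(worldwide_turnover_eur: float, tier: str) -> float:
    """Return the maximum possible fine ceiling for a given penalty tier."""
    tiers = {
        "banned_practices": (35_000_000, 0.07),   # banned AI systems
        "other_violations": (15_000_000, 0.03),   # lesser violations
        "false_information": (7_500_000, 0.01),   # misleading information
    }
    fixed_amount, pct_of_turnover = tiers[tier]
    # The ceiling is whichever is greater: the fixed amount or the
    # percentage of worldwide annual turnover.
    return max(fixed_amount, pct_of_turnover * worldwide_turnover_eur)

# Example: a provider with EUR 2 billion in worldwide annual turnover.
print(max_fine_cap(2_000_000_000, "banned_practices"))   # 140,000,000.0 (7%)
print(max_fine_cap(2_000_000_000, "false_information"))  # 20,000,000.0 (1%)
```

As the example shows, for large companies the percentage-of-turnover prong will usually exceed the fixed euro amount, so global revenue, not the headline figure, drives the practical exposure.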
The AI Act’s risk-hierarchy framework
The AI Act's regulatory framework sorts AI software systems into four levels of risk arising from their deployment, use, and distribution: unacceptable, high, limited, and low. In simple terms, the AI Act bans AI systems deemed to pose an unacceptable level of risk; systems in the remaining three tiers are subject to progressively lighter regulation.
These levels are defined by the risks that an AI system poses to the fundamental rights and freedoms of people and society protected by EU law. For this reason, the risk level assigned to a given AI system can change over time. For example, the EU's summary notes that AI-enabled video games and spam filters may be “low-risk” today, but that designation could change with the ongoing development of generative AI programs.
Certain AI programs and systems are banned
Starting in February 2025, the AI Act bans certain types of AI programs and systems. The list is not exhaustive and will likely be expanded in future years. The current list can be summarized as follows:
- Use of “real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes …” with certain exceptions (such as “the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings”)
- Use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques
- Use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective, or the effect, of materially distorting behavior in a manner that causes or is reasonably likely to cause significant harm
- Use of AI systems to create something like a social credit score, where the score leads to detrimental or unfavorable treatment that is unrelated to the context in which the data was generated, or that is unjustified or disproportionate to the social behavior or its gravity
- Use of an AI system to assess or predict the risk of a natural person committing a criminal offense based solely on profiling or on an assessment of personality traits and characteristics
- Use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
- Use of AI systems to infer the emotions of a natural person in the workplace or in educational institutions, except where the AI system is intended to be put into service or placed on the market for medical or safety reasons
- Use of biometric categorization systems that individually categorize natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
See Parts One and Three for further information.
Contact the Internet Law and Social Media Attorneys at Revision Legal
For more information, contact the experienced Internet Law and Social Media Lawyers at Revision Legal. You can contact us through the form on this page or call (855) 473-8474.