What to Know About the New European Union Artificial Intelligence Act (Part Two) – Enforcement and What AI Systems are Banned

by John DiGiacomo

Partner

Internet Law

As discussed in Part One of this series, the Artificial Intelligence Act (“AI Act”) has now gone into effect in the European Union. In Part One, the Internet Law Attorneys at Revision Legal summarized the applicability of the AI Act, some exemptions from coverage, and the effective-date timetable. In Part Two, we discuss the Act’s enforcement mechanisms, its risk-level framework, and the types of AI programs and systems that are banned. The final Part Three will summarize the AI Act’s regulation of high-risk, limited-risk, and low-risk AI systems.

Violations could be expensive

Before discussing the AI Act’s risk framework, let’s summarize enforcement and the potential costs for those who violate the Act.

First, the AI Act does not create a private right of action for natural persons or artificial entities. Enforcement will be accomplished by the European AI Office and, at the national level, by the various member-state market surveillance authorities. Since the Act just went into effect, the establishment of these enforcement offices is just beginning.

The enforcement authorities will have the power to investigate, demand documentation, require and enforce corrective actions, and impose civil fines. The AI Act establishes several tiers of potential penalties. Violating the rules on banned AI systems can result in civil fines of up to the greater of €35 million or 7% of global annual revenue. Lesser violations can result in civil fines of up to the greater of €15 million or 3% of global annual revenue. Supplying false, incomplete, or misleading information to regulators can result in civil fines of up to the greater of €7.5 million or 1% of global annual revenue.

The AI Act’s risk-hierarchy framework

The AI Act’s regulatory framework involves four levels of risk that are potentially present in the deployment, use, and distribution of AI software systems. The four levels are: unacceptable, high, limited, and low. In simple terms, the AI Act bans AI systems deemed to pose an unacceptable level of risk; the remaining systems are regulated according to whether they pose a high, limited, or low risk.

How these levels are defined is based on the risks that are posed by the AI systems to fundamental rights and freedoms of people and society protected by EU law. For this reason, the threat/risk level of a given AI system can change over time. For example, the EU’s summary notes that AI-enabled video games and spam filters may be “low-risk” currently but that the currently-designated risk level might change with the ongoing development of generative AI programs.

Certain AI programs and systems are banned

As of February 2025, the AI Act bans certain types of AI programs/systems. The list is not exhaustive and will likely be expanded in future years. The current list can be summarized as follows:

  • Use of “real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes …” with certain exceptions (such as the “the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings”)
  • Use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques
  • Use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective, or the effect, of materially distorting behavior in a manner that causes or is reasonably likely to cause significant harm
  • Use of AI systems to create something like a social credit score leading to detrimental or unfavorable treatment that is unrelated to the context or that is unjustified or disproportionate to social behavior or its gravity
  • Use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offense
  • Use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
  • Use of AI systems to infer the emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or placed on the market for medical or safety reasons
  • Use of biometric categorization systems that individually categorize natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

See Parts One and Three for further information.

Contact the Internet Law and Social Media Attorneys at Revision Legal

For more information, contact the experienced Internet Law and Social Media Lawyers at Revision Legal. You can contact us through the form on this page or call (855) 473-8474.
