The Artificial Intelligence Act (“AI Act”) has now gone into effect in the European Union. Implementation of the AI Act will be phased in over roughly three years, allowing businesses time to evaluate their AI systems and create compliance policies and programs. The purpose of the AI Act is to regulate the use of artificial intelligence software, limiting the risks posed by AI and preventing various types of adverse social and political consequences for the EU generally and for consumers more specifically. Revision Legal provides this three-part summary of what is required by the EU’s AI Act.
Broad applicability — even to U.S.-based businesses
The AI Act will apply very broadly. This is because, first, the definition of “AI” is very broad. The AI Act defines an “AI system” to include anything that is “machine-based,” that operates with “varying levels of autonomy,” that may exhibit “adaptiveness after deployment,” and that uses inputs to generate outputs (such as predictions, recommendations, and decisions) with significant impacts on real and virtual environments. That definition could cover a very broad range of education, design, engineering, monitoring, autonomous, and other products. Conceivably, an automated HVAC system might now have an AI component, with only a small AI output, that would be subject to the AI Act.
The broad application of the AI Act also results from the broad definition of who is covered. The AI Act applies to businesses that:
(i) market an AI system to or within the EU;
(ii) put an AI system into service within the EU; or
(iii) use the output of an AI system within the EU.
Note that the physical location of the AI system itself is not determinative. The AI Act also delineates four categories of covered businesses: providers, deployers, importers, and distributors. Similar and overlapping, but also distinct, obligations are imposed by the AI Act on these four categories of businesses depending on the risk level of the AI system being provided, deployed, imported, or distributed.
Compliance by February 2025 in some cases; full compliance by August 2027
As noted, the AI Act will be phased in. The phases are based on the risk hierarchy set forth in the Act, and full compliance will be required by August 2027.
In some cases, compliance must occur by February 2025. This deadline relates to AI software systems that are banned under the AI Act.
By August 2025, the AI Act’s regulations governing general-purpose AI will become effective. By August 2026, the AI Act’s mandates for “high-risk” AI software will become effective. Finally, by August 2027, the regulations with respect to product-safety high-risk AI programs will become effective. (More on these risk levels and requirements in Parts Two and Three.) Note that there is a later effective deadline of August 2030 for certain high-risk AI systems used by public authorities.
Exceptions to coverage
The AI Act exempts certain AI systems from coverage, including:
- AI systems used exclusively for military and national defense/security purposes
- AI systems solely used for scientific research and development
- AI systems used solely for personal, nonprofessional activities
- AI systems offered as free and open-source (although minor requirements must be met related to copyright and other disclosure requirements) — note that this exception does not apply if the open-source AI is high-risk or is among the banned types of AI
See Parts Two and Three for further information.
Contact the Internet Law and Social Media Attorneys at Revision Legal
For more information, contact the experienced Internet Law and Social Media Lawyers at Revision Legal. You can contact us through the form on this page or call (855) 473-8474.