The Artificial Intelligence Act (“AI Act”) has now gone into effect in the European Union. In Part One of this series, the Internet Law Attorneys at Revision Legal summarized the applicability of the AI Act, some exemptions from coverage, and the effective date timetable. Part Two summarized the Act’s enforcement mechanisms, risk-level framework, and banned types of AI programs and systems. This final Part Three summarizes the AI Act’s regulation of high-risk, limited-risk, and low-risk AI systems.
High-risk AI programs/systems under the AI Act
Under the AI Act, the high-risk designation applies to three broad categories:
- Societal risk — for example, risks to energy generation and provision, transportation systems, law enforcement, banking, government, etc.
- Risks to fundamental rights/freedoms — for example, dangers to groups based on group characteristics, dangers to individuals arising from the application of AI systems, etc.
- Specific risks to individuals and businesses — for example, AI systems for automobiles, airplanes, machinery, medical devices, etc.
Further, the AI Act makes clear that EU regulators have identified several sources of risk, including:
- From the AI programming itself, as developed, as modified, and as it might modify itself
- From the inputs used, including inputs that may be false/fake, inaccurate, biased, etc.
- From the lack of human input and supervision
- From external threats such as hacking and cybercrime
- From internal misuse, accident, sabotage, etc.
To combat and attempt to mitigate these risks, the AI Act imposes a long list of mandates on developers of AI systems. Some of these mandates include pre-marketing testing and validation of the AI system (including the accuracy of input data sets and generated outputs), proof of human oversight in design and implementation, proof of sufficient and adequate cybersecurity procedures and protocols (both internal and external), the requirement of obtaining, and then affixing, a “Conformité Européenne” (“CE”) marking to the product, and registration of the AI system with EU AI regulators. Other mandates include:
- Having risk management policies and personnel in place
- Establishing quality training programs for individuals involved with the AI program/system
- Maintaining technical documentation
- Providing transparency on how the AI system functions
- Having robust monitoring programs
- Promptly providing documents and access when demanded by regulators
- And more
To whom do these mandates apply?
The AI Act distinguishes four types of persons/businesses involved in AI provision: developers, deployers, distributors, and importers. Most of the obligations, including those listed above, are imposed on developers. Deployers have fewer obligations, most of which relate to proper use and oversight. Distributors and importers have even fewer obligations but are responsible for ensuring that products are properly labeled with the CE marking.
Note that the categories can be fluid. If, for example, a distributor licenses an AI system from a developer and then modifies it, the distributor could become a developer (and thus become subject to the more extensive requirements of the AI Act).
Limited and low-risk AI systems
The AI Act imposes relatively few mandates on developers of limited-risk and low-risk AI programs. Generally, developers of low-risk AI systems must provide technical documentation and instructions for use, comply with the EU Copyright Directive, and publish a summary of the content used for training. Developers of limited-risk AI products must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure proper and adequate cybersecurity protections.
Contact the Internet Law and Social Media Attorneys at Revision Legal
For more information, contact the experienced Internet Law and Social Media Lawyers at Revision Legal. You can contact us through the form on this page or call (855) 473-8474.