EU AI Act: Part 3 - Risk-based Classification & Compliance of AI Systems

Jun 26, 2024

The EU AI Act further classifies AI systems and general-purpose AI systems (as explained in Part 2 of the series) based on their potential risk to health, safety, and the fundamental rights enshrined in the Charter, including democracy, the rule of law, and environmental protection. Each category of systems is then subject to different compliance requirements.

AI Systems

Single-purpose AI systems are classified into four risk-based categories for the purpose of compliance and monitoring; a short sketch tying the tiers together follows the four categories below.

1. Prohibited

AI practices that violate fundamental EU rights and values are prohibited under the AI Act. Below are some of the AI practices that fall under this category.

AI systems using subliminal, manipulative, or deceptive techniques to distort behaviour.

AI systems exploiting vulnerabilities based on age, disability, or social or economic situation.

AI systems evaluating or classifying individuals based on social behaviour, leading to unjustified or disproportionate treatment (social scoring).

AI systems predicting criminal behaviour based solely on profiling.

AI systems creating or expanding facial recognition databases through untargeted scraping.

AI systems inferring emotions in workplaces and educational institutions (with exceptions for medical or safety reasons).

AI systems categorising individuals based on biometric data to infer sensitive information.

2. High risk

High-risk AI systems are those deemed to pose a high risk to the health, safety, or fundamental rights of individuals. They are subject to strict compliance requirements. Below are some of the use cases that fall under this category.

Biometrics: AI for identifying people remotely (excluding verification that merely confirms identity), AI for categorising individuals based on sensitive attributes, and AI for detecting emotions.

Critical Infrastructure: AI used as a safety component in the management of critical infrastructure (digital infrastructure, traffic, utilities).

Education and Vocational Training: AI for determining access or admission to educational institutions.

Employment and Worker Management: AI for job recruitment, advertisement targeting, and application filtering, and AI for decisions on work terms, promotions, task allocation, and performance evaluation.

Essential Services and Benefits: AI for evaluating eligibility for and managing public benefits and services, AI for assessing creditworthiness (excluding fraud detection), AI for risk assessment and pricing in life and health insurance, and AI for evaluating emergency calls and dispatching response services.

Law Enforcement: AI for assessing the risk of a person becoming a crime victim, AI for evaluating evidence in criminal investigations, AI for assessing the risk of offending or re-offending, and AI for profiling in criminal detection, investigation, or prosecution.

Migration, Asylum, and Border Control: AI for lie detection (polygraphs) in migration contexts, AI for assessing risks (security, health, migration) posed by individuals, AI for assisting with asylum, visa, or residence permit applications, and AI for detecting, recognising, or identifying people in migration contexts (excluding travel document verification).

Administration of Justice and Democratic Processes: AI for assisting judicial authorities in researching and applying the law, and AI for influencing election outcomes or voting behaviour (excluding administrative or logistical campaign tools).

3. Limited risk

Limited risk refers to the risks arising from a lack of transparency in AI usage. This category covers AI interactions such as chatbots, AI-generated or manipulated audio, image, video, or text content, deepfakes, and certain emotion recognition and biometric categorisation systems.

Limited risk AI systems are subject to transparency obligations, such as informing users that they are interacting with an AI system or that content is AI-generated.

4. Minimal risk

AI usage that poses minimal or no risk to citizens' rights or safety, such as spam filters and recommendation systems.

Minimal risk AI systems are subject to a voluntary code of conduct.
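
To tie the four tiers together, here is a minimal Python sketch that models the classification as a lookup from risk tier to the headline obligation described above. The enum and the obligation strings are illustrative summaries for this article, not terms defined by the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk-based categories for single-purpose AI systems."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Headline obligation per tier, summarised from the sections above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "Banned: the practice may not be deployed in the EU.",
    RiskTier.HIGH: "Strict compliance requirements.",
    RiskTier.LIMITED: "Transparency obligations.",
    RiskTier.MINIMAL: "Voluntary codes of conduct.",
}

# Example: look up the obligation for a limited-risk system such as a chatbot.
print(OBLIGATIONS[RiskTier.LIMITED])  # Transparency obligations.
```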

 

General-purpose AI Systems (GPAI Systems)

These systems, which are underpinned by general-purpose foundation models, are classified into two categories based on systemic risk.

1. GPAI Systems with Systemic risk

A GPAI system is classified as having systemic risk if its model demonstrates high-impact capabilities, assessed through technical evaluations, indicators, and benchmarks. A model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training, measured in floating-point operations (FLOPs), exceeds 10^25. Such systems are subject to additional compliance requirements.
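
The Act sets the 10^25 FLOP threshold but does not prescribe how to estimate training compute. The sketch below uses the common "6 x parameters x training tokens" approximation from the deep-learning scaling literature; the function names and the example figures are hypothetical.

```python
# Rough check of estimated training compute against the EU AI Act's
# 10^25 FLOP presumption threshold for systemic risk.
# The 6*N*D estimate (forward + backward pass) is a common approximation
# from the scaling literature, not a method prescribed by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate training compute as 6 * parameters * training tokens."""
    return 6 * num_parameters * num_tokens


def presumed_high_impact(num_parameters: float, num_tokens: float) -> bool:
    """True if the compute estimate exceeds the 10^25 FLOP threshold."""
    return estimate_training_flops(num_parameters, num_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")                      # ~6.30e+24
print("Presumed high-impact:", presumed_high_impact(70e9, 15e12))   # False
```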

2. GPAI Systems without Systemic risk

GPAI systems that do not pose systemic risk, based on their impact capabilities, are subject to transparency obligations.

References:

EU AI Act

Charter of Fundamental Rights of the European Union

Relevant articles:

EU AI Act: Part 1 - A Brief Overview

EU AI Act: Part 2 - How are AI systems defined?

EU AI Act: Part 4 - Tune in for part 4

Disclaimer: This article is intended solely for educational purposes and should not be taken as legal advice.

