
AI Risk Classification

The EU AI Act uses a four-tier risk classification to determine what obligations apply. Getting this classification right is the foundation of AI governance – it determines everything from documentation requirements to whether you need a conformity assessment before deployment.


The Four Risk Tiers

                    ┌───────────────┐
                    │  UNACCEPTABLE │  <- Banned outright
                    │   (Article 5) │
                    └───────┬───────┘
                            │
                    ┌───────┴───────┐
                    │   HIGH RISK   │  <- Heavy obligations
                    │  (Annex III)  │     (conformity assessment,
                    │               │      documentation, monitoring)
                    └───────┬───────┘
                            │
                    ┌───────┴───────┐
                    │ LIMITED RISK  │  <- Transparency obligations
                    │ (Article 50)  │     (disclosure, labeling)
                    └───────┬───────┘
                            │
                    ┌───────┴───────┐
                    │ MINIMAL RISK  │  <- No specific obligations
                    │  (default)    │     (voluntary codes of conduct)
                    └───────────────┘

Tier 1: Unacceptable Risk (Banned)

These AI practices are prohibited entirely; the bans have applied since February 2, 2025.

| Prohibited Practice | What It Means |
|---|---|
| Social scoring by public authorities | Classifying people based on social behavior or predicted personality traits, leading to detrimental treatment |
| Real-time remote biometric identification in public spaces (for law enforcement) | Live facial recognition in public areas, with narrow exceptions (missing children, imminent terrorist threat) |
| Emotion recognition in workplace and education | AI systems inferring emotions of employees or students (except for medical/safety purposes) |
| Manipulative AI exploiting vulnerabilities | AI designed to distort the behavior of persons due to age, disability, or social/economic situation |
| Untargeted scraping for facial recognition databases | Building face databases from internet/CCTV footage without consent |
| Predictive policing based solely on profiling | AI predicting criminal behavior based on personal characteristics alone |

Tier 2: High Risk (Annex III)

AI systems in these domains face the heaviest obligations. Enforceable from August 2, 2026 (August 2, 2027 for systems embedded in regulated products like medical devices).

| Domain | Examples |
|---|---|
| Biometric identification | Remote biometric identification (non-real-time), biometric categorization, emotion recognition (permitted contexts) |
| Critical infrastructure | AI managing safety of digital infrastructure, road traffic, water/gas/electricity supply |
| Education & vocational training | AI determining access to education, evaluating learning outcomes, monitoring students during tests |
| Employment & worker management | AI used in recruitment, screening, hiring decisions, task allocation, performance monitoring, promotion/termination decisions |
| Essential services access | AI evaluating creditworthiness, setting insurance premiums, emergency service dispatch |
| Law enforcement | Polygraph/emotion detection, evidence reliability assessment, crime prediction, profiling |
| Migration & border control | Visa processing, asylum application assessment, security risk assessment |
| Justice & democracy | AI assisting judges, influencing election outcomes |

High-Risk Obligations

Providers of high-risk AI systems must implement the following (deployers share a subset of these duties, such as human oversight and usage monitoring):

| Requirement | What It Involves |
|---|---|
| Risk management system | Continuous lifecycle risk identification, assessment, and mitigation |
| Data governance | Training data quality, relevance, representativeness, bias assessment |
| Technical documentation (Annex IV) | Detailed system description, development methodology, performance metrics, known limitations |
| Record-keeping / logging | Automatic logging of system operation for traceability |
| Transparency | Clear instructions for deployers, information about capabilities and limitations |
| Human oversight | Measures enabling human monitoring, ability to override/stop the system |
| Accuracy, robustness, cybersecurity | Performance testing, adversarial robustness, security measures |
| Conformity assessment | Pre-market assessment (self-assessment or third-party depending on domain) |
| CE marking | Affix CE mark indicating conformity |
| EU database registration | Register in the EU database for high-risk AI systems |
| Post-market monitoring | Ongoing monitoring after deployment, incident reporting |
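The record-keeping obligation, for instance, implies automatic, structured event logs. A minimal sketch in Python follows; the field names (`system_id`, `input_ref`, `human_operator`) are illustrative choices, not a schema mandated by the Act:

```python
import json
import logging
import sys
from datetime import datetime, timezone

# Illustrative logger for a high-risk system; in production this would
# write to tamper-evident storage, not stdout.
logger = logging.getLogger("hr_screening_ai")
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.INFO)

def log_decision(system_id: str, input_ref: str, output: str, operator: str) -> str:
    """Record one system decision as a machine-readable JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,  # a reference, not raw personal data
        "output": output,
        "human_operator": operator,  # supports the human-oversight duty
    }
    line = json.dumps(record)
    logger.info(line)
    return line
```

Logging references rather than raw applicant data keeps the audit trail useful for traceability without duplicating personal data into the logs.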

Tier 3: Limited Risk (Article 50)

AI systems that interact directly with people or generate content. Transparency obligations from August 2, 2026. See Transparency and Disclosure for the full deep-dive.

| System Type | Obligation |
|---|---|
| Chatbots / conversational AI | Inform users they are interacting with AI |
| AI-generated text | Machine-readable marking as AI-generated |
| AI-generated images / video / audio | Machine-readable marking + disclosure when depicting real people/events |
| Deepfakes | Clear disclosure that content is artificially generated or manipulated |
| Emotion detection / biometric categorization | Inform the person being subjected to the system |
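Article 50 requires the marking to be machine-readable but does not prescribe a format. A minimal sketch of one approach, attaching provenance metadata to generated text (the `MarkedContent` schema and field names are hypothetical, not taken from the Act):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MarkedContent:
    """Hypothetical provenance wrapper for AI-generated content."""
    body: str          # the generated text itself
    generator: str     # which system produced it
    ai_generated: bool = True
    marked_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def mark_ai_generated(text: str, generator: str) -> MarkedContent:
    # Keep the disclosure metadata attached to the content so downstream
    # systems can detect and label it instead of serving bare text.
    return MarkedContent(body=text, generator=generator)

answer = mark_ai_generated(
    "This monitor supports a 144 Hz refresh rate.", "product-qa-model"
)
```

In practice the marking would travel with the content (e.g. embedded metadata or a provenance standard) rather than as an in-process object, but the principle is the same: disclosure is data, not just UI text.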

Tier 4: Minimal Risk (Default)

Everything else. No mandatory obligations – only voluntary codes of conduct encouraged.

Examples: spam filters, AI in video games, inventory optimization, internal search, autocomplete.


How to Classify Your AI System

Start: Describe the AI system and its use case
  │
  ├─► Is it on the Article 5 prohibited list?
  │     YES → BANNED. Do not deploy.
  │     NO  ↓
  │
  ├─► Is it a safety component of a regulated product (Annex I)?
  │   OR is it listed in Annex III high-risk domains?
  │     YES → HIGH RISK. Full compliance required.
  │     NO  ↓
  │
  ├─► Does it directly interact with natural persons?
  │   OR does it generate synthetic text/image/audio/video?
  │   OR does it perform emotion detection or biometric categorization?
  │     YES → LIMITED RISK. Transparency obligations (Article 50).
  │     NO  ↓
  │
  └─► MINIMAL RISK. No mandatory obligations.

Important nuance: A single AI platform can have components in different risk tiers. A customer service platform might have a chatbot (limited risk) that also assists with employment decisions (high risk). Each use case must be classified independently.
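The decision flow above can be sketched as a small classifier. Note the hedge: answering each boolean question correctly is the hard legal part; the code only encodes the strict top-down ordering of the checks (all names below are illustrative):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = 1
    HIGH = 2
    LIMITED = 3
    MINIMAL = 4

def classify(*, prohibited_practice: bool, annex_iii_domain: bool,
             safety_component: bool, interacts_with_persons: bool,
             generates_synthetic_content: bool,
             emotion_or_biometric: bool) -> RiskTier:
    # Checks mirror the flowchart order: a prohibited use is flagged
    # before any other consideration, high risk before limited, etc.
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if annex_iii_domain or safety_component:
        return RiskTier.HIGH
    if (interacts_with_persons or generates_synthetic_content
            or emotion_or_biometric):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Each component of one platform is classified independently:
chatbot = classify(prohibited_practice=False, annex_iii_domain=False,
                   safety_component=False, interacts_with_persons=True,
                   generates_synthetic_content=True, emotion_or_biometric=False)
hr_module = classify(prohibited_practice=False, annex_iii_domain=True,
                     safety_component=False, interacts_with_persons=False,
                     generates_synthetic_content=False, emotion_or_biometric=False)
# chatbot -> RiskTier.LIMITED, hr_module -> RiskTier.HIGH
```

Running the classifier per component, as in the last two calls, captures the nuance above: the same platform yields limited risk for its chatbot and high risk for its employment-decision module.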


Classifying MMS AI Systems

| AI System | Use Case | Risk Tier | Rationale | Key Obligation |
|---|---|---|---|---|
| Customer support chatbot | Answers product/order questions | Limited | Directly interacts with consumers | AI disclosure at first interaction |
| Product recommendation engine | Suggests products based on browsing | Minimal | No direct interaction, no high-risk domain | None mandatory |
| Service desk AI agent | Internal IT support with tool access | Limited | Interacts with employees | AI disclosure |
| AI-powered product Q&A | Generates product descriptions/answers | Limited | Generates synthetic text | AI-generated content marking |
| HR screening tool (if built) | Filters job applications | High | Employment domain (Annex III) | Full compliance: documentation, conformity assessment, human oversight |
| Credit/financing assessment (if built) | Evaluates customer creditworthiness | High | Essential services access (Annex III) | Full compliance |
| Internal analytics / reporting | Summarizes business data | Minimal | Internal tool, no high-risk domain | None mandatory |
| Store inventory optimization | Predicts stock needs | Minimal | No interaction, no high-risk domain | None mandatory |

Key takeaway for MMS

Most consumer-facing AI at MMS will be limited risk (transparency obligations). This is manageable – it mainly requires disclosure and content labeling. The risk escalates if AI starts making decisions that materially affect people (employment, credit, safety). Any move into those domains triggers the full high-risk compliance stack.


This post is licensed under CC BY 4.0 by the author.