EU AI Act

The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI law. It classifies AI systems by risk level and imposes obligations ranging from outright bans to transparency requirements. For any company deploying AI to EU users – including MMS – this is the regulatory baseline.


What It Is

The EU AI Act is a binding regulation (not a directive – it applies directly, no national transposition needed) that establishes harmonized rules for AI systems across the European Union. It has extraterritorial reach: it applies to any organization that places an AI system on the EU market or whose AI system’s output is used in the EU, regardless of where the organization is based.

Key characteristics:

  • Risk-based approach: obligations scale with the risk level of the AI system
  • Horizontal regulation: applies across all sectors (unlike sector-specific rules in healthcare, finance, etc.)
  • Technology-neutral: defines AI broadly to cover current and future techniques
  • Extraterritorial: applies to non-EU companies if their AI affects EU users

Implementation Timeline

| Date | What Happens | Status |
| --- | --- | --- |
| Aug 1, 2024 | AI Act enters into force | Done |
| Feb 2, 2025 | Prohibited AI practices banned + AI literacy obligations begin | Active |
| Aug 2, 2025 | GPAI model obligations + governance infrastructure (notified bodies, conformity assessment system) | Active |
| Aug 2, 2026 | High-risk AI system obligations (Annex III) + transparency rules (Article 50) + full enforcement | Deadline |
| Aug 2, 2027 | Extended transition for high-risk AI in regulated products (medical devices, automotive, aviation) | Upcoming |

The critical deadline for most enterprises is August 2, 2026. By this date:

  • All AI systems must be classified by risk tier
  • High-risk systems must have conformity assessments, technical documentation, CE marking, and EU database registration completed
  • Transparency obligations (chatbot disclosure, synthetic content labeling) must be implemented
  • Deployers must maintain AI system inventories

Risk-Based Approach

The Act defines four risk tiers. See AI Risk Classification for the detailed classification guide with decision flowchart and MMS-specific examples.

```text
┌─────────────────────────────────────────┐
│  UNACCEPTABLE RISK (Banned)             │  Social scoring, manipulative AI,
│  Article 5                              │  real-time biometric surveillance
├─────────────────────────────────────────┤
│  HIGH RISK (Heavy obligations)          │  HR/employment, credit scoring, education,
│  Annex III                              │  law enforcement, critical infrastructure
├─────────────────────────────────────────┤
│  LIMITED RISK (Transparency)            │  Chatbots, deepfakes, emotion
│  Article 50                             │  detection, AI-generated content
├─────────────────────────────────────────┤
│  MINIMAL RISK (No obligations)          │  Spam filters, video game AI,
│  Default                                │  inventory management
└─────────────────────────────────────────┘
```

How to Identify Limited Risk and High Risk Systems

Limited Risk Systems (Article 50)

A system is limited risk if it falls into any of these categories:

| Trigger | Example |
| --- | --- |
| AI system interacts directly with humans | Chatbots, virtual assistants, AI-powered customer service |
| System generates synthetic audio, image, video, or text | Content generators, deepfake tools, AI writing assistants |
| System performs emotion recognition | Sentiment analysis on calls, facial emotion detection |
| System performs biometric categorization | Categorizing people by age, gender, ethnicity from biometric data |

Key test: if a person could reasonably believe they are interacting with a human, or that content is human-generated, the system is at least limited risk.

High Risk Systems (Annex III – the 8 domains)

A system is high risk if it’s used in any of these 8 domains AND it poses a significant risk to health, safety, or fundamental rights:

| # | Domain | Examples |
| --- | --- | --- |
| 1 | Biometrics | Facial recognition, fingerprint matching (beyond simple authentication) |
| 2 | Critical infrastructure | AI managing electricity grids, water supply, traffic systems |
| 3 | Education & vocational training | Automated grading, student admission scoring, learning path determination |
| 4 | Employment & worker management | CV screening, interview scoring, performance monitoring, promotion decisions |
| 5 | Essential services (public & private) | Credit scoring, insurance pricing, social benefit eligibility |
| 6 | Law enforcement | Predictive policing, evidence analysis, lie detection |
| 7 | Migration, asylum & border control | Visa application assessment, risk profiling at borders |
| 8 | Justice & democratic processes | AI assisting judges, influencing election outcomes |

Two-condition test (Article 6(2)):

  1. The AI system is used in one of the 8 domains listed in Annex III, AND
  2. It poses a significant risk of harm to health, safety, or fundamental rights

Exception (Article 6(3)): A system may be exempted from high-risk classification if it meets any of these conditions:

  • It performs a narrow procedural task (not influencing a consequential decision)
  • It improves the result of a previously completed human activity
  • It is preparatory to a human assessment but does not replace or influence it
  • It detects decision-making patterns without replacing or influencing human judgment

One carve-out: a system that performs profiling of natural persons is always classified as high risk, and these exceptions do not apply. The full classification logic is sketched below.
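To make the test concrete, here is a minimal Python sketch of the Article 6 classification logic. The domain labels and boolean flags are illustrative simplifications of Annex III and Article 6(3), not a substitute for legal analysis.

```python
from dataclasses import dataclass

# Illustrative labels for the eight Annex III domains (simplified).
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class AISystem:
    name: str
    domain: str                      # functional domain the system operates in
    significant_risk: bool           # risk to health, safety, or fundamental rights
    performs_profiling: bool         # profiling of natural persons: always high risk
    narrow_procedural_task: bool     # Art. 6(3)(a)
    improves_prior_human_work: bool  # Art. 6(3)(b)
    detects_patterns_only: bool      # Art. 6(3)(c)
    preparatory_only: bool           # Art. 6(3)(d)

def is_high_risk(s: AISystem) -> bool:
    """Two-condition test (Art. 6(2)) with the Art. 6(3) exceptions."""
    if s.domain not in ANNEX_III_DOMAINS:  # condition 1: Annex III domain
        return False
    if s.performs_profiling:               # profiling carve-out: exceptions unavailable
        return True
    if not s.significant_risk:             # condition 2: significant risk of harm
        return False
    exceptions = (s.narrow_procedural_task, s.improves_prior_human_work,
                  s.detects_patterns_only, s.preparatory_only)
    return not any(exceptions)             # any single exception exempts the system

# Example: a CV screener that directly influences hiring decisions.
cv_screener = AISystem("cv-screener", "employment", significant_risk=True,
                       performs_profiling=False, narrow_procedural_task=False,
                       improves_prior_human_work=False, detects_patterns_only=False,
                       preparatory_only=False)
assert is_high_risk(cv_screener)
```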

What Builders and Deployers Must Do

For Limited Risk Systems – Enterprise Obligations

| # | Obligation | What to do concretely |
| --- | --- | --- |
| 1 | Disclose AI interaction | Before or at the start of interaction, clearly inform users they are communicating with an AI (e.g., “You are chatting with an AI assistant”) |
| 2 | Label AI-generated content | Mark synthetic text, images, audio, and video as AI-generated using both human-readable labels and machine-readable metadata (C2PA / watermarking) |
| 3 | Disclose emotion recognition | If the system detects emotions or infers sentiment, inform the affected person explicitly |
| 4 | Disclose deepfakes | If content depicts real people/events synthetically, it must be labeled as artificially generated or manipulated |
| 5 | Keep records | Maintain documentation of what disclosures are made, where, and how |

Enterprise action items:

  • Audit all customer-facing AI touchpoints and add disclosure notices
  • Implement content labeling (watermarking, metadata tags) for AI-generated outputs
  • Update terms of service and privacy policies to reference AI use
  • Train customer-facing teams to explain AI involvement when asked
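To illustrate the first two obligations, here is a minimal sketch of chatbot disclosure and machine-readable content labeling. The function names and the metadata schema are hypothetical; a production system would emit a C2PA manifest or apply watermarking rather than a bare JSON sidecar.

```python
import json
from datetime import datetime, timezone

AI_DISCLOSURE = "You are chatting with an AI assistant."  # Art. 50 notice

def wrap_chat_response(reply_text: str, first_turn: bool) -> str:
    """Inform the user they are talking to an AI at the start of the interaction."""
    return f"{AI_DISCLOSURE}\n\n{reply_text}" if first_turn else reply_text

def label_generated_content(content: bytes, model_id: str) -> dict:
    """Attach machine-readable 'AI-generated' metadata to synthetic content."""
    return {
        "ai_generated": True,  # machine-readable marking
        "generator": model_id,
        "content_length": len(content),
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

print(wrap_chat_response("Your order ships tomorrow.", first_turn=True))
print(json.dumps(label_generated_content(b"<image bytes>", "example-model-v1")))
```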

For High Risk Systems – Enterprise Obligations

This is the heavy compliance tier. Both providers (builders) and deployers (users) have obligations:

If You Are Building (Provider):

| # | Obligation | What it means in practice |
| --- | --- | --- |
| 1 | Risk Management System (Art. 9) | Establish a continuous risk identification, analysis, and mitigation process for the entire AI lifecycle. Document residual risks. |
| 2 | Data Governance (Art. 10) | Training, validation, and test datasets must be relevant, representative, free of errors, and complete. Document data sources, collection methods, and preprocessing steps. |
| 3 | Technical Documentation (Art. 11) | Create detailed documentation covering system purpose, design, development methodology, performance metrics, known limitations, and hardware/software requirements. |
| 4 | Record-Keeping / Logging (Art. 12) | Build automatic logging into the system – logs must capture input data, outputs, and decisions, and be sufficient for post-hoc auditing. Keep logs under your control for at least six months (Art. 19). |
| 5 | Transparency to Deployers (Art. 13) | Provide clear instructions for use, intended purpose, performance levels, known risks, and human oversight requirements. |
| 6 | Human Oversight by Design (Art. 14) | Design the system so a human can effectively oversee it, understand its outputs, decide not to use it, intervene, or stop it. |
| 7 | Accuracy, Robustness & Cybersecurity (Art. 15) | Achieve and document appropriate levels of accuracy, be resilient to errors and adversarial attacks, and implement cybersecurity measures. |
| 8 | Conformity Assessment (Art. 43) | Before placing on the market, conduct an internal conformity assessment (or third-party assessment for biometrics). Document compliance. |
| 9 | EU Declaration & CE Marking (Art. 47-48) | Draw up a written EU declaration of conformity and affix the CE marking to compliant systems. |
| 10 | EU Database Registration (Art. 49) | Register the high-risk AI system in the EU public database (Art. 71) before placing it on the market. |
| 11 | Post-Market Monitoring (Art. 72) | Establish and document a post-market monitoring plan. Collect and analyze performance data throughout the system’s lifetime. |
| 12 | Serious Incident Reporting (Art. 73) | Report serious incidents (death, serious harm, fundamental rights breach, property/environment damage) to market surveillance authorities within 15 days. |
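As a sketch of the logging obligation, assuming a generic `model` callable: every call is appended to a JSON-lines audit log with input, output, and timestamp. The record fields and file format are assumptions; the Act prescribes what logs must enable, not how to store them.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Callable

AUDIT_LOG = Path("audit_log.jsonl")  # append-only store; retain per Art. 19 / Art. 26(6)

def logged_prediction(model: Callable[[dict], Any], features: dict) -> Any:
    """Run a prediction and persist an audit record sufficient for post-hoc review."""
    output = model(features)
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": features,
        "output": output,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, default=str) + "\n")
    return output

# Example with a trivial stand-in model.
score = logged_prediction(lambda x: {"credit_score": 0.72}, {"income": 52000})
```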

If You Are Deploying / Rolling Out (Deployer):

| # | Obligation | What it means in practice |
| --- | --- | --- |
| 1 | Follow provider instructions | Use the AI system per the provider’s instructions for use. Don’t repurpose it. |
| 2 | Human oversight | Assign competent, trained individuals to oversee the AI system’s operation. They must have authority to override or stop it. |
| 3 | Input data quality | Ensure input data is relevant and sufficiently representative for the intended purpose. |
| 4 | Monitoring | Monitor the system’s operation for risks. If issues arise, suspend use and inform the provider. |
| 5 | Logging retention | Keep system-generated logs under your control for at least 6 months, unless a longer period is required by other applicable law (Art. 26(6)). |
| 6 | Fundamental Rights Impact Assessment (FRIA) | Before deploying, conduct an impact assessment on fundamental rights (like a DPIA but broader). Required for public bodies and certain private deployers (Art. 27). |
| 7 | Inform affected persons | Notify individuals subject to high-risk AI decisions (e.g., job candidates screened by AI, customers scored by AI). |
| 8 | Workplace consultation | If deploying in the workplace, inform worker representatives/trade unions. |

Enterprise action items for high-risk rollout:

  • Conduct risk classification before procurement or development
  • Require vendors to provide EU AI Act technical documentation and conformity declarations
  • Appoint a human oversight owner for each high-risk system
  • Complete a Fundamental Rights Impact Assessment (FRIA)
  • Set up logging infrastructure and define retention policies
  • Establish an incident reporting process (15-day SLA to authorities)
  • Document everything – risk assessments, data governance decisions, monitoring results
  • Register the system in the EU database if you’re the provider
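Two of these items reduce to date arithmetic, sketched below: the 15-day incident-reporting window and the six-month deployer log-retention floor. The 183-day approximation of six months is an assumption of the sketch; confirm the counting rules with counsel.

```python
from datetime import date, timedelta

INCIDENT_REPORT_WINDOW = timedelta(days=15)  # Art. 73 serious-incident deadline
MIN_LOG_RETENTION = timedelta(days=183)      # ~6 months, deployer retention floor

def incident_report_due(awareness_date: date) -> date:
    """Latest date to notify the market surveillance authority."""
    return awareness_date + INCIDENT_REPORT_WINDOW

def log_past_retention_floor(log_date: date, today: date) -> bool:
    """True once a deployer-held log record is older than the minimum floor."""
    return today - log_date > MIN_LOG_RETENTION

print(incident_report_due(date(2026, 9, 1)))  # 2026-09-16
```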

Key Obligations by Role

The Act distinguishes between roles in the AI value chain. Most enterprises are “deployers” (they use AI systems built by others), but some are also “providers” (they build AI systems).

| Role | Definition | Key Obligations |
| --- | --- | --- |
| Provider | Develops or places AI system on market | Risk management, data governance, technical documentation, conformity assessment, post-market monitoring, incident reporting |
| Deployer | Uses AI system under their authority | Human oversight, input data quality, monitoring, transparency to affected persons, FRIA for high-risk, maintain logs |
| Importer | Places non-EU AI system on EU market | Verify conformity assessment, ensure documentation, CE marking present |
| Distributor | Makes AI system available (not provider/importer) | Verify CE marking, documentation, storage/transport conditions |

MMS context: When MMS uses Claude, GPT-4, or Gemini via API to build customer-facing agents, MMS is both a deployer (of the foundation model) and a provider (of the AI system/agent built on top). This dual role means MMS has obligations from both columns.


GPAI Model Obligations

General-Purpose AI (GPAI) models like Claude, GPT-4, and Gemini have specific provider obligations (applicable since Aug 2, 2025):

All GPAI model providers must:

  • Maintain and make available technical documentation
  • Provide information and documentation to downstream providers integrating the model
  • Establish a copyright compliance policy
  • Publish a sufficiently detailed summary of training data

GPAI models with systemic risk (>10^25 FLOPs training compute) additionally must:

  • Perform model evaluations including adversarial testing
  • Assess and mitigate systemic risks
  • Report serious incidents to the AI Office
  • Ensure adequate cybersecurity protections
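The 10^25 FLOPs presumption can be roughly sanity-checked with the common “compute ≈ 6 × parameters × training tokens” approximation for dense transformer training. That heuristic is an assumption of this sketch, not part of the Act, which counts cumulative training compute.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold for systemic-risk GPAI models

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the 6*N*D heuristic for dense transformers."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= SYSTEMIC_RISK_FLOPS

# Example: 200B parameters on 10T tokens ~ 1.2e25 FLOPs, above the threshold.
print(presumed_systemic_risk(200e9, 10e12))  # True
```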

This matters for MMS because foundation model providers (Anthropic, OpenAI, Google) bear these obligations – but MMS must verify that the GPAI models it uses comply, and must fulfill its own obligations as provider of the downstream AI system.


Penalties

| Violation | Maximum Fine |
| --- | --- |
| Prohibited AI practices (Article 5) | EUR 35 million or 7% of global annual turnover, whichever is higher |
| High-risk obligations, GPAI obligations, notified body violations | EUR 15 million or 3% of global annual turnover, whichever is higher |
| Supplying incorrect, incomplete, or misleading information to authorities | EUR 7.5 million or 1% of global annual turnover, whichever is higher |

For SMEs and startups, each fine is capped at whichever of the two amounts is lower. These penalties are comparable to GDPR fines and designed to be “effective, proportionate, and dissuasive.”
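Because the “whichever is higher” rule (and its SME inversion) is easy to misread, here is a one-function sketch of the fine calculation; the function name is illustrative.

```python
def max_fine(fixed_eur: float, pct: float, turnover_eur: float,
             is_sme: bool = False) -> float:
    """Maximum fine: higher of the two amounts, but lower of the two for SMEs."""
    candidates = (fixed_eur, pct * turnover_eur)
    return min(candidates) if is_sme else max(candidates)

# Prohibited-practice tier, EUR 400m global turnover:
print(max_fine(35e6, 0.07, 400e6))               # 35,000,000 (fixed amount is higher)
print(max_fine(35e6, 0.07, 400e6, is_sme=True))  # 28,000,000 (7% is lower)
```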


What This Means for MMS

MMS operates consumer-facing AI systems in Germany (EU member state). Here is the immediate relevance:

| MMS AI System | Likely Risk Classification | Key Obligation |
| --- | --- | --- |
| Customer support chatbot / agent | Limited risk (Article 50) | Must disclose AI interaction, label AI-generated content |
| Product recommendation engine | Minimal risk | No specific obligations (unless it manipulates purchasing decisions) |
| Internal HR screening / hiring tool | High risk (Annex III, employment) | Full compliance: risk management, documentation, conformity assessment, human oversight |
| Service desk agent with tool access | Limited risk + evaluate tool actions | Disclosure + assess whether tool actions touch high-risk domains |
| AI-powered search / product Q&A | Limited risk | Disclosure that responses are AI-generated |

Step 1 is building an AI inventory. You cannot classify what you haven’t cataloged. See Enterprise AI Governance Playbook for the inventory template and governance process.
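As a sketch of what a first-pass inventory entry might capture (the field set is an assumption, not the playbook’s actual template):

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    system_name: str
    owner: str               # accountable team or person
    vendor_or_internal: str  # e.g. "vendor API" or "in-house"
    purpose: str             # what the system actually does
    user_facing: bool        # triggers an Article 50 disclosure review
    domains: list[str] = field(default_factory=list)  # input to Annex III screening
    risk_tier: str = "unclassified"                   # filled in during classification

inventory = [
    InventoryEntry("support-chatbot", "CX team", "vendor API",
                   "customer support answers", user_facing=True),
    InventoryEntry("cv-screener", "HR", "in-house",
                   "shortlist job applicants", user_facing=False,
                   domains=["employment"]),
]
```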

Step 2 is implementing transparency. For MMS’s consumer-facing chatbots, this is the most immediate obligation. See Transparency and Disclosure for the Article 50 deep-dive and implementation checklist.

