

Transparency and Disclosure

Article 50 of the EU AI Act requires that people know when they’re talking to AI and when content is AI-generated. For MMS’s consumer-facing chatbots and AI-powered product content, this is the most immediately actionable compliance requirement – and it applies from August 2, 2026.


Article 50 Overview

Article 50 establishes transparency obligations for four categories of AI systems. These obligations apply regardless of risk classification – they bind every system in these categories, not just high-risk ones.

| Category | Obligation | Who Bears It |
|---|---|---|
| AI systems interacting with people (chatbots) | Inform the person they are interacting with AI | Provider (design) + Deployer (implement) |
| Emotion detection / biometric categorization | Inform the person being subjected to the system | Deployer |
| AI-generated synthetic content (text, image, audio, video) | Mark outputs in machine-readable format as AI-generated | Provider |
| Deepfakes (depicting real people/events) | Disclose that content is artificially generated or manipulated | Deployer |

Chatbot Disclosure Requirements

This is the most directly relevant obligation for consumer-facing AI.

What the law says

“Providers shall ensure that AI systems intended to directly interact with natural persons are designed and developed in such a way that the natural person concerned is informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.”

Practical requirements

| Requirement | Detail |
|---|---|
| When | “At the latest at the time of the first interaction or exposure” |
| How | “In a clear and distinguishable manner” |
| Exception | Only if it is “obvious to a reasonably well-informed, observant and circumspect natural person” – this exception rarely applies to consumer chatbots |
| Accessibility | Must account for vulnerable groups (elderly, children, people with disabilities) |

Implementation patterns

Pattern 1: Pre-chat banner

```
┌──────────────────────────────────────────────┐
│  You are chatting with an AI assistant.      │
│  You can request a human agent at any        │
│  time by typing "speak to a person."         │
└──────────────────────────────────────────────┘
```

Pattern 2: First-message disclosure

The AI agent’s first response includes: “Hi! I’m an AI assistant here to help with your question. If you’d like to speak with a human, just let me know.”

Pattern 3: Persistent indicator

A visible label or icon (e.g., “AI” badge) displayed throughout the conversation, not just at the start.

Recommended approach: Combine patterns 1 and 3 – pre-chat banner for explicit disclosure plus persistent indicator for ongoing awareness. Pattern 2 alone may not satisfy “clear and distinguishable” if the user could miss it in a long response.
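The combined approach (patterns 1 and 3) can be sketched as a thin wrapper around whatever chat backend is in use. This is a minimal illustration, not a real framework API: the `ChatSession` class, the message shape, and the `reply_fn` hook are all hypothetical names chosen for this example.

```python
DISCLOSURE_BANNER = (
    'You are chatting with an AI assistant. '
    'You can request a human agent at any time by typing "speak to a person."'
)
AI_LABEL = "AI"  # persistent indicator rendered next to every AI message


class ChatSession:
    """Wraps an AI chat backend so every session follows the combined
    disclosure pattern: banner up front, label on every AI message."""

    def __init__(self, reply_fn):
        self.reply_fn = reply_fn  # backend that produces AI answers
        self.messages = []
        # Pattern 1: pre-chat banner, recorded before any interaction.
        self.messages.append({"role": "system-banner", "text": DISCLOSURE_BANNER})

    def send(self, user_text):
        self.messages.append({"role": "user", "text": user_text})
        if "speak to a person" in user_text.lower():
            # Opt-out path: hand the conversation to a human agent.
            handoff = {"role": "handoff", "text": "Connecting you to a human agent..."}
            self.messages.append(handoff)
            return handoff
        reply = self.reply_fn(user_text)
        # Pattern 3: every AI message carries a persistent "AI" label.
        msg = {"role": "assistant", "label": AI_LABEL, "text": reply}
        self.messages.append(msg)
        return msg


session = ChatSession(reply_fn=lambda q: f"Here is some help with: {q}")
first = session.send("Where is my order?")
print(session.messages[0]["text"])  # banner precedes the first interaction
print(first["label"], "-", first["text"])
```

The point of the structure is that the disclosure cannot be forgotten: it is created in the constructor, before the first user message can exist, which maps directly onto the “at the latest at the time of the first interaction” requirement.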


Synthetic Content Labeling

AI systems that generate text, images, audio, or video must mark outputs as AI-generated.

Machine-readable marking

“Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.”

Requirements for the marking:

  • Effective: actually detectable by downstream systems
  • Interoperable: works across platforms and tools
  • Robust: survives reasonable transformations (compression, cropping, etc.)
  • Reliable: low false positive/negative rates

What this means in practice

| Content Type | Marking Approach |
|---|---|
| AI-generated text | Metadata tags, watermarking (e.g., C2PA content credentials) |
| AI-generated images | Embedded metadata (EXIF/IPTC), invisible watermarks, C2PA provenance |
| AI-generated audio/video | Watermarking, C2PA provenance chain |
| Deepfakes | All of the above + visible disclosure to the viewer |
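Production systems should adopt a standardized format such as C2PA content credentials, but the core idea – a machine-readable record binding an “AI-generated” assertion to specific content – can be sketched with a simple JSON record. The field names below are illustrative, not any standard schema.

```python
import hashlib
import json


def make_provenance_record(content: bytes, generator: str) -> str:
    """Produce a machine-readable marking for synthetic content: a JSON
    record binding a SHA-256 digest of the bytes to an AI-generated
    assertion. (Illustrative only; not a standardized schema.)"""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "generator": generator,
    }
    return json.dumps(record, sort_keys=True)


def is_marked_ai_generated(content: bytes, record_json: str) -> bool:
    """Downstream check: does the record match these bytes and assert AI origin?"""
    record = json.loads(record_json)
    return (
        record.get("ai_generated") is True
        and record.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )


text = "This product description was drafted by an AI system.".encode()
record = make_provenance_record(text, generator="example-llm-v1")
print(is_marked_ai_generated(text, record))            # True
print(is_marked_ai_generated(b"edited text", record))  # False: digest no longer matches
```

Note that a digest-bound sidecar like this fails the “robust” criterion – any compression, cropping, or editing breaks the match – which is exactly why the Act points toward watermarking and provenance standards rather than simple hashes.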

Code of Practice on AI-Generated Content

The EU AI Office published the first draft Code of Practice on Transparency of AI-Generated Content in December 2025. This code provides practical guidance on how to comply with Article 50’s content marking requirements and is expected to be finalized in May–June 2026.

Key elements:

  • Standardized metadata schemas for AI-generated content
  • Technical standards for watermarking robustness
  • Interoperability requirements between platforms
  • Guidance on when human-edited AI content still requires marking

Emotion Detection Disclosure

If an AI system detects emotions, sentiment, or performs biometric categorization of individuals, it must inform them.

“Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system.”

This applies even in non-high-risk contexts. If MMS were to use sentiment analysis on customer chat messages to route to specialized agents or adjust responses, the customer must be informed.
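That routing scenario can be sketched by making the disclosure a hard precondition of the analysis. Everything here is hypothetical – the function names, the toy keyword heuristic standing in for a real sentiment classifier, and the queue names – but the structural point is real: the code path that analyzes emotion should be unreachable until the user has been informed.

```python
class DisclosureRequired(Exception):
    """Raised when sentiment analysis is attempted without informing the user."""


def route_with_sentiment(message: str, user_informed: bool) -> str:
    """Route a chat message using sentiment, but only after the user has
    been informed that an emotion/sentiment system is operating."""
    if not user_informed:
        raise DisclosureRequired("Inform the user before running sentiment analysis.")
    # Toy keyword heuristic standing in for a real sentiment classifier.
    negative_words = {"angry", "frustrated", "terrible", "broken"}
    is_negative = any(word in message.lower() for word in negative_words)
    return "escalation-queue" if is_negative else "standard-queue"


print(route_with_sentiment("My order arrived broken!", user_informed=True))  # escalation-queue
print(route_with_sentiment("Thanks, all good.", user_informed=True))         # standard-queue
```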


Implementation Checklist for MMS

Immediate actions (before Aug 2, 2026)

  • Audit all customer-facing AI touchpoints – identify every place a user interacts with AI (chatbot, product Q&A, search, email responses)
  • Add AI disclosure to all chatbot entry points – pre-chat banner + persistent indicator
  • Add “request human agent” option – users must be able to opt out of AI interaction
  • Implement content marking for AI-generated text shown to customers (product descriptions, answers, summaries)
  • Document compliance – maintain a transparency log recording what disclosure is shown where, when it was implemented, and how it was tested
  • Train customer-facing teams – support staff must understand what AI systems are in use and how to handle questions about AI interaction
  • Update Terms of Service / Privacy Policy – include information about AI system use, types of AI interaction, and user rights
  • Test with vulnerable user groups – ensure disclosure is accessible and understandable
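The “document compliance” item above can start as an append-only structured log of disclosure events. A minimal sketch, with illustrative field names:

```python
import json
from datetime import datetime, timezone


def log_disclosure_event(log: list, touchpoint: str, disclosure: str, tested: bool) -> dict:
    """Append one transparency-log entry recording what disclosure is shown
    where, when the entry was recorded, and whether it has been tested."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "touchpoint": touchpoint,
        "disclosure": disclosure,
        "tested": tested,
    }
    log.append(entry)
    return entry


transparency_log = []
log_disclosure_event(transparency_log, "web-chatbot", "pre-chat banner + AI badge", tested=True)
log_disclosure_event(transparency_log, "product-qa", "inline 'AI-generated answer' label", tested=False)
print(json.dumps(transparency_log, indent=2))
```

Entries with `tested: False` double as a to-do list for the “test with vulnerable user groups” item, and the log itself becomes the evidence trail for audits.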

If using emotion/sentiment detection

  • Disclose sentiment analysis to users if AI analyzes their emotional state or sentiment
  • Evaluate whether emotion detection is necessary – if not essential, consider removing it to simplify compliance

Ongoing compliance

  • Regular audits – quarterly review that all AI touchpoints have proper disclosure
  • Monitor Code of Practice updates – adapt content marking as standards are finalized
  • Maintain evidence – screenshots, logs, and test results demonstrating compliance

Timeline

| Date | Milestone |
|---|---|
| Dec 2025 | First draft Code of Practice on AI-generated content published |
| May–Jun 2026 | Code of Practice expected to be finalized |
| Aug 2, 2026 | Article 50 transparency obligations become enforceable |
| Ongoing | Code of Practice may be updated as technology and standards evolve |


This post is licensed under CC BY 4.0 by the author.