2026-04-23 · 11 min read

EU AI Act Art.50: Transparency Obligations — Chatbot Disclosure, Emotion Recognition, and AI-Generated Content Labeling (2026)

Registration under Art.49 closes the pre-market compliance chain for high-risk AI systems. Article 50 of the EU AI Act operates on a different axis: it extends transparency obligations to a much broader category of AI systems — including many that are not high-risk — and places disclosure duties at the moment of human interaction or content generation, not at the point of market placement. Where the high-risk AI Act framework is primarily about systemic accountability, Art.50 is about per-interaction honesty.

For developers and deployers, Art.50 is one of the most immediately operational provisions in the EU AI Act. It applies to chatbots, emotion recognition systems, deepfake generators, and AI systems producing synthetic text about matters of public interest — all areas where commercial AI deployment is already widespread. Understanding what Art.50 requires, who bears each obligation, and how the machine-readable disclosure mandate works in practice is essential for any team shipping AI products into the EU market.

Art.50 in the EU AI Act Architecture

Art.50 sits in Chapter IV of the EU AI Act, titled "Transparency Obligations for Providers and Deployers of Certain AI Systems." This is a standalone transparency layer, separate from both the high-risk AI compliance chain (Chapter III) and the general-purpose AI (GPAI) obligations (Chapter V). The key structural feature of Art.50 is that it creates obligations for AI systems regardless of their risk classification — a chatbot or deepfake tool that is not high-risk under Art.6 or Annex III is still subject to Art.50 if it falls within the defined categories.

| Chapter | Scope | Key Obligations |
| --- | --- | --- |
| Chapter II | Prohibited AI practices | Hard bans (social scoring, real-time remote biometrics in public spaces, subliminal manipulation) |
| Chapter III | High-risk AI systems | Risk management, data governance, technical documentation, human oversight |
| Chapter IV | Certain AI systems (broad) | Transparency at point of interaction, emotion recognition notification, synthetic content labeling |
| Chapter V | GPAI models | Model evaluation, adversarial testing, copyright policies, transparency to downstream providers |

Art.50 creates five distinct transparency obligations, each targeting a different actor type, AI system category, and disclosure moment. They do not nest into each other — a single AI product can trigger multiple Art.50 obligations simultaneously.

Art.50(1): Chatbot Disclosure — Providers

Obligation: Providers of AI systems intended to interact with natural persons must ensure those systems are designed and developed so that the persons interacting with them are informed, in a timely, clear, and intelligible manner, that they are interacting with an AI system — unless this is obvious from the context.

Who bears it: Providers (the entity that develops or has an AI system developed and places it on the EU market or puts it into service under its own name or trademark).

When it applies: At the point of interaction. The obligation is architectural — it is built into the system at the design and development phase, not applied ad hoc by the deployer.

The "obvious from context" exception: Art.50(1) includes an exception where disclosure is not required if it is apparent from the context that the user is interacting with an AI. This exception is narrow in practice. A corporate chatbot widget, a virtual assistant in a mobile app, or an AI customer service agent cannot rely on this exception simply because AI assistants are common — the standard is whether it is obvious in that specific interaction, not whether AI is generally known to exist.

Technical implementation: The disclosure must be timely (before or at the start of the interaction), clear (not buried in terms of service or privacy notices), and intelligible (understandable by the target user population). Common implementation patterns:

- A persistent UI label identifying the assistant as AI (e.g. "AI assistant" in the chat header)
- An opening message at session start stating that the user is talking to an AI system
- A spoken disclosure at the beginning of voice interactions
- System-level instructions ensuring the assistant confirms its AI nature when asked
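A minimal sketch of the "timely, fail-closed" pattern follows. All names here (`ChatSession`, `DISCLOSURE_FALLBACK`) are illustrative, not from any real SDK — the point is that the disclosure is part of the session lifecycle, not an afterthought:

```python
# Hypothetical sketch: the Art.50(1) disclosure built into the session itself.
DISCLOSURE_FALLBACK = "You are chatting with an AI assistant, not a human agent."


class ChatSession:
    def __init__(self, locale_disclosures: dict, locale: str = "en"):
        # Fall back to a default English disclosure if no localisation exists.
        self.disclosure = locale_disclosures.get(locale, DISCLOSURE_FALLBACK)
        self.messages = []
        self._disclosed = False

    def start(self):
        # Timely: the disclosure is the very first message the user sees.
        self.messages.append({"role": "system_notice", "content": self.disclosure})
        self._disclosed = True

    def send(self, user_text: str):
        if not self._disclosed:
            # Fail closed: no interaction before the disclosure is shown.
            raise RuntimeError("Art.50(1) disclosure not yet presented")
        self.messages.append({"role": "user", "content": user_text})


session = ChatSession({"de": "Sie chatten mit einem KI-Assistenten."}, locale="de")
session.start()
session.send("Hallo")
```

The fail-closed guard in `send` is the design choice worth copying: a deployment misconfiguration then surfaces as an error, not as a silent Art.50(1) breach.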

Interaction with high-risk requirements: For AI systems that are both chatbot-like and high-risk (for example, an AI used in employment screening that involves conversational interaction), Art.50(1) applies alongside the high-risk transparency obligations under Art.13. Art.13 requires information about the system to be provided to deployers; Art.50(1) requires disclosure to end users. These are cumulative, not alternative.

GPAI model providers: Providers of GPAI models (as defined in Art.3(63)) used in interactive applications bear Art.50(1) obligations at the provider level, meaning they must design their models to support downstream compliance. However, the deployer who integrates the GPAI model into a chatbot product also bears obligations to ensure the disclosure actually reaches the end user.

Art.50(2): Emotion Recognition and Biometric Categorisation — Deployers

Obligation: Deployers of AI systems that perform emotion recognition or biometric categorisation must inform the natural persons exposed to those systems of their operation, in a timely, clear, and intelligible manner.

Who bears it: Deployers (entities using an AI system in the course of their professional activity, or putting the system into service under their own authority).

Scope of "emotion recognition": The EU AI Act defines an emotion recognition system in Art.3(39) as a system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. This covers:

- Facial expression analysis inferring mood or intent from camera imagery
- Voice-based emotion detection (stress, anger, or sentiment inferred from speech)
- Inference from other behavioural biometric signals such as gait or keystroke dynamics

Scope of "biometric categorisation": Art.3(40) defines biometric categorisation as assigning natural persons to specific categories based on their biometric data, including by reference to sex, age, hair colour, eye colour, tattoos, ethnicity, sexual or political orientation, religion, health status. This is a broad definition that captures a wide range of behavioural analytics tools.

Exceptions — verification and law enforcement: Art.50(2) does not apply to AI systems used for biometric verification whose sole purpose is to confirm that a specific person is who they claim to be. Nor does it apply where use of the system is permitted by law to detect, prevent, or investigate criminal offences, subject to appropriate safeguards.

Practical scenarios where Art.50(2) applies:

- In-store analytics inferring customer mood or demographic categories from camera feeds
- Call-centre tools inferring caller emotion from voice
- Advertising screens categorising passers-by by age or sex to target content

Notification requirement: The notification must reach the person before or at the point of exposure. Existing GDPR Art.13/14 notices addressing biometric data processing cover some of this ground, but Art.50(2) requires explicit disclosure about the AI system specifically — not just the data processing in general. Teams relying on existing privacy notices should audit whether those notices meet the AI-specific intelligibility standard.
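The notice-audit step described above can be sketched as a simple gap check. The field names below are editorial assumptions, not a regulatory taxonomy — the idea is just to diff an existing GDPR notice against the AI-specific items Art.50(2) adds:

```python
# Illustrative audit: which AI-specific disclosure items does an existing
# privacy notice still lack? (Field names are our own shorthand.)
REQUIRED_AI_FIELDS = {
    "ai_system_named",         # notice names the AI system in operation
    "emotion_or_biometric",    # states that emotions/categories are inferred
    "timing_before_exposure",  # notice shown before or at point of exposure
}


def audit_notice(notice_fields: set) -> list:
    """Return the AI-specific items the notice is still missing, sorted."""
    return sorted(REQUIRED_AI_FIELDS - notice_fields)


gaps = audit_notice({"ai_system_named", "gdpr_art13_controller_identity"})
```

Here `gaps` lists the two missing AI-specific items, while the GDPR-only field is ignored — existing notices count toward, but do not automatically satisfy, Art.50(2).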

Art.50(3): AI-Generated Synthetic Content — Providers

Obligation: Providers of AI systems that generate synthetic audio, image, video, or text content must ensure the outputs are marked with a machine-readable disclosure indicating that the content was artificially generated or manipulated.

Who bears it: Providers — this is the system-level obligation, implemented at the model and infrastructure layer before the content reaches the deployer or end user.

What "synthetic" means in context: Art.50(3) targets content that either:

  1. Is entirely AI-generated (images, audio, video, or text produced without a human original)
  2. Represents a manipulation of an existing human likeness or voice — the deepfake scenario where an existing person appears to say or do something they did not

Machine-readable disclosure — technical format: The EU AI Act does not specify a single technical format for the machine-readable marker. The Commission is expected to issue implementing acts under Art.50(7) specifying technical standards. In the interim, the following approaches are used in industry practice and referenced in recitals:

| Approach | Description | Typical Use |
| --- | --- | --- |
| C2PA (Coalition for Content Provenance and Authenticity) | Cryptographically signed content credentials embedded in file metadata | Images, video |
| Watermarking | Statistical or imperceptible signal embedded in the pixel/frequency domain | Images, video, audio |
| Metadata tags | XMP/EXIF tags indicating AI generation | Images |
| Text watermarking | Token-level statistical patterns indicating AI authorship | Text |
| File-level flags | Standardised fields in container formats (MP4, WAV) | Audio, video |
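To make the signed-credential idea concrete, here is a stdlib-only sketch of a provenance manifest: hash the content, assert AI generation, sign the claim. This is emphatically not the real C2PA format (which uses JUMBF containers and X.509 certificate chains) — it only shows the shape of the mechanism:

```python
# Toy provenance manifest, loosely inspired by C2PA-style content credentials.
# NOT interoperable with real C2PA tooling; illustration only.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # in production: an HSM-protected signing key


def make_ai_generation_manifest(content: bytes, generator: str) -> dict:
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,  # the machine-readable Art.50(3) assertion
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_manifest(content: bytes, manifest: dict) -> bool:
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after marking
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


manifest = make_ai_generation_manifest(b"<png bytes>", "image-gen-v3")
```

A verifier can now detect both a forged claim and post-marking tampering, which is why cryptographic signing (rather than a bare metadata flag) is the direction industry practice is heading.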

Providers deploying synthetic content generation at scale (image generation APIs, text-to-speech services, video synthesis platforms) must implement at least one of these approaches before the obligation's application date. For providers offering API access to other developers (downstream deployers), the obligation is to design the output pipeline to include the marker — not to ensure the downstream deployer preserves it, though contracts and terms of service typically address marker preservation requirements.
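The "design the output pipeline to include the marker" requirement suggests a choke-point architecture: every generation path exits through one marking step. A hedged sketch, with a stand-in marker function rather than a real watermarking library:

```python
# Sketch of a single marking choke point for an Art.50(3) output pipeline.
# xmp_tag is a stand-in for a real XMP/C2PA embedding step.
from functools import wraps


def mark_output(marker):
    """Decorator: apply a machine-readable marker to every generated output."""
    def decorate(generate):
        @wraps(generate)
        def wrapper(*args, **kwargs):
            return marker(generate(*args, **kwargs))
        return wrapper
    return decorate


def xmp_tag(image_bytes: bytes) -> bytes:
    # Stand-in: a real implementation would embed an XMP packet or C2PA manifest.
    return image_bytes + b"<xmp:ai_generated=true>"


@mark_output(xmp_tag)
def generate_image(prompt: str) -> bytes:
    return b"PNG:" + prompt.encode()


out = generate_image("a cat")
```

The decorator pattern makes it hard for a new endpoint to skip marking: any generation function registered without `@mark_output` is visible in code review as an anomaly.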

Deepfake-specific scope: Synthetic content depicting an identifiable person — in their voice, likeness, or image — where that content does not correspond to reality, is the core deepfake scenario. The machine-readable disclosure requirement applies regardless of whether the deepfake was created with or without the depicted person's consent. Art.50(3) is a disclosure obligation, not a consent requirement (consent and legitimate interest questions fall under GDPR).

Art.50(4): Deployer Disclosure for Deepfakes and Synthetic Media

Obligation: Deployers who use AI systems to produce synthetic content — particularly audio, image, video, or text that falsely appears to be real, or that depicts a real person — must disclose to the recipients of that content that it was AI-generated or manipulated. This disclosure must be clear and audible/visible in the content itself (not only in machine-readable metadata).

Who bears it: Deployers who publish, broadcast, or distribute AI-generated content to audiences. This includes:

- Media organisations and broadcasters publishing synthetic video or audio
- Marketing and advertising teams distributing AI-generated campaign assets
- Platform operators and publishers releasing synthetic content under their own editorial control

Exceptions under Art.50(4):

- Content forming part of an evidently artistic, creative, satirical, or fictional work, where disclosure may be given in a manner that does not hamper the display or enjoyment of the work
- Use authorised by law to detect, prevent, investigate, or prosecute criminal offences
- AI-generated text that has undergone human review and for which a natural or legal person holds editorial responsibility for its publication

Relationship to Art.50(3): Art.50(3) places the machine-readable marker obligation on the provider of the generation system. Art.50(4) places the human-readable disclosure obligation on the deployer who distributes the content. A single company building and using its own synthetic media tool bears both. A company using a third-party provider's API bears Art.50(4) and relies on the provider to have implemented Art.50(3).

Disclosure format for Art.50(4): The disclosure must be prominent and cannot be relegated to metadata alone. Common formats:

- An on-screen overlay or caption shown for the duration of a synthetic video
- A spoken or audible announcement at the start of synthetic audio
- A visible label or watermark on AI-generated images
- A byline or standing notice on AI-generated text (e.g. "This article was generated by AI")
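One way to operationalise this is a per-media-type disclosure table that a publishing pipeline consults before release. The mapping below is an editorial assumption following the formats listed above, not a regulatory prescription:

```python
# Illustrative mapping from media type to a human-readable Art.50(4) surface.
HUMAN_READABLE_DISCLOSURE = {
    "video": "on-screen overlay shown for the duration of the clip",
    "audio": "spoken announcement at the start of playback",
    "image": "visible caption or watermark label",
    "text": "byline such as 'This article was generated by AI'",
}


def disclosure_for(media_type: str) -> str:
    """Look up the required disclosure surface; unknown types fail loudly."""
    try:
        return HUMAN_READABLE_DISCLOSURE[media_type]
    except KeyError:
        raise ValueError(f"no disclosure pattern defined for {media_type!r}")
```

Failing loudly on unknown media types mirrors the fail-closed philosophy: a new output format cannot ship without someone deciding its disclosure surface first.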

Art.50(5): GPAI Providers — Synthetic Text About Public Interest

Obligation: Providers of GPAI models that generate text with the primary purpose of informing the public on matters of public interest must ensure the text is marked as AI-generated using machine-readable formats.

Who bears it: GPAI model providers (not deployers) where the model is specifically designed or primarily used to generate informational text at scale — news article generators, policy brief generators, public affairs AI tools.

What "matters of public interest" means: This is intentionally broad and tracks terminology used in European media regulation. It includes electoral information, public health, economic and financial matters, security, environmental matters, and governance topics. General-purpose text generation tools are not necessarily captured unless they are marketed or configured for public interest informational content.

Overlap with Art.50(3): Art.50(5) is more specific than Art.50(3) for the text domain in the public interest context. Both can apply simultaneously. The GPAI provider must ensure machine-readable disclosure; downstream deployers distributing the text to the public bear additional human-readable disclosure obligations.

Enforcement and Penalties Under Art.99

Art.50 violations are subject to enforcement under Art.99(4) of the EU AI Act, which sets the penalty at up to €15,000,000 or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. For SMEs and start-ups, Art.99(6) caps each fine at whichever of those two figures is lower. This sits below the top tier reserved for prohibited AI practices under Art.99(3) (up to €35,000,000 or 7% of turnover) and in the same €15M / 3% tier as most high-risk obligation breaches.
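The cap arithmetic can be made concrete with a small helper (the function name is ours; the figures are the €15M / 3% tier described in the text):

```python
# Worked example of the fine-cap arithmetic: €15M or 3% of worldwide annual
# turnover, whichever is higher; for SMEs, whichever is lower.
def art50_fine_cap(annual_turnover_eur: float, is_sme: bool = False) -> float:
    fixed = 15_000_000
    pct = 0.03 * annual_turnover_eur
    return min(fixed, pct) if is_sme else max(fixed, pct)


assert art50_fine_cap(1_000_000_000) == 30_000_000       # 3% of €1bn exceeds €15M
assert art50_fine_cap(1_000_000_000, is_sme=True) == 15_000_000
assert art50_fine_cap(100_000_000) == 15_000_000         # 3% is only €3M here
```

Note that these are maximum caps set for national enforcement, not fixed tariffs — the actual fine is set by the market surveillance authority within that ceiling.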

Market surveillance authorities (MSAs) designated by each Member State under Art.70 are the primary enforcers. National enforcement practice will vary: some MSAs will focus on media and advertising sectors (Art.50(4) emphasis), while others may prioritise chatbot and emotion recognition sectors (Art.50(1)-(2) emphasis).

Interaction with GDPR and NIS2

GDPR Art.22 (Automated Decision-Making): Emotion recognition and biometric categorisation under Art.50(2) frequently overlap with GDPR Art.22, which gives data subjects rights regarding solely automated processing producing significant effects. Where Art.50(2) AI systems feed into automated decisions, Art.22 rights apply alongside the Art.50(2) notification requirement.

GDPR Art.9 (Special Categories): Biometric data used for the purpose of uniquely identifying persons is a special category under GDPR Art.9. Emotion recognition systems processing biometric data are operating on Art.9 data, which requires explicit consent or one of the Art.9(2) exceptions alongside the Art.50(2) notification.

NIS2 Art.21 (Security Measures): For providers of AI systems covered by Art.50(3)-(5) operating critical information infrastructure, synthetic content generation pipelines are supply chain assets subject to NIS2 security requirements. The integrity of the machine-readable marking infrastructure (C2PA signing keys, watermarking models) must be protected under NIS2 risk management obligations.

Temporal Scope — When Art.50 Applies

The EU AI Act entered into force on 1 August 2024. Art.50 applies from 2 August 2026 — two years after entry into force, together with most of the rest of the Act. The GPAI model obligations apply earlier, from 2 August 2025; under the transition provision in Art.111(3), providers of GPAI models placed on the market before that date have until 2 August 2027 to bring them into compliance.

Systems placed on the market before 2 August 2026 that undergo substantial modifications after that date become subject to Art.50 in full. Systems that remain unchanged after August 2026 may benefit from transitional provisions, but given the rapid iteration cycle of AI products, most commercially active systems will be re-deployed with new versions well before any grandfathering period runs out.
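The temporal-scope test sketched above can be encoded as a small predicate. The cut-off date comes from the article's timeline; the substantial-modification rule is deliberately simplified to the article's framing:

```python
# Simplified applicability check for the Art.50 temporal scope discussed above.
from datetime import date
from typing import Optional

ART50_APPLICATION_DATE = date(2026, 8, 2)


def art50_applies(placed_on_market: date,
                  last_substantial_modification: Optional[date] = None) -> bool:
    if placed_on_market >= ART50_APPLICATION_DATE:
        return True
    # A pre-existing system is pulled in by a later substantial modification.
    mod = last_substantial_modification
    return mod is not None and mod >= ART50_APPLICATION_DATE


assert art50_applies(date(2026, 9, 1))
assert not art50_applies(date(2025, 1, 1))
assert art50_applies(date(2025, 1, 1), date(2027, 3, 1))
```

In practice, as the text notes, most commercially active systems will be re-released after the application date anyway, making the grandfathering branch short-lived.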

Python TransparencyComplianceChecker Implementation

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Art50ObligationType(Enum):
    CHATBOT_DISCLOSURE = "art50_1_chatbot_disclosure"
    EMOTION_BIOMETRIC_NOTIFICATION = "art50_2_emotion_biometric_notification"
    SYNTHETIC_CONTENT_MACHINE_MARKING = "art50_3_synthetic_machine_marking"
    SYNTHETIC_CONTENT_DEPLOYER_DISCLOSURE = "art50_4_synthetic_deployer_disclosure"
    GPAI_PUBLIC_INTEREST_TEXT = "art50_5_gpai_public_interest"


class ActorType(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    GPAI_PROVIDER = "gpai_provider"


class DisclosureFormat(Enum):
    UI_LABEL = "ui_label"
    SYSTEM_PROMPT = "system_prompt"
    SPOKEN_DISCLOSURE = "spoken_disclosure"
    C2PA_CREDENTIAL = "c2pa_credential"
    STATISTICAL_WATERMARK = "statistical_watermark"
    XMP_METADATA_TAG = "xmp_metadata_tag"
    TEXT_WATERMARK = "text_watermark"
    ARTICLE_BYLINE = "article_byline"
    ON_SCREEN_OVERLAY = "on_screen_overlay"


@dataclass
class Art50Obligation:
    obligation_type: Art50ObligationType
    actor_type: ActorType
    trigger: str
    required_formats: list[DisclosureFormat]
    exception_applies: bool = False
    exception_reason: Optional[str] = None
    implementation_notes: str = ""


@dataclass
class AISystemProfile:
    system_name: str
    actor_type: ActorType
    interacts_with_humans: bool = False
    uses_emotion_recognition: bool = False
    uses_biometric_categorisation: bool = False
    generates_synthetic_audio: bool = False
    generates_synthetic_image: bool = False
    generates_synthetic_video: bool = False
    generates_synthetic_text: bool = False
    distributes_synthetic_content: bool = False
    is_gpai_public_interest: bool = False
    context_makes_ai_obvious: bool = False
    is_biometric_verification_only: bool = False
    is_law_enforcement_authorised: bool = False
    is_artistic_or_satirical: bool = False


class TransparencyComplianceChecker:

    def check_obligations(self, profile: AISystemProfile) -> list[Art50Obligation]:
        obligations: list[Art50Obligation] = []

        # Art.50(1): Chatbot disclosure — provider obligation
        if (
            profile.interacts_with_humans
            and profile.actor_type in (ActorType.PROVIDER, ActorType.GPAI_PROVIDER)
        ):
            exception = profile.context_makes_ai_obvious
            obligations.append(Art50Obligation(
                obligation_type=Art50ObligationType.CHATBOT_DISCLOSURE,
                actor_type=profile.actor_type,
                trigger="AI system designed to interact with natural persons",
                required_formats=[
                    DisclosureFormat.UI_LABEL,
                    DisclosureFormat.SYSTEM_PROMPT,
                    DisclosureFormat.SPOKEN_DISCLOSURE,
                ],
                exception_applies=exception,
                exception_reason="Context makes AI nature obvious" if exception else None,
                implementation_notes=(
                    "Disclosure must be timely (at session start), clear, and intelligible. "
                    "UI label plus system-level disclosure is the safest combination."
                ),
            ))

        # Art.50(2): Emotion recognition / biometric categorisation — deployer obligation
        if profile.actor_type == ActorType.DEPLOYER and (
            profile.uses_emotion_recognition or profile.uses_biometric_categorisation
        ):
            exception = (
                profile.is_biometric_verification_only
                or profile.is_law_enforcement_authorised
            )
            reason = None
            if profile.is_biometric_verification_only:
                reason = "Sole purpose is identity verification (not categorisation or emotion inference)"
            elif profile.is_law_enforcement_authorised:
                reason = "Law enforcement derogation applies"

            obligations.append(Art50Obligation(
                obligation_type=Art50ObligationType.EMOTION_BIOMETRIC_NOTIFICATION,
                actor_type=ActorType.DEPLOYER,
                trigger="Deployer operating emotion recognition or biometric categorisation system",
                required_formats=[DisclosureFormat.UI_LABEL],
                exception_applies=exception,
                exception_reason=reason,
                implementation_notes=(
                    "Notification must reach exposed persons before or at point of exposure. "
                    "GDPR Art.13/14 notices should be updated to include AI-specific disclosure."
                ),
            ))

        # Art.50(3): Machine-readable marking — provider obligation
        synthetic_types = [
            profile.generates_synthetic_audio,
            profile.generates_synthetic_image,
            profile.generates_synthetic_video,
            profile.generates_synthetic_text,
        ]
        if any(synthetic_types) and profile.actor_type in (
            ActorType.PROVIDER, ActorType.GPAI_PROVIDER
        ):
            formats = []
            if profile.generates_synthetic_image or profile.generates_synthetic_video:
                formats.extend([DisclosureFormat.C2PA_CREDENTIAL, DisclosureFormat.STATISTICAL_WATERMARK])
            if profile.generates_synthetic_audio:
                formats.extend([DisclosureFormat.STATISTICAL_WATERMARK])
            if profile.generates_synthetic_text:
                formats.extend([DisclosureFormat.TEXT_WATERMARK, DisclosureFormat.XMP_METADATA_TAG])

            obligations.append(Art50Obligation(
                obligation_type=Art50ObligationType.SYNTHETIC_CONTENT_MACHINE_MARKING,
                actor_type=profile.actor_type,
                trigger="Provider of system generating synthetic audio, image, video, or text",
                required_formats=formats,
                implementation_notes=(
                    "Machine-readable format required. C2PA is the leading standard for image/video. "
                    "Commission implementing acts under Art.50(7) will specify mandatory formats."
                ),
            ))

        # Art.50(4): Human-readable disclosure — deployer obligation
        if profile.distributes_synthetic_content and profile.actor_type == ActorType.DEPLOYER:
            exception = profile.is_law_enforcement_authorised or profile.is_artistic_or_satirical
            obligations.append(Art50Obligation(
                obligation_type=Art50ObligationType.SYNTHETIC_CONTENT_DEPLOYER_DISCLOSURE,
                actor_type=ActorType.DEPLOYER,
                trigger="Deployer distributing AI-generated synthetic content to audiences",
                required_formats=[
                    DisclosureFormat.ON_SCREEN_OVERLAY,
                    DisclosureFormat.ARTICLE_BYLINE,
                ],
                exception_applies=exception,
                exception_reason=(
                    "Artistic/satirical content with evident AI nature or law enforcement"
                    if exception else None
                ),
                implementation_notes=(
                    "Human-readable disclosure must be visible/audible in the content itself — "
                    "machine-readable metadata alone does not satisfy Art.50(4)."
                ),
            ))

        # Art.50(5): GPAI public interest text marking
        if profile.is_gpai_public_interest and profile.actor_type == ActorType.GPAI_PROVIDER:
            obligations.append(Art50Obligation(
                obligation_type=Art50ObligationType.GPAI_PUBLIC_INTEREST_TEXT,
                actor_type=ActorType.GPAI_PROVIDER,
                trigger="GPAI provider whose model primarily generates public-interest text",
                required_formats=[DisclosureFormat.TEXT_WATERMARK, DisclosureFormat.XMP_METADATA_TAG],
                implementation_notes=(
                    "Applies to GPAI models designed or marketed for informational text generation "
                    "on public interest topics (elections, health, finance, governance)."
                ),
            ))

        return obligations

    def generate_compliance_report(self, profile: AISystemProfile) -> dict:
        obligations = self.check_obligations(profile)
        active = [o for o in obligations if not o.exception_applies]
        excepted = [o for o in obligations if o.exception_applies]

        return {
            "system_name": profile.system_name,
            "total_obligations": len(obligations),
            "active_obligations": len(active),
            "excepted_obligations": len(excepted),
            "obligations": [
                {
                    "type": o.obligation_type.value,
                    "actor": o.actor_type.value,
                    "trigger": o.trigger,
                    "formats": [f.value for f in o.required_formats],
                    "exception": o.exception_applies,
                    "exception_reason": o.exception_reason,
                    "notes": o.implementation_notes,
                }
                for o in obligations
            ],
        }


# Example: customer service chatbot provider
checker = TransparencyComplianceChecker()
chatbot_profile = AISystemProfile(
    system_name="CustomerServiceBot v2",
    actor_type=ActorType.PROVIDER,
    interacts_with_humans=True,
    context_makes_ai_obvious=False,
)
report = checker.generate_compliance_report(chatbot_profile)
# Active obligation: Art.50(1) chatbot disclosure
# Required formats: ui_label, system_prompt, spoken_disclosure

# Example: media platform distributing AI-generated video
media_profile = AISystemProfile(
    system_name="SyntheticNewsVideo",
    actor_type=ActorType.DEPLOYER,
    distributes_synthetic_content=True,
    is_artistic_or_satirical=False,
)
media_report = checker.generate_compliance_report(media_profile)
# Active obligation: Art.50(4) deployer disclosure
# Required formats: on_screen_overlay, article_byline

Art.50 Compliance Checklist

Providers of chatbot or conversational AI systems:

- Build the AI disclosure into the interaction flow at design time (session start, UI label, spoken notice for voice)
- Verify the disclosure is not buried in terms of service or privacy notices
- Document any reliance on the "obvious from context" exception, per interaction context
- Test intelligibility against the target user population, including localisation

Deployers of emotion recognition or biometric categorisation systems:

- Notify exposed persons before or at the point of exposure
- Audit existing GDPR Art.13/14 notices for AI-specific disclosure gaps
- Confirm whether the verification-only or law enforcement exceptions genuinely apply
- Verify the GDPR Art.9 lawful basis for the underlying biometric processing

Providers of synthetic audio, image, video, or text generation:

- Implement at least one machine-readable marking approach (C2PA, watermarking, metadata tags)
- Design the output pipeline so every generation path passes through the marking step
- Address marker preservation in API terms of service for downstream deployers
- Track Commission implementing acts under Art.50(7) for mandated formats

Deployers distributing synthetic content:

- Add a visible or audible disclosure in the content itself, not only in metadata
- Assess whether the artistic/satirical exception applies, and document the assessment
- Verify upstream providers have implemented Art.50(3) machine-readable marking

GPAI model providers (public interest text):

- Determine whether the model is designed or marketed for public-interest informational text
- Ensure machine-readable marking of generated text outputs
- Support downstream deployers' human-readable disclosure obligations

What Comes Next in the EU AI Act

Art.50 closes Chapter IV transparency obligations. The remaining chapters of the EU AI Act cover measures supporting innovation (regulatory sandboxes under Art.57-63), governance and market surveillance (Art.64-95), and final provisions. For the high-risk AI compliance chain, the sequence runs: risk management (Art.9) → data governance (Art.10) → technical documentation (Art.11) → record-keeping (Art.12) → transparency to deployers (Art.13) → human oversight (Art.14) → accuracy, robustness, cybersecurity (Art.15) → conformity assessment (Art.43) → EU DoC (Art.46) → CE marking (Art.47) → registration (Art.49).

For deployers and providers operating in the EU, Art.50 is where compliance becomes user-visible. Every chatbot conversation, every emotion analytics dashboard, every AI-generated image published at scale — each is a regulated interaction under EU law. The infrastructure for transparency is now part of the required architecture.