2026-04-16 · 12 min read

EU AI Act Art.50 Transparency Obligations: Chatbot Disclosure, Deepfakes & AI-Generated Content — Developer Guide (2026)

EU AI Act Article 50 governs the transparency layer that sits between AI systems and the humans who interact with or are affected by them. Unlike the deep compliance obligations of Chapter III — risk management, technical documentation, conformity assessment — Art.50 applies to a much broader category of AI deployments, including systems that are not classified as high-risk under Annex III. If your system talks to people, generates synthetic content, or produces deep fakes, Art.50 applies regardless of whether your system appears on the high-risk list.

For SaaS developers, Art.50 is the regulation most directly shaping how LLM-powered chatbots, AI image and video generators, text-generation tools, and emotion-recognition systems must be built and deployed in the EU from 2 August 2026. The obligations are operational, not just architectural: every user interaction, every synthetic image published, every deep fake generated triggers specific disclosure requirements that must be built into your system at the feature level.

The enforcement stakes are material: Art.50 violations by providers and deployers carry administrative fines of up to €15 million or, for undertakings, up to 3% of total worldwide annual turnover, whichever is higher, under Art.99(4). For GPAI model providers, violations of the Art.50(3) marking obligation carry the same ceilings.


Art.50 in the EU AI Act Structure

Art.50 sits in Chapter IV — Transparency Obligations for Providers and Deployers of Certain AI Systems. This positioning is significant: Art.50 is separate from Chapter III (high-risk AI system obligations) and applies independently of whether the AI system is high-risk.

| Chapter | Scope | Art.50 Relationship |
|---|---|---|
| Chapter II | Prohibited AI practices (Art.5) | Irrelevant — prohibited systems cannot be deployed |
| Chapter III | High-risk AI systems (Art.6–Art.49) | Art.50 obligations are additional to Chapter III if both apply |
| Chapter IV | Transparency for certain AI — Art.50 | Applies to chatbots, deep fakes, AI content generators |
| Chapter V | GPAI models (Art.51–Art.56) | Art.50(3) applies to GPAI models generating synthetic content |
| Chapter VI | Governance and enforcement | Art.99 penalty framework applies to Art.50 violations |

Who Art.50 applies to:

| Actor | Applicable Paragraphs | Trigger |
|---|---|---|
| Provider | Art.50(1), Art.50(3) | System designed to interact with humans; system generates synthetic content |
| Deployer | Art.50(2), Art.50(4), Art.50(5) | Deployer uses system for human interaction; deployer uses system for deep fakes; deployer publishes AI text for public interest |
| Both | Art.50(6), Art.50(7) | Exemptions and right to complain apply to both |

Art.50(1): Provider Obligation — Human-Interaction AI Disclosure

Art.50(1) places the primary obligation on providers: AI systems intended to interact directly with natural persons must be designed and developed so that the persons know they are interacting with an AI system, unless this is obvious from context.

What triggers Art.50(1):

| System Type | Triggers Art.50(1) | Design Obligation |
|---|---|---|
| Customer-facing chatbot (LLM-powered) | Yes | Must disclose AI nature |
| AI virtual assistant (scheduling, support) | Yes | Must disclose AI nature |
| AI phone/voice agent (IVR with LLM) | Yes | Must disclose AI nature |
| AI avatar in video call | Yes | Must disclose AI nature |
| AI email response generator (user-visible output) | Yes | Must disclose AI generation |
| Search engine (AI ranking, no direct interaction) | No | No direct human-AI interaction |
| Recommendation engine (no direct interaction) | No | No direct human-AI interaction |

"Obvious from context" exemption: The exemption is narrow. A robotic chat interface named "Robo-Helper" does not automatically qualify as obvious — the name is suggestive but not conclusive. Contexts where the exemption clearly applies include: clearly-labeled creative writing tools ("Generate AI story"), AI code completion within IDEs where AI assistance is the explicit product value proposition, or productivity assistants branded explicitly as AI tools. When in doubt, disclosing is the safer path — the exemption is a defense, not a deployment default.

Law enforcement exemption: Art.50(1) explicitly does not apply to AI systems authorised by law to detect, prevent, investigate, or prosecute criminal offences. This exemption is relevant to covert AI-assisted investigation tools but has no application to commercial SaaS.

Design obligation vs. runtime obligation: Art.50(1) is a design obligation on the provider. The provider must build the disclosure into the system architecture — the default mode of the system must include the disclosure. This is different from a usage policy; you cannot satisfy Art.50(1) by writing terms of service that tell deployers to disclose. The system itself must generate the disclosure.
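As a minimal sketch of this principle (the function and constant names are illustrative, not prescribed by the Act), the disclosure lives in the reply path itself rather than in a policy document:

```python
# Illustrative sketch: the Art.50(1) disclosure is part of the system's default
# output path, so it ships with the product rather than depending on deployer
# configuration or terms of service.
AI_DISCLOSURE = "You are interacting with an AI system."

def render_reply(turn_index: int, reply: str) -> str:
    """Prepend the AI disclosure to the first message of every session."""
    if turn_index == 0:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```

A hardcoded path like this also simplifies the deployer's Art.50(2) position: the disclosure cannot be suppressed downstream.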


Art.50(2): Deployer Obligation — Human-Interaction Runtime Disclosure

Art.50(2) mirrors Art.50(1) but places the runtime obligation on the deployer: when deploying an Art.50(1)-covered system, the deployer must ensure natural persons are informed they are interacting with an AI system.

Provider vs. deployer responsibility matrix:

| Scenario | Provider Obligation | Deployer Obligation |
|---|---|---|
| Provider builds chatbot, deploys directly to end users | Art.50(1): design disclosure into system | Art.50(2): ensure disclosure at runtime |
| Provider builds chatbot API, deployer integrates | Art.50(1): design disclosure capability | Art.50(2): activate and display disclosure |
| Provider's disclosure is hardcoded and cannot be disabled | Art.50(1): satisfied by design | Art.50(2): satisfied by provider's design |
| Provider's disclosure is optional/configurable | Art.50(1): must provide disclosure mechanism | Art.50(2): deployer must enable disclosure |

Practical implication for API providers: If you provide an LLM API that deployers integrate into customer-facing applications, you satisfy Art.50(1) by building the disclosure capability. Your deployers satisfy Art.50(2) by enabling and surfacing that disclosure. Your API terms and technical documentation should require deployers to activate and not suppress the disclosure mechanism. An API provider whose documentation instructs deployers to suppress the AI disclosure is in violation of Art.50(1) even if the deployer is technically the party failing to disclose.
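One way to implement this pattern (the envelope field names here are hypothetical, not a standard) is to ship the disclosure in the API response itself, so deployers can render it directly:

```python
import json

def completion_envelope(completion_text: str) -> str:
    """Hypothetical API response envelope: every completion carries an
    ai_disclosure block for the deployer to surface at runtime (Art.50(2));
    API terms should forbid suppressing it."""
    return json.dumps({
        "completion": completion_text,
        "ai_disclosure": {
            "required": True,
            "text": "This response was generated by an AI system.",
        },
    })
```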


Art.50(3): Provider Obligation — Machine-Readable Marking of AI-Generated Content

Art.50(3) requires providers of AI systems — including GPAI model systems — that generate synthetic audio, image, video, or text content to ensure that the outputs are marked in a machine-readable format and detectable as artificially generated or manipulated.

Scope of Art.50(3):

| Content Type | System Examples | Marking Requirement |
|---|---|---|
| Synthetic audio | Voice cloning, AI speech synthesis, TTS beyond standard | Machine-readable metadata marking AI origin |
| AI-generated images | Text-to-image models (Stable Diffusion integrations, DALL-E wrappers) | Machine-readable metadata (e.g., C2PA Content Credentials) |
| AI-generated video | Text-to-video, video synthesis, motion generation | Machine-readable watermarking or metadata |
| AI-generated text | LLM outputs published or presented as standalone content | Machine-readable marking (where technically feasible) |

Machine-readable marking technical standards: The EU AI Act references the "generally acknowledged state of the art" for machine-readable marking. The emerging standard is the C2PA (Coalition for Content Provenance and Authenticity) specification, which enables attaching cryptographically signed content credentials to media files indicating AI generation. For text, marking remains technically more challenging — embedded metadata, document properties, or invisible steganographic markers are evolving approaches. Providers should track CEN/CENELEC standardisation work under Art.40 for AI-specific technical standards as they emerge.
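For API-delivered text, one of the evolving approaches above can be sketched as response-header marking (the X-AI-Generated header name is a convention, not a mandated standard, and the function is illustrative):

```python
def mark_text_output(body: str, model_id: str) -> tuple[dict[str, str], str]:
    """Sketch: carry the AI-origin marking for generated text in
    machine-readable HTTP response headers, alongside the body itself."""
    headers = {
        "Content-Type": "text/plain; charset=utf-8",
        "X-AI-Generated": "true",   # hypothetical header convention
        "X-AI-Generator": model_id,
    }
    return headers, body
```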

The "assistive function" exemption: Art.50(3) does not apply where the AI system performs an assistive function for standard editing or does not substantially alter the input data. Grammar correction, spell checking, formatting suggestions, and minor AI-assisted edits are excluded. The threshold is "substantial alteration" — if the system transforms the semantic or visual content substantially beyond the original input, the exemption does not apply.

GPAI model provider obligation: GPAI model providers (Art.51–Art.56) are explicitly included in Art.50(3). An LLM model provider whose model is used to generate text, images, or audio must ensure their model outputs are technically capable of being marked as AI-generated. This is an obligation on the GPAI model provider level, not just on downstream deployers.


Art.50(4): Deployer Obligation — Deep Fake Disclosure

Art.50(4) requires deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake to disclose that the content has been artificially generated or manipulated.

What constitutes a "deep fake" under Art.50(4):

| Content Type | Deep Fake Threshold | Art.50(4) Applies |
|---|---|---|
| Realistic video of a person doing/saying things they did not do | Yes — person likeness, realistic, false | Yes |
| AI-generated face swapped onto existing video | Yes — realistic, false attribution | Yes |
| Voice cloning with false attribution | Yes — voice likeness, false | Yes |
| AI-generated fictional character (clearly stylized) | No — not realistic person likeness | No |
| AI art that is clearly fantastical | No — not presenting as real | No |
| AI-generated historical scene without specific persons | Context-dependent | Likely no if not presenting specific persons falsely |

Disclosure method for deep fakes: The disclosure must be clear, conspicuous, and persistent — a brief disclaimer that appears only at the start of a video or is buried in video description metadata does not satisfy Art.50(4). The disclosure must be associated with the content in a way that a viewer encountering the content in any context (e.g., shared out of context) would be informed of the AI manipulation.

Exemptions and limitations to Art.50(4):

  1. Authorised by law for detecting, preventing, investigating, or prosecuting criminal offences
  2. Freedom of expression / artistic freedom (Art.11 and Art.13 of the EU Charter) — for evidently artistic, creative, satirical, or fictional works, the obligation is not removed but limited: the existence of the generated or manipulated content must still be disclosed in an appropriate manner that does not hamper the display or enjoyment of the work

The artistic limitation still requires that the context make the AI manipulation clear — a satirical programme that uses an AI-generated imitation of a politician's voice with clear satirical labeling differs from a realistic-appearing false statement video published with no disclosure at all.


Art.50(5): Deployer Obligation — Public Interest AI Text Disclosure

Art.50(5) requires deployers of AI systems that generate or manipulate text published for the purpose of informing the public on matters of public interest to disclose that the text has been artificially generated or manipulated.

Scope of "public interest" text:

| Publication Context | Art.50(5) Applies | Notes |
|---|---|---|
| News article generated by AI | Yes | Public interest information |
| AI-generated opinion piece on public affairs | Yes | Public interest |
| AI-generated marketing email | No | Commercial, not public interest |
| AI-generated product documentation | No | Commercial, not public interest |
| AI-assisted SEO blog content | No — unless on matters of public interest | Commercial content |
| AI-generated social media post on political affairs | Yes — if informing public | |

Editorial control exemption: Art.50(5) does not apply where:

  1. The AI-generated text has been subject to human review or editorial control, AND
  2. A natural or legal person holds editorial responsibility for the publication

This is the journalist/editor model: if a human editor meaningfully reviews and takes responsibility for AI-drafted content, the editorial control exemption applies. The keyword is "meaningfully" — a rubber-stamp review that does not actually engage with the content does not satisfy the exemption.


Art.50(6): Common Exemptions — Obvious Content and Assistive Editing

Art.50(6) provides two cross-cutting exemptions to Art.50(3), (4), and (5):

Exemption A — Obviously artificial content: Where it is obvious that the content is artificially generated or manipulated. This applies to clearly fantastical AI art, stylized AI animations, or content produced in obviously AI-native contexts where no reasonable viewer would mistake the content for authentic human-produced material.

Exemption B — Assistive editing: Where the AI system performs an assistive function for standard editing or does not substantially alter the input data provided by the deployer or its appearance. Grammar assistants, translation aids, formatting tools, and similar minor AI interventions are not subject to the Art.50 disclosure obligations.
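Taken together, the two Art.50(6) exemptions reduce to a small decision function (a sketch — the boolean inputs stand in for assessments that need human judgment and per-content documentation):

```python
def art50_6_exemption_applies(obviously_artificial: bool,
                              assistive_edit_only: bool,
                              substantially_alters_input: bool) -> bool:
    """Cross-cutting Art.50(6) test for the Art.50(3)/(4)/(5) obligations:
    Exemption A (obviously artificial content) or Exemption B (assistive
    editing without substantial alteration of the input)."""
    exemption_a = obviously_artificial
    exemption_b = assistive_edit_only and not substantially_alters_input
    return exemption_a or exemption_b
```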


Art.50(7): Right to Complain

Natural persons who have been exposed to AI systems covered by Art.50(1) and Art.50(2) — specifically those that interact with humans — have the right to make a complaint to the relevant national competent authority. This creates an accountability mechanism: users who believe they were interacted with by an AI system without proper disclosure can file formal complaints. Providers and deployers should maintain records of their Art.50 compliance mechanisms to be able to respond to such complaints.


Art.50 Intersection Matrix

| Article | Interaction with Art.50 |
|---|---|
| Art.5 | Prohibited AI systems cannot be deployed — Art.50 moot |
| Art.6, Annex III | High-risk AI systems face Chapter III obligations plus Art.50 where applicable (many Annex III systems interact with humans) |
| Art.13 | High-risk AI transparency obligations — more specific than Art.50(1) but complementary for HR systems |
| Art.22 | Deployers of high-risk AI in public authority contexts — Art.50(2) applies additionally |
| Art.26 | Deployer obligations for high-risk AI — Art.50 obligations are separate Chapter IV obligations |
| Art.28 | Distributors who become providers — Art.50 obligations transfer |
| Art.51–56 | GPAI model providers explicitly included in Art.50(3) synthetic content marking |
| Art.52 (with Art.50) | Recital 133: GPAI models used for creative purposes have specific guidance |
| Art.74 | Market surveillance authorities enforce Art.50 compliance |
| Art.99 | Administrative fines for Art.50 violations: up to €15M or 3% global turnover |
| GDPR Art.22 | AI-driven automated decisions — Art.50(1) disclosure complements GDPR automated decision notice |
| GDPR Art.13/14 | Data collection notice in same interaction — Art.50 disclosure can be combined with GDPR disclosure |

CLOUD Act Dimension for Art.50 Compliance Records

Art.50 compliance requires providers and deployers to maintain records of their disclosure mechanisms, marking implementations, and — particularly for Art.50(4) deep fake systems — evidence that disclosures were made. Under Art.18, 10-year retention applies to high-risk AI systems; for Art.50 systems that are not high-risk, best practice documentation retention should cover at least the statute of limitations period for administrative violations (typically 3–5 years under national transpositions).

CLOUD Act risk for Art.50 records:

| Record Type | Art.50 Source | CLOUD Act Risk |
|---|---|---|
| Chatbot interaction logs showing AI disclosure | Art.50(1)/(2) | High if on US cloud infrastructure |
| AI content generation audit logs | Art.50(3) | High |
| Deep fake generation records with disclosure evidence | Art.50(4) | High |
| Editorial review records for AI text | Art.50(5) exemption evidence | High |
| Complaint response records (Art.50(7)) | Art.50(7) | Medium |

If compliance evidence demonstrating that disclosures were made, content was marked, and deployers fulfilled their Art.50 obligations is stored on US-based cloud infrastructure (AWS, Azure, GCP), those records are potentially compellable by US federal agencies under CLOUD Act (18 U.S.C. §2713) — simultaneously with any EU market surveillance request. EU-native PaaS infrastructure eliminates this dual-access risk: compliance records remain under a single regulatory regime, with no US government parallel access path.


Python Implementations

1. TransparencyDisclosureManager

from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class DisclosureType(Enum):
    AI_INTERACTION = "ai_interaction"       # Art.50(1)/(2)
    SYNTHETIC_CONTENT = "synthetic_content" # Art.50(3)
    DEEP_FAKE = "deep_fake"                 # Art.50(4)
    PUBLIC_TEXT = "public_text"             # Art.50(5)

class DisclosureStatus(Enum):
    COMPLIANT = "compliant"
    MISSING = "missing"
    EXEMPTION_APPLIED = "exemption_applied"
    PENDING_REVIEW = "pending_review"

@dataclass
class DisclosureRecord:
    system_id: str
    disclosure_type: DisclosureType
    status: DisclosureStatus
    disclosure_text: Optional[str]          # Human-readable disclosure
    machine_readable_marker: Optional[str]  # For Art.50(3): C2PA credentials URI
    exemption_basis: Optional[str]          # If exemption applied: legal basis
    recorded_at: datetime = field(default_factory=datetime.utcnow)
    session_id: Optional[str] = None        # For Art.50(1)/(2): interaction session

class TransparencyDisclosureManager:
    """
    Manages Art.50 disclosure obligations for AI systems.
    Tracks compliance state for chatbot interactions, synthetic content,
    deep fakes, and public interest text generation.
    """

    def __init__(self, system_id: str):
        self.system_id = system_id
        self._records: list[DisclosureRecord] = []

    def record_chatbot_disclosure(
        self,
        session_id: str,
        disclosure_shown: bool,
        disclosure_text: str,
        exemption_basis: Optional[str] = None,
    ) -> DisclosureRecord:
        """Art.50(1)/(2): Record that AI-interaction disclosure was shown."""
        status = (
            DisclosureStatus.EXEMPTION_APPLIED if exemption_basis
            else (DisclosureStatus.COMPLIANT if disclosure_shown else DisclosureStatus.MISSING)
        )
        record = DisclosureRecord(
            system_id=self.system_id,
            disclosure_type=DisclosureType.AI_INTERACTION,
            status=status,
            disclosure_text=disclosure_text if disclosure_shown else None,
            machine_readable_marker=None,
            exemption_basis=exemption_basis,
            session_id=session_id,
        )
        self._records.append(record)
        return record

    def record_synthetic_content_marking(
        self,
        content_id: str,
        content_type: str,  # "image", "audio", "video", "text"
        c2pa_manifest_uri: Optional[str],
        marking_method: str,
        exemption_basis: Optional[str] = None,
    ) -> DisclosureRecord:
        """Art.50(3): Record that synthetic content was marked as AI-generated."""
        if exemption_basis:
            status = DisclosureStatus.EXEMPTION_APPLIED
        elif c2pa_manifest_uri or marking_method:
            status = DisclosureStatus.COMPLIANT
        else:
            status = DisclosureStatus.MISSING
        record = DisclosureRecord(
            system_id=self.system_id,
            disclosure_type=DisclosureType.SYNTHETIC_CONTENT,
            status=status,
            disclosure_text=f"Content type: {content_type}. Method: {marking_method}",
            machine_readable_marker=c2pa_manifest_uri,
            exemption_basis=exemption_basis,
            session_id=content_id,
        )
        self._records.append(record)
        return record

    def compliance_summary(self) -> dict:
        """Return compliance summary across all disclosure types."""
        summary = {dtype.value: {"compliant": 0, "missing": 0, "exemption": 0}
                   for dtype in DisclosureType}
        for record in self._records:
            key = record.disclosure_type.value
            if record.status == DisclosureStatus.COMPLIANT:
                summary[key]["compliant"] += 1
            elif record.status == DisclosureStatus.MISSING:
                summary[key]["missing"] += 1
            elif record.status == DisclosureStatus.EXEMPTION_APPLIED:
                summary[key]["exemption"] += 1
        return summary

2. DeepFakeLabeler

from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class DeepFakeDisclosureMethod(Enum):
    ON_SCREEN_OVERLAY = "on_screen_overlay"       # Persistent visual label on video
    AUDIO_ANNOUNCEMENT = "audio_announcement"      # "This video contains AI-manipulated content"
    METADATA_EMBEDDED = "metadata_embedded"        # C2PA or similar machine-readable
    PLATFORM_LABEL = "platform_label"              # Platform-level label (YouTube AI label, etc.)
    DESCRIPTION_LABEL = "description_label"        # In video description/caption (NOT sufficient alone)

class ArtisticFreedomBasis(Enum):
    SATIRE = "satire"
    PARODY = "parody"
    CLEARLY_FICTIONAL = "clearly_fictional"
    ARTISTIC_EXPRESSION = "artistic_expression"
    NOT_APPLICABLE = "not_applicable"

@dataclass
class DeepFakeDisclosureRecord:
    content_id: str
    content_type: str  # "video", "audio", "image"
    is_deep_fake: bool
    disclosure_methods: list[DeepFakeDisclosureMethod]
    law_enforcement_exemption: bool = False
    artistic_freedom_basis: ArtisticFreedomBasis = ArtisticFreedomBasis.NOT_APPLICABLE
    artistic_safeguards_documented: bool = False
    disclosure_text: Optional[str] = None
    c2pa_credentials_uri: Optional[str] = None
    recorded_at: datetime = field(default_factory=datetime.utcnow)

class DeepFakeLabeler:
    """
    Manages Art.50(4) deep fake disclosure obligations.
    Validates that required disclosures are in place before content publication.
    """

    def __init__(self, deployer_id: str):
        self.deployer_id = deployer_id

    def assess_disclosure_requirement(
        self,
        content_id: str,
        content_type: str,
        is_realistic_person_likeness: bool,
        is_attributed_to_real_person: bool,
        intended_as_satire_parody: bool,
        is_law_enforcement_authorized: bool,
    ) -> dict:
        """
        Determine if Art.50(4) disclosure is required for given content.
        Returns assessment with required actions.
        """
        is_deep_fake = is_realistic_person_likeness and is_attributed_to_real_person
        law_enforcement_exempt = is_law_enforcement_authorized
        artistic_exempt = intended_as_satire_parody and is_realistic_person_likeness

        if not is_deep_fake:
            return {"requires_disclosure": False, "reason": "Not a deep fake"}
        if law_enforcement_exempt:
            return {"requires_disclosure": False, "reason": "Law enforcement exemption (Art.50(4))"}
        if artistic_exempt:
            # Art.50(4) artistic/satirical cases are a limitation, not a full
            # exemption: disclosure is still owed, in an appropriate manner
            # that does not hamper display or enjoyment of the work.
            return {
                "requires_disclosure": True,
                "limited_obligation": True,
                "reason": "Artistic freedom limitation (Art.50(4))",
                "warning": "Must document artistic safeguards protecting third-party rights",
            }
        return {
            "requires_disclosure": True,
            "required_methods": [
                DeepFakeDisclosureMethod.ON_SCREEN_OVERLAY,
                DeepFakeDisclosureMethod.METADATA_EMBEDDED,
            ],
            "recommended_text": "This content contains AI-generated or AI-manipulated elements.",
            "reason": "Art.50(4): deep fake content requires disclosure",
        }

    def validate_disclosure(self, record: DeepFakeDisclosureRecord) -> list[str]:
        """Validate that disclosure meets Art.50(4) requirements. Returns list of issues."""
        issues = []
        if not record.is_deep_fake:
            return issues
        if record.law_enforcement_exemption or record.artistic_freedom_basis != ArtisticFreedomBasis.NOT_APPLICABLE:
            if record.artistic_freedom_basis != ArtisticFreedomBasis.NOT_APPLICABLE and not record.artistic_safeguards_documented:
                issues.append("Artistic freedom exemption requires documented safeguards for third-party rights")
            return issues
        if not record.disclosure_methods:
            issues.append("No disclosure methods applied for deep fake content")
        if DeepFakeDisclosureMethod.DESCRIPTION_LABEL in record.disclosure_methods and len(record.disclosure_methods) == 1:
            issues.append("Description-only disclosure insufficient for Art.50(4) — must include persistent on-content label")
        if not record.disclosure_text:
            issues.append("Missing disclosure text")
        return issues

3. AIContentMarker

import hashlib
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class ContentMarkingMethod(Enum):
    C2PA_CREDENTIALS = "c2pa_credentials"         # Coalition for Content Provenance
    IPTC_METADATA = "iptc_metadata"               # IPTC photo/image metadata
    XMP_SIDECAR = "xmp_sidecar"                   # Adobe XMP metadata
    INVISIBLE_WATERMARK = "invisible_watermark"    # Steganographic
    DOCUMENT_PROPERTIES = "document_properties"   # For text/document formats
    HTTP_HEADER = "http_header"                   # X-AI-Generated header for API responses

@dataclass
class ContentMarkingRecord:
    content_id: str
    content_type: str     # "image", "audio", "video", "text"
    generator_model_id: str
    generator_version: str
    marking_method: ContentMarkingMethod
    marking_data: Optional[dict]          # Method-specific marking payload
    content_hash: str                     # SHA-256 of content at marking time
    is_assistive_edit_only: bool = False  # Art.50(6) exemption flag
    substantially_alters_input: bool = True
    marked_at: datetime = field(default_factory=datetime.utcnow)

class AIContentMarker:
    """
    Implements Art.50(3) machine-readable marking of AI-generated content.
    Manages C2PA-compatible content credentials and fallback marking methods.
    """

    # EU AI Act Art.50(3) requires marking to be:
    # - Machine-readable
    # - Detectable as artificially generated
    # - Technically feasible given content type
    # - Interoperable and robust

    SUPPORTED_CONTENT_TYPES = {"image", "audio", "video", "text"}

    def __init__(self, provider_id: str, model_id: str, model_version: str):
        self.provider_id = provider_id
        self.model_id = model_id
        self.model_version = model_version
        self._marking_records: list[ContentMarkingRecord] = []

    def is_marking_required(
        self,
        content_type: str,
        is_assistive_edit: bool,
        substantially_alters_input: bool,
    ) -> tuple[bool, Optional[str]]:
        """
        Determine if Art.50(3) marking is required.
        Returns (required, exemption_reason_if_not).
        """
        if is_assistive_edit and not substantially_alters_input:
            return False, "Assistive editing exemption (Art.50(6)(b)): does not substantially alter input"
        if content_type not in self.SUPPORTED_CONTENT_TYPES:
            return False, f"Content type '{content_type}' not covered by Art.50(3)"
        return True, None

    def mark_content(
        self,
        content_id: str,
        content_type: str,
        content_bytes: bytes,
        is_assistive_edit: bool = False,
        substantially_alters_input: bool = True,
        preferred_method: Optional[ContentMarkingMethod] = None,
    ) -> ContentMarkingRecord:
        """
        Apply Art.50(3)-compliant marking to AI-generated content.
        Returns the ContentMarkingRecord documenting compliance.
        """
        required, exemption_reason = self.is_marking_required(
            content_type, is_assistive_edit, substantially_alters_input
        )
        content_hash = hashlib.sha256(content_bytes).hexdigest()

        # Select marking method based on content type if not specified
        if preferred_method is None:
            method_map = {
                "image": ContentMarkingMethod.C2PA_CREDENTIALS,
                "audio": ContentMarkingMethod.C2PA_CREDENTIALS,
                "video": ContentMarkingMethod.C2PA_CREDENTIALS,
                "text": ContentMarkingMethod.DOCUMENT_PROPERTIES,
            }
            method = method_map.get(content_type, ContentMarkingMethod.HTTP_HEADER)
        else:
            method = preferred_method

        marking_data = {
            "provider_id": self.provider_id,
            "model_id": self.model_id,
            "model_version": self.model_version,
            "generation_timestamp": datetime.utcnow().isoformat(),
            "ai_generated": True,
            "art50_compliant": True,
            "exemption": exemption_reason,
        }

        record = ContentMarkingRecord(
            content_id=content_id,
            content_type=content_type,
            generator_model_id=self.model_id,
            generator_version=self.model_version,
            marking_method=method if required else ContentMarkingMethod.HTTP_HEADER,
            marking_data=marking_data if required else {"exemption": exemption_reason},
            content_hash=content_hash,
            is_assistive_edit_only=is_assistive_edit,
            substantially_alters_input=substantially_alters_input,
        )
        self._marking_records.append(record)
        return record

    def audit_log(self) -> list[dict]:
        """Return audit-ready log of all content markings for Art.50(3) compliance evidence."""
        return [
            {
                "content_id": r.content_id,
                "content_type": r.content_type,
                "method": r.marking_method.value,
                "marked_at": r.marked_at.isoformat(),
                "content_hash": r.content_hash,
                "model": f"{r.generator_model_id}@{r.generator_version}",
                "exemption": r.marking_data.get("exemption") if r.marking_data else None,
            }
            for r in self._marking_records
        ]

40-Item Art.50 Compliance Checklist

Art.50(1) — Provider Design Obligations (Human-Interaction Systems)

  1. Identify all AI systems in your product that interact directly with natural persons
  2. Confirm each interaction system includes built-in disclosure of AI nature
  3. Validate the disclosure appears before or at the start of the interaction (not buried in settings)
  4. Ensure disclosure cannot be disabled by deployers via API configuration
  5. Document that "obvious from context" exemption is not relied on without clear factual basis
  6. Confirm law enforcement exemption is not claimed for commercial SaaS
  7. Include AI disclosure requirement in API documentation for deployers
  8. Test disclosure UI in all supported languages for EU deployment

Art.50(2) — Deployer Runtime Obligations (Human-Interaction Systems)

  9. If you are a deployer, verify the AI system you use provides a compliant disclosure mechanism
  10. Confirm your deployment does not suppress or override the provider's disclosure
  11. Document where in your user interface the Art.50(1)/(2) disclosure appears
  12. Verify disclosure persists throughout the interaction (not just at session start)

Art.50(3) — Synthetic Content Marking (Providers)

  13. Inventory all AI systems in your product that generate audio, image, video, or text content
  14. Implement C2PA Content Credentials or equivalent machine-readable marking for images and video
  15. Implement machine-readable marking for audio content (C2PA audio or equivalent)
  16. For text generation: implement marking via document metadata, HTTP headers, or watermarking
  17. Verify marking is interoperable (readable by third-party detection tools)
  18. Assess whether "assistive editing" exemption applies — document the basis
  19. Verify that marking is applied at generation time, not optionally post-generation
  20. For GPAI model providers: confirm all downstream deployments can mark outputs
  21. Monitor CEN/CENELEC and ISO/IEC standardisation progress for AI content marking

Art.50(4) — Deep Fake Disclosure (Deployers)

  22. Identify all use cases in your product that generate or manipulate realistic person likenesses
  23. For each deep fake use case, confirm persistent on-content disclosure is implemented
  24. Validate disclosure is visible when content is viewed out of context (shared externally)
  25. Assess artistic freedom exemption only where satire/parody intent is explicit and documented
  26. Document safeguards protecting third-party rights where artistic freedom exemption is claimed
  27. Do not rely on description-only disclosure as the sole Art.50(4) compliance mechanism

Art.50(5) — Public Interest Text Disclosure (Deployers)

  28. Identify all use cases where AI generates text published for public interest purposes
  29. For each identified use case, implement visible disclosure of AI generation
  30. Document whether editorial control exemption applies — who reviewed and holds responsibility
  31. If editorial review is the basis, document the review process and editorial accountability

Art.50(6) — Exemptions Documentation

  32. Document "obviously artificial" exemption basis where relied on
  33. Document "assistive editing" exemption basis — confirm no substantial alteration of input
  34. Maintain per-content exemption records for Art.50(3)/(4)/(5) wherever exemptions are claimed

Art.50(7) — Complaint Handling

  35. Establish complaint intake process for users who believe Art.50 disclosures were missing
  36. Document complaints received and responses provided
  37. Track complaint patterns for systemic disclosure failures

Records & Retention

  38. Maintain audit logs of all Art.50 disclosures made (interaction logs, content marking records)
  39. Retain disclosure records for at least 5 years (statute of limitations for administrative violations)
  40. Store compliance records on EU-native infrastructure to avoid CLOUD Act dual-access risk

