EU AI Act Art.42: Transparency Obligations for Certain AI Systems — Chatbot Disclosure, Emotion Recognition, and Synthetic Content Labeling (2026)
Article 42 of the EU AI Act establishes transparency obligations for a specific set of AI system deployments that do not necessarily qualify as high-risk AI systems but nonetheless require disclosure to the natural persons with whom they interact or to whom their outputs are presented. While Chapter III of the Act governs comprehensive obligations for high-risk AI systems, including conformity assessments, technical documentation, and human oversight requirements, Art.42 creates a lighter but categorically mandatory layer of transparency that applies to three distinct scenarios: AI systems that interact directly with natural persons, AI systems performing emotion recognition or biometric categorisation, and AI systems that generate or manipulate synthetic content.
For software developers and AI engineers building consumer-facing applications, enterprise communication tools, content generation platforms, or any system that touches real people through AI-mediated interaction, Art.42 is one of the most practically relevant articles in the regulation. Its obligations are triggered by function — what the system does in relation to a person — rather than by risk classification, meaning they apply to a chatbot, a content generation tool, or an emotion analytics system regardless of whether that system has been designated as high-risk. Understanding Art.42 means understanding not just the disclosure requirements, but the technical implementation choices that fulfill them across three structurally different scenarios.
The Scope Logic of Art.42
Before examining each obligation, it is worth understanding why Art.42 is structured around three separate regimes rather than a single transparency requirement. The three scenarios share a common feature — they all involve AI systems that affect natural persons in ways those persons may not be able to detect without disclosure — but they differ fundamentally in the nature of the AI involvement and what information is required to make the interaction transparent.
In the first scenario, a person is actively communicating with an AI system but may believe they are communicating with a human. The transparency concern is about the counterparty identity: is this a human or a machine?
In the second scenario, a person is being assessed by an AI system reading their emotional or biometric signals in ways they may not be aware of. The transparency concern is about surveillance: is my body or behaviour being analysed by an AI?
In the third scenario, a person is consuming content — video, audio, images, text — that appears to reflect reality but has been generated or manipulated by AI. The transparency concern is about authenticity: is this real or synthetic?
Each scenario requires different disclosure content, different disclosure timing, different disclosure channels, and different technical implementation approaches. Art.42 addresses all three while also establishing which obligations fall on deployers (who put the AI into use in specific contexts) and which fall on providers (who build the underlying systems).
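This allocation can be summarised in a small reference mapping. The following is a simplification for orientation only, not a complete statement of the legal allocation of obligations, and the keys are illustrative:
ART42_REGIMES = {
    # Illustrative summary of the three Art.42 regimes, for orientation only.
    "natural_person_interaction": {
        "primary_actor": "deployer",
        "transparency_question": "Is this a human or a machine?",
    },
    "emotion_recognition_biometric_categorisation": {
        "primary_actor": "deployer",
        "transparency_question": "Is my body or behaviour being analysed by an AI?",
    },
    "synthetic_content": {
        "primary_actor": "provider (machine-readable marking) and deployer (visible labeling)",
        "transparency_question": "Is this content real or synthetic?",
    },
}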
Regime 1 — Chatbot Disclosure: Natural Person Interaction
The first transparency regime under Art.42 applies to deployers of AI systems that are intended to interact directly with natural persons. The core obligation is that deployers must ensure that natural persons interacting with such AI systems are informed that they are interacting with an AI system — except in cases where this is obvious from the circumstances or context.
The natural person interaction obligation is triggered by two conditions: the AI system is designed to interact with humans (rather than to process data in the background), and that interaction occurs in a context where the person might reasonably believe they are interacting with a human being. A customer service chatbot, a virtual assistant, a conversational AI embedded in a mobile application, and an AI agent conducting interviews or negotiations all qualify. A recommendation algorithm processing a user's viewing history does not qualify, because the algorithm does not conduct a conversational interaction with the user.
The exception for cases that are "obvious from the circumstances or context" has practical significance for clearly branded AI interfaces. Where a user has navigated to a page explicitly labelled as an AI assistant, or where the interface design, name, or prior communications make the AI nature of the system unmistakably clear, the disclosure obligation can be considered fulfilled by context. However, this exception should be applied conservatively: the bar for "obvious" should reflect the knowledge and sophistication of the actual user population, not of technically sophisticated developers or compliance reviewers. In consumer contexts, AI awareness remains inconsistent, and an interface that appears to a developer as "obviously a chatbot" may be experienced very differently by an elderly user or someone with limited technology exposure.
The disclosure obligation under the first regime must be provided:
Before the interaction begins: The person must be informed that they are interacting with an AI system before the interaction takes place, not during or after. A disclosure at the bottom of a long conversation thread, or embedded in terms of service that users are unlikely to read, does not fulfill the Art.42 obligation.
In a clear and understandable manner: The disclosure must be comprehensible to the target user population. Technical language, small print, or layered disclosure flows that require multiple clicks to reach the actual information do not meet the "clear and understandable" standard. The disclosure should be prominent, direct, and expressed in plain language appropriate to the deployment context.
Persistently accessible: For interactions of meaningful duration or complexity, the AI nature of the system should remain accessible to the person throughout the interaction, not only at the initial disclosure point. This is particularly relevant for AI systems that simulate conversational personas — where a person may begin to forget the AI nature of the counterparty over the course of an extended exchange.
For developers implementing this regime, the disclosure architecture matters as much as the disclosure content. Where the AI system operates through a human-facing interface, the disclosure should be implemented at the UI layer with clear persistent indicators. Where the AI system operates through an API or middleware layer and the disclosure obligation falls on the downstream deployer, the provider should ensure that downstream integration agreements include clear requirements for Art.42-compliant disclosure implementation.
A specific consideration arises for AI systems that are designed to simulate human personalities, voices, or personas. The use of a human-sounding name, a synthesised human voice, or a persona with claimed human characteristics creates a heightened disclosure obligation — the more convincingly human the AI presents itself, the more active and prominent the disclosure must be to counteract that impression.
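One way to enforce these requirements architecturally is to gate the chat session itself on the disclosure. The following minimal sketch (all class and field names are hypothetical) refuses to accept messages until a pre-interaction banner has been shown, and keeps a persistent UI indicator flag for the duration of the session:
from dataclasses import dataclass, field
@dataclass
class ChatSession:
    """Minimal chat session gated on an Art.42-style pre-interaction disclosure."""
    user_language: str
    simulates_human_persona: bool = False
    disclosure_shown: bool = False  # set once the pre-interaction banner has been displayed
    persistent_indicator: bool = True  # UI flag: keep the "AI assistant" label visible throughout
    messages: list[str] = field(default_factory=list)
    def show_disclosure(self, banner_text: str) -> None:
        # In a real UI this renders a prominent banner before the first exchange.
        print(banner_text)
        self.disclosure_shown = True
    def send(self, user_message: str) -> None:
        if not self.disclosure_shown:
            raise RuntimeError("Blocked: the disclosure must precede the interaction")
        self.messages.append(user_message)
session = ChatSession(user_language="en", simulates_human_persona=True)
# A human-like persona warrants a more active, explicit banner.
session.show_disclosure("You are chatting with an AI assistant. Responses are generated automatically, not by a human.")
session.send("Hello")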
Regime 2 — Emotion Recognition and Biometric Categorisation Disclosure
The second transparency regime under Art.42 applies to deployers of AI systems that perform emotion recognition or biometric categorisation from biometric data. The obligation requires that deployers inform the natural persons being subjected to these systems that emotion recognition or biometric categorisation is being performed, the categories of data being processed, and the purpose of that processing.
Emotion recognition and biometric categorisation occupy a unique position in the EU AI Act's risk architecture. Emotion recognition systems are included in Annex III as high-risk AI systems in specific contexts (employment, law enforcement, education), but Art.42 creates a disclosure obligation that applies even outside those high-risk contexts — wherever a system is analysing a person's facial expressions, physiological signals, voice patterns, or other indicators to infer emotional states or categorise them by biometric characteristics, that person must be informed.
The practical scenarios where this regime applies include retail environments using emotion analytics to assess customer reactions to products or promotions, customer service call centres using voice emotion analysis to assess caller satisfaction or stress, healthcare applications using facial expression analysis to monitor patient states, and educational platforms using attention or engagement tracking based on facial or physiological cues.
The disclosure obligation for emotion recognition and biometric categorisation must cover three elements, illustrated in the builder sketch after this list:
The existence of the processing: The person must know that an AI system is analysing their biometric signals or inferring their emotional state. This cannot be buried in a general data protection notice — it must be explicit and specific to the AI system performing the analysis.
The categories of data: The person must understand what signals are being collected and analysed. "We analyse your facial expressions" is more informative than "we collect biometric data" — the disclosure should reflect the actual data types involved, including whether audio, video, physiological sensor data, or other biometric indicators are being used.
The purpose: The person must understand what the emotion recognition or biometric categorisation is used for. Asserting a general purpose like "improving our service" without specifying what the emotional analysis data feeds into (e.g., "to assess your satisfaction with this support interaction") does not meet the specificity required.
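A minimal builder that assembles these three elements into a user-facing notice, rejecting vague inputs, might look like this; the wording and function name are illustrative only:
def build_emotion_disclosure(data_categories: list[str], purpose: str) -> str:
    """Assemble an emotion recognition notice covering existence, data, and purpose."""
    if not data_categories:
        raise ValueError("The disclosure must name the biometric signals analysed")
    if not purpose.strip() or purpose.strip().lower() == "improving our service":
        raise ValueError("The purpose must be concrete, not a generic claim")
    signals = ", ".join(data_categories)
    return (f"An AI system is analysing your {signals} to infer your emotional state, "
            f"for the following purpose: {purpose}.")
print(build_emotion_disclosure(
    ["facial expressions", "voice tone"],
    "to assess your satisfaction with this support interaction",
))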
Interaction with GDPR and the Biometric Data Framework
Biometric data for the purpose of uniquely identifying a natural person constitutes special category data under GDPR Art.9, and its processing requires explicit consent or another Article 9(2) legal basis. Even where the biometric categorisation performed under Art.42 does not involve identification (e.g., classifying a person as "appearing happy" rather than identifying them as a specific individual), the underlying biometric signals typically constitute special category data under GDPR, triggering the full GDPR regime.
The Art.42 transparency obligation is additive to, not a substitute for, GDPR obligations. An organisation deploying an emotion recognition system must both fulfill the Art.42 disclosure obligation (informing the person of the AI-based emotion analysis) and fulfill its GDPR transparency and legal basis obligations (providing a GDPR-compliant privacy notice, establishing a valid legal basis, and meeting the requirements for processing special category biometric data). These obligations must be coordinated but addressed separately — Art.42 transparency is not a substitute for Art.13/14 GDPR notices, and GDPR notices are not a substitute for Art.42 AI-specific disclosure.
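That coordination can be expressed as a gate with independent preconditions. A minimal sketch, assuming boolean flags stand in for real organisational checks:
from typing import Optional
def may_start_emotion_processing(art42_disclosure_given: bool,
                                 gdpr_notice_provided: bool,
                                 gdpr_art9_basis: Optional[str]) -> bool:
    """Both regimes must be satisfied independently; neither substitutes for the other."""
    gdpr_ok = gdpr_notice_provided and bool(gdpr_art9_basis)
    return art42_disclosure_given and gdpr_ok
# Art.42 disclosure alone is not enough: without an Art.9(2) basis, processing is blocked.
assert may_start_emotion_processing(True, True, None) is False
assert may_start_emotion_processing(True, True, "explicit_consent") is True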
Regime 3 — Synthetic Content Labeling: The Deepfake Disclosure Obligation
The third transparency regime under Art.42 addresses the growing challenge of AI-generated synthetic content — images, audio, video, and text that is generated or manipulated by AI in ways that could mislead persons into believing they are encountering genuine content. This regime differs from the first two in a crucial respect: it applies to the content itself rather than to an interaction, which means the disclosure obligation must travel with the content or be embedded in it, not merely provided at the point of initial delivery.
The Art.42 obligation for synthetic content has two components that apply to different actors:
Provider obligation — technical enabling: Providers of AI systems that generate synthetic audio, image, video, or text content which constitutes a digital representation resembling existing persons, objects, places, entities, or events in a way that would falsely appear to a person to be authentic must ensure that the outputs of those systems are marked in a machine-readable format and detectable as artificially generated or manipulated.
This provider-side obligation targets the AI system itself: the system must be designed to produce outputs that carry technically embedded markers indicating their synthetic origin. This does not mean every AI-generated image requires a visible watermark — the technical marking obligation focuses on machine-readable signals that allow downstream detection systems, platforms, and verification tools to identify AI-generated content, even when that content has been processed, shared, or presented without its original metadata.
Deployer obligation — visible labeling: Deployers using AI systems to generate synthetic content that could falsely appear authentic must label those outputs as artificially generated or manipulated in a way that is clearly visible to the natural person encountering the content.
The deployer-side obligation adds a user-visible labeling layer on top of the machine-readable technical marking. Where an AI system is used to create a video that portrays a real person saying something they did not say, or an image that depicts a real event that did not occur, or audio that replicates a real person's voice saying words they did not speak, the output must be labeled as AI-generated in a manner visible to the person viewing or listening to it.
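As one possible implementation of the deployer-side visible label for images, the following sketch stamps a caption onto a generated image using Pillow; the label text, placement, and sizing are illustrative, and a production system would need to guarantee legibility across image dimensions and backgrounds:
from PIL import Image, ImageDraw
def apply_visible_label(image: Image.Image, label: str = "AI-generated content") -> Image.Image:
    """Stamp a visible label in the lower-left corner of a generated image."""
    labelled = image.copy()
    draw = ImageDraw.Draw(labelled)
    x, y = 10, labelled.height - 30
    # Dark backing box keeps the text legible on light backgrounds.
    draw.rectangle([x - 5, y - 5, x + 8 * len(label), y + 20], fill=(0, 0, 0))
    draw.text((x, y), label, fill=(255, 255, 255))
    return labelled
synthetic = Image.new("RGB", (640, 360), color=(120, 160, 200))  # stand-in for a generated image
apply_visible_label(synthetic).save("labelled_output.png")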
What Synthetic Content Triggering Art.42 Looks Like in Practice
The Art.42 synthetic content regime does not apply to all AI-generated content — it applies specifically to content that:
- Is a digital representation of real persons, objects, places, or events (not purely fictional creations)
- Resembles those real entities closely enough that a person could be deceived into believing the content is authentic
- Has been generated or manipulated (the manipulation category captures deepfakes built on real footage as well as purely AI-generated content)
A photorealistic AI-generated image of a fictional landscape does not trigger Art.42 — there is no real entity being misrepresented. An AI-generated image of a real political leader making a speech they never gave clearly triggers it. A highly stylised cartoon version of a real person that no reasonable person would mistake for documentary footage falls outside the scope. A near-photorealistic synthetic video of a real public figure in a fabricated scenario falls squarely within it.
For developers building content generation systems, this creates a design requirement: the system must be able to determine whether its outputs constitute Art.42-triggering synthetic content and apply appropriate technical marking and visible labeling accordingly. This cannot be implemented purely as a post-generation policy check — it requires architectural integration of content classification, marking, and labeling into the generation pipeline.
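A minimal gating predicate mirroring these criteria might sit at the head of the generation pipeline; the field names are illustrative:
from dataclasses import dataclass
@dataclass
class GenerationRequest:
    depicts_real_entity: bool  # real person, object, place, or event
    plausibly_mistaken_for_authentic: bool
def triggers_art42_marking(req: GenerationRequest) -> bool:
    """True when the output requires machine-readable marking and visible labeling."""
    return req.depicts_real_entity and req.plausibly_mistaken_for_authentic
assert triggers_art42_marking(GenerationRequest(True, True)) is True
assert triggers_art42_marking(GenerationRequest(False, True)) is False  # fictional landscape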
Technical Implementation of the Marking Obligation
The provider-side technical marking obligation under Art.42 does not prescribe specific technical standards, but the EU AI Act's references to machine-readable marking align with emerging industry standards for content provenance and AI authenticity marking, particularly the Coalition for Content Provenance and Authenticity (C2PA) standard and its implementing specifications.
C2PA provides a cryptographically signed provenance infrastructure that embeds a chain-of-custody record into media files, including information about the AI systems used to generate or modify the content, the organisation responsible, and the date and nature of the modifications. Depending on implementation, C2PA manifests can survive certain types of processing (format conversion, compression) in ways that allow verification tools to recover and validate the provenance data.
For image content, technical marking options include:
C2PA manifest embedding: A cryptographically signed metadata package embedded in the image file structure (JPEG, PNG, WebP, HEIC) containing provenance information. Compatible with Adobe Content Authenticity Initiative and a growing ecosystem of verification tools.
Steganographic watermarking: Invisible watermarks embedded in the pixel data that survive certain transformations (compression, resizing, format conversion) and can be detected by dedicated detection tools. Examples include the invisible watermarking techniques developed by Google DeepMind and others specifically for AI-generated image marking.
Metadata-based marking: EXIF/IPTC metadata fields indicating AI generation, though this approach is fragile, as metadata is easily stripped by image processing pipelines and social media platforms; a minimal sketch follows this list.
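The metadata approach from the last item can be sketched in a few lines using Pillow's PNG text chunks. As noted, such marking does not survive pipelines that strip metadata, so it should complement rather than replace more robust methods:
from PIL import Image
from PIL.PngImagePlugin import PngInfo
def save_with_ai_metadata(image: Image.Image, path: str, generator: str) -> None:
    """Embed machine-readable AI-provenance fields as PNG text chunks."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    image.save(path, pnginfo=meta)
img = Image.new("RGB", (64, 64))  # stand-in for a generated output
save_with_ai_metadata(img, "marked_output.png", "example-image-model")  # hypothetical generator name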
For audio and video content, equivalent approaches include audio steganographic watermarking in the frequency domain (robust to compression and re-encoding), video-level metadata embedding in container formats (MP4, MKV), and frame-level watermarking in video content.
For text content, technical marking is more challenging because text lacks the structured binary format that supports robust steganographic embedding. Current approaches include metadata tagging (MIME type, HTTP headers, document metadata) and statistical watermarking techniques that encode marking signals in the probability distributions of generated tokens — though the latter techniques remain an area of active development and are not yet standardised.
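For text delivered over HTTP, metadata tagging can be as simple as attaching response headers; the header names below are illustrative, not standardised fields:
AI_MARKING_HEADERS = {
    # Hypothetical header names; the Act does not mandate any standard field.
    "X-AI-Generated": "true",
    "X-AI-Generator": "example-text-model",
}
def with_ai_marking(headers: dict[str, str]) -> dict[str, str]:
    """Merge AI-provenance marking into an outgoing response's headers."""
    return {**headers, **AI_MARKING_HEADERS}
print(with_ai_marking({"Content-Type": "text/plain; charset=utf-8"}))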
The Art.42 obligation requires that technical marking be machine-readable and detectable. This does not require that every platform consuming AI-generated content actually detect it — the obligation is on the provider of the AI system to ensure its outputs carry the marking, not on every downstream consumer to implement detection. However, the practical value of machine-readable marking depends on the existence of detection infrastructure that downstream platforms, fact-checkers, and content authentication services can use.
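The detection side of the metadata sketch above is equally small: a verifier reopens the file and checks for the marking fields, which works only as long as the metadata survives:
from PIL import Image
def is_marked_ai_generated(path: str) -> bool:
    """Read back the PNG text chunks written by save_with_ai_metadata above."""
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"
print(is_marked_ai_generated("marked_output.png"))  # True for the file saved above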
Exceptions: When Art.42 Obligations Do Not Apply
Art.42 establishes narrow exceptions to the transparency obligations, reflecting the need to permit certain authorised uses of AI systems without disclosure. Each exception requires specific authorisation or a qualifying context.
Law enforcement and national security: Competent authorities undertaking activities for law enforcement purposes, including authorised testing and the deployment of AI systems in surveillance and intelligence activities authorised by national law, are not subject to the Art.42 obligations where those obligations would interfere with the authorised activity. This exception is not a blanket exemption for all law enforcement use of AI — it applies specifically where disclosure would undermine the lawful purpose of an authorised activity, and it is subject to oversight and accountability requirements under the Act's Chapter on law enforcement use.
Authorised security testing: AI systems deployed for authorised security testing, including penetration testing, red teaming, and testing of AI systems themselves, may operate without the Art.42 disclosures where such disclosures would undermine the testing purpose. A red team testing whether a chatbot can be manipulated into harmful responses needs to interact with the system as a real user would, which requires that the chatbot operate normally rather than in a special "being tested" mode.
Journalistic expression and artistic freedom: The Art.42 labeling obligation for synthetic content does not apply where the synthetic content is used in the context of journalistic expression, artistic expression, or satire, provided that the artificial nature of the content is clearly indicated in a way appropriate to that context. A news organisation using synthetic images to illustrate historical scenarios it cannot photograph must label them, but the labeling approach appropriate to a news article context (an editorial caption) differs from what would be required for a social media post claiming to depict a real event. Satire and artistic works using AI-generated imagery or audio may use contextual labeling (framing within the work, accompanying commentary) rather than explicit technical marking, provided the synthetic nature is clear to the audience within that context.
The journalistic and artistic expression exception requires particular care in implementation. The exception applies to the work of journalism and artistic expression, not to all content that claims to be journalism or art. A deepfake video designed to deceive voters about a political candidate's statements does not become exempt from Art.42 by being labelled "political satire" — the exception requires genuine journalistic or artistic purpose and context, and the labeling within that context must genuinely serve the transparency goal.
Art.42 Cross-Reference Architecture
Understanding Art.42 in isolation understates its regulatory significance. The transparency obligations interact with other provisions across the EU AI Act and with provisions of other EU regulations in ways that multiply compliance requirements.
Art.42 × Art.26 (Obligations of deployers): Art.26 establishes the general obligations of deployers of high-risk AI systems, including monitoring obligations and obligations to inform providers when the system is not performing as expected. Where an AI system subject to Art.42 is also a high-risk AI system under Art.6 and Annex III, the deployer must fulfill both the Art.42 transparency obligations and the broader Art.26 deployer obligations. The interaction matters particularly for emotion recognition in high-risk contexts (employment screening, law enforcement) where the Art.42 disclosure is one element of a more comprehensive compliance regime.
Art.42 × Art.50 (AI literacy): Art.50 requires providers and deployers to take measures to ensure sufficient AI literacy among staff deploying AI systems. The Art.42 transparency obligations assume that staff responsible for deploying AI-interactive systems, emotion recognition systems, and content generation systems understand what constitutes a compliant disclosure — AI literacy training should include Art.42-specific content for staff operating in these deployment contexts.
Art.42 × GDPR Art.22 (Automated decision-making): Where an AI system performing emotion recognition or biometric categorisation uses its outputs as the basis for decisions that produce legal effects or significantly affect persons — employment decisions, credit decisions, insurance underwriting — the processing may constitute automated decision-making subject to GDPR Art.22 restrictions. The Art.22 right not to be subject to solely automated decisions, combined with the Art.42 transparency obligation, creates a layered regime requiring both disclosure of the AI system's operation and, where Art.22 applies, the right to human review and explanation of the decision.
Art.42 × GDPR Art.9 (Special categories of personal data): As noted in the discussion of the second regime, biometric data processed for the purpose of uniquely identifying natural persons, and health data used in emotion recognition systems, typically constitute special category data under GDPR Art.9, requiring explicit consent or another Art.9(2) legal basis. The Art.42 transparency obligation does not substitute for or satisfy the GDPR legal basis requirement — both must be addressed.
Art.42 × Art.5 (Prohibitions): Some deployments that might otherwise appear to fall within Art.42's scope for biometric categorisation are prohibited outright under Art.5 of the EU AI Act. Real-time remote biometric identification systems used in publicly accessible spaces by law enforcement for purposes other than those listed in Art.5(1)(h) are prohibited, not merely subject to transparency obligations. The interaction between the Art.42 transparency obligations and the Art.5 prohibitions requires careful mapping: Art.42 applies to what is lawful, not to what is prohibited.
Python Implementation: TransparencyDisclosureManager
The following Python implementation provides a framework for managing Art.42 transparency obligations across the three regimes:
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
import hashlib
import json
from datetime import datetime, timezone
class TransparencyRegime(Enum):
NATURAL_PERSON_INTERACTION = "natural_person_interaction"
EMOTION_RECOGNITION = "emotion_recognition"
BIOMETRIC_CATEGORISATION = "biometric_categorisation"
SYNTHETIC_CONTENT = "synthetic_content"
class DisclosureStatus(Enum):
COMPLIANT = "compliant"
PENDING = "pending"
EXEMPT = "exempt"
NON_COMPLIANT = "non_compliant"
class ExemptionType(Enum):
LAW_ENFORCEMENT_AUTHORISED = "law_enforcement_authorised"
SECURITY_TESTING = "security_testing"
JOURNALISTIC_EXPRESSION = "journalistic_expression"
ARTISTIC_EXPRESSION = "artistic_expression"
OBVIOUS_FROM_CONTEXT = "obvious_from_context"
@dataclass
class NaturalPersonInteractionDisclosure:
disclosure_text: str
disclosure_placement: str # e.g. "pre_interaction_banner", "persistent_ui_indicator"
is_prominent: bool
is_prior_to_interaction: bool
user_language: str
ai_persona_name: Optional[str] = None
simulates_human_voice: bool = False
def validate(self) -> list[str]:
issues = []
if not self.is_prior_to_interaction:
issues.append("Disclosure must be provided before interaction begins (Art.42(1)(a))")
if not self.is_prominent:
issues.append("Disclosure must be clear and prominently displayed")
if self.simulates_human_voice and self.disclosure_placement == "terms_of_service":
issues.append("Human-voice AI requires active disclosure beyond ToS embedding")
return issues
@dataclass
class EmotionRecognitionDisclosure:
system_description: str
data_categories: list[str] # e.g. ["facial_expressions", "voice_tone", "physiological_signals"]
processing_purpose: str
biometric_data_legal_basis: str # GDPR Art.9(2) legal basis
is_disclosed_before_processing: bool
gdpr_notice_provided: bool
def validate(self) -> list[str]:
issues = []
if not self.is_disclosed_before_processing:
issues.append("Emotion recognition disclosure must occur before processing begins")
if not self.gdpr_notice_provided:
issues.append("GDPR Art.13/14 notice required in addition to Art.42 disclosure")
if not self.biometric_data_legal_basis:
issues.append("GDPR Art.9(2) legal basis required for biometric/health data processing")
if not self.data_categories:
issues.append("Disclosure must specify categories of biometric data being analysed")
return issues
@dataclass
class SyntheticContentMarking:
content_type: str # "image", "audio", "video", "text"
technical_marking_method: str # e.g. "c2pa_manifest", "steganographic_watermark", "metadata"
visible_label_text: str
visible_label_placement: str
depicts_real_entity: bool
could_be_mistaken_for_authentic: bool
c2pa_manifest_hash: Optional[str] = None
def validate(self) -> list[str]:
issues = []
if self.depicts_real_entity and self.could_be_mistaken_for_authentic:
if not self.technical_marking_method:
issues.append("Machine-readable marking required for synthetic content depicting real entities")
if not self.visible_label_text:
issues.append("Visible labeling required for synthetic content that may appear authentic")
if self.technical_marking_method == "metadata" and self.content_type in ["image", "video"]:
issues.append("Metadata-only marking is fragile; prefer C2PA or steganographic marking")
return issues
    def generate_c2pa_manifest(self, generator_info: dict) -> str:
        # Illustrative stand-in only: a conformant C2PA manifest is a signed
        # binary (JUMBF) structure produced by a C2PA SDK, not plain JSON.
manifest = {
"claim": {
"recorder": generator_info.get("system_name", ""),
"claim_generator": generator_info.get("system_version", ""),
"created_at": datetime.utcnow().isoformat(),
"ai_generated": True,
"depicts_real_entity": self.depicts_real_entity,
"content_type": self.content_type,
}
}
manifest_json = json.dumps(manifest, sort_keys=True)
self.c2pa_manifest_hash = hashlib.sha256(manifest_json.encode()).hexdigest()
return manifest_json
@dataclass
class Art42ComplianceRecord:
system_id: str
deployment_context: str
regimes_applicable: list[TransparencyRegime]
natural_person_disclosure: Optional[NaturalPersonInteractionDisclosure] = None
emotion_disclosure: Optional[EmotionRecognitionDisclosure] = None
synthetic_content_marking: Optional[SyntheticContentMarking] = None
exemptions_claimed: list[ExemptionType] = field(default_factory=list)
exemption_authorisation_reference: Optional[str] = None
    last_reviewed: Optional[str] = None
    compliance_issues: list[str] = field(default_factory=list)
def assess_compliance(self) -> dict[str, DisclosureStatus]:
results: dict[str, DisclosureStatus] = {}
all_issues: list[str] = []
for regime in self.regimes_applicable:
            if self.exemptions_claimed_for_regime(regime):
results[regime.value] = DisclosureStatus.EXEMPT
continue
if regime == TransparencyRegime.NATURAL_PERSON_INTERACTION:
if not self.natural_person_disclosure:
results[regime.value] = DisclosureStatus.NON_COMPLIANT
all_issues.append("No natural person interaction disclosure configured")
else:
issues = self.natural_person_disclosure.validate()
results[regime.value] = DisclosureStatus.COMPLIANT if not issues else DisclosureStatus.NON_COMPLIANT
all_issues.extend(issues)
elif regime in (TransparencyRegime.EMOTION_RECOGNITION, TransparencyRegime.BIOMETRIC_CATEGORISATION):
if not self.emotion_disclosure:
results[regime.value] = DisclosureStatus.NON_COMPLIANT
all_issues.append("No emotion recognition / biometric categorisation disclosure configured")
else:
issues = self.emotion_disclosure.validate()
results[regime.value] = DisclosureStatus.COMPLIANT if not issues else DisclosureStatus.NON_COMPLIANT
all_issues.extend(issues)
elif regime == TransparencyRegime.SYNTHETIC_CONTENT:
if not self.synthetic_content_marking:
results[regime.value] = DisclosureStatus.NON_COMPLIANT
all_issues.append("No synthetic content marking configured")
else:
issues = self.synthetic_content_marking.validate()
results[regime.value] = DisclosureStatus.COMPLIANT if not issues else DisclosureStatus.NON_COMPLIANT
all_issues.extend(issues)
        # Persist the detailed issue list so callers can inspect specific failures.
        self.compliance_issues = all_issues
        return results
def exemptions_claimed_for_regime(self, regime: TransparencyRegime) -> list[ExemptionType]:
if ExemptionType.LAW_ENFORCEMENT_AUTHORISED in self.exemptions_claimed:
return self.exemptions_claimed
if regime == TransparencyRegime.NATURAL_PERSON_INTERACTION:
if ExemptionType.OBVIOUS_FROM_CONTEXT in self.exemptions_claimed:
return [ExemptionType.OBVIOUS_FROM_CONTEXT]
if regime == TransparencyRegime.SYNTHETIC_CONTENT:
for exemption in [ExemptionType.JOURNALISTIC_EXPRESSION, ExemptionType.ARTISTIC_EXPRESSION]:
if exemption in self.exemptions_claimed:
return [exemption]
return []
def generate_compliance_summary(self) -> str:
status_map = self.assess_compliance()
lines = [
f"Art.42 Compliance Summary — System: {self.system_id}",
f"Deployment context: {self.deployment_context}",
f"Last reviewed: {self.last_reviewed or 'Not reviewed'}",
"",
]
for regime, status in status_map.items():
indicator = "✓" if status == DisclosureStatus.COMPLIANT else ("⊘" if status == DisclosureStatus.EXEMPT else "✗")
lines.append(f" {indicator} {regime}: {status.value}")
return "\n".join(lines)
Art.42 Implementation Checklist
Deployers and providers should work through the following checklist to assess and demonstrate Art.42 compliance across their AI deployments.
Regime 1: Natural Person Interaction
- Have you identified all AI systems in your deployment that interact directly with natural persons?
- For each such system, is a disclosure provided that the person is interacting with an AI before the interaction begins?
- Is the disclosure prominent, clear, and expressed in plain language appropriate to your user population?
- If the system simulates a human name, voice, or persona, is the AI disclosure sufficiently active and prominent to counteract that impression?
- Does the "obvious from context" exception genuinely apply to your deployment, based on the actual experience of your target user population rather than technically-informed observers?
Regime 2: Emotion Recognition and Biometric Categorisation
- Have you identified all AI systems in your deployment that perform emotion recognition or biometric categorisation?
- Does your disclosure specify the categories of biometric data being collected and analysed, not just the fact of AI analysis?
- Does your disclosure specify the purpose of the emotion recognition or biometric categorisation in concrete terms?
- Is the disclosure provided before the biometric processing begins, not at or after the point of capture?
- Have you coordinated Art.42 disclosure with your GDPR Art.13/14 transparency obligations for the same processing?
- Have you identified the GDPR Art.9(2) legal basis for any biometric special category data processing, separately from the Art.42 disclosure?
- Where the emotion recognition outputs feed into decisions with legal or significant effects, have you assessed GDPR Art.22 automated decision-making requirements?
Regime 3: Synthetic Content
- As a provider of AI systems generating synthetic content, have you implemented machine-readable technical marking (C2PA, steganographic watermarking, or equivalent) in your generation outputs?
- Does your marking implementation cover all content types your system generates (images, audio, video, text)?
- Is the technical marking implementation robust to common processing operations (compression, format conversion, resizing) that your content may undergo after generation?
- As a deployer using AI to generate synthetic content depicting real persons, events, or places, have you implemented visible labeling on all outputs that could be mistaken for authentic content?
- Is your visible labeling system integrated into the content generation pipeline (not a post-hoc manual process) to ensure consistent application?
Exemptions and Special Cases
- If you are claiming the journalistic or artistic expression exception, have you documented the basis for that claim and ensured that the synthetic nature of the content is clearly indicated within the journalistic or artistic context?
- If you are claiming the obvious-from-context exception for natural person interaction, have you tested that claim with actual members of your target user population?
- If your AI system is subject to both Art.42 transparency obligations and separate obligations under Chapter III (high-risk AI systems), have you documented how the two compliance regimes are addressed in coordination rather than treating them as alternatives?
Enforcement Landscape and Practical Priorities
Market surveillance authorities and data protection supervisory authorities share enforcement jurisdiction for Art.42 — the former for the AI-specific transparency obligations under the EU AI Act, the latter for the GDPR dimensions of biometric and emotion data processing. This dual enforcement architecture means that Art.42 compliance failures can trigger both AI Act enforcement proceedings and GDPR enforcement proceedings for the same underlying system deployment.
The practical enforcement risk profile differs across the three regimes. For natural person interaction disclosure, the obligation is simple to check — a regulator can interact with the system directly and assess whether disclosure is provided. Non-compliance is highly visible. For emotion recognition, enforcement typically requires a complaint from an affected person, audit activity, or incident investigation. For synthetic content, enforcement risk is concentrated in high-profile cases involving political deepfakes, non-consensual intimate deepfakes, and commercial fraud, where the harm is clear and the public interest in enforcement is strong.
For developers, the practical priorities are to implement chatbot disclosure at the architectural level (not as an afterthought), to integrate synthetic content marking into generation pipelines from the outset (retrofitting marking is technically harder and creates gaps in the provenance record), and to coordinate Art.42 and GDPR compliance in a single data protection and AI compliance review process rather than treating them as independent workstreams.
Article 42 is, in the end, an article about trust. The obligations it creates are not technically burdensome compared to the documentation, conformity assessment, and monitoring requirements for high-risk AI systems. But they reflect a foundational principle: people who interact with AI systems, who are assessed by AI systems, and who encounter AI-generated content deserve to know what they are dealing with. Building that transparency into systems from the start is both a legal requirement and a design choice that shapes the nature of the human-AI relationship being created.