EU AI Act Art.6 High-Risk AI Systems: Developer Guide (Annex III Obligations 2026)
Article 6 of the EU AI Act (Regulation 2024/1689) is the provision that determines whether your AI system becomes subject to the Act's most demanding compliance obligations — risk management systems, training data governance, technical documentation, logging, transparency, human oversight, accuracy requirements, and mandatory conformity assessment.
If your system qualifies as high-risk under Art.6, you have until 2 August 2026 to comply with Art.9-15. If you build AI for regulated products (medical devices, machinery, vehicles), the deadline is 2 August 2027. Miss these and you face market withdrawal or prohibition and fines of up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher (Art.99(4)).
This guide covers both classification routes, the eight Annex III categories in full, all Art.9-15 technical obligations, the conformity assessment paths, and what deploying on EU-native infrastructure means for your high-risk AI pipeline.
The AI Act Applicability Timeline (High-Risk Context)
| Date | Milestone |
|---|---|
| 02.02.2025 | Art.5 Prohibited Practices — fully applicable |
| 02.08.2025 | GPAI obligations (Art.51-56) — fully applicable |
| 02.08.2026 | Art.6(2) Annex III High-Risk AI — fully applicable |
| 02.08.2027 | Art.6(1) Annex I (regulated products) — fully applicable |
High-risk AI under Annex III must be compliant before the August 2026 deadline. The 24-month runway from the Act's entry into force (August 2024) to Annex III applicability ends in Q3 2026. National market surveillance authorities (MSAs) will begin enforcement from August 2026.
Article 6: Two Routes to High-Risk Classification
Route 1 — Art.6(1): Safety Components in Regulated Products (Annex I)
Art.6(1) applies to AI systems that are safety components of products regulated under EU harmonisation legislation listed in Annex I. These products must already undergo third-party conformity assessment under their sector legislation.
Annex I regulated product categories:
- Machinery (Directive 2006/42/EC + Regulation 2023/1230)
- Toys (Directive 2009/48/EC)
- Recreational craft and personal watercraft (Directive 2013/53/EU)
- Lifts (Directive 2014/33/EU)
- ATEX equipment for explosive atmospheres (Directive 2014/34/EU)
- Radio equipment (Directive 2014/53/EU)
- Pressure equipment (Directive 2014/68/EU)
- In vitro diagnostic medical devices (Regulation 2017/746/IVDR)
- Medical devices (Regulation 2017/745/MDR) — high impact for health AI
- Civil aviation (Regulation 2018/1139)
- Motor vehicles (Regulation 2019/2144)
- Agricultural and forestry vehicles (Regulation (EU) No 167/2013)
Critical: An AI system is an Art.6(1) high-risk system if:
- It is a safety component of a product in Annex I; AND
- The product (or its safety component) must undergo third-party conformity assessment under Annex I legislation
Example: A computer vision system in a medical device (MDR Class IIb) that determines treatment recommendations is an Art.6(1) high-risk AI. The MDR conformity assessment applies, AND the AI Act high-risk obligations apply in parallel.
Deadline for Art.6(1): 2 August 2027 (one year later than Annex III).
Route 2 — Art.6(2): Annex III Direct Listing
Art.6(2) applies to AI systems directly listed in Annex III of the AI Act, regardless of the product they are embedded in. This is the provision that covers most software-first AI applications.
Annex III has 8 categories:
| # | Category | Examples |
|---|---|---|
| 1 | Biometrics | Remote biometric ID, biometric categorisation, emotion recognition |
| 2 | Critical Infrastructure | Safety components in electricity, gas, water, transport, digital infrastructure |
| 3 | Education and Vocational Training | Admissions decisions, student evaluation, exam proctoring AI |
| 4 | Employment | Recruitment/CV screening, promotion, performance monitoring AI |
| 5 | Essential Private and Public Services | Credit scoring, insurance pricing, benefits eligibility, emergency service dispatching |
| 6 | Law Enforcement | Individual risk assessment in criminal proceedings, lie detector AI, evidence evaluation |
| 7 | Migration, Asylum, Border Control | Visa risk assessment, asylum document authentication, border surveillance |
| 8 | Administration of Justice and Democratic Processes | AI assisting judges, electoral infrastructure AI |
Each category has its own scope nuances. Developers need to map their system to the specific language in Annex III, not just the category headings.
Annex III Category Deep-Dive
Category 1: Biometrics
Three subcategories under Art.6(2) + Annex III(1):
1a — Remote Biometric Identification (RBI): Real-time or post-hoc identification of natural persons using biometric data in publicly accessible spaces. Art.5(1)(h) already bans real-time RBI in publicly accessible spaces for law enforcement purposes, with three narrow exceptions. Post-hoc RBI (checking recorded footage against a database) is high-risk under Annex III(1)(a).
1b — Biometric Categorisation: AI that classifies individuals into categories according to sensitive or protected attributes, where the categorisation is done on the basis of biometric data. Beware the Art.5(1)(g) overlap: systems that infer race, political opinions, religious or philosophical beliefs, sex life, or sexual orientation from biometric data are prohibited outright, not merely high-risk. Note: GDPR Art.9 simultaneously applies — explicit consent or an Art.9(2) exception is required.
1c — Emotion Recognition: AI systems that infer emotional states from biometric data. Note the intersection with Art.5(1)(f) — emotion recognition in workplaces and educational settings is prohibited outright. High-risk classification under Annex III(1)(c) applies to permitted emotion recognition use cases (e.g., driver drowsiness detection in vehicles).
Category 2: Critical Infrastructure
AI systems that are safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity (Annex III(2)). The same infrastructure is largely covered by the NIS2 Directive (2022/2555), creating a direct NIS2 × AI Act intersection:
- NIS2 Art.21 ICT risk management obligations apply to the infrastructure operator
- AI Act Annex III high-risk obligations apply to the AI system provider
- One AI system, two regulatory regimes, two sets of obligations
Covered sectors: electricity, gas, water, wastewater, district heating, digital infrastructure (IXPs, DNS, TLD registries, CSPs, CDNs), transport (air, rail, water, road), health, public administration, space.
Category 3: Education and Vocational Training
AI used to determine access to or assignment within educational/vocational training institutions, to evaluate learning outcomes that affect educational progression, or to monitor and detect prohibited conduct during assessments.
Developer note: Student proctoring AI (webcam monitoring, eye-tracking, keystroke analysis) used to detect cheating or impersonation in formal examinations is Annex III Category 3 high-risk. This is one of the most commercially contested categories because edtech companies dispute whether their proctoring tools "determine" outcomes or merely assist human reviewers.
The AI Act's position: if the AI output materially influences the decision (passing/failing a student), the system is high-risk regardless of nominal "human oversight" claims.
Category 4: Employment
The most commercially significant category for enterprise software developers. Annex III(4) covers:
- Recruitment: CV screening, initial application filtering, job interview assessment tools, candidate ranking AI
- Employment decisions: AI used for promotion, dismissal, performance evaluation, task allocation and monitoring
- Workforce monitoring: AI that monitors employee behavior, performance, or contractual obligations at individual level
Key determination question: Does the AI system make or materially influence decisions about individuals? Job-matching platforms, HR analytics, performance management tools, and workforce scheduling AI all need to answer this.
Scope note: Annex III(4) applies to AI used in the context of employment and work-related relationships — including gig-economy platforms where workers are not formally employees. The AI Act's wording deliberately covers platform work.
Category 5: Essential Private and Public Services
This category covers AI that determines access to essential services that people depend on:
- Credit and insurance: AI scoring systems used in individual credit decisions (loan approval, credit limit), insurance pricing (risk assessment, premium calculation), and insurance underwriting
- Public benefits: AI that determines eligibility for social benefits, public housing allocation, emergency services prioritisation
- Utilities: AI used to determine access to electricity, water, gas, or district heating at individual level
GDPR Art.22 intersection: Automated decision-making that produces "legal or similarly significant effects" already requires a GDPR Art.22 basis (contract necessity, authorisation by Union or Member State law, or explicit consent), plus safeguards such as human intervention and the right to contest the decision. High-risk AI under Annex III(5) adds the AI Act's full Art.9-15 obligations on top.
Category 6: Law Enforcement
Art.6(2) Annex III(6) applies to AI used by or on behalf of law enforcement authorities. Three subcategories:
- Individual risk assessment: AI that assesses individual criminal risk, recidivism risk, or the risk of becoming a victim of crime — used by police, courts, or prosecution
- Polygraph and lie detection: AI used as lie detectors or reliability assessors for statements or evidence
- Evidence evaluation: AI used to evaluate the reliability of evidence in criminal proceedings
Note: This category overlaps with Art.5(1)(d)'s prohibition on predicting criminal offences based solely on profiling or personality-trait assessment. Systems that are prohibited under Art.5 are not merely high-risk — they are banned entirely. But risk-assessment tools that support human assessment based on objective, verifiable facts directly linked to criminal activity remain high-risk under Annex III(6).
Category 7: Migration, Asylum, Border Control
Annex III(7) covers AI used by or on behalf of competent authorities in migration and border management:
- Lie detection / reliability assessment for applicants (asylum, visa, border crossing)
- Risk assessment of irregular migration risk
- Document authenticity assessment for asylum or visa applications
- Processing and examination of asylum and visa applications (including eligibility determination)
High GDPR sensitivity: biometric data is special category data under Art.9 GDPR. AI systems processing biometric data for migration purposes face the GDPR Art.9 + AI Act Annex III dual compliance burden.
Category 8: Administration of Justice and Democratic Processes
The final Annex III category covers:
- AI assisting judicial authorities in researching and interpreting facts and law, and applying the law to specific cases
- AI influencing elections: electoral advertising targeting systems, microtargeting AI, automated political messaging
Electoral AI: The intersection with the Digital Services Act (DSA) and the Political Advertising Regulation (EU) 2024/900 is direct. Very large online platforms face DSA Art.34 systemic risk assessments for AI used in elections, plus AI Act Annex III high-risk obligations if the system determines targeting or delivery of political content.
Article 6(3) — The Opt-Out Mechanism (Important for Providers)
Art.6(3) creates a narrow exception: if an Annex III listed AI system does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making, the provider may designate it as not high-risk. The provider must document the assessment before market placement (Art.6(4)), register the system in the EU database (Art.49(2)), and hand the documentation to national competent authorities on request — and the designation can be challenged.
Practice: This is not a blanket self-certification mechanism. Art.6(3) lists four conditions, at least one of which must be fulfilled:
- The AI system performs a narrow procedural task
- It improves the result of a previously completed human activity
- It detects decision-making patterns or deviations from prior patterns, without replacing or influencing the completed human assessment without proper review
- It performs a preparatory task to an assessment relevant for the Annex III use cases
One hard carve-out: an Annex III system that performs profiling of natural persons is always high-risk, regardless of these conditions. Most commercially meaningful Category 4-5 systems will not qualify for the Art.6(3) opt-out.
Art.9-15: The High-Risk AI Obligations
Once classified as high-risk, Art.9-15 impose seven categories of technical and governance obligations:
Art.9 — Risk Management System
Core requirement: A continuous, iterative risk management system for the entire lifecycle of the high-risk AI system (design, development, testing, deployment, post-market monitoring).
The risk management system must:
- Identify and analyse known and foreseeable risks associated with the AI system (Art.9(2)(a))
- Estimate and evaluate risks that may emerge from intended use (Art.9(2)(b))
- Evaluate risks from reasonably foreseeable misuse (Art.9(2)(c))
- Adopt risk management measures to address the risks (Art.9(4))
Residual risk standard (Art.9(4)): After applying risk mitigation measures, the residual risk must be judged acceptable. If the system cannot achieve acceptable residual risk levels, it cannot be placed on the market.
# Art.9 Risk Register — Minimal Implementation Pattern
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
class RiskSeverity(Enum):
CRITICAL = "critical" # harm to fundamental rights / safety
HIGH = "high" # significant adverse effect on individual
MEDIUM = "medium" # limited but real impact
LOW = "low" # marginal impact
class RiskStatus(Enum):
IDENTIFIED = "identified"
MITIGATED = "mitigated"
RESIDUAL = "residual" # accepted residual risk
ELIMINATED = "eliminated"
@dataclass
class HighRiskAIRiskEntry:
risk_id: str
description: str
affected_rights: list[str] # e.g., ["non-discrimination", "privacy"]
severity: RiskSeverity
probability: str # "likely", "possible", "unlikely"
mitigation_measures: list[str]
residual_severity: RiskSeverity
residual_accepted: bool
review_date: datetime
lifecycle_stage: str # "design", "development", "testing", "deployment", "monitoring"
    status: RiskStatus = RiskStatus.IDENTIFIED  # updated as mitigation work progresses
    article_reference: str = "Art.9 EU AI Act 2024/1689"
class HighRiskRiskManagementSystem:
"""
Implements Art.9 continuous risk management for high-risk AI systems.
Lifecycle-spanning, iterative, documented per Art.9(3).
"""
def __init__(self, system_id: str, annex_iii_category: str):
self.system_id = system_id
self.annex_iii_category = annex_iii_category
self.risks: list[HighRiskAIRiskEntry] = []
        self.created_at = datetime.now(timezone.utc)
        self.last_updated = datetime.now(timezone.utc)
def add_risk(self, entry: HighRiskAIRiskEntry) -> None:
self.risks.append(entry)
        self.last_updated = datetime.now(timezone.utc)
def get_unacceptable_residual_risks(self) -> list[HighRiskAIRiskEntry]:
"""Identify risks where residual risk is not accepted — system cannot go live."""
return [r for r in self.risks
if r.status == RiskStatus.RESIDUAL and not r.residual_accepted]
def generate_art9_report(self) -> dict:
return {
"system_id": self.system_id,
"annex_iii_category": self.annex_iii_category,
"regulation": "EU AI Act 2024/1689 Art.9",
"total_risks": len(self.risks),
"critical_risks": len([r for r in self.risks if r.severity == RiskSeverity.CRITICAL]),
"accepted_residual_risks": len([r for r in self.risks if r.residual_accepted]),
"blocking_risks": len(self.get_unacceptable_residual_risks()),
"last_updated": self.last_updated.isoformat(),
"lifecycle_coverage": list(set(r.lifecycle_stage for r in self.risks)),
}
Art.10 — Training, Validation, and Testing Data
Core requirement: High-risk AI systems trained on data must use data governance and management practices that ensure data quality.
Key obligations:
- Relevance and representativeness: Training data must be relevant and sufficiently representative of the intended users and use contexts (Art.10(3))
- Bias examination: Before training, data must be examined for possible biases that could lead to discriminatory outcomes (Art.10(3))
- Data gaps: Providers must identify and address data gaps (Art.10(4)) — permitted to process special category data (Art.9 GDPR) solely for bias detection and correction under strict conditions (Art.10(5))
- Documentation: Data sources, collection methods, preprocessing, labelling, and assumptions must be documented in technical documentation (Art.11 + Annex IV)
# Art.10 Training Data Governance Record
from datetime import datetime, timezone
from typing import Optional
def create_art10_data_governance_record(
dataset_name: str,
source_description: str,
collection_method: str,
geographic_coverage: list[str],
temporal_range: str,
labeling_methodology: str,
known_limitations: list[str],
bias_examination_results: dict,
special_category_data_used: bool,
special_category_purpose: Optional[str] = None,
) -> dict:
"""
Art.10(2) data governance record for high-risk AI training data.
Must be included in Annex IV technical documentation.
"""
record = {
"regulation_article": "Art.10 EU AI Act 2024/1689",
"created_at": datetime.utcnow().isoformat(),
"dataset_name": dataset_name,
"source": source_description,
"collection_method": collection_method,
"geographic_coverage": geographic_coverage,
"temporal_range": temporal_range,
"labeling": labeling_methodology,
"known_limitations_and_gaps": known_limitations,
"bias_examination": bias_examination_results,
"special_category_data": {
"used": special_category_data_used,
"purpose": special_category_purpose,
"legal_basis": "Art.10(5) EU AI Act — bias detection/correction only" if special_category_data_used else None,
},
}
return record
Art.11 — Technical Documentation
Core requirement: Providers must prepare comprehensive technical documentation before placing the system on the market. The content is specified in Annex IV and includes:
- General description of the AI system (purpose, intended use, versions)
- Description of elements of the AI system and its development process
- Detailed information about the monitoring, functioning, and control of the system
- Description of risk management system (Art.9)
- Training and validation/testing data description (Art.10)
- Description of human oversight measures (Art.14)
- Performance metrics, accuracy, robustness and cybersecurity measures (Art.15)
- Post-market monitoring plan (Art.72)
Technical documentation must be kept for 10 years after placing on the market (Art.18(1)).
Art.12 — Record-Keeping and Logging
Core requirement: High-risk AI systems must automatically generate logs throughout their operation. These logs enable post-hoc reconstruction of the system's functioning and are required to be retained.
Logging must capture:
- Period of each use (start/end)
- Reference database against which the system was checked (for biometric systems)
- Input data used
- The persons responsible for verifying outputs (for systems subject to human oversight)
CLOUD Act × Art.12 conflict: If your AI system's logs are stored on US cloud infrastructure (AWS, Azure, GCP), the US government can compel access to those logs under the CLOUD Act without notifying the EU data subject. For Annex III Category 6 (law enforcement AI) and Category 7 (migration AI), this creates a jurisdiction conflict — EU enforcement authorities expect exclusive access to evidence records, while US cloud storage creates a parallel access path.
EU-native solution: Art.12 logs stored in EU-based infrastructure (German, French, or other EU member state data centres) face no CLOUD Act exposure. For legally sensitive Annex III applications, this is not a compliance preference — it is a defensible legal position.
# Art.12 Structured Logging for High-Risk AI Systems
import logging
import json
from datetime import datetime, timezone
from typing import Optional
import uuid
class HighRiskAILogger:
"""
Art.12-compliant logging for high-risk AI systems.
Captures input context, output, human reviewer, and session metadata.
"""
def __init__(self, system_id: str, annex_iii_category: str, log_path: str):
self.system_id = system_id
self.annex_iii_category = annex_iii_category
self.log_path = log_path
self._setup_logger()
def _setup_logger(self):
self.logger = logging.getLogger(f"high_risk_ai.{self.system_id}")
handler = logging.FileHandler(self.log_path)
handler.setFormatter(logging.Formatter('%(message)s'))
self.logger.addHandler(handler)
self.logger.setLevel(logging.INFO)
def log_inference(
self,
session_id: str,
input_hash: str, # hash of input, not raw PII
output_summary: str,
confidence_score: float,
human_reviewer_id: Optional[str],
human_decision_override: Optional[bool],
reference_database_version: Optional[str] = None,
) -> str:
record_id = str(uuid.uuid4())
entry = {
"record_id": record_id,
"regulation": "Art.12 EU AI Act 2024/1689",
"system_id": self.system_id,
"annex_iii_category": self.annex_iii_category,
"timestamp_utc": datetime.now(timezone.utc).isoformat(),
"session_id": session_id,
"input_hash_sha256": input_hash,
"output_summary": output_summary,
"confidence_score": confidence_score,
"human_oversight": {
"reviewer_id": human_reviewer_id,
"override_applied": human_decision_override,
},
"reference_database_version": reference_database_version,
}
self.logger.info(json.dumps(entry))
return record_id
Art.13 — Transparency and Provision of Information
Core requirement: High-risk AI systems must be designed and developed to ensure sufficient transparency for deployers to understand the system's capabilities, limitations, and appropriate use.
Providers must supply deployers with instructions for use (Annex IV, Section 2.3) including:
- Identity and contact details of the provider
- The intended purpose and conditions of use
- Performance metrics and accuracy levels
- Known and foreseeable circumstances that may lead to risks
- Human oversight measures required
- Hardware and software requirements
Deployer transparency obligation: Deployers of Annex III high-risk AI that make decisions about natural persons, or assist in making them, must inform those persons that they are subject to the use of a high-risk AI system (Art.26(11)). This creates a customer-facing notification requirement for SaaS products using Annex III AI.
Art.14 — Human Oversight
Core requirement: High-risk AI systems must be designed to allow effective human oversight during their operation. This is one of the most implementation-specific obligations.
Human oversight measures must enable designated persons to:
- Properly understand the AI system's capacities and limitations, and monitor its operation to detect and address anomalies and dysfunctions (Art.14(4)(a))
- Remain aware of automation bias, the tendency to over-rely on the system's output (Art.14(4)(b))
- Correctly interpret the system's output (Art.14(4)(c))
- Decide not to use the system, or disregard, override, or reverse its output (Art.14(4)(d))
- Intervene in the system's operation or interrupt it through a stop control (Art.14(4)(e))
"Human in the loop" is not sufficient. Art.14 requires effective oversight, not nominal oversight. A rubber-stamp review process where humans approve AI outputs without genuinely evaluating them does not satisfy Art.14. The AI Office's 2025 guidance indicates that oversight mechanisms must be designed to make non-compliance practically difficult, not just formally possible.
Art.15 — Accuracy, Robustness, and Cybersecurity
Core requirement: High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle.
Accuracy: Providers must declare performance metrics (accuracy, error rates, F1 scores) in technical documentation. For biometric systems (Annex III Category 1), false positive and false negative rates must be declared separately for relevant demographic groups.
Robustness: Systems must be resilient to errors, faults, and inconsistencies that may occur during operation. This includes adversarial inputs — deliberate manipulation to produce wrong outputs.
Cybersecurity (Art.15 × CRA): High-risk AI systems must be resilient to attacks exploiting the AI-specific vulnerabilities of the system: data poisoning, model poisoning, adversarial examples, model extraction, membership inference. The Cyber Resilience Act (Regulation 2024/2847) Art.11 security-by-design requirements apply simultaneously for AI systems with digital elements.
Conformity Assessment: Self-Assessment vs. Notified Body
Path A — Self-Assessment (Most Annex III Systems)
Most Annex III high-risk AI systems qualify for internal conformity assessment (Annex VI): the provider conducts the conformity assessment themselves, draws up technical documentation, registers in the EU database, and affixes CE marking.
Self-assessment applies to Annex III Categories 2-8. For Category 1 (biometrics), internal control is available only where the provider has applied harmonised standards or common specifications in full (Art.43(1)); otherwise a notified body must be involved. It also does not apply where the system is a safety component of a regulated product that already requires third-party assessment.
Self-assessment process (sketched as a release gate after this list):
- Confirm Art.6(2) Annex III classification (or Art.6(3) opt-out documentation)
- Establish Art.9 risk management system
- Implement Art.10-15 technical obligations
- Draw up technical documentation (Annex IV)
- Establish quality management system (Art.17)
- Draw up EU Declaration of Conformity (Art.47 + Annex V)
- Register in EU database (Art.49) — mandatory before market placement
- Affix CE marking (Art.48)
Path B — Notified Body Assessment (Biometrics and Annex I Products)
Notified Body involvement is mandatory in two situations:
- Annex III Category 1 biometric systems where harmonised standards or common specifications have not been applied in full (Art.43(1)), including remote biometric identification systems permitted under the Art.5(1)(h) exceptions (law enforcement with judicial authorisation)
- AI systems that are safety components of Annex I regulated products AND the product requires third-party conformity assessment under Annex I legislation (e.g., MDR Class IIb/III medical devices, machinery with safety functions)
Notified Bodies for AI Act conformity are designated by EU member states and published in NANDO (New Approach Notified and Designated Organisations). As of mid-2026, several notified bodies in Germany (TÜV SÜD, TÜV Rheinland, Bureau Veritas Germany), France, and the Netherlands have received AI Act designation.
EU Database Registration (Art.49)
Before placing a high-risk AI system on the EU market, providers must register in the EU database for high-risk AI systems established under Art.71. This creates a public registry of high-risk AI deployments.
Registration requires:
- Name and contact details of provider
- AI system name, description, and intended purpose
- Annex III category
- Whether the system has been found not to be high-risk under Art.6(3)
- Status of conformity assessment
- Declaration of Conformity reference
Exception (Art.6(3) opt-outs must also be registered): Providers who determine their system is not high-risk under Art.6(3) must register that determination in the same EU database (Art.49(2)). Market surveillance authorities can request the underlying assessment and challenge the determination.
High-Risk AI on EU-Native vs. US Cloud Infrastructure
The intersection of Art.12 logging, Art.9 risk management documentation, and Art.10 training data governance creates a data residency question that goes beyond GDPR.
US cloud exposure for high-risk AI:
| Obligation | US Cloud Risk | EU-Native Resolution |
|---|---|---|
| Art.12 Logs | CLOUD Act access to AI decision logs | Logs stored in EU jurisdiction, no CLOUD Act subpoena exposure |
| Art.10 Training Data | Data poisoning attack surface via US cloud API | Training pipelines in EU infrastructure, audit trail intact |
| Art.9 Risk Docs | Technical documentation subject to US discovery | Documentation in EU-controlled storage |
| Art.11 Technical Docs | 10-year retention in US cloud = 10 years of CLOUD Act exposure | EU storage = single jurisdiction for retention |
| Annex III Category 6/7 | Law enforcement / migration AI logs: US parallel access to EU enforcement data | Only EU MSA access via EU legal channels |
For Annex III Categories 1 (biometrics), 5 (credit/insurance), 6 (law enforcement), and 7 (migration), the combination of AI Act Art.12 logging requirements and CLOUD Act creates a structural jurisdiction conflict that EU-native infrastructure resolves by design.
Practical Compliance Timeline for Annex III Providers
If you are building or operating an Annex III high-risk AI system:
Immediate (Now → Q2 2026)
- Map your AI systems to Annex III categories using the eight category descriptions above
- Determine if Art.6(3) opt-out applies — document reasoning if so
- Initiate Art.9 risk management system — identify risks across lifecycle
- Begin Art.10 data governance documentation — training data sources, bias examination
- Assess conformity assessment path (self-assessment or notified body)
Q2-Q3 2026 (3 months before deadline)
- Complete Art.11 technical documentation (Annex IV structure)
- Implement Art.12 logging infrastructure — structured, automated, lifecycle-appropriate
- Validate Art.13 transparency materials — deployer instructions for use
- Test Art.14 human oversight mechanisms — genuine, not nominal
- Verify Art.15 accuracy, robustness, and cybersecurity measures
Before 2 August 2026
- Quality management system (Art.17) operational
- EU Declaration of Conformity drawn up (Art.47)
- Register in EU database (Art.49) — mandatory before market placement
- CE marking affixed (Art.48)
- Post-market monitoring plan established (Art.72)
Enforcement and Penalties for High-Risk AI Non-Compliance
| Violation | Maximum Fine |
|---|---|
| Placing non-compliant high-risk AI on the market | €15,000,000 or 3% of global turnover, whichever is higher (Art.99(4)) |
| Non-compliant technical documentation or other provider obligations | €15,000,000 or 3% of global turnover (Art.99(4)) |
| Supplying incorrect, incomplete, or misleading information to authorities | €7,500,000 or 1% of global turnover (Art.99(5)) |
| Art.6(3) misuse (incorrect non-high-risk designation) | €15,000,000 or 3% of global turnover (Art.99(4)) |
National market surveillance authorities (MSAs) have market withdrawal, market prohibition, and mandatory recall powers for non-compliant high-risk AI systems. They can also require technical documentation disclosure.
See Also
- EU AI Act Art.5 Prohibited Practices: Developer Guide (February 2025)
- EU AI Act 2026 Conformity Assessment: Developer Guide
- EU AI Act Regulatory Sandbox (Art.57-63): Developer Guide
- EU AI Office & GPAI Model Regulation: Developer Guide (Art.51-56)
- EU NIS2 + AI Act Critical Infrastructure: Double Compliance Guide
- EU Cyber Resilience Act: SBOM Requirements and Vulnerability Handling