If your AI system is embedded in a physical product subject to EU type-approval or CE marking, you have two compliance regimes to satisfy simultaneously. EU AI Act Arts.104–112 are the amendment articles that formally integrate the AI Act into existing EU sector legislation — creating legal bridges between the horizontal AI regulation and vertical product safety frameworks covering vehicles, aircraft, industrial machinery, and aviation security systems.
Article 104 begins the sequence of amendments. Understanding how these amendments work — and specifically how the Annex I pathway coordinates conformity assessment between the EU AI Act and sector regulations — is essential for any developer building AI systems for dual-regulated markets.
## What Arts.104–112 Actually Do
The EU AI Act does not simply add to the compliance burden for sector-regulated AI systems. It creates an integrated framework with specific coordination mechanisms designed to avoid double assessments. The amendment articles (Arts.104–112) formally amend existing EU regulations to:
- Acknowledge AI Act authority — They insert EU AI Act cross-references into sector regulations, establishing the legal relationship between the two compliance frameworks.
- Clarify dual compliance obligations — They confirm that AI components in regulated products must satisfy both the sector regulation and the EU AI Act where applicable.
- Enable assessment coordination — They create the legal basis for combined conformity assessment procedures and single-notified-body pathways where practicable.
The sectors covered by Arts.104–112 include civil aviation security (Regulation (EC) No 300/2008), agricultural and forestry vehicles (Regulation (EU) No 167/2013), two/three-wheel vehicles (Regulation (EU) No 168/2013), motor vehicles (Regulation (EU) 2018/858), civil aviation safety (Regulation (EU) 2018/1139), and vehicle type-approval safety requirements (Regulation (EU) 2019/2144).
## The Foundation: Annex I and the High-Risk Classification Pathway
Art.104's amendment context only makes sense once you understand Annex I to the EU AI Act. Annex I lists the EU harmonization legislation under which an AI system can qualify as high-risk through the Article 6(1) pathway — distinct from the Annex III pathway that lists specific application areas.
Article 6(1) — The Annex I pathway:
> An AI system that is a safety component of a product, or is itself a product, covered by EU harmonization legislation listed in Annex I, and the product is required to undergo a third-party conformity assessment by a notified body under that harmonization legislation, shall be considered a high-risk AI system.
The Annex I list currently includes:
| EU Instrument | Sector | AI Relevance |
|---|---|---|
| Regulation (EU) 2018/858 | Motor vehicles | ADAS, autonomous driving, driver monitoring |
| Regulation (EU) 2018/1139 | Civil aviation | Aircraft AI systems, ATC decision support |
| Regulation (EU) 2019/2144 | Vehicle type-approval | ISA, AEB, lane-keeping, drowsiness detection |
| Regulation (EU) 167/2013 | Agricultural vehicles | Autonomous guidance, obstacle detection |
| Regulation (EU) 168/2013 | Two/three-wheel vehicles | ABS controllers, emergency braking assistance |
| Regulation (EU) 2023/1230 | Machinery | AI in robots, CNC machines, production equipment |
| Directive 2006/42/EC | Machinery (legacy) | Safety functions in CE-marked machinery |
| Regulation (EU) 2017/745 (MDR) | Medical devices | AI diagnostics, clinical decision support |
| Regulation (EU) 2017/746 (IVDR) | In vitro diagnostics | AI-based laboratory analysis |
| Directive 2014/53/EU (RED) | Radio equipment | AI in smart devices, IoT with radio |
| Directive 2009/48/EC | Toys | AI in interactive toys |
| Directive 2013/53/EU | Recreational craft | AI navigation and safety systems |
The critical trigger: If your AI system is a safety component in a product requiring third-party conformity assessment under any Annex I regulation, your AI system is automatically high-risk under the EU AI Act, regardless of whether it would be classified as high-risk under Annex III.
## The Conformity Assessment Coordination Mechanism
This is where Art.6(1) and Art.6(2) interact to create the dual compliance framework — and where developers most often get confused.
### Art.6(1): Annex I Pathway — Third-Party Assessment Required
For AI systems that trigger Art.6(1) (Annex I product with third-party conformity assessment requirement), the EU AI Act does not automatically add a separate AI-specific conformity assessment. Instead, Article 43(2) provides a coordination mechanism:
> Where AI systems are required to undergo a third-party conformity assessment under the harmonisation legislation listed in Annex I, that assessment shall be carried out under the corresponding requirements of the relevant Union harmonisation legislation. The notified body responsible for checking compliance with that legislation shall also be responsible for the conformity assessment of the AI system to the extent that it is integrated or constitutes a safety component of that product.
Practical implication: Your sector-specific notified body performs both the sector assessment AND the AI Act compliance check. You do not need two separate assessments. The AI Act requirements (technical documentation, QMS, risk management, post-market monitoring) are checked as part of the existing sector assessment.
### Art.6(2): Annex III Pathway — Self-Assessment or Separate Third-Party
For AI systems classified as high-risk under Annex III (not Annex I), the conformity assessment follows Article 43's standard procedure: internal-control self-assessment (Annex VI) for most Annex III categories, with third-party assessment involved only for biometric systems in certain cases (for example, where harmonised standards are not applied in full).
The intersection problem: Some AI systems trigger BOTH Art.6(1) AND Art.6(2). A pedestrian detection system in a car is:
- High-risk via Art.6(1): It's a safety component in a vehicle subject to Reg 2018/858 requiring type-approval assessment
- Potentially high-risk via Art.6(2)/Annex III Cat. 2: Critical infrastructure (transport) safety management
When both pathways apply, Art.6(1) takes precedence for conformity assessment purposes — the sector-specific assessment covers both.
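The precedence rule is small enough to encode directly. A minimal sketch, with the function name and return strings as illustrative choices, not wording from the Act:

```python
def conformity_route(high_risk_annex_i: bool, high_risk_annex_iii: bool) -> str:
    """Pick the conformity assessment route when one or both
    high-risk pathways apply (simplified sketch)."""
    if high_risk_annex_i:
        # Art.43(2) coordination: the sector notified body's
        # assessment also covers the EU AI Act requirements.
        return "sector assessment covers EU AI Act"
    if high_risk_annex_iii:
        # Annex III alone: the AI Act's own Art.43 procedure applies.
        return "EU AI Act assessment (Art.43 standard procedure)"
    return "not high-risk"

# A pedestrian detection system triggering both pathways:
print(conformity_route(True, True))
# → sector assessment covers EU AI Act
```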
## Per-Sector Analysis: What Changes Under Arts.104–112
### Automotive: Regulations 167/2013, 168/2013, 2018/858, 2019/2144
Regulation (EU) 2018/858 (motor vehicles) covers whole vehicle type-approval (WVTA) for passenger cars, commercial vehicles, and trailers. AI systems fall into:
- Safety systems integrated into type-approved systems: Covered by Annex I pathway. Assessment by technical service (Annex I notified body equivalent) under UNECE regulations.
- Aftermarket AI systems: Must satisfy AI Act independently if they constitute high-risk AI.
Regulation (EU) 2019/2144 (type-approval safety features) mandates specific AI-enabled safety features for new vehicles from 2022–2026:
- ISA (Intelligent Speed Assistance): Mandatory from July 2022. AI classifies speed limit signs and road context. High-risk under Art.6(1) as safety-critical AI in type-approved vehicle.
- AEB (Autonomous Emergency Braking): Mandatory from July 2024. AI detects pedestrians, cyclists, obstacles. High-risk under Art.6(1).
- Lane Departure Warning / Lane Keeping Assist: Mandatory from July 2024. High-risk under Art.6(1).
- Driver Drowsiness and Attention Warning: Mandatory. Uses AI to monitor driver state. High-risk under Art.6(1).
For these mandatory features, type-approval is the conformity assessment mechanism. Satisfying type-approval via UNECE regulations and the technical service satisfies the EU AI Act conformity assessment requirement for these specific systems.
Regulation (EU) 167/2013 (agricultural vehicles): Covers tractors, harvesters, forestry machines. AI applications:
- Autonomous guidance systems (GPS + camera + AI path planning): If marketed as a safety function, falls under Art.6(1).
- Obstacle detection and emergency stop: Safety-critical, type-approval relevant.
- Precision agriculture AI (yield prediction, disease detection): Not typically type-approval relevant, falls under Annex III assessment only if applicable.
Regulation (EU) 168/2013 (two/three-wheel vehicles): Covers motorcycles, mopeds, scooters, quadricycles. AI applications:
- ABS controllers with AI components: Already mandatory, type-approval relevant.
- Advanced cornering assistance AI: Emerging, falls under type-approval when safety-relevant.
- eCall AI processing: Not typically type-approval relevant for AI Act purposes.
### Civil Aviation: Regulation (EU) 2018/1139 (EASA)
Regulation 2018/1139 is the EASA Basic Regulation governing aircraft design, production, and maintenance; it also covers unmanned aircraft systems. AI applications:
| AI Application | EASA Classification | EU AI Act Classification |
|---|---|---|
| Flight management system AI components | Design Organisation Approval (DOA) required | High-risk via Art.6(1) Annex I |
| Predictive maintenance AI | Service organisation approval required | Potentially high-risk via Art.6(1) |
| Air Traffic Control decision support | Covered under EUROCONTROL/EASA frameworks | High-risk via Annex III (critical infrastructure) |
| Airport security AI screening | Reg (EC) No 300/2008 framework | High-risk via Annex III (law enforcement adjacent) |
| Drone autonomy systems | EASA UAS regulation (EU) 2019/947 | High-risk if commercial |
The EASA-AI Act coordination challenge: EASA uses its own Certification Specifications (CS) and Acceptable Means of Compliance (AMC) documents. AMC 25.1309 (CS-25) addresses system design and analysis, with software and hardware development assurance per DO-178C and DO-254 (recognised through AMC 20-115 and AMC 20-152). The EU AI Act requires different documentation (Annex IV technical documentation). These two documentation frameworks do not map cleanly onto each other.
Practical approach: Use a dual-documentation structure where:
- The DO-178C/DO-254 artifacts satisfy EASA certification
- An Annex IV wrapper document cross-references the certification artifacts to satisfy EU AI Act technical documentation
- The Design Organisation Approval process functions as the EU AI Act third-party conformity assessment
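The wrapper described above can be maintained as data rather than prose: a cross-reference table from Annex IV documentation items to existing certification artifacts. The pairings below are illustrative assumptions for the sketch, not an agreed EASA mapping (PSAC, SDP, SVP are standard DO-178C planning documents):

```python
# Illustrative cross-reference: EU AI Act Annex IV items -> existing
# DO-178C/DO-254 certification artifacts (pairings are assumptions).
CROSS_REF = {
    "Annex IV(1) general description": ["PSAC", "System Description"],
    "Annex IV(2) development process": ["SDP", "SVP", "Stage-of-Involvement reviews"],
    "Annex IV(3) monitoring, functioning and control": ["SCM records", "SQA records"],
}

def wrapper_rows(cross_ref: dict[str, list[str]]) -> list[str]:
    """Render the Annex IV 'wrapper' as rows that point at existing
    certification artifacts instead of duplicating their content."""
    return [
        f"{ai_item} -> see: {', '.join(artifacts)}"
        for ai_item, artifacts in cross_ref.items()
    ]

for row in wrapper_rows(CROSS_REF):
    print(row)
```

Keeping the mapping in one structure makes it easy to regenerate the cross-reference table for notified body review whenever an artifact is renamed or added.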
### Civil Aviation Security: Regulation (EC) No 300/2008
Regulation (EC) No 300/2008 establishes minimum standards for civil aviation security in the EU, covering passenger and baggage screening, cargo security, and access control. AI applications in scope:
- Automated baggage X-ray screening: AI detects threat items (weapons, explosives). This is threat detection by technology — high-risk under Annex III (law enforcement / critical infrastructure adjacent) AND under Art.6(1) if the screening equipment requires third-party certification.
- Biometric passport verification at boarding: AI verifies document authenticity and facial matching. High-risk under Annex III for both biometric identification (remote) and critical infrastructure.
- Passenger risk scoring / behavioural detection: AI-based passenger profiling for security queue prioritisation. Potentially prohibited under Art.5 (social scoring) depending on implementation, or high-risk under Annex III.
- Perimeter intrusion detection AI: AI-based CCTV analysis at airport perimeters. High-risk under Annex III (law enforcement / critical infrastructure).
Critical intersection — Art.5 prohibited practices in aviation security: Some AI applications used routinely in aviation security contexts may constitute prohibited AI practices under EU AI Act Art.5:
- Emotion recognition AI used on passengers: Art.5(1)(f) prohibits emotion inference in workplaces and education institutions (with a narrow medical/safety exception); passenger-facing use is legally contested and is high-risk under Annex III in any event
- Biometric categorisation inferring sensitive attributes (race, religion, political opinion) for security profiling: Prohibited under Art.5(1)(g)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes: Prohibited except under narrow, specifically authorised exceptions (airports are publicly accessible spaces)
Aviation security developers must audit their systems against Art.5 before assuming high-risk AI compliance is sufficient.
### Machinery: Regulation (EU) 2023/1230
The Machinery Regulation (Reg 2023/1230, replacing Directive 2006/42/EC from 2027) explicitly addresses AI-enabled safety functions and is listed in Annex I to the EU AI Act.
Key coexistence rules:
- Machinery Regulation Annex III lists the Essential Health and Safety Requirements (EHSRs). An AI safety function must satisfy the relevant EHSRs (e.g., 1.2 on control systems, 1.3 on protection against mechanical hazards).
- EU AI Act Art.17 QMS requirements align with Machinery Regulation documentation requirements (Art.10 technical file) but are not identical.
- Harmonised standards under the Machinery Regulation (EN ISO 13849 for safety-related control systems, EN 62061) may not fully cover AI-specific requirements under EU AI Act Arts.10–15. Gaps must be bridged.
The higher standard rule: Where the EU AI Act and Machinery Regulation impose different standards for the same aspect, the higher standard applies. The EU AI Act's bias and accuracy requirements (Art.10) may exceed what EN ISO 13849 addresses for learning systems — the AI Act requirement applies.
## Dual CE Marking When Both Regimes Apply
When an AI system must satisfy both the EU AI Act and a sector regulation, CE marking documentation must reference both instruments:
Declaration of Conformity (DoC) structure for dual-regulated AI:
```
DECLARATION OF CONFORMITY

Product: [Name and model]
Manufacturer: [Company, address]

This product conforms with:

1. Regulation (EU) 2024/1689 (EU AI Act), Article 6(1)/(2), as a [high-risk AI system / provider of general-purpose AI model]
   Conformity assessment procedure: Article 43(2) — assessed by [notified body name, NB number] under [sector regulation name]

2. Regulation (EU) [sector regulation number], as a [product type]
   Type-examination certificate: [Certificate number] issued by [notified body name, NB number]

[Other sector-specific declarations as required]

The above-mentioned product satisfies all applicable essential requirements of both Regulations.

Signed by: [Authorised person, date, place]
```
One DoC or two? EU AI Act Art.47(2) allows the EU AI Act DoC to be integrated into the sector-specific DoC where technically feasible. The combined document must reference both instruments and be clear about which conformity assessment covered which requirements.
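Generating the combined document from structured fields keeps the two legal bases from drifting apart across product variants. A minimal sketch, where the dataclass fields and wording are illustrative:

```python
from dataclasses import dataclass

@dataclass
class DualDoC:
    product: str
    manufacturer: str
    sector_regulation: str
    notified_body: str   # name + NB number
    certificate: str     # type-examination certificate number

    def render(self) -> str:
        # One combined DoC (Art.47(2)): both instruments referenced,
        # each assessment attributed to its legal basis.
        return "\n".join([
            "DECLARATION OF CONFORMITY",
            f"Product: {self.product}",
            f"Manufacturer: {self.manufacturer}",
            "This product conforms with:",
            "1. Regulation (EU) 2024/1689 (EU AI Act), conformity assessed "
            f"under Article 43(2) by {self.notified_body}",
            f"2. {self.sector_regulation}, type-examination certificate "
            f"{self.certificate} issued by {self.notified_body}",
        ])

doc = DualDoC(
    product="Pedestrian Detection AEB System",
    manufacturer="Example Automotive GmbH, Berlin",
    sector_regulation="Regulation (EU) 2019/2144",
    notified_body="Example Technical Service, NB 0000",
    certificate="e1*2019/2144*0001",
)
print(doc.render())
```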
## Documentation Consolidation Strategy
Managing two sets of compliance documentation is expensive. A consolidation strategy reduces cost:
| Documentation Element | EU AI Act (Annex IV) | Sector Regulation | Consolidation Approach |
|---|---|---|---|
| Technical description + capabilities | Yes | Yes (varies) | Unified technical file with dual references |
| Risk management records | Art.9 — systematic risk management | FMEA, HAZOP (machinery/automotive), FHA (aviation) | Use sector methodology, add AI-specific risks as annex |
| Validation and testing data | Art.10 — dataset documentation | Test reports per sector standard | Sector test report + AI dataset annex |
| Post-market monitoring plan | Art.72 — mandatory | Adverse event reporting (MDR/IVDR), field performance monitoring | Unified monitoring plan covering both obligations |
| QMS records | Art.17 — quality management system | ISO 9001 / IATF 16949 / AS9100 (EN 9100) | Existing QMS + AI Act supplement |
| Incident reporting procedure | Art.73 — 15-day NCA notification | Sector-specific vigilance reporting timelines | Dual-trigger procedure document |
Critical difference — incident reporting timelines:
- EU AI Act Art.73: 15 working days for serious incidents to NCA
- MDR Art.87: 15 calendar days for serious incidents (more restrictive)
- Aviation (EASA): 72 hours for certain safety occurrences
- Automotive (UNECE): 30 days typical for field safety issues
A dual-regulated AI developer must identify the most restrictive timeline and build their incident response around it.
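That selection can be automated once the deadlines are normalised to one unit. A sketch, assuming a conservative conversion of 5 working days to 7 calendar days (the regimes and values mirror the list above):

```python
# Deadline per regime as (unit, value); values mirror the list above.
DEADLINES = {
    "EU AI Act serious incident": ("working_days", 15),
    "MDR Art.87 serious incident": ("calendar_days", 15),
    "EASA occurrence reporting": ("hours", 72),
    "UNECE field safety issue": ("calendar_days", 30),
}

def to_calendar_days(unit: str, value: float) -> float:
    """Normalise a deadline to calendar days (assumption: a span of
    5 working days covers roughly 7 calendar days)."""
    if unit == "hours":
        return value / 24
    if unit == "working_days":
        return value * 7 / 5
    return value

def most_restrictive(deadlines: dict[str, tuple[str, float]]) -> tuple[str, float]:
    """Return (regime, deadline in calendar days) with the shortest window."""
    converted = {name: to_calendar_days(u, v) for name, (u, v) in deadlines.items()}
    name = min(converted, key=converted.get)
    return name, converted[name]

print(most_restrictive(DEADLINES))
# → ('EASA occurrence reporting', 3.0)
```

Building the incident-response runbook around the winner of this comparison satisfies every other regime by construction.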
## CLOUD Act Intersection for Dual-Regulated AI
Dual-regulated AI systems often involve sensitive compliance data that creates compounded CLOUD Act exposure:
Aerospace / aviation AI:
- Airworthiness documentation, maintenance records, flight data recorder analysis: Subject to strict confidentiality requirements under EASA/ICAO frameworks
- US company access to these records via CLOUD Act could implicate ICAO Annex 13 (accident investigation) confidentiality protections — creating a conflict of laws
Automotive AI:
- Type-approval documentation, test data: Some information is commercially sensitive; US subsidiary access to EU type-approval data poses IP and regulatory risk
- Real-time vehicle telemetry used for AI monitoring: Creates ongoing CLOUD Act exposure as monitoring data flows
Recommended approach: Host all dual-regulated technical documentation, test artifacts, and post-market monitoring data on EU-sovereign infrastructure outside CLOUD Act reach. Particularly critical for aviation safety data and automotive crash test results.
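A data-residency gate over the compliance artifact inventory makes this policy checkable in CI. A minimal sketch; the region tags and artifact names are illustrative assumptions, not a real provider's region list:

```python
# Regions considered EU-sovereign for this sketch (illustrative tags).
EU_SOVEREIGN_REGIONS = {"eu-sovereign-1", "eu-sovereign-2"}

# Where each dual-regulated artifact currently lives (illustrative).
ARTIFACT_LOCATIONS = {
    "type_approval_technical_file": "eu-sovereign-1",
    "post_market_monitoring_data": "us-east-1",   # violation
    "flight_data_recorder_analysis": "eu-sovereign-2",
}

def residency_violations(locations: dict[str, str]) -> list[str]:
    """Flag artifacts stored outside EU-sovereign infrastructure."""
    return [
        name for name, region in locations.items()
        if region not in EU_SOVEREIGN_REGIONS
    ]

print(residency_violations(ARTIFACT_LOCATIONS))
# → ['post_market_monitoring_data']
```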
## Python Implementation: SectorRegulationComplianceMapper
```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class SectorRegulation(Enum):
    MOTOR_VEHICLES = "Regulation (EU) 2018/858"
    VEHICLE_TYPE_APPROVAL = "Regulation (EU) 2019/2144"
    AGRICULTURAL_VEHICLES = "Regulation (EU) 167/2013"
    TWO_WHEEL_VEHICLES = "Regulation (EU) 168/2013"
    CIVIL_AVIATION = "Regulation (EU) 2018/1139"
    AVIATION_SECURITY = "Regulation (EC) 300/2008"
    MACHINERY = "Regulation (EU) 2023/1230"
    MEDICAL_DEVICES = "Regulation (EU) 2017/745"
    IVDR = "Regulation (EU) 2017/746"
    RADIO_EQUIPMENT = "Directive 2014/53/EU"
    NONE = "No sector regulation applicable"


class ConformityAssessmentPath(Enum):
    SECTOR_COVERS_AI_ACT = "Art.43(2): Sector assessment covers EU AI Act"
    PARALLEL_ASSESSMENTS = "Art.43(1): Separate EU AI Act + sector assessments"
    SELF_ASSESSMENT = "Art.43(5): Self-assessment (Annex VI internal control)"
    NOT_HIGH_RISK = "Not a high-risk AI system"


@dataclass
class AISystemProfile:
    name: str
    sector_regulation: SectorRegulation
    requires_third_party_sector_assessment: bool
    annex_iii_category: Optional[str] = None  # e.g., "Cat.2 Critical Infrastructure"
    has_safety_function: bool = False
    involves_biometrics: bool = False
    publicly_accessible_deployment: bool = False


@dataclass
class ComplianceMapping:
    profile: AISystemProfile
    is_high_risk_annex_i: bool = False
    is_high_risk_annex_iii: bool = False
    conformity_path: ConformityAssessmentPath = ConformityAssessmentPath.NOT_HIGH_RISK
    risk_flags: list[str] = field(default_factory=list)
    documentation_gaps: list[str] = field(default_factory=list)
    incident_reporting_deadline_days: int = 15


class SectorRegulationComplianceMapper:
    def __init__(self):
        # Sector instruments listed in Annex I to the EU AI Act
        # (including aviation security under Reg (EC) 300/2008)
        self.annex_i_regulations = {
            SectorRegulation.MOTOR_VEHICLES,
            SectorRegulation.VEHICLE_TYPE_APPROVAL,
            SectorRegulation.AGRICULTURAL_VEHICLES,
            SectorRegulation.TWO_WHEEL_VEHICLES,
            SectorRegulation.CIVIL_AVIATION,
            SectorRegulation.AVIATION_SECURITY,
            SectorRegulation.MACHINERY,
            SectorRegulation.MEDICAL_DEVICES,
            SectorRegulation.IVDR,
            SectorRegulation.RADIO_EQUIPMENT,
        }
        # Most restrictive incident reporting timelines per sector
        self.sector_incident_deadlines = {
            SectorRegulation.CIVIL_AVIATION: 3,    # 72 hours (EASA)
            SectorRegulation.MEDICAL_DEVICES: 15,  # 15 calendar days (MDR Art.87)
            SectorRegulation.IVDR: 15,
            SectorRegulation.MOTOR_VEHICLES: 30,   # ~30 days (UNECE field safety)
            SectorRegulation.VEHICLE_TYPE_APPROVAL: 30,
            SectorRegulation.MACHINERY: 15,        # Defaults to AI Act 15 working days
        }

    def map(self, profile: AISystemProfile) -> ComplianceMapping:
        mapping = ComplianceMapping(profile=profile)

        # Step 1: Check Annex I high-risk classification
        if (
            profile.sector_regulation in self.annex_i_regulations
            and profile.requires_third_party_sector_assessment
        ):
            mapping.is_high_risk_annex_i = True

        # Step 2: Check Annex III high-risk classification
        if profile.annex_iii_category:
            mapping.is_high_risk_annex_iii = True

        # Step 3: Determine conformity assessment path
        if not mapping.is_high_risk_annex_i and not mapping.is_high_risk_annex_iii:
            mapping.conformity_path = ConformityAssessmentPath.NOT_HIGH_RISK
        elif mapping.is_high_risk_annex_i:
            # Art.43(2): Sector assessment covers AI Act
            mapping.conformity_path = ConformityAssessmentPath.SECTOR_COVERS_AI_ACT
        elif mapping.is_high_risk_annex_iii:
            mapping.conformity_path = ConformityAssessmentPath.SELF_ASSESSMENT

        # Step 4: Assess risk flags
        if profile.involves_biometrics and profile.publicly_accessible_deployment:
            mapping.risk_flags.append(
                "CRITICAL: Real-time remote biometric identification in public space — "
                "Art.5 prohibition applies unless specific law enforcement exception"
            )
        if profile.sector_regulation == SectorRegulation.AVIATION_SECURITY:
            mapping.risk_flags.append(
                "AUDIT: Review against Art.5 prohibited practices — emotion recognition "
                "and passenger profiling AI common in aviation security may be prohibited"
            )

        # Step 5: Identify documentation gaps
        if mapping.is_high_risk_annex_i and mapping.is_high_risk_annex_iii:
            mapping.documentation_gaps.append(
                "Both Annex I and Annex III apply: Sector assessment (Art.43(2)) takes "
                "precedence for conformity, but document BOTH classification bases"
            )
        if profile.sector_regulation == SectorRegulation.CIVIL_AVIATION:
            mapping.documentation_gaps.append(
                "Map DO-178C/DO-254 artifacts to Annex IV EU AI Act requirements — "
                "create cross-reference table for notified body review"
            )

        # Step 6: Set incident reporting deadline (most restrictive)
        sector_deadline = self.sector_incident_deadlines.get(
            profile.sector_regulation, 15
        )
        mapping.incident_reporting_deadline_days = min(sector_deadline, 15)
        return mapping

    def generate_report(self, mapping: ComplianceMapping) -> str:
        p = mapping.profile
        lines = [
            f"=== Dual Compliance Report: {p.name} ===",
            f"Sector Regulation: {p.sector_regulation.value}",
            "",
            "HIGH-RISK CLASSIFICATION:",
            f"  Annex I pathway: {'YES' if mapping.is_high_risk_annex_i else 'NO'}",
            f"  Annex III pathway: {'YES — ' + p.annex_iii_category if mapping.is_high_risk_annex_iii else 'NO'}",
            "",
            f"CONFORMITY ASSESSMENT PATH: {mapping.conformity_path.value}",
            f"INCIDENT REPORTING DEADLINE: {mapping.incident_reporting_deadline_days} days",
            "",
        ]
        if mapping.risk_flags:
            lines.append("RISK FLAGS:")
            for flag in mapping.risk_flags:
                lines.append(f"  ⚠ {flag}")
            lines.append("")
        if mapping.documentation_gaps:
            lines.append("DOCUMENTATION ACTIONS:")
            for gap in mapping.documentation_gaps:
                lines.append(f"  → {gap}")
        return "\n".join(lines)


# Example usage
mapper = SectorRegulationComplianceMapper()

# Automotive ADAS system
adas_profile = AISystemProfile(
    name="Pedestrian Detection AEB System",
    sector_regulation=SectorRegulation.VEHICLE_TYPE_APPROVAL,
    requires_third_party_sector_assessment=True,
    annex_iii_category="Cat.2 Critical Infrastructure (Transport)",
    has_safety_function=True,
)
adas_mapping = mapper.map(adas_profile)
print(mapper.generate_report(adas_mapping))
# → Annex I: YES, Annex III: YES, Path: Art.43(2) sector covers AI Act

# Airport baggage screening AI
baggage_profile = AISystemProfile(
    name="Automated Threat Detection AI (X-ray)",
    sector_regulation=SectorRegulation.AVIATION_SECURITY,
    requires_third_party_sector_assessment=True,
    annex_iii_category="Cat.6 Law Enforcement (threat detection)",
    has_safety_function=True,
    publicly_accessible_deployment=True,
)
baggage_mapping = mapper.map(baggage_profile)
print(mapper.generate_report(baggage_mapping))
# → Annex I: YES, Annex III: YES, AUDIT: Art.5 prohibited practices review
```
## Art.5 Prohibited Practice Audit for Sector AI
Before assuming dual-compliance is your only concern, verify your AI system does not constitute a prohibited practice under Art.5. Several sector-common AI applications are at risk:
| Sector AI Application | Art.5 Risk | Analysis |
|---|---|---|
| Emotion recognition at airport boarding | HIGH | Art.5(1)(f) prohibits emotion inference in workplaces and education institutions — passenger-facing use is legally contested and high-risk under Annex III at minimum |
| Biometric passenger categorisation (nationality, age profiling) | HIGH | Art.5(1)(g) prohibits biometric categorisation inferring sensitive attributes |
| Real-time facial recognition at airport terminals (law enforcement) | CONDITIONAL | Art.5(1)(h) — law enforcement exception available but with strict conditions (judicial authorisation, specific offences) |
| Social scoring for passenger risk profiles | HIGH | Art.5(1)(c) general social scoring prohibition — aviation security profiling must be crime/threat-specific, not general behaviour |
| Driver drowsiness detection in vehicles | LOW | Mandatory under 2019/2144, scope limited to driving safety — not a general emotion inference system |
| Predictive maintenance AI in aircraft | NONE | Maintenance scheduling not in prohibited categories |
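The table above can drive a first-pass screening before any conformity assessment work begins. A sketch with illustrative rule keys; the ratings mirror the table, and real screening still needs legal review:

```python
# Risk ratings per application type, mirroring the Art.5 table above.
# Keys are illustrative identifiers, not terms from the Act.
ART5_RISK = {
    "emotion_recognition": "HIGH",
    "biometric_categorisation_sensitive": "HIGH",
    "realtime_rbi_public": "CONDITIONAL",
    "social_scoring": "HIGH",
    "drowsiness_detection": "LOW",
    "predictive_maintenance": "NONE",
}

def art5_blockers(applications: list[str]) -> list[str]:
    """Return applications needing legal review before conformity
    assessment work starts (rated HIGH or CONDITIONAL)."""
    return [
        app for app in applications
        if ART5_RISK.get(app, "UNKNOWN") in {"HIGH", "CONDITIONAL"}
    ]

print(art5_blockers(["drowsiness_detection", "emotion_recognition"]))
# → ['emotion_recognition']
```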
## 30-Item Dual Compliance Readiness Checklist
CLASSIFICATION AND SCOPE
- 1. Identify all applicable sector regulations for your AI-embedded product (Reg 2018/858, 2018/1139, 167/2013, 168/2013, 2019/2144, 2023/1230, MDR, IVDR, or RED)
- 2. Confirm whether third-party conformity assessment is required under your sector regulation — this determines Art.6(1) Annex I high-risk status
- 3. Separately assess Annex III high-risk classification — document the outcome even if Annex I applies
- 4. Map each AI system component to both classification pathways and document the outcome
- 5. Audit all AI applications in your product against Art.5 prohibited practices before proceeding with conformity assessment
CONFORMITY ASSESSMENT
- 6. Where Art.43(2) applies (Annex I + third-party sector assessment): notify your sector-specific notified body that they must also assess EU AI Act compliance
- 7. Confirm your notified body has EU AI Act competence — not all sector-specific notified bodies have been designated for AI Act assessment
- 8. Where Annex I AND Annex III apply: document that Art.43(2) takes precedence for the conformity assessment procedure
- 9. Where only Annex III applies (no sector regulation): apply Article 43(1) or 43(5) self-assessment procedure as applicable
- 10. Schedule conformity assessment timetable integrated with sector-specific type-approval or certification timeline
DOCUMENTATION CONSOLIDATION
- 11. Create a single Annex IV technical documentation file that references sector-specific documentation artifacts (type-approval technical file, design basis, DO-178C artifacts)
- 12. Map sector-specific QMS (ISO 9001, IATF 16949, AS9100/EN 9100) to EU AI Act Art.17 QMS requirements — document gaps and how they are bridged
- 13. Validate that risk management records satisfy both EU AI Act Art.9(2) systematic risk management AND sector-specific risk methodology (FMEA, FHA, HAZOP)
- 14. Prepare dataset documentation (Art.10 compliance) describing training data lineage, relevance to operational conditions, bias testing
- 15. Document human oversight mechanisms (Art.14) in terms the sector-specific authority will recognise (e.g., EASA pilot override requirements, automotive minimum risk condition)
DECLARATION AND MARKING
- 16. Prepare a combined Declaration of Conformity referencing both EU AI Act Art.47 and the applicable sector regulation DoC requirements
- 17. Confirm CE marking requirements — a single CE mark on the product covers all applicable harmonization legislation
- 18. Ensure the AI system is registered in the EU AI Database (Art.49/60) — separate from any sector-specific registration requirements
POST-MARKET AND INCIDENT REPORTING
- 19. Identify the most restrictive incident reporting timeline across all applicable regulations (civil aviation: 72h; MDR/IVDR: 15 calendar days; EU AI Act: 15 working days)
- 20. Draft a unified post-market monitoring plan (Art.72) covering both EU AI Act and sector-specific post-market surveillance obligations
- 21. Establish a joint incident classification procedure that triggers the correct reporting path for incidents meeting thresholds under either regime
- 22. Set up monitoring data collection consistent with both Art.72 requirements and sector-specific reporting (e.g., EASA occurrence reporting, MDR vigilance)
SUPPLY CHAIN
- 23. Identify your role under EU AI Act: if you supply an AI system to a product manufacturer, you are likely the AI system provider; the manufacturer may be a deployer
- 24. If the product manufacturer performs sector conformity assessment, confirm contractual arrangements specify EU AI Act compliance responsibilities
- 25. For Tier 1/Tier 2 automotive suppliers: identify where EU AI Act provider obligations sit — with the AI system developer, the Tier 1 integrator, or the OEM
- 26. Ensure supply chain technical documentation access (Art.61) is addressed in supplier agreements
PROHIBITED PRACTICES AND SPECIAL RISKS
- 27. For aviation security AI: specifically review emotion recognition, biometric categorisation, and real-time biometric identification applications against Art.5
- 28. For passenger profiling or vehicle occupant monitoring AI: confirm categorisation is not based on sensitive attributes (race, religion, political opinion)
- 29. For any AI operating in publicly accessible spaces: verify real-time remote biometric identification permissions or absence of such use
ONGOING COMPLIANCE
- 30. Implement a delegated act monitoring procedure (Art.104) — Annex I and Annex III can be amended by Commission delegated act with 3-month minimum notice; classification can change
## See Also
- EU AI Act Art.105: Amendment to Regulation (EU) No 167/2013 — Agricultural and Forestry Vehicle AI Systems, Autonomous Guidance High-Risk Classification
- EU AI Act Art.103: Transitional Provisions — Aug 2026 Full-Application Deadline and 98-Day Compliance Countdown
- EU AI Act Art.5: Prohibited AI Practices — Social Scoring, Biometric Surveillance, Subliminal Manipulation
- EU AI Act Art.43: Conformity Assessment Procedures for High-Risk AI Systems
- EU AI Act Art.72: Post-Market Monitoring Plan — Mandatory Obligations Developer Guide
- EU AI Act Art.73: Reporting Serious Incidents — 15-Day NCA Notification Procedure