EU AI Act Art.43 Conformity Assessment: Internal Control vs. Notified Body — Developer Guide (2026)
EU AI Act Article 43 answers the question every high-risk AI developer eventually asks: how do I actually prove my system is compliant? The answer is a conformity assessment — a structured procedure that transforms your technical documentation, risk management outputs, and quality management system into a legally valid compliance declaration. What makes Art.43 developer-friendly is that for most high-risk AI systems, providers can conduct this assessment themselves through internal control (Annex VI) without engaging an external auditor or paying notified body fees.
This is a significant departure from the approach taken in medical devices (MDR) or machinery regulation, where notified body involvement is widespread. For standalone high-risk AI systems — a category that covers most SaaS providers building AI under Annex III — Art.43(1) defaults to self-certification. The notified body track (Annex VII) is reserved for specific, higher-stakes categories: components of regulated products and, in some cases, remote biometric identification systems.
For developers, Art.43 is the culmination of the compliance preparation work required by Art.9 (risk management), Art.17 (quality management system), and Annex IV (technical documentation). Once those are in place, Art.43 defines the procedure that ties them together into a CE marking-eligible conformity declaration.
Art.43 in the Conformity Infrastructure
Art.43 sits at the interface between pre-market compliance obligations and the market placement act. It is not a standalone requirement — it depends on earlier compliance work being complete:
| Phase | Article | Output |
|---|---|---|
| Risk Management | Art.9 | Risk management file |
| Technical Documentation | Annex IV | Technical documentation package |
| Quality Management System | Art.17 | QMS documentation |
| Conformity Assessment | Art.43 | Assessment report + conclusion |
| Declaration of Conformity | Art.48 | EU Declaration of Conformity |
| CE Marking | Art.49 | CE marking affixed to system or documentation |
| EU Database Registration | Art.32 | Registration in EU AI Act database (before market placement) |
Art.43 depends on Art.9, Art.17, and Annex IV being complete. The assessment procedure is a verification exercise — it checks that prior obligations have been satisfied, not a substitute for satisfying them. If your Art.9 risk management file is incomplete, Art.43 internal control will surface the gap. If Annex IV documentation is missing a required element, the assessment cannot be concluded.
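The dependency chain above can be sketched as a simple readiness gate. A minimal sketch — the prerequisite labels are our own, not terms defined in the Regulation:

```python
# Illustrative sketch: Art.43 assessment as a gate over prior obligations.
# Keys are our own labels for the Art.9 / Art.17 / Annex IV inputs.
def can_begin_assessment(prereqs: dict) -> tuple:
    """Return (ready, missing): ready is True only when every input is complete."""
    missing = [name for name, done in sorted(prereqs.items()) if not done]
    return (len(missing) == 0, missing)
```

A provider with an incomplete Annex IV package would see it surface as a blocking gap before any Annex VI work starts.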
The Two Assessment Tracks
Art.43 establishes two conformity assessment procedures. Which one applies depends on the type of high-risk AI system:
Track 1: Internal Control (Annex VI)
Who it applies to: Providers of high-risk AI systems listed in Annex III, other than those subject to Track 2 requirements.
Core feature: The provider conducts the entire assessment without involving a notified body. This is not a light-touch checklist — it requires rigorous self-assessment against all applicable requirements. But the assessment is under the provider's control, does not require external fees, and can be conducted on the provider's timeline.
What Track 1 requires:
- Complete technical documentation (Annex IV)
- Implemented and documented QMS (Art.17)
- Completed risk management procedure (Art.9)
- Internal assessment of compliance with Arts.9–15
- Drawing up of Declaration of Conformity (Art.48)
- Affixing of CE marking (Art.49)
Typical timeline: 4–12 weeks for a well-prepared provider with complete prior documentation.
Track 2: Notified Body Assessment (Annex VII)
Who it applies to:
- Regulated product components: High-risk AI systems that are products or safety components of products subject to Union harmonisation legislation listed in Annex I Section A, where that legislation requires third-party assessment by a notified body.
- Remote biometric identification systems: Annex III Point 1(a) systems where the Commission has issued implementing acts requiring third-party assessment.
- Commission implementing act categories: Art.43(3) gives the Commission authority to issue implementing acts requiring Annex VII assessment for additional Annex III categories where specific risks or characteristics justify mandatory third-party assessment.
What Track 2 requires:
- All Track 1 documentation (Annex IV, Art.9, Art.17)
- Plus: submission to a designated notified body (Art.33) for:
- QMS assessment
- Technical documentation review
- Potentially: prototype testing or sample inspection
- Notified body issues EU technical documentation certificate or QMS certification
- Declaration of Conformity drawn up after notified body approval
Key constraint: The notified body must be designated under Art.33 and currently active (not suspended under Art.36). Check the NANDO database for current designations before engaging a body.
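That eligibility check can be expressed as a small guard. This is a sketch using our own simplified status fields, not a NANDO API — designation status must still be verified against the database itself:

```python
from dataclasses import dataclass

@dataclass
class NotifiedBody:
    nando_id: str                  # identifier as listed in NANDO (illustrative field)
    designated_under_art33: bool   # currently holds an Art.33 designation
    suspended_under_art36: bool    # designation suspended or withdrawn

def can_engage(body: NotifiedBody) -> bool:
    # Track 2 requires a body that is currently designated and not suspended.
    return body.designated_under_art33 and not body.suspended_under_art36
```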
Art.43(1): The Internal Control Procedure in Detail
The Annex VI internal control procedure has five core elements that must be satisfied before the provider can draw up the Declaration of Conformity:
Element 1: Complete Technical Documentation
The provider must have compiled the full technical documentation package as specified in Annex IV. Required contents include:
- General description of the AI system (purpose, intended use, user categories)
- Detailed description of system elements (architecture, algorithms, training methodology)
- Training and validation data governance documentation
- Risk management documentation (Art.9 output)
- QMS documentation (Art.17 output)
- Human oversight design (Art.14)
- Accuracy, robustness, and cybersecurity measures (Art.15)
- Post-market monitoring plan
The Annex VI check of this element verifies completeness of the package; the substantive compliance check happens in Element 2 below. Compliance with harmonised standards (Art.40) or common specifications (Art.41) creates a presumption of conformity for the elements those standards cover, which reduces the self-assessment burden for covered requirements.
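The presumption-of-conformity effect reduces to set arithmetic: whatever an applied standard covers drops out of the full self-assessment scope. A sketch, with our own short labels for the Arts.9–15 requirements:

```python
# Arts.9-15 requirements as a set of our own shorthand labels (not official terms).
ARTS_9_TO_15 = {
    "art9_risk", "art10_data", "art11_docs", "art12_logging",
    "art13_transparency", "art14_oversight", "art15_accuracy",
}

def residual_scope(covered_by_standards: set) -> set:
    """Requirements still needing full internal assessment after the
    Art.40/Art.41 presumption of conformity is applied."""
    return ARTS_9_TO_15 - covered_by_standards
```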
Element 2: Internal Compliance Assessment
The provider conducts an internal assessment of the AI system's compliance with all applicable requirements of Title III Chapter 2 (Arts.9–15). This assessment must:
- Be systematic — every requirement must be checked
- Be documented — the assessment methodology, evidence reviewed, and findings must be recorded
- Be conducted by qualified personnel — ideally a team with both technical and legal expertise
- Identify any non-conformities and resolve them before concluding the assessment
Element 3: Declaration of Conformity
Upon successful internal assessment, the provider draws up the Declaration of Conformity under Art.48. The declaration must identify the AI system, reference the applicable requirements and any harmonised standards applied, and be signed by an authorised representative.
The declaration is a legally significant document — it asserts that the provider accepts full responsibility for the system's compliance with EU AI Act requirements.
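As a data structure, the declaration might be modelled like this — an illustrative subset of the content Art.48 calls for, with field names of our own choosing rather than the Act's full mandatory list:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DeclarationOfConformity:
    # Illustrative subset of Art.48 declaration content; not the full mandatory list.
    system_name: str
    system_version: str
    provider: str
    requirements_referenced: List[str]        # e.g. the applicable Arts.9-15 requirements
    harmonised_standards_applied: List[str]   # Art.40 standards relied on, if any
    signatory: str                            # authorised representative who signs

    def is_signed(self) -> bool:
        # The declaration has no legal effect until signed.
        return bool(self.signatory.strip())
```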
Element 4: CE Marking
After the Declaration of Conformity is complete, the provider affixes the CE marking under Art.49. The CE marking signals to market participants and surveillance authorities that the conformity assessment has been completed.
Element 5: Record-Keeping
The provider must retain the complete conformity assessment record — including Annex IV technical documentation, Art.9 risk management file, Art.17 QMS documentation, and Annex VI assessment report — for 10 years after the AI system is placed on the market (Art.18). These records must be available to national market surveillance authorities upon request (Art.74).
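The Art.18 retention window can be computed directly from the market placement date. A minimal sketch (it ignores the edge case of a 29 February placement date, where `date.replace` raises `ValueError`):

```python
from datetime import date

RETENTION_YEARS = 10  # Art.18: 10 years after market placement

def retention_expiry(placed_on_market: date) -> date:
    """Date until which the full conformity assessment record must be retained."""
    return placed_on_market.replace(year=placed_on_market.year + RETENTION_YEARS)
```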
Art.43(2): When Notified Body Assessment Is Mandatory
Understanding when Track 2 is mandatory is as important as understanding how to execute Track 1. There are three pathways to mandatory notified body involvement:
Pathway A: Regulated Product Integration
If your high-risk AI system is embedded as a product or safety component in a product subject to Union harmonisation legislation listed in Annex I Section A, the conformity assessment procedure of that Annex I legislation governs — not the AI Act's standalone procedure.
Annex I Section A includes:
| Legislation | Scope |
|---|---|
| Machinery Regulation (EU) 2023/1230 | Industrial machinery with AI-driven safety functions |
| Medical Devices Regulation (MDR) 2017/745 | AI systems as medical device software (MDSW) |
| In-vitro Diagnostic Medical Devices Regulation (IVDR) 2017/746 | AI-driven IVD software |
| Radio Equipment Directive 2014/53/EU | AI in radio-connected products |
| Automotive General Safety Regulation (EU) 2019/2144 | AI in vehicle safety systems |
| Civil Aviation (EASA rules) | AI in aircraft systems |
Where the applicable Annex I legislation requires a notified body for the product as a whole, the AI system assessment is integrated into that notified body procedure. The AI Act does not add a parallel Annex VII assessment on top — the Annex I procedure covers both.
Practical implication for developers: An AI diagnostic system integrated into a Class IIb medical device does not get the Art.43(1) internal control option. The MDR notified body procedure governs, and that procedure is updated to cover AI Act requirements. The same applies to AI safety components in machinery, vehicles, and aviation systems.
Pathway B: Remote Biometric Identification Systems
Remote biometric identification systems listed in Annex III Point 1(a) are subject to third-party assessment requirements where the Commission has determined this is necessary. As of 2026, providers of such systems should monitor EU AI Office communications for applicable implementing acts.
Pathway C: Commission Implementing Acts
Art.43(3) creates a forward-looking expansion mechanism. The Commission can issue implementing acts requiring Annex VII assessment for additional Annex III categories where specific risk profiles or characteristics justify mandatory third-party scrutiny. Providers of high-risk AI in rapidly evolving risk categories (e.g., AI in critical infrastructure, law enforcement) should monitor this pathway.
Art.43(4): Substantial Modification — The Reset Trigger
Art.43(4) establishes that any substantial modification to a high-risk AI system requires a new conformity assessment from the beginning. This is one of the most operationally significant provisions for development teams with continuous deployment practices.
What Constitutes Substantial Modification?
Art.3(23) defines substantial modification as a change to the AI system that:
- Affects compliance with applicable requirements of Title III Chapter 2 (Arts.9–15); or
- Changes the intended purpose of the AI system.
Practical examples of substantial modifications:
- Adding new capabilities that extend the intended purpose (e.g., expanding from single-language to multilingual output)
- Retraining on a materially different dataset that changes system behaviour, accuracy, or risk profile
- Changing the deployment context (e.g., from HR screening tool to judicial decision support)
- Architectural changes that affect risk level (e.g., adding real-time autonomous decision capability)
- Changes that introduce new categories of potential fundamental rights impact
What Does NOT Constitute Substantial Modification?
- Bug fixes that restore the system to its documented behaviour without changing intended purpose
- Performance improvements that maintain identical functionality and risk profile
- UI/UX changes that do not affect AI decision logic
- Security patches that do not modify model behaviour
- Infrastructure migration (e.g., moving to a different server) without changing system architecture
- Documentation updates
Governance Recommendation: Modification Classification Procedure
Development teams should establish a modification classification procedure as part of their Art.17 QMS. Every change affecting the AI system should be classified before deployment:
MINOR → No new conformity assessment needed
POTENTIALLY SUBSTANTIAL → Classification review required (senior review, documented decision)
SUBSTANTIAL → New Art.43 assessment mandatory before deployment
This classification must be documented. The audit trail from change description to classification decision to assessment outcome is the evidence that Art.43(4) governance is in place.
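A minimal audit-trail entry for each classification decision might look like this — the field names are our own, not prescribed by the Act:

```python
from dataclasses import dataclass, asdict

@dataclass
class ClassificationRecord:
    change_id: str
    description: str
    classification: str             # "minor" | "potentially_substantial" | "substantial"
    rationale: str                  # the documented Art.3(23) analysis
    decided_by: str
    new_assessment_triggered: bool  # True when Art.43(4) reassessment is initiated

def to_audit_entry(record: ClassificationRecord) -> dict:
    # Serialisable form for the QMS change log (Art.17).
    return asdict(record)
```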
Art.43 × Intersection Matrix
| Article | Connection | Impact on Art.43 |
|---|---|---|
| Art.9 | Risk management | Incomplete Art.9 = Art.43 assessment cannot be concluded |
| Art.17 | Quality management | Art.17 QMS documentation is the primary Annex VI input |
| Annex IV | Technical documentation | Complete Annex IV = conformity assessment ready to begin |
| Art.22 | Authorised representative | Art.22 representative must ensure Art.43 assessment is completed for third-country providers |
| Art.32 | EU database registration | Registration required before market placement; depends on completed Art.43 assessment |
| Art.40 | Harmonised standards | Compliance with harmonised standards creates presumption of conformity — reduces Annex VI self-assessment burden for covered requirements |
| Art.41 | Common specifications | Same presumption effect as Art.40 for elements covered by common specifications |
| Art.48 | Declaration of Conformity | Art.48 declaration is the output of the Art.43 assessment — cannot precede it |
| Art.49 | CE marking | CE marking affixed only after Art.48 declaration is drawn up |
| Art.72 | Post-market monitoring | Art.72 obligations begin after market placement; monitoring outputs may trigger new Art.43 assessment if substantial modification detected |
| Art.73 | Serious incident reporting | Incidents may surface modifications requiring reassessment under Art.43(4) |
The SaaS Developer Decision Tree
Is your AI system listed in Annex III as high-risk?
│
├── NO → Art.43 conformity assessment does not apply.
│ Check: Are you in scope of GPAI obligations (Art.51–56)?
│
└── YES →
Is the AI system a product or safety component of a product
subject to Annex I Section A legislation
(MDR, Machinery Regulation, IVDR, Automotive, Aviation)?
│
├── YES → Follow the Annex I conformity procedure.
│ Does the Annex I legislation require a notified body?
│ ├── YES → Track 2: Notified Body (Annex VII) mandatory.
│ │ Engage a designated notified body (NANDO).
│ └── NO → Annex I internal procedure applies.
│ Verify whether AI Act requirements integrate.
│
└── NO →
Is it a remote biometric identification system (Annex III Point 1(a))
subject to a Commission implementing act requiring third-party assessment?
│
├── YES → Track 2: Notified Body (Annex VII) mandatory.
│
└── NO → Track 1: Internal Control (Annex VI) available.
Self-certification path.
Timeline: 4–12 weeks.
Cost: Internal staff time (no notified body fees).
For SaaS providers: If you are building a standalone AI system under Annex III Points 2–8 — employment screening, educational AI, critical infrastructure management, law enforcement AI that is not remote biometric ID, migration or asylum AI, justice system AI — Track 1 is available by default. You do not need a notified body unless you fall into one of the Track 2 pathways above.
Annex VI vs. Annex VII: Practical Comparison
| Dimension | Track 1: Annex VI (Internal Control) | Track 2: Annex VII (Notified Body) |
|---|---|---|
| External party required? | No | Yes — designated notified body (Art.33) |
| Cost range | €0–50K (internal staff time) | €50K–500K+ (notified body fees + delays) |
| Typical timeline | 4–12 weeks | 6–18 months (notified body capacity) |
| Who asserts compliance? | Provider self-assessment | Notified body independent determination |
| Post-market challenge risk | Market surveillance authority challenge possible | Lower risk (third-party validated) |
| CE marking basis | Provider Declaration of Conformity (Art.48) | Notified body certificate + provider declaration |
| NANDO dependency | None | Body must maintain designation under Art.33/36 |
| Annual reassessment | Internal review + trigger events | Per notified body surveillance schedule |
| Applies to | Most Annex III standalone systems | Annex I-integrated + remote biometric ID |
CLOUD Act × Art.43: Where Your Conformity Documentation Lives
Art.43 requires providers to complete and retain conformity assessment documentation — Annex IV technical documentation, Art.9 risk management files, Art.17 QMS records, and Annex VI assessment reports — for 10 years under Art.18. These records must be available to national market surveillance authorities on request (Art.74).
If this documentation is stored on US-headquartered cloud infrastructure (AWS, Azure, Google Cloud), it falls within scope of the CLOUD Act (Clarifying Lawful Overseas Use of Data Act), which authorises US law enforcement to compel US cloud providers to produce data in their possession, custody, or control, regardless of where in the world it is stored.
This creates a dual-access structure for conformity assessment records:
| Access Path | Mechanism | Provider Notification |
|---|---|---|
| EU national market surveillance authority | Art.74 EU AI Act request | Provider receives request |
| US government subpoena via CLOUD Act | US court order to cloud provider | Provider may not be notified |
For providers of high-risk AI systems, this dual-access structure has concrete implications:
- Technical documentation (which reveals system architecture and training methodology) could be compelled without the provider's knowledge
- Risk management files (which may contain frank internal assessments of system limitations) could be obtained through the cloud provider rather than the provider directly
- QMS documentation (which documents internal governance processes) could be accessed through a different legal pathway than EU authorities would use
EU-native infrastructure eliminates this dual-access path entirely. When conformity assessment records are stored on infrastructure that is owned and operated by EU entities, the CLOUD Act access pathway does not apply. Records remain under a single-regime framework — only EU market surveillance authorities can compel access, through the Art.74 mechanism with provider notification.
For providers building AI systems that will be subject to Art.43, the infrastructure decision for storing conformity documentation is not merely a cost or performance question — it is a jurisdiction question with compliance risk implications.
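That jurisdiction question can be encoded as a simple storage-regime gate. A sketch — the category labels are our own shorthand, not legal terms:

```python
# Illustrative mapping from where conformity records live to the access regime.
SINGLE_REGIME = {"eu_native"}
DUAL_ACCESS = {"us_hyperscaler", "us_hyperscaler_eu_region"}

def access_regime(records_jurisdiction: str) -> str:
    """Classify which compelled-access paths apply to stored conformity records."""
    if records_jurisdiction in SINGLE_REGIME:
        return "eu_only_art74"                      # Art.74 requests, with notification
    if records_jurisdiction in DUAL_ACCESS:
        return "dual_access_art74_plus_cloud_act"   # CLOUD Act path added
    return "unknown_review_required"
```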
Python Implementation
1. ConformityAssessmentRouter
Determines which Art.43 track applies based on AI system profile:
from dataclasses import dataclass
from enum import Enum
from typing import Optional
class ConformityTrack(Enum):
INTERNAL_CONTROL = "annex_vi_internal_control"
NOTIFIED_BODY = "annex_vii_notified_body"
ANNEX_I_PROCEDURE = "annex_i_regulated_product"
NOT_APPLICABLE = "not_high_risk"
class AnnexIIIPoint(Enum):
POINT_1A_BIOMETRIC_REMOTE = "1a"
POINT_1B_BIOMETRIC_CATEGORISATION = "1b"
POINT_2_CRITICAL_INFRASTRUCTURE = "2"
POINT_3_EDUCATION = "3"
POINT_4_EMPLOYMENT = "4"
POINT_5_ESSENTIAL_SERVICES = "5"
POINT_6_LAW_ENFORCEMENT = "6"
POINT_7_MIGRATION = "7"
POINT_8_JUSTICE = "8"
@dataclass
class AISystemProfile:
annex_iii_point: Optional[AnnexIIIPoint]
is_annex_i_regulated_product_component: bool
annex_i_legislation_requires_notified_body: bool
covered_by_commission_implementing_act: bool = False
class ConformityAssessmentRouter:
"""
Implements Art.43(1)-(3) track selection logic for high-risk AI systems.
"""
MANDATORY_NOTIFIED_BODY_POINTS = {AnnexIIIPoint.POINT_1A_BIOMETRIC_REMOTE}
def determine_track(self, profile: AISystemProfile) -> ConformityTrack:
if profile.annex_iii_point is None:
return ConformityTrack.NOT_APPLICABLE
# Art.43(2): Annex I regulated product components
if profile.is_annex_i_regulated_product_component:
if profile.annex_i_legislation_requires_notified_body:
return ConformityTrack.NOTIFIED_BODY
return ConformityTrack.ANNEX_I_PROCEDURE
# Art.43: Mandatory third-party assessment categories
if profile.annex_iii_point in self.MANDATORY_NOTIFIED_BODY_POINTS:
return ConformityTrack.NOTIFIED_BODY
if profile.covered_by_commission_implementing_act:
return ConformityTrack.NOTIFIED_BODY
# Default: Track 1 available for all remaining Annex III systems
return ConformityTrack.INTERNAL_CONTROL
def assessment_summary(self, profile: AISystemProfile) -> dict:
track = self.determine_track(profile)
metadata = {
ConformityTrack.INTERNAL_CONTROL: {
"procedure": "Annex VI Internal Control",
"notified_body_required": False,
"typical_timeline_weeks": "4-12",
"cost_range": "€0-50K (internal staff)",
"regulatory_basis": "Art.43(1) EU AI Act (EU) 2024/1689",
},
ConformityTrack.NOTIFIED_BODY: {
"procedure": "Annex VII Third-Party Assessment",
"notified_body_required": True,
"typical_timeline_weeks": "26-78",
"cost_range": "€50K-500K+ (notified body fees)",
"regulatory_basis": "Art.43(2)/(3) EU AI Act (EU) 2024/1689",
},
ConformityTrack.ANNEX_I_PROCEDURE: {
"procedure": "Applicable Annex I harmonisation legislation",
"notified_body_required": None,
"typical_timeline_weeks": "varies",
"cost_range": "varies by legislation",
"regulatory_basis": "Art.43(2) EU AI Act + applicable Annex I legislation",
},
ConformityTrack.NOT_APPLICABLE: {
"procedure": "Not applicable",
"notified_body_required": False,
"typical_timeline_weeks": "N/A",
"cost_range": "N/A",
"regulatory_basis": "System not in Annex III scope",
},
}
return {"track": track.value, **metadata[track]}
2. AnnexVIChecker
Validates that all Annex VI (Internal Control) procedure elements are in place before drawing up the Declaration of Conformity:
from dataclasses import dataclass, field
from typing import List
@dataclass
class CheckResult:
element: str
satisfied: bool
gaps: List[str] = field(default_factory=list)
@property
def status(self) -> str:
return "PASS" if self.satisfied else "FAIL"
class AnnexVIChecker:
"""
Validates Annex VI (Internal Control) procedure readiness.
All checks must pass before Declaration of Conformity (Art.48) can be drawn up.
"""
def check_technical_documentation(
self,
annex_iv_complete: bool,
has_risk_management_file: bool,
has_qms_documentation: bool,
has_validation_results: bool,
) -> CheckResult:
gaps = []
if not annex_iv_complete:
gaps.append("Annex IV technical documentation package incomplete")
if not has_risk_management_file:
gaps.append("Art.9 risk management file missing or not concluded")
if not has_qms_documentation:
gaps.append("Art.17 QMS documentation missing")
if not has_validation_results:
gaps.append("Validation and testing results not documented")
return CheckResult("Technical Documentation (Annex IV)", len(gaps) == 0, gaps)
def check_compliance_assessment(
self,
arts_9_to_15_assessed: bool,
assessment_documented: bool,
nonconformities_resolved: bool,
qualified_assessors: bool,
) -> CheckResult:
gaps = []
if not arts_9_to_15_assessed:
gaps.append("Not all Arts.9–15 requirements assessed")
if not assessment_documented:
gaps.append("Internal assessment findings not documented")
if not nonconformities_resolved:
gaps.append("Open non-conformities remain — cannot conclude assessment")
if not qualified_assessors:
gaps.append("Assessment not conducted by qualified personnel")
return CheckResult("Internal Compliance Assessment", len(gaps) == 0, gaps)
def check_record_keeping(
self,
retention_period_years: int,
records_accessible_to_authorities: bool,
records_jurisdiction: str,
) -> CheckResult:
gaps = []
if retention_period_years < 10:
gaps.append(
f"Record retention {retention_period_years}y insufficient — Art.18 requires 10 years"
)
if not records_accessible_to_authorities:
gaps.append("Records not configured for Art.74 market surveillance access")
if records_jurisdiction != "eu_native":
gaps.append(
f"Records on {records_jurisdiction} infrastructure — CLOUD Act dual-access risk"
)
return CheckResult("Record Keeping (Art.18)", len(gaps) == 0, gaps)
def full_report(self, **kwargs) -> dict:
checks = [
self.check_technical_documentation(
kwargs.get("annex_iv_complete", False),
kwargs.get("has_risk_management_file", False),
kwargs.get("has_qms_documentation", False),
kwargs.get("has_validation_results", False),
),
self.check_compliance_assessment(
kwargs.get("arts_9_to_15_assessed", False),
kwargs.get("assessment_documented", False),
kwargs.get("nonconformities_resolved", False),
kwargs.get("qualified_assessors", False),
),
self.check_record_keeping(
kwargs.get("retention_period_years", 0),
kwargs.get("records_accessible_to_authorities", False),
kwargs.get("records_jurisdiction", "unknown"),
),
]
all_gaps = [g for c in checks if not c.satisfied for g in c.gaps]
return {
"annex_vi_complete": len(all_gaps) == 0,
"declaration_of_conformity_possible": len(all_gaps) == 0,
"checks": [
{"element": c.element, "status": c.status, "gaps": c.gaps}
for c in checks
],
"total_gaps": len(all_gaps),
"blocking_gaps": all_gaps,
}
3. SubstantialModificationDetector
Implements Art.3(23) + Art.43(4) modification classification to determine whether a change triggers a new conformity assessment:
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple
class ModificationSeverity(Enum):
MINOR = "minor"
REVIEW_REQUIRED = "potentially_substantial"
SUBSTANTIAL = "substantial"
@dataclass
class ModificationEvent:
change_description: str
affects_intended_purpose: bool
adds_new_capability: bool
changes_deployment_context: bool
affects_risk_level: bool
dataset_change_significant: bool
class SubstantialModificationDetector:
"""
Implements Art.3(23) + Art.43(4) classification.
Determines whether a change to a high-risk AI system requires
a new conformity assessment before deployment.
"""
# Art.3(23) definitive triggers: affects intended purpose or compliance
DEFINITIVE_TRIGGERS = [
"affects_intended_purpose",
"adds_new_capability",
"changes_deployment_context",
]
# Potentially substantial: requires classification review
REVIEW_TRIGGERS = [
"affects_risk_level",
"dataset_change_significant",
]
def classify(
self, event: ModificationEvent
) -> Tuple[ModificationSeverity, List[str]]:
reasons = []
for trigger in self.DEFINITIVE_TRIGGERS:
if getattr(event, trigger):
reasons.append(
f"Art.3(23) trigger: {trigger.replace('_', ' ')}"
)
if reasons:
return ModificationSeverity.SUBSTANTIAL, reasons
for trigger in self.REVIEW_TRIGGERS:
if getattr(event, trigger):
reasons.append(
f"Classification review required: {trigger.replace('_', ' ')}"
)
if reasons:
return ModificationSeverity.REVIEW_REQUIRED, reasons
return ModificationSeverity.MINOR, ["No Art.3(23) triggers identified"]
def new_assessment_required(self, event: ModificationEvent) -> bool:
severity, _ = self.classify(event)
return severity == ModificationSeverity.SUBSTANTIAL
def modification_report(self, event: ModificationEvent) -> dict:
severity, reasons = self.classify(event)
return {
"change": event.change_description,
"severity": severity.value,
"new_art43_assessment_required": (
severity == ModificationSeverity.SUBSTANTIAL
),
"classification_review_required": (
severity == ModificationSeverity.REVIEW_REQUIRED
),
"reasons": reasons,
"regulatory_basis": "Art.3(23) + Art.43(4) EU AI Act (EU) 2024/1689",
}
40-Item Art.43 Conformity Assessment Compliance Checklist
Track Selection (Items 1–8)
- 1. Confirmed that the AI system is listed in Annex III as a high-risk AI system
- 2. Verified whether the AI system is a product or safety component of a product subject to Annex I Section A legislation
- 3. If Annex I regulated product: identified the applicable legislation and its conformity assessment requirements
- 4. Verified whether the system is a remote biometric identification system (Annex III Point 1(a))
- 5. Checked for applicable Commission implementing acts requiring Annex VII assessment
- 6. Confirmed Track 1 (Internal Control) is available for this specific system type
- 7. Documented track selection decision with regulatory basis (Art.43(1) or Art.43(2))
- 8. Assigned internal owner responsible for conformity assessment management
Pre-Assessment: Documentation Readiness (Items 9–20)
- 9. Art.9 risk management procedure completed and concluded
- 10. Risk management file covers all known and foreseeable risks for intended use
- 11. Risk mitigation measures implemented, tested, and documented
- 12. Residual risk evaluated as acceptable
- 13. Art.17 QMS fully documented and implemented
- 14. QMS covers all mandatory elements (Art.17(1)(a)–(l))
- 15. Annex IV technical documentation package complete
- 16. Training data governance documentation included (data sources, processing, quality)
- 17. Validation and testing results documented against intended use cases
- 18. Human oversight measures (Art.14) implemented and documented
- 19. Accuracy, robustness, cybersecurity measures (Art.15) implemented
- 20. Post-market monitoring plan documented (Art.72)
Annex VI Procedure Execution (Items 21–28)
- 21. Internal compliance assessment conducted for all Arts.9–15 requirements
- 22. Assessment conducted by qualified personnel with technical and legal expertise
- 23. Assessment findings documented with evidence references
- 24. All non-conformities identified in assessment resolved before proceeding
- 25. Applied harmonised standards (Art.40) or common specifications (Art.41) referenced
- 26. Internal assessment report finalised
- 27. Declaration of Conformity (Art.48) drawn up with all mandatory elements
- 28. CE marking (Art.49) affixed after Declaration of Conformity is complete
Post-Assessment Obligations (Items 29–35)
- 29. EU database registration completed before market placement (Art.32)
- 30. All conformity assessment records retained for 10 years (Art.18)
- 31. Records accessible to national market surveillance authorities (Art.74)
- 32. Post-market monitoring system operational (Art.72)
- 33. Serious incident reporting procedure in place (Art.73)
- 34. Instructions for use delivered to deployers (Art.13)
- 35. Authorised representative appointed if provider is established outside EU (Art.22)
Substantial Modification Governance (Items 36–40)
- 36. Modification classification procedure established in QMS (Art.17)
- 37. Every AI system change assessed for substantial modification status before deployment
- 38. Substantial modification determination documented with Art.3(23) analysis
- 39. New conformity assessment initiated immediately upon substantial modification determination
- 40. Version control links each AI system release to its conformity assessment record
See Also
- EU AI Act Art.9 Risk Management System: Developer Guide
- EU AI Act Art.17 Quality Management System: Developer Guide
- EU AI Act Art.32 EU Database Registration: Developer Guide
- EU AI Act Art.37 Obligations of Importers: Developer Guide
- EU AI Act Art.36 Suspension of Notified Body Designation: Developer Guide