2026-04-16 · 12 min read

EU AI Act Art.31 Conformity Assessment Procedure for High-Risk AI: Developer Guide (2026)

EU AI Act Article 31 establishes the conformity assessment obligation that every provider of a high-risk AI system must fulfil before placing the system on the EU market or putting it into service. Conformity assessment is the gatekeeper: without a completed and documented procedure, a high-risk AI system cannot legally carry the CE marking, cannot be marketed in the EU, and exposes the provider to fines of up to €15 million or 3 % of global annual turnover under Art.99(4) for placing a non-conforming system on the market.

Art.31 creates two distinct assessment routes with very different cost and time profiles. The Annex VI internal control route allows providers to self-certify compliance using their own technical documentation and testing. The Annex VII notified body route requires external accredited assessment — mandatory for biometric identification systems in publicly accessible spaces, optional (but strategically valuable) for other high-risk categories.

The practical consequence for AI developers: the route you must take is determined by Annex III category, not by your choice. Getting this wrong costs months. Notified body queues in 2025-2026 are 6-12 months. Internal control assessments, if properly prepared, can be completed in 4-8 weeks. This guide walks through both routes, the technical documentation requirements each demands, and how to architect your development process to minimise conformity assessment time and cost.


Art.31 in the High-Risk AI Compliance Timeline

Art.31 is a pre-market gate. It sits between the implementation of Art.8-17 technical requirements and the Art.48 EU declaration of conformity that legally authorises market placement:

| Compliance Phase | Key Articles | Art.31 Role |
|---|---|---|
| Requirements implementation | Art.8 (compliance with requirements), Art.9 (risk management system), Art.10 (data governance), Art.11 (technical documentation), Art.13 (transparency), Art.14 (human oversight), Art.15 (accuracy/robustness) | Pre-condition: all requirements must be met before assessment |
| Quality management system | Art.17 | Annex VII route requires the QMS to be operational before assessment |
| Technical documentation | Art.18 (keeping), Annex IV (content) | Assessment input: technical documentation reviewed during the Art.31 procedure |
| Conformity assessment | Art.31 | Gate: Annex VI (self-assessment) or Annex VII (notified body) |
| Declaration of conformity | Art.48 | Post-assessment: provider signs the EU declaration |
| CE marking | Art.49 | Post-declaration: CE marking affixed |
| Registration | Art.71 (EU AI database) | Post-CE: register before market placement for most Annex III systems |
| Post-market | Art.72 (PMM), Art.73 (incident reporting) | Ongoing obligations begin at market placement |

Art.31 also has a retroactive trigger via Art.23 (substantial modification): if a post-deployment update constitutes a substantial modification, the conformity assessment procedure must be repeated in full before the modified system is placed back on the market or put into service.


The Two Routes: Annex VI vs Annex VII

The choice between conformity assessment routes is not discretionary for most providers — it is determined by the Annex III category of the high-risk AI system:

| Factor | Annex VI (Internal Control) | Annex VII (Notified Body) |
|---|---|---|
| Who performs assessment | Provider's own staff | Accredited notified body (third party) |
| When mandatory | Default for all Annex III systems not in point 1 | Mandatory for Annex III point 1: biometric ID systems in publicly accessible spaces |
| When optional | N/A | Any Annex III system where the provider wants external validation |
| Cost estimate | €10k–€50k internal resource | €50k–€500k+ notified body fees + internal preparation |
| Typical timeline | 4–8 weeks (if documentation ready) | 6–18 months (queue + assessment + iteration) |
| Output | Internal conformity assessment report + technical documentation package | Notified body certificate + QMS approval |
| Renewal trigger | Substantial modification (Art.23) | Substantial modification + notified body certificate expiry |
| Harmonised standard shortcut | Yes — standard compliance = presumption of conformity | Yes — but the notified body still audits the QMS |

Annex III Points That Trigger Annex VII (Mandatory Notified Body Route)

Under Art.31, the Annex VII route is mandatory for:

  1. Annex III point 1 systems: biometric identification systems in publicly accessible spaces

For all other Annex III categories — critical infrastructure (point 2), education (point 3), employment (point 4), essential services (point 5), law enforcement (point 6), migration and border management (point 7), administration of justice and democratic processes (point 8) — Annex VI internal control is the default route.


Annex VI: Internal Control Procedure — 6 Steps

Annex VI defines a 6-step internal control procedure that providers execute without external involvement. Each step has specific documentation requirements that feed into the technical documentation package maintained under Art.18:

Step 1: Verify Conformity with Harmonised Standards (Art.40) or Common Specifications (Art.41)

Before the assessment proper, identify whether a harmonised standard published in the Official Journal of the EU covers the high-risk AI system in whole or in part. If the system is fully compliant with applicable harmonised standards, Art.42(1) creates a presumption of conformity for those aspects. Document:

  1. Each applicable standard's reference, version, and Official Journal citation
  2. Which Art.8-15 requirements each standard covers
  3. Coverage gaps that must instead be evidenced from first principles during the assessment

As of 2026, CEN-CENELEC JTC21 has published early standards, but comprehensive coverage remains incomplete. For aspects not covered by harmonised standards, Art.41 common specifications (adopted by Commission implementing act) may apply.
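
Step 1 can be tracked as a simple coverage worksheet. A minimal sketch: the `StandardsCoverage` class, the requirement keys, and the placeholder standard reference are all illustrative, not drawn from the Act or the Official Journal.

```python
from dataclasses import dataclass, field


@dataclass
class StandardsCoverage:
    """Step 1 worksheet: which harmonised standards (Art.40) are claimed to
    cover which Art.8-15 requirements. Keys and references are illustrative."""
    system_name: str
    # requirement key -> list of standard references claimed to cover it
    coverage: dict[str, list[str]] = field(default_factory=dict)

    def claim_coverage(self, requirement: str, standard_ref: str) -> None:
        self.coverage.setdefault(requirement, []).append(standard_ref)

    def uncovered(self, all_requirements: list[str]) -> list[str]:
        # Requirements with no standard behind them get no Art.42 presumption
        # and need first-principles evidence in the Annex VI assessment.
        return [r for r in all_requirements if not self.coverage.get(r)]


REQUIREMENTS = ["art9_risk", "art10_data", "art13_transparency",
                "art14_oversight", "art15_robustness"]

cov = StandardsCoverage("cv-screening-model")
cov.claim_coverage("art9_risk", "EN standard ref (placeholder)")
```

The `uncovered` list becomes the input to Step 3's requirement-by-requirement review: everything on it must be evidenced without the standards shortcut.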

Step 2: Compile the Annex IV Technical Documentation Package

Annex VI requires the complete Annex IV technical documentation package to exist before assessment begins. The 8 categories of Annex IV content that must be ready:

| Annex IV Element | Required Content | Common Gap |
|---|---|---|
| 1. General description | System purpose, capabilities, interaction model, affected persons | Vague capability descriptions |
| 2. Detailed description | Development processes, training methodology, architecture | Undocumented training data decisions |
| 3. Training, validation, testing | Datasets used, data governance, validation/test methodology, performance metrics | Insufficient demographic disaggregation records |
| 4. Monitoring, functioning, control | Human oversight measures, Art.14 UI features, logging mechanisms | Art.14 oversight controls undocumented |
| 5. Risk assessment | Art.9 risk management system documentation, identified risks and mitigations | Art.9 risk register incomplete or not linked to technical controls |
| 6. Post-market monitoring | Art.72 PMM plan, Art.73 incident reporting procedures | PMM plan not drafted pre-assessment |
| 7. Declaration of conformity | Draft EU declaration per Annex V (completed at Step 5) | Incomplete |
| 8. Instructions for use | Art.13(3) transparency documentation, operator instructions | Generic documentation not system-specific |

Step 3: Technical Documentation Review and Internal Assessment Report

The provider's designated responsible persons (Art.16(1)(k) — provider must designate staff with sufficient AI literacy and legal knowledge) conduct a structured review of the technical documentation against the Art.8-15 requirements:

| Requirement Article | Assessment Question | Evidence Required |
|---|---|---|
| Art.9 — Risk management | Is the risk management system documented, operational, and cycle-complete? | Risk register, testing records, residual risk acceptance sign-off |
| Art.10 — Data governance | Are training/validation/test datasets documented with bias mitigation evidence? | Data sheets, preprocessing logs, demographic validation |
| Art.11 — Technical documentation | Is Annex IV complete and accurate? | Documentation completeness checklist |
| Art.13 — Transparency | Do instructions for use contain all Art.13(3) mandatory disclosures? | IFU document with Art.13(3) cross-reference |
| Art.14 — Human oversight | Are oversight measures technically implemented and documented? | Human override logs, interrupt capability tests |
| Art.15 — Accuracy/robustness | Do accuracy, robustness, and cybersecurity measures meet the Art.15 standard? | Test metrics, adversarial testing results, security penetration test summary |

The output of Step 3 is an Internal Assessment Report — not mandated in name by Art.31 but required in substance to document the assessment process and conclusions.

Step 4: Technical Testing and Validation

Providers must perform — or commission — technical testing sufficient to verify compliance with Art.8-15. This includes:

  1. Performance testing against declared accuracy metrics (Art.15)
  2. Bias and fairness testing across relevant demographic groups (Art.10(5))
  3. Robustness and adversarial testing (Art.15)
  4. Cybersecurity assessment, including penetration testing where appropriate (Art.15(5))
  5. Human oversight verification: interrupt and override mechanisms (Art.14)

All test results are documented with version identifiers for the AI model, dataset, and software components tested.
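
That versioning discipline can be sketched as one immutable record per test run. The field names and the pass convention (metric must not exceed threshold, as for error rates or bias gaps) are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class TestEvidence:
    """One Step 4 test run, pinned to exact artefact versions (illustrative schema)."""
    test_name: str          # e.g. "demographic_parity_gap"
    test_category: str      # "performance" | "bias" | "robustness" | "cybersecurity" | "oversight"
    model_version: str
    dataset_version: str
    software_version: str
    metric_value: float
    threshold: float
    run_date: date

    @property
    def passed(self) -> bool:
        # Convention assumed here: the metric must not exceed the threshold.
        return self.metric_value <= self.threshold


ev = TestEvidence(
    test_name="demographic_parity_gap",
    test_category="bias",
    model_version="model-2.3.1",
    dataset_version="eval-2026-03",
    software_version="svc-1.8.0",
    metric_value=0.04,
    threshold=0.05,
    run_date=date(2026, 3, 12),
)
```

Freezing the dataclass keeps test evidence tamper-evident once recorded; re-running a test produces a new record rather than mutating the old one.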

Step 5: Sign the EU Declaration of Conformity (Annex V)

After the internal assessment and testing confirm conformity, the provider's legal representative signs the EU Declaration of Conformity per Annex V. The declaration must specify:

  1. The AI system's name, type, and any unambiguous reference enabling identification and traceability
  2. A statement that the declaration is issued under the sole responsibility of the provider
  3. A statement of conformity with the AI Act and, where personal data are processed, with EU data protection law
  4. References to any harmonised standards or common specifications applied
  5. Where applicable, the notified body's name, identification number, and certificate reference
  6. Place and date of issue, plus the signatory's name and function

The declaration must be drawn up in one of the EU official languages of the placing-on-market jurisdiction and kept for 10 years under Art.18(1).
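
The Art.18(1) retention window can be computed directly. A small helper, assuming retention runs from the market-placement date:

```python
from datetime import date


def retention_expiry(market_placement: date, years: int = 10) -> date:
    """Art.18(1): keep the declaration and technical documentation for
    10 years from placing on the market. Handles a 29 Feb placement date."""
    try:
        return market_placement.replace(year=market_placement.year + years)
    except ValueError:
        # 29 Feb placement, non-leap target year: fall back to 28 Feb
        return market_placement.replace(year=market_placement.year + years, day=28)


expiry = retention_expiry(date(2026, 5, 1))
```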

Step 6: Affix CE Marking (Art.49)

The CE marking is affixed to the high-risk AI system — or, where that is not possible, to the packaging or accompanying documents — after the EU Declaration of Conformity is signed. Art.49(3) specifies minimum height of 5 mm for the CE marking where affixed to a physical product.

For software-only AI systems, the CE marking appears in the product documentation, software UI (where feasible), and must be included in the technical documentation.


Annex VII: Notified Body Route — When and How

Annex VII applies to biometric identification systems in publicly accessible spaces (Annex III point 1) and to providers who voluntarily choose external certification. The Annex VII procedure involves two parallel streams:

Stream A: Quality Management System Assessment

The notified body assesses whether the provider's QMS (required under Art.17) meets Annex VII Appendix 1 criteria:

  1. QMS documentation: quality policy, objectives, processes, roles (Art.17(1))
  2. Conformity assessment integration: how the QMS controls the conformity assessment process itself
  3. Design control: design review, verification, validation procedures
  4. Risk management integration: Art.9 system integration with QMS corrective action processes
  5. Document control: Annex IV technical documentation lifecycle management
  6. Corrective and preventive action (CAPA): Art.20 corrective action procedures in the QMS
  7. Training records: Art.4 AI literacy measures for relevant staff
  8. Supplier/subcontractor control: Art.25 supply chain obligations reflected in QMS

The QMS assessment results in a QMS Certificate from the notified body, valid for a defined period (typically 3-5 years) subject to annual surveillance audits.
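
The certificate lifecycle can be tracked with a small scheduling helper. A sketch assuming the annual surveillance cadence described above, which is typical practice rather than a statutory rule:

```python
from datetime import date


def surveillance_audit_schedule(issue_date: date, validity_years: int) -> list[date]:
    """Annual surveillance audit due dates between QMS certificate issue
    and expiry (assumed yearly cadence over a 3-5 year validity)."""
    return [issue_date.replace(year=issue_date.year + k)
            for k in range(1, validity_years)]
```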

Stream B: Technical Documentation Review

Concurrently, the notified body reviews the Annex IV technical documentation and may conduct or commission independent technical testing. The notified body:

  1. Reviews the Annex IV package for completeness and internal consistency
  2. May request access to the training, validation, and test datasets and, where necessary, to the model itself
  3. May conduct or commission its own testing of the AI system
  4. Raises findings that the provider must resolve before a certificate is issued

Annex VII Output: Technical Documentation Assessment Certificate

For biometric identification systems, the combined QMS + technical documentation review results in a Union technical documentation assessment certificate (Annex VII point 4.6). The certificate:

  1. Identifies the provider and the AI system, including the data needed to identify the assessed version
  2. Records the conclusions of the technical documentation examination
  3. States any conditions attached to its validity
  4. Remains subject to ongoing surveillance of the approved QMS

Finding and Engaging a Notified Body

Notified bodies for AI Act conformity assessment are designated by member state national authorities (notifying authorities) and listed in the NANDO database. As of 2026, notified body designation under the AI Act is still proceeding — providers should:

  1. Check NANDO database for currently designated AI Act notified bodies in scope for their Annex III category
  2. Request quotes from at least 3 designated bodies — fee variation is substantial (€50k–€500k)
  3. Expect a pre-assessment meeting to agree scope and timeline
  4. Build at least 6 months into project schedule for queue + assessment; 12 months for complex systems
  5. Prepare QMS documentation to ISO 9001:2015 level before engagement — notified bodies will not begin formal assessment until QMS baseline exists

Route Selection Decision Matrix

| Question | Yes → | No → |
|---|---|---|
| Is the AI system in Annex III point 1 (biometric ID in publicly accessible spaces)? | Annex VII mandatory | Continue |
| Is the provider subject to other sector legislation requiring a notified body (MDR, Machinery, RED)? | Consider Annex VII for alignment | Continue |
| Does the provider need external certification for contractual or procurement requirements? | Consider voluntary Annex VII | Continue |
| Is a relevant harmonised standard published for the system's Annex III category? | Apply standard for presumption; Annex VI feasible | Document standard absence; Annex VI still feasible |
| Default | Annex VI (internal control) | (n/a) |
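
The matrix above can be expressed as a small routing function. A sketch with illustrative parameter names, not a legal determination:

```python
from enum import Enum


class Route(str, Enum):
    ANNEX_VI = "Annex VI (internal control)"
    ANNEX_VII = "Annex VII (notified body)"


def select_route(
    annex_iii_point_1: bool,
    sector_legislation_requires_nb: bool = False,
    external_cert_needed: bool = False,
) -> tuple[Route, str]:
    """Walk the decision matrix top to bottom; return the route and a rationale."""
    if annex_iii_point_1:
        return Route.ANNEX_VII, "Mandatory: Annex III point 1 (biometric ID in publicly accessible spaces)"
    if sector_legislation_requires_nb:
        return Route.ANNEX_VII, "Voluntary: alignment with sector notified-body assessment"
    if external_cert_needed:
        return Route.ANNEX_VII, "Voluntary: external certification needed contractually"
    return Route.ANNEX_VI, "Default: internal control"
```

The rationale string feeds directly into the `route_rationale` field of the assessment record shown in the Python section below.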

Art.31 Intersection Matrix

Conformity assessment under Art.31 is not isolated — it is the validation point where all upstream compliance obligations converge:

| Intersecting Article | Relationship | Practical Impact |
|---|---|---|
| Art.9 — Risk management | Annex VI Step 3 reviews Art.9 completeness; Art.9 gaps block conformity | Incomplete risk register = conformity assessment failure |
| Art.10 — Data governance | Training data documentation (Annex IV element 3) assessed during conformity | Undocumented bias testing = non-conformity |
| Art.11 — Technical documentation | Annex IV package is the primary input to the Art.31 assessment | Documentation quality directly determines assessment speed |
| Art.17 — Quality management system | Annex VII requires an operational QMS; Annex VI benefits from a QMS even if not required | QMS absence lengthens the Annex VI timeline; blocks Annex VII entirely |
| Art.23 — Substantial modification | Substantial modification triggers Art.31 re-assessment | Deployment update policy must include an Art.23 change classification gate |
| Art.40/41 — Standards / common specifications | Harmonised standard compliance = Art.42 presumption shortcut in Annex VI Step 1 | Monitor CEN-CENELEC JTC21 output; adopt standards as published |
| Art.48 — EU Declaration of Conformity | Art.31 completion is a prerequisite for the Art.48 declaration | No assessment = no declaration = CE marking unlawful |
| Art.49 — CE marking | CE marking requires the Art.48 declaration, which requires Art.31 completion | Marketing begins only after Art.31 is complete |
| Art.71 — EU AI database registration | Registration required before market placement for most Annex III systems; registration presupposes completed Art.31 | Art.31 data (procedure used, notified body) feeds the Art.71 registration |
| Art.72 — Post-market monitoring | PMM plan (Annex IV element 6) reviewed during the Art.31 assessment | PMM plan must exist before Art.31 completes |

Art.31 × Art.23: Substantial Modification and Re-Assessment

Art.23 defines substantial modification as any change that affects the AI system's compliance with the Art.8-15 requirements or results in a change to the intended purpose. When a substantial modification occurs post-deployment, Art.31 requires a new conformity assessment.

Practically, this means providers must implement a change control gate in their deployment pipeline:

Pull request / model update
          ↓
Is this a change to: model architecture, training data, fine-tuning,
intended purpose, operating conditions, performance thresholds,
human oversight design, or cybersecurity measures?
          ↓
   Yes →  Substantial modification assessment (Art.23 form)
              ↓
          Change crosses threshold? → Art.31 re-assessment required
          before re-deployment
   No  →  Minor update — document in technical documentation log
          (Annex IV element 1 changelog) — no new conformity assessment
The threshold for "substantial" is not quantified in Art.23. The Commission has indicated that changes requiring retraining of the model on materially different data, changes to safety-relevant components, or changes to the Annex III category classification are clearly substantial. Software bug fixes, UI localisation, and infrastructure changes that do not affect AI functionality are generally not substantial.
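
A hedged sketch of the gate above as code. The trigger sets mirror this section's lists, but a real Art.23 classification is a legal judgement, not a set lookup:

```python
# Change types that this section treats as substantial-modification triggers.
SUBSTANTIAL_TRIGGERS = {
    "model_architecture", "training_data", "fine_tuning", "intended_purpose",
    "operating_conditions", "performance_thresholds", "human_oversight_design",
    "cybersecurity_measures",
}

# Change types generally not substantial per the guidance above.
NON_SUBSTANTIAL = {"bug_fix", "ui_localisation", "infrastructure"}


def classify_change(change_types: set[str]) -> str:
    """Return 'reassess' (Art.31 re-assessment before redeploy) or
    'log' (changelog entry in Annex IV element 1 only)."""
    if change_types & SUBSTANTIAL_TRIGGERS:
        return "reassess"
    return "log"
```

A mixed change set that touches any trigger escalates to re-assessment, which matches the conservative reading of Art.23.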


CLOUD Act × Art.31 Conformity Records

Conformity assessment documentation — the technical documentation package (Annex IV), internal assessment report, EU Declaration of Conformity (Annex V), and notified body certificate (Annex VII route) — must be retained by the provider for 10 years from market placement under Art.18(1).

When conformity documentation is stored on US-headquartered cloud infrastructure (AWS, Azure, GCP), the Clarifying Lawful Overseas Use of Data (CLOUD) Act creates a parallel disclosure obligation:

| Document Type | EU Requirement | CLOUD Act Exposure |
|---|---|---|
| Annex IV technical documentation | Art.18 — 10-year retention; MSA access on request (Art.21) | Subject to CLOUD Act subpoena by US DOJ/FBI without EU judicial review |
| Internal assessment report | No explicit retention mandate; 10 years recommended | Same exposure as Annex IV |
| EU Declaration of Conformity | Annex V — must be available to market surveillance authorities | Available to US authorities in parallel with EU MSA access |
| QMS documentation (Annex VII) | Notified body may require ongoing access | Subject to US government CLOUD Act demands on the cloud provider |
| Test result datasets | Annex IV element 3 — retention tied to technical documentation | Source data for conformity claims subject to dual jurisdiction |

The specific risk: a US government CLOUD Act demand for conformity documentation serves as a discovery mechanism for the AI system's full technical architecture, training data sources, and security measures. For defence-adjacent, critical infrastructure, or law enforcement AI systems under Annex III, this represents a material national security and competitive intelligence risk.

EU-native cloud infrastructure — operating exclusively under GDPR, the NIS2 Directive, and EU member state law — eliminates CLOUD Act exposure. When conformity documentation is stored on EU-native infrastructure with no US parent company, there is no CLOUD Act jurisdiction over the data, and all access requests must proceed through EU mutual legal assistance treaty (MLAT) channels with judicial oversight.

For providers storing high-risk AI conformity records, EU-native PaaS like sota.io removes this dual-jurisdiction risk entirely and simplifies the Art.21 cooperation model with market surveillance authorities.


Python Implementation

ConformityAssessmentRecord: Structured Assessment Tracking

from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class ConformityRoute(str, Enum):
    ANNEX_VI = "annex_vi_internal_control"
    ANNEX_VII = "annex_vii_notified_body"


class AssessmentStatus(str, Enum):
    NOT_STARTED = "not_started"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    FAILED = "failed"
    EXPIRED = "expired"  # Art.23 substantial modification triggers re-assessment


@dataclass
class NotifiedBodyDetails:
    name: str
    nando_number: str
    member_state: str
    certificate_number: Optional[str] = None
    certificate_issue_date: Optional[date] = None
    certificate_expiry_date: Optional[date] = None
    qms_certificate_number: Optional[str] = None

    def is_certificate_valid(self, check_date: Optional[date] = None) -> bool:
        if not self.certificate_expiry_date:
            return False
        return (check_date or date.today()) <= self.certificate_expiry_date


@dataclass
class ConformityAssessmentRecord:
    """Art.31 conformity assessment record for a high-risk AI system."""

    # System identification
    system_name: str
    system_version: str
    annex_iii_category: str  # e.g. "Annex III point 4 — employment screening"

    # Assessment configuration
    route: ConformityRoute
    route_rationale: str  # Why this route was selected

    # Timeline
    assessment_start_date: date
    assessment_completion_date: Optional[date] = None
    status: AssessmentStatus = AssessmentStatus.NOT_STARTED

    # Harmonised standards (Art.40/Art.42 presumption)
    harmonised_standards: list[str] = field(default_factory=list)
    standards_fully_cover_system: bool = False

    # Annex VI internal control (if applicable)
    internal_assessment_report_ref: Optional[str] = None
    internal_assessment_author: Optional[str] = None
    internal_assessment_date: Optional[date] = None

    # Annex VII notified body (if applicable)
    notified_body: Optional[NotifiedBodyDetails] = None

    # Output documents
    eu_declaration_ref: Optional[str] = None
    eu_declaration_date: Optional[date] = None
    ce_marking_affixed_date: Optional[date] = None

    # Art.18 retention
    retention_expiry: Optional[date] = None  # 10 years from market placement

    # Art.23 modification tracking
    substantial_modifications: list[dict] = field(default_factory=list)

    def is_assessment_valid(self) -> bool:
        """Check whether assessment remains valid (no unaddressed substantial modification)."""
        unaddressed = [
            m for m in self.substantial_modifications
            if m.get("reassessment_completed") is False
        ]
        return (
            self.status == AssessmentStatus.COMPLETED
            and len(unaddressed) == 0
        )

    def record_substantial_modification(
        self,
        modification_description: str,
        modification_date: date,
        reassessment_required: bool
    ) -> None:
        """Record an Art.23 substantial modification event."""
        self.substantial_modifications.append({
            "description": modification_description,
            "date": str(modification_date),
            "reassessment_required": reassessment_required,
            "reassessment_completed": False if reassessment_required else None,
        })
        if reassessment_required:
            self.status = AssessmentStatus.EXPIRED

    def to_summary(self) -> dict:
        return {
            "system": f"{self.system_name} v{self.system_version}",
            "annex_iii_category": self.annex_iii_category,
            "route": self.route.value,
            "status": self.status.value,
            "assessment_valid": self.is_assessment_valid(),
            "eu_declaration_signed": self.eu_declaration_date is not None,
            "ce_marking_affixed": self.ce_marking_affixed_date is not None,
            "open_modifications": sum(
                1 for m in self.substantial_modifications
                if m.get("reassessment_completed") is False
            ),
        }

AnnexVIProcedure: Internal Control Step Tracker

from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class StepStatus(str, Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETE = "complete"
    BLOCKED = "blocked"  # upstream dependency not met


@dataclass
class AnnexVIStep:
    step_number: int
    name: str
    description: str
    status: StepStatus = StepStatus.PENDING
    completed_by: Optional[str] = None
    completed_date: Optional[str] = None
    evidence_refs: list[str] = None
    blockers: list[str] = None

    def __post_init__(self):
        if self.evidence_refs is None:
            self.evidence_refs = []
        if self.blockers is None:
            self.blockers = []


class AnnexVIProcedure:
    """Internal control conformity assessment procedure per Annex VI."""

    STEPS = [
        (1, "Harmonised Standards Verification",
         "Identify applicable EN standards; document coverage gaps (Art.40/Art.42)"),
        (2, "Annex IV Technical Documentation Compilation",
         "Compile complete Annex IV package: general description, detailed design, "
         "training/validation/testing records, monitoring/control measures, risk assessment, "
         "PMM plan, draft declaration, instructions for use"),
        (3, "Internal Assessment Report",
         "Review technical documentation against Art.8-15; produce Internal Assessment Report "
         "signed by designated responsible person (Art.16(1)(k))"),
        (4, "Technical Testing and Validation",
         "Performance testing, bias testing (Art.10(5)), robustness testing, "
         "cybersecurity assessment (Art.15(5)), human oversight verification (Art.14)"),
        (5, "EU Declaration of Conformity",
         "Sign Annex V EU Declaration of Conformity; ensure declaration includes all mandatory fields"),
        (6, "CE Marking Affixation",
         "Affix CE marking per Art.49 to system, packaging, or documentation"),
    ]

    def __init__(self, system_name: str):
        self.system_name = system_name
        self.steps = [
            AnnexVIStep(step_number=n, name=name, description=desc)
            for n, name, desc in self.STEPS
        ]

    def complete_step(self, step_number: int, completed_by: str, evidence_refs: list[str]) -> None:
        step = self._get_step(step_number)
        step.status = StepStatus.COMPLETE
        step.completed_by = completed_by
        step.completed_date = str(date.today())
        step.evidence_refs = evidence_refs

    def block_step(self, step_number: int, blockers: list[str]) -> None:
        step = self._get_step(step_number)
        step.status = StepStatus.BLOCKED
        step.blockers = blockers

    def is_procedure_complete(self) -> bool:
        return all(s.status == StepStatus.COMPLETE for s in self.steps)

    def next_pending_step(self) -> Optional[AnnexVIStep]:
        for step in self.steps:
            if step.status in (StepStatus.PENDING, StepStatus.IN_PROGRESS):
                return step
        return None

    def summary(self) -> dict:
        return {
            "system": self.system_name,
            "procedure": "Annex VI — Internal Control",
            "complete": self.is_procedure_complete(),
            "steps": [
                {
                    "step": s.step_number,
                    "name": s.name,
                    "status": s.status.value,
                    "evidence_count": len(s.evidence_refs),
                }
                for s in self.steps
            ],
            "next_action": self.next_pending_step().name if self.next_pending_step() else "COMPLETE",
        }

    def _get_step(self, step_number: int) -> AnnexVIStep:
        for step in self.steps:
            if step.step_number == step_number:
                return step
        raise ValueError(f"Step {step_number} not found")

TechnicalDocumentationVerifier: Annex IV Completeness Check

from dataclasses import dataclass, field
from typing import Optional


ANNEX_IV_REQUIREMENTS = {
    "element_1_general": [
        "intended_purpose_documented",
        "natural_persons_affected_identified",
        "interaction_model_described",
        "geographic_deployment_scope",
        "version_identification",
    ],
    "element_2_detailed": [
        "development_methodology_documented",
        "training_algorithm_described",
        "model_architecture_documented",
        "hyperparameter_choices_justified",
        "pre_trained_components_identified",
    ],
    "element_3_datasets": [
        "training_data_sources_documented",
        "data_governance_measures_applied",
        "bias_mitigation_described",
        "validation_dataset_documented",
        "test_dataset_documented",
        "demographic_disaggregation_records",
    ],
    "element_4_monitoring": [
        "human_oversight_measures_documented",  # Art.14
        "logging_mechanisms_described",  # Art.12
        "monitoring_capabilities_listed",
        "interrupt_override_mechanism_documented",
    ],
    "element_5_risk_assessment": [
        "art9_risk_register_complete",
        "known_risks_identified",
        "mitigation_measures_documented",
        "residual_risk_accepted",
        "foreseeable_misuse_addressed",
    ],
    "element_6_pmm": [
        "pmm_plan_drafted",  # Art.72
        "art73_incident_criteria_defined",
        "pmm_kpis_specified",
        "deployer_cooperation_obligations_described",
    ],
    "element_7_declaration": [
        "draft_eu_declaration_prepared",  # Annex V
    ],
    "element_8_ifu": [
        "instructions_for_use_complete",  # Art.13(3)
        "operator_instructions_included",
        "limitations_and_constraints_documented",
        "contact_information_for_support",
    ],
}


@dataclass
class VerificationResult:
    element: str
    requirement: str
    satisfied: bool
    evidence_ref: Optional[str] = None
    gap_description: Optional[str] = None


class TechnicalDocumentationVerifier:
    """Annex IV technical documentation completeness checker for Art.31 assessment."""

    def __init__(self, system_name: str):
        self.system_name = system_name
        self.results: list[VerificationResult] = []

    def verify_requirement(
        self,
        element: str,
        requirement: str,
        satisfied: bool,
        evidence_ref: Optional[str] = None,
        gap_description: Optional[str] = None,
    ) -> None:
        self.results.append(VerificationResult(
            element=element,
            requirement=requirement,
            satisfied=satisfied,
            evidence_ref=evidence_ref,
            gap_description=gap_description,
        ))

    def run_batch_verification(self, satisfaction_map: dict[str, bool]) -> None:
        """
        Bulk verification: satisfaction_map keys are 'element.requirement' strings.
        Example: {'element_1_general.intended_purpose_documented': True, ...}
        """
        for element, requirements in ANNEX_IV_REQUIREMENTS.items():
            for req in requirements:
                key = f"{element}.{req}"
                satisfied = satisfaction_map.get(key, False)
                self.verify_requirement(element, req, satisfied)

    def conformity_assessment_ready(self) -> bool:
        """Returns True if all Annex IV requirements are satisfied."""
        verified = {f"{r.element}.{r.requirement}": r.satisfied for r in self.results}
        for element, requirements in ANNEX_IV_REQUIREMENTS.items():
            for req in requirements:
                if not verified.get(f"{element}.{req}", False):
                    return False
        return True

    def gaps_report(self) -> list[dict]:
        """Return list of unsatisfied requirements for remediation."""
        gaps = []
        for result in self.results:
            if not result.satisfied:
                gaps.append({
                    "element": result.element,
                    "requirement": result.requirement,
                    "gap": result.gap_description or "Not documented",
                })
        return gaps

    def readiness_score(self) -> float:
        """Percentage of Annex IV requirements satisfied."""
        if not self.results:
            return 0.0
        satisfied = sum(1 for r in self.results if r.satisfied)
        return round(satisfied / len(self.results) * 100, 1)

Art.31 Compliance Checklist (40 Items)

Route Selection (6 items)

Annex IV Technical Documentation (10 items)

Internal Control Steps — Annex VI (8 items)

Notified Body — Annex VII (6 items, if applicable)

Declaration and Marking (5 items)

Ongoing Obligations (5 items)


What to Do Now

If you have a biometric identification system (Annex III point 1): Start Annex VII engagement immediately — 6-12 month notified body queue is realistic in 2026. Begin QMS development to ISO 9001:2015 level in parallel. The notified body engagement itself requires 4-6 months of preparation.

If you have any other Annex III high-risk AI system: Run the TechnicalDocumentationVerifier batch verification against your current documentation state. Any gap in Annex IV will block Annex VI completion. Most providers are 40-60% ready when they first audit their documentation — the typical gap is Art.9 risk register depth and Art.14 human oversight documentation.

If you are pre-development: Architect conformity assessment into your SDLC as a first-class gate. The providers who complete Art.31 fastest are those who treat technical documentation as a living artefact updated at each sprint, not a document written retrospectively 4 weeks before market placement.

If you need to store conformity documentation: Evaluate infrastructure jurisdiction before selecting a storage provider. Annex IV documentation, test results, and QMS records stored on US-headquartered cloud infrastructure are subject to CLOUD Act demands with no EU judicial oversight. EU-native infrastructure eliminates this dual-jurisdiction exposure and simplifies Art.21 cooperation with market surveillance authorities.


See Also