2026-04-14·14 min read·EU AI Act

EU AI Act Conformity Assessment: Practical 90-Day Self-Assessment Guide for Developers (2026)

August 2, 2026 is the enforcement date for Annex III high-risk AI systems under the EU AI Act. Most developers who build AI-powered hiring tools, credit scoring models, educational assessment platforms, or biometric authentication systems either do not know they are in scope, or assume that conformity assessment requires a notified body and costs tens of thousands of euros. Both assumptions are wrong for the vast majority of SaaS AI providers.

The EU AI Act creates two conformity assessment tracks. Annex VI is an internal control self-assessment: you design, document, and verify your own compliance without any third party involved. Annex VII is the notified body track — mandatory only for a narrow class of systems (real-time biometric identification in public spaces, AI safety components in regulated hardware products). For typical SaaS AI systems subject to Annex III, Annex VI is the correct and legally sufficient track.

This guide walks through how to run an Annex VI conformity assessment in 90 days, starting now. The process divides into four phases: scope and classification (days 1–15), technical documentation (days 16–45), internal control testing (days 46–60), and declaration plus registration (days 61–90). The documentation artifacts are specific and non-trivial. Starting in June 2026 is too late — the technical documentation package alone typically requires a full month of structured work.

Are You in Scope? Three-Question Check

Before choosing a conformity assessment track, you need to confirm the obligation applies to you.

Question 1: Does your AI system fall under Annex III?

Annex III lists the categories of AI systems classified as high-risk. The categories relevant to most SaaS AI builders are:

If your system falls into any of these categories and is deployed in or to the EU market, you are a high-risk AI provider under the EU AI Act.

Question 2: Are you the provider or the deployer?

Under the EU AI Act, the provider is the entity that develops the AI system and places it on the market (builds the model, owns the pipeline). The deployer is an entity that uses a third-party high-risk AI system in their business context. The conformity assessment obligation under Art.43 falls on the provider. If you built the AI system — trained the model, designed the decision pipeline, wrapped a foundation model into a high-risk application — you are the provider and must complete conformity assessment. If you are using an off-the-shelf third-party high-risk AI tool, your obligations are different (Art.26 deployer obligations, not Art.43 provider obligations).

Question 3: Is the system placed on the EU market or used in the EU?

The EU AI Act applies to providers established in the EU, and to providers established outside the EU whose systems produce outputs used in the EU. If your SaaS platform is used by EU-based employers, EU-based financial institutions, or EU-based educational institutions to make decisions about EU natural persons, the Act applies regardless of where your servers are located.

If you answered yes to all three: you must complete a conformity assessment under Art.43 before placing the system on the market. The August 2, 2026 deadline applies to systems already in use — if your Annex III system was deployed before August 2, 2026, you have a grace period until August 2, 2027 to complete the assessment for existing systems. New systems placed on the market after August 2, 2026 must have completed conformity assessment before deployment.

Track Selection: Annex VI vs. Annex VII

Annex VI (Internal Control) applies if:

For the vast majority of SaaS AI systems — CV screening software, credit risk models, educational assessment tools, employee monitoring platforms — Annex VI self-assessment is legally sufficient and requires no third-party certification. The conformity declaration you sign at the end of the Annex VI process has the same legal weight as a notified body certificate for Annex VI-eligible systems.

Annex VII (Notified Body) is mandatory only for:

If your system does not fall into these two categories, choosing Annex VII voluntarily adds cost and time without adding legal benefit. The operative question is not "would a notified body certificate look good" but "what does the regulation actually require."

The Annex VI Self-Assessment in 4 Phases

Phase 1: Inventory and Classification (Days 1–15)

Before you can assess conformity, you need to know exactly what you are assessing. Most teams discover during this phase that their "one AI system" is actually three or four distinct AI pipelines with different risk profiles.

Build your AI system registry. For each model, decision pipeline, and AI-powered feature in your product, create a registry entry documenting:

The registry does not need to be complex — a structured markdown file or spreadsheet is sufficient. What matters is completeness and documented rationale. If your system falls under two Annex III categories, document both. If you are uncertain whether a category applies, document your reasoning for the conclusion you reach.

Identify your August 2026 vs. August 2027 obligations. Systems already in service before August 2, 2026 have until August 2, 2027 to complete conformity assessment (for the existing deployed version). New systems or substantially modified existing systems placed on the market after August 2, 2026 must have completed assessment before deployment. Your registry should flag which systems fall under which timeline.

Output artifact: system_registry.md — a structured document listing every AI system in scope, the Annex III category it falls under with rationale, the affected persons, and the compliance timeline.

Phase 2: Technical Documentation Package (Days 16–45)

This is the substantive core of the Annex VI process. Art.11 requires providers to maintain technical documentation per Annex IV. Annex IV specifies eight required sections. Each section has mandatory content — missing sections do not constitute a valid technical documentation package.

Section 1: General Description

The general description must cover the intended purpose of the AI system as stated in the instructions for use; the version number; the hardware, software, and data dependencies; what the AI system is specifically designed to do; what it is not designed to do; known limitations on performance; and a description of the types of persons who interact with it (end users, operators, affected persons). This section is the plain-language description that a market surveillance authority reads first to understand what the system does before reviewing the technical details.

Section 2: Detailed Technical Description

The detailed description must include the system architecture, the role of each component in the overall AI pipeline, the software specifications and external interfaces, the training approach (supervised/unsupervised/reinforcement, transfer learning, etc.), the inference infrastructure, and the version management process. For systems built on foundation models or third-party model APIs (OpenAI, Anthropic, Cohere), this section must document the upstream model dependency, the extent of customisation, and how the system behaves when the upstream model is updated.

Section 3: Training Data Governance (Art.10 Documentation)

This is typically the most difficult section for teams without formal data governance practices. Art.10 requires that training, validation, and test datasets meet standards for relevance, representativeness, and freedom from errors and incomplete information. The documentation must describe:

For systems using publicly available datasets (Common Crawl derivatives, LAION, etc.) or third-party training data providers, you must document what provenance assurance you received and what bias assessment you performed. "We used GPT-4 to generate training data" requires the same level of documentation scrutiny as a human-annotated dataset.

Section 4: Risk Management Lifecycle Log (Art.9 Documentation)

Art.9 requires a risk management system that operates over the entire lifecycle of the high-risk AI system — not just a one-time pre-deployment risk assessment. The documentation must describe the risk management process, the risk categories identified, the mitigation measures applied, and the residual risk accepted. Critically, the risk management log must be updated as new risks are identified post-deployment. The Art.9 documentation is not a static document — market surveillance authorities will check whether it reflects the system's current risk profile.

The risk categories to document include: performance risks (inaccuracy, distributional shift, model drift), security risks (adversarial attacks, prompt injection for LLM-based systems, data poisoning), operational risks (human override failures, misuse by deployers), and fundamental rights risks (discriminatory outcomes, disproportionate impact on protected groups).

Section 5: Human Oversight Mechanisms (Art.14)

Art.14 requires that high-risk AI systems are designed and developed in a way that allows human oversight. The documentation must describe: who can override AI decisions and with what authority, how the override mechanism is implemented technically, what information humans are presented with to enable meaningful oversight (confidence scores, decision factors, uncertainty indicators), what training deployers receive on the override process, and what logs are maintained of human interventions.

For automated decision pipelines that generate outputs fed directly into consequential decisions (hiring, credit, education outcomes), the documentation must be specific about where human review occurs in the workflow, not just state that human oversight exists in principle.

Section 6: Logging and Event Recording (Art.12)

Art.12 requires that high-risk AI systems have logging capabilities enabling post-hoc review of system behaviour. The documentation must describe what the system logs, at what level of granularity, how logs are stored, for how long, and who has access to them. The minimum Art.12 requirements include:

Section 7: Accuracy, Robustness, and Cybersecurity (Art.15)

The documentation must describe the metrics used to measure system accuracy across the full intended deployment range, the robustness testing performed (including testing for performance degradation under distributional shift and adversarial conditions), and the cybersecurity measures implemented (penetration testing, input validation, access controls, incident response procedures).

For LLM-based systems, the robustness documentation should address prompt injection risks and jailbreak testing — Art.15's adversarial robustness requirements apply to the full threat model of the deployed system, not just classical ML model robustness.

Section 8: Post-Market Monitoring System (Art.72)

The documentation must describe how you will monitor the system after deployment: what metrics you will track, what feedback channels you have from deployers and affected persons, what thresholds trigger a reassessment, and how you will handle the serious incident reporting requirements of Art.73 (competent authority notification within 72 hours for incidents involving death, serious harm, significant property damage, or serious fundamental rights violations involving EU persons).

Output artifact: technical_documentation/ directory with one file per Annex IV section, a master index.md linking all sections, and a version control log tracking changes.

Phase 3: Internal Control Testing (Days 46–60)

The internal control phase is where you verify — through structured testing — that the system as built actually conforms to the requirements described in the technical documentation. The documentation describes what your system should do; the testing verifies that it does.

Art.9 risk management verification. Run the risk management controls documented in Section 4 against the current deployed system. For each identified risk, verify that the documented mitigation measure is implemented and operating as intended. Test the edge cases explicitly: what happens when training distribution inputs appear outside the documented operating range? What triggers the distributional shift alert if one is documented?

Art.12 logging verification. Trigger system interactions across the range of input types and verify that logs are generated at the documented granularity. Test log retention: are logs retained for the documented period? Are they accessible to the documented parties? Verify that the override event logging captures the information needed for post-hoc review.

Art.14 human oversight verification. Test the human override mechanism: can the documented override principal actually override the AI decision? Is the override captured in the audit log? Does the system present the decision factors described in the documentation? This testing must be performed by someone with the documented override authority, not just the development team.

Art.15 cybersecurity verification. For SaaS AI systems, this typically means a penetration test or third-party security scan against the documented threat model. For LLM-based systems, it means prompt injection testing and jailbreak attempts against the documented input validation controls. If your organisation cannot perform this testing internally, a third-party security vendor engagement is appropriate — but the scope should be defined by your Art.15 documentation, not a generic web application scan.

Output artifact: conformity_test_report.md — a structured document with one section per article requirement tested, documenting the test methodology, test results (pass/fail), evidence of test execution, and any identified non-conformities with remediation actions.

Phase 4: Declaration and Registration (Days 61–90)

EU Declaration of Conformity (Art.48). The EU Declaration of Conformity is a formal legal document signed by the provider (or the provider's authorised representative) declaring that the AI system conforms to the relevant provisions of the EU AI Act. Art.48 specifies the required content: provider identity and contact details, system name and version, applicable regulations, standards or specifications applied (if harmonised standards exist for your category — most Annex III categories do not yet have harmonised standards, so this section documents the specific EU AI Act articles assessed), assessment method used (Annex VI), declaration text, date and place of issue, and signature of the responsible person. The Declaration is not filed with a regulator — it is maintained with the technical documentation and made available to market surveillance authorities on request.

CE Marking (Art.49). Once the Declaration of Conformity is signed, high-risk AI systems that are stand-alone software products (not embedded in CE-marked hardware) must affix the CE marking to the system documentation and any accompanying technical materials. For SaaS products, this means displaying the CE marking in the product documentation, the technical documentation package, and the conformity declaration. The CE marking does not appear on the software interface itself for pure software products — it appears in the formal documentation.

EU AI Database Registration (Art.22). Most Annex III high-risk AI systems must be registered in the EU AI Database before being placed on the market. The database is operated by the European AI Office. Registration requires creating an account, submitting a structured dataset describing the system (intended purpose, categories of persons affected, accuracy metrics, oversight mechanisms), and receiving a registration identifier. The registration data is publicly accessible — this is part of the EU AI Act's transparency architecture. The registration must be updated when a substantial modification is made to the system.

Record retention (Art.18). Technical documentation, the Declaration of Conformity, the EU AI Database registration record, and post-market monitoring reports must be retained for 10 years after the AI system has been placed on the market or put into service. This retention requirement has hosting infrastructure implications.

Output artifacts: eu_declaration_of_conformity.pdf (signed), ce_marking_documentation.md, eu_ai_database_registration.json (registration confirmation from the EU AI Office database).

Infrastructure Note: Where You Host the Documentation Matters

Art.18's 10-year retention requirement creates a long-term custody obligation for conformity documentation. The documentation must be available to national market surveillance authorities on request — which means it must be retrievable, authentic, and unaltered across a decade of system evolution.

Storing conformity documentation on US-jurisdiction cloud infrastructure (AWS, Google Cloud, Azure, Vercel, Railway) creates a structural problem: the CLOUD Act gives US federal authorities the power to compel US-headquartered cloud providers to disclose stored data without a European court order, regardless of where the data is physically located. Technical documentation for an EU AI Act conformity assessment — which describes model architecture, training data sources, risk assessments, and security testing — is precisely the kind of sensitive technical information that may attract law enforcement or regulatory interest.

Hosting conformity documentation on EU-jurisdiction infrastructure — where legal compulsion requires an EU court order and operates under GDPR-aligned legal frameworks — means national market surveillance authorities in France, Germany, or the Netherlands can access it through the EU legal process, while US authorities cannot compel access without EU judicial oversight. For systems operating in industries with heightened sensitivity (financial services, healthcare, HR), the jurisdiction of the documentation hosting is a compliance consideration, not just a privacy preference.

Python ConformityAssessmentTracker

from enum import Enum
from dataclasses import dataclass, field
from datetime import date
from typing import Optional, List

class AnnexIIICategory(Enum):
    BIOMETRIC_IDENTIFICATION = "annex_iii_1"
    CRITICAL_INFRASTRUCTURE = "annex_iii_2"
    EDUCATION = "annex_iii_3"
    EMPLOYMENT_HR = "annex_iii_4"
    ESSENTIAL_SERVICES_CREDIT = "annex_iii_5"
    LAW_ENFORCEMENT = "annex_iii_6"
    MIGRATION_BORDER = "annex_iii_7"
    ADMINISTRATION_OF_JUSTICE = "annex_iii_8"

class AssessmentTrack(Enum):
    ANNEX_VI_INTERNAL_CONTROL = "annex_vi"
    ANNEX_VII_NOTIFIED_BODY = "annex_vii"

class PhaseStatus(Enum):
    NOT_STARTED = "not_started"
    IN_PROGRESS = "in_progress"
    COMPLETE = "complete"
    BLOCKED = "blocked"

@dataclass
class AnnexIVSection:
    section_number: int
    title: str
    status: PhaseStatus = PhaseStatus.NOT_STARTED
    artifact_path: Optional[str] = None
    last_updated: Optional[date] = None
    gaps: List[str] = field(default_factory=list)

@dataclass
class ConformityAssessmentStatus:
    system_name: str
    system_version: str
    annex_iii_categories: List[AnnexIIICategory]
    assessment_track: AssessmentTrack
    enforcement_deadline: date
    
    # Phase 1
    system_registry_complete: bool = False
    
    # Phase 2 - Annex IV sections
    annex_iv_sections: List[AnnexIVSection] = field(default_factory=list)
    
    # Phase 3
    art9_testing_complete: bool = False
    art12_logging_verified: bool = False
    art14_oversight_tested: bool = False
    art15_security_tested: bool = False
    conformity_test_report_path: Optional[str] = None
    
    # Phase 4
    declaration_of_conformity_signed: bool = False
    ce_marking_affixed: bool = False
    eu_ai_database_registered: bool = False
    registration_id: Optional[str] = None

def check_annex_vii_required(system: ConformityAssessmentStatus) -> bool:
    """Determine if Annex VII notified body assessment is required."""
    # Real-time biometric RID systems and safety components in
    # CE-marked regulated products require Annex VII.
    # Most SaaS AI systems qualify for Annex VI.
    annex_vii_categories = {
        AnnexIIICategory.BIOMETRIC_IDENTIFICATION,
        AnnexIIICategory.CRITICAL_INFRASTRUCTURE,  # only if safety component in regulated product
    }
    return any(cat in annex_vii_categories for cat in system.annex_iii_categories)

def build_annex_iv_checklist() -> List[AnnexIVSection]:
    """Return the 8 required Annex IV technical documentation sections."""
    return [
        AnnexIVSection(1, "General description — intended purpose, version, hardware/software dependencies"),
        AnnexIVSection(2, "Detailed technical description — architecture, training approach, inference infrastructure"),
        AnnexIVSection(3, "Training data governance (Art.10) — provenance, bias assessment, representativeness"),
        AnnexIVSection(4, "Risk management lifecycle log (Art.9) — risk categories, mitigations, residual risks"),
        AnnexIVSection(5, "Human oversight mechanisms (Art.14) — override authority, technical implementation, training"),
        AnnexIVSection(6, "Logging and event recording (Art.12) — what, granularity, retention, access"),
        AnnexIVSection(7, "Accuracy, robustness, cybersecurity (Art.15) — metrics, adversarial testing, pentest"),
        AnnexIVSection(8, "Post-market monitoring (Art.72) — metrics tracked, incident reporting triggers, Art.73 procedure"),
    ]

def assess_readiness(system: ConformityAssessmentStatus) -> dict:
    """Score conformity assessment readiness across all phases."""
    phases = {
        "phase_1_classification": system.system_registry_complete,
        "phase_2_documentation": all(
            s.status == PhaseStatus.COMPLETE 
            for s in system.annex_iv_sections
        ),
        "phase_3_testing": all([
            system.art9_testing_complete,
            system.art12_logging_verified,
            system.art14_oversight_tested,
            system.art15_security_tested,
        ]),
        "phase_4_declaration": all([
            system.declaration_of_conformity_signed,
            system.ce_marking_affixed,
            system.eu_ai_database_registered,
        ]),
    }
    
    completed = sum(1 for v in phases.values() if v)
    return {
        "phases": phases,
        "completion_pct": completed / len(phases) * 100,
        "ready_for_market": all(phases.values()),
        "missing_annex_iv_sections": [
            s.title for s in system.annex_iv_sections 
            if s.status != PhaseStatus.COMPLETE
        ],
    }

# Example: CV screening SaaS provider
cv_screening_system = ConformityAssessmentStatus(
    system_name="TalentRank CV Screening",
    system_version="2.4.1",
    annex_iii_categories=[AnnexIIICategory.EMPLOYMENT_HR],
    assessment_track=AssessmentTrack.ANNEX_VI_INTERNAL_CONTROL,
    enforcement_deadline=date(2026, 8, 2),
    annex_iv_sections=build_annex_iv_checklist(),
)

readiness = assess_readiness(cv_screening_system)
print(f"Track: {cv_screening_system.assessment_track.value}")
print(f"Notified body required: {check_annex_vii_required(cv_screening_system)}")
print(f"Readiness: {readiness['completion_pct']}%")
print(f"Ready for market: {readiness['ready_for_market']}")

Output for TalentRank CV Screening:

Track: annex_vi
Notified body required: False
Readiness: 0.0%
Ready for market: False

25-Item Conformity Assessment Checklist

Phase 1: Scope and Classification (Items 1–5)

  1. AI system registry created with all in-scope systems identified
  2. Each system mapped to applicable Annex III category with documented rationale
  3. Provider vs. deployer role confirmed and documented
  4. EU market scope confirmed (systems producing outputs used by EU persons)
  5. August 2026 / August 2027 timeline identified per system (new vs. existing deployment)

Phase 2: Technical Documentation (Items 6–16)

  1. Annex IV Section 1: General description complete (intended purpose, version, dependencies)
  2. Annex IV Section 2: Detailed technical description complete (architecture, training, inference)
  3. Annex IV Section 3: Training data governance documented (provenance, bias testing, representativeness)
  4. Annex IV Section 4: Art.9 risk management log maintained as living document
  5. Annex IV Section 5: Human oversight mechanisms specified (who, how, authority, training)
  6. Annex IV Section 6: Art.12 logging requirements documented (what logged, retention, access)
  7. Annex IV Section 7: Art.15 accuracy, robustness, and cybersecurity measures documented
  8. Annex IV Section 8: Post-market monitoring and Art.73 incident reporting procedure documented
  9. Technical documentation directory structured with index.md linking all sections
  10. Version control log for documentation changes maintained
  11. Art.18 10-year retention plan in place (EU-jurisdiction storage confirmed)

Phase 3: Internal Control Testing (Items 17–21)

  1. Art.9 risk management controls verified against deployed system
  2. Art.12 logging verified across full input range (logs generated, retained, accessible)
  3. Art.14 human override mechanism tested by override-authorised principal
  4. Art.15 cybersecurity testing conducted (penetration test or equivalent, scope documented)
  5. conformity_test_report.md completed with pass/fail per article and evidence citations

Phase 4: Declaration and Registration (Items 22–25)

  1. EU Declaration of Conformity drafted with Art.48 required content
  2. Declaration signed by responsible person (provider or authorised representative)
  3. CE marking affixed to technical documentation and accompanying materials
  4. System registered in EU AI Database (Art.22) with registration ID recorded

Key Takeaways

This guide is part of the EU AI Act developer series. Related posts: EU AI Act Art.43 — Two-Track Conformity Assessment System, EU AI Act Annex III — High-Risk AI System Classification, EU AI Act Art.9 — Risk Management System Requirements.