2026-04-25 · 14 min read

EU AI Act Art.95: Voluntary Codes of Conduct for Non-High-Risk AI — Developer Guide (2026)

Art.5 prohibits certain AI practices. Art.6–16 impose mandatory obligations on high-risk AI systems. Art.51–55 govern GPAI providers. Art.95 covers everyone else.

Article 95 is the EU AI Act's voluntary compliance lane — the mechanism by which providers of AI systems that do not reach the high-risk threshold can nonetheless signal meaningful compliance posture, earn market differentiation, and reduce future reclassification risk. It is structurally underestimated: many developers in the non-high-risk space assume Art.95 is purely optional and therefore unimportant. That assumption is wrong in two directions.

First, voluntary codes of conduct under Art.95 become commercially binding the moment procurement decisions, enterprise contracts, or insurance underwriting require them. A provider who has signed an AI Office-facilitated code and then violates it faces Art.99(5) exposure for supplying incorrect or misleading information to competent authorities — the voluntary label applies to whether you sign, not to what happens once you do.

Second, Art.95 participation is the most accessible path to demonstrating the kind of systematic AI governance that regulators and enterprise customers increasingly expect even from non-high-risk providers. With the Art.112 Commission review cycle running and Annex III expansion possible, providers who have invested in Art.95 compliance are better positioned to absorb reclassification without crisis.

What Article 95 Actually Says

Article 95 establishes the voluntary codes mechanism, the facilitation responsibilities of the AI Office and Member States, the development process, SME-specific provisions, and Commission responsibilities for Union-level codes.

Article 95(1) — The Voluntary Application Framework:

The AI Office and the Member States shall encourage and facilitate the drawing up of voluntary codes of conduct intended to foster the voluntary application of specific requirements to AI systems other than high-risk AI systems, in accordance with Articles 9, 10, 11, 12, 13, 14 and 15, on the basis of technical specifications and solutions identified as best practices with due regard to the intended purpose of the AI systems, while also taking into account available technical standards.

Art.95(1) identifies the target of voluntary codes: AI systems that are not high-risk under Art.6 and Annex III. Providers of these systems — chatbots, recommendation engines, content moderation tools, HR analytics below the Annex III threshold, productivity AI — can voluntarily apply Chapter III requirements through Art.95 codes.

The referenced articles are significant: Art.9 (risk management system), Art.10 (data and data governance), Art.11 (technical documentation), Art.12 (record-keeping), Art.13 (transparency and provision of information to deployers), Art.14 (human oversight), and Art.15 (accuracy, robustness and cybersecurity).

These are exactly the requirements that Annex III high-risk systems must implement. Art.95 allows non-high-risk providers to adopt them voluntarily, creating a compliance posture that mirrors high-risk obligations without the mandatory enforcement exposure.
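
For implementation planning, the seven referenced requirements can be kept as a small lookup table and diffed against a draft code's coverage. A sketch; the constant and function names are our own, not from the regulation:

```python
# Chapter III Section 2 requirements that an Art.95 code may adopt voluntarily.
# The article titles follow the EU AI Act; the mapping itself is illustrative.
CHAPTER_III_REQUIREMENTS = {
    "Art.9": "Risk management system",
    "Art.10": "Data and data governance",
    "Art.11": "Technical documentation",
    "Art.12": "Record-keeping",
    "Art.13": "Transparency and provision of information to deployers",
    "Art.14": "Human oversight",
    "Art.15": "Accuracy, robustness and cybersecurity",
}

def uncovered_requirements(code_articles: set[str]) -> list[str]:
    """Return the Chapter III articles a draft code does not yet address."""
    return [a for a in CHAPTER_III_REQUIREMENTS if a not in code_articles]

# A code covering only risk management, transparency, and human oversight:
print(uncovered_requirements({"Art.9", "Art.13", "Art.14"}))
```

A gap list like this is a useful starting point for the requirement-mapping exercise discussed later in this guide.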

Article 95(2) — The GPAI Dimension:

The AI Office and the Member States shall facilitate the development of codes of conduct relating to the voluntary application of specific requirements as referred to in paragraph 1 by the providers of general-purpose AI models, in accordance with Articles 53 and 54, in particular taking into account the fact that where commitments are made by providers of general-purpose AI models under the codes of conduct on the basis of Article 56(1), any such code of conduct could in particular cover the categories provided for in Article 56(2).

Art.95(2) extends the voluntary codes framework to GPAI providers who are not subject to systemic-risk obligations — those whose models fall below the Art.51(2) 10²⁵ FLOPs threshold. The reference to Art.56(1) means that the mandatory GPAI Code of Practice and the voluntary Art.95 codes are designed to interact: a GPAI provider can participate in both, with Art.56 covering systemic-risk GPAI obligations and Art.95 covering voluntary additional commitments.
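
The threshold routing can be sketched as a one-line check. The 10²⁵ FLOPs figure is the Art.51(2) presumption; the function name and return labels are our own simplification:

```python
# Art.51(2) presumption: a GPAI model trained with more than 10**25 FLOPs of
# cumulative compute is presumed to have high-impact capabilities, putting its
# provider in the systemic-risk regime where the Art.56 Code of Practice is the
# primary pathway. Below the threshold, Art.95 codes are the voluntary option.
SYSTEMIC_RISK_FLOPS = 10**25

def gpai_code_pathway(training_flops: float) -> str:
    """Rough routing between Art.56 and Art.95 for a GPAI provider."""
    if training_flops > SYSTEMIC_RISK_FLOPS:
        return "Art.56 Code of Practice (systemic-risk GPAI)"
    return "Art.95 voluntary code (plus baseline Art.53/54 GPAI duties)"

print(gpai_code_pathway(3.2e24))
```

Note this is a presumption, not a bright line: the Commission can designate systemic-risk GPAI on other grounds, so the check is a first filter only.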

Article 95(3) — Who Can Develop Codes:

Codes of conduct may be drawn up by individual providers of AI systems or users, or by organisations representing them, or by both, including with involvement of any interested party and their representative organisations, including civil society organisations and academia.

This is deliberately broad. Four development pathways:

  1. Individual provider codes — A single company writes and commits to its own code
  2. Industry association codes — A sector body (e.g., a FinTech association, a medical AI consortium) develops a code that multiple members adopt
  3. Multi-stakeholder codes — Codes developed with user organisations, civil society, and academia involved
  4. Hybrid codes — Combinations of the above, with different parties responsible for different sections

Article 95(4) — SME-Specific Provisions:

The AI Office and the Member States shall take into account the specific interests and needs of SMEs when encouraging and facilitating the drawing up of codes of conduct.

The SME provision in Art.95(4) is not merely aspirational. In practice, it means the AI Office will facilitate the development of SME-accessible codes — lighter-weight documentation requirements, lower governance overhead, and code structures that do not require the compliance infrastructure of a large enterprise. For providers who qualify as SMEs under EU law, this creates an accessible onboarding path to Art.95 compliance.

Article 95(5) — Governance Mechanisms:

The AI Office and the Member States shall facilitate the development of adequate governance mechanisms for codes of conduct, which may include monitoring arrangements and, where appropriate, include the management of a list of signatories that have committed to the codes of conduct and reporting on the implementation thereof.

Art.95(5) makes clear that Art.95 codes are not purely self-declared. The AI Office facilitates governance structures — signatory registries, monitoring mechanisms, and implementation reporting. A provider listed as a signatory to an AI Office-facilitated code has made a public, registerable commitment. Compliance with that commitment is therefore verifiable, and claims of compliance become factual assertions subject to Art.99(5) penalties for incorrect or misleading information.
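
A minimal sketch of what such governance could track, assuming a simple registry structure (the `SignatoryRecord` fields and the 365-day reporting window are our illustrative choices, not an AI Office format):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SignatoryRecord:
    """One entry in an Art.95(5)-style signatory list with reporting history."""
    provider: str
    code_name: str
    signed_on: date
    last_report_on: Optional[date] = None

def reporting_overdue(record: SignatoryRecord, today: date,
                      max_days: int = 365) -> bool:
    """A signatory with no implementation report within max_days is overdue."""
    anchor = record.last_report_on or record.signed_on
    return (today - anchor).days > max_days

rec = SignatoryRecord("ExampleAI GmbH", "Sector Code v1", date(2026, 1, 10))
print(reporting_overdue(rec, today=date(2026, 4, 25)))  # 105 days since signing
```

Even a registry this simple makes the difference between a self-declared claim and a monitorable commitment.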

Article 95(6) — Commission Role:

The Commission shall facilitate the development of codes of conduct at Union level, including by developing technical specifications and solutions addressing the environmental sustainability of AI systems.

The Commission has a specific mandate to develop Union-level codes, including codes addressing the environmental sustainability of AI — energy consumption, carbon footprint, and resource use. Providers deploying large models or infrastructure-intensive AI should track the Commission's Union-level code development as it will establish the benchmark for environmental compliance posture.

Art.95 vs Art.56: The GPAI CoP Distinction

Developers building on or with GPAI models frequently confuse Art.95 and Art.56. They are complementary, not interchangeable.

| Dimension | Art.56 GPAI Code of Practice | Art.95 Voluntary Code of Conduct |
| --- | --- | --- |
| Who | GPAI providers (all sizes) | Non-high-risk AI providers + GPAI providers (voluntary additional layer) |
| What | Art.53–55 GPAI obligations | Art.9–15 Chapter III requirements |
| Compliance presumption | Yes — Art.56(4) creates a presumption of conformity | No formal presumption, but a market signal |
| AI Office role | Facilitates the CoP process, reviews CoP compliance | Facilitates code development and governance |
| Mandatory? | The primary compliance pathway (not legally mandatory, but practically essential) | Genuinely voluntary — no obligation to participate |
| Enforcement exposure | Non-compliance with Art.53–55 triggers fines under Art.101 | False claims of Art.95 compliance risk Art.99(5) penalties for misleading information |
| Signatories | GPAI CoP signatory list maintained by the AI Office | Art.95 signatory registries facilitated by the AI Office |

The practical implication: if you provide a GPAI model, Art.56 is your primary obligation pathway. Art.95 provides an additional voluntary layer — particularly for GPAI providers who want to demonstrate broader compliance posture extending beyond GPAI-specific obligations to the Chapter III requirements their downstream deployers must satisfy.

If you provide a non-GPAI AI system that is not high-risk (a customer service chatbot, a product recommendation engine, a non-Annex III HR tool), Art.56 does not apply to you. Art.95 is your voluntary compliance option.
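
The routing in the last two paragraphs reduces to a small decision helper. A sketch with deliberately simplified inputs (the names are ours; real classification requires a full Art.6 and Annex III analysis):

```python
def applicable_mechanisms(is_gpai_provider: bool, is_high_risk: bool) -> list[str]:
    """Which code mechanisms apply, per the Art.56 vs Art.95 split above."""
    if is_gpai_provider:
        # Art.56 is the primary GPAI pathway; Art.95 is an optional extra layer.
        return ["Art.56 GPAI Code of Practice",
                "Art.95 voluntary code (optional additional layer)"]
    if is_high_risk:
        # High-risk systems are outside Art.95's scope: Chapter III is mandatory.
        return ["Chapter III mandatory compliance (Art.8-15)"]
    # Non-GPAI, non-high-risk: Art.95 is the voluntary compliance option.
    return ["Art.95 voluntary code"]

# A customer service chatbot provider (non-GPAI, non-high-risk):
print(applicable_mechanisms(is_gpai_provider=False, is_high_risk=False))
```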

The Art.6(3) Interaction: Reducing Reclassification Risk

Art.6(3) of the EU AI Act allows providers of Annex III AI systems to demonstrate that their specific application presents no significant risk and therefore does not meet the high-risk threshold, despite appearing on the Annex III list. This is the "significant risk" gate.

The interaction with Art.95 codes is strategic. If a provider operates an AI system that is near the high-risk boundary — close to Annex III categories, or in a sector where future Annex III expansion is possible — maintaining Art.95 voluntary code compliance demonstrates systematic governance. This has two effects:

  1. Substantive: A provider with documented risk management (Art.9), data governance (Art.10), and transparency measures (Art.13) under an Art.95 code is genuinely less likely to present the "significant risk" that disqualifies a system from the Art.6(3) derogation.

  2. Procedural: If a Market Surveillance Authority investigates whether a system crosses the high-risk threshold, a provider who can demonstrate systematic Art.95 compliance is better positioned to support an Art.6(3) no-significant-risk argument than one with no documented governance.

Art.7 empowers the Commission to add new categories to Annex III by delegated act. Providers who have built Art.95 compliance infrastructure can absorb reclassification without rebuilding their entire compliance programme from scratch.

CLOUD Act Infrastructure Intersection

Art.95 voluntary codes that include commitments about data governance (Art.10 equivalents), logging (Art.12 equivalents), or security (Art.15 equivalents) interact with infrastructure jurisdiction when AI system components run on US-cloud infrastructure.

The CLOUD Act (Clarifying Lawful Overseas Use of Data Act) allows US authorities to compel US cloud providers to produce data stored overseas. If an Art.95 code commits to EU-resident data processing, EU-only logging access, or jurisdiction-specific security controls, those commitments cannot be fully honoured if the underlying infrastructure is subject to CLOUD Act production orders.

Three CLOUD Act scenarios for Art.95 code signatories:

Scenario 1 — Commitments compatible with US cloud: The Art.95 code commits to transparency and human oversight (Art.13, Art.14 equivalents) but does not restrict data residency. No CLOUD Act conflict. Full compliance achievable on any infrastructure.

Scenario 2 — Commitments creating residency expectations: The Art.95 code commits to EU-resident processing or EU-jurisdiction-only access to AI system logs. A CLOUD Act production order requiring the US cloud provider to produce those logs would breach the code commitment. Infrastructure risk exists.

Scenario 3 — EU-native infrastructure: An AI provider using EU-native infrastructure — a PaaS platform subject exclusively to EU law, operating entirely within EU data centres — can make Art.95 infrastructure commitments without CLOUD Act exposure. A single US legal order cannot reach infrastructure with no US nexus.

For providers making Art.95 commitments about data handling, logging, or access controls: infrastructure jurisdiction is a code content decision, not merely a technical one.

What Art.95 Codes Must Contain to Be Commercially Useful

The regulation does not enumerate mandatory content for Art.95 codes. The AI Office facilitation framework will develop minimum elements over time, but in the interim, codes that gain commercial traction contain five structural elements:

1. Scope definition: Which specific AI systems the code applies to, described with sufficient precision to allow users and regulators to determine whether a given deployment is covered.

2. Requirement mapping: For each voluntary requirement (Art.9, 10, 11, 12, 13, 14, 15 equivalents), a specification of what implementation looks like in the provider's systems. Not "we will do risk management" but "we maintain a risk register covering [specific risk categories] updated [frequency], reviewed by [role]."

3. Monitoring mechanism: How compliance with the code is verified — self-assessment, third-party audit, or AI Office monitoring. Codes with no monitoring mechanism have limited commercial credibility.

4. Signatory obligations: What the provider commits to by signing — documentation to maintain, reports to submit, incident reporting thresholds, and the consequences of code violation.

5. Deviation reporting: A mechanism for reporting when code commitments cannot be met — whether due to technical constraints, legal conflicts (including CLOUD Act scenarios), or changed circumstances.
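
A draft code can be screened against these five elements mechanically. A minimal sketch, where the `DraftCode` fields are our own shorthand for the five elements, not a regulatory schema:

```python
from dataclasses import dataclass, field

@dataclass
class DraftCode:
    """Our shorthand container for the five structural elements above."""
    scope_definition: str = ""
    requirement_mapping: dict[str, str] = field(default_factory=dict)
    monitoring_mechanism: str = ""
    signatory_obligations: list[str] = field(default_factory=list)
    deviation_reporting: str = ""

def missing_elements(code: DraftCode) -> list[str]:
    """Return which of the five structural elements a draft code lacks."""
    checks = {
        "scope definition": bool(code.scope_definition.strip()),
        "requirement mapping": bool(code.requirement_mapping),
        "monitoring mechanism": bool(code.monitoring_mechanism.strip()),
        "signatory obligations": bool(code.signatory_obligations),
        "deviation reporting": bool(code.deviation_reporting.strip()),
    }
    return [name for name, present in checks.items() if not present]

draft = DraftCode(scope_definition="Non-Annex III HR analytics",
                  monitoring_mechanism="annual third-party audit")
print(missing_elements(draft))
```

This is only a presence check; assessing whether each element is precise enough to be commercially credible still needs human review.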

Python Implementation: Art.95 Code Adherence Tracker

from dataclasses import dataclass
from datetime import date
from typing import Optional
from enum import Enum

class AdherenceStatus(Enum):
    COMPLIANT = "compliant"
    PARTIAL = "partial"
    NON_COMPLIANT = "non_compliant"
    NOT_ASSESSED = "not_assessed"

@dataclass
class CodeCommitment:
    article_equivalent: str  # e.g., "Art.9 equivalent - Risk Management"
    commitment_text: str
    implementation_evidence: list[str]
    last_assessment_date: date
    status: AdherenceStatus
    next_review_date: date
    deviation_note: Optional[str] = None

@dataclass 
class Art95CodeOfConduct:
    code_name: str
    issuing_body: str  # "AI Office", "industry association name", "individual"
    signatory_since: date
    scope_description: str
    commitments: list[CodeCommitment]
    monitoring_mechanism: str
    infrastructure_jurisdiction: str  # "EU-only", "US-cloud", "hybrid"
    cloud_act_analysis: Optional[str] = None
    
    def voluntary_compliance_score(self) -> dict:
        total = len(self.commitments)
        if total == 0:
            return {"score": 0.0, "status": "NO_COMMITMENTS"}
        
        compliant = sum(1 for c in self.commitments 
                       if c.status == AdherenceStatus.COMPLIANT)
        partial = sum(1 for c in self.commitments 
                     if c.status == AdherenceStatus.PARTIAL)
        
        score = (compliant + 0.5 * partial) / total
        
        return {
            "score": round(score, 2),
            "compliant_count": compliant,
            "partial_count": partial,
            "non_compliant_count": sum(1 for c in self.commitments 
                                       if c.status == AdherenceStatus.NON_COMPLIANT),
            "total": total,
            "percentage": f"{score * 100:.0f}%",
            "status": "GREEN" if score >= 0.9 else "AMBER" if score >= 0.7 else "RED",
        }
    
    def overdue_reviews(self) -> list[CodeCommitment]:
        today = date.today()
        return [c for c in self.commitments if c.next_review_date < today]
    
    def cloud_act_risk_assessment(self) -> str:
        if self.infrastructure_jurisdiction == "EU-only":
            return "LOW: EU-native infrastructure has no CLOUD Act nexus."
        if self.infrastructure_jurisdiction == "US-cloud":
            data_commitments = [c for c in self.commitments 
                               if "Art.10" in c.article_equivalent or 
                                  "Art.12" in c.article_equivalent or
                                  "Art.15" in c.article_equivalent]
            if data_commitments:
                return ("HIGH: US-cloud infrastructure with data/logging/security commitments. "
                       "CLOUD Act production orders could breach Art.95 code commitments. "
                       "Consider EU-native infrastructure migration or commitment scope narrowing.")
            return "MEDIUM: US-cloud infrastructure with no data-residency commitments."
        return "REVIEW: Hybrid infrastructure — assess each commitment against CLOUD Act exposure."
    
    def generate_monitoring_report(self) -> dict:
        score_data = self.voluntary_compliance_score()
        return {
            "code_name": self.code_name,
            "report_date": date.today().isoformat(),
            "signatory_duration_days": (date.today() - self.signatory_since).days,
            "compliance_score": score_data,
            "overdue_reviews": len(self.overdue_reviews()),
            "cloud_act_risk": self.cloud_act_risk_assessment(),
            "deviations": [
                {"commitment": c.article_equivalent, "note": c.deviation_note}
                for c in self.commitments 
                if c.deviation_note
            ],
        }

# Example: HR analytics provider, below Annex III threshold
hr_analytics_code = Art95CodeOfConduct(
    code_name="EU AI Act Art.95 Voluntary Compliance Code v1.0",
    issuing_body="industry_association",
    signatory_since=date(2026, 1, 15),
    scope_description="Non-Annex III HR analytics AI systems for candidate screening support",
    infrastructure_jurisdiction="EU-only",
    commitments=[
        CodeCommitment(
            article_equivalent="Art.9 equivalent - Risk Management",
            commitment_text="Maintain risk register covering discrimination, accuracy, and scope-creep risks",
            implementation_evidence=["risk_register_v2.pdf", "quarterly_review_2026Q1.pdf"],
            last_assessment_date=date(2026, 3, 31),
            status=AdherenceStatus.COMPLIANT,
            next_review_date=date(2026, 6, 30),
        ),
        CodeCommitment(
            article_equivalent="Art.13 equivalent - Transparency",
            commitment_text="Disclose AI involvement in all candidate assessments to hiring managers",
            implementation_evidence=["disclosure_template_v3.docx"],
            last_assessment_date=date(2026, 3, 31),
            status=AdherenceStatus.COMPLIANT,
            next_review_date=date(2026, 6, 30),
        ),
        CodeCommitment(
            article_equivalent="Art.14 equivalent - Human Oversight",
            commitment_text="Ensure hiring decision cannot be made by AI output alone",
            implementation_evidence=["process_audit_q1_2026.pdf"],
            last_assessment_date=date(2026, 3, 31),
            status=AdherenceStatus.PARTIAL,
            next_review_date=date(2026, 5, 15),
            deviation_note="Edge case: urgent-hire pipeline allows 24h AI-only screening. Remediation in progress.",
        ),
    ],
    monitoring_mechanism="quarterly self-assessment + annual third-party review",
)

report = hr_analytics_code.generate_monitoring_report()
print(f"Compliance score: {report['compliance_score']['percentage']} ({report['compliance_score']['status']})")
print(f"CLOUD Act risk: {report['cloud_act_risk']}")

The Commercial Case: Why Non-Mandatory Still Matters

Three market forces are making Art.95 compliance commercially significant even though it remains voluntary:

Enterprise procurement requirements: Large enterprise buyers increasingly include AI governance requirements in vendor questionnaires and procurement criteria. An Art.95 code signatory can respond to these requirements with a documented, structured answer. Non-signatories must either construct ad-hoc responses or acknowledge the absence of systematic governance.

Insurance underwriting: AI liability insurance is maturing as a market. Underwriters are beginning to condition premiums and coverage on documented AI governance practices. Art.95 code adherence provides a standardised evidence base for insurance applications — reducing the documentation burden and potentially reducing premium loading for providers with systematic compliance posture.

Regulatory relationship: The AI Office maintains relationships with AI providers across the compliance spectrum. Providers who participate constructively in voluntary code development and maintenance are known to regulators before enforcement becomes relevant. This relationship is valuable if the provider's AI systems are later scrutinised in an Annex III reclassification review under Art.7 or an Art.6(3) significant-risk assessment.

See Also

The 30-Item Art.95 Compliance Checklist

Code Selection and Onboarding (Items 1–8)

Requirement Implementation (Items 9–20)

Monitoring and Governance (Items 21–30)