EU AI Act Art.95: Voluntary Codes of Conduct for Non-High-Risk AI — Developer Guide (2026)
Art.5 prohibits certain AI practices. Art.6 classifies high-risk AI systems, and Art.8–16 impose their mandatory requirements and provider obligations. Art.51–55 govern GPAI providers. Art.95 covers everyone else.
Article 95 is the EU AI Act's voluntary compliance lane — the mechanism by which providers of AI systems that do not reach the high-risk threshold can nonetheless signal meaningful compliance posture, earn market differentiation, and reduce future reclassification risk. It is structurally underestimated: many developers in the non-high-risk space assume Art.95 is purely optional and therefore unimportant. That assumption is wrong in two directions.
First, voluntary codes of conduct under Art.95 become commercially binding the moment procurement decisions, enterprise contracts, or insurance underwriting require them. A provider who has signed an AI Office-facilitated code and then violates it faces Art.99(3) exposure for supplying misleading information — the voluntary label applies to whether you sign, not to what happens once you do.
Second, Art.95 participation is the most accessible path to demonstrating the kind of systematic AI governance that regulators and enterprise customers increasingly expect even from non-high-risk providers. With the Art.84 Commission review cycle running and Annex III expansion possible, providers who have invested in Art.95 compliance are better positioned to absorb reclassification without crisis.
What Article 95 Actually Says
Article 95 establishes the voluntary codes mechanism, the facilitation responsibilities of the AI Office and Member States, the development process, SME-specific provisions, and Commission responsibilities for Union-level codes.
Article 95(1) — The Voluntary Application Framework:
The AI Office and the Member States shall encourage and facilitate the drawing up of voluntary codes of conduct intended to foster the voluntary application of specific requirements to AI systems other than high-risk AI systems, in accordance with Articles 9, 10, 11, 12, 13, 14 and 15, on the basis of technical specifications and solutions identified as best practices with due regard to the intended purpose of the AI systems, while also taking into account available technical standards.
Art.95(1) identifies the target of voluntary codes: AI systems that are not high-risk under Art.6 and Annex III. Providers of these systems — chatbots, recommendation engines, content moderation tools, HR analytics below the Annex III threshold, productivity AI — can voluntarily apply Chapter III requirements through Art.95 codes.
The referenced articles are significant:
- Art.9 — Risk management system (continuous monitoring, risk identification, mitigation)
- Art.10 — Data governance (training data quality, relevance, bias assessment)
- Art.11 — Technical documentation (system description, purpose, capability limitations)
- Art.12 — Record keeping and logging (automatic logs, post-market monitoring support)
- Art.13 — Transparency and disclosure (user information, intended purpose, limitations)
- Art.14 — Human oversight (design for meaningful human intervention and override)
- Art.15 — Accuracy, robustness, and cybersecurity (performance metrics, adversarial testing)
These are exactly the requirements that Annex III high-risk systems must implement. Art.95 allows non-high-risk providers to adopt them voluntarily, creating a compliance posture that mirrors high-risk obligations without the mandatory enforcement exposure.
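A minimal sketch of how a provider might track which of these seven requirements its voluntary commitments already cover. The article labels come from the list above; the helper function and its names are illustrative, not prescribed by the Act:

```python
# Illustrative gap-analysis sketch: which Chapter III requirements (Art.9-15)
# a non-high-risk provider has voluntarily adopted under an Art.95 code.
CHAPTER_III_REQUIREMENTS = {
    "Art.9": "Risk management system",
    "Art.10": "Data governance",
    "Art.11": "Technical documentation",
    "Art.12": "Record keeping and logging",
    "Art.13": "Transparency and disclosure",
    "Art.14": "Human oversight",
    "Art.15": "Accuracy, robustness, and cybersecurity",
}


def adoption_gaps(adopted: set[str]) -> dict[str, str]:
    """Return the Art.9-15 requirements not yet covered by voluntary commitments."""
    return {art: desc for art, desc in CHAPTER_III_REQUIREMENTS.items()
            if art not in adopted}


gaps = adoption_gaps({"Art.9", "Art.13", "Art.14"})
# gaps now maps Art.10, Art.11, Art.12, and Art.15 to their descriptions
```

A provider drafting code commitments can run this against its current programme to see which requirement mappings still need writing.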
Article 95(2) — The GPAI Dimension:
The AI Office and the Member States shall facilitate the development of codes of conduct relating to the voluntary application of specific requirements as referred to in paragraph 1 by the providers of general-purpose AI models, in accordance with Articles 53 and 54, in particular taking into account the fact that where commitments are made by providers of general-purpose AI models under the codes of conduct on the basis of Article 56(1), any such code of conduct could in particular cover the categories provided for in Article 56(2).
Art.95(2) extends the voluntary codes framework to GPAI providers, including those not subject to systemic-risk obligations because their models fall below the Art.51(2) 10²⁵ FLOP threshold. The reference to Art.56(1) means that the Art.56 GPAI Code of Practice and the voluntary Art.95 codes are designed to interact: a GPAI provider can participate in both, with Art.56 covering the Art.53–55 GPAI obligations and Art.95 covering voluntary additional commitments.
Article 95(3) — Who Can Develop Codes:
Codes of conduct may be drawn up by individual providers of AI systems or users, or by organisations representing them, or by both, including with involvement of any interested party and their representative organisations, including civil society organisations and academia.
This is deliberately broad. Four development pathways:
- Individual provider codes — A single company writes and commits to its own code
- Industry association codes — A sector body (e.g., a FinTech association, a medical AI consortium) develops a code that multiple members adopt
- Multi-stakeholder codes — Codes developed with user organisations, civil society, and academia involved
- Hybrid codes — Combinations of the above, with different parties responsible for different sections
Article 95(4) — SME-Specific Provisions:
The AI Office and the Member States shall take into account the specific interests and needs of SMEs when encouraging and facilitating the drawing up of codes of conduct.
The SME provision in Art.95(4) is not merely aspirational. In practice, it means the AI Office will facilitate the development of SME-accessible codes — lighter-weight documentation requirements, lower governance overhead, and code structures that do not require the compliance infrastructure of a large enterprise. For providers who qualify as SMEs under EU law, this creates an accessible onboarding path to Art.95 compliance.
Article 95(5) — Governance Mechanisms:
The AI Office and the Member States shall facilitate the development of adequate governance mechanisms for codes of conduct, which may include monitoring arrangements and, where appropriate, include the management of a list of signatories that have committed to the codes of conduct and reporting on the implementation thereof.
Art.95(5) makes clear that Art.95 codes are not purely self-declared. The AI Office facilitates governance structures — signatory registries, monitoring mechanisms, and implementation reporting. A provider listed as a signatory to an AI Office-facilitated code has made a public, registerable commitment. Compliance with that commitment is therefore verifiable, and claims of compliance become factual assertions subject to Art.99(3).
Article 95(6) — Commission Role:
The Commission shall facilitate the development of codes of conduct at Union level, including by developing technical specifications and solutions addressing the environmental sustainability of AI systems.
The Commission has a specific mandate to develop Union-level codes, including codes addressing the environmental sustainability of AI — energy consumption, carbon footprint, and resource use. Providers deploying large models or infrastructure-intensive AI should track the Commission's Union-level code development as it will establish the benchmark for environmental compliance posture.
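A provider can already document a rough energy and carbon estimate for its AI workloads ahead of those Union-level sustainability codes. The sketch below is illustrative only: the power draw, PUE, and grid-intensity figures are placeholder assumptions to be replaced with measured values, not figures from the Act or any standard.

```python
# Back-of-envelope sketch of an energy/carbon estimate, of the kind an
# Art.95(6) sustainability commitment might require documenting.
# All default figures are illustrative assumptions.
def estimate_emissions_kg(gpu_hours: float,
                          gpu_power_kw: float = 0.4,      # assumed per-GPU draw
                          pue: float = 1.2,               # assumed datacentre overhead
                          grid_kg_co2_per_kwh: float = 0.25) -> float:
    """Energy (kWh) x datacentre overhead (PUE) x grid carbon intensity."""
    energy_kwh = gpu_hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_co2_per_kwh


estimate_emissions_kg(10_000)  # -> 1200.0 kg CO2e under these assumptions
```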
Art.95 vs Art.56: The GPAI CoP Distinction
Developers building on or with GPAI models frequently confuse Art.95 and Art.56. They are complementary, not interchangeable.
| Dimension | Art.56 GPAI Code of Practice | Art.95 Voluntary Code of Conduct |
|---|---|---|
| Who | GPAI providers (all sizes) | Non-high-risk AI providers + GPAI providers (voluntary additional) |
| What | Art.53–55 GPAI obligations | Art.9, 10, 11, 12, 13, 14, 15 Chapter III requirements |
| Compliance presumption | Yes — Art.56(4) creates a presumption of conformity | No formal presumption, but market signal |
| AI Office role | Facilitates CoP process, reviews CoP compliance | Facilitates code development and governance |
| Mandatory | CoP is the primary compliance pathway (not legally mandatory, but practically essential) | Genuinely voluntary — no obligation to participate |
| Enforcement exposure | Non-compliance with Art.53–55 triggers Art.99(2) penalties | False claims of Art.95 compliance trigger Art.99(3) penalties |
| Signatories | GPAI CoP signatory list maintained by AI Office | Art.95 signatory registries facilitated by AI Office |
The practical implication: if you provide a GPAI model, Art.56 is your primary obligation pathway. Art.95 provides an additional voluntary layer — particularly for GPAI providers who want to demonstrate broader compliance posture extending beyond GPAI-specific obligations to the Chapter III requirements their downstream deployers must satisfy.
If you provide a non-GPAI AI system that is not high-risk (a customer service chatbot, a product recommendation engine, a non-Annex III HR tool), Art.56 does not apply to you. Art.95 is your voluntary compliance option.
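The routing logic of the two paragraphs above can be sketched as a small helper. The category strings and the simplified boolean inputs are illustrative; actual classification under Art.6 and Annex III is a legal analysis, not a flag.

```python
# Hedged sketch of the Art.56 vs Art.95 routing described above.
# Pathway labels are shorthand summaries, not statutory language.
def compliance_pathway(is_gpai: bool, is_high_risk: bool,
                       training_flops: float = 0.0) -> list[str]:
    """Map a simplified classification to the article pathways discussed above."""
    pathways = []
    if is_high_risk:
        pathways.append("Chapter III mandatory requirements")
    if is_gpai:
        pathways.append("Art.53-54 GPAI obligations (Art.56 CoP as pathway)")
        if training_flops >= 1e25:  # Art.51(2) systemic-risk presumption threshold
            pathways.append("Art.55 systemic-risk obligations")
    if not is_high_risk:
        pathways.append("Art.95 voluntary code (optional)")
    return pathways


compliance_pathway(is_gpai=False, is_high_risk=False)
# -> ["Art.95 voluntary code (optional)"]
```

Note that a non-high-risk GPAI provider gets both lists, which is exactly the layering Art.95(2) contemplates.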
The Art.6(3) Interaction: Reducing Reclassification Risk
Art.6(3) of the EU AI Act allows providers of Annex III AI systems to demonstrate that their specific application presents no significant risk and therefore does not meet the high-risk threshold, despite appearing on the Annex III list. This is the "significant risk" gate.
The interaction with Art.95 codes is strategic. If a provider operates an AI system that is near the high-risk boundary — close to Annex III categories, or in a sector where future Annex III expansion is possible — maintaining Art.95 voluntary code compliance demonstrates systematic governance. This has two effects:
- Substantive: A provider with documented risk management (Art.9), data governance (Art.10), and transparency measures (Art.13) under an Art.95 code is genuinely less likely to present the "significant risk" that the Art.6(3) exemption is designed to capture.
- Procedural: If a Market Surveillance Authority investigates whether a system crosses the high-risk threshold, a provider who can demonstrate systematic Art.95 compliance is better positioned to support an Art.6(3) no-significant-risk argument than one with no documented governance.
Art.84 authorises the Commission to add new categories to Annex III. Providers who have built Art.95 compliance infrastructure can absorb reclassification without rebuilding their entire compliance programme from scratch.
CLOUD Act Infrastructure Intersection
Art.95 voluntary codes that include commitments about data governance (Art.10 equivalents), logging (Art.12 equivalents), or security (Art.15 equivalents) interact with infrastructure jurisdiction when AI system components run on US-cloud infrastructure.
The CLOUD Act (Clarifying Lawful Overseas Use of Data Act) allows US authorities to compel US cloud providers to produce data stored overseas. If an Art.95 code commits to EU-resident data processing, EU-only logging access, or jurisdiction-specific security controls, those commitments cannot be fully honoured if the underlying infrastructure is subject to CLOUD Act production orders.
Three CLOUD Act scenarios for Art.95 code signatories:
Scenario 1 — Commitments compatible with US cloud: The Art.95 code commits to transparency and human oversight (Art.13, Art.14 equivalents) but does not restrict data residency. No CLOUD Act conflict. Full compliance achievable on any infrastructure.
Scenario 2 — Commitments creating residency expectations: The Art.95 code commits to EU-resident processing or EU-jurisdiction-only access to AI system logs. A CLOUD Act production order requiring the US cloud provider to produce those logs would breach the code commitment. Infrastructure risk exists.
Scenario 3 — EU-native infrastructure: An AI provider using EU-native infrastructure — a PaaS platform subject exclusively to EU law, operating entirely within EU data centres — can make Art.95 infrastructure commitments without CLOUD Act exposure. A single US legal order cannot reach infrastructure with no US nexus.
For providers making Art.95 commitments about data handling, logging, or access controls: infrastructure jurisdiction is a code content decision, not merely a technical one.
What Art.95 Codes Must Contain to Be Commercially Useful
The regulation does not enumerate mandatory content for Art.95 codes. The AI Office facilitation framework will develop minimum elements over time, but in the interim, codes that gain commercial traction contain five structural elements:
1. Scope definition: Which specific AI systems the code applies to, described with sufficient precision to allow users and regulators to determine whether a given deployment is covered.
2. Requirement mapping: For each voluntary requirement (Art.9, 10, 11, 12, 13, 14, 15 equivalents), a specification of what implementation looks like in the provider's systems. Not "we will do risk management" but "we maintain a risk register covering [specific risk categories] updated [frequency], reviewed by [role]."
3. Monitoring mechanism: How compliance with the code is verified — self-assessment, third-party audit, or AI Office monitoring. Codes with no monitoring mechanism have limited commercial credibility.
4. Signatory obligations: What the provider commits to by signing — documentation to maintain, reports to submit, incident reporting thresholds, and the consequences of code violation.
5. Deviation reporting: A mechanism for reporting when code commitments cannot be met — whether due to technical constraints, legal conflicts (including CLOUD Act scenarios), or changed circumstances.
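A quick completeness check against the five elements above can be sketched as follows. The field names are hypothetical, chosen here for illustration rather than taken from any published code template:

```python
# Hypothetical sketch: verify a draft Art.95 code document contains the
# five structural elements listed above. Field names are illustrative.
REQUIRED_ELEMENTS = ["scope_definition", "requirement_mapping",
                     "monitoring_mechanism", "signatory_obligations",
                     "deviation_reporting"]


def missing_elements(code_draft: dict) -> list[str]:
    """Return the structural elements absent or empty in a draft code."""
    return [e for e in REQUIRED_ELEMENTS if not code_draft.get(e)]


draft = {"scope_definition": "Non-Annex III HR analytics tools",
         "requirement_mapping": {"Art.9": "quarterly risk register review"}}
missing_elements(draft)
# -> ["monitoring_mechanism", "signatory_obligations", "deviation_reporting"]
```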
Python Implementation: Art.95 Code Adherence Tracker
```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class AdherenceStatus(Enum):
    COMPLIANT = "compliant"
    PARTIAL = "partial"
    NON_COMPLIANT = "non_compliant"
    NOT_ASSESSED = "not_assessed"


@dataclass
class CodeCommitment:
    article_equivalent: str  # e.g., "Art.9 equivalent - Risk Management"
    commitment_text: str
    implementation_evidence: list[str]
    last_assessment_date: date
    status: AdherenceStatus
    next_review_date: date
    deviation_note: Optional[str] = None


@dataclass
class Art95CodeOfConduct:
    code_name: str
    issuing_body: str  # "AI Office", "industry association name", "individual"
    signatory_since: date
    scope_description: str
    commitments: list[CodeCommitment]
    monitoring_mechanism: str
    infrastructure_jurisdiction: str  # "EU-only", "US-cloud", "hybrid"
    cloud_act_analysis: Optional[str] = None

    def voluntary_compliance_score(self) -> dict:
        """Weighted adherence score: full credit for compliant, half for partial."""
        total = len(self.commitments)
        if total == 0:
            return {"score": 0.0, "percentage": "0%", "status": "NO_COMMITMENTS"}
        compliant = sum(1 for c in self.commitments
                        if c.status == AdherenceStatus.COMPLIANT)
        partial = sum(1 for c in self.commitments
                      if c.status == AdherenceStatus.PARTIAL)
        score = (compliant + 0.5 * partial) / total
        return {
            "score": round(score, 2),
            "compliant_count": compliant,
            "partial_count": partial,
            "non_compliant_count": sum(1 for c in self.commitments
                                       if c.status == AdherenceStatus.NON_COMPLIANT),
            "total": total,
            "percentage": f"{score * 100:.0f}%",
            "status": "GREEN" if score >= 0.9 else "AMBER" if score >= 0.7 else "RED",
        }

    def overdue_reviews(self) -> list[CodeCommitment]:
        """Commitments whose scheduled review date has passed."""
        today = date.today()
        return [c for c in self.commitments if c.next_review_date < today]

    def cloud_act_risk_assessment(self) -> str:
        """Flag commitments that US-cloud infrastructure could force a breach of."""
        if self.infrastructure_jurisdiction == "EU-only":
            return "LOW: EU-native infrastructure has no CLOUD Act nexus."
        if self.infrastructure_jurisdiction == "US-cloud":
            data_commitments = [c for c in self.commitments
                                if "Art.10" in c.article_equivalent or
                                "Art.12" in c.article_equivalent or
                                "Art.15" in c.article_equivalent]
            if data_commitments:
                return ("HIGH: US-cloud infrastructure with data/logging/security commitments. "
                        "CLOUD Act production orders could breach Art.95 code commitments. "
                        "Consider EU-native infrastructure migration or commitment scope narrowing.")
            return "MEDIUM: US-cloud infrastructure with no data-residency commitments."
        return "REVIEW: Hybrid infrastructure — assess each commitment against CLOUD Act exposure."

    def generate_monitoring_report(self) -> dict:
        score_data = self.voluntary_compliance_score()
        return {
            "code_name": self.code_name,
            "report_date": date.today().isoformat(),
            "signatory_duration_days": (date.today() - self.signatory_since).days,
            "compliance_score": score_data,
            "overdue_reviews": len(self.overdue_reviews()),
            "cloud_act_risk": self.cloud_act_risk_assessment(),
            "deviations": [
                {"commitment": c.article_equivalent, "note": c.deviation_note}
                for c in self.commitments
                if c.deviation_note
            ],
        }


# Example: HR analytics provider, below Annex III threshold
hr_analytics_code = Art95CodeOfConduct(
    code_name="EU AI Act Art.95 Voluntary Compliance Code v1.0",
    issuing_body="industry_association",
    signatory_since=date(2026, 1, 15),
    scope_description="Non-Annex III HR analytics AI systems for candidate screening support",
    infrastructure_jurisdiction="EU-only",
    commitments=[
        CodeCommitment(
            article_equivalent="Art.9 equivalent - Risk Management",
            commitment_text="Maintain risk register covering discrimination, accuracy, and scope-creep risks",
            implementation_evidence=["risk_register_v2.pdf", "quarterly_review_2026Q1.pdf"],
            last_assessment_date=date(2026, 3, 31),
            status=AdherenceStatus.COMPLIANT,
            next_review_date=date(2026, 6, 30),
        ),
        CodeCommitment(
            article_equivalent="Art.13 equivalent - Transparency",
            commitment_text="Disclose AI involvement in all candidate assessments to hiring managers",
            implementation_evidence=["disclosure_template_v3.docx"],
            last_assessment_date=date(2026, 3, 31),
            status=AdherenceStatus.COMPLIANT,
            next_review_date=date(2026, 6, 30),
        ),
        CodeCommitment(
            article_equivalent="Art.14 equivalent - Human Oversight",
            commitment_text="Ensure hiring decision cannot be made by AI output alone",
            implementation_evidence=["process_audit_q1_2026.pdf"],
            last_assessment_date=date(2026, 3, 31),
            status=AdherenceStatus.PARTIAL,
            next_review_date=date(2026, 5, 15),
            deviation_note="Edge case: urgent-hire pipeline allows 24h AI-only screening. Remediation in progress.",
        ),
    ],
    monitoring_mechanism="quarterly self-assessment + annual third-party review",
)

report = hr_analytics_code.generate_monitoring_report()
print(f"Compliance score: {report['compliance_score']['percentage']} ({report['compliance_score']['status']})")
print(f"CLOUD Act risk: {report['cloud_act_risk']}")
```
The Commercial Case: Why Non-Mandatory Still Matters
Three market forces are making Art.95 compliance commercially significant even though it remains voluntary:
Enterprise procurement requirements: Large enterprise buyers increasingly include AI governance requirements in vendor questionnaires and procurement criteria. An Art.95 code signatory can respond to these requirements with a documented, structured answer. Non-signatories must either construct ad-hoc responses or acknowledge the absence of systematic governance.
Insurance underwriting: AI liability insurance is maturing as a market. Underwriters are beginning to condition premiums and coverage on documented AI governance practices. Art.95 code adherence provides a standardised evidence base for insurance applications — reducing the documentation burden and potentially reducing premium loading for providers with systematic compliance posture.
Regulatory relationship: The AI Office maintains relationships with AI providers across the compliance spectrum. Providers who participate constructively in voluntary code development and maintenance are known to regulators before enforcement becomes relevant. This relationship is valuable if the provider's AI systems are later scrutinised in an Art.84 reclassification review or an Art.6(3) significant-risk assessment.
See Also
- EU AI Act Art.56: GPAI Code of Practice — The mandatory GPAI compliance pathway that Art.95 complements
- EU AI Act Art.6(3): High-Risk Exemption — The no-significant-risk gate that Art.95 compliance supports
- EU AI Act Art.84: Commission Review — How Annex III can be expanded to reclassify your system
- GPAI Enforcement Countdown: 98 Days — The August 2, 2026 deadline context
- EU AI Act Art.94: AI Office Commitments — What happens when voluntary posture meets mandatory enforcement
- EU AI Act Art.96: Commission Guidelines for AI Implementation — The Commission guidance framework that interprets how Art.95 voluntary requirements apply in practice
The 30-Item Art.95 Compliance Checklist
Code Selection and Onboarding (Items 1–8)
- 1. System scope assessment: Document which AI systems you operate that are not high-risk under Art.6 and Annex III. These are the systems eligible for Art.95 code participation.
- 2. Near-threshold systems identified: Identify AI systems that are close to Annex III categories. These are highest-priority for Art.95 voluntary compliance — the Art.6(3) interaction makes their governance posture strategically important.
- 3. AI Office code catalogue reviewed: Check the AI Office's published list of facilitated codes and signatory registries. Sector-specific codes (HR analytics, financial services chatbots, content moderation) may already exist.
- 4. Industry association codes reviewed: Identify whether your sector has an industry-developed Art.95 code. Participating in existing codes is faster than developing your own.
- 5. Code development decision: Document whether you will participate in an existing code, develop a company-specific code, or contribute to a new industry code — and the rationale for that choice.
- 6. SME status assessment: Determine whether your organisation qualifies as an SME under EU law. If yes, ensure the codes you consider have SME-accessible implementation requirements.
- 7. Signatory registration: Once you join a code, register as a signatory with the AI Office or the relevant governance body. This creates the public commitment that supports commercial credibility.
- 8. Code coverage mapping: Map each Art.95 code commitment to the specific AI systems and workflows it covers. Gaps in coverage create both compliance risk and commercial ambiguity.
Requirement Implementation (Items 9–20)
- 9. Art.9 equivalent — risk management system: Implement a structured risk register covering the AI systems in scope, with regular review cycles and documented remediation for identified risks.
- 10. Art.10 equivalent — data governance: Document training data sources, data quality measures, and bias assessment results for AI systems covered by the code.
- 11. Art.11 equivalent — technical documentation: Maintain a technical description of each covered AI system sufficient for a regulator to understand its purpose, capabilities, and limitations.
- 12. Art.12 equivalent — record keeping: Implement automatic logging for covered AI system outputs, with retention periods and access controls documented.
- 13. Art.13 equivalent — transparency disclosure: Publish the information users need to understand when they are interacting with or affected by an AI system covered by the code.
- 14. Art.14 equivalent — human oversight: Design covered AI systems to enable meaningful human intervention and override. Document how human oversight is implemented operationally.
- 15. Art.15 equivalent — accuracy metrics: Establish performance metrics for covered AI systems, document accuracy targets, and maintain testing results demonstrating compliance.
- 16. Art.15 equivalent — cybersecurity: Implement and document cybersecurity controls appropriate to the risk profile of covered AI systems, including adversarial testing where relevant.
- 17. Infrastructure jurisdiction documented: Record the infrastructure on which each covered AI system processes data, with particular attention to CLOUD Act exposure for US-cloud deployments.
- 18. CLOUD Act conflict assessment: For any code commitment about data residency, log access, or security controls: assess whether US-cloud infrastructure could produce a CLOUD Act conflict. Document the assessment.
- 19. Deviation procedure implemented: Establish a process for identifying when code commitments cannot be met and for reporting deviations to the relevant governance body.
- 20. Environmental sustainability assessed: Document the energy consumption and carbon footprint of covered AI system infrastructure, in anticipation of Commission Union-level codes addressing Art.95(6) sustainability requirements.
Monitoring and Governance (Items 21–30)
- 21. Monitoring schedule established: Set regular assessment dates for each code commitment — quarterly for high-risk commitments, semi-annually for stable ones.
- 22. Compliance score tracked: Use a structured compliance tracking system (like the Python class above) to maintain an audit-ready record of commitment adherence over time.
- 23. Overdue review remediation: At each assessment cycle, identify commitments whose review dates have passed and remediate before the next monitoring period.
- 24. Third-party verification planned: Schedule at least annual third-party review of code compliance for commercially significant commitments — particularly those referenced in customer contracts or insurance documentation.
- 25. Signatory registry status current: Confirm that your organisation's signatory status with the AI Office or industry body reflects your current commitment scope.
- 26. Customer disclosure prepared: Prepare a customer-facing summary of your Art.95 code participation, suitable for use in RFP responses, procurement questionnaires, and contract annexes.
- 27. Insurance documentation ready: Compile Art.95 compliance evidence in a format suitable for AI liability insurance underwriting — monitoring reports, third-party assessments, commitment documentation.
- 28. Art.84 reclassification contingency: Confirm that your Art.95 compliance programme would satisfy mandatory Chapter III requirements if your AI system were reclassified as high-risk. If not, document the gaps.
- 29. Art.99(3) false claim check: Review all public statements, marketing materials, and RFP responses for claims about Art.95 compliance. Ensure each claim is supported by documented evidence — Art.99(3) applies to misleading information about compliance.
- 30. Annual code review cycle: Schedule a formal annual review of your Art.95 code participation: whether existing commitments remain appropriate, whether new AI systems should be brought into scope, and whether code content has been updated by the issuing body.
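The scheduling rule in item 21 can be sketched as a small helper. The interval lengths are approximations of "quarterly" and "semi-annual", chosen for illustration:

```python
from datetime import date, timedelta


def next_review(last_review: date, high_priority: bool) -> date:
    """Item 21 rule of thumb: quarterly reviews for high-priority
    commitments, semi-annual for stable ones (illustrative intervals)."""
    interval_days = 91 if high_priority else 182
    return last_review + timedelta(days=interval_days)


next_review(date(2026, 3, 31), high_priority=True)  # -> date(2026, 6, 30)
```

Feeding the result into each CodeCommitment's next_review_date keeps the overdue-review check in the tracker above meaningful.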