EU AI Act Art.95: Codes of Conduct for Voluntary AI Compliance — Developer Guide (2026)
Most of the EU AI Act's obligations — risk management systems, technical documentation, conformity assessments, CE marking — apply only to high-risk AI systems listed in Annex III. If you are building a recommendation engine, an internal analytics tool, or a customer-facing chatbot that does not touch employment, credit scoring, critical infrastructure, or law enforcement, the Act's heaviest requirements technically do not apply to you.
Article 95 is the bridge between that technical exemption and the practical reality of the market.
Buyers of AI systems — enterprises, government bodies, large financial institutions — increasingly require their AI vendors to demonstrate voluntary compliance with the standards that govern high-risk AI, even when not legally mandated. AI liability insurers are beginning to require evidence of voluntary codes of conduct as a precondition for coverage. Public procurement frameworks in Germany, France, and the Netherlands are adding voluntary AI Act compliance as an evaluation criterion. The European Commission's 2025 guidance encourages member states to prefer AI systems backed by approved codes when procuring for public administration.
Art.95 is the mechanism that makes voluntary compliance legible, auditable, and enforceable. Understanding it — and building for it — is increasingly a commercial prerequisite, not just a compliance option.
What Article 95 Actually Says
Article 95(1) — Core voluntary framework:
The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application of some or all of the requirements established for high-risk AI systems in Chapter III, Sections 2 and 3, to AI systems other than high-risk AI systems, taking into account the available technical solutions and industry best practices allowing for the application of such requirements.
Three points matter here:
- Voluntary: Art.95 codes apply to AI systems that are not classified as high-risk under Annex III. If your system is in Annex III, you are subject to Chapter III requirements directly — codes of conduct are irrelevant.
- Full or partial: A provider can choose to adopt some requirements from Chapter III or all of them. There is no minimum threshold. A code of conduct covering only Art.9 (risk management) and Art.12 (logging) is valid.
- AI Office facilitation: The AI Office coordinates the development process and maintains a register of approved codes. Providers who draft their own internal codes without AI Office involvement are not covered by Art.95 — though they can still claim voluntary best practices separately.
Article 95(2) — Scope and coverage:
The codes of conduct may cover one or more AI systems taking into account the similarity of the intended purposes of the relevant systems. The codes of conduct shall include specific commitments and shall use clear benchmarks and address, as a minimum, the elements referred to in Article 95(3).
The coverage is flexible: a single code can address multiple AI systems with similar purposes. A company that builds customer service bots, internal HR tools, and document processing tools can draft one code that covers all three — or join an industry code that covers their entire sector.
Article 95(3) — Minimum elements that every code must address:
The codes of conduct shall take due account of the generally acknowledged state of the art and relevant standards, and shall address, as a minimum, the requirements referred to in the respective Sections of Chapter III applicable to high-risk AI systems, and may address additional requirements on a voluntary basis, including those related to environmental sustainability, accessibility for persons with disabilities, stakeholder participation and diversity, equality and non-discrimination.
This paragraph contains the substantive content list. A code of conduct must address the Chapter III, Sections 2 and 3 requirements relevant to its covered AI systems:
| Chapter III Section | Requirements |
|---|---|
| Section 2 — Technical requirements | Risk management (Art.9), Data governance (Art.10), Technical documentation (Art.11), Record-keeping (Art.12), Transparency (Art.13), Human oversight (Art.14), Accuracy/robustness/cybersecurity (Art.15) |
| Section 3 — Obligations for providers and deployers | Provider obligations (Art.16), Action upon non-conformity (Art.20), Authorized representatives (Art.22), Importer/distributor obligations (Art.23-24), Deployer obligations (Art.26) |
The code does not need to apply every requirement from this list at the same intensity as mandatory high-risk compliance. It must address each — meaning: acknowledge the requirement, specify what voluntary measures the provider commits to, and define how compliance will be assessed.
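To make "address" concrete, a commitment entry for a single requirement could be captured as structured data along these lines — a sketch; the field names are illustrative, not an official AI Office schema:

```python
# Hypothetical structure for one commitment entry; fields are
# illustrative, not an official AI Office schema.
commitment_entry = {
    "requirement": "Art.12 — Record-keeping",
    "acknowledged": True,
    "voluntary_measures": [
        "Log inputs/outputs of every AI decision",
        "Retain logs for 365 days within EU jurisdiction",
    ],
    "assessment_method": "Annual internal audit with sample-based log review",
}

# By contrast, an entry like this would not satisfy Art.95(3):
insufficient_entry = {
    "requirement": "Art.9 — Risk management",
    "voluntary_measures": ["We follow best practices"],  # no specifics
    "assessment_method": None,                           # no assessment defined
}
```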
Article 95(4) — AI Office involvement:
The AI Office shall take specific actions to encourage and facilitate the drawing up of codes of conduct. AI systems providers are encouraged to commit to one single code of conduct where appropriate.
The AI Office actively facilitates code development through:
- Publishing template frameworks and model clauses
- Maintaining a registry of active codes and their signatories
- Providing guidance on what constitutes adequate coverage of each Chapter III requirement
- Reviewing draft codes and issuing recommendations before finalization
Article 95(5) — GPAI extension:
The AI Office shall take specific actions to encourage providers of general-purpose AI models to adhere to codes of practice referred to in Article 56, or to develop codes of conduct if they are not subject to obligations under that Article.
This paragraph creates a separate pathway for GPAI model providers who are not subject to Art.56's mandatory codes of practice. It allows GPAI providers building models below the systemic-risk threshold to voluntarily signal alignment with Art.56 standards.
Why Voluntary Is Not Optional in Practice
The legal framework is voluntary. The commercial reality is different.
Public procurement: Germany's AI procurement guidelines (effective 2026) instruct federal agencies to give scoring preference to AI vendors who have joined an AI Office-recognized code of conduct. France and the Netherlands have similar guidance in draft. For any company selling AI into EU public administration — directly or through a systems integrator — this is a de facto requirement.
Enterprise procurement: Large financial institutions (required to manage AI risk under DORA and the ECB's supervisory guidance) are including "AI Act voluntary compliance" provisions in their AI vendor contracts. If the vendor breaches the code of conduct it signed, the customer can trigger a contract clause — turning a voluntary commitment into a contractual obligation.
AI liability insurance: The emerging EU AI liability market (driven by the AI Liability Directive and updated Product Liability Directive) is using codes of conduct as underwriting criteria. Several insurers now offer lower premiums or broader coverage for AI liability policies where the developer has joined an approved code. Absence of a code can result in policy exclusions.
Employment and recruitment AI: Even where an AI system does not cross the Annex III threshold for mandatory high-risk treatment, HR departments often impose internal AI governance requirements. Codes of conduct serve as the evidence trail for internal HR compliance officers.
The Voluntary → Binding Conversion
Once a provider signs a code of conduct, what was voluntary becomes binding through three channels:
1. Contractual binding: When the code is referenced in customer contracts ("Provider represents that it adheres to [code X]"), breach of the code becomes breach of contract. Enterprise SaaS companies routinely face this when their enterprise master service agreement is updated by the customer to include code-of-conduct adherence as a warranty.
2. Regulatory binding via the AI Office: Art.95(4) empowers the AI Office to monitor adherence to approved codes. If a signatory's AI system is investigated and found non-compliant with the code it publicly claimed to adhere to, this can constitute supplying misleading information to authorities under Art.99(5) — a fine of up to €7.5M or 1% of worldwide annual turnover.
3. Reputational binding: AI Office codes are public. The registry of signatories and their stated commitments is public. Any significant divergence between stated commitments and actual system behavior — if surfaced through a market surveillance investigation, a whistleblower report (Art.88), or a media investigation — creates direct reputational exposure.
What Your Code of Conduct Must Cover
Working through the Chapter III requirements that Art.95(3) requires every code to address:
Risk Management (Art.9 equivalent)
A code of conduct must commit to maintaining an ongoing risk management process for covered AI systems. Unlike mandatory high-risk compliance, the code can define proportionate versions:
- Mandatory high-risk (Art.9): Full risk management system throughout the lifecycle, iterative testing, residual risk analysis documented for conformity assessment.
- Code of conduct commitment: Risk assessment conducted before deployment, documented risks logged, annual review, material changes trigger re-assessment.
The code must specify which version applies and how it will be assessed. "We follow best practices for AI risk management" without specifics does not satisfy Art.95(3).
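As a sketch of what the proportionate commitment above could look like in code — the system name, risks, and the annual interval are assumptions a real code would pin down:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

@dataclass
class RiskAssessment:
    """One documented pre-deployment risk assessment for a covered system."""
    system_id: str
    assessed_on: date
    documented_risks: List[str]
    review_interval_days: int = 365  # annual review per the commitment

    def needs_reassessment(self, material_change: bool = False) -> bool:
        # Material changes trigger re-assessment; otherwise the annual
        # review date does.
        due = self.assessed_on + timedelta(days=self.review_interval_days)
        return material_change or date.today() >= due

assessment = RiskAssessment(
    system_id="support-bot-v3",  # hypothetical system
    assessed_on=date(2026, 1, 15),
    documented_risks=["hallucinated refund promises", "PII leakage in logs"],
)
if assessment.needs_reassessment(material_change=True):
    print("Re-assess before deploying the change")
```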
Data Governance (Art.10 equivalent)
Codes must address data quality practices — how training and validation data is examined for biases, errors, and gaps relevant to the AI system's intended purpose. For non-high-risk systems, the commitment can be narrower:
- Mandatory high-risk: Full governance of training, validation, and testing datasets with documented provenance.
- Code of conduct: Documented data quality checks, bias screening relevant to the use case, process for handling discovered data issues post-deployment.
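A minimal sketch of a pre-training bias screen matching the narrower commitment above; the protected attribute and the 5% share threshold are assumptions your code of conduct would define:

```python
from typing import Any, Dict, List

def screen_training_data(
    records: List[Dict[str, Any]],
    protected_attribute: str,
    min_group_share: float = 0.05,  # assumption; set by the code of conduct
) -> Dict[str, Any]:
    """
    Flag protected-attribute groups that are under-represented in the
    training data relative to a minimum share threshold.
    """
    counts: Dict[str, int] = {}
    for record in records:
        group = str(record.get(protected_attribute, "unknown"))
        counts[group] = counts.get(group, 0) + 1
    total = len(records) or 1  # avoid division by zero on empty input
    under = [g for g, n in counts.items() if n / total < min_group_share]
    return {
        "group_counts": counts,
        "under_represented": under,  # log the result per the Art.10 commitment
        "passed": not under,
    }
```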
Technical Documentation (Art.11 equivalent)
The code must commit to maintaining documentation sufficient to explain the system's design, capabilities, limitations, and intended use. This documentation must be accessible to the AI Office upon request.
| Element | Mandatory high-risk (Annex IV) | Typical code of conduct commitment |
|---|---|---|
| System description | Full technical specification | Summary description of capabilities and limitations |
| Training process | Documented methodology | Brief description of training approach |
| Performance metrics | Full accuracy/precision/recall documentation | Key performance metrics for intended use case |
| Intended purpose | Detailed use case specification | Primary intended purpose and known limitations |
| Human oversight | Full Article 14 documentation | Description of human-in-the-loop mechanisms if any |
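A documentation record matching the right-hand column of this table might look like the following sketch (field names are illustrative, not an Annex IV template):

```python
import json
from dataclasses import dataclass, asdict
from typing import Dict, List

@dataclass
class SystemDocumentation:
    """Art.11-equivalent summary documentation at code-of-conduct depth."""
    system_id: str
    description: str               # summary of capabilities and limitations
    training_approach: str         # brief description, not full methodology
    key_metrics: Dict[str, float]  # metrics for the intended use case
    intended_purpose: str
    known_limitations: List[str]
    human_oversight: str           # human-in-the-loop mechanisms, if any

    def export(self) -> str:
        # Serialized form to hand over on an AI Office request.
        return json.dumps(asdict(self), indent=2)
```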
Logging and Record-Keeping (Art.12 equivalent)
Codes must commit to retaining logs sufficient to enable post-hoc investigation of AI outputs. The minimum:
- Inputs and outputs logged for a defined retention period
- Log access controls documented
- Log format sufficient to reconstruct individual AI decisions on request
Transparency (Art.13 equivalent)
Users and operators of covered AI systems must be informed:
- That they are interacting with or being affected by an AI system
- What the system's purpose is
- What its significant limitations and accuracy characteristics are
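A sketch of how those three disclosures could be assembled into a user-facing notice; the wording and the example values are hypothetical:

```python
from typing import List

def transparency_notice(
    purpose: str, limitations: List[str], accuracy_note: str
) -> str:
    """Assemble the user-facing disclosure committed to under the code."""
    lines = [
        "You are interacting with an AI system.",
        f"Purpose: {purpose}",
        "Significant limitations: " + "; ".join(limitations),
        f"Accuracy: {accuracy_note}",
    ]
    return "\n".join(lines)

print(transparency_notice(
    purpose="routing customer support requests",
    limitations=["may misclassify ambiguous requests"],
    accuracy_note="~92% routing accuracy on internal evaluation data",  # hypothetical figure
))
```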
Human Oversight (Art.14 equivalent)
The code must address what mechanisms exist for humans to monitor, override, or reject AI system outputs in the covered use cases.
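One common pattern is a confidence-gated review step, sketched below; the threshold and reviewer interface are assumptions the code text would fix:

```python
from typing import Any, Callable

def with_human_oversight(
    ai_output: Any,
    confidence: float,
    human_review: Callable[[Any], Any],
    review_threshold: float = 0.7,  # assumption; the code text would fix this
) -> Any:
    """
    Gate low-confidence outputs behind a human reviewer, who may accept,
    modify, or reject the AI output before it takes effect.
    """
    if confidence < review_threshold:
        return human_review(ai_output)  # override/reject happens here
    return ai_output
```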
Developing a Code of Conduct: The Process
Option 1: Join an Existing Sector Code
The AI Office has been facilitating sector-specific codes since 2025. As of mid-2026, active sector codes in development or finalized include:
- HR and recruitment AI: Covering Art.10 (bias in training data), Art.13 (transparency to job applicants), Art.14 (human review before adverse employment decisions)
- Customer service AI: Covering transparency (Art.13), logging (Art.12), human escalation mechanisms (Art.14)
- Financial services AI (non-DORA scope): Covering risk management (Art.9), documentation (Art.11), audit readiness (Art.12)
- Healthcare AI (below Annex III threshold): Covering documentation (Art.11), transparency to patients and clinicians (Art.13), clinical oversight (Art.14)
Joining an existing sector code is typically the fastest route: the AI Office publishes the code text, a company signs the adherence declaration, and the company is listed in the registry.
Option 2: Draft a Company-Specific Code
Larger AI companies with multiple products and use cases may draft their own codes to cover their specific systems. The process:
- Draft code text: Address all Chapter III Section 2 and 3 requirements proportionate to the covered systems' risk profile
- Submit to AI Office: Formal submission for review (Art.95(4) process)
- Consultation period: AI Office publishes draft for public comment (typically 30 days)
- Revision: Incorporate comments from AI Office and stakeholders
- Approval and registration: Accepted code is listed in the registry with signatories
- Ongoing monitoring: Annual adherence report submitted to AI Office
Option 3: Industry Association Code
Trade associations (software industry groups, fintech associations, AI company consortia) can develop codes that member companies then adopt. This distributes the drafting burden and gives the code broader legitimacy. The AI Office facilitates multi-stakeholder drafting processes for sector codes.
Technical Implementation: Building for Code Compliance
Voluntary compliance creates technical requirements that developers must build in advance. The most demanding commitments are logging, documentation, and explainability.
Logging Architecture for Art.12-equivalent Commitments
```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, Optional
import hashlib
import json
import uuid


@dataclass
class AIDecisionLog:
    """
    Logging structure for AI Act Art.95 code of conduct compliance.
    Captures inputs, outputs, context, and metadata for an audit trail.
    """
    log_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    system_id: str = ""           # Identifier for the AI system
    system_version: str = ""      # Version of the model/pipeline
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    session_id: Optional[str] = None

    # Input/output (store hashes for privacy, full values for audit)
    input_hash: str = ""          # SHA-256 of serialized input
    output_hash: str = ""         # SHA-256 of serialized output
    input_category: str = ""      # Type/category of input (not PII)
    output_category: str = ""     # Type of output produced

    # Context
    intended_purpose: str = ""    # Code of conduct declared purpose
    human_reviewed: bool = False  # Was a human involved in the decision?
    human_reviewer_role: Optional[str] = None

    # Risk signals
    confidence_score: Optional[float] = None
    flagged_for_review: bool = False
    flag_reason: Optional[str] = None

    # Retention metadata
    retention_days: int = 365     # Defined in the code of conduct
    jurisdiction: str = "EU"      # Infrastructure jurisdiction

    def to_audit_record(self) -> Dict[str, Any]:
        return {
            "log_id": self.log_id,
            "system_id": self.system_id,
            "system_version": self.system_version,
            "timestamp": self.timestamp,
            "input_hash": self.input_hash,
            "output_hash": self.output_hash,
            "input_category": self.input_category,
            "output_category": self.output_category,
            "intended_purpose": self.intended_purpose,
            "human_reviewed": self.human_reviewed,
            "confidence_score": self.confidence_score,
            "flagged_for_review": self.flagged_for_review,
            "jurisdiction": self.jurisdiction,
        }


class AISystemLogger:
    """
    Manages AI decision logs for Art.95 code of conduct compliance.
    """

    # Confidence below this threshold flags the decision for human review.
    REVIEW_THRESHOLD = 0.7

    def __init__(self, system_id: str, system_version: str, log_store):
        self.system_id = system_id
        self.system_version = system_version
        self.log_store = log_store  # inject your persistence layer

    def log_decision(
        self,
        raw_input: Any,
        raw_output: Any,
        intended_purpose: str,
        human_reviewed: bool = False,
        human_reviewer_role: Optional[str] = None,
        confidence_score: Optional[float] = None,
    ) -> AIDecisionLog:
        input_str = json.dumps(raw_input, default=str, sort_keys=True)
        output_str = json.dumps(raw_output, default=str, sort_keys=True)
        # Explicit None check: a confidence of 0.0 must still be flagged.
        low_confidence = (
            confidence_score is not None
            and confidence_score < self.REVIEW_THRESHOLD
        )
        log = AIDecisionLog(
            system_id=self.system_id,
            system_version=self.system_version,
            input_hash=hashlib.sha256(input_str.encode()).hexdigest(),
            output_hash=hashlib.sha256(output_str.encode()).hexdigest(),
            intended_purpose=intended_purpose,
            human_reviewed=human_reviewed,
            human_reviewer_role=human_reviewer_role,
            confidence_score=confidence_score,
            flagged_for_review=low_confidence,
            flag_reason="Low confidence" if low_confidence else None,
        )
        self.log_store.write(log.to_audit_record())
        return log

    def retrieve_for_audit(self, log_id: str) -> Optional[Dict[str, Any]]:
        """Retrieve a log record for regulatory audit — Art.12-equivalent."""
        return self.log_store.read(log_id)
```
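Usage could look like the following, with an in-memory store standing in for the injected persistence layer:

```python
class InMemoryLogStore:
    """Stand-in for a real persistence layer (database, WORM storage)."""
    def __init__(self):
        self._records = {}

    def write(self, record):
        self._records[record["log_id"]] = record

    def read(self, log_id):
        return self._records.get(log_id)

logger = AISystemLogger("support-bot-v3", "2026.02.1", InMemoryLogStore())
entry = logger.log_decision(
    raw_input={"query": "refund status"},
    raw_output={"route": "billing"},
    intended_purpose="customer service request routing",
    confidence_score=0.62,  # below the 0.7 threshold, so flagged
)
assert logger.retrieve_for_audit(entry.log_id)["flagged_for_review"]
```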
Code of Conduct Adherence Tracker
```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Any, Dict, List, Optional


class ChapterIIIRequirement(Enum):
    RISK_MANAGEMENT = "art_9_risk_management"
    DATA_GOVERNANCE = "art_10_data_governance"
    TECHNICAL_DOCUMENTATION = "art_11_technical_documentation"
    RECORD_KEEPING = "art_12_record_keeping"
    TRANSPARENCY = "art_13_transparency"
    HUMAN_OVERSIGHT = "art_14_human_oversight"
    ACCURACY_ROBUSTNESS = "art_15_accuracy_robustness_cybersecurity"


@dataclass
class CodeAdherenceStatus:
    requirement: ChapterIIIRequirement
    commitment_text: str        # What the code commits to for this requirement
    implementation_status: str  # "implemented", "partial", "not_implemented"
    evidence_location: str      # Where documentation / tooling lives
    last_reviewed: str          # YYYY-MM-DD
    gaps: List[str] = field(default_factory=list)


class CodesOfConductAdherenceChecker:
    """
    Tracks your AI system's adherence to an Art.95 code of conduct.
    Run periodically or trigger on model updates.
    """

    def __init__(self, system_id: str, code_of_conduct_id: str):
        self.system_id = system_id
        self.code_of_conduct_id = code_of_conduct_id
        self.statuses: List[CodeAdherenceStatus] = []

    def assess_requirement(
        self,
        requirement: ChapterIIIRequirement,
        commitment_text: str,
        implemented: bool,
        partial_gaps: Optional[List[str]] = None,
        evidence: str = "",
    ) -> CodeAdherenceStatus:
        status = CodeAdherenceStatus(
            requirement=requirement,
            commitment_text=commitment_text,
            implementation_status="implemented" if implemented else (
                "partial" if partial_gaps else "not_implemented"
            ),
            evidence_location=evidence,
            last_reviewed=datetime.now(timezone.utc).date().isoformat(),
            gaps=partial_gaps or [],
        )
        self.statuses.append(status)
        return status

    def generate_adherence_report(self) -> Dict[str, Any]:
        total = len(self.statuses)
        implemented = sum(
            1 for s in self.statuses if s.implementation_status == "implemented"
        )
        partial = sum(
            1 for s in self.statuses if s.implementation_status == "partial"
        )
        not_implemented = total - implemented - partial
        gaps = [
            {"requirement": s.requirement.value, "gap": gap}
            for s in self.statuses
            for gap in s.gaps
        ]
        return {
            "system_id": self.system_id,
            "code_of_conduct_id": self.code_of_conduct_id,
            "assessment_date": datetime.now(timezone.utc).date().isoformat(),
            "summary": {
                "total_requirements": total,
                "implemented": implemented,
                "partial": partial,
                "not_implemented": not_implemented,
                "adherence_percentage": round(
                    (implemented / total * 100) if total else 0, 1
                ),
            },
            "gaps": gaps,
            "requirements": [
                {
                    "requirement": s.requirement.value,
                    "status": s.implementation_status,
                    "evidence": s.evidence_location,
                    "last_reviewed": s.last_reviewed,
                }
                for s in self.statuses
            ],
        }
```
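A hypothetical run, with an invented code-of-conduct ID and evidence paths:

```python
checker = CodesOfConductAdherenceChecker(
    system_id="support-bot-v3",
    code_of_conduct_id="customer-service-ai-code-2026",  # hypothetical
)
checker.assess_requirement(
    ChapterIIIRequirement.RECORD_KEEPING,
    commitment_text="Inputs/outputs logged and retained for 365 days",
    implemented=True,
    evidence="runbooks/logging.md",
)
checker.assess_requirement(
    ChapterIIIRequirement.HUMAN_OVERSIGHT,
    commitment_text="Human review gates low-confidence outputs",
    implemented=False,
    partial_gaps=["Override events not yet logged"],
)
print(checker.generate_adherence_report()["summary"])
# {'total_requirements': 2, 'implemented': 1, 'partial': 1, ...}
```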
Infrastructure and the CLOUD Act Problem for Codes of Conduct
Article 95 codes of conduct frequently include provisions on data handling jurisdiction and infrastructure sovereignty. This is not mandatory under the Art.95 text itself, but it appears in most drafted codes because:
- Art.10 (data governance) requires training data to be processed lawfully under GDPR — which implies EU jurisdiction for personal data used in training
- Art.12 (record-keeping) requires logs to be accessible to national authorities — which implies they cannot be hidden behind US cloud compellability
- Art.11 (technical documentation) and Art.13 (transparency) materials must be available to market surveillance authorities (MSAs) on request
The CLOUD Act problem emerges when a developer signs a code of conduct that includes:
"Provider commits to maintaining all AI system logs and technical documentation within EU legal jurisdiction and to making such records available to competent EU authorities upon lawful request."
If those logs and documentation are stored on AWS, Azure, or Google Cloud's EU regions, the CLOUD Act (18 U.S.C. §2713) allows US authorities to compel disclosure of those records to US agencies — regardless of where the servers are physically located. This creates a structural conflict:
- Code of conduct commitment: Records accessible only to EU authorities under EU legal process
- CLOUD Act reality: Records potentially accessible to US DOJ under US legal process without EU legal ground
This is not a theoretical risk. The DOJ has used CLOUD Act authority to compel European data from US cloud providers' EU-based servers in multiple cases since 2020. If a market surveillance authority or affected person discovers that your "EU jurisdiction" commitment is undermined by CLOUD Act exposure, the gap between stated code adherence and actual data handling creates two problems:
- Breach of code of conduct commitments (contract liability if the code is referenced in customer agreements)
- Potentially misleading information to authorities (Art.99 Tier 3 fine exposure)
The single legal order solution: EU-incorporated cloud infrastructure that is not subject to the CLOUD Act (because the provider has no US nexus) eliminates this structural gap. A code of conduct commitment to the EU legal order, implemented on an EU-native PaaS, keeps records within a single legal order with no parallel compellability.
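As a deployment-time guardrail (not a legal determination), a build pipeline can refuse log-store configurations that fall outside the committed jurisdiction; the approved-provider registry below is a hypothetical list your legal team would maintain:

```python
# Illustrative deployment guardrail, not a legal determination. The
# approved-provider registry is a hypothetical list maintained by legal.
APPROVED_EU_ONLY_PROVIDERS = {"eu-native-paas-example"}

def validate_log_jurisdiction(provider: str, region: str) -> None:
    """Reject log-store configs that break the EU-jurisdiction commitment."""
    if region != "EU":
        raise ValueError(f"Logs must stay in EU regions, got {region!r}")
    if provider not in APPROVED_EU_ONLY_PROVIDERS:
        raise ValueError(
            f"{provider!r} is not on the EU-only provider list; "
            "possible CLOUD Act exposure despite the EU region"
        )
```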
GPAI Models and Art.95(5)
GPAI model providers below the systemic-risk threshold (10²⁵ FLOPs) are not subject to the mandatory Art.56 codes of practice. Article 95(5) specifically invites them to develop voluntary codes of conduct as an alternative.
For GPAI model providers who want to signal compliance alignment:
| Art.56 mandatory element | Art.95(5) voluntary equivalent |
|---|---|
| Technical documentation (Art.53) | Voluntary disclosure of model card / technical report |
| Copyright compliance (Art.53(1)(d)) | Voluntary copyright policy publication |
| Transparency to deployers (Art.53(1)(b)) | Voluntary API documentation standards |
| Incident reporting for systemic risk (Art.55) | Voluntary incident disclosure program |
Note that frontier model providers (≥10²⁵ FLOPs) cannot substitute Art.56 compliance with an Art.95(5) voluntary code — the mandatory framework applies regardless.
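For a provider below the threshold, the voluntary equivalents in the table could be published as a simple machine-readable disclosure; every key, URL, and value below is hypothetical, since Art.95(5) prescribes no particular format:

```python
# Sketch of a voluntary disclosure mirroring the table above; all values
# are hypothetical placeholders.
voluntary_model_card = {
    "model_id": "example-gpai-7b",
    "technical_report_url": "https://example.com/tech-report",
    "copyright_policy_url": "https://example.com/copyright-policy",
    "deployer_documentation_url": "https://example.com/api-docs",
    "incident_disclosure_contact": "ai-incidents@example.com",
    "training_compute_flops": "below 1e25",  # under the systemic-risk threshold
}
```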
Industry-Specific Code Considerations
Financial Services (Non-DORA-Scoped AI)
Financial institutions using AI for customer-facing purposes (product recommendations, customer segmentation, churn prediction) that do not fall under DORA's direct AI risk requirements can use Art.95 codes to:
- Demonstrate alignment with the ECB's supervisory expectations for AI governance
- Satisfy internal risk committees that the AI deployment meets governance standards equivalent to high-risk requirements
- Support claims under the AI Liability Directive by showing documented risk management processes
Healthcare AI Below Annex III
Medical software that assists (but does not make) clinical decisions — below the Annex III threshold for mandatory high-risk treatment — represents a significant market where Art.95 codes are commercially valuable. Hospitals and healthcare systems increasingly require suppliers to demonstrate voluntary compliance before procurement.
Public Sector AI
Government AI systems in member states are subject to national procurement rules that increasingly reference Art.95 codes as evaluation criteria. Local governments building AI-assisted administration tools (document processing, citizen service routing) can use codes to demonstrate accountability without falling under the mandatory high-risk framework.
Timeline: When Do Codes Matter?
| Date | Development |
|---|---|
| August 2024 | EU AI Act enters force — voluntary code framework becomes available |
| 2025 (ongoing) | AI Office facilitates sector code drafting; first codes published in registry |
| August 2, 2026 | Full mandatory compliance for high-risk AI — voluntary codes gain commercial importance as non-high-risk developers need to differentiate |
| 2027 | Art.85 Commission review — voluntary code adherence data feeds into review of whether any Art.95 sectors should be moved to mandatory Annex III |
| 2028+ | Established codes gain weight in insurance underwriting, enterprise procurement, cross-border regulatory dialogue |
The Art.85 review is critical: the Commission must explicitly consider whether the voluntary framework under Art.95 is working or whether certain non-high-risk AI categories should be reclassified as high-risk. Demonstrated industry-wide code adherence is the argument against reclassification.
30-Item Developer Checklist: Art.95 Code of Conduct Readiness
System classification (before joining a code)
- 1. Confirmed that your AI system is NOT listed in Annex III (high-risk)
- 2. Identified which Chapter III, Section 2 and 3 requirements the code must address
- 3. Checked the AI Office registry for existing sector codes applicable to your system's purpose
- 4. Reviewed the minimum element requirements under Art.95(3)
- 5. Identified which requirements you can implement in full vs. proportionate equivalent
Risk management (Art.9 equivalent)
- 6. Pre-deployment risk assessment documented for each covered AI system
- 7. Risk assessment records retained for the code-specified retention period
- 8. Material change process defined (triggers for re-assessment)
- 9. Residual risk documented and disclosed to operators per the code terms
Data governance (Art.10 equivalent)
- 10. Training data bias screening documented
- 11. Data quality criteria defined and logged
- 12. Process for handling post-deployment data issues documented
Technical documentation (Art.11 equivalent)
- 13. System description current and accessible
- 14. Performance metrics documented for intended use case
- 15. Significant limitations and known failure modes documented
- 16. Documentation accessible to AI Office upon request
Logging (Art.12 equivalent)
- 17. Input/output logs retained for code-specified period
- 18. Log access controls documented and enforced
- 19. Log format allows reconstruction of individual decisions on request
- 20. Log jurisdiction: confirmed within EU legal order (no CLOUD Act exposure)
Transparency (Art.13 equivalent)
- 21. Users informed that an AI system is involved in their interaction
- 22. AI system's purpose communicated
- 23. Known limitations communicated to operators
- 24. Significant accuracy characteristics disclosed
Human oversight (Art.14 equivalent)
- 25. Human review mechanisms documented where applicable
- 26. Override mechanism available and tested
- 27. Override events logged
Code adherence administration
- 28. Internal adherence tracking process established (annual review minimum)
- 29. Contract representations updated to accurately reflect code commitments
- 30. Breach detection and response process defined (what happens if a commitment is violated)
The Market Signal Value of Art.95 Compliance
For most AI developers building general-purpose or vertical SaaS products, Art.95 codes of conduct represent the most efficient path to credible EU AI Act compliance signaling in 2026:
- Cheaper than full Annex III compliance: No conformity assessment, no notified body, no CE marking — proportionate commitments with flexible implementation timelines
- More credible than unilateral claims: AI Office registration and sector code association provide third-party validation
- Future-proof: Art.85 review data favors sectors with active code adherence; demonstrating compliance now reduces reclassification risk
- Commercially valuable: The procurement, insurance, and enterprise contract channels that require voluntary compliance are growing faster than the mandatory compliance market
The developers who sign codes of conduct in 2026 will be the ones with clean audit trails when the 2027 Commission review assesses whether voluntary frameworks are sufficient — or whether mandatory reclassification is warranted.
EU-native infrastructure for AI compliance commitments. Single legal order, no CLOUD Act exposure. Deploy on sota.io
See Also
- EU AI Act Art.9: Risk Management Systems — Developer Guide
- EU AI Act Art.11: Technical Documentation — Developer Guide
- EU AI Act Art.13: Transparency for High-Risk AI — Developer Guide
- EU AI Act Art.85: The 2027 Review Clause — What Developers Need to Know
- EU AI Act Art.56: Codes of Practice for GPAI Models — Developer Guide