EU AI Act Art.11 Technical Documentation: Annex IV Deep Dive Developer Guide
EU AI Act Article 11 is the cornerstone of high-risk AI compliance: before you place a high-risk AI system on the EU market, you must assemble technical documentation covering 8 mandatory Annex IV sections. This documentation must be kept up to date, revisited on every substantial modification, and retained for 10 years after the system is placed on the market or put into service (Art.18(1)).
This guide covers every Annex IV section in operational depth, the cross-article documentation matrix (Art.11 × Art.10 × Art.9 × Art.12), conformity assessment evidence requirements, and how EU-native infrastructure simplifies single-jurisdiction technical documentation.
What Art.11 Actually Requires
Article 11(1) — Pre-market obligation: Before placing a high-risk AI system on the EU market or putting it into service, the provider must draw up technical documentation "in accordance with Annex IV."
Article 11(1), third subparagraph — SME adaptation: SMEs, including start-ups, may provide the Annex IV elements in a simplified manner, without prejudice to the obligation to demonstrate conformity. The Commission is to publish a simplified technical documentation form targeted at micro and small enterprises.
Article 18(1) — Retention: Technical documentation and conformity assessment documentation must be kept at the disposal of national competent authorities for 10 years after the system is placed on the market or put into service. This is the longest AI Act retention obligation.
Article 11(1) — Updates: Documentation must be kept up to date. A "substantial modification" (defined in Art.3(23)) triggers a new conformity assessment under Art.43(4); minor updates, bug fixes, and performance patches below that threshold require an amendment record but not a full re-assessment.
Art.11 is enforcement-critical: Market surveillance authorities (MSAs) under Art.74 can demand access to documentation at any time. A provider unable to produce current, complete Annex IV documentation faces:
- Fines up to €15 million or 3% of global annual turnover, whichever is higher (Art.99(4)), for non-compliance with provider obligations
- Withdrawal from the EU market
- Suspension of conformity assessment certificates
Annex IV: The 8-Section Structure
Annex IV defines the mandatory minimum content in an ordered sequence; this guide groups it into sections 1–8. No section is optional, and each has specific sub-elements that the Commission is elaborating through guidance and harmonized standards.
Section 1: General Description of the AI System
Required content:
- Intended purpose — the specific tasks the system is designed to perform (Art.9(2)(a) risk assessment feeds this)
- Version information — software version, hardware configuration where applicable
- Deployment context — the specific operational environment (physical, social, technical)
- Human-AI interaction — how users interact with the system, decision-making role of AI vs. human
- Foreseeable users — user populations including vulnerable groups (children, elderly, disabled)
- AI system architecture — high-level description of how components interact
- Intended output — the nature of AI output (decision, recommendation, classification, generation)
Section 1 links directly to Art.13 (Transparency) requirements: the system's purpose, capabilities, and limitations must be describable by the provider in plain language sufficient for deployers and affected users.
# Section 1 documentation template structure
section_1 = {
"system_name": "EmploymentScreeningAI v2.1",
"intended_purpose": "Automated CV relevance scoring for initial candidate screening in recruitment pipelines",
"version": "2.1.3",
"deployment_context": {
"physical": "Cloud-based SaaS, accessed via browser/API",
"social": "HR teams at companies with 50+ employees",
"technical": "REST API integration with ATS systems (Workday, Lever, Greenhouse)"
},
"annex_iii_category": "Category 4 — Employment, workers management, access to self-employment",
"human_ai_interaction": "AI generates relevance score 0-100. Human HR reviewer makes final decision. AI recommendation is not binding.",
"intended_output": "Numerical relevance score + ranked candidate list + explanation of scoring factors",
"foreseeable_users": ["HR professionals", "Recruiters", "Hiring managers"],
"vulnerable_groups_considered": True,
"documentation_date": "2026-04-10",
"documentation_owner": "ComplianceTeam@example.com"
}
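Before filing, a record like the one above can be machine-checked for missing fields. A minimal sketch — the field list mirrors this guide's template, not an official Annex IV schema:

```python
# Fields this guide's Section 1 template treats as mandatory (an editorial
# convention, not an official Annex IV field list).
REQUIRED_SECTION_1_FIELDS = {
    "system_name", "intended_purpose", "version", "deployment_context",
    "annex_iii_category", "human_ai_interaction", "intended_output",
    "foreseeable_users", "documentation_date", "documentation_owner",
}

def missing_section_1_fields(record: dict) -> set:
    """Return the required Section 1 fields absent from a documentation record."""
    return REQUIRED_SECTION_1_FIELDS - set(record)
```

An empty result means the record is structurally complete; the quality of each field's content still needs human review.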
Section 2: Detailed Description of Elements and Development Process
This is the technical architecture section — it goes deep into how the system was built:
- Development methodology — agile, waterfall, MLOps pipeline description
- Training approach — supervised/unsupervised/reinforcement learning; transfer learning if applicable
- Model architecture — neural network type, parameter count, layer structure (without revealing trade secrets)
- Computational resources used — training compute, inference hardware requirements
- Key design choices — decisions made during development that affect system behavior (Art.10(2)(a) feeds this)
- External tools and pre-trained models — third-party model components, libraries, APIs
- Human involvement in development — HITL during training, annotation workforce, review processes
The CLOUD Act intersection is most acute here: if your training compute or inference infrastructure sits on US-jurisdiction cloud providers, you must document the potential for US legal process access to model weights, training logs, and inference records. EU-native providers (including sota.io) operate under a single EU legal regime — no parallel CLOUD Act jurisdiction applies.
# Section 2: Key design choices documentation (links to Art.10(2)(a))
design_choices = {
"feature_selection": {
"choice": "Exclude demographic proxies (name, address, graduation year)",
"rationale": "Art.10(2)(f)-(g) bias prevention — avoid indirect age/ethnicity discrimination",
"documented_alternatives_considered": ["include all CV fields", "use embedding-based anonymization"],
"selected_because": "Zero-proxy approach provides strongest defensibility in the Art.10(2)(f) bias examination"
},
"model_architecture": {
"type": "BERT-base fine-tuned",
"parameters": "110M",
"inference_latency_p99_ms": 250,
"hardware": "GPU A10G (EU-west datacenter)"
},
"training_compute": {
"jurisdiction": "EU-west (Frankfurt)",
"cloud_act_exposure": "None — EU-native provider, single EU legal regime",
"training_hours": 48,
"gpu_type": "A100"
}
}
Section 3: Information on Training, Validation, and Test Datasets
Section 3 is the direct output of Art.10 compliance — the training data governance documentation required under Art.10(2)-(5) must be captured here in structured form.
Required documentation:
- Dataset descriptions — source, size, composition, collection methodology
- Data preparation steps — cleaning, labeling, preprocessing
- Representativeness assessment — how the training data covers the intended deployment population (Art.10(3))
- Bias examination results — the Art.10(2)(f)-(g) bias examination must be documented with findings and mitigation actions
- Data gap documentation — Art.10(2)(h) data gaps and shortcomings, including those touching EU Charter Art.21 protected characteristics
- Special category data handling — if Art.10(5) exception was used, the four cumulative conditions must be documented
- Train/validation/test split — ratios and methodology
- Dataset versions — version control for reproducibility
# Section 3: Bias examination documentation (Art.10(2)(f)-(g))
bias_examination = {
"method": "Disparate Impact Ratio (DIR)",
"threshold": 0.80,
"results": {
"gender": {"dir": 0.94, "compliant": True, "finding": "No significant bias detected"},
"age_proxy": {"dir": 0.87, "compliant": True, "finding": "Slight but within threshold — monitored"},
"nationality_proxy": {"dir": 0.79, "compliant": False, "finding": "BELOW threshold — mitigation applied"}
},
"mitigation_applied": {
"nationality_proxy": "Re-weighted training samples for underrepresented EU nationalities; re-evaluated DIR=0.83 post-mitigation"
},
"equalized_odds_check": {
"method": "True Positive Rate parity across EU Charter Art.21 groups",
"tpr_max_delta": 0.04,
"assessment": "Within acceptable range"
},
"examination_date": "2026-03-15",
"examiner": "BiasAuditTeam@example.com",
"re_examination_trigger": "Next substantial modification OR quarterly review"
}
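The DIR values in the record above are ratios of selection rates. A sketch of the computation, with illustrative group names (the 0.80 threshold is the conventional four-fifths rule):

```python
def disparate_impact_ratio(selected: dict, total: dict, reference: str) -> dict:
    """Selection rate of each group divided by the reference group's rate.

    A ratio below 0.80 is the conventional four-fifths-rule flag used in
    the bias examination above.
    """
    ref_rate = selected[reference] / total[reference]
    return {
        group: (selected[group] / total[group]) / ref_rate
        for group in total if group != reference
    }
```

With 50 of 100 men and 47 of 100 women selected, the ratio for women is 0.94 — the gender figure reported in the record above.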
Section 4: Description of the Development Process
Section 4 documents how the system was built, tested, and verified — the quality management system (QMS) evidence for conformity assessment:
- Development phases — data collection, preprocessing, model training, evaluation, integration, testing
- Testing methodology — unit testing, integration testing, performance testing, adversarial testing
- Evaluation metrics — accuracy, precision/recall, F1, AUC-ROC, latency benchmarks
- Human review processes — who reviewed what at each stage
- Version control — Git repository references (without IP disclosure)
- Change management — how modifications are tracked and evaluated against the substantial-modification threshold
- Internal audit trail — who approved what and when
This section supplies the conformity assessment — whether notified-body review or internal control — with the quality management evidence required under Art.17 (Quality Management System) and Art.9 (Risk Management System).
# Section 4: Development process record
development_process = {
"phases": [
{"phase": "Data Collection", "start": "2025-09-01", "end": "2025-10-15", "lead": "DataTeam"},
{"phase": "Model Training v1", "start": "2025-10-16", "end": "2025-11-30", "lead": "MLTeam"},
{"phase": "Bias Examination", "start": "2025-12-01", "end": "2025-12-15", "lead": "BiasAuditTeam"},
{"phase": "Integration Testing", "start": "2026-01-01", "end": "2026-02-28", "lead": "QATeam"},
{"phase": "Conformity Assessment", "start": "2026-03-01", "end": "2026-03-31", "lead": "ComplianceTeam"}
],
"test_coverage": {
"unit_tests": 847,
"integration_tests": 124,
"adversarial_tests": 43,
"performance_benchmarks": 18
},
"evaluation_metrics": {
"precision": 0.89,
"recall": 0.84,
"f1": 0.865,
"latency_p99_ms": 245
},
"version_control": "git.internal.example.com/employment-ai (hash: abc1234)",
"approval_records": {
"bias_examination": {"approver": "ChiefAIEthicsOfficer", "date": "2025-12-16"},
"conformity": {"approver": "LegalCompliance", "date": "2026-03-31"}
}
}
Section 5: Monitoring, Functioning, and Control
Section 5 covers post-development monitoring capabilities — specifically:
- Monitoring metrics — what operational metrics are tracked to detect performance degradation
- Logging implementation — the Art.12 logging system that generates the logs required for MSA access
- Human oversight mechanisms — Art.14 human oversight controls, override capabilities
- Failure modes and fallback — what happens when the system fails or returns low-confidence outputs
- Accuracy thresholds — below what confidence level does the system defer to human review
- Drift detection — how concept drift or data distribution shift is detected and flagged
Art.11 × Art.12 intersection: Section 5 must be consistent with your Art.12 logging implementation. Art.19 obliges providers to keep the automatically generated logs for a period appropriate to the system's intended purpose — at least six months — so Section 5 must describe both the logging system that generates those logs and the retention period you commit to.
# Section 5: Art.12 logging system documentation
logging_system = {
"log_events": [
"input_received",
"score_generated",
"human_decision",
"override_event",
"confidence_below_threshold",
"system_error"
],
"log_format": "structured JSON with timestamps, system version, input hash, output, confidence",
"retention_policy": {
"duration_years": 10,
"legal_basis": "Art.12 + Art.19 EU AI Act; duration aligned with Art.18(1)",
"storage": "EU-native encrypted object storage (Frankfurt region)",
"access_control": "RBAC — Compliance team + MSA on demand"
},
"human_oversight": {
"override_mechanism": "HR reviewer can override any score with documented reason",
"low_confidence_threshold": 0.60,
"action_below_threshold": "Flag for mandatory human review, score displayed as 'Uncertain'"
},
"drift_detection": {
"method": "Weekly distribution monitoring on input features",
"alert_threshold": "KL-divergence > 0.15 triggers compliance review",
"responsible": "MLOps team"
}
}
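The KL-divergence trigger in the record above can be sketched as follows, assuming weekly input features are binned into discrete histograms (function names and binning are illustrative):

```python
import math

def kl_divergence(p: list, q: list, eps: float = 1e-9) -> float:
    """KL(P || Q) for two discrete distributions; eps guards empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def drift_alert(baseline: list, current: list, threshold: float = 0.15) -> bool:
    """True when this week's input distribution drifts past the review threshold."""
    return kl_divergence(current, baseline) > threshold
```

An identical distribution scores zero; a week in which one bin swings from 25% to 70% of inputs comfortably exceeds the 0.15 threshold and would trigger the compliance review.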
Section 6: Post-Market Monitoring System
Section 6 documents the post-market monitoring (PMM) system required under Art.72:
- PMM plan — how performance is tracked after deployment
- Serious incident reporting — procedures for Art.73 serious incident notification to MSAs
- Near-miss recording — internal log of near-miss events that did not reach serious incident threshold
- Feedback loops — how user/operator feedback is collected and fed into the risk management cycle
- Periodic review schedule — when technical documentation is reviewed and updated
- Substantial modification trigger — criteria that trigger a new conformity assessment under Art.43(4)
This section closes the loop from Section 5 (operational monitoring) to Art.9 (risk management system), creating the continuous improvement evidence trail that auditors look for.
# Section 6: Post-market monitoring plan
post_market_monitoring = {
"monitoring_frequency": "Weekly automated + quarterly manual review",
"key_performance_indicators": [
"Score distribution drift (input vs. baseline)",
"Override rate (target: <5% of decisions overridden — a persistently high override rate signals that operators do not trust the system)",
"Complaint rate from candidates",
"Accuracy on held-out labeled test set (monthly re-evaluation)"
],
"serious_incident_procedure": {
"definition": "AI output causes discriminatory hiring decision with documented harm",
"reporting_timeline": "Immediately, and no later than 15 days after awareness, to the national MSA (Art.73(2)); 10 days in the event of death",
"internal_escalation": "24h to CAEO + Legal",
"msa_contact": "National Market Surveillance Authority"
},
"periodic_review": {
"documentation_review": "Annually",
"bias_re_examination": "Quarterly + on substantial modification",
"risk_management_update": "After every significant incident or near-miss"
},
"substantial_modification_criteria": [
"Change in intended purpose",
"Change in Annex III classification",
"Performance degradation >10% on evaluation metrics",
"New training dataset replacing >30% of original data",
"Architecture change affecting output generation"
]
}
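The override-rate KPI above is straightforward to compute from Art.12 decision records; a sketch assuming each record carries an `overridden` flag (this guide's naming, not a mandated schema):

```python
def override_rate(decisions: list) -> float:
    """Fraction of decisions where the human reviewer overrode the AI score."""
    if not decisions:
        return 0.0
    return sum(1 for d in decisions if d.get("overridden")) / len(decisions)

def override_kpi_breached(decisions: list, target: float = 0.05) -> bool:
    """True when the override rate exceeds the PMM target — a distrust signal."""
    return override_rate(decisions) > target
```

One override in ten decisions yields a rate of 0.10, breaching the 5% target and warranting investigation in the quarterly review.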
Section 7: Declaration of Conformity
Section 7 is the declaration of conformity reference — providers must include or reference the EU Declaration of Conformity required under Art.47.
Required elements in the declaration:
- Provider identity and contact details
- Statement that the system conforms to applicable AI Act requirements
- Reference to the applicable harmonized standards used
- Identification of the notified body (if applicable, for Annex I regulated product route)
- Date and place of declaration
- Signature of authorized representative
The declaration of conformity is a legally binding document — a false declaration can expose the provider to penalties under national law, including criminal liability in some Member States.
EU Declaration of Conformity (Art.47 EU AI Act)
System: EmploymentScreeningAI v2.1
Provider: Example AI GmbH, Berlin, Germany
EU Representative (if non-EU): N/A (EU-established provider)
We, Example AI GmbH, declare under sole responsibility that the above-
described AI system conforms to Regulation (EU) 2024/1689 (EU AI Act)
applicable requirements, specifically:
- Article 9: Risk Management System
- Article 10: Data and Data Governance
- Article 11: Technical Documentation
- Article 12: Record-Keeping
- Article 13: Transparency and Provision of Information
- Article 14: Human Oversight
- Article 15: Accuracy, Robustness, and Cybersecurity
Conformity Assessment Route: Self-assessment (Art.43(2))
Harmonized Standards Applied: EN ISO/IEC 42001:2023 (AI Management Systems)
Notified Body: N/A (self-assessment route)
Date: 2026-04-01
Place: Berlin, Germany
Authorized Representative: [Name, Title]
Signature: [Signature]
Section 8: Standards, Specifications, and Technical Solutions
Section 8 documents the standards and specifications used to demonstrate conformity:
- Harmonized standards — published in the EU Official Journal as providing presumption of conformity
- Common specifications — Commission implementing acts where harmonized standards are unavailable
- International standards — ISO/IEC standards applied (ISO 42001, ISO 27001, etc.)
- Technical specifications — industry frameworks, CEN/CENELEC guidelines
- Deviation records — where a standard was partially applied, document which parts were applied and why deviations were justified
# Section 8: Standards applied
standards_applied = {
"harmonized_standards": [
{
"standard": "EN ISO/IEC 42001:2023",
"scope": "AI Management Systems",
"application": "Full — QMS framework for AI system development and operation",
"oj_reference": "OJ C 2025/XXXX (pending publication)"
}
],
"international_standards": [
{"standard": "ISO/IEC 27001:2022", "scope": "Information Security Management"},
{"standard": "ISO/IEC 27701:2019", "scope": "Privacy Information Management (GDPR alignment)"},
{"standard": "ISO/IEC 23894:2023", "scope": "AI Risk Management — guidance"}
],
"technical_frameworks": [
{"framework": "NIST AI RMF 1.0", "application": "Supplementary risk management structure"},
{"framework": "EU AI HLEG Trustworthy AI Guidelines", "application": "Design principle reference"}
],
"deviations": [
{
"standard": "EN ISO/IEC 42001:2023 Section 9.1",
"deviation": "Monitoring frequency reduced from monthly to quarterly for low-risk operational metrics",
"justification": "System deployed in low-volume environment (<500 decisions/month)"
}
]
}
The Art.11 × Art.10 × Art.9 × Art.12 Documentation Matrix
These four articles create an interlocking documentation system — each feeds specific sections of Annex IV:
| Annex IV Section | Primary Article | Supporting Articles |
|---|---|---|
| Section 1 (General Description) | Art.13 (Transparency) | Art.9(2)(a) (intended purpose in risk assessment) |
| Section 2 (Development Process) | Art.10(2)(a) (design choices) | Art.15 (accuracy/robustness metrics) |
| Section 3 (Training Data) | Art.10 (data governance) | Art.10(2)(f)-(g) bias examination, Art.10(5) special data |
| Section 4 (Development QMS) | Art.9 (risk management) | Art.9(6)-(8) testing requirements |
| Section 5 (Monitoring/Control) | Art.12 (logging/record-keeping) | Art.14 (human oversight mechanisms) |
| Section 6 (Post-Market) | Art.72 (post-market monitoring) | Art.73 (serious incident reporting) |
| Section 7 (Declaration) | Art.47 (conformity declaration) | Art.43 (conformity assessment procedure) |
| Section 8 (Standards) | Art.40 (harmonized standards) | Art.41 (common specifications) |
Practical implication: You cannot complete Annex IV documentation independently of your Art.10 data governance work (Section 3), your Art.9 risk management system (Section 4), your Art.12 logging implementation (Section 5), or your Art.72 PMM plan (Section 6). These must be built concurrently, not sequentially.
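That concurrency requirement can be enforced mechanically. A minimal sketch cross-checking the Section 5 monitoring record against the Art.12 retention record — field names follow this guide's earlier examples, not an official schema:

```python
def consistency_issues(section_5: dict, art_12_record: dict) -> list:
    """Return human-readable inconsistencies between the Section 5 record
    and the Art.12 operational-log retention record."""
    issues = []
    if section_5["retention_policy"]["duration_years"] != art_12_record["duration_years"]:
        issues.append("Log retention duration differs between Section 5 and the Art.12 record")
    if "override_event" not in section_5["log_events"]:
        issues.append("Art.14 override events are not captured in the Section 5 log schema")
    return issues
```

Running a check like this in CI on every documentation update catches drift between the interlocking records before an auditor does.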
Art.18(1): The 10-Year Retention Obligation
The 10-year retention period is the longest in the EU AI Act and creates significant infrastructure requirements:
What must be retained:
- All 8 Annex IV sections in their versions at every update
- Art.12 logs for the operational period
- Conformity assessment records
- Substantial modification records with justification
- Internal audit records
- Bias examination results and mitigation actions
GDPR × Art.18(1) conflict: The AI Act requires 10-year retention of documentation that may contain personal data (training dataset documentation, bias examination records, operational logs). GDPR Art.5(1)(e) requires storage limitation — personal data must not be kept longer than necessary.
Resolution: Technical documentation should use anonymized/aggregated references to personal data wherever possible. Operational logs (Art.12) must retain personal data only to the extent technically necessary, with minimization applied. Bias examination records typically contain aggregated statistics (DIR scores, TPR parity) rather than individual records — these are GDPR-compatible for 10-year retention.
# Art.18(1) compliant retention policy
retention_policy = {
"annex_iv_sections": {
"duration": "10 years post last-unit-on-market",
"format": "Immutable versioned storage (append-only)",
"personal_data": "Minimized — aggregated statistics only, no individual records",
"storage_location": "EU-native encrypted storage, Frankfurt region",
"access_log": True,
"cloud_act_exposure": "None — EU-native provider"
},
"art_12_operational_logs": {
"duration": "10 years (exceeds the Art.19 six-month minimum; aligned with Art.18(1))",
"personal_data_minimization": "Input hash only, no raw personal data in logs",
"gdpr_lawful_basis": "Art.6(1)(c) GDPR — legal obligation (EU AI Act Art.12)",
"deletion_schedule": "After 10 years from last system placement on market"
},
"bias_examination_records": {
"duration": "10 years",
"content": "Aggregated statistics (DIR scores, TPR parity) — no individual records",
"gdpr_compatible": True,
"reason": "Statistical aggregates do not constitute personal data under GDPR"
}
}
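Because the retention clock runs from the placing on the market, the earliest permissible deletion date should be recomputed whenever a new deployment ships. A sketch (the helper name is this guide's own):

```python
from datetime import date

def retention_deadline(last_placed_on_market: date, years: int = 10) -> date:
    """Earliest date the documentation may be deleted under the 10-year rule."""
    try:
        return last_placed_on_market.replace(year=last_placed_on_market.year + years)
    except ValueError:
        # 29 February with no leap day in the target year: fall back to 28 Feb
        return last_placed_on_market.replace(year=last_placed_on_market.year + years, day=28)
```

A system last placed on the market on 2026-04-01 must keep its documentation until at least 2036-04-01; each later placement pushes the deadline out again.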
Substantial Modification and Documentation Updates
Art.3(23) defines "substantial modification" as a change to the AI system after its placing on the market or putting into service that is not foreseen or planned in the provider's initial conformity assessment and that affects the system's compliance with the requirements of Chapter III, Section 2, or results in a modification of the intended purpose.
Substantial modifications require:
- Full re-assessment against applicable requirements
- Updated Annex IV documentation in all affected sections
- New or updated conformity assessment (notified body where applicable)
- Updated Declaration of Conformity
Non-substantial modifications (bug fixes, performance patches, security updates not affecting the model logic) require:
- Amendment record in the change management log
- Documented justification for non-substantial classification
- Updated version information in Section 1
# Substantial modification evaluation framework
def evaluate_modification(change_description: dict) -> dict:
"""
Evaluate whether a change constitutes a substantial modification
under Art.3(23) EU AI Act.
"""
triggers = {
"intended_purpose_change": change_description.get("changes_intended_purpose", False),
"annex_iii_category_change": change_description.get("changes_annex_iii_classification", False),
"model_architecture_change": change_description.get("fundamental_architecture_change", False),
"performance_degradation": change_description.get("accuracy_delta_pct", 0) < -10,
"training_data_replacement": change_description.get("training_data_replacement_pct", 0) > 30,
}
is_substantial = any(triggers.values())
return {
"is_substantial_modification": is_substantial,
"triggers": {k: v for k, v in triggers.items() if v},
"required_actions": [
"Full conformity re-assessment",
"Update all affected Annex IV sections",
"Updated Declaration of Conformity",
"Notify notified body (if applicable)"
] if is_substantial else [
"Amendment record in change log",
"Justification for non-substantial classification",
"Version update in Section 1"
]
}
SME and Startup Considerations (Art.11(1))
Art.11(1), third subparagraph, allows SMEs, including start-ups, to provide the Annex IV documentation in a simplified manner — without prejudice to the obligation to demonstrate conformity.
The Commission's SME-specific guidance (still being developed) is expected to clarify:
- Which Annex IV sections can be simplified
- What minimum content satisfies the simplified form
- Whether simplified documentation is accepted for self-assessment conformity
Practical guidance for SMEs:
- Section 1 (General Description) and Section 7 (Declaration) cannot be simplified significantly
- Section 2 (Development Process) can use lighter documentation for simple models (e.g., single fine-tuned pre-trained model)
- Section 3 (Training Data) must still include bias examination results — Art.10(2)(f)-(g) is not waived
- Section 8 (Standards) is simpler if you rely entirely on EN ISO/IEC 42001
Regardless of simplified form, all 8 sections must exist. "Simplified" means less extensive, not absent.
The CLOUD Act and Single-Jurisdiction Documentation
A critical but rarely discussed Art.11 issue: if your development infrastructure, training pipeline, inference infrastructure, or documentation storage sits on US-jurisdiction cloud providers (AWS, Azure, GCP, Oracle Cloud), US legal process under the CLOUD Act can compel production of your Annex IV documentation.
This creates a dual-jurisdiction access problem for EU high-risk AI providers:
- EU MSA access (Art.74): National MSA can demand documentation access under EU law
- US CLOUD Act access: US DOJ can demand the same documentation simultaneously under US law
The conflict is most acute for:
- Training data documentation (Section 3) — may reveal proprietary datasets or personal data used in training
- Development process records (Section 4) — may reveal trade secrets
- Operational logs (Section 5/Art.12) — may contain personal data subject to GDPR
EU-native providers (operating exclusively under EU law, no US parent companies, no US datacenters) eliminate the CLOUD Act dimension entirely — your Annex IV documentation is subject exclusively to EU legal process, with predictable EU GDPR Art.48 restrictions on cross-border transfers.
For sota.io deployments: training pipelines, inference infrastructure, and documentation storage can be hosted in EU-native infrastructure under a single EU legal regime, eliminating the dual-jurisdiction documentation access risk.
Conformity Assessment Route and Annex IV
The conformity assessment route (Art.43) determines how Annex IV documentation is reviewed:
Self-Assessment (Art.43(2)) — available for most Annex III high-risk systems:
- Provider conducts internal conformity assessment using Annex IV documentation
- Technical documentation must be complete and internally consistent
- MSA can demand access for market surveillance
- No notified body involvement (unless harmonized standards not fully applied)
Notified Body Assessment (Art.43(1)) — required for Annex I regulated-product route:
- Notified body receives and reviews Annex IV documentation
- Section 4 (Development Process) and Section 8 (Standards) are scrutinized most heavily
- Bodies may request additional evidence beyond minimum Annex IV requirements
EU Database Registration (Art.49): After conformity assessment, high-risk AI systems must be registered in the EU AI Act database. Registration requires key metadata from Annex IV Sections 1 and 7.
Annex IV Documentation Checklist
Before placing a high-risk AI system on the EU market:
Section 1 — General Description
- Intended purpose documented in operational language
- Annex III category identified and justified
- Human-AI interaction model described
- Foreseeable user populations identified, including vulnerable groups
- System architecture overview complete
Section 2 — Development and Design
- Key design choices documented (links to Art.10(2)(a))
- External pre-trained components declared
- Compute jurisdiction documented (CLOUD Act exposure assessed)
- Model performance benchmarks recorded
Section 3 — Training, Validation, Test Datasets
- Dataset descriptions complete (source, size, composition)
- Representativeness assessment documented (Art.10(3))
- Bias examination performed and documented (Art.10(2)(f)-(g))
- Data gap documentation complete (Art.10(2)(h))
- Art.10(5) exception documented if used (4 conditions)
Section 4 — Development Process
- Development phases documented with dates and owners
- Test results recorded (unit, integration, adversarial)
- Evaluation metrics documented
- Approval records captured
Section 5 — Monitoring and Control
- Art.12 logging system described
- Human oversight mechanisms documented (Art.14)
- Drift detection procedures defined
- Fallback mechanisms documented
Section 6 — Post-Market Monitoring
- PMM plan documented (Art.72)
- Serious incident reporting procedure defined (Art.73)
- Periodic review schedule established
- Substantial modification triggers defined
Section 7 — Declaration of Conformity
- Art.47 Declaration drafted and signed
- Conformity assessment route identified
- Authorized signatory identified
Section 8 — Standards Applied
- All applicable harmonized standards listed
- Common specifications listed (if harmonized standards unavailable)
- Deviations documented with justification
Cross-Article Consistency
- Section 3 consistent with Art.10 data governance records
- Section 5 consistent with Art.12 logging implementation
- Section 4 consistent with Art.9 risk management system
- Section 6 consistent with Art.72/73 monitoring obligations
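The checklist above lends itself to a simple programmatic gate; a sketch with abbreviated item names (this guide's convention, not an official list):

```python
# Illustrative subset of the Annex IV checklist, tracked as booleans.
checklist = {
    "section_1_intended_purpose": True,
    "section_3_bias_examination": True,
    "section_5_art12_logging": False,   # still outstanding
    "section_7_declaration_signed": True,
}

def ready_for_market(items: dict) -> bool:
    """All checklist items must be complete before placing on the market."""
    return all(items.values())

def outstanding(items: dict) -> list:
    """Checklist items still blocking market placement."""
    return sorted(k for k, v in items.items() if not v)
```

With the record above, `ready_for_market` is False until the Section 5 logging item is closed out.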
See Also
- EU AI Act Art.10 Training Data Governance — Section 3 source material
- EU AI Act Art.6 High-Risk AI Systems — classification and Art.9-15 obligations overview
- EU AI Act Art.5 Prohibited AI Practices — prohibited systems that require no documentation
- EU AI Office & GPAI Model Regulation (Art.51-56) — GPAI documentation requirements
- EU Cyber Resilience Act: SBOM and Vulnerability Handling — CRA × AI Act intersection for products with digital elements