EU AI Act Art.27 Fundamental Rights Impact Assessment (FRIA): Developer Guide (High-Risk AI 2026)
EU AI Act Article 27 is the Fundamental Rights Impact Assessment (FRIA) obligation for public-authority deployers of high-risk AI. While most Art.26 obligations apply universally to all deployers, Art.27 targets a specific intersection: deployers that are public authorities (or bodies acting in their name) operating in any of the seven Annex III high-risk categories that directly affect individuals' access to public services, employment, justice, migration status, or civil liberties. In those contexts, Art.27 mandates a structured assessment of risks to fundamental rights, completed before the AI system goes live.
Art.27 is architecturally a downstream extension of Art.26(8). Art.26(8) creates the trigger: when a deployer is a public authority using a high-risk AI system in an Annex III use case, it must conduct a FRIA. Art.27 defines what that FRIA must contain. The two articles together form the public-authority deployer compliance module within Chapter III Section 2.
This guide covers Art.27(1)–(3) in full, the seven mandatory FRIA content elements under Art.27(1)(a)–(g), the seven Annex III categories that trigger the FRIA obligation, the Art.27 intersection matrix with Art.26(8), Art.9, Art.14, Art.22(3), and Art.46, the EU Fundamental Rights Agency (FRA) toolkit, CLOUD Act jurisdiction risk for FRIA documentation stored on US infrastructure, Python implementation for FRIARecord, AffectedGroupsAssessor, and FRIAComplianceChecker, and the 40-item Art.27 compliance checklist.
Art.27 in the High-Risk AI Compliance Chain
Art.27 occupies the public-authority deployer governance layer of Chapter III Section 2:
| Article | Obligation Layer | Primary Addressee |
|---|---|---|
| Art.9 | Risk management system | Provider |
| Art.10 | Training data governance | Provider |
| Art.11 | Technical documentation | Provider |
| Art.12 | Automatic event logging | Provider (system design) |
| Art.13 | Instructions for use | Provider (must produce) |
| Art.14 | Human oversight | Provider (design) + Deployer (implement) |
| Art.17 | Quality management system | Provider |
| Art.20 | Corrective actions | Provider + Deployer (notification) |
| Art.21 | MSA cooperation | All operators including Deployers |
| Art.22 | EU database registration | Provider + Deployer (public authorities, Art.22(3)) |
| Art.26 | Deployer obligations | Deployer |
| Art.27 | Fundamental Rights Impact Assessment | Public-authority Deployer (Annex III contexts) |
Art.27 is not triggered by all high-risk AI deployments — only those where (a) the deployer is a public authority and (b) the deployment falls in specific Annex III categories. For most private-sector deployers, Art.27 does not apply. For public bodies deploying AI in employment, benefits, justice, or law enforcement contexts, it is mandatory before go-live.
Art.27(1): The FRIA Obligation — Who Must Conduct It
Art.27(1) text (summarised): Before deploying a high-risk AI system referred to in points 1, 2, 3, 5, 6, 7, or 8 of Annex III, a deployer that is a body governed by public law, or a private entity providing public services, shall carry out a Fundamental Rights Impact Assessment.
Three threshold conditions for Art.27 applicability:
- Deployer type: The deployer is (a) a public authority, (b) a body governed by public law, or (c) a private entity exercising public functions or providing services in the public interest.
- System classification: The AI system is classified as high-risk under Art.6(2) by reference to Annex III.
- Annex III category: The deployment falls within one of the seven FRIA-triggering categories (see Art.27(2) below).
Private companies are generally excluded unless they exercise public authority or provide services with public-interest obligations (e.g., a private company contracted to deliver public welfare assessments on behalf of a municipality). When in doubt, legal analysis of the deployer's public-law status is required.
Timing: The FRIA must be completed before deployment — not after go-live. Art.27 does not specify a minimum advance period, but the FRIA output must inform the deployment decision and must be available to market surveillance authorities (Art.46) on request from the outset of deployment.
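The three threshold conditions and the pre-deployment timing reduce to a single gate. A minimal sketch, assuming illustrative labels for the deployer types (the function and set names are not statutory terms):

```python
# Annex III points listed in Art.27(2) and the deployer types in Art.27(1).
# The string labels are illustrative, not terms from the Act.
FRIA_TRIGGER_POINTS = {1, 2, 3, 5, 6, 7, 8}
PUBLIC_DEPLOYER_TYPES = {"public_authority", "public_law_body", "public_service_private"}


def fria_required(deployer_type: str, high_risk_under_art6_2: bool,
                  annex_iii_point: int) -> bool:
    """All three Art.27 threshold conditions must hold simultaneously."""
    return (deployer_type in PUBLIC_DEPLOYER_TYPES
            and high_risk_under_art6_2
            and annex_iii_point in FRIA_TRIGGER_POINTS)
```

For example, `fria_required("public_authority", True, 4)` returns `False`, because Annex III category 4 sits outside the Art.27(2) trigger list, while the same deployment in category 7 (law enforcement) would require a FRIA.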
Art.27(2): The Seven FRIA-Triggering Annex III Categories
Art.27(2) identifies seven Annex III categories where the FRIA obligation applies. The table below maps each category to practical examples:
| Annex III Category | Scope | Practical Examples |
|---|---|---|
| 1 — Biometric identification | Real-time and post-hoc biometric identification of natural persons in publicly accessible spaces (with Art.5 exceptions) | Police facial recognition, border biometric matching, stadium access AI |
| 2 — Critical infrastructure | Management and operation of critical infrastructure (energy, water, transport, financial) where AI failures endanger life or access | Grid management AI, water treatment control, autonomous transport systems |
| 3 — Education and vocational training | Determining access to, or evaluating performance in, education and vocational training institutions funded or operated by public bodies | Public school admissions AI, automated exam scoring, vocational training eligibility screening |
| 5 — Employment, workers management, and access to self-employment | Recruitment, selection, task allocation, performance monitoring, promotion, termination, or access to self-employment in public-sector contexts | Civil service recruitment AI, public-sector performance evaluation, automated dismissal recommendation |
| 6 — Access to essential private and public services and benefits | Evaluating eligibility for essential services including public benefits, credit, emergency services, or social assistance administered by public bodies | Benefits eligibility AI (housing, welfare), credit scoring for public housing, emergency response prioritisation |
| 7 — Law enforcement | Risk assessments for individuals, profiling, crime analytics, or evidence evaluation by police or prosecution authorities | Recidivism prediction, crime hotspot AI, policing deployment optimisation |
| 8 — Migration, asylum, and border control | Border management, visa processing, asylum assessment, or irregular migration risk scoring | Automated visa pre-screening, asylum claim risk ranking, border crossing anomaly detection |
Notable exclusion: Annex III Category 4 (health and life sciences AI) does not appear in Art.27(2). Medical AI deployed by public health bodies is governed by separate sector-specific requirements (MDR/IVDR and dedicated health data regulations) and does not trigger the Art.27 FRIA.
Art.27(1)(a)–(g): The Seven Mandatory FRIA Content Elements
Art.27(1) specifies seven elements that the FRIA must contain:
(a) Description of the Intended Use and Deployment Context
The FRIA must describe the specific intended use of the AI system, the organisational and geographic deployment context, the decision-making process the AI system supports or automates, and the relationship between the AI system's outputs and any consequential human decisions.
What to document:
- Which specific Annex III use case applies
- The deployment scope (geographic, temporal, organisational unit)
- Whether the AI system makes autonomous decisions or supports human decision-making
- Which categories of individuals will be subject to the AI system's outputs
(b) Geographic and Temporal Scope
The FRIA must specify the geographic scope (which regions, districts, or jurisdictions are covered) and the temporal scope (deployment start date, planned review cycles, sunset date if applicable).
Why this matters: The CLOUD Act jurisdiction analysis (see below) depends on where FRIA documentation is stored. The temporal scope matters because Art.27(3) requires FRIA updates when the risk profile changes substantially.
(c) Categories of Affected Persons and Elevated Risk Groups
The FRIA must identify all categories of natural persons who will be directly or indirectly affected by the AI system, with specific attention to groups that face elevated rights risks:
- Elevated risk groups under EU fundamental rights law: Minors, persons with disabilities, ethnic minorities, asylum seekers, elderly persons, persons from low socio-economic backgrounds, LGBTQ+ individuals, and other protected characteristics under the EU Charter of Fundamental Rights (CFR).
- Intersectionality analysis: A single affected person may belong to multiple elevated-risk groups, compounding rights exposure.
- Indirect affected persons: Individuals affected by decisions made about others in the same household or community (e.g., family members of persons assessed by welfare AI).
(d) Fundamental Rights Assessment
The core of the FRIA: a structured assessment of risks to the fundamental rights guaranteed by the EU Charter of Fundamental Rights. The assessment must be grounded in the Art.9 risk management documentation provided by the AI system provider.
Charter rights typically at risk in Annex III deployments:
| Charter Article | Right | At-Risk Annex III Context |
|---|---|---|
| Art.1 | Human dignity | Biometric surveillance, automated dismissal |
| Art.7 | Respect for private life | Behavioural profiling, continuous monitoring |
| Art.8 | Personal data protection | Processing of sensitive data categories |
| Art.21 | Non-discrimination | Algorithmic bias in hiring, benefits, sentencing |
| Art.41 | Right to good administration | Automated decisions without explanation |
| Art.47 | Right to effective remedy | Decisions without human review availability |
| Art.48 | Presumption of innocence | Law enforcement risk scoring |
The assessment must identify which specific rights are at risk, the severity and likelihood of rights violations for each affected group, and whether existing technical or organisational measures adequately address each identified risk.
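The severity and likelihood assessment can be operationalised as a normalised composite score. A sketch, assuming a 1–4 ordinal scale (the Act prescribes no numeric scheme, so the scale is an editorial assumption):

```python
# Assumed 1-4 ordinal scale for severity and likelihood; the AI Act does
# not mandate any numeric scoring scheme.
SCALE = {"low": 1, "medium": 2, "high": 3, "critical": 4}


def rights_risk_score(severity: str, likelihood: str) -> float:
    """Normalised composite score in [0, 1]: (severity x likelihood) / 16."""
    return round(SCALE[severity] * SCALE[likelihood] / 16.0, 3)
```

A high-severity, medium-likelihood risk scores `rights_risk_score("high", "medium")` = 0.375; a critical/critical pairing scores 1.0 and should block deployment pending mitigation.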
(e) Mitigation Measures
For each identified fundamental rights risk, the FRIA must document the mitigation measures already implemented and those planned before deployment. Mitigation measures must be proportionate to the identified risk severity.
Categories of mitigation measures:
- Technical: Bias testing across demographic groups, fairness metrics monitoring, explainability mechanisms, output confidence thresholds
- Procedural: Human review requirements for high-stakes decisions, appeals processes, mandatory human override capabilities
- Governance: Internal accountability structures, designated AI system responsible officer, periodic FRIA reviews
- Legal: GDPR Art.22 compliance for automated decisions, data minimisation, purpose limitation
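The four mitigation categories above can be organised as a lookup catalogue so that each identified risk can be matched to candidate measures. The entries below simply restate the examples from the list; the identifier names are illustrative:

```python
# Catalogue keyed by mitigation category; entries mirror the examples above.
MITIGATION_CATALOGUE = {
    "technical": ["bias_testing", "fairness_metrics_monitoring",
                  "explainability", "confidence_thresholds"],
    "procedural": ["human_review_high_stakes", "appeals_process", "human_override"],
    "governance": ["accountability_structure", "responsible_officer",
                   "periodic_fria_review"],
    "legal": ["gdpr_art22_compliance", "data_minimisation", "purpose_limitation"],
}


def planned_mitigations(categories: list[str]) -> list[str]:
    """Flatten the candidate measures for the selected categories."""
    return [m for c in categories for m in MITIGATION_CATALOGUE.get(c, [])]
```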
(f) Human Oversight Measures
The FRIA must specify the human oversight mechanisms that will govern the AI system's operation. This connects directly to Art.14 (human oversight obligations on providers and deployers) and Art.26(4) (deployer monitoring obligations).
Minimum human oversight specification in a FRIA:
- Who has authority to override AI system outputs (role and organisation)
- What triggers a mandatory human review (confidence threshold, flagged demographic groups, appeal request)
- What training is provided to human reviewers to identify AI system failures
- How human oversight effectiveness is monitored and reported
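The mandatory-review triggers listed above map naturally onto a single predicate. A sketch, with the confidence threshold as an assumed deployment parameter rather than a value from the Act:

```python
def requires_human_review(confidence: float, flagged_group: bool,
                          appeal_requested: bool,
                          confidence_threshold: float = 0.85) -> bool:
    """Any one trigger mandates human review: low model confidence,
    membership of a flagged demographic group, or an appeal request.
    The 0.85 default threshold is illustrative, not prescribed."""
    return confidence < confidence_threshold or flagged_group or appeal_requested
```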
(g) Supervisory Body Reference
The FRIA must identify the supervisory body or bodies responsible for overseeing the deployment, and confirm that the FRIA has been (or will be) communicated to those bodies as required. In EU Member States, this typically means:
- The national data protection authority (for personal data processing under GDPR)
- The national market surveillance authority (for AI Act Art.74 supervision)
- Any sector-specific regulator (financial services regulator, healthcare authority, etc.)
Art.27(3): FRIA Update Obligation
Art.27(3) creates an ongoing obligation: the deployer must update the FRIA whenever there is a substantial change to the risk profile of the AI system or its deployment context. The article does not define "substantial change" exhaustively, but guidance from the AI Act recitals and the EU Fundamental Rights Agency toolkit identifies the following as triggering events:
| Trigger | Description |
|---|---|
| New affected population | Expansion of deployment to new geographic area or demographic group |
| Algorithmic update | Provider updates the model in a way that changes accuracy or fairness characteristics |
| New use case | Same AI system applied to a new decision type not covered by original FRIA |
| Evidence of harm | Monitoring detects adverse fundamental rights impacts not identified in original assessment |
| Regulatory change | New legislation, court decisions, or supervisory guidance affecting rights analysis |
| Substantial modification (Art.3(23)) | Provider makes a substantial modification triggering re-classification under Art.4 |
Documentation requirement for updates: Each FRIA update must record the trigger, the specific changes made to the assessment, the updated risk conclusions, and the date of the update. Version control is essential.
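The update documentation requirement maps onto a versioned history entry. A minimal sketch; the field names are assumptions chosen to match the `fria_update_history` list on the `FRIARecord` class later in this guide:

```python
from datetime import date


def record_fria_update(history: list, current_version: int, trigger: str,
                       changes: str, updated_conclusions: str) -> int:
    """Append a versioned Art.27(3) update entry and return the new version.
    Field names are illustrative; what must be recorded is the trigger,
    the changes made, the updated risk conclusions, and the date."""
    new_version = current_version + 1
    history.append({
        "version": new_version,
        "trigger": trigger,
        "changes": changes,
        "updated_conclusions": updated_conclusions,
        "date": date.today().isoformat(),
    })
    return new_version
```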
Art.27 Intersection Matrix
Art.27 does not operate in isolation. Understanding the intersecting obligations is essential for compliance design:
| Intersecting Article | Relationship | Practical Impact |
|---|---|---|
| Art.26(8) | FRIA trigger: Art.26(8) requires public-authority deployers to conduct FRIA as specified in Art.27 | Art.26(8) is the gateway obligation; Art.27 is the specification |
| Art.9 | Risk documentation input: Art.27(1)(d) fundamental rights assessment must be grounded in Art.9 risk management docs provided by the provider | FRIA cannot be completed without receiving Art.9 documentation from provider; include contractual Art.9 documentation delivery obligation |
| Art.14 | Human oversight specification: Art.27(1)(f) human oversight measures connect directly to Art.14 implementation by the deployer | Art.14 compliance and Art.27(1)(f) documentation can share a unified human oversight specification |
| Art.22(3) | EU database: Public-authority deployers registering in the EU database under Art.22(3) must confirm FRIA completion status in their registration record | FRIA completion is a prerequisite for Art.22(3) registration; sequence: FRIA → CE marking verification → Art.22(3) registration |
| Art.46 | Market surveillance access: Art.46 gives market surveillance authorities access to FRIA documentation on request; Art.27 FRIA must be production-ready for MSA inspection | Store FRIA in an MSA-accessible location; document version history |
| Art.13 | Instructions for use: The provider's Art.13 IFU documentation forms part of the input for Art.27(1)(a) and (e); FRIA must reflect provider's intended use scope | Request full IFU documentation from provider before commencing FRIA |
EU Fundamental Rights Agency (FRA) FRIA Toolkit
The EU Fundamental Rights Agency published a dedicated FRIA toolkit to support public authorities in conducting AI Act-compliant FRIAs. The toolkit provides:
- Stakeholder mapping templates: Structured questionnaires for identifying affected persons and elevated risk groups (Art.27(1)(c))
- Rights inventory: Charter-aligned checklist of fundamental rights for systematic risk identification (Art.27(1)(d))
- Mitigation catalogue: Database of technical and procedural mitigation measures organised by rights category (Art.27(1)(e))
- Proportionality test: Framework for assessing whether FRIA risk conclusions justify deployment or require additional safeguards
- Update trigger checklist: Operational checklist for Art.27(3) FRIA update review
The FRA toolkit is advisory, not legally binding. However, deployers who follow it and document their use of the toolkit are better positioned to demonstrate good-faith compliance under Art.46 MSA scrutiny. Deviation from FRA methodology requires documented justification.
CLOUD Act × Art.27: Jurisdiction Risk for FRIA Documentation
A FRIA under Art.27 is a detailed evidentiary document containing:
- Personal data of affected individuals (through demographic analysis)
- Sensitive operational information about public-authority AI systems
- Assessment findings about fundamental rights risks — potentially including findings that the deployment raises unresolved rights concerns
When this documentation is stored on US-headquartered cloud infrastructure (AWS, Azure, Google Cloud, Microsoft 365), it falls within the scope of the US CLOUD Act (Clarifying Lawful Overseas Use of Data Act). The CLOUD Act allows US authorities to compel cloud providers to produce data stored anywhere in the world, including EU-jurisdiction FRIA documentation.
Specific CLOUD Act risk vectors for Art.27 FRIAs:
| Risk | Mechanism | Impact |
|---|---|---|
| US law enforcement access to FRIA | CLOUD Act §2713 compels US cloud provider to produce FRIA despite EU data residency | FRIA evidence — including sensitive population data and rights risk findings — disclosed to US authorities without EU oversight |
| Fundamental rights of data subjects in FRIA | FRIA contains demographic analysis of affected individuals; CLOUD Act disclosure = unauthorised transfer under GDPR Chapter V | GDPR Art.48 prohibits transfers without adequacy decision or Art.49 derogation |
| Intelligence community access | Executive Order 12333 and FISA §702 permit US intelligence access to FRIA on US cloud | More difficult to detect and challenge than CLOUD Act law enforcement requests |
| Litigation discovery via US courts | US civil litigants can subpoena US cloud providers for FRIA documentation as part of discovery in US litigation involving the deploying authority | Sensitive findings disclosed in adversarial context |
Mitigation: EU-native cloud infrastructure eliminates CLOUD Act jurisdiction over FRIA documentation entirely. When a public authority stores FRIA records on infrastructure operated by an EU-headquartered provider with no US corporate parent, US compelled disclosure is not available. This is the clearest compliance path for Art.27 FRIA documentation security.
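A first-pass jurisdiction screen for FRIA storage can be automated. The provider list below is an assumption for illustration only; a real assessment requires legal review of the provider's full corporate structure:

```python
# Illustrative set of storage providers with a US corporate parent.
# Not a legal determination, and corporate structures change over time.
US_PARENT_PROVIDERS = {"aws", "azure", "google_cloud", "microsoft_365"}


def cloud_act_exposed(storage_provider: str) -> bool:
    """True if the FRIA storage provider has a US parent. Note that
    choosing an EU data-residency region does not remove CLOUD Act
    reach, which follows the provider, not the data location."""
    return storage_provider.lower() in US_PARENT_PROVIDERS
```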
Python Implementation
FRIARecord — Structured FRIA Data Class
```python
from dataclasses import dataclass, field
from typing import Optional
from datetime import date
from enum import Enum


class AnnexIIICategory(Enum):
    BIOMETRIC = "annex_iii_1_biometric"
    CRITICAL_INFRASTRUCTURE = "annex_iii_2_critical_infrastructure"
    EDUCATION = "annex_iii_3_education"
    EMPLOYMENT = "annex_iii_5_employment"
    ESSENTIAL_SERVICES = "annex_iii_6_essential_services"
    LAW_ENFORCEMENT = "annex_iii_7_law_enforcement"
    MIGRATION = "annex_iii_8_migration"


class FRIARightRisk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class FundamentalRightsRiskEntry:
    charter_article: str  # e.g. "Art.21 Non-discrimination"
    risk_description: str
    affected_groups: list[str]
    severity: FRIARightRisk
    likelihood: FRIARightRisk
    mitigation_measures: list[str]
    residual_risk: FRIARightRisk

    def fundamental_rights_risk_score(self) -> float:
        """Composite risk score: severity × likelihood (1-4 scale each)."""
        severity_map = {
            FRIARightRisk.LOW: 1, FRIARightRisk.MEDIUM: 2,
            FRIARightRisk.HIGH: 3, FRIARightRisk.CRITICAL: 4,
        }
        s = severity_map[self.severity]
        l = severity_map[self.likelihood]
        return round((s * l) / 16.0, 3)  # Normalised 0.0–1.0


@dataclass
class FRIARecord:
    system_id: str
    system_name: str
    annex_iii_category: AnnexIIICategory
    deployer_name: str
    deployer_type: str  # "public_authority" | "public_law_body" | "public_service_private"
    geographic_scope: str
    temporal_scope_start: date
    temporal_scope_end: Optional[date]
    intended_use_description: str
    art9_risk_docs_received: bool
    art13_ifu_docs_received: bool
    affected_persons_categories: list[str]
    elevated_risk_groups: list[str]
    rights_risk_entries: list[FundamentalRightsRiskEntry] = field(default_factory=list)
    human_oversight_description: str = ""
    supervisory_bodies: list[str] = field(default_factory=list)
    fria_date: Optional[date] = None
    fria_version: int = 1
    fria_update_history: list[dict] = field(default_factory=list)
    fra_toolkit_used: bool = False

    def aggregate_risk_score(self) -> float:
        if not self.rights_risk_entries:
            return 0.0
        scores = [e.fundamental_rights_risk_score() for e in self.rights_risk_entries]
        return round(sum(scores) / len(scores), 3)

    def has_critical_unmitigated_risk(self) -> bool:
        return any(
            e.residual_risk == FRIARightRisk.CRITICAL
            for e in self.rights_risk_entries
        )

    def to_eu_database_record(self) -> dict:
        """Art.22(3) EU database registration payload for public-authority deployers."""
        return {
            "system_id": self.system_id,
            "deployer": self.deployer_name,
            "annex_iii_category": self.annex_iii_category.value,
            "fria_completed": self.fria_date is not None,
            "fria_date": str(self.fria_date) if self.fria_date else None,
            "fria_version": self.fria_version,
            "aggregate_risk_score": self.aggregate_risk_score(),
            "critical_unmitigated": self.has_critical_unmitigated_risk(),
        }
```
AffectedGroupsAssessor — Elevated Risk Group Analysis
```python
from dataclasses import dataclass

ELEVATED_RISK_GROUPS = [
    "minors_under_18",
    "persons_with_disabilities",
    "ethnic_minorities",
    "asylum_seekers_refugees",
    "elderly_persons_65plus",
    "low_socioeconomic_background",
    "lgbtq_persons",
    "single_parent_households",
    "homeless_persons",
    "persons_with_mental_health_conditions",
]


@dataclass
class AffectedGroupsAssessor:
    system_name: str
    total_affected_population_estimate: int
    groups_present_in_deployment: list[str]

    def elevated_groups_identified(self) -> list[str]:
        return [g for g in self.groups_present_in_deployment
                if g in ELEVATED_RISK_GROUPS]

    def equity_gap_check(self) -> dict:
        """
        Returns a structured equity gap analysis.
        Flags whether elevated risk groups are adequately addressed
        in the FRIA rights assessment.
        """
        elevated = self.elevated_groups_identified()
        return {
            "total_elevated_groups": len(elevated),
            "groups": elevated,
            "equity_gap_flag": len(elevated) > 0,
            "recommendation": (
                "Conduct intersectional analysis: individuals may belong to "
                "multiple elevated-risk groups, compounding rights exposure."
                if len(elevated) >= 2 else
                "Document specific rights risks for each elevated group "
                "in Art.27(1)(d) assessment."
                if len(elevated) == 1 else
                "No elevated-risk groups identified. Verify deployment scope — "
                "most public AI deployments affect at least one protected group."
            ),
        }

    def fria_scope_adequate(self) -> bool:
        """True if at least one elevated-risk group is identified, or a
        non-zero affected-population estimate has been documented."""
        return (len(self.elevated_groups_identified()) > 0
                or self.total_affected_population_estimate > 0)
```
FRIAComplianceChecker — Deployment Authorisation Gate
```python
@dataclass
class FRIAComplianceChecker:
    fria: FRIARecord

    REQUIRED_ELEMENTS = [
        "intended_use_description",
        "geographic_scope",
        "affected_persons_categories",
        "elevated_risk_groups",
        "rights_risk_entries",
        "human_oversight_description",
        "supervisory_bodies",
        "fria_date",
    ]

    def missing_elements(self) -> list[str]:
        missing = []
        for element in self.REQUIRED_ELEMENTS:
            value = getattr(self.fria, element, None)
            if not value:
                missing.append(element)
        return missing

    def prerequisites_met(self) -> dict:
        return {
            "art9_risk_docs": self.fria.art9_risk_docs_received,
            "art13_ifu_docs": self.fria.art13_ifu_docs_received,
            "deployer_type_valid": self.fria.deployer_type in [
                "public_authority", "public_law_body", "public_service_private"
            ],
        }

    def is_deployment_authorized(self) -> bool:
        """
        Returns True only if:
        1. The Art.27(1)(a)-(g) content elements (REQUIRED_ELEMENTS) are present
        2. Art.9 and Art.13 docs were received from provider
        3. No critical unmitigated rights risks remain
        4. FRIA has been formally dated (completed before deployment)
        """
        if self.missing_elements():
            return False
        prereqs = self.prerequisites_met()
        if not all(prereqs.values()):
            return False
        if self.fria.has_critical_unmitigated_risk():
            return False
        if self.fria.fria_date is None:
            return False
        return True

    def deployment_gate_report(self) -> dict:
        return {
            "system": self.fria.system_name,
            "authorized": self.is_deployment_authorized(),
            "missing_elements": self.missing_elements(),
            "prerequisites": self.prerequisites_met(),
            "critical_unmitigated_risk": self.fria.has_critical_unmitigated_risk(),
            "aggregate_risk_score": self.fria.aggregate_risk_score(),
            "fria_version": self.fria.fria_version,
            "art22_3_registration_ready": self.fria.fria_date is not None,
        }
```
Art.27 Compliance Checklist — 40 Items
Applicability Assessment (Items 1–8)
- 1. Deployer is a public authority, public-law body, or private entity providing public services
- 2. AI system is classified high-risk under Art.6(2) + Annex III
- 3. Annex III category falls within Art.27(2) scope (categories 1, 2, 3, 5, 6, 7, or 8)
- 4. FRIA required before deployment confirmed (not retroactive)
- 5. Art.26(8) obligation assessed and triggers Art.27
- 6. Sector-specific regulator identified (if healthcare, financial, etc.)
- 7. Legal opinion obtained on deployer's public-law status (if private entity in public service)
- 8. Prior FRIA exists from earlier deployment of same system (version control required)
Art.9 and Art.13 Documentation Receipt (Items 9–12)
- 9. Art.9 risk management documentation received from provider
- 10. Art.13 instructions for use received from provider
- 11. Contractual obligation on provider to deliver Art.9 + Art.13 docs included in procurement
- 12. Provider notification received of any post-FRIA substantial modifications
Art.27(1)(a): Intended Use Documentation (Items 13–15)
- 13. Specific intended use and Annex III category documented
- 14. Decision-making role of AI (autonomous vs. decision-support) specified
- 15. Relationship between AI outputs and consequential human decisions documented
Art.27(1)(b): Geographic and Temporal Scope (Items 16–17)
- 16. Geographic scope (regions, jurisdictions) specified
- 17. Temporal scope (start date, review cycle, sunset clause) specified
Art.27(1)(c): Affected Persons (Items 18–20)
- 18. All categories of directly affected persons identified
- 19. Elevated risk groups identified and documented (using FRA toolkit taxonomy)
- 20. Intersectionality analysis completed for overlapping group membership
Art.27(1)(d): Fundamental Rights Assessment (Items 21–25)
- 21. Charter rights inventory completed (Art.1, 7, 8, 21, 41, 47, 48 at minimum)
- 22. Each right assessed for risk severity and likelihood per affected group
- 23. Assessment grounded in Art.9 provider risk documentation
- 24. Algorithmic bias testing results reviewed and incorporated
- 25. FRA toolkit methodology applied (or deviation documented)
Art.27(1)(e): Mitigation Measures (Items 26–28)
- 26. Technical mitigation measures documented (fairness metrics, explainability, thresholds)
- 27. Procedural mitigation measures documented (human review, appeals, override)
- 28. Residual risk assessed after mitigation for each identified right
Art.27(1)(f): Human Oversight (Items 29–31)
- 29. Human oversight roles and authorities specified
- 30. Override triggers documented (confidence thresholds, flagged groups)
- 31. Human reviewer training programme documented
Art.27(1)(g): Supervisory Body (Items 32–33)
- 32. All relevant supervisory bodies identified (DPA, MSA, sector regulator)
- 33. Communication of FRIA to supervisory bodies completed or scheduled
Art.27(3): Update Process (Items 34–36)
- 34. FRIA version control implemented
- 35. Art.27(3) update triggers documented and operational
- 36. FRIA update review scheduled (minimum annual for ongoing deployments)
Art.22(3) and Art.46 Integration (Items 37–38)
- 37. FRIA completion status included in Art.22(3) EU database registration record
- 38. FRIA stored in MSA-accessible location for Art.46 on-request production
Infrastructure and CLOUD Act (Items 39–40)
- 39. FRIA documentation storage location assessed for CLOUD Act jurisdiction risk
- 40. EU-native infrastructure confirmed for FRIA storage (eliminates CLOUD Act exposure)
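Checklist completion can be tracked programmatically per section. The section keys below are illustrative shorthand; the item counts mirror the checklist above and sum to 40:

```python
# Item counts per checklist section (sums to 40, matching the list above).
CHECKLIST_SECTIONS = {
    "applicability": 8,
    "documentation_receipt": 4,
    "intended_use_a": 3,
    "scope_b": 2,
    "affected_persons_c": 3,
    "rights_assessment_d": 5,
    "mitigation_e": 3,
    "human_oversight_f": 3,
    "supervisory_body_g": 2,
    "update_process": 3,
    "art22_art46_integration": 2,
    "infrastructure_cloud_act": 2,
}


def checklist_progress(completed: dict) -> float:
    """Fraction of the 40 items completed, capped at each section's size."""
    total = sum(CHECKLIST_SECTIONS.values())
    done = sum(min(completed.get(s, 0), n) for s, n in CHECKLIST_SECTIONS.items())
    return round(done / total, 3)
```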
Art.27 Enforcement Exposure
Art.27 violations are subject to the AI Act's penalty framework under Art.99:
| Violation | Maximum Penalty |
|---|---|
| Non-compliance with Art.27 FRIA obligation | EUR 15,000,000 or 3% of global annual turnover (whichever higher) |
| Providing incorrect or misleading information to MSA regarding FRIA | EUR 7,500,000 or 1% of global annual turnover |
For public authorities, the penalty regime may differ — some Member States limit penalties on public bodies, but the reputational and operational consequences of non-compliance (MSA enforcement orders, deployment suspension under Art.79) are significant regardless of financial penalty applicability.
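The "whichever higher" mechanics mean the turnover-based cap dominates for large undertakings. A quick arithmetic sketch of the Art.27 non-compliance tier cited above:

```python
def art99_fria_penalty_cap(global_annual_turnover_eur: float) -> float:
    """Maximum fine for Art.27 non-compliance: the higher of EUR 15m
    or 3% of worldwide annual turnover (per the Art.99 tier above)."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)
```

For a turnover of EUR 2bn the cap is EUR 60m; below EUR 500m turnover the fixed EUR 15m floor applies.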
Key enforcement timeline: Art.27 obligations apply from 2 August 2026 for Annex III high-risk AI systems. The transition period gives deployers approximately one year from the date of this guide to complete FRIAs for all planned deployments in scope.
What to Do Now
For Public Authorities Planning AI Deployments (2026 Deadline)
- Map your AI portfolio against Art.27(2) categories: Identify every AI system in procurement, pilot, or production that falls within categories 1, 2, 3, 5, 6, 7, or 8 of Annex III.
- Request Art.9 + Art.13 documentation from all providers: Make this a contractual obligation in every AI procurement. Without Art.9 docs, the Art.27(1)(d) assessment cannot be completed.
- Use the FRA FRIA toolkit: Structured FRIAs based on established methodology are more defensible under Art.46 MSA scrutiny.
- Complete FRIAs before 2 August 2026: Build FRIA timelines into AI project delivery schedules. A complete FRIA for a complex law enforcement system may take 3–6 months.
- Assess FRIA documentation storage: If your authority uses US cloud infrastructure, evaluate the CLOUD Act jurisdiction risk for FRIA records containing sensitive population data.
For Developers Building AI Systems Sold to Public Authorities
- Deliver Art.9 and Art.13 documentation proactively: Your public-authority customers cannot complete their Art.27 FRIA without your documentation. Include documentation delivery in your go-live process.
- Notify deployers of substantial modifications: Art.27(3) update triggers include provider-side model updates. Establish a notification process for your public-sector customers.
- Design explainability for FRIA: FRIA assessors need to understand how your AI makes decisions to assess fundamental rights risks. Explainability APIs are not optional for systems sold into Annex III public-authority contexts.
- Consider where you store your Art.9 documentation: Your own Art.9 records, if stored on US cloud infrastructure, face the same CLOUD Act exposure as your customers' FRIA records.
See Also
- EU AI Act Art.26 Obligations for Deployers: Developer Guide — Art.26(8) is the FRIA trigger
- EU AI Act Art.22 EU Database of High-Risk AI Systems — Art.22(3) public-authority deployer registration requires FRIA completion status
- EU AI Act Art.14 Human Oversight: Developer Guide — Art.27(1)(f) human oversight specification connects to Art.14
- EU AI Act Art.9 Risk Management: Developer Guide — Art.9 risk docs are input to Art.27(1)(d) rights assessment
- EU AI Liability Directive (AILD 2024): Developer Guide — FRIA failure can constitute evidence of negligence under AILD causation presumption