2026-04-16 · 12 min read

EU AI Act Art.27 Fundamental Rights Impact Assessment (FRIA): Developer Guide (High-Risk AI 2026)

EU AI Act Article 27 is the Fundamental Rights Impact Assessment (FRIA) obligation for public-authority deployers of high-risk AI. While most Art.26 obligations apply universally to all deployers, Art.27 targets a specific intersection: deployers that are public authorities (or bodies acting in their name) operating in any of the seven Annex III high-risk categories that directly affect individuals' access to public services, employment, justice, migration status, or civil liberties. In those contexts, Art.27 mandates a structured pre-deployment assessment of risks to fundamental rights — before the AI system goes live.

Art.27 is architecturally a downstream extension of Art.26(8). Art.26(8) creates the trigger: when a deployer is a public authority using a high-risk AI system in an Annex III use case, it must conduct a FRIA. Art.27 defines what that FRIA must contain. The two articles together form the public-authority deployer compliance module within Chapter III Section 2.

This guide covers Art.27(1)–(3) in full, the seven mandatory FRIA content elements under Art.27(1)(a)–(g), the seven Annex III categories that trigger the FRIA obligation, the Art.27 intersection matrix with Art.26(8), Art.9, Art.14, Art.22(3), and Art.46, the EU Fundamental Rights Agency (FRA) toolkit, CLOUD Act jurisdiction risk for FRIA documentation stored on US infrastructure, Python implementation for FRIARecord, AffectedGroupsAssessor, and FRIAComplianceChecker, and the 40-item Art.27 compliance checklist.


Art.27 in the High-Risk AI Compliance Chain

Art.27 occupies the public-authority deployer governance layer of Chapter III Section 2:

| Article | Obligation Layer | Primary Addressee |
| --- | --- | --- |
| Art.9 | Risk management system | Provider |
| Art.10 | Training data governance | Provider |
| Art.11 | Technical documentation | Provider |
| Art.12 | Automatic event logging | Provider (system design) |
| Art.13 | Instructions for use | Provider (must produce) |
| Art.14 | Human oversight | Provider (design) + Deployer (implement) |
| Art.17 | Quality management system | Provider |
| Art.20 | Corrective actions | Provider + Deployer (notification) |
| Art.21 | MSA cooperation | All operators including Deployers |
| Art.22 | EU database registration | Provider + Deployer (public authorities, Art.22(3)) |
| Art.26 | Deployer obligations | Deployer |
| Art.27 | Fundamental Rights Impact Assessment | Public-authority Deployer (Annex III contexts) |

Art.27 is not triggered by all high-risk AI deployments — only those where (a) the deployer is a public authority and (b) the deployment falls in specific Annex III categories. For most private-sector deployers, Art.27 does not apply. For public bodies deploying AI in employment, benefits, justice, or law enforcement contexts, it is mandatory before go-live.


Art.27(1): The FRIA Obligation — Who Must Conduct It

Art.27(1) text (summarised): Before deploying a high-risk AI system referred to in points 1, 2, 3, 5, 6, 7, or 8 of Annex III, a deployer that is a body governed by public law, or a private entity providing public services, shall carry out a Fundamental Rights Impact Assessment.

Three threshold conditions for Art.27 applicability:

  1. Deployer type: The deployer is (a) a public authority, (b) a body governed by public law, or (c) a private entity exercising public functions or providing services in the public interest.
  2. System classification: The AI system is classified as high-risk under Art.6(2) by reference to Annex III.
  3. Annex III category: The deployment falls within one of the seven FRIA-triggering categories (see Art.27(2) below).

Private companies are generally excluded unless they exercise public authority or provide services with public-interest obligations (e.g., a private company contracted to deliver public welfare assessments on behalf of a municipality). When in doubt, legal analysis of the deployer's public-law status is required.

Timing: The FRIA must be completed before deployment — not after go-live. Art.27 does not specify a minimum advance period, but the FRIA output must inform the deployment decision and must be available to market surveillance authorities (Art.46) on request from the outset of deployment.
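The three threshold conditions above can be pre-screened programmatically. The sketch below is a hypothetical helper (not part of the Act's text): it assumes the system's high-risk classification under Art.6(2) has already been confirmed, and reduces the remaining two conditions to a single check. The string labels mirror the `deployer_type` values used in the FRIARecord implementation later in this guide.

```python
# Annex III points listed in Art.27(1) of this guide's scheme.
FRIA_TRIGGER_POINTS = {1, 2, 3, 5, 6, 7, 8}

# Deployer categories that fall within Art.27 scope.
PUBLIC_DEPLOYER_TYPES = {
    "public_authority",        # (a) public authority
    "public_law_body",         # (b) body governed by public law
    "public_service_private",  # (c) private entity providing public services
}

def fria_required(deployer_type: str, annex_iii_point: int) -> bool:
    """True when the deployer type and Annex III point both trigger Art.27.

    Assumes high-risk classification under Art.6(2) is already established.
    """
    return (
        deployer_type in PUBLIC_DEPLOYER_TYPES
        and annex_iii_point in FRIA_TRIGGER_POINTS
    )
```

A private commercial deployer, or a public authority deploying in a non-listed Annex III point, falls outside the gate; when the deployer's public-law status is ambiguous, this check is no substitute for the legal analysis described above.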


Art.27(2): The Seven FRIA-Triggering Annex III Categories

Art.27(2) identifies seven Annex III categories where the FRIA obligation applies. The table below maps each category to practical examples:

| Annex III Category | Scope | Practical Examples |
| --- | --- | --- |
| 1 — Biometric identification | Real-time and post-hoc biometric identification of natural persons in publicly accessible spaces (with Art.5 exceptions) | Police facial recognition, border biometric matching, stadium access AI |
| 2 — Critical infrastructure | Management and operation of critical infrastructure (energy, water, transport, financial) where AI failures endanger life or access | Grid management AI, water treatment control, autonomous transport systems |
| 3 — Education and vocational training | Determining access to, or evaluating performance in, education and vocational training institutions funded or operated by public bodies | Public school admissions AI, automated exam scoring, vocational training eligibility screening |
| 5 — Employment, workers management, and access to self-employment | Recruitment, selection, task allocation, performance monitoring, promotion, termination, or access to self-employment in public-sector contexts | Civil service recruitment AI, public-sector performance evaluation, automated dismissal recommendation |
| 6 — Access to essential private and public services and benefits | Evaluating eligibility for essential services including public benefits, credit, emergency services, or social assistance administered by public bodies | Benefits eligibility AI (housing, welfare), credit scoring for public housing, emergency response prioritisation |
| 7 — Law enforcement | Risk assessments for individuals, profiling, crime analytics, or evidence evaluation by police or prosecution authorities | Recidivism prediction, crime hotspot AI, policing deployment optimisation |
| 8 — Migration, asylum, and border control | Border management, visa processing, asylum assessment, or irregular migration risk scoring | Automated visa pre-screening, asylum claim risk ranking, border crossing anomaly detection |

Notable exclusion: Annex III Category 4 (health and life sciences AI) does not appear in Art.27(2). Medical AI deployed by public health bodies is governed by separate sector-specific requirements (MDR/IVDR and dedicated health data regulations) and does not trigger the Art.27 FRIA.


Art.27(1)(a)–(g): The Seven Mandatory FRIA Content Elements

Art.27(1) specifies seven elements that the FRIA must contain:

(a) Description of the Intended Use and Deployment Context

The FRIA must describe the specific intended use of the AI system, the organisational and geographic deployment context, the decision-making process the AI system supports or automates, and the relationship between the AI system's outputs and any consequential human decisions.

What to document:

  - The specific decision or process the AI system supports or automates
  - The organisational unit and workflow in which the system's outputs are used
  - Whether outputs are advisory only or feed directly into consequential decisions
  - The relationship between AI outputs and the human decisions that follow them

(b) Geographic and Temporal Scope

The FRIA must specify the geographic scope (which regions, districts, or jurisdictions are covered) and the temporal scope (deployment start date, planned review cycles, sunset date if applicable).

Why this matters: The CLOUD Act jurisdiction analysis (see below) depends on where FRIA documentation is stored. The temporal scope matters because Art.27(3) requires FRIA updates when the risk profile changes substantially.
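Temporal scope is easiest to enforce when review dates are computed rather than tracked by hand. The helper below is an illustrative sketch (the function name and signature are assumptions, not from the Act): it derives the next scheduled review date from the deployment start or the last completed review, clamping the day so short months never produce an invalid date.

```python
from datetime import date
from typing import Optional

def next_review_due(deployment_start: date, review_cycle_months: int,
                    last_review: Optional[date] = None) -> date:
    """Next scheduled FRIA review date.

    Counts from the last completed review, or from deployment start
    if no review has happened yet.
    """
    anchor = last_review or deployment_start
    months = anchor.month - 1 + review_cycle_months
    year = anchor.year + months // 12
    month = months % 12 + 1
    day = min(anchor.day, 28)  # clamp so February and 30-day months stay valid
    return date(year, month, day)
```

A deployment starting 2 August 2026 with an annual cycle is due for review on 2 August 2027; once a review completes, the next one is computed from that date instead.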

(c) Categories of Affected Persons and Elevated Risk Groups

The FRIA must identify all categories of natural persons who will be directly or indirectly affected by the AI system, with specific attention to groups that face elevated rights risks:

  - Minors under 18
  - Persons with disabilities
  - Ethnic minorities
  - Asylum seekers and refugees
  - Elderly persons (65+)
  - Persons from low socioeconomic backgrounds
  - LGBTQ persons
  - Single-parent households
  - Homeless persons
  - Persons with mental health conditions

(d) Fundamental Rights Assessment

The core of the FRIA: a structured assessment of risks to the fundamental rights guaranteed by the EU Charter of Fundamental Rights. The assessment must be grounded in the Art.9 risk management documentation provided by the AI system provider.

Charter rights typically at risk in Annex III deployments:

| Charter Article | Right | At-Risk Annex III Context |
| --- | --- | --- |
| Art.1 | Human dignity | Biometric surveillance, automated dismissal |
| Art.7 | Respect for private life | Behavioural profiling, continuous monitoring |
| Art.8 | Personal data protection | Processing of sensitive data categories |
| Art.21 | Non-discrimination | Algorithmic bias in hiring, benefits, sentencing |
| Art.41 | Right to good administration | Automated decisions without explanation |
| Art.47 | Right to effective remedy | Decisions without human review availability |
| Art.48 | Presumption of innocence | Law enforcement risk scoring |

The assessment must identify which specific rights are at risk, the severity and likelihood of rights violations for each affected group, and whether existing technical or organisational measures adequately address each identified risk.
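Severity and likelihood can be combined into a comparable score. The standalone sketch below mirrors the severity × likelihood scoring used in the FundamentalRightsRiskEntry implementation later in this guide: each axis maps onto a 1–4 scale and the product is normalised to 0.0–1.0 by dividing by the maximum (16).

```python
# 1-4 ordinal scale shared by both axes of the assessment.
SCALE = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def risk_score(severity: str, likelihood: str) -> float:
    """Normalised composite score for one Charter-right risk entry.

    severity x likelihood on a 1-4 scale each, divided by 16 to land in 0.0-1.0.
    """
    return round(SCALE[severity] * SCALE[likelihood] / 16.0, 3)
```

A high-severity, critical-likelihood risk scores 0.75; the scale makes risks to different affected groups directly comparable when prioritising mitigation work.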

(e) Mitigation Measures

For each identified fundamental rights risk, the FRIA must document the mitigation measures already implemented and those planned before deployment. Mitigation measures must be proportionate to the identified risk severity.
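A simple completeness check makes the proportionality requirement operational: every high-severity risk with no documented measure is a gap that must be closed before deployment. The sketch below uses assumed field names (`risk`, `severity`, `mitigations`), not a fixed schema.

```python
def unmitigated_high_risks(entries: list[dict]) -> list[str]:
    """Return descriptions of high/critical risks that list no mitigation measures."""
    return [
        e["risk"] for e in entries
        if e["severity"] in ("high", "critical") and not e["mitigations"]
    ]
```

An empty result means every elevated risk has at least one measure attached; any hit should block the deployment decision until a measure is documented.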

Categories of mitigation measures:

  - Technical: bias testing, revised decision thresholds, accuracy monitoring, input data quality controls
  - Organisational: staff training, escalation procedures, periodic audits of outcomes across affected groups
  - Procedural: mandatory human review of adverse decisions, explanation of outcomes, accessible appeal and remedy channels

(f) Human Oversight Measures

The FRIA must specify the human oversight mechanisms that will govern the AI system's operation. This connects directly to Art.14 (human oversight obligations on providers and deployers) and Art.26(4) (deployer monitoring obligations).

Minimum human oversight specification in a FRIA:

  - The named roles with authority to review, intervene in, or override AI outputs
  - The point in the decision workflow where human review takes place
  - The monitoring cadence and escalation procedure for anomalous outputs
  - The training and competence requirements for the persons exercising oversight

(g) Supervisory Body Reference

The FRIA must identify the supervisory body or bodies responsible for overseeing the deployment, and confirm that the FRIA has been (or will be) communicated to those bodies as required. In EU Member States, this typically means:

  - The national market surveillance authority with Art.46 access rights
  - The data protection authority, where the system processes personal data
  - Sector-specific supervisory bodies (e.g. for law enforcement, education, or border control deployments)


Art.27(3): FRIA Update Obligation

Art.27(3) creates an ongoing obligation: the deployer must update the FRIA whenever there is a substantial change to the risk profile of the AI system or its deployment context. The article does not define "substantial change" exhaustively, but guidance from the AI Act recitals and the EU Fundamental Rights Agency toolkit identifies the following as triggering events:

| Trigger | Description |
| --- | --- |
| New affected population | Expansion of deployment to a new geographic area or demographic group |
| Algorithmic update | Provider updates the model in a way that changes accuracy or fairness characteristics |
| New use case | Same AI system applied to a new decision type not covered by the original FRIA |
| Evidence of harm | Monitoring detects adverse fundamental rights impacts not identified in the original assessment |
| Regulatory change | New legislation, court decisions, or supervisory guidance affecting the rights analysis |
| Substantial modification (Art.3(23)) | Provider makes a substantial modification triggering a new conformity assessment under Art.43 |

Documentation requirement for updates: Each FRIA update must record the trigger, the specific changes made to the assessment, the updated risk conclusions, and the date of the update. Version control is essential.
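The update record described above can be sketched as a small append-only log. This is an illustrative helper under assumed field names, aligned with the `fria_update_history` and `fria_version` fields of the FRIARecord class later in this guide: each entry captures the trigger, the changes, the updated conclusions, and the date, and bumps the version number.

```python
from datetime import date

def record_fria_update(history: list, version: int, trigger: str,
                       changes: str, conclusions: str, on: date) -> int:
    """Append one Art.27(3) update entry and return the new FRIA version number."""
    new_version = version + 1
    history.append({
        "version": new_version,
        "trigger": trigger,
        "changes": changes,
        "updated_conclusions": conclusions,
        "date": on.isoformat(),
    })
    return new_version
```

Because entries are only ever appended, the history doubles as the version-control trail that MSA inspectors can walk through under Art.46.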


Art.27 Intersection Matrix

Art.27 does not operate in isolation. Understanding the intersecting obligations is essential for compliance design:

| Intersecting Article | Relationship | Practical Impact |
| --- | --- | --- |
| Art.26(8) | FRIA trigger: Art.26(8) requires public-authority deployers to conduct a FRIA as specified in Art.27 | Art.26(8) is the gateway obligation; Art.27 is the specification |
| Art.9 | Risk documentation input: the Art.27(1)(d) fundamental rights assessment must be grounded in Art.9 risk management docs provided by the provider | The FRIA cannot be completed without receiving Art.9 documentation from the provider; include a contractual Art.9 documentation delivery obligation |
| Art.14 | Human oversight specification: the Art.27(1)(f) human oversight measures connect directly to Art.14 implementation by the deployer | Art.14 compliance and Art.27(1)(f) documentation can share a unified human oversight specification |
| Art.22(3) | EU database: public-authority deployers registering in the EU database under Art.22(3) must confirm FRIA completion status in their registration record | FRIA completion is a prerequisite for Art.22(3) registration; sequence: FRIA → CE marking verification → Art.22(3) registration |
| Art.46 | Market surveillance access: Art.46 gives market surveillance authorities access to FRIA documentation on request; the FRIA must be production-ready for MSA inspection | Store the FRIA in an MSA-accessible location; document version history |
| Art.13 | Instructions for use: the provider's Art.13 IFU documentation forms part of the input for Art.27(1)(a) and (e); the FRIA must reflect the provider's intended use scope | Request full IFU documentation from the provider before commencing the FRIA |
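The Art.22(3) sequencing rule (FRIA, then CE marking verification, then registration) can be expressed as a simple gate. The function below is a hypothetical sketch of that ordering, not an API defined by the Act; the step names are illustrative.

```python
def next_compliance_step(fria_completed: bool, ce_marking_verified: bool,
                         registered: bool) -> str:
    """Return the next pending step in the FRIA -> CE -> Art.22(3) sequence."""
    if not fria_completed:
        return "complete_fria"
    if not ce_marking_verified:
        return "verify_ce_marking"
    if not registered:
        return "register_art22_3"
    return "done"
```

The gate encodes the prerequisite ordering: registration is never the next step while either upstream obligation is open.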

EU Fundamental Rights Agency (FRA) FRIA Toolkit

The EU Fundamental Rights Agency has published a dedicated FRIA toolkit to support public authorities in conducting AI Act-compliant FRIAs.

The FRA toolkit is advisory, not legally binding. However, deployers who follow it and document their use of the toolkit are better positioned to demonstrate good-faith compliance under Art.46 MSA scrutiny. Deviation from FRA methodology requires documented justification.


CLOUD Act × Art.27: Jurisdiction Risk for FRIA Documentation

A FRIA under Art.27 is a detailed evidentiary document containing:

  - The intended use and deployment context of a public-sector AI system
  - Demographic analysis of affected populations, including elevated-risk groups
  - Charter rights risk findings, severity ratings, and residual risk conclusions
  - Mitigation measures and human oversight arrangements

When this documentation is stored on US-headquartered cloud infrastructure (AWS, Azure, Google Cloud, Microsoft 365), it falls within the scope of the US CLOUD Act (Clarifying Lawful Overseas Use of Data Act). The CLOUD Act allows US authorities to compel cloud providers to produce data stored anywhere in the world, including EU-jurisdiction FRIA documentation.

Specific CLOUD Act risk vectors for Art.27 FRIAs:

| Risk | Mechanism | Impact |
| --- | --- | --- |
| US law enforcement access to FRIA | CLOUD Act §2713 compels a US cloud provider to produce the FRIA despite EU data residency | FRIA evidence — including sensitive population data and rights risk findings — disclosed to US authorities without EU oversight |
| Fundamental rights of data subjects in the FRIA | The FRIA contains demographic analysis of affected individuals; CLOUD Act disclosure is an unauthorised transfer under GDPR Chapter V | GDPR Art.48 bars recognition of third-country disclosure orders absent an international agreement; a Chapter V basis (adequacy decision or Art.49 derogation) is still required |
| Intelligence community access | Executive Order 12333 and FISA §702 permit US intelligence access to a FRIA on US cloud | More difficult to detect and challenge than CLOUD Act law enforcement requests |
| Litigation discovery via US courts | US civil litigants can subpoena US cloud providers for FRIA documentation as part of discovery in US litigation involving the deploying authority | Sensitive findings disclosed in an adversarial context |

Mitigation: EU-native cloud infrastructure eliminates CLOUD Act jurisdiction over FRIA documentation entirely. When a public authority stores FRIA records on infrastructure operated by an EU-headquartered provider with no US corporate parent, US compelled disclosure is not available. This is the clearest compliance path for Art.27 FRIA documentation security.
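The analysis above turns on one point: exposure follows the provider's corporate jurisdiction, not the data-centre region. The sketch below encodes that heuristic; the parameter names are assumptions for illustration, and a real procurement assessment would look at the full corporate chain.

```python
def cloud_act_exposed(provider_hq_country: str, us_corporate_parent: bool) -> bool:
    """True if US compelled-disclosure mechanisms can plausibly reach the provider.

    Exposure follows corporate jurisdiction: a US headquarters or a US parent
    company brings the provider within CLOUD Act reach regardless of where
    the data centre sits.
    """
    return provider_hq_country == "US" or us_corporate_parent
```

An EU-headquartered provider with a US parent is still exposed; only an EU provider with no US corporate parent clears the check, which is the configuration the mitigation paragraph above describes.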


Python Implementation

FRIARecord — Structured FRIA Data Class

from dataclasses import dataclass, field
from typing import Optional
from datetime import date
from enum import Enum

class AnnexIIICategory(Enum):
    BIOMETRIC = "annex_iii_1_biometric"
    CRITICAL_INFRASTRUCTURE = "annex_iii_2_critical_infrastructure"
    EDUCATION = "annex_iii_3_education"
    EMPLOYMENT = "annex_iii_5_employment"
    ESSENTIAL_SERVICES = "annex_iii_6_essential_services"
    LAW_ENFORCEMENT = "annex_iii_7_law_enforcement"
    MIGRATION = "annex_iii_8_migration"

class FRIARightRisk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class FundamentalRightsRiskEntry:
    charter_article: str        # e.g. "Art.21 Non-discrimination"
    risk_description: str
    affected_groups: list[str]
    severity: FRIARightRisk
    likelihood: FRIARightRisk
    mitigation_measures: list[str]
    residual_risk: FRIARightRisk

    def fundamental_rights_risk_score(self) -> float:
        """Composite risk score: severity × likelihood (1-4 scale each)."""
        severity_map = {
            FRIARightRisk.LOW: 1, FRIARightRisk.MEDIUM: 2,
            FRIARightRisk.HIGH: 3, FRIARightRisk.CRITICAL: 4
        }
        s = severity_map[self.severity]
        l = severity_map[self.likelihood]
        return round((s * l) / 16.0, 3)  # Normalised 0.0–1.0

@dataclass
class FRIARecord:
    system_id: str
    system_name: str
    annex_iii_category: AnnexIIICategory
    deployer_name: str
    deployer_type: str          # "public_authority" | "public_law_body" | "public_service_private"
    geographic_scope: str
    temporal_scope_start: date
    temporal_scope_end: Optional[date]
    intended_use_description: str
    art9_risk_docs_received: bool
    art13_ifu_docs_received: bool
    affected_persons_categories: list[str]
    elevated_risk_groups: list[str]
    rights_risk_entries: list[FundamentalRightsRiskEntry] = field(default_factory=list)
    human_oversight_description: str = ""
    supervisory_bodies: list[str] = field(default_factory=list)
    fria_date: Optional[date] = None
    fria_version: int = 1
    fria_update_history: list[dict] = field(default_factory=list)
    fra_toolkit_used: bool = False

    def aggregate_risk_score(self) -> float:
        if not self.rights_risk_entries:
            return 0.0
        scores = [e.fundamental_rights_risk_score() for e in self.rights_risk_entries]
        return round(sum(scores) / len(scores), 3)

    def has_critical_unmitigated_risk(self) -> bool:
        return any(
            e.residual_risk == FRIARightRisk.CRITICAL
            for e in self.rights_risk_entries
        )

    def to_eu_database_record(self) -> dict:
        """Art.22(3) EU database registration payload for public-authority deployers."""
        return {
            "system_id": self.system_id,
            "deployer": self.deployer_name,
            "annex_iii_category": self.annex_iii_category.value,
            "fria_completed": self.fria_date is not None,
            "fria_date": str(self.fria_date) if self.fria_date else None,
            "fria_version": self.fria_version,
            "aggregate_risk_score": self.aggregate_risk_score(),
            "critical_unmitigated": self.has_critical_unmitigated_risk(),
        }

AffectedGroupsAssessor — Elevated Risk Group Analysis

from dataclasses import dataclass

ELEVATED_RISK_GROUPS = [
    "minors_under_18",
    "persons_with_disabilities",
    "ethnic_minorities",
    "asylum_seekers_refugees",
    "elderly_persons_65plus",
    "low_socioeconomic_background",
    "lgbtq_persons",
    "single_parent_households",
    "homeless_persons",
    "persons_with_mental_health_conditions",
]

@dataclass
class AffectedGroupsAssessor:
    system_name: str
    total_affected_population_estimate: int
    groups_present_in_deployment: list[str]

    def elevated_groups_identified(self) -> list[str]:
        return [g for g in self.groups_present_in_deployment if g in ELEVATED_RISK_GROUPS]

    def equity_gap_check(self) -> dict:
        """
        Returns a structured equity gap analysis.
        Flags whether elevated risk groups are adequately addressed
        in the FRIA rights assessment.
        """
        elevated = self.elevated_groups_identified()
        return {
            "total_elevated_groups": len(elevated),
            "groups": elevated,
            "equity_gap_flag": len(elevated) > 0,
            "recommendation": (
                "Conduct intersectional analysis: individuals may belong to "
                "multiple elevated-risk groups, compounding rights exposure."
                if len(elevated) >= 2 else
                "Document specific rights risks for each elevated group "
                "in Art.27(1)(d) assessment."
                if len(elevated) == 1 else
                "No elevated-risk groups identified. Verify deployment scope — "
                "most public AI deployments affect at least one protected group."
            ),
        }

    def fria_scope_adequate(self) -> bool:
        """True if at least one elevated-risk group or an affected-population estimate is documented."""
        return len(self.elevated_groups_identified()) > 0 or self.total_affected_population_estimate > 0

FRIAComplianceChecker — Deployment Authorisation Gate

@dataclass
class FRIAComplianceChecker:
    fria: FRIARecord

    REQUIRED_ELEMENTS = [
        "intended_use_description",
        "geographic_scope",
        "affected_persons_categories",
        "elevated_risk_groups",
        "rights_risk_entries",
        "human_oversight_description",
        "supervisory_bodies",
        "fria_date",
    ]

    def missing_elements(self) -> list[str]:
        missing = []
        for element in self.REQUIRED_ELEMENTS:
            value = getattr(self.fria, element, None)
            if not value:
                missing.append(element)
        return missing

    def prerequisites_met(self) -> dict:
        return {
            "art9_risk_docs": self.fria.art9_risk_docs_received,
            "art13_ifu_docs": self.fria.art13_ifu_docs_received,
            "deployer_type_valid": self.fria.deployer_type in [
                "public_authority", "public_law_body", "public_service_private"
            ],
        }

    def is_deployment_authorized(self) -> bool:
        """
        Returns True only if:
        1. All required Art.27(1)(a)-(g) elements (REQUIRED_ELEMENTS) are present
        2. Art.9 and Art.13 docs were received from provider
        3. No critical unmitigated rights risks remain
        4. FRIA has been formally dated (completed before deployment)
        """
        if self.missing_elements():
            return False
        prereqs = self.prerequisites_met()
        if not all(prereqs.values()):
            return False
        if self.fria.has_critical_unmitigated_risk():
            return False
        if self.fria.fria_date is None:
            return False
        return True

    def deployment_gate_report(self) -> dict:
        return {
            "system": self.fria.system_name,
            "authorized": self.is_deployment_authorized(),
            "missing_elements": self.missing_elements(),
            "prerequisites": self.prerequisites_met(),
            "critical_unmitigated_risk": self.fria.has_critical_unmitigated_risk(),
            "aggregate_risk_score": self.fria.aggregate_risk_score(),
            "fria_version": self.fria.fria_version,
            "art22_3_registration_ready": self.fria.fria_date is not None,
        }

Art.27 Compliance Checklist — 40 Items

Applicability Assessment (Items 1–8)

  1. Confirm the deployer is a public authority, a body governed by public law, or a private entity providing public services.
  2. Confirm the AI system is classified as high-risk under Art.6(2) by reference to Annex III.
  3. Confirm the deployment falls within Annex III point 1, 2, 3, 5, 6, 7, or 8.
  4. Obtain legal analysis of the deployer's public-law status where classification is unclear.
  5. Check whether the deployment falls outside Art.27 scope (e.g. Annex III Category 4 health contexts governed by MDR/IVDR).
  6. Record the applicability determination and its reasoning.
  7. Identify every planned deployment of the same system that requires its own FRIA.
  8. Confirm the FRIA will be completed before go-live.

Art.9 and Art.13 Documentation Receipt (Items 9–12)

  9. Request the provider's Art.9 risk management documentation.
  10. Request the provider's Art.13 instructions for use.
  11. Make documentation delivery a contractual obligation in every AI procurement.
  12. Verify the received documentation covers the deployer's intended use.

Art.27(1)(a): Intended Use Documentation (Items 13–15)

  13. Document the specific decision or process the AI system supports or automates.
  14. Document the organisational and geographic deployment context.
  15. Document the relationship between AI outputs and consequential human decisions.

Art.27(1)(b): Geographic and Temporal Scope (Items 16–17)

  16. Specify the geographic scope (regions, districts, or jurisdictions covered).
  17. Specify the deployment start date, planned review cycles, and any sunset date.

Art.27(1)(c): Affected Persons (Items 18–20)

  18. Identify all categories of directly and indirectly affected persons.
  19. Identify elevated-risk groups present in the deployment population.
  20. Conduct intersectional analysis where individuals belong to multiple elevated-risk groups.

Art.27(1)(d): Fundamental Rights Assessment (Items 21–25)

  21. Map each deployment risk to the relevant Charter rights.
  22. Rate severity and likelihood of rights violations for each affected group.
  23. Ground the assessment in the provider's Art.9 risk management documentation.
  24. Record the residual risk remaining after existing measures.
  25. Escalate any critical unmitigated risk before the deployment decision.

Art.27(1)(e): Mitigation Measures (Items 26–28)

  26. Document implemented mitigation measures for each identified risk.
  27. Document planned pre-deployment measures with owners and deadlines.
  28. Verify that measures are proportionate to the identified risk severity.

Art.27(1)(f): Human Oversight (Items 29–31)

  29. Specify the oversight roles and their authority to intervene or override.
  30. Align the oversight specification with Art.14 and Art.26(4) obligations.
  31. Train the persons exercising oversight before go-live.

Art.27(1)(g): Supervisory Body (Items 32–33)

  32. Identify the supervisory body or bodies responsible for the deployment.
  33. Communicate the FRIA to those bodies as required.

Art.27(3): Update Process (Items 34–36)

  34. Define the triggering events that require a FRIA update.
  35. Establish version control for all FRIA records.
  36. Record the trigger, changes, updated conclusions, and date for each update.

Art.22(3) and Art.46 Integration (Items 37–38)

  37. Complete the FRIA before Art.22(3) EU database registration.
  38. Store the FRIA in an MSA-accessible location with full version history.

Infrastructure and CLOUD Act (Items 39–40)

  39. Assess CLOUD Act exposure of the infrastructure storing FRIA documentation.
  40. Prefer EU-native infrastructure for FRIA records containing sensitive population data.


Art.27 Enforcement Exposure

Art.27 violations are subject to the AI Act's penalty framework under Art.99:

| Violation | Maximum Penalty |
| --- | --- |
| Non-compliance with Art.27 FRIA obligation | EUR 15,000,000 or 3% of global annual turnover (whichever is higher) |
| Supplying incorrect or misleading information to MSA regarding FRIA | EUR 7,500,000 or 1% of global annual turnover |
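The "whichever is higher" rule means the applicable maximum scales with the deployer's turnover once it crosses the break-even point. A worked sketch for the Art.27 non-compliance row:

```python
def max_art27_penalty(global_annual_turnover_eur: float) -> float:
    """Maximum fine for Art.27 non-compliance: the higher of EUR 15m or 3% of turnover."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)
```

At EUR 1bn turnover the 3% branch dominates (EUR 30m); below EUR 500m turnover the fixed EUR 15m floor applies.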

For public authorities, the penalty regime may differ — some Member States limit penalties on public bodies, but the reputational and operational consequences of non-compliance (MSA enforcement orders, deployment suspension under Art.79) are significant regardless of financial penalty applicability.

Key enforcement timeline: Art.27 obligations apply from 2 August 2026 for Annex III high-risk AI systems. The transition period gives deployers only a few months from the date of this guide to complete FRIAs for all planned deployments in scope.


What to Do Now

For Public Authorities Planning AI Deployments (2026 Deadline)

  1. Map your AI portfolio against Art.27(2) categories: Identify every AI system in procurement, pilot, or production that falls within categories 1, 2, 3, 5, 6, 7, or 8 of Annex III.
  2. Request Art.9 + Art.13 documentation from all providers: Make this a contractual obligation in every AI procurement. Without Art.9 docs, the Art.27(1)(d) assessment cannot be completed.
  3. Use the FRA FRIA toolkit: Structured FRIAs based on established methodology are more defensible under Art.46 MSA scrutiny.
  4. Complete FRIAs before 2 August 2026: Build FRIA timelines into AI project delivery schedules. A complete FRIA for a complex law enforcement system may take 3–6 months.
  5. Assess FRIA documentation storage: If your authority uses US cloud infrastructure, evaluate the CLOUD Act jurisdiction risk for FRIA records containing sensitive population data.

For Developers Building AI Systems Sold to Public Authorities

  1. Deliver Art.9 and Art.13 documentation proactively: Your public-authority customers cannot complete their Art.27 FRIA without your documentation. Include documentation delivery in your go-live process.
  2. Notify deployers of substantial modifications: Art.27(3) update triggers include provider-side model updates. Establish a notification process for your public-sector customers.
  3. Design explainability for FRIA: FRIA assessors need to understand how your AI makes decisions to assess fundamental rights risks. Explainability APIs are not optional for systems sold into Annex III public-authority contexts.
  4. Consider where you store your Art.9 documentation: Your own Art.9 records, if stored on US cloud infrastructure, face the same CLOUD Act exposure as your customers' FRIA records.
