2026-04-16 · 12 min read

EU AI Act + GDPR: Combined DPIA and FRIA Developer Guide (2026)

If you are building an AI system that processes personal data, you likely need two impact assessments: a Data Protection Impact Assessment (DPIA) under GDPR Article 35, and a Fundamental Rights Impact Assessment (FRIA) under EU AI Act Article 27. Running them separately is the default path — but it doubles your compliance workload for what is, in practice, a 60% overlap.

This guide covers the trigger conditions for both assessments, maps the intersection, explains the divergent sections, provides a Python CombinedAssessmentRecord tracker, and gives you a 25-item compliance checklist — so you write the shared content once, not twice.


Two Frameworks, One Intersection

GDPR and the EU AI Act address different risks to the same natural persons. GDPR focuses on privacy risks arising from personal data processing. The EU AI Act addresses fundamental rights risks arising from automated decision-making by high-risk AI systems. Both frameworks use an impact assessment as the primary pre-deployment governance tool.

The overlap is structural:

| Requirement | DPIA (GDPR Art.35) | FRIA (EU AI Act Art.27) |
|---|---|---|
| System description | Required | Required |
| Categories of persons affected | Required | Required |
| Processing purposes | Required | Required |
| Data flows | Required | Not required |
| Risk identification | Privacy risks | Fundamental rights risks |
| Mitigation measures | Required | Required |
| DPO consultation | Required (if DPO appointed) | Not required |
| Authority submission | On request / prior consultation | On request |
| Retention | Records of processing activities (Art.30) | 10 years (Art.27(5)) |

The system description, categories of affected persons, and the risk mitigation section overlap almost verbatim. You write them once, reference them twice.


Trigger Conditions

DPIA Trigger (GDPR Art.35)

A DPIA is required when processing is "likely to result in a high risk" to individuals. The three automatic triggers under Art.35(3):

  1. Systematic and extensive profiling that affects access to services or produces legal/significant effects — covers virtually all Annex III AI categories
  2. Large-scale processing of special categories (health, biometric, ethnic origin, criminal records)
  3. Systematic monitoring of publicly accessible areas at large scale

For most Annex III high-risk AI systems, trigger 1 is met. The WP248 guidelines (endorsed by the EDPB) list nine high-risk criteria, including novel use of technology, automated decision-making with significant effects, vulnerable data subjects, and processing that prevents data subjects from exercising a right or using a service, and recommend a DPIA when two or more apply.
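The DPIA trigger logic above can be sketched as a small helper. This is illustrative only: the trigger and criteria names are hypothetical labels, not terms from any official tooling, and real trigger analysis needs legal review.

```python
# Illustrative DPIA trigger check: any Art.35(3) automatic trigger, or
# two or more WP248/EDPB criteria, means a DPIA is required.
# Trigger names below are hypothetical shorthand for the three
# Art.35(3) grounds described in the text.

ART_35_3_TRIGGERS = {
    "systematic_extensive_profiling",   # Art.35(3)(a)
    "large_scale_special_categories",   # Art.35(3)(b)
    "systematic_public_monitoring",     # Art.35(3)(c)
}

def dpia_required(triggers: set[str], edpb_criteria_met: int) -> bool:
    """DPIA required if an Art.35(3) trigger applies, or if two or
    more WP248/EDPB criteria are met (the recommended threshold)."""
    if triggers & ART_35_3_TRIGGERS:
        return True
    return edpb_criteria_met >= 2

# A CV-screening system: profiling with significant effects meets trigger 1
print(dpia_required({"systematic_extensive_profiling"}, 1))  # True
```

One EDPB criterion alone does not tip the scale; the function only returns True without an Art.35(3) trigger once two or more criteria apply.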

FRIA Trigger (EU AI Act Art.27)

A FRIA is required when ALL of the following apply:

  1. You are a deployer (not solely a provider) of the AI system
  2. The system is listed in Annex III as high-risk
  3. You are a public authority or private entity acting in its name, OR the system falls within one of six categories: employment/workers management, essential private services (credit scoring, insurance), education (access/assessment), law enforcement, migration/asylum/border, or administration of justice/democratic processes

Unlike the DPIA, which is triggered by processing characteristics, the FRIA is triggered by the deployer's institutional role and the specific Annex III category.


Combined Workflow: Five Phases

Phase 1 — Shared Trigger Check (1 day)

Before writing anything, confirm which assessments apply:

DPIA:  Personal data processed? → Does processing meet Art.35(3) or ≥2 EDPB criteria? → DPIA required
FRIA:  Annex III high-risk system? → Public authority deployer or qualifying private sector? → FRIA required
Both:  Proceed with combined process
One:   Use that assessment framework alone

If both apply, the combined process saves roughly three weeks of duplicated work.

Phase 2 — Shared Foundation Document (3–5 days)

Write one "System Description" document that feeds both assessments:

1. System purpose and intended use case (maps to DPIA processing description + FRIA Art.27(1)(a))
2. Technical architecture summary (provider, model type, integration points)
3. Input data: categories, sources, retention period, special categories if applicable
4. Output: decisions, recommendations, or classifications produced
5. Categories of natural persons affected (employees, customers, applicants, service users)
6. Automated decision-making: does any output have legal or similarly significant effect?
7. Third-party components: model provider, data processors, sub-processors
8. Deployment context: jurisdiction, organisational size, frequency of use

Both assessments import from this document by reference. Any system change requires only one foundation document update, not two separate updates.
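The eight sections above can be captured as a versioned record that both assessments pin by reference. A minimal sketch, assuming the field names below (they mirror the list above but are otherwise hypothetical):

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass
class FoundationDocument:
    """Shared system description that both DPIA and FRIA cite.
    Field names are illustrative, mapped to the eight sections above."""
    system_purpose: str
    architecture_summary: str
    input_data_categories: list[str]
    outputs: list[str]
    affected_persons: list[str]
    has_significant_automated_decisions: bool
    third_parties: list[str]
    deployment_context: str
    version: str = "1.0"

    def fingerprint(self) -> str:
        # Content hash lets each assessment record exactly which
        # foundation document state it relied on.
        payload = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]

doc = FoundationDocument(
    system_purpose="CV screening for recruitment",
    architecture_summary="Hosted model plus ranking layer",
    input_data_categories=["CV text", "application form"],
    outputs=["shortlist recommendation"],
    affected_persons=["job applicants"],
    has_significant_automated_decisions=True,
    third_parties=["model provider", "EU hosting provider"],
    deployment_context="EU, mid-size employer, daily use",
)
print(doc.version, doc.fingerprint())  # both assessments cite this pair
```

Because the hash is deterministic over the content, any change to the system description produces a new fingerprint, which is exactly the "one update, not two" property the combined process relies on.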

Phase 3 — Divergent Sections (2–3 days each)

DPIA-specific additions:

- Lawful basis for each processing purpose (Art.6; Art.9 condition for special categories)
- Necessity and proportionality assessment (Art.35(7)(b))
- DPO consultation record, where a DPO is appointed
- Prior consultation analysis: if residual risk remains high, consult the supervisory authority under Art.36 before processing starts

FRIA-specific additions:

- Charter mapping: which fundamental rights each identified risk touches (e.g. Art.21 non-discrimination, Art.47 effective remedy)
- Human oversight measures and who exercises them (Art.27(1)(e))
- Complaint and redress pathway for affected persons (Art.27(1)(f))
- Period and frequency of intended use (Art.27(1)(b))

Phase 4 — Unified Risk Register (1–2 days)

One risk table, two lenses:

| Risk | GDPR impact (1–5) | Fundamental rights impact (1–5) | Mitigation | Owner |
|---|---|---|---|---|
| Biased model output | 3 (profiling) | 5 (Art.21 non-discrimination) | Bias audit quarterly, human review gate above 85% confidence | ML team |
| Data breach via logs | 5 (special categories) | 3 (Art.8 violation) | Encryption at rest, access controls, Art.33 notification plan | Infra |
| Opaque decision output | 2 | 4 (Art.47 no effective remedy) | Explainability layer, decision log, human review pathway | Product |
| Scope creep (Art.3(24)) | 2 | 4 (Art.21, unintended profiling) | Deployment scope gate, Art.26(1) use-within-intended-purpose check | Legal |
| Cross-border data transfer | 4 (Art.44–50) | 2 | EU-only hosting, no US sub-processors | Infra |

This format satisfies the mitigation documentation requirement in both GDPR (Art.35(7)(d)) and the EU AI Act (Art.27(1)(d)–(e)).
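Since both documents consume the same register, it helps to render the table from one source of truth. A minimal sketch, assuming a list of plain dicts with hypothetical keys:

```python
# Render the two-lens risk register as a table, so the DPIA and FRIA
# documents both import the same rows. Dict keys are illustrative.

def render_risk_register(risks: list[dict]) -> str:
    lines = [
        "| Risk | GDPR (1-5) | Fundamental rights (1-5) | Mitigation | Owner |",
        "|---|---|---|---|---|",
    ]
    for r in risks:
        lines.append(
            f"| {r['risk']} | {r['gdpr']} | {r['rights']} "
            f"| {r['mitigation']} | {r['owner']} |"
        )
    return "\n".join(lines)

table = render_risk_register([
    {"risk": "Biased model output", "gdpr": 3, "rights": 5,
     "mitigation": "Quarterly bias audit", "owner": "ML team"},
])
print(table)
```

Keeping the register as data rather than prose also makes the "blocking residual risk" gate in Phase 5 mechanically checkable.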

Phase 5 — Sign-off and Storage (1 day)

- Obtain sign-off: the DPO for the DPIA, the accountable deployer function for the FRIA
- Store both assessments under version control, pinned to the foundation document version
- Schedule retention: keep the FRIA available for 10 years (Art.27(5)) and the DPIA alongside your Art.30 records
- Notify the market surveillance authority of the FRIA results where Art.27(3) requires it


Python: CombinedAssessmentRecord Tracker

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
import datetime

class AssessmentType(Enum):
    DPIA_ONLY = "dpia_only"
    FRIA_ONLY = "fria_only"
    COMBINED = "combined"
    NOT_REQUIRED = "not_required"

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4
    CRITICAL = 5

@dataclass
class RiskEntry:
    description: str
    gdpr_impact: RiskLevel
    fundamental_rights_impact: RiskLevel
    mitigation: str
    owner: str
    residual_risk_acceptable: bool

@dataclass
class TriggerAssessment:
    processes_personal_data: bool
    meets_art35_criteria: bool
    is_annex_iii_high_risk: bool
    is_public_authority_deployer: bool
    annex_iii_fria_category: Optional[str]

    def required_assessment(self) -> AssessmentType:
        dpia_required = self.processes_personal_data and self.meets_art35_criteria
        fria_required = (
            self.is_annex_iii_high_risk and
            (self.is_public_authority_deployer or self.annex_iii_fria_category is not None)
        )
        if dpia_required and fria_required:
            return AssessmentType.COMBINED
        elif dpia_required:
            return AssessmentType.DPIA_ONLY
        elif fria_required:
            return AssessmentType.FRIA_ONLY
        return AssessmentType.NOT_REQUIRED

@dataclass
class CombinedAssessmentRecord:
    system_name: str
    assessment_date: str
    trigger: TriggerAssessment
    risks: list[RiskEntry] = field(default_factory=list)
    # DPIA-specific
    lawful_basis: Optional[str] = None
    dpo_consulted: bool = False
    prior_consultation_required: bool = False
    # FRIA-specific
    charter_articles_affected: list[str] = field(default_factory=list)
    human_oversight_description: Optional[str] = None
    redress_pathway: Optional[str] = None
    # Shared
    foundation_document_version: str = "1.0"
    sign_off_date: Optional[str] = None

    def deployment_ready(self) -> bool:
        assessment_type = self.trigger.required_assessment()
        if assessment_type == AssessmentType.NOT_REQUIRED:
            return True
        # DPIA gate
        if assessment_type in (AssessmentType.DPIA_ONLY, AssessmentType.COMBINED):
            if not self.lawful_basis:
                return False
            if self.prior_consultation_required:
                return False  # Must await supervisory authority consultation
        # FRIA gate
        if assessment_type in (AssessmentType.FRIA_ONLY, AssessmentType.COMBINED):
            if not self.human_oversight_description:
                return False
            if not self.charter_articles_affected:
                return False
        # Both: unacceptable residual risks block deployment
        blocking_risks = [r for r in self.risks if not r.residual_risk_acceptable]
        if blocking_risks:
            return False
        return self.sign_off_date is not None

    def summary(self) -> dict:
        return {
            "system": self.system_name,
            "assessment_type": self.trigger.required_assessment().value,
            "risks": len(self.risks),
            "blocking_risks": len([r for r in self.risks if not r.residual_risk_acceptable]),
            "deployment_ready": self.deployment_ready(),
            "fria_retention_until": (
                str(int(self.assessment_date[:4]) + 10) + self.assessment_date[4:]
                if self.sign_off_date else "not_signed_off"
            ),
        }


# Example: employment AI (Annex III point 4) + personal data processing
tracker = CombinedAssessmentRecord(
    system_name="cv-screening-saas-v2",
    assessment_date=str(datetime.date.today()),
    trigger=TriggerAssessment(
        processes_personal_data=True,
        meets_art35_criteria=True,      # profiling with significant effects
        is_annex_iii_high_risk=True,    # Annex III point 4: employment/recruitment
        is_public_authority_deployer=False,
        annex_iii_fria_category="employment_workers_management",
    ),
    lawful_basis="Art.6(1)(b) performance of contract",
    charter_articles_affected=["Art.21 non-discrimination", "Art.47 effective remedy"],
    human_oversight_description="All rejections reviewed by HR manager before candidate notification",
    redress_pathway="Candidate can request human review within 14 days via hr-review@company.com",
    sign_off_date=str(datetime.date.today()),
)
tracker.risks.append(RiskEntry(
    description="Biased model output from training data",
    gdpr_impact=RiskLevel.HIGH,
    fundamental_rights_impact=RiskLevel.CRITICAL,
    mitigation="Quarterly bias audit, human review gate above 80% confidence",
    owner="ML team",
    residual_risk_acceptable=True,
))

print(tracker.summary())
# Example output for a run on 2026-04-16 (dates depend on run date):
# {'system': 'cv-screening-saas-v2', 'assessment_type': 'combined', 'risks': 1,
#  'blocking_risks': 0, 'deployment_ready': True, 'fria_retention_until': '2036-04-16'}

EU Jurisdiction: Why Hosting Location Matters

Both the DPIA and FRIA require you to assess cross-border data transfer risk. If you use a US-headquartered cloud provider as your AI infrastructure host, two rows land in your register:

- Chapter V transfer risk (Art.44–50): personal data may be transferred to, or remain accessible from, a third country, so you need transfer safeguards and a transfer impact assessment
- Extraterritorial access risk: a US parent company can be compelled to disclose data under the CLOUD Act, wherever the servers physically sit

With an EU-native host (no parent company subject to CLOUD Act jurisdiction), both rows drop out of your unified risk register, which also simplifies your prior consultation analysis.
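As a sketch, hosting jurisdiction can be turned into risk-register rows mechanically (a hypothetical helper; the jurisdiction labels and row texts are illustrative, not legal conclusions):

```python
# Hypothetical helper: derive cross-border risk rows from the set of
# jurisdictions the host (and its parent company) is subject to.

def transfer_risk_rows(host_jurisdictions: set[str]) -> list[str]:
    rows = []
    if not host_jurisdictions <= {"EU", "EEA"}:
        # Any non-EU/EEA jurisdiction puts Chapter V in scope
        rows.append("Cross-border transfer (GDPR Art.44-50)")
        if "US" in host_jurisdictions:
            # US parent companies are subject to CLOUD Act orders
            rows.append("Non-EU government access (CLOUD Act)")
    return rows

print(transfer_risk_rows({"EU", "US"}))  # two rows
print(transfer_risk_rows({"EU"}))        # []
```

An EU-only jurisdiction set yields an empty list, matching the point above: EU-native hosting removes these rows from the register entirely.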


25-Item Combined DPIA + FRIA Checklist

Trigger and Scope (5 items)

- Confirmed whether the system processes personal data
- Checked the Art.35(3) triggers and counted applicable EDPB criteria
- Confirmed whether the system falls under an Annex III high-risk category
- Determined deployer role: public authority or qualifying private category (Art.27)
- Recorded the outcome: combined, DPIA-only, FRIA-only, or not required

DPIA-Specific (7 items)

- Lawful basis documented for each processing purpose (Art.6; Art.9 for special categories)
- Necessity and proportionality assessed (Art.35(7)(b))
- Data flows mapped, including retention periods and sub-processors
- DPO consulted, where one is appointed (Art.35(2))
- Data subject views sought where appropriate (Art.35(9))
- Prior consultation with the supervisory authority triggered if residual risk remains high (Art.36)
- DPIA linked to the records of processing activities (Art.30)

FRIA-Specific (7 items)

- Deployer processes and intended purpose described (Art.27(1)(a))
- Period and frequency of intended use documented (Art.27(1)(b))
- Categories of natural persons and groups likely affected listed (Art.27(1)(c))
- Specific risks of harm to those categories identified (Art.27(1)(d))
- Human oversight measures described (Art.27(1)(e))
- Governance and complaint/redress measures documented (Art.27(1)(f))
- Market surveillance authority notification prepared (Art.27(3))

Unified Risk Register (6 items)

- Every risk scored on both the GDPR and fundamental rights axes
- Mitigation measure and owner assigned to each risk
- Residual risk acceptability recorded per risk
- Cross-border transfer and hosting jurisdiction risk assessed
- Review triggers defined for significant system changes
- Sign-off recorded and retention scheduled (FRIA: 10 years)


Key Deadlines

| Deadline | Framework | Obligation |
|---|---|---|
| Already in effect (2018) | GDPR | DPIA for high-risk processing |
| 2 August 2026 | EU AI Act | FRIA for qualifying Annex III deployers |
| December 2027 (if Digital Omnibus adopted) | EU AI Act | Possible extension for some Annex III categories |

The GDPR DPIA obligation is not affected by any EU AI Act timeline. If you are not running DPIAs for high-risk AI-driven processing today, that is an existing compliance gap.


Summary

Running a combined DPIA + FRIA is not just possible — it is the efficient path for any team building Annex III high-risk AI that processes personal data. The shared foundation document (system description, affected persons, unified risk register) covers most of the work. The divergent sections — lawful basis and DPO consultation for DPIA, Charter mapping and human oversight description for FRIA — are additive and relatively short.

Start with the trigger check, build the foundation document first, then add each framework's specific lens. Use the CombinedAssessmentRecord class as your machine-readable compliance record. Store both assessments under version control, update on significant system changes, and keep the FRIA accessible for a decade.