EU AI Act + GDPR: Combined DPIA and FRIA Developer Guide (2026)
If you are building an AI system that processes personal data, you likely need two impact assessments: a Data Protection Impact Assessment (DPIA) under GDPR Article 35, and a Fundamental Rights Impact Assessment (FRIA) under EU AI Act Article 27. Running them separately is the default path — but it doubles your compliance workload for what is, in practice, a 60% overlap.
This guide covers the trigger conditions for both assessments, maps the intersection, explains the divergent sections, provides Python CombinedAssessmentTracker tooling, and gives you a 25-item compliance checklist — so you write the shared content once, not twice.
Two Frameworks, One Intersection
GDPR and the EU AI Act address different risks to the same natural persons. GDPR focuses on privacy risks arising from personal data processing. The EU AI Act addresses fundamental rights risks arising from automated decision-making by high-risk AI systems. Both frameworks use an impact assessment as the primary pre-deployment governance tool.
The overlap is structural:
| Requirement | DPIA (GDPR Art.35) | FRIA (EU AI Act Art.27) |
|---|---|---|
| System description | Required | Required |
| Categories of persons affected | Required | Required |
| Processing purposes | Required | Required |
| Data flows | Required | Not required |
| Risk identification | Privacy risks | Fundamental rights risks |
| Mitigation measures | Required | Required |
| DPO consultation | Required (if DPO appointed) | Not required |
| Authority submission | On request / prior consultation | On request |
| Retention | Records of processing activities (Art.30) | 10 years (Art.27(5)) |
The system description, categories of affected persons, and the risk mitigation section overlap almost verbatim. You write them once, reference them twice.
Trigger Conditions
DPIA Trigger (GDPR Art.35)
A DPIA is required when processing is "likely to result in a high risk" to individuals. The three automatic triggers under Art.35(3):
- Systematic and extensive profiling that affects access to services or produces legal/significant effects — covers virtually all Annex III AI categories
- Large-scale processing of special categories (health, biometric, ethnic origin, criminal records)
- Systematic monitoring of publicly accessible areas at large scale
For most Annex III high-risk AI systems, trigger 1 is met. The WP29 guidelines (WP248, endorsed by the EDPB) list nine screening criteria in total (including evaluation or scoring, automated decision-making with legal effect, vulnerable data subjects, innovative technology use, and preventing data subjects from exercising rights or accessing services), each of which increases the likelihood that a DPIA is required. If two or more apply, the guidelines recommend a DPIA.
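The two-or-more rule can be wired into a pre-screening step. A minimal sketch, with paraphrased criterion labels (these names are ours, not official EDPB identifiers):

```python
# The nine WP248 screening criteria, paraphrased (not official identifiers).
EDPB_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_technology_use",
    "prevents_rights_or_service_access",
}

def dpia_likely_required(met: set[str]) -> bool:
    """EDPB rule of thumb: two or more criteria met -> DPIA recommended."""
    unknown = met - EDPB_CRITERIA
    if unknown:
        raise ValueError(f"unknown criteria: {sorted(unknown)}")
    return len(met) >= 2

print(dpia_likely_required({"evaluation_or_scoring", "vulnerable_data_subjects"}))  # True
```

This is a screening aid only: a single criterion met, or none, does not rule a DPIA out if the processing is otherwise likely to result in high risk.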
FRIA Trigger (EU AI Act Art.27)
A FRIA is required when ALL of the following apply:
- You are a deployer (not solely a provider) of the AI system
- The system is listed in Annex III as high-risk (systems intended for critical infrastructure under Annex III point 2 are exempt from the FRIA duty)
- You are a body governed by public law or a private entity providing public services, OR the system is one referred to in Annex III point 5(b) or 5(c): creditworthiness assessment/credit scoring, or risk assessment and pricing for life and health insurance
Unlike the DPIA, which is triggered by processing characteristics, the FRIA is triggered by the deployer's institutional role and the specific Annex III category.
Combined Workflow: Five Phases
Phase 1 — Shared Trigger Check (1 day)
Before writing anything, confirm which assessments apply:
- DPIA: Personal data processed? → Does processing meet an Art.35(3) trigger or ≥2 EDPB criteria? → DPIA required
- FRIA: Annex III high-risk system? → Public authority deployer or qualifying private-sector deployment? → FRIA required
- Both apply: proceed with the combined process
- Only one applies: use that assessment framework alone
If both apply, the combined process saves roughly three weeks of duplicated work.
Phase 2 — Shared Foundation Document (3–5 days)
Write one "System Description" document that feeds both assessments:
1. System purpose and intended use case (maps to DPIA processing description + FRIA Art.27(1)(a))
2. Technical architecture summary (provider, model type, integration points)
3. Input data: categories, sources, retention period, special categories if applicable
4. Output: decisions, recommendations, or classifications produced
5. Categories of natural persons affected (employees, customers, applicants, service users)
6. Automated decision-making: does any output have legal or similarly significant effect?
7. Third-party components: model provider, data processors, sub-processors
8. Deployment context: jurisdiction, organisational size, frequency of use
Both assessments import from this document by reference. Any system change requires only one foundation document update, not two separate updates.
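The eight items above can live in one versioned structure that both assessments import. A minimal sketch (field names and example values are illustrative, not regulatory terms):

```python
from dataclasses import dataclass, field

@dataclass
class FoundationDocument:
    """Shared 'System Description' referenced by both the DPIA and the FRIA.
    Field names are illustrative, not regulatory terminology."""
    version: str
    purpose: str                  # DPIA processing description + FRIA Art.27(1)(a)
    architecture: str             # provider, model type, integration points
    input_data: list[str]         # categories and sources
    special_categories: bool      # Art.9 data present?
    outputs: str                  # decisions, recommendations, classifications
    affected_persons: list[str]   # employees, customers, applicants, ...
    automated_legal_effect: bool  # legal or similarly significant effect?
    third_parties: list[str] = field(default_factory=list)
    deployment_context: str = ""  # jurisdiction, org size, frequency of use

doc = FoundationDocument(
    version="1.0",
    purpose="Rank incoming job applications for recruiter review",
    architecture="Hosted scoring model, HRIS integration",
    input_data=["CV text", "application form answers"],
    special_categories=False,
    outputs="Ranked shortlist with per-candidate score",
    affected_persons=["job applicants"],
    automated_legal_effect=True,
)
```

Bumping `version` on a system change is then the single update point the combined process relies on.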
Phase 3 — Divergent Sections (2–3 days each)
DPIA-specific additions:
- Lawful basis: Art.6 legal basis + Art.9 exception if special categories processed
- Necessity and proportionality: Why this AI approach vs alternatives? Is data minimised?
- Data subject rights impact: Can Art.15-22 rights (access, rectification, erasure, portability, objection) be exercised given the AI system's architecture?
- DPO consultation record: If a DPO is appointed, document their input before deployment
FRIA-specific additions:
- Charter mapping: Which EU Charter articles are at risk? (Art.7 privacy, Art.8 data protection, Art.21 non-discrimination, Art.47 effective remedy, Art.11 freedom of expression, Art.22 cultural/linguistic diversity)
- High-risk classification justification: Which Annex III entry applies? Why does the system meet the threshold?
- Art.14 human oversight measures: How is human override authority implemented? Who has override rights?
- Complaint and redress pathways: How can affected persons challenge automated decisions? Art.22 GDPR right not to be subject to solely automated decisions intersects here
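One way to make the Charter-mapping step concrete is a lookup from Annex III area to the Charter articles most often engaged. This is an assumption-laden starting list, not a legal determination; the FRIA itself must justify the mapping case by case:

```python
# Illustrative starting map: Annex III area -> Charter articles commonly at stake.
CHARTER_STARTING_MAP: dict[str, list[str]] = {
    "employment_workers_management": ["Art.21 non-discrimination", "Art.31 fair working conditions"],
    "credit_scoring": ["Art.21 non-discrimination", "Art.38 consumer protection"],
    "education_access": ["Art.14 right to education", "Art.21 non-discrimination"],
    "law_enforcement": ["Art.6 liberty and security", "Art.47 effective remedy",
                        "Art.48 presumption of innocence"],
}

def charter_candidates(annex_iii_area: str) -> list[str]:
    """Articles to consider: a privacy/data-protection/remedy baseline plus
    any area-specific additions from the starting map."""
    baseline = ["Art.7 private life", "Art.8 data protection", "Art.47 effective remedy"]
    return sorted(set(baseline + CHARTER_STARTING_MAP.get(annex_iii_area, [])))

print(charter_candidates("employment_workers_management"))
# ['Art.21 non-discrimination', 'Art.31 fair working conditions',
#  'Art.47 effective remedy', 'Art.7 private life', 'Art.8 data protection']
```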
Phase 4 — Unified Risk Register (1–2 days)
One risk table, two lenses:
| Risk | GDPR Impact (1–5) | Fundamental Rights Impact (1–5) | Mitigation | Owner |
|---|---|---|---|---|
| Biased model output | 3 (profiling) | 5 (Art.21 non-discrimination) | Bias audit quarterly, human review gate above 85% confidence | ML team |
| Data breach via logs | 5 (special categories) | 3 (Art.8 violation) | Encryption at rest, access controls, Art.33 notification plan | Infra |
| Opaque decision output | 2 | 4 (Art.47 no effective remedy) | Explainability layer, decision log, human review pathway | Product |
| Scope creep (Art.3(24)) | 2 | 4 (Art.21, unintended profiling) | Deployment scope gate, Art.26(1) use-within-intended-purpose check | Legal |
| Cross-border data transfer | 4 (Art.44–50) | 2 | EU-only hosting, no US sub-processors | Infra |
This format satisfies the mitigation documentation requirement in both GDPR (Art.35(7)(d)) and the EU AI Act (Art.27(1)(d)–(e)).
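Because one register serves two lenses, a workable prioritisation rule is to rank rows by the worse of the two impact scores, breaking ties by their sum so dual-framework risks surface first. A sketch using the table's scores (the rule and field names are illustrative, not mandated by either framework):

```python
def prioritise(risks: list[dict]) -> list[dict]:
    """Rank unified-register rows by the worse of the two impact scores (1-5);
    ties broken by the sum of both scores."""
    return sorted(
        risks,
        key=lambda r: (max(r["gdpr"], r["rights"]), r["gdpr"] + r["rights"]),
        reverse=True,
    )

register = [
    {"risk": "Biased model output", "gdpr": 3, "rights": 5},
    {"risk": "Data breach via logs", "gdpr": 5, "rights": 3},
    {"risk": "Opaque decision output", "gdpr": 2, "rights": 4},
]
# sorted() is stable even with reverse=True, so fully tied rows keep table order:
print(prioritise(register)[0]["risk"])  # Biased model output
```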
Phase 5 — Sign-off and Storage (1 day)
- DPIA: DPO sign-off → store in records of processing activities (Art.30) → submit to supervisory authority only if Art.36 prior consultation triggered (residual high risk remains after mitigation)
- FRIA: Authorised signatory → store for 10 years (Art.27(5)) → make available to market surveillance authorities on request
- Both: Version-control the document; any significant system change (DPIA: change to processing purpose/data; FRIA: substantial modification under Art.3(23)) triggers reassessment
Python: CombinedAssessmentTracker
```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
import datetime


class AssessmentType(Enum):
    DPIA_ONLY = "dpia_only"
    FRIA_ONLY = "fria_only"
    COMBINED = "combined"
    NOT_REQUIRED = "not_required"


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4
    CRITICAL = 5


@dataclass
class RiskEntry:
    description: str
    gdpr_impact: RiskLevel
    fundamental_rights_impact: RiskLevel
    mitigation: str
    owner: str
    residual_risk_acceptable: bool


@dataclass
class TriggerAssessment:
    processes_personal_data: bool
    meets_art35_criteria: bool
    is_annex_iii_high_risk: bool
    is_public_authority_deployer: bool
    annex_iii_fria_category: Optional[str]

    def required_assessment(self) -> AssessmentType:
        dpia_required = self.processes_personal_data and self.meets_art35_criteria
        fria_required = self.is_annex_iii_high_risk and (
            self.is_public_authority_deployer or self.annex_iii_fria_category is not None
        )
        if dpia_required and fria_required:
            return AssessmentType.COMBINED
        if dpia_required:
            return AssessmentType.DPIA_ONLY
        if fria_required:
            return AssessmentType.FRIA_ONLY
        return AssessmentType.NOT_REQUIRED


@dataclass
class CombinedAssessmentRecord:
    system_name: str
    assessment_date: str  # ISO format: YYYY-MM-DD
    trigger: TriggerAssessment
    risks: list[RiskEntry] = field(default_factory=list)
    # DPIA-specific
    lawful_basis: Optional[str] = None
    dpo_consulted: bool = False
    prior_consultation_required: bool = False
    # FRIA-specific
    charter_articles_affected: list[str] = field(default_factory=list)
    human_oversight_description: Optional[str] = None
    redress_pathway: Optional[str] = None
    # Shared
    foundation_document_version: str = "1.0"
    sign_off_date: Optional[str] = None

    def deployment_ready(self) -> bool:
        assessment_type = self.trigger.required_assessment()
        if assessment_type == AssessmentType.NOT_REQUIRED:
            return True
        # DPIA gate
        if assessment_type in (AssessmentType.DPIA_ONLY, AssessmentType.COMBINED):
            if not self.lawful_basis:
                return False
            if self.prior_consultation_required:
                return False  # must await Art.36 supervisory authority consultation
        # FRIA gate
        if assessment_type in (AssessmentType.FRIA_ONLY, AssessmentType.COMBINED):
            if not self.human_oversight_description:
                return False
            if not self.charter_articles_affected:
                return False
        # Both: unacceptable residual risks block deployment
        if any(not r.residual_risk_acceptable for r in self.risks):
            return False
        return self.sign_off_date is not None

    def summary(self) -> dict:
        return {
            "system": self.system_name,
            "assessment_type": self.trigger.required_assessment().value,
            "risks": len(self.risks),
            "blocking_risks": len([r for r in self.risks if not r.residual_risk_acceptable]),
            "deployment_ready": self.deployment_ready(),
            # Art.27(5): keep the FRIA for 10 years -> add a decade to the ISO date
            "fria_retention_until": (
                str(int(self.assessment_date[:4]) + 10) + self.assessment_date[4:]
                if self.sign_off_date
                else "not_signed_off"
            ),
        }


# Example: credit-scoring system (Annex III point 5(b)) processing personal data
tracker = CombinedAssessmentRecord(
    system_name="credit-scoring-saas-v2",
    assessment_date=str(datetime.date.today()),
    trigger=TriggerAssessment(
        processes_personal_data=True,
        meets_art35_criteria=True,  # profiling with significant effects
        is_annex_iii_high_risk=True,  # Annex III point 5(b): creditworthiness/credit scoring
        is_public_authority_deployer=False,
        annex_iii_fria_category="credit_scoring",
    ),
    lawful_basis="Art.6(1)(b) performance of contract",
    charter_articles_affected=["Art.21 non-discrimination", "Art.47 effective remedy"],
    human_oversight_description="All refusals reviewed by a credit officer before applicant notification",
    redress_pathway="Applicant can request human review within 14 days via credit-review@company.com",
    sign_off_date=str(datetime.date.today()),
)

tracker.risks.append(RiskEntry(
    description="Biased model output from training data",
    gdpr_impact=RiskLevel.HIGH,
    fundamental_rights_impact=RiskLevel.CRITICAL,
    mitigation="Quarterly bias audit, human review gate above 80% confidence",
    owner="ML team",
    residual_risk_acceptable=True,
))

print(tracker.summary())
# e.g. {'system': 'credit-scoring-saas-v2', 'assessment_type': 'combined', 'risks': 1,
#       'blocking_risks': 0, 'deployment_ready': True,
#       'fria_retention_until': <assessment date plus 10 years>}
```
EU Jurisdiction: Why Hosting Location Matters
Both the DPIA and FRIA require you to assess cross-border data transfer risk. If you use a US-headquartered cloud provider as your AI infrastructure host:
- DPIA: Requires a Chapter V adequacy decision, SCCs, or BCRs. The CLOUD Act means US authorities can compel disclosure of data stored on US-provider infrastructure regardless of server location — this must appear in your DPIA risk section.
- FRIA: Art.27(5) requires 10-year retention of the FRIA document. If that document contains sensitive information about affected persons' fundamental rights risks, its jurisdiction matters.
With an EU-native host (no parent company subject to CLOUD Act jurisdiction), both of these risks largely drop out of your risk register. That removes one table row from your unified risk register and simplifies your prior consultation analysis.
25-Item Combined DPIA + FRIA Checklist
Trigger and Scope (5 items)
- 1. Confirmed DPIA trigger: personal data processed AND Art.35(3) criterion or ≥2 EDPB criteria met
- 2. Confirmed FRIA trigger: Annex III high-risk AND public authority deployer OR qualifying category
- 3. Identified combined assessment approach (avoid duplication)
- 4. Foundation document version control established
- 5. Reassessment triggers defined (Art.3(23) substantial modification, change to processing purpose)
DPIA-Specific (7 items)
- 6. Lawful basis under Art.6(1) documented
- 7. Art.9 exception documented if special categories processed
- 8. Necessity and proportionality analysis completed
- 9. Data minimisation verified: only data necessary for intended purpose collected
- 10. Data subject rights exercisability assessed (Art.15–22 practical access)
- 11. DPO consulted and consultation recorded (if DPO appointed)
- 12. Prior consultation to supervisory authority assessed — Art.36 triggered if residual high risk?
FRIA-Specific (7 items)
- 13. Annex III classification justification documented
- 14. EU Charter articles at risk identified (Art.7, Art.8, Art.21, Art.47 minimum)
- 15. Affected groups enumerated (Art.27(1)(c): specific categories at elevated risk)
- 16. Human oversight implementation documented per Art.14 (override authority, who, how)
- 17. Redress and complaint pathway documented
- 18. Duration and frequency of AI system use documented (Art.27(1)(b))
- 19. FRIA stored for 10 years from sign-off date (Art.27(5))
Unified Risk Register (6 items)
- 20. Each risk has GDPR impact score + fundamental rights impact score
- 21. Each risk has documented mitigation and identified owner
- 22. Residual risk acceptability determined for each entry
- 23. Blocking risks (residual risk not acceptable) prevent deployment
- 24. Cross-border data transfer risk assessed (CLOUD Act, adequacy status)
- 25. Risk register linked to Art.9 EU AI Act risk management system if provider obligations also apply
Key Deadlines
| Deadline | Framework | Obligation |
|---|---|---|
| Already in effect (2018) | GDPR | DPIA for high-risk processing |
| 2 August 2026 | EU AI Act | FRIA for qualifying Annex III deployers |
| December 2027 (if Digital Omnibus adopted) | EU AI Act | Possible extension for some Annex III categories |
The GDPR DPIA obligation is not affected by any EU AI Act timeline. If you are not running DPIAs for high-risk AI-driven processing today, that is an existing compliance gap.
Summary
Running a combined DPIA + FRIA is not just possible — it is the efficient path for any team building Annex III high-risk AI that processes personal data. The shared foundation document (system description, affected persons, unified risk register) covers most of the work. The divergent sections — lawful basis and DPO consultation for DPIA, Charter mapping and human oversight description for FRIA — are additive and relatively short.
Start with the trigger check, build the foundation document first, then add each framework's specific lens. Use the CombinedAssessmentRecord class as your machine-readable compliance record. Store both assessments under version control, update on significant system changes, and keep the FRIA accessible for a decade.