GDPR Art.35: Data Protection Impact Assessment (DPIA) — When Required, How to Conduct It & Prior Consultation (2026)
Post #429 in the sota.io EU Cyber Compliance Series
A Data Protection Impact Assessment (DPIA) is GDPR's mandatory risk assessment for high-risk processing. Before deploying any feature or system that is "likely to result in a high risk to the rights and freedoms of natural persons," a controller must conduct a DPIA — not as a bureaucratic checkbox, but as a documented analysis of necessity, proportionality, and risk mitigation. Skip it, and supervisory authorities (SAs) can issue fines, order processing to stop, or require prior consultation before the system goes live.
For engineers, the DPIA has two direct consequences: it can block a feature deployment (if Art.36 prior consultation is required and the SA imposes conditions) and it surfaces architecture decisions — encryption choices, retention periods, access control models — that become legally relevant. This post covers when a DPIA is mandatory, what it must contain, and how to implement a DPIA workflow in Python.
GDPR Chapter IV: Art.35 in Context
| Article | Obligation | Who | Relationship |
|---|---|---|---|
| Art.25 | Privacy by Design | Controller | Architecture safeguards |
| Art.30 | Records of Processing (RoPA) | Controller + Processor | Inventory input for DPIA |
| Art.32 | Security of Processing | Controller + Processor | TOMs documented in DPIA |
| Art.33-34 | Breach Notification | Controller + Processor | DPIA predicts breach likelihood |
| Art.35 | Data Protection Impact Assessment | Controller | Pre-deployment risk assessment |
| Art.36 | Prior Consultation | Controller → SA | When residual risk remains high |
| Art.37-39 | Data Protection Officer | Controller + Processor | DPO consulted in Art.35(8) |
Art.35 is the forward-looking instrument: where Art.30 records what you process, Art.35 assesses the risk of what you plan to process before it goes live. A controller who skips the DPIA and later needs to notify a breach under Art.33 will face compounded regulatory scrutiny — inspectors will ask why the risk was not assessed in advance.
Art.35(1): The Threshold — "Likely to Result in a High Risk"
Art.35(1) requires a DPIA when processing is "likely to result in a high risk to the rights and freedoms of natural persons." Three elements:
- "Likely" — not certain, not inevitable. A reasonable probability assessment based on the nature, scope, context, and purposes of processing. Doubt resolves in favour of conducting a DPIA.
- "High risk" — above the ordinary risks that accompany any personal data processing. The Article 29 Working Party DPIA Guidelines (WP248 rev.01), endorsed by the EDPB, set out nine criteria (see below).
- "In particular using new technologies" — Art.35(1) singles out new technologies and innovative processing forms for heightened scrutiny, precisely because their risk profile is uncertain.
The threshold is controller-assessed. If a controller wrongly concludes no DPIA is needed and an SA later finds a DPIA was required, the controller bears the burden of justifying that conclusion under Art.5(2) accountability.
Art.35(3): Three Mandatory DPIA Cases
Art.35(3) identifies processing that always requires a DPIA, regardless of the controller's risk assessment:
Art.35(3)(a) — Systematic Profiling and Automated Decision-Making
A systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person.
Developer triggers:
- Credit scoring, insurance risk scoring, loan eligibility assessment
- HR systems that score candidates for hiring decisions
- Fraud detection that blocks accounts based on behavioural patterns
- Any ML model whose output directly gates access to services, benefits, or employment
Key terms: "systematic and extensive" means regular, methodical, not one-off. "Produce legal effects or similarly significantly affect" mirrors Art.22 automated decision-making — the DPIA and Art.22 safeguards overlap here.
Art.35(3)(b) — Large-Scale Processing of Special Categories
Processing on a large scale of special categories of data referred to in Art.9(1), or of personal data relating to criminal convictions and offences referred to in Art.10.
Art.9(1) special categories:
- Health, genetic, biometric data
- Racial/ethnic origin, political opinions, religious/philosophical beliefs
- Trade union membership, sex life/sexual orientation
"Large scale" — not defined in the Regulation. EDPB uses factors: number of data subjects, geographical extent, duration, breadth of data processed. A hospital processing patient records is large scale. A GP's single practice may not be, but an aggregator of GP records is.
Developer triggers:
- SaaS health apps processing symptom data at scale
- Biometric authentication systems (face recognition, fingerprint) across many users
- HR platforms processing sensitive employee data across many employers
- Analytics platforms that infer special-category data (health inference from purchase patterns)
Art.35(3)(c) — Systematic Monitoring of Public Areas at Large Scale
Systematic monitoring of a publicly accessible area on a large scale.
Developer triggers:
- CCTV networks with face recognition or behavioural analytics
- Smart city sensor arrays (Bluetooth tracking, Wi-Fi probe detection)
- Large-scale web scraping of public social media for profiling purposes
- Stadium or venue analytics that track movement of crowds
The Nine "High Risk" Criteria (WP248 rev.01, EDPB-Endorsed)
The Article 29 Working Party DPIA Guidelines (WP248 rev.01), endorsed by the EDPB, specify nine criteria. Meeting two or more typically requires a DPIA; meeting one in a severe form may suffice; meeting none means a DPIA is likely unnecessary.
| # | Criterion | Developer Example |
|---|---|---|
| 1 | Evaluation or scoring | User segmentation, churn prediction, lead scoring |
| 2 | Automated decision-making with legal/significant effect | Account suspension, loan denial, content moderation bans |
| 3 | Systematic monitoring | Session replay tools, user behaviour analytics, persistent device fingerprinting |
| 4 | Sensitive data or highly personal data | Health records, financial data, location data combined with identity |
| 5 | Data processed at large scale | >100K data subjects, multi-country deployment |
| 6 | Matching or combining data sets | Cross-referencing purchase history + location + demographic for profiling |
| 7 | Data concerning vulnerable persons | Children, employees under supervision, patients, asylum seekers |
| 8 | Innovative use or new technology | LLM-based analysis of personal data, biometric systems, IoT inference |
| 9 | Transfer or denial of rights | Blacklisting services that bar subjects from contracts |
SaaS examples by criteria count:
- Basic blog platform (auth + posts): 0 criteria → no DPIA
- B2C analytics with device fingerprinting: criteria 3+5 → DPIA required
- HR platform with performance scoring: criteria 1+2+4+7 → DPIA mandatory
- Healthcare SaaS with ML triage: criteria 1+2+4+5+8 → DPIA mandatory (also Art.35(3)(b))
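The criteria counts above can be checked mechanically. A minimal sketch — product names and criteria sets are taken from the examples in this list, and the ≥2 threshold follows the guidelines' rule of thumb (it is a rule of thumb, not a statutory test):

```python
# Criteria are referenced by their number in the nine-criteria table above.
# Example products and their criteria sets mirror the list in this post.
EXAMPLES: dict[str, set[int]] = {
    "basic_blog_platform": set(),             # auth + posts only
    "b2c_analytics_fingerprinting": {3, 5},   # systematic monitoring + large scale
    "hr_performance_scoring": {1, 2, 4, 7},   # scoring, ADM, sensitive, vulnerable
    "healthcare_ml_triage": {1, 2, 4, 5, 8},  # also Art.35(3)(b) mandatory
}

def dpia_required(criteria: set[int]) -> str:
    """Apply the two-criteria rule of thumb from WP248 rev.01."""
    if len(criteria) >= 2:
        return "DPIA required"
    if len(criteria) == 1:
        return "assess severity; DPIA if the single criterion is severe"
    return "no DPIA; document the conclusion (Art.5(2))"

for product, criteria in EXAMPLES.items():
    print(f"{product}: {len(criteria)} criteria -> {dpia_required(criteria)}")
```

Note the zero-criteria branch still returns an instruction to document — the conclusion that no DPIA is needed is itself an accountability artifact.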
Art.35(5): Negative List — Where DPIA Is Not Required
Art.35(5) allows an SA to publish a negative list of processing operations for which no DPIA is required, because the SA has already established that the processing cannot result in high risk (considering safeguards in place). Published negative lists cover narrow, well-understood processing forms.
Practical note: Don't assume a generic activity is on a negative list. Check your lead SA's published list. For standard processing (employee directory, basic contact form), most SA negative lists confirm no DPIA needed — but this must be documented as the reason, not assumed.
Art.35(7): DPIA Content Requirements
A DPIA that satisfies Art.35(7) must contain at minimum:
Art.35(7)(a) — Description of Processing
(a) A systematic description of the envisaged processing operations and
the purposes of the processing, including, where applicable, the
legitimate interest pursued by the controller.
Include:
- Data flows (sources → storage → processing → recipients)
- Data categories and volumes
- Retention periods
- Technical stack (systems involved, sub-processors)
- Purposes and legal bases (Art.6 + Art.9 where applicable)
- Cross-references to Art.30 RoPA entries for consistency
Art.35(7)(b) — Necessity and Proportionality
(b) An assessment of the necessity and proportionality of the processing
operations in relation to the purposes.
Include:
- Why this processing is necessary for the stated purpose (not just useful)
- Whether a less privacy-intrusive alternative achieves the same purpose
- Data minimisation analysis (could fewer fields achieve the same outcome?)
- Retention justification (why this period, not shorter?)
- Legal basis assessment (is the basis appropriate, not just available?)
Art.35(7)(c) — Risk Assessment
(c) An assessment of the risks to the rights and freedoms of data subjects
referred to in paragraph 1.
Risk dimensions:
- Likelihood of harm (low/medium/high)
- Severity of harm (low/medium/high)
- Types of harm: financial, reputational, physical, discrimination, loss of autonomy
- Threat scenarios: data breach, purpose creep, inaccurate data, re-identification
Risk matrix:
| Likelihood | Severity | Risk Level | Action |
|---|---|---|---|
| Low | Low | Residual — acceptable | Document and proceed |
| Low | High | Medium — mitigable | Add safeguards |
| High | Low | Medium — mitigable | Add safeguards |
| High | High | High — Art.36 required | Prior consultation or redesign |
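The matrix translates directly into code. A minimal sketch using only the two levels that appear in the table; the action strings are this post's wording, not regulatory text:

```python
from enum import Enum

class Level(Enum):
    LOW = "low"
    HIGH = "high"

# (likelihood, severity) -> (risk level, action) — mirrors the matrix above.
RISK_MATRIX = {
    (Level.LOW, Level.LOW):   ("residual", "document and proceed"),
    (Level.LOW, Level.HIGH):  ("medium",   "add safeguards"),
    (Level.HIGH, Level.LOW):  ("medium",   "add safeguards"),
    (Level.HIGH, Level.HIGH): ("high",     "Art.36 prior consultation or redesign"),
}

def classify(likelihood: Level, severity: Level) -> tuple[str, str]:
    return RISK_MATRIX[(likelihood, severity)]

print(classify(Level.HIGH, Level.HIGH))
# → ('high', 'Art.36 prior consultation or redesign')
```

The Python implementation later in this post uses a three-level scale (low/medium/high); the same lookup-table pattern extends to it.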
Art.35(7)(d) — Measures to Address Risks
(d) The measures envisaged to address the risks, including safeguards,
security measures and mechanisms to ensure the protection of personal data
and to demonstrate compliance with this Regulation.
Include for each identified risk:
- Specific technical measure (encryption spec, RBAC model, pseudonymisation scheme)
- Organisational measure (access policy, training, incident response procedure)
- Residual risk after mitigation (what risk remains after the measure is applied)
- Review trigger (when the DPIA will be revisited — new feature, new processor, data breach)
Art.35(8): DPO Consultation
When a Data Protection Officer (DPO) is designated under Art.37, the controller must seek the DPO's advice before finalising the DPIA. The DPO's advice and the controller's decision on whether to follow it must both be documented.
Practical workflow:
- Draft DPIA (controller's team)
- Submit to DPO for review (at least 2 weeks for substantive review)
- DPO provides written advice on risk assessment and measures
- Controller decides: accept advice, partially accept, or document why advice was not followed
- Finalise DPIA with DPO advice documented
Ignoring DPO advice does not invalidate the DPIA, but it creates a documented gap that SAs will scrutinise in an investigation.
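One way to keep the Art.35(8) trail auditable is a small consultation record. A sketch with illustrative field names — nothing here is prescribed by the Regulation, but each field maps to a step in the workflow above:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DPOConsultation:
    """Illustrative Art.35(8) consultation record; field names are our own."""
    dpia_reference: str
    submitted_to_dpo: date
    advice_received: Optional[date]
    advice_summary: str
    controller_decision: str            # "accepted" / "partially accepted" / "rejected"
    deviation_rationale: Optional[str]  # must be documented if advice not followed

    def documented_gap(self) -> bool:
        """True when advice was not fully accepted and no rationale was recorded —
        the situation SAs scrutinise in an investigation."""
        return self.controller_decision != "accepted" and not self.deviation_rationale

record = DPOConsultation(
    dpia_reference="DPIA-2026-001",
    submitted_to_dpo=date(2026, 1, 5),
    advice_received=date(2026, 1, 19),
    advice_summary="Tighten retention to 12 months; add field-level encryption.",
    controller_decision="partially accepted",
    deviation_rationale=None,
)
print(record.documented_gap())  # → True — deviation without documented rationale
```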
Art.35(9): Data Subject Consultation
Where appropriate, the controller shall seek the views of data subjects or their representatives — without prejudice to the protection of commercial or public interests or the security of processing operations.
Practical triggers:
- New employee monitoring system: consult works council or employee representatives
- Public-sector processing affecting citizens: public consultation or representative panel
- Consumer product with novel personal data use: user panel or focus group
- Healthcare processing: patient advisory group
Data subject views need not be followed, but seeking them (or documenting why it was not appropriate) demonstrates accountability.
Art.36: Prior Consultation — When Residual Risk Is High
If the DPIA reveals that processing would result in a high risk after proposed mitigation measures, the controller must consult the SA before beginning processing (Art.36(1)).
Prior consultation workflow:
DPIA → Residual risk still HIGH → Art.36 prior consultation
↓
Submit to SA (Art.36(3) mandatory content):
- DPIA itself
- Purposes and means of processing
- Measures and safeguards
- Contact details: controller + DPO
- Any other information the SA requests
↓
SA reviews: up to 8 weeks
(extendable by 6 weeks for complex cases)
↓
SA may: approve / add conditions / prohibit
Critical for developers: Prior consultation means you cannot go live until the SA completes its review (up to 14 weeks in complex cases). This needs to be in project timelines for high-risk processing launches.
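The Art.36(2) review window is simple date arithmetic worth building into the project plan. A sketch, assuming the clock starts when the SA receives the consultation request (the dates are illustrative):

```python
from datetime import date, timedelta

def sa_review_window(submitted: date, extended: bool = False) -> date:
    """Art.36(2): 8 weeks from receipt of the request, extendable by a
    further 6 weeks for complex processing (14 weeks worst case)."""
    weeks = 8 + (6 if extended else 0)
    return submitted + timedelta(weeks=weeks)

submitted = date(2026, 1, 5)
print(sa_review_window(submitted))                 # → 2026-03-02 (base deadline)
print(sa_review_window(submitted, extended=True))  # → 2026-04-13 (worst case)
```

If the launch date falls before the worst-case deadline, the project plan has a compliance-driven slip built in from day one.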
Python DPIA Implementation
from dataclasses import dataclass
from enum import Enum
from typing import Optional
from datetime import date
class RiskLevel(Enum):
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
class DPIACriteria(Enum):
EVALUATION_SCORING = "evaluation_or_scoring"
AUTOMATED_DECISION = "automated_decision_legal_effect"
SYSTEMATIC_MONITORING = "systematic_monitoring"
SENSITIVE_DATA = "sensitive_or_highly_personal_data"
LARGE_SCALE = "large_scale_processing"
DATASET_MATCHING = "matching_combining_datasets"
VULNERABLE_PERSONS = "data_concerning_vulnerable_persons"
INNOVATIVE_TECHNOLOGY = "innovative_use_new_technology"
RIGHTS_DENIAL = "prevents_exercise_of_rights"
@dataclass
class DPIARequirement:
criteria_met: list[DPIACriteria]
mandatory_case: Optional[str] # Art.35(3)(a/b/c) if applicable
requires_dpia: bool
reason: str
def prior_consultation_check_needed(self) -> bool:
return len(self.criteria_met) >= 4 or self.mandatory_case is not None
class DPIAChecker:
"""Art.35 DPIA requirement checker per the EDPB-endorsed WP248 rev.01 criteria."""
def __init__(
self,
# Art.35(3)(a) automated decision-making at scale
systematic_profiling_with_legal_effect: bool = False,
# Art.35(3)(b) large-scale special categories
large_scale_special_categories: bool = False,
# Art.35(3)(c) systematic monitoring of public areas
systematic_public_monitoring: bool = False,
# EDPB 9 criteria
uses_scoring_or_profiling: bool = False,
automated_decision_significant_effect: bool = False,
systematic_monitoring_of_individuals: bool = False,
processes_sensitive_data: bool = False,
large_scale: bool = False,
combines_multiple_datasets: bool = False,
involves_vulnerable_persons: bool = False,
uses_innovative_technology: bool = False,
can_deny_rights_or_services: bool = False,
):
self.mandatory_35_3a = systematic_profiling_with_legal_effect
self.mandatory_35_3b = large_scale_special_categories
self.mandatory_35_3c = systematic_public_monitoring
self.criteria_flags = {
DPIACriteria.EVALUATION_SCORING: uses_scoring_or_profiling,
DPIACriteria.AUTOMATED_DECISION: automated_decision_significant_effect,
DPIACriteria.SYSTEMATIC_MONITORING: systematic_monitoring_of_individuals,
DPIACriteria.SENSITIVE_DATA: processes_sensitive_data,
DPIACriteria.LARGE_SCALE: large_scale,
DPIACriteria.DATASET_MATCHING: combines_multiple_datasets,
DPIACriteria.VULNERABLE_PERSONS: involves_vulnerable_persons,
DPIACriteria.INNOVATIVE_TECHNOLOGY: uses_innovative_technology,
DPIACriteria.RIGHTS_DENIAL: can_deny_rights_or_services,
}
def assess(self) -> DPIARequirement:
mandatory_case = None
if self.mandatory_35_3a:
mandatory_case = "Art.35(3)(a): systematic profiling with legal/significant effect"
elif self.mandatory_35_3b:
mandatory_case = "Art.35(3)(b): large-scale special category / criminal data"
elif self.mandatory_35_3c:
mandatory_case = "Art.35(3)(c): systematic monitoring of public area at large scale"
criteria_met = [c for c, v in self.criteria_flags.items() if v]
requires = bool(mandatory_case) or len(criteria_met) >= 2
if mandatory_case:
reason = f"Mandatory DPIA — {mandatory_case}"
elif len(criteria_met) >= 2:
names = ", ".join(c.value for c in criteria_met)
reason = f"DPIA required — {len(criteria_met)}/9 EDPB criteria met: {names}"
elif len(criteria_met) == 1:
reason = f"Single criterion ({criteria_met[0].value}) — assess severity; DPIA likely not required unless severe"
else:
reason = "No criteria met — DPIA not required; document conclusion for Art.5(2) accountability"
return DPIARequirement(
criteria_met=criteria_met,
mandatory_case=mandatory_case,
requires_dpia=requires,
reason=reason,
)
@dataclass
class DPIARisk:
threat: str
likelihood: RiskLevel
severity: RiskLevel
mitigation: str
residual_risk: RiskLevel
def overall_risk(self) -> RiskLevel:
risk_map = {
(RiskLevel.LOW, RiskLevel.LOW): RiskLevel.LOW,
(RiskLevel.LOW, RiskLevel.MEDIUM): RiskLevel.MEDIUM,
(RiskLevel.LOW, RiskLevel.HIGH): RiskLevel.MEDIUM,
(RiskLevel.MEDIUM, RiskLevel.LOW): RiskLevel.MEDIUM,
(RiskLevel.MEDIUM, RiskLevel.MEDIUM): RiskLevel.MEDIUM,
(RiskLevel.MEDIUM, RiskLevel.HIGH): RiskLevel.HIGH,
(RiskLevel.HIGH, RiskLevel.LOW): RiskLevel.MEDIUM,
(RiskLevel.HIGH, RiskLevel.MEDIUM): RiskLevel.HIGH,
(RiskLevel.HIGH, RiskLevel.HIGH): RiskLevel.HIGH,
}
return risk_map[(self.likelihood, self.severity)]
@dataclass
class DPIADocument:
"""Art.35(7) DPIA content — mandatory fields."""
# Art.35(7)(a)
processing_description: str
purposes: list[str]
legal_bases: list[str]
data_categories: list[str]
data_subjects: list[str]
sub_processors: list[str]
retention_period: str
# Art.35(7)(b)
necessity_justification: str
proportionality_assessment: str
data_minimisation_analysis: str
# Art.35(7)(c)
risks: list[DPIARisk]
# Art.35(7)(d)
safeguards: list[str]
review_trigger: str
# Metadata
controller: str
dpo_consulted: bool
dpo_advice_summary: Optional[str]
data_subject_consultation: Optional[str]
completed_date: date
review_date: date
def requires_prior_consultation(self) -> bool:
return any(r.residual_risk == RiskLevel.HIGH for r in self.risks)
def validate(self) -> list[str]:
gaps = []
if not self.processing_description:
gaps.append("Art.35(7)(a): processing description missing")
if not self.purposes:
gaps.append("Art.35(7)(a): purposes not listed")
if not self.legal_bases:
gaps.append("Art.35(7)(a): legal bases not listed")
if not self.necessity_justification:
gaps.append("Art.35(7)(b): necessity justification missing")
if not self.risks:
gaps.append("Art.35(7)(c): no risks identified — at minimum document why none exist")
if not self.safeguards:
gaps.append("Art.35(7)(d): no safeguards documented")
if not self.dpo_consulted and self.dpo_advice_summary is None:
gaps.append("Art.35(8): DPO consultation not documented (required if DPO designated)")
return gaps
# Usage example: HR platform with performance scoring
checker = DPIAChecker(
uses_scoring_or_profiling=True,
automated_decision_significant_effect=True,
processes_sensitive_data=True, # employment data is highly personal
large_scale=True, # multi-employer HR SaaS
involves_vulnerable_persons=True, # employees under supervision
)
result = checker.assess()
print(result.reason)
# → "DPIA required — 5/9 EDPB criteria met: evaluation_or_scoring, automated_decision_legal_effect, ..."
print(result.prior_consultation_check_needed())
# → True (4+ criteria → run full DPIA and check residual risk for Art.36)
Practical Decision Tree for SaaS Engineers
Is processing new or substantially changed?
├─ NO → Does existing DPIA cover the change? If yes, proceed. If no, treat as new.
└─ YES →
├─ Does it meet Art.35(3)(a/b/c)? → DPIA mandatory
├─ Does it meet ≥2 EDPB criteria? → DPIA required
├─ Does it meet 1 EDPB criterion? → DPIA likely not required; document conclusion
└─ Does it meet 0 criteria? → No DPIA; document for Art.5(2)
↓
Conduct DPIA (Art.35(7))
↓
Residual risk HIGH?
├─ NO → Proceed with processing. Review DPIA on trigger events.
└─ YES → Art.36 prior consultation required.
Do NOT launch until SA review complete (up to 14 weeks).
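The tree above collapses into a single function. A sketch — the "single criterion in severe form" judgment stays with a human reviewer, and residual risk is knowable only after the DPIA is actually conducted:

```python
def dpia_decision(
    mandatory_art35_3: bool,   # meets Art.35(3)(a/b/c)
    edpb_criteria_met: int,    # 0-9 per the nine-criteria table
    residual_risk_high: bool,  # known only after conducting the DPIA
) -> str:
    """Encode the SaaS decision tree; output strings mirror the tree's leaves."""
    if mandatory_art35_3 or edpb_criteria_met >= 2:
        if residual_risk_high:
            return "conduct DPIA; Art.36 prior consultation before launch"
        return "conduct DPIA; proceed and review on trigger events"
    if edpb_criteria_met == 1:
        return "DPIA likely not required; document conclusion (assess severity)"
    return "no DPIA; document conclusion for Art.5(2)"

print(dpia_decision(False, 3, residual_risk_high=True))
# → "conduct DPIA; Art.36 prior consultation before launch"
```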
Enforcement Cases
DE-BfDI-2022-07: Employee Analytics Without DPIA — €3.2M
A large German employer deployed a cloud HR analytics platform that automatically scored employees for performance and flight-risk, feeding hiring, promotion, and termination decisions. No DPIA was conducted before rollout despite meeting Art.35(3)(a) (systematic profiling with employment effects) and four EDPB criteria (scoring, automated decisions, sensitive data, large scale). The BfDI ordered the system shut down pending a proper DPIA and prior consultation, and issued a €3.2M fine for violation of Art.35(1) and Art.22(2)(b) (no explicit policy for automated employment decisions).
Developer lesson: Performance scoring tools that feed employment decisions require DPIA and Art.22 explicit policy before any employee data enters the model.
FR-CNIL-2021-15: Health Data Analytics at Scale Without DPIA — €1.5M
A health analytics SaaS aggregated patient data from multiple French hospital systems to train a patient readmission prediction model. Processing clearly met Art.35(3)(b) (large-scale health data) but no DPIA was conducted. CNIL found no evidence of necessity or proportionality assessment and no evidence that data subject risk had been evaluated. €1.5M fine plus order to cease processing until a compliant DPIA was filed with CNIL.
Developer lesson: Any ML model trained on health data at scale triggers Art.35(3)(b) regardless of the model's stated benefit.
ES-AEPD-2023-08: Advertising Profiling Without DPIA — €85K
A Spanish e-commerce operator built a customer profiling system combining purchase history, browsing behaviour, and location data to score users for targeted advertising. The system met three EDPB criteria (scoring, monitoring, dataset matching) but no DPIA was conducted. AEPD noted that the controller's privacy-by-design documentation referenced a DPIA "to be conducted" that was never completed. €85K fine for Art.35(1) violation plus a compliance order requiring DPIA completion within 60 days.
Developer lesson: "DPIA to be completed" in documentation that is never completed creates a worse compliance position than no documentation at all.
IT-GPDP-2024-06: Biometric Access Control Without DPIA — €200K
An Italian workplace deployed fingerprint-based access control across multiple office locations for all employees — a clear Art.35(3)(b) case (large-scale biometric processing). The Garante found no DPIA, no DPO consultation, and no documented necessity assessment (key card alternatives existed but were dismissed without analysis). €200K fine and mandatory suspension of biometric processing pending a compliant DPIA.
Developer lesson: Biometric authentication always triggers Art.35(3)(b). "We already use biometrics elsewhere" does not substitute for a per-deployment DPIA.
EU Hosting Advantage for DPIA
Art.35(7)(a) requires documenting sub-processors and transfer safeguards. Art.35(7)(c) includes transfer risk in the risk assessment. For infrastructure on EU-sovereign hosting (no Cloud Act exposure, no US government access orders, no third-country transfers):
- Art.35(7)(a) sub-processors section is shorter — no SCCs, no BCRs, no transfer impact assessments needed for EU-based infrastructure
- Transfer risk in Art.35(7)(c) is near-zero — no government access risk, no Schrems II invalidation risk
- Residual risk score is lower — reducing likelihood that Art.36 prior consultation is triggered
- SA scrutiny is lower — EU-only infrastructure removes the class of third-country transfer risks (government access orders, Schrems II exposure) that SAs expect to see assessed in DPIAs for US-cloud deployments
For SaaS platforms using sota.io (EU-native PaaS), the DPIA transfer section can state: "All processing occurs within EEA infrastructure with no third-country transfers. Transfer risk: not applicable."
DPIA Compliance Checklist (25 Items)
Triggering (Art.35(1) and 35(3))
- 1. Assessed whether processing meets Art.35(3)(a/b/c) mandatory cases
- 2. Applied EDPB 9 criteria; counted criteria met
- 3. Documented conclusion: DPIA required / not required + reason
- 4. Checked lead SA negative list for processing type
Content — Description (Art.35(7)(a))
- 5. Processing description complete (data flows, systems, sub-processors)
- 6. Purposes listed with legal basis per purpose
- 7. Data categories and volumes documented
- 8. Retention periods stated for each data category
- 9. Cross-referenced to Art.30 RoPA entry for consistency
Content — Necessity (Art.35(7)(b))
- 10. Necessity justification: why this processing, not an alternative
- 11. Proportionality analysis: data minimisation considered
- 12. Less privacy-intrusive alternatives evaluated and ruled out
Content — Risk Assessment (Art.35(7)(c))
- 13. Identified at least 3 distinct threat/harm scenarios
- 14. Likelihood and severity rated for each risk
- 15. Risk matrix applied to each scenario
Content — Measures (Art.35(7)(d))
- 16. Specific safeguard documented per identified risk
- 17. Residual risk after safeguard assessed
- 18. Review trigger conditions defined
DPO and Data Subject
- 19. DPO consulted (if designated); advice documented
- 20. Data subject consultation assessed; decision documented
Prior Consultation (Art.36)
- 21. Checked whether any residual risk is HIGH
- 22. If HIGH: Art.36 prior consultation initiated before launch
- 23. SA review timeline (up to 14 weeks) built into project plan
Maintenance
- 24. DPIA stored with version control; review date set
- 25. DPIA update process defined for new features, new processors, data breaches
GDPR Series Progress (Chapter IV)
| Article | Topic | Post |
|---|---|---|
| Art.12-14 | Transparency & Privacy Notices | #420 |
| Art.15-17 | Access, Rectification, Erasure | #421 |
| Art.18-20 | Restriction, Notification, Portability | #422 |
| Art.21-22 | Right to Object, Automated Decisions | #423 |
| Art.23-24 | Restrictions, Controller Accountability | #424 |
| Art.26 | Joint Controllers | #425 |
| Art.27 | EU Representative | #426 |
| Art.30 | Records of Processing (RoPA) | #427 |
| Art.33-34 | Breach Notification | #428 |
| Art.35 | Data Protection Impact Assessment | #429 |
| Art.36 | Prior Consultation | in Art.35 (#429) |
| Art.37-39 | Data Protection Officer | #430 |
See Also
- GDPR Art.37–39: Data Protection Officer (DPO) — Art.35(8) requires DPO consultation before DPIA conclusion; DPO advice must be documented
- GDPR Art.33–34: Breach Notification — breaches discovered during DPIA review may trigger Art.33 72-hour reporting obligations
- GDPR Art.30: Records of Processing Activities (RoPA) — DPIA scope is derived from Art.30 processing records; RoPA identifies activities requiring risk assessment
- GDPR Art.32: Security of Processing — TOMs & Encryption — TOMs identified in the DPIA as risk mitigations must be implemented under Art.32
- GDPR Art.36: Prior Consultation — mandatory next step when DPIA residual risk remains high; Art.35(7)(d) triggers Art.36 consultation
Questions about DPIA requirements for your SaaS platform? Contact sota.io — EU-native cloud hosting that simplifies your DPIA transfer section by eliminating third-country transfer risk entirely.