EU AI Act Art.70: Penalties — Fines for Prohibited Practices, High-Risk Obligations, and GPAI Models (2026)
EU AI Act Article 70 is the enforcement backstop of Regulation (EU) 2024/1689: it establishes the administrative penalty framework that gives the entire regulatory structure its coercive force. Without Art.70, the obligations in Art.5 through Art.69 would be recommendations with no financial consequence for non-compliance. Art.70 converts those obligations into a tiered penalty regime that scales with the severity of the violation, the global size of the company responsible, and — where relevant — whether the AI system posed systemic risk.
For developers and organisations deploying AI systems in the EU, Art.70 defines the downside risk of non-compliance in concrete financial terms. The highest tier — EUR 35 million or 7% of total worldwide annual turnover, whichever is higher — applies to violations of the prohibited AI practices in Art.5: biometric categorisation systems, real-time remote biometric identification in publicly accessible spaces, social scoring systems, and manipulative AI techniques that bypass human autonomy. These are the practices the Regulation treats as categorically incompatible with fundamental rights, and Art.70 prices them accordingly.
The second tier — EUR 15 million or 3% of total worldwide annual turnover, whichever is higher — applies to non-compliance with any other obligation imposed by the Regulation: the risk management requirements of Art.9, the data governance obligations of Art.10, the technical documentation requirements of Art.11, the transparency obligations of Art.13, the human oversight mechanisms of Art.14, the registration obligations of Art.49, and the obligations imposed on providers, deployers, importers, and distributors throughout Chapter III. This tier covers the vast majority of violations most organisations will face in practice.
The third tier — EUR 7.5 million or 1.5% of total worldwide annual turnover, whichever is higher — applies to the supply of incorrect, incomplete, or misleading information to notified bodies and national competent authorities. This tier addresses a specific compliance risk: organisations that attempt to manage regulatory scrutiny through selective disclosure or optimistic characterisation of their AI systems' capabilities and limitations.
Art.70 became applicable on 2 August 2025 alongside the general obligations of the Regulation.
Art.70 in the Penalty Architecture
Art.70 operates within a broader enforcement architecture that spans multiple articles:
| Article | Enforcement Function | Relationship to Art.70 |
|---|---|---|
| Art.57 | NCA designation and tasks | Art.57 NCAs are the primary penalty-imposing authorities for non-GPAI violations |
| Art.58 | NCA enforcement powers | Art.58 establishes the investigation powers NCAs use to build penalty cases |
| Art.62 | AI Office enforcement powers | Art.62 applies to GPAI models; AI Office may impose Art.70(5) penalties |
| Art.65 | Serious incident reporting | Failure to report under Art.65 triggers Art.70(2) second-tier penalties |
| Art.66 | Market surveillance coordination | Cross-border penalties may involve multiple NCAs under Art.66 coordination |
| Art.67 | Union safeguard procedure | Commission may override NCA penalty measures; Art.70 applies at national level |
| Art.68 | AI regulatory sandboxes | Sandbox participation does not waive Art.70 liability outside sandbox scope |
| Art.69 | Codes of conduct | Voluntary code adherence is a mitigating factor in Art.70 penalty assessment |
| Art.70 | Administrative penalties — three tiers | This guide |
Art.70(1): First Tier — Prohibited AI Practices (EUR 35M / 7%)
Art.70(1) imposes the highest penalty tier for violations of Art.5: the prohibited AI practices that the Regulation treats as categorically incompatible with EU fundamental rights and values.
Penalty quantum. The penalty is the higher of EUR 35,000,000 or 7% of the undertaking's total worldwide annual turnover for the preceding financial year. For large technology companies with global revenues above EUR 500 million, the turnover-based figure will typically exceed EUR 35 million and determine the actual maximum. For SMEs, the EUR 35 million fixed amount is nominally the higher figure — subject to the proportionality provisions in Art.70(4).
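A quick arithmetic check of the "whichever is higher" rule: the turnover level at which the percentage-based figure overtakes the fixed amount is simply the fixed amount divided by the percentage. A minimal sketch, using the tier parameters stated in this guide:

```python
# Turnover level at which pct * turnover overtakes the fixed amount,
# for each Art.70 tier ("whichever is higher").
TIER_PARAMS = {
    "Art.70(1) prohibited practices": (35_000_000, 0.07),
    "Art.70(2) other obligations": (15_000_000, 0.03),
    "Art.70(3) misleading information": (7_500_000, 0.015),
}

crossovers = {name: fixed / pct for name, (fixed, pct) in TIER_PARAMS.items()}
for name, level in crossovers.items():
    print(f"{name}: turnover-based figure dominates above EUR {level:,.0f}")
```

Under these parameters the crossover is EUR 500 million of turnover for every tier, which is why the fixed amounts are the operative figures for most SMEs and the percentages dominate for large undertakings.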
Applicable violations. Art.70(1) applies to violations of Art.5(1)(a)-(f):
- Art.5(1)(a): AI systems that deploy subliminal techniques beyond a person's consciousness to manipulate behaviour in ways that cause or are likely to cause harm
- Art.5(1)(b): AI systems that exploit vulnerabilities of specific groups (age, disability, social/economic situations) to distort behaviour in harmful ways
- Art.5(1)(c): AI systems used for social scoring by or on behalf of public authorities with detrimental treatment of natural persons
- Art.5(1)(d): AI systems that assess the risk of criminal offences by natural persons based on profiling or personality traits (predictive policing for individuals)
- Art.5(1)(e): AI systems used for real-time remote biometric identification in publicly accessible spaces for law enforcement, beyond the narrow exceptions
- Art.5(1)(f): AI systems used to infer emotional states of natural persons in workplace or educational contexts (subject to the Art.5(1)(f) carve-outs)
NCA and AI Office roles. For Art.5 violations involving non-GPAI systems, the relevant NCA in the Member State where the violation occurred is the primary penalty authority. For GPAI models with systemic risk that engage in prohibited practices, the AI Office may have concurrent jurisdiction under Art.70(5).
Art.70(2): Second Tier — Other Obligations (EUR 15M / 3%)
Art.70(2) covers non-compliance with any other provision of the Regulation not specifically addressed in Art.70(1). This is the tier most organisations developing high-risk AI systems need to price into their compliance programmes.
Penalty quantum. The penalty is the higher of EUR 15,000,000 or 3% of total worldwide annual turnover for the preceding financial year. For a company with EUR 1 billion in global revenue, 3% is EUR 30 million — above the EUR 15 million fixed amount. For a startup with EUR 10 million in annual revenue, 3% is only EUR 300,000, so the EUR 15 million fixed amount is nominally the operative maximum — an outcome the Art.70(4) proportionality provisions are designed to moderate.
Key violation categories under Art.70(2). The most commonly triggered second-tier violations in practice are:
| Obligation | Article | Penalty Trigger |
|---|---|---|
| Risk management system — failure to establish, document, or maintain | Art.9 | System lacks documented risk management process at NCA audit |
| Data governance — training data quality, relevance, bias assessment | Art.10 | Training data undocumented or bias assessment absent |
| Technical documentation — pre-placement documentation | Art.11 | Annex IV technical documentation absent or materially incomplete |
| Record-keeping — automated logging | Art.12 | System logs not retained as required; logging capability disabled |
| Transparency — information to deployers | Art.13 | Instructions for use absent or materially deficient |
| Human oversight — design and implementation | Art.14 | System deployed without required human oversight mechanisms |
| Accuracy and robustness — specified levels | Art.15 | Performance levels not documented or substantially below specified |
| Registration — pre-placement in EU database | Art.49 | High-risk AI system placed on market without EUID registration |
| Serious incident reporting | Art.65 | Provider fails to notify NCA within 15 days of serious incident |
| Post-market monitoring | Art.72 | Provider has no post-market monitoring plan for deployed high-risk AI |
| Fundamental rights impact assessment | Art.27 | Deployer in specified sectors fails to conduct FRIA before deployment |
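One concrete trigger in the table above is the Art.65 serious-incident notification window. A minimal deadline helper, assuming the 15-day window cited above runs in calendar days from the provider's awareness of the incident (how the period is legally computed in a given Member State is a question for counsel):

```python
from datetime import date, timedelta

ART_65_WINDOW_DAYS = 15  # per the Art.65 window cited in the table above

def notification_deadline(awareness_date: date) -> date:
    """Latest date to notify the NCA of a serious incident (calendar-day assumption)."""
    return awareness_date + timedelta(days=ART_65_WINDOW_DAYS)

print(notification_deadline(date(2026, 3, 2)))  # 2026-03-17
```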
Deployer vs. provider liability. Art.70(2) applies to all parties subject to obligations under the Regulation: providers, importers, distributors, deployers, authorised representatives, and product manufacturers. The NCA must identify the responsible party for each obligation before imposing penalties. Where a deployer receives an AI system without adequate technical documentation (Art.13), the NCA must assess whether the violation originated at the provider or deployer level.
Art.70(3): Third Tier — Incorrect, Incomplete, or Misleading Information (EUR 7.5M / 1.5%)
Art.70(3) targets regulatory information integrity: the obligation to provide accurate, complete, and non-misleading information to notified bodies and national competent authorities during conformity assessment, market surveillance, and enforcement proceedings.
Penalty quantum. The penalty is the higher of EUR 7,500,000 or 1.5% of total worldwide annual turnover for the preceding financial year.
Applicable conduct. Art.70(3) applies when:
- A provider supplies incorrect or incomplete technical documentation to a notified body during conformity assessment, resulting in a conformity assessment decision based on inaccurate information
- A provider or deployer provides misleading information to an NCA during a market surveillance investigation (Art.58) or serious incident investigation (Art.65)
- A provider supplies incorrect information to the EU AI database (Art.60) when registering a high-risk AI system
- An authorised representative supplies incorrect information to the Commission regarding the non-EU provider's obligations
Relationship to Art.70(2). Art.70(3) operates independently of Art.70(2). An organisation can face penalties under both tiers if it both fails to comply with a substantive obligation (Art.70(2)) and provides misleading information about that non-compliance to authorities (Art.70(3)). NCAs may impose both penalties on the same organisation for the same underlying compliance failure, provided the information-supply violations constitute acts distinct from the substantive violations.
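Because the tiers can stack, combined exposure for one substantive failure plus a distinct information-supply violation is the sum of the two tier maxima. An illustrative sketch (`stacked_exposure` is a hypothetical helper for planning purposes, not a term from the Regulation):

```python
def stacked_exposure(turnover_eur: float) -> float:
    """Combined Art.70(2) + Art.70(3) maxima where the violations are distinct acts."""
    tier2 = max(15_000_000, 0.03 * turnover_eur)    # substantive non-compliance
    tier3 = max(7_500_000, 0.015 * turnover_eur)    # misleading information
    return tier2 + tier3

# For EUR 2bn turnover: EUR 60M (tier 2) + EUR 30M (tier 3)
print(f"EUR {stacked_exposure(2_000_000_000):,.0f}")
```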
Art.70(4): SME Proportionality and Natural Persons
Art.70(4) establishes proportionality obligations that modify the mechanical application of the penalty tiers for specific categories of operators.
SME and startup provisions. For providers that qualify as small and medium-sized enterprises (SMEs) under the EU SME definition (fewer than 250 employees, and either annual turnover ≤ EUR 50 million or annual balance sheet total ≤ EUR 43 million), the NCA must apply the Art.70 penalty tiers proportionately. In practice, this means:
- For SMEs, the percentage-of-turnover figure typically falls well below the fixed amount, so a proportionate penalty will usually sit far under the nominal maximum
- NCAs retain discretion to impose penalties below the Art.70 maximums where the fixed ceilings would produce a result disproportionate to the SME's economic capacity
- Recital 161 of the Regulation notes that SMEs should be given sufficient time to comply and should not face penalties where they have demonstrably attempted compliance in good faith
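The SME definition above reduces to a simple test: the headcount condition must hold, together with at least one of the two financial conditions. A minimal sketch (`qualifies_as_sme` is an illustrative helper; borderline cases, linked enterprises, and partner enterprises need legal review):

```python
def qualifies_as_sme(headcount: int, turnover_eur: float, balance_sheet_eur: float) -> bool:
    """EU SME test as summarised above:
    fewer than 250 staff AND (turnover <= EUR 50M OR balance sheet total <= EUR 43M)."""
    return headcount < 250 and (
        turnover_eur <= 50_000_000 or balance_sheet_eur <= 43_000_000
    )

print(qualifies_as_sme(120, 30_000_000, 60_000_000))  # True
print(qualifies_as_sme(300, 30_000_000, 20_000_000))  # False (headcount disqualifies)
```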
Natural persons. Art.70(4) also applies the proportionality principle to natural persons subject to penalties under the Regulation. In practice, natural persons are most likely to face penalties as:
- Sole traders operating AI systems in professional contexts
- Partners in professional firms deploying high-risk AI systems
- Individuals acting as deployers of high-risk AI systems in professional capacities
For natural persons, the turnover-based penalty formula typically produces low absolute amounts. NCAs must apply the proportionality principle to avoid penalties that are manifestly disproportionate to individual economic capacity.
Mitigating factors. Art.70(4) also requires NCAs to consider mitigating factors when determining the penalty quantum within the Art.70 tiers:
- Voluntary code adherence under Art.69: Participation in a recognised voluntary code of conduct is a mitigating factor
- AI regulatory sandbox participation under Art.68: Good-faith participation in a sandbox, with findings disclosed to the NCA, may reduce penalty exposure for violations arising from sandbox-stage conduct
- Cooperation with NCA investigations: Proactive disclosure, timely response to information requests, and cooperation with corrective measures reduce penalties
- First violation: First-time violations by organisations with no history of non-compliance under the Regulation typically attract lower penalties within the tier maximum
Art.70(5): GPAI Model Penalties — AI Office Jurisdiction
Art.70(5) creates a parallel penalty track for providers of general-purpose AI models with systemic risk (as classified under Art.51). Where GPAI model providers violate obligations specific to GPAI models — particularly the Art.53 obligations (adversarial testing, incident reporting, cybersecurity for systemic-risk GPAI models) and the Art.52 base obligations (technical documentation, copyright policy, transparency) — the AI Office rather than national NCAs has primary enforcement jurisdiction.
AI Office penalty powers. The AI Office, acting under the Commission's authority, may impose penalties on GPAI model providers under Art.70(5) for:
- Failure to maintain the Art.52(1)(a) technical documentation requirements
- Failure to implement a copyright transparency policy under Art.52(1)(b)
- Failure to publish summaries of training data under Art.52(1)(c)
- Failure to conduct adversarial testing (red-teaming) required by Art.53(1)(a) for systemic-risk models
- Failure to report serious incidents and near-misses to the AI Office under Art.53(1)(b)
- Failure to implement cybersecurity protections for systemic-risk GPAI models under Art.53(1)(c)
Penalty quantum for GPAI violations. The Art.70(2) and Art.70(3) penalty tiers apply to GPAI model violations — the GPAI track does not create different quantum rules, only a different enforcement authority. For a GPAI provider with global revenues of EUR 50 billion, a 3% Art.70(2) penalty for failure to conduct required adversarial testing would be EUR 1.5 billion — a material financial consequence even for hyperscale AI companies.
AI Office vs. NCA jurisdiction. Art.70(5) creates a potential overlap where a GPAI model is deployed in a high-risk context: the NCA has jurisdiction over the high-risk AI system application, while the AI Office has jurisdiction over the GPAI model layer. Both enforcement authorities may act simultaneously. Organisations deploying GPAI models in high-risk AI applications should map their obligations across both tracks and ensure their compliance documentation is accessible to both NCA and AI Office investigators.
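The jurisdiction split described above can be summarised as a small mapping. A sketch assuming two boolean inputs (`enforcement_tracks` is a hypothetical helper for compliance planning, not a legal test):

```python
def enforcement_tracks(is_gpai_model: bool, in_high_risk_system: bool) -> list[str]:
    """Which Art.70 enforcement authorities may act, per the Art.70(5) split above."""
    tracks = []
    if is_gpai_model:
        tracks.append("AI Office (GPAI model layer, Art.70(5))")
    if in_high_risk_system:
        tracks.append("National NCA (high-risk AI system application)")
    return tracks

# A GPAI model deployed inside a high-risk application faces both tracks:
print(enforcement_tracks(is_gpai_model=True, in_high_risk_system=True))
```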
Art.70(6): Confidentiality in Penalty Proceedings
Art.70(6) establishes confidentiality protections for information disclosed during penalty proceedings. NCAs and the AI Office must protect:
- Trade secrets and commercially sensitive information disclosed during investigations
- Personal data within investigation materials (subject to GDPR and Regulation 2018/1725)
- Information whose disclosure would materially prejudice ongoing investigations or third-party rights
Practical significance for CLOUD Act compliance. Art.70(6) confidentiality provisions create a specific tension with US CLOUD Act production orders. An organisation subject to an EU NCA penalty investigation must maintain investigation materials under EU confidentiality obligations. If those same materials are stored on US-controlled cloud infrastructure and subject to a CLOUD Act production order, the organisation faces a genuine legal conflict: disclosure to US authorities may violate EU procedural confidentiality, while non-disclosure may violate US law.
The CLOUD Act conflict is most acute for:
- Technical documentation submitted during NCA investigations
- Correspondence between legal counsel and AI Office/NCA investigators
- Internal audit reports and risk assessments disclosed to regulators
- Corrective action plans submitted as part of settlement negotiations
Maintaining EU investigation materials on EU-incorporated, EU-law-governed infrastructure reduces — but does not eliminate — this exposure. The risk is structural and depends on the specific facts of each investigation and production order.
Multi-Jurisdiction Penalty Proceedings and CLOUD Act Complications
Art.70 operates at the national level, but the EU AI Act's cross-border enforcement mechanisms create scenarios where multiple NCAs in different Member States may impose penalties for related violations. Specifically:
Concurrent proceedings. A provider of a high-risk AI system deployed across multiple EU Member States may face:
- NCA-A in Germany initiating an investigation for risk management failures (Art.9 violation, Art.70(2))
- NCA-B in France initiating a separate investigation for the same system's transparency failures (Art.13 violation, Art.70(2))
- Both NCAs potentially imposing separate penalties on the same organisation
Commission coordination under Art.67. Where conflicting NCA enforcement actions arise from the same underlying violation, the Union safeguard procedure under Art.67 enables Commission review. However, Art.67 addresses conflicting national measures regarding AI system risk — it does not directly prevent duplicate penalty proceedings for distinct violations in distinct Member States.
Practical implications. Organisations deploying high-risk AI systems across multiple EU Member States should:
- Designate a single Member State NCA as primary contact (consistent with the Art.57 single-contact framework)
- Ensure that compliance documentation is accessible and consistent across all jurisdictions where the system is deployed
- In the event of a serious incident, notify the primary NCA first and coordinate cross-border disclosure through the Art.66 framework
- Maintain investigation materials on EU-governed infrastructure to reduce CLOUD Act exposure
Python PenaltyRiskAssessment Implementation
The penalty tiers and adjustment factors described above can be expressed as an illustrative Python model. The factor weights and operator-type multipliers below are modelling assumptions, not values from the Regulation; actual penalty setting rests with NCA and AI Office discretion.

```python
from dataclasses import dataclass, field
from enum import Enum


class ViolationTier(Enum):
    PROHIBITED_PRACTICE = "prohibited_practice"  # Art.70(1): EUR 35M / 7%
    OTHER_OBLIGATION = "other_obligation"        # Art.70(2): EUR 15M / 3%
    MISLEADING_INFO = "misleading_info"          # Art.70(3): EUR 7.5M / 1.5%


class OperatorType(Enum):
    LARGE_COMPANY = "large_company"
    SME = "sme"
    STARTUP = "startup"
    NATURAL_PERSON = "natural_person"


@dataclass
class PenaltyRiskAssessment:
    tier: ViolationTier
    global_annual_turnover_eur: float
    operator_type: OperatorType = OperatorType.LARGE_COMPANY
    mitigating_factors: list[str] = field(default_factory=list)
    aggravating_factors: list[str] = field(default_factory=list)

    # (fixed amount in EUR, share of worldwide annual turnover) per tier.
    # Unannotated, so the dataclass machinery treats it as a class constant.
    _TIER_PARAMS = {
        ViolationTier.PROHIBITED_PRACTICE: (35_000_000, 0.07),
        ViolationTier.OTHER_OBLIGATION: (15_000_000, 0.03),
        ViolationTier.MISLEADING_INFO: (7_500_000, 0.015),
    }

    def maximum_penalty(self) -> float:
        """Art.70 maximum: the higher of the fixed amount or the turnover share."""
        fixed, pct = self._TIER_PARAMS[self.tier]
        return max(fixed, self.global_annual_turnover_eur * pct)

    def estimated_penalty(self) -> float:
        """Rough estimate: scale the maximum by assumed factor weights."""
        max_p = self.maximum_penalty()
        reduction = len(self.mitigating_factors) * 0.10   # assumed weight
        increase = len(self.aggravating_factors) * 0.15   # assumed weight
        factor = max(0.05, min(1.0, 1.0 - reduction + increase))
        if self.operator_type in (OperatorType.SME, OperatorType.STARTUP):
            factor *= 0.5   # crude proxy for Art.70(4) proportionality
        if self.operator_type == OperatorType.NATURAL_PERSON:
            factor *= 0.1
        return max_p * factor

    def cloud_act_risk(self, investigation_data_on_us_cloud: bool) -> str:
        if not investigation_data_on_us_cloud:
            return "LOW — EU-hosted investigation materials, limited CLOUD Act exposure"
        if self.tier == ViolationTier.PROHIBITED_PRACTICE:
            return "CRITICAL — Art.5 investigation materials on US cloud: CLOUD Act + Art.70(6) confidentiality conflict"
        return "HIGH — investigation materials on US cloud: CLOUD Act may conflict with Art.70(6) confidentiality obligations"

    def jurisdiction_overlap_risk(self) -> str:
        if self.tier in (ViolationTier.PROHIBITED_PRACTICE, ViolationTier.OTHER_OBLIGATION):
            return "MONITOR — multi-NCA proceedings possible for cross-border AI deployments; designate primary NCA"
        return "LOW — misleading information violations typically handled by single NCA"

    def summary(self) -> dict:
        return {
            "tier": self.tier.value,
            "maximum_penalty_eur": round(self.maximum_penalty(), 0),
            "estimated_penalty_eur": round(self.estimated_penalty(), 0),
            "operator_type": self.operator_type.value,
            "mitigating_factors": self.mitigating_factors,
            "aggravating_factors": self.aggravating_factors,
        }


# Example usage:
assessment = PenaltyRiskAssessment(
    tier=ViolationTier.OTHER_OBLIGATION,
    global_annual_turnover_eur=100_000_000,
    operator_type=OperatorType.SME,
    mitigating_factors=["voluntary_code_of_conduct", "nca_cooperation", "first_violation"],
    aggravating_factors=["repeated_non_compliance"],
)
print(f"Maximum penalty: EUR {assessment.maximum_penalty():,.0f}")
print(f"Estimated penalty (after factors): EUR {assessment.estimated_penalty():,.0f}")
print(f"CLOUD Act risk: {assessment.cloud_act_risk(investigation_data_on_us_cloud=True)}")
print(f"Jurisdiction overlap risk: {assessment.jurisdiction_overlap_risk()}")
```
Art.70 Compliance Checklist
| # | Item | Who | Timing |
|---|---|---|---|
| 1 | Map all Art.5 prohibited AI practice categories against your AI system portfolio: identify any features or capabilities that could be characterised as subliminal manipulation, vulnerability exploitation, social scoring, predictive policing, real-time remote biometric identification in public spaces, or workplace emotion recognition — the Art.70(1) first tier applies to these practices and the penalty quantum is the highest in the Regulation | Provider, Deployer | Before deployment |
| 2 | Quantify your Art.70 maximum exposure before deploying high-risk AI systems: calculate both the fixed amount (EUR 15M for Art.70(2)) and the turnover-based alternative (3% of total worldwide annual turnover) and determine which is higher — this is your maximum single-tier penalty exposure and should inform your compliance investment decisions | Legal, Finance | Before deployment |
| 3 | Identify which NCA is your primary supervisory authority: for organisations deploying high-risk AI systems in multiple EU Member States, designate the NCA of your principal establishment as primary contact and ensure compliance documentation is accessible and consistent across all Member States — this reduces the risk of concurrent multi-NCA proceedings | Compliance | Before deployment |
| 4 | Establish Art.70(3) information integrity controls: implement internal review processes for all information submitted to notified bodies, NCAs, and the EU AI database — the third-tier penalty for incorrect or misleading information applies independently of the substantive compliance violation and can be triggered by optimistic characterisations as well as deliberate misrepresentation | Legal, Compliance | Before market placement |
| 5 | Document mitigating factors contemporaneously: Art.70(4) requires NCAs to consider mitigating factors — document your Art.69 voluntary code adherence, Art.68 sandbox participation, NCA cooperation actions, and first-violation status as they occur, not retrospectively after an investigation is initiated | Compliance | Ongoing |
| 6 | For GPAI model providers, assess AI Office penalty exposure separately from NCA exposure: Art.70(5) gives the AI Office primary jurisdiction over GPAI model obligation violations — your compliance programme must address both tracks (NCA for high-risk AI system applications; AI Office for GPAI model layer) if you deploy GPAI models in high-risk contexts | Provider | Before deployment |
| 7 | Conduct a CLOUD Act data residency assessment for investigation materials: Art.70(6) confidentiality obligations protect materials disclosed in penalty proceedings — storing investigation-related materials (technical documentation, audit reports, corrective action plans, regulatory correspondence) on EU-incorporated, EU-law-governed infrastructure reduces the risk of CLOUD Act production order conflicts | IT, Legal | Before deployment |
| 8 | Review your serious incident response protocol against Art.70(2) penalty triggers: the most common Art.70(2) violations in enforcement practice will be failures to notify NCAs of serious incidents within the Art.65 15-day window — ensure your incident response protocol includes a regulatory notification track with NCA contact details, notification templates, and escalation thresholds | Operations, Legal | Before deployment |
| 9 | Assess SME penalty proportionality if you qualify: if your organisation meets the EU SME definition (fewer than 250 employees, turnover ≤ EUR 50M), document your SME status and ensure your compliance programme reflects the Art.70(4) proportionality expectation — SME status does not exempt from the Regulation's obligations but affects the penalty quantum NCAs can legitimately impose | Finance, Legal | Before deployment |
| 10 | Build a penalty exposure register as part of your AI governance framework: for each high-risk AI system in your portfolio, document the applicable Art.70 tier, the maximum penalty quantum, the mitigating factors in place, and the NCA with primary jurisdiction — this register is both a compliance management tool and evidence of good-faith compliance effort that NCAs will consider in penalty assessment | Compliance | Ongoing |
Series Context: Chapter IX Governance, Enforcement, and Penalties
| Article | Coverage | Post |
|---|---|---|
| Art.57 | National Competent Authorities — designation, tasks, independence | Art.57 guide |
| Art.58 | NCA enforcement powers — investigation, access, corrective measures | Art.58 guide |
| Art.59 | AI Board — composition, independence, NCA coordination | Art.59 guide |
| Art.60 | EU AI database — public registry, EUID governance, Commission management | Art.60 guide |
| Art.61 | Scientific Panel — independent experts, model evaluation, AI Office advisory | Art.61 guide |
| Art.62 | AI Office enforcement powers — corrective measures, market withdrawal, emergency action | Art.62 guide |
| Art.63 | Advisory Forum — multi-stakeholder consultation, composition, tasks, CoP input | Art.63 guide |
| Art.64 | Access to data and documentation — market surveillance authority enforcement powers | Art.64 guide |
| Art.65 | Reporting of serious incidents — provider NCA notification obligations | Art.65 guide |
| Art.66 | Market surveillance, information exchange, enforcement coordination | Art.66 guide |
| Art.67 | Union safeguard procedure — Commission review of conflicting NCA enforcement | Art.67 guide |
| Art.68 | AI regulatory sandboxes — national establishment, provider exemptions, compliance pathway | Art.68 guide |
| Art.69 | Codes of conduct — voluntary requirements, AI Office facilitation, SME access | Art.69 guide |
| Art.70 | Administrative penalties — prohibited practices, high-risk obligations, GPAI models | This guide |
| Art.71 | Exercise of the delegation — Commission delegated acts, five-year period, parliamentary oversight | Art.71 guide |
EU AI Act Art.70 analysis based on Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. Applicable from 2 August 2025 per Art.113(3). Penalty calculations are illustrative only and depend on the specific facts of each case, the operator's total worldwide annual turnover, and NCA discretion within the Art.70 tiers. SME proportionality and mitigating factor assessments require legal advice specific to the operator's circumstances. CLOUD Act conflict analysis reflects the state of EU-US data transfer frameworks as of 2025. This guide reflects the text of the Regulation as enacted and does not constitute legal advice.