2026-04-24 · 12 min read · sota.io team

EU AI Act Art.70: Penalties — Fines for Prohibited Practices, High-Risk Obligations, and GPAI Models (2026)

EU AI Act Article 70 is the enforcement backstop of Regulation (EU) 2024/1689: it establishes the administrative penalty framework that gives the entire regulatory structure its coercive force. Without Art.70, the obligations in Art.5 through Art.69 would be recommendations with no financial consequence for non-compliance. Art.70 converts those obligations into a tiered penalty regime that scales with the severity of the violation, the global size of the company responsible, and — where relevant — whether the AI system posed systemic risk.

For developers and organisations deploying AI systems in the EU, Art.70 defines the downside risk of non-compliance in concrete financial terms. The highest tier — EUR 35 million or 7% of total worldwide annual turnover, whichever is higher — applies to violations of the prohibited AI practices in Art.5: biometric categorisation systems, real-time remote biometric identification in public spaces, social scoring systems, and manipulative AI techniques that bypass human autonomy. These are the practices the regulation treats as categorically incompatible with fundamental rights, and Art.70 prices them accordingly.

The second tier — EUR 15 million or 3% of total worldwide annual turnover — applies to non-compliance with any other obligation imposed by the Regulation: the risk management requirements of Art.9, the data governance obligations of Art.10, the technical documentation requirements of Art.11, the transparency obligations of Art.13, the human oversight mechanisms of Art.14, the registration obligations of Art.49, and the obligations imposed on providers, deployers, importers, and distributors throughout Chapter III. This tier covers the vast majority of violations most organisations will face in practice.

The third tier — EUR 7.5 million or 1.5% of total worldwide annual turnover — applies to incorrect, incomplete, or misleading information supplied to notified bodies and national competent authorities. This tier addresses a specific compliance risk: organisations that attempt to manage regulatory scrutiny through selective disclosure or optimistic characterisation of their AI systems' capabilities and limitations.

Art.70 became applicable on 2 August 2025 alongside the general obligations of the Regulation.


Art.70 in the Penalty Architecture

Art.70 operates within a broader enforcement architecture that spans multiple articles:

| Article | Enforcement Function | Relationship to Art.70 |
| --- | --- | --- |
| Art.57 | NCA designation and tasks | Art.57 NCAs are the primary penalty-imposing authorities for non-GPAI violations |
| Art.58 | NCA enforcement powers | Art.58 establishes the investigation powers NCAs use to build penalty cases |
| Art.62 | AI Office enforcement powers | Art.62 applies to GPAI models; AI Office may impose Art.70(5) penalties |
| Art.65 | Serious incident reporting | Failure to report under Art.65 triggers Art.70(2) second-tier penalties |
| Art.66 | Market surveillance coordination | Cross-border penalties may involve multiple NCAs under Art.66 coordination |
| Art.67 | Union safeguard procedure | Commission may override NCA penalty measures; Art.70 applies at national level |
| Art.68 | AI regulatory sandboxes | Sandbox participation does not waive Art.70 liability outside sandbox scope |
| Art.69 | Codes of conduct | Voluntary code adherence is a mitigating factor in Art.70 penalty assessment |
| Art.70 | Administrative penalties — three tiers | This guide |

Art.70(1): First Tier — Prohibited AI Practices (EUR 35M / 7%)

Art.70(1) imposes the highest penalty tier for violations of Art.5: the prohibited AI practices that the Regulation treats as categorically incompatible with EU fundamental rights and values.

Penalty quantum. The penalty is the higher of EUR 35,000,000 or 7% of the total worldwide annual turnover of the undertaking in the preceding financial year. For large technology companies with global revenues above EUR 500 million, the 7% limb will exceed EUR 35 million and determine the actual penalty quantum. For SMEs, the EUR 35 million fixed amount is likely to be the operative maximum — subject to the proportionality provisions in Art.70(4).
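
The "higher of" mechanic is easy to misread, so a two-line sketch may help. The amounts are those stated for Art.70(1); the function name is ours:

```python
def tier1_maximum(turnover_eur: float) -> float:
    """Art.70(1) maximum: the higher of EUR 35M or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * turnover_eur)

# Below EUR 500M turnover the fixed amount governs; above it, the 7% limb does.
print(f"EUR {tier1_maximum(400_000_000):,.0f}")    # EUR 35,000,000
print(f"EUR {tier1_maximum(2_000_000_000):,.0f}")  # EUR 140,000,000
```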

Applicable violations. Art.70(1) applies to violations of Art.5(1)(a)-(f): subliminal manipulation, vulnerability exploitation, social scoring, predictive policing, real-time remote biometric identification in publicly accessible spaces, and workplace emotion recognition.

NCA and AI Office roles. For Art.5 violations involving non-GPAI systems, the relevant NCA in the Member State where the violation occurred is the primary penalty authority. For GPAI models with systemic risk that engage in prohibited practices, the AI Office may have concurrent jurisdiction under Art.70(5).


Art.70(2): Second Tier — Other Obligations (EUR 15M / 3%)

Art.70(2) covers non-compliance with any other provision of the Regulation not specifically addressed in Art.70(1). This is the tier most organisations developing high-risk AI systems need to price into their compliance programmes.

Penalty quantum. The penalty is the higher of EUR 15,000,000 or 3% of total worldwide annual turnover in the preceding financial year. For a company with EUR 1 billion in global revenue, the 3% limb is EUR 30 million — above the EUR 15 million fixed amount, so it sets the maximum. For a startup with EUR 10 million in annual revenue, 3% is only EUR 300,000, so the EUR 15 million fixed amount remains the maximum — subject to the Art.70(4) proportionality provisions.
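
One consequence of the tier parameters worth noticing: in every tier, the percentage limb overtakes the fixed amount at the same turnover, EUR 500 million. A quick sketch (the tier labels and dict are ours):

```python
# (fixed amount EUR, percentage of worldwide annual turnover) per Art.70 tier
TIERS = {
    "Art.70(1)": (35_000_000, 0.07),
    "Art.70(2)": (15_000_000, 0.03),
    "Art.70(3)": (7_500_000, 0.015),
}

for name, (fixed, pct) in TIERS.items():
    # Turnover at which pct * turnover equals the fixed amount
    print(f"{name}: crossover at EUR {fixed / pct:,.0f} turnover")

# Worked example from the text: EUR 1B turnover under Art.70(2)
print(f"EUR {max(15_000_000, 0.03 * 1_000_000_000):,.0f}")  # EUR 30,000,000
```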

Key violation categories under Art.70(2). The most commonly triggered second-tier violations in practice are:

| Obligation | Article | Penalty Trigger |
| --- | --- | --- |
| Risk management system — failure to establish, document, or maintain | Art.9 | System lacks documented risk management process at NCA audit |
| Data governance — training data quality, relevance, bias assessment | Art.10 | Training data undocumented or bias assessment absent |
| Technical documentation — pre-placement documentation | Art.11 | Annex IV technical documentation absent or materially incomplete |
| Record-keeping — automated logging | Art.12 | System logs not retained as required; logging capability disabled |
| Transparency — information to deployers | Art.13 | Instructions for use absent or materially deficient |
| Human oversight — design and implementation | Art.14 | System deployed without required human oversight mechanisms |
| Accuracy and robustness — specified levels | Art.15 | Performance levels not documented or substantially below specified |
| Registration — pre-placement in EU database | Art.49 | High-risk AI system placed on market without EUID registration |
| Serious incident reporting | Art.65 | Provider fails to notify NCA within 15 days of serious incident |
| Post-market monitoring | Art.72 | Provider has no post-market monitoring plan for deployed high-risk AI |
| Fundamental rights impact assessment | Art.27 | Deployer in specified sectors fails to conduct FRIA before deployment |

Deployer vs. provider liability. Art.70(2) applies to all parties subject to obligations under the Regulation: providers, importers, distributors, deployers, authorised representatives, and product manufacturers. The NCA must identify the responsible party for each obligation before imposing penalties. Where a deployer receives an AI system without adequate technical documentation (Art.13), the NCA must assess whether the violation originated at the provider or deployer level.
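
As a planning aid, the allocation question can be made explicit in code. This is an illustrative sketch only; the role assignments below are a plausible first-pass reading of the obligations discussed above, not the Regulation's own allocation:

```python
# Illustrative first-pass mapping of obligations to the operator role an NCA
# would typically examine first (our assumption, not drawn from the Regulation).
PRIMARY_RESPONSIBILITY = {
    "Art.9 risk management": ["provider"],
    "Art.11 technical documentation": ["provider"],
    "Art.13 transparency to deployers": ["provider"],
    "Art.27 fundamental rights impact assessment": ["deployer"],
    "Art.49 registration": ["provider"],
    "Art.65 serious incident reporting": ["provider"],
}

def candidate_parties(obligation: str) -> list[str]:
    """Roles to investigate for a given obligation; both roles by default."""
    return PRIMARY_RESPONSIBILITY.get(obligation, ["provider", "deployer"])

print(candidate_parties("Art.27 fundamental rights impact assessment"))  # ['deployer']
```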


Art.70(3): Third Tier — Incorrect, Incomplete, or Misleading Information (EUR 7.5M / 1.5%)

Art.70(3) targets regulatory information integrity: the obligation to provide accurate, complete, and non-misleading information to notified bodies and national competent authorities during conformity assessment, market surveillance, and enforcement proceedings.

Penalty quantum. The penalty is the higher of EUR 7,500,000 or 1.5% of total worldwide annual turnover in the preceding financial year.

Applicable conduct. Art.70(3) applies when an operator supplies incorrect, incomplete, or misleading information to a notified body or national competent authority, whether in a conformity assessment submission, in response to a market surveillance request, or during enforcement proceedings, and whether the inaccuracy is deliberate or the product of inadequate internal review.

Relationship to Art.70(2). Art.70(3) operates independently of Art.70(2). An organisation can face penalties under both tiers if it both fails to comply with a substantive obligation (Art.70(2)) and provides misleading information about that non-compliance to authorities (Art.70(3)). NCAs may impose both penalties on the same organisation for the same underlying compliance failure if the information-supply violations are distinct acts from the substantive violations.
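
If both tiers are imposed for distinct acts, the exposure stacks. A sketch of the worst case, assuming the two maxima are simply additive (an assumption of ours; Art.70's text does not spell out cumulation rules):

```python
def combined_maximum(turnover_eur: float) -> float:
    """Worst-case Art.70(2) + Art.70(3) exposure for distinct acts,
    assuming simple addition of the two tier maxima (our assumption)."""
    tier2 = max(15_000_000, 0.03 * turnover_eur)
    tier3 = max(7_500_000, 0.015 * turnover_eur)
    return tier2 + tier3

print(f"EUR {combined_maximum(2_000_000_000):,.0f}")  # EUR 90,000,000
```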


Art.70(4): SME Proportionality and Natural Persons

Art.70(4) establishes proportionality obligations that modify the mechanical application of the penalty tiers for specific categories of operators.

SME and startup provisions. For providers that qualify as small and medium-sized enterprises (SMEs) under the EU SME definition (fewer than 250 employees and annual turnover ≤ EUR 50 million or annual balance sheet total ≤ EUR 43 million), the NCA must apply the Art.70 penalty tiers proportionately. In practice, this means the penalty quantum must reflect the SME's size and economic viability rather than the mechanical tier maxima.
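
The SME threshold cited above reduces to a simple predicate. A sketch of the definition as summarised in this section (the function name is ours):

```python
def qualifies_as_sme(employees: int, turnover_eur: float,
                     balance_sheet_eur: float) -> bool:
    """EU SME definition as cited above: fewer than 250 employees AND
    (annual turnover <= EUR 50M OR balance sheet total <= EUR 43M)."""
    return employees < 250 and (
        turnover_eur <= 50_000_000 or balance_sheet_eur <= 43_000_000
    )

print(qualifies_as_sme(120, 30_000_000, 60_000_000))  # True
print(qualifies_as_sme(400, 30_000_000, 20_000_000))  # False
```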

Natural persons. Art.70(4) also applies the proportionality principle to natural persons subject to penalties under the Regulation. In practice, natural persons are most likely to face penalties where they act as operators in their own right, for example as sole-trader providers or individual deployers, rather than through an undertaking.

For natural persons, the turnover-based penalty formula typically produces low absolute amounts. NCAs must apply the proportionality principle to avoid penalties that are manifestly disproportionate to individual economic capacity.

Mitigating factors. Art.70(4) also requires NCAs to consider mitigating factors when determining the penalty quantum within the Art.70 tiers: adherence to an Art.69 voluntary code of conduct, participation in an Art.68 regulatory sandbox, cooperation with the NCA during the investigation, first-violation status, and prompt corrective action once the non-compliance was identified.


Art.70(5): GPAI Model Penalties — AI Office Jurisdiction

Art.70(5) creates a parallel penalty track for providers of general-purpose AI models with systemic risk (as classified under Art.51). Where GPAI model providers violate obligations specific to GPAI models — particularly the Art.53 obligations (adversarial testing, incident reporting, cybersecurity for systemic-risk GPAI models) and the Art.52 base obligations (technical documentation, copyright policy, transparency) — the AI Office rather than national NCAs has primary enforcement jurisdiction.

AI Office penalty powers. The AI Office, acting under the Commission's authority, may impose penalties on GPAI model providers under Art.70(5) for violations of the Art.52 base obligations (technical documentation, copyright policy, transparency), violations of the Art.53 obligations on systemic-risk models (adversarial testing, incident reporting, cybersecurity), and the supply of incorrect, incomplete, or misleading information in the course of those obligations under the Art.70(3) tier.

Penalty quantum for GPAI violations. The Art.70(2) and Art.70(3) penalty tiers apply to GPAI model violations — the GPAI track does not create different quantum rules, only a different enforcement authority. For a GPAI provider with global revenues of EUR 50 billion, a 3% Art.70(2) penalty for failure to conduct required adversarial testing would be EUR 1.5 billion — a material financial consequence even for hyperscale AI companies.
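
A quick check of the arithmetic above, applying the same "higher of" rule at hyperscale:

```python
turnover = 50_000_000_000                    # EUR 50B global revenue
penalty = max(15_000_000, 0.03 * turnover)   # Art.70(2) maximum
print(f"EUR {penalty:,.0f}")                 # EUR 1,500,000,000
```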

AI Office vs. NCA jurisdiction. Art.70(5) creates a potential overlap where a GPAI model is deployed in a high-risk context: the NCA has jurisdiction over the high-risk AI system application, while the AI Office has jurisdiction over the GPAI model layer. Both enforcement authorities may act simultaneously. Organisations deploying GPAI models in high-risk AI applications should map their obligations across both tracks and ensure their compliance documentation is accessible to both NCA and AI Office investigators.


Art.70(6): Confidentiality in Penalty Proceedings

Art.70(6) establishes confidentiality protections for information disclosed during penalty proceedings. NCAs and the AI Office must protect the investigation materials operators disclose in those proceedings, including technical documentation, audit reports, corrective action plans, and regulatory correspondence, together with any trade secrets and commercially sensitive information they contain.

Practical significance for CLOUD Act compliance. Art.70(6) confidentiality provisions create a specific tension with US CLOUD Act production orders. An organisation subject to an EU NCA penalty investigation must maintain investigation materials under EU confidentiality obligations. If those same materials are stored on US-controlled cloud infrastructure and subject to a CLOUD Act production order, the organisation faces a genuine legal conflict: disclosure to US authorities may violate EU procedural confidentiality, while non-disclosure may violate US law.

The CLOUD Act conflict is most acute for organisations whose investigation materials sit on US-controlled cloud infrastructure: US-headquartered providers, EU subsidiaries of US parent companies, and EU organisations that store compliance documentation with US hyperscalers.

Maintaining EU investigation materials on EU-incorporated, EU-law-governed infrastructure reduces — but does not eliminate — this exposure. The risk is structural and depends on the specific facts of each investigation and production order.


Multi-Jurisdiction Penalty Proceedings and CLOUD Act Complications

Art.70 operates at the national level, but the EU AI Act's cross-border enforcement mechanisms create scenarios where multiple NCAs in different Member States may impose penalties for related violations. Specifically:

Concurrent proceedings. A provider of a high-risk AI system deployed across multiple EU Member States may face parallel penalty proceedings before the NCA of each Member State where a violation occurred, each applying Art.70 through its own national procedure and each assessing the penalty quantum independently.

Commission coordination under Art.67. Where conflicting NCA enforcement actions arise from the same underlying violation, the Union safeguard procedure under Art.67 enables Commission review. However, Art.67 addresses conflicting national measures regarding AI system risk — it does not directly prevent duplicate penalty proceedings for distinct violations in distinct Member States.

Practical implications. Organisations deploying high-risk AI systems across multiple EU Member States should:

  1. Designate a single Member State NCA as primary contact (consistent with the Art.57 single-contact framework)
  2. Ensure that compliance documentation is accessible and consistent across all jurisdictions where the system is deployed
  3. In the event of a serious incident, notify the primary NCA first and coordinate cross-border disclosure through the Art.66 framework
  4. Maintain investigation materials on EU-governed infrastructure to reduce CLOUD Act exposure

Python PenaltyRiskAssessment Implementation

from dataclasses import dataclass, field
from enum import Enum

class ViolationTier(Enum):
    PROHIBITED_PRACTICE = "prohibited_practice"  # Art.70(1): 35M/7%
    OTHER_OBLIGATION = "other_obligation"         # Art.70(2): 15M/3%
    MISLEADING_INFO = "misleading_info"           # Art.70(3): 7.5M/1.5%

class OperatorType(Enum):
    LARGE_COMPANY = "large_company"
    SME = "sme"
    STARTUP = "startup"
    NATURAL_PERSON = "natural_person"

@dataclass
class PenaltyRiskAssessment:
    tier: ViolationTier
    global_annual_turnover_eur: float
    operator_type: OperatorType = OperatorType.LARGE_COMPANY
    mitigating_factors: list[str] = field(default_factory=list)
    aggravating_factors: list[str] = field(default_factory=list)

    # Tier thresholds
    _TIER_PARAMS = {
        ViolationTier.PROHIBITED_PRACTICE: (35_000_000, 0.07),
        ViolationTier.OTHER_OBLIGATION: (15_000_000, 0.03),
        ViolationTier.MISLEADING_INFO: (7_500_000, 0.015),
    }

    def maximum_penalty(self) -> float:
        fixed, pct = self._TIER_PARAMS[self.tier]
        return max(fixed, self.global_annual_turnover_eur * pct)

    def estimated_penalty(self) -> float:
        max_p = self.maximum_penalty()
        reduction = len(self.mitigating_factors) * 0.10
        increase = len(self.aggravating_factors) * 0.15
        factor = max(0.05, min(1.0, 1.0 - reduction + increase))
        if self.operator_type in (OperatorType.SME, OperatorType.STARTUP):
            factor *= 0.5
        if self.operator_type == OperatorType.NATURAL_PERSON:
            factor *= 0.1
        return max_p * factor

    def cloud_act_risk(self, investigation_data_on_us_cloud: bool) -> str:
        if not investigation_data_on_us_cloud:
            return "LOW — EU-hosted investigation materials, limited CLOUD Act exposure"
        if self.tier == ViolationTier.PROHIBITED_PRACTICE:
            return "CRITICAL — Art.5 investigation materials on US cloud: CLOUD Act + Art.70(6) confidentiality conflict"
        return "HIGH — investigation materials on US cloud: CLOUD Act may conflict with Art.70(6) confidentiality obligations"

    def jurisdiction_overlap_risk(self) -> str:
        if self.tier in (ViolationTier.PROHIBITED_PRACTICE, ViolationTier.OTHER_OBLIGATION):
            return "MONITOR — multi-NCA proceedings possible for cross-border AI deployments; designate primary NCA"
        return "LOW — misleading information violations typically handled by single NCA"

    def summary(self) -> dict:
        return {
            "tier": self.tier.value,
            "maximum_penalty_eur": round(self.maximum_penalty(), 0),
            "estimated_penalty_eur": round(self.estimated_penalty(), 0),
            "operator_type": self.operator_type.value,
            "mitigating_factors": self.mitigating_factors,
            "aggravating_factors": self.aggravating_factors,
        }


# Example usage:
assessment = PenaltyRiskAssessment(
    tier=ViolationTier.OTHER_OBLIGATION,
    global_annual_turnover_eur=100_000_000,
    operator_type=OperatorType.SME,
    mitigating_factors=["voluntary_code_of_conduct", "nca_cooperation", "first_violation"],
    aggravating_factors=["repeated_non_compliance"],
)

print(f"Maximum penalty: EUR {assessment.maximum_penalty():,.0f}")
# Maximum penalty: EUR 15,000,000
print(f"Estimated penalty (after factors): EUR {assessment.estimated_penalty():,.0f}")
# Estimated penalty (after factors): EUR 6,375,000  (factor 0.85, then SME x0.5)
print(f"CLOUD Act risk: {assessment.cloud_act_risk(investigation_data_on_us_cloud=True)}")
print(f"Jurisdiction overlap risk: {assessment.jurisdiction_overlap_risk()}")

Art.70 Compliance Checklist

| # | Item | Who | Timing |
| --- | --- | --- | --- |
| 1 | Map all Art.5 prohibited AI practice categories against your AI system portfolio: identify any features or capabilities that could be characterised as subliminal manipulation, vulnerability exploitation, social scoring, predictive policing, real-time remote biometric identification in public spaces, or workplace emotion recognition — the Art.70(1) first tier applies to these practices and the penalty quantum is the highest in the Regulation | Provider, Deployer | Before deployment |
| 2 | Quantify your Art.70 maximum exposure before deploying high-risk AI systems: calculate both the fixed amount (EUR 15M for Art.70(2)) and the turnover-based alternative (3% of worldwide annual turnover) and determine which is higher — this is your maximum single-tier penalty exposure and should inform your compliance investment decisions | Legal, Finance | Before deployment |
| 3 | Identify which NCA is your primary supervisory authority: for organisations deploying high-risk AI systems in multiple EU Member States, designate the NCA of your principal establishment as primary contact and ensure compliance documentation is accessible and consistent across all Member States — this reduces the risk of concurrent multi-NCA proceedings | Compliance | Before deployment |
| 4 | Establish Art.70(3) information integrity controls: implement internal review processes for all information submitted to notified bodies, NCAs, and the EU AI database — the third-tier penalty for incorrect or misleading information applies independently of the substantive compliance violation and can be triggered by optimistic characterisations as well as deliberate misrepresentation | Legal, Compliance | Before market placement |
| 5 | Document mitigating factors contemporaneously: Art.70(4) requires NCAs to consider mitigating factors — document your Art.69 voluntary code adherence, Art.68 sandbox participation, NCA cooperation actions, and first-violation status as they occur, not retrospectively after an investigation is initiated | Compliance | Ongoing |
| 6 | For GPAI model providers, assess AI Office penalty exposure separately from NCA exposure: Art.70(5) gives the AI Office primary jurisdiction over GPAI model obligation violations — your compliance programme must address both tracks (NCA for high-risk AI system applications; AI Office for GPAI model layer) if you deploy GPAI models in high-risk contexts | Provider | Before deployment |
| 7 | Conduct a CLOUD Act data residency assessment for investigation materials: Art.70(6) confidentiality obligations protect materials disclosed in penalty proceedings — storing investigation-related materials (technical documentation, audit reports, corrective action plans, regulatory correspondence) on EU-incorporated, EU-law-governed infrastructure reduces the risk of CLOUD Act production order conflicts | IT, Legal | Before deployment |
| 8 | Review your serious incident response protocol against Art.70(2) penalty triggers: the most common Art.70(2) violations in enforcement practice will be failures to notify NCAs of serious incidents within the Art.65 15-day window — ensure your incident response protocol includes a regulatory notification track with NCA contact details, notification templates, and escalation thresholds | Operations, Legal | Before deployment |
| 9 | Assess SME penalty proportionality if you qualify: if your organisation meets the EU SME definition (fewer than 250 employees, turnover ≤ EUR 50M), document your SME status and ensure your compliance programme reflects the Art.70(4) proportionality expectation — SME status does not exempt from the Regulation's obligations but affects the penalty quantum NCAs can legitimately impose | Finance, Legal | Before deployment |
| 10 | Build a penalty exposure register as part of your AI governance framework: for each high-risk AI system in your portfolio, document the applicable Art.70 tier, the maximum penalty quantum, the mitigating factors in place, and the NCA with primary jurisdiction — this register is both a compliance management tool and evidence of good-faith compliance effort that NCAs will consider in penalty assessment | Compliance | Ongoing |
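
Item 10's penalty exposure register can be sketched as a small data structure. The field names here are illustrative, not prescribed by the Regulation:

```python
from dataclasses import dataclass, field

@dataclass
class ExposureEntry:
    """One register row per high-risk AI system (illustrative field names)."""
    system_name: str
    applicable_tier: str                  # e.g. "Art.70(2)"
    maximum_penalty_eur: float
    primary_nca: str                      # Member State code of the lead NCA
    mitigating_factors: list[str] = field(default_factory=list)

# Hypothetical example entry
register = [
    ExposureEntry("cv-screening-v2", "Art.70(2)", 15_000_000, "IE",
                  ["voluntary_code_of_conduct", "first_violation"]),
]
print(register[0].maximum_penalty_eur)  # 15000000
```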

Series Context: Chapter IX Governance, Enforcement, and Penalties

| Article | Coverage | Post |
| --- | --- | --- |
| Art.57 | National Competent Authorities — designation, tasks, independence | Art.57 guide |
| Art.58 | NCA enforcement powers — investigation, access, corrective measures | Art.58 guide |
| Art.59 | AI Board — composition, independence, NCA coordination | Art.59 guide |
| Art.60 | EU AI database — public registry, EUID governance, Commission management | Art.60 guide |
| Art.61 | Scientific Panel — independent experts, model evaluation, AI Office advisory | Art.61 guide |
| Art.62 | AI Office enforcement powers — corrective measures, market withdrawal, emergency action | Art.62 guide |
| Art.63 | Advisory Forum — multi-stakeholder consultation, composition, tasks, CoP input | Art.63 guide |
| Art.64 | Access to data and documentation — market surveillance authority enforcement powers | Art.64 guide |
| Art.65 | Reporting of serious incidents — provider NCA notification obligations | Art.65 guide |
| Art.66 | Market surveillance, information exchange, enforcement coordination | Art.66 guide |
| Art.67 | Union safeguard procedure — Commission review of conflicting NCA enforcement | Art.67 guide |
| Art.68 | AI regulatory sandboxes — national establishment, provider exemptions, compliance pathway | Art.68 guide |
| Art.69 | Codes of conduct — voluntary requirements, AI Office facilitation, SME access | Art.69 guide |
| Art.70 | Administrative penalties — prohibited practices, high-risk obligations, GPAI models | This guide |
| Art.71 | Exercise of the delegation — Commission delegated acts, five-year period, parliamentary oversight | Art.71 guide |
EU AI Act Art.70 analysis based on Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. Applicable from 2 August 2025 per Art.113(3). Penalty calculations are illustrative only and depend on the specific facts of each case, the operator's total worldwide annual turnover, and NCA discretion within the Art.70 tiers. SME proportionality and mitigating factor assessments require legal advice specific to the operator's circumstances. CLOUD Act conflict analysis reflects the state of EU-US data transfer frameworks as of 2025. This guide reflects the text of the Regulation as enacted and does not constitute legal advice.