2026-04-26 · 11 min read

EU AI Act Article 102 closes a liability gap that Art.99 leaves open: administrative fines target legal entities, but criminal sanctions target individuals. A company can absorb a €35M fine and continue operating. Criminal prosecution against a CTO, ML engineer, or founder cannot be absorbed — it threatens personal freedom.

Art.102 does not define the specific criminal penalties. Instead, it mandates Member States to establish effective, proportionate, and dissuasive criminal sanctions for AI Act infringements, leaving the severity — including imprisonment thresholds — to national law. This design mirrors how the GDPR delegated criminal penalties to Member States (GDPR Art.84), with similar consequences: a patchwork of national implementations with very different risk profiles depending on where your company operates or where you are resident.

What Art.102 Actually Says

Article 102 — Penalties for Natural Persons (paraphrased)

Member States shall lay down rules on penalties applicable to infringements of this Regulation by natural persons and shall take all measures necessary to ensure that they are implemented properly and effectively. The penalties shall be effective, proportionate and dissuasive. Member States shall notify the Commission of the measures taken by [date of application] and shall notify it without delay of any subsequent amendment affecting them.

Three mandatory requirements flow from this text:

  1. Effectiveness: The penalty must actually deter — a symbolic fine does not meet Art.102's standard
  2. Proportionality: Severity must match the harm and the individual's role in the infringement
  3. Dissuasiveness: Criminal exposure must credibly deter technical professionals from enabling violations
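These three criteria can be expressed as a simple screening sketch. This is a hypothetical helper for reasoning about national implementations, not a legal test — the class name, fields, and thresholds are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class NationalPenaltyScheme:
    """Illustrative Art.102 adequacy screen; thresholds are assumptions, not law."""
    max_fine_eur: int            # maximum personal criminal fine
    imprisonment_possible: bool
    scales_with_role: bool       # penalty varies with the individual's role in the infringement

    def meets_art102_criteria(self) -> dict[str, bool]:
        return {
            # Effectiveness: a purely symbolic fine fails
            "effective": self.max_fine_eur >= 10_000 or self.imprisonment_possible,
            # Proportionality: severity must track the individual's role and the harm
            "proportionate": self.scales_with_role,
            # Dissuasiveness: exposure must credibly deter technical professionals
            "dissuasive": self.imprisonment_possible or self.max_fine_eur >= 100_000,
        }

scheme = NationalPenaltyScheme(max_fine_eur=5_000, imprisonment_possible=False, scales_with_role=True)
print(scheme.meets_art102_criteria())
# {'effective': False, 'proportionate': True, 'dissuasive': False}
```

A fine-only scheme with a trivial cap fails both the effectiveness and dissuasiveness prongs even if it is formally proportionate — which is exactly the "symbolic fine" failure mode the text describes.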

Art.102 vs Art.99: The Liability Split

| Dimension | Art.99 | Art.102 |
| --- | --- | --- |
| Target | Legal entity (company, operator, provider) | Natural person (individual, officer, engineer) |
| Sanction type | Administrative fine | Criminal penalty (fine, imprisonment, disqualification) |
| Enforcer | National market surveillance authority (NCA) | Public prosecutor / criminal court |
| Procedure | Administrative investigation | Criminal proceeding |
| Maximum exposure | €35M or 7% of global turnover | Determined by national law (could include imprisonment) |
| Mens rea required | Not required for strict-liability administrative violations | Usually requires wilful conduct or serious negligence |
| Record | Administrative record | Criminal record |
| Concurrence | Can run in parallel with Art.102 | Can run in parallel with Art.99 |

Who Is Exposed Under Art.102

Art.102 applies to natural persons — any human individual acting in a professional or personal capacity in connection with an AI system that violates the Regulation.

Primary Risk Categories

Technical Officers (CTOs, VPs of Engineering, ML Leads) These individuals approve deployment decisions for high-risk AI systems and sign off on technical documentation. If a system violates Art.5 (prohibited practices) or causes serious harm under an inadequate Art.9 risk management framework, the technical officer who approved the deployment is personally exposed.

Founders and CEOs Under national corporate law in most EU jurisdictions, corporate officers can be held criminally liable for company-level violations when they had knowledge of the violation and failed to prevent it. Art.102 creates the AI-Act-specific hook for prosecutors to use.

Individual ML Engineers and Developers Direct builders of prohibited AI systems face Art.102 exposure in their personal capacity, particularly when they:

Compliance Officers and Legal Counsel Professionals who signed off on a flawed conformity assessment or falsely attested to CE marking compliance face individual criminal exposure for deliberate misrepresentation to regulatory authorities.

The Wilful Infringement Threshold

Criminal prosecution under Art.102 typically requires some level of culpability beyond pure strict liability. Member State implementations will generally require:

The Art.5 prohibited practices are the clearest trigger for Art.102 exposure because the prohibition is absolute — there is no good-faith misclassification defence for deploying real-time remote biometric identification in public spaces without law enforcement authorisation, or for building systems that exploit psychological vulnerabilities.

For high-risk AI system compliance failures, the criminal threshold is higher: prosecutors generally need to show that the natural person knew about the deficiency and either concealed it or actively chose not to remediate it.
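The two-tier threshold — near-automatic exposure for Art.5 violations versus a knowledge-plus-concealment bar for high-risk failures — can be sketched as a decision rule. This is an illustrative model of the distinction drawn above, not a statement of any Member State's actual law:

```python
from enum import Enum

class Culpability(Enum):
    NEGLIGENCE = 1
    SERIOUS_NEGLIGENCE = 2
    WILFUL = 3

def prosecution_plausible(art5_prohibited: bool, culpability: Culpability,
                          knew_of_deficiency: bool = False,
                          concealed_or_ignored: bool = False) -> bool:
    """Illustrative decision rule for Art.102 exposure (assumed thresholds)."""
    if art5_prohibited:
        # Absolute prohibition: no good-faith misclassification defence,
        # so serious negligence or worse is enough to ground a case
        return culpability.value >= Culpability.SERIOUS_NEGLIGENCE.value
    # High-risk compliance failures: higher bar — the person must have known
    # about the deficiency AND concealed it or chosen not to remediate
    return knew_of_deficiency and concealed_or_ignored

print(prosecution_plausible(True, Culpability.SERIOUS_NEGLIGENCE))    # True
print(prosecution_plausible(False, Culpability.WILFUL, True, False))  # False
```

Note the asymmetry: for a high-risk failure, even wilful culpability alone is not enough in this sketch without both knowledge and concealment — mirroring the higher criminal threshold the text describes.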

Art.102 vs Art.99: Parallel Exposure (Double-Track Enforcement)

Art.102 does not displace Art.99. Both tracks can run simultaneously:

The only constraint is the ne bis in idem principle (prohibition on double jeopardy for the same individual for the same conduct). Because Art.99 targets the legal entity and Art.102 targets the natural person, parallel proceedings against different subjects are generally permissible.

Practical consequence: A serious Art.5 violation can simultaneously result in:

  1. A €35M company fine (Art.99)
  2. Criminal prosecution of the CTO who approved deployment (Art.102)
  3. Criminal prosecution of the ML engineer who built the system (Art.102)

This triple exposure — company fine + multiple individual criminal cases — is not hypothetical. EU product safety law already uses equivalent structures (e.g., machinery directive criminal sanctions in Germany under §21 ProdSG).
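The double-track fan-out — one Art.99 proceeding against the entity plus one Art.102 proceeding per exposed individual — can be sketched as follows. Company and role names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Proceeding:
    track: str      # "art99" (legal entity) or "art102" (natural person)
    subject: str

def parallel_exposure(company: str, individuals: list[str]) -> list[Proceeding]:
    """Sketch of double-track enforcement: because Art.99 targets the entity
    and Art.102 targets natural persons, ne bis in idem does not collapse
    the proceedings — each subject gets its own case."""
    cases = [Proceeding("art99", company)]
    cases += [Proceeding("art102", person) for person in individuals]
    return cases

cases = parallel_exposure("ExampleAI GmbH", ["CTO", "ML Engineer"])
print(len(cases))  # 3: one entity fine proceeding + two individual criminal cases
```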

Member State Implementation Patterns

Germany

Germany's implementation of AI-related criminal exposure builds on the existing Produktsicherheitsgesetz (ProdSG) framework. German prosecutors have jurisdiction over product safety violations, and AI systems qualify as products under certain interpretations. Expected Art.102 implementation will likely:

France

France has a strong tradition of corporate officer personal liability under the Code pénal. French implementation is expected to:

Ireland

Ireland is home to many EU headquarters of US tech companies operating in the EU. Irish implementation will interact with the Criminal Justice (Liability of Senior Managers) framework. Expect:

Art.102 and the CLOUD Act: Cross-Border Criminal Jurisdiction

Art.102 adds a criminal dimension to the CLOUD Act problem that Art.99 only addressed administratively.

Scenario: A US-incorporated company's Irish CTO approves deployment of a prohibited real-time biometric identification system in Germany. The system runs on AWS infrastructure in Frankfurt.

Criminal jurisdiction question: Germany, Ireland, and potentially the United States all have potential jurisdiction claims:

The CLOUD Act complication: Criminal investigators in Germany could issue a domestic court order for AWS logs to establish the timeline of the deployment. AWS, as a US company, is simultaneously subject to CLOUD Act orders from US law enforcement. If the AI system touched US-person data or operated with US government infrastructure involvement, parallel US criminal investigation becomes possible.

EU-native infrastructure (no US parent) eliminates the CLOUD Act angle entirely: a German NCA criminal referral goes to German prosecutors, who access German-domiciled infrastructure under German court orders. There is no parallel US jurisdiction hook.
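The jurisdictional hooks in this scenario reduce to two separate questions: is the infrastructure provider reachable by CLOUD Act orders at all, and do the additional facts open a parallel US investigation? A minimal sketch, with assumed inputs:

```python
def cloud_act_reachable(infra_parent_country: str) -> bool:
    """CLOUD Act orders reach providers subject to US jurisdiction,
    i.e. those with a US parent. Simplified to a single flag here."""
    return infra_parent_country == "US"

def parallel_us_investigation_possible(infra_parent_country: str,
                                       us_person_data: bool,
                                       us_gov_infra: bool) -> bool:
    """Illustrative screen for the scenario above: a parallel US criminal
    angle needs both a reachable provider and a substantive US nexus."""
    return cloud_act_reachable(infra_parent_country) and (us_person_data or us_gov_infra)

print(cloud_act_reachable("US"))                              # True
print(parallel_us_investigation_possible("EU", True, False))  # False — no US-parent hook
```

On EU-native infrastructure the first question already answers no, which is why the parallel-jurisdiction analysis never starts.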

Python: PersonalLiabilityAssessment

from dataclasses import dataclass
from enum import Enum

class CulpabilityLevel(Enum):
    NONE = "none"
    NEGLIGENCE = "negligence"
    SERIOUS_NEGLIGENCE = "serious_negligence"
    RECKLESSNESS = "recklessness"
    WILFUL = "wilful"

class ViolationType(Enum):
    PROHIBITED_PRACTICE = "art5_prohibited_practice"       # Art.5 — highest exposure
    HIGH_RISK_NON_COMPLIANCE = "high_risk_noncompliance"   # Art.6-49 — moderate
    TRANSPARENCY_FAILURE = "transparency_failure"          # Art.50 — lower
    GPAI_OBLIGATION = "gpai_obligation"                    # Art.51-55 — moderate
    MISLEADING_AUTHORITIES = "misleading_authorities"      # Art.99(5) — high

@dataclass
class PersonalLiabilityProfile:
    role: str                         # "CTO", "ML Engineer", "Founder", "Compliance Officer"
    jurisdiction: str                 # "DE", "FR", "IE", "NL", etc.
    violation_type: ViolationType
    culpability: CulpabilityLevel
    decision_authority: bool          # Did this person have final say?
    documentation_signed: bool        # Did they sign technical docs / conformity assessment?
    prior_warnings: int = 0           # Number of prior NCA or internal warnings ignored
    vulnerable_persons_affected: bool = False  # Significantly increases exposure
    
    def criminal_risk_score(self) -> int:
        """Returns 0-100 risk score for Art.102 criminal prosecution."""
        score = 0
        
        # Violation type base
        base = {
            ViolationType.PROHIBITED_PRACTICE: 70,
            ViolationType.MISLEADING_AUTHORITIES: 60,
            ViolationType.GPAI_OBLIGATION: 35,
            ViolationType.HIGH_RISK_NON_COMPLIANCE: 30,
            ViolationType.TRANSPARENCY_FAILURE: 10,
        }[self.violation_type]
        score += base
        
        # Culpability modifier
        culp = {
            CulpabilityLevel.NONE: -30,
            CulpabilityLevel.NEGLIGENCE: -15,
            CulpabilityLevel.SERIOUS_NEGLIGENCE: 0,
            CulpabilityLevel.RECKLESSNESS: 10,
            CulpabilityLevel.WILFUL: 20,
        }[self.culpability]
        score += culp
        
        # Aggravating factors
        if self.decision_authority:
            score += 10
        if self.documentation_signed:
            score += 8
        if self.prior_warnings > 0:
            score += min(self.prior_warnings * 5, 20)
        if self.vulnerable_persons_affected:
            score += 15
            
        return min(max(score, 0), 100)
    
    def risk_category(self) -> str:
        score = self.criminal_risk_score()
        if score >= 70:
            return "HIGH — immediate legal counsel required"
        elif score >= 40:
            return "MEDIUM — document decision rationale now"
        elif score >= 20:
            return "LOW — maintain compliance records"
        else:
            return "MINIMAL — standard due diligence sufficient"
    
    def mitigation_actions(self) -> list[str]:
        actions = []
        score = self.criminal_risk_score()
        
        if score >= 40:
            actions.append("Retain criminal defence counsel in jurisdiction: " + self.jurisdiction)
        if self.documentation_signed:
            actions.append("Preserve all communications around documentation sign-off")
        if self.violation_type == ViolationType.PROHIBITED_PRACTICE:
            actions.append("Initiate voluntary cessation of prohibited system immediately")
            actions.append("Consider NCA voluntary disclosure to demonstrate good faith")
        if self.culpability in (CulpabilityLevel.SERIOUS_NEGLIGENCE, CulpabilityLevel.RECKLESSNESS):
            actions.append("Commission independent technical audit to establish true scope")
        if self.prior_warnings > 0:
            actions.append("Document corrective actions taken in response to prior warnings")
        if self.vulnerable_persons_affected:
            actions.append("Conduct fundamental rights impact assessment for potential Art.88 reporting")
            
        return actions

# Example: CTO who approved a social scoring system deployment in Germany
profile = PersonalLiabilityProfile(
    role="CTO",
    jurisdiction="DE",
    violation_type=ViolationType.PROHIBITED_PRACTICE,
    culpability=CulpabilityLevel.RECKLESSNESS,
    decision_authority=True,
    documentation_signed=True,
    prior_warnings=1,
    vulnerable_persons_affected=False,
)

score = profile.criminal_risk_score()
category = profile.risk_category()
actions = profile.mitigation_actions()

print(f"Art.102 Criminal Risk Score: {score}/100")
print(f"Category: {category}")
print("Mitigation actions:")
for action in actions:
    print(f"  - {action}")

# Output:
# Art.102 Criminal Risk Score: 100/100
# Category: HIGH — immediate legal counsel required
# Mitigation actions:
#   - Retain criminal defence counsel in jurisdiction: DE
#   - Preserve all communications around documentation sign-off
#   - Initiate voluntary cessation of prohibited system immediately
#   - Consider NCA voluntary disclosure to demonstrate good faith
#   - Commission independent technical audit to establish true scope
#   - Document corrective actions taken in response to prior warnings

Art.102 in the Penalty Chapter Context

Art.102 occupies a specific position in the penalty architecture of the EU AI Act:

| Article | Mechanism | Target | Enforcer |
| --- | --- | --- | --- |
| Art.99 | Administrative fine | Legal entity (operator, provider, importer) | NCA |
| Art.100 | Administrative fine | EU institution (EDPS enforcement) | EDPS |
| Art.101 | Administrative fine | GPAI provider | AI Office |
| Art.102 | Criminal sanction | Natural person | National prosecutor |
| Art.88 | Protection mechanism | Whistleblower | NCA / national court |

Art.102 is the enforcement backstop that ensures individuals cannot hide behind corporate structures. Without Art.102, a deliberate Art.5 violation by a knowing executive would impose liability only on the legal entity — the individual could resign and face no personal consequence. Art.102 closes this gap.

The Art.102 Relationship with Art.99(7): Mitigating Factors

Art.99(7) lists factors NCAs must consider when setting administrative fines. While those factors apply to Art.99 administrative proceedings, they provide useful signals for Art.102 criminal exposure:

Building and documenting a genuine compliance programme — not just a paper exercise — is simultaneously the best Art.99 mitigation and the strongest criminal defence under Art.102. Documented risk assessments, board-level AI governance minutes, and technical review records all contribute to a "reasonable officer" defence against criminal prosecution.
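How those documented artefacts feed a "reasonable officer" defence can be sketched as a rough evidence-weighting helper. The item names and weights are purely illustrative assumptions, not Art.99(7) factors verbatim:

```python
def defence_strength(evidence: set[str]) -> str:
    """Rough sketch: genuine compliance artefacts strengthen the defence,
    a paper-only programme actively cuts against it. Weights are assumptions."""
    weights = {
        "risk_assessments": 3,            # documented Art.9-style risk assessments
        "board_governance_minutes": 2,    # board-level AI governance records
        "technical_review_records": 3,    # pre-deployment technical reviews
        "paper_only_programme": -3,       # a paper exercise undermines credibility
    }
    score = sum(weights.get(item, 0) for item in evidence)
    return "credible" if score >= 5 else "weak"

print(defence_strength({"risk_assessments", "technical_review_records"}))  # credible
```

The negative weight captures the text's point that a paper exercise is worse than neutral: it documents that the officer knew compliance mattered and did not act on it.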

Infrastructure Choices and Art.102 Documentation

The primary Art.102 criminal defence is establishing that you acted as a reasonable professional with adequate due diligence. Documentation supporting that defence includes:

  1. Technical review records: Showing you assessed the system against Annex III before deployment
  2. Legal counsel opinions: Written assessments that the system was outside Art.5 scope
  3. Risk management documentation: Art.9 compliant risk management records
  4. Infrastructure audit trails: Demonstrating the system operated within its intended parameters

EU-native infrastructure creates a documentation advantage: audit logs, access records, and system state data exist exclusively in EU jurisdictions without CLOUD Act complications. When a criminal investigation demands infrastructure evidence, EU-native records have a cleaner chain of custody than logs scattered across US-parent infrastructure subject to competing legal orders.
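The chain-of-custody property described above is, at its core, tamper evidence: each audit record should commit to its predecessor so later alteration is detectable. A minimal hash-chained log sketch — not any vendor's actual log format:

```python
import hashlib
import json

def append_entry(log: list[dict], event: str) -> None:
    """Append a tamper-evident entry: each record commits to the previous
    record's hash, so altering any earlier entry breaks the chain."""
    prev = log[-1]["hash"] if log else "genesis"
    body = {"event": event, "prev": prev}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify(log: list[dict]) -> bool:
    """Recompute every hash and check the chain links back to genesis."""
    prev = "genesis"
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "model deployed eu-central")
append_entry(log, "access by ops team")
print(verify(log))            # True
log[0]["event"] = "tampered"
print(verify(log))            # False — the chain detects the alteration
```

A log with this property, held entirely in one jurisdiction, is what makes the "cleaner chain of custody" claim concrete: the defence can show not only where the records lived but that they were not altered after the fact.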

20-Item Individual Liability Checklist for Developers and Officers

Before Deployment:

During Operation:

If an NCA Investigation Begins:

Documentation Fundamentals:

Role-Specific:

Art.102 and sota.io: Infrastructure as Criminal Risk Mitigation

The CLOUD Act cross-border criminal jurisdiction problem is eliminated for companies operating on EU-native infrastructure. When a German prosecutor requests infrastructure logs in a criminal investigation of an AI system hosted on sota.io (EU-native, no US parent):

For individuals facing Art.102 exposure, clean infrastructure dramatically simplifies the criminal defence evidence situation.

See Also