EU AI Act Article 102 closes a liability gap that Art.99 leaves open: administrative fines target legal entities, but criminal sanctions target individuals. A company can absorb a €35M fine and continue operating. Criminal prosecution against a CTO, ML engineer, or founder cannot be absorbed — it threatens personal freedom.
Art.102 does not define the specific criminal penalties. Instead, it mandates Member States to establish effective, proportionate, and dissuasive criminal sanctions for AI Act infringements, leaving the severity — including imprisonment thresholds — to national law. This design mirrors how the GDPR delegated criminal penalties to Member States (GDPR Art.84), with similar consequences: a patchwork of national implementations with very different risk profiles depending on where your company operates or where you are resident.
What Art.102 Actually Says
Article 102 — Penalties for Natural Persons (paraphrased)
Member States shall lay down rules on penalties applicable to infringements of this Regulation by natural persons and shall take all measures necessary to ensure that they are implemented properly and effectively. The penalties shall be effective, proportionate and dissuasive. Member States shall notify the Commission of the measures taken by [date of application] and shall notify it without delay of any subsequent amendment affecting them.
Three mandatory requirements flow from this text:
- Effectiveness: The penalty must actually deter — a symbolic fine does not meet Art.102's standard
- Proportionality: Severity must match the harm and the individual's role in the infringement
- Dissuasiveness: Criminal exposure must credibly deter technical professionals from enabling violations
Art.102 vs Art.99: The Liability Split
| Dimension | Art.99 | Art.102 |
|---|---|---|
| Target | Legal entity (company, operator, provider) | Natural person (individual, officer, engineer) |
| Sanction type | Administrative fine | Criminal penalty (fine, imprisonment, disqualification) |
| Enforcer | National competent authority / market surveillance authority (NCA) | Public prosecutor / criminal court |
| Procedure | Administrative investigation | Criminal proceeding |
| Maximum exposure | €35M or 7% global turnover | Determined by national law (could include imprisonment) |
| Mens rea required | Not required for strict-liability administrative violations | Usually requires wilful conduct or serious negligence |
| Record | Administrative record | Criminal record |
| Concurrence | Can run in parallel with Art.102 | Can run in parallel with Art.99 |
Who Is Exposed Under Art.102
Art.102 applies to natural persons — any human individual acting in a professional or personal capacity in connection with an AI system that violates the Regulation.
Primary Risk Categories
Technical Officers (CTOs, VPs of Engineering, ML Leads)
These individuals approve deployment decisions for high-risk AI systems and sign off on technical documentation. If a system violates Art.5 (prohibited practices) or causes serious harm under an inadequate Art.9 risk management framework, the technical officer who approved the deployment is personally exposed.
Founders and CEOs
Under national corporate law in most EU jurisdictions, corporate officers can be held criminally liable for company-level violations when they had knowledge of the violation and failed to prevent it. Art.102 creates the AI-Act-specific hook for prosecutors to use.
Individual ML Engineers and Developers
Direct builders of prohibited AI systems face Art.102 exposure in their personal capacity, particularly when they:
- Built a real-time remote biometric identification system in a public space without Art.5(1)(h) authorisation
- Developed a social scoring AI system (Art.5(1)(c)) for a deployer
- Created a subliminal influence system (Art.5(1)(a)) knowing its intended use
Compliance Officers and Legal Counsel
Professionals who signed off on a flawed conformity assessment or falsely attested to CE marking compliance face individual criminal exposure for deliberate misrepresentation to regulatory authorities.
The Wilful Infringement Threshold
Criminal prosecution under Art.102 typically requires some level of culpability beyond pure strict liability. Member State implementations will generally require:
- Wilful infringement: Knowing a system is prohibited and deploying it anyway
- Recklessness: Being aware of a substantial risk of violation and proceeding regardless
- Serious negligence: Gross departure from the standard of care expected of a professional in the role
The Art.5 prohibited practices are the clearest trigger for Art.102 exposure because the prohibition is absolute — there is no good-faith misclassification defence for deploying real-time remote biometric identification in public spaces without law enforcement authorisation, or for building systems that exploit psychological vulnerabilities.
For high-risk AI system compliance failures, the criminal threshold is higher: prosecutors generally need to show that the natural person knew about the deficiency and either concealed it or actively chose not to remediate it.
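The two-tier threshold described above can be sketched as a simple rule: Art.5 violations need only a general culpability level, while high-risk compliance failures additionally require knowledge plus concealment or refusal to remediate. This is an illustrative model only; the function name, enum, and the exact culpability gate are my assumptions, since the actual thresholds are set by national implementing law.

```python
from enum import Enum

class Culpability(Enum):
    NEGLIGENCE = 1
    SERIOUS_NEGLIGENCE = 2
    RECKLESSNESS = 3
    WILFUL = 4

def meets_criminal_threshold(is_prohibited_practice: bool,
                             culpability: Culpability,
                             knew_of_deficiency: bool = False,
                             concealed_or_ignored: bool = False) -> bool:
    """Illustrative sketch of the culpability gate described in the text.

    Art.5 prohibited practices: the prohibition is absolute, so culpability
    at serious-negligence level or above is modelled as sufficient.
    High-risk compliance failures: prosecutors generally need to show
    knowledge of the deficiency plus concealment or refusal to remediate.
    """
    if is_prohibited_practice:
        return culpability.value >= Culpability.SERIOUS_NEGLIGENCE.value
    return knew_of_deficiency and concealed_or_ignored

# A wilful Art.5 deployment crosses the threshold
print(meets_criminal_threshold(True, Culpability.WILFUL))       # True
# An unknowing high-risk deficiency does not
print(meets_criminal_threshold(False, Culpability.NEGLIGENCE))  # False
```

The asymmetry mirrors the text: for Art.5 there is no good-faith misclassification defence, so the gate depends on culpability alone; for high-risk failures, knowledge and concealment are both required.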
Art.102 vs Art.99: Parallel Exposure (Double-Track Enforcement)
Art.102 does not displace Art.99. Both tracks can run simultaneously:
- Track 1 (Art.99): The NCA imposes administrative fines on the company (up to €35M or 7% of global turnover)
- Track 2 (Art.102): The public prosecutor initiates criminal proceedings against the individual officer or developer
The only constraint is the ne bis in idem principle (prohibition on double jeopardy for the same individual for the same conduct). Because Art.99 targets the legal entity and Art.102 targets the natural person, parallel proceedings against different subjects are generally permissible.
Practical consequence: A serious Art.5 violation can simultaneously result in:
- A €35M company fine (Art.99)
- Criminal prosecution of the CTO who approved deployment (Art.102)
- Criminal prosecution of the ML engineer who built the system (Art.102)
This triple exposure — company fine + multiple individual criminal cases — is not hypothetical. EU product safety law already uses equivalent structures (e.g., machinery directive criminal sanctions in Germany under §21 ProdSG).
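The double-track structure above can be enumerated programmatically: one administrative proceeding against the entity plus one criminal proceeding per exposed individual, with ne bis in idem satisfied because each case targets a different subject. The dataclass and function names below are illustrative, not drawn from the Act.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proceeding:
    track: str    # "Art.99" (administrative) or "Art.102" (criminal)
    subject: str  # the legal entity or natural person targeted
    forum: str    # which authority runs the proceeding

def parallel_proceedings(company: str, individuals: list[str]) -> list[Proceeding]:
    """Sketch of double-track enforcement for a single violation:
    one Art.99 case against the entity, one Art.102 case per individual.
    Ne bis in idem is not offended because the subjects differ."""
    cases = [Proceeding("Art.99", company, "National competent authority")]
    cases += [Proceeding("Art.102", person, "Public prosecutor")
              for person in individuals]
    return cases

# The triple-exposure scenario from the text: company fine plus two prosecutions
for case in parallel_proceedings("AcmeAI GmbH", ["CTO", "ML Engineer"]):
    print(f"{case.track}: {case.subject} -> {case.forum}")
```

Note that the list grows linearly with the number of exposed individuals: a single Art.5 violation with three culpable officers yields four parallel proceedings.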
Member State Implementation Patterns
Germany
Germany's implementation of AI-related criminal exposure builds on the existing Produktsicherheitsgesetz (ProdSG) framework. German prosecutors have jurisdiction over product safety violations, and AI systems qualify as products under certain interpretations. Expected Art.102 implementation will likely:
- Use existing criminal fraud (§263 StGB) and bodily harm (§223 StGB) provisions as fallback until dedicated AI criminal law is enacted
- Introduce specific AI Act criminal offences in the Künstliche-Intelligenz-Strafrecht framework currently under development
- Maximum imprisonment for serious violations: 3-5 years under anticipated German implementation
France
France has a strong tradition of corporate officer personal liability under the Code pénal. French implementation is expected to:
- Apply existing délits d'atteinte à la vie privée (Art.226-1 Code pénal) for unlawful biometric AI deployments
- Create specific AI Act criminal offences through amendments to the Code de la consommation
- Extend the established personal criminal liability doctrine for company directors (responsabilité pénale du dirigeant) to AI compliance officers
Ireland
Ireland is home to many EU headquarters of US tech companies operating in the EU. Irish implementation will interact with the Criminal Justice (Liability of Senior Managers) framework. Expect:
- Irish DPC (Data Protection Commission) criminal referral pathway integrated with NCA AI Act enforcement
- Personal fines for natural persons with imprisonment reserved for egregious Art.5 violations
- Extraterritorial application for Irish-resident technical officers of non-EU companies
Art.102 and the CLOUD Act: Cross-Border Criminal Jurisdiction
Art.102 adds a criminal dimension to the CLOUD Act problem, which under Art.99 was a purely administrative matter.
Scenario: A US-incorporated company's Irish CTO approves deployment of a prohibited real-time biometric identification system in Germany. The system runs on AWS infrastructure in Frankfurt.
Criminal jurisdiction question: Germany, Ireland, and potentially the United States could each assert jurisdiction:
- Germany: The prohibited activity (Art.5 violation) occurred in Germany — German criminal jurisdiction applies
- Ireland: The CTO is a resident and the corporate decision was made in Ireland — Irish criminal jurisdiction may apply
- United States: If the CTO is a US citizen or the company is US-incorporated, US authorities may assert jurisdiction under US law for conduct affecting US persons or US corporate interests
The CLOUD Act complication: Criminal investigators in Germany could issue a domestic court order for AWS logs to establish the timeline of the deployment. AWS, as a US company, is simultaneously subject to CLOUD Act orders from US law enforcement. If the AI system touched US-person data or operated with US government infrastructure involvement, parallel US criminal investigation becomes possible.
EU-native infrastructure (no US parent) eliminates the CLOUD Act angle entirely: a German NCA criminal referral goes to German prosecutors, who access German-domiciled infrastructure under German court orders. There is no parallel US jurisdiction hook.
Python: PersonalLiabilityAssessment
```python
from dataclasses import dataclass
from enum import Enum

class CulpabilityLevel(Enum):
    NONE = "none"
    NEGLIGENCE = "negligence"
    SERIOUS_NEGLIGENCE = "serious_negligence"
    RECKLESSNESS = "recklessness"
    WILFUL = "wilful"

class ViolationType(Enum):
    PROHIBITED_PRACTICE = "art5_prohibited_practice"      # Art.5 — highest exposure
    HIGH_RISK_NON_COMPLIANCE = "high_risk_noncompliance"  # Art.6-51 — moderate
    TRANSPARENCY_FAILURE = "transparency_failure"         # Art.52 — lower
    GPAI_OBLIGATION = "gpai_obligation"                   # Art.51-55 — moderate
    MISLEADING_AUTHORITIES = "misleading_authorities"     # Art.99(3) — high

@dataclass
class PersonalLiabilityProfile:
    role: str                   # "CTO", "ML Engineer", "Founder", "Compliance Officer"
    jurisdiction: str           # "DE", "FR", "IE", "NL", etc.
    violation_type: ViolationType
    culpability: CulpabilityLevel
    decision_authority: bool    # Did this person have final say?
    documentation_signed: bool  # Did they sign technical docs / conformity assessment?
    prior_warnings: int = 0     # Number of prior NCA or internal warnings ignored
    vulnerable_persons_affected: bool = False  # Significantly increases exposure

    def criminal_risk_score(self) -> int:
        """Returns a 0-100 risk score for Art.102 criminal prosecution."""
        # Violation type sets the base score
        score = {
            ViolationType.PROHIBITED_PRACTICE: 70,
            ViolationType.MISLEADING_AUTHORITIES: 60,
            ViolationType.GPAI_OBLIGATION: 35,
            ViolationType.HIGH_RISK_NON_COMPLIANCE: 30,
            ViolationType.TRANSPARENCY_FAILURE: 10,
        }[self.violation_type]
        # Culpability modifier
        score += {
            CulpabilityLevel.NONE: -30,
            CulpabilityLevel.NEGLIGENCE: -15,
            CulpabilityLevel.SERIOUS_NEGLIGENCE: 0,
            CulpabilityLevel.RECKLESSNESS: 10,
            CulpabilityLevel.WILFUL: 20,
        }[self.culpability]
        # Aggravating factors
        if self.decision_authority:
            score += 10
        if self.documentation_signed:
            score += 8
        if self.prior_warnings > 0:
            score += min(self.prior_warnings * 5, 20)
        if self.vulnerable_persons_affected:
            score += 15
        return min(max(score, 0), 100)

    def risk_category(self) -> str:
        score = self.criminal_risk_score()
        if score >= 70:
            return "HIGH — immediate legal counsel required"
        elif score >= 40:
            return "MEDIUM — document decision rationale now"
        elif score >= 20:
            return "LOW — maintain compliance records"
        return "MINIMAL — standard due diligence sufficient"

    def mitigation_actions(self) -> list[str]:
        actions = []
        if self.criminal_risk_score() >= 40:
            actions.append("Retain criminal defence counsel in jurisdiction: " + self.jurisdiction)
        if self.documentation_signed:
            actions.append("Preserve all communications around documentation sign-off")
        if self.violation_type == ViolationType.PROHIBITED_PRACTICE:
            actions.append("Initiate voluntary cessation of prohibited system immediately")
            actions.append("Consider NCA voluntary disclosure to demonstrate good faith")
        if self.culpability in (CulpabilityLevel.SERIOUS_NEGLIGENCE, CulpabilityLevel.RECKLESSNESS):
            actions.append("Commission independent technical audit to establish true scope")
        if self.prior_warnings > 0:
            actions.append("Document corrective actions taken in response to prior warnings")
        if self.vulnerable_persons_affected:
            actions.append("Conduct humanitarian impact assessment for potential Art.88 reporting")
        return actions

# Example: CTO who approved a social scoring system deployment in Germany
profile = PersonalLiabilityProfile(
    role="CTO",
    jurisdiction="DE",
    violation_type=ViolationType.PROHIBITED_PRACTICE,
    culpability=CulpabilityLevel.RECKLESSNESS,
    decision_authority=True,
    documentation_signed=True,
    prior_warnings=1,
    vulnerable_persons_affected=False,
)

print(f"Art.102 Criminal Risk Score: {profile.criminal_risk_score()}/100")
print(f"Category: {profile.risk_category()}")
print("Mitigation actions:")
for action in profile.mitigation_actions():
    print(f"  - {action}")

# Output:
# Art.102 Criminal Risk Score: 100/100
# Category: HIGH — immediate legal counsel required
# Mitigation actions:
#   - Retain criminal defence counsel in jurisdiction: DE
#   - Preserve all communications around documentation sign-off
#   - Initiate voluntary cessation of prohibited system immediately
#   - Consider NCA voluntary disclosure to demonstrate good faith
#   - Commission independent technical audit to establish true scope
#   - Document corrective actions taken in response to prior warnings
```
Art.102 in the Penalty Chapter Context
Art.102 occupies a specific position in the penalty architecture of the EU AI Act:
| Article | Mechanism | Target | Enforcer |
|---|---|---|---|
| Art.99 | Administrative fine | Legal entity (operator, provider, importer) | NCA |
| Art.100 | Administrative fine | EU institution (EDPS enforcement) | EDPS |
| Art.101 | Administrative fine | GPAI provider | AI Office |
| Art.102 | Criminal sanction | Natural person | National prosecutor |
| Art.88 | Protection mechanism | Whistleblower | NCA / national court |
Art.102 is the enforcement backstop that ensures individuals cannot hide behind corporate structures. Without Art.102, a deliberate Art.5 violation by a knowing executive would impose liability only on the legal entity — the individual could resign and face no personal consequence. Art.102 closes this gap.
The Art.102 Relationship with Art.99(7): Mitigating Factors
Art.99(7) lists factors NCAs must consider when setting administrative fines. While those factors apply to Art.99 administrative proceedings, they provide useful signals for Art.102 criminal exposure:
- Art.99(7)(b): Intentional or negligent character — Maps directly to mens rea threshold for criminal prosecution
- Art.99(7)(f): Degree of responsibility considering technical/organisational measures — Good compliance systems reduce personal criminal exposure
- Art.99(7)(g): Any relevant previous infringement — Prior NCA warnings increase criminal prosecution likelihood
- Art.99(7)(h): Cooperation with authorities — Voluntary disclosure is the most powerful criminal exposure mitigation tool
Building and documenting a genuine compliance programme — not just a paper exercise — is simultaneously the best Art.99 mitigation and the strongest criminal defence under Art.102. Documented risk assessments, board-level AI governance minutes, and technical review records all contribute to a "reasonable officer" defence against criminal prosecution.
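The factor-by-factor mapping above can be expressed as a rough scoring sketch, with each Art.99(7) factor contributing a weight toward or against criminal exposure. The weights are entirely my assumptions for illustration; the Act assigns no numeric values.

```python
def exposure_signal(intentional: bool,
                    compliance_programme_documented: bool,
                    prior_infringements: int,
                    cooperated_with_authorities: bool) -> int:
    """Illustrative scoring of Art.99(7) factors read as Art.102 criminal-
    exposure signals. All weights are assumptions; lower is better."""
    score = 0
    # Art.99(7)(b): intentional conduct maps to the mens rea threshold
    score += 25 if intentional else 5
    # Art.99(7)(f): a documented compliance programme reduces exposure
    score -= 15 if compliance_programme_documented else 0
    # Art.99(7)(g): prior infringements increase prosecution likelihood
    score += min(prior_infringements * 10, 30)
    # Art.99(7)(h): cooperation is the strongest mitigation signal
    score -= 20 if cooperated_with_authorities else 0
    return max(score, 0)

print(exposure_signal(True, False, 2, False))  # 45 — high exposure posture
print(exposure_signal(False, True, 0, True))   # 0 — strong defence posture
```

The two calls bracket the range described in the text: wilful conduct with no programme and no cooperation versus a documented, cooperative posture with no history.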
Infrastructure Choices and Art.102 Documentation
The primary Art.102 criminal defence is establishing that you acted as a reasonable professional with adequate due diligence. Documentation supporting that defence includes:
- Technical review records: Showing you assessed the system against Annex III before deployment
- Legal counsel opinions: Written assessments that the system was outside Art.5 scope
- Risk management documentation: Art.9 compliant risk management records
- Infrastructure audit trails: Demonstrating the system operated within its intended parameters
EU-native infrastructure creates a documentation advantage: audit logs, access records, and system state data exist exclusively in EU jurisdictions without CLOUD Act complications. When a criminal investigation demands infrastructure evidence, EU-native records have a cleaner chain of custody than logs scattered across US-parent infrastructure subject to competing legal orders.
20-Item Individual Liability Checklist for Developers and Officers
Before Deployment:
- 1. Personally reviewed the Art.5 prohibited practices list — confirm this system does not fall within any prohibition
- 2. Documented my review and conclusion in writing (email, memo, or decision record)
- 3. If high-risk (Annex III): confirmed conformity assessment was completed before sign-off
- 4. Did not sign technical documentation I had not personally reviewed
- 5. Escalated unresolved compliance questions to legal counsel before proceeding
During Operation:
- 6. Monitoring plan in place (Art.72) and reviewed at defined intervals
- 7. Serious incident reports (Art.73/Art.65) reviewed personally, not delegated without understanding
- 8. No NCA correspondence left unanswered or delegated to subordinates without my knowledge
- 9. Team members with compliance concerns have a documented escalation path to me
If an NCA Investigation Begins:
- 10. Retained personal criminal defence counsel (separate from company counsel) within 48 hours of NCA contact
- 11. Preserved all personal communications (emails, Slack, meeting notes) related to the system from day one of investigation
- 12. Not made any statements to NCA investigators without legal counsel present
- 13. Identified and secured exculpatory documentation (review records, legal opinions, escalations)
Documentation Fundamentals:
- 14. All deployment approval decisions documented with rationale, not just outcome
- 15. Good-faith reliance on legal counsel is documented contemporaneously (not reconstructed)
- 16. Prior NCA warnings or internal compliance alerts are documented with response actions taken
- 17. Whistleblower reports from team members are handled under the Art.88 protection framework
Role-Specific:
- 18. (For CTOs/VPs Eng): Board or audit committee briefings on AI compliance risk are documented
- 19. (For ML Engineers): Personal objections to deployment decisions are documented in writing
- 20. (For Compliance Officers): Formal sign-off on conformity assessments includes a documented scope-of-review statement
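For teams tracking the checklist above in tooling, the 20 items group into five phases (5 + 4 + 4 + 4 + 3). The sketch below computes phase-weighted completion; the phase keys and structure are my own framing of the list, not part of any official scheme.

```python
# Item counts per phase of the 20-item checklist above
CHECKLIST = {
    "before_deployment": 5,   # items 1-5
    "during_operation": 4,    # items 6-9
    "if_investigation": 4,    # items 10-13
    "documentation": 4,       # items 14-17
    "role_specific": 3,       # items 18-20
}

def checklist_coverage(completed: dict[str, int]) -> float:
    """Fraction of the 20 checklist items completed, capped per phase."""
    total = sum(CHECKLIST.values())
    done = sum(min(completed.get(phase, 0), n) for phase, n in CHECKLIST.items())
    return done / total

# Pre-deployment items done plus documentation fundamentals: 9 of 20
print(f"{checklist_coverage({'before_deployment': 5, 'documentation': 4}):.0%}")  # 45%
```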
Art.102 and sota.io: Infrastructure as Criminal Risk Mitigation
The CLOUD Act cross-border criminal jurisdiction problem is eliminated for companies operating on EU-native infrastructure. When a German prosecutor requests infrastructure logs in a criminal investigation of an AI system hosted on sota.io (EU-native, no US parent):
- Single-jurisdiction evidence collection: German prosecutors access German-domiciled servers under German court orders
- No parallel US criminal jurisdiction hook: Without a US-incorporated cloud provider, there is no CLOUD Act gateway for US law enforcement to assert extraterritorial jurisdiction
- Clean chain of custody: Infrastructure evidence collected under a single legal order has stronger evidentiary value than logs with competing US/EU access claims
For individuals facing Art.102 exposure, clean infrastructure dramatically simplifies the criminal defence evidence situation.
See Also
- EU AI Act Art.99: Administrative Fines — €35M/7% Three-Tier Structure for Operators
- EU AI Act Art.100: Penalties for Union Institutions — EDPS Enforcement and Procurement Guide
- EU AI Act Art.101: Administrative Fines for GPAI Providers — AI Office Enforcement
- EU AI Act Art.5: Prohibited AI Practices — Social Scoring, Biometric Surveillance, Subliminal Manipulation
- EU AI Act Art.88: Whistleblower Protection — Reporting AI Act Violations and Personal Liability Shield
- EU AI Act Art.58: NCA Powers — Market Surveillance, Investigation, and Criminal Referral