EU AI Act Article 99 Penalties: The Complete Fine Tier Guide for Developers
When the EU AI Act reaches full enforcement on August 2, 2026, developers and companies deploying AI systems face a structured penalty regime with fines that dwarf GDPR in their upper ranges. Article 99 lays out three distinct fine tiers. Understanding which tier applies to which violation — and what actually triggers an investigation — is essential before you ship.
The Three Fine Tiers at a Glance
Article 99 structures penalties around the severity and nature of the violation:
| Tier | Violation Type | Max Fine | Max % of Turnover |
|---|---|---|---|
| 1 | Article 5 prohibited AI practices | €35,000,000 | 7% global annual turnover |
| 2 | High-risk AI non-compliance, operator/notified body failures | €15,000,000 | 3% global annual turnover |
| 3 | Incorrect or misleading information to authorities | €7,500,000 | 1% global annual turnover |
In each case, the higher of the two values (flat amount vs. percentage) applies — except for SMEs, where the lower applies.
Tier 1: €35M / 7% — Prohibited AI Practices (Article 5)
This is the highest penalty tier, and it covers violations of Article 5's absolute prohibitions — systems that were banned from February 2, 2025 onward, a full 18 months before the rest of the Act enters full enforcement.
What falls under Article 5 (and therefore Tier 1 fines):
- Subliminal manipulation: AI that exploits subconscious vulnerabilities to influence behavior causing harm
- Exploitation of vulnerabilities: Targeting age, disability, or social/economic situation to distort behavior
- Social scoring: AI systems that evaluate or classify people based on social behavior or personal characteristics and impose detrimental or unfavorable treatment (the final text covers private actors as well as public authorities)
- Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)
- Predictive policing based solely on profiling or personality trait assessment
- Untargeted facial image scraping to build recognition databases
- Emotion recognition in workplace or educational contexts (with narrow exceptions)
- Biometric categorization to infer race, political opinion, religion, sexual orientation, or trade union membership
If you build a SaaS product that incorporates any of these features — even as an edge-case use — you are exposed to Tier 1. For most Article 5 categories, the violation is the placing on the market or use of the system itself; authorities do not need to prove downstream harm first.
Tier 2: €15M / 3% — High-Risk AI Non-Compliance
The broadest category. Tier 2 covers failures by providers, deployers, importers, distributors, authorised representatives, and notified bodies to meet obligations for high-risk AI systems. Practically, this means:
- Shipping a high-risk AI system (Annex III categories) without a completed conformity assessment
- Missing technical documentation requirements (Article 11)
- Absence of a functioning risk management system (Article 9)
- Failure to maintain logs or enable traceability (Article 12)
- Not registering in the EU AI Act database before placing on market (Article 49)
- Inadequate human oversight mechanisms (Article 14)
- Transparency and information failures toward deployers (Article 13)
High-risk AI spans employment decisions, credit scoring, critical infrastructure, education, law enforcement, migration, and administration of justice. If your AI product touches any of these domains and serves EU users, you are likely building a Tier 2-exposed system.
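As a rough first-pass screen, the domain list above can be checked programmatically. This is a hypothetical sketch, not a classification tool: the domain keywords and function are illustrative, and real classification requires reading Annex III's specific use-case definitions.

```python
# Hypothetical first-pass screen against the Annex III domains listed above.
# Domain names are illustrative keywords, not the Act's legal definitions.
ANNEX_III_DOMAINS = {
    "employment", "credit_scoring", "critical_infrastructure",
    "education", "law_enforcement", "migration", "justice",
}

def tier2_exposed(system_domains: set[str], serves_eu_users: bool) -> bool:
    """True if the system plausibly needs a full high-risk (Annex III) assessment."""
    return serves_eu_users and bool(system_domains & ANNEX_III_DOMAINS)

tier2_exposed({"employment", "analytics"}, serves_eu_users=True)   # True
tier2_exposed({"employment"}, serves_eu_users=False)               # False
```

A `True` result here means "go read Annex III carefully", not "you are high-risk" — several Annex III entries carve out narrow use cases within each domain.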
The 3% figure sounds manageable for a startup. It is not. The percentage is of total worldwide annual turnover, not EU revenue, and for non-SMEs the cap is the higher of €15M or 3% — so a large company's ceiling never drops below €15M. Even under the SME lower-of rule, a company with €50M turnover faces up to €1.5M per violation, and multiple violations can stack.
Tier 3: €7.5M / 1% — Misleading Information to Authorities
The smallest fine tier, but one developers should not underestimate. This applies when you provide incorrect, incomplete, or misleading information to:
- National market surveillance authorities (MSAs) during an investigation
- The European AI Office during a GPAI model compliance evaluation
- Notified bodies during conformity assessment
The practical risk: a developer gives an authority an incomplete description of how their system works, omits a capability, or provides training data documentation that does not reflect the actual model behavior. Even without intent to deceive, Article 99 Tier 3 can apply.
SME and Startup Provisions
Article 99 contains a specific carve-out for small and medium-sized enterprises and startups: the fine shall be the lower of the percentage or flat-amount threshold, rather than the higher.
For a startup with €2M annual turnover:
- Tier 1: lower of €35M or 7% of €2M (€140K) = €140,000
- Tier 2: lower of €15M or 3% of €2M (€60K) = €60,000
- Tier 3: lower of €7.5M or 1% of €2M (€20K) = €20,000
These are still material fines for a seed-stage company, but they are proportionate. The key implication: early-stage AI companies are not exempt — they are just protected from disproportionate fines.
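The higher-of rule and the SME carve-out can be captured in a short calculation sketch. This is illustrative only, not legal advice; the flat amounts and percentages are the Article 99 caps quoted above.

```python
# Illustrative sketch of the Article 99 cap rule: non-SMEs face up to the
# HIGHER of the flat amount or the turnover percentage; SMEs up to the LOWER.
TIERS = {
    1: (35_000_000, 0.07),  # Article 5 prohibited practices
    2: (15_000_000, 0.03),  # high-risk non-compliance
    3: (7_500_000, 0.01),   # misleading information to authorities
}

def max_fine(tier: int, annual_turnover_eur: float, is_sme: bool) -> float:
    """Return the maximum fine cap in EUR for a given tier and turnover."""
    flat, pct = TIERS[tier]
    pct_amount = pct * annual_turnover_eur
    return min(flat, pct_amount) if is_sme else max(flat, pct_amount)

# The worked startup example from above (EUR 2M turnover, SME):
for t in (1, 2, 3):
    print(f"Tier {t}: EUR {max_fine(t, 2_000_000, is_sme=True):,.0f}")
# prints Tier 1: EUR 140,000 / Tier 2: EUR 60,000 / Tier 3: EUR 20,000
```

Note how the same function shows why SME status matters: `max_fine(2, 50_000_000, is_sme=False)` returns €15M, not €1.5M, because the higher-of rule applies.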
GPAI Models: Article 101, Not Article 99
Importantly, Article 99 does not cover providers of general-purpose AI (GPAI) models. That penalty regime falls under Article 101, which is enforced by the European Commission (not national MSAs) and caps fines at €15,000,000 or 3% of total worldwide annual turnover, whichever is higher. Article 101 covers:
- Failure to provide documentation for GPAI evaluation
- Failure to provide access to the model for an evaluation when required (Article 92)
- Non-compliance with systemic risk obligations for frontier models (Article 55)
GPAI-specific enforcement under Article 101 was delayed to August 2, 2026, aligned with the general enforcement date.
What Determines the Actual Fine Amount
Article 99(3) instructs authorities to consider multiple factors when setting the specific amount within the maximum range:
- Nature, gravity, duration of the infringement
- Number of people affected and the degree of harm
- Intent — was it deliberate or negligent?
- Mitigation actions taken once the violation was identified
- Prior penalties from other national authorities for the same violation
- Company size, market share, and overall financial strength
- Financial benefit obtained from the violation
- Cooperation with the investigating authority
- How the violation became known — self-reported vs. complaint vs. proactive surveillance
Demonstrating good-faith compliance efforts, complete documentation, and prompt cooperation meaningfully reduces final fine amounts. This is why investing in technical documentation and audit trails early is cost-effective — not just compliance theater.
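Since authorities weigh these factors case by case, there is no fine formula to implement. What a developer can do is keep a structured internal record of the evidence relevant to Article 99(3) — mitigation taken, cooperation, whether the issue was self-reported. A hypothetical sketch (field names are illustrative, not drawn from the Act's text):

```python
# Hypothetical internal record of Article 99(3)-relevant evidence, useful
# for documenting mitigation and cooperation before any investigation.
from dataclasses import dataclass, field

@dataclass
class FineFactorEvidence:
    infringement_duration_days: int
    persons_affected: int
    deliberate: bool                                   # vs. merely negligent
    mitigation_actions: list[str] = field(default_factory=list)
    self_reported: bool = False
    cooperation_notes: list[str] = field(default_factory=list)

record = FineFactorEvidence(
    infringement_duration_days=14,
    persons_affected=0,
    deliberate=False,
    mitigation_actions=["disabled feature", "notified MSA"],
    self_reported=True,
)
```

The point of a record like this is timestamped proof of the mitigating factors listed above, ready before an MSA asks for it.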
Enforcement Timeline: When Do Fines Actually Apply?
- February 2, 2025: Article 5 prohibitions in force; the penalty provisions that make Tier 1 fines enforceable apply from August 2, 2025
- August 2, 2025: GPAI obligations, governance provisions, and the Article 99 penalty framework apply; Article 101 fines for GPAI do not apply until August 2, 2026
- August 2, 2026: Full enforcement — Tier 2 and Tier 3 fines fully operational, including for all high-risk AI systems
Market surveillance authorities are actively being designated (deadline: August 2, 2025). Germany, France, and the Netherlands have already appointed MSAs. The infrastructure for enforcement is operational before the full compliance deadline.
Practical Checklist for Developers
Before August 2026, verify:
- Have you identified whether any of your AI features fall under Article 5 prohibitions? (Those are already illegal.)
- Have you classified your AI system against Annex III to determine if it is high-risk?
- If high-risk: is your risk management system (Article 9) documented and operational?
- Is your technical documentation (Article 11) current and complete?
- Have you registered in the EU AI database (Article 49) if required?
- Are your logs and audit trails sufficient to respond to an MSA investigation?
- Does your infrastructure support the data residency and access requirements that an MSA might invoke under Article 75?
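A minimal way to track the checklist above internally, assuming nothing official: the item keys and structure below are hypothetical, not a regulatory schema.

```python
# Illustrative pre-deadline self-check mirroring the checklist above.
# Keys and descriptions are hypothetical, not an official schema.
CHECKLIST = {
    "no_article_5_features": "No features fall under Article 5 prohibitions",
    "annex_iii_classified": "System classified against Annex III",
    "risk_mgmt_documented": "Risk management system (Art. 9) documented",
    "tech_docs_current": "Technical documentation (Art. 11) current",
    "db_registered": "Registered in EU AI database (Art. 49) if required",
    "audit_trails_ready": "Logs/audit trails sufficient for an MSA inquiry",
}

def gaps(status: dict[str, bool]) -> list[str]:
    """Return descriptions of checklist items not yet satisfied."""
    return [desc for key, desc in CHECKLIST.items() if not status.get(key, False)]

remaining = gaps({"no_article_5_features": True, "annex_iii_classified": True})
print(f"{len(remaining)} open items before August 2026")   # prints 4 open items...
```

Anything this returns is a documented gap — which, per the Article 99(3) factors above, is exactly what you want to close (and be able to show you closed) before an investigation.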
The fine structure is designed to be proportionate but not painless. For a Series A company doing €5M ARR, a Tier 2 violation is capped at €150K under the SME lower-of rule: survivable but significant. For a growth-stage company at €30M ARR that still qualifies as an SME, the same cap scales to €900K; past SME status, the higher-of rule puts the ceiling at €15M.
The August 2026 deadline is not a suggestion. Market surveillance authorities across the EU are already coordinating with the AI Office, and the investigation procedures under Articles 79-82 give them broad powers to access systems, documentation, and algorithms.
EU-native infrastructure for AI systems. Compliance-ready by default. Deploy on sota.io
See Also
- EU AI Act Art.79: MSA Investigation Procedure — Developer Guide — the national-level procedure that precedes Art.99 enforcement and fine issuance
- EU AI Act Art.82: Formal Non-Compliance Notification — Developer Guide — Art.82 notification is the formal step that triggers the Art.99 fine proceedings
- EU AI Act Art.74: Market Surveillance Authority Powers — Developer Guide — the investigative powers MSAs use before imposing Art.99 fines
- EU AI Act Art.88: Whistleblower Protection — Developer Guide — Art.88 reports are a primary source for investigations that lead to Art.99 penalties
- EU AI Act Art.85: The 2027 Review Clause — Developer Guide — the post-2027 evaluation may recalibrate Art.99 fine thresholds