# EU AI Act Art.87: Complaints to Market Surveillance Authorities — Developer Guide (2026)
Every rights framework needs an enforcement mechanism. Article 86 gives natural persons the right to an explanation of how a high-risk AI system influenced a decision that affected them. Article 87 gives those same persons — and anyone else — the right to complain to a public authority when they believe the AI Act has been violated.
For developers and deployers of high-risk AI systems, Article 87 is where abstract compliance obligations become concrete legal exposure. A complaint under Art.87 is the first step in a chain that can lead to formal investigation, corrective measures, and ultimately fines under Article 99.
This guide explains what Art.87 requires, who can file a complaint and about what, how Market Surveillance Authorities (MSAs) must respond, the Art.86 → Art.87 escalation path, and what developers need to build and maintain to survive an MSA investigation triggered by a complaint.
## What Article 87 Actually Says
Article 87(1) — The Right to Complain:
> Without prejudice to other administrative or judicial remedies, any natural or legal person having reason to consider that there has been an infringement of the provisions of this Regulation may submit complaints to the relevant market surveillance authority.
Three things to note immediately:
- "Any natural or legal person" — this is not limited to affected individuals. Competitors, civil society organisations, consumer advocacy groups, journalists, and other AI providers can all file complaints. The standing requirement is broader than under Art.86.
- "Having reason to consider" — a reasonable suspicion is sufficient; the complainant does not need to prove a violation. The burden of investigation falls on the MSA.
- "Without prejudice to other administrative or judicial remedies" — Art.87 is additive. Filing a complaint does not prevent the person from simultaneously pursuing GDPR complaints to a Data Protection Authority, civil litigation, or national consumer protection proceedings.
Article 87(2) — MSA Obligations:
> The market surveillance authority shall handle the complaint and follow up on it with due diligence and, where appropriate, inform the complainant of the progress and outcome of the investigation.
This creates a procedural obligation on the MSA: complaints must be handled "with due diligence" and complainants must be informed. This is not a black-box complaint box — it is a formal procedure with accountability obligations on the authority.
Article 87(3) — National Procedural Law:
The specific procedures for submitting and handling complaints are governed by the national law of each Member State. This means the mechanics differ across the EU — German MSA procedures differ from French, Dutch, or Polish ones — but the substantive right exists uniformly across all Member States under the AI Act.
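Because Art.87(3) delegates the mechanics to national law, a multi-state deployment needs a per-Member-State procedure registry. A minimal sketch follows; the authority names and deadlines below are placeholders, not the real national rules, which must be researched per Member State:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NationalComplaintProcedure:
    """Per-Member-State Art.87 complaint mechanics (Art.87(3): national law governs)."""

    member_state: str
    msa_contact: str           # competent MSA; placeholder values below
    online_filing: bool        # whether complaints can be filed electronically
    acknowledgement_days: int  # assumed national deadline to acknowledge receipt


# Illustrative entries only; real authority names and deadlines come from national law.
PROCEDURES: dict[str, NationalComplaintProcedure] = {
    "DE": NationalComplaintProcedure("DE", "German MSA (placeholder)", True, 14),
    "FR": NationalComplaintProcedure("FR", "French MSA (placeholder)", True, 21),
}


def procedure_for(member_state: str) -> NationalComplaintProcedure:
    """Look up the recorded procedure; fail loudly for unresearched states."""
    code = member_state.upper()
    if code not in PROCEDURES:
        raise KeyError(f"No Art.87 procedure recorded for {code}; research national law first.")
    return PROCEDURES[code]
```

Failing loudly for unrecorded Member States is deliberate: a complaint can arrive from any deployment state, so a missing registry entry is itself a readiness gap.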
## Who Files Art.87 Complaints (and Why It Matters)
Understanding who is likely to file complaints shapes how developers design their compliance posture.
| Complainant Type | Likely Trigger | Complaint Target |
|---|---|---|
| Affected individual | Adverse decision (denied credit, job, benefits) — Art.86 explanation denied or unsatisfactory | Deployer's use of specific high-risk AI |
| Consumer org / NGO | Pattern of harm, algorithmic bias, opacity | Provider's AI system design or deployer's deployment practices |
| Competitor | Unfair advantage from non-compliant AI (no conformity assessment, CE marking absent) | Provider's market access without proper conformity |
| Journalist / researcher | Investigative finding of undisclosed AI use or non-compliant deployment | Both provider and deployer |
| Whistleblower | Internal knowledge of compliance failure (see Art.88) | Provider's technical documentation, testing, monitoring |
| DPA referral | GDPR complaint reveals AI Act dimension | Deployer's combined GDPR + AI Act violation |
| Other MSA (cross-border) | Art.75 mutual assistance — MSA of affected person's Member State refers to MSA where provider is established | Provider in another Member State |
The implication: complaint risk is not limited to direct user interactions. Competitors monitoring your CE marking status, NGOs tracking algorithmic bias patterns, and cross-border MSA referrals are all realistic vectors.
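One way to operationalise the table is a triage map from complainant type to the internal team that owns the first response. A sketch with hypothetical team names:

```python
# Hypothetical internal routing: which team owns the first response for each
# complainant type from the table above. Team names are assumptions.
TRIAGE_OWNERS = {
    "affected_individual": "deployer_compliance_team",
    "consumer_org": "legal_counsel",
    "competitor": "legal_counsel",
    "journalist": "communications_and_legal",
    "whistleblower": "internal_reporting_channel",
    "dpa_referral": "dpo_and_legal",
    "cross_border_msa": "legal_counsel",
}


def first_responder(complainant_type: str) -> str:
    # Unrecognised complainant types default to legal counsel.
    return TRIAGE_OWNERS.get(complainant_type, "legal_counsel")
```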
## The Art.86 → Art.87 Escalation Chain
The most common individual complaint path flows from Article 86:
```text
1. Person receives high-risk AI decision (credit denial, job rejection, benefits cut)
        ↓
2. Person invokes Art.86 right to explanation from deployer
        ↓
3. Deployer either:
   (a) provides adequate explanation → complaint risk reduced
   (b) provides inadequate explanation → Art.87 complaint trigger
   (c) fails to respond within reasonable time → Art.87 complaint trigger
   (d) claims Art.86 does not apply → complainant seeks MSA determination
        ↓
4. Person files Art.87 complaint with national MSA
        ↓
5. MSA investigates → requests documentation from deployer and provider (Art.74)
        ↓
6. MSA may impose corrective measures (Art.79) or refer for penalties (Art.99)
```
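The chain above can be sketched as a small state machine, useful for tracking where each case sits internally. Stage names and transitions are a simplification of steps 1–6, not a legal model:

```python
from enum import Enum, auto


class Stage(Enum):
    DECISION_ISSUED = auto()
    ART86_REQUESTED = auto()
    EXPLAINED = auto()
    ART87_FILED = auto()
    MSA_INVESTIGATING = auto()
    CORRECTIVE_MEASURES = auto()


# Allowed transitions, mirroring steps 1-6 (simplified: an adequate explanation
# reduces complaint risk but does not legally bar a later Art.87 complaint).
TRANSITIONS = {
    Stage.DECISION_ISSUED: {Stage.ART86_REQUESTED},
    Stage.ART86_REQUESTED: {Stage.EXPLAINED, Stage.ART87_FILED},
    Stage.EXPLAINED: {Stage.ART87_FILED},
    Stage.ART87_FILED: {Stage.MSA_INVESTIGATING},
    Stage.MSA_INVESTIGATING: {Stage.CORRECTIVE_MEASURES},
}


def advance(current: Stage, nxt: Stage) -> Stage:
    """Move a case forward, rejecting steps the chain does not allow."""
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Invalid escalation step: {current.name} -> {nxt.name}")
    return nxt
```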
For developers, the critical link is step 5: when an Art.87 complaint triggers an MSA investigation, the authority has broad access powers under Article 74, including the right to access technical documentation, post-market monitoring data, logs, and human oversight records.
## What MSAs Can Request After a Complaint
An Art.87 complaint does not automatically mean an inspection, but it does mean the MSA will assess whether the complaint has merit. If it does, Art.74 gives MSAs extensive investigation powers:
Article 74 investigation powers include:
- Access to any information and documentation necessary for the performance of tasks
- Access to any software, algorithms, models, or data used in high-risk AI systems
- Access to technical documentation required under Annex IV
- Access to post-market monitoring data (Art.72)
- Access to human oversight logs
- Request for testing and sample checking
What this means in practice for a complaint-triggered investigation:
If your Art.86 explanation records are stored with a US-based cloud provider (AWS, Azure, GCP), an Art.74 documentation request from the MSA will reach those records through your EU provider agreement. Simultaneously, the same records may be compellable by US authorities under the CLOUD Act (18 U.S.C. § 2713), which reaches data in the possession, custody, or control of US-person entities regardless of where it is stored.
The dual-compellability problem: EU MSA requests your explanation logs → you comply → US DOJ can independently compel the same logs under CLOUD Act → you now have two legal orders with potentially conflicting confidentiality or disclosure requirements.
EU-native infrastructure (German GmbH, no US ownership) eliminates this dual-compellability entirely: a single EU legal order, no CLOUD Act exposure.
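Operationally, serving an Art.74 request means knowing where each requested category of documentation lives before the response deadline starts running. A minimal sketch; the category keys and storage paths are hypothetical:

```python
# Hypothetical mapping from Art.74-requested categories to internal stores;
# the category keys and paths are illustrative, not MSA terminology.
DOCUMENT_STORES = {
    "technical_documentation": "docs/annex_iv/",
    "software_and_models": "registry/models/",
    "post_market_monitoring": "pms/reports/",
    "human_oversight_logs": "logs/oversight/",
    "explanation_records": "logs/art86/",
}


def split_request(requested: list[str]) -> tuple[list[str], list[str]]:
    """Split an MSA documentation request into servable categories and gaps.

    Gaps should be escalated to legal counsel immediately, not discovered
    on the response deadline.
    """
    servable = [r for r in requested if r in DOCUMENT_STORES]
    gaps = [r for r in requested if r not in DOCUMENT_STORES]
    return servable, gaps
```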
## What Triggers High-Risk Complaints (Developer Perspective)
From a developer's perspective, the complaint triggers to design around are:
1. Absent or Insufficient Art.86 Explanation
The clearest trigger. If your system cannot generate a meaningful explanation of how the AI contributed to a high-risk decision, affected persons have both an Art.86 violation to complain about and standing to file under Art.87.
Design implication: Art.86-capable explanation generation is not optional for Annex III systems. It is the primary defence against Art.87 complaints from affected individuals.
2. Missing CE Marking or EUAIDB Registration
Art.49 requires CE marking for high-risk AI systems. Art.71 requires registration in the EU AI Act database (EUAIDB) for many Annex III systems. Absence of either is directly visible to competitors and NGOs monitoring market entry.
Design implication: EUAIDB registration and CE marking must be in place before market deployment. These are public-facing compliance signals that any person can check without filing a formal complaint.
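A pre-deployment gate over these two public-facing signals can be a simple check; the configuration key names below are assumptions:

```python
def missing_public_signals(system_config: dict) -> list[str]:
    """Return public-facing compliance signals still missing before market placement.

    Key names are illustrative configuration flags, not statutory terms.
    """
    required = ("ce_marking", "euaidb_registered")
    return [flag for flag in required if not system_config.get(flag, False)]


# Deployment gate: an empty list means both signals are in place.
blockers = missing_public_signals({"ce_marking": True, "euaidb_registered": False})
```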
3. Undisclosed High-Risk Classification
Art.13(1) requires deployers to inform affected persons that they are interacting with a high-risk AI system. If a deployer uses an Annex III AI system without disclosure, the affected person may not know they have Art.86 rights — but a third party (NGO, researcher) who discovers the undisclosed use can still file under Art.87.
4. Post-Market Monitoring Failures
Art.72 requires providers to operate post-market monitoring plans and report serious incidents under Art.73. If monitoring data shows systematic failures and the provider has not reported or corrected them, a whistleblower or affected person with access to that information can file an Art.87 complaint.
5. Prohibited Practice Suspicion (Art.5)
Art.5 prohibitions — social scoring, real-time remote biometric identification in public spaces (outside narrow exceptions), subliminal manipulation — are absolute. Anyone with reason to believe such a system is operating can file an Art.87 complaint even without being personally affected.
## MSA Investigation Flow After Complaint
Understanding what happens inside the MSA after a complaint is filed helps developers anticipate what they need to produce quickly:
```text
Art.87 Complaint Filed
        ↓
MSA Admissibility Assessment (is there "reason to consider" a violation?)
        ↓
[Inadmissible] → Complainant informed (Art.87(2))
        ↓
[Admissible] → Investigation opened
        ↓
MSA requests documentation (Art.74) from provider + deployer
        ↓
Provider/Deployer submits: technical documentation (Annex IV),
    conformity assessment records, post-market monitoring data,
    incident reports, explanation logs (Art.86 records)
        ↓
MSA evaluates compliance
        ↓
[Compliant] → Investigation closed, complainant informed
        ↓
[Non-compliant] → MSA issues corrective measures (Art.79)
    or requests recall/withdrawal (Art.80)
    or refers to national penalty authority (Art.99)
        ↓
Cross-border dimension? → MSA notifies Commission + other MSAs (Art.82)
        ↓
Findings contribute to MSA annual report (Art.84)
    → Commission review 2027 (Art.85)
```
The key developer insight: the MSA investigation is a documentation audit. If your technical documentation (Annex IV), conformity assessment records, post-market monitoring reports, and Art.86 explanation infrastructure are in order, a complaint-triggered investigation is manageable. If they are not, the complaint is the first step toward Art.99 fines.
## Art.87 Complaint Readiness: Python Implementation
```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum
from typing import Optional
import uuid


class ComplaintStatus(Enum):
    RECEIVED = "received"
    UNDER_INVESTIGATION = "under_investigation"
    INFORMATION_REQUESTED = "information_requested"
    RESPONDING = "responding"
    CLOSED_COMPLIANT = "closed_compliant"
    CLOSED_NON_COMPLIANT = "closed_non_compliant"


@dataclass
class Art87ComplaintRecord:
    """Track an MSA complaint and the internal response to it."""

    complaint_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    received_date: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    msa_reference: Optional[str] = None
    complainant_type: str = ""   # individual, ngo, competitor, dpa_referral
    alleged_violation: str = ""  # art_86_denial, no_ce_marking, prohibited_practice, etc.
    system_id: str = ""
    status: ComplaintStatus = ComplaintStatus.RECEIVED
    documentation_requested: list[str] = field(default_factory=list)
    response_deadline: Optional[datetime] = None
    legal_counsel_notified: bool = False
    explanation_records_retrieved: bool = False
    cloud_act_risk_assessed: bool = False

    def set_response_deadline(self, days: int = 30) -> datetime:
        """MSAs typically allow around 30 days to respond to documentation requests."""
        self.response_deadline = self.received_date + timedelta(days=days)
        return self.response_deadline

    def assess_cloud_act_risk(self, infrastructure_provider: str) -> dict:
        """Assess CLOUD Act dual-compellability risk for documentation storage.

        US-controlled infrastructure implies parallel US compellability risk.
        """
        us_providers = {"aws", "azure", "gcp", "google", "microsoft", "amazon"}
        is_us_controlled = any(p in infrastructure_provider.lower() for p in us_providers)
        self.cloud_act_risk_assessed = True
        return {
            "infrastructure": infrastructure_provider,
            "cloud_act_exposure": is_us_controlled,
            "dual_compellability_risk": is_us_controlled,
            "recommendation": (
                "Consider migrating compliance documentation to EU-native infrastructure "
                "to eliminate CLOUD Act parallel compellability during MSA investigations."
                if is_us_controlled
                else "EU-native infrastructure: single legal order, no CLOUD Act exposure."
            ),
        }


@dataclass
class Art87ReadinessChecker:
    """Verify complaint-readiness for Annex III AI systems."""

    system_id: str
    checks_passed: list[str] = field(default_factory=list)
    checks_failed: list[str] = field(default_factory=list)

    def run_all_checks(self, system_config: dict) -> dict:
        checks = [
            ("technical_documentation_annex_iv", system_config.get("annex_iv_docs_complete", False)),
            ("conformity_assessment_complete", system_config.get("conformity_assessment_done", False)),
            ("ce_marking_applied", system_config.get("ce_marking", False)),
            ("euaidb_registration_current", system_config.get("euaidb_registered", False)),
            ("post_market_monitoring_active", system_config.get("pms_active", False)),
            ("art86_explanation_capable", system_config.get("explanation_system_operational", False)),
            ("art86_records_retained_3yr", system_config.get("explanation_log_retention_3yr", False)),
            ("serious_incident_reporting_ready", system_config.get("art73_reporting_configured", False)),
            ("instructions_for_use_current", system_config.get("instructions_for_use_published", False)),
            ("human_oversight_logs_retained", system_config.get("human_oversight_logging", False)),
        ]
        for check_name, passed in checks:
            if passed:
                self.checks_passed.append(check_name)
            else:
                self.checks_failed.append(check_name)
        return {
            "system_id": self.system_id,
            "readiness_score": len(self.checks_passed) / len(checks),
            "passed": self.checks_passed,
            "failed": self.checks_failed,
            "complaint_risk": (
                "HIGH" if len(self.checks_failed) > 3
                else "MEDIUM" if self.checks_failed
                else "LOW"
            ),
        }
```
## The Art.87 + Art.88 Distinction
Article 87 (Complaints) and Article 88 (Whistleblower Protection) are related but distinct:
| Dimension | Art.87 Complaints | Art.88 Whistleblower Protection |
|---|---|---|
| Who can use it | Any natural or legal person | Persons who report violations in professional context |
| Purpose | Formal complaint about suspected violation | Protection for internal reporters from retaliation |
| Recipient | MSA (external regulatory body) | Applies to internal reporting + external disclosure |
| Legal basis | AI Act Art.87 | AI Act Art.88 + Whistleblower Directive 2019/1937 |
| Subject matter | Any AI Act provision | Breaches of the AI Act |
| Developer action | Maintain complaint-ready documentation | Establish internal AI Act reporting channels |
A whistleblower (Art.88) who internally reports that a provider is deploying a non-compliant AI system may subsequently file an Art.87 complaint with the MSA if the internal report is ignored — and is protected from retaliation for doing so.
For developers: Art.88 implies you need an internal AI Act reporting channel (separate from GDPR DPO reporting). An employee who identifies that a system has been miscategorised as non-high-risk when it meets Annex III criteria can report internally under Art.88 protection — and if ignored, escalate to an MSA under Art.87.
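A sketch of such an internal channel, with an escalation flag when a report sits unacknowledged; the field names and the 90-day threshold are internal policy assumptions, not statutory requirements:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class InternalAIActReport:
    """Sketch of an internal Art.88-channel report; field names are assumptions."""

    reporter_hash: str  # pseudonymised reporter identity (retaliation protection)
    concern: str        # e.g. "annex_iii_misclassification"
    system_id: str
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    acknowledged: bool = False
    escalated_to_msa: bool = False


def flag_art87_escalation(
    report: InternalAIActReport,
    days_unacknowledged: int,
    threshold_days: int = 90,
) -> InternalAIActReport:
    """Flag the Art.87 path when an internal report goes unacknowledged too long.

    The 90-day threshold is an internal policy assumption, not a statutory deadline.
    """
    if not report.acknowledged and days_unacknowledged >= threshold_days:
        report.escalated_to_msa = True
    return report
```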
## Cross-Border Complaint Dynamics
Where a high-risk AI system is placed on the market in multiple Member States, an Art.87 complaint filed in one Member State can trigger cross-border coordination:
- Article 75 mutual assistance: MSAs must cooperate. The MSA receiving the complaint may request information from the MSA of the Member State where the provider is established.
- Article 76 joint operations: MSAs from multiple Member States may conduct joint investigations for systemic issues.
- Article 77: If a violation is identified across multiple Member States, the Commission may coordinate a Union-wide response.
Practical implication: A complaint filed by a German consumer about a French provider's credit-scoring AI used in Germany can trigger: (1) German MSA investigation, (2) German MSA requesting cooperation from French MSA, (3) French MSA investigating the provider's technical documentation, (4) Commission notification if the violation is systemic.
This cross-border reach means providers with EU-wide deployments face complaint risk from all Member States where their system operates — not just the Member State of establishment.
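The venue calculation is simple to encode: every deployment Member State plus the state of establishment is a potential complaint jurisdiction. A one-function sketch:

```python
def complaint_jurisdictions(establishment_state: str, deployment_states: set[str]) -> set[str]:
    """Potential Art.87 venues: every Member State where the system operates,
    plus the provider's state of establishment (the cross-border referral target)."""
    return deployment_states | {establishment_state}
```

For the scenario above: a French provider deployed in Germany and the Netherlands faces complaint venues `{"FR", "DE", "NL"}`.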
## CLOUD Act Intersection: When Documentation is Subpoenaed
When an Art.87 complaint triggers an MSA investigation, the investigation will reach your compliance documentation. If that documentation is stored on US-controlled infrastructure, you face dual compellability:
Scenario: Art.87 complaint filed in Germany → German MSA requests your AI system's technical documentation (Annex IV), post-market monitoring data, and Art.86 explanation logs → documentation stored on AWS us-east-1 → German MSA issues formal request → simultaneously, US DOJ can compel the same data under CLOUD Act § 2713 with no EU authority to override.
The conflict: GDPR Art.48 provides that a third-country court order or administrative decision requiring transfer of personal data is enforceable only if based on an international agreement, such as a mutual legal assistance treaty, between the requesting country and the EU or a Member State. The CLOUD Act compels disclosure regardless. This creates legal exposure in both directions: comply with the CLOUD Act → potential GDPR Art.48 violation; refuse the CLOUD Act order → US contempt risk.
EU-native hosting eliminates this entirely. A provider whose AI system documentation is hosted on EU-native infrastructure (incorporated in the EU, no US ownership or control) is subject only to EU legal orders. CLOUD Act does not apply. When the German MSA requests documentation, there is one legal order to comply with.
| Infrastructure | MSA Compliance | CLOUD Act Exposure | Legal Orders |
|---|---|---|---|
| AWS / Azure / GCP | ✓ possible | ✓ YES — parallel US compellability | 2 (conflicting) |
| EU-native (German GmbH, no US control) | ✓ possible | ✗ NO | 1 |
## 30-Item Art.87 Complaint-Readiness Checklist
### Provider Obligations (10 items)
- P1 — Technical documentation (Annex IV) complete and current for all Annex III systems
- P2 — Conformity assessment (Art.43/44) completed and records retained ≥10 years
- P3 — CE marking affixed (Art.49) before market placement
- P4 — EUAIDB registration current (Art.71) with accurate system description
- P5 — Post-market monitoring plan (Art.72) operational with data collection active
- P6 — Serious incident reporting (Art.73) configured with MSA notification channels identified
- P7 — Art.86 explanation capability built into system (Art.86(2) enablement obligation)
- P8 — Deployer instructions for use (Art.13) current and published
- P9 — Internal AI Act reporting channel established (Art.88 whistleblower prerequisite)
- P10 — Compliance documentation stored on infrastructure with documented legal order risk assessment
### Deployer Obligations (10 items)
- D1 — Art.86 explanation process defined with response SLA
- D2 — Explanation records retained (minimum 3 years recommended)
- D3 — Art.13 disclosure implemented — affected persons informed AI is used in decision
- D4 — Human oversight logs retained per Art.26 obligations
- D5 — Internal escalation path defined: Art.86 complaint → Art.87 MSA complaint risk
- D6 — Legal counsel briefed on MSA documentation request procedures in each Member State
- D7 — GDPR DPO briefed on AI Act MSA intersection (GDPR DPA ≠ AI Act MSA in most states)
- D8 — Art.87 complaint response playbook exists (who responds, in what timeframe)
- D9 — Post-market monitoring data available for MSA review within 30 days of request
- D10 — Contract with AI provider specifies documentation access rights for MSA compliance
### Infrastructure & Cross-Border (10 items)
- I1 — CLOUD Act exposure assessed for all compliance documentation storage
- I2 — Dual-compellability risk documented for each jurisdiction where system operates
- I3 — EU Member States where system deployed identified (determines potential MSA complaint jurisdictions)
- I4 — Cross-border MSA coordination understood (Art.75/76 mutual assistance implications)
- I5 — Whistleblower (Art.88) channel separate from DPO channel established
- I6 — MSA contact details for all deployment Member States documented
- I7 — National procedural law for Art.87 complaint handling researched for each deployment state
- I8 — Response timeline per Member State documented (30/60/90 day national variations)
- I9 — Art.99 fine exposure calculated for each Annex III system deployed
- I10 — Annual MSA reporting (Art.84) monitored for sector-level enforcement signals
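For tracking purposes, the 30 items can be encoded as data and scored per category. A sketch using the item IDs above:

```python
# Compact encoding of the checklist for progress tracking (IDs P1-P10,
# D1-D10, I1-I10 correspond to the items listed in this section).
CHECKLIST = {
    "provider": [f"P{i}" for i in range(1, 11)],
    "deployer": [f"D{i}" for i in range(1, 11)],
    "infrastructure": [f"I{i}" for i in range(1, 11)],
}


def completion(done: set[str]) -> dict[str, float]:
    """Per-category completion ratio over the 30 checklist items."""
    return {
        category: sum(item in done for item in items) / len(items)
        for category, items in CHECKLIST.items()
    }
```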
## Art.87 Fine Exposure: The Art.99 Link
Article 87 is the complaint mechanism; Article 99 is the penalty mechanism. The chain from complaint to fine:
Art.87 complaint → Art.74 investigation → Art.79 corrective measures → non-compliance → Art.99 penalties:
| Violation Type | Art.99 Maximum Fine |
|---|---|
| Prohibited practices (Art.5) | €35M or 7% of global annual turnover, whichever is higher |
| High-risk AI obligation violations | €15M or 3% of global annual turnover, whichever is higher |
| Incorrect or misleading information to authorities | €7.5M or 1% of global annual turnover, whichever is higher |
| GPAI model obligation violations | €15M or 3% of global annual turnover, whichever is higher |
A single Art.87 complaint, if it reveals a systematic failure to conduct conformity assessment, can trigger fines in the €15M / 3% tier. Because each ceiling is the higher of the two amounts, a €100M-revenue company faces exposure of up to €15M from one complaint; SMEs and start-ups benefit from the lower of the two under Art.99(6).
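The ceiling arithmetic can be sketched directly. The function below encodes the "whichever is higher" rule (and the SME "whichever is lower" variant) as a sketch, not legal advice:

```python
def art99_max_fine(
    fixed_ceiling_eur: float,
    turnover_pct: float,
    global_turnover_eur: float,
    sme: bool = False,
) -> float:
    """Compute the applicable Art.99 fine ceiling for one violation tier.

    Standard undertakings face the HIGHER of the fixed amount and the
    turnover percentage; SMEs and start-ups face the LOWER of the two.
    """
    fixed = fixed_ceiling_eur
    pct_based = turnover_pct * global_turnover_eur
    return min(fixed, pct_based) if sme else max(fixed, pct_based)


# High-risk tier, €100M turnover: ceiling is €15M (the fixed amount is higher).
ceiling = art99_max_fine(15_000_000, 0.03, 100_000_000)
```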
## See Also
- EU AI Act Art.86: Right to Explanation for AI Decisions — the explanation right that precedes Art.87 complaints
- EU AI Act Art.99: Penalties and Fines — Complete Developer Guide — the fine structure triggered by Art.87 investigations
- EU AI Act Art.84: Annual Market Surveillance Reporting — MSA annual reporting that Art.87 complaints feed into
- EU AI Act Art.13: Transparency and Provision of Information — disclosure obligations whose failure triggers Art.87 complaints
- EU AI Act Art.9: Risk Management System Requirements — the risk management foundation that complaint investigations audit