EU AI Act Art.58: NCA Powers — Investigation, Access Rights, Corrective Measures, and Sanctions (2026)
EU AI Act Article 58 answers a foundational enforcement question: what can a National Competent Authority (NCA) actually do to you? Where Art.57 defines who the NCAs are and how they are structured, Art.58 defines their operative enforcement toolkit — the powers they hold to investigate AI systems, compel access to documentation, order corrective action, and impose sanctions.
For AI developers and infrastructure providers, Art.58 is where abstract regulatory obligation becomes concrete legal exposure. An NCA that invokes Art.58 powers can walk into your premises, demand access to your AI system, request your training datasets, and — if they find a non-compliant system — order it withdrawn from the market.
Art.58 became applicable on 2 August 2025 as part of the phased entry into force of Regulation (EU) 2024/1689. Its companion provisions in Chapter IX (Art.74–79) define the market surveillance procedures under which NCAs exercise these powers in practice.
For EU infrastructure providers and PaaS operators — including sota.io — Art.58 has structural relevance: operators with EU-incorporated entities and EU-based infrastructure face a single NCA jurisdiction. US-incorporated cloud providers face a different situation: NCA investigation powers interact with CLOUD Act obligations in ways that can create compliance conflicts, particularly around data access requests.
Art.58 in the Chapter VI Governance Architecture
Art.58 sits at the heart of the NCA enforcement chain established by Chapter VI:
| Article | Title | Function |
|---|---|---|
| Art.57 | National Competent Authorities | Designates who enforces the Act nationally |
| Art.58 | NCA powers | Defines what NCAs can do (access, inspect, sanction) |
| Art.59 | AI Board | Coordinates NCAs at EU level; issues guidelines |
| Art.60 | EU AI database | Public registry of high-risk AI systems |
| Art.61 | AI Office | Union-level enforcement for GPAI models |
| Art.74 | Market surveillance | Procedures for investigating AI systems on the market |
| Art.75 | Mutual assistance | Cross-border NCA cooperation procedures |
| Art.99 | Penalties | Quantum of fines that NCAs can impose |
Art.58 defines the powers; Art.74 defines the procedures for exercising them; and Art.99 defines the fines that back them up. The three articles together form the operative enforcement architecture.
Art.58(1): Investigatory Access Powers
Art.58(1) grants NCAs the power to require economic operators — providers, deployers, importers, distributors, and authorised representatives — to provide:
- Documentation and technical files for AI systems covered by the Regulation
- Source code of AI systems, upon reasoned request and to the extent strictly necessary
- Information about the AI system's purpose, training data, risk management process, and post-market monitoring activities
- Access to the AI system itself for the purpose of assessing its compliance
The "reasoned request" qualifier on source code access is significant: NCAs cannot make blanket source code demands. They must provide written reasoning explaining why source code access is necessary for the specific compliance assessment being conducted. This constraint is designed to balance enforcement effectiveness against intellectual property protection and trade secret obligations.
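A provider-side intake check can make this qualifier operational. The sketch below is a hypothetical helper (the request fields and acceptance rules are this article's assumptions, not terms from the Regulation): it declines to route a source code request onward until the NCA has supplied written reasoning tied to a named compliance concern and a defined scope.

```python
from dataclasses import dataclass


@dataclass
class SourceCodeAccessRequest:
    """Hypothetical model of an incoming Art.58(1) source code request."""
    nca_authority: str
    written_reasoning: str   # the NCA's stated justification
    compliance_concern: str  # e.g. "Art.10 data governance"
    scope_description: str   # which components are requested


def validate_source_code_request(req: SourceCodeAccessRequest) -> tuple[bool, str]:
    """Return (accept, note). Accept only reasoned, scoped requests."""
    if not req.written_reasoning.strip():
        return False, "Reject: no written reasoning provided (Art.58(1))."
    if not req.compliance_concern.strip():
        return False, "Reject: reasoning does not name a compliance concern."
    if not req.scope_description.strip():
        return False, "Escalate: request scope undefined; seek clarification."
    return True, "Route to legal review for scoped disclosure."
```

The point of the sketch is procedural: the decision to grant access is never automatic, but an unreasoned request should not even reach legal review as an access candidate.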
Scope of "economic operators": Art.58(1) powers apply across the full supply chain. An NCA can direct access requests not just to the AI system's provider but to any operator in the value chain — including deployers using a third-party AI system under their own branding, importers who placed a non-EU system on the EU market, and authorised representatives acting on behalf of non-EU providers.
Art.58(2): On-Site Inspection Powers
Art.58(2) grants NCAs the power to conduct announced and unannounced on-site inspections of premises used for the development, testing, deployment, or storage of AI systems. The inspection powers include:
- Entry to premises where AI systems are developed, tested, or deployed
- Examination of IT systems, algorithms, and software used in or for the AI system
- Sampling and copying of documents and electronic data
- Interviews with relevant personnel (technical teams, compliance officers, senior management)
- Access to testing environments and sandboxes where AI systems are evaluated
Unannounced inspections are explicitly permitted when an NCA has reasonable grounds to believe that an economic operator is concealing non-compliance, that advance notice would lead to destruction of evidence, or that the matter presents an urgent risk.
For developers, this means the compliance posture you maintain in production must be inspection-ready at all times — not just at annual audit cycles. The technical documentation required under Art.11 (high-risk AI systems) and Art.52 (GPAI models) must be current, accurate, and accessible without requiring advance preparation time.
Cross-border inspection coordination: When an NCA needs to conduct an inspection of premises in another Member State, Art.75 (mutual assistance) provides the mechanism. The requesting NCA can ask the host-country NCA to accompany or lead the inspection. This matters for EU-based AI developers operating across multiple Member States: the NCA of your principal establishment has primary jurisdiction, but it can draw on the inspection capacity of other Member States' NCAs.
Art.58(3): AI System Testing Powers
For AI developers, Art.58(3) is the most technically consequential provision: NCAs have the power to test AI systems directly, including in conditions that replicate real-world deployment.
The testing powers include:
- Access to AI systems in their production environment for compliance testing
- Access to training data used to train the AI system, to assess data governance compliance (Art.10)
- Execution of test inputs and evaluation of outputs to assess whether the system behaves as documented
- Review of logging and monitoring data generated during operation
Limitations on testing powers: Art.58(3) testing is bounded by proportionality. NCAs must document why direct system testing is necessary and cannot require access to data that is irrelevant to the specific compliance concern being investigated. For high-risk AI systems subject to third-party conformity assessment (Art.43), NCAs must coordinate with the relevant notified body before conducting their own testing.
Implications for AI infrastructure providers: If you operate a PaaS platform that hosts customers' AI systems, NCA testing powers create a layered obligation. The NCA can direct testing access requests to the AI system's provider (your customer) — but if the technical architecture means that access requires your cooperation (e.g., access credentials, network routing, container logs), you may be required to provide that cooperation as part of the investigation.
Art.58(4): Corrective Measure Orders
When an NCA identifies non-compliance through its investigatory powers, Art.58(4) grants it the authority to order corrective measures, including:
| Measure | Trigger | Effect |
|---|---|---|
| Withdrawal from the market | Non-compliant AI system on the market | System must be removed from all commercial channels |
| Recall | Non-compliant system already in service | System must be returned from end users |
| Restriction of use | Specific non-compliant deployment contexts | System may continue operating with defined limitations |
| Suspension of operation | Ongoing risk pending full investigation | Temporary halt to system operation |
| Modification requirement | Specific documented deficiency | Operator must implement specific technical or procedural changes |
| Prohibition on placing on market | Non-compliant system not yet deployed | System cannot enter the market until compliance is established |
Proportionality requirement: Art.58(4) orders must be proportionate to the compliance deficiency identified. An NCA cannot order full market withdrawal for a documentation gap that does not affect the system's fundamental safety profile. The measure must correspond to the risk level and the nature of the non-compliance.
Timing: Corrective measure orders take immediate effect unless the NCA specifies a compliance timeline for the operator. Where a gradual correction is technically feasible, NCAs are expected to grant reasonable implementation windows — but "reasonable" is assessed against the risk level: a high-risk AI system with a documented safety deficiency faces a tighter timeline than a lower-risk system with administrative non-compliance.
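That risk-scaled timing logic can be mirrored in an internal response runbook. The hour values below are illustrative planning defaults, assumptions made for this sketch rather than figures from the Act; the NCA's own order always controls the actual deadline:

```python
# Illustrative internal planning defaults for executing corrective measure
# orders. These hour figures are assumptions for this sketch, NOT timelines
# from the Regulation: the NCA order itself always controls.
PLANNING_WINDOW_HOURS = {
    "high_risk": 72,      # e.g. Annex III systems
    "limited_risk": 240,
    "minimal_risk": 720,
}


def planned_response_window_hours(risk_level: str, safety_relevant: bool) -> int:
    """Tighter window for higher risk and for safety-relevant deficiencies."""
    base = PLANNING_WINDOW_HOURS.get(risk_level, 168)  # default: one week
    # A documented safety deficiency compresses the window sharply
    return base // 3 if safety_relevant else base
```

A runbook keyed this way keeps the internal target ahead of any plausible NCA timeline, so the binding constraint is the order, not your own tooling.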
Art.58(5): Emergency Measures for Serious Risk
Art.58(5) establishes a fast-track enforcement channel for situations presenting a serious risk — defined as a significant likelihood of harm to health, safety, or fundamental rights. Emergency measures can be taken without prior notice to the operator and without the normal investigation sequence.
Emergency powers include:
- Immediate suspension of AI system operation
- Emergency withdrawal from the market
- Prohibition on supply of AI systems within the Member State's territory
- Emergency recall from end users
Emergency measures are provisional — they must be followed by a full investigation under the standard Art.58(4) procedure within a defined period. If the investigation finds that the serious risk assessment was incorrect, the NCA must lift the emergency measures and may be required to compensate the operator for losses caused by the unjustified suspension.
Notification to the EU AI Safety Database: When an NCA takes emergency measures, it must immediately notify the European Commission and other Member States through the EU AI safety database (Art.60 framework). This cross-border notification ensures that an AI system found to present serious risk in one Member State is not simultaneously deployed in others without awareness of the ongoing enforcement action.
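Internally, a compliance team might mirror that notification as a structured record of its own. The field names and allowed measure values below are assumptions made for illustration; the Art.60-framework database defines its own schema:

```python
from datetime import datetime, timezone


def build_emergency_notification(system_id: str, measure: str,
                                 serious_risk_grounds: str,
                                 notifying_ms: str) -> dict:
    """
    Sketch of an internal mirror of an Art.58(5) emergency notification.
    Field names are illustrative assumptions, not the database schema.
    """
    allowed = {"suspension", "withdrawal", "supply_prohibition", "recall"}
    if measure not in allowed:
        raise ValueError(f"unknown emergency measure: {measure}")
    return {
        "system_id": system_id,
        "measure": measure,
        "serious_risk_grounds": serious_risk_grounds,
        "notifying_member_state": notifying_ms,
        "notified_parties": ["European Commission", "other Member State NCAs"],
        "provisional": True,  # Art.58(5) measures precede a full investigation
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Marking every emergency record `provisional` encodes the key legal property from the text above: the measure must be confirmed or lifted by a full investigation.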
Art.58(6): Publication of Decisions
Art.58(6) requires NCAs to publish their enforcement decisions — including corrective measure orders, emergency measures, and the outcomes of investigations — in a form that makes them accessible to the public and to other NCAs.
Publication obligations include:
- Identity of the non-compliant AI system and its provider
- Nature of the compliance deficiency identified
- Corrective measures ordered and compliance timeline
- Outcome of appeals where orders are challenged
Trade secret and confidentiality protections: Publication must respect trade secrets and commercially sensitive information. NCAs must balance transparency with proportionate protection for information that would cause disproportionate harm to the operator's competitive position if disclosed in full. In practice, this means published decisions typically describe the category of compliance failure without disclosing proprietary technical details.
Reputational implications: Publication of Art.58(6) decisions creates significant reputational exposure. For B2B AI developers, a published enforcement decision — even one that was ultimately resolved through corrective action — affects customer due diligence processes and procurement decisions. The compliance risk is not only regulatory: it is commercial.
Art.58(7)–(8): Cross-Border Cooperation and Information Sharing
Art.58(7)–(8) establish the mutual assistance framework that enables NCAs to coordinate enforcement across Member State borders.
Art.58(7) — Information requests: An NCA can request another Member State's NCA to provide information it holds about an AI system or economic operator under investigation. The receiving NCA is obliged to respond within defined timeframes. This prevents economic operators from structuring their operations to exploit information asymmetries between NCAs.
Art.58(8) — Joint investigations: Where an AI system operates across multiple Member States — which is typical for cloud-based AI services — NCAs can conduct joint investigations with one NCA acting as lead authority and others as participating authorities. Joint investigations pool inspection capacity and prevent inconsistent enforcement outcomes across Member States.
AI Board coordination (Art.59): For complex cross-border cases, NCAs can escalate to the AI Board for coordination guidance. The AI Board does not itself have enforcement powers — it cannot order corrective measures — but it can issue opinions and recommendations that carry significant practical weight in guiding NCA enforcement priorities.
Art.58(9): GPAI Carve-Out and AI Office Primacy
Art.58(9) establishes the structural boundary between NCA enforcement powers and AI Office enforcement over general-purpose AI (GPAI) models:
NCAs cannot exercise Art.58 investigatory or corrective powers over:
- GPAI model providers in respect of obligations arising from Arts.51–56 (GPAI chapter)
- Systemic risk assessments conducted by the AI Office under Art.55
The AI Office (Art.61) has exclusive enforcement competence for GPAI model compliance. National NCAs retain competence over how GPAI models are deployed in high-risk AI systems within their territory — but the model provider itself, in respect of its GPAI obligations, is subject only to the AI Office.
Practical boundary for developers: If you are a developer who:
- Provides a GPAI model → AI Office has primary jurisdiction over your Art.51–56 obligations; NCAs have jurisdiction if your model is integrated into a high-risk AI system
- Deploys a GPAI model in a high-risk AI system → NCA of your principal establishment has jurisdiction over the high-risk AI system's compliance; AI Office retains jurisdiction over the underlying GPAI model
This jurisdictional layering can create parallel enforcement tracks. A GPAI model that is simultaneously under AI Office investigation (systemic risk assessment) and incorporated into a high-risk AI system under NCA investigation requires coordination between the two authorities to avoid conflicting corrective measures.
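The jurisdictional split can be encoded as a simple routing check. This sketch reflects the article's reading of the Art.58(9) boundary and is not legal advice; the role flags and authority labels are illustrative:

```python
def enforcement_authorities(provides_gpai: bool,
                            deployed_in_high_risk_system: bool) -> set[str]:
    """
    Sketch of the Art.58(9) jurisdictional layering: AI Office for GPAI
    obligations, NCA for high-risk system compliance. Labels illustrative.
    """
    authorities: set[str] = set()
    if provides_gpai:
        authorities.add("AI Office (GPAI obligations, Arts.51-56)")
    if deployed_in_high_risk_system:
        authorities.add("NCA of principal establishment (high-risk system)")
    return authorities
```

A provider whose GPAI model also powers its own high-risk system gets both entries back, which is exactly the parallel-track scenario described above.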
CLOUD Act Intersection: Art.58 Powers and US-Incorporated Infrastructure
Art.58 investigation powers — particularly source code access, AI system testing, and on-site inspection — interact in complex ways with the US CLOUD Act (Clarifying Lawful Overseas Use of Data Act) when the AI system or its infrastructure involves US-incorporated entities.
The structural conflict arises in two directions:
Direction 1 — NCA → US-incorporated operator: When an EU NCA directs an Art.58 access request to a US-incorporated AI provider, the provider must assess whether complying would conflict with US law. The CLOUD Act permits US authorities to demand access to data held by US companies regardless of where the data is physically located. A provider that has already disclosed data to US authorities under the CLOUD Act may face conflicts if an EU NCA's investigation demands relate to the same data.
Direction 2 — US authority → EU-hosted AI data: US law enforcement or intelligence agencies can use CLOUD Act mechanisms to demand access to AI system logs, training data, or operational data held by US-incorporated cloud providers even when physically hosted in the EU. This creates a situation where data subject to NCA investigation under Art.58 is simultaneously accessible to US authorities without EU judicial authorisation.
EU-incorporated infrastructure: For AI developers operating on infrastructure provided by EU-incorporated entities — with no US parent, affiliate, or contractual nexus that creates CLOUD Act jurisdiction — this conflict does not arise. EU NCA investigation requests are handled solely under EU law. US authorities seeking access to data hosted on EU-incorporated infrastructure must use MLAT (mutual legal assistance treaty) procedures, which require judicial authorisation under EU law.
For enterprise AI developers handling sensitive data, this jurisdictional distinction is increasingly part of procurement criteria. The question is not "where are the servers?" but "what legal jurisdiction does the operator fall under?"
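That procurement question can be captured as a rough screening heuristic. This is a deliberate simplification (real CLOUD Act analysis requires counsel), and the decision inputs are assumptions chosen for illustration:

```python
def cloud_act_exposure(us_parent: bool, us_affiliate_control: bool,
                       servers_in_eu: bool) -> str:
    """
    Rough screening heuristic for the procurement question above: corporate
    jurisdiction, not server location, drives exposure. A simplification,
    not a legal determination.
    """
    if us_parent or us_affiliate_control:
        # US-incorporated nexus: the CLOUD Act can reach data wherever hosted
        return "exposed (US jurisdiction reaches EU-hosted data)"
    if servers_in_eu:
        return "not exposed (EU entity; US access requires MLAT procedures)"
    return "review hosting jurisdiction"
```

Note that the `servers_in_eu` flag only matters once the corporate-nexus question is answered, mirroring the "not where are the servers" framing above.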
Real-World Art.58 Enforcement: What to Expect in 2025–2026
Member States have been designating their NCAs and standing up national enforcement structures since August 2025. The enforcement posture varies by Member State:
| Member State | NCA(s) | Art.58 Enforcement Focus |
|---|---|---|
| Germany | BNetzA (lead NCA) + BfDI coordination | High-risk AI systems in financial services, employment |
| France | CNIL (AI Act mandate) + ANSSI coordination | Biometric systems, public sector AI deployment |
| Netherlands | ACM (Autoriteit Consument & Markt) | E-commerce AI, automated decision-making |
| Sweden | IMY (Integritetsskyddsmyndigheten) | Biometric systems, fundamental rights impact |
| Ireland | ADMS (AI and Data Management Supervision unit) | GPAI deployers; coordination with AI Office for GPAI providers |
Enforcement priorities in 2025–2026: Based on Commission guidance and NCA public statements, early enforcement focus is on:
- Documentation completeness — are technical files (Art.11) actually created and maintained?
- High-risk classification accuracy — are operators correctly identifying whether their systems fall in Annex III categories?
- Fundamental rights impact assessments — public sector and financial services deployers are early targets
- Biometric systems — remote biometric identification restrictions attract intensive scrutiny
Developers who have completed risk classification, maintained up-to-date technical documentation, and established post-market monitoring systems (Art.72) are well-positioned for the initial enforcement wave.
Python Implementation: Art.58 Compliance Readiness Tracker
The following Python implementation provides a structured framework for maintaining Art.58 compliance readiness — ensuring your documentation, access controls, and corrective measure procedures are inspection-ready at all times:
```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional
import json


class ComplianceStatus(Enum):
    COMPLIANT = "compliant"
    NEEDS_UPDATE = "needs_update"
    NON_COMPLIANT = "non_compliant"
    UNDER_REVIEW = "under_review"


class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK_ANNEX_I = "high_risk_annex_i"
    HIGH_RISK_ANNEX_III = "high_risk_annex_iii"
    GPAI_SYSTEMIC = "gpai_systemic_risk"
    GPAI_STANDARD = "gpai_standard"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"


@dataclass
class Art58ComplianceTracker:
    """
    Tracks Art.58 NCA enforcement readiness for an AI system.

    Maintains documentation currency, access procedures, and corrective
    measure response capabilities required for NCA inspection readiness.
    """

    system_id: str
    system_name: str
    provider_name: str
    principal_establishment_ms: str  # ISO 3166-1 alpha-2 (e.g. "DE", "FR")
    risk_category: RiskCategory
    last_technical_file_update: datetime
    last_risk_assessment_update: datetime
    source_code_access_documented: bool = False
    training_data_access_documented: bool = False
    corrective_measure_procedure: bool = False
    cross_border_notification_procedure: bool = False
    nca_contact_documented: bool = False
    inspection_readiness_exercises: list[datetime] = field(default_factory=list)

    # EU AI Act Art.57: NCA contact for principal establishment
    NCA_CONTACTS = {
        "DE": {"authority": "Bundesnetzagentur", "url": "https://www.bundesnetzagentur.de"},
        "FR": {"authority": "CNIL", "url": "https://www.cnil.fr"},
        "NL": {"authority": "Autoriteit Consument & Markt", "url": "https://www.acm.nl"},
        "SE": {"authority": "IMY", "url": "https://www.imy.se"},
        "IE": {"authority": "ADMS Unit", "url": "https://www.dataprotection.ie"},
        "IT": {"authority": "AGID", "url": "https://www.agid.gov.it"},
        "ES": {"authority": "AESIA", "url": "https://www.aesia.gob.es"},
        "PL": {"authority": "UOKiK", "url": "https://www.uokik.gov.pl"},
    }

    def get_nca(self) -> dict:
        return self.NCA_CONTACTS.get(
            self.principal_establishment_ms,
            {"authority": "NCA not mapped", "url": ""},
        )

    def technical_file_current(self, max_age_days: int = 365) -> bool:
        age = datetime.now() - self.last_technical_file_update
        return age.days <= max_age_days

    def risk_assessment_current(self, max_age_days: int = 180) -> bool:
        age = datetime.now() - self.last_risk_assessment_update
        return age.days <= max_age_days

    def days_since_last_readiness_exercise(self) -> Optional[int]:
        if not self.inspection_readiness_exercises:
            return None
        latest = max(self.inspection_readiness_exercises)
        return (datetime.now() - latest).days

    def art58_readiness_score(self) -> dict:
        """
        Calculates Art.58 inspection readiness score.
        Returns score (0-100) and detailed breakdown.
        """
        days_since_exercise = self.days_since_last_readiness_exercise()
        checks = {
            "technical_file_current": self.technical_file_current(),
            "risk_assessment_current": self.risk_assessment_current(),
            "source_code_access_documented": self.source_code_access_documented,
            "training_data_access_documented": self.training_data_access_documented,
            "corrective_measure_procedure": self.corrective_measure_procedure,
            "cross_border_notification_procedure": self.cross_border_notification_procedure,
            "nca_contact_documented": self.nca_contact_documented,
            "readiness_exercise_recent": (
                days_since_exercise is not None and days_since_exercise <= 180
            ),
        }
        weights = {
            "technical_file_current": 25,
            "risk_assessment_current": 20,
            "source_code_access_documented": 10,
            "training_data_access_documented": 10,
            "corrective_measure_procedure": 15,
            "cross_border_notification_procedure": 10,
            "nca_contact_documented": 5,
            "readiness_exercise_recent": 5,
        }
        total_score = sum(
            weights[check] for check, passed in checks.items() if passed
        )
        return {
            "score": total_score,
            "checks": checks,
            "nca": self.get_nca(),
            "risk_category": self.risk_category.value,
            "assessment_date": datetime.now().isoformat(),
        }

    def generate_art58_response_plan(self) -> dict:
        """
        Generates structured response plan for Art.58 investigation notice.
        Maps each NCA power to the operator's internal response procedure.
        """
        return {
            "system_id": self.system_id,
            "nca_investigation_response_plan": {
                "art58_1_documentation_access": {
                    "description": "Provide technical files and documentation",
                    "procedure": "Retrieve from compliance documentation system",
                    "response_time_target": "48 hours from NCA request",
                    "owner": "Compliance / Legal",
                },
                "art58_2_on_site_inspection": {
                    "description": "Host NCA on-site inspection",
                    "procedure": "Designate inspection coordinator; prepare IT access",
                    "response_time_target": "Accept inspection on NCA schedule",
                    "owner": "Facilities / Engineering Lead",
                },
                "art58_3_system_testing": {
                    "description": "Provide AI system access for NCA testing",
                    "procedure": "Provision sandboxed test environment",
                    "response_time_target": "72 hours from NCA request",
                    "owner": "Engineering / MLOps",
                },
                "art58_4_corrective_measures": {
                    "description": "Execute corrective measure orders",
                    "procedure": "Activate incident response runbook",
                    "response_time_target": "Within NCA-specified timeline",
                    "owner": "CTO / Compliance",
                },
                "art58_5_emergency_measures": {
                    "description": "Respond to emergency suspension/withdrawal",
                    "procedure": "Immediate system suspend via kill switch",
                    "response_time_target": "4 hours from emergency order",
                    "owner": "On-call Engineering + Legal",
                },
            },
            "generated_at": datetime.now().isoformat(),
        }


def assess_art58_exposure(tracker: Art58ComplianceTracker) -> None:
    readiness = tracker.art58_readiness_score()
    score = readiness["score"]
    print(f"\nArt.58 Compliance Readiness: {tracker.system_name}")
    print(f"{'=' * 55}")
    print(f"Supervising NCA: {readiness['nca']['authority']}")
    print(f"Risk Category: {readiness['risk_category']}")
    print(f"Readiness Score: {score}/100")
    print()
    if score >= 85:
        status = "INSPECTION READY"
    elif score >= 65:
        status = "GAPS TO ADDRESS"
    else:
        status = "SIGNIFICANT EXPOSURE"
    print(f"Status: {status}")
    print()
    print("Check Results:")
    for check, passed in readiness["checks"].items():
        icon = "✓" if passed else "✗"
        print(f"  {icon} {check.replace('_', ' ').title()}")


# Usage example
tracker = Art58ComplianceTracker(
    system_id="ai-sys-2024-001",
    system_name="Employment Screening AI",
    provider_name="Example GmbH",
    principal_establishment_ms="DE",
    risk_category=RiskCategory.HIGH_RISK_ANNEX_III,
    last_technical_file_update=datetime.now() - timedelta(days=45),
    last_risk_assessment_update=datetime.now() - timedelta(days=30),
    source_code_access_documented=True,
    training_data_access_documented=True,
    corrective_measure_procedure=True,
    cross_border_notification_procedure=False,
    nca_contact_documented=True,
    inspection_readiness_exercises=[datetime.now() - timedelta(days=120)],
)

assess_art58_exposure(tracker)
response_plan = tracker.generate_art58_response_plan()
print(json.dumps(response_plan, indent=2))
```
Art.58 Compliance Checklist
14 items to assess your Art.58 readiness:
Investigatory access (Art.58(1)):
- Technical documentation (Art.11 / Art.52) is complete, current, and retrievable on 48-hour notice
- Source code access procedure is documented (who authorises, how access is granted to NCA)
- Training data access procedure is documented (data lineage, access controls, location)
- NCA contact for your principal establishment is identified and accessible to compliance team
On-site inspection readiness (Art.58(2)):
- Inspection coordinator designated (person responsible for managing NCA on-site visits)
- IT access provisioning procedure documented (how NCA gets system access during inspection)
- Physical access procedure documented (premises access, visitor management for inspectors)
System testing (Art.58(3)):
- Test environment can be provisioned on 72-hour notice for NCA testing purposes
- Testing access credentials procedure documented (isolated from production credentials)
Corrective measures (Art.58(4)–(5)):
- Corrective measure response plan exists (withdrawal, recall, restriction, suspension procedures)
- Emergency kill switch / suspension procedure exists and has been tested
- Internal escalation chain for NCA orders is documented (legal, compliance, CTO, board)
Cross-border and publication (Art.58(6)–(8)):
- Cross-border notification procedure documented (how you notify other Member State NCAs if corrective measures affect multi-market deployment)
- Publication response procedure documented (how you respond to published enforcement decisions)
Key Takeaways
Art.58 defines the operative enforcement arsenal that EU NCAs hold over AI system providers and deployers. For developers:
- Documentation must be inspection-ready at all times — not assembled at NCA request. Technical files, risk assessments, and post-market monitoring logs should be maintained in a state that can be provided within 48 hours.
- Emergency powers are real — Art.58(5) allows suspension without prior notice. Operators with production AI systems need internal emergency procedures that can execute on NCA order timelines, not audit cycle timelines.
- GPAI jurisdiction is with the AI Office — if you provide a GPAI model, Art.58 NCA powers do not apply to your GPAI obligations. National NCAs have jurisdiction only over how your model is deployed in high-risk systems.
- Source code requests require NCA reasoning — Art.58(1) does not permit blanket source code demands. NCAs must justify why source code access is necessary. If you receive such a request, review the stated reasoning carefully before providing access.
- CLOUD Act exposure matters for infrastructure choice — operators using EU-incorporated infrastructure with no US parent avoid the jurisdiction conflict that arises when NCA Art.58 access requests interact with CLOUD Act obligations of US-incorporated cloud providers.