2026-04-09·9 min read·sota.io team

MISRA C 2025 and EU AI Act Automotive: ASIL D Compliance Without CLOUD Act Exposure

MISRA C 2025 — the updated Motor Industry Software Reliability Association coding guidelines for C — arrives as EU automotive developers are simultaneously absorbing the EU AI Act's Annex III requirements. The timing is not coincidental. Both standards address the same underlying problem: how do you build safety-critical software that can be proven correct, not merely tested?

The intersection of MISRA C 2025 and EU AI Act Annex III creates a compliance challenge that DACH automotive developers — working at Bosch, Continental, ZF, BMW, Volkswagen, Daimler Truck — face more acutely than any other engineering community in the world. They build the embedded C code that controls brakes, steering, and lane-keeping systems. That code must simultaneously satisfy MISRA C 2025 rule sets for ISO 26262 ASIL D, and increasingly, the EU AI Act's Article 9 risk management requirements for AI-enabled driver assistance systems.

The tools they reach for most often — Polyspace by MathWorks — introduce a compliance risk that their Article 9 risk registers do not currently reflect.

What MISRA C 2025 Changes

MISRA C (Motor Industry Software Reliability Association, founded 1994, Warwickshire, GB) publishes coding guidelines that restrict C language usage to subsets provably analysable by static analysis tools. The original MISRA C:1998 was developed for the automotive industry. MISRA C:2004 and MISRA C:2012 extended coverage. MISRA C 2025 is the current edition.

MISRA C does not define a new language. It defines which features of standard C (ISO/IEC 9899) are prohibited, required, or advisory in safety-critical contexts. The rationale is straightforward: C contains constructs whose behaviour is implementation-defined or undefined under the standard. In safety-critical code compiled for microcontrollers (Infineon AURIX TC3xx, NXP S32K, Renesas RH850), implementation-defined behaviour produces results that are not portable, not predictable, and not formally analysable.

MISRA C 2025 reorganises the rule set around C23 (ISO/IEC 9899:2024) and strengthens rules in four areas relevant to EU AI Act Annex III compliance:

1. Arithmetic boundary behaviour. Rules governing integer overflow, pointer arithmetic, and type coercions have been tightened. In ASIL D systems under ISO 26262, undefined arithmetic behaviour is a safety hazard. In AI-enabled ADAS systems under EU AI Act Annex III, it is also a "foreseeable risk" that Article 9's risk management system must address.

2. Control flow complexity. MISRA C 2025 retains and strengthens McCabe complexity limits. Static analysis tools can verify compliance with these limits exhaustively — across all control flow paths, not just tested paths.

3. Dynamic memory prohibition. malloc, calloc, realloc, and free remain prohibited in MISRA C 2025 for safety-critical contexts. Heap allocation introduces non-deterministic timing behaviour incompatible with ASIL D requirements. Neural network inference engines deployed in ADAS (lane detection, object classification, radar fusion) must use statically allocated memory if they are to satisfy MISRA C requirements.

4. Formal analysis compatibility. MISRA C 2025 rule categories are explicitly mapped to the static analysis capabilities of compliant tools. This mapping — new in the 2025 edition — creates a direct path from MISRA C compliance to formal verification using abstract interpretation tools.

ISO 26262 ASIL D and the Proof Requirement

ISO 26262 (Road vehicles — Functional safety) defines four Automotive Safety Integrity Levels: ASIL A (lowest) through ASIL D (highest). Electronic Power Steering, Autonomous Emergency Braking, and Lane Keeping Assist systems targeting ASIL D require the highest rigour of software development and verification.

ISO 26262-6 (Product development at the software level) mandates multiple independent verification and analysis methods, and its method tables list formal verification among the techniques recommended for ASIL D. In practice, a "highly recommended" method at ASIL D means "required unless you can document an equivalent alternative and your safety case accepts the gap."

MISRA C 2025 compliance is a prerequisite for ASIL D software, not a sufficient condition. The software must also be verified using static analysis (abstract interpretation, model checking) against the safety goals derived from the hazard and risk analysis.

This is where the formal verification tools become essential.

Astrée: Abstract Interpretation for MISRA C + ASIL D

Astrée (Patrick Cousot FR, École Normale Supérieure Paris; Xavier Rival FR, INRIA Paris; Antoine Miné FR, Sorbonne; Jérôme Feret FR, INRIA Paris) is an industrial-grade static analyser based on abstract interpretation that proves the absence of runtime errors in C programs. It was initially developed at ENS Paris (2001) and commercialised by AbsInt GmbH (Saarbrücken, Germany).

Astrée proves the absence of runtime errors: out-of-bounds array indexing, division by zero, arithmetic overflow, invalid pointer dereferences, and, in its concurrency analysis, data races.

The proof is exhaustive — it covers all possible execution paths, all possible input values, all possible scheduler interleavings. Not a sample of test cases. All cases.

For MISRA C 2025 Rule 1.3 (undefined behaviour prohibition), Astrée provides formal evidence that the prohibited undefined behaviours cannot occur — for all inputs the system will ever receive. This is precisely what Article 9's "foreseeable risks" requirement asks for: not evidence that tested inputs did not trigger the hazard, but evidence that the hazard cannot occur.

AbsInt GmbH is a German company (Saarbrücken, Saarland DE). AbsInt's Astrée licence, source code, and analysis infrastructure operate entirely on EU-incorporated infrastructure. No CLOUD Act exposure.

Frama-C: MISRA C Analysis with EU Provenance

Frama-C (Patrick Baudin FR, François Bobot FR, Pascal Cuoq FR, Julien Signoles FR, and colleagues at CEA-List, with INRIA collaboration) is an open-source analysis framework for C that supports MISRA C compliance checking alongside formal verification via the WP (weakest precondition) plugin and abstract interpretation via EVA (Evolved Value Analysis).

/* Constants and the sensor type are illustrative placeholders; real
   values come from the vehicle's braking calibration. */
#define MAX_SPEED          70.0f
#define BRAKE_MAX          100
#define BRAKE_COEFFICIENT  2.0f

typedef struct { float speed; } SensorData;

/*@ requires \valid_read(sensor_data) && sensor_data->speed >= 0.0f;
  @ requires sensor_data->speed <= MAX_SPEED;
  @ ensures \result >= 0 && \result <= BRAKE_MAX;
  @ assigns \nothing;
  @*/
int compute_brake_force(const SensorData *sensor_data) {
    /* MISRA C 2025 compliant: no dynamic allocation, bounded arithmetic,
       braced control flow */
    int force = (int)(sensor_data->speed * BRAKE_COEFFICIENT);
    if (force > BRAKE_MAX) {
        force = BRAKE_MAX;
    }
    return force;
}

The /*@ ... @*/ annotations are ACSL (ANSI/ISO C Specification Language) specifications. Frama-C's WP plugin converts these to proof obligations and discharges them using SMT solvers (Alt-Ergo, Z3, CVC5). The EVA plugin computes value ranges for all variables across all execution paths.

For EU AI Act Article 9, Frama-C specifications of ADAS inference functions constitute formal evidence that the function's output is bounded, its preconditions are explicit, and its behaviour has been proven consistent with those preconditions for all inputs satisfying them.

CEA-List is a division of CEA (Commissariat à l'énergie atomique et aux énergies alternatives), a French public research organisation funded by the French Ministry of Research. INRIA is a French national research institute. Frama-C is LGPL-licensed. No US corporate parent, no CLOUD Act exposure.

CPAchecker for AUTOSAR: EU-Native Model Checking

CPAchecker (Dirk Beyer DE, LMU Munich) applies configurable program analysis to C programs and can verify safety properties relevant to ISO 26262, such as reachability of error states and assertion violations. For control-flow properties that Astrée's value analysis is not designed to check (complex control flow patterns, the MISRA C Rule 14.x and 15.x control-flow restrictions), CPAchecker's predicate abstraction engine provides complementary coverage.

CPAchecker is maintained by the SoSy-Lab at LMU Munich (Ludwig-Maximilians-Universität München, Bavaria, DE). Apache 2.0 licensed. EU provenance throughout.

The CLOUD Act Problem in AUTOSAR Toolchains

MISRA C compliance checking in production automotive software development is dominated by two tools: Polyspace (MathWorks Inc., Natick, Massachusetts, USA) and PC-lint Plus (Gimpel Software LLC, Collegeville, Pennsylvania, USA).

MathWorks is a US-incorporated private company. Its products — including Polyspace Bug Finder, Polyspace Code Prover, and the Embedded Coder toolchain — run on both local installations and MathWorks cloud infrastructure. The CLOUD Act (Clarifying Lawful Overseas Use of Data Act, 2018) authorises US law enforcement to compel US-incorporated companies to produce data and communications stored anywhere in the world.

For automotive software teams, the relevant exposure is not primarily the inference data. It is the code itself. ADAS firmware, ISO 26262 safety case documentation, MISRA C violation reports, static analysis results, and formal verification artefacts are all sensitive intellectual property. A Polyspace cloud analysis sends C source code — including safety-critical ADAS algorithms — to MathWorks infrastructure. Under the CLOUD Act, that code becomes potentially compellable by US law enforcement without EU legal process.

EU AI Act Article 9's foreseeable risk assessment must account for this. If your high-risk AI system's safety verification toolchain sends source code to US-incorporated SaaS infrastructure, the risk of unauthorised disclosure is foreseeable — and therefore must appear in your risk register.

The EU-native alternative is available and production-proven:

Tool          | Institution      | Jurisdiction    | Safety qualification
Astrée        | AbsInt GmbH      | Saarbrücken, DE | Qualified for DO-178C, IEC 61508
Frama-C       | CEA-List / INRIA | Paris, FR       | ISO 26262 tool qualification path
CPAchecker    | LMU Munich       | Munich, DE      | SV-COMP verified
Axivion Suite | Axivion GmbH     | Stuttgart, DE   | ISO 26262, IEC 61508

Axivion GmbH (Stuttgart, Baden-Württemberg DE) provides the Axivion Suite — a MISRA C/C++ analysis tool developed entirely in Germany, used by Bosch and Continental for ISO 26262 ASIL D compliance. No MathWorks dependency, no US CLOUD Act exposure.

EU AI Act Annex III and Automotive AI

EU AI Act Annex III, point 2 ("Critical infrastructure") covers AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating and electricity. This explicitly captures ADAS systems that use AI — not only autonomous vehicles but current-generation ADAS features:

  1. AI-based lane keeping assist and lane detection
  2. Autonomous emergency braking driven by neural object classification
  3. Adaptive cruise control built on radar and camera sensor fusion

From August 2, 2026, these systems require Article 9 risk management, Article 11 technical documentation, and Article 12 logging capabilities. The risk management system must identify foreseeable risks — including the infrastructure risks discussed above — and provide evidence of their mitigation.

MISRA C 2025 compliance is a component of this evidence. It demonstrates that the C codebase implementing the ADAS AI system has been developed according to guidelines designed to eliminate classes of programming errors that constitute foreseeable risks. Combined with formal verification using Astrée or Frama-C, it provides the machine-checked safety evidence that Article 9's foreseeable risk mandate requires.

The Deployment Layer

Formal verification and MISRA C compliance address the software-level risk requirements. The deployment layer introduces its own foreseeable risks.

ADAS AI inference may run at the edge (on-vehicle SoC), but the training infrastructure, model validation pipeline, OTA update mechanism, and audit log storage typically run on cloud infrastructure. For EU AI Act Annex III high-risk systems, this cloud infrastructure must satisfy the same data protection requirements as the vehicle itself.

Article 9(4) foreseeable risks for cloud-deployed ADAS support infrastructure include:

  1. CLOUD Act compulsion of model weights — if training infrastructure is on US cloud, model weights (representing significant R&D investment) are potentially compellable
  2. Audit log disclosure — Article 12 requires logging that constitutes personal data; if logs are on US infrastructure, CLOUD Act compulsion can expose personal data without GDPR process
  3. OTA update pipeline compromise — if OTA update signing keys or model update packages transit US infrastructure, CLOUD Act warrants could reach them

EU-native infrastructure — incorporated under EU law, no US parent company, operating in EU-jurisdiction datacenters — addresses these foreseeable risks at the architectural level: the legal basis for CLOUD Act compulsion does not exist against EU-incorporated entities.

From MISRA C to EU AI Act: A Unified Compliance Path

DACH automotive developers have spent decades building MISRA C compliance into their processes. The EU AI Act does not replace this work — it extends it. MISRA C 2025 compliance with formal verification (Astrée, Frama-C, CPAchecker) provides the software-level safety evidence. EU AI Act Article 9 risk management provides the framework that connects this evidence to the regulatory requirements.

The path from MISRA C 2025 compliant C code to EU AI Act Article 9 compliant AI system looks like this:

  1. MISRA C 2025 static analysis with an EU-native tool (Astrée/Axivion Suite) — proves absence of undefined behaviour and restricted language feature violations
  2. Formal verification with Frama-C or CPAchecker — proves safety properties (output bounds, state invariants, timing constraints) for all inputs
  3. Documented risk register under Article 9 — maps each foreseeable risk to the formal evidence that mitigates it
  4. EU-native deployment for cloud infrastructure — eliminates CLOUD Act foreseeable risks by architecture
  5. Article 12 logging on EU-jurisdiction infrastructure — ensures audit logs are accessible to EU supervisory authorities, not US law enforcement

Steps 1 and 2 generate formal artefacts. Steps 3 through 5 integrate those artefacts into the regulatory framework. The result is an Article 9 risk management system backed by machine-checked proof — not documentation.

The August 2026 deadline is four months away. MISRA C 2025 adoption takes weeks of toolchain integration. Formal verification of existing codebases takes months of specification work. Teams that start in Q2 2026 — now — can integrate formal evidence before the deadline. Teams that start when enforcement begins cannot retroactively generate proof.

The formal methods community in the EU has built these tools over forty years. The regulatory framework that creates demand for them is now in force. The toolchain exists. The compliance gap is time.


sota.io deploys your containers on EU-incorporated infrastructure in Germany. Training pipelines, model validation, OTA update infrastructure, and Article 12 audit logs stay under EU jurisdiction — no CLOUD Act exposure. MISRA C 2025 formal verification artefacts, ISO 26262 safety case documentation, and EU AI Act Article 9 evidence all reside on EU-native compute. Free tier available.