2026-04-21

EU AI Act Art.3(1): "AI System" Definition and the April 2026 Commission Guidelines — Developer Classification Guide

The EU AI Act's scope depends entirely on one definition. Article 3(1) defines what an "AI system" is — and everything else in the Regulation flows from whether your software clears that threshold.

In April 2026, the European Commission published guidelines clarifying the Art.3(1) boundary. The stakes are significant: software classified as an AI system becomes subject to a four-tier risk framework (unacceptable → high → limited → minimal risk) with compliance obligations ranging from prohibited deployment to conformity assessments costing €50,000–€300,000. Software that falls outside Art.3(1) is simply not regulated by the AI Act.

This guide explains:

  1. the full text and structure of the Art.3(1) definition
  2. what the April 2026 Commission Guidelines (C(2026) 2032) clarified
  3. a Commission-aligned five-factor classification test, with a Python implementation
  4. category-by-category classifications for common software types
  5. the GPAI model vs. AI system distinction, and the practical steps for each outcome


The Art.3(1) Definition: Full Text and Structure

Article 3(1) of Regulation (EU) 2024/1689 reads:

"'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence real or virtual environments."

This single sentence contains five distinct elements. Each is a gate:

Element | Key Term                                      | What It Excludes
1       | Machine-based                                 | Purely human decision-making
2       | Varying levels of autonomy                    | Fully deterministic systems (contested)
3       | May exhibit adaptiveness after deployment     | Static, never-updated software (contested)
4       | Infers from input                             | Rule-based logic, hardcoded lookups
5       | Outputs influencing real/virtual environments | Purely analytical, no-output systems

The Commission Guidelines focus almost entirely on Element 4 — the inference criterion — as the primary distinguishing factor between AI systems and traditional software.


The April 2026 Commission Guidelines: What Changed

The Commission's April 8, 2026 guidelines (C(2026) 2032 final) were issued under Art.96 AI Act mandate to clarify implementation. Three clarifications are operationally critical:

Clarification 1: Inference ≠ Statistical Complexity

The Commission confirmed that "inference" in Art.3(1) does not mean statistically complex computation. A lookup table with 10 million entries is not inference. A simple linear regression model IS inference.

The test is mechanistic: does the system derive its output by learning patterns from data, or by executing logic explicitly programmed by a human?

"A software system that applies a fixed decision tree with branches determined entirely by human-authored rules does not 'infer' within the meaning of Art.3(1), even if that tree is very large. A system trained on data to minimise a loss function infers, even if the resulting model is simple." — Commission Guidelines C(2026) 2032, para. 14
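
The distinction in para. 14 can be made concrete with a minimal, purely illustrative sketch (the function names, thresholds, and data below are invented for demonstration, not drawn from the Guidelines): a fixed human-authored rule produces its output by executing explicit logic, while a least-squares fit derives its parameters from data, which is what Art.3(1) means by inference.

```python
def rule_based_credit_limit(income: float) -> float:
    """Human-authored branching: no inference, outside Art.3(1)."""
    if income < 20_000:
        return 1_000.0
    if income < 60_000:
        return 5_000.0
    return 15_000.0


def fit_linear_model(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """One-variable least-squares fit: parameters learned from data -> inference."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept


# Same simple output shape, different mechanism:
print(rule_based_credit_limit(30_000))  # 5000.0 -- human-authored branch
slope, intercept = fit_linear_model([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(slope, intercept)  # 2.0 0.0 -- parameters derived from the data
```

Both functions are trivially simple; only the second one "infers", because its behaviour was fitted to data rather than written down by a human.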

Clarification 2: "After Deployment" Adaptiveness is Not Required

Element 3 ("may exhibit adaptiveness after deployment") uses the word "may" — the Commission confirmed this is not a requirement for classification as an AI system. A frozen, static ML model that never updates post-deployment is still an AI system if it satisfies the inference criterion.

This closes a gap some developers tried to use: "our model never retrains, so it's not adaptive." The Commission explicitly rejected this argument.

Clarification 3: "Varying Levels of Autonomy" Includes Fully Supervised Systems

Element 2 ("varying levels of autonomy") was sometimes interpreted as requiring the system to operate without human oversight. The Commission clarified that the spectrum from 0% to 100% autonomy all qualifies — a system that always requires a human to approve its recommendations before acting still satisfies this element.

"The phrase 'varying levels' indicates a spectrum. A system that provides a recommendation always reviewed by a human operator is at the low end of that spectrum. It remains a machine-based system operating with some level of autonomy (it independently processes input and generates output) and satisfies Art.3(1)(2)." — Commission Guidelines C(2026) 2032, para. 9


The Five-Factor Test (Commission-Aligned)

Based on the Art.3(1) text and April 2026 Guidelines, the classification test is:

F1: Machine-based?           → Is this software running on a computing system?
F2: Autonomy (any level)?    → Does it independently process input and generate output?
F3: Inference mechanism?     → Does it derive outputs by pattern-learning from data?
F4: Output type?             → Predictions, content, recommendations, or decisions?
F5: Environmental influence? → Can the output affect real-world or digital processes?

F3 is the decisive factor in the vast majority of cases. F1, F2, F4, and F5 are met by almost all software. F3 separates AI systems from traditional software.

What "Inference" Looks Like in Practice

Systems that infer (AI systems under Art.3(1)):

  1. models trained on data to minimise a loss function — classifiers, regressors, neural networks, however simple
  2. embedding-based retrieval and ranking systems
  3. reinforcement learning agents
  4. hybrid systems where an ML component determines outputs, even partially

Systems that do not infer (not AI systems under Art.3(1)):

  1. human-authored rule engines and fixed decision trees, however large
  2. lookup tables and hardcoded mappings
  3. deterministic calculators executing explicitly programmed formulas


Python Implementation: The Art.3(1) Classifier

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class InferenceMechanism(Enum):
    """Whether the system derives outputs through data-learned patterns."""
    TRAINED_MODEL = "trained_model"           # ML model fitted on data
    LEARNED_EMBEDDINGS = "learned_embeddings" # Embedding-based retrieval
    REINFORCEMENT = "reinforcement_learning"  # RL agent
    RULE_BASED = "rule_based"                # Human-authored rules only
    HYBRID = "hybrid"                        # Rules + ML components
    UNKNOWN = "unknown"


class OutputType(Enum):
    """Art.3(1) output categories."""
    PREDICTION = "prediction"         # Future state, classification
    CONTENT = "content"              # Generated text, image, audio
    RECOMMENDATION = "recommendation" # Suggested action or item
    DECISION = "decision"            # Binding or highly influential choice
    OTHER = "other"                  # Does not fit above


class ClassificationResult(Enum):
    AI_SYSTEM = "ai_system"
    GPAI_MODEL = "gpai_model"         # General-purpose AI model (Art.3(63))
    OUTSIDE_SCOPE = "outside_scope"
    REQUIRES_ANALYSIS = "requires_analysis"


@dataclass
class SoftwareProfile:
    name: str
    machine_based: bool
    inference_mechanism: InferenceMechanism
    output_type: OutputType
    influences_environment: bool
    is_general_purpose: bool = False
    has_human_authored_rules_only: bool = False
    description: str = ""


@dataclass
class Art3Classification:
    result: ClassificationResult
    confidence: str              # HIGH / MEDIUM / LOW
    triggering_factor: str
    compliance_path: str
    explanation: str


class EUAIActArt3Classifier:
    """
    Classifies software under EU AI Act Art.3(1) based on April 2026
    Commission Guidelines C(2026) 2032.
    """

    def classify(self, profile: SoftwareProfile) -> Art3Classification:
        # F1: Machine-based check
        if not profile.machine_based:
            return Art3Classification(
                result=ClassificationResult.OUTSIDE_SCOPE,
                confidence="HIGH",
                triggering_factor="F1_not_machine_based",
                compliance_path="No EU AI Act obligations",
                explanation="Purely human decision-making is outside Art.3(1) scope.",
            )

        # F3: Inference mechanism — decisive factor per Commission Guidelines
        if profile.inference_mechanism == InferenceMechanism.RULE_BASED:
            if profile.has_human_authored_rules_only:
                return Art3Classification(
                    result=ClassificationResult.OUTSIDE_SCOPE,
                    confidence="HIGH",
                    triggering_factor="F3_no_inference_rule_based_only",
                    compliance_path="No EU AI Act obligations",
                    explanation=(
                        "Human-authored rule-based systems without learned components "
                        "do not 'infer' within Art.3(1). Commission Guidelines C(2026) 2032 "
                        "para.14 explicitly excludes fixed decision trees authored by humans."
                    ),
                )

        if profile.inference_mechanism == InferenceMechanism.HYBRID:
            return Art3Classification(
                result=ClassificationResult.REQUIRES_ANALYSIS,
                confidence="MEDIUM",
                triggering_factor="F3_hybrid_requires_decomposition",
                compliance_path="Analyse ML component scope; if dominant → AI system",
                explanation=(
                    "Hybrid systems require decomposition analysis. If the ML component "
                    "determines outputs (even partially), the system likely qualifies as an "
                    "AI system. Pure rule orchestration over ML sub-components = AI system."
                ),
            )

        # F3: Inference confirmed (trained model, embeddings, RL)
        inference_confirmed = profile.inference_mechanism in {
            InferenceMechanism.TRAINED_MODEL,
            InferenceMechanism.LEARNED_EMBEDDINGS,
            InferenceMechanism.REINFORCEMENT,
        }

        if not inference_confirmed:
            return Art3Classification(
                result=ClassificationResult.REQUIRES_ANALYSIS,
                confidence="LOW",
                triggering_factor="F3_mechanism_unclear",
                compliance_path="Conduct internal classification assessment",
                explanation="Inference mechanism not clearly established. Legal review required.",
            )

        # F4 + F5: Output type and environmental influence
        if profile.output_type == OutputType.OTHER or not profile.influences_environment:
            return Art3Classification(
                result=ClassificationResult.REQUIRES_ANALYSIS,
                confidence="LOW",
                triggering_factor="F4_F5_output_scope_unclear",
                compliance_path="Review output classification against Art.3(1) output types",
                explanation="Output does not clearly fit Art.3(1) output categories or has no environmental influence.",
            )

        # GPAI Model check (Art.3(63)) — separate category from AI system
        if profile.is_general_purpose:
            return Art3Classification(
                result=ClassificationResult.GPAI_MODEL,
                confidence="HIGH",
                triggering_factor="GPAI_Art3_63",
                compliance_path="GPAI obligations: transparency (Art.53), systemic risk assessment if FLOPs > 10^25 (Art.51)",
                explanation=(
                    "General-purpose AI models (trained on broad data, usable across domains) "
                    "are regulated as GPAI models under Art.3(63), not as AI systems directly. "
                    "Downstream deployers who integrate GPAI into a specific use case create "
                    "an AI system subject to risk-category rules."
                ),
            )

        # Full Art.3(1) qualification
        return Art3Classification(
            result=ClassificationResult.AI_SYSTEM,
            confidence="HIGH",
            triggering_factor="F1_F2_F3_F4_F5_all_met",
            compliance_path=_determine_compliance_path(profile),
            explanation=(
                f"This system satisfies all five Art.3(1) elements: machine-based, "
                f"operates with autonomy, uses {profile.inference_mechanism.value} inference, "
                f"produces {profile.output_type.value} outputs that influence the environment. "
                f"Next step: determine risk category (Art.6 high-risk list, Art.5 prohibited uses)."
            ),
        )


def _determine_compliance_path(profile: SoftwareProfile) -> str:
    if profile.output_type == OutputType.DECISION:
        return (
            "HIGH PRIORITY: Decision-making systems are frequent Art.6 high-risk candidates. "
            "Check Annex III (biometric, critical infrastructure, employment, education, "
            "essential services, law enforcement, migration, justice contexts)."
        )
    if profile.output_type == OutputType.RECOMMENDATION:
        return (
            "Check Annex III: recommendations in employment, credit, essential services "
            "contexts qualify as high-risk. Otherwise: likely limited/minimal risk → "
            "transparency obligations (Art.50) if interacting with humans."
        )
    return (
        "Determine risk category: Art.5 (prohibited) → Annex III (high-risk) → "
        "Art.50 (limited risk transparency) → minimal risk (no mandatory obligations)."
    )

Category-by-Category Classification

1. Large Language Models (LLMs) — GPAI, then AI System

A standalone LLM (GPT-4, Claude, Gemini, Llama) is a GPAI model under Art.3(63), not directly an AI system. However, when you integrate an LLM into a specific application — a customer service chatbot, a medical record summariser, a credit assessment assistant — that application becomes an AI system subject to risk-category rules.

classifier = EUAIActArt3Classifier()

# Standalone LLM provider
llm_provider = SoftwareProfile(
    name="Commercial LLM API",
    machine_based=True,
    inference_mechanism=InferenceMechanism.TRAINED_MODEL,
    output_type=OutputType.CONTENT,
    influences_environment=True,
    is_general_purpose=True,  # GPAI
)
# → GPAI_MODEL: Art.53 transparency, Art.51 systemic risk if >10^25 FLOPs

# LLM integrated into HR screening tool
hr_llm = SoftwareProfile(
    name="LLM-powered CV Screener",
    machine_based=True,
    inference_mechanism=InferenceMechanism.TRAINED_MODEL,
    output_type=OutputType.RECOMMENDATION,
    influences_environment=True,
    is_general_purpose=False,  # Specific use case
)
result = classifier.classify(hr_llm)
# → AI_SYSTEM (HIGH confidence)
# → Check Annex III: employment context = HIGH RISK
# → Art.9 risk management + Art.10 data governance + Art.13 transparency mandatory

2. Recommendation Engines — Depends on Context

E-commerce product recommendations are typically minimal risk AI systems. Credit scoring recommendations are high risk (Annex III point 5(b)). Content recommendation systems on platforms with >45M users may trigger Digital Services Act obligations in addition to AI Act.

# E-commerce recommender
ecommerce_rec = SoftwareProfile(
    name="Product Recommendation Engine",
    machine_based=True,
    inference_mechanism=InferenceMechanism.LEARNED_EMBEDDINGS,
    output_type=OutputType.RECOMMENDATION,
    influences_environment=True,
    is_general_purpose=False,
)
# → AI_SYSTEM (HIGH confidence)
# → Not on Annex III list → minimal risk → no mandatory obligations
# → Good practice: Art.50 transparency disclosure voluntary

# Credit scoring
credit_scorer = SoftwareProfile(
    name="Creditworthiness Assessment Model",
    machine_based=True,
    inference_mechanism=InferenceMechanism.TRAINED_MODEL,
    output_type=OutputType.DECISION,
    influences_environment=True,
    is_general_purpose=False,
)
# → AI_SYSTEM (HIGH confidence)
# → Annex III point 5(b): essential private services, creditworthiness → HIGH RISK
# → Full Chapter III obligations: conformity assessment, CE marking, Annex IV documentation

3. Fraud Detection Systems — High Risk in Financial Context

Bank fraud detection models (output: transaction flagged/not flagged) are AI systems. Whether they are high risk depends on whether the output constitutes a "decision affecting natural persons" in an essential service context.

The Commission Guidelines note that purely internal fraud detection — where a human always reviews flags before action — sits at the lower end of autonomy but still qualifies as an AI system. If the system auto-blocks transactions without human review, it likely qualifies as high risk under Annex III point 5(b).
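
That human-in-the-loop distinction can be sketched as a minimal routing function. Everything here is illustrative: `FraudFlag`, the 0.9 threshold, and the routing labels are assumptions for demonstration, not AI Act terms. The point is that only the deployment mode changes between the two cases, not the model, so Art.3(1) status is unaffected while the likely risk category shifts.

```python
from dataclasses import dataclass


@dataclass
class FraudFlag:
    transaction_id: str
    score: float  # model output in [0, 1]


def route_flag(flag: FraudFlag, auto_block: bool, threshold: float = 0.9) -> str:
    """Route a model flag downstream; the model itself is identical in both modes."""
    if flag.score < threshold:
        return "pass"
    if not auto_block:
        # Low end of the autonomy spectrum: still an AI system under Art.3(1).
        return "queue_for_human_review"
    # Autonomous blocking in an essential-service context: likely Annex III 5(b).
    return "auto_block"


print(route_flag(FraudFlag("tx-1", 0.95), auto_block=False))  # queue_for_human_review
print(route_flag(FraudFlag("tx-2", 0.95), auto_block=True))   # auto_block
```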

4. Traditional Business Rule Engines — Outside Scope

A Drools-based underwriting engine where every rule was explicitly written by underwriters: outside Art.3(1) scope. Key test: can you read the rules as human-authored logic? If yes, no inference → not an AI system.

rule_engine = SoftwareProfile(
    name="Drools Underwriting Rules Engine",
    machine_based=True,
    inference_mechanism=InferenceMechanism.RULE_BASED,
    output_type=OutputType.DECISION,
    influences_environment=True,
    has_human_authored_rules_only=True,
)
result = classifier.classify(rule_engine)
# → OUTSIDE_SCOPE (HIGH confidence)
# → No EU AI Act obligations
# NOTE: If ML models are later added to tune rule thresholds → reclassify

5. Anomaly Detection Systems — Context-Dependent

Industrial anomaly detection (sensor data → maintenance alert) is an AI system. If deployed in critical infrastructure (energy, water, transport), it likely qualifies as high risk under Annex III points 2 or 3.

industrial_anomaly = SoftwareProfile(
    name="Manufacturing Line Anomaly Detector",
    machine_based=True,
    inference_mechanism=InferenceMechanism.TRAINED_MODEL,
    output_type=OutputType.PREDICTION,
    influences_environment=True,  # Physical machinery downstream
    is_general_purpose=False,
)
# → AI_SYSTEM → check critical infrastructure status
# If plant is "critical infrastructure" under NIS2: HIGH RISK (Annex III point 2)
# Otherwise: likely minimal risk

6. Chatbots — Limited Risk with Transparency Obligations

Chatbots interacting with humans using pre-scripted responses (no LLM): traditional software, outside Art.3(1) if no inference. Chatbots using an LLM or dialogue model: AI system, limited risk — subject to Art.50(1) disclosure obligation (must inform users they are interacting with AI, unless obvious).

Emotion recognition systems embedded in chatbots (detecting user stress/sentiment) may trigger high-risk classification under Annex III point 1(b) if used in employment or education contexts.
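
The pre-scripted case is easy to see in code. A sketch under the assumptions above (the keywords and responses are invented): a keyword lookup over human-authored responses involves no learned component and does not infer.

```python
# Human-authored response table: every entry written by a person.
SCRIPTED_RESPONSES = {
    "opening hours": "We are open 9:00-17:00 CET, Monday to Friday.",
    "refund": "Refund requests are handled via the returns form.",
}


def scripted_reply(message: str) -> str:
    """Keyword lookup: no learned component, no inference, outside Art.3(1)."""
    for keyword, response in SCRIPTED_RESPONSES.items():
        if keyword in message.lower():
            return response
    return "Let me connect you to a human agent."


print(scripted_reply("What are your opening hours?"))
# -> We are open 9:00-17:00 CET, Monday to Friday.
```

The moment unmatched messages fall back to an LLM, the combined system contains a trained model, qualifies as an AI system, and the Art.50(1) disclosure obligation attaches.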


The GPAI vs. AI System Distinction

This is the most common classification confusion for developers building on foundation models:

Entity                                      | Classification                      | Key Obligation
OpenAI (providing GPT-4 via API)            | GPAI Model Provider                 | Art.53: technical documentation, copyright policy, training data summary
OpenAI (if GPT-4 FLOPs > 10^25)             | GPAI Model with Systemic Risk       | Art.51+55: adversarial testing, incident reporting, cybersecurity
Developer building HR tool on GPT-4         | AI System Provider                  | Full risk-category analysis, Annex III check, conformity assessment if high risk
Developer fine-tuning Llama for medical use | Modifying GPAI → AI System Provider | Assumes provider obligations for the fine-tuned system

The Commission Guidelines clarify that fine-tuning does not automatically reclassify a GPAI as an AI system — the purpose-specificity matters. A fine-tuned model deployed for a specific downstream use case is an AI system; a fine-tuned model published as a general-purpose base model remains a GPAI.
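
The purpose-specificity test can be sketched as a small decision helper. `FineTunedRelease` and its fields are illustrative assumptions, not Commission-defined terms; the branching mirrors the distinction described above.

```python
from dataclasses import dataclass


@dataclass
class FineTunedRelease:
    base_model: str
    deployed_for_specific_use: bool  # e.g. a medical triage product
    published_as_base_model: bool    # e.g. released for arbitrary downstream use


def classify_fine_tune(release: FineTunedRelease) -> str:
    """Apply the purpose-specificity test to a fine-tuned model."""
    if release.deployed_for_specific_use:
        return "AI system: run the risk-category analysis (Art.5, Art.6, Annex III)"
    if release.published_as_base_model:
        return "GPAI model (Art.3(63)): Art.53 obligations apply"
    return "requires analysis"


medical = FineTunedRelease("Llama", deployed_for_specific_use=True,
                           published_as_base_model=False)
base = FineTunedRelease("Llama", deployed_for_specific_use=False,
                        published_as_base_model=True)
print(classify_fine_tune(medical))  # AI system path
print(classify_fine_tune(base))     # GPAI path
```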


Art.3(1) Classification Decision Tree

                    ┌─────────────────────────────┐
                    │  Is it machine-based?       │
                    └─────────────┬───────────────┘
                                  │
                    NO ───────────┴──────── YES
                    │                        │
             Outside scope          ┌────────▼─────────┐
                                    │  Does it infer   │
                                    │  from data?      │
                                    └────────┬─────────┘
                              NO ────────────┴──── YES
                              │                     │
                         ┌────▼──────┐    ┌────────▼────────┐
                         │ Rule-based│    │ General purpose? │
                         │ only?     │    └────────┬─────────┘
                         └────┬──────┘             │
                    YES ──────┘                YES / NO
                    │                           │     │
             Outside scope                  GPAI  AI System
                                            Model
                                              │
                                    ┌──────────▼───────────┐
                                    │ Risk Category:       │
                                    │ Art.5 Prohibited?    │
                                    │ Annex III High Risk? │
                                    │ Art.50 Limited Risk? │
                                    │ Minimal Risk?        │
                                    └──────────────────────┘

Practical Implications for Developers and Operators

If Your Software IS an AI System

Immediate steps:

  1. Risk category determination — check Art.5 prohibited use cases first, then Annex III (13 high-risk categories), then Art.50 limited risk
  2. If high risk: start conformity assessment process (Art.43) — internal check for most Annex III systems, third-party for biometric and critical infrastructure systems
  3. CE marking (Art.48): required for high-risk AI systems placed on EU market
  4. Technical documentation (Annex IV): mandatory for high-risk, extensive requirements
  5. Post-market monitoring (Art.72): ongoing obligation once deployed

Timeline: EU AI Act's high-risk provisions apply from August 2026 for Annex III systems. Prohibited use cases: already in force since February 2025.

If Your Software is a GPAI Model

  1. Art.53 documentation: provide technical documentation, training data summary, copyright compliance policy to downstream deployers
  2. Systemic risk assessment (Art.51): if training compute exceeds 10^25 FLOPs — adversarial testing, incident reporting to Commission, cybersecurity measures
  3. Downstream pass-through: your API terms must allow deployers to fulfil their AI system obligations

If Your Software is Outside Scope

Document the reasoning. Regulators and enterprise customers will ask. A brief internal memo — "this system uses human-authored rules with no ML component, therefore outside Art.3(1) per Commission Guidelines C(2026) 2032" — is the minimum.
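
Such a memo is cheap to generate and version alongside the codebase. A minimal sketch — the field layout and function name are illustrative assumptions, and this is a starting point, not a legally sufficient record:

```python
from datetime import date


def outside_scope_memo(system_name: str, rationale: str) -> str:
    """Generate a minimal internal classification record for an out-of-scope system."""
    return (
        f"Classification memo: {system_name}\n"
        f"Date: {date.today().isoformat()}\n"
        f"Conclusion: outside Art.3(1) scope; no EU AI Act obligations.\n"
        f"Rationale: {rationale}\n"
        f"Basis: Commission Guidelines C(2026) 2032, para. 14.\n"
        f"Re-review trigger: any ML component added to this system."
    )


print(outside_scope_memo(
    "Drools Underwriting Rules Engine",
    "Human-authored rules only; no learned component; no inference.",
))
```

The re-review trigger line matters: the rule-engine example above flips into scope the day an ML model starts tuning its thresholds.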


Hosting Considerations: Where Classification Happens

For EU developers, the Art.3(1) classification question is separate from the question of where the AI system runs. A high-risk AI system deployed on EU-based infrastructure is still a high-risk AI system — hosting does not affect risk classification.

However, EU-hosted infrastructure matters for two adjacent reasons:

Data governance (Art.10): Training data for high-risk AI systems must comply with GDPR data minimisation and purpose limitation. EU-hosted training pipelines give you a single-jurisdiction compliance perimeter rather than cross-border transfers requiring Standard Contractual Clauses or equivalence decisions.

Incident reporting (Art.72–73): Post-market monitoring systems and serious incident logging for high-risk AI systems require reliable, auditable infrastructure. EU-native hosting eliminates CLOUD Act exposure — a US-based cloud provider hosting your incident logs faces potential US government access to data that is simultaneously protected under GDPR and subject to NIS2 reporting obligations.


What to Do Now

The April 2026 Commission Guidelines are soft law — they clarify but do not change the underlying Regulation. The Art.3(1) text itself has applied since the AI Act entered into force. For developers who have been deferring AI classification:

  1. Run the five-factor test on every ML component in your stack
  2. Separate GPAI from AI system questions — your LLM provider's obligations and your obligations as a deployer are distinct
  3. Document the inference mechanism — this is the decisive element NCAs will examine first
  4. Check high-risk Annex III now — August 2026 compliance deadline is 4 months away
  5. If you build on EU infrastructure: document how your hosting choice supports Art.10 data governance and Art.72 incident monitoring

The classification question has a definitive answer for most software — the Commission Guidelines make that clearer than ever. The harder question is what comes after: risk category, conformity assessment, CE marking, documentation. But you cannot get there without first answering Art.3(1).