2026-04-22 · 14 min read

EU AI Act Art.3(4)–(12): Provider, Deployer, Importer — Role Classification Guide for Developers

The EU AI Act does not impose the same obligations on everyone who touches AI. It distinguishes four roles in the supply chain — provider, deployer, importer, distributor — and maps dramatically different compliance requirements to each. A provider building high-risk AI must complete a conformity assessment. A deployer of the same system must implement human oversight. A distributor needs only verify that the CE marking is present.

Getting your role wrong means following the wrong compliance path.

In practice, most developers building AI-powered products fall into one of two categories — provider or deployer — and the boundary between them is less obvious than it appears. This guide explains:

  1. The statutory definitions of the four roles in Art.3(4)–(12)
  2. The provider/deployer threshold and how "develops" is read in practice
  3. The Art.25 scenarios in which a deployer assumes provider obligations
  4. Role classification for common development patterns (API integration, fine-tuning, RAG, resale)

The Four Roles: Statutory Definitions

Regulation (EU) 2024/1689 (the EU AI Act) defines roles in Article 3. Each definition is a gate — you either clear it or you do not.

Art.3(4): Provider

"'provider' means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge"

Three elements must all be present:

  1. Develops — creates, trains, or substantially modifies an AI system or GPAI model
  2. Places on market or puts into service — makes it available to others, or uses it directly (including internal deployment)
  3. Under its own name or trademark — asserts authorship or brand ownership

The "whether for payment or free of charge" clause eliminates the argument that open-source releases or free-tier products are outside the Act's scope. A developer who trains a model and releases it on Hugging Face under their organisation's name is a provider.

What provider status triggers (for high-risk AI):

  1. Art.9 risk management system
  2. Art.10 training data governance
  3. Art.11 technical documentation
  4. Art.17 quality management system
  5. Art.43 conformity assessment and CE marking
  6. Art.49 EU database registration

Art.3(9): Deployer

"'deployer' means a natural or legal person, public authority, agency or other body that uses an AI system under its own responsibility, except where the AI system is used in the course of a personal non-professional activity"

Deployers acquire an AI system from a provider and use it — they do not develop it. The defining characteristic is use without development.

What deployer status triggers (for high-risk AI):

  1. Art.26 use in accordance with the provider's instructions, with human oversight
  2. Art.26(2) ensuring input data is relevant to the intended purpose
  3. Art.26(5) suspending use and informing the provider where the system presents an unacceptable risk
  4. Art.26(7) informing affected workers where the system is used in the workplace
  5. Art.27 fundamental rights impact assessment (for public bodies and certain private deployers)

Art.3(11): Importer

"'importer' means a natural or legal person established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union"

Importers are EU-established entities who bring non-EU AI systems into the EU market. If you are an EU company reselling or distributing an AI system built by a US or Asian company, you are an importer — and you inherit significant provider-equivalent obligations under Art.23.

Art.3(12): Distributor

"'distributor' means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without modifying its intended purpose"

The key limiting clause is "without modifying its intended purpose". A distributor adds no technical modification and takes no brand ownership. If you resell a third-party AI tool under your own brand name, you are not a distributor — you are likely an importer or even a provider.


The Critical Threshold: Provider vs. Deployer

The distinction between provider and deployer is the most consequential classification decision for most developers. The compliance cost difference is substantial:

Dimension | Provider (High-Risk AI) | Deployer (High-Risk AI)
Conformity assessment | Required | Not required
Technical documentation | Must create | Must receive from provider
Training data governance | Required | Not applicable
CE marking | Required | Not required
EU database registration | Required | May be required separately
Human oversight | Must enable it | Must implement it
Post-market monitoring | Must conduct | Must report to provider

The Baseline Rule

You are a provider if you develop an AI system and deploy it or make it available under your own brand.

You are a deployer if you use an AI system that someone else developed, without taking on brand ownership of the AI system itself.

The Development Threshold

"Develops" is not defined further in Art.3, but Recital 85 and the legislative history clarify that development encompasses:

  1. Training a model from scratch
  2. Fine-tuning an existing model (changing its weights)
  3. Substantially modifying an existing AI system

The operative question: did you change the model weights? If yes, and if you deploy the result, you are likely a provider. If no, you are likely a deployer.
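The rule of thumb above can be distilled into a few lines. This is an illustrative simplification of the heuristic, not a statutory test; the function name and return strings are this guide's own.

```python
def crossed_development_threshold(changed_model_weights: bool,
                                  deployed_result: bool) -> str:
    """Heuristic for the Art.3(4) 'develops' element.

    Illustrative only: a sketch of the weight-change rule of thumb,
    not legal advice or the Act's own wording.
    """
    if changed_model_weights and deployed_result:
        return "likely provider"
    if changed_model_weights:
        return "developer, not yet provider (pre-market phase)"
    return "likely deployer"
```

A fine-tuner who ships the result lands on "likely provider"; a pure API integrator lands on "likely deployer".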


The Art.25 Role-Flip: When Deployers Become Providers

Article 25 is the section most developers miss. It establishes that deployers must assume provider obligations in three specific scenarios:

Art.25(1)(a): Own-Name Deployment

A deployer becomes a provider when they put a high-risk AI system into service under their own name or trademark.

If you license a high-risk AI system from a provider and market it to your customers as "YourCompany AI" — without disclosing that it is built on another system — you have assumed provider obligations. The trigger is brand assertion over the AI system, not just over the product it is embedded in.

The test: Would a user understand they are interacting with your AI system, or with the original provider's AI system embedded in your product?

Art.25(1)(b): Substantial Modification

A deployer who substantially modifies a high-risk AI system becomes the provider for the modified version.

"Substantial modification" means a change that affects the AI system's compliance with Title III Chapter 2 requirements — essentially, a change that would require re-evaluation of conformity. Recital 66 provides examples: changing the intended purpose, retraining on new data categories, modifying output thresholds in ways that affect risk levels.

The test: Does your modification alter what the risk assessment would conclude?

Art.25(1)(c): High-Risk Use Beyond Intended Purpose

A deployer who uses a high-risk AI system for a purpose not covered by the provider's Declaration of Conformity assumes provider obligations for that use.

If a provider places on the market a facial recognition system certified for building access control, and a deployer uses it for employee performance monitoring — a purpose not covered in the original conformity assessment — the deployer becomes a provider for that use.

The test: Is your use case within the scope of the provider's technical documentation and Declaration of Conformity?


Role Classification by Development Pattern

Pattern 1: Building an LLM-Powered SaaS Application

Scenario: You call OpenAI's API (or any third-party LLM) and build a customer-facing product on top.

Role: Deployer — unless one of the Art.25 triggers applies.

The LLM itself is developed by OpenAI (the provider). You use it. You do not train or fine-tune. Even if you build complex prompt chains, agent orchestration, or multi-step pipelines — the model weights are unchanged.

Exception: If your SaaS product is itself categorised as a high-risk AI system under Annex III (e.g., employment-related AI, credit scoring, biometric identification) and you deploy it under your brand for that high-risk purpose, you trigger Art.25(1)(a) and assume provider obligations for the high-risk AI system you have created by combining the LLM with your business logic.
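What this pattern looks like in code: the application orchestrates prompts and business logic around a third-party model but never touches its weights. The `call_llm` function below is a stub standing in for any hosted LLM API; in a real product it would be an HTTP call to the provider's endpoint.

```python
def call_llm(prompt: str) -> str:
    # Stub standing in for a third-party inference endpoint.
    # The model's weights live with the provider and are never modified.
    return f"[model response to: {prompt[:40]}]"

def triage_ticket(ticket_text: str) -> dict:
    """Multi-step prompt chain: complex orchestration, zero weight changes."""
    category = call_llm(f"Classify this support ticket: {ticket_text}")
    draft = call_llm(f"Draft a reply for a ticket classified as: {category}")
    return {"category": category, "draft_reply": draft}
```

However elaborate the chaining becomes, the classification signal is the same: the model is consumed, not developed.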

Pattern 2: Fine-Tuning an Open-Source Model

Scenario: You take Llama 3 (open-source) or Mistral, fine-tune it on your proprietary dataset, and deploy the resulting model in your product.

Role: Provider — the fine-tuned model is your AI system.

Fine-tuning produces a new model with modified weights. You are responsible for:

  1. Conformity assessment of the fine-tuned system (if high-risk)
  2. Technical documentation covering the fine-tuning process
  3. Data governance for your fine-tuning dataset (Art.10, if high-risk)

The original model's provider (Meta for Llama, Mistral AI for Mistral) was the provider of the base model. You are the provider of the fine-tuned variant.
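The mechanics of why fine-tuning crosses the threshold can be shown with a deliberately toy example (plain Python, no ML framework): a single gradient step writes new values into the parameters, and the result is no longer the original model.

```python
def finetune_step(weights: list[float], grads: list[float],
                  lr: float = 0.1) -> list[float]:
    """One gradient-descent update: w <- w - lr * g for each parameter."""
    return [w - lr * g for w, g in zip(weights, grads)]

base_weights = [0.5, -0.2, 0.8]        # the upstream provider's model
tuned = finetune_step(base_weights, [0.1, -0.3, 0.0])

# The tuned parameters differ from the base parameters:
# modified weights -> a new AI system -> you are its provider.
assert tuned != base_weights
```

RAG and prompt engineering never execute anything like `finetune_step`; that is the whole classification difference between Patterns 2 and 3.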

Pattern 3: RAG (Retrieval-Augmented Generation) Pipeline

Scenario: You build a knowledge base, chunk documents, store embeddings, and retrieve relevant context at query time before passing it to a third-party LLM.

Role: Deployer — RAG does not modify the model.

Retrieval-augmented generation changes the input to the model, not the model itself. The inference mechanism (Element 4 of Art.3(1)) operates unchanged. You are shaping what goes into the model, not how the model processes information.

This holds even for sophisticated RAG implementations with query rewriting, re-ranking, multi-hop retrieval, and hybrid search. As long as the LLM weights are untouched, you are a deployer.
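A minimal retrieval sketch makes the point concrete: everything below shapes the input to the model, while the model itself is untouched. The `embed` function is a stub standing in for any embedding API; a real pipeline would call one.

```python
import math

def embed(text: str) -> list[float]:
    # Stub: deterministic pseudo-embedding derived from character codes.
    # A real pipeline would call an embedding model here.
    return [sum(ord(c) for c in text) % 97, float(len(text))]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble retrieved context into the prompt sent to the (unmodified) LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

No line here touches model weights; the LLM receives a richer prompt and nothing else, which is why this pattern stays on the deployer side of the threshold.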

Pattern 4: Embedding a Third-Party AI into Your Product

Scenario: You embed a third-party AI module (e.g., a fraud detection model, a document classification API) into your software product and sell the combined product to customers.

Role classification depends on what you claim:

  1. You sell the combined product under the third party's AI branding and make no modification → you remain a deployer of the module (or, if you resell the module itself unchanged, a distributor)
  2. You present the product's AI capability as "YourCompany AI" → Art.25(1)(a) applies and you assume provider obligations for the embedded system

Pattern 5: Training a Model From Scratch

Scenario: You collect training data, define the architecture, train the model, and deploy it.

Role: Provider — unambiguously.

Pattern 6: Reselling a Third-Party AI Tool in the EU Market

Scenario: You are a European company reselling a US-built AI tool to European enterprise customers.

Role: Importer (if you place it on the EU market under the US company's brand) or Provider (if you rebrand it as your own product).

As an importer, Art.23 requires you to verify that the provider has performed conformity assessment, that the CE marking is present, that technical documentation exists, and that you can provide contact information for the provider to market surveillance authorities.
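The Art.23 verification duties listed above lend themselves to a pre-market checklist. The sketch below is illustrative; the field names are this guide's own shorthand, not statutory terms.

```python
from dataclasses import dataclass

@dataclass
class ImporterDossier:
    """Evidence an importer should hold before placing a non-EU system on the EU market."""
    conformity_assessment_done: bool
    ce_marking_present: bool
    technical_documentation_available: bool
    provider_contact_on_file: bool

def art23_gaps(dossier: ImporterDossier) -> list[str]:
    """Return the Art.23 checks that are still outstanding (empty = ready)."""
    checks = {
        "conformity assessment": dossier.conformity_assessment_done,
        "CE marking": dossier.ce_marking_present,
        "technical documentation": dossier.technical_documentation_available,
        "provider contact details": dossier.provider_contact_on_file,
    }
    return [name for name, ok in checks.items() if not ok]
```

An importer would run this before placing the system on the market; any non-empty result means the product cannot lawfully be placed yet.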


Decision Tree: Determining Your Role

START: Does your organisation interact with an AI system?
│
├─ Did you train the model (from scratch or via fine-tuning)?
│   ├─ YES → Did you place it on the market or put it into service?
│   │   ├─ YES → You are a PROVIDER
│   │   └─ NO → Not yet regulated (pre-market development phase)
│   │
│   └─ NO → You use an AI system developed by another party
│       │
│       ├─ Are you an EU company placing a non-EU AI system on the EU market?
│       │   ├─ YES, under the original provider's brand → IMPORTER
│       │   └─ YES, under your own brand → PROVIDER (Art.25 applies)
│       │
│       └─ You use the AI system internally or pass outputs to customers
│           │
│           ├─ Art.25 check:
│           │   ├─ Do you deploy it under your own brand as an AI system? → PROVIDER
│           │   ├─ Did you substantially modify it? → PROVIDER
│           │   └─ Do you use it beyond the intended purpose? → PROVIDER
│           │
│           └─ None of the above → DEPLOYER

Python Role Classifier

from dataclasses import dataclass
from enum import Enum

class AIActRole(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"
    PROVIDER_VIA_ART25A = "provider_via_art25_1a_own_name"
    PROVIDER_VIA_ART25B = "provider_via_art25_1b_substantial_modification"
    PROVIDER_VIA_ART25C = "provider_via_art25_1c_beyond_intended_purpose"

@dataclass
class DevelopmentProfile:
    trained_from_scratch: bool
    fine_tuned_existing_model: bool
    rag_only_no_fine_tuning: bool
    api_integration_only: bool
    # True only if you place the AI system *itself* on the EU market
    # (sale, resale, distribution) -- not if you merely use it inside a product.
    places_on_eu_market: bool
    original_developer_is_non_eu: bool
    deploys_under_own_brand_as_ai_system: bool
    substantially_modified: bool
    use_beyond_intended_purpose: bool
    for_high_risk_application: bool

@dataclass
class RoleClassification:
    primary_role: AIActRole
    is_high_risk_provider: bool
    art25_triggered: bool
    rationale: str
    compliance_path: str

def classify_ai_act_role(profile: DevelopmentProfile) -> RoleClassification:
    """
    Classify an organisation's EU AI Act role based on their development profile.
    
    References:
    - Art.3(4): provider definition
    - Art.3(9): deployer definition
    - Art.3(11): importer definition
    - Art.3(12): distributor definition
    - Art.25: deployer-to-provider role flip
    """
    
    # Base provider: trained or fine-tuned + places on market
    if profile.trained_from_scratch or profile.fine_tuned_existing_model:
        if profile.places_on_eu_market:
            return RoleClassification(
                primary_role=AIActRole.PROVIDER,
                is_high_risk_provider=profile.for_high_risk_application,
                art25_triggered=False,
                rationale=(
                    "You developed the AI system (training or fine-tuning) and place it "
                    "on the market or into service. Art.3(4) provider definition applies."
                ),
                compliance_path=(
                    "High-risk: Art.9 risk management, Art.10 data governance, "
                    "Art.11 technical documentation, Art.17 QMS, Art.43 conformity "
                    "assessment, Art.49 EU database registration."
                ) if profile.for_high_risk_application else (
                    "Non-high-risk provider: transparency obligations (Art.50 if "
                    "applicable), general GPAI obligations if GPAI model (Art.51-55)."
                ),
            )
    
    # Importer: EU entity placing non-EU AI on EU market
    if (profile.places_on_eu_market 
            and profile.original_developer_is_non_eu
            and not profile.trained_from_scratch
            and not profile.fine_tuned_existing_model
            and not profile.deploys_under_own_brand_as_ai_system):
        return RoleClassification(
            primary_role=AIActRole.IMPORTER,
            is_high_risk_provider=False,
            art25_triggered=False,
            rationale=(
                "You are an EU-established entity placing a non-EU developer's AI system "
                "on the EU market under the original developer's name. Art.3(11) importer "
                "definition applies."
            ),
            compliance_path=(
                "Art.23 obligations: verify conformity assessment completed, CE marking "
                "present, technical documentation available, registration in EU database. "
                "Keep records of providers' contact details for market surveillance."
            ),
        )
    
    # Art.25 role-flip checks for deployers
    if profile.deploys_under_own_brand_as_ai_system:
        return RoleClassification(
            primary_role=AIActRole.PROVIDER_VIA_ART25A,
            is_high_risk_provider=profile.for_high_risk_application,
            art25_triggered=True,
            rationale=(
                "Art.25(1)(a): You deploy the AI system under your own name or trademark. "
                "Even though you did not develop it, you assume provider obligations for "
                "this deployment."
            ),
            compliance_path=(
                "Full provider compliance path applies. Obtain technical documentation "
                "from original provider. Perform own conformity assessment for your "
                "deployment context. Register in EU database."
            ),
        )
    
    if profile.substantially_modified:
        return RoleClassification(
            primary_role=AIActRole.PROVIDER_VIA_ART25B,
            is_high_risk_provider=profile.for_high_risk_application,
            art25_triggered=True,
            rationale=(
                "Art.25(1)(b): You substantially modified a high-risk AI system. "
                "The modified system is treated as a new AI system for which you are "
                "the provider."
            ),
            compliance_path=(
                "Full provider compliance path for the modified version. Document what "
                "was modified and why it constitutes a substantial modification. New "
                "conformity assessment required."
            ),
        )
    
    if profile.use_beyond_intended_purpose and profile.for_high_risk_application:
        return RoleClassification(
            primary_role=AIActRole.PROVIDER_VIA_ART25C,
            is_high_risk_provider=True,
            art25_triggered=True,
            rationale=(
                "Art.25(1)(c): You use the AI system for a purpose not covered by the "
                "provider's Declaration of Conformity. For this use, you assume provider "
                "obligations."
            ),
            compliance_path=(
                "Perform conformity assessment for the specific use case. Create "
                "supplementary technical documentation. Consider whether you can obtain "
                "an updated Declaration of Conformity from the original provider instead."
            ),
        )
    
    # Default: deployer
    return RoleClassification(
        primary_role=AIActRole.DEPLOYER,
        is_high_risk_provider=False,
        art25_triggered=False,
        rationale=(
            "You use an AI system developed by another party, under the deployer "
            "definition of Art.3(9). None of the Art.25 role-flip conditions apply."
        ),
        compliance_path=(
            "Deployer obligations: Art.26 human oversight, Art.26(2) input data "
            "relevance, Art.26(5) suspend if unacceptable risk, Art.26(7) employee "
            "information (if applicable), Art.27 FRIA for public authorities."
        ) if profile.for_high_risk_application else (
            "Minimal-risk deployer: voluntary codes of conduct (Art.95), general "
            "AI literacy obligations (Art.4)."
        ),
    )


# Examples
if __name__ == "__main__":
    # Example 1: RAG-based enterprise search built on a third-party LLM
    rag_deployer = DevelopmentProfile(
        trained_from_scratch=False,
        fine_tuned_existing_model=False,
        rag_only_no_fine_tuning=True,
        api_integration_only=False,
        places_on_eu_market=False,  # the LLM is used, not placed on the market itself
        original_developer_is_non_eu=True,
        deploys_under_own_brand_as_ai_system=False,
        substantially_modified=False,
        use_beyond_intended_purpose=False,
        for_high_risk_application=False,
    )
    result = classify_ai_act_role(rag_deployer)
    print(f"RAG SaaS: {result.primary_role.value}")
    # → deployer

    # Example 2: Fine-tuned model for CV screening (high-risk, Annex III category 4)
    finetuned_provider = DevelopmentProfile(
        trained_from_scratch=False,
        fine_tuned_existing_model=True,  # fine-tuned on HR data
        rag_only_no_fine_tuning=False,
        api_integration_only=False,
        places_on_eu_market=True,
        original_developer_is_non_eu=True,
        deploys_under_own_brand_as_ai_system=True,
        substantially_modified=False,
        use_beyond_intended_purpose=False,
        for_high_risk_application=True,  # CV screening = Annex III
    )
    result = classify_ai_act_role(finetuned_provider)
    print(f"Fine-tuned HR model: {result.primary_role.value}")
    # → provider (fine-tuned + high-risk + own brand)

The Dual-Role Scenario

Organisations may simultaneously be a provider of one AI system and a deployer of another. A company that:

  1. trains and sells its own fraud-detection model, and
  2. licenses a third-party customer-support chat system for internal use

must comply as a provider for the fraud model and as a deployer for the chat system. The roles apply system by system, not organisation-wide.

This also occurs within the GPAI value chain. Under Art.3(4), if you are a downstream developer building a product on top of a GPAI model (e.g., an LLM API), and you deploy that product under your own name, you are the provider of that downstream AI system, while the underlying GPAI model remains its original provider's responsibility.

Art.53(1)(b) supports this chain: providers of GPAI models must make sufficient information available to downstream providers who integrate their models (technical documentation, capabilities, limitations, usage restrictions) so that those downstream providers can fulfil their own compliance obligations.


Compliance Timeline: Role-Specific Deadlines

Milestone | Provider Deadline | Deployer Deadline
Prohibited AI practices prohibition | 2 Feb 2025 | 2 Feb 2025 (cannot deploy prohibited AI)
GPAI obligations | 2 Aug 2025 | 2 Aug 2025 (cannot deploy non-compliant GPAI models)
High-risk AI (Annex III) full obligations | 2 Aug 2026 | 2 Aug 2026
Art.6(1) high-risk AI (products) | 2 Aug 2027 | 2 Aug 2027
Technical standards adoption | CEN/CENELEC target H1 2026 | Not applicable
NIS2 compliance audit (cloud providers) | 30 Jun 2026 | Depends on entity classification

The 2 August 2026 deadline is the most immediate critical date. High-risk AI systems under Annex III must achieve full compliance — conformity assessment, technical documentation, EU database registration — by that date.
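For planning purposes, the milestones in the table above can be tracked with a simple lookup. The milestone keys below are this guide's own labels; the dates are those stated in the table.

```python
from datetime import date

# Role-relevant AI Act milestones from the timeline above.
MILESTONES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_annex_iii": date(2026, 8, 2),
    "high_risk_art6_products": date(2027, 8, 2),
}

def days_remaining(milestone: str, today: date) -> int:
    """Days until a milestone; negative means the deadline has passed."""
    return (MILESTONES[milestone] - today).days
```

A compliance dashboard might call `days_remaining("high_risk_annex_iii", date.today())` and alert once the count drops below an internal readiness buffer.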


EU Hosting and Role-Specific Compliance Implications

Data residency considerations differ significantly between providers and deployers:

Providers developing AI systems classified as high-risk under Annex III must comply with Art.10 data governance, which imposes requirements on training, validation, and testing datasets: relevance, representativeness, freedom from errors, completeness. If training data includes personal data, Art.10(5) restricts processing to what is "strictly necessary" — with specific safeguards for sensitive categories.

Hosting training infrastructure in the EU simplifies Art.10 compliance: data does not cross EU borders during training, GDPR Chapter V transfer restrictions do not apply, and audit trails remain under EU jurisdiction.

Deployers using high-risk AI systems must implement Art.26 human oversight, which requires logging and monitoring capabilities. For cloud-based AI deployments, ensuring that inference infrastructure and logs are hosted within the EU eliminates ambiguity about which data protection regime applies to inference-time personal data processing.

EU-native hosting for both training infrastructure (provider) and inference infrastructure (deployer) produces a unified compliance posture across both the AI Act's data governance requirements and GDPR's data minimisation and transfer rules.


Key Takeaways

  1. Provider = developer who places AI on market under own brand. Fine-tuning triggers provider status. API integration does not.

  2. Deployer = user of third-party AI under their own responsibility. RAG, prompt engineering, and API integration are deployer patterns.

  3. Art.25 flips deployers to providers in three scenarios: own-name deployment, substantial modification, and use beyond intended purpose.

  4. Importers are EU entities bringing non-EU AI into the EU market — Art.23 imposes significant verification obligations.

  5. Roles are system-specific, not organisation-wide. You can simultaneously be a provider of one AI system and a deployer of another.

  6. August 2, 2026 is the compliance deadline for high-risk AI under Annex III. Providers need conformity assessments; deployers need human oversight implementation.


This guide covers Article 3(4)–(12) of Regulation (EU) 2024/1689 (EU AI Act). For the Article 3(1) AI system definition and the April 2026 Commission Guidelines on what qualifies as an AI system, see the EU AI Act Art.3(1) guide.