2026-04-22 · 16 min read

EU AI Act Art.26 Obligations of Deployers of High-Risk AI Systems: FRIA, Human Oversight, Log Retention, Worker Information, and the Art.26(10) Fine-Tuning Deemed-Provider Trigger (2026)

Article 26 of the EU AI Act is the most operationally dense provision in the supply chain obligations chapter. While providers bear the primary design-time compliance burden (Art.16-22), Article 26 establishes that deployers — the organisations and individuals who put high-risk AI systems into use for a specific intended purpose — carry a substantial and parallel compliance stack that cannot be delegated upstream.

Article 26 applies to any natural or legal person who deploys a high-risk AI system under their own authority in a professional context. It does not require the deployer to have developed the system: purchasing, licensing, or otherwise acquiring a third-party high-risk AI system and integrating it into a business process activates the full Art.26 obligation set. The eleven paragraphs of Article 26 cover organisational governance, human oversight, operational monitoring, incident reporting, log management, worker transparency, fundamental rights impact assessment, EU database registration, a fine-tuning transformation rule, and natural-person disclosure.

Understanding Art.26 is mandatory for enterprise users of high-risk AI in HR, credit scoring, critical infrastructure, education, law enforcement adjacent contexts, and any other Annex III-listed deployment category.

The Position of Art.26 in the Supply Chain Framework

The EU AI Act distributes compliance obligations across five roles: providers, who design and place systems on the market (Art.16-22); authorised representatives of non-EU providers (Art.22); importers (Art.23); distributors (Art.24); and deployers, who use systems under their own authority (Art.26).

A single organisation can hold multiple roles simultaneously. A bank that licenses an AI credit-scoring system from a third party is a deployer under Art.26. If that bank then fine-tunes the model on its own data — potentially shifting the intended purpose or architecture — it may become a deemed provider under Art.25(1)(b) and acquire the full Art.16-22 obligation stack on top of its Art.26 obligations.

Art.26 obligations are not satisfied by contract: a deployer cannot agree with its provider that the provider will "handle compliance." Art.26 assigns obligations directly to deployers by law. Contracts can allocate information flows and cooperation duties, but they cannot reallocate the statutory obligations Art.26 places on deployers.

Art.26(1): Organisational and Technical Measures for Proper Use

The foundational obligation. Deployers must take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions for use that accompany those systems.

This is a process obligation, not a documentation obligation. Simply possessing the instructions is insufficient; the deployer must operationalise them. What constitutes "appropriate" measures depends on the deployment context, the system's risk profile, and the scale of operation — but the obligation creates a clear expectation of documented governance: assigned ownership for each system, operating procedures that reflect the instructions for use, and periodic checks that actual use stays within the permitted deployment envelope.

The phrase "in accordance with the instructions for use" is significant. If the provider's instructions define the permitted user population as adults over 18 in employment contexts, a deployer who routes the system's outputs to affect decisions about minors or in welfare contexts is operating outside the instructions and outside the Art.26(1) mandate. That gap creates both regulatory exposure and potential Art.25(1)(b) intended-purpose-change risk.

Art.26(2): Human Oversight — Assignment and Competence

Deployers must assign the task of human oversight to natural persons — specifically to persons who have the competence, authority, and resources necessary to perform that oversight.

This paragraph operationalises the Art.14 human oversight requirements that providers must build into systems at design time. Art.14 requires providers to design systems that enable natural persons to oversee operation, intervene, and halt the system. Art.26(2) requires deployers to actually put those capabilities to use by designating specific individuals.

Three conditions must be met for each oversight designee:

  1. Competence: The person must understand what the system does, what its outputs mean, and what the system cannot do reliably. Technical literacy is not required, but functional literacy specific to the system's domain is. A credit officer assigned to oversee an AI credit-scoring system must understand credit risk fundamentals, how the system generates scores, and what the score does not capture.

  2. Authority: The person must be empowered to intervene — to override the system's output, flag a case for manual review, suspend operation, or escalate. Oversight without authority is compliance theatre. The organisational structure must support this: oversight persons must not face incentive systems that punish overrides.

  3. Resources: Time, information, and tooling. Human oversight is not credible if the designated person is reviewing 500 AI decisions per hour, which leaves barely seven seconds per case. The resourcing question is one regulators will examine.

The combination of these three requirements means that Art.26(2) is as much an HR and organisational design question as a technology question; the sketch below shows one way to record and sanity-check an oversight assignment.
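
To make the three conditions auditable, some deployers record each oversight designation as a structured artefact and run a basic capacity check against it. A minimal sketch, assuming illustrative field names (nothing below comes from the Act or from any provider's documentation):

from dataclasses import dataclass

@dataclass
class OversightAssignment:
    system_name: str
    designee: str
    domain_training_completed: bool     # competence: system- and domain-specific training done
    can_override_output: bool           # authority: empowered to override, escalate, or suspend
    expected_cases_per_hour: int        # resources: realistic workload for this person
    min_review_seconds_per_case: int    # resources: time a genuine review of one case needs

    def issues(self) -> list[str]:
        problems = []
        if not self.domain_training_completed:
            problems.append("Competence: required training not completed")
        if not self.can_override_output:
            problems.append("Authority: designee cannot override or suspend the system")
        # Capacity check: seconds actually available per case vs. time a real review needs
        seconds_available = 3600 / max(self.expected_cases_per_hour, 1)
        if seconds_available < self.min_review_seconds_per_case:
            problems.append(
                f"Resources: {seconds_available:.0f}s available per case, "
                f"{self.min_review_seconds_per_case}s needed"
            )
        return problems

assignment = OversightAssignment(
    system_name="credit-scoring-v3", designee="jane.doe",
    domain_training_completed=True, can_override_output=True,
    expected_cases_per_hour=500, min_review_seconds_per_case=60,
)
print(assignment.issues())  # flags the resourcing gap described above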

Art.26(3): Information and Training for Human Oversight Personnel

Deployers must ensure that the persons assigned to human oversight receive appropriate training. This requirement links to the provider's obligation under Art.16(g) to provide instructions for use that contain the information necessary to enable oversight.

The training obligation is ongoing, not one-time. If the system is updated, even when the update comes from the provider, deployers must ensure oversight personnel are retrained on material changes. If the system's known failure modes, edge cases, or confidence thresholds change, that information must flow to the humans responsible for overseeing outputs.

Practically, Art.26(3) means deployers should maintain training records for each oversight designee, keep training materials aligned with the system version actually deployed, and define triggers for retraining when the provider ships material changes.
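
One lightweight way to operationalise the retraining trigger is to tie training records to system versions and flag designees whose training predates a material change. A minimal sketch under assumed, illustrative record structures:

from dataclasses import dataclass

@dataclass
class TrainingRecord:
    designee: str
    trained_on_version: str
    trained_on_date: str  # ISO date string

def needs_retraining(record: TrainingRecord, deployed_version: str,
                     material_change_versions: set[str]) -> bool:
    """Flag retraining when the deployed version differs from the version the designee
    was trained on and the new release has been classified as a material change."""
    if record.trained_on_version == deployed_version:
        return False
    return deployed_version in material_change_versions

# Example: provider ships v2.1 with changed confidence thresholds
record = TrainingRecord("jane.doe", trained_on_version="2.0", trained_on_date="2025-11-03")
print(needs_retraining(record, "2.1", material_change_versions={"2.1"}))  # True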

The training requirement intersects with employment law in ways specific to each Member State. In jurisdictions with strong works council rights (Germany, Netherlands, Austria), training programmes may require consultation before implementation.

Art.26(4): Compliance with Instructions for Use

Deployers must use the high-risk AI system in accordance with its instructions for use. This reinforces Art.26(1) at the individual use-act level: not just governance structures, but each deployment decision and each operational use of the system must conform to what the provider has specified.

Instructions for use are a provider obligation under Art.13 (transparency and information provision) and Art.16(d). They constitute the legal parameters of the permitted deployment envelope. Operating outside them — for instance, using a fraud detection system trained on retail banking patterns to assess insurance fraud claims — is a breach of Art.26(4) and may simultaneously trigger Art.25(1)(c) if the change of purpose is material enough to take a system that was not classified as high-risk into high-risk territory.

For deployers, the practical implication is due diligence before deployment (a minimal scripted version of these checks is sketched after the list):

  1. Read the instructions for use before contracting and before deployment
  2. Verify that the intended use case falls within the specified intended purpose
  3. Identify any use restrictions, excluded populations, or prohibited configurations
  4. Document that the deployment decision was made with reference to those parameters
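
Steps 2 and 3 lend themselves to a structured comparison between the provider's stated parameters and the planned deployment. A minimal sketch, assuming the deployer extracts these parameters manually from the instructions for use (the Act does not prescribe any machine-readable format for them):

from dataclasses import dataclass

@dataclass
class InstructionsForUse:
    intended_purpose: str
    permitted_populations: set[str]
    excluded_configurations: set[str]

@dataclass
class PlannedDeployment:
    use_case: str
    target_populations: set[str]
    configuration: str

def conformance_gaps(ifu: InstructionsForUse, plan: PlannedDeployment) -> list[str]:
    """Return the ways the planned deployment falls outside the provider's envelope."""
    gaps = []
    if plan.use_case != ifu.intended_purpose:
        gaps.append(f"Use case '{plan.use_case}' differs from intended purpose '{ifu.intended_purpose}'")
    outside = plan.target_populations - ifu.permitted_populations
    if outside:
        gaps.append(f"Populations outside the permitted scope: {sorted(outside)}")
    if plan.configuration in ifu.excluded_configurations:
        gaps.append(f"Configuration '{plan.configuration}' is expressly excluded")
    return gaps

ifu = InstructionsForUse("recruitment screening", {"adult applicants"}, {"fully automated rejection"})
plan = PlannedDeployment("recruitment screening", {"adult applicants", "minors"}, "human-in-the-loop")
print(conformance_gaps(ifu, plan))  # flags the population outside the permitted scope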

Art.26(5): Operational Monitoring, Risk Detection, and Serious Incident Reporting

Deployers must monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform the provider or distributor about risks, potential risks, or serious incidents detected during operation.

This is the deployer's surveillance obligation — the deployment-phase counterpart to the provider's post-market monitoring system under Art.72. While providers monitor aggregate performance across deployments, deployers observe the system as it actually operates in their specific context: on their data, with their users, in their business process.

Art.26(5) has two components:

Operational monitoring: An ongoing process to detect when the system is producing outputs that deviate from expected performance, when the input data has shifted in ways the system may not handle well, or when real-world conditions have changed since the instructions for use were written. This requires defining what "normal" looks like and establishing thresholds for anomaly detection.

Serious incident reporting: When a deployer identifies a serious incident — an incident or malfunction that directly or indirectly leads to death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment — they must report to the provider or distributor and, in certain cases, directly to market surveillance authorities. The definition of "serious incident" in Art.3(49) and the reporting framework in Art.73 apply. Deployers do not merely have an internal response obligation; they have an information obligation that feeds the provider's post-market monitoring and the regulator's market surveillance.

The reporting chain typically runs: Deployer → Provider → Market Surveillance Authority. But in direct market relationship contexts, deployers may need to report to national authorities without waiting for provider intermediation.
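
The two components can be wired into a single triage step that decides who is notified. A minimal sketch, assuming illustrative severity categories rather than the Act's exact legal tests:

from enum import Enum

class Finding(Enum):
    ANOMALY = "anomaly"            # performance drift: handle internally and log
    RISK = "risk"                  # risk or potential risk: inform provider or distributor
    SERIOUS_INCIDENT = "serious"   # Art.3(49)-type event: provider and, where required, the authority

def notification_targets(finding: Finding, direct_authority_reporting: bool = False) -> list[str]:
    """Decide notification recipients for a monitoring finding (sketch, not legal advice)."""
    if finding is Finding.SERIOUS_INCIDENT:
        targets = ["provider"]
        if direct_authority_reporting:
            targets.append("market surveillance authority")
        return targets
    if finding is Finding.RISK:
        return ["provider or distributor"]
    return ["internal incident log"]

print(notification_targets(Finding.SERIOUS_INCIDENT, direct_authority_reporting=True))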

Art.26(6): Log Retention

Where deployers have access to the logs automatically generated by the high-risk AI system — a logging capability that providers must build in under Art.12 — they must keep those logs, to the extent the logs are under their control, for a period appropriate to the system's intended purpose and for at least six months, unless applicable Union or national law (notably data protection law) provides otherwise.

Several important qualifications apply:

Conditional obligation: The duty to retain applies only to the extent the logs are under the deployer's control. If the system architecture does not expose logs to deployers, or if the system is operated as a cloud service where logs remain in the provider's infrastructure, deployers cannot retain what they cannot access. The practical expectation, however, is that deployers will request log access as part of their procurement and contracting process, and providers who refuse to provide any log access to deployers will face scrutiny.

Retention period hierarchy: Six months is a floor, not a default ceiling. Where the provider's instructions for use or applicable Union or national sectoral rules call for a longer period (a three-year record-keeping requirement, for instance), the deployer should retain for the longer period. The six-month minimum applies where nothing more specific is in play.

Data protection interface: Logs of high-risk AI system operation may contain personal data — input data, identifiers, decision outputs tied to individuals. Retention obligations under Art.26(6) must be reconciled with GDPR data minimisation and storage limitation principles (Art.5(1)(c) and (e) GDPR). The result is a tension between regulatory log-retention mandates and GDPR storage limitation that deployers must manage through documented retention schedules, each with its legal basis recorded.

Audit readiness: The primary regulatory purpose of log retention is to enable market surveillance authorities and other competent authorities to reconstruct what happened when a serious incident occurs or a complaint is filed. Logs must be retained in a format that is actually usable for this purpose — not archived in proprietary formats with no read path.
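
The retention decision can be captured as a small policy function so that every system has an explicit, documented period rather than an implicit default. A minimal sketch, assuming the deployer records provider guidance and sectoral rules as plain inputs (shorter periods mandated by data protection law would need separate legal review):

from datetime import timedelta
from typing import Optional

SIX_MONTH_FLOOR = timedelta(days=183)  # Art.26(6) minimum where nothing more specific applies

def retention_period(provider_specified: Optional[timedelta],
                     sectoral_requirement: Optional[timedelta]) -> timedelta:
    """Resolve the log retention period as the longest applicable requirement, never below the floor."""
    candidates = [SIX_MONTH_FLOOR]
    if provider_specified is not None:
        candidates.append(provider_specified)
    if sectoral_requirement is not None:
        candidates.append(sectoral_requirement)
    return max(candidates)

# Provider recommends three years, no sectoral rule: retain three years
print(retention_period(timedelta(days=3 * 365), None).days)  # 1095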

Art.26(7): Worker Information Obligations

Before deploying a high-risk AI system at the workplace, deployers who are employers must inform workers' representatives — where they exist under national or EU law — and the affected workers themselves about the deployment. The obligation applies to any high-risk AI system used at the workplace; in practice it is most relevant to the employment-related entries in Annex III (Point 4: AI systems used in recruitment, performance evaluation, promotion, task allocation, monitoring, termination).

The worker information obligation reflects the EU AI Act's recognition that AI deployment in employment contexts carries specific dignity and due process concerns. Workers subject to AI-assisted decisions about their employment have a legitimate interest in knowing that such systems are being used, what they are being used for, and what the decision chain looks like.

The obligation is to inform, not to obtain consent or to consult (though national employment law may impose stronger requirements). However, "inform" must mean genuine communication, not a clause buried in an employment contract or a page in an employee handbook that nobody reads. Market surveillance authorities will likely assess whether the information was communicated in a way that employees could reasonably understand.

The timing of the obligation — before deployment, not after — means deployers must plan information campaigns, works council briefings, and communication materials as part of their deployment project, not as an afterthought.

Works council intersection: In co-determination jurisdictions (Germany, Austria, Netherlands, Sweden), works council consultation requirements may be triggered by AI deployment decisions independently of Art.26(7). Deployers in those jurisdictions face a layered obligation set: Art.26(7) information plus national co-determination rights. These must be sequenced correctly — typically co-determination consultation must complete before deployment.
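
Because the information duty and any co-determination process must both complete before go-live, some deployers treat them as explicit gates in the deployment plan. A minimal sketch with assumed milestone names:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkerInfoMilestones:
    workers_informed_on: Optional[date]
    representatives_informed_on: Optional[date]
    codetermination_completed_on: Optional[date]
    codetermination_required: bool

def go_live_blockers(m: WorkerInfoMilestones, go_live: date) -> list[str]:
    """Return blocking issues for the planned go-live date (empty list means cleared)."""
    blockers = []
    if m.workers_informed_on is None or m.workers_informed_on >= go_live:
        blockers.append("Workers not informed before deployment (Art.26(7))")
    if m.representatives_informed_on is None or m.representatives_informed_on >= go_live:
        blockers.append("Workers' representatives not informed before deployment (Art.26(7))")
    if m.codetermination_required and (
        m.codetermination_completed_on is None or m.codetermination_completed_on >= go_live
    ):
        blockers.append("Co-determination consultation not completed before deployment")
    return blockers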

Art.26(8): Fundamental Rights Impact Assessment (FRIA)

The FRIA is the most significant and contested provision of Art.26. Certain deployers — public bodies and private bodies carrying out public functions — must conduct a Fundamental Rights Impact Assessment before deploying certain high-risk AI systems.

Who must conduct a FRIA: The FRIA obligation applies to deployers that are public authorities or bodies governed by public law, and to private entities that perform public functions or provide public services on behalf of such bodies (education, healthcare, social services, and comparable contexts).

For which systems: The FRIA applies to systems in the Annex III categories most likely to affect fundamental rights: AI in critical infrastructure, education, employment, essential services access, law enforcement, migration and asylum, administration of justice.

What the FRIA must assess:

  1. The specific intended purpose of the deployment
  2. The geographic scope and the time period over which the system will be used
  3. Categories of natural persons who will be affected
  4. The specific risks to fundamental rights likely to arise, having regard to the intended purpose
  5. Whether the deployer intends to use the system to make decisions affecting those persons, or whether it will serve as decision support
  6. The measures foreseen to address identified risks, including safeguards, technical measures, and human oversight

FRIA versus DPIA: The FRIA is not a GDPR Data Protection Impact Assessment (DPIA), but the two are closely related where the AI system processes personal data. Recital 97 of the AI Act recognises that a DPIA under Art.35 GDPR may serve the purposes of the FRIA if it covers the fundamental rights dimensions in addition to data protection. In practice, deployers who already conduct DPIAs should expand them to incorporate FRIA requirements rather than running separate parallel processes.

Registration of FRIA: Deployers who complete a FRIA must notify their market surveillance authority and register the FRIA outcome in the EU database established under Art.71. This creates a public record of the assessment — distinguishing the FRIA from an internal risk assessment exercise.

Timeline: The FRIA must be completed before deployment. It is not a one-time exercise: material changes to the deployment scope, the affected population, or the system's capabilities must trigger a review.

Practical FRIA structure:

FRIA — High-Risk AI System Deployment

System: [Name, version, provider]
Deployer: [Organisation name, role, contact]
Intended purpose: [Specific deployment use case]
Deployment period: [Start date, review dates, end date or open-ended with review cycle]
Geographic scope: [Regions, facilities, populations]

Category of affected persons:
  - [Primary affected group, estimated scale]
  - [Secondary affected groups]
  - [Vulnerable populations? Y/N — if Y, describe]

Fundamental rights at stake:
  - [Right to non-discrimination — Art.21 EU Charter]
  - [Right to privacy — Art.7-8 EU Charter]
  - [Right to an effective remedy — Art.47 EU Charter]
  - [Other applicable rights]

Risk assessment:
  - Risk 1: [Description] — Likelihood [H/M/L] — Severity [H/M/L]
  - Risk 2: [Description] — Likelihood [H/M/L] — Severity [H/M/L]

Mitigation measures:
  - [Measure 1 addressing Risk 1]
  - [Measure 2 — Human oversight designation and competence requirements]
  - [Measure 3 — Appeal and redress mechanism for affected persons]

Residual risk: [Description of risk remaining after mitigation]
Conclusion: [Deployment approved / Deployment approved with conditions / Deployment not approved]

Review date: [Date or trigger condition]
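
The FRIA review condition from the Timeline paragraph above can be encoded rather than left to memory. A minimal sketch, with assumed trigger categories that mirror the template fields:

FRIA_REVIEW_TRIGGERS = {
    "scope_extended",        # new regions, facilities, or deployment period
    "population_changed",    # new or additional categories of affected persons
    "capability_changed",    # provider update that alters outputs or the decision role
    "mitigation_failed",     # a foreseen safeguard proved ineffective in operation
}

def fria_review_needed(change_events: set[str]) -> bool:
    """True if any recorded change event matches a trigger that should reopen the FRIA."""
    return bool(change_events & FRIA_REVIEW_TRIGGERS)

print(fria_review_needed({"capability_changed"}))  # True
print(fria_review_needed({"ui_restyle"}))          # False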

Art.26(9): EU Database Registration

Deployers of certain high-risk AI systems used in public contexts must register their deployment in the EU database established under Art.71. The registration obligation applies when the deployer is a public authority or body and uses the system in one of the Annex III categories that require registration.

The EU database established under Art.71 creates public transparency about which high-risk AI systems are being deployed in public interest contexts. Registration serves two purposes: public accountability and market surveillance efficiency.

Registration data includes the identity and contact details of the deployer, identification of the high-risk AI system being deployed, and a summary of the fundamental rights impact assessment where one was required.

For deployers who are private entities, registration is not generally required unless they perform a public function that triggers the registration obligation under national implementation law.

Art.26(10): Fine-Tuning and the Deemed-Provider Trigger

Art.26(10) is the provision that closes the deployer-to-provider transformation loop. It specifies that if a deployer fine-tunes a high-risk AI system — adjusting it on the deployer's own data, extending its capabilities, or otherwise modifying it in a way that could affect its properties — the deployer may become a deemed provider under Art.25 and inherit the full Art.16-22 obligation stack.

The operative condition is whether the fine-tuning constitutes a "substantial modification" as defined in Art.3(23): a modification of a high-risk AI system that affects its compliance with essential requirements or changes its intended purpose. Not all fine-tuning crosses this threshold. Adaptation of an NLP model to domain-specific vocabulary without changing its decision categories probably does not. Retraining a credit-scoring model on a new population with different demographic characteristics probably does.

Art.26(10) creates a decision point that deployers must work through explicitly:

from enum import Enum
from dataclasses import dataclass

class FinetuningRisk(Enum):
    ADAPTATION_ONLY = "adaptation_only"          # vocabulary, style — no deemed-provider risk
    PERFORMANCE_TUNING = "performance_tuning"    # accuracy improvement, same task — borderline
    SUBSTANTIAL_MODIFICATION = "substantial"     # new population, new objective — deemed provider


@dataclass
class FinetuningAssessment:
    modification_type: str
    affects_intended_purpose: bool
    affects_essential_requirements: bool
    changes_decision_population: bool
    changes_output_categories: bool
    changes_training_data_scope: bool

    def classify(self) -> FinetuningRisk:
        if (
            self.affects_intended_purpose
            or self.affects_essential_requirements
            or self.changes_decision_population
            or self.changes_output_categories
        ):
            return FinetuningRisk.SUBSTANTIAL_MODIFICATION
        if self.changes_training_data_scope:
            return FinetuningRisk.PERFORMANCE_TUNING
        return FinetuningRisk.ADAPTATION_ONLY

    def compliance_path(self) -> str:
        risk = self.classify()
        if risk == FinetuningRisk.SUBSTANTIAL_MODIFICATION:
            return (
                "Deemed provider under Art.25(1)(c). Full Art.16-22 obligations apply. "
                "Must: establish QMS, create/update technical documentation, "
                "conduct new conformity assessment, update DoC, register in EU database as provider, "
                "notify provider (Art.25(3)), establish PMSP (Art.72), "
                "and continue Art.26 deployer obligations for downstream use."
            )
        if risk == FinetuningRisk.PERFORMANCE_TUNING:
            return (
                "Borderline case. Conduct formal substantial-modification assessment. "
                "Document conclusion with legal and technical rationale. "
                "Notify provider per Art.25(3) in any case. "
                "If conclusion is not-substantial: maintain Art.26 deployer obligations only."
            )
        return (
            "Adaptation only. Deemed-provider trigger not activated. "
            "Maintain Art.26 deployer obligations. "
            "Document adaptation rationale for market surveillance purposes."
        )

The critical practical implication: enterprise AI teams who fine-tune models on internal data must conduct and document an Art.26(10) / Art.25(1)(b) assessment as part of their MLOps process. This assessment should happen before fine-tuning, not after, because the compliance path for substantial modification requires pre-deployment actions (conformity assessment, QMS establishment) that cannot be retroactively satisfied.
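
As a usage illustration of the sketch above, a deployer retraining a credit-scoring model on a new customer segment might record the following (illustrative values, not a legal determination):

assessment = FinetuningAssessment(
    modification_type="retraining on new customer segment",
    affects_intended_purpose=False,
    affects_essential_requirements=False,
    changes_decision_population=True,    # new demographic population
    changes_output_categories=False,
    changes_training_data_scope=True,
)
print(assessment.classify())         # FinetuningRisk.SUBSTANTIAL_MODIFICATION
print(assessment.compliance_path())  # deemed-provider path under Art.25(1)(b)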

Art.26(11): Transparency to Natural Persons

Deployers of high-risk AI systems must inform natural persons who are subject to decisions made using those systems. The information obligation is twofold: that an AI system is being used in the decision process, and that the person has the right to an explanation of the individual decision where applicable.

This provision works in conjunction with Art.86 (right to explanation of individual decisions) and with Art.50 (transparency obligations for certain AI systems). For high-risk AI in employment, credit, essential services, and other Annex III contexts, affected persons have a right to know:

  1. That their case is being processed using an AI system
  2. That the AI system is classified as high-risk under the EU AI Act
  3. That they can request a meaningful explanation of any decision that significantly affects them

The transparency obligation must be operationalised at the point where the decision is communicated to the natural person. This typically means building the disclosure into the decision letter, portal notification, or adverse-action notice itself, together with a clearly signposted channel for requesting an explanation under Art.86, as sketched below.

The combination of Art.26(11) disclosure and Art.86 explanation rights creates a meaningful accountability mechanism for individuals — assuming deployers implement both in good faith rather than through generic disclosures that technically satisfy the letter but provide no actionable information.
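
A minimal sketch of such a disclosure fragment, assuming the deployer drafts its own wording (the Act prescribes the duty to inform, not any specific text):

def ai_use_disclosure(system_purpose: str, explanation_contact: str) -> str:
    """Compose the AI-use disclosure appended to a decision notice (illustrative wording)."""
    return (
        f"This decision was prepared with the support of an AI system classified as "
        f"high-risk under the EU AI Act, used for {system_purpose}. "
        f"You may request a meaningful explanation of how the decision affecting you "
        f"was reached by contacting {explanation_contact}."
    )

print(ai_use_disclosure("assessing creditworthiness", "ai-decisions@example.org"))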

Art.26 Obligation Matrix by Paragraph

Paragraph | Obligation | Who Must Act | When
Art.26(1) | Organisational measures for proper use | All deployers | Before and during deployment
Art.26(2) | Assign competent oversight persons | All deployers | Before deployment
Art.26(3) | Train oversight persons | All deployers | Before deployment + ongoing
Art.26(4) | Use per instructions for use | All deployers | Each use act
Art.26(5) | Monitor operation + report incidents | All deployers | Ongoing
Art.26(6) | Retain logs (where accessible) | All deployers with log access | 6 months minimum
Art.26(7) | Inform workers and representatives | Deployers of employment-AI | Before deployment
Art.26(8) | Conduct FRIA + notify authority | Public bodies + private public function | Before deployment
Art.26(9) | Register in EU database | Public bodies in scope | Before deployment
Art.26(10) | Assess fine-tuning for deemed-provider | Deployers who fine-tune | Before fine-tuning
Art.26(11) | Disclose AI use to natural persons | All deployers | At point of decision communication

Cross-Reference Map: Art.26 in the AI Act Architecture

Art.26 × Art.9 (Risk Management System): The risk management system obligation formally falls on providers (Art.9), as does the quality management system (Art.17), but deployers must implement risk management measures appropriate to their context. Art.26(1) and (5) together create a deployer-side risk management expectation that mirrors the provider obligation at the operational level.

Art.26 × Art.14 (Human Oversight): Art.14 requires providers to design human oversight capabilities into systems. Art.26(2)-(3) require deployers to use those capabilities by designating and training oversight persons. The two articles are complementary and fail together if either is neglected: a system with no oversight mechanism cannot be overseen; an organisation with oversight-capable systems but no designated persons cannot fulfil Art.26(2).

Art.26(8) × Art.35 GDPR (DPIA): Where the AI system processes personal data, FRIA and DPIA requirements overlap. Recital 97 permits a consolidated assessment if it covers both data protection and broader fundamental rights dimensions. Deployers who already have DPIA processes should expand them rather than running parallel assessments.

Art.26(10) × Art.25 (Deemed Provider): Fine-tuning that constitutes substantial modification converts a deployer into a deemed provider, stacking Art.16-22 obligations on top of Art.26 obligations. The transformation assessment must be documented.

Art.26 × Art.72 (Post-Market Monitoring): Providers maintain post-market monitoring systems; deployers feed them through Art.26(5) incident reporting. The information flow is: deployer observation → incident notification to provider → provider PMSP → regulator reporting where applicable.

Art.26 × Art.86 (Right to Explanation): Art.86 creates an individual right to explanation of significant decisions made using high-risk AI. Art.26(11) requires deployers to disclose AI use. Together they create the affected person's information and recourse framework.

Art.26 × Art.88 GDPR (Employment Data Processing): For deployers in employment contexts, processing of employment-related data through the AI system must comply with Art.88 GDPR (processing in the context of employment). Member State employment data protection rules add a further layer.

Art.26 × Art.99 (Penalties): Deployers who fail to comply with Art.26 obligations are subject to fines of up to €15,000,000 or 3% of total worldwide annual turnover (whichever is higher). That sits below the top tier reserved for prohibited AI practices (up to €35,000,000 or 7%), and it is the same tier that applies to providers who breach their own obligations, but €15m/3% represents material regulatory exposure for any organisation.

Art.26 Implementation Checklist

Organisational Governance

  - Inventory every high-risk AI system in use and the instructions for use that accompany it
  - Assign internal ownership for Art.26 compliance per system and document the measures taken under Art.26(1)

Human Oversight

  - Designate named oversight persons per system with documented competence, authority, and workload headroom
  - Confirm the organisational structure allows overrides without penalty

Operational Monitoring

  - Define expected performance baselines and anomaly thresholds from the instructions for use
  - Establish an incident triage path to the provider, distributor, and, where required, the market surveillance authority

Log Retention

  - Secure log access contractually where the architecture allows it
  - Document the retention period per system (minimum six months) and reconcile it with GDPR storage limitation

Worker Information (Employment AI)

  - Inform workers' representatives and affected workers before deployment
  - Sequence national co-determination consultation ahead of go-live where it applies

FRIA (Public Bodies and Private Public Functions)

  - Complete the FRIA before deployment and notify the market surveillance authority
  - Define the triggers that reopen the assessment

EU Database Registration (Public Bodies)

  - Register the deployment in the Art.71 EU database before putting the system into use

Fine-Tuning Assessment (If Applicable)

  - Run the substantial-modification assessment before any fine-tuning and document the conclusion
  - If deemed provider: plan the Art.16-22 compliance path before deployment of the modified system

Natural Person Transparency

  - Build the AI-use disclosure and the Art.86 explanation channel into decision communications

Summary

Article 26 is the operational compliance core for deployers of high-risk AI systems. Its eleven paragraphs create obligations across governance, human oversight, monitoring, log management, worker transparency, fundamental rights assessment, database registration, fine-tuning transformation rules, and natural person disclosure. None of these obligations can be outsourced to the provider through contract; all of them require deployer-side action.

The most demanding provisions for organisations new to AI regulation are Art.26(2)-(3) (human oversight assignment and training, which require genuine organisational design rather than nominal designation), Art.26(8) (the FRIA for public bodies, which requires substantive analysis rather than a checkbox exercise), and Art.26(10) (the fine-tuning deemed-provider trigger, which is frequently overlooked by enterprise AI teams and can result in unintended provider obligations).

The supply chain chapter closes with Art.26 as the deployer obligation provision and connects back through Art.25 for the transformation cases and forward to Art.27-31 for the notified body and conformity assessment framework. Deployers who have structured their Art.26 compliance well — with documented oversight assignments, FRIA processes where applicable, log retention disciplines, and fine-tuning assessment procedures — are also well-positioned for market surveillance inspections and for the broader GDPR-AI Act integration that will define the EU AI compliance landscape through 2026 and beyond.
