2026-04-22 · 11 min read

EU AI Act Art.4: AI Literacy Obligations for Providers and Deployers — Developer Compliance Guide (2026)

Most EU AI Act commentary focuses on risk categories, conformity assessments, and the August 2026 high-risk deadline. Article 4 rarely appears in that conversation — which is exactly why it creates compliance risk.

Art.4 applies to every provider and every deployer, regardless of risk classification. It has been in force since 2 August 2025. And unlike the technical documentation or conformity assessment requirements, there is no minimum threshold — a company deploying a low-risk chatbot carries the same literacy obligation as one building a high-risk Annex III system.

This guide covers:

  1. The statutory text of Art.4 and its operative elements
  2. The applicability timeline (Art.4 applies since 2 August 2025)
  3. Who is covered: staff, third parties, and affected persons
  4. What "sufficient AI literacy" means in practice
  5. A proportional, tiered training approach
  6. Documentation requirements and a Python compliance tracker
  7. Enforcement context and a compliance checklist

The Statutory Text

Article 4 of Regulation (EU) 2024/1689 reads:

"Providers and deployers of AI systems shall take measures to ensure, to the best of their ability, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used."

Recital 20 adds interpretive context:

"AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems."

Six elements govern the obligation:

Element             Text                                                  Operative meaning
Subject             "Providers and deployers"                             Both roles, no threshold
Standard            "to the best of their ability"                        Effort-based, not strict liability
Target              "staff and other persons"                             Employees plus third parties acting on their behalf
Condition           "dealing with the operation and use"                  Covers direct users of the AI system
Calibration         "taking into account technical knowledge..."          Proportional to the individual's role
End-user awareness  "considering persons on whom AI systems are used"     Includes impact on affected individuals

Applicability Timeline

Art.4 does not wait for the August 2026 high-risk deadline.

Under Art.113(3), Art.4 became applicable 12 months after entry into force — on 2 August 2025. Organisations that have been deploying AI systems for internal workflows, customer interactions, or automated decisions since then are already within scope.

2 August 2024    — EU AI Act enters into force
2 February 2025  — Prohibited AI (Art.5) applicable
2 August 2025    — Art.4 AI literacy + GPAI obligations applicable  ← NOW
2 August 2026    — High-risk AI (Annex III) full application
2 August 2027    — High-risk AI (Annex II, product safety) full application

Art.4 is not contingent on your system being high-risk. If you deploy a recommendation engine, a customer support chatbot, or an HR screening tool — you have had an AI literacy obligation for your staff since August 2025.
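The milestone dates above can be encoded as a small lookup, so a compliance script can report which obligations already apply on a given date — a minimal sketch using only the dates from the timeline:

```python
from datetime import date

# Art.113 transition milestones (provision -> date it becomes applicable)
MILESTONES = {
    "Entry into force": date(2024, 8, 2),
    "Art.5 prohibited practices": date(2025, 2, 2),
    "Art.4 AI literacy + GPAI obligations": date(2025, 8, 2),
    "Annex III high-risk (full application)": date(2026, 8, 2),
    "Annex II product-safety high-risk": date(2027, 8, 2),
}

def applicable_obligations(as_of: date) -> list[str]:
    """Return every milestone already applicable on the given date."""
    return [name for name, d in MILESTONES.items() if as_of >= d]

# On this article's publication date, Art.4 applies but Annex III does not yet
print(applicable_obligations(date(2026, 4, 22)))
```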


Who is Covered

The "and other persons" Extension

Art.4 does not limit the literacy obligation to direct employees. "Other persons dealing with the operation and use of AI systems on their behalf" captures:

  - External contractors who operate the AI system under the deployer's instructions
  - Agency and temporary staff who use the system in their day-to-day work
  - Outsourced service providers running AI-assisted processes for the organisation

A deployer cannot discharge the Art.4 obligation by outsourcing AI operations. If a third party operates an AI system on your behalf, you remain responsible for their AI literacy.

The "dealing with operation and use" Gate

Not every person in the organisation is covered — only those who interact with the AI system operationally. The phrase "dealing with the operation and use" draws a boundary:

In scope:

  - Staff who operate, configure, or monitor the AI system
  - Staff who act on AI outputs in their work (loan officers, support agents, recruiters)
  - Supervisors who review or override AI-assisted decisions

Out of scope:

  - Staff in functions with no contact with the AI system or its outputs
  - Personnel who merely host or run the underlying compute infrastructure

The "persons on whom AI is used" Awareness Element

The final clause — "considering the persons or groups of persons on whom the AI systems are to be used" — introduces a dimension that goes beyond internal literacy. Staff must be aware of:

  - Which individuals or groups the system's outputs affect
  - How erroneous or biased outputs could harm those people
  - When an affected person's circumstances justify deviating from the AI recommendation

A deployer running a credit scoring AI must ensure that the loan officers reviewing AI recommendations understand how the model was trained, what demographic patterns it might encode, and when to override the output.


What "Sufficient AI Literacy" Means in Practice

The Regulation does not define AI literacy beyond Recital 20's reference to "necessary notions to make informed decisions." Regulatory guidance from ENISA and national competent authorities (particularly BfDI in Germany and CNIL in France) has begun filling this gap, but no harmonised standard exists as of April 2026.

Operationally, "sufficient AI literacy" should cover at least:

1. Understanding AI System Scope and Limitations

Staff must know:

  - What the system is designed to do and what falls outside its intended purpose
  - Its known limitations: training-data cut-offs, accuracy boundaries, failure modes
  - That outputs are probabilistic and can be confidently wrong

A customer service agent using an LLM-based response tool should know that the model can hallucinate, that it may not have current product information, and that its outputs require review before sending to customers.

2. Understanding When and How to Exercise Human Oversight

For high-risk systems under Art.14, human oversight is a legal requirement. For lower-risk systems, Art.4 literacy indirectly supports oversight quality. Staff should be able to:

  - Recognise outputs that warrant closer review before being acted on
  - Override or set aside an AI recommendation when the situation demands it
  - Distinguish routine automation from decisions that require human judgement

3. Understanding Bias and Fairness Implications

Recital 20 references AI literacy enabling users to "critically evaluate AI system outputs." For systems that produce decisions affecting individuals (scoring, classification, prioritisation), staff should understand:

  - That models can reproduce patterns from historical data, including discriminatory ones
  - Which groups the system might systematically disadvantage
  - How to recognise and escalate outputs that appear skewed across groups

4. Understanding Data Privacy Implications

AI systems processing personal data create GDPR obligations that intertwine with AI Act requirements. Staff dealing with AI systems should understand:

  - What personal data the system processes and on what lawful basis
  - That individuals have rights regarding automated decision-making (GDPR Art.22)
  - What must never be entered into the system, such as special-category data into a general-purpose chatbot

5. Understanding the Reporting Chain

Staff should know who to contact when they observe:

  - Unexpected, erroneous, or apparently biased outputs
  - Behaviour that deviates from the system's documented scope
  - Incidents with potential impact on affected individuals

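The five competency areas above can be captured as a minimal curriculum data structure for tracking purposes — the module identifiers and objectives below are illustrative, not terms from the Regulation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LiteracyModule:
    area: str       # competency area covered by the module
    objective: str  # what staff should be able to do afterwards

# Illustrative mapping of the five competency areas discussed above
CORE_CURRICULUM = [
    LiteracyModule("scope_and_limitations", "Describe what the system can and cannot do"),
    LiteracyModule("human_oversight", "Decide when to review, defer, or override an output"),
    LiteracyModule("bias_and_fairness", "Critically evaluate outputs that affect individuals"),
    LiteracyModule("data_privacy", "Handle personal data in line with GDPR obligations"),
    LiteracyModule("reporting_chain", "Escalate anomalous system behaviour to the right owner"),
]
```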

Proportionality: Not All Staff Need the Same Training

Art.4's "taking into account their technical knowledge, experience, education and training" clause creates an explicit proportionality mechanism. The regulation does not require every operator to complete an advanced ML engineering course.

A practical tiering approach:

Tier 1 — Operational Users (Minimal Literacy)

Scope: Staff who use AI-generated outputs as inputs to their work

Required literacy:

  - What the system does and its known limitations
  - When an output must be reviewed before use
  - Who to notify when something looks wrong

Appropriate format: 30–60 minute e-learning module, annual refresh

Tier 2 — System Operators and Supervisors (Intermediate Literacy)

Scope: Staff who configure, monitor, or make decisions based on AI outputs

Required literacy:

  - Everything in Tier 1, plus the system's configuration options and monitoring signals
  - How to interpret performance indicators and detect drift after system changes
  - The documented human oversight and override procedures for the system

Appropriate format: Half-day training, quarterly review of system changes

Tier 3 — Technical and Compliance Staff (Advanced Literacy)

Scope: Staff who build, fine-tune, audit, or maintain the AI system

Required literacy:

  - Model architecture, training data characteristics, and evaluation results
  - The applicable AI Act obligations (e.g. Arts. 9, 11, 13, 14, 72)
  - Bias testing, auditing, and incident response methodology

Appropriate format: Structured certification programme, ongoing regulatory updates
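The tier assignment itself can start from a keyword-based first pass over job titles — a rough sketch whose keyword lists are assumptions to be reviewed manually, not regulatory categories:

```python
# Hypothetical first-pass classifier from job title to literacy tier.
# Checked in order: technical roles first, then operator roles.
TIER_KEYWORDS = {
    "tier_3_technical": ("engineer", "scientist", "auditor", "compliance"),
    "tier_2_operator": ("manager", "supervisor", "analyst", "operator"),
}

def suggest_tier(role: str) -> str:
    role_lower = role.lower()
    for tier, keywords in TIER_KEYWORDS.items():
        if any(kw in role_lower for kw in keywords):
            return tier
    return "tier_1_operational"  # default: operational user of AI outputs

print(suggest_tier("Credit Risk Manager"))  # tier_2_operator
print(suggest_tier("ML Engineer"))          # tier_3_technical
print(suggest_tier("Loan Officer"))         # tier_1_operational
```

Every suggestion should be confirmed by a human reviewer; the classifier only seeds the role-mapping exercise.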


Documentation Requirements

Art.4 does not specify a documentation format, but national competent authorities conducting market surveillance under Art.74 can request evidence that literacy measures were taken. Without documentation, the "to the best of their ability" standard becomes indefensible.

Minimum documentation set:

  1. AI literacy policy — stating the organisation's approach, responsible owner, and review cycle
  2. Training records — per-employee log of completed training, date, and version of AI system covered
  3. Role mapping — which roles interact with which AI systems (supports proportionality argument)
  4. System-specific briefings — short written summary of each deployed AI system's scope, limitations, and oversight procedures
  5. Update log — records of when training was refreshed following system changes

Python Implementation: AI Literacy Compliance Tracker

from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum
from typing import Optional


class LiteracyTier(Enum):
    OPERATIONAL = "tier_1_operational"
    OPERATOR = "tier_2_operator"
    TECHNICAL = "tier_3_technical"


class ComplianceStatus(Enum):
    COMPLIANT = "compliant"
    EXPIRING_SOON = "expiring_soon"  # within 60 days
    OVERDUE = "overdue"
    NOT_TRAINED = "not_trained"


@dataclass
class AISystemProfile:
    system_id: str
    name: str
    risk_category: str  # "prohibited", "high_risk", "limited_risk", "minimal_risk", "gpai"
    description: str
    last_updated: date
    known_limitations: list[str] = field(default_factory=list)
    affected_groups: list[str] = field(default_factory=list)


@dataclass
class StaffMember:
    employee_id: str
    name: str
    role: str
    tier: LiteracyTier
    systems_in_scope: list[str]  # list of system_ids
    training_records: dict[str, date] = field(default_factory=dict)  # system_id -> completion date
    refresh_interval_days: int = 365


@dataclass
class LiteracyAssessment:
    employee_id: str
    system_id: str
    status: ComplianceStatus
    last_trained: Optional[date]
    next_due: Optional[date]
    gap_description: Optional[str]


class AILiteracyTracker:
    """
    Tracks Art.4 EU AI Act AI literacy compliance across an organisation.
    
    Supports the proportionality argument by maintaining tier-differentiated
    training records with system-specific refresh cycles.
    """

    def __init__(self):
        self.systems: dict[str, AISystemProfile] = {}
        self.staff: dict[str, StaffMember] = {}

    def register_system(self, system: AISystemProfile) -> None:
        self.systems[system.system_id] = system

    def register_staff(self, member: StaffMember) -> None:
        self.staff[member.employee_id] = member

    def record_training(
        self,
        employee_id: str,
        system_id: str,
        completion_date: date,
    ) -> None:
        if employee_id not in self.staff:
            raise ValueError(f"Unknown employee: {employee_id}")
        if system_id not in self.systems:
            raise ValueError(f"Unknown system: {system_id}")
        self.staff[employee_id].training_records[system_id] = completion_date

    def assess_compliance(
        self,
        employee_id: str,
        system_id: str,
        as_of: Optional[date] = None,
    ) -> LiteracyAssessment:
        as_of = as_of or date.today()
        member = self.staff.get(employee_id)
        system = self.systems.get(system_id)

        if not member or not system:
            raise ValueError("Unknown employee or system")

        if system_id not in member.systems_in_scope:
            return LiteracyAssessment(
                employee_id=employee_id,
                system_id=system_id,
                status=ComplianceStatus.NOT_TRAINED,
                last_trained=None,
                next_due=None,
                gap_description="System not in employee's scope — verify role mapping",
            )

        last_trained = member.training_records.get(system_id)

        if last_trained is None:
            return LiteracyAssessment(
                employee_id=employee_id,
                system_id=system_id,
                status=ComplianceStatus.NOT_TRAINED,
                last_trained=None,
                next_due=None,
                gap_description="No training record found — initial training required",
            )

        # Refresh cycle resets on system updates
        effective_base = max(last_trained, system.last_updated)
        next_due = effective_base + timedelta(days=member.refresh_interval_days)
        days_until_due = (next_due - as_of).days

        if as_of > next_due:
            status = ComplianceStatus.OVERDUE
            gap = f"Training expired {(as_of - next_due).days} days ago"
        elif days_until_due <= 60:
            status = ComplianceStatus.EXPIRING_SOON
            gap = f"Training expires in {days_until_due} days"
        else:
            status = ComplianceStatus.COMPLIANT
            gap = None

        return LiteracyAssessment(
            employee_id=employee_id,
            system_id=system_id,
            status=status,
            last_trained=last_trained,
            next_due=next_due,
            gap_description=gap,
        )

    def organisation_gap_report(self, as_of: Optional[date] = None) -> list[LiteracyAssessment]:
        as_of = as_of or date.today()
        gaps = []
        for member in self.staff.values():
            for system_id in member.systems_in_scope:
                assessment = self.assess_compliance(member.employee_id, system_id, as_of)
                if assessment.status != ComplianceStatus.COMPLIANT:
                    gaps.append(assessment)
        return sorted(gaps, key=lambda a: (a.status.value, a.employee_id))

    def generate_policy_summary(self) -> dict:
        """
        Produces a structured summary suitable for inclusion in the Art.4 compliance
        documentation pack submitted to competent authorities on request.
        """
        total_staff = len(self.staff)
        total_systems = len(self.systems)
        gaps = self.organisation_gap_report()

        tier_counts = {}
        for member in self.staff.values():
            tier = member.tier.value
            tier_counts[tier] = tier_counts.get(tier, 0) + 1

        return {
            "regulation": "EU AI Act Art.4",
            "applicable_since": "2025-08-02",
            "report_date": str(date.today()),
            "covered_staff": total_staff,
            "covered_systems": total_systems,
            "tier_distribution": tier_counts,
            "open_gaps": len(gaps),
            "gap_breakdown": {
                "overdue": sum(1 for g in gaps if g.status == ComplianceStatus.OVERDUE),
                "expiring_soon": sum(1 for g in gaps if g.status == ComplianceStatus.EXPIRING_SOON),
                "not_trained": sum(1 for g in gaps if g.status == ComplianceStatus.NOT_TRAINED),
            },
        }

Usage

from datetime import date

tracker = AILiteracyTracker()

# Register the AI system
tracker.register_system(AISystemProfile(
    system_id="credit-scoring-v2",
    name="Credit Risk Scoring Model",
    risk_category="high_risk",  # Annex III point 5(b)
    description="ML model producing creditworthiness scores for retail loan applications",
    last_updated=date(2026, 3, 1),
    known_limitations=["Limited accuracy for thin-file applicants", "Training data pre-2021"],
    affected_groups=["retail loan applicants", "SME applicants"],
))

# Register staff members with their tiers
tracker.register_staff(StaffMember(
    employee_id="emp-001",
    name="Anna Schmidt",
    role="Loan Officer",
    tier=LiteracyTier.OPERATIONAL,
    systems_in_scope=["credit-scoring-v2"],
    refresh_interval_days=365,
))

tracker.register_staff(StaffMember(
    employee_id="emp-042",
    name="Bernd Müller",
    role="Credit Risk Manager",
    tier=LiteracyTier.OPERATOR,
    systems_in_scope=["credit-scoring-v2"],
    refresh_interval_days=180,
))

# Record training completions
tracker.record_training("emp-001", "credit-scoring-v2", date(2025, 9, 15))
tracker.record_training("emp-042", "credit-scoring-v2", date(2025, 8, 20))

# Generate gap report
gaps = tracker.organisation_gap_report()
for gap in gaps:
    print(f"{gap.employee_id}: {gap.status.value} — {gap.gap_description}")

# Generate policy summary for regulator
summary = tracker.generate_policy_summary()
print(summary)

Intersection with Other Art.4-Adjacent Requirements

Art.4 does not exist in isolation. Several other provisions either reinforce or extend the literacy obligation:

Art.26(6): Deployer Staff Information Rights

Deployers must ensure that operators receive "relevant information" about the AI system, its limitations, and appropriate use. This is the practical delivery mechanism for Art.4 literacy — the AI system's technical documentation (Art.11) and instructions for use (Art.13) are the source material.

Art.14: Human Oversight for High-Risk Systems

Art.14 requires that high-risk AI systems be designed to allow human oversight. The quality of that oversight depends directly on the operators' AI literacy. An operator who does not understand the system's failure modes cannot meaningfully exercise the override capability Art.14 mandates.

Art.9: Risk Management

Providers must identify and assess risks to health, safety, and fundamental rights. AI-literate staff who flag unexpected system behaviour are a primary channel through which Art.9 risk management receives operational feedback.

Art.72: Post-Market Monitoring

Providers must actively monitor deployed AI systems. Staff who understand what normal AI outputs look like are the first line of detection for the performance degradation that triggers post-market monitoring obligations.


Enforcement Context

Art.4 violations fall under the general penalties structure at Art.99(4):

Fines of up to €15,000,000 or, for undertakings, 3% of total worldwide annual turnover — whichever is higher.

As of April 2026, no competent authority has published an Art.4 enforcement action. However, Art.4 is likely to appear as an aggravating factor in cases where:

  1. A deployer's staff made an AI-assisted decision that harmed an individual
  2. Investigation reveals that the operator did not understand the AI system's limitations
  3. The deployer cannot produce evidence of any literacy training

In that scenario, Art.4 non-compliance compounds the underlying substantive violation rather than standing alone.


Hosting and Infrastructure Dimension

Art.4 literacy applies to AI systems wherever they run — cloud, on-premises, or hybrid. However, the infrastructure choice affects who counts as an "other person" dealing with the AI system's operation.

When deployers use a third-party cloud provider to run their AI system, the provider's staff who operate the underlying infrastructure generally fall outside Art.4 scope (they are not "dealing with the operation and use" of the AI system in the regulated sense — they are running compute). But:

  - Managed AI services blur the line: vendor staff who configure, tune, or monitor the model itself act on the deployer's behalf and come into scope
  - Outsourced MLOps or support teams who handle the system's inputs or outputs are covered
  - The deployer carries the burden of documenting who does and does not touch the AI system

For compliance teams documenting their Art.4 measures, knowing the infrastructure topology — what runs where, who operates it, what data passes through — is necessary to define the scope of persons covered.
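That topology can be recorded explicitly, keeping the boundary between "operates the AI system" (in Art.4 scope) and "merely runs the compute" (out of scope) visible in the documentation — a minimal sketch with illustrative field names and organisations:

```python
from dataclasses import dataclass

@dataclass
class OperatorRecord:
    organisation: str
    activity: str             # what the party actually does with the system
    operates_ai_system: bool  # True -> within Art.4 "other persons" scope
    jurisdiction: str

# Illustrative topology for a cloud-hosted scoring model
topology = [
    OperatorRecord("Acme Bank", "configures model thresholds", True, "DE"),
    OperatorRecord("CloudCo", "runs VMs and storage only", False, "EU"),
    OperatorRecord("ScoringOps GmbH", "monitors model outputs under contract", True, "DE"),
]

# Parties the deployer must cover in its Art.4 literacy measures
in_scope = [r.organisation for r in topology if r.operates_ai_system]
print(in_scope)
```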

Sota.io provides European-only infrastructure, which simplifies the "who has access to the system" question for Art.4 scope mapping and reduces the jurisdictional complexity of third-party operator documentation.


Art.4 Compliance Checklist

Art.4 AI Literacy — Compliance Checklist

□ Role mapping complete
  □ All staff who interact with AI systems identified
  □ Third-party operators mapped to their systems
  □ Tiers assigned (operational / operator / technical)

□ AI system profiles documented
  □ Scope and purpose in plain language
  □ Known limitations and failure modes
  □ Affected persons or groups
  □ Last system update date (triggers refresh cycle)

□ Training delivered
  □ Tier-1 training completed by all operational users
  □ Tier-2 training completed by all system operators
  □ Tier-3 training completed by all technical and compliance staff
  □ All training dated and recorded per employee

□ Refresh cycle established
  □ Annual refresh minimum for all tiers
  □ Refresh triggered on material system changes
  □ Refresh records maintained

□ Documentation pack maintained
  □ AI literacy policy with owner and review date
  □ System-specific briefings
  □ Per-employee training records
  □ Role-to-system mapping

□ Integration with Art.26/Art.14
  □ Instructions for use (Art.13) distributed to operators
  □ Human oversight procedures documented and trained
  □ Incident escalation path known to all covered staff

See Also