EU AI Act Art.7: Commission Delegated Acts to Amend Annex III — When New High-Risk Categories Are Added (2026)
Annex III of the EU AI Act is not a static list. Article 7 explicitly empowers the Commission to amend it — adding or modifying high-risk AI system categories — through delegated acts. This means the compliance boundary for high-risk AI is subject to administrative expansion without a new legislative cycle.
For developers and compliance teams, Art.7 creates a forward-looking obligation: monitoring the Annex III amendment process is part of maintaining legal certainty. A product that sits outside Annex III today may fall within it after a Commission delegated act. The lead time between formal Commission adoption of a delegated act and its entry into force can be as short as three months.
This guide covers:
- The Art.7(1) amendment power and its scope
- All seven criteria the Commission must assess (Art.7(1)(a)–(g))
- The GPAI model intersection under Art.7(2)
- The emergency procedure under Art.7(3)
- Parliamentary and Council scrutiny (Art.7(4)–(5))
- First-cycle delegated act timeline expectations for 2026–2028
- Python tooling to monitor and assess Annex III amendment exposure
What Art.7 Authorises
Article 7 gives the Commission a delegated power under Art.97 EU AI Act to:
- Add new AI system use-cases or areas to Annex III, or
- Remove AI system use-cases from Annex III that no longer meet the risk threshold
The delegation runs for an initial five-year period from 1 August 2024, automatically renewable unless the Parliament or Council objects (Art.97(2)).
The power to remove entries is equally significant: if an AI system that was once classified as high-risk no longer presents significant risk due to technological development, the Commission can declassify it. This is the inverse mechanism developers rarely plan for but should track.
The Seven Criteria: Art.7(1)(a)–(g)
Before adding a new area or use-case to Annex III, the Commission must assess all of the following factors. A delegated act whose assessment fails to address one of these criteria is vulnerable to annulment by the Court of Justice.
(a) Intended Purpose and Manner of Use
The Commission must evaluate whether the AI system is intended to be used in high-stakes decision-making contexts — employment, education, essential services, law enforcement, justice, democratic participation, or critical infrastructure. Systems designed for low-stakes advisory functions are harder to justify as high-risk under Art.7(1)(a).
Developer implication: Systems that are technically equivalent but differ in intended deployment context may face different Art.7 exposure. A recommendation engine marketed to streaming platforms differs from the same algorithm deployed for hospital bed allocation even if the underlying model is identical.
(b) Extent to Which the System Is Already in Use or Is Likely to Be Used
The Commission must consider the deployment scale. A technology used by a handful of experimental research institutes is less likely to trigger an Art.7 addition than one deployed at scale by public authorities and private-sector deployers across Member States.
Developer implication: Market adoption data matters. When a technology reaches scale — particularly in Annex III-adjacent sectors (biometric identification, critical infrastructure management, employment, essential private services) — Art.7 review becomes more probable.
(c) Extent to Which Use Has or Is Likely to Have a Significant Impact on Fundamental Rights
This criterion directly links Art.7 to the EU Charter of Fundamental Rights. The Commission must assess whether the AI system affects dignity, equality, non-discrimination, respect for private life and protection of personal data (Arts.7 and 8 Charter), the right to an effective remedy (Art.47), and other fundamental rights.
Practical scope: This is the most expansive of the seven criteria. Almost any AI system used in public administration, benefits determination, housing allocation, or law enforcement has at least some potential fundamental rights impact. The Commission must show the impact is significant — not merely theoretical.
(d) The Extent to Which Adverse Outcomes Are Irreversible or Difficult to Reverse
Irreversibility is a key risk amplifier in the Commission's assessment. An AI system that generates employment recommendations that a human reviewer can easily override is structurally different from one that makes loan blacklisting decisions that propagate across credit bureaux and affect access to financial services for years.
Developer implication: Implement human override capability and correction mechanisms not only to satisfy Art.14 (human oversight for existing high-risk AI) but also to reduce Art.7(1)(d) exposure if your system comes under Commission scrutiny.
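As a rough illustration of what such a correction mechanism can look like in practice, here is a minimal sketch of a reviewable decision record. The class and field names are hypothetical, not taken from the Act.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewableDecision:
    # Hypothetical record structure: keeps the AI output and the human outcome separate
    # so recommendations can be overridden and corrections remain auditable.
    subject_id: str
    ai_recommendation: str
    final_decision: str | None = None   # set only by a human reviewer
    correction_log: list[str] = field(default_factory=list)

    def override(self, human_decision: str, reason: str) -> None:
        # A reversible outcome (relevant to Art.7(1)(d)) presupposes that a human
        # can replace the recommendation and that the reason is retained.
        self.final_decision = human_decision
        self.correction_log.append(
            f"{datetime.now(timezone.utc).isoformat()} override: {reason}"
        )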
(e) The Extent to Which Adverse Outcomes Result in a Significant Impact on Health, Safety, or Fundamental Rights
This criterion requires the Commission to assess not just likelihood but magnitude. A system that misclassifies restaurant recommendations has low magnitude even if errors are frequent. A system used in credit scoring for mortgage applications that systematically disadvantages a protected group has high magnitude even if such errors are statistically rare.
Note the overlap with Art.7(1)(c): the Commission must separately assess fundamental rights impact under both (c) and (e). The difference is that (c) asks about the extent of any rights impact while (e) asks about whether adverse outcomes — actual negative consequences — reach a significant threshold.
(f) The Extent to Which Those Potentially Adversely Affected Are in a Vulnerable Position
Vulnerable population exposure is a statutory amplifier. Systems that primarily affect children, persons with disabilities, migrants, elderly persons, or economically disadvantaged groups are assessed more stringently than systems affecting the general population.
Compliance design: If your AI system operates in a domain where vulnerable persons are disproportionately represented — social housing allocation, disability benefits, child custody proceedings, refugee status determination — Art.7(1)(f) creates heightened delegated act exposure regardless of current Annex III scope.
(g) The Extent to Which There Is an Imbalance of Power Between the Deployer and the Persons on Whom the AI System Is Used
This is the most novel of the seven criteria — explicitly recognising that power asymmetry amplifies AI risk. An AI system used by a private employer to screen job applicants creates an inherent power imbalance: the deployer has complete information about the AI system, the applicant has none, and the applicant has no realistic alternative avenue if rejected.
Art.7(1)(g) makes power imbalance a formal Commission consideration. High-asymmetry systems, particularly those used in public-authority contexts where individuals cannot opt out, are easier for the Commission to justify adding to Annex III.
Interaction With the Seven Criteria: A Decision Matrix
The Commission need not satisfy all seven criteria to justify an Annex III addition. Article 7(1) requires the Commission to "take into account" the criteria — they are factors in a balancing exercise, not threshold gates. However, a delegated act unsupported by any meaningful showing on most criteria would be vulnerable to challenge.
| Criterion | Key Question | High Risk Score Drivers |
|---|---|---|
| (a) Intended purpose | High-stakes decision domain? | Employment, law enforcement, justice, benefits |
| (b) Scale of use | Already or likely widely deployed? | Cross-MS deployment, public sector |
| (c) Fundamental rights | Significant Charter impact? | Discrimination, privacy, effective remedy |
| (d) Irreversibility | Hard to correct bad outcomes? | Blacklisting, deportation, criminal conviction |
| (e) Health/safety/rights | Significant adverse outcome magnitude? | Physical harm, systemic discrimination |
| (f) Vulnerable populations | Disproportionate exposure? | Children, disabled, migrants, economically disadvantaged |
| (g) Power asymmetry | Structural imbalance? | Public authority/individual, employer/applicant |
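For internal triage, the matrix can be reduced to a simple checklist. The sketch below is illustrative only; the enum labels are shorthand rather than statutory wording.

from enum import Enum

class Art7Criterion(str, Enum):
    # Shorthand labels for Art.7(1)(a)-(g); not official terminology.
    INTENDED_PURPOSE = "a"
    SCALE_OF_USE = "b"
    FUNDAMENTAL_RIGHTS = "c"
    IRREVERSIBILITY = "d"
    OUTCOME_MAGNITUDE = "e"
    VULNERABLE_POPULATIONS = "f"
    POWER_ASYMMETRY = "g"

def engaged_criteria(answers: dict[Art7Criterion, bool]) -> list[str]:
    # Returns the criteria a system plausibly engages; a count close to seven
    # suggests the system sits squarely in the Art.7 exposure zone.
    return [c.name for c, hit in answers.items() if hit]

# Hypothetical triage for a tenant-screening tool
print(engaged_criteria({
    Art7Criterion.INTENDED_PURPOSE: True,
    Art7Criterion.SCALE_OF_USE: True,
    Art7Criterion.FUNDAMENTAL_RIGHTS: True,
    Art7Criterion.IRREVERSIBILITY: True,
    Art7Criterion.OUTCOME_MAGNITUDE: True,
    Art7Criterion.VULNERABLE_POPULATIONS: True,
    Art7Criterion.POWER_ASYMMETRY: True,
}))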
Art.7(2): GPAI Models and the Annex III Interface
Article 7(2) addresses an intersection the drafters anticipated: General Purpose AI (GPAI) models that are integrated into AI systems that then become high-risk.
Art.7(2) provides that when the Commission adds a new category to Annex III, it shall also specify whether that addition creates obligations for GPAI model providers under Chapter V (Arts.51–56) — the specific GPAI transparency, technical documentation, and systemic-risk rules — when those models are integrated into the newly Annex III-classified AI systems.
Why this matters: If the Commission adds, for example, "AI systems used for predictive maintenance in critical infrastructure" to Annex III, and a major LLM provider's API is embedded in dozens of such systems, the Commission must decide whether the LLM provider itself becomes subject to additional GPAI obligations arising from those downstream integrations.
This is a second-order compliance obligation that GPAI providers must monitor. A GPAI provider whose API is used downstream in systems that fall within a new Art.7 Annex III addition may find themselves with new Art.51–56 obligations flowing from the delegated act rather than from a change to GPAI-specific regulation.
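One way a GPAI provider can track this second-order exposure is a register of downstream integrations keyed by Annex III area. The sketch below is hypothetical: the system and model names are invented, and the area labels mirror the AnnexIIIArea values used in the scoring tool later in this guide.

# Hypothetical integration register for a GPAI provider
integrations = {
    "grid-maintenance-copilot": {"area": "critical_infrastructure", "model": "gpai-model-x"},
    "shop-recommender": {"area": "retail_recommendations", "model": "gpai-model-x"},
}

# Areas added by a (hypothetical) future Art.7 delegated act
newly_added_areas = {"critical_infrastructure"}

affected = [name for name, meta in integrations.items() if meta["area"] in newly_added_areas]
# Any hit is a trigger to re-check Chapter V (Arts.51-56) obligations flowing
# from the delegated act, per Art.7(2) as described above.
print(affected)  # ['grid-maintenance-copilot']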
Art.7(3): The Emergency Procedure
Article 7(3) provides an accelerated delegated act mechanism for urgent situations. If the Commission determines that an AI system presents a serious and imminent risk to health, safety, or fundamental rights, it can adopt a delegated act amending Annex III with immediate effect — without waiting for the standard scrutiny period.
Emergency delegated acts:
- Enter into force immediately on notification to the Parliament and Council
- Remain in force for a period not exceeding 12 months
- Are confirmed, amended, or repealed through a standard delegated act within that period
Practical significance: The emergency procedure is the legislative analogue of a RAPEX or market surveillance rapid alert. A high-profile AI incident — a biometric surveillance system deployed at scale for political surveillance, or an AI-powered credit scoring system exposed as systematically discriminatory — could trigger Art.7(3) within weeks of the incident becoming publicly known.
Developers operating in borderline-high-risk areas (just outside current Annex III scope) should assess Art.7(3) exposure, not just standard Art.7(1) exposure.
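For planning purposes, the emergency timeline can be sketched as a simple date calculation, assuming immediate effect on notification and the 12-month maximum validity described above. The function name and structure are illustrative, and a month is approximated as 30 days.

from datetime import date, timedelta

def emergency_act_window(notified: date, months_valid: int = 12) -> dict:
    # Immediate effect on notification; a standard delegated act must confirm,
    # amend, or repeal the emergency act before the validity period runs out.
    expiry = notified + timedelta(days=months_valid * 30)
    return {
        "in_force_from": notified,
        "confirm_or_lapse_by": expiry,
    }

print(emergency_act_window(date(2027, 3, 1)))
# {'in_force_from': datetime.date(2027, 3, 1), 'confirm_or_lapse_by': datetime.date(2028, 2, 24)}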
Parliamentary and Council Scrutiny: Arts.7(4)–(5)
The standard delegated act procedure under Art.7 gives both the European Parliament and the Council a three-month objection period from notification of the delegated act.
How Scrutiny Works
- Commission adopts delegated act and notifies Parliament + Council simultaneously
- Three-month scrutiny period begins (extendable once by a further three months)
- If neither Parliament nor Council objects → delegated act enters into force
- If the Parliament (by a majority of its component members) or the Council (by qualified majority) objects → delegated act does not enter into force
The Parliament or Council can also signal that they will not object before the three months expire — allowing early entry into force. This is commonly used when the Commission has pre-consulted both institutions extensively.
Strategic Implication for Industry
The three-month (potentially six-month) scrutiny window creates a notification period during which an Annex III amendment has been adopted but is not yet binding. Developers whose systems would be captured by a proposed addition typically have around three months after formal Commission adoption to begin compliance work, though starting earlier (at the Commission consultation stage) is strongly advisable.
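A minimal sketch of that window as dates, assuming the three-month scrutiny period (extendable once by three months) described above and approximating a month as 30 days. Treat the output as indicative planning input, not legal advice.

from datetime import date, timedelta

def scrutiny_window(adopted: date) -> dict:
    # Standard Art.97 scrutiny: three months from notification, extendable once.
    return {
        "earliest_entry": adopted,                       # if both institutions signal early non-objection
        "standard_entry": adopted + timedelta(days=90),  # no objection, no extension
        "latest_entry": adopted + timedelta(days=180),   # scrutiny period extended once
    }

print(scrutiny_window(date(2027, 6, 15)))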
Timeline: First-Cycle Delegated Acts (2026–2028)
The EU AI Act entered into force on 1 August 2024. The Commission's internal review under Art.7(1) of existing Annex III categories and potential additions has been ongoing since then. Based on the legislative trajectory:
| Phase | Expected Timing |
|---|---|
| Commission consultation + impact assessment | 2025–2026 |
| Formal Commission proposal for delegated act | 2026–2027 |
| Parliament + Council scrutiny (3 months) | 2027 |
| First delegated act amendments to Annex III | Late 2027–Early 2028 |
| Current Annex III high-risk obligations apply | 2 August 2026 |
The August 2026 deadline is for obligations under the current Annex III, not for any first-cycle Art.7 additions. Developers must:
- Comply with current Annex III high-risk obligations by August 2026
- Monitor Art.7 consultation processes for additions affecting their systems
- Build compliance readiness for potential new categories by 2027–2028
Monitoring Art.7: What to Watch
Commission Consultation Signals
The Commission publishes pre-legislative work on EUR-Lex and through the Better Regulation Portal. Forthcoming Art.7 delegated acts will surface as:
- Impact assessments and draft delegated acts published for feedback under the Art.97 EU AI Act delegation
- AI Office consultations on technology-specific risk assessments
- ENISA technical assessments on AI system risk categorisation
ENISA and AI Office Outputs
The EU AI Office (established under Art.64 EU AI Act) is the primary Commission body for AI risk expertise. ENISA publishes sector-specific AI risk assessments. Both feed into the Art.7(1) criteria evaluation. Compliance teams should monitor:
- EU AI Office publications at digital-strategy.ec.europa.eu/ai-office
- ENISA AI risk assessments at enisa.europa.eu
Regulatory Horizon Scanning
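A lightweight horizon-scanning job can start as a keyword scan over the sources listed above. The sketch below uses the requests library; the URL list and keywords are placeholders for whatever pages or feeds your team actually tracks, and a production pipeline would parse RSS/HTML properly rather than scanning raw pages.

import requests  # third-party: pip install requests

WATCH_SOURCES = {
    "ai_office": "https://digital-strategy.ec.europa.eu/ai-office",   # placeholder landing page
    "enisa": "https://www.enisa.europa.eu",                           # placeholder landing page
}
KEYWORDS = ["Annex III", "delegated act", "Article 7", "high-risk"]

def scan_sources() -> dict[str, list[str]]:
    # Naive keyword scan: flags which watched pages currently mention Art.7-related terms.
    hits: dict[str, list[str]] = {}
    for name, url in WATCH_SOURCES.items():
        try:
            text = requests.get(url, timeout=10).text.lower()
        except requests.RequestException:
            continue
        hits[name] = [kw for kw in KEYWORDS if kw.lower() in text]
    return hits

print(scan_sources())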
Beyond source monitoring, a simple internal scoring sketch can rank systems by Art.7 exposure (the weights are illustrative, not an official methodology):

from dataclasses import dataclass
from typing import Literal

AnnexIIIArea = Literal[
    "biometric_identification",
    "critical_infrastructure",
    "education_vocational",
    "employment_workforce",
    "essential_private_services",
    "law_enforcement",
    "migration_asylum",
    "justice_democratic",
]

@dataclass
class Art7ExposureAssessment:
    system_name: str
    deployment_domains: list[AnnexIIIArea]
    uses_vulnerable_populations: bool
    power_asymmetry_score: int       # 1-5
    irreversibility_score: int       # 1-5
    fundamental_rights_impact: bool
    scale_deployed: Literal["research", "limited", "regional", "national", "eu_wide"]

    def compute_art7_exposure(self) -> dict:
        # Illustrative weights that loosely mirror the Art.7(1)(a)-(g) factors;
        # they are not derived from any Commission methodology.
        domain_risk = len(self.deployment_domains) * 10
        vuln_factor = 20 if self.uses_vulnerable_populations else 0
        asymmetry_factor = self.power_asymmetry_score * 5
        irreversibility_factor = self.irreversibility_score * 5
        rights_factor = 15 if self.fundamental_rights_impact else 0
        scale_scores = {
            "research": 0, "limited": 5, "regional": 10,
            "national": 15, "eu_wide": 20,
        }
        scale_factor = scale_scores[self.scale_deployed]
        total = (domain_risk + vuln_factor + asymmetry_factor +
                 irreversibility_factor + rights_factor + scale_factor)

        if total >= 70:
            level = "HIGH — Commission review likely within 2 years"
        elif total >= 40:
            level = "MEDIUM — Monitor Art.7 consultations actively"
        else:
            level = "LOW — No immediate Art.7 exposure"

        return {
            "system": self.system_name,
            "score": total,
            "exposure_level": level,
            "domains": self.deployment_domains,
            "recommended_actions": self._recommend_actions(total),
        }

    def _recommend_actions(self, score: int) -> list[str]:
        actions = []
        if score >= 70:
            actions.append("Register for EU AI Office newsletter and EUR-Lex Art.7 alerts")
            actions.append("Engage legal counsel for Annex III expansion monitoring")
            actions.append("Begin Art.9–15 gap analysis proactively")
        if score >= 40:
            actions.append("Subscribe to ENISA AI risk assessment publications")
            actions.append("Map current system against Art.7(1)(a)–(g) criteria")
        if self.uses_vulnerable_populations:
            actions.append("Document Art.7(1)(f) mitigation measures now")
        if self.power_asymmetry_score >= 4:
            actions.append("Implement human override and explanation mechanisms for Art.7(1)(g)")
        return actions
Example assessment:
assessment = Art7ExposureAssessment(
    system_name="Tenant Screening AI",
    deployment_domains=["essential_private_services"],
    uses_vulnerable_populations=True,
    power_asymmetry_score=5,    # landlord vs tenant
    irreversibility_score=4,    # housing rejection hard to reverse
    fundamental_rights_impact=True,
    scale_deployed="national",
)

result = assessment.compute_art7_exposure()
# score: 105 — HIGH exposure
# Commission review likely: housing/tenant-screening AI was discussed
# in Parliament during Art.6 negotiations
Annex III: Current Scope and Expansion Vectors
Current Annex III (as of 2026) covers eight areas:
- Biometric identification and categorisation
- Critical infrastructure management and operation
- Education and vocational training
- Employment, workers management, and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
Highest-probability expansion vectors (based on Commission consultation signals and ENISA outputs through Q1 2026):
| Vector | Art.7 Criteria Alignment | Probability (2026–2028) |
|---|---|---|
| Insurance risk scoring AI | (c)(d)(e)(g) | Medium-High |
| AI in healthcare diagnostic support | (a)(c)(e) | Medium |
| AI-powered tenant/housing screening | (c)(d)(f)(g) | Medium-High |
| Social media content moderation | (b)(c)(g) | Low-Medium |
| AI in judicial/quasi-judicial proceedings | (a)(c)(d)(g) | Medium |
Compliance Posture: Three Tiers
Tier 1 — Currently Annex III (August 2026 deadline): Full Art.9–15 compliance required. No Art.7 monitoring needed for classification — classification is already settled.
Tier 2 — Art.7 Exposure Zone (domains adjacent to Annex III, high criteria alignment): Begin gap analysis against Art.9–15 now. Implement Art.7(1)(d)(g) mitigation (human override, correction mechanisms, power asymmetry documentation). Monitor Commission consultation formally.
Tier 3 — Low Art.7 Exposure: Maintain situational awareness via EUR-Lex alerts. No immediate action required.
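Tier assignment can reuse the exposure score from the assessment sketch earlier. The function below is illustrative, and the 40-point threshold simply mirrors the MEDIUM cut-off assumed in that sketch.

def classify_tier(currently_annex_iii: bool, exposure_score: int) -> str:
    # Tier 1 is determined by current Annex III scope, not by the score.
    if currently_annex_iii:
        return "Tier 1: full Art.9-15 compliance by 2 August 2026"
    if exposure_score >= 40:
        return "Tier 2: begin gap analysis, monitor Art.7 consultations"
    return "Tier 3: maintain EUR-Lex alerts only"

print(classify_tier(False, 105))   # the tenant-screening example above lands in Tier 2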
Key Dates and Reference Points
| Event | Date / Status |
|---|---|
| EU AI Act entry into force | 1 August 2024 |
| Art.5 prohibitions applicable | 2 February 2025 |
| GPAI model obligations applicable | 2 August 2025 |
| Current Annex III high-risk AI obligations | 2 August 2026 |
| Art.97 delegation period (5 years) | Until 1 August 2029 (tacitly extendable) |
| First Art.7 delegated acts expected | 2027–2028 |
| Commission Art.7 consultation phase | Now — 2026 |
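The fixed dates in the table translate directly into a small countdown helper. The dictionary keys are illustrative, and the expected 2027–2028 delegated act dates are deliberately omitted because they are not statutory.

from datetime import date

KEY_DATES = {
    "prohibitions_apply": date(2025, 2, 2),
    "gpai_obligations_apply": date(2025, 8, 2),
    "annex_iii_high_risk_apply": date(2026, 8, 2),
    "art97_delegation_ends": date(2029, 8, 1),
}

def days_remaining(today: date) -> dict[str, int]:
    # Negative values mean the milestone has already passed.
    return {name: (d - today).days for name, d in KEY_DATES.items()}

print(days_remaining(date.today()))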
Summary
Article 7 is the EU AI Act's living list mechanism. The Commission has broad authority to expand Annex III through delegated acts, constrained only by the requirement to assess seven statutory criteria and subject to a three-month parliamentary scrutiny window. The emergency procedure (Art.7(3)) can compress that timeline to weeks for urgent situations.
For developers:
- August 2026 is the compliance deadline for current Annex III systems — this date is not affected by Art.7
- Art.7 expansion will follow a first cycle of delegated acts expected 2027–2028
- GPAI providers must monitor Art.7(2) implications when downstream integrations enter newly classified categories
- Tier 2 systems (adjacent to Annex III, high criteria alignment) should begin compliance readiness now rather than waiting for a formal delegated act
The seven criteria in Art.7(1)(a)–(g) are not just Commission decision factors — they are a diagnostic framework for any developer assessing whether their system is on a regulatory expansion trajectory.