EU AI Act Art.111: CRR Amendments — AI in Banking Dual Regulation FinTech Developer Guide (2026)
When regulators write final provisions that amend other regulations, the inclination is to skip past them. Article 111 of the EU AI Act is one such provision — a cross-reference to banking law that looks like a housekeeping measure. It is not. Art.111 inserts AI Act compliance requirements into the Capital Requirements Regulation (CRR, Regulation (EU) No 575/2013), creating a formal dual-regulation framework that governs every AI system used in credit risk assessment, internal model validation, and prudential stress testing at European banks.
The practical consequence: if you are building or deploying AI for a European bank — credit scoring models, IRB parameter estimation engines, stress testing platforms, fraud detection — you now operate at the intersection of two heavyweight regulatory frameworks. The EBA (European Banking Authority) acquires a new mandate to develop regulatory technical standards on AI use in banking. National competent authorities (NCAs) acting as prudential supervisors and national market surveillance authorities under the AI Act must coordinate oversight. And the conformity assessment pathway under the AI Act runs in parallel with CRR model approval processes that banks have already navigated for years.
This guide explains what Art.111 does to the CRR, why the dual-regulation architecture matters for development teams, and what FinTech firms building AI for European banks need to do before August 2026.
What Art.111 Amends in the CRR
The Capital Requirements Regulation establishes the prudential framework for European banks — minimum capital requirements, liquidity coverage ratios, leverage ratios, and the governance standards for internal models. Banks that use internal ratings-based (IRB) approaches to credit risk must have those models approved by their national competent authority (typically the ECB via the SSM for significant institutions, or national prudential regulators for less-significant ones).
Article 111 of the EU AI Act adds a layer to this framework by inserting references to AI Act compliance requirements where banking AI systems are used. Specifically:
IRB model governance (CRR Art.173, Art.185): Banks using AI systems as part of their IRB approach — for PD (probability of default), LGD (loss given default), or EAD (exposure at default) estimation — must now demonstrate that those AI systems meet AI Act Art.9 (risk management), Art.10 (data governance), Art.11 (technical documentation), Art.13 (transparency to human reviewers), and Art.14 (human oversight) requirements. The model approval process that prudential supervisors conduct under Art.185 CRR must incorporate evidence of AI Act conformity.
Credit risk assessment AI (Annex III Category 5b): The AI Act classifies "AI systems intended to be used for creditworthiness assessment or credit scoring" as high-risk AI under Annex III, Category 5b. Art.111 creates the formal link between this classification and CRR requirements, so that banks using AI in credit assessment face a unified oversight regime rather than parallel but disconnected regulatory tracks.
EBA mandate for technical standards: Art.111 empowers the EBA to develop regulatory technical standards (RTS) specifying how AI Act conformity assessment outputs should be integrated into the CRR model approval process, what documentation banks must submit to prudential supervisors demonstrating AI Act compliance, and how NCAs should coordinate with market surveillance authorities designated under Art.70 of the AI Act.
Supervisory coordination: The provision requires NCAs acting as prudential supervisors to share relevant supervisory findings about AI systems with designated market surveillance authorities and vice versa. This eliminates the scenario where a bank's AI model passes prudential validation but fails AI Act conformity assessment, or vice versa, without either regulator knowing.
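The moving parts above can be held in one place as a simple lookup table. This is a sketch only: the groupings paraphrase this guide, and the dict keys and values are illustrative labels, not statutory text.

```python
# Hypothetical crosswalk of Art.111 touchpoints: CRR provisions mapped to the
# AI Act evidence they now pull in. Labels are illustrative, not statutory.
ART111_CROSSWALK: dict[str, list[str]] = {
    "CRR Art.173 (IRB model governance)": [
        "AI Act Art.9 risk management",
        "AI Act Art.10 data governance",
        "AI Act Art.11 technical documentation",
        "AI Act Art.13 transparency",
        "AI Act Art.14 human oversight",
    ],
    "CRR Art.185 (model approval)": ["AI Act Art.43 conformity assessment evidence"],
    "EBA RTS mandate": ["coordination with Art.70 market surveillance authorities"],
    "Supervisory coordination": ["bilateral finding-sharing between NCA and MSA"],
}


def ai_act_evidence_for(crr_provision: str) -> list[str]:
    """Return the AI Act evidence items linked to a CRR provision (empty if unmapped)."""
    return ART111_CROSSWALK.get(crr_provision, [])
```

A lookup like this is useful as the seed of a compliance traceability matrix: each CRR touchpoint becomes a row, and each AI Act evidence item becomes a documentation deliverable to track.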
Why Dual Regulation Matters for Developers
Banking AI in Europe was already subject to extensive model risk management requirements before the AI Act. The EBA's Guidelines on internal governance (EBA/GL/2021/05), the EBA's Guidelines on loan origination and monitoring (EBA/GL/2020/06), and the ECB's Guide to internal models established rigorous standards for model development, validation, and ongoing monitoring. The AI Act does not replace these — it adds to them.
For development teams, this means:
Two sets of documentation requirements. CRR model approval requires technical documentation of model design, data sources, performance metrics, and validation results — submitted to prudential supervisors in formats they specify. AI Act Art.11 requires technical documentation in the format specified in Annex IV, submitted to market surveillance authorities. Both must be maintained and kept current. They overlap substantially (both require data governance documentation, performance metrics, validation evidence) but use different terminology and have different formal requirements.
Two validation processes. Banks' internal model validation functions — required under CRR — conduct independent validation of AI models as part of IRB approval. AI Act conformity assessment for high-risk systems (Art.43) requires either internal-control self-assessment (the pathway for most Annex III systems, including credit scoring) or third-party assessment by a notified body (relevant mainly to certain biometric systems and to products already subject to sectoral conformity regimes). These processes can be aligned — the same validation evidence often satisfies both — but alignment must be designed in, not discovered after the fact.
Two ongoing monitoring obligations. CRR requires continuous monitoring of approved models with re-validation when models drift or business conditions change. AI Act Art.72 requires post-market monitoring through a plan established before market placement. For banking AI, these monitoring obligations can and should be unified into a single ongoing monitoring program, but the reporting outputs must satisfy both frameworks simultaneously.
Coordinated supervisory interactions. Banks subject to SSM (Single Supervisory Mechanism) supervision interact with the ECB/JST on model approval. For AI Act compliance, market surveillance authority interactions follow different procedural rules. Art.111 creates coordination obligations, but in practice FinTech developers and banks will need to manage relationships with both types of authorities — and prepare documentation that anticipates both sets of questions.
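Operationally, the documentation overlap described above can be managed with a set-based plan that identifies artifacts to author once and reference from both submission packages. The artifact names below are illustrative placeholders, not items from Annex IV or any CRR template.

```python
# Illustrative documentation planner: which artifacts serve both frameworks,
# and which are framework-specific. Artifact names are placeholders.
CRR_DOCS = {"data_sources", "performance_metrics", "validation_results", "model_design"}
ANNEX_IV_DOCS = {
    "data_sources",
    "performance_metrics",
    "validation_results",
    "risk_management_system",
    "human_oversight_measures",
}


def documentation_plan(crr: set[str], annex_iv: set[str]) -> dict[str, set[str]]:
    """Split documentation into write-once-shared vs. framework-specific artifacts."""
    return {
        "shared": crr & annex_iv,       # author once, reference from both packages
        "crr_only": crr - annex_iv,     # prudential submission only
        "ai_act_only": annex_iv - crr,  # Annex IV technical file only
    }
```

The design point is simple: treat the shared set as the master documentation, and generate the two submission formats from it, rather than maintaining two diverging copies.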
Annex III Category 5b: Creditworthiness AI as High-Risk
The EU AI Act's high-risk classification under Annex III applies to several categories of AI systems. Employment and workers' management sits in Category 4; Category 5 covers access to essential private services and essential public services and benefits. Within Category 5, point 5(b) lists AI systems intended to evaluate the creditworthiness of natural persons or establish their credit score (with a carve-out for systems used to detect financial fraud), and point 5(c) covers risk assessment and pricing in relation to natural persons for life and health insurance.
The practical scope of this classification is wide:
Direct credit scoring models — ML models that estimate probability of default, credit score, or lending eligibility for natural persons. This includes traditional bank credit scoring but also buy-now-pay-later platforms, peer-to-peer lending, and embedded finance.
Features of creditworthiness assessment — AI that extracts behavioral signals, transaction patterns, or alternative data for credit decisioning is in scope even if the final credit decision is human-made, provided the AI output materially influences the decision.
Insurance risk classification — AI systems used to set life and health insurance premiums or determine coverage eligibility for natural persons fall under the neighbouring point 5(c) classification. Insurance AI regulatory interactions involve both AI Act requirements and the Solvency II framework.
What is excluded: B2B credit assessment (AI used to assess creditworthiness of legal entities rather than natural persons) sits outside the Category 5b classification, though other provisions (Art.6(2), Art.50 transparency) may still apply.
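The scope rules above (natural persons only, and in scope even where the AI merely feeds a human decision) reduce to a small predicate. This is a simplified sketch for triage purposes, not legal advice; real scoping needs legal review.

```python
def in_scope_point_5b(subject_is_natural_person: bool,
                      purpose_is_creditworthiness: bool,
                      materially_influences_decision: bool) -> bool:
    """Simplified Annex III point 5(b) scope test (illustrative triage only).

    B2B (legal-entity) assessment is excluded; AI that only feeds a human
    decision stays in scope if its output materially influences that decision.
    """
    return (subject_is_natural_person
            and purpose_is_creditworthiness
            and materially_influences_decision)
```

Running every model in the inventory through a predicate like this is a cheap first pass that surfaces the systems needing a full legal classification memo.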
The high-risk classification triggers the full Chapter III Section 2 obligation set: risk management system (Art.9), data governance (Art.10), technical documentation (Art.11), record-keeping (Art.12), transparency and information to deployers (Art.13), human oversight (Art.14), accuracy, robustness and cybersecurity (Art.15), and for providers the quality management system (Art.17), post-market monitoring (Art.72), and registration in the EU database (Art.71).
For banks already complying with CRR model governance requirements, many of these are familiar — but the AI Act makes them legally binding obligations with penalty exposure (Art.99: up to EUR 15 million or 3% of global annual turnover, whichever is higher, for non-compliance with obligations other than prohibited practices).
EBA Technical Standards and Guidance
The EBA has been engaged with AI governance in banking ahead of the AI Act's application date. Key EBA work relevant to Art.111:
EBA/REP/2020/01 — Report on Big Data and Advanced Analytics (BDAA): Established the foundational framework for EBA thinking on model governance for ML models in banking. Introduced the BDAA lifecycle (from data sourcing through validation and monitoring) that maps closely to what the AI Act now requires formally.
EBA consultation on the use of ML for IRB models (2023): Proposed guidance on how banks can use machine learning within the IRB framework while maintaining explainability and validation standards compatible with CRR requirements. This consultation directly anticipated the AI Act intersection.
Draft RTS on AI Act integration (post-Art.111 mandate): Following the AI Act's entry into force, the EBA is developing regulatory technical standards on how AI Act conformity assessment outputs should be presented in model approval submissions, what additional information prudential supervisors need to assess AI Act compliance, and how ongoing AI Act monitoring reports relate to CRR model performance monitoring obligations.
EBA supervisory handbook updates: NCAs conducting model reviews under the ECB/SSM framework are updating their review methodologies to incorporate AI Act criteria. The SSM's Targeted Review of Internal Models (TRIM) process is being extended to cover AI systems within scope.
For FinTech developers, the practical implication is that EBA technical standards will define the specific documentation formats and validation evidence that banking supervisors expect. Aligning with EBA guidance is not optional — it is the mechanism by which Art.111 CRR compliance becomes operationally concrete.
IRB Model Approval and AI Act Conformity Assessment
European banks using the IRB approach to credit risk must obtain supervisory approval for their models before using them in capital calculations. This approval process involves model review by supervisory teams, assessment of validation reports, and ongoing monitoring requirements. The process is resource-intensive and typically takes 12–24 months for significant institutions.
Art.111 requires that AI systems within IRB models — any ML component used in PD/LGD/EAD estimation — demonstrate AI Act high-risk compliance as part of the approval package. In practice:
Pre-submission: Before submitting an IRB model approval request that includes an AI component, banks (and FinTech vendors providing model components) must complete AI Act technical documentation (Annex IV), conduct conformity assessment (Art.43), register the system in the EU database (Art.71), and affix CE marking. This pre-submission work adds to the model development timeline.
During review: Supervisory teams reviewing IRB model approvals will examine AI Act compliance evidence alongside traditional model validation documentation. Reviewers will assess whether the AI system's risk management system (Art.9) is adequate, whether data governance (Art.10) meets AI Act requirements, and whether human oversight mechanisms (Art.14) are implemented for the AI component.
Post-approval monitoring: Approved IRB models with AI components must satisfy both CRR model monitoring obligations and AI Act post-market monitoring requirements (Art.72). When either framework requires re-validation (e.g., due to model drift, distribution shift, or significant changes in input data), the re-validation must satisfy both sets of standards simultaneously.
For FinTech vendors providing model components: If you supply ML model components to a bank that uses them in IRB calculation, you are a provider of a high-risk AI system under the AI Act (Art.3(3)). You bear the provider obligations regardless of whether your customer (the bank) is the deployer who bears other obligations. Establish clear contractual allocation of provider/deployer obligations in vendor agreements with banking customers.
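Because the Art.43 conformity assessment, Art.71 registration, and CE-marking steps must all land before the IRB submission, it helps to back-plan milestones from the target submission date. The lead times below are planning assumptions for illustration, not regulatory deadlines.

```python
from datetime import date, timedelta

# Illustrative back-planner for the pre-submission sequence. Lead times are
# planning assumptions, not values from the AI Act or CRR.
PRE_SUBMISSION_LEAD_TIMES = {
    "annex_iv_documentation_complete": timedelta(weeks=16),
    "conformity_assessment_done": timedelta(weeks=8),
    "eu_database_registration": timedelta(weeks=4),
    "ce_marking_affixed": timedelta(weeks=2),
}


def pre_submission_plan(irb_submission: date) -> dict[str, date]:
    """Milestone target dates, working backwards from the IRB submission date."""
    return {milestone: irb_submission - lead
            for milestone, lead in PRE_SUBMISSION_LEAD_TIMES.items()}
```

For example, `pre_submission_plan(date(2026, 8, 2))` yields a milestone calendar ending two weeks before the submission date, which a project office can then reconcile against the bank's own model change calendar.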
Data Governance: AI Act Art.10 and Basel III Data Quality Requirements
The EU AI Act's data governance requirements (Art.10) and Basel III/IV data quality standards for model inputs converge on many of the same practical requirements. Understanding the overlap reduces duplication; identifying the gaps prevents compliance failures.
Where they converge:
- Training data documentation requirements (AI Act Art.10(2)(c): "data collection and data origin" documentation) match CRR requirements for documenting data sources and data quality assessments for internal model inputs
- Data relevance and representativeness (AI Act Art.10(3): training data "relevant, sufficiently representative and, to the best extent possible, free of errors") parallels EBA expectations for training data quality in ML models
- Bias examination (AI Act Art.10(5): examining training data for biases that could lead to prohibited discrimination) intersects with EBA guidance on avoiding algorithmic bias in credit decisions
Where the AI Act goes further:
- AI Act Art.10(4) requires data governance "practices for data collection, data preparation and data annotation" to be documented — more specific than generic CRR data quality requirements
- AI Act Art.10(5)(a)-(f) enumerates specific bias categories to examine (age, sex, disability, racial or ethnic origin, sexual orientation, religion or belief) that are not explicitly enumerated in CRR model governance requirements
- AI Act Art.10(6) explicitly covers processing of special categories of personal data under GDPR Art.9 conditions — relevant where banks use health or behavioral data as credit model inputs
Practical recommendation: Extend your existing CRR data governance documentation to explicitly address AI Act Art.10(2) and Art.10(5) requirements. The additional work is modest if your CRR data governance is already rigorous; it is substantial if your internal model documentation has gaps.
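A minimal flavor of the Art.10(3)-style representativeness checks is flagging under-represented groups in the training data. The attribute names and the 5% threshold below are illustrative assumptions, not values from the AI Act or EBA guidance.

```python
from collections import Counter

def representation_gaps(records: list[dict], attribute: str,
                        min_share: float = 0.05) -> list[str]:
    """Flag attribute values whose share of the training set falls below min_share.

    Illustrative sketch only: threshold and attribute names are assumptions,
    and a real Art.10 examination covers far more than group shares.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return [value for value, n in counts.items() if n / total < min_share]
```

Checks like this belong in the training pipeline itself, so the representativeness evidence required by both frameworks is produced automatically with every model build.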
Dual Reporting Obligations
Banks with AI systems in scope under Art.111 face reporting obligations under both the CRR framework and the AI Act. These reporting channels are distinct:
CRR/SSM reporting: Model performance reports, validation reports, and material model change notifications go to the NCA/SSM through established supervisory channels. These reports use supervisor-specified formats and timelines.
AI Act Art.73 incident reporting: Providers (or deployers under Art.73(3)) must report serious incidents — AI system malfunctions causing harm to health, safety, or fundamental rights — to market surveillance authorities no later than 15 days after becoming aware of the incident, with shorter deadlines for the gravest cases. This is a separate reporting obligation to a different authority, using AI Act-specified formats.
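For planning, the incident-reporting clock can be encoded directly. The limits below are calendar-day assumptions to confirm against the final Art.73 text and any implementing guidance before relying on them.

```python
from datetime import date, timedelta

# Assumed Art.73 reporting limits in calendar days; verify against the final
# regulation text before operational use.
_ART73_LIMITS_DAYS = {
    "serious": 15,                              # general outer limit
    "widespread_or_critical_infrastructure": 2, # gravest incident class
    "death": 10,
}


def art73_deadline(awareness_date: date, incident_type: str = "serious") -> date:
    """Latest reporting date for an incident, given the date the provider became aware."""
    return awareness_date + timedelta(days=_ART73_LIMITS_DAYS[incident_type])
```

Wiring a helper like this into the incident-management workflow ensures the market surveillance report and the parallel prudential notification are triggered from the same awareness timestamp.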
AI Act Art.72 post-market monitoring: Annual (or as specified in the monitoring plan) performance monitoring reports submitted to market surveillance authorities, plus logging data maintained under Art.12. The monitoring plan must be established before market placement.
Coordination requirement under Art.111: The amendment requires coordination between prudential supervisors and market surveillance authorities on AI systems that are both CRR-governed and AI Act high-risk. This means supervisory findings shared bilaterally — a good thing for reducing duplicative supervisory burden, but requiring banks and their AI vendors to manage communications with both authority types.
Python Tooling for FinTech AI Compliance Tracking
```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class AnnexIIICategory(Enum):
    CREDITWORTHINESS_5B = "5b_creditworthiness"
    INSURANCE_5C = "5c_life_health_insurance"
    EMPLOYMENT_4 = "4_employment"
    NOT_HIGH_RISK = "not_high_risk"


class IRBApproach(Enum):
    FOUNDATION = "F-IRB"
    ADVANCED = "A-IRB"
    NOT_IRB = "standardised"


@dataclass
class BankingAISystem:
    """Represents an AI system subject to Art.111 dual regulation."""
    name: str
    annex_iii_category: AnnexIIICategory
    irb_approach: IRBApproach
    deployment_date: Optional[date] = None
    conformity_assessment_date: Optional[date] = None
    irb_approval_date: Optional[date] = None
    monitoring_plan_established: bool = False
    eu_database_registered: bool = False
    ce_marking_affixed: bool = False
    vendor_provider: bool = True  # True if you are the AI provider (not the bank)


@dataclass
class Art111ComplianceStatus:
    """Tracks dual-regulation compliance status for a banking AI system."""
    system: BankingAISystem
    gaps: list[str] = field(default_factory=list)

    def assess(self) -> dict:
        """Perform an Art.111 dual-regulation compliance assessment."""
        self.gaps = []
        if self.system.annex_iii_category in (
            AnnexIIICategory.CREDITWORTHINESS_5B,
            AnnexIIICategory.INSURANCE_5C,
        ):
            self._check_high_risk_obligations()
        if self.system.irb_approach != IRBApproach.NOT_IRB:
            self._check_irb_ai_alignment()
        return {
            "system": self.system.name,
            "high_risk_classified": self.system.annex_iii_category != AnnexIIICategory.NOT_HIGH_RISK,
            "irb_in_scope": self.system.irb_approach != IRBApproach.NOT_IRB,
            "gaps": self.gaps,
            "compliance_ready": len(self.gaps) == 0,
            "days_to_deadline": (date(2026, 8, 2) - date.today()).days,
        }

    def _check_high_risk_obligations(self) -> None:
        if not self.system.conformity_assessment_date:
            self.gaps.append("Conformity assessment (Art.43) not completed")
        if not self.system.eu_database_registered:
            self.gaps.append("EU database registration (Art.71) missing")
        if not self.system.ce_marking_affixed:
            self.gaps.append("CE marking (Art.48) not affixed")
        if not self.system.monitoring_plan_established:
            self.gaps.append("Post-market monitoring plan (Art.72) not established")

    def _check_irb_ai_alignment(self) -> None:
        if self.system.irb_approval_date and self.system.conformity_assessment_date:
            if self.system.irb_approval_date < self.system.conformity_assessment_date:
                self.gaps.append(
                    f"IRB approval ({self.system.irb_approval_date}) preceded "
                    f"conformity assessment ({self.system.conformity_assessment_date}) — "
                    "re-submission may be required under Art.111 CRR amendments"
                )
        elif self.system.irb_approval_date and not self.system.conformity_assessment_date:
            self.gaps.append(
                "IRB approval obtained but AI Act conformity assessment not yet complete — "
                "prudential supervisor may require supplementary submission"
            )


def assess_banking_ai_portfolio(systems: list[BankingAISystem]) -> dict:
    """Assess an entire banking AI portfolio for Art.111 compliance."""
    results = []
    high_risk_count = 0
    irb_count = 0
    total_gaps = 0
    for system in systems:
        result = Art111ComplianceStatus(system=system).assess()
        results.append(result)
        if result["high_risk_classified"]:
            high_risk_count += 1
        if result["irb_in_scope"]:
            irb_count += 1
        total_gaps += len(result["gaps"])
    return {
        "total_systems": len(systems),
        "high_risk_systems": high_risk_count,
        "irb_ai_systems": irb_count,
        "total_compliance_gaps": total_gaps,
        "compliant_systems": sum(1 for r in results if r["compliance_ready"]),
        "days_to_august_2026": (date(2026, 8, 2) - date.today()).days,
        "systems": results,
    }


# Example usage
credit_scoring_ai = BankingAISystem(
    name="ML Credit Scoring Model v3",
    annex_iii_category=AnnexIIICategory.CREDITWORTHINESS_5B,
    irb_approach=IRBApproach.ADVANCED,
    deployment_date=date(2025, 3, 1),
    conformity_assessment_date=None,  # Not yet done
    irb_approval_date=date(2024, 11, 15),
    monitoring_plan_established=False,
    eu_database_registered=False,
    ce_marking_affixed=False,
)

status = Art111ComplianceStatus(system=credit_scoring_ai)
result = status.assess()
print(f"System: {result['system']}")
print(f"Compliance ready: {result['compliance_ready']}")
print(f"Days to August 2026: {result['days_to_deadline']}")
for gap in result["gaps"]:
    print(f"  GAP: {gap}")
```
30-Item Art.111 Banking AI Dual-Regulation Readiness Checklist
System Classification (Art.6, Annex III)
- 1. Identified all AI systems used in credit risk assessment, scoring, or IRB calculation
- 2. Confirmed Annex III Category 5b classification applies (natural persons in scope)
- 3. Distinguished provider vs. deployer role for each AI system in scope (Art.3)
- 4. Assessed whether any AI components are embedded in vendor-supplied model packages
- 5. Confirmed scope exclusion for pure B2B credit assessment (legal entities only)
AI Act Provider Obligations (Art.9–Art.15, Art.17)
- 6. Risk management system (Art.9) documented and operational for each high-risk system
- 7. Data governance (Art.10) documentation explicitly covers training data origin, representativeness, and bias examination per Art.10(5)
- 8. Technical documentation (Art.11, Annex IV) complete and version-controlled
- 9. Logging capability (Art.12) implemented to record AI system operation during deployment
- 10. Transparency documentation (Art.13) prepared for bank deployers, including system description and limitations
- 11. Human oversight mechanisms (Art.14) designed and implemented — humans can override AI credit decisions
- 12. Accuracy, robustness, and cybersecurity requirements (Art.15) assessed and validated
- 13. Quality management system (Art.17) established for the AI development lifecycle
Conformity Assessment and Market Entry (Art.43, Art.47, Art.48, Art.71)
- 14. Conformity assessment (Art.43) completed — self-assessment pathway documented
- 15. EU declaration of conformity (Art.47) drafted and signed
- 16. CE marking (Art.48) affixed to system documentation and product
- 17. Registration in EU AI systems database (Art.71) completed before deployment
- 18. Conformity assessment evidence packaged for inclusion in IRB approval submission
CRR Integration (Art.111, CRR Art.173, Art.185)
- 19. AI Act compliance evidence incorporated into CRR model approval submission
- 20. IRB model validation report explicitly addresses AI Act Art.9, Art.10, Art.14 requirements
- 21. Confirmed with prudential supervisor (NCA/SSM) what Art.111 documentation supplements are required
- 22. Material model change notifications to prudential supervisor will also trigger AI Act re-assessment review
- 23. Existing IRB-approved models with AI components assessed for supplementary Art.111 submission requirements
EBA Regulatory Technical Standards
- 24. Monitoring EBA consultation papers and final RTS on AI Act integration in banking supervision
- 25. Assigned responsibility for implementing EBA technical standards as they are finalized
- 26. Engaged with industry associations (EBF, AFME) on EBA RTS development consultation responses
Data Governance Dual-Track (Art.10, Basel III)
- 27. CRR data quality documentation extended to explicitly cover AI Act Art.10(2) and Art.10(5) requirements
- 28. Special categories of personal data (GDPR Art.9) use in credit models assessed and authorized under Art.10(6)
- 29. Data retention and access controls satisfy both AI Act Art.12 logging requirements and CRR record-keeping obligations
Ongoing Monitoring and Incident Reporting
- 30. Post-market monitoring plan (Art.72) integrated with CRR model performance monitoring — unified monitoring program satisfies both frameworks; incident reporting routes to both market surveillance authority (Art.73) and prudential supervisor (CRR material model change process)
Art.111 closes the gap that existed before the AI Act: banking AI operated under sophisticated model governance requirements, but without a formal legal framework mandating the specific AI safety, transparency, and human oversight standards that are now table stakes for any high-risk AI system in Europe. For FinTech developers building AI for European banks, the dual-regulation architecture is an added compliance burden in the short term and a market differentiator in the medium term: banks will increasingly require vendor AI systems to arrive with complete Art.111-compliant documentation packages, and the firms that build that capability first gain access to the most risk-averse institutional buyers in the market.