The EU AI Act and Banking: What Operations Leaders Need to Know
The EU Artificial Intelligence Act (AI Act) entered into force on 1 August 2024, with its provisions being phased in through 2 August 2027. It is the world's first comprehensive legal framework for artificial intelligence, and it will fundamentally change how banks develop, deploy, and govern AI systems.
For banking operations leaders, the AI Act is not an abstract policy debate. It directly affects the AI tools you are already using or planning to deploy—from credit scoring models to transaction monitoring systems, from chatbots to automated decision-making in claims processing. Understanding the classification system, the obligations, and the timeline is not optional. It is an operational imperative.
The Risk-Based Classification System
The AI Act classifies AI systems into four risk tiers. The obligations increase with the risk level.
Unacceptable Risk (Prohibited)
The following AI practices, among others, are banned outright from 2 February 2025:
- Social scoring, by public authorities and private actors alike: systems that evaluate individuals based on social behavior or personal characteristics and lead to detrimental treatment unrelated to the context in which the data was collected.
- Emotion recognition in the workplace and educational institutions.
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
Banking impact: Limited direct impact, but institutions should audit any employee monitoring tools or customer-facing systems that might inadvertently perform emotion recognition (e.g., sentiment analysis on customer calls used for staff performance evaluation).
High Risk (Heavily Regulated)
This is where the bulk of banking AI falls. The AI Act's Annex III explicitly lists the following as high-risk:
- Credit scoring and creditworthiness assessment: Any AI system used to evaluate an individual's credit risk, determine loan eligibility, or set pricing.
- Insurance risk assessment and pricing: AI systems used for risk assessment and pricing of life and health insurance for natural persons.
- Recruitment and HR decisions: AI tools used for CV screening, candidate ranking, or performance assessment.
- Essential services access: AI systems that determine access to essential private or public services, including financial services.
Banking impact: This captures a wide swathe of banking AI applications, including credit decisioning, automated underwriting, and AML alert triage where it affects customers' access to services. Note that Annex III carves out AI systems used purely to detect financial fraud from the creditworthiness category, so fraud-scoring models need a case-by-case classification.
Limited Risk (Transparency Obligations)
AI systems that interact with humans (chatbots, virtual assistants) must make clear to users that they are interacting with an AI system, unless this is obvious from the context.
Banking impact: Customer-facing chatbots, virtual assistants, and automated call systems must clearly identify themselves as AI.
Minimal Risk (No Specific Obligations)
AI systems that pose minimal risk (spam filters, inventory management, search optimization) face no specific requirements.
The Obligations for High-Risk AI Systems
For banking AI classified as high-risk, the obligations under Articles 8-15 are substantial:
1. Risk Management System (Article 9)
You must establish and maintain a continuous, iterative risk management process for each high-risk AI system. This includes:
- Identification and analysis of known and foreseeable risks.
- Estimation and evaluation of risks that may emerge when the system is used as intended and under conditions of reasonably foreseeable misuse.
- Adoption of appropriate risk mitigation measures.
Practical implication: Every AI model used in credit scoring, fraud detection, or AML must have a documented risk assessment that is reviewed and updated regularly—not just at deployment.
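As a concrete illustration, a risk register entry for one high-risk model can be captured as a structured record with a built-in review cadence. The Python sketch below is one possible way to model such a register; the field names and the quarterly review interval are illustrative assumptions, not requirements set by Article 9.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskRegisterEntry:
    """One identified risk for a high-risk AI system (illustrative Article 9 register entry)."""
    model_id: str
    risk_description: str
    likelihood: str                   # e.g. "low" / "medium" / "high"
    impact: str                       # e.g. "low" / "medium" / "high"
    mitigation: str
    last_reviewed: date
    review_interval_days: int = 90    # assumed quarterly review cadence

    def review_overdue(self, today: date | None = None) -> bool:
        """Flag entries whose periodic review is overdue."""
        today = today or date.today()
        return today > self.last_reviewed + timedelta(days=self.review_interval_days)

entry = RiskRegisterEntry(
    model_id="credit-scoring-v4",
    risk_description="Model underperforms for thin-file applicants",
    likelihood="medium",
    impact="high",
    mitigation="Route thin-file applications to manual underwriting",
    last_reviewed=date(2025, 11, 1),
)
print(entry.review_overdue())
```

The point of the structure is the last two fields: a risk assessment that cannot tell you when it was last reviewed is not "continuous and iterative" in any demonstrable sense.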
2. Data Governance (Article 10)
Training, validation, and testing datasets must be subject to data governance and management practices that address:
- Relevance, representativeness, and completeness.
- Possible biases that could lead to discriminatory outcomes.
- Data quality and appropriate data preparation and labeling.
Practical implication: If your credit scoring model was trained predominantly on data from one demographic segment, you must assess and mitigate the risk that it discriminates against underrepresented groups. This requirement has teeth—the European Commission has made clear that algorithmic discrimination in financial services is a priority concern.
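To make this concrete, a basic fairness check might compare approval rates across demographic groups in the training data. The sketch below, using illustrative data, computes a disparate-impact ratio per group; the 0.8 "four-fifths" threshold is a common rule of thumb borrowed from fair-lending practice, not a figure set by the AI Act.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate per group divided by the best-treated group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical training-set decisions: 1 = approved, 0 = declined
data = pd.DataFrame({
    "age_band": ["18-25", "18-25", "26-60", "26-60", "60+", "60+"],
    "approved": [0, 1, 1, 1, 0, 1],
})

ratios = disparate_impact(data, "age_band", "approved")
flagged = ratios[ratios < 0.8]        # common "four-fifths" rule of thumb
print(ratios)
print("Groups needing further review:", list(flagged.index))
```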
3. Technical Documentation (Article 11)
Detailed technical documentation must be maintained before the system is placed on the market or put into service. This documentation must be sufficient for competent authorities to assess the system's compliance.
Practical implication: The "we fine-tuned a model and deployed it" approach is no longer acceptable. You need comprehensive documentation of the model architecture, training methodology, data sources, performance metrics, known limitations, and testing results.
4. Record-Keeping and Logging (Article 12)
High-risk AI systems must be designed to automatically record events (logs) throughout their lifecycle. Logs must enable traceability of the system's operation, including:
- When and by whom the system was used.
- The input data and the output generated.
- Any anomalies or errors detected.
Practical implication: Every credit decision, every fraud score, every AML alert triage must be logged in sufficient detail to reconstruct, after the fact, what the system received and what it produced. This is not just good practice; it is a legal requirement.
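What such logging might look like in code: the sketch below appends one structured record per automated decision to a JSON-lines file. The schema, file format, and field names are assumptions for illustration; the Act prescribes traceability, not a particular logging technology.

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(log_path: str, model_id: str, user: str,
                 input_features: dict, output: dict) -> str:
    """Append one structured, traceable record per automated decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "used_by": user,            # who (or which service) invoked the model
        "input": input_features,    # features as seen by the model
        "output": output,           # score, decision, and model version
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]

event_id = log_decision(
    "decision_log.jsonl",
    model_id="credit-scoring-v4",
    user="loan-origination-service",
    input_features={"income": 42000, "existing_debt": 8000},
    output={"score": 0.71, "decision": "refer_to_underwriter"},
)
```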
5. Human Oversight (Article 14)
High-risk AI systems must be designed to allow effective human oversight. This includes:
- The ability for a human to understand the system's capabilities and limitations.
- The ability to correctly interpret the system's output.
- The ability to override or reverse the AI's decision.
Practical implication: Fully automated credit decisions with no human review pathway will not comply. You must design systems in which humans can intervene and override the AI's decisions, and are genuinely empowered to do so.
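One common design pattern, sketched below under assumed thresholds, is to auto-decide only when the model is confident and to route borderline cases to a human reviewer who can confirm or override the recommendation. The review band and function names are illustrative, not taken from the Act.

```python
from dataclasses import dataclass

@dataclass
class CreditDecision:
    score: float                  # model score in [0, 1]
    recommendation: str           # "approve", "decline", or "refer_to_human"
    decided_by: str               # "model", "pending", or a reviewer identifier
    overridden: bool = False

def decide(score: float, review_band: tuple[float, float] = (0.4, 0.6)) -> CreditDecision:
    """Auto-decide only outside an assumed uncertainty band; otherwise refer to a human."""
    if review_band[0] <= score <= review_band[1]:
        return CreditDecision(score, "refer_to_human", decided_by="pending")
    recommendation = "approve" if score > review_band[1] else "decline"
    return CreditDecision(score, recommendation, decided_by="model")

def human_override(decision: CreditDecision, reviewer: str, new_recommendation: str) -> CreditDecision:
    """Record a reviewer's override of the model's recommendation."""
    return CreditDecision(decision.score, new_recommendation, decided_by=reviewer, overridden=True)

d = decide(0.55)                                   # falls in the review band
d = human_override(d, "underwriter_42", "approve") # human decision is recorded, not silent
```

The key design point is that the override itself is recorded, so human oversight is demonstrable rather than theoretical.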
6. Accuracy, Robustness, and Cybersecurity (Article 15)
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle, and these levels must be declared in the documentation.
Practical implication: You must define and monitor accuracy KPIs (e.g., precision, recall, AUC) and demonstrate that the model maintains performance over time. Model drift is not just an operational concern—it is a compliance concern.
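One widely used drift measure is the Population Stability Index (PSI), which compares the score distribution seen in production against the distribution at validation time. The sketch below is a minimal illustration assuming scores lie between 0 and 1; the 0.2 alert threshold is a common industry rule of thumb, not a regulatory figure.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (validation-time) and a recent score distribution."""
    edges = np.linspace(0.0, 1.0, bins + 1)   # assumes scores lie in [0, 1]
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)        # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)             # scores at validation time (illustrative)
recent = rng.beta(2.5, 5, 10_000)             # scores observed in production this month
psi = population_stability_index(baseline, recent)
if psi > 0.2:                                 # common rule-of-thumb alert level
    print(f"PSI {psi:.3f}: investigate possible model drift")
```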
The Conformity Assessment
Before deploying a high-risk AI system, providers must conduct a conformity assessment (Article 43). For most banking AI applications, this is a self-assessment (as opposed to third-party assessment), but it must be documented and defensible.
The conformity assessment must demonstrate that the system meets all the requirements listed above. Think of it as the AI equivalent of the ICAAP (Internal Capital Adequacy Assessment Process)—a structured, evidence-based self-assessment that supervisors will scrutinize.
The Role of Financial Supervisors
The AI Act designates national market surveillance authorities to enforce compliance. However, for financial institutions, the Act explicitly recognizes the role of existing financial supervisors:
- The EBA has been designated to advise on AI in banking and is developing supplementary guidance on how the AI Act interacts with existing prudential and conduct-of-business requirements.
- The ECB has indicated that it will incorporate AI governance into its SREP assessments for significant institutions.
- The FCA in the UK, while not directly subject to the EU AI Act, has published (jointly with the Bank of England and the PRA) the AI and Machine Learning Discussion Paper DP5/22 and a subsequent feedback statement, signaling convergence on similar principles, particularly around explainability, fairness, and human oversight.
- The PRA has incorporated AI model risk into its supervisory expectations under SS1/23 (Model Risk Management), which requires banks to apply consistent model risk management to all models, including AI/ML.
The Timeline: What to Do Now
| Date | Milestone | Action Required |
|---|---|---|
| Feb 2025 | Prohibited practices apply | Audit and remove any banned AI uses |
| Aug 2025 | Obligations for General Purpose AI (GPAI) models apply | Assess whether you use or deploy GPAI models |
| Aug 2026 | High-risk AI obligations apply | Full compliance: risk management, documentation, logging, human oversight |
| Aug 2027 | Extended deadline for certain embedded AI | Final compliance for AI embedded in regulated products |
The clock is ticking. The August 2026 deadline for high-risk systems is 6 months away. Institutions that have not started their readiness programs are already behind.
A Practical Readiness Roadmap
Phase 1: AI Inventory (Now)
You cannot comply with rules about AI systems if you do not know what AI systems you have. Conduct a comprehensive AI inventory across the organization, capturing at least the following (a minimal inventory record is sketched after this list):
- What AI/ML models are in production?
- Who owns them?
- What data do they use?
- What decisions do they influence?
- Which risk tier do they fall under?
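A minimal inventory record can be as simple as a list of structured entries, whether kept in a spreadsheet, a GRC tool, or code. All field names and example models below are illustrative.

```python
# Illustrative AI inventory entries; every value here is hypothetical.
inventory = [
    {
        "model_id": "credit-scoring-v4",
        "owner": "Retail Credit Risk",
        "data_sources": ["loan_applications_2019_2024", "bureau_data"],
        "decisions_influenced": "Retail loan approvals and pricing",
        "risk_tier": "high",
    },
    {
        "model_id": "branch-footfall-forecast",
        "owner": "Operations Analytics",
        "data_sources": ["branch_visit_counts"],
        "decisions_influenced": "Staff scheduling",
        "risk_tier": "minimal",
    },
]

high_risk = [m["model_id"] for m in inventory if m["risk_tier"] == "high"]
print("Models requiring Articles 8-15 compliance work:", high_risk)
```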
Phase 2: Gap Assessment (Q1 2026)
For each high-risk AI system, assess compliance against each Article (9-15). Identify gaps in documentation, logging, human oversight, and bias testing.
Phase 3: Remediation (Q2 2026)
Close the gaps. This may involve:
- Enhancing model documentation.
- Implementing logging and audit trail capabilities.
- Adding human override mechanisms.
- Conducting bias and fairness assessments on training data.
- Establishing ongoing monitoring frameworks.
Phase 4: Conformity Assessment (Q3 2026)
Conduct and document the conformity assessment for each high-risk system. Prepare for supervisory review.
Conclusion: Regulation as Competitive Advantage
The EU AI Act will increase the cost and complexity of deploying AI in banking. That is undeniable. But institutions that build robust AI governance frameworks will gain a competitive advantage—not despite the regulation, but because of it. They will deploy AI with greater confidence, face fewer supervisory challenges, and build greater trust with customers and counterparties. The institutions that treat the AI Act as a checkbox exercise will struggle. Those that treat it as a catalyst for mature, responsible AI deployment will thrive.
Need expert support?
Our specialists deliver audit-ready documentation and transformation programmes in weeks, not months. Let's discuss your requirements.
Book a Consultation