
EU AI Act High-Risk Classification: A Practical Guide for COOs in Regulated Industries

April 05, 2026

The EU AI Act entered into force on 1 August 2024. The high-risk provisions, which matter most for regulated industries, apply from 2 August 2026. If you are a COO in banking, insurance, healthcare, energy, or the public sector and you have not yet reconciled your AI portfolio against the Act's high-risk classification, you are running out of runway.

This guide cuts through the legal commentary and focuses on what a COO actually needs to know: which of your AI use cases are high-risk, what obligations attach to that classification, and what governance machinery you need to have in place by the compliance deadline.

What "high-risk" means under the Act

The EU AI Act uses a risk-based classification framework. AI systems are classified into four tiers: unacceptable risk (banned), high-risk (heavy obligations), limited risk (transparency obligations), and minimal risk (no specific obligations).

The Act classifies a system as high-risk by two routes: the use cases listed in Annex III, and AI that is a safety component of a product regulated under EU harmonisation legislation (the Article 6(1) route). Taken together, high-risk AI includes AI used in:

  • Credit scoring and creditworthiness assessment (directly relevant to banking)
  • Risk assessment and pricing in life and health insurance (relevant to insurance)
  • Recruitment, HR, and workforce management (relevant to all sectors)
  • Access to essential public services and benefits (relevant to public sector)
  • Law enforcement, border control, and migration (relevant to public sector)
  • Education and vocational training (relevant to public sector)
  • Biometric identification (relevant to all sectors using biometric authentication)
  • Safety components of products regulated under EU harmonisation legislation, in scope via Article 6(1) rather than Annex III (relevant to medical devices in healthcare and life sciences, and safety-critical systems in energy)
  • Critical infrastructure management (relevant to energy grid operations)

The European Commission's official text is the authoritative source. The European AI Office publishes guidance on interpretation.

The obligations on high-risk deployers

If you deploy (not just develop) a high-risk AI system, the Act touches you across eight categories of obligation. Several fall formally on the provider, but as deployer you must verify they are met and operate the system within them:

1. Risk management system

You must establish and maintain a risk management system that identifies, analyses, evaluates, and mitigates the risks of the AI system throughout its lifecycle. This is not a one-time risk assessment. It is a continuous process, similar in spirit to model risk management under PRA SS1/23.
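
As a rough sketch of what "continuous" means in practice, each risk in the register carries an owner, a status, and a standing review date rather than a one-off sign-off. The schema below is an illustrative assumption, not a structure the Act prescribes:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskStatus(Enum):
    IDENTIFIED = "identified"
    MITIGATED = "mitigated"
    ACCEPTED = "accepted"
    CLOSED = "closed"

@dataclass
class RiskRecord:
    """One entry in a living risk register for a high-risk AI system."""
    system_id: str
    description: str
    severity: int          # assumed scale: 1 (low) to 5 (critical)
    likelihood: int        # assumed scale: 1 (rare) to 5 (near-certain)
    mitigation: str
    owner: str
    status: RiskStatus = RiskStatus.IDENTIFIED
    next_review: date = field(default_factory=date.today)

    def is_due_for_review(self, today: date) -> bool:
        # A lifecycle process means every open risk has a standing
        # review date, not a one-off assessment sign-off.
        return self.status != RiskStatus.CLOSED and today >= self.next_review
```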

2. Data governance

The training, validation, and testing datasets must meet quality criteria: relevance, representativeness, freedom from errors, and completeness. You must document what data was used, how it was prepared, and what biases may be present. For firms already building the action-data layer, this is an extension of existing discipline.
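
To make the documentation duty concrete, here is a minimal datasheet-style record per dataset split. The field names are assumptions for illustration; the Act mandates the substance (provenance, preparation, known biases), not this schema:

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Minimal datasheet for one training, validation, or testing split."""
    name: str
    role: str                     # "training" | "validation" | "testing"
    source: str                   # provenance of the raw data
    preparation_steps: list[str]  # cleaning, labelling, filtering applied
    known_biases: list[str]       # documented gaps or skews
    coverage_rationale: str       # why the data is relevant and representative

# Hypothetical example entry for a credit scoring model's training split.
training_split = DatasetRecord(
    name="retail_lending_2020_2024",
    role="training",
    source="internal loan book, anonymised",
    preparation_steps=["deduplication", "income field normalisation"],
    known_biases=["under-represents thin-file applicants"],
    coverage_rationale="covers all retail lending products sold in the EU",
)
```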

3. Technical documentation

You must maintain detailed technical documentation that allows regulators to assess the AI system's compliance. This includes the system's purpose, design, development process, performance metrics, and limitations. The documentation must be kept up to date throughout the system's lifecycle.
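
A sketch of what "kept up to date" implies structurally: the documentation is a versioned artefact whose sections map to the required contents (Annex IV of the Act defines the full list). The section names below paraphrase this article's summary, not the Annex IV headings verbatim:

```python
# Versioned skeleton of a technical documentation manifest. The values
# are placeholders; "version" is bumped on every material change so the
# documentation tracks the system's lifecycle.
TECH_DOC_MANIFEST = {
    "system_id": "credit-scoring-v3",      # hypothetical identifier
    "intended_purpose": "...",
    "design_and_development_process": "...",
    "performance_metrics": {"accuracy": None, "robustness_tests": None},
    "known_limitations": ["..."],
    "version": "2026-08-01",
}
```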

4. Record-keeping and logging

The AI system must automatically log events that are relevant for tracing the system's behaviour. Logs must be kept for a period appropriate to the system's purpose, with a floor of six months under the Act and longer where sector-specific regulation requires. For financial services firms, this aligns with existing decision log requirements under supervisory expectations.
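
A minimal sketch of automatic event logging, assuming a hypothetical log_decision helper and a JSON-lines sink; in production the sink would be an append-only store with a retention policy enforcing at least the six-month floor:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decision_log")

def log_decision(system_id: str, model_version: str,
                 inputs: dict, output: str) -> None:
    """Emit one traceable, timestamped record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,  # ties the decision to a model build
        "inputs": inputs,                # or a hash/reference if sensitive
        "output": output,
    }
    logger.info(json.dumps(record))

# Hypothetical call from a credit decision workflow.
log_decision("credit-scoring-v3", "3.2.1",
             {"applicant_id": "a-123", "score_inputs_ref": "blob://..."},
             "refer_to_underwriter")
```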

5. Transparency and provision of information to deployers

If you are a deployer (not the developer), the provider must give you sufficient information to understand the system's capabilities, limitations, and intended use. You must use the system in accordance with the instructions for use. This has procurement implications: your vendor contracts must include transparency provisions.

6. Human oversight

High-risk AI systems must be designed and developed to allow effective human oversight. This means the system must have interfaces that allow humans to understand the output, override the system, and intervene when necessary. This is the regulatory codification of the decision rights framework and system supervision model we use in every AI enablement engagement.
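
One common oversight pattern is a referral gate: outputs that trip a routing rule go to a human who can confirm, amend, or overturn them. The sketch below assumes a confidence-based rule; the Act requires effective oversight, not this particular mechanism:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str
    confidence: float
    rationale: str

def decide_with_oversight(
    model_decision: Decision,
    needs_review: Callable[[Decision], bool],
    human_review: Callable[[Decision], Decision],
) -> Decision:
    """Route flagged outputs to a human reviewer; pass the rest through."""
    if needs_review(model_decision):
        # The reviewer sees the output and rationale, and can override it.
        return human_review(model_decision)
    return model_decision

# Example routing rule: refer anything under 80% confidence
# (an assumed threshold, set by your decision rights framework).
refer_low_confidence = lambda d: d.confidence < 0.80
```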

7. Accuracy, robustness, and cybersecurity

The system must achieve appropriate levels of accuracy, robustness, and cybersecurity. The metrics must be documented and monitored. This is continuous performance monitoring, similar to the data flywheel monitoring requirements but with an additional cybersecurity dimension.
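
As an illustration of what "documented and monitored" means, a rolling accuracy check against a declared floor; the 0.95 value is an arbitrary placeholder, since the Act requires that you declare and monitor the metric, not any particular number:

```python
def accuracy_within_floor(window_correct: int, window_total: int,
                          declared_floor: float = 0.95) -> bool:
    """Compare rolling accuracy over a recent window against the
    documented floor; a breach should raise an alert and feed back
    into the risk management system (obligation 1)."""
    if window_total == 0:
        return True  # nothing to evaluate yet
    return (window_correct / window_total) >= declared_floor
```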

8. Conformity assessment

Before placing a high-risk AI system on the EU market or putting it into service, it must undergo a conformity assessment. For most high-risk AI systems in regulated industries this is a self-assessment through internal controls rather than a third-party audit (remote biometric identification is the main exception), but it must be documented and the results must be available to regulators.

The Ada Lovelace Institute and AI Now Institute have published useful analysis of the Act's implications for different sectors.

What this means for your operating model

The practical implication for a COO is that every high-risk AI use case needs governance machinery that produces evidence of compliance across all eight obligation categories, continuously and as a by-product of operation. Not as a quarterly report. Not as a compliance retrofit before an audit. As a structural feature of the workflow.

This is exactly the embedded governance model we describe in the AI enablement framework. The governance is designed into the workflow from day one, and the evidence is captured as the system operates, not assembled after the fact.

For firms that have already done the structural AI enablement work under PRA SS1/23 or MHRA expectations, the EU AI Act adds an incremental layer rather than a new programme. The core discipline (risk management, data governance, documentation, logging, human oversight, monitoring) is the same. The EU AI Act just formalises it into law.

For firms that have not yet done the structural work, the EU AI Act is the forcing function. The compliance deadline is real, the penalties are material (up to EUR 15 million or 3% of global annual turnover, whichever is higher, for non-compliance with high-risk obligations), and the supervisory expectation is that firms will be able to demonstrate compliance from August 2026.

The sector-specific lens

Each regulated industry faces a slightly different set of high-risk classifications:

Banking: Credit scoring and creditworthiness assessment are explicitly named. KYC/AML models that use biometric identification are also in scope. Most banks have 5-15 AI use cases that fall under the high-risk classification.

Insurance: AI used for risk assessment and pricing in life and health insurance is explicitly in scope. Claims AI that affects settlement outcomes may be in scope depending on how regulators interpret "access to services." Whether traditional actuarial models count as AI systems under the Act is still being actively debated.

Healthcare: Clinical decision support and diagnostic AI that qualify as medical devices are in scope under both the MDR/IVDR pathway and the EU AI Act simultaneously. This is a dual regulatory burden that most health systems have not yet reconciled.

Energy: Safety-critical AI in grid operations and critical infrastructure management is explicitly in scope. The intersection with the safety case under HSE/OSHA adds a third regulatory layer.

Public sector: AI used in access to public services, benefits, immigration, and law enforcement is explicitly high-risk. Government departments face the most extensive set of high-risk classifications.

Getting started

The first step is to map your AI portfolio against Annex III (and the Article 6(1) product route) and classify each use case as prohibited, high-risk, limited-risk, or minimal-risk. Our AI Enablement diagnostic engagement includes this classification as a standard deliverable and produces a compliance gap analysis with a sequenced remediation roadmap.
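
As a first-pass illustration of that mapping, the sketch below buckets each use case by category. The category names paraphrase Annex III headings and are assumptions of this sketch; actual classification needs legal review of each use case, not keyword matching:

```python
# Categories paraphrasing Annex III headings (not the legal text).
ANNEX_III_CATEGORIES = {
    "credit_scoring", "life_health_insurance_pricing", "recruitment_hr",
    "essential_services_access", "law_enforcement", "migration_border",
    "education_training", "remote_biometric_identification",
    "critical_infrastructure",
}

def triage(use_case_category: str, is_product_safety_component: bool) -> str:
    """Return a first-pass risk tier for one AI use case."""
    if use_case_category in ANNEX_III_CATEGORIES:
        return "high-risk (Annex III)"
    if is_product_safety_component:
        return "high-risk (Article 6(1) product safety route)"
    return "limited or minimal risk (confirm any transparency duties)"

print(triage("credit_scoring", False))  # -> high-risk (Annex III)
```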

For a self-assessment before engaging, the AI Enablement Maturity Diagnostic scores your governance pillar against the five dimensions that matter for EU AI Act readiness.

For the detailed regulatory mechanics, the AI Governance and Model Risk course covers the EU AI Act framework in Module 2 (Regulatory Landscape) and Module 3 (Use Case Triage).

The firms that reconcile their AI portfolios against the Act earliest will deploy with confidence while their competitors are still arguing about scope. The August 2026 deadline is not aspirational. It is a compliance obligation.

Score this against your own organisation

Take the AI Enablement Maturity Diagnostic — 25 questions across the five pillars (production function, data layer, decision systems, operating model, governance). Per-pillar breakdown and prioritised next steps in 5 minutes.

Take the diagnostic

Ready to do the structural work?

Our AI Enablement engagements are built around the five pillars in this article. We start with a focused diagnostic, then redesign one priority workflow end-to-end as proof — including the data layer, decision rights, and governance machinery.

Explore the AI Enablement service