From regulatory abstraction to portfolio reality
Module 2 covered the regulatory landscape. This module covers the practical work of turning that landscape into a portfolio view of your specific organisation's AI use cases — what you have, what risk tier each one is in, and what governance each one needs.
Without this exercise, governance is abstract. With it, you have a working risk view that the C-suite can act on, that the regulator can be shown, and that the AI team can use to understand what level of governance each new use case will need. The exercise typically takes 30–60 days for the first pass and then becomes ongoing.
Step 1: Build the inventory (and find the shadow AI)
Before you can triage, you need to know what you have. This is harder than it sounds, because most enterprises have substantially more AI in production than the central function knows about.
A typical first-pass inventory exercise discovers:
- Officially tracked AI use cases. Listed in the AI/data/innovation team's portfolio. These are the easy ones.
- Vendor-embedded AI. Features inside SaaS products that nobody thought of as "AI" — Salesforce Einstein, Microsoft Copilot, ServiceNow Now Assist, Workday recommendations, your CRM's lead scoring. These are AI under any meaningful definition and are usually missing from the central inventory.
- Shadow AI in business teams. Models or AI tools deployed inside specific business units without central governance — a marketing team using GenAI to draft customer communications, a fraud team running an off-the-shelf anomaly detector, an ops team using a vendor's "intelligent automation" feature.
- Pilots and experiments. Things the AI team knows about but that are not formally tracked because they "aren't in production yet."
- Decommissioned-but-still-running. Use cases that were officially shut down but where the model still drives parts of the workflow.
The right approach to this discovery is collaborative, not punitive. Most shadow AI is well-intentioned and often well-built. Punishing the teams who built it just drives the next round of shadow AI further into the shadows. The goal is to find it, document it, and integrate it into the governance framework.
A useful inventory format captures, per use case:
- Name and owning team
- Business purpose (in one sentence)
- Type of AI (rule engine, classical ML, deep learning, GenAI, vendor-supplied)
- Inputs and outputs
- Decision impact (what happens because of the model's output)
- Data sources
- Model provenance (in-house, vendor, open-source)
- Deployment status (production, pilot, dev, shadow)
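The inventory fields above can be captured as a simple structured record. A minimal sketch in Python — the class and enum names here are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class AIType(Enum):
    RULE_ENGINE = "rule engine"
    CLASSICAL_ML = "classical ML"
    DEEP_LEARNING = "deep learning"
    GENAI = "GenAI"
    VENDOR = "vendor-supplied"

class Status(Enum):
    PRODUCTION = "production"
    PILOT = "pilot"
    DEV = "dev"
    SHADOW = "shadow"

@dataclass
class UseCase:
    name: str
    owning_team: str
    purpose: str              # business purpose, in one sentence
    ai_type: AIType
    inputs: list[str]
    outputs: list[str]
    decision_impact: str      # what happens because of the model's output
    data_sources: list[str]
    provenance: str           # "in-house", "vendor", or "open-source"
    status: Status
```

Even a spreadsheet with these columns works for the first pass; the point is that every use case, shadow or official, gets the same fields.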
You should aim to have this inventory complete within 30 days. It is the foundation of everything else.
Step 2: Triage each use case against the EU AI Act tiers
For each inventoried use case, ask the EU AI Act question first:
- Is this an unacceptable-risk use case? (Almost certainly no — these are mostly around government social scoring and certain biometric surveillance.)
- Is this a high-risk use case? Check the named categories: credit scoring/creditworthiness, biometric ID, employment-related decisions, AI used in essential services, certain critical infrastructure. If you operate consumer credit or HR-related AI, expect high-risk.
- Is this limited-risk? Customer-facing AI that needs disclosure (chatbots, recommender systems with significant influence on user choice).
- Is this minimal-risk? Most internal productivity AI lands here.
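The four questions above form a decision cascade, which can be sketched as a first-pass triage function. This is illustrative only — the flag names are assumptions set during inventory review, and the output is a starting point for analysis, not a legal determination:

```python
def eu_ai_act_tier(flags: dict) -> str:
    """First-pass EU AI Act triage from inventory-review flags.

    Illustrative, not a legal determination. Expects boolean flags
    such as {"credit_scoring": True, "customer_facing": False}.
    """
    # Unacceptable risk: almost certainly not you, but check first
    if flags.get("social_scoring") or flags.get("realtime_biometric_surveillance"):
        return "unacceptable"
    # High risk: the named categories
    high_risk = ("credit_scoring", "biometric_id", "employment_decision",
                 "essential_services", "critical_infrastructure")
    if any(flags.get(f) for f in high_risk):
        return "high"
    # Limited risk: customer-facing AI needing disclosure
    if flags.get("customer_facing"):
        return "limited"
    # Everything else: minimal risk
    return "minimal"
```

Ambiguous cases should not be forced through a function like this silently — route them to the documented-decision process described below.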
Triage is sometimes ambiguous, which is precisely why the decision should be documented. If you have a use case where reasonable people could disagree, document the analysis, document the decision, and document who made it. That document is the artefact you show a supervisor if asked.
Step 3: Triage each use case against SS1/23 materiality
For UK-regulated firms, the EU AI Act tier is not enough — you also need to apply the materiality test from SS1/23. SS1/23 is technology-agnostic and asks a different question: how consequential is the failure of this model?
A useful materiality framework looks at four dimensions:
- Customer impact. Could a model failure produce direct customer harm, especially to vulnerable customers?
- Financial impact. Could a failure produce material financial loss, either directly or through incorrect regulatory submissions?
- Reputational impact. Could a failure produce reputational damage at the firm level?
- Regulatory impact. Could a failure produce a supervisory finding or breach?
A use case that scores material on any one of these dimensions is material under SS1/23 and needs the full governance treatment: documented model risk artefacts, independent validation, ongoing monitoring, named ownership.
A use case that scores low on all four can be governed more lightly — but should still be in the inventory and reviewed periodically.
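The "material on any one dimension" rule above is deliberately simple, and can be sketched directly. The function name and parameters here are illustrative:

```python
def ss123_materiality(customer: bool, financial: bool,
                      reputational: bool, regulatory: bool) -> str:
    """SS1/23 materiality per the four-dimension test above.

    Material on ANY one dimension means full governance treatment;
    low on all four means lighter governance, but never exemption
    from the inventory.
    """
    if any([customer, financial, reputational, regulatory]):
        return "material"
    return "non-material"
```

Note the asymmetry: materiality is an OR over the dimensions, not an average, so one serious dimension cannot be diluted by three benign ones.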
Step 4: Cross-reference with FCA and DORA
The FCA principles-based view adds two questions:
- Consumer Duty. Does this use case touch retail customers in a way that affects their outcomes? If yes, the Consumer Duty framework applies — you need to be able to show that the use case contributes to good outcomes, that customers can understand the AI's role, and that you monitor for poor outcomes.
- SYSC. Are there adequate systems and controls? This is broad and judgment-based but should be answered explicitly.
DORA adds two more:
- Third-party criticality. If the AI depends on a vendor (LLM provider, AI platform, embedded SaaS feature), is that vendor a "critical ICT third party"? If yes, the contract, exit strategy, and supervisory access requirements of DORA apply.
- Operational resilience. What happens if the AI goes down? Is there a fallback? Has it been tested?
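The FCA and DORA questions above overlay extra flags onto each use case rather than changing its risk tier. A sketch of that overlay, assuming boolean flags collected during inventory review (all field names here are hypothetical):

```python
def cross_reference_flags(uc: dict) -> dict:
    """Step 4 overlay: FCA and DORA checks per use case.

    Input flags are assumptions recorded at inventory time, e.g.
    {"retail_customer_impact": True, "vendor_dependency": True, ...}.
    """
    return {
        # Consumer Duty applies if retail customer outcomes are affected
        "consumer_duty": uc.get("retail_customer_impact", False),
        # SYSC is judgment-based but must be answered explicitly for every case
        "sysc_review_needed": True,
        # DORA critical-ICT-third-party obligations
        "dora_critical_vendor": (uc.get("vendor_dependency", False)
                                 and uc.get("vendor_critical", False)),
        # Operational resilience: a fallback exists AND has been tested
        "resilience_gap": not uc.get("fallback_tested", False),
    }
```

The output is a checklist per use case, feeding directly into the portfolio view in the next step.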
Step 5: Produce the portfolio view
The output of all this triage is a single document — the AI portfolio view — that lists every use case with its risk classifications and the governance treatment each one requires. A working format:
| Use case | Status | EU AI Act | SS1/23 | Consumer Duty | DORA | Governance treatment |
|---|---|---|---|---|---|---|
| Credit decisioning | Production | High | Material | Yes | Critical vendor | Full + monthly review |
| Fraud detection | Production | Limited | Material | No | Internal | Full + quarterly review |
| Coding copilot | Production | Minimal | Non-material | No | Vendor | Light + annual review |
| Customer FAQ chatbot | Production | Limited | Non-material | Yes | Vendor | Standard + Consumer Duty |
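A table like the one above should be generated from the inventory rather than maintained by hand, so it cannot drift from the source data. A minimal sketch, assuming each use case is a dict keyed by the column names:

```python
COLUMNS = ["Use case", "Status", "EU AI Act", "SS1/23",
           "Consumer Duty", "DORA", "Governance treatment"]

def portfolio_table(use_cases: list[dict]) -> str:
    """Render the portfolio view as a markdown table.

    Assumes each use case dict carries exactly the COLUMNS keys.
    """
    lines = ["| " + " | ".join(COLUMNS) + " |",
             "|" + "---|" * len(COLUMNS)]
    for uc in use_cases:
        lines.append("| " + " | ".join(uc[c] for c in COLUMNS) + " |")
    return "\n".join(lines)
```

Regenerating this document from the live inventory on a schedule is what turns the first-pass exercise into the ongoing process described earlier.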
This is the document you should be able to put in front of any executive or supervisor and walk them through.
Step 6: Set governance levels per tier
Once each use case is classified, define what "governance" actually means at each level:
Full governance (high-risk + material):
- Documented model risk file
- Independent validation
- Continuous monitoring with SLOs
- Named accountable owner in first line
- Second-line oversight
- Quarterly model risk committee review
- Incident reporting line
Standard governance (mid-risk):
- Documented model card
- Pre-deployment validation
- Periodic monitoring
- Named owner
- Annual review
Light governance (low-risk):
- Listed in inventory
- Lightweight description
- Owner identified
- Annual existence check
These tiers should be calibrated to your organisation's risk appetite and the supervisory environment you operate in. The point is to have explicit, defensible categories rather than ad-hoc treatment.
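The tier assignment itself reduces to a small mapping from the two classifications to a governance level. A sketch of one defensible calibration — yours may differ, and that difference should itself be a documented decision:

```python
def governance_level(eu_tier: str, materiality: str) -> str:
    """Map (EU AI Act tier, SS1/23 materiality) to a governance level.

    Illustrative calibration: high-risk OR material gets full treatment;
    limited-risk gets standard; everything else gets light governance.
    """
    if eu_tier == "high" or materiality == "material":
        return "full"
    if eu_tier == "limited":
        return "standard"
    return "light"
```

Encoding the mapping explicitly makes the categories auditable: a supervisor can see not just where each use case landed, but the rule that put it there.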
What's next
In Module 4 we'll cover three lines of defence for AI specifically — how to redraw the classic 3LoD structure for AI use cases that don't fit neatly into the legacy model.