AI Governance & Risk

Vulnerable Customer Identification with AI Under FCA Consumer Duty

April 01, 2026

The FCA's Consumer Duty requires firms to deliver good outcomes for all retail customers, with particular attention to customers with characteristics of vulnerability. The four outcomes (products and services, price and value, consumer understanding, consumer support) must be evidenced continuously, not just assessed at product launch.

AI creates both an opportunity and a risk in this context. The opportunity: AI can identify vulnerability signals that human advisors miss, at scale, in real time, and route vulnerable customers to appropriate support before poor outcomes materialise. The risk: AI that fails to identify vulnerability, or worse, that systematically disadvantages vulnerable customers through biased pricing, restricted access, or inappropriate automated communication, creates a Consumer Duty failure that the FCA will scrutinise.

This post covers how to design AI systems that get the vulnerable customer dimension right, based on our experience across banking, insurance, and wealth management Consumer Duty programmes.

What vulnerability looks like in data

The FCA defines vulnerability broadly: health conditions, life events, low resilience, and low capability. In practice, the signals that an AI system can detect fall into four categories:

Behavioural signals. Sudden changes in transaction patterns, missed payments after a long period of clean history, repeated contact centre calls on the same issue, difficulty navigating digital self-service, calls abandoned after long hold times. These are patterns that a well-designed model can detect from operational data.

Demographic and contextual signals. Age-related vulnerability indicators, geographic deprivation indices, language barriers evident from communication preferences, and declared disability. These must be handled with extreme care to avoid discrimination, and the Equality Act 2010 constrains how demographic data can be used in decision-making.

Declared vulnerability. Customers who have told the firm about a vulnerability (bereavement, illness, financial difficulty) through any channel. The challenge is ensuring this declaration propagates to every system that interacts with the customer, including automated systems.

Inferred vulnerability. Signals from third-party data (credit reference agencies, open banking data where consented) that suggest financial difficulty or life events. This is the most sensitive category because the customer has not actively disclosed the vulnerability.
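The four signal categories above can be represented as a small typed structure. This is a hypothetical sketch: the class names, fields, and consent handling are illustrative, not a production schema.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class SignalCategory(Enum):
    BEHAVIOURAL = "behavioural"  # e.g. missed payments after a long clean history
    CONTEXTUAL = "contextual"    # demographic/contextual; Equality Act 2010 constraints apply
    DECLARED = "declared"        # customer-disclosed, through any channel
    INFERRED = "inferred"        # third-party data; the most consent-sensitive category

@dataclass(frozen=True)
class VulnerabilitySignal:
    customer_id: str
    category: SignalCategory
    source: str          # originating system or data feed
    detail: str
    observed_at: datetime
    consented: bool      # must be verified for inferred / open-banking signals

    def requires_consent_check(self) -> bool:
        """Inferred signals are the most sensitive: the customer has not
        actively disclosed them, so verify the lawful basis before use."""
        return self.category is SignalCategory.INFERRED
```

Making the category explicit at the point of capture is what later allows consent checks, Equality Act handling, and lineage to be enforced per category rather than per system.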

The Financial Conduct Authority's guidance on vulnerability is the authoritative source. The Money and Mental Health Policy Institute publishes useful research on the intersection of mental health and financial vulnerability.

The design principles

Based on our AI enablement engagements in retail-facing financial services, the governance framework for vulnerable customer AI has five principles:

1. Vulnerability identification is a first-class workflow event, not a flag

When the system identifies a potential vulnerability signal, the response must be a structured workflow event: the customer record is tagged, the appropriate care pathway is triggered, the human advisor is notified with context, and the tagging decision is logged for audit. It is not a flag that sits in a database field and is optionally checked by downstream systems.

This requires the action-data layer to include vulnerability status as a first-class field in the customer wide-row, with lineage from the signal source through the identification model to the tagging decision.
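A minimal sketch of the workflow-event pattern described above, using hypothetical in-memory stores in place of the firm's real CRM, case-management, and audit infrastructure; every name here is illustrative.

```python
import json
from datetime import datetime, timezone

def handle_vulnerability_signal(customer, signal, care_pathways, advisor_queue, audit_log):
    """Treat identification as a structured workflow event: tag, route,
    notify, and log — never an optional database flag."""
    # 1. Tag the customer record (first-class field with lineage to the signal source)
    customer["vulnerability_status"] = {
        "active": True,
        "signal_source": signal["source"],
        "tagged_at": datetime.now(timezone.utc).isoformat(),
    }
    # 2. Trigger the appropriate care pathway for the signal category
    pathway = care_pathways.get(signal["category"], "default_support")
    # 3. Notify the human advisor with the full signal context
    advisor_queue.append({"customer_id": customer["id"], "pathway": pathway, "context": signal})
    # 4. Log the tagging decision for audit
    audit_log.append(json.dumps({"event": "vulnerability_tagged",
                                 "customer": customer["id"], "signal": signal}))
    return pathway
```

The point of the sketch is that all four steps happen in one code path: no downstream system can observe the tag without the pathway, notification, and audit entry having been produced alongside it.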

2. The model must be fair, and fairness must be evidenced

A vulnerability identification model that systematically under-identifies vulnerability in certain demographic groups is discriminatory. A pricing model that charges vulnerable customers more is a Consumer Duty failure. Fairness must be assessed and evidenced for every AI model that affects customer outcomes, using the three lines of defence framework.
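One concrete form of fairness evidence is model recall (true-positive rate) disaggregated by demographic group, an equal-opportunity style check for the under-identification failure described above. A hedged sketch, with an illustrative record shape and tolerance; a real assessment would cover more than one metric.

```python
def recall_by_group(records, tolerance=0.05):
    """Disaggregate identification recall by demographic group.

    `records` is an iterable of (group, actually_vulnerable, flagged_by_model).
    A recall gap beyond `tolerance` suggests the model under-identifies
    vulnerability in some groups — one fairness check, not the whole evidence base."""
    stats = {}
    for group, actual, flagged in records:
        if actual:  # recall only concerns genuinely vulnerable customers
            tp, total = stats.get(group, (0, 0))
            stats[group] = (tp + int(flagged), total + 1)
    recalls = {g: tp / total for g, (tp, total) in stats.items()}
    gap = max(recalls.values()) - min(recalls.values())
    return recalls, gap, gap <= tolerance
```

The disaggregated recalls and the gap are exactly the kind of artefact the second line of defence can review and the third line can audit.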

The Alan Turing Institute has published useful frameworks on AI fairness in financial services. The EU AI Act's high-risk obligations also require bias assessment for AI systems that affect access to financial services.

3. Vulnerable customers must have a human pathway

Automated workflows (chatbots, digital self-service, automated claims processing) must detect vulnerability and route vulnerable customers to human support. The routing must be proactive (the system detects vulnerability and offers human contact) rather than reactive (the customer must ask for help after the automated system fails them).

For insurance claims, this means the claims segmentation model must include vulnerability as a routing criterion. For wealth management, it means the suitability monitoring system must flag potential vulnerability before the relationship manager's next review.
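A routing sketch for the claims case, with vulnerability checked before any complexity or value criterion so the human pathway is proactive rather than reactive. Field names and thresholds are hypothetical.

```python
def route_claim(claim):
    """Proactive routing: vulnerability is a first-class criterion,
    evaluated before complexity or claim value."""
    if claim.get("vulnerability_flagged"):
        return "human_specialist"  # offer human contact before automation fails the customer
    if claim.get("complexity") == "high" or claim.get("value_gbp", 0) > 25_000:
        return "human_handler"
    return "automated_processing"
```

Putting the vulnerability check first is the design decision that matters: a vulnerable customer with a simple, low-value claim still reaches a human.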

4. Continuous outcome monitoring, not quarterly reporting

Consumer Duty requires firms to monitor customer outcomes continuously. For AI-assisted services, this means tracking outcome metrics (resolution time, complaint rate, NPS, product suitability, price fairness) disaggregated by vulnerability status, and alerting when vulnerable customer outcomes deteriorate relative to the general population.

The decision rights framework must include defined escalation thresholds for vulnerable customer outcome metrics, with clear accountability under SMCR for the senior manager who owns consumer outcomes.
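The threshold-based escalation check described above can be sketched as follows, assuming metrics where a larger value means a worse outcome (resolution time, complaint rate); metric names, thresholds, and the escalation action are all illustrative.

```python
def check_outcomes(metrics_vulnerable, metrics_general, thresholds):
    """Compare vulnerable-customer outcome metrics against the general
    population and escalate when the gap breaches a defined threshold."""
    alerts = []
    for metric, threshold in thresholds.items():
        gap = metrics_vulnerable[metric] - metrics_general[metric]
        if gap > threshold:  # vulnerable customers are doing measurably worse
            alerts.append({"metric": metric, "gap": round(gap, 3),
                           "action": "escalate_to_smcr_owner"})
    return alerts
```

Run continuously against streaming metrics rather than a quarterly report, this is what turns "monitoring" into a control with a named SMCR owner on the receiving end of each alert.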

5. Override patterns tell you if the system is working

If human advisors consistently override the vulnerability identification model (either overriding "not vulnerable" to "vulnerable" or vice versa), the override pattern is the most valuable diagnostic signal for model quality. Structured override capture, as described in the data flywheel essay, turns advisor judgment into training signal that improves the model over time.
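The override-rate diagnostic can be computed directly from reviewed decisions. A minimal sketch with hypothetical labels; each record pairs the model's label with the advisor's final judgment.

```python
from collections import Counter

def override_diagnostics(decisions):
    """Summarise advisor overrides of the identification model.

    `decisions` is a list of (model_label, advisor_label) for every reviewed
    case. The two override rates are the diagnostic signal; the underlying
    records double as labelled training data for the next model iteration."""
    counts = Counter(decisions)
    total = sum(counts.values())
    upgrades = counts[("not_vulnerable", "vulnerable")]    # model missed vulnerability
    downgrades = counts[("vulnerable", "not_vulnerable")]  # model over-flagged
    return {
        "false_negative_overrides": upgrades / total,
        "false_positive_overrides": downgrades / total,
    }
```

A rising false-negative override rate is the more urgent signal under Consumer Duty: it means vulnerable customers are reaching automated pathways unprotected.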

What the FCA will ask

The FCA's approach to Consumer Duty supervision includes a focus on how firms use AI in customer-facing processes. The questions will follow a pattern:

  1. How do you identify vulnerable customers across all channels, including automated ones?
  2. Can you demonstrate that your AI systems produce equitable outcomes for vulnerable customers?
  3. What happens when a vulnerable customer enters an automated workflow?
  4. How do you monitor vulnerable customer outcomes continuously?
  5. Who is the senior manager accountable under SMCR for vulnerable customer outcomes in AI-assisted processes?

If you cannot answer these with documentary evidence, the AI Enablement Maturity Diagnostic is a useful starting point for identifying the gaps. For a structured assessment, our diagnostic engagement produces a Consumer Duty integration assessment alongside the AI portfolio audit and the regulatory triage.

For practitioners who want the detailed mechanics, Module 4 of the AI Governance and Model Risk course covers the Consumer Duty integration framework, and the AI Enablement for Banking and AI Enablement for Wealth Management service pages describe how vulnerability identification fits into the broader workflow redesign.

Ready to do the structural work?

Our AI Enablement engagements are built around the five principles in this article. We start with a focused diagnostic, then redesign one priority workflow end-to-end as proof — including the data layer, decision rights, and governance machinery.

Explore the AI Enablement service
