A 12-24 month arc
You now have the framing (Module 1), the regulatory landscape (Module 2), the use case triage (Module 3), 3LoD for AI (Module 4), embedded governance (Module 5), and model risk operations (Module 6). This final module is about how to sequence all of that into a real programme inside a real organisation.
The honest framing: this is a 12-24 month build. The first 90 days are triage and visibility. The next six months operationalise controls on the highest-risk use cases. The nine months after that extend the same pattern to the rest of the portfolio. The remaining months embed the model so that AI governance becomes part of how the organisation operates by default.
This module gives you the sequencing pattern.
Phase 1: Visibility and triage (Months 0-3)
Goal: know what you have, classify the risk, and produce a defensible posture for the highest-risk use cases already in production.
Workstreams:
- Use case inventory. Comprehensive sweep of AI in production, in development, and in shadow IT. Use the inventory format from Module 3. Include vendor-embedded AI. (A minimal sketch of an inventory entry follows this list.)
- Risk classification. Apply the EU AI Act tiers, SS1/23 materiality, and Consumer Duty/DORA cross-references to each inventoried use case. Produce the portfolio view from Module 3.
- Defensibility check on the top tier. For each high-risk and material use case, ask: if a supervisor asked us about this tomorrow, what would we be able to show them? Where the answer is "very little", start remediation immediately.
- Light governance for everything else. Inventory entries, named owners, lightweight model cards.
- Shadow AI integration. Bring discovered shadow use cases into the framework without punishment.
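To make the inventory and classification workstreams concrete, here is a minimal sketch of what a single inventory entry might capture. The class and field names are hypothetical illustrations, not the Module 3 format itself.

```python
from dataclasses import dataclass, field
from enum import Enum

class AIActTier(Enum):
    """EU AI Act risk tiers (simplified labels for illustration)."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"

class Lifecycle(Enum):
    PRODUCTION = "production"
    DEVELOPMENT = "development"
    SHADOW = "shadow"  # discovered outside formal channels

@dataclass
class UseCaseEntry:
    """One row in the use case inventory (hypothetical schema)."""
    name: str
    owner: str                   # named first-line owner
    lifecycle: Lifecycle
    vendor_embedded: bool        # AI inside a vendor product counts too
    ai_act_tier: AIActTier
    ss1_23_material: bool        # PRA SS1/23 materiality judgement
    cross_references: list[str] = field(default_factory=list)

# A discovered shadow use case, classified and brought into the framework
entry = UseCaseEntry(
    name="retail-credit-affordability-model",
    owner="head-of-retail-lending",
    lifecycle=Lifecycle.SHADOW,
    vendor_embedded=False,
    ai_act_tier=AIActTier.HIGH_RISK,  # creditworthiness is Annex III high-risk
    ss1_23_material=True,
    cross_references=["Consumer Duty", "DORA"],
)
```

The portfolio view from Module 3 is then just an aggregation over these entries by tier and materiality.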
Deliverables at Day 90:
- Complete use case inventory
- Working risk classification with portfolio view
- Defensibility memo on top-tier use cases (gaps + remediation plan)
- Initial governance framework (lightweight)
- Executive readout on portfolio risk
Stakeholder narrative: "We now know what we have, we can show the C-suite the portfolio, and we have a plan for the highest-risk gaps."
Phase 2: Operationalising the highest-risk tier (Months 3-9)
Goal: for each high-risk and material use case, build the operational governance that makes the use case defensible — embedded second line, decision logs, monitoring SLOs, override review, model risk files, incident response.
Workstreams:
- Embedded second line. Assign a second-line risk specialist to each high-risk use case as an embedded partner. Train them on the technical depth they need.
- Decision logging. Build the decision log infrastructure for high-risk use cases (Module 5). For deployed use cases, this may require retrofitting; for new use cases, design it in.
- Monitoring SLOs. Define and instrument the SLOs from Module 6 for each high-risk use case. Wire them into existing observability. (A minimal sketch of a decision log record and an SLO check follows this list.)
- Model risk files. Produce the formal documentation for high-risk and material use cases. Tie them to the model versions they document.
- Incident response. Stand up the model incident response process. Run a tabletop exercise.
- First model risk committee. Now that you have substance, convene the central AI risk committee. Use the portfolio view as the standing agenda.
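To show how the logging and SLO workstreams connect, here is a minimal sketch of a decision log record and an override-rate SLO check. All names, fields, and the 5% threshold are hypothetical; the structural point is that each record carries the model version (tying it to its model risk file) and the override flag (feeding override review).

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import io
import json

@dataclass
class DecisionRecord:
    """One logged model-assisted decision (hypothetical schema)."""
    use_case: str
    model_version: str        # ties the decision to its model risk file
    timestamp: str
    inputs_ref: str           # pointer to stored inputs, not raw data
    output: str
    human_override: bool      # feeds the override review loop
    override_reason: str | None = None

def log_decision(record: DecisionRecord, sink) -> None:
    """Append the record as one JSON line to any file-like sink."""
    sink.write(json.dumps(asdict(record)) + "\n")

def override_rate_breached(records: list[DecisionRecord],
                           slo_max_rate: float = 0.05) -> bool:
    """True if the human-override rate exceeds the SLO threshold.

    A rising override rate is an early drift signal: reviewers are
    disagreeing with the model more often than the SLO tolerates.
    """
    if not records:
        return False
    rate = sum(r.human_override for r in records) / len(records)
    return rate > slo_max_rate

# Usage: log one overridden decision to an in-memory sink
sink = io.StringIO()
log_decision(DecisionRecord(
    use_case="retail-credit-affordability-model",
    model_version="2024.06.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    inputs_ref="decision-store/abc123",  # hypothetical pointer
    output="refer_to_underwriter",
    human_override=True,
    override_reason="thin credit file",
), sink)
```

In practice the sink would be your existing log pipeline rather than a file, but the shape of the record is what matters for retrofitting deployed use cases.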
Deliverables at Month 9:
- Embedded second line on top-tier use cases
- Decision logs in production
- Monitoring SLOs defined and alerting
- Model risk files for top tier
- Incident response process tested
- Model risk committee operating monthly
Stakeholder narrative: "Our highest-risk AI use cases are now governed substantively. We can defend them in front of any supervisor."
Phase 3: Scaling across the portfolio (Months 9-18)
Goal: extend the same operational pattern to the second tier of use cases. Build the central capabilities that make this scaling cheaper per use case.
Workstreams:
- Standardised tooling. Build (or buy) reusable infrastructure for decision logging, monitoring, and lineage that new use cases can plug into without rebuilding from scratch.
- Self-service governance. Templates, playbooks, and patterns that let first-line teams produce the right artefacts without bespoke help from the second line. (One such template is sketched after this list.)
- Training. Roll out AI governance training across first-line teams. Build the next generation of system supervisors and AI risk partners.
- Vendor governance. Extend the framework to cover vendor AI properly under DORA.
- Audit competence. Work with internal audit to build their AI competence so they can perform substantive audits in Phase 4.
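As one hypothetical example of what "self-service" looks like in practice, the sketch below renders a lightweight model card skeleton that a first-line team fills in without bespoke second-line help. The section headings and field names are illustrative, not a mandated card format.

```python
MODEL_CARD_TEMPLATE = """\
# Model card: {name}
Owner: {owner}
Model version: {model_version}
EU AI Act tier: {ai_act_tier}
SS1/23 material: {ss1_23_material}

## Intended use

## Known limitations

## Monitoring SLOs

## Override and incident history
"""

def render_model_card(**fields: str) -> str:
    """Fill the skeleton; the first line completes the free-text sections."""
    return MODEL_CARD_TEMPLATE.format(**fields)

card = render_model_card(
    name="retail-credit-affordability-model",
    owner="head-of-retail-lending",
    model_version="2024.06.1",
    ai_act_tier="high_risk",
    ss1_23_material="yes",
)
```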
Deliverables at Month 18:
- Reusable governance infrastructure
- Self-service templates and playbooks
- Trained first-line teams
- Vendor AI brought into the framework
- Audit-ready posture across the top two tiers
Stakeholder narrative: "AI governance is now scalable. New use cases inherit the framework rather than rebuilding it."
Phase 4: Embedding and continuous improvement (Months 18-24+)
Goal: AI governance is now how the organisation operates by default. New use cases are governed from day one. Continuous improvement is driven by operational evidence.
Workstreams:
- Governance is design-time, not gate-time. Every new use case has embedded governance from day one.
- Continuous improvement. Operational data from monitoring, override patterns, and incidents feeds back into policy updates.
- Regulatory dialogue. Proactive engagement with supervisors. Show your governance machinery before they ask.
- Cross-portfolio learning. Share what's working between teams. Build a community of practice for system supervisors and embedded second-line partners.
By month 24, governance should feel like infrastructure rather than overhead. New use cases ship faster than they used to under the legacy "approval gate" model, because the governance work is happening continuously alongside the build.
What can go wrong
The most common failure modes for AI governance programmes:
Policy-first. Trying to write a comprehensive policy framework before you have any deployed use cases. The result is paper that nobody uses, because it isn't grounded in operational experience. Always lead with the operational work; let policy emerge from it.
Committee-first. Standing up a central AI governance committee before you have substance for it to govern. The committee meets, has nothing to discuss, and becomes performative. Build substance first.
Big-bang. Trying to govern everything at once. This consumes the team's bandwidth without producing visible results, and the programme dies of under-investment in the things that actually matter (the high-risk use cases).
Brake mode. Letting governance be perceived as the function that says no. This drives shadow AI further into the shadows and undermines the second line's credibility. The executive sponsor has to actively reframe the function.
Audit-driven. Building governance only because audit asked. This produces compliance posture without operational substance, which both audit and the regulator will eventually see through.
If you can avoid these five failure modes and follow the four-phase arc, you have a realistic path to mature AI governance in 24 months. Without that discipline, the same work takes twice as long and produces half the value.
Ready for the exam
You have now covered the seven modules of the AI Governance & Model Risk for Financial Services course:
- Governance as accelerator, not brake
- The regulatory landscape — EU AI Act, PRA SS1/23, FCA SYSC, DORA
- Mapping AI use cases to risk tiers
- Three lines of defence for AI
- Embedded governance in practice
- Model risk operations — monitoring, drift, overrides, incidents
- The governance roadmap — 12 to 24 months
The final assessment is 25 multiple-choice questions covering the full course. Pass mark 70%. On completion you will earn your AI Governance Practitioner certification from Insight Centric.
When you are ready to apply this to a real programme, our AI Enablement service builds governance into the workflow design from day one — exactly the embedded model this course recommends.