KPI Frameworks for TOM Success
Measuring the success of a Target Operating Model implementation requires a structured KPI (Key Performance Indicator) framework that captures performance across all TOM dimensions — not just cost savings, but process efficiency, quality, risk, client satisfaction, and organisational health.
A comprehensive KPI framework for a banking TOM includes four categories of metrics:
1. Efficiency Metrics
Efficiency metrics measure how productively the operating model converts inputs (people, technology, time) into outputs (processed transactions, settled trades, reconciled breaks).
Cost-to-income ratio. The most closely watched metric in banking operations. It measures total operational cost as a percentage of revenue. The TOM business case typically projects a specific cost-to-income improvement — for example, from 68% to 55%. This metric is reported quarterly and tracked against the business case projection.
Cost per transaction. The average cost of processing a single unit of work — a payment, a trade settlement, a reconciliation break. This metric captures the combined impact of automation, standardisation, and location strategy. A well-implemented TOM typically reduces cost per transaction by 25-50%.
Straight-through processing (STP) rate. The percentage of transactions that process end-to-end without manual intervention. STP is a direct measure of automation effectiveness. Target STP rates vary by process — 95%+ for standard payments, 85%+ for trade settlement, 90%+ for cash reconciliation.
Cycle time. The elapsed time from the start of a process to its completion. For settlement, this might be measured from trade execution to settlement confirmation. For client onboarding, from initial application to account activation. The TOM should set target cycle times for each key process.
Headcount per transaction volume. The number of FTEs relative to the volume of work processed. This normalised metric allows comparison across time periods and business cycles, separating productivity improvement from volume fluctuation.
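The efficiency metrics above are straightforward ratios. A minimal Python sketch of how they might be computed — all figures are hypothetical illustrations, not benchmarks from this course:

```python
def cost_per_transaction(total_cost: float, volume: int) -> float:
    """Average cost of processing a single unit of work."""
    return total_cost / volume

def stp_rate(total_txns: int, manual_txns: int) -> float:
    """Share of transactions processed end-to-end without manual touch."""
    return (total_txns - manual_txns) / total_txns

def fte_per_million_txns(fte: int, annual_volume: int) -> float:
    """Headcount normalised by volume, for comparison across periods."""
    return fte / (annual_volume / 1_000_000)

# Hypothetical year: EUR 95M cost, 12M transactions, 1.32M manual touches
cost = cost_per_transaction(95_000_000, 12_000_000)   # ~EUR 7.92 per transaction
stp = stp_rate(12_000_000, 1_320_000)                 # 0.89, i.e. 89% STP
```

Normalising by volume, as in `fte_per_million_txns`, is what separates genuine productivity gains from simple volume decline.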
2. Quality Metrics
Quality metrics measure the accuracy and reliability of operational outputs.
Error rate. The number of errors per thousand (or per million) transactions processed. Errors include settlement fails caused by incorrect instructions, payment misdirections, reconciliation mismatches due to data entry errors, and incorrect client reports. The TOM should specify error-rate targets for each key process.
Settlement fail rate. The percentage of settlement instructions that fail to settle on the intended settlement date. Settlement fails are a critical metric in securities operations, subject to regulatory scrutiny under the CSDR (Central Securities Depositories Regulation) penalty regime. A well-designed TOM targets fail rates below 2% for standard instruments.
Reconciliation break rate. The percentage of reconciliation items that do not match automatically and require manual investigation. High break rates indicate data quality issues, process gaps, or system integration problems. Target break rates of below 3% for automated reconciliation are achievable with a well-designed TOM.
Rework rate. The percentage of work that must be redone due to errors in initial processing. Rework is a direct measure of process quality — high rework rates indicate underlying process or training issues.
Regulatory reporting accuracy. The percentage of regulatory reports submitted without errors, amendments, or restatements. Given the financial and reputational impact of reporting errors, this is a high-priority quality metric.
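Quality metrics follow the same ratio pattern. A short sketch, with a hypothetical month's figures used purely for illustration:

```python
def error_rate_per_million(errors: int, transactions: int) -> float:
    """Errors per million transactions processed."""
    return errors / transactions * 1_000_000

def settlement_fail_rate(failed: int, instructions: int) -> float:
    """Share of instructions failing to settle on the intended date."""
    return failed / instructions

# Hypothetical month: 150 errors across 3M transactions, and
# 19 fails out of 1,000 instructions (1.9%, inside the sub-2% target)
errors_pm = error_rate_per_million(150, 3_000_000)   # 50 errors per million
fail_rate = settlement_fail_rate(19, 1_000)          # 0.019
```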
3. Risk Metrics
Risk metrics measure the operating model's effectiveness in managing operational risk.
Operational loss events. The number and value of operational losses attributable to process failures, system errors, or human mistakes. A well-implemented TOM should demonstrate a reduction in operational loss events over time.
Key person dependency. The number of processes or capabilities that depend on a single individual. The TOM should systematically reduce key person dependencies through documentation, cross-training, and automation.
Audit findings. The number and severity of internal and external audit findings related to operational processes. A well-designed TOM — with standardised processes, clear governance, and robust controls — should result in fewer and less severe audit findings.
Operational resilience metrics. Time to recover from disruption, availability of critical business services, and the results of business continuity testing. These are increasingly important metrics given regulatory focus on operational resilience.
Control effectiveness. The results of control testing — what percentage of controls are operating effectively? A TOM with well-designed, embedded controls should achieve control effectiveness rates above 95%.
4. Client and Stakeholder Metrics
Client satisfaction. Measured through client surveys, Net Promoter Score (NPS), or client feedback mechanisms. The TOM should set specific client satisfaction targets and track them through the transformation.
Internal stakeholder satisfaction. Measured through surveys of business line leaders and front-office teams who rely on operations. Their assessment of operations' responsiveness, quality, and partnership is a valuable indicator of TOM effectiveness.
Employee engagement. Measured through staff surveys covering job satisfaction, confidence in the organisation's direction, and perceptions of management effectiveness. TOM transformations create uncertainty that can impact engagement — tracking this metric helps identify and address issues.
Regulatory relationship. While not quantifiable in the same way as other metrics, the quality of the bank's relationship with its regulators — measured through supervisory feedback, SREP outcomes, and the number and severity of supervisory actions — is an important indicator of TOM success.
Benefits Tracking and Realisation
A TOM business case projects specific benefits — cost savings, efficiency improvements, risk reductions, and revenue enablement. Benefits realisation tracking is the discipline of monitoring whether these projected benefits are actually being achieved.
The Benefits Realisation Framework
A structured benefits realisation framework includes:
Benefits register. A detailed log of every projected benefit, including:
- Benefit description and category (cost saving, efficiency gain, risk reduction, revenue enablement)
- Quantified value (annual run-rate saving or one-time benefit)
- Baseline measurement (pre-transformation performance level)
- Target measurement (projected post-transformation performance level)
- Measurement methodology (how will achievement be verified?)
- Benefit owner (the named individual accountable for delivering the benefit)
- Timeline (when is the benefit expected to materialise?)
- Dependencies (what must be completed for the benefit to be realised?)
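The benefits register fields above map naturally onto a structured record. A minimal sketch using a Python dataclass — the field names and the sample entry (owner, date, values) are illustrative assumptions, though the EUR 2.1M licence figure echoes the worked example later in this module:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BenefitEntry:
    description: str
    category: str              # cost saving | efficiency gain | risk reduction | revenue enablement
    annual_value_eur: float    # quantified run-rate value
    baseline: float            # pre-transformation performance level
    target: float              # projected post-transformation level
    methodology: str           # how achievement is verified
    owner: str                 # named individual accountable for delivery
    due: date                  # when the benefit should materialise
    dependencies: list = field(default_factory=list)

# Illustrative entry (all specifics hypothetical)
entry = BenefitEntry(
    description="Legacy custody platform decommissioning",
    category="cost saving",
    annual_value_eur=2_100_000,
    baseline=0.0,
    target=2_100_000,
    methodology="Licence invoices vs. prior-year run rate",
    owner="Head of Operations Technology",
    due=date(2026, 6, 30),
    dependencies=["Residual functionality migrated to new platform"],
)
```

Keeping the register as structured data rather than free text makes the monthly tracking and variance analysis described below mechanical rather than manual.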
Monthly tracking. Each benefit is measured monthly against its target. A traffic-light status (green/amber/red) provides a quick view of overall benefits health:
- Green: Benefit is on track to be delivered as projected
- Amber: Benefit is at risk — current trajectory suggests it will fall short of the projection
- Red: Benefit is significantly behind projection and requires corrective action
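The traffic-light classification can be made explicit as a function of the share of target achieved. The 90% and 75% thresholds here are illustrative assumptions — the framework itself does not prescribe cut-offs, and each programme would set its own:

```python
def rag_status(actual: float, target: float,
               amber_threshold: float = 0.90,
               red_threshold: float = 0.75) -> str:
    """Classify benefit delivery as green/amber/red by share of target achieved.

    Thresholds are illustrative, not prescribed by the framework.
    """
    achieved = actual / target
    if achieved >= amber_threshold:
        return "green"
    if achieved >= red_threshold:
        return "amber"
    return "red"

# EUR 22M delivered against a EUR 27M projection: ~81% -> amber
status = rag_status(22_000_000, 27_000_000)
```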
Variance analysis. For any benefit that is amber or red, a variance analysis investigates:
- Why is the benefit behind projection?
- Is the shortfall temporary (e.g., migration delay) or structural (e.g., the projection was unrealistic)?
- What corrective actions can be taken to recover the benefit?
- Does the shortfall need to be reported to the steering committee and reflected in revised business case projections?
Common Causes of Benefit Leakage
Headcount benefits not realised. The TOM projects headcount reduction from automation and consolidation, but managers retain staff "just in case" or redeploy them to other activities rather than releasing them. Mitigation: headcount targets are formally tracked, and workforce plans are approved by the CFO.
Technology cost savings delayed. Legacy system decommissioning takes longer than planned — the old platform continues to run (and cost) alongside the new one. Mitigation: firm decommissioning dates tied to licence renewal cycles.
Process reversion. Staff revert to old processes and workarounds rather than adopting the new standardised processes. The efficiency gains from standardisation are never realised. Mitigation: process compliance monitoring, management reinforcement, and removal of access to old tools.
Unrealistic projections. The business case contained optimistic assumptions — overestimated savings, underestimated costs, or ignored implementation friction. Mitigation: rigorous business case governance, conservative assumptions, and sensitivity analysis.
Operational Dashboards
An operational dashboard provides real-time visibility of operating model performance. A well-designed dashboard is the primary management tool for the COO and operations leadership — it shows at a glance whether the operating model is performing to target.
Dashboard Design Principles
Audience-appropriate. Different stakeholders need different views. The COO needs a strategic summary with key metrics and exceptions. The operations manager needs team-level performance data. The process owner needs process-specific KPIs. Design dashboards for each audience.
Real-time where possible. For time-critical operations (settlement, payment processing), real-time dashboards enable proactive management — identifying issues before they impact clients or regulatory deadlines.
Exception-driven. Dashboards should highlight exceptions — metrics that are outside acceptable ranges — rather than simply displaying data. This focuses management attention where it is needed.
Trending. Show performance over time, not just current state. Trends reveal whether performance is improving, stable, or deteriorating — information that point-in-time metrics cannot provide.
Standard Dashboard Components
A banking operations dashboard typically includes:
- Volume summary: Current day's transaction volumes by process type, compared to expected volumes and capacity limits
- STP rates: Real-time STP rates by process, with thresholds highlighted
- Exception queues: Number and age of unresolved exceptions by type and priority
- Settlement status: Percentage of instructions matched, settled, and failed, with drill-down to individual fails
- Reconciliation status: Number of breaks by type, age analysis, and resolution progress
- Regulatory reporting status: Status of all regulatory reports due today, with alerts for any at risk of late submission
- Staffing and capacity: Staff availability, workload distribution, and overtime indicators
- Risk indicators: Operational incidents, near misses, and control failures
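The exception-driven principle can be sketched in a few lines: the dashboard surfaces only metrics breaching their threshold, rather than displaying everything. Metric names, values, and thresholds below are illustrative assumptions:

```python
# metric -> (actual, target, higher_is_better)
metrics = {
    "stp_rate_settlements":       (0.89, 0.92, True),
    "settlement_fail_rate":       (0.019, 0.015, False),
    "recon_break_rate":           (0.024, 0.020, False),
    "regulatory_reports_on_time": (1.00, 1.00, True),
}

def exceptions(snapshot: dict) -> list:
    """Return only the metrics currently outside their acceptable range."""
    out = []
    for name, (actual, target, higher_better) in snapshot.items():
        breached = actual < target if higher_better else actual > target
        if breached:
            out.append(f"{name}: actual {actual} vs target {target}")
    return out

for line in exceptions(metrics):
    print(line)   # only the three breaching metrics are shown
```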
Continuous Improvement After Go-Live
The implementation of a TOM is not the end — it is the beginning of a continuous improvement cycle. The target operating model as designed will not be perfect; real-world operations will reveal issues, opportunities, and adjustments that were not anticipated during design. The mark of a successful TOM is not that it works perfectly on day one, but that it includes the mechanisms to continuously improve.
Embedding Continuous Improvement
Regular process reviews. Each process owner conducts a structured review of their process at least annually — assessing performance against KPIs, identifying improvement opportunities, and updating documentation.
Operational retrospectives. Monthly retrospectives bringing together operations teams to discuss what is working well, what is not, and what should be changed. These are not blame sessions — they are constructive forums for surfacing issues and generating improvements.
Benchmarking. Regular comparison of operational performance against industry benchmarks and peer institutions. Benchmarking reveals where the bank is performing well and where it lags — and provides external evidence to support investment in improvement.
Innovation pipeline. A structured process for identifying, evaluating, and implementing new technologies and approaches — AI, machine learning, blockchain, process mining — that could further improve the operating model.
Lean and Six Sigma methodologies. Embedding structured improvement methodologies — Lean for waste elimination, Six Sigma for defect reduction — into operational management. Training operations managers in these methodologies creates a cadre of improvement practitioners within the business.
Model Iteration
The TOM itself should be treated as a living document, not a static artefact. As the business environment changes — new regulations, new products, new markets, new technologies — the operating model must evolve. A well-governed TOM includes a formal review cycle:
- Quarterly review: Assess whether the operating model is performing to target and identify any immediate adjustments needed
- Annual strategic review: Assess whether the TOM remains aligned with corporate strategy and whether material changes in the operating environment require a refresh of the target state
- Trigger-based review: Significant events — regulatory change, acquisition, major technology shift — trigger an ad hoc review of the relevant TOM dimensions
Lessons from Failed TOM Implementations
Understanding why TOM implementations fail is as important as understanding how they succeed. Common failure modes in banking include:
Failure 1: Design without implementation. The bank invests heavily in designing a beautiful target operating model — detailed capability maps, process architectures, and technology strategies — but never commits the resources to implement it. The TOM becomes a shelf-ware document that is referenced in management presentations but never translates into operational change.
Failure 2: Technology-only transformation. The bank treats the TOM as a technology migration programme, investing in new platforms but failing to redesign processes, restructure the organisation, or update governance. The result is new technology running old processes — expensive and no more effective than the original model.
Failure 3: Insufficient change management. The bank designs and implements the target state but fails to bring people along. Staff do not understand why the change is happening, are not trained on new processes and systems, and do not have confidence in the new model. They revert to familiar ways of working, and the projected benefits are never realised.
Failure 4: Unrealistic timeline. The board mandates a TOM implementation in 12 months that realistically requires 30 months. The programme cuts corners — reduces testing, skips parallel running, rushes training — and the result is a flawed implementation that creates more problems than it solves.
Failure 5: Loss of sponsorship. The executive sponsor who championed the TOM programme leaves the organisation or moves to a different role. Without active sponsorship, the programme loses momentum, budget, and organisational attention.
Failure 6: Ignoring the regulator. The bank implements significant operational changes without adequate regulatory engagement. The regulator learns about the changes through supervisory channels, expresses concern, and requires the bank to pause or modify the programme — creating delays, additional costs, and a strained supervisory relationship.
Banking Example: Post-Implementation Review of Custody Operations TOM
Consider a global custody bank — let us call it SafeKeep Trust — that completed a major TOM transformation of its custody operations 12 months ago. The transformation consolidated custody operations from 6 European offices into a single Dublin-based shared services centre, migrated to a new cloud-based custody platform, and redesigned all custody processes for straight-through processing.
The post-implementation review (PIR) is conducted at the 12-month mark, assessing performance against the original business case and TOM design objectives.
KPI Assessment
| Metric | Pre-TOM Baseline | 12-Month Target | 12-Month Actual | Status |
|---|---|---|---|---|
| Cost-to-income ratio | 72% | 58% | 61% | Amber |
| Annual operations cost | EUR 95M | EUR 68M | EUR 73M | Amber |
| FTE headcount | 480 | 340 | 365 | Amber |
| STP rate (settlements) | 71% | 92% | 89% | Amber |
| STP rate (corporate actions) | 45% | 80% | 78% | Green |
| Settlement fail rate | 4.8% | 1.5% | 1.9% | Amber |
| Cash reconciliation break rate | 8.2% | 2.0% | 2.4% | Amber |
| Cycle time — income collection | 3.2 days | 1.0 day | 1.1 days | Green |
| Client satisfaction (NPS) | +12 | +35 | +31 | Amber |
| Operational loss events | 47 p.a. | <15 p.a. | 18 p.a. | Amber |
| Regulatory audit findings | 12 | <3 | 4 | Amber |
| Staff engagement score | 58% | 75% | 69% | Amber |
Benefits Realisation Analysis
Cost savings: The business case projected EUR 27M annual cost savings. Actual savings at the 12-month mark are EUR 22M — an 81% achievement rate. The shortfall is attributable to:
- Delayed decommissioning of one legacy system (EUR 2.1M annual licence retained for an additional 6 months while remaining functionality is migrated)
- Higher-than-projected Dublin salary costs (EUR 1.8M, due to a competitive talent market in Dublin financial services)
- Retained transitional staff (EUR 1.1M, 25 staff retained for 6 months longer than planned to support stabilisation)
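The shortfall decomposition above reconciles exactly. A quick arithmetic check of the figures quoted in the example:

```python
# Cross-checking the benefits-realisation arithmetic from the SafeKeep example
projected_saving = 27_000_000
leakage = {
    "delayed decommissioning":     2_100_000,
    "higher Dublin salary costs":  1_800_000,
    "retained transitional staff": 1_100_000,
}

actual_saving = projected_saving - sum(leakage.values())  # EUR 22.0M
achievement = actual_saving / projected_saving            # ~0.815, i.e. the 81% quoted
```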
Efficiency gains: STP rates have improved significantly but have not yet reached the 12-month target. Analysis shows that 60% of the remaining manual interventions are caused by poor-quality client-side data — particularly incomplete settlement instructions and incorrect SSIs (Standard Settlement Instructions). This is an external dependency that the bank is addressing through client engagement and an SSI enrichment project.
Quality improvements: Settlement fail rates and reconciliation break rates have improved substantially but are slightly above target. Root cause analysis shows that most residual fails are concentrated in two specific markets (Italy and Belgium) where CSD connectivity issues on the new platform remain under resolution with the vendor.
Client satisfaction: NPS has improved from +12 to +31, driven by faster processing, real-time reporting, and a dedicated client service team in Dublin. The shortfall versus the +35 target is attributed to initial teething issues during the first 3 months post-go-live that affected some clients.
Lessons Learned
Data quality is the biggest ongoing challenge. The new platform performs well, but its effectiveness is constrained by data quality issues in upstream systems and client-provided data. Future TOM programmes should include a dedicated data quality workstream.
Dublin talent market is competitive. Salary assumptions in the business case were based on data from 18 months before recruitment. The Dublin financial services talent market has since become more competitive. Future location decisions should include salary inflation scenarios.
Parallel running duration was appropriate. The 12-week parallel running period was the right decision — it caught 340+ issues that would have caused production problems. Attempts to shorten parallel running should be resisted.
Change management investment paid off. The EUR 1.5M invested in change management — training, communication, stakeholder engagement, and cultural integration — was essential for adoption. Teams where change management was most intensive show the highest process compliance and the best KPI performance.
Regulatory engagement built trust. The proactive regulatory engagement approach — notifying the Central Bank of Ireland early, providing quarterly updates, and inviting supervisory review of the parallel running results — built trust and avoided any supervisory concerns.
Remediation Actions
Based on the PIR findings, SafeKeep Trust defines a 6-month remediation plan:
- Complete legacy system decommissioning by Month 18 (recovers EUR 2.1M annual saving)
- Launch SSI enrichment project to address client data quality issues (targets 3 percentage point STP improvement)
- Resolve Italy and Belgium CSD connectivity issues with the vendor (targets 0.5 percentage point fail rate improvement)
- Conduct second staff engagement survey and implement targeted interventions for teams with low engagement scores
- Transition residual programme staff to BAU roles or release by Month 15
The PIR demonstrates that while the TOM transformation has delivered substantial improvements across all dimensions, achieving full target performance requires continued investment in optimisation, data quality, and people engagement. The target operating model is performing — but it is not yet fully optimised. This is normal and expected for a transformation of this scale, and the structured PIR process ensures that the remaining gaps are identified, owned, and addressed.
You have now completed all seven modules of the Target Operating Model Design course. You are ready to take the final exam and earn your TOM Practitioner certification.