Module 7

Sustaining Change & Embedding Behaviours

Learn how to prevent change from reverting, embed new behaviours into business as usual, and build a culture of continuous improvement in banking operations.


Why Changes Revert

The most frustrating outcome in change management is not a failed launch — it is a successful launch that gradually, silently reverts. The new system goes live successfully, training is completed, the programme team celebrates, the steering committee signs off, and everyone moves on to the next priority. Six months later, a routine audit reveals that half the team has reverted to old practices, workarounds have proliferated, and the promised benefits have not materialised.

This is not a rare outcome. It is the default outcome when change is not actively sustained and reinforced. Research consistently shows that organisations that invest in formal reinforcement and embedding activities achieve sustainment rates 2-3 times higher than those that do not. In banking, where the cost of failed transformation is amplified by regulatory, operational, and reputational consequences, sustaining change is not a "nice to have" — it is a programme-critical activity.

Understanding why changes revert is the first step to preventing it:

Old Habits Are Powerful

Human behaviour is largely habitual. An analyst who has performed the same process the same way for five years has deeply ingrained neural pathways that make the old way of working automatic, effortless, and comfortable. The new way of working requires conscious effort, concentration, and cognitive energy — it is harder, slower, and more error-prone in the early weeks.

When the programme support fades, the pressure of daily operations reasserts itself, and the analyst is under time pressure to process a backlog, the path of least resistance is to revert to the old, automatic habits. This is not resistance in the traditional sense — it is the natural gravitational pull of ingrained behaviour. Overcoming it requires sustained, deliberate reinforcement until the new behaviours themselves become habitual.

Leadership Attention Shifts

In banking, leadership bandwidth is a scarce and contested resource. The executive sponsor who was visibly championing the change during the programme will inevitably shift attention to the next strategic priority. When leadership attention moves on, the signal to the organisation is clear: "This change is no longer a priority." Without continued leadership reinforcement, the new way of working loses its organisational energy and momentum.

Reinforcement Mechanisms Are Not In Place

The most critical sustaining mechanism is the alignment of performance management, KPIs, and reward systems to the new way of working. If people are measured on the same KPIs as before the change, their behaviour will not change. If the old metrics still drive recognition, promotion, and bonus decisions, people will optimise for the old metrics — regardless of what the programme communicated about the new expectations.

In the AML transformation example from Module 1, if analysts continue to be measured on "alerts dispositioned per hour" (the old metric) rather than "case quality score" (the new metric), they will continue to prioritise speed over quality — exactly the behaviour the transformation was designed to eliminate.

The Old Way Is Still Available

As long as old systems, old templates, old workarounds, and old informal processes remain available, they provide an escape route for people who are struggling with or uncomfortable with the new way of working. The presence of the old system is a constant invitation to revert. Formal decommissioning — removing access to old systems, withdrawing old templates, and actively closing down workaround processes — is an essential sustaining activity.

Reinforcement Mechanisms

Effective reinforcement uses multiple mechanisms simultaneously, creating an environment where the new way of working is supported, expected, measured, and rewarded:

Performance Management Alignment

This is the single most powerful reinforcement mechanism. When people's performance reviews, objectives, and career progression are explicitly linked to the new behaviours, the new way of working moves from "something the programme asked us to do" to "how I succeed in my role."

Practical steps include:

  • Update performance objectives to reflect the new competencies, processes, and behaviours expected after the change
  • Revise KPIs to measure the outcomes the change was designed to deliver (e.g., case quality scores instead of volume metrics)
  • Include change adoption as an explicit dimension in performance evaluations for the first 6-12 months after go-live
  • Align bonus and recognition criteria to the new expectations

Manager Reinforcement

First-line managers are the most important reinforcement mechanism after performance management. They set the tone for their teams. If the manager consistently reinforces the new way of working — asking about the new metrics, recognising good adoption, coaching people who are struggling, and modelling the new behaviours themselves — the team will follow. If the manager signals that the old way is acceptable, the team will revert.

Manager reinforcement requires:

  • Manager training on their reinforcement role — not just training on the new process, but training on how to coach, recognise, and sustain adoption within their team
  • Regular check-ins between programme leadership and front-line managers to identify sustaining challenges and provide support
  • Manager accountability for their team's adoption metrics

Recognition and Celebration

Recognising and celebrating people who have successfully adopted the new way of working creates positive social reinforcement. In banking, recognition might include:

  • Public acknowledgement in team meetings or town halls for individuals or teams that have achieved adoption milestones
  • Change champion awards recognising the contributions of change champions during the transition
  • Success stories — sharing real examples of how the new way of working has improved outcomes, reduced errors, or enabled better results. These stories, told by the practitioners themselves, are more powerful than any programme communication.
  • Career development opportunities for early adopters — demonstrating that embracing change leads to professional advancement

Process and Procedural Embedding

The new way of working must be formally embedded into the organisation's process framework:

  • Standard operating procedures (SOPs) must be updated to reflect the new processes before go-live, and the old SOPs must be formally retired
  • Quality assurance (QA) and compliance monitoring must be aligned to the new processes — if QA continues to check against old process standards, it reinforces the old behaviours
  • Onboarding and induction programmes for new joiners must teach the new way of working from day one. If new joiners are trained on outdated processes, the change is already being undermined.
  • Regulatory and compliance documentation must be updated to reflect the new control framework, process flows, and risk assessments

System and Technology Reinforcement

  • Decommission old systems as soon as practically possible. The longer old systems remain available, the more people will use them as workarounds. Plan a clear decommissioning timeline and communicate it in advance.
  • Remove access to old tools — withdraw login credentials, archive old templates, and disable old report generators
  • Use system-enforced workflows where possible — if the new system requires certain steps to be completed in a specific order, it prevents people from reverting to old shortcuts

Reinforcement Mechanisms for Sustaining Change

  • Performance management: align objectives and KPIs
  • Manager reinforcement: regular check-ins and accountability
  • Recognition: celebrate adoption and early adopters
  • Process embedding: update SOPs and QA procedures
  • System reinforcement: decommission legacy systems, enforce new workflows

Metrics and Monitoring for Adoption

You cannot sustain what you do not measure. Post-go-live adoption monitoring should track both leading indicators (early signals of adoption health) and lagging indicators (business outcomes):

Leading Indicators

  • System usage metrics — login frequency, feature usage, session duration. Are people actually using the new system? Are they using all the features or only a subset?
  • Process compliance — are people following the new process, or are workarounds emerging? QA sampling can reveal whether the documented process is being followed in practice.
  • Training completion and assessment scores — have all staff completed the required training? Did they achieve the required proficiency levels?
  • Support ticket volume and type — a high volume of support tickets in the early weeks is normal and expected. But if the volume remains high after 6-8 weeks, or if tickets are clustering around specific issues, it indicates a sustaining problem that needs intervention.
  • Sentiment and engagement — pulse surveys measuring confidence, satisfaction, and perceived support. A declining sentiment trend is an early warning of adoption fatigue.
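Leading indicators like these lend themselves to simple automated health checks. The sketch below is illustrative only: the metric names, thresholds, and the `adoption_warnings` function are hypothetical placeholders that a real programme team would calibrate against its own baseline data.

```python
from dataclasses import dataclass

@dataclass
class LeadingIndicators:
    """Weekly snapshot of adoption health (all names are illustrative)."""
    active_user_pct: float          # % of trained staff using the system weekly
    process_compliance_pct: float   # % of QA samples following the new process
    training_completion_pct: float  # % of staff who completed required training
    tickets_per_100_users: float    # open support tickets per 100 users
    sentiment_score: float          # pulse-survey average on a 1-5 scale

def adoption_warnings(week: int, m: LeadingIndicators) -> list[str]:
    """Return early-warning flags; thresholds are placeholder values."""
    flags = []
    if m.active_user_pct < 80:
        flags.append("System usage below 80% of trained staff")
    if m.process_compliance_pct < 90:
        flags.append("QA sampling suggests workarounds are emerging")
    if m.training_completion_pct < 100:
        flags.append("Training not yet complete for all staff")
    # High ticket volume is normal early on; flag only if it persists past ~week 8
    if week > 8 and m.tickets_per_100_users > 10:
        flags.append("Support ticket volume still elevated after week 8")
    if m.sentiment_score < 3.0:
        flags.append("Pulse-survey sentiment trending negative")
    return flags
```

A weekly snapshot with healthy numbers returns no flags; one with low usage, slipping compliance, persistent tickets, and declining sentiment would surface several warnings for the reinforcement team to investigate.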

Lagging Indicators

  • Operational KPIs — are the business outcomes improving? Processing times, error rates, STP rates, exception volumes, and other operational metrics should be trending towards the targets that justified the change.
  • Regulatory and compliance metrics — are regulatory reporting deadlines being met? Are compliance monitoring results improving? In banking, these metrics carry particular weight because they directly affect the bank's regulatory standing.
  • Customer impact metrics — are customer-facing outcomes improving? Faster processing, fewer errors, better response times, and improved customer satisfaction.
  • Cost metrics — is the change delivering the expected cost reductions or efficiency improvements?

Monitoring Cadence

For a major banking transformation, the recommended monitoring cadence is:

  • Daily during the first two weeks post-go-live — monitoring system stability, critical process execution, and escalated issues
  • Weekly from weeks 3-8 — monitoring adoption metrics, training completion, and emerging issues
  • Fortnightly from weeks 9-16 — monitoring adoption trends, KPI progress, and reinforcement effectiveness
  • Monthly from months 5-6 — monitoring sustained adoption and business outcome delivery
  • Formal reviews at 30, 60, 90, and 180 days post-go-live — structured reviews with senior leadership, examining adoption data, KPI trends, and outstanding issues
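As an illustration, the cadence above can be turned into a concrete review calendar from the go-live date. This is a minimal sketch: the function name is hypothetical, and months are approximated as 30-day blocks.

```python
from datetime import date, timedelta

def monitoring_schedule(go_live: date) -> dict[str, list[date]]:
    """Generate checkpoint dates for the recommended post-go-live cadence:
    daily (weeks 1-2), weekly (weeks 3-8), fortnightly (weeks 9-16),
    monthly (months 5-6), plus formal reviews at 30/60/90/180 days."""
    return {
        "daily": [go_live + timedelta(days=d) for d in range(14)],
        "weekly": [go_live + timedelta(weeks=w) for w in range(2, 8)],
        "fortnightly": [go_live + timedelta(weeks=w) for w in range(8, 16, 2)],
        "monthly": [go_live + timedelta(days=30 * m) for m in (5, 6)],
        "formal_reviews": [go_live + timedelta(days=d) for d in (30, 60, 90, 180)],
    }
```

Publishing the calendar in advance, with named owners for each checkpoint, makes the monitoring commitment visible and harder to quietly drop once the programme team disbands.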

Change Adoption: Leading vs Lagging Indicators

  • System usage (leading): login frequency and feature adoption rates
  • Process compliance (leading): QA sampling and workaround detection
  • Training completion (leading): assessment scores and certification rates
  • Operational KPIs (lagging): processing time, error rates, STP rates
  • Cost metrics (lagging): headcount, overtime, rework costs
  • Client satisfaction (lagging): NPS and complaint volumes

Building a Continuous Improvement Culture

Sustaining change is not just about preventing reversion — it is about creating the conditions for continuous improvement. Once the new way of working is embedded, the organisation should be positioned to identify further improvements, optimise processes, and drive ongoing efficiency gains.

In banking, continuous improvement after a transformation programme involves:

Establishing feedback mechanisms. Create structured channels for staff to suggest improvements, report issues, and share innovative practices. In banking operations, this might include regular improvement forums, suggestion schemes, or Lean-style kaizen events focused on specific process areas.

Empowering front-line improvement. Give operations teams the authority and tools to make small, incremental improvements within their area of responsibility. Not every improvement requires a programme — many of the most impactful improvements come from front-line staff who see opportunities that are invisible from above.

Regular process reviews. Schedule periodic reviews (quarterly or semi-annually) of key processes to assess performance, identify emerging issues, and evaluate improvement opportunities. In banking, these reviews should involve process owners, operations teams, and the compliance function.

Connecting to the broader improvement framework. Link post-change continuous improvement to the organisation's broader operational excellence or Lean Six Sigma framework. Changes that have been successfully embedded become the baseline for the next round of improvement — the as-is state for the next improvement cycle.

Post-Implementation Reviews

A post-implementation review (PIR) is a structured retrospective conducted after the change has been embedded, typically 3-6 months after go-live. Its purpose is to capture lessons learned that improve the organisation's change management capability for future initiatives.

A PIR should examine:

What went well. Which change management activities were most effective? Which communication channels resonated? Which training methods produced the best results? What worked in resistance management? These successes should be documented and replicated in future programmes.

What did not go well. Where did the change management approach fall short? Were there populations that were inadequately supported? Was resistance underestimated? Was training insufficient? Was communication too late, too vague, or too infrequent? Honest assessment of failures is essential for organisational learning.

What was unexpected. What challenges emerged that were not anticipated? Were there stakeholders who were not identified? Were there impacts that were not assessed? Did external factors (regulatory changes, market events, organisational restructuring) affect the programme in ways that were not foreseen?

Recommendations for future programmes. Based on the lessons learned, what should future change programmes do differently? These recommendations should be specific and actionable — not generic platitudes like "communicate more" but specific guidance like "schedule manager briefings 48 hours before team communications to ensure managers can prepare."

The PIR should involve all key programme stakeholders — the programme team, the change management team, middle managers, change champions, and representatives from the affected populations. It should be facilitated by someone independent of the programme to encourage honest reflection.

In banking, PIR outputs should be submitted to the change management governance function and incorporated into the organisation's change management methodology — ensuring that every programme benefits from the accumulated experience of previous initiatives.

Lessons Learned Documentation

Lessons learned are only valuable if they are captured, stored accessibly, and actually consulted by future programme teams. In banking, best practice includes:

A centralised lessons learned repository — a searchable database or document library where PIR outputs are stored and categorised by change type, affected function, and key themes.

Mandatory consultation — requiring new programme teams to review relevant lessons learned as part of their programme initiation process. This should be a governance checkpoint, not a suggestion.

Pattern identification — periodically reviewing accumulated lessons to identify systemic themes. If multiple programmes report that "middle manager engagement was inadequate," this is a systemic issue that needs to be addressed through manager development, not just individual programme effort.

Integration into methodology — updating the organisation's change management methodology and templates based on lessons learned. If multiple PIRs identify that the standard communication plan template does not include sufficient guidance on regulatory communications, the template should be updated.

Banking Example: Embedding New Ways of Working After a Regulatory Reporting Transformation

A large European bank had completed a major transformation of its regulatory reporting function. The programme had migrated from a fragmented, spreadsheet-dependent reporting process to an integrated regulatory reporting platform that automated data sourcing, calculation, validation, and submission for key regulatory returns including COREP, FINREP, and LCR/NSFR reports.

The programme had been delivered successfully — the new platform was live, staff had been trained, and the first regulatory submissions using the new system had been completed on time and without material errors. The programme team declared the project complete and began to wind down.

Six months later, an internal audit review revealed concerning findings:

  • 35% of analysts were maintaining shadow spreadsheets alongside the new platform — manually cross-checking the platform's calculations against their own models before submitting. This duplication of effort was consuming approximately 400 person-hours per reporting cycle and undermining the efficiency benefits the programme was designed to deliver.
  • Quality assurance processes were still based on the old methodology — checking individual data points rather than the platform's automated validation rules. This meant that the QA function was not leveraging the new platform's capabilities.
  • New joiners who had started after the programme were being trained by colleagues using a mix of new and old practices — perpetuating the shadow processes.
  • The old spreadsheet templates were still available on the shared drive. Nobody had removed them.

The bank recognised that the change had been technically delivered but not behaviourally embedded. A six-month post-go-live reinforcement programme was designed and implemented:

Month 1: Diagnosis and planning. The change management team (reconvened from the original programme) conducted interviews, observations, and data analysis to understand the root causes of the shadow processes. The primary root cause was trust — analysts did not fully trust the new platform's calculations and were using their spreadsheets as a safety net. Secondary causes included habit (the spreadsheets were deeply ingrained in daily routines) and incomplete decommissioning (the old templates were still available and accessible).

Month 2: Trust-building. The programme team designed a structured reconciliation exercise. For one reporting cycle, a dedicated team ran the regulatory calculations through both the old spreadsheet methodology and the new platform, with a detailed line-by-line comparison. The results were shared with all reporting analysts: the new platform matched or exceeded the spreadsheet calculations in 99.7% of data points, with the 0.3% variance traced to a rounding difference that had no regulatory impact. This evidence-based demonstration addressed the trust deficit directly.

Month 3: Decommissioning and process embedding. The old spreadsheet templates were formally archived and removed from the shared drive. Access to the legacy data sources used by the spreadsheets was withdrawn (after appropriate data retention procedures). Standard operating procedures were updated to explicitly prohibit shadow processes and to describe the new platform's validation workflow as the sole approved methodology. The QA process was redesigned to leverage the platform's automated validation, with human review focused on exception analysis rather than line-by-line checking.

Month 4: Performance management alignment. Performance objectives for all reporting analysts were updated to include adoption metrics: "Utilises the regulatory reporting platform as the sole tool for data preparation, calculation, and validation — with no reliance on shadow processes or manual workarounds." Team leads were given specific adoption targets for their teams, and their performance reviews included an assessment of their effectiveness in reinforcing the new way of working.

Month 5: Training refresh and new joiner alignment. A refresher training session was delivered to all analysts, focusing on the platform features that were being underutilised (particularly the automated validation and exception analysis tools). The new joiner onboarding programme was updated to train exclusively on the new platform — eliminating any reference to the legacy methodology.

Month 6: Recognition and celebration. Teams that had achieved 100% platform adoption were publicly recognised. Two analysts who had developed innovative workflow improvements using the platform's advanced features were invited to present at the department town hall. A "lessons learned" session captured the experience for future programmes.

Results at the six-month mark: Shadow spreadsheet usage had dropped from 35% to 2% (the remaining two analysts had been on extended leave during the reinforcement programme and were supported individually on their return). QA process cycle time had reduced by 40% through adoption of automated validation. New joiner onboarding time had fallen from 12 weeks to 8 weeks because there was now a single, clear methodology to learn. The Head of Regulatory Reporting reported to the steering committee that the benefits case — which had been underperforming at the six-month mark — was now on track, with the full efficiency benefit expected to be realised by month 12.

The lesson is clear: go-live is not the finish line. The programme that declares victory at go-live is the programme that finds its benefits eroding six months later. Sustained reinforcement — through trust-building, decommissioning, performance alignment, training refresh, and recognition — is what turns a technically successful project into a genuinely embedded transformation.

This concludes the seven learning modules of the Change Management for Financial Services course. You are now ready to take the final exam to earn your Change Management Practitioner certification.

Module Quiz

5 questions — Pass mark: 60%

Q1. What is the MOST common reason that successfully implemented changes revert to old ways of working in banking?

Q2. Which of the following is the MOST effective reinforcement mechanism for embedding new behaviours after a banking transformation?

Q3. What is the recommended minimum duration for post-go-live adoption monitoring in a major banking transformation?

Q4. Why is it important to formally decommission old systems and remove access to legacy tools after a migration?

Q5. What is the primary purpose of a post-implementation review (PIR) in a change management context?