The Improve Phase
By this point in DMAIC, you have defined the problem, measured baseline performance, and analyzed the data to identify verified root causes. Now comes the phase that everyone has been waiting for — Improve. This is where you design, evaluate, and test solutions that directly address the root causes you validated in the Analyze phase.
The Improve phase transforms data-driven insights into tangible process changes. The goal is not to jump to the first idea that sounds good, but to generate multiple options, prioritize objectively, and test rigorously before committing to full-scale implementation.
A common mistake in banking operations is skipping straight from problem identification to solution implementation. Teams may default to what worked last time, or gravitate toward technology purchases without testing whether they actually solve the problem. The Improve phase provides the discipline to avoid these traps.
Solution Generation
Before you can select the best solution, you need a diverse set of options to choose from. Effective solution generation involves structured creativity, not just a room full of people shouting ideas. Here are four techniques that work well in banking environments:
Traditional brainstorming — A facilitated session where the team generates as many ideas as possible without judgment. The key rule is to defer criticism. Capture every idea, no matter how unconventional, and evaluate later. This works best with a clear prompt, such as "How might we eliminate manual data entry in the COREP submission process?"
Brainwriting (6-3-5) — Six participants each write down three ideas in five minutes, then pass their paper to the next person, who builds on those ideas. The process runs for six rounds in total, yielding up to 108 ideas (6 participants × 3 ideas × 6 rounds). Brainwriting is particularly effective in banking teams where junior staff may hesitate to challenge senior colleagues in open discussion. The written format levels the playing field and often produces more ideas than verbal brainstorming.
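The 6-3-5 rotation can be sketched as a simple simulation; this is an illustrative model only, assuming every participant contributes exactly three ideas per round and sheets rotate one position each round:

```python
def brainwriting_635(participants, rounds=6, ideas_per_round=3):
    """Simulate the 6-3-5 paper rotation: each round, every participant
    adds ideas to the sheet they currently hold, then passes it on."""
    sheets = {p: [] for p in participants}  # one sheet per starting owner
    n = len(participants)
    for round_no in range(rounds):
        for holder_idx, holder in enumerate(participants):
            # The sheet each person holds shifts by one position per round.
            sheet_owner = participants[(holder_idx + round_no) % n]
            for i in range(ideas_per_round):
                sheets[sheet_owner].append(
                    f"{holder}: idea {round_no + 1}.{i + 1}"
                )
    return sheets

sheets = brainwriting_635(["Ana", "Ben", "Cara", "Dev", "Eli", "Fay"])
total = sum(len(ideas) for ideas in sheets.values())
# Six participants x three ideas x six rounds = 108 candidate ideas
```

The arithmetic is the point: a single one-hour session with six people produces over a hundred written candidate ideas, each building on earlier ones.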
Benchmarking — Investigate how other banks, financial institutions, or even other industries handle the same problem. If a peer institution automated their regulatory data aggregation and cut processing time by 70%, that provides a proven reference point for your own solution design.
Best practice review — Consult industry frameworks, vendor white papers, and professional bodies such as ISDA, SWIFT, or the Basel Committee for established approaches. In many cases, the solution to a banking process problem already exists in documented best practice.
Prioritization Matrix
With a list of potential solutions in hand, you need a structured way to decide which ones to pursue. The prioritization matrix — also called an impact-effort matrix — is a simple but powerful tool. It plots solutions on a 2x2 grid:
Quick wins (high impact, low effort) — Implement these first. They build momentum and demonstrate early results. Example: standardizing the reconciliation break categorization to eliminate rework from inconsistent classifications.
Major projects (high impact, high effort) — These are worth the investment but require careful planning. Example: implementing automated data feeds from SWIFT to replace manual data entry in corporate actions processing.
Fill-ins (low impact, low effort) — Do these when you have spare capacity. They add incremental value but should not distract from high-impact work.
Time sinks (low impact, high effort) — Avoid these. They consume resources without meaningful return.
Consider a banking example: a team identified five potential solutions for reducing errors in regulatory reporting. After scoring each on impact (1-5) and effort (1-5), the prioritization matrix revealed that automated data validation was the clear quick win, while a full system replacement was a major project better suited to a longer-term roadmap. Two other ideas fell into the time sink quadrant and were set aside.
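The quadrant logic can be expressed as a small scoring function. This sketch uses the 1-5 scales from the example above with a midpoint cutoff of 3; the solution names and scores are hypothetical illustrations, not the team's actual data:

```python
def quadrant(impact, effort, cutoff=3):
    """Classify a solution on the impact-effort (prioritization) matrix.

    Scores above the cutoff count as 'high' on 1-5 scales.
    """
    high_impact = impact > cutoff
    high_effort = effort > cutoff
    if high_impact and not high_effort:
        return "quick win"
    if high_impact and high_effort:
        return "major project"
    if not high_impact and not high_effort:
        return "fill-in"
    return "time sink"

# Hypothetical (impact, effort) scores for five candidate solutions
solutions = {
    "Automated data validation": (5, 2),
    "Full system replacement": (5, 5),
    "Checklist update": (2, 1),
    "Bespoke reporting portal": (2, 5),
    "Manual double-keying": (1, 4),
}
ranked = {name: quadrant(i, e) for name, (i, e) in solutions.items()}
```

Scoring in code rather than on sticky notes is optional, but it forces the team to commit to explicit numbers, which makes the prioritization discussion more objective.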
Pilot Testing
Once you have prioritized your solutions, pilot testing lets you validate them in practice before committing to a full rollout. A well-designed pilot answers one critical question: does this solution actually work in our environment?
Every pilot should define four elements clearly:
Scope — Which specific team, process, or product will be included? Keep it narrow enough to manage but representative enough to produce meaningful results. For example, pilot with the London equities team rather than rolling out across all global desks simultaneously.
Duration — Typically two to four weeks for banking process improvements. Long enough to encounter normal variation in workload, short enough to maintain urgency and focus.
Success criteria — What specific, measurable outcomes will determine whether the pilot succeeded? Define these before the pilot begins. For example: "Reduce manual data entry steps from 14 to 3" or "Achieve a 90% auto-match rate on cash reconciliation items."
Measurement plan — How will you collect data during the pilot? Who is responsible for recording results? What tools or reports will you use?
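Because success criteria are defined before the pilot begins, evaluating the pilot at close becomes a mechanical comparison of measured results against targets. A minimal sketch, assuming criteria are recorded as target values (the metric names and figures below are illustrative, echoing the examples above):

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """A success criterion agreed before the pilot starts."""
    name: str
    target: float
    higher_is_better: bool = True

def evaluate_pilot(criteria, results):
    """Compare measured pilot results against the predefined criteria."""
    outcomes = {}
    for c in criteria:
        value = results[c.name]
        met = value >= c.target if c.higher_is_better else value <= c.target
        outcomes[c.name] = met
    return outcomes

# Illustrative criteria: 90% auto-match rate, at most 3 manual entry steps
criteria = [
    Criterion("auto_match_rate", target=0.90),
    Criterion("manual_entry_steps", target=3, higher_is_better=False),
]
results = {"auto_match_rate": 0.93, "manual_entry_steps": 3}
outcomes = evaluate_pilot(criteria, results)  # both criteria met here
```

The design point is that `evaluate_pilot` takes the criteria as input rather than hard-coding them, mirroring the rule that success criteria are fixed before the pilot, not negotiated afterwards.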
A practical banking example: a regulatory reporting team wanted to introduce AI-driven data validation on their COREP reports. Rather than deploying across all templates at once, they piloted the tool on a single report template for one quarter. The pilot measured error detection rate, time saved per submission cycle, and user feedback. Results showed a 65% reduction in data quality issues, providing the evidence needed to justify broader rollout.
Implementation Planning
When the pilot confirms that a solution works, you need a structured plan for full-scale implementation. Key components include:
RACI matrix — Define who is Responsible, Accountable, Consulted, and Informed for each rollout activity. In banking, this often spans operations, technology, compliance, and risk teams.
Change management — Process changes affect people. Communicate the "why" behind the change early. Address concerns about job impact directly and honestly. Involve frontline staff in refining the solution.
Training needs — Identify what new skills or knowledge staff need. Build training materials, run workshops, and provide a support channel for questions during the transition period.
Communication plan — Determine who needs to know what, and when. Stakeholders, regulators, clients, and partner teams may all require notification at different stages of the rollout.
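A RACI matrix is, at its core, a mapping from rollout activities to role assignments, and two consistency rules are easy to check automatically: every activity needs exactly one Accountable owner and at least one Responsible party. A sketch with hypothetical activities and teams:

```python
# One letter per team: Responsible, Accountable, Consulted, Informed
raci = {
    "Deploy validation tool": {
        "Operations": "R", "Technology": "A", "Compliance": "C", "Risk": "I",
    },
    "Update SOP documentation": {
        "Operations": "R", "Technology": "I", "Compliance": "A", "Risk": "I",
    },
    "Regulator notification": {
        "Operations": "C", "Technology": "I", "Compliance": "R", "Risk": "A",
    },
}

def check_raci(matrix):
    """Flag activities that violate the basic RACI rules:
    exactly one 'A' and at least one 'R' per activity."""
    problems = []
    for activity, roles in matrix.items():
        letters = list(roles.values())
        if letters.count("A") != 1:
            problems.append(f"{activity}: needs exactly one 'A'")
        if letters.count("R") < 1:
            problems.append(f"{activity}: needs at least one 'R'")
    return problems
```

Running `check_raci` before the rollout kicks off catches the classic failure mode of an activity with several accountable owners, which in practice means none.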
Banking Example: Automating Regulatory Reporting
A mid-tier bank's finance team spent 10 working days each month compiling data for a single regulatory return. The process involved extracting data from seven source systems, manually reconciling figures in spreadsheets, and formatting the output for submission. Errors were common, and late submissions had triggered regulatory warnings.
After analyzing the root causes — manual data extraction, inconsistent source data, and lack of validation — the team identified automated data aggregation as the highest-priority solution. They piloted an automated extract-and-reconcile workflow on one regulatory return for two monthly cycles. The results were clear: processing time dropped from 10 days to 2 days, and submission errors fell to zero during the pilot period.
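The reconcile step in a workflow like this reduces to a figure-by-figure comparison between a source-system extract and the master ledger, flagging mismatches and missing line items as breaks. A minimal sketch; the field names, amounts, and tolerance are assumptions for illustration:

```python
def reconcile(source_figures, master_figures, tolerance=0.01):
    """Return breaks: line items missing from one side, or whose values
    differ across the two extracts by more than the tolerance."""
    breaks = []
    for key in sorted(set(source_figures) | set(master_figures)):
        a = source_figures.get(key)
        b = master_figures.get(key)
        if a is None or b is None:
            breaks.append((key, a, b, "missing"))
        elif abs(a - b) > tolerance:
            breaks.append((key, a, b, "mismatch"))
    return breaks

src = {"loans_gross": 1_250_000.00, "deposits": 980_500.25,
       "derivatives": 41_000.00}
mst = {"loans_gross": 1_250_000.00, "deposits": 980_499.00,
       "derivatives": 41_000.00}
issues = reconcile(src, mst)  # the 1.25 difference on deposits is a break
```

Automating this comparison is what collapses the manual spreadsheet reconciliation: the analyst's time shifts from finding breaks to resolving them.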
Armed with these results, the team built a phased rollout plan covering all major regulatory returns over the following two quarters. Each subsequent return was onboarded using the same pilot-then-scale approach, with lessons from each phase feeding into the next.
In the next module, we will explore how to make these improvements stick through the Control phase — control plans, SOPs, control charts, and the critical handoff to process owners.