Technology Landscape Assessment
The technology dimension of a Target Operating Model defines the systems, platforms, infrastructure, and integration architecture that enable the operating model to function. In banking, technology is not merely a support function — it is a critical determinant of operational capability, cost, risk, and competitive positioning. A bank's technology landscape is typically one of the most complex and expensive components of its operating model, and technology rationalisation is often the single largest source of cost savings in a TOM programme.
A technology landscape assessment is the starting point for technology design within the TOM. It creates a complete inventory of the current technology estate, documenting:
Application inventory. Every system used in operations — core banking platforms, trading systems, settlement engines, reconciliation tools, payment processing systems, reporting platforms, CRM systems, workflow tools, and desktop applications. For each application:
- Business capability supported
- Number of users
- Transaction volumes processed
- Annual cost (licence, support, infrastructure, development)
- Technology age and technical health (is it current, approaching end-of-life, or already unsupported?)
- Vendor and contract status
- Integration points (which systems does it connect to, and how?)
- Data flows (what data does it produce, consume, and store?)
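The inventory items above can be captured in a simple structured record. A minimal sketch, with illustrative field names, systems, and costs (not drawn from any real inventory):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Application:
    """One record in the application inventory (fields illustrative)."""
    name: str
    capability: str              # business capability supported
    users: int
    annual_volume: int           # transactions processed per year
    annual_cost: float           # licence + support + infrastructure + development
    lifecycle: str               # "current", "end-of-life", or "unsupported"
    vendor: Optional[str]        # None for in-house tools
    integrates_with: List[str] = field(default_factory=list)

inventory = [
    Application("TLM Recon", "cash reconciliation", 45, 2_000_000,
                850_000.0, "end-of-life", "SmartStream",
                ["core banking", "nostro gateway"]),
    Application("Macro toolkit", "position reconciliation", 6, 50_000,
                120_000.0, "unsupported", None, ["settlement platform"]),
]

# Roll-ups the assessment typically needs: total run cost and at-risk systems
total_cost = sum(app.annual_cost for app in inventory)
at_risk = [app.name for app in inventory if app.lifecycle != "current"]
```

Keeping the inventory in a structured form like this (rather than free-text slides) is what makes the later health assessment and rationalisation analysis mechanical rather than anecdotal.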
Shadow IT. A critical but often overlooked component. Shadow IT includes the spreadsheets, Access databases, locally developed macros, and unofficial tools that staff have built to compensate for gaps in formal systems. In many banks, shadow IT represents 20-40% of the actual technology landscape and introduces significant operational risk (no version control, no testing, no business continuity). The TOM must decide which shadow IT capabilities to formalise (by building them into proper systems) and which to eliminate.
Infrastructure. On-premise data centres, cloud infrastructure, network connectivity, disaster recovery facilities, and end-user computing. The assessment documents capacity, utilisation, cost, and the age and supportability of infrastructure components.
Integration architecture. How systems are connected — point-to-point interfaces, batch file transfers, messaging middleware, enterprise service buses, and API gateways. A complex, fragile integration landscape is often the single biggest constraint on operational agility in banking.
Technology Health Assessment
Each application is assessed against a standard set of criteria to determine its suitability for the target state:
- Strategic fit: Does this system align with the target operating model?
- Functional completeness: Does the system meet current and anticipated business requirements?
- Technical health: Is the platform on a supported version? Is the vendor investing in the product? Are there known security vulnerabilities?
- Scalability: Can the system handle projected growth in volumes, users, and functionality?
- Total cost of ownership (TCO): What is the full cost of running this system, including licence, support, infrastructure, development, and operational overhead?
- Risk: What operational, technology, and vendor risks does this system introduce?
Systems are classified into four categories based on this assessment:
- Retain and invest: Strategic systems that align with the target model and require investment to meet future requirements
- Retain and maintain: Systems that are adequate for the target model but do not require significant investment
- Replace: Systems that do not fit the target model and must be replaced by a new solution
- Decommission: Systems that are redundant and can be retired, with their functionality absorbed by other systems
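The mapping from assessment scores to the four categories can be sketched as a simple decision rule. The 1 to 5 scales and thresholds below are illustrative assumptions, not a standard:

```python
def classify(strategic_fit: int, technical_health: int, redundant: bool) -> str:
    """Map assessment scores (1-5 scales, illustrative thresholds) to a
    disposition category. Redundant systems are decommissioned outright."""
    if redundant:
        return "decommission"
    if strategic_fit >= 4:
        # Strategic systems: invest if the platform needs work, else maintain
        return "retain and invest" if technical_health <= 3 else "retain and maintain"
    if strategic_fit >= 3 and technical_health >= 3:
        return "retain and maintain"
    return "replace"

print(classify(5, 2, False))  # strategic but ageing -> "retain and invest"
print(classify(2, 4, False))  # healthy but off-strategy -> "replace"
print(classify(1, 1, True))   # redundant -> "decommission"
```

In practice the classification also weighs TCO and risk scores, but the core logic is the same: strategic fit decides retain versus replace, and technical health decides whether retention requires investment.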
Buy vs Build vs Partner
For each technology capability in the target state, the TOM must determine whether the bank should buy (acquire a vendor product), build (develop a custom solution in-house), or partner (use a third-party service, including SaaS and managed services).
Buy
Buying a vendor product is appropriate when:
- The capability is non-differentiating — it does not provide competitive advantage
- Established vendor products exist that meet 80%+ of requirements
- The bank wants to benefit from vendor investment in product development
- Speed to market is important — buying is typically faster than building
- The bank lacks the internal development capability to build and maintain a custom solution
Examples in banking: reconciliation platforms (SmartStream, Gresham, Duco), payment processing (Finastra, FIS, Temenos), risk management (Murex, Calypso), regulatory reporting (Axiom, Regnology).
Build
Building a custom solution is appropriate when:
- The capability is strategically differentiating — it provides competitive advantage that cannot be achieved with a standard vendor product
- No vendor product exists that meets the bank's specific requirements
- The bank has strong internal development capability and can commit to ongoing maintenance
- The function is so central to the bank's operations that dependence on an external vendor is unacceptable
Examples in banking: proprietary trading algorithms, bespoke client portals, custom pricing engines, specialised risk models.
Partner
Partnering — including SaaS, managed services, and utility models — is appropriate when:
- The bank wants to avoid the cost and complexity of operating the platform itself
- The capability is available as a utility service shared across the industry (e.g., KYC utilities, trade reporting platforms)
- The bank lacks the scale to operate the capability cost-effectively on its own
- Rapid deployment is required and the partner can provide an operational service faster than the bank can implement a product
Examples in banking: KYC utilities (Refinitiv, Moody's), trade repositories (DTCC, Regis-TR), cloud infrastructure (AWS, Azure, GCP), managed reconciliation services.
The Decision Matrix
A structured decision matrix scores each option against weighted criteria:
| Criterion | Weight | Buy | Build | Partner |
|---|---|---|---|---|
| Strategic differentiation | 25% | Low | High | Low |
| Time to market | 20% | Medium | Low | High |
| Total cost of ownership | 20% | Medium | High | Low |
| Control and customisation | 15% | Medium | High | Low |
| Vendor/partner risk | 10% | Medium | Low | High |
| Internal capability required | 10% | Medium | High | Low |
Each option is scored on each criterion, with the scale inverted for criteria where a low rating is favourable (total cost of ownership, vendor/partner risk, internal capability required) so that a higher score is always better. The option with the highest weighted score is the recommended approach, subject to strategic validation and risk assessment.
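The weighted scoring can be computed mechanically. The sketch below encodes the generic ratings from the table, mapping Low/Medium/High to 1/2/3 and inverting the cost, risk, and capability rows so that a higher score is always better; with these generic ratings the options score close together, which is why in practice each option is rated for the specific capability under assessment:

```python
LEVELS = {"Low": 1, "Medium": 2, "High": 3}
# Criteria where a "Low" rating is favourable: invert (4 - raw) before weighting
INVERTED = {"Total cost of ownership", "Vendor/partner risk",
            "Internal capability required"}

matrix = {
    "Strategic differentiation":    (0.25, {"Buy": "Low",    "Build": "High", "Partner": "Low"}),
    "Time to market":               (0.20, {"Buy": "Medium", "Build": "Low",  "Partner": "High"}),
    "Total cost of ownership":      (0.20, {"Buy": "Medium", "Build": "High", "Partner": "Low"}),
    "Control and customisation":    (0.15, {"Buy": "Medium", "Build": "High", "Partner": "Low"}),
    "Vendor/partner risk":          (0.10, {"Buy": "Medium", "Build": "Low",  "Partner": "High"}),
    "Internal capability required": (0.10, {"Buy": "Medium", "Build": "High", "Partner": "Low"}),
}

def weighted_score(option: str) -> float:
    total = 0.0
    for criterion, (weight, ratings) in matrix.items():
        raw = LEVELS[ratings[option]]
        score = 4 - raw if criterion in INVERTED else raw
        total += weight * score
    return round(total, 2)

scores = {opt: weighted_score(opt) for opt in ("Buy", "Build", "Partner")}
```

A close result like this is itself informative: it signals that the decision hinges on the strategic-differentiation judgement rather than on cost or speed alone.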
Platform Rationalisation
Platform rationalisation is the process of reducing the number of technology platforms in the bank's landscape — consolidating from multiple systems performing similar functions to a single platform per function. This is one of the most impactful (and most complex) activities in a TOM technology programme.
Why Rationalise?
The case for rationalisation is compelling:
- Cost reduction: Each platform incurs licence fees, support costs, infrastructure costs, and development costs. Eliminating redundant platforms reduces direct costs by 15-40%.
- Operational simplification: Fewer platforms mean fewer interfaces, fewer data flows, fewer reconciliation points, and fewer systems for staff to learn.
- Improved data consistency: When multiple platforms maintain overlapping data, inconsistencies are inevitable. A single platform provides a single version of the truth.
- Reduced operational risk: Each platform is a potential point of failure. Fewer platforms mean fewer failure points, simpler disaster recovery, and more focused operational support.
- Faster change delivery: Changes to a single platform are simpler and faster than changes to multiple platforms that must remain synchronised.
The Rationalisation Process
- Map capabilities to platforms: Using the capability map from Module 3, identify which platforms serve which capabilities. This reveals duplication — multiple platforms serving the same capability.
- Assess platform fit: For each capability with multiple platforms, assess which platform best fits the target state requirements (using the technology health criteria above).
- Design the target landscape: Define the target platform for each capability, specifying which platforms are retained, which are decommissioned, and which are replaced by new solutions.
- Plan the migration: For each platform decommission, plan the migration of data, processes, and users to the target platform. This is where the real complexity lies.
- Execute and decommission: Migrate in phases, validate each migration, and formally decommission retired platforms (including revoking licences, archiving data, and updating documentation).
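The first step — mapping capabilities to platforms to surface duplication — can be expressed as a simple inversion of the inventory. The platform and capability names below are illustrative:

```python
from collections import defaultdict

# Platform -> capabilities served, drawn from the application inventory
platform_capabilities = {
    "Platform A":        ["trade reconciliation"],
    "Platform B":        ["cash reconciliation"],
    "Platform C":        ["position reconciliation"],
    "Legacy recon tool": ["trade reconciliation", "cash reconciliation"],
}

# Invert to capability -> platforms
capability_platforms = defaultdict(list)
for platform, capabilities in platform_capabilities.items():
    for cap in capabilities:
        capability_platforms[cap].append(platform)

# Step 1 output: capabilities served by more than one platform (duplication)
duplicated = {cap: plats for cap, plats in capability_platforms.items()
              if len(plats) > 1}
```

Each entry in `duplicated` is a rationalisation candidate: one of the listed platforms becomes the target for that capability, and the others are queued for migration and decommissioning.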
API-First Architecture
Modern banking TOM designs increasingly adopt an API-first architecture — designing integration between systems around well-defined, standardised APIs rather than the traditional approach of custom point-to-point interfaces and batch file transfers.
The Problem with Traditional Integration
In many banks, the integration landscape has evolved organically over decades. Systems are connected through:
- Batch file transfers: Flat files exchanged at scheduled intervals (hourly, daily, end-of-day). These create latency, are fragile (failures in one file can cascade), and are difficult to monitor.
- Point-to-point interfaces: Custom-built connections between specific systems. Each interface is bespoke, undocumented, and must be rebuilt when either system changes. A bank with 50 systems can have 500+ point-to-point interfaces.
- Message queues: Middleware-based messaging that provides some decoupling but often lacks standardisation and governance.
This organic integration landscape is one of the biggest sources of operational risk and cost in banking. It is fragile, opaque, expensive to maintain, and a major barrier to change.
The API-First Approach
An API-first architecture replaces organic integration with a structured, standardised approach:
- Standardised interfaces: Every system exposes its capabilities through well-defined APIs with documented contracts (request/response formats, error handling, authentication).
- Reusability: A single API can serve multiple consumers. When a new system needs client data, it calls the existing client data API rather than building a new point-to-point connection.
- Real-time capability: APIs enable real-time, synchronous integration — data is available when requested, not when the next batch runs.
- Version control: APIs are versioned, allowing systems to evolve independently. A new version of an API can be deployed alongside the old version, giving consumers time to migrate.
- Governance: An API gateway provides centralised management, monitoring, security, and rate limiting for all integration traffic.
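The versioning and gateway ideas can be illustrated with a minimal sketch. This is not any specific gateway product's API — just a toy dispatch table showing two versions of a client-data API running side by side while consumers migrate:

```python
def client_data_v1(client_id: str) -> dict:
    """Original contract."""
    return {"id": client_id, "name": "ACME Fund"}

def client_data_v2(client_id: str) -> dict:
    """v2 adds an LEI field without breaking v1 consumers (LEI is a dummy value)."""
    return {"id": client_id, "name": "ACME Fund", "lei": "529900EXAMPLE0000000"}

# Both versions registered concurrently: consumers migrate on their own schedule
ROUTES = {
    ("GET", "/clients/v1"): client_data_v1,
    ("GET", "/clients/v2"): client_data_v2,
}

def gateway(method: str, path: str, client_id: str) -> dict:
    """Central dispatch point -- where authentication, monitoring, and
    rate limiting attach in a real gateway."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return {"error": "unknown route"}
    return handler(client_id)
```

The point of the sketch is structural: because every consumer goes through one dispatch point against a documented contract, a new consumer reuses the existing route instead of creating another point-to-point interface.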
Implementation Considerations
Adopting API-first architecture in banking requires careful planning:
- Legacy system integration: Many core banking systems were not designed to expose APIs. Integration layers, adapters, or API wrappers may be needed to connect legacy systems to the API ecosystem.
- Performance: APIs must meet the performance requirements of real-time operations — latency, throughput, and availability.
- Security: APIs must be secured with appropriate authentication (OAuth 2.0, mutual TLS), authorisation, and encryption, particularly for APIs handling sensitive financial data.
- Market infrastructure connectivity: Integration with market infrastructure (SWIFT, CSDs, CCPs) must conform to the standards and protocols defined by those institutions, which may not align with the bank's preferred API standards.
Data Architecture and Lineage
Data architecture defines how data flows through the organisation, where it is stored, how it is governed, and how it supports operational processes, regulatory reporting, and management decision-making. In banking, data architecture is subject to extensive regulatory requirements and is a critical enabler of operational efficiency.
Data Domains in Banking
A banking data architecture is typically structured around data domains:
- Client data: Client identification, classification, contact details, account details, KYC/AML information
- Security master data: Instrument identification (ISIN, CUSIP, SEDOL), instrument attributes, pricing, corporate actions
- Transaction data: Trades, orders, payments, settlements, and their lifecycle events
- Position data: Holdings, balances, exposures, and collateral
- Reference data: Counterparty data, market data, calendar data, fee schedules, regulatory classifications
- Regulatory data: Data specifically required for regulatory reporting (LEI, MiFID classification, EMIR fields)
Golden Source Strategy
A golden source strategy designates a single authoritative source for each data domain. The golden source is the system of record — the one version of the truth that all downstream systems must consume. This eliminates the data inconsistency that arises when multiple systems maintain their own versions of the same data.
For example:
- Client reference data golden source: CRM system or dedicated client data utility
- Security master golden source: Dedicated security master platform (e.g., Bloomberg, Refinitiv, proprietary)
- Trade data golden source: Order management system or trade booking platform
- Position data golden source: Core banking or custody platform
- Pricing golden source: Market data platform with defined pricing hierarchy
The golden source strategy must define:
- Which system is the golden source for each data domain
- How data flows from the golden source to consuming systems
- What data quality rules are applied at the golden source
- Who is the data owner and data steward for each domain
- How data lineage is documented and maintained
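A golden source strategy can be recorded as a machine-readable registry that downstream systems consult. A minimal sketch, with hypothetical system names, owners, and quality rules:

```python
# Illustrative golden-source registry: one system of record per data domain,
# with an accountable owner and the quality rules applied at source
GOLDEN_SOURCES = {
    "client":          {"system": "CRM", "owner": "Client Data Ops",
                        "rules": ["LEI present", "KYC status current"]},
    "security master": {"system": "Security Master Platform", "owner": "Ref Data",
                        "rules": ["ISIN valid", "price within tolerance"]},
    "position":        {"system": "Custody Platform", "owner": "Settlements",
                        "rules": ["balances reconciled daily"]},
}

def system_of_record(domain: str) -> str:
    """Downstream systems resolve the authoritative source per domain
    instead of maintaining their own copy."""
    entry = GOLDEN_SOURCES.get(domain)
    if entry is None:
        raise KeyError(f"no golden source designated for domain: {domain}")
    return entry["system"]
```

The deliberate failure on an undesignated domain is the useful part: a domain with no golden source is a governance gap that should be surfaced, not silently defaulted.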
Data Lineage
Data lineage traces the journey of data from its point of origin through all transformations, enrichments, and systems to its final consumption point. In banking, data lineage is a regulatory requirement under BCBS 239 (Principles for Effective Risk Data Aggregation and Risk Reporting) and is essential for:
- Regulatory reporting accuracy: Regulators expect banks to demonstrate that reported data can be traced back to its source and that all transformations are documented and validated.
- Error investigation: When a data quality issue is identified, lineage enables rapid identification of where the error was introduced.
- Change impact assessment: When a source system is changed or decommissioned, lineage shows all downstream impacts.
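Change impact assessment in particular reduces to a graph traversal over documented lineage. A sketch with an illustrative producer-to-consumer graph:

```python
from collections import deque

# Edges point from producer to consumer: "trade booking feeds settlement engine"
# (system names illustrative)
LINEAGE = {
    "trade booking":       ["settlement engine", "risk engine"],
    "settlement engine":   ["reconciliation", "regulatory reporting"],
    "risk engine":         ["regulatory reporting"],
    "reconciliation":      [],
    "regulatory reporting": [],
}

def downstream_impact(source: str) -> set:
    """All systems affected if `source` changes or is decommissioned,
    found by breadth-first traversal of the lineage graph."""
    impacted, queue = set(), deque([source])
    while queue:
        node = queue.popleft()
        for consumer in LINEAGE.get(node, []):
            if consumer not in impacted:
                impacted.add(consumer)
                queue.append(consumer)
    return impacted
```

The same traversal run in reverse (consumer back to producer) answers the regulatory question: which source systems and transformations does a reported figure depend on.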
Automation and AI Integration Points
A modern banking TOM identifies specific opportunities for automation and artificial intelligence to improve efficiency, accuracy, and scalability.
Robotic Process Automation (RPA)
RPA is appropriate for processes that are:
- Rule-based with clear decision logic
- High-volume and repetitive
- Performed across multiple systems (bridging system gaps)
- Currently manual due to lack of system-to-system integration
Common RPA use cases in banking operations include: data entry across systems, reconciliation break categorisation, email-based trade confirmation processing, regulatory report data extraction, and nostro account balance monitoring.
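What makes a process like break categorisation a good RPA candidate is that its decision logic can be written down as explicit rules. A sketch with illustrative (not production) rules and tolerances:

```python
def categorise_break(amount_diff: float, value_date_diff_days: int,
                     reference_match: bool) -> str:
    """Rule-based categorisation of a reconciliation break.
    Thresholds are illustrative assumptions."""
    if reference_match and abs(amount_diff) < 0.01:
        return "timing difference"          # same item, value dates out of line
    if reference_match and abs(amount_diff) < 5.00:
        return "rounding / FX difference"   # small tolerable variance
    if not reference_match and value_date_diff_days == 0:
        return "missing booking"            # one side absent
    return "refer to analyst"               # falls outside the rules

print(categorise_break(0.0, 1, True))    # -> "timing difference"
print(categorise_break(3.20, 0, True))   # -> "rounding / FX difference"
```

The final branch is the human-in-the-loop checkpoint: anything the rules cannot classify is routed to an analyst rather than forced into a category.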
Machine Learning and AI
More advanced AI capabilities are appropriate for:
- Predictive analytics: Predicting settlement fails before they occur based on historical patterns
- Anomaly detection: Identifying unusual transactions or patterns that may indicate errors or fraud
- Intelligent matching: Using ML algorithms to improve reconciliation matching rates beyond rule-based approaches
- Natural language processing: Extracting structured data from unstructured sources (SWIFT messages, emails, PDF documents)
- Chatbots and virtual assistants: Handling routine client queries and internal support requests
Integration Points
The TOM should identify specific automation integration points within the process architecture:
- Which L2 processes are candidates for RPA?
- Which L2 processes could benefit from ML-enhanced decision-making?
- What data feeds are required to support predictive analytics?
- Where should human-in-the-loop checkpoints be retained even when automation is deployed?
Vendor Selection Criteria
When the TOM calls for buying a vendor product, a structured vendor selection process ensures the right product is chosen. Standard criteria for banking vendor selection include:
- Functional fit: Does the product meet the bank's functional requirements? Score against a detailed requirements matrix.
- Technical architecture: Is the product cloud-native, API-enabled, and compatible with the bank's technology stack?
- Banking domain expertise: Does the vendor understand banking operations, regulatory requirements, and market infrastructure?
- Implementation track record: Has the vendor successfully implemented the product at comparable banks?
- Financial stability: Is the vendor financially stable and likely to continue investing in the product?
- Regulatory compliance: Does the product meet regulatory requirements for data residency, audit trails, and operational resilience?
- Total cost of ownership: What is the full cost over the contract period, including licence, implementation, support, infrastructure, and internal effort?
- Contract flexibility: Can the bank exit the contract without prohibitive cost? Are pricing terms scalable?
- Support and service: What level of support is provided? What are the SLAs for incident resolution?
- Product roadmap: Does the vendor's product roadmap align with the bank's future requirements?
Banking Example: Reconciliation Platform Rationalisation
Consider a European custody bank — let us call it EuroCustody AG — with a technology landscape that has grown through years of acquisitions and organic development. The current reconciliation landscape includes seven separate platforms:
Platform A — An in-house developed Access database used by the equities settlement team for trade reconciliation. Built 12 years ago by a developer who has since left the bank. No documentation, no source control, no support arrangement.
Platform B — A vendor product (SmartStream TLM) used for cash reconciliation (nostro accounts). Well-supported but running an outdated version, three major releases behind the current version.
Platform C — An Excel-based reconciliation tool used by the fixed income team for position reconciliation. Contains complex macros that no one fully understands.
Platform D — A vendor product (Gresham Clareti) used by the derivatives team for collateral reconciliation. Recently implemented and well-functioning.
Platform E — An in-house Java application used for depot reconciliation with CSDs. Developed 8 years ago, with the original development team now disbanded.
Platform F — A vendor product (FIS IntelliMatch) used by the fund administration subsidiary for NAV reconciliation. Adequate but expensive.
Platform G — An in-house Python script collection used for regulatory reconciliation (EMIR/MiFID reporting reconciliation). Developed by a single analyst who maintains it as a side responsibility.
The problem: This landscape costs EUR 4.2 million annually (licences, support, infrastructure, and the internal development effort to maintain in-house tools). It produces inconsistent results — each platform has different matching algorithms, different tolerance rules, and different reporting formats. There is no single view of reconciliation status across the bank. Data quality issues in one platform cascade to others. The three in-house tools represent significant operational risk — they have no documentation, no testing, no business continuity, and high key-person dependency.
The TOM technology design consolidates all reconciliation onto a single cloud-based platform — in this case, Duco Cube, selected through a structured vendor evaluation process.
Target state design:
- Single platform: All seven reconciliation types (trade, cash, position, collateral, depot, NAV, and regulatory) are migrated to the Duco platform
- Cloud-based delivery: SaaS deployment eliminates the need for on-premise infrastructure and reduces IT operational overhead
- Automated data feeds: Direct API connections to source systems (settlement platform, core banking, CSD gateways, EMIR trade repository) replace manual file uploads and email-based data exchange
- Standardised matching rules: A single set of configurable matching algorithms with consistent tolerance rules, replacing the inconsistent approaches across seven platforms
- Unified reporting: A single dashboard providing real-time visibility of reconciliation status across all types, with drill-down capability and exception management workflow
- Machine learning matching: The platform's ML capability is deployed to improve matching rates for complex breaks that rule-based matching cannot resolve
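The "standardised matching rules" element of the target state can be sketched as a single matching function parameterised by reconciliation type — one engine, one set of configurable tolerances. This is an illustrative sketch of the concept, not the actual Duco matching engine, and the tolerances are assumptions:

```python
# One tolerance table for the whole bank, per reconciliation type
TOLERANCES = {"cash": 0.01, "position": 0.0, "collateral": 1.00}

def match(rec_type: str, internal: dict, external: dict) -> bool:
    """Match one internal record against one external record using the
    standard tolerance for the reconciliation type."""
    if internal["reference"] != external["reference"]:
        return False
    tolerance = TOLERANCES.get(rec_type, 0.0)
    return abs(internal["amount"] - external["amount"]) <= tolerance

ours   = {"reference": "TRX-001", "amount": 1_000_000.00}
theirs = {"reference": "TRX-001", "amount": 1_000_000.005}
# A 0.005 difference matches under the cash tolerance but breaks under the
# zero-tolerance position rule -- consistently, across every team
```

Contrast this with the legacy landscape, where the same 0.005 difference would match on some of the seven platforms and break on others depending on each tool's local rules.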
Migration approach:
- Phase 1: Migrate cash reconciliation (Platform B) — the highest-volume, best-understood reconciliation type, providing a proof of concept
- Phase 2: Migrate trade and position reconciliation (Platforms A and C) — replacing the two highest-risk in-house tools
- Phase 3: Migrate depot and regulatory reconciliation (Platforms E and G) — replacing the remaining in-house tools
- Phase 4: Migrate collateral and NAV reconciliation (Platforms D and F) — consolidating the remaining vendor products
Results achieved:
- Cost reduction: Annual reconciliation technology cost reduced from EUR 4.2M to EUR 1.8M (57% saving)
- Matching rate improvement: Average auto-matching rate increased from 72% to 94% through standardised rules and ML enhancement
- Risk reduction: Elimination of 3 in-house tools removes key-person dependency, undocumented code, and business continuity risk
- Operational efficiency: 40% reduction in analyst time spent on manual break investigation, redeployed to exception management and process improvement
- Data consistency: A single matching engine with a single set of rules produces consistent, auditable results across all reconciliation types
In the next module, we will explore how to build the implementation roadmap for a TOM — phasing the transformation, planning migrations, managing risks, and delivering early wins while pursuing long-term structural change.