UAT Test Script Library

What is User Acceptance Testing (UAT)?

User Acceptance Testing (UAT) is the final phase of testing where business users verify that a system works as expected in real-world scenarios. UAT confirms that:

  1. Functional Requirements Met: System does what it's supposed to do
  2. Business Scenarios Work: Real workflows function end-to-end
  3. User-Friendly: Users can actually use the system effectively
  4. Fit for Purpose: Solution solves the business problem

Why UAT Matters

Without Proper UAT:

  • Bugs discovered in production (costly to fix)
  • User resistance ("this doesn't work for us")
  • Workarounds and manual interventions
  • Rework and delays
  • Regulatory breaches due to untested scenarios

With Comprehensive UAT:

  • Bugs caught before go-live (cheaper to fix)
  • User confidence and buy-in
  • Smooth transition to BAU
  • On-time, on-budget delivery
  • Regulatory compliance assured

UAT Test Plan Template

Test Plan Overview

PROJECT: Real-Time Trade Surveillance Implementation
UAT PHASE: User Acceptance Testing
VERSION: 1.0
DATE: 15 February 2025

TEST MANAGER: Compliance Manager
TEST LEAD: Senior Surveillance Analyst
TEST ENVIRONMENT: UAT Environment (dedicated AWS instance)

OBJECTIVES:
1. Verify all functional requirements are met
2. Test real-world surveillance scenarios
3. Confirm integration with Actimize case management
4. Validate performance (process 10,000 trades/day with <1 min latency)
5. Obtain business sign-off for go-live

SCOPE:
In Scope:
• 8 surveillance rules (layering, spoofing, wash trading, etc.)
• Alert generation and risk scoring
• Alert investigation workflow
• Case escalation to Actimize
• Dashboards and reporting

Out of Scope:
• System administration functions (tested by IT)
• Infrastructure performance testing (tested separately)
• Historical data migration (tested in data migration phase)

TEST APPROACH:
• Scenario-based testing (real-world use cases)
• End-to-end workflows (from trade execution to case escalation)
• Parallel testing (run alongside old system for comparison)
• Exploratory testing (ad-hoc scenarios by power users)

ENTRY CRITERIA:
• System deployed to UAT environment
• Test data loaded (sample trades, reference data)
• Integration with Actimize configured
• Test scripts prepared and reviewed
• Testers trained on new system

EXIT CRITERIA:
• All P0 (Critical) and P1 (High) defects resolved
• P2 (Medium) defects have workarounds or are deferred to post-launch
• All test cases executed (100% coverage)
• Business sign-off obtained

TEST SCHEDULE:
• Week 1 (1-5 Mar): Functional testing
• Week 2 (8-12 Mar): Integration testing
• Week 3 (15-19 Mar): Performance testing
• Week 4 (22-26 Mar): Regression testing + sign-off

DEFECT MANAGEMENT:
• P0 (Critical): System unusable → Must fix before go-live
• P1 (High): Major functionality broken → Must fix before go-live
• P2 (Medium): Minor issue, has workaround → Fix or defer
• P3 (Low): Cosmetic, nice-to-have → Defer to post-launch

Test Team Roles

┌───────────────────┬──────────────────────────┬────────────────────────────────────────────────────────────┐
│ Role              │ Name                     │ Responsibilities                                           │
├───────────────────┼──────────────────────────┼────────────────────────────────────────────────────────────┤
│ Test Manager      │ Compliance Manager       │ Overall test planning, resource allocation, sign-off       │
│ Test Lead         │ Senior Analyst           │ Day-to-day test execution, defect triage, status reporting │
│ Business Testers  │ 2x Surveillance Analysts │ Execute test scripts, log defects, validate fixes          │
│ Technical Support │ IT Developer             │ Investigate defects, deploy fixes, environment management  │
│ Business Analyst  │ Lead BA                  │ Clarify requirements, validate expected results            │
└───────────────────┴──────────────────────────┴────────────────────────────────────────────────────────────┘

Test Case Template

Standard Test Case Format

TEST CASE ID: TC-SURV-001
TEST CASE TITLE: Alert Generation - Layering Detection
MODULE: Surveillance Rules
PRIORITY: P1 (High)
WRITTEN BY: Lead BA
DATE: 15 Jan 2025

PRE-CONDITIONS:
• User logged in with Surveillance Analyst role
• Test trade data loaded in system
• Surveillance rules configured with default thresholds

TEST OBJECTIVE:
Verify that the system generates a layering alert when a trader places and
cancels multiple orders without execution, as per FCA market abuse definition.

TEST DATA:
• Trader: T-001 (John Smith, Equity Desk)
• Instrument: VOD.L (Vodafone Group PLC)
• Orders: 10 orders placed and cancelled within 5 minutes, no executions
• Expected Trigger: Layering rule (threshold: >5 orders cancelled within 10 min)

TEST STEPS:
┌────┬──────────────────────────────────────────────────────────────┐
│ #  │ Step Description                                             │
├────┼──────────────────────────────────────────────────────────────┤
│ 1  │ Navigate to Surveillance Dashboard                           │
│ 2  │ Verify dashboard is empty (no alerts)                        │
│ 3  │ Trigger test scenario: Load test trade data (10 orders)     │
│ 4  │ Wait for alert generation (max 1 minute)                     │
│ 5  │ Verify alert appears in dashboard                            │
│ 6  │ Click on alert to open detail screen                         │
│ 7  │ Verify alert details are correct (see Expected Results)     │
│ 8  │ Verify risk score is calculated (should be 70-100, HIGH)    │
│ 9  │ Verify supporting evidence is displayed (list of orders)    │
│ 10 │ Verify alert is logged in database                           │
└────┴──────────────────────────────────────────────────────────────┘

EXPECTED RESULTS:
┌────┬──────────────────────────────────────────────────────────────┐
│ #  │ Expected Result                                              │
├────┼──────────────────────────────────────────────────────────────┤
│ 5  │ Alert "LAYERING - T-001 - VOD.L" appears in dashboard       │
│ 7  │ Alert details display:                                       │
│    │ • Trader: John Smith (T-001)                                 │
│    │ • Instrument: VOD.L                                          │
│    │ • Rule Triggered: Layering                                   │
│    │ • Timestamp: [Auto-populated]                                │
│    │ • Status: OPEN                                               │
│ 8  │ Risk score: 85 (HIGH)                                        │
│ 9  │ Supporting evidence lists all 10 orders with timestamps      │
│ 10 │ Database query confirms alert record exists                  │
└────┴──────────────────────────────────────────────────────────────┘

ACTUAL RESULTS:
[To be filled during testing]
┌────┬──────────────────────────────────────────────────────────────┐
│ #  │ Actual Result                                                │
├────┼──────────────────────────────────────────────────────────────┤
│    │ [Tester notes what actually happened]                        │
└────┴──────────────────────────────────────────────────────────────┘

PASS/FAIL: [  ] PASS    [  ] FAIL    [  ] BLOCKED

DEFECTS RAISED:
[If FAIL, log defect ID and brief description]

COMMENTS:
[Any observations, edge cases discovered, or suggestions]

TESTED BY: ___________________ DATE: __/__/____
REVIEWED BY: _________________ DATE: __/__/____
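The layering threshold in TC-SURV-001's test data (more than 5 orders cancelled within a 10-minute window, with no executions) can be sketched as a small rule check. This is an illustration only; `layering_triggered` and its input shape are assumptions, not the production rule engine:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the layering rule in TC-SURV-001: flag when more
# than 5 orders are cancelled within a 10-minute window with no executions.
# Function and field names are assumptions, not the real rule engine.
CANCEL_THRESHOLD = 5
WINDOW = timedelta(minutes=10)

def layering_triggered(orders):
    """orders: list of dicts with 'status' and 'timestamp' (datetime)."""
    if any(o["status"] == "EXECUTED" for o in orders):
        return False  # rule requires no executions
    cancels = sorted(o["timestamp"] for o in orders if o["status"] == "CANCELLED")
    # slide the 10-minute window across the cancellation timestamps
    for i, start in enumerate(cancels):
        if sum(1 for t in cancels[i:] if t - start <= WINDOW) > CANCEL_THRESHOLD:
            return True
    return False

# TC-SURV-001 test data: 10 orders placed and cancelled within 5 minutes
base = datetime(2025, 3, 1, 9, 30)
orders = [{"status": "CANCELLED", "timestamp": base + timedelta(seconds=30 * i)}
          for i in range(10)]
print(layering_triggered(orders))  # True — 10 cancels exceed the threshold of 5
```

Expressing the threshold as a named constant makes it easy to vary during threshold-tuning tests.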

Test Script Library by Category

Category 1: Functional Testing

TC-SURV-002: Alert Generation - Spoofing Detection

OBJECTIVE: Verify spoofing alert generated when large order cancelled after opposite-side execution

PRE-CONDITIONS:
• Trader places large BUY order (10,000 shares VOD.L @ £125.50)
• Trader executes small SELL order (1,000 shares VOD.L @ £125.52)
• Trader cancels large BUY order within 30 seconds

EXPECTED RESULT:
• Spoofing alert generated
• Risk score >70 (HIGH)
• Evidence shows BUY order cancelled post-SELL execution

TEST DATA:
• Order 1: BUY 10,000 VOD.L @ £125.50 (09:30:00)
• Order 2: SELL 1,000 VOD.L @ £125.52 (09:30:15) [EXECUTED]
• Order 1: CANCELLED (09:30:25)
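The spoofing pattern above (a large order cancelled shortly after an opposite-side execution) can be sketched in the same style; the 30-second window and large-order size cut-off are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Hedged sketch of the spoofing pattern in TC-SURV-002: a large order
# cancelled within 30 seconds of an opposite-side execution. Field names
# and the size threshold are illustrative assumptions.
CANCEL_WINDOW = timedelta(seconds=30)
LARGE_ORDER_QTY = 5000

def spoofing_triggered(events):
    """events: dicts with 'side', 'qty', 'action' (EXECUTED/CANCELLED), 'ts'."""
    executions = [e for e in events if e["action"] == "EXECUTED"]
    for c in (e for e in events if e["action"] == "CANCELLED"):
        if c["qty"] < LARGE_ORDER_QTY:
            continue  # only large orders count as potential spoofs
        for x in executions:
            opposite = x["side"] != c["side"]
            soon_after = timedelta(0) <= c["ts"] - x["ts"] <= CANCEL_WINDOW
            if opposite and soon_after:
                return True
    return False

# TC-SURV-002 test data: SELL executed 09:30:15, large BUY cancelled 09:30:25
t0 = datetime(2025, 3, 1, 9, 30, 0)
events = [
    {"side": "BUY",  "qty": 10000, "action": "CANCELLED", "ts": t0 + timedelta(seconds=25)},
    {"side": "SELL", "qty": 1000,  "action": "EXECUTED",  "ts": t0 + timedelta(seconds=15)},
]
print(spoofing_triggered(events))  # True — cancel follows opposite execution by 10s
```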

TC-SURV-003: Alert Investigation - Close as False Positive

OBJECTIVE: Verify analyst can close alert as false positive with mandatory comment

STEPS:
1. Open alert from dashboard
2. Review supporting evidence
3. Click "Close as False Positive" button
4. Attempt to submit without comment → Should show error
5. Enter comment (min 50 characters)
6. Click "Submit"
7. Verify alert status updated to "CLOSED - FALSE POSITIVE"
8. Verify comment saved and displayed in audit trail

EXPECTED RESULT:
• Comment validation enforced (min 50 chars)
• Alert status updated in database
• Audit trail logs user ID, timestamp, action
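The mandatory-comment check exercised in TC-SURV-003 amounts to a short validation routine. A minimal sketch, where the function name, return shape, and error texts are illustrative assumptions (only the 50-character minimum comes from the test script):

```python
# Minimal sketch of the comment validation in TC-SURV-003; only the
# 50-character minimum is taken from the test script.
MIN_COMMENT_LEN = 50

def validate_close_comment(comment: str) -> tuple[bool, str]:
    text = comment.strip()
    if not text:
        return False, "Comment is required to close an alert"
    if len(text) < MIN_COMMENT_LEN:
        return False, f"Comment must be at least {MIN_COMMENT_LEN} characters"
    return True, ""

print(validate_close_comment("Too short"))  # (False, 'Comment must be at least 50 characters')
print(validate_close_comment(
    "Reviewed order book activity; cancellations match client amendment "
    "instructions, no manipulative intent identified."))  # (True, '')
```

Stripping whitespace first prevents testers from passing the check with a comment of 50 spaces — a useful negative test to add alongside step 4.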

TC-SURV-004: Case Escalation - Actimize Integration

OBJECTIVE: Verify alert escalated to Actimize creates case successfully

STEPS:
1. Open high-risk alert
2. Review evidence
3. Click "Escalate to Investigation"
4. Enter mandatory comment
5. Click "Submit"
6. Wait for case creation (max 5 minutes)
7. Verify confirmation message displayed
8. Verify case ID displayed and clickable link to Actimize
9. Log into Actimize
10. Verify case exists with correct details

EXPECTED RESULT:
• Case created in Actimize within 5 minutes
• Case ID stored in surveillance system
• Case details include: Alert details, supporting evidence, analyst comment
• Surveillance alert status updated to "ESCALATED"

Category 2: Integration Testing

TC-INT-001: OMS Trade Feed Integration

OBJECTIVE: Verify real-time trade data ingested from Fidessa OMS via Kafka

TEST APPROACH: Black-box integration test

STEPS:
1. Execute test trade in Fidessa OMS
2. Verify trade message sent to Kafka topic
3. Verify surveillance system receives message within 1 minute
4. Verify trade data enriched with reference data (trader name, instrument details)
5. Verify trade stored in database with all fields populated
6. Compare fields between OMS and surveillance system

EXPECTED RESULT:
• Trade message received within 1 minute
• All required fields populated (TradeID, Instrument, Quantity, Price, Timestamp, TraderID)
• Reference data enrichment successful (trader name from HR system, ISIN from Bloomberg)
• No data loss or corruption

DEFECT SCENARIO:
If trade not received within 1 minute → P0 (Critical) defect
If data enrichment fails → P1 (High) defect
If minor field missing (e.g., CounterpartyID) → P2 (Medium) defect

TC-INT-002: Actimize API Failure Handling

OBJECTIVE: Verify system handles Actimize API failures gracefully

TEST APPROACH: Negative testing (simulate API failure)

STEPS:
1. Disconnect Actimize API (simulate outage)
2. Escalate alert
3. Verify system displays error message (not generic crash)
4. Verify alert queued for retry
5. Verify analyst notified of failure
6. Re-enable Actimize API
7. Verify queued alert auto-retries and case created

EXPECTED RESULT:
• Graceful error handling (no system crash)
• Alert queued for retry (up to 3 attempts)
• Analyst notified via email/dashboard notification
• After API restored, case created successfully
• Audit trail logs all retry attempts
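The retry behaviour tested in TC-INT-002 can be sketched as follows; the queue, audit log, and API callable are illustrative stand-ins, not the real Actimize client:

```python
# Sketch of TC-INT-002's expected behaviour: up to 3 attempts, park the
# alert for later retry on persistent failure, audit every attempt.
# All names here are illustrative stand-ins.
MAX_RETRIES = 3
audit_log = []
retry_queue = []

def escalate_with_retry(alert_id, send_to_actimize, retries=MAX_RETRIES):
    for attempt in range(1, retries + 1):
        try:
            case_id = send_to_actimize(alert_id)
            audit_log.append((alert_id, attempt, "SUCCESS"))
            return case_id
        except ConnectionError:
            audit_log.append((alert_id, attempt, "FAILED"))
    retry_queue.append(alert_id)  # park for auto-retry and notify the analyst
    return None

# Simulate an outage that recovers on the third attempt
calls = {"n": 0}
def flaky_api(alert_id):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Actimize unavailable")
    return f"CASE-{alert_id}"

print(escalate_with_retry("ALT-001", flaky_api))  # CASE-ALT-001 (after 2 failures)
```

Simulating the outage with a callable that fails a fixed number of times is exactly the kind of negative-test harness this test case calls for.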

Category 3: Performance Testing

TC-PERF-001: High Volume Trade Processing

OBJECTIVE: Verify system processes 10,000 trades/day with <1 minute latency

TEST APPROACH: Load testing with simulated trade data

TEST DATA:
• 10,000 simulated trades over 8-hour trading day (~1,250 trades/hour)
• Peak: 2,500 trades in 1 hour (09:00-10:00, market open; roughly double the hourly average)
• Mix of instruments, traders, venues

SUCCESS CRITERIA:
• 95th percentile latency <1 minute (trade execution → alert generation)
• 99th percentile latency <3 minutes
• Zero message loss
• System availability 100% during test

MONITORING:
• AWS CloudWatch metrics (CPU, memory, network)
• Application logs (processing time per trade)
• Kafka lag (message queue backlog)

EXPECTED RESULT:
• 95% of trades processed within 1 minute
• 99% of trades processed within 3 minutes
• Zero errors or message loss
• System stable throughout test
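The p95/p99 criteria above can be checked with a short percentile helper over per-trade processing times. The latency distribution below is simulated for illustration, not real trade data:

```python
import random

# Sketch of the TC-PERF-001 latency check: compute p95/p99 (in seconds)
# over simulated per-trade processing times, then compare to the budgets
# of 60s (p95) and 180s (p99) from the success criteria.
def percentile(values, pct):
    ordered = sorted(values)
    idx = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[idx]

random.seed(42)
latencies = [random.uniform(5, 55) for _ in range(10_000)]   # fast majority
latencies += [random.uniform(60, 170) for _ in range(100)]   # slow tail

p95, p99 = percentile(latencies, 95), percentile(latencies, 99)
print(f"p95={p95:.1f}s p99={p99:.1f}s")
print("PASS" if p95 < 60 and p99 < 180 else "FAIL")  # PASS — both within budget
```

In practice the per-trade timings would come from the application logs listed under MONITORING rather than a simulation.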

TC-PERF-002: Dashboard Load Time

OBJECTIVE: Verify surveillance dashboard loads within 2 seconds

TEST APPROACH: Frontend performance testing

TEST SCENARIO:
• 150 active alerts in system
• User logs in and navigates to dashboard
• Measure time from click to fully rendered dashboard

SUCCESS CRITERIA:
• Dashboard loads within 2 seconds
• All alerts visible
• Filters and search functional

TOOLS: Browser DevTools (Network tab, Performance tab)

EXPECTED RESULT:
• Initial page load <2 seconds
• No JavaScript errors in console
• All data rendered correctly

Category 4: Security Testing

TC-SEC-001: Role-Based Access Control

OBJECTIVE: Verify only authorised users can access surveillance system

TEST SCENARIOS:

1. Surveillance Analyst Role:
   ✓ Can view alerts
   ✓ Can investigate alerts
   ✓ Can close or escalate alerts
   ✗ CANNOT configure rules
   ✗ CANNOT access admin functions

2. Compliance Manager Role:
   ✓ Can view alerts
   ✓ Can configure rules and thresholds
   ✓ Can view dashboards and reports
   ✗ CANNOT access system admin functions

3. Read-Only Role (e.g., Internal Audit):
   ✓ Can view alerts (read-only)
   ✓ Can view dashboards
   ✗ CANNOT modify or close alerts
   ✗ CANNOT configure rules

4. No Access (e.g., Trading Desk):
   ✗ CANNOT log in to system at all

EXPECTED RESULT:
• Roles enforced at application layer (not just UI hide/show)
• Unauthorised actions rejected with error message
• All access attempts logged in audit trail
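Enforcement at the application layer, with every attempt audited, can be sketched as a permission matrix mirroring the four roles above. Action names and the audit format are illustrative assumptions:

```python
# Sketch of application-layer role enforcement for TC-SEC-001. The matrix
# mirrors the four roles above; action names are illustrative assumptions.
PERMISSIONS = {
    "SURVEILLANCE_ANALYST": {"view_alerts", "investigate", "close_alert", "escalate"},
    "COMPLIANCE_MANAGER": {"view_alerts", "configure_rules", "view_reports"},
    "READ_ONLY": {"view_alerts", "view_dashboards"},
    # Trading desk has no entry at all: cannot use the system
}

audit_trail = []

def authorise(role, action):
    allowed = action in PERMISSIONS.get(role, set())
    # every attempt is logged, allowed or denied, per the expected result
    audit_trail.append((role, action, "ALLOWED" if allowed else "DENIED"))
    return allowed

print(authorise("SURVEILLANCE_ANALYST", "close_alert"))      # True
print(authorise("SURVEILLANCE_ANALYST", "configure_rules"))  # False — denied and logged
print(authorise("TRADING_DESK", "view_alerts"))              # False — unknown role
```

Because the check runs server-side on every action, hiding buttons in the UI is never the only line of defence — which is exactly what this test case verifies.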

TC-SEC-002: Data Encryption

OBJECTIVE: Verify sensitive data encrypted at rest and in transit

TEST APPROACH: Technical validation with IT Security team

VERIFICATION:
1. Data at Rest:
   • Database encryption enabled (AES-256)
   • Verify via database configuration query
   • Attempt to read raw database files → Should be encrypted

2. Data in Transit:
   • HTTPS enforced (TLS 1.3)
   • Verify via browser DevTools (Security tab)
   • Attempt HTTP connection → Should redirect to HTTPS

EXPECTED RESULT:
• Database encryption confirmed (AES-256)
• All HTTP traffic redirected to HTTPS
• Certificate valid and not expired
• No sensitive data transmitted in plaintext

Category 5: Regression Testing

TC-REG-001: End-to-End Smoke Test

OBJECTIVE: Verify core functionality still works after defect fixes

SCOPE: Abbreviated test covering critical happy paths

TEST CASES (10 core scenarios):
1. User login
2. View dashboard
3. Generate alert (layering)
4. Investigate alert
5. Close as false positive
6. Generate alert (spoofing)
7. Escalate to Actimize
8. View reports
9. Configure rule threshold
10. User logout

TIME BUDGET: 2 hours

EXPECTED RESULT:
• All 10 scenarios execute successfully
• No new defects introduced
• System stable

RUN FREQUENCY:
• After each defect fix deployed to UAT
• Before final sign-off

Test Execution Tracking

Test Execution Log Template

┌──────────────┬──────────────────────┬──────────┬──────────────┬───────┬───────────┬─────────┬──────────────────────────┐
│ Test Case ID │ Test Case Name       │ Priority │ Tester       │ Date  │ Status    │ Defects │ Comments                 │
├──────────────┼──────────────────────┼──────────┼──────────────┼───────┼───────────┼─────────┼──────────────────────────┤
│ TC-SURV-001  │ Layering Detection   │ P1       │ Analyst A    │ 01/03 │ ✓ PASS    │ None    │ All checks passed        │
│ TC-SURV-002  │ Spoofing Detection   │ P1       │ Analyst A    │ 01/03 │ ✗ FAIL    │ DEF-001 │ Risk score incorrect     │
│ TC-SURV-003  │ Close as False Pos   │ P1       │ Analyst B    │ 02/03 │ ✓ PASS    │ None    │ Comment validation works │
│ TC-SURV-004  │ Actimize Integration │ P0       │ Analyst B    │ 02/03 │ ⏸ BLOCKED │ DEF-002 │ API not configured       │
│ TC-INT-001   │ OMS Trade Feed       │ P0       │ IT + Analyst │ 03/03 │ ✓ PASS    │ None    │ Latency <30 sec          │
└──────────────┴──────────────────────┴──────────┴──────────────┴───────┴───────────┴─────────┴──────────────────────────┘

Status Legend:

  • ✓ PASS: Test executed successfully, meets expected results
  • ✗ FAIL: Test failed, defect raised
  • ⏸ BLOCKED: Cannot execute due to dependency/environment issue
  • IN PROGRESS: Test execution ongoing
  • SKIP: Test skipped (scope change or not applicable)

Test Summary Dashboard

UAT TEST SUMMARY - Week Ending 5 March 2025

OVERALL STATUS: 🟡 ON TRACK (some issues, but manageable)

TEST EXECUTION:
• Total Test Cases: 87
• Executed: 65 (75%)
• Passed: 58 (89% pass rate)
• Failed: 7 (11%)
• Blocked: 5
• Remaining: 22 (due by 19 March)

DEFECTS:
• Total Defects: 12
• P0 (Critical): 1 [OPEN] ← BLOCKER for go-live
• P1 (High): 4 [2 OPEN, 2 FIXED]
• P2 (Medium): 5 [3 OPEN, 2 FIXED]
• P3 (Low): 2 [OPEN, defer to post-launch]

RISKS:
🔴 P0 defect (Actimize integration failure) blocking final sign-off
🟡 5 test cases blocked due to test data issues
🟢 Performance testing on track

ACTIONS:
1. Vendor escalation for Actimize integration (due: 6 Mar)
2. Test data refresh planned for weekend (8-9 Mar)
3. Extend UAT by 1 week if P0 not resolved by 10 Mar
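The percentages in a summary like the one above can be derived from a raw status log with a short helper (hypothetical, for illustration; the status strings match the legend earlier in this document):

```python
from collections import Counter

# Hypothetical helper that derives the execution figures shown in the
# dashboard above from a raw per-test-case status log.
def summarise(results):
    counts = Counter(results)
    executed = counts["PASS"] + counts["FAIL"]
    return {
        "total": len(results),
        "executed": executed,
        "pct_executed": round(100 * executed / len(results)),
        "pass_rate": round(100 * counts["PASS"] / executed) if executed else 0,
        "blocked": counts["BLOCKED"],
    }

# Mirror the dashboard: 58 passed, 7 failed, 5 blocked, 17 not yet run
log = ["PASS"] * 58 + ["FAIL"] * 7 + ["BLOCKED"] * 5 + ["NOT RUN"] * 17
print(summarise(log))  # executed 65 (75% of 87), pass rate 89%, blocked 5
```

Note that the pass rate is computed over executed tests only, which is why the dashboard can show a high pass rate while a quarter of cases remain outstanding.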

Defect Management

Defect Log Template

DEFECT ID: DEF-001
SEVERITY: P1 (High)
STATUS: OPEN
REPORTED BY: Analyst A
REPORTED DATE: 1 March 2025

SUMMARY:
Risk score calculation incorrect for spoofing alerts

DESCRIPTION:
When testing spoofing detection (TC-SURV-002), observed that risk score
displayed as 45 (MEDIUM), but expected >70 (HIGH) based on ML model output.

STEPS TO REPRODUCE:
1. Execute test scenario for spoofing (large order cancelled after opposite execution)
2. Observe alert generated
3. Check risk score → Shows 45 instead of expected 86

EXPECTED BEHAVIOR:
Risk score should be 86 (HIGH) based on:
• Rule-based score: 80
• ML model output: 90
• Combined score (60% ML + 40% rule): 86

ACTUAL BEHAVIOR:
Risk score displays as 45 (MEDIUM)

IMPACT:
High-risk spoofing alerts may be deprioritised, leading to missed market abuse.
Regulatory risk if FCA identifies false negatives.

ROOT CAUSE ANALYSIS (to be filled by Developer):
[Developer investigates and documents root cause]

FIX:
[Developer describes fix implemented]

RESOLUTION DATE:
[Date defect fixed and deployed to UAT]

RETEST:
Analyst A re-tested TC-SURV-002 on [Date]: PASS
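The score combination quoted in DEF-001 (60% ML model output + 40% rule-based score) can be sanity-checked in a couple of lines. The band boundaries below are assumptions based on the >70 = HIGH threshold used elsewhere in this library:

```python
# Sanity check of the DEF-001 score combination: 60% ML output + 40%
# rule-based score. Band boundaries are assumptions (>70 = HIGH from
# this library; the MEDIUM/LOW cut is illustrative).
ML_WEIGHT, RULE_WEIGHT = 0.6, 0.4

def combined_risk_score(ml_score, rule_score):
    return ML_WEIGHT * ml_score + RULE_WEIGHT * rule_score

def risk_band(score):
    return "HIGH" if score > 70 else "MEDIUM" if score > 40 else "LOW"

score = combined_risk_score(ml_score=90, rule_score=80)
print(round(score), risk_band(score))  # 86 HIGH — not the 45 the defect reports
```

A check like this in an automated test would have caught the miscalculation before manual UAT.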

UAT Sign-Off Template

USER ACCEPTANCE TESTING SIGN-OFF

PROJECT: Real-Time Trade Surveillance Implementation
UAT PHASE: User Acceptance Testing
VERSION: 1.0
DATE: 26 March 2025

TEST SUMMARY:
• Total Test Cases Executed: 87
• Pass Rate: 96% (84 passed, 3 failed)
• Critical Defects (P0): 0 (all resolved)
• High Defects (P1): 0 (all resolved)
• Medium Defects (P2): 3 (2 resolved, 1 deferred to post-launch with workaround)
• Low Defects (P3): 2 (deferred to post-launch)

OUTSTANDING ITEMS:
1. DEF-010 (P2): Dashboard filter dropdown slow to load (>3 seconds)
   • Workaround: Users can type in search box instead
   • Fix planned for post-launch patch (Q3 2025)

2. DEF-011 (P3): Export PDF button alignment off by 2 pixels
   • Cosmetic issue, no functional impact
   • Deferred to post-launch

READINESS ASSESSMENT:
✓ All functional requirements met
✓ Integration with Actimize working
✓ Performance criteria met (95% of trades processed <1 min)
✓ Security controls validated
✓ Users trained and confident
✓ Runbook and support processes in place
✓ Rollback plan tested

RECOMMENDATION: APPROVED FOR GO-LIVE

SIGN-OFF:

Business Owner (Compliance Manager):
Signature: _________________ Date: __/__/____
Comments: System meets business requirements. Users are confident and ready.

Test Manager (Compliance Manager):
Signature: _________________ Date: __/__/____
Comments: All critical and high defects resolved. Outstanding items acceptable.

Technical Lead (CTO):
Signature: _________________ Date: __/__/____
Comments: System stable and performant. Infrastructure ready for production.

Project Sponsor (CCO):
Signature: _________________ Date: __/__/____
Comments: Approved for go-live. Well done to the team.

UAT Best Practices

  • Business Users Lead: UAT is business-led, not IT-led
  • Real Scenarios: Test actual workflows, not just happy paths
  • Real Data: Use realistic test data (anonymised production data if possible)
  • Independent Testing: Testers should not be developers
  • Structured Approach: Follow test scripts (but allow exploratory testing too)
  • Document Everything: Log all defects, even minor ones
  • Daily Stand-Ups: Daily sync between testers, developers, BA
  • Clear Exit Criteria: Define what "pass" means before starting UAT
  • Sign-Off Process: Formal sign-off from business before go-live

Common UAT Pitfalls

  • Starting UAT Too Late: Leave time to fix defects (at least 4 weeks for major projects)
  • Poor Test Data: Unrealistic data leads to missed edge cases
  • Skipping Negative Tests: Only testing happy paths misses errors
  • Testers Too Busy: Testers doing UAT "on the side" → slow progress
  • No Defect Triage: All defects treated equal → wrong priorities
  • Weak Sign-Off: "Looks good to me" → not documented, not traceable

Next Steps

  1. Download this template library
  2. Customise test scripts for your project
  3. Train your test team
  4. Execute UAT systematically
  5. Log defects and track to resolution
  6. Obtain formal sign-off before go-live

Need Expert Support?

Running effective UAT requires discipline, business engagement, and structured test management. If you need support with test planning, defect triage, or UAT governance, contact our team for a consultation.


Template Version: 1.0
Last Updated: January 2025
Compatible With: Agile, Waterfall, Hybrid delivery methodologies
License: Free for commercial use with attribution