The Release 1.0 system test plan for a call-center loan app.
Six build cycles. WebLogic + Oracle + MQ scoring. A hard go-live.
System test plan for a bank's home equity loan call-center application. Six weekly build cycles, credit-scoring mainframe integration, cloned call-center desktops, and a three-week no-crash exit bar. Client name and individuals scrubbed; the method is intact.
Key Takeaways
Four things to remember.
Positive use cases were the scope
The plan explicitly scoped OUT negative use cases, localization, operations, documentation, unit/FVT, and white-box testing. It could not cover everything in the time available, and it said so in writing.
Exit bar: three weeks no server crash
System test exit required no panic / crash / halt / wedge / unexpected process termination on any server for the previous three weeks. Not "incidents resolved." Three consecutive weeks of zero.
Reference platform as oracle
The existing origination system ("Some Other Loan App") acted as the reference oracle for offered products — testers entered the same application on both, then compared. That solved the hardest part of test design for free.
Escalation process is in the plan
Contact lists, escalation paths, and release management are part of the plan itself, not a separate runbook. The plan is the operations manual.
Overview
This system test plan was written for the Release 1.0 deployment of a home-equity loan application used by Call Center agents on live customer calls. Names of teams and places (Minneapolis Test Group, Fairbanks Call Center) are preserved as they appear in the source; the client is pseudonymized as "Some Client" / "Somebank", the product as "Some Loan App", and individuals are scrubbed.
What makes this plan worth studying is the ruthless specificity of its exit criteria and the operational discipline of its test execution process. It is a plan written to survive a go-live date that will not move.
01
Overview
The "Some Loan App", as deployed in Release 1.0, allows Home Equity Loan Call Center Agents to fit home equity products (loans and lines of credit) to customers. The system is a group of Java programs running on WebLogic, with Oracle storage and a Netscape gateway. Call Center agents interview customers through a Web browser; the system scores credit via a mainframe connection (MQ) and displays eligible products. If a product is accepted, the loan is transmitted to the existing origination platform for document generation and finalization.
02
Scope — What system test IS and IS NOT
The system test scope table was written up front to eliminate later arguments about what "done" meant.
IS
- Positive use cases (functionality)
- Capacity and volume
- Error handling and recovery
- Standards and regulatory compliance (as covered in the use cases)
- Client configuration (browser and call center desktop compatibility)
- Security [scope TBD at plan time]
- Distributed (leverage Webdotbank testing)
- Performance
- Black-box / behavioral testing
- "Some Loan App" / "Some Other Loan App" status communications
- Confirmation testing in QA region
IS NOT
- Negative use cases
- Operations (paperwork processing, loan initiation, rate updates)
- Usability or user interface
- Date and time processing
- Localization
- Test database development
- Documentation
- Code coverage
- Software reliability
- Testing of the complete system
- Horizontal (end-to-end) integration
- Data flow or data quality
- Unit or FVT testing
- White-box / structural testing
03
Milestone schedule
The plan laid out the six-cycle schedule from unit test through deployment, against real calendar dates.
- Unit test complete
- Smoke build delivered and installed
- System Test Entry Criteria met
- System Test (six release cycles) — ~6 weeks
- System Test Launch Meeting
- Builds 1–6 delivered and installed (weekly)
- Golden Code review (all bugs fixed: ready for final build)
- System Test Exit Criteria met
- System Test Phase Exit Meeting
- User Acceptance Test (two-week window)
- Go / No-Go Decision
- Deployment
04
System Test Entry Criteria
System Test can begin when the following criteria are met:
- The "Tracker" bug tracking system is in place and available for all project participants.
- All software objects are under formal, automated source code and configuration management control.
- The HEG System Support team has configured the System Test clients and servers for testing — cloned call-center agent desktops, LoadRunner Virtual User hosts, Netscape, WebLogic, Oracle (including indices and referential integrity), MQ connections, and network infrastructure. The Test Team has been granted access.
- The Development Teams have code-completed all features and bug fixes scheduled for Release 1.0.
- The Development Teams have unit-tested all features and bug fixes scheduled for Release 1.0 and transitioned the appropriate bug reports into a "verify" state.
- Fewer than ten (10) must-fix bugs are open, including bugs found during unit testing. Must-fix status is determined by the Project Manager and the AVP of Home Equity.
- The Development Teams provide revision-controlled, complete software products to MTG (see Release Management).
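The must-fix bug gate above is mechanical enough to sketch in code. A minimal illustration in Python, assuming a simple `Bug` record with a `must_fix` flag and a `state` field — these names are hypothetical; the plan does not specify Tracker's schema:

```python
from dataclasses import dataclass

@dataclass
class Bug:
    id: int
    must_fix: bool   # must-fix status is set by the Project Manager / AVP of Home Equity
    state: str       # e.g. "open", "verify", "closed"

def entry_gate_met(bugs, max_open_must_fix=10):
    # Entry criterion: fewer than ten (10) must-fix bugs may be open,
    # including bugs found during unit testing.
    open_must_fix = [b for b in bugs if b.must_fix and b.state == "open"]
    return len(open_must_fix) < max_open_must_fix

bugs = [Bug(1, True, "open"), Bug(2, True, "verify"), Bug(3, False, "open")]
print(entry_gate_met(bugs))  # one open must-fix bug -> True
```

Note that a bug sitting in "verify" no longer counts against the gate, which is exactly why the plan requires unit-tested fixes to be transitioned into that state before entry.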
05
System Test Continuation Criteria
System Test will continue provided:
- All software released to the Test Team is accompanied by Release Notes. These must specify the bug reports the Development Teams believe are resolved in each software release.
- No change is made to the "Some Loan App" — whether in source code, configuration files, or other setup instructions or processes — without an accompanying bug report.
- Twice-weekly bug review meetings occur until System Test Phase Exit to manage the open bug backlog and bug closure times.
06
System Test Exit Criteria
System Test will end when the following criteria are met:
- No panic, crash, halt, wedge, unexpected process termination, or other stoppage of processing has occurred on any server software or hardware for the previous three (3) weeks.
- The Test Team has executed all the planned tests against the GA-candidate software release.
- The Development Teams have resolved all must-fix bugs (defined by the Project Manager and the AVP, Home Equity Group).
- The Test Team has checked that all issues in the bug tracking system are either closed or deferred, and, where appropriate, verified by regression and confirmation testing.
- The open / close curve indicates that product stability and reliability have been achieved.
- The Project Management Team agrees that the product, as defined during the final cycle of System Test, will satisfy the Call Center Agent's reasonable expectations of quality.
- The Project Management Team holds a System Test Phase Exit Meeting and agrees that these System Test exit criteria are met.
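The three-week no-crash bar is a pure function of the crash log, which makes it easy to express precisely. A sketch, assuming crash events are recorded as calendar dates (the plan does not specify the incident log format):

```python
from datetime import date, timedelta

def no_crash_window_met(crash_dates, as_of, window_days=21):
    # Exit criterion: no panic, crash, halt, wedge, or unexpected process
    # termination on any server for the previous three weeks.
    # Any crash strictly inside the trailing window fails the gate.
    cutoff = as_of - timedelta(days=window_days)
    return all(d <= cutoff for d in crash_dates)

crashes = [date(2024, 3, 1), date(2024, 3, 10)]
print(no_crash_window_met(crashes, as_of=date(2024, 3, 20)))  # crash 10 days ago -> False
print(no_crash_window_met(crashes, as_of=date(2024, 4, 5)))   # last crash 26 days ago -> True
```

The key property is that the clock resets on every crash: a single server stoppage in week three pushes exit out another three weeks, which is what "three consecutive weeks of zero" means operationally.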
07
Test configurations and environments
Testing involved both client systems and server regions.
Client systems
- LoadRunner Virtual User clients ("LR clients") — Windows NT, configured for large numbers of simultaneous virtual-user sessions, used for stress, performance, and capacity test cases.
- Call Center Desktop Agent clients ("CC clients") — Windows 95, configured to resemble the Fairbanks Call Center Agent Desktop as closely as possible, used for manual test cases.
Server regions
- "Some Loan App" QA Region — where CC and LR clients send loan applications during testing.
- Scoring QA Region — provides credit-bureau scoring to the Some Loan App so it can assign a customer to a credit-risk tier.
- "Some Other Loan App" Regression Region — the existing origination platform, used as the reference oracle for offered products.
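Because "Some Other Loan App" serves as the reference oracle, the core comparison step reduces to a diff of offered-product sets. A sketch of that comparison — the callables and product names below are hypothetical stand-ins; the plan says only that testers entered the same application on both systems and compared the results:

```python
def compare_offers(app_under_test, reference_oracle, application):
    # Submit the same loan application to both systems and diff the
    # offered product sets. Each callable stands in for one system's
    # "enter application, read back offered products" workflow.
    new_offers = set(app_under_test(application))
    ref_offers = set(reference_oracle(application))
    return {
        "match": new_offers == ref_offers,
        "missing": ref_offers - new_offers,  # oracle offered it, new system did not
        "extra": new_offers - ref_offers,    # new system offered it, oracle did not
    }

# Hypothetical stand-ins for the two systems:
new_sys = lambda app: ["HELOC-prime", "HE-loan-fixed"]
oracle = lambda app: ["HELOC-prime"]
result = compare_offers(new_sys, oracle, {"fico": 720, "ltv": 0.8})
print(result["match"], result["extra"])
```

Any non-empty `missing` or `extra` set is a candidate bug report; the oracle supplies the expected result, which is the part of test design the plan got "for free."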
08
Test execution process
The plan described exactly how the team would run test execution, not just what they would test.
- Test Hours — the weekly envelope
- Test Cycles — what a single build cycle looked like
- Test Execution Process — step-by-step (see QA Library test-execution-process)
- Human Resources — roles, chair time, and responsibilities
- Escalation Process — with Test Contact List, Support Contact List, and Management Contact List named
- Test Case and Bug Tracking — the Tracker workflow
- Release Management — how a build becomes a testable build
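The Tracker workflow is not spelled out in the plan beyond the "verify" state (entry criteria) and the closed/deferred outcomes (exit criteria), but those states imply a small state machine. A sketch with an assumed transition table — the table itself is an assumption, not the plan's actual workflow:

```python
# Assumed Tracker-style transitions; the plan only names the states.
TRANSITIONS = {
    "open": {"verify", "deferred"},  # developer resolves, or triage defers
    "verify": {"closed", "open"},    # confirmation test passes, or bug reopens
    "deferred": {"open"},            # a deferral can be revisited
    "closed": set(),                 # terminal
}

def advance(state, new_state):
    # Move a bug report to new_state, rejecting illegal transitions.
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = advance("open", "verify")
s = advance(s, "closed")
print(s)  # closed
```

A table like this is what makes the exit criterion "all issues either closed or deferred" checkable by query rather than by meeting.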
09
Risks and contingencies
The plan closed with named risks to the test effort itself — environment instability, late-breaking scope additions, third-party dependencies — and the contingency actions MTG would take if each materialized, followed by a change history, referenced documents, and a frequently-asked-questions appendix.