Get builds into the lab without wasting a cycle.
Seven steps, two smoke-test gates.
A broken build in the test lab burns a full test cycle and a share of team morale. These seven steps, anchored by two smoke-test gates, catch failures before they reach the team.
The test release process is the most under-engineered link in most test programs. Fix that link and the whole execution cycle stops leaking days.
Key Takeaways
Four things to remember.
Decide content before you cut the build
Bug fixes, features, docs — resolve what is in and what is out before development spins up. A build with ambiguous content produces ambiguous test results.
Smoke test twice
Once after build, once after install in the lab. The same test from two sides catches different classes of failure.
Reject bad builds loudly
When the lab smoke test fails, uninstall and resume the old cycle. Do not try to salvage a half-working build; it costs more than it saves.
Version everything
Mark the build with a version both in the artifact and in the source repository. Traceability during triage depends on it.
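A minimal sketch of the "version everything" takeaway in shell, using git tags. The scratch repository, the `build/VERSION` file, and the `build-` tag prefix are illustrative assumptions, not part of the checklist; in practice you would run the last two commands inside the project's own repository at build time.

```shell
#!/bin/sh
set -e
VERSION="1.4.2"

# Scratch repository so the sketch is self-contained; in real use you are
# already inside the project's repository at the point of the build.
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email "build@example.com"
git config user.name "build"
git commit -q --allow-empty -m "source snapshot"

# 1. The version goes inside the artifact itself.
mkdir -p build
printf '%s\n' "$VERSION" > build/VERSION

# 2. The same version is marked in the source repository,
#    so triage can map any installed build back to exact sources.
git tag -a "build-$VERSION" -m "Test release $VERSION"
```

An annotated tag (`-a`) rather than a lightweight one is deliberate: it records who cut the build and when, which is exactly the traceability triage depends on.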
Why this exists
The problem this process fixes.
Test releases are where time gets lost. A build lands, a test case fails, a tester files a bug, a developer responds that the build is broken — and four days of reporting later, the team realizes nobody actually ran a smoke test.
These seven steps close that gap. The two smoke-test gates (after build, after install) get the question "is the build good enough to test?" answered in minutes, not days.
The checklist
Seven steps, in order.
1. Select the content (bug fixes, new features, and documentation) for a particular test release.
2. Implement the changes required for the bug fixes and new features, checking those changes into the source repository as they are completed and unit tested.
3. Fetch the source files from the repository; compile, link, and otherwise assemble the build; and mark the build with a version number, both in the build and in the repository.
4. Smoke test the build. If the tests pass, continue with the next step; if they fail, figure out what went wrong, fix the problem, and return to the previous step.
5. Create an installable media image of the build, package it appropriately, and deliver it to the person responsible for installing it in the test lab.
6. Install the build in the test lab.
7. Smoke test the build in the lab environment. If the tests pass, begin the test cycle; if they fail, uninstall the build, resume the old test cycle, and return the build to the development team to start over at the first step.
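Steps 4 and 7 share the same gate logic. A minimal sketch, where `run_smoke_tests` is a placeholder for whatever smoke suite the team actually maintains; everything else is just the pass/fail routing the checklist describes.

```shell
#!/bin/sh
# Minimal smoke-test gate (the logic behind steps 4 and 7).

run_smoke_tests() {
  # Placeholder: substitute the real smoke suite invocation here,
  # e.g. a small script that exercises install paths and core flows.
  true
}

if run_smoke_tests; then
  echo "GATE PASS: proceed to the next step"
else
  # Step 4: fix the problem and rebuild.
  # Step 7: uninstall, resume the old cycle, return the build to development.
  echo "GATE FAIL: reject the build" >&2
  exit 1
fi
```

The point of expressing the gate as a script is that rejection becomes the default exit path, not a judgment call made under schedule pressure.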
One more thing
The seven-step sequence takes minutes per step and saves days per cycle. The discipline is in the gate-keeping: the two smoke tests are not optional, and neither is the "return the build to development" path when one fails.
Take it with you
Download the piece you just read.
We keep this library free. All we ask is that you tell us who you are, so we know who to follow up with if we release an updated version. It is a one-time form; this browser remembers you after that.
Related in the library
What to pair this with.
Need a QA program to back this up in your organization?
If a checklist is not enough and you want help applying it to a live engagement, we can set up a call this week.
Related reading
Articles, talks, guides, and case studies tagged for the same audience.
- Whitepaper
Evaluation Before Shipping: How to Test an AI Application Before It Hits Production
The release-gate playbook for AI features. Covers the five evaluation dimensions, how to build a lean golden set, where LLM-as-judge is trustworthy and where it lies, rollout mechanics with named exit criteria, and the regression suite that keeps a shipped AI feature from quietly rotting in production.
Read →
- Whitepaper
Choosing the Right Model (and Knowing When to Switch)
A practical framework for matching LLM model tier to task. Covers the four axes (capability, latency, cost, reliability), cascade routing patterns that cut cost 60 to 80 percent without measurable quality loss, switching costs you did not plan for, and the worked economics at 10K, 100K, and 1M decisions per day.
Read →
- Whitepaper
Beyond ISTQB: A Multi-Domain Certification Roadmap for Technical L&D
Most engineering L&D programs over-index on a single certification family (usually ISTQB on the QA side, AWS on the infrastructure side) and under-invest across the rest of the technical domains the org actually needs. This paper covers a multi-domain certification roadmap (QA, AI, cloud, data, security, project management, software engineering) with sequencing logic for each level of the engineering ladder, plus the maintenance discipline that keeps the roadmap relevant as the technology shifts underneath it.
Read →
- Guide
The ISTQB Advanced Level path, mapped
The Advanced Level landscape keeps changing — CTAL-TA v4.0 shipped May 2025, CTAL-TM is on v3.0, CTAL-TAE is on v2.0. This guide maps all four core modules, prerequisites, exam formats, sunset dates, and which module a given role should take first. Links directly to the authoritative istqb.org syllabi.
Read →
- Whitepaper
Bug Triage: A Cross-Functional Framework for Deciding Which Defects to Fix
Bug triage is the cross-functional decision process that converts raw defect reports into prioritized action. Done well, it optimizes limited engineering capacity against risk; done poorly, it becomes a backlog-management ritual that neither fixes the important defects nor drops the unimportant ones. This whitepaper covers the triage process, the participants, the six action outcomes, the four decision factors, and the governance disciplines that keep triage effective in continuous-delivery environments.
Read →
- Whitepaper
Building Quality In: What Engineering Organizations Do from Day One
Testing at the end builds confidence, but the most efficient quality assurance is building the system the right way from day one. This whitepaper covers the upstream disciplines — requirements clarity, lifecycle selection, per-unit programmer practices, and continuous integration — that make system-level testing cheap and fast rather than the only thing holding a release together.
Read →
Where this leads
- Service · Quality engineering
Software Quality & Security
Independent test programs, security testing, and quality engineering for systems where defects cost real money.
Learn more →
- Solution
Risk Reduction & Clear Decisions
Quality programs and decision frameworks that shift risk discussions from anecdote to evidence.
Learn more →
- Solution
Reliable Software at Scale
Quality engineering programs for organizations whose software is now operationally critical.
Learn more →