Template · Test Case

Three test case templates, one workbook.
Step-by-step, screen-by-screen, and IEEE 829 formal.

The test case styles you will need for most software engagements, in one workbook. Pick the variant that matches how your team thinks; the three are equivalent in rigor, different in presentation.

Variants: 3
Standard: IEEE 829
Use when: Manual or automated

A test case is a written question to the system: "does this behave correctly?". The template is the grammar of the question.

Key Takeaways

Four things to remember.

01

Template 1 — step-by-step

For procedural, command-driven, or API-level testing. Numbered major / minor steps with a result column. Works well for automated and manual test cases alike.

02

Template 2 — screen-by-screen

For GUI-heavy applications. Rows map to screens and fields; expected result is captured once at the end. Easier to author; reads faster during execution.

03

Template 3 — IEEE 829 formal

For regulated or safety-critical environments. Full input / output specifications with states, timing, and inter-case dependencies. Takes longer to write; reads like a specification.

04

All three share the same core metadata

Test ID, suite, priority, hardware, software, duration, effort, setup, teardown. Pick a variant for the body; the header stays the same.

Why this exists

What this template is for.

The three variants exist because one size does not fit all. A DevOps team running API regression runs Template 1. A consumer product team running manual GUI regression runs Template 2. A medical-device team running audit-visible test protocols runs Template 3.

The columns below are the union of fields across all three. Each variant populates a subset that matches the testing style.

The columns

What each field means.

Test Case Name

Mnemonic identifier. Short enough to reference in conversation; long enough to recognize at a glance.

Test ID

Five-digit hierarchical ID: XX.YYY where XX is the suite number and YYY is the test number within that suite.
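The XX.YYY convention is easy to enforce mechanically. A minimal sketch, assuming nothing beyond the format described above (the helper name `parse_test_id` is illustrative, not part of the workbook):

```python
import re

# Hypothetical helper: validate the five-digit XX.YYY test-ID convention,
# where XX is the suite number and YYY is the test number within that suite.
TEST_ID_PATTERN = re.compile(r"\d{2}\.\d{3}")

def parse_test_id(test_id: str) -> tuple[int, int]:
    """Return (suite_number, test_number), or raise ValueError."""
    if not TEST_ID_PATTERN.fullmatch(test_id):
        raise ValueError(f"malformed test ID: {test_id!r}")
    suite, test = test_id.split(".")
    return int(suite), int(test)
```

A check like this can run as a lint step over the workbook export before cases enter tracking.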

Test Suite(s)

The name(s) of the test suite(s) that use this case. A case may belong to multiple suites.

Priority

Derived from the quality risk coverage analysis. Drives selection order during compressed cycles.

Hardware Required

One row per required hardware item. Match exactly to the Test Environment sheet entries.

Software Required

One row per required software item, including versions.

Duration

Elapsed clock time to run the test. Distinct from effort — a 2-hour soak test has 2h duration but ~5 min effort.

Effort

Person-hours required to execute the test.

Setup

Steps to bring the system under test into the required initial state. Kept separate from the test body so the initial state can be saved and reused.

Teardown

Steps to return the SUT to pretest state. Often mirrors setup in reverse.
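The Setup and Teardown fields map naturally onto a setup/teardown wrapper in automated runners, which is one reason to keep them separate from the body. A minimal sketch, assuming a dict as a stand-in for a real system under test:

```python
from contextlib import contextmanager

# Illustrative sketch only: Setup and Teardown as a context manager,
# so the required initial state can be saved and reused across cases.
@contextmanager
def sut_session():
    sut = {"state": "initial"}    # Setup: bring SUT into the required state
    try:
        yield sut                 # the test body executes here
    finally:
        sut["state"] = "pretest"  # Teardown: return SUT to pretest state
```

Teardown in the `finally` block runs even when the body fails, mirroring the template's rule that teardown is always explicit.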

Body (varies by template)

Template 1: numbered major / minor steps with result column. Template 2: screen × field × input table. Template 3: IEEE 829 input / output spec with states, timing, dependencies.
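The Template 1 body can be encoded directly for automated runs. A hypothetical encoding (field names are illustrative, not prescribed by the workbook):

```python
from dataclasses import dataclass

# One row of the Template 1 body: a numbered major/minor step
# with its result column and an optional linked bug ID.
@dataclass
class Step:
    number: str        # "1", "1.1", "1.2", ...
    action: str
    result: str = ""   # Pass / Fail / blank until executed
    bug_id: str = ""

body = [
    Step("1", "Start the service"),
    Step("1.1", "Verify port 8080 is listening"),
    Step("2", "Send a valid login request"),
]
```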

Execution Summary

Status, system config ID, tester, date completed, actual effort, actual duration, bug IDs linked.

Live preview

What it looks like populated.

Header fields of Template 1 — the step-by-step variant.

Test Case Name: Mnemonic identifier
Test ID: Five-digit ID, XX.YYY (XX suite number, YYY test number)
Test Suite(s): The name(s) of the test suite(s) that use this test case
Priority: From quality risk coverage analysis
Hardware Required: List hardware in rows
Software Required: List software in rows
Duration: Elapsed clock time
Effort: Person-hours
Setup: List steps needed to set up the test
Teardown: List steps needed to return the SUT to pretest state
Body: ID / Step / Result / Bug ID / Bug RPN
Execution Summary: Status, config, tester, dates, effort, duration

How to use it

6 steps, in order.

  1. Pick the variant that matches how your team thinks. Do not try to mix variants inside a single test suite.

  2. Populate the common header fields (Name, ID, Suite, Priority, etc.) for every case; these drive the tracking workbook.

  3. Write the body with one atomic check per step or row. A step that checks three things is three steps, not one.

  4. Keep Setup and Teardown explicit, even when they seem obvious. Automated runners depend on them.

  5. Leave the Execution Summary fields blank in the template; they are filled in when the case runs, not when it is designed.

  6. Review authored cases with the test lead before they enter the tracking workbook. A bad case in tracking is harder to remove than a bad case in review.
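The "one atomic check per step" rule from step 3 can be sketched as follows; the `response` values here are invented purely for illustration:

```python
# Hypothetical API response used only to illustrate atomic steps.
response = {"status": 200, "body": {"id": 7}, "latency_ms": 42}

# Not atomic: one step hiding three checks. If it fails, which check failed?
# assert (response["status"] == 200 and response["body"]["id"] == 7
#         and response["latency_ms"] < 100)

# Atomic: three steps, each with its own pass/fail result in the body.
assert response["status"] == 200      # step 3.1
assert response["body"]["id"] == 7    # step 3.2
assert response["latency_ms"] < 100   # step 3.3
```

Each atomic step gets its own row in the result column, so a failure points at exactly one check.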

Methodology

The thinking behind it.

Template 3 follows IEEE 829-2008 Test Case Specification. Inter-case dependencies, timing constraints, and explicit environmental needs are what distinguish it from the lighter-weight variants.

For automated cases, Template 1 is almost always correct. For highly visual or wizard-style UIs, Template 2 is easier to maintain. Template 3 is reserved for regulated environments where the case itself is a controlled document.
