Whitepaper · Updated April 2026 · 9 min read

Shoestring Manual Testing: A Paper for the Test Manager Running the Program

The paper version of the Shoestring Manual Testing playbook — with the implementation detail: sizing math, staffing sources, training, shift planning, and the management pitch.

Manual Testing · Test Management · Staffing · Budget Constrained · Test Operations

Companion paper · pairs with the Shoestring Manual Testing talk

Automation isn't always the answer. Most real test programs live in the messy middle: some automation, a lot of manual testing, tight budgets, and a schedule that's already slipping. This paper is the implementation manual for running the manual side of that program without burning money.

Read time: ~10 minutes. Written for test managers and engineering leaders running budget-constrained QA programs.

Why manual testing is not going away

A common fantasy has a test team composed entirely of senior test engineers. They converse fluently with developers about code coverage one minute and with marketing about release trade-offs the next. They write every test in code, maintain the harness, and run sophisticated load and performance suites. Manual testing is used sparingly, to produce edge conditions that automation can't generate. The headcount and budget for all of this is adequate. Everyone is cross-functional. Everyone ships on time.

This paper is not for the test managers living in that world. It's for the rest of us — test managers and engineering leads running programs with:

  • Impractical full automation. Rapidly changing products, limited tooling, and quality risks that automation simply doesn't reach.
  • Insufficient engineer headcount. Not enough senior test engineers to staff the depth the program needs.
  • Collapsing schedules. Deliverables slipping into the test window, compressing time further.

In that world — which is most worlds — you end up running a significant amount of manual testing. This paper covers how to run it without the wheels falling off. The talk version (Shoestring Manual Testing) is the high-level pitch; this paper is the implementation manual.

Why automation alone doesn't solve it

Several reasons full automation isn't a complete solution for most programs:

  • System change rate. Automated tests need constant update when the system under test changes frequently. A fast-moving product consumes automation maintenance budget at a rate that often exceeds the savings.
  • Up-front cost. Automation pays back across many runs. Tight schedules and budgets cap the number of tests you can reasonably automate, creating coverage gaps.
  • Tool fit. For cutting-edge products on new platforms, off-the-shelf automation tools may not exist or may not fit. Custom harness work is often necessary.
  • Skill mismatch. Good test automation requires software-engineering skill. If your team doesn't already have it, building it takes time and experience you won't have within the current project.
  • Inherent manual tests. Configuration, compatibility, installation, error recovery, localization, usability, and accessibility testing all fundamentally require human interaction.

These aren't reasons to skip automation. They're reasons you'll always have a manual component, and why you need to run it well.

Sizing the manual test team

The first concrete question: how many people? You can answer this analytically if you know three things about each test case:

  1. Person-hours of effort per test case (execution, result recording, any ancillary work).
  2. Wall-clock hours — some tests have long runs that block personnel, others are short and parallelize freely.
  3. Dependencies between tests — serialization constraints that affect the critical path.

With those three inputs, you can build a Gantt chart and resource plan in any project planning tool (Asana, Monday, Jira, Microsoft Project, Linear roadmap views). If you don't have those inputs, the sizing exercise becomes a guess — which is the state most programs are actually in.
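The sizing arithmetic above can be sketched directly. This is a minimal illustration, not a substitute for a real resource plan; the test names, hours, and dependency graph are entirely made up:

```python
# Sketch: estimate schedule length and minimum headcount from the three
# inputs above. All test data below is illustrative.
import math
from graphlib import TopologicalSorter

# test name -> (person_hours, wall_clock_hours, dependencies)
tests = {
    "install":    (2.0,  2.0, []),
    "config":     (4.0,  4.0, ["install"]),
    "smoke":      (1.0,  1.0, ["install"]),
    "soak":       (1.0, 12.0, ["smoke"]),      # long run, little attended time
    "regression": (16.0, 16.0, ["config"]),
}

# Critical path: longest chain of wall-clock time through the dependencies.
finish = {}
order = TopologicalSorter({t: set(d) for t, (_, _, d) in tests.items()})
for name in order.static_order():
    _, wall, deps = tests[name]
    finish[name] = wall + max((finish[d] for d in deps), default=0.0)

critical_path = max(finish.values())                 # 22 wall-clock hours
total_effort = sum(ph for ph, _, _ in tests.values())  # 24 person-hours

# Naive lower bound on staff: total effort spread across the critical path,
# at 6 effective testing hours per 8-hour day.
days = critical_path / 8.0
min_staff = math.ceil(total_effort / (days * 6.0))
print(f"critical path: {critical_path:.0f} h, "
      f"effort: {total_effort:.0f} person-h, min staff: {min_staff}")
```

A real plan adds the downtime factor and per-tester skill constraints on top of this lower bound.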

The two key rules of thumb

Effective time rule: 6 hours of testing per 8-10 hour day. Testers carry overhead that can't be eliminated: filing bug reports, updating status, communication, standups, email, management check-ins, and breaks. Budget roughly 75% of a nominal 8-hour day — about 6 hours — for actual testing. Planning for 8 hours a day of pure testing consistently burns out the team and produces worse bug reports.

Downtime rule: 25–75% lost to blocking issues. Downtime is time the test team cannot make progress because they're waiting for builds, fixes, environment repairs, debug assistance, or access to hardware. When the product enters test reasonably well unit-tested, 25% is realistic. When the product is thrown over the wall raw, 75% is not unusual. Plan to the average, but watch the actual number.

These rules apply whether you're managing five technicians or fifty.
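A worked example of the two rules together. The effort total and window length here are made-up inputs; substitute your own:

```python
# Worked example of the effective-time and downtime rules of thumb.
# The effort and window figures are illustrative placeholders.
import math

test_effort_hours = 1200       # total person-hours of test execution needed
window_days = 20               # working days in the test window
effective_hours_per_day = 6    # effective-time rule: 6 h per 8-hour day
downtime_fraction = 0.50       # midpoint of the 25-75% downtime rule

# Hours per day each tester actually spends making progress:
productive = effective_hours_per_day * (1 - downtime_fraction)   # 3.0 h/day
testers_needed = math.ceil(test_effort_hours / (window_days * productive))
print(testers_needed)   # 1200 / (20 * 3.0) = 20 testers
```

Note how the downtime factor doubles the headcount a naive plan would produce; that is exactly the gap that sinks under-planned programs.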

The team structure

A workable manual test team structure looks like this:

  • Test manager — owns the program, communicates with stakeholders, handles budget and staffing.
  • Test engineers (1 per 5–7 technicians) — technical leadership, test design, harness work, bug triage support, training new technicians.
  • Test technicians — execute tests, report results, file bug reports.

The engineers provide the technical ballast. The technicians provide the scale.

Hiring good test technicians

Staffing the technician layer is where most programs stall. The good news: a surprising number of people have the aptitude once you know where to look.

Four reliable sources:

  1. Students — two-year, four-year, and technical school. Engineering and CS majors are especially good, but humanities students with strong attention to detail also work. Watch out for finals weeks and course schedule conflicts.
  2. Customer support and technical support staff. They already understand the product and the customer's experience of failure. The adaptation they need: identifying problems instead of solving them.
  3. Moonlighters from adjacent technical roles. Day-job experience in related fields is valuable. Watch for fatigue and priority conflicts when their primary job gets busy.
  4. Detail-oriented data-entry professionals. Word processors, call center agents, claims processors — people with demonstrated ability to do precise, repetitive work accurately. Screen carefully for curiosity to investigate problems, not just execute scripts.

Hiring channels that work: local university career services, community and technical colleges, temporary staffing agencies (for seasonal surge), internal referrals, LinkedIn (narrow on "quality assurance" + local + recent graduates). In current terms, specialized QA staffing agencies and crowd-testing platforms are additional channels, especially for compatibility and localization surges.

Realistic ramp rate: one new technician per test engineer per week. Faster and the engineer can't onboard effectively; slower and you're over-staffing your engineering layer.
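The ramp-rate rule reduces to simple arithmetic. The headcounts below are illustrative:

```python
# Sketch of the ramp-rate rule: one new technician per test engineer per
# week. All headcounts here are illustrative.
import math

engineers = 3
current_technicians = 2
target_technicians = 14

weeks_to_full_staff = math.ceil(
    (target_technicians - current_technicians) / engineers
)
print(weeks_to_full_staff)  # ceil(12 / 3) = 4 weeks
```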

Training new technicians

New technicians need two things: the local environment context and the universal testing skills.

Local context — how things work here. Network access, tooling credentials, time tracking, test case repository, bug tracker, status reporting cadence, who to ask about what. Document this or assign a mentor. Expect a week of ramp.

Universal skills — applicable across projects and organizations:

How to execute a manual test case. Addresses:

  • How much ambiguity is in the test case descriptions, and how much exploration is expected.
  • How long a test should take to run; when to escalate if it's running long.
  • Which result states can be assigned (pass, fail, warn, blocked).
  • How test assignments happen — self-assigned, manager-assigned, or round-robin.
  • How dependencies between tests are resolved.
  • How to prevent overlaps and gaps when multiple technicians are working similar areas.

A one- or two-page documented process suffices.

How to file a good bug report. This is the universal skill that separates effective technicians from the rest. Ten steps — structure, reproduce, isolate, generalize, compare, summarize, condense, disambiguate, neutralize, review. The full treatment is at The Bug Reporting Process; the printable reference is Bug Reporting Process Checklist.

Training on bug reporting is the single highest-leverage investment in a new technician. Get it right and every subsequent defect they find produces useful output. Get it wrong and the value of their entire test effort drops.

Shifts and equipment utilization

A practical way to stretch a tight budget: run multiple shifts on the same equipment. Shift patterns that work:

  • Single 10-hour day shift. One shift of technicians per workday. Simple, easy to staff, lowest equipment utilization.
  • Two 8-hour shifts (16 hours/day). Morning and evening teams. Doubles equipment utilization. Requires clear handoff discipline at shift boundaries.
  • Three shifts (24 hours/day). Typical in large programs with physical hardware constraints. Requires mature handoff and state-of-test visibility.
  • Hybrid with remote team in another timezone. Effectively extends the workday without true shift work. Works when time zones are 6–10 hours offset and the remote team has its own shift supervisor.
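As a rough comparison, here are the weekly equipment hours each pattern buys, assuming a 5-day week, nominal shift lengths, and (an assumption) 8 hours of remote-team coverage in the hybrid case:

```python
# Rough weekly equipment-utilization comparison for the shift patterns
# above. Hours are nominal and ignore handoff overlap.
patterns = {
    "single 10h shift":        10 * 5,        # 50 h/week
    "two 8h shifts":           16 * 5,        # 80 h/week
    "three shifts (24h)":      24 * 5,        # 120 h/week
    "day shift + remote team": (10 + 8) * 5,  # assumed 8 h remote coverage
}
for name, hours in patterns.items():
    print(f"{name}: {hours} equipment-hours/week "
          f"({hours / (24 * 7):.0%} of the calendar)")
```

The same equipment budget stretches 2.4x further under three shifts than under one, which is the whole argument for accepting the handoff overhead.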

Shift work requires:

  • Clear test state tracking — who ran what, current status, blockers.
  • Documented handoff procedures — typically a 15-minute standup at shift overlap.
  • Explicit rules for cross-shift bug report ownership.
  • Management discipline on not letting shift handoffs become a hiding place for lost work.

Management caveats

Running a shoestring manual test program has a few traps to avoid:

Don't confuse warm bodies with capability. A test team of warm bodies executing scripts does less good than a smaller team of engaged, trained technicians. Screen for curiosity, not just availability.

Don't let the program become scripted-only. Scripted manual testing catches obvious functional bugs. Mature programs also include exploratory testing, where technicians (with guidance from engineers) investigate the product beyond the scripts. A program with no exploratory component is a program with blind spots.

Don't underinvest in the engineer layer. Under-staffing senior test engineers to save budget looks like a win on the spreadsheet and loses in practice. The engineers are the force multipliers. Skipping them produces programs that can execute tests but not improve the test strategy.

Don't skip the career path. Technicians who see no path to engineer advancement churn. Build visible progression — from technician, to senior technician, to junior engineer, to engineer. Not everyone will take it, but offering it retains the good ones.

Don't ignore morale. Manual testing is tedious work done under pressure. The manager's job includes making the work feel valued: crediting discoveries, making good bug reports visible, celebrating test-phase exits. Small gestures compound.

Selling the plan to management

Budget-constrained programs require leadership to explicitly approve the tradeoffs. The pitch:

  1. The cost of quality — what defects escaping to production cost the business right now. Numbers from support tickets, churn analysis, sales-cycle escape data. (See Investing in Testing, Part 1 for the framework.)
  2. The ROI of the proposed program — even a shoestring manual program materially reduces cost of quality. Use a worked example with your numbers.
  3. The constraints honestly stated — what this program catches well (functional, configuration, compatibility, usability) and what it doesn't (scale, performance, extensive regression). Draw the line between "tested" and "known not tested."
  4. The alternative — what continuing without this program costs. Typically a higher number than funding the program.
  5. The phased plan — how you'll expand the program as the business case proves out. Shoestring programs earn their way to larger ones.

Leadership approves what they understand. A clean pitch that grounds the request in dollars and real constraints works. An abstract plea for "more testing resources" doesn't.
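The arithmetic behind steps 1 and 2 can be sketched in a few lines. Every dollar figure here is a placeholder to be replaced with your own support, churn, and escape data:

```python
# Hedged sketch of the cost-of-quality / ROI pitch arithmetic.
# All figures are placeholders, not benchmarks.
escaped_defects_per_release = 40
cost_per_escape = 5_000            # support + fix + churn cost per escape
program_cost_per_release = 60_000  # technicians, engineers, equipment
catch_rate = 0.60                  # fraction of would-be escapes caught in test

savings = escaped_defects_per_release * cost_per_escape * catch_rate
roi = (savings - program_cost_per_release) / program_cost_per_release
print(f"savings ${savings:,.0f}, ROI {roi:.0%}")  # savings $120,000, ROI 100%
```

The point of the exercise is not precision; it is forcing every input into a number leadership can challenge and then approve.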

The upgrade path

A shoestring manual program isn't the terminal state. It's the starting state. As the business case proves out, the program expands:

  • Add senior engineers to deepen technique (exploratory, risk-based, performance probing).
  • Introduce lightweight automation for the regression sets that truly earn their keep — API-level contract tests are usually the highest-ROI first automation.
  • Invest in observability so that production telemetry closes the loop back to the risk register.
  • Mature the process — metrics, status reporting, defect data analysis (see Charting Defect Data).

A mature program doesn't look like a shoestring program with more people. It looks structurally different. Plan for the evolution.



Rex Black, Inc.

Enterprise technology consulting · Dallas, Texas
