Spec Review Checklist Before Coding
Specs that pass an informal skim still ship with missing rollback plans, unstated idempotency constraints, and acceptance criteria no one can test from the document alone. This is a checklist organized so that a missing rollback plan or an untestable criterion gets caught before implementation, not mid-sprint.
Why a checklist beats an informal review
Informal spec reviews rely on reviewers remembering what to look for. They usually miss the same categories: rollback plans, idempotency requirements, error paths for the failure cases that aren't the obvious ones. A checklist doesn't make the review faster — it makes sure the same question gets asked every time, regardless of who's reviewing or how much pressure the team is under to ship this sprint.
Work through this in order. Each section targets a specific failure mode that routinely survives informal reviews and then shows up during implementation, QA, or production.
| Reviewer Role | Primary Focus | Key Questions |
|---|---|---|
| Product Manager | Scope & intent | Does the goal match the user need? Are non-goals agreed? |
| Engineer | Feasibility & gaps | Are edge cases covered? Is the rollback plan realistic? |
| QA | Testability | Can I write test cases from these criteria without asking questions? |
| Ops / SRE | Deployment safety | Is monitoring defined? Is the rollback reversible? |
Scope clarity
Before reviewing anything else, the spec needs to answer these four questions clearly enough that two people reading it independently would give the same answers:
- What is this change trying to accomplish? Can you state it in one sentence?
- Who is the intended user or caller of this feature?
- What are the explicit non-goals — things this change is deliberately not doing?
- Is there a named decision-maker for scope changes that come up during implementation?
The non-goals section is the one most commonly missing or empty. Flag it immediately if it's absent. Non-goals aren't optional — they're what prevents scope from expanding silently once implementation is underway and changing direction becomes expensive.
Acceptance criteria completeness
Work through each acceptance criterion with the following checks. This is not a speed exercise:
For each acceptance criterion, verify:
- [ ] It follows a Given/When/Then structure (or equivalent)
- [ ] The "Given" names a specific state or precondition
- [ ] The "When" names a specific action or event
- [ ] The "Then" describes an observable outcome, not "works correctly" or "handles it"
- [ ] No vague qualifiers: fast, reasonable, appropriate, gracefully, user-friendly
- [ ] A QA engineer could write a test script from this criterion alone
- [ ] Two different testers would reach the same pass/fail verdict independently
Criteria that fail the last two checks are not acceptance criteria — they're descriptions. They won't prevent implementation errors and they won't enable confident QA testing. Replace them before the spec moves forward.
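A criterion that passes these checks translates directly into a test. The sketch below shows that translation for a hypothetical criterion ("Given a cart with one item, when the user applies an expired coupon, then the API returns HTTP 422 and the cart total is unchanged"); the function, coupon code, and field names are all illustrative stand-ins, not a real API.

```python
def apply_coupon(cart, code):
    """Stand-in for the system under test; names and codes are illustrative."""
    if code == "EXPIRED10":
        return {"status": 422, "total": cart["total"]}
    return {"status": 200, "total": round(cart["total"] * 0.9, 2)}

def test_expired_coupon_is_rejected():
    cart = {"items": ["sku-123"], "total": 20.00}  # Given: a specific state
    response = apply_coupon(cart, "EXPIRED10")     # When: a specific action
    assert response["status"] == 422               # Then: an observable outcome
    assert response["total"] == 20.00              # Then: no hidden side effect
```

If the "Then" clause had said "the coupon is handled gracefully," there would be nothing concrete to assert, which is exactly the failure the last two checks catch.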
Edge cases covered
These categories show up in production every week. If they're not in the spec, each engineer will handle them independently — inconsistently, without review, and without a record:
- Empty state: what happens when there's no data, no results, or an empty input?
- Maximum bounds: what happens at the maximum allowed value or list length?
- Concurrent operations: can two users or processes trigger this simultaneously? What happens?
- Expired or invalid tokens, sessions, or references
- Deleted dependencies: what happens if a related record was deleted between steps?
- Permission variations: behavior for users with different roles or access levels
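When the spec does name these cases, each one can become a row in a test table rather than an ad-hoc decision in someone's branch. A minimal sketch, using a hypothetical search handler (the function, limits, and return shapes are assumptions for illustration):

```python
def search(query, max_results=100):
    """Illustrative handler; the spec'd behaviors are encoded explicitly."""
    if not query:
        return []                         # empty state: defined as an empty list, not an error
    max_results = min(max_results, 1000)  # maximum bound: clamped per spec
    return [f"match-{i}" for i in range(min(3, max_results))]

# Each edge-case category the spec names becomes one row: (inputs, expected).
edge_cases = [
    (("", 10), []),                                          # empty input
    (("widgets", 0), []),                                    # zero results requested
    (("widgets", 5000), ["match-0", "match-1", "match-2"]),  # over the bound, clamped
]

for args, expected in edge_cases:
    assert search(*args) == expected
```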
Error paths defined
For each external dependency or validation step, the spec needs to answer these four questions. If any of them get a "TBD" or a blank, the spec is not ready:
- What does the user see if this step fails?
- Does the system retry? How many times? With what backoff?
- Is data written before the failure cleaned up, or does it remain in a partial state?
- Is the failure surfaced to monitoring, and if so, which alert or log?
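The retry question in particular should be answered with numbers, not adjectives. A sketch of what a pinned-down policy looks like in code (attempt count and backoff values are illustrative, matching nothing in particular):

```python
import time

def call_with_retry(fn, retries=3, backoff=1.0):
    """Retry policy made explicit, the way the spec should state it:
    how many attempts, what backoff, and what happens on exhaustion."""
    for attempt in range(1, retries + 1):
        try:
            return fn()
        except TimeoutError:
            if attempt == retries:
                raise              # surfaced to the caller; never silently swallowed
            time.sleep(backoff)
```

A spec that says "the system retries" without these three numbers leaves each of them to be decided during implementation, unreviewed.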
Error path check example (payment service timeout):
- [ ] User-facing message defined: yes / no
- [ ] Order state after timeout: not created / partially created / undefined
- [ ] Retry behavior: yes (up to 3x, 1 s backoff) / no
- [ ] Ops alert: yes (PagerDuty on >3 timeouts/min) / no
Dependencies identified
List every external system, service, API, or team this feature depends on. For each one:
- Is it available in all environments where this will be tested? (Dev, staging, production.)
- Does another team need to be notified or make a change on their end?
- What's the failure behavior if this dependency is unavailable at runtime?
- Are there rate limits or quotas that could affect behavior at scale?
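One way to keep this list honest is a small dependency manifest that the review (or CI) can check for completeness, so a dependency can't be named without answering the four questions above. The service name and field values below are hypothetical:

```python
# Hypothetical manifest; one entry per external dependency the spec names.
DEPENDENCIES = {
    "payments-api": {
        "environments": ["dev", "staging", "prod"],
        "owning_team_notified": True,
        "on_unavailable": "queue the order and retry asynchronously",
        "rate_limit": "100 req/s per tenant",
    },
}

REQUIRED_FIELDS = {"environments", "owning_team_notified",
                   "on_unavailable", "rate_limit"}

def missing_fields(manifest):
    """Return, per dependency, any of the four required answers left blank."""
    return {name: REQUIRED_FIELDS - info.keys()
            for name, info in manifest.items()
            if REQUIRED_FIELDS - info.keys()}
```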
Dependencies discovered mid-implementation are one of the most reliable sources of sprint delays. Naming them here costs 10 minutes. Missing one costs a sprint.
Rollback defined
Every feature that reaches production needs a rollback plan. This section is missing from more specs than any other, usually because writing it feels like planning for failure. It isn't — it's clarifying what you're actually shipping. Check that the spec answers:
- Can the change be reverted without a database migration? If not, what does the migration require?
- Is there a feature flag? What is its name and what does each state mean?
- If rolled back, are there unrecoverable side effects — emails sent, payments charged, records created?
- Is the rollback procedure documented in the spec or in a linked runbook?
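When the answer to the feature-flag question is yes, the spec should name the flag and what each state means, because that is what makes rollback a configuration change rather than a deploy. A minimal sketch, assuming a hypothetical flag named CHECKOUT_V2 read from the environment:

```python
import os

def checkout_v2_enabled():
    """Gate for a hypothetical flag named CHECKOUT_V2.
    "on" routes to the new flow; anything else keeps the old one.
    Rollback is flipping the flag off: no deploy, no migration."""
    return os.environ.get("CHECKOUT_V2", "off") == "on"

def checkout(cart):
    if checkout_v2_enabled():
        return "v2 flow"   # new behavior, reachable only behind the flag
    return "v1 flow"       # rollback path: previous behavior left intact
```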
QA can test without interviewing the author
This is the final and most important check. Hand the spec to someone who was not involved in writing it. Ask them to answer using only the document:
- What are the three most important test cases for the happy path?
- What are the two most important error path test cases?
- What would cause this feature to fail after deployment?
If they can't answer without asking the author, the spec isn't ready. The gaps they surface in this exercise are exactly the gaps that would have appeared during implementation or QA — at a much higher cost. Fix them now.
Sign-off
The checklist is most useful when it produces an explicit record of who reviewed what before implementation started. Simple, takes 60 seconds:
Spec review sign-off: [Feature Name]
- Engineering lead: [ ] approved [ ] changes requested
- Product owner: [ ] approved [ ] changes requested
- QA lead: [ ] approved [ ] changes requested
- Approved for implementation: yes / no
- Date approved:
- Reviewer notes:
When questions come up mid-implementation — and they will — the team returns to the spec and this record instead of reconstructing decisions from memory or a Slack scroll-back.
Supplemental: if the spec was drafted by an AI tool
Teams that use AI to draft initial specs need one additional pass. AI-generated acceptance criteria often look complete but contain hidden vagueness — they pattern-match the right structure without the domain knowledge to fill it correctly. Run the following checks specifically on AI-drafted criteria before treating the spec as ready:
- Does each "Then" clause describe something literally observable — a specific HTTP status, a field value, a message string — or does it describe an intention ("the system handles the error correctly")?
- Are the edge cases domain-specific — covering the actual failure modes of this feature — or generic placeholders that would apply to any feature?
- Does the non-goals section reflect deliberate product decisions, or does it list things the AI inferred were out of scope without evidence?
- Are the dependencies named from the actual system — real service names, real API contracts — or fictional examples that look plausible?
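The first of these checks can even be partially automated with a rough screen for intention words in "Then" clauses. A sketch, assuming criteria are available as plain strings; the word list is illustrative and should be tuned to the team's vocabulary:

```python
import re

# Words that signal an intention rather than an observable outcome.
VAGUE = {"correctly", "gracefully", "appropriately", "fast",
         "reasonable", "user-friendly", "handles", "works"}

def flag_vague_criteria(criteria):
    """Return the criteria that contain a vague qualifier."""
    flagged = []
    for criterion in criteria:
        words = set(re.findall(r"[a-z-]+", criterion.lower()))
        if words & VAGUE:
            flagged.append(criterion)
    return flagged
```

A screen like this catches the obvious cases; the domain-specific checks above still require a human reviewer.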
An AI-drafted spec can be a useful starting point — it fills the structural skeleton quickly and prompts the team to think through cases they might have skipped. The checklist above still applies in full. The AI draft saves time getting to review; the review itself cannot be skipped.
Editorial note
- Author details: Daniel Marsh