Spec Review Checklist (Before Implementation)
A good spec review is a risk review conducted in advance. Its goal is to prevent avoidable rework and production incidents; that matters more than documentation completeness.
Quick answer
Run one structured checklist across scope, acceptance quality, contract impact, dependency ownership, rollout safety, and rollback viability. If any critical item is unresolved, implementation should wait.
Review areas and pass criteria
- Scope: goals and non-goals are explicit and stable.
- Acceptance: every criterion is testable and unambiguous.
- Contracts: API/DB changes include compatibility and migration notes.
- Dependencies: owner, timeline, and fallback exist for each dependency.
- Operations: rollout, monitoring, alerts, and rollback are defined.
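The gate rule from the quick answer — implementation waits while any critical item is unresolved — can be sketched as a small checklist structure. The `ReviewItem` and `gate_decision` names below are illustrative, not part of any real tool.

```python
from dataclasses import dataclass

@dataclass
class ReviewItem:
    area: str          # e.g. "Scope", "Contracts", "Operations"
    critical: bool     # does this item gate implementation?
    resolved: bool     # was it settled during review?

def gate_decision(items):
    """Return 'proceed' only when every critical item is resolved."""
    blockers = [i.area for i in items if i.critical and not i.resolved]
    return ("blocked", blockers) if blockers else ("proceed", [])

checklist = [
    ReviewItem("Scope", critical=True, resolved=True),
    ReviewItem("Contracts", critical=True, resolved=False),
    ReviewItem("Operations", critical=False, resolved=False),
]
status, blockers = gate_decision(checklist)
# status == "blocked", blockers == ["Contracts"]
```

Note that the non-critical Operations item does not block the gate; it would instead become a documented condition with an owner.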
15-minute spec review agenda
Spec Review Gate

1) Scope check
- Is this change solving one clear user/business problem?
- Are non-goals strong enough to block scope creep?

2) Behavior check
- Can QA derive tests directly from acceptance criteria?
- Are error and permission paths covered?

3) Contract check
- Do API/DB changes preserve backward compatibility?
- Are schema and migration risks documented?

4) Release check
- Can we detect failures with monitoring?
- Is rollback technically possible without data corruption?
Red flags that should block coding
- "We will decide in implementation" for contract behavior.
- No owner for cross-team dependencies.
- Missing rollback strategy for stateful changes.
- Acceptance criteria rely on subjective language.
Treat these as blockers, not follow-up tasks. Late fixes cost more and are harder to validate.
Sign-off record (recommended)
- Review date and participants.
- Open risks and decision owner.
- Explicit approval status: approved, approved with conditions, or blocked.
A lightweight sign-off log reduces memory loss when the release spans multiple sprints.
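A sign-off log entry with the fields above can be captured as simple structured data. This is a minimal sketch: the field names and status values are assumptions, not a prescribed schema.

```python
from datetime import date

# Illustrative approval states, matching the three outcomes above.
APPROVAL_STATUSES = {"approved", "approved_with_conditions", "blocked"}

def make_signoff(review_date, participants, open_risks, decision_owner, status):
    """Build one sign-off log entry; reject unknown approval states."""
    if status not in APPROVAL_STATUSES:
        raise ValueError(f"unknown status: {status}")
    return {
        "review_date": review_date.isoformat(),
        "participants": participants,
        "open_risks": open_risks,
        "decision_owner": decision_owner,
        "status": status,
    }

record = make_signoff(
    date(2024, 3, 12),
    ["PM", "Tech Lead", "QA"],
    ["migration rollback untested on staging"],
    "Tech Lead",
    "approved_with_conditions",
)
```

Rejecting unknown statuses keeps the log unambiguous when it is read sprints later.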
Review questions by role
A spec review produces better outcomes when each role asks the questions specific to their responsibility, rather than everyone reviewing the same surface. The 15-minute agenda above provides the structure. These role-specific questions sharpen it.
The product manager should ask:
- Does the spec goal match the PRD intent exactly?
- Are non-goals explicit enough to prevent scope expansion?
- Does the acceptance criteria set match the success metric from the PRD?
- Is rollout sequencing aligned with the launch plan?

The tech lead or senior engineer should ask:
- Are there missing dependencies the spec does not account for?
- Do the API/DB changes introduce backward-compatibility risk?
- Is the edge-case list complete for the system's known failure modes?
- Is the rollback strategy technically viable given the data changes?

The QA engineer should ask:
- Can each acceptance criterion be converted to a test case without clarifying questions?
- Are error codes and response bodies specific enough to assert against?
- Are permission paths tested for all roles that interact with this feature?
- Is monitoring in place to detect edge-case failures in production?
If the review is running out of time, PM and QA questions should be prioritized over engineering questions. Scope and testability failures are the most common source of post-release rework. Engineering unknowns can often be resolved during implementation with less cost than scope or testability gaps.
When to block vs when to proceed with conditions
Not every gap in a spec should block implementation. Some gaps are genuinely low-risk and can be resolved during coding. Others are high-risk and must be resolved first. The distinction is whether the gap affects a decision that cannot be safely revisited after implementation begins.
Block implementation when the rollback strategy for stateful changes is missing, when dependency ownership is unresolved, when acceptance criteria cannot be tested, when scope boundaries are ambiguous between two teams, or when security or compliance questions remain open.
Proceed with conditions when the gap is lower-risk: non-critical UI copy not yet finalized, monitoring threshold values pending data from staging, minor edge cases documented as out-of-scope with rationale, or performance targets that can be validated against staging load.
Document and defer items that surfaced during review but fall outside the current scope: future version candidates, observability gaps planned for the next sprint, and out-of-scope improvements.
The bar for "proceed with conditions" is explicit: each condition is written down, has an owner, and has a resolution date before the release gate. An undocumented condition remains a hidden risk rather than a managed one.
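The "written down, owned, dated" rule lends itself to an automated check. The sketch below flags any condition missing an owner or resolving after the release gate; all names and dates are hypothetical.

```python
from datetime import date

def unmanaged_conditions(conditions, release_gate):
    """Return conditions with no owner or a resolve-by date past the gate."""
    return [
        c["description"] for c in conditions
        if not c.get("owner") or c["resolve_by"] > release_gate
    ]

conditions = [
    {"description": "finalize UI copy", "owner": "PM",
     "resolve_by": date(2024, 4, 1)},
    {"description": "tune alert thresholds", "owner": None,
     "resolve_by": date(2024, 4, 1)},
]
flagged = unmanaged_conditions(conditions, release_gate=date(2024, 4, 5))
# flagged == ["tune alert thresholds"]  (no owner assigned)
```

Running a check like this at the release gate turns hidden conditions into visible blockers.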
How often to update specs during implementation
A spec should not change silently during implementation. When the implementation reveals that an acceptance criterion is not achievable as written — because of a technical constraint, a dependency behavior, or new information from an API consumer — the change must go back through review, not just be updated in the document.
- Minor clarifications (adding a missing status code, specifying a response field format) can be updated by the tech lead with a comment noting the change.
- Scope changes (removing an acceptance criterion, narrowing an edge case) require PM and QA sign-off before the criterion is removed from the test plan.
- Breaking changes (discovering that a required behavior is technically infeasible) require a full spec review restart, including a revised acceptance set and rollout plan.
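The three escalation tiers above can be expressed as a simple lookup from change class to required sign-offs. The tier keys and role names are illustrative, not a fixed taxonomy.

```python
# Hypothetical mapping of spec-change classes to required approvals.
REQUIRED_SIGNOFF = {
    "minor_clarification": ["tech_lead"],
    "scope_change": ["pm", "qa"],
    "breaking_change": ["pm", "qa", "tech_lead"],  # full review restart
}

def approvals_needed(change_class):
    """Look up who must sign off; refuse unclassified changes."""
    try:
        return REQUIRED_SIGNOFF[change_class]
    except KeyError:
        raise ValueError(f"unclassified spec change: {change_class}")
```

Forcing every change through a classification step is what prevents silent spec drift.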
The cost of keeping the spec synchronized with implementation is low — a comment and a notification. The cost of shipping implementation that diverged from the agreed spec without updating the document is high: QA tests against the old spec, PM announces features that were not built, and production incidents reference a spec that does not match the deployed behavior.
Editorial note
This guide covers Spec Review Checklist (Before Implementation) for spec-first engineering teams. Examples are illustrative scenarios, not production code.
- Author details: Daniel Marsh