10 Common Mistakes in Software Specifications
Bad specs don't announce themselves. They look fine — sections, headers, some acceptance criteria. You only find the holes when QA can't write a test, or someone asks "what happens when the user has no email?" and the answer is a long silence.
Before we start
The broken specs I've seen over twelve years of B2B SaaS didn't fail because the author was careless. They failed because everyone in the room thought the same assumptions were obvious, and nobody wrote them down. I've been the person who wrote the broken spec more times than I'd like to admit — mistake #6 on this list was one I kept making for years before a junior engineer pointed it out during a review.
These ten mistakes are the repeatable ones — the failures that show up sprint after sprint, not the exotic edge cases. I've ordered them roughly by how much damage they cause, not by how obvious they are.
| # | Mistake | Why It Hurts | One-Line Fix |
|---|---|---|---|
| 1 | Untestable acceptance criteria | QA can't write test cases | Add a metric, threshold, or observable outcome |
| 2 | No non-goals section | Scope creep via ambiguity | List 3+ explicit exclusions with ticket refs |
| 3 | No rollback plan | 2am incident with no exit | Write one sentence: feature flag or revert steps |
| 4 | Tribal knowledge | Spec only readable by insiders | Hand spec to a new engineer — count their questions |
| 5 | No edge cases | Bugs discovered in production | Check: null, duplicate, concurrent, permission |
| 6 | Vague error paths | Inconsistent error handling | Specify: retry? error code? user message? |
| 7 | Scope expands in review | Invisible scope creep | "New scope = new spec or updated spec + re-review" |
| 8 | Implementation as criteria | Spec constrains architecture | Describe behavior, not technology |
| 9 | No scope change owner | Unreviewed drift | Name the person who approves spec changes |
| 10 | Spec filed and forgotten | Spec ≠ reality after week 1 | Update spec when implementation diverges |
Acceptance criteria that can't be tested
"The system should respond quickly." "Errors should be handled gracefully." "The UI should feel intuitive."
These are aspirations, not criteria. A QA engineer cannot write a test case from an aspiration. What actually happens is that each engineer interpreting them makes their own private judgment call — and you end up with three different working definitions of "quickly" living inside the same codebase, none of them written down.
The fix is almost always just adding a number. Not "responds quickly" — "responds within 400ms at the 95th percentile under 200 concurrent users." Now QA can test it. Now you can reject a build that misses it. Now everyone means the same thing when they say it's done.
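A criterion with a number can be checked mechanically. As a minimal sketch (the sample latencies and function names are illustrative; real numbers would come from a load-test tool), here is what "responds within 400ms at the 95th percentile" looks like as an executable check:

```python
def p95(latencies_ms: list[float]) -> float:
    """Return the 95th-percentile latency using the nearest-rank method."""
    ordered = sorted(latencies_ms)
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[rank]

def meets_criterion(latencies_ms: list[float], threshold_ms: float = 400.0) -> bool:
    """True if the measured p95 is within the spec's threshold."""
    return p95(latencies_ms) <= threshold_ms
```

Notice there is no judgment call left: given the same measurements, everyone gets the same pass/fail answer.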
No non-goals section
Non-goals feel redundant when you're writing the spec. "Of course we're not doing CSV import in this release — we're fixing the search bug." You don't write it down because it feels too obvious to bother.
Three weeks later, someone reads the spec and asks "are we handling CSV import here?" The conversation that follows is not short. By the time you've sorted it out, implementation has already started down the wrong path.
Non-goals need to be specific, not vague. "We won't address performance" gets ignored. "We won't optimize query performance for datasets exceeding 100,000 records — tracked separately in TECH-442" actually holds. People stop asking, because the answer is right there.
Non-goals for this feature:

- Bulk CSV import (tracked in TECH-442)
- Audit logging for individual field edits
- Any changes to the read-only user permission model

No rollback plan
This one cost me three days of my life once. We shipped a billing migration on a Friday (yes, I know). Something broke. Someone asked "can we roll this back?" The migration had been written three weeks earlier by an engineer who was now on vacation. The rollback script hadn't been tested. It took us until Monday afternoon to resolve, and I spent most of Saturday on a Zoom call with the DBA.
A rollback plan doesn't have to be complicated. It has to exist. "Feature flag checkout_v2_enabled — flip to false, no migration needed." Or: "Rolling back requires reverting migration 0047, estimated 15 minutes, needs DBA." If writing that sentence reveals the rollback is genuinely complicated, that's good information — simplify the implementation before shipping, don't discover the problem during an incident.
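The flag-based rollback above is one line of branching in code. Here is a minimal sketch of the `checkout_v2_enabled` example — the dict stands in for whatever flag service you actually use, and the checkout functions are placeholders:

```python
# A plain dict standing in for a real feature-flag store.
FLAGS = {"checkout_v2_enabled": True}

def checkout_v1(order):
    return {"version": 1, "order": order}

def checkout_v2(order):
    return {"version": 2, "order": order}

def checkout(order, flags=FLAGS):
    """Route to the new path only when the flag is on.

    Rollback is flipping the flag to False — no migration, no deploy.
    An absent flag defaults to the old, known-good path.
    """
    if flags.get("checkout_v2_enabled", False):
        return checkout_v2(order)
    return checkout_v1(order)
```

The design choice worth noting: the default when the flag is missing is the old path, so a misconfigured flag store fails safe.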
Tribal knowledge embedded in the spec
Some specs are only readable by the people who were in the planning meeting. They reference "the decision we made about the legacy billing system," use internal nicknames for external services, or assume the reader already knows why a particular constraint exists.
There's a simple test: hand the spec to a competent engineer who wasn't involved in any of the planning. Count how many questions they ask before they can start. More than two means the spec is holding context hostage in people's memories instead of on the page. Write the background. Explain the constraint. Link the Slack thread. This isn't bureaucracy — it's how the spec stays useful when team composition changes.
No edge cases documented
Edge cases are where bugs live. Empty carts, null emails, concurrent submissions, expired sessions, records deleted between API calls — these aren't rare. They happen every day in production. They're predictable. And when they're not in the spec, each engineer discovering one makes an independent judgment call about how to handle it. Those calls don't get reviewed. They don't stay consistent. They surface as customer-reported bugs months later.
Edge cases to handle:

- User has zero items in cart when checkout is triggered
- Product goes out of stock between add-to-cart and order submit
- Session expires partway through the checkout flow
- Same order submitted twice within 10 seconds (idempotency)
Listing them doesn't guarantee they're handled correctly — but it creates the conversation before implementation starts, when changing the decision costs nothing.
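To make the last edge case concrete, here is a sketch of the duplicate-submit rule — same order twice within 10 seconds creates one record. The class, key scheme, and in-memory store are illustrative, not a prescribed design:

```python
class OrderService:
    """Deduplicates order submissions within a time window."""

    def __init__(self, window_s: float = 10.0):
        self.window_s = window_s
        self._seen: dict[str, float] = {}  # idempotency key -> first-seen time
        self.created: list[str] = []       # orders actually created

    def submit(self, idempotency_key: str, now: float) -> bool:
        """Return True if a new order was created, False if deduplicated."""
        first = self._seen.get(idempotency_key)
        if first is not None and now - first < self.window_s:
            return False  # duplicate inside the window: no second order
        self._seen[idempotency_key] = now
        self.created.append(idempotency_key)
        return True
```

Writing even this much during spec review forces the question the spec must answer: does a retry after the window create a second order, or not?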
Error paths left to engineering judgment
Related to edge cases, but worth keeping separate: error paths are what happens when external dependencies fail. Payment gateway timeout. Email service returns 503. Database write fails midway through a multi-step transaction.
"Handle errors appropriately" is not a spec. It's a blank check. Your system will handle them — just inconsistently, depending on who wrote which function and what "appropriate" meant to them that week. Retry? Return a specific error code? Show the user a message? Log silently and move on? These are product decisions, not engineering calls. They belong in the spec, not in the heads of whoever is coding that afternoon.
Scope that expands during review
Spec review meetings have a predictable shape. Someone reads a section, thinks of a related feature that "would only take a day," says it out loud, and it gets absorbed into the plan before anyone registers what just happened. Two weeks later, the feature is three weeks of work and nobody can point to where the decision was made.
Good review process separates feedback on what's written from suggestions for what to add. The moment someone says "can we also..." the right response is: "Great — do you want to write that as a separate spec, or update this one and restart the review?" Either answer is fine. Silently incorporating scope changes is not.
Criteria that describe implementation instead of behavior
"The system should use Redis to cache the session." "Background jobs should run via Sidekiq." These look like requirements but they're architecture decisions dressed up as spec criteria.
Behavioral criteria describe what a user or caller observes. Implementation criteria describe how the system achieves it. The spec owns the former; the engineering team owns the latter. When implementation details end up in the spec, you create friction every time a reasonable technical decision diverges from them — the spec looks violated even when the observable behavior is exactly right.
"The session persists across browser restarts for a maximum of 30 days" is a spec. "Store the session in Redis with a 30-day TTL" is an implementation note. First one belongs in the spec. Second one goes in a code comment or an architecture doc.
No named owner for scope changes
Without a clear process for how changes enter the spec during implementation, they arrive informally. A Slack message from a PM adds a new requirement. A designer updates mockups to include something that wasn't in the criteria. An engineer adds a field "while they were in there." None of it goes through review. None of it updates the acceptance criteria. The implementation drifts away from the spec, and nobody notices until QA starts flagging things that were never agreed to.
It doesn't need to be a formal change control process. It needs to exist. "Any scope change during implementation requires updating this spec and notifying the team in #spec-review" is enough. That's a one-sentence policy that prevents the most common cause of implementation confusion.
The spec gets filed and forgotten
The most corrosive habit: writing a careful spec before implementation, then never opening it again once coding starts. By the time the feature ships, the spec reflects Week 1 thinking. The implementation reflects six weeks of discovered constraints, changed requirements, and engineering decisions nobody documented. Now you have two competing records of what was built, and neither one is complete.
A spec is a living document until the code is in production. When implementation reveals that something in the spec was wrong — update the spec. When QA finds a behavior nobody anticipated — update the spec. When a scope decision changes — update the spec. It should be the accurate record of what was built and why, not a historical artifact of what was originally planned.
If your team writes good specs and ignores them for the rest of the delivery cycle, you're getting around 20% of the available value. The other 80% comes from keeping the spec current and treating it as the actual source of truth through shipping — not a document you file after kickoff.
The common thread
Ten different problems, one root cause: decisions that should have been made in writing got made informally, privately, or not at all. None of these are difficult to fix. They're just easy to skip when you're under sprint pressure.
I considered adding an eleventh: specs that are too long. There's a version of this mistake where the spec becomes a 20-page document that nobody reads. But I left it out because, honestly, I've seen it cause less damage than any of the ten above. A spec that's too long still contains the decisions. A spec that's missing a rollback plan doesn't. If I had to choose between a long spec and a short one with no edge cases, I'd take the long one every time.
Author: Daniel Marsh