Software Spec Template (Free Download)
Most spec templates fail in one of two ways: too lightweight to be useful (a blank ticket with three fields) or so heavy they don't survive the second sprint. This one sits in the middle: six sections, each with a specific job to do.
What this template is trying to do
Templates are only useful when they lower the cost of doing the right thing. The goal here isn't completeness for its own sake — it's to force the decisions that actually prevent rework while skipping the overhead that doesn't matter yet. Every section below exists because something specific goes wrong when it's missing.
| Section | What It Prevents |
|---|---|
| Goal | Building the wrong feature |
| Non-goals | Scope creep |
| Acceptance Criteria | Ambiguous "done" definition |
| Edge Cases | Production bugs from unconsidered inputs |
| Error Paths | Inconsistent error handling |
| Rollback | 2am panic with no exit strategy |
Section 1: Goal
One sentence. Not what you're building — what changes for the user once it's shipped. Who benefits, and how?
Teams that skip this often end up in implementation debates where half the engineers are optimizing for one outcome and the other half for a different one. A one-sentence goal doesn't prevent that debate entirely, but it gives the team something to point at when the drift starts.
Bad: "Add export functionality."
Good: "Allow account managers to export their full contact list to CSV so they can import it into external tools without contacting support."
Section 2: Non-goals
What are we explicitly not building in this spec? This section is as important as the goal — it's what defines the boundary. Without it, every reviewer brings their own assumptions about scope, and implementation review turns into a debate about whether the feature should also do X, Y, and Z.
Non-goals aren't a to-do list for later. They're deliberate, conscious exclusions from this delivery. That distinction matters: it tells future reviewers that the scope was considered and decided, not accidentally omitted.
Example non-goals for the export feature: "We are not building scheduled exports. We are not building Excel format support. We are not building export for custom field types in this release."
Section 3: Acceptance criteria
The most important section. Every criterion needs to be testable — Given/When/Then, with a specific starting state, a named action, and an observable result. If QA would need to ask you clarifying questions to write a test, the criterion isn't done.
Acceptance criteria — Contact CSV export:
- Given I have at least one contact, When I click "Export to CSV", Then a CSV file downloads within 5 seconds with columns: id, name, email, phone, created_at
- Given I have zero contacts, When I click "Export to CSV", Then a CSV downloads with only the header row and a success toast: "Export complete (0 records)"
- Given I have more than 10,000 contacts, When I click "Export to CSV", Then the request queues a background job and I receive an email with a download link within 10 minutes
- Given the export job fails, When I check my email, Then I receive a failure notification with a "Try again" link, and no partial file is sent
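A well-written criterion translates almost mechanically into a test. Here is a minimal sketch of the zero-contacts criterion as a pytest-style test; `export_contacts_csv` is a hypothetical function invented for illustration, not part of any real codebase:

```python
import csv
import io

# Column order comes straight from the acceptance criteria above.
HEADER = ["id", "name", "email", "phone", "created_at"]

def export_contacts_csv(contacts):
    """Hypothetical export function: list of contact dicts -> CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=HEADER, lineterminator="\r\n")
    writer.writeheader()
    for contact in contacts:
        # Null values become empty strings rather than the text "None".
        writer.writerow({col: contact.get(col) or "" for col in HEADER})
    return buf.getvalue()

def test_zero_contacts_exports_header_only():
    # Given I have zero contacts
    contacts = []
    # When I export to CSV
    result = export_contacts_csv(contacts)
    # Then the file contains only the header row
    assert result == "id,name,email,phone,created_at\r\n"
```

Notice the test needed no clarifying questions: the starting state, the action, and the observable result all came from the criterion text.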
Section 4: Edge cases
These are the behaviors in the gray area that engineering will decide about during implementation if the spec doesn't specify them. The goal isn't exhaustive coverage — it's to name every case where a developer would make a judgment call that might be wrong without explicit guidance.
Edge cases — Contact CSV export:
- Contact has null email → include row, email column is empty string
- Contact name contains a comma → field is quoted per RFC 4180
- Concurrent export requests from same user → second request returns 429, message: "Export already in progress"
- User's session expires mid-export → job continues, sends email regardless
- Downloaded file is opened on Windows → line endings must be CRLF
Section 5: Dependencies
What could block or delay this feature? Name it now. External services, other teams' work, feature flags, infrastructure changes. Blockers discovered mid-sprint cost more than blockers named up front. Also note if this feature creates dependencies for others — if another team builds on top of this, flag the contract stability expectation.
Section 6: Rollout plan
Who gets this first, what tells you it's working, and what does rollback look like? For low-risk features, two sentences is fine. Anything touching payments, auth, data migrations, or core user flows needs the full staged rollout with an explicit rollback definition. Don't leave this section blank because it feels like planning for failure — it's actually clarifying what you're shipping.
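The rollback definition pays off most when the stop-loss condition is concrete enough to automate. A minimal sketch of that idea; the threshold and function names are assumptions for illustration, not values from the template:

```python
# Assumed threshold: roll back if more than 2% of exports fail.
# In practice this number comes from the spec's "Stop-loss" line.
STOP_LOSS_ERROR_RATE = 0.02

def should_roll_back(failed_exports: int, total_exports: int) -> bool:
    """True when the observed failure rate crosses the stop-loss line."""
    if total_exports == 0:
        return False  # no traffic yet, nothing to judge
    return failed_exports / total_exports > STOP_LOSS_ERROR_RATE
```

A check like this can run in a dashboard alert or a canary-analysis step; the spec's job is only to name the threshold so the 2am decision is pre-made.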
The complete template
Here is the full template, ready to copy. Delete sections you do not need rather than leaving them blank — a section with "N/A" is noise; just remove it.
```markdown
## Spec: [Feature Name]

**Status:** Draft | In Review | Approved
**Author:** [Name]
**Date:** [YYYY-MM-DD]

### Goal
[One sentence: what outcome does this deliver, and for whom?]

### Non-Goals
- [What are we explicitly not building?]
- [What is deferred to a future spec?]

### Acceptance Criteria
- Given [state] When [action] Then [observable result]
  (add one block per significant scenario)

### Edge Cases
- [scenario] → [expected behavior]
- [scenario] → [expected behavior]

### Dependencies
- [Dependency name]: [what we need from it, and when]

### Rollout Plan
- Release strategy: [feature flag / canary / full release]
- Success signal: [metric or observable condition]
- Rollback: [how to undo — config flip / code revert / data repair]
- Stop-loss: [threshold that triggers rollback consideration]
```
When to use the full template vs a lighter version
Not every feature needs all six sections. A config toggle that hides a UI element from non-admins doesn't need a staged rollout plan. A billing logic change absolutely does. Treat the template as a menu, not a checklist — take what applies, remove what doesn't, and note why you skipped anything someone might expect to see.
Using this template with AI coding tools
The template works as a constraint layer for AI code generators. When you paste the completed spec — goal, non-goals, acceptance criteria, edge cases — into a prompt alongside the implementation request, the AI has explicit boundaries to work within rather than inferred ones. The non-goals section in particular prevents the AI from adding scope that looks reasonable but wasn't agreed on.
Two practical adjustments when pairing this template with AI tools: First, be more explicit in the "Then" clauses than you would for a human reviewer. AI tools take criterion text literally — "returns an error" will produce some form of error handling, but may not match your expected HTTP status, message format, or retry behavior. Specify those exactly. Second, fill in the edge cases section before prompting for implementation. AI tools will handle edge cases not mentioned in the spec in whatever way looks most common — which may not match your system's actual behavior for similar scenarios.
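Pasting the spec as a constraint layer can be as simple as concatenating the filled-in sections ahead of the implementation request. A minimal sketch, assuming the spec sections are kept as plain strings; the section contents below are illustrative:

```python
# Illustrative spec fragments; in practice these come from the
# completed template, copied verbatim.
SPEC = {
    "Goal": "Allow account managers to export their contact list to CSV.",
    "Non-Goals": "- No scheduled exports\n- No Excel format support",
    "Acceptance Criteria": (
        "- Given zero contacts, When I export, "
        "Then a header-only CSV downloads"
    ),
    "Edge Cases": "- Name contains a comma -> field quoted per RFC 4180",
}

def build_prompt(spec: dict, request: str) -> str:
    """Prepend every spec section to the implementation request."""
    sections = "\n\n".join(f"## {name}\n{body}" for name, body in spec.items())
    return f"{sections}\n\n## Task\n{request}"

prompt = build_prompt(SPEC, "Implement the CSV export endpoint.")
```

Keeping the Non-Goals section in the prompt is what gives the tool explicit boundaries; without it, "reasonable-looking" scope additions are the default failure mode.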
Editorial note
This article covers spec templates for software delivery teams. Examples are illustrative engineering scenarios.
- Author details: Daniel Marsh