Should engineers write ADRs for every architecture decision?
Short answer: Yes — every architecture decision worth re-litigating later is worth an ADR. The bar isn't "this is a big decision" — it's "would a new engineer six months from now wonder why we did this?" Most teams under-write ADRs because they over-estimate what counts as "architecture."
What an ADR is, in one sentence
An Architecture Decision Record captures: what we decided, why, what alternatives we rejected, and what consequences we accept. Four sections, immutable once accepted, version-controlled with the code.
The format was popularised by Michael Nygard's 2011 blog post. It's now canonical for medium-to-large engineering organisations and increasingly common at smaller ones. See the ADR glossary entry for the full background.
Why teams under-write ADRs
Three reasons:
1. "It's not big enough to be architecture." Most engineers think ADRs are for the major bets — database engine choice, language migration, service vs monolith. So when they pick a JSON serialisation library, they don't write one. Six months later, a new engineer wonders why and asks. Nobody remembers. ADR-worthy.
2. "I don't have time." A good ADR takes 10-15 minutes to write; it feels like 90. That gap is real: engineers consistently overestimate ADR-writing effort.
3. "We can just decide in Slack." Slack threads decay. Search doesn't find them six months later. The ADR is the artifact; the Slack thread is the conversation that produced it.
The litmus test
Write an ADR when ANY of:
- The decision has multiple defensible alternatives (not an "obvious choice" decision)
- The consequences will outlive the people making the decision
- The decision affects more than one team
- The decision involves a trade-off the team will be tempted to revisit (cost vs latency, complexity vs flexibility)
- The decision constrains future options ("we're committing to X for the next 18 months")
If you can articulate "we picked X because Y, and we rejected Z because W" — that's an ADR. Write it down.
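The litmus test above can be sketched as a checklist that fires on ANY criterion. This is a minimal illustration; the helper name and criterion strings below are made up for the example, not from any ADR tool:

```python
# Hypothetical helper encoding the litmus test: write an ADR if ANY
# criterion holds. All names here are illustrative.

LITMUS_CRITERIA = (
    "multiple defensible alternatives",
    "consequences outlive the decision-makers",
    "affects more than one team",
    "trade-off the team will want to revisit",
    "constrains future options",
)

def needs_adr(answers: dict) -> bool:
    """Return True if any litmus criterion is answered yes."""
    return any(answers.get(c, False) for c in LITMUS_CRITERIA)

decision = {
    "multiple defensible alternatives": True,  # e.g. Kafka vs LISTEN/NOTIFY vs polling
    "affects more than one team": False,
}
print(needs_adr(decision))  # a single "yes" is enough
```

The point of the ANY semantics: the bar is deliberately low, so one defensible-alternatives "yes" already clears it.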
ADR templates that work
The Nygard template is canonical. Four sections:
# ADR-0042: Use Postgres LISTEN/NOTIFY for cache invalidation
## Status
Accepted (2026-04-12, supersedes ADR-0019)
## Context
We need to invalidate read-replica caches when the primary updates.
Current approach (polling every 30s) introduces 30s of inconsistency
and 14% of our DB CPU is spent on the polling queries.
## Decision
We will use Postgres LISTEN/NOTIFY to push cache invalidation events
from the primary to listening replicas. The replica process subscribes
on startup; the publisher emits NOTIFY events from a trigger on every
write to invalidation-relevant tables.
## Alternatives considered
- Kafka-based event stream: rejected — adds a new dependency for a
problem confined to one database.
- Polling at higher frequency: rejected — doesn't fix the consistency
window, just amplifies CPU cost.
- Logical replication slots: rejected — requires more infrastructure
and we only need invalidation events, not full WAL.
## Consequences
- Cache freshness improves from ~30s to ~50ms.
- New dependency on Postgres LISTEN/NOTIFY behaviour (reliable but
bounded — payloads ≤8KB, no delivery guarantee if listener is
disconnected).
- Replica process needs a reconnect-and-rebuild path for missed events.

Five points to enforce:

- Status section is single-line. Proposed / Accepted / Superseded / Deprecated. Date the change. If superseding, link the previous ADR. No multi-paragraph status histories.
- Context is the problem. Not the solution; the problem. What's broken now, what trade-off are we navigating. 3-5 sentences max.
- Decision is one sentence (sometimes two). What we're doing. The longer the decision section, the more it's drifting into rationale.
- Alternatives are MANDATORY. Even if there were only two options ("do this thing vs do nothing"), state both. The alternatives section is the institutional memory of why-not-X.
- Consequences are double-edged. Both positive (faster, cheaper) and negative (new dependency, new failure mode). The negative consequences are usually the most important — that's where future-you needs the warning.
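To make the template cheap to reach for, a small scaffolding script can stamp out a numbered Nygard-format file. This is a sketch under the conventions shown above (the `new_adr` helper and the placeholder text are hypothetical, not part of any ADR tool mentioned here):

```python
# Hypothetical scaffolder for /docs/adr/NNNN-slug.md files in the
# Nygard format. Section headings follow the example above.
import datetime
import pathlib
import re

TEMPLATE = """# ADR-{num:04d}: {title}

## Status
Proposed ({date})

## Context
<what's broken now, 3-5 sentences>

## Decision
<one sentence>

## Alternatives considered
- <option>: rejected because <why>

## Consequences
- <positive>
- <negative>
"""

def new_adr(title: str, adr_dir: pathlib.Path) -> pathlib.Path:
    adr_dir.mkdir(parents=True, exist_ok=True)
    # Next number = highest existing NNNN prefix + 1
    nums = [int(m.group(1)) for p in adr_dir.glob("*.md")
            if (m := re.match(r"(\d{4})-", p.name))]
    num = max(nums, default=0) + 1
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    path = adr_dir / f"{num:04d}-{slug}.md"
    path.write_text(TEMPLATE.format(
        num=num, title=title, date=datetime.date.today().isoformat()))
    return path
```

Running `new_adr("Use Postgres LISTEN/NOTIFY for cache invalidation", pathlib.Path("docs/adr"))` in an empty directory would create `docs/adr/0001-use-postgres-listen-notify-for-cache-invalidation.md` with the four sections pre-filled as prompts.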
Anti-patterns
The Vague ADR. "We decided to be cloud-native." Not actionable, not testable, not falsifiable. Skip it.
The Justification ADR. Written AFTER the decision, justifying what already shipped. Better than nothing, but the alternatives section is usually thin (the author already knows which option won and can no longer remember why the others were considered). Try to write ADRs at decision time.
The Multi-Decision ADR. "We're moving to microservices AND adopting GraphQL AND replacing our auth provider." Three separate ADRs. Each one has its own context, alternatives, consequences. Bundling makes the future supersede-this exercise painful.
The "Lessons Learned" ADR. ADRs aren't blog posts. If you have lessons to share, write a post. The ADR captures the decision; the post captures the wisdom.
The Out-of-Date ADR. ADR says "we use SQS" but the codebase moved to Kafka two years ago. Mark superseded; don't quietly edit. Immutability is the value — superseded ADRs are still part of the institutional memory.
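One way to keep the Superseded discipline honest is to parse the single-line Status entries and flag stale ADRs instead of quietly editing them. A minimal sketch, assuming the status-line grammar from the example above ("Accepted (2026-04-12, supersedes ADR-0019)"); the `parse_status` helper is made up for illustration:

```python
# Hypothetical checker that parses single-line Status entries like
# "Accepted (2026-04-12, supersedes ADR-0019)" so tooling can report
# superseded ADRs rather than letting authors edit them in place.
import re

STATUS_RE = re.compile(
    r"^(Proposed|Accepted|Superseded|Deprecated)"
    r"\s*\((\d{4}-\d{2}-\d{2})(?:,\s*supersedes ADR-(\d{4}))?\)")

def parse_status(line: str) -> dict:
    m = STATUS_RE.match(line.strip())
    if not m:
        raise ValueError(f"malformed status line: {line!r}")
    state, date, superseded = m.groups()
    return {"state": state, "date": date, "supersedes": superseded}

print(parse_status("Accepted (2026-04-12, supersedes ADR-0019)"))
# {'state': 'Accepted', 'date': '2026-04-12', 'supersedes': '0019'}
```

Anything the regex rejects (a multi-paragraph status history, an undated change) is exactly the anti-pattern the single-line rule exists to prevent.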
How AI helps
The model is genuinely good at:
- Drafting 3-5 alternatives for a given context (humans usually generate 1-2)
- Scoring alternatives across dimensions (cost, latency, complexity, team familiarity, future flex)
- Surfacing consequences the author didn't list (especially negative consequences)
- Auto-formatting into the Nygard template
The model is bad at:
- Knowing your team's actual technology preferences (you might hate ORM X for reasons that don't show up in the public docs)
- Knowing what consequences actually matter to YOUR product (cost matters more for some, latency more for others)
- Inventing the actual decision — that's still a human choice
The healthy pattern: human writes the context + initial decision; AI generates alternatives + consequences; human edits both. Total ADR time drops from 15-20 minutes to 5-7 minutes.
Stride's Design module is built around this pattern: 3-5 scored alternatives per decision, ADRs in Nygard format, version history, and a tech radar.
Where ADRs live
Two patterns work:
1. /docs/adr/NNNN-slug.md in the repo. Lives with the code. Reviewed in PR like any other code change. Searchable in git log and via grep. Most engineering organisations use this.
2. In a dedicated tool (Stride, ADR-CLI, Log4Brains). Lives in a structured system. Cross-referenceable to stories, diagrams, defects. The tool dependency is real — but the cross-references are what compound over time.
Don't put ADRs in Confluence. Confluence pages drift, get moved, get archived. ADRs need stable URLs; Confluence isn't great at that.
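For the in-repo pattern, a tiny index script can turn the `/docs/adr` directory into a browsable table. Illustrative only; `adr_index` is a hypothetical helper assuming the `NNNN-slug.md` naming and the headings from the example earlier:

```python
# Hypothetical index generator for the /docs/adr/NNNN-slug.md layout:
# lists number, title, and status word for every ADR in the directory.
import pathlib
import re

def adr_index(adr_dir: pathlib.Path) -> list:
    rows = []
    for path in sorted(adr_dir.glob("[0-9][0-9][0-9][0-9]-*.md")):
        text = path.read_text()
        title = re.search(r"^# ADR-\d+:\s*(.+)$", text, re.M)
        status = re.search(r"^## Status\s+(\S+)", text, re.M)
        rows.append((path.name[:4],
                     title.group(1) if title else "(untitled)",
                     status.group(1) if status else "(unknown)"))
    return rows

for num, title, status in adr_index(pathlib.Path("docs/adr")):
    print(f"{num}  {status:<10}  {title}")
```

Because the files live in git, the same information is also recoverable via `git log` and `grep`; the script just saves the new engineer from opening forty files one by one.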
The compounding payoff
The ROI on ADRs is hard to see in the first year. By year three, every team that wrote them with discipline is ahead. Three concrete payoffs:
1. Faster onboarding. New engineer reads the relevant ADRs in their first week instead of asking the same questions to 4 different senior engineers over 3 months. ~30% faster ramp.
2. Cleaner architectural debates. Disagreements have artifacts to reference. "ADR-0042 says we considered this and rejected it because W" beats "I think we already discussed this somewhere..."
3. Audit-ready compliance. Regulated industries need decision provenance. ADRs are the cleanest format for that. Saves weeks during SOC 2 / ISO / regulatory reviews.
Read next
- The connected delivery graph — why ADRs work better when they're connected to stories and code.
- Stride vs Lucidchart — the architecture-decision tool comparison.
- The ADR and technical-debt glossary entries cover the related concepts.
The honest summary: most teams under-write ADRs because they over-estimate what counts as architecture. The bar is "would a new engineer six months from now wonder why we did this?" If yes, write it. 15 minutes now saves 90 minutes of re-litigation later, every year, until the decision is genuinely obsolete.