Story-point estimator
Five questions in. A Fibonacci estimate out (1, 2, 3, 5, 8, or 13). Calibrated against 3,000+ stories from real teams.
How it works
Pick the option for each question that best describes the story. The tool sums a weighted score and maps it to the nearest Fibonacci point. You'll see your estimate as soon as you answer all five questions. The result includes a recommendation (commit it, split it, spike it first) based on the total.
No signup. No tracking. Pure client-side; nothing leaves your browser.
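The weighted-sum-to-nearest-Fibonacci mapping described above can be sketched in a few lines of client-side TypeScript. The answer scale (1–3 per question), the per-dimension weights, and the recommendation thresholds below are illustrative assumptions, not the tool's calibrated values:

```typescript
// Illustrative sketch only: weights and thresholds are assumptions,
// not the calibrated values the tool actually uses.
const FIB = [1, 2, 3, 5, 8, 13];

// One score (1 = low, 2 = medium, 3 = high) per dimension.
interface Answers {
  complexity: number;
  size: number;
  unknowns: number;
  dependencies: number;
  testing: number;
}

// Hypothetical weights: complexity and unknowns count a bit more.
const WEIGHTS: Answers = {
  complexity: 1.5,
  size: 1.0,
  unknowns: 1.5,
  dependencies: 1.0,
  testing: 1.0,
};

function estimate(a: Answers): { points: number; recommendation: string } {
  // Weighted sum across the five dimensions.
  const total =
    a.complexity * WEIGHTS.complexity +
    a.size * WEIGHTS.size +
    a.unknowns * WEIGHTS.unknowns +
    a.dependencies * WEIGHTS.dependencies +
    a.testing * WEIGHTS.testing;

  // Snap the total to the nearest Fibonacci point.
  const points = FIB.reduce((best, f) =>
    Math.abs(f - total) < Math.abs(best - total) ? f : best
  );

  // Hypothetical recommendation rule: small stories are committable,
  // big-and-unknown stories want a spike, big-but-known stories a split.
  const recommendation =
    points <= 5 ? "commit it" : a.unknowns >= 3 ? "spike it first" : "split it";

  return { points, recommendation };
}
```

For example, a story scored low (1) on every dimension sums to 6 under these weights and snaps to a 5-point estimate; one scored high (3) everywhere sums to 18, snaps to 13, and gets flagged for a spike.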
1. How complex is the implementation logic?
Complexity is about how many branches, edge cases, and state transitions the engineer has to reason about. A single CRUD endpoint is low; a multi-tenant permissions rewrite is high.
2. How large is the change?
Pure size — lines of code, files touched, surface area. A bug fix in one function is small; a new module is medium; a cross-module refactor is large. Size doesn't equal complexity, so we ask separately.
3. How many unknowns does the team have going in?
Unknowns drag estimates more than any other factor — they're risk you can't precisely price. A familiar pattern with clear acceptance criteria is low risk; a story that says 'figure out how to integrate X' is high.
4. How many external dependencies does this touch?
External dependencies — third-party APIs, other teams, infrastructure changes — add coordination cost. Story estimates often miss this because dependency time isn't 'work' the engineer does, but it's wall-clock time on the sprint.
5. How much testing effort is needed?
Testing effort is part of every story by default — Definition of Done usually requires it. But some stories need deep test scaffolding (new integration suites, new mocks, new fixtures) that doubles the work.
Why these five dimensions?
Most teams that try to estimate effort directly fall into one of two traps: they collapse everything into "hours" (which compresses risk into a fake-precise number) or they vote on points without a shared rubric (which produces inconsistent estimates).
The five dimensions here are the ones that, in our data, independently shift the team's actual delivery time: complexity (cognitive load), scope (raw size), unknowns (risk you can't precisely price), dependencies (wall-clock vs work-time), and testing (often missed at estimate time). Sum them with simple weights and you get a calibrated heuristic.
For the full framework — including how to run estimation sessions, when to NOT estimate, and how AI helps without taking over — read the sprint-planning hub article.
Want this built in?
Stride's Plan module estimates story points from the story's own text + your team's velocity. No form required; AI does the work in 2 seconds.
See AI sprint planning