Regression test
A regression test verifies that previously working functionality still works after a code change. Regression tests are run on every change (CI), every release, or on a schedule, and are the primary defence against re-introducing bugs that were once fixed.
Regression suites accumulate over time — every fixed bug ideally leaves behind a regression test that prevents it from returning. Healthy suites run in minutes (parallelised) and produce actionable failures (a clear message, a single point of failure). Anti-patterns: regression suites that take hours to run (developers stop running them locally), or that produce flaky pass/fail results (teams learn to ignore failures, defeating the suite's purpose).
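The "every fixed bug leaves behind a test" idea can be sketched in a few lines. This is a minimal illustration with a hypothetical `parse_price` function and an invented bug; the key habit is pinning the exact input that triggered the original failure.

```python
# Hypothetical bug: parse_price("1,200") once returned 1 because the
# comma was treated as a decimal point. The fix strips grouping commas,
# and this regression test pins the exact failing input forever.

def parse_price(text: str) -> int:
    """Parse a price string like '1,200' into an integer."""
    return int(text.replace(",", ""))

def test_parse_price_regression_comma_grouping():
    # The precise input from the original bug report.
    assert parse_price("1,200") == 1200
    # A plain case, to guard against over-correcting.
    assert parse_price("42") == 42

test_parse_price_regression_comma_grouping()
```

Naming the test after the bug (or referencing its ticket ID in a comment) keeps the failure message actionable when it trips years later.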
Long-form posts that explore regression testing in depth — when to use it, common failure modes, and how AI helps.
- Are AI-generated test cases worth shipping?
Yes, with a sharp caveat: when they're tied to AC and reviewed by a human. Five categories where AI test generation is great, five anti-patterns to catch. (9 min read)
- Can AI write Gherkin? (yes — here's how)
Yes. AI writes Gherkin well, often better than humans for surface-area coverage. Five wins, five recognisable failure modes, and the prompts that work. (8 min read)
Related terms
- Integration test
An integration test verifies that multiple components work together correctly — a service hitting a real database, two microservices communicating, a frontend talking to a real API.
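One way to make this concrete: exercise a small data-access class against a real database instead of a mock. This is a hedged sketch — the `UserRepo` class is invented for illustration — using Python's built-in `sqlite3` with an in-memory database so the test stays fast and self-contained.

```python
import sqlite3

class UserRepo:
    """Hypothetical repository backed by a real SQLite connection."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name: str) -> int:
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def get(self, user_id: int) -> str:
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0]

def test_user_repo_roundtrip():
    # Real SQL engine, real schema, real queries — no mocks.
    repo = UserRepo(sqlite3.connect(":memory:"))
    user_id = repo.add("ada")
    assert repo.get(user_id) == "ada"

test_user_repo_roundtrip()
```

The same shape scales up: swap `:memory:` for a throwaway Postgres container and the test now catches driver and SQL-dialect bugs a mock would hide.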
- Smoke test
A smoke test is a small, fast set of tests that verify the most critical paths of a system work at all — does the app start, can a user log in, do the top three workflows respond.
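As a sketch of that shape, here is a smoke suite against a hypothetical `App` stand-in (the class and its methods are invented for illustration): one assertion per critical path, failing fast with a plain message.

```python
class App:
    """Stand-in for the real application under test (hypothetical)."""

    def start(self) -> bool:
        return True

    def login(self, user: str, password: str) -> bool:
        return bool(user and password)

    def top_workflow(self) -> str:
        return "ok"

def smoke_test(app: App) -> None:
    # One assertion per critical path; any failure blocks the release.
    assert app.start(), "app failed to start"
    assert app.login("demo", "secret"), "login broken"
    assert app.top_workflow() == "ok", "core workflow broken"

smoke_test(App())
```

The whole suite should run in seconds — depth belongs in the regression suite, not here.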
- Mutation testing
Mutation testing measures test quality by introducing small bugs (mutations) into the source code and checking whether tests catch them.
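The mechanism can be shown by hand (real tools such as mutmut or PIT automate the mutate-and-rerun loop). In this sketch, one comparison operator is flipped and the test suite is rerun against the mutant; a good suite "kills" it.

```python
def original_max(a, b):
    return a if a > b else b

def mutant_max(a, b):
    # Mutation: '>' flipped to '<' — the kind of change a tool injects.
    return a if a < b else b

def run_tests(fn) -> bool:
    """Run the test suite against an implementation; True if all pass."""
    try:
        assert fn(3, 5) == 5
        assert fn(5, 3) == 5
        return True
    except AssertionError:
        return False

assert run_tests(original_max)       # suite passes on the real code
assert not run_tests(mutant_max)     # suite catches (kills) the mutant
```

If a mutant survives — every test still passes — that is evidence the suite never actually checks the mutated behaviour.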