<?xml version="1.0" encoding="UTF-8"?>
<urlset
  xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
  xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"
>
  <url>
    <loc>https://www.stride.page/blog/bpmn-process-mining-celonis-alternative</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=BPMN%20process%20mining%20without%20Celonis%20money&amp;subtitle=Celonis%20charges%20%24100K-%241M%2B%20for%20process%20mining.%20It&apos;s%20genuinely%20good.%20It&apos;s%20also%20wildly%20overpriced%20for%2095%25%20of%20teams.%20This%20is%20the%20lighter-weight%20playbook%20that%20actually%20works.&amp;eyebrow=BLOG</image:loc>
      <image:title>BPMN process mining without Celonis money</image:title>
      <image:caption>Celonis charges $100K-$1M+ for process mining. It&apos;s genuinely good. It&apos;s also wildly overpriced for 95% of teams. This is the lighter-weight playbook that actually works.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/ai-acceptance-criteria</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=How%20AI%20writes%20acceptance%20criteria%20(and%20where%20it%20fails)&amp;subtitle=The%20honest%20map%20of%20where%20AI%20is%20dramatically%20better%20than%20humans%20at%20writing%20acceptance%20criteria%20%E2%80%94%20and%20the%20five%20places%20it%20confidently%20writes%20garbage.%20Plus%20the%20prompts%20that%20work.&amp;eyebrow=BLOG</image:loc>
      <image:title>How AI writes acceptance criteria (and where it fails)</image:title>
      <image:caption>The honest map of where AI is dramatically better than humans at writing acceptance criteria — and the five places it confidently writes garbage. Plus the prompts that work.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/replacing-jira-30-day-playbook</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Replacing%20Jira%3A%20a%2030-day%20playbook&amp;subtitle=The%20honest%2030-day%20playbook%20for%20moving%20off%20Jira.%20Four%20phases%20%E2%80%94%20audit%2C%20parallel%20run%2C%20cutover%2C%20decommission%20%E2%80%94%20plus%20the%20three%20patterns%20where%20this%20doesn&apos;t%20work.&amp;eyebrow=BLOG</image:loc>
      <image:title>Replacing Jira: a 30-day playbook</image:title>
      <image:caption>The honest 30-day playbook for moving off Jira. Four phases — audit, parallel run, cutover, decommission — plus the three patterns where this doesn&apos;t work.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/connected-delivery-graph</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=The%20connected%20delivery%20graph%3A%20one%20source%20of%20truth%20from%20PRD%20to%20prod&amp;subtitle=Most%20teams%20ship%20software%20with%20five%20tools%20that%20don&apos;t%20talk%20to%20each%20other.%20The%20friction%20isn&apos;t%20any%20individual%20tool%20%E2%80%94%20it&apos;s%20the%20missing%20graph%20between%20them.%20This%20is%20the%20case%20for%20one%20conne&amp;eyebrow=BLOG</image:loc>
      <image:title>The connected delivery graph: one source of truth from PRD to prod</image:title>
      <image:caption>Most teams ship software with five tools that don&apos;t talk to each other. The friction isn&apos;t any individual tool — it&apos;s the missing graph between them. This is the case for one connected graph.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/should-engineers-write-adrs</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Should%20engineers%20write%20ADRs%20for%20every%20architecture%20decision%3F&amp;subtitle=Yes%20%E2%80%94%20the%20bar%20isn&apos;t%20&apos;big%20decision&apos;%2C%20it&apos;s%20&apos;would%20a%20new%20engineer%20six%20months%20from%20now%20wonder%20why%20we%20did%20this%3F&apos;%20Most%20teams%20under-write%20ADRs.&amp;eyebrow=BLOG</image:loc>
      <image:title>Should engineers write ADRs for every architecture decision?</image:title>
      <image:caption>Yes — the bar isn&apos;t &apos;big decision&apos;, it&apos;s &apos;would a new engineer six months from now wonder why we did this?&apos; Most teams under-write ADRs.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/ai-generated-test-cases-worth-shipping</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Are%20AI-generated%20test%20cases%20worth%20shipping%3F&amp;subtitle=Yes%2C%20with%20a%20sharp%20caveat%20%E2%80%94%20when%20they&apos;re%20tied%20to%20AC%20and%20reviewed%20by%20a%20human.%20Five%20categories%20where%20AI%20test%20generation%20is%20great%2C%20five%20anti-patterns%20to%20catch.&amp;eyebrow=BLOG</image:loc>
      <image:title>Are AI-generated test cases worth shipping?</image:title>
      <image:caption>Yes, with a sharp caveat — when they&apos;re tied to AC and reviewed by a human. Five categories where AI test generation is great, five anti-patterns to catch.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/roi-of-ai-in-software-delivery</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=What&apos;s%20the%20actual%20ROI%20of%20AI%20in%20software%20delivery%3F&amp;subtitle=%244-%248%20back%20for%20every%20dollar%20spent%20within%206%20months%2C%20for%20most%20teams.%20The%20honest%20math%20from%20real%20data%2C%20not%20the%20deck.&amp;eyebrow=BLOG</image:loc>
      <image:title>What&apos;s the actual ROI of AI in software delivery?</image:title>
      <image:caption>$4-$8 back for every dollar spent within 6 months, for most teams. The honest math from real data, not the deck.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/migrate-from-confluence</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=How%20to%20migrate%20from%20Confluence%20to%20a%20structured%20doc%20tool&amp;subtitle=The%2030-day%20playbook%20for%20leaving%20Confluence.%20The%20hard%20part%20isn&apos;t%20the%20content%20move%20%E2%80%94%20it&apos;s%20deciding%20what%20NOT%20to%20move.&amp;eyebrow=BLOG</image:loc>
      <image:title>How to migrate from Confluence to a structured doc tool</image:title>
      <image:caption>The 30-day playbook for leaving Confluence. The hard part isn&apos;t the content move — it&apos;s deciding what NOT to move.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/can-ai-write-gherkin</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Can%20AI%20write%20Gherkin%3F%20(yes%20%E2%80%94%20here&apos;s%20how)&amp;subtitle=Yes.%20AI%20writes%20Gherkin%20well%2C%20often%20better%20than%20humans%20for%20surface%20area%20coverage.%20Five%20wins%2C%20five%20recognisable%20failure%20modes%2C%20and%20the%20prompts%20that%20work.&amp;eyebrow=BLOG</image:loc>
      <image:title>Can AI write Gherkin? (yes — here&apos;s how)</image:title>
      <image:caption>Yes. AI writes Gherkin well, often better than humans for surface area coverage. Five wins, five recognisable failure modes, and the prompts that work.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/sprint-length-with-ai</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=How%20long%20should%20a%20sprint%20be%20when%20using%20AI%20to%20write%20stories%3F&amp;subtitle=1-week%20sprints%20become%20the%20right%20default%20with%20AI.%20The%202-week%20standard%20was%20calibrated%20to%20slow%20manual%20planning%20%E2%80%94%20AI%20changes%20the%20math.&amp;eyebrow=BLOG</image:loc>
      <image:title>How long should a sprint be when using AI to write stories?</image:title>
      <image:caption>1-week sprints become the right default with AI. The 2-week standard was calibrated to slow manual planning — AI changes the math.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/blog/best-ai-tool-for-sprint-planning</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=What&apos;s%20the%20best%20AI%20tool%20for%20sprint%20planning%3F&amp;subtitle=Stride%20leads%2C%20Linear%20is%20second%2C%20everything%20else%20competes%20on%20a%20different%20axis.%20The%20litmus%20test%3A%20drop%20a%20PRD%20in%20and%20see%20what%20comes%20back%20in%2090%20seconds.&amp;eyebrow=BLOG</image:loc>
      <image:title>What&apos;s the best AI tool for sprint planning?</image:title>
      <image:caption>Stride leads, Linear is second, everything else competes on a different axis. The litmus test: drop a PRD in and see what comes back in 90 seconds.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/jira</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Jira&amp;subtitle=Stride%20vs%20Jira%20%E2%80%94%20for%20teams%20who%20want%20AI%2C%20not%20configuration.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Jira</image:title>
      <image:caption>Jira is the incumbent issue tracker, endlessly configurable. Stride is an AI-native delivery platform that replaces Jira AND adds architect, QA, and process intelligence — with a fraction of the admin surface.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/linear</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Linear&amp;subtitle=Stride%20vs%20Linear%20%E2%80%94%20beautiful%20issues%20AND%20architect%20%2B%20QA%20in%20one%20tool.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Linear</image:title>
      <image:caption>Linear nailed the opinionated issue-tracking UX that Jira forgot. Stride is similarly opinionated on UX but solves a wider problem — same speed and polish, plus architecture decisions, QA coverage, and AI-generated artifacts across every module.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/asana</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Asana&amp;subtitle=Stride%20vs%20Asana%20%E2%80%94%20for%20teams%20who%20want%20AI%20writing%20the%20work%2C%20not%20assigning%20it.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Asana</image:title>
      <image:caption>Asana is a generalist work-management tool that scales from marketing campaigns to engineering. Stride is purpose-built for software delivery — AI that writes acceptance criteria from stories, generates test cases from requirements, and connects PRDs to ADRs to defects on one graph. If you&apos;re shipping software, the depth matters.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/clickup</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20ClickUp&amp;subtitle=Stride%20vs%20ClickUp%20%E2%80%94%20focused%20AI%20for%20delivery%2C%20not%20surface%20sprawl.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs ClickUp</image:title>
      <image:caption>ClickUp ships a feature for every workflow your team has ever asked for — docs, whiteboards, chat, mind maps, time tracking, CRM. Stride is the opposite philosophy: deep AI on four software-delivery surfaces (Plan, Design, Optimize, Verify) and integrations for the rest. Choose ClickUp if breadth matters; choose Stride if your team ships software for a living.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/notion</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Notion&amp;subtitle=Stride%20vs%20Notion%20%E2%80%94%20when%20your%20%22PM%20database%22%20stops%20scaling.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Notion</image:title>
      <image:caption>Notion is a brilliant document-and-database hybrid that early-stage teams stretch into a PM tool. It works — until it doesn&apos;t. Stride is what teams move to when the sprints get serious, the test cases need traceability, and the AI prompts need real software-delivery context instead of free-form pages. We say this with love: Notion is the right answer for the first 18 months.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/monday</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Monday.com&amp;subtitle=Stride%20vs%20Monday.com%20%E2%80%94%20software%20delivery%2C%20not%20work-OS%20slick.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Monday.com</image:title>
      <image:caption>Monday.com built its category as the spreadsheet-meets-CRM &quot;Work OS&quot; — colourful, configurable, and equally at home in marketing, sales ops, and engineering. Stride is the opposite: opinionated, software-delivery-focused, with AI that speaks Gherkin and ADRs. If your engineering team is running on Monday boards, this is the page for you.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/shortcut</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Shortcut&amp;subtitle=Stride%20vs%20Shortcut%20%E2%80%94%20when%20your%20tracker%20needs%20to%20think.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Shortcut</image:title>
      <image:caption>Shortcut (formerly Clubhouse) earned its loyal user base by keeping the tracker simple — fast, opinionated, focused on stories and iterations. Stride is built for teams who appreciate Shortcut&apos;s restraint but want more: AI that writes acceptance criteria and test cases, architecture decisions on the same graph, and process intelligence across the delivery pipeline.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/productboard</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Productboard&amp;subtitle=Stride%20vs%20Productboard%20%E2%80%94%20when%20the%20PM%20tool%20needs%20to%20talk%20to%20engineering.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Productboard</image:title>
      <image:caption>Productboard is a PM favourite for prioritisation and roadmapping — strong opinions on how product strategy should be structured. Stride is built on the premise that strategy is meaningless if the PRDs don&apos;t connect to the stories, ADRs, and tests engineering ships against. Different bet on where the PM workflow should live.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/aha</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Aha!&amp;subtitle=Stride%20vs%20Aha!%20%E2%80%94%20strategic%20roadmaps%20plus%20the%20engineering%20execution.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Aha!</image:title>
      <image:caption>Aha! built its category on strategy-first roadmapping — goals, initiatives, releases, features cascading top-down. Stride is built on the premise that strategy without the connected delivery layer is theatre. Different theory of where the PM tool should optimise.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/trello</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Trello&amp;subtitle=Stride%20vs%20Trello%20%E2%80%94%20when%20boards%20stop%20being%20enough.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Trello</image:title>
      <image:caption>Trello pioneered Kanban-for-everyone — beautifully simple, infinitely flexible, and beloved by small teams. Stride is what teams move to when &apos;flexible&apos; starts feeling like &apos;unstructured&apos;, when sprints get real, and when AI working on actual delivery artifacts starts mattering more than colour-coded cards.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/testrail</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20TestRail&amp;subtitle=Stride%20vs%20TestRail%20%E2%80%94%20when%20QA%20tooling%20stops%20needing%20its%20own%20silo.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs TestRail</image:title>
      <image:caption>TestRail is the incumbent test management tool — strong feature surface, mature, and broadly deployed in QA-heavy organisations. Stride takes a different bet: test management belongs on the same graph as stories, defects, and code, not in a separate tool that maintains its own copy of every story.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/lucidchart</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Lucidchart&amp;subtitle=Stride%20vs%20Lucidchart%20%E2%80%94%20when%20diagrams%20need%20to%20connect%20to%20the%20rest%20of%20delivery.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Lucidchart</image:title>
      <image:caption>Lucidchart is the best general-purpose diagramming tool: smooth canvas, huge shape library, real-time collaboration. Stride takes a narrower position: architecture work for software delivery is more than diagrams — it&apos;s ADRs, scored alternatives, tech radar, fitness functions, and traceability to the stories implementing each decision. Lucidchart draws; Stride decides.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/vs/wrike</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Stride%20vs%20Wrike&amp;subtitle=Stride%20vs%20Wrike%20%E2%80%94%20software%20delivery%2C%20not%20enterprise%20project%20portfolio.&amp;eyebrow=Comparison</image:loc>
      <image:title>Stride vs Wrike</image:title>
      <image:caption>Wrike is built for enterprise project portfolio management (PPM) — heavy reporting, custom workflows, Gantt charts, and time tracking for organisations running 100+ initiatives across departments. Stride is the opposite: opinionated software-delivery focus with AI on real delivery artifacts. If your engineering team has been forcibly moved onto a PPM tool because finance or PMO mandated it, this is your page.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/acceptance-criteria</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Acceptance%20criteria&amp;subtitle=Acceptance%20criteria%20are%20the%20conditions%20a%20story%20must%20satisfy%20to%20be%20considered%20complete%20%E2%80%94%20testable%2C%20bounded%20statements%20describing%20what%20the%20system%20does.%20Good%20AC%20are%20behavioural%20(user-&amp;eyebrow=Glossary</image:loc>
      <image:title>Acceptance criteria</image:title>
      <image:caption>Acceptance criteria are the conditions a story must satisfy to be considered complete — testable, bounded statements describing what the system does. Good AC are behavioural (user-visible outcome), no</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/adr</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=ADR&amp;subtitle=An%20Architecture%20Decision%20Record%20is%20a%20short%20document%20that%20captures%20a%20single%20architecture%20choice%20%E2%80%94%20what%20was%20decided%2C%20why%2C%20what%20alternatives%20were%20rejected%2C%20and%20what%20consequences%20the%20t&amp;eyebrow=Glossary</image:loc>
      <image:title>ADR</image:title>
      <image:caption>An Architecture Decision Record is a short document that captures a single architecture choice — what was decided, why, what alternatives were rejected, and what consequences the team accepts. ADRs ar</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/backlog-refinement</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Backlog%20refinement&amp;subtitle=Backlog%20refinement%20(sometimes%20called%20grooming)%20is%20the%20recurring%20practice%20of%20clarifying%2C%20splitting%2C%20estimating%2C%20and%20prioritising%20stories%20before%20they%20enter%20a%20sprint.%20A%20well-refined%20b&amp;eyebrow=Glossary</image:loc>
      <image:title>Backlog refinement</image:title>
      <image:caption>Backlog refinement (sometimes called grooming) is the recurring practice of clarifying, splitting, estimating, and prioritising stories before they enter a sprint. A well-refined backlog has its top 2</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/bdd</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=BDD&amp;subtitle=Behavior-Driven%20Development%20is%20a%20software%20practice%20that%20builds%20on%20TDD%20by%20writing%20tests%20in%20business-readable%2C%20scenario-style%20language%20(typically%20Gherkin).%20The%20goal%3A%20shared%20understan&amp;eyebrow=Glossary</image:loc>
      <image:title>BDD</image:title>
      <image:caption>Behavior-Driven Development is a software practice that builds on TDD by writing tests in business-readable, scenario-style language (typically Gherkin). The goal: shared understanding between enginee</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/blue-green-deploy</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Blue-green%20deploy&amp;subtitle=Blue-green%20deployment%20maintains%20two%20identical%20production%20environments%20%E2%80%94%20blue%20(current)%20and%20green%20(new).%20Releases%20deploy%20to%20green%3B%20once%20health%20checks%20pass%2C%20traffic%20flips%20from%20blue%20t&amp;eyebrow=Glossary</image:loc>
      <image:title>Blue-green deploy</image:title>
      <image:caption>Blue-green deployment maintains two identical production environments — blue (current) and green (new). Releases deploy to green; once health checks pass, traffic flips from blue to green. Rollback is</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/bpmn</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=BPMN&amp;subtitle=Business%20Process%20Model%20and%20Notation%20is%20the%20ISO%2019510%20standard%20for%20graphically%20representing%20business%20processes%20as%20flowcharts.%20BPMN%20diagrams%20use%20a%20small%20vocabulary%20of%20shapes%20(rectang&amp;eyebrow=Glossary</image:loc>
      <image:title>BPMN</image:title>
      <image:caption>Business Process Model and Notation is the ISO 19510 standard for graphically representing business processes as flowcharts. BPMN diagrams use a small vocabulary of shapes (rectangles for activities, </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/canary-release</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Canary%20release&amp;subtitle=A%20canary%20release%20routes%20a%20small%20percentage%20of%20production%20traffic%20(typically%201-5%25)%20to%20a%20new%20version%2C%20monitors%20error%20rates%20and%20latency%2C%20and%20rolls%20forward%20to%20100%25%20only%20when%20metrics%20st&amp;eyebrow=Glossary</image:loc>
      <image:title>Canary release</image:title>
      <image:caption>A canary release routes a small percentage of production traffic (typically 1-5%) to a new version, monitors error rates and latency, and rolls forward to 100% only when metrics stay healthy. The name</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/capacity-planning</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Capacity%20planning&amp;subtitle=Capacity%20planning%20is%20the%20practice%20of%20estimating%20how%20much%20work%20a%20team%20can%20realistically%20take%20on%20in%20a%20sprint%2C%20accounting%20for%20PTO%2C%20meetings%2C%20on-call%20duty%2C%20and%20other%20non-coding%20time.%20C&amp;eyebrow=Glossary</image:loc>
      <image:title>Capacity planning</image:title>
      <image:caption>Capacity planning is the practice of estimating how much work a team can realistically take on in a sprint, accounting for PTO, meetings, on-call duty, and other non-coding time. Capacity is the upper</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/chaos-engineering</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Chaos%20engineering&amp;subtitle=Chaos%20engineering%20is%20the%20practice%20of%20deliberately%20injecting%20failures%20into%20production%20(or%20production-like)%20systems%20to%20validate%20they%20recover%20gracefully.%20Pioneered%20by%20Netflix%20with%20Cha&amp;eyebrow=Glossary</image:loc>
      <image:title>Chaos engineering</image:title>
      <image:caption>Chaos engineering is the practice of deliberately injecting failures into production (or production-like) systems to validate they recover gracefully. Pioneered by Netflix with Chaos Monkey in 2010, t</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/ci-cd</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=CI%2FCD%20pipeline&amp;subtitle=A%20CI%2FCD%20pipeline%20is%20the%20automated%20chain%20of%20build%20%2F%20test%20%2F%20deploy%20steps%20that%20runs%20on%20every%20code%20change.%20CI%20(continuous%20integration)%20means%20merging%20changes%20to%20a%20shared%20branch%20frequent&amp;eyebrow=Glossary</image:loc>
      <image:title>CI/CD pipeline</image:title>
      <image:caption>A CI/CD pipeline is the automated chain of build / test / deploy steps that runs on every code change. CI (continuous integration) means merging changes to a shared branch frequently with automated te</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/circuit-breaker</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Circuit%20breaker&amp;subtitle=A%20circuit%20breaker%20is%20a%20pattern%20that%20monitors%20calls%20to%20a%20downstream%20service%20and%20&apos;trips&apos;%20(stops%20calling)%20when%20failures%20exceed%20a%20threshold%2C%20returning%20a%20fallback%20or%20error%20immediately.%20&amp;eyebrow=Glossary</image:loc>
      <image:title>Circuit breaker</image:title>
      <image:caption>A circuit breaker is a pattern that monitors calls to a downstream service and &apos;trips&apos; (stops calling) when failures exceed a threshold, returning a fallback or error immediately. After a cool-down, i</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/code-coverage</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Code%20coverage&amp;subtitle=Code%20coverage%20is%20the%20percentage%20of%20source%20code%20executed%20by%20a%20test%20suite%2C%20broken%20down%20by%20statement%2C%20branch%2C%20or%20line.%20High%20coverage%20indicates%20wide%20test%20reach%3B%20it%20does%20NOT%20indicate%20te&amp;eyebrow=Glossary</image:loc>
      <image:title>Code coverage</image:title>
      <image:caption>Code coverage is the percentage of source code executed by a test suite, broken down by statement, branch, or line. High coverage indicates wide test reach; it does NOT indicate test quality — tests c</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/code-review</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Code%20review&amp;subtitle=Code%20review%20is%20the%20practice%20of%20having%20another%20engineer%20evaluate%20proposed%20changes%20before%20they%20merge.%20It%20catches%20bugs%2C%20enforces%20style%20consistency%2C%20distributes%20knowledge%20across%20the%20te&amp;eyebrow=Glossary</image:loc>
      <image:title>Code review</image:title>
      <image:caption>Code review is the practice of having another engineer evaluate proposed changes before they merge. It catches bugs, enforces style consistency, distributes knowledge across the team, and surfaces des</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/continuous-deployment</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Continuous%20deployment&amp;subtitle=Continuous%20deployment%20automatically%20deploys%20every%20change%20that%20passes%20the%20test%20suite%20into%20production%20%E2%80%94%20no%20human%20gate%20between%20merging%20code%20and%20serving%20traffic.%20CD%20assumes%20high%20test%20c&amp;eyebrow=Glossary</image:loc>
      <image:title>Continuous deployment</image:title>
      <image:caption>Continuous deployment automatically deploys every change that passes the test suite into production — no human gate between merging code and serving traffic. CD assumes high test coverage, automated r</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/cycle-time</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Cycle%20time&amp;subtitle=Cycle%20time%20is%20the%20elapsed%20time%20from%20when%20work%20starts%20on%20an%20item%20(first%20commit%2C%20status%20change%20to%20In%20Progress)%20to%20when%20it%20ships%20to%20users.%20It%20measures%20team%20flow%20without%20the%20queue%20nois&amp;eyebrow=Glossary</image:loc>
      <image:title>Cycle time</image:title>
      <image:caption>Cycle time is the elapsed time from when work starts on an item (first commit, status change to In Progress) to when it ships to users. It measures team flow without the queue noise that lead time inc</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/dark-launch</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Dark%20launch&amp;subtitle=A%20dark%20launch%20ships%20a%20feature%20to%20production%20but%20leaves%20it%20disabled%20for%20users%20%E2%80%94%20the%20code%20runs%20(sometimes%20against%20real%20traffic%2C%20sometimes%20against%20shadow%20traffic)%20to%20validate%20behaviou&amp;eyebrow=Glossary</image:loc>
      <image:title>Dark launch</image:title>
      <image:caption>A dark launch ships a feature to production but leaves it disabled for users — the code runs (sometimes against real traffic, sometimes against shadow traffic) to validate behaviour under load before </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/definition-of-done</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Definition%20of%20Done&amp;subtitle=Definition%20of%20Done%20(DoD)%20is%20a%20team-wide%20checklist%20that%20every%20story%20must%20satisfy%20before%20being%20marked%20complete%20%E2%80%94%20typical%20entries%20include%3A%20code%20reviewed%2C%20tests%20passing%2C%20documentation%20&amp;eyebrow=Glossary</image:loc>
      <image:title>Definition of Done</image:title>
      <image:caption>Definition of Done (DoD) is a team-wide checklist that every story must satisfy before being marked complete — typical entries include: code reviewed, tests passing, documentation updated, deployed to</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/definition-of-ready</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Definition%20of%20ready&amp;subtitle=Definition%20of%20Ready%20is%20the%20team&apos;s%20explicit%20checklist%20that%20a%20story%20must%20pass%20before%20it%20can%20enter%20a%20sprint.%20Companion%20to%20Definition%20of%20Done%2C%20but%20at%20the%20entry%20side.%20Typical%20entries%3A%20a&amp;eyebrow=Glossary</image:loc>
      <image:title>Definition of ready</image:title>
      <image:caption>Definition of Ready is the team&apos;s explicit checklist that a story must pass before it can enter a sprint. Companion to Definition of Done, but at the entry side. Typical entries: acceptance criteria a</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/dora-metrics</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=DORA%20metrics&amp;subtitle=The%20four%20DORA%20metrics%20measure%20software-delivery%20performance%3A%20deployment%20frequency%2C%20lead%20time%20for%20changes%2C%20mean%20time%20to%20recovery%20(MTTR)%2C%20and%20change-failure%20rate.%20Defined%20by%20Google&apos;s&amp;eyebrow=Glossary</image:loc>
      <image:title>DORA metrics</image:title>
      <image:caption>The four DORA metrics measure software-delivery performance: deployment frequency, lead time for changes, mean time to recovery (MTTR), and change-failure rate. Defined by Google&apos;s DORA team via the a</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/error-budget</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Error%20budget&amp;subtitle=An%20error%20budget%20is%20the%20allowable%20reliability%20gap%20between%20the%20SLA%20(customer%20contract)%20and%20the%20SLO%20(operational%20target).%20If%20your%20SLO%20is%2099.9%25%20and%20you&apos;re%20meeting%2099.95%25%2C%20you%20have%20a%200.&amp;eyebrow=Glossary</image:loc>
      <image:title>Error budget</image:title>
      <image:caption>An error budget is the allowable reliability gap between the SLA (customer contract) and the SLO (operational target). If your SLO is 99.9% and you&apos;re meeting 99.95%, you have a 0.05% error budget to </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/feature-flag</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Feature%20flag&amp;subtitle=A%20feature%20flag%20is%20a%20runtime%20toggle%20that%20gates%20whether%20a%20code%20path%20is%20active.%20Flags%20decouple%20deployment%20(ship%20the%20code%20dark)%20from%20release%20(turn%20the%20flag%20on%20for%20some%2Fall%20users)%20and%20e&amp;eyebrow=Glossary</image:loc>
      <image:title>Feature flag</image:title>
      <image:caption>A feature flag is a runtime toggle that gates whether a code path is active. Flags decouple deployment (ship the code dark) from release (turn the flag on for some/all users) and enable instant rollba</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/five-whys</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Five%20whys&amp;subtitle=Five%20Whys%20is%20a%20root-cause-analysis%20technique%3A%20ask%20&apos;why%3F&apos;%20five%20times%20in%20a%20row%20(or%20until%20the%20answer%20becomes%20systemic%20rather%20than%20situational)%20to%20find%20the%20underlying%20cause%20of%20a%20proble&amp;eyebrow=Glossary</image:loc>
      <image:title>Five whys</image:title>
      <image:caption>Five Whys is a root-cause-analysis technique: ask &apos;why?&apos; five times in a row (or until the answer becomes systemic rather than situational) to find the underlying cause of a problem. Popularised by To</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/gherkin</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Gherkin&amp;subtitle=Gherkin%20is%20a%20structured%20plain-English%20DSL%20for%20writing%20executable%20acceptance%20tests%2C%20using%20the%20Given%20%2F%20When%20%2F%20Then%20format.%20It%20originated%20with%20Cucumber%20and%20is%20now%20used%20across%20BDD%20fram&amp;eyebrow=Glossary</image:loc>
      <image:title>Gherkin</image:title>
      <image:caption>Gherkin is a structured plain-English DSL for writing executable acceptance tests, using the Given / When / Then format. It originated with Cucumber and is now used across BDD frameworks (SpecFlow, Be</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/idempotency</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Idempotency&amp;subtitle=An%20operation%20is%20idempotent%20if%20calling%20it%20multiple%20times%20has%20the%20same%20effect%20as%20calling%20it%20once.%20In%20distributed%20systems%2C%20idempotent%20operations%20let%20you%20retry%20on%20network%20failure%20witho&amp;eyebrow=Glossary</image:loc>
      <image:title>Idempotency</image:title>
      <image:caption>An operation is idempotent if calling it multiple times has the same effect as calling it once. In distributed systems, idempotent operations let you retry on network failure without duplicating side </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/integration-test</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Integration%20test&amp;subtitle=An%20integration%20test%20verifies%20that%20multiple%20components%20work%20together%20correctly%20%E2%80%94%20a%20service%20hitting%20a%20real%20database%2C%20two%20microservices%20communicating%2C%20a%20frontend%20talking%20to%20a%20real%20API&amp;eyebrow=Glossary</image:loc>
      <image:title>Integration test</image:title>
      <image:caption>An integration test verifies that multiple components work together correctly — a service hitting a real database, two microservices communicating, a frontend talking to a real API. Integration tests </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/lead-time</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Lead%20time&amp;subtitle=Lead%20time%20is%20the%20elapsed%20time%20from%20when%20work%20is%20requested%20(story%20created%2C%20ticket%20filed)%20to%20when%20it&apos;s%20delivered%20(deployed%20to%20production).%20It&apos;s%20a%20DORA%20metric%20measuring%20end-to-end%20del&amp;eyebrow=Glossary</image:loc>
      <image:title>Lead time</image:title>
      <image:caption>Lead time is the elapsed time from when work is requested (story created, ticket filed) to when it&apos;s delivered (deployed to production). It&apos;s a DORA metric measuring end-to-end delivery flow — includi</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/mob-programming</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Mob%20programming&amp;subtitle=Mob%20programming%20is%20a%20practice%20where%20the%20entire%20team%20works%20on%20the%20same%20problem%20at%20the%20same%20time%2C%20on%20the%20same%20screen%2C%20with%20one%20person%20typing%20(the%20&apos;driver&apos;)%20and%20the%20rest%20navigating.%20O&amp;eyebrow=Glossary</image:loc>
      <image:title>Mob programming</image:title>
      <image:caption>Mob programming is a practice where the entire team works on the same problem at the same time, on the same screen, with one person typing (the &apos;driver&apos;) and the rest navigating. Originated at Hunter </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/mttr</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=MTTR&amp;subtitle=Mean%20Time%20To%20Recovery%20is%20the%20average%20elapsed%20time%20between%20an%20incident&apos;s%20detection%20and%20its%20resolution.%20It&apos;s%20one%20of%20the%20four%20DORA%20metrics%20(lead%20time%2C%20deploy%20frequency%2C%20change%20failure&amp;eyebrow=Glossary</image:loc>
      <image:title>MTTR</image:title>
      <image:caption>Mean Time To Recovery is the average elapsed time between an incident&apos;s detection and its resolution. It&apos;s one of the four DORA metrics (lead time, deploy frequency, change failure rate, MTTR) and ind</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/mutation-testing</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Mutation%20testing&amp;subtitle=Mutation%20testing%20measures%20test%20quality%20by%20introducing%20small%20bugs%20(mutations)%20into%20the%20source%20code%20and%20checking%20whether%20tests%20catch%20them.%20If%20a%20test%20suite%20has%2080%25%20coverage%20but%20kills%20&amp;eyebrow=Glossary</image:loc>
      <image:title>Mutation testing</image:title>
      <image:caption>Mutation testing measures test quality by introducing small bugs (mutations) into the source code and checking whether tests catch them. If a test suite has 80% coverage but kills only 40% of mutants,</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/pair-programming</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Pair%20programming&amp;subtitle=Pair%20programming%20has%20two%20engineers%20at%20one%20workstation%2C%20alternating%20between%20driver%20(typing)%20and%20navigator%20(reviewing%2C%20suggesting%2C%20thinking%20ahead).%20Practiced%20widely%20at%20Pivotal%2C%20Thoug&amp;eyebrow=Glossary</image:loc>
      <image:title>Pair programming</image:title>
      <image:caption>Pair programming has two engineers at one workstation, alternating between driver (typing) and navigator (reviewing, suggesting, thinking ahead). Practiced widely at Pivotal, Thoughtworks, and other X</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/planning-poker</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Planning%20poker&amp;subtitle=Planning%20poker%20is%20a%20consensus-based%20estimation%20technique%20where%20each%20engineer%20privately%20picks%20a%20Fibonacci%20card%20(1%2C%202%2C%203%2C%205%2C%208%2C%2013%2C%20%E2%80%A6)%20for%20a%20story%2C%20then%20reveals%20simultaneously.%20Diver&amp;eyebrow=Glossary</image:loc>
      <image:title>Planning poker</image:title>
      <image:caption>Planning poker is a consensus-based estimation technique where each engineer privately picks a Fibonacci card (1, 2, 3, 5, 8, 13, …) for a story, then reveals simultaneously. Divergent estimates trigg</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/postmortem</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Postmortem&amp;subtitle=A%20postmortem%20is%20a%20structured%20retrospective%20on%20an%20incident%20or%20failure%20%E2%80%94%20capturing%20what%20happened%2C%20why%2C%20what%20was%20learned%2C%20and%20what%20will%20change.%20Blameless%20postmortems%20focus%20on%20systemic&amp;eyebrow=Glossary</image:loc>
      <image:title>Postmortem</image:title>
      <image:caption>A postmortem is a structured retrospective on an incident or failure — capturing what happened, why, what was learned, and what will change. Blameless postmortems focus on systemic causes rather than </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/pull-request</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Pull%20request&amp;subtitle=A%20pull%20request%20(PR)%20%E2%80%94%20also%20called%20a%20merge%20request%20in%20GitLab%20%2F%20Bitbucket%20%E2%80%94%20is%20a%20proposal%20to%20merge%20changes%20from%20one%20git%20branch%20into%20another%2C%20typically%20with%20code%20review%20and%20CI%20checks%20&amp;eyebrow=Glossary</image:loc>
      <image:title>Pull request</image:title>
      <image:caption>A pull request (PR) — also called a merge request in GitLab / Bitbucket — is a proposal to merge changes from one git branch into another, typically with code review and CI checks gating the merge. PR</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/refactor</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Refactor&amp;subtitle=Refactoring%20is%20changing%20the%20internal%20structure%20of%20code%20without%20changing%20its%20external%20behaviour.%20The%20goal%20is%20to%20make%20code%20easier%20to%20understand%2C%20modify%2C%20or%20test%20%E2%80%94%20not%20to%20add%20features&amp;eyebrow=Glossary</image:loc>
      <image:title>Refactor</image:title>
      <image:caption>Refactoring is changing the internal structure of code without changing its external behaviour. The goal is to make code easier to understand, modify, or test — not to add features. Refactoring under </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/regression-test</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Regression%20test&amp;subtitle=A%20regression%20test%20verifies%20that%20previously%20working%20functionality%20still%20works%20after%20a%20code%20change.%20Regression%20tests%20are%20run%20on%20every%20change%20(CI)%2C%20every%20release%2C%20or%20on%20a%20schedule%2C%20an&amp;eyebrow=Glossary</image:loc>
      <image:title>Regression test</image:title>
      <image:caption>A regression test verifies that previously working functionality still works after a code change. Regression tests are run on every change (CI), every release, or on a schedule, and are the primary de</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/retention-cohort</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Retention%20cohort&amp;subtitle=A%20retention%20cohort%20is%20a%20group%20of%20users%20who%20joined%20in%20the%20same%20time%20window%20(e.g.%20all%20signups%20in%20week%2014)%2C%20tracked%20over%20time%20to%20measure%20how%20many%20remain%20active.%20Cohort%20analysis%20surfac&amp;eyebrow=Glossary</image:loc>
      <image:title>Retention cohort</image:title>
      <image:caption>A retention cohort is a group of users who joined in the same time window (e.g. all signups in week 14), tracked over time to measure how many remain active. Cohort analysis surfaces whether retention</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/slo</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=SLO&amp;subtitle=A%20Service-Level%20Objective%20is%20a%20target%20reliability%20metric%20for%20a%20service%20%E2%80%94%20typically%20expressed%20as%20a%20percentage%20over%20a%20time%20window.%20For%20example%3A%2099.9%25%20of%20API%20requests%20return%20successfu&amp;eyebrow=Glossary</image:loc>
      <image:title>SLO</image:title>
      <image:caption>A Service-Level Objective is a target reliability metric for a service — typically expressed as a percentage over a time window. For example: 99.9% of API requests return successfully within 200ms ove</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/smoke-test</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Smoke%20test&amp;subtitle=A%20smoke%20test%20is%20a%20small%2C%20fast%20set%20of%20tests%20that%20verify%20the%20most%20critical%20paths%20of%20a%20system%20work%20at%20all%20%E2%80%94%20does%20the%20app%20start%2C%20can%20a%20user%20log%20in%2C%20do%20the%20top%20three%20workflows%20respond.%20&amp;eyebrow=Glossary</image:loc>
      <image:title>Smoke test</image:title>
      <image:caption>A smoke test is a small, fast set of tests that verify the most critical paths of a system work at all — does the app start, can a user log in, do the top three workflows respond. Smoke tests run on e</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/spike</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Spike&amp;subtitle=A%20spike%20is%20a%20timeboxed%20research%20story%20%E2%80%94%20the%20team%20commits%20to%20spending%20a%20fixed%20amount%20of%20effort%20(1%20day%2C%203%20days%2C%20a%20sprint)%20exploring%20a%20question%2C%20with%20a%20defined%20deliverable%20(a%20recommen&amp;eyebrow=Glossary</image:loc>
      <image:title>Spike</image:title>
      <image:caption>A spike is a timeboxed research story — the team commits to spending a fixed amount of effort (1 day, 3 days, a sprint) exploring a question, with a defined deliverable (a recommendation, a prototype,</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/sprint-burndown</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Sprint%20burndown&amp;subtitle=A%20sprint%20burndown%20chart%20shows%20remaining%20work%20in%20a%20sprint%20over%20time%20%E2%80%94%20typically%20Y-axis%20is%20story%20points%20or%20hours%2C%20X-axis%20is%20sprint%20day.%20The%20ideal%20line%20is%20a%20straight%20diagonal%20from%20spr&amp;eyebrow=Glossary</image:loc>
      <image:title>Sprint burndown</image:title>
      <image:caption>A sprint burndown chart shows remaining work in a sprint over time — typically Y-axis is story points or hours, X-axis is sprint day. The ideal line is a straight diagonal from sprint start to sprint </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/sprint-goals</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Sprint%20goals&amp;subtitle=A%20sprint%20goal%20is%20a%20one-sentence%20outcome%20the%20team%20commits%20to%20delivering%20in%20the%20sprint%20%E2%80%94%20not%20a%20list%20of%20stories%2C%20but%20the%20customer%20or%20business%20outcome%20those%20stories%20produce.%20Sprint%20goa&amp;eyebrow=Glossary</image:loc>
      <image:title>Sprint goals</image:title>
      <image:caption>A sprint goal is a one-sentence outcome the team commits to delivering in the sprint — not a list of stories, but the customer or business outcome those stories produce. Sprint goals are what protect </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/story-points</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Story%20points&amp;subtitle=A%20story-point%20estimate%20is%20a%20unit-less%20measure%20of%20relative%20effort%20assigned%20to%20a%20user%20story.%20Points%20capture%20complexity%2C%20uncertainty%2C%20and%20time%20taken%20together%3B%20they&apos;re%20meant%20to%20be%20comp&amp;eyebrow=Glossary</image:loc>
      <image:title>Story points</image:title>
      <image:caption>A story-point estimate is a unit-less measure of relative effort assigned to a user story. Points capture complexity, uncertainty, and time taken together; they&apos;re meant to be compared within a team (</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/story-splitting</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Story%20splitting&amp;subtitle=Story%20splitting%20is%20the%20practice%20of%20breaking%20a%20large%20user%20story%20into%20smaller%20stories%20that%20each%20independently%20deliver%20value.%20The%20smaller%20the%20stories%2C%20the%20smoother%20the%20flow%20%E2%80%94%20and%20the%20&amp;eyebrow=Glossary</image:loc>
      <image:title>Story splitting</image:title>
      <image:caption>Story splitting is the practice of breaking a large user story into smaller stories that each independently deliver value. The smaller the stories, the smoother the flow — and the easier they are to e</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/swarm-pattern</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Swarm%20pattern&amp;subtitle=Swarming%20is%20the%20practice%20of%20having%20multiple%20team%20members%20work%20on%20the%20same%20story%20until%20it&apos;s%20done%2C%20then%20move%20together%20to%20the%20next.%20Swarming%20maximises%20throughput%20(one%20story%20done%20in%20a%20&amp;eyebrow=Glossary</image:loc>
      <image:title>Swarm pattern</image:title>
      <image:caption>Swarming is the practice of having multiple team members work on the same story until it&apos;s done, then move together to the next. Swarming maximises throughput (one story done in a day beats five stori</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/tdd</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=TDD&amp;subtitle=Test-Driven%20Development%20is%20a%20workflow%20where%20you%20write%20a%20failing%20test%20first%2C%20write%20the%20minimum%20code%20to%20make%20it%20pass%2C%20then%20refactor%20%E2%80%94%20repeated%20in%20tight%20loops.%20Popularised%20by%20Kent%20Bec&amp;eyebrow=Glossary</image:loc>
      <image:title>TDD</image:title>
      <image:caption>Test-Driven Development is a workflow where you write a failing test first, write the minimum code to make it pass, then refactor — repeated in tight loops. Popularised by Kent Beck in the early 2000s</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/technical-debt</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Technical%20debt&amp;subtitle=Technical%20debt%20is%20the%20accumulated%20cost%20of%20shortcuts%20taken%20during%20development%20%E2%80%94%20code%20that&apos;s%20harder%20to%20change%20than%20it%20should%20be%2C%20missing%20tests%2C%20outdated%20dependencies%2C%20or%20architectura&amp;eyebrow=Glossary</image:loc>
      <image:title>Technical debt</image:title>
      <image:caption>Technical debt is the accumulated cost of shortcuts taken during development — code that&apos;s harder to change than it should be, missing tests, outdated dependencies, or architectural choices that no lo</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/throughput</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Throughput&amp;subtitle=Throughput%20is%20the%20count%20of%20work%20items%20completed%20per%20unit%20of%20time%20(typically%20per%20week%20or%20per%20sprint).%20Unlike%20velocity%20(which%20is%20points-based%20and%20team-specific)%2C%20throughput%20uses%20raw%20&amp;eyebrow=Glossary</image:loc>
      <image:title>Throughput</image:title>
      <image:caption>Throughput is the count of work items completed per unit of time (typically per week or per sprint). Unlike velocity (which is points-based and team-specific), throughput uses raw story count and is c</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/traceability-matrix</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Traceability%20matrix&amp;subtitle=A%20traceability%20matrix%20maps%20requirements%20to%20the%20test%20cases%20that%20verify%20them%2C%20and%20to%20the%20defects%20discovered%20against%20each.%20Traceability%20lets%20a%20QA%20lead%20answer%20&apos;is%20requirement%20X%20tested%3F&amp;eyebrow=Glossary</image:loc>
      <image:title>Traceability matrix</image:title>
      <image:caption>A traceability matrix maps requirements to the test cases that verify them, and to the defects discovered against each. Traceability lets a QA lead answer &apos;is requirement X tested?&apos; and &apos;which require</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/trunk-based-development</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Trunk-based%20development&amp;subtitle=Trunk-based%20development%20is%20a%20source-control%20workflow%20where%20engineers%20integrate%20small%20changes%20to%20a%20single%20shared%20branch%20(trunk%20%2F%20main)%20at%20least%20once%20per%20day%2C%20gated%20by%20automated%20test&amp;eyebrow=Glossary</image:loc>
      <image:title>Trunk-based development</image:title>
      <image:caption>Trunk-based development is a source-control workflow where engineers integrate small changes to a single shared branch (trunk / main) at least once per day, gated by automated tests and feature flags </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/value-stream</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Value%20stream&amp;subtitle=A%20value%20stream%20is%20the%20end-to-end%20sequence%20of%20activities%20that%20delivers%20a%20product%20or%20feature%20to%20a%20customer.%20Value-stream%20mapping%20(VSM)%20makes%20the%20stream%20visible%2C%20identifies%20waste%20(han&amp;eyebrow=Glossary</image:loc>
      <image:title>Value stream</image:title>
      <image:caption>A value stream is the end-to-end sequence of activities that delivers a product or feature to a customer. Value-stream mapping (VSM) makes the stream visible, identifies waste (handoffs, queues, rewor</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/velocity</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Velocity&amp;subtitle=A%20team&apos;s%20velocity%20is%20the%20average%20number%20of%20story%20points%20completed%20per%20sprint%20over%20a%20rolling%20window%20(typically%20the%20last%203-6%20sprints).%20Velocity%20is%20used%20to%20plan%20future%20sprints%3A%20if%20a%20t&amp;eyebrow=Glossary</image:loc>
      <image:title>Velocity</image:title>
      <image:caption>A team&apos;s velocity is the average number of story points completed per sprint over a rolling window (typically the last 3-6 sprints). Velocity is used to plan future sprints: if a team&apos;s average is 32 </image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/glossary/wip-limit</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=WIP%20limit&amp;subtitle=A%20work-in-progress%20(WIP)%20limit%20caps%20how%20many%20items%20the%20team%20can%20have%20in%20flight%20at%20once%2C%20per%20workflow%20stage.%20WIP%20limits%20force%20teams%20to%20finish%20work%20before%20starting%20new%20work%20%E2%80%94%20the%20cen&amp;eyebrow=Glossary</image:loc>
      <image:title>WIP limit</image:title>
      <image:caption>A work-in-progress (WIP) limit caps how many items the team can have in flight at once, per workflow stage. WIP limits force teams to finish work before starting new work — the central practice of Kan</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/ai-sprint-planning</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=AI%20sprint%20planning&amp;subtitle=Let%20the%20AI%20fill%20the%20sprint.%20You%20spend%20the%20saved%20hours%20on%20actual%20work.&amp;eyebrow=Use%20case</image:loc>
      <image:title>AI sprint planning</image:title>
      <image:caption>Most sprint planning meetings spend 60% of the time on capacity math the AI can do in 2 seconds. Stride&apos;s Plan module computes realistic capacity from PTO + meetings + historical velocity, then proposes a draft sprint your team can edit instead of author from scratch.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/ai-prd-generation</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=AI%20PRD%20generation&amp;subtitle=From%20a%204-page%20PRD%20to%2015%20stories%2C%2060%20acceptance%20criteria%2C%20and%20a%20sprint%20draft%20in%2090%20seconds.&amp;eyebrow=Use%20case</image:loc>
      <image:title>AI PRD generation</image:title>
      <image:caption>Translating a PRD into actionable stories takes most PMs a half-day per epic. Stride generates the epic, story breakdown, acceptance criteria, test cases, and dependency graph in under two minutes — leaving the PM to edit instead of author.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/bpmn-process-mining</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=BPMN%20process%20mining&amp;subtitle=Find%20the%20bottleneck%20in%20your%20delivery%20pipeline%20without%20paying%20Celonis%20money.&amp;eyebrow=Use%20case</image:loc>
      <image:title>BPMN process mining</image:title>
      <image:caption>Celonis is great. It also starts at $200K/year. For most software-delivery teams, the diagnostic value of process mining (finding where lead time is actually being spent) doesn&apos;t require enterprise BI. Stride mines your Jira/Linear/Stride events into BPMN diagrams and bottleneck heatmaps in one day.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/legacy-modernization</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Legacy%20modernization&amp;subtitle=AI%20reads%20decades-old%20code%20so%20your%20modernization%20plan%20stops%20being%20a%20guess.&amp;eyebrow=Use%20case</image:loc>
      <image:title>Legacy modernization</image:title>
      <image:caption>Modernizing legacy systems (COBOL, mainframe, old .NET) is one of the highest-stakes engineering investments. Stride&apos;s Legacy Intelligence reads the legacy code, extracts implicit requirements, generates a phased modernization roadmap, and computes payback math grounded in real LOC and complexity.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/architecture-decisions</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Architecture%20decisions&amp;subtitle=AI-scored%20architecture%20options%2C%20ADRs%20with%20rationale%2C%20and%20a%20tech%20radar%20that%20updates%20as%20decisions%20ship.&amp;eyebrow=Use%20case</image:loc>
      <image:title>Architecture decisions</image:title>
      <image:caption>Most engineering organisations make a hundred architecture decisions a year, document maybe ten, and re-litigate the same trade-offs every 18 months. Stride&apos;s Design module generates 3-5 scored alternatives per decision, captures the chosen option as an ADR, and maintains a living tech radar.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/ai-test-generation</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=AI%20test%20generation&amp;subtitle=Test%20cases%20written%20by%20AI%20from%20your%20stories%20%E2%80%94%20with%20traceability%20that%20maintains%20itself.&amp;eyebrow=Use%20case</image:loc>
      <image:title>AI test generation</image:title>
      <image:caption>Test management as a separate discipline (TestRail + Jira + spreadsheets) was a workaround for tools that couldn&apos;t see across stories and tests. Stride generates test cases from AC at story-creation time, maintains the traceability matrix automatically, and predicts which areas are likely to regress.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/defect-prediction</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Defect%20prediction&amp;subtitle=Know%20which%20areas%20of%20the%20codebase%20are%20likely%20to%20break%20before%20they%20do.&amp;eyebrow=Use%20case</image:loc>
      <image:title>Defect prediction</image:title>
      <image:caption>Defects don&apos;t distribute uniformly — most defects cluster in a small fraction of modules. Stride&apos;s defect-prediction model scores every module by risk and tells reviewers which PRs deserve careful eyes. It&apos;s the same model that&apos;s used to surface &apos;review carefully&apos; at PR time and to plan regression-test investments.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/release-notes-automation</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Release%20notes%20automation&amp;subtitle=Release%20notes%20written%20from%20the%20stories%20actually%20shipped%20%E2%80%94%20not%20from%20memory.&amp;eyebrow=Use%20case</image:loc>
      <image:title>Release notes automation</image:title>
      <image:caption>Release notes are typically written by someone reading through closed Jira tickets and rephrasing them into user-friendly language. Stride does the rephrasing automatically: every release surfaces a draft from the merged stories, in your team&apos;s voice, with the option to edit before shipping.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/team-onboarding</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Team%20onboarding&amp;subtitle=New%20engineers%20ramp%20in%20days%2C%20not%20weeks%20%E2%80%94%20by%20reading%20the%20graph%2C%20not%20Slack%20history.&amp;eyebrow=Use%20case</image:loc>
      <image:title>Team onboarding</image:title>
      <image:caption>New engineers spend their first month searching Slack history for &apos;why does this work this way?&apos; and asking senior engineers who&apos;d rather be coding. Stride lets the AI answer those questions from the actual project graph — ADRs, stories, dependencies, and decisions — instead of from your senior engineers&apos; time.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/use-cases/quality-gates</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Quality%20gates&amp;subtitle=Definition%20of%20Done%20enforced%20by%20the%20tool%2C%20not%20by%20team%20memory.&amp;eyebrow=Use%20case</image:loc>
      <image:title>Quality gates</image:title>
      <image:caption>Most teams have a Definition of Done that lives on a Confluence page nobody reads. Stride enforces it: stories can&apos;t transition to Done until AC are verified, tests pass, code is reviewed, and any other team-specific gates are met. Drift between intent and reality drops to zero.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/learn/sprint-planning</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Sprint%20planning&amp;subtitle=Plan%2C%20run%2C%20and%20improve%20sprints%20with%20AI%20in%20the%20loop.&amp;eyebrow=Learn</image:loc>
      <image:title>Sprint planning</image:title>
      <image:caption>Everything teams need to plan, run, and improve sprints — capacity, story sizing, sprint goals, retrospectives, and burndown.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/learn/sprint-planning/capacity-planning</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Capacity%20planning%20that%20survives%20reality&amp;subtitle=Naive%20capacity%20is%20team-size%20%C3%97%20sprint-days.%20Realistic%20capacity%20is%2050-65%25%20of%20that.%20Why%2C%20and%20how%20to%20compute%20it%20for%20your%20team.&amp;eyebrow=Learn</image:loc>
      <image:title>Capacity planning that survives reality</image:title>
      <image:caption>Naive capacity is team-size × sprint-days. Realistic capacity is 50-65% of that. Why, and how to compute it for your team.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/learn/sprint-planning/story-sizing</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Story%20sizing%20without%20flame%20wars&amp;subtitle=Fibonacci%20vs%20t-shirt%2C%20when%20to%20estimate%2C%20when%20to%20stop%2C%20and%20how%20AI%20helps%20without%20taking%20over%20the%20room.&amp;eyebrow=Learn</image:loc>
      <image:title>Story sizing without flame wars</image:title>
      <image:caption>Fibonacci vs t-shirt, when to estimate, when to stop, and how AI helps without taking over the room.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/learn/sprint-planning/sprint-goals</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Sprint%20goals%20worth%20committing%20to&amp;subtitle=The%20difference%20between%20&apos;complete%20these%2012%20stories&apos;%20and%20&apos;deliver%20the%20multi-tenant%20CSV%20export&apos;.%20Goals%20teams%20actually%20care%20about.&amp;eyebrow=Learn</image:loc>
      <image:title>Sprint goals worth committing to</image:title>
      <image:caption>The difference between &apos;complete these 12 stories&apos; and &apos;deliver the multi-tenant CSV export&apos;. Goals teams actually care about.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/learn/sprint-planning/retrospectives</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Retrospectives%20that%20change%20behavior&amp;subtitle=Formats%20that%20work%20(Mad%2FSad%2FGlad%2C%20Sailboat%2C%204Ls%2C%20Lean%20Coffee)%2C%20formats%20that%20don&apos;t%2C%20and%20the%20action-item%20discipline%20that%20turns%20retros%20into%20actual%20change.&amp;eyebrow=Learn</image:loc>
      <image:title>Retrospectives that change behavior</image:title>
      <image:caption>Formats that work (Mad/Sad/Glad, Sailboat, 4Ls, Lean Coffee), formats that don&apos;t, and the action-item discipline that turns retros into actual change.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://www.stride.page/learn/sprint-planning/burndown-charts</loc>
    <image:image>
      <image:loc>https://www.stride.page/api/og?title=Burndown%20charts%20and%20what%20they%20actually%20tell%20you&amp;subtitle=The%20false-positive%20trap%2C%20the%20right%20metrics%20next%20to%20burndown%2C%20and%20what%20burndown%20does%20NOT%20show.%20Plus%20the%20patterns%20that%20mean%20something.&amp;eyebrow=Learn</image:loc>
      <image:title>Burndown charts and what they actually tell you</image:title>
      <image:caption>The false-positive trap, the right metrics next to burndown, and what burndown does NOT show. Plus the patterns that mean something.</image:caption>
    </image:image>
  </url>
</urlset>