BPMN process mining without Celonis money
Celonis charges enterprises roughly $100K to $1M+ per year to mine business processes. The product is genuinely good. It's also wildly overpriced for about 95% of the teams that want process mining. This post is the lighter-weight playbook for that 95%.
Process mining is the practice of reconstructing real process behavior from event logs — system records of who did what when. The output is a BPMN diagram showing how the process actually flows, where the bottlenecks live, and which variants exist. It's diagnostic, not prescriptive. You read it the way a doctor reads an X-ray.
If your team is shipping software, you already have all the event logs you need (Jira/Linear/Stride transitions, Git events, deploy logs). You don't need Celonis. You need a tool that reads the data you already have and draws the diagram.
What process mining actually is
Three concepts. Learn them in 10 minutes; the rest is engineering.
The event log
Every process emits events. For software delivery: "story moved to In Progress at 09:14 by Alice." Each event has:
- Case ID — the entity the process is about (the story, in our example).
- Activity name — what happened ("moved to In Progress").
- Timestamp — when.
- Resource — who or what (Alice; or "auto-bot" for system events).
That's it. Four columns. A CSV with these columns is a valid event log.
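To make that concrete, here is a minimal valid event log; the story ID, state names, and people are illustrative:

```csv
case_id,activity,timestamp,resource
STORY-42,Moved to In Progress,2024-05-13T09:14:00,alice
STORY-42,Moved to In Review,2024-05-14T16:02:00,alice
STORY-42,Moved to Done,2024-05-16T11:47:00,bob
```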
The process model
Given the event log, you can reconstruct the underlying flow. If 80% of stories go Backlog → To Do → In Progress → In Review → Done, that's the dominant variant. The other 20% take detours: In Progress → Blocked → In Progress, or In Review → To Do (rework), or skip steps entirely.
The output is a BPMN diagram (Business Process Model and Notation — the standard graphical syntax for process flow). Nodes are activities, arrows are transitions, numbers on the arrows are case counts or average durations.
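Reconstructing variants from a log is mechanical enough to sketch in a few lines of Python. This is a toy illustration, not Stride's implementation; the log and state names are made up:

```python
from collections import Counter

def variants(events):
    """events: list of (case_id, activity, timestamp) tuples.
    Returns a Counter mapping each activity sequence to the number of
    cases that followed it; most_common(1) is the dominant variant."""
    by_case = {}
    # Sort by case, then time, so each case's path is in order.
    for case_id, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
        by_case.setdefault(case_id, []).append(activity)
    return Counter(tuple(path) for path in by_case.values())

log = [
    ("S-1", "To Do", 1), ("S-1", "In Progress", 2), ("S-1", "Done", 3),
    ("S-2", "To Do", 1), ("S-2", "In Progress", 2), ("S-2", "Done", 3),
    ("S-3", "To Do", 1), ("S-3", "Blocked", 2), ("S-3", "Done", 3),
]
print(variants(log).most_common(1))
# dominant variant: To Do -> In Progress -> Done, followed by 2 of 3 cases
```

Real mining algorithms (alpha miner, heuristic miner) do more than count paths, but variant frequency is the 80% case.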
The bottleneck
The interesting part isn't the dominant path. It's the wait times. If In Review averages 3.2 days but the actual review work takes 20 minutes, your bottleneck is review scheduling, not review effort. Process mining surfaces this gap automatically.
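That wait-time gap can be computed directly from a raw log. A minimal sketch with toy data, timestamps in hours:

```python
from collections import defaultdict
from statistics import mean

def avg_dwell(events):
    """events: list of (case_id, activity, timestamp) tuples, numeric
    timestamps. Returns the average time cases spend in each activity
    before the next transition."""
    by_case = defaultdict(list)
    for case_id, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
        by_case[case_id].append((activity, ts))
    dwell = defaultdict(list)
    for steps in by_case.values():
        # Pair each step with its successor: dwell = time until next event.
        for (act, t0), (_, t1) in zip(steps, steps[1:]):
            dwell[act].append(t1 - t0)
    return {act: mean(ds) for act, ds in dwell.items()}

log = [
    ("S-1", "In Progress", 0), ("S-1", "In Review", 6), ("S-1", "Done", 82),
    ("S-2", "In Progress", 0), ("S-2", "In Review", 10), ("S-2", "Done", 64),
]
print(avg_dwell(log))  # In Review dwell dwarfs In Progress dwell
```

If "In Review" dwell is measured in days while the review itself took minutes, the bottleneck is scheduling, not effort.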
That's the entire conceptual surface. From here on, it's just engineering.
Who actually needs process mining
Three honest profiles:
1. Software-delivery teams with sprint-length anxiety. "Why does it take us 6 weeks to ship a 5-day feature?" Process mining shows you the answer is usually a 4-week wait at one specific transition (often code review or QA scheduling). Without the diagnostic, you guess and tune the wrong thing.
2. Operations teams with handoff-heavy workflows. Customer onboarding, claims processing, expense approval. Anywhere a case touches 4+ people, process mining is more useful than dashboards.
3. Compliance / audit teams. Demonstrating that a process actually follows its documented variant is what Celonis sells. If you're regulated and need this with evidentiary weight, you may need Celonis. If you just want internal confidence, a lighter tool is fine.
Everyone else mostly just needs a better dashboard. Process mining is overkill if your process is step1 → step2 → done and there's no variance.
Why Celonis is the wrong default
Two reasons, both pragmatic.
Cost. Celonis pricing isn't public, but the data points we've seen from the procurement side are: $40K-$80K/year for a small team license, $200K-$500K/year for typical mid-market deployments, and $1M+ for enterprise. That's per year. The price scales with the number of cases (events processed), so a high-volume process gets very expensive.
Time-to-value. Celonis is a full data-warehouse-class product. Implementation cycles are typically 3-6 months for the first useful diagram. Most teams pay 6 months of license before they see actionable output.
For 95% of software-delivery teams, the math is brutal: you spend $200K and 6 months to learn your bottleneck is code review. You could've spent $0 and a week to learn the same thing with a CSV export.
Stride's approach
Specifics, because we ship the tool.
Stride reads event logs from three places automatically:
- Internal: every story state transition, every comment, every assignment change. No setup.
- GitHub (if connected): PR opened/reviewed/merged events.
- CSV upload: arbitrary external logs. The schema is "case_id, activity, timestamp, resource."
From these, Stride auto-generates a BPMN diagram per process. The default view: dominant path highlighted, variant paths in grey, transition arrows annotated with average duration. Click any transition to see the case-level breakdown.
The bottleneck heatmap is a second view: every transition colored by wait time relative to its work time. Red transitions are wait-heavy (the bottleneck candidates). Blue transitions are work-heavy (probably not your problem).
Time-to-value: hours, not months. Connect the integration, look at the diagram, find the red. We've seen teams find a 3-day average wait at one specific transition in their first 20 minutes with the tool, and recover 15+ person-days per sprint once the fix landed.
Pricing parity
Stride Pro is $29/seat/month. The Optimize module (where process mining lives) is bundled. There is no $200K starting price. There is no 6-month implementation.
This isn't "Celonis cheaper." Celonis is genuinely deeper — handles harder data sources, more variants, more compliance evidence. For a Fortune 100 with a regulatory mandate to mine 50 processes, the depth matters.
For everyone else, the depth is wasted.
A 1-day playbook
For teams that want to do process mining in Stride this week:
Hour 1: Pick a case type
What process do you want to understand? Common starting points:
- Sprint cycle time: case = story. Activities = state transitions. Best for engineering teams.
- PR lead time: case = PR. Activities = opened, reviewed, requested-changes, approved, merged. Best for code-review-heavy teams.
- Onboarding cycle: case = new hire. Activities = each onboarding step. Best for HR/people teams.
Pick one. Don't try to mine three at once on day 1.
Hours 2-3: Pull the event log
For Stride-native processes: already done. Optimize → New Process → pick the case type from a dropdown.
For external sources: CSV upload. Schema is case_id,activity,timestamp,resource. Most existing systems can produce this with a SQL query or a small script. We have a template per common integration.
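If you're scripting the export yourself, the shape is simple. A hedged Python sketch, where `rows` stands in for whatever your system's query or API returns; the source field names are hypothetical:

```python
import csv
import io

# Hypothetical rows pulled from an external system (a tracker API or a
# SQL query); only the mapping to the four columns matters.
rows = [
    {"issue": "ONB-1", "event": "Account created", "at": "2024-05-01T09:00:00", "by": "hr-bot"},
    {"issue": "ONB-1", "event": "Laptop shipped", "at": "2024-05-02T14:30:00", "by": "it-desk"},
]

buf = io.StringIO()  # swap for open("events.csv", "w", newline="") to write a file
writer = csv.writer(buf)
writer.writerow(["case_id", "activity", "timestamp", "resource"])  # the upload schema
for r in rows:
    writer.writerow([r["issue"], r["event"], r["at"], r["by"]])
print(buf.getvalue())
```

Keep timestamps in ISO 8601 so ordering within a case is unambiguous.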
Hours 4-6: Read the diagram
The first read is always wrong. You'll see the dominant variant and conclude "yep, that's how it works." Spend the next hour looking at deviations:
- What % of cases take the dominant path? (Less than 60% is a signal.)
- Where do cases loop back? (Rework loops are a known bottleneck class.)
- Where do cases dead-end? (Cancelled work is sometimes a signal of intake quality.)
The bottleneck heatmap tells you where to look. Don't trust your intuition about where the bottleneck is — your intuition is wrong. The data isn't.
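If you want to sanity-check these deviation questions against a raw CSV before opening the tool, both are scriptable. A sketch on toy data, not Stride's internals:

```python
from collections import Counter

def deviation_stats(events):
    """events: list of (case_id, activity, timestamp) tuples.
    Returns the share of cases on the dominant variant and the set of
    cases containing a rework loop (any activity visited twice)."""
    by_case = {}
    for case_id, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
        by_case.setdefault(case_id, []).append(activity)
    variant_counts = Counter(tuple(path) for path in by_case.values())
    dominant_share = variant_counts.most_common(1)[0][1] / len(by_case)
    # A repeated activity in a case's path means the case looped back.
    loops = {c for c, path in by_case.items() if len(set(path)) < len(path)}
    return dominant_share, loops

log = [
    ("S-1", "To Do", 1), ("S-1", "In Progress", 2), ("S-1", "Done", 3),
    ("S-2", "To Do", 1), ("S-2", "In Progress", 2), ("S-2", "Done", 3),
    ("S-3", "To Do", 1), ("S-3", "In Progress", 2), ("S-3", "In Review", 3),
    ("S-3", "In Progress", 4), ("S-3", "Done", 5),
]
share, loops = deviation_stats(log)
print(share, loops)  # 2 of 3 cases on the dominant path; S-3 loops back
```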
Hours 7-8: Pick one intervention
The trap with process mining is trying to fix everything you see. The discipline: pick ONE transition with the highest wait time. Design ONE intervention. Re-measure in 2 weeks.
Common interventions:
- Wait-heavy review transitions: assign reviewers automatically based on file ownership, add a "review SLA" alert.
- Rework loops: tighten acceptance criteria (see our AC post).
- Long backlog dwell: prune the backlog quarterly; stale stories never ship.
One change. Re-measure. Repeat.
ROI math (back-of-envelope)
For a 20-engineer team running 2-week sprints:
- Sprint capacity = ~100 story points / sprint
- Average story = 3 days end-to-end
- Bottleneck wait time (typical) = 1.5 days per story
- Stories per quarter = ~150
- Wait time eliminated by one mining-driven intervention = 0.5-1 day per story
- Hours saved per quarter = 150 × 0.75 day × 8 hours = 900 hours
- At $100/hr blended cost: $90K saved per quarter
Stride Pro for 20 engineers: $580/month, about $7K/year. The math is absurdly in Stride's favor.
These numbers come from teams we've worked with. Yours will vary — sometimes higher (heavy review-bottleneck teams), sometimes lower (already-optimized teams). The order of magnitude is consistent.
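To rerun the envelope with your own numbers, the arithmetic is one function; defaults mirror the figures above:

```python
def quarterly_savings(stories_per_quarter, days_saved_per_story,
                      hours_per_day=8, blended_rate=100):
    """Back-of-envelope hours and dollars saved per quarter from
    shaving wait time off each story."""
    hours = stories_per_quarter * days_saved_per_story * hours_per_day
    return hours, hours * blended_rate

hours, dollars = quarterly_savings(150, 0.75)  # 0.75 = midpoint of 0.5-1 day
print(hours, dollars)  # 900.0 hours, 90000.0 dollars per quarter
```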
When to graduate to Celonis
Three signals that you've outgrown lightweight process mining:
- You need to mine 20+ distinct processes simultaneously, each with its own data source.
- You need certified compliance evidence that processes match their documented variant (SOX, FDA 21 CFR Part 11, GxP).
- You're processing 10M+ events per process and need to handle the data engineering at warehouse scale.
If you're not hitting any of those, you don't need Celonis. You need 1 day with Stride's Optimize module.
What to read next
For the broader story on how Stride connects process data to the rest of the delivery graph (so the AI can answer "which stories are in the bottleneck transition right now?"), the connected delivery graph post is the thesis explainer. For procurement-stage comparison against the tools that try to be a tracker and a process tool, see Stride vs Jira (Jira has zero native process mining; the Marketplace add-ons that claim to do it are roughly $30K/year and not great).
Process mining isn't magic. It's just looking at the data you already have, in a shape that surfaces what you can't see in a dashboard. The procurement question isn't "do we want Celonis" — it's "is our process worth one day of investigation." For most teams, the answer is obviously yes.