Agile Metrics and KPIs
Metrics drive behavior, so choose them carefully. GitScrum provides built-in analytics and custom tracking to help teams improve continuously.
Metrics Categories
Flow Metrics
FLOW METRICS:
┌─────────────────────────────────────────────────────────────┐
│ │
│ VELOCITY: │
│ Story points completed per sprint │
│ Use for: Sprint planning, capacity │
│ ⚠️ Do not: Compare across teams │
│ │
│ Example: │
│ Sprint 10: 24 points │
│ Sprint 11: 28 points │
│ Sprint 12: 22 points │
│ Average: 25 points/sprint │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ CYCLE TIME: │
│ Time from work started to work completed │
│ Use for: Predictability, process improvement │
│ │
│ Example: │
│ Median cycle time: 3.5 days │
│ 85th percentile: 7 days │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ LEAD TIME: │
│ Time from request to delivery │
│ Use for: Customer expectations, SLAs │
│ │
│ Example: │
│ From "requested" to "deployed": 12 days average │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ THROUGHPUT: │
│ Number of items completed per time period │
│ Use for: Forecasting, capacity planning │
│ │
│ Example: │
│ 8 stories/week (average) │
│ Range: 6-10 stories/week │
└─────────────────────────────────────────────────────────────┘
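All four flow metrics reduce to simple aggregations over sprint and per-item data. A minimal sketch, with invented numbers chosen to echo the examples above:

```python
from statistics import mean, median

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of data at or below it."""
    s = sorted(values)
    k = -(-len(s) * p // 100) - 1  # ceil(n * p / 100) - 1
    return s[max(0, k)]

# Hypothetical data echoing the examples above
sprint_points = [24, 28, 22]             # velocity: points per sprint
cycle_times = [1, 2, 3, 3, 4, 4, 7, 7]   # days per completed item
weekly_done = [8, 6, 10, 8]              # throughput: items per week

velocity_avg = mean(sprint_points)       # ~25 points/sprint
ct_median = median(cycle_times)          # 3.5 days
ct_p85 = percentile(cycle_times, 85)     # 7 days
throughput_avg = mean(weekly_done)       # 8 stories/week
```

Reporting the 85th percentile alongside the median (as above) is what makes cycle time useful for predictability: "85% of items finish within 7 days" is a commitment you can actually make.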
Quality Metrics
QUALITY METRICS:
┌─────────────────────────────────────────────────────────────┐
│ │
│ DEFECT RATE: │
│ Bugs found per unit of work │
│ Use for: Quality trend tracking │
│ │
│ Example: │
│ 0.3 bugs per story (after QA) │
│ Trend: Decreasing (was 0.5 last quarter) │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ ESCAPED DEFECTS: │
│ Bugs found in production │
│ Use for: Quality gate effectiveness │
│ │
│ Example: │
│ 2 production bugs this sprint │
│ Target: < 3 per sprint │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ CODE COVERAGE: │
│ Percentage of code with tests │
│ Use for: Test quality indicator (not absolute) │
│ ⚠️ High coverage ≠ quality │
│ │
│ Example: │
│ Overall: 78% │
│ New code: 90% (target: > 80%) │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ MTTR (Mean Time to Recover): │
│ Average time to fix production issues │
│ Use for: Incident response effectiveness │
│ │
│ Example: │
│ MTTR: 45 minutes (target: < 1 hour) │
└─────────────────────────────────────────────────────────────┘
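Each quality metric is a count or a ratio over sprint data. A sketch with invented counts:

```python
from statistics import mean

# Hypothetical sprint data
stories_completed = 10
bugs_found_after_qa = 3
defect_rate = bugs_found_after_qa / stories_completed  # 0.3 bugs/story

escaped_defects = 2                # production bugs this sprint
within_target = escaped_defects < 3  # target: fewer than 3 per sprint

recovery_minutes = [30, 60, 45]    # per-incident time to recover
mttr = mean(recovery_minutes)      # 45 minutes
```

Note that MTTR is an average, so one long outage can dominate it; tracking the individual incident durations alongside the mean keeps the metric honest.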
Outcome Metrics
OUTCOME METRICS:
┌─────────────────────────────────────────────────────────────┐
│ │
│ THESE MATTER MOST: │
│ (Output metrics are means to these ends) │
│ │
│ USER SATISFACTION: │
│ NPS, CSAT, user feedback │
│ Use for: Are we building the right things? │
│ │
│ Example: │
│ NPS: 45 (up from 38 last quarter) │
│ Feature satisfaction: 4.2/5 │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ BUSINESS VALUE: │
│ Revenue impact, cost savings, goal achievement │
│ Use for: Prioritization, ROI │
│ │
│ Example: │
│ New checkout: +12% conversion │
│ Automation: saved 20 hrs/week │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ ADOPTION: │
│ Usage of new features │
│ Use for: Feature success validation │
│ │
│ Example: │
│ New search: 65% adoption (target: 50%) │
│ Used daily by 40% of users │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ TIME TO VALUE: │
│ How long until work delivers value │
│ Use for: Reducing batch sizes, faster feedback │
│ │
│ Example: │
│ Average: 3 weeks from idea to production │
│ Goal: Reduce to 2 weeks │
└─────────────────────────────────────────────────────────────┘
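NPS and adoption reduce to simple ratios over survey scores and usage data. NPS is the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6); a sketch with invented survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6), rounded."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 20 invented responses: 13 promoters, 3 passives, 4 detractors
responses = [10] * 13 + [8] * 3 + [5] * 4
score = nps(responses)    # 45, matching the example above

# Adoption: users of the new feature / active users (hypothetical counts)
adoption = 650 / 1000     # 65%
```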
Dashboard Design
Team Dashboard
METRICS DASHBOARD:
┌─────────────────────────────────────────────────────────────┐
│ │
│ TEAM DASHBOARD EXAMPLE: │
│ │
│ ┌─────────────────────────────────────────────────────────┐│
│ │ Team Alpha - Sprint 12 ││
│ │ ││
│ │ VELOCITY CYCLE TIME QUALITY ││
│ │ ────────── ────────── ─────── ││
│ │ 25 pts 3.5 days 0.2 bugs/story ││
│ │ ↑ +2 vs avg ↓ -0.5 vs avg ↓ improving ││
│ │ ││
│ │ SPRINT PROGRESS ││
│ │ ████████████████░░░░ 80% complete ││
│ │ 20/25 points done, 3 days remaining ││
│ │ ││
│ │ VELOCITY TREND (last 6 sprints) ││
│ │ 30│ ┌───┐ ││
│ │ │ ┌───┤ ├───┐ ┌───┐ ││
│ │ 20│─┤ │ │ ├───┬─┤ │ ││
│ │ │ │ │ │ │ │ │ │ ││
│ │ 10│ │ │ │ │ │ │ │ ││
│ │ └─┴───┴───┴───┴───┴─┴───┴─── ││
│ │ S7 S8 S9 S10 S11 S12 ││
│ │ ││
│ │ CYCLE TIME DISTRIBUTION ││
│ │ ┌───────┐ ││
│ │ ┌───┤ ├───┐ ││
│ │ ──┤ │ │ ├── ││
│ │ 1 3 5 7 days ││
│ │ Median: 3.5 days ││
│ └─────────────────────────────────────────────────────────┘│
│ │
│ FOCUS ON TRENDS, NOT ABSOLUTE NUMBERS │
│ Are we improving over time? │
└─────────────────────────────────────────────────────────────┘
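The sprint-progress row in the dashboard above is simple to generate from points done versus committed; a sketch:

```python
def progress_bar(done, total, width=20):
    """Render a text progress bar like the dashboard's sprint-progress row."""
    filled = round(width * done / total)
    pct = round(100 * done / total)
    return "█" * filled + "░" * (width - filled) + f" {pct}% complete"

line = progress_bar(done=20, total=25)   # 20/25 points done → 80%
```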
Healthy Metrics
Balanced Metrics
AVOID DYSFUNCTION:
┌─────────────────────────────────────────────────────────────┐
│ │
│ SINGLE METRICS ARE DANGEROUS: │
│ │
│ If you only measure velocity: │
│ → Stories get inflated │
│ → Quality suffers │
│ → Tech debt ignored │
│ │
│ If you only measure cycle time: │
│ → Small stories only │
│ → Important work avoided │
│ │
│ If you only measure bugs: │
│ → Defensive coding, less innovation │
│ → Fear of trying new things │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ BALANCE METRICS: │
│ │
│ Speed + Quality: │
│ Track velocity AND defect rate │
│ Fast but broken is not good │
│ │
│ Output + Outcome: │
│ Track throughput AND user satisfaction │
│ Shipping fast but wrong is not good │
│ │
│ Short-term + Long-term: │
│ Track velocity AND tech debt │
│ Fast now, slow later is not good │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ METRICS FOR IMPROVEMENT, NOT PUNISHMENT: │
│ │
│ ❌ "Your velocity is lower than Team B" │
│ ✅ "Our velocity dropped - what changed?" │
│ │
│ ❌ "You need to increase velocity 20%" │
│ ✅ "What's blocking us from delivering faster?" │
└─────────────────────────────────────────────────────────────┘
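One way to operationalize the pairings above is a check that flags when a "good" move in one metric coincides with a bad move in its counterweight. The function and thresholds here are hypothetical, not a GitScrum feature:

```python
def balance_warnings(velocity_delta, defect_delta, satisfaction_delta, debt_delta):
    """Pair each speed signal with its counterweight; return imbalance warnings."""
    warnings = []
    if velocity_delta > 0 and defect_delta > 0:
        warnings.append("velocity up AND defect rate up: fast but broken")
    if velocity_delta > 0 and satisfaction_delta < 0:
        warnings.append("output up AND satisfaction down: shipping fast but wrong")
    if velocity_delta > 0 and debt_delta > 0:
        warnings.append("velocity up AND tech debt up: fast now, slow later")
    return warnings

# A sprint where velocity rose but quality and satisfaction slipped
flags = balance_warnings(velocity_delta=2, defect_delta=0.1,
                         satisfaction_delta=-3, debt_delta=1)
```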
Tracking in GitScrum
Metrics Tasks
METRICS REVIEW TASK:
┌─────────────────────────────────────────────────────────────┐
│ │
│ RECURRING: Metrics Review │
│ ┌─────────────────────────────────────────────────────────┐│
│ │ METRICS-SPRINT: Sprint 12 Metrics Review ││
│ │ ││
│ │ Frequency: Every sprint ││
│ │ Owner: Scrum Master ││
│ │ ││
│ │ REVIEW: ││
│ │ ☐ Velocity vs forecast ││
│ │ ☐ Cycle time trends ││
│ │ ☐ Quality metrics ││
│ │ ☐ Burndown accuracy ││
│ │ ☐ Unplanned work percentage ││
│ │ ││
│ │ THIS SPRINT: ││
│ │ ││
│ │ Velocity: 22 pts (forecast: 25) ⚠️ ││
│ │ Analysis: Lost 1 day to production issue ││
│ │ ││
│ │ Cycle time: 4.2 days (was 3.5) ⚠️ ││
│ │ Analysis: Larger stories this sprint ││
│ │ ││
│ │ Defects: 1 production bug (within target) ✅ ││
│ │ ││
│ │ ACTIONS: ││
│ │ • Discuss velocity miss in retro ││
│ │ • Break down large stories earlier ││
│ └─────────────────────────────────────────────────────────┘│
│ │
│ QUARTERLY: Trend Analysis │
│ ┌─────────────────────────────────────────────────────────┐│
│ │ METRICS-Q1: Quarterly Metrics Analysis ││
│ │ ││
│ │ Q1 SUMMARY: ││
│ │ ││
│ │ Velocity: Stable (avg 24 pts, range 20-28) ││
│ │ Cycle time: Improved (4.5 → 3.5 days) ││
│ │ Quality: Improved (0.5 → 0.2 bugs/story) ││
│ │ Predictability: 85% sprint goals met ││
│ │ ││
│ │ KEY WINS: ││
│ │ • Cycle time reduction from smaller stories ││
│ │ • Quality improvement from TDD adoption ││
│ │ ││
│ │ AREAS TO IMPROVE: ││
│ │ • Sprint goal completion (85% → 90%) ││
│ │ • Reduce unplanned work (20% → 15%) ││
│ └─────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────┘
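The "velocity vs forecast" check above is more robust when the forecast itself comes from data rather than a gut feel. A common technique is Monte Carlo simulation over historical throughput; a sketch with invented history:

```python
import random

def forecast_weeks(throughput_history, backlog_items, trials=10_000, seed=1):
    """Monte Carlo forecast: repeatedly sample weekly throughput from history
    until the backlog is done; return the 85th-percentile finish time."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        remaining, weeks = backlog_items, 0
        while remaining > 0:
            remaining -= rng.choice(throughput_history)
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return outcomes[int(trials * 0.85)]

# Invented history of 6-10 stories/week against a 40-item backlog
p85 = forecast_weeks([6, 8, 10, 8, 7, 9], backlog_items=40)
```

The 85th-percentile answer ("we finish within N weeks in 85% of simulations") gives a hedged commitment the team can review against actuals each sprint.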
Team Discussions
Metrics in Retrospectives
USING METRICS IN RETROS:
┌─────────────────────────────────────────────────────────────┐
│ │
│ METRICS AS CONVERSATION STARTERS: │
│ │
│ Present data, then discuss: │
│ │
│ VELOCITY DROPPED: │
│ "Velocity was 18 points, down from 25 average. │
│ What happened? What can we learn?" │
│ │
│ Not: "Why didn't you hit 25 points?" │
│ │
│ CYCLE TIME INCREASED: │
│ "Cycle time went from 3 to 5 days. │
│ What's causing delays? How can we reduce it?" │
│ │
│ Not: "You're taking too long on stories." │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ GOOD QUESTIONS: │
│ │
│ • "What does this trend tell us?" │
│ • "Is this the right metric to track?" │
│ • "What's behind this change?" │
│ • "What experiment could improve this?" │
│ • "Is this metric reflecting our goals?" │
│ │
│ BAD APPROACHES: │
│ │
│ • Blaming individuals for metrics │
│ • Setting arbitrary targets │
│ • Comparing to other teams │
│ • Rewarding metric achievement │
│ (leads to gaming) │
└─────────────────────────────────────────────────────────────┘