Developer Productivity Metrics | DORA & Cycle Time
Track developer productivity with DORA metrics and cycle time analysis. GitScrum analytics identify bottlenecks and process improvements for teams.
7 min read
Productivity measurement should identify systemic improvements, not rank individual developers. GitScrum's analytics provide team-level insights into velocity, cycle time, and throughput that help managers identify bottlenecks and process improvements. The key is measuring the system, not the people, and using data to remove obstacles rather than apply pressure.
Productivity Metric Categories
| Category | Good Metrics | Bad Metrics |
|---|---|---|
| Output | Features shipped, PRs merged | Lines of code, commits |
| Quality | Bug rate, incidents | Code coverage % alone |
| Speed | Cycle time, lead time | Hours logged |
| Flow | WIP, blocker time | Tasks started |
| Impact | Customer value, revenue | Story points as absolute |
DORA Metrics Framework
DORA METRICS OVERVIEW
DEPLOYMENT FREQUENCY (Speed):
How often do you deploy to production?

| Tier | Benchmark |
|---|---|
| Elite | Multiple times per day |
| High | Between once per day and once per week |
| Medium | Between once per week and once per month |
| Low | Less than once per month |

Your team: weekly deployments. Status: High performer.
LEAD TIME FOR CHANGES (Speed):
Time from commit to production.

| Tier | Benchmark |
|---|---|
| Elite | Less than one hour |
| High | Between one day and one week |
| Medium | Between one week and one month |
| Low | More than one month |

Your team: 2-3 days average. Status: High performer.
CHANGE FAILURE RATE (Stability):
Percentage of deployments causing a production failure.

| Tier | Benchmark |
|---|---|
| Elite | 0-15% |
| High | 16-30% |
| Medium | 31-45% |
| Low | 46-60% |

Your team: 12%. Status: Elite performer.
TIME TO RESTORE (Stability):
Time to recover from a production failure.

| Tier | Benchmark |
|---|---|
| Elite | Less than one hour |
| High | Less than one day |
| Medium | Between one day and one week |
| Low | More than one week |

Your team: 2-4 hours. Status: High performer.
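The four DORA tiers above can be reduced to a small classifier over deployment records. A minimal sketch in Python, assuming a hypothetical list of (commit time, deploy time, failure flag, restore hours) tuples rather than any real GitScrum export:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records over a 28-day window:
# (commit_time, deploy_time, caused_failure, restore_hours)
deploys = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 3, 14), False, None),
    (datetime(2024, 5, 5, 10), datetime(2024, 5, 8, 11), True, 3.0),
    (datetime(2024, 5, 9, 8), datetime(2024, 5, 11, 16), False, None),
]

def deployment_frequency_tier(deploys, window_days=28):
    per_day = len(deploys) / window_days
    if per_day > 1: return "Elite"        # multiple deploys per day
    if per_day * 7 >= 1: return "High"    # at least weekly
    if per_day * 30 >= 1: return "Medium" # at least monthly
    return "Low"

def lead_time_tier(deploys):
    # Average time from commit to production
    avg = sum((d - c for c, d, *_ in deploys), timedelta()) / len(deploys)
    if avg < timedelta(hours=1): return "Elite"
    if avg <= timedelta(days=7): return "High"
    if avg <= timedelta(days=30): return "Medium"
    return "Low"

def change_failure_rate(deploys):
    # Percentage of deployments that caused a production failure
    return 100 * sum(1 for *_, failed, _ in deploys if failed) / len(deploys)

def mean_time_to_restore(deploys):
    # Average restore time over failed deployments only
    hours = [r for *_, failed, r in deploys if failed]
    return sum(hours) / len(hours) if hours else 0.0
```

The thresholds mirror the tier tables above; the data shape is an illustrative assumption, so adapt the parsing to whatever your deployment log actually contains.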
Developer Experience Metrics
DEVELOPER EXPERIENCE (DX) METRICS
SPACE FRAMEWORK:
- **S - Satisfaction and well-being:** developer satisfaction surveys, burnout indicators, team morale scores
- **P - Performance:** code review turnaround time, quality of reviews, outcomes delivered
- **A - Activity (use carefully):** PRs merged (team level), build/deploy frequency; NOT commits or lines of code
- **C - Communication and collaboration:** knowledge sharing, documentation quality, onboarding time
- **E - Efficiency and flow:** time in meetings, wait/blocked time, context switches per day
QUARTERLY DX SURVEY:
Ask developers to rate each statement on a 1-5 scale, and track the trends quarter over quarter:

1. I can do my best work here
2. I rarely have to wait on blockers
3. I understand what I should be working on
4. Our tools help rather than hinder me
5. I have enough focus time
6. I'm proud of the quality of our code
7. I'm learning and growing
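Quarter-over-quarter survey trends are simple to compute once responses are collected. A sketch, assuming hypothetical score lists keyed by question (the question keys and data are made up for illustration):

```python
# Hypothetical quarterly DX survey results: question key -> list of 1-5 scores
q1_2024 = {"focus_time": [4, 3, 5, 4], "tooling": [2, 3, 2, 3]}
q2_2024 = {"focus_time": [4, 4, 5, 5], "tooling": [3, 3, 4, 3]}

def averages(survey):
    # Mean score per question
    return {q: sum(scores) / len(scores) for q, scores in survey.items()}

def quarter_over_quarter(prev, curr):
    # Change in mean score per question between two quarters
    prev_avg, curr_avg = averages(prev), averages(curr)
    return {q: round(curr_avg[q] - prev_avg[q], 2) for q in curr_avg}
```

Here `quarter_over_quarter(q1_2024, q2_2024)` reports that both hypothetical questions improved; what matters is the direction of each trend, not the absolute score.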
Cycle Time Analysis
CYCLE TIME BREAKDOWN
END-TO-END CYCLE TIME:
Task created → done: 8.5 days average, broken down by stage:

| Stage | Avg Time | Share |
|---|---|---|
| Queue (To Do) | 3.2 days | 38% |
| Development | 2.1 days | 25% |
| Code Review | 1.8 days | 21% |
| Testing/QA | 1.1 days | 13% |
| Deployment Queue | 0.3 days | 3% |

Observation: 38% of the time is spent waiting in queue.
Action: reduce WIP to improve flow.
FLOW EFFICIENCY:
Flow efficiency is active work time divided by total elapsed time. Here that is 2.1 days of active development out of 8.5 total days, for a flow efficiency of 25%.

Industry targets:
- Poor: < 15%
- Average: 15-30%
- Good: 30-50%
- Elite: > 50%

Your status: average, with room to improve.
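Both the stage breakdown and flow efficiency fall out of per-task stage durations. A sketch, assuming hypothetical per-task dictionaries of days spent in each status (values chosen to reproduce the example numbers above):

```python
# Hypothetical per-task stage durations in days,
# e.g. derived from issue status-change timestamps
tasks = [
    {"To Do": 3.0, "Development": 2.0, "Code Review": 2.0, "QA": 1.0, "Deploy": 0.5},
    {"To Do": 3.4, "Development": 2.2, "Code Review": 1.6, "QA": 1.2, "Deploy": 0.1},
]

def stage_breakdown(tasks):
    # Average days spent in each stage across tasks
    totals = {}
    for t in tasks:
        for stage, days in t.items():
            totals[stage] = totals.get(stage, 0.0) + days
    n = len(tasks)
    return {stage: total / n for stage, total in totals.items()}

def flow_efficiency(tasks, active_stages=("Development",)):
    # Active work time as a fraction of total elapsed time
    avg = stage_breakdown(tasks)
    active = sum(avg[s] for s in active_stages)
    return active / sum(avg.values())
```

With this made-up data, `stage_breakdown` yields the 3.2-day queue and 2.1-day development averages from the table, and `flow_efficiency` comes out near 0.25. Whether code review or QA counts as "active" is a judgment call; the `active_stages` parameter makes that choice explicit.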
Metrics Dashboard
PRODUCTIVITY DASHBOARD
TEAM HEALTH OVERVIEW:
Velocity trend (story points, sprints S1-S8): steady improvement of roughly +50% over 8 sprints.
KEY METRICS THIS SPRINT:
| Metric | Value | Trend |
|---|---|---|
| Cycle Time | 7.2 days | Improving |
| Deployment Frequency | 3/week | Stable |
| PR Review Time | 4.5 hours | Improving |
| Bug Escape Rate | 5% | Stable |
| Blocked Time | 8% | Concern |

Action: investigate the increase in blocked time.
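The trend column can be derived automatically by comparing each metric to the previous sprint, with a small threshold so that noise reads as stable. A sketch with made-up metric names and history:

```python
# Hypothetical sprint-over-sprint metric history (most recent value last)
history = {
    "cycle_time_days": [8.5, 7.9, 7.2],
    "blocked_time_pct": [5, 6, 8],
}

# Metrics where a lower value is better
LOWER_IS_BETTER = {"cycle_time_days", "blocked_time_pct"}

def trend(metric, values, threshold=0.05):
    # Relative change vs the previous sprint; small changes count as stable
    prev, curr = values[-2], values[-1]
    change = (curr - prev) / prev
    if abs(change) < threshold:
        return "stable"
    better = change < 0 if metric in LOWER_IS_BETTER else change > 0
    return "improving" if better else "concern"
```

Declaring the "good" direction per metric up front keeps the dashboard honest: a rising blocked-time percentage is flagged as a concern even while cycle time improves.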
BLOCKERS ANALYSIS:
Top blockers this sprint:
- Waiting for code review: 35%
- Unclear requirements: 28%
- Dependencies: 22%
- Environment issues: 15%

Actions:
1. Add a code review SLA (< 4 hours)
2. Improve backlog refinement
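Blocker percentages like these come from aggregating blocked time by reason. A sketch over a hypothetical blocker log (values chosen to reproduce the shares above):

```python
from collections import Counter

# Hypothetical blocker log for a sprint: (reason, hours blocked)
blockers = [
    ("code review", 14), ("unclear requirements", 8),
    ("code review", 7), ("dependencies", 13.2),
    ("environment", 9), ("unclear requirements", 8.8),
]

def blocker_share(blockers):
    # Total blocked hours per reason, as a percentage of all blocked time,
    # ordered from largest share to smallest
    hours = Counter()
    for reason, h in blockers:
        hours[reason] += h
    total = sum(hours.values())
    return {reason: round(100 * h / total, 1) for reason, h in hours.most_common()}
```

Ranking by share points the retrospective at the biggest drain first; here the made-up log reproduces the 35/28/22/15 split, so a review SLA is the obvious first action.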
What Not to Measure
ANTI-PRODUCTIVITY METRICS
VANITY METRICS (Don't track):
- ✗ Lines of code: encourages bloat and punishes refactoring
- ✗ Commits per day: encourages small, meaningless commits
- ✗ Hours logged: measures presence, not output
- ✗ Story points per developer: points aren't comparable across people
- ✗ PRs per developer: encourages splitting work unnecessarily
- ✗ Issues closed per person: encourages cherry-picking easy ones
GOODHART'S LAW:
"When a measure becomes a target, it ceases to be a good measure."

Example: give a team the target of reducing bug count, and the team stops logging bugs as bugs. The solution is to measure outcomes rather than activities, and to focus on customer value delivered.
Best Practices
Anti-Patterns
✗ Lines of code as a productivity measure
✗ Individual leaderboards
✗ Metrics used punitively
✗ Measuring activity over outcomes
✗ Gaming metrics (Goodhart's Law)
✗ Secret dashboards that only management sees