Analytics
Project analytics and reporting through MCP. Access pulse checks, health scores, risk analysis, flow metrics, velocity reports, and activity insights.
Open Source: The GitScrum MCP Server is open source under the MIT license and available on npm and GitHub. It is a Model Context Protocol server for GitScrum that gives Claude, GitHub Copilot, Cursor, and any MCP-compatible client full operational access to your project management stack.
The analytics tool provides a single action with 10 report types that cover every dimension of project performance, from high-level health assessments to granular cycle time analysis. Each report is purpose-built for a specific decision-making context, giving your AI assistant access to the same data that drives executive dashboards and agile retrospectives.
Analytics in GitScrum aggregate data from tasks, sprints, time tracking, and team activity into actionable metrics. The MCP Server surfaces these reports through natural language, so you can ask "How healthy is the Backend project?" and receive structured data with trend indicators, risk flags, and performance scores.
Actions Overview
| Action | Purpose | Required Parameters |
|---|---|---|
| get | Retrieve a specific analytics report | company_slug, project_slug, report |
The single get action accepts a report parameter that determines which analytics report to generate. All reports are scoped to a specific project within a workspace.
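Under the hood, an MCP client invokes this tool with a JSON-RPC `tools/call` request. A minimal sketch of the payload, assuming the tool is registered under the name `analytics` and using illustrative slug values:

```python
import json

# Hypothetical payload an MCP client sends to invoke the analytics tool.
# The JSON-RPC envelope and "tools/call" method come from the Model Context
# Protocol spec; the slug values here are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analytics",
        "arguments": {
            "action": "get",
            "company_slug": "acme",
            "project_slug": "backend",
            "report": "health",
        },
    },
}

print(json.dumps(request, indent=2))
```

The same envelope is reused for every report type; only the `report` argument changes.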
Parameters
| Parameter | Type | Description |
|---|---|---|
| company_slug | string | Workspace identifier (required) |
| project_slug | string | Project identifier (required) |
| report | string | Report type to generate (required); see Available Reports below |
Available Reports
| Report | What It Measures | When to Use |
|---|---|---|
| pulse | Real-time project activity snapshot | Daily check-ins, quick status updates |
| health | Overall project health score with contributing factors | Sprint planning, stakeholder reporting |
| risks | Identified project risks and severity levels | Risk mitigation planning, retrospectives |
| flow | Work item flow through workflow stages | Bottleneck identification, process optimization |
| velocity | Team delivery rate over time | Sprint planning, capacity estimation |
| cumulative_flow | Cumulative work distribution across statuses over time | Trend analysis, WIP monitoring |
| throughput | Number of completed items per time period | Delivery predictability, SLA tracking |
| cycle_time | Time from work started to work completed | Process efficiency, estimation accuracy |
| lead_time | Time from item created to item completed | End-to-end delivery measurement |
| activity | Team and project activity patterns | Engagement tracking, workload distribution |
Pulse Report
The pulse report provides a real-time activity snapshot: recent commits, task movements, comments, and team engagement within a short time window. Think of it as a heartbeat monitor for your project.
You: "What's the pulse on the Backend project?"
AI: Calls analytics action=get report="pulse"
→ returns recent activity summary, active contributors, and momentum indicators
You: "Is the Mobile App project still active?"
AI: Calls analytics action=get report="pulse"
→ reports activity level and last action timestamps
Health Report
The health report calculates an overall project health score based on multiple factors: sprint completion rates, blocker count, overdue tasks, team velocity trends, and workload balance. The result is a composite score with individual factor breakdowns.
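As an illustration of how such a composite could be computed, here is a minimal sketch using an equally weighted average; the factor values and weights are invented, not GitScrum's actual formula:

```python
# Illustrative composite health score: a weighted average of per-factor
# scores (each on a 0-100 scale). Weights and values are assumptions.
def health_score(factors, weights):
    total_weight = sum(weights.values())
    return sum(factors[name] * w for name, w in weights.items()) / total_weight

factors = {
    "sprint_completion": 85,  # % of committed work finished
    "blockers": 60,           # fewer blockers -> higher score
    "overdue_tasks": 70,
    "velocity_trend": 90,
    "workload_balance": 80,
}
weights = {name: 1.0 for name in factors}  # equal weights for this sketch

print(round(health_score(factors, weights)))  # 77
```

In the real report, low-scoring factors like `blockers` above would surface in the per-factor breakdown as areas needing attention.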
You: "Show project health report for Backend"
AI: Calls analytics action=get report="health"
→ returns health score, contributing factors, and recommendations
You: "Which areas of the project need attention?"
AI: Calls analytics action=get report="health"
→ AI highlights low-scoring factors and suggests actions
You: "Is the project on track for the release?"
AI: Calls analytics action=get report="health"
→ AI evaluates health indicators against delivery timeline
Risks Report
The risks report identifies active project risks based on data patterns: stale tasks, overdue items, unbalanced workloads, blocked dependencies, and scope creep indicators. Each risk includes a severity level and contributing data points.
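One of these signals can be sketched as a simple heuristic; the 14-day staleness threshold and the task shape below are assumptions for illustration, not GitScrum's actual rule:

```python
from datetime import date, timedelta

# Illustrative "stale task" detector: flag tasks with no update in a
# while. The 14-day threshold is an assumed cutoff for this sketch.
STALE_AFTER = timedelta(days=14)

def stale_tasks(tasks, today):
    return [t["title"] for t in tasks if today - t["last_update"] > STALE_AFTER]

tasks = [
    {"title": "Refactor auth", "last_update": date(2024, 5, 1)},
    {"title": "Fix login bug", "last_update": date(2024, 5, 28)},
]
print(stale_tasks(tasks, today=date(2024, 6, 1)))  # ['Refactor auth']
```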
You: "Identify project risks for the Backend project"
AI: Calls analytics action=get report="risks"
→ returns categorized risks with severity and evidence
You: "What could derail our current sprint?"
AI: Calls analytics action=get report="risks"
→ AI filters for sprint-relevant risks
You: "Are there any high-severity risks I should escalate?"
AI: Calls analytics action=get report="risks"
→ AI highlights critical and high-severity items
Flow Metrics
The flow report analyzes how work items move through your workflow stages, from backlog through in-progress to completed. It identifies bottlenecks where items accumulate, stages with unusually long residence times, and throughput rates at each stage.
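Bottleneck detection of this kind can be sketched by summing per-stage residence time from status-change history; the event shape below is hypothetical:

```python
from collections import defaultdict

# Sketch of bottleneck detection: sum how long items sit in each stage,
# then rank stages. The (item, stage, days_in_stage) tuples are invented.
events = [
    ("TASK-1", "In Progress", 2.0), ("TASK-1", "Code Review", 4.5),
    ("TASK-2", "In Progress", 1.5), ("TASK-2", "Code Review", 3.0),
    ("TASK-2", "QA", 1.0),
]

residence = defaultdict(float)
for _, stage, days in events:
    residence[stage] += days

bottleneck = max(residence, key=residence.get)
print(bottleneck)  # Code Review
```

Here "Code Review" accumulates 7.5 item-days versus 3.5 for "In Progress", which is the kind of signal the report surfaces as a bottleneck indicator.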
You: "Show flow metrics for the Backend project"
AI: Calls analytics action=get report="flow"
→ returns stage-by-stage flow data with bottleneck indicators
You: "Where are tasks getting stuck?"
AI: Calls analytics action=get report="flow"
→ AI identifies stages with high WIP or long residence times
Velocity Report
The velocity report tracks your team's delivery rate across sprints: how many story points or tasks are completed per sprint. This is the foundational metric for sprint planning and capacity estimation.
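The capacity recommendation an assistant might derive from this report reduces to simple arithmetic; the sprint values and the 15% risk buffer below are illustrative assumptions:

```python
# Sketch of a capacity recommendation from recent sprint velocities.
# The velocities and the buffer factor are invented for illustration.
velocities = [30, 36, 34, 36]  # points completed in the last four sprints

average = sum(velocities) / len(velocities)  # 34.0
recommended = round(average * 0.85)          # apply a ~15% risk buffer
print(average, recommended)                  # 34.0 29
```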
You: "What's our velocity trend?"
AI: Calls analytics action=get report="velocity"
→ returns sprint-over-sprint velocity with trend direction
You: "How many points should we plan for next sprint?"
AI: Calls analytics action=get report="velocity"
→ AI calculates average velocity and recommends capacity
You: "Is our velocity improving or declining?"
AI: Calls analytics action=get report="velocity"
→ AI analyzes trend and reports direction with percentage change
Cumulative Flow
The cumulative_flow report shows the distribution of work items across all workflow statuses over time. This stacked-area visualization data reveals WIP trends, bottleneck formation, and delivery consistency.
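The underlying data reduces to per-day, per-status item counts; a minimal sketch with invented daily snapshots:

```python
from collections import Counter

# Sketch of cumulative-flow data: for each day, count items per status.
# The snapshots below are invented for illustration.
snapshots = {
    "2024-06-01": ["Backlog", "Backlog", "In Progress", "Done"],
    "2024-06-02": ["Backlog", "In Progress", "In Progress", "Done"],
}

cfd = {day: Counter(statuses) for day, statuses in snapshots.items()}
print(cfd["2024-06-02"]["In Progress"])  # 2
```

A rising "In Progress" band from one day to the next, as in this toy data, is exactly the WIP-growth signal the report is used to spot.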
You: "Show cumulative flow diagram data for Backend"
AI: Calls analytics action=get report="cumulative_flow"
→ returns time-series data with item counts per status
You: "Is our WIP increasing?"
AI: Calls analytics action=get report="cumulative_flow"
→ AI analyzes in-progress item trend over time
Throughput
The throughput report measures the number of work items completed per time period. Unlike velocity (which uses story points), throughput counts items regardless of size. This metric is valuable for teams that don't estimate or want a simpler delivery measure.
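Counting throughput per period is straightforward; a sketch that buckets illustrative completion dates by ISO week:

```python
from collections import Counter
from datetime import date

# Throughput counts completed items per period regardless of size.
# The completion dates below are invented for illustration.
completed = [date(2024, 6, 3), date(2024, 6, 5), date(2024, 6, 10)]

# Bucket by ISO week number to get items completed per week.
per_week = Counter(d.isocalendar().week for d in completed)
print(dict(per_week))  # {23: 2, 24: 1}
```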
You: "What's our throughput this month?"
AI: Calls analytics action=get report="throughput"
→ returns completed items per week/day with trend data
You: "Are we delivering consistently?"
AI: Calls analytics action=get report="throughput"
→ AI evaluates throughput variance and consistency
Cycle Time and Lead Time
Cycle time measures the duration from when work starts (item moves to in-progress) to when it's completed. Lead time measures the full duration from item creation to completion. Together, they reveal process efficiency and predictability.
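Both metrics reduce to date arithmetic on an item's timestamps; a worked sketch with illustrative dates:

```python
from datetime import date

# One item's lifecycle (dates are invented for illustration):
# lead time spans creation -> completion,
# cycle time spans start of work -> completion.
created = date(2024, 6, 1)
started = date(2024, 6, 4)
completed = date(2024, 6, 9)

lead_time = (completed - created).days    # 8 days
cycle_time = (completed - started).days   # 5 days
print(lead_time, cycle_time)              # 8 5
```

The gap between the two (3 days here) is time the item spent waiting before work began, which is why the pair reveals process efficiency.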
Cycle Time
You: "What's our average cycle time?"
AI: Calls analytics action=get report="cycle_time"
→ returns average, median, and percentile cycle times
You: "How long does it take to complete a task once we start?"
AI: Calls analytics action=get report="cycle_time"
→ AI reports average cycle time with distribution breakdown
Lead Time
You: "What's our lead time for the Backend project?"
AI: Calls analytics action=get report="lead_time"
→ returns average, median, and percentile lead times
You: "How long from request to delivery?"
AI: Calls analytics action=get report="lead_time"
→ AI reports end-to-end delivery time with trend
Activity Report
The activity report analyzes team engagement patterns: who is active, what types of actions they perform, peak activity periods, and contribution distribution. This provides visibility into team dynamics and workload balance.
You: "Show activity report for the Backend project"
AI: Calls analytics action=get report="activity"
→ returns activity breakdown by member, type, and time period
You: "Who's been most active on the project this week?"
AI: Calls analytics action=get report="activity"
→ AI ranks team members by activity level
Data-Driven Decision Workflow
Analytics reports are most powerful when combined. Here's how to use them for informed decision-making:
Sprint Planning
You: "Help me plan the next sprint"
AI: 1. Calls analytics report="velocity" → average velocity = 34 points
    2. Calls analytics report="risks" → 2 active risks to address
    3. Calls analytics report="health" → health score = 78/100
    → "Based on velocity of 34 pts/sprint with 2 active risks,
       I recommend planning 28-30 points with risk mitigation tasks"
Retrospective Preparation
You: "Prepare data for our retrospective"
AI: 1. Calls analytics report="cycle_time" → avg 3.2 days
    2. Calls analytics report="flow" → bottleneck in code review
    3. Calls analytics report="throughput" → 23 items completed
    → "Sprint delivered 23 items, avg cycle time 3.2 days.
       Code review stage is a bottleneck: 40% of cycle time spent there"
Stakeholder Reporting
You: "Generate a project status summary for leadership"
AI: 1. Calls analytics report="health" → overall score and breakdown
    2. Calls analytics report="velocity" → delivery trend
    3. Calls analytics report="risks" → risk summary
    → Produces structured executive summary with metrics
Agile Metrics Reference
| Metric | Formula | Target |
|---|---|---|
| Velocity | Story points completed per sprint | Stable or increasing trend |
| Throughput | Items completed per time period | Consistent with low variance |
| Cycle Time | Start → Complete duration | Decreasing or stable |
| Lead Time | Created → Complete duration | Decreasing or stable |
| WIP | Items currently in-progress | Below team WIP limit |
| Flow Efficiency | Active time Γ· Total time Γ 100 | Above 15% (industry benchmark) |
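The flow-efficiency row can be made concrete with a small worked example of the formula; the durations are invented for illustration:

```python
# Worked example of the flow-efficiency formula from the table above:
# active time divided by total elapsed time, as a percentage.
active_days = 2.0   # time the item was actually being worked on
total_days = 8.0    # creation to completion
flow_efficiency = active_days / total_days * 100
print(flow_efficiency)  # 25.0, above the 15% benchmark
```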