
How to Use GitScrum for Data Science Development Teams

Data science teams face challenges that traditional development teams rarely do: iterative experiments, uncertain timelines, and research-heavy work whose outcome cannot be promised in advance. GitScrum adapts to these needs with flexible workflows, experiment tracking, and visibility into both research progress and production deployments.

Data Science Workflow

Work Categories

DATA SCIENCE TASK TYPES:
┌─────────────────────────────────────────────────────────────┐
│                                                             │
│ RESEARCH (Exploratory):                                     │
│ • Uncertain outcomes                                       │
│ • Time-boxed, not estimate-driven                          │
│ • Success = learning, not just delivery                    │
│ Example: "Explore NLP approaches for sentiment (2 days)"   │
│                                                             │
│ EXPERIMENT (Hypothesis-driven):                             │
│ • Clear hypothesis to test                                 │
│ • Defined success metrics                                  │
│ • May succeed or fail (both valuable)                      │
│ Example: "Test BERT vs GPT for classification"             │
│                                                             │
│ DEVELOPMENT (Production):                                   │
│ • Traditional development estimation                       │
│ • Build on validated experiments                           │
│ • Clear deliverables                                       │
│ Example: "Implement recommendation API endpoint"           │
│                                                             │
│ MAINTENANCE (Operational):                                  │
│ • Model monitoring and retraining                          │
│ • Data pipeline maintenance                                │
│ • Bug fixes and improvements                               │
│ Example: "Retrain fraud model with Q4 data"               │
└─────────────────────────────────────────────────────────────┘
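
GitScrum labels or custom fields can carry these categories; the point is to make "time-boxed" versus "estimated" explicit on every card. As a rough, hypothetical sketch (plain Python, not a GitScrum API), the distinction could be modeled like this, with class and field names chosen purely for illustration:

from dataclasses import dataclass
from enum import Enum


class WorkType(Enum):
    RESEARCH = "research"        # time-boxed, success = learning
    EXPERIMENT = "experiment"    # hypothesis-driven, may pass or fail
    DEVELOPMENT = "development"  # estimated like normal dev work
    MAINTENANCE = "maintenance"  # operational, usually small and recurring


@dataclass
class Task:
    title: str
    work_type: WorkType
    timebox_days: float | None = None   # only for RESEARCH / EXPERIMENT
    story_points: int | None = None     # only for DEVELOPMENT / MAINTENANCE

    def is_timeboxed(self) -> bool:
        return self.work_type in (WorkType.RESEARCH, WorkType.EXPERIMENT)


# Examples mirroring the task types above
tasks = [
    Task("Explore NLP approaches for sentiment", WorkType.RESEARCH, timebox_days=2),
    Task("Test BERT vs GPT for classification", WorkType.EXPERIMENT, timebox_days=3),
    Task("Implement recommendation API endpoint", WorkType.DEVELOPMENT, story_points=5),
    Task("Retrain fraud model with Q4 data", WorkType.MAINTENANCE, story_points=3),
]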

Experiment Tracking

EXPERIMENT BOARD:
┌─────────────────────────────────────────────────────────────┐
│ IDEATION     │ ACTIVE      │ ANALYSIS   │ DECISION         │
├──────────────┼─────────────┼────────────┼──────────────────┤
│              │             │            │                  │
│ Clustering   │ BERT vs GPT │ Feature    │ → Productionize  │
│ approaches   │ comparison  │ selection  │   gradient boost │
│              │             │ results    │                  │
│ Graph-based  │ Gradient    │            │ → Abandon        │
│ recommender  │ boosting    │            │   RNN approach   │
│              │ optimization│            │                  │
│ Real-time    │             │            │ → More research  │
│ anomaly      │             │            │   graph approach │
│ detection    │             │            │                  │
│              │             │            │                  │
└──────────────┴─────────────┴────────────┴──────────────────┘
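
The board columns above track workflow state; the experiment details themselves (hypothesis, metric, eventual decision) deserve a structured record you can link from each card. A minimal sketch, assuming the team keeps a simple JSON-lines log in the project repository; the file name and fields are hypothetical:

import json
from dataclasses import dataclass, asdict, field
from datetime import date


@dataclass
class ExperimentRecord:
    name: str
    hypothesis: str
    metric: str                  # how "success" is measured
    stage: str = "ideation"      # ideation | active | analysis | decision
    decision: str | None = None  # productionize | abandon | more-research
    updated: str = field(default_factory=lambda: date.today().isoformat())


def append_record(record: ExperimentRecord, path: str = "experiments.jsonl") -> None:
    """Append one experiment record as a JSON line (simple, diff-friendly log)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Mirrors the "BERT vs GPT comparison" card while it sits in the ACTIVE column
append_record(ExperimentRecord(
    name="BERT vs GPT comparison",
    hypothesis="A fine-tuned BERT outperforms GPT on our classification task",
    metric="macro F1 on a held-out set",
    stage="active",
))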

Adapting Agile

Sprint Planning

DATA SCIENCE SPRINT STRUCTURE:
┌─────────────────────────────────────────────────────────────┐
│ 2-WEEK SPRINT                                               │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ ALLOCATION GUIDELINES:                                      │
│ • 60% Committed work (production, maintenance)             │
│ • 30% Experiments (time-boxed research)                    │
│ • 10% Learning (papers, tools, upskilling)                 │
│                                                             │
│ SPRINT EXAMPLE:                                             │
│                                                             │
│ COMMITTED (60%):                                            │
│ • Deploy recommendation model v2.3                         │
│ • Fix data pipeline timeout issue                          │
│ • Document model training process                          │
│                                                             │
│ EXPERIMENTS (30%):                                          │
│ • Compare BERT vs GPT-2 for classification (3 days)        │
│   Success: Determine which performs better                 │
│ • Explore graph features for fraud detection (2 days)      │
│   Success: Identify promising signals                      │
│                                                             │
│ LEARNING (10%):                                             │
│ • Review recent papers on transformer efficiency           │
│ • Explore new MLOps tooling                                │
└─────────────────────────────────────────────────────────────┘
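
The 60/30/10 split is easier to keep honest when it is converted into concrete person-days at planning time. A small sketch of that arithmetic, assuming a two-week sprint and a team-chosen focus factor (both parameters are illustrative):

def sprint_allocation(team_size: int, sprint_days: int = 10,
                      focus_factor: float = 0.8,
                      split: dict[str, float] | None = None) -> dict[str, float]:
    """Convert the 60/30/10 guideline into person-days of capacity."""
    split = split or {"committed": 0.60, "experiments": 0.30, "learning": 0.10}
    capacity = team_size * sprint_days * focus_factor  # usable person-days
    return {bucket: round(capacity * share, 1) for bucket, share in split.items()}


# e.g. 4 data scientists, 2-week sprint, 80% focus factor -> 32 person-days
print(sprint_allocation(team_size=4))
# {'committed': 19.2, 'experiments': 9.6, 'learning': 3.2}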

Estimation Approach

ESTIMATION BY WORK TYPE:
┌─────────────────────────────────────────────────────────────┐
│                                                             │
│ RESEARCH/EXPERIMENTS:                                       │
│ Use TIME-BOXING:                                            │
│ "Spend 2 days exploring this. Report findings."            │
│ NOT: "Estimate how long to find a solution."               │
│                                                             │
│ Typical time boxes:                                         │
│ • Quick spike: 4 hours                                     │
│ • Standard experiment: 2-3 days                            │
│ • Deep research: 1 week                                    │
│                                                             │
│ PRODUCTION DEVELOPMENT:                                     │
│ Use STORY POINTS:                                           │
│ • Clear requirements                                       │
│ • Known technology                                         │
│ • Comparable to past work                                  │
│                                                             │
│ HANDLING UNCERTAINTY:                                       │
│ Phase 1: Explore (time-boxed) → Learning                   │
│ Phase 2: Prototype (rough estimate) → Working code         │
│ Phase 3: Productionize (firm estimate) → Deployed          │
└─────────────────────────────────────────────────────────────┘
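
Making the rule explicit ("research gets a time box, production work gets points") prevents anyone from being asked to estimate how long discovery will take. A hypothetical helper that encodes that rule, with defaults mirroring the time boxes above:

# Hypothetical defaults mirroring the guidelines above; tune for your team.
TIMEBOX_DEFAULTS_DAYS = {
    "quick_spike": 0.5,        # ~4 hours
    "standard_experiment": 3,  # 2-3 days
    "deep_research": 5,        # ~1 week
}


def estimation_approach(work_type: str) -> str:
    """Return how a task of this type should be sized."""
    if work_type in ("research", "experiment"):
        return "time-box (report findings when the box expires)"
    if work_type in ("development", "maintenance"):
        return "story points (comparable to past work)"
    raise ValueError(f"unknown work type: {work_type}")


print(estimation_approach("experiment"))   # time-box (...)
print(estimation_approach("development"))  # story points (...)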

Model Development Workflow

Model Lifecycle

MODEL DEVELOPMENT STAGES:
┌─────────────────────────────────────────────────────────────┐
│                                                             │
│ PROBLEM DEFINITION                                          │
│ │ • Business problem clear                                 │
│ │ • Success metrics defined                                │
│ │ • Data availability confirmed                            │
│ ▼                                                          │
│ DATA EXPLORATION                                            │
│ │ • Understand data quality                                │
│ │ • Identify features                                      │
│ │ • Baseline established                                   │
│ ▼                                                          │
│ MODEL EXPERIMENTATION                                       │
│ │ • Try multiple approaches                                │
│ │ • Track experiments systematically                       │
│ │ • Select best performer                                  │
│ ▼                                                          │
│ MODEL DEVELOPMENT                                           │
│ │ • Production-ready code                                  │
│ │ • Testing and validation                                 │
│ │ • Documentation                                          │
│ ▼                                                          │
│ DEPLOYMENT                                                  │
│ │ • API/batch integration                                  │
│ │ • Monitoring setup                                       │
│ │ • A/B testing if applicable                              │
│ ▼                                                          │
│ MONITORING & ITERATION                                      │
│   • Track model performance                                │
│   • Detect drift                                           │
│   • Plan retraining                                        │
└─────────────────────────────────────────────────────────────┘
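
Because the stages are sequential, it can help to make the order explicit wherever model status is tracked, for example in a release checklist script. A minimal, hypothetical sketch:

# Ordered lifecycle stages from the diagram above.
STAGES = [
    "problem_definition",
    "data_exploration",
    "model_experimentation",
    "model_development",
    "deployment",
    "monitoring",
]


def advance(current: str) -> str:
    """Return the next lifecycle stage, enforcing the order above."""
    i = STAGES.index(current)   # raises ValueError on an unknown stage
    if i == len(STAGES) - 1:
        return current          # monitoring loops back via retraining tasks
    return STAGES[i + 1]


assert advance("data_exploration") == "model_experimentation"
assert advance("monitoring") == "monitoring"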

Team Collaboration

DATA SCIENCE + ENGINEERING HANDOFF:
┌─────────────────────────────────────────────────────────────┐
│                                                             │
│ DATA SCIENCE DELIVERS:                                      │
│ ✓ Trained model artifact                                   │
│ ✓ Model card (performance, limitations)                    │
│ ✓ Feature requirements                                     │
│ ✓ Expected input/output formats                            │
│ ✓ Performance benchmarks                                   │
│                                                             │
│ ENGINEERING PROVIDES:                                       │
│ ✓ Feature pipeline infrastructure                          │
│ ✓ Model serving platform                                   │
│ ✓ Monitoring and alerting                                  │
│ ✓ A/B testing framework                                    │
│ ✓ Scaling and reliability                                  │
│                                                             │
│ SHARED RESPONSIBILITIES:                                    │
│ • Integration testing                                      │
│ • Performance optimization                                 │
│ • Incident response                                        │
│ • Documentation                                            │
└─────────────────────────────────────────────────────────────┘
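
The handoff is smoother when the "data science delivers" checklist is verified mechanically before a card moves to engineering. A hypothetical pre-handoff check; the artifact paths are placeholders for whatever your repository actually uses:

from pathlib import Path

# Placeholder artifact names; adapt to your repository layout.
REQUIRED_ARTIFACTS = [
    "model/model.pkl",          # trained model artifact
    "model/model_card.md",      # performance, limitations, intended use
    "model/features.yaml",      # feature requirements
    "model/io_schema.json",     # expected input/output formats
    "model/benchmarks.md",      # performance benchmarks
]


def handoff_ready(root: str = ".") -> list[str]:
    """Return the list of missing handoff artifacts (empty means ready)."""
    return [p for p in REQUIRED_ARTIFACTS if not (Path(root) / p).exists()]


missing = handoff_ready()
if missing:
    print("Not ready for handoff, missing:", ", ".join(missing))
else:
    print("All handoff artifacts present.")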