
Technical Spike Management

Technical spikes are time-boxed research efforts that answer specific questions and reduce risk. Good spike management keeps the research focused and turns findings into action; poor spike management wastes time on unfocused exploration.

Spike Characteristics

Aspect      Spike                Feature
─────────   ─────────────────    ─────────────
Goal        Answer a question    Deliver value
Output      Knowledge            Working code
Time-box    Fixed (1-2 days)     Estimated
Points      Often 0 or fixed     Variable

When to Spike

Use Cases

WHEN TO USE SPIKES
══════════════════

TECHNOLOGY EVALUATION:
─────────────────────────────────────
Questions like:
├── "Can library X do what we need?"
├── "Will this scale to our load?"
├── "What's the learning curve?"
├── "How does it integrate?"
├── Evaluate before committing
└── Informed technology choices

ESTIMATION UNCERTAINTY:
─────────────────────────────────────
When you can't estimate:
├── "How long will migration take?"
├── Unknown complexity
├── Never done before
├── Spike to understand
├── Then estimate accurately
└── Reduce planning uncertainty

PROTOTYPING:
─────────────────────────────────────
Proof of concept:
├── "Is this approach viable?"
├── Quick validation
├── Throw-away code
├── Learn by doing
├── Before committing
└── Cheap learning

BUG INVESTIGATION:
─────────────────────────────────────
Deep investigation:
├── "Why is this happening?"
├── Root cause analysis
├── Reproduce conditions
├── Time-boxed investigation
├── Then fix separately
└── Understand before fix

INTEGRATION RESEARCH:
─────────────────────────────────────
Third-party services (see the probe sketch below):
├── "How does their API work?"
├── "What are the limitations?"
├── Evaluate vendors
├── Document findings
└── Reduce integration risk
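
Integration spikes often boil down to a handful of small probe scripts. The sketch below is purely illustrative: it assumes the common Python requests library, a hypothetical vendor endpoint, environment variable, and header names. The point is how little code it takes to learn what the API actually allows.

# probe_vendor_api.py -- throwaway spike code: what limits does the vendor API impose?
# Hypothetical endpoint, env var, and header names; adjust for the vendor under evaluation.
import os
import requests

BASE_URL = "https://api.example-vendor.com/v1"   # hypothetical vendor base URL
TOKEN = os.environ["VENDOR_API_TOKEN"]           # assumes bearer-token auth

resp = requests.get(
    f"{BASE_URL}/orders",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"limit": 200},                       # does the API cap page size?
    timeout=10,
)

print("status:", resp.status_code)
# Many APIs advertise limits and pagination in response headers; print whatever is there.
for name in ("X-RateLimit-Limit", "X-RateLimit-Remaining", "Retry-After", "Link"):
    print(f"{name}: {resp.headers.get(name)}")
print("records returned:", len(resp.json().get("data", [])))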

Spike Structure

Clear Definition

SPIKE DEFINITION
════════════════

SPIKE TEMPLATE:
─────────────────────────────────────
Title: [Spike] Evaluate Auth0 for user authentication

Question:
What is the primary question this spike answers?
"Can Auth0 meet our authentication requirements
 including SSO, MFA, and custom claims?"

Context:
Why do we need to answer this?
"We need to choose an auth provider. Auth0 is a
 candidate but we're unsure about integration
 complexity and feature coverage."

Time-box:
How long? (max 2 days typically)
"2 days (16 hours)"

Output:
What will be delivered?
├── Written recommendation
├── Prototype code (if applicable)
├── Pros/cons analysis
├── Estimated effort for full implementation
└── Go/no-go recommendation

Acceptance Criteria:
How do we know the spike is complete?
├── ☐ Set up test Auth0 tenant
├── ☐ Implement basic login flow
├── ☐ Test SSO with test IdP
├── ☐ Test MFA configuration
├── ☐ Document API integration approach
├── ☐ Write recommendation
└── Clear definition of done

SPIKE CARD EXAMPLE:
─────────────────────────────────────
┌──────────────────────────────────────┐
│ 🔬 [Spike] Evaluate Auth0            │
├──────────────────────────────────────┤
│ Question: Can Auth0 meet our needs   │
│           for SSO, MFA, custom claims│
│                                      │
│ Time-box: 2 days                     │
│                                      │
│ Output: Written recommendation +     │
│         prototype                    │
│                                      │
│ Assigned: Sarah                      │
│ Due: Thursday                        │
└──────────────────────────────────────┘

Running Spikes

Execution

SPIKE EXECUTION
═══════════════

FOCUSED RESEARCH:
─────────────────────────────────────
During the spike:
├── Stay focused on the question
├── Don't expand scope
├── Time-box strictly
├── Document as you go
├── Note dead ends
├── Capture learnings
└── Answer the question

DAILY CHECK-INS:
─────────────────────────────────────
For multi-day spikes:
├── Brief update in standup
├── "Still investigating X"
├── "Found Y, exploring implications"
├── "Blocked on Z, need help"
├── Keep team informed
└── Surface blockers early

IF TIME RUNS OUT:
─────────────────────────────────────
When time-box ends:
├── Stop and document
├── What did you learn?
├── What's still unknown?
├── Enough to decide?
├── Extend only if justified
├── Don't let spikes drag on
└── Time-box is real

PROTOTYPE CODE:
─────────────────────────────────────
If coding (see the sketch after this list):
├── Write to learn, not to ship
├── Skip tests, skip polish
├── Focus on answer
├── Document what you learned
├── Code is throw-away
├── Don't get attached
└── Knowledge is the product
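
To make "throw-away" concrete, here is a minimal sketch of spike code for the Auth0 custom-claims question from the earlier example. Everything specific is a placeholder (tenant URL, client credentials, the namespaced claim name), and it assumes a test client-credentials application calling Auth0's standard /oauth/token endpoint. The hard-coded values, missing error handling, and print statements are exactly the point.

# spike_auth0_claims.py -- throwaway spike code: does our custom claim appear in the token?
# Placeholder tenant, credentials, and claim name; no tests or error handling on purpose.
import base64
import json
import requests

TENANT = "https://dev-example.us.auth0.com"   # placeholder test tenant
CLAIM = "https://example.com/department"      # placeholder namespaced custom claim

resp = requests.post(
    f"{TENANT}/oauth/token",
    json={
        "grant_type": "client_credentials",
        "client_id": "SPIKE_CLIENT_ID",        # hard-coded: acceptable in a spike
        "client_secret": "SPIKE_CLIENT_SECRET",
        "audience": f"{TENANT}/api/v2/",
    },
    timeout=10,
)
token = resp.json()["access_token"]

# Decode the JWT payload without verifying the signature -- we only want to see
# whether the claim added by our test Action is present.
payload_b64 = token.split(".")[1]
payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
payload = json.loads(base64.urlsafe_b64decode(payload_b64))

print(json.dumps(payload, indent=2))
print("custom claim present:", CLAIM in payload)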

Spike Output

Documenting Findings

SPIKE OUTPUT
════════════

RECOMMENDATION DOCUMENT:
─────────────────────────────────────
Structure:
1. Summary (1 paragraph)
   "Auth0 is recommended for our auth needs.
    It meets all requirements with reasonable
    integration effort."

2. Question Recap
   "Can Auth0 meet our needs for SSO, MFA,
    and custom claims?"

3. Findings
   - SSO: ✅ Works with SAML and OIDC
   - MFA: ✅ Built-in, configurable
   - Custom claims: ✅ Via Rules/Actions
   - Pricing: ~$X/month for our volume

4. Concerns/Risks
   - Vendor lock-in
   - Learning curve for Actions
   - Cost at scale

5. Recommendation
   "Proceed with Auth0. Estimated 2 sprints
    for full implementation."

6. Next Steps
   - Create implementation stories
   - Set up production tenant
   - Define migration plan

SHARE FINDINGS:
─────────────────────────────────────
├── Present to team
├── Answer questions
├── Document in wiki
├── Link from original spike task
├── Knowledge preserved
└── Inform future work

DECISION:
─────────────────────────────────────
After spike:
├── Make the decision
├── Don't leave open
├── Act on findings
├── Create follow-up tasks
├── Spike leads to action
└── Closure

Sprint Integration

Spikes in Sprints

SPIKES IN SPRINT PLANNING
═════════════════════════

PLANNING SPIKES:
─────────────────────────────────────
When to plan:
├── Before estimating uncertain work
├── When technology decision needed
├── When design unclear
├── Sprint before feature work
└── Reduce uncertainty early

SPIKE SIZING:
─────────────────────────────────────
Approaches:
├── No points (knowledge, not delivery)
├── Fixed points (e.g., always 3)
├── Time-box as estimate
├── Team decides approach
└── Consistent method

LIMITING SPIKES:
─────────────────────────────────────
Per sprint:
├── Max 1-2 spikes
├── Most work is delivery
├── Spikes are investment
├── Don't overdo research
├── Bias toward action
└── Spikes supplement delivery work, not replace it

GitScrum Spikes

Tracking

GITSCRUM FOR SPIKES
═══════════════════

SPIKE TASK TYPE:
─────────────────────────────────────
├── Label: spike, research
├── Clear title: "[Spike] ..."
├── Question in description
├── Time-box specified
├── Output defined
└── Distinct from features

SPIKE WORKFLOW:
─────────────────────────────────────
├── To Do → In Progress → Done
├── Same as other work
├── Track in sprint
├── Visible progress
└── Part of sprint work

DOCUMENTATION:
─────────────────────────────────────
├── Link findings in NoteVault
├── Attach to task
├── Searchable later
├── Knowledge preserved
└── Institutional memory

FOLLOW-UP:
─────────────────────────────────────
├── Create tasks from spike
├── Implementation stories
├── Link to original spike
├── Traceability
└── Research leads to action

Best Practices

For Technical Spikes

  1. Clear question — What are we answering?
  2. Time-box strictly — Stop when time ends
  3. Defined output — Recommendation, not open research
  4. Document findings — Knowledge preserved
  5. Act on results — Spikes lead to decisions

Anti-Patterns

SPIKE MISTAKES:
✗ No clear question
✗ Open-ended time
✗ No defined output
✗ Gold-plating prototype
✗ Not documenting
✗ Never deciding
✗ Too many spikes
✗ Spikes as an excuse for unfocused work