
Performance Optimization Workflow | Measure First

Systematic performance workflow: measure baseline, find bottlenecks, optimize, verify. Track latency percentiles and Core Web Vitals with GitScrum.


Performance optimization without measurement is guessing. Good performance work starts with data, targets specific bottlenecks, and measures results. This guide covers a systematic approach to performance optimization.

Optimization Cycle

Step       Action                  Output
Measure    Baseline current state  Data
Analyze    Find bottleneck         Target
Optimize   Fix issue               Change
Verify     Measure again           Result

Setting Goals

Performance Targets

PERFORMANCE GOALS
═════════════════

DEFINE TARGETS:
─────────────────────────────────────
Be specific:
├── "Page load < 2 seconds"
├── "API response p95 < 200ms"
├── "Throughput > 1000 req/sec"
├── "Error rate < 0.1%"
├── Measurable targets
└── User-focused

PERCENTILES:
─────────────────────────────────────
Use percentiles, not averages:
├── p50 (median): half of requests complete within this time
├── p95: 95% of requests complete within this time
├── p99: 99% of requests complete within this time
├── p99.9: for critical paths
└── Tail latency matters

Example:
├── Average: 100ms (hides problems)
├── p50: 50ms (half are fast)
├── p95: 150ms (most are fine)
├── p99: 2000ms (some very slow!)
└── p99 reveals real issues
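
The average-vs-percentile gap is easy to reproduce. A minimal sketch in plain JavaScript, using the nearest-rank method (real monitoring tools may interpolate differently):

```javascript
// Nearest-rank percentile over raw latency samples (ms).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// 98 fast requests plus 2 slow outliers: the average looks healthy,
// but p99 exposes the 2-second tail.
const latencies = [...Array(98).fill(50), 2000, 2000];
const avg = latencies.reduce((a, b) => a + b, 0) / latencies.length;

console.log(avg);                       // 89 (hides the outliers)
console.log(percentile(latencies, 50)); // 50
console.log(percentile(latencies, 99)); // 2000
```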

SLA TARGETS:
─────────────────────────────────────
Service Level Agreements:
├── "99.9% of requests < 500ms"
├── "p99 < 1 second"
├── "Error rate < 0.01%"
├── Contractual obligations
└── Must meet

USER EXPERIENCE:
─────────────────────────────────────
Core Web Vitals:
├── LCP (Largest Contentful Paint): < 2.5s
├── FID (First Input Delay): < 100ms (superseded by INP)
├── CLS (Cumulative Layout Shift): < 0.1
├── INP (Interaction to Next Paint): < 200ms
└── User-facing metrics

Baseline Measurement

Current State

BASELINE MEASUREMENT
════════════════════

BEFORE OPTIMIZATION:
─────────────────────────────────────
Measure current state:
├── Run load tests
├── Collect production metrics
├── Profile application
├── Document baseline
├── Compare later
└── Know starting point

LOAD TESTING:
─────────────────────────────────────
Tools:
├── k6, Locust, JMeter
├── Artillery, Gatling
├── Simulate real traffic
├── Measure under load
└── Find limits

Example k6 test:
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '1m', target: 100 },  // ramp up to 100 virtual users
    { duration: '3m', target: 100 },  // hold steady load
    { duration: '1m', target: 0 },    // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<200'], // fail the run if p95 >= 200ms
  },
};

export default function () {
  http.get('https://api.example.com/users');
  sleep(1); // pace each virtual user
}

METRICS TO CAPTURE:
─────────────────────────────────────
├── Response times (p50, p95, p99)
├── Throughput (req/sec)
├── Error rate
├── CPU utilization
├── Memory usage
├── Database query times
├── External service latency
└── Complete picture

Finding Bottlenecks

Profiling

FINDING BOTTLENECKS
═══════════════════

PROFILING TOOLS:
─────────────────────────────────────
Application:
├── APM tools (Datadog, New Relic)
├── Language profilers
├── Flame graphs
├── Trace analysis
└── Where is time spent?

Database:
├── Slow query logs
├── Query explain plans
├── Index analysis
├── Connection pool stats
└── Database-specific

Infrastructure:
├── CPU/memory monitoring
├── Network latency
├── Disk I/O
├── Container metrics
└── Resource constraints

COMMON BOTTLENECKS:
─────────────────────────────────────
Database:
├── Missing indexes
├── N+1 queries
├── Slow queries
├── Connection exhaustion
└── Often the bottleneck

Network:
├── External API calls
├── Large payloads
├── Too many requests
├── No caching
└── Latency adds up

Application:
├── Inefficient algorithms
├── Memory leaks
├── Blocking operations
├── CPU-bound processing
└── Code problems

FLAME GRAPH ANALYSIS:
─────────────────────────────────────
│████████████████████████████████│
│█ db.query ██████████████████   │
│█ http.get █████│ │ process │   │
│█ │ │ json ███  │ │ result  │   │
└────────────────────────────────┘

Reading:
├── Width = time spent
├── Stacks show call hierarchy
├── Wide = slow (optimize this)
├── Find widest blocks
└── Visual bottleneck identification

Optimization Techniques

Common Fixes

OPTIMIZATION TECHNIQUES
═══════════════════════

DATABASE:
─────────────────────────────────────
Add missing indexes:
CREATE INDEX idx_users_email ON users(email);

Fix N+1 queries:
├── Use eager loading
├── Batch queries
├── Join instead of loop
└── Reduce query count
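
A sketch of why batching helps, with the same lookup done both ways in plain JavaScript; the `db` object and its two query methods are hypothetical stand-ins for an ORM or query builder:

```javascript
// Hypothetical in-memory "database" standing in for a real ORM.
const db = {
  users: [{ id: 1 }, { id: 2 }, { id: 3 }],
  posts: [
    { userId: 1, title: 'a' },
    { userId: 2, title: 'b' },
    { userId: 2, title: 'c' },
  ],
  queryCount: 0, // each method call simulates one database round trip
  postsByUser(id) {
    this.queryCount++;
    return this.posts.filter((p) => p.userId === id);
  },
  postsByUsers(ids) {
    this.queryCount++;
    return this.posts.filter((p) => ids.includes(p.userId));
  },
};

// N+1: one posts query per user inside the loop.
function loadNaive() {
  return db.users.map((u) => ({ ...u, posts: db.postsByUser(u.id) }));
}

// Batched: a single IN-style query, grouped in memory afterwards.
function loadBatched() {
  const all = db.postsByUsers(db.users.map((u) => u.id));
  return db.users.map((u) => ({
    ...u,
    posts: all.filter((p) => p.userId === u.id),
  }));
}
```

With 3 users the naive version issues 3 post queries and the batched version issues 1; the gap grows linearly with the result set.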

Query optimization:
├── EXPLAIN ANALYZE
├── Rewrite slow queries
├── Add WHERE clauses
├── Limit result sets
└── Efficient queries

CACHING:
─────────────────────────────────────
Layers:
├── Application cache (Redis)
├── Database query cache
├── CDN for static content
├── Browser caching
└── Appropriate cache levels

Cache patterns:
├── Cache-aside (most common)
├── Write-through
├── Read-through
├── TTL-based expiration
└── Invalidation strategy
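
A minimal cache-aside sketch with TTL expiration, using a `Map` as a stand-in for Redis (the key name and TTL below are illustrative):

```javascript
// Cache-aside: check the cache first, load from the source on a miss,
// then populate the cache with a TTL.
const cache = new Map();

function cacheAside(key, ttlMs, loadFn, now = Date.now()) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > now) return hit.value; // hit: skip the source
  const value = loadFn();                           // miss: hit the source
  cache.set(key, { value, expiresAt: now + ttlMs });
  return value;
}
```

A second call within the TTL never touches the backing store; after `ttlMs` the entry is reloaded. Invalidation on writes still has to be handled separately (e.g. `cache.delete(key)`).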

API OPTIMIZATION:
─────────────────────────────────────
├── Pagination
├── Field selection
├── Compression (gzip)
├── Connection pooling
├── Async processing
├── Background jobs
└── Reduce work per request
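
Pagination, the first item above, fits in a few lines of plain JavaScript (the response shape is illustrative, not a fixed API contract):

```javascript
// Offset pagination: return one page of results plus paging metadata,
// so no single request serializes the whole result set.
function paginate(items, page, perPage) {
  const start = (page - 1) * perPage;
  return {
    data: items.slice(start, start + perPage),
    page,
    perPage,
    total: items.length,
    totalPages: Math.ceil(items.length / perPage),
  };
}
```

For large, frequently changing tables, cursor (keyset) pagination usually beats deep offsets, since the database can seek by index instead of scanning and discarding the skipped rows.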

FRONTEND:
─────────────────────────────────────
├── Code splitting
├── Lazy loading
├── Image optimization
├── Bundle size reduction
├── CDN for assets
└── Faster initial load

Verification

Measuring Results

VERIFY IMPROVEMENTS
═══════════════════

A/B COMPARISON:
─────────────────────────────────────
Before:
├── p50: 150ms
├── p95: 500ms
├── p99: 2000ms
└── Throughput: 500 req/s

After:
├── p50: 50ms (66% faster)
├── p95: 120ms (76% faster)
├── p99: 300ms (85% faster)
└── Throughput: 1500 req/s (3x)
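
The percentages in the comparison follow from a one-line calculation (rounded down, matching the figures above):

```javascript
// Latency reduction between a baseline and an optimized run, in percent.
const improvement = (before, after) => Math.floor(((before - after) * 100) / before);

console.log(improvement(150, 50));   // 66 (p50)
console.log(improvement(500, 120));  // 76 (p95)
console.log(improvement(2000, 300)); // 85 (p99)
```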

LOAD TEST AGAIN:
─────────────────────────────────────
├── Same test as baseline
├── Same conditions
├── Compare results
├── Quantify improvement
└── Data-driven validation

PRODUCTION MONITORING:
─────────────────────────────────────
After deploy:
├── Watch metrics
├── Compare to before
├── User experience improved?
├── Error rate unchanged?
├── Real-world validation
└── Actual impact

DOCUMENT RESULTS:
─────────────────────────────────────
Performance improvement:
├── What was the problem
├── What was changed
├── Before metrics
├── After metrics
├── Percentage improvement
└── Future reference

GitScrum Integration

Tracking Performance Work

GITSCRUM FOR PERFORMANCE
════════════════════════

PERFORMANCE TASKS:
─────────────────────────────────────
├── Label: performance
├── Priority based on impact
├── Linked to metrics
├── Before/after documented
└── Tracked work

TASK STRUCTURE:
─────────────────────────────────────
Task: "Optimize user search API"

Description:
├── Current p95: 800ms
├── Target p95: 200ms
├── Cause: Missing index + N+1 query
└── Approach: Add index, eager load

Acceptance:
├── p95 < 200ms
├── Throughput > 500 req/s
├── Verified in production
└── Clear criteria

DOCUMENTATION:
─────────────────────────────────────
NoteVault:
├── Performance baselines
├── Optimization history
├── Common issues
├── Runbooks
└── Knowledge base

Best Practices

For Performance Optimization

  • Measure first — No guessing
  • One change at a time — Know what helped
  • Target bottlenecks — Biggest impact first
  • Verify in production — Real-world results
  • Document everything — Learn for next time

Anti-Patterns

PERFORMANCE MISTAKES:
✗ Optimizing without measuring
✗ Premature optimization
✗ Multiple changes at once
✗ Ignoring percentiles
✗ Lab testing only, no production validation
✗ No baseline
✗ Not monitoring after deploy
✗ Micro-optimizations before fixing real bottlenecks
    
