Performance Optimization Workflow | Measure First
Systematic performance workflow: measure baseline, find bottlenecks, optimize, verify. Track latency percentiles and Core Web Vitals with GitScrum.
6 min read
Performance optimization without measurement is guessing. Good performance work starts with data, targets specific bottlenecks, and measures results. This guide covers a systematic approach to performance optimization.
Optimization Cycle
| Step | Action | Output |
|---|---|---|
| Measure | Capture the current baseline | Data |
| Analyze | Find the bottleneck | Target |
| Optimize | Fix the issue | Change |
| Verify | Measure again | Result |
Setting Goals
Performance Targets
PERFORMANCE GOALS
═════════════════

DEFINE TARGETS:
─────────────────────────────────────
Be specific:
├── "Page load < 2 seconds"
├── "API response p95 < 200ms"
├── "Throughput > 1000 req/sec"
├── "Error rate < 0.1%"
├── Measurable targets
└── User-focused

PERCENTILES:
─────────────────────────────────────
Use percentiles, not averages:
├── p50 (median): half of requests are faster than this
├── p95: 95% of requests are faster
├── p99: 99% of requests are faster
├── p99.9: for critical paths
└── Tail latency matters

Example:
├── Average: 100ms (hides problems)
├── p50: 50ms (half are fast)
├── p95: 150ms (most are fine)
├── p99: 2000ms (some very slow!)
└── p99 reveals real issues
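The gap between a healthy-looking average and a bad tail is easy to demonstrate. A minimal sketch, assuming a hypothetical `percentile` helper using the nearest-rank method (not a specific library):

```javascript
// Nearest-rank percentile: sort the samples and pick the value
// below which p% of requests fall.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, Math.min(idx, sorted.length - 1))];
}

// 98 fast requests plus two 2000ms outliers: the mean looks fine,
// but p99 exposes the slow tail.
const latencies = [...Array(98).fill(50), 2000, 2000];
const mean = latencies.reduce((a, b) => a + b, 0) / latencies.length;

console.log(mean);                      // 89 — looks healthy
console.log(percentile(latencies, 50)); // 50
console.log(percentile(latencies, 99)); // 2000 — the real problem
```

Averaging 100 samples dilutes two 2000ms outliers down to 89ms; the p99 does not.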
SLA TARGETS:
─────────────────────────────────────
Service Level Agreements:
├── "99.9% of requests < 500ms"
├── "p99 < 1 second"
├── "Error rate < 0.01%"
├── Contractual obligations
└── Must be met

USER EXPERIENCE:
─────────────────────────────────────
Core Web Vitals:
├── LCP (Largest Contentful Paint): < 2.5s
├── FID (First Input Delay): < 100ms (replaced by INP in 2024)
├── CLS (Cumulative Layout Shift): < 0.1
├── INP (Interaction to Next Paint): < 200ms
└── User-facing metrics
Baseline Measurement
Current State
BASELINE MEASUREMENT
════════════════════

BEFORE OPTIMIZATION:
─────────────────────────────────────
Measure the current state:
├── Run load tests
├── Collect production metrics
├── Profile the application
├── Document the baseline
├── Compare later
└── Know your starting point
LOAD TESTING:
─────────────────────────────────────
Tools:
├── k6, Locust, JMeter
├── Artillery, Gatling
├── Simulate real traffic
├── Measure under load
└── Find limits

Example k6 test:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '1m', target: 100 }, // ramp up to 100 virtual users
    { duration: '3m', target: 100 }, // hold steady load
    { duration: '1m', target: 0 },   // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<200'], // fail the run if p95 exceeds 200ms
  },
};

export default function () {
  http.get('https://api.example.com/users');
  sleep(1); // each virtual user pauses between iterations
}
```
METRICS TO CAPTURE:
─────────────────────────────────────
├── Response times (p50, p95, p99)
├── Throughput (req/sec)
├── Error rate
├── CPU utilization
├── Memory usage
├── Database query times
├── External service latency
└── Complete picture
Finding Bottlenecks
Profiling
FINDING BOTTLENECKS
═══════════════════

PROFILING TOOLS:
─────────────────────────────────────
Application:
├── APM tools (Datadog, New Relic)
├── Language profilers
├── Flame graphs
├── Trace analysis
└── Where is the time spent?

Database:
├── Slow query logs
├── Query explain plans
├── Index analysis
├── Connection pool stats
└── Database-specific

Infrastructure:
├── CPU/memory monitoring
├── Network latency
├── Disk I/O
├── Container metrics
└── Resource constraints
COMMON BOTTLENECKS:
─────────────────────────────────────
Database:
├── Missing indexes
├── N+1 queries
├── Slow queries
├── Connection exhaustion
└── Often the bottleneck

Network:
├── External API calls
├── Large payloads
├── Too many requests
├── No caching
└── Latency adds up

Application:
├── Inefficient algorithms
├── Memory leaks
├── Blocking operations
├── CPU-bound processing
└── Code problems
FLAME GRAPH ANALYSIS:
─────────────────────────────────────
┌────────────────────────────────────────┐
│             handle_request             │
├──────────┬──────────────────┬──────────┤
│ http.get │     db.query     │ process  │
├──────────┤                  │ result   │
│   json   │                  │          │
└──────────┴──────────────────┴──────────┘

Reading:
├── Width = time spent
├── Stacks show call hierarchy
├── Wide = slow (optimize this)
├── Find the widest blocks
└── Visual bottleneck identification
Optimization Techniques
Common Fixes
OPTIMIZATION TECHNIQUES
═══════════════════════

DATABASE:
─────────────────────────────────────
Add missing indexes:

```sql
CREATE INDEX idx_users_email ON users(email);
```

Fix N+1 queries:
├── Use eager loading
├── Batch queries
├── Join instead of loop
└── Reduce query count

Query optimization:
├── EXPLAIN ANALYZE
├── Rewrite slow queries
├── Add WHERE clauses
├── Limit result sets
└── Efficient queries
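The N+1 fix can be sketched in application code. Both functions below assume a hypothetical async `db.query(sql, params)` interface standing in for your actual driver:

```javascript
// N+1 anti-pattern: 1 query for the users, then 1 more per user.
async function postsNPlusOne(db) {
  const users = await db.query('SELECT id FROM users');
  const result = [];
  for (const user of users) {
    const posts = await db.query(
      'SELECT * FROM posts WHERE user_id = $1', [user.id]);
    result.push({ ...user, posts });
  }
  return result; // N+1 round trips total
}

// Batched fix: 2 queries total, joined in memory.
async function postsBatched(db) {
  const users = await db.query('SELECT id FROM users');
  const ids = users.map((u) => u.id);
  const posts = await db.query(
    'SELECT * FROM posts WHERE user_id = ANY($1)', [ids]);
  const byUser = new Map();
  for (const p of posts) {
    if (!byUser.has(p.user_id)) byUser.set(p.user_id, []);
    byUser.get(p.user_id).push(p);
  }
  return users.map((u) => ({ ...u, posts: byUser.get(u.id) ?? [] }));
}
```

With 100 users this turns 101 round trips into 2; ORMs expose the same idea as eager loading.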
CACHING:
─────────────────────────────────────
Layers:
├── Application cache (Redis)
├── Database query cache
├── CDN for static content
├── Browser caching
└── Appropriate cache levels

Cache patterns:
├── Cache-aside (most common)
├── Write-through
├── Read-through
├── TTL-based expiration
└── Invalidation strategy
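Cache-aside, the most common pattern above, checks the cache first and falls back to the source on a miss. A minimal sketch using an in-process Map in place of Redis; `loadUser` is a hypothetical loader standing in for the database call:

```javascript
// Cache-aside with TTL-based expiration.
const cache = new Map();
const TTL_MS = 60_000; // entries expire after 60 seconds

async function getUser(id, loadUser) {
  const hit = cache.get(id);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.value; // cache hit: no database work
  }
  const value = await loadUser(id); // cache miss: load from the source
  cache.set(id, { value, storedAt: Date.now() });
  return value;
}

// Invalidate on write so readers don't see stale data past the update.
function invalidateUser(id) {
  cache.delete(id);
}
```

The TTL bounds staleness even if an invalidation is missed; the right value is a trade-off between freshness and hit rate.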
API OPTIMIZATION:
─────────────────────────────────────
├── Pagination
├── Field selection
├── Compression (gzip)
├── Connection pooling
├── Async processing
├── Background jobs
└── Reduce work per request
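Pagination is often the cheapest of these wins: never let a client request an unbounded result set. A sketch of the input handling (names are illustrative, not a specific framework's API):

```javascript
// Clamp client-supplied paging input and translate page/size
// into SQL LIMIT/OFFSET values.
const MAX_PAGE_SIZE = 100;

function paginate(page = 1, size = 20) {
  const safeSize = Math.min(Math.max(1, size), MAX_PAGE_SIZE);
  const safePage = Math.max(1, page);
  return { limit: safeSize, offset: (safePage - 1) * safeSize };
}

// paginate(3, 20)   → { limit: 20, offset: 40 }
// paginate(1, 9999) → { limit: 100, offset: 0 }  (size clamped)
```

The returned values map directly onto `LIMIT ? OFFSET ?`; for very deep pages, keyset (cursor) pagination avoids the growing OFFSET scan.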
FRONTEND:
─────────────────────────────────────
├── Code splitting
├── Lazy loading
├── Image optimization
├── Bundle size reduction
├── CDN for assets
└── Faster initial load
Verification
Measuring Results
VERIFY IMPROVEMENTS
═══════════════════

A/B COMPARISON:
─────────────────────────────────────
Before:
├── p50: 150ms
├── p95: 500ms
├── p99: 2000ms
└── Throughput: 500 req/s

After:
├── p50: 50ms (66% faster)
├── p95: 120ms (76% faster)
├── p99: 300ms (85% faster)
└── Throughput: 1500 req/s (3x)
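The percentages above are simply (before − after) / before; a quick sketch for computing them from your own baseline numbers:

```javascript
// Percent improvement between a baseline and a post-optimization
// measurement, floored to a whole percent.
function improvementPct(before, after) {
  return Math.floor(((before - after) / before) * 100);
}

console.log(improvementPct(150, 50));   // 66 — p50
console.log(improvementPct(500, 120));  // 76 — p95
console.log(improvementPct(2000, 300)); // 85 — p99
console.log(1500 / 500);                // 3  — throughput multiplier
```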
LOAD TEST AGAIN:
─────────────────────────────────────
├── Same test as the baseline
├── Same conditions
├── Compare results
├── Quantify the improvement
└── Data-driven validation

PRODUCTION MONITORING:
─────────────────────────────────────
After deploy:
├── Watch the metrics
├── Compare to before
├── Did user experience improve?
├── Is the error rate unchanged?
├── Real-world validation
└── Actual impact

DOCUMENT RESULTS:
─────────────────────────────────────
Performance improvement record:
├── What the problem was
├── What was changed
├── Before metrics
├── After metrics
├── Percentage improvement
└── Future reference
GitScrum Integration
Tracking Performance Work
GITSCRUM FOR PERFORMANCE
════════════════════════

PERFORMANCE TASKS:
─────────────────────────────────────
├── Label: performance
├── Priority based on impact
├── Linked to metrics
├── Before/after documented
└── Tracked work

TASK STRUCTURE:
─────────────────────────────────────
Task: "Optimize user search API"

Description:
├── Current p95: 800ms
├── Target p95: 200ms
├── Cause: Missing index + N+1 query
└── Approach: Add index, eager load

Acceptance:
├── p95 < 200ms
├── Throughput > 500 req/s
├── Verified in production
└── Clear criteria

DOCUMENTATION:
─────────────────────────────────────
NoteVault:
├── Performance baselines
├── Optimization history
├── Common issues
├── Runbooks
└── Knowledge base
Best Practices
For Performance Optimization
Anti-Patterns
PERFORMANCE MISTAKES:
✗ Optimizing without measuring
✗ Premature optimization
✗ Making multiple changes at once
✗ Ignoring percentiles
✗ Testing only in the lab, never in production
✗ No baseline to compare against
✗ Not monitoring after the deploy
✗ Micro-optimizations with no measurable impact