Quality Assurance Best Practices
Quality isn't a phase—it's a practice woven throughout development. The best teams don't test quality in at the end; they build it in from the start. This guide covers practical approaches to quality assurance that scale with your team.
Testing Pyramid
| Level | Speed | Scope | Quantity |
|---|---|---|---|
| Unit | Fast | Small | Many |
| Integration | Medium | Medium | Some |
| E2E | Slow | Full | Few |
| Manual | Slowest | Variable | Targeted |
Shift Left
Quality From the Start
SHIFT LEFT APPROACH
═══════════════════
TRADITIONAL (TEST AT THE END):
─────────────────────────────────────
Requirements → Dev → Dev → Dev → QA → QA → Deploy
QA at the end:
├── Bugs found late
├── Expensive to fix
├── Pressure to ship anyway
├── Quality an afterthought
└── Firefighting mode
SHIFT LEFT:
─────────────────────────────────────
[QA] Requirements → [QA] Dev → [QA] Review → Deploy
QA throughout:
├── QA in planning
├── QA reviews requirements
├── Tests written early
├── Testing during dev
├── Bugs caught early
└── Quality built in
PRACTICES:
─────────────────────────────────────
Planning:
├── QA reviews acceptance criteria
├── Identifies test scenarios
├── Flags complexity
├── Estimates testing effort
└── Part of the team
Development:
├── Developers write unit tests
├── TDD for critical paths
├── Pair with QA on scenarios
├── Test as you code
└── No "throw over wall"
Review:
├── Tests required in PR
├── Coverage thresholds
├── Automated checks pass
├── QA reviews functionality
└── Gate before merge
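One way to make "tests written early" concrete is to turn each acceptance criterion into an automated case while the feature is still being built. A minimal pytest sketch, with a hypothetical discount rule standing in for the story under development:

```python
import pytest

# Hypothetical function under test, taken from the story's acceptance
# criteria: "10% off orders of $100 or more".
def calculate_discount(order_total: float) -> float:
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

# Each acceptance criterion becomes a test case before (or while)
# the feature is built, so the tests double as the spec.
@pytest.mark.parametrize(
    ("order_total", "expected_discount"),
    [
        (99.99, 0.0),    # just under the threshold: no discount
        (100.00, 10.0),  # boundary: discount applies
        (250.00, 25.0),  # typical order above the threshold
    ],
)
def test_discount_matches_acceptance_criteria(order_total, expected_discount):
    assert calculate_discount(order_total) == expected_discount
```

The parametrize table doubles as a readable restatement of the acceptance criteria, which is exactly what QA reviews during planning.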
Automation Strategy
What to Automate
AUTOMATION DECISIONS
════════════════════
AUTOMATE:
─────────────────────────────────────
Unit tests (high value, fast):
├── Business logic
├── Calculations
├── Data transformations
├── Edge cases
└── Run on every commit
Integration tests (medium value):
├── API endpoints
├── Database operations
├── Service interactions
├── Key flows
└── Run on every PR
E2E tests (selective):
├── Critical user journeys
├── Smoke tests
├── Happy paths
├── Revenue-impacting flows
└── Run on deploy
Regression suite:
├── Previously found bugs
├── Fixed issues stay fixed
├── High-risk areas
└── Run regularly
DON'T AUTOMATE:
─────────────────────────────────────
├── Exploratory testing
├── Usability evaluation
├── One-time checks
├── Rapidly changing UI
├── Edge cases still being discovered
└── Manual is more effective
AUTOMATION PRIORITY:
─────────────────────────────────────
ROI = (manual time × frequency) / automation cost (worked example below)
Automate first:
├── Run many times
├── Time-consuming manually
├── Critical paths
├── Stable functionality
└── High payoff
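Plugging hypothetical numbers into the ROI formula shows how quickly a frequent manual check justifies automation:

```python
def automation_roi(manual_minutes: float, runs_per_month: float,
                   automation_cost_minutes: float) -> float:
    """ROI = (manual time × frequency) / automation cost."""
    return (manual_minutes * runs_per_month) / automation_cost_minutes

# Hypothetical numbers: a 30-minute manual check run 20 times a month,
# versus roughly two days (960 minutes) to automate it.
roi = automation_roi(manual_minutes=30, runs_per_month=20,
                     automation_cost_minutes=960)
print(f"ROI: {roi:.2f}")  # 0.62 in month one; cumulative ROI passes 1 in month two
```

The automation cost is paid once while the manual cost recurs, so anything run frequently on stable functionality pays for itself fast.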
Test Pyramid
TESTING PYRAMID
═══════════════
           ╱╲
          ╱  ╲
         ╱ E2E╲          Few
        ╱ tests╲         (10%)
       ╱────────╲
      ╱          ╲
     ╱ Integration╲      Some
    ╱     tests    ╲     (20%)
   ╱────────────────╲
  ╱    Unit tests    ╲   Many
 ╱────────────────────╲  (70%)
UNIT TESTS (Base):
─────────────────────────────────────
├── Fast (milliseconds)
├── Many (hundreds/thousands)
├── Test one thing each
├── No external dependencies
├── Run constantly
└── Developer-written
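"No external dependencies" usually means injecting collaborators so the test can swap in a stub. A minimal sketch using unittest.mock; the rates_client interface here is hypothetical:

```python
from unittest.mock import Mock

# Hypothetical unit under test: pricing logic that normally calls an
# external exchange-rate service. The dependency is injected so a test
# can replace it with a stub and stay fast and deterministic.
def price_in_currency(amount: float, currency: str, rates_client) -> float:
    rate = rates_client.get_rate("USD", currency)
    return round(amount * rate, 2)

def test_price_in_currency_uses_current_rate():
    stub_rates = Mock()
    stub_rates.get_rate.return_value = 0.92  # fixed rate, no network call
    assert price_in_currency(100.0, "EUR", stub_rates) == 92.0
    stub_rates.get_rate.assert_called_once_with("USD", "EUR")
```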
INTEGRATION TESTS (Middle):
─────────────────────────────────────
├── Medium speed (seconds)
├── Moderate count (dozens/hundreds)
├── Test component interactions
├── May use test databases
├── Run on PR and deploy
└── Developer or QA
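A sketch of the middle layer: an in-memory SQLite database stands in for the real one, so the test exercises real SQL without network access or shared state. The create_user data layer here is hypothetical:

```python
import sqlite3
import pytest

@pytest.fixture
def db():
    # In-memory SQLite stands in for the production database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
    yield conn
    conn.close()

def create_user(conn, email: str) -> int:
    cur = conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    return cur.lastrowid

def test_create_user_persists_and_enforces_uniqueness(db):
    user_id = create_user(db, "ada@example.com")
    row = db.execute("SELECT email FROM users WHERE id = ?", (user_id,)).fetchone()
    assert row == ("ada@example.com",)
    with pytest.raises(sqlite3.IntegrityError):  # duplicate email rejected
        create_user(db, "ada@example.com")
```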
E2E TESTS (Top):
─────────────────────────────────────
├── Slow (minutes)
├── Few (dozens max)
├── Full user journeys
├── Real browser/app
├── Run before production
└── Often QA-owned
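Full E2E suites need a browser driver, but the smoke-test slice can be as thin as one HTTP check against the deployed app. A sketch using requests, assuming a hypothetical /health endpoint that returns {"status": "ok"}:

```python
import requests

BASE_URL = "https://staging.example.com"  # assumed staging host

def test_smoke_health_endpoint():
    # Cheapest E2E signal: the deployed service is up and healthy.
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"  # assumed response shape
```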
ANTI-PATTERN: ICE CREAM CONE
─────────────────────────────────────
Many E2E, few unit tests:
├── Slow feedback
├── Flaky tests
├── Hard to maintain
├── Failures hard to pinpoint
└── Invert the cone back into a pyramid
Quality Gates
CI/CD Quality
QUALITY GATES IN CI/CD
══════════════════════
ON COMMIT:
─────────────────────────────────────
├── Linting passes
├── Unit tests pass
├── Build succeeds
├── Fast feedback (<5 min)
└── Fail fast
ON PR:
─────────────────────────────────────
├── All unit tests pass
├── Integration tests pass
├── Coverage threshold met
├── Static analysis clean
├── Security scan clean
├── Code review approved
└── Gate before merge
ON MERGE TO MAIN:
─────────────────────────────────────
├── Full test suite
├── E2E tests
├── Performance benchmarks
├── Deploy to staging
├── Smoke tests
└── Ready for production
ON PRODUCTION DEPLOY:
─────────────────────────────────────
├── Smoke tests
├── Health checks
├── Monitoring active
├── Rollback ready
├── Feature flags for gradual rollout
└── Observe and confirm
EXAMPLE PIPELINE:
─────────────────────────────────────
[Commit] → Lint → Unit Tests → Build
    ↓
[PR]     → Integration → Coverage → Security
    ↓
[Merge]  → E2E → Staging → Smoke
    ↓
[Deploy] → Production → Monitor → Alerts
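A fail-fast commit-stage runner can be a few lines of script. A sketch in Python; the specific tools (ruff, pytest, python -m build) are placeholders for whatever your stack uses:

```python
import subprocess
import sys

# Hypothetical commit-stage gates: each command must pass before the
# next runs, mirroring the fail-fast pipeline above.
GATES = [
    ("lint", ["ruff", "check", "."]),
    ("unit tests", ["pytest", "tests/unit", "-q"]),
    ("build", ["python", "-m", "build"]),
]

def main() -> int:
    for name, cmd in GATES:
        print(f"== {name} ==")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Gate failed: {name}", file=sys.stderr)
            return result.returncode  # fail fast: stop at the first broken gate
    print("All gates passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```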
Exploratory Testing
Manual Testing Value
EXPLORATORY TESTING
═══════════════════
WHAT IT IS:
─────────────────────────────────────
Simultaneous activities:
├── Learning about the system
├── Designing tests
├── Executing tests
├── Analyzing results
└── Human creativity and intuition
WHEN TO DO:
─────────────────────────────────────
├── New features
├── Complex flows
├── Risk assessment
├── Edge case discovery
├── Usability evaluation
└── What automation can't replace
SESSION-BASED APPROACH:
─────────────────────────────────────
Session: 45-90 min focused exploration
Charter example:
"Explore the checkout flow
with various payment methods
to find edge cases and usability issues"
Document:
├── Time spent
├── Scenarios explored
├── Issues found
├── Questions raised
├── Areas for automation
└── Brief report
COMPLEMENT AUTOMATION:
─────────────────────────────────────
Automation: Known scenarios
Exploration: Unknown scenarios
Together:
├── Automation for regression
├── Exploration for discovery
├── Both valuable
└── Different purposes
Bug Management
Tracking and Learning
BUG LIFECYCLE
═════════════
DISCOVERY:
─────────────────────────────────────
Bug found by:
├── Automated test
├── Manual testing
├── Production monitoring
├── User report
└── Record source for analysis
DOCUMENTATION:
─────────────────────────────────────
Good bug report:
├── Clear title
├── Steps to reproduce
├── Expected behavior
├── Actual behavior
├── Environment details
├── Screenshots/logs
└── Enough to fix without questions
TRIAGE:
─────────────────────────────────────
Severity:
├── Critical: Production down
├── High: Major feature broken
├── Medium: Feature degraded
├── Low: Minor issue
└── Prioritize accordingly
FIX + TEST:
─────────────────────────────────────
├── Fix the bug
├── Add test that catches it
├── Verify test fails before fix
├── Verify test passes after fix
├── Bug → Permanent test
└── Never regress
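The "bug becomes a permanent test" step, sketched in pytest with a hypothetical ticket ID. The test is written to fail before the fix and pass after, then kept forever:

```python
# BUG-1042 (hypothetical): leading/trailing whitespace survived email
# normalization. The fix is .strip(); the test pins it in place.
def normalize_email(raw: str) -> str:
    return raw.strip().lower()

def test_bug_1042_email_whitespace_is_stripped():
    assert normalize_email("  Ada@Example.COM ") == "ada@example.com"
```

Naming the test after the ticket makes a future failure instantly traceable to the original report.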
POST-MORTEM (for significant bugs):
─────────────────────────────────────
"How did this happen?"
├── Root cause
├── Why tests didn't catch it
├── What would prevent next time
├── Action items
├── Blameless learning
└── Systemic improvement
GitScrum QA Integration
Quality Tracking
GITSCRUM QA FEATURES
════════════════════
BUG TRACKING:
─────────────────────────────────────
Task type: Bug
├── Severity field
├── Steps to reproduce
├── Environment
├── Linked to feature
├── QA workflow
└── Track to resolution
QA WORKFLOW:
─────────────────────────────────────
Custom workflow for QA:
├── New
├── Investigating
├── Cannot Reproduce
├── Fixing
├── Ready for Retest
├── Verified
├── Closed
└── Clear status
TEST STATUS:
─────────────────────────────────────
Story task checklist:
☐ Unit tests written
☐ Integration tests written
☐ Manual testing done
☐ Acceptance criteria verified
☐ Code review passed
└── Definition of Done
QUALITY DASHBOARD:
─────────────────────────────────────
├── Bugs by severity
├── Bug trend (up/down)
├── Mean time to fix
├── Test coverage
├── Escaped defects
└── Quality visibility
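Dashboard numbers like mean time to fix are simple aggregates over bug records. A toy sketch with made-up timestamps:

```python
from datetime import datetime
from statistics import mean

# Hypothetical closed bugs: (opened, fixed) timestamps.
bugs = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 17)),   # 8 hours
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 4, 10)),  # 48 hours
    (datetime(2024, 5, 3, 14), datetime(2024, 5, 3, 18)),  # 4 hours
]

hours_to_fix = [(fixed - opened).total_seconds() / 3600 for opened, fixed in bugs]
print(f"Mean time to fix: {mean(hours_to_fix):.1f} hours")  # 20.0 hours
```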
Best Practices
For Quality Assurance
- Shift left — Quality from planning, not end
- Test pyramid — Many unit, few E2E
- Automate regression — Manual for discovery
- Quality gates — CI/CD enforcement
- Learn from bugs — Post-mortems improve system
Anti-Patterns
QA MISTAKES:
✗ Testing only at end
✗ Manual-only regression testing
✗ No automated tests
✗ E2E-heavy pyramid
✗ QA as gatekeeper (not partner)
✗ Ignoring failed tests
✗ No post-mortems
✗ Quality is "QA's job"