Test Automation Strategies
Good test automation catches bugs before users do. Bad test automation is slow, flaky, and gets ignored. The goal is building automation that teams trust and maintain. This guide covers practical approaches to test automation.
Testing Pyramid
| Level | Speed | Cost | Coverage |
|---|---|---|---|
| Unit | Fast | Low | Many |
| Integration | Medium | Medium | Some |
| E2E | Slow | High | Few |
Test Types
Different Testing Levels
TESTING PYRAMID
═══════════════
              ▲
             / \
            /   \        E2E Tests
           / E2E \       (few, slow)
          /───────\
         /         \
        /  Integr-  \    Integration Tests
       /    ation    \   (some, medium)
      /───────────────\
     /                 \
    /    Unit Tests     \  Unit Tests
   /                     \ (many, fast)
  /───────────────────────\
UNIT TESTS:
─────────────────────────────────────
Purpose:
├── Test individual functions/classes
├── Isolated from dependencies
├── Fast execution
├── Many tests
├── Foundation of automation
└── Majority of tests
Characteristics:
├── Run in milliseconds
├── No external dependencies
├── No database, no network
├── Mock external calls
├── Run frequently
└── Developer-owned
INTEGRATION TESTS:
─────────────────────────────────────
Purpose:
├── Test component interactions
├── Real database (test instance)
├── Real API calls (to test env)
├── Verify integrations work
└── Medium coverage
Characteristics:
├── Run in seconds
├── Some external dependencies
├── Test database
├── More realistic
├── Run on CI
└── Critical paths
E2E TESTS:
─────────────────────────────────────
Purpose:
├── Test full user flows
├── Real browser/app
├── End-to-end verification
├── Key journeys only
└── Highest confidence
Characteristics:
├── Run in minutes
├── Full environment
├── Slower, more expensive
├── Flakier (more moving parts)
├── Run before release
└── Selective coverage
Unit Testing
Foundation of Automation
UNIT TEST BEST PRACTICES
════════════════════════
GOOD UNIT TEST:
─────────────────────────────────────
Characteristics:
├── Fast (< 100ms)
├── Isolated (no external deps)
├── Deterministic (same result)
├── Focused (tests one thing)
├── Readable (documentation)
└── Maintainable
Example:
describe('calculateTotal', () => {
  it('sums item prices', () => {
    const items = [
      { price: 10, quantity: 2 },
      { price: 5, quantity: 1 }
    ];
    expect(calculateTotal(items)).toBe(25);
  });

  it('returns 0 for empty cart', () => {
    expect(calculateTotal([])).toBe(0);
  });

  it('handles negative quantities', () => {
    const items = [{ price: 10, quantity: -1 }];
    expect(() => calculateTotal(items))
      .toThrow('Invalid quantity');
  });
});
TEST NAMING:
─────────────────────────────────────
Clear names:
├── Describe what is being tested
├── Describe expected behavior
├── Describe conditions
└── Self-documenting
Format: "should [expected behavior] when [condition]"
├── "should return 0 when cart is empty"
├── "should throw error when quantity negative"
├── "should apply discount when code valid"
└── Readable as specification
COVERAGE:
─────────────────────────────────────
Coverage targets:
├── 80% is a good goal
├── 100% is often overkill
├── Focus on critical paths
├── Don't game coverage
├── Coverage ≠ quality
└── Meaningful tests > high %
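A coverage target is only useful if the build enforces it. A minimal sketch of a Jest config that fails the run when coverage drops below target (the `coverageThreshold` option is standard Jest; the numbers and the `./src/billing/` path are illustrative):

```javascript
// jest.config.js — fail the test run when coverage drops below target
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 80,    // pragmatic goal, not a mandate
      branches: 70, // branch coverage usually lags line coverage
    },
    // Hold critical paths to a higher bar than the global target
    './src/billing/': { lines: 95 },
  },
};
```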
Integration Testing
Testing Connections
INTEGRATION TEST STRATEGIES
═══════════════════════════
API INTEGRATION TESTS:
─────────────────────────────────────
Test real API:
describe('User API', () => {
  it('creates and retrieves user', async () => {
    // Create
    const createRes = await api.post('/users', {
      name: 'Test User',
      email: 'test@example.com'
    });
    expect(createRes.status).toBe(201);

    // Retrieve
    const userId = createRes.data.id;
    const getRes = await api.get(`/users/${userId}`);
    expect(getRes.data.name).toBe('Test User');

    // Cleanup
    await api.delete(`/users/${userId}`);
  });
});
DATABASE INTEGRATION:
─────────────────────────────────────
Test with real database:
├── Use test database
├── Setup/teardown per test
├── Transactions for isolation
├── Realistic data
└── Catch real issues
describe('UserRepository', () => {
  beforeEach(async () => {
    await db.clear('users');
  });

  it('saves and finds user', async () => {
    await userRepo.save({ name: 'Test' });
    const users = await userRepo.findAll();
    expect(users).toHaveLength(1);
  });
});
CONTRACT TESTING:
─────────────────────────────────────
API contract verification:
├── Consumer defines expected contract
├── Provider verifies it can fulfill
├── Catches breaking changes
├── Tools: Pact, Dredd
└── API compatibility guaranteed
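Tools like Pact automate the consumer/provider handshake; the underlying idea is simple enough to sketch by hand. Below, a hypothetical consumer declares the response shape it depends on, and the provider's payload is checked against it (all names are illustrative, not a Pact API):

```javascript
// A consumer-defined contract: field names and the types the consumer relies on
const userContract = {
  id: 'number',
  name: 'string',
  email: 'string',
};

// Verify that a provider response satisfies the contract
function satisfiesContract(contract, response) {
  return Object.entries(contract).every(
    ([field, type]) => typeof response[field] === type
  );
}

// A provider that renamed or dropped `email` would fail this check
const payload = { id: 1, name: 'Test User', email: 'test@example.com' };
console.log(satisfiesContract(userContract, payload)); // true
```

Running checks like this in the provider's CI is what catches breaking changes before consumers see them.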
E2E Testing
Full Flow Testing
E2E TEST STRATEGIES
═══════════════════
WHAT TO E2E TEST:
─────────────────────────────────────
Critical user journeys:
├── Sign up / login
├── Main purchase flow
├── Key feature workflows
├── Payment processing
├── High-value paths
└── Not everything
E2E TEST EXAMPLE:
─────────────────────────────────────
Using Playwright:
test('user can complete purchase', async ({ page }) => {
  // Login
  await page.goto('/login');
  await page.fill('[name=email]', 'user@test.com');
  await page.fill('[name=password]', 'password');
  await page.click('button[type=submit]');

  // Add to cart
  await page.goto('/products/1');
  await page.click('button.add-to-cart');

  // Checkout
  await page.goto('/cart');
  await page.click('button.checkout');

  // Payment
  await page.fill('[name=card]', '4242424242424242');
  await page.click('button.pay');

  // Verify
  await expect(page.locator('.success-message'))
    .toBeVisible();
});
AVOIDING FLAKINESS:
─────────────────────────────────────
├── Wait for elements, not time
├── Use data-testid for selectors
├── Isolate test data
├── Handle loading states
├── Retry on network issues
├── Clean environment
└── Stable tests > more tests
Flaky Tests
Dealing with Flakiness
HANDLING FLAKY TESTS
════════════════════
WHAT MAKES TESTS FLAKY:
─────────────────────────────────────
├── Time dependencies
├── Random data without seeds
├── Shared test state
├── Network timing
├── UI animation timing
├── External service dependencies
├── Race conditions
└── Many causes
STRATEGIES:
─────────────────────────────────────
Fix or delete:
├── Don't ignore flaky tests
├── They destroy trust
├── Fix the root cause
├── Or delete if not valuable
└── Zero tolerance
Isolation:
├── Each test independent
├── No shared state
├── Setup/teardown per test
├── Parallel-safe
└── Reproducible
Proper waits:
├── Wait for condition, not time
├── page.waitForSelector() not sleep(3000)
├── Explicit over implicit
├── Appropriate timeouts
└── Network-aware
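Playwright's `waitForSelector` applies this pattern for you; for conditions outside the browser, the same idea is a short polling helper. A sketch (not a library API; `waitFor` and its options are made up here):

```javascript
// Poll a condition until it holds, instead of sleeping a fixed time.
// Resolves as soon as `condition()` is truthy; rejects after `timeoutMs`.
async function waitFor(condition, { timeoutMs = 5000, intervalMs = 50 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}

// Usage: wait for a background job to finish instead of sleep(3000)
// await waitFor(() => jobQueue.isEmpty(), { timeoutMs: 10000 });
```

The test finishes as soon as the condition is true (fast on fast machines) and only fails after a real timeout, which is what makes it network-aware.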
Deterministic data:
├── Seed random generators
├── Fixed test data
├── Controlled environment
├── Predictable results
└── Reproducible
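`Math.random()` cannot be seeded, so reproducible test data in JavaScript needs a seedable generator. A minimal sketch using mulberry32, a well-known tiny PRNG (the seed value is arbitrary):

```javascript
// mulberry32: a tiny seedable PRNG — same seed, same sequence, every run
function seededRandom(seed) {
  let state = seed >>> 0;
  return function () {
    state = (state + 0x6d2b79f5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

// Two generators with the same seed produce identical test data
const a = seededRandom(42);
const b = seededRandom(42);
console.log(a() === b()); // true — reproducible across runs
```

Log the seed on failure and any "random" test case can be replayed exactly.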
QUARANTINE FLAKY:
─────────────────────────────────────
If you can't fix immediately:
├── Move to separate suite
├── Don't block CI
├── Track for fixing
├── Time-boxed quarantine
├── Don't let them rot
└── Fix soon
CI Integration
Tests in Pipeline
CI TEST INTEGRATION
═══════════════════
PIPELINE STRUCTURE:
─────────────────────────────────────
stages:
  - lint
  - unit-tests     # Fast, first
  - integration    # Medium
  - e2e            # Slow, last

unit-tests:
  stage: unit-tests
  script:
    - npm test
  timeout: 5m

integration:
  stage: integration
  services:
    - postgres
  script:
    - npm run test:integration
  timeout: 15m

e2e:
  stage: e2e
  script:
    - npm run test:e2e
  timeout: 30m
  only:
    - main
FAST FEEDBACK:
─────────────────────────────────────
├── Lint first (fastest)
├── Unit tests second
├── Fail fast on errors
├── Integration if unit passes
├── E2E last (slowest)
└── Quick feedback loop
TEST REPORTING:
─────────────────────────────────────
├── JUnit XML reports
├── Coverage reports
├── Test duration tracking
├── Failure screenshots
├── Visible in CI
└── Easy debugging
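With Jest, JUnit XML for the CI server is typically produced by a reporter package. A sketch assuming the common `jest-junit` package is installed (output paths are illustrative):

```javascript
// jest.config.js — emit JUnit XML for CI alongside normal console output
module.exports = {
  reporters: [
    'default', // keep the human-readable console reporter
    ['jest-junit', { outputDirectory: 'reports', outputName: 'junit.xml' }],
  ],
};
```

Most CI systems (GitLab, Jenkins, CircleCI) can pick up the XML file and render per-test pass/fail history.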
GitScrum Integration
Test Tasks
GITSCRUM FOR TESTING
════════════════════
TEST TASK TRACKING:
─────────────────────────────────────
├── Test tasks in backlog
├── Label: test, automation
├── Linked to features
├── Definition of done includes tests
└── Testing visible
BUG TRACKING:
─────────────────────────────────────
When a test catches a bug:
├── Create bug task
├── Link to failing test
├── Priority based on severity
├── Tracked to resolution
└── Full traceability
QUALITY METRICS:
─────────────────────────────────────
├── Test pass rates
├── Coverage trends
├── Bug escape rates
├── Data for improvement
└── Quality visibility
Best Practices
For Test Automation
- Pyramid approach — Many unit, few E2E
- Fast feedback — Quick test runs
- Zero flaky tolerance — Fix or delete
- Meaningful tests — Not just coverage
- CI integration — Automated always
Anti-Patterns
TEST AUTOMATION MISTAKES:
✗ Inverted pyramid (too many E2E)
✗ Ignoring flaky tests
✗ Slow test suites
✗ Tests not in CI
✗ Testing implementation, not behavior
✗ Coverage without value
✗ No test maintenance
✗ Brittle selectors