Test Automation Strategies | Testing Pyramid Guide
Build test automation with the testing pyramid: many unit tests, some integration, few E2E. GitScrum tracks test tasks and quality metrics.
7 min read
Good test automation catches bugs before users do. Bad test automation is slow, flaky, and gets ignored. The goal is building automation that teams trust and maintain. This guide covers practical approaches to test automation.
Testing Pyramid
| Level | Speed | Cost | Coverage |
|---|---|---|---|
| Unit | Fast | Low | Many |
| Integration | Medium | Medium | Some |
| E2E | Slow | High | Few |
Test Types
Different Testing Levels
TESTING PYRAMID
───────────────

           /\
          /  \
         / E2E\       E2E Tests
        /──────\      (few, slow)
       /        \
      / Integra- \    Integration Tests
     /    tion    \   (some, medium)
    /──────────────\
   /                \
  /    Unit Tests    \  Unit Tests
 /                    \ (many, fast)
/──────────────────────\
UNIT TESTS:
─────────────────────────────────────
Purpose:
├── Test individual functions/classes
├── Isolated from dependencies
├── Fast execution
├── Many tests
├── Foundation of automation
└── Majority of tests
Characteristics:
├── Run in milliseconds
├── No external dependencies
├── No database, no network
├── Mock external calls
├── Run frequently
└── Developer-owned
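The "mock external calls" point needs no framework: inject the dependency and hand the test a fake. A minimal sketch, assuming a hypothetical `sendWelcomeEmail` function and mailer interface (not from any real codebase):

```javascript
// Unit under test takes its dependency as a parameter,
// so the test never touches a real mail server.
function sendWelcomeEmail(user, mailer) {
  if (!user.email) throw new Error('Missing email');
  mailer.send(user.email, 'Welcome!');
}

// Hand-rolled fake that just records calls.
const fakeMailer = {
  sent: [],
  send(to, subject) {
    this.sent.push({ to, subject });
  }
};

sendWelcomeEmail({ email: 'test@example.com' }, fakeMailer);
```

The test then asserts on `fakeMailer.sent` instead of inspecting a real outbox, which keeps the test fast and deterministic.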
INTEGRATION TESTS:
─────────────────────────────────────
Purpose:
├── Test component interactions
├── Real database (test instance)
├── Real API calls (to test env)
├── Verify integrations work
└── Medium coverage
Characteristics:
├── Run in seconds
├── Some external dependencies
├── Test database
├── More realistic
├── Run on CI
└── Critical paths
E2E TESTS:
─────────────────────────────────────
Purpose:
├── Test full user flows
├── Real browser/app
├── End-to-end verification
├── Key journeys only
└── Highest confidence
Characteristics:
├── Run in minutes
├── Full environment
├── Slower, more expensive
├── Flakier (more moving parts)
├── Run before release
└── Selective coverage
Unit Testing
Foundation of Automation
UNIT TEST BEST PRACTICES
────────────────────────
GOOD UNIT TEST:
─────────────────────────────────────
Characteristics:
├── Fast (< 100ms)
├── Isolated (no external deps)
├── Deterministic (same result every run)
├── Focused (tests one thing)
├── Readable (doubles as documentation)
└── Maintainable
Example:
describe('calculateTotal', () => {
  it('sums item prices', () => {
    const items = [
      { price: 10, quantity: 2 },
      { price: 5, quantity: 1 }
    ];
    expect(calculateTotal(items)).toBe(25);
  });

  it('returns 0 for empty cart', () => {
    expect(calculateTotal([])).toBe(0);
  });

  it('handles negative quantities', () => {
    const items = [{ price: 10, quantity: -1 }];
    expect(() => calculateTotal(items))
      .toThrow('Invalid quantity');
  });
});
TEST NAMING:
─────────────────────────────────────
Clear names:
├── Describe what is being tested
├── Describe expected behavior
├── Describe conditions
└── Self-documenting
Format: "should [expected behavior] when [condition]"
├── "should return 0 when cart is empty"
├── "should throw error when quantity is negative"
├── "should apply discount when code is valid"
└── Readable as a specification
COVERAGE:
─────────────────────────────────────
Coverage targets:
├── 80% is a good goal
├── 100% is often overkill
├── Focus on critical paths
├── Don't game the coverage number
├── Coverage ≠ quality
└── Meaningful tests > high %
Integration Testing
Testing Connections
INTEGRATION TEST STRATEGIES
───────────────────────────
API INTEGRATION TESTS:
─────────────────────────────────────
Test real API:
describe('User API', () => {
  it('creates and retrieves user', async () => {
    // Create
    const createRes = await api.post('/users', {
      name: 'Test User',
      email: 'test@example.com'
    });
    expect(createRes.status).toBe(201);

    // Retrieve
    const userId = createRes.data.id;
    const getRes = await api.get(`/users/${userId}`);
    expect(getRes.data.name).toBe('Test User');

    // Cleanup
    await api.delete(`/users/${userId}`);
  });
});
DATABASE INTEGRATION:
─────────────────────────────────────
Test with real database:
├── Use a dedicated test database
├── Setup/teardown per test
├── Transactions for isolation
├── Realistic data
└── Catch real issues
describe('UserRepository', () => {
  beforeEach(async () => {
    await db.clear('users');
  });

  it('saves and finds user', async () => {
    await userRepo.save({ name: 'Test' });
    const users = await userRepo.findAll();
    expect(users).toHaveLength(1);
  });
});
CONTRACT TESTING:
─────────────────────────────────────
API contract verification:
├── Consumer defines expected contract
├── Provider verifies it can fulfill it
├── Catches breaking changes early
├── Tools: Pact, Dredd
└── Guards API compatibility
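Without adopting a full tool like Pact, the core idea can be sketched as a shape check the consumer runs against provider responses. A toy illustration (the `userContract` fields and names are hypothetical):

```javascript
// Consumer-side contract: field names and the primitive types expected.
const userContract = { id: 'number', name: 'string', email: 'string' };

// Verify a response object satisfies every field of the contract.
function matchesContract(response, contract) {
  return Object.entries(contract).every(
    ([field, type]) => typeof response[field] === type
  );
}
```

Real contract-testing tools layer versioning, provider-side verification, and broker publishing on top of this basic shape-matching idea.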
E2E Testing
Full Flow Testing
E2E TEST STRATEGIES
───────────────────
WHAT TO E2E TEST:
─────────────────────────────────────
Critical user journeys:
├── Sign up / login
├── Main purchase flow
├── Key feature workflows
├── Payment processing
├── High-value paths
└── Not everything
E2E TEST EXAMPLE:
─────────────────────────────────────
Using Playwright:
test('user can complete purchase', async ({ page }) => {
  // Login
  await page.goto('/login');
  await page.fill('[name=email]', 'user@test.com');
  await page.fill('[name=password]', 'password');
  await page.click('button[type=submit]');

  // Add to cart
  await page.goto('/products/1');
  await page.click('button.add-to-cart');

  // Checkout
  await page.goto('/cart');
  await page.click('button.checkout');

  // Payment
  await page.fill('[name=card]', '4242424242424242');
  await page.click('button.pay');

  // Verify
  await expect(page.locator('.success-message'))
    .toBeVisible();
});
AVOIDING FLAKINESS:
─────────────────────────────────────
├── Wait for elements, not time
├── Use data-testid for selectors
├── Isolate test data
├── Handle loading states
├── Retry on network issues
├── Clean environment
└── Stable tests > more tests
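"Wait for elements, not time" generalizes beyond the browser: poll for a condition with a deadline instead of sleeping a fixed interval. A minimal helper, with names of my own invention rather than any library's API:

```javascript
// Poll `predicate` every `interval` ms until it returns true,
// or reject once `timeout` ms have elapsed (never hang forever).
function waitFor(predicate, { timeout = 5000, interval = 50 } = {}) {
  const deadline = Date.now() + timeout;
  return new Promise((resolve, reject) => {
    const check = () => {
      if (predicate()) return resolve();
      if (Date.now() > deadline) {
        return reject(new Error(`Condition not met within ${timeout}ms`));
      }
      setTimeout(check, interval);
    };
    check();
  });
}
```

Unlike `sleep(3000)`, this resolves as soon as the condition holds (fast on fast machines) and fails loudly with a clear error when it never does, instead of letting the next assertion fail mysteriously.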
Flaky Tests
Dealing with Flakiness
HANDLING FLAKY TESTS
────────────────────
WHAT MAKES TESTS FLAKY:
─────────────────────────────────────
├── Time dependencies
├── Random data without seeds
├── Shared test state
├── Network timing
├── UI animation timing
├── External service dependencies
├── Race conditions
└── Many causes
STRATEGIES:
─────────────────────────────────────
Fix or delete:
├── Don't ignore flaky tests
├── They destroy trust
├── Fix the root cause
├── Or delete if not valuable
└── Zero tolerance
Isolation:
├── Each test independent
├── No shared state
├── Setup/teardown per test
├── Parallel-safe
└── Reproducible
Proper waits:
├── Wait for a condition, not a fixed time
├── page.waitForSelector() not sleep(3000)
├── Explicit over implicit
├── Appropriate timeouts
└── Network-aware
Deterministic data:
├── Seed random generators
├── Fixed test data
├── Controlled environment
├── Predictable results
└── Reproducible
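"Seed random generators" matters because `Math.random()` cannot be seeded; a tiny seeded PRNG such as mulberry32 makes randomized test data reproducible across runs:

```javascript
// mulberry32: small seeded PRNG; the same seed always yields
// the same sequence, so "random" test data is reproducible.
function mulberry32(seed) {
  let state = seed >>> 0;
  return function () {
    state = (state + 0x6D2B79F5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

const rand = mulberry32(42); // fixed seed, logged with the test run
const quantity = Math.floor(rand() * 10) + 1;
```

Logging the seed on failure lets you replay the exact same "random" inputs locally, turning a flaky failure into a reproducible one.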
QUARANTINE FLAKY TESTS:
─────────────────────────────────────
If you can't fix immediately:
├── Move to a separate suite
├── Don't block CI
├── Track for fixing
├── Time-boxed quarantine
├── Don't let them rot
└── Fix soon
CI Integration
Tests in Pipeline
CI TEST INTEGRATION
───────────────────
PIPELINE STRUCTURE:
─────────────────────────────────────
stages:
  - lint
  - unit-tests    # Fast, first
  - integration   # Medium
  - e2e           # Slow, last

unit-tests:
  stage: unit-tests
  script:
    - npm test
  timeout: 5m

integration:
  stage: integration
  services:
    - postgres
  script:
    - npm run test:integration
  timeout: 15m

e2e:
  stage: e2e
  script:
    - npm run test:e2e
  timeout: 30m
  only:
    - main
FAST FEEDBACK:
─────────────────────────────────────
├── Lint first (fastest)
├── Unit tests second
├── Fail fast on errors
├── Integration if unit passes
├── E2E last (slowest)
└── Quick feedback loop
TEST REPORTING:
─────────────────────────────────────
├── JUnit XML reports
├── Coverage reports
├── Test duration tracking
├── Failure screenshots
├── Visible in CI
└── Easy debugging
GitScrum Integration
Test Tasks
GITSCRUM FOR TESTING
────────────────────
TEST TASK TRACKING:
─────────────────────────────────────
├── Test tasks in backlog
├── Label: test, automation
├── Linked to features
├── Definition of done includes tests
└── Testing visible
BUG TRACKING:
─────────────────────────────────────
When a test catches a bug:
├── Create a bug task
├── Link to the failing test
├── Priority based on severity
├── Tracked to resolution
└── Full traceability
QUALITY METRICS:
─────────────────────────────────────
├── Test pass rates
├── Coverage trends
├── Bug escape rates
├── Data for improvement
└── Quality visibility
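The metrics above are simple ratios once you export test results. A sketch of how they might be computed (the result-object shape here is an assumption, not any tracker's real schema):

```javascript
// Fraction of test runs that passed; empty input counts as fully passing.
function passRate(results) {
  if (results.length === 0) return 1;
  const passed = results.filter(r => r.status === 'passed').length;
  return passed / results.length;
}

// Bugs found in production divided by all bugs found anywhere:
// lower is better, and a rising trend means tests are missing real defects.
function bugEscapeRate(bugsFoundInProd, bugsFoundTotal) {
  if (bugsFoundTotal === 0) return 0;
  return bugsFoundInProd / bugsFoundTotal;
}
```

Tracking these as trends over sprints, rather than as single snapshots, is what turns them into data for improvement.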
Best Practices
For Test Automation
Anti-Patterns
TEST AUTOMATION MISTAKES:
✗ Inverted pyramid (too many E2E tests)
✗ Ignoring flaky tests
✗ Slow test suites
✗ Tests not in CI
✗ Testing implementation, not behavior
✗ Coverage without value
✗ No test maintenance
✗ Brittle selectors