The Safety Net Analogy
Tightrope walkers have two layers of protection:
- Walk carefully: you probably won't fall
- Have a net: if you do fall, you're caught
Tests are safety nets for your code. They catch bugs before users do.
Without tests:
Push code → Hope it works → Users find bugs 😱
With tests:
Push code → Tests catch bugs → Fix before users see 😌
Why Testing Matters
The Cost of Bugs
Bug found during:
Development: minutes (you catch it yourself)
Code review: hours (a reviewer flags it, you revise)
Testing/QA: hours to days (reproduce, diagnose, fix, re-verify)
Production: Days + angry users + lost revenue
Earlier = Cheaper
Confidence to Change
Without tests:
"I'm afraid to touch that code"
"What if I break something?"
With tests:
Make changes, run tests
Green? Lower risk to ship.
Red? You caught the problem before users did.
The Testing Pyramid
          ╱╲
         ╱  ╲          E2E Tests (Few)
        ╱────╲         Slow, expensive, realistic
       ╱      ╲
      ╱  Int.  ╲       Integration Tests (Some)
     ╱──────────╲      Components working together
    ╱            ╲
   ╱     Unit     ╲    Unit Tests (Many)
  ╱────────────────╲   Fast, isolated, specific
Why a Pyramid?
Unit tests: Fast (ms), cheap, many
Integration tests: Medium speed, medium cost
E2E tests: Slow (seconds), expensive, few
Base of pyramid = most tests
Top of pyramid = fewest tests
Types of Tests
1. Unit Tests
Test ONE small piece in isolation:
Function: calculateTotal(items)
Tests:
✓ Empty list returns 0
✓ Single item returns item price
✓ Multiple items sums correctly
✓ Handles negative numbers
Isolated: No database, no network, no files
Fast: Thousands per second
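Here is what those checks might look like in code. This is a minimal sketch: calculate_total is a hypothetical function (items assumed to be dicts with a "price" key), shown alongside the tests that pin down its behavior.

```python
def calculate_total(items):
    """Sum the prices of all items; an empty list totals 0."""
    return sum(item["price"] for item in items)

def test_empty_list_returns_zero():
    assert calculate_total([]) == 0

def test_single_item_returns_its_price():
    assert calculate_total([{"price": 10}]) == 10

def test_multiple_items_sum_correctly():
    assert calculate_total([{"price": 10}, {"price": 5}]) == 15

def test_handles_negative_numbers():
    # e.g. a discount line item
    assert calculate_total([{"price": 10}, {"price": -3}]) == 7
```

Note that none of these tests touch a database, the network, or the filesystem, which is exactly why they can run in milliseconds.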
2. Integration Tests
Test components WORKING TOGETHER:
Test: User signup flow
Components:
API → Validation → Database → Email service
Tests:
✓ Valid data saves to database
✓ Welcome email is sent
✓ Invalid data returns error
Uses real (or realistic) dependencies.
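A sketch of what an integration-style test for that signup flow could look like. Everything here is hypothetical: signup, the in-memory database, and the recording email service are stand-ins wired together so the test exercises the whole path without real infrastructure.

```python
class InMemoryDb:
    """A fake database: just a dict of users keyed by email."""
    def __init__(self):
        self.users = {}
    def save(self, email):
        self.users[email] = {"email": email}

class RecordingEmailService:
    """A fake email service: records addresses instead of sending."""
    def __init__(self):
        self.sent = []
    def send_welcome(self, email):
        self.sent.append(email)

def signup(email, db, emailer):
    # Validation -> database -> email service, as in the flow above.
    if "@" not in email:
        return {"ok": False, "error": "invalid email"}
    db.save(email)
    emailer.send_welcome(email)
    return {"ok": True}

def test_valid_signup_saves_and_emails():
    db, emailer = InMemoryDb(), RecordingEmailService()
    result = signup("new@example.com", db, emailer)
    assert result["ok"]
    assert "new@example.com" in db.users        # saved to the "database"
    assert emailer.sent == ["new@example.com"]  # welcome email "sent"

def test_invalid_data_returns_error():
    db, emailer = InMemoryDb(), RecordingEmailService()
    result = signup("not-an-email", db, emailer)
    assert not result["ok"]
    assert db.users == {} and emailer.sent == []
```

In a real project the fakes might be replaced by a test database or sandboxed service, trading speed for realism.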
3. End-to-End (E2E) Tests
Test the WHOLE SYSTEM like a user:
Test: Purchase flow
Steps:
1. Open browser
2. Log in
3. Add item to cart
4. Enter payment info
5. Complete purchase
6. Verify confirmation email
Slow but realistic!
4. Other Test Types
| Type | What It Tests |
|---|---|
| Smoke | Basic functionality works |
| Regression | Old bugs don't return |
| Performance | Speed under load |
| Security | Vulnerabilities |
| Acceptance | Business requirements |
Test-Driven Development (TDD)
The Process
1. RED: Write a failing test first
Test for feature that doesn't exist yet
2. GREEN: Write minimal code to pass
Just enough to make the test green
3. REFACTOR: Clean up the code
Tests ensure you don't break anything
Repeat!
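One full red-green-refactor cycle might look like this, sketched for a hypothetical slugify function:

```python
# 1. RED: write the test first; it fails because slugify doesn't exist yet.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# 2. GREEN: the minimal code that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# 3. REFACTOR: tidy up (here, also collapse repeated spaces) while the
#    test keeps guarding the behavior.
def slugify(title):
    return "-".join(title.lower().split())
```

The test never changes across steps 2 and 3; it is what lets you refactor with confidence.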
Why TDD?
✓ Forces you to think about design first
✓ New code always ships with tests
✓ Small, focused steps
✓ Confidence throughout
What to Test
Test Behavior, Not Implementation
Bad test:
"Method calls database.query() once"
(Testing HOW it works)
Good test:
"Returns user with correct email"
(Testing WHAT it does)
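The same distinction, sketched in code for a hypothetical UserRepo (the names and the use of Python's unittest.mock are illustrative, not from the original):

```python
from unittest import mock

class UserRepo:
    def __init__(self, db):
        self.db = db
    def find_user_by_email(self, email):
        row = self.db.query(email)
        return {"email": row["email"]}

# Bad: pins down HOW it works; breaks if the method ever caches queries,
# even though the result is still correct.
def test_implementation():
    db = mock.Mock()
    db.query.return_value = {"email": "a@example.com"}
    UserRepo(db).find_user_by_email("a@example.com")
    db.query.assert_called_once()

# Good: checks WHAT it does; survives any internal refactor.
def test_behavior():
    db = mock.Mock()
    db.query.return_value = {"email": "a@example.com"}
    user = UserRepo(db).find_user_by_email("a@example.com")
    assert user["email"] == "a@example.com"
```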
The Right Things
DO test:
✓ Business logic
✓ Edge cases (empty, null, maximum)
✓ Error handling
✓ Integration points
DON'T test:
✗ Third-party libraries
✗ Simple getters/setters
✗ Framework code
Edge Cases
User age input:
✓ Normal: 25
✓ Zero: 0
✓ Negative: -5 (what happens?)
✓ Very large: 999
✓ Non-number: "abc"
✓ Empty: null/undefined
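A table of edge cases like this maps naturally onto a table-driven test. The validate_age function below is a sketch with assumed rules (accept integers 0 to 130, reject everything else):

```python
def validate_age(value):
    # Reject non-integers ("abc", None, 3.5); note bool is a subclass
    # of int in Python, so exclude it explicitly.
    if not isinstance(value, int) or isinstance(value, bool):
        return False
    return 0 <= value <= 130  # rejects negatives and implausible ages

cases = [
    (25, True),      # normal
    (0, True),       # zero: a newborn is a valid age
    (-5, False),     # negative
    (999, False),    # very large
    ("abc", False),  # non-number
    (None, False),   # empty
]
for value, expected in cases:
    assert validate_age(value) is expected, f"validate_age({value!r})"
```

Adding a new edge case is one line in the table, which keeps the cost of thoroughness low.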
Test Structure: AAA
Arrange, Act, Assert
TEST: User can update their email
ARRANGE - Set up the test:
Create a user with email "old@example.com"
ACT - Do the thing:
Call updateEmail("new@example.com")
ASSERT - Check the result:
User's email should now be "new@example.com"
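Written out as code (with a minimal hypothetical User class standing in for the real one), the three phases read top to bottom:

```python
class User:
    def __init__(self, email):
        self.email = email
    def update_email(self, new_email):
        self.email = new_email

def test_user_can_update_email():
    # Arrange: set up the test
    user = User("old@example.com")

    # Act: do the thing
    user.update_email("new@example.com")

    # Assert: check the result
    assert user.email == "new@example.com"
```

The blank lines between phases are a common convention; a reader can see at a glance what is setup, what is the action, and what is being verified.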
Mocking and Stubs
Why Mock?
Unit test shouldn't:
- Actually send emails
- Actually charge credit cards
- Actually call external APIs
Mock = Fake version for testing
Example
Testing: sendWelcomeEmail(user)
Real email service: Actually sends email
Mock email service: Records that it was called
Test checks:
"Mock was called with user's email address"
No actual email sent!
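A sketch of that test using Python's unittest.mock; send_welcome_email and the email service's send method are hypothetical names for illustration:

```python
from unittest import mock

def send_welcome_email(user, email_service):
    email_service.send(to=user["email"], subject="Welcome!")

def test_welcome_email_uses_users_address():
    fake_service = mock.Mock()  # records calls instead of sending
    user = {"email": "new@example.com"}

    send_welcome_email(user, fake_service)

    # Mock was called with the user's email address; no actual email sent.
    fake_service.send.assert_called_once_with(
        to="new@example.com", subject="Welcome!"
    )
```

Passing the service in as a parameter (dependency injection) is what makes swapping in the mock trivial.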
Code Coverage
What It Means
Coverage: % of code executed by tests
80% coverage:
80% of lines run during tests
The other 20% are never exercised
The Truth About Coverage
100% coverage ≠ Bug-free
Can test every line without testing behavior
Coverage is a metric, not a goal.
High coverage + good tests = confidence
High coverage + bad tests = false confidence
Continuous Integration
Tests in CI/CD
1. Push code to repository
2. CI server runs ALL tests
3. Tests pass? → Allow merge
4. Tests fail? → Block merge, fix first
Reduces the chance that buggy changes reach the main branch.
The Safety Gate
Pull request:
✓ Unit tests passed (247 tests)
✓ Integration tests passed (43 tests)
✓ Code coverage: 85%
✓ Ready to merge
or
✗ Unit tests FAILED (2 failures)
→ Fix before merging
Common Mistakes
1. Testing Implementation, Not Behavior
Bad: "Method calls X internally"
Good: "Method returns correct result"
2. Flaky Tests
Test passes sometimes, fails sometimes.
Usually: timing issues, random data, shared state.
Flaky tests destroy trust in the suite.
3. Slow Test Suite
10-minute test suite?
Developers stop running it.
Keep unit tests FAST.
4. No Tests for Bugs
Found a bug? Write a test that fails.
Then fix the bug. Test now passes.
Bug is less likely to return unnoticed.
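For example, suppose a bug report said a hypothetical average function crashed on an empty list. The regression test is written first, then the fix:

```python
def average(numbers):
    if not numbers:  # the fix: guard the empty case
        return 0.0
    return sum(numbers) / len(numbers)

def test_average_of_empty_list_does_not_crash():
    # Written while the bug was still present (it failed with
    # ZeroDivisionError). If the guard is ever removed, it fails again.
    assert average([]) == 0.0

def test_average_still_works_normally():
    assert average([2, 4, 6]) == 4.0
```

The suite now remembers the bug forever, at the cost of a few lines.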
FAQ
Q: How much test coverage is enough?
80% is a good target for most projects. Focus on critical code paths.
Q: Should I test private methods?
Test through public interfaces. If private method needs testing, maybe it should be its own class.
Q: Unit tests or integration tests?
Both! Lots of unit tests (fast, specific), some integration tests (realistic).
Q: When to skip testing?
Prototypes and experiments are common cases to skip. For production code, it’s usually worth having at least some tests.
Summary
Testing ensures your code works correctly and gives confidence to make changes.
Key Takeaways:
- Testing pyramid: many unit, some integration, few E2E
- Test behavior, not implementation
- TDD: write tests first
- AAA pattern: Arrange, Act, Assert
- Mock external dependencies
- Run tests in CI/CD
- Fix flaky tests immediately
Tests are your safety net - build them strong!