
⏱️ Time Complexity

How fast algorithms grow

The Restaurant Analogy

Imagine you're a chef preparing meals:

With a few orders: easy, you finish quickly.
With many more orders: it takes much longer.
With a huge number of orders: your method matters a lot.

If you cook one meal at a time: work grows roughly in proportion to the number of orders (linear).
If you check every order against every other order: 1,000 orders means about 1,000,000 comparisons (quadratic).

Time complexity measures how the amount of work grows as input grows (usually focusing on the dominant trend for large inputs).


Why Time Complexity Matters

The Scaling Problem

Algorithm A: Takes 1 second for 1,000 items
Algorithm B: Takes 1 second for 1,000 items

Same performance? Not necessarily!

For 1,000,000 items:
Algorithm A (linear): ~1,000× more work
Algorithm B (quadratic): ~1,000,000× more work

Small inputs hide the truth. Time complexity helps predict what happens at scale.


Common Time Complexities

O(1) - Constant

Work doesn’t grow with input size (as an upper bound, for large enough inputs):

Get first element of array: arr[0]
Check if hash map contains key: key in map (typically O(1) on average, but can degrade in worst cases)

Input: 10 items → ~constant work
Input: 1,000,000 items → ~constant work
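Both constant-time examples above can be wrapped as tiny functions (the names `first_element` and `has_key` are just illustrative):

```python
def first_element(arr):
    """O(1): a single index lookup, regardless of array length."""
    return arr[0]

def has_key(mapping, key):
    """Typically O(1) on average for a hash map; worst cases can degrade."""
    return key in mapping
```

Neither function does more work when the input grows; that is exactly what O(1) means.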

O(log n) - Logarithmic

Steps grow slowly because the problem is cut in half each time:

Binary search: check middle, eliminate half

Input: 8 items → ~3 steps
Input: 1,000,000 items → ~20 steps

Doubling the input typically adds only about one more step!
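Here is a minimal binary search sketch showing the halving in action — each loop iteration discards half of the remaining range:

```python
def binary_search(sorted_arr, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Each iteration halves the search range: O(log n) steps.
    """
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # check the middle
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1            # eliminate the left half
        else:
            hi = mid - 1            # eliminate the right half
    return -1
```

For a million elements, the loop runs at most about 20 times.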

O(n) - Linear

Steps grow proportionally to input:

Find maximum in unsorted array: check every element

Input: 10 items → 10 steps
Input: 1,000,000 items → 1,000,000 steps
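Finding the maximum of an unsorted array is the canonical linear scan — there is no way to skip elements, so the work tracks the input size exactly:

```python
def find_max(arr):
    """O(n): every element of an unsorted array must be inspected once."""
    best = arr[0]
    for x in arr[1:]:
        if x > best:
            best = x
    return best
```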

O(n log n) - Linearithmic

Slightly worse than linear, common in sorting:

Merge sort: divide log(n) times, merge n elements

Input: 8 items → on the order of 8×log₂(8) comparisons
Input: 1,000,000 items → on the order of 1,000,000×log₂(1,000,000) comparisons
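A compact merge sort sketch makes the n log n structure visible: the list is split about log₂(n) times, and each level of splitting does O(n) total merging work:

```python
def merge_sort(arr):
    """O(n log n): ~log n levels of splitting, O(n) merging per level."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # divide
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```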

O(n²) - Quadratic

Steps grow with the square of the input, typical of nested loops:

Check all pairs: compare every element to every other

Input: 10 items → 100 steps
Input: 1,000 items → 1,000,000 steps
Input: 1,000,000 items → 1,000,000,000,000 steps 💀
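The "check all pairs" pattern as a sketch — two nested loops produce n(n−1)/2 pairs, which is O(n²):

```python
def all_pairs(arr):
    """O(n²): compare every element with every later element."""
    pairs = []
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):   # nested loop → quadratic
            pairs.append((arr[i], arr[j]))
    return pairs
```

For 10 items that is 45 pairs; for 1,000 items, nearly half a million.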

O(2^n) - Exponential

Work doubles with each additional input element, usually a sign of brute force:

All subsets: each element is included or not

Input: 10 items → 1,024 subsets
Input: 20 items → 1,048,576 subsets
Input: 30 items → 1,073,741,824 subsets 💀
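Generating all subsets shows why the count doubles: each new element either joins every existing subset or doesn't, so the collection doubles per element:

```python
def all_subsets(items):
    """O(2^n): each element is either included in a subset or not."""
    subsets = [[]]                  # start with the empty subset
    for item in items:
        # Every existing subset spawns a copy that also contains `item`.
        subsets += [s + [item] for s in subsets]
    return subsets
```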

Visual Comparison

n =           10       100        1,000        1,000,000
---------------------------------------------------------
O(1)          1        1          1            1
O(log n)      ~3       ~7         ~10          ~20
O(n)          10       100        1,000        1,000,000
O(n log n)    ~30      ~700       ~10,000      ~20,000,000
O(n²)         100      10,000     1,000,000    about a trillion 💀
O(2^n)        ~1,000   ~10³⁰      astronomical — impractical beyond small n 💀

(All numbers above are rough intuition to compare growth rates; real runtime depends on constants, hardware, and implementation details.)

Key insight: The difference becomes MASSIVE as n grows!


How to Calculate Time Complexity

Rule 1: Count the Loops

No loops: O(1)
One loop over n: O(n)
Two nested loops over n: O(n²)
Three nested loops: O(n³)
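A quick way to internalize the loop rule is to literally count how many times the innermost statement runs (a deliberately trivial sketch):

```python
def count_steps(n):
    """Two nested loops over n: the inner body runs n × n times → O(n²)."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps
```

Doubling n quadruples the count — the signature of quadratic growth.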

Rule 2: Drop Constants

50n + 100 → O(n)
2n² + 3n + 5 → O(n²)

Constants don't matter at scale

Rule 3: Focus on the Dominant Term

n² + n + 1 → O(n²)
n³ + n² → O(n³)

The biggest term dominates as n grows

Rule 4: Different Variables Stay Separate

Loop over n, then loop over m:
n + m → O(n + m)

Nested loops over n and m:
n × m → O(nm)
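The difference between sequential and nested loops over two independent sizes can be verified directly by counting (function names here are illustrative):

```python
def sequential(n, m):
    """A loop over n followed by a loop over m: O(n + m) body executions."""
    steps = 0
    for _ in range(n):
        steps += 1
    for _ in range(m):
        steps += 1
    return steps

def nested(n, m):
    """A loop over m inside a loop over n: O(n × m) body executions."""
    steps = 0
    for _ in range(n):
        for _ in range(m):
            steps += 1
    return steps
```

Since n and m can grow independently, neither variable may be dropped from the bound.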

Common Algorithm Complexities

Algorithm          Time          What it does
-----------------------------------------------------
Array access       O(1)          Direct index lookup
Binary search      O(log n)      Find in sorted array
Single loop        O(n)          Visit each element once
Merge sort         O(n log n)    Efficient sorting
Bubble sort        O(n²)         Inefficient sorting
All subsets        O(2^n)        Generate power set
All permutations   O(n!)         Generate all orderings

Space Complexity

Memory usage follows similar patterns:

O(1): Fixed variables
O(n): Array of size n
O(n²): 2D matrix of size n×n

Time-Space Tradeoffs

Sometimes you can trade space for time:

Slow + Less Memory: Recompute values each time
Fast + More Memory: Store computed values (memoization)
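Memoization in one line of standard-library Python — `functools.lru_cache` stores each computed value so it is never recomputed, trading O(n) space for O(n) time on Fibonacci:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoized Fibonacci: each value is computed once → O(n) time, O(n) space."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, the same function takes exponential time (see the recursion analysis below).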

Analyzing Recursion

For recursive functions, build a recurrence relation:

Binary search:
T(n) = T(n/2) + O(1)
    = O(log n)

Merge sort:
T(n) = 2T(n/2) + O(n)
    = O(n log n)

Fibonacci (naive):
T(n) = T(n-1) + T(n-2) + O(1)
    = O(2^n) 💀
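The exponential blow-up of naive Fibonacci can be observed empirically by counting calls — a small instrumentation sketch (the mutable `counter` list is just a simple way to tally across recursive calls):

```python
def fib_naive(n, counter):
    """Naive recursion: subproblems are recomputed, so the call count
    grows exponentially with n."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

counter = [0]
result = fib_naive(20, counter)   # tens of thousands of calls for n = 20
```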

Best, Average, Worst Cases

Algorithms can behave differently based on input:

Quick sort:
  Best case (balanced partitions): O(n log n)
  Average case (typical data): O(n log n)
  Worst case (bad pivot choices): O(n²) 💀

If you care about tail latency or adversarial inputs, worst case matters a lot. For typical workloads, average-case or amortized behavior can be more representative.

Common Mistakes

1. Ignoring Hidden Loops

for item in array:           # O(n) iterations
    if item in other_array:  # O(m) — a hidden linear scan!
# Total: O(n × m), not O(n)!
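One common fix is to build a set first, since set membership is typically O(1) on average — a sketch (the name `common_items` is illustrative):

```python
def common_items(array, other_array):
    """Build a set once in O(m), then each membership check is O(1)
    on average. Total: O(n + m) instead of O(n × m)."""
    other = set(other_array)
    return [item for item in array if item in other]
```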

2. Confusing Log Base

All logarithms differ by a constant, so: log₂(n), log₁₀(n), ln(n) are all O(log n)

3. Thinking O(n) is Automatically Fast

O(n) with n = 1 billion is still 1 billion operations!


FAQ

Q: Why do we ignore constants?

At scale, n² grows so much faster than 100n that the constant becomes irrelevant.

Q: What's "amortized" time complexity?

The average cost per operation over a long sequence of operations. A dynamic array append is O(n) in the worst case (when it resizes) but O(1) amortized.

Q: Is O(n log n) closer to O(n) or O(n²)?

Much closer to O(n)! Log n grows very slowly.

Q: How do I improve time complexity?

Use better data structures, divide-and-conquer, or trade space for time.


Summary

Time complexity measures how algorithm performance scales with input size.

Key Takeaways:

  • O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2^n) < O(n!)
  • Drop constants and lower-order terms
  • Focus on worst case for safety
  • Count loops to estimate complexity
  • The difference is HUGE at scale
  • Space complexity follows similar rules

Understanding time complexity is essential for writing efficient code!
