algorithms · big-o · complexity · computer-science · beginner

Big O Notation: A Practical Guide for Beginners

Big O notation describes how an algorithm's performance scales with input size. Learn O(1), O(n), O(log n), O(n log n), and O(n²) with clear examples and comparisons.

CS Visualizations · March 8, 2026 · 8 min


When someone says "this algorithm is O(n log n)" — what does that actually mean? Big O notation is the language computer scientists use to talk about algorithm performance, and understanding it is essential for writing efficient code.

The good news: it's simpler than it looks.

What Big O Measures

Big O notation describes how an algorithm's runtime grows as the input size grows. It doesn't tell you exact execution time — it tells you the scaling pattern.

Think of it this way:

  • "This algorithm is O(n)" means: double the input, double the time
  • "This algorithm is O(n²)" means: double the input, quadruple the time
  • "This algorithm is O(log n)" means: double the input, add one more step

Big O focuses on the worst case and large inputs, because that's where performance differences actually matter.

The Common Complexities

[Chart: runtime growth vs. input size n, with curves for O(1), O(log n), O(n), O(n log n), and O(n²).]
How different complexities scale: O(1) stays flat, O(n²) explodes, O(n log n) is the sweet spot.

O(1) — Constant Time

The runtime doesn't change no matter how large the input is.

def get_first(arr):
    return arr[0]  # Always one operation

Examples: Array access by index, hash table lookup, push/pop from a stack.

Whether the array has 10 or 10 million elements, getting the first one takes the same time.
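
The same constant-time story holds for the other examples; a quick sketch with illustrative values:

ages = {"alice": 30, "bob": 25}
ages["bob"]         # Hash table lookup: O(1) on average

stack = [1, 2, 3]
stack.append(4)     # Push onto a stack: O(1) (amortized)
stack.pop()         # Pop off a stack: O(1)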

O(log n) — Logarithmic Time

The runtime grows by one step each time the input doubles. Extremely efficient.

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

Examples: Binary search, balanced BST operations, finding an element in a sorted array.

For 1 billion elements, binary search needs at most 30 comparisons. That's the power of O(log n).
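
You can sanity-check that number directly: the ceiling of log₂(10⁹) is the number of halvings needed.

import math

print(math.ceil(math.log2(1_000_000_000)))  # 30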

O(n) — Linear Time

The runtime grows directly proportional to the input size.

def find_max(arr):
    maximum = arr[0]
    for num in arr:       # Visit every element once
        if num > maximum:
            maximum = num
    return maximum

Examples: Linear search, finding min/max, summing an array, most single-pass operations.

Double the array, double the time. Simple and predictable.

O(n log n) — Linearithmic Time

The sweet spot for comparison-based sorting. Slightly worse than linear, much better than quadratic.

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)
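
The merge helper isn't shown above; here's a minimal sketch of the standard two-pointer merge it relies on:

def merge(left, right):
    # Repeatedly take the smaller front element of the two sorted halves
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]  # Append whichever half has leftovers

Each merge pass does O(n) total work, and the repeated halving gives O(log n) levels of recursion, which is exactly where the n log n comes from.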

Examples: Merge sort, quick sort (average), heap sort, many divide-and-conquer algorithms.

It's been mathematically proven that no comparison-based sort can beat O(n log n) in the worst case. You can't sort faster by comparing elements.

O(n²) — Quadratic Time

Runtime grows with the square of the input size. Gets slow fast.

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):   # Nested loops: up to n × n comparisons
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:              # No swaps means already sorted: best case O(n)
            break

Examples: Bubble sort, selection sort, insertion sort (worst case), comparing all pairs.

10 elements: 100 operations. Fine. 10,000 elements: 100,000,000 operations. Slow. 1,000,000 elements: 1,000,000,000,000 operations. Unusable.

O(2ⁿ) — Exponential Time

Runtime doubles with each additional input element. Only feasible for tiny inputs.

def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)  # Two recursive calls

Examples: Brute-force subset enumeration, naive recursive Fibonacci, some backtracking problems.

n=20: ~1 million operations. n=30: ~1 billion. n=50: good luck.

The Comparison Table

Notation    Name          n=10    n=1,000     n=1,000,000    Vibe
O(1)        Constant      1       1           1              Instant
O(log n)    Logarithmic   3       10          20             Barely grows
O(n)        Linear        10      1,000       1,000,000      Fair
O(n log n)  Linearithmic  33      10,000      20,000,000     Good enough
O(n²)       Quadratic     100     1,000,000   10¹²           Gets painful
O(2ⁿ)       Exponential   1,024   10³⁰⁰       LOL            Impossible

How to Determine Big O

Rule 1: Drop the Constants

O(2n) = O(n). O(500) = O(1). Constants don't matter for scaling behavior.

Why? If one algorithm takes 2n steps and another takes 3n steps, they both scale linearly. At large n, the difference between them is just a constant factor.
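
In code: a hypothetical helper that makes two full passes over the array takes 2n steps, and 2n still scales linearly.

def min_and_max(arr):
    minimum = min(arr)       # First pass: n steps
    maximum = max(arr)       # Second pass: another n steps
    return minimum, maximum  # 2n steps total, so O(2n) = O(n)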

Rule 2: Drop Lower-Order Terms

O(n² + n) = O(n²). O(n + log n) = O(n).

Why? At large n, the highest-order term dominates. When n = 1,000,000, the n² term is 10¹², while the n term is just 10⁶ — negligible.
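
The same rule in code: a hypothetical function that visits all pairs and then makes one extra pass is O(n² + n), which collapses to O(n²).

def count_pairs_then_sum(arr):
    n = len(arr)
    pairs = 0
    for i in range(n):       # n² term: dominates at large n
        for j in range(n):
            pairs += 1
    total = sum(arr)         # n term: negligible by comparison
    return pairs, total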

Rule 3: Count the Loops

  • One loop through n elements: O(n)
  • Two nested loops: O(n²)
  • Three nested loops: O(n³)
  • Loop that halves each time: O(log n) (see the sketch after this list)
  • O(n) work done O(log n) times: O(n log n)
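
The halving pattern is the one that trips people up, so here is a minimal sketch of it:

def halving_steps(n):
    steps = 0
    while n > 1:
        n //= 2      # Input halves every iteration
        steps += 1
    return steps     # Roughly log2(n) iterations

print(halving_steps(1_000_000))  # 19, since 2**19 < 1,000,000 <= 2**20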

Rule 4: Different Inputs = Different Variables

If a function takes two arrays of size n and m:

def compare(arr1, arr2):  # O(n × m), not O(n²)
    for a in arr1:
        for b in arr2:
            if a == b:
                return True
    return False

Big O in Practice: Sorting Algorithms

Sorting is the perfect case study for Big O because we have algorithms spanning multiple complexity classes:

Algorithm        Best          Average       Worst         Why
Bubble Sort      O(n)          O(n²)         O(n²)         Nested comparisons
Insertion Sort   O(n)          O(n²)         O(n²)         Shifts elements one by one
Merge Sort       O(n log n)    O(n log n)    O(n log n)    Divide, sort halves, merge
Quick Sort       O(n log n)    O(n log n)    O(n²)         Depends on pivot choice
Heap Sort        O(n log n)    O(n log n)    O(n log n)    Heap operations are O(log n)

Our sorting visualizations let you see the performance difference in real time — watch bubble sort crawl through a large array while merge sort finishes in a fraction of the time.

Common Misconceptions

"O(n) is always fast enough." Not necessarily. If n is 10 billion and each operation is a database call, O(n) is way too slow. Context matters.

"O(n²) is always bad." For small n (< 50), O(n²) algorithms can be faster than O(n log n) ones due to lower constant factors. Insertion sort beats merge sort on tiny arrays.

"Big O tells you the actual runtime." It doesn't. O(n) with disk I/O can be slower than O(n²) in pure memory. Big O describes scaling, not absolute speed.

"Lower Big O always wins." An O(n) algorithm with a constant factor of 1000 is slower than O(n log n) with a factor of 1 for all reasonable n. Theory and practice don't always align.

Space Complexity Too

Big O also describes memory usage:

  • Merge sort: O(n) extra space (for the merge buffer)
  • Quick sort: O(log n) extra space (for the call stack)
  • Bubble sort: O(1) extra space (in-place)

Sometimes you trade time for space (hash tables use O(n) space to get O(1) lookups) or space for time (merge sort uses extra memory to guarantee O(n log n)).
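
A hypothetical has_duplicate function shows the trade concretely: spend O(n) extra space on a set to turn an O(n²) all-pairs scan into a single O(n) pass.

def has_duplicate(arr):
    seen = set()             # O(n) extra space...
    for x in arr:
        if x in seen:        # ...buys O(1) average-time membership checks
            return True
        seen.add(x)
    return False             # O(n) time overall vs. O(n²) for comparing all pairs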

See the Difference

Numbers on a page only go so far. To really internalize why O(n log n) beats O(n²), you need to see it.

Our sorting visualizations run different algorithms on the same data side by side. Watch merge sort divide and conquer while bubble sort is still on its first pass. The visual difference is striking — and it makes Big O intuition click in a way that tables and formulas can't.
