Big-O Complexity Reference
Time and space complexity for the data structures, sorting algorithms, and search routines that show up in coding interviews. Skim the row, remember the row, defend the row in an interview.
How to read this sheet
Time complexity is the dominant term for input size n - constants and lower-order terms are dropped. "Average" assumes a reasonable distribution of inputs and a well-implemented data structure (good hash function, balanced tree, etc.). "Worst" is the adversarial case, which is the case interviewers will probe with. When two columns disagree (e.g. hash map average O(1) vs worst O(n)), name both - claiming O(1) for hash map worst-case is a common interview red flag.
Data structures - core operations
Time for the listed operation on the structure. Space is the storage cost for n elements.
| Structure | Access | Search | Insert | Delete | Space | Notes |
|---|---|---|---|---|---|---|
| Array (static) | O(1) | O(n) | O(n) | O(n) | O(n) | Insert/delete at end is O(1) amortized for dynamic arrays (e.g. Python list, JS Array, std::vector). Middle inserts shift elements. |
| Dynamic array (vector / ArrayList) | O(1) | O(n) | O(1) amortized | O(n) | O(n) | Append is amortized O(1) thanks to doubling. Single push can pay O(n) on a resize. |
| Linked list (singly) | O(n) | O(n) | O(1)* | O(1)* | O(n) | *Insert/delete at a known node is O(1). At an arbitrary index it is O(n) because you traverse first. |
| Doubly linked list | O(n) | O(n) | O(1)* | O(1)* | O(n) | Backbone of LRU cache (paired with hash map for O(1) lookup + O(1) move-to-front). |
| Stack | O(n) | O(n) | O(1) | O(1) | O(n) | push/pop are O(1). Used for DFS, expression evaluation, monotonic-stack patterns. |
| Queue | O(n) | O(n) | O(1) | O(1) | O(n) | enqueue/dequeue are O(1). Use a deque (ArrayDeque, collections.deque) - never a list with pop(0) which is O(n). |
| Hash map / hash set | N/A | O(1) avg / O(n) worst | O(1) avg / O(n) worst | O(1) avg / O(n) worst | O(n) | Worst case is when many keys collide. Randomized hash seeds (Python and Rust use SipHash) mitigate adversarial collisions but do not change the worst-case bound. |
| Balanced BST (Red-Black, AVL, std::map) | O(log n) | O(log n) | O(log n) | O(log n) | O(n) | Ordered iteration is O(n). Pick this over hash map when you need range queries or sorted order. |
| Binary heap (min/max) | O(1) for top | O(n) | O(log n) | O(log n) | O(n) | peek is O(1). build-heap from an array is O(n) (Floyd). Used for priority queues, top-k, Dijkstra. |
| Trie (prefix tree) | O(L) | O(L) | O(L) | O(L) | O(N * sigma) | L = key length, N = total chars stored, sigma = alphabet size. Use for autocomplete, prefix queries. |
| Disjoint set (union-find) | N/A | alpha(n) amortized | O(1) | N/A | O(n) | With path compression + union by rank, find/union are effectively O(1) (inverse Ackermann). Used for Kruskal, connected components, dynamic connectivity. |
| Skip list | O(log n) avg | O(log n) avg | O(log n) avg | O(log n) avg | O(n) | Probabilistic balance. Used internally by Redis sorted sets (ZSET). |
| B-tree / B+tree | O(log n) | O(log n) | O(log n) | O(log n) | O(n) | High fanout = shallow tree = fewer disk reads. Backbone of relational database indexes. |
| Bloom filter | N/A | O(k) | O(k) | Not supported | O(m bits) | k = number of hash functions, m = bit-array size. False positives possible, false negatives impossible. Use to short-circuit expensive lookups. |
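The union-find row above is worth internalizing with code. A minimal sketch of a disjoint set with path compression (via path halving) and union by rank - the class name and API are illustrative, not from any particular library:

```python
class DSU:
    """Disjoint-set union with path halving + union by rank.
    find/union run in amortized inverse-Ackermann time, effectively O(1)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point each visited node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already connected
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

Typical usage: create `DSU(n)`, call `union` for each edge, and compare `find` results to test connectivity - the skeleton of Kruskal and of connected-components problems.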
Sorting algorithms
Comparison sorts have a proven Omega(n log n) lower bound. Linear-time sorts (counting / radix / bucket) only beat that by exploiting input structure (small key range, fixed-width keys, uniform distribution).
| Algorithm | Best | Average | Worst | Space | Stable | Notes |
|---|---|---|---|---|---|---|
| Quicksort | O(n log n) | O(n log n) | O(n^2) | O(log n) | No | Worst case on already-sorted input with naive pivot. Randomized pivot or median-of-three avoids it. Default in many stdlibs (introsort = quicksort + heapsort fallback). |
| Mergesort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | Predictable performance, stable, parallelizable. Basis of Timsort and of external sorting on disk. |
| Heapsort | O(n log n) | O(n log n) | O(n log n) | O(1) | No | In-place. Worse cache behavior than quicksort in practice. Used as the bail-out leg of introsort. |
| Timsort | O(n) | O(n log n) | O(n log n) | O(n) | Yes | Hybrid merge + insertion sort exploiting natural runs. Default in Python (sorted/list.sort) and Java Collections.sort. |
| Insertion sort | O(n) | O(n^2) | O(n^2) | O(1) | Yes | Best-of-class for tiny n (< ~16) and nearly-sorted input. Used as the inner-loop in introsort and timsort. |
| Bubble sort | O(n) | O(n^2) | O(n^2) | O(1) | Yes | Pedagogical only. Don't propose this in an interview unless asked specifically. |
| Selection sort | O(n^2) | O(n^2) | O(n^2) | O(1) | No | Always O(n^2) - even on sorted input. Useful when minimizing writes (each element moves at most once). |
| Counting sort | O(n + k) | O(n + k) | O(n + k) | O(n + k) | Yes | k = key range. Linear when k = O(n). Falls apart when keys are large or sparse. |
| Radix sort (LSD) | O(d * (n + k)) | O(d * (n + k)) | O(d * (n + k)) | O(n + k) | Yes | d = number of digits, k = base. Linear for fixed-width keys (32-bit ints, fixed strings). |
| Bucket sort | O(n + k) | O(n + k) | O(n^2) | O(n + k) | Yes | Worst case when all elements land in one bucket. Average is linear for uniformly-distributed input. |
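Counting sort is the simplest of the linear-time sorts and shows exactly where the O(n + k) comes from. A minimal sketch for non-negative integers below a known bound k (function name is illustrative):

```python
def counting_sort(arr, k):
    """Stable counting sort for integers in range [0, k).
    O(n + k) time and space; linear whenever k = O(n)."""
    count = [0] * k
    for x in arr:
        count[x] += 1
    # Prefix sums turn counts into one-past-the-end output positions.
    for i in range(1, k):
        count[i] += count[i - 1]
    out = [0] * len(arr)
    # Walking the input in reverse keeps equal keys in order (stability).
    for x in reversed(arr):
        count[x] -= 1
        out[count[x]] = x
    return out
```

The same counting pass, applied per digit, is the inner loop of LSD radix sort.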
Searches and graph algorithms
n = elements, V = |vertices|, E = |edges|.
| Operation | Time | Space | Notes |
|---|---|---|---|
| Linear search | O(n) | O(1) | Works on any sequence. The only option for unsorted data. |
| Binary search (sorted array) | O(log n) | O(1) | Requires sorted input. Variants: lower_bound, upper_bound, search-on-answer, peak-finding, rotated-array search. |
| BFS (adjacency list) | O(V + E) | O(V) | Shortest path in unweighted graph. Visit order = level order from source. |
| DFS (adjacency list) | O(V + E) | O(V) | Use for connectivity, cycle detection, topological sort, bridge / articulation finding. |
| Dijkstra (binary heap) | O((V + E) log V) | O(V) | Shortest path on graphs with non-negative weights. With Fibonacci heap: O(E + V log V). |
| Bellman-Ford | O(V * E) | O(V) | Handles negative weights; detects negative cycles. Used when Dijkstra's non-negative assumption fails. |
| Floyd-Warshall (all pairs) | O(V^3) | O(V^2) | All-pairs shortest paths. Fine up to a few hundred nodes. |
| A* | O((V + E) log V) typical | O(V) | Same big-O as Dijkstra but constant-factor faster with a good heuristic. Optimal if the heuristic is admissible (tree search) or consistent (graph search). |
| Topological sort (Kahn / DFS) | O(V + E) | O(V) | Requires DAG. Detects cycles when no valid order exists. |
| Kruskal MST | O(E log E) | O(V) | Sort edges + union-find. O(E log E) = O(E log V). |
| Prim MST (binary heap) | O((V + E) log V) | O(V) | Grow tree from a start vertex via min-edge cuts. |
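The Dijkstra row assumes the standard binary-heap formulation with lazy deletion. A minimal sketch using `heapq` - the adjacency-list shape and function name are assumptions for illustration:

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances from src on a graph with non-negative weights.
    adj: {node: [(neighbor, weight), ...]}. O((V + E) log V) with a binary heap."""
    dist = {src: 0}
    pq = [(0, src)]  # (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: lazy deletion instead of decrease-key
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

The `continue` on stale entries is what substitutes for the decrease-key operation that binary heaps lack; duplicate heap entries are harmless because the first pop of a node carries its final distance.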
Classic problem-pattern complexities
Quick sanity checks: if your solution to one of these is asymptotically worse, the interviewer is looking for the lower bound.
| Operation | Time | Space | Notes |
|---|---|---|---|
| Two-sum (hash set) | O(n) | O(n) | Beats the brute O(n^2) and the sort-then-two-pointer O(n log n). |
| Sliding window (fixed / variable) | O(n) | O(k) | Each element enters and leaves the window at most once. k = window contents (often O(1) or O(alphabet)). |
| Two pointers (sorted) | O(n) | O(1) | After an O(n log n) sort if needed. 3-Sum is O(n^2) overall. |
| Top-k (min-heap of size k) | O(n log k) | O(k) | Beats full sort O(n log n) when k << n. Quickselect averages O(n) but is O(n^2) worst. |
| Quickselect (kth smallest) | O(n) avg / O(n^2) worst | O(1) | Median-of-medians pivot makes worst-case O(n) (high constant). |
| Merge k sorted lists (heap) | O(N log k) | O(k) | N = total elements, k = number of lists. |
| Subset / power-set generation | O(n * 2^n) | O(n) | There are 2^n subsets, each up to length n to write down. The lower bound is just the output size. |
| Permutation generation | O(n * n!) | O(n) | n! permutations, each length n. |
| DP on grid (m x n) | O(m * n) | O(m * n) or O(n) rolled | Many grid DPs (unique paths, edit distance, knapsack-on-grid) collapse to O(min(m, n)) space. |
| 0/1 Knapsack (W capacity) | O(n * W) | O(W) | Pseudo-polynomial: scales with the value of W, not the bit-length. Not strictly polynomial. |
| Edit distance (Levenshtein) | O(n * m) | O(min(n, m)) | Two-row rolling DP for the space win. |
| Trie autocomplete (prefix p) | O(p + k) | O(N * sigma) | p = prefix length, k = number of suggestions returned. N = total stored chars. |
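The top-k row is a frequent follow-up, so here is a minimal sketch of the size-k min-heap approach (function name is illustrative):

```python
import heapq

def top_k(nums, k):
    """Largest k elements in O(n log k) time and O(k) space.
    Keeps a min-heap of the k best seen so far; heap root is the cutoff."""
    heap = []
    for x in nums:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            heapq.heapreplace(heap, x)  # pop current smallest, push x
    return sorted(heap, reverse=True)
```

Each element costs at most one O(log k) heap operation, which is why this beats a full O(n log n) sort when k << n.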
Common pitfalls interviewers will catch
- Quoting hash map operations as O(1) worst-case. They are O(1) average, O(n) worst when the hash function is adversarial.
- Saying binary search is O(log n) on an unsorted array. It requires sorted input - sort first (O(n log n)) or use a different structure.
- Forgetting that sorting the input is O(n log n), which often dominates the rest of your algorithm. State the total complexity.
- Confusing space complexity with auxiliary space. Recursion depth is part of space - an algorithm that recurses O(log n) deep uses at least O(log n) stack space even if it allocates nothing else.
- Calling 0/1 knapsack "polynomial." It is pseudo-polynomial - O(n * W) where W is a number, not the size of the input encoding.
- Assuming Dijkstra works with negative edges. It does not. Use Bellman-Ford, or reweight the graph (Johnson's algorithm) when there are negative edges but no negative cycles.
- Quoting BFS as O(V) - it is O(V + E). For dense graphs (E ~ V^2), edges dominate.
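The knapsack pitfall above pairs naturally with the O(W)-space rolling DP from the patterns table. A minimal sketch with toy data - the function name and inputs are illustrative:

```python
def knapsack(weights, values, W):
    """0/1 knapsack: max value within capacity W.
    O(n * W) time, O(W) space - pseudo-polynomial, since W is a value,
    not the bit-length of the input."""
    dp = [0] * (W + 1)  # dp[c] = best value achievable with capacity c
    for w, v in zip(weights, values):
        # Iterate capacity downward so each item is taken at most once;
        # an upward loop would turn this into unbounded knapsack.
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]
```

The downward capacity loop is the whole trick: `dp[c - w]` must still refer to the previous item's row, which the reverse order guarantees.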
Other cheat sheets
Interview Patterns
The recurring shapes - sliding window, two pointers, fast/slow, BFS/DFS, backtracking, DP, divide & conquer, binary search variants, union-find, topological sort. Each entry: when to reach for it, the template, complexity, and which classic problems use it.
Design Tradeoffs
The recurring forks in system design interviews. CAP, PACELC, sync vs async, push vs pull, SQL vs NoSQL, sharding shapes, consistency models, cache strategies, idempotency, and rate limiting. For each, the options and when to choose each.
Unix Essentials
Filesystem layout, the commands you actually use (find / grep / awk / sed / xargs), processes and signals, networking, permissions, basic shell scripting, and a vi survival kit.
Practice the patterns
Reading is the floor. The signal in interviews comes from working problems out loud and defending your tradeoffs. Spin up an AI mock interview or run a coding challenge to put these to work.