gitGood.dev

Data Structures and Algorithms Cheat Sheet for Coding Interviews

Pat
28 min read

You don't need to memorize every algorithm ever invented. You need to know the right data structure for the right problem, understand the tradeoffs, and recognize the patterns. That's it.

This is the reference I wish I had when I was grinding through interview prep. Pin it, bookmark it, print it out - whatever works. It covers the stuff that actually shows up in interviews, not obscure academic trivia.

Let's get into it.

Big O Notation - The Only Math You Need

Big O tells you how your algorithm scales. Not how fast it runs on your laptop - how it behaves as input grows. That's the distinction interviewers care about.

Common Complexities

Big O | Name | Example | Feel
O(1) | Constant | Hash map lookup | Instant, no matter the size
O(log n) | Logarithmic | Binary search | Cuts problem in half each step
O(n) | Linear | Loop through array | Touch every element once
O(n log n) | Linearithmic | Merge sort | Efficient sorting
O(n²) | Quadratic | Nested loops | Gets painful fast
O(2ⁿ) | Exponential | Recursive subsets | Blows up quickly
O(n!) | Factorial | Permutations | Only for tiny inputs

How to Analyze Your Own Code

The rules are simpler than people make them:

  1. Single loop over input - O(n)
  2. Nested loop over same input - O(n²)
  3. Loop that halves the input each time - O(log n)
  4. Loop inside a halving operation - O(n log n)
  5. Two separate loops (not nested) - O(n + n) = O(n)
  6. Hash map lookup inside a loop - O(n) total, because the lookup is O(1)

Drop constants and lower-order terms. O(2n + 5) is just O(n). O(n² + n) is just O(n²).

# O(n) - single pass
def find_max(arr):
    max_val = arr[0]
    for num in arr:
        max_val = max(max_val, num)
    return max_val

# O(n^2) - nested loops
def has_duplicate_pair(arr):
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False

# O(log n) - halving each step
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

Space Complexity Matters Too

Don't forget space. Interviewers will ask "can you do it in-place?" regularly.

Space | What It Means | Example
O(1) | Fixed extra space | Swapping variables
O(n) | Proportional to input | Creating a copy of the array
O(n²) | Matrix storage | 2D DP table
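To make "in-place" concrete, here's a sketch of an O(1)-space operation: reversing an array with two pointers and no extra allocation.

```python
# Reverse an array in-place - O(n) time, O(1) extra space
def reverse_in_place(arr):
    left, right = 0, len(arr) - 1
    while left < right:
        # Swap the ends, then move both pointers inward
        arr[left], arr[right] = arr[right], arr[left]
        left += 1
        right -= 1
    return arr
```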

Arrays & Strings

Arrays are the most common data structure in interviews. Period. If you're comfortable with arrays, you can handle at least half of what gets thrown at you.

Operations and Complexities

Operation | Array | Dynamic Array (Python list / JS array)
Access by index | O(1) | O(1)
Search (unsorted) | O(n) | O(n)
Search (sorted) | O(log n) | O(log n)
Insert at end | N/A | O(1) amortized
Insert at beginning | N/A | O(n)
Delete at end | N/A | O(1)
Delete at beginning | N/A | O(n)
Delete by value | O(n) | O(n)

Pattern: Two Pointers

The bread and butter of array problems. Use two pointers when you need to find pairs, remove duplicates, or compare elements from different positions.

# Two Sum (sorted array) - O(n)
def two_sum_sorted(arr, target):
    left, right = 0, len(arr) - 1
    while left < right:
        current = arr[left] + arr[right]
        if current == target:
            return [left, right]
        elif current < target:
            left += 1
        else:
            right -= 1
    return []

# Remove duplicates in-place - O(n) time, O(1) space
def remove_duplicates(arr):
    if not arr:
        return 0
    write = 1
    for read in range(1, len(arr)):
        if arr[read] != arr[read - 1]:
            arr[write] = arr[read]
            write += 1
    return write

// Container With Most Water - O(n)
function maxArea(height) {
  let left = 0, right = height.length - 1;
  let maxWater = 0;

  while (left < right) {
    const width = right - left;
    const h = Math.min(height[left], height[right]);
    maxWater = Math.max(maxWater, width * h);

    if (height[left] < height[right]) {
      left++;
    } else {
      right--;
    }
  }
  return maxWater;
}

Pattern: Sliding Window

Use this when you need to track a contiguous subarray or substring. Fixed-size or variable-size - same core idea.

# Maximum sum subarray of size k (fixed window)
def max_subarray_sum(arr, k):
    window_sum = sum(arr[:k])
    max_sum = window_sum

    for i in range(k, len(arr)):
        window_sum += arr[i] - arr[i - k]
        max_sum = max(max_sum, window_sum)

    return max_sum

# Longest substring without repeating characters (variable window)
def length_of_longest_substring(s):
    char_index = {}
    left = 0
    max_length = 0

    for right, char in enumerate(s):
        if char in char_index and char_index[char] >= left:
            left = char_index[char] + 1
        char_index[char] = right
        max_length = max(max_length, right - left + 1)

    return max_length

// Minimum window substring (variable window) - O(n)
function minWindow(s, t) {
  const need = {};
  for (const c of t) need[c] = (need[c] || 0) + 1;

  let have = 0, required = Object.keys(need).length;
  let left = 0, result = "", resultLen = Infinity;
  const window = {};

  for (let right = 0; right < s.length; right++) {
    const c = s[right];
    window[c] = (window[c] || 0) + 1;

    if (need[c] && window[c] === need[c]) have++;

    while (have === required) {
      if (right - left + 1 < resultLen) {
        result = s.substring(left, right + 1);
        resultLen = right - left + 1;
      }
      const leftChar = s[left];
      window[leftChar]--;
      if (need[leftChar] && window[leftChar] < need[leftChar]) have--;
      left++;
    }
  }
  return result;
}

String-Specific Tips

  • Strings are immutable in both Python and JavaScript. Concatenation in a loop is O(n²) - use "".join() in Python or array + join in JS.
  • For anagram/character counting problems, use a frequency map or a fixed-size array of 26.
  • Sorting a string is a quick anagram check: O(n log n).
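A sketch of the first tip in action - the quadratic trap next to the linear fix:

```python
# O(n^2): each += copies the entire string built so far
def build_slow(words):
    s = ""
    for w in words:
        s += w
    return s

# O(n): collect pieces, join once at the end
def build_fast(words):
    return "".join(words)
```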

Hash Maps & Sets

If the brute force is O(n²) and involves searching, a hash map probably gets you to O(n). This is the single most important optimization pattern in interviews.

When to Reach for a Hash Map

  • You need O(1) lookups
  • You're counting frequencies
  • You need to check "have I seen this before?"
  • You're mapping one value to another
  • The brute force involves nested loops where the inner loop is searching

Operations and Complexities

Operation | Average | Worst Case
Insert | O(1) | O(n)
Delete | O(1) | O(n)
Lookup | O(1) | O(n)
Space | O(n) | O(n)

Worst case is O(n) due to collisions, but in practice (and in interviews) you treat everything as O(1).

Pattern: Frequency Counting

# Check if two strings are anagrams - O(n)
def is_anagram(s, t):
    if len(s) != len(t):
        return False
    count = {}
    for c in s:
        count[c] = count.get(c, 0) + 1
    for c in t:
        count[c] = count.get(c, 0) - 1
        if count[c] < 0:
            return False
    return True

# Top K frequent elements - O(n)
from collections import Counter

def top_k_frequent(nums, k):
    count = Counter(nums)
    # Bucket sort approach - O(n)
    buckets = [[] for _ in range(len(nums) + 1)]
    for num, freq in count.items():
        buckets[freq].append(num)

    result = []
    for i in range(len(buckets) - 1, -1, -1):
        for num in buckets[i]:
            result.append(num)
            if len(result) == k:
                return result
    return result

Pattern: Two Sum (The Classic)

This is the pattern that turns O(n²) into O(n). Learn it cold.

// Two Sum - O(n) time, O(n) space
function twoSum(nums, target) {
  const map = new Map();

  for (let i = 0; i < nums.length; i++) {
    const complement = target - nums[i];
    if (map.has(complement)) {
      return [map.get(complement), i];
    }
    map.set(nums[i], i);
  }
  return [];
}

// Group Anagrams - O(n * k log k) where k is max string length (sorting each string)
function groupAnagrams(strs) {
  const map = new Map();

  for (const s of strs) {
    const key = [...s].sort().join("");
    if (!map.has(key)) map.set(key, []);
    map.get(key).push(s);
  }

  return [...map.values()];
}

Sets for Uniqueness

# Longest consecutive sequence - O(n)
def longest_consecutive(nums):
    num_set = set(nums)
    longest = 0

    for num in num_set:
        # Only start counting from the beginning of a sequence
        if num - 1 not in num_set:
            length = 1
            while num + length in num_set:
                length += 1
            longest = max(longest, length)

    return longest

Collision Handling (Conceptual)

You probably won't implement a hash table from scratch, but know the concepts:

  • Chaining - each bucket holds a linked list of entries. Simple, works well.
  • Open addressing - on collision, probe for the next open slot (linear probing, quadratic probing, double hashing).
  • Load factor - ratio of entries to buckets. When it gets too high, the table resizes (usually doubles).
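You probably won't write one, but a toy chaining table makes the concept concrete. A sketch only - the class name is mine, and real implementations also track load factor and resize:

```python
# Toy hash map with chaining - each bucket holds a list of (key, value) pairs
class ChainedHashMap:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _bucket(self, key):
        # Hash the key into a bucket index
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))  # collision? just append to the chain

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return None
```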

Stacks & Queues

Stacks and queues show up in more problems than people expect. If you see "matching pairs", "nearest greater/smaller", or "process in order" - think stack or queue.

Operations

Operation | Stack | Queue
Push/Enqueue | O(1) | O(1)
Pop/Dequeue | O(1) | O(1)
Peek | O(1) | O(1)
Search | O(n) | O(n)

Stack: Last In, First Out

# Valid parentheses - O(n)
def is_valid(s):
    stack = []
    pairs = {')': '(', '}': '{', ']': '['}

    for char in s:
        if char in pairs:
            if not stack or stack[-1] != pairs[char]:
                return False
            stack.pop()
        else:
            stack.append(char)

    return len(stack) == 0

// Evaluate Reverse Polish Notation - O(n)
function evalRPN(tokens) {
  const stack = [];
  const ops = {
    "+": (a, b) => a + b,
    "-": (a, b) => a - b,
    "*": (a, b) => a * b,
    "/": (a, b) => Math.trunc(a / b),
  };

  for (const token of tokens) {
    if (ops[token]) {
      const b = stack.pop();
      const a = stack.pop();
      stack.push(ops[token](a, b));
    } else {
      stack.push(Number(token));
    }
  }
  return stack[0];
}

Pattern: Monotonic Stack

This pattern solves "next greater element" and "largest rectangle" type problems. The stack maintains elements in increasing or decreasing order.

# Next Greater Element - O(n)
def next_greater_element(nums):
    result = [-1] * len(nums)
    stack = []  # stores indices

    for i in range(len(nums)):
        while stack and nums[i] > nums[stack[-1]]:
            idx = stack.pop()
            result[idx] = nums[i]
        stack.append(i)

    return result

# Daily Temperatures - O(n)
def daily_temperatures(temperatures):
    result = [0] * len(temperatures)
    stack = []

    for i, temp in enumerate(temperatures):
        while stack and temp > temperatures[stack[-1]]:
            idx = stack.pop()
            result[idx] = i - idx
        stack.append(i)

    return result

Queue: First In, First Out

Queues are essential for BFS (breadth-first search). In Python, use collections.deque - not a regular list, since list.pop(0) is O(n).

from collections import deque

# BFS with a queue - see the Trees and Graphs sections below
queue = deque()
queue.append(item)      # enqueue
item = queue.popleft()  # dequeue - O(1)

// In JavaScript, arrays work as queues but shift() is O(n)
// For interviews, this is usually fine
const queue = [];
queue.push(item);        // enqueue
const item = queue.shift(); // dequeue

Linked Lists

Linked lists show up less in real-world code these days, but they're still a favorite in interviews. The key insight: pointer manipulation is where most people mess up. Draw it out on paper.

Operations and Complexities

Operation | Singly Linked | Doubly Linked
Access by index | O(n) | O(n)
Search | O(n) | O(n)
Insert at head | O(1) | O(1)
Insert at tail | O(n) / O(1)* | O(1)
Delete at head | O(1) | O(1)
Delete at tail | O(n) | O(1)
Delete by value | O(n) | O(n)

*O(1) if you maintain a tail pointer.

Pattern: Fast/Slow Pointer (Floyd's)

This pattern detects cycles and finds middle elements. Two pointers move at different speeds.

# Detect cycle - O(n) time, O(1) space
def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            return True
    return False

# Find middle of linked list
def find_middle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow

// Find where cycle begins
function detectCycle(head) {
  let slow = head, fast = head;

  while (fast && fast.next) {
    slow = slow.next;
    fast = fast.next.next;
    if (slow === fast) {
      // Reset one pointer to head
      slow = head;
      while (slow !== fast) {
        slow = slow.next;
        fast = fast.next;
      }
      return slow; // cycle start
    }
  }
  return null;
}

Pattern: Reversal

Reversing a linked list is a fundamental operation. It shows up on its own and as a building block for other problems.

# Reverse a linked list - iterative - O(n) time, O(1) space
def reverse_list(head):
    prev = None
    current = head
    while current:
        next_node = current.next
        current.next = prev
        prev = current
        current = next_node
    return prev

// Reverse a linked list - recursive
function reverseList(head) {
  if (!head || !head.next) return head;

  const newHead = reverseList(head.next);
  head.next.next = head;
  head.next = null;
  return newHead;
}

// Merge two sorted lists - O(n + m)
function mergeTwoLists(l1, l2) {
  const dummy = { val: 0, next: null };
  let current = dummy;

  while (l1 && l2) {
    if (l1.val <= l2.val) {
      current.next = l1;
      l1 = l1.next;
    } else {
      current.next = l2;
      l2 = l2.next;
    }
    current = current.next;
  }
  current.next = l1 || l2;
  return dummy.next;
}

Pro Tips for Linked List Problems

  • Use a dummy node when the head might change (merging, deleting, inserting at front)
  • Draw it out - seriously, pen and paper. Pointer bugs are visual problems.
  • Count your pointer assignments - if you're doing more than 3-4 per step, you're probably overcomplicating it.
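Here's the dummy-node trick from the first tip sketched out - deleting every node with a target value, where the head itself might get deleted:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

# Remove all nodes with a target value - O(n)
# The dummy node means deleting the head is no special case
def remove_elements(head, target):
    dummy = ListNode(0, head)
    current = dummy
    while current.next:
        if current.next.val == target:
            current.next = current.next.next  # unlink the matching node
        else:
            current = current.next
    return dummy.next
```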

Trees

Trees are everywhere in interviews. Binary trees, BSTs, and n-ary trees are the most common. The good news: most tree problems follow the same recursive template.

Binary Tree Traversals

Traversal | Order | Use Case
In-order | Left, Root, Right | BST gives sorted order
Pre-order | Root, Left, Right | Serialize/copy a tree
Post-order | Left, Right, Root | Delete tree, evaluate expressions
Level-order | Level by level | BFS, shortest path in unweighted tree

# All four traversals
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

# In-order (recursive)
def inorder(root):
    if not root:
        return []
    return inorder(root.left) + [root.val] + inorder(root.right)

# Pre-order (iterative)
def preorder(root):
    if not root:
        return []
    result, stack = [], [root]
    while stack:
        node = stack.pop()
        result.append(node.val)
        if node.right:
            stack.append(node.right)
        if node.left:
            stack.append(node.left)
    return result

# Post-order (recursive)
def postorder(root):
    if not root:
        return []
    return postorder(root.left) + postorder(root.right) + [root.val]

# Level-order (BFS)
from collections import deque

def level_order(root):
    if not root:
        return []
    result = []
    queue = deque([root])
    while queue:
        level = []
        for _ in range(len(queue)):
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(level)
    return result

DFS vs BFS on Trees

Aspect | DFS | BFS
Data structure | Stack (or recursion) | Queue
Space | O(h) where h = height | O(w) where w = max width
Best for | Deep trees, path problems | Shortest path, level-by-level
Implementation | Usually simpler (recursion) | Iterative with queue

BST Operations

// Search in BST - O(log n) average, O(n) worst
function searchBST(root, val) {
  if (!root || root.val === val) return root;
  return val < root.val
    ? searchBST(root.left, val)
    : searchBST(root.right, val);
}

// Validate BST - O(n)
function isValidBST(root, min = -Infinity, max = Infinity) {
  if (!root) return true;
  if (root.val <= min || root.val >= max) return false;
  return (
    isValidBST(root.left, min, root.val) &&
    isValidBST(root.right, root.val, max)
  );
}

// Lowest Common Ancestor in BST - O(log n) average, O(n) worst
function lowestCommonAncestor(root, p, q) {
  if (p.val < root.val && q.val < root.val) {
    return lowestCommonAncestor(root.left, p, q);
  }
  if (p.val > root.val && q.val > root.val) {
    return lowestCommonAncestor(root.right, p, q);
  }
  return root;
}

Common Tree Patterns

# Maximum depth - O(n)
def max_depth(root):
    if not root:
        return 0
    return 1 + max(max_depth(root.left), max_depth(root.right))

# Invert binary tree (the famous one)
def invert_tree(root):
    if not root:
        return None
    root.left, root.right = invert_tree(root.right), invert_tree(root.left)
    return root

# Diameter of binary tree
def diameter_of_binary_tree(root):
    diameter = 0

    def height(node):
        nonlocal diameter
        if not node:
            return 0
        left = height(node.left)
        right = height(node.right)
        diameter = max(diameter, left + right)
        return 1 + max(left, right)

    height(root)
    return diameter

Recursive Template

Most tree problems follow this skeleton:

def solve(root):
    # Base case
    if not root:
        return base_value

    # Recursive calls
    left = solve(root.left)
    right = solve(root.right)

    # Combine results
    return combine(root.val, left, right)

Graphs

Graphs are where interviews get hard. But the building blocks are the same: BFS, DFS, and knowing which representation to use.

Representations

Adjacency List - most common in interviews. Better for sparse graphs.

# Dictionary-based (Python)
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C']
}

# Or build from edges
def build_graph(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, []).append(u)  # undirected
    return graph
// Map-based (JavaScript)
function buildGraph(edges) {
  const graph = new Map();
  for (const [u, v] of edges) {
    if (!graph.has(u)) graph.set(u, []);
    if (!graph.has(v)) graph.set(v, []);
    graph.get(u).push(v);
    graph.get(v).push(u); // undirected
  }
  return graph;
}

Adjacency Matrix - better for dense graphs or when you need O(1) edge lookup.

Aspect | Adjacency List | Adjacency Matrix
Space | O(V + E) | O(V²)
Add edge | O(1) | O(1)
Check edge | O(degree) | O(1)
Iterate neighbors | O(degree) | O(V)
Best for | Sparse graphs | Dense graphs
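For comparison, a sketch of building the matrix form, assuming nodes are labeled 0 to n-1:

```python
# Adjacency matrix for an undirected graph with nodes 0..n-1
def build_matrix(n, edges):
    matrix = [[0] * n for _ in range(n)]
    for u, v in edges:
        matrix[u][v] = 1
        matrix[v][u] = 1  # undirected: mirror the edge
    return matrix
```

Checking whether an edge exists is then a single `matrix[u][v]` lookup - the O(1) edge check from the table above.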

BFS on Graphs

from collections import deque

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    result = []

    while queue:
        node = queue.popleft()
        result.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

    return result

# Number of islands - O(m * n)
def num_islands(grid):
    if not grid:
        return 0

    rows, cols = len(grid), len(grid[0])
    count = 0

    def bfs(r, c):
        queue = deque([(r, c)])
        grid[r][c] = '0'
        while queue:
            row, col = queue.popleft()
            for dr, dc in [(1,0),(-1,0),(0,1),(0,-1)]:
                nr, nc = row + dr, col + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '1':
                    grid[nr][nc] = '0'
                    queue.append((nr, nc))

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                bfs(r, c)
                count += 1

    return count

DFS on Graphs

// DFS - iterative
function dfs(graph, start) {
  const visited = new Set();
  const stack = [start];
  const result = [];

  while (stack.length) {
    const node = stack.pop();
    if (visited.has(node)) continue;
    visited.add(node);
    result.push(node);

    for (const neighbor of (graph.get(node) || [])) {
      if (!visited.has(neighbor)) {
        stack.push(neighbor);
      }
    }
  }
  return result;
}

// Clone graph - O(V + E)
function cloneGraph(node) {
  if (!node) return null;

  const map = new Map();

  function dfs(original) {
    if (map.has(original)) return map.get(original);

    const copy = { val: original.val, neighbors: [] };
    map.set(original, copy);

    for (const neighbor of original.neighbors) {
      copy.neighbors.push(dfs(neighbor));
    }
    return copy;
  }

  return dfs(node);
}

Topological Sort

Use for dependency ordering. Only works on DAGs (directed acyclic graphs).

# Kahn's algorithm (BFS-based) - O(V + E)
from collections import deque

def topological_sort(num_courses, prerequisites):
    graph = {i: [] for i in range(num_courses)}
    in_degree = {i: 0 for i in range(num_courses)}

    for course, prereq in prerequisites:
        graph[prereq].append(course)
        in_degree[course] += 1

    queue = deque([n for n in in_degree if in_degree[n] == 0])
    order = []

    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)

    return order if len(order) == num_courses else []  # empty = cycle

Cycle Detection

# Detect cycle in directed graph using DFS colors
def has_cycle(graph, num_nodes):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * num_nodes

    def dfs(node):
        color[node] = GRAY
        for neighbor in graph.get(node, []):
            if color[neighbor] == GRAY:
                return True  # back edge = cycle
            if color[neighbor] == WHITE and dfs(neighbor):
                return True
        color[node] = BLACK
        return False

    return any(color[i] == WHITE and dfs(i) for i in range(num_nodes))

Dijkstra's Algorithm (Shortest Path)

For weighted graphs with non-negative edges.

import heapq

def dijkstra(graph, start):
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    heap = [(0, start)]

    while heap:
        dist, node = heapq.heappop(heap)
        if dist > distances[node]:
            continue
        for neighbor, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distances[neighbor]:
                distances[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))

    return distances

Time: O((V + E) log V) with a min-heap.


Sorting Algorithms

You probably won't implement sorting from scratch in an interview, but you need to know the tradeoffs and when each one shines.

Comparison Table

Algorithm | Time (Best) | Time (Average) | Time (Worst) | Space | Stable?
Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes
Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No
Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | No
Counting Sort | O(n + k) | O(n + k) | O(n + k) | O(k) | Yes
Radix Sort | O(d * n) | O(d * n) | O(d * n) | O(n + k) | Yes
Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | Yes
Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | Yes

When to Use Which

  • Default choice: Merge sort (guaranteed O(n log n), stable)
  • In-place needed: Quick sort or heap sort
  • Small input / nearly sorted: Insertion sort
  • Integer keys in known range: Counting sort or radix sort
  • Need stability: Merge sort or counting sort
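Insertion sort earns its spot on that list but doesn't appear in the implementations below, so here's a sketch - on nearly sorted input the inner while loop barely runs, which is where the near-O(n) behavior comes from:

```python
# Insertion sort - O(n^2) worst, close to O(n) on nearly sorted input, stable
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements right until key's slot opens up
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
```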

Built-in Sort Behavior

Language | Algorithm | Stable? | Notes
Python sorted() / .sort() | Timsort | Yes | Hybrid merge + insertion sort
JavaScript .sort() | Timsort (V8) | Yes | Was unstable before ES2019
Java Arrays.sort() | Dual-pivot quicksort (primitives), Timsort (objects) | Objects: yes | Different for primitives vs objects

Quick Implementations

# Merge sort - O(n log n) time, O(n) space
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

// Quick sort - O(n log n) average, O(n^2) worst
// Note: this readable version uses O(n) extra space; the true in-place
// version partitions the array around the pivot instead
function quickSort(arr) {
  if (arr.length <= 1) return arr;

  const pivot = arr[Math.floor(arr.length / 2)];
  const left = arr.filter(x => x < pivot);
  const mid = arr.filter(x => x === pivot);
  const right = arr.filter(x => x > pivot);

  return [...quickSort(left), ...mid, ...quickSort(right)];
}

Dynamic Programming

DP scares people, but it's just "recursion + caching." If you can write the recursive solution, you can convert it to DP.

Top-Down vs Bottom-Up

Approach | How It Works | Pros | Cons
Top-down (memoization) | Recursive + cache results | Natural to write, only solves needed subproblems | Stack overflow risk, recursion overhead
Bottom-up (tabulation) | Iterative, build from base case | No stack issues, often easier to optimize space | Must figure out the iteration order

How to Identify DP Problems

The problem likely needs DP if:

  1. It asks for optimal (min/max), count, or possibility (can you...?)
  2. You have overlapping subproblems (same computation repeated)
  3. The problem has optimal substructure (optimal solution contains optimal solutions to subproblems)

Common signals in the problem statement: "minimum cost", "number of ways", "longest/shortest", "is it possible".

Classic Pattern: Fibonacci (The Template)

# Top-down (memoization)
def fib_memo(n, memo={}):
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

# Bottom-up (tabulation)
def fib_tab(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

# Space-optimized bottom-up
def fib_optimal(n):
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b

Classic Pattern: 0/1 Knapsack

# Given weights and values, maximize value within capacity
def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # skip item
            if weights[i - 1] <= w:
                dp[i][w] = max(
                    dp[i][w],
                    dp[i - 1][w - weights[i - 1]] + values[i - 1]  # take item
                )

    return dp[n][capacity]

Classic Pattern: Longest Common Subsequence

function longestCommonSubsequence(text1, text2) {
  const m = text1.length, n = text2.length;
  const dp = Array.from({ length: m + 1 }, () => Array(n + 1).fill(0));

  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (text1[i - 1] === text2[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + 1;
      } else {
        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
      }
    }
  }
  return dp[m][n];
}

Classic Pattern: Coin Change

# Minimum coins to make amount - O(amount * len(coins))
def coin_change(coins, amount):
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0

    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i and dp[i - coin] + 1 < dp[i]:
                dp[i] = dp[i - coin] + 1

    return dp[amount] if dp[amount] != float('inf') else -1

// Number of ways to make change (order doesn't matter)
function coinChangeWays(coins, amount) {
  const dp = new Array(amount + 1).fill(0);
  dp[0] = 1;

  for (const coin of coins) {
    for (let i = coin; i <= amount; i++) {
      dp[i] += dp[i - coin];
    }
  }
  return dp[amount];
}

DP Problem-Solving Framework

  1. Define the state - what variables describe a subproblem?
  2. Write the recurrence - how does the current state relate to smaller states?
  3. Identify the base case - what's the smallest subproblem you can solve directly?
  4. Determine iteration order - make sure you solve dependencies first
  5. Optimize space if possible - many 2D DPs only need the previous row
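As a sketch of the framework in action, take House Robber (max sum of non-adjacent houses): the state is "best loot through house i", the recurrence is rob-or-skip, the base case is zero houses, and step 5 collapses the 1D table down to two variables:

```python
# House Robber - O(n) time, O(1) space after the step-5 optimization
def rob(nums):
    prev, curr = 0, 0  # best loot up to house i-2 and i-1
    for num in nums:
        # Recurrence: rob this house (prev + num) or skip it (curr)
        prev, curr = curr, max(curr, prev + num)
    return curr
```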

The 15 Patterns That Solve Everything

Here's the real cheat sheet. These patterns cover roughly 90% of coding interview problems. When you see a new problem, match it to one of these.

# | Pattern | When to Use | Key Signal
1 | Two Pointers | Sorted array, find pair, in-place operations | "Two elements that...", sorted input
2 | Sliding Window | Contiguous subarray/substring with constraint | "Longest/shortest subarray with..."
3 | Hash Map | O(1) lookups, counting, grouping | Brute force has nested search loop
4 | Binary Search | Sorted array or monotonic condition | Sorted input, "minimum that satisfies..."
5 | DFS (Backtracking) | Generate combinations, permutations, paths | "All possible...", "generate all..."
6 | BFS | Shortest path, level-order, nearest | "Minimum steps", "shortest path"
7 | Monotonic Stack | Next greater/smaller element, histogram | "Next greater", "largest rectangle"
8 | Top K Elements | Find k largest/smallest/most frequent | "K most frequent", "Kth largest"
9 | Merge Intervals | Overlapping intervals | Given list of intervals
10 | Fast/Slow Pointer | Cycle detection, middle of list | Linked list cycle, Floyd's
11 | Dynamic Programming | Optimization, counting paths/ways | "Minimum cost", "number of ways"
12 | Union Find | Connected components, grouping | "Connected", "same group"
13 | Topological Sort | Dependency ordering | "Prerequisites", "build order"
14 | Trie | Prefix matching, autocomplete | "Starts with", "word search"
15 | Bit Manipulation | Single number, power of 2, XOR tricks | "Without extra space", "appears once"

Quick Pattern Recognition Examples

"Find two numbers that add to target" - Hash Map (unsorted) or Two Pointers (sorted)

"Longest substring with at most K distinct chars" - Sliding Window

"Return all subsets of an array" - DFS/Backtracking

"Shortest path in unweighted graph" - BFS

"Merge overlapping meetings" - Sort + Merge Intervals

"Find if a path exists between two nodes" - BFS or DFS (or Union Find)

"Minimum coins to make change" - Dynamic Programming

"Kth largest element" - Heap (Top K pattern)

"Course schedule / build order" - Topological Sort

"Implement autocomplete" - Trie
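"Kth largest element" points at a heap, and that's the one pattern above with no code anywhere else on this page - so here's a sketch using Python's heapq (a min-heap):

```python
import heapq

# Kth largest element - O(n log k) time, O(k) space
def kth_largest(nums, k):
    heap = []
    for num in nums:
        heapq.heappush(heap, num)
        if len(heap) > k:
            heapq.heappop(heap)  # evict the smallest; heap keeps the k largest
    return heap[0]  # smallest of the k largest = kth largest
```

Capping the heap at size k is what gets you O(n log k) instead of the O(n log n) of sorting everything.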

Bonus Patterns Worth Knowing

# Binary Search on answer space
# When the answer is a number and you can check "is X valid?"
def min_eating_speed(piles, h):
    left, right = 1, max(piles)
    while left < right:
        mid = (left + right) // 2
        hours = sum((p + mid - 1) // mid for p in piles)
        if hours <= h:
            right = mid
        else:
            left = mid + 1
    return left

// Merge Intervals
function mergeIntervals(intervals) {
  intervals.sort((a, b) => a[0] - b[0]);
  const merged = [intervals[0]];

  for (let i = 1; i < intervals.length; i++) {
    const last = merged[merged.length - 1];
    if (intervals[i][0] <= last[1]) {
      last[1] = Math.max(last[1], intervals[i][1]);
    } else {
      merged.push(intervals[i]);
    }
  }
  return merged;
}

# Union Find template
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        px, py = self.find(x), self.find(y)
        if px == py:
            return False
        if self.rank[px] < self.rank[py]:
            px, py = py, px
        self.parent[py] = px
        if self.rank[px] == self.rank[py]:
            self.rank[px] += 1
        return True

Quick Reference Card

When you're about to walk into an interview, glance at this:

Problem Type | First Try | Time Target
Array search/pair | Hash map or two pointers | O(n)
Subarray/substring | Sliding window | O(n)
Sorted array | Binary search | O(log n)
Tree problem | Recursive DFS | O(n)
Shortest path (unweighted) | BFS | O(V + E)
Shortest path (weighted) | Dijkstra's | O((V+E) log V)
Connected components | Union Find or DFS | O(V + E)
Optimization/counting | DP | O(n * state)
Top K anything | Heap | O(n log k)

Complexity Cheat Sheet

Data Structure | Access | Search | Insert | Delete
Array | O(1) | O(n) | O(n) | O(n)
Hash Map | N/A | O(1) | O(1) | O(1)
BST (balanced) | O(log n) | O(log n) | O(log n) | O(log n)
Heap | O(1) top | O(n) | O(log n) | O(log n)
Stack | O(n) | O(n) | O(1) | O(1)
Queue | O(n) | O(n) | O(1) | O(1)
Linked List | O(n) | O(n) | O(1)* | O(1)*

*At known position.


Bookmark this page. Come back before every interview. The frameworks and languages will keep changing - React today, something else tomorrow - but these fundamentals don't. Arrays, trees, graphs, and the patterns that connect them are the same ones people were using twenty years ago. They'll be the same twenty years from now.

Master these, and you'll walk into any coding interview knowing you have the tools to solve whatever they throw at you.