
    15 Coding Interview Patterns Every Engineer Must Know

    Hoppers AI Team·April 8, 2026·16 min read

    Why Patterns Matter More Than Problem Count

    There's a persistent myth in interview prep: if you grind enough LeetCode problems, you'll be ready. Engineers solve 400, 600, sometimes 800 problems and still freeze when they see something unfamiliar. The reason is straightforward — they practiced problems without learning patterns.

    Patterns are the transferable units. Two sliding window problems share 80% of their structure. If you understand the pattern, a new variation takes minutes to decode. If you don't, every problem is a fresh puzzle. The difference between an engineer who solves 200 problems with pattern awareness and one who solves 600 without it is stark. The first candidate adapts. The second one panics when the problem doesn't match something they've memorized.

    Here's the math that should change your preparation strategy: roughly 90% of coding interview questions at major tech companies map to 15 patterns. Not 15 problems — 15 patterns, each with dozens of variations. Master the patterns and you have a framework for hundreds of problems. Memorize individual solutions and you have exactly that many solutions, no more.

    The goal of pattern study isn't to recognize a problem instantly and recite a solution. It's to look at a new problem, notice structural similarities to a known pattern, and adapt the approach. That adaptation — visible, narrated, in real time — is exactly what interviewers are scoring.

    This article is your map. We cover all 15 patterns with enough depth that you know when to apply each one and why it works. Individual patterns get deep-dive treatment in subsequent chapters — for now, the goal is to build the complete mental index so you can pattern-match under pressure.

    The 15 Patterns: Complete Reference

    Before we dive into each pattern, here's the full reference table. Bookmark this — it's the cheat sheet you'll come back to during practice.

    | Pattern | When to Use (Trigger Signals) | Canonical Problem | Time Complexity |
    | --- | --- | --- | --- |
    | Sliding Window | Contiguous subarray/substring, fixed or variable size window | Longest Substring Without Repeating Characters | O(n) |
    | Two Pointers | Sorted array, pair/triplet search, comparing from both ends | Two Sum II (sorted input) | O(n) |
    | Fast & Slow Pointers | Cycle detection, finding middle, linked list problems | Linked List Cycle | O(n) |
    | Merge Intervals | Overlapping intervals, scheduling, time ranges | Merge Intervals | O(n log n) |
    | Cyclic Sort | Array with values in range [0, n] or [1, n], find missing/duplicate | Find the Missing Number | O(n) |
    | In-Place Reversal of Linked List | Reverse a portion or all of a linked list without extra space | Reverse Linked List II | O(n) |
    | BFS (Breadth-First Search) | Shortest path, level-order traversal, nearest neighbor | Binary Tree Level Order Traversal | O(V + E) |
    | DFS (Depth-First Search) | Path finding, exhaustive search, tree traversal, connected components | Number of Islands | O(V + E) |
    | Two Heaps | Find median in stream, balance two halves of data | Find Median from Data Stream | O(log n) per op |
    | Subsets / Backtracking | Generate all combinations, permutations, or subsets | Subsets | O(2^n) or O(n!) |
    | Modified Binary Search | Sorted/rotated array, search with constraints, find boundary | Search in Rotated Sorted Array | O(log n) |
    | Top K Elements | Find k-th largest/smallest, k most frequent | Top K Frequent Elements | O(n log k) |
    | K-Way Merge | Merge k sorted lists/arrays, smallest range from k lists | Merge K Sorted Lists | O(n log k) |
    | Dynamic Programming | Overlapping subproblems, optimal substructure, counting paths | Longest Increasing Subsequence | Varies |
    | Topological Sort | Dependency ordering, course prerequisites, build systems | Course Schedule | O(V + E) |

    Each Pattern in Detail

    1. Sliding Window

    The sliding window pattern maintains a subset of elements as a "window" that slides across an array or string. Instead of recalculating everything for each position, you add the new element entering the window and remove the one leaving. This turns an O(n*k) brute force into O(n).

    Trigger signals: The problem mentions contiguous subarray or substring. You see phrases like "maximum sum of subarray of size k," "longest substring with at most k distinct characters," or "smallest subarray with sum greater than X." The window can be fixed-size (easy) or variable-size (harder, requires two pointers expanding and contracting).

    Canonical problem: Longest Substring Without Repeating Characters. You expand the right pointer until you hit a duplicate, then contract from the left until the constraint is satisfied again. The window always represents a valid substring.

    Complexity: O(n) time, O(k) space where k is the window size or character set. For a deep dive with worked implementations, see our guide on Sliding Window and Two Pointers.
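
    Here's a minimal Python sketch of the canonical problem. The names are ours, and tracking last-seen indices in a hash map is one of several reasonable ways to maintain the window:

```python
def length_of_longest_substring(s: str) -> int:
    last_seen = {}  # most recent index of each character
    left = 0        # left edge of the current window
    best = 0
    for right, ch in enumerate(s):
        # Duplicate inside the window: contract from the left past it.
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)  # window is always valid here
    return best
```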

    2. Two Pointers

    Two pointers uses two references that move through a data structure — typically from opposite ends toward the center, or both from the start at different speeds. It's one of the most versatile patterns and often the first optimization you should consider for sorted array problems.

    Trigger signals: The input is sorted (or can be sorted without losing information). You're looking for pairs or triplets that satisfy a condition. You need to compare elements from both ends. You're removing duplicates in place or partitioning an array.

    Canonical problem: Two Sum II - Input Array Is Sorted. Left pointer starts at index 0, right pointer at the end. If the sum is too small, move left forward. Too large, move right backward. The sorted invariant guarantees you don't miss the answer.

    Complexity: O(n) time, O(1) space. The beauty of two pointers is that it usually requires no extra memory.
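
    A compact sketch of the canonical problem, assuming LeetCode's 1-based output convention:

```python
def two_sum_sorted(numbers: list[int], target: int) -> list[int]:
    left, right = 0, len(numbers) - 1  # start at the two ends
    while left < right:
        total = numbers[left] + numbers[right]
        if total == target:
            return [left + 1, right + 1]  # this problem uses 1-based indices
        if total < target:
            left += 1    # sum too small: only moving left up can increase it
        else:
            right -= 1   # sum too large: only moving right down can decrease it
    return []  # unreachable if the problem guarantees a solution
```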

    3. Fast and Slow Pointers (Floyd's Tortoise and Hare)

    One pointer moves one step at a time, the other moves two steps. If there's a cycle, they'll meet. If there isn't, the fast pointer reaches the end. This simple idea solves an entire category of linked list problems.

    Trigger signals: Cycle detection in a linked list. Finding the middle node. Finding the start of a cycle. Any linked list problem where you need positional information without knowing the length upfront.

    Canonical problem: Linked List Cycle. Fast moves two nodes per step, slow moves one. If they ever point to the same node, there's a cycle. To find where the cycle starts, reset one pointer to the head and advance both at the same speed — they'll meet at the cycle entrance. The proof relies on modular arithmetic and is worth understanding once.

    Complexity: O(n) time, O(1) space. The O(1) space is the key advantage — you don't need a hash set to track visited nodes.
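
    One way to write both the detection and the cycle-entrance reset in Python. The node class is a minimal stand-in for LeetCode's definition:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def cycle_start(head: ListNode | None) -> ListNode | None:
    slow = fast = head
    while fast and fast.next:
        slow = slow.next         # one step
        fast = fast.next.next    # two steps
        if slow is fast:         # they met: a cycle exists
            slow = head          # reset one pointer to the head...
            while slow is not fast:
                slow = slow.next  # ...and advance both at the same speed
                fast = fast.next
            return slow          # meeting point is the cycle entrance
    return None                  # fast reached the end: no cycle
```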

    4. Merge Intervals

    Sort intervals by start time, then iterate through them, merging overlapping ones. The pattern extends to problems about inserting intervals, finding gaps, and computing intersections.

    Trigger signals: The input is a list of intervals or time ranges. You need to find overlaps, merge overlapping ranges, insert a new interval, or determine if a person can attend all meetings. Any problem involving start/end time pairs.

    Canonical problem: Merge Intervals. Sort by start time. Initialize a result with the first interval. For each subsequent interval, if its start is less than or equal to the current end, extend the current interval's end. Otherwise, start a new merged interval.

    Complexity: O(n log n) for the sort, O(n) for the merge pass. Total: O(n log n).
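
    A minimal sketch of the sort-then-merge pass:

```python
def merge(intervals: list[list[int]]) -> list[list[int]]:
    intervals.sort(key=lambda iv: iv[0])  # sort by start time
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            # Overlap with the current interval: extend its end.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])  # gap: start a new interval
    return merged
```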

    5. Cyclic Sort

    When you have an array containing numbers in a known range (like 1 to n), you can place each number at its correct index in O(n) time by swapping. After one pass, any index where the number doesn't match reveals a missing or duplicate value.

    Trigger signals: Array of integers in range [0, n] or [1, n]. Problem asks for missing numbers, duplicate numbers, or the first missing positive. The constraint that values fall within a bounded range is the giveaway.

    Canonical problem: Find the Missing Number. For each index i, while nums[i] != i (adjusting for 0 vs 1 indexing), swap nums[i] to its correct position. After the sort, scan for the index where the value doesn't match — that's your missing number.

    Complexity: O(n) time, O(1) space. Each element is swapped at most once, even though there's a while loop inside a for loop.
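
    Here's a sketch of the swap loop for the [0, n] version; the index bookkeeping is the whole trick:

```python
def missing_number(nums: list[int]) -> int:
    n = len(nums)
    i = 0
    while i < n:
        j = nums[i]  # the index where nums[i] belongs
        # Swap into place; the value n has no slot, so skip it.
        if j < n and nums[i] != nums[j]:
            nums[i], nums[j] = nums[j], nums[i]
        else:
            i += 1
    for i in range(n):
        if nums[i] != i:  # this slot's rightful value is missing
            return i
    return n  # every index matched, so n itself is missing
```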

    6. In-Place Reversal of Linked List

    Reverse a linked list (or a sub-section of it) by manipulating pointers in place. You maintain three pointers — previous, current, and next — and rewire the links as you traverse. This pattern is a building block for more complex linked list manipulations.

    Trigger signals: Reverse a linked list. Reverse nodes between positions m and n. Reverse in groups of k. Palindrome linked list (reverse the second half and compare). Anytime you need to change the direction of pointers.

    Canonical problem: Reverse Linked List II (reverse between positions m and n). Navigate to position m-1, then reverse n-m+1 nodes using the three-pointer technique, and reconnect the reversed segment to the rest of the list.

    Complexity: O(n) time, O(1) space. The in-place requirement is what makes this pattern distinct from simply creating a new list.
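
    A sketch of the three-pointer technique described above, reusing the ListNode class from the fast and slow pointers sketch. The dummy node simplifies the m = 1 edge case:

```python
def reverse_between(head: ListNode | None, m: int, n: int) -> ListNode | None:
    dummy = ListNode(0, head)
    before = dummy
    for _ in range(m - 1):
        before = before.next       # node just before the segment to reverse
    prev, curr = None, before.next
    for _ in range(n - m + 1):     # reverse exactly n - m + 1 nodes
        nxt = curr.next            # save the next node
        curr.next = prev           # rewire the link backward
        prev, curr = curr, nxt
    # Reconnect: the old segment head is now the tail, and vice versa.
    before.next.next = curr        # tail of reversed segment -> rest of list
    before.next = prev             # node before segment -> new segment head
    return dummy.next
```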

    7. BFS (Breadth-First Search)

    BFS explores all neighbors at the current depth before moving to the next level. It uses a queue and guarantees the shortest path in unweighted graphs. Beyond graphs, BFS is the go-to pattern for any problem that asks for the minimum number of steps.

    Trigger signals: Shortest path in an unweighted graph. Level-order traversal of a tree. Nearest neighbor queries. "Minimum number of moves/steps" phrasing. Problems involving spreading (like rotting oranges or walls and gates).

    Canonical problem: Binary Tree Level Order Traversal. Queue-based: enqueue the root, then for each level, dequeue all nodes at that level, record their values, and enqueue their children. The key insight is tracking how many nodes belong to the current level before processing. For more graph patterns, see our guide on Graph Algorithms for Interviews.

    Complexity: O(V + E) time, O(V) space for the queue. In a tree, this simplifies to O(n) time and O(w) space where w is the maximum width.
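
    A sketch with an explicit per-level loop; TreeNode is a minimal stand-in for LeetCode's class:

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root: TreeNode | None) -> list[list[int]]:
    if not root:
        return []
    levels, queue = [], deque([root])
    while queue:
        width = len(queue)  # nodes belonging to the current level
        level = []
        for _ in range(width):
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        levels.append(level)
    return levels
```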

    8. DFS (Depth-First Search)

    DFS goes as deep as possible along each branch before backtracking. It can be implemented recursively (using the call stack) or iteratively (using an explicit stack). DFS is the workhorse of tree and graph problems — when in doubt, try DFS first.

    Trigger signals: Path-finding problems. "Find all paths" or "does a path exist." Connected components. Tree problems involving root-to-leaf paths. Anything that requires exhaustive exploration. Grid problems where you need to visit all connected cells.

    Canonical problem: Number of Islands. For each unvisited land cell, run DFS to mark all connected land cells as visited. Each DFS invocation discovers one island. The number of DFS calls from the main loop equals the number of islands.

    Complexity: O(V + E) time, O(V) space for the recursion stack or explicit stack. Watch for stack overflow on deep recursion — iterative DFS is safer for very deep graphs.
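
    One common implementation "sinks" visited land cells in place instead of keeping a separate visited set:

```python
def num_islands(grid: list[list[str]]) -> int:
    rows, cols = len(grid), len(grid[0])

    def sink(r: int, c: int) -> None:
        # Out of bounds or water: stop exploring this branch.
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != "1":
            return
        grid[r][c] = "0"  # mark visited by sinking the cell
        sink(r + 1, c); sink(r - 1, c); sink(r, c + 1); sink(r, c - 1)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "1":
                count += 1  # each DFS from the main loop is one island
                sink(r, c)  # note: recursion depth can bite on huge grids
    return count
```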

    9. Two Heaps

    Maintain two heaps — a max-heap for the smaller half of data and a min-heap for the larger half. This lets you access the median (or other partition-dependent statistics) in O(1) while adding elements in O(log n). The balancing act between the two heaps is the core challenge.

    Trigger signals: Find the median of a data stream. Sliding window median. Any problem where you need to efficiently track the middle value of a changing dataset. The word "median" is almost always a two-heaps problem.

    Canonical problem: Find Median from Data Stream. Insert into the max-heap first. If the max-heap's top is greater than the min-heap's top, rebalance. Keep the heaps within one element of equal size. The median is either the max-heap's top (odd total) or the average of both tops (even total).

    Complexity: O(log n) per insertion, O(1) for median retrieval. Space is O(n) for storing all elements.
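
    A sketch using Python's heapq, which only provides min-heaps, so the smaller half stores negated values to simulate a max-heap:

```python
import heapq

class MedianFinder:
    def __init__(self):
        self.small = []  # max-heap via negation: the smaller half
        self.large = []  # min-heap: the larger half

    def add_num(self, num: int) -> None:
        # Push through the max-heap, then hand its top to the min-heap,
        # so every element in `small` is <= every element in `large`.
        heapq.heappush(self.small, -num)
        heapq.heappush(self.large, -heapq.heappop(self.small))
        # Rebalance sizes: `small` may hold at most one extra element.
        if len(self.large) > len(self.small):
            heapq.heappush(self.small, -heapq.heappop(self.large))

    def find_median(self) -> float:
        if len(self.small) > len(self.large):
            return -self.small[0]                      # odd count
        return (-self.small[0] + self.large[0]) / 2    # even count
```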

    10. Subsets and Backtracking

    Generate all possible combinations by building solutions incrementally and abandoning ("backtracking") paths that can't lead to valid solutions. This is the pattern for any problem that asks you to enumerate possibilities. The output is exponential — the key is pruning to avoid unnecessary work.

    Trigger signals: "Find all subsets," "generate all permutations," "all combinations that sum to X," "all valid parentheses combinations." The word "all" in a problem statement is a strong indicator. Constraint satisfaction problems (N-Queens, Sudoku) also use backtracking.

    Canonical problem: Subsets. For each element, you either include it or don't. Recursively build subsets by making this binary choice at each level. The decision tree has 2^n leaves, one for each subset. Adding constraints (like a target sum) lets you prune branches early.

    Complexity: O(2^n) for subsets, O(n!) for permutations. These are inherently exponential — the optimization is in pruning, not in improving the base complexity.
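
    Here's one standard way to write the canonical problem; each recursive level decides which element to add next:

```python
def subsets(nums: list[int]) -> list[list[int]]:
    result = []

    def backtrack(start: int, current: list[int]) -> None:
        result.append(current[:])       # record the subset built so far
        for i in range(start, len(nums)):
            current.append(nums[i])     # choose nums[i]
            backtrack(i + 1, current)   # explore with nums[i] included
            current.pop()               # un-choose (backtrack)

    backtrack(0, [])
    return result
```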

    11. Modified Binary Search

    Standard binary search finds a value in a sorted array. The modified version applies the same halving principle to arrays that are "almost" sorted (rotated), to find boundaries (first/last occurrence), or to search on answer spaces. If you can define a condition that's monotonically true/false, you can binary search on it.

    Trigger signals: Sorted or rotated sorted array. Finding the first or last position of an element. "Minimum value that satisfies X" — binary search on the answer. Any problem where you can eliminate half the search space with one comparison.

    Canonical problem: Search in Rotated Sorted Array. Standard binary search breaks because the array isn't fully sorted. The modification: determine which half is sorted (compare mid to left), check if the target falls within the sorted half, and eliminate accordingly. The invariant is that at least one half is always sorted.

    Complexity: O(log n) time, O(1) space. The logarithmic time is what makes this pattern valuable — whenever you see O(log n) in a required complexity, think binary search.
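
    A sketch of the canonical problem; the comments track the "at least one half is sorted" invariant:

```python
def search(nums: list[int], target: int) -> int:
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[lo] <= nums[mid]:              # left half is sorted
            if nums[lo] <= target < nums[mid]:
                hi = mid - 1                   # target is in the sorted half
            else:
                lo = mid + 1                   # otherwise it's in the other half
        else:                                  # right half is sorted
            if nums[mid] < target <= nums[hi]:
                lo = mid + 1
            else:
                hi = mid - 1
    return -1
```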

    12. Top K Elements

    Use a heap of size k to efficiently find the k largest, k smallest, or k most frequent elements. The insight: you don't need to sort the entire dataset. A min-heap of size k naturally evicts the smallest element, leaving you with the top k.

    Trigger signals: "Find the k-th largest element." "Return the k most frequent words." "Find k closest points." Any time the problem asks for "top k" or "k-th" of something, this pattern applies. If k is much smaller than n, a heap beats sorting.

    Canonical problem: Top K Frequent Elements. Count frequencies with a hash map. Then either use a min-heap of size k (O(n log k)) or bucket sort by frequency (O(n)). The heap approach generalizes to streaming data; the bucket sort approach is optimal for static input.

    Complexity: O(n log k) with a heap, where k is typically much smaller than n. Compare to O(n log n) for a full sort — the savings are significant when k is small.
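
    A sketch of the heap approach, with Counter and heapq doing the heavy lifting:

```python
import heapq
from collections import Counter

def top_k_frequent(nums: list[int], k: int) -> list[int]:
    counts = Counter(nums)            # O(n) frequency count
    heap = []                         # min-heap of (frequency, value), size <= k
    for value, freq in counts.items():
        heapq.heappush(heap, (freq, value))
        if len(heap) > k:
            heapq.heappop(heap)       # evict the least frequent seen so far
    return [value for _, value in heap]
```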

    13. K-Way Merge

    When you have k sorted lists and need to produce a single sorted output, use a min-heap containing one element from each list. Extract the minimum, then insert the next element from that element's source list. This generalizes the merge step of merge sort.

    Trigger signals: Merge k sorted arrays or linked lists. Find the smallest range covering at least one element from each list. Kth smallest element across multiple sorted sources. Any problem involving multiple sorted inputs that need to be combined.

    Canonical problem: Merge K Sorted Lists. Initialize a min-heap with the head of each list. Pop the smallest node, add it to the result, and push its next node (if any) back into the heap. The heap always contains at most k elements.

    Complexity: O(n log k) where n is the total number of elements across all lists. Each element enters and leaves the heap exactly once, and each heap operation is O(log k).
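
    One way to write it, reusing the ListNode class from the fast and slow pointers sketch. The tuple includes a tie-breaking index because nodes themselves aren't comparable:

```python
import heapq

def merge_k_lists(lists: list[ListNode | None]) -> ListNode | None:
    # Seed the heap with the head of each non-empty list.
    heap = [(node.val, i, node) for i, node in enumerate(lists) if node]
    heapq.heapify(heap)
    dummy = tail = ListNode(0)
    while heap:
        _, i, node = heapq.heappop(heap)  # smallest value across all lists
        tail.next = node
        tail = node
        if node.next:
            # Push the successor from the same source list.
            heapq.heappush(heap, (node.next.val, i, node.next))
    return dummy.next
```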

    14. Dynamic Programming

    DP solves problems by breaking them into overlapping subproblems and storing results to avoid recomputation. If a brute-force recursive solution recomputes the same subproblems, DP is the optimization. It comes in two flavors: top-down (memoization) and bottom-up (tabulation).

    Trigger signals: "Find the minimum/maximum cost." "Count the number of ways." "Is it possible to reach the target?" Problems with choices at each step where the optimal solution depends on optimal solutions to subproblems. The overlap test: if you draw the recursion tree and see repeated nodes, it's DP.

    Canonical problem: Longest Increasing Subsequence. Define dp[i] as the length of the longest increasing subsequence ending at index i. For each i, check all j < i: if nums[j] < nums[i], then dp[i] = max(dp[i], dp[j] + 1). The answer is the maximum value in the dp array. For a comprehensive treatment with state transition frameworks, see our guide on Dynamic Programming for Interviews.

    Complexity: Varies widely. LIS is O(n^2) with basic DP, O(n log n) with patience sorting. The key is identifying the state and the transition — complexity follows from that. Most interview DP problems are O(n^2) or O(n*m).
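
    A sketch of the O(n^2) version described above:

```python
def length_of_lis(nums: list[int]) -> int:
    if not nums:
        return 0
    # dp[i] = length of the longest increasing subsequence ending at i.
    dp = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:          # nums[i] can extend the run at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)
```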

    15. Topological Sort

    Topological sort produces a linear ordering of vertices in a directed acyclic graph (DAG) such that for every edge u -> v, u comes before v. It's the answer to any problem about ordering with dependencies. Two approaches: Kahn's algorithm (BFS with in-degree tracking) or DFS with post-order recording.

    Trigger signals: Prerequisites or dependencies. "In what order should we process these tasks?" Course scheduling problems. Build system dependency resolution. Detecting cycles in directed graphs (if topological sort fails, there's a cycle).

    Canonical problem: Course Schedule. Build an adjacency list and compute in-degrees. Start BFS from all nodes with in-degree 0. For each processed node, decrement in-degrees of its neighbors. If a neighbor's in-degree hits 0, enqueue it. If you process all nodes, the schedule is valid. If not, there's a cycle.

    Complexity: O(V + E) time, O(V + E) space for the adjacency list and queue. Both BFS (Kahn's) and DFS approaches have identical complexity.
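
    A sketch of Kahn's algorithm as described; the parameter names follow the LeetCode signature:

```python
from collections import deque

def can_finish(num_courses: int, prerequisites: list[list[int]]) -> bool:
    graph = [[] for _ in range(num_courses)]
    in_degree = [0] * num_courses
    for course, prereq in prerequisites:
        graph[prereq].append(course)  # edge prereq -> course
        in_degree[course] += 1

    # Start with every course that has no prerequisites.
    queue = deque(c for c in range(num_courses) if in_degree[c] == 0)
    processed = 0
    while queue:
        node = queue.popleft()
        processed += 1
        for neighbor in graph[node]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:  # all prerequisites satisfied
                queue.append(neighbor)

    # If some courses were never processed, a cycle blocks them.
    return processed == num_courses
```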

    How to Recognize Which Pattern to Apply

    Pattern recognition under pressure is a learnable skill, not a talent. Here's the decision framework that experienced interviewers use — and that you should internalize:

    Step 1: Read the constraints. The constraints often tell you the expected complexity, which narrows the pattern space immediately. If n is up to 10^5 and the expected solution is O(n) or O(n log n), you're not doing backtracking. If n is up to 20, exponential solutions are on the table.

    Step 2: Identify the data structure. Linked list? Think fast/slow pointers, in-place reversal. Sorted array? Two pointers, modified binary search. Tree or graph? BFS or DFS. Matrix? Usually BFS/DFS with the grid as an implicit graph.

    Step 3: Look for keyword signals. "Contiguous subarray" means sliding window. "All combinations" means backtracking. "Minimum steps" means BFS. "How many ways" means DP. "K-th largest" means heap. These aren't rules — they're strong heuristics that are right 80% of the time.

    Step 4: When multiple patterns could work, choose the simplest. Two Sum can be solved with sorting + two pointers (O(n log n)) or a hash map (O(n)). In an interview, pick the one you can implement most cleanly. A correct O(n log n) solution beats a buggy O(n) attempt.

    Interviewers don't expect you to immediately see the pattern. They expect you to work through the problem systematically, consider approaches, and converge on a good one. Thinking out loud during this process — "The array is sorted, so I'm thinking two pointers might work here" — is exactly what earns high marks on the problem-solving dimension.

    A Practical Study Plan

    Knowing the patterns intellectually is step one. Being able to apply them under a 45-minute time constraint with someone watching you is step two. Here's how to bridge the gap:

    Phase 1: Learn the Patterns (1-2 Weeks)

    For each of the 15 patterns, solve 2-3 problems. Not more. The goal is comprehension, not volume. After each problem, write a one-sentence note about when to use this pattern. By the end of this phase, you should be able to name all 15 patterns from memory and describe each one's trigger signals.

    Phase 2: Mixed Practice (2-3 Weeks)

    Solve problems without knowing which pattern to use. This is the phase most people skip straight to, and then wonder why they're not improving. Without Phase 1, you're just guessing. With Phase 1, you're now training your classification instinct. Aim for one problem per day. Time yourself: 25 minutes for medium, 35 for hard. If you don't solve it, study the solution, identify the pattern, and solve a similar problem the next day.

    Phase 3: Simulate (1-2 Weeks)

    Do full mock coding interviews. Talk through your approach out loud before writing code. Practice the verbal component — interviewers care about your thought process as much as your solution. You need to articulate why you're choosing a pattern, not just use it silently. Hoppers AI's mock coding interviews can simulate this pressure with real-time feedback on your problem-solving approach and code quality, which is useful if you don't have a practice partner available.

    Common Mistake to Avoid

    Don't spend three months in Phase 1. Engineers get trapped in a cycle of reading explanations and watching tutorials without ever implementing under pressure. Two weeks of study followed by three weeks of timed practice will outperform two months of passive learning every time. The pattern knowledge needs to be accessible under stress, not just stored in long-term memory.

    What Interviewers Actually Evaluate

    Understanding the scoring rubric changes how you practice. Most coding interview rubrics evaluate four dimensions:

    • Problem solving. Did you break down the problem, identify the right approach, and handle edge cases? This is where pattern recognition pays off — quickly identifying the right pattern and explaining why it fits demonstrates strong problem-solving instinct.
    • Code quality. Is your code clean, readable, and correct? Variable names, function decomposition, and consistent style all matter. Don't write competition-style compressed code in an interview.
    • Communication. Did you explain your thinking? Did you respond well when the interviewer hinted at a different direction? Silent coding for 30 minutes is a red flag even if the solution is correct.
    • Verification. Did you test your solution? Walk through your code with an example. Check edge cases: empty input, single element, duplicates, negative numbers. Candidates who verify their own code signal maturity and thoroughness.

    Notice that getting the optimal solution is not the only axis. A clean, well-explained O(n log n) solution that you verify thoroughly will often score higher than a messy O(n) solution with no explanation and no testing. Interviewers are hiring a colleague, not a competitive programmer.

    Connecting Patterns to Each Other

    These 15 patterns don't exist in isolation. Many problems require combining two patterns, and recognizing these combinations is an advanced skill worth developing:

    • Binary search + two pointers: Problems like finding a triplet closest to a target. Sort the array, fix one element, then two-pointer on the rest — but use binary search logic for the optimization.
    • BFS + topological sort: Kahn's algorithm is literally BFS applied to a DAG. If you understand both patterns, Kahn's algorithm is intuitive rather than a separate thing to memorize.
    • Sliding window + hash map: Most variable-size window problems track element frequency with a hash map. The window provides the iteration strategy; the hash map provides the bookkeeping.
    • DFS + dynamic programming: Top-down DP is just DFS with memoization. If you can write the recursive DFS solution, adding a memo cache turns it into DP. This mental model makes DP far less intimidating.
    • Heap + merge: K-way merge is a direct application of a min-heap. Understanding heaps deeply makes the merge pattern trivial.

    As you practice, pay attention to these connections. They compress the amount you need to learn and make your pattern recognition more flexible. A candidate who sees that "this is a DFS problem with memoization, which makes it a top-down DP problem" is demonstrating exactly the kind of thinking that earns strong hire ratings.

    When you're ready to test these patterns under realistic conditions, Hoppers AI's coding interview mode lets you practice with a live AI interviewer that evaluates your approach, code, and communication — useful for building the habit of thinking out loud while you solve.