Terms

Dynamic Programming/DP
Solving complex problems by breaking them into overlapping subproblems.
Optimal Substructure
Optimal solution to a problem can be constructed from optimal solutions of its subproblems.
Memoization, or top-down DP
Involves recursion but with caching of results to avoid recomputation.
Tabulation, or bottom-up DP
Builds up solutions iteratively from base cases to the final solution, e.g. via a table.

Fibonacci

The classic example is Fibonacci. The naive recursive solution, recomputing all the way down to fib(1) at every level, runs in O(ϕ^n) time. With DP, either method reduces this to O(n).


    # Top-down (memoization): cache results in a dict to avoid recomputation
    memo = {}
    def fib_memo(n):
        if n <= 1: return n
        if n in memo: return memo[n]
        memo[n] = fib_memo(n-1) + fib_memo(n-2)
        return memo[n]

    # Bottom-up (tabulation): fill a table from the base cases upward
    def fib_tab(n):
        if n <= 1: return n
        dp = [0] * (n + 1)
        dp[1] = 1
        for i in range(2, n + 1):
            dp[i] = dp[i-1] + dp[i-2]
        return dp[n]
    

Common DP Patterns

Several recurring problem types are indicative of Dynamic Programming. Classic examples include computing the Fibonacci sequence or climbing stairs variations, which demonstrate overlapping subproblems. Knapsack problems (e.g., 0/1 Knapsack for items with weights/values) involve optimizing selections given capacity constraints.
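To make the knapsack pattern concrete, here is a minimal sketch of 0/1 Knapsack using a one-dimensional table (the item weights, values, and capacity below are illustrative, not from any specific problem):

```python
def knapsack(weights, values, capacity):
    # dp[w] = best total value achievable with remaining capacity w
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # iterate capacities downward so each item is taken at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```

Iterating capacities downward is what distinguishes 0/1 knapsack from the unbounded variant: it guarantees each `dp[w - wt]` still reflects a state without the current item.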

Sequence alignment problems often use DP, such as finding the Longest Common Subsequence (LCS) between two sequences or the Longest Increasing Subsequence (LIS) within a single sequence. Other frequent patterns include coin change problems (finding minimum coins or ways to make change) and grid problems (e.g., unique paths in a grid), typically solved by building solutions from adjacent cells.
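As a sketch of the sequence-alignment pattern, the standard LCS recurrence can be tabulated over a 2D table (function name and inputs here are illustrative):

```python
def lcs_length(a, b):
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # matching characters extend the LCS of both prefixes
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # otherwise drop one character from either sequence
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

The table can also be walked backwards from dp[m][n] to recover an actual subsequence, not just its length.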