Dynamic Programming: Knapsack & Longest Increasing Subsequence

Introduction to Dynamic Programming

Dynamic Programming (DP) is one of the cornerstone techniques in computer science, used to optimize complex problems by breaking them into simpler subproblems and storing their results for reuse. Unlike greedy algorithms or divide-and-conquer strategies, DP solves each subproblem exactly once, records its result, and builds larger solutions from the information already computed.

"Dynamic Programming is both a mathematical optimization method and a computer programming method."

DP requires two key properties in a problem:

  • Optimal Substructure – The solution to a problem can be constructed from optimal solutions to its subproblems.
  • Overlapping Subproblems – The problem can be broken down into subproblems which are reused several times.

Famous DP problems include the Knapsack Problem and the Longest Increasing Subsequence, which we’ll explore in depth.

Types and Approaches in Dynamic Programming

  • Top-Down Approach (Memoization): Solve the problem recursively, store (memoize) each result to avoid duplicate calculations.
  • Bottom-Up Approach (Tabulation): Solve all possible small subproblems first and use them to build up solutions to larger subproblems, typically using a table (array).
  • Space Optimization Techniques: Reduce memory usage with strategies like rolling arrays, or by computing only the states that are actually needed.

The choice between these depends on the problem constraints, the memory available, and whether the actual solution (not just its value) must be reconstructed afterwards.
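
As a quick illustration of the first two approaches, here is a minimal Fibonacci sketch (a standard introductory DP example, added here for contrast): the memoized version caches the results of recursive calls, while the tabulated version fills an array from the smallest subproblem upward.

# Fibonacci - Top-Down (Memoization)
def fib_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memo(n-1, memo) + fib_memo(n-2, memo)
    return memo[n]

# Fibonacci - Bottom-Up (Tabulation)
def fib_tab(n):
    if n <= 1:
        return n
    dp = [0]*(n+1)
    dp[1] = 1
    for i in range(2, n+1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

print(fib_memo(10), fib_tab(10))  # Output: 55 55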

Knapsack Problem: Theory & Applications

The Knapsack Problem is a classic optimization problem:

  • You are given a set of items, each with a weight and a value.
  • Your goal: maximize the total value in a knapsack without exceeding its weight capacity.

Types of Knapsack Problems

  • 0/1 Knapsack: Each item can either be taken or left (no fractional selection).
  • Fractional Knapsack: You can take fractions of items (optimal with greedy, not DP).
  • Unbounded Knapsack: Unlimited copies of each item allowed.

  Knapsack Type   Allowed Items                Strategy
  0/1             Each item at most once       Dynamic Programming
  Fractional      Any fraction per item        Greedy Algorithm
  Unbounded       Unlimited repeats per item   Dynamic Programming
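
For contrast with the DP variants, the fractional knapsack is solved greedily: sort items by value-to-weight ratio and take as much of each as fits, splitting the last item if necessary. A minimal sketch (the function name and the capacity of 6 are illustrative choices, not from the discussion above):

# Fractional Knapsack - Greedy by value-to-weight ratio
def knapsack_fractional(wt, val, W):
    order = sorted(range(len(wt)), key=lambda i: val[i]/wt[i], reverse=True)
    total, remaining = 0.0, W
    for i in order:
        if remaining == 0:
            break
        take = min(wt[i], remaining)     # whole item if it fits, otherwise a fraction
        total += val[i] * take / wt[i]
        remaining -= take
    return total

print(knapsack_fractional([2, 3, 4, 5], [3, 4, 5, 6], 6))  # Output: 8.25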

Real-World Applications

  • Resource Allocation
  • Cargo Loading
  • Investment Portfolio Optimization
  • Budget Management
  • Scheduling and Project Selection

Knapsack: Algorithms & Python Code

Standard 0/1 Knapsack DP Formulation

Let n be the number of items and W the knapsack capacity, and let wt[i] and val[i] be the weight and value of the i-th item (0-indexed). Define dp[i][w] as the maximum value achievable using the first i items with capacity w; the recurrence is:

dp[i][w] = dp[i-1][w]                                          if wt[i-1] > w
dp[i][w] = max(dp[i-1][w], val[i-1] + dp[i-1][w - wt[i-1]])    otherwise


# 0/1 Knapsack - Bottom-Up (Tabulation) DP
def knapsack_01(wt, val, W):
    n = len(wt)
    # dp[i][w] = best value using the first i items with capacity w
    dp = [[0]*(W+1) for _ in range(n+1)]
    for i in range(1, n+1):
        for w in range(W+1):
            if wt[i-1] <= w:
                # Either skip item i-1, or take it and fall back to the remaining capacity
                dp[i][w] = max(dp[i-1][w], val[i-1] + dp[i-1][w-wt[i-1]])
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]

# Example usage
weights = [2, 3, 4, 5]
values = [3, 4, 5, 6]
capacity = 5
print("Max Value:", knapsack_01(weights, values, capacity))  # Output: 7
    

Explanation: Build a DP table in which dp[i][w] is the maximum value achievable using the first i items with capacity at most w; the answer is dp[n][W].

  • Time Complexity: O(nW)
  • Space Complexity: O(nW)
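
If you also need to know which items achieve the optimum, you can keep the full table and walk it backwards. A minimal sketch under that assumption (the helper name knapsack_01_items is illustrative; it reuses the weights, values, and capacity defined above):

# 0/1 Knapsack - Recovering the chosen items from the DP table
def knapsack_01_items(wt, val, W):
    n = len(wt)
    dp = [[0]*(W+1) for _ in range(n+1)]
    for i in range(1, n+1):
        for w in range(W+1):
            if wt[i-1] <= w:
                dp[i][w] = max(dp[i-1][w], val[i-1] + dp[i-1][w-wt[i-1]])
            else:
                dp[i][w] = dp[i-1][w]
    # If the value changed when item i-1 became available, that item was taken
    chosen, w = [], W
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i-1][w]:
            chosen.append(i-1)
            w -= wt[i-1]
    return dp[n][W], chosen[::-1]

print(knapsack_01_items(weights, values, capacity))  # Output: (7, [0, 1])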

Space-Optimized Implementation


# 0/1 Knapsack - Space Optimized (single 1-D array of size W+1)
def knapsack_01_optimized(wt, val, W):
    n = len(wt)
    dp = [0]*(W+1)
    for i in range(n):
        # Iterate capacities downward so each item is used at most once;
        # an upward loop would reuse the item and give the unbounded variant.
        for w in range(W, wt[i]-1, -1):
            dp[w] = max(dp[w], val[i] + dp[w-wt[i]])
    return dp[W]
    
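
A quick sanity check that the space-optimized version agrees with the table-based one (reusing the weights, values, and capacity defined above). Time complexity is still O(nW), but space drops to O(W).

print("Max Value (Optimized):", knapsack_01_optimized(weights, values, capacity))  # Output: 7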

Variations: Unbounded Knapsack


# Unbounded Knapsack: each item may be taken any number of times
def knapsack_unbounded(wt, val, W):
    n = len(wt)
    dp = [0]*(W+1)
    for w in range(W+1):
        for i in range(n):
            if wt[i] <= w:
                # dp[w-wt[i]] may already include item i, so repeats are allowed
                dp[w] = max(dp[w], val[i] + dp[w-wt[i]])
    return dp[W]
    
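
A quick comparison on the same items with a larger capacity (10, an illustrative value): the 0/1 version takes each item at most once, while the unbounded version repeats the most value-dense item.

print("0/1 at capacity 10:      ", knapsack_01(weights, values, 10))         # Output: 13
print("Unbounded at capacity 10:", knapsack_unbounded(weights, values, 10))  # Output: 15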

Memoized (Top-Down) Solution


# 0/1 Knapsack - Memoization (top-down)
def knapsack_memo(wt, val, W, n, memo=None):
    if memo is None:
        memo = {}
    if n == 0 or W == 0:
        return 0
    key = (n, W)   # state: number of items still considered and remaining capacity
    if key in memo:
        return memo[key]
    if wt[n-1] > W:
        # Item n-1 does not fit: skip it
        result = knapsack_memo(wt, val, W, n-1, memo)
    else:
        # Best of skipping item n-1 or taking it
        result = max(knapsack_memo(wt, val, W, n-1, memo),
                     val[n-1] + knapsack_memo(wt, val, W-wt[n-1], n-1, memo))
    memo[key] = result
    return result

# Example usage
print("Max Value (Memoized):", knapsack_memo(weights, values, capacity, len(weights)))
    

Longest Increasing Subsequence (LIS)

The Longest Increasing Subsequence (LIS) problem is a fundamental DP problem:
Given: An array nums[0...n-1].
Required: The length of the longest subsequence (elements kept in their original order, but not necessarily contiguous) whose values are strictly increasing.

Example

Input: [10,9,2,5,3,7,101,18]
Output: 4
Explanation: One longest increasing subsequence is [2,3,7,101]; others of the same length exist, e.g. [2,5,7,101].
    

Applications

  • Bioinformatics (DNA sequence alignment)
  • Stock market analysis
  • Pattern recognition
  • Versioning and software updates

LIS: Algorithms & Python Implementations

Standard Dynamic Programming Solution (O(n²))


# Longest Increasing Subsequence - Dynamic Programming
def lis_dp(nums):
    n = len(nums)
    dp = [1]*n                      # dp[i] = length of the LIS ending at index i
    for i in range(1, n):
        for j in range(0, i):
            if nums[i] > nums[j]:   # nums[i] can extend an increasing subsequence ending at j
                dp[i] = max(dp[i], dp[j]+1)
    return max(dp) if dp else 0

# Example Usage
arr = [10,9,2,5,3,7,101,18]
print("LIS Length:", lis_dp(arr))  # Output: 4
    

Efficient Binary Search Solution (O(n log n))


# LIS using patience sorting + binary search
import bisect

def lis_patience(nums):
    seq = []                            # seq[k] = smallest possible tail of an increasing subsequence of length k+1
    for x in nums:
        i = bisect.bisect_left(seq, x)  # first position whose tail is >= x
        if i == len(seq):
            seq.append(x)               # x extends the longest subsequence found so far
        else:
            seq[i] = x                  # x becomes a smaller tail for length i+1
    return len(seq)                     # seq has the right length, but is not itself an LIS

print("LIS Length (Binary Search):", lis_patience(arr))  # Output: 4
    
  • Time Complexity: O(n log n)
  • Space Complexity: O(n)

Reconstructing the Actual LIS


# Reconstructing the actual LIS (not just its length)
def lis_reconstruct(nums):
    if not nums:
        return []
    n = len(nums)
    dp = [1] * n          # dp[i] = length of the LIS ending at index i
    prev = [-1] * n       # prev[i] = index of the previous element in that LIS
    max_idx = 0           # index where the overall LIS ends
    for i in range(n):
        for j in range(i):
            if nums[i] > nums[j] and dp[j]+1 > dp[i]:
                dp[i] = dp[j]+1
                prev[i] = j
        if dp[i] > dp[max_idx]:
            max_idx = i
    # Walk the prev links backwards from the end of the LIS
    lis = []
    k = max_idx
    while k != -1:
        lis.append(nums[k])
        k = prev[k]
    lis.reverse()
    return lis

# Example Usage
print("LIS Sequence:", lis_reconstruct(arr))  # Output: [2, 3, 7, 101]
    

Where is DP Used? Real-World Applications

  • Finance & Investments: Portfolio selection (knapsack), risk minimization.
  • Operations Research: Scheduling, logistics, transportation.
  • Bioinformatics: Sequence alignment, mutation analysis.
  • Game Theory: Move evaluation in chess, Go, and other strategic games.
  • Manufacturing: Cut stock problem, assembly line scheduling.

DP ideas appear throughout modern AI and optimization systems, from game-playing engines to large-scale supply chain management!

Best Practices, Pitfalls & Tips

  • Identify Overlapping Subproblems: Not every recursive problem needs DP! Look for repeated work.
  • Memoization vs Tabulation: If recursion is intuitive, start with memoization, then convert to tabulation for speed/memory.
  • Beware of high space complexity (reduce dimensionality, use rolling arrays).
  • Debug with smaller test cases and print DP tables for insights.

Always check if a problem’s greedy approach suffices before reaching for DP — Greedy is faster and simpler for problems like the fractional knapsack!

Frequently Asked Questions

Q1. How do I know if a problem can be solved by DP?

If the problem can be expressed in terms of smaller, reusable subproblems (overlapping subproblems) and has optimal substructure, DP is likely suitable.

Q2. Why does the greedy approach fail for 0/1 Knapsack?

Greedy, which picks based on value or value-to-weight ratio, may miss combinations of items that together yield greater value. DP guarantees finding such optimal sets.
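
A concrete illustration (the numbers are made up for this example): with capacity 10 and (weight, value) items (6, 30), (5, 20), (5, 20), a ratio-based greedy takes the (6, 30) item first and then nothing else fits, while DP takes the two (5, 20) items.

# Greedy by value-to-weight ratio vs. the DP solution knapsack_01 defined earlier
def knapsack_greedy_ratio(wt, val, W):
    order = sorted(range(len(wt)), key=lambda i: val[i]/wt[i], reverse=True)
    total, remaining = 0, W
    for i in order:
        if wt[i] <= remaining:
            total += val[i]
            remaining -= wt[i]
    return total

wt, val, W = [6, 5, 5], [30, 20, 20], 10
print(knapsack_greedy_ratio(wt, val, W))  # Output: 30 (greedy, suboptimal)
print(knapsack_01(wt, val, W))            # Output: 40 (DP, optimal)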

Q3. Is there a faster way than O(n²) for LIS?

Yes! LIS can be solved in O(n log n) using binary search with patience sorting principles.

Q4. Is DP only for optimization problems?

No—DP is also used for counting, decision, and path-construction problems, not just optimization.
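
For instance, counting problems fit the same framework. A minimal sketch of counting the ways to make change, one of the classic problems listed below (the coin values and target are illustrative):

# Count the number of ways to form `target` from the given coin values
def coin_change_ways(coins, target):
    dp = [0]*(target+1)
    dp[0] = 1        # one way to make 0: use no coins
    for c in coins:  # looping over coins in the outer loop counts combinations, not permutations
        for t in range(c, target+1):
            dp[t] += dp[t-c]
    return dp[target]

print(coin_change_ways([1, 2, 5], 5))  # Output: 4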

Q5. What are some other classic DP problems?

  • Edit Distance (Levenshtein distance)
  • Matrix Chain Multiplication
  • Coin Change (Number of ways)
  • Minimum Path Sum in grid
  • Fibonacci (classic DP intro!)

Summary & Conclusion

Dynamic Programming unlocks the power to solve previously intractable problems efficiently by keeping track of solved subproblems and building on them. Both the Knapsack Problem and Longest Increasing Subsequence illustrate fundamental DP patterns featured in interviews, contests, and real-world systems.

Make DP your ally for tackling optimization, counting, and path problems, and always reason about the overlapping subproblems and optimal substructure at the heart of any successful DP solution.

