
# Dynamic Programming

Dynamic programming is an optimization approach that transforms a complex problem into a sequence of simpler problems; its essential characteristic is the multistage nature of the optimization procedure. More so than the optimization techniques described previously, dynamic programming provides a general framework for analyzing many problem types.

The origin of the term dynamic programming has very little to do with writing code. It was first coined by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. Back then programming meant “planning,” and “dynamic programming” was conceived to optimally plan multistage processes.

Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is to simply store the results of subproblems so that we do not have to re-compute them when needed later. This simple optimization reduces time complexities from exponential to polynomial.

For example, a simple recursive solution for the Fibonacci numbers has exponential time complexity; if we optimize it by storing the solutions of subproblems, the time complexity reduces to linear.

Dynamic programming has applications in mathematical optimization, computational complexity theory, and computer programming. A problem has optimal substructure when an optimal solution can be constructed from optimal solutions of its subproblems; together with overlapping subproblems, this is the property that makes dynamic programming applicable.

## 7 Steps to solve a Dynamic Programming problem

1. Recognize that it is a DP problem.
2. Identify the problem variables.
3. Clearly express the recurrence relation.
4. Identify the base cases.
5. Decide whether to implement it iteratively or recursively.
6. Add memoization.
7. Determine the time complexity.
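As a quick sketch of steps 2 through 6, consider the classic climbing-stairs problem (a stand-in example chosen here, not one named above): the problem variable is the stair index n, the recurrence is ways(n) = ways(n - 1) + ways(n - 2), and the base cases are ways(0) = ways(1) = 1.

```python
# Steps 2-6 applied to climbing stairs: count the ways to climb
# n stairs taking 1 or 2 steps at a time (illustrative example).
def ways(n, memo=None):
    if memo is None:
        memo = {}
    if n <= 1:                 # step 4: base cases
        return 1
    if n not in memo:          # step 6: memoization
        # step 3: recurrence relation
        memo[n] = ways(n - 1, memo) + ways(n - 2, memo)
    return memo[n]

print(ways(10))  # 89
```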

Greedy algorithms look for locally optimal solutions, in other words a greedy choice at each step, in the hope of finding a global optimum. Hence a greedy algorithm can make a choice that looks optimal at the time but becomes costly down the line, and it does not guarantee a globally optimal solution.

In contrast to greedy algorithms, which optimize locally, dynamic programming algorithms are motivated by the overall optimization of the problem.
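A concrete way to see the difference is minimum-coin change with denominations {1, 3, 4} (a hypothetical example for illustration): for amount 6, the greedy choice takes the largest coin first and ends with 4 + 1 + 1 = three coins, while dynamic programming finds the optimal 3 + 3 = two coins.

```python
# Greedy vs. dynamic programming on minimum-coin change.
def greedy_coins(amount, coins=(4, 3, 1)):
    count = 0
    for c in coins:                   # always grab the largest coin that fits
        count += amount // c
        amount %= c
    return count

def dp_coins(amount, coins=(1, 3, 4)):
    INF = float("inf")
    best = [0] + [INF] * amount       # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        best[a] = min(best[a - c] + 1 for c in coins if c <= a)
    return best[amount]

print(greedy_coins(6))  # 3 coins: 4 + 1 + 1
print(dp_coins(6))      # 2 coins: 3 + 3
```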

In contrast to divide and conquer algorithms, where solutions to independent sub-problems are combined to achieve an overall solution, dynamic algorithms use the output of a smaller sub-problem to optimize a bigger sub-problem. Dynamic algorithms use memoization to remember the output of already solved sub-problems.

Dynamic programming is a technique for solving problems recursively. It can be implemented by memoization or tabulation.

## Memoization

Memoization refers to the technique of caching and reusing previously computed results. It is a top-down approach where we just start by solving the problem in a natural manner and storing the solutions of the sub-problems along the way.

The main idea behind memoization is to reuse already-calculated sub-problem results in order to solve the original problem.
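In Python this caching idea is available out of the box; a minimal sketch using the standard library's functools.lru_cache decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # cache every distinct argument seen
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, computed with only O(n) distinct calls
```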

## Tabulation

Tabulation is a bottom-up method for solving DP problems. It starts from the base cases and finds the optimal solutions to the problems whose immediate sub-problems are the base cases. Then, it goes one level up and combines the solutions it previously obtained to construct the optimal solutions to more complex problems.

Eventually, tabulation combines the solutions of the original problem’s subproblems and finds its optimal solution.

| Memoization | Tabulation |
| --- | --- |
| The state transition relation is easy to create. | The state transition relation is difficult to create. |
| Code is not complicated and is not difficult to write. | Code becomes difficult when lots of conditions are needed. |
| It is slow, because lots of recursive calls and return statements are required. | It is fast, as the results of previously solved sub-problems can be accessed directly from the table. |
| Entries in the table are filled on demand. | All entries must be filled one by one. |

## Fibonacci Sequence

First, the Fibonacci series with plain recursion, without recording the results of the subproblems:

```python
# Fibonacci series using recursion
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```

Time Complexity: O(2^n)

Auxiliary Space: O(n). The extra space is used by the recursion call stack.

### Memoization (Top Down):

```python
# Memoized version of the nth Fibonacci number
def fib(n, lookup):
    # base cases: fill in fib(0) and fib(1) directly
    if n <= 1:
        lookup[n] = n
    # if the value was not calculated previously, calculate it now
    if lookup[n] is None:
        lookup[n] = fib(n - 1, lookup) + fib(n - 2, lookup)
    return lookup[n]
```
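A minimal usage sketch (this driver is an assumption on my part; the function expects a lookup table pre-sized to n + 1 entries and initialized to None):

```python
n = 10
lookup = [None] * (n + 1)  # None marks "not yet computed"
print(fib(n, lookup))      # 55
```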

Time Complexity: O(n)
Auxiliary Space: O(n). The extra space is used by the lookup table and the recursion call stack.

### Tabulation (Bottom Up):

```python
# Tabulated (bottom-up) version of the nth Fibonacci number
def fib(n):
    # table of size n + 1, initialized with the base cases
    f = [0] * (n + 1)
    if n >= 1:
        f[1] = 1
    # calculating the Fibonacci numbers and storing the values
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]
```

range(): this function returns a lazy sequence object whose values are produced on demand while looping rather than being built up front, which is why this behavior is called “lazy evaluation”. (In Python 2, the same role was played by xrange().)
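A quick way to observe this laziness (the exact byte count is implementation-dependent):

```python
import sys

r = range(10**9)          # no billion-element list is materialized
print(sys.getsizeof(r))   # a small constant (e.g. 48 bytes on CPython)
print(r[123456])          # 123456 -- values are produced on demand
```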

Time Complexity: O(n)
Auxiliary Space: O(n). The extra space is used by the table of sub-problem results; no recursion call stack is involved here.
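Since each table entry depends only on the previous two, the full table can be replaced by two rolling variables, cutting the auxiliary space to O(1); a minimal sketch of that refinement:

```python
# Space-optimized bottom-up Fibonacci: keep only the last two values
def fib_const_space(n):
    a, b = 0, 1               # f(0), f(1)
    for _ in range(2, n + 1):
        a, b = b, a + b       # slide the two-value window forward
    return b if n >= 1 else a

print(fib_const_space(10))    # 55
```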

## Applications of dynamic programming

• 0/1 knapsack problem
• Mathematical optimization problems
• All-pairs shortest paths (Floyd-Warshall)
• Single-source shortest paths (Bellman-Ford)
• Reliability design problem
• Longest common subsequence (LCS)
• Flight control and robotics control
• Time-sharing: scheduling jobs to maximize CPU usage
• Fibonacci number series
• Tower of Hanoi
• Project scheduling
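As a sketch of one entry from the list above, here is a tabulated (bottom-up) solution to the longest common subsequence problem:

```python
# Bottom-up LCS: dp[i][j] is the LCS length of a[:i] and b[:j]
def lcs_length(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1          # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length("AGGTAB", "GXTXAYB"))  # 4, e.g. "GTAB"
```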

Dynamic programming can be applied in both top-down and bottom-up manners, and most of the time, referring to a previously computed solution is cheaper, in terms of CPU cycles, than recomputing it.
