Dynamic Programming Paradigm

    Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. In divide-and-conquer algorithms, we partition the problem into disjoint subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming applies when the subproblems overlap, that is, when subproblems share subproblems. In this case, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subproblems. A dynamic programming algorithm solves each subproblem just once and saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subproblem arises again.
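As a minimal illustration of overlapping subproblems (Fibonacci numbers are an assumed example here, not discussed in the text above), the plain recursive version recomputes the same values many times, while the memoized version solves each subproblem once and reuses the stored answer:

```python
from functools import lru_cache

def fib_naive(n):
    # Divide-and-conquer style: fib_naive(n - 2) is recomputed inside
    # fib_naive(n - 1), so shared subproblems are solved repeatedly.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Dynamic programming: each subproblem's answer is cached in a table
    # (here, lru_cache's dictionary) and never recomputed.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Both functions return the same values; the difference is that the memoized version runs in linear rather than exponential time.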


         We typically apply dynamic programming to optimization problems. Such problems can have many possible solutions. Each solution has a value, and we wish to find a solution with the optimal value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

        When developing a dynamic programming algorithm, we follow a sequence of four steps:

    1.     Characterize the structure of an optimal solution.
    2.     Recursively define the value of an optimal solution.
    3.     Compute the value of an optimal solution, typically in a bottom-up fashion.
    4.     Construct an optimal solution from computed information.

       Steps 1-3 form the basis of a dynamic programming solution to a problem. If we need only the value of an optimal solution and not the solution itself, then we can omit step 4. When we do perform step 4, we sometimes maintain additional information during step 3 so that we can easily construct an optimal solution.
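The four steps above can be sketched with the classic rod-cutting problem (an assumed example, not one worked in this article): given prices for rods of each length, cut a rod of length n to maximize revenue. The bottom-up loop carries out step 3, and the extra `first_cut` table is the "additional information" maintained so that step 4 can reconstruct an optimal solution:

```python
def cut_rod(prices, n):
    """Bottom-up dynamic programming for rod cutting (illustrative sketch).

    prices[i] is the price of a rod of length i + 1. Returns the maximum
    revenue for a rod of length n and one optimal list of piece lengths.
    """
    revenue = [0] * (n + 1)    # revenue[j]: value of an optimal solution (step 3)
    first_cut = [0] * (n + 1)  # extra bookkeeping kept during step 3 for step 4
    for j in range(1, n + 1):
        best = float("-inf")
        for i in range(1, j + 1):
            # Recursive characterization (steps 1-2): an optimal cut of
            # length j starts with a piece of some length i, followed by
            # an optimal cut of the remaining length j - i.
            if prices[i - 1] + revenue[j - i] > best:
                best = prices[i - 1] + revenue[j - i]
                first_cut[j] = i
        revenue[j] = best
    # Step 4: construct an optimal solution from the recorded first cuts.
    pieces, j = [], n
    while j > 0:
        pieces.append(first_cut[j])
        j -= first_cut[j]
    return revenue[n], pieces
```

If only the optimal value were needed, the `first_cut` table and the reconstruction loop could be omitted, exactly as the text notes for step 4.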
