Dynamic vs Greedy Algorithms: The debate regarding Neil G's answer on the same topic
I was trying to understand the differences between dynamic and greedy algorithms, and this answer by Neil G was quite helpful. However, there was one statement he made that caused a debate in the comments section.

The difference between dynamic programming and greedy algorithms is that with dynamic programming, the subproblems overlap. That means that by "memoizing" solutions to some subproblems, you can solve other subproblems more quickly.

Comments aren't the best place to resolve a doubt, so I'm creating this post. My questions are:

  • Minimum spanning trees have optimal substructure as well as overlapping subproblems. Also, in an MST, a locally optimal choice is globally optimal. Thus both the dynamic and the greedy properties hold, right? How does the quoted statement above hold up against this?

  • How is the property of optimal substructure different from the property "locally optimal implies globally optimal"? My reasoning goes: if something has an optimal substructure, then all locally optimal choices must also be globally optimal, right? Can someone explain where I'm going wrong here?

    English is not my native language, so please feel free to correct any mistakes with the language.

  • Blastomere answered 18/9, 2015 at 13:15 Comment(2)
    Locally optimal is not always globally optimal. Think about a wave: if you look at one segment and search for the lowest point in that segment, there is no guarantee that it is also the lowest point of the whole wave. – Eightieth
    MST can be solved with a greedy algorithm such as Prim's. Which subproblem do you imagine is overlapping? – Teador

    I think the explanation of the difference between greedy and dynamic solutions is not good. A greedy solution makes choices using only local information, i.e. whatever looks best from the current position. As a result, greedy solutions may "get stuck" at a local optimum instead of the global one. Dynamic programming is a technique in which you break a single, more complex problem into simpler subproblems and then combine the results of the subproblems to obtain the result for the initial problem. A solution can be both greedy and dynamic. Take a look at my answer in the original thread.

    However, this statement of yours:

    If something has an optimal substructure, then all locally optimal
    choices must also be globally optimal, right?
    

    is wrong. Take for example the 0/1 knapsack problem: you are a thief breaking into a shop at night. You have a knapsack with a fixed weight capacity. The shop has products, each with an associated price and weight. Your goal is to steal the set of products with the greatest total price.

    Now take the instance where you have a knapsack of capacity 50 and products with (price, weight) pairs (21, 20), (30, 22), (22, 21), and (9, 9). If you make locally optimal choices (i.e. each time you select the item with the greatest price/weight ratio), you will select the products (30, 22) and (21, 20) for a total price of 51, but this solution is not optimal. The optimal solution is to take (21, 20), (22, 21), and (9, 9), resulting in a profit of 52, which is bigger by 1.
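    You can check this gap by running both strategies on the same instance. Here is a small sketch (the function names are mine, not from the thread): `greedy_by_ratio` implements the locally optimal ratio rule, and `knapsack_dp` is the standard 0/1 knapsack dynamic program.

    ```python
    def greedy_by_ratio(items, capacity):
        """Pick items in decreasing price/weight ratio while they still fit."""
        total, remaining = 0, capacity
        for price, weight in sorted(items, key=lambda pw: pw[0] / pw[1], reverse=True):
            if weight <= remaining:
                total += price
                remaining -= weight
        return total

    def knapsack_dp(items, capacity):
        """0/1 knapsack DP: best[c] = max total price achievable with capacity c."""
        best = [0] * (capacity + 1)
        for price, weight in items:
            # Iterate capacities downward so each item is used at most once.
            for c in range(capacity, weight - 1, -1):
                best[c] = max(best[c], best[c - weight] + price)
        return best[capacity]

    items = [(21, 20), (30, 22), (22, 21), (9, 9)]
    print(greedy_by_ratio(items, 50))  # 51: takes (30, 22), then (21, 20)
    print(knapsack_dp(items, 50))      # 52: corresponds to (21, 20), (22, 21), (9, 9)
    ```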

    Inquisition answered 18/9, 2015 at 13:24 Comment(2)
    Your example only supports his statement. Presumably the subproblems are whether each item is included. In that case, the problem does exhibit optimal substructure. It's just that your solutions to the subproblems are approximate, and so your solution to the whole problem is therefore approximate. If you had optimal solutions to the subproblems, then you would have an optimal solution to the whole. – Teador
    Incidentally, a more common solution to the knapsack problem is to define the subproblems to be the maximum value I can steal using K spots in my knapsack, and having considered the first V items. That is, there are K_max * V_max subproblems. These subproblems also exhibit optimal substructure, but they also overlap because solutions to K, V tend to depend on solutions to {1...K, V-1}. An algorithm that exploits this dependency is a dynamic programming algorithm. One that tries to solve the subproblems independently is greedy. – Teador
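    The K, V subproblem formulation described in that comment can be sketched with memoized recursion (this code is my own illustration, not from the thread; `best` is a name I chose). The cache is what exploits the overlap: the same subproblem (c, v) is reached along many different choice paths but solved only once.

    ```python
    from functools import lru_cache

    def max_value(prices, weights, capacity):
        """Max total price stealable with the given capacity, via memoized subproblems."""
        @lru_cache(maxsize=None)
        def best(c, v):
            # best(c, v): max value using capacity c, considering only the first v items.
            if v == 0:
                return 0
            skip = best(c, v - 1)               # don't take item v
            if weights[v - 1] <= c:             # take item v if it fits
                take = best(c - weights[v - 1], v - 1) + prices[v - 1]
                return max(skip, take)
            return skip

        return best(capacity, len(prices))

    print(max_value([21, 30, 22, 9], [20, 22, 21, 9], 50))  # 52
    ```

    Remove the `lru_cache` line and the function still returns the same answer, only slower: memoization changes the running time, not the result, which is exactly the point of the quoted statement in the question.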
