Is Kadane's Algorithm Greedy or Optimised DP?

I feel like Kadane's algorithm is a modified version of the true dynamic programming solution to the maximum subarray problem. Why do I feel so? Because the maximum subarray sum can also be computed like this:

for (i = 0; i < N; i++)
{
    DP[i][A[i]] = true;                         // start a new subarray at i
    if (i > 0)
        for (j = -MAX_SUM; j <= +MAX_SUM; j++)  // j may be negative
            if (DP[i-1][j])
                DP[i][j + A[i]] = true;         // extend a subarray ending at i-1
}

The recurrence: if it is possible to form sum j with a subarray ending at index i-1, then I can form j + A[i] using the i-th element, and I can also form A[i] alone by starting a new subarray at position i. At the end we search this DP array for the maximum j that is marked true.

Note: DP[i][j] represents whether it is possible to make sum j using a subarray ending at i. Here I assume j can be negative too. Now one can easily see that sum + (a negative number) < sum, which implies that carrying a negative partial sum forward never leads to a better total, so those states can be dropped. Moreover, we only care about the maximum j up to position i-1 and connect it with the i-th element, which makes me feel it is making a kind of greedy choice (simply because maximum + element gives a maximum).

Note: I haven't studied greedy algorithms yet, but I have an idea of what a greedy choice is.

EDIT: Someone said my algorithm doesn't make any sense, so I am posting my code to make myself clear. I haven't allowed negative values of j, since they are never fruitful. I repeat: my state is defined as "is it possible to make sum j using a subarray ending at i".

#include<bits/stdc++.h>
using namespace std;

// DP[i][j] != 0  <=>  sum j can be formed by some subarray ending at index i (1-based).
// Globals are zero-initialized, so every state starts out false.
int DP[101][101];

int main()
{
    int i,j,ans=INT_MIN;
    int A[]={3,-1,2,-1,5,-3};
    int N=sizeof(A)/sizeof(int);
    for(i=1;i<=N;i++)
    {
        // Start a new subarray at position i (negative single-element sums are dropped).
        if(A[i-1]>=0)
            DP[i][A[i-1]]++;
        for(j=0;j<=100;j++)
        {
            // Extend a subarray ending at i-1 with sum j, keeping only non-negative sums.
            if(DP[i-1][j])
            {
                if(j+A[i-1]>=0)
                    DP[i][j+A[i-1]]++;
            }
            if(DP[i][j])
                ans=max(ans,j);
        }
    }
    cout<<ans<<"\n";
    return 0;
}

Output 8
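
In other words, if for every position I keep only the largest j that is reachable, the whole table collapses to one number per position, and that is exactly Kadane's recurrence. A rough sketch of that collapsed form (the variable names are mine, not from the code above):

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int A[] = {3, -1, 2, -1, 5, -3};
    int N = sizeof(A) / sizeof(int);

    // best = the largest j reachable with a subarray ending at the current index
    int best = A[0];
    int ans  = A[0];

    for (int i = 1; i < N; i++)
    {
        // Either connect the maximum j for position i-1 with A[i], or start fresh at i.
        // A negative 'best' never survives this max, which is the "drop negative sums" step.
        best = max(A[i], best + A[i]);
        ans  = max(ans, best);
    }
    cout << ans << "\n";    // prints 8 for this array
    return 0;
}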

Webby answered 1/7, 2015 at 7:37 Comment(7)
I have added my code to make it clear! - Webby
Dynamic programming is a technique, it is not a property. You can use dynamic programming anywhere to reduce time complexity, while greedy is a property. You can say that Dijkstra's algorithm uses dynamic programming when it stores all previously calculated distances in an array and uses them to calculate the shortest distance to the next city. However, at the same time, Dijkstra's algorithm is greedy. - Diuresis
Is my question unclear? I am sorry if it is. I asked: is Kadane's algorithm a greedy algorithm too, or an optimised form of this DP? I said so because if we consider only the highest j in my solution (and ignore negative j), it yields Kadane's algorithm, where j is the sum that can be formed using a subarray. That follows from the property that maximum sum + something > smaller sum + the same thing. And I got what you said @PhamTrung. - Webby
IMO, it is not greedy; actually, it is brute force. DP only makes sure that you don't need to solve a case twice, but basically you still need to check all cases and fill in each state dp[i][j] one by one. - Diuresis
Usually, DP is used for recursive algorithms. Neither your algorithm nor Kadane's is recursive. - Boozer
Thanks for your full algorithm, but it still looks incoherent. Again, what's the meaning of DP? Is it supposed to contain boolean values? If so, why declare it as int and use ++ instead of assigning true/false? And why is it not initialized? - Boozer
Global variables and static variables are automatically initialized to zero. It uses ++ just because it has the same effect as assigning true. It was written in a hurry just to make the idea clear. Now can you tell me what it is? I think it is sufficient to understand if one wants to! - Webby

Kadane's is an iterative dynamic programming algorithm.

It is very common to optimize iterative DP algorithms to remove one dimension of the DP matrix along the major axis of the algorithm's progression.

The usual 'longest common subsequence' algorithm, for example, is usually described with a 2D matrix, but if the algorithm progresses from left to right, then you really only need space for 2 columns.
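
For instance, here is a minimal sketch of that two-column (two-row, as written here) trick for LCS length; the function name and example strings are just illustrative:

#include <bits/stdc++.h>
using namespace std;

// LCS length using only two rows of the usual (n+1) x (m+1) DP matrix.
int lcsLength(const string& a, const string& b)
{
    vector<int> prev(b.size() + 1, 0), cur(b.size() + 1, 0);
    for (size_t i = 1; i <= a.size(); i++)
    {
        for (size_t j = 1; j <= b.size(); j++)
        {
            if (a[i - 1] == b[j - 1])
                cur[j] = prev[j - 1] + 1;           // extend a common subsequence
            else
                cur[j] = max(prev[j], cur[j - 1]);  // drop a character from a or b
        }
        swap(prev, cur);                            // the row for i becomes "previous"
    }
    return prev[b.size()];
}

int main()
{
    cout << lcsLength("AGGTAB", "GXTXAYB") << "\n"; // 4 ("GTAB")
    return 0;
}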

Kadane's algorithm is a similar optimization applied to a 1D problem, so the whole DP array disappears. The DP code in your question has a 2D matrix for some reason. I don't know why -- it doesn't really make sense.

This site does a pretty good job of explaining the derivation: https://hackernoon.com/kadanes-algorithm-explained-50316f4fd8a6

Medicinal answered 31/7, 2020 at 4:22 Comment(0)

I think that it is a greedy algorithm, because Kadane's algorithm finds the maximum sum ending at each step and then takes the best of these as the overall solution.

Premeditate answered 31/7, 2020 at 3:51 Comment(0)

Kadane's algorithm can be viewed as both greedy and DP. We keep a running sum of integers, and when it becomes less than 0 we reset it to 0 (the greedy part), because continuing with a negative sum is worse than restarting with a new range. It can also be viewed as DP: at each stage we have 2 choices, either take the current element and continue with the previous sum, or restart a new range. Both choices are taken care of in the implementation.

Greedy Sol

# Python program to find the maximum contiguous subarray sum

from sys import maxsize

# Function to find the maximum contiguous subarray sum
def maxSubArraySum(a, size):

    max_so_far = -maxsize - 1   # best sum seen so far
    max_ending_here = 0         # running sum of the current range

    for i in range(0, size):
        max_ending_here = max_ending_here + a[i]
        if (max_so_far < max_ending_here):
            max_so_far = max_ending_here

        # A negative running sum can never help, so reset it (the greedy step)
        if max_ending_here < 0:
            max_ending_here = 0
    return max_so_far

# Driver code to check the above function
a = [-13, -3, -25, -20, -3, -16, -23, -12, -5, -22, -15, -4, -7]
print("Maximum contiguous sum is", maxSubArraySum(a, len(a)))

DP Sol

# Python program to print largest contiguous array sum
 
from sys import maxsize
 
# Function to find the maximum contiguous subarray
# and print its starting and end index
def maxSubArraySum(a,size):
 
    max_so_far = -maxsize - 1
    max_ending_here = 0
    start = 0
    end = 0
    s = 0
 
    for i in range(0,size):
 
        max_ending_here += a[i]
 
        if max_so_far < max_ending_here:
            max_so_far = max_ending_here
            start = s
            end = i
 
        if max_ending_here < 0:
            max_ending_here = 0
            s = i+1
 
    print ("Maximum contiguous sum is %d"%(max_so_far))
    print ("Starting Index %d"%(start))
    print ("Ending Index %d"%(end))
 
# Driver program to test maxSubArraySum
a = [-2, -3, 4, -1, -2, 1, 5, -3]
maxSubArraySum(a,len(a))
Receptacle answered 4/12, 2021 at 15:46 Comment(0)

As defined in Introduction to Algorithms, "A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution."

Kadane's algorithm does look for a locally optimal solution via current_sum = max(0, current_sum + x); at the same time, it can also be seen as a space-optimized dynamic programming solution: dp[i] only relies on dp[i-1], hence we use a single integer variable to save space.

Therefore I feel that the transition function of the DP happens to have a greedy character, which makes it look like both DP and greedy.
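
A minimal sketch of the two readings side by side (the array and names are just illustrative); both compute the same answer:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    vector<int> A = {3, -1, 2, -1, 5, -3};

    // DP view: dp = best sum of a subarray ending at the current index.
    // It only depends on the previous value, so one variable replaces the whole dp array.
    int dp = A[0], bestDp = A[0];
    for (size_t i = 1; i < A.size(); i++)
    {
        dp = max(A[i], dp + A[i]);
        bestDp = max(bestDp, dp);
    }

    // Greedy view: keep a running sum, record it, and reset to 0 as soon as it goes
    // negative, because a negative prefix can never help a later subarray.
    int run = 0, bestGreedy = INT_MIN;
    for (int x : A)
    {
        run += x;
        bestGreedy = max(bestGreedy, run);
        if (run < 0)
            run = 0;
    }

    cout << bestDp << " " << bestGreedy << "\n";   // both print 8 here
    return 0;
}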

Jolly answered 23/3, 2021 at 4:10 Comment(0)

I think that it is hard to say what exactly this algorithm is. But most books classify it in the DP section, because you combine the solution for dp[n-1] to build the solution for dp[n].

Note: I don't understand why you use this version of the algorithm, which is O(n^2). You can simplify it to O(n):

curmax = 0
sol = 0
for x in array:
    curmax += x
    if (curmax < 0) curmax = 0
    if (curmax > sol) sol = curmax
Eachelle answered 1/7, 2015 at 7:46 Comment(2)
It is not O(N^2). It is O(N * maximum sum) in any case. - Webby
Oh yes, my bad. But O(N * maximum sum) is often bigger than O(N^2). - Eachelle
