Reduce the sum of differences between adjacent array elements
I came across a coding challenge on the internet; the question is listed below:

Have the function FoodDistribution(arr) read the array of numbers stored in arr which will represent the hunger level of different people ranging from 0 to 5 (0 meaning not hungry at all, 5 meaning very hungry). You will also have N sandwiches to give out which will range from 1 to 20. The format of the array will be [N, h1, h2, h3, ...] where N represents the number of sandwiches you have and the rest of the array will represent the hunger levels of different people. Your goal is to minimize the hunger difference between each pair of people in the array using the sandwiches you have available.

For example: if arr is [5, 3, 1, 2, 1], this means you have 5 sandwiches to give out. You can distribute them in the following order to the people: 2, 0, 1, 0. Giving these sandwiches to the people their hunger levels now become: [1, 1, 1, 1]. The difference between each pair of people is now 0, the total is also 0, so your program should return 0. Note: You may not have to give out all, or even any, of your sandwiches to produce a minimized difference.

Another example: if arr is [4, 5, 2, 3, 1, 0] then you can distribute the sandwiches in the following order: [3, 0, 1, 0, 0] which makes all the hunger levels the following: [2, 2, 2, 1, 0]. The differences between each pair of people is now: 0, 0, 1, 1 and so your program should return the final minimized difference of 2.
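Both worked examples can be verified with a small helper (the name total_difference is mine, matching the code later in the question):

```python
def total_difference(levels):
    # sum of absolute differences between adjacent elements
    return sum(abs(a - b) for a, b in zip(levels, levels[1:]))

# first example: [3, 1, 2, 1] plus the distribution [2, 0, 1, 0] -> [1, 1, 1, 1]
print(total_difference([1, 1, 1, 1]))
# second example: [5, 2, 3, 1, 0] plus [3, 0, 1, 0, 0] -> [2, 2, 2, 1, 0]
print(total_difference([2, 2, 2, 1, 0]))
```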

My first approach was to try to solve it greedily as the following:

  1. Loop until the sandwiches are zero
  2. For each element in the array copy the array and remove one hunger at location i
  3. Get the best combination that will give you the smallest hunger difference
  4. Reduce the sandwiches by one and consider the local min as the new hunger array
  5. Repeat until sandwiches are zero or the hunger difference is zero

I assumed that taking the local minimum at each step would lead to the global minimum, which turned out to be wrong for the following test case: [7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]

def FoodDistribution(arr):
    sandwiches = arr[0]
    hunger_levels = arr[1:]

    # Function to calculate the total difference
    def total_difference(hunger_levels):
        return sum(abs(hunger_levels[i] - hunger_levels[i + 1]) for i in range(len(hunger_levels) - 1))

    def reduce_combs(combs):
        local_min = float('inf')
        local_min_comb = None
        for comb in combs:
            current_difference = total_difference(comb)
            if current_difference < local_min:
                local_min = current_difference
                local_min_comb = comb

        return local_min_comb
    # Function to distribute sandwiches
    def distribute_sandwiches(sandwiches, hunger_levels):
        global_min = total_difference(hunger_levels)
        print(global_min)
        while sandwiches > 0 and global_min > 0:
            combs = []
            for i in range(len(hunger_levels)):
                comb = hunger_levels[:]
                comb[i] -= 1
                combs.append(comb)

            local_min_comb = reduce_combs(combs)
            x = total_difference(local_min_comb)
            print(sandwiches, x, local_min_comb)
            global_min = min(global_min, x)
            hunger_levels = local_min_comb
            sandwiches -= 1
        return global_min

    # Distribute sandwiches and calculate the minimized difference
    global_min = distribute_sandwiches(sandwiches, hunger_levels)
    return global_min

if __name__ == "__main__":
    print(FoodDistribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]))

I changed my approach to brute force and then use memoization to optimize the time complexity:

  1. Recurse until out of bounds or sandwiches are zero
  2. For each location there are two options either to use a sandwich or ignore
  3. When the option is to use a sandwich decrement sandwiches by one and stay at the same index.
  4. When the option is to ignore increment the index by one.
  5. Take the minimum between the two options and return it.

The issue here is that I didn't know what to store in the memo, and storing only the index and sandwiches is not enough. I am not sure if this problem has a better complexity than 2^(n+s). Is there a way to know whether dynamic programming or memoization is the wrong tool for a problem? In this case, can I improve the complexity with memoization, or does this problem need a different approach?

def FoodDistribution(arr):
    sandwiches = arr[0]
    hunger_levels = arr[1:]

    # Distribute sandwiches and calculate the minimized difference
    global_min = solve(0, sandwiches, hunger_levels)
    return global_min


def solve(index, sandwiches, hunger_levels):
    if index >= len(hunger_levels) or sandwiches == 0:
        return total_difference(hunger_levels)

    # take a sandwich
    hunger_levels[index] -= 1
    sandwiches -= 1
    minTake = solve(index, sandwiches, hunger_levels)
    hunger_levels[index] += 1
    sandwiches += 1

    # don't take a sandwich
    dontTake = solve(index + 1, sandwiches, hunger_levels)

    return min(minTake, dontTake)


def total_difference(hunger_levels):
    return sum(abs(hunger_levels[i] - hunger_levels[i + 1]) for i in range(len(hunger_levels) - 1))

if __name__ == "__main__":
    print(FoodDistribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]))
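For reference, the brute force above can be memoized once you notice that only three things matter at each step: the index, the previous person's adjusted hunger level, and the sandwiches left. A sketch of that idea (the function name and structure are mine, not part of the original attempt):

```python
from functools import lru_cache

def min_total_difference(arr):
    # exhaustive DP: state is (index, previous adjusted level, sandwiches left)
    food, hunger = arr[0], tuple(arr[1:])

    @lru_cache(maxsize=None)
    def best(i, prev, left):
        if i == len(hunger):
            return 0
        result = float('inf')
        # give person i any feasible number of sandwiches g
        for g in range(min(left, hunger[i]) + 1):
            level = hunger[i] - g
            step = 0 if prev is None else abs(level - prev)
            result = min(result, step + best(i + 1, level, left - g))
        return result

    return best(0, None, food)

print(min_total_difference([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]))  # 6
```

Since the problem bounds N by 20 and hunger levels by 5, the state space stays small, giving roughly O(s·n·h) states with an O(h) inner loop.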

Edit: Multiple final states give the optimal answer for the use case above:

sandwiches = 7 
hunger = [5, 4, 3, 4, 5, 2, 3, 1, 4, 5]
optimal is 6
states as follow
[3, 3, 3, 3, 3, 2, 2, 1, 4, 5]
[4, 3, 3, 3, 3, 2, 2, 1, 4, 4]
[4, 4, 3, 3, 2, 2, 2, 1, 4, 4]
[4, 4, 3, 3, 3, 2, 1, 1, 4, 4]
[4, 4, 3, 3, 3, 2, 2, 1, 3, 4]
[4, 4, 3, 3, 3, 2, 2, 1, 4, 4]
[5, 4, 3, 3, 3, 2, 2, 1, 3, 3]

Note: I accepted @Matt Timmermans' answer as it provides the best time complexity, O(n) and O(n log n). But the two other answers are excellent for understanding how to implement the solution using dynamic programming or memoization. Personally I prefer the memoization version; its expected time complexity is O(s·n·h), where h is the max hunger level in the array.

Oppugnant answered 19/5, 2024 at 14:28 Comment(4)
@Unmitigated the correct output is 6Oppugnant
[6, 2, 1, 5, 12, 20, 20, 20, 18, 4, 3] seems to produce 33 for your second method and 34 for the method in Matt Timmermans' answer.Beograd
It could be an edge case in Matt's implementation. I ran it using brute force and memoization, and the result should be 33. The output is [2, 1, 5, 12, 18, 18, 18, 18, 4, 3]Oppugnant
I'm not convinced it can be solved for all cases optimally as Matt suggests, although I trust his knowledge and experience more than mine. I am testing it with random input and so far we don't seem to have a foolproof solution in that answer.Beograd
The sum of the absolute differences only goes down when you reduce a local maximum.

If you reduce a maximum on either end, the sum of differences goes down by one, like [3,2,1] -> [2,2,1].

If you reduce a maximum in the middle, the sum of differences goes down by two, like [1,3,2] -> [1,2,2].
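Both cases can be checked directly with the question's total_difference helper (reproduced here):

```python
def total_difference(levels):
    return sum(abs(a - b) for a, b in zip(levels, levels[1:]))

# reducing a maximum at the end: the sum drops by 1
print(total_difference([3, 2, 1]) - total_difference([2, 2, 1]))
# reducing a maximum in the middle: the sum drops by 2
print(total_difference([1, 3, 2]) - total_difference([1, 2, 2]))
```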

If a maximum gets reduced, it may merge into another maximum that you can reduce, but the new maximum will never be cheaper or more cost-effective. It can only get wider, like [1,3,2,1] -> [1,2,2,1], where the reduced maximum merges with its neighbour into a span of width 2.

The optimal strategy is, therefore, just to repeatedly reduce the most cost-effective maximum, in terms of benefit/width, that you have enough sandwiches to reduce. The benefit is 1 for maximums at the ends and 2 for maximums in the middle.

Stop when you no longer have enough sandwiches to reduce the narrowest maximum.

You can do this in O(n) time by finding all the maximums and keeping them in a priority queue to process them in the proper order as they are reduced.

O(n log n) is easy. In order to make that O(n) bound, you'll need to use a counting-sort-type priority queue instead of a heap. You also need to be a little clever about keeping track of the regions of the array that are known to be at the same height so you can merge them in constant time.

Here's an O(n) implementation in Python:

def distribute(arr):

    foodLeft = arr[0]
    hungers = arr[1:]

    # For each element in hungers, calculate number of adjacent elements at same height
    spans = [1] * len(hungers)
    for i in range(1, len(hungers)):
        if hungers[i-1]==hungers[i]:
            spans[i] = spans[i-1]+1
    for i in range(len(hungers)-2, -1, -1):
        if hungers[i+1]==hungers[i]:
            spans[i] = spans[i+1]

    # spans are identified by their first element.  Only the counts and hungers on the edges need to be correct

    # if a span is a maximum, its height; otherwise 0
    def maxHeight(left):
        ret = len(spans)
        if left > 0:
            ret = min(ret, hungers[left] - hungers[left-1])
        right = left + spans[left]-1
        if right < len(spans)-1:
            ret = min(ret, hungers[right] - hungers[right+1])
        return max(ret,0)
    
    # change the height of a span and return the maybe new span that it is a part of
    def reduce(left, h):
        right = left + spans[left] - 1
        hungers[left] -= h
        hungers[right] = hungers[left]
        if right < len(spans)-1 and hungers[right+1] == hungers[right]:
            # merge on the right
            w = spans[right+1]
            spans[right] = spans[right+1] = 0 # for debuggability
            right += w
        if left > 0 and hungers[left-1] == hungers[left]:
            # merge on left
            w = spans[left-1]
            spans[left] = spans[left-1] = 0 # for debuggability
            left -= w
        spans[left] = spans[right] = right - left + 1
        return left
    
    def isEdge(left):
        return left < 1 or left + spans[left] >= len(spans)
    
    # constant-time priority queue for non-edge spans
    # it's just a list of spans per width
    pq = [[] for _i in range(len(spans)+1)]

    # populate priority queue
    curspan = 0
    while curspan < len(spans):
        width = spans[curspan]
        if maxHeight(curspan) > 0 and not isEdge(curspan):
            pq[width].append(curspan)
        curspan += width

    # this will be True at the end if we can sacrifice one edge max selection to get one
    # mid max selection, which would produce one more point
    canBacktrack = False
    # process mid spans in order
    curpri = 1
    # while not all hungers are the same
    while spans[0] < len(spans):

        # find the best middle maximum
        bestmid = None
        midwidth = None
        if curpri < len(pq) and curpri <= foodLeft:
            if len(pq[curpri]) == 0:
                curpri += 1
                continue
            bestmid = pq[curpri][-1]
            midwidth = spans[bestmid]

        # find the best edge maximum
        bestedge = None
        edgewidth = None
        if maxHeight(0) > 0 and foodLeft >= spans[0]:
            bestedge = 0
            edgewidth = spans[0]
        r = len(spans)-spans[-1]
        if maxHeight(r) > 0 and foodLeft >= spans[r] and (bestedge == None or spans[r] < edgewidth):
            bestedge = r
            edgewidth = spans[r]

        # choose
        bestspan = None
        h = 0
        if bestedge == None:
            if bestmid == None:
                break
            bestspan = bestmid
            bestwidth = midwidth
            canBacktrack = False
        elif bestmid == None:
            bestspan = bestedge
            bestwidth = edgewidth
            canBacktrack = False
        elif midwidth <= edgewidth*2:
            # mid maximum is more cost-effective
            bestspan = bestmid
            bestwidth = midwidth
            canBacktrack = False
        else:
            bestspan = bestedge
            bestwidth = edgewidth
            # tentative
            canBacktrack = True
        
        if bestspan == bestmid:
            # chose the middle span -- remove from pq
            pq[curpri].pop()

        # how much we can reduce this maximum by
        h = min(foodLeft//bestwidth, maxHeight(bestspan))
        foodLeft -= bestwidth*h
        canBacktrack = canBacktrack and foodLeft < midwidth and foodLeft + edgewidth >= midwidth
        bestspan = reduce(bestspan, h)
        if maxHeight(bestspan) > 0 and not isEdge(bestspan):
            pq[spans[bestspan]].append(bestspan)
    
    # finally, calculate the new total diffs
    totaldiff = 0
    curspan = spans[0]
    while curspan < len(spans):
        totaldiff += abs(hungers[curspan] - hungers[curspan-1])
        curspan += spans[curspan]
    if canBacktrack:
        totaldiff -= 1
    return totaldiff

# test
cases = [
    [8, 11, 14, 15, 16, 13, 2, 3],
    [7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5],
    [2, 4, 4, 3, 4, 5],
    [3, 3, 4, 4, 4, 3, 4],
    [4, 3, 4, 4, 4, 3, 5],
    [5, 3, 4, 4, 4, 3, 6],
    [3, 3, 4, 4, 3, 4, 5]
]
for case in cases:
    print("{0}: {1}".format(case, distribute(case)))
Peroneus answered 20/5, 2024 at 17:24 Comment(11)
Can you please add code we can test? Also, could you please speak to the example in my answer.Beograd
I added an implementation. In your example it doesn't matter which choice you make; the minimum diff at the end is 2 either way. In 2 3 4 4 3 4 5 it does make a difference. I made that work by favoring maximums in the middle... but there might still be a problem there. I'll think about it. If there is a problem, I'm pretty sure it has a constant time solution, because we would only need to compare the best edge-adjacent maximum with the best middle one.Peroneus
Yes, there was a complication. Should be fixed now. I just kept track of whether it was possible at the end to sacrifice one reduction against an edge to gain one reduction in the middle, which would be worth an extra point.Peroneus
[8, 11, 14, 15, 16, 13, 2, 3] seems to produce 13 for OPs second method and 15 for the method in this answer.Beograd
[6, 2, 1, 5, 12, 20, 20, 20, 18, 4, 3] seems to produce 33 for OPs second method and 34 for the method in the newly edited answer.Beograd
I'm not much of an expert but how come you don't think it's NP'ish but rather possible to select without full view of the final, optimal partition?Beograd
Well, it's certainly not NP hard. The fact that selectable spans get monotonically worse on both the edges and the middle suggests that a greedy strategy should work. The only difficulty is figuring out how to limit the number of edge selections so that 1 edge selection doesn't prevent 1 middle selection by running out of food.Peroneus
Argh. I figured out how to fix this, but it exceeds the amount of effort I'm willing to put into this problem. I would prefer not to leave an incorrect answer, but SO won't let me delete it! Sigh. Maybe in a few days I'll be inspired to put in the effort.Peroneus
Cool! Maybe just describe the algorithm?Beograd
The dynamic programming code in my answer shows that the optimal values for [8, 11, 14, 15, 16, 13, 2, 3] and [6, 2, 1, 5, 12, 20, 20, 20, 18, 4, 3] are 13 and 33, respectively.Avesta
Here's a longer test case for which the optimal value is 70: [30, 2, 9, 8, 7, 2, 11, 10, 11, 0, 2, 6, 4, 8, 0, 9, 4, 8, 7, 1, 8, 1, 0, 3, 4, 10, 6, 9, 4, 11, 6].Avesta
This can be done by dynamic programming. Just build up the following data structure.

by position in the array
    by current hunger level (after distributing food)
        by total food left
            smallest sum of differences so far

This is enough to find out what the smallest sum is at the end of the array.

Optionally, also track:

            # of ways to achieve smallest hunger level

That will let you find how many ways to achieve that smallest sum.

Optionally, also track:

            list of previous hunger levels that can achieve it

And now you can reconstruct all solutions that achieve it.

I will leave the exercise of actually doing it to you. (I have lots of examples in my history of solving similar problems.)
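A minimal iterative sketch of that build-up in Python (the structure and names are my reading of the data structure above; at each position a dict maps (current adjusted level, food left) to the smallest sum of differences so far):

```python
def smallest_sum(arr):
    food, hunger = arr[0], arr[1:]
    # dp maps (adjusted level of the current person, food left)
    # to the smallest sum of adjacent differences so far
    dp = {(hunger[0] - g, food - g): 0
          for g in range(min(food, hunger[0]) + 1)}
    for h in hunger[1:]:
        nxt = {}
        for (prev, left), total in dp.items():
            # give the current person any feasible number of sandwiches g
            for g in range(min(left, h) + 1):
                key = (h - g, left - g)
                cand = total + abs(h - g - prev)
                if cand < nxt.get(key, float('inf')):
                    nxt[key] = cand
        dp = nxt
    return min(dp.values())
```

Tracking the number of ways or the predecessor states, as suggested above, just means storing extra fields alongside each dict value.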

Isopropanol answered 19/5, 2024 at 19:38 Comment(5)
You may wish to include the expression for the recursive relationship.Avesta
Could you please explain the second level 'by current hunger level (after distributing food)' I tried to have it as the following key = "{index}-{sandwiches}-{hunger_levels" which I believe will reduce the complexity to s*n*(nPs) is that what you mean by current hunger level ?Oppugnant
@MohdAlomar the state can be expressed as f(i, value, remaining) -> min sum of adjacent differences (value is "hunger level."). Then we have f(...) = min(f(i - 1, v, r) + abs(value - v)), over the possible values of v and r, where A[i] - value ≥ remaining.Beograd
@CarySwoveland I would never think of doing that because I tend to find that style of explanation confusing, and I conversely find the code easy to write as soon as I know what the data structure is supposed to be. So it always surprises me to find that others think that is helpful.Isopropanol
@MohdAlomar What I mean is this. If index 9 started at hunger level 4, there will likely be entries at index 9 for hunger levels 0, 1, 2, 3, and 4, depending on whether we just distributed 4, 3, 2, 1 or 0 amounts of food to index 9. And if one of the ways that could get that minimum score was hunger level 2 at index 9, with 15 food left, there is a solution tracing backwards at index 8 with 17 food left. (You'll need to search for how much was distributed there if that was not recorded...)Isopropanol
Reducing an element that has a neighbour on each side keeps the total the same if one neighbour is equal or higher and the other is lower. If the element is higher than both its neighbours, reducing it lowers the total. If both neighbours are equal or higher, reducing it can only increase the total.

But the choice of which element (or consecutive group) to reduce, even when each individual choice is certain to reduce the total under the conditions above, cannot be arbitrary, since it may create a partition of the sandwiches that does not use them all. For example,

2 sandwiches
2 4 4 3 4 5
          ^ Cost 1 reduces by 1
            but prevents further
            reductions.

  ^^^ Cost 2, reduces by 2.
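The example can be checked numerically, using total_difference as defined in the question:

```python
def total_difference(levels):
    return sum(abs(a - b) for a, b in zip(levels, levels[1:]))

print(total_difference([2, 4, 4, 3, 4, 5]))  # starting total: 5
# spend 1 sandwich on the trailing 5: total drops to 4, but the
# remaining sandwich cannot afford the width-2 plateau of 4s
print(total_difference([2, 4, 4, 3, 4, 4]))  # 4
# spend both sandwiches on the 4,4 plateau instead: total drops to 3
print(total_difference([2, 3, 3, 3, 4, 5]))  # 3
```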

I'm not sure if there is a general polynomial time solution because the choices are ultimately some partition of "hills" to reduce (meaning at least three consecutive elements where all the middle elements are greater than the leftmost and rightmost; or the array end sections if they are higher than their neighbour). We can have a dynamic program based on index, hunger level, and remaining sandwiches, as btilly described, where the latter is dependent on the range of values.

Lotetgaronne answered 20/5, 2024 at 0:7 Comment(0)
Here is a dynamic programming solution in Ruby, which, in view of the similarity of Ruby and Python, could be translated to Python fairly easily.1 This dynamic programming model may be what @btilly had in mind in his answer.

def opt(nbr_sands, hlevels)
  arr = Array.new(hlevels.size) { Array.new(nbr_sands+1) }
  (0..nbr_sands).each do |sands_left|
    arr[0][sands_left] = hash_person_0(nbr_sands-sands_left, hlevels.first)
  end
  (1..hlevels.size-1).each do |p|
    (0..nbr_sands).each do |sands_left|
      arr[p][sands_left] = hash_person_p(nbr_sands, sands_left, hlevels[p],
        hlevels[p-1], arr[p-1])
    end
  end
  arr
end
def hash_person_0(sands_for_person_0, hlevel)
  val = sands_for_person_0 <= hlevel ? 0 : Float::INFINITY
  { sands_for_person_0 => { val: val, opt_prev_sands: nil } }
end 
def hash_person_p(nbr_sands, sands_left, hlevel, prev_hlevel, prev_arr )
  max_sands_to_p = [nbr_sands - sands_left, hlevel].min
  (0..max_sands_to_p).each_with_object({}) do |sands_to_p, h|
    adj_hlevel = hlevel - sands_to_p
    prev_sands_left = sands_left + sands_to_p
    n, d = prev_arr[prev_sands_left].min_by do |n,d|
      d[:val] + (adj_hlevel - (prev_hlevel - n)).abs
    end
    val = d[:val] + (adj_hlevel - (prev_hlevel - n)).abs 
    h[sands_to_p] = { val: val, opt_prev_sands: n }
  end
end

This solution is actually incomplete as it does not include the code needed to recover the optimal solution once it has been computed. However, I will provide an example that will show how that final step could be accomplished quite easily.


In dynamic programming parlance familiar to some, the model has five stages, corresponding to the five persons, and at each stage the state variables are two-tuples comprised of the number of sandwiches remaining at the end of that stage and the optimal allotment of sandwiches at that stage, given the number of sandwiches remaining at the end of that stage, when that stage is the final one.

Here Bellman's Principle of Optimality requires that, given any of the state variables for person p, the optimal allocation of sandwiches to the remaining persons does not depend on the allocations of sandwiches to persons preceding person p that gave rise to the given state variable for person p. This requirement is clearly satisfied.


Suppose

nbr_sands = 2
hlevels = [3, 2, 1]

It is evident that there are two optimal allocations:

[1, 1, 0]

and

[1, 0, 0]

That is, persons 0 and 1 are to each get 1 sandwich and person 2 is to get none (reducing [3, 2, 1] to [2, 1, 1]) or person 0 is to get 1 sandwich and persons 1 and 2 are to get none (reducing [3, 2, 1] to [2, 2, 1]), yielding optimal values of 1 in both cases.
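A quick brute-force enumeration (in Python, since the question's code is in Python; the helper name check is mine) confirms that these are the only two optimal allocations:

```python
from itertools import product

def check(nbr_sands, hlevels):
    # enumerate every feasible allocation and collect all optima
    best, optima = float('inf'), []
    for alloc in product(*(range(h + 1) for h in hlevels)):
        if sum(alloc) > nbr_sands:
            continue
        adj = [h - a for h, a in zip(hlevels, alloc)]
        total = sum(abs(x - y) for x, y in zip(adj, adj[1:]))
        if total < best:
            best, optima = total, [alloc]
        elif total == best:
            optima.append(alloc)
    return best, optima

print(check(2, [3, 2, 1]))  # (1, [(1, 0, 0), (1, 1, 0)])
```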


For these values of nbr_sands and hlevels

opt(nbr_sands, hlevels)

returns the following (annotated) array arr, where arr[p][n] equals a hash associated with person p (base zero) when n sandwiches are left after all allocations to persons up to and including person p. I explain the meaning of those hashes below.

[
  # p = 0 #
  [
    {2=>{:val=>0, :opt_prev_sands=>nil}},
    {1=>{:val=>0, :opt_prev_sands=>nil}}, # * and **
    {0=>{:val=>0, :opt_prev_sands=>nil}}
  ],
  # p = 1
  [
    # sands_left = 0
    {
      0=>{:val=>1, :opt_prev_sands=>2},
      1=>{:val=>1, :opt_prev_sands=>1}, # * prev sands left = 0 + 1 = 1, prev sands = 1
      2=>{:val=>3, :opt_prev_sands=>0}
    },
    # sands_left = 1
    {
      0=>{:val=>0, :opt_prev_sands=>1}, # * prev sands left = 1 + 0 = 1, prev sands = 1
      1=>{:val=>2, :opt_prev_sands=>0}
    },
    # sands_left = 2
    {
      0=>{:val=>1, :opt_prev_sands=>0}
    }
  ],
  # p = 2
  [
    # sands_left = 0
    {
      0=>{:val=>1, :opt_prev_sands=>1}, # * prev sands_left = 0 + 0 = 0, prev sands = 1
      1=>{:val=>2, :opt_prev_sands=>0}
    },
    # sands_left = 1
    {
      0=>{:val=>1, :opt_prev_sands=>0}, # ** prev sands_left = 1 + 0 = 1, prev sands = 0
      1=>{:val=>3, :opt_prev_sands=>0}
    },
    # sands_left = 2
    {
      0=>{:val=>2, :opt_prev_sands=>0}
    }
  ]
]

We may now recover an optimal solution, beginning at the end, with person p = 2.

We are interested in finding which combination of sandwiches left and allocation to person 2 minimizes the value of :val in the associated hash.

There are two, identified by * and **. We generally are only interested in finding any one optimal solution when there is more than one, so let's look at the one marked by *. That corresponds to

arr[2][0][0]
  #=> {:val=>1, :opt_prev_sands=>1}

The first [0] corresponds to there being zero sandwiches left; the last [0] returns the value of the hash when zero sandwiches are given to person 2. That means that the number of sandwiches at the beginning of period 2, and hence at the end of period 1, equals 0 + 0 #=> 0, and the optimal number of sandwiches given to person 1 is 1.

That means that we next need to examine

arr[1][0][1]
  # {:val=>1, :opt_prev_sands=>1}

This means that the number of sandwiches remaining after that allocation to person 1 is 0 + 1 = 1 and person 0 is to receive 1 sandwich. Therefore, one optimal solution is given by [1, 1, 0], reducing hunger levels to [3-1, 2-1, 1-0] #=> [2, 1, 1].

I will leave it as an exercise for the reader to compute the second optimal solution.

1. I am in the process of teaching myself Python and, as part of that endeavor, will in time edit my answer to give the equivalent solution in Python.

Avesta answered 31/5, 2024 at 22:23 Comment(0)
