How to make rounded percentages add up to 100%

25

278

Consider the four percentages below, represented as float numbers:

    13.626332%
    47.989636%
     9.596008%
    28.788024%
   -----------
   100.000000%

I need to represent these percentages as whole numbers. If I simply use Math.round(), I end up with a total of 101%.

14 + 48 + 10 + 29 = 101

If I use parseInt(), I end up with a total of 97%.

13 + 47 + 9 + 28 = 97

What's a good algorithm to represent any number of percentages as whole numbers while still maintaining a total of 100%?
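The mismatch is easy to reproduce; a minimal sketch (variable names are mine):

```javascript
const percentages = [13.626332, 47.989636, 9.596008, 28.788024];

// Rounding each value to the nearest integer overshoots the total.
const roundedSum = percentages
  .map(p => Math.round(p))       // [14, 48, 10, 29]
  .reduce((a, b) => a + b, 0);   // 101

// Truncating each value undershoots it.
const truncatedSum = percentages
  .map(p => Math.trunc(p))       // [13, 47, 9, 28]
  .reduce((a, b) => a + b, 0);   // 97

console.log(roundedSum, truncatedSum); // 101 97
```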


Edit: After reading some of the comments and answers, there are clearly many ways to go about solving this.

In my mind, to remain true to the numbers, the "right" result is the one that minimizes the overall error, defined by how much error rounding would introduce relative to the actual value:

        value  rounded     error               decision
   ----------------------------------------------------
    13.626332       14      2.7%          round up (14)
    47.989636       48      0.0%          round up (48)
     9.596008       10      4.0%    don't round up  (9)
    28.788024       29      2.7%          round up (29)

In case of a tie (3.33, 3.33, 3.33) an arbitrary decision can be made (e.g. 3, 4, 3).

Shea answered 20/11, 2012 at 22:38 Comment(10)
Suppose you have 3.33, 3.33 and 3.33. Which one will you make 4?Exposure
Exactly. The question embodies a contradiction in terms.Fremd
You guys are right of course, but that's part of my challenge. I clearly have to compromise something when removing precision. The question is, what's the most creative way to do that.Shea
It's a very common scenario in reporting - how to display a "total" of decimal values that doesn't always match the sum of the displayed values.Josiejosler
What is the "right" result in your example case? That may solve the disagreements on what the "best" solution is.Josiejosler
Here is a good article on javascript rounding.Exposure
@poezn: according to your definition I believe the solution would be to not round up 13.626332 (so taking 13). The round up already has a relative error of 2.74%. When you round down the relative error becomes -4.60%. However the absolute difference between those relatives errors (the net error introduced by this decision) is the lowest: 1.85%. That will minimize the overall relative errors.Locklin
Possible duplicate of How to deal with the sum of rounded percentage not being 100?Volkslied
Here is a nice realization in R. #40516369Overskirt
es6 Largest Remainder method implementation https://mcmap.net/q/110319/-how-to-deal-with-the-sum-of-rounded-percentage-not-being-100Decongestant
41

Since none of the answers here seem to solve it properly, here's my semi-obfuscated version using underscorejs:

function foo(l, target) {
    var off = target - _.reduce(l, function(acc, x) { return acc + Math.round(x) }, 0);
    return _.chain(l).
            sortBy(function(x) { return Math.round(x) - x }).
            map(function(x, i) { return Math.round(x) + (off > i) - (i >= (l.length + off)) }).
            value();
}

foo([13.626332, 47.989636, 9.596008, 28.788024], 100) // => [48, 29, 14, 9]
foo([16.666, 16.666, 16.666, 16.666, 16.666, 16.666], 100) // => [17, 17, 17, 17, 16, 16]
foo([33.333, 33.333, 33.333], 100) // => [34, 33, 33]
foo([33.3, 33.3, 33.3, 0.1], 100) // => [34, 33, 33, 0]
Linoleum answered 21/11, 2012 at 3:27 Comment(8)
Correct me if I am wrong, but isn't this an implementation of the algorithm proposed by my answer? (Not too clear on underscorejs)Chacma
@VarunVohra sorry i didn't notice this until now, yes it looks like your algorithm is the same :) not sure why my post is the accepted answer, the obfuscated code was just for the lolz...Linoleum
@Linoleum Deleted my comment; I just didn't realize it was supposed to return a sorted list. I apologize!Altorelievo
There is a problem with this function when the last element is 0 and the previous ones add to 100. E.g. [52.6813880126183, 5.941114616193481, 24.55310199789695, 8.780231335436383, 8.04416403785489, 0]. The last one logically returns -1. I thought of the following solution really quickly but there's probably something better: jsfiddle.net/0o75bw43/1Acidulous
@Acidulous it shows all 1 when all entries are zero in the input arrayEtui
This algorithm implementation is incorrect. It sorts by remainder, so if you do foo([13.989636, 47.626332, 9.596008, 28.788024], 100) it will output [14, 29, 48, 9].Savate
What would be the approach if I want the algorithm to return two decimal numbers? For example I input foo([13.626332, 47.989636, 9.596008, 28.788024], 100) and I want the output to be [13.63, 47.99, 9.60, 28.78].Airs
@Airs just multiply everything by 100 (values in l and the target), then "divide by 100" at the end.Linoleum
210

There are many ways to do just this, provided you are not concerned about reliance on the original decimal data.

The first, and perhaps most popular, method is the Largest Remainder Method, which is basically:

  1. Rounding everything down
  2. Getting the difference in sum and 100
  3. Distributing the difference by adding 1 to items in decreasing order of their decimal parts

In your case, it would go like this:

13.626332%
47.989636%
 9.596008%
28.788024%

If you take the integer parts, you get

13
47
 9
28

which adds up to 97, and you want to add three more. Now, you look at the decimal parts, which are

.626332%
.989636%
.596008%
.788024%

and take the largest ones until the total reaches 100. So you would get:

14
48
 9
29

Alternatively, you can simply show one decimal place instead of integer values, so the numbers would be 13.6, 48.0, 9.6 and 28.8. This considerably reduces how far the displayed total can drift from 100.
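The three steps above can be sketched in JavaScript like this (a minimal sketch; the function name `largestRemainder` is mine):

```javascript
// Largest Remainder Method: floor everything, then hand the leftover
// units to the entries with the largest fractional parts.
function largestRemainder(percentages, target = 100) {
  const result = percentages.map(p => Math.floor(p));
  const shortfall = target - result.reduce((a, b) => a + b, 0);

  // Indices ordered by descending fractional part.
  const byFraction = percentages
    .map((p, i) => [p - Math.floor(p), i])
    .sort((a, b) => b[0] - a[0]);

  // Add 1 to the `shortfall` entries with the largest remainders.
  for (let k = 0; k < shortfall; k++) {
    result[byFraction[k][1]] += 1;
  }
  return result;
}

largestRemainder([13.626332, 47.989636, 9.596008, 28.788024]);
// => [14, 48, 9, 29]
```

Note that this keeps the output in the original input order.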

Chacma answered 20/11, 2012 at 23:2 Comment(3)
This "Feature Column" on the website of American Mathematical Society – Apportionment II: Apportionment Systems – describes several similar 'apportionment' methods.Orbiculate
This almost looks like a copy and paste of my answer here #5227715.Volkslied
Note that, contrary to your comment on @DStanley 's answer, in your answer 9.596008% was rounded to 9% which is more than a 0.5% difference. Still a good answer, though.Bolshevism
63

Probably the "best" way to do this (quoted since "best" is a subjective term) is to keep a running (non-integral) tally of where you are, and round that value.

Then use that along with the history to work out what value should be used. For example, using the values you gave:

Value      CumulValue  CumulRounded  PrevBaseline  Need
---------  ----------  ------------  ------------  ----
                                  0
13.626332   13.626332            14             0    14 ( 14 -  0)
47.989636   61.615968            62            14    48 ( 62 - 14)
 9.596008   71.211976            71            62     9 ( 71 - 62)
28.788024  100.000000           100            71    29 (100 - 71)
                                                    ---
                                                    100

At each stage, you don't round the number itself. Instead, you round the accumulated value and work out the best integer that reaches that value from the previous baseline - that baseline is the cumulative value (rounded) of the previous row.

This works because you're not losing information at each stage but rather using the information more intelligently. The 'correct' rounded values are in the final column and you can see that they sum to 100.

You can see the difference between this and blindly rounding each value, in the third value above. While 9.596008 would normally round up to 10, the accumulated 71.211976 correctly rounds down to 71 - this means that only 9 is needed to add to the previous baseline of 62.


This also works for "problematic" sequences like three roughly-1/3 values, where one of them must be rounded up:

Value      CumulValue  CumulRounded  PrevBaseline  Need
---------  ----------  ------------  ------------  ----
                                  0
33.333333   33.333333            33             0    33 ( 33 -  0)
33.333333   66.666666            67            33    34 ( 67 - 33)
33.333333   99.999999           100            67    33 (100 - 67)
                                                    ---
                                                    100
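A compact JavaScript version of this scheme (a sketch; `cumulativeRound` is a name I've chosen, not from the answer):

```javascript
// Round the running total at each step; each output value is the gap
// between the current rounded cumulative sum and the previous baseline.
function cumulativeRound(values) {
  let cumul = 0;    // exact running total
  let baseline = 0; // rounded total emitted so far
  return values.map(v => {
    cumul += v;
    const rounded = Math.round(cumul);
    const need = rounded - baseline;
    baseline = rounded;
    return need;
  });
}

cumulativeRound([13.626332, 47.989636, 9.596008, 28.788024]);
// => [14, 48, 9, 29]
cumulativeRound([33.333333, 33.333333, 33.333333]);
// => [33, 34, 33]
```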
Ziwot answered 20/11, 2012 at 22:43 Comment(7)
This approach also works well for rounding small numbers as it prevents negative numbers in the outputLaflam
@Ziwot That's a smart solution. :) Could you share a short and fast pure JavaScript implementation?Failsafe
@Ziwot To what do you actually refer with "both those problems"?Failsafe
@Ben, regarding your query on my "fixes both these problems" comment, that would have been a response to an earlier comment that has now been deleted. I've deleted it now and hopefully answered in such a way that this comment will still make sense should you delete yours :-)Ziwot
@Ziwot Thanks for your answer. That's a pity! I would have been interested in what the two results were related to. Did you actually see my first comment?Failsafe
Implemented in JavaScript:
export const sum = (array = []) => array.reduce((sum, value) => sum + value, 0)
export const sharesToRoundedPercentages = (array = []) => {
  const total = sum(array)
  return array.reduce(({ roundedPcts, accPct }, share, index) => {
    const pct = (share / total) * 100
    accPct += pct
    const accRoundedPcts = sum(roundedPcts.slice(0, index))
    roundedPcts.push(Math.round(accPct) - accRoundedPcts)
    return { roundedPcts, accPct }
  }, { roundedPcts: [], accPct: 0 }).roundedPcts
}
Sy
This will work, but the problem is the rounding is a bit arbitrary and depends on the order of the numbers. Using @vvohra87's answer will make it independent of order and make sure the numbers with the highest remainders are rounded upActiniform
35

The goal of rounding is to generate the least amount of error. When you're rounding a single value, that process is simple and straightforward and most people understand it easily. When you're rounding multiple numbers at the same time, the process gets trickier - you must define how the errors are going to combine, i.e. what must be minimized.

The well-voted answer by Varun Vohra minimizes the sum of the absolute errors, and it's very simple to implement. However there are edge cases it does not handle - what should be the result of rounding 24.25, 23.25, 27.25, 25.25? One of those needs to be rounded up instead of down. You would probably just arbitrarily pick the first or last one in the list.

Perhaps it's better to use the relative error instead of the absolute error. Rounding 23.25 up to 24 changes it by 3.2% while rounding 27.25 up to 28 only changes it by 2.8%. Now there's a clear winner.

It's possible to tweak this even further. One common technique is to square each error, so that large errors count disproportionately more than small ones. I'd also use a non-linear divisor to get the relative error - it doesn't seem right that an error at 1% is 99 times more important than an error at 99%. In the code below I've used the square root.

The complete algorithm is as follows:

  1. Sum the percentages after rounding them all down, and subtract from 100. This tells you how many of those percentages must be rounded up instead.
  2. Generate two error scores for each percentage, one when rounded down and one when rounded up. Take the difference between the two.
  3. Sort the error differences produced above.
  4. For the number of percentages that need to be rounded up, take an item from the sorted list and increment the rounded down percentage by 1.

You may still have more than one combination with the same error sum, for example 33.3333333, 33.3333333, 33.3333333. This is unavoidable, and the result will be completely arbitrary. The code I give below prefers to round up the values on the left.

Putting it all together in Python looks like this.

from math import isclose, sqrt

def error_gen(actual, rounded):
    divisor = sqrt(1.0 if actual < 1.0 else actual)
    return abs(rounded - actual) ** 2 / divisor

def round_to_100(percents):
    if not isclose(sum(percents), 100):
        raise ValueError
    n = len(percents)
    rounded = [int(x) for x in percents]
    up_count = 100 - sum(rounded)
    errors = [(error_gen(percents[i], rounded[i] + 1) - error_gen(percents[i], rounded[i]), i) for i in range(n)]
    rank = sorted(errors)
    for i in range(up_count):
        rounded[rank[i][1]] += 1
    return rounded

>>> round_to_100([13.626332, 47.989636, 9.596008, 28.788024])
[14, 48, 9, 29]
>>> round_to_100([33.3333333, 33.3333333, 33.3333333])
[34, 33, 33]
>>> round_to_100([24.25, 23.25, 27.25, 25.25])
[24, 23, 28, 25]
>>> round_to_100([1.25, 2.25, 3.25, 4.25, 89.0])
[1, 2, 3, 4, 90]

As you can see with that last example, this algorithm is still capable of delivering non-intuitive results. Even though 89.0 needs no rounding whatsoever, one of the values in that list needed to be rounded up; the lowest relative error results from rounding up that large value rather than the much smaller alternatives.

This answer originally advocated going through every possible combination of round up/round down, but as pointed out in the comments a simpler method works better. The algorithm and code reflect that simplification.

Rolon answered 23/1, 2016 at 5:30 Comment(17)
I don't think you need to consider all combinations: process in order of decreasing drop in weighted error going from round-toward-zero to round-toward-infinity (pretty much just introducing weighting into Varun Vohra's and yonilevy's ("identical") answers).Backbreaker
@Backbreaker you're right, I was overthinking this. I couldn't just sort on the error since there are two errors for each value, but taking the difference resolved that problem. I've updated the answer.Rolon
I prefer to always have 0% when actual number is 0%. So adding if actual == 0: return 0 to error_gen works great.Vicissitude
what is the isclose method at the beginning of round_to_100?Thierry
@Thierry #5595925Rolon
I'd say the goal(s) of rounding are to make the number easier to report and communicate, and to avoid communicating false precision. I'd say whether generating the least amount of error is a goal or not is highly context dependent. Some times, some other consideration is more important (e.g. simplicity of rounding scheme).Stubbed
@M.Justin what I meant by "goal" was just giving a criteria for judging how good a rounding algorithm is. It's not a trivial subject, Wikipedia has a whole page dedicated to the rounding methods for a single number. I don't think false precision has much to do with it, since the decision to round means replacing too much precision with too little.Rolon
The "false precision" thing I got directly off the Wikipedia article you link to: "Rounding can also be important to avoid misleadingly precise reporting of a computed number, measurement or estimate; for example, a quantity that was computed as 123,456 but is known to be accurate only to within a few hundred units is usually better stated as 'about 123,500'.", where "misleadingly precise" links to the false precision article.Stubbed
@M.Justin certainly you can use rounding to avoid delivering misleading precision, but I'd wager it's most often used to deal with people's irrational preference for whole numbers. And I'll restate my observation that it's easy to replace too much precision with too little, with results that are just as misleading.Rolon
part 1/2 | if you put the following list. [23.50, 24.50, 25.50, 26.50] | You will get the following result. [24, 25, 25, 26] | But to minimize the squared error, it should have the following result. [23, 24, 26, 27] | To adjust the function, just remove the "error_gen(percents[i], rounded[i] + 1) - " part.Vellum
part 2/3 | Because this snippet cancels the square error analysis when the numbers have 0.5 of a decimal. In this case the error_gen results become zero, which causes the distribution of values to follow only the original order of the list, instead of the order that generates the smallest squared error in the rounding.Vellum
@Vellum our sequences have the exact same sum of errors squared, so I don't know why you claim yours is better. Every part of this code is there for a reason, if you tweak it to make one case better you will make it worse for another case.Rolon
Thanks for your answer which helped me to understand. Sorry for the incorrect note. | Could you please inform the name of the error function calculated in error_gen ? I couldn't find statistical concept in the relative error calculation that divides the absolute error by the square root of the initial value. I searched, but didn't find... It's been many years since I studied the subject.Vellum
@Vellum you couldn't find it because I didn't base it on any existing techniques. Leaving it unadjusted was wrong, and dividing by the initial value was wrong too. So I went for a compromise and divided by the square root. Some of it was intuition and trial-and-error, and could probably be improved by a rigorous analysis.Rolon
Allowing for a custom desired sum other than 100 and custom floating points precision would make it a killer. Also, I think Nikolay's comment should be considered, 0 should always remain 0 as it most often represents an irrevocable state in data.Intercut
is there a reason that we shouldn't consider the absolute value of error_gen(percents[i], rounded[i] + 1) - error_gen(percents[i], rounded[i]) rather than the raw difference? wouldn't a larger magnitude negative number indicate a larger relative error than a smaller magnitude positive number, and thus should be sorted first?Undershrub
@Undershrub it's been a while since I did this, but I think the way the values were rounded the difference would always be positive. I'd need to study it closer to be sure.Rolon
10

I wrote a C# rounding helper; the algorithm is the same as in Varun Vohra's answer. Hope it helps.

public static List<decimal> GetPerfectRounding(List<decimal> original,
    decimal forceSum, int decimals)
{
    var rounded = original.Select(x => Math.Round(x, decimals)).ToList();
    Debug.Assert(Math.Round(forceSum, decimals) == forceSum);
    var delta = forceSum - rounded.Sum();
    if (delta == 0) return rounded;
    var deltaUnit = Convert.ToDecimal(Math.Pow(0.1, decimals)) * Math.Sign(delta);

    List<int> applyDeltaSequence; 
    if (delta < 0)
    {
        applyDeltaSequence = original
            .Zip(Enumerable.Range(0, int.MaxValue), (x, index) => new { x, index })
            .OrderBy(a => original[a.index] - rounded[a.index])
            .ThenByDescending(a => a.index)
            .Select(a => a.index).ToList();
    }
    else
    {
        applyDeltaSequence = original
            .Zip(Enumerable.Range(0, int.MaxValue), (x, index) => new { x, index })
            .OrderByDescending(a => original[a.index] - rounded[a.index])
            .Select(a => a.index).ToList();
    }

    Enumerable.Repeat(applyDeltaSequence, int.MaxValue)
        .SelectMany(x => x)
        .Take(Convert.ToInt32(delta/deltaUnit))
        .ForEach(index => rounded[index] += deltaUnit);

    return rounded;
}

It passes the following unit tests:

[TestMethod]
public void TestPerfectRounding()
{
    CollectionAssert.AreEqual(Utils.GetPerfectRounding(
        new List<decimal> {3.333m, 3.334m, 3.333m}, 10, 2),
        new List<decimal> {3.33m, 3.34m, 3.33m});

    CollectionAssert.AreEqual(Utils.GetPerfectRounding(
        new List<decimal> {3.33m, 3.34m, 3.33m}, 10, 1),
        new List<decimal> {3.3m, 3.4m, 3.3m});

    CollectionAssert.AreEqual(Utils.GetPerfectRounding(
        new List<decimal> {3.333m, 3.334m, 3.333m}, 10, 1),
        new List<decimal> {3.3m, 3.4m, 3.3m});


    CollectionAssert.AreEqual(Utils.GetPerfectRounding(
        new List<decimal> { 13.626332m, 47.989636m, 9.596008m, 28.788024m }, 100, 0),
        new List<decimal> {14, 48, 9, 29});
    CollectionAssert.AreEqual(Utils.GetPerfectRounding(
        new List<decimal> { 16.666m, 16.666m, 16.666m, 16.666m, 16.666m, 16.666m }, 100, 0),
        new List<decimal> { 17, 17, 17, 17, 16, 16 });
    CollectionAssert.AreEqual(Utils.GetPerfectRounding(
        new List<decimal> { 33.333m, 33.333m, 33.333m }, 100, 0),
        new List<decimal> { 34, 33, 33 });
    CollectionAssert.AreEqual(Utils.GetPerfectRounding(
        new List<decimal> { 33.3m, 33.3m, 33.3m, 0.1m }, 100, 0),
        new List<decimal> { 34, 33, 33, 0 });
}
Clarify answered 18/1, 2016 at 0:55 Comment(2)
Nice! gave me a ground base to start with.. Enumerable doesn't have ForEach though I believeMccullough
But it fails with some data - it creates a number too large for Convert.ToInt32(delta/deltaUnit) e.g. -80298.70329 -9899.774653 -1219.826237 -12668.67994 -6545.783201 -4406.79133 -8027.827479 -4578.333489 -242.7060883 -1271.779903 -635.8899513 -768.5692796 -1239.937831 -627.3605659 -566.8920777 66250 130000 -970.8243532 -62279.32034Borst
9

DO NOT sum the rounded numbers. You're going to have inaccurate results. The total could be off significantly depending on the number of terms and the distribution of fractional parts.

Display the rounded numbers but sum the actual values. Depending on how you're presenting the numbers, the actual way to do that would vary. That way you get

 14
 48
 10
 29
 __
100

Whichever way you go, you're going to have a discrepancy. There's no way in your example to show numbers that add up to 100 without "rounding" one value the wrong way (the least error would be changing 9.596 to 9).

EDIT

You need to choose between one of the following:

  1. Accuracy of the items
  2. Accuracy of the sum (if you're summing rounded values)
  3. Consistency between the rounded items and the rounded sum

Most of the time when dealing with percentages #3 is the best option because it's more obvious when the total equals 101% than when the individual items don't total to 100, and you keep the individual items accurate. "Rounding" 9.596 to 9 is inaccurate in my opinion.

To explain this I sometimes add a footnote that explains that the individual values are rounded and may not total 100% - anyone that understands rounding should be able to understand that explanation.
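In code, that separation is just a matter of formatting each value for display while totalling the unrounded data; a small sketch:

```javascript
const values = [13.626332, 47.989636, 9.596008, 28.788024];

// Round each value only for display...
const display = values.map(v => Math.round(v)); // [14, 48, 10, 29]

// ...but compute the total from the actual data, rounding it once at the end.
const total = Math.round(values.reduce((a, b) => a + b, 0)); // 100

console.log(display.join('\n') + '\n__\n' + total);
```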

Josiejosler answered 20/11, 2012 at 22:53 Comment(4)
That's not very helpful since the printed values won't add up to 100. The purpose of the question was to prevent users from thinking the values are incorrect, which in this case most people would do when looking at and comparing against the total.Chacma
@VarunVohra read my edit, you CAN'T display your numbers such that they add up to 100 without "rounding" one by more than 0.5.Josiejosler
@DStanley actually, barring a set where all numbers are shy of 0.5, you can. Check my answer - LRM does exactly that.Chacma
@VarunVohra In the original example LRM will yield 14, 48, 9, and 29 which will "round" 9.596 to 9. If we're allocating based on whole numbers LRM will be the most accurate, but it's still changing one result by more than a half-unit.Josiejosler
7

You could try keeping track of your error due to rounding, and then rounding against the grain if the accumulated error is greater than the fractional portion of the current number.

13.62 -> 14 (+.38)
47.98 -> 48 (+.02 (+.40 total))
 9.59 -> 10 (+.41 (+.81 total))
28.78 -> 28 (round down because .81 > .78)
------------
        100

Not sure if this would work in general, but it seems to work similarly if the order is reversed:

28.78 -> 29 (+.22)
 9.59 ->  9 (-.37; rounded down because .59 > .22)
47.98 -> 48 (-.35)
13.62 -> 14 (+.03)
------------
        100

I'm sure there are edge cases where this might break down, but any approach is going to be at least somewhat arbitrary since you're basically modifying your input data.
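A closely related "carry the remainder" formulation (the accounting technique mentioned in the comments) can be sketched as follows; note it may make different individual decisions than the walkthrough above while still totalling 100 (`carryRound` is a name I've chosen):

```javascript
// Fold the accumulated rounding error back into each value before rounding it.
function carryRound(values) {
  let carry = 0; // net error introduced by rounding so far
  return values.map(v => {
    const out = Math.round(v + carry);
    carry += v - out; // what this row lost (or gained) by rounding
    return out;
  });
}

carryRound([13.626332, 47.989636, 9.596008, 28.788024]);
// => [14, 48, 9, 29]
```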

Acclamation answered 20/11, 2012 at 22:50 Comment(2)
Accountants and bankers have been using a similar technique for hundreds of years. "Carry the remainder" from one row to the next. Start with 1/2 of one cent in the "carry." Add the "carry" to the first value, and truncate. Now the amount you lost by truncating, put that in the "carry." Do this all the way down, and the rounded numbers will add up to the desired total exactly every time.Nubianubian
Carolyn Kay suggested this implementation in Access VB 2007:
'Round refund dollars using the "carry the remainder" method
ref1 = rsQry![Refund Paid $$$] * rsQry![Property Value] / propValTot
ref2 = ref1 + ref5 'Add the carried remainder, zero to start
ref3 = ref2 * 100  'Multiply by 100 into an integer number
ref4 = ref3 / 100  'Divide by 100 into a decimal number
rsTbl![Refund Paid $$$] = ref4 'Put the "remainder" rounded number in the table
ref5 = ref2 - ref4 'Carry the new remainder
Nubianubian
3

I'm not sure what level of accuracy you need, but what I would do is simply add 1 to the first n numbers, n being the ceiling of the total sum of the decimals. In this case that is 3, so I would add 1 to the first 3 items and floor the rest. Of course this is not super accurate, and some numbers may be rounded up or down when they shouldn't be, but it works okay and will always result in 100%.

So [ 13.626332, 47.989636, 9.596008, 28.788024 ] would be [14, 48, 10, 28] because Math.ceil(.626332+.989636+.596008+.788024) == 3

function evenRound( arr ) {
  var decimal = -~arr.map(function( a ){ return a % 1 })
    .reduce(function( a,b ){ return a + b }); // Ceil of total sum of decimals
  for ( var i = 0; i < decimal; ++i ) {
    arr[ i ] = ++arr[ i ]; // compensate error by adding 1 to the first n items
  }
  return arr.map(function( a ){ return ~~a }); // floor all other numbers
}

var nums = evenRound( [ 13.626332, 47.989636, 9.596008, 28.788024 ] );
var total = nums.reduce(function( a,b ){ return a + b }); //=> 100

You can always inform users that the numbers are rounded and may not be super-accurate...

Superdominant answered 20/11, 2012 at 23:38 Comment(0)
3

Note: the selected answer changes the array order, which may not be desired. Here I provide different variations that achieve the same result while keeping the array in order.

Discussion

Given [98.88, .56, .56], how do you want to round it? You have four options:

1- Round things up and subtract what was added from the rest of the numbers, so the result becomes [98, 1, 1].

This could be a good answer, but what if we have [97.5, .5, .5, .5, .5, .5]? Then you need to round it up to [95, 1, 1, 1, 1, 1].

Do you see how it goes? If you add more zero-like numbers, you lose more value from the rest of your numbers. This can be very troublesome when you have a big array of zero-like numbers like [40, .5, .5, ..., .5]: when you round this up, you could end up with an array of ones: [1, 1, ..., 1].

So rounding up isn't a good option.

2- Round the numbers down, so [98.88, .56, .56] becomes [98, 0, 0]; now you are 2 short of 100. Ignore anything that is already 0, then add the difference to the biggest numbers, so bigger numbers get more.

3- Same as the previous: round the numbers down, but sort descending by the decimal parts and distribute the difference by decimal, so the biggest decimals get the difference.

4- Round up, but carry what you added into the next number; like a wave, the surplus is pushed toward the end of your array. So [98.88, .56, .56] becomes [99, 0, 1].

None of these is ideal, so be mindful that your data is going to lose its shape.

Here is code for cases 2 and 3 (case No. 1 is not practical when you have a lot of zero-like numbers). It's modern JS and doesn't need any library.

2nd case

const v1 = [13.626332, 47.989636, 9.596008, 28.788024];// => [ 14, 48, 9, 29 ]
const v2 = [16.666, 16.666, 16.666, 16.666, 16.666, 16.666] // => [ 17, 17, 17, 17, 16, 16 ] 
const v3 = [33.333, 33.333, 33.333] // => [ 34, 33, 33 ]
const v4 = [33.3, 33.3, 33.3, 0.1] // => [ 34, 33, 33, 0 ]
const v5 = [98.88, .56, .56] // =>[ 100, 0, 0 ]
const v6 = [97.5, .5, .5, .5, .5, .5] // => [ 100, 0, 0, 0, 0, 0 ]

const normalizePercentageByNumber = (input) => {
    const rounded = input.map(x => Math.floor(x));
    const afterRoundSum = rounded.reduce((pre, curr) => pre + curr, 0);
    const countMutableItems = rounded.filter(x => x >=1).length;
    const errorRate = 100 - afterRoundSum;
    
    const deductPortion = Math.ceil(errorRate / countMutableItems);
    
    const biggest = [...rounded].sort((a, b) => b - a).slice(0, Math.min(Math.abs(errorRate), countMutableItems));
    const result = rounded.map(x => {
        const indexOfX = biggest.indexOf(x);
        if (indexOfX >= 0) {
            x += deductPortion;
            biggest.splice(indexOfX, 1);
            return x;
        }
        return x;
    });
    return result;
}

3rd case

const normalizePercentageByDecimal = (input) => {

    const rounded= input.map((x, i) => ({number: Math.floor(x), decimal: x%1, index: i }));

    const decimalSorted= [...rounded].sort((a,b)=> b.decimal-a.decimal);
    
    const sum = rounded.reduce((pre, curr)=> pre + curr.number, 0) ;
    const error= 100-sum;
    
    for (let i = 0; i < error; i++) {
        const element = decimalSorted[i];
        element.number++;
    }

    const result= [...decimalSorted].sort((a,b)=> a.index-b.index);
    
    return result.map(x=> x.number);
}

4th case

You just need to calculate how much extra was added to or deducted from your numbers on each round-up, and add or subtract it again on the next item.

const v1 = [13.626332, 47.989636, 9.596008, 28.788024];// => [14, 48, 10, 28 ]
const v2 = [16.666, 16.666, 16.666, 16.666, 16.666, 16.666] // => [17, 16, 17, 16, 17, 17]
const v3 = [33.333, 33.333, 33.333] // => [33, 34, 33]
const v4 = [33.3, 33.3, 33.3, 0.1] // => [33, 34, 33, 0]

const normalizePercentageByWave= v4.reduce((pre, curr, i, arr) => {

    let number = Math.round(curr + pre.decimal);
    let total = pre.total + number;

    const decimal = curr - number;

    if (i == arr.length - 1 && total < 100) {
        const diff = 100 - total;
        total += diff;
        number += diff;
    }

    return { total, numbers: [...pre.numbers, number], decimal };

}, { total: 0, numbers: [], decimal: 0 });
Schaal answered 16/2, 2021 at 9:42 Comment(0)
H
2

I think the following will achieve what you are after

function func( orig, target ) {

    var i = orig.length, j = 0, total = 0, change, newVals = [], next, factor1, factor2, len = orig.length, marginOfErrors = [];

    // map original values to new array
    while( i-- ) {
        total += newVals[i] = Math.round( orig[i] );
    }

    change = total < target ? 1 : -1;

    while( total !== target ) {

        // Iterate through values and select the one that once changed will introduce
        // the least margin of error in terms of itself. e.g. Incrementing 10 by 1
        // would mean an error of 10% in relation to the value itself.
        for( i = 0; i < len; i++ ) {

            next = i === len - 1 ? 0 : i + 1;

            factor2 = errorFactor( orig[next], newVals[next] + change );
            factor1 = errorFactor( orig[i], newVals[i] + change );

            if(  factor1 > factor2 ) {
                j = next; 
            }
        }

        newVals[j] += change;
        total += change;
    }


    for( i = 0; i < len; i++ ) { marginOfErrors[i] = newVals[i] && Math.abs( orig[i] - newVals[i] ) / orig[i]; }

    // Math.round() causes some problems as it is difficult to know at the beginning
    // whether numbers should have been rounded up or down to reduce total margin of error. 
    // This section of code increments and decrements values by 1 to find the number
    // combination with least margin of error.
    for( i = 0; i < len; i++ ) {
        for( j = 0; j < len; j++ ) {
            if( j === i ) continue;

            var roundUpFactor = errorFactor( orig[i], newVals[i] + 1)  + errorFactor( orig[j], newVals[j] - 1 );
            var roundDownFactor = errorFactor( orig[i], newVals[i] - 1) + errorFactor( orig[j], newVals[j] + 1 );
            var sumMargin = marginOfErrors[i] + marginOfErrors[j];

            if( roundUpFactor < sumMargin) { 
                newVals[i] = newVals[i] + 1;
                newVals[j] = newVals[j] - 1;
                marginOfErrors[i] = newVals[i] && Math.abs( orig[i] - newVals[i] ) / orig[i];
                marginOfErrors[j] = newVals[j] && Math.abs( orig[j] - newVals[j] ) / orig[j];
            }

            if( roundDownFactor < sumMargin ) { 
                newVals[i] = newVals[i] - 1;
                newVals[j] = newVals[j] + 1;
                marginOfErrors[i] = newVals[i] && Math.abs( orig[i] - newVals[i] ) / orig[i];
                marginOfErrors[j] = newVals[j] && Math.abs( orig[j] - newVals[j] ) / orig[j];
            }

        }
    }

    function errorFactor( oldNum, newNum ) {
        return Math.abs( oldNum - newNum ) / oldNum;
    }

    return newVals;
}


func([16.666, 16.666, 16.666, 16.666, 16.666, 16.666], 100); // => [16, 16, 17, 17, 17, 17]
func([33.333, 33.333, 33.333], 100); // => [34, 33, 33]
func([33.3, 33.3, 33.3, 0.1], 100); // => [34, 33, 33, 0] 
func([13.25, 47.25, 11.25, 28.25], 100 ); // => [13, 48, 11, 28]
func( [25.5, 25.5, 25.5, 23.5], 100 ); // => [25, 25, 26, 24]

One last thing: I ran the function using the numbers originally given in the question to compare with the desired output

func([13.626332, 47.989636, 9.596008, 28.788024], 100); // => [48, 29, 13, 10]

This was different to what the question wanted => [ 48, 29, 14, 9]. I couldn't understand this until I looked at the total margin of error

-------------------------------------------------
| original  | question | % diff | mine | % diff |
-------------------------------------------------
| 13.626332 | 14       | 2.74%  | 13   | 4.5%   |
| 47.989636 | 48       | 0.02%  | 48   | 0.02%  |
| 9.596008  | 9        | 6.2%   | 10   | 4.2%   |
| 28.788024 | 29       | 0.7%   | 29   | 0.7%   |
-------------------------------------------------
| Totals    | 100      | 9.66%  | 100  | 9.43%  |
-------------------------------------------------

Essentially, the result from my function actually introduces the least amount of error.

Fiddle here

Hump answered 20/11, 2012 at 22:39 Comment(3)
that's pretty much what I had in mind, with the difference that the error should be measured relative to the value (rounding 9.8 to 10 is a bigger error than rounding from 19.8 to 20). This could be easily done by reflecting it in the sort callback, though.Shea
this is wrong for [33.33, 33.33, 33.33, 0.1], it returns [1, 33, 33, 33] rather than the more accurate [34, 33, 33, 0]Linoleum
not yet, for [16.666, 16.666, 16.666, 16.666, 16.666, 16.666] it returns [15, 17, 17, 17, 17, 17] rather than [16, 16, 17, 17, 17, 17] - see my answerLinoleum
C
2

I once wrote an unround tool, to find the minimal perturbation to a set of numbers to match a goal. It was a different problem, but one could in theory use a similar idea here. In this case, we have a set of choices.

Thus for the first element, we can either round it up to 14, or down to 13. The cost (in a binary integer programming sense) of doing so is less for the round up than the round down, because the round down requires we move that value a larger distance. Similarly, we can round each number up or down, so there are a total of 16 choices we must choose from.

  13.626332
  47.989636
   9.596008
+ 28.788024
-----------
 100.000000

I'd normally solve the general problem in MATLAB, here using bintprog, a binary integer programming tool, but there are only a few choices to be tested, so it is easy enough with simple loops to test out each of the 16 alternatives. For example, suppose we were to round this set as:

 Original      Rounded   Absolute error
   13.626           13          0.62633
    47.99           48          0.01036
    9.596           10          0.40399
 + 28.788           29          0.21198
---------------------------------------
  100.000          100          1.25266

The total absolute error made is 1.25266. It can be reduced slightly by the following alternative rounding:

 Original      Rounded   Absolute error
   13.626           14          0.37367
    47.99           48          0.01036
    9.596            9          0.59601
 + 28.788           29          0.21198
---------------------------------------
  100.000          100          1.19202

In fact, this will be the optimal solution in terms of the absolute error. Of course, if there were 20 terms, the search space will be of size 2^20 = 1048576. For 30 or 40 terms, that space will be of significant size. In that case, you would need to use a tool that can efficiently search the space, perhaps using a branch and bound scheme.
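For small sets, the exhaustive search described above is easy to sketch in JavaScript (the function name and structure here are my own, not the answer's MATLAB/bintprog code): enumerate all 2^n floor/ceil combinations, keep only those summing to 100, and pick the one with the smallest total absolute error.

```javascript
// Brute-force search over every floor/ceil combination; only viable
// for small n, since the space has 2^n candidates.
function bestRounding(values) {
  const n = values.length;
  let best = null, bestErr = Infinity;
  for (let mask = 0; mask < (1 << n); mask++) {
    const rounded = values.map((v, i) =>
      (mask >> i) & 1 ? Math.ceil(v) : Math.floor(v));
    if (rounded.reduce((a, b) => a + b, 0) !== 100) continue; // must total 100
    const err = values.reduce((a, v, i) => a + Math.abs(v - rounded[i]), 0);
    if (err < bestErr) { bestErr = err; best = rounded; }
  }
  return best;
}

console.log(bestRounding([13.626332, 47.989636, 9.596008, 28.788024]));
// → [14, 48, 9, 29], matching the optimal table above
```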

Ciao answered 21/11, 2012 at 0:1 Comment(1)
Just for future reference: the "largest remainder" algorithm must minimize the total absolute error according to your metric (See @varunvohra's answer). The proof is simple: suppose it does not minimize the error. Then there must be some set of values which it rounds down which should be rounded up, and vice versa (the two sets are the same size). But every value it rounds down is further from the next integer than any value it rounds up (and v.v.) so the new error amount must be greater. QED. However, it doesn't work for all error metrics; other algorithms are needed.Rf
P
2

If you have just two values, you can simply use Math.round(). The only problematic pairs are X.5 values (e.g. 37.5 and 62.5): both will be rounded up, and you will end up with 101%, as you can try here:

https://jsfiddle.net/f8np1t0k/2/

Since you always need to show 100%, you simply subtract one percentage point from one of them, for example from the first one:

const correctedARounded = Number.isInteger(a - 0.5) ? aRounded - 1 : aRounded;

Or you can favor the option with more % votes.

The 1% difference occurs in 114 out of 10,000 cases when rounding pairs of values between 1 and 100.
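A minimal demo of the .5 pair problem and the correction described above (the variable names follow the snippet; this is my illustration, not part of the original answer):

```javascript
// Both halves of an X.5 pair round up, giving 101 in total
const a = 37.5, b = 62.5;
const aRounded = Math.round(a), bRounded = Math.round(b);
console.log(aRounded + bRounded); // 101

// Knock one back down when the value ends in exactly .5
const correctedARounded = Number.isInteger(a - 0.5) ? aRounded - 1 : aRounded;
console.log(correctedARounded + bRounded); // 100
```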

Panteutonism answered 26/9, 2021 at 14:26 Comment(0)
S
2

My JS implementation for the well-voted answer by Varun Vohra

const set1 = [13.626332, 47.989636, 9.596008, 28.788024];
// const set2 = [24.25, 23.25, 27.25, 25.25];

const values = set1;

console.log('Total: ', values.reduce((accum, each) => accum + each));
console.log('Incorrectly Rounded: ', 
  values.reduce((accum, each) => accum + Math.round(each), 0));

const adjustValues = (values) => {
  // 1. Separate integer and decimal part
  // 2. Store both in a new array of objects sorted by decimal part descending
  // 3. Add in original position to "put back" at the end
  const flooredAndSortedByDecimal = values.map((value, position) => (
    {
        floored: Math.floor(value),
        decimal: value - Math.floor(value),
        position
    }
  )).sort(({decimal}, {decimal: otherDecimal}) => otherDecimal - decimal);

  const roundedTotal = values.reduce((total, value) => total + Math.floor(value), 0);
  let availableForDistribution = 100 - roundedTotal;

  // Add 1 to each value from what's available
  const adjustedValues = flooredAndSortedByDecimal.map(value => {
    const { floored, ...rest } = value;
    let finalPercentage = floored;
    if(availableForDistribution > 0){
        finalPercentage = floored + 1;
        availableForDistribution--;
    }

    return {
        finalPercentage,
        ...rest
    }
  });

  // Put back and return the new values
  return adjustedValues
    .sort(({position}, {position: otherPosition}) => position - otherPosition)
    .map(({finalPercentage}) => finalPercentage);
}

const finalPercentages = adjustValues(values);
console.log({finalPercentages})

// { finalPercentages: [14, 48, 9, 29] }
Sweep answered 1/10, 2021 at 18:42 Comment(4)
If I use the values [22, 25, 14, 36, 2, 2] I get 101... not sure if there is another workaround for that?Eduardo
This algorithm is for decimal that get rounded out the wrong way. 2 + 3 will NEVER be 4. Similarly, 50 + 60 will NEVER be 100. Can't do away with math :)Sweep
Yes, thanks, It was a bit of a late night comment from my side! Essentially what I am trying to achieve is to round out a series of values whilst retaining the integrity of the inherent value of the decimal place. This is for equity percentages of a company and a 0.4 percent could make quite a difference . So for example, if I have values [46.33, 22.13, 3.10, 11.61, 12.08, 4.76], using your code these round out to [46, 22, 3, 12, 12, 5]. This is all fine but the difference between 12.08 and 11.61 is 0.47 so in my case to have those numbers both rounded to 12 is not ideal.Eduardo
This is so bloated and ugly 🤢Vinyl
H
2

Or something like this for brevity, where you just accumulate the error...

const p = [13.626332, 47.989636, 9.596008, 28.788024];
const round = (a, e = 0) => a.map(x => { const r = Math.round(x + e); e += x - r; return r; });
console.log(round(p));

Result: [14, 48, 9, 29]

Housemother answered 4/11, 2021 at 2:41 Comment(0)
P
1

If you are rounding, there is no way to keep the total exactly the same in every case.

You can take the decimal parts of the N percentages you have (in the example you gave, N = 4).

Add up the decimal parts. In your example, the fractional parts total 3.

Ceil the 3 numbers with the highest fractions and floor the rest.
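The steps above are the largest remainder method; here is a minimal JavaScript sketch (the function name is mine):

```javascript
// Floor everything, then add 1 to the values with the largest
// fractional parts until the total reaches 100.
function largestRemainder(percentages) {
  const floored = percentages.map(p => Math.floor(p));
  const shortfall = 100 - floored.reduce((a, b) => a + b, 0);
  // indexes sorted by fractional part, largest first
  const order = percentages
    .map((p, i) => [p - Math.floor(p), i])
    .sort((a, b) => b[0] - a[0])
    .map(([, i]) => i);
  // ceil (i.e. bump the floor of) the `shortfall` largest fractions
  for (const i of order.slice(0, shortfall)) floored[i] += 1;
  return floored;
}

console.log(largestRemainder([13.626332, 47.989636, 9.596008, 28.788024]));
// → [14, 48, 9, 29]
```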

(Sorry for the edits)

Peres answered 20/11, 2012 at 22:44 Comment(3)
While that may provide numbers that add to 100, you may end up turning 3.9 into 3 and 25.1 into 26.Exposure
no. 3.9 will be 4 and 25.1 will be 25. i said to ceil the 3 numbers with highest fractions not the highest value.Peres
if there are too many fractions ending in .9 (say nine values of 9.9% and one value of 10.9%), then one value will end up as 9%, eight as 10%, and one as 11%.Peres
B
1

If you really must round them, there are already very good suggestions here (largest remainder, least relative error, and so on).

There is also already one good reason not to round (you'll get at least one number that "looks better" but is "wrong"), and how to solve that (warn your readers) and that is what I do.

Let me add on the "wrong" number part.

Suppose you have three events/entities/... with some percentages that you approximate as:

DAY 1
who |  real | app
----|-------|------
  A | 33.34 |  34
  B | 33.33 |  33
  C | 33.33 |  33

Later on the values change slightly, to

DAY 2
who |  real | app
----|-------|------
  A | 33.35 |  33
  B | 33.36 |  34
  C | 33.29 |  33

The first table has the already mentioned problem of having a "wrong" number: 33.34 is closer to 33 than to 34.

But now you have a bigger error. Comparing day 2 to day 1, the real percentage value for A increased, by 0.01%, but the approximation shows a decrease by 1%.

That is a qualitative error, probably much worse than the initial quantitative error.

One could devise an approximation for the whole set, but you may have to publish data on day one, and thus you won't know about day two yet. So, unless you really, really must approximate, it is probably better not to.

Bolshevism answered 28/12, 2016 at 13:13 Comment(1)
anyone knowing how to make better tables please either edit or tell me how / whereBolshevism
D
1

Here's a simpler Python implementation of @varun-vohra answer:

def apportion_pcts(pcts, total):
    proportions = [total * (pct / 100) for pct in pcts]
    apportions = [math.floor(p) for p in proportions]
    remainder = total - sum(apportions)
    remainders = [(i, p - math.floor(p)) for (i, p) in enumerate(proportions)]
    remainders.sort(key=operator.itemgetter(1), reverse=True)
    for (i, _) in itertools.cycle(remainders):
        if remainder == 0:
            break
        else:
            apportions[i] += 1
            remainder -= 1
    return apportions

You need math, itertools, operator.
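The same apportionment idea can be sketched in JavaScript, the question's language (the function name is mine; the wrap-around index mirrors the itertools.cycle above):

```javascript
// Scale percentages to an arbitrary integer total, floor, then hand
// out the leftover units to the largest fractional parts, cycling if
// the leftover exceeds the number of items.
function apportion(pcts, total) {
  const exact = pcts.map(p => total * (p / 100));
  const out = exact.map(x => Math.floor(x));
  let leftover = total - out.reduce((a, b) => a + b, 0);
  const order = exact
    .map((x, i) => [x - Math.floor(x), i]) // [fraction, index]
    .sort((a, b) => b[0] - a[0]);
  for (let k = 0; leftover > 0; k = (k + 1) % order.length) {
    out[order[k][1]] += 1;
    leftover -= 1;
  }
  return out;
}

console.log(apportion([13.626332, 47.989636, 9.596008, 28.788024], 100));
// → [14, 48, 9, 29]
```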

Disembowel answered 12/4, 2018 at 5:56 Comment(0)
G
0

Check whether this is valid or not; as far as my test cases go, I am able to get this working.

Let's say the number is k:

  1. Sort the percentages in descending order.
  2. Iterate over each percentage in descending order.
  3. Calculate the percentage of k for the first percentage and take Math.ceil of the output.
  4. Next, k = k - 1.
  5. Iterate until every percentage is consumed.
Gush answered 31/7, 2017 at 6:50 Comment(0)
A
0

I have implemented the method from Varun Vohra's answer here for both lists and dicts.

import math
import numbers
import operator
import itertools


def round_list_percentages(number_list):
    """
    Takes a list where all values are numbers that add up to 100,
    and rounds them off to integers while still retaining a sum of 100.

    A total value sum that rounds to 100.00 with two decimals is acceptable.
    This ensures that all input where the values are calculated with [fraction]/[total]
    and the sum of all fractions equal the total, should pass.
    """
    # Check input
    if not all(isinstance(i, numbers.Number) for i in number_list):
        raise ValueError('All values of the list must be a number')

    # Generate a key for each value
    key_generator = itertools.count()
    value_dict = {next(key_generator): value for value in number_list}
    return round_dictionary_percentages(value_dict).values()


def round_dictionary_percentages(dictionary):
    """
    Takes a dictionary where all values are numbers that add up to 100,
    and rounds them off to integers while still retaining a sum of 100.

    A total value sum that rounds to 100.00 with two decimals is acceptable.
    This ensures that all input where the values are calculated with [fraction]/[total]
    and the sum of all fractions equal the total, should pass.
    """
    # Check input
    # Only allow numbers
    if not all(isinstance(i, numbers.Number) for i in dictionary.values()):
        raise ValueError('All values of the dictionary must be a number')
    # Make sure the sum is close enough to 100
    # Round value_sum to 2 decimals to avoid floating point representation errors
    value_sum = round(sum(dictionary.values()), 2)
    if not value_sum == 100:
        raise ValueError('The sum of the values must be 100')

    # Initial floored results
    # Does not add up to 100, so we need to add something
    result = {key: int(math.floor(value)) for key, value in dictionary.items()}

    # Remainders for each key
    result_remainders = {key: value % 1 for key, value in dictionary.items()}
    # Keys sorted by remainder (biggest first)
    sorted_keys = [key for key, value in sorted(result_remainders.items(), key=operator.itemgetter(1), reverse=True)]

    # Otherwise add missing values up to 100
    # One cycle is enough, since flooring removes a max value of < 1 per item,
    # i.e. this loop should always break before going through the whole list
    for key in sorted_keys:
        if sum(result.values()) == 100:
            break
        result[key] += 1

    # Return
    return result
Assistance answered 13/9, 2017 at 22:34 Comment(0)
D
0

For those having the percentages in a pandas Series, here is my implementation of the largest remainder method (as in Varun Vohra's answer), where you can even select the number of decimals to round to.

import numpy as np

def largestRemainderMethod(pd_series, decimals=1):

    # np.floor + astype(int); note np.int was removed in NumPy >= 1.24
    floor_series = np.floor(10**decimals * pd_series).astype(int)
    diff = 100 * (10**decimals) - int(floor_series.sum())
    series_decimals = pd_series - floor_series / (10**decimals)
    series_sorted_by_decimals = series_decimals.sort_values(ascending=False)

    for i in range(0, len(series_sorted_by_decimals)):
        if i < diff:
            series_sorted_by_decimals.iloc[[i]] = 1
        else:
            series_sorted_by_decimals.iloc[[i]] = 0

    out_series = ((floor_series + series_sorted_by_decimals) / (10**decimals)).sort_values(ascending=False)

    return out_series
Delagarza answered 14/1, 2020 at 16:16 Comment(0)
G
0

Here's a Ruby gem that implements the Largest Remainder method: https://github.com/jethroo/lare_round

To use:

a =  Array.new(3){ BigDecimal('0.3334') }
# => [#<BigDecimal:887b6c8,'0.3334E0',9(18)>, #<BigDecimal:887b600,'0.3334E0',9(18)>, #<BigDecimal:887b4c0,'0.3334E0',9(18)>]
a = LareRound.round(a,2)
# => [#<BigDecimal:8867330,'0.34E0',9(36)>, #<BigDecimal:8867290,'0.33E0',9(36)>, #<BigDecimal:88671f0,'0.33E0',9(36)>]
a.reduce(:+).to_f
# => 1.0
Ghiberti answered 31/12, 2020 at 3:24 Comment(0)
V
0

I wrote a function in Javascript that takes an array of percentages and outputs an array with rounded percentages using the Largest Remainder Method. It doesn't use any libraries.

Input: [21.6, 46.7, 31, 0.5, 0.2]

Output: [22, 47, 31, 0, 0]

const values = [21.6, 46.7, 31, 0.5, 0.2];
console.log(roundPercentages(values));

function roundPercentages(values) {
    const flooredValues = values.map(e => Math.floor(e));
    const totalRemainder = 100 - flooredValues.reduce((a, b) => a + b);

    // Pair each remainder with its index, so that equal remainders
    // are not all mapped back to the same position
    values
        .map((e, index) => ({ remainder: e - Math.floor(e), index }))
        // Sort from highest to lowest remainder
        .sort((a, b) => b.remainder - a.remainder)
        // Get the n largest remainders, where n = totalRemainder
        .slice(0, totalRemainder)
        // Add 1 to the floored percentages with the highest remainders
        .forEach(({ index }) => flooredValues[index] += 1);

    return flooredValues;
}
Vinyl answered 28/11, 2022 at 14:34 Comment(0)
D
0

Here is my version and took me 1 hour to code. Let me know if you see 99 or 101 :)

var p = [per1, per2, per3, per4]; // your four percentages
var pf = function percFix(p) {
    if (p.reduce((a, b) => a + Math.round(b), 0) !== 100) {
        var e = [];
        for (var i = 0; i < p.length; i++) {
            // distance between each value and its rounded result, as a number
            e[i] = (p[i] - Math.trunc(p[i]) >= 0.5)
                ? Number((Math.ceil(p[i]) - p[i]).toFixed(2))
                : Number((p[i] - Math.floor(p[i])).toFixed(2));
        }
        var c = 0;
        var et = e.reduce((a, b) => a + b, 0);
        for (var i = 0; i < e.length; i++) {
            if (e[i] === Math.max(...e) && c === 0) {
                // flip the rounding direction of the worst offender
                p[i] = (et < 0.5) ? Math.floor(p[i]) : Math.ceil(p[i]);
                c++;
            } else {
                p[i] = Math.round(p[i]);
            }
        }
    } else {
        p = p.map((x) => Math.round(x));
    }
    return p;
};
p = pf(p);
console.log("Total %: " + p.reduce((a, b) => a + b, 0));
Disenthral answered 11/8, 2023 at 15:49 Comment(0)
U
0

I am building a flutter app where I needed to do something like this, so I implemented @mark-ransom's answer in dart:

  List<int> _adjustToTotal(List<double> values, int total) {
    final n = values.length;
    final rounded = values.map((v) => v.floor()).toList();
    final remainder = total - rounded.sum;
    
    final errorsToIndexesList = [
      for (int i = 0; i < n; i++)
        (index: i, error: _errorDiff(values[i], rounded[i]))
    ].sorted((a, b) =>
        a.error < b.error ? -1 : a.error > b.error ? 1
            : a.index - b.index); // for equivalent errors, choose the lowest index. this is arbitrary and can be modified to suit your needs
    final indexesToAdjust = errorsToIndexesList.map((error) => error.index).take(remainder).toSet();
    return rounded
        .mapIndexed((i, r) => r + (indexesToAdjust.contains(i) ? 1 : 0))
        .toList();
  }

  double _errorDiff(double actual, int rounded) =>
      _errorGen(actual, rounded + 1) - _errorGen(actual, rounded);

  double _errorGen(double actual, int rounded) {
    final divisor = sqrt(actual < 1.0 ? 1.0 : actual);
    return pow((rounded.toDouble() - actual).abs(), 2) / divisor;
  }

The slight adjustment I've made here allows the total being summed to to be specified (rather than assuming 100%, for example).

Undershrub answered 29/11, 2023 at 16:34 Comment(0)
F
-2

This is a case for banker's rounding, aka 'round half-even'. It is supported by BigDecimal. Its purpose is to ensure that rounding balances out, i.e. doesn't favour either the bank or the customer.
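JavaScript has no built-in half-even rounding, so here is a minimal sketch (the helper name is mine). It only changes results at exact .5 ties, so it leaves the question's example at 101%:

```javascript
// Round-half-even: ties go to the even neighbor instead of always up.
function roundHalfEven(x) {
  const floor = Math.floor(x);
  const diff = x - floor;
  if (diff < 0.5) return floor;
  if (diff > 0.5) return floor + 1;
  return floor % 2 === 0 ? floor : floor + 1; // tie: pick the even neighbor
}

console.log([37.5, 62.5].map(roundHalfEven)); // [38, 62] → sums to 100
console.log([13.626332, 47.989636, 9.596008, 28.788024].map(roundHalfEven)
  .reduce((a, b) => a + b, 0)); // still 101: no ties, so nothing changes
```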

Fremd answered 20/11, 2012 at 23:3 Comment(3)
It does NOT ensure that rounding balances out - it just reduces the amount of error by distributing half-rounding between even and odd numbers. There are still scenarios where bankers rounding produces inaccurate results.Josiejosler
@DStanley Agreed. I didn't say otherwise. I stated its purpose. Very carefully.Fremd
Fair enough - I misinterpreted what you were trying to say. In either case I don't think it solves the problem as using bankers rounding will not change the results in the example.Josiejosler
