In the interest of creating cross-platform code, I'd like to develop a simple financial application in JavaScript. The calculations required involve compound interest and relatively long decimal numbers. I'd like to know what mistakes to avoid when using JavaScript to do this type of math—if it is possible at all!
You should probably scale your decimal values by 100 and represent all monetary values as whole cents. This avoids problems with floating-point logic and arithmetic. There is no decimal data type in JavaScript; the only numeric data type is floating-point. Therefore it is generally recommended to handle money as 2550 cents instead of 25.50 dollars.
Consider that in JavaScript:
var result = 1.0 + 2.0; // (result === 3.0) returns true
But:
var result = 0.1 + 0.2; // (result === 0.3) returns false
The expression 0.1 + 0.2 === 0.3 returns false, but fortunately integer arithmetic in floating-point is exact, so decimal representation errors can be avoided by scaling [1].
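As a rough sketch of how this scaling applies to the compound interest in the question (the function name and figures below are illustrative assumptions, not a tested implementation), you can keep the balance in whole cents and round back to an integer after every compounding period:
// Illustrative sketch: the balance stays in whole cents, rounded once per period.
function compoundCents(principalCents, annualRate, years, periodsPerYear) {
  var balance = principalCents;                    // e.g. 100000 = $1,000.00
  var ratePerPeriod = annualRate / periodsPerYear; // the rate itself stays a float
  for (var i = 0; i < years * periodsPerYear; i++) {
    balance = Math.round(balance * (1 + ratePerPeriod)); // back to whole cents
  }
  return balance;
}
// $1,000.00 at 5% compounded monthly for 10 years: roughly 164701 cents
console.log(compoundCents(100000, 0.05, 10, 12));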
Note that while the set of real numbers is infinite, only a finite number of them (18,437,736,874,454,810,627 to be exact) can be represented exactly by the JavaScript floating-point format. The representation of every other number is therefore an approximation of the actual number [2].
[1] Douglas Crockford: JavaScript: The Good Parts, Appendix A - Awful Parts (page 105).
[2] David Flanagan: JavaScript: The Definitive Guide, Fourth Edition, 3.1.3 Floating-Point Literals (page 31).
3000.57 + 0.11 === 3000.68 returns false. – Uphill
Scaling every value by 100 is the solution. Doing it by hand is probably unnecessary, since you can find libraries that do it for you. I recommend moneysafe, which offers a functional API well suited for ES6 applications:
const { in$, $ } = require('moneysafe');
console.log(in$($(10.5) + $(.3))); // 10.8
https://github.com/ericelliott/moneysafe
Works both in Node.js and the browser.
The in$ and $ names are ambiguous to someone who hasn't used the package before. I know it was Eric's choice to name things that way, but I still feel it's enough of a mistake that I'd probably rename them in the import/destructured require statement. – Creeps
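For what it's worth, renaming is a one-liner with destructuring. A sketch based on the same two exports shown above (the new names are just my choice):
// Rename the moneysafe exports at the require site; the names are arbitrary.
const { in$: toDollars, $: fromDollars } = require('moneysafe');
console.log(toDollars(fromDollars(10.5) + fromDollars(0.3))); // 10.8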
Money$afe has not yet been tested in production at scale. Just pointing that out so anyone can consider whether that's appropriate for their use case. – Homorganic
Unfortunately all of the answers so far ignore the fact that not all currencies have 100 sub-units (e.g., the cent is the sub-unit of the US dollar (USD)). Currencies like the Iraqi Dinar (IQD) have 1000 sub-units: an Iraqi Dinar has 1000 fils. The Japanese Yen (JPY) has no sub-units at all. So "multiply by 100 to do integer arithmetic" isn't always the correct answer.
Additionally for monetary calculations you also need to keep track of the currency. You can't add a US Dollar (USD) to an Indian Rupee (INR) (without first converting one to the other).
There are also limits on the largest amount that can be safely represented as an integer in JavaScript's number type (Number.MAX_SAFE_INTEGER is 2^53 - 1).
In monetary calculations you also have to keep in mind that money has finite precision (typically 0-3 decimal places) and that rounding needs to be done in particular ways (e.g., "normal" rounding vs. banker's rounding). The type of rounding to be performed might also vary by jurisdiction/currency.
The article "How to handle money in javascript" has a very good discussion of the relevant points.
In my searches I found the dinero.js library, which addresses many of the issues with monetary calculations. I haven't used it yet in a production system, so I can't give an informed opinion on it.
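If you do roll your own bookkeeping instead, a minimal sketch of the points above might look like this (the currency table and helper names are my own, not taken from any particular library):
// Sketch only: minor units vary by currency, and different currencies must not be mixed.
const MINOR_UNIT_EXPONENT = { USD: 2, IQD: 3, JPY: 0 }; // cents, fils, none

function toMinorUnits(amount, currency) {
  return Math.round(amount * Math.pow(10, MINOR_UNIT_EXPONENT[currency]));
}

function add(a, b) {
  if (a.currency !== b.currency) {
    throw new Error('Cannot add ' + a.currency + ' to ' + b.currency);
  }
  return { currency: a.currency, minorUnits: a.minorUnits + b.minorUnits };
}

const tenDollars = { currency: 'USD', minorUnits: toMinorUnits(10, 'USD') }; // 1000
const fiveRupees = { currency: 'INR', minorUnits: 500 };                     // 5.00 INR

console.log(add(tenDollars, tenDollars)); // { currency: 'USD', minorUnits: 2000 }
// add(tenDollars, fiveRupees);           // throws: Cannot add USD to INR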
There's no such thing as a "precise" financial calculation when only two decimal fraction digits are kept, but that's a more general problem.
In JavaScript, you can scale every value by 100 and use Math.round() every time a fraction can occur.
You could use an object to store the numbers and include the rounding in its prototype's valueOf() method, like this:
var Money = function(amount) {
  this.amount = amount;
};

Money.prototype.valueOf = function() {
  return Math.round(this.amount * 100) / 100;
};

var m = new Money(50.42355446);
var n = new Money(30.342141);

console.log(m.amount + n.amount); // 80.76569546
console.log(m + n);               // 80.76
That way, every time you use a Money object, it will be represented as rounded to two decimals. The unrounded value is still accessible via m.amount.
You can build your own rounding algorithm into Money.prototype.valueOf(), if you like.
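If banker's rounding (round half to even) is what your jurisdiction requires, a replacement valueOf() could look roughly like this sketch of my own (positive amounts only; exact ties only occur when the scaled value is representable):
// Sketch: round half to even ("banker's rounding") instead of Math.round().
Money.prototype.valueOf = function() {
  var scaled = this.amount * 100;
  var floor = Math.floor(scaled);
  var diff = scaled - floor;
  if (diff > 0.5) return (floor + 1) / 100;
  if (diff < 0.5) return floor / 100;
  // Exact tie: round to the even number of cents.
  return (floor % 2 === 0 ? floor : floor + 1) / 100;
};

console.log(+new Money(0.125)); // 0.12 (tie rounds down to the even cent)
console.log(+new Money(0.375)); // 0.38 (tie rounds up to the even cent)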
Money(0.1) means that the JavaScript lexer reads the string "0.1" from the source and converts it to a binary floating-point number, so an unintended rounding has already happened. The problem is about representation (binary vs. decimal), not about precision. – Backstop
Use decimal.js ... It's a very good library that solves a hard part of the problem ... just use it in all your operations.
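A minimal example of what that looks like, using decimal.js's documented constructor plus the plus/times/toFixed methods (passing values as strings so the library, not the float parser, reads them):
const Decimal = require('decimal.js');

// Strings go straight to decimal.js, so no binary rounding happens first.
const subtotal = new Decimal('0.1').plus('0.2'); // exactly 0.3
const total = subtotal.times('3');               // exactly 0.9

console.log(subtotal.toString()); // "0.3"
console.log(total.toFixed(2));    // "0.90"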
Your problem stems from inaccuracy in floating point calculations. If you're just using rounding to solve this you'll have greater error when you're multiplying and dividing.
The solution is linked below; an explanation follows.
You'll need to think about the mathematics behind this to understand it. Some real numbers, such as 1/3, cannot be represented by a finite decimal expansion (0.333333333333333...). In the same way, some decimal numbers cannot be represented exactly in binary; for example, 0.1 cannot be represented in binary with a finite number of digits.
For more detailed description look here: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Take a look at the solution implementation: http://floating-point-gui.de/languages/javascript/
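In the spirit of that guide (this is my own condensed sketch, not the guide's code): do intermediate arithmetic at full precision and round only when you present a result.
// Sketch: round only for display, at a chosen number of decimal places.
function roundTo(value, decimals) {
  var factor = Math.pow(10, decimals);
  return Math.round(value * factor) / factor;
}

var raw = 0.1 + 0.2;          // 0.30000000000000004
console.log(roundTo(raw, 2)); // 0.3
console.log(raw.toFixed(2));  // "0.30" (a string, also fine for display)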
Due to the binary nature of their encoding, some decimal numbers cannot be represented with perfect accuracy. For example:
var money = 600.90;
var price = 200.30;
var total = price * 3;
// Outputs: false
console.log(money >= total);
// Outputs: 600.9000000000001
console.log(total);
If you need to use pure JavaScript, then you have to think about a solution for every calculation. For the code above, we can convert the decimals to whole integers:
var money = 60090;
var price = 20030;
var total = price * 3;
// Outputs: true
console.log(money >= total);
// Outputs: 60090
console.log(total);
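One caveat with this conversion (my addition, not part of the answer above): going from a decimal amount to whole units should use a single Math.round(), because the multiplication by 100 can itself produce a value that is not an exact integer.
// Sketch: convert a decimal dollar amount to whole cents in one rounding step.
function toCents(amount) {
  return Math.round(amount * 100);
}

var money = toCents(600.90);     // 60090, even if 600.90 * 100 isn't exactly 60090
var price = toCents(200.30);     // 20030
console.log(money >= price * 3); // true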
Avoiding Problems with Decimal Math in JavaScript
There is a dedicated library for financial calculations with great documentation: Finance.js.
Use this code for currency calculations to round numbers to two digits.
<!DOCTYPE html>
<html>
<body>
<h1>JavaScript Variables</h1>
<p id="test1"></p>
<p id="test2"></p>
<p id="test3"></p>
<script>
function setDecimalPoint(num) {
  if (isNaN(parseFloat(num))) {
    return 0;
  } else {
    var value = parseFloat(num);
    var multiplicator = Math.pow(10, 2);
    value = parseFloat((value * multiplicator).toFixed(2));
    return (Math.round(value) / multiplicator);
  }
}
document.getElementById("test1").innerHTML = "Without our method O/P is: " + (655.93 * 9) / 100;
document.getElementById("test2").innerHTML = "Calculator O/P: 59.0337, Our value is: " + setDecimalPoint((655.93 * 9) / 100);
document.getElementById("test3").innerHTML = "Calculator O/P: 32888.175, Our value is: " + setDecimalPoint(756.05 * 43.5);
</script>
</body>
</html>
Here we are using JavaScript and Node.js to trade real money in financial markets. We came up with a custom Decimal class representing immutable decimal numbers and providing logic for calculations.
import { decimal } from "@reiryoku/mida";
0.1 + 0.2; //= 0.30000000000000004
decimal(0.1).add(0.2); //= 0.3
decimal("0.1").add("0.2"); //= 0.3
If you are curious, you can see the definition here: https://github.com/Reiryoku-Technologies/Mida/blob/master/src/core/decimals/MidaDecimal.ts
The package is available for anyone on npm.