First off - you are receiving some incorrect advice from other answers. Observe the following (run on SQL Server 2012):
declare @op1 decimal(18,2) = 0.01
       ,@op2 decimal(18,2) = 0.01;
select result = @op1 * @op2;
result
---------.---------.---------.---------
0.0001
(1 row(s) affected)
Note the number of dashes underneath the column title - 39 in all. (I changed every tenth one to a period to aid counting.) That is exactly the display width of the result type, decimal(37,4): 37 digits plus room for a decimal point and a sign. Although both operands were declared as decimal(18,2), the calculation was performed, and reported, in a wider decimal datatype. (I am running SQL Server 2012; the exact display width may vary by client tool.)
Therefore, no precision is being lost here. As long as the result fits within the 38-digit maximum precision, only overflow can occur, never silent loss of digits. This is a direct consequence of all calculations on decimal operands being performed as scaled-integer arithmetic. You will occasionally see artifacts of this in IntelliSense, when intermediate expressions of decimal type are reported as being int instead.
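To see that overflow, rather than silent truncation of the integral part, is what happens at the limit, try operands whose product cannot fit in 38 digits. This is a minimal sketch; the values are contrived:

declare @big decimal(38,2) = 999999999999999999999999999999999999.99;
select result = @big * @big;
-- Fails with an arithmetic overflow error (Msg 8115, "Arithmetic overflow
-- error converting expression to data type numeric"). The rule
-- precision = p1 + p2 + 1 would call for precision 77 here; it is capped
-- at 38, and since the integral part of the product cannot fit, the
-- statement errors out instead of silently rounding.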
Consider the original example. The two operands are both of type decimal(18,2) and are stored as integers of value 1 with a scale of 2. When multiplied, the underlying integers are multiplied (1 × 1 = 1) and the scales are added (2 + 2 = 4), giving an integer value of 1 at scale 4 - that is, 0.0001. The result's declared precision follows SQL Server's documented rule for multiplication - precision = p1 + p2 + 1 and scale = s1 + s2 - so decimal(18,2) * decimal(18,2) yields decimal(37,4), stored as an integer with value 1 and scale 4.
Read that last paragraph again.
Rinse and repeat once more.
In practice, this widened type, decimal(37,4), is what is carried forward into any larger expression; the product is never silently narrowed back to the operands' declared type.
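You can confirm the result type directly with sql_variant_property (SQL Server 2008 or later):

declare @op1 decimal(18,2) = 0.01
       ,@op2 decimal(18,2) = 0.01;
select base_type = sql_variant_property(@op1 * @op2, 'BaseType')
      ,prec      = sql_variant_property(@op1 * @op2, 'Precision')
      ,scale     = sql_variant_property(@op1 * @op2, 'Scale');
-- Expected: decimal, 37, 4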
To return to your question - all major currencies of the world (that I am aware of) require only 2 decimal places, but there are a handful where 4 are required, and there are financial transactions, such as currency trades and bond sales, where 4 decimal places are mandated by law. When devising the money datatype, Microsoft appears to have opted for the maximum scale that might be required rather than the scale normally required. Given how few transactions and corporations actually require precision greater than 19 digits, this seems eminently sensible.
If you have:
- A high expectation of only dealing with major currencies (which at the current time only require 2 digits of scale); and
- No expectation of dealing with transactions that are mandated by law to require 4 digits of scale
then you would be safe to use type decimal with scale 2 (such as decimal(19,2), decimal(18,2), or decimal(38,2)) instead of money. This will ease some of your conversions and, given the assumptions above, have no cost. A typical case where these assumptions are met is a GL or subledger accounting system tracking transactions to the penny. However, a stock- or bond-trading system would not meet these assumptions, because 4 digits of scale are mandated by law in those cases.
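The practical difference in scale is easy to see side by side (a small sketch; the value is arbitrary):

declare @m money         = 1.23456
       ,@d decimal(19,2) = 1.23456;
select as_money   = @m   -- rounded to 4 decimal places: 1.2346
      ,as_decimal = @d;  -- rounded to 2 decimal places: 1.23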
A way to distinguish the two cases is whether transactions are reported in cents or percent, which require only 2 digits of scale, or in basis points, which require 4 digits of scale.
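For example, a 25-basis-point fee simply cannot be represented at scale 2 (a sketch; the variable names are made up):

declare @fee2 decimal(19,2) = 0.0025   -- 25 basis points rounds away to 0.00
       ,@fee4 decimal(19,4) = 0.0025;  -- scale 4 preserves it
select fee_scale2 = @fee2, fee_scale4 = @fee4;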
If you are at all unsure as to which case applies to your programming circumstance, consult your Controller or Director of Finance as to the legal and GAAP requirements for your application. (S)he will be able to give you definitive advice.
DECIMAL(19, 4) is a popular choice. Check World Currency Formats to decide how many decimal places to use; hope that helps. – Yul
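If you do adopt decimal(19,4), a column definition might look like the following (a sketch; the table and column names are hypothetical):

create table dbo.LedgerEntry (
    LedgerEntryId int identity(1,1) primary key
   ,Amount        decimal(19,4) not null  -- covers money's full range at the same scale
);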