Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?
P

13

552

I'm curious whether there is a real difference between the money datatype and something like decimal(19,4) (which is what money uses internally, I believe).

I'm aware that money is specific to SQL Server. I want to know if there is a compelling reason to choose one over the other; most SQL Server samples (e.g. the AdventureWorks database) use money and not decimal for things like price information.

Should I just continue to use the money datatype, or is there a benefit to using decimal instead? Money is fewer characters to type, but that's not a valid reason :)

Phenobarbitone answered 24/2, 2009 at 17:54 Comment(4)
DECIMAL(19, 4) is a popular choice. Check this, and also check World Currency Formats, to decide how many decimal places to use; hope it helps.Orsino
I wondered why the money data type has 4 decimal places and not 2 - i.e. there are 100 cents in a dollar, so only 2 decimal places are required? For storing money amounts of less than $9999.99, I was going to go with decimal(6,2). I'm not concerned about divide or multiply calculations, just storing and summation.Onepiece
some currencies are divided into smaller parts than hundreds, i.e. Bahraini dinar is divided into 1000 filsAbney
There are good reasons to divide money by money. I found this thread because I had a "money / money" situation and was getting imprecise answers. The calculation was to determine the ratio of one dollar amount to another. It always seems to come out truncated to 4 decimal places, so I get 0.0109, where I'm looking for 0.0110 (the properly rounded answer, which to more places is 0.01095125).Playbill
C
415

Never ever should you use money. It is not precise, and it is pure garbage; always use decimal/numeric.

Run this to see what I mean:

DECLARE
    @mon1 MONEY,
    @mon2 MONEY,
    @mon3 MONEY,
    @mon4 MONEY,
    @num1 DECIMAL(19,4),
    @num2 DECIMAL(19,4),
    @num3 DECIMAL(19,4),
    @num4 DECIMAL(19,4)

SELECT
    @mon1 = 100, @mon2 = 339, @mon3 = 10000,
    @num1 = 100, @num2 = 339, @num3 = 10000

-- the money/money division is cut to 4 decimal places (0.2949) before multiplying,
-- while the decimal intermediate keeps a much higher scale
SET @mon4 = @mon1/@mon2*@mon3
SET @num4 = @num1/@num2*@num3

SELECT @mon4 AS moneyresult,
       @num4 AS numericresult

Output: 2949.0000 2949.8525

To some of the people who said that you don't divide money by money:

Here is one of my queries to calculate correlations, and changing that to money gives wrong results.

select t1.index_id,t2.index_id,(avg(t1.monret*t2.monret)
    -(avg(t1.monret) * avg(t2.monret)))
            /((sqrt(avg(square(t1.monret)) - square(avg(t1.monret))))
            *(sqrt(avg(square(t2.monret)) - square(avg(t2.monret))))),
current_timestamp,@MaxDate
            from Table1 t1  join Table1 t2  on t1.Date = traDate
            group by t1.index_id,t2.index_id
Cluster answered 24/2, 2009 at 17:57 Comment(18)
"Never" is a strong word. "Money" is useful for casting results to that type for display to the user in a culture-sensitive way, but you're right that it's very bad to use for the calculations itself.Bombazine
Your example is not meaningful, since nobody will ever multiply two objects of type money. If you want to prove your point, you need to compare multiplying a money by a decimal to multiplying a decimal by a decimal.Hildick
I don't think there would be an instance of multiplying a money by money; money by int or money by float perhaps.Phenobarbitone
.. but it's still puzzling why money * money would not have the precision of money.Bowls
@Learning: it does have a precision of money. However, you still end up with rounding errors that can accumulate over time. The decimal type doesn't use binary arithmetic: it guarantees it gets the same base 10 results you would from doing it on paper.Bombazine
It's perfectly appropriate for storage, addition/subtraction, and output. As others point out, multiplying two money values makes no sense in terms of units for typical usage. Those kind of statistical calculations should cast the money to a more appropriate type for that need.Calypso
Multiplication and division of money over money aside, this 'illustration' is manipulative. It's a documented fact that money has a fixed and very limited precision. In such cases one should first multiply, and then divide. Change the order of operators in this example and you will get identical results. A money essentially is a 64-bit int, and if you were to deal with ints, you would multiply before dividing.Blackandblue
I think both sides make good points. My take from this will be to continue using Decimals, but I occasionally cast to Money for display purposes.Slipshod
This example shows actually that decimal is broken. The highest order operator here is the divisor, which means you are dividing 100 by 339, and then the result is bound by the 4 decimal places in the money type. Multiplying this by 10000 will never give you a decimal value. The fact that a decimal(19,4) stores higher precision than 4 is in my opinion, an error.Purim
Having looked at it again, they are both wrong. If you break the steps down, Decimal comes out correct at 2950. This is because it rounds the intermediate step correctly up from 0.8525, whereas the money rounds it down. So the correct answer should really be 2950, not 2949 or 2949.8525.Purim
The result of your calculation is not money, is it? It should be a float, I guess. You have to cast it to float anyway. You would lose a lot of precision with decimal here as well if you don't cast it to float right away, but it wouldn't be as much as with the money type, so in many cases you just wouldn't notice.Unexperienced
I read all the debates, and aside from the accuracy issues, I believe the display will use the regional settings of the PC, so it could show $50 on a US PC and 50 euros on a French PC without any special conversion process. So I decided to use decimal as an accurate and maintainable type. Please read my post about another reason for the accuracy of the decimal type over money.Hoyt
The denominator exceeds the scale of Money. Valid compare would be with both using the decimal values in the denominator.Indira
@JoelCoehoorn "never" isn't strong enough. BOL warns since 2000 at least that the type is deprecated, shouldn't be used and definitely unsuitable to display values in a cultural-sensitive way. It's money not art. It has currency codes not cultures. One can't assume a currency, both due to sanity and accounting requirements. A company will deal with multiple currencies like USD, EUR and GBP.Dunstable
Money is multiplied and divided by int (logically).Methylamine
Also -1 for the subjective "pure garbage". The argument here is interesting and valuable and I will gladly add an upvote for that on its own. But it's just as interesting that int/int = int, while decimal(19,4)/decimal(19,4) != decimal(19,4)Dineric
This answer is wrong in just about every way it's possible to be wrong. Money types are integers with four implied decimal places. This makes them considerably more "precise" than float values, which can drift with large precision values. They are a specific type with a specific set of use cases (e.g. storing money values), and they are perfectly suited for this. Presumably, programmers will consider the appropriateness of the approach they're considering to the circumstances of the case. But declaring money "garbage" makes this just an opinion piece.Uriniferous
Criticisms of data type are mu. Change @mon2, @mon3, @num2, @num3 to int and the result is still the same. The calculation result is actually 2949.85250737463 - should we use a data type for storage that will account for all of those decimal places, and then, as Joel said, use a money data type for display purposes (as a simple, consistent, readable way to convert to monetary units)?Currency
P
309

SQLMenace said money is inexact. But you don't multiply/divide money by money! How much is 3 dollars times 50 cents? 150 dollarcents? You multiply/divide money by scalars, which should be decimal.

DECLARE
@mon1 MONEY,
@mon4 MONEY,
@num1 DECIMAL(19,4),
@num2 DECIMAL(19,4),
@num3 DECIMAL(19,4),
@num4 DECIMAL(19,4)

SELECT
@mon1 = 100,
@num1 = 100, @num2 = 339, @num3 = 10000

SET @mon4 = @mon1/@num2*@num3
SET @num4 = @num1/@num2*@num3

SELECT @mon4 AS moneyresult,
@num4 AS numericresult

Results in the correct result:

moneyresult           numericresult
--------------------- ---------------------------------------
2949.8525             2949.8525

money is good as long as you don't need more than 4 decimal digits, and you make sure your scalars - which do not represent money - are decimals.
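
As an illustration of that money-times-scalar pattern, here is a minimal sketch (the price, tax rate and variable names are made up for illustration):

DECLARE
@price MONEY,
@taxrate DECIMAL(9,4)

SELECT @price = 19.99, @taxrate = 0.0825

-- decimal has higher data type precedence than money, so money * decimal yields a decimal
SELECT @price * @taxrate                AS tax_exact,    -- 1.64917500
       CAST(@price * @taxrate AS MONEY) AS tax_rounded   -- 1.6492, rounded back to 4 decimal places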

Porcia answered 24/2, 2009 at 18:9 Comment(24)
how many 1 cent coins can a dollar bill get you? an answer to this requires money / money.Bowls
I added a correlation query to my answer that I run for monthly performance, what would happen if I used money data type?Cluster
@Bowls in that case the result is not money, but a float. (Basic dimensional analysis.)Fraudulent
@Learning: Do you ask the database how many cents in a dollar a lot? Anyway, that would return the right result. His problem was that money / money was only precise to four digits (it was 0.2949), then when multiplied by 10000 it became 2949.0000.Porcia
@SQLMenace: I don't know; I guess in that case you would have to use a datatype with more than 4 decimal digits. That still doesn't mean you should never use money.Porcia
@SQLMenace: In my (financial) program I also don't use money; that is because I need to make calculations with 12 (or 18, I'm not sure) decimal digits (we have multipliers like 0.027681943654)Porcia
@Porcia : I understood the fault in my logic (as you explained the precision was 4 and multiplication by 10K wiped it ). Thanks.Bowls
fwiw : I do not ask the db about how many cents to a dollar a lot. That was an example, as I think there should be no corner cases in a db implementation.Bowls
@learning your comment shows that you do not understand science and conversion. if you did need to do a calculation of how many 1 cent coins a dollar bill gives you, you are not dividing by 1 cent, you are dividing by an exact conversion. The conversion in this case is exact and does not in any way support your intent.Difference
@Difference He's correct, and don't make personal criticisms as you did based on a one-line remark; it's not helpful. If he doesn't divide by $0.01 but instead 0.01, then the result is $100 instead of 100. There's 100 cents in a dollar, not $100 cents. Units are important! There's definitely a big place for dividing, and perhaps multiplying, money by money.Dowser
If I want to spend $1000 to buy stock shares priced at $36, is that not a legitimate usage of money / money? The answer is whole units with no dollar sign attached, which is correct! This is a perfectly reasonable scenario, suggesting that due to weird edge cases like this, money is not a good data type to use, as the result is always truncated back to fit money instead of following the normal rules for precision and scale. What if you want to calculate the standard deviation of a set of money values? Yes, Sqrt(Sum(money * money)) (simplified) actually does make sense.Puisne
even if money works if you make it work, what is the advantage?Counterglow
So what can I use for Bitcoins then?Wilds
@Toolkit: The current definition of Bitcoins allows 16 digits before the decimal point and 8 after, so decimal(24, 8). The BitCoin protocol allows however for a future change that would make it more divisible; if you want to be future-proof for that point I'd go with decimal(28, 12), which would take the same amount of storage space (13 bytes per value). However, if you do that keep in mind that not all your database values are guaranteed to be representable in Bitcoin's protocols, which don't allow for more than 8 decimal digits.Porcia
An exchange rate is another valid case of dividing money by money. In fact, there are rules on how many digits are allowed in XR calculations, as well as how many digits each currency is allowed to have. This makes the money type with its hard-coded precision unsuitable for saving monetary values when XRs are concernedDunstable
@PanagiotisKanavos It's tempting to think you're dividing money by money in an exchange rate, but it's still a QUANTITY of money. The exchange rate itself is still a scalar quantity, not money. I will agree that the money type is confusing enough to avoid using if possible.Weikert
@SteveSether no, the exchange rate isn't a scalar quantity. It requires two currencies to be meaningful. The money type on the other hand is - it has no unit, only an assumption that it may be used to store monetary data. That's not the point anyway. You can't assume that an arbitrary precision of 4 is enough. Different currencies have different precision (Yen has no decimals for example, Bitcoin has 8) and there are specific rules that govern how many digits should be used during conversions and calculationsDunstable
@SteveSether it would be different if we were talking about a Money class, with the accompanying Currency, precision and rounding behaviour.Dunstable
this just basically says "it's ok that it gets the wrong answer, because it only gets the wrong answer in technically semantically odd calculations". ok, but it still gets it wrong. why not just use a type that doesn't get it wrong, even IN semantically odd situations? (in other words, what's the advantage?)Counterglow
Of course you can divide money by money!!! Every interest rate calculation is dividing money by money. If I earned $12.00 on my investment of $200, the interest rate is $12 ÷ $200 = 0.06 = 6%. Whats more, dimensional analysis (which is something accountants should be force-fed) reveals that, since $ ÷ $ = 1, the units have no dimensions (i.e. 0.06 or 6% is just a number - no units of measurement). But multiplying money by money would have units of $² which has no meaning I can imagine. What can you measure in square dollars?Enthuse
@ReversedEngineer Maybe the cost per square foot when trying to buy a house ;) [/s]Elmoelmore
@Elmoelmore Cost per square foot is measured in dollars per square foot, not square dollars per square foot.Enthuse
@Porcia but change your @num2 and @num3 to int and you get different results again.Currency
@ReversedEngineer - I like that explanation about dividing money by money. Where you say "interest rate is $12 / $200" we can read the division symbol as "per". We earned twelve dollars per two hundred dollars invested = valid maths.Currency
T
93

Everything is dangerous if you don't know what you are doing

Even high-precision decimal types can't save the day:

declare @num1 numeric(38,22)
declare @num2 numeric(38,22)
set @num1 = .0000006
set @num2 = 1.0
select @num1 * @num2 * 1000000

1.000000 <- Should be 0.6000000
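
(As one of the comments below points out, the order of operations matters here: on recent SQL Server versions, putting the big multiplier first keeps the intermediate value within the available scale. A minimal sketch:)

declare @num1 numeric(38,22)
declare @num2 numeric(38,22)
set @num1 = .0000006
set @num2 = 1.0
select 1000000 * @num1 * @num2

0.600000 <- as expected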


The money types are integers

The text representations of smallmoney and decimal(10,4) may look alike, but that doesn't make them interchangeable. Do you cringe when you see dates stored as varchar(10)? This is the same thing.

Behind the scenes, money/smallmoney are just a bigint/int. The decimal point in the text representation of money is visual fluff, just like the dashes in a yyyy-mm-dd date. SQL doesn't actually store those internally.
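
To see that scaled-integer representation for yourself, here is a minimal sketch:

DECLARE @m MONEY
SET @m = 19.34

SELECT @m                         AS money_value,     -- 19.34
       CAST(@m * 10000 AS BIGINT) AS ten_thousandths  -- 193400, the whole number of 1/10000ths behind the value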

Regarding decimal vs money, pick whatever is appropriate for your needs. The money types exist because storing accounting values as integer multiples of 1/10000th of a unit is very common. Also, if you are dealing with actual money and calculations beyond simple addition and subtraction, you shouldn't be doing that at the database level! Do it at the application level with a library that supports Banker's Rounding (IEEE 754).

Tinfoil answered 26/2, 2013 at 18:31 Comment(6)
Can you explain what happened in this example?Baghdad
Scale overflow. numeric(a,b) * numeric(c,d) yields numeric(a+c+1, b+d). However, if a+c+1 > 38, SQL caps the precision at 38 and reduces the scale, robbing precision from the fraction side to pad the integer side, causing the rounding error.Tinfoil
All numerical calculations are susceptible to loss of precision due to scaling: instead compute select 1000000 * @num1 * @num2Trapezoid
Anon's example is version and database specific, but the point to be careful is valid. In SQL Anywhere version 1.1, the example does give 0.600000 correctly (I know we are talking about MS SQL here). Another point: where the money type is missing from other databases, there are ways of treating decimal or numeric as money, such as creating a domain.Cutworm
I think this is the most enlightening answer because you've explained not simply just the errors that get propagated, but what's really going on behind the scene, and why MONEY exists in the first place. I'm not sure why it took 4 years to get this answer, but perhaps it's because there's too much focus on the calculation "problem", and not so much on the whys and hows of money vs decimal.Weikert
This argument isn't valid. There's only an assumption that a money value actually represents money. There is no currency. 4 digits is NOT enough to store eg Bitcoins. When comparing varchar to date, money is the equivalent of varcharDunstable
V
47

I realise that WayneM has stated he knows that money is specific to SQL Server. However, he is asking if there are any reasons to use money over decimal or vice versa, and I think one obvious reason still ought to be stated: using decimal means it's one less thing to worry about if you ever have to change your DBMS - which can happen.

Make your systems as flexible as possible!

Virile answered 13/8, 2012 at 0:28 Comment(6)
Possibly, but in theory you could declare a domain named Money in another DBMS (that supports the declaration of domains).Spall
As someone currently converting a SQL Server database to Amazon Redshift, I can vouch that this is a genuine issue. Try to avoid data types that are bespoke to a particular database platform unless there are very sound business reasons to use them.Whitney
@Nathn given Redshift's weak type system compared to PostgreSQL or SQL Server, eg no date type, I'd say you'll always have such issues. You should say "Don't use deprecated types when the database already provides better, more standard-compliant types and warns against the deprecated ones".Dunstable
@PanagiotisKanavos - money isn't deprecatedDynameter
@MartinSmith shocked to see this, as it can't actually handle money values like BTC in 2017. Or differentiate between GBP and EUR amounts - even though they move towards parity :P. Anyway, moving to Redshift introduces a lot of painDunstable
@Nathn oops, just saw that Redshift got Date! No TIME though and MONEY is one of the unsupported PostgreSQL types!Dunstable
P
34

Well, I like MONEY! It's a byte cheaper than DECIMAL, and the computations perform quicker because (under the covers) addition and subtraction operations are essentially integer operations. @SQLMenace's example—which is a great warning for the unaware—could equally be applied to INTegers, where the result would be zero. But that's no reason not to use integers—where appropriate.
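
(For the record, here is that integer analogue as a quick sketch:)

declare @i1 int, @i2 int, @i3 int

select @i1 = 100, @i2 = 339, @i3 = 10000

select @i1/@i2*@i3  -- integer division truncates 100/339 to 0, so the result is 0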

So, it's perfectly 'safe' and appropriate to use MONEY when what you are dealing with is MONEY and use it according to mathematical rules that it follows (same as INTeger).

Would it have been better if SQL Server promoted division and multiplication of MONEY's into DECIMALs (or FLOATs?)—possibly, but they didn't choose to do this; nor did they choose to promote INTegers to FLOATs when dividing them.

MONEY has no precision issue; that DECIMALs get to have a larger intermediate type used during calculations is just a 'feature' of using that type (and I'm not actually sure how far that 'feature' extends).

To answer the specific question, a "compelling reason"? Well, if you want absolute maximum performance in a SUM(x) where x could be either DECIMAL or MONEY, then MONEY will have an edge.

Also, don't forget its smaller cousin, SMALLMONEY - just 4 bytes, but it maxes out at 214,748.3647, which is pretty small for money - and so is not often a good fit.

To prove the point around using larger intermediate types, if you assign the intermediate explicitly to a variable, DECIMAL suffers the same problem:

declare @a decimal(19,4)
declare @b decimal(19,4)
declare @c decimal(19,4)
declare @d decimal(19,4)

select @a = 100, @b = 339, @c = 10000

set @d = @a/@b

set @d = @d*@c

select @d

Produces 2950.0000 (okay, so at least DECIMAL rounded rather than MONEY truncated—same as an integer would.)

Putrescine answered 29/4, 2013 at 5:29 Comment(4)
MONEY is one byte less than a large DECIMAL, with up to 19 digits of precision. However, most real-world monetary calculations (up to $9.99 M) can fit in a DECIMAL(9, 2), which requires just five bytes. You can save size, worry less about rounding errors, and make your code more portable.Chore
While @JonofAllTrades is correct, you could also get integer performance back by simply using an integer containing pennies or cents, and save the extra byte that encodes the position of the decimal point, and which has to be checked when adding decimals together.Putrescine
"So, it's perfectly 'safe' and appropriate to use MONEY when what you are dealing with is MONEY and use it according to mathematical rules that it follows" -> however, see also Anon's answer: "Everything is dangerous if you don't know what you are doing". You can never predict how people will end up querying the system you are creating, best to avoid the possibility of misuse when you can. The minuscule gain in storage is not worth it IMHO.Whitney
Try storing a Bitcoin value. It has 8 decimalsDunstable
B
17

We've just come across a very similar issue and I'm now very much a +1 for never using Money except in top level presentation. We have multiple tables (effectively a sales voucher and sales invoice) each of which contains one or more Money fields for historical reasons, and we need to perform a pro-rata calculation to work out how much of the total invoice Tax is relevant to each line on the sales voucher. Our calculation is

vat proportion = total invoice vat x (voucher line value / total invoice value)

This results in a real world money / money calculation which causes scale errors on the division part, which then multiplies up into an incorrect vat proportion. When these values are subsequently added, we end up with a sum of the vat proportions which do not add up to the total invoice value. Had either of the values in the brackets been a decimal (I'm about to cast one of them as such) the vat proportion would be correct.
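
To illustrate the effect, here is a minimal sketch with made-up names and figures (not our actual tables):

DECLARE @TotalVat   MONEY,
        @LineValue  MONEY,
        @TotalValue MONEY

SELECT @TotalVat = 20.00, @LineValue = 33.33, @TotalValue = 99.99

-- money / money keeps only 4 decimal places of the ratio (0.3333 instead of 0.333333...),
-- so the pro-rata amounts drift and no longer sum back to the invoice total
SELECT @TotalVat * (@LineValue / @TotalValue)                        AS money_proportion,   -- 6.666
       @TotalVat * (CAST(@LineValue AS DECIMAL(19,4)) / @TotalValue) AS decimal_proportion  -- ~6.6667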

Originally, without the brackets, this used to work; I guess that because of the larger values involved, it was effectively simulating a higher scale. We added the brackets because the calculation was doing the multiplication first, which in some rare cases was blowing the precision available for the calculation, but this has now caused this much more common error.

Brahmanism answered 25/10, 2011 at 9:18 Comment(5)
But it only does so because an ignorant developer ignored the rules for VAT calculations and the documented precision limitations of money.Gourami
@Gourami but the point you are making about the possibility of misuse of this data type is an argument in favour of avoiding using it at all.Whitney
@Nathan No. I am pointing out that the argument given as a reason to never use it is basically incompetent developer, so the argument is bogus.Gourami
@Gourami sadly there are many incompetent developers (as well as many awesome ones) and for me, unless I could guarantee how this data was going to be queried in the future and no-one was ever going to make the kind of mistakes described here, better to err on the side of caution and use a decimal type. That said, having read all these answers I can see there are some specific use cases where money would be an optimum type to use, I just wouldn't use it unless there was a very good use case for it (e.g. column will only ever be aggregated by SUM, bulk loading large amounts of financial data).Whitney
@Gourami it isn't just the precision limitations. The internal representation can also cause issues. Money is a convenience type to catch ignorant developers, not a great type that developers are misusing. The vat example, regardless of the rules for calculating it, clearly should scare non-ignorant developers away when they apply the specific to the general.Dative
D
14

As a counter point to the general thrust of the other answers. See The Many Benefits of Money…Data Type! in SQLCAT's Guide to Relational Engine

Specifically I would point out the following

Working on customer implementations, we found some interesting performance numbers concerning the money data type. For example, when Analysis Services was set to the currency data type (from double) to match the SQL Server money data type, there was a 13% improvement in processing speed (rows/sec). To get faster performance within SQL Server Integration Services (SSIS) to load 1.18 TB in under thirty minutes, as noted in SSIS 2008 - world record ETL performance, it was observed that changing the four decimal(9,2) columns with a size of 5 bytes in the TPC-H LINEITEM table to money (8 bytes) improved bulk inserting speed by 20% ... The reason for the performance improvement is because of SQL Server’s Tabular Data Stream (TDS) protocol, which has the key design principle to transfer data in compact binary form and as close as possible to the internal storage format of SQL Server. Empirically, this was observed during the SSIS 2008 - world record ETL performance test using Kernrate; the protocol dropped significantly when the data type was switched to money from decimal. This makes the transfer of data as efficient as possible. A complex data type needs additional parsing and CPU cycles to handle than a fixed-width type.

So the answer to the question is "it depends". You need to be more careful with certain arithmetical operations to preserve precision but you may find that performance considerations make this worthwhile.

Dynameter answered 13/8, 2015 at 18:48 Comment(3)
How would you store bitcoin then?Dunstable
@PanagiotisKanavos - I've no idea. I've never looked into bitcoin and know practically nothing about it!Dynameter
Link not working. Here is the link to the PDF download.microsoft.com/download/0/F/B/…Gudren
M
12

I want to give a different view of MONEY vs. NUMERIC, largely based on my own expertise and experience... My point of view here favours MONEY, because I have worked with it for a considerably long time and never really used NUMERIC much...

MONEY Pro:

  • Native Data Type. It uses a native data type (integer), the same as a CPU register (32 or 64 bit), so calculations don't carry unnecessary overhead; it's smaller and faster... MONEY needs 8 bytes and NUMERIC(19, 4) needs 9 bytes (12.5% bigger)...

    MONEY is faster as long as it is used for what it was meant to be (as money). How fast? My simple SUM test on 1 million rows shows MONEY at 275 ms and NUMERIC at 517 ms... That is almost twice as fast... Why a SUM test? See the next Pro point (and the rough test sketch at the end of this answer).

  • Best for Money. MONEY is best for storing money and doing operations on it, for example, in accounting. A single report can run millions of additions (SUM) and a few multiplications after the SUM operation is done. For very big accounting applications it is almost twice as fast, and that is extremely significant...
  • Low Precision of Money. Money in real life doesn't need to be very precise. I mean, many people may care about 1 cent USD, but how about 0.01 cent USD? In fact, in my country, banks no longer care about cents (the digits after the decimal comma); I don't know about US banks or other countries...

MONEY Con:

  • Limited Precision. MONEY only has four digits of precision (after the comma), so it has to be converted before doing operations such as division... But then again, money doesn't need to be that precise; it is meant to be used as money, not just as a number...

But... the big "but" here is: even if your application involves real money but doesn't run lots of SUM operations on it, as in accounting, and instead uses lots of divisions and multiplications, then you should not use MONEY...
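
For what it's worth, here is a rough way to reproduce that kind of SUM comparison (the table names, test value and row count are illustrative; timings will vary by hardware and version):

CREATE TABLE #MoneyTest   (val MONEY)
CREATE TABLE #DecimalTest (val DECIMAL(19,4))

-- load about 1 million identical rows into each table
INSERT INTO #MoneyTest   (val) SELECT TOP (1000000) 1.2345 FROM sys.all_objects a CROSS JOIN sys.all_objects b
INSERT INTO #DecimalTest (val) SELECT TOP (1000000) 1.2345 FROM sys.all_objects a CROSS JOIN sys.all_objects b

SET STATISTICS TIME ON
SELECT SUM(val) FROM #MoneyTest
SELECT SUM(val) FROM #DecimalTest
SET STATISTICS TIME OFF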

Monaural answered 26/9, 2016 at 8:9 Comment(4)
If you're going to say that money is faster than decimal, you need to tell us things like what rdbms, on what hardware, running what OS, with what specific data running what specific query you're talking about. Furthermore, if it is not COMPLETELY accurate, I am NOT doing any financial operations with it, because the tax man will be rather displeased at getting bad numbers back. You care about the thousandth cent if you're dealing with US currency for example. If money isn't doing that, it's not usable.Blague
@HaakonLøtveit this entire Q&A thread is about the SQL Server data type "money", so I don't think they need to specify this in the answer. Money can be faster to use than decimal in some circumstances (e.g. loading data), see the answer by Martin Smith for more details. I agree with the majority of this answer, in that there are certain use cases that may make Money a more efficient choice than decimal, however unless there is a very compelling case for using it I think it should be avoided.Whitney
Bitcoin has 8 decimals. You can't even store it in a money columnDunstable
@HaakonLøtveit .. well, what I am trying to say is that, at the architecture level, native types always win performance-wise against non-native ones, and what is native depends on the architecture: for example, on a CPU that would be an integer type, on an FPU a float, and on a GPU an array of floats, and so on..Monaural
S
10

All the previous posts bring valid points, but some don't answer the question precisely.

The question is: Why would someone prefer money when we already know it is a less precise data type and can cause errors if used in complex calculations?

You use money when you won't make complex calculations and can trade this precision for other needs.

For example, when you don't have to make those calculations and need to import data from valid currency text strings. This automatic conversion works only with the MONEY data type:

SELECT CONVERT(MONEY, '$1,000.68')

I know you can write your own import routine. But sometimes you don't want to recreate an import routine that handles locale-specific formats from around the world.
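
For contrast, the same string does not convert directly to decimal; a quick check (the exact error text may vary by version):

SELECT CONVERT(MONEY,         '$1,000.68')   -- works: 1000.68
SELECT CONVERT(DECIMAL(19,4), '$1,000.68')   -- fails: error converting data type varchar to numeric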

Another example: when you don't have to make those calculations (you just need to store a value) and need to save 1 byte (money takes 8 bytes and decimal(19,4) takes 9 bytes). In some applications (fast CPU, big RAM, slow IO), like just reading huge amounts of data, this can be faster too.

Stoneman answered 18/7, 2018 at 20:14 Comment(0)
H
5

I found a reason for using decimal over money, concerning accuracy.

DECLARE @dOne   DECIMAL(19,4),
        @dThree DECIMAL(19,4),
        @mOne   MONEY,
        @mThree MONEY,
        @fOne   FLOAT,
        @fThree FLOAT

 SELECT @dOne   = 1,
        @dThree = 3,    
        @mOne   = 1,
        @mThree = 3,    
        @fOne   = 1,
        @fThree = 3

 SELECT (@dOne/@dThree)*@dThree AS DecimalResult,
        (@mOne/@mThree)*@mThree AS MoneyResult,
        (@fOne/@fThree)*@fThree AS FloatResult

DecimalResult > 1.000000

MoneyResult > 0.9999

FloatResult > 1

Just test it and make your decision.

Hoyt answered 21/1, 2014 at 11:49 Comment(4)
We would very much like to hear (read, that is) your conclusion.Tibetan
@PeterMortensen I think if I want completeness and accuracy when choosing between the Money and Decimal types, my decision would be the Decimal one.Hoyt
Consider updating your answer with the actual result of the above. Then your conclusion will be immediately obvious to anyone reading :-)Andromada
@Ageax The results have been added to the answer.Hoyt
D
5

You shouldn't use money when you need to do multiplications / divisions on the value. Money is stored the same way an integer is stored, whereas decimal is stored as a decimal point and decimal digits. This means that money will drop accuracy in most cases, while decimal will only do so when converted back to its original scale. Money is fixed point, so its scale doesn't change during calculations. However, because its fixed point is a decimal point (as opposed to a fixed position in a base-2 string), values up to a scale of 4 are represented exactly. So for addition and subtraction, money is fine.

A decimal is represented in base 10 internally, and thus the position of the decimal point is also based on the base 10 number, which makes its fractional part represent its value exactly, just like with money. The difference is that intermediate values of decimal can maintain precision up to 38 digits.

With a floating point number, the value is stored in binary as if it were an integer, and the decimal (or binary, ahem) point's position is relative to the bits representing the number. Because it is a binary decimal point, base 10 numbers lose precision right after the decimal point. 1/5th, or 0.2, cannot be represented precisely in this way. Neither money nor decimal suffer from this limitation.
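
A quick way to see that binary floating-point effect (a minimal sketch; the decimal check compares equal because base-10 fractions are stored exactly):

DECLARE @f FLOAT, @d DECIMAL(9,2)
SELECT @f = 0.1, @d = 0.1

SELECT CASE WHEN @f + @f + @f = 0.3 THEN 'equal' ELSE 'not equal' END AS float_check,   -- not equal
       CASE WHEN @d + @d + @d = 0.3 THEN 'equal' ELSE 'not equal' END AS decimal_check  -- equal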

It is easy enough to convert money to decimal, perform the calculations, and then store the resulting value back into a money field or variable.
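
A minimal sketch of that round trip (the variable names are just for illustration):

DECLARE @stored MONEY, @calc DECIMAL(19,4), @back MONEY
SET @stored = 19.34

-- do the sensitive arithmetic in decimal, then put the rounded result back into money
SET @calc = CAST(@stored AS DECIMAL(19,4)) / 1000 * 1000
SET @back = @calc

SELECT @back   -- 19.34 (doing the same division directly in money gives 19.30; see a later answer)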

From my POV, I want stuff that happens to numbers to just happen without having to give too much thought to them. If all calculations are going to get converted to decimal, then to me I'd just want to use decimal. I'd save the money field for display purposes.

Size-wise I don't see enough of a difference to change my mind. Money takes 4 - 8 bytes, whereas decimal can be 5, 9, 13, or 17. The 9 bytes can cover the entire range that the 8 bytes of money can. Index-wise, comparing and searching should be comparable.

Dative answered 13/8, 2015 at 15:36 Comment(1)
Thanks for the Upvote. I'm looking at this and while I get what I was trying to say, I can also see it being very confusing. Perhaps I'll rewrite it later with some bit vs decimal examples.Dative
H
2

I just saw this blog entry: Money vs. Decimal in SQL Server.

Which basically says that money has a precision issue...

declare @m money
declare @d decimal(9,2)

set @m = 19.34
set @d = 19.34

select (@m/1000)*1000
select (@d/1000)*1000

For the money type, you will get 19.30 instead of 19.34 (the decimal version returns 19.34). I am not sure if there is an application scenario that divides money into 1000 parts for calculation, but this example does expose some limitations.

Hebephrenia answered 19/10, 2009 at 3:31 Comment(1)
It's not an "issue", it's "per specifications": «The money and smallmoney data types are accurate to a ten-thousandth of the monetary units that they represent.» as said in msdn.microsoft.com/en-us/library/ms179882.aspx so anyone saying "it's rubbish" doesn't know what's he's talking about.Adventurism
I
2

Someone could make a case that you should store currency as MONEY, but CONVERT it to DECIMAL before doing calculations.

Calculation should not be totally dependent on storage type.

It is always good practice to explicitly convert your data to the desired type BEFORE using it in calculation, AND it saves a little storage.

To borrow from @SQLMenace's example:

DECLARE
--Just to drive the point, I'll use SMALLMONEY (4 bytes)
@mon1 SMALLMONEY,
@mon2 SMALLMONEY,
@mon3 SMALLMONEY,
@mon4 SMALLMONEY,
--This is the smallest DECIMAL that will hold this calculation (5 bytes)
@num1 DECIMAL(9,4),
@num2 DECIMAL(9,4),
@num3 DECIMAL(9,4),
@num4 DECIMAL(9,4)

SELECT
@mon1 = 100, @mon2 = 339, @mon3 = 10000,
@num1 = 100, @num2 = 339, @num3 = 10000

--Convert it to decimal before calculation!
SET @mon4 = CONVERT(DECIMAL(19,4),@mon1)/
            CONVERT(DECIMAL(19,4),@mon2)*
            CONVERT(DECIMAL(19,4),@mon3)
SET @num4 = @num1/@num2*@num3
--Notice I didn't convert @mon4 from SMALLMONEY before presentation. 
--It is the appropriate data type for this presentation.

SELECT @mon4 AS moneyresult,
@num4 AS numericresult

Output:
2949.8525 2949.8525

The same!

Sure, it's not as clean, but you can take care of that with a little formatting.

To add to the point, try swapping the SELECT out for this:

SELECT
--I added a zero to @mon3 and @num3
@mon1 = 101.5, @mon2 = 339.253, @mon3 = 100000,
@num1 = 101.5, @num2 = 339.253, @num3 = 100000

Output:
Msg 8115, Level 16, State 8, Line 13
Arithmetic overflow error converting int to data type numeric.

DECIMAL(9,4) cannot handle this number size, but SMALLMONEY can.

DECIMAL(10,4) (9 bytes) is the actual equivalent to SMALLMONEY (4 bytes).

To agree with @GerardONeill, in most circumstances you are not saving much space, but a case could be made that SMALLMONEY (4 bytes), being one byte less than DECIMAL(9,4) (5 bytes), saves you a terabyte when scaled up to a trillion values. (Where would you have a trillion records of small values? Did I hear someone say "microtransactions"?) And furthermore, DECIMAL(9,4) does not cover as large a range of numbers! DECIMAL(10,4) jumps up to 9 bytes, and scaled up to a trillion values SMALLMONEY saves you 5 terabytes.

NOTE: The same applies if you compare MONEY (8 bytes) with DECIMAL(19,4) (9 bytes): one byte less saves a terabyte once you reach a trillion values, but in this case DECIMAL(19,4) gives you two more zeros of headroom if you add them to @mon3 and @num3 in the SELECT above. So it all really depends on how much you need to store.

Storage is cheap nowadays, so I'm not saying it's a strong case, but one could make the case and it is worth noting.

To summarize the value differences:

To REPLACE SMALLMONEY (4 bytes) you would have to choose between

  1. DECIMAL(9,4) (5 bytes) and sacrifice range of values OR
  2. DECIMAL(10,4) (9 bytes) for the same range but at more than double the storage used.

To summarize the argument, here are a few principles to follow:

  1. MONEY is for presentation, not calculation
  2. DECIMAL(n,n) should be used for precise calculations
  3. CONVERT your data before calculating with it.

That last one may just be a programmer thing, but being deliberate with your data types at the point of use (NOT JUST the point of storage) is important.

For instance, let's mess with @SQLMenace's answer again:

select t1.index_id,t2.index_id,(avg(t1.monret*t2.monret)
    -(avg(t1.monret) * avg(t2.monret)))
        /((sqrt(avg(square(t1.monret)) - square(avg(t1.monret))))
        *(sqrt(avg(square(t2.monret)) - square(avg(t2.monret))))),
current_timestamp,@MaxDate
        from Table1 t1  join Table1 t2  on t1.Date = traDate
        group by t1.index_id,t2.index_id

What data types are being used in those calculations?

I HAVE NO IDEA

...except he did imply they were all DECIMAL

...BUT WERE THEY???

You can probably see where I'm going with this. From the code maintenance perspective, I personally prefer to have everything I need to know about the data I'm working with directly in front of me.

AND THEN

If you're already going to be converting the data into the type you need for calculation, why store it in a larger type?

Anyway, food for thought.

Igbo answered 21/7, 2023 at 21:45 Comment(0)
