I've seen `decimal` used instead of `int`/`long` in various examples. I'm just trying to understand why.
That's probably because .NET `decimal` and Oracle `NUMBER` map a bit better than `long` and `NUMBER`, and it also gives you more flexibility: if you add a scale to the Oracle column at a later stage, you wouldn't have to change the data type if you already used `decimal`.
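Something like this illustrates that flexibility (just a sketch, assuming the ODP.NET managed driver and a made-up ORDERS table with an AMOUNT column):

```csharp
// Sketch only: assumes Oracle.ManagedDataAccess (ODP.NET) and a hypothetical
// ORDERS table whose AMOUNT column is NUMBER(18,0) today. If a scale is added
// later (say NUMBER(18,2)), the decimal-based read keeps working unchanged.
using Oracle.ManagedDataAccess.Client;

string connectionString = "...";   // placeholder, not a real connection string
using var conn = new OracleConnection(connectionString);
using var cmd = new OracleCommand("SELECT AMOUNT FROM ORDERS", conn);
conn.Open();
using var reader = cmd.ExecuteReader();
while (reader.Read())
{
    decimal amount = reader.GetDecimal(0);  // works for scale 0 or scale 2
    // long amount = reader.GetInt64(0);    // would break once a scale is added
}
```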
`decimal` is certainly slower than `int` and `long`, since the latter two are supported in hardware. That said, you have to crunch a serious amount of data for it to make any difference. I still think that you should use `long` if that's what you're dealing with, and then you should also let the table column definitions represent that: `NUMBER(18,0)` for `long`, and so on.
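A quick sketch of that pairing (table and method names are made up):

```csharp
// Sketch: pair NUMBER(18,0) with System.Int64. Hypothetical DDL:
//   CREATE TABLE ACCOUNTS (ID NUMBER(18,0) PRIMARY KEY);
// The largest NUMBER(18,0) value (18 nines) is well inside long's range,
// so GetInt64 is always safe for such a column.
using Oracle.ManagedDataAccess.Client;

static long ReadAccountId(OracleDataReader reader) =>
    reader.GetInt64(0);   // NUMBER(18,0) -> long
```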
The reason `decimal` maps a little better is that `long` is 64 bits and `decimal` is (kind of) 128 bits.
**.NET**

Type: `decimal`
Approximate range: ±1.0 × 10^−28 to ±7.9 × 10^28
Precision: 28-29 significant digits

Type: `long`
Range: −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
Precision: 18 (19 for `ulong`) significant digits
**Oracle**

`NUMBER` defaults to 38 significant digits and scale 0 (integer).

Type: `NUMBER`
Range: ±1 × 10^−130 to 9.99…9 × 10^125
Precision: 38 significant digits
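The .NET ranges above are easy to verify from the BCL constants themselves (just a sketch):

```csharp
// Sketch: print the built-in maximums to count significant digits.
using System;

Console.WriteLine(long.MaxValue);    // 9223372036854775807           (19 digits)
Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (29 digits)
// An unconstrained Oracle NUMBER defaults to 38 significant digits,
// so even decimal cannot cover the full default precision.
```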
Microsoft is aware of the problem and notes:

> This data type is an alias for the NUMBER(38) data type, and is designed so that the OracleDataReader returns a System.Decimal or OracleNumber instead of an integer value. Using the .NET Framework data type can cause an overflow.
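In practice that means a plain `GetDecimal` read can blow up on a full-precision value. A hedged sketch, assuming ODP.NET (the method and column are made up):

```csharp
// Sketch: GetDecimal may overflow for values wider than decimal's 28-29
// digits, while the provider-specific OracleDecimal keeps the full
// 38-digit Oracle NUMBER precision.
using Oracle.ManagedDataAccess.Client;
using Oracle.ManagedDataAccess.Types;

static string ReadFullPrecision(OracleDataReader reader)
{
    OracleDecimal raw = reader.GetOracleDecimal(0); // full Oracle precision
    return raw.ToString();                          // safe for any NUMBER value
    // decimal d = reader.GetDecimal(0);            // can overflow past 28-29 digits
}
```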
Come to think of it, you actually need `BigInteger` to be able to represent the same number of significant digits as what `NUMBER` defaults to. I've never seen anyone do that, and I would suppose it's a very rare need. Also, `BigInteger` still wouldn't cut it, since `NUMBER` can be positive and negative infinity.
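If you ever did need all 38 digits as an integer in .NET, `System.Numerics.BigInteger` could hold them (a sketch; the literal is just an example value):

```csharp
// Sketch: a 38-digit integer overflows both long (19 digits) and
// decimal (28-29 digits), but BigInteger has no fixed upper bound.
using System;
using System.Numerics;

BigInteger big = BigInteger.Parse("99999999999999999999999999999999999999"); // 38 nines
Console.WriteLine(big + 1); // a 1 followed by 38 zeros
```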