How do I divide two integers to get a double?
You want to cast the numbers:
double num3 = (double)num1/(double)num2;
Note: If any of the arguments in C# is a double, a double divide is used, which results in a double. So, the following would work too:
double num3 = (double)num1/num2;
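As a runnable sketch of this approach (num1 and num2 here are placeholder values, not from the question):
int num1 = 7;
int num2 = 12;
double num3 = (double)num1 / num2; // the cast promotes the division to double arithmetic
Console.WriteLine(num3);           // ~0.5833333333333334 (exact text varies by runtime)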
Just be careful not to do this:
double num3 = (double)(num1/num2);
This will just give you a double representation of the result of the integer division! – Interphone
Why double instead of float? I can see the question calls for double, but I'm curious anyway. – Antarctic
double and not float: when you write a variable like var a = 1.0;, this 1.0 is always a double. I guess this is the main reason. – Selfaddressed
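To check the literal-typing point from the comment above, a minimal sketch (the suffixes are standard C#):
var a = 1.0;  // no suffix: double
var b = 1.0f; // 'f' suffix: float
var c = 1.0m; // 'm' suffix: decimal
Console.WriteLine(a.GetType()); // System.Double
Console.WriteLine(b.GetType()); // System.Single
Console.WriteLine(c.GetType()); // System.Decimal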
Complementing @NoahD's answer: to have greater precision, you can cast to decimal:
(decimal)100/863
//0.1158748551564310544611819235
Or:
Decimal.Divide(100, 863)
//0.1158748551564310544611819235
Doubles are represented using 64 bits, while decimal uses 128:
(double)100/863
//0.11587485515643106
In-depth explanation of "precision"
For more details about the binary floating-point representation and its precision, take a look at this article from Jon Skeet where he talks about floats and doubles, and this one where he talks about decimals.
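To reproduce the comparison end to end, a minimal sketch reusing the same 100/863 example:
Console.WriteLine((double)100 / 863);        // 0.11587485515643106
Console.WriteLine((decimal)100 / 863);       // 0.1158748551564310544611819235
Console.WriteLine(decimal.Divide(100, 863)); // 0.1158748551564310544611819235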
double has a precision of 53 bits, and it's a binary floating-point format, whereas decimal is a... decimal one, of course, with 96 bits of precision. So double is precise to ~15-17 decimal digits and decimal to 28-29 digits (and not twice the precision of double). More importantly, decimal actually uses only 102 of the 128 bits. – Englishry
True for decimals (96), but doubles have 52 bits of mantissa, not 53. – Melodee
Cast the integers to doubles.
Convert one of them to a double first. This form works in many languages:
real_result = (int_numerator + 0.0) / int_denominator
var result = 1.0 * a / b; – Hollyanne
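Both one-liner tricks in a single runnable sketch (a and b are hypothetical ints):
int a = 7, b = 12;
double viaAddition = (a + 0.0) / b; // a + 0.0 is a double, so the division is too
double viaProduct = 1.0 * a / b;    // 1.0 * a is a double; * and / associate left to right
Console.WriteLine(viaAddition); // ~0.5833333333333334
Console.WriteLine(viaProduct);  // ~0.5833333333333334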
var firstNumber = 5000, secondNumber = 37;
var decimalResult = decimal.Divide(firstNumber, secondNumber);
Console.WriteLine(decimalResult);
The question asks for double and not decimal. – Antarctic
var result = decimal.ToDouble(decimal.Divide(5, 2));
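If you want decimal's extra precision during the divide but a double at the end, the same pattern works for any operands (a minimal sketch with hypothetical values):
double result = decimal.ToDouble(decimal.Divide(7, 12));
Console.WriteLine(result); // ~0.5833333333333334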
In the comments to the accepted answer, a distinction is made that seems worth highlighting in a separate answer.
The correct code:
double num3 = (double)num1/(double)num2;
is not the same as casting the result of integer division:
double num3 = (double)(num1/num2);
Given num1 = 7 and num2 = 12:
The correct code will result in num3 = 0.5833333
Casting the result of integer division will result in num3 = 0.00
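A minimal sketch that prints the two results side by side:
int num1 = 7, num2 = 12;
Console.WriteLine((double)num1 / (double)num2); // ~0.5833333333333334
Console.WriteLine((double)(num1 / num2));       // 0 (the integer division runs first)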
I have gone through most of the answers, and I'm pretty sure this is unachievable as asked: dividing two ints will never, on its own, produce a double or a float. But there are plenty of ways to make the calculation work; just cast the operands to float or double before the division and it will be fine.
The easiest way to do that is to add decimal places to your integers.
Ex.:
var v1 = 1 / 30;       // the result is 0 (integer division)
var v2 = 1.00 / 30.00; // the result is 0.033333333333333333