Weird outcome when subtracting doubles [duplicate]

0 votes
asked Feb 6, 2010 by mike

Possible Duplicate:
Why is floating point arithmetic in C# imprecise?

I have been doing some numeric work in C#, and the following line of code produces a different number than one would expect:

double num = (3600.2 - 3600.0);

I expected num to be 0.2; however, it turned out to be 0.1999999999998181. Is there a reason why it produces a close, but still different, result?

4 Answers

0 votes
answered Jan 6, 2010 by meinersbur

See Wikipedia

I can't explain it better than that. I can also suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic, or the related questions on Stack Overflow.

0 votes
answered Jan 6, 2010 by fernando

Change your type to decimal:

decimal num = (3600.2m - 3600.0m);

You should also read this.
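A quick sketch of the difference between the two types (the program scaffolding here is mine, not part of the original answer):

```csharp
using System;

class Demo
{
    static void Main()
    {
        double d = 3600.2 - 3600.0;    // binary floating point: inexact here
        decimal m = 3600.2m - 3600.0m; // decimal floating point: exact here

        Console.WriteLine(d == 0.2);   // False: d holds 0.1999999999998181...
        Console.WriteLine(m == 0.2m);  // True: decimal represents 0.2 exactly
    }
}
```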

0 votes
answered Jan 6, 2010 by maciej-hehl

There is a reason.

The reason is that the way a double is stored in memory doesn't allow for an exact representation of the number 3600.2. It doesn't allow for an exact representation of the number 0.2 either.

0.2 has an infinite representation in binary. If you want to store it in memory or in processor registers to perform some calculations, some number close to 0.2 with a finite representation is stored instead. This may not be apparent if you run code like this:

double num = (0.2 - 0.0);

This is because in this case all the binary digits available for representing a double are used for the fractional part of the number (there is only a fractional part), so the precision is higher. If you store the number 3600.2 in a double, some digits are used to represent the integer part, 3600, and fewer digits are left for the fractional part. The precision is lower, and the fractional part actually stored in memory differs from 0.2 enough that the difference becomes apparent after conversion from double to string.
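To see the values that are actually stored, you can print them with the round-trip "G17" format (the scaffolding is mine; the answer itself only describes the effect):

```csharp
using System;

class StoredValues
{
    static void Main()
    {
        // "G17" shows enough digits to reveal the exact double that was stored.
        Console.WriteLine(0.2.ToString("G17"));               // slightly above 0.2
        Console.WriteLine(3600.2.ToString("G17"));            // slightly below 3600.2
        Console.WriteLine((3600.2 - 3600.0).ToString("G17")); // the 0.1999... result
    }
}
```

Because 3600.2 is stored slightly below its true value, subtracting the exactly representable 3600.0 exposes the rounding error.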

0 votes
answered Feb 6, 2010 by adam-ralph

This is because double is a floating-point data type.

If you want greater accuracy for decimal fractions, you could switch to using decimal instead.

The literal suffix for decimal is m, so to use decimal arithmetic (and produce a decimal result) you could write your code as

var num = (3600.2m - 3600.0m);

Note that there are disadvantages to using decimal. It is a 128-bit data type, as opposed to the 64-bit double, which makes it more expensive in both memory and processing. It also has a much smaller range than double.
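Those tradeoffs can be checked directly (the class name is mine, just for illustration):

```csharp
using System;

class SizeAndRange
{
    static void Main()
    {
        Console.WriteLine(sizeof(double));   // 8 bytes
        Console.WriteLine(sizeof(decimal));  // 16 bytes

        Console.WriteLine(double.MaxValue);  // on the order of 1.8E+308
        Console.WriteLine(decimal.MaxValue); // on the order of 7.9E+28
    }
}
```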
