Simple double calculations...

• 08-31-2011, 10:54 AM
b0rt
Simple double calculations...
Hi there, it's me again asking basic Java questions ^^'

At some point in my program I have run into weird results when subtracting a hundredth to a double value.

I created this SSCCE to show what I'm talking about:

Code:

```
    public static void main(String[] args) {
        double value = 1.02;
        System.out.println(value+" - 0.01 = "+(value-0.01));

        value = 2.02;
        System.out.println(value+" - 0.01 = "+(value-0.01));

        value = 43.09;
        System.out.println(value+" - 0.01 = "+(value-0.01));

        value = 95.07;
        System.out.println(value+" - 0.01 = "+(value-0.01));

        value = 205.05;
        System.out.println(value+" - 0.01 = "+(value-0.01));
    }
```
Which returns:

Code:

```
1.02 - 0.01 = 1.01
2.02 - 0.01 = 2.0100000000000002
43.09 - 0.01 = 43.080000000000005
95.07 - 0.01 = 95.05999999999999
205.05 - 0.01 = 205.04000000000002
```
Can someone tell me why this is happening and how it can be fixed?

b0rt
• 08-31-2011, 11:11 AM
b0rt
Oh, just found out myself...

Code:

`System.out.println(value+" - 0.01 = "+(value-0.01));`
I used:

Code:

`System.out.println(value+" - 0.01 = "+((value*100)-1)/100.00);`
And problem solved...

But still feels awkward having those results...

Does anyone know why Java can't make proper decimal subtractions?
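(A slightly more explicit version of the same scaling trick uses Math.round to snap the result to the nearest hundredth, rather than relying on the division rounding cleanly on its own. This is only a sketch for the two-decimal-places case, and it assumes value*100 fits comfortably in a long:)

```java
public class RoundTrick {
    public static void main(String[] args) {
        double value = 2.02;
        // Subtract first, then round the result to the nearest hundredth.
        double result = Math.round((value - 0.01) * 100) / 100.0;
        System.out.println(value + " - 0.01 = " + result); // 2.02 - 0.01 = 2.01
    }
}
```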
• 08-31-2011, 11:53 AM
Tolls
• 08-31-2011, 01:59 PM
b0rt

I can understand that decimal values are represented in memory in an approximate way and that this can cause errors when handling large numbers but... it just fails on a simple subtraction like:
2.02 - 0.01

It feels like a very tacky implementation to me... if you use one pair of integers (int base, int power) to represent a floating point number (value = base * 10 ^ power) it should not be so prone to errors...

I can imagine it is done to save memory, but it still feels cheesy having to do these tricks in order to avoid precision errors.

Hope I don't disturb anyone with my opinion, because it's just that.
Regards!
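(For what it's worth, the standard library already ships a type built on essentially the (integer, power-of-ten) pair described above: java.math.BigDecimal stores an unscaled integer together with a decimal scale, so value = unscaledValue * 10^-scale. A minimal sketch:)

```java
import java.math.BigDecimal;

public class PairRepresentation {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("2.02");
        // Internally: value = unscaledValue * 10^-scale
        System.out.println(d.unscaledValue()); // 202
        System.out.println(d.scale());         // 2
        // Decimal arithmetic on this representation is exact:
        System.out.println(d.subtract(new BigDecimal("0.01"))); // 2.01
    }
}
```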
• 08-31-2011, 02:03 PM
pbrockway2
Quote:

Originally Posted by b0rt
Does anyone know why Java can't make proper decimal subtractions?

It gets much worse!

Mathematicians imagine (but only imagine!) quantities with infinite precision. Computers deal with finite time and finite resources: there isn't paper enough in all the world to write down the real numbers between 0 and 1 using a finite alphabet. So computers cheat and use well-established rules for dealing with quantities using only finite precision. Java follows these rules faithfully and in a cross-platform way (as detailed in the link).

It's worse because the mathematician imagines proper decimal subtraction until the day he imagines no more. Finiteness wins.

-----

The results you are getting are probably quite good enough: just ugly. (BigDecimal offers arbitrary, though still finite, precision if you must.) You deal with ugliness using some version of formatting like DecimalFormat or one of the printf() tribe.
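(One BigDecimal caveat worth flagging: construct it from a String, not from a double, or the binary rounding error rides along into the "exact" arithmetic. A small sketch:)

```java
import java.math.BigDecimal;

public class ConstructorPitfall {
    public static void main(String[] args) {
        // new BigDecimal(double) captures the exact binary value of the double,
        // error included, so the result is a long ugly decimal:
        System.out.println(new BigDecimal(2.02).subtract(new BigDecimal(0.01)));
        // new BigDecimal(String) captures the decimal you actually wrote:
        System.out.println(new BigDecimal("2.02").subtract(new BigDecimal("0.01"))); // 2.01
    }
}
```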
• 08-31-2011, 02:15 PM
pbrockway2
And what makes 2.02 - 0.01 simple? Any simpler than e^pi, say?

Anyway, the important thing is to use formatting to get the output looking exactly the way you want, rather than resorting to near-enough arithmetical tricks to bludgeon the quantities.
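(A minimal sketch of the formatting route, with Locale.US pinned so the decimal separator is a dot regardless of the default locale:)

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class Formatting {
    public static void main(String[] args) {
        double v = 2.02 - 0.01; // internally 2.0100000000000002

        // printf-style: round to two decimal places for display only
        System.out.println(String.format(Locale.US, "%.2f", v)); // 2.01

        // DecimalFormat does the same job:
        DecimalFormat two = new DecimalFormat("0.00",
                DecimalFormatSymbols.getInstance(Locale.US));
        System.out.println(two.format(v)); // 2.01
    }
}
```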
• 08-31-2011, 02:18 PM
JosAH
Quote:

Originally Posted by pbrockway2
It's worse because the mathematician imagines proper decimal subtraction until the day he imagines no more. Finiteness wins.

Analog computers represent numbers in R as a voltage level, not just as a few steenkin' bits. I wonder what has become of those hybrid machines (analog computers coupled to digital computers). I never see them anymore ...

kind regards,

Jos
• 08-31-2011, 02:42 PM
Skiller
Quote:

Originally Posted by b0rt
it just fails with a simple subtraction like:
2.02 - 0.01

Please keep in mind that asking a computer to subtract 0.01 from 2.02 is just as "simple" as asking you to subtract a third from 1, but I'd like to see you try that in base 10 :P.

Computers work in base 2 (binary) while most humans are used to working in base 10 (decimal), and not all decimal numbers can be represented exactly in binary. Similarly, in base 3 subtracting a third from 1 is easy: it's just 1 - 0.1 = 0.2. But it's impossible to represent a third exactly in base 10, which is why I have to keep referring to it as "a third" rather than using the numerical representation, which in decimal is something like 0.33333333333333333333333333333333333333333333333 etc.
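(You can actually watch this happen from inside Java: the BigDecimal(double) constructor exposes the exact binary value a double literal actually stores, which for 0.01 is not 0.01. A quick sketch:)

```java
import java.math.BigDecimal;

public class ExactDouble {
    public static void main(String[] args) {
        // The double literal 0.01 is the nearest representable binary
        // fraction to 0.01 -- close, but not equal:
        System.out.println(new BigDecimal(0.01));
        System.out.println(new BigDecimal(2.02));
    }
}
```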
• 08-31-2011, 03:01 PM
pbrockway2
The base 2 business is often mentioned in this context (and it does play a role in the conventionally adopted standards of floating point arithmetic), but the problem is deeper: you can't uniquely name the real numbers over even the smallest continuous range with finite strings drawn from a finite alphabet, whatever convention you use for the numerals.

(I deliberately put it that way to avoid the point Jos mentioned: you could just represent a quantity analogically with a voltage or whatever. I was actually thinking of those characters in Gulliver's Travels who avoided the ambiguities of language by using real objects to communicate.)