1. Member

## Simple double calculations...

Hi there, it's me again asking basic Java questions ^^'

At some point in my program I ran into weird results when subtracting a hundredth from a double value.

I created this SSCCE to show what I'm talking about:

Java Code:
```
public static void main(String[] args) {
    double value = 1.02;
    System.out.println(value + " - 0.01 = " + (value - 0.01));

    value = 2.02;
    System.out.println(value + " - 0.01 = " + (value - 0.01));

    value = 43.09;
    System.out.println(value + " - 0.01 = " + (value - 0.01));

    value = 95.07;
    System.out.println(value + " - 0.01 = " + (value - 0.01));

    value = 205.05;
    System.out.println(value + " - 0.01 = " + (value - 0.01));
}
```
Which returns:

Java Code:
```1.02 - 0.01 = 1.01
2.02 - 0.01 = 2.0100000000000002
43.09 - 0.01 = 43.080000000000005
95.07 - 0.01 = 95.05999999999999
205.05 - 0.01 = 205.04000000000002```
Can someone tell me why this is happening and how it can be fixed?
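(One way to see what is going on: `new BigDecimal(double)` expands the exact binary value a double actually holds, so you can check that 0.01 is not stored exactly. A minimal sketch:)

```
import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // The BigDecimal(double) constructor preserves the exact binary
        // value of the double, so printing it reveals the rounding error
        System.out.println(new BigDecimal(0.01));
        System.out.println(new BigDecimal(2.02));
    }
}
```

Both print long decimal expansions rather than `0.01` and `2.02`, because neither value is exactly representable in binary.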

b0rt
Last edited by b0rt; 08-31-2011 at 10:58 AM. Reason: Added different decimal values

2. Member
Oh, just found a fix myself...

Instead of:

Java Code:
`System.out.println(value+" - 0.01 = "+(value-0.01));`
I used:

Java Code:
`System.out.println(value+" - 0.01 = "+((value*100)-1)/100.00);`
And problem solved...

But still feels awkward having those results...
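(Worth noting: the `*100 ... /100` trick only works when `value * 100` happens to land on an exact integer, which is not guaranteed for every double. A sketch of the general fix using `java.math.BigDecimal`, which avoids binary rounding entirely when constructed from a String:)

```
import java.math.BigDecimal;

public class ExactSubtraction {
    public static void main(String[] args) {
        // Constructing BigDecimal from a String keeps the decimal value exact;
        // the double-based constructor would carry the binary rounding error in
        BigDecimal value = new BigDecimal("2.02");
        BigDecimal result = value.subtract(new BigDecimal("0.01"));
        System.out.println(value + " - 0.01 = " + result); // 2.02 - 0.01 = 2.01
    }
}
```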

Does anyone know why Java can't make proper decimal subtractions?
Last edited by b0rt; 08-31-2011 at 11:25 AM. Reason: spelling + question

3. Moderator

4. Member

I can understand that decimal values are represented in memory in an approximate way and that this can cause errors when handling large numbers, but... it just fails with a simple subtraction like:
2.02 - 0.01

It feels like a very tacky implementation to me... if you use one pair of integers (int base, int power) to represent a floating point number (value = base * 10 ^ power) it should not be so prone to errors...

I can imagine it is done to reduce memory consumption, but it still feels cheesy having to do these tricks to avoid precision errors.
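(As it happens, the (base, power) pair described above is essentially what `java.math.BigDecimal` stores: an arbitrary-precision unscaled integer plus a power-of-ten scale, with value = unscaledValue × 10^-scale. The trade-off is exactly the one guessed at: it costs more memory and time than a fixed 64-bit double. A minimal sketch:)

```
import java.math.BigDecimal;

public class BaseAndPower {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("43.09");
        // BigDecimal stores value = unscaledValue * 10^(-scale)
        System.out.println(d.unscaledValue()); // 4309
        System.out.println(d.scale());         // 2
    }
}
```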

Hope I don't disturb anyone with my opinion, because it's just that.
Regards!

5. Moderator
Originally Posted by b0rt
Does anyone know why Java can't make proper decimal subtractions?
It gets much worse!

Mathematicians imagine (but only imagine!) quantities with infinite precision. Computers deal with finite time and finite resources: there isn't paper enough in all the world to write down the real numbers between 0 and 1 using a finite alphabet. So computers cheat and use well established rules for dealing with quantities using only finite precision. Java follows these rules faithfully and in a cross platform way. (as detailed in the link)

It's worse because the mathematician imagines proper decimal subtraction until the day he imagines no more. Finiteness wins.

-----

The results you are getting are probably quite good enough: just ugly. (BigDecimal offers arbitrary, though still finite, precision if you must.) You deal with ugliness using some version of formatting like DecimalFormat or one of the printf() tribe.
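(A short sketch of both formatting options mentioned above; `Locale.US` is pinned here only so the decimal separator is a dot regardless of platform:)

```
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class Formatting {
    public static void main(String[] args) {
        double value = 2.02;
        // printf rounds to two decimal places for display only;
        // the underlying double is unchanged
        System.out.printf(Locale.US, "%.2f%n", value - 0.01); // 2.01
        // DecimalFormat does the same thing via a pattern
        DecimalFormat df = new DecimalFormat("0.00",
                new DecimalFormatSymbols(Locale.US));
        System.out.println(df.format(value - 0.01)); // 2.01
    }
}
```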

6. Moderator
And what makes 2.02-0.01 simple? More simple than e^pi, say.

Anyway, the important thing is to use formatting to get the output looking exactly the way you want, rather than resorting to near-enough arithmetical tricks to bludgeon the quantities.

7. Originally Posted by pbrockway2
It's worse because the mathematician imagines proper decimal subtraction until the day he imagines no more. Finiteness wins.
Analog computers represent numbers in R as a voltage level, not just as a few steenkin' bits. I wonder what has become of those hybrid machines (analog computers coupled to digital computers). I never see them anymore ...

kind regards,

Jos

8. Member
Originally Posted by b0rt
it just fails with a simple subtraction like:
2.02 - 0.01
Please keep in mind that asking a computer to subtract 0.01 from 2.02 is equally as "simple" as asking you to subtract a third from 1, but I'd like to see you try that in base 10 :P.

Computers work in base 2 (binary) while most humans are used to working in base 10 (decimal), and not all decimal numbers can be properly represented in binary. Similarly, in base 3 subtracting a third from 1 is easy: it's just 1 - 0.1 = 0.2. But it's impossible to represent a third exactly in base 10, which is why I have to keep referring to it as a "third" rather than its numerical representation, which in decimal is something like 0.33333333333333333333333333333333333333333333333 etc.
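(The same point can be seen directly in code: fractions whose denominator is a power of two, like 1/2, are exact in binary, while 1/10 is a repeating binary fraction, so the stored double is only the nearest representable value. A minimal sketch:)

```
import java.math.BigDecimal;

public class BinaryFractions {
    public static void main(String[] args) {
        // 0.5 = 1/2 is exact in binary...
        System.out.println(new BigDecimal(0.5)); // 0.5
        // ...but 0.1 = 1/10 repeats in binary, so what gets stored
        // is the nearest representable double, not exactly 0.1
        System.out.println(new BigDecimal(0.1));
    }
}
```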
Last edited by Skiller; 08-31-2011 at 02:46 PM.

9. Moderator
The base 2 business is often mentioned in this context (and it does play a role in the standards of floating point arithmetic conventionally adopted) but the problem is deeper: you can't uniquely name the real numbers over ever so small a continuous range with finite strings drawn from a finite alphabet whatever the convention you use for the numerals.

(I deliberately put it that way to avoid the point Jos mentioned: you could just represent a quantity analogically with a voltage or whatever. I was actually thinking of those characters in Gulliver's Travels who avoided the ambiguities of language by using real objects to communicate.)
