
BIG decimals
Hey,
I'm working on a program that will calculate pi, and I need a way to store digits beyond the 16 or so that a double can hold. I've looked into BigDecimal, but can't find out how many digits it can hold. Does anyone know how many digits a BigDecimal holds, or another way to store many digits that's usable in math (so not a string)?
Thanks

I meant, is there something more precise than a long? BigDecimal question still standing.

BigDecimal is basically an integer without a maximum size, scaled by 10^(-scale). Because it is arbitrary precision, it does not have a maximum size until the computer runs out of memory.
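To make that concrete, here's a minimal sketch showing the unscaled integer and the scale through BigDecimal's own accessors (the value `123.4567` is just an example):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("123.4567");
        // The stored value is unscaledValue x 10^(-scale).
        BigInteger unscaled = d.unscaledValue(); // 1234567
        int scale = d.scale();                   // 4
        System.out.println(unscaled + " x 10^-" + scale); // prints "1234567 x 10^-4"
    }
}
```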

So... is it as accurate as an int (not to very many places)?

An Integer is a wrapper for the primitive type, int.
 Range: -2^31 to 2^31 - 1
A BigInteger is an arbitrary-precision integer, capable of storing any integer of any size, so long as the computer has enough memory to store it.
 Range: -infinity to +infinity (no decimal precision)
A BigDecimal is essentially a BigInteger multiplied by 10^(-x), where x is some scale (0 to infinity, as far as I'm aware), which means that it can be infinitely precise if your computer is powerful enough.
 Range: -0.999999999999999 to +0.999999999999999 (both repeated to infinity)
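A quick sketch of the difference in practice: int arithmetic silently wraps around at its bounds, while BigInteger just keeps growing:

```java
import java.math.BigInteger;

public class RangeDemo {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);     // 2147483647
        System.out.println(Integer.MAX_VALUE + 1); // overflows to -2147483648

        // BigInteger has no fixed bound, so the same addition works as expected.
        BigInteger big = BigInteger.valueOf(Integer.MAX_VALUE).add(BigInteger.ONE);
        System.out.println(big);                   // 2147483648
    }
}
```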

Fantastic! The arbitrary precision aspect is awesome... in concept. Whenever I go to divide two BigDecimals though, the result is either a total loss of precision in rounding or an error. Say I wanted to divide a BigDecimal equal to two by another equal to three: I could round and have a result of one, or I'd get an error saying "Exception in thread "main" java.lang.ArithmeticException: Rounding necessary". I understand what that means... but is there a way around this?? I tried multiplying the numerator up by a constant, then performing the division with rounding, then dividing the quotient by the same constant. Like so:
Code:
// numer = numerator, digitcount = "constant", denom = denominator,
// local = quotient
numer = numer.multiply(digitcount);
BigDecimal local = numer.divide(denom, RoundingMode.CEILING);
local = local.divide(digitcount, RoundingMode.UNNECESSARY);
numer = numer.divide(digitcount, RoundingMode.UNNECESSARY);

This is because BigDecimal cannot represent non-terminating decimal expansions, which is a bit unfortunate. However, there is a bit of a workaround:
Create a function that "tries" to divide using BigDecimal; if it fails, "catch" the error and fall back to double division instead. Something like this:
Code:
public static void main(String[] args) {
    BigDecimal two = new BigDecimal(2);
    BigDecimal three = new BigDecimal(3);
    System.out.println(divideDecimals(two, three));
    System.out.println(divideDecimals(three, two));
}

public static BigDecimal divideDecimals(BigDecimal a, BigDecimal b) {
    try {
        System.out.println("Attempting BigDecimal division..");
        return a.divide(b); // Don't attempt to round here.
    } catch (ArithmeticException ae) {
        // Thrown when the exact quotient doesn't terminate (e.g. 2/3).
        System.out.println("BigDecimal division failed. Using doubles.");
        return new BigDecimal(a.doubleValue() / b.doubleValue());
    }
}
The result of such a program is:
Code:
run:
Attempting BigDecimal division..
BigDecimal division failed. Using doubles.
0.66666666666666662965923251249478198587894439697265625
Attempting BigDecimal division..
1.5
BUILD SUCCESSFUL (total time: 1 second)
Much luck!

Hey sick, it worked! Only one more thing... is there a way to limit the number of digits used? Although I want a lot of digits, I don't want so many that it's too much for the output pane, so it tells me to put it in wrap mode. I've tried multiplying by X then dividing by X with rounding... no difference. Ideas?

Forget the last one... found the beauty of MathContexts... thanks for all your help, Zack
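For anyone finding this thread later, a minimal sketch of what MathContext does here: it caps the number of significant digits and supplies a rounding mode, so a non-terminating division like 2/3 no longer throws (the precision of 10 is just an example):

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class MathContextDemo {
    public static void main(String[] args) {
        BigDecimal two = new BigDecimal(2);
        BigDecimal three = new BigDecimal(3);

        // Limit the result to 10 significant digits, rounding half-up.
        MathContext mc = new MathContext(10, RoundingMode.HALF_UP);
        System.out.println(two.divide(three, mc)); // prints "0.6666666667"
    }
}
```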

Dividing doubles as a replacement for BigDecimal isn't a good idea. Presumably you are using BigDecimal (in part) to avoid the inevitable imprecision of a float-based number. Dividing doubles will simply mean reintroducing that imprecision.
What you want to do is provide a scale to the BigDecimal, which tells it how many decimal places to use. divide() has a scale parameter. So this:
Code:
BigDecimal bd1 = new BigDecimal(2);
BigDecimal bd2 = new BigDecimal(3);
BigDecimal result = bd1.divide(bd2);
System.out.println(result);
will fail with the exception you gave, but this:
Code:
BigDecimal bd1 = new BigDecimal(2);
BigDecimal bd2 = new BigDecimal(3);
BigDecimal result = bd1.divide(bd2, 20, RoundingMode.HALF_UP);
System.out.println(result);
gives an output of
0.66666666666666666667
(20 decimal places)

The double is simply a fallback, because most divisions that produce non-terminating decimals can easily be scaled to doubles. Both methods work.

Except if you want something accurate (which you will, if you are trying to calculate pi to some number of decimal places), a float-based type won't do, due to the inevitable inaccuracies of a float.
It's why currency is always held as a BigDecimal.

Since when is currency always held as a BigDecimal? That doesn't make intuitive sense. Currency is only ever held to two decimal places, with the third as a reference ($0.00 ± $0.001). As you saw in my output above, the double was accurate to 17 decimal places + a rounding decimal place. If you carried five decimal places, that would be excessively accurate for currency.

In any financial system I have worked on, that's where.
Currency conversion, for example, is usually held to 5 decimal places, and you'll start to lose more than pennies if that isn't accurate...

I just converted an arbitrary value of the highest-value currency (Kuwait Dinar) to the lowest-value currency (Zimbabwe Dollar), and didn't lose any precision by trimming to 5 decimal places.
Code:
12,345.67891 KWD = 15,724,848.09 ZWD
12,345.67891234568015 KWD = 15,724,848.09015080519021 ZWD
Where is the precision loss here? And we can make that number as big as we want (12,345 trillion, if you please); anything to the left of the decimal has no place in this.

Except you have introduced a problem as soon as you are dealing in such imprecision.
Having calculations that result in 1.99999999 instead of 2.0 (just an example) means you have introduced an incorrect value into your financial data. And since these calculations usually involve large numbers of values, each with its own potential inaccuracy, you will (yes, will, I have seen it) get results that are wrong. And those results leave you liable when the tax man comes to visit (or a client queries a total).
There is a reason that Sun, Oracle, IBM (to name but three) point out that all financial data should be handled using BigDecimals, or longs, rather than float/double.
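A small illustration of the kind of drift being described, a sketch where the repeated amount (0.10) is just an example: adding ten cents ten times with a double does not give exactly 1.0, while the BigDecimal sum stays exact.

```java
import java.math.BigDecimal;

public class DriftDemo {
    public static void main(String[] args) {
        double d = 0.0;
        BigDecimal bd = BigDecimal.ZERO;
        BigDecimal tenCents = new BigDecimal("0.10");

        // Add ten cents, ten times, both ways.
        for (int i = 0; i < 10; i++) {
            d += 0.10;
            bd = bd.add(tenCents);
        }
        System.out.println(d);  // prints "0.9999999999999999" -- binary drift
        System.out.println(bd); // prints "1.00" -- exact decimal arithmetic
    }
}
```

The double result is already a penny-rounding hazard after only ten additions; a real ledger performs millions.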

You're still truncating the data to 20 decimal places in the example above, though, whilst mine basically truncates it to 16ish. I know that's a factor of 10^4, but is it likely that you're going to be adding so many numbers that an error of 1x10^-17 matters? Likely not.

Try here.
Or here.
Or some stuff from IBM.
In essence, as I said above, don't use doubles for currency...you can choose to disbelieve me all you like, but if you do that in a financial application you will produce incorrect results.