Bytes IT Community

Double: 2 Decimal points

Hello experts,
I used to program in C/C++ and recently switched to Java. I am having a
difficulty that I need your help with. How can I limit a double variable to
hold 2 decimal places only? Say I have an array of 50 doubles, each holding
a number such as 23.9918444. I want to round this number to 23.99, and
any other calculations done on it should keep the same precision.

I know that DecimalFormat does the rounding, but the problem is that it
converts the value to a String, and I have a lot of calculations, so it would
be a big hassle to convert double -> String -> double.
Any help is really appreciated!!
Cheers
Jul 26 '05 #1
7 Replies


hana1 wrote:
Hello experts,
I used to program in C/C++ and recently switched to Java. I am having a
difficulty that I need your help with. How can I limit a double variable to
hold 2 decimal places only? Say I have an array of 50 doubles, each holding
a number such as 23.9918444. I want to round this number to 23.99, and
any other calculations done on it should keep the same precision.

I know that DecimalFormat does the rounding, but the problem is that it
converts the value to a String, and I have a lot of calculations, so it would
be a big hassle to convert double -> String -> double.
Any help is really appreciated!!
Cheers

How about a method that returns the value rounded to a precision of 2 decimal places:

import java.math.BigDecimal;

public double roundDBL(double targetDBL) {
    int decimalPlace = 2;
    // valueOf parses the double's decimal string form, avoiding the long
    // binary expansion that new BigDecimal(double) would carry over
    BigDecimal bd = BigDecimal.valueOf(targetDBL);
    // HALF_UP rounds 23.9918444 down to 23.99; ROUND_UP would give 24.00
    bd = bd.setScale(decimalPlace, BigDecimal.ROUND_HALF_UP);
    return bd.doubleValue();
}
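
For instance (a minimal sketch; the values array is hypothetical, standing in for the 50-element array in the question):

double[] values = { 23.9918444, 7.005, 1.994999 };
for (int i = 0; i < values.length; i++) {
    values[i] = roundDBL(values[i]);   // 23.99, 7.01, 1.99
}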

--
Thanks in Advance...
IchBin, Pocono Lake, Pa, USA http://weconsultants.servebeer.com
____________________________________________________________________________

' If there is one, Knowledge is the "Fountain of Youth"'
-William E. Taylor, Regular Guy (1952-)
Jul 26 '05 #2

tex
Just for what it's worth:

Floating-point vars do not properly represent all decimal fractions,
so no matter what rounding you do, sometimes your dbl var is not
going to hold what you may think. In the event you are working
with money, keep stuff in cents rather than dollars. In many cases
financial calculations are done with integer arithmetic at, say, 1/100th
of a cent.

Some years ago I took the trouble of finding some examples, and
IBM's mainframe floating-point encoding technique did not hold
0.10, so, say, adding 100 dimes gave you $9.99 (of course rounding
would take care of it, but not if you added enough of them, say for a
bank with 10s or 100s of thousands of customers, and large numbers
of transactions).

Today's computers, including IBM mainframes I believe, use IEEE
floating point, and I have not bothered to find examples, but it is
still true that not all decimals are representable in floating-point
vars.
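
The effect is easy to reproduce in Java (a minimal sketch; the exact digits printed may vary, but the shortfall is the point):

public class DimeSum {
    public static void main(String[] args) {
        double total = 0.0;
        for (int i = 0; i < 1000; i++) {
            total += 0.10;              // add one dime at a time
        }
        System.out.println(total);      // prints something like 99.9999999999986, not 100.0
    }
}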

"hana1" <ha***@rogers.com> wrote in message
news:6I********************@rogers.com...
Hello experts,
I used to program in C/C++ and recently switched to Java. I am having a
difficulty that I need your help with. How can I limit a double variable to
hold 2 decimal places only? Say I have an array of 50 doubles, each holding
a number such as 23.9918444. I want to round this number to 23.99, and
any other calculations done on it should keep the same precision.

I know that DecimalFormat does the rounding, but the problem is that it
converts the value to a String, and I have a lot of calculations, so it would
be a big hassle to convert double -> String -> double.
Any help is really appreciated!!
Cheers

Jul 27 '05 #3


"tex" <No****@hotmail.com> wrote in message
news:Og****************@newsread2.news.pas.earthlink.net...
Just for what it's worth:

Some years ago I took the trouble of finding some examples, and
IBM's mainframe floating-point encoding technique did not hold
0.10, so, say, adding 100 dimes gave you $9.99 (of course rounding
would take care of it, but not if you added enough of them, say for a
bank with 10s or 100s of thousands of customers, and large numbers
of transactions).

Today's computers, including IBM mainframes I believe, use IEEE
floating point, and I have not bothered to find examples, but it is
still true that not all decimals are representable in floating-point
vars.


Slightly off topic, but on mainframes financial applications tend to
use fixed point decimals. This does avoid the rounding errors in
adding and subtracting, but you pay for it by having a worse rounding
error with multiplication and division, so as far as I know it is not
a rational choice, just a traditional one.
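
To see why multiplication is the weak spot, here is a minimal sketch (values and scale invented for the example) of two-decimal fixed point in plain Java longs; every multiply produces a double-scale result that has to be rounded back:

long a = 105;                             // 1.05 at an implied scale of 100
long b = 115;                             // 1.15
long product = a * b;                     // 12075, i.e. 1.2075 at scale 10000
long backToScale = (product + 50) / 100;  // 121, i.e. 1.21 after rounding half up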
Jul 31 '05 #4

tex
Slightly off topic, but on mainframes financial applications tend to
use fixed point decimals. This does avoid the rounding errors in adding
and subtracting, but you pay for it by having a worse rounding error with
multiplication and division, so as far as I know it is not a rational
choice, just a traditional one.


It actually is the best choice. Mainframe financial applications are
generally done in BCD (Binary Coded Decimal) which, at the hardware
level, is actually integers, and any decimal point is in the hands of the
programmer (compilers, e.g. COBOL, take care of that, but
underneath it is all integers, though allowing much bigger integers than
32-bit integers). It is such a good way to do things that Intel included
BCD with the 8088 in the 1st PCs, and it's still available in today's chips.
Years ago, I was shocked to find spreadsheets (heavily used in financial
calcs) using floating pt, and also dismayed at C's not supporting BCD
as a native data type. Borland's Delphi includes a $-type which, by default,
assumes 4 decimal digits, and it supports a BCD object that lets
users select the precision, but underneath it is all integers.

When you control rounding, and you have total control with integers
and an implied decimal point, you (can) get exactly the rounding and
precision that you want. Because financial calculations need to agree
to the penny, integer arithmetic is the very best.
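
A minimal Java sketch of that style (the price and the 7% tax rate are invented for the example):

long priceCents = 1999;                 // $19.99 kept in cents
long subtotal = priceCents * 3;         // 5997 cents, exact
long tax = (subtotal * 7 + 50) / 100;   // 7% tax, rounded half up: 420 cents
long total = subtotal + tax;            // 6417 cents = $64.17, to the penny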

It is actually even better (i.e. more correct) with scientific calculations
when all the numbers used are in the range of integer arithmetic.
Consider that you measure distance traveled, d = 100.0 miles, and
the time it took, t = 3.00 hrs. The average velocity is then

v = d/t = 100.0/3.00 = 33.333333333... (before rounding)

but, in science & engineering, d has 4 significant digits and t has 3
significant digits (had we measured more accurately, we should
express that in our data by showing more significant digits). In
multiplication & division (and trig functions, et al.), scientific principles
say we can only express our answer to the least number of significant
digits in our operands, here 3. So the proper scientific expression
of the answer is: v = 33.3 mi/hr.
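
For what it's worth, Java 5's BigDecimal can express that significant-digit rounding directly (a minimal sketch using MathContext):

import java.math.BigDecimal;
import java.math.MathContext;

double d = 100.0, t = 3.00;
BigDecimal v = new BigDecimal(d / t, new MathContext(3)); // keep 3 significant digits
System.out.println(v + " mi/hr");                         // 33.3 mi/hr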

In general, because floating point does not properly represent all decimals,
it will lead to more errors, while, within its range, integer arithmetic
(with sufficient implied decimal places) is better. Floating point is so
close to the answer that it is great for scientific purposes, but even
then care should be used: the precision of the numbers needs to be
considered, and even the order of operations should be considered.

Floating point is easier to work with. It keeps up with the decimal,
and it can express ranges totally out of the limits of integer arithmetic, such
as the electron rest mass = 0.910953x10^-30 kg. Supercomputers
are super, in part, because of their highly optimized floating-point hardware.
Their 32-bit integer arithmetic may be little if any better than a PC's.

--tex



Aug 1 '05 #5

rik

"tex" <No****@hotmail.com> schreef in bericht
news:MJ*****************@newsread1.news.pas.earthl ink.net...

It actually is the best choice. Mainframe financial applications are
generally done in BCD (Binary Coded Decimal) which, at the hardware
level, is actually integers, and any decimal point is in the hands of the
programmer (compilers, e.g. COBOL, take care of that, but
underneath it is all integers, though allowing much bigger integers than
32-bit integers). It is such a good way to do things that Intel included
BCD with the 8088 in the 1st PCs, and it's still available in today's chips.
Years ago, I was shocked to find spreadsheets (heavily used in financial
calcs) using floating pt, and also dismayed at C's not supporting BCD
as a native data type. Borland's Delphi includes a $-type which, by default,
assumes 4 decimal digits, and it supports a BCD object that lets
users select the precision, but underneath it is all integers.


One problem I have with this is that (1.0 / 3.0) + (1.0 / 3.0) + (1.0 / 3.0)
and ((1.0 + 1.0 + 1.0)/ 3.0) have a result that differs by 10%, and that my
experience is that the compiler doesn't always catch that, and that
mainframe programmers don't always think about that.
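
The 10% figure matches a one-decimal fixed-point format; a minimal sketch using Java's BigDecimal pinned to scale 1:

import java.math.BigDecimal;

BigDecimal one = new BigDecimal("1.0");
BigDecimal three = new BigDecimal("3.0");
BigDecimal third = one.divide(three, 1, BigDecimal.ROUND_HALF_UP);    // 0.3
BigDecimal divideFirst = third.add(third).add(third);                 // 0.9
BigDecimal addFirst = one.add(one).add(one)
                         .divide(three, 1, BigDecimal.ROUND_HALF_UP); // 1.0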

Another problem with BCD is that the decimal representation is rather
arbitrary (based on the number of fingers a mathematician in ancient India
happened to have had), and thus causes a lot of extra overhead for the CPU.

By the way, there are some excellent big decimal libraries available for C.
http://www.gnu.org/software/gmp/
Aug 3 '05 #6

tex
> One problem I have with this is that (1.0 / 3.0) + (1.0 / 3.0) + (1.0 / 3.0)
> and ((1.0 + 1.0 + 1.0)/ 3.0) have a result that differs by 10%, and that my
> experience is that the compiler doesn't always catch that, and that
> mainframe programmers don't always think about that.

This has nothing to do w/ integer vs floating pt arithmetic, but rather the
problem is that computers, having finite representations of numbers, cannot
do math correctly, PERIOD! (a x b)/c does not always equal a x (b/c)
on a computer and this is true regardless of number representation or
compiler. It is a programmer's problem. College courses in Numerical
Analysis address these problems.

The main point is that w/ integer arithmetic I can get as close as necessary
and can control the amount of error I will allow.

Secondly, all programmers, whether mainframe or otherwise, should
think about it. With a string of operations in a formula, the order of the
operations can make a significant difference.

This thread started w/ hana1 wanting to round floating-pt numbers to 2
digits (e.g. 23.9918444 rounded to 23.99) and some programmers may
not realize that 23.99 may not be exactly 23.99 no matter what. Even
using floating pt, if hana1 keeps her array in 1/100's she will be better
off, because floating point DOES properly represent integers w/i its range
of precision, i.e. hana1 could exactly store 23.99 as 2,399, but she would
need to remember to divide by 100 and round prior to display on screen
or report. Depending upon her other calculations and requirements, this
is likely one of the best choices for her (programming ease and accuracy).
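
A minimal sketch of that suggestion (assumes non-negative values; the printf formatting below would need more care for negatives):

long[] hundredths = new long[50];              // hana1's array, kept in 1/100's
hundredths[0] = Math.round(23.9918444 * 100);  // 2399, rounded once on input
// ... all further arithmetic stays in whole 1/100's ...
System.out.printf("%d.%02d%n",
        hundredths[0] / 100, hundredths[0] % 100);  // prints 23.99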

> Another problem with BCD is that the decimal representation is rather
> arbitrary (based on the number of fingers a mathematician in ancient India
> happened to have had), and thus causes a lot of extra overhead for the CPU.

The BCD on IBM mainframes as well as on PCs is well defined. Last I looked,
the PC's numeric processor recognizes a packed BCD number as a 10-byte number,
with the 1st nibble of the first byte being the sign and the remaining nibbles
each holding a decimal digit, for a total of a 19-digit integer.
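
A toy packer gives the flavor of the nibble idea (a simplified sketch for non-negative integers; it omits the sign nibble of the real formats described above):

static byte[] packBcd(long n, int numBytes) {
    byte[] out = new byte[numBytes];
    for (int i = numBytes - 1; i >= 0 && n > 0; i--) {
        int low = (int) (n % 10);  n /= 10;   // low nibble: ones digit
        int high = (int) (n % 10); n /= 10;   // high nibble: tens digit
        out[i] = (byte) ((high << 4) | low);  // two decimal digits per byte
    }
    return out;                               // packBcd(2399, 2) -> {0x23, 0x99}
}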

The IBM mainframe format was a variable number of bytes, up to 16 as I
recall, with the last nibble of the last byte being the sign for a max of a
31-digit integer. I wouldn't be surprised if IBM's newer mainframes and
perhaps the AS400 support even bigger BCD integers, but I don't know.

New PC chips, and Java on standard PCs, support 64-bit integers, which
allow 18-digit signed integers.

> By the way, there are some excellent big decimal libraries available for C.
> http://www.gnu.org/software/gmp/


Yep, decimal (integer) arithmetic is so important that such libraries came
out very quickly. Because so much of computer usage is commercial (i.e. $),
it should have been a standard data type. I think the original C guys,
Kernighan & Ritchie, were engineers and developed C for a DEC PDP-11,
a 16-bit mini without BCD arithmetic, under Unix, and they did not consider
commercial-type computing. C was a great idea, and 64-bit integers
eliminate the need for BCD much of the time. I tip my hat to K&R even
w/o BCD in C.

--tex


Aug 3 '05 #7


"tex" <No****@hotmail.com> wrote in message
news:VT***********@newsread3.news.pas.earthlink.ne t...
>> One problem I have with this is that (1.0 / 3.0) + (1.0 / 3.0) + (1.0 / 3.0)
>> and ((1.0 + 1.0 + 1.0)/ 3.0) have a result that differs by 10%, and that my
>> experience is that the compiler doesn't always catch that, and that
>> mainframe programmers don't always think about that.


> This has nothing to do w/ integer vs floating pt arithmetic, but rather the
> problem is that computers, having finite representations of numbers, cannot
> do math correctly, PERIOD! (a x b)/c does not always equal a x (b/c)
> on a computer and this is true regardless of number representation or
> compiler.


This isn't strictly true; some computing environments do calculations in
a symbolic manner instead of numerically (e.g. Matlab).

If you declare N = 1, D = 3, and X = N/D, it does not store 0.3333...
into X, but actually stores the equation N/D. If you later multiply X by 3,
it will be able to perform the simplification and store X = N, which gives
the correct result.
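
The flavor of that can be imitated in Java with a toy exact-rational type (a hypothetical sketch, not a real library): keep N and D, reduce to lowest terms, and never divide eagerly:

final class Ratio {
    final long num, den;
    Ratio(long n, long d) {
        long g = gcd(Math.abs(n), Math.abs(d));
        num = n / g;  den = d / g;            // always store in lowest terms
    }
    Ratio times(long k) { return new Ratio(num * k, den); }
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }
}

Ratio x = new Ratio(1, 3);   // stores 1/3 exactly, no 0.3333... truncation
Ratio y = x.times(3);        // 3/3 simplifies back to exactly 1/1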

- Oliver
Aug 12 '05 #8
