Slightly off topic, but on mainframes financial applications tend to
use fixed point decimals. This does avoid the rounding errors in adding
and subtracting, but you pay for it by having a worse rounding error with
multiplication and division, so as far as I know it is not a rational
choice, just a traditional one.
It actually is the best choice. Mainframe financial applications are
generally done in BCD (Binary Coded Decimal) which, at the hardware
level, is really integer arithmetic; any decimal point is in the hands of
the programmer (compilers, e.g. COBOL, take care of that, but,
underneath, it is all integers, though allowing much bigger integers than
32-bit ones). It is such a good way to do things that Intel included
BCD instructions with the 8088 in the first PCs, and they are still
available in today's chips.
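To make the BCD idea concrete, here is a minimal sketch (the helper names are my own, not any real API) of packed BCD: each decimal digit occupies one 4-bit nibble, so values stay exactly decimal end to end.

```python
def to_packed_bcd(n: int) -> bytes:
    """Encode a non-negative integer as packed BCD, two digits per byte."""
    digits = [int(d) for d in str(n)]
    if len(digits) % 2:                 # pad to an even digit count
        digits.insert(0, 0)
    return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

def from_packed_bcd(b: bytes) -> int:
    """Decode packed BCD back to an integer."""
    n = 0
    for byte in b:
        n = n * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    return n

print(to_packed_bcd(1234).hex())    # → '1234'
print(from_packed_bcd(to_packed_bcd(987654)))   # → 987654
```

Note how the hex dump of the encoding reads the same as the decimal number, which is exactly why BCD is pleasant to debug on a mainframe dump.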
Years ago, I was shocked to find spreadsheets (heavily used in financial
calculations) using floating point, and also dismayed that C does not
support BCD as a native data type. Borland's Delphi includes a Currency
type which, by default, assumes four decimal digits, and it supports a
BCD object that lets users select the precision; but, underneath, it is
all integers.
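A Currency-style value is just an integer scaled by 10,000, i.e. four implied decimal places. A hedged sketch of the idea (the function names are illustrative, not Delphi's actual API):

```python
SCALE = 10_000          # four implied decimal digits, Currency-style

def to_currency(s: str) -> int:
    """Parse a non-negative decimal string like '19.99' into a scaled integer."""
    whole, _, frac = s.partition(".")
    frac = (frac + "0000")[:4]          # pad/truncate to 4 places
    return int(whole) * SCALE + int(frac)

def currency_str(v: int) -> str:
    """Render a scaled integer back as a decimal string."""
    return f"{v // SCALE}.{v % SCALE:04d}"

total = to_currency("19.99") + to_currency("0.01")   # exact, no float involved
print(currency_str(total))   # → '20.0000'
```

The addition is an ordinary integer add, so 19.99 + 0.01 comes out exactly 20, with no binary-fraction residue.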
When you control rounding, and you have total control with integers
and an implied decimal point, you (can) get exactly the rounding and
precision that you want. Because financial calculations need to agree
to the penny, integer arithmetic is the very best.
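One small sketch of what "agree to the penny" looks like with integers: hold amounts in cents, and when splitting an amount, distribute the remainder explicitly so every share is a whole cent and the shares sum back exactly. (The function name is my own.)

```python
def split_evenly(total_cents: int, n: int) -> list[int]:
    """Split a bill into n shares differing by at most one cent, summing exactly."""
    base, remainder = divmod(total_cents, n)
    # the first `remainder` shares absorb the leftover cents
    return [base + 1] * remainder + [base] * (n - remainder)

shares = split_evenly(10000, 3)        # $100.00 three ways
print(shares)            # → [3334, 3333, 3333]
print(sum(shares))       # → 10000  (agrees to the penny)
```

With floating point, $100.00 / 3 gives three copies of 33.333... and the programmer has to hope the rounded shares still reconcile; with integers, reconciliation is guaranteed by construction.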
It is actually even better (i.e., more correct) with scientific calculations
when all the numbers used are in the range of integer arithmetic.
Consider that you measure distance traveled, d = 100.0 miles, and
the time it took, t = 3.00 hrs. The average velocity is then
v = d/t = 100.0/3.00 = 33.333333333...(before rounding)
but, in science and engineering, d has 4 significant digits and t has 3
significant digits (had we measured more accurately, we should
express that in our data by showing more significant digits). In
multiplication and division (and trig functions, et al.), scientific
principles say we can only express our answer to the least number of
significant digits among our operands, here 3. So the proper scientific
expression of the answer is: v = 33.3 mi/hr.
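The significant-figures rule above can be sketched in a few lines (the helper name is my own):

```python
import math

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant digits."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, sig - 1 - exponent)

d, t = 100.0, 3.00          # 4 and 3 significant digits, respectively
v = d / t                   # 33.333333...
print(round_sig(v, 3))      # → 33.3
```

The quotient carries as many digits as the machine will hold, but only three of them are scientifically meaningful, so the reported answer is 33.3 mi/hr.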
In general, because floating point does not exactly represent all decimals,
it will lead to more errors, while, within its range, integer arithmetic
(with sufficient implied decimal places) is better. Floating point is so
close to the answer that it is great for scientific purposes, but even
then care should be used: the precision of the numbers needs to be
considered, and even the order of operations should be considered.
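Both caveats are easy to demonstrate: binary floating point cannot represent most decimal fractions exactly, and changing the order of operations changes the result.

```python
# Decimal fractions like 0.1 have no exact binary representation:
print(0.1 + 0.2 == 0.3)        # → False
print(0.1 + 0.2)               # → 0.30000000000000004

# Order of operations: adding ten 1.0s one at a time to 1e17 loses them
# all (each 1.0 is below half a unit in the last place of 1e17), while
# summing the small terms first preserves their contribution.
left = 1e17
for _ in range(10):
    left += 1.0                # each 1.0 vanishes
right = 1e17 + sum([1.0] * 10) # the 10.0 is big enough to register
print(left == right)           # → False
```

This is why careful numerical code sums small terms before large ones, and why financial code avoids the problem entirely with integers.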
Floating point is easier to work with. It keeps up with the decimal point,
and it can express ranges totally out of the limits of integer arithmetic,
such as the electron rest mass = 0.910953x10^-30 kg. Supercomputers
are super, in part, because of their highly optimized floating-point
hardware. Their 32-bit integer arithmetic may be little if any better
than a PC's.
--tex