> From: TK <to****@hotmail .com>
> I want to be able to enter a value on a page (like 3.2), and then
> read it as a 32-bit float and break it into its individual bytes.
That's not possible, because 3.2 (i.e. 32/10) cannot be exactly
represented as a floating-point value. When you input the string "3.2"
and have it parsed as a floating point value, you get a *different*
value that is very close to but not exactly equal to 32/10.
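You can see both halves of the question directly; here is a minimal Python sketch (using only the standard struct and fractions modules) that packs "3.2" as a 32-bit IEEE 754 float, shows its four bytes, and confirms that the value actually stored is not 32/10:

```python
import struct
from fractions import Fraction

# Pack 3.2 as a big-endian IEEE 754 single (32-bit float)
# and look at its four individual bytes.
f32 = struct.pack(">f", 3.2)
print(f32.hex())                    # -> "404ccccd"

# Unpack it again and compare the stored value, exactly, with 32/10.
stored = Fraction(struct.unpack(">f", f32)[0])
print(stored == Fraction(32, 10))   # -> False
print(float(stored))                # close to, but not exactly, 3.2
```

The float-to-Fraction conversion is exact, so the False on the second line is not a rounding artifact of the comparison itself: the stored bit pattern really denotes a different rational number than 32/10.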
Floating point numbers are a crock. They were fine as a temporary hack
in Fortran in 1964 to allow approximate calculations to be done, with
no control over errors, but hopefully not too large errors. But a lot
of time has passed and it's way, way *WAY* overdue to replace floating
point numbers with interval arithmetic, where you get an assured bound
on the error of calculations. Then 3.2 would be parsed as the rational number
32/10, and then if you wanted it converted to a binary fraction it'd be
converted into an interval which contains 32/10 and whose endpoints are
as close on either side as possible given the specified amount of
precision allowed in the binary-fraction representation. For
example, if you allowed 30 bits precision to the right of the "decimal
point", i.e. accuracy down to 2**(-30), then you'd get an interval of
[3435973836, 3435973837]b-30 internally (actually I should show the
internal integers in hexadecimal, i.e. [CCCCCCCC, CCCCCCCD]b-1E). Then
if you asked that interval to be converted back to decimal form, with a
precision of 9 decimal digits after the decimal point, you'd get the
interval [3.199999999, 3.200000001] which could be expressed more
compactly as 3.20000000[-1..1] showing clearly that the correct number
is very close to 3.2 although because of conversions to binary and back
to decimal the program can't say the value is exactly 3.2. With that
same internal representation, if you asked for output in decimal
carried to 15 decimal digits, you'd see
[3.199999999254941, 3.200000000186265] which could be abbreviated as
3.200000000[-745059..186265]. But most likely you'd prefer a default
print mode which automatically takes as many digits as agree at both
endpoints and then shows just the next one or two additional digits
where the endpoints disagree, rounded away, i.e.: 3.200000000[-75..19]
or 3.200000000[-8..2].
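The two conversions above (decimal string to exact rational to tightest 30-bit binary interval, then back to an outward-rounded decimal interval) can be sketched in a few lines of Python; the function names here are mine for illustration, not part of any proposed standard:

```python
from fractions import Fraction

def to_binary_interval(rat: Fraction, bits: int = 30):
    """Tightest [lo, hi] (integers, scale 2**-bits) containing rat."""
    scaled = rat * 2**bits
    lo = scaled.numerator // scaled.denominator     # floor
    hi = lo if scaled == lo else lo + 1             # ceil
    return lo, hi

def to_decimal_interval(lo, hi, bits=30, digits=9):
    """Outward-rounded decimal interval, in units of 10**-digits."""
    scale = 10**digits
    dlo = (lo * scale) // 2**bits                   # round lower end down
    dhi = -((-hi * scale) // 2**bits)               # round upper end up
    return dlo, dhi

lo, hi = to_binary_interval(Fraction(32, 10))
print(lo, hi)              # -> 3435973836 3435973837
print(hex(lo), hex(hi))    # -> 0xcccccccc 0xcccccccd
print(to_decimal_interval(lo, hi))             # -> (3199999999, 3200000001)
print(to_decimal_interval(lo, hi, digits=15))
# -> (3199999999254941, 3200000000186265)
```

Because the lower endpoint is rounded down and the upper endpoint up at every step, the true value 32/10 is guaranteed to stay inside the interval, which is the whole point of the scheme.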
So will you join me in an effort to establish a multi-platform (*)
standard and conforming implementations for each platform?
* = (Java, JavaScript, CommonLisp, EmacsLisp, and any other languages
that we find likewise suitable)
Or would you rather use crufty old stupid floats and get back an exact
value of 3.199999999254941 and have no idea whether 3.2 is consistent
with the program's calculation?