jacob navia <ja***@nospam.com> writes:
> Flash Gordon wrote:
> [...]
>>> Since the standard doesn't force even IEEE754, there is *nothing*
>>> the C language by itself can guarantee the user.

(I think jacob wrote the above; the attribution was snipped.)

>> Are you seriously trying to claim that the only way to provide any
>> form of guarantee on floating point is to enforce IEEE754?  If
>> there isn't even an accepted standard that is enforced, how can
>> you guarantee anything?
> [...]
> How could the standard guarantee ANYTHING about the precision of
> floating point calculations when it doesn't even guarantee a common
> standard?
>
> Yes, I am seriously saying that the absence of an enforced standard
> makes any guarantee IMPOSSIBLE.
First of all, the C standard says that accuracy of floating-point
operations is implementation-defined, though the implementation is
allowed to say that the accuracy is unknown. This doesn't refute
jacob's claim, but it does mean that a particular implementation is
allowed to make guarantees that the standard doesn't make.
It's entirely possible for a language standard to make firm guarantees
about floating-point accuracy without mandating adherence to one
specific floating-point representation. For example, the Ada standard
has a detailed parameterized floating-point model with specific
precision requirements; Ada's requirements are satisfied by an IEEE
754 implementation, but can also be (and have been) satisfied by a
number of other floating-point representations.  For example, in
Ada, 1.0 + 1.0 is guaranteed to be exactly equal to 2.0.
The C standard could have followed the same course (and since it was
written several years after the first Ada standard, I'm not entirely
sure why it didn't). But instead, the authors of the C standard chose
to leave floating-point precision issues up to each implementation.
In practice, each C implementation and each Ada implementation usually
just uses whatever floating-point representation and operations are
provided by the underlying hardware. (Exception: some systems
probably use software floating-point, and some others might need to do
some tweaking on top of what the hardware provides.) In most cases,
the precision provided by the hardware is as good as what the language
standard *could* have guaranteed.
Yes, jacob, it's true that the C standard makes no guarantees about
floating-point precision. It does make some guarantees about
floating-point range, which seems to contradict your claim above that
"there is *nothing* the C language by itself can guarantee the user".
(Perhaps in context it was sufficiently clear that you were talking
only about precision; I'm not going to bother to go back up the thread
to check.)
As for *why* the C standard doesn't require IEEE 754, there have been
plenty of computers that support other representations, and it would
have been absurd for the C standard to require IEEE 754 on systems
that don't provide it. (Examples: VAX, older Cray vector systems, and
IBM mainframes each have their own floating-point formats; there are
undoubtedly more examples.) The intent of IEEE 754 is to define a
single universal floating-point standard, and we're headed in that
direction, but we're not there yet -- and we certainly weren't there
when the C89 standard was being written.
[...]
> This is an example of how the "regulars" spread nonsense just with
> the only objective of "contradicting jacob" as they announced here
> yesterday.
Tedious rant ignored.
--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"