Bytes IT Community

Testing math.h

Folks,

Looking at the sanity checks for math.h/math.c in "The Standard C Library",
they use 4 * DBL_EPSILON as the error range, which I think is equivalent to
three bits of error. Is this appropriate on old-style IBM 370 libraries with
their nasty drifting precision? If not, what would be an appropriate value?

Dave.

P.S. It's just that I am having problems with exp() and log(); I expected these
to be tricky.
Dec 24 '06 #1
1 Reply


"David Wade" <g8***@yahoo.com> wrote in message
news:no******************************@eclipse.net.uk...
Folks,

Looking at the sanity checks for math.h/math.c in "The Standard C Library",
they use 4 * DBL_EPSILON as the error range, which I think is equivalent to
three bits of error. Is this appropriate on old-style IBM 370 libraries with
their nasty drifting precision? If not, what would be an appropriate value?

Dave.

P.S. It's just that I am having problems with exp() and log(); I expected these
to be tricky.

The C Standard says nothing about accuracy. I chose a test criterion
which all sane libraries seemed to pass, but which quickly spotted
truly erroneous implementations. With hindsight, however, I can say
that such a criterion is rather naive, at least for anything more
than basic sanity.

First, DBL_EPSILON is only a coarse approximation to one "unit in the
least significant place" (ulp) when testing (d2 - d1) / d2. More
serious testers tend to report errors in ulps, except at zeros.

Second, even the concept of ulp is shaky in the presence of shifting
exponents, which change the weight of a ulp in different directions
for larger and smaller answers. We (Dinkumware) now use "reps" or
number of distinct representations between the ideal and actual
answer.

Third, even the concept of rep is shaky in regions where functions
are hypersensitive to the change of one rep in the argument. It has
long been known, for example, that sin(1e40) is not worth computing;
now we know it's because the answer depends way too sensitively on
small changes in the argument. You get similar sensitivity near
irrational zeros, such as sin(pi).

We now test math functions for worst-case error in reps, over several
thousand representative arguments, paying particular attention to
corner cases. Good implementations tend to be correct to a few reps
for the common functions in their not-terribly-sensitive regions.
Amusingly enough, that hard-won sophistication leads to a sanity
check that doesn't differ all that much from what I made up fifteen
years ago.

Go figure.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Dec 25 '06 #2
