
Sine code for ANSI C

Hello,
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file. I went to the math
directory and found math.h, which I guess needs to be included to use
the sine function. But which .c file implements the sine/cosine and
other trig functions?
Thanks
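
math.h only declares the prototype; in glibc the implementation lives
under the sysdeps tree rather than next to math.h. For IEEE-754 doubles
the double-precision sine is typically in sysdeps/ieee754/dbl-64/s_sin.c,
though the exact path varies by glibc version and architecture. A minimal
sketch of calling it, assuming a Unix-style toolchain where you link the
math library with -lm:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.0;
    printf("sin(%g) = %.17g\n", x, sin(x));  /* glibc supplies the body */
    return 0;
}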
Nov 14 '05
"Christian Bau" <ch***********@ cbau.freeserve. co.uk> wrote in message
news:ch******** *************** **********@slb-newsm1.svr.pol. co.uk...
In article <c8**********@s mof.fiawol.org> ,
jd*@smof.fiawol .org (John Cochran) wrote:
In article <4t************ ******@nwrddc03 .gnilink.net>,
P.J. Plauger <pj*@dinkumware .com> wrote:
SNIP...

But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?


Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78, give or take, will give a result close to the upper
limit representable. The main difference for sin() vs exp() is that for
all of the values you can pass to exp(), there is an answer, and it is
possible to come up with a floating point number closest to the correct
answer to infinite precision. However, with sin(), once you get to an
input where the magnitude of the ulp is greater than 2 pi, it becomes
impossible to decide what the correct answer is.


If the argument x is around 709 then, according to the theory that a
double number represents an interval, x represents quite a large
interval; for example +/- 2^-44 with IEEE 64 bit representation. Not
quite as bad as for the sine function, but you have to start thinking
about whether a maths library should return results with full precision
or not.
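
The width of that interval is easy to measure with nextafter(). A
minimal sketch, assuming IEEE-754 doubles, which also shows roughly
where the ulp of a double first grows past 2*pi:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* distance from 709.0 to the next representable double */
    double x = 709.0;
    double ulp = nextafter(x, INFINITY) - x;
    printf("ulp(709.0) = %g (2^%.0f)\n", ulp, log2(ulp));  /* 2^-43 */

    /* ulp(x) = 2^(e-52) for 2^e <= x < 2^(e+1), so it first exceeds
       2*pi at x = 2^55, where neighbouring doubles are 8 apart */
    double big = ldexp(1.0, 55);
    printf("ulp(2^55)  = %g\n", nextafter(big, INFINITY) - big);
    return 0;
}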


According to that theory, yes. I've never disputed that. The issue
I keep coming back to, however, is that no programming language
standard I know of invites authors of math libraries to take
advantage of that theory. Rather, standards define a floating-point
value as its overt value -- summing the contributions of all the
mantissa bits, multiplying by the exponent, negating if negative.
Nowhere does it say that the argument is fuzzy and the result
should therefore reflect the fuzziness.

And in support of this viewpoint, any number of test suites actually
check the *quality* of a math library on the basis of how close
the actual function values are to the best internal representation
of the function value corresponding to the argument treated as an
exact value. I assure you that existing customers file bug reports
and potential customers stay away in droves when a library shows
up too many "errors" on these tests. Fuzz just don't cut it.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #141
"Dr Chaos" <mb************ ****@NOSPAMyaho o.com> wrote in message
news:sl******** *************** ********@lyapun ov.ucsd.edu...
On Mon, 24 May 2004 16:40:35 GMT, P.J. Plauger <pj*@dinkumware .com> wrote:
"John Cochran" <jd*@smof.fiawo l.org> wrote in message
news:c8******** **@smof.fiawol. org...
In article <4t************ ******@nwrddc03 .gnilink.net>,
P.J. Plauger <pj*@dinkumware .com> wrote:
SNIP...

>
>But the repeated argument is that the silliness stems from the fact
>that the floating-point value x stands for the interval (x - 1ulp,
>x + 1ulp). If it didn't, it would stand for the value it seems to
>stand for, and the library writer might be obliged to actually
>compute the function value corresponding to it. Well, what's sauce
>for the goose is sauce for the gander. Why should sine get progressively
>fuzzier as that interval covers a wider range of function values and
>not require/allow/expect exactly the same from every other function
>that does the same? The exponential suffers from *exactly* the
>same fuzziness problem, to take just the best established of the
>fast-moving functions I mentioned earlier. Aren't we doing all our
>customers a disservice by giving them a misleading value for exp(x)
>when x is large?

Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78, give or take, will give a result close to the upper
limit representable. The main difference for sin() vs exp() is that for
all of the values you can pass to exp(), there is an answer, and it is
possible to come up with a floating point number closest to the correct
answer to infinite precision. However, with sin(), once you get to an
input where the magnitude of the ulp is greater than 2 pi, it becomes
impossible to decide what the correct answer is.
Sigh. Let's try again. You're still arguing from the notion that
a given floating-point number represents a range of values, from
the next lower representable value to the next higher.


I think they're arguing from the notion that "the purpose of
computing is insight, not numbers."


That's nice. But you ain't gonna get squat in the way of insight
if your computer doesn't deliver on its promise to be damned fast
and highly accurate at carrying out your wishes. When R.W. Hamming
dropped that little aphorism, he was taking for granted that the
computer would fulfill its part of the bargain. He was reminding
the human users what they should be bringing to the party.
But if that's how we're going to treat sine, it's hard to make a
case for treating any other function differently.

The fast moving
functions, of which exp and tgamma are just two examples, *also*
get fuzzier as their arguments get larger. I should stick to
calculus, but I'll instead give a concrete example. If you compute
exp(700.0), you get an answer of about 1.014232054735004e+304.
Now try computing the exponential of the next larger representable
value (which you can obtain from nextafter(700.0, INF)). It looks
like 1.01423205473512e+304, printed to the same precision. That
value is 947 ulp bigger than exp(700.0).
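
That experiment takes only a few lines of C to reproduce. A sketch,
assuming IEEE-754 doubles; the exact ulp count depends on how accurate
the library's exp() is:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x   = 700.0;
    double x1  = nextafter(x, INFINITY);       /* next double above 700.0 */
    double y   = exp(x), y1 = exp(x1);
    double ulp = nextafter(y, INFINITY) - y;   /* ulp at exp(700.0) */

    printf("exp(700.0)       = %.17g\n", y);
    printf("exp(next(700.0)) = %.17g\n", y1);
    printf("gap              = %.0f ulp\n", (y1 - y) / ulp);  /* ~947 */
    printf("relative change  = %.3g\n", (y1 - y) / y);        /* ~2e-13 */
    return 0;
}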


relative error is what?


Roughly +/- 2^10 / 2^53. Or a loss of eleven bits out of 53.
So by the logic repeatedly expressed in this thread, it's perfectly
fine for exp(700.0) to return *any* value within plus or minus
900-odd ulp. That's nearly *eleven garbage bits*.


What matters is the number of non-garbage bits.


On the input value? Yes, that's what matters to the *programmer*.
The library writer has no idea whether there are *any* garbage
bits. We as a class are expected to assume there are none.
And people would like us to deliver results with *no* garbage
bits (assuming an exact input), so we don't add unnecessarily
to the problem.
Suppose that number were zero.
Which number? I'm supposing that the argument has no garbage
bits and that the highest quality implementation produces
no garbage bits for a function return value. Haven't heard
any better criterion yet.
After all,
whoever computed that 700.0 ought to know the effect of a 1 ulp
error on the argument, so it's silly -- and a waste of time --
for the library to try too hard to get the exponential right.


To some degree, yes, but the case is far stronger for sin/cos.


Really? How? I've yet to hear an argument that doesn't apply
equally well to the other functions. If you're talking about
the *likelihood* that sin(700.0) is not a well thought out
function call, then I agree that it's more likely to be
nonsense than exp(700.0). But once again, *the C Standard doesn't
say so and the library author has to assume that both are valid
calls.* Nobody has given a concrete reason why the library author
should be let off the hook so far, IMO.
Right?

But if you assume, for the sake of computing any of these fast
moving functions, that the argument represents some *exact*
value, then there is a well defined function value that can
be delivered corresponding to that exact value. And I'll bet
there's more than one physicist, engineer, and/or economist
who'd rather get that value than something about 1,000 ulp
off.


Let's stick to sin and cos. I haven't heard of one explicit
example of somebody who really thought about this and really
needed it.


And I have yet to have a customer call up and say that computing
exp(0.73) was *really* important. Is this a popularity contest
we're running, or are we trying to understand a programming
language specification?
By contrast, if I had a student writing some optics simulation
software, say simulating chaos in erbium-doped fiber ring lasers, and
he or she took sin() or cos() of a very large value, they were making a
clear error by doing so. It conceivably could happen if they
implemented some textbook formulae naively.
Agreed. What has that to do with the requirements on writing the
Standard C library? We have to be sure to sort zero items properly
with qsort, but if I ever saw a student program that explicitly
called qsort for zero items I'd know s/he was confused. Such
examples are irrelevant to the requirements placed on library
writers.
I would feel more comfortable if the library automatically signaled
this, as it would be an instructive point, and it might prevent
wasted calculation or worse, an improper scientific inference.


So would I. But I've yet to hear presented here a concrete, *defensible*
definition of where "this" cuts in for sine.
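
A guard of the sort being asked for is easy to sketch outside the
library. checked_sin below is a hypothetical wrapper, not a standard or
glibc interface, and where its warning should cut in is exactly the
open question; here it fires once neighbouring doubles are more than a
full period apart:

#include <math.h>
#include <stdio.h>

static double checked_sin(double x)
{
    double ax  = fabs(x);
    double ulp = nextafter(ax, INFINITY) - ax;
    if (ulp > 2.0 * acos(-1.0))   /* adjacent arguments span > 2*pi */
        fprintf(stderr, "warning: sin(%g): argument ulp exceeds 2*pi\n", x);
    return sin(x);
}

int main(void)
{
    printf("%g\n", checked_sin(1e17));   /* triggers the warning */
    return 0;
}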
Pragmatically speaking, I do know that plenty of customers
will complain if your exponential function sucks as much as
the interval interpretation would permit.


who would complain, and what would be the particular application
they'd complain about?


Not for me to say. If I told them their application was unimportant,
or poorly designed, or had bad feng shui, would that let me off the
hook?
You might argue that the sine in quadrants becomes silly
once you get to counting by 180 degrees, or multiples
thereof. But the round function gets equally silly for
comparable numbers -- once the fraction bits go away, the
answer is obvious and trivial to compute. *But it's still
well defined and meaningful.* If you were to start
throwing exceptions, or returning NaNs, just because the
rounded result is so obvious, I assure you that many
people would have occasion to complain, and justifiably
so. Why should sine be any different?
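
The round() point is easy to see in two lines: above 2^52 every
IEEE-754 double is already an integer, so rounding becomes the
identity -- obvious, but still well defined. A sketch:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double big = ldexp(1.0, 52) + 1.0;   /* 2^52 + 1: no fraction bits left */
    printf("round(%.1f) = %.1f\n", big, round(big));   /* identity */
    printf("spacing here = %.1f\n",
           nextafter(big, INFINITY) - big);            /* 1.0 */
    return 0;
}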


Because the uses of sine are different, and rounding
produces useful results.


So too do sine and cosine, quite often. Who are you to tell
somebody else that s/he doesn't know what s/he is doing?
You might be right that they *likely* don't know what they're
doing, but you'd be foolish to take that for granted from
the outset.
But I've yet to see presented in this thread any
rationale for *not* computing sine properly that holds
water.


A programmer's conceptual blunder.


Okay, quantify that and reduce it to standardese and I'm all
for it.
And I've yet to see a rationale for picking a
cutoff point, or an error code, or an error return value,
for large sines that also holds water.


Also FWIW, I just computed the change in the sine function
between sin(700.0) and sin(nextafter(700.0, INF)). It's
-859 ulp. That's a *smaller* interval than for exp in
the same argument range. So tell me again why it's okay to
punt on sine and not on exp.
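
That sin() measurement can be reproduced with the usual bit-pattern
trick: for finite IEEE-754 doubles of one sign, the ulp distance is the
difference of the representations read as integers. A sketch; the exact
count depends on the library's sin():

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int64_t as_bits(double d)
{
    int64_t i;
    memcpy(&i, &d, sizeof i);   /* well-defined way to read the bits */
    return i;
}

int main(void)
{
    double s0 = sin(700.0);
    double s1 = sin(nextafter(700.0, INFINITY));
    printf("sin gap = %lld ulp\n",
           (long long)(as_bits(s1) - as_bits(s0)));   /* about -859 */
    return 0;
}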


relative error, and facts of actual use.


The relative errors for sin(700) and exp(700) are comparable,
as I just demonstrated (and as is obvious from a few lines
of calculus). "Facts of actual use" are anecdotal at best
and pure conjecture at worst. Whatever it is, it ain't math
and it ain't standardese.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #142
In article <Y9*****************@nwrddc03.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote:
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*************************@slb-newsm1.svr.pol.co.uk...
If the argument x is around 709 then, according to the theory that a
double number represents an interval, x represents quite a large
interval; for example +/- 2^-44 with IEEE 64 bit representation. Not
quite as bad as for the sine function, but you have to start thinking
about whether a maths library should return results with full precision
or not.


According to that theory, yes. I've never disputed that. The issue
I keep coming back to, however, is that no programming language
standard I know of invites authors of math libraries to take
advantage of that theory. Rather, standards define a floating-point
value as its overt value -- summing the contributions of all the
mantissa bits, multiplying by the exponent, negating if negative.
Nowhere does it say that the argument is fuzzy and the result
should therefore reflect the fuzziness.

And in support of this viewpoint, any number of test suites actually
check the *quality* of a math library on the basis of how close
the actual function values are to the best internal representation
of the function value corresponding to the argument treated as an
exact value. I assure you that existing customers file bug reports
and potential customers stay away in droves when a library shows
up too many "errors" on these tests. Fuzz just don't cut it.


Actually, what I meant was: With the exp function for large arguments as
an example, you have to make up your mind if you want to be lazy and
return results with an error of several hundred ulps under the
assumption that no one cares, or whether you want to produce quality
results. In this case, I would prefer quality.

In the end, errors tend to add up. Even if the calculation of x included
some significant rounding error, that is no good reason why the
calculation of exp (x) should add more error to this than absolutely
necessary.
Nov 14 '05 #143
In <ch*********************************@slb-newsm1.svr.pol.co.uk> Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
Actually, what I meant was: With the exp function for large arguments as
an example, you have to make up your mind if you want to be lazy and
return results with an error of several hundred ulps under the
assumption that no one cares, or whether you want to produce quality
results. In this case, I would prefer quality.
Me too, it's just that our notions of "quality" differ: a result with
garbage bits presented as precision bits and obtained by wasting CPU
resources is a low-quality result.
In the end, errors tend to add up. Even if the calculation of x included
some significant rounding error, that is no good reason why the
calculation of exp (x) should add more error to this than absolutely
necessary.


This statement makes sense *only* if one considers floating point
representations as standing for precise mathematical values. This doesn't
happen to be the case (when was the last time all your input consisted
exclusively of values exactly representable in the target floating point
types?). For all you know, the result with "more error
than absolutely necessary" may be the *exact* result to the actual
problem. If you don't understand this statement, you shouldn't be
dealing with floating point values...

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #144
