Sine code for ANSI C

Hello
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file.
I went to the math directory and found math.h. I guess that needs to be
included to use the sine function, but which .c file implements the
sine/cosine and the other trig functions?
Thanks
Nov 14 '05
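A minimal usage sketch (an addition for context, not part of the original post): calling sin() only needs math.h plus the math library at link time; the implementation itself lives in glibc's architecture- and format-specific sysdeps tree (for example sysdeps/ieee754/dbl-64/s_sin.c for double in recent versions), not alongside math.h.

    /* Sketch: using sin() from <math.h>.  Build with something like: cc demo.c -o demo -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0;                        /* argument in radians */
        printf("sin(%g) = %.17g\n", x, sin(x));
        return 0;
    }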
In <40***************@news.individual.net> rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
P.J. Plauger wrote:
> Again, you're *presuming* that the user did some sloppy calculation
> to arrive at a large argument to sine. If that's what happened, then
> the result will be correspondingly inaccurate. But that's *always*
> true of every calculation, whether it involves sin(x) or any other
> math function. It's true even if no math functions are called at
> all. The authors of the math functions can only take what they're
> given and do the best possible job with them. Why is that such
> a hard principle to grasp?


Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?


Because Crays are _very_ specialised machines, which can make
assumptions that a library writer for a normal computer cannot.

I must agree with Mr. Plauger, here. If a library gives me a value with
more precision than I need or expect, I can always ignore that extra
precision. If it gives me a value with less precision than it could, as
some people seem to be advocating, I can _never_ add that precision
myself without doing the library's job for it, which should not be the
aim of a good quality library.


You have failed to understand the substance of this discussion. Nobody
argued for less precision than could be *meaningfully* computed.

Back to my extreme example: one could consider DBL_MAX as an *exact*
value and spend significant CPU resources computing sin(DBL_MAX) to
double precision accuracy, but when would that precision be *meaningful*?

Give the user as much precision as makes sense given the magnitude of
the argument and let *him* do the argument reduction if he needs more
precision, because only he can decide what the proper way of performing
the argument reduction is for his application. More often than not, trying
to outsmart the user results in dumb behaviour.
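To make the argument-reduction point concrete, here is a sketch (not from the original post; the helper name and constant are illustrative): a caller-side reduction with fmod() against a double-precision 2*pi is cheap, but its error grows with |x| because that 2*pi is only correct to about half an ulp, which is exactly the fuzziness being discussed. A caller who knows where the argument came from can substitute a reduction that suits the application.

    /* Sketch only: caller-side argument reduction before calling sin().
       fmod() is exact, but reducing modulo a 53-bit approximation of 2*pi
       leaves an error of roughly |x| * (2*pi - TWO_PI) / (2*pi) in the
       reduced argument, so the result degrades as |x| grows. */
    #include <math.h>
    #include <stdio.h>

    static double sin_reduced(double x)        /* hypothetical helper */
    {
        const double TWO_PI = 6.283185307179586476925286766559;
        return sin(fmod(x, TWO_PI));
    }

    int main(void)
    {
        double x = 162873.0;                    /* the large "exact" argument from the thread */
        printf("library sin : %.17g\n", sin(x));
        printf("naive reduce: %.17g\n", sin_reduced(x));
        return 0;
    }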

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #81
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx*******@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:...
>> Concrete examples, please.
>
>What is the sine of 162,873 radians? If you're working in radians,
>you can represent this input value *exactly* even in a float.

You *can*, but does it make physical sense to call sine with an integer
argument (even if represented as a float)?


Must everything make physical sense? Perhaps it makes mathematical sense?


And does it? Last time I checked, mathematics defines the sine as having
a real argument, and the C programming language provides zilch support for
real numbers.


Yup.
In real life applications, the argument of sine is computed using
floating point arithmetic (on non-integer values), so it *is* a fuzzy
value, with the degree of fuzziness implied by its magnitude.


Not in mathematical applications, where the argument to the sine function
can very well be exact.


Please elaborate, again, with concrete examples.


You want a concrete example; I just do think that such examples are possible.
Where do these
"mathematic al applications" get their input data from? How do they
handle the *incorrect* (from the mathematics POV) result of calling sine
for pretty much *any* argument except 0?


Pretty much as in every such case. Careful error analysis.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #82
"glen herrmannsfeldt" <ga*@ugcs.calte ch.edu> wrote in message
news:V7Xoc.6287 $6f5.499223@att bi_s54...
Consider this program fragment, as no real examples have
been presented so far:

for (x = 0; x + 60 != x; x += 60) printf("%20.15g\n", sin(x*A));

A should have the appropriate value so that x is
expressed in degrees. Now, consider the effect
of ordinary argument reduction, and unusually
accurate argument reduction.

If you do argument reduction using an appropriate
approximation to pi/4 to machine precision, you get
the answer expected by the programmer. If you use
a pi/4 much more accurate than machine precision, the
result will slowly diverge from the expected value.
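For reference, a runnable reconstruction of that fragment (the value of A and the surrounding scaffolding are assumptions; the original post gives only the loop): with A = pi/180 every x is an exact multiple of 60 degrees, so the programmer "expects" the repeating pattern 0, ±sqrt(3)/2, and any drift in the output reflects the fact that x*A is not an exact multiple of pi/3.

    /* Reconstruction of the quoted fragment; A = pi/180 is an assumption
       (the post only says x is expressed in degrees).  The exit condition
       x + 60 == x is not met until x reaches roughly 2^59, about 1e16
       iterations, so in practice one caps the loop rather than running it
       to completion. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double A = 3.14159265358979323846 / 180.0;   /* degrees to radians */
        double x;
        for (x = 0; x + 60 != x; x += 60)
            printf("%20.15g\n", sin(x * A));
        return 0;
    }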
Okay, so what? The programmer has unrealistic expectations.
It's the library vendor's responsibility to meet possibly
realistic expectations. The vendor can't possibly guess
all the ways that library functions will be called with
arguments that need "correcting" so as not to hurt the
programmer's feelings.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?


Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?


Not sure what this has to do with the current discussion, but
I'll answer anyway. There was a market for machines that were
fast -- at least by the standards of those days -- and a prevailing
view that inaccurate floating-point was safe enough if you just
carried a few extra sacrificial precision bits. That view was
correct often enough that not all of Cray's customers went
bankrupt.
What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
The IBM 360/91 was documented as not following the S/360
architecture because its floating point divide gave rounded
results (0.5ulp) instead of the truncated results (1ulp)
that the architecture specified.
As before.
What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.
Most likely.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.

Okay.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
on the square root algorithm is done slightly different
to avoid precision loss in a halve operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)
Nice criteria. I don't see them anywhere in the C Standard.
Are you going to propose them as a DR? Be fun to see you
defend why you pick the exponent 18 for 24-bit (IEEE)
precision, 50 for 53-bit and 100 for 64-bit(!) or 113-bit.

And what "fatal" error are you going to introduce, given that
the math library currently has no such concept. Even IEEE 754
always provides for continuing execution with some error
code. Looks like you're bravely guiding the standards community
into new and uncharted territory. Brave of you.
Giving a fatal error tells the user that something is
wrong and should be fixed.
Can you quantify the "wrongness" in terms of your criteria?
Should they not also be applied to the various rounding
functions? If not, why not?
Supplying answers using bits
that don't really exist allows the user to believe that
useful answers are being generated, possibly wasting
much calculation and money.
Happens all the time, when users call math functions with
arguments having bits that don't really exist. Just whose
responsibility is that, anyway?
Actually, the time when this happened to me, probably
when I was in ninth grade, (due to an undefined variable
if I remember right), someone carefully explained to me
why the library would refuse such a calculation. It made
sense at the time, and it still does.


Things that make sense for a ninth-grade class don't necessarily
scale to professional libraries licensed to professional
programmers, both working to language standards developed
by international organizations.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #83
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx*******@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
> In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
...
> >> Concrete examples, please.
> >
> >What is the sine of 162,873 radians? If you're working in radians,
> >you can represent this input value *exactly* even in a float.
>
> You *can*, but does it make physical sense to call sine with an integer
> argument (even if represented as a float)?

Must everything make physical sense? Perhaps it makes mathematical sense?


And does it? Last time I checked, mathematics defines the sine as having
a real argument, and the C programming language provides zilch support for
real numbers.


Yup.


Yup what?!? Please elaborate.
> In real life applications, the argument of sine is computed using
> floating point arithmetic (on non-integer values), so it *is* a fuzzy
> value, with the degree of fuzziness implied by its magnitude.

Not in mathematical applications, where the argument to the sine function
can very well be exact.


Please elaborate, again, with concrete examples.


You want a concrete example; I just do think that such examples are possible.


This is not good enough.
"mathematic al applications" get their input data from? How do they
handle the *incorrect* (from the mathematics POV) result of calling sine
for pretty much *any* argument except 0?


Pretty much as in every such case. Careful error analysis.


With the exception of numerical analysis, mathematical results are exact,
by definition.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #84
In article <C4*******************@nwrddc02.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote:

<snipped lots of stuff>

Apart from the precision of an individual call to sin(x), would people
think that fabs (sin (x)) <= 1.0 must be true for _all_ values of x, no
matter how large? And should sin (x)*sin(x) + cos(x)*cos(x) be close to
1 for all x, no matter how large?
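A quick numeric probe of both questions (a sketch added here, not part of the original post; the sample points are arbitrary): |sin(x)| <= 1 should hold unconditionally for any reasonable implementation, while how close sin(x)^2 + cos(x)^2 stays to 1 for huge x depends entirely on the quality of the library's argument reduction.

    /* Sketch: check |sin(x)| <= 1 and sin^2 + cos^2 ~ 1 at a few magnitudes. */
    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        const double xs[] = { 1.0, 1e8, 1e16, 1e300 };
        for (size_t i = 0; i < sizeof xs / sizeof xs[0]; ++i) {
            double s = sin(xs[i]), c = cos(xs[i]);
            printf("x = %-8g  |sin| <= 1: %s   sin^2+cos^2-1 = %.3e\n",
                   xs[i], fabs(s) <= 1.0 ? "yes" : "NO", s * s + c * c - 1.0);
        }
        return 0;
    }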
Nov 14 '05 #85
P.J. Plauger wrote:
"glen herrmannsfeldt" <ga*@ugcs.calte ch.edu> wrote in message
news:V7Xoc.6287 $6f5.499223@att bi_s54...
Consider this program fragment, as no real examples have
been presented so far: for(x=0;x+60! =x;x+=60) printf("%20.15g \n",sin(x*A)) ;
(snip)
Okay, so what? The programmer has unrealistic expectations.
It's the library vendor's responsibility to meet possibly
realistic expectations. The vendor can't possibly guess
all the ways that library functions will be called with
arguments that need "correcting" so as not to hurt the
programmer's feelings.
That example is more realistic than any you have posted
so far.
What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
(snip)
Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.
Most likely.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
When they use a single precision function they expect
less accurate answers than a double precision function.
As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.
Okay.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
It says that when the operations require exceptional
ability to perform, far above and beyond the call of
duty, it is time to give up. When thousands of
digits of pi are required to compute the result on
a 64 bit operand, something is wrong.
The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
on the square root algorithm is done slightly different
to avoid precision loss in a halve operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)
Nice criteria. I don't see them anywhere in the C Standard.
Are you going to propose them as a DR? Be fun to see you
defend why you pick the exponent 18 for 24-bit (IEEE)
precision, 50 for 53-bit and 100 for 64-bit(!) or 113-bit.
The requirements of the C standard have already been
discussed, and that didn't seem to bother you any.

The numbers above are not for IEEE, as it hadn't been
invented yet. Extended precision has 112 bit fraction.
And what "fatal" error are you going to introduce, given that
the math library currently has no such concept. Even IEEE 754
always provides for continuing execution with some error
code. Looks like you're bravely guiding the standards community
into new and uncharted territory. Brave of you.
It seems that on most implementations sqrt() has the
ability to generate a fatal error when given a negative
argument.
Giving a fatal error tells the user that something is
wrong and should be fixed.
Can you quantify the "wrongness" in terms of your criteria?
Should they not also be applied to the various rounding
functions? If not, why not?
There are no problems of any use to anyone where they
are useful.

(snip)
Happens all the time, when users call math functions with
arguments having bits that don't really exist. Just whose
responsibility is that, anyway?
Actually, the time when this happened to me, probably
when I was in ninth grade, (due to an undefined variable
if I remember right), someone carefully explained to me
why the library would refuse such a calculation. It made
sense at the time, and it still does.

Things that make sense for a ninth-grade class don't necessarily
scale to professional libraries licensed to professional
programmers, both working to language standards developed
by international organizations.


It wasn't in a class. They didn't teach it in ninth grade,
and as far as I know they don't now. It was explained to
me by a professional physicist.

-- glen

Nov 14 '05 #86
In article <a59pc.1495$gr.83718@attbi_s52>,
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
It says that when the operations require exceptional
ability to perform, far above and beyond the call of
duty, it is time to give up. When thousands of
digits of pi are required to compute the result on
a 64 bit operand, something is wrong.


Just for a thought: There is this thing called "inverse error analysis".
You try to calculate y = f (x). Usually you assume that your
floating-point math will produce a value y' instead which is not quite
equal to y, and you want to keep the difference between y and y' small.
But instead you might assume that you calculate y = f (x'), that is your
floating-point arithmetic produced the exact function value for some
different x', hopefully close to x.

Now consider y = cos (x). For x = 0, cos (x) = 1, and it becomes smaller
very slowly; for small x we have cos (x) approximately 1 - x^2/2. Now
take 64 bit IEEE floating point. The largest number smaller than 1 is 1
- 2^-53. Choose x such that cos x is about 1 - 2^-54, x = pow (2.0,
-53.0/2). For this x, the best that your implementation of cos (x) can
do is to return a result of either 1 or 1 - 2^-53. 1 = cos (x') for x' =
0, 1 - 2^-53 = cos (x') for x' about pow (2.0, -26). So the best you can
possibly get is cos (x'), with x' either zero or 41 percent larger than
x. And there is nothing you can do about it.

No matter how good your cos () implementation is, the worst case with
inverse error analysis is horrendously bad.
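The numbers above can be reproduced with a few lines (a sketch added for illustration, not from the original post); nothing in it depends on which cos() implementation is installed, which is the point of the argument.

    /* Sketch of the inverse-error-analysis example above.  x is chosen so
       that cos(x) ~ 1 - 2^-54, exactly between the two nearest doubles
       1 and 1 - 2^-53; whichever of those cos() returns is the exact cosine
       of an x' that is either 0 or about 41% larger than x. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x  = pow(2.0, -53.0 / 2.0);     /* 2^-26.5 */
        double xp = pow(2.0, -26.0);           /* exact preimage of 1 - 2^-53 */
        printf("x         = %.17g\n", x);
        printf("cos(x)    = %.17g\n", cos(x));
        printf("1 - 2^-53 = %.17g\n", 1.0 - pow(2.0, -53.0));
        printf("x'/x - 1  = %.3f\n", xp / x - 1.0);   /* about 0.414 */
        return 0;
    }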
Nov 14 '05 #87
"glen herrmannsfeldt" <ga*@ugcs.calte ch.edu> wrote in message
news:a59pc.1495 $gr.83718@attbi _s52...
P.J. Plauger wrote:
"glen herrmannsfeldt" <ga*@ugcs.calte ch.edu> wrote in message
news:V7Xoc.6287 $6f5.499223@att bi_s54...
Consider this program fragment, as no real examples have
been presented so far:for(x=0;x+60! =x;x+=60) printf("%20.15g \n",sin(x*A)) ;
(snip)
Okay, so what? The programmer has unrealistic expectations.
It's the library vendor's responsibility to meet possibly
realistic expectations. The vendor can't possibly guess
all the ways that library functions will be called with
arguments that need "correcting" so as not to hurt the
programmer's feelings.


That example is more realistic than any you have posted
so far.


Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.
Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.
Most likely.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?


When they use a single precision function they expect
less accurate answers than a double precision function.


No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.
As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.
Okay.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?


It says that when the operations require exceptional
ability to perform, far above and beyond the call of
duty, it is time to give up. When thousands of
digits of pi are required to compute the result on
a 64 bit operand, something is wrong.


"Far above and beyond" is once again qualitative arm waving.
You may hear a weaker call to duty than others, but there's
nowhere in the C Standard that justifies an arbitrary
point beyond which it's okay to give up on sine, or any other
function. You think sine is hard to get right for some
arguments, try lgamma, tgamma, and erfc (all in C99).
The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
on the square root algorithm is done slightly different
to avoid precision loss in a halve operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)
Nice criteria. I don't see them anywhere in the C Standard.
Are you going to propose them as a DR? Be fun to see you
defend why you pick the exponent 18 for 24-bit (IEEE)
precision, 50 for 53-bit and 100 for 64-bit(!) or 113-bit.


The requirements of the C standard have already been
discussed, and that didn't seem to bother you any.


What, you mean that a conforming implementation has no explicit
precision requirements? I understand that; in fact I was one
of the principal proponents of that (lack of) requirement in
the C Standard. Dan Pop already responded to that issue better
than I can. It is well understood that every implementation
has certain "quality of implementation" features. (I also
pioneered use of that term.) If getc takes two days per character,
nobody will buy your library. Similarly, if math functions
trap out on certain unspecified values, word will get out and
your customers will stay away in droves.

Math libraries are tougher to quantify, because they require so
many measurements. But once you do the measurements and publish
them, you'd better be prepared to justify the results. You can't
just say, "I decided not to implement pow for negative exponents."
Or more to the point, "at some point the results of sine begin
to really suck, but I'd rather not say where that is." And if
you *do* choose to publish your threshold of suckiness, you'd
better have a rationale for choosing it. Otherwise, some third
party tester will report that your sine function occasionally
is off by 40 billion ulp, and not provide the rationale for why
that might be.
The numbers above are not for IEEE, as it hadn't been
invented yet. Extended precision has 112 bit fraction.


Oh, I see. You were talking about criteria accompanying a
40-year-old architecture, with software technology to match.
And what "fatal" error are you going to introduce, given that
the math library currently has no such concept. Even IEEE 754
always provides for continuing execution with some error
code. Looks like you're bravely guiding the standards community
into new and uncharted territory. Brave of you.


It seems that on most implementations sqrt() has the
ability to generate a fatal error when given a negative
argument.


Really? The C Standard says it should set errno to EDOM and
return some failure value. If your implementation also obeys
IEC 60559 (aka IEEE 754) then you should return a NaN. I
find very few customers who want a program to terminate rather
than produce a NaN. I know even fewer who would *ever* expect
sine to return a NaN for any finite argument.
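For the record, a small sketch of the behaviour described (an illustration, not from the original posts): on a typical IEC 60559 implementation sqrt(-1.0) quietly yields a NaN, and the domain error is reported through errno and/or the floating-point exception flags, not by terminating the program.

    /* Sketch: sqrt() of a negative argument reports a domain error rather
       than aborting; math_errhandling says whether errno, the FE flags,
       or both are used on a given implementation. */
    #include <errno.h>
    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        errno = 0;
        feclearexcept(FE_ALL_EXCEPT);
        double r = sqrt(-1.0);
        printf("sqrt(-1.0) = %g\n", r);                       /* typically nan */
        printf("errno == EDOM:      %s\n", errno == EDOM ? "yes" : "no");
        printf("FE_INVALID raised:  %s\n", fetestexcept(FE_INVALID) ? "yes" : "no");
        return 0;
    }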
Giving a fatal error tells the user that something is
wrong and should be fixed.
Can you quantify the "wrongness" in terms of your criteria?
Should they not also be applied to the various rounding
functions? If not, why not?


There are no problems of any use to anyone where they
are useful.


Is that a Zen koan or just the nonsense it appears to be?
Things that make sense for a ninth-grade class don't necessarily
scale to professional libraries licensed to professional
programmers, both working to language standards developed
by international organizations.


It wasn't in a class. They didn't teach it in ninth grade,
and as far as I know they don't now. It was explained to
me by a professional physicist.


As a one-time professional physicist myself, I now begin to
understand the nature of your education.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #88
On Fri, 14 May 2004 21:23:51 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.
Then that's a likely thought-bug in the C Standard.

>>Consider someone doing a single precision sine. Most
>>likely they use single precision instead of double
>>because they don't need so much accuracy and hope that
>>the result will be generated faster.

> Most likely.

> What does this tell the designer of a sine function about
> where it's okay to stop delivering accurate results?


When they use a single precision function they expect
less accurate answers than a double precision function.


No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.


The design of the professional math libraries is not the issue; it's
whether the effort is worthwhile, as opposed to accommodating likely
poorly thought-out algorithms by the user.

I think accumulation of rotations is probably best done
with complex multiplication.
Nov 14 '05 #89
"Dr Chaos" <mb************ ****@NOSPAMyaho o.com> wrote in message
news:sl******** *************** ********@lyapun ov.ucsd.edu...
On Fri, 14 May 2004 21:23:51 GMT, P.J. Plauger <pj*@dinkumware .com> wrote:
Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.
Then that's a likely thought-bug in the C Standard.


Likely, but it's there, and some of us have to live with it.
>>Consider someone doing a single precision sine. Most
>>likely they use single precision instead of double
>>because they don't need so much accuracy and hope that
>>the result will be generated faster.

> Most likely.

> What does this tell the designer of a sine function about
> where it's okay to stop delivering accurate results?

When they use a single precision function they expect
less accurate answers than a double precision function.


No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.


The design of the professional math libraries is not the issue; it's
whether the effort is worthwhile, as opposed to accommodating likely
poorly thought-out algorithms by the user.


The design of professional math libraries *is* the issue. Until
such time as standards quantify what calculations are "worthwhile"
and what merely accommodate poorly thought out algorithms, we
have an obligation to assume that whatever is specified might
be considered worthwhile to some serious users.

The other side of the coin is knowing where to stop once the
"worthwhile " police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."
I think accumulation of rotations is probably best done
with complex multiplication.


And why do you think this can be done with any more accuracy,
or precision, than the techniques cited (and sneered at) so far
for generating large angles?
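For completeness, a sketch of the technique being proposed (an illustration using C99 complex arithmetic; the step count and angle are arbitrary): accumulating a rotation by repeated multiplication with a fixed unit complex number never forms a large angle, but the rotator's phase and magnitude both drift a little per step, so it answers a different question than "what is the sine of this exact large argument".

    /* Sketch: accumulate a rotation by complex multiplication instead of
       accumulating an angle and calling sin() on it.  After n steps,
       z ~ exp(i*n*theta); the error grows slowly with n, and |z| drifts
       away from 1 unless it is renormalised now and then. */
    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double theta = 1.0;               /* radians per step */
        const long   n     = 162873;            /* the angle from the thread */
        double complex step = cexp(I * theta);
        double complex z    = 1.0;
        for (long i = 0; i < n; ++i)
            z *= step;                           /* one rotation per step */
        printf("accumulated sin: %.17g\n", cimag(z));
        printf("library sin    : %.17g\n", sin((double)n * theta));
        return 0;
    }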

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #90
