
Hello,
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file.
I went to the math directory and found math.h. I guess that needs to be
included for the sine function, but which .c file implements the
sine/cosine and the other trig functions?
Thanks

suri wrote:
> I downloaded glibc and tried looking for the code that implements the
> sine function, but I couldn't find the file. I went to the math
> directory and found math.h. I guess that needs to be included for the
> sine function, but which .c file implements the sine/cosine and the
> other trig functions?
http://www.eskimo.com/~scs/Cfaq/q14.3.html
Do you really want an implementation for sin?
Apparently, you are running an Intel-compatible processor, which
comes with a sin instruction implemented in hardware.

No, I knew that FAQ.
My question is: where is the sine function implementation?
Since you say it's a native instruction, does that mean there is no code
in ANSI C for implementing sine?
Well, libm.a must surely have it. But where are the C files?

suri wrote:
> No, I knew that FAQ. My question is: where is the sine function
> implementation? Since you say it's a native instruction, does that
> mean there is no code in ANSI C for implementing sine?
Some platforms have hardware instructions that compute sin() and the
compiler will emit them, bypassing the libm library's implementation. This
is pretty much true for ia32 and ia64.
> Well, libm.a must surely have it. But where are the C files?
For FreeBSD, check out /usr/src/lib/libm/common/{trig.h,sincos.c}
[nasty-looking code, but it works fast]. You're almost better off
grabbing a calculus book and having a look at a Taylor series expansion
if you truly want to understand the math. Or look at "Numerical Recipes
in C".

suri <hs***@usc.edu> wrote in message
news:c7************@ID233334.news.uniberlin.de...
> I downloaded glibc and tried looking for the code that implements the
> sine function, but I couldn't find the file. I went to the math
> directory and found math.h. I guess that needs to be included for the
> sine function, but which .c file implements the sine/cosine and other
> trig functions?
Here is one way, but not the most advanced way. It is fairly suitable for a
float (precision-wise) but not for a double.
#include <iostream.h>
#include <iomanip.h>
#include <math.h>
// Source: AMS 55, eqn 4.3.97. Handbook of Mathematical Functions,
// published by the U.S. Dept. of Commerce.
float sinx(float x)
{
    static const float a[] =
        {-.1666666664, .0083333315, -.0001984090, .0000027526, -.0000000239};
    float xsq = x*x;
    float temp = x*(1 + a[0]*xsq + a[1]*xsq*xsq + a[2]*xsq*xsq*xsq
        + a[3]*xsq*xsq*xsq*xsq
        + a[4]*xsq*xsq*xsq*xsq*xsq);
    return temp;
}
//
void test()
{
    float x;
    while (1)
    {
        cin >> x;
        if (x < 0. || x > (3.1416/2))
        {
            cout << "Argument to sinx must be in range 0 <= x <= pi/2 \n";
            continue;
        }
        cout << sinx(x) << setw(12) << (float)sin(x) << endl;
    }
}
The test code, but not the sinx() code, is in C++, which I suppose will
lead to some hissy fits since this is a C group. It is also a
pre-standard version of C++, so I suppose there will be some more hissy
fits from that quarter. So be it.

I do know the series expansion of sine; I was just interested to know how
it's implemented in the ANSI C library, like how many terms of the
infinite series are included.
I have Linux and use glibc, so I could find the file in the path you
mentioned.

I'm sorry, I meant I *could not* find the file.

On Sun, 02 May 2004 06:55:43 -0700, suri <hs***@usc.edu> wrote in
comp.lang.c:
First, please don't top-post. Material you add in a reply goes AFTER
material you are quoting. Top-posting makes technical discussions
hard to follow and is considered rude in comp.lang.c.
> I do know the series expansion of sine; I was just interested to know
> how it's implemented in the ANSI C library, like how many terms of the
> infinite series are included.
There is no such thing as "the" ANSI C library. The C standard
defines the functions that must be provided, their interface, and
their results when used as defined. It does not specify an
implementation at all.
The standard library functions supplied with an implementation do not
even have to be written in C.

Jack Klein
Home: http://JKTechnology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/Cfaq/top.html
comp.lang.c++ http://www.parashift.com/c++faqlite/
alt.comp.lang.learn.cc++ http://www.contrib.andrew.cmu.edu/~a...FAQacllc.html  

wombat wrote:
> Some platforms have hardware instructions that compute sin() and the
> compiler will emit them, bypassing the libm library's implementation.
> This is pretty much true for ia32 and ia64.
<OT>
As far as I can tell, IA64 has no such instruction :)

suri wrote:
> my question is where is the sine fn implementation? since you say it's
> a native instruction set, means that there is no code in ansi C for
> implementing sine? well libm.a must surely have it. but where are the
> C files?
Why don't you ask in a forum dedicated to the GNU libc?
You might try:
comp.os.linux.questions
comp.os.linux.development.apps
http://sources.redhat.com/glibc/
(Please do not top-post.)

On Sun, 02 May 2004 02:41:31 -0700, in comp.lang.c, suri <hs***@usc.edu>
wrote:
> since you say it's a native instruction set, means that there is no
> code in ansi C for implementing sine?
Quite possibly.
> well libm.a must surely have it. but where are the C files?
If it's native to the processor, there IS no code to do it. Whether some
(off-topic) library has an alternate implementation is off-topic here.
But if you want code, then write it yourself. Sin(x) is calculable to a
good approximation using a power series.

Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/Cfaq/top.html>
CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>

"Mark McIntyre" <ma**********@spamcop.net> wrote in message
news:2h********************************@4ax.com... On Sun, 02 May 2004 02:41:31 0700, in comp.lang.c , suri <hs***@usc.edu> wrote:
since u say its a native instruction set means that there is no code in ansi C for implementing sine.?
Quite possibly.
well libm.a must surely have it. but where are the C files?
If its native to the processor, there IS no code to do it. Whether some (offtopic) library has an alternate implementation is offtopic here.
But if you want code, then write it yourself. Sin(x) is calculable to a good approximation using a power series.
A *good* sine function is one of the hardest math functions to
write, believe it or not. For x < pi/4, it is indeed easy
to approximate with a polynomial in x^2 (a truncated power
series will do, but you can save a term or two by fiddling the
coefficients). But doing proper argument reduction is an open-ended
exercise in frustration. Just reducing the argument modulo
2*pi quickly accumulates errors unless you do arithmetic to
many extra bits of precision.
Some processors have instructions that do *parts* of the job
of computing the sine, like evaluating it in the easy region
described above. Some even have functions that appear to do
the argument reduction well for you, but they don't always
deliver.
See my book, The Standard C Library, for one implementation of
sin(x) in Standard C. It's only moderately naive.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  

"P.J. Plauger" wrote:
.... snip ... coefficients. But doing proper argument reduction is an open ended exercise in frustration. Just reducing the argument modulo 2*pi quickly accumulates errors unless you do arithmetic to many extra bits of precision.
And that problem is inherent. Adding precision bits for the
reduction will not help, because the input value doesn't have
them. It is the old problem of differences of similar sized
quantities.

A: Because it fouls the order in which people normally read text.
Q: Why is topposting such a bad thing?
A: Topposting.
Q: What is the most annoying thing on usenet and in email?  

CBFalconer writes:
> "P.J. Plauger" wrote:
> > ... snip ...
> > coefficients. But doing proper argument reduction is an open-ended
> > exercise in frustration. Just reducing the argument modulo 2*pi
> > quickly accumulates errors unless you do arithmetic to many extra
> > bits of precision.
> And that problem is inherent. Adding precision bits for the
> reduction will not help, because the input value doesn't have
> them. It is the old problem of differences of similar-sized
> quantities.
Huh? If I want the phase of an oscillator after 50,000 radians, are you
saying that is not computable? Please elaborate.
There was a thread hereabouts many months ago on this very subject and
AFAIK no one suggested that it was not computable, it just couldn't be
done with doubles. And I see no inherent problems.

wombat said the following, on 05/02/04 18:43:
> suri wrote:
> > No i knew that faq. my question is where is the sine fn
> > implementation? since u say its a native instruction set means that
> > there is no code in ansi C for implementing sine?
> [snip]
> For FreeBSD, check out /usr/src/lib/libm/common/{trig.h,sincos.c}
> [nasty-looking code, but it works fast.] You're almost better off
> grabbing a calculus book and having a look at a Taylor series
> expansion if you truly want to understand the math. Or look at
> "Numerical Recipes in C".
There are (at least) two possible questions that might lie behind the
OP's request. One, which is more or less on-topic, is "How might one
implement the sin() function of the C library in standard C?"
The other is something like, "What is the best numerical method for
calculating the sine function?" The answer to THAT question has
something to do with C, if that is the implementation language, but has
more to do with numerical analysis. Wombat has suggested _Numerical
Recipes in C_, which is good. The elementary N/A text I used way back
when was F.S. Acton's _Numerical Methods That [Usually] Work_. (BTW,
calculating functions like this by straightforward application of the
"textbook" power series is often not the best approach.)
I'd also recommend P.J. Plauger's excellent book, _The Standard C
Library_, which includes consideration of both questions.

Rich Gibbs
rgibbs AT alumni DOT princeton DOT edu  

"osmium" <r1********@comcast.net> wrote in message
news:c7************@ID179017.news.uniberlin.de... CBFalconer writes:
"P.J. Plauger" wrote: ... snip ... coefficients. But doing proper argument reduction is an open ended exercise in frustration. Just reducing the argument modulo 2*pi quickly accumulates errors unless you do arithmetic to many extra bits of precision.
And that problem is inherent. Adding precision bits for the reduction will not help, because the input value doesn't have them. It is the old problem of differences of similar sized quantities.
Huh? If I want the phase of an oscillator after 50,000 radians are you saying that is not computable? Please elaborate.
There was a thread hereabouts many months ago on this very subject and
AFAIK no one suggested that it was not computable, it just couldn't be done with doubles. And I see no inherent problems.
Right. This difference of opinion highlights two conflicting
interpretations of floating-point numbers:
1) They're fuzzy. Assume the first discarded bit is
somewhere between zero and one. With this viewpoint,
CBFalconer is correct that there's no point in trying
to compute a sine accurately for large arguments:
all the good bits get lost.
2) They are what they are. Assume that every floating-point
representation exactly represents some value, however that
representation arose. With this viewpoint, osmium is correct
that there's a corresponding sine that is worth computing
to full machine precision.
I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multiprecision argument reduction that can gust
up to over 4,000 bits [sic]. "The Standard C Library"
represents an intermediate viewpoint: it stays exact
until about half the fraction bits go away.
I still haven't decided how hard we'll try to preserve
precision for large arguments in the next library we ship.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  

osmium wrote:
> CBFalconer writes:
> > "P.J. Plauger" wrote:
> > > ... snip ...
> > > coefficients. But doing proper argument reduction is an open-ended
> > > exercise in frustration. Just reducing the argument modulo 2*pi
> > > quickly accumulates errors unless you do arithmetic to many extra
> > > bits of precision.
> > And that problem is inherent. Adding precision bits for the
> > reduction will not help, because the input value doesn't have them.
> > It is the old problem of differences of similar-sized quantities.
> Huh? If I want the phase of an oscillator after 50,000 radians are you
> saying that is not computable? Please elaborate.
PI is not a rational number, so any multiple cannot be represented
exactly. In addition, for any fixed accuracy, you will lose
significant digits in normalizing. To take your value of 50,000
radians, you are spending something like 4 decimal digits just for
the multiple of 2 PI, so the accuracy of the normalized angle will
be reduced by at least those 4 places.
To illustrate, take your handy calculator and enter:
1.23456789 + 1000000000 = <something>
and follow up with:
- 1000000000 = <something else>
and you will find that <something else> is not 1.23456789. You
are running into limited precision effects.

"I'm a war president. I make decisions here in the Oval Office
in foreign policy matters with war on my mind."  Bush.
"Churchill and Bush can both be considered wartime leaders, just
as Secretariat and Mr Ed were both horses."  James Rhodes.  

"CBFalconer" <cb********@yahoo.com> wrote in message
news:40***************@yahoo.com... osmium wrote: CBFalconer writes: "P.J. Plauger" wrote: > ... snip ... > coefficients. But doing proper argument reduction is an open > ended exercise in frustration. Just reducing the argument modulo > 2*pi quickly accumulates errors unless you do arithmetic to > many extra bits of precision.
And that problem is inherent. Adding precision bits for the reduction will not help, because the input value doesn't have them. It is the old problem of differences of similar sized quantities.
Huh? If I want the phase of an oscillator after 50,000 radians are you saying that is not computable? Please elaborate.
PI is not a rational number, so any multiple cannot be represented exactly. In addition, for any fixed accuracy, you will lose significant digits in normalizing. To take your value of 50,000 radians, you are spending something like 4 decimal digits just for the multiple of 2 PI, so the accuracy of the normalized angle will be reduced by at least those 4 places.
No. The *precision* will be reduced. Whether or not the *accuracy*
is reduced depends on your conjecture about the bits that don't
exist off the right end of the fraction. Those of us who write
libraries for a living shouldn't presume that those missing bits
are garbage. We serve our customers best by assuming that they're
all zeros, and delivering up the best approximation to the correct
answer for that assumed value.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  

In article <40********@news101.his.com> Rich Gibbs
<rg****@REMOVEhis.com> writes:
> The other is something like, "What is the best numerical method for
> calculating the sine function?" The answer to THAT question has
> something to do with C, if that is the implementation language, but
> has more to do with numerical analysis.
It has nothing to do with the C language. The newsgroup:
sci.math.num-analysis
is more appropriate. They provide algorithms; it is only the last
question how to implement the algorithm in C. When designing these
algorithms you have quite a few contradicting objectives. (Overall
relative precision, monotonicity, and whatever.) Do you wish a good
algorithm where sin(x) > 1 could be true? (Cody-Waite, of sixties fame,
provided an algorithm where that was possible.) The algorithm could
still fit in your requirements: speed and relative precision.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  

In article <W1******************@nwrddc02.gnilink.net> "P.J. Plauger"
<pj*@dinkumware.com> writes:
> I've gone to both extremes over the past several decades. Our latest
> math library, still in internal development, can get exact function
> values for *all* argument values. It uses multiprecision argument
> reduction that can gust up to over 4,000 bits [sic].
Have you looked at how it was done at DEC? There is a writeup about
it in the old SigNum Notices of the sixties or seventies. Mary Decker,
I think.
It has been my opinion for a very long time that the real sine function
is not so very important. The most important in numerical mathematics
is sin2pi(x), which calculates sin(2*pi*x), and which allows easy and
exact range reduction. Alas, that function is not in C.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  

In article <Wu******************@nwrddc03.gnilink.net> "P.J. Plauger"
<pj*@dinkumware.com> writes:
> No. The *precision* will be reduced. Whether or not the *accuracy*
> is reduced depends on your conjecture about the bits that don't
> exist off the right end of the fraction. Those of us who write
> libraries for a living shouldn't presume that those missing bits
> are garbage. We serve our customers best by assuming that they're
> all zeros, and delivering up the best approximation to the correct
> answer for that assumed value.
Right. The basic functions should be black boxes that assume the input
value is exact. See how the precision of these functions is defined
in Ada (and I had not nothing to do with it).

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  

"Dik T. Winter" wrote: "P.J. Plauger" <pj*@dinkumware.com> writes:
I've gone to both extremes over the past several decades. Our latest math library, still in internal development, can get exact function values for *all* argument values. It uses multiprecision argument reduction that can gust up to over 4,000 bits [sic].
Have you looked at how it was done at DEC? There is a writeup about it in the old SigNum Notices of the sixties or seventies. Mary Decker, I think.
It has been my opinion for a very long time that he real sine function is not so very important. The most important in numerical mathematics is sin2pi(x) which calculates sin(2.pi.x), and which allows easy and exact range reduction. Alas, that function is not in C.
That just makes the problem show up elsewhere. Compare that
function with an argument of (1e18 + 0.1) against an argument of
(0.1), for example. At least on any common arithmetic system
known to me.

"I'm a war president. I make decisions here in the Oval Office
in foreign policy matters with war on my mind."  Bush.
"Churchill and Bush can both be considered wartime leaders, just
as Secretariat and Mr Ed were both horses."  James Rhodes.  

"P.J. Plauger" wrote: "CBFalconer" <cb********@yahoo.com> wrote in message osmium wrote: CBFalconer writes: "P.J. Plauger" wrote:
... snip ... > coefficients. But doing proper argument reduction is an open > ended exercise in frustration. Just reducing the argument modulo > 2*pi quickly accumulates errors unless you do arithmetic to > many extra bits of precision.
And that problem is inherent. Adding precision bits for the reduction will not help, because the input value doesn't have them. It is the old problem of differences of similar sized quantities.
Huh? If I want the phase of an oscillator after 50,000 radians are you saying that is not computable? Please elaborate.
PI is not a rational number, so any multiple cannot be represented exactly. In addition, for any fixed accuracy, you will lose significant digits in normalizing. To take your value of 50,000 radians, you are spending something like 4 decimal digits just for the multiple of 2 PI, so the accuracy of the normalized angle will be reduced by at least those 4 places.
No. The *precision* will be reduced. Whether or not the *accuracy* is reduced depends on your conjecture about the bits that don't exist off the right end of the fraction. Those of us who write libraries for a living shouldn't presume that those missing bits are garbage. We serve our customers best by assuming that they're all zeros, and delivering up the best approximation to the correct answer for that assumed value.
Yes, precision is a much better word. However there is an
argument for assuming the first missing bit is a 1.

"I'm a war president. I make decisions here in the Oval Office
in foreign policy matters with war on my mind."  Bush.
"Churchill and Bush can both be considered wartime leaders, just
as Secretariat and Mr Ed were both horses."  James Rhodes.  

"Dik T. Winter" <Di********@cwi.nl> wrote in message
news:Hx********@cwi.nl... In article <W1******************@nwrddc02.gnilink.net> "P.J. Plauger"
<pj*@dinkumware.com> writes: > I've gone to both extremes over the past several decades. > Our latest math library, still in internal development, > can get exact function values for *all* argument values. > It uses multiprecision argument reduction that can gust > up to over 4,000 bits [sic].
Have you looked at how it was done at DEC? There is a writeup about it in the old SigNum Notices of the sixties or seventies. Mary Decker, I think.
I think I'm familiar with that approach, but I've studied so many
over the past ten years that I'm no longer certain. Our latest
approach is to use an array of halfprecision values to represent
2/pi to as many bits as we see fit. We have a highly portable and
reasonably fast internal package that does the magic arithmetic
needed for range reduction.
> It has been my opinion for a very long time that the real sine
> function is not so very important. The most important in numerical
> mathematics is sin2pi(x), which calculates sin(2*pi*x), and which
> allows easy and exact range reduction. Alas, that function is not
> in C.
Yes, sinq and its variants avoid the problem of needing all those
bits of pi. But many formulas actually compute radians before calling
on sin/cos/tan. For them to work in quadrants (or whole go-roundies)
would just shift the 2*pi factor to someplace in the calculation
that retains even less precision.
(If the OP is not suitably frightened about writing his/her own
sine function by now, s/he should be.)
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  

"CBFalconer" <cb********@yahoo.com> wrote in message
news:40***************@yahoo.com... "Dik T. Winter" wrote: "P.J. Plauger" <pj*@dinkumware.com> writes:
I've gone to both extremes over the past several decades. Our latest math library, still in internal development, can get exact function values for *all* argument values. It uses multiprecision argument reduction that can gust up to over 4,000 bits [sic].
Have you looked at how it was done at DEC? There is a writeup about it in the old SigNum Notices of the sixties or seventies. Mary Decker, I think.
It has been my opinion for a very long time that he real sine function is not so very important. The most important in numerical mathematics is sin2pi(x) which calculates sin(2.pi.x), and which allows easy and exact range reduction. Alas, that function is not in C.
That just makes the problem show up elsewhere. Compare that function with an argument of (1e18 + 0.1) against an argument of (0.1), for example. At least on any common arithmetic system known to me.
No, again. Working in quadrants, or go-roundies, *does* solve the
argument-reduction problem: you just chuck the leading bits
that don't contribute to the final value. But you're *still*
conjecturing that a lack of fraction bits is a lack of accuracy,
and that's not necessarily true. Hence, it behooves the library
writer to do a good job in case it *isn't* true.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  

"CBFalconer" <cb********@yahoo.com> wrote in message
news:40***************@yahoo.com... "P.J. Plauger" wrote: "CBFalconer" <cb********@yahoo.com> wrote in message osmium wrote: CBFalconer writes: > "P.J. Plauger" wrote: > > ... snip ... >> coefficients. But doing proper argument reduction is an open >> ended exercise in frustration. Just reducing the argument modulo >> 2*pi quickly accumulates errors unless you do arithmetic to >> many extra bits of precision. > > And that problem is inherent. Adding precision bits for the > reduction will not help, because the input value doesn't have > them. It is the old problem of differences of similar sized > quantities.
Huh? If I want the phase of an oscillator after 50,000 radians are you saying that is not computable? Please elaborate.
PI is not a rational number, so any multiple cannot be represented exactly. In addition, for any fixed accuracy, you will lose significant digits in normalizing. To take your value of 50,000 radians, you are spending something like 4 decimal digits just for the multiple of 2 PI, so the accuracy of the normalized angle will be reduced by at least those 4 places.
No. The *precision* will be reduced. Whether or not the *accuracy* is reduced depends on your conjecture about the bits that don't exist off the right end of the fraction. Those of us who write libraries for a living shouldn't presume that those missing bits are garbage. We serve our customers best by assuming that they're all zeros, and delivering up the best approximation to the correct answer for that assumed value.
Yes, precision is a much better word. However there is an argument for assuming the first missing bit is a 1.
Right, and I acknowledged that argument earlier in this thread.
But when you're talking about how to write a professional sine
function, the argument is irrelevant.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  

"P.J. Plauger" wrote: "CBFalconer" <cb********@yahoo.com> wrote in message news:40***************@yahoo.com...
"Dik T. Winter" wrote: "P.J. Plauger" <pj*@dinkumware.com> writes:
> I've gone to both extremes over the past several decades. > Our latest math library, still in internal development, > can get exact function values for *all* argument values. > It uses multiprecision argument reduction that can gust > up to over 4,000 bits [sic].
Have you looked at how it was done at DEC? There is a writeup about it in the old SigNum Notices of the sixties or seventies. Mary Decker, I think.
It has been my opinion for a very long time that he real sine function is not so very important. The most important in numerical mathematics is sin2pi(x) which calculates sin(2.pi.x), and which allows easy and exact range reduction. Alas, that function is not in C.
That just makes the problem show up elsewhere. Compare that function with an argument of (1e18 + 0.1) against an argument of (0.1), for example. At least on any common arithmetic system known to me.
No, again. Working in quadrants, or goroundies, *does* solve the argumentreduction problem  you just chuck the leading bits that don't contribute to the final value. But you're *still* conjecturing that a lack of fraction bits is a lack of accuracy, and that's not necessarily true. Hence, it behooves the library writer to do a good job in case it *isn't* true.
In the example I gave above, using any binary representation, it
is true. Maybe I should have used + (1.0/3.0) to ensure problems
on any non-trinary system :)
At any rate, the ideal system produces instantaneous results with
no range limitations and the full accuracy of the arithmetic
representation. So the designer should pick a compromise that
will suffice for most anticipated users. Here there be dragons.

"I'm a war president. I make decisions here in the Oval Office
in foreign policy matters with war on my mind."  Bush.
"Churchill and Bush can both be considered wartime leaders, just
as Secretariat and Mr Ed were both horses."  James Rhodes.  

"CBFalconer" <cb********@yahoo.com> wrote in message
news:40***************@yahoo.com... > It has been my opinion for a very long time that he real sine > function is not so very important. The most important in > numerical mathematics is sin2pi(x) which calculates sin(2.pi.x), > and which allows easy and exact range reduction. Alas, that > function is not in C.
That just makes the problem show up elsewhere. Compare that function with an argument of (1e18 + 0.1) against an argument of (0.1), for example. At least on any common arithmetic system known to me. No, again. Working in quadrants, or goroundies, *does* solve the argumentreduction problem  you just chuck the leading bits that don't contribute to the final value. But you're *still* conjecturing that a lack of fraction bits is a lack of accuracy, and that's not necessarily true. Hence, it behooves the library writer to do a good job in case it *isn't* true.
In the example I gave above, using any binary representation, it is true.
What is true? A typical IEEE double representation will render
1e18 + 0.1 as 1e18, if that's what you intend to signify.
When fed as input to a function, the function can't know that
some poor misguided programmer had aspirations for some of the
bits not retained. But that has nothing to do with how hard
the sine function should work to deliver the best representation
of sin(x) when x == 1e18. You're once again asking the library
writer to make up stories about bits dead and gone -- that's
the responsibility of the programmer in designing the program
and judging the meaningfulness of the result returned by sin(x)
*for that particular program.* The library writer can't and
shouldn't care about what might have been.
Maybe I should have used + (1.0/3.0) to ensure problems on any non-trinary system :)
So what if you did? That still doesn't change the analysis.
There exists a best internal approximation for 1/3 in any given
floatingpoint representation. There also exists a best internal
approximation for the sine *of that representation.* The library
can't and shouldn't surmise that 0.333333 (speaking decimal)
is the leading part of 0.33333333333... It could just as well
be the leading part of 0.3333334.
At any rate, the ideal system produces instantaneous results with no range limitations and the full accuracy of the arithmetic representation. So the designer should pick a compromise that will suffice for most anticipated users. Here there be dragons.
I mostly agree with that sentiment, except for the "most anticipated
users" part. If I followed that maxim to its logical extreme, my
company would produce very fast math functions that yield results
of mediocre accuracy, with spurious overflows and traps for extreme
arguments. An incredibly large number of math libraries meet that
de facto spec, *and the users almost never complain.*
We aim to please more exacting users, however, to the point that
they often don't notice the pains we've taken to stay out of their
way. So, to review your checklist:
- We don't aim for instantaneous results, even though a few of
the better designed modern CPUs can now evaluate the common
math functions in single hardware instructions. Our compromise
is to keep the code as fast as possible while still writing in
highly portable C.
- We *do* aim for no range limitations. We think through every
possible input representation, and produce either a suitable
special value (Inf, -Inf, 0, -0, or NaN) or a finite result close
to the best internal approximation of the function value for that
input, treating the input as exact.
- We provide full accuracy only for a handful of the commonest
math functions. But we generally aim for worstcase errors of
3 ulp (units in the least-significant place) for float, 4 for
double, and 5 for 113-bit long double. The rare occasions
where we fail to achieve this goal are around the zeros of
messy functions.
Most important, we have tools to *measure* how well we do,
and how well other libraries do. At some point in the not
too distant future, we'll publish some objective comparisons
at our web site. We find that programmers are more likely to
consider upgrading an existing library when they have a
better understanding of its limitations.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  
P: n/a

"P.J. Plauger" wrote: "CBFalconer" <cb********@yahoo.com> wrote in message
.... snip ...

> > At any rate, the ideal system produces instantaneous results
> > with no range limitations and the full accuracy of the
> > arithmetic representation. So the designer should pick a
> > compromise that will suffice for most anticipated users. Here
> > there be dragons.
>
> I mostly agree with that sentiment, except for the "most
> anticipated users" part. If I followed that maxim to its logical
> extreme, my company would produce very fast math functions that
> yield results of mediocre accuracy, with spurious overflows and
> traps for extreme arguments. An incredibly large number of math
> libraries meet that de facto spec, *and the users almost never
> complain.*
They probably know no better. However I would not castigate traps
for extreme arguments too much -- it is surely better for the user
to know something has happened than to use values with zero
significant digits. In a way it is unfortunate that hardware has
replaced software floating point, because it makes it
impracticable to fix problems.
Not all applications require the same FP library, and the usual
library/linker etc construction of C applications makes such
customization fairly easy.

> We aim to please more exacting users, however, to the point that
> they often don't notice the pains we've taken to stay out of their
> way. So, to review your checklist:
> - We don't aim for instantaneous results, even though a few of the
> better designed modern CPUs can now evaluate the common math
> functions in single hardware instructions. Our compromise is to
> keep the code as fast as possible while still writing in highly
> portable C.
> - We *do* aim for no range limitations. We think through every
> possible input representation, and produce either a suitable
> special value (Inf, -Inf, 0, -0, or NaN) or a finite result close
> to the best internal approximation of the function value for that
> input, treating the input as exact.
A precision measure (count of significant bits) would be handy.

> - We provide full accuracy only for a handful of the commonest
> math functions. But we generally aim for worst-case errors of
> 3 ulp (units in the least-significant place) for float, 4 for
> double, and 5 for 113-bit long double. The rare occasions where
> we fail to achieve this goal are around the zeros of messy
> functions.
Those objectives are generally broad enough to eliminate any worry
over whether the next missing bit is 0 or 1.

> Most important, we have tools to *measure* how well we do, and how
> well other libraries do. At some point in the not too distant
> future, we'll publish some objective comparisons at our web site.
> We find that programmers are more likely to consider upgrading an
> existing library when they have a better understanding of its
> limitations.
Constructing those tools, in itself, is an application requiring a
much different math library. I suspect Gauss has no small hand.

Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!  
P: n/a

"CBFalconer" <cb********@yahoo.com> wrote in message
news:40***************@yahoo.com...
> > I mostly agree with that sentiment, except for the "most
> > anticipated users" part. If I followed that maxim to its logical
> > extreme, my company would produce very fast math functions that
> > yield results of mediocre accuracy, with spurious overflows and
> > traps for extreme arguments. An incredibly large number of math
> > libraries meet that de facto spec, *and the users almost never
> > complain.*
>
> They probably know no better. However I would not castigate traps
> for extreme arguments too much -- it is surely better for the user
> to know something has happened than to use values with zero
> significant digits.
Only sin/cos/tan suffer from catastrophic significance loss, and
these functions rarely trap. I'm talking about things like
hypot(DBL_MAX / 2, DBL_MAX / 2)
which has a well-defined and precise value, but which often
causes an intermediate overflow and/or trap in naive implementations.
> In a way it is unfortunate that hardware has replaced software
> floating point, because it makes it impracticable to fix problems.
I don't understand that comment.
> Not all applications require the same FP library, and the usual
> library/linker etc construction of C applications makes such
> customization fairly easy.

Indeed. That's how we can sell replacement and add-on libraries.

> > We aim to please more exacting users, however, to the point that
> > they often don't notice the pains we've taken to stay out of
> > their way. So, to review your checklist:
> > - We don't aim for instantaneous results, even though a few of
> > the better designed modern CPUs can now evaluate the common math
> > functions in single hardware instructions. Our compromise is to
> > keep the code as fast as possible while still writing in highly
> > portable C.
> > - We *do* aim for no range limitations. We think through every
> > possible input representation, and produce either a suitable
> > special value (Inf, -Inf, 0, -0, or NaN) or a finite result
> > close to the best internal approximation of the function value
> > for that input, treating the input as exact.
> A precision measure (count of significant bits) would be handy.
That's the ulp measure described below.

> > - We provide full accuracy only for a handful of the commonest
> > math functions. But we generally aim for worst-case errors of
> > 3 ulp (units in the least-significant place) for float, 4 for
> > double, and 5 for 113-bit long double. The rare occasions where
> > we fail to achieve this goal are around the zeros of messy
> > functions.
> Those objectives are generally broad enough to eliminate any worry
> over whether the next missing bit is 0 or 1.
No, they're orthogonal to that issue, as I keep trying to explain
to you. We do the best we can with the function arguments given.
It is the inescapable responsibility of the programmer to
understand the effect of uncertainties in those argument values on
the uncertainties in the function results.

> > Most important, we have tools to *measure* how well we do, and
> > how well other libraries do. At some point in the not too
> > distant future, we'll publish some objective comparisons at our
> > web site. We find that programmers are more likely to consider
> > upgrading an existing library when they have a better
> > understanding of its limitations.
> Constructing those tools, in itself, is an application requiring a
> much different math library. I suspect Gauss has no small hand.
His spirit indeed lives on. We mostly use Maplesoft these
days, having long since given up on Mathematica as too
buggy.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  
P: n/a

"P.J. Plauger" wrote: "CBFalconer" <cb********@yahoo.com> wrote in message
.... snip ...

> > In a way it is unfortunate that hardware has replaced software
> > floating point, because it makes it impracticable to fix
> > problems.
>
> I don't understand that comment.
I mean that the library/routine creator is fairly well restricted
to the FP format supplied by the hardware, on penalty of
horrendous time penalties.

> > Not all applications require the same FP library, and the usual
> > library/linker etc construction of C applications makes such
> > customization fairly easy.
>
> Indeed. That's how we can sell replacement and add-on libraries.
.... snip ...

> > A precision measure (count of significant bits) would be handy.
>
> That's the ulp measure described below.
I meant something that was automatically associated with each
value, and revised by the process of calculation. For example,
multiplying something with 3 digit precision by something with two
digit precision can have no better than 2 digit precision,
regardless of actual significant bits generated. Subtraction of
similar sized numbers can produce violent reductions. This is the
sort of thing one can build into a custom software FP
representation, rather than doing some sort of worst case analysis
up front.
The replacement library can hardly replace the underlying FP
operations, barring the sort of kluges designed to use or bypass
emulators.

> > > - We provide full accuracy only for a handful of the commonest
> > > math functions. But we generally aim for worst-case errors of
> > > 3 ulp (units in the least-significant place) for float, 4 for
> > > double, and 5 for 113-bit long double. The rare occasions
> > > where we fail to achieve this goal are around the zeros of
> > > messy functions.
> >
> > Those objectives are generally broad enough to eliminate any
> > worry over whether the next missing bit is 0 or 1.
>
> No, they're orthogonal to that issue, as I keep trying to explain
> to you. We do the best we can with the function arguments given.
> It is the inescapable responsibility of the programmer to
> understand the effect of uncertainties in those argument values on
> the uncertainties in the function results.
Where you assume that those arguments are exact. In reality, if
they come from almost any physical entity, they express some form
of measurement range. The only exactness is in the individual
terms of a Taylor series, for example. If you compute something
to 3 ulp, a 1/2 bit error in the argument value is practically
meaningless and can usually be ignored, barring a nearby
singularity. All of which leads to the same thing.
Without suitable actions and corrections results can have
unexpected biases. The horrible example is failure to round (as
did some of the early Microsoft Basic fp routines, and my first
efforts :). A more subtle example is rounding up (or down) from
0.5.
None of this is criticism, but I think it is essential to have a
clear idea of the possible failure mechanisms of our algorithms in
order to use them properly. Early nuclear pulse height analyzers
gave peculiar results until the effects of differential
nonlinearity were recognized. After that the causes of such
nonlinearity were often weird and seemingly totally disconnected.

fix (vb.): 1. to paper over, obscure, hide from public view; 2.
to work around, in a way that produces unintended consequences
that are worse than the original problem. Usage: "Windows ME
fixes many of the shortcomings of Windows 98 SE". -- Hutchison
P: n/a

In article <W1******************@nwrddc02.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote: I've gone to both extremes over the past several decades. Our latest math library, still in internal development, can get exact function values for *all* argument values. It uses multiprecision argument reduction that can gust up to over 4,000 bits [sic]. "The Standard C Library" represents an intermediate viewpoint  it stays exact until about half the fraction bits go away.
I still haven't decided how hard we'll try to preserve precision for large arguments in the next library we ship.
Obviously you don't get _exact_ values for any argument except sin (0.0)
:).
I guess you mean "the exact value, correctly rounded to the nearest
valid floating point number", since libraries that round to one of the
two nearest valid floating point numbers are already available?  
P: n/a

suri wrote:
> im sorry i meant i *could not* find the file

suri wrote:
> I do know the series expansion of sine i was just interested to
> know how its implemented in the ansi C library. like how many
> terms of the infinite series are included.
> I have linux and use glibc. so i could find the file in the path u
> mentioned
I stopped using Linux shortly after growing out of puberty. I developed more
sophisticated tastes in life... but I digress...
As someone else pointed out, there is no canonical ANSI C implementation.
There are various and sundry different ways of computing transcendental
functions with various accuracies and efficiencies.
The FreeBSD code mentions "a special Remez algorithm", but it boils down to
a Horner method polynomial computation (and if you don't know what the
Horner method is, google it):

/*
* Copyright (c) 1987, 1993
* The Regents of the University of California. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by the University of
* California, Berkeley and its contributors.
* 4. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)trig.h 8.1 (Berkeley) 6/4/93
*/
#include "mathimpl.h"
vc(thresh, 2.6117239648121182150E-1 ,b863,3f85,6ea0,6b02, -1, .85B8636B026EA0)
vc(PIo4, 7.8539816339744830676E-1 ,0fda,4049,68c2,a221, 0, .C90FDAA22168C2)
vc(PIo2, 1.5707963267948966135E0 ,0fda,40c9,68c2,a221, 1, .C90FDAA22168C2)
vc(PI3o4, 2.3561944901923449203E0 ,cbe3,4116,0e92,f999, 2, .96CBE3F9990E92)
vc(PI, 3.1415926535897932270E0 ,0fda,4149,68c2,a221, 2, .C90FDAA22168C2)
vc(PI2, 6.2831853071795864540E0 ,0fda,41c9,68c2,a221, 3, .C90FDAA22168C2)
ic(thresh, 2.6117239648121182150E-1 , -2, 1.0B70C6D604DD4)
ic(PIo4, 7.8539816339744827900E-1 , -1, 1.921FB54442D18)
ic(PIo2, 1.5707963267948965580E0 , 0, 1.921FB54442D18)
ic(PI3o4, 2.3561944901923448370E0 , 1, 1.2D97C7F3321D2)
ic(PI, 3.1415926535897931160E0 , 1, 1.921FB54442D18)
ic(PI2, 6.2831853071795862320E0 , 2, 1.921FB54442D18)
#ifdef vccast
#define thresh vccast(thresh)
#define PIo4 vccast(PIo4)
#define PIo2 vccast(PIo2)
#define PI3o4 vccast(PI3o4)
#define PI vccast(PI)
#define PI2 vccast(PI2)
#endif
#ifdef national
static long fmaxx[] = { 0xffffffff, 0x7fefffff};
#define fmax (*(double*)fmaxx)
#endif /* national */
static const double
zero = 0,
one = 1,
negone = -1,
half = 1.0/2.0,
small = 1E-10, /* 1+small**2 == 1; better values for small:
* small = 1.5E-9 for VAX D
* = 1.2E-8 for IEEE Double
* = 2.8E-10 for IEEE Extended
*/
big = 1E20; /* big := 1/(small**2) */
/* sin__S(x*x) ... reimplemented as a macro
* DOUBLE PRECISION (VAX D format 56 bits, IEEE DOUBLE 53 BITS)
* STATIC KERNEL FUNCTION OF SIN(X), COS(X), AND TAN(X)
* CODED IN C BY K.C. NG, 1/21/85;
* REVISED BY K.C. NG on 8/13/85.
*
*               sin(x*k) - x
* RETURN  ---------------------- on [-PI/4,PI/4], where k=pi/PI,
*                    x
* PI is the rounded value of pi in machine precision:
*
* Decimal:
* pi = 3.141592653589793 23846264338327 .....
* 53 bits PI = 3.141592653589793 115997963 ..... ,
* 56 bits PI = 3.141592653589793 227020265 ..... ,
*
* Hexadecimal:
* pi = 3.243F6A8885A308D313198A2E....
* 53 bits PI = 3.243F6A8885A30 = 2 * 1.921FB54442D18
* 56 bits PI = 3.243F6A8885A308 = 4 * .C90FDAA22168C2
*
* Method:
* 1. Let z=x*x. Create a polynomial approximation to
* (sin(k*x)-x)/x = z*(S0 + S1*z^1 + ... + S5*z^5).
* Then
* sin__S(x*x) = z*(S0 + S1*z^1 + ... + S5*z^5)
*
* The coefficient S's are obtained by a special Remez algorithm.
*
* Accuracy:
* In the absence of rounding error, the approximation has absolute error
* less than 2**(-61.11) for VAX D FORMAT, 2**(-57.45) for IEEE DOUBLE.
*
* Constants:
* The hexadecimal values are the intended ones for the following constants.
* The decimal values may be used, provided that the compiler will convert
* from decimal to binary accurately enough to produce the hexadecimal
* values shown.
*
*/
vc(S0, -1.6666666666666646660E-1 ,aaaa,bf2a,aa71,aaaa, -2, -.AAAAAAAAAAAA71)
vc(S1, 8.3333333333297230413E-3 ,8888,3d08,477f,8888, -6, .8888888888477F)
vc(S2, -1.9841269838362403710E-4 ,0d00,ba50,1057,cf8a, -12, -.D00D00CF8A1057)
vc(S3, 2.7557318019967078930E-6 ,ef1c,3738,bedc,a326, -18, .B8EF1CA326BEDC)
vc(S4, -2.5051841873876551398E-8 ,3195,b3d7,e1d3,374c, -25, -.D73195374CE1D3)
vc(S5, 1.6028995389845827653E-10 ,3d9c,3030,cccc,6d26, -32, .B03D9C6D26CCCC)
vc(S6, -6.2723499671769283121E-13 ,8d0b,ac30,ea82,7561, -40, -.B08D0B7561EA82)
ic(S0, -1.6666666666666463126E-1 , -3, -1.555555555550C)
ic(S1, 8.3333333332992771264E-3 , -7, 1.111111110C461)
ic(S2, -1.9841269816180999116E-4 , -13, -1.A01A019746345)
ic(S3, 2.7557309793219876880E-6 , -19, 1.71DE3209CDCD9)
ic(S4, -2.5050225177523807003E-8 , -26, -1.AE5C0E319A4EF)
ic(S5, 1.5868926979889205164E-10 , -33, 1.5CF61DF672B13)
#ifdef vccast
#define S0 vccast(S0)
#define S1 vccast(S1)
#define S2 vccast(S2)
#define S3 vccast(S3)
#define S4 vccast(S4)
#define S5 vccast(S5)
#define S6 vccast(S6)
#endif
#if defined(vax)||defined(tahoe)
# define sin__S(z) (z*(S0+z*(S1+z*(S2+z*(S3+z*(S4+z*(S5+z*S6)))))))
#else /* defined(vax)||defined(tahoe) */
# define sin__S(z) (z*(S0+z*(S1+z*(S2+z*(S3+z*(S4+z*S5))))))
#endif /* defined(vax)||defined(tahoe) */
/* cos__C(x*x) ... reimplemented as a macro
* DOUBLE PRECISION (VAX D FORMAT 56 BITS, IEEE DOUBLE 53 BITS)
* STATIC KERNEL FUNCTION OF SIN(X), COS(X), AND TAN(X)
* CODED IN C BY K.C. NG, 1/21/85;
* REVISED BY K.C. NG on 8/13/85.
*
*                            x*x
* RETURN   cos(k*x) - 1 + --------- on [-PI/4,PI/4], where k = pi/PI,
*                             2
* PI is the rounded value of pi in machine precision :
*
* Decimal:
* pi = 3.141592653589793 23846264338327 .....
* 53 bits PI = 3.141592653589793 115997963 ..... ,
* 56 bits PI = 3.141592653589793 227020265 ..... ,
*
* Hexadecimal:
* pi = 3.243F6A8885A308D313198A2E....
* 53 bits PI = 3.243F6A8885A30 = 2 * 1.921FB54442D18
* 56 bits PI = 3.243F6A8885A308 = 4 * .C90FDAA22168C2
*
*
* Method:
* 1. Let z=x*x. Create a polynomial approximation to
* cos(k*x)-1+z/2 = z*z*(C0 + C1*z^1 + ... + C5*z^5)
* then
* cos__C(z) = z*z*(C0 + C1*z^1 + ... + C5*z^5)
*
* The coefficient C's are obtained by a special Remez algorithm.
*
* Accuracy:
* In the absence of rounding error, the approximation has absolute error
* less than 2**(-64) for VAX D FORMAT, 2**(-58.3) for IEEE DOUBLE.
*
*
* Constants:
* The hexadecimal values are the intended ones for the following constants.
* The decimal values may be used, provided that the compiler will convert
* from decimal to binary accurately enough to produce the hexadecimal
* values shown.
*/
vc(C0, 4.1666666666666504759E-2 ,aaaa,3e2a,a9f0,aaaa, -4, .AAAAAAAAAAA9F0)
vc(C1, -1.3888888888865302059E-3 ,0b60,bbb6,0cca,b60a, -9, -.B60B60B60A0CCA)
vc(C2, 2.4801587285601038265E-5 ,0d00,38d0,098f,cdcd, -15, .D00D00CDCD098F)
vc(C3, -2.7557313470902390219E-7 ,f27b,b593,e805,b593, -21, -.93F27BB593E805)
vc(C4, 2.0875623401082232009E-9 ,74c8,320f,3ff0,fa1e, -28, .8F74C8FA1E3FF0)
vc(C5, -1.1355178117642986178E-11 ,c32d,ae47,5a63,0a5c, -36, -.C7C32D0A5C5A63)
ic(C0, 4.1666666666666504759E-2 , -5, 1.555555555553E)
ic(C1, -1.3888888888865301516E-3 , -10, -1.6C16C16C14199)
ic(C2, 2.4801587269650015769E-5 , -16, 1.A01A01971CAEB)
ic(C3, -2.7557304623183959811E-7 , -22, -1.27E4F1314AD1A)
ic(C4, 2.0873958177697780076E-9 , -29, 1.1EE3B60DDDC8C)
ic(C5, -1.1250289076471311557E-11 , -37, -1.8BD5986B2A52E)
#ifdef vccast
#define C0 vccast(C0)
#define C1 vccast(C1)
#define C2 vccast(C2)
#define C3 vccast(C3)
#define C4 vccast(C4)
#define C5 vccast(C5)
#endif
#define cos__C(z) (z*z*(C0+z*(C1+z*(C2+z*(C3+z*(C4+z*C5))))))
  
P: n/a

In article <40***************@yahoo.com>,
CBFalconer <cb********@yahoo.com> wrote:

> At any rate, the ideal system produces instantaneous results with
> no range limitations and the full accuracy of the arithmetic
> representation. So the designer should pick a compromise that will
> suffice for most anticipated users. Here there be dragons.
Alternatively, a choice of functions that use different compromises
between speed and precision. Let the user of the library decide what is
more important.  
P: n/a

In article <BN********************@nwrddc01.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote: We aim to please more exacting users, however, to the point that they often don't notice the pains we've taken to stay out of their way.
Just a thought: High-quality arithmetic especially benefits the clueless
user, who would have no idea why things go wrong if they go wrong and
who would have no chance to fix things if they go wrong.  
P: n/a

Christian Bau wrote:
> CBFalconer <cb********@yahoo.com> wrote:
> > At any rate, the ideal system produces instantaneous results
> > with no range limitations and the full accuracy of the
> > arithmetic representation. So the designer should pick a
> > compromise that will suffice for most anticipated users. Here
> > there be dragons.
>
> Alternatively, a choice of functions that use different
> compromises between speed and precision. Let the user of the
> library decide what is more important.
That is not as feasible as changing complete libraries. For one
thing, the language specifies the names involved. For another, an
efficient library is highly interlinked. A routine to accept an
argument and a pointer to a structure with a list of coefficients
for polynomial expansion is likely to be heavily used. So a mix
and match strategy is very likely to create virtually insoluble
bugs. The need for -lm in linking makes this easy, by simply
replacing the library proper at the appropriate times.

fix (vb.): 1. to paper over, obscure, hide from public view; 2.
to work around, in a way that produces unintended consequences
that are worse than the original problem. Usage: "Windows ME
fixes many of the shortcomings of Windows 98 SE". -- Hutchison
P: n/a

"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slbnewsm1.svr.pol.co.uk...
> In article <W1******************@nwrddc02.gnilink.net>,
> "P.J. Plauger" <pj*@dinkumware.com> wrote:
> > I've gone to both extremes over the past several decades. Our
> > latest math library, still in internal development, can get
> > exact function values for *all* argument values. It uses
> > multi-precision argument reduction that can gust up to over
> > 4,000 bits [sic]. "The Standard C Library" represents an
> > intermediate viewpoint -- it stays exact until about half the
> > fraction bits go away.
> > I still haven't decided how hard we'll try to preserve precision
> > for large arguments in the next library we ship.
>
> Obviously you don't get _exact_ values for any argument except
> sin (0.0) :).
And +1, and -1, of course. Yes, IEC 60559 requires that the
floatingpoint inexact status bit be set for any value that
can't be precisely represented, and we do that. What I
meant by "exact" above was shorthand for "the nearest
internal representation to the actual function value, taking
the argument(s) as exact."
> I guess you mean "the exact value, correctly rounded to the
> nearest valid floating point number", since libraries that round
> to one of the two nearest valid floating point numbers are already
> available?
Several libraries do give the best rounded result for sin(x), for
small enough x. One or two are perverse enough to do proper
argument reduction over a huge range, perhaps even the full
range of values. Several more sensible libraries compute a very
good sin(x) over a large but not unbounded range. I've yet to
decide which of these choices best serves our customers.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  
P: n/a

wombat <sc****@cs.ucla.edu> wrote:
> suri wrote:
> > No i knew that faq. my question is where is the sine fn
> > imlementation? since u say its a native instruction set means
> > that there is no code in ansi C for implementing sine.?
>
> Some platforms have hardware instructions that compute sin() and
> the compiler will emit them, bypassing the libm library's
> implementation. This is pretty much true for ia32 and ia64.
I'm pretty sure it's not true of IA64 and it's not really true of
AMD64 either. (Though I personally don't agree with the choice in
case.)
As to the original question: http://www.research.scea.com/gdc2003...functions.html
shows some techniques that I think are close to the state of the art.

Paul Hsieh http://www.pobox.com/~qed/ http://bstring.sf.net/  
P: n/a

In article <DK*******************@nwrddc03.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote: "Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message news:ch*********************************@slbnewsm1.svr.pol.co.uk...
Obviously you don't get _exact_ values for any argument except sin (0.0) :). And +1, and 1, of course.
Of course not. sin (+1) and sin (-1) are irrational, and sin (x) is
not +/- 1 for any rational x.
> What I meant by "exact" above was shorthand for "the nearest
> internal representation to the actual function value, taking the
> argument(s) as exact."
That would be "the exact value, correctly rounded to the nearest valid
floating point number", right?
> Several libraries do give the best rounded result for sin(x), for
> small enough x.
I think that is actually not too difficult if you write code
specifically for one floating point implementation: Do the
calculation in double-double precision, then round to double.
Estimate the maximum
rounding error which should be only a tiny bit more than 1/2 ulp. Then
some nontrivial maths should give you a small set of worstcase numbers
(those values of x such that sin (x) is almost exactly between two
neighbouring floating point numbers).
If you are very careful to keep the rounding errors small then you
should have very few values x for which you cannot prove that you get
the correct result. Check those values by hand (even if you cannot
_prove_ that sin (x) is rounded correctly, about half the time or more
it should happen anyway). Play around with coefficients to get the
correct result more often, and write an explicit test for the remaining
cases.
> One or two are perverse enough to do proper argument reduction
> over a huge range, perhaps even the full range of values.
fdlibm does that. And an implementation that is suitable for Java
is forced to do this by the Java standard.
> Several more sensible libraries compute a very good sin(x) over a
> large but not unbounded range. I've yet to decide which of these
> choices best serves our customers.
Some people would just pick which choice produces the fastest benchmark
results :(
An interesting mathematical problem: Find approximations for sine and
cosine, such that
s*s + c*c <= 1.0
using floatingpoint arithmetic is guaranteed.  
P: n/a

In article <40***************@yahoo.com> cb********@worldnet.att.net writes:
....

> I meant something that was automatically associated with each
> value, and revised by the process of calculation. For example,
> multiplying something with 3 digit precision by something with two
> digit precision can have no better than 2 digit precision,
> regardless of actual significant bits generated. Subtraction of
> similar sized numbers can produce violent reductions. This is the
> sort of thing one can build into a custom software FP
> representation, rather than doing some sort of worst case analysis
> up front.
This would mean that almost all results in numerical algebra would be
without value. E.g. calculating the solution of a system of 40 linear
equations might easily result in something without value in this measure.
Moreover, you have to be extremely careful here. I once designed an
arcsine routine where subtraction of equal sized numbers was commonplace,
however, in the ultimate result the introduced inaccuracies did not play
a role, as careful analysis did prove.

> > No, they're orthogonal to that issue, as I keep trying to
> > explain to you. We do the best we can with the function
> > arguments given. It is the inescapable responsibility of the
> > programmer to understand the effect of uncertainties in those
> > argument values on the uncertainties in the function results.
>
> Where you assume that those arguments are exact. In reality, if
> they come from almost any physical entity, they express some form
> of measurement range. The only exactness is in the individual
> terms of a Taylor series, for example. If you compute something
> to 3 ulp, a 1/2 bit error in the argument value is practically
> meaningless and can usually be ignored, barring a nearby
> singularity. All of which leads to the same thing.
Wrong. If you compute (for instance) the sine with an accuracy of 3
ulp, that means that you expect exact values. A 1/2 bit error in the
argument might give a huge change in the result.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  
P: n/a

Paul Hsieh wrote:
> wombat <sc****@cs.ucla.edu> wrote:
> > Some platforms have hardware instructions that compute sin() and
> > the compiler will emit them, bypassing the libm library's
> > implementation. This is pretty much true for ia32 and ia64.
>
> I'm pretty sure it's not true of IA64 and it's not really true of
> AMD64 either.
AMD64 supports x87. Thus FSIN is available. What did you mean by
"not really true of AMD64 either"? http://www.amd.com/us-en/assets/cont...docs/26569.pdf
--Paul Hsieh http://www.pobox.com/~qed/ http://bstring.sf.net/
The signature delimiter is DASH DASH SPACE.
Yours is missing the last character.
P: n/a

"Grumble" <in*****@kma.eu.org> wrote in message
news:c7**********@news-rocq.inria.fr... Paul Hsieh wrote:
wombat <sc****@cs.ucla.edu> wrote:
Some platforms have hardware instructions that compute sin() and the compiler will emit them, bypassing the libm library's implementation. This is pretty much true for ia32 and ia64.
I'm pretty sure it's not true of IA64, and it's not really true of AMD64 either.
AMD64 supports x87. Thus FSIN is available. What did you mean by "not really true of AMD64 either" ?
Possibly referring to compilers complying with ABIs disallowing x87, or
taking advantage of higher performance of SSE parallel libraries. Use of
fsin on IA64 is extremely unlikely, even though it's still there.  
P: n/a

In <W1******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes: "osmium" <r1********@comcast.net> wrote in message news:c7************@ID-179017.news.uni-berlin.de...
CBFalconer writes:
> "P.J. Plauger" wrote:
> > ... snip ...
> > coefficients. But doing proper argument reduction is an open-ended
> > exercise in frustration. Just reducing the argument modulo 2*pi
> > quickly accumulates errors unless you do arithmetic to many extra
> > bits of precision.
>
> And that problem is inherent. Adding precision bits for the reduction
> will not help, because the input value doesn't have them. It is the
> old problem of differences of similar-sized quantities.
Huh? If I want the phase of an oscillator after 50,000 radians are you saying that is not computable? Please elaborate.
There was a thread hereabouts many months ago on this very subject and AFAIK no one suggested that it was not computable, it just couldn't be done with doubles. And I see no inherent problems.
Right. This difference of opinion highlights two conflicting interpretations of floating-point numbers:
1) They're fuzzy. Assume the first discarded bit is somewhere between zero and one. With this viewpoint, CBFalconer is correct that there's no point in trying to compute a sine accurately for large arguments -- all the good bits get lost.
2) They are what they are. Assume that every floating-point representation exactly represents some value, however that representation arose. With this viewpoint, osmium is correct that there's a corresponding sine that is worth computing to full machine precision.
I've gone to both extremes over the past several decades. Our latest math library, still in internal development, can get exact function values for *all* argument values. It uses multi-precision argument reduction that can gust up to over 4,000 bits [sic]. "The Standard C Library" represents an intermediate viewpoint -- it stays exact until about half the fraction bits go away.
I still haven't decided how hard we'll try to preserve precision for large arguments in the next library we ship.
Why bother? Floating point numbers *are* fuzzy. Whoever sticks to the
second interpretation has no more clues about floating point than the
guys who expect 0.1 to be accurately represented in binary floating point.
Dan

Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de  
P: n/a

"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch... This difference of opinion highlights two conflicting interpretations of floating-point numbers:
1) They're fuzzy. Assume the first discarded bit is somewhere between zero and one. With this viewpoint, CBFalconer is correct that there's no point in trying to compute a sine accurately for large arguments -- all the good bits get lost.
2) They are what they are. Assume that every floating-point representation exactly represents some value, however that representation arose. With this viewpoint, osmium is correct that there's a corresponding sine that is worth computing to full machine precision.
I've gone to both extremes over the past several decades. Our latest math library, still in internal development, can get exact function values for *all* argument values. It uses multi-precision argument reduction that can gust up to over 4,000 bits [sic]. "The Standard C Library" represents an intermediate viewpoint -- it stays exact until about half the fraction bits go away.
I still haven't decided how hard we'll try to preserve precision for large arguments in the next library we ship.
Why bother? Floating point numbers *are* fuzzy. Whoever sticks to the second interpretation has no more clues about floating point than the guys who expect 0.1 to be accurately represented in binary floating point.
Sorry, but some of our customers are highly clued and they *do*
know when their floating-point numbers are fuzzy and when they're
not. In the latter case, the last thing they want/need is for us
library writers to tell them that we've taken the easy way out
on the assumption that their input values to our functions are
fuzzier than they think they are.
It's the job of the library writer to return the best internal
approximation to the function value for a given input value
*treated as an exact number.* If the result has fuzz in a
particular application, it's up to the authors of that
application to analyze the consequence.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  
P: n/a

In <Zy********************@nwrddc01.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes: Sorry, but some of our customers are highly clued and they *do* know when their floating-point numbers are fuzzy and when they're not.
Concrete examples, please.
Dan

Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de  
P: n/a

"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch... In <Zy********************@nwrddc01.gnilink.net> "P.J. Plauger"
<pj*@dinkumware.com> writes: Sorry, but some of our customers are highly clued and they *do* know when their floating-point numbers are fuzzy and when they're not.
Concrete examples, please.
What is the sine of 162,873 radians? If you're working in radians,
you can represent this input value *exactly* even in a float. Do
you want to be told that the return value has about 16 low-order
garbage bits because nobody could possibly expect an angle that
large to have any less fuzz? Maybe you do, but some don't. And I,
for one, have trouble justifying in this case why a standard
library function shouldn't deliver on a not-unreasonable
expectation. (The fact that it's hard to deliver on the expectation
doesn't make it unreasonable.)
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  
P: n/a

In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
> "Dan Pop" <Da*****@cern.ch> wrote in message news:c7**********@sunnews.cern.ch...
> > In <Zy********************@nwrddc01.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
> > > Sorry, but some of our customers are highly clued and they *do*
> > > know when their floating-point numbers are fuzzy and when they're
> > > not.
Concrete examples, please.
What is the sine of 162,873 radians? If you're working in radians, you can represent this input value *exactly* even in a float.
You *can*, but does it make physical sense to call sine with an integer
argument (even if represented as a float)?
In real-life applications, the argument of sine is computed using
floating-point arithmetic (on non-integer values), so it *is* a fuzzy
value, with the degree of fuzziness implied by its magnitude.
So, I was asking about *concrete* examples where it makes sense to call
sine with integral arguments or with arguments that are provably *exact*
representations of the intended value.
To *me*, as a user, having a sine that spends CPU cycles to provide
the answer with the precision implied by the assumption that the
argument represents an exact value, is unacceptable. If I call
sin(DBL_MAX) I deserve *any* garbage in the -1..1 range, even if DBL_MAX
is an exact integer value.
Dan

Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de  
P: n/a

"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch... In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger"
<pj*@dinkumware.com> writes:
> "Dan Pop" <Da*****@cern.ch> wrote in message news:c7**********@sunnews.cern.ch...
> > In <Zy********************@nwrddc01.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
> > > Sorry, but some of our customers are highly clued and they *do*
> > > know when their floating-point numbers are fuzzy and when they're
> > > not.
Concrete examples, please.
What is the sine of 162,873 radians? If you're working in radians, you can represent this input value *exactly* even in a float.
You *can*, but does it make physical sense to call sine with an integer argument (even if represented as a float)?
Yes. It also makes sense to call sine with a double having 40 bits of
fraction *that are exact* and expect the 53-bit sine corresponding to
that exact number, regardless of whether there's also an exact integer
contribution as well. Same problem.
In real-life applications, the argument of sine is computed using floating-point arithmetic (on non-integer values), so it *is* a fuzzy value, with the degree of fuzziness implied by its magnitude.
Not necessarily. Once again, you're presuming that everybody programs
like you do. Library vendors don't have that luxury.
So, I was asking about *concrete* examples where it makes sense to call sine with integral arguments or with arguments that are provably *exact* representations of the intended value.
And I gave one.
To *me*, as a user, having a sine that spends CPU cycles to provide the answer with the precision implied by the assumption that the argument represents an exact value, is unacceptable.
What if the cycles are spent only on large arguments? All you have
to do then is avoid the large arguments you know to be meaningless
in your application.
If I call sin(DBL_MAX) I deserve *any* garbage in the -1..1 range, even if DBL_MAX is an exact integer value.
Probably. You also deserve *any* number of CPU cycles spent generating
that garbage.
Fortunately for you, most sine functions meet your quality
requirements.
P.J. Plauger
Dinkumware, Ltd. http://www.dinkumware.com  
P: n/a

In <I6******************@nwrddc03.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes: "Dan Pop" <Da*****@cern.ch> wrote in message news:c7**********@sunnews.cern.ch...
> In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
> > "Dan Pop" <Da*****@cern.ch> wrote in message news:c7**********@sunnews.cern.ch...
> > > In <Zy********************@nwrddc01.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
> > > > Sorry, but some of our customers are highly clued and they *do*
> > > > know when their floating-point numbers are fuzzy and when they're
> > > > not.
> > > Concrete examples, please.
> > What is the sine of 162,873 radians? If you're working in radians,
> > you can represent this input value *exactly* even in a float.
You *can*, but does it make physical sense to call sine with an integer argument (even if represented as a float)?
Yes. It also makes sense to call sine with a double having 40 bits of fraction *that are exact* and expect the 53-bit sine corresponding to that exact number, regardless of whether there's also an exact integer contribution as well. Same problem.
Concrete examples, please.
In real-life applications, the argument of sine is computed using floating-point arithmetic (on non-integer values), so it *is* a fuzzy value, with the degree of fuzziness implied by its magnitude.
Not necessarily. Once again, you're presuming that everybody programs like you do. Library vendors don't have that luxury.
Concrete examples, please. Assuming a competent approach to the problem.
So, I was asking about *concrete* examples where it makes sense to call sine with integral arguments or with arguments that are provably *exact* representations of the intended value.
And I gave one.
Nope, you gave me nothing in the way of a *concrete* example. Or maybe
the term is beyond your grasp... Clue: "concrete" and "hypothetical"
are not exactly synonyms in any language I'm familiar with.
To *me*, as a user, having a sine that spends CPU cycles to provide the answer with the precision implied by the assumption that the argument represents an exact value, is unacceptable.
What if the cycles are spent only on large arguments? All you have to do then is avoid the large arguments you know to be meaningless in your application.
Even not-so-large arguments can still have plenty of fuzziness, and
getting a 53-bit accurate answer for the value actually represented is
still a waste of CPU resources.
If I call sin(DBL_MAX) I deserve *any* garbage in the -1..1 range, even if DBL_MAX is an exact integer value.
Probably. You also deserve *any* number of CPU cycles spent generating that garbage.
A *good* implementation (which is what we're talking about, right?) is
supposed to produce garbage (where garbage is asked for) as fast as
possible.
Fortunately for you, most sine functions meet your quality requirements.
Glad to hear it. I hope I'll never be forced to use yours.
Dan

Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de  
P: n/a

In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes: In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
.... Concrete examples, please.
What is the sine of 162,873 radians? If you're working in radians, you can represent this input value *exactly* even in a float.
You *can*, but does it make physical sense to call sine with an integer argument (even if represented as a float)?
Must everything make physical sense? Perhaps it makes mathematical sense?
In real-life applications, the argument of sine is computed using floating-point arithmetic (on non-integer values), so it *is* a fuzzy value, with the degree of fuzziness implied by its magnitude.
Not in mathematical applications, where the argument to the sine function
can very well be exact.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/

Question stats: viewed 7080, replies 143, date asked Nov 14 '05