
Sine code for ANSI C

Hello,
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file.
I went to the math directory and found math.h, which I guess needs to be
included for the sine function. But which .c file implements the
sine/cosine and other trig functions?
Thanks
Nov 14 '05
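For orientation, here is a minimal sketch of what such a .c file typically contains (toy_sin is an illustrative name, and this is not glibc's actual code -- in current glibc sources the double-precision sine lives under sysdeps/ieee754/): naive range reduction with fmod() followed by a truncated Taylor series. Production libraries use minimax polynomials and far more careful reduction, which is what much of the discussion below is about.

/* Illustrative only: a toy sine, not glibc's implementation. */
#include <math.h>

static double toy_sin(double x)
{
    const double pi = 3.14159265358979323846;
    double term, sum;
    int n;

    /* Reduce to [-pi, pi]. This single fmod() step loses accuracy for
       large |x| -- the very problem argued over later in this thread. */
    x = fmod(x, 2.0 * pi);
    if (x > pi)
        x -= 2.0 * pi;
    else if (x < -pi)
        x += 2.0 * pi;

    /* sin x = x - x^3/3! + x^5/5! - ... */
    term = sum = x;
    for (n = 1; n <= 10; n++) {
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum += term;
    }
    return sum;
}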
Paul Hsieh wrote:
-wombat- <sc****@cs.ucla.edu> wrote:
Some platforms have hardware instructions that compute sin()
and the compiler will emit them, bypassing the libm library's
implementation. This is pretty much true for ia32 and ia64.
I'm pretty sure its not true of IA64 and its not really true of
AMD64 either.


AMD64 supports x87. Thus FSIN is available. What did you mean by
"not really true of AMD64 either" ?

http://www.amd.com/us-en/assets/cont...docs/26569.pdf
--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/


The signature delimiter is DASH-DASH-SPACE.
Yours is missing the last character.

Nov 14 '05 #41

"Grumble" <in*****@kma.eu .org> wrote in message
news:c7******** **@news-rocq.inria.fr.. .
Paul Hsieh wrote:
-wombat- <sc****@cs.ucla.edu> wrote:
Some platforms have hardware instructions that compute sin()
and the compiler will emit them, bypassing the libm library's
implementation. This is pretty much true for ia32 and ia64.


I'm pretty sure its not true of IA64 and its not really true of
AMD64 either.


AMD64 supports x87. Thus FSIN is available. What did you mean by
"not really true of AMD64 either" ?

Possibly referring to compilers complying with ABIs that disallow x87, or
taking advantage of the higher performance of SSE parallel libraries. Use of
fsin on IA64 is extremely unlikely, even though the instruction is still there.
Nov 14 '05 #42
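As an aside on the hardware instruction itself: with GCC on x86/x86-64 one can force FSIN by hand with inline assembly, as in the sketch below (fsin_x87 is an illustrative name; whether a compiler ever emits FSIN on its own depends on target, ABI and flags, as noted above). FSIN also performs its own argument reduction against an internal, limited-precision value of pi, which ties into the accuracy debate that follows.

/* Sketch, GCC on x86/x86-64 only: call the x87 FSIN instruction directly
   instead of the libm routine. FSIN only reduces |x| < 2^63; beyond that
   it leaves the operand unchanged and sets a status flag. */
static double fsin_x87(double x)
{
    double r = x;
    __asm__ ("fsin" : "+t" (r));   /* "t" = top of the x87 stack, st(0) */
    return r;
}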
In <W1******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"osmium" <r1********@comcast.net> wrote in message
news:c7************@ID-179017.news.uni-berlin.de...
CBFalconer writes:
> "P.J. Plauger" wrote:
> >
> ... snip ...
> > coefficients. But doing proper argument reduction is an open
> > ended exercise in frustration. Just reducing the argument modulo
> > 2*pi quickly accumulates errors unless you do arithmetic to
> > many extra bits of precision.
>
> And that problem is inherent. Adding precision bits for the
> reduction will not help, because the input value doesn't have
> them. It is the old problem of differences of similar sized
> quantities.


Huh? If I want the phase of an oscillator after 50,000 radians are you
saying that is not computable? Please elaborate.

There was a thread hereabouts many months ago on this very subject and
AFAIK no one suggested that it was not computable, it just couldn't be done
with doubles. And I see no inherent problems.


Right. This difference of opinion highlights two conflicting
interpretations of floating-point numbers:

1) They're fuzzy. Assume the first discarded bit is
somewhere between zero and one. With this viewpoint,
CBFalconer is correct that there's no point in trying
to compute a sine accurately for large arguments --
all the good bits get lost.

2) They are what they are. Assume that every floating-point
representation exactly represents some value, however that
representation arose. With this viewpoint, osmium is correct
that there's a corresponding sine that is worth computing
to full machine precision.

I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic]. "The Standard C Library"
represents an intermediate viewpoint -- it stays exact
until about half the fraction bits go away.

I still haven't decided how hard we'll try to preserve
precision for large arguments in the next library we ship.


Why bother? Floating point numbers *are* fuzzy. Whoever sticks to the
second interpretation has no more clues about floating point than the
guys who expect 0.1 to be accurately represented in binary floating point.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #43
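The argument-reduction point quoted above is easy to observe. The following sketch compares the library's sin() for a large, exactly representable argument against a single naive fmod() reduction; the exact output depends on the platform's libm, and the comparison only means something on a library that reduces more carefully than fmod() does.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double two_pi = 2.0 * 3.14159265358979323846; /* rounded 2*pi */
    double x = 1.0e10;            /* exactly representable integer */
    double r = fmod(x, two_pi);   /* one naive reduction step */

    printf("sin(x)           = %.17g\n", sin(x));
    printf("sin(fmod(x,2pi)) = %.17g\n", sin(r));
    /* fmod() itself is exact, but it reduces against a stored 2*pi that is
       off by a fraction of an ulp; that error is multiplied by roughly
       x/(2*pi) discarded periods, so the two results typically disagree
       after a handful of digits on libraries that reduce more carefully. */
    return 0;
}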
"Dan Pop" <Da*****@cern.c h> wrote in message
news:c7******** **@sunnews.cern .ch...
This difference of opinion highlights two conflicting
interpretation s of floating-point numbers:

1) They're fuzzy. Assume the first discarded bit is
somewhere between zero and one. With this viewpoint,
CBFalconer is correct that there's no point in trying
to compute a sine accurately for large arguments --
all the good bits get lost.

2) They are what they are. Assume that every floating-point
representation exactly represents some value, however that
representation arose. With this viewpoint, osmium is correct
that there's a corresponding sine that is worth computing
to full machine precision.

I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic]. "The Standard C Library"
represents an intermediate viewpoint -- it stays exact
until about half the fraction bits go away.

I still haven't decided how hard we'll try to preserve
precision for large arguments in the next library we ship.


Why bother? Floating point numbers *are* fuzzy. Whoever sticks to the
second interpretation has no more clues about floating point than the
guys who expect 0.1 to be accurately represented in binary floating point.


Sorry, but some of our customers are highly clued and they *do*
know when their floating-point numbers are fuzzy and when they're
not. In the latter case, the last thing they want/need is for us
library writers to tell them that we've taken the easy way out
on the assumption that their input values to our functions are
fuzzier than they think they are.

It's the job of the library writer to return the best internal
approximation to the function value for a given input value
*treated as an exact number.* If the result has fuzz in a
particular application, it's up to the authors of that
application to analyze the consequence.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #44
In <Zy********************@nwrddc01.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
Sorry, but some of our customers are highly clued and they *do*
know when their floating-point numbers are fuzzy and when they're
not.


Concrete examples, please.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #45
"Dan Pop" <Da*****@cern.c h> wrote in message
news:c7******** **@sunnews.cern .ch...
In <Zy************ ********@nwrddc 01.gnilink.net> "P.J. Plauger" <pj*@dinkumware .com> writes:
Sorry, but some of our customers are highly clued and they *do*
know when their floating-point numbers are fuzzy and when they're
not.


Concrete examples, please.


What is the sine of 162,873 radians? If you're working in radians,
you can represent this input value *exactly* even in a float. Do
you want to be told that the return value has about 16 low-order
garbage bits because nobody could possibly expect an angle that
large to have any less fuzz? Maybe you do, but some don't. And I,
for one, have trouble justifying in this case why a standard
library function shouldn't deliver on a not-unreasonable
expectation. (The fact that it's hard to deliver on the expectation
doesn't make it unreasonable.)

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #46
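The exactness claim in the example above is easy to check. A small sketch (C99 for sinl; 162873 needs only 18 significant bits, so even a float holds it exactly, and how closely sin() and sinl() agree depends entirely on each library's argument reduction):

#include <stdio.h>
#include <math.h>

int main(void)
{
    float xf = 162873.0f;   /* 18 significant bits, well under float's 24 */

    printf("exact as float?   %s\n", ((double)xf == 162873.0) ? "yes" : "no");
    printf("sin (double)      = %.17g\n", sin(162873.0));
    printf("sinl(long double) = %.21Lg\n", sinl(162873.0L));
    return 0;
}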
In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch...
In <Zy********************@nwrddc01.gnilink.net> "P.J. Plauger"
<pj*@dinkumware.com> writes:
>Sorry, but some of our customers are highly clued and they *do*
>know when their floating-point numbers are fuzzy and when they're
>not.


Concrete examples, please.


What is the sine of 162,873 radians? If you're working in radians,
you can represent this input value *exactly* even in a float.


You *can*, but does it make physical sense to call sine with an integer
argument (even if represented as a float)?

In real life applications, the argument of sine is computed using
floating point arithmetic (on non-integer values), so it *is* a fuzzy
value, with the degree of fuzziness implied by its magnitude.

So, I was asking about *concrete* examples where it makes sense to call
sine with integral arguments or with arguments that are provably *exact*
representations of the intended value.

To *me*, as a user, having a sine that spends CPU cycles to provide
the answer with the precision implied by the assumption that the
argument represents an exact value, is unacceptable. If I call
sin(DBL_MAX) I deserve *any* garbage in the -1..1 range, even if DBL_MAX
is an exact integer value.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #47
"Dan Pop" <Da*****@cern.c h> wrote in message
news:c7******** **@sunnews.cern .ch...
In <1d************ *******@nwrddc0 2.gnilink.net> "P.J. Plauger" <pj*@dinkumware .com> writes:
"Dan Pop" <Da*****@cern.c h> wrote in message
news:c7******* ***@sunnews.cer n.ch...
In <Zy************ ********@nwrddc 01.gnilink.net> "P.J. Plauger"<pj*@dinkumwar e.com> writes:

>Sorry, but some of our customers are highly clued and they *do*
>know when their floating-point numbers are fuzzy and when they're
>not.

Concrete examples, please.


What is the sine of 162,873 radians? If you're working in radians,
you can represent this input value *exactly* even in a float.


You *can*, but does it make physical sense to call sine with an integer
argument (even if represented as a float)?


Yes. It also makes sense to call sine with a double having 40 bits of
fraction *that are exact* and expect the 53-bit sine corresponding to
that exact number, regardless of whether there's also an exact integer
contribution as well. Same problem.
In real life applications, the argument of sine is computed using
floating point arithmetic (on non-integer values), so it *is* a fuzzy
value, with the degree of fuzziness implied by its magnitude.

Not necessarily. Once again, you're presuming that everybody programs
like you do. Library vendors don't have that luxury.

So, I was asking about *concrete* examples where it makes sense to call
sine with integral arguments or with arguments that are provably *exact*
representations of the intended value.

And I gave one.

To *me*, as a user, having a sine that spends CPU cycles to provide
the answer with the precision implied by the assumption that the
argument represents an exact value, is unacceptable.

What if the cycles are spent only on large arguments? All you have
to do then is avoid the large arguments you know to be meaningless
in your application.

If I call
sin(DBL_MAX) I deserve *any* garbage in the -1..1 range, even if DBL_MAX
is an exact integer value.


Probably. You also deserve *any* number of CPU cycles spent generating
that garbage.

Fortunately for you, most sine functions meet your quality
requirements.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #48
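For completeness, DBL_MAX is indeed an exact (enormous) integer, so sin(DBL_MAX) is a legitimate, if extreme, test of the policy being argued here. A small sketch; whatever it prints for the sine is implementation-specific:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* DBL_MAX is far above 2^53, so it is necessarily a whole number. */
    printf("DBL_MAX integral? %s\n",
           (floor(DBL_MAX) == DBL_MAX) ? "yes" : "no");
    printf("sin(DBL_MAX)      = %.17g\n", sin(DBL_MAX));
    return 0;
}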
In <I6******************@nwrddc03.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch...
In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
>"Dan Pop" <Da*****@cern.ch> wrote in message
>news:c7**********@sunnews.cern.ch...
>> In <Zy********************@nwrddc01.gnilink.net> "P.J. Plauger"
><pj*@dinkumware.com> writes:
>>
>> >Sorry, but some of our customers are highly clued and they *do*
>> >know when their floating-point numbers are fuzzy and when they're
>> >not.
>>
>> Concrete examples, please.
>
>What is the sine of 162,873 radians? If you're working in radians,
>you can represent this input value *exactly* even in a float.


You *can*, but does it make physical sense to call sine with an integer
argument (even if represented as a float)?


Yes. It also makes sense to call sine with a double having 40 bits of
fraction *that are exact* and expect the 53-bit sine corresponding to
that exact number, regardless of whether there's also an exact integer
contribution as well. Same problem.


Concrete examples, please.
In real life applications, the argument of sine is computed using
floating point arithmetic (on non-integer values), so it *is* a fuzzy
value, with the degree of fuzziness implied by its magnitude.


Not necessarily. Once again, you're presuming that everybody programs
like you do. Library vendors don't have that luxury.


Concrete examples, please. Assuming a competent approach to the problem.
So, I was asking about *concrete* examples where it makes sense to call
sine with integral arguments or with arguments that are provably *exact*
representations of the intended value.


And I gave one.


Nope, you gave me nothing in the way of a *concrete* example. Or maybe
the term is beyond your grasp... Clue: "concrete" and "hypothetical"
are not exactly synonyms in any language I'm familiar with.
To *me*, as a user, having a sine that spends CPU cycles to provide
the answer with the precision implied by the assumption that the
argument represents an exact value, is unacceptable.


What if the cycles are spent only on large arguments? All you have
to do then is avoid the large arguments you know to be meaningless
in your application.


Even not so large arguments can still have plenty of fuzziness and
getting a 53-bit accurate answer for the value actually represented is
still a waste of CPU resources.
sin(DBL_MAX) I deserve *any* garbage in the -1..1 range, even if DBL_MAX
is an exact integer value.


Probably. You also deserve *any* number of CPU cycles spent generating
that garbage.


A *good* implementation (which is what we're talking about, right?) is
supposed to produce garbage (where garbage is asked for) as fast as
possible.
Fortunately for you, most sine functions meet your quality
requirements.


Glad to hear it. I hope I'll never be forced to use yours.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #49
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes: ....
Concrete examples, please.


What is the sine of 162,873 radians? If you're working in radians,
you can represent this input value *exactly* even in a float.


You *can*, but does it make physical sense to call sine with an integer
argument (even if represented as a float)?


Must everything make physical sense? Perhaps it makes mathematical sense?
In real life applications, the argument of sine is computed using
floating point arithmetic (on non-integer values), so it *is* a fuzzy
value, with the degree of fuzziness implied by its magnitude.


Not in mathematical applications, where the argument to the sine function
can very well be exact.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #50
