
Sine code for ANSI C

Hello,
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file.
I went to the math directory and found math.h. I guess that needs to be
included for the sine function, but which .c file implements the
sine/cosine and other trig functions?
Thanks
Nov 14 '05
"P.J. Plauger" wrote:
"CBFalconer " <cb********@yah oo.com> wrote in message
.... snip ...
In a way it is unfortunate that hardware has
replaced software floating point, because it makes it
impracticable to fix problems.
I don't understand that comment.


I mean that the library/routine creator is fairly well restricted
to the FP format supplied by the hardware, on penalty of
horrendous time penalties.
Not all applications require the same FP library, and the usual
library/linker etc construction of C applications makes such
customization fairly easy.
Indeed. That's how we can sell replacement and add-on libraries.

.... snip ...
A precision measure (count of significant bits) would be handy.
That's the ulp measure described below.


I meant something that was automatically associated with each
value, and revised by the process of calculation. For example,
multiplying something with 3 digit precision by something with two
digit precision can have no better than 2 digit precision,
regardless of actual significant bits generated. Subtraction of
similar sized numbers can produce violent reductions. This is the
sort of thing one can build into a custom software FP
representation, rather than doing some sort of worst case analysis
up front.
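
A minimal sketch of the sort of scheme described, with hypothetical
names throughout: carry a significant-bit estimate alongside each value
and narrow it through every operation.

#include <math.h>

/* Hypothetical significance-tracked value: the number itself plus an
   estimate of how many of its bits are actually meaningful. */
typedef struct {
    double v;        /* the value */
    int    sigbits;  /* estimated significant bits */
} sigfp;

/* Multiplication: the product is no more precise than the less
   precise operand. */
static sigfp sig_mul(sigfp a, sigfp b) {
    sigfp r = { a.v * b.v,
                a.sigbits < b.sigbits ? a.sigbits : b.sigbits };
    return r;
}

/* Subtraction: cancellation of similar-sized operands destroys
   leading bits, so charge the lost bits to the significance count. */
static sigfp sig_sub(sigfp a, sigfp b) {
    sigfp r;
    int ea, eb, er, lost, base;
    r.v = a.v - b.v;
    frexp(a.v, &ea);
    frexp(b.v, &eb);
    frexp(r.v, &er);
    lost = (ea > eb ? ea : eb) - er;   /* leading bits cancelled */
    base = a.sigbits < b.sigbits ? a.sigbits : b.sigbits;
    r.sigbits = base - (lost > 0 ? lost : 0);
    return r;
}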

The replacement library can hardly replace the underlying FP
operations, barring the sort of kludges designed to use or bypass
emulators.
We provide full accuracy only for a handful of the commonest
math functions. But we generally aim for worst-case errors of
3 ulp (units in the least-significant place) for float, 4 for
double, and 5 for 113-bit long double. The rare occasions
where we fail to achieve this goal are around the zeros of
messy functions.


Those objectives are generally broad enough to eliminate any worry
over whether the next missing bit is 0 or 1.


No, they're orthogonal to that issue, as I keep trying to explain
to you. We do the best we can with the function arguments given.
It is the inescapable responsibility of the programmer to
understand the effect of uncertainties in those argument values on
the uncertainties in the function results.


Where you assume that those arguments are exact. In reality, if
they come from almost any physical entity, they express some form
of measurement range. The only exactness is in the individual
terms of a Taylor series, for example. If you compute something
to 3 ulp, a 1/2 bit error in the argument value is practically
meaningless and can usually be ignored, barring a nearby
singularity. All of which leads to the same thing.

Without suitable actions and corrections results can have
unexpected biases. The horrible example is failure to round (as
did some of the early Microsoft Basic fp routines, and my first
efforts :-). A more subtle example is rounding up (or down) from
0.5.
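
The bias from always rounding halves up is easy to demonstrate; a small
sketch (the halfway cases are exactly representable, so the rounding
rule is the only variable):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Halfway cases i + 0.5 are exact in binary, so any drift comes
       from the rounding rule itself, not from representation. */
    double up = 0.0, even = 0.0;
    int i;
    for (i = 0; i < 1000; i++) {
        double x = i + 0.5;
        up   += floor(x + 0.5);  /* always rounds halves up: +0.5 each */
        even += nearbyint(x);    /* default FE_TONEAREST: ties to even */
    }
    /* The exact sum of the x's is 500000; half-up accumulates +500,
       while ties-to-even cancels to no net bias. */
    printf("half-up sum: %.0f, ties-to-even sum: %.0f\n", up, even);
    return 0;
}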

None of this is criticism, but I think it is essential to have a
clear idea of the possible failure mechanisms of our algorithms in
order to use them properly. Early nuclear pulse height analyzers
gave peculiar results until the effects of differential
non-linearity were recognized. After that the causes of such
non-linearity were often weird and seemingly totally disconnected.

--
fix (vb.): 1. to paper over, obscure, hide from public view; 2.
to work around, in a way that produces unintended consequences
that are worse than the original problem. Usage: "Windows ME
fixes many of the shortcomings of Windows 98 SE". - Hutchison
Nov 14 '05 #31
In article <W1******************@nwrddc02.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote:
I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic]. "The Standard C Library"
represents an intermediate viewpoint -- it stays exact
until about half the fraction bits go away.

I still haven't decided how hard we'll try to preserve
precision for large arguments in the next library we ship.


Obviously you don't get _exact_ values for any argument except sin (0.0)
:-).

I guess you mean "the exact value, correctly rounded to the nearest
valid floating point number", since libraries that round to one of the
two nearest valid floating point numbers are already available?
Nov 14 '05 #32
suri wrote:
I'm sorry, I meant I *could not* find the file.

suri wrote:
I do know the series expansion of sine; I was just interested to know how
it's implemented in the ANSI C library, like how many terms of the
infinite series are included.

I have Linux and use glibc, so I could find the file in the path you
mentioned.


I stopped using Linux shortly after growing out of puberty. I developed more
sophisticated tastes in life... but I digress...

As someone else pointed out, there is no canonical ANSI C implementation.
There are various and sundry ways of computing transcendental
functions, with differing accuracies and efficiencies.

The FreeBSD code mentions "a special Remez algorithm", but it boils down to
a Horner method polynomial computation (and if you don't know what the
Horner method is, google it):
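
(A quick aside before the listing, with hypothetical names: Horner's
method just refactors a polynomial so that each coefficient costs one
multiply and one add.

/* Horner evaluation of p(z) = c[0] + c[1]*z + ... + c[n-1]*z^(n-1),
   factored as c[0] + z*(c[1] + z*(c[2] + ...)): one multiply-add per
   coefficient.  The sin__S/cos__C macros below are exactly this,
   unrolled with fixed coefficients. */
static double horner(const double c[], int n, double z) {
    double p = c[n - 1];
    int i;
    for (i = n - 2; i >= 0; i--)
        p = p * z + c[i];
    return p;
}

Now, the FreeBSD source itself:)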

----------------------------------------------------------------------------
/*
 * Copyright (c) 1987, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)trig.h	8.1 (Berkeley) 6/4/93
 */

#include "mathimpl.h"

vc(thresh, 2.6117239648121182150E-1 ,b863,3f85,6ea0,6b02, -1, .85B8636B026EA0)
vc(PIo4,   7.8539816339744830676E-1 ,0fda,4049,68c2,a221,  0, .C90FDAA22168C2)
vc(PIo2,   1.5707963267948966135E0  ,0fda,40c9,68c2,a221,  1, .C90FDAA22168C2)
vc(PI3o4,  2.3561944901923449203E0  ,cbe3,4116,0e92,f999,  2, .96CBE3F9990E92)
vc(PI,     3.1415926535897932270E0  ,0fda,4149,68c2,a221,  2, .C90FDAA22168C2)
vc(PI2,    6.2831853071795864540E0  ,0fda,41c9,68c2,a221,  3, .C90FDAA22168C2)

ic(thresh, 2.6117239648121182150E-1 , -2, 1.0B70C6D604DD4)
ic(PIo4,   7.8539816339744827900E-1 , -1, 1.921FB54442D18)
ic(PIo2,   1.5707963267948965580E0  ,  0, 1.921FB54442D18)
ic(PI3o4,  2.3561944901923448370E0  ,  1, 1.2D97C7F3321D2)
ic(PI,     3.1415926535897931160E0  ,  1, 1.921FB54442D18)
ic(PI2,    6.2831853071795862320E0  ,  2, 1.921FB54442D18)

#ifdef vccast
#define thresh vccast(thresh)
#define PIo4   vccast(PIo4)
#define PIo2   vccast(PIo2)
#define PI3o4  vccast(PI3o4)
#define PI     vccast(PI)
#define PI2    vccast(PI2)
#endif

#ifdef national
static long fmaxx[] = { 0xffffffff, 0x7fefffff };
#define fmax (*(double*)fmaxx)
#endif /* national */

static const double
    zero   = 0,
    one    = 1,
    negone = -1,
    half   = 1.0/2.0,
    small  = 1E-10, /* 1+small**2 == 1; better values for small:
                     *   small = 1.5E-9  for VAX D
                     *         = 1.2E-8  for IEEE Double
                     *         = 2.8E-10 for IEEE Extended
                     */
    big    = 1E20;  /* big := 1/(small**2) */

/* sin__S(x*x) ... re-implemented as a macro
 * DOUBLE PRECISION (VAX D format 56 bits, IEEE DOUBLE 53 BITS)
 * STATIC KERNEL FUNCTION OF SIN(X), COS(X), AND TAN(X)
 * CODED IN C BY K.C. NG, 1/21/85;
 * REVISED BY K.C. NG on 8/13/85.
 *
 *          sin(x*k) - x
 * RETURN  -------------- on [-PI/4,PI/4], where k=pi/PI, PI is the
 *               x
 * rounded value of pi in machine precision:
 *
 *	Decimal:
 *	    pi = 3.141592653589793 23846264338327 .....
 *	    53 bits PI = 3.141592653589793 115997963 ..... ,
 *	    56 bits PI = 3.141592653589793 227020265 ..... ,
 *
 *	Hexadecimal:
 *	    pi = 3.243F6A8885A308D313198A2E....
 *	    53 bits PI = 3.243F6A8885A30  = 2 * 1.921FB54442D18
 *	    56 bits PI = 3.243F6A8885A308 = 4 * .C90FDAA22168C2
 *
 * Method:
 *	1. Let z=x*x. Create a polynomial approximation to
 *	    (sin(k*x)-x)/x = z*(S0 + S1*z^1 + ... + S5*z^5).
 *	Then
 *	    sin__S(x*x) = z*(S0 + S1*z^1 + ... + S5*z^5)
 *
 *	The coefficient S's are obtained by a special Remez algorithm.
 *
 * Accuracy:
 *	In the absence of rounding error, the approximation has absolute error
 *	less than 2**(-61.11) for VAX D FORMAT, 2**(-57.45) for IEEE DOUBLE.
 *
 * Constants:
 *	The hexadecimal values are the intended ones for the following
 *	constants. The decimal values may be used, provided that the compiler
 *	will convert from decimal to binary accurately enough to produce the
 *	hexadecimal values shown.
 */

vc(S0, -1.6666666666666646660E-1  ,aaaa,bf2a,aa71,aaaa,  -2, -.AAAAAAAAAAAA71)
vc(S1,  8.3333333333297230413E-3  ,8888,3d08,477f,8888,  -6,  .8888888888477F)
vc(S2, -1.9841269838362403710E-4  ,0d00,ba50,1057,cf8a, -12, -.D00D00CF8A1057)
vc(S3,  2.7557318019967078930E-6  ,ef1c,3738,bedc,a326, -18,  .B8EF1CA326BEDC)
vc(S4, -2.5051841873876551398E-8  ,3195,b3d7,e1d3,374c, -25, -.D73195374CE1D3)
vc(S5,  1.6028995389845827653E-10 ,3d9c,3030,cccc,6d26, -32,  .B03D9C6D26CCCC)
vc(S6, -6.2723499671769283121E-13 ,8d0b,ac30,ea82,7561, -40, -.B08D0B7561EA82)

ic(S0, -1.6666666666666463126E-1  ,  -3, -1.555555555550C)
ic(S1,  8.3333333332992771264E-3  ,  -7,  1.111111110C461)
ic(S2, -1.9841269816180999116E-4  , -13, -1.A01A019746345)
ic(S3,  2.7557309793219876880E-6  , -19,  1.71DE3209CDCD9)
ic(S4, -2.5050225177523807003E-8  , -26, -1.AE5C0E319A4EF)
ic(S5,  1.5868926979889205164E-10 , -33,  1.5CF61DF672B13)

#ifdef vccast
#define S0 vccast(S0)
#define S1 vccast(S1)
#define S2 vccast(S2)
#define S3 vccast(S3)
#define S4 vccast(S4)
#define S5 vccast(S5)
#define S6 vccast(S6)
#endif

#if defined(vax)||defined(tahoe)
#  define sin__S(z) (z*(S0+z*(S1+z*(S2+z*(S3+z*(S4+z*(S5+z*S6)))))))
#else  /* defined(vax)||defined(tahoe) */
#  define sin__S(z) (z*(S0+z*(S1+z*(S2+z*(S3+z*(S4+z*S5))))))
#endif /* defined(vax)||defined(tahoe) */

/* cos__C(x*x) ... re-implemented as a macro
 * DOUBLE PRECISION (VAX D FORMAT 56 BITS, IEEE DOUBLE 53 BITS)
 * STATIC KERNEL FUNCTION OF SIN(X), COS(X), AND TAN(X)
 * CODED IN C BY K.C. NG, 1/21/85;
 * REVISED BY K.C. NG on 8/13/85.
 *
 *                          x*x
 * RETURN  cos(k*x) - 1 + ------ on [-PI/4,PI/4], where k = pi/PI,
 *                           2
 * PI is the rounded value of pi in machine precision:
 *
 *	Decimal:
 *	    pi = 3.141592653589793 23846264338327 .....
 *	    53 bits PI = 3.141592653589793 115997963 ..... ,
 *	    56 bits PI = 3.141592653589793 227020265 ..... ,
 *
 *	Hexadecimal:
 *	    pi = 3.243F6A8885A308D313198A2E....
 *	    53 bits PI = 3.243F6A8885A30  = 2 * 1.921FB54442D18
 *	    56 bits PI = 3.243F6A8885A308 = 4 * .C90FDAA22168C2
 *
 * Method:
 *	1. Let z=x*x. Create a polynomial approximation to
 *	    cos(k*x)-1+z/2 = z*z*(C0 + C1*z^1 + ... + C5*z^5)
 *	then
 *	    cos__C(z) = z*z*(C0 + C1*z^1 + ... + C5*z^5)
 *
 *	The coefficient C's are obtained by a special Remez algorithm.
 *
 * Accuracy:
 *	In the absence of rounding error, the approximation has absolute error
 *	less than 2**(-64) for VAX D FORMAT, 2**(-58.3) for IEEE DOUBLE.
 *
 * Constants:
 *	The hexadecimal values are the intended ones for the following
 *	constants. The decimal values may be used, provided that the compiler
 *	will convert from decimal to binary accurately enough to produce the
 *	hexadecimal values shown.
 */

vc(C0,  4.1666666666666504759E-2  ,aaaa,3e2a,a9f0,aaaa,  -4,  .AAAAAAAAAAA9F0)
vc(C1, -1.3888888888865302059E-3  ,0b60,bbb6,0cca,b60a,  -9, -.B60B60B60A0CCA)
vc(C2,  2.4801587285601038265E-5  ,0d00,38d0,098f,cdcd, -15,  .D00D00CDCD098F)
vc(C3, -2.7557313470902390219E-7  ,f27b,b593,e805,b593, -21, -.93F27BB593E805)
vc(C4,  2.0875623401082232009E-9  ,74c8,320f,3ff0,fa1e, -28,  .8F74C8FA1E3FF0)
vc(C5, -1.1355178117642986178E-11 ,c32d,ae47,5a63,0a5c, -36, -.C7C32D0A5C5A63)

ic(C0,  4.1666666666666504759E-2  ,  -5,  1.555555555553E)
ic(C1, -1.3888888888865301516E-3  , -10, -1.6C16C16C14199)
ic(C2,  2.4801587269650015769E-5  , -16,  1.A01A01971CAEB)
ic(C3, -2.7557304623183959811E-7  , -22, -1.27E4F1314AD1A)
ic(C4,  2.0873958177697780076E-9  , -29,  1.1EE3B60DDDC8C)
ic(C5, -1.1250289076471311557E-11 , -37, -1.8BD5986B2A52E)

#ifdef vccast
#define C0 vccast(C0)
#define C1 vccast(C1)
#define C2 vccast(C2)
#define C3 vccast(C3)
#define C4 vccast(C4)
#define C5 vccast(C5)
#endif

#define cos__C(z) (z*z*(C0+z*(C1+z*(C2+z*(C3+z*(C4+z*C5))))))
----------------------------------------------------------------------------
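
To connect the kernel to the user-visible function: per the block
comment above, sin__S(x*x) approximates (sin(k*x)-x)/x on [-PI/4,PI/4],
so after argument reduction a caller reconstructs the sine roughly as
follows. A sketch only; the real sin()/cos()/tan() first reduce into
that interval and pick the sin or cos kernel by octant.

/* Sketch: turn a reduced argument x in [-PI/4,PI/4] back into sin(x)
   using the kernel macro above.  Argument reduction omitted. */
static double sin_kernel(double x) {
    double z = x * x;
    return x + x * sin__S(z);   /* sin__S(z) ~= (sin(k*x)-x)/x */
}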
Nov 14 '05 #33
In article <40***************@yahoo.com>,
CBFalconer <cb********@yahoo.com> wrote:
At any rate, the ideal system produces instantaneous results with
no range limitations and the full accuracy of the arithmetic
representation. So the designer should pick a compromise that
will suffice for most anticipated users. Here there be dragons.


Alternatively, a choice of functions that use different compromises
between speed and precision. Let the user of the library decide what is
more important.
Nov 14 '05 #34
In article <BN********************@nwrddc01.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote:
We aim to please more exacting users, however, to the point that
they often don't notice the pains we've taken to stay out of their
way.


Just a thought: High quality arithmetic especially benefits the clueless
user, who would have no idea why things go wrong if they go wrong and
who would have no chance to fix things if they go wrong.
Nov 14 '05 #35
Christian Bau wrote:
CBFalconer <cb********@yahoo.com> wrote:
At any rate, the ideal system produces instantaneous results with
no range limitations and the full accuracy of the arithmetic
representation. So the designer should pick a compromise that
will suffice for most anticipated users. Here there be dragons.


Alternatively, a choice of functions that use different compromises
between speed and precision. Let the user of the library decide what
is more important.


That is not as feasible as changing complete libraries. For one
thing, the language specifies the names involved. For another, an
efficient library is highly interlinked. A routine to accept an
argument and a pointer to a structure with a list of coefficients
for polynomial expansion is likely to be heavily used. So a mix
and match strategy is very likely to create virtually insoluble
bugs. The need for -lm in linking makes this easy, by simply
replacing the library proper at the appropriate times.
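
A sketch of the kind of shared internal routine described above (all
names hypothetical): many public entry points funnel through one
coefficient-table evaluator, which is why swapping in a single
replacement function risks mixing incompatible pieces.

/* Hypothetical internal helper shared across a math library: one
   polynomial evaluator driven by per-function coefficient tables.
   Replace a public function without its matching tables and helper,
   and the link may silently combine incompatible versions. */
struct poly {
    int           n;     /* number of coefficients */
    const double *coef;  /* coef[0] + coef[1]*x + ... */
};

static double poly_eval(double x, const struct poly *p) {
    double r = p->coef[p->n - 1];
    int i;
    for (i = p->n - 2; i >= 0; i--)
        r = r * x + p->coef[i];
    return r;
}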

--
fix (vb.): 1. to paper over, obscure, hide from public view; 2.
to work around, in a way that produces unintended consequences
that are worse than the original problem. Usage: "Windows ME
fixes many of the shortcomings of Windows 98 SE". - Hutchison
Nov 14 '05 #36
"Christian Bau" <ch***********@ cbau.freeserve. co.uk> wrote in message
news:ch******** *************** **********@slb-newsm1.svr.pol. co.uk...
In article <W1************ ******@nwrddc02 .gnilink.net>,
"P.J. Plauger" <pj*@dinkumware .com> wrote:
I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic]. "The Standard C Library"
represents an intermediate viewpoint -- it stays exact
until about half the fraction bits go away.

I still haven't decided how hard we'll try to preserve
precision for large arguments in the next library we ship.
Obviously you don't get _exact_ values for any argument except sin (0.0)
:-).


And +1, and -1, of course. Yes, IEC 60559 requires that the
floating-point inexact status bit be set for any value that
can't be precisely represented, and we do that. What I
meant by "exact" above was shorthand for "the nearest
internal representation to the actual function value, taking
the argument(s) as exact."
I guess you mean "the exact value, correctly rounded to the nearest
valid floating point number", since libraries that round to one of the
two nearest valid floating point numbers are already available?


Several libraries do give the best rounded result for sin(x), for
small enough x. One or two are perverse enough to do proper
argument reduction over a huge range, perhaps even the full
range of values. Several more sensible libraries compute a very
good sin(x) over a large but not unbounded range. I've yet to
decide which of these choices best serves our customers.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #37
-wombat- <sc****@cs.ucla.edu> wrote:
suri wrote:
No, I knew that FAQ.
My question is: where is the sine function implementation?
Since you say it's a native instruction, does that mean there is no
ANSI C code implementing sine?


Some platforms have hardware instructions that compute sin() and the
compiler will emit them, bypassing the libm library's implementation. This
is pretty much true for ia32 and ia64.


I'm pretty sure it's not true of IA64, and it's not really true of AMD64
either. (Though I personally don't agree with the choice in either
case.)

As to the original question:

http://www.research.scea.com/gdc2003...functions.html

shows some techniques that I think are close to the state of the art.
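
For reference, the ia32 hardware path mentioned above is the x87 fsin
instruction. A minimal, nonportable sketch of invoking it directly
(GCC-style inline assembly); note that fsin only accepts arguments of
magnitude below 2^63 and reduces against a 66-bit internal pi, so it is
nowhere near a correctly rounded sine:

/* x86-only sketch: evaluate sine with the x87 fsin instruction.
   The "=t" constraint binds the result to the top of the x87 stack. */
static double x87_sin(double x) {
    double r;
    __asm__ ("fsin" : "=t" (r) : "0" (x));
    return r;
}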

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
Nov 14 '05 #38
In article <DK*****************@nwrddc03.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote:
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*************************@slb-newsm1.svr.pol.co.uk...
Obviously you don't get _exact_ values for any argument except sin (0.0)
:-).
And +1, and -1, of course.


Of course not. sin (+1) and sin (-1) are irrational, and sin (x) is not
+/- 1 for any rational x.
What I
meant by "exact" above was shorthand for "the nearest
internal representation to the actual function value, taking
the argument(s) as exact."
That would be "the exact value, correctly rounded to the nearest valid
floating point number", right?
Several libraries do give the best rounded result for sin(x), for
small enough x.
I think that is actually not too difficult if you write code
specifically for one floating point implementation: Do the calculation
in double double precision, then round to double. Estimate the maximum
rounding error which should be only a tiny bit more than 1/2 ulp. Then
some non-trivial maths should give you a small set of worst-case numbers
(those values of x such that sin (x) is almost exactly between two
neighbouring floating point numbers).

If you are very careful to keep the rounding errors small then you
should have very few values x for which you cannot prove that you get
the correct result. Check those values by hand (even if you cannot
_prove_ that sin (x) is rounded correctly, about half the time or more
it should happen anyway). Play around with coefficients to get the
correct result more often, and write an explicit test for the remaining
cases.
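
The "double double" trick rests on the classic error-free
transformations; a minimal sketch (Knuth's two-sum and the
Dekker/Veltkamp product, names mine):

/* Error-free transformations underlying double-double arithmetic:
   each returns a rounded result plus the exact rounding error. */
static void two_sum(double a, double b, double *s, double *e) {
    double t;
    *s = a + b;
    t  = *s - a;
    *e = (a - (*s - t)) + (b - t);   /* exact error of a+b */
}

static void two_prod(double a, double b, double *p, double *e) {
    /* Dekker/Veltkamp split at 2^27 for IEEE double. */
    const double split = 134217729.0;   /* 2^27 + 1 */
    double ca = split * a, ah = ca - (ca - a), al = a - ah;
    double cb = split * b, bh = cb - (cb - b), bl = b - bh;
    *p = a * b;
    *e = ((ah * bh - *p) + ah * bl + al * bh) + al * bl;
}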
One or two are perverse enough to do proper
argument reduction over a huge range, perhaps even the full
range of values.
fdlibm does that. And an implementation that is suitable for Java is
forced to do this by the Java standard.
Several more sensible libraries compute a very
good sin(x) over a large but not unbounded range. I've yet to
decide which of these choices best serves our customers.


Some people would just pick which choice produces the fastest benchmark
results :-(

An interesting mathematical problem: Find approximations for sine and
cosine, such that

s*s + c*c <= 1.0

is guaranteed when evaluated in floating-point arithmetic.
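
A quick empirical probe of that closing problem; whether any violations
turn up depends entirely on the library under test:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long i;
    srand(12345);
    for (i = 0; i < 1000000; i++) {
        double x = (rand() / (double)RAND_MAX) * 6.283185307179586;
        double s = sin(x), c = cos(x);
        /* fma() removes one of the two roundings in the check
           (c*c is still rounded once). */
        if (fma(s, s, c * c) > 1.0)
            printf("violation at x = %.17g\n", x);
    }
    return 0;
}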
Nov 14 '05 #39
In article <40***************@yahoo.com> cb********@worldnet.att.net writes:
....
I meant something that was automatically associated with each
value, and revised by the process of calculation. For example,
multiplying something with 3 digit precision by something with two
digit precision can have no better than 2 digit precision,
regardless of actual significant bits generated. Subtraction of
similar sized numbers can produce violent reductions. This is the
sort of thing one can build into a custom software FP
representation, rather than doing some sort of worst case analysis
up front.


This would mean that almost all results in numerical algebra would be
without value. E.g. calculating the solution of a system of 40 linear
equations might easily result in something without value in this measure.
Moreover, you have to be extremely careful here. I once designed an
arcsine routine where subtraction of equal-sized numbers was commonplace;
however, in the ultimate result the introduced inaccuracies did not play
a role, as careful analysis proved.
No, they're orthogonal to that issue, as I keep trying to explain
to you. We do the best we can with the function arguments given.
It is the inescapable responsibility of the programmer to
understand the effect of uncertainties in those argument values on
the uncertainties in the function results.


Where you assume that those arguments are exact. In reality, if
they come from almost any physical entity, they express some form
of measurement range. The only exactness is in the individual
terms of a Taylor series, for example. If you compute something
to 3 ulp, a 1/2 bit error in the argument value is practically
meaningless and can usually be ignored, barring a nearby
singularity. All of which leads to the same thing.


Wrong. If you compute (for instance) the sine with an accuracy of 3
ulp, that means that you expect exact values. A 1/2 bit error in the
argument might give a huge change in the result.
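
Dik's point is easy to see numerically: near x = 1e16 one ulp of the
argument is about 2, a sizable fraction of a whole period, so the sine
evaluated at adjacent representable numbers differs completely. A small
sketch (illustrative values only):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x  = 1.0e16;
    double x2 = nextafter(x, INFINITY);   /* x plus one ulp (= 2 here) */
    /* Each sine is computed accurately for its own argument; the
       wild difference between them is pure argument sensitivity. */
    printf("sin(x)  = % .17g\n", sin(x));
    printf("sin(x') = % .17g  (x' - x = %g)\n", sin(x2), x2 - x);
    return 0;
}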
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #40
