
Sine code for ANSI C

Hello,
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file. I went to the math
directory and found math.h, which I guess needs to be included for the
sine function. But which .c file implements the sine/cosine and the
other trig functions?
Thanks
Nov 14 '05
In article <xL*******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:

SNIP....
The other side of the coin is knowing where to stop once the
"worthwhile " police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."


It seems to me that a reasonable cutoff point would be where
the difference between consecutive floating point numbers is greater
than two pi. At that point you can't even determine the *sign* of the
correct answer, let alone determine any value that is justifiable.
The only thing that you can justify is a claim that the answer lies
somewhere between -1.0 and 1.0.
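
For illustration, here is a minimal sketch of where that cutoff would
land for IEEE double (assuming only a C99 <math.h> with nextafter):

#include <math.h>
#include <stdio.h>

/* Find roughly where the gap between consecutive doubles first
   exceeds 2*pi, i.e. where the proposed cutoff would sit. */
int main(void)
{
    double x = 1.0;
    while (nextafter(x, HUGE_VAL) - x <= 2.0 * acos(-1.0))
        x *= 2.0;
    printf("ulp spacing exceeds 2*pi at about %g\n", x);  /* ~3.6e16 */
    return 0;
}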
Nov 14 '05 #91
"John Cochran" <jd*@smof.fiawo l.org> wrote in message
news:c8******** **@smof.fiawol. org...
In article <xL************ *******@nwrddc0 3.gnilink.net>,
P.J. Plauger <pj*@dinkumware .com> wrote:

SNIP....
The other side of the coin is knowing where to stop once the
"worthwhile " police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."


It seems to me that a reasonable cutoff point would be where
the difference between consecutive floating point numbers is greater
than two pi. At that point you can't even determine the *sign* of the
correct answer, let alone determine any value that is justifiable.
The only thing that you can justify is a claim that the answer lies
somewhere between -1.0 and 1.0.


Yes, that's a reasonable cutoff point, on the face of it. Just don't
look too closely. You're falling prey to the same error in logic that
traps most people who first study this problem -- assuming that there
must be some intrinsic error, say 1/2 ulp, in the argument. If we're
going to apply that criterion to the library uniformly, then rounding
1e38 is equally suspect. (How happy would you be if round occasionally
produced a fatal error? Particularly when the answer is obvious and
easy to compute.)

But if you assume the argument is exact, as *all* library functions
really must do, then your statement is incorrect. There *is* a well
defined angle corresponding to *every* finite floating-point argument.
You (or I, to be specific) may not like the amount of work required
to compute it accurately, but the value is known and well defined.

If, OTOH, you want to let the library vendors off the hook whenever
a function is hard to compute, I've got a little list. And it doesn't
stop at sin/cos/tan.

And if, OTOOH, you want to let the library vendors off the hook
whenever a typical programmer probably doesn't know what he's doing,
*boy* do I have a list.

The trick with standards, as with library design, is to have a
rational framework that's uniformly applied. The sine function
may illustrate some of the nastiest consequences of carrying the
current framework to its logical conclusion, but I assure you
that there are plenty of other monsters lurking out there. Any
change you propose to the framework just gives you a different
set to battle.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #92
In article <c8**********@smof.fiawol.org>,
jd*@smof.fiawol.org (John Cochran) wrote:
In article <xL*******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:

SNIP....
The other side of the coin is knowing where to stop once the
"worthwhile " police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."


It seems to me that a reasonable cutoff point would be where
the difference between consecutive floating point numbers is greater
than two pi. At that point you can't even determine the *sign* of the
correct answer, let alone determine any value that is justifiable.
The only thing that you can justify is a claim that the answer lies
somewhere between -1.0 and 1.0.


Well, you actually _can_ find the correct answer quite well. A value of
type double represents a single real number. Of course we all know that
if I assign x = a + b; then usually x is _not_ equal to the mathematical
sum of a and b, and given only x I might not draw any useful conclusions
about sin (a + b). However, sin (x) can still be calculated quite well.
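
To make the distinction concrete, here is a small example (nothing
assumed beyond the standard <math.h>): the stored x is one exact
rational number, even though it no longer equals the mathematical sum.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.0e16, b = 0.1;
    double x = a + b;   /* rounds: x is NOT the mathematical a+b */
    /* Given only x you cannot recover a+b, but x itself denotes
       one exact rational number, and sin() of *that* number is
       perfectly well defined. */
    printf("x      = %.17g\n", x);
    printf("sin(x) = %.17g\n", sin(x));
    return 0;
}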
Nov 14 '05 #93
"P.J. Plauger" <pj*@dinkumware .com> writes:

|> I agree that it's a Quality of Implementation issue just how fast a
|> library function supplies nonsense when called with nonsense
|> arguments. But I've yet to hear an objective criterion for
|> determining how much of an argument is garbage. Absent that, the
|> best way I know for library writers to satisfy customers is to
|> assume that every input value is exact, and to produce the closest
|> possible representation to the nearest internal representation of
|> the corresponding function value.

Maybe I'm just being naïve, but I always thought that it was a
necessary quality of a correct implementation that it give correct
results for all legal input. And that it wasn't the job of library
implementers to decide what was or was not reasonable input for my
application.

--
James Kanze
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34
Nov 14 '05 #94
P.J. Plauger wrote:
"Dr Chaos" <mb************ ****@NOSPAMyaho o.com> wrote in message
news:sl******** *************** ********@lyapun ov.ucsd.edu...

On Fri, 14 May 2004 21:23:51 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.


Then that's a likely thought-bug in the C Standard.

Likely, but it's there, and some of us have to live with it.

>>Consider someone doing a single precision sine. Most
>>likely they use single precision instead of double
>>because they don't need so much accuracy and hope that
>>the result will be generated faster.

>Most likely.

>What does this tell the designer of a sine function about
>where it's okay to stop delivering accurate results?

When they use a single precision function they expect
less accurate answers than a double precision function.

No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.


The design of the professional math libraries is not the issue; it's
whether the effort is worthwhile, as opposed to accommodating likely
poorly thought-out algorithms by the user.

The design of professional math libraries *is* the issue. Until
such time as standards quantify what calculations are "worthwhile"
and what merely accommodate poorly thought out algorithms, we
have an obligation to assume that whatever is specified might
be considered worthwhile to some serious users.

The other side of the coin is knowing where to stop once the
"worthwhile " police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."

I think accumulation of rotations is probably best done
with complex multiplication.

And why do you think this can be done with any more accuracy,
or precision, than the techniques cited (and sneered at) so far
for generating large angles?


P.J.
I tend to come down on your side on these things (except casting
malloc, maybe). I am not a mathematician but am very interested in
your take on the following floating point issues..

1. Accuracy vs Precision. #define Pi 3.1416 is precise to five
digits and accurate within its precision. If I do something like..

double Pi2 = Pi * 2.0;

...the constant 2.0 is accurate and precise to 16 digits. The result
of the multiplication is accurate to only five digits while it is
precise to 16. Does this make sense?
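
For instance, as a quick check (nothing assumed beyond <stdio.h>; the
reference constant is just 2*pi written out to double precision):

#include <stdio.h>

#define Pi 3.1416   /* precise to five digits */

int main(void)
{
    double Pi2 = Pi * 2.0;
    double ref = 6.283185307179586;   /* 2*pi to double precision */
    printf("Pi2 = %.16f\n", Pi2);   /* carries 16 digits of precision */
    printf("ref = %.16f\n", ref);   /* ...but they agree to only ~5  */
    return 0;
}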

2. Large Angles. The circle is 360 degrees or '2 pi radians'. Why is
something like..

double r = 52147.3, s;
s = sin(fmod(r,2*PI));

...not the solution for large angle argument reduction?

Keep up the good work.
--
Joe Wright mailto:jo********@comcast.net
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Nov 14 '05 #95
Joe Wright wrote:
.... snip ...
2. Large Angles. The circle is 360 degrees or '2 pi radians'.
Why is something like..

double r = 52147.3, s;
s = sin(fmod(r,2*PI));

..not the solution for large angle argument reduction?


That depends highly on how you compute the fmod. Say you compute
r/(2*PI), truncate it to an integer, multiply by (2*PI), and
subtract that from r. Now you have the difference of two
comparable magnitudes, with attendant loss of significant bits.

Compare with methods of computing sigma f(n) for n = 1 ....
infinity. If you start with n=1 (using the normally available
floating point system) you will end up with something quite
divergent from the true answer, regardless of whether the series
actually converges or not. A reasonably accurate computation
requires starting with the smallest terms.
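
As a concrete illustration (plain C, float on purpose so the effect
shows quickly): summing 1/n^2 largest-first stalls once the terms fall
below one ulp of the running sum, while summing smallest-first does
noticeably better. The true limit is pi^2/6 = 1.6449340668...

#include <stdio.h>

int main(void)
{
    float fwd = 0.0f, rev = 0.0f;
    long n;
    for (n = 1; n <= 1000000; n++)      /* largest terms first */
        fwd += 1.0f / ((float)n * (float)n);
    for (n = 1000000; n >= 1; n--)      /* smallest terms first */
        rev += 1.0f / ((float)n * (float)n);
    printf("forward : %.7f\n", fwd);
    printf("backward: %.7f\n", rev);
    return 0;
}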

--
"I'm a war president. I make decisions here in the Oval Office
in foreign policy matters with war on my mind." - Bush.
"Churchill and Bush can both be considered wartime leaders, just
as Secretariat and Mr Ed were both horses." - James Rhodes.
Nov 14 '05 #96
In article <c8**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:

....
And does it? Last time I checked, mathematics define the sine as having
a real argument and the C programming language provides zilch support for
real numbers.


Yup.


Yup what?!? Please elaborate.


I would have thought that was simple. Yes, I agree with that paragraph.
> Not in mathematical applications, where the argument to the sine
> function can very well be exact.

Please elaborate, again, with concrete examples.


You want a concrete example; I just do think that such examples are
possible.


This is not good enough.


I'm so sorry.
"mathematic al applications" get their input data from? How do they
handle the *incorrect* (from the mathematics POV) result of calling sine
for pretty much *any* argument except 0?


Pretty much as in every such case. Careful error analysis.


With the exception of numerical analysis, mathematical results are exact,
by definition.


You are forgetting a few fields: numerical algebra, computational number
theory, statistics... But in some cases (especially computational number
theory) final results can be exact while intermediate results are not.
For instance in a project I was involved in, we have shown that the first
1,200,000,000 non-trivial zeros of the Riemann zeta function have real
part 1/2. However, none of the calculated zeros was exact. (Moreover,
the calculation involved the sine function. Moreover, we had to get
reasonable precision, as it involved separating places where the sign
of the function changes. %)
---
% If you are interested, there are well-known formulas that indicate in
which region the n-th non-trivial zero resides. However, these regions
overlap. But using this you can set up algorithms that will locate
groups of zeros. The problem is that these zeros will get arbitrarily
close to each other, so using floating point on the machine we did the
runs on (a CDC Cyber 205) the best separation we could get was 10**(-13).
This was not enough, so sometimes we had to resort to double precision.
However, the argument was *exact*. A precise floating-point (hence
rational) number.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #97
"Joe Wright" <jo********@com cast.net> wrote in message
news:Q5******** ************@co mcast.com...
I tend to come down on your side on these things (except casting
malloc, maybe). I am not a mathematician but am very interested in
your take on the following floating point issues..

1. Accuracy vs Precision. #define Pi 3.1416 is precise to five
digits and accurate within its precision. If I do something like..

double Pi2 = Pi * 2.0;

..the constant 2.0 is accurate and precise to 16 digits. The result
of the multiplication is accurate to only five digits while it is
precise to 16. Does this make sense?
Yes.
2. Large Angles. The circle is 360 degrees or '2 pi radians'. Why is
something like..

double r = 52147.3, s;
s = sin(fmod(r,2*PI));

..not the solution for large angle argument reduction?
People once thought it was, or should be. Indeed, that was one of the
arguments for adding the rather finicky fmod to IEEE floating point
and eventually the C Standard. But if you think about it hard enough
and long enough -- took me an afternoon and several sheets of paper --
you realize that it doesn't cut it. You effectively have to keep
subtracting 2*pi from your argument r until it's less than 2*pi.
fmod does this by subtracting the various multiples 2*pi*2^n. If
*any one* of them does not have nearly 16 good fraction digits, as
well as all the digits it needs to the left of the decimal point, it's
going to mess up the whole set of subtractions. So if you want to
reduce numbers as large as 10^38, you have to represent pi to about
log10(10^(38+16)) or 54 digits. For 113-bit IEEE long double, you
need well over 4000 digits.
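
You can watch it go sour without any theory, assuming the host
library's sin() itself reduces carefully (the better ones do); the two
columns drift apart as the argument grows:

#include <math.h>
#include <stdio.h>

/* Naive fmod reduction: fmod itself is exact, but it reduces by 2*pi
   *rounded to double*, so the reduced angle is off by roughly
   (r/2pi) * 4.4e-16 -- worse and worse as r grows. */
int main(void)
{
    const double twopi = 2.0 * acos(-1.0);
    double r;
    for (r = 52147.3; r < 1.0e16; r *= 1.0e4)
        printf("r = %-12g fmod way = %+.17f  sin(r) = %+.17f\n",
               r, sin(fmod(r, twopi)), sin(r));
    return 0;
}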

We've developed an arbitrary-precision package that represents
numbers as arrays of floating-point values, each of which uses only
half its fraction bits. So we can do adjustable precision argument
reduction fairly rapidly. Still takes a lot of storage to represent
the worst case, and a bit more logic than I wish we had to use, but
it does the job. Not only that, we need the same sort of thing
to handle several other difficult math functions, though with
nowhere near as much precision, of course. So it's not like we
indulge in heroics just for sin/cos/tan.
Keep up the good work.


Thanks.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #98
"P.J. Plauger" wrote:
.... snip ...
We've developed an arbitrary-precision package that represents
numbers as arrays of floating-point values, each of which uses only
half its fraction bits. So we can do adjustable precision argument
reduction fairly rapidly. Still takes a lot of storage to represent
the worst case, and a bit more logic than I wish we had to use, but
it does the job. Not only that, we need the same sort of thing
to handle several other difficult math functions, though with
nowhere near as much precision, of course. So it's not like we
indulge in heroics just for sin/cos/tan.


It seems to me that the reduction could be fairly rapid if an
estimate is formed by normal division, then broken up into single-bit
binary portions (so that multiples of PI do not add non-zero
significant bits) to do the actual reduction. Not worked out at
all, just the glimmer of a method. The idea is to remove the
leftmost digits of the original argument first.
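
Something in that spirit is the classic split-constant (Cody-Waite
style) reduction: write 2*pi as a short-mantissa part plus a small
correction, so k times the high part is exact and the big cancellation
never happens. A minimal sketch, only trustworthy while k stays modest:

#include <math.h>
#include <stdio.h>

/* 2*pi = hi + lo: hi (6.28125 = 201/32) has only 8 mantissa bits,
   so k*hi is exact for integer k up to about 2^45.  Subtracting
   k*hi first and k*lo second avoids one large cancellation. */
static double reduce_twopi(double x)
{
    static const double hi = 6.28125;
    static const double lo = 1.9353071795864769e-3;  /* ~ 2*pi - hi */
    double k = floor(x / 6.283185307179586);
    /* k off by one near a multiple of 2*pi just shifts the result
       by one period -- harmless for sin(). */
    return (x - k * hi) - k * lo;
}

int main(void)
{
    double r = 52147.3;
    printf("naive fmod: %.17f\n", sin(fmod(r, 2.0 * acos(-1.0))));
    printf("split     : %.17f\n", sin(reduce_twopi(r)));
    printf("library   : %.17f\n", sin(r));
    return 0;
}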

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #99
In <ch*********************************@slb-newsm1.svr.pol.co.uk> Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
Well, you actually _can_ find the correct answer quite well. A value of
type double represents a single real number.
But, when used in a real number context (as opposed to an integer number
context -- floating point can be used in both contexts) it stands for a
whole subset of the real numbers set. The real value exactly represented
is no more relevant than any other value from that set.
Of course we all know that
if I assign x = a + b; then usually x is _not_ equal to the mathematical
sum of a and b, and given only x I might not draw any useful conclusions
about sin (a + b). However, sin (x) can still be calculated quite well.


The point is not whether it can be calculated, but rather how much
precision the calculation should produce. Does it make sense to compute
sin(DBL_MAX) with 53-bit precision, ignoring the fact that DBL_MAX
stands for an interval so large as to make this function call completely
devoid of any meaning?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #100
