
Sine code for ANSI C

Hello,
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file.
I went to the math directory and found math.h, which I guess needs to be
included for the sine function. But which .c file implements the
sine/cosine and other trig functions?
Thanks
Nov 14 '05
"glen herrmannsfeldt" <ga*@ugcs.calte ch.edu> wrote in message
news:AzHoc.3903 8$xw3.2533208@a ttbi_s04...
CBFalconer wrote:
"P.J. Plauger" wrote:

... snip ...

I agree that it's a Quality of Implementation issue just how fast
a library function supplies nonsense when called with nonsense
arguments. But I've yet to hear an objective criterion for
determining how much of an argument is garbage. Absent that, the
best way I know for library writers to satisfy customers is to
assume that every input value is exact, and to produce the
closest possible representation to the nearest internal
representation of the corresponding function value.
(snip)
I am gradually coming around to your point of view here, so I am
rewording it. To me, the argument is that the input argument
represents a range of values (absent contrary information) with a
known uncertainty. The most probable actual value is the exact
value. Since d(sinx)/dx is strictly bounded, the resultant
function is never going to fly off in all directions, and will not
become totally meaningless unless that argument uncertainty is in
the order of PI.


You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5 ulp.


Another conjecture. And what if it wasn't? Then you'd have no
excuse other than to take the argument at face value and do
the best you can with it. I say again -- it is up to the programmer
to do error-propagation analysis and decide how much precision to
trust in the final answer. The library writer doesn't have the
luxury of giving back garbage bits because they *might* not be
meaningful.

But I kinda like the freedom you offer us. We could write *all*
library functions on the basis that the return value can be
anywhere between f(x - 1/2 ulp) and f(x + 1/2 ulp) -- speaking
symbolically, of course. Boy would that make it easier to compute
exponentials, powers, gamma functions, etc. But I bet you wouldn't
like the results.
We can, at the cost of some computational complexity, assume the
input argument is exact and reduce it to a principal value with an
accuracy governed by the precision of PI. There is no essential
difference between this operation and forming a real value for
1/3. The critical thing there is the precision of our knowledge
of 3.


Which value of pi do you use? Rounded to the precision of
the argument? (As the user might have used in generating it).
Much more accurate than the argument? (Unlikely used by
the user.)


Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #71
"P.J. Plauger" wrote:
... snip ...
Whatever value pi may have when truncated to machine precision
is useless for reducing arguments properly. The error shows up
after just a few radians. It's hard to defend a conjecture that
the sine of 5*pi is never worth computing properly.


This I have to strongly disagree with. What you have to do is
effectively extend the significant bits of the argument while
reducing it. As long as you assume the argument is exact, you can
do this from the unlimited supply of zeroes. The accuracy of 10
mod 3 is dependent on the accuracy of 3. It cannot exceed the
accuracy of either 10 or 3, but it need lose nothing in its
computation.
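
(For concreteness, a minimal sketch of that idea in C, treating the
argument as exact and reducing it against a two-part representation of
2*pi. The constants and the simple split are illustrative assumptions
only; a production libm splits pi so that the high-part product is
exact, and switches to a full multiple-precision reduction for huge
arguments.)

#include <math.h>

/* Reduce x modulo 2*pi, assuming x itself is exact.  The second
   constant carries the bits of 2*pi that the first one cannot hold,
   so the reduction is limited by our knowledge of pi rather than by
   a single rounded constant.  Sketch only, not production code. */
static const double TWO_PI_HI = 6.283185307179586;      /* nearest double to 2*pi */
static const double TWO_PI_LO = 2.449293598294706e-16;  /* approx. 2*pi - TWO_PI_HI */

double reduce_two_pi(double x)
{
    double k = nearbyint(x / TWO_PI_HI);   /* nearest multiple of 2*pi */
    /* subtract the high and low parts separately; the low part restores
       bits that subtracting a single rounded 2*pi would lose */
    return (x - k * TWO_PI_HI) - k * TWO_PI_LO;
}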

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Nov 14 '05 #72
glen herrmannsfeldt wrote:
CBFalconer wrote:
"P.J. Plauger" wrote:

... snip ...

I agree that it's a Quality of Implementation issue just how fast
a library function supplies nonsense when called with nonsense
arguments. But I've yet to hear an objective criterion for
determining how much of an argument is garbage. Absent that, the
best way I know for library writers to satisfy customers is to
assume that every input value is exact, and to produce the
closest possible representation to the nearest internal
representation of the corresponding function value.


(snip)
I am gradually coming around to your point of view here, so I am
rewording it. To me, the argument is that the input argument
represents a range of values (absent contrary information) with a
known uncertainty. The most probable actual value is the exact
value. Since d(sinx)/dx is strictly bounded, the resultant
function is never going to fly off in all directions, and will not
become totally meaningless unless that argument uncertainty is in
the order of PI.


You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5 ulp.


Consider how you compute 1e30 DIV 3 to any desired accuracy. The
result depends on the accuracy of 3 alone if you assume that 1e30
is exact. Similarly, consider the results on division of 1e30 by
0.33333. (exactly 5 3s)
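
(A quick way to see this, assuming the 1e30 is taken as exact: with a
five-digit approximation of 1/3 as the divisor, the quotient is good to
about five digits no matter how exact the dividend is. The little test
program below is only an illustration of that point.)

#include <stdio.h>

int main(void)
{
    /* If the divisor were exactly 1/3 the quotient would be 3e+30;
       with the five-digit 0.33333 it is off in the fifth digit,
       and that error comes from the divisor alone. */
    printf("%.6g\n", 1e30 / 0.33333);   /* prints about 3.00003e+30 */
    printf("%.6g\n", 1e30 * 3.0);       /* prints 3e+30 for comparison */
    return 0;
}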

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Nov 14 '05 #73
Tim Prince wrote:
Grumble wrote:
AMD64 supports x87. Thus FSIN is available. What did you mean by
"not really true of AMD64 either" ?


Possibly referring to compilers complying with ABIs disallowing
x87, or taking advantage of higher performance of SSE parallel
libraries. Use of fsin on IA64 is extremely unlikely, even though
it's still there.


I assume you meant AMD64, not IA64?

Where did you read that the AMD64 ABI disallowed x87?

http://www.x86-64.org/abi.pdf

Nov 14 '05 #74
P.J. Plauger wrote:
[Trigonometry on angles that make ordinary libraries dizzy and such]
Until the C Standard lets us off the hook beyond some given argument
magnitude, however, we have no excuse not to try.


I think that the Standard lets implementors off the hook at any and all
magnitudes, signs and, generally, values, actually, and not even _just_
for the library... from the outset! C99, 5.2.4.2.2p4:

`The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that
the accuracy is unknown.'

So if I write my implementation to always produce 42. for an expression
involving floating-point mathematics (except in the very few situations
where there are absolute requirements imposed) and document this, it is
not disqualified from full conforming hosted implementation status (not
through another (?) loophole).

I think that it'd be particularly ironic to combine this with unlimited
precision for floating-point constants.

--
++acr@,ka"
Nov 14 '05 #75
In <sl****************@ID-227112.user.uni-berlin.de> Sam Dennis <sa*@malfunction.screaming.net> writes:
P.J. Plauger wrote:
[Trigonometry on angles that make ordinary libraries dizzy and such]
Until the C Standard lets us off the hook beyond some given argument
magnitude, however, we have no excuse not to try.


I think that the Standard lets implementors off the hook at any and all
magnitudes, signs and, generally, values, actually, and not even _just_
for the library... from the outset! C99, 5.2.4.2.2p4:

`The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that
the accuracy is unknown.'

So if I write my implementation to always produce 42. for an expression
involving floating-point mathematics (except in the very few situations
where there are absolute requirements imposed) and document this, it is
not disqualified from full conforming hosted implementation status (not
through another (?) loophole).


True and irrelevant, as we're discussing what a high quality
implementation should do.

If you're looking for loopholes in the C standard, it contains one so
large as to make even a *completely* useless implementation (a two line
shell script) fully conforming.

1 The implementation shall be able to translate and execute at
^^
least one program that contains at least one instance of every
^^^^^^^^^^^^^^^ ^^
one of the following limits:

Other, strictly conforming programs, must be merely accepted (whatever
that means, the standard provides no further clues).

So consider the following "implementation " for Unix systems:

echo "Program accepted."
cp /bin/true a.out

combined with an implementation of /bin/true that exercises all the
translation limits enumerated in 5.2.4.1 and the documentation required
by the standard.

Now, every translation unit, either correct or not, will receive the one
diagnostic required by the standard, all strictly conforming programs are
"accepted" (as well as all other programs ;-) and the *one* C program
mentioned above is translated and executed.

The real question is: how many users is my implementation going to have?
Answer: as many as yours ;-) This illustrates the importance of a factor
that is not even mentioned in the C standard: the quality of
implementation.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #76
Dan Pop wrote:
Sam Dennis <sa*@malfunctio n.screaming.net > writes:
`The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that
the accuracy is unknown.'

So if I write my implementation to always produce 42. [...]
True and irrelevant, as we're discussing what a high quality
implementation should do.


Ridiculous accuracy for silly arguments is unnecessary for high quality
programs, which should accommodate libraries coded by mere mortals that
can furthermore calculate logarithms in less time than it takes for the
operator to look up the answer himself. (Admittedly, such programs can
not operate on my hypothetical implementation if they actually make use
of the floating-point facilities provided by C itself; the portable way
to get guaranteed accuracy is to use a library and its types for all of
one's non-integral calculations, written in strictly conforming C.)
If you're looking for loopholes in the C standard, it contains one so
large as to make even a *completely* useless implementation (a two line
shell script) fully conforming.

1 The implementation shall be able to translate and execute at
least one program that contains at least one instance of every
one of the following limits:
I've decided that that was probably a political decision to allow buggy
compilers to claim conformance. Specifying accuracy for floating-point
operations such that enough current implementations can conform, on the
other hand, is a genuinely hellish task.
[A conforming implementation, once documentation is attached]
echo "Program accepted."
cp /bin/true a.out
(The extension below can be ignored if you don't consider `accepted' to
require even an appropriate diagnostic from a one line file that starts
with `#error'.)

Well, obviously there's the #error thing, which means that #include has
to be handled, which means...

cat "$@"
echo "Program accepted."
cp /bin/true a.out

...and the documentation must state that #include source file searching
is not supported and that all output is diagnostics (or similar words).
But, yes, that'd be a conforming implementation; and, because any input
is `acceptable' to it, any data is a conforming program; that's no news
to you, of course.

Yes, the Standard has huge loopholes. Those two are best ignored, with
useful definitions of `conforming' implementation and programs mentally
substituted (in the latter case, by `strictly conforming program').
This illustrates the importance of a factor that is not even mentioned
in the C standard: the quality of implementation.


Oh, but it would be a perfectly good implementation in most other ways.

Actually, I've been thinking for a long time about writing this, except
that the quality of implementation, as it's normally understood, should
be variable, for conformance testing of programs.

Catching all the potential problems isn't feasible, but a useful number
is probably just about manageable.

--
++acr@,ka"
Nov 14 '05 #77
P.J. Plauger wrote:
(I wrote)
You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5 ulp.
Another conjecture. And what if it wasn't? Then you'd have no
excuse other than to take the argument at face value and do
the best you can with it. I say again -- it is up to the programmer
to do error-propagation analysis and decide how much precision to
trust in the final answer. The library writer doesn't have the
luxury of giving back garbage bits because they *might* not be
meaningful.
Consider this program fragment, as no real examples have
been presented so far:

for(x=0;x+60!=x;x+=60) printf("%20.15g\n",sin(x*A));

A should have the appropriate value so that x is
expressed in degrees. Now, consider the effect
of ordinary argument reduction, and unusually
accurate argument reduction.

If you do argument reduction using an appropriate
approximation to pi/4 to machine precision, you get
the answer expected by the programmer. If you use
a pi/4 much more accurate than machine precision, the
result will slowly diverge from the expected value.
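
(For anyone who wants to try it, here is a self-contained version of the
fragment above. The iteration cap is my addition, since the original
loop only terminates once adding 60 no longer changes x, which takes on
the order of 10^16 steps; the value of A is simply pi/180 rounded to
double.)

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* x counts degrees in steps of 60, so the mathematically exact
       answers cycle through 0 and +/-sqrt(3)/2; the point of interest
       is how far the computed values drift as x grows and argument
       reduction dominates the result. */
    const double A = 3.14159265358979323846 / 180.0;  /* degrees to radians */
    double x = 0;
    int i;
    for (i = 0; i < 100000 && x + 60 != x; i++, x += 60)
        printf("%20.15g\n", sin(x * A));
    return 0;
}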

But I kinda like the freedom you offer us. We could write *all*
library functions on the basis that the return value can be
anywhere between f(x - 1/2 ulp) and f(x + 1/2 ulp) -- speaking
symbolically, of course. Boy would that make it easier to compute
exponentials, powers, gamma functions, etc. But I bet you wouldn't
like the results.
(snip)
Which value of pi do you use? Rounded to the precision of
the argument? (As the user might have used in generating it).
Much more accurate than the argument? (Unlikely used by
the user.)

Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?


Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?

The IBM 360/91 was documented as not following the S/360
architecture because its floating point divide gave rounded
results (0.5ulp) instead of the truncated results (1ulp)
that the architecture specified.

Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.

As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.

The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
on the square root algorithm is done slightly differently
to avoid precision loss in a halve operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)

Giving a fatal error tells the user that something is
wrong and should be fixed. Supplying answers using bits
that don't really exist allows the user to believe that
useful answers are being generated, possibly wasting
much calculation and money.

Actually, the time when this happened to me, probably
when I was in ninth grade, (due to an undefined variable
if I remember right), someone carefully explained to me
why the library would refuse such a calculation. It made
sense at the time, and it still does.

-- glen

Nov 14 '05 #78
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
P.J. Plauger wrote:
Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?


Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?


Because Crays are _very_ specialised machines, which can make
assumptions that a library writer for a normal computer cannot.

I must agree with Mr. Plauger, here. If a library gives me a value with
more precision than I need or expect, I can always ignore that extra
precision. If it gives me a value with less precision than it could, as
some people seem to be advocating, I can _never_ add that precision
myself without doing the library's job for it, which should not be the
aim of a good quality library.

Richard
Nov 14 '05 #79
In <sl****************@ID-227112.user.uni-berlin.de> Sam Dennis <sa*@malfunction.screaming.net> writes:
Dan Pop wrote:
If you're looking for loopholes in the C standard, it contains one so
large as to make even a *completely* useless implementation (a two line
shell script) fully conforming.

1 The implementation shall be able to translate and execute at
least one program that contains at least one instance of every
one of the following limits:
I've decided that that was probably a political decision to allow buggy
compilers to claim conformance.


It's actually a pragmatic decision: for any given implementation, it is
possible to construct a strictly conforming program exceeding the
implementation's resources, while still staying within the translation
limits mentioned in the standard.

One year or so ago I explored the ways of avoiding this wording by
adding a few more translation limits. Then, the standard could reasonably
require the translation and execution of *any* program staying within the
translation limits.
Specifying accuracy for floating-point
operations such that enough current implementations can conform, on the
other hand, is a genuinely hellish task.


I don't think so. If the standard could specify minimal accuracy
requirements for the floating point types, it could also specify
minimal accuracy requirements for the floating point operations. I have
already debated this issue in comp.std.c and my opponents came with bogus
counterarguments (like the unfeasibility of documenting the accuracy of
all the <math.h> functions, although I was strictly and explicitly
talking about the floating point operations).
[A conforming implementation, once documentation is attached]
echo "Program accepted."
cp /bin/true a.out


(The extension below can be ignored if you don't consider `accepted' to
require even an appropriate diagnostic from a one line file that starts
with `#error'.)


Who says that the one and only correctly translated program has to
contain the #error pragma?
This illustrates the importance of a factor that is not even mentioned
in the C standard: the quality of implementation.


Oh, but it would be a perfectly good implementation in most other ways.


I.e. for people with no need for floating point. This is actually the
case with certain implementations for embedded control applications:
floating point support is too expensive and irrelevant to the typical
application domain of the processor, yet the standard says that it must
be provided. A much better solution would have been to make floating
point support optional for freestanding implementations.

In the case of hosted implementations, people would choose the ones
providing proper support for floating point even if they don't need it.

OTOH, considering that practically every hosted implementation in current
use today is for systems with native floating point support, the issue is
moot: the implementation delivers whatever the hardware delivers.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #80
