Sine code for ANSI C

Hello,
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file. I went to the math
directory and found math.h, which I guess needs to be included to use
the sine function. But which .c file implements the sine, cosine, and
other trig functions?
Thanks
Nov 14 '05
"Dan Pop" <Da*****@cern.c h> wrote in message
news:c8******** ***@sunnews.cer n.ch...
In <40************ ***@yahoo.com> CBFalconer <cb********@yah oo.com> writes:
Dan Pop wrote:
... snip ...

The point is not whether it can be calculated, but rather how much
precision should the calculation produce? Does it makes sense to
compute sin(DBL_MAX) with 53-bit precision, ignoring the fact that
DBL_MAX stands for an interval so large as to make this function
call completely devoid of any meaning?


PJPs point is that that interval is the range of values specified,
with the actual value being the most likely value in the range.
The shape of the probability curve in the interval is known only
to the end user. It may be an impulse function precisely at the
specified value.


Or it may not. If I deserve 5 digits of precision, why should the
implementation waste precious CPU resources to give me 16 digits of
*bogus* precision?


Because it doesn't know that it shouldn't? You could supply a useful
hint by calling sinf, in this particular case at least.
To deserve 16 digits of precision, it is *my* job to call sin() with
an argument of the right magnitude. It's as simple as that.


Simple for you, perhaps, but the interface for sin currently fails
to provide any information about how many of the bits you're supplying
should be taken seriously. Absent such information, a high quality
sin function favors getting the best possible answer for the argument
given, over doing something fast in the hopes that *all* users are
as fuzzy and in a hurry as you always seem to be. And as I've indicated
several times in this thread, computing the sine of *any* angle
smaller in magnitude than pi/4 can be done pretty fast. Odds are
that any logic you add to reduce the precision in this range would
take about as long to execute as the time you save. So all you have
to do is reduce your own arguments, as quick and sloppy as suits
your needs, and you avoid all that wasteful accuracy demanded by
professional programmers.
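
For the record, sine on [-pi/4, pi/4] really is cheap: a truncated
Taylor polynomial is already close to full double precision there.
Here is a minimal sketch of that claim (an illustrative example, not
Dinkumware's actual code -- a production library would use a tuned
minimax polynomial rather than raw Taylor coefficients):

#include <math.h>
#include <stdio.h>

/* Sketch only: Taylor series for sin(x) through the x^15 term,
   valid for |x| <= pi/4. The omitted x^17/17! term is below about
   half an ulp of double on this interval, so after argument
   reduction the polynomial is essentially full precision. */
static double sin_reduced(double x)
{
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0
        + x2 * (-1.0 / 5040.0 + x2 * (1.0 / 362880.0
        + x2 * (-1.0 / 39916800.0 + x2 * (1.0 / 6227020800.0
        + x2 * (-1.0 / 1307674368000.0))))))));
}

int main(void)
{
    double x = 0.5;    /* already within [-pi/4, pi/4] */
    printf("%.17g\n%.17g\n", sin_reduced(x), sin(x));
    return 0;
}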

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #111
"Dan Pop" <Da*****@cern.c h> wrote in message
news:c8******** ***@sunnews.cer n.ch...
In <%2************ *******@nwrddc0 2.gnilink.net> "P.J. Plauger" <pj*@dinkumware .com> writes:
"Dan Pop" <Da*****@cern.c h> wrote in message
news:c8******* ****@sunnews.ce rn.ch...
In <ch************ *************** ******@slb-newsm1.svr.pol. co.uk>

Christian Bau <ch***********@ cbau.freeserve. co.uk> writes:
>Well, you actually _can_ find the correct answer quite well. A value of
>type double represents a single real number.

But, when used in a real number context (as opposed to an integer number context -- floating point can be used in both contexts) it stands for a
whole subset of the real numbers set. The real value exactly represented is no more relevant than any other value from that set.


That's just one interpretation of floating-point representation. It is
akin to the oft-repeated presumption in this thread that every value
carries a +/- 1/2 ulp error with it. But that's not always the case.
Many a floating-point calculation is *exact*. So the real value exactly


When I asked you for concrete examples, you provided exactly zilch.


Just last week I replied to you as follows:

: > >> So, I was asking about *concrete* examples where it makes sense to
: > >> call sine with integral arguments or with arguments that are
: > >> provably *exact* representations of the intended value.
: > >
: > >And I gave one.
: >
: > Nope, you gave me nothing in the way of a *concrete* example. Or maybe
: > the term is beyond your grasp... Clue: "concrete" and "hypothetical"
: > are not exactly synonyms in any language I'm familiar with.
:
: The term is beyond my grasp.

Can't you remember *anything*?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #112
In article <c8***********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <%2*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:

....
But, when used in a real number context (as opposed to an integer number
context -- floating point can be used in both contexts) it stands for a
whole subset of the real numbers set. The real value exactly represented
is no more relevant than any other value from that set.


That's just one interpretation of floating-point representation. It is
akin to the oft-repeated presumption in this thread that every value
carries a +/- 1/2 ulp error with it. But that's not always the case.
Many a floating-point calculation is *exact*. So the real value exactly


When I asked you for concrete examples, you provided exactly zilch.


I think I gave one? (Zeros of the Riemann zeta function.)

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #113
Dan Pop wrote:
"P.J. Plauger" <pj*@dinkumware .com> writes:

.... snip ...

That's just one interpretation of floating-point representation.
It is akin to the oft-repeated presumption in this thread that
every value carries a +/- 1/2 ulp error with it. But that's not
always the case. Many a floating-point calculation is *exact*.


When I asked you for concrete examples, you provided exactly zilch.


I expanded my earlier example showing the errors arising from
large arguments to sin to show possible user argument reductions.
The first one I tried was fmod, and I am pleasantly surprised by
its accuracy here. The library is DJGPP 2.03.

#include <stdio.h>
#include <limits.h> /* INT_MAX */
#include <math.h>   /* fmod, sin, atan */
#include <stdlib.h> /* strtod */

/* try delta 0.001953125 (i.e. 2**-n i.e. single bit)
         0.00048828125
         0.0001220703125
         0.000030517578125
         0.00000762939453125
         0.0000019073485328125
   to show the precision preserved by reduce() */

/* -------------------1 */

/* return arg modulo PI */
double reduce(double arg, double PI)
{
    return fmod(arg, PI);
} /* reduce */

/* -------------------1 */

int main(int argc, char **argv)
{
    double PI = 4.0 * atan(1.0);
    unsigned int i;
    double arg, narg, delta;

    delta = 0.0;
    if (argc > 1) delta = strtod(argv[1], NULL);

    for (i = 1; i < (INT_MAX / 2U); i = 4 * i) {
        arg = i * PI + delta;
        narg = reduce(arg, PI);
        printf("%11u %28.16f %20.16f\n%11c %28.16f %20.16f\n",
               i, arg, sin(arg), ' ', narg, sin(narg));
    }
    return 0;
}

c:\c\junk>a 0.0000019073485328125
          1            3.1415945609383260  -0.0000019073485328
                       0.0000019073485329   0.0000019073485329
          4           12.5663725217077058   0.0000019073485328
                       0.0000019073485333   0.0000019073485333
         16           50.2654843647852232   0.0000019073485314
                       0.0000019073485333   0.0000019073485333
         64          201.0619317370952785   0.0000019073485113
                       0.0000019073485191   0.0000019073485191
        256          804.2477212263355568   0.0000019073484878
                       0.0000019073485191   0.0000019073485191
       1024         3216.9908791832967836   0.0000019073485074
                       0.0000019073486328   0.0000019073486328
       4096        12867.9635110111412359   0.0000019073481312
                       0.0000019073486328   0.0000019073486328
      16384        51471.8540383225190453   0.0000019073466264
                       0.0000019073486328   0.0000019073486328
      65536       205887.4161475680302829   0.0000019073406072
                       0.0000019073486328   0.0000019073486328
     262144       823549.6645845500752330   0.0000019073165305
                       0.0000019073486328   0.0000019073486328
    1048576      3294198.6583324782550335   0.0000019072202235
                       0.0000019073486328   0.0000019073486328
    4194304     13176794.6333241909742355   0.0000019068349957
                       0.0000019073486328   0.0000019073486328
   16777216     52707178.5332910418510437   0.0000019052940843
                       0.0000019073486328   0.0000019073486328
   67108864    210828714.1331584453582764   0.0000018991304387
                       0.0000019073486328   0.0000019073486328
  268435456    843314856.5326280593872070   0.0000018744758563
                       0.0000019073486328   0.0000019073486328
--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #114
Dik T. Winter wrote:

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.


I would certainly look carefully at them, especially
if done with matrix inversion.

-- glen

Nov 14 '05 #115
In article <TCCqc.23014$gr.1960915@attbi_s52> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Dik T. Winter wrote:

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.


I would certainly look carefully at them, especially
if done with matrix inversion.


Also with Gauss' elimination. If you really think that the input
numbers are intervals, in almost all cases the result makes no sense.
It is quite possible that within the set of matrices given, there is
at least one singular matrix. The number 40 I use above comes from
von Neumann. And indeed, before Wilkinson, numeric mathematicians could
not deal with the error analysis of large linear systems.
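
A miniature of that danger, as a sketch (a hypothetical 2x2 example,
not code from this thread): in the nearly singular system below,
perturbing one right-hand-side entry by a single ulp-scale amount
flips the exact solution from (1, 1) to (0, 2), so if the inputs are
really intervals the "solution" carries no information at all:

#include <stdio.h>
#include <float.h>

/* Solve a11*x + a12*y = b1, a21*x + a22*y = b2 by Cramer's rule;
   fine for a 2x2 toy, not for real work. */
static void solve2(double a11, double a12, double a21, double a22,
                   double b1, double b2, double *x, double *y)
{
    double det = a11 * a22 - a12 * a21;
    *x = (b1 * a22 - b2 * a12) / det;
    *y = (a11 * b2 - a21 * b1) / det;
}

int main(void)
{
    /* eps = 2*DBL_EPSILON keeps 1+eps and 2+eps exactly
       representable, so the arithmetic below is exact. */
    double eps = 2.0 * DBL_EPSILON, x, y;

    solve2(1.0, 1.0, 1.0, 1.0 + eps, 2.0, 2.0 + eps, &x, &y);
    printf("original : x = %g, y = %g\n", x, y);

    /* perturb b2 by one more eps: the solution jumps to (0, 2) */
    solve2(1.0, 1.0, 1.0, 1.0 + eps, 2.0, 2.0 + 2.0 * eps, &x, &y);
    printf("perturbed: x = %g, y = %g\n", x, y);
    return 0;
}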
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #116
In <V2********************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <40***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
>Dan Pop wrote:
>>
>... snip ...
>>
>> The point is not whether it can be calculated, but rather how much
>> precision should the calculation produce? Does it make sense to
>> compute sin(DBL_MAX) with 53-bit precision, ignoring the fact that
>> DBL_MAX stands for an interval so large as to make this function
>> call completely devoid of any meaning?
>
>PJPs point is that that interval is the range of values specified,
>with the actual value being the most likely value in the range.
>The shape of the probability curve in the interval is known only
>to the end user. It may be an impulse function precisely at the
>specified value.


Or it may not. If I deserve 5 digits of precision, why should the
implementation waste precious CPU resources to give me 16 digits of
*bogus* precision?


Because it doesn't know that it shouldn't? You could supply a useful
hint by calling sinf, in this particular case at least.


sinf is useless in portable C code.
To deserve 16 digits of precision, it is *my* job to call sin() with
an argument of the right magnitude. It's as simple as that.


Simple for you, perhaps, but the interface for sin currently fails
to provide any information about how many of the bits you're supplying
should be taken seriously.


ALL of them, as I have already explained to you. It's the magnitude of the
value that decides how many digits of precision I deserve in the result.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #117
In <G6*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <%2*******************@nwrddc02.gnilink.net> "P.J. Plauger"

<pj*@dinkumware.com> writes:
>"Dan Pop" <Da*****@cern.ch> wrote in message
>news:c8***********@sunnews.cern.ch...
>> In <ch**********************************@slb-newsm1.svr.pol.co.uk>
>Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
>
>> >Well, you actually _can_ find the correct answer quite well. A value of
>> >type double represents a single real number.
>>
>> But, when used in a real number context (as opposed to an integer number
>> context -- floating point can be used in both contexts) it stands for a
>> whole subset of the real numbers set. The real value exactly represented
>> is no more relevant than any other value from that set.
>
>That's just one interpretation of floating-point representation. It is
>akin to the oft-repeated presumption in this thread that every value
>carries a +/- 1/2 ulp error with it. But that's not always the case.
>Many a floating-point calculation is *exact*. So the real value exactly


When I asked you for concrete examples, you provided exactly zilch.


Just last week I replied to you as follows:

: > >> So, I was asking about *concrete* examples where it makes sense to
: > >> call sine with integral arguments or with arguments that are
: > >> provably *exact* representations of the intended value.
: > >
: > >And I gave one.
: >
: > Nope, you gave me nothing in the way of a *concrete* example. Or maybe
: > the term is beyond your grasp... Clue: "concrete" and "hypothetical"
: > are not exactly synonyms in any language I'm familiar with.
:
: The term is beyond my grasp.

Can't you remember *anything*?


Of course I do. Otherwise, I couldn't point out that you have still
failed to support your assertion with concrete examples, therefore I
refuse to consider it seriously, period.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #118
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <TCCqc.23014$gr.1960915@attbi_s52> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Dik T. Winter wrote:

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.


I would certainly look carefully at them, especially
if done with matrix inversion.


Also with Gauss' elimination. If you really think that the input
numbers are intervals, in almost all cases the result makes no sense.


Unless the input numbers can be exactly represented in floating point,
it is sheer foolishness to consider them as exact values. Furthermore,
in real life applications, most input numbers are obtained with a
precision even lower than that provided by the floating point
representation, so they stand for even larger intervals. That's why
the results are intervals, too. Consider the difference between
1 +/- 1000 and 1 +/- 0.001. Only a patent fool will display the result
as a plain 1 in both cases.
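
To make the interval view concrete for the sine case discussed in this
thread (an illustrative sketch, assuming IEEE 754 doubles): near 1e16
adjacent doubles are 2.0 apart -- roughly a third of a full period of
sine -- so the interval a single argument stands for already sweeps a
large part of [-1, 1]:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.0e16;
    double next = nextafter(x, 2.0 * x);  /* adjacent representable value */

    /* The spacing printed below is 2.0, so sin() evaluated at the two
       closest representable arguments differs in the first digit. */
    printf("spacing at 1e16: %.17g\n", next - x);
    printf("sin(x)         : %.17g\n", sin(x));
    printf("sin(next)      : %.17g\n", sin(next));
    return 0;
}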
Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #119
In article <c8***********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:

....
I would certainly look carefully at them, especially
if done with matrix inversion.


Also with Gauss' elimination. If you really think that the input
numbers are intervals, in almost all cases the result makes no sense.


Unless the input numbers can be exactly represented in floating point,
it is sheer foolishness to consider them as exact values. Furthermore,
in real life applications, most input numbers are obtained with a
precision even lower than that provided by the floating point
representation, so they stand for even larger intervals. That's why
the results are intervals, too. Consider the difference between
1 +/- 1000 and 1 +/- 0.001. Only a patent fool will display the result
as a plain 1 in both cases.


Ah, so, what method do *you* use to solve linear systems on the computer?
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #120
