Bytes | Software Development & Data Engineering Community

Sine code for ANSI C

Hello,
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file. I went to the math directory
and found math.h, which I guess needs to be included to use the sine
function. But which .c file implements sine, cosine, and the other trig
functions?
Thanks
Nov 14 '05
On Fri, 21 May 2004 14:43:25 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:

For the garbage value, see above. Set errno to any value you consider
sensible and document it. ERANGE would be fine with me.
Okay, so a programmer knows that a call to sine was silly if

1) the return value is between -1 and +1, and


I'd think about NaN in an IEEE context.
2) errno indicates that some function value is either too large to
represent (overflow) or too small to represent (underflow).

That should stand up well to criticism.


in the case of sin(), yes, because large arguments in floating point
really are mathematically silly. The same is not true of, say,
logarithmic gamma, or just plain logarithm.

Nov 14 '05 #131
"Dr Chaos" <mb****************@NOSPAMyahoo.com> wrote in message
news:sl***************************@lyapunov.ucsd.edu...
On Fri, 21 May 2004 14:43:25 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:

For the garbage value, see above. Set errno to any value you consider
sensible and document it. ERANGE would be fine with me.
Okay, so a programmer knows that a call to sine was silly if

1) the return value is between -1 and +1, and


I'd think about NaN in an IEEE context.


And sin(x) suddenly becomes NaN beyond some arbitrary number of
radians? I'm still waiting for the justification for any particular
magnitude of the cutoff.
2) errno indicates that some function value is either too large to
represent (overflow) or too small to represent (underflow).

That should stand up well to criticism.


in the case of sin(), yes, because large arguments in floating point
really are mathematically silly.


They really aren't, but this seems to be a point too subtle for
many to grasp.
The same is not true of, say, logarithmic
gamma, or just plain logarithm.


But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #132
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...

But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?


Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where the
magnitude of the ulp is greater than 2 pi, it becomes impossible to decide
what the correct answer is.
Nov 14 '05 #133
In <hA**************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
>: > Assume all the bits of the argument are exact and the *only* source
>: > of fuzziness is the magnitude of the value. By the time the least
>: > significant bit of the mantissa means more than 2 * pi, the fuzziness
>: > is complete (any result in the -1..1 range is correct).
>: >
>: > Is that concrete enough for you?
>:
>: Yes, it's concrete enough to show that you still don't get it.
>
>So what you're committing to is the requirement that the sine
>should *not* be computed for any value where 1 ulp weighs more
>than 2*pi.
That's too strong. I would still expect

sin(DBL_MAX) * sin(DBL_MAX) + cos(DBL_MAX) * cos(DBL_MAX)

to be reasonably close to 1.0, if not equal to it. But, other than that,
any value in the [-1, 1] range is OK once 1 ulp exceeds 2*pi.


There you go again, getting un-concrete. You'll have to specify
what "reasonably" means, or some nit-picker will take apart
your revision of the sine specification.


As close to 1.0 as the C standard requires 2.0 - 1.0 to be ;-)

If you're not interested in a serious discussion, please don't waste my
time.
>And presumably, sine should deliver the best possible
>approximatio n to any argument less than this magnitude.


Not even that. If the value in question covers the range [min, max],
sine should provide the best possible approximation of sin(x), where x
is one value in the range [min, max].


Okay, then I see no reason why we shouldn't apply this principle
to *every* math function. That would mean, for example, that
the range of permissible values can be huge for large arguments
to exp, lgamma, tgamma, as well as many combinations of arguments
to pow.


Precisely. I didn't mean sine to be treated as an exception.
>(This
>is fine with me, since it still requires an argument reduction
>using about 2 1/2 words of precision, so it will break the
>vast majority of sine functions that have been in use for the
>past several decades.


Only the ones that have been *misused*. If I measure the volume of a
sphere with a precision of 1 cubic centimeter, does it make any sense
to compute *and display* its radius with picometer precision?


No. But if you specify the behavior of a math function and don't
provide a way to specify the uncertainty in its arguments, does
it make any sense to demand that the math functions know how
much uncertainty is there?


The amount of uncertainty assumed by the library is dictated by the
nature of the floating point representation being used. In most real
life applications, the amount is higher than that, but this is none of the
library implementor's concern.
I've been talking about the requirements
on the authors of math functions and you keep going back to how
they should be properly used. Two different universes of discourse.


They stop being different once common sense is applied. The implementor
need not waste any resources providing more precision than a proper usage
of the function requires. The additional "precision" is garbage, anyway:
only a fool would look for *significant* information in those bits.
>But hey, we're covered, so I'm happy.)
>Of course, you still need to specify what to return for the
>garbage value, and what error to report (if any). I await your
>further guidance.


For the garbage value, see above. Set errno to any value you consider
sensible and document it. ERANGE would be fine with me.


Okay, so a programmer knows that a call to sine was silly if

1) the return value is between -1 and +1, and

2) errno indicates that some function value is either too large to
represent (overflow) or too small to represent (underflow).


The *exact* meaning of ERANGE is as documented for each function using
it. But, as I said, use *any* value *you* consider sensible and document
it.

Again, I'm not interested in this kind of idiotic nit-picking...
>So you're willing to stand behind this criterion even after I
>described, in a later posting, the various ways in which it was
>arbitrary?


Yup. I didn't buy any of your sophistry.
>If this is submitted as a DR to WG14, you're
>willing, as its original author and proposer, to defend it
>against cranky attacks by people who haven't thought through
>the matter as carefully as you?


Considering that the current standard doesn't impose *any* precision on
sin, I would not expect my DR to be taken seriously.


So that's your copout for not actually doing anything?


Why should I do *anything*? Are we discussing here the C standard or your
implementation of the C library? Are you so stupid as to be unable to
tell the difference between the two?

I have never expressed any objection to what the C standard says about the
<math.h> stuff, so why the hell should I have to submit *any* DR on this
issue?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #134
On Mon, 24 May 2004 10:51:44 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
"Dr Chaos" <mb****************@NOSPAMyahoo.com> wrote in message
news:sl***************************@lyapunov.ucsd.edu...
On Fri, 21 May 2004 14:43:25 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
>>
>> For the garbage value, see above. Set errno to any value you consider
>> sensible and document it. ERANGE would be fine with me.
>
> Okay, so a programmer knows that a call to sine was silly if
>
> 1) the return value is between -1 and +1, and
I'd think about NaN in an IEEE context.


And sin(x) suddenly becomes NaN beyond some arbitrary number of
radians?


Yes.
I'm still waiting for the justification for any particular
magnitude of the cutoff.
When the least significant bit of the argument induces a change of at
least 2 pi in its value (so that the output is essentially indeterminate
with a 1 lsb fluctuation), then it is extremely likely that the
programmer is making a blunder and ought to know.
> 2) errno indicates that some function value is either too large to
> represent (overflow) or too small to represent (underflow).
>
> That should stand up well to criticism.
in the case of sin(), yes, because large arguments in floating point
really are mathematically silly.


They really aren't, but this seems to be a point too subtle for
many to grasp.


Explain a situation, using computations with standard fixed-precision
floating point, in which computing sin of a value passed in a
standard floating point format would be valuable for magnitudes
beyond the above.
The same is not true of, say, logarithmic
gamma, or just plain logarithm.


But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem,


It only does in the legalistic sense, but not in the useful sense,
because in those cases people would be comparing magnitudes, and
possibly dividing or taking logarithms of the large results.
(And they most likely should have been taking logarithms first.)

And the error relative to the value remains small, and that's what
people almost always care about using exp.

At some point you have to return +Inf for exp() of too large an
argument. Is it *really* positive infinity? No. It is far from
positive infinity, well, infinitely so.

There's an arbitrary cutoff right there. Why aren't you forcing
people to convert to an arbitrary precision format?
to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?
sin and cos are different from exp, and especially lgamma.

I would be pleased with a library which signaled excessively large arguments
for sin/cos. It's a blunder, like dereferencing NULL.

For a library which had arguments of arbitrary precision, then yes,
it would be a fine idea to get sin() right.
P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

Nov 14 '05 #135
"John Cochran" <jd*@smof.fiawol.org> wrote in message
news:c8**********@smof.fiawol.org...
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...

But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?


Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where the
magnitude of the ulp is greater than 2 pi, it becomes impossible to decide
what the correct answer is.


Sigh. Let's try again. You're still arguing from the notion that
a given floating-point number represents a range of values, from
the next lower representable value to the next higher. If that
were the one, obvious, and only meaning of a floating-point argument
value, then I'd agree that the uncertainty in the value of sin(x)
eventually gets so large that it makes little sense to ascribe a
given value to the argument. All we can say is the value is in the
range [-1, 1].

But if that's how we're going to treat sine, it's hard to make a
case for treating any other function differently. The fast moving
functions, of which exp and tgamma are just two examples, *also*
get fuzzier as their arguments get larger. I should stick to
calculus, but I'll instead give a concrete example. If you compute
exp(700.0), you get an answer of about 1.014232054735004e+304.
Now try computing the exponential of the next larger representable
value (which you can obtain from nextafter(700.0, INF)). It looks
like 1.01423205473512e+304, printed to the same precision. That
value is 947 ulp bigger than exp(700.0).

So by the logic repeatedly expressed in this thread, it's perfectly
fine for exp(700.0) to return *any* value within plus or minus
900-odd ulp. That's nearly *eleven garbage bits*. After all,
whoever computed that 700.0 ought to know the effect of a 1 ulp
error on the argument, so it's silly -- and a waste of time --
for the library to try too hard to get the exponential right.
Right?

So much for the requirements on exp, cosh, sinh, tgamma, and
pow -- to name the functions I know off the top of my head would
go to hell.

But if you assume, for the sake of computing any of these fast
moving functions, that the argument represents some *exact*
value, then there is a well defined function value that can
be delivered corresponding to that exact value. And I'll bet
there's more than one physicist, engineer, and/or economist
who'd rather get that value than something about 1,000 ulp
off. Pragmatically speaking, I do know that plenty of customers
will complain if your exponential function sucks as much as
the interval interpretation would permit.

Now here's the harder thing to understand. *Every* finite
argument has a well defined sine corresponding to it. It's
not easy to compute, but it's well defined. It would be
easier to understand if the sine took an argument in quadrants.
For larger arguments, the granularity gets steadily worse,
but the arguments still stand for an obvious number of
quadrants, or degrees if you prefer to think in those terms.
Once you get to where the least significant bit weighs 0.5,
you can represent only multiples of 45 degrees. All your
results are 0, 1, or sqrt(1/2), possibly with a minus sign.
Go to the next larger exponent and your values are only
0, 1, and -1, corresponding to multiples of 90 degrees. Go
to the next larger exponent and your values are all zero,
because the sine of any multiple of 180 degrees is zero.
And thus it remains for all the larger finite representable
values.

You might argue that the sine in quadrants becomes silly
once you get to counting by 180 degrees, or multiples
thereof. But the round function gets equally silly for
comparable numbers -- once the fraction bits go away, the
answer is obvious and trivial to compute. *But it's still
well defined and meaningful.* If you were to start
throwing exceptions, or returning NaNs, just because the
rounded result is so obvious, I assure you that many
people would have occasion to complain, and justifiably
so. Why should sine be any different?

The added wrinkle with the conventional sine is that it
counts in radians. Thus *every* nonzero argument to sine
corresponds to an angle in quadrants or degrees that
takes an infinite number of bits to precisely represent
(pi being irrational and all). Nevertheless, you can in
principle multiply any finite representable floating-point
value by enough bits of pi to retain the reduced angle
(in the interval [0, 2*pi) say) to sufficient precision
to define the corresponding sine function to the same
precision as the argument. How that happens is up to
the implementor. If the program is computing exact multiples
of radians, it doesn't even have to know how to represent
pi to generate meaningful, and often exact, arguments
to the sine function.

I don't know of any technique to do this calculation other
than to do argument reduction to ever more precision for
ever larger arguments, but it can and has been done.
Since the C Standard says nothing about any limitations
on the size of the argument to sine, it's hard to make a
case that a library vendor has any right *not* to do all
this work. FWIW, I wish there were a way to get off the
hook. But I've yet to see presented in this thread any
rationale for *not* computing sine properly that holds
water. And I've yet to see a rationale for picking a
cutoff point, or an error code, or an error return value,
for large sines that also holds water.

Also FWIW, I just computed the change in the sine function
between sin(700.0) and sin(nextafter(700.0, INF)). It's
-859 ulp. That's a *smaller* interval than for exp in
the same argument range. So tell me again why it's okay to
punt on sine and not on exp.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #136
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
>If this is submitted as a DR to WG14, you're
>willing, as its original author and proposer, to defend it
>against cranky attacks by people who haven't thought through
>the matter as carefully as you?

Considering that the current standard doesn't impose *any* precision on
sin, I would not expect my DR to be taken seriously.


So that's your copout for not actually doing anything?


Why should I do *anything*? Are we discussing here the C standard or your
implementation of the C library? Are you so stupid as to be unable to
tell the difference between the two?

I have never expressed any objection to what the C standard says about the
<math.h> stuff, so why the hell should I have to submit *any* DR on this
issue?


Don't get me wrong, I never expected you to. It's obvious by now
that you're a stuck opposer. You love to be rude to people --
by calling them stupid, for example -- and you love to sit on
the sidelines and criticize the work of others. But I'm quite
certain you will *never* do anything that would put you in
the position of having your own detailed technical proposals
be criticized by others.

I respond to you because it gives me the opportunity to explain
things to others. I have no expectation that you'll ever let in
a new idea or admit you're anything but right all the time.
Once I find myself just responding to your jibes and shifting
arguments, I know it's time to quit.

This is that time.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #137
On Mon, 24 May 2004 16:40:35 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
"John Cochran" <jd*@smof.fiawol.org> wrote in message
news:c8**********@smof.fiawol.org...
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...
>
>But the repeated argument is that the silliness stems from the fact
>that the floating-point value x stands for the interval (x - 1ulp,
>x + 1ulp). If it didn't, it would stand for the value it seems to
>stand for, and the library writer might be obliged to actually
>compute the function value corresponding to it. Well, what's sauce
>for the goose is sauce for the gander. Why should sine get progressively
>fuzzier as that interval covers a wider range of function values and
>not require/allow/expect exactly the same from every other function
>that does the same? The exponential suffers from *exactly* the
>same fuzziness problem, to take just the best established of the
>fast-moving functions I mentioned earlier. Aren't we doing all our
>customers a disservice by giving them a misleading value for exp(x)
>when x is large?
Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where the
magnitude of the ulp is greater than 2 pi, it becomes impossible to decide
what the correct answer is.


Sigh. Let's try again. You're still arguing from the notion that
a given floating-point number represents a range of values, from
the next lower representable value to the next higher.


I think they're arguing from the notion that "the purpose of
computing is insight, not numbers."
But if that's how we're going to treat sine, it's hard to make a
case for treating any other function differently. The fast moving
functions, of which exp and tgamma are just two examples, *also*
get fuzzier as their arguments get larger. I should stick to
calculus, but I'll instead give a concrete example. If you compute
exp(700.0), you get an answer of about 1.014232054735004e+304.
Now try computing the exponential of the next larger representable
value (which you can obtain from nextafter(700.0, INF)). It looks
like 1.01423205473512e+304, printed to the same precision. That
value is 947 ulp bigger than exp(700.0).
relative error is what?
So by the logic repeatedly expressed in this thread, it's perfectly
fine for exp(700.0) to return *any* value within plus or minus
900-odd ulp. That's nearly *eleven garbage bits*.
What matters is the number of non-garbage bits.

Suppose that number were zero.
After all,
whoever computed that 700.0 ought to know the effect of a 1 ulp
error on the argument, so it's silly -- and a waste of time --
for the library to try too hard to get the exponential right.
To some degree, yes, but the case is far stronger for sin/cos.
Right? But if you assume, for the sake of computing any of these fast
moving functions, that the argument represents some *exact*
value, then there is a well defined function value that can
be delivered corresponding to that exact value. And I'll bet
there's more than one physicist, engineer, and/or economist
who'd rather get that value than something about 1,000 ulp
off.
Let's stick to sin and cos. I haven't heard of one explicit
example of somebody who really thought about this and really
needed it.

By contrast, if I had a student writing some optics simulation
software, say simulating chaos in erbium-doped fiber ring lasers, and
he or she took sin or cos() of a very large value, they were making a
clear error by doing so. It conceivably could happen if they
implemented some textbook formulae naively.

I would feel more comfortable if the library automatically signaled
this, as it would be an instructive point, and it might prevent
wasted calculation or worse, an improper scientific inference.
Pragmatically speaking, I do know that plenty of customers
will complain if your exponential function sucks as much as
the interval interpretation would permit.
who would complain, and what would be the particular application
they'd complain about?
You might argue that the sine in quadrants becomes silly
once you get to counting by 180 degrees, or multiples
thereof. But the round function gets equally silly for
comparable numbers -- once the fraction bits go away, the
answer is obvious and trivial to compute. *But it's still
well defined and meaningful.* If you were to start
throwing exceptions, or returning NaNs, just because the
rounded result is so obvious, I assure you that many
people would have occasion to complain, and justifiably
so. Why should sine be any different?
Because the uses of sine are different, and rounding
produces useful results.
But I've yet to see presented in this thread any
rationale for *not* computing sine properly that holds
water.
A programmer's conceptual blunder.
And I've yet to see a rationale for picking a
cutoff point, or an error code, or an error return value,
for large sines that also holds water.
Also FWIW, I just computed the change in the sine function
between sin(700.0) and sin(nextafter(700.0, INF)). It's
-859 ulp. That's a *smaller* interval than for exp in
the same argument range. So tell me again why it's okay to
punt on sine and not on exp.
relative error, and facts of actual use.
P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com


Nov 14 '05 #138
In article <c8**********@smof.fiawol.org>,
jd*@smof.fiawol.org (John Cochran) wrote:
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...

But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?


Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where the
magnitude of the ulp is greater than 2 pi, it becomes impossible to decide
what the correct answer is.


If the argument x is around 709 then according to the theory that a
double number represents an interval, x represents quite a large
interval; for example +/- 2^-44 with IEEE 64 bit representation. Not
quite as bad as for the sine function, but you have to start thinking
whether or not a maths library should return results with full precision
or not.
Nov 14 '05 #139
Dr Chaos <mb****************@NOSPAMyahoo.com> wrote:
On Mon, 24 May 2004 16:40:35 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
But if you assume, for the sake of computing any of these fast
moving functions, that the argument represents some *exact*
value, then there is a well defined function value that can
be delivered corresponding to that exact value. And I'll bet
there's more than one physicist, engineer, and/or economist
who'd rather get that value than something about 1,000 ulp
off.


Let's stick to sin and cos. I haven't heard of one explicit
example of somebody who really thought about this and really
needed it.

By contrast, if I had a student writing some optics simulation
software, say simulating chaos in erbium-doped fiber ring lasers, and
he or she took sin or cos() of a very large value, they were making a
clear error by doing so. It conceivably could happen if they
implemented some textbook formulae naively.

I would feel more comfortable if the library automatically signaled
this, as it would be an instructive point, and it might prevent
wasted calculation or worse, an improper scientific inference.


I have a suggestion so that your heroic efforts go to something
more useful:

Here is something that might, conceivably, occasionally be used:

sin_2pi_product(double z1, double z2)

return the sine of 2*pi*z1*z2, where z1 and z2 are rationals in
IEEE double and 2*pi is expressed to as many significant digits
as necessary.

Nov 14 '05 #140
