Sine code for ANSI C

Hello
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file.
I went to the math directory and found math.h. I guess that needs to be
included for the sine function, but which .c file implements the
sine/cosine and other trig functions?
Thanks
Nov 14 '05
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch...
Concrete examples, please.
.....
Concrete examples, please. Assuming a competent approach of the problem.
.....
So, I was asking about *concrete* examples where it makes sense to call
sine with integral arguments or with arguments that are provably *exact* representations of the intended value.


And I gave one.


Nope, you gave me nothing in the way of a *concrete* example. Or maybe
the term is beyond your grasp... Clue: "concrete" and "hypothetical"
are not exactly synonyms in any language I'm familiar with.


The term is beyond my grasp.
To *me*, as a user, having a sine that spends CPU cycles to provide
the answer with the precision implied by the assumption that the
argument represents an exact value, is unacceptable.


What if the cycles are spent only on large arguments? All you have
to do then is avoid the large arguments you know to be meaningless
in your application.


Even not so large arguments can still have plenty of fuzziness and
getting a 53-bit accurate answer for the value actually represented is
still a waste of CPU resources.


So all the sine function needs is an indication of how much fuzz to
assume in the argument. I've yet to hear a specific criterion. Concrete
example please.
sin(DBL_MAX)
I deserve *any* garbage in the -1..1 range, even if DBL_MAX is an exact
integer value.


Probably. You also deserve *any* number of CPU cycles spent generating
that garbage.


A *good* implementation (which is what we're talking about, right?) is
supposed to produce garbage (where garbage is asked for) as fast as
possible.


I agree that it's a Quality of Implementation issue just how fast a
library function supplies nonsense when called with nonsense arguments.
But I've yet to hear an objective criterion for determining how much
of an argument is garbage. Absent that, the best way I know for library
writers to satisfy customers is to assume that every input value is
exact, and to produce the closest possible representation to the nearest
internal representation of the corresponding function value.

Of course, if you don't want to take responsibility for analyzing error
propagation, that's fine. Just don't foist the burden on somebody who
doesn't know how fuzzy your inputs really are.
Fortunately for you, most sine functions meet your quality
requirements.


Glad to hear it. I hope I'll never be forced to use yours.


Me too.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #51
In <Hx*******@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:

...
> Concrete examples, please.

What is the sine of 162,873 radians? If you're working in radians,
you can represent this input value *exactly* even in a float.


You *can*, but does it make physical sense to call sine with an integer
argument (even if represented as a float)?


Must everything make physical sense? Perhaps it makes mathematical sense?


And does it? Last time I checked, mathematics define the sine as having
a real argument and the C programming language provides zilch support for
real numbers.
In real life applications, the argument of sine is computed using
floating point arithmetic (on non-integer values), so it *is* a fuzzy
value, with the degree of fuzziness implied by its magnitude.


Not in mathematical applications, where the argument to the sine function
can very well be exact.


Please elaborate, again, with concrete examples. Where do these
"mathematical applications" get their input data from? How do they
handle the *incorrect* (from the mathematics POV) result of calling sine
for pretty much *any* argument except 0?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #52
P.J. Plauger writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch...
Concrete examples, please.
.....
Concrete examples, please. Assuming a competent approach of the problem.
.....
> So, I was asking about *concrete* examples where it makes sense to call
> sine with integral arguments or with arguments that are provably *exact*
> representations of the intended value.

And I gave one.


Nope, you gave me nothing in the way of a *concrete* example. Or maybe
the term is beyond your grasp... Clue: "concrete" and "hypothetical"
are not exactly synonyms in any language I'm familiar with.


You might mention to Mr. Pop the notion of a revolution counter, such as a
Veeder-Root counter on the propeller shaft of an ocean going ship. My
guess is that there are a lot of radians between England and New Jersey. If
Dan Pop needs the name of a particular ship to make the point sufficiently
concrete, I expect some one could provide the name of such a ship.


Nov 14 '05 #53
In <Di********************@nwrddc01.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch...
Even not so large arguments can still have plenty of fuzziness and
getting a 53-bit accurate answer for the value actually represented is
still a waste of CPU resources.


So all the sine function needs is an indication of how much fuzz to
assume in the argument. I've yet to hear a specific criterion. Concrete
example please.


Assume all the bits of the argument are exact and the *only* source
of fuzziness is the magnitude of the value. By the time the least
significant bit of the mantissa means more than 2 * pi, the fuzziness
is complete (any result in the -1..1 range is correct).

Is that concrete enough for you?
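For illustration only (not code from the thread), a minimal C sketch of this
criterion, assuming IEEE 754 doubles and a C99 <math.h>; nextafter is used to
estimate 1 ulp of the argument:

#include <math.h>
#include <stdio.h>

/* 1 ulp of a finite positive double x: the gap to the next representable value. */
static double ulp_of(double x)
{
    return nextafter(x, HUGE_VAL) - x;
}

int main(void)
{
    double two_pi = 2.0 * acos(-1.0);
    double xs[] = { 1.0, 1.0e15, 1.0e16, 1.0e17 };
    int i;

    for (i = 0; i < 4; i++) {
        /* Once 1 ulp of x exceeds 2*pi, the argument no longer determines
           even which period of the sine it falls in. */
        printf("x = %-8g  ulp = %-8g  %s\n", xs[i], ulp_of(xs[i]),
               ulp_of(xs[i]) > two_pi ? "fuzziness complete"
                                      : "period still determined");
    }
    return 0;
}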

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #54
In <2g************@uni-berlin.de> "osmium" <r1********@comcast.net> writes:
P.J. Plauger writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch...
> Concrete examples, please.
> .....
> Concrete examples, please. Assuming a competent approach of the problem.
> .....
> >> So, I was asking about *concrete* examples where it makes sense to call
> >> sine with integral arguments or with arguments that are provably *exact*
> >> representations of the intended value.
> >
> >And I gave one.
>
> Nope, you gave me nothing in the way of a *concrete* example. Or maybe
> the term is beyond your grasp... Clue: "concrete" and "hypothetical"
> are not exactly synonyms in any language I'm familiar with.


You might mention to Mr. Pop the notion of a revolution counter, such as a
Veeder-Root counter on the propeller shaft of an ocean going ship. My
guess is that there are a lot of radians between England and New Jersey. If
Dan Pop needs the name of a particular ship to make the point sufficiently
concrete, I expect some one could provide the name of such a ship.


You may also want to mention a concrete example where applying the sine
function to such a revolution counter makes sense. Unless I'm missing
something, only the last, possibly incomplete, revolution counts in the
context of the sine function, so please clue me in.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #55
Dan Pop writes:
In <2g************@uni-berlin.de> "osmium" <r1********@comcast.net> writes:
P.J. Plauger writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch...

> Concrete examples, please.
> .....
> Concrete examples, please. Assuming a competent approach of the

problem.
> .....
> >> So, I was asking about *concrete* examples where it makes sense to

call
> >> sine with integral arguments or with arguments that are provably
*exact*
> >> representations of the intended value.
> >
> >And I gave one.
>
> Nope, you gave me nothing in the way of a *concrete* example. Or maybe
> the term is beyond your grasp... Clue: "concrete" and "hypothetical"
> are not exactly synonyms in any language I'm familiar with.


You might mention to Mr. Pop the notion of a revolution counter, such as a
Veeder-Root counter on the propeller shaft of an ocean going ship. My
guess is that there are a lot of radians between England and New Jersey. If
Dan Pop needs the name of a particular ship to make the point sufficiently
concrete, I expect some one could provide the name of such a ship.


You may also want to mention a concrete example where applying the sine
function to such a revolution counter makes sense. Unless I'm missing
something, only the last, possibly incomplete, revolution counts in the
context of the sine function, so please clue me in.


When I was a child I said, "prime numbers, what are they good for?" Now
finally I, and a lot of other people know what they are good for. For a
mathematician sitting in an ivory tower to decide which equations make sense
and which are worth solving is the height of arrogance. My reading is that
that is the point P.J. Plauger has been trying to make. The world is so
complex that the work load must be divided amongst several heads. P.J. does
his mathematician thing and engineers do their engineering thing. When his
fussiness causes his sine routine to take 50 msec when Intel does it in 2
picoseconds, we might revisit the problem. I don't think we are there yet.
Nov 14 '05 #56
In <2g************@uni-berlin.de> "osmium" <r1********@comcast.net> writes:
Dan Pop writes:
In <2g************@uni-berlin.de> "osmium" <r1********@comcast.net>

writes:
>P.J. Plauger writes:
>
>> "Dan Pop" <Da*****@cern.ch> wrote in message
>> news:c7**********@sunnews.cern.ch...
>>
>> > Concrete examples, please.
>> > .....
>> > Concrete examples, please. Assuming a competent approach of the
>problem.
>> > .....
>> > >> So, I was asking about *concrete* examples where it makes sense to
>call
>> > >> sine with integral arguments or with arguments that are provably
>> *exact*
>> > >> representations of the intended value.
>> > >
>> > >And I gave one.
>> >
>> > Nope, you gave me nothing in the way of a *concrete* example. Or maybe
>> > the term is beyond your grasp... Clue: "concrete" and "hypothetical"
>> > are not exactly synonyms in any language I'm familiar with.
>
>You might mention to Mr. Pop the notion of a revolution counter, such as a
>Veeder-Root counter on the propeller shaft of an ocean going ship. My
>guess is that there are a lot of radians between England and New Jersey. If
>Dan Pop needs the name of a particular ship to make the point sufficiently
>concrete, I expect some one could provide the name of such a ship.


You may also want to mention a concrete example where applying the sine
function to such a revolution counter makes sense. Unless I'm missing
something, only the last, possibly incomplete, revolution counts in the
context of the sine function, so please clue me in.


When I was a child I said, "prime numbers, what are they good for?" Now
finally I, and a lot of other people know what they are good for. For a
mathematician sitting in an ivory tower to decide which equations make sense
and which are worth solving is the height of arrogance. My reading is that
that is the point P.J. Plauger has been trying to make. The world is so
complex that the work load must be divided amongst several heads. P.J. does
his mathematician thing and engineers do their engineering thing. When his
fussiness causes his sine routine to take 50 msec when Intel does it in 2
picoseconds, we might revisit the problem. I don't think we are there yet.


I somehow failed to find in this empty verbiage the answer to the concrete
question I was asking in my previous post...

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #57
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch...
In <Di********************@nwrddc01.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c7**********@sunnews.cern.ch...
Even not so large arguments can still have plenty of fuzziness and
getting a 53-bit accurate answer for the value actually represented is
still a waste of CPU resources.


So all the sine function needs is an indication of how much fuzz to
assume in the argument. I've yet to hear a specific criterion. Concrete
example please.


Assume all the bits of the argument are exact and the *only* source
of fuzziness is the magnitude of the value. By the time the least
significant bit of the mantissa means more than 2 * pi, the fuzziness
is complete (any result in the -1..1 range is correct).

Is that concrete enough for you?


Yes, it's concrete enough to show that you still don't get it.

If the argument were in quadrants, I'd agree that for sufficiently
large arguments the answer is always one of {0, 1, -1}, and for
still larger arguments the answer is always zero. You'd still
want to return the appropriate member of this simple set, just as you
want to return 1e38 from the round function when called with 1e38.
Here, round is operating nowhere near its most interesting range,
but there's still a well defined function value for each input value,
and it's not the place of the library to set the Uninteresting Argument
Exception in the FPP, or set errno to EUNINTERESTING_ARGUMENT.

The argument to sine is not really in quadrants, but in radians.
That means that even when there are no fraction bits represented in
the argument x, there *still are* in x/(pi/2) (which is how you convert
radians to quadrants). So each argument value still corresponds to
some reduced argument that can be expressed to full machine precision
(almost always as some value other than {1, 0, -1} but still computable
with enough hard work). Which means that there's a corresponding
function return value that's also computable to full machine precision
that you really ought to return.
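A hedged sketch of that reduction, done naively in double precision (an
illustration, not Dinkumware's code; 162,873 is the exactly representable
argument mentioned earlier, and half_pi is pi/2 rounded to a double). A
library that carries pi/2 to many more bits will disagree with the naive
result in the trailing digits:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 162873.0;                 /* exactly representable, from the thread */
    double half_pi = acos(-1.0) / 2.0;   /* pi/2 rounded to double precision */
    double n = floor(x / half_pi + 0.5); /* nearest quadrant count */
    double r = x - n * half_pi;          /* reduced argument, roughly in [-pi/4, pi/4] */
    long   q = (long)n % 4;

    double naive = (q == 0) ?  sin(r) : (q == 1) ?  cos(r)
                 : (q == 2) ? -sin(r) : -cos(r);

    printf("reduced r = %.17g (quadrant %ld)\n", r, q);
    printf("sin from naive reduction = %.17g\n", naive);
    printf("library sin(x)           = %.17g\n", sin(x));
    return 0;
}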

You may think that it's unlikely that all the bits of that value are
meaningful in a given application, and you're probably right. But
you're not *definitely* right. So as a library vendor, best engineering
practice is to supply all the bits that *might* make sense, in case
they actually do. A good quality implementation will continue to
compute the sine of small arguments quickly, but if it has to take
progressively longer to reduce ever larger arguments, then so be it.
If you don't want the cost, reduce the arguments quickly, by your own
crude methods that are good enough for your application, before calling
sine.

Note also that even sin(1e-38) *might not* be good to full precision,
because the argument happens not to be good to full precision, but
you have no criterion for judging how many bits are worth computing.
Since it's so easy to compute them, and since they're often all
meaningful, nobody wastes much time debating the cost of producing
bits that are potentially garbage.

Now it so happens that it gets progressively *much* harder to follow
this best engineering practice as x gets larger in magnitude. The
temptation is strong to decree that the bits are meaningless beyond
some point and just return garbage. I've done this very thing in the
past, but I'm less inclined to do it now -- at least not without some
sanctioned way to report significance loss in lieu of computing a
difficult function value.

You also didn't get that you're still not giving a sufficiently concrete
criterion for when the sine function should stop trying. If you don't
like that a sine function wastes your program microseconds computing
what you shouldn't have told it to compute, then you need to be more
precise about where it can and should give up. You state above that
the function result is definitely meaningless once 1 ulp in the argument
weighs more than 2*pi, but why go that far? Aside from a phase factor,
you've lost all angle information at pi/2. But then how meaningful is it
to have just a couple of bits of fraction information? To repeat what
you stated above:
Even not so large arguments can still have plenty of fuzziness and
getting a 53-bit accurate answer for the value actually represented is
still a waste of CPU resources.


So you've taken on a serious responsibility here. You have to tell us
library vendors just how small "not so large" is, so we know where to
produce quick garbage instead of slower correct answers. If you don't,
you have no right to deem our products unacceptable, or even simply
wasteful.

Of course, you also need to get some standards organization to agree
with you, so we all have the same *concrete* criteria to obey. But I'm
sure you can sweet talk one of them into doing as you say, once you
actually say it.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #58
"P.J. Plauger" wrote:
.... snip ...
I agree that it's a Quality of Implementation issue just how fast
a library function supplies nonsense when called with nonsense
arguments. But I've yet to hear an objective criterion for
determining how much of an argument is garbage. Absent that, the
best way I know for library writers to satisfy customers is to
assume that every input value is exact, and to produce the
closest possible representation to the nearest internal
representation of the corresponding function value.

Of course, if you don't want to take responsibility for analyzing
error propagation, that's fine. Just don't foist the burden on
somebody who doesn't know how fuzzy your inputs really are.


I am gradually coming around to your point of view here, so I am
rewording it. To me, the argument is that the input argument
represents a range of values (absent contrary information) with a
known uncertainty. The most probable actual value is the exact
value. Since d(sinx)/dx is strictly bounded, the resultant
function is never going to fly off in all directions, and will not
become totally meaningless unless that argument uncertainty is in
the order of PI.

We can, at the cost of some computational complexity, assume the
input argument is exact and reduce it to a principal value with an
accuracy governed by the precision of PI. There is no essential
difference between this operation and forming a real value for
1/3. The critical thing there is the precision of our knowledge
of 3.

For a concrete example, consider a mechanism that rotates and
detents at 45 degree intervals. The sine of the resultant angle
can only have 5 discrete values. However the net rotation is
described by the sum of a counter (of full turns) and one of 8
discrete angles, in units of 45 deg. After letting that engine
whir for a day and a half at a 1000 rpm and a half and recording
the final angle, we want to know the sine of that angle. The
computation machinery knows nothing about detents, all it has to
work with is PI, the net rotation angle, and the computation
algorithm for sin(x).

At some point the accuracy of the results will become worse than
the accuracy of the detents, and all blows up. This is not the
same point as that reached by simple modulo PI arithmetic.
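An illustrative C sketch of that example (an editor's reading of it; the run
length comes from the post, the three extra detent steps are arbitrary). The
detent angle is known exactly as an integer count, but the double-precision
accumulated angle already drifts away from it:

#include <math.h>
#include <stdio.h>

int main(void)
{
    long revs  = 1000L * 60 * 24 * 3 / 2;          /* a day and a half at 1000 rpm */
    long steps = revs * 8 + 3;                     /* plus three arbitrary 45-degree detents */
    double pi  = acos(-1.0);

    double angle  = steps * (pi / 4.0);            /* net rotation, accumulated in radians */
    double detent = sin((steps % 8) * (pi / 4.0)); /* reduce exactly in integer steps first */

    printf("sin(accumulated angle) = %.17g\n", sin(angle));
    printf("sin(detent angle)      = %.17g\n", detent);
    return 0;
}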

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Nov 14 '05 #59
"CBFalconer" <cb********@yahoo.com> wrote in message
news:40***************@yahoo.com...
"P.J. Plauger" wrote:

... snip ...

I agree that it's a Quality of Implementation issue just how fast
a library function supplies nonsense when called with nonsense
arguments. But I've yet to hear an objective criterion for
determining how much of an argument is garbage. Absent that, the
best way I know for library writers to satisfy customers is to
assume that every input value is exact, and to produce the
closest possible representation to the nearest internal
representation of the corresponding function value.

Of course, if you don't want to take responsibility for analyzing
error propagation, that's fine. Just don't foist the burden on
somebody who doesn't know how fuzzy your inputs really are.


I am gradually coming around to your point of view here, so I am
rewording it. To me, the argument is that the input argument
represents a range of values (absent contrary information) with a
known uncertainty. The most probable actual value is the exact
value. Since d(sinx)/dx is strictly bounded, the resultant
function is never going to fly off in all directions, and will not
become totally meaningless unless that argument uncertainty is in
the order of PI.

We can, at the cost of some computational complexity, assume the
input argument is exact and reduce it to a principal value with an
accuracy governed by the precision of PI. There is no essential
difference between this operation and forming a real value for
1/3. The critical thing there is the precision of our knowledge
of 3.

For a concrete example, consider a mechanism that rotates and
detents at 45 degree intervals. The sine of the resultant angle
can only have 5 discrete values. However the net rotation is
described by the sum of a counter (of full turns) and one of 8
discrete angles, in units of 45 deg. After letting that engine
whir for a day and a half at a 1000 rpm and a half and recording
the final angle, we want to know the sine of that angle. The
computation machinery knows nothing about detents, all it has to
work with is PI, the net rotation angle, and the computation
algorithm for sin(x).

At some point the accuracy of the results will become worse than
the accuracy of the detents, and all blows up. This is not the
same point as that reached by simple modulo PI arithmetic.


I think that you're saying a close approximation to what I've
said.

Thanks,

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #60
CBFalconer wrote:

(snip regarding sin() function)
And that problem is inherent. Adding precision bits for the
reduction will not help, because the input value doesn't have
them. It is the old problem of differences of similar sized
quantities.

When I was in high school I knew someone with a brand
new HP-55 calculator. (You can figure out when that was
if you want.) He was so proud of it, and sure of the
answers it gave. For the sine (in degrees) of 9.999999999e99
it gave something like 0.53, which is obviously wrong
because 9.999999999e99 is a multiple of 180.

Yes, argument reduction is always a problem, because
people will expect it to be right, no matter how useless
the result is. It is a little easier in degrees than
in radians, yet few languages (PL/I being one) support
SIND() and such.
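As a sketch of why degrees are easier (illustrative only, not the PL/I
routine): when the argument is an exact count of degrees, fmod reduces it
exactly, so only the final small-angle conversion to radians is approximate.
The calculator's 9.999999999e99 is not exactly representable as a binary
double, so an exactly representable multiple of 360, 1.8e17, stands in for
it here:

#include <math.h>
#include <stdio.h>

/* Hypothetical sind(): reduce exact degrees mod 360 (fmod is exact),
   then convert the small remainder to radians. */
static double sind(double degrees)
{
    return sin(fmod(degrees, 360.0) * (acos(-1.0) / 180.0));
}

int main(void)
{
    double x = 1.8e17;   /* exactly representable and a multiple of 360 degrees */

    printf("sind(x)          = %g\n", sind(x));
    printf("sin(x * pi/180)  = %g\n", sin(x * (acos(-1.0) / 180.0)));
    return 0;
}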

-- glen

Nov 14 '05 #61
CBFalconer wrote:
"P.J. Plauger" wrote: ... snip ...
I agree that it's a Quality of Implementation issue just how fast
a library function supplies nonsense when called with nonsense
arguments. But I've yet to hear an objective criterion for
determining how much of an argument is garbage. Absent that, the
best way I know for library writers to satisfy customers is to
assume that every input value is exact, and to produce the
closest possible representation to the nearest internal
representation of the corresponding function value.


(snip)
I am gradually coming around to your point of view here, so I am
rewording it. To me, the argument is that the input argument
represents a range of values (absent contrary information) with a
known uncertainty. The most probable actual value is the exact
value. Since d(sinx)/dx is strictly bounded, the resultant
function is never going to fly off in all directions, and will not
become totally meaningless unless that argument uncertainty is in
the order of PI.
You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5ulp.
We can, at the cost of some computational complexity, assume the
input argument is exact and reduce it to a principal value with an
accuracy governed by the precision of PI. There is no essential
difference between this operation and forming a real value for
1/3. The critical thing there is the precision of our knowledge
of 3.
Which value of pi do you use? Rounded to the precision of
the argument? (As the user might have used in generating it).
Much more accurate than the argument? (Unlikely used by
the user.)
For a concrete example, consider a mechanism that rotates and
detents at 45 degree intervals. The sine of the resultant angle
can only have 5 discrete values. However the net rotation is
described by the sum of a counter (of full turns) and one of 8
discrete angles, in units of 45 deg. After letting that engine
whir for a day and a half at a 1000 rpm and a half and recording
the final angle, we want to know the sine of that angle. The
computation machinery knows nothing about detents, all it has to
work with is PI, the net rotation angle, and the computation
algorithm for sin(x).
Now you multiply the angle, in multiples of 45 degrees,
by (pi/4) accurate to 53 or so bits for an IEEE double.
If the argument reduction is done with a 4000 bit accurate
pi, you find many more than 5 values for sine.
At some point the accuracy of the results will become worse than
the accuracy of the detents, and all blows up. This is not the
same point as that reached by simple modulo PI arithmetic.


It may have been fixed by now, but it was well known in
the past that Postscript could not do a proper 90 degree
or 180 degree rotation. It did it by calculating the sine
and cosine of angles in radians, converted from degrees.
It seems that the value of pi used was different than that
used in argument reduction, so that sin(180 degrees) was
not zero. If it is not zero, it is possible that a horizontal
line in the input will not be horizontal, in the output,
when rounded (or truncated, I forget) to pixel positions.
I have been told by professional typesetters that it was
visible in the output of high resolution phototypesetters.
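A small sketch of the symptom being described (illustrative; this is not
PostScript's code): build a 180-degree rotation from sin and cos of the
angle converted to radians and apply it to a point on the x axis. The
rotated y coordinate should be exactly 0 and is not:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double pi    = acos(-1.0);
    double theta = 180.0 * pi / 180.0;   /* "180 degrees", converted through radians */
    double x = 1000.0, y = 0.0;

    double xr = x * cos(theta) - y * sin(theta);
    double yr = x * sin(theta) + y * cos(theta);

    printf("rotated point: (%.17g, %.17g)\n", xr, yr);   /* yr is tiny but nonzero */
    return 0;
}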

-- glen

Nov 14 '05 #62
Dik T. Winter wrote:
(snip)
(someone wrote)
> >What is the sine of 162,873 radians? If you're working in radians,
> >you can represent this input value *exactly* even in a float.
(someone else wrote)
> You *can*, but does it make physical sense to call sine with an integer
> argument (even if represented as a float)?
Must everything make physical sense? Perhaps it makes mathematical sense?
(snip)
Not in mathematical applications, where the argument to the sine function
can very well be exact.


Show me a useful, real life, mathematical problem that requires the
evaluation of the sine of 162,873 radians. Now, I can almost imagine problems
involving large integer numbers of degrees. If those degrees
are multiplied by (pi/180), possibly with a different value
of pi than is used in the argument reduction, all useful bits
are lost.

Yes, radians are nice in many cases, but this is not one.

-- glen

Nov 14 '05 #63
P.J. Plauger wrote:

(snip, someone wrote)
Huh? If I want the phase of an oscillator after 50,000 radians are
you saying that is not computable? Please elaborate.

(someone else wrote)
PI is not a rational number, so any multiple cannot be represented
exactly. In addition, for any fixed accuracy, you will lose
significant digits in normalizing. To take your value of 50,000
radians, you are spending something like 4 decimal digits just for
the multiple of 2 PI, so the accuracy of the normalized angle will
be reduced by at least those 4 places.
No. The *precision* will be reduced. Whether or not the *accuracy*
is reduced depends on your conjecture about the bits that don't
exist off the right end of the fraction. Those of us who write
libraries for a living shouldn't presume that those missing bits
are garbage. We serve our customers best by assuming that they're
all zeros, and delivering up the best approximation to the correct
answer for that assumed value.


Why assume they are all zero? In what cases would someone
want the sine of a number that was a large integer number of
radians? Well, 50,000 isn't so large, but say that it
was 1e50?

The first math library that I used would refuse to do
double precision sine at pi*2**50, pretty much no fraction
bits left. I can almost imagine some cases for an integer
number of degrees, but that would almost only make sense
in decimal floating point arithmetic. (Many calculators
but few computers.)

-- glen
Nov 14 '05 #64
P.J. Plauger wrote:
(snip)
Yes, it's concrete enough to show that you still don't get it.
(snip)
You may think that it's unlikely that all the bits of that value are
meaningful in a given application, and you're probably right. But
you're not *definitely* right. So as a library vendor, best engineering
practice is to supply all the bits that *might* make sense, in case
they actually do. A good quality implementation will continue to
compute the sine of small arguments quickly, but if it has to take
progressively longer to reduce ever larger arguments, then so be it.
If you don't want the cost, reduce the arguments quickly, by your own
crude methods that are good enough for your application, before calling
sine.
By returning a value you are making a claim that there is some
sense in calculating that value. There is no sense in calculating
such large values of radians such that the uncertainty in the
angle is greater than pi.
Note also that even sin(1e-38) *might not* be good to full precision,
because the argument happens not to be good to full precision, but
you have no criterion for judging how many bits are worth computing.
Since it's so easy to compute them, and since they're often all
meaningful, nobody wastes much time debating the cost of producing
bits that are potentially garbage.
Well, small arguments in radians are easy to compute. One
could, for example, do a numeric derivative at this point,
say (sin(2e-38)-sin(1e-38))/(2e-38-1e-38) and likely get an
exact 1.00.

(snip)
You also didn't get that you're still not giving a sufficiently concrete
criterion for when the sine function should stop trying. If you don't
like that a sine function wastes your program microseconds computing
what you shouldn't have told it to compute, then you need to be more
precise about where it can and should give up. You state above that
the function result is definitely meaningless once 1 ulp in the argument
weighs more than 2*pi, but why go that far? Aside from a phase factor,
you've lost all angle information at pi/2. But then how meaningful is it
to have just a couple of bits of fraction information? To repeat what
you stated above:
(snip)

Any of those are close enough for me.
So you've taken on a serious responsibility here. You have to tell us
library vendors just how small "not so large" is, so we know where to
produce quick garbage instead of slower correct answers. If you don't,
you have no right to deem our products unacceptable, or even simply
wasteful.


At that point, all you are saying is that the user should do
their own argument reduction, using the appropriate method
which only the user can know, before calling sin().

Java, at least, defines a standard value for pi, so that programs
can used that in generating arguments. Should you assume that
the argument, in radians, is a multiple of that value of pi,
or a much more accurate one?

-- glen

Nov 14 '05 #65
osmium wrote:

(snip)
You might mention to Mr. Pop the notion of a revolution counter, such as a
Veeder-Root counter on the propeller shaft of an ocean going ship. My
guess is that there are a lot of radians between England and New Jersey. If
Dan Pop needs the name of a particular ship to make the point sufficiently
concrete, I expect some one could provide the name of such a ship.


Design a rotating shaft counter that can count in exact radians
and I will figure out how to calculate the sine of it.

I can easily design one that will count revolutions, degrees,
or most any other integer multiple of revolutions, but not
radians.

There is no point in doing argument reduction with an exact,
to thousands of bits, representation of pi when the user can't
generate arguments with such pi, and has no source of such
arguments.

Go over to comp.lang.pl1, and you will find the sind() and
cosd() functions, which could do exact argument reduction
on large integers. (I have no idea whether they do or not.)

-- glen

Nov 14 '05 #66
"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:GKGoc.80856$Ik.6105956@attbi_s53...
P.J. Plauger wrote:

(snip, someone wrote)
Huh? If I want the phase of an oscillator after 50,000 radians are
you saying that is not computable? Please elaborate.
(someone else wrote)
PI is not a rational number, so any multiple cannot be represented
exactly. In addition, for any fixed accuracy, you will lose
significant digits in normalizing. To take your value of 50,000
radians, you are spending something like 4 decimal digits just for
the multiple of 2 PI, so the accuracy of the normalized angle will
be reduced by at least those 4 places.
No. The *precision* will be reduced. Whether or not the *accuracy*
is reduced depends on your conjecture about the bits that don't
exist off the right end of the fraction. Those of us who write
libraries for a living shouldn't presume that those missing bits
are garbage. We serve our customers best by assuming that they're
all zeros, and delivering up the best approximation to the correct
answer for that assumed value.


Why assume they are all zero?


For the same reason that the library assumes 0.5 (which really looks
like 0.5000000 or thereabouts) is 0.5. The value of a floating-point
number is defined by its bits. Part of that definition effectively
requires you to treat all the bits you don't see as zeros.
In what cases would someone
want the sine of a number that was a large integer number of
radians? Well, 50,000 isn't so large, but say that it
was 1e50?
One of the interesting challenges for us library writers is that
we have to serve all possible customers. We don't get to rule on
which ones are being silly -- or overly demanding. But if we
slight so much as a single case, however extreme or apparently
unlikely, customers rightly castigate us, and reviewers emphasize
the failure all out of proportion to the number of cases we get
right. So *you* can decide that 50,000 radians is worth reducing
properly but 1e50 is not. Mazeltov. Until the C Standard lets us
off the hook beyond some given argument magnitude, however, we
have no excuse not to try.

In fact, one of the commercial test suites we currently use tests
sin(x) up to about 1e18. Is that silly? I can't find any place in
the C Standard that says it isn't. So far, no major potential
customer has seized on this suite as an acceptance test, but *you*
try to explain to some high-level decision maker why it's okay
to pay us for a product that fails a handful of tests.
The first math library that I used would refuse to do
double precision sine at pi*2**50, pretty much no fraction
bits left.
That's nice. Who got to pick the cutoff point?
I can almost imagine some cases for an integer
number of degrees, but that would almost only make sense
in decimal floating point arithmetic. (Many calculators
but few computers.)


Really? I can't imagine why.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #67
"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:HZGoc.79927$0H1.7555079@attbi_s54...
Dik T. Winter wrote:
(snip)
(someone wrote)
> >What is the sine of 162,873 radians? If you're working in radians,
> >you can represent this input value *exactly* even in a float.
(someone else wrote)
> You *can*, but does it make physical sense to call sine with an integer
> argument (even if represented as a float)?
Must everything make physical sense? Perhaps it makes mathematical
sense?
(snip)
Not in mathematical applications, where the argument to the sine
function can very well be exact.


Show me a useful, real life, mathematical problem that requires the
evaluation of the sine of 162,873 radians.


Now you're playing Dan Pop's game. Somebody contrives a problem
that's arguably practical and you get to rule, from the catbird
seat, that it's not really a real-life problem. (And yet both
you and Dan Pop toss out equally academic problems as though
they're pragmatic engineering issues.)

But it doesn't *matter* whether or not I can score points at your
game, because that's not the game I'm in (as I keep repeating).
My company endeavors to make high quality libraries that satisfy
programming language standards and meet the needs of demanding
customers. We don't have the luxury of demanding in turn that our
customers prove that everything they do makes sense. Our treaty
point is the C Standard, in this case. We don't match it, they
have a reason to be pissed. We match it, they don't have a case.
(Well, there are also QOI issues, but that's not at issue here.)
Now, I can almost imagine problems
involving large integer numbers of degrees.
Why is that better? It's just easier to comprehend the truncations
involved, and easier to compute the associated nonsense, or valid
reduced argument. Still the same problem.
If those degrees
are multiplied by (pi/180), possibly with a different value
of pi than is used in the argument reduction, all useful bits
are lost.
Indeed. As a library vendor, you use the best possible value of
pi. If that's not what the customer used, it's the customer's
problem. (I once had a customer who was sure we had a bad tangent
function, until I pointed out that he was approximating pi in
his program with 22/7.)
Yes, radians are nice in many cases, but this is not one.


Only because it's harder to reduce the argument accurately.
In either case, you can't give any more meaning to an argument
reduced from a large magnitude, *or any less.*

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #68
"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:CcHoc.3405$1q3.396588@attbi_s51...
osmium wrote:

(snip)
You might mention to Mr. Pop the notion of a revolution counter, such as a Veeder-Root counter on the propeller shaft of an ocean going ship. My
guess is that there are a lot of radians between England and New Jersey. If Dan Pop needs the name of a particular ship to make the point sufficiently concrete, I expect some one could provide the name of such a ship.
Design a rotating shaft counter that can count in exact radians
and I will figure out how to calculate the sine of it.

I can easily design one that will count revolutions, degrees,
or most any other integer multiple of revolutions, but not
radians.

There is no point in doing argument reduction with an exact,
to thousands of bits, representation of pi when the user can't
generate arguments with such pi, and has no source of such
arguments.


Your logic is specious. What if the customer is computing
directly in radians? It's an artifact of the library function
that requires the use of many bits of pi. It turns out that
the easiest way to compute the sine is to convert the argument to quadrants
and compute it over the interval [-0.5, 0.5]. So we have to
work in high precision to give the customer the best answer.
The customer *does not* have to work in equally high precision
to give us an input worthy of computing a sine.

You've once again fallen into the trap of making up stories
about the validity of arguments presented to a library function.
That's *not* a luxury we library writers can indulge.
Go over to comp.lang.pl1, and you will find the sind() and
cosd() functions, which could do exact argument reduction
on large integers. (I have no idea whether they do or not.)


Yes, it's easier to do. Why do you think the resulting reduced
arguments are any worthier of evaluating, or any less?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #69
"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:PnHoc.81199$Ik.6124808@attbi_s53...
P.J. Plauger wrote:
(snip)
Yes, it's concrete enough to show that you still don't get it.
(snip)
You may think that it's unlikely that all the bits of that value are
meaningful in a given application, and you're probably right. But
you're not *definitely* right. So as a library vendor, best engineering
practice is to supply all the bits that *might* make sense, in case
they actually do. A good quality implementation will continue to
compute the sine of small arguments quickly, but if it has to take
progressively longer to reduce ever larger arguments, then so be it.
If you don't want the cost, reduce the arguments quickly, by your own
crude methods that are good enough for your application, before calling
sine.


By returning a value you are making a claim that there is some
sense in calculating that value. There is no sense in calculating
such large values of radians such that the uncertainty in the
angle is greater than pi.


You've made up a story again, about the "uncertainty" in the reduced
angle. All the library writer knows is that there's little *precision*
in the representation of that reduced angle. But it still has a well
defined value and it may well be meaningful to the customer.
Note also that even sin(1e-38) *might not* be good to full precision,
because the argument happens not to be good to full precision, but
you have no criterion for judging how many bits are worth computing.
Since it's so easy to compute them, and since they're often all
meaningful, nobody wastes much time debating the cost of producing
bits that are potentially garbage.


Well, small arguments in radians are easy to compute. One
could, for example, do a numeric derivative at this point,
say (sin(2e-38)-sin(1e-38))/(2e-38-1e-38) and likely get an
exact 1.00.


You missed the point. Of *course* they're easy to compute. So
too is the sine of 1e38 quadrants -- it's zero. But should we
decide how much garbage to give back to customers based on how
easy it is for us to compute it? Suppose I make up a story that
most people who call sine with tiny arguments don't know what
the hell they're doing. I could modulate your claim above:

: By returning a value you are making a claim that there is some
: sense in calculating that value. There is no sense in calculating
: such tiny values of radians such that the magnitude of the
: angle is much less than pi.

Who knows, maybe I could even cite studies. But in any case, I
wouldn't last long in front of a customer waving a bug report
under my nose. And I would have no defense in the C Standard.

Conjectures about the validity of input arguments just don't
cut it.
(snip)
You also didn't get that you're still not giving a sufficiently concrete
criterion for when the sine function should stop trying. If you don't
like that a sine function wastes your program microseconds computing
what you shouldn't have told it to compute, then you need to be more
precise about where it can and should give up. You state above that
the function result is definitely meaningless once 1 ulp in the argument
weighs more than 2*pi, but why go that far? Aside from a phase factor,
you've lost all angle information at pi/2. But then how meaningful is it
to have just a couple of bits of fraction information? To repeat what
you stated above:
(snip)

Any of those are close enough for me.


That's nice. Would you mind writing that up as a DR? I'd love it
if the C Standard was changed to make less work for me.
So you've taken on a serious responsibility here. You have to tell us
library vendors just how small "not so large" is, so we know where to
produce quick garbage instead of slower correct answers. If you don't,
you have no right to deem our products unacceptable, or even simply
wasteful.


At that point, all you are saying is that the user should do
their own argument reduction, using the appropriate method
which only the user can know, before calling sin().


Yes, that's exactly what I'm saying. And I said it in response to
Dan Pop's claim. He alleges that taking a long time to compute the
sine of a large angle is "wasteful" of CPU cycles, even "unacceptable."

Groucho Marx: Doctor, doctor, it hurts when I do this. What should I do?

Doctor: Don't do that.
Java, at least, defines a standard value for pi, so that programs
can use that in generating arguments. Should you assume that
the argument, in radians, is a multiple of that value of pi,
or a much more accurate one?


Whatever value pi may have when truncated to machine precision
is useless for reducing arguments properly. The error shows up
after just a few radians. It's hard to defend a conjecture that
the sine of 5*pi is never worth computing properly.
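A sketch of that claim (illustrative only): reduce the argument with 2*pi
truncated to machine precision, via fmod, and compare with the library sine,
which reduces with a much longer approximation of pi. The disagreement grows
with the magnitude of the argument:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double two_pi = 2.0 * acos(-1.0);     /* 2*pi rounded to double */
    double xs[] = { 20.0, 1.0e6, 1.0e15 };
    int i;

    for (i = 0; i < 3; i++) {
        double r = fmod(xs[i], two_pi);   /* reduction with machine-precision pi */
        printf("x = %-8g  sin(fmod(x, 2*pi)) = %.17g  sin(x) = %.17g\n",
               xs[i], sin(r), sin(xs[i]));
    }
    return 0;
}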

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #70
"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:AzHoc.39038$xw3.2533208@attbi_s04...
CBFalconer wrote:
"P.J. Plauger" wrote:

... snip ...

I agree that it's a Quality of Implementation issue just how fast
a library function supplies nonsense when called with nonsense
arguments. But I've yet to hear an objective criterion for
determining how much of an argument is garbage. Absent that, the
best way I know for library writers to satisfy customers is to
assume that every input value is exact, and to produce the
closest possible representation to the nearest internal
representation of the corresponding function value.
(snip)
I am gradually coming around to your point of view here, so I am
rewording it. To me, the argument is that the input argument
represents a range of values (absent contrary information) with a
known uncertainty. The most probable actual value is the exact
value. Since d(sinx)/dx is strictly bounded, the resultant
function is never going to fly off in all directions, and will not
become totally meaningless unless that argument uncertainty is in
the order of PI.


You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5ulp.


Another conjecture. And what if it wasn't? Then you'd have no
excuse other than to take the argument at face value and do
the best you can with it. I say again -- it is up to the programmer
to do error-propagation analysis and decide how much precision to
trust in the final answer. The library writer doesn't have the
luxury of giving back garbage bits because they *might* not be
meaningful.

But I kinda like the freedom you offer us. We could write *all*
library functions on the basis that the return value can be
anywhere between f(x - 1/2 ulp) and f(x + 1/2 ulp) -- speaking
symbolically, of course. Boy would that make it easier to compute
exponentials, powers, gamma functions, etc. But I bet you wouldn't
like the results.
We can, at the cost of some computational complexity, assume the
input argument is exact and reduce it to a principal value with an
accuracy governed by the precision of PI. There is no essential
difference between this operation and forming a real value for
1/3. The critical thing there is the precision of our knowledge
of 3.


Which value of pi do you use? Rounded to the precision of
the argument? (As the user might have used in generating it).
Much more accurate than the argument? (Unlikely used by
the user.)


Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #71
"P.J. Plauger" wrote:
.... snip ...
Whatever value pi may have when truncated to machine precision
is useless for reducing arguments properly. The error shows up
after just a few radians. It's hard to defend a conjecture that
the sine of 5*pi is never worth computing properly.


This I have to strongly disagree with. What you have to do is
effectively extend the significant bits of the argument while
reducing it. As long as you assume the argument is exact, you can
do this from the unlimited supply of zeroes. The accuracy of 10
mod 3 is dependent on the accuracy of 3. It cannot exceed the
accuracy of either 10 or 3, but it need lose nothing in its
computation.
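A sketch of that idea (illustrative constants, not any library's): treat the
argument as exact, so its unseen low bits are zeros, and carry 2*pi as a high
part plus a low correction, so the reduction effectively uses about twice as
many bits of pi as fit in one double:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.0e6;                          /* taken to be exact */
    double twopi_hi = 2.0 * acos(-1.0);        /* 2*pi rounded to double */
    double twopi_lo = 2.4492935982947064e-16;  /* roughly the next bits of 2*pi */

    double n  = floor(x / twopi_hi);
    double r1 = fmod(x, twopi_hi);             /* exact: x - n*twopi_hi */
    double r  = r1 - n * twopi_lo;             /* fold in the extra bits of 2*pi */

    printf("reduced with 53-bit 2*pi: sin = %.17g\n", sin(r1));
    printf("reduced with extra bits : sin = %.17g\n", sin(r));
    printf("library sin(x)          : sin = %.17g\n", sin(x));
    return 0;
}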

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Nov 14 '05 #72
glen herrmannsfeldt wrote:
CBFalconer wrote:
"P.J. Plauger" wrote:

... snip ...

I agree that it's a Quality of Implementation issue just how fast
a library function supplies nonsense when called with nonsense
arguments. But I've yet to hear an objective criterion for
determining how much of an argument is garbage. Absent that, the
best way I know for library writers to satisfy customers is to
assume that every input value is exact, and to produce the
closest possible representation to the nearest internal
representation of the corresponding function value.


(snip)
I am gradually coming around to your point of view here, so I am
rewording it. To me, the argument is that the input argument
represents a range of values (absent contrary information) with a
known uncertainty. The most probable actual value is the exact
value. Since d(sinx)/dx is strictly bounded, the resultant
function is never going to fly off in all directions, and will not
become totally meaningless unless that argument uncertainty is in
the order of PI.


You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5ulp.


Consider how you compute 1e30 DIV 3 to any desired accuracy. The
result depends on the accuracy of 3 alone if you assume that 1e30
is exact. Similarly, consider the results on division of 1e30 by
0.33333. (exactly 5 3s)
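In code form (trivial, but it makes the point): with 1e30 taken as exact, the
quality of the quotient is set entirely by the divisor:

#include <stdio.h>

int main(void)
{
    double big = 1.0e30;

    printf("1e30 / 3       = %.17g\n", big / 3.0);
    printf("1e30 / 0.33333 = %.17g   (3e30 would be %.17g)\n",
           big / 0.33333, 3.0e30);
    return 0;
}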

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Nov 14 '05 #73
Tim Prince wrote:
Grumble wrote:
AMD64 supports x87. Thus FSIN is available. What did you mean by
"not really true of AMD64 either" ?


Possibly referring to compilers complying with ABIs disallowing
x87, or taking advantage of higher performance of SSE parallel
libraries. Use of fsin on IA64 is extremely unlikely, even though
it's still there.


I assume you meant AMD64, not IA64?

Where did you read that the AMD64 ABI disallowed x87?

http://www.x86-64.org/abi.pdf

Nov 14 '05 #74
P.J. Plauger wrote:
[Trigonometry on angles that make ordinary libraries dizzy and such]
Until the C Standard lets us off the hook beyond some given argument
magnitude, however, we have no excuse not to try.


I think that the Standard lets implementors off the hook at any and all
magnitudes, signs and, generally, values, actually, and not even _just_
for the library... from the outset! C99, 5.2.4.2.2p4:

`The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that
the accuracy is unknown.'

So if I write my implementation to always produce 42. for an expression
involving floating-point mathematics (except in the very few situations
where there are absolute requirements imposed) and document this, it is
not disqualified from full conforming hosted implementation status (not
through another (?) loophole).

I think that it'd be particularly ironic to combine this with unlimited
precision for floating-point constants.

--
++acr@,ka"
Nov 14 '05 #75
In <sl****************@ID-227112.user.uni-berlin.de> Sam Dennis <sa*@malfunction.screaming.net> writes:
P.J. Plauger wrote:
[Trigonometry on angles that make ordinary libraries dizzy and such]
Until the C Standard lets us off the hook beyond some given argument
magnitude, however, we have no excuse not to try.


I think that the Standard lets implementors off the hook at any and all
magnitudes, signs and, generally, values, actually, and not even _just_
for the library... from the outset! C99, 5.2.4.2.2p4:

`The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that
the accuracy is unknown.'

So if I write my implementation to always produce 42. for an expression
involving floating-point mathematics (except in the very few situations
where there are absolute requirements imposed) and document this, it is
not disqualified from full conforming hosted implementation status (not
through another (?) loophole).


True and irrelevant, as we're discussing what a high quality
implementation should do.

If you're looking for loopholes in the C standard, it contains one so
large as to make even a *completely* useless implementation (a two line
shell script) fully conforming.

1 The implementation shall be able to translate and execute at
^^
least one program that contains at least one instance of every
^^^^^^^^^^^^^^^^^
one of the following limits:

Other, strictly conforming programs, must be merely accepted (whatever
that means, the standard provides no further clues).

So consider the following "implementation" for Unix systems:

echo "Program accepted."
cp /bin/true a.out

combined with an implementation of /bin/true that exercises all the
translation limits enumerated in 5.2.4.1 and the documentation required
by the standard.

Now, every translation unit, either correct or not, will receive the one
diagnostic required by the standard, all strictly conforming programs are
"accepted" (as well as all other programs ;-) and the *one* C program
mentioned above is translated and executed.

The real question is: how many users is my implementation going to have?
Answer: as many as yours ;-) This illustrates the importance of a factor
that is not even mentioned in the C standard: the quality of
implementation.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #76
Dan Pop wrote:
Sam Dennis <sa*@malfunction.screaming.net> writes:
`The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that
the accuracy is unknown.'

So if I write my implementation to always produce 42. [...]
True and irrelevant, as we're discussing what a high quality
implementation should do.


Ridiculous accuracy for silly arguments is unnecessary for high quality
programs, which should accommodate libraries coded by mere mortals that
can furthermore calculate logarithms in less time than it takes for the
operator to look up the answer himself. (Admittedly, such programs can
not operate on my hypothetical implementation if they actually make use
of the floating-point facilities provided by C itself; the portable way
to get guaranteed accuracy is to use a library and its types for all of
one's non-integral calculations, written in strictly conforming C.)
If you're looking for loopholes in the C standard, it contains one so
large as to make even a *completely* useless implementation (a two line
shell script) fully conforming.

1 The implementation shall be able to translate and execute at
least one program that contains at least one instance of every
one of the following limits:
I've decided that that was probably a political decision to allow buggy
compilers to claim conformance. Specifying accuracy for floating-point
operations such that enough current implementations can conform, on the
other hand, is a genuinely hellish task.
[A conforming implementation, once documentation is attached]
echo "Program accepted."
cp /bin/true a.out
(The extension below can be ignored if you don't consider `accepted' to
require even an appropriate diagnostic from a one line file that starts
with `#error'.)

Well, obviously there's the #error thing, which means that #include has
to be handled, which means...

cat "$@"
echo "Program accepted."
cp /bin/true a.out

....and the documentation must state that #include source file searching
is not supported and that all output is diagnostics (or similar words).
But, yes, that'd be a conforming implementation; and, because any input
is `acceptable' to it, any data is a conforming program; that's no news
to you, of course.

Yes, the Standard has huge loopholes. Those two are best ignored, with
useful definitions of `conforming' implementation and programs mentally
substituted (in the latter case, by `strictly conforming program').
This illustrates the importance of a factor that is not even mentioned
in the C standard: the quality of implementation.


Oh, but it would be a perfectly good implementation in most other ways.

Actually, I've been thinking for a long time about writing this, except
that the quality of implementation, as it's normally understood, should
be variable, for conformance testing of programs.

Catching all the potential problems isn't feasible, but a useful number
is probably just about manageable.

--
++acr@,ka"
Nov 14 '05 #77
P.J. Plauger wrote:
(I wrote)
You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5ulp.
Another conjecture. And what if it wasn't? Then you'd have no
excuse other than to take the argument at face value and do
the best you can with it. I say again -- it is up to the programmer
to do error-propagation analysis and decide how much precision to
trust in the final answer. The library writer doesn't have the
luxury of giving back garbage bits because they *might* not be
meaningful.
Consider this program fragment, as no real examples have
been presented so far:

for(x=0;x+60!=x;x+=60) printf("%20.15g\n",sin(x*A));

A should have the appropriate value so that x is
expressed in degrees. Now, consider the effect
of ordinary argument reduction, and unusually
accurate argument reduction.

If you do argument reduction using an appropriate
approximation to pi/4 to machine precision, you get
the answer expected by the programmer. If you use
a pi/4 much more accurate than machine precision, the
result will slowly diverge from the expected value.
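
A compilable sketch of that fragment, with an illustrative bound added
so the loop finishes quickly (A is assumed here to be pi/180, making x
degrees; the exact sines of multiples of 60 degrees are 0 or
+/-sqrt(3)/2, so any drift in the output reflects the rounded
conversion factor plus the argument reduction):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double A = 3.14159265358979323846 / 180.0;  /* degrees -> radians */
    double x;

    for (x = 60.0; x < 1.0e12; x *= 100.0)
        printf("%15.0f deg  sin = %20.15g\n", x, sin(x * A));
    return 0;
}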

But I kinda like the freedom you offer us. We could write *all*
library functions on the basis that the return value can be
anywhere between f(x - 1/2 ulp) and f(x + 1/2 ulp) -- speaking
symbolically, of course. Boy would that make it easier to compute
exponentials, powers, gamma functions, etc. But I bet you wouldn't
like the results.
(snip)
Which value of pi do you use? Rounded to the precision of
the argument? (As the user might have used in generating it).
Much more accurate than the argument? (Unlikely used by
the user.)

Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?


Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?

The IBM 360/91 was documented as not following the S/360
architecture because its floating point divide gave rounded
results (0.5ulp) instead of the truncated results (1ulp)
that the architecture specified.

Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.

As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.

The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
on the square root algorithm is done slightly differently
to avoid precision loss in a halve operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)

Giving a fatal error tells the user that something is
wrong and should be fixed. Supplying answers using bits
that don't really exist allows the user to believe that
useful answers are being generated, possibly wasting
much calculation and money.

Actually, the time when this happened to me, probably
when I was in ninth grade, (due to an undefined variable
if I remember right), someone carefully explained to me
why the library would refuse such a calculation. It made
sense at the time, and it still does.

-- glen

Nov 14 '05 #78
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
P.J. Plauger wrote:
Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?


Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?


Because Crays are _very_ specialised machines, which can make
assumptions that a library writer for a normal computer cannot.

I must agree with Mr. Plauger, here. If a library gives me a value with
more precision than I need or expect, I can always ignore that extra
precision. If it gives me a value with less precision than it could, as
some people seem to be advocating, I can _never_ add that precision
myself without doing the library's job for it, which should not be the
aim of a good quality library.

Richard
Nov 14 '05 #79
In <sl****************@ID-227112.user.uni-berlin.de> Sam Dennis <sa*@malfunction.screaming.net> writes:
Dan Pop wrote:
If you're looking for loopholes in the C standard, it contains one so
large as to make even a *completely* useless implementation (a two line
shell script) fully conforming.

1 The implementation shall be able to translate and execute at
least one program that contains at least one instance of every
one of the following limits:
I've decided that that was probably a political decision to allow buggy
compilers to claim conformance.


It's actually a pragmatic decision: for any given implementation, it is
possible to construct a strictly conforming program exceeding the
implementation's resources, while still staying within the translation
limits mentioned in the standard.

One year or so ago I explored the ways of avoiding this wording by
adding a few more translation limits. Then, the standard could reasonably
require the translation and execution of *any* program staying within the
translation limits.
Specifying accuracy for floating-point
operations such that enough current implementations can conform, on the
other hand, is a genuinely hellish task.


I don't think so. If the standard could specify minimal accuracy
requirements for the floating point types, it could also specify
minimal accuracy requirements for the floating point operations. I have
already debated this issue in comp.std.c and my opponents came up with bogus
counterarguments (like the unfeasibility of documenting the accuracy of
all the <math.h> functions, although I was strictly and explicitly
talking about the floating point operations).
[A conforming implementation, once documentation is attached]
echo "Program accepted."
cp /bin/true a.out


(The extension below can be ignored if you don't consider `accepted' to
require even an appropriate diagnostic from a one line file that starts
with `#error'.)


Who says that the one and only correctly translated program has to
contain the #error directive?
This illustrates the importance of a factor that is not even mentioned
in the C standard: the quality of implementation.


Oh, but it would be a perfectly good implementation in most other ways.


I.e. for people with no need for floating point. This is actually the
case with certain implementations for embedded control applications:
floating point support is too expensive and irrelevant to the typical
application domain of the processor, yet the standard says that it must
be provided. A much better solution would have been to make floating
point support optional for freestanding implementations.

In the case of hosted implementations, people would choose the ones
providing proper support for floating point even if they don't need it.

OTOH, considering that practically every hosted implementation in current
use today is for systems with native floating point support, the issue is
moot: the implementation delivers whatever the hardware delivers.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #80
In <40***************@news.individual.net> rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
P.J. Plauger wrote:
> Again, you're *presuming* that the user did some sloppy calculation
> to arrive at a large argument to sine. If that's what happened, then
> the result will be correspondingly inaccurate. But that's *always*
> true of every calculation, whether it involves sin(x) or any other
> math function. It's true even if no math functions are called at
> all. The authors of the math functions can only take what they're
> given and do the best possible job with them. Why is that such
> a hard principle to grasp?


Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?


Because Crays are _very_ specialised machines, which can make
assumptions that a library writer for a normal computer cannot.

I must agree with Mr. Plauger, here. If a library gives me a value with
more precision than I need or expect, I can always ignore that extra
precision. If it gives me a value with less precision than it could, as
some people seem to be advocating, I can _never_ add that precision
myself without doing the library's job for it, which should not be the
aim of a good quality library.


You have failed to understand the substance of this discussion. Nobody
argued for less precision than could be *meaningfully* computed.

Back to my extreme example, one could consider DBL_MAX as an *exact*
value and spend significant CPU resources computing sin(DBL_MAX) with
double precision accuracy, but when would be that precision *meaningful*?

Give the user as much precision as makes sense given the magnitude of
the argument and let *him* do the argument reduction if he needs more
precision, because only he can decide what is the proper way of performing
the argument reduction for his application. More often than not, trying
to outsmart the user results in dumb behaviour.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #81
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx*******@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:...
>> Concrete examples, please.
>
>What is the sine of 162,873 radians? If you're working in radians,
>you can represent this input value *exactly* even in a float.

You *can*, but does it make physical sense to call sine with an integer
argument (even if represented as a float)?


Must everything make physical sense? Perhaps it makes mathematical sense?


And does it? Last time I checked, mathematics define the sine as having
a real argument and the C programming language provides zilch support for
real numbers.


Yup.
In real life applications, the argument of sine is computed using
floating point arithmetic (on non-integer values), so it *is* a fuzzy
value, with the degree of fuzziness implied by its magnitude.


Not in mathematical applications, where the argument to the sine function
can very well be exact.


Please elaborate, again, with concrete examples.


You want a concrete example, I just do think that such examples are possible.
Where do these
"mathematical applications" get their input data from? How do they
handle the *incorrect* (from the mathematics POV) result of calling sine
for pretty much *any* argument except 0?


Pretty much as in every such case. Careful error analysis.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #82
"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:V7Xoc.6287$6f5.499223@attbi_s54...
Consider this program fragment, as no real examples have
been presented so far:

for(x=0;x+60!=x;x+=60) printf("%20.15g\n",sin(x*A));

A should have the appropriate value so that x is
expressed in degrees. Now, consider the effect
of ordinary argument reduction, and unusually
accurate argument reduction.

If you do argument reduction using an appropriate
approximation to pi/4 to machine precision, you get
the answer expected by the programmer. If you use
a pi/4 much more accurate than machine precision, the
result will slowly diverge from the expected value.
Okay, so what? The programmer has unrealistic expectations.
It's the library vendor's responsibility to meet possibly
realistic expectations. The vendor can't possibly guess
all the ways that library functions will be called with
arguments that need "correcting" so as not to hurt the
programmer's feelings.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?


Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?


Not sure what this has to do with the current discussion, but
I'll answer anyway. There was a market for machines that were
fast -- at least by the standards of those days -- and a prevailing
view that inaccurate floating-point was safe enough if you just
carried a few extra sacrificial precision bits. That view was
correct often enough that not all of Cray's customers went
bankrupt.
What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
The IBM 360/91 was documented as not following the S/360
architecture because its floating point divide gave rounded
results (0.5ulp) instead of the truncated results (1ulp)
that the architecture specified.
As before.
What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.
Most likely.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.

Okay.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
on the square root algorithm is done slightly different
to avoid precision loss in a halve operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)
Nice criteria. I don't see them anywhere in the C Standard.
Are you going to propose them as a DR? Be fun to see you
defend why you pick the exponent 18 for 24-bit (IEEE)
precision, 50 for 53-bit and 100 for 64-bit(!) or 113-bit.

And what "fatal" error are you going to introduce, given that
the math library currently has no such concept? Even IEEE 754
always provides for continuing execution with some error
code. Looks like you're bravely guiding the standards community
into new and uncharted territory. Brave of you.
Giving a fatal error tells the user that something is
wrong and should be fixed.
Can you quantify the "wrongness" in terms of your criteria?
Should they not also be applied to the various rounding
functions? If not, why not?
Supplying answers using bits
that don't really exist allows the user to believe that
useful answers are being generated, possibly wasting
much calculation and money.
Happens all the time, when users call math functions with
arguments having bits that don't really exist. Just whose
responsibility is that, anyway?
Actually, the time when this happened to me, probably
when I was in ninth grade, (due to an undefined variable
if I remember right), someone carefully explained to me
why the library would refuse such a calculation. It made
sense at the time, and it still does.


Things that make sense for a ninth-grade class don't necessarily
scale to professional libraries licensed to professional
programmers, both working to language standards developed
by international organizations.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #83
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx*******@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <c7**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
> In <1d*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
...
> >> Concrete examples, please.
> >
> >What is the sine of 162,873 radians? If you're working in radians,
> >you can represent this input value *exactly* even in a float.
>
> You *can*, but does it make physical sense to call sine with an integer
> argument (even if represented as a float)?

Must everything make physical sense? Perhaps it makes mathematical sense?


And does it? Last time I checked, mathematics define the sine as having
a real argument and the C programming language provides zilch support for
real numbers.


Yup.


Yup what?!? Please elaborate.
> In real life applications, the argument of sine is computed using
> floating point arithmetic (on non-integer values), so it *is* a fuzzy
> value, with the degree of fuzziness implied by its magnitude.

Not in mathematical applications, where the argument to the sine function
can very well be exact.


Please elaborate, again, with concrete examples.


You want a concrete example, I just do think that such examples are possible.


This is not good enough.
"mathematical applications" get their input data from? How do they
handle the *incorrect* (from the mathematics POV) result of calling sine
for pretty much *any* argument except 0?


Pretty much as in every such case. Careful error analysis.


With the exception of numerical analysis, mathematical results are exact,
by definition.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #84
In article <C4*******************@nwrddc02.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote:

<snipped lots of stuff>

Apart from the precision of an individual call to sin(x), would people
think that fabs (sin (x)) <= 1.0 must be true for _all_ values of x, no
matter how large? And should sin (x)*sin(x) + cos(x)*cos(x) be close to
1 for all x, no matter how large?
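
A quick, hedged way to probe both questions on a given implementation
(the sample arguments are arbitrary, not drawn from any post above):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double xs[] = { 1.0, 1.0e8, 1.0e15, 1.0e300 };
    int i;

    for (i = 0; i < 4; i++) {
        double x = xs[i], s = sin(x), c = cos(x);
        printf("x = %-8g  |sin(x)| <= 1: %d   sin^2+cos^2 - 1 = %.3g\n",
               x, fabs(s) <= 1.0, s * s + c * c - 1.0);
    }
    return 0;
}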
Nov 14 '05 #85
P.J. Plauger wrote:
"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:V7Xoc.6287$6f5.499223@attbi_s54...
Consider this program fragment, as no real examples have
been presented so far:

for(x=0;x+60!=x;x+=60) printf("%20.15g\n",sin(x*A));
(snip)
Okay, so what? The programmer has unrealistic expectations.
It's the library vendor's responsibility to meet possibly
realistic expectations. The vendor can't possibly guess
all the ways that library functions will be called with
arguments that need "correcting" so as not to hurt the
programmer's feelings.
That example is more realistic than any you have posted
so far.
What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
(snip)
Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.

Most likely.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
When they use a single precision function they expect
less accurate answers than a double precision function.
As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.

Okay.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
It says that when the operations require exceptional
ability to perform, far above and beyond the call of
duty, it is time to give up. When thousands of
digits of pi are required to compute the result on
a 64 bit operand, something is wrong.
The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
on the square root algorithm is done slightly different
to avoid precision loss in a halve operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)

Nice criteria. I don't see them anywhere in the C Standard.
Are you going to propose them as a DR? Be fun to see you
defend why you pick the exponent 18 for 24-bit (IEEE)
precision, 50 for 53-bit and 100 for 64-bit(!) or 113-bit.
The requirements of the C standard have already been
discussed, and that didn't seem to bother you any.

The numbers above are not for IEEE, as it hadn't been
invented yet. Extended precision has 112 bit fraction.
And what "fatal" error are you going to introduce, given that
the math library currently has no such concept? Even IEEE 754
always provides for continuing execution with some error
code. Looks like you're bravely guiding the standards community
into new and uncharted territory. Brave of you.
It seems that on most implementations sqrt() has the
ability to generate a fatal error when given a negative
argument.
Giving a fatal error tells the user that something is
wrong and should be fixed.

Can you quantify the "wrongness" in terms of your criteria?
Should they not also be applied to the various rounding
functions? If not, why not?
There are no problems of any use to anyone where they
are useful.

(snip)
Happens all the time, when users call math functions with
arguments having bits that don't really exist. Just whose
responsibility is that, anyway?

Actually, the time when this happened to me, probably
when I was in ninth grade, (due to an undefined variable
if I remember right), someone carefully explained to me
why the library would refuse such a calculation. It made
sense at the time, and it still does.

Things that make sense for a ninth-grade class don't necessarily
scale to professional libraries licensed to professional
programmers, both working to language standards developed
by international organizations.


It wasn't in a class. They didn't teach it in ninth grade,
and as far as I know they don't now. It was explained to
me by a professional physicist.

-- glen

Nov 14 '05 #86
In article <a59pc.1495$gr.83718@attbi_s52>,
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
It says that when the operations require exceptional
ability to perform, far above and beyond the call of
duty, it is time to give up. When thousands of
digits of pi are required to compute the result on
a 64 bit operand, something is wrong.


Just for a thought: There is this thing called "inverse error analysis".
You try to calculate y = f (x). Usually you assume that your
floating-point math will produce a value y' instead which is not quite
equal to y, and you want to keep the difference between y and y' small.
But instead you might assume that you calculate y = f (x'), that is your
floating-point arithmetic produced the exact function value for some
different x', hopefully close to x.

Now consider y = cos (x). For x = 0, cos (x) = 1, and it becomes smaller
very slowly; for small x we have cos (x) approximately 1 - x^2/2. Now
take 64 bit IEEE floating point. The largest number smaller than 1 is 1
- 2^-53. Choose x such that cos x is about 1 - 2^-54, x = pow (2.0,
-53.0/2). For this x, the best that your implementation of cos (x) can
do is to return a result of either 1 or 1 - 2^-53. 1 = cos (x') for x' =
0, 1 - 2^-53 = cos (x') for x' about pow (2.0, -26). So the best you can
possibly get is cos (x'), with x' either zero or 41 percent larger than
x. And there is nothing you can do about it.

No matter how good your cos () implementation is, the worst case with
inverse error analysis is horrendously bad.
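
A small sketch of that worst case, assuming IEEE double (acos is used
only to show which x' the returned value actually answers for):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = pow(2.0, -53.0 / 2.0);   /* cos(x) is about 1 - 2^-54        */
    double c = cos(x);                  /* can only round to 1 or 1 - 2^-53 */

    printf("x             = %.17g\n", x);
    printf("cos(x)        = %.17g\n", c);
    printf("acos(cos(x))  = %.17g\n", acos(c));
    printf("x' candidates : 0 and %.17g\n", pow(2.0, -26.0));
    return 0;
}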
Nov 14 '05 #87
"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:a59pc.1495$gr.83718@attbi_s52...
P.J. Plauger wrote:
"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:V7Xoc.6287$6f5.499223@attbi_s54...
Consider this program fragment, as no real examples have
been presented so far:

for(x=0;x+60!=x;x+=60) printf("%20.15g\n",sin(x*A));
(snip)
Okay, so what? The programmer has unrealistic expectations.
It's the library vendor's responsibility to meet possibly
realistic expectations. The vendor can't possibly guess
all the ways that library functions will be called with
arguments that need "correcting" so as not to hurt the
programmer's feelings.


That example is more realistic than any you have posted
so far.


Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.
Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.
Most likely.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?


When they use a single precision function they expect
less accurate answers than a double precision function.


No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.
As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.
Okay.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?


It says that when the operations require exceptional
ability to perform, far above and beyond the call of
duty, it is time to give up. When thousands of
digits of pi are required to compute the result on
a 64 bit operand, something is wrong.


"Far above and beyond" is once again qualitative arm waving.
You may hear a weaker call to duty than others, but there's
nowhere in the C Standard that justifies an arbitrary
point beyond which it's okay to give up on sine, or any other
function. You think sine is hard to get right for some
arguments, try lgamma, tgamma, and erfc (all in C99).
The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
on the square root algorithm is done slightly different
to avoid precision loss in a halve operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)
Nice criteria. I don't see them anywhere in the C Standard.
Are you going to propose them as a DR? Be fun to see you
defend why you pick the exponent 18 for 24-bit (IEEE)
precision, 50 for 53-bit and 100 for 64-bit(!) or 113-bit.


The requirements of the C standard have already been
discussed, and that didn't seem to bother you any.


What, you mean that a conforming implementation has no explicit
precision requirements? I understand that; in fact I was one
of the principal proponents of that (lack of) requirement in
the C Standard. Dan Pop already responded to that issue better
than I can. It is well understood that every implementation
has certain "quality of implementation" features. (I also
pioneered use of that term.) If getc takes two days per character,
nobody will buy your library. Similarly, if math functions
trap out on certain unspecified values, word will get out and
your customers will stay away in droves.

Math libraries are tougher to quantify, because they require so
many measurements. But once you do the measurements and publish
them, you'd better be prepared to justify the results. You can't
just say, "I decided not to implement pow for negative exponents."
Or more to the point, "at some point the results of sine begin
to really suck, but I'd rather not say where that is." And if
you *do* choose to publish your threshold of suckiness, you'd
better have a rationale for choosing it. Otherwise, some third
party tester will report that your sine function occasionally
is off by 40 billion ulp, and not provide the rationale for why
that might be.
The numbers above are not for IEEE, as it hadn't been
invented yet. Extended precision has 112 bit fraction.


Oh, I see. You were talking about criteria accompanying a
40-year-old architecture, with software technology to match.
And what "fatal" error are you going to introduce, given that
the math library currently has no such concept? Even IEEE 754
always provides for continuing execution with some error
code. Looks like you're bravely guiding the standards community
into new and uncharted territory. Brave of you.


It seems that on most implementations sqrt() has the
ability to generate a fatal error when given a negative
argument.


Really? The C Standard says it should set errno to EDOM and
return some failure value. If your implementation also obeys
IEC 60559 (aka IEEE 754) then you should return a NaN. I
find very few customers who want a program to terminate rather
than produce a NaN. I know even fewer who would *ever* expect
sine to return a NaN for any finite argument.
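
For what it's worth, a hedged illustration of that behaviour; whether
errno is actually set depends on the implementation's math_errhandling:

#include <stdio.h>
#include <math.h>
#include <errno.h>

int main(void)
{
    double r;

    errno = 0;
    r = sqrt(-1.0);   /* domain error: expect a NaN, possibly errno == EDOM */
    printf("sqrt(-1.0) = %g, errno = %d (EDOM = %d)\n", r, errno, EDOM);
    return 0;
}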
Giving a fatal error tells the user that something is
wrong and should be fixed.
Can you quantify the "wrongness" in terms of your criteria?
Should they not also be applied to the various rounding
functions? If not, why not?


There are no problems of any use to anyone where they
are useful.


Is that a Zen koan or just the nonsense it appears to be?
Things that make sense for a ninth-grade class don't necessarily
scale to professional libraries licensed to professional
programmers, both working to language standards developed
by international organizations.


It wasn't in a class. They didn't teach it in ninth grade,
and as far as I know they don't now. It was explained to
me by a professional physicist.


As a one-time professional physicist myself, I now begin to
understand the nature of your education.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #88
On Fri, 14 May 2004 21:23:51 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.
Then that's a likely thought-bug in the C Standard.

>>Consider someone doing a single precision sine. Most
>>likely they use single precision instead of double
>>because they don't need so much accuracy and hope that
>>the result will be generated faster.

> Most likely.

> What does this tell the designer of a sine function about
> where it's okay to stop delivering accurate results?


When they use a single precision function they expect
less accurate answers than a double precision function.


No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.


The design of the professional math libraries is not the issue, it's
whether the effort is worthwhile, as opposed to accommodating likely
poorly thought-out algorithms by the user.

I think accumulation of rotations is probably best done
with complex multiplication.
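
As a hedged sketch of what that might look like with C99 <complex.h>
(the step size and iteration count are arbitrary):

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    const double step = 1.0;              /* one radian per rotation step    */
    double complex rot = cexp(I * step);  /* e^{i*step}                      */
    double complex z = 1.0;               /* accumulated rotation            */
    double angle = 0.0;                   /* accumulated angle, for contrast */
    long i;

    for (i = 0; i < 1000000L; i++) {
        z *= rot;        /* stays near the unit circle */
        angle += step;   /* grows without bound        */
    }
    printf("sin via summed angle : %.17g\n", sin(angle));
    printf("sin via complex z    : %.17g\n", cimag(z));
    return 0;
}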
Nov 14 '05 #89
"Dr Chaos" <mb****************@NOSPAMyahoo.com> wrote in message
news:sl*******************************@lyapunov.ucsd.edu...
On Fri, 14 May 2004 21:23:51 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.
Then that's a likely thought-bug in the C Standard.


Likely, but it's there, and some of us have to live with it.
>>Consider someone doing a single precision sine. Most
>>likely they use single precision instead of double
>>because they don't need so much accuracy and hope that
>>the result will be generated faster.

> Most likely.

> What does this tell the designer of a sine function about
> where it's okay to stop delivering accurate results?

When they use a single precision function they expect
less accurate answers than a double precision function.


No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.


The design of the professional math libraries is not the issue, it's
whether the effort is worthwhile, as opposed to accommodating likely
poorly thought-out algorithms by the user.


The design of professional math libraries *is* the issue. Until
such time as standards quantify what calculations are "worthwhile"
and what merely accommodate poorly thought out algorithms, we
have an obligation to assume that whatever is specified might
be considered worthwhile to some serious users.

The other side of the coin is knowing where to stop once the
"worthwhile" police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."
I think accumulation of rotations is probably best done
with complex multiplication.


And why do you think this can be done with any more accuracy,
or precision, than the techniques cited (and sneered at) so far
for generating large angles?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #90
In article <xL*******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:

SNIP....
The other side of the coin is knowing where to stop once the
"worthwhile" police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."


It seems to me that a reasonable cutoff point would be where
the difference between consecutive floating point numbers is greater
than two pi. At that point you can't even determine the *sign* of the
correct answer, let alone determine any value that is justifiable.
The only thing that you can justify is a claim that the answer lies
somewhere between -1.0 and 1.0
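
A hedged sketch of where that cutoff falls for IEEE double (nextafter
and log2 are C99):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double two_pi = 2.0 * 3.14159265358979323846;
    double x = 1.0;

    /* Walk up the powers of two until the gap to the next double
       exceeds one full period. */
    while (nextafter(x, INFINITY) - x <= two_pi)
        x *= 2.0;
    printf("gap first exceeds 2*pi at x = %g (= 2^%.0f)\n", x, log2(x));
    return 0;
}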
Nov 14 '05 #91
"John Cochran" <jd*@smof.fiawol.org> wrote in message
news:c8**********@smof.fiawol.org...
In article <xL*******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:

SNIP....
The other side of the coin is knowing where to stop once the
"worthwhile" police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."


It seems to me that a reasonable cutoff point would be where
the difference between consecutive floating point numbers is greater
than two pi. At that point you can't even determine the *sign* of the
correct answer, let alone determine any value that is justifiable.
The only thing that you can justify is a claim that the answer lies
somewhere between -1.0 and 1.0


Yes, that's a reasonable cutoff point, on the face of it. Just don't
look too close. You're falling prey to the same error in logic that
traps most people who first study this problem -- assuming that there
must be some intrinsic error, say 1/2 ulp, in the argument. If we're
going to apply that criterion to the library uniformly, then rounding
1e38 is equally suspect. (How happy would you be if round occasionally
produced a fatal error? Particularly when the answer is obvious and
easy to compute.)

But if you assume the argument is exact, as *all* library functions
really must do, then your statement is incorrect. There *is* a well
defined angle corresponding to *every* finite floating-point argument.
You (or I, to be specific) may not like the amount of work required
to compute it accurately, but the value is known and well defined.

If, OTOH, you want to let the library vendors off the hook whenever
a function is hard to compute, I've got a little list. And it doesn't
stop at sin/cos/tan.

And if, OTOOH, you want to let the library vendors off the hook
whenever a typical programmer probably doesn't know what he's doing,
*boy* do I have a list.

The trick with standards, as with library design, is to have a
rational framework that's uniformly applied. The sine function
may illustrate some of the nastiest consequences of carrying the
current framework to its logical conclusion, but I assure you
that there are plenty of other monsters lurking out there. Any
change you propose to the framework just gives you a different
set to battle.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #92
In article <c8**********@smof.fiawol.org>,
jd*@smof.fiawol.org (John Cochran) wrote:
In article <xL*******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:

SNIP....
The other side of the coin is knowing where to stop once the
"worthwhile" police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."


It seems to me that a reasonable cutoff point would be where
the difference between consecutive floating point numbers is greater
than two pi. At that point you can't even determine the *sign* of the
correct answer, let alone determine any value that is justifiable.
The only thing that you can justify is a claim that the answer lies
somewhere between -1.0 and 1.0


Well, you actually _can_ find the correct answer quite well. A value of
type double represents a single real number. Of course we all know that
if I assign x = a + b; then usually x is _not_ equal to the mathematical
sum of a and b, and given only x I might not draw any useful conclusions
about sin (a + b). However, sin (x) can still be calculated quite well.
Nov 14 '05 #93
"P.J. Plauger" <pj*@dinkumware.com> writes:

|> I agree that it's a Quality of Implementation issue just how fast a
|> library function supplies nonsense when called with nonsense
|> arguments. But I've yet to hear an objective criterion for
|> determining how much of an argument is garbage. Absent that, the
|> best way I know for library writers to satisfy customers is to
|> assume that every input value is exact, and to produce the closest
|> possible representation to the nearest internal representation of
|> the corresponding function value.

Maybe I'm just being naïve, but I always thought that it was a
necessary quality of a correct implementation that it give correct
results for all legal input. And that it wasn't the job of library
implementers to decide what was or was not reasonable input for my
application.

--
James Kanze
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34
Nov 14 '05 #94
P.J. Plauger wrote:
"Dr Chaos" <mb****************@NOSPAMyahoo.com> wrote in message
news:sl*******************************@lyapunov.ucsd.edu...

On Fri, 14 May 2004 21:23:51 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.


Then that's a likely thought-bug in the C Standard.

Likely, but it's there, and some of us have to live with it.

>>Consider someone doing a single precision sine. Most
>>likely they use single precision instead of double
>>because they don't need so much accuracy and hope that
>>the result will be generated faster.

>Most likely.

>What does this tell the designer of a sine function about
>where it's okay to stop delivering accurate results?

When they use a single precision function they expect
less accurate answers than a double precision function.

No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.


The design of the professional math libraries is not the issue, it's
whether the effort is worthwhile, as opposed to accomodating likely
poorly-thought out algorithms by the user.

The design of professional math libraries *is* the issue. Until
such time as standards quantify what calculations are "worthwhile"
and what merely accommodate poorly thought out algorithms, we
have an obligation to assume that whatever is specified might
be considered worthwhile to some serious users.

The other side of the coin is knowing where to stop once the
"worthwhile" police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."

I think accumulation of rotations is probably best done
with complex multiplication.

And why do you think this can be done with any more accuracy,
or precision, than the techniques cited (and sneered at) so far
for generating large angles?


P.J.
I tend to come down on your side on these things (except casting
malloc, maybe). I am not a mathematician but am very interested in
your take on the following floating point issues..

1. Accuracy vs Precision. #define Pi 3.1416 is precise to five
digits and accurate within its precision. If I do something like..

double Pi2 = Pi * 2.0;

...the constant 2.0 is accurate and precise to 16 digits. The result
of the multiplication is accurate to only five digits while it is
precise to 16. Does this make sense?

2. Large Angles. The circle is 360 degrees or '2 pi radians'. Why is
something like..

double r = 52147.3, s;
s = sin(fmod(r,2*PI));

...not the solution for large angle argument reduction?

Keep up the good work.
--
Joe Wright mailto:jo********@comcast.net
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Nov 14 '05 #95
Joe Wright wrote:
.... snip ...
2. Large Angles. The circle is 360 degrees or '2 pi radians'.
Why is something like..

double r = 52147.3, s;
s = sin(fmod(r,2*PI));

..not the solution for large angle argument reduction?


That depends highly on how you compute the fmod. Say you compute
r/(2*PI), truncate it to an integer, multiply by (2*PI), and
subtract that from r. Now you have the difference of two
comparable magnitudes, with attendant loss of significant bits.

Compare with methods of computing sigma f(n) for n = 1 ....
infinity. If you start with n=1 (using the normally available
floating point system) you will end up with something quite
divergent from the true answer, regardless of whether the series
actually converges or not. A reasonably accurate computation
requires starting with the smallest terms.
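
A hedged illustration of that last point: summing 1/n^2 in float both
ways (the series and cutoff are arbitrary; the exact limit is
pi^2/6 = 1.644934...):

#include <stdio.h>

int main(void)
{
    float fwd = 0.0f, rev = 0.0f;
    long n;

    for (n = 1; n <= 1000000L; n++)       /* largest terms first  */
        fwd += 1.0f / ((float)n * (float)n);
    for (n = 1000000L; n >= 1; n--)       /* smallest terms first */
        rev += 1.0f / ((float)n * (float)n);

    printf("largest-first : %.9g\n", fwd);
    printf("smallest-first: %.9g\n", rev);
    return 0;
}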

--
"I'm a war president. I make decisions here in the Oval Office
in foreign policy matters with war on my mind." - Bush.
"Churchill and Bush can both be considered wartime leaders, just
as Secretariat and Mr Ed were both horses." - James Rhodes.
Nov 14 '05 #96
In article <c8**********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:

....
And does it? Last time I checked, mathematics define the sine as having
a real argument and the C programming language provides zilch support for
real numbers.


Yup.


Yup what?!? Please elaborate.


I would have thought that was simple. Yes, I agree with that paragraph.
> Not in mathematical applications, where the argument to the sine
> function can very well be exact.

Please elaborate, again, with concrete examples.


You want a concrete example, I just do think that such examples are
possible.


This is not good enough.


I'm so sorry.
"mathematical applications" get their input data from? How do they
handle the *incorrect* (from the mathematics POV) result of calling sine
for pretty much *any* argument except 0?


Pretty much as in every such case. Careful error analysis.


With the exception of numerical analysis, mathematical results are exact,
by definition.


You are forgetting a few fields: numerical algebra, computational number
theory, statistics... But in some cases (especially computational number
theory) final results can be exact while intermediate results are not.
For instance in a project I was involved in, we have shown that the first
1,200,000,000 non-trivial zeros of the Riemann zeta function have real
part 1/2. However, none of the calculated zeros was exact. (Moreover,
the calculation involved the sine function. Moreover, we had to get
reasonable precision, as it involved separating places where the sign
of the function changes. %)
---
% If you are interested, there are well-known formulas that indicate in
which region the n-th non-trivial zero resides. However, these regions
overlap. But using this you can set up algorithms that will locate
groups of zero's. The problem is that these zero's will get arbitrarily
close to each other, so using floating point on the machine we did the
runs on (a CDC Cyber 205) the best separation we could get was 10**(-13).
This was not enough, so sometimes we had to resort to double precision.
However, the argument was *exact*. A precise floating-point (hence
rational) number.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #97
"Joe Wright" <jo********@comcast.net> wrote in message
news:Q5********************@comcast.com...
I tend to come down on your side on these things (except casting
malloc, maybe). I am not a mathematician but am very interested in
your take on the following floating point issues..

1. Accuracy vs Precision. #define Pi 3.1416 is precise to five
digits and accurate within its precision. If I do something like..

double Pi2 = Pi * 2.0;

..the constant 2.0 is accurate and precise to 16 digits. The result
of the multiplication is accurate to only five digits while it is
precise to 16. Does this make sense?
Yes.
2. Large Angles. The circle is 360 degrees or '2 pi radians'. Why is
something like..

double r = 52147.3, s;
s = sin(fmod(r,2*PI));

..not the solution for large angle argument reduction?
People once thought it was, or should be. Indeed, that was one of the
arguments for adding the rather finicky fmod to IEEE floating point
and eventually the C Standard. But if you think about it hard enough
and long enough -- took me an afternoon and several sheets of paper --
you realize that it doesn't cut it. You effectively have to keep
subtracting 2*pi from your argument r until it's less than 2*pi.
fmod does this by subtracting the various multiples 2*pi*2^n. If
*any one* of them does not have nearly 16 good fraction digits, as
well as all the digits it needs to the left of the decimal point, it's
going to mess up the whole set of subtractions. So if you want to
reduce numbers as large as 10^38, you have to represent pi to about
log10(10^(38+16)) or 54 digits. For 113-bit IEEE long double, you
need well over 4000 digits.
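
A hedged way to see the effect, assuming the library's own sin does the
accurate reduction described above (if it does not, the two lines simply
show two different answers):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double two_pi = 6.283185307179586;   /* 2*pi rounded to double */
    double r = 1.0e15;

    /* fmod itself is exact, but it reduces modulo the *rounded* 2*pi;
       after roughly 1.6e14 periods that representation error adds up
       to a few hundredths of a radian. */
    printf("sin(r)             = %.17g\n", sin(r));
    printf("sin(fmod(r, 2*pi)) = %.17g\n", sin(fmod(r, two_pi)));
    return 0;
}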

We've developed an arbitrary-precision package that represents
numbers as arrays of floating-point values, each of which uses only
half its fraction bits. So we can do adjustable precision argument
reduction fairly rapidly. Still takes a lot of storage to represent
the worst case, and a bit more logic than I wish we had to use, but
it does the job. Not only that, we need the same sort of thing
to handle several other difficult math functions, though with
nowhere near as much precision, of course. So it's not like we
indulge in heroics just for sin/cos/tan.
Keep up the good work.


Thanks.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #98
"P.J. Plauger" wrote:
.... snip ...
We've developed an arbitrary-precision package that represents
numbers as arrays of floating-point values, each of which uses only
half its fraction bits. So we can do adjustable precision argument
reduction fairly rapidly. Still takes a lot of storage to represent
the worst case, and a bit more logic than I wish we had to use, but
it does the job. Not only that, we need the same sort of thing
to handle several other difficult math functions, though with
nowhere near as much precision, of course. So it's not like we
indulge in heroics just for sin/cos/tan.


It seems to me that the reduction could be fairly rapid if an
estimate is formed by normal division and then broken up into
single-bit binary portions (so that the multiples of PI do not add
non-zero significant bits) to do the actual reduction. Not worked
out at all, just the glimmer of a method. The idea is to remove the
leftmost digits of the original argument first.
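
One way to read that glimmer, as a very rough sketch (not worked out
either; the quotient bits are applied largest first, and scaling by a
power of two is exact, but the stored 2*pi is still rounded, so the
accuracy is still limited by that representation error):

#include <stdio.h>
#include <math.h>

static double reduce_2pi_bits(double r)
{
    const double two_pi = 6.283185307179586;   /* rounded to double  */
    double q = floor(r / two_pi);              /* estimated quotient */
    double bit = 1.0;

    while (bit * 2.0 <= q)                     /* highest bit of q   */
        bit *= 2.0;
    for (; bit >= 1.0; bit /= 2.0) {
        if (q >= bit) {
            r -= bit * two_pi;                 /* bit*two_pi scales exactly */
            q -= bit;
        }
    }
    return r;
}

int main(void)
{
    double r = 52147.3;
    printf("bitwise: %.17g\nfmod   : %.17g\n",
           reduce_2pi_bits(r), fmod(r, 6.283185307179586));
    return 0;
}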

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #99
In <ch*********************************@slb-newsm1.svr.pol.co.uk> Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
Well, you actually _can_ find the correct answer quite well. A value of
type double represents a single real number.
But, when used in a real number context (as opposed to an integer number
context -- floating point can be used in both contexts) it stands for a
whole subset of the real numbers set. The real value exactly represented
is no more relevant than any other value from that set.
Of course we all know that
if I assign x = a + b; then usually x is _not_ equal to the mathematical
sum of a and b, and given only x I might not draw any useful conclusions
about sin (a + b). However, sin (x) can still be calculated quite well.


The point is not whether it can be calculated, but rather how much
precision should the calculation produce? Does it makes sense to compute
sin(DBL_MAX) with 53-bit precision, ignoring the fact that DBL_MAX
stands for an interval so large as to make this function call completely
devoid of any meaning?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #100
