Sine code for ANSI C

Hello,
I downloaded glibc and tried looking for the code that implements the
sine function, but I couldn't find the file. I went to the math
directory and found math.h, which I guess needs to be included to use
the sine function. But which .c file implements the sine/cosine and
other trig functions?
Thanks
Nov 14 '05
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <ch*********************************@slb-newsm1.svr.pol.co.uk>

Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
Well, you actually _can_ find the correct answer quite well. A value of
type double represents a single real number.


But, when used in a real number context (as opposed to an integer number
context -- floating point can be used in both contexts) it stands for a
whole subset of the real numbers set. The real value exactly represented
is no more relevant than any other value from that set.


That's just one interpretation of floating-point representation. It is
akin to the oft-repeated presumption in this thread that every value
carries a +/- 1/2 ulp error with it. But that's not always the case.
Many a floating-point calculation is *exact*. So the real value exactly
represented is way more relevant than any other value from the set.
In particular, it is the value that library functions should assume
when deciding what function value to return. Do that and you will
please a large number of your customers. Don't do it and you won't
even please those who really believe the 1/2 ulp fuzz. (They should
be doing their own error propagation analysis no matter what, and
they'll almost always benefit from a result in the center of the range.)
Of course we all know that
if I assign x = a + b; then usually x is _not_ equal to the mathematical
sum of a and b, and given only x I might not draw any useful conclusions
about sin (a + b). However, sin (x) can still be calculated quite well.


The point is not whether it can be calculated, but rather how much
precision should the calculation produce? Does it make sense to compute
sin(DBL_MAX) with 53-bit precision, ignoring the fact that DBL_MAX
stands for an interval so large as to make this function call completely
devoid of any meaning?


You have to ignore the possibility that DBL_MAX stands for a large range
because *it might not.* Granted, it probably does have that much fuzz
practically all the time; but you, the library vendor, don't know that.
And as we've discussed at length here, it's hard to say just where along
the road to possible perdition it's okay to stop trying. Once you tool
up for doing the job right along a good part of that road, it ain't that
much more work to go all the way. Then you don't have to worry about
some customer coming along and saying, "Hey, why did you give up on
computing that value properly? I happen to know what I'm doing." And
that customer might even be right.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #101
Dan Pop wrote:
.... snip ...
The point is not whether it can be calculated, but rather how much
precision should the calculation produce? Does it make sense to
compute sin(DBL_MAX) with 53-bit precision, ignoring the fact that
DBL_MAX stands for an interval so large as to make this function
call completely devoid of any meaning?


PJP's point is that that interval is the range of values specified,
with the actual value being the most likely value in the range.
The shape of the probability curve in the interval is known only
to the end user. It may be an impulse function precisely at the
specified value.

If I ask for the value of sin(12345678*PI) I can confidently
assert that the value is 0. Using that value will never cause an
unexplained fault. The following illustrates the problem:

[1] c:\c\junk>a
1 3.1415926535897931 0.0000000000000001
2 6.2831853071795862 -0.0000000000000002
4 12.5663706143591725 -0.0000000000000005
8 25.1327412287183449 -0.0000000000000010
16 50.2654824574366899 -0.0000000000000020
32 100.5309649148733797 -0.0000000000000039
64 201.0619298297467594 -0.0000000000000078
128 402.1238596594935188 -0.0000000000000157
256 804.2477193189870377 -0.0000000000000313
512 1608.4954386379740754 -0.0000000000000627
1024 3216.9908772759481508 -0.0000000000001254
2048 6433.9817545518963016 -0.0000000000002508
4096 12867.9635091037926031 -0.0000000000005016
8192 25735.9270182075852063 -0.0000000000010032
16384 51471.8540364151704125 -0.0000000000020064
32768 102943.7080728303408250 -0.0000000000040128
65536 205887.4161456606816500 -0.0000000000080256
131072 411774.8322913213633001 -0.0000000000160512
262144 823549.6645826427266002 -0.0000000000321023
524288 1647099.3291652854532003 -0.0000000000642046
1048576 3294198.6583305709064007 -0.0000000001284093
2097152 6588397.3166611418128014 -0.0000000002568186
4194304 13176794.6333222836256027 -0.0000000005136371
8388608 26353589.2666445672512054 -0.0000000010272743
16777216 52707178.5332891345024109 -0.0000000020545485
33554432 105414357.0665782690048218 -0.0000000041090971
67108864 210828714.1331565380096436 -0.0000000082181941
134217728 421657428.2663130760192871 -0.0000000164363883
268435456 843314856.5326261520385742 -0.0000000328727765
536870912 1686629713.0652523040771484 -0.0000000657455530
1073741824 3373259426.1305046081542969 -0.0000001314911060

[1] c:\c\junk>cat sines.c
#include <stdio.h>
#include <limits.h>
#include <math.h>

int main(void)
{
    double PI = 4.0 * atan(1.0);
    unsigned int i;

    for (i = 1; i < INT_MAX; i += i) {
        printf("%11u %28.16f %20.16f\n", i, PI * i, sin(PI * i));
    }
    return 0;
}

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #102
In article <%2*******************@nwrddc02.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote:
You have to ignore the possibility that DBL_MAX stands for a large range
because *it might not.* Granted, it probably does have that much fuzz
practically all the time; but you, the library vendor, don't know that.
And as we've discussed at length here, it's hard to say just where along
the road to possible perdition it's okay to stop trying. Once you tool
up for doing the job right along a good part of that road, it ain't that
much more work to go all the way. Then you don't have to worry about
some customer coming along and saying, "Hey, why did you give up on
computing that value properly? I happen to know what I'm doing." And
that customer might even be right.


Somehow your mentioning of sin (DBL_MAX) sparked a little idea that
might help you calculate sin (x) for arbitrarily large values quite
quickly, especially useful if you use long double with 15 exponent bits
and an enormous range:

Instead of storing huge numbers of bits of 1/(2 pi) and doing huge
precision argument reduction, just store the values of sin (65536 ^ N)
and cos (65536 ^ N) with, let's say, 20 or 24 extra bits of precision.
Every large floating-point number can be written as x = +/- k * 2^n,
where k is for example a 53 bit or 64 bit integer. So take a value of x,
write it as the sum of very few small multiples of (65536 ^ N), which
should be quite simple, then use some simple theorems for calculating
sin (a+b) when sin and cos of a and b are known. That should work quite
quickly for arbitrarily large x.
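
A rough sketch of just the combining step (untested, and it assumes the
table of extra-precision sin/cos values for the pieces already exists;
the names are only for illustration):

/* sin and cos of one angle, carried as a pair. */
typedef struct { double s, c; } sincos_t;

/* Combine known sin/cos of a and b into sin/cos of a+b, using
     sin(a+b) = sin(a)cos(b) + cos(a)sin(b)
     cos(a+b) = cos(a)cos(b) - sin(a)sin(b)  */
static sincos_t add_angles(sincos_t a, sincos_t b)
{
    sincos_t r;
    r.s = a.s * b.c + a.c * b.s;
    r.c = a.c * b.c - a.s * b.s;
    return r;
}

Split x into a few exactly representable pieces, fetch sin/cos for each
piece, then fold the pairs together with add_angles.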
Nov 14 '05 #103
CBFalconer wrote:
"P.J. Plauger" wrote:

... snip ...


I got really interested in P.J.'s argument reduction to sin() et al.,
and while constructing something to test with I found something
very strange. A very simple program:

#include <stdio.h>
#include <math.h>

#define R PI*2/360 /* Radians per Degree */

int main(void) {
    double d, r = R;
    int i, lin;
    for (lin = 0, i = 30; i < 6000000; i += 360000) {
        d = i * R;
        printf("\n%2d. Deg %8d, Rad %.16e Sin %+.16e", ++lin, i, d,
               sin(d));
    }
    puts("");
    for (lin = 0, i = 30; i < 6000000; i += 360000) {
        d = i * r;
        printf("\n%2d. Deg %8d, Rad %.16e Sin %+.16e", ++lin, i, d,
               sin(d));
    }
    puts("");
    return 0;
}

Sorry about the line wraps.

I use the DJGPP version of gcc.

The point of the exercise was to simulate fairly large integral
(degree) angles, convert the (int) degrees to (double) radians and
present the result to sin() and see what we get. Because I know the
sine of 30 degrees is 0.5, and I suppose the sine of 390 and 750 degrees
(and so on) is the same.

The program consists of two virtually identical loops. The
difference is only the assignment of d.

The loops produce 17 lines of output, identical except (in my case)
for line 14. In that case, the calculation of d yields a different
result, off by one in the least significant bit of the mantissa of
d. Passing these two to sin() results in a difference in the 18
least significant bits of the result's mantissa.

If you see the same difference, can you explain it?
--
Joe Wright mailto:jo********@comcast.net
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Nov 14 '05 #104
Christian Bau wrote:
.... snip ...
Somehow your mentioning of sin (DBL_MAX) sparked a little idea
that might help you calculate sin (x) for arbitrarily large
values quite quickly, especially useful if you use long double
with 15 exponent bits and an enormous range:

Instead of storing huge numbers of bits of 1/(2 pi) and doing huge
precision argument reduction, just store the values of
sin (65536 ^ N) and cos (65536 ^ N) with, let's say, 20 or 24
extra bits of precision. ... snip ...


That isn't necessary. Assuming the argument is precise (i.e.
extended by zero bits on the right) all we need to compute (arg
mod PI) to the precision of PI is a couple of guard bits.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #105
In article <40***************@yahoo.com>,
CBFalconer <cb********@yahoo.com> wrote:
Christian Bau wrote:

... snip ...

Somehow your mentioning of sin (DBL_MAX) sparked a little idea
that might help you calculate sin (x) for arbitrarily large
values quite quickly, especially useful if you use long double
with 15 exponent bits and an enormous range:

Instead of storing huge numbers of bits of 1/(2 pi) and doing huge
precision argument reduction, just store the values of
sin (65536 ^ N) and cos (65536 ^ N) with, let's say, 20 or 24
extra bits of precision. ... snip ...


That isn't necessary. Assuming the argument is precise (i.e.
extended by zero bits on the right) all we need to compute (arg
mod PI) to the precision of PI is a couple of guard bits.


When you write PI you mean a floating point approximation of the
mathematical constant pi? For large x, reduction modulo an approximation
to pi will give a result that is very much different from reduction
modulo pi. That is why some libraries store pi or 1/pi with a precision
of several thousand bits.
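
A quick way to see the size of the effect (this assumes the host sin()
itself reduces accurately, which most modern libraries do):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double two_pi = 8.0 * atan(1.0);  /* nearest double to 2*pi */
    double x = 1.0e15;
    /* fmod itself is exact, but two_pi is off by about 2.4e-16, so
       the reduced argument drifts by roughly (x / 2*pi) * 2.4e-16,
       here about 0.04 radian -- enough to change the leading digits. */
    double r = fmod(x, two_pi);
    printf("sin(x)            = %.17g\n", sin(x));
    printf("sin(fmod(x, 2pi)) = %.17g\n", sin(r));
    return 0;
}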
Nov 14 '05 #106
Joe Wright <jo********@comcast.net> wrote:
CBFalconer wrote:
"P.J. Plauger" wrote:

... snip ...
I got really interested in P.J.'s argument reduction to sin() et.al.
and while constructing something to test with I have found something
very strange. A very simple program..

#include <stdio.h>
#include <math.h>

#define R PI*2/360 /* Radians per Degree */


<snip>
If you see the same difference, can you explain it?


If you actually define PI somewhere and parenthesize the
replacement text for R, the code not only compiles, but
all the strangeness should be gone.

Regards
--
Irrwahn Grausewitz (ir*******@freenet.de)
welcome to clc: http://www.ungerhu.com/jxh/clc.welcome.txt
clc faq-list : http://www.faqs.org/faqs/C-faq/faq/
clc OT guide : http://benpfaff.org/writings/clc/off-topic.html
Nov 14 '05 #107
Christian Bau wrote:
CBFalconer <cb********@yahoo.com> wrote:
Christian Bau wrote:

... snip ...

Somehow your mentioning of sin (DBL_MAX) sparked a little idea
that might help you calculate sin (x) for arbitrarily large
values quite quickly, especially useful if you use long double
with 15 exponent bits and an enormous range:

Instead of storing huge numbers of bits of 1/(2 pi) and doing huge
precision argument reduction, just store the values of
sin (65536 ^ N) and cos (65536 ^ N) with, let's say, 20 or 24
extra bits of precision. ... snip ...


That isn't necessary. Assuming the argument is precise (i.e.
extended by zero bits on the right) all we need to compute (arg
mod PI) to the precision of PI is a couple of guard bits.


When you write PI you mean a floating point approximation of the
mathematical constant pi? For large x, reduction modulo an
approximation to pi will give a result that is very much different
from reduction modulo pi. That is why some libraries store pi or
1/pi with a precision of several thousand bits.


You're right. I was thinking of the case where we start with a
number of the form "arg = 2 * PI * rotations + delta", where we
can recover delta provided we use the same value for PI and the
number has not lost precision in the first place.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #108
In <%2*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <ch*********************************@slb-newsm1.svr.pol.co.uk>

Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
>Well, you actually _can_ find the correct answer quite well. A value of
>type double represents a single real number.


But, when used in a real number context (as opposed to an integer number
context -- floating point can be used in both contexts) it stands for a
whole subset of the real numbers set. The real value exactly represented
is no more relevant than any other value from that set.


That's just one interpretation of floating-point representation. It is
akin to the oft-repeated presumption in this thread that every value
carries a +/- 1/2 ulp error with it. But that's not always the case.
Many a floating-point calculation is *exact*. So the real value exactly


When I asked you for concrete examples, you provided exactly zilch.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #109
In <40***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
Dan Pop wrote:

... snip ...

The point is not whether it can be calculated, but rather how much
precision should the calculation produce? Does it make sense to
compute sin(DBL_MAX) with 53-bit precision, ignoring the fact that
DBL_MAX stands for an interval so large as to make this function
call completely devoid of any meaning?


PJP's point is that that interval is the range of values specified,
with the actual value being the most likely value in the range.
The shape of the probability curve in the interval is known only
to the end user. It may be an impulse function precisely at the
specified value.


Or it may not. If I deserve 5 digits of precision, why should the
implementation waste precious CPU resources to give me 16 digits of
*bogus* precision?

To deserve 16 digits of precision, it is *my* job to call sin() with
an argument of the right magnitude. It's as simple as that.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #110
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <40***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
Dan Pop wrote:
... snip ...

The point is not whether it can be calculated, but rather how much
precision should the calculation produce? Does it make sense to
compute sin(DBL_MAX) with 53-bit precision, ignoring the fact that
DBL_MAX stands for an interval so large as to make this function
call completely devoid of any meaning?


PJP's point is that that interval is the range of values specified,
with the actual value being the most likely value in the range.
The shape of the probability curve in the interval is known only
to the end user. It may be an impulse function precisely at the
specified value.


Or it may not. If I deserve 5 digits of precision, why should the
implementation waste precious CPU resources to give me 16 digits of
*bogus* precision?


Because it doesn't know that it shouldn't? You could supply a useful
hint by calling sinf, in this particular case at least.
To deserve 16 digits of precision, it is *my* job to call sin() with
an argument of the right magnitude. It's as simple as that.


Simple for you, perhaps, but the interface for sin currently fails
to provide any information about how many of the bits you're supplying
should be taken seriously. Absent such information, a high quality
sin function favors getting the best possible answer for the argument
given, over doing something fast in the hopes that *all* users are
as fuzzy and in a hurry as you always seem to be. And as I've indicated
several times in this thread, computing the sine of *any* angle
smaller in magnitude than pi/4 can be done pretty fast. Odds are
that any logic you add to reduce the precision in this range would
take about as long to execute as the time you save. So all you have
to do is reduce your own arguments, as quick and sloppy as suits
your needs, and you avoid all that wasteful accuracy demanded by
professional programmers.
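
To be concrete about how cheap the primary range is, here is a rough
sketch (not our library code, which uses a tuned minimax polynomial):
on |x| <= pi/4 even a plain truncated Taylor series has truncation
error below half an ulp.

/* sin(x) for |x| <= pi/4 only:
   x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - x^11/11! + x^13/13! - x^15/15!
   evaluated in Horner form. */
static double sin_primary(double x)
{
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0
             + x2 * (-1.0 / 5040.0 + x2 * (1.0 / 362880.0
             + x2 * (-1.0 / 39916800.0 + x2 * (1.0 / 6227020800.0
             - x2 / 1307674368000.0)))))));
}

All the cost, and all the controversy, is in reducing an arbitrary
argument into that range; the polynomial itself is a handful of
multiplies.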

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #111
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <%2*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <ch*********************************@slb-newsm1.svr.pol.co.uk>

Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
>Well, you actually _can_ find the correct answer quite well. A value of
>type double represents a single real number.

But, when used in a real number context (as opposed to an integer number
context -- floating point can be used in both contexts) it stands for a
whole subset of the real numbers set. The real value exactly represented
is no more relevant than any other value from that set.


That's just one interpretation of floating-point representation. It is
akin to the oft-repeated presumption in this thread that every value
carries a +/- 1/2 ulp error with it. But that's not always the case.
Many a floating-point calculation is *exact*. So the real value exactly


When I asked you for concrete examples, you provided exactly zilch.


Just last week I replied to you as follows:

: > >> So, I was asking about *concrete* examples where it makes sense to
call
: > >> sine with integral arguments or with arguments that are provably
*exact*
: > >> representations of the intended value.
: > >
: > >And I gave one.
: >
: > Nope, you gave me nothing in the way of a *concrete* example. Or maybe
: > the term is beyond your grasp... Clue: "concrete" and "hypothetical"
: > are not exactly synonyms in any language I'm familiar with.
:
: The term is beyond my grasp.

Can't you remember *anything*?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #112
In article <c8***********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <%2*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:

....
But, when used in a real number context (as opposed to an integer number
context -- floating point can be used in both contexts) it stands for a
whole subset of the real numbers set. The real value exactly represented
is no more relevant than any other value from that set.


That's just one interpretation of floating-point representation. It is
akin to the oft-repeated presumption in this thread that every value
carries a +/- 1/2 ulp error with it. But that's not always the case.
Many a floating-point calculation is *exact*. So the real value exactly


When I asked you for concrete examples, you provided exactly zilch.


I think I gave one? (Zeros of the Riemann zeta function.)

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #113
Dan Pop wrote:
"P.J. Plauger" <pj*@dinkumware.com> writes:

.... snip ...

That's just one interpretation of floating-point representation.
It is akin to the oft-repeated presumption in this thread that
every value carries a +/- 1/2 ulp error with it. But that's not
always the case. Many a floating-point calculation is *exact*.


When I asked you for concrete examples, you provided exactly zilch.


I expanded my earlier example showing the errors arising from
large arguments to sin to show possible user argument reductions.
The first one I tried was fmod, and I am pleasantly surprised by
its accuracy here. The library is DJGPP 2.03.

#include <stdio.h>
#include <limits.h> /* INT_MAX */
#include <math.h>   /* fmod, sin, atan */
#include <stdlib.h> /* strtod */

/* try delta 0.001953125 (i.e. 2**-n, i.e. a single bit)
         0.00048828125
         0.0001220703125
         0.000030517578125
         0.00000762939453125
         0.0000019073485328125
   to show the precision preserved by reduce() */

/* -------------------1 */

/* return arg modulo PI */
double reduce(double arg, double PI)
{
    return fmod(arg, PI);
} /* reduce */

/* -------------------1 */

int main(int argc, char **argv)
{
    double PI = 4.0 * atan(1.0);
    unsigned int i;
    double arg, narg, delta;

    delta = 0.0;
    if (argc > 1) delta = strtod(argv[1], NULL);

    for (i = 1; i < (INT_MAX / 2U); i = 4 * i) {
        arg = i * PI + delta;
        narg = reduce(arg, PI);
        printf("%11u %28.16f %20.16f\n%11c %28.16f %20.16f\n",
               i, arg, sin(arg), ' ', narg, sin(narg));
    }
    return 0;
}

c:\c\junk>a 0.0000019073485328125
1 3.1415945609383260 -0.0000019073485328
0.0000019073485329 0.0000019073485329
4 12.5663725217077058 0.0000019073485328
0.0000019073485333 0.0000019073485333
16 50.2654843647852232 0.0000019073485314
0.0000019073485333 0.0000019073485333
64 201.0619317370952785 0.0000019073485113
0.0000019073485191 0.0000019073485191
256 804.2477212263355568 0.0000019073484878
0.0000019073485191 0.0000019073485191
1024 3216.9908791832967836 0.0000019073485074
0.0000019073486328 0.0000019073486328
4096 12867.9635110111412359 0.0000019073481312
0.0000019073486328 0.0000019073486328
16384 51471.8540383225190453 0.0000019073466264
0.0000019073486328 0.0000019073486328
65536 205887.4161475680302829 0.0000019073406072
0.0000019073486328 0.0000019073486328
262144 823549.6645845500752330 0.0000019073165305
0.0000019073486328 0.0000019073486328
1048576 3294198.6583324782550335 0.0000019072202235
0.0000019073486328 0.0000019073486328
4194304 13176794.6333241909742355 0.0000019068349957
0.0000019073486328 0.0000019073486328
16777216 52707178.5332910418510437 0.0000019052940843
0.0000019073486328 0.0000019073486328
67108864 210828714.1331584453582764 0.0000018991304387
0.0000019073486328 0.0000019073486328
268435456 843314856.5326280593872070 0.0000018744758563
0.0000019073486328 0.0000019073486328
--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #114
Dik T. Winter wrote:

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.


I would certainly look carefully at them, especially
if done with matrix inversion.

-- glen

Nov 14 '05 #115
In article <TCCqc.23014$gr.1960915@attbi_s52> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Dik T. Winter wrote:

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.


I would certainly look carefully at them, especially
if done with matrix inversion.


Also with Gauss' elimination. If you really think that the input
numbers are intervals, in almost all cases the result makes no sense.
It is quite possible that within the set of matrices given, there is
at least one singular matrix. The number 40 I use above comes from
von Neumann. And indeed, before Wilkinson, numerical mathematicians could
not deal with the error analysis of large linear systems.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #116
In <V2********************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <40***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
>Dan Pop wrote:
>>
>... snip ...
>>
>> The point is not whether it can be calculated, but rather how much
>> precision should the calculation produce? Does it make sense to
>> compute sin(DBL_MAX) with 53-bit precision, ignoring the fact that
>> DBL_MAX stands for an interval so large as to make this function
>> call completely devoid of any meaning?
>
>PJP's point is that that interval is the range of values specified,
>with the actual value being the most likely value in the range.
>The shape of the probability curve in the interval is known only
>to the end user. It may be an impulse function precisely at the
>specified value.


Or it may not. If I deserve 5 digits of precision, why should the
implementation waste precious CPU resources to give me 16 digits of
*bogus* precision?


Because it doesn't know that it shouldn't? You could supply a useful
hint by calling sinf, in this particular case at least.


sinf is useless in portable C code.
To deserve 16 digits of precision, it is *my* job to call sin() with
an argument of the right magnitude. It's as simple as that.


Simple for you, perhaps, but the interface for sin currently fails
to provide any information about how many of the bits you're supplying
should be taken seriously.


ALL of them, as I have already explained to you. It's the magnitude of the
value that decides how many digits of precision I deserve in the result.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #117
In <G6*******************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <%2*******************@nwrddc02.gnilink.net> "P.J. Plauger"

<pj*@dinkumware.com> writes:
>"Dan Pop" <Da*****@cern.ch> wrote in message
>news:c8***********@sunnews.cern.ch...
>> In <ch*********************************@slb-newsm1.svr.pol.co.uk>
>Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
>
>> >Well, you actually _can_ find the correct answer quite well. A value of
>> >type double represents a single real number.
>>
>> But, when used in a real number context (as opposed to an integer number
>> context -- floating point can be used in both contexts) it stands for a
>> whole subset of the real numbers set. The real value exactly represented
>> is no more relevant than any other value from that set.
>
>That's just one interpretation of floating-point representation. It is
>akin to the oft-repeated presumption in this thread that every value
>carries a +/- 1/2 ulp error with it. But that's not always the case.
>Many a floating-point calculation is *exact*. So the real value exactly


When I asked you for concrete examples, you provided exactly zilch.


Just last week I replied to you as follows:

: > >> So, I was asking about *concrete* examples where it makes sense to
call
: > >> sine with integral arguments or with arguments that are provably
*exact*
: > >> representations of the intended value.
: > >
: > >And I gave one.
: >
: > Nope, you gave me nothing in the way of a *concrete* example. Or maybe
: > the term is beyond your grasp... Clue: "concrete" and "hypothetical"
: > are not exactly synonyms in any language I'm familiar with.
:
: The term is beyond my grasp.

Can't you remember *anything*?


Of course I do. Otherwise, I couldn't point out that you have still
failed to support your assertion with concrete examples, therefore I
refuse to consider it seriously, period.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #118
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <TCCqc.23014$gr.1960915@attbi_s52> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Dik T. Winter wrote:

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.


I would certainly look carefully at them, especially
if done with matrix inversion.


Also with Gauss' elimination. If you really think that the input
numbers are intervals, in almost all cases the result makes no sense.


Unless the input numbers can be exactly represented in floating point,
it is sheer foolishness to consider them as exact values. Furthermore,
in real life applications, most input numbers are obtained with a
precision even lower than that provided by the floating point
representation, so they stand for even larger intervals. That's why
the results are intervals, too. Consider the difference between
1 +/- 1000 and 1 +/- 0.001. Only a patent fool will display the result
as a plain 1 in both cases.
Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #119
In article <c8***********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:

....
I would certainly look carefully at them, especially
if done with matrix inversion.


Also with Gauss' elimination. If you really think that the input
numbers are intervals, in almost all cases the result makes no sense.


Unless the input numbers can be exactly represented in floating point,
it is sheer foolishness to consider them as exact values. Furthermore,
in real life applications, most input numbers are obtained with a
precision even lower than that provided by the floating point
representation, so they stand for even larger intervals. That's why
the results are intervals, too. Consider the difference between
1 +/- 1000 and 1 +/- 0.001. Only a patent fool will display the result
as a plain 1 in both cases.


Ah, so, what method do *you* use to solve linear systems on the computer?
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #120
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8**********@sunnews.cern.ch...
In <V2********************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
In <40***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
>Dan Pop wrote:
>>
>... snip ...
>>
>> The point is not whether it can be calculated, but rather how much
>> precision should the calculation produce? Does it make sense to
>> compute sin(DBL_MAX) with 53-bit precision, ignoring the fact that
>> DBL_MAX stands for an interval so large as to make this function
>> call completely devoid of any meaning?
>
>PJP's point is that that interval is the range of values specified,
>with the actual value being the most likely value in the range.
>The shape of the probability curve in the interval is known only
>to the end user. It may be an impulse function precisely at the
>specified value.

Or it may not. If I deserve 5 digits of precision, why should the
implementation waste precious CPU resources to give me 16 digits of
*bogus* precision?


Because it doesn't know that it shouldn't? You could supply a useful
hint by calling sinf, in this particular case at least.


sinf is useless in portable C code.


But wait a minute. You've just outlined a "concrete example" where
you know you deserve only 5 digits of precision. sinf will meet
your needs *portably*. And it won't waste your precious CPU resources
to give you 16 digits of *bogus* precision. How can you call that
useless when you just outlined a concrete use case where it portably
meets your needs exactly?
To deserve 16 digits of precision, it is *my* job to call sin() with
an argument of the right magnitude. It's as simple as that.


Simple for you, perhaps, but the interface for sin currently fails
to provide any information about how many of the bits you're supplying
should be taken seriously.


ALL of them, as I have already explained to you. It's the magnitude of the
value that decides how many digits of precision I deserve in the result.


But you've yet to commit to any *concrete* rule for deciding how many
digits you deserve given the magnitude of the argument value. And you
say we're at fault if a library takes more time than it should. We're
looking to you for guidance in this area, particularly since you seem
to have such a clear vision of the right way to do things. Please provide
wording for a correction to the C Standard so we can all critique it and
then adopt it. We're counting on you for leadership.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #121
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
In article <c8***********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:

...
> I would certainly look carefully at them, especially
> if done with matrix inversion.

Also with Gauss' elimination. If you really think that the input
numbers are intervals, in almost all cases the result makes no sense.


Unless the input numbers can be exactly represented in floating point,
it is sheer foolishness to consider them as exact values. Furthermore,
in real life applications, most input numbers are obtained with a
precision even lower than that provided by the floating point
representation, so they stand for even larger intervals. That's why
the results are intervals, too. Consider the difference between
1 +/- 1000 and 1 +/- 0.001. Only a patent fool will display the result
as a plain 1 in both cases.


Ah, so, what method do *you* use to solve linear systems on the computer?


I don't solve linear systems on the computer. Yet, as far as I know, all
the usual methods come bundled with an error analysis that allows
computing the precision of the results. If you don't compute this
precision, what's the point in solving the linear systems in the first
place? There are more efficient random number generators...

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #122
In <EJ**************@nwrddc03.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8**********@sunnews.cern.ch...
In <V2********************@nwrddc02.gnilink.net> "P.J. Plauger"

<pj*@dinkumware.com> writes:
>"Dan Pop" <Da*****@cern.ch> wrote in message
>news:c8***********@sunnews.cern.ch...
>
>> In <40***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
>>
>> >Dan Pop wrote:
>> >>
>> >... snip ...
>> >>
>> >> The point is not whether it can be calculated, but rather how much
>> >> precision should the calculation produce? Does it make sense to
>> >> compute sin(DBL_MAX) with 53-bit precision, ignoring the fact that
>> >> DBL_MAX stands for an interval so large as to make this function
>> >> call completely devoid of any meaning?
>> >
>> >PJP's point is that that interval is the range of values specified,
>> >with the actual value being the most likely value in the range.
>> >The shape of the probability curve in the interval is known only
>> >to the end user. It may be an impulse function precisely at the
>> >specified value.
>>
>> Or it may not. If I deserve 5 digits of precision, why should the
>> implementation waste precious CPU resources to give me 16 digits of
>> *bogus* precision?
>
>Because it doesn't know that it shouldn't? You could supply a useful
>hint by calling sinf, in this particular case at least.


sinf is useless in portable C code.


But wait a minute. You've just outlined a "concrete example" where
you know you deserve only 5 digits of precision. sinf will meet
your needs *portably*. And it won't waste your precious CPU resources
to give you 16 digits of *bogus* precision. How can you call that
useless when you just outlined a concrete use case where it portably
meets your needs exactly?


sinf is a C99 invention, and the portability of C99 code is severely
impaired by how few implementations claim C99 conformance.

C99 is perfectly futile to anyone concerned about the portability of his
code. At best, it's a guide about what C89 user space identifiers not
to use in your code (because C99 has no qualms stomping on the C89 user
name space).
>> To deserve 16 digits of precision, it is *my* job to call sin() with
>> an argument of the right magnitude. It's as simple as that.
>
>Simple for you, perhaps, but the interface for sin currently fails
>to provide any information about how many of the bits you're supplying
>should be taken seriously.


ALL of them, as I have already explained you. It's the magnitude of the
value that decides how many digits of precision I deserve in the result.


But you've yet to commit to any *concrete* rule for deciding how many
digits you deserve given the magnitude of the argument value.


I have already provided the rule, last week. And when I asked if it was
concrete enough for you, you didn't object.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #123
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8**********@sunnews.cern.ch...
sinf is useless in portable C code.
But wait a minute. You've just outlined a "concrete example" where
you know you deserve only 5 digits of precision. sinf will meet
your needs *portably*. And it won't waste your precious CPU resources
to give you 16 digits of *bogus* precision. How can you call that
useless when you just outlined a concrete use case where it portably
meets your needs exactly?


sinf is a C99 invention, and the portability of C99 code is severely
impaired by how few implementations claim C99 conformance.


Actually it's described in C89.
C99 is perfectly futile to anyone concerned about the portability of his
code. At best, it's a guide about what C89 user space identifiers not
to use in your code (because C99 has no qualms stomping on the C89 user
name space).


sinf has been reserved since 1989, so I can't imagine how using it
could possibly "stomp on" any program. But that's okay, I understand
that you're trolling.
>> To deserve 16 digits of precision, it is *my* job to call sin() with
>> an argument of the right magnitude. It's as simple as that.
>
>Simple for you, perhaps, but the interface for sin currently fails
>to provide any information about how many of the bits you're supplying
>should be taken seriously.

ALL of them, as I have already explained to you. It's the magnitude of
the value that decides how many digits of precision I deserve in the
result.
But you've yet to commit to any *concrete* rule for deciding how many
digits you deserve given the magnitude of the argument value.


I have already provided the rule, last week. And when I asked if it was
concrete enough for you, you didn't object.


Oh sorry, I thought you were explaining your (mis)understanding
of the problem with this interaction:

: > >So all the sine function needs is an indication of how much fuzz to
: > >assume in the argument. I've yet to hear a specific criterion. Concrete
: > >example please.
: >
: > Assume all the bits of the argument are exact and the *only* source
: > of fuzziness is the magnitude of the value. By the time the least
: > significant bit of the mantissa means more than 2 * pi, the fuzziness
: > is complete (any result in the -1..1 range is correct).
: >
: > Is that concrete enough for you?
:
: Yes, it's concrete enough to show that you still don't get it.

So what you're committing to is the requirement that the sine
should *not* be computed for any value where 1 ulp weighs more
than 2*pi. And presumably, sine should deliver the best possible
approximation to any argument less than this magnitude. (This
is fine with me, since it still requires an argument reduction
using about 2 1/2 words of precision, so it will break the
vast majority of sine functions that have been in use for the
past several decades. But hey, we're covered, so I'm happy.)
Of course, you still need to specify what to return for the
garbage value, and what error to report (if any). I await your
further guidance.

So you're willing to stand behind this criterion even after I
described, in a later posting, the various ways in which it was
arbitrary? If this is submitted as a DR to WG14, you're
willing, as its original author and proposer, to defend it
against cranky attacks by people who haven't thought through
the matter as carefully as you?

Brave of you. We can call it the Pop Criterion.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #124
In article <c8**********@sunnews.cern.ch>, Da*****@cern.ch (Dan Pop)
wrote:
I don't solve linear systems on the computer. Yet, as far as I know, all
the usual methods come bundled with an error analysis, that allows
computing the precision of the results. If you don't compute this
precision, what's the point in solving the linear systems in the first
place? There are more efficient random number generators...


It depends. Do you want a set of numbers that is close to the correct
solution of the systems, or do you want a set of numbers that is very
close to being a solution, that is each equation is solved with a small
error?

For many systems, you can get the second much easier than the first. Or
let's say you want to minimise f (x). It often doesn't matter whether you
find an x' that is close to the x with the smallest value or not, but
what you want is an x' such that f (x') is close to the smallest value
of f (x).
Nov 14 '05 #125
In article <ch*********************************@slb-newsm1.svr.pol.co.uk> Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
In article <c8**********@sunnews.cern.ch>, Da*****@cern.ch (Dan Pop)
wrote:
I don't solve linear systems on the computer. Yet, as far as I know, all
the usual methods come bundled with an error analysis, that allows
computing the precision of the results. If you don't compute this
precision, what's the point in solving the linear systems in the first
place? There are more efficient random number generators...


It depends. Do you want a set of numbers that is close to the correct
solution of the systems, or do you want a set of numbers that is very
close to being a solution, that is each equation is solved with a small
error?


The actual methods most commonly used give the exact solution to a system
that differs from the original system with a small error. This does *not*
mean that the solution is close to the exact solution of the exact original
problem. It is close to it when the condition number of the original
matrix is good. And that is what the error analysis does do (backward
error analysis as introduced by J.H. Wilkinson).
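In symbols, for those who have not seen it: the computed solution xc of
Ax = b satisfies (A + E) xc = b exactly for some perturbation E with
||E|| <= c(n) * eps * ||A||, but the forward error is only bounded by

    ||xc - x|| / ||x|| <= (roughly) kappa(A) * ||E|| / ||A||,

where kappa(A) = ||A|| * ||A^-1|| is the condition number. A tiny
backward error guarantees a tiny forward error only when kappa(A) is
modest.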

To compute the precision of the result you have to do much more than is
feasible in most cases. Have a look at Karlsruhe arithmetic from the
University at Karlsruhe.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #126
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
To compute the precision of the result you have to do much more than is
feasible in most cases.


What is the value of a result with unknown precision? To repeat my
earlier example, if you don't know if it's 1 +/- 0.001 or 1 +/- 1000,
the value of 1 is of pretty little use.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #127
In <9L****************@nwrddc03.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8**********@sunnews.cern.ch...
>> sinf is useless in portable C code.
>
>But wait a minute. You've just outlined a "concrete example" where
>you know you deserve only 5 digits of precision. sinf will meet
>your needs *portably*. And it won't waste your precious CPU resources
>to give you 16 digits of *bogus* precison. How can you call that
>useless when you just outlined a concrete use case where it portably
>meets your needs exactly?
sinf is a C99 invention, and the portability of C99 code is severely
impaired by how few implementations claim C99 conformance.


Actually it's described in C89.


Are you trolling? It may be described in C89 (as a reserved identifier)
but no conforming C89 implementation is required to provide it. Your
own published implementation of the standard C89 library doesn't...
C99 is perfectly futile to anyone concerned about the portability of his
code. At best, it's a guide about what C89 user space identifiers not
to use in your code (because C99 has no qualms stomping on the C89 user
name space).


sinf has been reserved since 1989, so I can't imagine how using it
could possibly "stomp on" any program. But that's okay, I understand
that you're trolling.


You're trolling. I was talking in general terms: there are dozens of C99
standard library functions whose names belong to the C89 user name space:
all of <complex.h> and <fenv.h>, as well as additions to other headers,
including something as innocuous as round().
>> >> To deserve 16 digits of precision, it is *my* job to call sin() with
>> >> an argument of the right magnitude. It's as simple as that.
>> >
>> >Simple for you, perhaps, but the interface for sin currently fails
>> >to provide any information about how many of the bits you're supplying
>> >should be taken seriously.
>>
>> ALL of them, as I have already explained to you. It's the magnitude of
>> the value that decides how many digits of precision I deserve in the
>> result.
>
>But you've yet to commit to any *concrete* rule for deciding how many
>digits you deserve given the magnitude of the argument value.


I have already provided the rule, last week. And when I asked if it was
concrete enough for you, you didn't object.


Oh sorry, I thought you were explaining your (mis)understanding
of the problem with this interaction:

: > >So all the sine function needs is an indication of how much fuzz to
: > >assume in the argument. I've yet to hear a specific criterion. Concrete
: > >example please.
: >
: > Assume all the bits of the argument are exact and the *only* source
: > of fuzziness is the magnitude of the value. By the time the least
: > significant bit of the mantissa means more than 2 * pi, the fuzziness
: > is complete (any result in the -1..1 range is correct).
: >
: > Is that concrete enough for you?
:
: Yes, it's concrete enough to show that you still don't get it.

So what you're committing to is the requirement that the sine
should *not* be computed for any value where 1 ulp weighs more
than 2*pi.


That's too strong. I would still expect

sin(DBL_MAX) * sin(DBL_MAX) + cos(DBL_MAX) * cos(DBL_MAX)

to be reasonably close to 1.0, if not equal to it. But, other than that,
any value in the [-1, 1] range is OK once 1 ulp exceeds 2*pi.
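
For IEEE doubles the cutoff is easy to locate (a quick sketch; it
needs C99's nextafter, or an equivalent):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double two_pi = 8.0 * atan(1.0);
    double x = 1.0;
    /* Double x until the gap to the next representable value
       exceeds 2*pi. */
    while (nextafter(x, HUGE_VAL) - x <= two_pi)
        x *= 2.0;
    printf("1 ulp first exceeds 2*pi at x = %g\n", x); /* 2^55, ~3.6e16 */
    return 0;
}

So under this criterion sin() owes a meaningful result up to about
3.6e16, and anything in [-1, 1] will do beyond that.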
And presumably, sine should deliver the best possible
approximation to any argument less than this magnitude.

Not even that. If the value in question covers the range [min, max],
sine should provide the best possible approximation of sin(x), where x
is one value in the range [min, max].
(This
is fine with me, since it still requires an argument reduction
using about 2 1/2 words of precision, so it will break the
vast majority of sine functions that have been in use for the
past several decades.

Only the ones that have been *misused*. If I measure the volume of a
sphere with a precision of 1 cubic centimeter, does it make any sense
to compute *and display* its radius with picometer precision?
But hey, we're covered, so I'm happy.)
Of course, you still need to specify what to return for the
garbage value, and what error to report (if any). I await your
further guidance.

For the garbage value, see above. Set errno to any value you consider
sensible and document it. ERANGE would be fine with me.
So you're willing to stand behind this criterion even after I
described, in a later posting, the various ways in which it was
arbitrary?

Yup. I didn't buy any of your sophistry.
If this is submitted as a DR to WG14, you're
willing, as its original author and proposer, to defend it
against cranky attacks by people who haven't thought through
the matter as carefully as you?

Considering that the current standard doesn't impose *any* precision on
sin, I would not expect my DR to be taken seriously. Whatever
considerations led to the current committee decision would be still valid
after the submission of my DR (no precision specification is *more*
permissive than *any* precision specification).

If I were to submit a DR, it would be one requiring each implementation
to document the precision of the 4 arithmetic operations on each floating
point type. But I know, from past c.s.c discussion, that the committee
would turn a deaf ear to my DR.
Brave of you. We can call it the Pop Criterion.


Call it the common sense criterion.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #128
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
: > Assume all the bits of the argument are exact and the *only* source
: > of fuzziness is the magnitude of the value. By the time the least
: > significant bit of the mantissa means more than 2 * pi, the fuzziness
: > is complete (any result in the -1..1 range is correct).
: >
: > Is that concrete enough for you?
:
: Yes, it's concrete enough to show that you still don't get it.

So what you're committing to is the requirement that the sine
should *not* be computed for any value where 1 ulp weighs more
than 2*pi.

That's too strong. I would still expect

sin(DBL_MAX) * sin(DBL_MAX) + cos(DBL_MAX) * cos(DBL_MAX)

to be reasonably close to 1.0, if not equal to it. But, other than that,
any value in the [-1, 1] range is OK once 1 ulp exceeds 2*pi.


There you go again, getting un-concrete. You'll have to specify
what "reasonably" means, or some nit-picker will take apart
your revision of the sine specification.
And presumably, sine should deliver the best possible
approximation to any argument less than this magnitude.


Not even that. If the value in question covers the range [min, max],
sine should provide the best possible approximation of sin(x), where x
is one value in the range [min, max].


Okay, then I see no reason why we shouldn't apply this principle
to *every* math function. That would mean, for example, that
the range of permissible values can be huge for large arguments
to exp, lgamma, tgamma, as well as many combinations of arguments
to pow.
(This
is fine with me, since it still requires an argument reduction
using about 2 1/2 words of precision, so it will break the
vast majority of sine functions that have been in use for the
past several decades.


Only the ones that have been *misused*. If I measure the volume of a
sphere with a precision of 1 cubic centimeter, does it make any sense
to compute *and display* its radius with picometer precision?


No. But if you specify the behavior of a math function and don't
provide a way to specify the uncertainty in its arguments, does
it make any sense to demand that the math functions know how
much uncertainty is there? I've been talking about the requirements
on the authors of math functions and you keep going back to how
they should be properly used. Two different universes of discourse.
But hey, we're covered, so I'm happy.)
Of course, you still need to specify what to return for the
garbage value, and what error to report (if any). I await your
further guidance.


For the garbage value, see above. Set errno to any value you consider
sensible and document it. ERANGE would be fine with me.


Okay, so a programmer knows that a call to sine was silly if

1) the return value is between -1 and +1, and

2) errno indicates that some function value is either too large to
represent (overflow) or too small to represent (underflow).

That should stand up well to criticism.
So you're willing to stand behind this criterion even after I
described, in a later posting, the various ways in which it was
arbitrary?


Yup. I didn't buy any of your sophistry.
If this is submitted as a DR to WG14, you're
willing, as it's original author and proposer, to defend it
against cranky attacks by people who haven't thought through
the matter as carefully as you?


Considering that the current standard doesn't impose *any* precision on
sin, I would not expect my DR to be taken seriously.


So that's your copout for not actually doing anything?
Whatever
considerations led to the current committee decision would be still valid
after the submission of my DR (no precision specification is *more*
permissive than *any* precision specification).

Not even if you were brave enough to pioneer requirements on
precision in the math library? That's not too brave, given that
other languages have done so.
If I were to submit a DR, it would be one requiring each implementation
to document the precision of the 4 arithmetic operations on each floating
point type. But I know, from past c.s.c discussion, that the committee
would turn a deaf ear to my DR.


So that's your copout for not actually doing anything?
Brave of you. We can call it the Pop Criterion.


Call it the common sense criterion.


I won't call it squat if you never submit it as a DR. It's just
more idle carping.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #129
In article <c8***********@sunnews.cern.ch> Da*****@cern.ch (Dan Pop) writes:
In <Hx********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
To compute the precision of the result you have to do much more than is
feasible in most cases.


What is the value of a result with unknown precision? To repeat my
earlier example, if you don't know if it's 1 +/- 0.001 or 1 +/- 1000,
the value of 1 is of pretty little use.


I have precisely stated what the result means in numerical algebra.
But if you do not trust it, see that every use of linear systems with
order above 40 is abandoned. Or look it up in the books by Wilkinson and
Reinsch. Getting guaranteed results is very expensive. The Karlsruhe
group has even had IBM design specific extensions to their 370 to
make it feasible.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #130
On Fri, 21 May 2004 14:43:25 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:

For the garbage value, see above. Set errno to any value you consider
sensible and document it. ERANGE would be fine with me.

Okay, so a programmer knows that a call to sine was silly if

1) the return value is between -1 and +1, and


I'd think about NaN in an IEEE context.
2) errno indicates that some function value is either too large to
represent (overflow) or too small to represent (underflow).

That should stand up well to criticism.


in the case of sin(), yes, because large arguments in floating point
really are mathematically silly. The same is not true of, say, logarithmic
gamma, or just plain logarithm.

Nov 14 '05 #131
"Dr Chaos" <mb****************@NOSPAMyahoo.com> wrote in message
news:sl*******************************@lyapunov.uc sd.edu...
On Fri, 21 May 2004 14:43:25 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:

For the garbage value, see above. Set errno to any value you consider
sensible and document it. ERANGE would be fine with me.

Okay, so a programmer knows that a call to sine was silly if

1) the return value is between -1 and +1, and


I'd think about NaN in an IEEE context.


And sin(x) suddenly becomes NaN beyond some arbitrary number of
radians? I'm still waiting for the justification for any particular
magnitude of the cutoff.
2) errno indicates that some function value is either too large to
represent (overflow) or too small to represent (underflow).

That should stand up well to criticism.


in the case of sin(), yes, because large arguments in floating point
really are mathematically silly.


They really aren't, but this seems to be a point too subtle for
many to grasp.
The same is not true of say logarithmic
gamma, or just plain logarithm.


But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #132
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...

> But the repeated argument is that the silliness stems from the fact
> that the floating-point value x stands for the interval (x - 1ulp,
> x + 1ulp). If it didn't, it would stand for the value it seems to
> stand for, and the library writer might be obliged to actually
> compute the function value corresponding to it. Well, what's sauce
> for the goose is sauce for the gander. Why should sine get progressively
> fuzzier as that interval covers a wider range of function values and
> not require/allow/expect exactly the same from every other function
> that does the same? The exponential suffers from *exactly* the
> same fuzziness problem, to take just the best established of the
> fast-moving functions I mentioned earlier. Aren't we doing all our
> customers a disservice by giving them a misleading value for exp(x)
> when x is large?


Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where the
magnitude of the ulp is greater than 2 pi, it becomes impossible to decide
what the correct answer is.
Nov 14 '05 #133
In <hA**************@nwrddc02.gnilink.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
> > > > Assume all the bits of the argument are exact and the *only* source
> > > > of fuzziness is the magnitude of the value. By the time the least
> > > > significant bit of the mantissa means more than 2 * pi, the fuzziness
> > > > is complete (any result in the -1..1 range is correct).
> > > >
> > > > Is that concrete enough for you?
> > >
> > > Yes, it's concrete enough to show that you still don't get it.
> > >
> > > So what you're committing to is the requirement that the sine
> > > should *not* be computed for any value where 1 ulp weighs more
> > > than 2*pi.
> >
> > That's too strong. I would still expect
> >
> >     sin(DBL_MAX) * sin(DBL_MAX) + cos(DBL_MAX) * cos(DBL_MAX)
> >
> > to be reasonably close to 1.0, if not equal to it. But, other than that,
> > any value in the [-1, 1] range is OK once 1 ulp exceeds 2*pi.
>
> There you go again, getting un-concrete. You'll have to specify
> what "reasonably" means, or some nit-picker will take apart
> your revision of the sine specification.

As close to 1.0 as the C standard requires 2.0 - 1.0 to be ;-)

If you're not interested in a serious discussion, please don't waste my
time.
> > > And presumably, sine should deliver the best possible
> > > approximation to any argument less than this magnitude.
> >
> > Not even that. If the value in question covers the range [min, max],
> > sine should provide the best possible approximation of sin(x), where x
> > is one value in the range [min, max].
>
> Okay, then I see no reason why we shouldn't apply this principle
> to *every* math function. That would mean, for example, that
> the range of permissible values can be huge for large arguments
> to exp, lgamma, tgamma, as well as many combinations of arguments
> to pow.

Precisely. I didn't mean sine to be treated as an exception.

> > > (This
> > > is fine with me, since it still requires an argument reduction
> > > using about 2 1/2 words of precision, so it will break the
> > > vast majority of sine functions that have been in use for the
> > > past several decades.
> >
> > Only the ones that have been *misused*. If I measure the volume of a
> > sphere with a precision of 1 cubic centimeter, does it make any sense
> > to compute *and display* its radius with picometer precision?
>
> No. But if you specify the behavior of a math function and don't
> provide a way to specify the uncertainty in its arguments, does
> it make any sense to demand that the math functions know how
> much uncertainty is there?

The amount of uncertainty assumed by the library is dictated by the
nature of the floating point representation being used. In most real
life applications, the amount is higher than that, but this is none of
the library implementor's concern.

> I've been talking about the requirements
> on the authors of math functions and you keep going back to how
> they should be properly used. Two different universes of discourse.

They stop being different once common sense is applied. The implementor
need not waste any resources providing more precision than a proper usage
of the function requires. The additional "precision" is garbage, anyway:
only a fool would look for *significant* information in those bits.
> > > But hey, we're covered, so I'm happy.)
> > >
> > > Of course, you still need to specify what to return for the
> > > garbage value, and what error to report (if any). I await your
> > > further guidance.
> >
> > For the garbage value, see above. Set errno to any value you consider
> > sensible and document it. ERANGE would be fine with me.
>
> Okay, so a programmer knows that a call to sine was silly if
>
> 1) the return value is between -1 and +1, and
>
> 2) errno indicates that some function value is either too large to
> represent (overflow) or too small to represent (underflow).

The *exact* meaning of ERANGE is as documented for each function using
it. But, as I said, use *any* value *you* consider sensible and document
it.

Again, I'm not interested in this kind of idiotic nitpicking...

> > > So you're willing to stand behind this criterion even after I
> > > described, in a later posting, the various ways in which it was
> > > arbitrary?
> >
> > Yup. I didn't buy any of your sophistry.
> >
> > > If this is submitted as a DR to WG14, you're
> > > willing, as its original author and proposer, to defend it
> > > against cranky attacks by people who haven't thought through
> > > the matter as carefully as you?
> >
> > Considering that the current standard doesn't impose *any* precision on
> > sin, I would not expect my DR to be taken seriously.
>
> So that's your copout for not actually doing anything?

Why should I do *anything*? Are we discussing here the C standard or your
implementation of the C library? Are you so stupid as to be unable to
tell the difference between the two?

I have never expressed any objection to what the C standard says about the
<math.h> stuff, so why the hell should I have to submit *any* DR on this
issue?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #134
On Mon, 24 May 2004 10:51:44 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
"Dr Chaos" <mb****************@NOSPAMyahoo.com> wrote in message
news:sl*******************************@lyapunov.ucsd.edu...
On Fri, 21 May 2004 14:43:25 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
> > > > For the garbage value, see above. Set errno to any value you consider
> > > > sensible and document it. ERANGE would be fine with me.
> > >
> > > Okay, so a programmer knows that a call to sine was silly if
> > >
> > > 1) the return value is between -1 and +1, and
> >
> > I'd think about NaN in an IEEE context.
>
> And sin(x) suddenly becomes NaN beyond some arbitrary number of
> radians?

Yes.

> I'm still waiting for the justification for any particular
> magnitude of the cutoff.

When the least significant bit of the argument induces a change of at
least 2 pi in its value (so that the output is essentially indeterminate
with a 1 lsb fluctuation), then it is extremely likely that the
programmer is making a blunder and ought to know.
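
To make that concrete, here is a quick sketch (mine, assuming IEEE
doubles and C99 <math.h>) that locates the magnitude where 1 ulp of
the argument first exceeds 2 pi:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Find the smallest power of two at which the spacing between
           adjacent doubles (1 ulp) exceeds 2*pi, so that adjacent
           representable arguments to sin() can land anywhere in [-1, 1]. */
        double x = 1.0;
        while (nextafter(x, HUGE_VAL) - x <= 2.0 * 3.14159265358979323846)
            x *= 2.0;
        printf("1 ulp first exceeds 2*pi at x = %g\n", x);  /* 2^55 */
        return 0;
    }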

> > > 2) errno indicates that some function value is either too large to
> > > represent (overflow) or too small to represent (underflow).
> > >
> > > That should stand up well to criticism.
> >
> > in the case of sin(), yes, because large arguments in floating point
> > really are mathematically silly.
>
> They really aren't, but this seems to be a point too subtle for
> many to grasp.

Explain a situation, using computations with standard fixed-precision
floating point, where computing sin of a value passed in a standard
floating point format would be valuable for magnitudes beyond the
above.
> > The same is not true of say logarithmic
> > gamma, or just plain logarithm.
>
> But the repeated argument is that the silliness stems from the fact
> that the floating-point value x stands for the interval (x - 1ulp,
> x + 1ulp). If it didn't, it would stand for the value it seems to
> stand for, and the library writer might be obliged to actually
> compute the function value corresponding to it. Well, what's sauce
> for the goose is sauce for the gander. Why should sine get progressively
> fuzzier as that interval covers a wider range of function values and
> not require/allow/expect exactly the same from every other function
> that does the same? The exponential suffers from *exactly* the
> same fuzziness problem,

It only does in the legalistic sense, but not in any useful sense,
because in those cases people would be comparing magnitudes, and
possibly dividing or taking logarithms of the large results.
(And they should most likely have been taking logarithms first.)

And the error relative to the value remains small, and that's what
people almost always care about when using exp.

At some point you have to return +Inf for exp() of too large an
argument? Is it *really* positive infinity? No. It is far from
positive infinity, well, infinitely so.

There's an arbitrary cutoff right there. Why aren't you forcing
people to convert to an arbitrary precision format?

> to take just the best established of the
> fast-moving functions I mentioned earlier. Aren't we doing all our
> customers a disservice by giving them a misleading value for exp(x)
> when x is large?

sin and cos are different from exp, and especially lgamma.

I would be pleased with a library which signaled excessively large
arguments for sin/cos. It's a blunder, like dereferencing NULL.

For a library which had arguments of arbitrary precision, then yes,
it would be a fine idea to get sin() right.
> P.J. Plauger
> Dinkumware, Ltd.
> http://www.dinkumware.com

Nov 14 '05 #135
"John Cochran" <jd*@smof.fiawol.org> wrote in message
news:c8**********@smof.fiawol.org...
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...

But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?


Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where the
magnitude of the ulp is greater than 2 pi, it becomes impossible to decide
what the correct answer is.


Sigh. Let's try again. You're still arguing from the notion that
a given floating-point number represents a range of values, from
the next lower representable value to the next higher. If that
were the one, obvious, and only meaning of a floating-point argument
value, then I'd agree that the uncertainty in the value of sin(x)
eventually gets so large that it makes little sense to ascribe a
given value to the argument. All we can say is the value is in the
range [-1, 1].

But if that's how we're going to treat sine, it's hard to make a
case for treating any other function differently. The fast moving
functions, of which exp and tgamma are just two examples, *also*
get fuzzier as their arguments get larger. I should stick to
calculus, but I'll instead give a concrete example. If you compute
exp(700.0), you get an answer of about 1.014232054735004e+304.
Now try computing the exponential of the next larger representable
value (which you can obtain from nextafter(700.0, INF)). It looks
like 1.01423205473512e+304, printed to the same precision. That
value is 947 ulp bigger than exp(700.0).
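
(For the curious, the arithmetic is easy to reproduce -- a throwaway
sketch, assuming IEEE doubles and C99 <math.h>:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x   = 700.0;
        double y1  = exp(x);
        double y2  = exp(nextafter(x, HUGE_VAL));
        double ulp = nextafter(y1, HUGE_VAL) - y1;  /* 1 ulp at y1 */
        printf("exp(700.0)       = %.15e\n", y1);
        printf("exp(next double) = %.15e\n", y2);
        printf("difference       = %.0f ulp\n", (y2 - y1) / ulp);
        return 0;
    }

The exact ulp count will vary a little with the quality of the exp
you link against, of course.)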

So by the logic repeatedly expressed in this thread, it's perfectly
fine for exp(700.0) to return *any* value within plus or minus
900-odd ulp. That's nearly *eleven garbage bits*. After all,
whoever computed that 700.0 ought to know the effect of a 1 ulp
error on the argument, so it's silly -- and a waste of time --
for the library to try too hard to get the exponential right.
Right?

So much for the requirements on exp, cosh, sinh, tgamma, and
pow -- to name the functions I know off the top of my head would
go to hell.

But if you assume, for the sake of computing any of these fast
moving functions, that the argument represents some *exact*
value, then there is a well defined function value that can
be delivered corresponding to that exact value. And I'll bet
there's more than one physicist, engineer, and/or economist
who'd rather get that value than something about 1,000 ulp
off. Pragmatically speaking, I do know that plenty of customers
will complain if your exponential function sucks as much as
the interval interpretation would permit.

Now here's the harder thing to understand. *Every* finite
argument has a well defined sine corresponding to it. It's
not easy to compute, but it's well defined. It would be
easier to understand if the sine took an argument in quadrants.
For larger arguments, the granularity gets steadily worse,
but the arguments still stand for an obvious number of
quadrants, or degrees if you prefer to think in those terms.
Once you get to where the least significant bit weighs 0.5,
you can represent only multiples of 45 degrees. All your
results are 0, 1, or sqrt(1/2), possibly with a minus sign.
Go to the next larger exponent and your values are only
0, 1, and -1, corresponding to multiples of 90 degrees. Go
to the next larger exponent and your values are all zero,
because the sine of any multiple of 180 degrees is zero.
And thus it remains for all the larger finite representable
values.
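
(A sketch of such a quadrant sine -- my illustration, hypothetical
name -- shows why it stays well defined for every finite argument:
fmod is exact in IEEE arithmetic, so the reduction never loses a bit,
and only the final scaling by pi/2 rounds.

    #include <math.h>

    double sin_quadrants(double q)
    {
        double r = fmod(q, 4.0);                 /* exact: r in (-4, 4) */
        return sin(r * 1.57079632679489661923);  /* approximately r * pi/2 */
    }

Once the fraction bits of q are gone you get only multiples of 45
degrees, then 90, then nothing but zeros -- exactly the progression
described above.)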

You might argue that the sine in quadrants becomes silly
once you get to counting by 180 degrees, or multiples
thereof. But the round function gets equally silly for
comparable numbers -- once the fraction bits go away, the
answer is obvious and trivial to compute. *But it's still
well defined and meaningful.* If you were to start
throwing exceptions, or returning NaNs, just because the
rounded result is so obvious, I assure you that many
people would have occasion to complain, and justifiably
so. Why should sine be any different?

The added wrinkle with the conventional sine is that it
counts in radians. Thus *every* nonzero argument to sine
corresponds to an angle in quadrants or degrees that
takes an infinite number of bits to precisely represent
(pi being irrational and all). Nevertheless, you can in
principle multiply any finite representable floating-point
value by enough bits of pi to retain the reduced angle
(in the interval [0, 2*pi) say) to sufficient precision
to define the corresponding sine function to the same
precision as the argument. How that happens is up to
the implementor. If the program is computing exact multiples
of radians, it doesn't even have to know how to represent
pi to generate meaningful, and often exact, arguments
to the sine function.

I don't know of any technique to do this calculation other
than to do argument reduction to ever more precision for
ever larger arguments, but it can and has been done.
Since the C Standard says nothing about any limitations
on the size of the argument to sine, it's hard to make a
case that a library vendor has any right *not* to do all
this work. FWIW, I wish there were a way to get off the
hook. But I've yet to see presented in this thread any
rationale for *not* computing sine properly that holds
water. And I've yet to see a rationale for picking a
cutoff point, or an error code, or an error return value,
for large sines that also holds water.
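
To give a feel for what that reduction involves, here is the small
end of the spectrum -- a Cody & Waite style sketch (my illustration;
the two-piece constants are the fdlibm split of pi/2, and scaling
them by 4 is exact). It buys roughly 33 extra bits in the reduction,
which is nowhere near full range; getting *every* finite argument
right takes a Payne & Hanek style reduction with hundreds of bits
of pi:

    #include <math.h>

    static const double PIO2_HI = 1.57079632673412561417e+00;  /* 33 bits */
    static const double PIO2_LO = 6.07710050650619224932e-11;

    double sin_cw(double x)   /* hypothetical name, sketch only */
    {
        double twopi_hi = 4.0 * PIO2_HI;  /* exact power-of-two scaling */
        double twopi_lo = 4.0 * PIO2_LO;
        double n = nearbyint(x / 6.28318530717958647693);
        /* r = x - n*2*pi, subtracted piece by piece so the leading
           term cancels exactly while n stays modest */
        double r = (x - n * twopi_hi) - n * twopi_lo;
        return sin(r);  /* a real library would use its own kernel here */
    }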

Also FWIW, I just computed the change in the sine function
between sin(700.0) and sin(nextafter(700.0, INF)). It's
-859 ulp. That's a *smaller* interval than for exp in
the same argument range. So tell me again why it's okay to
punt on sine and not on exp.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #136
"Dan Pop" <Da*****@cern.ch> wrote in message
news:c8***********@sunnews.cern.ch...
> > > > If this is submitted as a DR to WG14, you're
> > > > willing, as its original author and proposer, to defend it
> > > > against cranky attacks by people who haven't thought through
> > > > the matter as carefully as you?
> > >
> > > Considering that the current standard doesn't impose *any* precision
> > > on sin, I would not expect my DR to be taken seriously.
> >
> > So that's your copout for not actually doing anything?
>
> Why should I do *anything*? Are we discussing here the C standard or your
> implementation of the C library? Are you so stupid as to be unable to
> tell the difference between the two?
>
> I have never expressed any objection to what the C standard says about the
> <math.h> stuff, so why the hell should I have to submit *any* DR on this
> issue?


Don't get me wrong, I never expected you to. It's obvious by now
that you're a stuck opposer. You love to be rude to people --
by calling them stupid, for example -- and you love to sit on
the sidelines and criticize the work of others. But I'm quite
certain you will *never* do anything that would put you in
the position of having your own detailed technical proposals
be criticized by others.

I respond to you because it gives me the opportunity to explain
things to others. I have no expectation that you'll ever let in
a new idea or admit you're anything but right all the time.
Once I find myself just responding to your jibes and shifting
arguments, I know it's time to quit.

This is that time.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #137
On Mon, 24 May 2004 16:40:35 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
"John Cochran" <jd*@smof.fiawol.org> wrote in message
news:c8**********@smof.fiawol.org...
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...
>
>But the repeated argument is that the silliness stems from the fact
>that the floating-point value x stands for the interval (x - 1ulp,
>x + 1ulp). If it didn't, it would stand for the value it seems to
>stand for, and the library writer might be obliged to actually
>compute the function value corresponding to it. Well, what's sauce
>for the goose is sauce for the gander. Why should sine get progressively
>fuzzier as that interval covers a wider range of function values and
>not require/allow/expect exactly the same from every other function
>that does the same? The exponential suffers from *exactly* the
>same fuzziness problem, to take just the best established of the
>fast-moving functions I mentioned earlier. Aren't we doing all our
>customers a disservice by giving them a misleading value for exp(x)
>when x is large?
Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where the
magnitude of the ulp is greater than 2 pi, it becomes impossible to decide
what the correct answer is.


Sigh. Let's try again. You're still arguing from the notion that
a given floating-point number represents a range of values, from
the next lower representable value to the next higher.


I think they're arguing from the notion that "the purpose of
computing is insight, not numbers."
But if that's how we're going to treat sine, it's hard to make a
case for treating any other function differently. The fast moving
functions, of which exp and tgamma are just two examples, *also*
get fuzzier as their arguments get larger. I should stick to
calculus, but I'll instead give a concrete example. If you compute
exp(700.0), you get an answer of about 1.014232054735004e+304.
Now try computing the exponential of the next larger representable
value (which you can obtain from nextafter(700.0, INF)). It looks
like 1.01423205473512e+304, printed to the same precision. That
value is 947 ulp bigger than exp(700.0).
relative error is what?
So by the logic repeatedly expressed in this thread, it's perfectly
fine for exp(700.0) to return *any* value within plus or minus
900-odd ulp. That's nearly *eleven garbage bits*.
What matters is the number of non-garbage bits.

Suppose that number were zero.
After all,
whoever computed that 700.0 ought to know the effect of a 1 ulp
error on the argument, so it's silly -- and a waste of time --
for the library to try too hard to get the exponential right. Right?

To some degree, yes, but the case is far stronger for sin/cos.

But if you assume, for the sake of computing any of these fast
moving functions, that the argument represents some *exact*
value, then there is a well defined function value that can
be delivered corresponding to that exact value. And I'll bet
there's more than one physicist, engineer, and/or economist
who'd rather get that value than something about 1,000 ulp
off.
Let's stick to sin and cos. I haven't heard of one explicit
example of somebody who really thought about this and really
needed it.

By contrast, if I had a student writing some optics simulation
software, say simulating chaos in erbium-doped fiber ring lasers, if
he or she took sin or cos() of a very large value, they would be making
a clear error by doing so. It conceivably could happen if they
implemented some textbook formulae naively.

I would feel more comfortable if the library automatically signaled
this, as it would be an instructive point, and it might prevent
wasted calculation or worse, an improper scientific inference.
Pragmatically speaking, I do know that plenty of customers
will complain if your exponential function sucks as much as
the interval interpretation would permit.
who would complain, and what would be the particular application
they'd complain about?
You might argue that the sine in quadrants becomes silly
once you get to counting by 180 degrees, or multiples
thereof. But the round function gets equally silly for
comparable numbers -- once the fraction bits go away, the
answer is obvious and trivial to compute. *But it's still
well defined and meaningful.* If you were to start
throwing exceptions, or returning NaNs, just because the
rounded result is so obvious, I assure you that many
people would have occasion to complain, and justifiably
so. Why should sine be any different?
Because the uses of sine are different, and rounding
produces useful results.
But I've yet to see presented in this thread any
rationale for *not* computing sine properly that holds
water.
A programmer's conceptual blunder.
And I've yet to see a rationale for picking a
cutoff point, or an error code, or an error return value,
for large sines that also holds water.
Also FWIW, I just computed the change in the sine function
between sin(700.0) and sin(nextafter(700.0, INF)). It's
-859 ulp. That's a *smaller* interval than for exp in
the same argument range. So tell me again why it's okay to
punt on sine and not on exp.
relative error, and facts of actual use.
> P.J. Plauger
> Dinkumware, Ltd.
> http://www.dinkumware.com


Nov 14 '05 #138
In article <c8**********@smof.fiawol.org>,
jd*@smof.fiawol.org (John Cochran) wrote:
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...

But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?


Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where the
magnitude of the ulp is greater than 2 pi, it becomes impossible to decide
what the correct answer is.


If the argument x is around 709 then according to the theory that a
double number represents an interval, x represents quite a large
interval; for example +/- 2^-44 with IEEE 64 bit representation. Not
quite as bad as for the sine function, but you have to start thinking
about whether a maths library should return results with full precision
or not.
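
(Easy to check -- a two-line sketch, assuming IEEE doubles:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 709.78;  /* half the spacing to the next double */
        printf("%g\n", (nextafter(x, HUGE_VAL) - x) / 2);  /* ~2^-44 */
        return 0;
    }

prints 5.68434e-14, i.e. 2^-44.)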
Nov 14 '05 #139
Dr Chaos <mb****************@NOSPAMyahoo.com> wrote:
> On Mon, 24 May 2004 16:40:35 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
> > But if you assume, for the sake of computing any of these fast
> > moving functions, that the argument represents some *exact*
> > value, then there is a well defined function value that can
> > be delivered corresponding to that exact value. And I'll bet
> > there's more than one physicist, engineer, and/or economist
> > who'd rather get that value than something about 1,000 ulp
> > off.
>
> Let's stick to sin and cos. I haven't heard of one explicit
> example of somebody who really thought about this and really
> needed it.
>
> By contrast, if I had a student writing some optics simulation
> software, say simulating chaos in erbium-doped fiber ring lasers, if
> he or she took sin or cos() of a very large value, they would be making
> a clear error by doing so. It conceivably could happen if they
> implemented some textbook formulae naively.
>
> I would feel more comfortable if the library automatically signaled
> this, as it would be an instructive point, and it might prevent
> wasted calculation or worse, an improper scientific inference.


I have a suggestion so that your heroic efforts go to something
more useful:

Here is something that might, conceivably, occasionally be used:

    double sin_2pi_product(double z1, double z2);

Return the sine of 2*pi*z1*z2, where z1 and z2 are rationals exactly
representable in IEEE double and 2*pi is carried to as many
significant digits as necessary.
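
One possible sketch (mine, leaning on C99 fma to recover the rounding
error of the product, and on fmod being exact, so that only the final
scaling by 2*pi rounds):

    #include <math.h>

    double sin_2pi_product(double z1, double z2)
    {
        double p = z1 * z2;          /* rounded product */
        double e = fma(z1, z2, -p);  /* exact residue: z1*z2 - p */
        double f = fmod(p, 1.0) + e; /* fractional part of the product */
        return sin(f * 6.28318530717958647693);
    }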

Nov 14 '05 #140
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
In article <c8**********@smof.fiawol.org>,
jd*@smof.fiawol.org (John Cochran) wrote:
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...

But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?


Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where
the magnitude of the ulp is greater than 2 pi, it becomes impossible to
decide what the correct answer is.


If the argument x is around 709 then according to the theory that a
double number represents an interval, x represents quite a large
interval; for example +/- 2^-44 with IEEE 64 bit representation. Not
quite as bad as for the sine function, but you have to start thinking
whether or not a maths library should return results with full precision
or not.


According to that theory, yes. I've never disputed that. The issue
I keep coming back to, however, is that no programming language
standard I know of invites authors of math libraries to take
advantage of that theory. Rather, standards define a floating-point
value as its overt value -- summing the contributions of all the
mantissa bits, multiplying by the exponent, negating if negative.
Nowhere does it say that the argument is fuzzy and the result
should therefore reflect the fuzziness.

And in support of this viewpoint, any number of test suites actually
check the *quality* of a math library on the basis of how close
the actual function values are to the best internal representation
of the function value corresponding to the argument treated as an
exact value. I assure you that existing customers file bug reports
and potential customers stay away in droves when a library shows
up too many "errors" on these tests. Fuzz just don't cut it.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #141
"Dr Chaos" <mb****************@NOSPAMyahoo.com> wrote in message
news:sl*******************************@lyapunov.ucsd.edu...
On Mon, 24 May 2004 16:40:35 GMT, P.J. Plauger <pj*@dinkumware.com> wrote:
"John Cochran" <jd*@smof.fiawol.org> wrote in message
news:c8**********@smof.fiawol.org...
In article <4t******************@nwrddc03.gnilink.net>,
P.J. Plauger <pj*@dinkumware.com> wrote:
SNIP...

>
>But the repeated argument is that the silliness stems from the fact
>that the floating-point value x stands for the interval (x - 1ulp,
>x + 1ulp). If it didn't, it would stand for the value it seems to
>stand for, and the library writer might be obliged to actually
>compute the function value corresponding to it. Well, what's sauce
>for the goose is sauce for the gander. Why should sine get progressively
>fuzzier as that interval covers a wider range of function values and
>not require/allow/expect exactly the same from every other function
>that does the same? The exponential suffers from *exactly* the
>same fuzziness problem, to take just the best established of the
>fast-moving functions I mentioned earlier. Aren't we doing all our
>customers a disservice by giving them a misleading value for exp(x)
>when x is large?

Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78 give or take will give a result close to the upper limit
representable. The main difference for sin() vs exp() is that for all of
the values you can pass to exp(), there is an answer and it is possible
to come up with a floating point number closest to the correct answer to
infinite precision. However, with sin() once you get to an input where
the magnitude of the ulp is greater than 2 pi, it becomes impossible to
decide what the correct answer is.
Sigh. Let's try again. You're still arguing from the notion that
a given floating-point number represents a range of values, from
the next lower representable value to the next higher.


I think they're arguing from the notion that "the purpose of
computing is insight, not numbers."


That's nice. But you ain't gonna get squat in the way of insight
if your computer doesn't deliver on its promise to be damned fast
and highly accurate at carrying out your wishes. When R.W. Hamming
dropped that little aphorism, he was taking for granted that the
computer would fulfill its part of the bargain. He was reminding
the human users what they should be bringing to the party.
But if that's how we're going to treat sine, it's hard to make a
case for treating any other function differently.

The fast moving
functions, of which exp and tgamma are just two examples, *also*
get fuzzier as their arguments get larger. I should stick to
calculus, but I'll instead give a concrete example. If you compute
exp(700.0), you get an answer of about 1.014232054735004e+304.
Now try computing the exponential of the next larger representable
value (which you can obtain from nextafter(700.0, INF)). It looks
like 1.01423205473512e+304, printed to the same precision. That
value is 947 ulp bigger than exp(700.0).


relative error is what?


Roughly +/- 2^10 / 2^53. Or a loss of eleven bits out of 53.
So by the logic repeatedly expressed in this thread, it's perfectly
fine for exp(700.0) to return *any* value within plus or minus
900-odd ulp. That's nearly *eleven garbage bits*.


What matters is the number of non-garbage bits.


On the input value? Yes, that's what matters to the *programmer*.
The library writer has no idea whether there are *any* garbage
bits. We as a class are expected to assume there are none.
And people would like us to deliver results with *no* garbage
bits (assuming an exact input), so we don't add unnecessarily
to the problem.
Suppose that number were zero.
Which number? I'm supposing that the argument has no garbage
bits and that the highest quality implementation produces
no garbage bits for a function return value. Haven't heard
any better criterion yet.
After all,
whoever computed that 700.0 ought to know the effect of a 1 ulp
error on the argument, so it's silly -- and a waste of time --
for the library to try too hard to get the exponential right.


To some degree, yes, but the case is far stronger for sin/cos.


Really? How? I've yet to hear an argument that doesn't apply
equally well to the other functions. If you're talking about
the *likelihood* that sin(700.0) is not a well thought out
function call, then I agree that it's more likely to be
nonsense than exp(700.0). But once again, *the C Standard doesn't
say so and the library author has to assume that both are valid
calls.* Nobody has given a concrete reason why the library author
should be let off the hook so far, IMO.
Right?

But if you assume, for the sake of computing any of these fast
moving functions, that the argument represents some *exact*
value, then there is a well defined function value that can
be delivered corresponding to that exact value. And I'll bet
there's more than one physicist, engineer, and/or economist
who'd rather get that value than something about 1,000 ulp
off.


Let's stick to sin and cos. I haven't heard of one explicit
example of somebody who really thought about this and really
needed it.


And I have yet to have a customer call up and say that computing
exp(0.73) was *really* important. Is this a popularity contest
we're running, or are we trying to understand a programming
language specification?
By constrast, If I had a student writing some optics simulation
software, say simulating chaos in erbium-doped fiber ring lasers, if
he or she took sin or cos() of a very large value, they were making a
clear error by doing so. It conceivably could happen if they
implemented some textbook formulae naively.
Agreed. What has that to do with the requirements on writing the
Standard C library? We have to be sure to sort zero items properly
with qsort, but if I ever saw a student program that explicitly
called qsort for zero items I'd know s/he was confused. Such
examples are irrelevant to the requirements placed on library
writers.
I would feel more comfortable if the library automatically signaled
this, as it would be an instructive point, and it might prevent
wasted calculation or worse, an improper scientific inference.


So would I. But I've yet to hear presented here a concrete, *defensible*
definition of where "this" cuts in for sine.
Pragmatically speaking, I do know that plenty of customers
will complain if your exponential function sucks as much as
the interval interpretation would permit.


who would complain, and what would be the particular application
they'd complain about?


Not for me to say. If I told them their application was unimportant,
or poorly designed, or had bad feng shui, would that let me off the
hook?
You might argue that the sine in quadrants becomes silly
once you get to counting by 180 degrees, or multiples
thereof. But the round function gets equally silly for
comparable numbers -- once the fraction bits go away, the
answer is obvious and trivial to compute. *But it's still
well defined and meaningful.* If you were to start
throwing exceptions, or returning NaNs, just because the
rounded result is so obvious, I assure you that many
people would have occasion to complain, and justifiably
so. Why should sine be any different?


Because the uses of sine are different, and rounding
produces useful results.


So too does sine and cosine quite often. Who are you to tell
somebody else that s/he doesn't know what s/he is doing?
You might be right that they *likely* don't know what they're
doing, but you'd be foolish to take that for granted from
the outset.
But I've yet to see presented in this thread any
rationale for *not* computing sine properly that holds
water.


A programmer's conceptual blunder.


Okay, quantify that and reduce it to standardese and I'm all
for it.
And I've yet to see a rationale for picking a
cutoff point, or an error code, or an error return value,
for large sines that also holds water.


Also FWIW, I just computed the change in the sine function
between sin(700.0) and sin(nextafter(700.0, INF)). It's
-859 ulp. That's a *smaller* interval than for exp in
the same argument range. So tell me again why it's okay to
punt on sine and not on exp.


relative error, and facts of actual use.


The relative errors for sin(700) and exp(700) are comparable,
as I just demonstrated (and as is obvious from a few lines
of calculus). "Facts of actual use" are anecdotal at best
and pure conjecture at worst. Whatever it is, it ain't math
and it ain't standardese.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #142
In article <Y9*****************@nwrddc03.gnilink.net>,
"P.J. Plauger" <pj*@dinkumware.com> wrote:
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
If the argument x is around 709 then according to the theory that a
double number represents an interval, x represents quite a large
interval; for example +/- 2^-44 with IEEE 64 bit representation. Not
quite as bad as for the sine function, but you have to start thinking
whether or not a maths library should return results with full precision
or not.


According to that theory, yes. I've never disputed that. The issue
I keep coming back to, however, is that no programming language
standard I know of invites authors of math libraries to take
advantage of that theory. Rather, standards define a floating-point
value as its overt value -- summing the contributions of all the
mantissa bits, multiplying by the exponent, negating if negative.
Nowhere does it say that the argument is fuzzy and the result
should therefore reflect the fuzziness.

And in support of this viewpoint, any number of test suites actually
check the *quality* of a math library on the basis of how close
the actual function values are to the best internal representation
of the function value corresponding to the argument treated as an
exact value. I assure you that existing customers file bug reports
and potential customers stay away in droves when a library shows
up too many "errors" on these tests. Fuzz just don't cut it.


Actually, what I meant was: With the exp function for large arguments as
an example, you have to make up your mind if you want to be lazy and
return results with an error of several hundred ulps under the
assumption that no one cares, or whether you want to produce quality
results. In this case, I would prefer quality.

In the end, errors tend to add up. Even if the calculation of x included
some significant rounding error, that is no good reason why the
calculation of exp (x) should add more error to this than absolutely
necessary.
Nov 14 '05 #143
In <ch*********************************@slb-newsm1.svr.pol.co.uk> Christian Bau <ch***********@cbau.freeserve.co.uk> writes:
> Actually, what I meant was: With the exp function for large arguments as
> an example, you have to make up your mind if you want to be lazy and
> return results with an error of several hundred ulps under the
> assumption that no one cares, or whether you want to produce quality
> results. In this case, I would prefer quality.

Me too, it's just that our notions of "quality" differ: a result with
garbage bits presented as precision bits and obtained by wasting cpu
resources is a low quality result.

> In the end, errors tend to add up. Even if the calculation of x included
> some significant rounding error, that is no good reason why the
> calculation of exp (x) should add more error to this than absolutely
> necessary.


This statement makes sense *only* if one considers floating point
representations as standing for precise mathematical values. This doesn't
happen to be the case (when was the last time all your input consisted
exclusively of values exactly representable in the target floating point
types?). For all you know, the result with "more error
than absolutely necessary" may be the *exact* result to the actual
problem. If you don't understand this statement, you shouldn't be
dealing with floating point values...

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #144
