"John Cochran" <jd*@smof.fiawol.org> wrote in message
news:c8**********@smof.fiawol.org...

> In article <4t******************@nwrddc03.gnilink.net>,
> P.J. Plauger <pj*@dinkumware.com> wrote:
> SNIP...

>>But the repeated argument is that the silliness stems from the fact
>>that the floating-point value x stands for the interval (x - 1ulp,
>>x + 1ulp). If it didn't, it would stand for the value it seems to
>>stand for, and the library writer might be obliged to actually
>>compute the function value corresponding to it. Well, what's sauce
>>for the goose is sauce for the gander. Why should sine get
>>progressively fuzzier as that interval covers a wider range of
>>function values and not require/allow/expect exactly the same from
>>every other function that does the same? The exponential suffers
>>from *exactly* the same fuzziness problem, to take just the best
>>established of the fast-moving functions I mentioned earlier. Aren't
>>we doing all our customers a disservice by giving them a misleading
>>value for exp(x) when x is large?

> Huh?
>
> What large values for exp(x)?
>
> For most of the floating point implementations out there, an argument
> of about 709.78, give or take, will give a result close to the upper
> limit representable. The main difference between sin() and exp() is
> that for all of the values you can pass to exp(), there is an answer,
> and it is possible to come up with the floating point number closest
> to the correct answer computed to infinite precision. However, with
> sin(), once you get to an input where the magnitude of the ulp is
> greater than 2 pi, it becomes impossible to decide what the correct
> answer is.

Sigh. Let's try again. You're still arguing from the notion that
a given floating-point number represents a range of values, from
the next lower representable value to the next higher. If that
were the one, obvious, and only meaning of a floating-point argument
value, then I'd agree that the uncertainty in the value of sin(x)
eventually gets so large that it makes little sense to ascribe a
given value to the argument. All we can say is the value is in the
range [-1, 1].

But if that's how we're going to treat sine, it's hard to make a
case for treating any other function differently. The fast-moving
functions, of which exp and tgamma are just two examples, *also*
get fuzzier as their arguments get larger. I should stick to
calculus, but I'll instead give a concrete example. If you compute
exp(700.0), you get an answer of about 1.014232054735004e+304.
Now try computing the exponential of the next larger representable
value (which you can obtain from nextafter(700.0, INF)). It looks
like 1.01423205473512e+304, printed to the same precision. That
value is 947 ulp bigger than exp(700.0).

So by the logic repeatedly expressed in this thread, it's perfectly
fine for exp(700.0) to return *any* value within plus or minus
900-odd ulp. That's nearly *eleven garbage bits*. After all,
whoever computed that 700.0 ought to know the effect of a 1 ulp
error on the argument, so it's silly -- and a waste of time --
for the library to try too hard to get the exponential right.

Right?

So much for the requirements on exp, cosh, sinh, tgamma, and
pow -- to name the functions I know off the top of my head would
go to hell.

But if you assume, for the sake of computing any of these
fast-moving functions, that the argument represents some *exact*
value, then there is a well-defined function value that can
be delivered corresponding to that exact value. And I'll bet
there's more than one physicist, engineer, and/or economist
who'd rather get that value than something about 1,000 ulp
off. Pragmatically speaking, I do know that plenty of customers
will complain if your exponential function sucks as much as
the interval interpretation would permit.

Now here's the harder thing to understand. *Every* finite
argument has a well-defined sine corresponding to it. It's
not easy to compute, but it's well defined. It would be
easier to understand if the sine took an argument in quadrants.
For larger arguments, the granularity gets steadily worse,
but the arguments still stand for an obvious number of
quadrants, or degrees if you prefer to think in those terms.

Once you get to where the least significant bit weighs 0.5,
you can represent only multiples of 45 degrees. All your
results are 0, 1, or sqrt(1/2), possibly with a minus sign.
Go to the next larger exponent and your values are only
0, 1, and -1, corresponding to multiples of 90 degrees. Go
to the next larger exponent and your values are all zero,
because the sine of any multiple of 180 degrees is zero.
And thus it remains for all the larger finite representable
values.

You might argue that the sine in quadrants becomes silly
once you get to counting by 180 degrees, or multiples
thereof. But the round function gets equally silly for
comparable numbers -- once the fraction bits go away, the
answer is obvious and trivial to compute. *But it's still
well defined and meaningful.* If you were to start
throwing exceptions, or returning NaNs, just because the
rounded result is so obvious, I assure you that many
people would have occasion to complain, and justifiably
so. Why should sine be any different?

The added wrinkle with the conventional sine is that it
counts in radians. Thus *every* nonzero argument to sine
corresponds to an angle in quadrants or degrees that
takes an infinite number of bits to precisely represent
(pi being irrational and all). Nevertheless, you can in
principle multiply any finite representable floating-point
value by enough bits of pi to retain the reduced angle
(in the interval [0, 2*pi) say) to sufficient precision
to define the corresponding sine function to the same
precision as the argument. How that happens is up to
the implementor. If the program is computing exact multiples
of radians, it doesn't even have to know how to represent
pi to generate meaningful, and often exact, arguments
to the sine function.

I don't know of any technique to do this calculation other
than to do argument reduction to ever more precision for
ever larger arguments, but it can and has been done.

Since the C Standard says nothing about any limitations
on the size of the argument to sine, it's hard to make a
case that a library vendor has any right *not* to do all
this work. FWIW, I wish there were a way to get off the
hook. But I've yet to see presented in this thread any
rationale for *not* computing sine properly that holds
water. And I've yet to see a rationale for picking a
cutoff point, or an error code, or an error return value,
for large sines that also holds water.

Also FWIW, I just computed the change in the sine function
between sin(700.0) and sin(nextafter(700.0, INF)). It's
-859 ulp. That's a *smaller* interval than for exp in
the same argument range. So tell me again why it's okay to
punt on sine and not on exp.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com