
Hi all,
Normally, I would trust that the ANSI libraries are written to be as
efficient as possible, but I have an application in which the majority of
the run time is calling the acos(...) method. So now I start to wonder how
that method, and others in the math.h library, are implemented.
Dave  

Dave Rudolf <nu****************@hotmail.com> scribbled the following: Hi all,
Normally, I would trust that the ANSI libraries are written to be as efficient as possible, but I have an application in which the majority of the run time is calling the acos(...) method. So now I start to wonder how that method, and others in the math.h library, are implemented.
This depends completely on the C compiler used and is not standardised.
If they fulfill the specification in the ISO C standard, they might be
implemented by carrier pigeons, for all C cares.

/ Joona Palaste (pa*****@cc.helsinki.fi) ------------- Finland \
\ http://www.helsinki.fi/~palaste --------------------- rules! /
"I am lying."
  - Anon

"Dave Rudolf" <nu****************@hotmail.com> wrote in message
news:10*************@corp.supernews.com... Hi all,
Normally, I would trust that the ANSI libraries are written to be as efficient as possible,
I would *hope* that, not necessarily "trust".
but I have an application in which the majority of the run time is calling the acos(...) method.
I seriously doubt that the C runtime calls 'acos()' at all.
Perhaps the application itself does.
So now I start to wonder how that method, and others in the math.h library, are implemented.
However a given implementor does it. The language standard
only specifies *behavior*, not implementation or performance.
Such performance also necessarily depends upon the underlying
hardware.
Mike  

Dave Rudolf <nu****************@hotmail.com> wrote: Normally, I would trust that the ANSI libraries are written to be as efficient as possible, but I have an application in which the majority of the run time is calling the acos(...) method. So now I start to wonder how that method, and others in the math.h library, are implemented.
There's nothing in the ANSI standard that requires a certain
implementation, so acos() could be slow as molasses without
any violation of the standard. From this it also follows that
your question can't be answered in this group, since it's
not a question about standard C but about the quality of the
implementation you're using. So your best bet is asking in
a group devoted to your compiler/linker/standard library
combination, or, if it's a commercial product, the company
that wrote them.
<OT>
I would guess that on a machine with a floating-point unit
it will be used by the acos() library function, so in this
case you probably won't get much faster with some fancy
optimizations.
</OT>
Regards, Jens

\ Jens Thoms Toerring ___ Je***********@physik.fu-berlin.de
\__________________________ http://www.toerring.de  

>>Normally, I would trust that the ANSI libraries are written to be as efficient as possible,
Impossible to tell. Especially not for time-critical duties. You're not
even defining what you call "efficient".
but I have an application in which the majority of the run time is calling the acos(...) method.
I seriously doubt that the C runtime calls 'acos()' at all. Perhaps the application itself does.
That's probably what he meant.
So now I start to wonder how that method, and others in the math.h library, are implemented.
FYI, in C we don't speak of "methods" but rather of "functions".
Nobody but the library's author can answer that question.
Optimizing is a whole job in itself, and it starts with profiling. It
seems like you profiled some and found the culprit. Now it's up to
you to improve your code: since using acos() from the standard library
on your particular platform is too slow for your needs, you'll have
to either 1) not use it at all (rewriting your numerical algorithm may
help) or 2) if you absolutely can't do without it, find a speedier
implementation of acos for your platform (it may involve using some
assembly, or whatever else that you may see fit).
As a general (but not absolute) rule, the C standard library is
implemented more with "exactness" in mind than "speed". For all
maths functions, that means that they are generally implemented so as
to lead to the best precision possible, possibly sacrificing speed.
You'll have to explore other paths: writing your own maths functions,
or finding a third-party library that will fit your needs.  

Dave Rudolf wrote: Hi all,
Normally, I would trust that the ANSI libraries are written to be as efficient as possible,
As possible given what? Performance is important to most compiler
implementors, but it's often third down on the list, after 1)
correctness and 2) portability.
And not just correctness according to the ANSI C standard, but
correctness according to the relevant IEEE floating point standards,
with sane handling of NaN and plus or minus infinity. And also
correctness for very large numbers, very small numbers, etc.
Those checks may not be necessary in your program, but you're still
paying for them.
but I have an application in which the majority of the run time is calling the acos(...) method. So now I start to wonder how that method, and others in the math.h library, are implemented.
Why not get libm from GNU and find out? I wouldn't be surprised if you
could beat acos() with "good enough" precision over a particular range
(whatever you're interested in). Try using a Taylor expansion.

Pull out a splinter to reply.  
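Peter's "good enough over a particular range" idea can be sketched with a truncated arcsine series, using acos(x) = pi/2 - asin(x). The three-term cutoff is arbitrary, the function name is invented, and accuracy degrades badly as |x| approaches 1:

```c
#include <assert.h>
#include <math.h>

/* Crude acos for |x| well below 1: pi/2 minus a truncated asin series.
   asin(x) = x + x^3/6 + 3*x^5/40 + ... (standard series terms). */
static double acos_crude(double x)
{
    double x2 = x * x;
    double asin_approx = x * (1.0 + x2 * (1.0 / 6.0 + x2 * (3.0 / 40.0)));
    return 3.14159265358979323846 / 2.0 - asin_approx;
}
```

For |x| <= 0.5 or so this is good to a few decimal places; near |x| = 1 the series converges far too slowly to be usable.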

Dave Rudolf wrote: Normally, I would trust that the ANSI libraries are written to be as efficient as possible
The ANSI/ISO C standards do *not* specify performance.
The "quality" of the implementation is determined by "market forces".
It is up to you to evaluate the performance of your implementation
and shop around for a better one if you find it inadequate.
but I have an application in which the majority of the run time is calling the acos(...) method. So now I start to wonder how that method, and others in the math.h library, are implemented.
The arc cosine function is expensive to compute.
It's hard to beat commercial C math library developers
but you should shop around.  

Dave Rudolf wrote: Hi all,
Normally, I would trust that the ANSI libraries are written to be as efficient as possible, but I have an application in which the majority of the run time is calling the acos(...) method. So now I start to wonder how that method, and others in the math.h library, are implemented.
<topicality level="marginal">
Others have pointed out that there are no guarantees of
efficiency in the Standard, and that efficiencies will vary
from one implementation to the next. However (and although
there's no guarantee of this, either), it seems likely that
implementors will consider accuracy and "good behavior" to
be more important than efficiency. After all, if efficiency
were paramount, one could get away with
double acos(double x) { return 42.0; }
There's actually a little bit of sense behind this joke
implementation: It should prompt you to ask how much accuracy
you actually need in your application. For example, if you
only need a few decimal places' worth, you might gain speed
by precomputing a table of accurate acos() values and then
doing crude interpolations in the table to get the quick-and-dirty
values you need in such great quantities.
... which also raises the question: Why does the application
need such a huge number of acos() values?
</topicality>
 Er*********@sun.com  

Peter Ammon wrote:
.... snip ... Why not get libm from GNU and find out? I wouldn't be surprised if you could beat acos() with "good enough" precision over a particular range (whatever you're interested in). Try using a Taylor expansion.
Not if you want speed you don't. With some slight trigonometry
you should be able to convert cos(angle) into tan(angle). Use
this as input to an arctan(value) routine, which can in turn be
built in various ways. One uses a levelled Tschebychev
approximation over 0 to 1.

Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!  

CBFalconer wrote: Peter Ammon wrote:
... snip ...
Why not get libm from GNU and find out? I wouldn't be surprised if you could beat acos() with "good enough" precision over a particular range (whatever you're interested in). Try using a Taylor expansion.
Not if you want speed you don't. With some slight trigonometry, you should be able to convert cos(angle) into tan(angle). Use this as input to an arctan(value) routine, which can in turn be built in various ways. One uses a leveled Tschebychev approximation over 0 to 1.
Not if you want speed you don't.
This method requires reading a bunch of high-precision
floating-point numbers from system memory,
which can actually require more time than computing
Taylor coefficients in registers on-the-fly.
You're out of your depth here, Chuck.  

In article <10*************@corp.supernews.com>,
"Dave Rudolf" <nu****************@hotmail.com> wrote: Hi all,
Normally, I would trust that the ANSI libraries are written to be as efficient as possible, but I have an application in which the majority of the run time is calling the acos(...) method. So now I start to wonder how that method, and others in the math.h library, are implemented.
Since I have actually not used the acos () function _once_ in over
twenty years of C programming, including tons of numerical mathematics,
3D graphics etc., I am just wondering what you are doing.
You might consider posting what you are actually trying to do, and it is
very likely that there will be ways to reduce the number of calls to
acos () considerably.  

Christian Bau wrote: Since I have actually not used the acos() function _once_ in over twenty years of C programming,
So... you're a newbie eh?
including tons of numerical mathematics, 3D graphics, etc.
It appears in signal processing applications -
a Dolph-Chebychev window function, for example.  

In article <40**************@jpl.nasa.gov>,
"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote: CBFalconer wrote:
Peter Ammon wrote:
... snip ...
Why not get libm from GNU and find out? I wouldn't be surprised if you could beat acos() with "good enough" precision over a particular range (whatever you're interested in). Try using a Taylor expansion.
Not if you want speed you don't. With some slight trigonometry, you should be able to convert cos(angle) into tan(angle). Use this as input to an arctan(value) routine, which can in turn be built in various ways. One uses a leveled Tschebychev approximation over 0 to 1.
Not if you want speed you don't.
This method requires reading a bunch of high-precision floating-point numbers from system memory which can actually require more time than computing Taylor coefficients in registers on-the-fly.
You're out of your depth here Chuck.
Tisdale, you are clueless. As usual.
The fastest approach would be to split the argument range into two
parts, abs(x) <= c and abs(x) >= c for some suitably chosen c,
probably somewhere around 0.7 or 0.8. For abs(x) <= c, approximate
acos(x) by a polynomial of the form (pi/2 - ax - bx^3 - cx^5 - ...).
For x > c, define f(z) = acos(1 - z^2). Approximate f(z) by a
polynomial of the form az + bz^3 + cz^5 + dz^7. Calculate
acos(x) = f(sqrt(1 - x)). For x < -c use
acos(x) = pi - acos(-x) = pi - f(sqrt(1 + x)). Take a higher-degree
polynomial depending on how much precision you want. Finding the
coefficients is left as an exercise to the reader :)
The substitution z = sqrt(1 - x) nicely catches the behavior of
acos(x) for abs(x) close to 1, where the first derivative is
unbounded, and avoids excessive rounding errors.  

E. Robert Tisdale wrote: Christian Bau wrote: Since I have actually not used the acos() function _once_ in over twenty years of C programming, So... you're a newbie eh? including tons of numerical mathematics, 3D graphics, etc. It appears in signal processing applications - a Dolph-Chebychev window function, for example.
I think the point of Christian's posting was that
he would like more information from the OP why the
acos() function was called so often. If it is something
that requires high precision and MUST be called often
(as in a signal processing application), then it is
one thing.
If it does not require high precision or is used
gratuitously, there might be better ways
to accomplish the same thing. As one poster pointed
out, a rough extrapolation between values in a static
array may satisfy the requirement. Without knowing
the degree of precision needed, we just don't know.

--
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch  

In article <42********************@bgtnsc05news.ops.worldnet.att.net>,
Nick Landsberg <hu*****@NOSPAM.att.net> wrote: E. Robert Tisdale wrote:
Christian Bau wrote:
Since I have actually not used the acos() function once in over twenty years of C programming,
So... you're a newbie eh?
including tons of numerical mathematics, 3D graphics, etc.
It appears in signal processing applications - a Dolph-Chebychev window function, for example.
I think the point of Christian's posting was that he would like more information from the OP why the acos() function was called so often. If it is something that requires high precision and MUST be called often (as in a signal processing application), then it is one thing.
If it does not require high precision or is used gratuitously, there might be better ways to accomplish the same thing. As one poster pointed out, a rough extrapolation between values in a static array may satisfy the requirement. Without knowing the degree of precision needed, we just don't know.
Actually, Tisdale just had to prove once again his stupidity: If you
read an article about the Dolph-Chebychev window function, the first
thing that you see is that Chebychev functions are defined as
cos(n * acos(x)) - and they are easily calculated using a trivial
recursion formula that doesn't need anything more complicated than
adding and multiplying.
Only an idiot would calculate them using the acos () function.  

Christian Bau wrote: The fastest approach would be to split the argument range into two parts, abs(x) <= c and abs(x) >= c for some suitably chosen c, probably somewhere around 0.7 or 0.8. For abs(x) <= c, approximate acos x by a polynomial of the form (pi/2 - ax - bx^3 - cx^5 - ...). For x > c, define f(z) = acos(1 - z^2). Approximate f(z) by a polynomial of the form az + bz^3 + cz^5 + dz^7. Calculate acos(x) = f(sqrt(1 - x)). For x < -c use acos(x) = pi() - acos(-x) = pi() - f(sqrt(1 + x)). Take a higher polynomial depending on how much precision you want. Finding the coefficients is left as an exercise to the reader :)
The substitution z = sqrt(1 - x) nicely catches the behavior of acos x for abs(x) close to 1, where the first derivative is unbounded, and avoids excessive rounding errors.
Have you ever implemented and tested this "algorithm"?
If so please publish it here so that we can "benchmark" it
against our C library implementations of acos. I'll bet you $1.00 that
I can find one that beats your implementation.  

Christian Bau wrote: If you read an article about the Dolph-Chebychev window function, the first thing that you see is that Chebychev functions are defined as cos(n*acos(x)) - and they are easily calculated using a trivial recursion formula that doesn't need anything more complicated than adding and multiplying.
Please show us the formula.  

E. Robert Tisdale wrote: Christian Bau wrote: If you read an article about the Dolph-Chebychev window function, the first thing that you see is that Chebychev functions are defined as cos(n*acos(x)) - and they are easily calculated using a trivial recursion formula that doesn't need anything more complicated than adding and multiplying. Please show us the formula.
Now cut that out! I'm a relative newbie to c.l.c in this
incarnation but I used to dwell here 15 years ago or so.
Neither of you is making a good name for himself
by airing your differences in a public forum, and,
you are both making a case FOR the folks who want
c.l.c "expanded" to take into account other stuff
than just pure "C".
This is an algorithm issue. Move it and your
petty squabbles to the algorithms newsgroup, please.
Grumpy
P.S. - the above formula, expanded, becomes
cos(n) * cos(acos(x))
unless I've forgotten my trig after 45 years.
Since cos(acos(x)) degenerates to x, it should
be replaceable with ...
cos(n) * x ...
where have I made a mistake? (honestly asking,
not baiting.)

--
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch  

In article <40**************@jpl.nasa.gov> E.**************@jpl.nasa.gov writes: CBFalconer wrote:
.... Not if you want speed you don't. With some slight trigonometry, you should be able to convert cos(angle) into tan(angle). Use this as input to an arctan(value) routine, which can in turn be built in various ways. One uses a leveled Tschebychev approximation over 0 to 1.
Not if you want speed you don't.
This method requires reading a bunch of high-precision floating-point numbers from system memory which can actually require more time than computing Taylor coefficients in registers on-the-fly.
Yup, you are clueless.
1. Calculating Taylor coefficients on the fly requires division, which
is on most processors much slower than multiplication.
2. The number of coefficients required in a Tschebychev approximation is
less than the number of Taylor coefficients needed. For instance,
to get a good sine in double precision on [0, pi/2] you can use only
6 terms of the Tschebychev expansion of (sin(x)/x - 1). (This
formula is used to get good relative precision near x = 0.)

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  

In article <40**************@jpl.nasa.gov> E.**************@jpl.nasa.gov writes: Christian Bau wrote:
Since I have actually not used the acos() function _once_ in over twenty years of C programming,
So... you're a newbie eh?
Hrm. The first language I used, Algol 60, did not even *have* the
arccosine as a function. It had only sin, cos and arctan from this
group. Only when I started programming in Fortran did I see the
ACOS as a standard function, but have actually never needed it.
Done quite a bit of numerical programming actually. Like implementing
an arccosine function, but never used it ;).

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  

In article <40**************@jpl.nasa.gov> E.**************@jpl.nasa.gov writes: Christian Bau wrote:
If you read an article about the Dolph-Chebychev window function, the first thing that you see is that Chebychev functions are defined as cos(n*acos(x)) - and they are easily calculated using a trivial recursion formula that doesn't need anything more complicated than adding and multiplying.
Please show us the formula.
You mean:
T_0(x) = 1
T_1(x) = x
T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x) for n > 1
? (This from memory...)

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  
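The three-term recursion Dik quotes can be checked directly against the cos(n*acos(x)) definition; a small sketch (the function name is invented for the example):

```c
#include <assert.h>
#include <math.h>

/* Chebyshev polynomial T_n(x) via the three-term recursion:
   T_0(x) = 1, T_1(x) = x, T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x).
   Only additions and multiplications -- no acos() needed. */
static double cheb_T(int n, double x)
{
    double t0 = 1.0, t1 = x;
    if (n == 0)
        return t0;
    for (int k = 2; k <= n; k++) {
        double t2 = 2.0 * x * t1 - t0;  /* advance the recursion */
        t0 = t1;
        t1 = t2;
    }
    return t1;
}
```

For |x| <= 1 this agrees with cos(n * acos(x)) to rounding error, which is the point of the recursion: the window can be built without ever calling acos().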

Nick Landsberg wrote: I think the point of Christian's posting was that he would like more information from the [Dave Rudolf] [about] why the acos() function was called so often. If it is something that requires high precision and MUST be called often (as in a signal processing application), then it is one thing.
If it does not require high precision or [high precision] is used gratuitously, there might be better ways to accomplish the same thing. As one poster pointed out, a rough [interpolation] between values in a static array may satisfy the requirement. Without knowing the degree of precision needed, we just don't know.
Unfortunately,
I don't know what degree of precision is required either.
We can only hope that Dave Rudolf is still following this thread
and will reply with the answer.
But I'll tell you what you can do without knowing any more
about Dave Rudolf's precision requirements.
You can implement your proposed interpolation scheme
and benchmark it against the acos provided with your C compiler.
I predict that you will find that your interpolation scheme
is *slower* even if you need just a few decimal digits precision.
Please post your implementation so that we can repeat
your benchmark experiments ourselves.
There is a reason why math functions like acos
are included in the standard library.
There is *no* single portable implementation
that will deliver performance and accuracy everywhere.
Compiler developers hire experts to customize implementations
for each target platform and, usually, they are hard to beat.  

E. Robert Tisdale wrote: Nick Landsberg wrote: I think the point of Christian's posting was that he would like more information from the [Dave Rudolf] [about] why the acos() function was called so often. If it is something that requires high precision and MUST be called often (as in a signal processing application), then it is one thing. If it does not require high precision or [high precision] is used gratuitously, there might be better ways to accomplish the same thing. As one poster pointed out, a rough [interpolation] between values in a static array may satisfy the requirement. Without knowing the degree of precision needed, we just don't know.
[You misunderstood at least one part in your paraphrase
of my post. In the second paragraph I meant ...
"If it does not require high precision or *acos()* is
used gratuitously, ..." not, as you have interpreted it.
But, rereading it, I can see how the sentence
could be so interpreted. I also did not put []'s
around the word "interpolation."] Unfortunately, I don't know what degree of precision is required either. We can only hope that Dave Rudolf is still following this thread and will reply with the answer.
Then why attack various proposed algorithms? Without knowing
the problem the poster is trying to solve, why argue about
the details of the solution? But I'll tell you what you can do without knowing any more about Dave Rudolf's precision requirements. You can implement your proposed interpolation scheme and benchmark it against the acos provided with your C compiler. I predict that you will find that your interpolation scheme is *slower* even if you need just a few decimal digits precision. Please post your implementation so that we can repeat your benchmark experiments ourselves.
I did not propose the interpolation scheme, some other
poster did, I believe it was Eric Sosman. Since you are the
one questioning the suggestion, may I suggest you post your
benchmark results first to prove your contention? There is a reason why math functions like acos are included in the standard library. There is *no* single portable implementation that will deliver performance and accuracy everywhere. Compiler developers hire experts to customize implementations for each target platform and, usually, they are hard to beat.
All that is true, for applications which require precision.
It is not clear that this application requires that precision.
We come back to full circle regarding:
1 - what is the precision required for this application?
2 - is there a better way than the mathematically precise
way to get a "good enough" answer?
And this whole digression belongs in something like
comp.algorithms.philosophy
rather than a discussion of the language, IMO.

--
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch  

> Only an idiot would calculate them using the acos () function.
Besides, it would be stupid to recompute the window function
every time you have to use it: there is most likely some means
of computing it once, placing the discrete values in an array, and
then using those. After all, we are dealing with discrete algorithms.
Unless the number of points were gigantic.  

Dik T. Winter wrote: 1. Calculating Taylor coefficients on the fly requires division,
Nonsense! Multiply the Taylor coefficients through
by the least common denominator, sum using Horner's rule
then multiply the result by the inverse of the least common denominator.
All of the coefficients can be calculated using integer multiplication.
which is on most processors much slower than multiplication.
2. The number of coefficients required in a Tschebychev approximation is less than the number of Taylor coefficients needed.
By one or, at best, two terms.
For instance, to get a good sine in double precision on [0, pi/2], you can use only 6 terms of the Tschebychev expansion of (sin(x)/x - 1). (This formula is used to get good relative precision near x = 0.)
Let's suppose that the Taylor series requires 8 terms
to obtain the minimum precision everywhere.
The Tschebychev approximation could be, at best, 25% faster.
But it could easily require much more time to load
the high-precision Tschebychev coefficients from memory.
But don't take my word for it.
Implement both and benchmark them against each other.
The winner will be determined by your computer architecture.  

Dik T. Winter wrote: E.Robert.Tisdale writes:
The first language I used, Algol 60,
Me too. On a CDC 6600.
did not even *have* the arccosine as a function. It had only sin, cos and arctan from this group. Only when I started programming in Fortran did I see the ACOS as a standard function, but have actually never needed it. Done quite a bit of numerical programming actually.
I only have a little experience http://csdl.computer.org/comp/procee...0890163abs.htm
Like implementing an arccosine function, but never used it ;).  

In article <40**************@jpl.nasa.gov> E.**************@jpl.nasa.gov writes: Dik T. Winter wrote:
1. Calculating Taylor coefficients on the fly requires division, Nonsense! Multiply the Taylor coefficients through by the least common denominator, sum using Horner's rule then multiply the result by the inverse of the least common denominator. All of the coefficients can be calculated using integer multiplication.
Pray show the algorithm you intend with Taylor coefficients 1/(2k+1)!.
I have no idea what you mean with the above.
which is on most processors much slower than multiplication.
2. The number of coefficients required in a Tschebychev approximation is less than the number of Taylor coefficients needed.
By one or, at best, two terms.
As typically the Tschebychev coefficients are about 2^(-n) times the
original coefficients, I wonder where you get your one or two from. For instance, to get a good sine in double precision on [0, pi/2], you can use only 6 terms of the Tschebychev expansion of (sin(x)/x - 1). (This formula is used to get good relative precision near x = 0.)
Let's suppose that the Taylor series requires 8 terms to obtain the minimum precision everywhere. The Tschebychev approximation could be, at best, 25% faster.
OK, the Tschebyshev approximation in the above case:
double dc[] = {-0.16666666666666662966,
                0.00833333333333094971,
               -0.00019841269836759707,
                0.00000275573161030613,
               -0.00000002505113193827,
                0.00000000015918135277};
The algorithm reads (y = x*x):
sin = ((((((dc[5] * y + dc[4]) * y + dc[3]) * y + dc[2]) * y + dc[1]) * y
+ dc[0]) * y + 1.0) * x;
(Actually this is for x in [-pi/2, pi/2] with a relative precision of about
1 ULP, or, in tangible terms, the relative error is bounded by about 2e-16.)
But it could easily require much more time to load the high precision Tschebychev coefficients from memory. But don't take my word for it. Implement both and benchmark them against each other.
Now implement your algorithm, without loading high precision coefficients
from memory or using division...

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  
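Dik's coefficients and Horner evaluation assemble into a compilable routine that can be compared against the library sin(); the function name is invented for the example, and the test tolerances below are deliberately loose:

```c
#include <assert.h>
#include <math.h>

/* Dik's 6-term Tschebychev-style polynomial for sin(x) on [-pi/2, pi/2]:
   sin(x) ~= (P(x*x) + 1) * x, with P evaluated by Horner's rule. */
static double sin_poly(double x)
{
    static const double dc[] = {
        -0.16666666666666662966,
         0.00833333333333094971,
        -0.00019841269836759707,
         0.00000275573161030613,
        -0.00000002505113193827,
         0.00000000015918135277
    };
    double y = x * x;
    return ((((((dc[5] * y + dc[4]) * y + dc[3]) * y + dc[2]) * y
              + dc[1]) * y + dc[0]) * y + 1.0) * x;
}
```

Note the alternating signs: the coefficients track the Taylor series of sin(x)/x - 1, perturbed in the low digits to spread the error over the whole interval.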

"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:40**************@jpl.nasa.gov... Christian Bau wrote:
Since I have actually not used the acos() function _once_ in over twenty years of C programming,
So... you're a newbie eh?
I also have been using C for a long time (about 17 years).
I've never used *any* of the 'math.h' functions in my
production code (well, there might be one or two I've forgotten
about, but you get the idea). That's because my application
domains never needed them. So am I a 'newbie' too?
Mike  

Dik T. Winter wrote: E.Robert.Tisdale writes: > Dik T. Winter wrote: > > > 1. Calculating Taylor coefficients on the fly requires division, > > Nonsense! Multiply the Taylor coefficients through > by the least common denominator, sum using Horner's rule > then multiply the result by the inverse of the least common denominator. > All of the coefficients can be calculated using integer multiplication.
Pray show the algorithm you intend with Taylor coefficients 1/(2k+1)!. I have no idea what you mean with the above.
Suppose that the highest-order coefficient is 1/(2n+1)!. Compute
[( ... ((x^2 + (2n+1)!/(2n-1)!)*x^2 + (2n+1)!/(2n-3)!)*x^2 + ... + 1)*x]
*[1/(2n+1)!]
The first coefficient
(2n+1)!/(2n-1)! = (2n+1)*2n
The second coefficient
(2n+1)!/(2n-3)! = ((2n+1)!/(2n-1)!)*(2n-1)*(2n-2)
etc.
For small n, you can compute these coefficients
in an integer register onthefly.
All you need to do is multiply the Horner product through by
[1/(2n+1)!]
at the end.  
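A concrete instance of the scale-by-(2n+1)! idea for n = 3 (terms through x^7): the inner Horner polynomial uses only the integers 7!/5! = 42, 7!/3! = 840, and 7!/1! = 5040, with a single multiply by 1/7! at the end. Note that the sine series alternates in sign, which the formula above glosses over; the sketch includes the signs:

```c
#include <assert.h>
#include <math.h>

/* sin(x) ~= x * (5040 - 840*y + 42*y^2 - y^3) / 5040, where y = x*x.
   The coefficients are the Taylor coefficients scaled through by 7!,
   so they are small integers; the only "division" is the final
   multiply by the precomputed constant 1/5040. */
static double sin_taylor7(double x)
{
    double y = x * x;
    double p = ((-y + 42.0) * y - 840.0) * y + 5040.0;  /* integer coeffs */
    return x * p * (1.0 / 5040.0);
}
```

The truncation error is the next Taylor term, x^9/9!, so this is only useful on a reduced range such as [-pi/4, pi/4] unless more terms are added.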

Dik T. Winter wrote: OK, the Tschebyshev approximation in the above case: double dc[] = {-0.16666666666666662966, 0.00833333333333094971, -0.00019841269836759707, 0.00000275573161030613, -0.00000002505113193827, 0.00000000015918135277}; The algorithm reads (y = x*x): sin = ((((((dc[5] * y + dc[4]) * y + dc[3]) * y + dc[2]) * y + dc[1]) * y + dc[0]) * y + 1.0) * x; (Actually this is for x in [-pi/2, pi/2] with a relative precision of about 1 ULP, or, in tangible terms, the relative error is bounded by about 2e-16.)
That's only about 4 or 5 decimal digits. You will probably find that
your algorithm would benefit from a "range reduction".
Anyway, have you tried to "benchmark" your implementation
against the sin(x) function in your standard C library?
If so, please report your results
including the architecture and operating system
where you ran the benchmarks.  

Mike Wahler wrote: E. Robert Tisdale wrote:
Christian Bau wrote:
Since I have actually not used the acos() function _once_ in over twenty years of C programming,
So... you're a newbie eh?
I also have been using C for a long time (about 17 years). I've never used *any* of the 'math.h' functions in my production code (well, there might be one or two that I've forgotten about, but you get the idea). That's because my application domains never needed them.
So am I a 'newbie' too?
To numerical computing, evidently.  

On Thu, 26 Feb 2004 01:20:32 +0000, Nick Landsberg wrote:
E. Robert Tisdale wrote:
Nick Landsberg wrote:
[snip] If it does not require high precision or [high precision] is used gratuitously, there might be better ways to accomplish the same thing. As one poster pointed out, a rough [interpolation] between values in a static array may satisfy the requirement. Without knowing the degree of precision needed, we just don't know.
[You misunderstood at least one part in your paraphrase of my post. In the second paragraph I meant ... "If it does not require high precision or *acos()* is used gratuitously, ..." not, as you have interpreted it. But, rereading it, I can see how the sentence could be so interpreted. I also did not put []'s around the word "interpolation."]
For whatever reason, E. Robert Tisdale modifies posts he is replying to on
occasion. Tisdale has not, to my knowledge, offered any explicit
explanation for this behavior.
Mac  

"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:40**************@jpl.nasa.gov... Mike Wahler wrote:
E. Robert Tisdale wrote:
Christian Bau wrote:
Since I have actually not used the acos() function _once_ in over twenty years of C programming,
So... you're a newbie eh? I also have been using C for a long time (about 17 years). I've never used *any* of the 'math.h' functions in my production code (well, there might be one or two that I've forgotten about, but you get the idea). That's because my application domains never needed them.
So am I a 'newbie' too?
To numerical computing evidently.
Well, yes, but so what? Numerical computing is only one
of a virtually limitless number of domains in which
C is used. Accounting/business/database applications have been
putting bread on my table for decades.
Here's what I was responding to:
Christian Bau wrote:
Since I have actually not used the acos() function _once_ in over twenty years of C programming,
You replied:
So... you're a newbie eh?
The only conclusion I could draw from context was that you're
calling Christian a 'newbie' with C. IMO this newsgroup's
archives provide much evidence to the contrary.
Mike  

Mac wrote: For whatever reason, E. Robert Tisdale modifies posts he is replying to on occasion. Tisdale has not, to my knowledge, offered any explicit explanation for this behavior.
What's to explain?
There is no need to quote and it is discouraged.
It is as evil as top posting.
I fix capitalization, punctuation, spelling and grammar errors
to reflect my understanding of what was said.
I am pretty sure that Nick Landsberg meant interpolation
when he said extrapolation and that he meant Dave Rudolf
when he said OP.
I also quietly fix style and programming errors in other people's code
if they aren't essential to understanding the point in question.  

Mike Wahler wrote: The only conclusion I could draw from context was that you're calling Christian a 'newbie' with C.
Yes. At 20 years, he is just getting started.
This newsgroup's archives provide much evidence to the contrary.
Get a life Mike. And a sense of humor to make it bearable :)  

"E. Robert Tisdale" wrote: Mac wrote:
For whatever reason, E. Robert Tisdale modifies posts he is replying to on occasion. Tisdale has not, to my knowledge, offered any explicit explanation for this behavior.
What's to explain? There is no need to quote and it is discouraged. It is as evil as top posting. I fix capitalization, punctuation, spelling and grammar errors to reflect my understanding of what was said. I am pretty sure that Nick Landsberg meant interpolation when he said extrapolation and that he meant Dave Rudolf when he said OP.
I also quietly fix style and programming errors in other people's code if they aren't essential to understanding the point in question.
No, you misquote in order to impose your own misunderstandings.
If you have some different understanding, write what that is. We
can then be amused at the prattling of children, rather than
outraged.

Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!  

In article <40**************@jpl.nasa.gov>,
"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote: Christian Bau wrote:
The fastest approach would be to split the argument range into two parts, abs(x) <= c and abs(x) >= c for some suitably chosen c, probably somewhere around 0.7 or 0.8. For abs(x) <= c, approximate acos x by a polynomial of the form (pi/2 - ax - bx^3 - cx^5 ...). For x > c, define f(z) = acos(1 - z^2). Approximate f(z) by a polynomial of the form az + bz^3 + cz^5 + dz^7. Calculate acos(x) = f(sqrt(1 - x)). For x < -c use acos(x) = pi() - acos(-x) = pi() - f(sqrt(1 + x)). Take a higher polynomial depending on how much precision you want. Finding the coefficients is left as an exercise to the reader :)
The substitution z = sqrt(1 - x) nicely catches the behavior of acos x for abs(x) close to 1, where the first derivative is unbounded, and avoids excessive rounding errors.
Have you ever implemented and tested this "algorithm"? If so please publish it here so that we can "benchmark" it against our C library implementations of acos. I'll bet you $1.00 that I can find one that beats your implementation.
Just recently I read something about the typical troll's methods: Demand
documentation, references, everything from others. Always refuse to
provide the same yourself.
So you ask me to invest considerable amount of time to implement an
algorithm for acos, then you do a web search, and all for one dollar?
Asshole.
Let's do it the other way round: You publish ANY implementation of acos
in C, and I bet you UK£20,000 that I can beat it. I have a Macintosh at
home, so we will run speed tests on a PowerPC with a Macintosh G4, using
CodeWarrior 7 for MacOS as the compiler. 64 bit double precision, error
less than 1ulp for the whole range are required, so the only thing to be
measured is speed.
Twenty thousand pounds makes it worth my while, hurts you enough, and I
am quite confident that I can beat anything you can find.  

In article <40**************@jpl.nasa.gov>,
"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote: Christian Bau wrote:
If you read an article about the Dolph-Chebychev window function, the first thing that you see is that Chebychev functions are defined as cos(n*acos(x)) - and they are easily calculated using a trivial recursion formula that doesn't need anything more complicated than adding and multiplying.
Please show us the formula.
Google for "Dolph Chebychev". Idiot.  

In article <40**************@jpl.nasa.gov>,
"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote: Mike Wahler wrote:
The only conclusion I could draw from context was that you're calling Christian a 'newbie' with C.
Yes. At 20 years, he is just getting started.
After 20 years, I am still learning. Your posts make you look as if you
stopped learning 30 years ago. This newsgroup's archives provide much evidence to the contrary.
Get a life Mike. And a sense of humor to make it bearable :)
Another typical troll/bully tactic: When called to order, accuse people
of having no sense of humour.  

E. Robert Tisdale wrote: Mac wrote:
For whatever reason, E. Robert Tisdale modifies posts he is replying to on occasion. Tisdale has not, to my knowledge, offered any explicit explanation for this behavior. What's to explain? There is no need to quote and it is discouraged.
Appropriate quotation of context is encouraged.
It is as evil as top posting.
No.
I fix capitali[s]ation, punctuation, spelling and grammar errors[.]
I have redacted the above line to fit in with my own personal tastes. Note
that I have used standard editorial notation to mark where changes have
been made. I suggest that, if you must edit quotations, you adopt a similar
strategy.
to reflect my understanding of what was said. I am pretty sure that Nick Landsberg meant interpolation when he said extrapolation and that he meant Dave Rudolf when he said OP.
I also quietly fix style and programming errors in other people's code if they aren't essential to understanding the point in question.
Even people who are widely recognised to be competent C programmers don't do
this. Your grasp of C has proved to be a little shaky, so it might be wiser
to refrain from introducing what might turn out to be your bugs in what you
claim is other people's code.

Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place."  Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/Cfaq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton  

Nick Landsberg wrote:
E. Robert Tisdale wrote:
Nick Landsberg wrote:
I think the point of Christian's posting was that he would like more information from the [Dave Rudolf] [about] why the acos() function was called so often. If it is something that requires high precision and MUST be called often (as in a signal processing application), then it is one thing.
If it does not require high precision or [high precision] is used gratuitously, there might be better ways to accomplish the same thing. As one poster pointed out, a rough [interpolation] between values in a static array may satisfy the requirement. Without knowing the degree of precision needed, we just don't know.
[You misunderstood at least one part in your paraphrase of my post. In the second paragraph I meant ... "If it does not require high precision or *acos()* is used gratuitously, ..." not, as you have interpreted it. But, rereading it, I can see how the sentence could be so interpreted. I also did not put []'s around the word "interpolation."]
The [] thing is fairly standard notation for "editorial modification made
here".
In fact, I rattled off an article suggesting Mr Tisdale (if he insists on
editing people's stuff) should start to use such notation, before noticing
(by reading your article) that he has in fact done so.
What he is doing may be stupid, but it's not actually misleading, provided
he marks it appropriately (as he appears to be doing). The point of marking
his editorialisations is that those who distrust them can go back and check
what the original actually said. Unmarked editorialisations are more
insidious. Basically, if you trust a poster not to make unmarked
editorialisations, then okay, you trust them. If you don't, then the
editorial markers probably don't do much for you anyway. But reputable
editors will use them when it is appropriate.

Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place."  Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/Cfaq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton  

In article <40**********@jpl.nasa.gov> E.**************@jpl.nasa.gov writes: Dik T. Winter wrote:
OK, the Tschebyshev approximation in the above case: double dc[] = {-0.16666666666666662966, 0.00833333333333094971, -0.00019841269836759707, 0.00000275573161030613, -0.00000002505113193827, 0.00000000015918135277}; The algorithm reads (y = x*x): sin = ((((((dc[5] * y + dc[4]) * y + dc[3]) * y + dc[2]) * y + dc[1]) * y + dc[0]) * y + 1.0) * x; (Actually this is for x in [-pi/2, pi/2] with a relative precision of about 1 ULP, or, in tangible terms, the relative error is bounded by about 2e-16.) That's only about 4 or 5 decimal digits. You will probably find that your algorithm would benefit from a "range reduction".
Since when is 2e-16 relative error only about 4 or 5 decimal digits? I
would have thought 15+ *decimal* digits... Or do you not know what
2e-16 means? And in what way do you suggest to do the range reduction?
Anyway, have you tried to "benchmark" your implementation against the sin(x) function in your standard C library?
What do you think? Not with regards to time, but with regards to
precision.
If so, please report your results including the architecture and operating system where you ran the benchmarks.
Sun Solaris, relative error bounded by about 2e-16.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/  

> Since when is 2e-16 relative error only about 4 or 5 decimal digits? I would have thought 15+ *decimal* digits... Or do you not know what 2e-16 means? And in what way do you suggest to do the range reduction?
He might have confused 2e-16 with 2^(-16)?  

"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:40**************@jpl.nasa.gov... Nick Landsberg wrote:
I think the point of Christian's posting was that he would like more information from the [Dave Rudolf] [about] why the acos() function was called so often. If it is something that requires high precision and MUST be called often (as in a signal processing application), then it is one thing.
If it does not require high precision or [high precision] is used gratuitously, there might be better ways to accomplish the same thing. As one poster pointed out, a rough [interpolation] between values in a static array may satisfy the requirement. Without knowing the degree of precision needed, we just don't know.
Unfortunately, I don't know what degree of precision is required either. We can only hope that Dave Rudolf is still following this thread and will reply with the answer.
Ya, I'm back. The signal-to-noise ratio in this thread is getting pretty
small :). In short, I don't know the exact accuracy that will be needed. I
am using the acos function to determine the angle between vectors, in order
to simulate angular spring torque in a numeric simulation of elastic
surfaces. It's for non-scientific graphical simulation, and we do not have a
theoretically rigid physical model, so we can probably get away with some
approximation.
At any rate, I now have a number of avenues to explore. Thank you all for
your ideas.
Dave  

E. Robert Tisdale wrote: Mike Wahler wrote:
The only conclusion I could draw from context was that you're calling Christian a 'newbie' with C.
Yes. At 20 years, he is just getting started.
This newsgroup's archives provide much evidence to the contrary.
Get a life Mike. And a sense of humor to make it bearable :)
Mr. E. Tisdale,
I have over 20 years experience with the C language and have
purposely avoided the math library. Most of the embedded systems
I worked would die (come to a hault) or worse if the math
library was included. But, from reading your past posts,
you either claim that embedded systems don't exist or
insuate that only desktop and the like exist.
In smaller embbeded applications, either a lookup table
is used (due to size and speed) or the numerical base
is changed so floating point is not required. Many
times, the accuracy is not required to move a solenoid.
I have learned calculus and numerical analysis. Just
because I don't use all the features of the language
doesn't make me a newbie.

Thomas Matthews
C++ newsgroup welcome message: http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++faqlite
C Faq: http://www.eskimo.com/~scs/cfaq/top.html
alt.comp.lang.learn.cc++ faq: http://www.raos.demon.uk/acllcc++/faq.html
Other sites: http://www.josuttis.com  C++ STL Library book  

In <c1*********@oravannahka.helsinki.fi> Joona I Palaste <pa*****@cc.helsinki.fi> writes: Dave Rudolf <nu****************@hotmail.com> scribbled the following: Hi all,
Normally, I would trust that the ANSI libraries are written to be as efficient as possible, but I have an application in which the majority of the run time is calling the acos(...) method. So now I start to wonder how that method, and others in the math.h library, are implemented.
This depends completely on the C compiler used and is not standardised.
It is entirely a library issue and need not depend at all on the C
compiler being used. I have seen assembly-only implementations of
<math.h>.
Dan

Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de  

"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:40**************@jpl.nasa.gov... Mike Wahler wrote:
The only conclusion I could draw from context was that you're calling Christian a 'newbie' with C. Yes. At 20 years, he is just getting started.
This newsgroup's archives provide much evidence to the contrary.
Get a life Mike.
I'm quite happy with my life, but thanks for your concern.
And a sense of humor to make it bearable :)
I'm also quite satisfied with my sense of humor (some
folks often ask me to 'tone it down'. :)
Ya know, I should probably just plonk you, but you're
just too darn amusing. :)
Mike  

Thomas Matthews wrote: I have over 20 years experience with the C language and have purposely avoided the math library. Most of the embedded systems I worked would die (come to a hault)
You probably meant halt.
or worse if the math library was included.
Why?
But, from reading your past posts, you either claim that embedded systems don't exist or insuate
You probably meant insinuate.
that only desktop and the like exist.
Not that I recall.
Can you quote such an instance?
In smaller embbeded
You probably meant embedded.
applications, either a lookup table is used (due to size and speed) or the numerical base is changed so [that] floating point is not required. Many times, the accuracy is not required to move a solenoid.
I have learned calculus and numerical analysis. Just because I don't use all the features of the language doesn't make me a newbie.
No.
You're a newbie because you have only 20 years experience.  

Mike Wahler wrote: E. Robert Tisdale:
Mike Wahler wrote:
The only conclusion I could draw from context was that you're calling Christian a 'newbie' with C. Yes. At 20 years, he is just getting started.
This newsgroup's archives provide much evidence to the contrary.
Get a life Mike.
I'm quite happy with my life, but thanks for your concern.
And a sense of humor to make it bearable :)
I'm also quite satisfied with my sense of humor
It's perverse.
(some folks often ask me to 'tone it down'. :)
Ya know, I should probably just plonk you,
Not likely. Trolls never use kill files.
Their egos won't let them. They just threaten.
but you're just too darn amusing. :)
Yes, everyone knows that you troll for fun.  

Mike Wahler wrote: <E.**************@jpl.nasa.gov> wrote:
.... snip ... And a sense of humor to make it bearable :)
I'm also quite satisfied with my sense of humor (some folks often ask me to 'tone it down'. :)
Ya know, I should probably just plonk you, but you're just too darn amusing. :)
You also need to watch your back with Trollsdale around. ;[

Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!

date asked: Nov 14 '05