On 14 Sep, 04:06, Peng Yu <PengYu...@gmail.com> wrote:

In my field of work certain analytical solutions were formulated

in the early '50s, but a stable numerical solution wasn't found

until the early/mid '90s.

Would you please give some example references on this?

At the risk of becoming inaccurate, as I haven't reviewed

the material in 5 years and am writing off the top of my head:

Around 1953-55 Thomson and Haskell proposed a method to

compute the propagation of seismic waves through layered

media. The method used terms of the form

x = (exp(y)+1)/(exp(z)+1)

where y and z were of large magnitude and 'almost equal'.

In a perfect formulation x would be very close to 1.

Since y and z are large and one uses an imperfect numerical

representation, the computation errors in the exponents

become important. So basically the terms that should

cancel didn't, and one was left with a numerically unstable

solution.
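To illustrate, here is a minimal Python sketch with made-up values (not taken from the actual seismic formulation): for large, almost-equal y and z the true ratio is very close to 1, but the intermediate exponentials blow up long before the division can cancel them.

```python
import math

def naive(y, z):
    # Direct evaluation of x = (exp(y)+1)/(exp(z)+1).
    # For y, z beyond roughly 709, exp() exceeds the
    # double-precision range, even though the true ratio
    # of the two terms is close to 1.
    return (math.exp(y) + 1.0) / (math.exp(z) + 1.0)

print(naive(5.0, 5.0))          # fine for small magnitudes: 1.0
try:
    print(naive(800.0, 800.0))  # true value is 1, but...
except OverflowError:
    print("overflow: exp(800) exceeds the double-precision range")
```

Even below the overflow threshold, a tiny relative error in a large exponent y becomes a large relative error in exp(y), which is the "computation errors in the exponents become important" effect described above.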

Several attempts were made to handle this (Ng and Reid

in the '70s, Henrik Schmidt in the '80s), with various

degrees of success. And complexity. As far as I am concerned,

the problem wasn't solved until around 1993 when Sven Ivansson

came up with a numerically stable scheme.

What all these attempts had in common was that they took

the original analytical formulation and organized the terms

in various ways to avoid the complicated, large-magnitude

internal terms.
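For the toy expression above, one such reorganization might look like this (a sketch, under the assumption that y and z are large and positive; this is not the actual scheme from any of the papers mentioned):

```python
import math

def stable(y, z):
    # Factor exp(y) out of the numerator and exp(z) out of the
    # denominator:
    #   (e^y + 1)/(e^z + 1) = e^(y-z) * (1 + e^-y)/(1 + e^-z)
    # Only the small difference y - z and the tiny, harmless
    # terms e^-y, e^-z remain; no large-magnitude intermediate
    # is ever formed.
    return math.exp(y - z) * (1.0 + math.exp(-y)) / (1.0 + math.exp(-z))

print(stable(800.0, 800.0))  # 1.0, no overflow
print(stable(800.0, 799.5))  # exp(0.5), still well-behaved
```

The point is the one made above: the analytical content is unchanged, only the order of operations is rearranged so that the terms that should cancel are cancelled symbolically before any floating-point arithmetic happens.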

I am sure there are similar examples in other areas.

As for an example on error analysis, you could check out the

analysis of Horner's rule for evaluating polynomials, which

is treated in most intro books on numerical analysis.
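For reference, a minimal sketch of Horner's rule itself (the coefficient ordering, highest degree first, is my own convention; the error analysis in the textbooks shows why this form is both cheaper and better-behaved than summing powers of x directly):

```python
def horner(coeffs, x):
    # Evaluate c[0]*x^n + c[1]*x^(n-1) + ... + c[n]
    # as (...((c[0]*x + c[1])*x + c[2])...)*x + c[n],
    # using n multiplications and n additions.
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2.0, -6.0, 2.0, -1.0], 3.0))  # 5.0
```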

Rune