> > > Unfortunately most C libraries only use the stupid algorithm which
> > > often gives some useless digits.

> > They are not useless if you want more accuracy about what you have.

> Why not display the *exact* decimal representation,
> "0.1000000000000000055511151231257827021181583404541015625"?
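That exact value is easy to produce in Python with the standard decimal module, which converts the stored double digit for digit:

```python
from decimal import Decimal

# Decimal(float) converts the stored binary value exactly, with no
# rounding, so this prints every digit of the double nearest to 0.1.
print(Decimal(0.1))
# -> 0.1000000000000000055511151231257827021181583404541015625
```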

One can do better than that--or at least something I think is better.

Many moons ago, Guy Steele proposed an elegant pair of rules for
converting between decimal and internal floating-point, be it binary,
decimal, hexadecimal, or something else entirely:

1) Input (i.e. conversion from decimal to internal form) always yields
the closest (rounded) internal value to the given input.

2) Output (i.e. conversion from internal form to decimal) yields the
smallest number of significant digits that, when converted back to
internal form, yields exactly the same value.
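Rule 2 can be sketched naively by trying successively more significant digits until the round trip succeeds. This `shortest_repr` is a hypothetical helper for illustration only; real implementations (Steele and White's algorithm and its descendants) are far more careful and far faster:

```python
def shortest_repr(x: float) -> str:
    # Rule 2, brute force: find the fewest significant digits that,
    # when converted back to a double (rule 1), give exactly x.
    # 17 significant digits always suffice for an IEEE double.
    for ndigits in range(1, 18):
        s = format(x, f".{ndigits}g")
        if float(s) == x:
            return s
    return format(x, ".17g")
```

For example, `shortest_repr(0.1)` returns `"0.1"`, while `shortest_repr(1/3)` needs all 17 digits before the round trip is exact.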

This scheme is useful because, among other things, it ensures that all
numbers with only a few significant digits will convert to internal
form and back to decimal without change. For example, consider 0.1.
Converting 0.1 to internal form yields the closest internal number to
0.1. Call that number X. Then when we write X back out again, we
*must* get 0.1, because 0.1 is surely the decimal number with the
fewest significant digits that yields X when converted.
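In CPython 3.1 and later, whose repr() already produces shortest round-trip output, this argument can be checked directly:

```python
x = float("0.1")   # rule 1: X, the closest double to the text "0.1"
s = repr(x)        # rule 2: the fewest digits that reproduce X
assert s == "0.1"  # 0.1 survives the round trip unchanged
assert float(s) == x
```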

I have suggested in the past that Python use these conversion rules.
It turns out that there are three strong arguments against it:

1) It would preclude using the native C library for conversions, and
would probably yield different results from C under some circumstances.

2) It is difficult to implement portably, and if it is not implemented
portably, it must be reimplemented for every platform.

3) It potentially requires unbounded-precision arithmetic to do the
conversions, although a clever implementation can avoid it most of the
time.
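The unbounded-precision point comes from the fact that deciding whether a candidate digit string converts back to exactly the same value means comparing exact rational numbers, whose numerators and denominators can grow arbitrarily large. Python's fractions module makes the underlying values visible:

```python
from fractions import Fraction

# The text "0.1" and the double 0.1 denote different rational numbers.
as_written = Fraction(1, 10)   # exact value of the decimal text
as_stored = Fraction(0.1)      # exact value of the nearest double
assert as_written != as_stored
assert as_stored == Fraction(3602879701896397, 2**55)
```

A shortest-digits algorithm must, in effect, reason about exact quantities like these; clever implementations manage to do most of the work in fixed precision and fall back to big integers only in hard cases.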

I still think it would be a good idea, but I can see that it would be
more work than is feasible. I don't want to do the work myself,
anyway :-)