Andrew Koenig <ar*@acm.org> wrote:

"Ole Nielsby" <ol*********@snailmail.dk> wrote in message
news:45***********************@news.sunsite.dk...

> I want to serialize doubles in human-readable decimal form
> and be sure I get the exact same binary values when I read
> them back. (Right now, I don't care about NaN, infinities etc.)
>
> In essence, this boils down to converting a double to a
> (large) integer mantissa and a decimal exponent, and back,
> so that 3.1416 would be represented as {31416, -4}.
>
> I wrote a converter [...] but [...] a few percent of
> the numbers come out different.
>
> Does anybody know of an algorithm that is known to
> work?

Such an algorithm exists, but it's not easy.

If I remember correctly, the IEEE 754 floating-point standard requires
that when you convert a character string to floating-point, the result
must be equal to what you would get if you correctly rounded the
infinite-precision representation of that character string. When you
convert a floating-point value to a string with enough digits, the result
must be within 0.47 LSB of the exact binary value. This latter constraint
guarantees that converting a floating-point number to character and back
to floating-point will give you exactly the same result, provided that
there are enough digits in the character version. Proving that the
constraint was sufficient was Jerome Coonen's PhD thesis, which suggests
how difficult the problem is.

So if your implementation meets the IEEE 754 standard, the problem is easy
to solve :-)

If it doesn't meet the standard, you have to figure it out yourself.
Either you have to implement something that's as good as the standard,
which isn't easy, or you're going to have to come up with another way of
doing it that you can prove is as good, which is even harder.

The setting is this: I am implementing a homebrew fp language (PILS)
by writing an interpreter in C++. Like Lisp, simple data can be serialized
by outputting them in the syntax of the language. It is important that
this doesn't change numbers, i.e. if a number is printed and re-read by
the same process, it must be the same.

The current implementation is in VC8/Win32 and stores numbers as
double, i.e. the 64-bit IEEE format. The precision model is set to
"high", which means the FPU uses a 64-bit mantissa internally, but the
mantissa is rounded to 53 significant bits (52 stored) when written to
a double variable. I use up to 18-digit integers, which should be a
digit more than the 17 required for a 53-bit mantissa.

To isolate the precision issues from formatting details, I wrote
a small class that does the conversion to/from a long long
mantissa and a decimal exponent.

My conversion class looks as follows (please bear with my
less-than-perfect C++ habits; I took up C++ to implement
PILS because it seems next to impossible to interface asm
to .NET...). Note: the power of 10 to multiply or divide by is
constructed naively by multiplying tens; this is not the
optimal solution for large exponents, but it shouldn't
make the numbers differ - the scale is constructed in
the same way for reading and writing.

class FloatSplit {
public:
    long long mantissa;
    long exponent;
    double get();           // FloatSplit -> double
    void set(double value); // double -> FloatSplit
};

double FloatSplit::get() // after reading a number, convert to double
{
    double scale = 1;
    double value = (double)mantissa;
    if (exponent > 0) {
        // Naive scale computation
        for (long e = 0; e < exponent; e++) scale *= 10;
        value = mantissa * scale;
    }
    else if (exponent < 0) {
        // Naive scale computation
        for (long e = 0; e > exponent; e--) scale *= 10;
        value = mantissa / scale;
    }
    return value;
}

void FloatSplit::set(double value) // Split the value for writing
{
    mantissa = (long long)value;
    exponent = 0;
    if ((double)mantissa == value) return; /* integral values */
    double absValue = value < 0 ? -value : value;
    double scale = 1;
    if (absValue >= 1e18) {
        exponent++;
        // Naive scale computation
        scale *= 10;
        while (absValue / scale >= 1e18 && exponent < 1000) {
            exponent++;
            // Naive scale computation
            scale *= 10;
        }
        mantissa = (long long)(absValue / scale);
        /* try to adjust mantissa - disabled, made things worse */
        // if (absValue > (double)mantissa * scale) mantissa++;
        // if (absValue < (double)mantissa * scale) mantissa--;
    }
    else if (absValue < 1e17) {
        while (absValue * scale < 1e17
               && absValue != (double)mantissa / scale
               && exponent > -1000)
        {
            // Naive scale computation
            scale *= 10;
            exponent--;
            mantissa = (long long)(absValue * scale);
            /* try to adjust mantissa - disabled, made things worse */
            // if (absValue > (double)mantissa / scale) mantissa++;
            // if (absValue < (double)mantissa / scale) mantissa--;
        }
    }
    if (value < 0) mantissa = -mantissa;
}

I tested like this:

for (int i = -300; i <= 300; i++) testConvert(exp(i) * sin(i));

and it failed for i = -278, -210, 61, 109, 129, 144, 160, 161,
167, 172, 187, 200, 209, 220, 223, 245, 249, 253, 259, 262, 269,
280, 299, 300.

---end---