Michael A. Covington <lo**@ai.uga.edu.for.address> wrote:

> You're right, and to put it even more clearly:
>
> The Decimal type is a binary integer stored with an offset that
> represents a power of 10, and it is not normalized.
>
> Thus 1 and 1.0 are different numbers. The latter is stored as the
> integer 10 with a scale of one decimal place.
>
> This combines the speed of binary arithmetic with the ability to
> represent decimal numbers exactly and, indeed, to remember how many
> decimal places were given (so if you give a price as $1.00 it remains
> 1.00, not 1 or 1.0).
>
> Clever... but poorly documented!
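The scale-preserving behaviour described above can be illustrated with Python's decimal module, which uses the same coefficient-plus-scale representation (an analogy, not the .NET implementation):

```python
from decimal import Decimal

# 1 and 1.0 compare equal in value but carry different scales.
one = Decimal("1")
one_point_zero = Decimal("1.0")

print(one == one_point_zero)      # value equality holds
print(one.as_tuple())             # DecimalTuple(sign=0, digits=(1,), exponent=0)
print(one_point_zero.as_tuple())  # DecimalTuple(sign=0, digits=(1, 0), exponent=-1)

# A price given as 1.00 keeps its two decimal places through arithmetic.
price = Decimal("1.00")
print(price + Decimal("2.00"))    # 3.00, not 3
```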

Not particularly poorly documented, really - the docs for
System.Decimal are pretty clear about all but the normalisation:

<quote>
The binary representation of an instance of Decimal consists of a 1-bit
sign, a 96-bit integer number, and a scaling factor used to divide the
96-bit integer and specify what portion of it is a decimal fraction.
The scaling factor is implicitly the number 10, raised to an exponent
ranging from 0 to 28.
</quote>

That, to me, can't easily be misinterpreted in the way that your first
post suggests, where each byte is between 0 and 99.
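The documented layout - sign, integer coefficient, and a power-of-ten scaling factor from 0 to 28 - can be sketched as follows, again using Python's decimal module as a stand-in (the `decompose` helper is hypothetical, not part of any API):

```python
from decimal import Decimal

def decompose(d: Decimal):
    """Split a decimal into (sign, coefficient, scale), mirroring the
    quoted sign / integer / scaling-factor layout: the value equals
    (-1)**sign * coefficient / 10**scale."""
    sign, digits, exponent = d.as_tuple()
    coefficient = int("".join(map(str, digits)))
    scale = -exponent  # exponent is negative when there is a fraction
    return sign, coefficient, scale

sign, coeff, scale = decompose(Decimal("-123.45"))
print(sign, coeff, scale)  # 1 12345 2
# Reconstruct the value from the three parts:
print((-1) ** sign * Decimal(coeff) / Decimal(10) ** scale)  # -123.45
```

Note that .NET's coefficient is a fixed 96-bit integer, which is why its exponent range stops at 28: 10**29 no longer fits in 96 bits.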

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too