Decimal vs float

I wonder why this expression works:
decimal.Decimal("5.5")**1024
Decimal("1.353299876254915295189966576E+758")

but this one causes an error

5.5**1024

Traceback (most recent call last):
File "<interactive input>", line 1, in ?
OverflowError: (34, 'Result too large')

Another quirk is the following:
decimal.Decimal(5.5)

Traceback (most recent call last):
....
TypeError: Cannot convert float to Decimal. First convert the float to
a string

If Mr. interpreter is as slick as he is why doesn't he convert the
float by himself? This is at most a warning caused by possible rounding
errors of float.

Instead of dealing with awkward wrappers, I wonder if literals
currently interpreted as floats could not be interpreted as Decimal
objects in future?

Kay

Jan 19 '06 #1
Kay Schluehr wrote:
I wonder why this expression works:
decimal.Decimal("5.5")**1024
Decimal("1.353299876254915295189966576E+758")
The result is a Decimal type, which can hold *very large* values.
but this one causes an error

5.5**1024

Traceback (most recent call last):
File "<interactive input>", line 1, in ?
OverflowError: (34, 'Result too large')
Because the result is a float, whose values are limited by your hardware
(CPU).
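
For instance (a minimal sketch, assuming IEEE 754 doubles, whose largest
finite value is about 1.8e308; the exact digits displayed may vary by
Python version):

>>> import decimal
>>> decimal.Decimal("5.5") ** 1024    # software decimal: exponent 758 is fine
Decimal("1.353299876254915295189966576E+758")
>>> 2.0 ** 1023                       # near the top of the double range
8.98846567431158e+307
>>> 2.0 ** 1024                       # one more doubling overflows
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: (34, 'Result too large')
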
Another quirk is the following:

decimal.Decimal(5.5)

Traceback (most recent call last):
...
TypeError: Cannot convert float to Decimal. First convert the float to
a string

If Mr. interpreter is as slick as he is why doesn't he convert the
float by himself? This is at most a warning caused by possible rounding
errors of float.


floating points are always imprecise, so you wouldn't want them as an
input parameter for a precise Decimal type.

Because your nice Decimal value would then look like this:

Decimal("5.499999999999999999999999999999999999999999999999999999999999999")

you would complain too, right?

For more enlightenment, you can start with the PEP
http://www.python.org/peps/pep-0327....t-construction
Instead of dealing with awkward wrappers, I wonder if literals
currently interpreted as floats could not be interpreted as Decimal
objects in future?


No, because a software Decimal type is orders of magnitude slower than
floating point types, for which there is hardware support by your CPU.

If you're asking for additional Python decimal literals like

mydecimal = 5.5d

or whatever, that's a different question. I don't know if anything like
this is planned. FWIW I don't think it's necessary. Using the Decimal
constructor is explicit too, and we don't really need syntactic sugar for
decimal literals.
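
FWIW, importing the constructor under a short alias already gets most of
the way to literal-level brevity (a minimal sketch):

>>> from decimal import Decimal as D
>>> D("5.5") ** 2
Decimal("30.25")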

-- Gerhard
Jan 19 '06 #2
Kay Schluehr wrote:
I wonder why this expression works:

decimal.Decimal("5.5")**1024
Decimal("1.353299876254915295189966576E+758")

but this one causes an error

5.5**1024

Traceback (most recent call last):
File "<interactive input>", line 1, in ?
OverflowError: (34, 'Result too large')
Because the Decimal type can represent a larger range of values than the
float type. Your first expression gives a Decimal result, your second
attempts to give a float result.
Another quirk is the following:

decimal.Decimal(5.5)

Traceback (most recent call last):
...
TypeError: Cannot convert float to Decimal. First convert the float to
a string

If Mr. interpreter is as slick as he is why doesn't he convert the
float by himself? This is at most a warning caused by possible rounding
errors of float.

Indeed, as the documentation says: """This serves as an explicit
reminder of the details of the conversion (including representation
error)""". Otherwise you would get numpties using constructions like
Decimal(0.1) and then asking why the result was the same as
Decimal("0.10000000000000001") (or something similar). Who needs it?
Certainly not Mr. interpreter, or his c.l.py friends.
Instead of dealing with awkward wrappers, I wonder if literals
currently interpreted as floats could not be interpreted as Decimal
objects in future?

That would be a very large change in the behaviour of the interpreter,
and unfortunately it doesn't take account of the need in decimal to
specify the context in which a calculation takes place. You need to be
able to specify precision, rounding and a number of other values to make
your computations completely specified. What would you use as the
default context?
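
For reference, precision and rounding live on the decimal module's
context object (a minimal sketch; output quoting may differ by version):

>>> import decimal
>>> decimal.getcontext().prec = 6
>>> decimal.Decimal(1) / decimal.Decimal(7)
Decimal("0.142857")
>>> decimal.getcontext().rounding = decimal.ROUND_CEILING
>>> decimal.Decimal(1) / decimal.Decimal(7)
Decimal("0.142858")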

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Jan 19 '06 #3

Steve Holden wrote:
If Mr. interpreter is as slick as he is why doesn't he convert the
float by himself? This is at most a warning caused by possible rounding
errors of float.

Indeed, as the documentation says: """This serves as an explicit
reminder of the details of the conversion (including representation
error)""". Otherwise you would get numpties using constructions like
Decimal(0.1) and then asking why the result was the same as
Decimal("0.10000000000000001") (or something similar). Who needs it?
Certainly not Mr. interpreter, or his c.l.py friends.


The stringification of floats seems to work accurately, just as the
error message suggests:
Decimal(str(0.1))
Decimal("0.1")

This is interesting. If we define

def f():
    print str(1.1)

and disassemble the function, we get:

dis.dis(f)
2 0 LOAD_GLOBAL 0 (str)
3 LOAD_CONST 1 (1.1000000000000001) # huh?
6 CALL_FUNCTION 1
9 PRINT_ITEM
10 PRINT_NEWLINE
11 LOAD_CONST 0 (None)
14 RETURN_VALUE

But when we call f, we receive
f()
1.1

Mr. Interpreter seems to have a higher level of awareness :)
Instead of dealing with awkward wrappers, I wonder if literals
currently interpreted as floats could not be interpreted as Decimal
objects in future?

That would be a very large change in the behaviour of the interpreter,
and unfortunately it doesn't take account of the need in decimal to
specify the context in which a calculation takes place.


I don't see this as a big obstacle. With the current implementation the
compiler has to generate a decimal object from a NUMBER token instead
of a float object. The context of a calculation is still the decimal
module object and its attributes. Why should it be changed?

Kay

Jan 19 '06 #4
Kay Schluehr wrote:
Steve Holden wrote:
If Mr. interpreter is as slick as he is why doesn't he convert the
float by himself? This is at most a warning caused by possible rounding
errors of float.

Indeed, as the documentation says: """This serves as an explicit
reminder of the details of the conversion (including representation
error)""". Otherwise you would get numpties using constructions like
Decimal(0.1) and then asking why the result was the same as
Decimal("0.10000000000000001") (or something similar). Who needs it?
Certainly not Mr. interpreter, or his c.l.py friends.

The stringification of floats seems to work accurately, just as the
error message suggests:

Decimal(str(0.1))
Decimal("0.1")

This is interesting. If we define

def f():
    print str(1.1)

and disassemble the function, we get:

dis.dis(f)
2 0 LOAD_GLOBAL 0 (str)
3 LOAD_CONST 1 (1.1000000000000001) # huh?
6 CALL_FUNCTION 1
9 PRINT_ITEM
10 PRINT_NEWLINE
11 LOAD_CONST 0 (None)
14 RETURN_VALUE

But when we call f, we receive

f()
1.1

Mr. Interpreter seems to have a higher level of awareness :)

Mr. Interpreter (I see we are affording him capitals) has had his gonads
stamped on when it comes to converting floats into strings, and so is
exceedingly cautious when presenting them to users. This would not be a
good idea when converting floats into other numeric representations.

Instead of dealing with awkward wrappers, I wonder if literals
currently interpreted as floats could not be interpreted as Decimal
objects in future?


That would be a very large change in the behaviour of the interpreter,
and unfortunately it doesn't take account of the need in decimal to
specify the context in which a calculation takes place.

I don't see this as a big obstacle. With the current implementation the
compiler has to generate a decimal object from a NUMBER token instead
of a float object. The context of a calculation is still the decimal
module object and its attributes. Why should it be changed?

Kay

Well, besides the fact that people would complain about the (lack of)
speed, I don't think I want to start having to explain to beginners how
to handle precision and rounding settings to get the results they think
they want.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Jan 19 '06 #5
Kay Schluehr wrote:
This is interesting. If we define

def f():
print str(1.1)

and disassemble the function, we get:

dis.dis(f)
2 0 LOAD_GLOBAL 0 (str)
3 LOAD_CONST 1 (1.1000000000000001) # huh?


huh huh?
str(1.1)
'1.1'
repr(1.1)
'1.1000000000000001'
"%.10g" % 1.1
'1.1'
"%.20g" % 1.1
'1.1000000000000001'
"%.30g" % 1.1
'1.1000000000000001'
"%.10f" % 1.1
'1.1000000000'
"%.20f" % 1.1
'1.10000000000000010000'
"%.30f" % 1.1
'1.100000000000000100000000000000'

more here: http://docs.python.org/tut/node16.html

</F>

Jan 19 '06 #6
[Kay Schluehr]
This is interesting. If we define

def f():
    print str(1.1)

and disassemble the function, we get:
dis.dis(f)
2 0 LOAD_GLOBAL 0 (str)
3 LOAD_CONST 1 (1.1000000000000001) # huh?


[Fredrik Lundh]
huh huh?

str(1.1)
'1.1'
repr(1.1)
'1.1000000000000001'
"%.10g" % 1.1
'1.1'
A more interesting one is:

"%.12g" % a_float

because that's closest to what str(a_float) produces in Python.
repr(a_float) is closest to:

"%.17g" % a_float
"%.20g" % 1.1 '1.1000000000000001' "%.30g" % 1.1 '1.1000000000000001' "%.10f" % 1.1 '1.1000000000' "%.20f" % 1.1 '1.10000000000000010000' "%.30f" % 1.1 '1.100000000000000100000000000000'


The results of most of those (the ones asking for more than 17
significant digits) vary a lot across platforms. The IEEE-754
standard doesn't wholly define output conversions, and explicitly
allows that a conforming implementation may produce any digits
whatsoever at and after the 18th significant digit when converting a
754 double to a string. In practice, all implementations I know of that
exploit that produce zeroes at and after the 18th digit -- but they
could produce 1s instead, or 9s, or digits from pi, or repetitions of
the gross national product of Finland in 1967. You're using one of
those there, probably Windows. glibc does conversions "as if to
infinite precision" instead, so here on a Linux box:

"%.20g" % 1.1
'1.1000000000000000888'
"%.30g" % 1.1
'1.10000000000000008881784197001'
"%.50g" % 1.1
'1.1000000000000000888178419700125232338905334472656'
"%.100g" % 1.1
'1.100000000000000088817841970012523233890533447265625'

The last one is in fact the exact decimal representation of the 754
double closest to the decimal 1.1.
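
As an aside, in Python 2.7 and later (not in the version discussed in
this thread) the Decimal constructor accepts floats directly, which
makes that exact value easy to verify:

>>> from decimal import Decimal
>>> Decimal(1.1)
Decimal('1.100000000000000088817841970012523233890533447265625')
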
more here: http://docs.python.org/tut/node16.html


Still, there's always more ;-)
Jan 19 '06 #7
Fredrik Lundh wrote:
str(1.1)
'1.1'
repr(1.1)
'1.1000000000000001'


To add to the confusion:
str(1.1000000000000001)
'1.1'
repr(1.1000000000000001)
'1.1000000000000001'

Floating point numbers are not precise.
Decimals are, so they require precise
information when they are constructed.

If you want to use the rounding that str()
uses when you create a Decimal instance,
please use that: Decimal(str(x)). By forcing
you to do that, Python makes you aware of the
problem and you can avoid nasty surprises.

If you think that's excessive typing, just
define a simple function.

def D(x): return Decimal(str(x))
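
Used like that, the helper gives the "expected" value, at the cost of
discarding the float's exact bits in favour of str()'s rounding (a
minimal sketch, assuming Decimal has been imported from the decimal
module):

>>> D(0.1)
Decimal("0.1")
>>> D(1.1) ** 2
Decimal("1.21")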
Jan 19 '06 #8
On Thu, 19 Jan 2006 12:16:22 +0100, Gerhard Häring <gh@ghaering.de> wrote:
[...]

floating points are always imprecise, so you wouldn't want them as an

Please, floating point is not "always imprecise." In a double there are
64 bits, and most patterns represent exact rational values. Other than
infinities and NaNs, you can't pick a bit pattern that doesn't have
a precise, exact rational value. BTW, you'd need a 64-bit CPU to build
range(-2**53, 2**53+1) as an actual list, but with the 53 bits of available
precision a float (IEEE 754 double) can represent each integer in that
range exactly (and of course similar sets counting by 2 or 4 etc.).
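
A quick check of that claim (a sketch, assuming IEEE 754 doubles):

>>> 2.0 ** 53 == 2 ** 53                      # 2**53 itself is exact
True
>>> float(2 ** 53) + 1.0 == float(2 ** 53)    # past 2**53, neighbours collide
True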

You can't represent all arbitrarily chosen reals exactly as floats, that's true,
but that's not the same as saying that "floating points are always imprecise."

As a practical matter it is hard to track when floating point calculations lose
exactness (though UIAM there are IEEE 754 hardware features that can support that),
so it is just easier to declare all floating point values to be tainted with inexactness
from the start, even though it isn't so.

1.0 is precisely represented as a float. So is 1.5 and so are more other values than
you can count with an ordinary int ;-)
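
For example (a sketch; dyadic rationals -- fractions whose denominators
are powers of two -- are exact, most others are not):

>>> 0.5 + 0.25 == 0.75    # 1/2, 1/4 and 3/4 are all exact doubles
True
>>> 0.1 + 0.2 == 0.3      # 1/10, 1/5 and 3/10 are not
False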

Regards,
Bengt Richter
Jan 20 '06 #9
On Fri, 20 Jan 2006 04:25:01 +0000, Bengt Richter wrote:
On Thu, 19 Jan 2006 12:16:22 +0100, Gerhard Häring <gh@ghaering.de> wrote:
[...]

floating points are always imprecise, so you wouldn't want them as an

Please, floating point is not "always imprecise." In a double there are
64 bits, and most patterns represent exact rational values. Other than
infinities and NaNs, you can't pick a bit pattern that doesn't have
a precise, exact rational value.


Of course every float has a precise rational value.
0.1000000000000000000001 has a precise rational value:

1000000000000000000001/10000000000000000000000

But that's hardly what people are referring to. The question isn't whether
every float is an (ugly) rational, but whether every (tidy) rational is a
float. And that is *not* the case, simple rationals like 1/10 cannot be
written precisely as floats no matter how many bits you use.
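
A classic demonstration of the 1/10 case (a sketch, assuming IEEE 754
doubles):

>>> sum([0.1] * 10) == 1.0     # ten copies of the 0.1 approximation drift
False
>>> sum([0.125] * 8) == 1.0    # 1/8 is a power of two, so this stays exact
True
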
You can't represent all arbitrarily chosen reals exactly as floats, that's true,
but that's not the same as saying that "floating points are always imprecise."


"Always" is too strong, since (for example) 1/2 can be represented
precisely as a float. But in general, for any "random" rational value N/M,
the odds are that it cannot be represented precisely as a float. And
that's what people mean when they say floats are imprecise.

--
Steven.

Jan 21 '06 #10
Steven D'Aprano wrote:
On Fri, 20 Jan 2006 04:25:01 +0000, Bengt Richter wrote:

On Thu, 19 Jan 2006 12:16:22 +0100, Gerhard Häring <gh@ghaering.de> wrote:
[...]
floating points are always imprecise, so you wouldn't want them as an


Please, floating point is not "always imprecise." In a double there are
64 bits, and most patterns represent exact rational values. Other than
infinities and NaNs, you can't pick a bit pattern that doesn't have
a precise, exact rational value.

Of course every float has a precise rational value.
0.1000000000000000000001 has a precise rational value:

1000000000000000000001/10000000000000000000000

But that's hardly what people are referring to. The question isn't whether
every float is an (ugly) rational, but whether every (tidy) rational is a
float. And that is *not* the case, simple rationals like 1/10 cannot be
written precisely as floats no matter how many bits you use.

You can't represent all arbitrarily chosen reals exactly as floats, that's true,
but that's not the same as saying that "floating points are always imprecise."

"Always" is too strong, since (for example) 1/2 can be represented
precisely as a float. But in general, for any "random" rational value N/M,
the odds are that it cannot be represented precisely as a float. And
that's what people mean when they say floats are imprecise.

And you thought Bengt didn't know that?

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Jan 21 '06 #11
On Sat, 21 Jan 2006 03:48:26 +0000, Steve Holden wrote:
Steven D'Aprano wrote:
On Fri, 20 Jan 2006 04:25:01 +0000, Bengt Richter wrote:

On Thu, 19 Jan 2006 12:16:22 +0100, Gerhard Häring <gh@ghaering.de> wrote:
[...]

floating points are always imprecise, so you wouldn't want them as an

Please, floating point is not "always imprecise." In a double there are
64 bits, and most patterns represent exact rational values. Other than
infinities and NaNs, you can't pick a bit pattern that doesn't have
a precise, exact rational value.

Of course every float has a precise rational value.
0.1000000000000000000001 has a precise rational value:

1000000000000000000001/10000000000000000000000

But that's hardly what people are referring to. The question isn't whether
every float is an (ugly) rational, but whether every (tidy) rational is a
float. And that is *not* the case, simple rationals like 1/10 cannot be
written precisely as floats no matter how many bits you use.

You can't represent all arbitrarily chosen reals exactly as floats, that's true,
but that's not the same as saying that "floating points are always imprecise."

"Always" is too strong, since (for example) 1/2 can be represented
precisely as a float. But in general, for any "random" rational value N/M,
the odds are that it cannot be represented precisely as a float. And
that's what people mean when they say floats are imprecise.

And you thought Bengt didn't know that?


I didn't know what to think.

Given the question "I want 0.1, but my float has the value 0.100...01,
why does Python have a bug?" comes up all the time, does it really help
to point out that the float representing 0.100...01 is an exact rational
-- especially without mentioning that it happens to be the *wrong* exact
rational?

I won't even try to guess what Bengt does or doesn't know, but he seems to
be implying that while floats can't represent arbitrary reals (like
sqrt(3) or pi) exactly, exact rationals are no problem at all. (Certainly
every one of his examples is a rational which floats do represent
exactly.) In any case, Bengt isn't the only person reading the thread.
Don't they deserve a clarification?

--
Steven.

Jan 21 '06 #12
Steven D'Aprano wrote:
On Sat, 21 Jan 2006 03:48:26 +0000, Steve Holden wrote:
Steven D'Aprano wrote:
On Fri, 20 Jan 2006 04:25:01 +0000, Bengt Richter wrote:
On Thu, 19 Jan 2006 12:16:22 +0100, Gerhard Häring <gh@ghaering.de> wrote:
[...]

>floating points are always imprecise, so you wouldn't want them as an

Please, floating point is not "always imprecise." In a double there are
64 bits, and most patterns represent exact rational values. Other than
infinities and NaNs, you can't pick a bit pattern that doesn't have
a precise, exact rational value.
Of course every float has a precise rational value.
0.1000000000000000000001 has a precise rational value:

1000000000000000000001/10000000000000000000000

But that's hardly what people are referring to. The question isn't whether
every float is an (ugly) rational, but whether every (tidy) rational is a
float. And that is *not* the case, simple rationals like 1/10 cannot be
written precisely as floats no matter how many bits you use.
You can't represent all arbitrarily chosen reals exactly as floats, that's true,
but that's not the same as saying that "floating points are always imprecise."
"Always" is too strong, since (for example) 1/2 can be represented
precisely as a float. But in general, for any "random" rational value N/M,
the odds are that it cannot be represented precisely as a float. And
that's what people mean when they say floats are imprecise.

And you thought Bengt didn't know that?


I didn't know what to think.

Given the question "I want 0.1, but my float has the value 0.100...01,
why does Python have a bug?" comes up all the time, does it really help
to point out that the float representing 0.100...01 is an exact rational
-- especially without mentioning that it happens to be the *wrong* exact
rational?


I concur, and I wonder why CASes like e.g. Maple, which represent
floating point numbers using two integers [1], are neither awkward to
use nor inefficient. According to the Python numeric experts one has to
pay a high tradeoff between speed and accuracy. But it seems this just
compares two Python implementations ( float / decimal ) and does not
compare those to approaches in other scientific computing systems. By
the way one can also learn from Maple how accuracy can be adjusted
practically. I never heard users complaining about that. I recommend
reading the Maple docs that are very explicit about this.

Kay

[1] It is a little more complicated. Maple makes the distinction
between floats ( hardware ) and sfloats ( software ) but makes only
limited use of hardware floats:

"For Maple 9, a software floating-point number (see type[sfloat]) and a
general floating-point number (see type[float]) are considered to be
the same object. Maple hardware floating-point numbers can only exist
as elements of rtable (Arrays, Matrices, and Vectors) and internally
within the evalhf evaluator. See UseHardwareFloats."
"The UseHardwareFloats environment variable controls whether Maple's
hardware or software floating-point computation environment is used to
perform all floating-point operations.

This environment variable has influence only over computations done on
floating-point rtables (Arrays, Matrices and Vectors). In future
versions of Maple, UseHardwareFloats will be used to force all
floating-point computations to be performed in the hardware
floating-point environment.

The default value of UseHardwareFloats is deduced. The value deduced
tells Maple to deduce the computation environment (hardware or
software) from the current setting of the Digits environment variable:
if Digits <= evalhf(Digits) then hardware float computation is
performed; otherwise, the computation is performed in software. The
value of UseHardwareFloats can be changed by using the assignment
operator. "

Jan 21 '06 #13
[Kay Schluehr]
I concur, and I wonder why CASes like e.g. Maple, which represent
floating point numbers using two integers [1], are neither awkward to
use nor inefficient.
My guess is that it's because you never timed the difference in Maple
-- or, perhaps, that you did, but misinterpreted the results. You
don't give any data, so it's hard to guess which.

BTW, why do you think Maple's developers added the UseHardwareFloats option?
According to the Python numeric experts one has to pay a
high tradeoff between speed and accuracy. But it seems this just
compares two Python implementations ( float / decimal ) and does not
compare those to approaches in other scientific computing systems.
It's easy to find papers comparing the speed of HW and SW floating
point in Maple. Have you done that, Kay? For example, read:

"Symbolic and Numeric Scientific Computation in Maple"
K.O. Geddes, H.Q. Le
http://www.scg.uwaterloo.ca/~kogeddes/papers/ICAAA02.ps

Keith Geddes is a key figure in Maple's history and development, and
can hardly be accused of being a Python apologist ;-) Note that
Example 1.5 there shows a _factor_ of 47 speed gain from using HW
instead of SW floats in Maple, when solving a reasonably large system
of linear equations. So I'll ask again ;-): why do you think Maple's
developers added the UseHardwareFloats option?

While that paper mentions the facility only briefly, Geddes and Zheng
give detailed analyses of the tradeoffs in Maple here:

"Exploiting Fast Hardware Floating Point in High Precision Computation"
http://www.scg.uwaterloo.ca/~kogedde...rs/TR200241.ps

If you're uncomfortable reading technical papers, one bottom line is
that they show that the time required by Maple to do a floating-point
multiplication in software "is at least 1000 times larger" than doing
the same with UseHardwareFloats set to true (and Digits:=15 in both
cases).
By the way one can also learn from Maple how accuracy can be adjusted
practically. I never heard users complaining about that.
It's easy to change the number of digits of precision in Python's
decimal module.
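
For instance (a minimal sketch, analogous to setting Digits in Maple):

>>> import decimal
>>> decimal.getcontext().prec = 50
>>> decimal.Decimal(1) / decimal.Decimal(3)
Decimal("0.33333333333333333333333333333333333333333333333333")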
...

Jan 21 '06 #14
Tim Peters wrote:
[Kay Schluehr]
I concur, and I wonder why CASes like e.g. Maple, which represent
floating point numbers using two integers [1], are neither awkward to
use nor inefficient.
My guess is that it's because you never timed the difference in Maple
-- or, perhaps, that you did, but misinterpreted the results. You
don't give any data, so it's hard to guess which.

BTW, why do you think Maple's developers added the UseHardwareFloats option?


For no good reason, at least not as a public interface. But since the
value of UseHardwareFloats is deduced from the accuracy ( the Digits
environment variable ), a programmer seldom has to deal explicitly with
UseHardwareFloats. That means he pays for accuracy only, which is the
essential numerical information. I guess an analogy is the use of
integers in Python, with the difference that precision is replaced by
the magnitude of the value. The solution is so nice that I wonder why
floats and decimals could not be unified in a similar way.
According to the Python numeric experts one has to pay a
high tradeoff between speed and accuracy. But it seems this just
compares two Python implementations ( float / decimal ) and does not
compare those to approaches in other scientific computing systems.


It's easy to find papers comparing the speed of HW and SW floating
point in Maple. Have you done that, Kay?


O.K. you are right here.
For example, read:

"Symbolic and Numeric Scientific Computation in Maple"
K.O. Geddes, H.Q. Le
http://www.scg.uwaterloo.ca/~kogeddes/papers/ICAAA02.ps

Keith Geddes is a key figure in Maple's history and development, and
can hardly be accused of being a Python apologist ;-) Note that
Example 1.5 there shows a _factor_ of 47 speed gain from using HW
instead of SW floats in Maple, when solving a reasonably large system
of linear equations. So I'll ask again ;-): why do you think Maple's
developers added the UseHardwareFloats option?

While that paper mentions the facility only briefly, Geddes and Zheng
give detailed analyses of the tradeoffs in Maple here:

"Exploiting Fast Hardware Floating Point in High Precision Computation"
http://www.scg.uwaterloo.ca/~kogedde...rs/TR200241.ps

If you're uncomfortable reading technical papers, one bottom line is
that they show that the time required by Maple to do a floating-point
multiplication in software "is at least 1000 times larger" than doing
the same with UseHardwareFloats set to true (and Digits:=15 in both
cases).


No, it's perfect, Tim. Thanks for the links.
By the way one can also learn from Maple how accuracy can be adjusted
practically. I never heard users complaining about that.


It's easy to change the number of digits of precision in Python's
decimal module.


If I remember correctly it was Steve Holden who complained that having
to explain accuracy by means of the decimal module would be an issue
for beginners in Python. I have nothing to complain about here ( o.k.,
nesting two levels deep to set prec is less nice than having Digits
offered by the CAS immediately, but this is mostly cosmetic ).

Kay

Jan 22 '06 #15
On Sat, 21 Jan 2006 14:28:20 +1100, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Fri, 20 Jan 2006 04:25:01 +0000, Bengt Richter wrote:
On Thu, 19 Jan 2006 12:16:22 +0100, Gerhard Häring <gh@ghaering.de> wrote:
[...]

floating points are always imprecise, so you wouldn't want them as an

Please, floating point is not "always imprecise." In a double there are
64 bits, and most patterns represent exact rational values. Other than
infinities and NaNs, you can't pick a bit pattern that doesn't have
a precise, exact rational value.


Of course every float has a precise rational value.
0.1000000000000000000001 has a precise rational value:

1000000000000000000001/10000000000000000000000

Good, I'm glad that part is clear ;-)
But that's hardly what people are referring to. The question isn't whether

"people"?

every float is an (ugly) rational, but whether every (tidy) rational is a
float. And that is *not* the case, simple rationals like 1/10 cannot be
written precisely as floats no matter how many bits you use.

See the next statement below. What did you think I meant?
You can't represent all arbitrarily chosen reals exactly as floats, that's true,
but that's not the same as saying that "floating points are always imprecise."


"Always" is too strong, since (for example) 1/2 can be represented
precisely as a float. But in general, for any "random" rational value N/M,
the odds are that it cannot be represented precisely as a float. And
that's what people mean when they say floats are imprecise.

That's what *you* mean, I take it ;-) I suspect what most people mean is that they don't
really understand how floating point works in detail, and they'd rather not think about it
if they can substitute a simple generalization that mostly keeps them out of trouble ;-)

Besides, "cannot be represented precisely" is a little more subtle than numbers of bits.
E.g., one could ask, how does the internal floating point bit pattern for 0.10000000000000001
(which incidentally is not the actual exact decimal value of the IEEE 754 bit pattern --
0.1000000000000000055511151231257827021181583404541015625 is the exact value)
*not* "represent" 0.1 precisely? E.g., if all you are interested in is one decimal fractional
digit, any float whose exact rational value is f where .05 <= f < 0.15 could be viewed as one
in a (quite large) set of peculiar error-correcting codes that all map to the exact value you
want to represent. This is a matter of what you mean by "represent" vs what is represented.
Float representations are just codes made of bits. If what you want is for '%5.2f'%f to
produce two reliably exact decimal fractional digits, you have a lot of choices for f. Chances
are f = 0.1 won't make for a surprise, which in some sense means that the float bits behind float('.1')
"represented" .1 exactly, even though they did so by way of an unambiguously associated nearby
but different mathematically exact value.

BTW, equally important to precision of individual numbers IMO is what happens to the precision of
results of operations on inexactly represented values. How do errors accumulate and eventually
cause purportedly precise results to differ from mathematically exact results more than the advertised
precision would seem to allow? This kind of question leads to laws about when and how to round,
and definition of legal usage for factors e.g. converting from one currency to another, where
the inverse conversion factor is not a mathematical inverse.
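
The decimal module's quantize() is the usual tool for such rounding
rules (a sketch, not from the post above):

>>> from decimal import Decimal, ROUND_HALF_UP
>>> Decimal("2.665").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
Decimal("2.67")
>>> Decimal("2.665").quantize(Decimal("0.01"))   # default: ROUND_HALF_EVEN
Decimal("2.66")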

Practicality is beating up on purity all over the place ;-)

Regards,
Bengt Richter
Jan 23 '06 #16
