
Simple addition

Hi,

I tried a simple addition with python and I don't understand what is
going on:

$ python
>>> 464.73+279.78
744.50999999999999

Weird, isn't it? I'm using Python 2.3.

comments welcome
Mathieu
Jul 18 '05 #1
11 Replies


Mathieu Malaterre wrote:
$ python
>>> 464.73+279.78
744.50999999999999

Weird, isn't it? I'm using Python 2.3.


Mathieu, you can find a full explanation of what you're seeing at the
following documentation link:
http://www.python.org/doc/current/tut/node15.html

From that page:

"Note that this is in the very nature of binary floating-point: this is
not a bug in Python, it is not a bug in your code either, and you'll see
the same kind of thing in all languages that support your hardware's
floating-point arithmetic (although some languages may not display the
difference by default, or in all output modes).

Python's builtin str() function produces only 12 significant digits, and
you may wish to use that instead. It's unusual for eval(str(x)) to
reproduce x, but the output may be more pleasant to look at:

>>> print str(0.1)
0.1

It's important to realize that this is, in a real sense, an illusion:
the value in the machine is not exactly 1/10, you're simply rounding the
display of the true machine value."
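
To see the difference the tutorial is describing, compare str() and repr() directly; a short sketch, with the output a Python 2.x interpreter of that era produces:

>>> 0.1                        # the prompt displays repr(): 17 significant digits
0.10000000000000001
>>> str(0.1)                   # str() rounds to 12 significant digits
'0.1'
>>> eval(repr(0.1)) == 0.1     # repr() is chosen so that it round-trips exactly
True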
Jul 18 '05 #2

Brian wrote:
Mathieu Malaterre wrote:
$ python
>>> 464.73+279.78
744.50999999999999

Weird, isn't it? I'm using Python 2.3.

Mathieu, you can find a full explanation of what you're seeing at the
following documentation link:
http://www.python.org/doc/current/tut/node15.html


Thanks a bunch, Brian!
Jul 18 '05 #3

>>>>> Brian <py*****************************@invalid.net> (B) wrote:
>>> print str(0.1)
B> 0.1

B> It's important to realize that this is, in a real sense, an illusion: the
B> value in the machine is not exactly 1/10, you're simply rounding the
B> display of the true machine value."

On the other hand, python could have done better. There are algorithms to
print floating point numbers properly with a more pleasant output[1]:
in this particular case python could have given "0.1" also with "print 0.1".
Unfortunately most C libraries only use the stupid algorithm which often
gives some useless digits.

This is because ideally it should print the representation with the least
number of digits that when read back gives the same internal value as the
number printed. In this case that is obviously "0.1".

[1] Guy L. Steele, Jr. and Jon L. White, "How to Print Floating-Point Numbers
Accurately", Proceedings of the ACM SIGPLAN 1990 Conference on Programming
Language Design and Implementation, pp. 112-126.
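
For illustration, here is a rough sketch of that "fewest digits that read back
to the same value" idea. It is not the published algorithm and is far less
efficient, but it runs on the Python 2.3 under discussion (shortest_repr is my
own helper name):

def shortest_repr(x):
    # Try 1..17 significant digits; 17 always suffice for an IEEE double.
    for ndigits in range(1, 18):
        s = '%.*g' % (ndigits, x)
        if float(s) == x:          # does it read back to the identical value?
            return s
    return '%.17g' % x

print shortest_repr(0.1)               # -> 0.1
print shortest_repr(464.73 + 279.78)   # -> 744.51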
--
Piet van Oostrum <pi**@cs.uu.nl>
URL: http://www.cs.uu.nl/~piet [PGP]
Private email: P.***********@hccnet.nl
Jul 18 '05 #4


"Piet van Oostrum" <pi**@cs.uu.nl> wrote in message
news:wz************@cs.uu.nl...
>>>>> Brian <py*****************************@invalid.net> (B) wrote:
>>> print str(0.1)
B> 0.1

B> It's important to realize that this is, in a real sense, an illusion: the
B> value in the machine is not exactly 1/10, you're simply rounding the
B> display of the true machine value."

On the other hand, python could have done better.

Python gives you a choice between most exact and 'pleasant'. This *is*
better, in my opinion, than no choice.
There are algorithms to
print floating point numbers properly with a more pleasant output[1]:
in this particular case python could have given "0.1" also with "print 0.1".

What? In 2.2:

>>> print 0.1
0.1

did this change in 2.3?
Unfortunately most C libraries only use the stupid algorithm which often
gives some useless digits.

They are not useless if you want more accuracy about what you have and what
you will get with further computation. Tracking error expansion is an
important part of designing floating point calculations.
This is because ideally it should print the representation with the least
number of digits that when read back gives the same internal value as the
number printed. In this case that is obviously "0.1".


This is opinion, not fact. Opinions are divided.

Terry J. Reedy
Jul 18 '05 #5

Mathieu Malaterre <mm******@nycap.rr.com> wrote in message news:<LW*******************@twister.nyroc.rr.com>...
Hi,

I tried a simple addition with python and I don't understand what is
going on:

$ python
>>> 464.73+279.78
744.50999999999999


Your computer does arithmetic in binary. None of these numbers can be
exactly represented as a binary fraction.

464.73 = bin 111010000.10 11101011100001010001 11101011100001010001...
279.78 = bin 100010111.1 10001111010111000010 10001111010111000010...

They get rounded to the nearest 53-bit float:

464.73 ~= 0x1.D0BAE147AE147p+8
279.78 ~= 0x1.17C7AE147AE14p+8
--------------------
0x2.E8828F5C28F5Bp+8
~= 0x1.744147AE147AEp+9 after normalization

The exact decimal equivalent of this sum is
744.509999999999990905052982270717620849609375. Python's repr()
rounds this to 17 decimal digits, or "744.50999999999999".
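
For anyone who wants to check those bits and digits themselves: on an
interpreter newer than the 2.3 discussed here (float.hex() needs Python 2.6,
Decimal.from_float needs 2.7), a short sketch:

from decimal import Decimal

x = 464.73 + 279.78
print x.hex()                # 0x1.744147ae147aep+9, the normalized sum above
print Decimal.from_float(x)  # 744.509999999999990905052982270717620849609375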
Jul 18 '05 #6

>>>>> "Terry Reedy" <tj*****@udel.edu> (TR) wrote:

TR> "Piet van Oostrum" <pi**@cs.uu.nl> wrote in message
TR> news:wz************@cs.uu.nl...
>>>>> Brian <py*****************************@invalid.net> (B) wrote:

>>> print str(0.1)
B> 0.1
B> It's important to realize that this is, in a real sense, an illusion: the
B> value in the machine is not exactly 1/10, you're simply rounding the
B> display of the true machine value."
On the other hand, python could have done better.
TR> Python gives you a choice between most exact and 'pleasant'. This *is*
TR> better, in my opinion, than no choice.

0.10000000000000001 is not more exact than 0.1. It is a false illusion of
exactness.
There are algorithms to
print floating point numbers properly with a more pleasant output[1]:
in this particular case python could have given "0.1" also with "print 0.1".

TR> What? In 2.2:
TR> >>> print 0.1
TR> 0.1
TR> did this change in 2.3?

OK, my mistake, I should have left out the print. But you should know what I
mean.
Unfortunately most C libraries only use the stupid algorithm which often
gives some useless digits.
TR> They are not useless if you want more accuracy about what you have and what
TR> you will get with further computation. Tracking error expansion is an
TR> important part of designing floating point calculations.
This is because ideally it should print the representation with the least
number of digits that when read back gives the same internal value as the
number printed. In this case that is obviously "0.1".


TR> This is opinion, not fact. Opinions are divided.

It would cause no errors, and it would prevent a lot of the questions that
appear here every few days about this subject. So what is the advantage of
printing 0.10000000000000001 or xx.xxx999999999998?
--
Piet van Oostrum <pi**@cs.uu.nl>
URL: http://www.cs.uu.nl/~piet [PGP]
Private email: P.***********@hccnet.nl
Jul 18 '05 #7

"Terry Reedy" <tj*****@udel.edu> wrote in message news:<3O********************@comcast.com>...
"Piet van Oostrum" <pi**@cs.uu.nl> wrote in message
news:wz************@cs.uu.nl...
>>>> Brian <py*****************************@invalid.net> (B) wrote: [repr(0.1) is ugly!]

> > Unfortunately most C libraries only use the stupid algorithm which often
> > gives some useless digits.

> They are not useless if you want more accuracy about what you have

Why not display the *exact* decimal representation,
"0.1000000000000000055511151231257827021181583404541015625"?

> and what you will get with further computation. Tracking error expansion is
> an important part of designing floating point calculations.

We're talking about human-readable representation, not calculations.
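
Getting those exact digits does not require trusting the platform's %f;
long-integer arithmetic is enough. A rough sketch that runs on Python 2.3
(exact_decimal is my own helper, not anything in the standard library):

import math

def exact_decimal(x):
    # Exact decimal expansion of a positive, normal binary float.
    m, e = math.frexp(x)           # x == m * 2**e with 0.5 <= m < 1
    mi = int(m * 2 ** 53)          # 53-bit integer significand (exact)
    e -= 53                        # now x == mi * 2**e exactly
    if e >= 0:
        return str(mi * 2 ** e)
    digits = str(mi * 5 ** -e)     # mi * 2**e == (mi * 5**-e) / 10**-e
    digits = digits.zfill(-e + 1)  # pad so there is a digit before the point
    frac = digits[e:].rstrip('0') or '0'
    return digits[:e] + '.' + frac

print exact_decimal(0.1)
# -> 0.1000000000000000055511151231257827021181583404541015625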
Jul 18 '05 #8

Dan Bishop wrote:
Why not display the *exact* decimal representation,
"0.10000000000000000555111512312578270211815834045 41015625"?


This has my vote. Unfortunately Python seems incapable of figuring out all
of those digits.

Python 2.3.2 (#49, Oct 2 2003, 20:02:00) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> '%.64f' % 0.1
'0.1000000000000000100000000000000000000000000000000000000000000000'
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #9

> > > Unfortunately most C libraries only use the stupid algorithm which often
> > > gives some useless digits.

> > They are not useless if you want more accuracy about what you have

> Why not display the *exact* decimal representation,
> "0.1000000000000000055511151231257827021181583404541015625"?


One can do better than that--or at least something I think is better.

Many moons ago, Guy Steele proposed an elegant pair of rules for converting
between decimal and internal floating-point, be it binary, decimal,
hexadecimal, or something else entirely:

1) Input (i.e. conversion from decimal to internal form) always yields
the closest (rounded) internal value to the given input.

2) Output (i.e. conversion from internal form to decimal) yields the
smallest number of significant digits that, when converted back to internal
form, yields exactly the same value.

This scheme is useful because, among other things, it ensures that all
numbers with only a few significant digits will convert to internal form and
back to decimal without change. For example, consider 0.1. Converting 0.1
to internal form yields the closest internal number to 0.1. Call that
number X. Then when we write X back out again, we *must* get 0.1, because
0.1 is surely the decimal number with the fewest significant digits that
yields X when converted.
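
A tiny illustration of that last point (a sketch, not the Steele/White
algorithm itself): both of the strings below convert to the identical double,
and rule 2 simply picks the shorter one as the output.

x = 0.1
print float('0.1') == x                    # True
print float('0.10000000000000001') == x    # True (the 17-digit form repr() prints here)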

I have suggested in the past that Python use these conversion rules. It
turns out that there are three strong arguments against it:

1) It would preclude using the native C library for conversions, and
would probably yield different results from C under some circumstances.

2) It is difficult to implement portably, and if it is not implemented
portably, it must be reimplemented for every platform.

3) It potentially requires unbounded-precision arithmetic to do the
conversions, although a clever implementation can avoid it most of the time.

I still think it would be a good idea, but I can see that it would be more
work than is feasible. I don't want to do the work myself, anyway :-)
Jul 18 '05 #10

Rainer Deyke wrote:
Dan Bishop wrote:
Why not display the *exact* decimal representation,
"0.100000000000000005551115123125782702118158340 4541015625"?

This has my vote. Unfortunately Python seems incapable of figuring out all
of those digits.

Python 2.3.2 (#49, Oct 2 2003, 20:02:00) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> '%.64f' % 0.1
'0.1000000000000000100000000000000000000000000000000000000000000000'


Python 2.3.3 seems to be able to do it on Red Hat Linux 9.0:

[rodh@rodh rodh]$ python
Python 2.3.3 (#1, Dec 20 2003, 17:47:13)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> '%.64f' % 0.1
'0.1000000000000000055511151231257827021181583404541015625000000000'

Must be a M$ MSC problem.

--
Rod
Jul 18 '05 #11

>>>>> "Andrew Koenig" <ar*@acm.org> (AK) wrote:

AK> Many moons ago, Guy Steele proposed an elegant pair of rules for converting
AK> between decimal and internal floating-point, be it binary, decimal,
AK> hexadecimal, or something else entirely:

That was exactly what I was suggesting. I even included the bib reference.
--
Piet van Oostrum <pi**@cs.uu.nl>
URL: http://www.cs.uu.nl/~piet [PGP]
Private email: P.***********@hccnet.nl
Jul 18 '05 #12
