In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
following types of errors whenever I do simple arithmetic:
1st example:
>>> 12.10 + 8.30
20.399999999999999
>>> 1.1 - 0.2
0.90000000000000013
2nd example (no errors here):
>>> bool(130.0 - 129.0 == 1.0)
True
3rd example:
>>> a = 0.013
>>> b = 0.0129
>>> c = 0.0001
>>> [a, b, c]
[0.012999999999999999, 0.0129, 0.0001]
>>> bool((a - b) == c)
False
This sort of error is no big deal in most cases, but I'm sure it could
become a problem under certain conditions, particularly the 3rd
example, where I'm using truth testing. The same results occur in all
cases whether I define variables a, b, and c, or enter the values
directly into the bool statement. Also, it doesn't make a difference
whether "a = 0.013" or "a = 0.0130".
I haven't checked this under windows 2000 or XP, but I expect the same
thing would happen. Any suggestions for a way to fix this sort of
error?
Have a look at the FAQ (before the responses to your message build up).
----- Original Message -----
From: Radioactive Man
Newsgroups: comp.lang.python
To: py*********@python.org
Sent: Saturday, September 18, 2004 9:50 AM
Subject: Math errors in python
[original message quoted in full -- snipped]
[Radioactive Man]
> In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
> following types of errors whenever I do simple arithmetic:
> 1st example:
> >>> 12.10 + 8.30
> 20.399999999999999
> ...
Please read the Tutorial appendix on floating-point issues: http://docs.python.org/tut/node15.html
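The appendix's point fits in a few lines. A minimal sketch (the exact repr digits vary by Python version, so the sketch tests the comparison rather than the printed string):

```python
# Minimal sketch of the tutorial appendix's point: the binary double
# nearest a decimal literal is not exactly that decimal; comparing
# exposes the difference, while rounding for display hides it.
x = 1.1 - 0.2
print(x == 0.9)       # False: the result differs from float("0.9")
print(round(x, 2))    # 0.9 once rounded for display
print(abs(x - 0.9))   # a tiny error, on the order of 1e-16
```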
On Saturday 18 September 2004 09:50 am, Radioactive Man wrote:
> In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
> following types of errors whenever I do simple arithmetic:
> 1st example:
> >>> 12.10 + 8.30
> 20.399999999999999

It's not a bug, it's a feature of binary arithmetic on ALL computers
in ALL languages. (But perhaps Python is the first time it has not
been hidden from you.)
See the Python FAQ entry 1.4.2: http://www.python.org/doc/faq/genera...-so-inaccurate
Gary Herron
On Sat, 18 Sep 2004 16:50:16 +0000, Radioactive Man wrote:
> In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
> following types of errors whenever I do simple arithmetic:
Specifically (building on DogWalker's reply), http://www.python.org/doc/faq/genera...-so-inaccurate
Radioactive Man wrote:
> thing would happen. Any suggestions for a way to fix this sort of
> error?

Starting with Python 2.4 there will be the 'decimal' module supporting
"arithmetic the way you know it":

>>> from decimal import *
>>> Decimal("12.10") + Decimal("8.30")
Decimal("20.40")
>>> Decimal("1.1") - Decimal("0.2")
Decimal("0.9")
>>> Decimal("130.0") - Decimal("129.0") == Decimal("1.0")
True
>>> a, b, c = map(Decimal, "0.013 0.0129 0.0001".split())
>>> a, b, c
(Decimal("0.013"), Decimal("0.0129"), Decimal("0.0001"))
>>> (a - b) == c
True
Peter
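For completeness, the decimal module also lets you choose the working precision through its context. A sketch against the 2.4-era API (which is still the modern standard-library one):

```python
# Sketch: decimal arithmetic with user-chosen precision, using the
# standard-library decimal module.
from decimal import Decimal, getcontext

print(Decimal("12.10") + Decimal("8.30"))    # 20.40 -- exact decimal sum
print(Decimal("0.013") - Decimal("0.0129"))  # 0.0001 -- no binary error

getcontext().prec = 6                        # six significant digits
print(Decimal(1) / Decimal(7))               # 0.142857
```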
Hi !
Many languages, yes.
All languages, no.
Few languages can use the DCB possibilities of the processors.
@-salutations
--
Michel Claveau
On Sat, 18 Sep 2004 20:54:42 +0200, Michel Claveau - abstraction
méta-galactique non triviale en fuite perpétuelle.
<un************@msupprimerlepoint.claveauPOINTco m> declaimed the
following in comp.lang.python:

> Few languages can use the DCB possibilities of the processors.

Uhmm... Data Control Block?
Did you mean BCD -- binary coded decimal? That is a form of
decimal arithmetic, not native binary floating point.
Considering how often the question about strange floating point
output comes up, I come to the conclusion that CS courses no longer
teach the basics of representation... (Try reading the Ada Reference
Manual regarding the differences between float and fixed, neither of
which imply decimal.)
Or find a copy of "Real Computing Made Real" http://www.amazon.com/exec/obidos/tg...95157?v=glance
--
wl*****@ix.netcom.com  | Wulfraed Dennis Lee Bieber  KD6MOG
wu******@dm.net        | Bestiaria Support Staff
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>
Hi !
BCD (English) ==> DCB (French).

And Alcyane BASIC ran, in 1976, with BCD-based floating-point
arithmetic on the 8080, before IEEE standardized base-256
floating-point arithmetic (and long before Ada).

And BCD is native in all Intel processors.

And before, when I used Fortran on a minicomputer, there was no such
problem.

But it is much easier to work with base 256, and so backward
compatibility won out... Alas.
*sorry for my bad english*
Michel Claveau
On Sat, 18 Sep 2004 22:58:13 +0200, Michel Claveau - abstraction
méta-galactique non triviale en fuite perpétuelle.
<un************@msupprimerlepoint.claveauPOINTco m> declaimed the
following in comp.lang.python:

> And before, when I used Fortran on a minicomputer, there was no such
> problem.

It may have been there, but you never saw it in print <G>
When I took classes in the 70s, we were taught never to rely
upon comparing two floating point numbers for equality. Instead, we were
told to compare for the difference between the two numbers being less
than some epsilon, with the epsilon defined as the smallest difference
that could be considered equal for the numbers being compared (if the
input is only good for 5 significant figures, the epsilon should not
be down at the 7th significant figure).
a = 5.234
b = 5.235
epsilon = 0.0005
if abs(a - b) < epsilon:
    print("equal")
Gary Herron <gh*****@islandtraining.com> wrote in message
news:<ma**************************************@python.org>...
> On Saturday 18 September 2004 09:50 am, Radioactive Man wrote:
> > 1st example:
> > >>> 12.10 + 8.30
> > 20.399999999999999
> It's not a bug, it's a feature of binary arithmetic on ALL computers
> in ALL languages.
Actually, it's a feature of limited-precision floating-point in ANY
base, not just binary. This includes base-10. (I'm sure you've seen
BCD calculators that give 1/3*3=0.99999999.)
Gary Herron wrote:
> On Saturday 18 September 2004 09:50 am, Radioactive Man wrote:
> > In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
> > following types of errors whenever I do simple arithmetic:
> > 1st example:
> > >>> 12.10 + 8.30
> > 20.399999999999999
> It's not a bug, it's a feature of binary arithmetic on ALL computers
> in ALL languages. (But perhaps Python is the first time it has not
> been hidden from you.)
> See the Python FAQ entry 1.4.2:
> http://www.python.org/doc/faq/genera...-so-inaccurate
That's nonsense. My 7-year old TI-83 performs that calculation just
fine, and you're telling me, in this day and age, that Python running on
a modern 32-bit processor can't even compute simple decimals accurately?
Don't defend bad code.
Peter Otten wrote:
> Radioactive Man wrote:
> > thing would happen. Any suggestions for a way to fix this sort of
> > error?
> Starting with Python 2.4 there will be the 'decimal' module
> supporting "arithmetic the way you know it":
Great, why fix what's broken when we can introduce a new module with an
inconvenient API.
Jeremy Bowers wrote:
> On Sat, 18 Sep 2004 16:50:16 +0000, Radioactive Man wrote:
> > In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
> > following types of errors whenever I do simple arithmetic:
> Specifically (building on DogWalker's reply),
> http://www.python.org/doc/faq/genera...-so-inaccurate
Perhaps there's a simple explanation for this, but why do we go to the
trouble of computing fractions when our hardware can't handle the
result? If the decimal value of 1/3 can't be represented in binary,
then don't try. We should use an internal representation that stores
the
numerator and denominator as separate integers.
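The numerator/denominator idea exists today as the standard-library fractions module (added in Python 2.6; in 2004 a third-party package such as gmpy filled this role). A sketch:

```python
# Sketch of exact rational arithmetic with fractions.Fraction.
# Every finite decimal becomes an exact ratio of integers, so the
# OP's third example compares equal, as expected.
from fractions import Fraction

a = Fraction("0.013")      # stored exactly as 13/1000
b = Fraction("0.0129")     # 129/10000
c = Fraction("0.0001")     # 1/10000
print(a - b == c)          # True: no representation error
print(Fraction(1, 3) * 3)  # 1 -- exact, unlike repeating decimals
```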
Chris S. wrote:
> > Starting with Python 2.4 there will be the 'decimal' module
> > supporting "arithmetic the way you know it":
> Great, why fix what's broken when we can introduce a new module with
> an inconvenient API.
1. It ain't broken.
2. What fraction of the numbers in your programs are constants?
Peter
On Sunday 19 September 2004 12:18 am, Chris S. wrote:
> Jeremy Bowers wrote:
> > Specifically (building on DogWalker's reply),
> > http://www.python.org/doc/faq/genera...-point-calculations-so-inaccurate
> Perhaps there's a simple explanation for this, but why do we go to
> the trouble of computing fractions when our hardware can't handle
> the result? If the decimal value of 1/3 can't be represented in
> binary, then don't. We should use an internal representation that
> stores the numerator and denominator as separate integers.
That's called rational arithmetic, and I'm sure you can find a package
that implements it for you. However what would you propose for
irrational numbers like sqrt(2) and transcendental numbers like PI?
While I'd love to compute with all those numbers in infinite
precision, we're all stuck with FINITE sized computers, and hence with
the inaccuracies of finite representations of numbers.
Dr. Gary Herron
Gary Herron wrote:
> That's called rational arithmetic, and I'm sure you can find a
> package that implements it for you. However what would you propose
> for irrational numbers like sqrt(2) and transcendental numbers like
> PI?
Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
arithmetic is meant for. Any decimal can be represented by a fraction,
yet not all fractions can be represented by decimals. My point is that
such simple accuracy should be supported out of the box.
> While I'd love to compute with all those numbers in infinite
> precision, we're all stuck with FINITE sized computers, and hence
> with the inaccuracies of finite representations of numbers.
So are our brains, yet we somehow manage to compute 12.10 + 8.30
correctly using nothing more than simple skills developed in
grade-school. You could theoretically compute an infinitely long
equation by simply operating on single digits, yet Python, with all of
its resources, can't overcome this hurdle?
However, I understand Python's limitation in this regard. This
inaccuracy stems from the traditional C mindset, which typically
dismisses any approach not directly supported in hardware. As the FAQ
states, this problem is due to the "underlying C platform". I just find
it funny how a $20 calculator can be more accurate than Python running
on a $1000 Intel machine.
Peter Otten wrote:
> Chris S. wrote:
> > > Starting with Python 2.4 there will be the 'decimal' module
> > > supporting "arithmetic the way you know it":
> > Great, why fix what's broken when we can introduce a new module
> > with an inconvenient API.
> 1. It ain't broken.
Call it what you will, it doesn't produce the correct result. From where
I come from, that's either bad or broken.
2. What fraction of the numbers in your programs are constants?
What?
Peter Otten <__*******@web.de> writes:
> Starting with Python 2.4 there will be the 'decimal' module
> supporting "arithmetic the way you know it":
> >>> from decimal import *
> >>> Decimal("12.10") + Decimal("8.30")
I haven't tried 2.4 yet. After
a = Decimal("1") / Decimal("3")
b = a * Decimal("3")
print b
What happens? Is that arithmetic as the way I know it?
On Sun, 19 Sep 2004 08:00:03 GMT, Chris S. wrote:
> Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
> arithmetic is meant for. Any decimal can be represented by a
> fraction, yet not all fractions can be represented by decimals. My
> point is that such simple accuracy should be supported out of the
> box.

Do you really think Pi equals 22/7 ?

>>> import math
>>> print math.pi
3.14159265359
>>> print 22.0/7.0
3.14285714286

What do you get on your $20 calculator ?
--
Richard
Richard Townsend wrote:
> Do you really think Pi equals 22/7 ?

Of course not. That's just a common approximation. Irrational numbers
are an obvious exception, but we shouldn't sacrifice the accuracy of
common decimal math just for their sake.

> >>> import math
> >>> print math.pi
> 3.14159265359
> >>> print 22.0/7.0
> 3.14285714286
> What do you get on your $20 calculator ?
The same thing actually.
On Sunday 19 September 2004 01:00 am, Chris S. wrote:
> Gary Herron wrote:
> > That's called rational arithmetic, and I'm sure you can find a
> > package that implements it for you. However what would you
> > propose for irrational numbers like sqrt(2) and transcendental
> > numbers like PI?
> Sqrt is a fair criticism, but Pi equals 22/7,
What? WHAT? Are you nuts? Pi and 22/7 are most certainly not equal.
They don't even share three digits beyond the decimal point. (Can you
really be that ignorant about numbers and expect to contribute
intelligently to a discussion about numbers? Pi is a non-repeating
and non-terminating number in base 10 or any other base.)
> exactly the form this arithmetic is meant for. Any decimal can be
> represented by a fraction, yet not all fractions can be represented
> by decimals. My point is that such simple accuracy should be
> supported out of the box.

> > While I'd love to compute with all those numbers in infinite
> > precision, we're all stuck with FINITE sized computers, and hence
> > with the inaccuracies of finite representations of numbers.

> So are our brains, yet we somehow manage to compute 12.10 + 8.30
> correctly using nothing more than simple skills developed in
> grade-school. You could theoretically compute an infinitely long
> equation by simply operating on single digits, yet Python, with all
> of its resources, can't overcome this hurdle?

> However, I understand Python's limitation in this regard. This
> inaccuracy stems from the traditional C mindset, which typically
> dismisses any approach not directly supported in hardware. As the
> FAQ states, this problem is due to the "underlying C platform". I
> just find it funny how a $20 calculator can be more accurate than
> Python running on a $1000 Intel machine.
If you are happy doing calculations with decimal numbers like 12.10 +
8.30, then the Decimal package may be what you want, but that fails as
soon as you want 1/3. But then you could use a rational arithmetic
package and get 1/3, but that would fail as soon as you needed sqrt(2)
or Pi. But then you could try ... what? Can you see the pattern
here? Any representation of the infinity of numbers on a finite
computer *must* necessarily be unable to represent some (actually
infinitely many) of those numbers. The inaccuracies stem from that
fact.
Hardware designers have settled on a binary representation of floating
point numbers, and both C and Python use the underlying hardware
implementation. (Try your calculation in C -- you'll get the same
result if you choose to print out enough digits.)
And BTW, your calculator is not, in general, more accurate than the
modern IEEE binary hardware representation of numbers used on most of
today's computers. It is more accurate on only a select subset of all
numbers, and it does a good job of fooling you in those cases where it
loses accuracy, by doing calculations on more digits than it displays,
and rounding off to the on-screen digits.
So while a calculator will fool you into believing it is accurate when
it is not, it is Python's design decision to not cater to fools.
Dr Gary Herron
Hi !
No. BCD works differently: two digits per byte. The calculation is
basically integer; it is the library that manages the decimal point.
There is no rounding problem.
@-salutations
--
Michel Claveau
Gary Herron <gh*****@islandtraining.com> writes:
> Any representation of the infinity of numbers on a finite computer
> *must* necessarily be unable to represent some (actually infinitely
> many) of those numbers. The inaccuracies stem from that fact.
Well, finite computers can't even represent all the integers, but
we can reasonably think of Python as capable of doing exact integer
arithmetic.
The issue here is that Python's behavior confuses the hell out of some
new users. There is a separate area of confusion, that
a = 2 / 3
sets a to 0, and to clear that up, the // operator was introduced and
Python 3.0 will supposedly treat / as floating-point division even
when both operands are integers. That doesn't solve the also very
common confusion that (1.0/3.0)*3.0 = 0.99999999. Rational arithmetic
can solve that.
Yes, with rational arithmetic, it will still be true that
sqrt(5.)**2.0 doesn't quite equal 5, but hardly anyone ever complains
about that.
And yes, there are languages that can do exact arithmetic on arbitrary
algebraic numbers, but they're normally not used for general-purpose
programming.
On Sunday, 19 September 2004, at 09:05, Chris S. wrote:
> That's nonsense. My 7-year old TI-83 performs that calculation just
> fine, and you're telling me, in this day and age, that Python
> running on a modern 32-bit processor can't even compute simple
> decimals accurately? Don't defend bad code.
Do you actually know how your TI-83 works? If you did, you wouldn't be
complaining about bad code. The TI-83 is hiding something from you,
not Python.
This discussion is so senseless and inflammatory that I take the OP to
be a troll...
Heiko.
On Sunday, 19 September 2004, at 09:39, Gary Herron wrote:
> That's called rational arithmetic, and I'm sure you can find a
> package that implements it for you. However what would you propose
> for irrational numbers like sqrt(2) and transcendental numbers like
> PI?
Just as an example, try gmpy: unlimited precision integer and rational
arithmetic. But don't expect anything more than the four basic
operations on rationals, because algorithms like sqrt and pow become
so slow that nobody sensible would use them; people rather stick to
the binary arithmetic the computer uses (although this might have some
minor effects on precision, these can be bounded).
Heiko.
Paul Rubin wrote:
> Peter Otten <__*******@web.de> writes:
> > Starting with Python 2.4 there will be the 'decimal' module
> > supporting "arithmetic the way you know it":
> > >>> from decimal import *
> > >>> Decimal("12.10") + Decimal("8.30")
> I haven't tried 2.4 yet.

The author is currently working on an installer, but just dropping it
into 2.3's site-packages should work, too.

> After
> a = Decimal("1") / Decimal("3")
> b = a * Decimal("3")
> print b
> What happens? Is that arithmetic as the way I know it?

Decimal as opposed to rational:

>>> from decimal import *
>>> Decimal(1)/Decimal(3)
Decimal("0.3333333333333333333333333333")
>>> 3*_
Decimal("0.9999999999999999999999999999")
Many people can cope with the inaccuracy induced by base 10 representations
and are taken by surprise by base 2 errors.
But you are right I left too much room for interpretation.
Peter
Chris S. wrote:
> > > Great, why fix what's broken when we can introduce a new module
> > > with an inconvenient API.
> > 1. It ain't broken.
> Call it what you will, it doesn't produce the correct result. From
> where I come from, that's either bad or broken.

If there is a way to always get the "correct" result in numerical
mathematics, I don't know it. But I'm not an expert. Can you
enlighten me?

> > 2. What fraction of the numbers in your programs are constants?
> What?
Expressions like a*b+c are not affected by the choice of float/Decimal.
Values are normally read from a file or given interactively by a user. I
supposed that what you called inconvenient to be limited to decimal
constants (Decimal("1.2") vs. 1.2 for floats) and questioned its
significance, especially as scientific users will probably continue to use
floats.
Peter
Chris S. <ch*****@NOSPAM.udel.edu> wrote:
> Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
> arithmetic is meant for.

Of course it doesn't. What a silly assertion.

> Any decimal can be represented by a fraction, yet not all fractions
> can be represented by decimals.

And pi can't be represented by either (if you mean _finite_ decimals
and fractions).

> My point is that such simple accuracy should be supported out of
> the box.
In Python 2.4, decimal computations are indeed "supported out of the
box", although you do explicitly have to request them (the default
remains floating-point). In 2.3, you have to download and use any of
several add-on packages (decimal computations and rational ones have
very different characteristics, so you do have to choose) -- big deal.

> > While I'd love to compute with all those numbers in infinite
> > precision, we're all stuck with FINITE sized computers, and hence
> > with the inaccuracies of finite representations of numbers.
> So are our brains, yet we somehow manage to compute 12.10 + 8.30
> correctly using nothing more than simple skills developed in
Using base 10, sure. Or, using fractions, even something that decimals
would not let you compute finitely, such as 1/7+1/6.
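The 1/7 + 1/6 example can be checked directly: rationals carry it exactly, while any fixed-precision decimal must round. A sketch using the modern fractions and decimal modules (in 2004 the rational side would have been a package such as gmpy):

```python
# 1/7 and 1/6 have no finite decimal expansion, so decimal arithmetic
# must round them before adding, while rational arithmetic is exact.
from fractions import Fraction
from decimal import Decimal, getcontext

print(Fraction(1, 7) + Fraction(1, 6))   # 13/42, exact

getcontext().prec = 28
approx = Decimal(1) / Decimal(7) + Decimal(1) / Decimal(6)
print(approx)  # about 0.3095238095..., rounded in the last place
print(approx == Decimal(13) / Decimal(42))  # False: rounding happened
                                            # before the addition
```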
> grade-school. You could theoretically compute an infinitely long
> equation by simply operating on single digits,
Not in finite time, you couldn't (excepting a few silly cases where the
equation is "infinitely long" only because of some rule that _can_ be
finitely expressed, so you don't even have to LOOK at all the equation
to solve [which is what I guess you mean by "compute"...?] it -- if you
have to LOOK at all of the equation, and it's infinite, you can't get
done in finite time).
> yet Python, with all of its resources, can't overcome this hurdle?
The hurdle of decimal arithmetic, you mean? Download Python 2.4 and
play with decimal to your heart's content. Or do you mean fractions?
Then download gmpy and ditto. There are also packages for symbolic
computation and even more exotic kinds of arithmetic.
In practice, with the sole exception of monetary computations (which may
often be constrained by law, or at the very least by customary
practice), there is no real-life use in which the _accuracy_ of floating
point isn't ample. There are nevertheless lots of traps in arithmetic,
but switching to forms of arithmetic different from float doesn't really
make all the traps magically disappear, of course.
> However, I understand Python's limitation in this regard. This
> inaccuracy stems from the traditional C mindset, which typically
> dismisses any approach not directly supported in hardware. As the
> FAQ
Ah, I see, a case of "those who can't be bothered to learn a LITTLE
history before spouting off" etc etc. Python's direct precursor, the
ABC language, used unbounded-precision rationals. As a result (obvious
to anybody who bothers to learn a little about the inner workings of
arithmetic), the simplest-looking string of computations could easily
consume all the memory at your computer's disposal, and then some, and
apparently unbounded amounts of time. It turned out that users object,
most of the time, to having some apparently trivial computation take
hours, rather than seconds, in order to be unboundedly precise rather
than, say, precise to "just" a couple hundred digits (far more digits
than you need to count the number of atoms in the Galaxy). So,
unbounded rationals as a default are out -- people may sometimes SAY
they want them, but in fact, in an overwhelming majority of the cases,
they actually do not (oh yes, people DO lie, first of all to
themselves:-).
As for decimals, that's what a very-high level language aiming for a
niche very close to Python used from the word go. It got started WAY
before Python -- I was productively using it over 20 years ago -- and
had the _IBM_ brand on it, which at the time pretty much meant the
thousand-pound gorilla of computers. So where is it now, having had
all of these advantages (started years before, had IBM behind it, AND
was totally free of "the traditional C mindset", which was very far from
traditional at the time, particularly within IBM...!)...?
Googlefight is a good site for this kind of comparisons... try:
<http://www.googlefight.com/cgi-bin/c...python&q2=rexx&B1=Make+a+fight%21&compare=1&langue=us>
and you'll see...:
"""
Number of results on Google for the keywords python and rexx:
python
(10 300 000 results)
versus
rexx
( 419 000 results)
The winner is: python
"""
Not just "the winner", an AMAZING winner -- over TWENTY times more
popular, despite all of Rexx's advantages! And while there are no doubt
many fascinating components to this story, a key one is among the pearls
of wisdom you can read by doing, at any Python interactive prompt:

>>> import this
and it is: "practicality beats purity". Rexx has always been rather
puristic in its adherence to its principles; Python is more pragmatic.
It turns out that this is worth a lot in the real world. Much the same
way, say, C ground PL/I into the dust. Come to think of it, Python's
spirit is VERY close to C (4 and 1/2 of the 5 principles listed as
"the spirit of C" in the C ANSI Standard's introduction are more closely
followed by Python than by other languages which borrowed C's syntax,
such as C++ or Java), while Rexx does show some PL/I influence (not
surprising for an IBM-developed language, I guess).
Richard Gabriel's famous essay on "Worse is Better", e.g. at
<http://www.jwz.org/doc/worse-is-better.html>, has more, somewhat bitter
reflections in the same vein.
Python never had any qualms in getting outside the "directly supported
in hardware" boundaries, mind you. Dictionaries and unbounded precision
integers are (and have long been) Python mainstays, although neither the
hardware nor the underlying C platform has any direct support for
either. For non-integer computations, though, Python has long been well
served by relying on C, and nowadays typically the HW too, to handle
them, which implied the use of floating-point; and leaving the messy
business of implementing the many other possibly useful kinds of
non-integer arithmetic to third-party extensions (many in fact written
in Python itself -- if you're not in a hurry, they're fine, too).
With Python 2.4, somebody finally felt enough of an itch regarding the
issue of getting support for decimal arithmetic in the Python standard
library to go to the trouble of scratching it -- as opposed to just
spouting off on a mailing list, or even just implementing what they
personally needed as just a third-party extension (there are _very_ high
hurdles to jump, to get your code into the Python standard library, so
it needs strong motivation to do so as opposed to just releasing your
own extension to the public).
> states, this problem is due to the "underlying C platform". I just
> find it funny how a $20 calculator can be more accurate than Python
> running on a $1000 Intel machine.
You can get a calculator much cheaper than that these days (and "intel
machines" not too out of the mainstream for well less than half, as well
as several times, your stated price). It's pretty obvious that the
price of the hardware has nothing to do with that "_CAN_ be more
accurate" issue (my emphasis) -- which, incidentally, remains perfectly
true even in Python 2.4: it can be less, more, or just as accurate as
whatever calculator you're targeting, since the precision of decimal
computation is one of the aspects you can customize specifically...
Alex
On Sun, 19 Sep 2004 07:05:50 +0000, Chris S. wrote:
> That's nonsense. My 7-year old TI-83 performs that calculation just
> fine,
No, it doesn't. Your calculator is lying to you because it (correctly in
this case) expects that you want it to.
You need to educate yourself on how computers do math before passing such
uninformed judgments. http://www.apa.org/journals/psp/psp7761121.html
On 2004-09-19, Chris S. <ch*****@NOSPAM.udel.edu> wrote:
> Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
> arithmetic is meant for.
<boggle>
> Any decimal can be represented by a fraction, yet not all fractions
> can be represented by decimals. My point is that such simple
> accuracy should be supported out of the box.

It is. Just not with floating point.

> So are our brains, yet we somehow manage to compute 12.10 + 8.30
> correctly using nothing more than simple skills developed in
> grade-school. You could theoretically compute an infinitely long
> equation by simply operating on single digits, yet Python, with all
> of its resources, can't overcome this hurdle?

Sure it can.

> However, I understand Python's limitation in this regard. This
> inaccuracy stems from the traditional C mindset, which typically
> dismisses any approach not directly supported in hardware. As the
> FAQ states, this problem is due to the "underlying C platform". I
> just find it funny how a $20 calculator can be more accurate than
> Python running on a $1000 Intel machine.
You're clueless on so many different points, I don't even know
where to start...
--
Grant Edwards grante Yow! I'm also pre-POURED
at pre-MEDITATED and
visi.com pre-RAPHAELITE!!
A nice thoughtful answer Alex, but possibly wasted, as it's been
suggested that he is just a troll. (Note his assertion that Pi=22/7
in one post and the assertion that it is just a common approximation
in another, and this in a thread about numeric imprecision.)
Gary Herron
On Sunday 19 September 2004 09:41 am, Alex Martelli wrote: Chris S. <ch*****@NOSPAM.udel.edu> wrote: ...
Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
Of course it doesn't. What a silly assertion.
arithmetic is meant for. Any decimal can be represented by a fraction,
And pi can't be represented by either (if you mean _finite_ decimals and fractions).
yet not all fractions can be represented by decimals. My point is that such simple accuracy should be supported out of the box.
In Python 2.4, decimal computations are indeed "supported out of the box", although you do explicitly have to request them (the default remains floating-point). In 2.3, you have to download and use any of several add-on packages (decimal computations and rational ones have very different characteristics, so you do have to choose) -- big deal.
While I'd love to compute with all those numbers in infinite precision, we're all stuck with FINITE sized computers, and hence with the inaccuracies of finite representations of numbers.
So are our brains, yet we somehow manage to compute 12.10 + 8.30 correctly using nothing more than simple skills developed in
Using base 10, sure. Or, using fractions, even something that decimals would not let you compute finitely, such as 1/7+1/6.
grade-school. You could theoretically compute an infinitely long equation by simply operating on single digits,
Not in finite time, you couldn't (excepting a few silly cases where the equation is "infinitely long" only because of some rule that _can_ be finitely expressed, so you don't even have to LOOK at all the equation to solve [which is what I guess you mean by "compute"...?] it -- if you have to LOOK at all of the equation, and it's infinite, you can't get done in finite time).
yet Python, with all of its resources, can't overcome this hurdle?
The hurdle of decimal arithmetic, you mean? Download Python 2.4 and play with decimal to your heart's content. Or do you mean fractions? Then download gmpy and ditto. There are also packages for symbolic computation and even more exotic kinds of arithmetic.
In practice, with the sole exception of monetary computations (which may often be constrained by law, or at the very least by customary practice), there is no real-life use in which the _accuracy_ of floating point isn't ample. There are nevertheless lots of traps in arithmetic, but switching to forms of arithmetic different from float doesn't really make all the traps magically disappear, of course.
However, I understand Python's limitation in this regard. This inaccuracy stems from the traditional C mindset, which typically dismisses any approach not directly supported in hardware. As the FAQ
Ah, I see, a case of "those who can't be bothered to learn a LITTLE history before spouting off" etc etc. Python's direct precursor, the ABC language, used unbounded-precision rationals. As a result (obvious to anybody who bothers to learn a little about the inner workings of arithmetic), the simplest-looking string of computations could easily consume all the memory at your computer's disposal, and then some, and apparently unbounded amounts of time. It turned out that users object, most of the time, to having some apparently trivial computation take hours, rather than seconds, in order to be unboundedly precise rather than, say, precise to "just" a couple hundred digits (far more digits than you need to count the number of atoms in the Galaxy). So, unbounded rationals as a default are out -- people may sometimes SAY they want them, but in fact, in an overwhelming majority of the cases, they actually do not (oh yes, people DO lie, first of all to themselves:-).
As for decimals, that's what a very-high level language aiming for a niche very close to Python used from the word go. It got started WAY before Python -- I was productively using it over 20 years ago -- and had the _IBM_ brand on it, which at the time pretty much meant the thousand-pounds gorilla of computers. So where is it now, having had all of these advantages (started years before, had IBM behind it, AND was totally free of "the traditional C mindset", which was very far from traditional at the time, particularly within IBM...!)...?
Googlefight is a good site for this kind of comparisons... try:
<http://www.googlefight.com/cgi-bin/c...python&q2=rexx &B1=Make+a+fight%21&compare=1&langue=us>
and you'll see...: """ Number of results on Google for the keywords python and rexx:
python (10 300 000 results) versus rexx ( 419 000 results)
The winner is: python """
Not just "the winner", an AMAZING winner -- over TWENTY times more popular, despite all of Rexx's advantages! And while there are no doubt many fascinating components to this story, a key one is among the pearls
of wisdom you can read by doing, at any Python interactive prompt: >>> import this
and it is: "practicality beats purity". Rexx has always been rather puristic in its adherence to its principles; Python is more pragmatic. It turns out that this is worth a lot in the real world. Much the same way, say, C ground PL/I into the dust. Come to think of it, Python's spirit is VERY close to C (4 and 1/2 half of the 5 principles listed as "the spirit of C" in the C ANSI Standard's introduction are more closely followed by Python than by other languages which borrowed C's syntax, such as C++ or Java), while Rexx does show some PL/I influence (not surprising for an IBM-developed language, I guess).
Richard Gabriel's famous essay on "Worse is Better", e.g. at <http://www.jwz.org/doc/worse-is-better.html>, has more, somewhat bitter reflections in the same vein.
Python never had any qualms in getting outside the "directly supported in hardware" boundaries, mind you. Dictionaries and unbounded precision integers are (and have long been) Python mainstays, although neither the hardware nor the underlying C platform has any direct support for either. For non-integer computations, though, Python has long been well served by relying on C, and nowadays typically the HW too, to handle them, which implied the use of floating-point; and leaving the messy business of implementing the many other possibly useful kinds of non-integer arithmetic to third-party extensions (many in fact written in Python itself -- if you're not in a hurry, they're fine, too).
With Python 2.4, somebody finally felt enough of an itch regarding the issue of getting support for decimal arithmetic in the Python standard library to go to the trouble of scratching it -- as opposed to just spouting off on a mailing list, or even just implementing what they personally needed as just a third-party extension (there are _very_ high hurdles to jump, to get your code into the Python standard library, so it needs strong motivation to do so as opposed to just releasing your own extension to the public).
states, this problem is due to the "underlying C platform". I just find it funny how a $20 calculator can be more accurate than Python running on a $1000 Intel machine.
You can get a calculator much cheaper than that these days (and "intel machines" not too out of the mainstream for well less than half, as well as several times, your stated price). It's pretty obvious that the price of the hardware has nothing to do with that "_CAN_ be more accurate" issue (my emphasis) -- which, incidentally, remains perfectly true even in Python 2.4: it can be less, more, or just as accurate as whatever calculator you're targeting, since the precision of decimal computation is one of the aspects you can customize specifically...
Alex
Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
... The issue here is that Python's behavior confuses the hell out of some new users. There is a separate area of confusion, that
a = 2 / 3
sets a to 0, and to clear that up, the // operator was introduced and Python 3.0 will supposedly treat / as floating-point division even when both operands are integers. That doesn't solve the also very common confusion that (1.0/3.0)*3.0 = 0.99999999. Rational arithmetic can solve that.
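Today this is easy to demonstrate with the standard-library fractions module (which postdates this thread); exact rational arithmetic removes that particular surprise:

```python
from fractions import Fraction

# (1/3)*3 is exactly 1 when the third is held as a rational.
third = Fraction(1, 3)
print(third * 3 == 1)                    # True

# Sums no finite decimal can hold exactly are also exact:
print(Fraction(1, 7) + Fraction(1, 6))   # 13/42
```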
Yes, but applying rational arithmetic by default might slow some
computations far too much for beginners' liking! My favourite for
Python 3.0 would be to have decimals by default, with special notations
to request floats and rationals (say '1/3r' for a rational, '1/3f' for a
float, '1/3' or '1/3d' for a decimal with some default parameters such
as number of digits). This is because my guess is that most naive users
would _expect_ decimals by default...
Yes, with rational arithmetic, it will still be true that sqrt(5.)**2.0 doesn't quite equal 5, but hardly anyone ever complains about that.
And yes, there are languages that can do exact arithmetic on arbitrary algebraic numbers, but they're normally not used for general-purpose programming.
Well, you can pretty easily use constructive reals with Python, see for
example <http://more.btexact.com/people/briggsk2/XR.html> -- that's a
vastly vaster set than just algebraic numbers. If we DO want precision,
after all, why should sqrt(5) be more important than log(3)?
Alex
Gary Herron <gh*****@islandtraining.com> wrote:
... irrational numbers like sqrt(2) and transcendental numbers like PI? Sqrt is a fair criticism, but Pi equals 22/7,
What? WHAT? Are you nuts? Pi and 22/7 are most certainly not equal. They don't even share three digits beyond the decimal point. (Can you really be that ignorant about numbers and expect to contribute intelligently to a discussion about numbers? Pi is a non-repeating and non-ending number in base 10 or any other base.)
Any _integer_ base -- you can find infinitely many irrational bases in
which pi has repeating or terminating expansion (for example, you could
use pi itself as a base;-). OK, OK, I _am_ being silly!-)
If you are happy doing calculations with decimal numbers like 12.10 + 8.30, then the Decimal package may be what you want, but that fails as soon as you want 1/3.
But it fails in exactly the same way as a cheap calculator of the same
precision, and some people just have a fetish for that.
But then you could use a rational arithmetic package and get 1/3, but that would fail as soon as you needed sqrt(2) or Pi. But then you could try ... what? Can you see the pattern
Uh, "constructive reals", such as those you can find at
<http://www.hpl.hp.com/personal/Hans_Boehm/crcalc/> ...?
"Numbers are represented exactly internally to the calculator, and then
evaluated on demand to guarantee an error in the displayed result that
is strictly less than one in the least significant displayed digit. It
is possible to scroll the display to the right to generate essentially
arbitrary precision in the result." It has trig, logs, etc.
here? Any representation of the infinity of numbers on a finite computer *must* necessarily be unable to represent some (actually infinitely many) of those numbers. The inaccuracies stem from that fact.
Yes, _but_. There is after all a *finite* set of reals you can describe
(constructively and univocally) by equations that you can write finitely
with a given finite alphabet, right? So, infinitely many (and indeed
infinitely many MORE, since reals overall are _uncountably_ infinite;-)
reals are of no possible constructive interest -- if we were somehow
given one, we would have no way to verify that it is what it is claimed
to be, anyway, since no claim for it can be written finitely over
whatever finite alphabet we previously agreed to use. So, I think we
can safely restrict discourse by ignoring, at least, the _uncountably_
infinite aspects of reals and sticking to some "potentially
constructively interesting" subset that is _countably_ infinite.
At this point, the theoretical problems aren't much worse than those you
meet with, say, integers, or just rationals, etc. Sure, you can't
represent any but a finite subset of integers (or rationals, etc) in a
finite computer _in a finite time_, yet that implies no _inaccuracy_
whatsoever -- specify your finite alphabet and the maximum size of
equation you want to be able to write, and I'll give you the specs for
how big a computer I will need to serve your needs. Easy!
A "constructive reals" library able to hold and manipulate all reals
that can be described as the sum of convergent series such that the Nth
term of the series is a ratio of polynomials in N whose tuples of
coefficients fit comfortably in memory (with space left over for some
computation), for example, would amply suffice to deal with all commonly
used 'transcendentals', such as the ones arising from trigonometry,
logarithms, etc, and many more besides. (My memories of arithmetic are
SO rusty I don't even recall if adding similarly constrained continued
fractions to the mix would make any substantial difference, sigh...).
If you ask for some sufficiently big computation you may happen to run
out of memory -- not different from what happens if you ask for a
raising-to-power between two Python long's which happen to be too big
for your computer's memory. Buy more memory, move to a 64-bit CPU (and
a good OS for it), whatever: it's not a problem of _accuracy_, anyway.
It MAY be a problem of TIME -- if you're in any hurry, and have upgraded
your computer to have a few hundred terabytes of memory, you MAY be
disappointed at how deucedly long it takes to get that multiplication
between longs that just happened to overflow the memory resources of
your previous machine which had just 200 TB. If you ask for an infinite
representation of whatever, it will take an infinite time for you to see
it, of course -- your machine will keep emitting digits at whatever
rate, even very fast, but if the digits never stop coming then you'll
never stop staring at them able to truthfully say "I've seen them ALL".
But that's an effect that's easy to get even with such a simple
computation as 1/3... it may easily be held with perfect accuracy inside
the machine, just by using rationals, but if you want to see it as a
decimal number you'll never be done. Similarly for sqrt(2) and so on.
But again it's not a problem of _accuracy_, just one of patience;-). If
the machine is well programmed you'll never see even one wrong digit, no
matter how long you keep staring and hoping to catch an accuracy issue.
The reason we tend to use limited accuracy more often than strictly
needed is that we typically ARE in a hurry. E.g., I have measured the
radius of a semispherical fishbowl at 98.13 cm and want to know how much
water I need to fetch to fill it: I do NOT want to spend eons checking
out the millionth digit -- I started with a measurement that has four or
so significant digits (way more than _typical_ real-life measurements in
most cases, btw), it's obvious that I'll be satisfied with just a few
more significant digits in the answer. In fact, Python's floats are
_just fine_ for just about any real-life computation, excluding ones
involving money (which may often be constrained by law or at least by
common practice) and some involving combinatorial arithmetic (and thus,
typically, ratios between very large integers), but the latter only
apply to certain maniacs trying to compute stuff about games (such as,
yours truly;-).
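Alex's fishbowl example, worked through in plain floats (hemisphere volume V = (2/3)*pi*r**3; the 98.13 cm measurement is his):

```python
import math

r = 0.9813                               # radius: 98.13 cm, in metres
volume = (2.0 / 3.0) * math.pi * r ** 3  # hemisphere volume, cubic metres
litres = volume * 1000.0
print(round(litres))                     # about 1979 litres of water
```

Plain double precision carries roughly 16 significant digits here, a dozen more than the measurement justifies.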
So while a calculator will fool you into believing it is accurate when it is not, it is Python's design decision to not cater to fools.
Well put (+1 QOTW). But constructive reals are still COOL, even if
they're not of much practical use in real life;-).
Alex
Heiko Wundram <he*****@ceosg.de> wrote: Am Sonntag, 19. September 2004 09:39 schrieb Gary Herron: That's called rational arithmetic, and I'm sure you can find a package that implements it for you. However what would you propose for irrational numbers like sqrt(2) and transcendental numbers like PI?
Just as an example, try gmpy. Unlimited precision integer and rational arithmetic. But don't think that they implement anything more than the four basic operations on rationals, because algorithms like sqrt and pow become so slow, that nobody sensible would use them, but rather just stick to the binary arithmetic the computer uses (although this might have some minor effects on precision, but these can be bounded).
Guilty as charged, but with a different explanation. I don't support
raising a rational to a rational exponent, not because it would "become
slow", but because it could not return a rational result in general.
When it CAN return a rational result, I'm happy as a lark to support it:
>>> x = gmpy.mpq(4,9)
>>> x ** gmpy.mpq(1,2)
mpq(2,3)
I.e. raising to the power 1/2 (which is the same as saying, taking the
square root) is supported in gmpy only when the base is a rational which
IS the square of some other rational -- and similarly for other
fractional exponents.
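A sketch of the check such a package must make (a hypothetical helper, written with today's fractions module and math.isqrt, not gmpy's actual code): since a Fraction is stored in lowest terms, it has a rational square root exactly when numerator and denominator are both perfect squares.

```python
import math
from fractions import Fraction

def exact_sqrt(q):
    """Return sqrt(q) as a Fraction, or raise if it is irrational."""
    n, d = math.isqrt(q.numerator), math.isqrt(q.denominator)
    if n * n != q.numerator or d * d != q.denominator:
        raise ValueError("no exact rational square root")
    return Fraction(n, d)

print(exact_sqrt(Fraction(4, 9)))   # 2/3, matching gmpy's mpq(2,3)
```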
Say you're content with finite precision, and your problem is that
getting only a few dozen bits' worth falls far short of your ambition,
as you want _thousands_. Well, you don't have to "stick to the
arithmetic your computer uses", with its paltry dozens of bits' worth of
precision -- you can have just as many as you wish. For example:
For example...:
>>> x=gmpy.mpf(2, 2222)
>>> x
mpf('2.e0',2222)
>>> y=gmpy.fsqrt(x)
>>> y
mpf('1.41421356237309504880168872420969807856967187537694807317667973799
073247846210703885038753432764157273501384623091229702492483605585073721
264412149709993583141322266592750559275579995050115278206057147010955997
160597027453459686201472851741864088919860955232923048430871432145083976
260362799525140798968725339654633180882964062061525835239505474575028775
996172983557522033753185701135437460340849884716038689997069900481503054
402779031645424782306849293691862158057846311159666871301301561856898723
723528850926486124949771542183342042856860601468247207714358548741556570
696776537202264854470158588016207584749226572260020855844665214583988939
4437092659180031138824646815708263e0',2222)
Of course, this still has bounded accuracy (gmpy doesn't do constructive
reals...):
>>> x-(y*y)
mpf('1.21406321925474744732602075007044436621136403661789690072865954475
776298522118244419272674806546441529118557492550101271984681584381130555
892259118178248950179953390159664508815540959644741794226362686473376767
055696411211498987561487078708187675060063022704148995680107509652317604
479364576039827518913272446772069713871266672454279184421635785339332972
791970690781583948212784883346298572710476658954707852342842150889381157
563045936231138515406709376167997169879900784347146377935422794796191261
624849740964942283842868779082292557869166024095318326003777296248197487
885858223175591943112711481319695526039760318353849240080721341697065981
8471278600062647147473105883272095e-674',2222)
i.e., there IS an error of about 10 to the minus 674 power, i.e. a
precision of barely more than a couple of thousands of bits -- but then,
that IS what you asked for, with that '2222'!-)
Computing square roots (or whatever) directly on rationals would be no
big deal, were there demand -- you'd still have to tell me what kind of
loss of accuracy you're willing to tolerate, though. I personally find
it handier to compute with mpf's (high-precision floats) and then turn
the result into rationals with a Stern-Brocot algorithm...:
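gmpy's f2q isn't in the standard library, but Fraction.limit_denominator performs a similar best-rational search (it walks the continued-fraction convergents, much as a Stern-Brocot descent does); a sketch with today's decimal and fractions modules:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 50                    # 50-digit working precision
root2 = Decimal(2).sqrt()                 # high-precision sqrt(2)

# Best rational approximation with denominator bounded by 10**6:
best = Fraction(root2).limit_denominator(10**6)
print(best)
print(abs(best * best - 2) < Fraction(1, 10**10))   # True: tiny residual error
```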
>>> z=gmpy.f2q(y,-2000)
>>> z
mpq(87787840362947056221389837099888119784184900622573984346903816053706
510038371862119498502008227696594958892073744394524220336403937617412073
521953746033135074321986796669379393248887099312745495535792954890191437
233230436746927180393035328284490481153041398619700720943077149557439382
34750528988254439L,62075377226361968890337286609853165704271551096494666
544033975362265504696870569409265091955693911548812764050925469857560059
623789287922026078165497542602022607603900854658753038808290787475128940
694806084715129308978288523742413573494221901565588452667869917019091383
93125847495825105773132566685269L)
If you need the square root of two as a rational number with an error of
less than 1 in 2**-2000, I think this is a reasonable approach. As for
speed, this is quite decently usable in an interactive session in my
cheap and cheerful Mac iBook 12" portable (not the latest model, which
is quite a bit faster, much less the "professional" Powerbooks -- I'm
talking about an ageing, though good-quality, consumer-class machine!).
gmpy (or to be more precise the underlying GMP library) runs optimally
on AMD Athlon 32-bit processors, which happen to be dirt cheap these
days, so a cleverly-purchased 300-dollars desktop Linux PC using such an
Athlon chip would no doubt let you use way more than these humble couple
thousand bits for such interactive computations while maintaining a
perfectly acceptable interactive response time.
Alex
Gary Herron <gh*****@islandtraining.com> wrote: A nice thoughtful answer Alex, but possibly wasted, as it's been suggested that he is just a troll. (Note his assertion that Pi=22/7 in one post and the assertion that it is just a common approximation in another, and this in a thread about numeric imprecision.)
If he's not a troll, he _should_ be -- it's just too sad to consider the
possibility that somebody is really that ignorant and arrogant at the
same time (although, tragically, human nature is such as to make that
entirely possible). Nevertheless, newsgroups and mailing lists have an
interesting characteristic: no "thoughtful answer" need ever be truly
wasted, even if the person you're answering is not just a troll, but a
robotized one, _because there are other readers_ which may find
interest, amusement, or both, in that answer. On a newsgroup, or
very-large-audience mailing list, one doesn't really write just for the
person you're nominally answering, but for the public at large.
Alex
On Sun, 19 Sep 2004 07:05:50 GMT, "Chris S." <ch*****@NOSPAM.udel.edu>
declaimed the following in comp.lang.python: That's nonsense. My 7-year old TI-83 performs that calculation just fine, and you're telling me, in this day and age, that Python running on
Most calculators use 1) BCD, and 2) they keep guard digits
(about two extra digits) that are not displayed. Using the guard digits,
the calculator performs rounding to the display resolution.
1.0 / 3.0 => 0.3333333|   (displayed)
             0.3333333|33 (internal)
    * 3.0 => 0.9999999|99 (internal)
             1.0          (displayed after rounding the guards)
Strangely, HP's tended not to hold guard digits... My HP-48sx
gives the all-9s result, and I recall older models also not having
guards.
Most that use guard digits can be determined by 1) the example
sequence returning 1.0, and 2) do the 1/3, then manually subtract the
value you see on the display -- often you'll get something like
3.3E-<something> which were the hidden guard digits.
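Dennis's guard-digit scheme can be mimicked with a decimal context (an illustrative sketch only, not any particular calculator's firmware): carry two extra digits internally, round to the display width at the end.

```python
from decimal import Decimal, getcontext

getcontext().prec = 10             # 8 "display" digits + 2 guard digits

third = Decimal(1) / Decimal(3)    # 0.3333333333 internally
result = third * 3                 # 0.9999999999 internally

# Rounding to the display's resolution hides the guard digits:
shown = result.quantize(Decimal('1e-7'))
print(result)                      # 0.9999999999
print(shown)                       # 1.0000000
```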
a modern 32-bit processor can't even compute simple decimals accurately? Don't defend bad code.
Before you accuse Python of bad code (you might as well accuse
Intel and AMD, since they make the floating point processor in most
machines), take the time to learn how Calculators function internally.
My college actually offered a class on using scientific calculators,
including details on guard digits, arithmetic vs algebraic vs RPN, etc.
Dennis Lee Bieber <wl*****@ix.netcom.com> wrote: Strangely, HP's tended not to hold guard digits... My HP-48sx gives the all-9s result, and I recall older models also not having guards.
Nothing strange there -- HP's calculators were squarely aimed at
scientists and engineers, who are supposed to know what they're doing
when it comes to numeric computation (they mostly _don't_, but they like
to kid themselves that they do!-).
Alex
al*****@yahoo.com (Alex Martelli) writes: Yes, but applying rational arithmetic by default might slow some computations far too much for beginners' liking!
I dunno, lots of Lisp dialects do rational arithmetic by default. Well, you can pretty easily use constructive reals with Python, see for example <http://more.btexact.com/people/briggsk2/XR.html> -- that's a vastly vaster set than just algebraic numbers. If we DO want precision, after all, why should sqrt(5) be more important than log(3)?
I don't know that it's generally tractable to do exact computation on
constructive reals. How do you implement comparison (<, >, ==)?
Michel Claveau - abstraction méta-galactique non triviale en fuite perpétuelle. <un************@msupprimerlepoint.claveauPOINTcom> wrote in message news:<ci**********@news-reader5.wanadoo.fr>... Hi !
No. BCD uses a different approach: two digits per byte. The calculation is basically integer arithmetic; it is the library that manages the decimal point.
There are no rounding problems.
Yes, there are. Rounding problems don't occur in the contrived
examples that show that "BCD is better than binary", but they do
occur, especially with division.
Alex Martelli wrote: If he's not a troll, he _should_ be -- it's just too sad to consider the possibility that somebody is really that ignorant and arrogant at the same time (although, tragically, human nature is such as to make that entirely possible). Nevertheless, newsgroups and mailing lists have an interesting characteristic: no "thoughtful answer" need ever be truly wasted, even if the person you're answering is not just a troll, but a robotized one, _because there are other readers_ which may find interest, amusement, or both, in that answer. On a newsgroup, or very-large-audience mailing list, one doesn't really write just for the person you're nominally answering, but for the public at large.
Exactly. One could wonder if more timid accusations would have
engendered such insightful and accurate responses. However, I do
apologize if I appeared trollish. Thank you for your contributions.
Gary Herron <gh*****@islandtraining.com> wrote in message news:<ma**************************************@pyt hon.org>... On Sunday 19 September 2004 01:00 am, Chris S. wrote: Gary Herron wrote: That's called rational arithmetic, and I'm sure you can find a package that implements it for you. However what would you propose for irrational numbers like sqrt(2) and transcendental numbers like PI? Sqrt is a fair criticism, but Pi equals 22/7,
What? WHAT? Are you nuts? Pi and 22/7 are most certainly not equal. They don't even share three digits beyond the decimal point.
There are, of course, reasonably accurate rational approximations of
pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532
(9 decimal places), or 3126535/995207 (11 decimal places). Also, the
IEEE 754 double-precision representation of pi is equal to the
rational number 884279719003555/281474976710656.
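In current Python, the exact rational value of the double nearest to pi can be read directly off the float:

```python
import math

num, den = math.pi.as_integer_ratio()
print(num, den)          # 884279719003555 281474976710656
print(den == 2 ** 48)    # True: the denominator is a power of two,
                         # as it must be for a binary float
```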
...Pi is a non-repeating and non-ending number in base 10 or any other base.)
It has a terminating representation in base pi ;-)
But you're right that it has a non-repeating and non-ending
representation in any _useful_ base.
If you are happy doing calculations with decimal numbers like 12.10 + 8.30, then the Decimal package may be what you want, but that fails as soon as you want 1/3. But then you could use a rational arithmetic package and get 1/3, but that would fail as soon as you needed sqrt(2) or Pi.
True, but who says we need to use the same representation for all
numbers. Python _could_ use rationals in situations where they'd work
(like int/int division), and only revert to floating-point when
necessary (like math.sqrt and math.pi).
And BTW, your calculator is not, in general, more accurate than the modern IEEE binary hardware representation of numbers used on most of today's computers.
In general, it's _less_ accurate. In IEEE 754 double-precision,
machine epsilon is 2**-53 (about 1e-16), but TI's calculators have a
machine epsilon of 1e-14. Thus, in general, IEEE 754 gives you about
2 more digits of precision than a calculator.
It is more accurate on only a select subset of all numbers,
Right. In most cases, base 10 has no inherent advantage. The number
1.41 is a _less_ accurate representation of sqrt(2) than 0x1.6A. The
number 3.14 is a less accurate representation of pi than 0x3.24. And
it's not inherently more accurate to say that my height is 1.80 meters
rather than 0x1.CD meters or 5'11".
Base 10 _is_ more accurate for monetary amounts, and for this reason I
agreed with the addition of a decimal class. But it would be a
mistake to use decimal arithmetic, which has a performance
disadvantage with no accuracy advantage, in the general case.
Paul Rubin <http://ph****@NOSPAM.invalid> wrote: al*****@yahoo.com (Alex Martelli) writes: Yes, but applying rational arithmetic by default might slow some computations far too much for beginners' liking!
I dunno, lots of Lisp dialects do rational arithmetic by default.
And...? What fractions of beginners get exposed to Lisp as their first
language just love the resulting precision/speed tradeoff...? I think
Paul Graham's "Worse is Better" article applies quite well here... Well, you can pretty easily use constructive reals with Python, see for example <http://more.btexact.com/people/briggsk2/XR.html> -- that's a vastly vaster set than just algebraic numbers. If we DO want precision, after all, why should sqrt(5) be more important than log(3)?
I don't know that it's generally tractable to do exact computation on constructive reals. How do you implement comparison (<, >, ==)?
Well, if you can generate decimal representations on demand (and you'd
better, as the user might ask for such output at any time with any
a-priori unpredictable number of digits), worst case you can compare
them lexicographically, one digit at a time, until you find a different
digit (assuming identical signs and integer parts) -- except that equal
numbers would not terminate by this procedure. Briggs' implementation
finesses the issue by comparing no more than k significant digits, 1000
by default;-)
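Alex's k-digit comparison can be sketched with lazy digit generators (a toy sketch; digits_of and the cutoff are illustrative, not Briggs' actual implementation):

```python
from itertools import islice

def digits_of(num, den):
    """Lazily yield the decimal digits of num/den, assumed in [0, 1)."""
    while True:
        num *= 10
        yield num // den
        num %= den

def compare(a, b, k=1000):
    """Lexicographic comparison of two digit streams, giving up
    (and declaring them 'equal') after k digits."""
    for da, db in islice(zip(a, b), k):
        if da != db:
            return -1 if da < db else 1
    return 0

print(compare(digits_of(1, 3), digits_of(333, 1000)))  # 1: 1/3 > 0.333
print(compare(digits_of(1, 3), digits_of(1, 3)))       # 0: indistinguishable
```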
Alex
On Sun, 19 Sep 2004 20:31:48 +0200, al*****@yahoo.com (Alex Martelli) wrote: Dennis Lee Bieber <wl*****@ix.netcom.com> wrote:
Strangely, HP's tended not to hold guard digits... My HP-48sx gives the all-9s result, and I recall older models also not having guards.
Nothing strange there -- HP's calculators were squarely aimed at scientists and engineers, who are supposed to know what they're doing when it comes to numeric computation (they mostly _don't_, but they like to kid themselves that they do!-).
ISTM we humans mostly persist in ignoring seemingly inconsequential flaws in our
mental maps of reality until we are sufficiently surprised or fail too long
to find something dear to us (whether numerical results, a surfing beach,
a better map, a love, a purpose, or ultimate enlightenment ;-)
Regards,
Bengt Richter
al*****@yahoo.com (Alex Martelli) writes: Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
al*****@yahoo.com (Alex Martelli) writes: Yes, but applying rational arithmetic by default might slow some computations far too much for beginners' liking!
I dunno, lots of Lisp dialects do rational arithmetic by default.
And...? What fractions of beginners get exposed to Lisp as their first language just love the resulting precision/speed tradeoff...? I think Paul Graham's "Worse is Better" article applies quite well here...
There is not much of a precision/speed tradeoff in Common Lisp, you can
use fractional numbers (which give you exact results with operations
+, -, * and /) internally and round them off to decimal before
display. With the OP's example:
(+ 1210/100 830/100)
102/5
(coerce * 'float)
20.4
Integers can have an unlimited number of digits, but the precision of
floats and reals is still limited to what the hardware can do, so if
you want to display for instance 2/3 with lots of decimals, you have
to multiply it first and insert the decimal point yourself, like in
(format t ".~d" (round (* 2/3 10000000000000000000)))
.6666666666666666667
Of course, long integers (bignums) are slower than short (fixnums), but
with automatic conversion to and fro, you pay the penalty only when
you need it.
On Sun, 19 Sep 2004 14:41:53 +0200, Peter Otten <__*******@web.de> wrote: Paul Rubin wrote:
Peter Otten <__*******@web.de> writes: Starting with Python 2.4 there will be the 'decimal' module supporting "arithmetic the way you know it":
>>> from decimal import *
>>> Decimal("12.10") + Decimal("8.30")
I haven't tried 2.4 yet. After
The author is currently working on an installer, but just dropping it into 2.3's site-packages should work, too.
a = Decimal("1") / Decimal("3")
b = a * Decimal("3")
print b
What happens? Is that arithmetic as the way I know it?
Decimal as opposed to rational:
>>> from decimal import *
>>> Decimal(1)/Decimal(3)
Decimal("0.3333333333333333333333333333")
>>> 3*_
Decimal("0.9999999999999999999999999999")
Many people can cope with the inaccuracy induced by base 10 representations and are taken by surprise by base 2 errors. But you are right I left too much room for interpretation.
I hacked a little rational + decimal exponent representation based toy a while
back. The original post had a bug, which someone pointed out and I posted a
followup fix for, but the revised version was not posted. But I can if someone
is interested.
>>> from ut.exactdec import ED
>>> ED(1)/ED(3)
ED('1/3')
>>> 3*_
ED('1')
If you give it a float, it wants to know how many decimals you mean:
>>> ED(1./3)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "c:\pywk\ut\exactdec.py", line 93, in __init__
raise ValueError(
ValueError: Specify decimals for least significant digit of 10**(-decimals)
(decimals may also be specified as 'all' to capture all bits of float)
>>> ED(1./3, 'all')
ED('0.333333333333333314829616256247390992939472198486328125')
If you give it a string literal, it takes it as accurate, but you can round it
to create a new accurate number:
>>> ED('1/3', 54)
ED('0.333333333333333333333333333333333333333333333333333333')
>>> ED('1/3', 60)
ED('0.333333333333333333333333333333333333333333333333333333333333')
That's an accurate number that has all zeroes to the right of those 60 3's:
>>> ED('1/3', 60)*3
ED('0.999999999999999999999999999999999999999999999999999999999999')
If you don't round, you get a fully accurate result:
>>> ED('1/3')*3
ED('1')
It's interesting to look at pi:
>>> import math
>>> math.pi
3.1415926535897931
>>> ED(math.pi, 'all')
ED('3.141592653589793115997963468544185161590576171875')
>>> ED(3.1415926535897931, 'all')
ED('3.141592653589793115997963468544185161590576171875')
The same exact decimal value gets created from repr(math.pi), which is
'3.1415926535897931', meaning they both have the same floating point hardware
representation, but the short decimal literal is sufficient to set all the bits
right even though it doesn't represent the fully exact value in decimal.
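As an aside (not part of the ED toy): the exact decimal expansion ED prints can be reproduced in modern Python, where Decimal accepts a float directly (2.7/3.2+):

```python
import math
from decimal import Decimal

# Every finite binary float has an exact, finite decimal expansion,
# and Decimal(float) displays it in full.
print(Decimal(math.pi))
# -> 3.141592653589793115997963468544185161590576171875

# The short repr is nonetheless enough to reconstruct the identical float.
print(float(repr(math.pi)) == math.pi)  # True
```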
Economy courtesy of the Timbot I think ;-)
I don't know what the rules in Decimal are for stage-wise rounding vs keeping
accuracy, but I imagine you could get the same kind of surprises that are
available in binary from floating point, e.g.,
>>> from ut.exactdec import ED

Floating point:
>>> acc = 1.0
>>> for i in xrange(100): acc += 1e-300
...
>>> acc
1.0
That really is exactly 1.0:
>>> ED(acc, 'all')
ED('1')
Now the calculation accurately:
>>> ecc = ED(1)
>>> for i in xrange(100): ecc += ED('1e-300')
...
>>> ecc
ED('1.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000001')
>>> ecc-1
ED('1.0e-298')
If you add a small Decimal delta repeatedly, will it get rounded away like the floating point
version, or will accuracy get promoted, or what? Sorry, I haven't read the docs yet ;-/
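For what it's worth, a quick check with the decimal module suggests the answer is "rounded away, unless you raise the context precision" -- a sketch in modern Python syntax:

```python
from decimal import Decimal, getcontext

# Default context: 28 significant digits. Representing 1 + 1e-300 needs
# ~301 digits, so each addition rounds back to 1, like the float version.
acc = Decimal(1)
for i in range(100):
    acc += Decimal("1e-300")
print(acc == 1)  # True

# Raise the precision and the tiny deltas survive exactly.
getcontext().prec = 350
acc = Decimal(1)
for i in range(100):
    acc += Decimal("1e-300")
print(acc - 1)   # 1.00E-298
```

So Decimal does not promote accuracy automatically; precision is a property of the context, not of the operands.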
Regards,
Bengt Richter
On 19 Sep 2004 15:24:31 -0700, da*****@yahoo.com (Dan Bishop) wrote:
[...] There are, of course, reasonably accurate rational approximations of pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532 (9 decimal places), or 3126535/995207 (11 decimal places). Also, the IEEE 754 double-precision representation of pi is equal to the rational number 4503599627370496/281474976710656.
>>> divmod(4503599627370496, 281474976710656)
(16L, 0L)
a little glitch somewhere? ;-)
Others are nice though, but the last one shows up same way:
>>> print '%s\n%s' % (ED('312689/99532').round(11), ED(math.pi, 11))
ED('3.14159265362')
ED('3.14159265359')
>>> print '%s\n%s' % (ED('3126535/995207').round(13), ED(math.pi, 13))
ED('3.1415926535887')
ED('3.1415926535898')
>>> print '%s\n%s' % (ED('4503599627370496/281474976710656'), ED(math.pi, 'all'))
ED('16')
ED('3.141592653589793115997963468544185161590576171875')
Regards,
Bengt Richter
On 20 Sep 2004 00:35:33 GMT, bo**@oz.net (Bengt Richter) wrote:
[...] If you add a small Decimal delta repeatedly, will it get rounded away like the floating point version, or will accuracy get promoted, or what? Sorry, I haven't read the docs yet ;-/
I needn't have used 1e-300 to get the effect -- 1e-16 is relatively small enough:
>>> acc = 1.0
>>> for i in xrange(100): acc += 1e-16
...
>>> acc
1.0
>>> ED(acc, 'all')
ED('1')
>>> ecc = ED(1)
>>> for i in xrange(100): ecc += ED('1e-16')
...
>>> ecc
ED('1.00000000000001')
Regards,
Bengt Richter
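Decimal matches ED here even at its default precision -- a sketch (modern syntax, not from the thread):

```python
from decimal import Decimal

# Binary floats drop a 1e-16 delta: it is below half an ulp of 1.0
# (2**-53 ~= 1.11e-16), so each addition rounds back to exactly 1.0.
acc = 1.0
for i in range(100):
    acc += 1e-16
print(acc == 1.0)  # True

# But 1 + 1e-16 needs only 17 significant digits, well inside Decimal's
# default 28, so the decimal version accumulates every delta.
ecc = Decimal(1)
for i in range(100):
    ecc += Decimal("1e-16")
print(ecc == Decimal("1.00000000000001"))  # True
```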
Thanks to all for info here. Sorry for inadvertently creating such a
long thread.
On 19 Sep 2004 15:24:31 -0700, Dan Bishop wrote: There are, of course, reasonably accurate rational approximations of pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532 (9 decimal places), or 3126535/995207 (11 decimal places). Also, the IEEE 754 double-precision representation of pi is equal to the rational number 4503599627370496/281474976710656.
I hope not! That's equal to 16. (The double float closest to) pi is
884279719003555/281474976710656
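This correction can be checked directly in later Pythons, where floats expose their exact ratio (float.as_integer_ratio, added in 2.6):

```python
import math

# The exact rational value of the IEEE 754 double nearest to pi.
num, den = math.pi.as_integer_ratio()
print(num, den)        # 884279719003555 281474976710656
print(den == 2 ** 48)  # True: the denominator is a power of two
```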
--
Don't worry about people stealing your ideas. If your ideas are any good,
you'll have to ram them down people's throats.
-- Howard Aiken
(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))