Bytes IT Community

Floating point bug?

I've never had any call to use floating point numbers and now that I
want to, I can't!

*** Python 2.5.1 (r251:54863, May 1 2007, 17:47:05) [MSC v.1310 32
bit (Intel)] on win32. ***
>>> float(.3)
0.29999999999999999
>>> foo = 0.3
>>> foo
0.29999999999999999
>>>
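For readers hitting this for the first time: 0.3 has no finite base-2 representation, so Python stores the nearest binary fraction. In a modern Python (2.7+/3.x, where Decimal accepts a float) the exact stored value can be inspected:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value Python actually stores;
# it is slightly below 0.3, which is why repr() printed
# 0.29999999999999999 in 2.5.
print(Decimal(0.3))

# Two numbers that each carry their own rounding error need not add up
# exactly:
print(0.1 + 0.2 == 0.3)  # False
```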
Feb 13 '08 #1
135 Replies


Not a bug. All languages implementing floating point numbers have the
same issue. Some just decide to hide it from you. Please read
http://docs.python.org/tut/node16.html and particularly
http://docs.python.org/tut/node16.ht...00000000000000

Regards,
Marek
Feb 13 '08 #2

ro**********@gmail.com wrote:
I've never had any call to use floating point numbers and now that
I want to, I can't!
Ever considered phrasing your actual problem so one can help, let
alone looking at the archive for many, many postings about this
topic?

Regards,
Björn

--
BOFH excuse #66:

bit bucket overflow

Feb 14 '08 #3

Dennis Lee Bieber wrote:
On Wed, 13 Feb 2008 17:49:08 -0800, Jeff Schwab <je**@schwabcenter.com>
declaimed the following in comp.lang.python:
>If you need a pretty string for use in code:
> >>> def pretty_fp(fpnum, prec=8):
> ...     return ('%.*f' % (prec, fpnum)).rstrip('0')
> ...
> >>> pretty_fp(0.3)
'0.3'

What's wrong with just

str(0.3)
Nothing!
that's what "print" invokes, whereas the interpreter prompt is using

repr(0.3)
Thanks for pointing that out.
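To make the str()/repr() difference concrete: since Python 2.7/3.1, repr() itself uses the shortest string that round-trips, so both now print 0.3, but the old 12- and 17-digit forms are still reachable through formatting:

```python
x = 0.3
print(str(x))
# str() historically rounded to 12 significant digits:
print('%.12g' % x)   # '0.3'
# repr() historically showed 17 significant digits:
print('%.17g' % x)   # '0.29999999999999999'
```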
Feb 14 '08 #4

Dennis Lee Bieber wrote:
What's wrong with just

str(0.3)

that's what "print" invokes, whereas the interpreter prompt is using

repr(0.3)
No, print invokes the tp_print slot of the float type. Some core types
have a special handler for print. The tp_print slot is not available
from Python code and most people don't know about it. :]

Christian

Feb 14 '08 #5

Bruno Desthuilliers wrote:
I must have missed something...
Yeah, you have missed the beginning of the third sentence: "The tp_print
slot is not available from Python code". The tp_print slot is only
available in C code and is part of the C definition of a type. Hence tp_
as type.

Search for float_print and tp_print in
http://svn.python.org/view/python/tr...0567&view=auto

Christian

Feb 14 '08 #6

Bruno Desthuilliers <br********************@wtf.websiteburo.oops.com> wrote:
I must have missed something...
Perhaps you missed the part where Christian said "The tp_print slot is not
available from Python code"?
Feb 14 '08 #7

Christian Heimes wrote:
Bruno Desthuilliers wrote:
>I must have missed something...

Yeah, you have missed the beginning of the third sentence: "The tp_print
slot is not available from Python code".
oops, my bad ! I missed the "not" !-)
Feb 14 '08 #8

Christian Heimes wrote:
Dennis Lee Bieber wrote:
> What's wrong with just

str(0.3)

that's what "print" invokes, whereas the interpreter prompt is using

repr(0.3)

No, print invokes the tp_print slot of the float type. Some core types
have a special handler for print. The tp_print slot is not available
from Python code and most people don't know about it. :]
Why does print use the tp_print slot, rather than str()? Are the two
effectively redundant? If (non-repr) string representations are
frequently needed for a given type, could str() be implemented as a
reference to tp_print, via a C-language extension?
Feb 14 '08 #9

On Thu, 14 Feb 2008 15:22:41 -0200, Jeff Schwab <je**@schwabcenter.com>
wrote:
Christian Heimes wrote:
>No, print invokes the tp_print slot of the float type. Some core types
have a special handler for print. The tp_print slot is not available
from Python code and most people don't know about it. :]

Why does print use the tp_print slot, rather than str()? Are the two
effectively redundant? If (non-repr) string representations are
frequently needed for a given type, could str() be implemented as a
reference to tp_print, via a C-language extension?
As a side note, the print statement has FIVE related opcodes. Looks like
printing has been considered a very important operation...

--
Gabriel Genellina

Feb 14 '08 #10

I did try searching, but I never found what I was looking for. This
thread has been very useful and informative. Thanks for all your
help! I was able to fix my problem. :)
Feb 14 '08 #11

Jeff Schwab <je**@schwabcenter.com> wrote:
Christian Heimes wrote:
>Dennis Lee Bieber wrote:
>> What's wrong with just

str(0.3)

that's what "print" invokes, whereas the interpreter prompt is using

repr(0.3)

No, print invokes the tp_print slot of the float type. Some core types
have a special handler for print. The tp_print slot is not available
from Python code and most people don't know about it. :]

Why does print use the tp_print slot, rather than str()? Are the two
effectively redundant? If (non-repr) string representations are
frequently needed for a given type, could str() be implemented as a
reference to tp_print, via a C-language extension?
The tp_print slot is used only when printing to a C file descriptor. In
most cases where it is used it simply duplicates the str and repr
functionality but avoids building the entire output in memory. It also
takes a flag argument indicating whether it should output the str or repr,
the latter being used when rendering the content inside an object such as a
dict or list.

So for example a dict's repr builds a list containing the repr of each
key/value pair and then joins the list using a comma separator. The
tp_print simply outputs the '{', then uses tp_print to output the repr of
the key and repr of the value with appropriate separators and finally the
closing '}'. It would not surprise me if, by replacing the output of a
single large string with a lot of small calls to fputs, 'print x' could
be slower than 'print str(x)'.
Feb 14 '08 #12

That's a misconception. The decimal-module has a different base (10
instead of 2), and higher precision. But that doesn't change the fact
that it will expose the same rounding-errors as floats do - just for
different numbers.
>>> import decimal as d
>>> d = d.Decimal
>>> d("1") / d("3") * d("3")
Decimal("0.9999999999999999999999999999")
Surely you jest. Your example is exact to 28 digits. Your attempted
trick is to use a number that never ends (1/3=0.3333...). It would
only convert back to one if you had an infinite number of
significant digits. That has nothing to do with the Python decimal
module (which does what it claims). It is one of the idiosyncrasies
of the base 10 number system. Remember we are working with base 10
decimals and not fractions.
Feb 15 '08 #13

Zentrader wrote:
>That's a misconception. The decimal-module has a different base (10
instead of 2), and higher precision. But that doesn't change the fact
that it will expose the same rounding-errors as floats do - just for
different numbers.
> >>> import decimal as d
> >>> d = d.Decimal
> >>> d("1") / d("3") * d("3")
Decimal("0.9999999999999999999999999999")

Surely you jest. Your example is exact to 28 digits. Your attempted
trick is to use a number that never ends (1/3=0.3333...). It would
only convert back to one if you had an infinite number of
significant digits. That has nothing to do with the Python decimal
module (which does what it claims). It is one of the idiosyncrasies
of the base 10 number system. Remember we are working with base 10
decimals and not fractions.
Diez was not claiming that the decimal module did anything less than
what it promised. He just pointed out that the module does not support
infinitely precise floating-point arithmetic, any more than traditional
base-2 representations do. Please review the thread (the parts you
snipped) for clarification.
Feb 15 '08 #14

I disagree with this statement
<quote>But that doesn't change the fact that it will expose the same
rounding-errors as floats do - just for different numbers. </quote>
The example used has no rounding errors. For anything less than 28
significant digits it rounds to 1.0. With floats 1.0/3 yields
0.33333333333333331 <-- on my machine. Also you can compare two
decimal.Decimal() objects for equality. With floats you have to test
for a difference less than some small value. BTW, a college professor
who also wrote code for a living made this offhand remark "In general
it is best to multiply first and then divide." Good general advice.
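A tiny illustration of that offhand advice, clearest with integer division, where dividing first throws away the remainder:

```python
# Divide first: 7 // 2 == 3, so the remainder is lost before the multiply.
print(7 // 2 * 4)   # 12

# Multiply first: the remainder survives until the final division.
print(7 * 4 // 2)   # 14

# With floats the effect is subtler but real: reordering changes which
# intermediate results get rounded.
```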
Feb 15 '08 #15

On Feb 15, 3:30 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
The point is that all numbering systems with a base + precision will
have (rational) values they can't exactly represent. R\Q (the
irrationals) is of course out of the question by definition....
This 'Decimal is exact' myth has been appearing often enough that I
wonder whether it's worth devoting a prominent paragraph to in the
docs.

Mark
Feb 15 '08 #16

Lie
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)

I agree, there are some numbers that rationals can't represent
(like pi, phi, e), but these rounding problems also exist in floating
point, and rational numbers wouldn't be so easily fooled by something
like 1 / 3 * 3, and 1/10 (computer) is exactly 0.1 (human). The first
problem with rationals is that to get infinite precision, the
language has to have infinite-length integers, which Python has given
us. The second problem with rationals is keeping them in their
simplest form. This can be solved with a good GCD algorithm, which
would also be a nice addition to Python's math library.
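The "good GCD algorithm" asked for above is just Euclid's; the stdlib later grew one (fractions.gcd in 2.6, math.gcd in 3.5), but it fits in four lines:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# Reducing 6/8 to lowest terms:
num, den = 6, 8
g = gcd(num, den)
print('%d/%d' % (num // g, den // g))  # 3/4
```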
Feb 16 '08 #17

Lie wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)

I agree, there are some numbers that rationals can't represent
(like pi, phi, e), but these rounding problems also exist in floating
point, and rational numbers wouldn't be so easily fooled by something
like 1 / 3 * 3, and 1/10 (computer) is exactly 0.1 (human). The first
problem with rationals is that to get infinite precision, the
language has to have infinite-length integers, which Python has given
us. The second problem with rationals is keeping them in their
simplest form. This can be solved with a good GCD algorithm, which
would also be a nice addition to Python's math library.
http://www.python.org/dev/peps/pep-0239/
Feb 16 '08 #18

Lie
On Feb 17, 1:40 am, Jeff Schwab <j...@schwabcenter.com> wrote:
Lie wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)
I agree, there are some numbers that rationals can't represent
(like pi, phi, e), but these rounding problems also exist in floating
point, and rational numbers wouldn't be so easily fooled by something
like 1 / 3 * 3, and 1/10 (computer) is exactly 0.1 (human). The first
problem with rationals is that to get infinite precision, the
language has to have infinite-length integers, which Python has given
us. The second problem with rationals is keeping them in their
simplest form. This can be solved with a good GCD algorithm, which
would also be a nice addition to Python's math library.

http://www.python.org/dev/peps/pep-0239/
Yes, I'm aware of the PEP and actually have been trying for some time
to reopen the PEP.

The reason that PEP was rejected is that Decimal was accepted, which
I think is a completely absurd reason, as Decimal doesn't actually
solve the rounding and equality-comparison problems. Some people have
also pointed out that Decimal IS inexact, while a rational number is
always exact unless an operation involves a (binary or decimal)
floating point (this can be easily resolved by making fractions
recessive, i.e. an operation that receives a fraction and a float
should return a float).
Feb 16 '08 #19

On Feb 16, 1:35 pm, Lie <Lie.1...@gmail.com> wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)
It's already in the trunk! Python will have a rational type (called
Fraction) in Python 2.6 and Python 3.0, thanks largely to the work of
Jeffrey Yaskin.

Mark
Feb 16 '08 #20

On Feb 16, 1:35 pm, Lie <Lie.1...@gmail.com> wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)
Forgot to give the link:

http://docs.python.org/dev/library/fractions.html
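A quick taste of the module linked above, per the 2.6/3.0 documentation:

```python
from fractions import Fraction

third = Fraction(1, 3)
print(third * 3)        # 1 - exact, where floats and Decimal must round
print(Fraction(3, 6))   # 1/2 - automatically reduced to lowest terms
print(Fraction('0.1'))  # 1/10 - one tenth is exact as a rational
```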

Mark
Feb 16 '08 #21

On Feb 16, 12:35 pm, Lie <Lie.1...@gmail.com> wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)

I agree, there are some numbers that rationals can't represent
(like pi, phi, e), but these rounding problems also exist in floating
point, and rational numbers wouldn't be so easily fooled by something
like 1 / 3 * 3, and 1/10 (computer) is exactly 0.1 (human). The first
problem with rationals is that to get infinite precision, the
language has to have infinite-length integers, which Python has given
us. The second problem with rationals is keeping them in their
simplest form. This can be solved with a good GCD algorithm, which
would also be a nice addition to Python's math library.
Have you looked at the gmpy module? That's what I
use whenever this comes up. It works nicely to
eliminate the issues that prevent a float solution
for the problems I'm working on.

And some irrationals can be represented by infinite
sequences of rationals that, coupled with gmpy's
unlimited-precision floats, allow any number of
accurate decimal places to be calculated.

If you would like to see an example, check out

http://members.aol.com/mensanator/polynomial.py
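For readers without gmpy installed, the same idea, an irrational approached through an exact rational sequence, can be sketched with the stdlib fractions module alone (the recurrence below is a standard one whose fixed point is sqrt(2)):

```python
from fractions import Fraction

# x -> 1 + 1/(1 + x) has sqrt(2) as its fixed point; iterating it on
# exact rationals yields ever-better rational approximations.
x = Fraction(1)
for _ in range(15):
    x = 1 + 1 / (1 + x)

print(x)          # an exact ratio of large integers
print(float(x))   # approaches sqrt(2) = 1.41421356...
```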
Feb 16 '08 #22

On Feb 16, 3:03 pm, Lie <Lie.1...@gmail.com> wrote:
Although rationals have their limitations too, they are a much
better choice than floats/Decimals for most cases.
Maybe that's true for your use cases, but it's not true for most cases
in general.

Rationals are pretty useless for almost any extended calculations,
since the denominator tends to grow in size till it's practically
unusable, which means you have to periodically do non-exact reductions
to keep things running, and if you do that you might as well be using
floating point.

Rationals have their occasional special purpose uses, but for most
cases they're at best marginally better than floats and more often
incomparably worse.
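The growth Carl describes is easy to provoke with the stdlib Fraction type; the iteration below is made up purely for illustration, mixing one multiply and one add per step:

```python
from fractions import Fraction

x = Fraction(1, 3)
for step in range(1, 6):
    x = x * x + Fraction(1, 7)   # a made-up iteration step
    # Squaring roughly doubles the denominator's digit count each pass:
    print(step, len(str(x.denominator)), 'digit denominator')
```

After only five steps the denominator already runs to tens of digits, which is why long float-style computations on exact rationals bog down.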
Carl Banks
Feb 16 '08 #23

Lie
On Feb 17, 4:25 am, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 16, 3:03 pm, Lie <Lie.1...@gmail.com> wrote:
Although rationals have their limitations too, they are a much
better choice than floats/Decimals for most cases.

Maybe that's true for your use cases, but it's not true for most cases
in general.
OK, that might have been an overstatement, but as I see it, it is
safer to handle something in a Fraction compared to floats (which is
why I use fractions whenever possible in non-computer maths).
Rationals are pretty useless for almost any extended calculations,
since the denominator tends to grow in size till it's practically
unusable, which means you have to periodically do non-exact reductions
to keep things running, and if you do that you might as well be using
floating point.
Rationals aren't that good if the same variable is to be updated
again and again, because of their growth, but there are a lot
of cases where the data only requires five or six or so
operations (and there are thousands or millions of such data);
rationals are the perfect choice for those situations because they
are easier to use thanks to the comparison safety. Or in
situations where speed isn't as important and accuracy is required,
Fraction may be faster than Decimal and more accurate at the same time
(someone needs to test that, though).
Rationals have their occasional special purpose uses, but for most
cases they're at best marginally better than floats and more often
incomparably worse.
Feb 16 '08 #24

Carl Banks wrote:
On Feb 16, 3:03 pm, Lie <Lie.1...@gmail.com> wrote:
>Although rationals have their limitations too, they are a much
better choice than floats/Decimals for most cases.

Maybe that's true for your use cases, but it's not true for most cases
in general.

Rationals are pretty useless for almost any extended calculations,
since the denominator tends to grow in size till it's practically
unusable,
What do you mean by "practically unusable?" I heard similar arguments
made against big integers at one point ("Primitive types are usually big
enough, why risk performance?") but I fell in love with them when I
first saw them in Smalltalk, and I'm glad Python supports them natively.
Feb 16 '08 #25

On Feb 16, 5:51 pm, Jeff Schwab <j...@schwabcenter.com> wrote:
Carl Banks wrote:
On Feb 16, 3:03 pm, Lie <Lie.1...@gmail.com> wrote:
Although rationals have their limitations too, they are a much
better choice than floats/Decimals for most cases.
Maybe that's true for your use cases, but it's not true for most cases
in general.
Rationals are pretty useless for almost any extended calculations,
since the denominator tends to grow in size till it's practically
unusable,

What do you mean by "practically unusable?" I heard similar arguments
made against big integers at one point ("Primitive types are usually big
enough, why risk performance?") but I fell in love with them when I
first saw them in Smalltalk, and I'm glad Python supports them natively.
Feb 16 '08 #26

On Feb 14, 8:10 pm, Zentrader <zentrad...@gmail.com> wrote:
That's a misconception. The decimal-module has a different base (10
instead of 2), and higher precision. But that doesn't change the fact
that it will expose the same rounding-errors as floats do - just for
different numbers.
>>> import decimal as d
>>> d = d.Decimal
>>> d("1") / d("3") * d("3")
Decimal("0.9999999999999999999999999999")

Surely you jest.
He's not joking at all.
Your example is exact to 28 digits. Your attempted
trick is to use a number that never ends (1/3=0.3333...).
It does end in base 3, 6, 9, 12, etc.

You have to remember that base-ten wasn't chosen because it has
mathematical advantages over other bases, but merely because people
counted on their fingers. In light of this fact, why is one-fifth
more deserving of an exact representation than one-third is?
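The point about bases can be checked directly with the decimal module: exactness depends on whether the denominator divides a power of the base, not on base ten being special:

```python
from decimal import Decimal

# 1/5 terminates in base 10, so Decimal round-trips it exactly:
assert Decimal(1) / Decimal(5) * Decimal(5) == Decimal(1)

# 1/3 terminates in neither base 2 nor base 10, so Decimal must round
# it just as binary floats do:
assert Decimal(1) / Decimal(3) * Decimal(3) != Decimal(1)
print('both assertions pass')
```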
Feb 17 '08 #27

On Feb 17, 1:45 pm, Lie <Lie.1...@gmail.com> wrote:
Any iteration with repeated divisions and additions can thus run the
denominators up. This sort of calculation is pretty common (examples:
compound interest, numerical integration).

Wrong. Addition and subtraction would only grow the denominator up to
a certain limit
I said repeated additions and divisions.

Anyways, addition and subtraction can increase the denominator a lot
if for some reason you are inputing numbers with many different
denominators.
Carl Banks
Feb 18 '08 #28

Lie
On Feb 18, 1:25 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 17, 1:45 pm, Lie <Lie.1...@gmail.com> wrote:
Any iteration with repeated divisions and additions can thus run the
denominators up. This sort of calculation is pretty common (examples:
compound interest, numerical integration).
Wrong. Addition and subtraction would only grow the denominator up to
a certain limit

I said repeated additions and divisions.
Repeated addition and subtraction can't make fractions grow
infinitely; only multiplication and division can.
Anyways, addition and subtraction can increase the denominator a lot
if for some reason you are inputing numbers with many different
denominators.
Up to a certain limit. After you reach the limit, the fraction will
always be simplifiable.

If the input numerator and denominator have a defined limit, repeated
addition and subtraction to another fraction will also have a defined
limit.
Feb 24 '08 #29

>
Out of curiosity, of what use is denominator limits?

The problems where I've had to use rationals have
never afforded me such luxury, so I don't see what
your point is
In Donald Knuth's The Art of Computer Programming, he describes
floating slash arithmetic, where the total number of bits used by the
numerator and denominator is bounded. IIRC, a use case was matrix
inversion.

casevh
Feb 24 '08 #30

On Sun, 24 Feb 2008 11:09:32 -0800, Lie wrote:
I decided to keep the num/den limit low (10) because higher values might
obscure the fact that it does have limits.
You do realise that by putting limits on the denominator, you guarantee
that the sum of the fractions also has a limit on the denominator? In
other words, your "test" is useless.

With denominators limited to 1 through 9 inclusive, the sum will have a
denominator of 2*3*5*7 = 210. But that limit is a product (literally and
figuratively) of your artificial limit on the denominator. Add a fraction
with denominator 11, and the sum now has a denominator of 2310; add
another fraction n/13 and the sum goes to m/30030; and so on.
--
Steven
Feb 24 '08 #31

On Feb 24, 4:50 pm, Steven D'Aprano <st...@REMOVE-THIS-cybersource.com.au> wrote:
On Sun, 24 Feb 2008 11:09:32 -0800, Lie wrote:
I decided to keep the num/den limit low (10) because higher values might
obscure the fact that it does have limits.

You do realise that by putting limits on the denominator, you guarantee
that the sum of the fractions also has a limit on the denominator? In
other words, your "test" is useless.

With denominators limited to 1 through 9 inclusive, the sum will have a
denominator of 2*3*5*7 = 210.
The limit will be 2*2*2*3*3*5*7 = 2520. As MD said, "equivalently
the product over all primes p <= n of the highest power
of p not exceeding n".

But that limit is a product (literally and
figuratively) of your artificial limit on the denominator. Add a fraction
with denominator 11, and the sum now has a denominator of 2310; add
another fraction n/13 and the sum goes to m/30030; and so on.
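The corrected limit is easy to verify in a modern Python (math.gcd needs 3.5+; the lcm fold is a standard one-liner):

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Worst-case common denominator for fractions with denominators 1..9:
print(reduce(lcm, range(1, 10)))   # 2520 == 2**3 * 3**2 * 5 * 7

# Admitting denominator 11 multiplies the bound by 11:
print(reduce(lcm, range(1, 12)))   # 27720
```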

--
Steven
Feb 25 '08 #32

Mel
Mensanator wrote:
On Feb 24, 1:09 pm, Lie <Lie.1...@gmail.com> wrote:
>I decided to keep the num/den limit low (10) because higher values
might obscure the fact that it does have limits. [ ... ]

Out of curiosity, of what use is denominator limits?

The problems where I've had to use rationals have
never afforded me such luxury, so I don't see what
your point is.
In calculations dealing only with selected units of measure: dollars
and cents, pounds, ounces and tons, teaspoons, gallons, beer bottles
28 to a case, the denominators would settle out pretty quickly.

In general mathematics, not.

I think that might be the point.

Mel.
Feb 25 '08 #33

On Feb 24, 6:09 pm, Mel <mwil...@the-wire.com> wrote:
Mensanator wrote:
On Feb 24, 1:09 pm, Lie <Lie.1...@gmail.com> wrote:
I decided to keep the num/den limit low (10) because higher values
might obscure the fact that it does have limits. [ ... ]
Out of curiosity, of what use is denominator limits?
The problems where I've had to use rationals have
never afforded me such luxury, so I don't see what
your point is.

In calculations dealing only with selected units of measure: dollars
and cents, pounds, ounces and tons, teaspoons, gallons, beer bottles
28 to a case, the denominators would settle out pretty quickly.
Ok.
>
In general mathematics, not.
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
>
I think that might be the point.
If the point was as SDA suggested, where things like 16/16
are possible, I see that point. As gmpy demonstrates though,
such concerns are moot as that doesn't happen. There's no
reason to suppose a Python native rational type would be
implemented stupidly, is there?
>
Mel.
Feb 25 '08 #34

On Feb 24, 7:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
Since you are expecting to work with unlimited (or at least, very
high) precision, then the behavior of rationals is not a surprise. But
a naive user may be surprised when the running time for a calculation
varies greatly based on the values of the numbers. In contrast, the
running time for standard binary floating point operations are fairly
constant.
>
If the point was as SDA suggested, where things like 16/16
are possible, I see that point. As gmpy demonstrates though,
such concerns are moot as that doesn't happen. There's no
reason to suppose a Python native rational type would be
implemented stupidly, is there?
In the current version of GMP, the running time for the calculation of
the greatest common divisor is O(n^2). If you include reduction to
lowest terms, the running time for a rational add is now O(n^2)
instead of O(n) for a high-precision floating point addition or O(1)
for a standard floating point addition. If you need an exact rational
answer, then the change in running time is fine. But you can't just
use rationals and expect a constant running time.

There are trade-offs between IEEE-754 binary, Decimal, and Rational
arithmetic. They all have their appropriate problem domains.

And sometimes you just need unlimited precision, radix-6, fixed-point
arithmetic....

casevh
Feb 25 '08 #35

On Feb 24, 12:32 pm, Lie <Lie.1...@gmail.com> wrote:
On Feb 18, 1:25 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 17, 1:45 pm, Lie <Lie.1...@gmail.com> wrote:
Any iteration with repeated divisions and additions can thus run the
denominators up. This sort of calculation is pretty common (examples:
compound interest, numerical integration).
Wrong. Addition and subtraction would only grow the denominator up to
a certain limit
I said repeated additions and divisions.

Repeated Addition and subtraction can't make fractions grow
infinitely, only multiplication and division could.

What part of "repeated additions and divisions" don't you understand?

Carl Banks
Feb 25 '08 #36

On Feb 24, 10:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
No, that's a specific algorithm. That some random algorithm doesn't
blow up the denominators to the point of disk thrashing doesn't mean
they won't generally.

Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Carl Banks
Feb 25 '08 #37

Carl Banks <pa************@gmail.com> writes:
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Usually you would round to the nearest penny before storing in the
database.
Feb 25 '08 #38

On Feb 25, 2:04 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Carl Banks <pavlovevide...@gmail.com> writes:
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.

Usually you would round to the nearest penny before storing in the
database.
I throw it out there as a hypothetical, not as a real world example.
"This is why we don't (usually) use rationals for accounting."
Carl Banks
Feb 25 '08 #39

If you're interested in rationals, then you might want to have a look
at mxNumber which is part of the eGenix mx Experimental
Distribution:

http://www.egenix.com/products/pytho...ntal/mxNumber/

It provides fast rational operations based on the GNU MP
library.

On 2008-02-25 07:58, Carl Banks wrote:
On Feb 24, 10:56 pm, Mensanator <mensana...@aol.com> wrote:
>But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.

No, that's a specific algorithm. That some random algorithm doesn't
blow up the denominators to the point of disk thrashing doesn't mean
they won't generally.

Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
It is quite possible to limit the denominator before storing it
in a database or other external resource using Farey neighbors:

http://en.wikipedia.org/wiki/Farey_s...rey_neighbours

mxNumber implements an algorithm for this (not the most efficient
one, but it works nicely).
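The stdlib Fraction type later grew a method doing exactly this kind of best-rational-approximation clamping, limit_denominator; the first line of output below is the example from its documentation:

```python
from fractions import Fraction

pi_ish = Fraction('3.1415926535897932')
print(pi_ish.limit_denominator(1000))   # 355/113
print(pi_ish.limit_denominator(10))     # 22/7
```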

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, Feb 25 2008)
>>Python/Zope Consulting and Support ... http://www.egenix.com/
mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
__________________________________________________ ______________________

:::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! ::::
eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
Feb 25 '08 #40

Paul Rubin wrote:
Carl Banks <pa************@gmail.com> writes:
>Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.

Usually you would round to the nearest penny before storing in the
database.
There are cases where the law requires a higher precision or where the
rounding has to be a floor or...

Some things make no sense, and when dealing with money things make
even less sense, either to protect the customer or to guarantee the
State gets its share of the transaction.

Here in Brasil, for example, gas stations have to display the price with 3
decimal digits and round the end result down (IIRC). A truck filling 117
liters at 1.239 reais per liter starts making a mess... If the owner wants
to track "losses" due to rounding or if he wants to make his inventory of
fuel accurately, he won't be able to save just what he billed the customer
otherwise things won't match by the end of the month.
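A sketch of the gas-station arithmetic with the decimal module, using the figures from the post above (ROUND_DOWN matches the round-the-total-down rule):

```python
from decimal import Decimal, ROUND_DOWN

litres = Decimal('117')
price = Decimal('1.239')                # reais per litre, 3 decimals
exact = litres * price                  # what the inventory must track
billed = exact.quantize(Decimal('0.01'), rounding=ROUND_DOWN)

print(exact)    # 144.963
print(billed)   # 144.96 - the 0.003 difference is the "loss" to track
```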

Feb 25 '08 #41

On Feb 25, 12:58 am, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 24, 10:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.

No, that's a specific algorithm. That some random algorithm doesn't
blow up the denominators to the point of disk thrashing doesn't mean
they won't generally.

Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Nobody said rationals were the appropriate solution
to _every_ problem, just as floats and integers aren't
the appropriate solution to _every_ problem.

Your argument is that I should be forced to use
an inappropriate type when rationals _are_
the appropriate solution.

I have never used the Decimal type, but I'm not
calling for its removal because I know there are
cases where it's useful. If a rational type were
added, no one would force you to use it for
numerical integration.
Carl Banks
Feb 25 '08 #42

P: n/a
On Sun, 24 Feb 2008 23:41:53 -0800, Dennis Lee Bieber wrote:
On 24 Feb 2008 23:04:14 -0800, Paul Rubin <http://ph****@NOSPAM.invalid>
declaimed the following in comp.lang.python:

>Usually you would round to the nearest penny before storing in the
database.

Tell that to the payroll processing at Lockheed...My paycheck
tends to vary from week to week as the database apparently carries
amounts to at least 0.001 resolution, only rounding when distributing
among various taxes for the paycheck itself. Tedious data entry in
Quicken as I have to keep tweaking various tax entries by +/- a penny
each week.

"Worst practice" in action *wink*

I predict they're using some funky in-house accounting software they've
paid millions to a consultancy firm (SAP?) for over the decades, written
by some guys who know lots of Cobol but no accounting, and the internal
data type is a float.

[snip]
Oh... And M$ -- the currency type in VB is four decimal places.
Accounting standards do vary according to context: e.g. I see that
official Australian government reporting standards for banks are to report
in millions of dollars rounded to one decimal place. Accountants can
calculate things more or less any way they like, so long as they tell
you. I found one really dodgy example:

"The MFS Water Fund ARSN 123 123 642 (‘the Fund’) is a registered managed
investment scheme. ... MFS Aqua may calculate the Issue Price to the
number of decimal places it determines."

Sounds like another place using native floats. But it's all above board,
because they tell you they'll use an arbitrary number of decimal places,
all the better to confuse the auditors, my dear.
--
Steven
Feb 25 '08 #43

P: n/a
On Sun, 24 Feb 2008 23:09:39 -0800, Carl Banks wrote:
On Feb 25, 2:04 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>Carl Banks <pavlovevide...@gmail.com> writes:
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.

Usually you would round to the nearest penny before storing in the
database.

I throw it out there as a hypothetical, not as a real world example.
"This is why we don't (usually) use rationals for accounting."
But since accountants (usually) round to the nearest cent, accounting is
a *good* use-case for rationals. Decimal might be better, but floats are
the worst.
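A minimal, stdlib-only illustration of why floats are the worst fit for to-the-cent accounting, while Decimal stays exact:

```python
from decimal import Decimal

# Summing three dimes in binary floating point picks up
# representation error; the same sum in Decimal is exact.
print(0.10 + 0.10 + 0.10 == 0.30)                     # False
print(sum([Decimal("0.10")] * 3) == Decimal("0.30"))  # True
```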

I wonder why you were doing numerical integration with rationals in the
first place? Are you one of those ABC users (like Guido) who have learnt
to fear rationals because ABC didn't have floats?
--
Steven

Feb 25 '08 #44

P: n/a
On 2008-02-25 16:03, Steven D'Aprano wrote:
On Sun, 24 Feb 2008 23:09:39 -0800, Carl Banks wrote:
>On Feb 25, 2:04 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>>Carl Banks <pavlovevide...@gmail.com> writes:
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Usually you would round to the nearest penny before storing in the
database.
I throw it out there as a hypothetical, not as a real world example.
"This is why we don't (usually) use rationals for accounting."

But since accountants (usually) round to the nearest cent, accounting is
a *good* use-case for rationals. Decimal might be better, but floats are
worst.
That's not necessarily true in general: finance libraries usually try
to always do calculations at the best possible precision and then only
apply rounding at the very end of a calculation. Most of the time a float
is the best data type for this.

Accounting uses a somewhat different approach, one which varies
between the different accounting standards and use cases. The decimal
type is usually better suited for this, since it supports various
ways of doing rounding.

Rationals are not always the best alternative, but they do help
in cases where you need to guarantee that the sum of all parts
is equal to the whole for all values. Combined with interval
arithmetic they go a long way towards more accurate calculations.
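The "sum of all parts is equal to the whole" guarantee is easy to demonstrate with the stdlib fractions module (available since Python 2.6):

```python
from fractions import Fraction

# Ten exact tenths recompose the whole...
print(sum(Fraction(1, 10) for _ in range(10)) == 1)   # True
# ...while ten float tenths fall short:
print(sum(0.1 for _ in range(10)))                    # 0.9999999999999999
```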

--
Marc-Andre Lemburg
eGenix.com

Feb 25 '08 #45

P: n/a
On Feb 25, 9:41 am, Mensanator <mensana...@aol.com> wrote:
On Feb 25, 12:58 am, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 24, 10:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
No, that's a specific algorithm. That some random algorithm doesn't
blow up the denominators to the point of disk thrashing doesn't mean
they won't generally.
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.

Nobody said rationals were the appropriate solution
to _every_ problem, just as floats and integers aren't
the appropriate solution to _every_ problem.
I was answering your claim that rationals are appropriate for general
mathematical uses.

Your argument is that I should be forced to use
an inappropriate type when rationals _are_
the appropriate solution.
I don't know where you got that idea.

My argument is that rationals aren't suitable for ordinary uses
because they have poor performance and can easily blow up in your
face, trash your disk, and crash your program (your whole system if
you're on Windows).
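The blow-up is easy to reproduce with the stdlib fractions module (hypothetical figures: a 5% nominal annual rate compounded monthly for ten years):

```python
from fractions import Fraction

rate = 1 + Fraction(5, 100) / 12    # monthly factor, 241/240
balance = Fraction(1000)
for _ in range(120):                # ten years of months
    balance *= rate
# The exact result is correct, but enormous to store:
print(len(str(balance.denominator)))   # 283 digits for one balance
```

Rounding each month (as a float or Decimal would) keeps the stored value small; carrying the exact rational does not.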

In other words, 3/4 in Python rightly yields a float and not a
rational.
Carl Banks
Feb 25 '08 #46

P: n/a
On 2008-02-25, Carl Banks <pa************@gmail.com> wrote:
In other words, 3/4 in Python rightly yields a float
Unless you're in the camp that believes 3/4 should yield the
integer 0. ;)
and not a rational.
--
Grant Edwards grante Yow! Zippy's brain cells
at are straining to bridge
visi.com synapses ...
Feb 25 '08 #47

P: n/a
Grant Edwards wrote:
On 2008-02-25, Carl Banks <pa************@gmail.com> wrote:
>In other words, 3/4 in Python rightly yields a float

Unless you're in the camp that believes 3/4 should yield the
integer 0. ;)
>and not a rational.
No, that wouldn't be rational ;-)
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/

Feb 25 '08 #48

P: n/a
On Mon, 2008-02-25 at 16:27 +0000, Grant Edwards wrote:
On 2008-02-25, Carl Banks <pa************@gmail.com> wrote:
In other words, 3/4 in Python rightly yields a float

Unless you're in the camp that believes 3/4 should yield the
integer 0. ;)
I'm in the camp that believes that 3/4 does indeed yield the integer 0,
but should be spelled 3//4 when that is the intention.

Cheers,
Cliff
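For reference, the two spellings side by side (Python 3 semantics, or Python 2 with `from __future__ import division`):

```python
print(3 / 4)     # 0.75 -- true division always yields a float
print(3 // 4)    # 0    -- floor division when an integer result is intended
print(-3 // 4)   # -1   -- note: // floors toward negative infinity
```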
Feb 25 '08 #49

P: n/a
Lie
On Feb 25, 11:34 am, casevh <cas...@gmail.com> wrote:
On Feb 24, 7:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.

Since you are expecting to work with unlimited (or at least, very
high) precision, then the behavior of rationals is not a surprise. But
a naive user may be surprised when the running time for a calculation
varies greatly based on the values of the numbers. In contrast, the
running time for standard binary floating point operations are fairly
constant.
If the point was, as SDA suggested, that things like 16/16
are possible, I see that point. As gmpy demonstrates, though,
such concerns are moot as that doesn't happen. There's no
reason to suppose a Python native rational type would be
implemented stupidly, is there?

In the current version of GMP, the running time for the calculation of
the greatest common divisor is O(n^2). If you include reduction to
lowest terms, the running time for a rational add is now O(n^2)
instead of O(n) for a high-precision floating point addition or O(1)
for a standard floating point addition. If you need an exact rational
answer, then the change in running time is fine. But you can't just
use rationals and expect a constant running time.

There are trade-offs between IEEE-754 binary, Decimal, and Rational
arithmetic. They all have their appropriate problem domains.
I very much agree with this statement. Fractions have their weaknesses,
and so do Decimal and hardware floating point, and each has its own
usage, its own scenarios where it is appropriate. If you need
full-speed calculation, it is clear that floating point wins all over
the place; OTOH, if you need to manage your precision carefully,
Fraction and Decimal both have their own pluses and minuses.
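The varying running time is visible even in small cases: for harmonic-style sums, the reduced denominator grows roughly like lcm(1..n), so each successive rational addition works on ever larger integers. A quick sketch with the stdlib Fraction type:

```python
from fractions import Fraction

# Partial sum of 1/1 + 1/2 + ... + 1/10, kept exact.
total = Fraction(0)
for n in range(1, 11):
    total += Fraction(1, n)
print(total)                # 7381/2520
print(total.denominator)    # 2520 == lcm(1..10)
```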

Feb 26 '08 #50
