
I've never had any call to use floating point numbers and now that I
want to, I can't!
Python 2.5.1 (r251:54863, May 1 2007, 17:47:05) [MSC v.1310 32 bit (Intel)] on win32
>>> float(.3)
0.29999999999999999
>>> foo = 0.3
>>> foo
0.29999999999999999
>>>
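A quick way to see what is going on, and the usual display workarounds (a minimal sketch against the Python 2.5 shown above):

>>> x = 0.3
>>> repr(x)     # what the interactive prompt shows: 17 significant digits
'0.29999999999999999'
>>> str(x)      # what the print statement shows: rounded to 12 digits
'0.3'
>>> '%.2f' % x  # explicit formatting when you control the precision
'0.30'

The value stored in the float really is slightly less than 0.3; only the display differs.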
 
 ro**********@gmail.com wrote:
I've never had any call to use floating point numbers and now that
I want to, I can't!
Ever considered phrasing your actual problem so one can help, let
alone looking at the archive for many, many postings about this
topic?
Regards,
Björn

BOFH excuse #66:
bit bucket overflow   
Dennis Lee Bieber wrote:
On Wed, 13 Feb 2008 17:49:08 -0800, Jeff Schwab <je**@schwabcenter.com>
declaimed the following in comp.lang.python:
> If you need a pretty string for use in code:
>
> >>> def pretty_fp(fpnum, prec=8):
> ...     return ('%.*f' % (prec, fpnum)).rstrip('0')
> ...
> >>> pretty_fp(0.3)
> '0.3'
What's wrong with just
    str(0.3)
That's what "print" invokes, whereas the interpreter prompt is using
    repr(0.3)
Nothing! Thanks for pointing that out.
Dennis Lee Bieber wrote:
What's wrong with just
str(0.3)
that's what "print" invokes, whereas the interpreter prompt is using
repr(0.3)
No, print invokes the tp_print slot of the float type. Some core types
have a special handler for print. The tp_print slot is not available
from Python code and most people don't know about it. :]
Christian   
Bruno Desthuilliers wrote:
I must have missed something...
Yeah, you have missed the beginning of the third sentence: "The tp_print
slot is not available from Python code". The tp_print slot is only
available in C code and is part of the C definition of a type. Hence the
tp_ prefix, as in type.
Search for float_print and tp_print in http://svn.python.org/view/python/tr...0567&view=auto
Christian   
Bruno Desthuilliers <br********************@wtf.websiteburo.oops.com>
wrote:
I must have missed something...
Perhaps you missed the part where Christian said "The tp_print slot is not
available from Python code"?   
Christian Heimes a écrit :
Bruno Desthuilliers wrote:
>I must have missed something...
Yeah, You have missed the beginning of the third sentence: "The tp_print
slot is not available from Python code".
Oops, my bad! I missed the "not"! :)
Christian Heimes wrote:
Dennis Lee Bieber wrote:
> What's wrong with just
str(0.3)
that's what "print" invokes, whereas the interpreter prompt is using
repr(0.3)
No, print invokes the tp_print slot of the float type. Some core types
have a special handler for print. The tp_print slot is not available
from Python code and most people don't know about it. :]
Why does print use the tp_print slot, rather than str()? Are the two
effectively redundant? If (non-repr) string representations are
frequently needed for a given type, could str() be implemented as a
reference to tp_print, via a C-language extension?
En Thu, 14 Feb 2008 15:22:41 -0200, Jeff Schwab <je**@schwabcenter.com>
escribió:
Christian Heimes wrote:
>No, print invokes the tp_print slot of the float type. Some core types have a special handler for print. The tp_print slot is not available from Python code and most people don't know about it. :]
Why does print use the tp_print slot, rather than str()? Are the two
effectively redundant? If (non-repr) string representations are
frequently needed for a given type, could str() be implemented as a
reference to tp_print, via a C-language extension?
As a side note, the print statement has FIVE related opcodes. Looks like
printing has been considered a very important operation...

Gabriel Genellina   
I did try searching, but I never found what I was looking for. This
thread has been very useful and informative. Thanks for all your
help! I was able to fix my problem. :)   
Jeff Schwab <je**@schwabcenter.com> wrote:
Christian Heimes wrote:
>Dennis Lee Bieber wrote:
>> What's wrong with just
>>     str(0.3)
>> That's what "print" invokes, whereas the interpreter prompt is using
>>     repr(0.3)
>No, print invokes the tp_print slot of the float type. Some core types
>have a special handler for print. The tp_print slot is not available
>from Python code and most people don't know about it. :]
Why does print use the tp_print slot, rather than str()? Are the two
effectively redundant? If (non-repr) string representations are
frequently needed for a given type, could str() be implemented as a
reference to tp_print, via a C-language extension?
The tp_print slot is used only when printing to a C file descriptor. In
most cases where it is used it simply duplicates the str and repr
functionality but avoids building the entire output in memory. It also
takes a flag argument indicating whether it should output the str or repr,
the latter being used when rendering the content inside an object such as a
dict or list.
So for example a dict's repr builds a list containing the repr of each
key/value pair and then joins the list using a comma separator. The
tp_print simply outputs the '{', then uses tp_print to output the repr of
the key and repr of the value with appropriate separators and finally the
closing '}'. It would not surprise me if, by replacing the output of a single
large string with a lot of small calls to fputs, 'print x' could be slower
than 'print str(x)'.
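The C-level tp_print slot itself can't be shown from Python, but the two output strategies Duncan contrasts can be sketched as a rough Python analogy (the helper names are made up for illustration, not CPython's actual code):

import sys

def print_dict_streaming(d, out=sys.stdout):
    # tp_print-style: write piece by piece, never building one big string
    out.write('{')
    for i, (k, v) in enumerate(d.items()):
        if i:
            out.write(', ')
        out.write('%r: %r' % (k, v))
    out.write('}\n')

def print_dict_joined(d, out=sys.stdout):
    # repr()-style: build the entire representation in memory, then write it
    out.write('{' + ', '.join('%r: %r' % kv for kv in d.items()) + '}\n')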
That's a misconception. The decimal module has a different base (10
instead of 2), and higher precision. But that doesn't change the fact
that it will expose the same rounding errors as floats do - just for
different numbers.
>>> import decimal as d
>>> d = d.Decimal
>>> d("1") / d("3") * d("3")
Decimal("0.9999999999999999999999999999")
Surely you jest. Your example is exact to 28 digits. Your attempted
trick is to use a number that never ends (1/3=0.3333...). It would
only convert back to one if you had an infinite number of
significant digits. That has nothing to do with the Python decimal
module (which does what it claims). It is one of the idiosyncrasies
of the base 10 number system. Remember we are working with base 10
decimals and not fractions.   
Zentrader wrote:
>That's a misconception. The decimal module has a different base (10 instead of 2), and higher precision. But that doesn't change the fact that it will expose the same rounding errors as floats do - just for different numbers.
> >>> import decimal as d
> >>> d = d.Decimal
> >>> d("1") / d("3") * d("3")
> Decimal("0.9999999999999999999999999999")
Surely you jest. Your example is exact to 28 digits. Your attempted
trick is to use a number that never ends (1/3=0.3333...). It would
only convert back to one if you had an infinite number of
significant digits. That has nothing to do with the Python decimal
module (which does what it claims). It is one of the idiosyncrasies
of the base 10 number system. Remember we are working with base 10
decimals and not fractions.
Diez was not claiming that the decimal module did anything less than
what it promised. He just pointed out that the module does not support
infinitely precise floating-point arithmetic, any more than traditional
base-2 representations do. Please review the thread (the parts you
snipped) for clarification.   
I disagree with this statement
<quote>But that doesn't change the fact that it will expose the same
rounding errors as floats do - just for different numbers.</quote>
The example used has no rounding errors. For anything less than 28
significant digits it rounds to 1.0. With floats, 1.0/3 yields
0.33333333333333331 on my machine. Also, you can compare two
decimal.Decimal() objects for equality. With floats you have to test
for a difference less than some small value. BTW, a college professor
who also wrote code for a living made this offhand remark: "In general
it is best to multiply first and then divide." Good general advice.
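The comparison point is easy to demonstrate (a minimal sketch; the tolerance value is an arbitrary choice):

>>> from decimal import Decimal
>>> Decimal("0.1") * 3 == Decimal("0.3")  # exact base-10 values compare equal
True
>>> 0.1 * 3 == 0.3                        # binary rounding error
False
>>> abs(0.1 * 3 - 0.3) < 1e-9             # the usual float workaround
True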
On Feb 15, 3:30 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
The point is that all numbering systems with a base + precision will
have (rational) values they can't exactly represent. R\Q (the
irrationals) is of course out of the question by definition....
This 'Decimal is exact' myth has been appearing often enough that I
wonder whether it's worth devoting a prominent paragraph to in the
docs.
Mark   
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)
I agree, there are some numbers that rationals can't represent
(like pi, phi, e), but these rounding problems also exist in floating
point, and rational numbers wouldn't be so easily fooled by something
like 1 / 3 * 3, and 1/10 (computer) is exactly 0.1 (human). The first
problem with rationals is that to get infinite precision, the
language has to have infinite-length integers, which
Python has given us. The second problem with rationals is keeping
them in their simplest form. This can be solved with a
good GCD algorithm, which could also be a nice addition to Python's math
library.
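Both ingredients fit in a few lines; a toy sketch (the helper names are made up for illustration, not a proposed API):

def gcd(a, b):
    # Euclid's algorithm: terminates because the remainder shrinks each step
    while b:
        a, b = b, a % b
    return a

def simplify(num, den):
    # keep a rational in lowest terms, relying on Python's unbounded ints
    g = gcd(num, den)
    return num // g, den // g

print simplify(6, 8)            # (3, 4)
print simplify(10**40, 2**40)   # big integers are no problem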
Lie wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)
I agree, there are some numbers that rationals can't represent
(like pi, phi, e), but these rounding problems also exist in floating
point, and rational numbers wouldn't be so easily fooled by something
like 1 / 3 * 3, and 1/10 (computer) is exactly 0.1 (human). The first
problem with rationals is that to get infinite precision, the
language has to have infinite-length integers, which
Python has given us. The second problem with rationals is keeping
them in their simplest form. This can be solved with a
good GCD algorithm, which could also be a nice addition to Python's math
library.
http://www.python.org/dev/peps/pep-0239/
On Feb 17, 1:40 am, Jeff Schwab <j...@schwabcenter.com> wrote:
Lie wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)
I agree, there are some numbers that rationals can't represent
(like pi, phi, e), but these rounding problems also exist in floating
point, and rational numbers wouldn't be so easily fooled by something
like 1 / 3 * 3, and 1/10 (computer) is exactly 0.1 (human). The first
problem with rationals is that to get infinite precision, the
language has to have infinite-length integers, which
Python has given us. The second problem with rationals is keeping
them in their simplest form. This can be solved with a
good GCD algorithm, which could also be a nice addition to Python's math
library.
http://www.python.org/dev/peps/pep-0239/
Yes, I'm aware of the PEP and actually have been trying for some time
to reopen the PEP.
The reason that PEP was rejected is that Decimal was accepted, which
I think is a completely absurd reason, as Decimal doesn't actually
solve the rounding problems and equality comparison problem. Some
people have also pointed out that Decimal IS inexact, while a rational
number is always exact except if you have an operation with a (binary
or decimal) floating point involved (this can be easily resolved by
making fractions recessive, i.e. an operation that receives a fraction
and a float should return a float).
On Feb 16, 1:35 pm, Lie <Lie.1...@gmail.com> wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)
It's already in the trunk! Python will have a rational type (called
Fraction) in Python 2.6 and Python 3.0, thanks largely to the work of
Jeffrey Yasskin.
Mark   
On Feb 16, 1:35 pm, Lie <Lie.1...@gmail.com> wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)
Forgot to give the link: http://docs.python.org/dev/library/fractions.html
Mark   
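For readers on 2.6/3.0, the module behaves just as the thread hopes; a quick demonstration:

>>> from fractions import Fraction
>>> Fraction(1, 3) * 3 == 1          # no rounding error
True
>>> Fraction(1, 10)                  # exactly 0.1, unlike the float
Fraction(1, 10)
>>> Fraction(1, 3) + Fraction(1, 6)  # results come out in lowest terms
Fraction(1, 2)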
On Feb 16, 12:35 pm, Lie <Lie.1...@gmail.com> wrote:
Would all these problems with floating points be a rational reason to
add rational numbers support in Python or Py3k? (pun not intended)
I agree, there are some numbers that rationals can't represent
(like pi, phi, e), but these rounding problems also exist in floating
point, and rational numbers wouldn't be so easily fooled by something
like 1 / 3 * 3, and 1/10 (computer) is exactly 0.1 (human). The first
problem with rationals is that to get infinite precision, the
language has to have infinite-length integers, which
Python has given us. The second problem with rationals is keeping
them in their simplest form. This can be solved with a
good GCD algorithm, which could also be a nice addition to Python's math
library.
Have you looked at the gmpy module? That's what I
use whenever this comes up. Works very nicely to
eliminate the issues that prevent a float solution
for the problems I'm working on.
And some irrationals can be represented by infinite
sequences of rationals that, coupled with gmpy's
unlimited precision floats, allow any number of
accurate decimal places to be calculated.
If you would like to see an example, check out http://members.aol.com/mensanator/polynomial.py   
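A hedged sketch, assuming the gmpy module Mensanator mentions is installed; its mpq type is an unlimited-precision rational backed by the GNU MP library:

>>> import gmpy
>>> gmpy.mpq(1, 3) * 3 == 1   # exact rational arithmetic
True
>>> gmpy.mpq(355, 113)        # stays an exact ratio, never rounded
mpq(355,113)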
On Feb 16, 3:03 pm, Lie <Lie.1...@gmail.com> wrote:
Although rationals have their limitations too, they are a much
better choice compared to floats/Decimals for most cases.
Maybe that's true for your use cases, but it's not true for most cases
in general.
Rationals are pretty useless for almost any extended calculations,
since the denominator tends to grow in size till it's practically
unusable, which means you have to periodically do non-exact reductions
to keep things running, and if you do that you might as well be using
floating point.
Rationals have their occasional special-purpose uses, but for most
cases they're at best marginally better than floats and more often
incomparably worse.
Carl Banks   
On Feb 17, 4:25 am, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 16, 3:03 pm, Lie <Lie.1...@gmail.comwrote:
Although rationals have their limitations too, they are a much
better choice compared to floats/Decimals for most cases.
Maybe that's true for your use cases, but it's not true for most cases
in general.
OK, that might have been an overstatement, but as I see it, it is
safer to handle something in a Fraction compared to floats (which is
why I use fractions whenever possible in non-computer maths).
Rationals are pretty useless for almost any extended calculations,
since the denominator tends to grow in size till it's practically
unusable, which means you have to periodically do non-exact reductions
to keep things running, and if you do that you might as well be using
floating point.
Rationals aren't that good if the same variable is to be
calculated again and again because of the growth, but there are a lot
of cases where the data would only require five or six or so
operations done on it (and there are thousands or millions of such
data); rationals are the perfect choice for those situations because
they are easier to use thanks to the comparison safety. Or in
situations where speed isn't as important and accuracy is required,
Fraction may be faster than Decimal and more accurate at the same time
(someone needs to test that, though).
Rationals have their occasional special purpose uses, but for most
cases they're at best marginally better than floats and more often
incomparably worse.
  
Carl Banks wrote:
On Feb 16, 3:03 pm, Lie <Lie.1...@gmail.comwrote:
>Although rationals have their limitations too, they are a much better choice compared to floats/Decimals for most cases.
Maybe that's true for your use cases, but it's not true for most cases
in general.
Rationals are pretty useless for almost any extended calculations,
since the denominator tends to grow in size till it's practically
unusable,
What do you mean by "practically unusable?" I heard similar arguments
made against big integers at one point ("Primitive types are usually big
enough, why risk performance?") but I fell in love with them when I
first saw them in Smalltalk, and I'm glad Python supports them natively.   
On Feb 14, 8:10 pm, Zentrader <zentrad...@gmail.com> wrote:
That's a misconception. The decimal module has a different base (10
instead of 2), and higher precision. But that doesn't change the fact
that it will expose the same rounding errors as floats do - just for
different numbers.
>>> import decimal as d
>>> d = d.Decimal
>>> d("1") / d("3") * d("3")
Decimal("0.9999999999999999999999999999")
Surely you jest.
He's not joking at all.
Your example is exact to 28 digits. Your attempted
trick is to use a number that never ends (1/3=0.3333...).
It does end in base 3, 6, 9, 12, etc.
You have to remember that base ten wasn't chosen because it has
mathematical advantages over other bases, but merely because people
counted on their fingers. In light of this fact, why is one-fifth
more deserving of an exact representation than one-third is?
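The base-dependence is easy to see side by side; a small sketch (Fraction assumes Python 2.6+):

>>> from decimal import Decimal
>>> Decimal(1) / Decimal(5)   # exact in base ten
Decimal("0.2")
>>> Decimal(1) / Decimal(3)   # rounded, just as binary rounds 1/5
Decimal("0.3333333333333333333333333333")
>>> from fractions import Fraction
>>> Fraction(1, 5), Fraction(1, 3)   # a rational keeps both exact
(Fraction(1, 5), Fraction(1, 3))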
On Feb 17, 1:45 pm, Lie <Lie.1...@gmail.com> wrote:
Any iteration with repeated divisions and additions can thus run the
denominators up. This sort of calculation is pretty common (examples:
compound interest, numerical integration).
Wrong. Addition and subtraction would only grow the denominator up to
a certain limit
I said repeated additions and divisions.
Anyways, addition and subtraction can increase the denominator a lot
if for some reason you are inputting numbers with many different
denominators.
Carl Banks   
On Feb 18, 1:25 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 17, 1:45 pm, Lie <Lie.1...@gmail.comwrote:
Any iteration with repeated divisions and additions can thus run the
denominators up. This sort of calculation is pretty common (examples:
compound interest, numerical integration).
Wrong. Addition and subtraction would only grow the denominator up to
a certain limit
I said repeated additions and divisions.
Repeated addition and subtraction can't make fractions grow
infinitely; only multiplication and division could.
Anyways, addition and subtraction can increase the denominator a lot
if for some reason you are inputting numbers with many different
denominators.
Up to a certain limit. After you reach the limit, the fraction will
always be simplifiable.
If the input numerator and denominator have a defined limit, repeated
addition and subtraction to another fraction will also have a defined
limit.   
>
Out of curiosity, of what use are denominator limits?
The problems where I've had to use rationals have
never afforded me such luxury, so I don't see what
your point is.
In Donald Knuth's The Art of Computer Programming, he described
floating slash arithmetic where the total number of bits used by the
numerator and denominator was bounded. IIRC, a use case was matrix
inversion.
casevh   
On Sun, 24 Feb 2008 11:09:32 -0800, Lie wrote:
I decided to keep the num/den limit low (10) because higher values might
obscure the fact that it does have limits.
You do realise that by putting limits on the denominator, you guarantee
that the sum of the fractions also has a limit on the denominator? In
other words, your "test" is useless.
With denominators limited to 1 through 9 inclusive, the sum will have a
denominator of 2*3*5*7 = 210. But that limit is a product (literally and
figuratively) of your artificial limit on the denominator. Add a fraction
with denominator 11, and the sum now has a denominator of 2310; add
another fraction n/13 and the sum goes to m/30030; and so on.

Steven   
On Feb 24, 4:50 pm, Steven D'Aprano <st...@REMOVETHIScybersource.com.au> wrote:
On Sun, 24 Feb 2008 11:09:32 0800, Lie wrote:
I decided to keep the num/den limit low (10) because higher values might
obscure the fact that it does have limits.
You do realise that by putting limits on the denominator, you guarantee
that the sum of the fractions also has a limit on the denominator? In
other words, your "test" is useless.
With denominators limited to 1 through 9 inclusive, the sum will have a
denominator of 2*3*5*7 = 210.
The limit will be 2*2*2*3*3*5*7. As MD said, "equivalently
the product over all primes p <= n of the highest power
of p not exceeding n".
But that limit is a product (literally and
figuratively) of your artificial limit on the denominator. Add a fraction
with denominator 11, and the sum now has a denominator of 2310; add
another fraction n/13 and the sum goes to m/30030; and so on.

Steven
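Mensanator's corrected bound is easy to verify; a quick sketch (fractions assumes Python 2.6+):

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

lcm = 1
for n in range(1, 10):
    # least common multiple of 1..9, the true bound on the denominator
    lcm = lcm * n // gcd(lcm, n)
print lcm   # 2520 == 2*2*2 * 3*3 * 5 * 7

from fractions import Fraction
s = sum(Fraction(1, d) for d in range(1, 10))
print s, lcm % s.denominator == 0   # 7129/2520 True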
  
Mensanator wrote:
On Feb 24, 1:09 pm, Lie <Lie.1...@gmail.com> wrote:
>I decided to keep the num/den limit low (10) because higher values might obscure the fact that it does have limits. [ ... ]
Out of curiosity, of what use are denominator limits?
The problems where I've had to use rationals have
never afforded me such luxury, so I don't see what
your point is.
In calculations dealing only with selected units of measure - dollars
and cents; pounds, ounces and tons; teaspoons; gallons; beer bottles,
28 to a case - the denominators would settle out pretty quickly.
In general mathematics, not.
I think that might be the point.
Mel.   
On Feb 24, 6:09 pm, Mel <mwil...@thewire.com> wrote:
Mensanator wrote:
On Feb 24, 1:09 pm, Lie <Lie.1...@gmail.com> wrote:
I decided to keep the num/den limit low (10) because higher values
might obscure the fact that it does have limits. [ ... ]
Out of curiosity, of what use are denominator limits?
The problems where I've had to use rationals have
never afforded me such luxury, so I don't see what
your point is.
In calculations dealing only with selected units of measure - dollars
and cents; pounds, ounces and tons; teaspoons; gallons; beer bottles,
28 to a case - the denominators would settle out pretty quickly.
Ok.
>
In general mathematics, not.
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
>
I think that might be the point.
If the point was as SDA suggested, where things like 16/16
are possible, I see that point. As gmpy demonstrates though,
such concerns are moot as that doesn't happen. There's no
reason to suppose a Python native rational type would be
implemented stupidly, is there?
>
Mel.
  
On Feb 24, 7:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
Since you are expecting to work with unlimited (or at least, very
high) precision, then the behavior of rationals is not a surprise. But
a naive user may be surprised when the running time for a calculation
varies greatly based on the values of the numbers. In contrast, the
running time for standard binary floating-point operations is fairly
constant.
>
If the point was as SDA suggested, where things like 16/16
are possible, I see that point. As gmpy demonstrates though,
such concerns are moot as that doesn't happen. There's no
reason to suppose a Python native rational type would be
implemented stupidly, is there?
In the current version of GMP, the running time for the calculation of
the greatest common divisor is O(n^2). If you include reduction to
lowest terms, the running time for a rational add is now O(n^2)
instead of O(n) for a high-precision floating-point addition or O(1)
for a standard floating-point addition. If you need an exact rational
answer, then the change in running time is fine. But you can't just
use rationals and expect a constant running time.
There are tradeoffs between IEEE-754 binary, Decimal, and Rational
arithmetic. They all have their appropriate problem domains.
And sometimes you just need unlimited precision, radix-6, fixed-point
arithmetic....
casevh   
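The running-time difference is easy to observe; a rough sketch (timeit.timeit assumes Python 2.6+, and the numbers are only illustrative):

import timeit

float_t = timeit.timeit(
    'sum(xs)', 'xs = [1.0 / d for d in range(1, 200)]', number=1000)
frac_t = timeit.timeit(
    'sum(xs)',
    'from fractions import Fraction; xs = [Fraction(1, d) for d in range(1, 200)]',
    number=1000)
print float_t, frac_t   # expect the Fraction sum to be far slower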
On Feb 24, 12:32 pm, Lie <Lie.1...@gmail.com> wrote:
On Feb 18, 1:25 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 17, 1:45 pm, Lie <Lie.1...@gmail.com> wrote:
Any iteration with repeated divisions and additions can thus run the
denominators up. This sort of calculation is pretty common (examples:
compound interest, numerical integration).
Wrong. Addition and subtraction would only grow the denominator up to
a certain limit
I said repeated additions and divisions.
Repeated addition and subtraction can't make fractions grow
infinitely, only multiplication and division could.
What part of "repeated additions and divisions" don't you understand?
Carl Banks   
On Feb 24, 10:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
No, that's a specific algorithm. That some random algorithm doesn't
blow up the denominators to the point of disk thrashing doesn't mean
they won't generally.
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Carl Banks   
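The compound-interest effect Carl describes is easy to reproduce; a sketch with assumed figures (a 5% nominal annual rate, compounded monthly; Fraction assumes Python 2.6+):

from fractions import Fraction

balance = Fraction(1000)
rate = 1 + Fraction(5, 100) / 12        # 241/240 per month, kept exact
for month in range(120):                # ten years of monthly compounding
    balance *= rate
print len(str(balance.denominator))     # hundreds of digits, and still growing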
Carl Banks <pa************@gmail.com> writes:
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Usually you would round to the nearest penny before storing in the
database.   
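A minimal sketch of that convention with the decimal module (the half-up mode here is just one common rounding policy):

>>> from decimal import Decimal, ROUND_HALF_UP
>>> Decimal("1234.56789").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
Decimal("1234.57")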
On Feb 25, 2:04 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Carl Banks <pavlovevide...@gmail.com> writes:
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Usually you would round to the nearest penny before storing in the
database.
I throw it out there as a hypothetical, not as a real world example.
"This is why we don't (usually) use rationals for accounting."
Carl Banks   
If you're interested in rationals, then you might want to have a look
at mxNumber which is part of the eGenix mx Experimental
Distribution: http://www.egenix.com/products/pytho...ntal/mxNumber/
It provides fast rational operations based on the GNU MP
library.
On 2008-02-25 07:58, Carl Banks wrote:
On Feb 24, 10:56 pm, Mensanator <mensana...@aol.com> wrote:
>But that doesn't mean they become less manageable than other unlimited precision usages. Did you see my example of the polynomial finder using Newton's Forward Differences Method? The denominators certainly don't settle out, neither do they become unmanageable. And that's general mathematics.
No, that's a specific algorithm. That some random algorithm doesn't
blow up the denominators to the point of disk thrashing doesn't mean
they won't generally.
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
It is quite possible to limit the denominator before storing it
in a database or other external resource using Farey neighbors: http://en.wikipedia.org/wiki/Farey_s...rey_neighbours
mxNumber implements an algorithm for this (not the most efficient
one, but it works nicely).

Marc-Andre Lemburg
eGenix.com
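The standard library grew a comparable operation: Fraction.limit_denominator (Python 2.6+) finds the best approximation under a denominator bound using closely related continued-fraction/Farey machinery:

>>> from fractions import Fraction
>>> pi_ish = Fraction(3141592653589793, 1000000000000000)
>>> pi_ish.limit_denominator(1000)   # the classic approximation of pi
Fraction(355, 113)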
Paul Rubin wrote:
Carl Banks <pa************@gmail.com> writes:
>Try doing numerical integration sometime with rationals, and tell me how that works out. Try calculating compound interest and storing results for 1000 customers every month, and compare the size of your database before and after.
Usually you would round to the nearest penny before storing in the
database.
There are cases where the law requires a higher precision or where the
rounding has to be a floor or...
Some things make no sense, and when dealing with money things make even
less sense, either to protect the customer or to make sure the State gets
its share of the transaction.
Here in Brasil, for example, gas stations have to display the price with 3
decimal digits and round the end result down (IIRC). A truck filling 117
liters at 1.239 reais per liter starts making a mess... If the owner wants
to track "losses" due to rounding or if he wants to make his inventory of
fuel accurately, he won't be able to save just what he billed the customer
otherwise things won't match by the end of the month.   
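A sketch of that gas-pump case with the decimal module (the figures and round-down rule are taken from the post; treat the details as illustrative):

>>> from decimal import Decimal, ROUND_DOWN
>>> exact = Decimal("117") * Decimal("1.239")   # kept exact for inventory
>>> exact
Decimal("144.963")
>>> billed = exact.quantize(Decimal("0.01"), ROUND_DOWN)
>>> billed, exact - billed                      # the 0.003 the owner must track
(Decimal("144.96"), Decimal("0.003"))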
On Feb 25, 12:58 am, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 24, 10:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
No, that's a specific algorithm. That some random algorithm doesn't
blow up the denominators to the point of disk thrashing doesn't mean
they won't generally.
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Nobody said rationals were the appropriate solution
to _every_ problem, just as floats and integers aren't
the appropriate solution to _every_ problem.
Your argument is that I should be forced to use
an inappropriate type when rationals _are_
the appropriate solution.
I have never used the Decimal type, but I'm not
calling for its removal because I know there are
cases where it's useful. If a rational type were
added, no one would force you to use it for
numerical integration.
>
Carl Banks
  
On Sun, 24 Feb 2008 23:41:53 -0800, Dennis Lee Bieber wrote:
On 24 Feb 2008 23:04:14 -0800, Paul Rubin <http://ph****@NOSPAM.invalid>
declaimed the following in comp.lang.python:
>Usually you would round to the nearest penny before storing in the database.
Tell that to the payroll processing at Lockheed... My paycheck
tends to vary from week to week as the database apparently carries
amounts to at least 0.001 resolution, only rounding when distributing
among various taxes for the paycheck itself. Tedious data entry in
Quicken as I have to keep tweaking various tax entries by +/- a penny
each week.
"Worst practice" in action *wink*
I predict they're using some funky in-house accounting software they've
paid millions to a consultancy firm (SAP?) for over the decades, written
by some guys who know lots of Cobol but no accounting, and the internal
data type is a float.
[snip]
Oh... And M$ - the currency type in VB is four decimal places.
Accounting standards do vary according to context: e.g. I see that
official Australian government reporting standards for banks are to report
in millions of dollars rounded to one decimal place. Accountants can
calculate things more or less any way they like, so long as they tell
you. I found one really dodgy example:
"The MFS Water Fund ARSN 123 123 642 ('the Fund') is a registered managed
investment scheme. ... MFS Aqua may calculate the Issue Price to the
number of decimal places it determines."
Sounds like another place using native floats. But it's all above board,
because they tell you they'll use an arbitrary number of decimal places,
all the better to confuse the auditors my dear.

Steven   
On Sun, 24 Feb 2008 23:09:39 -0800, Carl Banks wrote:
On Feb 25, 2:04 am, Paul Rubin <http://phr...@NOSPAM.invalidwrote:
>Carl Banks <pavlovevide...@gmail.comwrites:
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Usually you would round to the nearest penny before storing in the database.
I throw it out there as a hypothetical, not as a real world example.
"This is why we don't (usually) use rationals for accounting."
But since accountants (usually) round to the nearest cent, accounting is
a *good* use case for rationals. Decimal might be better, but floats are
the worst.
I wonder why you were doing numerical integration with rationals in the
first place? Are you one of those ABC users (like Guido) who have learnt
to fear rationals because ABC didn't have floats?

Steven   
On 2008-02-25 16:03, Steven D'Aprano wrote:
On Sun, 24 Feb 2008 23:09:39 -0800, Carl Banks wrote:
>On Feb 25, 2:04 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>>Carl Banks <pavlovevide...@gmail.com> writes: Try doing numerical integration sometime with rationals, and tell me how that works out. Try calculating compound interest and storing results for 1000 customers every month, and compare the size of your database before and after. Usually you would round to the nearest penny before storing in the database.
I throw it out there as a hypothetical, not as a real world example. "This is why we don't (usually) use rationals for accounting."
But since accountants (usually) round to the nearest cent, accounting is
a *good* use case for rationals. Decimal might be better, but floats are
the worst.
That's not necessarily true in general: finance libraries usually try
to always do calculations at the best possible precision and then only
apply rounding at the very end of a calculation. Most of the time a float
is the best data type for this.
Accounting uses a somewhat different approach, and one which varies
between the different accounting standards and use cases. The decimal
type is usually better suited for this, since it supports various
ways of doing rounding.
Rationals are not always the best alternative, but they do help
in cases where you need to guarantee that the sum of all parts
is equal to the whole for all values. Combined with interval
arithmetic they go a long way towards more accurate calculations.

Marc-Andre Lemburg
eGenix.com
On Feb 25, 9:41 am, Mensanator <mensana...@aol.com> wrote:
On Feb 25, 12:58 am, Carl Banks <pavlovevide...@gmail.com> wrote:
On Feb 24, 10:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
No, that's a specific algorithm. That some random algorithm doesn't
blow up the denominators to the point of disk thrashing doesn't mean
they won't generally.
Try doing numerical integration sometime with rationals, and tell me
how that works out. Try calculating compound interest and storing
results for 1000 customers every month, and compare the size of your
database before and after.
Nobody said rationals were the appropriate solution
to _every_ problem, just as floats and integers aren't
the appropriate solution to _every_ problem.
I was answering your claim that rationals are appropriate for general
mathematical uses.
Your argument is that I should be forced to use
an inappropriate type when rationals _are_
the appropriate solution.
I don't know where you got that idea.
My argument is that rationals aren't suitable for ordinary uses
because they have poor performance and can easily blow up in your
face, trash your disk, and crash your program (your whole system if
you're on Windows).
In other words, 3/4 in Python rightly yields a float and not a
rational.
Carl Banks   
On 2008-02-25, Carl Banks <pa************@gmail.com> wrote:
In other words, 3/4 in Python rightly yields a float
Unless you're in the camp that believes 3/4 should yield the
integer 0. ;)
and not a rational.

Grant Edwards grante Yow! Zippy's brain cells
at are straining to bridge
visi.com synapses ...   
Grant Edwards wrote:
On 2008-02-25, Carl Banks <pa************@gmail.com> wrote:
>In other words, 3/4 in Python rightly yields a float
Unless you're in the camp that believes 3/4 should yield the
integer 0. ;)
>and not a rational.
No, that wouldn't be rational ;)

Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/   
On Mon, 2008-02-25 at 16:27 +0000, Grant Edwards wrote:
On 2008-02-25, Carl Banks <pa************@gmail.com> wrote:
In other words, 3/4 in Python rightly yields a float
Unless you're in the camp that believes 3/4 should yield the
integer 0. ;)
I'm in the camp that believes that 3/4 does indeed yield the integer 0,
but should be spelled 3//4 when that is the intention.
Cheers,
Cliff   
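For reference, the two spellings on Python 2.x (in 3.0 true division is the default and the __future__ import is unnecessary):

>>> from __future__ import division
>>> 3 / 4    # true division
0.75
>>> 3 // 4   # floor division, when the integer really is what you want
0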
On Feb 25, 11:34 am, casevh <cas...@gmail.com> wrote:
On Feb 24, 7:56 pm, Mensanator <mensana...@aol.com> wrote:
But that doesn't mean they become less manageable than
other unlimited precision usages. Did you see my example
of the polynomial finder using Newton's Forward Differences
Method? The denominators certainly don't settle out, neither
do they become unmanageable. And that's general mathematics.
Since you are expecting to work with unlimited (or at least, very
high) precision, then the behavior of rationals is not a surprise. But
a naive user may be surprised when the running time for a calculation
varies greatly based on the values of the numbers. In contrast, the
running time for standard binary floating-point operations is fairly
constant.
If the point was as SDA suggested, where things like 16/16
are possible, I see that point. As gmpy demonstrates though,
such concerns are moot as that doesn't happen. There's no
reason to suppose a Python native rational type would be
implemented stupidly, is there?
In the current version of GMP, the running time for the calculation of
the greatest common divisor is O(n^2). If you include reduction to
lowest terms, the running time for a rational add is now O(n^2)
instead of O(n) for a high-precision floating-point addition or O(1)
for a standard floating-point addition. If you need an exact rational
answer, then the change in running time is fine. But you can't just
use rationals and expect a constant running time.
There are tradeoffs between IEEE-754 binary, Decimal, and Rational
arithmetic. They all have their appropriate problem domains.
I very much agree with this statement. Fractions do have their
weaknesses, and so do Decimal and hardware floating point. They all have
their own uses, their own scenarios where they're appropriate. If you
need full-speed calculation, it is clear that floating point wins all
over the place; OTOH, if you need to manage your precision carefully,
Fraction and Decimal both have their own pluses and minuses.