I've never had any call to use floating point numbers and now that I
want to, I can't!
*** Python 2.5.1 (r251:54863, May 1 2007, 17:47:05) [MSC v.1310 32
bit (Intel)] on win32. ***
>>> float(.3)
0.29999999999999999
>>> foo = 0.3
>>> foo
0.29999999999999999
>>>
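Nothing here is actually broken: 0.3 has no exact binary representation, so repr() shows the nearest double to 17 significant digits while str() rounds more gently. A minimal way to see both (Python 2.x, matching the session above; the long digit string is the standard IEEE double nearest to 0.3):
>>> print '%.20f' % 0.3
0.29999999999999998890
>>> print str(0.3)
0.3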
Marc 'BlackJack' Rintsch <bj****@gmx.net> writes:
For implementing this in Python you have to carry an "is allowed to be
coerced to float" flag with every integer object to decide at run time if
it is an error to add it to a float or not.
Yeah, I guess it's not workable in a dynamic language. Hmm. Well I
could think of some crazy ways to do it.
Or you make Python into a statically typed language like Haskell.
But then it's not Python anymore IMHO.
There are some languages like Boo that are sort of halfway between
Python and Haskell, so maybe that kind of idea could be used in them.
On Thu, 28 Feb 2008 00:30:11 -0800, Dennis Lee Bieber wrote:
On Thu, 28 Feb 2008 01:25:32 -0000, Steven D'Aprano
<st***@REMOVE-THIS-cybersource.com.au> declaimed the following in
comp.lang.python:
>When it comes to mixed arithmetic, it's just too darn inconvenient to forbid automatic conversions. Otherwise you end up either forbidding things like 1 + 1.0 on the basis that it isn't clear whether the programmer wants an int result or a float result, or else even more complex rules ("if the left operator is an int, and the result of the addition has a zero floating-point part, then the result is an int, otherwise it's an error, but if the left operator is a float, the result is always a float"). Or a proliferation of operators, with integer and floating point versions of everything.
Automatic conversions, okay... but converting a result when all
inputs are of one type, NO...
What? How does that make any sense?
By that logic, we should see this:
>>> len("a string")
'8'
>>> len([2, 4, 6])
[3]
>>> len({'key': 'value'})
{1: None}
The only rule needed is very simple: promote simpler types to the
more complex type involved in the current expression (with expression
defined as "value operator value" -- so (1/2) * 3.0 is INTEGER 1/2,
resultant 0 then promoted to float 0.0 to be compatible with 3.0).
Very simple rule, used by very many traditional programming
languages.
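That C-style promotion rule is exactly what Python 2 does today, which is easy to demonstrate:
>>> (1/2) * 3.0
0.0
>>> (1.0/2) * 3.0
1.5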
And rightly rejected by many other programming languages, including
modern Python, not to mention calculators, real mathematics and common
sense.
--
Steven
On Feb 28, 3:30 am, Dennis Lee Bieber <wlfr...@ix.netcom.com> wrote:
Automatic conversions, okay... but converting a result when all
inputs are of one type, NO...
People, this is so much cognitive dissonance it's not even funny.
There is absolutely nothing obvious about 1/2 returning a number that
isn't at least approximately equal to one half. There is nothing self-
evident about operations maintaining types.
You people can't tell the difference between "obvious" and "learned
conventions that came about because of limitations in the hardware at
the time". Nobody would have come up with a silly rule like "x op y
must always have the same type as x and y" if computer hardware had
been up to the task when these languages were created.
Very simple rule, used by very many traditional programming languages.
I'd be interested in hearing what languages those are.
Carl Banks
On 2008-02-28, Carl Banks <pa************@gmail.com> wrote:
>Automatic conversions, okay... but converting a result when all inputs are of one type, NO...
People, this is so much cognitive dissonance it's not even funny.
There is absolutely nothing obvious about 1/2 returning a number that
isn't at least approximately equal to one half.
I guess obviousness is in the eye of the beholder. To me it's
obvious that "1" and "2" are integers, and it's also obvious
that 2 goes into 1 zero times.
There is nothing self-evident about operations maintaining
types.
By that logic, there's no reason 1 + "two" shouldn't
convert one operand or the other.
You people can't tell the difference between "obvious" and "learned
conventions that came about because of limitations in the hardware at
the time".
It seems to me that the expectation that 1/2 yield 0.5 is just
as much a convention as that it yield 0 or a true rational.
--
Grant Edwards                   grante             Yow!  I am covered with
                                  at               pure vegetable oil and I am
                               visi.com            writing a best seller!
Hallöchen!
Grant Edwards writes:
[...]
>You people can't tell the difference between "obvious" and "learned conventions that came about because of limitations in the hardware at the time".
It seems to me that the expectation that 1/2 yield 0.5 is just as
much a convention as that it yield 0 or a true rational.
Should we set up a poll? Do you really think that fewer than 90% of
the voters would enter 0.5 in the result edit field?
Tschö,
Torsten.
--
Torsten Bronger, aquisgrana, europa vetus
Jabber ID: br*****@jabber.org
(See http://ime.webhop.org for further contact info.)
On Feb 28, 9:36 am, Grant Edwards <gra...@visi.com> wrote:
On 2008-02-28, Carl Banks <pavlovevide...@gmail.com> wrote:
Automatic conversions, okay... but converting a result when
all inputs are of one type, NO...
People, this is so much cognitive dissonance it's not even funny.
There is absolutely nothing obvious about 1/2 returning a number that
isn't at least approximately equal to one half.
I guess obviousness is in the eye of the beholder. To me it's
obvious that "1" and "2" are integers, and it's also obvious
that 2 goes into 1 zero times.
2 goes into 1 0.5 times.
There is nothing self-evident about operations maintaining
types.
By that logic, there's no reason 1 + "two" shouldn't
convert one operand or the other.
False dilemma, chief. That preserving type is not self-evident
doesn't make all operations that don't preserve type a good idea.
You people can't tell the difference between "obvious" and "learned
conventions that came about because of limitations in the hardware at
the time".
It seems to me that the expectation that 1/2 yield 0.5 is just
as much a convention as that it yield 0 or a true rational.
Sure it is, but unlike the old convention, it's the obvious one.
Carl Banks
On Thu, 28 Feb 2008 06:10:13 -0800 (PST)
Carl Banks <pa************@gmail.com> wrote:
On Feb 28, 3:30 am, Dennis Lee Bieber <wlfr...@ix.netcom.com> wrote:
Automatic conversions, okay... but converting a result when all
inputs are of one type, NO...
People, this is so much cognitive dissonance it's not even funny.
I'll say.
There is absolutely nothing obvious about 1/2 returning a number that
isn't at least approximately equal to one half. There is nothing self-
evident about operations maintaining types.
Not obvious to you. You are using subjective perception as if it was a
law of nature. If "obvious" was the criteria then I would argue that
the only proper result of integer division is (int, int). Give me the
result and the remainder and let me figure it out.
You people can't tell the difference between "obvious" and "learned
conventions that came about because in limitations in the hardware at
the time". Nobody would have come up with a silly rule like "x op y
must always have the same type as x and y" if computer hardware had
been up to the task when these languages were created.
What makes you say they weren't? Calculating machines that handled
floating point are older than Python by far.
--
D'Arcy J.M. Cain <da***@druid.net>   |  Democracy is three wolves
http://www.druid.net/darcy/          |  and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP)    |  what's for dinner.
On Thu, 2008-02-28 at 11:22 -0500, D'Arcy J.M. Cain wrote:
Not obvious to you. You are using subjective perception as if it was
a law of nature. If "obvious" was the criteria then I would argue that
the only proper result of integer division is (int, int). Give me the
result and the remainder and let me figure it out.
I'd like to point out that now you are talking about int OP int
returning a tuple, not an int.
D'Arcy J.M. Cain wrote:
Not obvious to you. You are using subjective perception as if it was
a law of nature. If "obvious" was the criteria then I would argue that
the only proper result of integer division is (int, int). Give me the
result and the remainder and let me figure it out.
J. Cliff Dyer <jc*@sdf.lonestar.org> wrote:
I'd like to point out that now you are talking about int OP int
returning a tuple, not an int.
No, D'Arcy's point was that "obvious" isn't the criteria because it
would lead to behaviour that no one wants.
No one is going to win this argument by using words like "natural" or
"obvious". You're just going to have to accept that there is no
consensus on this issue and there never was. In the end only one
person's opinion of what is natural and obvious really matters.
Ross Ridge
--
l/ // Ross Ridge -- The Great HTMU
[oo][oo] rr****@csclub.uwaterloo.ca
-()-/()/ http://www.csclub.uwaterloo.ca/~rridge/
db //
On Thu, 28 Feb 2008 13:32:06 -0500
"J. Cliff Dyer" <jc*@sdf.lonestar.orgwrote:
On Thu, 2008-02-28 at 11:22 -0500, D'Arcy J.M. Cain wrote:
Not obvious to you. You are using subjective perception as if it was
a law of nature. If "obvious" was the criteria then I would argue that
the only proper result of integer division is (int, int). Give me the
result and the remainder and let me figure it out.
I'd like to point out that now you are talking about int OP int
returning a tuple, not an int.
Which would be stupid. Good thing I don't think that "obvious" should
be the criteria.
--
D'Arcy J.M. Cain <da***@druid.net>   |  Democracy is three wolves
http://www.druid.net/darcy/          |  and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP)    |  what's for dinner.
"D'Arcy J.M. Cain" <da***@druid.netwrites:
I'd like to point out that now you are talking about int OP int
returning a tuple, not an int.
Which would be stupid. Good thing I don't think that "obvious" should
be the criteria.
We already have that function (divmod) and it is very useful.
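For anyone who hasn't met it, divmod returns quotient and remainder in a single call (and, like /, it floors for negative operands):
>>> divmod(7, 2)
(3, 1)
>>> divmod(-1, 2)
(-1, 1)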
On 28 Feb 2008 12:25:14 -0800
Paul Rubin <"http://phr.cx"@NOSPAM.invalid> wrote:
"D'Arcy J.M. Cain" <da***@druid.netwrites:
I'd like to point out that now you are talking about int OP int
returning a tuple, not an int.
Which would be stupid. Good thing I don't think that "obvious" should
be the criteria.
We already have that function (divmod) and it is very useful.
Yes it is, and it has nothing to do with this discussion. I don't
think anyone here has suggested that methods should return the type of
their arguments, although there have been a few claims that such
suggestions were made.
--
D'Arcy J.M. Cain <da***@druid.net>   |  Democracy is three wolves
http://www.druid.net/darcy/          |  and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP)    |  what's for dinner.
On Thu, 28 Feb 2008 11:22:43 -0500, D'Arcy J.M. Cain wrote:
Calculating machines that handled
floating point are older than Python by far.
Yes, and they almost universally give the result 1/2 = 0.5.
--
Steven
On Thu, 28 Feb 2008 14:41:56 -0500, Ross Ridge wrote:
You're just going to have to accept that there is no
consensus on this issue and there never was.
But that's not true. The consensus, across the majority of people (both
programmers and non-programmers alike) is that 1/2 should return 0.5.
There's a small minority that argue for a rational result, and a bigger
minority who argue for 0.
The interesting case is -1/2. According to the argument that "2 doesn't
go into 1", -1/2 should also return 0. But that's not what Python
returns, so it looks like the "int division" camp is screwed no matter
whether Python keeps the status quo or the other status quo.
--
Steven
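Steven's -1/2 example is easy to check: Python 2 floors toward negative infinity, where C-style truncation would give 0:
>>> -1 / 2
-1
>>> -1 // 2
-1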
Steven D'Aprano <st***@REMOVE-THIS-cybersource.com.au> writes:
any restriction that functions must return the same
type as all its arguments is just crazy.
I don't think anyone is saying that they should necessarily do that
in general. Just in some specific cases.
Steven D'Aprano <st***@REMOVE-THIS-cybersource.com.au> writes:
Calculating machines that handled
floating point are older than Python by far.
Yes, and they almost universally give the result 1/2 = 0.5.
Can you name an example of a calculating machine that both:
1) represents the integer 1 and the real number 1.0 as distinct objects;
and
2) says 1/2 = 0.5 ?
On Feb 28, 10:41 pm, Steven D'Aprano <st...@REMOVE-THIS-cybersource.com.au> wrote:
[...]
The interesting case is -1/2. According to the argument that "2 doesn't
go into 1", -1/2 should also return 0. But that's not what Python
returns, so it looks like the "int division" camp is screwed no matter
whether Python keeps the status quo or the other status quo.
I'm in the "int division camp" and I like that -1 / 2 is -1. It's
nice because I can be sure that for any p and q:
* 0 <= p%q < q
* p = q*(p/q) + p%q
In other words, p/q is the largest r such that rq <= p.
It ensures that / and % are well behaved in a lot of situations, e.g:
* (x - y)%n == 0 if and only if x%n == y%n
* if a/n == b/n then abs(a - b) < n
...
So that doesn't screw me, on the contrary I find it a mathematically
very sound decision. What screws me is that I'm going to have to type
p//q in the future.
--
Arnaud
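Arnaud's invariants are easy to sanity-check with floor division (they hold for q > 0 regardless of which division semantics are in force):
>>> all(0 <= p % q < q and p == q * (p // q) + p % q
...     for p in range(-50, 50) for q in range(1, 20))
True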
"Ross Ridge" <rr****@caffeine.csclub.uwaterloo.cawrote in message
news:fq**********@rumours.uwaterloo.ca...
| Ross Ridge wrote:
| You're just going to have to accept that there that there is no
| concensus on this issue and there never was.
|
| Steven D'Aprano <st***@REMOVE-THIS-cybersource.com.au> wrote:
| >But that's not true. The consensus, across the majority of people (both
| >programmers and non-programmers alike) is that 1/2 should return 0.5.
|
| You're deluding yourself.
As a major participant in the discussion, who initially opposed the change,
I think Steven is right.
| If there were a consensus on this issue then
| it wouldn't be so controversial.
The controversy was initially inflamed by issues that did not directly bear
on the merit of the proposal. Worst was its cloaking it in a metaphysical
argument about the nature of integers. It also did not help that Guido
initially had trouble articulating the *practical*, Pythonic reason for the
proposal.
To me, the key is this (very briefly): The current overloading of '/' was
copied from C. But Python is crucially different from C in that
expressions can generally be generic, with run-time rather than compile
time typing of variables. But there are no practical use cases that anyone
ever presented for expr_a / expr_b having two different numerical values,
given fixed numerical values for expr_a and expr_b, depending on the
number types of the two expressions.
Beyond the change itself, another issue was its timing. When I proposed
that the version making 1/2=.5 the default be called 3.0, and Guido agreed,
many who agreed with the change in theory but were concerned with stability
of the 2.x series agreed that that would make it more palatable.
A third issue was the work required to make the change. The future
mechanism eased that, and the 2to3 conversion program will also issue
warnings.
Terry Jan Reedy
"Arnaud Delobelle" <ar*****@googlemail.comwrote in message
| What screws me is that I'm going to have to type p//q in the future.
When I compare that pain to the gain of not having to type an otherwise
extraneous 'float(...)', and the gain of disambiguating the meaning of a/b
(for builtin numbers at least), I think there will be a net gain for the
majority.
tjr
On Feb 29, 10:10 pm, "Terry Reedy" <tjre...@udel.edu> wrote:
"Arnaud Delobelle" <arno...@googlemail.com> wrote in message
| What screws me is that I'm going to have to type p//q in the future.
When I compare that pain to the gain of not having to type an otherwise
extraneous 'float(...)', and the gain of disambiguating the meaning of a/b
(for builtin numbers at least), I think there will be a net gain for the
majority.
You may be right. I can see the rationale for this change (although
many aspects feel funny, such as doing integral arithmetic with
floats, e.g. 3.0//2.0).
Perhaps it'll be like when I quit smoking six years ago. I didn't
enjoy it although I knew it was good for me... And now I don't regret
it even though I still have the occasional craving.
--
Arnaud
"Arnaud Delobelle" <ar*****@googlemail.comwrote in message
news:ea**********************************@b1g2000h sg.googlegroups.com...
| Perhaps it'll be like when I quit smoking six years ago. I didn't
| enjoy it although I knew it was good for me... And now I don't regret
| it even though I still have the occasional craving.
In following the development of Py3, there have been a few decisions that I
wish had gone otherwise. But I agree with more than the majority and am
not going to deprive myself of what I expect to be an improved experience
without giving Py3 a fair trial.
tjr
Not sure if this is common knowledge yet but Sympy, http://code.google.com/p/sympy, has a rational type.
In [2]: from sympy import *
In [3]: Rational(21,4)
Out[3]: 21/4
In [4]: Rational(21,4)+Rational(3,4)
Out[4]: 6
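(Python 2.6/3.0 will ship a similar type in the standard library's fractions module, which shows up later in this thread:)
>>> from fractions import Fraction
>>> Fraction(21, 4) + Fraction(3, 4)
Fraction(6, 1)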
On Feb 29, 5:33 am, Steven D'Aprano <st...@REMOVE-THIS-cybersource.com.au> wrote:
And rightly rejected by many other programming languages, including
modern Python, not to mention calculators, real mathematics and
common sense.
Lost me again. I was not aware that calculators, real mathematics
and common sense were programming languages.
I didn't say they were. Please parse my sentence again.
In the widest sense of the terms computer and programming language,
calculators and real mathematics actually are programming languages.
A programming language is a way to communicate problems to a computer.
A computer is anything that does computation (and that includes a
calculator, and a room full of people doing calculations with pencil
and paper[1]). The expressions we write on a calculator are a (very
limited) programming language, while mathematical convention is a
language for communicating a mathematician's problems to the
computers[2] and to other mathematicians.
[1] Actually the term computer was first used to refer to this group
of people.
[2] The computers in this sense being the people who do the computation.
Steven D'Aprano <st***@REMOVE-THIS-cybersource.com.au> writes:
def mean(data): return sum(data)/len(data)
That does the right thing for data, no matter of what it consists of:
floats, ints, Decimals, rationals, complex numbers, or a mix of all of
the above.
One of those types is not like the others: for all of them except int,
the quotient operation actually is the inverse of multiplication.
So I'm unpersuaded that the "mean" operation above does the "right
thing" for ints. If the integers being averaged were prices
in dollars, maybe the result type should even be decimal.
For this reason I think // is a good thing and I've gotten accustomed
to using it for integer division. I can live with int/int=float but
find it sloppy and would be happier if int/int always threw an error
(convert explicitly if you want a particular type result).
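To make the mean() disagreement concrete (Python 2 semantics, hypothetical data):
>>> data = [1, 2]
>>> sum(data) / len(data)
1
>>> # with "from __future__ import division" the same expression gives 1.5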
Lie <Li******@gmail.com> writes:
That's quite complex and restrictive, but probably it's because my
mind is not tuned to Haskell yet.
That aspect is pretty straightforward, other parts like only being
able to do i/o in functions having a special type are much more confusing.
Anyway, I don't think Python should
work that way, because Python has a plan for numeric type unification
which would make all numerical types appear as a single type, and
that requires removing the operators' limitations.
Well I think the idea is to have a hierarchy of nested numeric types,
not a single type.
from __future__ import division
a = 10
b = 5
c = a / b
if c * b == a: print 'multiplication is inverse of division'
Try with a=7, b=25
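The "hierarchy of nested numeric types" Paul mentions eventually landed as PEP 3141's numeric tower, exposed as the numbers module in 2.6/3.0:
>>> import numbers
>>> isinstance(1, numbers.Integral)
True
>>> isinstance(1, numbers.Real)
True
>>> isinstance(1.5, numbers.Integral)
False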
On Mar 2, 10:02 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Lie <Lie.1...@gmail.com> writes:
Anyway, I don't think Python should
work that way, because Python has a plan for numeric type unification
which would make all numerical types appear as a single type, and
that requires removing the operators' limitations.
Well I think the idea is to have a hierarchy of nested numeric types,
not a single type.
You hit the right note, but what I meant is the numeric type
unification would make it _appear_ to consist of a single numeric type
(yeah, I know it isn't actually, but what appears from outside isn't
always what's inside).
from __future__ import division
a = 10
b = 5
c = a / b
if c * b == a: print 'multiplication is inverse of division'
Try with a=7, b=25
They should still compare true, but they don't. The reason they
don't is float's finite precision, which is not exactly what we're
talking about here, since it doesn't change the fact that
multiplication and division are inverses of each other. One way to
handle this situation is to do an epsilon-aware comparison (as should
be done with any comparison involving floats), but I didn't, because
my intention is to highlight the real point, that multiplication is
indeed the inverse of division, and I wanted to avoid obscuring that
with the epsilon comparison.
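A sketch of the epsilon-aware comparison being discussed; the tolerance below is a judgment call, not a fixed rule:
from __future__ import division

def almost_equal(x, y, eps=1e-9):
    # Relative comparison, falling back to absolute near zero.
    return abs(x - y) <= eps * max(abs(x), abs(y), 1.0)

a, b = 7, 25
c = a / b
print c * b == a              # may be False because of rounding
print almost_equal(c * b, a)  # True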
Lie <Li******@gmail.com> writes:
You hit the right note, but what I meant is the numeric type
unification would make it _appear_ to consist of a single numeric type
(yeah, I know it isn't actually, but what appears from outside isn't
always what's inside).
That is clearly not intended; floats and decimals and integers are
really different from each other and Python has to treat them distinctly.
Try with a=7, b=25
They should still compare true, but they don't. The reason why they
don't is because of float's finite precision, which is not exactly
what we're talking here since it doesn't change the fact that
multiplication and division are inverse of each other.
What? Obviously they are not exact inverses for floats, as that test
shows. They would be inverses for mathematical reals or rationals,
but Python does not have those.
One way to handle this situation is to do an epsilon aware
comparison (as should be done with any comparison involving floats),
but I don't do it cause my intention is to clarify the real problem
that multiplication is indeed inverse of division and I want to
avoid obscuring that with the epsilon comparison.
I think you are a bit confused. That epsilon aware comparison thing
acknowledges that floats only approximate the behavior of mathematical
reals. When we do float arithmetic, we accept that "equal" often
really only means "approximately equal". But when we do integer
arithmetic, we do not expect or accept equality as being approximate.
Integer equality means equal, not approximately equal. That is why
int and float arithmetic cannot work the same way.
Paul Rubin wrote:
I can live with int/int=float but
find it sloppy and would be happier if int/int always threw an error
(convert explicitly if you want a particular type result).
Better yet, how hard would it be to define an otherwise int-like type
that did not define a non-flooring division operator? Are there any
real use cases for such a type? Maybe a division operator could be
defined to perform a run-time check that, for an operation n/d==q,
n==q*d; else, throw an exception. Code written to support duck-typed
integers should work with such a UDT "out of the box."
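A minimal sketch of that checked-division integer (hypothetical class name; Python 2 syntax, where __div__ handles the / operator):
class CheckedInt(int):
    def __div__(self, other):
        # Divide exactly or raise, per the run-time check suggested above.
        q, r = divmod(self, other)
        if r != 0:
            raise ValueError('%d / %d is not exact' % (self, other))
        return CheckedInt(q)

print CheckedInt(10) / 5   # 2
print CheckedInt(7) / 2    # raises ValueError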
On Mar 1, 12:29 pm, "Anand Patil" <anand.prabhakar.pa...@gmail.com>
wrote:
Not sure if this is common knowledge yet but Sympy, http://code.google.com/p/sympy, has a rational type.
I hadn't heard of this before, thanks for the link.
Very nifty, lots of goodies not found in gmpy (although
it seems to lack a modular inverse function and the linear
congruence solver that can be derived from it, making it
of no value to me).
Alas, it's written in Python. Who writes a math library
in Python?
Nevertheless, I thought I would try out the Rational numbers.
Once I figured out how to use them, I converted my Polynomial
Finder by Newton's Forward Difference Method program to use
sympy instead of gmpy.
I have a test case where I create a 66-degree polynomial whose
coefficients are large rationals. The polynomial was
calculated flawlessly.
<partial output>
## sympy
## Term0: [66, -66, 66, -66, 66, -66, 66, -66, 66, -66, 66,
##   -66, 66, -66, 66, -66, 66, -66, 66, -66, 66, -66, 66, -66, 66, -66,
##   66, -66, 66, -66, 66, -66, 66, -66, 66, -66, 66, -66, 66, -66, 66,
##   -66, 66, -66, 66, -66, 66, -66, 66, -66, 66, -66, 66, -66, 66, -66,
##   66, -66, 66, -66, 66, -66, 66, -66, 66, -66, 66, 0]
##
## Seq: [66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
##   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
##   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
##   0, 0, 0, 0, 0, 66]
##
## The Polynomial:
##
##                                               1
##  ------------------------------------------------------------------------------------------- * n**66
##  8247650592082470666723170306785496252186258551345437492922123134388955774976000000000000000
##
##                                              -67
##  ------------------------------------------------------------------------------------------ * n**65
##  249928805820680929294641524448045340975341168222589014937034034375422902272000000000000000
##
##                                               67
##  --------------------------------------------------------------------------------------- * n**64
##  230703513065243934733515253336657237823391847590082167634185262500390371328000000000000
But because they are calculated using Python,
it took 175 seconds compared to 0.2 seconds
for gmpy to do the same polynomial.
So, I'll keep it around for its neat features
that gmpy doesn't have, but it won't replace gmpy
for any serious work.
On Mar 3, 4:39 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
[...]
You are right, C is even worse than I remembered.
It's good enough to be the language used for the reference
implementation of python :-)
[...]
--
Arnaud
On Mar 2, 11:36 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Lie <Lie.1...@gmail.com> writes:
You hit the right note, but what I meant is the numeric type
unification would make it _appear_ to consist of a single numeric type
(yeah, I know it isn't actually, but what appears from outside isn't
always what's inside).
That is clearly not intended; floats and decimals and integers are
really different from each other and Python has to treat them distinctly.
In certain operations it would, such as:
from decimal import Decimal
a = Decimal('32.324')
b = 90.3453
c = 43
d = a + b + c   # this should work without manual type casting
This behavior is what is intended in numeric type unification: floats
and decimals and integers should work together flawlessly without the
need for manual type casting. This gives the _impression_ of a single
numeric type (although programmers should still be aware that the
underlying types exist). It's true Python has to treat them
differently, but programmers would be able to treat them all the same
(at least for the most part).
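As of Python 2.5 that mixed expression is exactly what fails, which is the point being made; a minimal check:
>>> from decimal import Decimal
>>> Decimal('32.324') + 90.3453
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for +: 'Decimal' and 'float'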
Try with a=7, b=25
They should still compare true, but they don't. The reason why they
don't is because of float's finite precision, which is not exactly
what we're talking here since it doesn't change the fact that
multiplication and division are inverse of each other.
What? Obviously they are not exact inverses for floats, as that test
shows. They would be inverses for mathematical reals or rationals,
but Python does not have those.
When I said multiplication and division are inverses, I was pointing
out the fact that even though float's inexactness makes them imperfect
inverses, mult & div are still inverses of each other. In practice, the
inverse behavior is impossible unless we have a way to represent
real numbers (which we don't), and the *workaround* to make them work
is to do an epsilon comparison.
When I'm talking about things I usually talk in purely theoretical
terms first, and treat practical implementations that don't work that
way as workarounds made within their limitations. In this case, the
theoretical condition is that multiplication and division are inverses
of each other. The practical considerations are that float is inexact
and reals are impossible, and thus an epsilon comparison is necessary
to work around float's limitations so multiplication and division can
still be inverses.
Aside: Python will have rationals soon anyway (the fractions module).
One way to handle this situation is to do an epsilon aware
comparison (as should be done with any comparison involving floats),
but I don't do it cause my intention is to clarify the real problem
that multiplication is indeed inverse of division and I want to
avoid obscuring that with the epsilon comparison.
I think you are a bit confused. That epsilon-aware comparison thing
acknowledges that floats only approximate the behavior of mathematical
reals.
Yes, I realized that floats aren't the same as reals.
When we do float arithmetic, we accept that "equal" often
really only means "approximately equal". But when we do integer
arithmetic, we do not expect or accept equality as being approximate.
Integer equality means equal, not approximately equal. That is why
int and float arithmetic cannot work the same way.
No, no, they don't work the same way, but they should appear to work
the same way reals in pure mathematics do. Again, I'm talking in
theory first: ints and floats should work the same way, but since
practical considerations make that impossible, they should at
least appear to work the same way (or they would have become
completely different things; remember duck typing?).
On Mar 4, 7:11 am, Lie <Lie.1...@gmail.com> wrote:
On Mar 2, 11:36 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Try with a=7, b=25
They should still compare true, but they don't. The reason they
don't is float's finite precision, which is not exactly what we're
talking about here, since it doesn't change the fact that
multiplication and division are inverses of each other.
What? Obviously they are not exact inverses for floats, as that test
shows. They would be inverses for mathematical reals or rationals,
but Python does not have those.
When I said multiplication and division are inverses, I was pointing
out the fact that even though float's inexactness makes them imperfect
inverses, mult & div are still inverses of each other. In practice, the
inverse behavior is impossible unless we have a way to represent
real numbers (which we don't), and the *workaround* to make them work
is to do an epsilon comparison.
A mildly interesting Py3k experiment:
Python 3.0a3+ (py3k:61229, Mar 4 2008, 21:38:15)
[GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from fractions import Fraction
>>> from decimal import Decimal
>>> def check_accuracy(num_type, max_val=1000):
...     wrong = 0
...     for x in range(1, max_val):
...         for y in range(1, max_val):
...             wrong += (x / num_type(y)) * y != x
...     return wrong
...
>>> check_accuracy(float)
101502
>>> check_accuracy(Decimal)
310013
>>> check_accuracy(Fraction)
0
The conclusions I came to based on running that experiment are:
- Decimal actually appears to suffer more rounding problems than float
for rational arithmetic
- Decimal appears to be significantly slower than Fraction for small
denominator rational arithmetic
- both Decimal and Fraction are significantly slower than builtin
floats
The increased number of inaccurate answers with Decimal (31% vs 10%)
is probably due to the fact that it is actually more precise than
float - for the builtin floats, the rounding error in the division
step may be cancelled out by a further rounding error in the
multiplication step (this can be seen happening in the case of ((1 /
3.0) * 3) == 1.0, where the result of the multiplication ends up being
1.0 despite the rounding error on division, due to the next smallest
floating value being 0.99999999999999989).
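That cancellation is easy to see directly (digits shown are the usual IEEE double reprs of the era; Decimal uses its default 28-digit context):
>>> 1 / 3.0
0.33333333333333331
>>> (1 / 3.0) * 3
1.0
>>> from decimal import Decimal
>>> (Decimal(1) / 3) * 3
Decimal("0.9999999999999999999999999999")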
The speed difference between Decimal and Fraction is likely due to the
fact that Fraction can avoid actually doing any division most of the
time - it does addition and multiplication instead. The main reason
behind the overall speed advantage of builtin floats should hopefully
be obvious ;)
Regardless, the result of integer division is going to be a binary
floating point value in Py3k. For cases where that isn't adequate or
acceptable, the application should really be tightly controlling its
numeric types anyway and probably using a high performance math
library like numpy or gmpy instead of the standard numeric types (as
others have already noted in this thread).
On Mar 4, 8:46 am, NickC <ncogh...@gmail.com> wrote:
The increased number of inaccurate answers with Decimal (31% vs 10%)
is probably due to the fact that it is actually more precise than
float
I suspect it has more to do with the fact that 10 is bigger than 2,
though I'm not sure I could precisely articulate the reasons why
this matters. (A bigger base means a bigger 'wobble', the wobble
being the variation in the relationship between an error of 1ulp and
a relative error of base**-precision.)
Rerunning your example after a
getcontext().prec = 16
(i.e. with precision comparable to that of float) gives
>>> check_accuracy(Decimal)
310176
Mark
En Tue, 04 Mar 2008 11:46:48 -0200, NickC <nc******@gmail.com> escribió:
A mildly interesting Py3k experiment:
Python 3.0a3+ (py3k:61229, Mar 4 2008, 21:38:15)
[GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from fractions import Fraction
>>> from decimal import Decimal
>>> def check_accuracy(num_type, max_val=1000):
...     wrong = 0
...     for x in range(1, max_val):
...         for y in range(1, max_val):
...             wrong += (x / num_type(y)) * y != x
...     return wrong
...
>>> check_accuracy(float)
101502
>>> check_accuracy(Decimal)
310013
>>> check_accuracy(Fraction)
0
The conclusions I came to based on running that experiment are:
- Decimal actually appears to suffer more rounding problems than float
for rational arithmetic
Mmm, but I doubt that counting how many times the results are equal is
the right way to evaluate "accuracy".
A stopped clock shows the right time twice a day; a clock that loses one
minute per day shows the right time once every two years. Clearly the
stopped clock is much better! http://mybanyantree.wordpress.com/ca.../lewis-carrol/
--
Gabriel Genellina
>>>>"Gabriel Genellina" <ga*******@yahoo.com.ar(GG) wrote:
>GGEn Tue, 04 Mar 2008 11:46:48 -0200, NickC <nc******@gmail.comescribió:
>>A mildly interesting Py3k experiment:
Python 3.0a3+ (py3k:61229, Mar 4 2008, 21:38:15) [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >from fractions import Fraction >from decimal import Decimal >def check_accuracy(num_type, max_val=1000): ... wrong = 0 ... for x in range(1, max_val): ... for y in range(1, max_val): ... wrong += (x / num_type(y)) * y != x ... return wrong ... >check_accuracy(float) 101502 >check_accuracy(Decimal) 310013 >check_accuracy(Fraction) 0
The conclusions I came to based on running that experiment are: - Decimal actually appears to suffer more rounding problems than float for rational arithmetic
>GGMmm, but I doubt that counting how many times the results are equal, is GGthe right way to evaluate "accuracy". GGA stopped clock shows the right time twice a day; a clock that loses one GGminute per day shows the right time once every two years. Clearly the GGstopped clock is much better!
But if the answer is incorrect (in the float calculation) the error is
limited. IEEE 754 prescribes that the error should be at most 1 LSB, IIRC.
And then the number of errors is the proper measure.
--
Piet van Oostrum <pi**@cs.uu.nl>
URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
Private email: pi**@vanoostrum.org
On Mar 12, 7:20 am, Piet van Oostrum <p...@cs.uu.nl> wrote:
But if the answer is incorrect (in the float calculation) the error is
limited. IEEE 754 prescribes that the error should be at most 1 LSB, IIRC.
And then the number of errors is the proper measure.
There are two operations here, both of which can introduce error of
up to 0.5 ulp (ulp = unit in the last place). Moreover, the second
operation (the multiplication) can magnify the error introduced by
the first (the division).
You're correct that for IEEE 754 binary floating-point arithmetic,
(x/y)*y and x will either be equal or differ by exactly 1ulp (except
perhaps in rarely-occurring corner cases). But for decimal numbers,
(x/y)*y and x could be as much as 5 ulps apart.
Mark