
int/long unification hides bugs

there seems to be a serious problem with allowing numbers to grow in a
nearly unbounded manner, as int/long unification does: it hides bugs.
most of the time, i expect my numbers to be small. 2**31 is good
enough for most uses of variables, and when more is needed, 2**63
should do most of the time.

granted, unification allows code to work for larger numbers than
foreseen (as PEP 237 states) but i feel the potential for more
undetected bugs outweighs this benefit.

the other benefit of the unification - portability - can be achieved
by defining int32 & int64 types (or by defining all integers to be
32-bit (or 64-bit))

PEP 237 says, "It will give new Python programmers [...] one less
thing to learn [...]". i feel this is not so important as the quality
of code a programmer writes once he does learn the language.

-kartik
Jul 18 '05
ka*************@yahoo.com (kartik) wrote in message news:<94**************************@posting.google.com>...
there seems to be a serious problem with allowing numbers to grow in a
nearly unbounded manner, as int/long unification does: it hides bugs.


I think one important clarification needs to be made to this
statement: it hides bugs in code that depends on the boundedness of
integers, written before int/long unification.

The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on the OverflowError that bounded ints raised in
order to ensure that our computations stayed bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.

I'm by no means claiming that int/long unification is bad, only that
it leaves a hole in Python's toolbox where none existed before. I
challenge anyone here to write a pure-Python class that does bounded
integer arithmetic, without basically reimplementing all of integer
arithmetic in Python.
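
To make the scope of that concrete, here is a rough, untested sketch of one
generic way to do it (the names are illustrative, not from any posted code):
the arithmetic special methods are generated in a loop from the operator
module, with each result re-checked against the bound.

import operator

class Bounded(object):
    def __init__(self, val, low, high):
        if not (low <= val <= high):
            raise OverflowError("%r not in [%r, %r]" % (val, low, high))
        self.val, self.low, self.high = val, low, high

    def __repr__(self):
        return "Bounded(%r, %r, %r)" % (self.val, self.low, self.high)

def _wrap(op):
    def method(self, other):
        if isinstance(other, Bounded):
            other = other.val
        # re-check the bound after every operation
        return Bounded(op(self.val, other), self.low, self.high)
    return method

# Only a handful of operators shown; a real version still has to list the
# full set, plus the reflected (__radd__ ...) and in-place variants.
for _name, _op in [("__add__", operator.add), ("__sub__", operator.sub),
                   ("__mul__", operator.mul), ("__pow__", operator.pow)]:
    setattr(Bounded, _name, _wrap(_op))

Even generated in a loop, the class only covers the operators that were
listed, which is essentially the reimplementation being objected to.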

Jeremy
Jul 18 '05 #71
Jeremy Fincher wrote:
The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on the OverflowError that bounded ints raised in
order to ensure that our computations stayed bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.


What's wrong with the example Number wrapper I posted a couple days
ago to this thread? Here are the essential parts:

import sys   # needed for the sys.maxint defaults below

class RangedNumber:
    def __init__(self, val, low = -sys.maxint-1, high = sys.maxint):
        if not (low <= high):
            raise ValueError("low (= %r) > high (= %r)" % (low, high))
        if not (low <= val <= high):
            raise ValueError("value %r not in range %r to %r" %
                             (val, low, high))
        self.val = val
        self.low = low
        self.high = high
    ....

    def _get_range(self, other):
        if isinstance(other, RangedNumber):
            low = max(self.low, other.low)
            high = min(self.high, other.high)
            other_val = other.val
        else:
            low = self.low
            high = self.high
            other_val = other

        return other_val, low, high

    def __add__(self, other):
        other_val, low, high = self._get_range(other)
        x = self.val + other_val
        return RangedNumber(x, low, high)
    ...

and some of the example code:

>>> a = RangedNumber(10, 0, 100)
>>> print a
10
>>> a+90
RangedNumber(100, 0, 100)
>>> a+91
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "spam.py", line 39, in __add__
    return RangedNumber(x, low, high)
  File "spam.py", line 8, in __init__
    raise ValueError("value %r not in range %r to %r" %
ValueError: value 101 not in range 0 to 100
>>> 10*a
RangedNumber(100, 0, 100)
>>> 11*a
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "spam.py", line 67, in __rmul__
    return RangedNumber(x, low, high)
  File "spam.py", line 8, in __init__
    raise ValueError("value %r not in range %r to %r" %
ValueError: value 110 not in range 0 to 100
>>> b = RangedNumber(18, 5, 30)
>>> a+b
RangedNumber(28, 5, 30)


Andrew
da***@dalkescientific.com
Jul 18 '05 #72

tw*********@hotmail.com (Jeremy Fincher) wrote:

ka*************@yahoo.com (kartik) wrote in message news:<94**************************@posting.google.com>...
there seems to be a serious problem with allowing numbers to grow in a
nearly unbounded manner, as int/long unification does: it hides bugs.
I think one important clarification needs to be made to this
statement: it hides bugs in code that depends on the boundedness of
integers, written before int/long unification.

The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on the OverflowError that bounded ints raised in
order to ensure that our computations stayed bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.


And as others have said more than once, it is not so common that the
boundedness of one's integers falls on the 32 bit signed integer
boundary. Ages and money were given as examples.

I'm by no means claiming that int/long unification is bad, only that
it leaves a hole in Python's toolbox where none existed before. I
challenge anyone here to write a pure-Python class that does bounded
integer arithmetic, without basically reimplementing all of integer
arithmetic in Python.


And what is so wrong with implementing all of integer arithmetic in
Python? About all I can figure is speed, in which case, one could do
the following...

class BoundedInt(object):
    a = [0]
    def f(a):
        a[0] = 1
        return 1
    assert f(a)
    if a[0]:
        def __new__(cls, val, lower, upper):
            #body for creating new bounded int object...
            #only gets called if Python _is not run_ with -O option.
    else:
        def __new__(cls, val, lower, upper):
            #optimized version is a plain integer
            return val
    del f; del a   #clean out the namespace
    #insert code for handling bounded integer arithmetic here

Which uses plain integers when Python is run with the -O option, but
bounds them during "debugging" without the -O option.

- Josiah

Jul 18 '05 #73
On Thu, Oct 28, 2004 at 10:47:35AM -0700, Josiah Carlson wrote:
class BoundedInt(object):
    a = [0]
    def f(a):
        a[0] = 1
        return 1
    assert f(a)
    if a[0]:
        def __new__(cls, val, lower, upper):
            #body for creating new bounded int object...
            #only gets called if Python _is not run_ with -O option.
    else:
        def __new__(cls, val, lower, upper):
            #optimized version is a plain integer
            return val
    del f; del a   #clean out the namespace
    #insert code for handling bounded integer arithmetic here


Is there a reason you didn't use 'if __debug__' here?

class BoundedInt(object):
    ...
    if __debug__:
        def __new__ ...
    else:
        def __new__ ...

Jeff


Jul 18 '05 #74
On 2004-10-28, Andrew Dalke <ad****@mindspring.com> wrote:
Jeremy Fincher wrote:
The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on the OverflowError that bounded ints raised in
order to ensure that our computations stayed bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.


What's wrong with the example Number wrapper I posted a couple days
ago to this thread? Here are the essential parts


What I think is wrong with it is that it distributes its constraints
too far. The fact that you want constraints on a number doesn't imply
that all operations done on that number are bound by the same
constraints.

The way I see myself using constraints, I would need them on
a name, not on an object.

--
Antoon Pardon
Jul 18 '05 #75
Antoon Pardon wrote:
The way I see myself using constraints, I would need them on
a name, not on an object.


Then you'll have to use the approach Bengt Richter used
with his LimitedIntSpace solution, posted earlier in this thread.
Variable names are just references. Only attribute names will
do what you want in Python.
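
Roughly (an untested sketch of the general idea, not Bengt's actual
LimitedIntSpace code), a descriptor can hang the constraint on an attribute
name while the values stored under that name stay plain ints:

class BoundedAttr(object):
    def __init__(self, name, low, high):
        self.name, self.low, self.high = name, low, high

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__[self.name]

    def __set__(self, obj, value):
        # the check happens on every rebinding of the attribute name
        if not (self.low <= value <= self.high):
            raise OverflowError("%s = %r not in [%r, %r]"
                               % (self.name, value, self.low, self.high))
        obj.__dict__[self.name] = value

class Account(object):
    balance = BoundedAttr("balance", 0, 10**6)   # constraint lives on the name

acct = Account()
acct.balance = 500                       # fine; acct.balance is an ordinary int
# acct.balance = acct.balance * 10**4    # would raise OverflowError on rebinding

In between rebindings the value participates in arithmetic as a plain int,
which is exactly what a constraint-on-a-name (rather than on an object) buys.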

Andrew
da***@dalkescientific.com
Jul 18 '05 #76
Andrew Dalke <ad****@mindspring.com> wrote in message news:
What's wrong with the example Number wrapper I posted a couple days
ago to this thread?


How long will this take to run?

a = RangedNumber(2**31, 0, 2**32)
a ** a


I think our inability to write a RangedNumber that piggybacks on
Python's integers should be obvious.

Jeremy
Jul 18 '05 #77
Josiah Carlson <jc******@uci.edu> wrote in message news:<ma**************************************@python.org>...
And as others have said more than once, it is not so common that the
boundedness of one's integers falls on the 32 bit signed integer
boundary. Ages and money were given as examples.
I'm not quite sure how this is relevant. My issue is with the
unboundedness of computations, not the unboundedness of the numbers
themselves.
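
For instance (an illustrative sketch, not code anyone posted), a typo that
turns a counter into repeated doubling is the kind of computation at issue:

# Old bounded ints raised OverflowError after about 30 iterations of this;
# with int/long unification the loop quietly builds a 3000-digit long instead.
x = 1
for step in range(10000):
    x = x * 2             # intended, say, x = x + 2
print len(str(x))          # roughly 3011 digits, and no error anywhere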
And what is so wrong with implementing all of integer arithmetic in
Python?


It's a whole lot of extra effort when a perfectly viable such
"battery" existed in previous versions of Python.

Jeremy
Jul 18 '05 #78
Jeff Epler <je****@unpythonic.net> wrote in message news:<ma**************************************@python.org>...
On Thu, Oct 28, 2004 at 10:47:35AM -0700, Josiah Carlson wrote:
class BoundedInt(object):
    a = [0]
    def f(a):
        a[0] = 1
        return 1
    assert f(a)
    if a[0]:
        def __new__(cls, val, lower, upper):
            #body for creating new bounded int object...
            #only gets called if Python _is not run_ with -O option.
    else:
        def __new__(cls, val, lower, upper):
            #optimized version is a plain integer
            return val
    del f; del a   #clean out the namespace
    #insert code for handling bounded integer arithmetic here


Is there a reason you didn't use 'if __debug__' here?


__debug__ can be re-assigned. It has no effect on asserts (anymore;
this formerly was not the case, and I much preferred it that way) but
reassignments to it are still visible to the program.

Jeremy-Finchers-Computer:~/src/my/python/supybot/plugins jfincher$ python -O
Python 2.3 (#1, Sep 13 2003, 00:49:11)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> __debug__
False
>>> __builtins__.__debug__ = True
>>> __debug__
True

Jul 18 '05 #79

tw*********@hotmail.com (Jeremy Fincher) wrote:

Andrew Dalke <ad****@mindspring.com> wrote in message news:
What's wrong with the example Number wrapper I posted a couple days
ago to this thread?
How long will this take to run?

a = RangedNumber(2**31, 0, 2**32)
a ** a


Considering that 2**31 is already a long int, you wouldn't get the
overflow error that is being argued about anyway. You would eventually
get a MemoryError though, as the answer is, quite literally,
2**(31 * 2**31).

Certainly if one were to implement the standard binary exponentiation
algorithm in Python, it fails quite early due to violating the range
constraint.
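
For example (a rough sketch assuming a RangedNumber-like wrapper as above,
with __mul__ defined the same way as __add__; not code anyone posted),
square-and-multiply trips the range check within the first few squarings:

def bounded_pow(base, exponent):
    # 'base' is a RangedNumber; 'exponent' is a plain non-negative int.
    # Assumes the range includes 1 so the initial accumulator is legal.
    result = RangedNumber(1, base.low, base.high)
    while exponent:
        if exponent & 1:
            result = result * base   # ValueError here if the product escapes
        exponent >>= 1
        if exponent:
            base = base * base       # ... or here, usually much earlier
    return result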

I think our inability to write a RangedNumber that piggybacks on
Python's integers should be obvious.


I don't quite follow what you mean. The provided RangedNumber uses
Python integers to store information as attributes of the RangedNumber
instances.
- Josiah

Jul 18 '05 #80
