
int/long unification hides bugs

there seems to be a serious problem with allowing numbers to grow in a
nearly unbounded manner, as int/long unification does: it hides bugs.
most of the time, i expect my numbers to be small. 2**31 is good
enough for most uses of variables, and when more is needed, 2**63
should do most of the time.

granted, unification allows code to work for larger numbers than
foreseen (as PEP 237 states) but i feel the potential for more
undetected bugs outweighs this benefit.

the other benefit of the unification - portability - can be achieved
by defining int32 & int64 types (or by defining all integers to be
32-bit (or 64-bit))
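
a minimal sketch of the behavior being objected to (Python 2.3-era
semantics on a 32-bit build are assumed):

    import sys
    x = sys.maxint        # 2**31 - 1 on a 32-bit build
    print x + 1           # 2147483648, silently promoted to a long
    print type(x + 1)     # <type 'long'>

before PEP 237 took effect, the addition raised OverflowError instead
of promoting.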

PEP 237 says, "It will give new Python programmers [...] one less
thing to learn [...]". i feel this is not so important as the quality
of code a programmer writes once he does learn the language.

-kartik
Jul 18 '05
On Tue, 2004-10-26 at 21:00 -0700, kartik wrote:
Cliff Wells <cl************@comcast.net> wrote in message news:<ma**************************************@pyt hon.org>...

I'm going to rewrite that last line in English so that perhaps you'll
catch on to what you are saying:


thank u so much 4 your help, but i know what i'm saying without
assistance from clowns like u. & i dont give a damn about your rules 4
proper communciation, as long as i'm understood.


It's understood that you are completely dense, if that's what you meant.
What *you* need for *your* application isn't necessarily what
anyone else needs for theirs.


the required range, while being different for different variables, is
generally less than 2**31 - & *that* can be checked by the
language.


Do you even know what "generally" means? Generally for *you* perhaps,
and the 3 toy programs you've written, certainly not for me.

--
Cliff Wells <cl************@comcast.net>

Jul 18 '05 #51
al*****@yahoo.com (Alex Martelli) wrote in message news:<1gm98e3.ewsm1jxg9y3tN%al*****@yahoo.com>...

Try doing some accounting in Turkish liras, one of these days. Today,
each Euro is 189782957 cents of Turkish liras. If an Italian firm
selling (say) woodcutting equipment bids on a pretty modest contract in
Turkey, offering machinery worth 2375220 Euros, they need to easily
compute that their bid is 450776275125540 cents of Turkish Liras. And
that's a _pretty modest_ contract, again -- if you're doing some
computation about truly substantial sums (e.g. ones connected to
government budgets) the numbers get way larger.
[...]Even just for accounting, unlimited-size integers are simply much more
practical.
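
A one-line sketch of the arithmetic (both figures are taken from the
paragraph above):

    print 2375220 * 189782957   # 450776275125540, far beyond 2**31 - 1

On a 32-bit build that product cannot fit in a machine int; with
unified ints it simply comes back as a long.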

as another example, using too long a string as an index into a
dictionary is not a problem (true, the dictionary may not have a
mapping, but i have the same issue with a short string). but too long
an index into a list rewards me with an exception.
But the same index, used as a dictionary key, works just fine. Specious
argument, therefore.


I don't think so. I didn't say that large numbers always cause
trouble, so you can't claim to have refuted my argument by giving a
single counter-example.

As common and everyday a
computation (in some fields) as the factorial of 1000 (number of
permutations of 1000 objects) is 2**8530 -- and combinatorial arithmetic
is anything but an "ivory tower" pursuit these days, and factorial is
the simplest building block in combinatorial arithmetic.
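
A short sketch that checks the figure (plain Python 2, no libraries):

    fact = 1
    for i in xrange(2, 1001):
        fact *= i
    print len(str(fact))   # 2568: 1000! has 2568 decimal digits

2568 decimal digits is about 8530 bits, matching the figure above.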


It's nice to get some facts, rather than an attempt to prove your
position by analogy between ints & strings ("Proof by analogy is
fraud" - Bjarne Stroustrup)
Jul 18 '05 #52
the required range, while being different for different variables, is
generally less than 2**31 - & *that* can be checked by the
language.

I was trying to write a program which counted the number of times
people made stupid arguments on this list so I wrote a python program
to do it. Unfortunately you came along and caused my variable to
overflow. How was I to know I'd go above that magic number? Life sure
would be easier if ints which got too large were seamlessly converted
to longs!

-- Tim


Jul 18 '05 #53
kartik wrote:
thank u so much 4 your help, but i know what i'm saying without
assistance from clowns like u. & i dont give a damn about your rules 4
proper communciation, as long as i'm understood.


I feel the need to point out in the above the parallel (and equally
mistaken) logic with your comments in the rest of the thread.

In the thread you basically are saying "I want high quality
code, but I refuse to do the thing that will give it to me
(writing good tests) as long as a tiny subset of possible bugs
are caught by causing overflow errors at an arbitrary limit".

Above you are basically saying "I want to be understood,
but I refuse to do the thing that will make it easy for me
to be understood (using proper grammer and spelling) as
long as it's possible for people to laboriously decipher
what I'm trying to say".

Or something like that... I'm with Cliff (which is to say,
I'm outta here).

-Peter
Jul 18 '05 #54
On 25 Oct 2004 21:05:30 -0700, ka*************@yahoo.com (kartik) wrote:
The question is how small is small? Less than 2**7? Less than 2**15?
Less than 2**31? Less than 2**63? And what's the significance of powers
of two? And what happens if you move from a 32 bit machine to a 64 bit
one? (or a 1024 bit one in a hundred years time?)
less than 2**31 most of the time & hardly ever greater than 2**63 - no
matter if my machine is 32-bit, 64-bit or 1024-bit. the required range
depends on the data u want 2 store in the variable & not on the
hardware.

r u pstg fm a cel fone?
Anyway, you might like Ada. Googling for ada reference manual gets

http://www.adahome.com/rm95/
----
Examples (buried in lots of language lawyer syntax stuff, maybe there's
a lighter weight manual ;-)

(33)
Examples of integer types and subtypes:

(34)
type Page_Num is range 1 .. 2_000;
type Line_Size is range 1 .. Max_Line_Size;
(35)
subtype Small_Int is Integer range -10 .. 10;
subtype Column_Ptr is Line_Size range 1 .. 10;
subtype Buffer_Size is Integer range 0 .. Max;
(36)
type Byte is mod 256; -- an unsigned byte
type Hash_Index is mod 97; -- modulus is prime

----
> PEP 237 says, "It will give new Python programmers [...] one less
> thing to learn [...]". i feel this is not so important as the quality
> of code a programmer writes once he does learn the language.
The thing is, the int/long cutoff is arbitrary, determined solely by
implementation detail.


agreed, but it need not be that way. ints can be defined to be 32-bit
(or 64-bit) on all architectures.

So what's your point? That you're used to 32 and 64 bit registers?
Is signed two's (decided that was possessive, but maybe it's plural?;-)
complement the specific flavor you like? And python should offer these
constrained integer types? Why don't you scratch your own itch and see
if it's just a passing brain mite ;-)

A much better idea is the judicious use of assertions.

assert x < 15000

Not only does it protect you from runaway numbers, it also documents
what the expected range is, resulting in a much better "quality of code"
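
A slightly fuller form documents both ends of the expected range (the
bound and the message here are arbitrary examples):

    assert 0 <= x < 15000, "x out of expected range: %r" % x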


such an assertion must be placed before every assignment to the
variable - & that's tedious. moreover, it can give u a false sense of
security when u think u have it wherever needed but u've forgotten it
somewhere.

a 32-bit limit is a crude kind of assertion that u get for free, and
one u expect should hold for most variables. for those few variables
it doesn't, u can use a long.


If you are willing to make your variables exist in an object's attribute
name space, you can define almost any behavior you want. E.g., here's
a class that will make objects that will only allow integral values within
the limits you specify to be bound to names in the attribute space. Since it's
guaranteed on binding, the retrieval needs no test.
>>> import sys
>>> class LimitedIntSpace(object):
...     def __init__(self, lo=-sys.maxint-1, hi=sys.maxint):
...         self.__dict__[''] = (lo, hi)
...     def __setattr__(self, vname, value):
...         if not isinstance(value,(int, long)):
...             raise TypeError, 'Only integral values allowed'
...         lo, hi = self.__dict__['']
...         if value < lo: raise ValueError, '%r < %r (lower limit)' % (value, lo)
...         if value > hi: raise ValueError, '%r > %r (high limit)' % (value, hi)
...         self.__dict__[vname] = value
...
>>> i3_10 = LimitedIntSpace(3,10)
>>> for i in range(16):
...     try: i3_10.x = i; print (i,i3_10.x),
...     except ValueError, e: print e
...
0 < 3 (lower limit)
1 < 3 (lower limit)
2 < 3 (lower limit)
(3, 3) (4, 4) (5, 5) (6, 6) (7, 7) (8, 8) (9, 9) (10, 10) 11 > 10 (high limit)
12 > 10 (high limit)
13 > 10 (high limit)
14 > 10 (high limit)
15 > 10 (high limit)

It defaults to your favored 32-bit range ;-)

>>> i32 = LimitedIntSpace()
>>> i32.x = sys.maxint
>>> i32.y = -sys.maxint-1
>>> i32.x += 1
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 9, in __setattr__
ValueError: 2147483648L > 2147483647 (high limit)
>>> i32.y -= 1
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 8, in __setattr__
ValueError: -2147483649L < -2147483648 (lower limit)
Regards,
Bengt Richter
Jul 18 '05 #55
al*****@yahoo.com (Alex Martelli) wrote in message news:<1gm98e3.ewsm1jxg9y3tN%al*****@yahoo.com>...

Try doing some accounting in Turkish liras, one of these days. Today,
each Euro is 189782957 cents of Turkish liras. If an Italian firm
selling (say) woodcutting equipment bids on a pretty modest contract in
Turkey, offering machinery worth 2375220 Euros, they need to easily
compute that their bid is 450776275125540 cents of Turkish Liras. And
that's a _pretty modest_ contract, again -- if you're doing some
computation about truly substantial sums (e.g. ones connected to
government budgets) the numbers get way larger.

Even just for
accounting, unlimited-size integers are simply much more practical.


Thank you for the information. I appreciate it.
-kartik
Jul 18 '05 #56
Bengt Richter wrote:
If you are willing to make your variables exist in an object's attribute
name space, you can define almost any behavior you want.


Or, here's another solution -- make a number-like object which
handles range checking for every operation.
The big problem here is that unlike Ada, C++, or other languages
that let you declare variable types, I have to figure out how
to merge the allowed ranges of the two values during a binary op.
I decided to use the intersection. Unlike Python's ranges, I
chose low <= val <= high (as compared to low <= val < high).

import sys

class RangedNumber:
    def __init__(self, val, low = -sys.maxint-1, high = sys.maxint):
        if not (low <= high):
            raise ValueError("low(= %r) > high(= %r)" % (low, high))
        if not (low <= val <= high):
            raise ValueError("value %r not in range %r to %r" %
                             (val, low, high))
        self.val = val
        self.low = low
        self.high = high

    def __str__(self):
        return str(self.val)

    def __repr__(self):
        return "RangedNumber(%r, %r, %r)" % (self.val, self.low, self.high)

    def __int__(self):
        return self.val

    def __float__(self):
        return float(self.val)

    def _get_range(self, other):
        if isinstance(other, RangedNumber):
            low = max(self.low, other.low)
            high = min(self.high, other.high)
            other_val = other.val
        else:
            low = self.low
            high = self.high
            other_val = other

        return other_val, low, high

    def __add__(self, other):
        other_val, low, high = self._get_range(other)
        x = self.val + other_val
        return RangedNumber(x, low, high)

    def __radd__(self, other):
        other_val, low, high = self._get_range(other)
        x = other_val + self.val
        return RangedNumber(x, low, high)

    def __sub__(self, other):
        other_val, low, high = self._get_range(other)
        x = self.val - other_val
        return RangedNumber(x, low, high)

    def __rsub__(self, other):
        other_val, low, high = self._get_range(other)
        x = other_val - self.val
        return RangedNumber(x, low, high)

    def __abs__(self):
        return RangedNumber(abs(self.val), self.low, self.high)

    def __mul__(self, other):
        other_val, low, high = self._get_range(other)
        x = self.val * other_val
        return RangedNumber(x, low, high)

    def __rmul__(self, other):
        other_val, low, high = self._get_range(other)
        x = other_val * self.val
        return RangedNumber(x, low, high)

    # ... and many, many more ...

Here's some code using it

>>> a = RangedNumber(10, 0, 100)
>>> a
RangedNumber(10, 0, 100)
>>> print a
10
>>> a+90
RangedNumber(100, 0, 100)
>>> a+91
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "spam.py", line 39, in __add__
    return RangedNumber(x, low, high)
  File "spam.py", line 8, in __init__
    raise ValueError("value %r not in range %r to %r" %
ValueError: value 101 not in range 0 to 100
>>> a*5
RangedNumber(50, 0, 100)
>>> 10*a
RangedNumber(100, 0, 100)
>>> 11*a
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "spam.py", line 67, in __rmul__
    return RangedNumber(x, low, high)
  File "spam.py", line 8, in __init__
    raise ValueError("value %r not in range %r to %r" %
ValueError: value 110 not in range 0 to 100
>>> 0-a
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "spam.py", line 54, in __rsub__
    return RangedNumber(x, low, high)
  File "spam.py", line 8, in __init__
    raise ValueError("value %r not in range %r to %r" %
ValueError: value -10 not in range 0 to 100
>>> a-0
RangedNumber(10, 0, 100)
>>> b = RangedNumber(18, 5, 20)
>>> b = RangedNumber(18, 5, 30)
>>> a+b
RangedNumber(28, 5, 30)


Andrew
da***@dalkescientific.com
Jul 18 '05 #57
kartik <ka*************@yahoo.com> wrote:
al*****@yahoo.com (Alex Martelli) wrote in message

news:<1gm9a9j.s0b279yqpnlvN%al*****@yahoo.com>...
Cliff Wells <cl************@comcast.net> wrote:

optional constraint checking [...] can be a handy feature for many kinds
of applications [...] Of course, this has nothing to do with silly and
arbitrary bounds such as 2**31-1.


bounds such as 2**31 are a crude form of constraint checking that you
get by default. if you feel your data is going to be larger, you can
use a long type


Too crude to be any real use, and wrong-headed _as a default_.
Constraints should apply _when explicitly requested_, with the default
always being "unconstrained". (It's a wart in Python that we don't get
that with recursion, for example; it forces people to look for
non-recursive solutions, where recursive ones are simpler and smoother,
because it robs recursive approaches of some generality).
Alex
Jul 18 '05 #58
kartik <ka*************@yahoo.com> wrote:
...
Try doing some accounting in Turkish liras, one of these days. Today, ...
[...]Even just for accounting, unlimited-size integers are simply much
more practical.
I see you quote this but can't refute it...
as another example, using too long a string as an index into a
dictionary is not a problem (true, the dictionary may not have a
mapping, but i have the same issue with a short string). but too long
an index into a list rewards me with an exception.


But the same index, used as a dictionary key, works just fine. Specious
argument, therefore.


I don't think so. I didn't say that large numbers always cause
trouble, so you can't claim to have refuted my argument by giving a
single counter-example.


I rewrote your post, using long strings instead of large numbers, to
show the arguments are exactly identical, and equally bereft of
substance, against getting either unlimited strings or numbers as the
default. You tried to show asymmetry by comparing strings used as keys
into a dictionary vs ints used as indices into a list, and I refute that
silly attempt: if you have dictionaries you can use long strings or
large numbers as keys into them just as well.

Your mention of lists, in fact, shows exactly how specious your
arguments for a default integer limit of 2**31-1 are. That totally
arbitrary limit has nothing to do with the size of any given list; the
size of number that would give problems when used as list index varies,
but it's more likely to be a few millions, than billions. _Moreover_,
as soon as you try to use a too-large index for the specific list, you
get an IndexError. It's therefore totally useless to try and get an
OverflowError instead if the index, besides being too big for that
specific list, is also over 2**31-1 (or other arbitrary boundary).
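
A two-line sketch of that point (the list size and index here are
arbitrary):

    lst = range(10)
    lst[5000000]   # IndexError: list index out of range

The IndexError arrives well below 2**31; an extra OverflowError
boundary adds nothing.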

As common and everyday a
computation (in some fields) as the factorial of 1000 (number of
permutations of 1000 objects) is 2**8530 -- and combinatorial arithmetic
is anything but an "ivory tower" pursuit these days, and factorial is
the simplest building block in combinatorial arithmetic.


It's nice to get some facts, rather than an attempt to prove your
position by analogy between ints & strings ("Proof by analogy is
fraud" - Bjarne Stroustrup)


Go use C++, then, and stop wasting our time. If we preferred the way
Stroustrup designs programming languages, to that in which van Rossum
designs them, we'd be over in comp.lang.c++, not here in
comp.lang.python -- ever thought of that?

The analogy I posed, and still defend, merely shows your arguments were
badly thought out, weak, and totally useless in the first place. It
does not need to 'prove' anything, because we're neither in a court of
law, nor in mathematics: it just shows up your arguments for the
worthless froth they are. The facts (that should be obvious to anybody,
of course) that the factorial function easily makes very big numbers,
that some countries have very devalued currencies, etc, further show
that big numbers (just like big strings) _are_ useful to practical
problems, thus your totally wrong-headed request to put default limits
on numbers would also cause practical damage -- for example, it would
make an accounting-arithmetic package designed with strong currencies in
mind (Euros, dollars, pounds, ...) unusable for weak currencies, because
of the _default_ nature of the limits.

Fortunately there is no chance whatsoever that Python will get into
reverse gear and put back the arbitrary default limits you hanker for.
If the many analogies, arguments, and practical examples that have been
offered to help you see why, help you accept the fact, good. If not,
good riddance -- you have not offered _one_ sound and useful line of
reasoning throughout this big thread, after all, so it's not as if
losing your input will sadly impoverish discussions here.
Alex
Jul 18 '05 #59
Andrew Dalke <ad****@mindspring.com> wrote in message news:<X8****************@newsread3.news.pas.earthl ink.net>...

Real code? Here's one used for generating the canonical
SMILES representation of a chemical compound. It comes
from the FROWNS package.

try:
    val = 1
    for offset, bondtype in offsets[index]:
        val *= symclasses[offset] * bondtype
except OverflowError:
    # Hmm, how often does this occur?
    val = 1L
    for offset, bondtype in offsets[index]:
        val *= symclasses[offset] * bondtype
The algorithm uses the fundamental theorem of arithmetic
as part of computing a unique characteristic value for
every atom in the molecule, up to symmetry.

It's an iterative algorithm, and the new value for
a given atom is the product of the old values of its
neighbor atoms in the graph:

V'(atom1) = V(atom1.neighbor[0]) * V(atom1.neighbor[1]) * ...

In very rare cases this can overflow 32 bits. Rare
enough that it's faster to do everything using 32 bit
numbers and just redo the full calculation if there's
an overflow.

Because Python now no longer gives this overflow error,
we have the advantage of both performance and simplified
code.
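
For illustration, a sketch of what the fragment above collapses to
once ints auto-promote (offsets, symclasses and index are the same
names as in the quoted snippet):

    val = 1
    for offset, bondtype in offsets[index]:
        val *= symclasses[offset] * bondtype   # quietly becomes a long if needed

The try/except and the duplicated loop simply disappear.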

Relatively speaking, 2**31 is tiny. My little laptop
can count that high in Python in about 7 minutes, and
my hard drive has about 2**35 bits of space. I deal
with single files bigger than 2**32 bits.

Why then should I have to put in all sorts of workarounds
into *my* code because *you* don't know how to write
good code, useful test cases, and appropriate internal
sanity checks?


Thank you for the info.
-kartik
Jul 18 '05 #60
Andrew Dalke <ad****@mindspring.com> wrote in message news:<Z8***************@newsread3.news.pas.earthli nk.net>...

My guess is the not unusual case of someone who works mostly alone
and doesn't have much experience in diverse projects nor working
with more experienced people.
Hey, that's right.

I've seen some similar symptoms working with, for example,
undergraduate students who are hotshot programmers ... when
compared to other students in their non-CS department but not
when compared to, say, a CS student, much less an experienced
developer.


Oops. I'm a grad student, and I certainly don't have as much
experience as a professional developer

-kartik
Jul 18 '05 #61
kartik wrote:
"Terry Reedy" <tj*****@udel.edu> wrote in message news:<ma**************************************@pyt hon.org>...
"kartik" <ka*************@yahoo.com> wrote in message
news:94**************************@posting.google .com...
1)catching overflow bugs in the language itself frees u from writing
overflow tests.
It is a fundamental characteristic of counts and integers that adding 1 is
always valid. Given that, raising an overflow exception is itself a bug,
one that Python had and has now eliminated.

If one wishes to work with residue classes mod n, +1 is also still always
valid. It is just that (n-1) + 1 is 0 instead of n. So again, raising an
overflow error is a bug.
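
A minimal sketch of such a residue type, where adding 1 always
succeeds by wrapping (a hypothetical class, for illustration only):

    class Mod(object):
        def __init__(self, val, n):
            self.n = n
            self.val = val % n   # (n-1) + 1 wraps to 0; nothing overflows
        def __add__(self, other):
            return Mod(self.val + int(other), self.n)
        def __repr__(self):
            return "Mod(%r, %r)" % (self.val, self.n)

    print Mod(6, 7) + 1   # Mod(0, 7)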

i don't care what mathematical properties are satisfied; what matters
is to what extent the type system helps me in writing bug-free code

.... and the point most of your respondents are trying to make is that an
arbitrary restriction - ANY arbitrary restriction - on the upper limit
of integers is unhelpful, and that's precisely why it's been removed
from the language.
[...]However, the limit n could be
anything, so fixing it at, say, 2**31 - 1 is almost always useless.

i dont think so. if it catches bugs that cause numbers to increase
beyond 2**31, that's valuable.

But only if an increase beyond 2**31 IS a bug, which for many problem
domains it isn't. Usually the required upper limit is either above or
below 2**31, which is why that limit (or 2**63, or 2**7) is useless and
unhelpful.
The use of fixed range ints is a space-time machine performance hack that
has been costly in human thought time.

on what basis do u say that

A few more years' programming experience will teach you the truth of
this assertion. It appears that no amount of good advice will change
your opinion in the meantime.

regards
Steve
--
http://www.holdenweb.com
http://pydish.holdenweb.com
Holden Web LLC +1 800 494 3119
Jul 18 '05 #62
kartik wrote:
Steve Holden <st***@holdenweb.com> wrote in message news:<Z6kfd.18413$SW3.4432@fed1read01>...
kartik wrote:

Peter Hansen <pe***@engcorp.com> wrote in message news:<_a********************@powergate.ca>...

Do you feel strongly enough about the quality of your code to write
automated tests for it? Or are you just hoping that one tiny class
of potential bugs will be caught for you by this feature of the
language?
1)catching overflow bugs in the language itself frees u from writing
overflow tests.
That seems to me to be a bit like saying you don't need to do any
engineering calculations for your bridge because you'll find out if it's
not strong enough when it falls down.

i was inaccurate. what i meant was that overflow errors provide a
certain amount of sanity checking in the absence of explicit testing -
& do u check every assignment for bounds?

If limiting the range of integers is critical to a program's function
then I will happily make range assertions. Frankly I can't remember when
the last overflow error came up in my code.
2)no test (or test suite) can catch all errors, so language support 4
error detection is welcome.
Yes, but you appear to feel that an arbitrary limit on the size of
integers will be helpful [...] Relying on hardware overflows as error
detection is pretty poor, really.

i'm not relying on overflow errors to ensure correctness. it's only a
mechanism that sometimes catches bugs - & that's valuable.

But your original assertion was that the switch to unbounded integers
should be reversed because of this. Personally I think it's far more
valuable to be able to ignore the arbitrary limitations of supporting
hardware.
3)overflow detection helps when u dont have automated tests 4 a
particular part of your program.


But writing such tests would help much more.

agreed, but do u test your code so thoroughly that u can guarantee
your code is bug-free. till then, overflow errors help.

No they don't.

regards
Steve
--
http://www.holdenweb.com
http://pydish.holdenweb.com
Holden Web LLC +1 800 494 3119
Jul 18 '05 #63
kartik wrote:
Andrew Dalke <ad****@mindspring.com> wrote in message news:<Z8***************@newsread3.news.pas.earthli nk.net>...
My guess is the not unusual case of someone who works mostly alone
and doesn't have much experience in diverse projects nor working
with more experienced people.

Hey, that's right.
I've seen some similar symptoms working with, for example,
undergraduate students who are hotshot programmers ... when
compared to other students in their non-CS department but not
when compared to, say, a CS student, much less an experienced
developer.

Oops. I'm a grad student, and I certainly don't have as much
experience as a professional developer

-kartik


You're certainly not short on arrogance, though.

regards
Steve
--
http://www.holdenweb.com
http://pydish.holdenweb.com
Holden Web LLC +1 800 494 3119
Jul 18 '05 #64
ka*************@yahoo.com (kartik) wrote in message news:<94**************************@posting.google. com>...
Cliff Wells <cl************@comcast.net> wrote in message news:<ma**************************************@pyt hon.org>... [...]i dont give a damn about your rules 4
proper communciation, as long as i'm understood.

Sorry for that.
-kartik
Jul 18 '05 #65
Peter Hansen <pe***@engcorp.com> wrote in message news:<lN********************@powergate.ca>...
kartik wrote:
thank u so much 4 your help, but i know what i'm saying without
assistance from clowns like u. & i dont give a damn about your rules 4
proper communciation, as long as i'm understood.


I feel the need to point out in the above the parallel (and equally
mistaken) logic with your comments in the rest of the thread.

In the thread you basically are saying "I want high quality
code, but I refuse to do the thing that will give it to me
(writing good tests) as long as a tiny subset of possible bugs
are caught by causing overflow errors at an arbitrary limit".

Above you are basically saying "I want to be understood,
but I refuse to do the thing that will make it easy for me
to be understood (using proper grammer and spelling) as
long as it's possible for people to laboriously decipher
what I'm trying to say".


Sorry
Jul 18 '05 #66
al*****@yahoo.com (Alex Martelli) wrote in message news:<1gmb486.1l924gm1ing3f9N%al*****@yahoo.com>.. .
If the many analogies, arguments, and practical examples that have been
offered to help you see why, help you accept the fact, good.


They have. Thank you.
-kartik
Jul 18 '05 #67
kartik wrote:
[Andrew Dalke]
My guess is the not unusual case of someone who works mostly alone
and doesn't have much experience in diverse projects nor working
with more experienced people.


Hey, that's right.


I'm sorry for questioning whether you were a troll. Like I said before,
I spend waaaaay too much time hanging out on troll-infested fora and it
means certain behaviors cause me to automatically dismiss posters. You
seem to have recognized and stopped some of these behaviors.
--
Michael Hoffman
Jul 18 '05 #68
Steve Holden <st***@holdenweb.com> wrote in message news:<CgMfd.3$uN3.0@lakeread04>...
You're certainly not short on arrogance, though.


I didn't mean to (except in a couple of posts where I got a little
pissed off). Sorry for that.

-kartik
Jul 18 '05 #69
Michael Hoffman <m.*********************************@example.com > wrote in message news:<cl**********@gemini.csx.cam.ac.uk>...
I'm sorry for questioning whether you were a troll. Like I said before,
I spend waaaaay too much time hanging out on troll-infested fora and it
means certain behaviors cause me to automatically dismiss posters. You
seem to have recognized and stopped some of these behaviors.


No problem (at least compared to some of the other comments ;) ). It's
nice to know that my posts have not been completely useless!

-kartik
Jul 18 '05 #70
ka*************@yahoo.com (kartik) wrote in message news:<94**************************@posting.google. com>...
there seems to be a serious problem with allowing numbers to grow in a
nearly unbounded manner, as int/long unification does: it hides bugs.


I think one important clarification needs to be made to this
statement: it hides bugs in code that depends on the boundedness of
integers, written before int/long unification.

The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on an OverflowError being raised in order to
ensure that our computations were bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.

I'm by no means claiming that int/long unification is bad, only that
it leaves a hole in Python's toolbox where none existed before. I
challenge anyone here to write a pure-Python class that does bounded
integer arithmetic, without basically reimplementing all of integer
arithmetic in Python.

Jeremy
Jul 18 '05 #71
Jeremy Fincher wrote:
The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on an OverflowError being raised in order to
ensure that our computations were bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.


What's wrong with the example Number wrapper I posted a couple days
ago to this thread? Here are the essential parts
class RangedNumber:
    def __init__(self, val, low = -sys.maxint-1, high = sys.maxint):
        if not (low <= high):
            raise ValueError("low(= %r) > high(= %r)" % (low, high))
        if not (low <= val <= high):
            raise ValueError("value %r not in range %r to %r" %
                             (val, low, high))
        self.val = val
        self.low = low
        self.high = high

    ....

    def _get_range(self, other):
        if isinstance(other, RangedNumber):
            low = max(self.low, other.low)
            high = min(self.high, other.high)
            other_val = other.val
        else:
            low = self.low
            high = self.high
            other_val = other

        return other_val, low, high

    def __add__(self, other):
        other_val, low, high = self._get_range(other)
        x = self.val + other_val
        return RangedNumber(x, low, high)

    ...

and some of the example code
>>> a = RangedNumber(10, 0, 100)
>>> print a
10
>>> a+90
RangedNumber(100, 0, 100)
>>> a+91
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "spam.py", line 39, in __add__
    return RangedNumber(x, low, high)
  File "spam.py", line 8, in __init__
    raise ValueError("value %r not in range %r to %r" %
ValueError: value 101 not in range 0 to 100
>>> 10*a
RangedNumber(100, 0, 100)
>>> 11*a
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "spam.py", line 67, in __rmul__
    return RangedNumber(x, low, high)
  File "spam.py", line 8, in __init__
    raise ValueError("value %r not in range %r to %r" %
ValueError: value 110 not in range 0 to 100
>>> b = RangedNumber(18, 5, 30)
>>> a+b
RangedNumber(28, 5, 30)


Andrew
da***@dalkescientific.com
Jul 18 '05 #72

tw*********@hotmail.com (Jeremy Fincher) wrote:

ka*************@yahoo.com (kartik) wrote in message news:<94**************************@posting.google. com>...
there seems to be a serious problem with allowing numbers to grow in a
nearly unbounded manner, as int/long unification does: it hides bugs.
I think one important clarification needs to be made to this
statement: it hides bugs in code that depends on the boundedness of
integers, written before int/long unification.

The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on an OverflowError being raised in order to
ensure that our computations were bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.


And as others have said more than once, it is not so common that the
boundedness of one's integers falls on the 32 bit signed integer
boundary. Ages and money were given as examples.

I'm by no means claiming that int/long unification is bad, only that
it leaves a hole in Python's toolbox where none existed before. I
challenge anyone here to write a pure-Python class that does bounded
integer arithmetic, without basically reimplementing all of integer
arithmetic in Python.


And what is so wrong with implementing all of integer arithmetic in
Python? About all I can figure is speed, in which case, one could do
the following...

class BoundedInt(object):
    a = [0]
    def f(a):
        a[0] = 1
        return 1
    assert f(a)
    if a[0]:
        def __new__(cls, val, lower, upper):
            #body for creating new bounded int object...
            #only gets called if Python _is not run_ with -O option.
    else:
        def __new__(cls, val, lower, upper):
            #optimized version is a plain integer
            return val
    del f; del a  #clean out the namespace
    #insert code for handling bounded integer arithmetic here

Which uses plain integers when Python is run with the -O option, but
bounds them during "debugging" without the -O option.

- Josiah

Jul 18 '05 #73
On Thu, Oct 28, 2004 at 10:47:35AM -0700, Josiah Carlson wrote:
class BoundedInt(object):
    a = [0]
    def f(a):
        a[0] = 1
        return 1
    assert f(a)
    if a[0]:
        def __new__(cls, val, lower, upper):
            #body for creating new bounded int object...
            #only gets called if Python _is not run_ with -O option.
    else:
        def __new__(cls, val, lower, upper):
            #optimized version is a plain integer
            return val
    del f; del a  #clean out the namespace
    #insert code for handling bounded integer arithmetic here


Is there a reason you didn't use 'if __debug__' here?

class BoundedInt(object):
    ...
    if __debug__:
        def __new__ ...
    else:
        def __new__ ...

Jeff


Jul 18 '05 #74
On 2004-10-28, Andrew Dalke <ad****@mindspring.com> wrote:
Jeremy Fincher wrote:
The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on an OverflowError being raised in order to
ensure that our computations were bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.


What's wrong with the example Number wrapper I posted a couple days
ago to this thread? Here are the essential parts


What I think is wrong with it is that it distributes its constraints
too far. The fact that you want constraints on a number doesn't imply
that all operations done on this number are bound by the same
constraints.

The way I see myself using constraints would mean I need them on
a name, not on an object.

--
Antoon Pardon
Jul 18 '05 #75
Antoon Pardon wrote:
The way I see myself using constrains, would mean I need them on
a name, not on an object.


Then you'll have to use the approach Bengt Richter used
with his LimitedIntSpace solution, posted earlier in this thread.
Variable names are just references. Only attribute names will
do what you want in Python.

Andrew
da***@dalkescientific.com
Jul 18 '05 #76
Andrew Dalke <ad****@mindspring.com> wrote in message news:
What's wrong with the example Number wrapper I posted a couple days
ago to this thread?


How long will this take to run?
a = RangedNumber(2**31, 0, 2**32)
a ** a


I think our inability to write a RangedNumber that piggybacks on
Python's integers should be obvious.

Jeremy
Jul 18 '05 #77
Josiah Carlson <jc******@uci.edu> wrote in message news:<ma**************************************@pyt hon.org>...
And as others have said more than once, it is not so common that the
boundedness of one's integers falls on the 32 bit signed integer
boundary. Ages and money were given as examples.
I'm not quite sure how this is relevant. My issue is with the
unboundedness of computations, not the unboundedness of the numbers
themselves.
And what is so wrong with implementing all of integer arithmetic in
Python?


It's a whole lot of extra effort when a perfectly viable such
"battery" existed in previous versions of Python.

Jeremy
Jul 18 '05 #78
Jeff Epler <je****@unpythonic.net> wrote in message news:<ma**************************************@pyt hon.org>...
On Thu, Oct 28, 2004 at 10:47:35AM -0700, Josiah Carlson wrote:
class BoundedInt(object):
    a = [0]
    def f(a):
        a[0] = 1
        return 1
    assert f(a)
    if a[0]:
        def __new__(cls, val, lower, upper):
            #body for creating new bounded int object...
            #only gets called if Python _is not run_ with -O option.
    else:
        def __new__(cls, val, lower, upper):
            #optimized version is a plain integer
            return val
    del f; del a  #clean out the namespace
    #insert code for handling bounded integer arithmetic here


Is there a reason you didn't use 'if __debug__' here?


__debug__ can be re-assigned. It has no effect on asserts (anymore;
this formerly was not the case, and I much preferred it that way) but
reassignments to it are still visible to the program.

Jeremy-Finchers-Computer:~/src/my/python/supybot/plugins jfincher$ python -O
Python 2.3 (#1, Sep 13 2003, 00:49:11)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> __debug__
False
>>> __builtins__.__debug__ = True
>>> __debug__
True

Jul 18 '05 #79

tw*********@hotmail.com (Jeremy Fincher) wrote:

Andrew Dalke <ad****@mindspring.com> wrote in message news:
What's wrong with the example Number wrapper I posted a couple days
ago to this thread?
How long will this take to run?
a = RangedNumber(2**31, 0, 2**32)
a ** a


Considering that 2**31 is already a long int, you wouldn't get the
overflow error that is being argued about anyways. You would eventually
get a MemoryError though, as the answer is, quite literally, 2**(31 *
2**31).

Certainly if one were to implement the standard binary exponentiation
algorithm in Python, it fails quite early due to violating the range
constraint.
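
A sketch of that algorithm (standard square-and-multiply; the function
is hypothetical, written against the RangedNumber class from earlier
in the thread):

    def bounded_pow(base, exp):
        # base: a RangedNumber; exp: a plain non-negative int
        result = 1
        while exp:
            if exp & 1:
                result = result * base   # range-checked multiply
            base = base * base           # the squarings exceed the range early
            exp >>= 1
        return result

With a = RangedNumber(2**31, 0, 2**32), the very first squaring of
base already violates the upper bound.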

I think our inability to write a RangedNumber that piggybacks on
Python's integers should be obvious.


I don't quite follow what you mean. The provided RangedNumber uses
Python integers to store information as attributes of the RangedNumber
instances.
- Josiah

Jul 18 '05 #80
On Thu, 2004-10-28 at 08:09 -0700, Jeremy Fincher wrote:
The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on that raised OverflowError in order to
ensure that our computations were bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.


Color me ignorant (I still maintain that numbers were a bad idea to
start with), but can you give an example of something that would
*require* this? It seems to me that whether an OverflowError occurs is
only one way of determining whether a computation is bounded, and
further, it's a rather arbitrary marker (it is disassociated from the
capabilities of the computer hardware it's run on) . I'd think
computational resources are the real concern (time and memory). These
can be handled other ways (e.g. running the computation in a separate
process and killing it if it exceeds certain time/memory constraints).
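
A Unix-only sketch of that approach (os and resource are standard
modules; the limits and names here are illustrative):

    import os, resource

    def run_limited(func, cpu_seconds):
        pid = os.fork()
        if pid == 0:
            # child: cap CPU time, then run the computation
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            func()
            os._exit(0)
        os.waitpid(pid, 0)   # parent: the kernel kills the child if it overruns

The bound is expressed in resources actually consumed, not in bits.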

I suppose my position is this: if I have a computer that can perform a
calculation in a reasonable time without exhausting the resources of the
machine then I would think that Python shouldn't try to stop me from
doing it, at least not based on some arbitrary number of bits.

Regards,
Cliff

--
Cliff Wells <cl************@comcast.net>

Jul 18 '05 #81
Cliff Wells <cl************@comcast.net> wrote in message news:<ma**************************************@pyt hon.org>...
On Thu, 2004-10-28 at 08:09 -0700, Jeremy Fincher wrote:
The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on that raised OverflowError in order to
ensure that our computations were bounded, we're left high and dry
with ints and longs unified. We must either drop down to C and write
a bounded integer type, or we're stuck with code that no longer works.
Color me ignorant (I still maintain that numbers were a bad idea to
start with), but can you give an example of something that would
*require* this?


Accepting integer equations from untrusted users and evaluating them.
That's my exact use case, in fact.
It seems to me that whether an OverflowError occurs is
only one way of determining whether a computation is bounded,
It's not really a way of *determining* whether a computation is
bounded, but *guaranteeing* that it is bounded. What I eventually did
was simply convert all numbers to floats, and do my calculations with
those; but this results in some ugliness, for instance, when a user
asks for the value of "10**24".
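
A sketch of the ugliness (CPython 2.x with IEEE doubles assumed):

    print 10.0 ** 24         # 1e+24
    print long(10.0 ** 24)   # 999999999999999983222784

The user asked for an exact power of ten and gets the nearest
representable double instead.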
These
can be handled other ways (e.g. running the computation in a separate
process and killing it if it exceeds certain time/memory constraints).
They're also significantly more complex and significantly less
portable.
I suppose my position is this: if I have a computer that can perform a
calculation in a reasonable time without exhausting the resources of the
machine then I would think that Python shouldn't try to stop me from
doing it, at least not based on some arbitrary number of bits.


That's exactly it. With bounded integers, I know for sure that my
program can perform a calculation in a reasonable time without
exhausting the resources of the machine. Without them, I have to go
to great lengths to receive that assurance.

Jeremy
Jul 18 '05 #82

tw*********@hotmail.com (Jeremy Fincher) wrote:

Cliff Wells <cl************@comcast.net> wrote in message news:<ma**************************************@pyt hon.org>...
Color me ignorant (I still maintain that numbers were a bad idea to
start with), but can you give an example of something that would
*require* this?


Accepting integer equations from untrusted users and evaluating them.
That's my exact use case, in fact.


So seemingly you have a problem with:
val = RangedInteger(int(user_provided), mylow, myhigh)

Ok, so you've got a problem with that. *shrug*

It seems to me that whether an OverflowError occurs is
only one way of determining whether a computation is bounded,


It's not really a way of *determining* whether a computation is
bounded, but *guaranteeing* that it is bounded. What I eventually did
was simply convert all numbers to floats, and do my calculations with
those; but this results in some ugliness, for instance, when a user
asks for the value of "10**24".


Or any of the infinitely many other values that binary floating point
does not represent exactly. What you have done is to take your problem
of limiting range and attempt to fold it into a type with a larger
dynamic range but limited precision, replacing an OverflowError with a
possible future infinite value returned by float and with known
precision issues in non-integer math.

What gets me is that you seem to be saying that BEFORE 2.4 has
officially come out, you have switched to using floats because 2.4 will
remove the OverflowError when integers overflow.

I suppose my position is this: if I have a computer that can perform a
calculation in a reasonable time without exhausting the resources of the
machine then I would think that Python shouldn't try to stop me from
doing it, at least not based on some arbitrary number of bits.


That's exactly it. With bounded integers, I know for sure that my
program can perform a calculation in a reasonable time without
exhausting the resources of the machine. Without them, I have to go
to great lengths to receive that assurance.


Ah, but you, or anyone else, only needs to implement it once for some
class of things, and it is done. People who want more or different
functionality are free to derive. With a little bit of futzing, you
could take the base code given by Andrew Dalke, add in custom handlers
for binary exponentiation with __pow__, and you're basically done.

Is there anything stopping you from doing this other than laziness and a
desire to complain "Python 2.4 doesn't have functionality X that it
choose_one_or_more(used_to_have, that_i_want)"?
- Josiah

Jul 18 '05 #83
Jeremy Fincher wrote:
How long will this take to run?

a = RangedNumber(2**31, 0, 2**32)
a ** a

It wouldn't -- I didn't implement __pow__. ;)
I think our inability to write a RangedNumber that piggybacks on
Python's integers should be obvious.


It sounds like you don't like that the simple
__pow__ implementation will raise a MemoryError after
a long time when you would rather have it raise a
bounds violation error early.

If you want that, you could have an implementation that
uses a few logs to check for that possibility.

Though I'm not sure how to do the bounds checking for
the 3 arg form. Given a as above, I know that

pow(a, a, 2**50)

is less than 2**50 (the modulus guarantees that much), but
without doing the computation my bounds checker
can't (or at least shouldn't) be that clever. Should
it require that the 3rd arg always be in the allowed
range?

What about

a = RangedNumber(100, 100, 400)
pow(a, 25424134, 321)

? It turns out the result is 253, which is in the right
range (100 to 400), but I can't know that without basically doing
the calculation.
Andrew
da***@dalkescientific.com
Jul 18 '05 #84
