Class Variable Access and Assignment

This has to do with class variables and instance variables.

Given the following:

<code>

class _class:
    var = 0
    # rest of the class

instance_b = _class()

_class.var = 5

print instance_b.var # -> 5
print _class.var # -> 5

</code>

Initially this seems to make sense. Note the difference between the last
two lines: one refers to the class variable 'var' via the class,
while the other refers to it via an instance.

However if one attempts the following:

<code>

instance_b.var = 1000 # -> _class.var = 5
_class.var = 9999 # -> _class.var = 9999

</code>

An obvious problem occurs. When attempting to assign to the class variable
via the instance, Python instead creates a new entry in that instance's
__dict__ and gives it the value. This is allowed because of
Python's ability to dynamically add attributes to an instance, but it
seems incorrect to have different behaviour for different operations
(access versus assignment).

There are two possible fixes: either prohibit instance variables
with the same name as class variables, which would let any reference
through an instance of the class assign or read the value of the class
variable; or only allow class variables to be accessed via the class name itself.

Many thanks to elpargo and coke. elpargo assisted in fleshing out the
best way to present this.

Perhaps this was intended; I was just wondering if anyone else had
noticed it, and if so, which form you would consider 'proper':
referring to class variables via the class itself, or via
instances of that class. Any response would be greatly appreciated.
Graham

Nov 3 '05
On Fri, 04 Nov 2005 08:08:42 +0000, Antoon Pardon wrote:
One other way, to implement the += and likewise operators would be
something like the following.

Assume a getnsattr, which would work like getattr, but would also
return the namespace where the name was found. The implementation
of b.a += 2 could then be something like:

ns, t = getnsattr(b, 'a')
t = t + 2
setattr(ns, 'a', t)
I'm not arguing that this is how it should be implemented. Just
showing the implication doesn't follow.


Follow the logical implications of this proposed behaviour.

class Game:
    current_level = 1
    # by default, games start at level one

    def advance(self):
        self.current_level += 1
py> antoon_game = Game()
py> steve_game = Game()
py> steve_game.advance()
py> steve_game.advance()
py> print steve_game.current_level
3
py> print antoon_game.current_level

What will it print?

Hint: your scheme means that class attributes mask instance attributes.
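
For comparison, here is a minimal runnable sketch of what Python's existing rules do with this class (the snippet just restates the class so it is self-contained):

<code>

class Game:
    current_level = 1   # class-level default

    def advance(self):
        # reads current_level from the class the first time, then binds
        # an instance attribute that shadows the class default
        self.current_level += 1

steve_game = Game()
antoon_game = Game()
steve_game.advance()
steve_game.advance()
print(steve_game.current_level)    # 3  (instance attribute)
print(antoon_game.current_level)   # 1  (still the class default)
print(Game.current_level)          # 1  (class attribute untouched)

</code>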
--
Steven.

Nov 4 '05 #101
On Fri, 04 Nov 2005 10:48:54 +0000, Antoon Pardon wrote:
Please explain why this is illegal.

x = 1
def f():
    x += 1


Because names in function namespaces don't have inheritance.
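
For the record, a small illustrative snippet of the error that code raises under the existing rules (the exact message wording varies between Python versions):

<code>

x = 1

def f():
    x += 1   # x is treated as a local name because it is assigned in f

try:
    f()
except UnboundLocalError as e:
    print(e)   # e.g. "local variable 'x' referenced before assignment"

</code>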
--
Steven.

Nov 4 '05 #102
On Fri, 04 Nov 2005 09:07:38 +0000, Antoon Pardon wrote:
Now the b.a on the right hand side refers to A.a the first time through
the loop but not the next times. I don't think it is sane that which
object is refered to depends on how many times you already went through
the loop.

[snip]
Look at that: the object which is referred to depends on how many times
you've already been through the loop. How nuts is that?


Each time, the 'x' comes from the same namespace. In the code above, the
'a' is not each time from the same namespace.

I also think you knew very well what I meant.


I'm supposed to be a mindreader now? After you've spent multiple posts
ranting that, quote, "I don't think it is sane that which object is
refered to depends on how many times you already went through the loop",
I'm supposed to magically read your mind and know that you don't actually
object to what you say you object to, but to something completely
different?

--
Steven.

Nov 4 '05 #103
On Fri, 04 Nov 2005 07:46:45 +0000, Antoon Pardon wrote:
Because b.a += 2 expands to b.a = b.a + 2. Why would you want b.a =
<something> to correspond to b.__class__.a = <something>?
That is an implementation detail. The only answer that you are given
means nothing more than: because it is implemented that way.


You keep saying "that's an implementation detail" and dismissing the
question, but that's the heart of the issue. What does b.a += 2 *mean*? It
doesn't mean "sort the list referenced by b.a" -- we agree on that much.
You seem to think that it means "increment the object currently named
b.a by two". But that's not what it means.

b.a += 2 has a precise meaning, and for ints and many other objects that
meaning is the same as b.a = b.a + 2. Yes, it is an implementation detail.
So what? It is an implementation detail that "b.a += 2" doesn't mean "sort
the list referenced by b.a" too.

In some other language, that's precisely what it could mean -- but Python
is not that language.

b.a has a precise meaning too, and again you have got it wrong. It doesn't
mean "search b's namespace for attribute a". It means "search b's
namespace for attribute a, if not found search b's class' namespace, and
if still not found, search b's class' superclasses". It is analogous to
nested scopes. In fact, it is a type of nested scope.
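
An illustrative sketch of that lookup chain (the class and attribute names below are made up for the example):

<code>

class Base(object):
    a = 'from Base'

class Sub(Base):
    pass

obj = Sub()
print(obj.a)            # 'from Base'     -- found on the superclass
Sub.a = 'from Sub'
print(obj.a)            # 'from Sub'      -- the class now shadows Base
obj.a = 'from instance'
print(obj.a)            # 'from instance' -- instance dict shadows both
print(Sub.a)            # 'from Sub'      -- class value unchanged
print(Base.a)           # 'from Base'     -- superclass value unchanged

</code>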

In some other language, b.a could mean what you think it means, but Python
is not that language. That's a deliberate design decision. Nested
attribute search gives the most useful results in the most common cases,
while still being easy to work around in the rare cases where it is not
what is wanted.

I'm not saying that it couldn't, if that was the model for inheritance you
decided to use. I'm asking why would you want it? What is your usage case
that demonstrates that your preferred inheritance model is useful?


It has nothing to do with a model for inheritance, but with a model of
name resolution.


Which is designed to act the way it does in order to produce the
inheritance model. You can't have that inheritance model without that name
resolution.

The hierarchy of searching first in an instance and then in
its class isn't that different from searching first in a local namespace
and then in a more global namespace.

When we look up names in a function we don't resolve the same name in
different namespaces: each occurrence of the same name in the same
function refers to the same namespace.
That's because it isn't needed for function namespaces. Names in a
function don't inherit state or behaviour from names in a higher-level
scope. Attribute names in classes do.

But with class variables, one and the same name on a line can refer
to two different namespaces at the same time.
That is IMO madness. You may argue that the madness is of little
importance, or that because of the current implementation
little can be done about it. But I don't see how one can defend
it as sane behaviour.


Because inheritance is useful, sensible, rational behaviour for OO
programming.
--
Steven.

Nov 4 '05 #104
On Fri, 04 Nov 2005 04:42:54 -0800, Paul Rubin wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
There are good usage cases for the current inheritance behaviour.


Can you name one? Any code that relies on it seems extremely dangerous to me.


Dangerous? In what way?
A basic usage case:

class Paper:
    size = A4
    def __init__(self, contents):
        # it makes no sense to have class contents,
        # so contents go straight into the instance
        self.contents = contents
To internationalise it for the US market:

Paper.size = USLetter

Now create a document using the default paper size:

mydocument = Paper("Four score and seven years ago")
print mydocument.size == USLetter
=> True

Now create a document using another paper size:

page = Paper("Eleventy MILLION dollars")
page.size = Foolscap

Because that's an instance attribute, our default doesn't change:

assert Paper().size == mydocument.size == USLetter
assert page.size != mydocument.size
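
A runnable variant of the above, using plain strings for the paper sizes since A4, USLetter and Foolscap aren't defined in the post:

<code>

class Paper(object):
    size = 'A4'
    def __init__(self, contents):
        self.contents = contents

mydocument = Paper("Four score and seven years ago")
Paper.size = 'USLetter'              # change the class-wide default
print(mydocument.size)               # 'USLetter' -- inherited from the class

page = Paper("Eleventy MILLION dollars")
page.size = 'Foolscap'               # per-instance override
print(page.size)                     # 'Foolscap'
print(mydocument.size)               # 'USLetter' -- default unaffected

</code>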

In case it wasn't obvious, this is the same inheritance behaviour Python
objects exhibit for methods, except that it isn't normal practice to add
methods to instances dynamically. (It is more common to create a
subclass.) But you can do it if you wish, at least for classes you create
yourself.

Objects in Python inherit behaviour from their class.
Objects in Python inherit state from their class, unless their state is
specifically stored in a per-instance basis.
Here's another usage case:

class PrintableWidget(Widget):
    prefix = "START "
    suffix = " STOP"

    def __str__(self):
        return self.prefix + Widget.__str__(self) + self.suffix

PrintableWidgets now print with a default prefix and suffix, which can be
easily changed on a per-instance basis without having to create
sub-classes for every conceivable modification:

english_gadget = PrintableWidget("data")
print english_gadget
=> prints "START data STOP"

dutch_gadget = PrintableWidget("data")
dutch_gadget.prefix = "BEGIN "
dutch_gadget.suffix = " EINDE"
print dutch_gadget
=> prints "BEGIN data EINDE"


I have to ask... did OO programming suddenly fall out of favour when my
back was turned? People still use C++, C#, Objective-C and Java, right?
Why are so many folks on this list having trouble with inheritance? Have I
missed something?
--
Steven.

Nov 4 '05 #105
On 4 Nov 2005 08:23:05 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
On 2005-11-03, Magnus Lycka <ly***@carmen.se> wrote:
Antoon Pardon wrote:
There is no instance variable at that point. How can it add 2, to
something that doesn't exist at the moment.


Because 'a += 1' is only a shorthand for 'a = a + 1' if a is an
immutable object? Anyway, the behaviour is well documented.

http://docs.python.org/ref/augassign.html says:

An augmented assignment expression like x += 1 can be rewritten as x = x
+ 1 to achieve a similar, but not exactly equal effect. In the augmented
version, x is only evaluated once.


Then couldn't we expect that the namespace resolution is also done
only once?

I say that if the introduction of +=-like operators implied that the
same mention of a name would in some circumstances be resolved to
two different namespaces, then such an introduction would better
not have occurred.

Would it be too much to ask that in a line like.

x = x + 1.

both x's would resolve to the same namespace?

I think I would rather seek consistency in terms of
order of evaluation and action. IOW, the right hand side
of an assignment is always evaluated before the left hand side,
and operator precedence and syntax defines order of access to names
in their expression context on either side.

The compilation of function bodies violates the above, even allowing
future (execution-wise) statements to influence the interpretation
of prior statements. This simplifies defining the local variable set,
and allows e.g. yield to change the whole function semantics, but
the practicality/purity ratio makes me uncomfortable ;-)

If there were bare-name properties, one could control the meaning
of x = x + 1 and x += 1, though of course one would need some way
to bind/unbind the property objects themselves to make them visible
as x or whatever names.

It might be interesting to have a means to push and pop objects
onto/off-of a name-space-shadowing stack (__nsstack__), such that the first place
to look up a bare name would be as an attribute of the top stack object, i.e.,

name = name + 1

if preceded by

__nsstack__.append(my_namespace_object)

would effectively mean

my_namespace_object.name = my_namespace_object.name + 1

by way of logic like

if __nsstack__:
    setattr(__nsstack__[-1], name, getattr(__nsstack__[-1], name) + 1)
else:
    name = name + 1
Of course, my_namespace_object could be an instance of a class
that defined whatever properties or descriptors you wanted.
When you were done with that namespace, you'd just __nsstack__.pop()

If __nsstack__ is empty, then of course bare names would be looked
up as now.

BTW, __nsstack__ is not a literal proposal, just a way to illustrate the concept ;-)
OTOH, I suppose a function could have a reserved slot for a name-space object stack
that wouldn't cost much run time to bypass with a machine-language check for NULL.

BTW2, this kind of stack might play well with a future "with," to guarantee name
space popping. Perhaps "with" syntax could even be extended to make typical usage
slick ;-)
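
A rough emulation of the idea in today's Python, using an explicit namespace object and a with-style context manager; the names Namespace, using and rebind are invented for this sketch, and real bare-name behaviour would of course need interpreter support:

<code>

from contextlib import contextmanager

class Namespace(object):
    pass

_nsstack = []

@contextmanager
def using(ns):
    # push a namespace object for the duration of the block
    _nsstack.append(ns)
    try:
        yield ns
    finally:
        _nsstack.pop()

def rebind(name, func):
    # find-and-rebind: update the attribute in the top namespace in place
    top = _nsstack[-1]
    setattr(top, name, func(getattr(top, name)))

my_namespace_object = Namespace()
my_namespace_object.count = 0
with using(my_namespace_object):
    rebind('count', lambda v: v + 1)   # plays the role of "count += 1"
print(my_namespace_object.count)        # 1

</code>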

Regards,
Bengt Richter
Nov 4 '05 #106
Bengt Richter wrote:

It might be interesting to have a means to push and pop objects
onto/off-of a name-space-shadowing stack (__nsstack__), such that the first place
to look up a bare name would be as an attribute of the top stack object, i.e.,

name = name + 1


Don't be that specific; just unify Attributes and Names.

Instead of the 'name' X referring to locals()['X'] or globals()['X'],
have a hidden "namespace" object/"class", with lookups functioning akin
to class inheritance.

This would allow, in theory, more uniform namespace behaviour with outer
scoping:

x = 1
def f():
    x += 1  # would work, as it becomes
            # setattr(namespace, 'x', getattr(namespace, 'x') + 1), just like attribute lookup

Also, with a new keyword "outer", more rational closures would work:

def makeincr(start=0):
    i = start
    def inc():
        outer i
        j = i
        i += 1
        return j
    return inc

From a "namespace object" point of view, 'outer i' would declare i to
be a descriptor on the namespace object, such that setting actions would
set the variable in the inherited scope (getting actions wouldn't
actually need modification, since it already falls-through). At the
first level, 'outer' would be exactly the same as 'global' -- indeed, it
would be reasonable for the outer keyword to entirely replace global
(which is actually module-scope).
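
For what it's worth, later versions of Python (3.0 and up) added a nonlocal statement that covers exactly this closure case; here is the same makeincr written with it:

<code>

def makeincr(start=0):
    i = start
    def inc():
        nonlocal i      # rebinds i in the enclosing function's scope
        j = i
        i += 1
        return j
    return inc

counter = makeincr()
print(counter())   # 0
print(counter())   # 1
print(counter())   # 2

</code>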

As it stands, the different behaviour of names and attributes is only a
minor quirk, and the fix would definitely break backwards compatibility
in the language -- it'd have to be punted to Py3k.
Nov 4 '05 #107
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
Follow the logical implications of this proposed behaviour.

class Game:
    current_level = 1
    # by default, games start at level one


That's bogus. Initialize the current level in the __init__ method
where it belongs.
Nov 4 '05 #108
Paul Rubin <http://ph****@NOSPAM.invalid> writes:
Mike Meyer <mw*@mired.org> writes:
I've already argued that the kludges suggested to "solve" this problem
create worse problems than this.

The most obvious solution is to permit (or even require) the
programmer to list the instance variables as part of the class
definition. Anything not in the list is not an instance variable,
i.e. they don't get created dynamically. That's what most other
languages I can think of do. Some Python users incorrectly think this
is what __slots__ does, and try to use __slots__ that way. That they
try to do that suggests that the approach makes some sense.


That breaks the ability to add attributes dynamically, which is
useful. If you need an extra piece of data with some existing class,
it's much easier to just add an attribute to hold it than to create a
subclass for the sole purpose of adding that attribute.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 4 '05 #109
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
On 2005-11-04, Mike Meyer <mw*@mired.org> wrote:
Would it be too much to ask that in a line like.

x = x + 1.

both x's would resolve to the same namespace?


Yes. That's too much bondage for programmers who've become accustomed
to freedom. Explain why this should be illegal:
>>> class C:
...     def __getattr__(self, name):
...         x = 1
...         return locals()[name]
...     def __setattr__(self, name, value):
...         globals()[name] = value
...
>>> o = C()
>>> o.x = o.x + 1
>>> x
2


I'll answer with a contra question.

Please explain why this is illegal.

x = 1
def f():
    x += 1

f()


It isn't illegal, it just requires a different syntax.
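
The different syntax in question is the global declaration, as in this small sketch:

<code>

x = 1

def f():
    global x
    x += 1

f()
print(x)   # 2

</code>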

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 4 '05 #110
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
On 2005-11-04, Mike Meyer <mw*@mired.org> wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
On 2005-11-03, Mike Meyer <mw*@mired.org> wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
>> What would you expect to get if you wrote b.a = b.a + 2?
> I would expect a result consistent with the fact that both times
> b.a would refer to the same object.
Except they *don't*. This happens in any language that resolves
references at run time.
Python doesn't resolve references at run time. If it did the following
should work.


You left out a key word: "all".
a = 1
def f():
    a = a + 1

f()


If Python didn't resolve references at run time, the following
wouldn't work:
>>> def f():
...     global a
...     a = a + 1
...
>>> a = 1
>>> f()
>>>
Why do you think so? I see nothing here that couldn't work with
a reference resolved during compile time.


a - in the global namespace - doesn't exist when f is compiled, and
hence can't be dereferenced at compile time. Of course, sufficiently
advanced analysis can figure out that a would exist before f is run,
but that's true no matter how a is added. That isn't the way Python
works.
But leaving that aside, there is still a difference between resolving
a reference at run time and having the same reference resolved twice
with each resolution giving a different result.

The second is a direct result of the first. The environment can change
between the references, so they resolve to different results.

No, the second is not a direct result of the first. Since there is
only one reference, I see nothing wrong with the environment
remembering the reference and reusing it if it needs the reference
a second time.


Please stay on topic: we're talking about "a = a + 1", not "a += 1".
The former has two references, not one. I've already agreed that the
semantics of += are a wart.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 4 '05 #111
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
equal? Some things are a matter of objective fact: should CPython use a
byte-code compiler and virtual machine, or a 1970s style interpreter that
interprets the source code directly?


For the record, I've only seen one interpreter that actually
interpreted the source directly. Pretty much all of the rest of them
do a lexical analysis, turning keywords into magic tokens (dare I say
"byte codes") and removing as much white space as possible. Or maybe
that's what you meant?

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 4 '05 #112
Paul Rubin wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
Follow the logical implications of this proposed behaviour.

class Game:
    current_level = 1
    # by default, games start at level one

That's bogus. Initialize the current level in the __init__ method
where it belongs.


But there is a relevant use case for this:

If you have a class hierarchy, where the difference between the
classes is mainly/completely a matter of data, i.e. default
values. Then it's very convenient to use such defaults in the
class scope.

Of course, you *could* have an __init__ in the base class that
copies this data from class scope to instance scope on instance
creation, but why make it more complicated?

You could also imagine cases where you have many instances and
a big immutable variable which typically stays as default, but
must sometimes vary between instances.

As I explained in another post, member lookups in the instance
must look in the class to find methods, so why not get used to
the fact that it works like this, and use it when it's convenient?
It's not as if anyone puts a gun to your head and forces you to
use this feature.
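
An illustrative sketch of that pattern (class names made up for the example): subclasses differ only in their class-level defaults, and instances can still override them:

<code>

class Connection(object):
    timeout = 30            # class-level defaults
    retries = 3

    def describe(self):
        return '%s: timeout=%s retries=%s' % (
            type(self).__name__, self.timeout, self.retries)

class SlowLinkConnection(Connection):
    timeout = 120           # only the data differs

c = SlowLinkConnection()
print(c.describe())         # SlowLinkConnection: timeout=120 retries=3
c.retries = 5               # per-instance override when needed
print(c.describe())         # SlowLinkConnection: timeout=120 retries=5

</code>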
Nov 5 '05 #113
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
A basic usage case:

class Paper:
    size = A4
    def __init__(self, contents):
        # it makes no sense to have class contents,
        # so contents go straight into the instance
        self.contents = contents


So add:

self.size = Paper.size

and you've removed the weirdness. What do you gain here by inheriting?
Nov 5 '05 #114
On Fri, 04 Nov 2005 02:59:35 +1100, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Thu, 03 Nov 2005 14:13:13 +0000, Antoon Pardon wrote:
Fine, we have the code:

b.a += 2

We found the class variable, because there is no instance variable,
then why is the class variable not incremented by two now?

Because the class variable doesn't define a self-mutating __iadd__
(which is because it's an immutable int, of course). If you want
b.__dict__['a'] += 2 or b.__class__.__dict__['a'] += 2 you can
always write it that way ;-)

(Of course, you can use a descriptor to define pretty much whatever semantics
you want, when it comes to attributes).

Because b.a += 2 expands to b.a = b.a + 2. Why would you want b.a =
No, it doesn't expand like that. (Although, BTW, a custom import could
make it so by transforming the AST before compiling it ;-)

Note BINARY_ADD is not INPLACE_ADD:
>>> def foo():  # for easy disassembly
...     b.a += 2
...     b.a = b.a + 2
...
>>> import dis
>>> dis.dis(foo)

2 0 LOAD_GLOBAL 0 (b)
3 DUP_TOP
4 LOAD_ATTR 1 (a)
7 LOAD_CONST 1 (2)
10 INPLACE_ADD
11 ROT_TWO
12 STORE_ATTR 1 (a)

3 15 LOAD_GLOBAL 0 (b)
18 LOAD_ATTR 1 (a)
21 LOAD_CONST 1 (2)
24 BINARY_ADD
25 LOAD_GLOBAL 0 (b)
28 STORE_ATTR 1 (a)
31 LOAD_CONST 0 (None)
34 RETURN_VALUE

And BINARY_ADD calls __add__ and INPLACE_ADD calls __iadd__ preferentially.

About __ixxx__:
"""
These methods are called to implement the augmented arithmetic operations
(+=, -=, *=, /=, %=, **=, <<=, >>=, &=, ^=, |=).
These methods should attempt to do the operation in-place (modifying self)
and return the result (which could be, but does not have to be, self).
If a specific method is not defined, the augmented operation falls back
to the normal methods. For instance, to evaluate the expression x+=y,
where x is an instance of a class that has an __iadd__() method,
x.__iadd__(y) is called. If x is an instance of a class that does not define
a __iadd() method, x.__add__(y) and y.__radd__(x) are considered, as with
the evaluation of x+y.
"""

<something> to correspond to b.__class__.a = <something>?

I'm not saying that it couldn't, if that was the model for inheritance you
decided to use. I'm asking why would you want it? What is your usage case
that demonstrates that your preferred inheritance model is useful?


It can be useful to find-and-rebind (in the namespace where found) rather
than use separate rules for finding (or not) and binding. The tricks for
boxing variables in closures show there is useful functionality that
is still not as convenient to "spell" as could be imagined.
It is also useful to find and bind separately. In fact, IMO it's not
separate enough in some cases ;-)

I've wanted something like
x := expr
to spell "find x and rebind it to expr" (or raise NameError if not found).
Extending that to attributes and augassign,
b.a +:= 2
could mean find the "a" attribute, and in whatever attribute dict it's found,
rebind it there. Or raise an Exception for whatever failure is encountered.
This would be nice for rebinding closure variables as well. But it's been discussed,
like most of these things ;-)

Regards,
Bengt Richter
Nov 5 '05 #115
On Thu, 03 Nov 2005 13:37:08 -0500, Mike Meyer <mw*@mired.org> wrote:
[...]
I think it even less sane if the same occurrence of b.a refers to two
different objects, like in b.a += 2


That's a wart in +=, nothing less. The fix to that is to remove +=
from the language, but it's a bit late for that.

Hm, "the" fix? Why wouldn't e.g. treating augassign as shorthand for a source transformation
(i.e., asstgt <op>= expr becomes by simple text substitution asstgt = asstgt <op> expr)
be as good a fix? Then we could discuss what

b.a = b.a + 2

should mean ;-)

OTOH, we could discuss how you can confuse yourself with the results of b.a += 2
after defining a class variable "a" as an instance of a class defining __iadd__ ;-)

Or point out that you can define descriptors (or use property to make it easy)
to control what happens, pretty much in as much detail as you can describe requirements ;-)

Regards,
Bengt Richter
Nov 5 '05 #116
On 04 Nov 2005 11:04:58 +0100, Stefan Arentz <st***********@gmail.com> wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
On 2005-11-03, Mike Meyer <mw*@mired.org> wrote:
> Antoon Pardon <ap*****@forel.vub.ac.be> writes:
>>> What would you expect to get if you wrote b.a = b.a + 2?
>> I would expect a result consistent with the fact that both times
>> b.a would refer to the same object.
>
> Except they *don't*. This happens in any language that resolves
> references at run time.


Python doesn't resolve references at run time. If it did the following
should work.

a = 1
def f():
    a = a + 1

f()


No that has nothing to do with resolving things at runtime. Your example
does not work because the language is very specific about looking up
global variables. Your programming error, not Python's shortcoming.

If someone has an old version of Python handy, I suspect that it used
to "work", and the "a" on the right hand side was the global "a" because
a local "a" hadn't been defined until the assignment, which worked to
produce a local binding of "a". Personally, I like that better than
the current way, because it follows the order of accesses implied
by the precedences in expression evaluation and statement execution.
But maybe I don't RC ;-)

Regards,
Bengt Richter
Nov 5 '05 #117
bo**@oz.net (Bengt Richter) writes:
Hm, "the" fix? Why wouldn't e.g. treating augassign as shorthand for
a source transformation (i.e., asstgt <op>= expr becomes by simple
text substitution asstgt = asstgt <op> expr) be as good a fix? Then
we could discuss what


Consider "a[f()] += 3". You don't want to eval f() twice.
Nov 5 '05 #118
On 4 Nov 2005 11:09:36 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
[...]

Take the code:

lst[f()] += 1

Now let f be a function with a side effect, that in succession
produces the positive integers starting with one.

What do you think this should be equivalent to:

t = f()
lst[t] = lst[t] + 1

or

lst[f()] = lst[f()] + 1

If you think the environment can change between references then I
suppose you prefer the second approach.

I am quite sympathetic to your probe of python semantics, but I
don't think the above is an argument that should be translated
to attribute assignment. BTW, ISTM that augassign (+=) is
a red herring here, since it's easy to make a shared class variable
that is augassigned apparently as you want, e.g.,
>>> class shared(object):
...     def __init__(self, v=0): self.v = v
...     def __get__(self, *any): return self.v
...     def __set__(self, _, v): self.v = v
...
>>> class B(object):
...     a = shared(1)
...
>>> b = B()
>>> b.a
1
>>> B.a
1
>>> b.a += 2
>>> b.a
3
>>> B.a
3
>>> vars(b)
{}
>>> vars(b)['a'] = 'instance attr'
>>> vars(b)
{'a': 'instance attr'}
>>> b.a
3
>>> b.a += 100
>>> b.a
103
>>> B.a
103
>>> B.a = 'this could be prevented'
>>> b.a
'instance attr'
>>> B.a
'this could be prevented'

The spelled-out attribute update works too:

>>> B.a = shared('alpha')
>>> b.a
'alpha'
>>> b.a = b.a + ' beta'
>>> b.a
'alpha beta'
>>> B.a
'alpha beta'

But the instance attribute we forced is still there:

>>> vars(b)
{'a': 'instance attr'}

You could have shared define __add__ and __iadd__ and __radd__ also,
for confusion to taste ;-)

Regards,
Bengt Richter
Nov 5 '05 #119
bo**@oz.net (Bengt Richter) writes:
On Thu, 03 Nov 2005 13:37:08 -0500, Mike Meyer <mw*@mired.org> wrote:
[...]
I think it even less sane if the same occurrence of b.a refers to two
different objects, like in b.a += 2
That's a wart in +=, nothing less. The fix to that is to remove +=
from the language, but it's a bit late for that.

Hm, "the" fix? Why wouldn't e.g. treating augassign as shorthand for a source transformation
(i.e., asstgt <op>= expr becomes by simple text substitution asstgt = asstgt <op> expr)
be as good a fix? Then we could discuss what

b.a = b.a + 2

should mean ;-)


The problem with += is how it behaves, not how you treat it. But you
can't treat it as a simple text substitution, because that would imply
that asstgt gets evaluated twice, which doesn't happen.
OTOH, we could discuss how you can confuse yourself with the results of b.a += 2
after defining a class variable "a" as an instance of a class defining __iadd__ ;-)
You may confuse yourself that way, I don't have any problems with it
per se.
Or point out that you can define descriptors (or use property to make it easy)
to control what happens, pretty much in as much detail as you can describe requirements ;-)


I've already pointed that out.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 5 '05 #120
On Fri, 04 Nov 2005 18:20:56 -0500, Mike Meyer wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
equal? Some things are a matter of objective fact: should CPython use a
byte-code compiler and virtual machine, or a 1970s style interpreter that
interprets the source code directly?


For the record, I've only seen one interpreter that actually
interpreted the source directly. Pretty much all of the rest of them
do a lexical analysis, turning keywords into magic tokens (dare I say
"byte codes") and removing as much white space as possible. Or maybe
that's what you meant?


We could argue about details of a throw away line for hours :-)

What I meant was, there is the way Python does it, and then there are (or
were) interpreters that, when faced with a block like this:

for i in range(10):
    print i

parse "print i" ten times.

It doesn't really matter whether any interpreters back in the 1970s were
actually that bad, or just toy interpreters as taught about in undergrad
university courses.
--
Steven.

Nov 5 '05 #121
On Fri, 04 Nov 2005 09:24:41 -0500, Christopher Subich <cs****************@spam.subich.block.com> wrote:
Steven D'Aprano wrote:
On Thu, 03 Nov 2005 14:13:13 +0000, Antoon Pardon wrote:

Fine, we have the code:

b.a += 2

We found the class variable, because there is no instance variable,
then why is the class variable not incremented by two now?

Because b.a += 2 expands to b.a = b.a + 2. Why would you want b.a =
<something> to correspond to b.__class__.a = <something>?


Small correction, it expands to b.a = B.a.__class__.__iadd__(b.a,2),
assuming all relevant quantities are defined. For integers, you're
perfectly right.

But before you get to that, a (possibly inherited) type(b).a better
not have a __get__ method trumping __class__ and the rest ;-)

Regards,
Bengt Richter
Nov 5 '05 #122
On Fri, 04 Nov 2005 16:06:45 -0800, Paul Rubin wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
A basic usage case:

class Paper:
    size = A4
    def __init__(self, contents):
        # it makes no sense to have class contents,
        # so contents go straight into the instance
        self.contents = contents


So add:

self.size = Paper.size

and you've removed the weirdness. What do you gain here by inheriting?

Documents which don't care what paper size they are will automatically use
the default paper size on whatever system they are opened under. Send them
to somebody in the US, and they will use USLetter. Send to someone in
Australia, and they will use A4.

In any case, even if you conclude that there is little benefit to
inheritance in this particular example, the principle is sound:
sometimes you gain benefit by inheriting state.

--
Steven.

Nov 5 '05 #123
On 04 Nov 2005 17:53:34 -0800, Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
bo**@oz.net (Bengt Richter) writes:
Hm, "the" fix? Why wouldn't e.g. treating augassign as shorthand for
a source transformation (i.e., asstgt <op>= expr becomes by simple
text substitution asstgt = asstgt <op> expr) be as good a fix? Then
we could discuss what


Consider "a[f()] += 3". You don't want to eval f() twice.


Well, if you accepted macro semantics IWT you _would_ want to ;-)

Hm, reminds me of typical adding of parens in macros to control precedence
in expressions ... so I tried
>>> a = [0]
>>> (a[0]) += 1
SyntaxError: augmented assign to tuple literal or generator expression not possible

;-/

Regards,
Bengt Richter
Nov 5 '05 #124
On Fri, 04 Nov 2005 10:28:52 -0500, Christopher Subich <cs****************@spam.subich.block.com> wrote:
Antoon Pardon wrote:
Since ints are immutable objects, you shouldn't expect the value of b.a
to be modified in place, and so there is an assignment to b.a, not A.a.

You are now talking implementation details. I don't care about whatever
explanation you give in terms of implementation details. I don't think
it is sane that in a language multiple occurence of something like b.a
in the same line can refer to different objects


This isn't an implementation detail; to leading order, anything that
impacts the values of objects attached to names is a specification issue.

An implementation detail is something like when garbage collection
actually happens; what happens to:

b.a += 2

is very much within the language specification. Indeed, the language
specification dictates that an instance variable b.a is created if one
didn't exist before; this is true no matter if type(b.a) == int, or if
b.a is some esoteric mutable object that just happens to define
__iadd__(self,type(other) == int).

But if it is an esoteric descriptor (or even a simple property, which is
a descriptor), the behaviour will depend on the descriptor, and an instance
variable can be created or not, as desired, along with any side effect you like.

Regards,
Bengt Richter
Nov 5 '05 #125
On Sat, 05 Nov 2005 00:25:34 +0000, Bengt Richter wrote:
On Fri, 04 Nov 2005 02:59:35 +1100, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Thu, 03 Nov 2005 14:13:13 +0000, Antoon Pardon wrote:
Fine, we have the code:

b.a += 2

We found the class variable, because there is no instance variable,
then why is the class variable not incremented by two now?

Because the class variable doesn't define a self-mutating __iadd__
(which is because it's an immutable int, of course). If you want
b.__dict__['a'] += 2 or b.__class__.__dict__['a'] += 2 you can
always write it that way ;-)

(Of course, you can use a descriptor to define pretty much whatever semantics
you want, when it comes to attributes).

Because b.a += 2 expands to b.a = b.a + 2. Why would you want b.a =


No, it doesn't expand like that. (Although, BTW, a custom import could
make it so by transforming the AST before compiling it ;-)

Note BINARY_ADD is not INPLACE_ADD:


Think about *what* b.a += 2 does, not *how* it does it. Perhaps for some
other data type it would make a difference whether the mechanism was
BINARY_ADD (__add__) or INPLACE_ADD (__iadd__), but in this case it does
not. Both of them do the same thing.

Actually, no "perhaps" about it -- we've already discussed the case of
lists.

Sometimes implementation makes a difference. I assume BINARY_ADD and
INPLACE_ADD work significantly differently for lists, because their
results are significantly (but subtly) different:

py> L = [1,2,3]; id(L)
-151501076
py> L += [4,5]; id(L)
-151501076
py> L = L + []; id(L)
-151501428
But all of this is irrelevant to the discussion about binding b.a
differently on the left and right sides of the equals sign. We have
discussed that the behaviour is different with mutable objects, because
they are mutable -- if I recall correctly, I was the first one in this
thread to bring up the different behaviour when you append to a list
rather than reassign, that is, modify the class attribute in place.

I'll admit that my choice of terminology was not the best, but it wasn't
misleading. b.a += 2 can not modify ints in place, and so the
effect of b.a += 2 is the same as b.a = b.a + 2, regardless of what
byte-codes are used, or even what C code eventually implements that
add-and-store.

In the case of lists, setting Class.a = [] and then calling instance.a +=
[1] would not exhibit the behaviour Antoon does not like, because the
addition is done in place. But calling instance.a = instance.a + [1]
would.

My question still stands: why would you want instance.a = <something>
to operate as instance.__class__.a = <something>?
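
When class-level rebinding really is what is wanted, it can already be spelled explicitly, as in this small sketch:

<code>

class C(object):
    a = 1

obj = C()
C.a = 5                  # or obj.__class__.a = 5 / type(obj).a = 5
print(obj.a)             # 5 -- seen through the instance
print(C.a)               # 5
print(vars(obj))         # {} -- no instance attribute was created

</code>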
--
Steven.

Nov 5 '05 #126
On Fri, 04 Nov 2005 13:52:22 -0800, Paul Rubin wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
Follow the logical implications of this proposed behaviour.

class Game:
    current_level = 1
    # by default, games start at level one


That's bogus. Initialize the current level in the __init__ method
where it belongs.


It might be bogus to you, but it isn't to me. I prefer to delay setting
instance attributes until they are needed.

It also allows you to do something like this:

class ExpertGame(Game):
    current_level = 100

and then use ExpertGame anywhere you would have used Game with no problems.

Yes, there are other ways to do this. I won't say they are wrong, but I
don't believe they are better.

--
Steven.

Nov 5 '05 #127
On Fri, 04 Nov 2005 12:10:11 +0000, Antoon Pardon wrote:
There are good usage cases for the current inheritance behaviour. I asked
before what usage case or cases you have for your desired behaviour, and
you haven't answered. Perhaps you missed the question? Perhaps you haven't
had a chance to reply yet? Or perhaps you have no usage case for the
behaviour you want.


There are good use cases for a lot of things python doesn't provide.
There are good use cases for a lot of things Python doesn't provide.
There are good use cases for writable closures, but Python doesn't
provide them; shrug, I can live with that. Use cases are a red herring
here.

Is that a round-about way of saying that you really have no idea of
whether, how or when your proposed behaviour would be useful?

Personally, I think that when you are proposing a major change to a
language that would break the way inheritance works, there should be more
benefits to the new way than the old way.

Some things are a matter of taste: should CPython prefer <> or != for not
equal? Some things are a matter of objective fact: should CPython use a
byte-code compiler and virtual machine, or a 1970s style interpreter that
interprets the source code directly?

The behaviour you are calling "insane" is partly a matter of taste, but it
is mostly a matter of objective fact. I believe that the standard
model for inheritance that you call insane is rational because it is
useful in far more potential and actual pieces of code than the behaviour
you prefer -- and the designers of (almost?) all OO languages seem to
agree with me.


I didn't call the model for inheritance insane.


Antoon, I've been pedanted at by experts, and you ain't one. The behaviour
which you repeatedly described as not sane implements the model for
inheritance. The fact that you never explicitly said "the standard OO
model of inheritance" cuts no ice with me, not when you've written
multiple posts saying that the behaviour of that standard inheritance
model is not sane.

The standard behaviour makes it easy for code to do the right thing in
more cases, without the developer taking any special steps, and in the
few cases where it doesn't do the right thing (e.g. when the behaviour
you want is for all instances to share state) it is easy to work
around. By contrast, the behaviour you want seems to be of very limited
usefulness, and it makes it difficult to do the expected thing in
almost all cases, and work-arounds are complex and easy to get wrong.


Please don't make this about what I *want*. I don't want anything. I
just noted that one and the same reference can be processed multiple
times by the Python machinery, resulting in that same reference
referencing different variables at the same time, and stated that that was
unsane behaviour.


"Unsane" now?

Heaven forbid that I should criticise people for inventing new words, but
how precisely is unsane different from insane? In standard English,
something which is not sane is insane.

The standard behaviour makes it easy for objects to inherit state, and
easy for them to over-ride defaults. The behaviour(s) you and Graham
want have awkward side-effects: your proposed behaviour would mean that
class attributes would mask instance attributes, or vice versa, meaning
that the programmer would have to jump through hoops to get common
types of behaviour like inheriting state.


You don't know what I want. You only know that I have my criticism of
particular behaviour. You seem to have your idea about what the
alternative would be like, and project that to what I would want.


Well now is a good time for you to stop being so coy and tell us what you
want. You don't like the current behaviour. So what is your alternative?
I've given you some suggestions for alternative behaviour. You've refused
to say which one you prefer, or suggest your own.

If you're just trolling, you've done a great job of it because you fooled
me well and good. But if you are serious in your criticism about the
behaviour, then stop mucking about and tell us what the behaviour should
be. Otherwise your criticism isn't going to have any practical effect on
the language at all.
That's an objective claim: please explain what makes your behaviour
more rational than the standard behaviour. Is your behaviour more
useful? Does it make code easier to write? Does it result in more
compact code? What usage cases?


What my behaviour? I don't need to specify alternative behaviour in
order to judge specific behaviour.


If you are serious about wanting the behaviour changed, and not just
whining, then somebody has to come up with an alternative behaviour that
is better. If not you, then who? Most of the folks who have commented on
this thread seem to like the existing behaviour.

--
Steven.

Nov 5 '05 #128

Nov 5 '05 #129
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
It also allows you to do something like this:

class ExpertGame(Game):
    current_level = 100

and then use ExpertGame anywhere you would have used Game with no problems.


Well, let's say you set, hmm, current_score = 100 instead of current_level.
Scores in some games can get pretty large as you get to the higher
levels, enough so that you start needing long ints, which maybe are
used elsewhere in your game too, like for the cryptographic signatures
that authenticate the pieces of treasure in the dungeon. Next you get
some performance gain by using gmpy to handle the long int arithmetic,
and guess what? Eventually a version of your game comes along that
enables the postulated (but not yet implemented) mutable int feature
of gmpy for yet more performance gains. So now, current_score += 3000
increments the class variable instead of creating an instance
variable, and whoever maintains your code by then now has a very weird
bug to track down and fix.

Anyway, I'm reacting pretty badly to the construction you're
describing. I haven't gotten around to looking at the asyncore code
but will try to do so.
Nov 5 '05 #130
bo**@oz.net (Bengt Richter) writes:
On 04 Nov 2005 17:53:34 -0800, Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
bo**@oz.net (Bengt Richter) writes:
Hm, "the" fix? Why wouldn't e.g. treating augassign as shorthand for
a source transformation (i.e., asstgt <op>= expr becomes by simple
text substitution asstgt = asstgt <op> expr) be as good a fix? Then
we could discuss what


Consider "a[f()] += 3". You don't want to eval f() twice.


Well, if you accepted macro semantics IWT you _would_ want to ;-)


Another one of those throw-away lines.

I'd say that was true only if the macro was poorly written. Unless a
macros is intended as a tool to repeatedly evaluate an argument, it
should only evaluate it at most once.

Of course, if you're using some rock-stupid textual macro system, you
really don't have much choice in the matter.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 5 '05 #131
Paul Rubin <http://ph****@NOSPAM.invalid> writes:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
It also allows you to do something like this:
class ExpertGame(Game):
    current_level = 100

and then use ExpertGame anywhere you would have used Game with no problems.

Well, let's say you set, hmm, current_score = 100 instead of current_level.
Scores in some games can get pretty large as you get to the higher
levels, enough so that you start needing long ints, which maybe are
used elsewhere in your game too, like for the cryptographic signatures
that authenticate the pieces of treasure in the dungeon. Next you get
some performance gain by using gmpy to handle the long int arithmetic,
and guess what? Eventually a version of your game comes along that
enables the postulated (but not yet implemented) mutable int feature
of gmpy for yet more performance gains. So now, current_score += 3000
increments the class variable instead of creating an instance
variable, and whoever maintains your code by then now has a very weird
bug to track down and fix.


I'd say that's a wart with +=, not with Python's inheritance
mechanisms. += is neither + nor =, but takes on different aspects of
each depending on what it's operating on. While it's true that Python
is dynamic enough that you can create classes that make this true
for any operator, += is the only one that acts like that on the
builtin types.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 5 '05 #132
Paul Rubin wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
It also allows you to do something like this:

class ExpertGame(Game):
    current_level = 100


and then use ExpertGame anywhere you would have used Game with no problems.

Well, let's say you set, hmm, current_score = 100 instead of current_level.
Scores in some games can get pretty large as you get to the higher
levels, enough so that you start needing long ints, which maybe are
used elsewhere in your game too, like for the cryptographic signatures
that authenticate the pieces of treasure in the dungeon. Next you get
some performance gain by using gmpy to handle the long int arithmetic,
and guess what? Eventually a version of your game comes along that
enables the postulated (but not yet implemented) mutable int feature
of gmpy for yet more performance gains. So now, current_score += 3000
increments the class variable instead of creating an instance
variable, and whoever maintains your code by then now has a very weird
bug to track down and fix.

Anyway, I'm reacting pretty badly to the construction you're
describing. I haven't gotten around to looking at the asyncore code
but will try to do so.


I wouldn't bother. From memory it's just using a class variable as an
initialiser.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Nov 5 '05 #133
On Fri, 04 Nov 2005 20:41:31 -0800, Paul Rubin wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
It also allows you to do something like this:

class ExpertGame(Game):
    current_level = 100
and then use ExpertGame anywhere you would have used Game with no problems.


Well, let's say you set, hmm, current_score = 100 instead of current_level.


Right. Because inheriting scores makes so much sense. But that's okay.
Assume I think of some use for inheriting scores and implement this.
Scores in some games can get pretty large as you get to the higher
levels, enough so that you start needing long ints, which maybe are
used elsewhere in your game too, like for the cryptographic signatures
that authenticate the pieces of treasure in the dungeon.
Python already converts ints to long automatically, but please, do go on.
Next you get
some performance gain by using gmpy to handle the long int arithmetic,
Then whatever happens next will be my own stupid fault for prematurely
optimising code.

and guess what? Eventually a version of your game comes along that
enables the postulated (but not yet implemented) mutable int feature
of gmpy for yet more performance gains.
This would be using Python3 or Python4?
So now, current_score += 3000 increments the class variable instead of
creating an instance variable, and whoever maintains your code by then
now has a very weird bug to track down and fix.


That's a lot of words to say "If ints become mutable when you expect
them to be immutable, things will go badly for you." Well duh.

What exactly is your point? That bugs can happen if the behaviour of your
underlying libraries changes? If list.sort suddenly starts randomizing the
list instead of sorting it, I'll have bugs too. Should I avoid using sort
just in case?
--
Steven.

Nov 5 '05 #134
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
Next you get some performance gain by using gmpy to handle the long int arithmetic,
Then whatever happens next will be my own stupid fault for prematurely optimising code.


Huh? There's nothing premature about using gmpy if you need better long int performance.
It was written for a reason, after all.
and guess what? Eventually a version of your game comes along
that enables the postulated (but not yet implemented) mutable int
feature of gmpy for yet more performance gains.


This would be using Python3 or Python4?


No, it would be a gmpy feature, not a Python feature. So it could be
used with any version of Python.
What exactly is your point? That bugs can happen if the behaviour of your
underlying libraries changes?


That your initialization scheme is brittle--the idea of data
abstraction is being able to change object behaviors -without- making
surprising bugs like that one. You don't even need the contrived gmpy
example. You might replace the level number with, say, a list of
levels that have been visited.

I don't think the culprit is the mutable/immutable distinction +=
uses, though that is certainly somewhat odd. I think Antoon is on the
right track: namespaces in Python live in sort of a ghetto unbecoming
of how the Zen list describes them as a "honking great idea". These
things we call variables are boxed objects where the namespace is the
box. So having x+=y resolve x to a slot in a namespace before
incrementing that same slot by y, maybe better uses the notion of
namespaces than what happens now. I'm too sleepy to see for sure
whether it gets rid of the mutable/immutable weirdness.
Nov 5 '05 #135
On Fri, 04 Nov 2005 21:14:17 -0500, Mike Meyer <mw*@mired.org> wrote:
bo**@oz.net (Bengt Richter) writes:
On Thu, 03 Nov 2005 13:37:08 -0500, Mike Meyer <mw*@mired.org> wrote:
[...]
I think it even less sane if the same occurrence of b.a refers to two
different objects, like in b.a += 2

That's a wart in +=, nothing less. The fix to that is to remove +=
from the language, but it's a bit late for that.
Hm, "the" fix? Why wouldn't e.g. treating augassign as shorthand for a source transformation
(i.e., asstgt <op>= expr becomes by simple text substitution asstgt = asstgt <op> expr)
be as good a fix? Then we could discuss what

b.a = b.a + 2

should mean ;-)


The problem with += is how it behaves, not how you treat it. But you
can't treat it as a simple text substitution, because that would imply
that asstgt gets evaluated twice, which doesn't happen.

I meant that it would _make_ that happen, and no one would wonder ;-)

BTW, if b.a is evaluated once each for __get__ and __set__, does that not
count as getting evaluated twice?
>>> class shared(object):
...     def __init__(self, v=0): self.v = v
...     def __get__(self, *any): print '__get__'; return self.v
...     def __set__(self, _, v): print '__set__'; self.v = v
...
>>> class B(object):
...     a = shared(1)
...
>>> b = B()
>>> b.a
__get__
1
>>> b.a += 2
__get__
__set__
>>> B.a
__get__
3

Same number of get/sets:

>>> b.a = b.a + 10
__get__
__set__
>>> b.a
__get__
13

I posted the disassembly in another part of the thread, but I'll repeat:
>>> def foo():
...     a.b += 2
...     a.b = a.b + 2
...
>>> import dis
>>> dis.dis(foo)
2 0 LOAD_GLOBAL 0 (a)
3 DUP_TOP
4 LOAD_ATTR 1 (b)
7 LOAD_CONST 1 (2)
10 INPLACE_ADD
11 ROT_TWO
12 STORE_ATTR 1 (b)

3 15 LOAD_GLOBAL 0 (a)
18 LOAD_ATTR 1 (b)
21 LOAD_CONST 1 (2)
24 BINARY_ADD
25 LOAD_GLOBAL 0 (a)
28 STORE_ATTR 1 (b)
31 LOAD_CONST 0 (None)
34 RETURN_VALUE

It looks like the thing that's done only once for += is the LOAD_GLOBAL (a),
but DUP_TOP provides the two copies of the reference which are
used either way with LOAD_ATTR followed by STORE_ATTR, which (unless I'm mistaken)
leads to the loading of the (descriptor above) attribute twice -- once each
for the __get__ and __set__ calls respectively logged either way above.
OTOH, we could discuss how you can confuse yourself with the results of b.a += 2
after defining a class variable "a" as an instance of a class defining __iadd__ ;-)
You may confuse yourself that way, I don't have any problems with it
per se.

I should have said "one can confuse oneself," sorry ;-)
Anyway, I wondered about the semantics of defining __iadd__, since it seems to work just
like __add__ except for allowing you to know what source got you there. So whatever you
return (unless you otherwise intercept instance attribute binding) will get bound to the
instance, even though you internally mutated the target and return None by default (which
gives me the idea of returning NotImplemented, but (see below) even that gets bound :-(

BTW, semantically does/should not __iadd__ really implement a _statement_ and therefore
have no business returning any expression value to bind anywhere?
>>> class DoIadd(object):
...     def __init__(self, v=0, **kw):
...         self.v = v
...         self.kw = kw
...     def __iadd__(self, other):
...         print '__iadd__(%r, %r) => '%(self, other),
...         self.v += other
...         retv = self.kw.get('retv', self.v)
...         print repr(retv)
...         return retv
...
>>> class B(object):
...     a = DoIadd(1)
...
>>> b = B()
>>> b.a
<__main__.DoIadd object at 0x02EF374C>
>>> b.a.v
1

The normal(?) mutating way:

>>> b.a += 2
__iadd__(<__main__.DoIadd object at 0x02EF374C>, 2) => 3
>>> vars(b)
{'a': 3}
>>> B.a
<__main__.DoIadd object at 0x02EF374C>
>>> B.a.v
3

Now fake an attempt to mutate self without returning anything (=> None):

>>> B.a = DoIadd(1, retv=None)  # naive default
>>> b.a
3

Oops, remove the instance attr:

>>> del b.a
>>> b.a
<__main__.DoIadd object at 0x02EF3D6C>
>>> b.a.v
1

Ok, now try it:

>>> b.a += 2
__iadd__(<__main__.DoIadd object at 0x02EF3D6C>, 2) => None
>>> vars(b)
{'a': None}

Returned value None still got bound to the instance:

>>> B.a.v
3

Mutation did happen as planned.

Now let's try NotImplemented as a return:

>>> B.a = DoIadd(1, retv=NotImplemented)  # mutate but probably do __add__ too
>>> del b.a
>>> b.a
<__main__.DoIadd object at 0x02EF374C>
>>> b.a.v
1
>>> b.a += 2
__iadd__(<__main__.DoIadd object at 0x02EF374C>, 2) => NotImplemented
__iadd__(<__main__.DoIadd object at 0x02EF374C>, 2) => NotImplemented
>>> vars(b)
{'a': NotImplemented}
>>> B.a.v
5

No problem with that? ;-)

I'd say it looks like someone got tired of implementing __iadd__ since
it's too easy to work around the problem. If _returning_ NotImplemented
could have the meaning that return value processing (binding) should not
be effected, then mutation could happen without a second evaluation of
b.a as a target. ISTM a return value for __iadd__ is kind of strange in any case,
since it's a statement implementation, not an expression term implementation.
Or point out that you can define descriptors (or use property to make it easy)
to control what happens, pretty much in as much detail as you can describe requirements ;-)


I've already pointed that out.

Sorry, missed it. Big thread ;-)

Regards,
Bengt Richter
Nov 5 '05 #136
On Sat, 05 Nov 2005 14:37:19 +1100, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Sat, 05 Nov 2005 00:25:34 +0000, Bengt Richter wrote:
On Fri, 04 Nov 2005 02:59:35 +1100, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Thu, 03 Nov 2005 14:13:13 +0000, Antoon Pardon wrote:

Fine, we have the code:

b.a += 2

We found the class variable, because there is no instance variable,
then why is the class variable not incremented by two now?

Because the class variable doesn't define a self-mutating __iadd__
(which is because it's an immutable int, of course). If you want
b.__dict__['a'] += 2 or b.__class__.__dict__['a'] += 2 you can
always write it that way ;-)

(Of course, you can use a descriptor to define pretty much whatever semantics
you want, when it comes to attributes).

Because b.a += 2 expands to b.a = b.a + 2. Why would you want b.a =


No, it doesn't expand like that. (Although, BTW, a custom import could
make it so by transforming the AST before compiling it ;-)

Note BINARY_ADD is not INPLACE_ADD:


Think about *what* b.a += 2 does, not *how* it does it. Perhaps for some

what it does, or what in the abstract it was intended to do? (which we need
BDFL channeling to know for sure ;-)

It looks like it means, "add two to <whatever b.a is>". I think Antoon
is unhappy that <whatever b.a is> is not determined once for the one b.a
expression in the statement. I sympathize, though it's a matter of defining
what b.a += 2 is really intended to mean.
The parses are certainly distinguishable:

>>> import compiler
>>> compiler.parse('b.a +=2','exec').node
Stmt([AugAssign(Getattr(Name('b'), 'a'), '+=', Const(2))])
>>> compiler.parse('b.a = b.a + 2','exec').node
Stmt([Assign([AssAttr(Name('b'), 'a', 'OP_ASSIGN')], Add((Getattr(Name('b'), 'a'), Const(2))))])

Which I think leads to the different (BINARY_ADD vs INPLACE_ADD) code, which probably really
ought to have a conditional STORE_ATTR for the result of INPLACE_ADD, so that if __iadd__
was defined, it would be assumed that the object took care of everything (normally mutating itself)
and no STORE_ATTR should be done. But that's not the way it works now. (See also my reply to Mike).

Perhaps all types that want to be usable with inplace ops ought to inherit from some base providing
that, and there should never be a return value. This would be tricky for immutables though, since
re-binding is necessary, and the __iadd__ method would have to be passed the necessary binding context
and methods. Probably too much of a rewrite to be practical.
other data type it would make a difference whether the mechanism was
BINARY_ADD (__add__) or INPLACE_ADD (__iadd__), but in this case it does
not. Both of them do the same thing.

Unfortunately you seem to be right in this case.
Actually, no "perhaps" about it -- we've already discussed the case of
lists. Well, custom objects have to be considered too. And where attribute access
is involved, descriptors.

Sometimes implementation makes a difference. I assume BINARY_ADD and
INPLACE_ADD work significantly differently for lists, because their
results are significantly (but subtly) different:

py> L = [1,2,3]; id(L)
-151501076
py> L += [4,5]; id(L)
-151501076
py> L = L + []; id(L)
-151501428
Yes.
But all of this is irrelevant to the discussion about binding b.a
differently on the left and right sides of the equals sign. We have
discussed that the behaviour is different with mutable objects, because
they are mutable -- if I recall correctly, I was the first one in this
thread to bring up the different behaviour when you append to a list
rather than reassign, that is, modify the class attribute in place.

I'll admit that my choice of terminology was not the best, but it wasn't
misleading. b.a += 2 can not modify ints in place, and so the
effect of b.a += 2 is the same as b.a = b.a + 2, regardless of what
byte-codes are used, or even what C code eventually implements that
add-and-store.

It is so currently, but that doesn't mean that it couldn't be otherwise.
I think there is some sense to the idea that b.a should be re-bound in
the same namespace where it was found with the single apparent evaluation
of "b.a" in "b.a += 2" (which incidentally is Antoon's point, I think).
This is just for augassign, of course.

OTOH, this would be find-and-rebind logic for attributes when augassigned,
and that would enable some tricky name-collision bugs for typos, and code
that used instance.attr += incr depending on current behavior would break.

In the case of lists, setting Class.a = [] and then calling instance.a +=
[1] would not exhibit the behaviour Antoon does not like, because the
addition is done in place. But calling instance.a = instance.a + [1]
would.

My question still stands: why would you want instance.a = <something>
to operate as instance.__class__.a = <something>?

Because in the case of instance.a += <increment>, "instance.a"
is a short spelling for "instance.__class__.a" (in the limited case we are discussing),
and that spelling specifies _both_ source and target in a _single_ expression,
unlike instance.a = instance.a + <incr> where two expressions are used, which
one should expect to have their meaning according to the dynamic moment and
context of their evaluation.

If 'a' in vars(instance) then instance.a has the meaning instance.__dict__['a']
for both source and target of +=.
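
Concretely -- this is just stock CPython behaviour, nothing hypothetical:

<code>
class B(object):
    a = 1            # class attribute, inherited by instances that lack their own 'a'

b = B()
b.a += 2             # reads B.a (1), then binds b.__dict__['a'] = 3
print(vars(b))       # {'a': 3}; B.a is still 1
b.a += 2             # now 'a' is in vars(b): both source and target are the instance
print(vars(b))       # {'a': 5}
print(B.a)           # 1, untouched throughout
</code>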

I think you can argue for the status quo or find-and-rebind, but since there
are adequate workarounds to let you do what you want, I don't expect a change.
I do think that returning NotImplemented from __iadd__ to indicate no binding
of return value desired (as opposed to __iadd__ itself not implemented, which
is detected before the call) might make things more controllable for custom objects.

Sorry about cramming too much into sentences ;-/

Regards,
Bengt Richter
Nov 5 '05 #137
On Sat, 05 Nov 2005 21:26:22 +0000, Bengt Richter wrote:
BTW, semantically does/should not __iadd__ really implement a _statement_ and therefore
have no business returning any expression value to bind anywhere?


We get to practicality versus purity here.

Consider x += y for some object type x. If x is a mutable object, then
__iadd__ could be a statement, because it can/should/must modify x in
place. That is the pure solution.

But do you want x += y to work for immutable objects as well? Then
__iadd__ cannot be a statement, because x can't be modified in place.
Our pure "add in place" solution fails in practice, unless we needlessly
restrict what can use it, or have the same syntactical expression (x +=
y) bind to two different methods (__iadd__ statement, and __riadd__
function, r for return). Either pure solution is yucky. (That's a
technical term for "it sucks".) So for practical reasons, __iadd__ can't
be a statement, it needs to return an object which gets bound to x.

Fortunately, that behaviour works for mutables as well, because __iadd__
simply returns self, which gets re-bound to x.
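
A toy sketch of the two styles (MutableAcc and FrozenAcc are made-up names for illustration): a mutable accumulator whose __iadd__ mutates and returns self, and an "immutable" one whose __iadd__ hands back a fresh object. The same x += y statement works for both precisely because whatever comes back is re-bound to x.

<code>
class MutableAcc(object):
    def __init__(self, v):
        self.v = v
    def __iadd__(self, other):
        self.v += other          # mutate in place ...
        return self              # ... and return self, which is re-bound to the target

class FrozenAcc(object):
    def __init__(self, v):
        self.v = v
    def __iadd__(self, other):
        return FrozenAcc(self.v + other)   # no mutation: return a brand-new object

x = MutableAcc(1)
old = x
x += 2
print(x is old)    # True: same object, mutated in place
print(x.v)         # 3

y = FrozenAcc(1)
old = y
y += 2
print(y is old)    # False: y was re-bound to a new object
print(y.v)         # 3
</code>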

While I am enjoying the hoops people are jumping through to modify the
language so that b.a += 2 assigns b.a in the same scope as it was
accessed, I'm still rather perplexed as to why you would want that
behaviour. It seems to me like spending many hours building a wonderfully
polished, ornate, exquisite device for building hiking boots for mountain
goats.
--
Steven.

Nov 6 '05 #138
On Fri, 04 Nov 2005 22:19:39 -0800, Paul Rubin wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
> Next you get some performance gain by using gmpy to handle the long int arithmetic,
Then whatever happens next will be my own stupid fault for prematurely optimising code.


Huh? There's nothing premature about using gmpy if you need better long int performance.
It was written for a reason, after all.


Sure, but I would be willing to bet that incrementing a counter isn't it.

What exactly is your point? That bugs can happen if the behaviour of your
underlying libraries changes?


That your initialization scheme is brittle--the idea of data
abstraction is being able to change object behaviors -without- making
surprising bugs like that one. You don't even need the contrived gmpy
example. You might replace the level number with, say, a list of
levels that have been visited.


Do you expect level += 1 to still work when you change level to a list of
levels?

The problem with data abstraction is that, if you take it seriously, it
means "You should be able to do anything with anything". If I change
object.__dict__ to None, attribute lookup should work, yes? No? Then
Python isn't sufficiently abstract.

As soon as you accept that there are some things you can't do with some
data, you have to stop abstracting. *Prematurely* locking yourself into
one *specific* data structure is bad: as a basic principle, data
abstraction is very valuable -- but in practice there comes a time where
you have to say "Look, just choose a damn design and live with it." If you
choose sensibly, then it won't matter if your counter is an int or a long
or a float or a rational -- but you can't sensibly expect to change your
counter to a binary tree without a major redesign of your code.

I've watched developers with an obsession with data abstraction in
practice. I've watched one comp sci graduate, the ink on his diploma not
even dry yet, spend an hour mapping out state diagrams for a factorial
function.

Hello McFly? The customer is paying for this you know. Get a move on. I've
written five different implementations of factorial in ten minutes, and
while none of them worked with symbolic algebra I didn't need symbolic
algebra support, so I lost nothing by not supporting it.

So I hope you'll understand why I get a bad taste in my mouth when people
start talking about data abstraction.
I don't think the culprit is the mutable/immutable distinction +=
uses, though that is certainly somewhat odd. I think Antoon is on the
right track: namespaces in Python live in sort of a ghetto unbecoming
of how the Zen list describes them as a "honking great idea". These
things we call variables are boxed objects where the namespace is the
box. So having x+=y resolve x to a slot in a namespace before
incrementing that same slot by y, maybe better uses the notion of
namespaces than what happens now.
Perhaps it does, but it breaks inheritance, which is more important than
purity of namespace resolution. Practicality beats purity.

I'm too sleepy to see for sure
whether it gets rid of the mutable/immutable weirdness.


What weirdness? What would be weird is if mutable and immutable objects
worked the same as each other. They behave differently because they are
different. If you fail to see that, you are guilty of excessive data
abstraction.
--
Steven.

Nov 6 '05 #139
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
But do you want x += y to work for immutable objects as well? Then
__iadd__ cannot be a statement, because x can't be modified in place.
It never occurred to me that immutable objects could implement __iadd__.
If they can, I'm puzzled as to why.
While I am enjoying the hoops people are jumping through to modify the
language so that b.a += 2 assigns b.a in the same scope as it was
accessed, I'm still rather perplexed as to why you would want that
behaviour.


Weren't you the one saying += acting differently for mutables and
immutables was a wart? If it's such a wart, why do you find it so
important to be able to rely on the more bizarre consequences of the
wartiness? Warts should be (if not fixed) avoided, not relied on.
Nov 6 '05 #140
On Sat, 05 Nov 2005 16:27:00 -0800, Paul Rubin wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
But do you want x += y to work for immutable objects as well? Then
__iadd__ cannot be a statement, because x can't be modified in place.
It never occurred to me that immutable objects could implement __iadd__.
If they can, I'm puzzled as to why.


???

The classic += idiom comes from C, where you typically use it on ints and
pointers.

In C, ints aren't objects, they are just bytes, so you can modify them
in place. I'm surprised that it never occurred to you that people might
want to do something like x = 1; x += 1 in Python, especially as the
lack of such a feature (as I recall) was one of the biggest complaints
from C programmers crossing over to Python.

Personally, I'm not fussed about +=. Now that it is in the language, I'll
use it, but I never missed it when it wasn't in the language.
While I am enjoying the hoops people are jumping through to modify the
language so that b.a += 2 assigns b.a in the same scope as it was
accessed, I'm still rather perplexed as to why you would want that
behaviour.


Weren't you the one saying += acting differently for mutables and
immutables was a wart?


Nope, not me.
If it's such a wart, why do you find it so
important to be able to rely on the more bizarre consequences of the
wartiness? Warts should be (if not fixed) avoided, not relied on.


The consequences of instance.attribute += 1 may be unexpected for those
who haven't thought it through, or read the documentation, but they aren't
bizarre. Whether that makes it a feature or a wart depends on whether you
think non-method attributes should be inherited or not. I think they
should be.

I can respect the position of somebody who says that only methods
should be inherited -- somebody, I think it was you, suggested that there
is at least one existing OO language that doesn't allow inheritance for
attributes, but never responded to my asking what language it was.
Personally, I would not like an OO language that didn't inherit
attributes, but at least that is consistent. (At least, if you don't
consider methods to be a particular sort of attribute.)

But I can't understand the position of folks who want inheritance but
don't want the behaviour that Python currently exhibits.
instance.attribute sometimes reading from the class attribute is a feature
of inheritance; instance.attribute always writing to the instance is a
feature of OOP; instance.attribute sometimes writing to the instance and
sometimes writing to the class would be, in my opinion, not just a wart
but a full-blown misfeature.

I ask and I ask and I ask for some use of this proposed behaviour, and
nobody is either willing or able to tell me where, how, or why it would be
useful. What should I conclude from this?
--
Steven.

Nov 6 '05 #141
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
It never occurred to me that immutable objects could implement __iadd__.
If they can, I'm puzzled as to why.
I'm surprised that it never occurred to you that people might
want to do something like x = 1; x += 1 in Python,
But I wouldn't expect that to mean that ints implement __iadd__. I'd expect
the x+=1 to just use __add__. I haven't checked the spec though.
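
For what it's worth, this is easy to check at the interpreter -- at least on the CPython versions I have handy, int defines no __iadd__ at all, so x += 1 falls back to __add__ and a rebind, while list does define one:

<code>
print(hasattr(int, '__iadd__'))    # False: x += 1 on an int uses __add__, then rebinds x
print(hasattr(list, '__iadd__'))   # True:  L += [...] extends the list in place

x = 1
old_id = id(x)
x += 1
print(id(x) == old_id)             # False: 'x' now names a different int object
</code>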
I can respect the position of somebody who says that only methods
should be inherited -- somebody, I think it was you, suggested that there
is at least one existing OO language that doesn't allow inheritance for
attributes, but never responded to my asking what language it was.
I was thinking of Flavors. You use a special function (send) to do method
calls. But people generally felt that was kludgy and CLOS eliminated it.
I'm not sure what happens in Smalltalk.
instance.attribute sometimes reading from the class attribute is a feature
of inheritance; instance.attribute always writing to the instance is a
feature of OOP; instance.attribute sometimes writing to the instance and
sometimes writing to the class would be, in my opinion, not just a wart
but a full-blown misfeature.


But that is what you're advocating: x.y+=1 writes to the instance or
the class depending on whether x.y is mutable or not. Say you have an
immutable class with a mutable subclass or vice versa. You'd like to
be able to replace a class instance with a subclass instance and not
have the behavior change (Liskov substitution principle), etc.
Nov 6 '05 #142
Steven D'Aprano wrote:
[...]

But I can't understand the position of folks who want inheritance but
don't want the behaviour that Python currently exhibits.
instance.attribute sometimes reading from the class attribute is a feature
of inheritance; instance.attribute always writing to the instance is a
feature of OOP; instance.attribute sometimes writing to the instance and
sometimes writing to the class would be, in my opinion, not just a wart
but a full-blown misfeature.

I ask and I ask and I ask for some use of this proposed behaviour, and
nobody is either willing or able to tell me where, how, or why it would be
useful. What should I conclude from this?


You should conclude that some readers of this group are happier
designing languages with theoretical purity completely disconnected from
users' needs. But of course we pragmatists know that practicality beats
purity :-)

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Nov 6 '05 #143
On Sat, 05 Nov 2005 18:14:03 -0800, Paul Rubin wrote:
instance.attribute sometimes reading from the class attribute is a feature
of inheritance; instance.attribute always writing to the instance is a
feature of OOP; instance.attribute sometimes writing to the instance and
sometimes writing to the class would be, in my opinion, not just a wart
but a full-blown misfeature.
But that is what you're advocating: x.y+=1 writes to the instance or
the class depending on whether x.y is mutable or not.


Scenario 1:

Pre-conditions: class.a exists; instance.a exists.
Post-conditions: class.a unchanged; instance.a modified.

I give that a big thumbs up, expected and proper behaviour.

Scenario 2:

Pre-conditions: class.a exists and is immutable; instance.a does not
exist.
Post-conditions: class.a unchanged; instance.a exists.

Again, expected and proper behaviour.

(Note: this is the scenario that Antoon's proposed behaviour would change
to class.a modified; instance.a does not exist.)

Scenario 3:

Pre-conditions: class.a exists and is mutable; instance.a exists.
Post-conditions: class.a unchanged; instance.a is modified.

Again, expected and proper behaviour.

Scenario 4:

Pre-conditions: class.a exists and is mutable; instance.a does
not exist.
Post-conditions: class.a modified; instance.a does not exist.

Well, that is a wart. It is the same wart, and for the same reasons, as
the behaviour of:

def function(value=[]):
    value.append(None)

I can live with that. It is a familiar wart, and keeps inheritance of
attributes working the right way. And who knows? If your attributes are
mutable, AND you want Antoon's behaviour, then you get it for free just by
using b.a += 1 instead of b.a = b.a + 1.
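
Concretely, with a mutable class attribute (a throwaway example, not from the original posts):

<code>
class C(object):
    a = []              # shared, mutable class attribute

b = C()
b.a += [1]              # list.__iadd__ mutates C.a in place, then b.__dict__['a'] is bound to that same list
print(C.a)              # [1] -- the class-level value changed (the behaviour Antoon wants)
print(vars(b))          # {'a': [1]}, and it is the very same list object as C.a

c = C()
c.a = c.a + [2]         # builds a new list and binds it on the instance only
print(C.a)              # [1] -- class attribute untouched this time
print(vars(c))          # {'a': [1, 2]}
</code>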

Say you have an
immutable class with a mutable subclass or vice versa. You'd like to
be able to replace a class instance with a subclass instance and not
have the behavior change (Liskov substitution principle), etc.


That's easy. You just have to make sure that the subclass implements
__iadd__ the same way that the immutable parent class does.

You can't expect a class that performs += in place to act the
same as a class that doesn't perform += in place. Excessive data
abstraction, remember?

L = list("Liskov substitution principle")
L.sort() # sorts in place
print L # prints the sorted list

class immutable_list(list):
    # __init__ not shown, but does the right thing
    def sort(self):
        tmp = list(self)
        tmp.sort()
        return immutable_list(tmp)

L = immutable_list("Liskov substitution principle")
L.sort() # throws the sorted list away
print L # prints the unsorted list

The only way the Liskov substitution principle works is if everything
works the same way, which means that all subclasses, all *possible*
subclasses, must have no more functionality than the subclass that does
the absolute least. Since the least is nothing, well, you work it out.

--
Steven.

Nov 6 '05 #144
On Sun, 06 Nov 2005 15:17:18 +1100, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Sat, 05 Nov 2005 18:14:03 -0800, Paul Rubin wrote:
instance.attribute sometimes reading from the class attribute is a feature
of inheritance; instance.attribute always writing to the instance is a
feature of OOP; instance.attribute sometimes writing to the instance and
sometimes writing to the class would be, in my opinion, not just a wart
but a full-blown misfeature.
But that is what you're advocating: x.y+=1 writes to the instance or
the class depending on whether x.y is mutable or not.


Scenario 1:

Pre-conditions: class.a exists; instance.a exists.
Post-conditions: class.a unchanged; instance.a modified.

I give that a big thumbs up, expected and proper behaviour.

Scenario 2:

Pre-conditions: class.a exists and is immutable; instance.a does not
exist.
Post-conditions: class.a unchanged; instance.a exists.

Again, expected and proper behaviour.

(Note: this is the scenario that Antoon's proposed behaviour would change
to class.a modified; instance.a does not exist.)

Scenario 3:

Pre-conditions: class.a exists and is mutable; instance.a exists.
Post-conditions: class.a unchanged; instance.a is modified.

Again, expected and proper behaviour.

Scenario 4:

Pre-conditions: class.a exists and is mutable; instance.a does
not exist.
Post-conditions: class.a modified; instance.a does not exist.

Are you saying the above is what happens or what should happen or not happen?
It's not what happens. Post-conditions are that class.a is modified AND
instance.a gets a _separate_ reference to the same result. Note:

Python 2.4b1 (#56, Nov 3 2004, 01:47:27)
[GCC 3.2.3 (mingw special 20030504-1)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class A(object):
...     a = []
...
>>> b = A()
>>> id(A.__dict__['a'])
49230700
>>> b.a += [123]
>>> id(A.__dict__['a'])
49230700
>>> id(b.__dict__['a'])
49230700
>>> (b.__dict__['a'])
[123]
>>> (A.__dict__['a'])
[123]

Let's eliminate the inheritable class variable A.a:

>>> del A.a
>>> b.a
[123]
>>> id(b.__dict__['a'])
49230700
>>> vars(b)
{'a': [123]}

Make sure we did eliminate A.a:

>>> vars(A)
<dictproxy object at 0x02E817AC>
>>> vars(A).keys()
['__dict__', '__module__', '__weakref__', '__doc__']

Is that the "wart" you were thinking of, or are you actually happier? ;-)
Well, that is a wart. It is the same wart, and for the same reasons, as
the behaviour of:

def function(value=[]):
    value.append(None)

IMO that's not a wart at all, that's a direct design decision, and it's
different from the dual referencing that happens in Scenario 4.

I can live with that. It is a familiar wart, and keeps inheritance of
attributes working the right way. And who knows? If your attributes are
mutable, AND you want Antoon's behaviour, then you get it for free just by
using b.a += 1 instead of b.a = b.a + 1.

Not quite, because there is no way to avoid the binding of the __iadd__
return value to b.a by effective setattr (unless you make type(b).a
a descriptor that intercepts the attempt -- see another post for example).
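
A minimal sketch of such a descriptor (the name ClassOnly is invented for the example): its __set__ swallows the rebinding that follows __iadd__, so b.a += 2 updates the class-level value and never creates an instance attribute.

<code>
class ClassOnly(object):
    """Data descriptor: reads return the stored value, and any store
    (including the STORE_ATTR half of 'b.a += 2') goes back into the
    descriptor instead of the instance __dict__."""
    def __init__(self, value):
        self.value = value
    def __get__(self, obj, objtype=None):
        return self.value
    def __set__(self, obj, value):
        self.value = value       # keep it class-level; never touch obj.__dict__

class B(object):
    a = ClassOnly(1)

b = B()
b.a += 2
print(B().a)             # 3: the class-level value was updated for everyone
print('a' in vars(b))    # False: no instance attribute was created
</code>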

Regards,
Bengt Richter
Nov 6 '05 #145
On Sun, 06 Nov 2005 08:40:00 +0000, Bengt Richter wrote:
Pre-conditions: class.a exists and is mutable; instance.a does
not exist.
Post-conditions: class.a modified; instance.a does not exist.
Are you saying the above is what happens or what should happen or not happen?


Er, it's what I thought was happening without actually checking it...
It's not what happens. Post-conditions are that class.a is modified AND
instance.a gets a _separate_ reference to the same result. Note:
[snip demonstration]
Is that the "wart" you were thinking of, or are you actually happier?
;-)


In other words, the post-condition for all four scenarios includes that
the instance attribute now exists. I'm actually happier. My brain was full
of people talking about __iadd__ modifying mutables in place and I wasn't
thinking straight.

Well, that is a wart. It is the same wart, and for the same reasons, as
the behaviour of:

def function(value=[]):
value.append(None)

IMO that's not a wart at all, that's a direct design decision, and it's
different from the dual referencing that happens in Scenario 4.


Okay, perhaps wart is not quite the right word... but it is certainly
unexpected if you haven't come across it before, or thought *deeply* about
what is going on. A gotcha perhaps.
--
Steven.

Nov 6 '05 #146
Bengt Richter wrote:
On Fri, 04 Nov 2005 10:28:52 -0500, Christopher Subich <cs****************@spam.subich.block.com> wrote:

is very much within the language specification. Indeed, the language
specification dictates that an instance variable b.a is created if one
didn't exist before; this is true no matter if type(b.a) == int, or if
b.a is some esoteric mutable object that just happens to define
__iadd__(self,type(other) == int).


But if it is an esoteric descriptor (or even a simple property, which is
a descriptor), the behaviour will depend on the descriptor, and an instance
variable can be created or not, as desired, along with any side effect you like.


Right, and that's also language-specification. Voodoo, yes, but
language specification nonetheless. :)
Nov 6 '05 #147
On Sun, 06 Nov 2005 12:23:02 -0500, Christopher Subich <sp*********************@subich.nospam.com> wrote:
Bengt Richter wrote:
On Fri, 04 Nov 2005 10:28:52 -0500, Christopher Subich <cs****************@spam.subich.block.com> wrote:

is very much within the language specification. Indeed, the language
specification dictates that an instance variable b.a is created if one
didn't exist before; this is true no matter if type(b.a) == int, or if
b.a is some esoteric mutable object that just happens to define
__iadd__(self,type(other) == int).


But if it is an esoteric descriptor (or even a simple property, which is
a descriptor), the behaviour will depend on the descriptor, and an instance
variable can be created or not, as desired, along with any side effect you like.


Right, and that's also language-specification. Voodoo, yes, but
language specification nonetheless. :)


I guess http://docs.python.org/ref/augassign.html is the spec.
I notice its example at the end uses an old-style class, so maybe
it's understandable that when it talks about getattr/setattr, it doesn't
mention the possible role of descriptors, nor narrow the meaning of
"evaluate once" for a.x to exclude type(a).x in the setattr phase of execution.

I.e., if x is a descriptor, "evaluate" apparently means only

type(a).x.__get__(a, type(a))

since that is semantically getting the value behind x, and so both of the ".x"s in

type(a).x.__set__(a, type(a).x.__get__(a, type(a)).__add__(1)) # (or __iadd__ if defined, I think ;-)

don't count as "evaluation" of the "target" x, even though it means that a.x got evaluated twice
(via getattr and setattr, to get the same descriptor object (which was used two different ways)).

I think the normal, non-descriptor case still results in (optimized) probes for type(a).x.__get__
and type(a).x.__set__ before using a.__dict__['x'].

ISTM also that it's not clear that defining __iadd__ does _not_ prevent the setattr phase from going ahead.
I.e., a successful __iadd__ in-place mutation does not happen "instead" of the setattr.
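
A small experiment along those lines (LoggedAttr is just a made-up name): a data descriptor that reports its __set__ calls shows that the store phase still runs even when __iadd__ has already mutated the value in place.

<code>
class LoggedAttr(object):
    """Data descriptor that announces every store, so the setattr phase
    of augmented assignment becomes visible."""
    def __init__(self, value):
        self.value = value
    def __get__(self, obj, objtype=None):
        return self.value
    def __set__(self, obj, value):
        print('__set__ called with %r' % (value,))
        self.value = value

class A(object):
    x = LoggedAttr([])

a = A()
a.x += [1]      # list.__iadd__ mutates the list in place ...
                # ... and '__set__ called with [1]' is printed anyway
print(a.x)      # [1]
</code>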

Regards,
Bengt Richter
Nov 6 '05 #148
Op 2005-11-04, Christopher Subich schreef <cs****************@spam.subich.block.com>:
Antoon Pardon wrote:
Except when your default is a list

class foo:
    x = []  # default

a = foo()
a.x += [3]

b = foo()
b.x

This results in [3]. So in this case using a class variable x to
provide a default empty list doesn't work out in combination
with augmented operators.
This has nothing to do with namespacing at all,


Yes it has.
it's the Python
idiosyncracy about operations on mutable types. In this case, +=
mutates an object, while + returns a new one -- as by definition, for
mutables.


It is the combination of the two.

If Python had chosen an approach like function namespaces, the problem
wouldn't have occurred either. What would have happened then is that the
compiler would have noticed the a.x on the right hand side and, based on
that fact, would have decided that all a.x references should be instance
references (at least in that function block). The a.x += ... would then
result in an AttributeError being raised.

You may prefer the current behaviour over this, but that is not the
point. The point is that resolution of name spaces does play its
role in this problem.
It also has little to do with mutable vs immutable types.
Someone could implement an immutable type but take advantage
of some implementation details to change the value in place
in the __iadd__ method. Such an immutable type would show
the same problems.
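
A contrived sketch of such a type, made up purely for illustration: from the outside it looks like an immutable value, but its __iadd__ quietly rewrites internal state, so a class-level default shared through inheritance changes for every instance, exactly as in the list case above.

<code>
class SneakyInt(object):
    """Presents itself as an immutable number, but __iadd__ cheats and
    updates the hidden state in place."""
    def __init__(self, v):
        self._v = v
    def value(self):
        return self._v
    def __iadd__(self, other):
        self._v += other         # "implementation detail": mutate instead of building a new object
        return self

class Foo(object):
    x = SneakyInt(0)             # meant as a per-instance default

a = Foo()
a.x += 3                         # mutates the object shared via the class ...
print(Foo.x.value())             # 3: ... so the "default" changed for every instance
print(Foo().x.value())           # 3
</code>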

--
Antoon Pardon
Nov 7 '05 #149
Op 2005-11-04, Steven D'Aprano schreef <st***@REMOVETHIScyber.com.au>:
On Fri, 04 Nov 2005 09:03:56 +0000, Antoon Pardon wrote:
Op 2005-11-03, Steven D'Aprano schreef <st***@REMOVETHIScyber.com.au>:
On Thu, 03 Nov 2005 13:01:40 +0000, Antoon Pardon wrote:

> Seems perfectly sane to me.
>
> What would you expect to get if you wrote b.a = b.a + 2?

I would expect a result consistent with the fact that both times
b.a would refer to the same object.

class RedList(list):
colour = "red"

L = RedList(())

What behaviour would you expect from len(L), given that L doesn't have a
__len__ attribute?
Since AFAICT there is no single reference to the __len__ attribute that
will be resolved to two different namespaces, I don't see the relevance.


Compare:

b.a += 2

Before the assignment, instance b does not have an attribute "a", so class
attribute "a" is accessed. You seem to be objecting to this inheritance.


I object to the inheritance in a scope where b.a also refers to the
instance.

If there is no problem that a reference can refer to different objects
in the same scope, then the following should work too.

a = 0
def f():
    a += 2

One can reason just the same that before the assignment f doesn't have
a local variable yet, so the global should be accessed. People who
don't agree don't want functions to have access to outer scope
variables.
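
For the record, here is what CPython actually does with the function version: the augmented assignment makes 'a' local for the whole body, so calling f raises UnboundLocalError instead of falling back to the global.

<code>
a = 0

def f():
    a += 2                       # 'a' is compiled as a local because it is assigned here

try:
    f()
except UnboundLocalError as e:
    print(e)                     # exact wording varies by version, but it is an UnboundLocalError
</code>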
Do you object to import searching multiple directories?

Why do you object to attribute resolution searching multiple namespaces?
I don't.
I don't see the relevance of these pieces of code. In none of them is
there an occurrence of an attribute lookup of the same attribute that
resolves to different namespaces.


Look a little more closely. In all three pieces of code, you have a
conflict between the class attribute 'ls' and an instance attribute 'ls'.


No, you look a little more closely.
In the first scenario, that conflict is resolved by insisting that
instances explicitly define an attribute, in other words, by making
instance attribute ONLY search the instance namespace and not the class
namespace.


No it isn't. You seem unable to distinguish between a resolution
in general and a resolution in a scope where an assignment has been
made.

--
Antoon Pardon
Nov 7 '05 #150
