Bytes IT Community

Is there a reason not to do this?


One of the things I find annoying about Python is that when you make a
change to a method definition that change is not reflected in existing
instances of a class (because you're really defining a new class when
you reload a class definition, not actually redefining it). So I came
up with this programming style:

def defmethod(cls):
    return lambda (func): type.__setattr__(cls, func.func_name, func)

class c1(object): pass

@defmethod(c1)
def m1(self, x): ...
Now if you redefine m1, existing instances of c1 will see the change.
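A runnable sketch of the idea (Python 3 spelling, where func.func_name becomes func.__name__ and plain setattr suffices; the class and method names are just illustrative):

```python
def defmethod(cls):
    # Decorator factory: attach the decorated function to cls as a method.
    def decorator(func):
        setattr(cls, func.__name__, func)
        return func
    return decorator

class c1(object):
    pass

@defmethod(c1)
def m1(self, x):
    return x + 1

obj = c1()                    # instance created before any redefinition
assert obj.m1(1) == 2

@defmethod(c1)                # "redefine" m1; the existing instance sees it,
def m1(self, x):              # because lookup goes through the class object
    return x * 10

assert obj.m1(1) == 10
```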

My question is: is there a reason not to do this? Does it screw
something up behind the scenes? Is it unpythonic? Why isn't this
standard operating procedure?

rg
Nov 30 '06 #1
16 Replies


Ron Garret schrieb:
> One of the things I find annoying about Python is that when you make a
> change to a method definition that change is not reflected in existing
> instances of a class (because you're really defining a new class when
> you reload a class definition, not actually redefining it). So I came
> up with this programming style:
>
> def defmethod(cls):
>     return lambda (func): type.__setattr__(cls, func.func_name, func)
>
> class c1(object): pass
>
> @defmethod(c1)
> def m1(self, x): ...
>
> Now if you redefine m1, existing instances of c1 will see the change.
>
> My question is: is there a reason not to do this? Does it screw
> something up behind the scenes? Is it unpythonic? Why isn't this
> standard operating procedure?
What are you doing that needs this permanent redefinition? I like the
repl, yet usually - especially when dealing with classes - I write a
text file containing code. So I just run that from the command line
again if I've made some changes, recreating whatever objects I want.

Even if I didn't do that, but used a long-running interpreter inside an
IDE (which is what I presume you are doing) - why do you _care_ about
the old objects in the first place? I mean, you obviously changed the
classes for a reason. So you are not being productive here, but still
programming. Which means that you don't _have_ to care about old,
unchanged objects too much.

But in the end - it's your code. It will run slower, and it looks kinda
weird, as someone reading it has to know what it's for, but if it
suits your needs - do it.

Diez
Nov 30 '06 #2

In article <4t*************@mid.uni-berlin.de>,
"Diez B. Roggisch" <de***@nospam.web.de> wrote:
> Ron Garret schrieb:
> > One of the things I find annoying about Python is that when you make a
> > change to a method definition that change is not reflected in existing
> > instances of a class (because you're really defining a new class when
> > you reload a class definition, not actually redefining it). So I came
> > up with this programming style:
> >
> > def defmethod(cls):
> >     return lambda (func): type.__setattr__(cls, func.func_name, func)
> >
> > class c1(object): pass
> >
> > @defmethod(c1)
> > def m1(self, x): ...
> >
> > Now if you redefine m1, existing instances of c1 will see the change.
> >
> > My question is: is there a reason not to do this? Does it screw
> > something up behind the scenes? Is it unpythonic? Why isn't this
> > standard operating procedure?
>
> What are you doing that needs this permanent redefinition? I like the
> repl, yet usually - especially when dealing with classes - I write a
> text file containing code. So I just run that from the command line
> again if I've made some changes, recreating whatever objects I want.
>
> Even if I didn't do that, but used a long-running interpreter inside an
> IDE (which is what I presume you are doing) - why do you _care_ about
> the old objects in the first place? I mean, you obviously changed the
> classes for a reason. So you are not being productive here, but still
> programming. Which means that you don't _have_ to care about old,
> unchanged objects too much.
>
> But in the end - it's your code. It will run slower, and it looks kinda
> weird, as someone reading it has to know what it's for, but if it
> suits your needs - do it.
>
> Diez
I have two motivations.

First, I'm dealing with some classes whose instances take a long time to
construct, which makes for a long debug cycle if I have to reconstruct
them after every code change.

Second, the design just naturally seems to break down that way. I have
a library that adds functionality (persistence) to a set of existing
classes. The persistence code stores the objects in a relational DB, so
it's pretty hairy code, and it has nothing to do with the functionality
that the classes actually provide. The original classes are useful even
without the persistence code, and I'm trying to keep things lightweight.

If there's a better way to accomplish all this I'm all ears.

rg
Dec 1 '06 #3

Ron Garret wrote:
> In article <4t*************@mid.uni-berlin.de>,
> "Diez B. Roggisch" <de***@nospam.web.de> wrote:
> > Ron Garret schrieb:
> > > One of the things I find annoying about Python is that when you make a
> > > change to a method definition that change is not reflected in existing
> > > instances of a class (because you're really defining a new class when
> > > you reload a class definition, not actually redefining it). So I came
> > > up with this programming style:
> > >
> > > def defmethod(cls):
> > >     return lambda (func): type.__setattr__(cls, func.func_name, func)
> > >
> > > class c1(object): pass
> > >
> > > @defmethod(c1)
> > > def m1(self, x): ...
> > >
> > > Now if you redefine m1, existing instances of c1 will see the change.
> > >
> > > My question is: is there a reason not to do this? Does it screw
> > > something up behind the scenes? Is it unpythonic? Why isn't this
> > > standard operating procedure?
1. Do you mean, is there an "it'll crash the interpreter" reason? No.
2. Not for most ordinary cases. There are a few gotchas (for example,
the use of __private variables), but they're minor.
3. Yes.
4. Because it's unPythonic.

But don't worry too much if you do something unPythonic occasionally.
Python says there should be one--and preferably only one--obvious way
to do it. So if there's no obvious way to do it, you probably have to
do something unPythonic. (Or consult someone with more experience. :)
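The __private-variable gotcha comes from name mangling: self.__x compiles to self._ClassName__x only inside a class body, so a method defined at module level and attached afterwards is not mangled and misses the attribute. A minimal illustration (all names here are hypothetical):

```python
class Secretive(object):
    def __init__(self):
        self.__x = 42          # stored as self._Secretive__x (mangled)

def peek(self):
    # Defined outside any class body, so __x is NOT mangled here:
    # this looks for an attribute literally named '__x'.
    return self.__x

Secretive.peek = peek          # tack the method on after the fact

s = Secretive()
failed = False
try:
    s.peek()
except AttributeError:
    failed = True
assert failed                  # the external method cannot see the mangled name
assert s._Secretive__x == 42   # the data is there, under the mangled name
```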
[snip]
> I have two motivations.
>
> First, I'm dealing with some classes whose instances take a long time to
> construct, which makes for a long debug cycle if I have to reconstruct
> them after every code change.
>
> Second, the design just naturally seems to break down that way. I have
> a library that adds functionality (persistence) to a set of existing
> classes. The persistence code stores the objects in a relational DB, so
> it's pretty hairy code, and it has nothing to do with the functionality
> that the classes actually provide. The original classes are useful even
> without the persistence code, and I'm trying to keep things lightweight.
Seems like a reasonable use case to me.
> If there's a better way to accomplish all this I'm all ears.
A straightforward, Pythonic way to do it would be to create an
intermediate representation that understands both the existing class
interfaces and the RDB stuff, but that could lead to synchronization
problems and a big hit in performance. And it's probably a lot of work
compared to tacking on methods. OTOH, it could help with the hairiness
you mention. (I recently did something similar in one of my projects,
though the intermediary was transient.)

You might be able to create a set of augmented subclasses to use
instead. The main problem with this is that base classes often don't
know about the augmented versions. You'd have to override code that
requires an ordinary object with code that allows an augmented object.
This is sometimes very inconvenient (like when the code requiring an
ordinary object is one line smack in the middle of a 100-line
function).

It actually sounds like Aspect Oriented Programming might be helpful
here (if you care to learn another wholly different programming
paradigm, that is). You have a concern (persistence) that's pretty
much off in another dimension from the purpose of the classes.
Or maybe the best way is just to teach an old class new tricks.
Carl Banks

Dec 1 '06 #4

"Carl Banks" <pa************@gmail.comwrote in message
news:11**********************@f1g2000cwa.googlegro ups.com...
> A straightforward, Pythonic way to do it would be to create an
> intermediate representation that understands both the existing class
> interfaces and the RDB stuff, but that could lead to synchronizing
> problems and a big hit in performance. And it's probably a lot of work
> compared to tacking on methods. OTOH, it could help with hairiness you
> mention. (I recently did something similar in one of my projects,
> though the intermediary was transient.)
I would second Carl's recommendation that you find some way to persist an
interim version of these expensive-to-create objects, so you can quickly
load debuggable instances to accelerate your development process. With
luck, you can get by with out-of-the-box marshal/unmarshal using the pickle
module. We've done this several times in my office with objects that an
application creates only after some extensive GUI interaction - it just
slows down development too much without some quick import of debuggable
instances.

Even though this seems like a sidetrack, it's a pretty direct shortcut,
without too much unusual technology or design work.
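A minimal sketch of that pickle shortcut (ExpensiveThing is a hypothetical stand-in for the slow-to-construct object):

```python
import os
import pickle
import tempfile

class ExpensiveThing(object):
    def __init__(self):
        # Imagine minutes of GUI interaction or computation here.
        self.state = list(range(5))

obj = ExpensiveThing()

# Persist the steady-state instance once...
fd, path = tempfile.mkstemp(suffix=".pkl")
os.close(fd)
with open(path, "wb") as f:
    pickle.dump(obj, f)

# ...then later debug sessions restart from the snapshot instead of
# reconstructing. Note that pickle stores instance state, not code, so a
# reloaded instance picks up the current class definition.
with open(path, "rb") as f:
    restored = pickle.load(f)
os.remove(path)

assert restored.state == obj.state
```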

-- Paul
Dec 1 '06 #5

"Ron Garret" <rN*******@flownet.comwrote:

>
One of the things I find annoying about Python is that when you make a
change to a method definition that change is not reflected in existing
instances of a class (because you're really defining a new class when
you reload a class definition, not actually redefining it). So I came
up with this programming style:
I would have thought that not changing yesterday was the very essence of
dynamism (dynamicness ??) - but that when you change something - it applies from
that point in time forwards...

What do you propose to do about the outputs from such classes that have already
happened?

confused - Hendrik

Dec 1 '06 #6

In article <ma**************************************@python.org>,
"Hendrik van Rooyen" <ma**@microcorp.co.za> wrote:

> "Ron Garret" <rN*******@flownet.com> wrote:
> > One of the things I find annoying about Python is that when you make a
> > change to a method definition that change is not reflected in existing
> > instances of a class (because you're really defining a new class when
> > you reload a class definition, not actually redefining it). So I came
> > up with this programming style:
>
> I would have thought that not changing yesterday was the very essence of
> dynamism (dynamicness ??) - but that when you change something - it applies
> from that point in time forwards...
I don't want to get into a philosophical debate. I'll just point you to
CLOS as an example of an object system that already works this way.
> What do you propose to do about the outputs from such classes that have
> already happened?
The ability to change methods on the fly will be used mainly for
debugging I expect.

rg
Dec 1 '06 #7

In article <rN*****************************@news.gha.chartermi.net>,
Ron Garret <rN*******@flownet.com> wrote:

> In article <ma**************************************@python.org>,
> "Hendrik van Rooyen" <ma**@microcorp.co.za> wrote:
> > "Ron Garret" <rN*******@flownet.com> wrote:
> > > One of the things I find annoying about Python is that when you make a
> > > change to a method definition that change is not reflected in existing
> > > instances of a class (because you're really defining a new class when
> > > you reload a class definition, not actually redefining it). So I came
> > > up with this programming style:
> >
> > I would have thought that not changing yesterday was the very essence of
> > dynamism (dynamicness ??) - but that when you change something - it applies
> > from that point in time forwards...
>
> I don't want to get into a philosophical debate.
Actually, I changed my mind. Consider:

def g(): print 'G'

def h(): print 'H'

def f(): g()

class C1:
    def m1(self): f()

class C2:
    def m1(self): g()

c1 = C1()
c2 = C2()

def f(): h()

class C2:
    def m1(self): h()

c1.m1() # Prints H
c2.m1() # Prints G

On what principled basis can you justify two different outputs in this
case? Why should I be able to change the definition of f and not have
to go back and recompile all references to it, but not m1?
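The two outputs follow from when names are resolved: the call to f inside C1.m1 goes through the module globals at call time, while c2 carries a direct reference to the old class object, so rebinding the name C2 never reaches it. A minimal sketch of the same example, using return values instead of prints so it can be checked:

```python
def g(): return 'G'
def h(): return 'H'
def f(): return g()

class C1:
    def m1(self): return f()   # f is resolved in globals at call time

class C2:
    def m1(self): return g()

c1 = C1()
c2 = C2()
old_C2 = C2                    # keep a handle on the first class object

def f(): return h()            # rebinds the global name f

class C2:                      # rebinds the global name C2; the old class
    def m1(self): return h()   # object survives, still referenced by c2

assert c1.m1() == 'H'          # c1 sees the new f immediately
assert c2.m1() == 'G'          # c2 still uses the old C2's m1
assert c2.__class__ is old_C2 and c2.__class__ is not C2
```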

rg
Dec 1 '06 #8

In article <z3*****************@tornado.texas.rr.com>,
"Paul McGuire" <pt***@austin.rr._bogus_.com> wrote:

> "Carl Banks" <pa************@gmail.com> wrote in message
> news:11**********************@f1g2000cwa.googlegroups.com...
> > A straightforward, Pythonic way to do it would be to create an
> > intermediate representation that understands both the existing class
> > interfaces and the RDB stuff, but that could lead to synchronizing
> > problems and a big hit in performance. And it's probably a lot of work
> > compared to tacking on methods. OTOH, it could help with hairiness you
> > mention. (I recently did something similar in one of my projects,
> > though the intermediary was transient.)
>
> I would second Carl's recommendation that you find some way to persist an
> interim version of these expensive-to-create objects, so you can quickly
> load debuggable instances to accelerate your development process. With
> luck, you can get by with out-of-the-box marshal/unmarshal using the pickle
> module. We've done this several times in my office with objects that an
> application creates only after some extensive GUI interaction - it just
> slows down development too much without some quick import of debuggable
> instances.
>
> Even though this seems like a sidetrack, it's a pretty direct shortcut,
> without too much unusual technology or design work.
These objects can be parts of huge networks of massively linked data
structures. They are in constant flux. It is not uncommon to hit a bug
after many minutes, sometimes hours, of computation. Having to store
the whole shlemobble after every operation would slow things down by
orders of magnitude. And writing code to be clever and only store the
dirty bits would be a pain in the ass. I think I'll stick with Plan A.

rg
Dec 1 '06 #9

Ron Garret wrote:
> One of the things I find annoying about Python is that when you make a
> change to a method definition that change is not reflected in existing
> instances of a class (because you're really defining a new class when
> you reload a class definition, not actually redefining it). So I came
> up with this programming style:
>
> def defmethod(cls):
>     return lambda (func): type.__setattr__(cls, func.func_name, func)
Why not just ``return lambda func: setattr(cls, func.func_name, func)``?
Your approach is certainly uncommon, but for your use case it seems to me
a pretty much decent solution. The only thing I don't like is that all
your functions/methods will end up being 'None'. I'd rather be able to
use the help, so I would write

def defmethod(cls):
    def decorator(func):
        setattr(cls, func.func_name, func)
        return func
    return decorator

@defmethod(C)
def m1(self, x): pass

help(m1)
BTW, for people with a Lisp background I recommend using IPython with
Emacs and the ipython.el mode. It is pretty good, even if not comparable
to Slime.

Michele Simionato

Dec 1 '06 #10

"Ron Garret" <rN*******@flownet.comwrote in message
news:rN*****************************@news.gha.char termi.net...
>
These objects can be parts of huge networks of massively linked data
structures. They are in constant flux. It is not uncommon to hit a bug
after many minutes, sometimes hours, of computation. Having to store
the whole shlemobble after every operation would slow things down by
orders of magnitude. And writing code to be clever and only store the
dirty bits would be a pain in the ass. I think I'll stick with Plan A.

rg
Sorry, not quite what I meant, I'm not suggesting storing everything after
every change. I just meant that to help your development, once you get some
instances to a steady state, persist them off in some picklish format, so
you can restart quickly by unpickling, instead of dynamically
reconstructing. But you know your problem domain better than I, so I'll
shut up. Best of luck to you.

-- Paul
Dec 1 '06 #11

Ron Garret wrote:
> In article <rN*****************************@news.gha.chartermi.net>,
> Ron Garret <rN*******@flownet.com> wrote:
> > I don't want to get into a philosophical debate.
>
> Actually, I changed my mind. Consider:
>
> def g(): print 'G'
>
> def h(): print 'H'
>
> def f(): g()
>
> class C1:
>     def m1(self): f()
>
> class C2:
>     def m1(self): g()
>
> c1 = C1()
> c2 = C2()
>
> def f(): h()
>
> class C2:
>     def m1(self): h()
>
> c1.m1() # Prints H
> c2.m1() # Prints G
>
> On what principled basis can you justify two different outputs in this
> case? Why should I be able to change the definition of f and not have
> to go back and recompile all references to it, but not m1?
I see what you were asking now: you want to know why a class statement
doesn't modify a previously existing class (as is the case in Ruby)
rather than creating a new one.

The principle behind this is pretty much "it was just a language design
decision".

The designers of Python felt it was generally best to have whole
classes in one place, rather than spread out over many locations. I
tend to agree with this. Changing classes in-place violates the
"principle of least surprise"--keep in mind the "surprise" we're
talking about is the reader's surprise, not the writer's. A person
might be reading a class definition wondering, "WTF is happening, why
doesn't it match the behavior?", not knowing that the class was
modified in-place somewhere else. (That person could be you three
months later.)

Valid use cases like yours are exceptional, and can be done
straightforwardly without changing class statement to modify in-place,
so I think it was the right decision.

Your opinion may differ. It doesn't seem to have wreaked havoc in
Common Lisp and Ruby. But that's not how Python is. I have things I
don't like about Python, too. You just deal with it.
P.S. If you want to be truly evil, you could use a class hook to get
the modifying in-place behavior:

def modify_in_place(name,bases,clsdict):
    cls = globals()[name]
    for attr,val in clsdict.iteritems():
        setattr(cls,attr,val)
    return cls

# Replace second C2 class above with this
class C2:
    __metaclass__ = modify_in_place
    def m1(self): h()
Carl Banks

Dec 1 '06 #12

In article <11**********************@l12g2000cwl.googlegroups.com>,
"Carl Banks" <pa************@gmail.com> wrote:

> The principle behind this is pretty much "it was just a language design
> decision".
Yes, and I'm not taking issue with the decision, just pointing out that
the desire to do things differently is not necessarily perverse.
> P.S. If you want to be truly evil, you could use a class hook to get
> the modifying in-place behavior:
>
> def modify_in_place(name,bases,clsdict):
>     cls = globals()[name]
>     for attr,val in clsdict.iteritems():
>         setattr(cls,attr,val)
>     return cls
>
> # Replace second C2 class above with this
> class C2:
>     __metaclass__ = modify_in_place
>     def m1(self): h()
Doesn't work for me:
>>> c2
<__main__.C2 instance at 0x51e850>
>>> c2.m1()
G
>>> class C2:
...     __metaclass__ = modify_in_place
...     def m1(self): print 'Q'
...
>>> c2.m1()
G
>>> C2().m1()
Q

rg
Dec 1 '06 #13

In article <11**********************@79g2000cws.googlegroups.com>,
"Michele Simionato" <mi***************@gmail.com> wrote:

> Ron Garret wrote:
> > One of the things I find annoying about Python is that when you make a
> > change to a method definition that change is not reflected in existing
> > instances of a class (because you're really defining a new class when
> > you reload a class definition, not actually redefining it). So I came
> > up with this programming style:
> >
> > def defmethod(cls):
> >     return lambda (func): type.__setattr__(cls, func.func_name, func)
>
> Why not just ``return lambda func: setattr(cls, func.func_name, func)``?
Because I'm an idiot. (i.e. yes, that is obviously the right way to do
it.)
> The only thing I don't like is that all your
> functions/methods will end up being 'None'.
> I'd rather be able to use
> the help, so I would write
>
> def defmethod(cls):
>     def decorator(func):
>         setattr(cls, func.func_name, func)
>         return func
>     return decorator
>
> @defmethod(C)
> def m1(self, x): pass
>
> help(m1)
> BTW, for people with a Lisp background I recommend using IPython with
> Emacs and the ipython.el mode. It is pretty good, even if not comparable
> to Slime.

Michele Simionato
Good tips. Thanks!

rg
Dec 1 '06 #14

"Ron Garret" <rN*******@flownet.comwrote:
I don't want to get into a philosophical debate.

Actually, I changed my mind. Consider:

def g(): print 'G'

def h(): print 'H'

def f(): g()

class C1:
def m1(self): f()

class C2:
def m1(self): g()

c1 = C1()
c2 = C2()

def f(): h()

class C2:
def m1(self): h()

c1.m1() # Prints H
c2.m1() # Prints G

On what principled basis can you justify two different outputs in this
case? Why should I be able to change the definition of f and not have
to go back and recompile all references to it, but not m1?
This feels to me as if you are changing the specification of what wood to use
from yellowwood to teak after the chair has already been made.

But maybe I am just simple minded...

- Hendrik

Dec 2 '06 #15

Since we are in a hackish mood, another alternative - interesting if you
want the freedom to update your instances selectively - is to change
their class at runtime. In this way you can specify which instances must
use the new version of the class and which ones must keep the old one.
It may be useful for debugging purposes too. Here is some code, to get
you started:

def update(obj):
    """Look if the class of obj has been redefined in the global
    namespace: if so, update obj"""
    cls = globals().get(obj.__class__.__name__)
    if cls and cls is not obj.__class__:
        obj.__class__ = cls

class C(object): # old class
    def m1(self):
        return 1

c = C() # old instance
assert c.m1() == 1

class C(object): # new class
    def m1(self):
        return 2

update(c) # old instance updated
assert c.m1() == 2

Michele Simionato

Dec 2 '06 #16

Ron Garret wrote:
> Doesn't work for me:
>
> >>> c2
> <__main__.C2 instance at 0x51e850>
> >>> c2.m1()
> G
> >>> class C2:
> ...     __metaclass__ = modify_in_place
> ...     def m1(self): print 'Q'
> ...
> >>> c2.m1()
> G
> >>> C2().m1()
> Q
I assume your original C2 class was defined in a different module, not
in the current global namespace. What do you get from
c.__class__.__module__ ? It should be __name__ for this approach to
work.

class C2:
    def m1(self):
        return 'G'

c = C2()

def modify_in_place(name,bases,clsdict):
    cls = globals()[name]
    for attr,val in clsdict.iteritems():
        setattr(cls,attr,val)
    return cls

# Replace second C2 class above with this
class C2:
    __metaclass__ = modify_in_place
    def m1(self):
        return 'Q'

assert c.m1() == 'Q'
# make sure c.__class__ is defined in the current module
assert c.__class__.__module__ == __name__
Michele Simionato

Dec 2 '06 #17
