Bytes | Software Development & Data Engineering Community

Deprecating reload() ???

> > Other surprises: Deprecating reload()
> > Reload doesn't work the way most people think
> > it does: if you've got any references to the old module,
> > they stay around. They aren't replaced. It was a good idea, but the
> > implementation simply doesn't do what the idea promises.


I agree that it does not really work as most people think it does, but how
would you perform the same task as reload() without the reload()?

Would this be done by "del sys.modules['modulename']" and then perform an
'import'?

I use reload() for many purposes, knowing how it works/does not work.
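The "del sys.modules" approach can be sketched end-to-end. This is a self-contained toy (the module name demo_mod and the temp-directory scaffolding are made up for illustration), and it shows the key difference from reload(): you get a brand-new module object, so lingering references keep the entire old module:

```python
import os
import sys
import tempfile

sys.dont_write_bytecode = True          # keep the demo free of stale .pyc files

# Create a throwaway module on disk (the name "demo_mod" is made up).
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "demo_mod.py"), "w") as f:
    f.write("x = 1\n")
sys.path.insert(0, moddir)

import demo_mod
old = demo_mod                          # a lingering reference to the module
assert demo_mod.x == 1

# Edit the source, then force a fresh import instead of calling reload():
with open(os.path.join(moddir, "demo_mod.py"), "w") as f:
    f.write("x = 2\n")
del sys.modules["demo_mod"]
import demo_mod                         # re-executes the source as a NEW module object

assert demo_mod.x == 2
assert old.x == 1                       # unlike reload(), old refs keep the old module
```

With reload() the same module object would have been re-executed in place, so `old.x` would also show 2; that trade-off is exactly what this thread is about.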

Lance Ellinghaus

Jul 18 '05

"David MacQuigg" <dm*@gain.com> wrote in message
news:fc********************************@4ax.com...
On Mon, 15 Mar 2004 11:33:04 -0600, Jeff Epler <je****@unpythonic.net>
wrote:
It's worse than just a misunderstanding. It's a serious limitation on
what we can do with editing a running program.
Well, that's a definite yes and no. The limitation is quite simple:
any object in the module that has a reference from outside of the
module will not have that reference changed. It will continue to
refer to the old copy of the object.

The solution to this is to apply some design discipline. Systems
exist that have absolute "cannot come down for any reason"
type of requirements where software has to be replaced while the
system is running. It's not impossible, it simply requires a great
deal of discipline in not allowing references to wander all over the
place.

As far as updating in place while debugging, there are a few
solutions that, so far, haven't been implemented. One is to
notice that functions are objects that have a reference to another
object called a "code object." This is the actual result of the
compilation, and if you go behind the scenes and replace the
code object reference in the function object, you've basically
done an update in place - as long as you don't have a stack
frame with a pointer into the old code object! (The stack frame
could, of course, be fixed too. Extra credit for doing so.)
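A minimal sketch of that code-object swap (in today's Python the attribute is spelled `__code__`; in the 2.x of this thread it was `func_code`). The function names are made up for the demo:

```python
def greet(name):
    return "hello, " + name        # the "old" compiled code object

def greet_v2(name):
    return "HELLO, " + name        # the replacement behaviour

alias = greet                      # an outside reference to the function object

# A function object holds a reference to its code object; swapping that
# reference updates the function in place, so every existing reference
# (including `alias`) sees the new behaviour -- no reload required.
greet.__code__ = greet_v2.__code__

assert alias("world") == "HELLO, world"
assert greet("world") == "HELLO, world"
```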

I can easily imagine a development environment that could
do this kind of magic. If someone wants to build it, I'm
certainly not going to stop them (and wouldn't even if I could.)
I might even find it useful!

The thing that is not going to work, ever, is having reload()
do the work for you.
I don't agree that
what it does now is well defined (at least not in the documentation).
It's well enough defined for someone who knows how Python works.
The discussion in Learning Python is totally misleading. We should at
least update the description of the reload function in the Python
Library Reference. See the thread "Reload Confusion" for some
suggested text.
I agree with that: instead of "There are some caveats" it should say:

WARNING - references to objects in the old copy of the module
that have leaked out of the module will NOT be replaced. A few of
the implications are:

and then continue with the current text (my version of the doc is 2.3.2).

John Roth
-- Dave

Jul 18 '05 #51
David MacQuigg <dm*@gain.com> wrote in message news:<fc********************************@4ax.com>...
On Mon, 15 Mar 2004 11:33:04 -0600, Jeff Epler <je****@unpythonic.net>
wrote:
The only problem I can see with reload() is that it doesn't do what you
want. But on the other hand, what reload() does is perfectly well
defined, and at least the avenues I've seen explored for "enhancing" it
look, well, like train wreck.


It's worse than just a misunderstanding. It's a serious limitation on
what we can do with editing a running program.


As I said in another message, you CAN do the kinds of things you want
to do (edit-and-continue), if you use weakrefs, and use classes
instead of modules. Take a look at the weakref module. I am not saying
that it's trivial in Python: it does require a bit of work, but
edit-and-continue in Python is definitely doable. I've done it, many
other people have done it.

(Of course, if you asked whether the Ruby behavior is better, I'd
think so. I think it's better to automatically replace class behavior
on reload, by default, and leave open the possibility of explicitly
refusing replacement. Python is the opposite: by default the class
behavior does NOT get modified, and you have to do some work to replace
it.)

I think it is a historical accident in Python that modules are not
made more class-like. Another thing I have seen people wish for is
getter/setter accessor methods (or properties) for module-level
attributes.

It is usually better practice in Python to store attributes in
classes rather than modules, exactly because down the road you will
often start to wish for class-like behaviors in your modules.
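One reason classes wear better here: an attribute updated on a class is immediately visible through every instance, whereas a module-level value copied out with `x = M1.x` goes stale. A small illustration (not the weakref scheme mentioned above, just the attribute-lookup difference; all names are made up):

```python
import types

class Config:
    limit = 10                     # class attribute, looked up on each access

c = Config()
assert c.limit == 10

Config.limit = 20                  # update the class-level value in place
assert c.limit == 20               # every existing instance sees the change

M1 = types.ModuleType("M1")        # stand-in module (made-up name)
M1.limit = 10
x = M1.limit                       # a direct copy of the value

M1.limit = 20                      # update the module attribute
assert x == 10                     # ...but the copied name is stale
```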

regards,

Hung Jung
Jul 18 '05 #52
On Mon, 15 Mar 2004 13:50:47 -0600, Skip Montanaro <sk**@pobox.com>
wrote:
Dave> We should at least update the description of the reload function
Dave> in the Python Library Reference. See the thread "Reload
Dave> Confusion" for some suggested text.

Please file a bug report on Sourceforge so your ideas don't get lost. Feel
free to assign it to me (sf id == "montanaro").


Will do. I would like to get a few comments from folks following this
thread before submitting the proposed text to Sourceforge. Here is
the summary:
"""
To summarize what happens with reload(M1):

The module M1 must have been already imported in the current
namespace.

The new M1 is executed, and new objects are created in memory.

The names in the M1 namespace are updated to point to the new objects.
No other references are changed. References to objects removed from
the new module remain unchanged.

Previously created references from other modules to the old objects
remain unchanged and must be updated in each namespace where they
occur.

The old objects remain in memory until all references to them are
gone.
"""

To read this in context, with a nice example, background discussion,
etc. see http://ece.arizona.edu/~edatools/Python/Reload.htm I think
I've finally got it right, but I'm always prepared for another
surprise.

Once I'm confident that my understanding is correct, I'll see if I can
weave this into the existing text of the Library Reference. I may try
also to put in some of the "motivation" for the way things are (from
the Background section of the above webpage.)

-- Dave

Jul 18 '05 #53
David MacQuigg wrote:
On Mon, 15 Mar 2004 05:49:58 -0600, Skip Montanaro <sk**@pobox.com>
wrote:
Dave> Maybe we could somehow switch off the generation of shared objects
Dave> for modules in a 'debug' mode.

You'd have to disable the integer free list. There's also code in
tupleobject.c to recognize and share the empty tuple. String interning
could be disabled as well. Everybody's ignored the gorilla in the room:
>>> sys.getrefcount(None)

1559


Implementation detail. ( half wink )
In general, I don't think that disabling immutable object sharing would be
worth the effort. Consider the meaning of module level integers. In my
experience they are generally constants and are infrequently changed once
set. Probably the only thing worth tracking down during a super reload
would be function, class and method definitions.


If you reload a module M1, and it has an attribute M1.x, which was
changed from '1' to '2', we want to change also any references that
may have been created with statements like 'x = M1.x', or 'from M1
import *' If we don't do this, reload() will continue to baffle and
frustrate new users.

What if one of your users does something like 'y = M1.x + 1'; then
what are you going to do?

It seems to me that your noble effort to make reload() completely
foolproof is ultimately in vain: there are just too many opportunities
for a module's variables to affect things far away.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #54
On Tue, 16 Mar 2004 04:43:01 GMT, Carl Banks
<im*****@aerojockey.invalid> wrote:
What if one of your users does something like 'y = M1.x + 1'; then
what are you going to do?
The goal is *not* to put the program into the state it "would have
been" had the changes in M1 been done earlier. That is impossible.
We simply want to have all *direct* references to objects in M1 be
updated. A direct reference, like 'y = M1.x', sets 'y' to the same
object as 'M1.x'. The 'y' in the example above points to a new object,
with an identity different from anything in the M1 module. It should
not get updated.
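The distinction can be shown with the `is` operator. A stand-in module object is enough here, since identity is all that matters (names are made up):

```python
import types

M1 = types.ModuleType("M1")        # stand-in module (made-up name)
M1.x = [1, 2]

y_direct  = M1.x                   # direct reference: the very same object
y_derived = M1.x + [3]             # derived value: a brand-new object

assert y_direct is M1.x            # a super_reload could find and rebind this
assert y_derived is not M1.x       # ...but has no way to recognize this one
```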
It seems to me that your noble effort to make reload() completely
foolproof is ultimately in vain: there are just too many opportunities
for a module's variables to affect things far away.


It all depends on your goals for reload(). To me, updating all direct
references is a worthy goal, would add a lot of utility, and is easy
to explain. Going further than that, updating objects that are "only
one operation away from a direct reference" for example, gets into a
grey area where I see no clear line we can draw. There might be some
benefit, but the cost in user confusion would be too great.

Reload() will always be a function that needs to be used cautiously.
Changes in a running program can propagate in strange ways. "Train
wreck" was the term another poster used.

-- Dave

Jul 18 '05 #55
What if one of your users does something like 'y = M1.x + 1'; then
what are you going to do?
Dave> The goal is *not* to put the program into the state it "would have
Dave> been" had the changes in M1 been done earlier. That is
Dave> impossible. We simply want to have all *direct* references to
Dave> objects in M1 be updated.

The above looks like a pretty direct reference to M1.x. <0.5 wink>

It seems to me that you have a continuum from "don't update anything" to
"track and update everything":

    don't update     update global      update all direct    update
    anything         funcs/classes      references           everything

    reload()         super_reload()     Dave                 nobody?

Other ideas have been mentioned, like fiddling the __bases__ of existing
instances or updating active local variables. I'm not sure precisely where
those concepts fall on the continuum. Certainly to the right of
super_reload() though.

In my opinion you do what's easy as a first step then extend it as you can.
I think you have to punt on shared objects (ints, None, etc). This isn't
worth changing the semantics of the language even in some sort of
interactive debug mode.

Sitting for long periods in an interactive session and expecting it to track
your changes is foreign to me. I will admit to doing stuff like this for
short sessions:
>>> import foo
>>> x = foo.Foo(...)
>>> x.do_it()
...
TypeError ...
>>> # damn! tweak foo.Foo class in emacs
>>> reload(foo)
>>> x = foo.Foo(...)
>>> x.do_it()
...

but that's relatively rare, doesn't go on for many cycles, and is only made
tolerable by the presence of readline/command retrieval/copy-n-paste in the
interactive environment.

Maybe it's just the nature of your users and their background, but an
(edit/test/run)+ cycle seems much more common in the Python community than a
run/(edit/reload)+ cycle. Note the missing "test" from the second cycle and
from the above pseudo-transcript. I think some Python programmers would
take the opportunity to add an extra test case to their code in the first
cycle, where in the second cycle the testing is going on at the interactive
prompt where it can get lost. "I don't need to write a test case. It will
just slow me down. The interactive session will tell me when I've got it
right." Of course, once the interactive sessions has ended, the sequence of
statements you executed is not automatically saved. You still need to pop
back to your editor to take care of that. It's a small matter of
discipline, but then so is not creating aliases in the first place.

Dave> Reload() will always be a function that needs to be used
Dave> cautiously. Changes in a running program can propagate in strange
Dave> ways. "Train wreck" was the term another poster used.

Precisely. You may wind up making reload() easier to explain in the common
case, but introduce subtleties which are tougher to predict (instances whose
__bases__ change or don't change depending how far along the above continuum
you take things). I think changing the definitions of functions and classes
will be the much more likely result of edits requiring reloads than tweaking
small integers or strings. Forcing people to recreate instances is
generally not that big of a deal.

Finally, I will drag the last line out of Tim's "The Zen of Python":

Namespaces are one honking great idea -- let's do more of those!

By making it easier for your users to get away with aliases like

x = M1.x

you erode the namespace concept ever so slightly just to save typing a
couple extra characters or executing a couple extra bytecodes. Why can't
they just type M1.x again? I don't think the savings is really worth it in
the long run.

Skip

Jul 18 '05 #56
On Tue, 16 Mar 2004 08:58:38 -0600, Skip Montanaro <sk**@pobox.com>
wrote:
[snip]
It seems to me that you have a continuum from "don't update anything" to
"track and update everything":

    don't update     update global      update all direct    update
    anything         funcs/classes      references           everything

    reload()         super_reload()     Dave                 nobody?

Other ideas have been mentioned, like fiddling the __bases__ of existing
instances or updating active local variables. I'm not sure precisely where
those concepts fall on the continuum. Certainly to the right of
super_reload() though.

In my opinion you do what's easy as a first step then extend it as you can.
I think you have to punt on shared objects (ints, None, etc). This isn't
worth changing the semantics of the language even in some sort of
interactive debug mode.
I agree, punt is the right play for now, but I want to make one
clarification, in case we need to re-open this question. The semantic
change we are talking about applies only to the 'is' operator, and
only to a few immutable objects which are created via a reload of a
module in "debug" mode. All other objects, including those from other
modules remain unchanged. Objects like None, 1, 'abc', which are
treated as shared objects in normal modules, will be given a unique ID
when loaded from a module in debug mode. This means you will have to
use '==' to test equality of those objects, not 'is'. Since 'is' is
already a tricky, implementation-dependent operator that is best
avoided in these situations, the impact of this added option seems far
less than "changing the semantics of the language".

I'll hold off on any push for expanding super_reload until I have a
good use case. Meanwhile, I'll assume I can work with the existing
reload. This will probably involve a combination of programming
discipline and user education. In programming, I'll do what I can to
avoid problems with the modules I expect will be reloaded. Where
"direct" references are made to constants or other objects in a
reloaded module, I'll be sure to refresh those references at the
beginning of each code section. In my user manual, there will
probably be statements like: """Don't attempt to reload the stats
module while a simulator is active. The reload function can save
having to restart your entire session, but it does not update
functions or data that have already been sent to the simulator. As
with reloading statefiles, always kill and restart the simulator after
a reload of the stats module."""
Sitting for long periods in an interactive session and expecting it to track
your changes is foreign to me. I will admit to doing stuff like this for
short sessions:
>>> import foo
>>> x = foo.Foo(...)
>>> x.do_it()
...
TypeError ...
>>> # damn! tweak foo.Foo class in emacs
>>> reload(foo)
>>> x = foo.Foo(...)
>>> x.do_it()
...

but that's relatively rare, doesn't go on for many cycles, and is only made
tolerable by the presence of readline/command retrieval/copy-n-paste in the
interactive environment.

Maybe it's just the nature of your users and their background, but an
(edit/test/run)+ cycle seems much more common in the Python community than a
run/(edit/reload)+ cycle. Note the missing "test" from the second cycle and
from the above pseudo-transcript. I think some Python programmers would
take the opportunity to add an extra test case to their code in the first
cycle, where in the second cycle the testing is going on at the interactive
prompt where it can get lost. "I don't need to write a test case. It will
just slow me down. The interactive session will tell me when I've got it
right." Of course, once the interactive sessions has ended, the sequence of
statements you executed is not automatically saved. You still need to pop
back to your editor to take care of that. It's a small matter of
discipline, but then so is not creating aliases in the first place.


This is a good description of the program development cycle. My users
(circuit design engineers) won't be doing program development, but
will be making changes in existing data and functions. My goal is to
make that as easy as possible. The biggest step is offering Python as
the scripting language, rather than SKILL, OCEAN, MDL, or a number of
other CPLs (complex proprietary languages). I expect them to learn
in two days enough Python to understand a function definition, and to
be able to edit that definition, making it do whatever they want.
Dave> Reload() will always be a function that needs to be used
Dave> cautiously. Changes in a running program can propagate in strange
Dave> ways. "Train wreck" was the term another poster used.

Precisely. You may wind up making reload() easier to explain in the common
case, but introduce subtleties which are tougher to predict (instances whose
__bases__ change or don't change depending how far along the above continuum
you take things). I think changing the definitions of functions and classes
will be the much more likely result of edits requiring reloads than tweaking
small integers or strings. Forcing people to recreate instances is
generally not that big of a deal.

Finally, I will drag the last line out of Tim's "The Zen of Python":

Namespaces are one honking great idea -- let's do more of those!

By making it easier for your users to get away with aliases like

x = M1.x

you erode the namespace concept ever so slightly just to save typing a
couple extra characters or executing a couple extra bytecodes. Why can't
they just type M1.x again? I don't think the savings is really worth it in
the long run.


def h23(freq):
    s = complex(2*pi*freq)
    h0 = PZfuncs.h0
    z1 = PZfuncs.z1; z2 = PZfuncs.z2
    p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

Notice the clarity in that last formula. This is a standard form of a
pole-zero transfer function that will be instantly recognized by a
circuit design engineer. The issue isn't the typing of extra
characters, but the compactness of expressions.

In this case we avoid the problem of local variables out-of-sync with
a reloaded module by refreshing those variables with every call to the
function. In other cases, this may add too much overhead to the
computation.

-- Dave

Jul 18 '05 #57

Dave> I agree, punt is the right play for now, but I want to make one
Dave> clarification, in case we need to re-open this question. The
Dave> semantic change we are talking about applies only to the 'is'
Dave> operator, and only to a few immutable objects which are created
Dave> via a reload of a module in "debug" mode. All other objects,
Dave> including those from other modules remain unchanged. Objects like
Dave> None, 1, 'abc', which are treated as shared objects in normal
Dave> modules, will be given a unique ID when loaded from a module in
Dave> debug mode. This means you will have to use '==' to test equality
Dave> of those objects, not 'is'. Since 'is' is already a tricky,
Dave> implementation-dependent operator that is best avoided in these
Dave> situations, the impact of this added option seems far less than
Dave> "changing the semantics of the language".

Don't forget all the C code. C programmers know the object which represents
None is unique, so their code generally looks like this snippet from
Modules/socketmodule.c:

if (arg == Py_None)
    timeout = -1.0;

"==" in C is the equivalent of "is" in Python. If you change the uniqueness
of None, you have a lot of C code to change.

Dave> def h23(freq):
Dave>     s = complex(2*pi*freq)
Dave>     h0 = PZfuncs.h0
Dave>     z1 = PZfuncs.z1; z2 = PZfuncs.z2
Dave>     p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3
Dave>     return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

Dave> Notice the clarity in that last formula.

Yeah, but h0, z1, z2, etc are not long-lived copies of attributes in
PZfuncs. If I execute:
>>> blah = h23(freq)
>>> reload(PZfuncs)
>>> blah = h23(freq)


things will work properly. It's only long-lived aliases that present a
problem:

def h23(freq, h0=PZfuncs.h0):
    s = complex(2*pi*freq)
    z1 = PZfuncs.z1; z2 = PZfuncs.z2
    p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))
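The default-argument alias is long-lived because it is evaluated once, at `def` time. A compact illustration with a stand-in module (all names made up):

```python
import types

PZ = types.ModuleType("PZ")        # stand-in for PZfuncs (made-up name)
PZ.h0 = 1.0

def h(freq, h0=PZ.h0):             # default captured once, when the def runs
    return h0 * freq

PZ.h0 = 2.0                        # simulate a reload changing the constant
assert h(10) == 10.0               # the stale default still sees 1.0
assert h(10, PZ.h0) == 20.0        # only an explicit argument gets the new value
```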

Dave> In this case we avoid the problem of local variables out-of-sync
Dave> with a reloaded module by refreshing those variables with every
Dave> call to the function. In other cases, this may add too much
Dave> overhead to the computation.

Unlikely. Creating local copies of frequently used globals (in this case
frequently used globals in another module) is almost always a win. In fact,
it's so much of a win that a fair amount of brain power has been devoted to
optimizing global access. See PEPs 266 and 267 and associated threads in
python-dev from about the time they were written. (Note that optimizing
global access is still an unsolved problem in Python.)

Skip

Jul 18 '05 #58

"David MacQuigg" <dm*@gain.com> wrote in message
news:42********************************@4ax.com...
def h23(freq):
    s = complex(2*pi*freq)
    h0 = PZfuncs.h0
    z1 = PZfuncs.z1; z2 = PZfuncs.z2
    p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

Notice the clarity in that last formula. This is a standard form of a
pole-zero transfer function that will be instantly recognized by a
circuit design engineer. The issue isn't the typing of extra
characters, but the compactness of expressions.


For one use only, making local copies adds overhead without the
compensation of faster multiple accesses. To make the formula nearly as
clear without the overhead, I would consider

import PZfuncs as z
def h23(freq):
    s = complex(2*pi*freq)
    return z.h0 * (s-z.z1) * (s-z.z2) / ((s-z.p1) * (s-z.p2) * (s-z.p3))

Terry J. Reedy


Jul 18 '05 #59
On Tue, 16 Mar 2004 16:42:55 -0500, "Terry Reedy" <tj*****@udel.edu>
wrote:
"David MacQuigg" <dm*@gain.com> wrote in message
news:42********************************@4ax.com...
def h23(freq):
    s = complex(2*pi*freq)
    h0 = PZfuncs.h0
    z1 = PZfuncs.z1; z2 = PZfuncs.z2
    p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

Notice the clarity in that last formula. This is a standard form of a
pole-zero transfer function that will be instantly recognized by a
circuit design engineer. The issue isn't the typing of extra
characters, but the compactness of expressions.


For one use only, making local copies adds overhead without the
compensation of faster multiple accesses. To make the formula nearly as
clear without the overhead, I would consider

import PZfuncs as z
def h23(freq):
    s = complex(2*pi*freq)
    return z.h0 * (s-z.z1) * (s-z.z2) / ((s-z.p1) * (s-z.p2) * (s-z.p3))


This is equivalent to:

h0 = PZfuncs.h0
z1 = PZfuncs.z1; z2 = PZfuncs.z2
p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3
def h23(freq):
    s = complex(2*pi*freq)
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

Either way we have the architectural problem of ensuring that the code
just above the def gets executed *after* each reload of PZfuncs, and
*before* any call to h23.

-- Dave

Jul 18 '05 #60
On Tue, 16 Mar 2004 14:52:54 -0600, Skip Montanaro <sk**@pobox.com>
wrote:
Dave> def h23(freq):
Dave>     s = complex(2*pi*freq)
Dave>     h0 = PZfuncs.h0
Dave>     z1 = PZfuncs.z1; z2 = PZfuncs.z2
Dave>     p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3
Dave>     return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

Dave> Notice the clarity in that last formula.

Yeah, but h0, z1, z2, etc are not long-lived copies of attributes in
PZfuncs. If I execute:
>>> blah = h23(freq)
>>> reload(PZfuncs)
>>> blah = h23(freq)

things will work properly. It's only long-lived aliases that present a
problem:

def h23(freq, h0=PZfuncs.h0):
    s = complex(2*pi*freq)
    z1 = PZfuncs.z1; z2 = PZfuncs.z2
    p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))


I think we are saying the same thing. I moved the alias definitions
inside the loop, knowing they would be short-lived, and therefore not a
problem.
Dave> In this case we avoid the problem of local variables out-of-sync
Dave> with a reloaded module by refreshing those variables with every
Dave> call to the function. In other cases, this may add too much
Dave> overhead to the computation.

Unlikely. Creating local copies of frequently used globals (in this case
frequently used globals in another module) is almost always a win. In fact,
it's so much of a win that a fair amount of brain power has been devoted to
optimizing global access. See PEPs 266 and 267 and associated threads in
python-dev from about the time they were written. (Note that optimizing
global access is still an unsolved problem in Python.)


Interesting. I just ran a test comparing 10,000 calls to the original
h23 above with 10,000 calls to h23a below.

h0 = PZfuncs.h0
z1 = PZfuncs.z1; z2 = PZfuncs.z2
p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3
def h23a(freq):
    s = complex(2*pi*freq)
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

The results are very close:

             time/loop (sec)      %
    Test 1:    6.48E-006       119.1   (class PZfuncs)
    Test 2:    6.88E-006       126.4   (module PZfuncs)
    Test 3:    5.44E-006       100.0   (z1, p1, etc outside loop)

There is not much difference in the original loop between accessing
the constants from a class vs accessing them from a module. There is
a significant difference ( but not as much as I expected ) if we move
the six assignments outside the loop. Then we are back to the problem
of ensuring that the aliases get updated each time we update the
module.

-- Dave

Jul 18 '05 #61

Terry> To make the formula nearly as clear without the overhead, I would
Terry> consider

Terry> import PZfuncs as z
Terry> def h23(freq):
Terry>     s = complex(2*pi*freq)
Terry>     return z.h0 * (s-z.z1) * (s-z.z2) / ((s-z.p1) * (s-z.p2) * (s-z.p3))

Or even:

def h23(freq):
    z = PZfuncs
    s = complex(2*pi*freq)
    return z.h0 * (s-z.z1) * (s-z.z2) / ((s-z.p1) * (s-z.p2) * (s-z.p3))

(slightly faster and avoids the module aliasing problem Dave is concerned
about).

Skip

Jul 18 '05 #62

Dave> Interesting. I just ran a test comparing 10,000 calls to the
Dave> original h23 above with 10,000 calls to h23a below.

...

Dave> The results are very close:

Dave>              time/loop (sec)      %
Dave>     Test 1:    6.48E-006       119.1   (class PZfuncs)
Dave>     Test 2:    6.88E-006       126.4   (module PZfuncs)
Dave>     Test 3:    5.44E-006       100.0   (z1, p1, etc outside loop)

I'm not sure what these particular tests are measuring. I can't tell which
are h23() calls and which are h23a() calls, but note that because h23() and
h23a() are actually quite simple, the function-call overhead itself is
going to be a fair fraction of the total time measured.

For timing stuff like this I recommend you use timeit.py. Most people here
are getting used to looking at its output. Put something like:

import PZfuncs
h0 = PZfuncs.h0
z1 = PZfuncs.z1; z2 = PZfuncs.z2
p1 = PZfuncs.p1; p2 = PZfuncs.p2; p3 = PZfuncs.p3

def h23null(freq):
    pass

def h23a(freq):
    s = complex(2*pi*freq)
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

def h23b(freq):
    z = PZfuncs
    s = complex(2*pi*freq)
    return z.h0*(s-z.z1)*(s-z.z2)/((s-z.p1)*(s-z.p2)*(s-z.p3))

into h23.py then run timeit.py like:

timeit.py -s "import h23 ; freq = NNN" "h23null(freq)"
timeit.py -s "import h23 ; freq = NNN" "h23a(freq)"
timeit.py -s "import h23 ; freq = NNN" "h23b(freq)"

Its output is straightforward and pretty immediately comparable across the
runs. The h23null() run will give you some idea of the call overhead. You
can, of course, dream up h23[cdefg]() variants as well.

Post code and results and we'll be happy to throw darts... :-)

Skip

Jul 18 '05 #63
On Tue, 16 Mar 2004 19:48:06 -0600, Skip Montanaro <sk**@pobox.com>
wrote:
For timing stuff like this I recommend you use timeit.py. Most people here
are getting used to looking at its output.
Excellent utility. This ought to be highlighted in the docs on the
time module, at least listed under "See also". I just grabbed the
first thing that came up and wrote my own little routine around the
clock() function.

[...]Post code and results and we'll be happy to throw darts... :-)


# PZfuncs.py -- Timing test for reloaded constants.

# Local constants:
h0 = 1
z1 = 1; z2 = 1
p1 = -1 +1j; p2 = -1 -1j; p3 = -1
pi = 3.1415926535897931

import Constants

def h23null(freq):
    pass

def h23a(freq):
    s = complex(2*pi*freq)
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

def h23b(freq):
    z = Constants
    s = complex(2*pi*freq)
    return z.h0*(s-z.z1)*(s-z.z2)/((s-z.p1)*(s-z.p2)*(s-z.p3))

def h23c(freq):
    h0 = Constants.h0
    z1 = Constants.z1; z2 = Constants.z2
    p1 = Constants.p1; p2 = Constants.p2; p3 = Constants.p3
    s = complex(2*pi*freq)
    return h0*(s-z1)*(s-z2)/((s-p1)*(s-p2)*(s-p3))

% timeit.py -s "from PZfuncs import * ; freq = 2.0" "h23null(freq)"
1000000 loops, best of 3: 0.461 usec per loop
% timeit.py -s "from PZfuncs import * ; freq = 2.0" "h23a(freq)"
100000 loops, best of 3: 4.94 usec per loop
% timeit.py -s "from PZfuncs import * ; freq = 2.0" "h23b(freq)"
100000 loops, best of 3: 5.79 usec per loop
% timeit.py -s "from PZfuncs import * ; freq = 2.0" "h23c(freq)"
100000 loops, best of 3: 6.29 usec per loop

My conclusion is that I should stick with form c in my application.
The savings from moving these assignments outside the function (form
a) does not justify the cost in possible problems after a reload. The
savings in going to form b is negligible. Form a is the easiest to
read, but forms b and c are not much worse.

These functions will typically be used in the interactive part of the
program to set up plots with a few hundred points. The time-consuming
computations are all done in the simulator, which is written in C++.

-- Dave

Jul 18 '05 #64
On Thu, 11 Mar 2004 15:10:59 -0500, "Ellinghaus, Lance"
<la**************@eds.com> wrote:
Reload doesn't work the way most people think
it does: if you've got any references to the old module,
they stay around. They aren't replaced.

It was a good idea, but the implementation simply
doesn't do what the idea promises.


I agree that it does not really work as most people think it does, but how
would you perform the same task as reload() without the reload()?

>>> pzfuncs
<open file 'PZfuncs.py', mode 'r' at 0x00A86160>
>>> exec pzfuncs
>>> p3
-2

--- Edit PZfuncs.py here ---

>>> pzfuncs.seek(0)
>>> exec pzfuncs
>>> p3
-3


The disadvantage compared to reload() is that you get direct
references to *all* the new objects in your current namespace. With
reload() you get only a reference to the reloaded module. With the
proposed super_reload (at least the version I would like) you get no
new references in your current namespace, just updates on the
references that are already there.

Hmm. Maybe we could reload(), then loop over the available names, and
replace any that exist in the current namespace.

-- Dave

Jul 18 '05 #65
Skip Montanaro <sk**@pobox.com> wrote in message news:<ma***********************************@python.org>...

Sitting for long periods in an interactive session and expecting it to track
your changes is foreign to me.
...
Not sure whether this is related to what you are talking about. In
VC/VB, while debugging a program, it is very often to get into this
situation:

(a) you have loops somewhere, (say, from i=0 to i=2000000)
(b) your program fails at some particular points in the loop,
(c) your debugger tells you there is a problem (and maybe you have
some assertion points,) and the execution stops at that point.
(d) you want to add some more debugging code to narrow down the spot,
or narrow down or the condition of the error,
(e) if you do not have good IDE, you'll have to start your program all
over again. But in VC/VB, you just insert some more code, and resume
the execution, all in matter of seconds. And you have much better
insight into the source and nature of the bug. (Is the bug from the
code? Is the bug from the data? What to do? Is the bug from C++? Is
the bug coming from stored-procedure in the database?)

Is this pure theory talk? No, because I just need to use it, now.

Without interactive programming's edit-and-continue feature, very
often you have to stop the program, insert just a few lines of code,
and restart again. This turns really bad when the initial state setup
takes time. Of course, if your programs don't take much initial setup
time, then you won't be able to realize the need or benefit of
edit-and-continue.

Sure, you can unit test things all you want. But in real life,
interactive debugging is, and will always be, the king of bug killers,
especially in large and complex systems.
Maybe it's just the nature of your users and their background, but an
(edit/test/run)+ cycle seems much more common in the Python community than a
run/(edit/reload)+ cycle.


It all depends. For Zope's external methods (CGIs), you don't restart
the whole web/app server every time you make changes to a CGI. The
run/(edit/reload) cycle is the typical behavior of long-running
applications. (Except for some earlier versions of Java and
Microsoft's web/app servers, where you DID have to restart. And that
was very annoying.)

An analogy is with Windows 95, where every time you install/update an
application you need to reboot the OS. We know how annoying that is.
Edit-and-continue addresses a similar problem.

By the way, I am told that Common Lisp also has a good
edit-and-continue feature.

regards,

Hung Jung
Jul 18 '05 #66
On Mon, 15 Mar 2004 13:50:47 -0600, Skip Montanaro <sk**@pobox.com>
wrote:
Please file a bug report on Sourceforge so your ideas don't get lost. Feel
free to assign it to me (sf id == "montanaro").


Done. SF bug ID is 919099. Here is the proposed addition to
reload(module), just after the first paragraph:

"""
When reload(module) is executed:

The objects defined in module are compiled and loaded into memory as
new objects.

The old objects remain in memory until all references to them are
gone, and they are removed by the normal garbage-collection process.

The names in the module namespace are updated to point to any new or
changed objects. Names of unchanged objects, or of objects no longer
present in the new module, remain pointing at the old objects.

Names in other modules that refer directly to the old objects (without
the module-name qualifier) remain unchanged and must be updated in
each namespace where they occur.
"""

Anyone with corrections or clarifications, speak now.

Also here is a bit that I'm not sure of from my write-up on reload()
at http://ece.arizona.edu/~edatools/Python/Reload.htm:

Footnotes
[1] Reload(M1) is equivalent to the following:

    file = open(M1.__file__.rstrip('c'), 'r')
    file.seek(0)                 # needed if this is not the first reload
    exec file in M1.__dict__     # repeat from line 2
    file.close()


-- Dave

Jul 18 '05 #67
