Bytes IT Community

What is a good way of having several versions of a Python module installed in parallel?

Hi!

I write, use and reuse a lot of small Python programs for various purposes in my work. These use a growing number of utility modules that I'm continuously developing and adding to as new functionality is needed. Sometimes I discover earlier design mistakes in these modules, and rather than keeping old garbage around I often rewrite the unsatisfactory parts. This often breaks backwards compatibility, and since I don't feel like updating all the code that relies on the old (functional but flawed) modules, I'm left with a hack library that depends on stale versions of my utility modules. The way I do it now is to update the programs as needed when I need them, but this approach makes me feel a bit queasy. It seems to me like I'm thinking about this in the wrong way.

Does anyone else recognize this situation in general? How do you handle it?

I have a feeling it should be possible to have multiple versions of the modules installed simultaneously, and maybe do something like this:

mymodule/
+ mymodule-1.1.3/
+ mymodule-1.1.0/
+ mymodule-0.9.5/
- __init__.py

and having some kind of magic in __init__.py that lets the programmer choose a version after import:

import mymodule
mymodule.require_version("1.1.3")

Is this a good way of thinking about it? What would be an efficient way of implementing it?

Cheers!
/Joel Hedlund
Sep 25 '07 #1


Joel Hedlund wrote:
[original question snipped]

Use setuptools. It can do exactly that: install different versions in parallel as eggs, and with a pre-import require statement you request the desired one.
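A minimal sketch of that require statement using the pkg_resources API that ships with setuptools ("mymodule" stands in for your own package, which must be installed as an egg for this to succeed):

```python
# pkg_resources (part of setuptools) selects among parallel-installed
# eggs before the first real import of the package.
import pkg_resources

try:
    # Must run before the first import of the package itself.
    pkg_resources.require("mymodule==1.1.3")
    import mymodule  # now resolves to the 1.1.3 egg
except pkg_resources.DistributionNotFound:
    # Raised when no installed distribution satisfies the requirement.
    print("mymodule 1.1.3 is not installed")
```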

Diez
Sep 25 '07 #2

Diez B. Roggisch wrote:
Joel Hedlund wrote:
[original question snipped]

Use setuptools. It can do exactly that: install different versions in parallel as eggs, and with a pre-import require statement you request the desired one.

Diez
Of course a much simpler, less formal solution is to install the
libraries required by a program along with that program in its own
directory. This more or less guarantees you won't get out of sync.

Otherwise, three words:

test driven development

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden

Sorry, the dog ate my .sigline

Sep 25 '07 #3

First of all, thanks for all the input - it's appreciated.
Otherwise, three words:

test driven development
Do you also do this for all the little stuff, the small hacks you just
whip together to get a particular task done? My impression is that writing
proper unit tests adds a lot of time to development, and I'm thinking
that this may be a low-return investment for small programs.

I try to aim for reusability and generalizability also in my smaller
hacks mainly as a safeguard. My reasoning here is that if I mess up
somehow, sooner or later I'll notice, and then I have a chance of making
realistic damage assessments. But even so, I must admit that I tend to
do quite little testing for these small projects... Maybe I should be
rethinking this?

Cheers!
/Joel
Sep 25 '07 #4


Joel Hedlund <jo**********@gmail.com> writes:
Do you also do [test-driven development] for all the little stuff,
the small hacks you just whip together to get a particular task
done? My impression is that doing proper unit tests adds a lot of
time to development, and I'm thinking that this may be a low return
investment for the small programs.
My impression is that failing to have reliable code adds a lot of time
to debugging and maintenance, and it is far cheaper to have tested and
flushed out the obvious bugs early in the development process.
I try to aim for reusability and generalizability also in my smaller
hacks mainly as a safeguard.
In which case, you will be maintaining that code beyond the initial
use for which you thought it up. Maintaining code without unit tests
is far more costly than maintaining code with tests, because you have
no regression test suite: you are prone to chasing down bugs that you
thought you'd fixed in an earlier version of the code, but can't figure
out when you broke it again. This time is entirely wasted.

Instead, in code that has unit tests, a bug found means another unit
test to be added (to find and demonstrate that bug). This is work you
must do anyway, to be sure that you can actually reproduce the bug;
test-driven development merely means that you take that test case and
*keep it* in your unit test. Then, once you're assured that you will
find the bug again any time it reappears, go ahead and fix it.
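As a sketch of that workflow, where the reproduced bug becomes a permanent test (the function and its former bug are invented for illustration):

```python
import unittest

def normalize(path):
    """Collapse repeated slashes in a path (toy utility function)."""
    while "//" in path:
        path = path.replace("//", "/")
    return path

class NormalizeRegressionTest(unittest.TestCase):
    # Written while reproducing a (fictional) bug report; kept forever
    # so the bug cannot silently return in a later rewrite.
    def test_collapses_repeated_slashes(self):
        self.assertEqual(normalize("a///b//c"), "a/b/c")
```

Tests like this would typically live in a separate test module and be run with python -m unittest.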
My reasoning here is that if I mess up somehow, sooner or later I'll
notice
With test-driven development I get much closer to the "sooner" end of
that, and much more reliably.
Maybe I should be rethinking this?
That's my opinion, yes.

--
\ "My mother was like a sister to me, only we didn't have sex |
`\ quite so often." -- Emo Philips |
_o__) |
Ben Finney
Sep 26 '07 #6

P: n/a
Diez B. Roggisch wrote:
Sounds good to me. IMHO there are two ways one gathers tests:

- concrete bugs appear, and one writes a test that reproduces the bug and,
after the fix, runs smoothly

- new features are planned/implemented, and the tests accompany them right
from the start, to allow... well, to test them :)

I always found it difficult to "just think" of new tests. Of course if you
_start_ with TDD, point two applies right from the start.
an approach that works for me is to start by adding "sanity checks";
that is, straightforward test code that simply imports all modules,
instantiates objects, calls the important methods with common argument
types/counts, etc, to make sure that the library isn't entirely braindead.

this has a bunch of advantages:

- it gets you started with very little effort (especially if you use
doctest; just tinker a little at the interactive prompt, and you
have a first version)
- it gives you a basic structure which makes it easier to add more
detailed tests
- it gives you immediate design feedback -- if it's difficult to
think of even a simple test, is the design really optimal?
- it quickly catches build and installation issues during future
development (including refactoring).

and probably a few more things that I cannot think of right now.

</F>

Sep 26 '07 #7
