Bytes IT Community

perl to python

Hello ,

What is the python way of doing this :
perl -pi -e 's/string1/string2/' file
?
Thanks
Olivier
Jul 18 '05 #1
52 Replies


Olivier Scalbert <ol**************@algosyn.com> writes:
What is the python way of doing this :
perl -pi -e 's/string1/string2/' file


Use sed.

--
Jarek Zgoda
http://jpa.berlios.de/
Jul 18 '05 #2

Jarek Zgoda wrote:
Olivier Scalbert <ol**************@algosyn.com> writes:
What is the python way of doing this :
perl -pi -e 's/string1/string2/' file


Use sed.

yes, but in python ?
Jul 18 '05 #3

Olivier Scalbert <ol**************@algosyn.com> writes:
What is the python way of doing this :
perl -pi -e 's/string1/string2/' file


Use sed.


yes, but in python ?


Are you paid for doing everything in Python? This problem is much easier
to sort out by other means.

But of course, it is possible. I'm pretty sure you will get such
solution here.

--
Jarek Zgoda
http://jpa.berlios.de/
Jul 18 '05 #4

Olivier Scalbert wrote:
Jarek Zgoda wrote:
Olivier Scalbert <ol**************@algosyn.com> writes:
What is the python way of doing this :
perl -pi -e 's/string1/string2/' file

Use sed.

yes, but in python ?

I wonder what the motivation behind your question is?
Do you have Python and not Perl or sed available?
Is the request from part of a larger conversion task?
Do you just want to compare the Perl to the Python solution?

Pad.

Jul 18 '05 #5


"Olivier Scalbert" <ol**************@algosyn.com> wrote in message
news:40***********************@news.skynet.be...
Hello ,

What is the python way of doing this :
perl -pi -e 's/string1/string2/' file
?
Thanks
Olivier


I'm not sure what the -pi and -e switches do, but the
rest is fairly simple, although not as simple as the perl
one-liner.

Just load the file into a string variable, and either
use the string .replace() method, or use a regex,
depending on which is appropriate. Then write
it back out.

From the Python prompt (not the command prompt),
that's something like this (untested):

var = open("file", "r").read().replace("string1", "string2")
open("file", "w").write(var)

I think this is about as obfuscated as you can get -
you'll lose the file if you try for a one-liner.
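A safer variation on the same idea (a sketch in modern Python; the
function name is mine, untested in anger) writes the result to a
temporary file first and renames it over the original, so an
interrupted run can't destroy the data:

```python
import os
import tempfile

def replace_in_file(path, old, new):
    # Read everything up front, so the original file is untouched
    # until the replacement text is fully prepared.
    with open(path) as f:
        text = f.read()
    # Write to a temp file in the same directory, then atomically
    # rename it over the original.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(text.replace(old, new))
    os.replace(tmp, path)
```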

John Roth
Jul 18 '05 #6

John Roth wrote:
"Olivier Scalbert" <ol**************@algosyn.com> wrote in message
news:40***********************@news.skynet.be...

Hello ,

What is the python way of doing this :
perl -pi -e 's/string1/string2/' file
?
Thanks
Olivier


I'm not sure what the -pi and -e switches do, but the
rest is fairly simple, although not as simple as the perl
one-liner.

Just load the file into a string variable, and either
use the string .replace() method, or use a regex,
depending on which is appropriate. Then write
it back out.

from the python prompt (not the command prompt)
that's something like: (untested)

var = open("file", "r").read().replace("string1", "string2")
open("file", "w").write(var)

I think this is about as obfuscated as you can get -
you'll lose the file if you try for a one-liner.

John Roth

Thx John !
Jul 18 '05 #7

John Roth wrote:
"Olivier Scalbert" <ol**************@algosyn.com> wrote in message
news:40***********************@news.skynet.be...
Hello ,

What is the python way of doing this :
perl -pi -e 's/string1/string2/' file
?
Thanks
Olivier

I'm not sure what the -pi and -e switches do, but the
rest is fairly simple, although not as simple as the perl
one-liner.

Just load the file into a string variable, and either
use the string .replace() method, or use a regex,
depending on which is appropriate. Then write
it back out.

from the python prompt (not the command prompt)
that's something like: (untested)

var = open("file", "r").read().replace("string1", "string2")
open("file", "w").write(var)

I think this is about as obfuscated as you can get -
you'll lose the file if you try for a one-liner.

John Roth


More obfuscated:

python -c '(lambda fp: fp.write(fp.seek(0) or
"".join([L.replace("th","ht") for L in fp])))(file("foo","rw+"))'

Jul 18 '05 #8

Olivier Scalbert <ol**************@algosyn.com> writes:
Jarek Zgoda wrote:
Olivier Scalbert <ol**************@algosyn.com> writes:
What is the python way of doing this :
perl -pi -e 's/string1/string2/' file


Use sed.

yes, but in python ?


Jarek's answer is the correct one, for almost any real situation.

For the purposes of exposition, though, a pythonic equivalent would
be:

import fileinput

for l in fileinput.input():
    print l.replace('string1', 'string2'),   # trailing comma: l already ends with "\n"

If you want regular expression substitution and not just constant
strings, use re.sub instead.
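For completeness, the in-place half of "-pi" maps onto fileinput too.
A sketch in modern Python (the helper name is mine, not a standard
API; inplace editing is fileinput's documented behaviour):

```python
import fileinput
import re

def sub_inplace(pattern, repl, *filenames):
    # inplace=True redirects print() back into the file being
    # edited, which is what perl's -p -i combination does.
    with fileinput.input(files=filenames, inplace=True) as f:
        for line in f:
            # end="" because each line keeps its trailing newline.
            print(re.sub(pattern, repl, line), end="")
```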

Mike

--
Mike Coleman, Scientific Programmer, +1 816 926 4419
Stowers Institute for Biomedical Research
1000 E. 50th St., Kansas City, MO 64110
Jul 18 '05 #9

Olivier Scalbert wrote:
Jarek Zgoda wrote:
Olivier Scalbert <ol**************@algosyn.com> writes:
What is the python way of doing this :
perl -pi -e 's/string1/string2/' file

Use sed.

yes, but in python ?


print 'Use sed.'
--
Steven Rumbalski
news|at|rumbalski|dot|com
Jul 18 '05 #10

>>>>> "Michael" == Michael Coleman <mk*@stowers-institute.org> writes:

Michael> Olivier Scalbert <ol**************@algosyn.com> writes:
Jarek Zgoda wrote:
Use sed.

yes, but in python ?


Michael> Jarek's answer is the correct one, for almost any real
Michael> situation.

Not really. Using Python is more portable, and doesn't introduce a new
dependency. And if it's trivial in Python, why introduce yet another
dependency?

--
Ville Vainio http://tinyurl.com/2prnb
Jul 18 '05 #11

On 2004-05-09, Olivier Scalbert <ol**************@algosyn.com> wrote:
Hello ,

What is the python way of doing this :
perl -pi -e 's/string1/string2/' file
?
To expand on what others have said, Python emphasizes readability over
compactness and obscure shortcuts. The perl "-pi" idiom wraps a fair
amount of code around your expression, and the "-e" idiom wraps some more.

A script that reproduces some of the same functionality would go
something like this:

#############start bad code##################
#!/usr/local/bin/python
import getopt,sys,os,re

#get your command line options
#files will be in args
optlist, args = getopt.getopt(sys.argv[1:],'e:')

#do the -p loop.
for filename in args:

    #do the -i "in place" edit.
    oldfilename = filename+'.bak'
    os.rename(filename,oldfilename)

    newfile = open(filename,'w')

    #continue the -p loop
    for line in open(oldfilename).readlines():
        #execute all of the -e statements.
        for command in optlist:
            #warning bad mojo here
            foo=(command[1] % line.rstrip("\n"))
            exec(("line=%s" % foo))
        #save to the new file
        print line
        newfile.write(line + "\n")
    newfile.close()
    os.unlink(oldfilename)
############end bad code##################

The above code runs, but is not very good because I'm not that familiar
with exec statements. Anyway I've tried to capture what "perl -pi -e"
actually does which is to execute an arbitrary command over every line of an
arbitrary list of files, editing them in place, with a temporary backup
copy.

Then you would call it with something like:
python badscript.py -e 're.sub("foo","bar","%s")' badtest.txt

However this is a place where an implicit loop works great.
You can just do:
perl -pi -e 's/foo/bar/' filelist

Or if you hate the perl/sed syntax, there is:
gawk '{gsub("foo", "bar", $0); print > FILENAME}' filelist

Both of these work because perl and awk have mechanisms to implicitly
loop over all the lines in a file. The python way tends to avoid
implicit loops except for a few cases.


Jul 18 '05 #12

Steven Rumbalski wrote:
Olivier Scalbert wrote:

Jarek Zgoda wrote:

Olivier Scalbert <ol**************@algosyn.com> writes:
What is the python way of doing this :
perl -pi -e 's/string1/string2/' file
Use sed.


yes, but in python ?

print 'Use sed.'

Yes, but you're assuming that the users are using Unix/Linux. What
about the Windows users? Perhaps there is a sed for Windows already, but
why bother installing it?
Jul 18 '05 #13

On Tue, 11 May 2004 11:16:01 +0200, Josef Meile <jm****@hotmail.com>
wrote:
Steven Rumbalski wrote:
Olivier Scalbert wrote:

Jarek Zgoda wrote:
Olivier Scalbert <ol**************@algosyn.com> writes:
>What is the python way of doing this :
>perl -pi -e 's/string1/string2/' file
>

Use sed.
yes, but in python ?

print 'Use sed.'

Yes, but you're assuming that the users are using Unix/Linux. What
about the Windows users? Perhaps there is a sed for Windows already, but
why bother installing it?

There's definitely a sed available, possibly even in MingW (I have it
on my system, but am not sure if it arrived with MingW or something
else I installed). It's definitely available with cygwin. One reason
to install it is that it's smaller than perl or python; another is
that it probably performs the task faster, since it isn't a general
purpose state machine; another is that it's 25% shorter to type than
perl and 50% shorter to type than python.
--dang
Jul 18 '05 #14

Kirk Job-Sluder <ki**@eyegor.jobsluder.net> wrote in
news:slrnca0ub4.1bdc.ki**@eyegor.jobsluder.net:
The above code runs, but is not very good because I'm not that
familiar with exec statements. Anyway I've tried to capture what
"perl -pi -e" actually does which is to execute an arbitrary command
over every line of an arbitrary list of files, editing them in place,
with a temporary backup copy.


Your code might have been a bit shorter if you had used the existing
facility in Python for editing files in place. The code below is completely
untested, so I can all but guarantee it doesn't work, but you get the idea:

#!/usr/local/bin/python
import getopt,sys,os,re
import fileinput

#get your command line options
#files will be in
optlist, args = getopt.getopt(sys.argv[1:],'e:')

for line in fileinput.input(args, inplace=1):
    #execute all of the -e statements.
    for command in optlist:
        #warning bad mojo here
        foo=(command[1] % line.rstrip("\n"))
        exec(("line=%s" % foo))
    #save to the new file
    print line

fileinput.close()
Jul 18 '05 #15

Daniel 'Dang' Griffith <no*****@noemail4u.com> wrote:
[on sed] One reason
to install it is that it's smaller than perl or python; another is
that it probably performs the task faster, since it isn't a general
purpose state machine;


FWIW, sed _is_ a state machine, although not really "general
purpose". It is a programming language with variables, loops
and conditionals, and I believe it is turing-complete. Most
of the time it is abused to perform simple search-and-replace
tasks, though. ;-)

But seriously ... I agree that the OP should really use sed
instead of Python in this particular case, for the reasons
that you've outlined.

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co KG, Oettingenstr. 2, 80538 Munich
Any opinions expressed in this message may be personal to the author
and may not necessarily reflect the opinions of secnetix in any way.

"Python is an experiment in how much freedom programmers need.
Too much freedom and nobody can read another's code; too little
and expressiveness is endangered." -- Guido van Rossum
Jul 18 '05 #16

> There's definitely a sed available, possibly even in MingW (I have it
> on my system, but am not sure if it arrived with MingW or something
> else I installed). It's definitely available with cygwin. One reason
> to install it is that it's smaller than perl or python; another is
> that it probably performs the task faster, since it isn't a general
> purpose state machine;

Ok, if those two are true, then using it should be considered for big files.

> another is that it's 25% shorter to type than
> perl and 50% shorter to type than python.

I don't think that shorter code is always the
most efficient. It is nicer, but you can't
assume that it is faster. For example, take a simple
sort algorithm implemented with two nested loops:
it is well known that you can use trees or other
strategies to achieve better results; however,
some of them are longer than the loop implementation.
Jul 18 '05 #17

Oliver Fromme <ol**@haluter.fromme.com> wrote:
FWIW, sed _is_ a state machine, although not really "general
purpose". It is a programming language with variables, loops
and conditionals, and I believe it is turing-complete. Most
of the time it is abused to perform simple search-and-replace
tasks, though. ;-)


I would disagree that the "simple search-and-replace" usage is abuse.
It's just using the tool to do what it's best at. Sure, there are some
more complex things you can do in sed, but the syntax is so baroque it
quickly becomes trying to bash a screw with a hammer.

In the old days, when the task became too complicated for sed, you
switched to awk. When things got even more complex, you pasted sed,
grep, awk, and shell together in various ways, and perl was invented to
cover all those functionalities in a single language.

In a sense, perl suffers from the same disease that C++ does; a desire
to maintain backwards compatability with its parents (thus the absurdly
eclectic syntax) while at the same time adding every new feature you
could imagine (and some that you can't).

Anyway, I think there's a lot of value in learning tools like grep and
sed, and using them when appropriate. The example that started this
thread is the canonical example of what sed does best. Sure, you can
make a general-purpose tool like Python do that job, but other than
proving that you can do it, I don't see any reason to bother.
Jul 18 '05 #18

On 2004-05-11, Daniel 'Dang' Griffith <no*****@noemail4u.com> wrote:
There's definitely a sed available, possibly even in MingW (I have it
on my system, but am not sure if it arrived with MingW or something
else I installed). It's definitely available with cygwin. One reason
to install it is that it's smaller than perl or python; another is
that it probably performs the task faster, since it isn't a general
purpose state machine; another is that it's 25% shorter to type than
perl and 50% shorter to type than python.

There is also a windows-native ssed (super sed).
--dang

Jul 18 '05 #19

On 2004-05-11, Duncan Booth <me@privacy.net> wrote:
Kirk Job-Sluder <ki**@eyegor.jobsluder.net> wrote in
Your code might have been a bit shorter if you had used the existing
facility in Python for editing files in place. The code below is completely
untested, so I can all but guarantee it doesn't work, but you get the idea:

#!/usr/local/bin/python
import getopt,sys,os,re
import fileinput


Thanks! Learn something new every day. I would argue that length of
code is less of an issue than the nasty exec statement.

Jul 18 '05 #20

>>>>> "Roy" == Roy Smith <ro*@panix.com> writes:

Roy> Anyway, I think there's a lot of value in learning tools like
Roy> grep and sed, and using them when appropriate. The example

I tend to think pretty much the opposite. Most of the time you can do
things as easily with Python, with the added advantage of robust
exception handling (errors not passing silently) and not having to
learn the other things. You only need to know one regexp
syntax. Windows can also be quite unpredictable w/ customary Unix
tools. Cygwin has burned me a few times too many.

The things you usually do with the non-python tools are trivial, and
trivial things have the habit of being, well, trivial in Python too.

Roy> does best. Sure, you can make a general-purpose tool like
Roy> Python do that job, but other than proving that you can do
Roy> it, I don't see any reason to bother.

You can always implement modules to do the tasks you normally use sed
or awk for. I never saw much virtue in using the most specialized (or
crippled, if you wish) tool possible. Not even if it's "optimized" for
the thing. Actually, I tend to think that Python has to some extent
deprecated that part of the Unix tradition.

It's funny, but somehow I can't really think of cases where a
specialized language would do better (ignoring the performance, which
is rarely a concern in sysadmin tasks) than Python with some
modules. Specialized languages were great at a time when the general
purpose languages sucked, but that's not the case anymore.

And yes, I'm aware that I'm exposing myself to some serious flammage
from "if it was good enough for my grandad, it's good enough for me"
*nix crowd. Emotional attachment to various cute little tools is
understandable, but sometimes it's good to take a fresh perspective
and just let go.

--
Ville Vainio http://tinyurl.com/2prnb
Jul 18 '05 #21

Ville Vainio wrote:
It's funny, but somehow I can't really think of cases where a
specialized language would do better (ignoring the performance, which
is rarely a concern in sysadmin tasks) than Python with some
modules.


There is more to computer usage than sysadmin tasks. sed is an ideal
tool for processing large sets of large files (I have to handle "small"
files that are only 130 MB in size, and I have around 140,000 of them).

Performance is not an issue you can ignore when you are handling large
amounts of data. Long may sed and awk live; you just have to make sure
that the O'Reillys are to hand, because the syntax is a bugger.
Jul 18 '05 #22

Jason Mobarak <jmob@spam__unm.edu> writes:
John Roth wrote:
"Olivier Scalbert" <ol**************@algosyn.com> wrote in message
news:40***********************@news.skynet.be...
What is the python way of doing this :
perl -pi -e 's/string1/string2/' file


I'm not sure what the -pi and -e switches do, but the rest is
fairly simple, although not as simple as the perl one-liner.
Just load the file into a string variable, and either use the
string .replace() method, or use a regx, depending on which is
appropriate. Then write it back out.
[...]


More obfuscated:

python -c '(lambda fp: fp.write(fp.seek(0) or
"".join([L.replace("th","ht") for L in fp])))(file("foo","rw+"))'


For a less obfuscated approach, look at PyOne to run short python
scripts from a one-line command.

http://www.unixuser.org/~euske/pyone/

--
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco -./\.- by myself and does not represent
pe*********@westerngeco.com -./\.- opinion of Schlumberger, Baker
http://petef.port5.com -./\.- Hughes or their divisions.
Jul 18 '05 #23

On 2004-05-11, Ville Vainio <vi***@spammers.com> wrote:
The things you usually do with the non-python tools are trivial, and
trivial things have the habit of being, well, trivial in Python too.
I've not found this to be the case due to Python's emphasis on being
explicit rather than implicit. My emulation of
"perl -pi -e" was about 24 lines in length. Even with the improvement
there are still 10 times as many statements where things can go wrong.

It is really hard to be more trivial than a complete program in one
command line.
You can always implement modules to do the tasks you normally use sed
or awk for. I never saw much virtue in using the most specialized (or
crippled, if you wish) tool possible. Not even if it's "optimized" for
the thing. Actually, I tend to think that Python has to some extent
deprecated that part of the Unix tradition.
However, that raises its own host of problems such as how do you import
the needed modules on the command line? What do you do when that module is not
available? What do you do when you need additional functionality that
takes one line in awk but a major rewrite in python?

It's a matter of task efficiency. Why should I spend a half hour doing
in python something that takes 1 minute if you know the right sed, awk
or perl one-liner? There is a level of complexity where you are better
off using python. But why not use a one-liner when it is available?
And yes, I'm aware that I'm exposing myself to some serious flammage
from "if it was good enough for my grandad, it's good enough for me"
*nix crowd. Emotional attachment to various cute little tools is
understandable, but sometimes it's good to take a fresh perspective
and just let go.


Write me a two-line script in python that reads a character delimited
file, and printf pretty-prints all of the records in a different order.

Sometimes, a utility that uses an implicit loop over every line of a
file is useful. That's not emotional attachment, it's plain common
sense.

Jul 18 '05 #24

Kirk Job-Sluder wrote:
Write me a two-line script in python that reads a character delimited
file, and printf pretty-prints all of the records in a different order.


How about one line (broken into three for clarity):

for line in __import__('sys').stdin:
    print ''.join([ x.rjust(10) for x in map(
        line.strip().split(',').__getitem__, [4,3,2,1,0]) ])

Believe it or not, I actually do stuff like this on the command line
once in awhile; to me, it's less effort to type this in than to
remember (read: look up) the details of awk syntax. I don't think I'm
typical in this regard, though.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #25

On 2004-05-12, Carl Banks <im*****@aerojockey.invalid> wrote:
Kirk Job-Sluder wrote:
Write me a two-line script in python that reads a character delimited
file, and printf pretty-prints all of the records in a different order.


How about one line (broken into three for clarity):

for line in __import__('sys').stdin:
    print ''.join([ x.rjust(10) for x in map(
        line.strip().split(',').__getitem__, [4,3,2,1,0]) ])

Believe it or not, I actually do stuff like this on the command line
once in awhile; to me, it's less effort to type this in than to
remember (read: look up) the details of awk syntax. I don't think I'm
typical in this regard, though.


This looks like using the proverbial hammer to drive the screw.

I still find:
awk 'BEGIN {FS="\t"} {printf("pattern", $1,$4,$3,$2)}' file

to be more elegant and easier to debug. It does the required task in
two easy-to-remember statements.
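For comparison, the same reordering reads naturally as a short Python
function (a sketch; the tab-separated format and the $1,$4,$3,$2 order
are taken from the awk line above):

```python
def reorder(lines):
    # Split each tab-separated record and emit fields 1, 4, 3, 2
    # (1-based), mirroring the awk printf; pass it sys.stdin to
    # get the filter behaviour.
    for line in lines:
        f = line.rstrip("\n").split("\t")
        yield "%s %s %s %s" % (f[0], f[3], f[2], f[1])
```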
Jul 18 '05 #26

>>>>> "Kirk" == Kirk Job-Sluder <ki**@eyegor.jobsluder.net> writes:

Kirk> I've not found this to be the case due to Python's emphasis
Kirk> on being explicit rather than implicit. My emulation of
Kirk> "perl -pi -e" was about 24 lines in length. Even with the
Kirk> improvement there is still 10 times as many statements where
Kirk> things can go wrong.

That's when you create a module which does the implicit looping. Or a
python script that evals the passed expression string in the loop.

Kirk> It is really hard to be more trivial than a complete program in one
Kirk> command line.

As has been stated elsewhere, you can do the trick on the command
line. The effort to create the required tools only needs to be paid
once.

However, many times it won't matter whether the whole program fits on
the command line. I always do a script into a file and then execute
it. I just prefer a real editor to command history editing if
something goes wrong.

Kirk> It's a matter of task efficiency. Why should I spend a half
Kirk> hour doing in python something that takes 1 minute if you
Kirk> know the right sed, awk or perl one-liner? There is a level
Kirk> of complexity where you are better off using python. But
Kirk> why not use a one-liner when it is available?

I think one should just analyze the need, implement the requisite
module(s) and the script to invoke the stuff in modules. The needs
have the habit of repeating themselves, and having a bit more
structure in the solution will pay off.

Kirk> Write me a two-line script in python that reads a character
Kirk> delimited file, and printf pretty-prints all of the records
Kirk> in a different order.

(Already done)

Kirk> Sometimes, a utility that uses an implicit loop over every line of a
Kirk> file is useful. That's not emotional attachment, it's plain common
Kirk> sense.

The virtue of the implicitness is still arguable.

--
Ville Vainio http://tinyurl.com/2prnb
Jul 18 '05 #27

>>>>> "Pete" == Pete Forman <pe*********@westerngeco.com> writes:

Pete> For a less obfuscated approach, look at PyOne to run short python
Pete> scripts from a one-line command.

Pete> http://www.unixuser.org/~euske/pyone/

Looks exactly like something I always wanted to implement, but found
that doing the script in a multi-line file is easier. It's great that
someone has got around to implementing something like this.

There should be a wiki entry for "quick and dirty python" (sounds
somehow... suspicious ;-), having awk/sed/oneliner workalikes.

--
Ville Vainio http://tinyurl.com/2prnb
Jul 18 '05 #28

On 2004-05-12, Ville Vainio <vi***@spammers.com> wrote:
>> "Kirk" == Kirk Job-Sluder <ki**@eyegor.jobsluder.net> writes:

Kirk> I've not found this to be the case due to Python's emphasis
Kirk> on being explicit rather than implicit. My emulation of
Kirk> "perl -pi -e" was about 24 lines in length. Even with the
Kirk> improvement there is still 10 times as many statements where
Kirk> things can go wrong.

That's when you create a module which does the implicit looping. Or a
python script that evals the passed expression string in the loop.


Except now you've just eliminated portability, one of the main arguments
for using python in the first place.

And here is the fundamental question. Why should I spend my time
writing a module in python to emulate another tool, when I can simply
use that other tool? Why should I, as a resarcher who must process
large quantities of data, spend my time and my employer's money
reinventing the wheel?

Kirk> It is really hard to be more trivial than a complete program in one
Kirk> command line.

As has been stated elsewhere, you can do the trick on the command
line. The effort to create the required tools only needs to be paid
once.
One can do the trick on one command line in Python. However that
command line is an ugly inelegant hack that eliminates the most
important advantage of python: clear, easy to understand code. In
addition, that example still required 8 python statements compared to
two in awk.
However, many times it won't matter whether the whole program fits on
the command line. I always do a script into a file and then execute
it. I just prefer a real editor to command history editing if
something goes wrong.
Which is what I do as well. The question is, why should I write 8
python statements to perform a task that I can do in two using awk, or
sed? Why should I spend 30 minutes writing, testing and debugging a
python script that takes 5 minutes to write in awk, or sed taking
advantage of the implicit loops and record splitting.
I think one should just analyze the need, implement the requisite
module(s) and the script to invoke the stuff in modules. The needs
have the habit of repeating themselves, and having a bit more
structure in the solution will pay off.
I think you are missing a key step. You are starting off with a
solution (python scripts and modules) and letting it drive your
needs analysis. I don't get paid enough money to write pythonic
solutions to problems that have already been fixed using other tools.
The virtue of the implicitness is still arguable.


I'll be more specific about the challenge. Using only stock python with
no added modules, give me a script that pretty-prints a
character-delimited file using one variable assignment and one function.

Here is the solution in awk:
BEGIN { FS="\t" }
{printf("%s %s %s %s\n", $4, $3, $2, $1)}
Jul 18 '05 #29

Kirk Job-Sluder <ki**@eyegor.jobsluder.net> wrote:
And here is the fundamental question. Why should I spend my time
writing a module in python to emulate another tool, when I can simply
use that other tool? Why should I, as a resarcher who must process
large quantities of data, spend my time and my employer's money
reinventing the wheel?
At the risk of veering this thread in yet another different direction,
anybody who does analysis of large amounts of data should take a look at
Gary Perlman's excellent, free, and generally under-appreciated |STAT
package.

http://www.acm.org/~perlman/stat/

It's been around in one version or another for something like 20 years.
It fills an interesting little niche that's part data manipulation and
part statistics.
Here is the solution in awk:
BEGIN { FS="\t" }
{printf("%s %s %s %s\n", $4, $3, $2, $1)}


In |STAT, that would be simply "colex 4 3 2 1".

There's nothing you can do in |STAT that you couldn't do with more
general purpose tools like awk, perl, python, etc, but |STAT often has a
quicker, simpler, easier way to do many common statistical tasks. A
good tool to have in your toolbox.
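That colex call is easy to approximate in Python (a hypothetical
sketch, not |STAT itself; I'm assuming whitespace-separated fields and
1-based column numbers, as in "colex 4 3 2 1"):

```python
def colex(columns, lines):
    # Select whitespace-separated fields (1-based) in the
    # requested order, one output line per input line.
    for line in lines:
        fields = line.split()
        yield " ".join(fields[c - 1] for c in columns)
```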

For example, on of the cool tools is the "validata". You feed it a file
and it applies some heuristics trying to guess which data in it might be
invalid. For example, if a file looks like it's columns of numbers, and
the third column is all integers except for one entry which is a
floating point number, it'll guess that might be an error and flag it.
It's great when you're analyzing 5000 log files of 100,000 lines each
and one of them makes your script crash for no apparent reason.
Jul 18 '05 #30

Kirk Job-Sluder <ki**@eyegor.jobsluder.net> wrote in
news:slrnca3t0e.2asc.ki**@eyegor.jobsluder.net:
I'll be more specific about the challenge. Using only stock python with
no added modules, give me a script that pretty-prints a
character-delimited file using one variable assignment and one function.

Here is the solution in awk:
BEGIN { FS="\t" }
{printf("%s %s %s %s\n", $4, $3, $2, $1)}


One assignment statement and one function call is easy. Of course, you
could argue that more than one name gets rebound, but then that is also
true of the awk program:

import sys
for line in sys.stdin:
    line = line[:-1].split('\t')
    print "%s %s %s %s" % (line[3], line[2], line[1], line[0])

While I agree with you that using the appropriate tool is preferred over
using Python for everything, I don't really see much to choose between the
Python and awk versions here.
Jul 18 '05 #31

>>>>> "Kirk" == Kirk Job-Sluder <ki**@eyegor.jobsluder.net> writes:

Kirk> And here is the fundamental question. Why should I spend my
Kirk> time writing a module in python to emulate another tool,
Kirk> when I can simply use that other tool? Why should I, as a

Perhaps you won't; but someone who isn't already proficient with the
tool may rest assured that learning the tool really isn't worth his
time. awk and sed fall into this category.

Kirk> resarcher who must process large quantities of data, spend
Kirk> my time and my employer's money reinventing the wheel?

You are not reinventing the wheel, you are refactoring it :). I don't
think your employer minds you spending 15 extra minutes creating some
tool infrastructure, if it allows you to drop awk/sed dependency that
your co-workers then won't need to learn.

Kirk> I think you are missing a key step. You are starting off
Kirk> with a solution (python scripts and modules) and letting it
Kirk> drive your needs analysis. I don't get paid enough money to
Kirk> write pythonic solutions to problems that have already been
Kirk> fixed using other tools.

I find writing pythonic tools a relaxing diversion from my everyday
work (cranking out C++), so I don't really mind. As long as the time
spent is within the 5-minute to 1-hour range.

Kirk> I'll be more specific about the challenge. Using only stock
Kirk> python with no added modules, give me a script that
Kirk> pretty-prints a character-delimited file using one variable
Kirk> assignment, and one function.

Kirk> Here is the solution in awk:
Kirk> BEGIN { FS="\t" }
Kirk> {printf("%s %s %s %s\n", $4, $3, $2, $1)}
for line in open("file.txt"):
    fields = line.strip().split("\t")
    print "%s %s %s %s" % (fields[3], fields[2], fields[1], fields[0])

(untested code warning)

Time taken: 56 seconds, give or take. Roughly the same I would expect
writing your awk example took, and within the range I expect your
employer would afford ;-).

Technically it does two variable assignments, but I don't see the
problem (ditto with function calls - who cares?) Assignment is
conceptually cheap. It doesn't seem any less readable or elegant than
your awk example. I could have maybe lost a few seconds by using
shorter variable names.

--
Ville Vainio http://tinyurl.com/2prnb
Jul 18 '05 #32

Kirk Job-Sluder <ki**@eyegor.jobsluder.net> writes:
I still find:
awk 'BEGIN {FS="\t"} {printf("pattern", $1,$4,$3,$2)}' file
to be more elegant and easier to debug. It does the required task in
two easy-to-remember statements.


It misses the "-i" thing. You have to wrap it:

NEWNAME=$(mktemp foo.XXXXXX)
mv file $NEWNAME
your_awk $NEWNAME > file
rm $NEWNAME (unless there is a backup argument after the "-i")
mv $NEWNAME file.bak (if there is one)

Ralf
--
GS d->? s:++>+++ a+ C++++ UL+++ UH++ P++ L++ E+++ W- N++ o-- K- w--- !O M- V-
PS+>++ PE Y+>++ PGP+ !t !5 !X !R !tv b+++ DI+++ D? G+ e++++ h+ r? y?
Jul 18 '05 #33
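The temporary-file shuffle Ralf sketches above is exactly what Python's standard `fileinput` module automates: with `inplace=True`, iterating the file redirects stdout into a replacement copy, and `backup` keeps the original. A minimal sketch of the `perl -pi -e 's/string1/string2/'` equivalent (the file name `file.txt` and its contents are made up for the demonstration):

```python
import fileinput
import re
import sys

# Hypothetical input file for the demonstration.
with open("file.txt", "w") as f:
    f.write("string1 here\nno match on this line\n")

# Equivalent of: perl -pi -e 's/string1/string2/' file.txt
# With inplace=True, stdout is redirected into the file being rewritten;
# backup=".bak" keeps the original as file.txt.bak.
# count=1 matches perl's s/// without /g (first occurrence per line only).
for line in fileinput.input("file.txt", inplace=True, backup=".bak"):
    sys.stdout.write(re.sub("string1", "string2", line, count=1))
```

Drop the `backup` argument if, like `perl -pi` without a suffix, you don't want the backup copy kept.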

In article <sl******************@eyegor.jobsluder.net>,
Kirk Job-Sluder <ki**@eyegor.jobsluder.net> wrote:

And here is the fundamental question. Why should I spend my time
writing a module in python to emulate another tool, when I can simply
use that other tool? Why should I, as a researcher who must process
large quantities of data, spend my time and my employer's money
reinventing the wheel?


Why should your employer pay for the time for all of its employees to
learn all of those other tools, when Python will do the job? I've used
sed and awk often enough to read other people's code some of the time,
but I certainly can't write them without a great deal of effort, and
modifying an existing example to do what I want might or might not be
easy -- no way of knowing in advance.
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

Adopt A Process -- stop killing all your children!
Jul 18 '05 #34

On 11 May 2004 12:05:52 GMT, Oliver Fromme <ol**@haluter.fromme.com>
wrote:
Daniel 'Dang' Griffith <no*****@noemail4u.com> wrote:
[on sed] One reason
to install it is that it's smaller than perl or python; another is
that it probably performs the task faster, since it isn't a general
purpose state machine;


FWIW, sed _is_ a state machine, although not really "general
purpose". It is a programming language with variables, loops
and conditionals, and I believe it is turing-complete. Most
of the time it is abused to perform simple search-and-replace
tasks, though. ;-)


I never used sed for anything but "stream editing", aka search and
replace. Well, if it's turing complete, my apologies to the sed
author(s). :-)
--dang
Jul 18 '05 #35

On 2004-05-12, Roy Smith <ro*@panix.com> wrote:
Kirk Job-Sluder <ki**@eyegor.jobsluder.net> wrote:
And here is the fundamental question. Why should I spend my time
writing a module in python to emulate another tool, when I can simply
use that other tool? Why should I, as a researcher who must process
large quantities of data, spend my time and my employer's money
reinventing the wheel?


At the risk of veering this thread in yet another different direction,
anybody who does analysis of large amounts of data should take a look at
Gary Perlman's excellent, free, and generally under-appreciated |STAT
package.

http://www.acm.org/~perlman/stat/

It's been around in one version or another for something like 20 years.
It fills an interesting little niche that's part data manipulation and
part statistics.

Thanks. I'll check it out.

Jul 18 '05 #36

At some point, Daniel 'Dang' Griffith <no*****@noemail4u.com> wrote:
On 11 May 2004 12:05:52 GMT, Oliver Fromme <ol**@haluter.fromme.com>
wrote:
Daniel 'Dang' Griffith <no*****@noemail4u.com> wrote:
> [on sed] One reason
> to install it is that it's smaller than perl or python; another is
> that it probably performs the task faster, since it isn't a general
> purpose state machine;


FWIW, sed _is_ a state machine, although not really "general
purpose". It is a programming language with variables, loops
and conditionals, and I believe it is turing-complete. Most
of the time it is abused to perform simple search-and-replace
tasks, though. ;-)


I never used sed for anything but "stream editing", aka search and
replace. Well, if it's turing complete, my apologies to the sed
author(s). :-)
--dang


There's a whole bunch of 'extreme' sed scripts at
http://sed.sourceforge.net/grabbag/scripts/

I like the dc.sed script there; it's an implementation of the UNIX
program 'dc', which is an arbitrary precision RPN calculator:
http://sed.sourceforge.net/grabbag/s...c_overview.htm
Only for the truly brave.

A Turing machine, too:
http://sed.sourceforge.net/grabbag/scripts/turing.sed

And I notice they have a Python sed debugger:
http://sed.sourceforge.net/grabbag/scripts/sd.py.txt

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke
|cookedm(at)physics(dot)mcmaster(dot)ca
Jul 18 '05 #37

On 2004-05-12, Ville Vainio <vi***@spammers.com> wrote:
>> "Kirk" == Kirk Job-Sluder <ki**@eyegor.jobsluder.net> writes:

Kirk> And here is the fundamental question. Why should I spend my
Kirk> time writing a module in python to emulate another tool,
Kirk> when I can simply use that other tool? Why should I, as a

Perhaps you won't; but someone who isn't already proficient with the
tool may rest assured that learning the tool really isn't worth his
time. awk and sed fall into this category.


Actually, I'm not convinced of the learning time argument. It takes
about 30 minutes training time to learn enough awk or sed to handle
90% of the cases where it is the better tool for the job. A good
understanding of regular expressions will do most of your work for you no
matter which language you use.
Kirk> researcher who must process large quantities of data, spend
Kirk> my time and my employer's money reinventing the wheel?

You are not reinventing the wheel, you are refactoring it :). I don't
think your employer minds you spending 15 extra minutes creating some
tool infrastructure, if it allows you to drop awk/sed dependency that
your co-workers then won't need to learn.
In which case, the perl version is more likely to win out on the
basis of standardization. IME, the time involved to create good
infrastructure that does not come back to bite you in the ass is
considerably more than 15 minutes. Think also that for every minute you
spend designing something to share, you need to spend between 5 and 20
documenting and training people in the organization (and this is not
including maintenance and distribution).

The great thing is that the tool infrastructure already exists. Not
only does the tool infrastructure exist, but the training materials
already exist. Really, how hard is "perl -pi -e 's/foo/bar/'" to
understand? How hard is "sed -e 's/foo/bar/' < infile > outfile" to
understand? How hard is a shell script to understand?
I find writing pythonic tools a relaxing deversion from my everyday
work (cranking out C++), so I don't really mind. As long as the time
spent is within 5 minutes - 1 hour range.


Well, there is another big difference. I'm a big fan of instant
gratification so the off-the-shelf tool that does the job in 10 seconds
is better than 5 minutes to 1 hour writing a pythonic tool. I have
re-written shell scripts in python just for kicks, but I don't have any
illusions that refactoring everything into python should be a
prerogative.

Jul 18 '05 #38

In article <sl******************@eyegor.jobsluder.net>,
Kirk Job-Sluder <ki**@eyegor.jobsluder.net> wrote:

Well, there is another big difference. I'm a big fan of instant
gratification so the off-the-shelf tool that does the job in 10 seconds
is better than 5 minutes to 1 hour writing a pythonic tool. I have
re-written shell scripts in python just for kicks, but I don't have any
illusions that refactoring everything into python should be a
prerogative.


If it takes you an hour to rewrite a ten-second job into a Pythonic
script, you don't know Python very well. That kinda counters your claim
of a shallow learning curve for the other programs.
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

Adopt A Process -- stop killing all your children!
Jul 18 '05 #39

Duncan Booth <me@privacy.net> writes:
import sys
for line in sys.stdin:
    line = line[:-1].split('\t')
    print "%s %s %s %s" % (line[3], line[2], line[1], line[0])

While I agree with you that using the appropriate tool is preferred over
using Python for everything, I don't really see much to choose between the
Python and awk versions here.


1) Python throws an error if you have fewer than four fields,
requiring more typing to get the same effect.

2) Python generators on stdin behave strangely. For one thing,
they're not properly line buffered, so you don't get any lines until
eof. But then, eof is handled wrongly, and the loop doesn't exit.

3) There is no efficient RS equivalent, in case you need to read
paragraphs.

The simpler example

for line in sys.stdin:
    print line

demonstrates the problem nicely.

$ python z
a
b
c
^D
a

b

c

foo
bar
baz
^D
foo

bar

baz
^D
^D
$

Explanations in the docs about buffering and readahead don't excuse
this poor result.

$ awk '{print}'
a
a
b
b
c
c
^D
$
Jul 18 '05 #40

Scott Schwartz <"schwartz+@usenet "@bio.cse.psu.edu> wrote in
news:8g************@galapagos.bx.psu.edu:
Duncan Booth <me@privacy.net> writes:
import sys
for line in sys.stdin:
    line = line[:-1].split('\t')
    print "%s %s %s %s" % (line[3], line[2], line[1], line[0])
While I agree with you that using the appropriate tool is preferred
over using Python for everything, I don't really see much to choose
between the Python and awk versions here.


1) Python throws an error if you have fewer than four fields,
requiring more typing to get the same effect.


I would rather have the error when the input isn't formatted as expected.
The alternative would be incorrect output.

If you really want to suppress the error then:

line = (line[:-1]+'\t'*3).split('\t')

2) Python generators on stdin behave strangely. For one thing,
they're not properly line buffered, so you don't get any lines until
eof. But then, eof is handled wrongly, and the loop doesn't exit.
True, if you are trying to reformat interactive input. I had assumed that
the use case here was redirecting input from a file, and in that case the
EOF problem isn't an issue. Buffering may or may not be a problem.

3) There is no efficient RS equivalent, in case you need to read
paragraphs.


In that case I would write a generator to group the lines. Longer than RS,
but also more flexible.
Jul 18 '05 #41
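Such a grouping generator, the flexible stand-in for awk's `RS=""` paragraph mode that Duncan mentions, might look like this (a sketch; `paragraphs` is a hypothetical helper, not a standard-library function):

```python
def paragraphs(stream):
    """Yield blank-line-separated chunks of a stream, one string per
    paragraph -- roughly what awk's RS="" record separator gives you."""
    para = []
    for line in stream:
        if line.strip():
            para.append(line)
        elif para:
            # Blank line ends the current paragraph; skip runs of blanks.
            yield "".join(para)
            para = []
    if para:  # flush a final paragraph with no trailing blank line
        yield "".join(para)
```

Any iterable of lines works, so the same helper applies to `sys.stdin`, an open file, or a list of strings.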

> > Kirk Job-Sluder wrote:
Write me a two-line script in python that reads a character delimited
file, and printf pretty-prints all of the records in a different order.

Carl Banks wrote one, convoluted so it can be on the command line.
Kirk Job-Sluder replied This looks like using the proverbial hammer to drive the screw.


But you asked us to use the hammer to drive in the screw. In real
life I have more tools to use. For this case I would use Perl or awk.

Here's one for you. I had several mailbox files arranged like
Inbox.mbox/mbox
Send.mbox/mbox
OBF&BOSC.mbox/mbox
Work Email.mbox/mbox

I wanted to raise the "*/mbox" files one directory so that
Inbox.mbox/mbox --becomes--> Inbox.mbox

My solution was to use the interactive Python shell. Something
like (untested)

import glob, os
filenames = glob.glob("*.mbox")
for name in filenames:
    os.rename(name + "/mbox", "." + name)
    os.rmdir(name)
    os.rename("." + name, name)

Try doing that sanely with any programming language expressed
all on the command-line. No credit if you can't handle the '&' and space.

Andrew
da***@dalkescientific.com
Jul 18 '05 #42

Scott Schwartz:
1) Python throws an error if you have fewer than four fields,
requiring more typing to get the same effect.
The spec didn't say how to handle improperly formatted data.
Suppose the code is supposed to complain and stop at that
point - how much code would you need for awk to do the
extra check?
2) Python generators on stdin behave strangely. For one thing,
they're not properly line buffered, so you don't get any lines until
eof. But then, eof is handled wrongly, and the loop doesn't exit.
There's a command-line flag to make stdin/stdout be unbuffered.
Try your test again with 'python -u'.
3) There is no efficient RS equivalent, in case you need to read
paragraphs.


Again, not part of the spec. ;)

Andrew
da***@dalkescientific.com
Jul 18 '05 #43

On Thu, May 13, 2004 at 06:06:23AM +0000, Andrew Dalke wrote:
[...]

Try doing that sanely with any programming language expressed
all on the command-line. No credit if you can't handle the '&' and space.


I'm almost positive you can do that entirely with bash[1], actually. I
don't have time to prove it right now, though... but you ought to be able to
use features like ${parameter%word} expansions, e.g.:

$ x='something.mbox/mbox'
$ echo ${x%/mbox}
something.mbox

Obviously you'd need to be careful about quoting things like & and space,
but that doesn't seem too hard.

-Andrew.

[1] And standard file utilties like mv and rmdir, obviously...

Jul 18 '05 #44

On 2004-05-13, Andrew Dalke <ad****@mindspring.com> wrote:
> Kirk Job-Sluder wrote:
>> Write me a two-line script in python that reads a character delimited
>> file, and printf pretty-prints all of the records in a different order.

Carl Banks wrote one, convoluted so it can be on the command line.
Kirk Job-Sluder replied
This looks like using the proverbial hammer to drive the screw.


But you asked us to use the hammer to drive in the screw. In real
life I have more tools to use. For this case I would use Perl or awk.


Bing, exactly the point.
My solution was to use the interactive Python shell. Something
like (untested)
Certainly, python is the best solution for many problems.

Try doing that sanely with any programming language expressed
all on the command-line. No credit if you can't handle the '&' and space.
Missing the point. The point was not that everything should be done
using awk, sed or perl one-liners. The point was that awk, sed,
or perl one-liners are useful for a subset of tasks where the
explicitness of python gets in the way.

Andrew
da***@dalkescientific.com

Jul 18 '05 #45

On Thu, May 13, 2004 at 06:06:23AM +0000, Andrew Dalke wrote:

Here's one for you. I had several mailbox files arranged like
Inbox.mbox/mbox
Send.mbox/mbox
OBF&BOSC.mbox/mbox
Work Email.mbox/mbox

I wanted to raise the "*/mbox" files one directory so that
Inbox.mbox/mbox --becomes--> Inbox.mbox
[...]
Try doing that sanely with any programming language expressed
all on the command-line. No credit if you can't handle the '&' and space.


You can do it in one-line with sh:

for d in *.mbox ; do mv "${d}/mbox" ".${d}" ; rmdir "${d}" ; mv ".${d}" "${d}" ; done

Or more readably:

for d in *.mbox ; do
mv "${d}/mbox" ".${d}"
rmdir "${d}"
mv ".${d}" "${d}"
done

That doesn't look particularly insane to me. In fact, it looks quite like
the Python version...

-Andrew.
Jul 18 '05 #46

I do a lot of unix hacking type stuff here at work, and I understand
the wish to use the right tool for the right job and all that, and
awk, sed and perl can let you do all that quick command line stuff.
My problem with that approach is that after I do some knock-off hack
for something, the boss will come by and say, "That's nice, but can
you do it with this tweak?" or something, and soon it has snowballed
into a full blown script. So why bother with the one-off hacks, when
I can just write a function or something, put it in my utility object,
and it's done? I can use it again if I need to, use it in other
scripts, call it from the python interpreter, whatever I need. And if
(or when) I have to expand it for a more complicated purpose, nothing
could be easier. If I stick to the traditional unix approach, I'd
probably have tools piping into tools piping into shell scripts piping
into whatever to get things done, and I'll have to check the man page
to make sure the non-standard options for the rarely used tool work,
it just winds up being a pain.

Here's an example from today, for instance. There's a scratch area we
have here where files that are more than a month old are summarily
deleted. I don't often use the area for larger projects, but I had to
for my last one, which is now getting to be around a month old. I
still want to keep the files, so I wrote a little python script to
start from an initial directory, check the date on all the files, and
update them if necessary. Now sure, I could have done this with some
combination of 'find', 'touch' and whatever, but then again, I have
all the function blocks I need in my python utility library, so I just
import that and it's a snap. So I mention it to one of my co-workers,
and he now wants to use it for one of his projects, except it's using
a make-like system that will regenerate files if the dates on them get
changed relative to each other, so he wants it to take the times and
readjust them by an amount relative to the dates on each. That's just
the kind of thing (heavy conditionals) that little unix chain commands
as solution get really ugly with, but with my script, it shouldn't be
a problem. In fact, he's never written a script to do this because he
typically relies on the unix tool approach, and he figured it would be
too much trouble. But now, he'll have a solution, I'll still have
mine, and if needs change in the future, it's easy to evolve. Seems
like the right way to go to me.
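A sketch of the timestamp-refreshing script described above (the function name and the 30-day cutoff are assumptions; refreshing simply sets each old file's times to now):

```python
import os
import time

def refresh_old_files(root, max_age_days=30):
    """Walk a directory tree and re-touch any file older than the
    cutoff, so a scratch-area reaper won't delete it."""
    cutoff = time.time() - max_age_days * 86400
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                os.utime(path, None)  # None means "set atime/mtime to now"
```

The co-worker's variant, shifting timestamps while preserving their relative order, would pass an explicit `(atime, mtime)` tuple to `os.utime` instead of `None`.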
Kirk Job-Sluder <ki**@eyegor.jobsluder.net> wrote in message news:<slrnca3t0e.2asc.ki**@eyegor.jobsluder.net>...
On 2004-05-12, Ville Vainio <vi***@spammers.com> wrote:
>>> "Kirk" == Kirk Job-Sluder <ki**@eyegor.jobsluder.net> writes:


Kirk> I've not found this to be the case due to Python's emphasis
Kirk> on being explicit rather than implicit. My emulation of
Kirk> "perl -pi -e" was about 24 lines in length. Even with the
Kirk> improvement there is still 10 times as many statements where
Kirk> things can go wrong.

That's when you create a module which does the implicit looping. Or a
python script that evals the passed expression string in the loop.


Except now you've just eliminated portability, one of the main arguments
for using python in the first place.

And here is the fundamental question. Why should I spend my time
writing a module in python to emulate another tool, when I can simply
use that other tool? Why should I, as a researcher who must process
large quantities of data, spend my time and my employer's money
reinventing the wheel?

Kirk> It is really hard to be more trivial than a complete program in one
Kirk> command line.

As has been stated elsewhere, you can do the trick on the command
line. The effort to create the required tools only needs to be paid
once.


One can do the trick on one command line in python. However, that
command line is an ugly inelegant hack that eliminates the most
important advantage of python: clear, easy to understand code. In
addition, that example still required 8 python statements compared to
two in awk.
However, many times it won't matter whether the whole program fits on
the command line. I always put the script into a file and then execute
it. I just prefer a real editor to command history editing if
something goes wrong.


Which is what I do as well. The question is, why should I write 8
python statements to perform a task that I can do in two using awk, or
sed? Why should I spend 30 minutes writing, testing and debugging a
python script that takes 5 minutes to write in awk or sed, taking
advantage of the implicit loops and record splitting?
I think one should just analyze the need, implement the requisite
module(s) and the script to invoke the stuff in modules. The needs
have the habit of repeating themselves, and having a bit more
structure in the solution will pay off.


I think you are missing a key step. You are starting off with a
solution (python scripts and modules) and letting it drive your
needs analysis. I don't get paid enough money to write pythonic
solutions to problems that have already been fixed using other tools.
The virtue of the implicitness is still arguable.


I'll be more specific about the challenge. Using only stock python with
no added modules, give me a script that pretty-prints a
character-delimited file using one variable assignment, and one function.

Here is the solution in awk:
BEGIN { FS="\t" }
{printf("%s %s %s %s", $4, $3, $2, $1)}

Jul 18 '05 #47

"Andrew Dalke" <ad****@mindspring.com> writes:
2) Python generators on stdin behave strangely. For one thing,
they're not properly line buffered, so you don't get any lines until
eof. But then, eof is handled wrongly, and the loop doesn't exit.


There's a command-line flag to make stdin/stdout be unbuffered.
Try your test again with 'python -u'.


No effect, because that's not the problem. The generator is reading
ahead and doing it's own buffering, in addition to whatever the stream
is doing (or not doing). Hence the bug.
Jul 18 '05 #48

Michael Coleman wrote:
Olivier Scalbert <ol**************@algosyn.com> writes:
Jarek Zgoda wrote:
Olivier Scalbert <ol**************@algosyn.com> pisze:
Use sed.


yes, but in python ?


Jarek's answer is the correct one, for almost any real situation.


I disagree; I think Perl is actually the better answer. Using grep, sed,
tr, etc. on Windows is error-prone anyway, because of the shell
difference. Perl is very complete as a parsing solution, and personally
that's what I would prefer; it's also more scalable, since when you want
to do more, you can copy your solution into a script and expand it. Using
a Python script makes a lot of sense too, as people have posted
two-line solutions.

Regards,
Nicolas
Jul 18 '05 #49

Scott Schwartz
No effect, because that's not the problem.


Ahh, you're right. I keyword matched "buffer" and thought you meant
the system buffer and not Python's.

Andrew
da***@dalkescientific.com
Jul 18 '05 #50


This discussion thread is closed

Replies have been disabled for this discussion.