Bytes IT Community

best way to replace first word in string?

I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them
but I can also use regular expressions
and I'm sure there are a lot of ways, but I really need an efficient one

Oct 20 '05 #1
20 Replies


There is a "gotcha" on this:

How do you define "word"? (e.g. can the
first word be followed by space, comma, period,
or other punctuation or is it always a space).

If it is always a space then this will be pretty
"efficient".

s = "aa to become"
firstword, restwords = s.split(' ', 1)
newstring = "/%s/ %s" % (firstword, restwords)

I'm sure the regular expression gurus here can come
up with something if it can be followed by other than
a space.
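For instance, a short regex sketch that also tolerates punctuation after the first word (the function name and pattern here are illustrative, not from the original post):

```python
import re

def wrap_first_word(s):
    # Wrap the leading run of word characters in slashes; anything
    # that follows (space, comma, period, ...) is left untouched.
    return re.sub(r"^(\w+)", r"/\1/", s, count=1)

print(wrap_first_word("aa to become"))   # -> /aa/ to become
print(wrap_first_word("aa, to become"))  # -> /aa/, to become
```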

-Larry Bates
ha*****@gmail.com wrote:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them
but I can also use regular expressions
and I'm sure there are a lot of ways, but I really need an efficient one

Oct 20 '05 #2

"ha*****@gmail.com" <ha*****@gmail.com> writes:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them
but I can also use regular expressions
and I'm sure there are a lot of ways, but I really need an efficient one


Assuming you know the whitespace will be spaces, I like find:

new = "/aa/" + old[old.find(' '):]

As for efficiency - I suggest you investigate the timeit module, and
do some tests on data representative of what you're actually going to
be using.
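A sketch of how the timeit module can be driven from code for such a comparison (the two statements compared are just illustrative candidates; absolute numbers will vary by machine):

```python
import timeit

setup = 's = "aa to become"'
# Time each candidate in isolation against the same representative input.
t_find = timeit.timeit('"/aa/" + s[s.find(" "):]', setup=setup, number=100000)
t_split = timeit.timeit('"/%s/ %s" % tuple(s.split(" ", 1))', setup=setup, number=100000)
print(t_find, t_split)
```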

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 20 '05 #3

On Oct 20, ha*****@gmail.com wrote:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them
but I can also use regular expressions
and I'm sure there are a lot of ways, but I really need an efficient one


Of course there are many ways to skin this cat; here are some trials.
The timeit module is useful for comparison (and I think I'm using it
correctly :-). I thought that string concatenation was rather
expensive, so its being faster than %-formatting surprised me a bit:

$ python -mtimeit 'res = "/%s/ %s" % tuple("a b c".split(" ", 1))'
100000 loops, best of 3: 3.87 usec per loop

$ python -mtimeit 'b,e = "a b c".split(" ", 1); res = "/"+b+"/ "+e'
100000 loops, best of 3: 2.78 usec per loop

$ python -mtimeit '"/"+"a b c".replace(" ", "/ ", 1)'
100000 loops, best of 3: 2.32 usec per loop

$ python -mtimeit '"/%s" % ("a b c".replace(" ", "/ ", 1))'
100000 loops, best of 3: 2.83 usec per loop

$ python -mtimeit '"a b c".replace("", "/", 1).replace(" ", "/ ", 1)'
100000 loops, best of 3: 3.51 usec per loop

There are possibly better ways to do this with strings.

And the regex is comparatively slow, though I'm not confident this one
is optimally written:

$ python -mtimeit -s'import re' 're.sub(r"^(\w*)", r"/\1/", "a b c")'
10000 loops, best of 3: 44.1 usec per loop

You'll probably want to experiment with longer strings if a test like
"a b c" is not representative of your typical input.

--
_ _ ___
|V|icah |- lliott http://micah.elliott.name md*@micah.elliott.name
" " """
Oct 20 '05 #4

Micah Elliott wrote:
And the regex is comparatively slow, though I'm not confident this one
is optimally written:

$ python -mtimeit -s'import re' 're.sub(r"^(\w*)", r"/\1/", "a b c")'
10000 loops, best of 3: 44.1 usec per loop


the above has to look the pattern up in the compilation cache for each loop,
and it also has to parse the template string. precompiling the pattern and
using a callback instead of a template string can speed things up somewhat:

timeit -s"import re; sub = re.compile(r'^(\w*)').sub" "sub(lambda x: '/%s/' % x.groups(), 'a b c')"

(but the replace solutions should be faster anyway; it's not free to prepare
for a RE match, and sub uses the same split/join implementation as replace...)
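Put together as a runnable sketch of that advice (precompiled pattern, callback replacement; the names here are illustrative):

```python
import re

# Compile once, outside any loop, and bind the sub method directly.
first_word = re.compile(r"^(\w+)")
sub = first_word.sub

def wrap(s):
    # The callback avoids re-parsing a template string on every call.
    return sub(lambda m: "/%s/" % m.group(1), s)

print(wrap("a b c"))  # -> /a/ b c
```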

</F>

Oct 20 '05 #5

On Thu, 20 Oct 2005 08:26:43 -0700, ha*****@gmail.com wrote:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them
but I can also use regular expressions
and I'm sure there are a lot of ways, but I really need an efficient one


Efficient for what?

Efficient in disk-space used ("least source code")?

Efficient in RAM used ("smallest objects and compiled code")?

Efficient in execution time ("fastest")?

Efficient in program time ("quickest to write and debug")?

If you use regular expressions, does the time taken in loading the module
count, or can we assume you have already loaded it?

It will also help if you specify your problem a little better. Are you
replacing one word, and then you are done? Or at you repeating hundreds of
millions of times? What is the context of the problem?

Most importantly, have you actually tested your code to see if it is
efficient enough, or are you just wasting your -- and our -- time with
premature optimization?

def replace_word(source, newword):
    """Replace the first word of source with newword."""
    return newword + " " + "".join(source.split(None, 1)[1:])

import time
def test():
    t = time.time()
    for i in range(10000):
        s = replace_word("aa to become", "/aa/")
    print ((time.time() - t)/10000), "s"

py> test()
3.6199092865e-06 s
Is that fast enough for you?
--
Steven.

Oct 22 '05 #6

ha*****@gmail.com <ha*****@gmail.com> wrote:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them
but I can also use regular expressions
and I'm sure there are a lot of ways, but I really need an efficient one


I doubt you'll find faster than Sed.

man sed

--
William Park <op**********@yahoo.ca>, Toronto, Canada
ThinFlash: Linux thin-client on USB key (flash) drive
http://home.eol.ca/~parkw/thinflash.html
BashDiff: Super Bash shell
http://freshmeat.net/projects/bashdiff/
Oct 22 '05 #7

On Thu, 20 Oct 2005 10:25:27 -0700, Micah Elliott wrote:
I thought that string concatenation was rather
expensive, so its being faster than %-formatting surprised me a bit:


Think about what string concatenation actually does:

s = "hello " + "world"

In pseudo-code, it does something like this:

- Count chars in "hello " (six chars).
- Count chars in "world" (five chars).
- Allocate eleven bytes.
- Copy six chars from "hello " and five from "world" into the newly
allocated bit of memory.

(This should not be thought of as the exact process that Python uses, but
simply illustrating the general procedure.)

Now think of what str-formatting would do:

s = "hello %s" % "world"

In pseudo-code, it might do something like this:

- Allocate a chunk of bytes, hopefully not too big or too small.
- Repeat until done:
- Copy chars from the original string into the new string,
until it hits a %s placeholder.
- Grab the next string from the args, and copy chars from
that into the new string. If the new string is too small,
reallocate memory to make it bigger, potentially moving
chunks of bytes around.

The string formatting pseudo-code is a lot more complicated and has to do
more work than just blindly copying bytes. It has to analyse the bytes it
is copying, looking for placeholders.

So string concatenation is more efficient, right? No. The thing is, a
*single* string concatenation is almost certainly more efficient than a
single string concatenation. But now look what happens when you repeat it:

s = "h" + "e" + "l" + "l" + "o" + " " + "w" + "o" + "r" + "l" + "d"

This ends up doing something like this:

- Allocate two bytes, copying "h" and "e" into them.
- Allocate three bytes, copying "he" and "l" into them.
- Allocate four bytes, copying "hel" and "l" into them.
....
- Allocate eleven bytes, copying "hello worl" and "d" into them.

The problem is that string concatenation doesn't scale efficiently. String
formatting, on the other hand, does more work to get started, but scales
better.

See, for example, this test code:

py> def tester(n):
....     s1 = ""
....     s2 = "%s" * n
....     bytes = tuple([chr(i % 256) for i in range(n)])
....     t1 = time.time()
....     for i in range(n):
....         s1 = s1 + chr(i % 256)
....     t1 = time.time() - t1
....     t2 = time.time()
....     s2 = s2 % bytes
....     t2 = time.time() - t2
....     assert s1 == s2
....     print t1, t2
....
py> x = 100000
py> tester(x)
3.24212408066 0.01252317428
py> tester(x)
2.58376598358 0.01238489151
py> tester(x)
2.76262307167 0.01474809646

The string formatting is two orders of magnitude faster than the
concatenation. The speed difference becomes even more obvious when you
increase the number of strings being concatenated:

py> tester(x*10)
2888.56399703 0.13130998611

Almost fifty minutes, versus less than a quarter of a second.
--
Steven.

Oct 22 '05 #8

On Sat, 22 Oct 2005 21:05:43 +1000, Steven D'Aprano wrote:
The thing is, a
*single* string concatenation is almost certainly more efficient than a
single string concatenation.


Dagnabit, I meant a single string concatenation is more efficient than a
single string replacement using %.
--
Steven.

Oct 22 '05 #9

On 2005-10-22, William Park wrote:
ha*****@gmail.com <ha*****@gmail.com> wrote:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them
but I can also use regular expressions
and I'm sure there are a lot of ways, but I really need an efficient one


I doubt you'll find faster than Sed.


On the contrary; to change a string, almost anything will be faster
than sed (except another external program).

If you are in a POSIX shell, parameter expansion will be a lot
faster.

In a python program, one of the solutions already posted will be
much faster.
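A sketch of that parameter-expansion approach in a POSIX shell (this assumes the string contains at least one space, as in the original example):

```shell
#!/bin/sh
# Replace the first word without forking an external process such as sed.
s="aa to become"
first=${s%% *}   # trim the longest suffix " *": text before the first space
rest=${s#* }     # trim the shortest prefix "* ": text after the first space
printf '%s\n' "/$first/ $rest"   # -> /aa/ to become
```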

--
Chris F.A. Johnson <http://cfaj.freeshell.org>
==================================================================
Shell Scripting Recipes: A Problem-Solution Approach, 2005, Apress
<http://www.torfree.net/~chris/books/cfaj/ssr.html>
Oct 22 '05 #10

Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
py> def tester(n):
...     s1 = ""
...     s2 = "%s" * n
...     bytes = tuple([chr(i % 256) for i in range(n)])
...     t1 = time.time()
...     for i in range(n):
...         s1 = s1 + chr(i % 256)
...     t1 = time.time() - t1
...     t2 = time.time()
...     s2 = s2 % bytes
...     t2 = time.time() - t2
...     assert s1 == s2
...     print t1, t2
...
py>
py> tester(x)
3.24212408066 0.01252317428
py> tester(x)
2.58376598358 0.01238489151
py> tester(x)
2.76262307167 0.01474809646

The string formatting is two orders of magnitude faster than the
concatenation. The speed difference becomes even more obvious when you
increase the number of strings being concatenated:
The test isn't right - the addition test case includes the time to
convert the number into a char, including taking a modulo.

I couldn't resist adding the .join idiom to this test:
def tester(n):
    l1 = [chr(i % 256) for i in range(n)]
    s1 = ""
    t1 = time.time()
    for c in l1:
        s1 += c
    t1 = time.time() - t1
    s2 = '%s' * n
    l2 = tuple(chr(i % 256) for i in range(n))
    t2 = time.time()
    s2 = s2 % l2
    t2 = time.time() - t2
    t3 = time.time()
    s3 = ''.join(l2)
    t3 = time.time() - t3
    assert s1 == s2
    assert s1 == s3
    print t1, t2, t3

tester(x)
0.0551731586456 0.0251281261444 0.0264830589294
tester(x)
0.0585241317749 0.0239250659943 0.0256059169769
tester(x)
0.0544500350952 0.0271301269531 0.0232360363007

The "order of magnitude" now falls to a factor of two. The original
version of the test on my box also showed an order of magnitude
difference, so this isn't an implementation difference.

This version still includes the overhead of the for loop in the test.

The join idiom isn't enough faster to make a difference.
py> tester(x*10)
2888.56399703 0.13130998611

tester(x * 10)
1.22272014618 0.252701997757 0.27273607254
tester(x * 10)
1.21779584885 0.255345106125 0.242965936661
tester(x * 10)
1.25092792511 0.311630964279 0.241738080978


Here we get the addition idiom being closer to a factor of four
instead of two slower. The .join idiom is still nearly identical.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 22 '05 #11

Steven D'Aprano wrote:
def replace_word(source, newword):
    """Replace the first word of source with newword."""
    return newword + " " + "".join(source.split(None, 1)[1:])

import time
def test():
    t = time.time()
    for i in range(10000):
        s = replace_word("aa to become", "/aa/")
    print ((time.time() - t)/10000), "s"

py> test()
3.6199092865e-06 s
Is that fast enough for you?

I agree in most cases it's premature optimization. But little tests
like this do help in learning to write good performing code in general.

Don't forget a string can be sliced. In this case testing before you
leap is a win. ;-)
import time
def test(func, n):
    t = time.time()
    s = ''
    for i in range(n):
        s = func("aa to become", "/aa/")
    tfunc = t-time.time()
    print func.__name__,':', (tfunc/n), "s"
    print s

def replace_word1(source, newword):
    """Replace the first word of source with newword."""
    return newword + " " + "".join(source.split(None, 1)[1:])

def replace_word2(source, newword):
    """Replace the first word of source with newword."""
    if ' ' in source:
        return newword + source[source.index(' '):]
    return newword

test(replace_word1, 10000)
test(replace_word2, 10000)
=======
replace_word1 : -3.09998989105e-006 s
/aa/ to become
replace_word2 : -1.60000324249e-006 s
/aa/ to become

Oct 22 '05 #12

Chris F.A. Johnson <cf********@gmail.com> wrote:
On 2005-10-22, William Park wrote:
ha*****@gmail.com <ha*****@gmail.com> wrote:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them
but I can also use regular expressions
and I'm sure there are a lot of ways, but I really need an efficient one


I doubt you'll find faster than Sed.


On the contrary; to change a string, almost anything will be faster
than sed (except another external program).

If you are in a POSIX shell, parameter expansion will be a lot
faster.

In a python program, one of the solutions already posted will be
much faster.


Care to put a wager on your claim?

--
William Park <op**********@yahoo.ca>, Toronto, Canada
ThinFlash: Linux thin-client on USB key (flash) drive
http://home.eol.ca/~parkw/thinflash.html
BashDiff: Super Bash shell
http://freshmeat.net/projects/bashdiff/
Oct 23 '05 #13

On 2005-10-22, William Park wrote:
Chris F.A. Johnson <cf********@gmail.com> wrote:
On 2005-10-22, William Park wrote:
> ha*****@gmail.com <ha*****@gmail.com> wrote:
>> I am looking for the best and most efficient way to replace the first word
>> in a str, like this:
>> "aa to become" -> "/aa/ to become"
>> I know I can use split and then join them
>> but I can also use regular expressions
>> and I'm sure there are a lot of ways, but I really need an efficient one
>
> I doubt you'll find faster than Sed.


On the contrary; to change a string, almost anything will be faster
than sed (except another external program).

If you are in a POSIX shell, parameter expansion will be a lot
faster.

In a python program, one of the solutions already posted will be
much faster.


Care to put a wager on your claim?


In a shell, certainly.

If one of the python solutions is not faster than sed (e.g.,
os.system("sed .....")) I'll forget all about using python.

--
Chris F.A. Johnson <http://cfaj.freeshell.org>
==================================================================
Shell Scripting Recipes: A Problem-Solution Approach, 2005, Apress
<http://www.torfree.net/~chris/books/cfaj/ssr.html>
Oct 23 '05 #14

On Sat, 22 Oct 2005 21:41:58 +0000, Ron Adam wrote:
Don't forget a string can be sliced. In this case testing before you
leap is a win. ;-)


Not much of a win: only a factor of two, and unlikely to hold in all
cases. Imagine trying it on *really long* strings with the first space
close to the far end: the split-and-join algorithm has to walk the string
once, while your test-then-index algorithm has to walk it twice.

So for a mere factor of two benefit on short strings, I'd vote for the
less complex split-and-join version, although it is just a matter of
personal preference.
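For the record, str.partition (added in Python 2.5, shortly after this thread) walks the string only once and handles the no-space case without a separate test; a minimal sketch:

```python
def replace_word(source, newword):
    # partition scans once; if there is no space, sep and rest are
    # both empty strings and newword is returned unchanged.
    head, sep, rest = source.partition(" ")
    return newword + sep + rest

print(replace_word("aa to become", "/aa/"))  # -> /aa/ to become
print(replace_word("aa", "/aa/"))            # -> /aa/
```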

--
Steven.

Oct 23 '05 #15

On Sat, 22 Oct 2005 14:54:24 -0400, Mike Meyer wrote:
The string formatting is two orders of magnitude faster than the
concatenation. The speed difference becomes even more obvious when you
increase the number of strings being concatenated:

The test isn't right - the addition test case includes the time to
convert the number into a char, including taking a modulo.


I wondered if anyone would pick up on that :-)

You are correct, however that only adds a constant amount of time to
the time it takes for each concatenation. That's why I talked about order
of magnitude differences. If you look at the vast increase in time taken
for concatenation when going from 10**5 to 10**6 iterations, that cannot
be blamed on the char conversion.

At least, that's what it looks like to me -- I'm perplexed by the *vast*
increase in speed in your version, far more than I would have predicted
from pulling out the char conversion. I can think of three
possibilities:

(1) Your PC is *hugely* faster than mine;

(2) Your value of x is a lot smaller than I was using (you don't actually
say what x you use); or

(3) You are using a version and/or implementation of Python that has a
different underlying implementation of string concatenation.
I couldn't resist adding the .join idiom to this test:


[snip code]
tester(x)
0.0551731586456 0.0251281261444 0.0264830589294
tester(x)
0.0585241317749 0.0239250659943 0.0256059169769
tester(x)
0.0544500350952 0.0271301269531 0.0232360363007

The "order of magnitude" now falls to a factor of two. The original
version of the test on my box also showed an order of magnitude
difference, so this isn't an implementation difference.


[snip]
tester(x * 10)
1.22272014618 0.252701997757 0.27273607254
tester(x * 10)
1.21779584885 0.255345106125 0.242965936661
tester(x * 10)
1.25092792511 0.311630964279 0.241738080978


Looking just at the improved test of string concatenation, I get times
about 0.02 second for n=10**4. For n=10**5, the time blows out to 2
seconds. For 10**6, it explodes through the roof to about 2800 seconds, or
about 45 minutes, and for 10**7 I'm predicting it would take something of
the order of 500 HOURS.

In other words, yes the char conversion adds some time to the process, but
for large numbers of iterations, it gets swamped by the time taken
repeatedly copying chars over and over again.
--
Steven.

Oct 23 '05 #16

Steven D'Aprano wrote:
On Sat, 22 Oct 2005 21:41:58 +0000, Ron Adam wrote:

Don't forget a string can be sliced. In this case testing before you
leap is a win. ;-)

Not much of a win: only a factor of two, and unlikely to hold in all
cases. Imagine trying it on *really long* strings with the first space
close to the far end: the split-and-join algorithm has to walk the string
once, while your test-then-index algorithm has to walk it twice.

So for a mere factor of two benefit on short strings, I'd vote for the
less complex split-and-join version, although it is just a matter of
personal preference.


Guess again... Are the results below what you were expecting?

Notice the join adds a space to the end if the source string is a single
word. But I allowed for that by adding one in the same case for the
index method.

The big win I was talking about was when no spaces are in the string.
The index can then just return the replacement.

These are relative percentages of time to each other. Smaller is better.

Type 1 = no spaces
Type 2 = space at 10% of length
Type 3 = space at 90% of length

Type: Length

Type 1: 10 split/join: 317.38% index: 31.51%
Type 2: 10 split/join: 212.02% index: 47.17%
Type 3: 10 split/join: 186.33% index: 53.67%
Type 1: 100 split/join: 581.75% index: 17.19%
Type 2: 100 split/join: 306.25% index: 32.65%
Type 3: 100 split/join: 238.81% index: 41.87%
Type 1: 1000 split/join: 1909.40% index: 5.24%
Type 2: 1000 split/join: 892.02% index: 11.21%
Type 3: 1000 split/join: 515.44% index: 19.40%
Type 1: 10000 split/join: 3390.22% index: 2.95%
Type 2: 10000 split/join: 2263.21% index: 4.42%
Type 3: 10000 split/join: 650.30% index: 15.38%
Type 1: 100000 split/join: 3342.08% index: 2.99%
Type 2: 100000 split/join: 1175.51% index: 8.51%
Type 3: 100000 split/join: 677.77% index: 14.75%
Type 1: 1000000 split/join: 3159.27% index: 3.17%
Type 2: 1000000 split/join: 867.39% index: 11.53%
Type 3: 1000000 split/join: 679.47% index: 14.72%


import time
def test(func, source):
    t = time.clock()
    n = 6000000/len(source)
    s = ''
    for i in xrange(n):
        s = func(source, "replace")
    tt = time.clock()-t
    return s, tt

def replace_word1(source, newword):
    """Replace the first word of source with newword."""
    return newword + " " + " ".join(source.split(None, 1)[1:])

def replace_word2(source, newword):
    """Replace the first word of source with newword."""
    if ' ' in source:
        return newword + source[source.index(' '):]
    return newword + ' '  # space needed to match join results

def makestrings(n):
    s1 = 'abcdefghij' * (n//10)
    i, j = n//10, n-n//10
    s2 = s1[:i] + ' ' + s1[i:] + 'd.'  # space near front
    s3 = s1[:j] + ' ' + s1[j:] + 'd.'  # space near end
    return [s1,s2,s3]

for n in [10,100,1000,10000,100000,1000000]:
    for sn,s in enumerate(makestrings(n)):
        r1, t1 = test(replace_word1, s)
        r2, t2 = test(replace_word2, s)
        assert r1 == r2
        print "Type %i: %-8i split/join: %.2f%% index: %.2f%%" \
            % (sn+1, n, t1/t2*100.0, t2/t1*100.0)



Oct 23 '05 #17

Interesting. It seems that "if ' ' in source:" is highly optimized code,
as it is even faster than "if str.find(' ') != -1:", when I assume they
end up in the same C loop?

Ron Adam wrote:
Guess again... Are the results below what you were expecting?

Notice the join adds a space to the end if the source string is a single
word. But I allowed for that by adding one in the same case for the
index method.

The big win I was talking about was when no spaces are in the string.
The index can then just return the replacement.

[snip]


Oct 23 '05 #18

bo****@gmail.com wrote:
Interesting. It seems that "if ' ' in source:" is highly optimized code,
as it is even faster than "if str.find(' ') != -1:", when I assume they
end up in the same C loop?

The 'in' version doesn't call a function and has a simpler compare. I
would think both of those result in it being somewhat faster if it
indeed calls the same C loop.

import dis
def foo(a):
    if ' ' in a:
        pass

dis.dis(foo)
  2           0 LOAD_CONST               1 (' ')
              3 LOAD_FAST                0 (a)
              6 COMPARE_OP               6 (in)
              9 JUMP_IF_FALSE            4 (to 16)
             12 POP_TOP

  3          13 JUMP_FORWARD             1 (to 17)
             16 POP_TOP
             17 LOAD_CONST               0 (None)
             20 RETURN_VALUE

def bar(a):
    if str.find(' ') != -1:
        pass

dis.dis(bar)
  2           0 LOAD_GLOBAL              0 (str)
              3 LOAD_ATTR                1 (find)
              6 LOAD_CONST               1 (' ')
              9 CALL_FUNCTION            1
             12 LOAD_CONST               2 (-1)
             15 COMPARE_OP               3 (!=)
             18 JUMP_IF_FALSE            4 (to 25)
             21 POP_TOP

  3          22 JUMP_FORWARD             1 (to 26)
             25 POP_TOP
             26 LOAD_CONST               0 (None)
             29 RETURN_VALUE

Oct 23 '05 #19

Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
On Sat, 22 Oct 2005 14:54:24 -0400, Mike Meyer wrote:
The string formatting is two orders of magnitude faster than the
concatenation. The speed difference becomes even more obvious when you
increase the number of strings being concatenated:

The test isn't right - the addition test case includes the time to
convert the number into a char, including taking a modulo.

I wondered if anyone would pick up on that :-)

You are correct, however that only adds a constant amount of time to
the time it takes for each concatenation. That's why I talked about order
of magnitude differences. If you look at the vast increase in time taken
for concatenation when going from 10**5 to 10**6 iterations, that cannot
be blamed on the char conversion.

True. String addition is O(n^2); the conversion time is O(n). But
fair's fair.

At least, that's what it looks like to me -- I'm perplexed by the *vast*
increase in speed in your version, far more than I would have predicted
from pulling out the char conversion. I can think of three
possibilities:

Everything got faster, so it wasn't just pulling the chr conversion.

(1) Your PC is *hugely* faster than mine;

It's a 3Ghz P4.

(2) Your value of x is a lot smaller than I was using (you don't actually
say what x you use); or

It's still in the buffer, and I copied it from your timings:
x = 100000

(3) You are using a version and/or implementation of Python that has a
different underlying implementation of string concatenation.

I'm running Python 2.4.1 built with GCC 3.4.2.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 23 '05 #20

On Sun, 23 Oct 2005 01:30:36 -0400, Mike Meyer wrote:
At least, that's what it looks like to me -- I'm perplexed by the *vast*
increase in speed in your version, far more than I would have predicted
from pulling out the char conversion. I can think of three
possibilities:


Everything got faster, so it wasn't just pulling the chr conversion.


Sure -- I'm not concerned about proportional speed increases.
(1) Your PC is *hugely* faster than mine;


It's a 3Ghz P4.


Perhaps a tad faster, but not too much.
(2) Your value of x is a lot smaller than I was using (you don't actually
say what x you use); or


It's still in the buffer, and I copied it from your timings:
x = 100000

(3) You are using a version and/or implementation of Python that has a
different underlying implementation of string concatenation.


I'm runing Python 2.4.1 built with GCC 3.4.2.


There is a difference there:

http://www.python.org/doc/2.4/whatsnew/node12.html

Second-last paragraph:

[quote]
String concatenations in statements of the form s = s + "abc" and s +=
"abc" are now performed more efficiently in certain circumstances. This
optimization won't be present in other Python implementations such as
Jython, so you shouldn't rely on it; using the join() method of strings is
still recommended when you want to efficiently glue a large number of
strings together. (Contributed by Armin Rigo.)
[end quote]
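The join() idiom the quoted entry recommends looks like this in its simplest form (a trivial sketch for illustration):

```python
# Accumulate the pieces in a list and concatenate once at the end;
# this is O(n) overall, unlike repeated `s = s + piece`.
pieces = []
for i in range(5):
    pieces.append(str(i))
s = "".join(pieces)
print(s)  # -> 01234
```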
--
Steven.

Oct 23 '05 #21
