
Method much slower than function?

Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.

################# START SOURCE #############
# The function

def readgenome(filehandle):
    s = ''
    for line in filehandle.xreadlines():
        if '>' in line:
            continue
        s += line.strip()
    return s

# The method in a class
class bar:
    def readgenome(self, filehandle):
        self.s = ''
        for line in filehandle.xreadlines():
            if '>' in line:
                continue
            self.s += line.strip()

################# END SOURCE ##############
When running the function and the method on a 20,000 line text file, I
get the following:
>>> cProfile.run("bar.readgenome(open('cb_foo'))")
         20004 function calls in 10.214 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   10.214   10.214 <string>:1(<module>)
        1   10.205   10.205   10.214   10.214 reader.py:11(readgenome)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
    19999    0.009    0.000    0.009    0.000 {method 'strip' of 'str' objects}
        1    0.000    0.000    0.000    0.000 {method 'xreadlines' of 'file' objects}
        1    0.000    0.000    0.000    0.000 {open}

>>> cProfile.run("z=r.readgenome(open('cb_foo'))")
         20004 function calls in 0.041 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.041    0.041 <string>:1(<module>)
        1    0.035    0.035    0.041    0.041 reader.py:2(readgenome)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
    19999    0.007    0.000    0.007    0.000 {method 'strip' of 'str' objects}
        1    0.000    0.000    0.000    0.000 {method 'xreadlines' of 'file' objects}
        1    0.000    0.000    0.000    0.000 {open}
The method takes 10 seconds, the function call 0.041 seconds!

Yes, I know that I wrote the underlying code rather inefficiently, and
I can streamline it with a single file.read() call instead of an
xreadlines() + strip loop. Still, the differences in performance are
rather staggering! Any comments?

Thanks,

Iddo

Jun 14 '07 #1
27 Replies


On 2007-06-14, id****@gmail.com <id****@gmail.com> wrote:
Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.

################# START SOURCE #############
# The function

def readgenome(filehandle):
    s = ''
    for line in filehandle.xreadlines():
        if '>' in line:
            continue
        s += line.strip()
    return s

# The method in a class
class bar:
    def readgenome(self, filehandle):
        self.s = ''
        for line in filehandle.xreadlines():
            if '>' in line:
                continue
            self.s += line.strip()

################# END SOURCE ##############
When running the function and the method on a 20,000 line text file, I
get the following:
>>> cProfile.run("bar.readgenome(open('cb_foo'))")
         20004 function calls in 10.214 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   10.214   10.214 <string>:1(<module>)
        1   10.205   10.205   10.214   10.214 reader.py:11(readgenome)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
    19999    0.009    0.000    0.009    0.000 {method 'strip' of 'str' objects}
        1    0.000    0.000    0.000    0.000 {method 'xreadlines' of 'file' objects}
        1    0.000    0.000    0.000    0.000 {open}

>>> cProfile.run("z=r.readgenome(open('cb_foo'))")
         20004 function calls in 0.041 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.041    0.041 <string>:1(<module>)
        1    0.035    0.035    0.041    0.041 reader.py:2(readgenome)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
    19999    0.007    0.000    0.007    0.000 {method 'strip' of 'str' objects}
        1    0.000    0.000    0.000    0.000 {method 'xreadlines' of 'file' objects}
        1    0.000    0.000    0.000    0.000 {open}
The method takes 10 seconds, the function call 0.041 seconds!

Yes, I know that I wrote the underlying code rather
inefficiently, and I can streamline it with a single
file.read() call instead of an xreadlines() + strip loop.
Still, the differences in performance are rather staggering!
Any comments?
It is likely the repeated attribute lookup, self.s, that's
slowing it down in comparison to the non-method version.

Try the following simple optimization, using a local variable
instead of an attribute to build up the result.

# The method in a class
class bar:
    def readgenome(self, filehandle):
        s = ''
        for line in filehandle.xreadlines():
            if '>' in line:
                continue
            s += line.strip()
        self.s = s

To further speed things up, think about using the str.join idiom
instead of str.+=, and using a generator expression instead of an
explicit loop.

# The method in a class
class bar:
    def readgenome(self, filehandle):
        self.s = ''.join(line.strip() for line in filehandle)

--
Neil Cerutti
Jun 14 '07 #2

On Wed, 13 Jun 2007 21:40:12 -0300, <id****@gmail.com> wrote:
Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.

################# START SOURCE #############
# The function

def readgenome(filehandle):
    s = ''
    for line in filehandle.xreadlines():
        if '>' in line:
            continue
        s += line.strip()
    return s

# The method in a class
class bar:
    def readgenome(self, filehandle):
        self.s = ''
        for line in filehandle.xreadlines():
            if '>' in line:
                continue
            self.s += line.strip()
In the function above, s is a local variable, and accessing local
variables is very efficient (locals are stored in an array, and the
compiler statically assigns an index to each one).
Using self.s, on the other hand, requires a name lookup for each access.
The most obvious change would be to use a local variable s, and assign
self.s = s only at the end. This should make both versions almost identical
in performance.
In addition, += is rather inefficient for strings; the usual idiom is
''.join(items).
And since you have Python 2.5, you can use the file as its own iterator;
combining all this:

return ''.join(line.strip() for line in filehandle if '>' not in line)
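
You can see the difference in the compiled bytecode with the dis module.
A minimal sketch (local_acc and Holder are made-up names for
illustration; assuming CPython 2.x):

import dis

def local_acc():
    s = ''
    s += 'x'           # LOAD_FAST/STORE_FAST: direct indexing into the locals array
    return s

class Holder(object):
    def attr_acc(self):
        self.s = ''
        self.s += 'x'  # LOAD_ATTR/STORE_ATTR: a dict lookup on self for every access

dis.dis(local_acc)
dis.dis(Holder.attr_acc)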

--
Gabriel Genellina

Jun 14 '07 #3

On 2007-06-14, id****@gmail.com <id****@gmail.com> wrote:
The method takes 10 seconds, the function call 0.041 seconds!
What happens when you run them in the other order?

The first time you read the file, it has to read it from disk.
The second time, it's probably just reading from the buffer
cache in RAM.
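
One way to check is to warm the cache first, then profile both versions
against the same warm file. A rough sketch (assuming readgenome and the
bar class are importable from the OP's reader.py):

from reader import readgenome, bar
import cProfile

open('cb_foo').read()                              # read once to warm the OS cache
cProfile.run("bar().readgenome(open('cb_foo'))")   # method, warm cache
cProfile.run("z = readgenome(open('cb_foo'))")     # function, same warm cache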

--
Grant Edwards grante Yow! Catsup and Mustard
at all over the place! It's
visi.com the Human Hamburger!
Jun 14 '07 #4

Gabriel Genellina wrote:
In the function above, s is a local variable, and accessing local
variables is very efficient (locals are stored in an array, and the
compiler statically assigns an index to each one).
Using self.s, on the other hand, requires a name lookup for each access.
The most obvious change would be to use a local variable s, and assign
self.s = s only at the end. This should make both versions almost identical
in performance.
Yeah, that's a big deal and makes a significant difference on my
machine.
In addition, += is rather inefficient for strings; the usual idiom is
using ''.join(items)
Ehh. Python 2.5 (and probably some earlier versions) optimizes += on
strings pretty well.

a = ""
for i in xrange(100000):
    a += "a"

and:

a = []
for i in xrange(100000):
    a.append("a")
a = "".join(a)

take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue
here.
And since you have Python 2.5, you can use the file as its own iterator;
combining all this:

return ''.join(line.strip() for line in filehandle if '>' not in line)
That's probably pretty good.

Jun 14 '07 #5

"sj*******@yahoo.com" <sj*******@yahoo.com> writes:
take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue here.
You should not rely on using 2.5 or even on that optimization staying in
CPython. Best is to use StringIO or something comparable.
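
For instance, a StringIO version of the original function might look
like this (a sketch, using the C-accelerated cStringIO module from the
Python 2 standard library):

from cStringIO import StringIO

def readgenome(filehandle):
    buf = StringIO()
    for line in filehandle:
        if '>' in line:
            continue               # skip lines containing '>'
        buf.write(line.strip())
    return buf.getvalue()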
Jun 14 '07 #6

On Thu, 14 Jun 2007 01:39:29 -0300, sj*******@yahoo.com
<sj*******@yahoo.com> wrote:
Gabriel Genellina wrote:
>In addition, += is rather inefficient for strings; the usual idiom is
using ''.join(items)

Ehh. Python 2.5 (and probably some earlier versions) optimizes += on
strings pretty well.

a=""
for i in xrange(100000):
    a+="a"

and:

a=[]
for i in xrange(100000):
    a.append("a")
a="".join(a)

take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue
here.
Yes, for concatenating a lot of a's, sure... Try again using strings
around the size of your expected lines - and make sure they are all
different too.

py> import timeit
py>
py> def f1():
...     a=""
...     for i in xrange(100000):
...         a+=str(i)*20
...
py> def f2():
...     a=[]
...     for i in xrange(100000):
...         a.append(str(i)*20)
...     a="".join(a)
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...

--
Gabriel Genellina

Jun 14 '07 #7

On Jun 13, 5:40 pm, ido...@gmail.com wrote:
Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.
>>> cProfile.run("bar.readgenome(open('cb_foo'))")

20004 function calls in 10.214 CPU seconds

>>> cProfile.run("z=r.readgenome(open('cb_foo'))")

20004 function calls in 0.041 CPU seconds
I suspect open files are cached, so the second reader
picks up where the first one left off: at the end of the file.
The second call doesn't do any text processing at all.

-- Leo

Jun 14 '07 #8

Neil Cerutti wrote:
(snip)
class bar:
    def readgenome(self, filehandle):
        self.s = ''.join(line.strip() for line in filehandle)

=>

        self.s = ''.join(line.strip() for line in filehandle
                         if not '>' in line)
Jun 14 '07 #9

Gabriel Genellina wrote:
On Thu, 14 Jun 2007 01:39:29 -0300, sj*******@yahoo.com
<sj*******@yahoo.com> wrote:
>Gabriel Genellina wrote:
>>In addition, += is rather inefficient for strings; the usual idiom is
using ''.join(items)

Ehh. Python 2.5 (and probably some earlier versions) optimizes += on
strings pretty well.

a=""
for i in xrange(100000):
    a+="a"

and:

a=[]
for i in xrange(100000):
    a.append("a")
a="".join(a)

take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue
here.

Yes, for concatenating a lot of a's, sure... Try again using strings
around the size of your expected lines - and make sure they are all
different too.

py> import timeit
py>
py> def f1():
...     a=""
...     for i in xrange(100000):
...         a+=str(i)*20
...
py> def f2():
...     a=[]
...     for i in xrange(100000):
...         a.append(str(i)*20)
...     a="".join(a)
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...
I can't confirm this.

$ cat join.py
def f1():
    a = ""
    for i in xrange(100000):
        a += str(i)*20

def f2():
    a = []
    for i in xrange(100000):
        a.append(str(i)*20)
    a = "".join(a)

def f3():
    a = []
    append = a.append
    for i in xrange(100000):
        append(str(i)*20)
    a = "".join(a)

$ python2.5 -m timeit -s 'from join import f1' 'f1()'
10 loops, best of 3: 212 msec per loop
$ python2.5 -m timeit -s 'from join import f2' 'f2()'
10 loops, best of 3: 259 msec per loop
$ python2.5 -m timeit -s 'from join import f3' 'f3()'
10 loops, best of 3: 236 msec per loop

Peter

Jun 14 '07 #10

Leo Kislov wrote:
On Jun 13, 5:40 pm, ido...@gmail.com wrote:
>Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.
>>cProfile.run("bar.readgenome(open('cb_foo'))")

20004 function calls in 10.214 CPU seconds
>>cProfile.run("z=r.readgenome(open('cb_foo'))")

20004 function calls in 0.041 CPU seconds

I suspect open files are cached so the second reader
picks up where the first one left off: at the end of the file.
The second call doesn't do any text processing at all.

-- Leo
Indeed, the effect of attribute access is much smaller than what the OP is
seeing:

$ cat iadd.py
class A(object):
    def add_attr(self):
        self.x = 0
        for i in xrange(100000):
            self.x += 1
    def add_local(self):
        x = 0
        for i in xrange(100000):
            x += 1

add_local = A().add_local
add_attr = A().add_attr
$ python2.5 -m timeit -s 'from iadd import add_local' 'add_local()'
10 loops, best of 3: 21.6 msec per loop
$ python2.5 -m timeit -s 'from iadd import add_attr' 'add_attr()'
10 loops, best of 3: 52.2 msec per loop

Peter
Jun 14 '07 #11

On 6/14/07, Peter Otten <__*******@web.de> wrote:
Gabriel Genellina wrote:
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...

I can't confirm this.
[...]
$ python2.5 -m timeit -s 'from join import f1' 'f1()'
10 loops, best of 3: 212 msec per loop
$ python2.5 -m timeit -s 'from join import f2' 'f2()'
10 loops, best of 3: 259 msec per loop
$ python2.5 -m timeit -s 'from join import f3' 'f3()'
10 loops, best of 3: 236 msec per loop
On my machine (using python 2.5 under win xp) the results are:
>>> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.19726834822823575, 0.19324697456408974, 0.19474492594212861]
>>> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)
[21.982707133304167, 21.905312587963252, 22.843430035622767]

so it seems that there is a rather sizeable difference.
What's the reason for the apparent inconsistency with Peter's test?

Francesco
Jun 14 '07 #12

Gabriel Genellina wrote:
[...]
py> import timeit
py>
py> def f1():
...     a=""
...     for i in xrange(100000):
...         a+=str(i)*20
...
py> def f2():
...     a=[]
...     for i in xrange(100000):
...         a.append(str(i)*20)
...     a="".join(a)
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...
Using

Python 2.4.4 (#2, Jan 13 2007, 17:50:26)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2

f1() and f2() also virtually take the same amount of time, although I must admit
that this is quite different from what I expected.

Cheers,
Christof
Jun 14 '07 #13

On 2007-06-14, Bruno Desthuilliers
<br********************@wtf.websiteburo.oops.com> wrote:
Neil Cerutti wrote:
(snip)
>class bar:
>    def readgenome(self, filehandle):
>        self.s = ''.join(line.strip() for line in filehandle)

=>

self.s = ''.join(line.strip() for line in filehandle
                 if not '>' in line)
Thanks for that correction.

--
Neil Cerutti
I don't know what to expect right now, but we as players have to do what we've
got to do to make sure that the pot is spread equally. --Jim Jackson
Jun 14 '07 #14

On 2007-06-14, Leo Kislov <Le********@gmail.com> wrote:
On Jun 13, 5:40 pm, ido...@gmail.com wrote:
>Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.
>>cProfile.run("bar.readgenome(open('cb_foo'))")

20004 function calls in 10.214 CPU seconds
>>cProfile.run("z=r.readgenome(open('cb_foo'))")

20004 function calls in 0.041 CPU seconds

I suspect open files are cached
They shouldn't be.
so the second reader picks up where the first one left off: at the
end of the file.
That sounds like a bug. Opening a file a second time should
produce a "new" file object with the file-pointer at the
beginning of the file.
The second call doesn't do any text processing at all.
--
Grant Edwards grante Yow! I'm using my X-RAY
at VISION to obtain a rare
visi.com glimpse of the INNER
WORKINGS of this POTATO!!
Jun 14 '07 #15

The first time you read the file, it has to read it from disk.
The second time, it's probably just reading from the buffer
cache in RAM.
I can verify this type of behavior when reading large files. Opening
the file doesn't take long, but the first read will take a while
(multiple seconds depending on the size of the file). When the file is
opened a second time, the initial read takes significantly less time.

Matt

Jun 14 '07 #16

Grant Edwards wrote:
On 2007-06-14, Leo Kislov <Le********@gmail.comwrote:
>On Jun 13, 5:40 pm, ido...@gmail.com wrote:
>>Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.

>cProfile.run("bar.readgenome(open('cb_foo'))")
20004 function calls in 10.214 CPU seconds
>cProfile.run("z=r.readgenome(open('cb_foo'))")
20004 function calls in 0.041 CPU seconds
I suspect open files are cached

They shouldn't be.
>so the second reader picks up where the first one left off: at the
>end of the file.

That sounds like a bug. Opening a file a second time should
produce a "new" file object with the file-pointer at the
beginning of the file.
It's an OS thing.

Diez
Jun 14 '07 #17

Peter Otten wrote:
Leo Kislov wrote:
>On Jun 13, 5:40 pm, ido...@gmail.com wrote:
>>Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.

cProfile.run("bar.readgenome(open('cb_foo'))")

20004 function calls in 10.214 CPU seconds

>>> cProfile.run("z=r.readgenome(open('cb_foo'))")

20004 function calls in 0.041 CPU seconds

I suspect open files are cached so the second reader
picks up where the first one left off: at the end of the file.
The second call doesn't do any text processing at all.

-- Leo

Indeed, the effect of attribute access is much smaller than what the OP is
seeing:
I have to take that back
$ cat iadd.py
class A(object):
    def add_attr(self):
        self.x = 0
        for i in xrange(100000):
            self.x += 1
    def add_local(self):
        x = 0
        for i in xrange(100000):
            x += 1

add_local = A().add_local
add_attr = A().add_attr
$ python2.5 -m timeit -s 'from iadd import add_local' 'add_local()'
10 loops, best of 3: 21.6 msec per loop
$ python2.5 -m timeit -s 'from iadd import add_attr' 'add_attr()'
10 loops, best of 3: 52.2 msec per loop
Iddo, adding integers is not a good model for the effect you are seeing.
Caching, while it does happen at the OS level, isn't a good model either.

As already mentioned in this thread there is a special optimization for

some_string += another_string

in the Python C-source. This optimization works by mutating on the C-level
the string that is immutable on the Python-level, and is limited to cases
where there are no other names referencing some_string. The statement

self.s += t

is executed internally as

tmp = self.s [1]
tmp += t [2]
self.s = tmp [3]

where tmp is not visible from the Python level. Unfortunately after [1]
there are two references to the string in question (tmp and self.s) so the
optimization cannot kick in.
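
You can observe the extra reference directly with sys.getrefcount
(a tiny sketch; Holder is a made-up name, and getrefcount itself adds
one temporary reference for its argument):

import sys

class Holder(object):
    def __init__(self):
        self.s = 'some genome data'

h = Holder()
tmp = h.s   # tmp and h.s now refer to the same string object
# With more than one name referring to the string, CPython cannot
# resize it in place, so each += must copy the whole string.
print sys.getrefcount(h.s)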

Peter

Jun 14 '07 #18

On 6/14/07, Peter Otten <__*******@web.de> wrote:
Peter Otten wrote:
Leo Kislov wrote:
On Jun 13, 5:40 pm, ido...@gmail.com wrote:
Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.

cProfile.run("bar.readgenome(open('cb_foo'))")

20004 function calls in 10.214 CPU seconds

cProfile.run("z=r.readgenome(open('cb_foo'))")

20004 function calls in 0.041 CPU seconds
I suspect open files are cached so the second reader
picks up where the first one left off: at the end of the file.
The second call doesn't do any text processing at all.

-- Leo
Indeed, the effect of attribute access is much smaller than what the OP is
seeing:

I have to take that back
Your tests (which I have snipped) show attribute access being about 3x
slower than local access, which is consistent with my own tests. The
OP is seeing a speed difference of 2 orders of magnitude. That's far
outside the range that attribute access should account for.
Jun 14 '07 #19

On Thu, 14 Jun 2007 00:40:12 +0000, idoerg wrote:

>>> cProfile.run("bar.readgenome(open('cb_foo'))")
20004 function calls in 10.214 CPU seconds
This calls the method on the CLASS, instead of an instance. When I try it,
I get this:

TypeError: unbound method readgenome() must be called with bar instance as
first argument (got file instance instead)

So you're running something subtly different than what you think you're
running. Maybe you assigned bar = bar() at some point?
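
That is, something like the following may have happened earlier in the
session (a sketch):

>>> bar = bar()                     # rebinds the name: bar is now an instance
>>> bar.readgenome(open('cb_foo'))  # looks like a call on the class, but is a bound method call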

However, having said that, the speed difference does seem to be real: even
when I correct the above issue, I get a large time difference using
either cProfile.run() or profile.run(), and timeit agrees:
>>> f = bar().readgenome
>>> timeit.Timer("f(open('cb_foo'))", "from __main__ import f").timeit(5)
18.515995025634766
>>> timeit.Timer("readgenome(open('cb_foo'))", "from __main__ import readgenome").timeit(5)
0.1940619945526123

That's a difference of two orders of magnitude, and I can't see why.
--
Steven.

Jun 14 '07 #20

On 2007-06-14, Steven D'Aprano <st***@REMOVE.THIS.cybersource.com.au> wrote:
However, having said that, the speed difference does seem to be real: even
when I correct the above issue, I get a large time difference using
either cProfile.run() or profile.run(), and timeit agrees:
>>> f = bar().readgenome
>>> timeit.Timer("f(open('cb_foo'))", "from __main__ import f").timeit(5)
18.515995025634766
>>> timeit.Timer("readgenome(open('cb_foo'))", "from __main__ import readgenome").timeit(5)
0.1940619945526123

That's a difference of two orders of magnitude, and I can't see why.
Is it independent of the test order?

What happens when you reverse the order?

What happens if you run the same test twice in a row?

--
Grant Edwards grante Yow! Thank god!! ... It's
at HENNY YOUNGMAN!!
visi.com
Jun 14 '07 #21

On Thu, 14 Jun 2007 00:40:12 +0000, idoerg wrote:
Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code that
is substantially slower when in a method than in a function.

After further testing, I think I have found the cause of the speed
difference -- and it isn't that the code is a method.

Here's my test code:
def readgenome(filehandle):
    s = ""
    for line in filehandle.xreadlines():
        s += line.strip()

class SlowClass:
    def readgenome(self, filehandle):
        self.s = ""
        for line in filehandle.xreadlines():
            self.s += line.strip()

class FastClass:
    def readgenome(self, filehandle):
        s = ""
        for line in filehandle.xreadlines():
            s += line.strip()
        self.s = s
Now I test them. For brevity, I am leaving out the verbose profiling
output, and just showing the total function calls and CPU time.

>>> import cProfile
>>> cProfile.run("readgenome(open('cb_foo'))")
        20005 function calls in 0.071 CPU seconds
>>> cProfile.run("SlowClass().readgenome(open('cb_foo'))")
        20005 function calls in 4.030 CPU seconds
>>> cProfile.run("FastClass().readgenome(open('cb_foo'))")
        20005 function calls in 0.077 CPU seconds
So you can see that the slow-down for calling a method (compared to a
function) is very small.

I think what we're seeing in the SlowClass case is the "normal" speed of
repeated string concatenations. That's REALLY slow. In the function and
FastClass cases, the compiler optimization is able to optimize that slow
behaviour away.

So, nothing to do with methods vs. functions, and everything to do with
the O(N**2) behaviour of repeated string concatenation.
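
The quadratic behaviour is easy to demonstrate once the optimization is
defeated. A sketch (concat and its flag are made-up names; holding a
second reference to the string is what forces a full copy on every +=):

import time

def concat(n, defeat_optimization):
    s = ""
    extra_ref = None
    start = time.time()
    for i in xrange(n):
        if defeat_optimization:
            extra_ref = s     # a second reference: += must now copy the string
        s += "x" * 50
    return time.time() - start

for n in (1000, 2000, 4000):
    # with the optimization: roughly linear; without it: roughly quadratic
    print n, concat(n, False), concat(n, True)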
--
Steven.

Jun 14 '07 #22

Chris Mellon wrote:
On 6/14/07, Peter Otten <__*******@web.de> wrote:
>Peter Otten wrote:
Leo Kislov wrote:

On Jun 13, 5:40 pm, ido...@gmail.com wrote:
Hi all,

I am running Python 2.5 on Feisty Ubuntu. I came across some code
that is substantially slower when in a method than in a function.

cProfile.run("bar.readgenome(open('cb_foo'))")

20004 function calls in 10.214 CPU seconds

cProfile.run("z=r.readgenome(open('cb_foo'))")

20004 function calls in 0.041 CPU seconds
I suspect open files are cached so the second reader
picks up where the first one left off: at the end of the file.
The second call doesn't do any text processing at all.

-- Leo

Indeed, the effect of attribute access is much smaller than what the OP
is seeing:

I have to take that back

Your tests (which I have snipped) show attribute access being about 3x
slower than local access, which is consistent with my own tests. The
OP is seeing a speed difference of 2 orders of magnitude. That's far
outside the range that attribute access should account for.
Not if it conspires to defeat an optimization for string concatenation

$ cat iadd.py
class A(object):
    def add_attr(self):
        self.x = ""
        for i in xrange(10000):
            self.x += " yadda"
    def add_local(self):
        x = ""
        for i in xrange(10000):
            x += " yadda"

add_local = A().add_local
add_attr = A().add_attr
$ python2.5 -m timeit -s'from iadd import add_local' 'add_local()'
100 loops, best of 3: 3.15 msec per loop
$ python2.5 -m timeit -s'from iadd import add_attr' 'add_attr()'
10 loops, best of 3: 83.3 msec per loop

As the length of self.x grows, performance will continue to degrade.
The original test is worthless as I tried to explain in the section you
snipped.

Peter

Jun 14 '07 #23

On Jun 14, 1:12 am, "Gabriel Genellina" <gagsl-...@yahoo.com.ar>
wrote:
En Thu, 14 Jun 2007 01:39:29 -0300, sjdevn...@yahoo.com
<sjdevn...@yahoo.comescribió:
Gabriel Genellina wrote:
In addition, += is rather inefficient for strings; the usual idiom is
using ''.join(items)
Ehh. Python 2.5 (and probably some earlier versions) optimizes += on
strings pretty well.

a=""
for i in xrange(100000):
    a+="a"

and:

a=[]
for i in xrange(100000):
    a.append("a")
a="".join(a)
take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue
here.

Yes, for concatenating a lot of a's, sure... Try again using strings
around the size of your expected lines - and make sure they are all
different too.

py> import timeit
py>
py> def f1():
...     a=""
...     for i in xrange(100000):
...         a+=str(i)*20
...
py> def f2():
...     a=[]
...     for i in xrange(100000):
...         a.append(str(i)*20)
...     a="".join(a)
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)

...after a few minutes I aborted the process...
Are you using an old version of python? I get a fairly small
difference between the 2:

Python 2.5 (r25:51908, Jan 23 2007, 18:42:39)
[GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on ELIDED
Type "help", "copyright", "credits" or "license" for more information.
>>> import timeit
>>> a=""
>>> def f1():
...     a=""
...     for i in xrange(100000):
...         a+=str(i)*20
...
>>> def f2():
...     a=[]
...     for i in xrange(100000):
...         a.append(str(i)*20)
...     a="".join(a)
...
>>> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.91355299949645996, 0.86561012268066406, 0.84371185302734375]
>>> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)
[0.94637894630432129, 0.89946198463439941, 1.170320987701416]

Jun 14 '07 #24

On Jun 14, 1:10 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
"sjdevn...@yahoo.com" <sjdevn...@yahoo.com> writes:
take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue here.

You should not rely on using 2.5
I use generator expressions and passed-in values to generators and
other features of 2.5. Whether or not to rely on a new version is
really a judgement call based on how much time/effort/money the new
features save you vs. the cost of losing portability to older
versions.
or even on that optimization staying in CPython.
You also shouldn't count on dicts being O(1) on lookup, or "i in
myDict" being faster than "i in myList". A lot of
quality-of-implementation issues outside of the language specification
have to be considered when you're worried about running time.

Unlike fast dictionary lookup, the += optimization in CPython is at
least specified in the docs (which also note that "".join is greatly
preferred if you're working across different versions and
implementations).
Best is to use StringIO or something comparable.
Yes, or the join() variant.

Jun 14 '07 #25

sj*******@yahoo.com wrote:
>On Jun 14, 1:10 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>"sjdevn...@yahoo.com" <sjdevn...@yahoo.com> writes:
>>take virtually the same amount of time on my machine (2.5), and the
non-join version is clearer, IMO. I'd still use join in case I wind
up running under an older Python, but it's probably not a big issue here.
You should not rely on using 2.5

I use generator expressions and passed-in values to generators and
other features of 2.5.
For reference, generator expressions are a 2.4 feature.
>or even on that optimization staying in CPython.

You also shouldn't count on dicts being O(1) on lookup, or "i in
myDict" being faster than "i in myList".
Python dictionaries (and most decent hash table implementations) may not
be O(1) technically, but they are expected O(1) and perform O(1) in
practice (at least for the Python implementations). If you have
particular inputs that force Python dictionaries to perform poorly (or
as slow as 'i in lst' for large dictionaries and lists), then you should
post a bug report in the sourceforge tracker.
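
A quick way to check this on your own box (a sketch; the sizes are
arbitrary):

$ python2.5 -m timeit -s "d = dict.fromkeys(xrange(100000))" "99999 in d"
$ python2.5 -m timeit -s "l = range(100000)" "99999 in l"

The dict lookup stays roughly flat as the container grows, while the
list scan grows linearly with its length.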
- Josiah
Jun 14 '07 #26

Francesco Guerrieri wrote:
On 6/14/07, Peter Otten <__*******@web.de> wrote:
>Gabriel Genellina wrote:
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)
>
...after a few minutes I aborted the process...

I can't confirm this.

[...]
>$ python2.5 -m timeit -s 'from join import f1' 'f1()'
10 loops, best of 3: 212 msec per loop
$ python2.5 -m timeit -s 'from join import f2' 'f2()'
10 loops, best of 3: 259 msec per loop
$ python2.5 -m timeit -s 'from join import f3' 'f3()'
10 loops, best of 3: 236 msec per loop

On my machine (using python 2.5 under win xp) the results are:
>>> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.19726834822823575, 0.19324697456408974, 0.19474492594212861]
>>> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)
[21.982707133304167, 21.905312587963252, 22.843430035622767]

so it seems that there is a rather sizeable difference.
What's the reason for the apparent inconsistency with Peter's test?
It sounds like a platform difference in memory-resizing behaviour.
- Josiah
Jun 14 '07 #27

On Thu, 14 Jun 2007 05:54:25 -0300, Francesco Guerrieri
<f.*********@gmail.com> wrote:
On 6/14/07, Peter Otten <__*******@web.dewrote:
>Gabriel Genellina wrote:
...
py> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.42673663831576358, 0.42807591467630662, 0.44401481193838876]
py> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)
>
...after a few minutes I aborted the process...

I can't confirm this.

[...]
>$ python2.5 -m timeit -s 'from join import f1' 'f1()'
10 loops, best of 3: 212 msec per loop
$ python2.5 -m timeit -s 'from join import f2' 'f2()'
10 loops, best of 3: 259 msec per loop
$ python2.5 -m timeit -s 'from join import f3' 'f3()'
10 loops, best of 3: 236 msec per loop

On my machine (using python 2.5 under win xp) the results are:
>>> print timeit.Timer("f2()", "from __main__ import f2").repeat(number=1)
[0.19726834822823575, 0.19324697456408974, 0.19474492594212861]
>>> print timeit.Timer("f1()", "from __main__ import f1").repeat(number=1)
[21.982707133304167, 21.905312587963252, 22.843430035622767]

so it seems that there is a rather sizeable difference.
What's the reason for the apparent inconsistency with Peter's test?
I left the test running and went to sleep. Now, the results:

C:\TEMP>python -m timeit -s "from join import f1" "f1()"
10 loops, best of 3: 47.7 sec per loop

C:\TEMP>python -m timeit -s "from join import f2" "f2()"
10 loops, best of 3: 317 msec per loop

C:\TEMP>python -m timeit -s "from join import f3" "f3()"
10 loops, best of 3: 297 msec per loop

Yes, 47.7 *seconds* to build the string using the += operator.
I don't know what makes the difference: Python version (this is not 2.5.1
final), hardware, OS... but on this PC it is certainly *very* significant.
Memory usage was around 40MB (for a 10MB string) and CPU usage went to 99%
(!).

--
Gabriel Genellina

Jun 14 '07 #28
