simon place wrote:

While playing about with inheriting from list to be able to cache macro
properties, I noticed this: the rate of summing a list seems to be worse
than linear? It's nearly 3 times faster to sum the sums of smaller lists.

from time import clock

l=range(0,1000000)

start=clock()
print sum(l),
print clock()-start

# now sum a list of the sums of 1000 slices.
start=clock()
print sum([sum(l[x:x+1000]) for x in xrange(0,len(l),1000)]),
print clock()-start

# repeat
start=clock()
print sum(l),
print clock()-start

# output from 500MHz AMD K62 64Mb Python 2.3.3 (#51, Dec 18 2003,
# 20:22:39) [MSC v.1200 32 bit (Intel)] on win32
#499999500000 1.89348044721
#499999500000 0.731985115406
#499999500000 1.90818149818
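For anyone rerunning the experiment on a modern interpreter, a rough Python 3 port of the same benchmark might look like this (time.perf_counter stands in for time.clock, which was removed in Python 3.8; range replaces xrange). Absolute timings will of course differ from the 2003 numbers, and since Python 3 unified int and long, the gap between the two methods need not reproduce.

```python
from time import perf_counter

l = list(range(1000000))

start = perf_counter()
print(sum(l), perf_counter() - start)

# now sum a list of the sums of 1000 slices
start = perf_counter()
print(sum([sum(l[x:x + 1000]) for x in range(0, len(l), 1000)]),
      perf_counter() - start)

# repeat
start = perf_counter()
print(sum(l), perf_counter() - start)
```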

You are confusing the behavior of summing long integers with that of
summing plain integers.

When you sum from 0 to 65,535, your running total (2,147,450,880) is
still just under sys.maxint (2,147,483,647 on 32-bit machines). When you
add 65,536, the total goes over sys.maxint and becomes a Python long.
Adding the next number to the running total is then no longer a single
add instruction on the processor; it is multiple instructions. If you
check the implementation, Python ends up doing manual adds and carries
on unsigned C shorts, which make up the 'digits' of a Python long.
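The crossover point described above can be checked arithmetically. A small sketch (Python 3 ints are unified, so the old 32-bit sys.maxint is written out as the constant 2**31 - 1 rather than taken from sys):

```python
MAXINT_32 = 2**31 - 1  # sys.maxint on a 32-bit Python 2 build

# Walk the running total and find the first addend whose result
# exceeds the 32-bit maxint -- where Python 2.3 switched to longs.
total = 0
crossover = None
for n in range(1000000):
    total += n
    if total > MAXINT_32:
        crossover = n
        break

print(crossover)                       # 65536
print(sum(range(65536)) <= MAXINT_32)  # True: summing 0..65535 still fits
print(sum(range(65537)) > MAXINT_32)   # True: adding 65536 goes long
```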

When you break the list up into blocks of 1000, the largest block sum is
for 999,000 to 999,999, which is 999,499,500; you will notice this is
smaller than sys.maxint (on 32-bit machines), so every block is summed
with fast integer adds, and at worst the outer sum results in 1000
Python long additions, as compared with 934,464 long additions in your
original method (that factor of roughly 934 times as many long additions
is crazy).
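Those two counts can be reproduced directly. A sketch counting, for each method, the additions whose result exceeds the old 32-bit sys.maxint (the ones Python 2.3 would have performed as long additions):

```python
MAXINT_32 = 2**31 - 1  # sys.maxint on a 32-bit Python 2 build
l = list(range(1000000))

# Original method: one running total; count additions whose result
# exceeds 32-bit maxint.
total = 0
long_adds = 0
for n in l:
    total += n
    if total > MAXINT_32:
        long_adds += 1
print(long_adds)        # 934464

# Chunked method: every 1000-element block sum stays below maxint,
# so only the outer sum over the block sums can ever go long.
block_sums = [sum(l[x:x + 1000]) for x in range(0, len(l), 1000)]
print(max(block_sums))  # 999499500, below maxint
print(len(block_sums))  # 1000 outer additions at worst
```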

The fact that it is /only/ 3 times slower shows us that Python's long
integer implementation for smaller long integers works pretty damn well.

- Josiah