
best cumulative sum

What's the good way to produce a cumulative sum?
E.g., given the list x,
cumx = x[:]
for i in range(1,len(x)):
    cumx[i] = cumx[i]+cumx[i-1]

What's the better way?

Thanks,
Alan Isaac
Nov 22 '05 #1
34 replies, 9273 views
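For later readers: Python eventually grew a built-in for exactly this. The sketch below uses itertools.accumulate, which was added in Python 3.2 and so postdates this thread:

```python
from itertools import accumulate

# accumulate yields the running sums of its input
x = [1, 2, 3, 4]
cumx = list(accumulate(x))
assert cumx == [1, 3, 6, 10]
```

With a func argument (e.g. operator.mul) it generalizes to cumulative reductions, which is where the thread ends up.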
David (Alan) Isaac wrote:
What's the good way to produce a cumulative sum?
E.g., given the list x,
cumx = x[:]
for i in range(1,len(x)):
    cumx[i] = cumx[i]+cumx[i-1]

What's the better way?


Is there something that this doesn't do, or something
it does do that it shouldn't?

You could do it this way:

# untested
def cumulative_sum(L):
    CL = []
    csum = 0
    for x in L:
        csum += x
        CL.append(csum)
    return CL

Whether it is better or worse depends on what you
consider better or worse.
--
Steven.

Nov 22 '05 #2
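A quick sanity check (written in Python 3 here) that the accumulator above produces the same list as the OP's in-place loop:

```python
def cumulative_sum(L):
    # running-total accumulator, as in the post above
    CL = []
    csum = 0
    for x in L:
        csum += x
        CL.append(csum)
    return CL

# the OP's in-place version
x = [3, 1, 4, 1, 5]
cumx = x[:]
for i in range(1, len(x)):
    cumx[i] = cumx[i] + cumx[i-1]

assert cumulative_sum(x) == cumx == [3, 4, 8, 9, 14]
```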
David Isaac wrote:
What's the good way to produce a cumulative sum?
E.g., given the list x,
cumx = x[:]
for i in range(1,len(x)):
cumx[i] = cumx[i]+cumx[i-1]

What's the better way?


Define better. More accurate? Less code?

--
Robert Kern
ro*********@gmail.com

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter

Nov 22 '05 #4
On Nov 21, David Isaac wrote:
What's the good way to produce a cumulative sum?

>>> import operator
>>> x = 1,2,3
>>> reduce(operator.add, x)
6

--
_ _ ___
|V|icah |- lliott <>< md*@micah.elliott.name
" " """
Nov 22 '05 #6
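Two caveats for later readers: reduce collapses the sequence to a single final value (6 here), not the running list the OP asked for, and in Python 3 it moved out of the builtins into functools:

```python
from functools import reduce  # in Python 3, reduce is no longer a builtin
import operator

x = (1, 2, 3)
total = reduce(operator.add, x)  # one final value, not the running list
assert total == 6
```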
Micah Elliott wrote:
On Nov 21, David Isaac wrote:
What's the good way to produce a cumulative sum?

>>> import operator
>>> x = 1,2,3
>>> reduce(operator.add, x)
6


Or just sum(x).

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM erikmaxfrancis
Never contend with a man who has nothing to lose.
-- Baltasar Gracian
Nov 22 '05 #8
Erik Max Francis wrote:
Micah Elliott wrote:
On Nov 21, David Isaac wrote:
What's the good way to produce a cumulative sum?

> import operator
> x = 1,2,3
> reduce(operator.add, x)
6


Or just sum(x).


That just gives you the tail end. The OP asked for a cumulative sum;
that is, a list with all of the intermediate sums, too.

--
Robert Kern
ro*********@gmail.com

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter

Nov 22 '05 #10

Erik Max Francis wrote:
Micah Elliott wrote:
On Nov 21, David Isaac wrote:
>
What's the good way to produce a cumulative sum?

> import operator
> x = 1,2,3
> reduce(operator.add, x)

6


Or just sum(x).

He seems to want scanl

Nov 22 '05 #12
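For readers unfamiliar with the name: scanl is Haskell's reduce-with-intermediate-results. A minimal Python 3 sketch (the name and signature here are illustrative, not from any code in this thread):

```python
import operator

def scanl(func, init, iterable):
    # yield init, then each successive reduction of the items onto it
    acc = init
    yield acc
    for item in iterable:
        acc = func(acc, item)
        yield acc

assert list(scanl(operator.add, 0, [1, 2, 3])) == [0, 1, 3, 6]
```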


David Isaac wrote:
<bo****@gmail.com> wrote in message
news:11**********************@g44g2000cwa.googlegroups.com...
He seems to want scanl


Yes. But it's not in Python, right?
(I know about Keller's version.)

Robert Kern wrote:
Define better. More accurate? Less code?


Good point.
As Bonono (?) suggested: I'd most like a solution that
relies on a built-in to give me both of those.
(Pretty is good too.) Like SciPy's cumsum.

What is wrong with Keller's version? I did write my own scanl/scanl1
using yield so it is a bit lazier than Keller's.

You can also use a Ref/Adder class. something like this :

class Adder:
    def __init__(self, i):
        self.val = i
    def add(self, i):
        self.val += i
        return self

a = Adder(init)
cl = [a.add(x).val for x in list]

But this, I believe, is against the idiom of the language: just as l.sort()
returns None rather than l, the preference is that an object method with a
side effect (in this case add) should return None.

And it is debatable whether add should return self.val or self (even if it
doesn't return None), as both have their uses.

b = a.add(10) + 10

But then it may be violating some other idioms in the language.

Nov 22 '05 #14
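As an aside for much later readers: Python 3.8's assignment expressions give the same running-state-in-a-comprehension effect as the Adder class above, without a helper class (and with the same stylistic caveats):

```python
# The walrus operator rebinds acc on each iteration of the comprehension
xs = [1, 2, 3, 4]
acc = 0
cl = [acc := acc + x for x in xs]
assert cl == [1, 3, 6, 10]
```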
David Isaac wrote:
What's the good way to produce a cumulative sum?
E.g., given the list x,
cumx = x[:]
for i in range(1,len(x)):
cumx[i] = cumx[i]+cumx[i-1]

What's the better way?

Thanks,
Alan Isaac


Don't know about better, but this is what I came up with:
class PartialSum(object):
    def __init__(self, alist):
        self.sequence = alist[:]
        self.N = len(alist)
        self.reset()

    def reset(self):
        self._i = 0
        self._sumi = 0

    def __iter__(self):
        return self

    def next(self):
        if self._i == self.N:
            self.reset()
            raise StopIteration
        self._sumi += self.sequence[self._i]
        self._i += 1
        return self._sumi

for sum in PartialSum([1,1,1,1,1]):
    print sum ** 2

1
4
9
16
25

Gerard

Nov 22 '05 #15
Here's a silly recursive version (don't really use, it's too slow):

def csum(s):
    if len(s)==0: return s
    return csum(s[:-1]) + [sum(s)]
Nov 22 '05 #16
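The slowness comes from calling sum(s) over the whole prefix at every level, which makes the version above quadratic in additions. A variant (sketch, Python 3) that threads the running total through the recursion does one addition per element instead, though the slicing still copies:

```python
def csum(s, start=0):
    # start carries the sum of everything already consumed
    if not s:
        return []
    first = start + s[0]
    return [first] + csum(s[1:], first)

assert csum([1, 2, 3, 4]) == [1, 3, 6, 10]
```

Still recursion-depth-limited, so not a serious contender against the plain loop.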
David Isaac wrote:
for a solution when these are available.
Something like:
def cumreduce(func, seq, init = None):
"""Return list of cumulative reductions.

This can be written more concisely as a generator:
>>> import operator
>>> def ireduce(func, iterable, init):
...     for i in iterable:
...         init = func(init, i)
...         yield init
...
>>> list(ireduce(operator.mul, range(1,5), init=1))
[1, 2, 6, 24]


Michael
Nov 23 '05 #17

Michael Spencer wrote:
David Isaac wrote:
for a solution when these are available.
Something like:
def cumreduce(func, seq, init = None):
"""Return list of cumulative reductions.

This can be written more concisely as a generator:
>>> import operator
>>> def ireduce(func, iterable, init):
...     for i in iterable:
...         init = func(init, i)
...         yield init
...
>>> list(ireduce(operator.mul, range(1,5), init=1))
[1, 2, 6, 24]

If iterable has no elements, I believe the behaviour should be [init],
there is also the case of init=None that needs to be handled. But
otherwise, that is more or less what I wrote for my scanl/scanl1.
Michael


Nov 23 '05 #18
Michael Spencer wrote:
This can be written more concisely as a generator:


<bo****@gmail.com> wrote in message
news:11**********************@g44g2000cwa.googlegroups.com...
If iterable has no elements, I believe the behaviour should be [init],
there is also the case of init=None that needs to be handled.
Right. So it is "more concise" only by being incomplete, right?
What other advantages might it have?
otherwise, that is more or less what I wrote for my scanl/scanl1.


I didn't see a post with that code.

Alan Isaac
Nov 23 '05 #19

David Isaac wrote:
Michael Spencer wrote:
This can be written more concisely as a generator:

<bo****@gmail.com> wrote in message
news:11**********************@g44g2000cwa.googlegroups.com...
If iterable has no elements, I believe the behaviour should be [init],
there is also the case of init=None that needs to be handled.


Right. So it is "more concise" only by being incomplete, right?
What other advantages might it have?
otherwise, that is more or less what I wrote for my scanl/scanl1.


I didn't see a post with that code.

Alan Isaac


from itertools import chain

def scanl1(func, seq):
    def my_func(x):
        my_func.init = func(my_func.init, x)
        return my_func.init

    i = iter(seq)
    try: my_func.init = i.next()
    except StopIteration: return []

    return (chain([my_func.init], (my_func(y) for y in i)))

def scanl(func, seq, init):
    def my_func(x):
        my_func.init = func(my_func.init, x)
        return my_func.init

    my_func.init = init
    return (chain([my_func.init], (my_func(y) for y in seq)))

Nov 23 '05 #20

"Michael Spencer" <ma**@telcopartners.com> wrote in message
news:mailman.1054.1132707811.18701.python...
This can be written more concisely as a generator:

>>> import operator
>>> def ireduce(func, iterable, init):
...     for i in iterable:
...         init = func(init, i)
...         yield init


OK, this might do it. But is a generator "better"?
(I assume accuracy is the same, so what about speed?)

def ireduce(func, iterable, init=None):
    if not init:
        iterable = iter(iterable)
        init = iterable.next()
        yield init
    elif not iterable:
        yield init
    for item in iterable:
        init = func(init, item)
        yield init

Alan Isaac
Nov 23 '05 #21
[David Isaac]
def cumreduce(func, seq, init = None):
    if not seq:
        cr = [init]*bool(init)
    else:
        cr = [seq[0]]*len(seq)
        if init:
            cr[0] = func(cr[0], init)
        for idx in range(1, len(seq)):
            cr[idx] = func(cr[idx-1], seq[idx])
    return cr
[Michael Spencer]
>>> import operator
>>> def ireduce(func, iterable, init):
...     for i in iterable:
...         init = func(init, i)
...         yield init
...


[David Isaac]
Right. So it is "more concise" only by being incomplete, right?
What other advantages might it have?


- allows arbitrary iterables, not sequences only
- smaller memory footprint if sequential access to the items is sufficient
- fewer special cases, therefore
- less error prone, e. g.
+ does your implementation work for functions with
f(a, b) != f(b, a)?
+ won't users be surprised that
cumreduce(f, [1]) == cumreduce(f, [], 1)
!=
cumreduce(f, [0]) == cumreduce(f, [], 0)

Of course nothing can beat a plain old for loop in terms of readability and
-- most likely -- speed.

Peter

Nov 23 '05 #22
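Peter's point about f(a, b) != f(b, a) is easy to see with subtraction. A small demonstration (Python 3; the running helper is this example's own, not from the thread):

```python
import operator

def running(func, seq):
    # cumulative reduction with the accumulator as the FIRST argument
    acc = seq[0]
    out = [acc]
    for x in seq[1:]:
        acc = func(acc, x)
        out.append(acc)
    return out

assert running(operator.sub, [10, 1, 2]) == [10, 9, 7]

# Flipping the argument order changes every result past the first:
flipped = running(lambda a, b: b - a, [10, 1, 2])
assert flipped == [10, -9, 11]
```

An implementation that quietly swaps the accumulator and item order would pass all tests with operator.add and fail only on non-commutative functions.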

David Isaac wrote:
OK, this might do it. But is a generator "better"?
(I assume accuracy is the same, so what about speed?)

I haven't the slightest idea. So long as it works for me, is not too hard
to understand, and has no obvious speed problem, I don't care too much.
I believe inline code is in general faster, especially if you don't call
other functions too much.

I like to use generators/lists because I find them easier to understand, and
they are more B&D (bondage and discipline), which has saved me from a few
subtle errors.

Nov 23 '05 #23

"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
- allows arbitrary iterables, not sequences only
- smaller memory footprint if sequential access to the items is sufficient
Sure; I meant aside from that.
- fewer special cases, therefore
- less error prone, e. g.
+ does your implementation work for functions with
f(a, b) != f(b, a)?
See news:Mzpgf.12794$NN2.4089@trnddc02
+ won't users be surprised that
cumreduce(f, [1]) == cumreduce(f, [], 1)
!=
cumreduce(f, [0]) == cumreduce(f, [], 0)
THANKS!
Of course nothing can beat a plain old for loop in terms of readability and -- most likely -- speed.


OK.

Thanks,
Alan Isaac
Nov 23 '05 #24

"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
Of course nothing can beat a plain old for loop in terms of readability and -- most likely -- speed.


Here are two versions, meant to be comparable.
Thanks,
Alan Isaac

def cumreduce(func, seq, init = None):
    cr = seq[:]
    if not(init is None):
        if seq:
            cr[0] = func(init, seq[0])
        else:
            cr = [init]
    for idx in range(1, len(seq)):
        cr[idx] = func(cr[idx-1], seq[idx])
    return cr

def ireduce(func, iterable, init=None):
    if init is None:
        iterable = iter(iterable)
        init = iterable.next()
        yield init
    elif not iterable:
        yield init
    for item in iterable:
        init = func(init, item)
        yield init
Nov 23 '05 #25
David Isaac wrote:
def ireduce(func, iterable, init=None):
    if init is None:
        iterable = iter(iterable)
        init = iterable.next()
        yield init
    elif not iterable:

You are in for a surprise here:

>>> def empty():
...     for item in []:
...         yield item
...
>>> bool(empty())
True
>>> bool(iter([]))
True   # python 2.3 and probably 2.5
>>> bool(iter([]))
False  # python 2.4

        yield init
    for item in iterable:
        init = func(init, item)
        yield init


Peter

Nov 23 '05 #26
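Since bool(iter(...)) changed between releases, a more robust emptiness test is to actually pull an item and re-attach it if one was there. A sketch in Python 3 spelling (peek is this example's own name):

```python
from itertools import chain

def peek(iterator):
    """Return (is_empty, iterator); the returned iterator is still fully usable."""
    sentinel = object()
    first = next(iterator, sentinel)  # next() with a default never raises
    if first is sentinel:
        return True, iter(())
    return False, chain([first], iterator)  # push the item back on front

empty, it = peek(iter([]))
assert empty
empty, it = peek(iter([1, 2]))
assert not empty and list(it) == [1, 2]
```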

"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
You are in for a surprise here:


You got that right!

>>> def empty():
...     for item in []:
...         yield item
...
>>> bool(empty())
True

Ouch.

>>> bool(iter([]))
True   # python 2.3 and probably 2.5
>>> bool(iter([]))
False  # python 2.4

Double ouch.
I was relying on Python 2.4 behavior.
What is the reasoning behind the changes?
(Can you offer a URL to a discussion?)

So, is the only way to test for an empty iterable
to see if it can generate an item? I found this:
http://aspn.activestate.com/ASPN/Coo.../Recipe/413614
Seems like a reason to rely on sequences ...

Thanks,
Alan Isaac
Nov 24 '05 #27
Alan aka David Isaac wrote:
"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
You are in for a surprise here:
You got that right!
>>> def empty():
...     for item in []:
...         yield item
...
>>> bool(empty())
True

Ouch.

>>> bool(iter([]))
True   # python 2.3 and probably 2.5
>>> bool(iter([]))
False  # python 2.4

Double ouch.
I was relying on Python 2.4 behavior.
What is the reasoning behind the changes?
(Can you offer a URL to a discussion?)


Raymond Hettinger introduced an optimization that looked cool until it broke
some of Guido's code, see

http://mail.python.org/pipermail/pyt...er/056594.html
So, is the only way to test for an empty iterable
to see if it can generate an item? I found this:
http://aspn.activestate.com/ASPN/Coo.../Recipe/413614
Seems like a reason to rely on sequences ...


I'd rather have a second look whether the test is really needed.

Peter

Nov 24 '05 #28

"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
I'd rather have a second look whether the test is really needed.


That's too obscure of a hint.
Can you be a bit more explicit?
Here's an example (below).
You're saying I think that most of it is unnecessary.
Thanks,
Alan

def ireduce(func, iterable, init=None):
    iterable = iter(iterable)
    if init is None:
        init = iterable.next()
        yield init
    else:
        try:
            first = iterable.next()
            init = func(init, first)
            yield init
        except StopIteration:
            yield init
    for item in iterable:
        init = func(init, item)
        yield init
Nov 24 '05 #29
David Isaac wrote:
"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
I'd rather have a second look whether the test is really needed.


That's too obscure of a hint.
Can you be a bit more explicit?
Here's an example (below).
You're saying I think that most of it is unnecessary.


From the Zen of Python:
"Special cases aren't special enough to break the rules."

I think that the test for an empty iterator makes ireduce() unintuitive. Try
asking someone who has not followed the discussion
what list(ireduce(add, [], 42)) might produce, given that

list(ireduce(add, [1], 42)) --> [43]
list(ireduce(add, [1, 2], 42)) --> [43, 45]
list(ireduce(add, [])) --> []
list(ireduce(add, [1])) --> [1]
list(ireduce(add, [1, 2])) --> [1, 3]

I suspect that [42] will be a minority vote.

Peter
Nov 26 '05 #30
"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
I think that the test for an empty iterator makes ireduce() unintuitive.


OK.
I misunderstood your point.
But that is needed to match the behavior of reduce:

>>> reduce(operator.add, [], 42)
42

Thanks,
Alan
Nov 27 '05 #31
David Isaac wrote:
"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
I think that the test for an empty iterator makes ireduce() unintuitive.


OK.
I misunderstood you point.
But that is needed to match the behavior of reduce.
reduce(operator.add,[],42)

42


Wouldn't an implementation that gives

list(ireduce(add, [], 42)) --> [42]
list(ireduce(add, [1], 42)) --> [42, 43]
list(ireduce(add, [1, 2], 42)) --> [42, 43, 45]
list(ireduce(add, [])) --> []
list(ireduce(add, [1])) --> [1]
list(ireduce(add, [1, 2])) --> [1, 3]

be sufficiently similar, too? E. g.

from itertools import chain

def ireduce(op, iterable, *init):
    iterable = chain(init, iterable)
    accu = iterable.next()
    yield accu
    for item in iterable:
        accu = op(accu, item)
        yield accu

Peter

Nov 27 '05 #32
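Peter's sketch transcribed to Python 3 (iterable.next() becomes next(iterable), plus an explicit return for the empty no-init case, which PEP 479 would otherwise turn into a RuntimeError), checked against the outputs he lists:

```python
from itertools import chain
from operator import add

def ireduce(op, iterable, *init):
    # any init value is simply chained in front of the items
    iterable = chain(init, iterable)
    try:
        accu = next(iterable)
    except StopIteration:
        return  # nothing at all: yield nothing
    yield accu
    for item in iterable:
        accu = op(accu, item)
        yield accu

assert list(ireduce(add, [], 42)) == [42]
assert list(ireduce(add, [1, 2], 42)) == [42, 43, 45]
assert list(ireduce(add, [])) == []
assert list(ireduce(add, [1, 2])) == [1, 3]
```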

Peter Otten wrote:
David Isaac wrote:
"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
I think that the test for an empty iterator makes ireduce() unintuitive.


OK.
I misunderstood you point.
But that is needed to match the behavior of reduce.
> reduce(operator.add,[],42)

42


Wouldn't an implementation that gives

list(ireduce(add, [], 42)) --> [42]
list(ireduce(add, [1], 42)) --> [42, 43]
list(ireduce(add, [1, 2], 42)) --> [42, 43, 45]
list(ireduce(add, [])) --> []
list(ireduce(add, [1])) --> [1]
list(ireduce(add, [1, 2])) --> [1, 3]

be sufficiently similar, too? E. g.

from itertools import chain

def ireduce(op, iterable, *init):
    iterable = chain(init, iterable)
    accu = iterable.next()
    yield accu
    for item in iterable:
        accu = op(accu, item)
        yield accu

I believe there is only one initializer in reduce. Also it is possible
to not provide it.

Nov 28 '05 #33
bo****@gmail.com wrote:
def ireduce(op, iterable, *init):
    iterable = chain(init, iterable)
    accu = iterable.next()
    yield accu
    for item in iterable:
        accu = op(accu, item)
        yield accu
I believe there is only one initializer in reduce.
Throw in a

if len(init) > 1: raise TypeError

for increased similarity to reduce().
Also it is possible to not provide it.


Try it.

Peter

PS: Did I mention that I prefer for-loops over reduce() most of the time?
Nov 28 '05 #34

"Peter Otten" <__*******@web.de> wrote in message
news:dm*************@news.t-online.com...
sufficiently similar


I think I understand your points now.
But I wanted to match these cases:
>>> import operator
>>> reduce(operator.add, [], 42)
42
>>> reduce(operator.add, [1], 42)
43

The idea is that the i-th yield of ireduce should be the
result of reduce on seq[:i] with the given initializer.

That said, for the applications I first intended,
yes it is sufficiently similar. For now, I'll stick
with the version below.

Thanks,
Alan

def ireduce(func, iterable, init=None):
    iterable = iter(iterable)
    if init is None:
        init = iterable.next()
        yield init
    else:
        try:
            init = func(init, iterable.next())
            yield init
        except StopIteration:
            yield init
    for item in iterable:
        init = func(init, item)
        yield init
Nov 28 '05 #35
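Alan's final version, transcribed to Python 3 (next() spelling, plus explicit handling of the empty-iterator cases that PEP 479 later made mandatory), checked against the reduce cases it aims to match:

```python
from functools import reduce
from operator import add

def ireduce(func, iterable, init=None):
    iterable = iter(iterable)
    if init is None:
        try:
            init = next(iterable)
        except StopIteration:
            return  # no items, no init: yield nothing
        yield init
    else:
        try:
            init = func(init, next(iterable))
            yield init
        except StopIteration:
            yield init  # empty input: yield the initializer, like reduce returns it
            return
    for item in iterable:
        init = func(init, item)
        yield init

assert list(ireduce(add, [], 42)) == [42] and reduce(add, [], 42) == 42
assert list(ireduce(add, [1], 42)) == [43] and reduce(add, [1], 42) == 43
assert list(ireduce(add, [1, 2], 42))[-1] == reduce(add, [1, 2], 42) == 45
```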

This discussion thread is closed

Replies have been disabled for this discussion.
