Bytes IT Community

reduce() anomaly?

This seems like it ought to work, according to the
description of reduce(), but it doesn't. Is this
a bug, or am I missing something?

Python 2.3.2 (#1, Oct 20 2003, 01:04:35)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> d1 = {'a':1}
>>> d2 = {'b':2}
>>> d3 = {'c':3}
>>> l = [d1, d2, d3]
>>> d4 = reduce(lambda x, y: x.update(y), l)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'
>>> d4 = reduce(lambda x, y: x.update(y), l, {})
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'

- Steve.
Jul 18 '05 #1
226 Replies
Stephen C. Waterbury wrote:
This seems like it ought to work, according to the
description of reduce(), but it doesn't. Is this
a bug, or am I missing something?
The latter.
Python 2.3.2 (#1, Oct 20 2003, 01:04:35)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> d1 = {'a':1}
>>> d2 = {'b':2}
>>> d3 = {'c':3}
>>> l = [d1, d2, d3]
>>> d4 = reduce(lambda x, y: x.update(y), l)
the update method returns None.
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'


right.
>>> d4 = reduce(lambda x, y: x.update(y), l, {})

Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'


same issue.

If you want to abuse reduce at all costs for this purpose,
reduce(lambda x, y: x.update(y) or x, l) might work.
Alex

Jul 18 '05 #2

Stephen C. Waterbury wrote:
This seems like it ought to work, according to the
description of reduce(), but it doesn't. Is this
a bug, or am I missing something?

Python 2.3.2 (#1, Oct 20 2003, 01:04:35)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> d1 = {'a':1}
>>> d2 = {'b':2}
>>> d3 = {'c':3}
>>> l = [d1, d2, d3]
>>> d4 = reduce(lambda x, y: x.update(y), l)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'
>>> d4 = reduce(lambda x, y: x.update(y), l, {})

Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'

- Steve.


The update method updates the dictionary in place, but returns None.
Thus, after the first call to x.update(y), reduce is trying to call
x.update(y) with x equal to None. Hence the error.

Alternatives which work include

def rupdate(d, other):
    d.update(other)
    return d

reduce(rupdate, l)

and

d = {}
map(lambda x: d.update(x), l)
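On Python 3, where reduce is no longer a builtin and must be imported from functools, David's helper still works unchanged; a small sketch (seeding reduce with {} so the input dicts are left alone):

```python
from functools import reduce

def rupdate(d, other):
    # update in place, but return the dict so reduce can chain it
    d.update(other)
    return d

l = [{'a': 1}, {'b': 2}, {'c': 3}]

# seed with {} so the dicts in l are not mutated
merged = reduce(rupdate, l, {})

# the plain-loop equivalent
d = {}
for other in l:
    d.update(other)
```

Both spellings produce the same merged dict, and the seed keeps l[0] untouched.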

David

Jul 18 '05 #3

Stephen C. Waterbury wrote:
This seems like it ought to work, according to the
description of reduce(), but it doesn't. Is this
a bug, or am I missing something?

Python 2.3.2 (#1, Oct 20 2003, 01:04:35)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> d1 = {'a':1}
>>> d2 = {'b':2}
>>> d3 = {'c':3}
>>> l = [d1, d2, d3]
>>> d4 = reduce(lambda x, y: x.update(y), l)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'
>>> d4 = reduce(lambda x, y: x.update(y), l, {})
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'

- Steve.


No bug: your lambda evaluates d1.update(d2) on the first call and then
returns the result of the update() method, which is None. So on the second
call None.update(d3) fails. Here's what you might have intended:
>>> d1 = {'a':1}
>>> d2 = {'b':2}
>>> d3 = {'c':3}
>>> l = [d1, d2, d3]
>>> d4 = reduce(lambda x, y: x.update(y) or x, l)
>>> d4
{'a': 1, 'c': 3, 'b': 2}

Note the side effect on d1 if you don't provide an initial dictionary:

>>> d1
{'a': 1, 'c': 3, 'b': 2}
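Peter's side-effect point is easy to verify; a sketch for Python 3 (where reduce must be imported from functools):

```python
from functools import reduce

d1, d2, d3 = {'a': 1}, {'b': 2}, {'c': 3}
l = [d1, d2, d3]

# without an initial value, reduce starts from d1 and mutates it:
# the "merged" result and the first input are the same object
d4 = reduce(lambda x, y: x.update(y) or x, l)

# seeding with {} leaves the first dict untouched
d1b = {'a': 1}
d5 = reduce(lambda x, y: x.update(y) or x, [d1b, d2, d3], {})
```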

Peter

Jul 18 '05 #4

"Alex Martelli" <al***@aleax.it> wrote in message
news:3f********************@news1.tin.it...
Stephen C. Waterbury wrote:
If you want to abuse reduce at all costs for this purpose,
reduce(lambda x, y: x.update(y) or x, l) might work.


Just out of curiosity, for what kind of problems do we find reduce to just
be the Right Way? I mean, summing a big list of numbers is fun and all, but
I never find any use for it otherwise. I *often* try to think if reduce
would be useful when I come across some collection-of-values manipulation
task, but usually one wants to repeat an operation on a whole set of values,
not reduce the set to one value.

So, are there any *non-trivial* and *unabusive* uses of reduce, where any
other way would be tedious, repetitive, slow, unclear, etc.? I'm very
curious to see them.

reduce--the black sheep of the functional-Python herd?
--
Francis Avila

Jul 18 '05 #5

Francis Avila wrote:
Just out of curiosity, for what kind of problems do we find reduce to just
be the Right Way? I mean, summing a big list of numbers is fun and all, but
I never find any use for it otherwise. I *often* try to think if reduce
would be useful when I come across some collection-of-values manipulation
task, but usually one wants to repeat an operation on a whole set of values,
not reduce the set to one value.
I don't use reduce extremely routinely, but I certainly do find myself
using it. Grepping through my Python sources, the most common uses I
find are summing together lists of numeric values and joining together
(flattening) a list of lists (though only when the contained lists
themselves do not contain any sequences as elements).
So, are there any *non-trivial* and *unabusive* uses of reduce, where any
other way would be tedious, repetitive, slow, unclear, etc.? I'm very
curious to see them.


My cellular automaton engine CAGE uses reduce several times, although
admittedly this use is academic (obviously a good way to implement a
ReductionRule is almost by definition with a reduce operation :-). Even
then, there are times when, say, you want to get the sum of the states
of the cells in the neighborhood, and "neighborhood" is defined in a
sufficiently generic way that the states of the neighborhood are just a
list of state values.

My Solar System calculator BOTEC has a similar application; when
plotting courses, you can have an arbitrary number of transfers, and
those transfers can each have an arbitrary number of burns. So it's
quite convenient to do a reduce on the list of burns (a sequence of
sequences), and then a reduce on the list of deltavees for the transfers
(a sequence of numerics).
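BOTEC's actual API isn't shown here, so the following is only an illustrative sketch with invented names and numbers: a reduce over each inner list of burn delta-vees, then a reduce over the per-transfer totals.

```python
from functools import reduce
import operator

# hypothetical data: each transfer is a list of burn delta-vees (m/s)
transfers = [
    [3200.0, 150.0],       # burns for transfer 1
    [680.0],               # burns for transfer 2
    [90.0, 45.0, 12.0],    # burns for transfer 3
]

# reduce over each inner sequence of burns...
per_transfer = [reduce(operator.add, burns) for burns in transfers]

# ...then a reduce over the per-transfer delta-vees
total_deltavee = reduce(operator.add, per_transfer)
```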

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ Thousands have lived without love, not one without water.
\__/ W.H. Auden
Jul 18 '05 #6

Erik Max Francis wrote:
...
Francis Avila wrote:
Just out of curiosity, for what kind of problems do we find reduce to just
be the Right Way? I mean, summing a big list of numbers is fun and all, but
I never find any use for it otherwise. I *often* try to think if reduce ...

I don't use reduce extremely routinely, but I certainly do find myself
using it. Grepping through my Python sources, the most common uses I
find are summing together lists of numeric values and joining together
In Python 2.3, we have sum for that (much faster, too).
(flattening) a list of lists (though only when the contained lists
themselves do not contain any sequences as elements).


_UN_fortunately sum works for that too -- but almost as slowly as reduce.
E.g., on a small example:

[alex@lancelot bo]$ timeit.py -c -s'lol=[range(20)]*20'
'reduce(list.__add__, lol, [])'
10000 loops, best of 3: 91 usec per loop

[alex@lancelot bo]$ timeit.py -c -s'lol=[range(20)]*20' -s'import operator'
'reduce(operator.add, lol, [])'
10000 loops, best of 3: 88 usec per loop

[alex@lancelot bo]$ timeit.py -c -s'lol=[range(20)]*20' 'sum(lol, [])'
10000 loops, best of 3: 82 usec per loop

while a simple loop is way faster:

[alex@lancelot bo]$ timeit.py -c -s'lol=[range(20)]*20' 'x=[]' 'for l in
lol: x.extend(l)'
10000 loops, best of 3: 26 usec per loop

and can easily be sped up a little more:

[alex@lancelot bo]$ timeit.py -c -s'lol=[range(20)]*20' 'x=[]' 'xe=x.extend'
'for l in lol: xe(l)'
10000 loops, best of 3: 20 usec per loop
Given the typical use cases of reduce are covered by sum -- and sometimes
even better by simple loops &c -- then I would say that in Python 2.3 and
following reduce should not be used often at all.
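A correctness cross-check of the flattening variants Alex times above (a small sketch; timings will differ by machine, but the results must agree):

```python
lol = [list(range(20))] * 20

# builds a brand-new list at every step: quadratic behavior
flat_sum = sum(lol, [])

# mutates one accumulator in place via extend: linear behavior
flat_loop = []
for l in lol:
    flat_loop.extend(l)
```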
Alex

Jul 18 '05 #7

Francis Avila wrote:
...
Just out of curiosity, for what kind of problems do we find reduce to just
be the Right Way? I mean, summing a big list of numbers is fun and all,
And best handled by sum (in Python 2.3).
reduce--the black sheep of the functional-Python herd?


Nope -- apply beats it, given that in the last few years apply(f, args) is
best spelled f(*args) ...!-)
Alex

Jul 18 '05 #8

"David C. Fox" <da*******@post.harvard.edu> wrote in
news:bjbqb.81856$mZ5.559701@attbi_s54:
and

d = {}
map(lambda x: d.update(x), l)


which can be written more concisely without the lambda:

d = {}
map(d.update, l)

--
Duncan Booth du****@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?
Jul 18 '05 #9

> Just out of curiosity, for what kind of problems do we find reduce to just
be the Right Way? I mean, summing a big list of numbers is fun and all, but
I never find any use for it otherwise. I *often* try to think if reduce
would be useful when I come across some collection-of-values manipulation
task, but usually one wants to repeat an operation on a whole set of values,
not reduce the set to one value.

So, are there any *non-trivial* and *unabusive* uses of reduce, where any
other way would be tedious, repetitive, slow, unclear, etc.? I'm very
curious to see them.

reduce--the black sheep of the functional-Python herd?
--


IMHO, not. Once I read an article that explains that foldr and
foldl---almost Python's reduce---are most important HOFs. Actually,
functions like map can be easily defined with reduce:

def my_map(func, seq):
    return reduce(lambda seq, el: seq + [func(el)], seq, [])

print my_map(lambda x: 2*x, [1, 2, 3, 4, 5])
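In the same spirit, filter also falls out of reduce; a sketch (written for Python 3, hence the functools import):

```python
from functools import reduce

def my_filter(pred, seq):
    # keep an element only when pred approves it
    return reduce(lambda acc, el: acc + [el] if pred(el) else acc, seq, [])
```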

regards,
anton.

Jul 18 '05 #10

Duncan Booth <du****@NOSPAMrcp.co.uk> wrote in message news:<Xn***************************@127.0.0.1>...
which can be written more concisely without the lambda:

d = {}
map(d.update, l)


Although he shouldn't be using map at all for this case; he's not
using the resulting list.

Just a plain for loop will suffice:

for otherDict in l:
    d.update(otherDict)

Jeremy
Jul 18 '05 #11

Jeremy Fincher wrote:
Duncan Booth <du****@NOSPAMrcp.co.uk> wrote in message
news:<Xn***************************@127.0.0.1>...
which can be written more concisely without the lambda:

d = {}
map(d.update, l)
Although he shouldn't be using map at all for this case; he's not
using the resulting list.


Absolutely true.

Just a plain for loop will suffice:

for otherDict in l:
    d.update(otherDict)


And if one needs to get top performance, an _almost_ plain for loop
will give performance just as good as map (rather than, say, 15% worse
or so -- not a worry in 99.999% of the cases, of course...):

d_upda = d.update
for otherDict in l:
    d_upda(otherDict)

[the usual case of "manual constant subexpression hoisting" where the
constant subexpression is the attribute lookup for d.update ...]
Alex

Jul 18 '05 #12

At 03:41 PM 11/5/2003, Francis Avila wrote:
"Alex Martelli" <al***@aleax.it> wrote in message
news:3f********************@news1.tin.it...
Stephen C. Waterbury wrote:
If you want to abuse reduce at all costs for this purpose,
reduce(lambda x, y: x.update(y) or x, l) might work.


Just out of curiosity, for what kind of problems do we find reduce to just
be the Right Way? I mean, summing a big list of numbers is fun and all, but
I never find any use for it otherwise. I *often* try to think if reduce
would be useful when I come across some collection-of-values manipulation
task, but usually one wants to repeat an operation on a whole set of values,
not reduce the set to one value.

So, are there any *non-trivial* and *unabusive* uses of reduce, where any
other way would be tedious, repetitive, slow, unclear, etc.? I'm very
curious to see them.


One's prior programming experience can affect one's view of reduce. My
favorite language, prior to Python, was APL. APL's native data container is
the array, and reduce is a native operator in APL. So we used reduce a lot,
sometimes to add things up, other times to check for all or any conditions,
other times for purposes now forgotten. A companion of
reduce in APL is scan, which did reduce but preserved all the intermediate
values (cumulative sum for example).

To expand your reduce horizons in Python, consider reduce as a way to
examine the relationship between successive pairs of a sequence. For
example one might want the difference between successive elements: given
seq = [1,3,4,7] we'd like differences == [2,1,3].

differences = []
def neighborDifference(left, right):
    differences.append(right - left)
    return right
reduce(neighborDifference, seq)

Bob Gailer
bg*****@alum.rpi.edu
303 442 2625

Jul 18 '05 #13

In article <ma************************************@python.org >,
Bob Gailer <bg*****@alum.rpi.edu> wrote:
To expand your reduce horizons in Python, consider reduce as a way to
examine the relationship between successive pairs of a sequence. For
example one might want the difference between successive elements: given
seq = [1,3,4,7] we'd like differences == [2,1,3].

differences = []
def neighborDifference(left, right):
    differences.append(right - left)
    return right
reduce(neighborDifference, seq)


>>> seq = [1,3,4,7]
>>> [y-x for x,y in zip(seq,seq[1:])]
[2, 1, 3]

--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Jul 18 '05 #14

seq=[1,3,4,7]
map( int.__sub__, seq[1:], seq[:-1] ) # nearly twice faster than ....zip.... for long arrays

G-:

"David Eppstein" <ep******@ics.uci.edu> wrote in message news:ep****************************@news.service.u ci.edu...
In article <ma************************************@python.org >,
Bob Gailer <bg*****@alum.rpi.edu> wrote:

<...>
seq = [1,3,4,7] we'd like a differences == [2,1,3].

<...>

seq=[1,3,4,7]
[y-x for x,y in zip(seq,seq[1:])]

--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science


Jul 18 '05 #15

Bob Gailer wrote:
...
One's prior programming experience can affect one's view of reduce. My
favorite language, prior to Python, was APL. APL's native data container
is the array, and reduce is a native operator in APL. So we used reduce a
I've done a _lot_ of APL and APL2 (albeit well before I met Python --
basically in the 1978-1988 time frame) and I don't think that "affects
my view of reduce" _in Python_ -- any more than (e.g.) having done a
lot of Scheme would affect my view of lambda and tail recursion _in
Python_, a lot of Icon that of generators, a lot of Perl that of RE's, ...

Basically, the old quip about "coding Fortran in any language" does not
apply to Fortran _only_: you can, up to a point, code (APL, Icon, Scheme,
Perl, ...) in any language... but it's _still_ basically a bad idea.

Mastering a language means (in part) mastering the idioms thereof --
what works well/smoothly/elegantly in that language, what most of that
language's best practitioners _do_. Knowledge of other languages --
the more the merrier -- can help, giving you perspective, but in the
end it shouldn't really affect your view of the target language.

The reduce method of Numeric's ufuncs _is_ a good, close analogy
for APL's reduce (more generally, APL concepts -- though not syntax --
do help a lot with Numeric, far more than they do with base Python).
Numeric.add.reduce(xs) IS a perfect equivalent to APL's +/xs -- if
xs is a Numeric.array (else the conversion overhead's gonna kill you:-):

$ timeit.py -c -s'import Numeric' -s'import operator' -s'xs=range(9999)'
'reduce(operator.add, xs)'
100 loops, best of 3: 2.9e+03 usec per loop
$ timeit.py -c -s'import Numeric' -s'import operator' -s'xs=range(9999)'
'Numeric.add.reduce(xs)'
10 loops, best of 3: 2.2e+04 usec per loop
$ timeit.py -c -s'import Numeric' -s'import operator' -s'xs=range(9999)'
'sum(xs)'
1000 loops, best of 3: 1.15e+03 usec per loop
$ timeit.py -c -s'import Numeric' -s'import operator' -s'xs=range(9999)'
-s'xa=Numeric.array(xs)' 'Numeric.add.reduce(xa)'
10000 loops, best of 3: 54 usec per loop

Yeah, I know, speed isn't everything. But when the speed of a perfectly
analogous operation drops by two times (from reduce to sum) or even
more when it drops by 50 times (from reduce on a list to Numeric.add.
reduce on a Numeric.array) one _does_ get a sense of when one's using
a language "reasonably idiomatically" -- within the design space of that
language, within the pathways most often taken by the language's best
practitioners and therefore most likely to be solid and optimized.

companion of reduce in APL is scan, which did reduce but preserved all the
intermediate values (cumulative sum for example).
Yep, that's e.g. Numeric.add.accumulate in Numeric + Python.

To expand your reduce horizons in Python, consider reduce as a way to
examine the relationship between successive pairs of a sequence. For
example one might wants the difference between successive elements: given
seq = [1,3,4,7] we'd like a differences == [2,1,3].

differences = []
def neighborDifference(left, right):
differences.append(right - left)
return right
reduce(neighborDifference, seq)


def difs_reduce(seq):
    differences = []
    def neighborDifference(left, right):
        differences.append(right - left)
        return right
    reduce(neighborDifference, seq)
    return differences

def difs_zip(seq):
    return [ b-a for a, b in zip(seq,seq[1:]) ]

import itertools
def difs_izip(seq):
    return [ b-a for a, b in itertools.izip(seq,seq[1:]) ]

def difs_loop(seq):
    differences = []
    it = iter(seq)
    a = it.next()
    for b in it:
        differences.append(b-a)
        a = b
    return differences

$ timeit.py -c -s'import a' -s'x=range(9999)' 'a.difs_reduce(x)'
100 loops, best of 3: 1.8e+04 usec per loop
$ timeit.py -c -s'import a' -s'x=range(9999)' 'a.difs_zip(x)'
100 loops, best of 3: 1.8e+04 usec per loop

so, the list comprehension with zip has just the same performance
as the clever use of reduce -- and is far more concise. But if
you're looking for speed, then the humble loop...:

timeit.py -c -s'import a' -s'x=range(9999)' 'a.difs_loop(x)'
100 loops, best of 3: 9e+03 usec per loop

...is twice as fast! Simplicity isn't to be sneered at. And
if you're keen on higher-level reasoning and expression, itertools
is quite good for that:

timeit.py -c -s'import a' -s'x=range(9999)' 'a.difs_izip(x)'
100 loops, best of 3: 7.6e+03 usec per loop

a one-liner, and another 15% speed boost wrt the humble loop.
[I can shave another 10% performance improvement with an
experimental 'w2' type I've coded to do specifically the
window-by-two job, but that small level of acceleration is
probably not enough to justify an extra type...;-)]
Yeah, I know, sigh, I come across as a micro-optimization freak,
which most definitely _ISN'T_ the point. I'm using speed and
concision as proxies for *idiomatic use* -- simplicity and
elegance and maintainability... "elegance" &c we could argue about
forever, so I go for numbers that aren't quite as arguable:-).
I think reduce (like apply, filter, and probably map too) has had
its day but really doesn't have enough good use cases any more, in
this day and age of list comprehensions and iterator-tools, to
justify its existence as a Python built-in except for legacy reasons.
Alex

Jul 18 '05 #16

On Thu, 06 Nov 2003 21:57:33 -0800, David Eppstein <ep******@ics.uci.edu> wrote:
In article <ma************************************@python.org >,
Bob Gailer <bg*****@alum.rpi.edu> wrote:
To expand your reduce horizons in Python, consider reduce as a way to
examine the relationship between successive pairs of a sequence. For
example one might wants the difference between successive elements: given
seq = [1,3,4,7] we'd like a differences == [2,1,3].

differences = []
def neighborDifference(left, right):
differences.append(right - left)
return right
reduce(neighborDifference, seq)
(IMO that's an ugly way to get a global result ;-).

seq=[1,3,4,7]
[y-x for x,y in zip(seq,seq[1:])]


The OP might like to see how generators make this sort of thing easy, e.g.,

>>> def pairs(seq):
...     itseq = iter(seq)
...     earlier = itseq.next()
...     for later in itseq:
...         yield earlier, later
...         earlier = later
...
>>> seq = [1,3,4,7]

Reincarnating your list comprehension, kind of:

>>> [later-earlier for earlier,later in pairs(seq)]
[2, 1, 3]

or factoring out the first difference ...

>>> def diff1(seq):
...     for earlier,later in pairs(seq): yield later-earlier
...

that becomes

>>> [d for d in diff1(seq)]
[2, 1, 3]

or perhaps better

>>> list(diff1(seq))
[2, 1, 3]

And then we can play with other kinds of sequences and first differences, e.g.,

>>> linetext = """\
... abc
... abx
... bx
... zax
... """
>>> import sets
>>> for d in diff1([sets.Set(line) for line in linetext.splitlines()]): print d
...
Set(['x'])
Set([])
Set(['a', 'z'])

That was the first differences of:

>>> for d in ([sets.Set(line) for line in linetext.splitlines()]): print d
...
Set(['a', 'c', 'b'])
Set(['a', 'x', 'b'])
Set(['x', 'b'])
Set(['a', 'x', 'z'])
Regards,
Bengt Richter
Jul 18 '05 #17

Alex Martelli wrote:
Yeah, I know, speed isn't everything. But when the speed of a perfectly
analogous operation drops by two times (from reduce to sum) or even
more when it drops by 50 times (from reduce on a list to Numeric.add.
reduce on a Numeric.array) one _does_ get a sense of when one's using
a language "reasonably idiomatically" -- within the design space of that
language, within the pathways most often taken by the language's best
practitioners and therefore most likely to be solid and optimized.


But reduce isn't simply intended for adding up numbers. It's for doing
any kind of reduction.

Yes, I'm sure specific reductions based on summing are faster than using
reduce. But that goes without saying. Furthermore, Numeric is not
builtin to Python, so that seems a red herring to me. Either you should
compare builtins to builtins, or not.

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ Every time a friend succeeds, I die a little.
\__/ Gore Vidal
Jul 18 '05 #18

Erik Max Francis wrote:
But reduce isn't simply intended for adding up numbers. It's for doing
any kind of reduction.
However, so many of reduce's practical use cases are eaten up by sum,
that reduce is left without real use cases to justify its existence.

If you consider the couple of uses left in the standard library, for
example... e.g., take csv.py:

quotechar = reduce(lambda a, b, quotes = quotes:
                   (quotes[a] > quotes[b]) and a or b, quotes.keys())

....a masterpiece of clarity, right? The simple Python alternative,

quotechar = None
for k in quotes:
    if not quotechar or quotes[k]>quotes[quotechar]:
        quotechar = k

may be deemed boring, but then why not go for speed & concision with...:

quotechar = max([ (v,k) for k,v in quotes.iteritems() ])[-1]

....?-) All 4 uses in csv.py are similar to this, and the one
in difflib.py:

matches = reduce(lambda sum, triple: sum + triple[-1],
                 self.get_matching_blocks(), 0)

is clearly best expressed in Python 2.3 as:

matches = sum([ triple[-1] for triple in self.get_matching_blocks() ])
The only other uses are in Lib/test/ -- and even there, apart from
tests of reduce itself, all are "reduce(add..." [i.e., sum!] save
for *one*...:

def factorial(n):
    return reduce(int.__mul__, xrange(1, n), 1)

even in that one case (and apart from the confusing choice of having
factorial(n) return the factorial of n-1...), the most simplistic
implementation:

def fac(n):
    result = 1
    for i in xrange(2, n):
        result *= i
    return result

is only 3 times slower, and, if one is in a hurry, recursion and
memoization are obviously preferable:

def facto(n, _memo={1:1}):
    try: return _memo[n]
    except KeyError:
        result = _memo[n] = (n-1) * facto(n-1)
        return result
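On modern Pythons, the hand-rolled _memo dict can be spelled with functools.lru_cache; a sketch of the same factorial-of-(n-1) function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def facto(n):
    # mirrors Alex's version: facto(n) returns the factorial of n-1
    if n == 1:
        return 1
    return (n - 1) * facto(n - 1)
```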

the performance numbers being:

[alex@lancelot bo]$ timeit.py -c -s'import facs' 'facs.factorial(13)'
100000 loops, best of 3: 10.3 usec per loop

[alex@lancelot bo]$ timeit.py -c -s'import facs' 'facs.fac(13)'
10000 loops, best of 3: 32 usec per loop

[alex@lancelot bo]$ timeit.py -c -s'import facs' 'facs.facto(13)'
1000000 loops, best of 3: 1.26 usec per loop

Yes, I'm sure specific reductions based on summing are faster than using
reduce. But that goes without saying. Furthermore, Numeric is not
builtin to Python, so that seems a red herring to me. Either you should
compare builtins to builtins, or not.


If you want APL-ish functionality with Python, Numeric is where you'll
find it (one day numarray, Numeric's successor, may finally gain entrance
into the standard library, but, don't hold your breath...).

But comparing plain Python code to a built-in that's almost bereft of
good use cases, and finding the plain Python code _faster_ on such a
regular basis, is IMHO perfectly legitimate. If a built-in gives me
obfuscated or slow code, where plain good old "let's code it out in
Python" gains clarity or speed, then it's time for that built-in to
go. 'reduce' exists for purely legacy reasons, and, IMHO, would count
as a wart were it not for Python's admirable determination to keep
old code running (however, even that determination can be overdone,
and I look forwards to the 3.0 release where old by-now-unwarranted
built-ins can be put out to pasture...).
Alex

Jul 18 '05 #19

JCM
Alex Martelli <al***@aleax.it> wrote:
....various good points...
if one is in a hurry, recursion and memoization are obviously preferable:

def facto(n, _memo={1:1}):
    try: return _memo[n]
    except KeyError:
        result = _memo[n] = (n-1) * facto(n-1)
        return result

the performance numbers being:

[alex@lancelot bo]$ timeit.py -c -s'import facs' 'facs.factorial(13)'
100000 loops, best of 3: 10.3 usec per loop
[alex@lancelot bo]$ timeit.py -c -s'import facs' 'facs.fac(13)'
10000 loops, best of 3: 32 usec per loop
[alex@lancelot bo]$ timeit.py -c -s'import facs' 'facs.facto(13)'
1000000 loops, best of 3: 1.26 usec per loop


I'm going off topic, but it's really not fair to compare a memoized
function to non-memoized implementations using a "best of 3" timing
test.
Jul 18 '05 #20

Georgy Pruss wrote:
seq=[1,3,4,7]
map( int.__sub__, seq[1:], seq[:-1] ) # nearly twice faster than
....zip.... for long arrays


If this is a race, then let's measure things properly and consider
a few more alternatives, shall we...?

[alex@lancelot xine-lib-1-rc2]$ python a.py
reduce: 100 loops, best of 3: 13.8 msec per loop
zip: 100 loops, best of 3: 18 msec per loop
izip: 100 loops, best of 3: 7.6 msec per loop
w2: 100 loops, best of 3: 7.1 msec per loop
wib: 100 loops, best of 3: 12.7 msec per loop
loop: 100 loops, best of 3: 8.9 msec per loop
map: 100 loops, best of 3: 7.6 msec per loop

itertools.w2 is an experimental addition to itertools which I
doubt I'll be allowed to put in (gaining less than 10% wrt the
more general izip is hardly worth a new itertool, sigh). But,
apart from that, map and izip are head to head, and the plain
good old Python-coded loop is next best...! reduce is slowest.

My code...:
if __name__ != '__main__':

    def difs_reduce(seq):
        differences = []
        def neighborDifference(left, right, accum=differences.append):
            accum(right - left)
            return right
        reduce(neighborDifference, seq)
        return differences

    def difs_zip(seq):
        return [ b-a for a, b in zip(seq,seq[1:]) ]

    import itertools
    def difs_izip(seq):
        return [ b-a for a, b in itertools.izip(seq,seq[1:]) ]

    def difs_w2(seq, wib=itertools.w2):
        return [ b-a for a, b in wib(seq) ]

    def window_by_two(iterable):
        it = iter(iterable)
        last = it.next()
        for elem in it:
            yield last, elem
            last = elem

    def difs_wib(seq, wib=window_by_two):
        return [ b-a for a, b in wib(seq) ]

    def difs_loop(seq):
        differences = []
        it = iter(seq)
        a = it.next()
        for b in it:
            differences.append(b-a)
            a = b
        return differences

    def difs_map(seq):
        return map(int.__sub__, seq[1:], seq[:-1])

if __name__ == '__main__':
    import timeit
    bargs = ['-c', '-simport a', '-sx=range(9999)']
    funs = 'reduce zip izip w2 wib loop map'.split()
    for fun in funs:
        args = bargs + ['a.difs_%s(x)' % fun]
        print '%8s:' % fun,
        timeit.main(args)
Alex

Jul 18 '05 #21

Alex Martelli wrote:
However, so many of reduce's practical use cases are eaten up by sum,
that reduce is left without real use cases to justify its existence.
Any reduction that doesn't involve summing won't be handled by sum.
Flattening a list of (non-recursive) lists is a good example. reduce is
already in the language; removing an existing, builtin function seems
totally inappropriate given that it's there for a reason and there will
be no replacement.
But comparing plain Python code to a built-in that's almost bereft of
good use cases, and finding the plain Python code _faster_ on such a
regular basis, is IMHO perfectly legitimate.


reduce will be at least as fast as writing an explicit loop.
Potentially more if the function object used is itself a builtin
function.

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ A man cannot be comfortable without his own approval.
\__/ Mark Twain
Jul 18 '05 #22

Erik Max Francis wrote:
Alex Martelli wrote:
However, so many of reduce's practical use cases are eaten up by sum,
that reduce is left without real use cases to justify its existence.
Any reduction that doesn't involve summing won't be handled by sum.
Flattening a list of (non-recursive) lists is a good example. reduce is


If this is a good example, what's a bad one?

>>> sum([ ['a list'], ['of', 'non'], ['recursive', 'lists'] ], [])
['a list', 'of', 'non', 'recursive', 'lists']

sum, exactly like reduce(operator.add ... , has bad performance for this
kind of one-level flattening, by the way. A plain loop on .extend is much
better than either (basically, whenever a loop of x+=y has much better
performance than one of x = x + y -- since the latter is what both sum
and reduce(operator.add... are condemned to do -- you should consider
choosing a simple loop if performance has any importance at all).
already in the language; removing an existing, builtin function seems
totally inappropriate given that it's there for a reason and there will
be no replacement.
Existing, builtin functions _will_ be removed in 3.0: Guido is on record
as stating that (at both Europython and OSCON -- I don't recall if he
had already matured that determination at PythonUK time). They
exist for a reason, but when that reason is: "once upon a time, we
thought (perhaps correctly, given the way the rest of the language and
library was at the time) that they were worth having", that's not
sufficient reason to weigh down the language forever with their
not-useful-enough weight. The alternatives to removing those parts that
aren't useful enough any more are, either to stop Python's development
forever, or to make Python _bloated_ with several ways to perform the
same tasks. I much prefer to "lose" the "legacy only real use" built-ins
(presumably to some kind of legacy.py module whence they can easily
be installed to keep some old and venerable piece of code working) than
to choose either of those tragic alternatives.

3.0 is years away, but functions that are clearly being aimed at for
obsolencence then should, IMHO, already be better avoided in new code;
particularly because the obsolescence is due to the existence of better
alternatives. You claim "there will be no replacement", but in fact I have
posted almost a dozen possible replacements for 'reduce' for a case that
was being suggested as a good one for it and _every_ one of them is
better than reduce; I have posted superior replacements for ALL uses of
reduce in the standard library (except those that are testing reduce itself,
but if that's the only good use of reduce -- testing itself -- well, you see
my point...:-). I don't intend to spend much more time pointing out how
reduce can be replaced by better code in real use cases, by the way:-).

But comparing plain Python code to a built-in that's almost bereft of
good use cases, and finding the plain Python code _faster_ on such a
regular basis, is IMHO perfectly legitimate.


reduce will be at least as fast as writing an explicit loop.


You are wrong: see the almost-a-dozen cases I posted to try and demolish
one suggested "good use" of reduce.
Potentially more if the function object used is itself a builtin
function.


In one of the uses of reduce in the standard library, for which I posted
superior replacements today, the function object was int.__mul__ -- builtin
enough for you? Yet I showed how using recursion and memoization instead of
reduce would speed up that case by many, many times.

Another classic example of reduce being hopelessly slow, despite using
a built-in function, is exactly the "flatten a 1-level list" case mentioned
above. Try:
x = reduce(operator.add, listoflists, x)
vs:
for L in listoflists: x.extend(L)
for a sufficiently big listoflists, and you'll see... (the latter if need be
can get another nice little multiplicative speedup by hoisting the x.extend
lookup, but the key issue is O(N**2) reduce vs O(N) loop...).

[alex@lancelot src]$ timeit.py -c -s'xs=[[x] for x in range(999)]' -s'import
operator' 'x=[]' 'x=reduce(operator.add, xs, x)'
100 loops, best of 3: 8.7e+03 usec per loop
[alex@lancelot src]$ timeit.py -c -s'xs=[[x] for x in range(999)]' -s'import
operator' 'x=[]' 'for L in xs: x.extend(L)'
1000 loops, best of 3: 860 usec per loop
[alex@lancelot src]$ timeit.py -c -s'xs=[[x] for x in range(999)]' -s'import
operator' 'x=[]; xex=x.extend' 'for L in xs: xex(L)'
1000 loops, best of 3: 560 usec per loop

See what I mean? Already for a mere 999 1-item lists, the plain Python
code is 10 times faster than reduce, and if that's not quite enough and
you want it 15 times faster instead, that's trivial to get, too.
Alex

Jul 18 '05 #23

JCM wrote:
Alex Martelli <al***@aleax.it> wrote:
...various good points...
if one is in a hurry, recursion and
memoization are obviously preferable:

def facto(n, _memo={1:1}):
    try: return _memo[n]
    except KeyError:
        result = _memo[n] = n * facto(n-1)
        return result
...
[alex@lancelot bo]$ timeit.py -c -s'import facs' 'facs.facto(13)'
1000000 loops, best of 3: 1.26 usec per loop


I'm going off topic, but it's really not fair to compare a memoized
function to non-memoized implementations using a "best of 3" timing
test.


The best-of-3 is irrelevant, it's the million loops that help:-).

Of course you can memoize any pure function of hashable args. But
memoizing a recursive implementation of factorial has a nice property,
shared by other int functions implemented recursively in terms of their
values on other ints, such as fibonacci numbers: the memoization you do for
any value _helps_ the speed of computation for other values. This nice
property doesn't apply to non-recursive implementations.

Once you run timeit.py, with its million loops (and the best-of-3, but
that's not crucial:-), this effect disappears. But on smaller tests it is
more easily seen. You can for example define the memoized functions
by a def nested inside another, so each call of the outer function will undo
the memoization, and exercise them that way even with timeit.py. E.g.:

import operator

def wireduce(N=23):
    def factorial(x, _memo={0:1, 1:1}):
        try: return _memo[x]
        except KeyError:
            result = _memo[x] = reduce(operator.mul, xrange(2,x+1), 1)
            return result
    for x in range(N, 0, -1): factorial(x)

def wirecurse(N=23):
    def factorial(x, _memo={0:1, 1:1}):
        try: return _memo[x]
        except KeyError:
            result = _memo[x] = x * factorial(x-1)
            return result
    for x in range(N, 0, -1): factorial(x)

[alex@lancelot bo]$ timeit.py -c -s'import aa' 'aa.wireduce()'
1000 loops, best of 3: 710 usec per loop
[alex@lancelot bo]$ timeit.py -c -s'import aa' 'aa.wirecurse()'
1000 loops, best of 3: 280 usec per loop
Alex

Jul 18 '05 #24


"Alex Martelli" <al***@aleax.it> wrote in message news:Sw*********************@news1.tin.it...
Georgy Pruss wrote:
seq=[1,3,4,7]
map( int.__sub__, seq[1:], seq[:-1] )  # nearly twice faster than ...zip... for long arrays
If this is a race, then let's measure things properly and consider
a few more alternatives, shall we...?


:) No, it's not a race. I just found the map expression to be clear and elegant.
Fortunately, one of the fastest solutions too.

G-:

[alex@lancelot xine-lib-1-rc2]$ python a.py
reduce: 100 loops, best of 3: 13.8 msec per loop
zip: 100 loops, best of 3: 18 msec per loop
izip: 100 loops, best of 3: 7.6 msec per loop
w2: 100 loops, best of 3: 7.1 msec per loop
wib: 100 loops, best of 3: 12.7 msec per loop
loop: 100 loops, best of 3: 8.9 msec per loop
map: 100 loops, best of 3: 7.6 msec per loop

itertools.w2 is an experimental addition to itertools which I
doubt I'll be allowed to put in (gaining less than 10% wrt the
more general izip is hardly worth a new itertool, sigh). But,
apart from that, map and izip are head to head, and the plain
good old Python-coded loop is next best...! reduce is slowest.

My code...:
<...>
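[Editor's note: the benchmark code above was elided from the archive and can't be recovered; the itertools experiments `w2` and `wib` in particular are not reconstructible. As a sketch only, the remaining candidates being timed might have looked roughly like this, modernized to Python 3, where `map` and `zip` are lazy and `reduce` lives in `functools`:]

```python
import operator
from functools import reduce  # a builtin in the Python 2.3 of this thread

def diffs_map(seq):
    # the map() spelling from the quoted post (which used int.__sub__)
    return list(map(operator.sub, seq[1:], seq[:-1]))

def diffs_zip(seq):
    return [b - a for a, b in zip(seq[:-1], seq[1:])]

def diffs_loop(seq):
    out = []
    for i in range(1, len(seq)):
        out.append(seq[i] - seq[i - 1])
    return out

def diffs_reduce(seq):
    # contorted on purpose: thread (last_seen, results) through the fold
    def step(acc, x):
        last, out = acc
        out.append(x - last)
        return x, out
    return reduce(step, seq[1:], (seq[0], []))[1]

seq = [1, 3, 4, 7]
assert (diffs_map(seq) == diffs_zip(seq) == diffs_loop(seq)
        == diffs_reduce(seq) == [2, 1, 3])
```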

Alex


Jul 18 '05 #25


"Alex Martelli" <al*****@yahoo.com> wrote in message
news:aK*********************@news1.tin.it...
above. Try:
x = reduce(operator.add, listoflists, x)
vs:
for L in listoflists: x.extend(L)
for a sufficiently big listoflists, and you'll see... (the latter if need be can get another nice little multiplicative speedup by hoisting the x.extend lookup, but the key issue is O(N**2) reduce vs O(N) loop...).
Right: however that issue and the possibility of hoisting x.extend
have *nothing* to do with reduce vs. for. For a fair comparison of
the latter pair, try the following, which is algorithmicly equivalent
to your sped-up for loop.
xs=[[i] for i in range(10)]
x=[]
xtend=x.extend
reduce(lambda dum,L: xtend(L), xs, x)
x

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[timeits snipped]... See what I mean?
That you hate reduce?

I am sure that the O(N) reduce would be similarly faster than an
O(N**2) operator.add for loop: what would *that* mean?
Already for a mere 999 1-item lists, the plain Python
code is 10 times faster than reduce,


No, you have only shown that for N=999, O(N) can be O(N**2)/10, and
that smart programmers who understand that can write better (faster)
code than those who do not.

Terry J. Reedy

PS: for practical rather than didactic programming, I am pretty sure I
would have written a for loop 'reduced' with xtend.

Jul 18 '05 #26

In article <dx***********************@news2.tin.it>, Alex Martelli wrote:
Nope -- apply beats it, given that in the last few years apply(f, args) is
best spelled f(*args) ...!-)


Yeah, and speaking of which, where was the debate on explicit vs. implicit
and avoiding Perl-style line-noise syntax-sugaring when that feature was
snuck in? ;)

--
..:[ dave benjamin (ramenboy) -:- www.ramenfest.com -:- www.3dex.com ]:.
: d r i n k i n g l i f e o u t o f t h e c o n t a i n e r :
Jul 18 '05 #27

In article <sl******************@lackingtalent.com>,
Dave Benjamin <ra***@lackingtalent.com> wrote:
In article <dx***********************@news2.tin.it>, Alex Martelli wrote:

Nope -- apply beats it, given that in the last few years apply(f, args) is
best spelled f(*args) ...!-)


Yeah, and speaking of which, where was the debate on explicit vs. implicit
and avoiding Perl-style line-noise syntax-sugaring when that feature was
snuck in? ;)


There wouldn't be any; it's a straightforward integration with the
long-standing ability to use *args and **kwargs when defining a
function.
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

"It is easier to optimize correct code than to correct optimized code."
--Bill Harlan
Jul 18 '05 #28

Alex Martelli <al*****@yahoo.com> writes:
Existing, builtin functions _will_ be removed in 3.0: Guido is on record
as stating that (at both Europython and OSCON -- I don't recall if he
had already matured that determination at PythonUK time). They
exist for a reason, but when that reason is: "once upon a time, we
thought (perhaps correctly, given the way the rest of the language and
library was at the time) that they were worth having", that's not
sufficient reason to weigh down the language forever with their
not-useful-enough weight. The alternatives to removing those parts that
aren't useful enough any more are, either to stop Python's development
forever, or to make Python _bloated_ with several ways to perform the
same tasks.


I agree: Down with bloat! Get rid of sum() -- it's redundant with
reduce(), which I use all the time, like so:

def longer(x, y):
    if len(y) > len(x): return y
    else: return x

def longest(seq):
    return reduce(longer, seq)

print longest(("abc", "yumyum!", "hello", "goodbye", "?"))

=> yumyum!

|>oug
Jul 18 '05 #29

Alex Martelli <al*****@yahoo.com> wrote:
Of course you can memoize any pure function of hashable args. But
memoizing a recursive implementation of factorial has a nice property,
shared by other int functions implemented recursively in terms of their
values on other ints, such as fibonacci numbers: the memoization you do for
any value _helps_ the speed of computation for other values. This nice
property doesn't apply to non-recursive implementations.


Maybe we need a better example because it is possible to use reduce on
a function which has side effects.

Anton.

class _Memo:
    biggest = 1
    facdict = {0 : 1, 1 : 1}

def fac(n):
    def mymul(x,y):
        res = x*y
        _Memo.facdict[y] = res
        _Memo.biggest = y
        return res
    def factorial(x):
        b = _Memo.biggest
        if x > b: return reduce(mymul, xrange(b+1,x+1), b)
        return _Memo.facdict[x]
    return factorial(n)

def test():
    print fac(5)

if __name__=='__main__':
    test()

Jul 18 '05 #30

an***@vredegoor.doge.nl (Anton Vredegoor) wrote:
def factorial(x):
    b = _Memo.biggest
    if x > b: return reduce(mymul, xrange(b+1,x+1), b)
    return _Memo.facdict[x]


Sorry, this part should be:

def factorial(x):
    b = _Memo.biggest
    if x > b:
        start = _Memo.facdict[b]
        return reduce(mymul, xrange(b+1,x+1), start)
    return _Memo.facdict[x]
Jul 18 '05 #31

Anton Vredegoor wrote:
an***@vredegoor.doge.nl (Anton Vredegoor) wrote:
def factorial(x):
    b = _Memo.biggest
    if x > b: return reduce(mymul, xrange(b+1,x+1), b)
    return _Memo.facdict[x]


Sorry, this part should be:

def factorial(x):
    b = _Memo.biggest
    if x > b:
        start = _Memo.facdict[b]
        return reduce(mymul, xrange(b+1,x+1), start)
    return _Memo.facdict[x]


The horrid complication of this example -- including the global side
effects, and the trickiness that made even you, its author, fall into such
a horrid bug -- is, I think, a _great_ testimonial to why reduce should go.
I would also include in this harrowing complexity the utter frailty of the
(textually separated) _Memo / mymul coupling, needed to maintain
_Memo's unstated invariant. This code is as close to unmaintainable,
due to complexity and brittleness, as Python will allow.

I will gladly concede that reduce IS hard to beat if your purpose is to
write complex, brittle, hard-to-maintain code.

Which is exactly why I'll be enormously glad to see the back of it, come
3.0 time, and in the meantime I heartily suggest to all readers that (like,
say, apply) it is best (_way_ best) avoided in all new code.
Alex

Jul 18 '05 #32

In article <TE*********************@news1.tin.it>, Alex Martelli
<al*****@yahoo.com> writes
.....
I will gladly concede that reduce IS hard to beat if your purpose is to
write complex, brittle, hard-to-maintain code.

Which is exactly why I'll be enormously glad to see the back of it, come
3.0 time, and in the meantime I heartily suggest to all readers that (like,
say, apply) it is best (_way_ best) avoided in all new code. .....

I don't understand why reduce makes the function definition brittle. The
code was brittle independently of the usage. Would a similar brittle
usage of sum make sum bad?

I really don't think reduce, map etc are bad. Some just don't like that
style. I like short code and if reduce, filter et al make it short I'll
use that.

As for code robustness/fragility how should it be measured? Certainly I
can make any code fragile if I keep randomly changing the language
translator. So code using reduce is fragile if we remove reduce.
Similarly sum is fragile when we intend to remove that. Code is made
robust by using features that stay in the language. Keep reduce and
friends and make Python more robust.
Alex


--
Robin Becker
Jul 18 '05 #33

Alex Martelli <al*****@yahoo.com> wrote:
I will gladly concede that reduce IS hard to beat if your purpose is to
write complex, brittle, hard-to-maintain code.


Answering in the same vein I could now concede that you had
successfully distracted from my suggestion to find a better example in
order to demonstrate the superiority of recursive techniques over
iterative solutions in programming memoization functions.

However I will not do that, and instead concede that my code is often
"complex, brittle, hard-to-maintain". This probably results from me
being Dutch, and so being incapable of being wrong, one has to mangle
code (and speech!) in mysterious ways in order to simulate mistakes.

I'd like to distance myself from the insinuation that such things
happen on purpose. It was not necessary to use reduce to show an
iterative memoization technique, but since the thread title included
reduce I felt compelled to use it in order to induce the reader to
find a better example of recursive functions that memoize and which
cannot be made iterative.

This would generate an important precedent for me because I have
believed for a long time that every recursive function can be made
iterative, and gain efficiency as a side effect.

By the way, there is also some other association of reduce besides the
one with functional programming. Reductionism has become politically
incorrect in certain circles because of its association with
neopositivism and behavioristic psychology.

I would like to mention that there is also a phenomenological
reduction, which contrary to neopositivistic reduction does not try to
express some kind of transcendental idealism, in which the world would
be made transparent in some light of universal reason. Instead it
tries to lead us back to the world and the concrete individual
properties of the subjects under discussion.

Anton
Jul 18 '05 #34

Anton Vredegoor wrote:
...
find a better example of recursive functions that memoize and which
cannot be made iterative.

This would generate an important precedent for me because I have
believed for a long time that every recursive function can be made
iterative, and gain efficiency as a side effect.


Well, without stopping to ponder the issue deeply, I'd start with:

def Ack(M, N, _memo={}):
    try: return _memo[M,N]
    except KeyError: pass
    if not M:
        result = N + 1
    elif not N:
        result = Ack(M-1, 1)
    else:
        result = Ack(M-1, Ack(M, N-1))
    _memo[M,N] = result
    return result

M>=0 and N>=0 (and M and N both integers) are preconditions of the Ack(M, N)
call.

There is a substantial body of work on this function in computer science
literature, including work on a technique called "incrementalization" which,
I believe, includes partial but not total iterativization (but I am not
familiar with the details). I would be curious to examine a totally
iterativized and memoized version, and comparing its complexity, and
performance on a few typical (M,N) pairs, to both this almost-purest
recursive version, and an incrementalized one.
Alex

Jul 18 '05 #35

Anton Vredegoor wrote:
find a better example of recursive functions that memoize and which
cannot be made iterative.

This would generate an important precedent for me because I have
believed for a long time that every recursive function can be made
iterative, and gain efficiency as a side effect.


Recursive memoization can be better than iteration when the recursion
can avoid evaluating a large fraction of the possible subproblems.

An example would be the 0-1 knapsack code in
http://www.ics.uci.edu/~eppstein/161/python/knapsack.py

If there are n items and size limit L, the iterative versions (pack4 and
pack5) take time O(nL), while the recursive version (pack3) takes time
O(min(2^n, nL)). So the recursion can be better when L is really large.

An example of this came up in one of my research papers some years ago,
http://www.ics.uci.edu/~eppstein/pubs/p-3lp.html
which involved a randomized recursive memoization technique with runtime
significantly faster than that for iterating through all possible
subproblems.
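[Editor's sketch: this is not Eppstein's pack3 from the linked page, just the idea it describes, using `functools.lru_cache`, which postdates the thread. Only subproblems the recursion actually reaches get evaluated and cached, so for very large size limits this can touch far fewer than n*L states.]

```python
from functools import lru_cache

def knapsack(items, limit):
    # items: list of (size, value) pairs; returns best total value
    # of a subset whose sizes sum to at most `limit`
    @lru_cache(maxsize=None)
    def best(i, remaining):
        if i == len(items):
            return 0
        size, value = items[i]
        result = best(i + 1, remaining)           # skip item i
        if size <= remaining:                     # or take it, if it fits
            result = max(result, value + best(i + 1, remaining - size))
        return result
    return best(0, limit)

# sizes 2+3 fit in 5 for value 3+4 = 7, the optimum here
assert knapsack([(2, 3), (3, 4), (4, 5), (5, 6)], 5) == 7
```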

--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Jul 18 '05 #36

Alex Martelli <al*****@yahoo.com> wrote:
def Ack(M, N, _memo={}):
    try: return _memo[M,N]
    except KeyError: pass
    if not M:
        result = N + 1
    elif not N:
        result = Ack(M-1, 1)
    else:
        result = Ack(M-1, Ack(M, N-1))
    _memo[M,N] = result
    return result

M>=0 and N>=0 (and M and N both integers) are preconditions of the Ack(M, N)
call.


Defined as above the number of recursions is equal to the return
value, because there is only one increment per call.

Have a look at the paper about the Ackermann function at:

http://www.dur.ac.uk/martin.ward/martin/papers/

(the 1993 paper, halfway down the list, BTW, there's also something
there about automatically translating assembler to C-code, maybe it
would also be possible to automatically translate C-code to Python?
Start yet another subthread :-)

Another thing is that long integers cannot be used to represent the
result values because the numbers are just too big.

It seems possible to make an iterative version that computes the
values more efficiently, but it suffers from the same number
representation problem.

Maybe Bengt can write a class for representing very long integers as
formulas. For example an old post by François Pinard suggests that:

ackerman(4, 4) == 2 ** (2 ** (2 ** (2 ** (2 ** (2 ** 2))))) - 3
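[Editor's sketch: Pinard's identity generalizes — A(4, n) is a tower of n+3 twos, minus 3 — so huge Ackermann values in this row can be produced without running the recursion at all. A sketch for the M=4 row only, not a general evaluator:]

```python
def ack4(n):
    # A(4, n) == 2**2**...**2 (a tower of n+3 twos) minus 3
    t = 1
    for _ in range(n + 3):
        t = 2 ** t
    return t - 3

assert ack4(0) == 13
assert ack4(1) == 65533
assert ack4(2) == 2 ** 65536 - 3   # already a 19729-digit number
```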

Anton
Jul 18 '05 #37

Georgy Pruss wrote:
"Alex Martelli" <al***@aleax.it> wrote in message
news:Bs***********************@news2.tin.it...
| <...>
| is way faster than "reduce(operator.add, ..." -- and the advantage in
| speed *AND* simplicity grows even more when one considers how often
| that typical reduce is miscoded as "reduce(lambda x, y: x+y, ..." or
| "reduce(int.__add__, ..." and the like.

What's the difference between operator.add and int.__add__?
operator.add just adds anything, not caring about the type of what
it's adding. Internally, it just reaches for the "addition function"
slot and calls that function, period.
Why is operator.add faster and "better" than int.__add__ if we deal
with integers?


int.__add__ must also check the type of its argument, and only call
the addition function if the argument is indeed integer, because
otherwise it must raise a TypeError. The type-checking is normally
needless overhead.

So, we can measure, for example:

[alex@lancelot src]$ timeit.py -c -s'import operator' 'map(int.__add__,
xrange(999), xrange(999))'
1000 loops, best of 3: 730 usec per loop

[alex@lancelot src]$ timeit.py -c -s'import operator' 'map(operator.__add__,
xrange(999), xrange(999))'
1000 loops, best of 3: 460 usec per loop

the 999 (useless) type-checks cost 270 microseconds, for a net of
about 270 nanoseconds per check; the 999 additions and the building
of the 999-elements result list cost another 460 microseconds, or
about 460 nanoseconds per operation-and-addition-to-list-of-results.
Alex

Jul 18 '05 #38

Alex Martelli wrote:
Erik Max Francis wrote:

But reduce isn't simply intended for adding up numbers. It's for doing
any kind of reduction.

However, so many of reduce's practical use cases are eaten up by sum,
that reduce is left without real use cases to justify its existence.


How about

import operator
seq_of_flag_integers = [01020, 02300, 00132] #etc.
reduce(operator.or_, seq_of_integers)

My simple timing tests (cumulative time for 1000 trials, each with a
sequence of 1000 integers), show no significant difference between the
above and an alternative using a for loop and |=, but the above is
clearer to read.

I'm not claiming that the use case above is common, or particularly
useful. I'm just pointing out sum doesn't replace reduce unless the
function being applied is addition. Addition may be the most common
case, and in that case, sum is both clearer and faster. However, that
doesn't detract from the clarity and usefulness of reduce in the
remaining cases.

David

P. S. I've seen a lot of talk about removing old features from Python,
or specifically old built-ins, because of bloat. Does this "bloat"
reduce performance, or does it refer to the extra burden on someone
learning the language or reading someone else's code?

Jul 18 '05 #39

David C. Fox wrote:
...
However, so many of reduce's practical use cases are eaten up by sum,
that reduce is left without real use cases to justify its existence.
How about


....a typical example of *made-up* case follows...:
import operator
seq_of_flag_integers = [01020, 02300, 00132] #etc.
reduce(operator.or_, seq_of_integers)

My simple timing tests (cumulative time for 1000 trials, each with a
sequence of 1000 integers), show no significant difference between the
above and an alternative using a for loop and |=, but the above is
clearer to read.
You may consider it "clearer to read", and I might opine differently
(apart from the typo/bug that you're reducing a sequence you never
defined, of course:-). That (necessary) trailing underscore in the
name of operator.or_ is quite unpleasant, for example. But I think
the real point is another.

If you're doing lots of bit-twidding in Python, you surely need to learn
such Python constructs as for and |= anyway.

With these can't-avoid-learning-them constructs, you can obviously
code a simple, elementary loop such as:
ored_flags = 0
for flags in all_the_flags:
    ored_flags |= flags

Alternatively, you may learn _two more_ things -- that 'reduce' thingy
_plus_ the fact that module operator has an or_ function that does the
same job as | -- all in order to get an *equivalent* way to code the
same task?! Not to mention that what you end up coding this way is most
definitely NOT going to be any clearer than the simple, elementary loop
to any future maintainer of your code -- if your bit-twiddling code is
going to be maintained by somebody who doesn't understand for or | you
are in trouble anyway, friend, while -- given that reduce and operator.or_
give NO real advantages! -- it WOULD be perfectly possible for your future
maintainer to NOT be familiar with them.

I'm not claiming that the use case above is common, or particularly
useful. I'm just pointing out sum doesn't replace reduce unless the
function being applied is addition. Addition may be the most common
case, and in that case, sum is both clearer and faster. However, that
doesn't detract from the clarity and usefulness of reduce in the
remaining cases.
It's hard to detract from the clarity and usefulness of a construct
that never had much usefulness OR clarity in the first place. When
we've taken away just about all of its reasonably frequent use cases,
what remains? Essentially only a case of "more than one way to do-itis"
where in the "BEST" case the existence of 'reduce' and module operator
let you "break even" compared to elementary, simplest coding -- and
more often than not you can fall into traps such as those exemplified by
most reduce-defenders' posts on this thread.

P. S. I've seen a lot of talk about removing old features from Python,
or specifically old built-ins, because of bloat. Does this "bloat"
reduce performance, or does it refer to the extra burden on someone
learning the language or reading someone else's code?


A slightly larger memory footprint, and larger built-ins dictionaries,
can only reduce runtime performance very marginally. The "cognitive
burden" of having built-ins that "don't carry their weight", on the
other hand, is considered an issue *in Python*, because it doesn't fit
in with Python's general philosophy and worldview. Python is meant to
be a LEAN language (with a not-lean standard library of modules that
are specifically imported when needed); certain "legacy" features are
sub-optimal (and best removed, when the strict constraint of backwards
compatibility can be relaxed) because they interfere with that (and
built-ins are close enough to "the core language" that they _do_ need
to be weighed severely in terms of "how often will they be useful").

It's a conceptual wart that the only sincere response to this thread
subject can be "not much, really -- basically backwards compatibility,
and some people's preference for constructs they're used to, often in
preference to simpler or better performing ones, some of which didn't
happen to have been introduced yet when they learned Python"...:-).
Alex

Jul 18 '05 #40

On 9 Nov 2003 22:40:57 GMT, bo**@oz.net (Bengt Richter) wrote:

[Ackermann function generates very long integers]
Is there a fast formula for computing results, ignoring the representation problem?


This seems to be an easy way to generate them faster:

http://www.kosara.net/thoughts/ackermann.html

... back to the topic of this thread ...

From what I get of the main discussion (recursion vs iteration) it is
reasonable to not make a distinction based on the type of algorithm
but to only look at the order in which the nodes of the search space
are visited. It seems possible to turn recursive functions into
iterative ones by pushing and popping a stack of argument values.

This opens up possibilities of continuing from arbitrary positions of
the stack instead of just by pushing and popping ...
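[Editor's sketch of the push/pop transformation for the Ackermann function discussed earlier: each recursive call Ack(m-1, Ack(m, n-1)) becomes "push m-1, keep working on (m, n-1)". No memoization here; it only illustrates the mechanical recursion-to-iteration rewrite.]

```python
def ack_iter(m, n):
    # explicit stack of pending m-values replaces the call stack
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1
        elif n == 0:
            stack.append(m - 1)
            n = 1
        else:
            stack.append(m - 1)   # outer call waits for the inner result
            stack.append(m)       # inner call Ack(m, n-1) runs first
            n -= 1
    return n

assert ack_iter(1, 1) == 3
assert ack_iter(2, 3) == 9
assert ack_iter(3, 3) == 61
```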

Anton
Jul 18 '05 #41

Alex Martelli wrote:
P. S. I've seen a lot of talk about removing old features from Python,
or specifically old built-ins, because of bloat. Does this "bloat"
reduce performance, or does it refer to the extra burden on someone
learning the language or reading someone else's code?

A slightly larger memory footprint, and larger built-ins dictionaries,
can only reduce runtime performance very marginally. The "cognitive
burden" of having built-ins that "don't carry their weight", on the
other hand, is considered an issue *in Python*, because it doesn't fit
in with Python's general philosophy and worldview. Python is meant to
be a LEAN language (with a not-lean standard library of modules that
are specifically imported when needed); certain "legacy" features are
sub-optimal (and best removed, when the strict constraint of backwards
compatibility can be relaxed) because they interfere with that (and
built-ins are close enough to "the core language" that they _do_ need
to be weighed severely in terms of "how often will they be useful").


Thanks, that's what I was trying to understand.

David

Jul 18 '05 #42

Thank you!
G-:

"Alex Martelli" <al***@aleax.it> wrote in message news:%R*****************@news2.tin.it...
| <skip>
Jul 18 '05 #43

"Alex Martelli" <al***@aleax.it> wrote in message
news:Bs***********************@news2.tin.it...
Francis Avila wrote:
[a reduce() clone]

[a long and thorough critique]


*Sigh* My only objective was to combine functional/recursive/iterative
programming styles into an olive branch to extend to both parties of this
silly war.

But oh well. Like I said, I never found a use for reduce, but alas, I am a
ham-fisted programmer entirely lacking in subtlety and art....
--
Francis Avila

Jul 18 '05 #44

Alex Martelli <al***@aleax.it> writes:
Anybody who can claim (without a smilie) that "sum is redundant with
reduce" has clearly lost all touch with reality.
*All* touch? My girlfriend still feels real to me when I touch
her....
"sum" accounts for about 80-90% of the uses of reduce found on large
existing codebases,
I have *never* used sum() (or a reduce() that ammounts to sum()). And
I use reduce() fairly often, in more or less the way that I described.
and even more when only _good_ uses of reduce are considered. "sum"
is way faster than "reduce(operator.add, ..." -- and the advantage
in speed *AND* simplicity
I don't care about speed all that much when using Python. If I did, I
probably wouldn't be using Python. And to me, "simple" means
providing a small number of orthogonal features that combine easily,
rather than a lot of special purpose features, that all have to be
learned, remembered, and documented separately. sum() is a
special-purpose feature, while reduce() is a very general feature that
is very easily adapted to a myriad of situations. So what if 80-90%
of the uses of reduce in your experience is to support sum()? That's
not my experience, and even if it were, if sum() is used a lot, then
10-20% of that is still a lot. max(), like sum() also has a built-in
reduce. So the current approach for the future of Python seems to be
to build reduce() into everything that might need it? I'd prefer to
see reduce() taken out of sum() and out of max() and then no one needs
to document or learn that sum() and max() duplicate the functionality
of reduce(). Just learn, remember, and document one thing, and you're
all set.
grows even more when one considers how
often that typical reduce is miscoded as "reduce(lambda x, y: x+y,
..." or "reduce(int.__add__, ..." and the like.
People who do this, no doubt, are more interested in coding than
wading through a big manual to figure out to make use of Python's
idiosyncracities to get their code to run as fast as possible in this
one particular implementation of Python.

Such people are surely not going to want to wade through the manual to
find sum().
The complications of going to a higher-order function and learning about
module operator "pay off" in a SLOWDOWN compared to elementary code...!
All of 3.4% according to your calculations. I'm not losing any sleep
over it.

Learning about module operator is not a chore, since I'd want to know
how to access these operators anyway.

Higher order functions are not complicated -- they are beautiful and
elegant.
The "fancy" ways reduce is often coded slow things down further by about
two times. This is clearly unacceptable.
There's nothing fancy about reduce() -- it's taught in every CS 101
class!
And the right solution is, quite obviously:
[alex@lancelot test]$ timeit.py -c 'sum(xrange(999))'
10000 loops, best of 3: 129 usec per loop
Why? Because it saves having to type "operator.add,"? Because it's a
bit faster? Who cares about either one of these things. Certainly
not I.
giving speed, simplicity, and code that is clean, high-level, concise, and
immediately obvious to any reader or maintainer.
reduce() is immediately obvious to anyone who has ever taken CS
101. sum() is going in the direction of Perl, where there's a separate
idiomatic feature for everything that anyone might ever want to do.

I like Python because it is a very simple, generic, easy-to-learn
programming language, where I didn't have to learn much of anything
new. It was just like the myriad of other programming
languages I have programmed in, only less cluttered than most, and
"light-weight" in its implementation and accessibility.

I have nothing against learning new stuff, by the way, but when I
want to learn new stuff, I'm not particularly interested in the
idiosyncrasies of Python -- I'd spend my effort learning something a
bit more exotic, like OCaml, or Haskell, that might stretch my brain a
bit. Learning a bunch of idiosyncratic detail is just tedious. This
is one of the most important reasons that Perl sucks so badly.
The paladins of reduce don't help their case by posting reduce "use
cases" that are complicated and rarely optimal:
def longer(x, y):
    if len(y) > len(x): return y
    else: return x

def longest(seq):
    return reduce(longer, seq)

print longest(("abc", "yumyum!", "hello", "goodbye", "?"))


There's nothing complicated about my example. It's the epitome of
clarity and elegance.
In 2.4, the much simpler expression:
list.sorted(seq, key=len)[-1]
is about 3 times faster on a typical long seq (seq=map(str, xrange(2000))).
There's nothing simpler about that (in fact, conceptually, it's much
more complex) and it's only faster because of Python implementation
details. In most languages, or an alternate implementation of Python,
sorting a list would typically be significantly slower than a linear
search on it.
We still don't have (in the current pre-alpha 2.4) the "one obvious way",
max(seq, key=len), because max & other functions haven't yet been upgraded
to take the optional parameter key= as lists' sort and sorted methods, but
I trust they will be.
More annoying bloat. Instead of pumping up max() with bloat, tell
people to use reduce(). It's already there and simple and elegant.
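For comparison, here are both idioms side by side; a minimal sketch (max() did grow a key= argument, in Python 2.5, and reduce() now lives in functools):

```python
from functools import reduce

words = ("abc", "yumyum!", "hello", "goodbye", "?")

# The reduce() spelling defended above:
longest_r = reduce(lambda x, y: y if len(y) > len(x) else x, words)

# The key= spelling anticipated for 2.4 (it arrived in 2.5):
longest_k = max(words, key=len)

assert longest_r == longest_k == "yumyum!"
```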
And if one doesn't care for the concision of these new idioms, the most
elementary programming is still best: [alex@lancelot test]$ timeit.py -c -s'xs=map(str,xrange(2000))' -s'''
def longest(seq):
seq=iter(seq)
best = seq.next()
blen = len(best)
for item in seq:
ilen = len(item)
if ilen>blen:
best = item
blen = ilen
return best
''' 'longest(xs)' 1000 loops, best of 3: 980 usec per loop
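The same elementary loop, ported to Python 3 syntax (the iterator's .next() method became the built-in next()); a minimal sketch:

```python
def longest(seq):
    it = iter(seq)
    best = next(it)
    blen = len(best)
    for item in it:
        ilen = len(item)
        if ilen > blen:
            best, blen = item, ilen
    return best

# On map(str, range(2000)), the first four-digit string wins:
assert longest(map(str, range(2000))) == "1000"
```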


That's supposed to be best? It would take me 100 times as long to
figure out that code.
Once again, reduce proves to be an "attractive nuisance":
With the emphasis on "attractive". It results in easy-to-read,
easy-to-remember coding techniques.
Higher order programming, such as the "max(seq, key=len)" which I
hope will make it into 2.4, is of course way simpler, faster, and
more concise -- but the "neither fish nor fowl" level at which most
uses of reduce fall just doesn't cut it, IMHO.
Except that I'd never use it, because I'd never even know it was
there because I prefer to spend my time writing software than wading
through manuals.
And btw, if one wants concision rather than speed, even in 2.3, the
typical DSU-like idiom:
def longest(seq):
aux = [ (len(x), -i, x) for i, x in enumerate(seq) ]
return max(aux)[-1]
still handily beats the reduce-coded alternative (even though
having the len= parameter instead of explicit decoration will
of course make it even smoother, simpler, and faster, in 2.4).
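The -i in the decoration is what makes this match the reduce() version's behavior on ties: negating the index makes max() prefer the earliest of equally long items. A minimal sketch (the name longest_dsu is ours, chosen to avoid clashing with the definitions above):

```python
def longest_dsu(seq):
    aux = [(len(x), -i, x) for i, x in enumerate(seq)]
    return max(aux)[-1]

# "aaa" (index 1) and "bbb" (index 3) tie on length; -1 > -3, so the
# earlier element wins, just as reduce(longer, ...) would keep it.
assert longest_dsu(["x", "aaa", "yy", "bbb"]) == "aaa"
```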


The above is particularly ugly.

|>oug
Jul 18 '05 #45

In article <vr************@corp.supernews.com>, Francis Avila
<fr***********@yahoo.com> writes
"Alex Martelli" <al***@aleax.it> wrote in message
news:Bs***********************@news2.tin.it...
Francis Avila wrote:
[a reduce() clone]

[a long and thorough critique]


*Sigh* My only objective was to combine functional/recursive/iterative
programming styles into an olive branch to extend to both parties of this
silly war.

But oh well. Like I said, I never found a use for reduce, but alas, I am a
ham-fisted programmer entirely lacking in subtlety and art....
--
Francis Avila

This whole thread is reminiscent of vi vs emacs or an os war or similar.
It's a pity that people with a preferred style should be so dogmatic
that they want to remove language features to prevent others using them.

The whole 'only one way to do it' concept is almost certainly wrong.
There should be maximal freedom to express algorithms. As others have
stated min, max,... sum et al are useful specialisations, but because
they cover 99% of the space doesn't mean that reduce is redundant.
-Eliminate reducespeak and control the future-ly yrs-
Robin Becker
Jul 18 '05 #46

Robin Becker wrote:
...
This whole thread is reminiscent of vi vs emacs or an os war or similar.
It's a pity that people with a preferred style should be so dogmatic
that they want to remove language features to prevent others using them.
Python's essence is simplicity and uniformity. Having extra features
in the language and built-ins runs directly counter to that.

The whole 'only one way to do it' concept is almost certainly wrong.
Bingo! You disagree with the keystone of Python's philosophy. Every
other disagreement, quite consequently, follows from this one.

The world is *chock full* of languages designed with the philosophy
of "the more, the merrier". Why can't you let us have just _*ONE*_
higher-level language designed with full appreciation for "the spirit
of C" (as per the C standard's Rationale) in its key principles:

"""
(c) Keep the language small and simple.
(d) Provide only one way to do an operation.
"""

Maybe "the spirit of C" is as wrong as you claim it is. Personally,
I _cherish_ it, and in Python I found a language that cherishes it
just as much, while working at a much higher semantic level and thus
affording me vastly higher productivity. I will not stand idly by,
and let you or anybody else attack its foundations and thus try to
destroy its key strengths, without a fight.

There should be maximal freedom to express algorithms. As others have


Want "maximal freedom to express algorithms"? You can choose among
a BAZILLION languages designed according to your philosophy -- even
sticking just to higher-level languages with dynamic typing and
generally infix/algoloid syntax, Perl is the obvious example, and if
you just can't stand its core syntax, in full respect of this
"maximal freedom" principle, you can of course change it, too -- see
http://www.csse.monash.edu.au/~damia...Perligata.html and
revel in that _truly_ maximal freedom. Or, for a language that's
somewhat in-between, but still philosophically accepts the "maximal
freedom" tenet for which you crave, try Ruby -- if I _did_ want such
freedom, then Ruby is what I'd most likely use.

But can't you let us have *ONE* language that's designed according
to the concept you consider "almost certainly wrong"?! Why do YOU
use this language, even while you condemn its foundation and try to
undermine them, rather than using any one of the myriad that already
DO follow your philosophy?!

Trying to make Python into (e.g.) a Ruby with slightly different
cosmetics and "fine points" is truly absurd: if you want Ruby, you
know where to find it. Let Python stay Python, and leave those of
us who find ourselves best served by its design principles in peace!
Alex

Jul 18 '05 #47

In article <Os******************@news1.tin.it>, Alex Martelli
<al***@aleax.it> writes
Robin Becker wrote:
...
This whole thread is reminiscent of vi vs emacs or an os war or similar.
It's a pity that people with a preferred style should be so dogmatic
that they want to remove language features to prevent others using them.
Python's essence is simplicity and uniformity. Having extra features
in the language and built-ins runs directly counter to that.


no disagreement; reduce is in line with that philosophy. sum is a
shortcut and, as others have said, is less general.
The whole 'only one way to do it' concept is almost certainly wrong.
Bingo! You disagree with the keystone of Python's philosophy. Every
other disagreement, quite consequently, follows from this one.


not so, I agree that there ought to be at least one way to do it.
Want "maximal freedom to express algorithms"? You can choose among
.... you may be right, but I object to attempts to restrict my existing
freedoms at the expense of stability of Python as a whole.
But can't you let us have *ONE* language that's designed according


I am not attempting to restrict anyone or change anyone's programming
style. I just prefer to have a stable language.
--
Robin Becker
Jul 18 '05 #48

On Tue, 11 Nov 2003 11:39:20 +0000, Robin Becker
<ro***@jessikat.fsnet.co.uk> wrote:
... you may be right, but I object to attempts to restrict my existing
freedoms at the expense of stability of Python as a whole.


To paraphrase a famous saying (at least in the context of US law), your
freedom ends where another person's maintenance headache begins.

Gary
Jul 18 '05 #49

Alex Martelli <al***@aleax.it> writes:
Robin Becker wrote:
The whole 'only one way to do it' concept is almost certainly wrong.

Bingo! You disagree with the keystone of Python's philosophy. Every
other disagreement, quite consequently, follows from this one.


The "only one way to do it" mantra is asinine. It's like saying that
because laissez faire capitalism (Perl) is obviously wrong that
communism (FP) is obviously right. The truth lies somewhere in the
middle.

The mantra should be "small, clean, simple, powerful, general,
elegant". This, however, does not imply "only one way to do it",
because power and generality often provide for multiple "right" ways
to flourish. In fact, trying to enforce that there be only "one way
to do it", will make your language bigger, messier, more complicated,
less powerful, less general, and uglier, as misguided souls rush to
remove powerful and general tools like reduce() from the language, and
fill it up with special-purpose tools like sum() and max().

People have written entire articles on how to do functional
programming in Python:

http://www-106.ibm.com/developerwork...ry/l-prog.html

You would castrate Python so that this is not possible? Then you
would diminish Python, by making it a less general, less elegant
language, that has become unsuitable as a language for teaching CS101,
and only suitable for teaching How Alex Martelli Says You Should
Program 101.

|>oug
Jul 18 '05 #50

226 Replies
