
reduce() anomaly?

This seems like it ought to work, according to the
description of reduce(), but it doesn't. Is this
a bug, or am I missing something?

Python 2.3.2 (#1, Oct 20 2003, 01:04:35)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright" , "credits" or "license" for more information.
d1 = {'a':1}
d2 = {'b':2}
d3 = {'c':3}
l = [d1, d2, d3]
d4 = reduce(lambda x, y: x.update(y), l) Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update' d4 = reduce(lambda x, y: x.update(y), l, {})

Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'
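
This is not a bug in reduce(): dict.update() returns None, so from the
second step onward the lambda receives None as its first argument. A minimal
fix (a sketch; the "or x" merely hands the dict back as the running value)
is to make the lambda return the accumulator:

>>> d4 = reduce(lambda x, y: x.update(y) or x, l, {})
>>> d4 == {'a': 1, 'b': 2, 'c': 3}
True

An explicit loop calling update() on one dict would be clearer still, in the
spirit of much of the discussion that follows.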

- Steve.
Jul 18 '05
Georgy Pruss wrote:
seq = [1, 3, 4, 7]
map(int.__sub__, seq[1:], seq[:-1])  # nearly twice as fast as zip for long arrays


If this is a race, then let's measure things properly and consider
a few more alternatives, shall we...?

[alex@lancelot xine-lib-1-rc2]$ python a.py
reduce: 100 loops, best of 3: 13.8 msec per loop
zip: 100 loops, best of 3: 18 msec per loop
izip: 100 loops, best of 3: 7.6 msec per loop
w2: 100 loops, best of 3: 7.1 msec per loop
wib: 100 loops, best of 3: 12.7 msec per loop
loop: 100 loops, best of 3: 8.9 msec per loop
map: 100 loops, best of 3: 7.6 msec per loop

itertools.w2 is an experimental addition to itertools which I
doubt I'll be allowed to put in (gaining less than 10% wrt the
more general izip is hardly worth a new itertool, sigh). But,
apart from that, map and izip are head to head, and the plain
good old Python-coded loop is next best...! reduce is slowest.

My code...:
if __name__ != '__main__':
    # the definitions live behind this guard: timeit's -simport below
    # re-imports this file as module 'a', where __name__ != '__main__'

    def difs_reduce(seq):
        differences = []
        def neighborDifference(left, right, accum=differences.append):
            accum(right - left)
            return right
        reduce(neighborDifference, seq)
        return differences

    def difs_zip(seq):
        return [b - a for a, b in zip(seq, seq[1:])]

    import itertools

    def difs_izip(seq):
        return [b - a for a, b in itertools.izip(seq, seq[1:])]

    def difs_w2(seq, wib=itertools.w2):
        # itertools.w2 is the experimental itertool mentioned above
        return [b - a for a, b in wib(seq)]

    def window_by_two(iterable):
        it = iter(iterable)
        last = it.next()
        for elem in it:
            yield last, elem
            last = elem

    def difs_wib(seq, wib=window_by_two):
        return [b - a for a, b in wib(seq)]

    def difs_loop(seq):
        differences = []
        it = iter(seq)
        a = it.next()
        for b in it:
            differences.append(b - a)
            a = b
        return differences

    def difs_map(seq):
        return map(int.__sub__, seq[1:], seq[:-1])

if __name__ == '__main__':
    import timeit
    bargs = ['-c', '-simport a', '-sx=range(9999)']
    funs = 'reduce zip izip w2 wib loop map'.split()
    for fun in funs:
        args = bargs + ['a.difs_%s(x)' % fun]
        print '%8s:' % fun,
        timeit.main(args)
Alex

Jul 18 '05 #21
Alex Martelli wrote:

However, so many of reduce's practical use cases are eaten up by sum,
that reduce is left without real use cases to justify its existence.

Any reduction that doesn't involve summing won't be handled by sum.
Flattening a list of (non-recursive) lists is a good example. reduce is
already in the language; removing an existing, builtin function seems
totally inappropriate given that it's there for a reason and there will
be no replacement.

Alex Martelli also wrote:

But comparing plain Python code to a built-in that's almost bereft of
good use cases, and finding the plain Python code _faster_ on such a
regular basis, is IMHO perfectly legitimate.


reduce will be at least as fast as writing an explicit loop.
Potentially more if the function object used is itself a builtin
function.

--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ A man cannot be comfortable without his own approval.
\__/ Mark Twain
Jul 18 '05 #22
Erik Max Francis wrote:

Alex Martelli wrote:

However, so many of reduce's practical use cases are eaten up by sum,
that reduce is left without real use cases to justify its existence.

Any reduction that doesn't involve summing won't be handled by sum.
Flattening a list of (non-recursive) lists is a good example. reduce is

If this is a good example, what's a bad one?

>>> sum([ ['a list'], ['of', 'non'], ['recursive', 'lists'] ], [])
['a list', 'of', 'non', 'recursive', 'lists']

sum, exactly like reduce(operator.add, ...), has bad performance for this
kind of one-level flattening, by the way. A plain loop on .extend is much
better than either (basically, whenever a loop of x += y has much better
performance than one of x = x + y -- since the latter is what both sum
and reduce(operator.add, ...) are condemned to do -- you should consider
choosing a simple loop if performance has any importance at all).
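
A minimal sketch of the extend-based flatten described here (the function
name is illustrative, not from the thread):

def flatten_once(listoflists):
    # one level of flattening in O(total items); no quadratic re-copying
    result = []
    extend = result.extend    # hoist the bound-method lookup, as suggested
    for sub in listoflists:
        extend(sub)
    return result
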
Erik Max Francis also wrote:

already in the language; removing an existing, builtin function seems
totally inappropriate given that it's there for a reason and there will
be no replacement.
Existing, builtin functions _will_ be removed in 3.0: Guido is on record
as stating that (at both EuroPython and OSCON -- I don't recall if he
had already matured that determination at PythonUK time). They
exist for a reason, but when that reason is: "once upon a time, we
thought (perhaps correctly, given the way the rest of the language and
library was at the time) that they were worth having", that's not
sufficient reason to weigh down the language forever with their
not-useful-enough weight. The alternatives to removing those parts that
aren't useful enough any more are, either to stop Python's development
forever, or to make Python _bloated_ with several ways to perform the
same tasks. I much prefer to "lose" the "legacy only real use" built-ins
(presumably to some kind of legacy.py module whence they can easily
be installed to keep some old and venerable piece of code working) than
to choose either of those tragic alternatives.

3.0 is years away, but functions that are clearly slated for obsolescence
then should, IMHO, already be avoided in new code;
particularly because the obsolescence is due to the existence of better
alternatives. You claim "there will be no replacement", but in fact I have
posted almost a dozen possible replacements for 'reduce' for a case that
was being suggested as a good one for it and _every_ one of them is
better than reduce; I have posted superior replacements for ALL uses of
reduce in the standard library (except those that are testing reduce itself,
but if that's the only good use of reduce -- testing itself -- well, you see
my point...:-). I don't intend to spend much more time pointing out how
reduce can be replaced by better code in real use cases, by the way:-).

But comparing plain Python code to a built-in that's almost bereft of
good use cases, and finding the plain Python code _faster_ on such a
regular basis, is IMHO perfectly legitimate.


reduce will be at least as fast as writing an explicit loop.


You are wrong: see the almost-a-dozen cases I posted to try and demolish
one suggested "good use" of reduce.
Potentially more if the function object used is itself a builtin
function.


In one of the uses of reduce in the standard library, for which I posted
superior replacements today, the function object was int.__mul__ -- builtin
enough for you? Yet I showed how using recursion and memoization instead of
reduce would speed up that case by many, many times.

Another classic example of reduce being hopelessly slow, despite using
a built-in function, is exactly the "flatten a 1-level list" case mentioned
above. Try:
x = reduce(operator.add, listoflists, x)
vs:
for L in listoflists: x.extend(L)
for a sufficiently big listoflists, and you'll see... (the latter, if need
be, can get another nice little multiplicative speedup by hoisting the
x.extend lookup, but the key issue is O(N**2) reduce vs O(N) loop...).

[alex@lancelot src]$ timeit.py -c -s'xs=[[x] for x in range(999)]' -s'import operator' 'x=[]' 'x=reduce(operator.add, xs, x)'
100 loops, best of 3: 8.7e+03 usec per loop
[alex@lancelot src]$ timeit.py -c -s'xs=[[x] for x in range(999)]' -s'import operator' 'x=[]' 'for L in xs: x.extend(L)'
1000 loops, best of 3: 860 usec per loop
[alex@lancelot src]$ timeit.py -c -s'xs=[[x] for x in range(999)]' -s'import operator' 'x=[]; xex=x.extend' 'for L in xs: xex(L)'
1000 loops, best of 3: 560 usec per loop

See what I mean? Already for a mere 999 1-item lists, the plain Python
code is 10 times faster than reduce, and if that's not quite enough and
you want it 15 times faster instead, that's trivial to get, too.
Alex

Jul 18 '05 #23
JCM wrote:
Alex Martelli <al***@aleax.it> wrote:
...various good points...
if one is in a hurry, recursion and
memoization are obviously preferable:

def facto(n, _memo={1:1}):
    try:
        return _memo[n]
    except KeyError:
        # n * (n-1)!, caching every intermediate result along the way
        result = _memo[n] = n * facto(n-1)
        return result

[alex@lancelot bo]$ timeit.py -c -s'import facs' 'facs.facto(13)'
1000000 loops, best of 3: 1.26 usec per loop


I'm going off topic, but it's really not fair to compare a memoized
function to non-memoized implementations using a "best of 3" timing
test.


The best-of-3 is irrelevant, it's the million loops that help:-).

Of course you can memoize any pure function of hashable args. But
memoizing a recursive implementation of factorial has a nice property,
shared by other int functions implemented recursively in terms of their
values on other ints, such as fibonacci numbers: the memoization you do for
any value _helps_ the speed of computation for other values. This nice
property doesn't apply to non-recursive implementations.

Once you run timeit.py, with its million loops (and the best-of-3, but
that's not crucial:-), this effect disappears. But on smaller tests it is
more easily seen. You can for example define the memoized functions
by a def nested inside another, so each call of the outer function will undo
the memoization, and exercise them that way even with timeit.py. E.g.:

import operator

def wireduce(N=23):
    def factorial(x, _memo={0:1, 1:1}):
        try: return _memo[x]
        except KeyError:
            result = _memo[x] = reduce(operator.mul, xrange(2, x+1), 1)
            return result
    for x in range(N, 0, -1): factorial(x)

def wirecurse(N=23):
    def factorial(x, _memo={0:1, 1:1}):
        try: return _memo[x]
        except KeyError:
            result = _memo[x] = x * factorial(x-1)
            return result
    for x in range(N, 0, -1): factorial(x)

[alex@lancelot bo]$ timeit.py -c -s'import aa' 'aa.wireduce()'
1000 loops, best of 3: 710 usec per loop
[alex@lancelot bo]$ timeit.py -c -s'import aa' 'aa.wirecurse()'
1000 loops, best of 3: 280 usec per loop
Alex

Jul 18 '05 #24

"Alex Martelli" <al***@aleax.it > wrote in message news:Sw******** *************@n ews1.tin.it...
Georgy Pruss wrote:
seq=[1,3,4,7]
map( int.__sub__, seq[1:], seq[:-1] ) # nearly twice faster than
....zip.... for long arrays
If this is a race, then let's measure things properly and consider
a few more alternatives, shall we...?


:) No, it's not a race. I just found the map expression to be clear and
elegant. Fortunately, it's one of the fastest solutions too.

G-:

Jul 18 '05 #25

"Alex Martelli" <al*****@yahoo. com> wrote in message
news:aK******** *************@n ews1.tin.it...
above. Try:
x = reduce(operator .add, listoflists, x)
vs:
for L in listoflists: x.extend(L)
for a sufficiently big listoflists, and you'll see... (the latter if need be can get another nice little multiplicative speedup by hoisting the x.extend lookup, but the key issue is O(N**2) reduce vs O(N) loop...).
Right: however that issue and the possibility of hoisting x.extend
have *nothing* to do with reduce vs. for. For a fair comparison of
the latter pair, try the following, which is algorithmically equivalent
to your sped-up for loop.

>>> xs = [[i] for i in range(10)]
>>> x = []
>>> xtend = x.extend
>>> reduce(lambda dum, L: xtend(L), xs, x)
>>> x
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[timeits snipped]... See what I mean?
That you hate reduce?

I am sure that the O(N) reduce would be similarly faster than an
O(N**2) operator.add for loop: what would *that* mean?
Already for a mere 999 1-item lists, the plain Python
code is 10 times faster than reduce,


No, you have only shown that for N=999, O(N) can be O(N**2)/10, and
that smart programmers who understand that can write better (faster)
code than those who do not.

Terry J. Reedy

PS: for practical rather than didactic programming, I am pretty sure I
would have written a for loop 'reduced' with xtend.
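
Terry's reduce-with-xtend variant and the plain extend loop are both O(N);
a sketch of timing them on equal terms (a hypothetical harness -- numbers
are omitted since they vary by machine and Python version):

import timeit

setup = "xs = [[i] for i in range(999)]"
variants = [
    ('reduce+xtend', "x=[]; xtend=x.extend; reduce(lambda dum, L: xtend(L), xs, x)"),
    ('for+xtend',    "x=[]; xtend=x.extend\nfor L in xs: xtend(L)"),
]
for name, stmt in variants:
    timer = timeit.Timer(stmt, setup)
    best = min(timer.repeat(3, 1000)) / 1000.0   # best-of-3, seconds per pass
    print '%12s: %.1f usec per loop' % (name, best * 1e6)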

Jul 18 '05 #26
In article <dx*************************@news2.tin.it>, Alex Martelli wrote:
Nope -- apply beats it, given that in the last few years apply(f, args) is
best spelled f(*args) ...!-)


Yeah, and speaking of which, where was the debate on explicit vs. implicit
and avoiding Perl-style line-noise syntax-sugaring when that feature was
snuck in? ;)

--
..:[ dave benjamin (ramenboy) -:- www.ramenfest.com -:- www.3dex.com ]:.
: d r i n k i n g l i f e o u t o f t h e c o n t a i n e r :
Jul 18 '05 #27
In article <sl******************@lackingtalent.com>,
Dave Benjamin <ra***@lackingtalent.com> wrote:

In article <dx*************************@news2.tin.it>, Alex Martelli wrote:

Nope -- apply beats it, given that in the last few years apply(f, args) is
best spelled f(*args) ...!-)


Yeah, and speaking of which, where was the debate on explicit vs. implicit
and avoiding Perl-style line-noise syntax-sugaring when that feature was
snuck in? ;)


There wouldn't be any; it's a straightforward integration with the
long-standing ability to use *args and **kwargs when defining a
function.
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

"It is easier to optimize correct code than to correct optimized code."
--Bill Harlan
Jul 18 '05 #28
Alex Martelli <al*****@yahoo.com> writes:
Existing, builtin functions _will_ be removed in 3.0: Guido is on record
as stating that (at both EuroPython and OSCON -- I don't recall if he
had already matured that determination at PythonUK time). They
exist for a reason, but when that reason is: "once upon a time, we
thought (perhaps correctly, given the way the rest of the language and
library was at the time) that they were worth having", that's not
sufficient reason to weigh down the language forever with their
not-useful-enough weight. The alternatives to removing those parts that
aren't useful enough any more are, either to stop Python's development
forever, or to make Python _bloated_ with several ways to perform the
same tasks.


I agree: Down with bloat! Get rid of sum() -- it's redundant with
reduce(), which I use all the time, like so:

def longer(x, y):
    if len(y) > len(x): return y
    else: return x

def longest(seq):
    return reduce(longer, seq)

print longest(("abc", "yumyum!", "hello", "goodbye", "?"))

=> yumyum!
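
For what it's worth, later Pythons (2.5 and up) let max() take a key=
argument, which expresses this same reduction directly:

print max(("abc", "yumyum!", "hello", "goodbye", "?"), key=len)   # => yumyum!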

|>oug
Jul 18 '05 #29
Alex Martelli <al*****@yahoo.com> wrote:
Of course you can memoize any pure function of hashable args. But
memoizing a recursive implementation of factorial has a nice property,
shared by other int functions implemented recursively in terms of their
values on other ints, such as fibonacci numbers: the memoization you do for
any value _helps_ the speed of computation for other values. This nice
property doesn't apply to non-recursive implementations.


Maybe we need a better example because it is possible to use reduce on
a function which has side effects.

Anton.

class _Memo:
    biggest = 1
    facdict = {0: 1, 1: 1}

def fac(n):
    def mymul(x, y):
        res = x * y
        _Memo.facdict[y] = res   # side effect: memoize each partial product
        _Memo.biggest = y
        return res
    def factorial(x):
        b = _Memo.biggest
        if x > b:
            # seed the reduction with the memoized factorial(b), not b itself
            return reduce(mymul, xrange(b + 1, x + 1), _Memo.facdict[b])
        return _Memo.facdict[x]
    return factorial(n)

def test():
    print fac(5)

if __name__ == '__main__':
    test()
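
Exercising the memo across calls (a hypothetical check, not part of the
original post) shows the cross-call payoff -- and why the reduce seed must
be the memoized factorial value rather than the bare index:

print fac(3)   # 6
print fac(5)   # 120 -- the reduction resumes from the cached factorial(3)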

Jul 18 '05 #30
