This seems like it ought to work, according to the
description of reduce(), but it doesn't. Is this
a bug, or am I missing something?
Python 2.3.2 (#1, Oct 20 2003, 01:04:35)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> d1 = {'a':1}
>>> d2 = {'b':2}
>>> d3 = {'c':3}
>>> l = [d1, d2, d3]
>>> d4 = reduce(lambda x, y: x.update(y), l)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'
>>> d4 = reduce(lambda x, y: x.update(y), l, {})
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'
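This is not a bug: dict.update() mutates the dictionary in place and returns None, so after the first step the lambda hands None (not the dict) to the next call. A minimal sketch of a working version, returning the accumulator explicitly (the `(…, acc)[1]` trick is just one illustrative way to do that):

```python
# in Python 2 reduce() is a builtin; the import keeps this 3.x-compatible
from functools import reduce

dicts = [{'a': 1}, {'b': 2}, {'c': 3}]

# dict.update() returns None, so a bare x.update(y) feeds None to the
# next step; return the (mutated) accumulator itself instead:
merged = reduce(lambda acc, d: (acc.update(d), acc)[1], dicts, {})
# merged == {'a': 1, 'b': 2, 'c': 3}
```

Passing the empty dict as the third argument also avoids mutating d1 in place.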
- Steve.
Jul 18 '05
[Robin Becker] The whole 'only one way to do it' concept is almost certainly wrong.
[Alex Martelli] Bingo! You disagree with the keystone of Python's philosophy. Every other disagreement, quite consequently, follows from this one.
[Douglas Alan] The "only one way to do it" mantra is asinine.
I hate to interrupt anybody's free flowing argument, but isn't it the
case that Guido never said "There should be only one way to do it"?
My understanding of the "Pythonic Philosophy" is that "there should be
only one *obvious* way to do it", which is quite a different thing
entirely.
This philosophy is aimed at making python easy for newbies: they
shouldn't get confused by a million and one different possible
approaches. There *should* (not "must"!) be a simple and obvious way
to solve the problem.
Once one is familiar with the language, and all of the subtle power it
encompasses, anything goes in relation to implementing an algorithm.
Just my €0,02.
--
alan kennedy
-----------------------------------------------------
check http headers here: http://xhaus.com/headers
email alan: http://xhaus.com/mailto/alan
> > The whole 'only one way to do it' concept is almost certainly wrong.
> Bingo! You disagree with the keystone of Python's philosophy. Every other disagreement, quite consequently, follows from this one.
The "only one way to do it" mantra is asinine. It's like saying that because laissez faire capitalism (Perl) is obviously wrong that communism (FP) is obviously right. The truth lies somewhere in the middle.
Part of the problem here is that just saying "only one way to do it" is a
horrible misquote, and one that unfortunately misses IMO some of the most
important parts of that "mantra":
c:\>python
Python 2.3.2 (#49, Oct 2 2003, 20:02:00) [MSC v.1200 32 bit (Intel)] on
Type "help", "copyright", "credits" or "license" for more information.
>>> import this
The Zen of Python, by Tim Peters
[snip]
There should be one-- and preferably only one --obvious way to do it.
-Dave
Robin Becker wrote:
... Python's essence is simplicity and uniformity. Having extra features in the language and built-ins runs directly counter to that.
no disagreement; reduce is in line with that philosophy. sum is a shortcut and, as others have said, is less general.
'sum' is _way simpler_: _everybody_ understands what it means to sum a
bunch of numbers, _without_ necessarily having studied computer science.
The claim, made by somebody else, that _every_ CS 101 course teaches the
functionality of 'reduce' is not just false, but utterly absurd: 'reduce',
'foldl', and similar higher-order functions, were not taught to me back when
_I_ took my first university exam in CS [it used Fortran as the main
language], they were not taught to my son in _his_ equivalent course [it
used Pascal], and are not going to be taught to my daughter in _her_
equivalent course [it uses C]. Google for "CS 101" and convince yourself
of how utterly absurd that claim is, if needed -- how small is the
proportion of "CS 101" courses that teach these subjects.
Python's purpose is not, and has never been, to maximize the generality
of the constructs it offers. For example, Ruby's hashes (and, I believe,
Perl's) are more general than Python's dicts, because in those hashes
you can use arbitrary mutable keys -- e.g., arrays (Ruby's equivalent of
Python's lists), strings (which in Ruby are mutable -- more general than
Python's strings, innit?), etc. Python carefully balances generality,
simplicity, and performance considerations. Every design is a series of
compromise decisions, and Python's design is, in my opinion, the best
one around (for my purposes) because those compromises are struck with
an _excellent_ batting average (not perfectly, but better than any other
language I've ever studied, or designed myself). The underlying idea
that there should preferably be ONE obvious way to express a solution
is part of what has kept Python so great as it evolved during the years.
> The whole 'only one way to do it' concept is almost certainly wrong.
Bingo! You disagree with the keystone of Python's philosophy. Every other disagreement, quite consequently, follows from this one.
not so, I agree that there ought to be at least one way to do it.
But not with the parts that I quoted from the "spirit of C", and I
repeat them because they were SO crucial in the success of C as a
lower-level language AND are similarly crucial in the excellence of
Python as a higher-level one -- design principles that are *VERY*
rare among computer languages and systems, by the way:
Keep the language small and simple.
Provide only one way to do an operation.
"Only one way" is of course an _ideal_ goal (so I believe the way
it's phrased in Python, "preferably only one obvious way") -- but
it's a guiding light in the fog of languages constructed instead
according to YOUR completely opposite goal, and I quote you:
There should be maximal freedom to express algorithms.
Choose just about every OTHER language on Earth, and you'll find
it TRIES (with better or worse results depending on how well or
badly it was designed, of course) to meet your expressed goal.
But NOT Python: you're using one of the _extremely few_ languages
that expressly do NOT try to provide such "maximal freedom", that
try instead to stay small and simple and provide (preferably)
only one (obvious) way to do an operation. Your choice of language
is extremely peculiar in that it _contradicts_ your stated goal! Want "maximal freedom to express algorithms"? You can choose among
... you may be right, but I object to attempts to restrict my existing freedoms at the expense of stability of Python as a whole.
Nobody restricts your existing freedom of using Python 2.3.2 (or
whatever other release you prefer) and all of its constructs and
built-ins; nobody ever proposed retroactively changing the license
to do that (and I doubt it could be done even if anyone wished!).
But we're talking about Python 3.0, "the point at which backwards
compatibility will be broken" -- the next _major_ release. To quote
Guido, in 3.0 "We're throwing away a lot of the cruft that Python has
accumulated." After a dozen years of backwards compatible growth,
Python has a surprisingly small amount of such cruft, but it definitely
does have some. Exactly _what_ qualifies as 'cruft' is not yet decided,
and it won't be for quite a while (Guido thinks he won't do 3.0 until
he can take PSF-financed time off to make sure he does it right). But
there is no doubt that "reduce feature duplication" and "change rules
ever so slightly to benefit optimization" _are_ going to be the
themes of 3.0.
Python can't keep growing with great new ideas, _AND_ still be a
small and simple language, without shedding old ideas that do not
pull their weight any more, if they ever did. Check out, e.g.,
the "python regrets" talk of well over a year ago, http://www.python.org/doc/essays/ppt...honRegrets.ppt
to see that lambda, map, filter, and reduce are all among those
regrets -- things that Guido believes he never should have allowed
in the language in the first place. E.g., and I quote from him:
"""
reduce()
nobody uses it, few understand it
a for loop is clearer & (usually) faster
"""
and that was way BEFORE sum took away the vast majority of reduce's
use cases -- so, guess how he may feel about it now...?
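The "a for loop is clearer" claim from the regrets talk is easy to check side by side; a small sketch comparing the three spellings (variable names are illustrative):

```python
# reduce() is a builtin in Python 2; the import keeps this 3.x-compatible
from functools import reduce

nums = [1, 2, 3, 4]

# the reduce() spelling:
total = reduce(lambda x, y: x + y, nums)

# the plain for loop Guido considers clearer (and usually faster):
total2 = 0
for n in nums:
    total2 += n

# and the 2.3 builtin that took over most of reduce()'s use cases:
total3 = sum(nums)
```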
One of Python's realities is that _Guido decides_. Not without lots
of pressure being put on him each and every time he does decide, of
course -- that's part of why he doesn't read c.l.py any more, because
the pressure from this venue had gotten way excessive. Of course,
he's going to be pressured on each and every one of the items he
mentions in detail in the "regrets" talk and more summarily in the
Python 3.0 "State of the Python Union" talk. But I'm surely not the
only one convinced that here, like in (by far) most difficult design
decisions in Python's past, he's on the right track. Python does
need to keep growing (languages that stop growing die), but it must
not become big, so it must at long last lose SOME of the accumulated
cruft, the "feature duplication".
I'll deeply regret, come Python 3.0, not being able to code
"if blah(): fleep()"
on one single line any more, personally. And I may try to put on
pressure for a last-minute reprieve for my own pet "duplicated
feature", of course. But in the end, if I use Python it's because
I believe Guido is a better language designer than I am (and that
most other language designers are), so I will accept and respect
his decisions (and maybe keep whining about it forevermore, as I
do for the "print>>bah,gorp" one:-).
> But can't you let us have *ONE* language that's designed according
I am not attempting to restrict anyone or change anyone's programming style. I just prefer to have a stable language.
I think Python's stability is superb, but stability cannot mean that
there will never be a 3.0 release, or that the language will have to
carry around forever any mistaken decision that was once taken. I'm
not advocating a "high level of churn" or anything like that: we have
extremely "sedate" and stable processes to gradually deprecate old
features. But such deprecation _will_ happen -- of that, there is
most definitely no doubt.
Alex
"Dave Brueck" <da**@pythonapocrypha.com> writes:
> Part of the problem here is that just saying "only one way to do it" is a horrible misquote, and one that unfortunately misses IMO some of the most important parts of that "mantra":
Well, perhaps anything like "only one way to do it" should be removed
from the mantra altogether, since people keep misquoting it in order
to support their position of removing beautiful features like reduce()
from the language.
|>oug
> > Part of the problem here is that just saying "only one way to do it" is
a horrible misquote, and one that unfortunately misses IMO some of the
most important parts of that "mantra":
Well, perhaps anything like "only one way to do it" should be removed from the mantra altogether, since people keep misquoting it in order to support their position of removing beautiful features like reduce() from the language.
You're joking, right? It is one of the key aspects of Python that makes the
language such a good fit for me. Changing the philosophy because a *few*
people don't "get it" or because they are apt to misquote it seems crazy.
-Dave
P.S. If reduce() were removed, none of my code would break. ;-)
"Dave Brueck" <da**@pythonapocrypha.com> writes:
> Part of the problem here is that just saying "only one way to do it" is a horrible misquote, and one that unfortunately misses IMO some of the most important parts of that "mantra":
Well, perhaps anything like "only one way to do it" should be removed from the mantra altogether, since people keep misquoting it in order to support their position of removing beautiful features like reduce() from the language.
You're joking, right? It is one of the key aspects of Python that makes the language such a good fit for me. Changing the philosophy because a *few* people don't "get it" or because they are apt to misquote it seems crazy.
Of course I am not joking. I see no good coming from the mantra, when
the mantra should be instead what I said it should be: "small, clean,
simple, powerful, general, elegant" -- not anything like, "there
should be only one way" or "one right way" or "one obviously right
way". I have no idea what "one obviously right way" is supposed to
mean (and I don't want to have to become Dutch to understand it)
without the language being overly-restricted to the point of
uselessness like FP is. Even in FP, I doubt that there is always, or
even typically one obviously right way to accomplish a goal. To me,
there is never *one* obviously "right way" to do anything -- the world
(and the programming languages I chose to use) offer a myriad of
possible adventures, and I would never, ever want it to be otherwise.
|>oug
In article <lc************@gaffa.mit.edu>,
Douglas Alan <ne****@mit.edu> wrote:
> "Dave Brueck" <da**@pythonapocrypha.com> writes:
> > Part of the problem here is that just saying "only one way to do it" is a horrible misquote, and one that unfortunately misses IMO some of the most important parts of that "mantra":
> Well, perhaps anything like "only one way to do it" should be removed from the mantra altogether, since people keep misquoting it in order to support their position of removing beautiful features like reduce() from the language.
I think the more relevant parts of the zen are:
Readability counts.
Although practicality beats purity.
The argument is that reduce is usually harder to read than the loops it
replaces, and that practical examples of it other than sum are sparse
enough that it is not worth keeping it just for the sake of
functional-language purity.
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
> > > Part of the problem here is that just saying "only one way to do it" is a horrible misquote, and one that unfortunately misses IMO some of the most important parts of that "mantra":
> > Well, perhaps anything like "only one way to do it" should be removed from the mantra altogether, since people keep misquoting it in order to support their position of removing beautiful features like reduce() from the language.
> You're joking, right? It is one of the key aspects of Python that makes the language such a good fit for me. Changing the philosophy because a *few* people don't "get it" or because they are apt to misquote it seems crazy.
Of course I am not joking. I see no good coming from the mantra, when the mantra should be instead what I said it should be:
Nah, I don't really like your version. Also, the "only one way to do it"
misquote has been singled out when it really should be considered in the
context of the other items in that list - for whatever reason (maybe to
contrast with Perl, I don't know) it's been given a lot of weight in c.l.py
discussion threads.
"small, clean, simple, powerful, general, elegant"
It's really a matter of taste - both "versions" mean about the same to me
(and to me both mean "get rid of reduce()" ;-) ).
To me, there is never *one* obviously "right way" to do anything
Never? I doubt this very much. When you want to add two numbers in a
programming language, what's your first impulse? Most likely it is to write
"a + b". The same is true of a lot of other, even much more complex, things.
And IMO that's where this principle of an obvious way to do things comes
into play, and it's tightly coupled with the principle of least surprise. In
both cases they are of course just guiding principles or ideals to shoot
for, so there will always be exceptions (not to mention the fact that what
is obvious to one person isn't universal, in the same way that "common
sense" is rarely common).
Having said that though, part of the appeal of Python is that it hits the
nail on the head surprisingly often: if you don't know (from prior
experience) how to do something in Python, your first guess is very often
correct. Correspondingly, when you read someone else's Python code that uses
some feature you're not familiar with, odds are in your favor that you'll
correctly guess what that feature actually does.
And that is why I wouldn't be sad if reduce() were to disappear - I don't
use reduce() and _anytime_ I see reduce() in someone's code I have to slow
way down and sort of rehearse in my mind what it's supposed to do and see if
I can successfully interpret its meaning (and, when nobody's looking, I
might even replace it with a for-loop!). Of course that would be different
if I had a history of using functional programming languages, which I don't.
That's the line Guido walks: trying to find just the right combination of
different-but-better and intuitive-for-most-people, and the aforementioned
items from the Zen of Python are a way of expressing that.
-Dave
"David Eppstein" <ep******@ics.uci.edu> wrote in message
news:ep****************************@news.service.uci.edu...
> In article <lc************@gaffa.mit.edu>, Douglas Alan <ne****@mit.edu> wrote:
"Dave Brueck" <da**@pythonapocrypha.com> writes:
Part of the problem here is that just saying "only one way to do it"
is a horrible misquote, and one that unfortunately misses IMO some of the
most important parts of that "mantra": Well, perhaps anything like "only one way to do it" should be removed from the mantra altogether, since people keep misquoting it in order to support their position of removing beautiful features like reduce() from the language.
> I think the more relevant parts of the zen are:
> Readability counts.
> Although practicality beats purity.
> The argument is that reduce is usually harder to read than the loops it replaces, and that practical examples of it other than sum are sparse enough that it is not worth keeping it just for the sake of functional-language purity.
IMO, this arguement is basically religious, that is, it is not based
on common sense. Apply, lambda, map, filter and reduce never constituted
a complete set of functional programming constructs, so trying to
make them so for the sake of the arguement is, basically, silly.
Apply was absorbed into the language core with a small change
in function call specifications. Good idea - it gets rid of a built-in
function.
Map and filter were (almost) obsoleted by list comprehensions and the zip
built-in function. Whether or not list comprehensions are clearer than map
and filter is debatable, but the only thing we lost in the transition was
map's capability of processing lists of different lengths.
Sum is not an adequate replacement for reduce, regardless of the
performance benefits. Something similar to a list comprehension would
be. I don't, at this point, have a good syntax to suggest though.
A not so good example would be:
numbers = [1, 2, 3, 4]
result = [x: x + i for i in numbers]
The ":" signals that the result is a single object, not a list of
objects. The first list element is bound to that label, and then
the expression is evaluated for the rest of the elements of the list(s).
The problem with the syntax is the brackets, which suggest that the
result should be a list.
John Roth
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
I am glad to hear others rise to the "defense" of reduce(). I too am
reluctant to see it leave the language, as I'd have to rewrite some of my
code to accommodate the change.
The use of zip(seq[1:], [:-1]) to me is more obscure, and
memory/cpu-expensive in terms of creating 3 new lists.
Bob Gailer bg*****@alum.rpi.edu
303 442 2625
David Eppstein <ep******@ics.uci.edu> wrote:
> The argument is that reduce is usually harder to read than the loops it replaces, and that practical examples of it other than sum are sparse enough that it is not worth keeping it just for the sake of functional-language purity.
One argument in favor of reduce that I haven't seen anywhere yet is
that it is a kind of bottleneck. If I can squeeze my algorithm through
a reduce that means that I have truly streamlined it and have removed
superfluous cruft. After having done that or other code mangling
tricks -like trying to transform my code into a one liner- I usually
have no mental problems refactoring it back into a wider frame.
So some things have a use case because they require a special way of
thinking, and impose certain restrictions on the flow of my code.
Reduce is an important cognitive tool, at least it has been such for
me during a certain time frame, because nowadays I can often go
through the mental hoop without needing to actually produce the code.
I would be reluctant to deny others the chance to learn how not to use
it.
Anton
Bob Gailer wrote:
... The use of zip(seq[1:], [:-1]) to me is more obscure, and
Very obscure indeed (though it's hard to say if it's _more_ obscure without
a clear indication of what to compare it with). Particularly considering
that it's incorrect Python syntax, and the most likely correction gives
probably incorrect semantics, too, if I understand the task (give windows
of 2 items, overlapping by one, on seq?).
memory/cpu-expensive in terms of creating 3 new lists.
Fortunately, Hettinger's splendid itertools module, currently in Python's
standard library, lets you perform this windowing task without creating any
new list whatsoever.
When seq is any iterable, all you need is izip(seq, islice(seq, 1, None)),
and you'll be creating no new list whatsoever. Still, tradeoffs in
obscurity (and performance for midsized lists) are quite as clear.
Alex
"Dave Brueck" <da**@pythonapocrypha.com> writes:
> > Of course I am not joking. I see no good coming from the mantra, when the mantra should be instead what I said it should be:
> > "small, clean, simple, powerful, general, elegant"
> It's really a matter of taste - both "versions" mean about the same to me (and to me both mean "get rid of reduce()" ;-) ).
No, my mantra plainly states to keep general and powerful features
over specific, tailored features. reduce() is more general and
powerful than sum(), and would thus clearly be preferred by my
mantra.
The mantra "there should be only one obvious way to do it" apparently
implies that one should remove powerful, general features like
reduce() from the language, and clutter it up instead with lots of
specific, tailored features like overloaded sum() and max(). If so,
clearly this mantra is harmful, and will ultimately result in Python
becoming a bloated language filled up with "one obvious way" to solve
every particular idiom. This would be very bad, and make it less like
Python and more like Perl.
I can already see what's going to happen with sum(): Ultimately,
people will realize that they may want to perform more general types
of sums, using alternate addition operations. (For intance, there may
be a number of different ways that you might add together vectors --
e.g, city block geometry vs. normal geometry. Or you may want to add
together numbers using modular arithmetic, without worrying about
overflowing into bignums.) So, a new feature will be added to sum()
to allow an alternate summing function to be passed into sum(). Then
reduce() will have effectively been put back into the language, only
its name will have been changed, and its interface will have been
changed so that everyone who has taken CS-101 and knows off the top of
their head what reduce() is and does, won't easily be able to find it.
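The hypothetical "modular sum" above is exactly the kind of thing reduce() already expresses today; a sketch, with the modulus and the values chosen purely for illustration:

```python
# reduce() is a builtin in Python 2; the import keeps this 3.x-compatible
from functools import reduce

MOD = 2 ** 16  # stay within 16 bits instead of overflowing into bignums

values = [50000, 40000, 30000]

# reduce() with a custom "addition" that wraps at each step:
total = reduce(lambda a, b: (a + b) % MOD, values, 0)
# (50000 + 40000 + 30000) % 65536 == 54464, whereas sum(values) == 120000
```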
Yes, there are other parts of The Zen of Python that point to the
powerful and general, rather than the clutter of specific and
tailored, but nobody seems to quote them these days, and they surely
are ignoring them when they want to bloat up the language with
unneccessary features like overloaded sum() and max() functions,
rather than to rely on trusty, powerful, and elegant reduce(), which
can easily and lightweightedly do everything that overloaded sum() and
max() can do and quite a bit more.
To me, there is never *one* obviously "right way" to do anything
Never? I doubt this very much. When you want to add two numbers in a programming language, what's your first impulse? Most likely it is to write "a + b".
Or b + a. Perhaps we should prevent that, since that makes two
obviously right ways to do it!
Having said that though, part of the appeal of Python is that it hits the nail on the head surprisingly often: if you don't know (from prior experience) how to do something in Python, your first guess is very often correct. Correspondingly, when you read someone else's Python code that uses some feature you're not familiar with, odds are in your favor that you'll correctly guess what that feature actually does.
All of this falls out of "clean", "simple", and "elegant".
And that is why I wouldn't be sad if reduce() were to disappear - I don't use reduce() and _anytime_ I see reduce() in someone's code I have to slow way down and sort of rehearse in my mind what it's supposed to do and see if I can successfully interpret its meaning (and, when nobody's looking, I might even replace it with a for-loop!).
C'mon -- all reduce() is is a generalized sum or product. What's
there to think about? It's as intuitive as can be. And taught in
every CS curriculum. What more does one want out of a function?
|>oug
In article <%F*******************@news2.tin.it>,
Alex Martelli <al*****@yahoo.com> wrote:
> > The use of zip(seq[1:], [:-1]) to me is more obscure, and
Very obscure indeed (though it's hard to say if it's _more_ obscure without a clear indication of what to compare it with). Particularly considering that it's incorrect Python syntax, and the most likely correction gives probably incorrect semantics, too, if I understand the task (give windows of 2 items, overlapping by one, on seq?).
memory/cpu-expensive in terms of creating 3 new lists.
Fortunately, Hettinger's splendid itertools module, currently in Python's standard library, lets you perform this windowing task without creating any new list whatsoever.
When seq is any iterable, all you need is izip(seq, islice(seq, 1, None)), and you'll be creating no new list whatsoever. Still, tradeoffs in obscurity (and performance for midsized lists) are quite as clear.
If I'm not mistaken, this is buggy when seq is an iterable, and you need
to do something like
seq1,seq2 = tee(seq)
izip(seq1,islice(seq2,1,None))
instead.
--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Douglas Alan <ne****@mit.edu> wrote in message news:<lc************@gaffa.mit.edu>...
> Well, perhaps anything like "only one way to do it" should be removed from the mantra altogether, since people keep misquoting it in order to support their position of removing beautiful features like reduce() from the language.
I don't know what your definition of beautiful is, but reduce is the
equivalent of Haskell's foldl1, a function not even provided by most
of the other functional languages I know. I can't see how someone
could consider it "beautiful" to include a rarely-used and
limited-extent fold and not provide the standard folds.
You want to make Python into a functional language? Write a
functional module. foldl, foldr, etc; basically a copy of the Haskell
List module. That should give you a good start, and then you can use
such facilities to your heart's content.
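Such a functional module is only a few lines; a minimal sketch of foldl and foldr (names borrowed from Haskell's List module, written as plain loops rather than recursion):

```python
def foldl(f, acc, xs):
    """Left fold: f(f(f(acc, x0), x1), x2) ..."""
    for x in xs:
        acc = f(acc, x)
    return acc

def foldr(f, acc, xs):
    """Right fold: f(x0, f(x1, f(x2, acc))) ..."""
    for x in reversed(list(xs)):
        acc = f(x, acc)
    return acc

# subtraction makes the associativity difference visible:
foldl(lambda a, b: a - b, 0, [1, 2, 3])  # ((0-1)-2)-3 == -6
foldr(lambda a, b: a - b, 0, [1, 2, 3])  # 1-(2-(3-0)) == 2
```

Python's reduce() corresponds to foldl; with no initializer it behaves like Haskell's foldl1.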
Me? I love functional programming, but in Python I'd much rather read
a for loop than a reduce or probably even a fold. Horses for courses,
you know?
Jeremy
Alex Martelli <al***@aleax.it> writes:
> > no disagreement, reduce is in line with that philosophy; sum is a shortcut and, as others have said, is less general.
> 'sum' is _way simpler_: _everybody_ understands what it means to sum a bunch of numbers, _without_ necessarily having studied computer science.
Your claim is silly. sum() is not *way* simpler than reduce(), and
anyone can be explained reduce() in 10 seconds: "reduce() is just like
sum(), only with reduce() you can specify whatever addition function
you would like."
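That ten-second explanation can be made concrete; here operator.add and operator.mul stand in for the "whatever addition function you would like":

```python
# reduce() is a builtin in Python 2; the import keeps this 3.x-compatible
from functools import reduce
import operator

nums = [1, 2, 3, 4]

# sum() is reduce() with the addition operation fixed:
same_as_sum = reduce(operator.add, nums, 0)

# reduce() lets you swap in any binary operation, e.g. a product:
product = reduce(operator.mul, nums, 1)
```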
The claim, made by somebody else, that _every_ CS 101 course teaches the functionality of 'reduce' is not just false, but utterly absurd: 'reduce', 'foldl', and similar higher-order functions, were not taught to me back when _I_ took my first university exam in CS [it used Fortran as te main language],
Then you weren't taught Computer Science -- you were taught Fortran
programming. Computer Science teaches general concepts, not specific
languages.
they were not taught to my son in _his_ equivalent course [it used Pascal], and are not going to be taught to my daughter in _her_ equivalent course [it uses C].
Then your children were done a great disservice by receiving a poor
education. (Assuming that is that they wanted to learn Computer
Science, and not Programming in Pascal or Programming in C.)
Python's purpose is not, and has never been, to maximize the generality of the constructs it offers.
Whoever said anything about "maximizing generality"? If one's mantra
is "small, clean, simple, general, powerful, elegant", then clearly
there will come times when one must ponder on a trade-off between, for
example, elegant and powerful. But if you end up going and removing
elegant features understood by anyone who has studied Computer Science
because you think your audience is too dumb to make a slight leap from
the specific to the general that can be explained on one simple
sentence, then you are making those trade-off decisions in the
*utterly* wrong manner. You should be assuming that your audience are
the smart people that they are, rather than the idiots you are
assuming them to be.
But not with the parts that I quoted from the "spirit of C", and I repeat them because they were SO crucial in the success of C as a lower-level language AND are similarly crucial in the excellence of Python as a higher-level one -- design principles that are *VERY* rare among computer languages and systems, by the way:
I sure hope that Python doesn't try to emulate C. It's a terrible,
horrible programming language that held back the world of software
development by at least a decade.
Keep the language small and simple.
Provide only one way to do an operation.
It is not true these principles are rare among computer languages --
they are quite common. Most such language (like most computer
languages in general) just never obtained any wide usage.
The reason for Python's wide acceptance isn't because it is
particularly well-designed compared to other programming languages
that had similar goals of simplicity and minimality (it also isn't
poorly designed compared to any of them -- it is on par with the
better ones) -- the reason for its success is that it was in the right
place at the right time, it had a lightweight implementation, was
well-suited to scripting, and it came with batteries included.
|>oug
> > Of course I am not joking. I see no good coming from the mantra, when the mantra should be instead what I said it should be: "small, clean, simple, powerful, general, elegant"
> It's really a matter of taste - both "versions" mean about the same to me (and to me both mean "get rid of reduce()" ;-) ).
No, my mantra plainly states to keep general and powerful features over specific, tailored features.
And I disagree that that's necessarily a Good Thing. Good language design is
about finding that balance between general and specific. It's why I'm not a
language designer and it's also why I'm a Python user.
reduce() is more general and powerful than sum(), and would thus clearly be preferred by my mantra.
Yes, and eval() would clearly be preferred over them all.
The mantra "there should be only one obvious way to do it" apparently implies that one should remove powerful, general features like reduce() from the language, and clutter it up instead with lots of specific, tailored features like overloaded sum() and max().
I completely disagree - I see no evidence of that. We're looking at the same
data but drawing very different conclusions from it.
I can already see what's going to happen with sum(): Ultimately, people will realize that they may want to perform more general types of sums, using alternate addition operations.
Not gonna happen - this _might_ happen if Python was a design-by-committee
language, but it's not.
Yes, there are other parts of The Zen of Python that point to the powerful and general, rather than the clutter of specific and tailored, but nobody seems to quote them these days,
'not quoting' != 'not following'
and
'what gets debated on c.l.py' != 'what the Python developers do'
Having said that though, part of the appeal of Python is that it hits
the nail on the head surprisingly often: if you don't know (from prior experience) how to do something in Python, your first guess is very
often correct. Correspondingly, when you read someone else's Python code that
uses some feature you're not familiar with, odds are in your favor that
you'll correctly guess what that feature actually does.
All of this falls out of "clean", "simple", and "elegant".
Not at all - I cut my teeth on 6502 assembly and there is plenty that I
still find clean, simple, and elegant about it, but it's horrible to program
in. And that is why I wouldn't be sad if reduce() were to disappear - I
don't use reduce() and _anytime_ I see reduce() in someone's code I have to
slow way down and sort of rehearse in my mind what it's supposed to do and
see if I can successfully interpret its meaning (and, when nobody's looking, I might even replace it with a for-loop!).
C'mon -- all reduce() is is a generalized sum or product. What's there to think about? It's as intuitive as can be.
To you, perhaps. Not me, and not a lot of other people. To be honest I don't
really care that it's in the language. I'm not dying to see it get
deprecated or anything, but I do avoid it in my own code because it's
non-obvious to me, and if it were gone then Python would seem a little
cleaner to me.
Obviously what is intuitive to someone is highly subjective - I was really
in favor of adding a conditional operator to Python because to me it _is_
intuitive, clean, powerful, etc. because of my previous use of it in C. As
much as I wanted to have it though, on one level I'm really pleased that a
whole lot of clamoring for it did not result in its addition to the
language. I *like* the fact that there is someone making subjective
judgement calls, even if it means I sometimes don't get my every wish.
A good programming language is not the natural by-product of a series of
purely objective tests.
And taught in every CS curriculum.
Doubtful, and if it were universally true, it would weaken your point
because many people still find it a foreign or awkward concept. Besides,
whether or not something is taught in a CS program is a really poor reason
for doing anything.
-Dave
"Dave Brueck" <da**@pythonapocrypha.com> writes: And I disagree that that's necessarily a Good Thing. Good language design is about finding that balance between general and specific. It's why I'm not a language designer and it's also why I'm a Python user.
It's surely the case that there's a balance, but if you assume that
your audience is too stupid to be able to be able to cope with
reduce(add, seq)
instead of
sum(seq)
then you are not finding the proper balance. reduce() is more general and powerful than sum(), and would thus clearly be preferred by my mantra.
Yes, and eval() would clearly be preferred over them all.
And, damned right, eval() should stay in the language!
The mantra "there should be only one obvious way to do it" apparently implies that one should remove powerful, general features like reduce() from the language, and clutter it up instead with lots of specific, tailored features like overloaded sum() and max().
I completely disagree - I see no evidence of that. We're looking at the same data but drawing very different conclusions from it.
Well, that's the argument you seem to be making -- that reduce() is
superfluous because a sum() and max() that work on sequences were
added to the language.
I can already see what's going to happen with sum(): Ultimately, people will realize that they may want to perform more general types of sums, using alternate addition operations.
Not gonna happen - this _might_ happen if Python was a design-by-committee language, but it's not.
According to Alex Martelli, max() and min() are likely to be extended
in this fashion. Why not sum() next?
All of this falls out of "clean", "simple", and "elegant".
Not at all - I cut my teeth on 6502 assembly and there is plenty that I still find clean, simple, and elegant about it, but it's horrible to program in.
I think we can both agree that not all of programming language design can
be crammed into a little mantra!
C'mon -- all reduce() is is a generalized sum or product. What's there to think about? It's as intuitive as can be.
To you, perhaps. Not me, and not a lot of other people.
Well, perhaps you can explain your confusion to me? What could
possibly be unintuitive about a function that is just like sum(), yet
it allows you to specify the addition operation that you want to use?
Of course, you can get the same effect by defining a class with your
own special __add__ operator, and then encapsulating all the objects
you want to add in this new class, and then using sum(), but that's a
rather high overhead way to accomplish the same thing that reduce()
lets you do very easily.
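The two routes can be put side by side. A minimal sketch (the ModAdd class and the modulus are invented for illustration, in Python 3 spelling):

```python
from functools import reduce

class ModAdd:
    """Hypothetical wrapper whose __add__ works modulo 7."""
    def __init__(self, n):
        self.n = n % 7
    def __add__(self, other):
        return ModAdd(self.n + other.n)

nums = [3, 6, 5, 4]

# The high-overhead route: wrap every element so sum() uses our __add__
via_class = sum((ModAdd(x) for x in nums), ModAdd(0)).n

# The reduce() route: just hand over the addition operation
via_reduce = reduce(lambda x, y: (x + y) % 7, nums)

assert via_class == via_reduce == 4
```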
I *like* the fact that there is someone making subjective judgement calls, even if it means I sometimes don't get my every wish.
Likewise.
A good programming language is not the natural by-product of a series of purely objective tests.
And taught in every CS curriculum.
Doubtful, and if it were universally true, it would weaken your point because many people still find it a foreign or awkward concept.
I doubt that anyone who has put any thought into it finds it a foreign
concept. If they do, it's just because they have a knee-jerk reaction
and *want* to find it a foreign concept.
If you can cope with modular arithmetic, you can cope with the idea of
allowing people to sum numbers with their own addition operation!
Besides, whether or not something is taught in a CS program is a really poor reason for doing anything.
No, it isn't. CS is there for a reason, and one should not ignore the
knowledge it contains. That doesn't mean that one should feel
compelled to repeat the mistakes of history. But fear of that is no
reason not to accept its successes. Those who don't, end up inventing
languages like Perl.
|>oug
Douglas Alan: Then you weren't taught Computer Science -- you were taught Fortran programming. Computer Science teaches general concepts, not specific languages.
I agree with Alex on this. I got a BS in CS but didn't learn about
lambda, reduce, map, and other aspects of functional programming
until years later, and it still took some effort to understand it.
(Granted,
learning on my own at that point.)
But I well knew what 'sum' did.
Was I not taught Computer Science? I thought I did pretty well
on the theoretical aspects (state machines, automata, discrete math,
algorithms and data structures). Perhaps my school was amiss in
leaving it out of the programming languages course, and for
teaching its courses primarily in Pascal. In any case, it contradicts
your assertion that anyone who has studied CS knows what
reduce does and how it's useful.
Then your children were done a great disservice by receiving a poor education. (Assuming, that is, that they wanted to learn Computer Science, and not Programming in Pascal or Programming in C.)
Strangely enough, I didn't see an entry for 'functional programming'
in Knuth's "The Art of Computer Programming" -- but that's just
programming. ;)
But if you end up going and
removing elegant features understood by anyone who has studied Computer Science because you think your audience is too dumb to make a slight leap from the specific to the general that can be explained on one simple sentence, then you are making those trade-off decisions in the *utterly* wrong manner. You should be assuming that your audience are the smart people that they are, rather than the idiots you are assuming them to be.
Your predicate (that it's understood by anyone who has studied
CS) is false so your argument is moot. In addition, I deal with a
lot of people who program but didn't study CS. And I rarely
use reduce in my code (even rarer now that 'sum' exists) so would
not miss its exclusion or its transfer from builtins to a module.
Andrew da***@dalkescientific.com
Douglas Alan wrote: Your claim is silly. sum() is not *way* simpler than reduce(), and anyone can be explained reduce() in 10 seconds: "reduce() is just like sum(), only with reduce() you can specify whatever addition function you would like."
Maybe reduce() can be explained in 10 seconds to someone who has used
sum() a few times, but that has no bearing whatsoever on trying to
explain reduce() to someone if sum() is not available and hasn't
been used by them.
they were not taught to my son in _his_ equivalent course [it used Pascal], and are not going to be taught to my daughter in _her_ equivalent course [it uses C].
Then your children were done a great disservice by receiving a poor education. (Assuming, that is, that they wanted to learn Computer Science, and not Programming in Pascal or Programming in C.)
I'm sorry, but from a pure CS viewpoint, reduce() is way off the
radar screen. Especially for a 101 course. If either of my daughters
wanted to discuss reduce() while taking such a course, I'd want to
see the curriculum to figure out what they _weren't_ being taught.
You should be assuming that your audience are the smart people that they are, rather than the idiots you are assuming them to be.
Ignorance is not stupidity. I have yet to see a language which
can be used by stupid people with consistent success; however I
have seen a great deal of evidence that Python can be used very
successfully by ignorant people. It gently eases them in and
allows them to perform useful work while slowly reducing their
ignorance.
I sure hope that Python doesn't try to emulate C. It's a terrible, horrible programming language that held back the world of software development by at least a decade.
I used to hate C. But then, when it borrowed enough good concepts
from Pascal and other languages, and the compilers got smart enough
to warn you (if you cared to see the warnings) about things like
"if (x = y)" I stopped using Modula-2. C held software back 10
years in the same manner as Microsoft did, e.g. by helping to
standardize things to where I can buy a $199 system from WalMart
which would cost over $20,000 if everybody kept writing code like
the pointy-headed ivory tower academics thought it ought to be written.
For certain problem domains (where domain includes the entire system
of software, hardware, real-time constraints, portability, user
expectations, maintainability by a dispersed team, etc.), C is an
excellent implementation language. But don't take my word for it --
open your eyes and look around. Could there be a better implementation
language? Sure. Would C acquire the bare minimum features needed
to compete with the new language? Absolutely.
(But I freely admit I'm a luddite: I prefer Verilog to VHDL, as well.)
The reason for Python's wide acceptance isn't because it is particularly well-designed compared to other programming languages that had similar goals of simplicity and minimality (it also isn't poorly designed compared to any of them -- it is on par with the better ones) -- the reason for its success is that it was in the right place at the right time, it had a lightweight implementation, was well-suited to scripting, and it came with batteries included.
I'd vote this as the statement in this group most likely to start
a religious flamewar since the lisp threads died down.
I'm not particularly religious, but I _will_ bite on this one:
1) In what way was it at the "right place at the right time?" You
didn't name names of other languages, but I'll bet that if you can
name 5 which are similar by your criteria, at least two of them
were available when Python first came out.
2) What part of "lightweight implementation, well suited to scripting"
contradicts, or is even merely orthogonal to "particularly well-designed"?
3) Do you _really_ think that all the batteries were included when
Python first came out? Do you even think that Python has more batteries
_right_ _now_ than Perl (via CPAN), or that some competing language
couldn't or hasn't already been designed which can coopt other languages'
batteries?
I can accept the premise that, for Python to enjoy the acceptance
it does today, Guido had to be lucky in addition to being an excellent
language designer. But if I were to accept the premise that Python's
popularity is due to sheer luck alone, my only logical course of action
would to be to buy Guido a plane ticket to Vegas and front him $10,000
worth of chips, because he has been extremely lucky for many years now.
Pat
Douglas Alan <ne****@mit.edu> wrote in message news:<lc************@gaffa.mit.edu>... C'mon -- all reduce() is is a generalized sum or product. What's there to think about? It's as intuitive as can be. And taught in every CS curriculum. What more does one want out of a function?
|>oug
Others pointed out that 'reduce' is not taught in every CS curriculum
and that many (most?) programmers didn't have a CS curriculum as you
intend it, so let me skip on this point. The real point, as David Eppstein
said, is readability:
reduce(operator.add, seq)
is not readable to many people, even if is readable to you.
That's the only reason why I don't use 'reduce'. I would have preferred
a better 'reduce' rather than a new ad hoc 'sum': for instance something
like
reduce([1,2,3],'+')
in which the sequence goes *before* the operator. It is interesting to
notice that somebody recently suggested in the Scheme newsgroup that
(map function sequence)
was a bad idea and that
(map sequence function)
would be better!
Of course, I do realize that there is no hope of changing 'reduce' or 'map'
at this point; moreover the current trend is to make 'reduce' nearly
useless, so I would not complain about its death in Python 3.0.
Better no reduce than an unreadable reduce.
Also, I am not happy with 'sum' as it is, but at least it is
dead easy to read.
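Michele's sequence-first spelling can be mocked up in a few lines. A toy sketch of the hypothetical reduce([1,2,3],'+') he describes (the seq_reduce helper and its operator table are inventions for illustration, not a real builtin, written in Python 3 spelling):

```python
from functools import reduce
import operator

# Map operator symbols to the corresponding two-argument functions
_OPS = {'+': operator.add, '*': operator.mul}

def seq_reduce(seq, op):
    """Sequence-first reduce: the data comes before the operation."""
    return reduce(_OPS[op], seq)

assert seq_reduce([1, 2, 3], '+') == 6
assert seq_reduce([1, 2, 3], '*') == 6
```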
Michele Simionato
Douglas Alan <ne****@mit.edu> wrote in message news:<lc************@gaffa.mit.edu>... The reason for Python's wide acceptance isn't because it is particularly well-designed compared to other programming languages that had similar goals of simplicity and minimality (it also isn't poorly designed compared to any of them -- it is on par with the better ones) -- the reason for its success is that it was in the right place at the right time, it had a lightweight implementation, was well-suited to scripting, and it came with batteries included.
.... and it is free!!! ;)
More seriously, the fact that it has a standard implementation and a
BDFL ensuring the consistency of the language (no committee for Python!)
is also a big plus. Moreover, it has good documentation, a very active
community and a wonderful newsgroup. Also, the time scale between the
submission of a bug (there are very few of them, BTW) and its fixing
is surprisingly short. This is something I value a lot. Finally, the
language is still evolving at fast pace and you feel the sensation
that is has a future. Probably the same things can be said for Ruby
and Perl, so you have a choice if you don't like the Zen of Python ;)
Michele Simionato
"Michele Simionato" <mi**@pitt.edu> wrote in message news:22**************************@posting.google.com...
| <...>
| in which the sequence goes *before* the operator. It is interesting to
| notice that somebody recently suggested in the Scheme newsgroup that
|
| (map function sequence)
|
| was a bad idea and that
|
| (map sequence function)
|
| would be better!
|
| <...>
Yes, that's right. Some scientific studies proved that humans think and
express their thoughts in order subject/object-action. So OOP is very
"human" in this aspect, btw.
G-:
In article <17**********************@twister.southeast.rr.com>, Georgy
Pruss <SE************@hotmail.com> writes:
Yes, that's right. Some scientific studies proved that humans think and express their thoughts in order subject/object-action. So OOP is very "human" in this aspect, btw.
G-:
on oop in this thread nobody has pointed out that we could have
sequence.sum()
sequence.reduce(operator.add[,init])
or even
sequence.filter(func) etc etc
and similar. That would make these frighteningly incomprehensible ;)
concepts seem less like functional programming. Personally I wouldn't
like that to happen.
--
Robin Becker pm*****@speakeasy.net (Patrick Maupin) writes: Douglas Alan wrote:
Your claim is silly. sum() is not *way* simpler than reduce(), and anyone can be explained reduce() in 10 seconds: "reduce() is just like sum(), only with reduce() you can specify whatever addition function you would like."
Maybe reduce() can be explained in 10 seconds to someone who has used sum() a few times, but that has no bearing whatsoever on trying to explain reduce() to someone if sum() is not available and hasn't been used by them.
Describing reduce() in 10 seconds is utterly trivial to anyone with an
IQ above 100, whether or not they have ever used sum():
"To add a sequence of numbers together:
reduce(add, seq)
To multiply a sequence of numbers together:
reduce(mul, seq)
To subtract all of the numbers of a sequence (except the first
number) from the first number of the sequence:
reduce(sub, seq)
To divide the first number in a sequence by all the remaining
numbers in the sequence:
reduce(div, seq)
Any two-argument function can be used in place of add, mul, sub, or
div and you'll get the appropriate result. Other interesting
examples are left as an exercise for the reader."
If someone can't understand this quickly, then they shouldn't be
programming!
I'm sorry, but from a pure CS viewpoint, reduce() is way off the radar screen. Especially for a 101 course.
I'm sorry, but you are incorrect. When I took CS-101, we learned
assembly language, then were assigned to write a text editor in
assembly language, then we learned LISP and were assigned to write
some programs in LISP, and then we learned C, and then we were
assigned to implement LISP in C.
If you can write a !$#@!!%# LISP interpreter in C, you no doubt can
figure out something as mind-achingly simple as reduce()!
You should be assuming that your audience are the smart people that they are, rather than the idiots you are assuming them to be.
Ignorance is not stupidity.
Assuming that your audience cannot learn the simplest of concepts is
assuming that they are stupid, not that they are ignorant.
I sure hope that Python doesn't try to emulate C. It's a terrible, horrible programming language that held back the world of software development by at least a decade.
I used to hate C. But then, when it borrowed enough good concepts from Pascal and other languages, and the compilers got smart enough to warn you (if you cared to see the warnings) about things like "if (x = y)" I stopped using Modula-2. C held software back 10 years in the same manner as Microsoft did, e.g. by helping to standardize things to where I can buy a $199 system from WalMart which would cost over $20,000 if everybody kept writing code like the pointy-headed ivory tower academics thought it ought to be written.
You score no points for C by saying that it is like Microsoft. That's
a strong damnation in my book. And you really don't know how the
world would have turned out if a different programming language had
been adopted rather than C for all those years. Perhaps computers
would be more expensive today, perhaps not. On the other hand, we
might not have quite so many buffer overflow security exploits.
Perhaps we'd have hardware support for realtime GC, which might be
very nice. On the other hand, perhaps people would have stuck with
assembly language for developing OS's. That wouldn't have been so
pretty, but I'm not sure that that would have made computers more
expensive. Perhaps a variant of Pascal or PL/1 would have taken the
niche that C obtained. Either of those would have been better, though
no great shakes either.
Many of the pointy-headed ivory tower academics, by the way, thought
that code should look something like Python. The reason these
languages are not widely used is because typically they either did not
come with batteries, or there was no lightweight implementation
provided, or they only ran on special hardware, or all of the above.
The reason for Python's wide acceptance isn't because it is particularly well-designed compared to other programming languages that had similar goals of simplicity and minimality (it also isn't poorly designed compared to any of them -- it is on par with the better ones) -- the reason for its success is that it was in the right place at the right time, it had a lightweight implementation, was well-suited to scripting, and it came with batteries included.
I'd vote this as the statement in this group most likely to start a religious flamewar since the lisp threads died down.
The only way it could start a religious flamewar is if there are
people who wish to present themselves as fanboys. I have said nothing
extreme -- just what is obvious: There are many nice computer
programming languages -- Python is but one of them. If someone
wishes to disagree with this, then they would have to argue that there
are no other nice programming languages. Now that would be a flame!
I'm not particularly religious, but I _will_ bite on this one:
1) In what way was it at the "right place at the right time?"
Perl was in the right place at the right time because system
administrators had gotten frustrated with doing all their scripts in a
mishmash of shell, awk, sed, and grep, etc. And then web-scripting
kicked Perl into even more wide acceptance. Python was in the right
place in the right time because many such script-writers (like yours
truly) just could not stomach Perl, since it is an ugly monstrosity,
and Python offered such people relief from Perl. If Perl had been a
sane OO language, Python would never have had a chance.
You didn't name names of other languages, but I'll bet that if you can name 5 which are similar by your criteria, at least two of them were available when Python first came out.
I'm not sure what you are getting at. There were many nice
programming languages before Python, but not many of them, other than
Perl, were portable and well-suited to scripting.
Oh, yeah, I forgot to mention portability in my list of reasons why
Python caught on. That's an essential one. Sure you could elegantly
script a Lisp Machine with Lisp, and some Xerox computers with
Smalltalk, but they didn't provide versions of these languages
well-suited for scripting other platforms.
2) What part of "lightweight implementation, well suited to scripting" contradicts, or is even merely orthogonal to "particularly well-designed"?
Again, I'm not sure what you are getting at. "Lightweight
implementation" and "well-suited to scripting" do not contradict
"well-designed", as Python proves. Lightweightedness and capability
at scripting are certainly orthogonal to the property of being
well-designed, however, since there are a plethora of well-designed
languages that are not suited to scripting. They just weren't
designed to address this niche.
3) Do you _really_ think that all the batteries were included when Python first came out?
It certainly was not a particularly popular language until it came
with pretty hefty batteries. There are many other languages that
would have been equally popular before Python started coming with
batteries.
Do you even think that Python has more batteries _right_ _now_ than Perl (via CPAN), or that some competing language couldn't or hasn't already been designed which can coopt other languages' batteries?
Um, the last time I checked Perl was still a lot more popular than
Python, so once again I'm not sure what you are getting at. Regarding
whether or not some future language might also come with batteries and
therefore steal away Python's niche merely due to having more
batteries: Anything is possible, but this will be an uphill battle for
another language because once a language takes a niche, it is very
difficult for the language to be displaced. On the other hand, a new
language can take over a sub-niche by providing more batteries in a
particular area. PHP would be an example of this.
I can accept the premise that, for Python to enjoy the acceptance it does today, Guido had to be lucky in addition to being an excellent language designer. But if I were to accept the premise that Python's popularity is due to sheer luck alone my only logical course of action would to be to buy Guido a plane ticket to Vegas and front him $10,000 worth of chips, because he has been extremely lucky for many years now.
I never claimed *anything* like the assertion that Python's popularity
is due to luck alone!
|>oug
David Eppstein wrote:
... Wen seq is any iterable, all you need is izip(seq, islice(seq, 1, None)), and you'll be creating no new list whatsoever. Still, tradeoffs in obscurity (and performance for midsized lists) are quite as clear. If I'm not mistaken, this is buggy when seq is an iterable, and you need
Sorry, I should have said something like "re-iterable" -- an object such
that e.g.:
it1 = iter(seq)
val1 = it1.next()
it2 = iter(seq)
val2 = it2.next()
assert val1 == val2
holds (and keeps holding as you keep next'ing:-). list, tuple, dict, etc.
In particular, when the idiom zip(seq, seq[1:]) works, so should this one
(note in passing that, in said idiom, there is no need to slice the first
seq in the zip call to seq[:-1] -- zip truncates at the end of the
_shorter_ sequence anyway).
to do something like seq1,seq2 = tee(seq) izip(seq1,islice(seq2,1,None)) instead.
Yes, this is totally general. However, even though tee has now (2.4)
been implemented very smartly, this overall approach is still way
"conceptually heavy" (IMHO) when compared to, e.g.:
def window_by_2(it):
    it = iter(it)
    first = it.next()
    for second in it:
        yield first, second
        first = second
in any case, I do think that such 'windowing' is a general enough
need that it deserves its own optimized itertool...
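Both approaches sketch easily. The tee-based one, in Python 3 spelling (plain zip instead of izip), handles any iterable, including a one-shot generator where the zip(seq, seq[1:]) idiom would fail; as it happens, just such a windowing itertool was eventually added to the standard library as itertools.pairwise in Python 3.10:

```python
from itertools import islice, tee

def pairwise(iterable):
    """Yield overlapping consecutive pairs from any iterable."""
    a, b = tee(iterable)
    return zip(a, islice(b, 1, None))

assert list(pairwise([1, 2, 3, 4])) == [(1, 2), (2, 3), (3, 4)]
# Works on a one-shot generator too, where slicing is impossible
assert list(pairwise(x * x for x in range(4))) == [(0, 1), (1, 4), (4, 9)]
```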
Alex
pm*****@speakeasy.net (Patrick Maupin) writes: Douglas Alan wrote: Then your children were done a great disservice by receiving a poor education. (Assuming, that is, that they wanted to learn Computer Science, and not Programming in Pascal or Programming in C.)
I'm sorry, but from a pure CS viewpoint, reduce() is way off the radar screen. Especially for a 101 course. If either of my daughters wanted to discuss reduce() while taking such a course, I'd want to see the curriculum to figure out what they _weren't_ being taught.
I'd tend to agree, and I've never found myself on the not teaching
generalities side of this debate before. An intro CS class should be
about intro to Computer Science and not any particular language, but
reduce() is a fairly specific functional programming concept. To say
that it should be taught in an intro class is like saying you should
deal with metaclasses, function pointers, or the stack pointer
immediately to teach good computer science. Algorithms, data
structures, state machines and computational theory are fairly basic
computer science, functional programming can be used to present those,
but is by no means needed.
Also the assumption isn't that Python users are stupid, it's that they
may have little to no CS training. Python wasn't designed for the
exclusive use of CS trained individuals (which is a good thing,
because I use it to teach computers to Boy Scouts).
Back on topic, I very rarely use reduce() in Python and then only for
optimization (which I rarely do). The vast majority of the time I
just use a for loop; it just seems to flow better.
--
Christopher A. Craig <li*********@ccraig.org>
"Tragedy is when I cut my finger. Comedy is when you fall in an open
sewer and die." Mel Brooks.
re cs 101: I'm told (by friends who were undergrads there in the early
90's) that Yale used to have an intro to CS course taught by the late
Alan Perlis. The usual high level concepts you'd expect - but since
it helps a lot to have a language to give examples in, it used APL.
APL has the kind of array manipulation tools that these days you find
in matlab or other specialized tools - I'd say "that other languages
aspire to" but from the looks of it other languages *don't* aspire to
that kind of array handling. But the point here is that "reduce" is
fundamental: x/i5 (where x is multiplication-sign and i is iota) is
a lot like reduce(int.__mul__, range(1,6)), it's just "readable" if
you're comfortable with the notation (and more general, I can't find
a builtin way to say "coerced multiply" without lambda, in 30 seconds
of poking around.) On the other hand, that readability does assume
you're thinking in terms of throwing arrays around, which can be
an... *odd* way of looking at things, though of course when it fits,
it's very nice.
In other words, I'd expect anyone who had a reasonably rich CS
background to have been exposed to it, either from the historical
perspective, the "languages influence how you think" perspective, or
the mathematical operation perspective.
At the same time, I'll admit to not having used it (I've only been
using python for a year, and will more likely write an
accumulator-style block since it will always be clearer than a lambda
(which is what you generally need for reduce - if you are going to
write a function anyway, it's easier to just write an n-adic instead
of only dyadic function, and skip the need for reduce altogether - and
python has a "bias for functions", the gap between "sum" and "write a
function for the whole thing" is fairly narrow.)
(Hmm, r5rs doesn't have reduce, but mit-scheme does.)
> you're comfortable with the notation (and more general, I can't find a builtin way to say "coerced multiply" without lambda, in 30 seconds of poking around.) ...
Sigh, I just realized - everyone talking about operator.mul was being
*literal*, not abstract, there's actually an operator package :-) Oops.
Not that reduce(operator.mul, range(1,6)) is any more readable, I'd
still define Product around it...
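The Product wrapper the poster mentions is a one-liner over reduce. A sketch mirroring APL's ×/⍳5, in Python 3 spelling:

```python
from functools import reduce
import operator

def product(seq):
    """Multiply all items together -- APL's x/ reduction over a sequence."""
    return reduce(operator.mul, seq, 1)

assert product(range(1, 6)) == 120  # x/i5
assert product([]) == 1             # the initializer handles the empty case
```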
On Wed, Nov 12, 2003 at 08:28:29AM +0000, Robin Becker wrote: sequence.sum() sequence.reduce(operator.add[,init]) sequence.filter(func) etc etc That would make these frighteningly incomprehensible ;) concepts seem less like functional programming. Personally I wouldn't like that to happen.
I'm hoping you were being sarcastic ... but I get the feeling you aren't.
Why, pray-tell, would you want an OO program to do:
results = [ func(x) for x in sequence ]
... instead of ...
results = sequence.map(func) ??
I can understand if you're writing highly LISP-like Python (see IBM's
articles on functional programming with Python for example). However, I
don't see the 'harm' in offering functional methods to lists/arrays.
Looking at this code I wrote today:
matches = [ get_matches(file) for file in duplicates ]
todelete = [ get_oldest(files) for files in matches ]
... would end up being ...
matches = duplicates.map(get_matches)
todelete = matches.map(get_oldest)
... or better ...
todelete = duplicates.map(get_matches).map(get_oldest)
... and I somewhat like that as I look at it.
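Since Python lists never grew a .map() method, the closest runnable comparison uses the builtin map. The get_matches/get_oldest stand-ins below are hypothetical toys, not the poster's real functions:

```python
# Hypothetical stand-ins for the poster's helpers
def get_matches(name):
    return [name + ".bak", name + ".old"]

def get_oldest(files):
    return sorted(files)[0]

duplicates = ["a", "b"]

# list-comprehension style
todelete = [get_oldest(get_matches(f)) for f in duplicates]

# chained map style (builtin map, since lists have no .map() method)
todelete2 = list(map(get_oldest, map(get_matches, duplicates)))

assert todelete == todelete2 == ["a.bak", "b.bak"]
```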
--
Michael T. Babcock
CTO, FibreSpeed Ltd. (Hosting, Security, Consultation, Database, etc) http://www.fibrespeed.net/~mbabcock/
> > On Wed, Nov 12, 2003 at 08:28:29AM +0000, Robin Becker wrote:
> > sequence.sum()
> > sequence.reduce(operator.add[,init])
> > sequence.filter(func)
> > etc etc
That would make these frighteningly incomprehensible ;) concepts seem less like functional programming. Personally I wouldn't like that to happen.
I'm hoping you were being sarcastic ... but I get the feeling you aren't.
Why, pray-tell, would you want an OO program to do:
results = [ func(x) for x in sequence ]
... instead of ...
results = sequence.map(func) ??
My apologies: my goofy mailer isn't adding the "On <date>, <person> wrote:"
lines, so it sort of confuses what Robin wrote with what Michael wrote. Sorry!
-Dave
> On Wed, Nov 12, 2003 at 08:28:29AM +0000, Robin Becker wrote:
> sequence.sum()
> sequence.reduce(operator.add[,init])
> sequence.filter(func)
> etc etc
That would make these frighteningly incomprehensible ;) concepts seem less like functional programming. Personally I wouldn't like that to happen.
I'm hoping you were being sarcastic ... but I get the feeling you aren't.
Why, pray-tell, would you want an OO program to do:
results = [ func(x) for x in sequence ]
... instead of ...
results = sequence.map(func) ??
Because I find the first much more readable (and IMO the "an OO program to
do" bit is irrelevent from a practical point of view).
Also, someone not familiar with either version is likely to correctly guess
what the first one does. It's not an issue of whether or not a person can be
taught what 'map' means). It's subjective, yes, but not _completely_
subjective because the "guessability" of the first form is higher because it
uses other well-known keywords to do its thing. (FWIW I don't think map() is
that big of a deal, but you asked... :) ).
-Dave
"Andrew Dalke" <ad****@mindspring.com> writes: Douglas Alan:
Then you weren't taught Computer Science -- you were taught Fortran programming. Computer Science teaches general concepts, not specific languages.
I agree with Alex on this. I got a BS in CS but didn't learn about lambda, reduce, map, and other aspects of functional programming until years later, and it still took some effort to understand it. (Granted, learning on my own at that point.)
But I well knew what 'sum' did.
How's that? I've never used a programming language that has sum() in
it. (Or at least not that I was aware of.) In fact, the *Python* I
use doesn't even have sum() in it! I've used a number of languages
that have reduce(). If I didn't already know that it exists, I
wouldn't even think to look in the manual for a function that adds up
a sequence of numbers, since such a function is so uncommon and
special-purpose.
(Someone pointed out that different versions of Scheme vary on whether
they give you reduce. In Scheme reduce would have fewer uses since
Scheme uses prefix notation, and operators like "+" can already take
any number of arguments (this is a definite advantage to prefix
notation). Therefore, in Scheme, to add up all of the numbers in a
list, you can just use apply and '+', like so: "(apply + my-list)".)
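The Scheme trick "(apply + my-list)" has no direct Python analog, because operator.add is strictly binary; this is exactly the gap that reduce (or sum) fills. A quick illustration:

```python
# Rough Python analog of Scheme's (apply + my-list). Python's operator.add
# takes exactly two arguments, so the apply-style spread fails; reduce
# (or sum) is what covers that case.
from functools import reduce
import operator

my_list = [1, 2, 3, 4]

total_sum = sum(my_list)                      # 10
total_reduce = reduce(operator.add, my_list)  # 10

try:
    operator.add(*my_list)   # add expected 2 arguments, got 4
    spread_failed = False
except TypeError:
    spread_failed = True
```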
Was I not taught Computer Science?
I would say that if you didn't get introduced to at least the concept
of functional programming and a hint of how it works, then something
was seriously wrong with your CS education.
But if you end up going and removing elegant features understood by anyone who has studied Computer Science because you think your audience is too dumb to make a slight leap from the specific to the general that can be explained on one simple sentence, then you are making those trade-off decisions in the *utterly* wrong manner. You should be assuming that your audience are the smart people that they are, rather than the idiots you are assuming them to be.
Your predicate (that it's understood by anyone who has studied CS) is false so your argument is moot.
It's irrelevant whether or not many people have received poor CS
educations -- there are many people who haven't. These people should
be pleased to find reduce() in Python. And the people who received
poor or no CS educations can learn reduce() in under a minute and
should be happy to have been introduced to a cool and useful concept!
|>oug
Me: But I well knew what 'sum' did.
Douglas Alan: How's that? I've never used a programming language that has sum() in it.
1) From Microsoft Multiplan (a pre-Excel spreadsheet). That
was one of its functions, which I used to help my Mom manage
her cheese co-op accounts in ... 1985?
2) I wrote such a routine many times for my own projects
By 1987 I was using a BASIC version (QuickBasic) which
had functions and using Pascal. But I don't recall in either of
those languages ever passing functions into an object until I
started using non-trivial C a couple of years later. (Probably
for a numerical analysis class.)
I wouldn't even think to look in the manual for a function that adds up a sequence of numbers, since such a function is so uncommon and special-purpose.
Uncommon? Here's two pre-CS 101 assignments that use
exactly that idea:
- make a computerized grade book (A=4.0, B=3.0, etc.) which
can give the grade point average
- make a program to compute the current balance of a bank
account given the initial amount and
That's not to say that the BASIC I used at that time had support
for functions. It's only to say that the functionALITY is common
and more easily understood than reduce.
Was I not taught Computer Science?
I would say that if you didn't get introduced to at least the concept of functional programming and a hint of how it works, then something was seriously wrong with your CS education.
It may be. That was my third-place major, which I did mostly
for fun. I focused more on my math and physics degrees, so might
have skipped a few things I would have learned had I been more
rigorous in my studies.
It may also be that my department focused on other things. Eg,
we had a very solid "foundations of computer science" course
compared to some CS departments, and we learned some fuzzy
logic and expert systems because those were a focus of the
department.
It's irrelevant whether or not many people have received poor CS educations -- there are many people who haven't. These people should be pleased to find reduce() in Python. And the people who received poor or no CS educations can learn reduce() in under a minute and should be happy to have been introduced to a cool and useful concept!
Actually, your claim is 'anyone can be explained reduce() in 10 seconds' ;)
I tell you this. Your estimate is completely off-base. Reduce is
more difficult to understand than sum. It requires knowing that
functions can be passed around. That is non-trivial to most,
based on my experience in explaining it to other people (which
for the most part have been computational physicists, chemists,
and biologists).
It may be different with the people you hang around -- your
email address says 'mit.edu' which is one of the *few* places
in the world which teach Scheme as the intro language for
undergrads, so you already have a strong sampling bias.
(I acknowledge my own sampling bias from observing people
in computational sciences. I hazard to guess that I know more
of those people than you do people who have studied
computer science.)
Andrew da***@dalkescientific.com
"Andrew Dalke" <ad****@mindspring.com> writes: Me:
> But I well knew what 'sum' did.
Douglas Alan: How's that? I've never used a programming language that has sum() in it.
1) From Microsoft Multiplan (a pre-Excel spreadsheet). That was one of its functions, which I used to help my Mom manage her cheese co-op accounts in ... 1985?
Okay, well I can certainly see that a spreadsheet program should have
a built-in sum() function, since that's about 50% of what spreadsheets
do! But general-purpose programming languages rarely have it.
I wouldn't even think to look in the manual for a function that adds up a sequence of numbers, since such a function is so uncommon and special-purpose.
Uncommon? Here's two pre-CS 101 assignments that use exactly that idea:
- make a computerized grade book (A=4.0, B=3.0, etc.) which can give the grade point average
- make a program to compute the current balance of a bank account given the initial amount and
I'm not saying that it's uncommon to want to sum a sequence of numbers
(though it's not all that common, either, for most typical programming
tasks) -- just that it's uncommon to build into the language a special
function to do it. reduce(+, seq) or apply(+, seq) are much more common,
since reduce and/or apply can do the job fine and are more general.
Or just a good old-fashioned loop.
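The three spellings under discussion, side by side (again using the modern functools spelling of reduce):

```python
# An explicit loop, reduce with operator.add, and the sum() builtin
# all compute the same total.
from functools import reduce
import operator

nums = [3, 1, 4, 1, 5]

total_loop = 0
for n in nums:
    total_loop += n

total_reduce = reduce(operator.add, nums)
total_sum = sum(nums)

assert total_loop == total_reduce == total_sum == 14
```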
It's irrelevant whether or not many people have received poor CS educations -- there are many people who haven't. These people should be pleased to find reduce() in Python. And the people who received poor or no CS educations can learn reduce() in under a minute and should be happy to have been introduced to a cool and useful concept!
Actually, your claim is 'anyone can be explained reduce() in 10 seconds' ;)
Now you want consistency from me? Boy, you ask a lot!
Besides, 10 seconds is under a minute, is it not?
Also, I said it could be explained in 10 seconds. Perhaps it takes a
minute to learn because one would need the other 50 seconds for it to
sink in.
I tell you this. Your estimate is completely off-base. Reduce is more difficult to understand than sum. It requires knowing that functions can be passed around.
Something that anyone learning the language should learn by the time
they need a special-purpose summing function! Before then, they can
use a loop. They need the practice anyway.
That is non-trivial to most, based on my experience in explaining it to other people (which for the most part have been computational physicists, chemists, and biologists).
I find this truly hard to believe. APL was a favorite among
physicists who worked at John's Hopkins Applied Physics Laboratory
where I lived for a year when I was in high school, and you wouldn't
survive five minutes in APL without being able to grok this kind of
thing.
It may be different with the people you hang around -- your email address says 'mit.edu' which is one of the *few* places in the world which teach Scheme as the intro language for undergrads, so you already have a strong sampling bias.
Yeah, and using Scheme was the *right* way to teach CS-101, dangit!
But, like I said, I was taught APL in high-school in MD, and no one
seemed troubled by reduce-like things, so it was hardly just an MIT
thing. In fact, people seemed to like reduce() and friends -- people
seemed to think it was a much more fun way to program, rather than
using boring ol' loops.
(I acknowledge my own sampling bias from observing people in computational sciences. I hazard to guess that I know more of those people than you do people who have studied computer science.)
Hmm, well I work for X-ray astronomers. Perhaps I should take a
poll.
|>oug
Douglas Alan wrote: Describing reduce() in 10 seconds is utterly trivial to anyone with an IQ above 100, whether or not they have ever used sum()
Well, yeah, but they may not need or want to learn or remember it.
And then there are the corner cases, e.g. sum([]) vs.
reduce(operator.add,[]) (which throws an exception).
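The corner case named above, checked directly:

```python
# sum() of an empty list is 0, while two-argument reduce() on an empty
# list raises TypeError; supplying an initial value restores parity.
from functools import reduce
import operator

empty_sum = sum([])              # 0, no exception

try:
    reduce(operator.add, [])     # "reduce() of empty iterable with no initial value"
    raised = False
except TypeError:
    raised = True

with_init = reduce(operator.add, [], 0)  # 0, matching sum([])
```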
If someone can't understand this quickly, then they shouldn't be programming!
Again, it's not "can't", it's whether they need to or not.
I'm sorry, but you are incorrect. When I took CS-101, we learned assembly language, then were assigned to write a text editor in assembly language, then we learned LISP and were assigned to write some programs in LISP, and then we learned C, and then we were assigned to implement LISP in C.
If you can write a !$#@!!%# LISP interpreter in C, you no doubt can figure out something as mind-achingly simple as reduce()!
Ahh, the lisp background. I _knew_ that would come out sometime :)
Seriously, though, even in this scenario -- you don't really need
reduce() to create a LISP interpreter (as I'm sure you found
when you wrote one in C). Ignorance is not stupidity.
Assuming that your audience cannot learn the simplest of concepts is assuming that they are stupid, not that they are ignorant.
As I and others have pointed out, it's not a matter of assuming they
can't learn, it's a matter of assuming they have better things to
do. Many people can write all the useful programs they will ever
need without reduce, and sum() makes the percentage of Python
users who can do this even higher. (Having said that, I never
personally argued that reduce() should be removed from the language,
but I do agree that it does not have to be part of "core" Python,
and could easily be relegated to a module.) I sure hope that Python doesn't try to emulate C. It's a terrible, horrible programming language that held back the world of software development by at least a decade. I used to hate C. But then, when it borrowed enough good concepts from Pascal and other languages, and the compilers got smart enough to warn you (if you cared to see the warnings) about things like "if (x = y)" I stopped using Modula-2. C held software back 10 years in the same manner as Microsoft did, e.g. by helping to standardize things to where I can buy a $199 system from WalMart which would cost over $20,000 if everybody kept writing code like the pointy-headed ivory tower academics thought it ought to be written. You score no points for C by saying that it is like Microsoft. That's a strong damnation in my book. And you really don't know how the world would have turned out if a different programming language had been adopted rather than C for all those years. Perhaps computers would be more expensive today, perhaps not. On the other hand, we might not have quite so many buffer overflow security exploits. Perhaps we'd have hardware support for realtime GC, which might be very nice. On the other hand, perhaps people would have stuck with assembly language for developing OS's. That wouldn't have been so pretty, but I'm not sure that that would have made computers more expensive. Perhaps a variant of Pascal or PL/1 would have taken the niche that C obtained. Either of those would have been better, though no great shakes either.
I agree that I cannot know how the world would have turned out
without C and Microsoft; but likewise, you cannot know for sure
that computer science would be ten years farther along by now :)
(And I personally feel my alternate universe is more realistic
than yours, but then everybody should feel that way about their
own private alternate universe.)

The reason for Python's wide acceptance isn't because it is particularly well-designed compared to other programming languages that had similar goals of simplicity and minimality (it also isn't poorly designed compared to any of them -- it is on par with the better ones) -- the reason for its success is that it was in the right place at the right time, it had a lightweight implementation, was well-suited to scripting, and it came with batteries included.

I'd vote this as the statement in this group most likely to start a religious flamewar since the lisp threads died down.
The only way it could start a religious flamewar is if there are people who wish to present themselves as fanboys. I have said nothing extreme -- just what is obvious: There are many nice computer programming languages -- Python is but one of them. If someone wishes to disagree with this, then they would have to argue that there are no other nice programming languages. Now that would be a flame!
Well, I guess I may have read more into your original statement
than you put there. You wrote "similar goals of simplicity
and minimality", and to me, the language is pretty much a gestalt
whole, in the sense that when I read "similar goals" I was thinking
about all the features that, to me, make Python Python. These goals
actually include the lightweight implementation, the portability,
the suitability to scripting, etc. In one way or another, I feel
that these contribute to its simplicity and minimality, and on
rereading your words, I think you were probably mainly referring
to the syntax and semantics. (Even there, however, as I think Alex
has pointed out, design decisions were made which might make the
semantics less than optimal, yet contribute heavily to the small
size and portability of the language.)
I'm not sure what you are getting at. There were many nice programming languages before Python, but not many of them, other than Perl, were portable and well-suited to scripting.
I was just challenging you to defend a position which it appears
in hindsight you didn't really take :) 3) Do you _really_ think that all the batteries were included when Python first came out?
It certainly was not a particularly popular language until it came with pretty hefty batteries. There are many other languages that would have been equally popular before Python started coming with batteries.
Here is one area where I think the genius of the design shows
through. Even _before_ the batteries were included, in a crowded
field of other languages, Python was good enough to acquire enough
mindshare to start the snowball rolling, by attracting the kind of
people who can actually build batteries.
Do you even think that Python has more batteries _right_ _now_ than Perl (via CPAN), or that some competing language couldn't or hasn't already been designed which can coopt other languages' batteries?
Um, the last time I checked Perl was still a lot more popular than Python, so once again I'm not sure what you are getting at. Regarding whether or not some future language might also come with batteries and therefore steal away Python's niche merely due to having more batteries: Anything is possible, but this will be an uphill battle for another language because once a language takes a niche, it is very difficult for the language to be displaced. On the other hand, a new language can take over a sub-niche by providing more batteries in a particular area. PHP would be an example of this.
We are in agreement that Perl has more batteries than Python,
and also more "marketshare." To me, this is yet another testament
to Python's good design -- it is in fact currently on a marketshare
ramp, mostly because it attracts the kind of people who can do an
excellent job of writing the batteries.

I can accept the premise that, for Python to enjoy the acceptance it does today, Guido had to be lucky in addition to being an excellent language designer. But if I were to accept the premise that Python's popularity is due to sheer luck alone my only logical course of action would be to buy Guido a plane ticket to Vegas and front him $10,000 worth of chips, because he has been extremely lucky for many years now.
I never claimed *anything* like the assertion that Python's popularity is due to luck alone!
In the post I was responding to, you wrote "The reason for Python's wide
acceptance isn't because it is particularly well-designed compared to
other programming languages," and you also used the phrase "in the right
place at the right time." To me, these statements taken together implied
that you thought the process leading to Python's ascending popularity was
mostly stochastic.
Your later posting (and a more careful reading of your original post) help
me to put your words in the proper context, and seem to indicate that our
opinions on the subject are not as divergent as I first thought they were.
Regards,
Pat pm*****@speakeasy.net (Patrick Maupin) writes: Douglas Alan wrote:
Describing reduce() in 10 seconds is utterly trivial to anyone with an IQ above 100, whether or not they have ever used sum()
Well, yeah, but they may not need or want to learn or remember it. And then there are the corner cases, e.g. sum([]) vs. reduce(operator.add,[]) (which throws an exception).
If someone can't understand this quickly, then they shouldn't be programming!
Again, it's not "can't", it's whether they need to or not.
If you don't want to learn a cool concept that will only take you 60
seconds to learn, then you shouldn't be programming! Or you can stick
to loops.
The argument that some programmers might be too lazy to want to learn
powerful, simple, and elegant features that can be taught in seconds,
is no good reason to remove such features from Python and bloat Python
by replacing them with a plethora of less powerful, less elegant
features.
I'm sorry, but you are incorrect. When I took CS-101, we learned assembly language, then were assigned to write a text editor in assembly language, then we learned LISP and were assigned to write some programs in LISP, and then we learned C, and then we were assigned to implement LISP in C.
If you can write a !$#@!!%# LISP interpreter in C, you no doubt can figure out something as mind-achingly simple as reduce()!
Ahh, the lisp background. I _knew_ that would come out sometime :)
Knowing about reduce() doesn't come from a LISP background, since it
is uncommon to use reduce() in LISP. There are few binary operators
in LISP, so instead of doing reduce(+, seq), in LISP, you would
typically do apply(+, seq). Knowing about reduce() comes from the
part of your CS education in which they give you a small taste of what
it is like to program in a purely combinator-based style. E.g., you
might have a problem set where they ask you to solve the same problem
in three different ways: (1) using iteration, (2) using recursion, (3)
using only combinators.
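The problem-set exercise described above can be sketched for the simplest possible problem, summing a list (the helper names here are illustrative, not from any curriculum):

```python
# The same problem solved three ways: (1) iteration, (2) recursion,
# (3) combinators only (reduce with a named binary function).
from functools import reduce
import operator

def sum_iter(seq):
    total = 0
    for x in seq:
        total += x
    return total

def sum_rec(seq):
    return 0 if not seq else seq[0] + sum_rec(seq[1:])

def sum_comb(seq):
    return reduce(operator.add, seq, 0)

nums = [1, 2, 3, 4]
assert sum_iter(nums) == sum_rec(nums) == sum_comb(nums) == 10
```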
Besides, if you weren't exposed at all to LISP (or a LISP-like
language) while getting a CS degree, it wasn't a very good CS
program! They're going to teach you AI techniques in a different
language? That would be rather silly.
As I and others have pointed out, it's not a matter of assuming they can't learn, it's a matter of assuming they have better things to do. Many people can write all the useful programs they will ever need without reduce, and sum() makes the percentage of Python users who can do this even higher.
And as I have pointed out, it goes against the principle of simplicity
and expressiveness to remove an easy to use and easy to learn simple
and powerful feature with a slew of specific, tailored features. If
reduce() can be relegated to a library or for the user to implement
for himself, then so can sum(). If the language is to only have one,
it should be reduce().
I agree that I cannot know how the world would have turned out without C and Microsoft; but likewise, you cannot know for sure that computer science would be ten years farther along by now :)
I didn't say it held back Computer Science -- Computer Science went
along fine. I said it held back software development.
That's not to say that something else wouldn't have taken C's place in
holding back software development, but, in that case, I'd be railing
against that instead.
Here is one area where I think the genius of the design shows through. Even _before_ the batteries were included, in a crowded field of other languages, Python was good enough to acquire enough mindshare to start the snowball rolling, by attracting the kind of people who can actually build batteries.
I think that Python always came with batteries, since it was designed
to be the scripting language for an OS called Amoeba that Guido was
working on. There were precious few other well-designed,
well-implemented languages around that the time that were aimed at
being scripting languages (and didn't run only on Lisp machines or
what have you).
|>oug
Douglas Alan <ne****@mit.edu> writes: Knowing about reduce() doesn't come from a LISP background, since it is uncommon to use reduce() in LISP. There are few binary operators in LISP, so instead of doing reduce(+, seq), in LISP, you would typically do apply(+, seq).
Ah, that reminds me -- both sum() and reduce() can be removed from
Python by extending operator.add so that it will take any number of
arguments. Then you can add up a sequence of numbers by doing
add(*seq). The same thing goes for every other binary operator where
it makes sense to operate on more than two arguments at a time.
Now that's clean, simple, and powerful.
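The n-ary add() proposed above can be sketched as a wrapper (the real operator.add is strictly binary, so this is a hypothetical extension, not existing behavior):

```python
# A sketch of the proposed n-ary add(). Internally it still leans on
# reduce, which rather proves Erik's point downthread that reduce is
# the more primitive operation.
from functools import reduce
import operator

def add(*args):
    """N-ary addition: add(1, 2, 3) == 6; add() == 0."""
    return reduce(operator.add, args, 0)

seq = [1, 2, 3, 4]
print(add(*seq))  # 10
print(add())      # 0
```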
|>oug
In article <ma************************************@python.org >, Michael
T. Babcock <mb******@fibrespeed.net> writes
....... I'm hoping you were being sarcastic ... but I get the feeling you aren't.
Why, pray-tell, would you want an OO program to do:
results = [ func(x) for x in sequence ]
... instead of ...
results = sequence.map(func) ??
..... well actually I'm quite happy with reduce as a function or as a
method on sequences. I actually feel uncomfortable with all the plethora
of iteration tools, comprehensions etc etc. However, I'm not forced to
use them.
--
Robin Becker
Douglas Alan <ne****@mit.edu> writes:
[reduce] If someone can't understand this quickly, then they shouldn't be programming! Again, it's not "can't", it's whether they need to or not.
If you don't want to learn a cool concept that will only take you 60 seconds to learn, then you shouldn't be programming! Or you can stick to loops.
As far as reduce goes, ppl will undoubtedly take a look at the
description, understand it in well under 60 seconds, can't think of
any use for the feature during the next 60 seconds (that wouldn't be
clearer with explicit iteration), and forget it soon after turning the
page. I didn't forget it, just wondered why such an oddball feature
was a builtin. Obviously reduce can rock someone's world, but life is
too short to bother if it doesn't rock yours.
and powerful feature with a slew of specific, tailored features. If reduce() can be relegated to a library or for the user to implement for himself, then so can sum(). If the language is to only have one, it should be reduce().
I also think that reduce, sum, map and filter (and lots of others,
__builtins__ has *too much stuff*) should be removed from builtins,
but that will probably take some time (1997 years?). LC's and genexps
will take care of most of that stuff. And people can always do:
from functional import *
# map, filter, reduce, curry, ... (I want lots of these :)
There are also tons of functions that should be in sys, math or
whatever:
reload, repr, divmod, max, min, hash, id, compile, hex...
What's your pet deprecation candidate? I have always thought
`backticks` as repr has got to be the most useless feature around.
--
Ville Vainio http://www.students.tut.fi/~vainio24
Douglas Alan wrote: Ah, that reminds me -- both sum() and reduce() can be removed from Python by extending operator.add so that it will take any number of arguments.
reduce can't, since reduce doesn't require the function passed to be
operator.add.
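Erik's point can be illustrated in a line or two: reduce() accepts any binary function, not just addition.

```python
# reduce() with a non-additive function: pick the longest string
# by folding a "keep the longer of the two" comparison over the list.
from functools import reduce

words = ["a", "abc", "ab"]
longest = reduce(lambda acc, w: acc if len(acc) >= len(w) else w, words)
assert longest == "abc"
```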
--
Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \
\__/ The golden rule is that there are no golden rules. -- George
Bernard Shaw
Douglas Alan wrote: "Dave Brueck" <da**@pythonapocrypha.com> writes:
Of course I am not joking. I see no good coming from the mantra, when the mantra should be instead what I said it should be: "small, clean, simple, powerful, general, elegant"
It's really a matter of taste - both "versions" mean about the same to me (and to me both mean "get rid of reduce()" ;-) ).
No, my mantra plainly states to keep general and powerful features over specific, tailored features. reduce() is more general and powerful than sum(), and would thus clearly be preferred by my mantra.
The mantra "there should be only one obvious way to do it" apparently implies that one should remove powerful, general features like reduce() from the language, and clutter it up instead with lots of specific, tailored features like overloaded sum() and max().
That's not what _I_ understand. There's room for powerful features, but
when these features gives trouble to the people who just want something
done, then another approach should be taken.
And I'm not talking about stupid people. I'm talking about the
microbiologist/chemist/physicist/etc. who is programming because of need. CS
people would do a better job, but they are more costly to bring up to speed in
any project that requires specific knowledge in one area.
If so, clearly this mantra is harmful, and will ultimately result in Python becoming a bloated language filled up with "one obvious way" to solve every particular idiom. This would be very bad, and make it less like Python and more like Perl.
You feel ok? Perl's mantra is "More Than One Way To Do It"...
I can already see what's going to happen with sum(): Ultimately, people will realize that they may want to perform more general types of sums, using alternate addition operations. (For intance, there may be a number of different ways that you might add together vectors -- e.g, city block geometry vs. normal geometry. Or you may want to add together numbers using modular arithmetic, without worrying about overflowing into bignums.) So, a new feature will be added to sum() to allow an alternate summing function to be passed into sum(). Then reduce() will have effectively been put back into the language, only its name will have been changed, and its interface will have been changed so that everyone who has taken CS-101 and knows off the top of their head what reduce() is and does, won't easily be able to find it.
I don't get you. There's a reason why special functions can be
overloaded (__init__, __cmp__, etc.; I don't use others very often).
That would allow for this kind of special treatments. Besides GvR would
not accept your scenario.
Also, whytf do you mention so much CS101? Maybe you took the course with
LISP, assembly and Scheme, but AFAIK, not everyone has/had access to
this course. Many people learned to program way before taking CS101.
IMHO, you think that the only people that should make software is a CS
major.
Yes, there are other parts of The Zen of Python that point to the powerful and general, rather than the clutter of specific and tailored, but nobody seems to quote them these days, and they surely are ignoring them when they want to bloat up the language with unneccessary features like overloaded sum() and max() functions, rather than to rely on trusty, powerful, and elegant reduce(), which can easily and lightweightedly do everything that overloaded sum() and max() can do and quite a bit more.
GvR (or BDFL, as most people know him) has been very careful with his
design decisions. I've been only for about 2 years 'round here, but I've
seen why list comprehensions came about, why there's no ternary operator and
why Python uses indentation as block separations. These are design
decisions that GvR took.
And there's a good reason why they _are_ design decisions. (Try to guess
why. :P) To me, there is never *one* obviously "right way" to do anything
Never? I doubt this very much. When you want to add two numbers in a programming language, what's your first impulse? Most likely it is to write "a + b".
Or b + a. Perhaps we should prevent that, since that makes two obviously right ways to do it!
Even without any algebra, any kid can tell you that 1 + 2 is the same as
2 + 1. Replace 1 and 2 by a and b and you get the same result.
[snip]And that is why I wouldn't be sad if reduce() were to disappear - I don't use reduce() and _anytime_ I see reduce() in someone's code I have to slow way down and sort of rehearse in my mind what it's supposed to do and see if I can successfully interpret its meaning (and, when nobody's looking, I might even replace it with a for-loop!).
C'mon -- all reduce() is is a generalized sum or product. What's there to think about? It's as intuitive as can be. And taught in every CS curriculum. What more does one want out of a function?
|>oug
It wasn't obvious for me until later. reduce() is more likely to be used
for optimization. IIRC, some said that optimization is the root of all evil.
Just because it's _obvious_ to you, it doesn't mean it's obvious to
people who self taught programming.
--
Andres Rosado
-----BEGIN TF FAN CODE BLOCK-----
G+++ G1 G2+ BW++++ MW++ BM+ Rid+ Arm-- FR+ FW-
#3 D+ ADA N++ W OQP MUSH- BC- CN++ OM P75
-----END TF FAN CODE BLOCK-----
"Well, That's Just Prime"
"Shut up, Rattrap."
-- Rattrap and Optimus Primal, innumerable occasions
"Patrick Maupin" <pm*****@speakeasy.net> wrote in message
news:65**************************@posting.google.com... And then there are the corner cases, e.g. sum([]) vs. reduce(operator.add,[]) (which throws an exception).
The proper comparison is to reduce(operator.add, [], 0), which does
not throw an exception either. sum(seq, start=0) is equivalent to
reduce(operator.add, seq, start=0) except that sum excludes seq of
string. (The doc specifically says equivalent for number (int) seqs,
so there might be something funny with non-number, non-string seqs.)
In other words, sum is more-or-less a special case of reduce with the
within-context constants operator.add and default start=0 built in
(and the special special case of seq of strings excluded).
I think it a big mistake (that should be repaired in 3.0) that the
result start value was made optional, leading to unnecessary empty-seq
exceptions. I also think the order is wrong and should also be fixed.
I believe it was placed last, after the seq of update values, so that
it could be made optional (which it should not be). But it should
instead come before the seq, to match the update function (result,
seq-item) arg order. This reversal has confused people, including
Guido.
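Terry's two complaints (the optional start value, and the (result, seq-item) argument order) can both be seen directly. A sketch in modern Python, where reduce has moved to functools:

```python
from functools import reduce  # a builtin in Python 2, in functools since 3.0
import operator

# Without a start value, an empty sequence raises TypeError:
try:
    reduce(operator.add, [])
    raised = False
except TypeError:
    raised = True
assert raised

# With an explicit start value it matches sum([]):
assert reduce(operator.add, [], 0) == 0 == sum([])

# The update function receives (accumulated-result, next-item),
# which is why the start value arguably belongs *before* the seq:
assert reduce(lambda acc, item: acc + [item], [1, 2, 3], []) == [1, 2, 3]
```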
(Having said that, I never personally argued that reduce() should be removed from the language, but I do agree that it does not have to be part of "core" Python, and could easily be relegated to a module.)
If the builtins are reduced in 3.0, as I generally would like, I would
be fine with moving apply, map, filter, and a repaired version of
reduce to a 'fun'ctional or hof module. But the argument of some
seems to be that this batteries-included language should specifically
exclude even that.
Terry J. Reedy
Erik Max Francis <ma*@alcyone.com> writes: Douglas Alan wrote:
Ah, that reminds me -- both sum() and reduce() can be removed from Python by extending operator.add so that it will take any number of arguments.
reduce can't, since reduce doesn't require the function passed to be operator.add.
Well, as I said, for this to be true, *all* binary operators (that it
makes sense to) would have to be upgraded to take an arbitrary number
of arguments, like they do in Lisp.
|>oug
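A Lisp-style variadic add of the kind described can itself be sketched on top of reduce (a hypothetical helper for numbers only, not a proposed change to operator.add):

```python
from functools import reduce
import operator

def add(*args):
    """Lisp-style variadic +: add() -> 0, add(a, b, c) -> a + b + c."""
    # The numeric start value 0 limits this sketch to numbers.
    return reduce(operator.add, args, 0)

assert add() == 0
assert add(7) == 7
assert add(1, 2, 3, 4) == 10
```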
BW Glitch <bw******@hotpop.com> writes: The mantra "there should be only one obvious way to do it" apparently implies that one should remove powerful, general features like reduce() from the language, and clutter it up instead with lots of specific, tailored features like overloaded sum() and max().
That's not what _I_ understand. There's room for powerful features, but when these features gives trouble to the people who just want something done, then another approach should be taken.
If there's a problem with people not understaning how to sum numbers
with reduce(), then the problem is with the documentation, not with
reduce() and the documentation should be fixed. It is quite easy to
make this fix. Here it is:
FAQ
---
Q: How do I sum a sequence of numbers?
A: from operator import add
reduce(add, seq)
Problem solved.
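One caveat worth appending to that FAQ answer, as other posts in this thread note: the two-argument form raises on an empty sequence, so a defensive version passes an initial value. A sketch in modern Python:

```python
from functools import reduce  # reduce is a builtin in Python 2
from operator import add

assert reduce(add, [1, 2, 3]) == 6

# An explicit initial value avoids the empty-sequence TypeError:
assert reduce(add, [], 0) == 0
```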
And I'm not talking about stupid people. I'm talking about the microbiologist/chemist/physicist/etc. who is programming because of need.
If a microbiologist cannot understand the above, they have no business
being a microbiologist. I learned reduce() in high school, and it
didn't take me any longer than the 60 seconds I claim it will take
anyone with a modicum of intelligence. If so, clearly this mantra is harmful, and will ultimately result in Python becoming a bloated language filled up with "one obvious way" to solve every particular idiom. This would be very bad, and make it less like Python and more like Perl.
You feel ok? Perl's mantra is "More Than One Way To Do It"...
If both the mantras cause a language to have general features
replaced with a larger number of specialized features that accomplish
less, then both mantras are bad.
I can already see what's going to happen with sum(): Ultimately, people will realize that they may want to perform more general types of sums, using alternate addition operations. (For instance, there may be a number of different ways that you might add together vectors -- e.g, city block geometry vs. normal geometry. Or you may want to add together numbers using modular arithmetic, without worrying about overflowing into bignums.) So, a new feature will be added to sum() to allow an alternate summing function to be passed into sum(). Then reduce() will have effectively been put back into the language, only its name will have been changed, and its interface will have been changed so that everyone who has taken CS-101 and knows off the top of their head what reduce() is and does, won't easily be able to find it.
I don't get you. There's a reason why special functions can be overloaded (__init__, __cmp__, etc.; I don't use others very often). That would allow for this kind of special treatments. Besides GvR would not accept your scenario.
There are often multiple different ways to add together the same data
types, and you wouldn't want to have to define a new class for each
way of adding. For instance, you wouldn't want to have to define a
new integer class to support modular arithmetic -- you just want to
use a different addition operation.
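The modular-arithmetic case makes the point concretely: no new integer class, just a different binary operation handed to reduce. A sketch (the modulus 7 is illustrative, not from the original post):

```python
from functools import reduce

MOD = 7  # illustrative modulus

def add_mod(a, b):
    """Addition in Z/7Z -- an alternate 'addition operation' for reduce."""
    return (a + b) % MOD

# Plain ints, summed with a non-default operation:
assert reduce(add_mod, [3, 5, 6], 0) == (3 + 5 + 6) % 7
```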
Also, whytf do you mention so much CS101?
Because anyone who has studied CS, which should include a significant
percentage of programmers, will know instantly where to look for the
summing function if it is called reduce(), but they won't necessarily
know to look for sum(), since languages don't generally have a
function called sum(). And everyone else will not know to look for
either, so they might as well learn a more powerful concept in the
extra 30 seconds it will take them.
Maybe you took the course with LISP, assembly and Scheme, but AFAIK, not everyone has/had access to this course. Many people learned to program way before taking CS101.
As did, I, and I had no problem with reduce() when I learned it long
before I took CS-101.
IMHO, you think that the only people that should make software is a CS major.
Did I say that? To me, there is never *one* obviously "right way" to do anything
Never? I doubt this very much. When you want to add two numbers in a programming language, what's your first impulse? Most likely it is to write "a + b".
Or b + a. Perhaps we should prevent that, since that makes two obviously right ways to do it!
Even without any algebra, any kid can tell you that 1 + 2 is the same as 2 + 1. Replace 1 and 2 by a and b and you get the same result.
Yes, but they are still two *different* ways to to get to that result.
Starting with a and adding b to it, is not the same thing as starting
with b and adding a to it. It is only the commutative law of
arithmetic, as any good second grade student can tell you, that
guarantees that the result will be the same. On the other hand, not
all mathematical groups are abelian, and consequently a + b != b + a
in some groups.
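Python itself supplies a non-commutative "+": string concatenation (a monoid rather than a group, but it makes the point). A small sketch:

```python
from functools import reduce
import operator

# '+' need not commute:
a, b = "spam", "eggs"
assert a + b != b + a

# reduce folds strictly left to right, so operand order is preserved:
assert reduce(operator.add, ["a", "b", "c"]) == "abc"
```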
C'mon -- all reduce() is is a generalized sum or product. What's there to think about? It's as intuitive as can be. And taught in every CS curriculum. What more does one want out of a function? |>oug
It wasn't obvious for me until later. reduce() is more likely to be used for optimization. IIRC, some said that optimization is the root of all evil.
I don't know what you are getting at about "optimization". Reduce()
exists for notational convenience--i.e., for certain tasks it is easer
to read, write, and understand code written using reduce() than it
would be for the corresponding loop--and understanding it is no more
difficult than understanding that a summing function might let you
specify the addition operation that you'd like to use, since that's
all that reduce() is!
Just because it's _obvious_ to you, it doesn't mean it's obvious to people who self taught programming.
It was obvious to me when I was self-taught and I taught myself APL in
high-school. It also seemed obvious enough to all the physicists who
used APL at the lab where I was allowed to hang out to teach myself
APL.
|>oug
Terry Reedy wrote:
... personally argued that reduce() should be removed from the language, but I do agree that it does not have to be part of "core" Python, and could easily be relegated to a module.)
If the builtins are reduced in 3.0, as I generally would like, I would be fine with moving apply, map, filter, and a repaired version of reduce to a 'fun'ctional or hof module. But the argument of some seems to be that this batteries-included language should specifically exclude even that.
A functional module would be neat. A great way to enhance the chance
that there will be one would be starting one today (e.g. on sourceforge),
ideally with both pure-Python and C-helped (or pyrex, etc) implementations,
and get it reasonably-widely used, debugged, optimized. There's plenty
of such ideas around, but gathering a group of people particularly keen
and knowledgeable about functional programming and hashing out the "best"
design for such a module would seem to be a productive endeavour.
Also, I advocate that 3.0 should have a module or package (call it
"legacy" for example) such that, if you started your program with
some statement such as:
from legacy import *
compatibility with Python 2.4 or thereabouts would be repaired as
much as feasible, to ease running legacy code, and to the expense
of performance, 'neatness' and all such other great things if needed
(e.g., no "repaired" versions or anything -- just compatibility).
One reasonably popular counterproposal to that was to have it as
from __past__ import *
by analogy with today's "from __future__". I'd also like to make it
easy to get this functionality with a commandline switch, like is
the case today with -Q specifically for _division_ legacy issues.
But mostly, each time I mention that on python-dev, I'm rebuffed with
remarks about such issues being way premature today. Somebody's
proposed starting a new list specifically about 3.0, to make sure
remarks and suggestions for it made today are not lost about more
day-to-day python-dev traffic, but I don't think anything's been
done about that yet.
Alex
Douglas Alan wrote: Erik Max Francis <ma*@alcyone.com> writes:
Douglas Alan wrote:
Ah, that reminds me -- both sum() and reduce() can be removed from Python by extending operator.add so that it will take any number of arguments.
reduce can't, since reduce doesn't require the function passed to be operator.add.
Well, as I said, for this to be true, *all* binary operators (that it makes sense to) would have to be upgraded to take an arbitrary number of arguments, like they do in Lisp.
Your definition of "operator" appears to be widely at variance with
the normally used one; I've also noticed that in your comparisons of
reduce with APL's / , which DOES require specifically an OPERATOR (of
the binary persuasion) on its left. reduce has no such helpful
constraints: not only does it allow any (callable-as-binary) FUNCTION
as its first argument, but any other CALLABLE at all. Many of the
craziest, most obscure, and worst-performing examples of use of
reduce are indeed based on passing as the first argument some callable
whose behaviour is anything but "operator-like" except with respect to
the detail that it IS callable with two arguments and returns something
that may be passed back in again as the first argument on the next call.
[see note later].
Anyway, to remove 'reduce' by the trick of "upgrading to take an
arbitrary number of arguments", that "upgrade" should be applied to
EVERY callable that's currently subsceptible to being called with
exactly two arguments, AND the semantics of "calling with N arguments"
for N != 2 would have to be patterned on what 'reduce' would do
them -- this may be totally incompatible with what the callable does
now when called with N != 2 arguments, of course. For example,
pow(2, 10, 100)
now returns 24, equal to (2**10) % 100; would you like it to return
10715086071862673209484250490600018105614048117055336074437503883703510511249361224931983788156958581275946729175531468251871452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062914571196477686542167660429831652624386837205668069376
instead...?-)
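The pow example can be checked: folding pow over those three arguments really does give (2**10)**100 = 2**1000 rather than pow's three-argument modular result. A sketch in modern Python:

```python
from functools import reduce

# Three-argument pow is modular exponentiation:
assert pow(2, 10, 100) == 24  # (2**10) % 100

# A reduce-style "variadic pow" would instead fold left to right:
assert reduce(pow, [2, 10, 100]) == (2 ** 10) ** 100 == 2 ** 1000
```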
I doubt there would be any objection to upgrading the functions in
module operator in the way you request -- offer a patch, or make a
PEP for it first, I would hope it would be taken well (I can't speak
for Guido, of course). But I don't think it would make much more of
a dent in the tiny set of reduce's current use cases.
[note on often-seen abuses of FP built-ins in Python]
Such a typical abuse, for example, is connected with the common idiom:
for item in seq: acc.umul(item)
which simply calls the same one-argument callable on each item of seq.
Clearly the idiom must rely on some side effect, since it ignores the
return values, and therefore it's totally unsuitable for shoehorning
into "functional-programming" idioms -- functional programming is
based on an ABSENCE of side effects.
Of course, that something is totally inappropriate never stops fanatics
of functional programming, that have totally misunderstood what FP is
all about, from doing their favourite shoehorning exercises. So you see:
map(acc.umul, seq)
based on ignoring the len(seq)-long list of results; or, relying on the
fact that acc.umul in fact returns None (which evaluates as false),
filter(acc.umul, seq)
which in this case just ignores an _empty_ list of results (I guess
that's not as bad as ignoring a long list of them...?); or, of course:
reduce(lambda x, y: x.umul(y) or x, seq, acc)
which does depend strictly on acc.umul returning a false result so
that the 'or' will let x (i.e., always acc) be returned; or just to
cover all bases
reduce(lambda x, y: (x.umul(y) or x) and x, seq, acc)
Out of all of these blood-curdling abuses, it seems to me that the
ones abusing 'reduce' are the very worst, because of the more
complicated signature of reduce's first argument, compared to the
first argument taken by filter, or map with just one sequence.
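The question that opened this thread is exactly this trap: dict.update returns None, so reduce(lambda x, y: x.update(y), l) feeds None back in on the second step. A sketch contrasting the plain loop with the 'or x' workaround shown above:

```python
from functools import reduce

dicts = [{'a': 1}, {'b': 2}, {'c': 3}]

# The plain loop -- no return-value tricks needed:
merged = {}
for d in dicts:
    merged.update(d)
assert merged == {'a': 1, 'b': 2, 'c': 3}

# The reduce workaround relies on update() returning None (falsy),
# which is exactly the kind of abuse described above:
merged2 = reduce(lambda x, y: x.update(y) or x, dicts, {})
assert merged2 == merged
```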
To be sure, one also sees abuses of list comprehensions here:
[acc.umul(item) for item in seq]
which basically takes us right back to the "abuse of map" case.
List comprehension is also a rather FP-ish construct, in fact,
or we wouldn't have found it in Haskell to steal/copy it...;-).
Alex
BW Glitch wrote:
... It wasn't obvious for me until later. reduce() is more likely to be used for optimization. IIRC, some said that optimization is the root of all evil.
That's *PREMATURE* optimization (and "of all evil IN PROGRAMMING", but
of the two qualifications this one may be less crucial here) -- just like
misquoting "The LOVE OF money is the root of all evil" as "MONEY is the
root of all evil", so does this particular misquote trigger me;-).
Optimization is just fine IN ITS PROPER PLACE -- after "make it work"
and "make it right", there MAY come a time where "make it fast" applies.
It's extremely unlikely that 'reduce' has a good place in an optimization
phase, of course -- even when some operator.foo can be found:
[alex@lancelot tmp]$ timeit.py -c -s'import operator' -s'xs=range(1,321)'
'r=reduce(operator.mul, xs)'
1000 loops, best of 3: 450 usec per loop
[alex@lancelot tmp]$ timeit.py -c -s'import operator' -s'xs=range(1,321)'
'r=1' 'for x in xs: r*=x'
1000 loops, best of 3: 440 usec per loop
reduce shows no advantage compared with a perfectly plain loop, and
when no operator.bar is around and one must use lambda:
[alex@lancelot tmp]$ timeit.py -c -s'import operator' -s'xs=range(1,321)'
'r=reduce(lambda x, y: pow(x, y, 100), xs)'
1000 loops, best of 3: 650 usec per loop
[alex@lancelot tmp]$ timeit.py -c -s'import operator' -s'xs=range(1,321)'
'r=1' 'for x in xs: r = pow(r, x, 100)'
1000 loops, best of 3: 480 usec per loop
reduce gets progressively slower and slower than the plain loop. It's
just no bloody good, except maybe for people short-sighted enough to
believe that it's "more concise" than the loop (check out the lengths
of the timed statements above to dispel THAT myth) and who'd rather
slow things down by (say) 35% than stoop to writing a shorter, plainer
loop that every mere mortal would have no trouble understanding.
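Those shell transcripts can be reproduced with the timeit module directly (modern spelling; the absolute numbers will of course differ by machine and Python version, so none are asserted here):

```python
import math
import operator
import timeit
from functools import reduce

# Same workload as the transcript above: the product of 1..320.
loop_time = timeit.timeit(
    "r = 1\nfor x in xs: r *= x",
    setup="xs = range(1, 321)",
    number=1000,
)
reduce_time = timeit.timeit(
    "r = reduce(operator.mul, xs)",
    setup="from functools import reduce; import operator; xs = range(1, 321)",
    number=1000,
)

# Both spellings compute 320!, so correctness is easy to cross-check:
assert reduce(operator.mul, range(1, 321)) == math.factorial(320)
```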
Just because it's _obvious_ to you, it doesn't mean it's obvious to people who self taught programming.
That may be the real motivation for the last-ditch defenders of reduce:
it's one of the few (uselessly) "clever" spots in Python (language and
built-ins) where they may show off their superiority to mere mortals,
happily putting down as sub-human anybody who doesn't "get" higher-order
functions in 10 seconds flat (or less)...;-)
Alex
"Terry Reedy" wrote: "Patrick Maupin" wrote: And then there are the corner cases, e.g. sum([]) vs. reduce(operator.add,[]) (which throws an exception). The proper comparison is to reduce(operator.add, [], 0), which does not throw an exception either. sum(seq, start=0) is equivalent to reduce(operator.add, seq, start=0) except that sum excludes seq of string. (The doc specifically says equivalent for number (int) seqs, so there might be something funny with non-number, non-string seqs.) In other words, sum is more-or-less a special case of reduce with the within-context constants operator.add and default start=0 built in (and the special special case of seq of strings excluded).
I agree. Please remember that my post was arguing that it would
take more than 30 seconds to teach someone reduce() instead of sum().
This is precisely because sum() was deliberately chosen to special-case
the most common uses of reduce, including not only the add operator,
but also the default initial value.
I think it a big mistake (that should be repaired in 3.0) that the result start value was made optional, leading to unnecessary empty-seq exceptions. I also think the order is wrong and should also be fixed. I believe it was placed last, after the seq of update values, so that it could be made optional (which it should not be). But it should instead come before the seq, to match the update function (result, seq-item) arg order. This reversal has confused people, including Guido.
That makes some sense, but you'd _certainly_ have to move and/or rename
the function in that case. There's breaking some backward compatibility,
and then there's taking a sledgehammer to things which used to work
perfectly.
As an aside, if sum() grew an optional _third_ parameter, which
was the desired operator, sum() would FULLY recreate the functionality
of reduce(), but with the same default behavior that was deemed
desirable enough to create the sum() function. This is similar to
your preferred form in that the starting value is not optional
when you specify the operator (simply because you have to use
three parameters to specify the operator), differing only in
the order of the first two parameters.
Although this is not quite your preferred form, perhaps those in such
a rush to remove reduce() should consider this slight enhancement to
sum(), to mollify those who don't want to see the functionality disappear,
but who could care less about the name of the function which provides
the functionality (e.g. sum(mylist,1,operator.mul) is slightly
counterintuitive).
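The three-parameter sum() suggested here can be sketched as a thin wrapper over reduce (the name sum3 is hypothetical; the real sum() never grew an operator parameter):

```python
from functools import reduce
import operator

def sum3(seq, start=0, op=operator.add):
    """sum() plus an optional operator argument -- a hypothetical extension."""
    return reduce(op, seq, start)

assert sum3([1, 2, 3]) == 6                       # ordinary sum by default
assert sum3([]) == 0                              # empty seq stays safe
assert sum3([1, 2, 3, 4], 1, operator.mul) == 24  # the "counterintuitive" spelling
```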
If the builtins are reduced in 3.0, as I generally would like, I would be fine with moving apply, map, filter, and a repaired version of reduce to a 'fun'ctional or hof module. But the argument of some seems to be that this batteries-included language should specifically exclude even that.
I'm taking a wait-and-see attitude on this. I don't think any of the
implementers have such a vested interest in being "right" that these
functions will be removed at all cost. As soon as the implementers
start porting their _own_ code to 3.0, I believe we'll starting getting
useful feedback, either of the form "itertools et al. has everything
I need", or "boy, this SUCKS! I really need map!"
(I'm curious, though, why you included "apply" in this list. I've
personally never needed it since the * enhancement to the call syntax.)
Regards,
Pat
Alex Martelli wrote: BW Glitch wrote: ...
It wasn't obvious for me until later. reduce() is more likely to be used for optimization. IIRC, some said that optimization is the root of all evil.
That's *PREMATURE* optimization (and "of all evil IN PROGRAMMING", but of the two qualifications this one may be less crucial here) -- just like misquoting "The LOVE OF money is the root of all evil" as "MONEY is the root of all evil", so does this particular misquote trigger me;-).
Sorry, I didn't remember the quote completely. :S But that was the point
I wanted to make.
Andres Rosado