Bytes IT Community

Why return None?

It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.

For instance

def f(lst1, lst2):
g((lst1 + lst2).reverse()) # doesn't work!

you need to say

def f(lst1, lst2):
a = lst1 + lst2
a.reverse()
g(a)

this is actually getting in my way a lot when scripting Blender - for
instance, I can't say move(Vector([a,b,c]).normalize()), I have to do
a = Vector([a,b,c])
a.normalize()
move(a)

but it seems to be recommended practice rather than a fault in the
Blender API, since the standard list does it. Is there any drawback to
returning self rather than None?

martin



Jul 18 '05 #1
47 Replies


On Wed, 25 Aug 2004 08:26:26 GMT, Martin DeMello
<ma***********@yahoo.com> wrote:
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.


list.reverse() modifies the list in place. The python idiom is that
these don't return a reference to the modified list. Although note the
new list.sorted() method in 2.4...

Anthony
Jul 18 '05 #2

Martin DeMello wrote:
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.

With newstyle classe you can subclass the list class and make some
functions return self.

class List(list):

    def sort(self):
        list.sort(self)
        return self

>>> l = List([1,2,4,2,5,2,6])
>>> print l.sort()
[1, 2, 2, 2, 4, 5, 6]


Well ok. Sort is a bad example, as you could just use sorted() instead.
regards Max M
Jul 18 '05 #3

Anthony Baxter <an***********@gmail.com> wrote:
On Wed, 25 Aug 2004 08:26:26 GMT, Martin DeMello
<ma***********@yahoo.com> wrote:
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.


list.reverse() modifies the list in place. The python idiom is that
these don't return a reference to the modified list. Although note the


Yes, but why? I mean, is there either an advantage to returning None or
some inherent danger in returning self?

martin
Jul 18 '05 #4

Martin DeMello wrote:
Anthony Baxter <an***********@gmail.com> wrote:
On Wed, 25 Aug 2004 08:26:26 GMT, Martin DeMello
<ma***********@yahoo.com> wrote:
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.


list.reverse() modifies the list in place. The python idiom is that
these don't return a reference to the modified list. Although note the

Yes, but why? I mean, is there either an advantage to returning None or
some inherent danger in returning self?

The danger is that you might forget about the side effect.

l = [1, 2, 4, 3]
sorted = l.sort()
... # do something with the "sorted" list
print l # I want the unsorted list here, but oops...

In-place object modification is often faster, but you lose the original
data - this should be explicitly visible in the code.
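In today's Python the trade-off is explicit in the spelling: the copying builtin keeps the original, the method destroys it. A small sketch (the name `data` is arbitrary; sorted() is the 2.4-era builtin mentioned elsewhere in this thread):

```python
data = [1, 2, 4, 3]

# sorted() returns a new list and leaves the original untouched
ordered = sorted(data)
print(ordered)   # [1, 2, 3, 4]
print(data)      # [1, 2, 4, 3] -- original order preserved

# list.sort() mutates in place and returns None
result = data.sort()
print(result)    # None
print(data)      # [1, 2, 3, 4] -- the unsorted original is gone
```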
Jul 18 '05 #5

Martin DeMello wrote:
Anthony Baxter <an***********@gmail.com> wrote:
On Wed, 25 Aug 2004 08:26:26 GMT, Martin DeMello
<ma***********@yahoo.com> wrote:
> It seems to be a fairly common pattern for an object-modifying method
> to return None - however, this is often quite inconvenient.


list.reverse() modifies the list in place. The python idiom is that
these don't return a reference to the modified list. Although note the


Yes, but why? I mean, is there either an advantage to returning None or
some inherent danger in returning self?

martin


I think Guido would rather have newbies stumbling over

>>> a = list("abc")
>>> zip(a, a.reverse())
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: zip argument #2 must support iteration

than

>>> class List(list):
...     def reverse(self):
...         list.reverse(self)
...         return self
...
>>> a = List("abc")
>>> zip(a, a.reverse())
[('c', 'c'), ('b', 'b'), ('a', 'a')]

The latter is more likely to remain undetected until some damage is done.
Still, I would prefer it.

Peter

Jul 18 '05 #6

Martin DeMello wrote:
Anthony Baxter <an***********@gmail.com> wrote:
On Wed, 25 Aug 2004 08:26:26 GMT, Martin DeMello
<ma***********@yahoo.com> wrote:
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.


list.reverse() modifies the list in place. The python idiom is that
these don't return a reference to the modified list. Although note the


Yes, but why? I mean, is there either an advantage to returning None or
some inherent danger in returning self?


It is a design philosophy. Explicit is better than implicit.

what would you expect a_list.sort() to return?

If it returns a list, you would expect it to be a sorted copy of the
list. Not the list itself.

But for performance reasons the list is sorted in place.

So if you modify the list in place, why should sort() then return the list?

That the sort() method returns None is actually a pedagogical tool to
tell the programmer that the list is modified in place.

If it had returned the list the programmer would later be surprised to
find that the list had been modified. It would seem like a hard-to-find,
nasty side effect. The way it is now is very explicit and easy to find out.
regards Max M
Jul 18 '05 #7

On Wed, 25 Aug 2004 10:27:51 GMT,
Martin DeMello <ma***********@yahoo.com> wrote:
Anthony Baxter <an***********@gmail.com> wrote:
On Wed, 25 Aug 2004 08:26:26 GMT, Martin DeMello
<ma***********@yahoo.com> wrote:
> It seems to be a fairly common pattern for an object-modifying method to
> return None - however, this is often quite inconvenient.
list.reverse() modifies the list in place. The python idiom is that
these don't return a reference to the modified list. Although note the

Yes, but why? I mean, is there either an advantage to returning None or
some inherent danger in returning self?


If list.sort returned the sorted list, how many lists would there be
after this code executed?

original_list = a_function_that_returns_a_list( )
sorted_list = original_list.sort( )

HTH,
Dan

--
Dan Sommers
<http://www.tombstonezero.net/dan/>
Never play leapfrog with a unicorn.
Jul 18 '05 #8

Anthony Baxter <an***********@gmail.com> writes:
On Wed, 25 Aug 2004 08:26:26 GMT, Martin DeMello
<ma***********@yahoo.com> wrote:
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.


list.reverse() modifies the list in place. The python idiom is that
these don't return a reference to the modified list. Although note the
new list.sorted() method in 2.4...


It's a builtin now (as is reversed).

Cheers,
mwh

--
Back in the old days, software would grow until it could send and
receive e-mail, but now that even the virusses are doing that, the
fashion has changed, and now software evolves until it has venomous
fangs, the better to do serious damage when it sucks. -- AdB, asr
Jul 18 '05 #9

Dan Sommers <me@privacy.net> wrote:
If list.sort returned the sorted list, how many lists would there be
after this code executed?

original_list = a_function_that_returns_a_list( )
sorted_list = original_list.sort( )


One - sorted_list would be a reference to the (now sorted) original
list. The newbie problem could be solved by having <verb> and <verbed>
versions of each destructive method - i.e. list.sort() to sort the list
in-place and return it, and list.sorted() to return a new, sorted list,
and experienced users could chain methods and all that other good stuff.
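Martin's &lt;verb&gt;/&lt;verbed&gt; scheme can be sketched as a subclass (the name `ChainList` is made up for illustration; in modern Python the copying half is spelled with the builtin sorted()):

```python
class ChainList(list):
    """A list whose sort() sorts in place *and* returns self."""

    def sort(self, **kwargs):
        list.sort(self, **kwargs)  # in-place sort, as usual
        return self                # but hand the list back for chaining

    def sorted(self, **kwargs):
        return ChainList(sorted(self, **kwargs))  # new sorted copy


a = ChainList([3, 1, 2])
print(a.sort())    # [1, 2, 3] -- the same (now sorted) object
b = ChainList([3, 1, 2])
print(b.sorted())  # [1, 2, 3] -- a fresh copy
print(b)           # [3, 1, 2] -- b itself untouched
```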

martin
Jul 18 '05 #10

In article <uK%Wc.213074$gE.193548@pd7tw3no>,
Martin DeMello <ma***********@yahoo.com> wrote:
Dan Sommers <me@privacy.net> wrote:
If list.sort returned the sorted list, how many lists would there be
after this code executed?

original_list = a_function_that_returns_a_list( )
sorted_list = original_list.sort( )


One - sorted_list would be a reference to the (now sorted) original
list. The newbie problem could be solved by having <verb> and <verbed>
versions of each destructive method - i.e. list.sort() to sort the list
in-place and return it, and list.sorted() to return a new, sorted list,
and experienced users could chain methods and all that other good stuff.

martin


The problem with sort() vs. sorted() is that it's obscure. I'm not a
big fan of quoting gospel, but it seems to me that it violates the
gospel of "explicit is better than implicit".

If I took 10 programmers who were generally familiar with typical OO
syntax, but not specifically with Python and asked them what list.sort()
and list.sorted() did, I imagine all of them would say something like,
"Well, they both obviously sort a list, and clearly the existence of two
different methods means they do it differently somehow, but I can't for
the life of me guess what the difference is".

Explicit would be something like having two methods named in ways that
people could figure out without having to RTFM:

list.sort () ==> returns a sorted copy of the list
list.sort_in_place () ==> sorts in place, returns None

or if you prefer to do it the other way:

list.sort () ==> sorts in place, returns None
list.sort_copy () ==> returns a sorted copy of the list

But, personally, I find that all kind of silly; IMHO, list.sort ()
should have just returned self to begin with and then we wouldn't be
having this discussion.

heretic-ly yours.
Jul 18 '05 #11

In article <41*********************@dread12.news.tele.dk>,
Max M <ma**@mxm.dk> wrote:
It is a design philosophy. Explicit is better than implicit.

what would you expect a_list.sort() to return?

If it returns a list, you would expect it to be a sorted copy of the
list. Not the list itself.

But for performance reasons the list is sorted in place.

So if you modify the list in place, why should sort() then return the list?

That the sort() method returns None is actually a pedagogical tool to
tell the programmer that the list is modified in place.

If it had returned the list the programmer would later be surprised to
find that the list had been modified. It would seem like a hard-to-find,
nasty side effect. The way it is now is very explicit and easy to find out.


But isn't the fact that the list is modified in place incidental to the
fact of sorting? One is an implementation detail, and the other is the
semantic meaning you're trying to express.

In my opinion, it would make more sense to have:

[1, 3, 4, 2].sort() return [1, 2, 3, 4]
(1, 3, 4, 2).sort() return (1, 2, 3, 4)
'1342'.sort() return '1234'

and so on. As it is, we have sort working on lists but not on immutable
sequences, which is inconvenient at times.
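In current Python the builtin sorted() does accept any iterable, which covers the immutable cases listed above, at the cost of always handing back a list rather than a value of the original type (a quick sketch):

```python
print(sorted([1, 3, 4, 2]))         # [1, 2, 3, 4]
print(tuple(sorted((1, 3, 4, 2))))  # (1, 2, 3, 4) -- convert back by hand
print(''.join(sorted('1342')))      # '1234'      -- likewise for strings
```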

Just my tuppence.
Dave
Jul 18 '05 #12


[Martin DeMello]
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.

For instance

def f(lst1, lst2):
g((lst1 + lst2).reverse()) # doesn't work!


If you can use python 2.3 or newer, try with [::-1]:

>>> def f(m, n):
...     print (m + n)[::-1]
...
>>> f([1,2,3], [10,20,30])
[30, 20, 10, 3, 2, 1]



--
Ayose Cazorla León
Debian GNU/Linux - setepo
Jul 18 '05 #13

In article <da*******************************@reader0903.news.uu.net>,
Dave Opstad <da*********@agfamonotype.com> wrote:
In article <41*********************@dread12.news.tele.dk>,
Max M <ma**@mxm.dk> wrote:
That the sort() method returns None is actually a pedagogical tool to
tell the programmer that the list is modified in place.

If it had returned the list the programmer would later be surprised to
find that the list had been modified. It would seem like a hard-to-find,
nasty side effect. The way it is now is very explicit and easy to find out.


But isn't the fact that the list is modified in place incidental to the
fact of sorting? One is an implementation detail, and the other is the
semantic meaning you're trying to express.


It's not incidental to the fact that the original list is
"destroyed" by the sorting operation.

Regards. Mel.
Jul 18 '05 #14

Dave Opstad <da*********@agfamonotype.com> wrote in message news:<da*******************************@reader0903.news.uu.net>...

[discussion of why list.sort returns None instead of a list]

But isn't the fact that the list is modified in place incidental to the
fact of sorting? One is an implementation detail, and the other is the
semantic meaning you're trying to express.


The fact that the list is sorted in-place *is* semantically
meaningful. It's often important to know whether the original list is
sorted or not.
Jul 18 '05 #15

Dave Opstad <da*********@agfamonotype.com> wrote:

But isn't the fact that the list is modified in place incidental to the
fact of sorting? One is an implementation detail, and the other is the
semantic meaning you're trying to express.

Nope, especially since objects are passed by reference.

In my opinion, it would make more sense to have:

[1, 3, 4, 2].sort() return [1, 2, 3, 4]
(1, 3, 4, 2).sort() return (1, 2, 3, 4)
'1342'.sort() return '1234'

and so on. As it is, we have sort working on lists but not on immutable
sequences, which is inconvenient at times.


This again would be a difference between sort() and sorted().

My main point is that returning None is pretty useless - it 'wastes' the
return value, and doesn't allow method chaining. Returning self allows
you to either use or discard the return value depending on whether
you're interested in it, whereas using None doesn't give you any choice.
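For illustration, a sketch of what returning self would buy, using a made-up subclass (`RList`) and the f/g shape from the original post:

```python
class RList(list):
    def reverse(self):
        list.reverse(self)  # reverse in place as normal...
        return self         # ...then return self instead of None

def g(seq):
    # stand-in for whatever consumes the sequence
    return list(seq)

def f(lst1, lst2):
    # the one-liner from the original post, now legal
    return g(RList(lst1 + lst2).reverse())

print(f([1, 2, 3], [10, 20, 30]))  # [30, 20, 10, 3, 2, 1]
```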

martin
Jul 18 '05 #16

Ayose <ay***********@hispalinux.es> wrote:

If you can use python 2.3 or newer, try with [::-1]

>>> def f(m, n):
...     print (m + n)[::-1]
...
>>> f([1,2,3], [10,20,30])
[30, 20, 10, 3, 2, 1]


Thanks! Where can I look this up?

martin
Jul 18 '05 #17

Martin DeMello wrote:
Yes, but why? I mean, is there either an advantage to returning None
or
some inherent danger in returning self?


The "inherent danger" is that the user might think that it returns a new
object rather than mutating the original. Returning a new object vs.
mutating the argument and returning None is merely a convention, but
it's one used consistently in the Python standard library.

--
__ Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
/ \ San Jose, CA, USA && 37 20 N 121 53 W && AIM erikmaxfrancis
\__/ Trying to mend our hearts in vain
-- Sandra St. Victor
Jul 18 '05 #18

Martin DeMello <ma***********@yahoo.com> wrote in message news:<S2YWc.201615$M95.19420@pd7tw1no>...
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.


The FAQs provide a clue to Guido's thinking on this subject:

http://www.python.org/doc/faq/genera...he-sorted-list

Raymond Hettinger
Jul 18 '05 #19

Martin DeMello wrote:
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.


http://www.python.org/doc/faq/genera...he-sorted-list

(Not an answer to the general question you ask, unless you
extrapolate from the specific answer given there to the
more general case.)

-Peter
Jul 18 '05 #20

Martin DeMello wrote:
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.
....
this is actually getting in my way a lot when scripting Blender - for
instance, I can't say move(Vector([a,b,c]).normalize()), I have to do
a = Vector([a,b,c])
a.normalize()
move(a)


By the way, the second version is much more readable than
the first, so perhaps there is a secondary reason for this
"return None" thing in addition to the more important one...

-Peter
Jul 18 '05 #21

Peter Hansen <pe***@engcorp.com> writes:
instance, I can't say move(Vector([a,b,c]).normalize()), I have to do
a = Vector([a,b,c])
a.normalize()
move(a)


By the way, the second version is much more readable than the first,


That's a matter of opinion. The lines are shorter but there are three
times as many of them. I think programmers ought to be able to make
their own choices about this. There are a lot of different styles
that are equally legitimate.
Jul 18 '05 #22

Martin DeMello <ma***********@yahoo.com> wrote:
...
My main point is that returning None is pretty useless - it 'wastes' the
return value, and doesn't allow method chaining. Returning self allows
you to either use or discard the return value depending on whether
you're interested in it, whereas using None doesn't give you any choice.

>>> import this
The Zen of Python, by Tim Peters
[snipped]
There should be one-- and preferably only one --obvious way to do it.

Not leaving stylistic choice (which would lead to more than one obvious
way to do it) is quite consonant with the Zen of Python. Of course one
can't always reach what's preferable, but "your main point" which is
presumably meant as a criticism of this design choice comes across as
praise: the design choice follows the overall design's philosophy.

Guido doesn't like method chaining, so he made a design choice that did
not allow method chaining, and did not give several equally obvious ways
to perform some typical, important tasks. This consistency between
detailed design decisions and overall philosophy is exactly that Quality
Without a Name which makes Python so great.
Alex
Jul 18 '05 #23

Martin DeMello <ma***********@yahoo.com> wrote:
Ayose <ay***********@hispalinux.es> wrote:

If you can use python 2.3 or newer, try with [::-1]
>>> def f(m, n):
...     print (m + n)[::-1]
...
>>> f([1,2,3], [10,20,30])
[30, 20, 10, 3, 2, 1]


Thanks! Where can I look this up?


There should be recipes for such handy shortcuts in the Python Cookbook
(the current printed edition doesn't have this one as it only covers
Python 1.5.2 to 2.2, but we're preparing a second edition focusing on
Python 2.3 and 2.4).

In Python 2.4, by the way, reversed(x) [[using the new built-in function
'reversed']] is most often preferable to x[::-1]. reversed returns an
iterator, optimized for looping on, but if you need a list, tuple, etc,
you can just call list(reversed(x)) and so on.
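A sketch of the practical difference; note that reversed() yields a one-shot iterator, while the slice builds a fresh list each time:

```python
x = [1, 2, 3]

it = reversed(x)   # lazy iterator; nothing is copied yet
print(list(it))    # [3, 2, 1]
print(list(it))    # [] -- the iterator is already exhausted

print(x[::-1])     # [3, 2, 1] -- a brand-new list every time
print(x)           # [1, 2, 3] -- x itself is never modified
```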
Alex
Jul 18 '05 #24

Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
Peter Hansen <pe***@engcorp.com> writes:
instance, I can't say move(Vector([a,b,c]).normalize()), I have to do
a = Vector([a,b,c])
a.normalize()
move(a)


By the way, the second version is much more readable than the first,


That's a matter of opinion. The lines are shorter but there are three
times as many of them. I think programmers ought to be able to make
their own choices about this. There are a lot of different styles
that are equally legitimate.


But they're not equally Pythonic -- Python's philosophy is that there
should be preferably only one obvious way to do it. It's a target, a
goal, not something that can be actually reached in 100% of the cases,
but it's an excellent idea.
Alex
Jul 18 '05 #25

al*****@yahoo.com (Alex Martelli) writes:
There should be one-- and preferably only one --obvious way to do it.

Not leaving stylistic choice (which would lead to more than one obvious
way to do it) is quite consonant with the Zen of Python.


Well, what you're left with is that doing it the one obvious way
doesn't work, and you have to do it in some contorted way instead.
Jul 18 '05 #26

Peter Hansen <pe***@engcorp.com> wrote:
Martin DeMello wrote:
It seems to be a fairly common pattern for an object-modifying method to
return None - however, this is often quite inconvenient.

...
this is actually getting in my way a lot when scripting Blender - for
instance, I can't say move(Vector([a,b,c]).normalize()), I have to do
a = Vector([a,b,c])
a.normalize()
move(a)


By the way, the second version is much more readable than
the first, so perhaps there is a secondary reason for this
"return None" thing in addition to the more important one...


It depends on what you're doing - to me, the first is simply "the
normalised vector (a,b,c)", inlined. It's a single concept, much like an
inlined string is - would you want to do the following?
a = "Hello "
a = a + str(name)
print a

martin

Jul 18 '05 #27

Alex Martelli <al*****@yahoo.com> wrote:
Not leaving stylistic choice (which would lead to more than one obvious
way to do it) is quite consonant with the Zen of Python. Of course one
can't always reach what's preferable, but "your main point" which is
presumably meant as a criticism of this design choice comes across as
praise: the design choice follows the overall design's philosophy.
I still feel that the One Obvious Way should have been to return self...
Guido doesn't like method chaining, so he made a design choice that did
not allow method chaining, and did not give several equally obvious ways


But that's pretty hard to argue with :)

martin
Jul 18 '05 #28

Martin DeMello wrote:
Peter Hansen <pe***@engcorp.com> wrote:
By the way, the second version is much more readable than
the first, so perhaps there is a secondary reason for this
"return None" thing in addition to the more important one...


It depends on what you're doing - to me, the first is simply "the
normalised vector (a,b,c)", inlined. It's a single concept, much like an
inlined string is - would you want to do the following?
a = "Hello "
a = a + str(name)
print a


I'd want to do neither, often. If I had to do either, often,
I would write a function which did the required operations
and I'd call that instead. Then both it and the calling
code are readable. But I suppose that's just me... maybe
others would prefer to repeat things more.

-Peter
Jul 18 '05 #29

Martin DeMello <ma***********@yahoo.com> wrote:
Alex Martelli <al*****@yahoo.com> wrote:
Not leaving stylistic choice (which would lead to more than one obvious
way to do it) is quite consonant with the Zen of Python. Of course one
can't always reach what's preferable, but "your main point" which is
presumably meant as a criticism of this design choice comes across as
praise: the design choice follows the overall design's philosophy.


I still feel that the One Obvious Way should have been to return self...


When you design your own language, you get to impose in it what's
obvious to _you_ -- or go the Perl way and try to squeeze in as many
different ways to do every single task as possible so everybody's happy
except those who can't stand bloated languages (who'll stick with
Python;-).

Guido doesn't like method chaining, so he made a design choice that did
not allow method chaining, and did not give several equally obvious ways


But that's pretty hard to argue with :)


Indeed, it's not meant to be arguable-with;-). Personally I like method
chaining, but it's clearly not the Python way, and I appreciate
consistency and simplicity more than I appreciate picking and choosing
details of my preferred style.
Alex
Jul 18 '05 #30

Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
al*****@yahoo.com (Alex Martelli) writes:
There should be one-- and preferably only one --obvious way to do it.

Not leaving stylistic choice (which would lead to more than one obvious
way to do it) is quite consonant with the Zen of Python.


Well, what you're left with is that doing it the one obvious way
doesn't work, and you have to do it in some contorted way instead.


If somelist.sort() returned the list there could not possibly be one
obvious way to do it, since both

print somelist.sort()

and

somelist.sort()
print somelist

would be ways to do it, both pretty obvious to different people. [[The
introduction of 'sorted' will muddy 'obviousness' for some, but IMHO it
just changes what the one obvious way is: if you do want an inplace
sort, then somelist.sort(), otherwise then sorted(somelist) when you do
not want to alter somelist in place.]]
Alex
Jul 18 '05 #31

Op 2004-08-26, Alex Martelli schreef <al*****@yahoo.com>:
Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
al*****@yahoo.com (Alex Martelli) writes:
> There should be one-- and preferably only one --obvious way to do it.
>
> Not leaving stylistic choice (which would lead to more than one obvious
> way to do it) is quite consonant with the Zen of Python.


Well, what you're left with is that doing it the one obvious way
doesn't work, and you have to do it in some contorted way instead.


If somelist.sort() returned the list there could not possibly be one
obvious way to do it, since both

print somelist.sort()

and

somelist.sort()
print somelist


Then python has already deviated from the one obvious way to do it.
I can do:

a = a + b vs a += b.

or

a = b + c vs a = ''.join(b,c)
The difference between

print somelist.sort()

and

somelist.sort()
print somelist
is IMO of the same order as the difference between
print a + b

and

r = a + b
print r
--
Antoon Pardon
Jul 18 '05 #32

Antoon Pardon wrote:
If somelist.sort() returned the list there could not possibly be one
obvious way to do it, since both

print somelist.sort()

and

somelist.sort()
print somelist

Then python has already deviated from the one obvious way to do it.
I can do:

a = a + b vs a += b.

or

a = b + c vs a = ''.join(b,c)


a = ''.join((b,c))

join doesn't join all its arguments; it joins all elements from its
one argument, which should be a sequence. That's a detail, though.

The difference between

print somelist.sort()

and

somelist.sort()
print somelist
is IMO of the same order as the difference between
print a + b

and

r = a + b
print r


Not IMO.

print somelist.sort()

has a side-effect (somelist is sorted), which is not clearly visible if
used that way.

a + b doesn't have any side-effects.
IMO, 'print somelist.sort()' is more equivalent to 'print a += b', which
also doesn't work (fortunately).

--
"Codito ergo sum"
Roel Schroeven
Jul 18 '05 #33

In article <1g*************************@yahoo.com>,
al*****@yahoo.com (Alex Martelli) wrote:
In Python 2.4, by the way, reversed(x) [[using the new built-in function
'reversed']] is most often preferable to x[::-1]. reversed returns an
iterator, optimized for looping on, but if you need a list, tuple, etc,
you can just call list(reversed(x)) and so on.


Now that it's too late, I wish it were called
'reversing', after reading the story of somebody who got an
iterator, thinking it was a thing. Then made a dictionary
out of it twice, and got one good dict and one empty dict.

'reverse' and 'reversed' remind me of Extended Memory and
Expanded Memory. One was one, and one the other, but which
was which?

Regards. Mel.
Jul 18 '05 #34

Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...
Then python has already deviated from the one obvious way to do it.

Yep, ever since it let you code 2+3 and 3+2 with just the same effect --
which was from day one, and couldn't have been otherwise. _Preferably_
only one way, but as I said what's preferable can't always be achieved.

Nevertheless, when for some task there _is_ one obvious way to do it,
adding a feature whose main effect would be giving two alternative
obvious ways to do it would be unPythonic.
I can do:

a = a + b vs a += b.
Yes you can, and in the general case get very different effects, e.g.:
>>> c = a = range(3)
>>> b = range(2)
>>> a += b
>>> c
[0, 1, 2, 0, 1]

versus:

>>> c = a = range(3)
>>> b = range(2)
>>> a = a + b
>>> c
[0, 1, 2]

So, which one is the obvious way to do it depends on what 'it' is. In
some cases it doesn't matter, just like b+a and a+b are going to have
the same effect when a and b are numbers rather than sequences, and
there's nothing Python can do to fight this -- practicality beats
purity. If you're (when feasible) altering the object to which name 'a'
is bound, a+=b is the obvious way to do it; if you're in any case
rebinding name 'a' and letting the original object stand undisturbed,
'a=a+b' is the one obvious way to do THAT. Not all objects can be
altered, so the first of these tasks isn't always going to be
feasible, of course.
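Object identity makes the mutate-vs-rebind distinction concrete (a small sketch with a plain list; the session quoted above makes the same point through a second name):

```python
a = [0, 1, 2]
before = id(a)

a += [3]                 # augmented assignment mutates the list in place
print(id(a) == before)   # True -- still the same object

a = a + [4]              # plain addition builds a new list and rebinds 'a'
print(id(a) == before)   # False -- a different object now
```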

or

a = b + c vs a = ''.join(b,c)
You should try out the code you post, otherwise you risk ending up with
code in your face -- ''.join(b, c) will just raise an exception, which
is a VERY different effect from what b + c will give in most cases.
I'll be charitable and assume you meant ''.join((a, b)) or something
like that.

Again, it's only in one very special case that these two very different
'ways to do it' produce the same effect, just like in other different
special cases 'a = b + c' and 'a = c + b' produce the same effect and
there's nothing Python can do about it.

But let's be sensible: if 'it' is joining two strings which are bound to
names b and c, b+c is the only OBVIOUS way to do it. Building a
sequence whose items are b and c and calling ''.join on it is clearly an
indirect and roundabout -- therefore NOT "the one obvious way"! -- to
achieve a result. Proof: it's so unobvious, unusual, rarely used if
ever, that you typed entirely wrong code for the purpose...

Nobody ever even wished for there to never be two sequences of code with
the same end-result. The idea (a target to strive for) is that out of
all the (probably countable) sequences with that property, ONE stands
out as so much simpler, clearer, more direct, more obvious, to make that
sequence the ONE OBVIOUS way. We can't always get even that, as a+b vs
b+a show when a and b are bound to numbers, but we can sure get closer
to it by respecting most of GvR's design decisions than by offering
unfounded, hasty and badly reasoning critiques of them.
The difference between

print somelist.sort()

and

somelist.sort()
print somelist
is IMO of the same order as the difference between
print a + b

and

r = a + b
print r


For a sufficiently gross-grained comparison, sure. And so? In the
second case, if you're not interested in having the value of a+b kept
around for any subsequent use, then the first approach is the one
obvious way; if you ARE, the second, because you've bound a name to it
(which you might have avoided) so you can reuse it (if you have no
interest in such reuse, it's not obvious why you've bound any name...).

In the first case, fortunately the first approach is illegal, the second
one is just fine. Were they exactly equivalent in effect neither would
be the one obvious way for all reasonable observers -- some would hate
the side effect in the first case, some would hate the idea of having
two statements where one might suffice in the second case.

Fortunately the first approach does NOT do the same thing as the second
(it prints out None:-) so Python sticks to its design principles. Let
me offer a private libation to whatever deities protect programmers,
that Python was designed by GvR rather than by people able to propose
analogies such as this last one without following through on all of
their implications and seeing why this SHOWS Python is consistent in
applying its own design principles!
Alex

Jul 18 '05 #35

Op 2004-08-26, Alex Martelli schreef <al*****@yahoo.com>:
Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...
Then python has already deviated from the one obvious way to do it.
Yep, ever since it let you code 2+3 and 3+2 with just the same effect --
which was from day one, and couldn't have been otherwise. _Preferably_
only one way, but as I said what's preferable can't always be achieved.

Nevertheless, when for some task there _is_ one obvious way to do it,
adding a feature whose main effect would be giving two alternative
obvious ways to do it would be unPythonic.
I can do:

a = a + b vs a += b.


Yes you can, and in the general case get very different effects, e.g.:


And what about

a += b vs a.extend(b)
>>> c = a = range(3)
>>> b = range(2)
>>> a += b
>>> c
[0, 1, 2, 0, 1]

versus:

>>> c = a = range(3)
>>> b = range(2)
>>> a = a + b
>>> c
[0, 1, 2]


I wouldn't say you get different effects in *general*. You get the
same effect if you use numbers or tuples or any other immutable
object.
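The distinction being discussed can be checked in a couple of lines; a minimal sketch of the point above, showing that for a tuple a += b rebinds the name (so an alias is left undisturbed), while for a list it mutates in place:

```python
# For an immutable tuple, += rebinds the name: the alias c is untouched.
c = a = (0, 1, 2)
a += (3,)
assert c == (0, 1, 2)

# For a mutable list, += mutates in place: the alias c sees the change.
c = a = [0, 1, 2]
a += [3]
assert c == [0, 1, 2, 3]
assert a is c
```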
So, which one is the obvious way to do it depends on what 'it' is. In
some cases it doesn't matter, just like b+a and a+b are going to have
the same effect when a and b are numbers rather than sequences, and
there's nothing Python can do to fight this -- practicality beats
purity. If you're (when feasible) altering the object to which name 'a'
is bound, a+=b is the obvious way to do it; if you're in any case
rebinding name 'a' and letting the original object stand undisturbed,
'a=a+b' is the one obvious way to do THAT. Not all objects can be
altered, so the first ones of these tasks isn't always going to be
feasible, of course.

or

a = b + c vs a = ''.join(b,c)
You should try out the code you post, otherwise you risk ending up with
code in your face -- ''.join(b, c) will just raise an exception, which
is a VERY different effect from what b + c will give in most cases.
I'll be charitable and assume you meant ''.join((a, b)) or something
like that.

Again, it's only in one very special case that these two very different
'ways to do it' produce the same effect, just like in other different
special cases 'a = b + c' and 'a = c + b' produce the same effect and
there's nothing Python can do about it.

But let's be sensible: if 'it' is joining two strings which are bound to
names b and c, b+c is the only OBVIOUS way to do it. Building a
sequence whose items are b and c and calling ''.join on it is clearly an
indirect and roundabout -- therefore NOT "the one obvious way"! -- to
achieve a result. Proof: it's so unobvious, unusual, rarely used if
ever, that you typed entirely wrong code for the purpose...


That is just tradition. Suppose the "+" operator hadn't worked
on strings and concatenating had from the start been done by joining;
then that would have been the one obvious way to do it.

Nobody ever even wished for there to never be two sequences of code with
the same end-result. The idea (a target to strive for) is that out of
all the (probably countable) sequences with that property, ONE stands
out as so much simpler, clearer, more direct, more obvious, to make that
sequence the ONE OBVIOUS way.
And what if there are three sequences of code with the same end-result,
or four? Above what number does it stop being a problem that sequences
of that length or more produce the same result?
We can't always get even that, as a+b vs
b+a show when a and b are bound to numbers, but we can sure get closer
to it by respecting most of GvR's design decisions than by offering
unfounded, hasty and badly reasoning critiques of them.
I think that this goal of GvR is a bad one. If some way of doing it
is useful then I think it should be included, and the fact that
it introduces more than one obvious way to do some things shouldn't
count for much.

Sure, you shouldn't go the Perl way, where things seem to have
been introduced just for the sake of having more than one obvious way
to do things. But eliminating possibilities (method chaining)
just because you don't like them and because they would create
more than one obvious way to do things seems just as bad to
me.
What I have heard about the decorators is that one of the
arguments in favor of decorators is that you have to
give the name of the function only once, whereas traditionally
you have to repeat the function name, and this can introduce
errors.

But the same argument goes for allowing method chaining.
Without method chaining you have to repeat the name of
the object which can introduce errors.
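For illustration, a hypothetical list subclass whose mutating methods return self, giving the chaining style argued for here (the builtin list deliberately does not behave this way; the class name is made up):

```python
class ChainList(list):
    """List subclass whose mutators return self, enabling chaining."""
    def sort(self, *args, **kwargs):
        list.sort(self, *args, **kwargs)  # sort in place as usual...
        return self                       # ...but return self for chaining
    def reverse(self):
        list.reverse(self)
        return self

# The object's name need only be written once:
result = ChainList([3, 1, 2]).sort().reverse()
assert result == [3, 2, 1]
```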
The difference between

print somelist.sort()

and

somelist.sort()
print somelist
is IMO of the same order as the difference between
print a + b

and

r = a + b
print r
For a sufficiently gross-grained comparison, sure. And so? In the
second case, if you're not interested in having the value of a+b kept
around for any subsequent use, then the first approach is the one
obvious way;


No it isn't, because programs evolve. So you may think you don't
need the result later on, but that may change; writing it
the second way will make changes easier later on.
if you ARE, the second, because you've bound a name to it
(which you might have avoided) so you can reuse it (if you have no
interest in such reuse, it's not obvious why you've bound any name...).

In the first case, fortunately the first approach is illegal, the second
one is just fine. Were they exactly equivalent in effect neither would
be the one obvious way for all reasonable observer -- some would hate
the side effect in the first case, some would hate the idea of having
two statements where one might suffice in the second case.
So? I sometimes get the idea that people here can't cope with
differences in how people code. So any effort must be made
to force people to code in one specific way.
Fortunately the first approach does NOT do the same thing as the second
(it prints out None:-) so Python sticks to its design principles. Let
me offer a private libation to whatever deities protect programmers,
that Python was designed by GvR rather than by people able to propose
analogies such as this last one without following through on all of
their implications and seeing why this SHOWS Python is consistent in
applying its own design principles!


That these implications are important is itself contingent on the
design principles. If someone doesn't think particular design principles
are that important, he doesn't care that if something is changed that
particular design principle will be violated. Personally I'm not
that impressed with the design of Python; it is a very useful language,
but having operators like '+=' which have a different kind of result
depending on whether you have a mutable or immutable object is IMO
not such a good design, and I wonder what design principle inspired
them.

--
Antoon Pardon
Jul 18 '05 #36

P: n/a
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
but having operators like '+=' which have a different kind of result
depending on whether you have a mutable or immutable object is IMO
not such a good design and I wonder what design principle inspired them.


+= was added to Python fairly recently (in 2.0, I think) and there was
a lot of agonizing about whether to add it. It was one of those
things like the ?: construction. On the one hand there was the
abstract objection you raise. On the other hand there were lots of
users frustrated at being unable to say v[f(x)] += 1 instead of having
to call f twice or introduce a temp variable or something.
Eventually, practicality beat purity.
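A small sketch of that motivating case (f, v and the argument value are made up here): with augmented assignment the subscript expression, including the call to f, is evaluated only once:

```python
calls = []
def f(x):
    calls.append(x)   # record every call so we can count them
    return x % 3

v = [0, 0, 0]
v[f(7)] += 1          # f(7) is evaluated once, for both the read and the write
assert calls == [7]   # exactly one call
assert v == [0, 1, 0]
```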
Jul 18 '05 #37

P: n/a
On 2004-08-27, Paul Rubin wrote <>:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
but having operators like '+=' which have a different kind of result
depending on whether you have a mutable or immutable object is IMO
not such a good design and I wonder what design principle inspired them.


+= was added to Python fairly recently (in 2.0, I think) and there was
a lot of agonizing about whether to add it. It was one of those
things like the ?: construction. On the one hand there was the
abstract objection you raise. On the other hand there were lots of
users frustrated at being unable to say v[f(x)] += 1 instead of having
to call f twice or introduce a temp variable or something.
Eventually, practicality beat purity.


Fine, practicality beats purity, but then the proponents shouldn't
put that much weight on consistency, because practicality breaks
consistency.

In this case I think the practicality of method chaining beats
the purity of not allowing side-effects in print statements and
of having only one obvious way to do things.

I don't see that much difference in the frustration of having
to write:

t = f(x)
v[t] = v[t] + 1

and the frustration of having to write

lst = f(x)
lst.sort()
lst.reverse()

--
Antoon Pardon
Jul 18 '05 #38

P: n/a
Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...
Yes you can, and in the general case get very different effects, e.g.:
And what about

a += b vs a.extend(b)


I can go on repeating "in the general case [these constructs] get very
different effects" just as long as you can keep proposing, as if they
might be equivalent, constructs that just aren't so in the general case.

Do I really need to point out that a.extend(b) doesn't work for tuples
and strings, while a+=b works as polymorphically as feasible on all
these types? It should be pretty obvious, I think. So, if you want to
get an AttributeError exception when 'a' is a tuple or str, a.extend(b)
is clearly the way to go -- if you want para-polymorphic behavior in
those cases, a+=b. Isn't it obvious, too?
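A minimal sketch of that contrast:

```python
# extend exists only on lists...
a = [1, 2]
a.extend([3])
assert a == [1, 2, 3]

# ...tuples (and strings) raise AttributeError for it...
try:
    (1, 2).extend((3,))
except AttributeError:
    pass

# ...while += works across lists, tuples and strings alike.
a = (1, 2); a += (3,); assert a == (1, 2, 3)
a = "ab";   a += "c";  assert a == "abc"
```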
>>> c=a=range(3)
>>> b=range(2)
>>> a+=b
>>> c
[0, 1, 2, 0, 1]

versus:

>>> c=a=range(3)
>>> b=range(2)
>>> a=a+b
>>> c
[0, 1, 2]


I wouldn't say you get different effects in *general*. You get the
same effect if you use numbers or tuples or any other immutable
object.


a+=b is defined to be: identical to a=a+b for immutable objects being
bound to name 'a'; but not necessarily so for mutable objects -- mutable
types get a chance to define __iadd__ and gain efficiency through
in-place mutation for a+=b, while the semantics of a=a+b strictly forbid
in-place mutation. *IN GENERAL*, the effects of a+=b and a=a+b may
differ, though in specific cases ('a' being immutable, or of a mutable
type which strangely chooses to define __add__ but not __iadd__) they
may be identical. Like for a+b vs b+a: in general they may differ, but
they won't differ if the types involved just happen to have commutative
addition, or if a and b are equal or identical objects, i.e., in various
special cases.
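A minimal sketch of the __iadd__ hook just described (the Bag class is purely illustrative): defining __iadd__ lets a += b mutate in place and return the same object, while __add__ must build a new one:

```python
class Bag:
    def __init__(self, items):
        self.items = list(items)
    def __add__(self, other):
        # a = a + b semantics: always a brand-new object
        return Bag(self.items + other.items)
    def __iadd__(self, other):
        # a += b semantics: mutate in place, return the result object
        self.items += other.items
        return self

a = b = Bag([1, 2])
a += Bag([3])
assert a is b                  # same object: __iadd__ mutated in place
assert b.items == [1, 2, 3]

c = a + Bag([4])
assert c is not a              # __add__ produced a new Bag...
assert a.items == [1, 2, 3]    # ...leaving the operand untouched
```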

"You get different effects *in general*" does not rule out that there
may be special cases (immutable types for one issue,
commutative-addition types for another, etc, etc) in which the effects
do not differ. Indeed, if it was "always" true that you got different
effects, it would be superfluous to add that "in general" qualifier.
Therefore, I find your assertion that you "wouldn't say you get
different effects in *general*" based on finding special cases in which
the effects do not differ to be absurd and unsupportable.

But let's be sensible: if 'it' is joining two strings which are bound to
names b and c, b+c is the only OBVIOUS way to do it. Building a
sequence whose items are b and c and calling ''.join on it is clearly an
indirect and roundabout -- therefore NOT "the one obvious way"! -- to
achieve a result. Proof: it's so unobvious, unusual, rarely used if
ever, that you typed entirely wrong code for the purpose...


That is just tradition. Suppose the "+" operator hadn't worked
on strings and concatenating had from the start been done by joining;
then that would have been the one obvious way to do it.


In a hypothetical language without any + operator, but with both unary
and binary - operators, the one "obvious" way to add two numbers a and b
might indeed be to code: a - (-b). So what? In a language WITH a
normal binary + operator, 'a - (-b)' is nothing like 'an obvious way'.

Nobody ever even wished for there to never be two sequences of code with
the same end-result. The idea (a target to strive for) is that out of
all the (probably countable) sequences with that property, ONE stands
out as so much simpler, clearer, more direct, more obvious, to make that
sequence the ONE OBVIOUS way.


And what if it are three sequences of code with the same end-result,
or four. From what number isn't it a problem any more if two sequences
of that length or more produce the same result.


To add N integers that are bound to N separate identifiers, there are
(quite obviously) N factorial "sequences of [the same] length" producing
the same result. Is it "a problem"? I guess it may be considered a
minor annoyance, but it would be absurd to try and do something against
it, e.g. by arbitrary rules forbidding addition between variables except
in alphabetical order. Practicality beats purity.
We can't always get even that, as a+b vs
b+a show when a and b are bound to numbers, but we can sure get closer
to it by respecting most of GvR's design decisions than by offering
unfounded, hasty and badly reasoning critiques of them.
I think that this goal of GvR is a bad one.


I'm sure you're a better language designer than GvR, since you're
qualified to critique, not just a specific design decision, but one of
the pillars on which he based many of the design decisions that together
made Python.
Therefore, I earnestly urge you to stop wasting your time critiquing an
inferiorly-designed language and go off and design your own, which will
no doubt be immensely superior. Good bye; don't slam the door on the
way out, please.
If some way of doing it
is useful then I think it should be included, and the fact that
it introduces more than one obvious way to do some things shouldn't
count for much.
This is exactly Perl's philosophy, of course.

Sure, you shouldn't go the Perl way, where things seem to have
been introduced just for the sake of having more than one obvious way
to do things. But eliminating possibilities (method chaining)
just because you don't like them and because they would create
more than one obvious way to do things seems just as bad to
me.
If a language should not eliminate possibilities because its designer
does not like those possibilities, indeed if it's BAD for a language
designer to omit from his language the possibilities he dislikes, what
else should a language designer do then, except include every
possibility that somebody somewhere MIGHT like? And that IS a far
better description of Perl's philosophy than "just for the sake" quips
(which are essentially that -- quips).
What I have heard about the decorators is that one of the
arguments in favor of decorators is that you have to
give the name of the function only once, whereas traditionally
you have to repeat the function name, and this can introduce
errors.

But the same argument goes for allowing method chaining.
Without method chaining you have to repeat the name of
the object which can introduce errors.
I've heard that argument in favour of augmented assignment operators
such as += -- and there it makes sense, since the item you're operating
on has unbounded complexity... mydict[foo].bar[23].zepp += 1 may indeed
be better than repeating that horrid LHS (although "Demeter's Law"
suggests that such multi-dotted usage is a bad idea in itself, one
doesn't always structure code with proper assignment of responsibilities
to objects and so forth...).

For a plain name, particularly one which is just a local variable and
therefore you can choose to be as simple as you wish, the argument makes
no sense to me. If I need to call several operations on an object I'm
quite likely to give that object a 'temporary alias' in a local name
anyway, of course:
target = mydict[foo].bar[23].zepp
target.pop(xu1)
target.sort()
target.pop(xu3)
target.reverse()
target.pop(xu7)

Doing just the same thing when I don't need intermediate access to the
object between calls that mutate the object and currently return None is
no hardship, just as it isn't when such access IS needed. Note that you
couldn't do chaining here anyway, since pop mutates the object but also
returns a significant value...
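A two-line sketch of why pop in particular resists chaining: it both mutates the list and returns a meaningful value, so a chained call would have to discard one or the other:

```python
target = [5, 3, 9, 1]
last = target.pop()         # returns the removed item...
assert last == 1
assert target == [5, 3, 9]  # ...and mutates the list as a side effect
```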

The difference between

print somelist.sort()

and

somelist.sort()
print somelist
is IMO of the same order as the difference between
print a + b

and

r = a + b
print r


For a sufficiently gross-grained comparison, sure. And so? In the
second case, if you're not interested in having the value of a+b kept
around for any subsequent use, then the first approach is the one
obvious way;


No it isn't, because programs evolve. So you may think you don't
need the result later on, but that may change; writing it
the second way will make changes easier later on.


Ridiculous. Keep around a+b, which for all we know here might be a
million-items list!, by having a name bound to it, without ANY current
need for that object, because some FUTURE version of your program may
have different specs?!
If specs change, refactoring the program written in the sensible way,
the way that doesn't keep memory occupied to no good purpose, won't be
any harder than refactoring the program that wastes megabytes by always
keeping all intermediate results around "just in case".

if you ARE, the second, because you've bound a name to it
(which you might have avoided) so you can reuse it (if you have no
interest in such reuse, it's not obvious why you've bound any name...).

In the first case, fortunately the first approach is illegal, the second
one is just fine. Were they exactly equivalent in effect neither would
be the one obvious way for all reasonable observer -- some would hate
the side effect in the first case, some would hate the idea of having
two statements where one might suffice in the second case.


So? I sometimes get the idea that people here can't cope with
differences in how people code. So any effort must be made
to force people to code in one specific way.


When more than one person cooperates in writing a program, the group
will work much better if there is no "code ownership" -- the lack of
individualized, quirky style variations helps a lot. It's not impossible
to 'cope with differences' in coding style within a team, but it's just
one more roadblock erected to no good purpose. A language can help the
team reach reasonably uniform coding style (by trying to avoid offering
gratuitous variation which serves no real purpose), or it can hinder the
team in that same goal (by showering gratuitous variation on them).

Fortunately the first approach does NOT do the same thing as the second
(it prints out None:-) so Python sticks to its design principles. Let
me offer a private libation to whatever deities protect programmers,
that Python was designed by GvR rather than by people able to propose
analogies such as this last one without following through on all of
their implications and seeing why this SHOWS Python is consistent in
applying its own design principles!


That these implications are important is itself contingent on the
design principles. If someone doesn't think particular design principles
are that important, he doesn't care that if something is changed that
particular design principle will be violated. Personally I'm not
that impressed with the design of Python; it is a very useful language


Great, so, I repeat: go away and design your language, one that WILL
impress you with its design. Here, you're just wasting your precious
time and energy, as well of course as ours.
but having operators like '+=' which have a different kind of result
depending on whether you have a mutable or immutable object is IMO
not such a good design and I wonder what design principle inspired
them.


Practicality beats purity: needing to polymorphically concatenate two
sequences of any kind, without caring if one gets modified or not, is a
reasonably frequent need and is quite well satisfied by += for example.
Alex
Jul 18 '05 #39

P: n/a
Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...
Fine practicality beats purity, but then the proponents shouldn't
put that much weight on consistency, because practicality breaks
consistency.
No: "but special cases aren't special enough to break the rules". No
rule was broken by introducing += and friends.

In this case I think the practicality of method chaining beats
the purity of not allowing side-effects in print statements and
of having only one obvious way to do things.
You think one way, GvR thinks another, and in Python GvR wins. Go
design your own language where what you think matters.

I don't see that much difference in the frustration of having
to write:

t = f(x)
v[t] = v[t] + 1
You're repeating (necessarily) the indexing operation, which may be
unboundedly costly for a user-coded type.

and the frustration of having to write

lst = f(x)
lst.sort()
lst.reverse()


Here, no operation is needlessly getting repeated.

If you don't see much difference between forcing people to code in a way
that repeats potentially-costly operations, and forcing a style that
doesn't imply such repetitions, I wonder how your language will look.
Still, I'm much happier thinking of you busy designing your own
wonderful language, than wasting your time and yours here, busy
criticizing what you cannot change.
Alex
Jul 18 '05 #40

P: n/a
On 2004-08-27, Alex Martelli wrote <al*****@yahoo.com>:
Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...
Fine practicality beats purity, but then the proponents shouldn't
put that much weight on consistency, because practicality breaks
consistency.
No: "but special cases aren't special enough to break the rules". No
rule was broken by introducing += and friends.

In this case I think the practicality of method chaining beats
the purity of not allowing side-effects in print statements and
of having only one obvious way to do things.


You think one way, GvR thinks another, and in Python GvR wins. Go
design your own language where what you think matters.


Why the fuss over the chosen decorator syntax if GvR
wins anyhow? Why don't you go tell all those people
arguing decorator syntax that they should design their own
language, where what they think matters?

If you think I shouldn't voice an opinion here because GvR
wins anyhow and my opinion won't matter, fine. Just say so
from the beginning. Don't start by pretending that you
have good arguments that support the status quo when
all that matters is that GvR prefers it this way.
All good arguments in support are just a coincidence in
that case.
I don't see that much difference in the frustration of having
to write:

t = f(x)
v[t] = v[t] + 1


You're repeating (necessarily) the indexing operation, which may be
unboundedly costly for a user-coded type.


That repetition is just Python's inability to optimise.
and the frustration of having to write

lst = f(x)
lst.sort()
lst.reverse()


Here, no operation is needlessly getting repeated.


Yes there is: the operation to find lst in the local dictionary.
Although it won't be unboundedly costly.
If you don't see much difference between forcing people to code in a way
that repeats potentially-costly operations,
and forcing a style that
doesn't imply such repetitions, I wonder how your language will look.
I'm sure that if I ever find the time to do so, you won't like it.
Still, I'm much happier thinking of you busy designing your own
wonderful language, than wasting your time and yours here, busy
criticizing what you cannot change.


If you don't want to waste time, just state from the beginning
that this is how GvR wanted it and people won't be able to
change it.

You shouldn't start by arguing why the language as it is is as
it should be, because that will just prolong the discussion, as
people will give counter-arguments for what they think would
be better. If you know that, should people not be persuaded
by your arguments, you will resort to GvR's authority and declare
the arguments a waste of time, you are better off putting
GvR's authority, which can't be questioned, on the table as soon
as possible.

--
Antoon Pardon
Jul 18 '05 #41

P: n/a
On 2004-08-27, Alex Martelli wrote <al*****@yahoo.com>:
Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...
> Yes you can, and in the general case get very different effects, e.g.:
And what about

a += b vs a.extend(b)


I can go on repeating "in the general case [these constructs] get very
different effects" just as long as you can keep proposing, as if they
might be equivalent, constructs that just aren't so in the general case.

Do I really need to point out that a.extend(b) doesn't work for tuples
and strings, while a+=b works as polymorphically as feasible on all
these types?


That extend doesn't work for strings and tuples is irrelevant.
For those types that have an extend, a.extend(b) is equivalent
to a+=b.

In other words there is no reason to have an extend member for
lists.

Furthermore a+=b doesn't behave polymorphically in a feasible way,
because the result of

a=c
a+=b

is different depending on the type of c.
It should be pretty obvious, I think. So, if you want to
get an AttributeError exception when 'a' is a tuple or str, a.extend(b)
is clearly the way to go -- if you want para-polymorphic behavior in
those cases, a+=b. Isn't it obvious, too?
No it isn't obvious. No design is so stupid that you can't find
an example for its use. That you have found a specific use
here doesn't say anything.
>>> c=a=range(3)
>>> b=range(2)
>>> a+=b
>>> c
[0, 1, 2, 0, 1]

versus:

>>> c=a=range(3)
>>> b=range(2)
>>> a=a+b
>>> c
[0, 1, 2]


I wouldn't say you get different effects in *general*. You get the
same effect if you use numbers or tuples or any other immutable
object.


a+=b is defined to be: identical to a=a+b for immutable objects being
bound to name 'a'; but not necessarily so for mutable objects -- mutable
types get a chance to define __iadd__ and gain efficiency through
in-place mutation for a+=b, while the semantics of a=a+b strictly forbid
in-place mutation. *IN GENERAL*, the effects of a+=b and a=a+b may
differ, though in specific cases ('a' being immutable, or of a mutable
type which strangely chooses to define __add__ but not __iadd__) they
may be identical.


which makes para-polymorphic use of them infeasible.
Like for a+b vs b+a: in general they may differ, but
they won't differ if the types involved just happen to have commutative
addition, or if a and b are equal or identical objects, i.e., in various
special cases.

"You get different effects *in general*" does not rule out that there
may be special cases (immutable types for one issue,
If those specific cases can make up half of all cases, I wouldn't
call the remaining cases *in general*.
commutative-addition types for another, etc, etc) in which the effects
do not differ. Indeed, if it was "always" true that you got different
effects, it would be superfluous to add that "in general" qualifier.
Therefore, I find your assertion that you "wouldn't say you get
different effects in *general*" based on finding special cases in which
the effects do not differ to be absurd and unsupportable.

> But let's be sensible: if 'it' is joining two strings which are bound to
> names b and c, b+c is the only OBVIOUS way to do it. Building a
> sequence whose items are b and c and calling ''.join on it is clearly an
> indirect and roundabout -- therefore NOT "the one obvious way"! -- to
> achieve a result. Proof: it's so unobvious, unusual, rarely used if
> ever, that you typed entirely wrong code for the purpose...
That is just tradition. Suppose the "+" operator hadn't worked
on strings and concatenating had from the start been done by joining;
then that would have been the one obvious way to do it.


In a hypothetical language without any + operator, but with both unary
and binary - operators, the one "obvious" way to add two numbers a and b
might indeed be to code: a - (-b). So what? In a language WITH a
normal binary + operator, 'a - (-b)' is nothing like 'an obvious way'.


The point is that there is a difference between what is obvious in
general and what is obvious within a certain tradition. If Python
limited itself to only one obvious way for those things that
are obvious in general, that would be one thing.

But here you are defending one thing that is only obvious through
tradition, by pointing out that something that hasn't had the time
to become a tradition isn't obvious.

Personally I don't find the use of "+" as a concat operator obvious.
There are types for which both addition and concatenation can be
useful operators. Using the same symbol for both will make
it just that much harder to implement such types, and as a result
there is no obvious interface for such types.

> We can't always get even that, as a+b vs
> b+a show when a and b are bound to numbers, but we can sure get closer
> to it by respecting most of GvR's design decisions than by offering
> unfounded, hasty and badly reasoning critiques of them.


I think that this goal of GvR is a bad one.


I'm sure you're a better language designer than GvR, since you're
qualified to critique, not just a specific design decision, but one of
the pillars on which he based many of the design decisions that together
made Python.


Therefore, I earnestly urge you to stop wasting your time critiquing an
inferiorly-designed language and go off and design your own, which will
no doubt be immensely superior. Good bye; don't slam the door on the
way out, please.
If someway of doing it
is usefull then I think it should be included and the fact that
it introduces more than one obvious way to do some things shouldn't
count for much.


This is exactly Perl's philosophy, of course.


No it isn't. Perl offers you choice in a number of situations
where a number of the alternatives don't offer you anything useful,
except a way to do things differently and eliminate a few characters.
Sure, you shouldn't go the Perl way, where things seem to have
been introduced just for the sake of having more than one obvious way
to do things. But eliminating possibilities (method chaining)
just because you don't like them and because they would create
more than one obvious way to do things seems just as bad to
me.


If a language should not eliminate possibilities because its designer
does not like those possibilities, indeed if it's BAD for a language
designer to omit from his language the possibilities he dislikes, what
else should a language designer do then, except include every
possibility that somebody somewhere MIGHT like?


So if you think it is good for a language designer to omit what
he dislikes, do you think it is equally good for a language
designer to add just because he likes it? And if you think so,
do you think the earlier versions of Perl, when we can assume
the language was still mainly driven by what Larry Wall liked,
were a good language?

I can understand that a designer has to make choices, but
if the designer can allow a choice and has no other argument
to limit that choice than that he doesn't like one alternative,
then that is IMO a bad design decision.
What I have heard about the decorators is that one of the
arguments in favor of decorators is that you have to
give the name of the function only once, whereas traditionally
you have to repeat the function name, and this can introduce
errors.

But the same argument goes for allowing method chaining.
Without method chaining you have to repeat the name of
the object which can introduce errors.


I've heard that argument in favour of augmented assignment operators
such as += -- and there it makes sense, since the item you're operating
on has unbounded complexity... mydict[foo].bar[23].zepp += 1 may indeed
be better than repeating that horrid LHS (although "Demeter's Law"
suggests that such multi-dotted usage is a bad idea in itself, one
doesn't always structure code with proper assignment of responsibilities
to objects and so forth...).

For a plain name, particularly one which is just a local variable and
therefore you can choose to be as simple as you wish, the argument makes
no sense to me. If I need to call several operations on an object I'm
quite likely to give that object a 'temporary alias' in a local name
anyway, of course:
target = mydict[foo].bar[23].zepp
target.pop(xu1)
target.sort()
target.pop(xu3)
target.reverse()
target.pop(xu7)


I find this a questionable practice. What if you need to make the list
empty at some point? The most obvious way to do so after a number of
such statements would be:

target = []

But of course that won't work.
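A minimal sketch of the pitfall, together with the slice-assignment form that does empty the aliased list:

```python
data = [1, 2, 3]
target = data
target = []          # only rebinds the local name 'target'
assert data == [1, 2, 3]

target = data
target[:] = []       # slice assignment empties the list object itself
assert data == []
assert target is data
```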
>> The difference between
>>
>> print somelist.sort()
>>
>> and
>>
>> somelist.sort()
>> print somelist
>>
>>
>> is IMO of the same order as the difference between
>>
>>
>> print a + b
>>
>> and
>>
>> r = a + b
>> print r
>
> For a sufficiently gross-grained comparison, sure. And so? In the
> second case, if you're not interested in having the value of a+b kept
> around for any subsequent use, then the first approach is the one
> obvious way;


No it isn't, because programs evolve. You may think you don't
need the result later on, but that may change, so writing it
the second way will make changes easier later on.


Ridiculous. Keep around a+b, which for all we know here might be a
million-items list!, by having a name bound to it, without ANY current
need for that object, because some FUTURE version of your program may
have different specs?!
If specs change, refactoring the program written in the sensible way,
the way that doesn't keep memory occupied to no good purpose, won't be
any harder than refactoring the program that wastes megabytes by always
keeping all intermediate results around "just in case".


One could argue that this is again just a deficiency of Python's
implementation, which can't optimise the code in such a way that
unused variables have their memory released.
> if you ARE, the second, because you've bound a name to it
> (which you might have avoided) so you can reuse it (if you have no
> interest in such reuse, it's not obvious why you've bound any name...).
>
> In the first case, fortunately the first approach is illegal, the second
> one is just fine. Were they exactly equivalent in effect neither would
> be the one obvious way for all reasonable observer -- some would hate
> the side effect in the first case, some would hate the idea of having
> two statements where one might suffice in the second case.


So? I sometimes get the idea that people here can't cope with
differences in how people code. So any effort must be made
to force people to code in one specific way.


When more than one person cooperates in writing a program, the group
will work much better if there is no "code ownership" -- the lack of
individualized, quirky style variations helps a lot. It's not impossible
to 'cope with differences' in coding style within a team, but it's just
one more roadblock erected to no good purpose. A language can help the
team reach reasonably uniform coding style (by trying to avoid offering
gratuitous variation which serves no real purpose), or it can hinder the
team in that same goal (by showering gratuitous variation on them).


If a language goes so far as to make a particular coding style
impossible, while that would have been the preferred style of most of
the project members, then such a limitation can hinder the decision
to agree upon a certain style instead of helping it.

I also think this attitude is appalling. Python is for consenting
adults, I hear. But that doesn't seem to apply here, as Python
seems to want to enforce a certain coding style instead of
letting consenting adults work it out among themselves.

> Fortunately the first approach does NOT do the same thing as the second
> (it prints out None:-) so Python sticks to its design principles. Let
> me offer a private libation to whatever deities protect programmers,
> that Python was designed by GvR rather than by people able to propose
> analogies such as this last one without following through on all of
> their implications and seeing why this SHOWS Python is consistent in
> applying its own design principles!


That these implications are important is itself just a consequence of
the design principles. If someone doesn't think particular design
principles are that important, he doesn't care that changing something
will violate that particular design principle. Personally I'm not
that impressed with the design of Python; it is a very useful language


Great, so, I repeat: go away and design your language, one that WILL
impress you with its design. Here, you're just wasting your precious
time and energy, as well of course as ours.


That you waste yours is entirely your choice; nobody forces you
to reply to me.
but having operators like '+=' which have a different kind of result
depending on whether you have a mutable or immutable object is IMO
not such a good design and I wonder what design principle inspired
them.


Practicality beats purity: needing to polymorphically concatenate two
sequences of any kind, without caring if one gets modified or not, is a
reasonably frequent need and is quite well satisfied by += for example.
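A minimal illustration of that claim: the very same `a += b` line concatenates lists, tuples and strings alike, extending the list in place and building new objects for the immutable types.

```python
def concat(a, b):
    a += b        # extends a list in place; builds a new tuple or string
    return a

print(concat([1, 2], [3]))      # [1, 2, 3]
print(concat((1, 2), (3,)))     # (1, 2, 3)
print(concat("sp", "am"))       # spam
```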


It isn't. Either you know what types the variables are, and then
using a different operator depending on the type is no big deal,
or you don't know what type the variables are, and then not caring
whether one gets modified is a disaster waiting to happen.

--
Antoon Pardon
Jul 18 '05 #42

Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...
You think one way, GvR thinks another, and in Python GvR wins. Go
design your own language where what you think matters.
Why the fuss over the chosen decorator syntax if GvR
wins anyhow. Why don't you go tell all those people
arguing decorator syntax that they should design their own
language where what they think matters.


Because in this case, specifically, the decision is explicitly
considered "not definitive yet". Alpha versions get released
_specifically_ to get community feedback, so a controversial innovation
gets a chance to be changed if the community can so persuade Guido.

I do not know if you're really unable to perceive this obvious
difference, or are just trying to falsely convince others that you're as
thick as that; in this like in many other cases in this thread, if
you're pretending, then you're doing a good job of it, because you're
quite close to convincing me that your perception _is_ truly as impeded
as it would appear to be.

If you think I shouldn't voice an opinion here because GvR
wins anyhow and my opinion won't matter, fine. Just say so
from the beginning. Don't start with pretending that you
have good arguments that support the status quo because
all that matters is that GvR prefers it this way.
All good arguments in support are just a coincidence in
that case.
I do think, and I have indeed "stated so from the beginning" (many years
ago), that it's generally a waste of time and energy for people to come
charging here criticizing Python's general design and demanding changes
that won't happen anyway. There are forums essentially devoted to
debates and flamewars independently of their uselessness, and this
newsgroup is not one of them.

People with normal levels of perceptiveness can see the difference
between such useless rants, on one side, and, on the other, several
potentially useful kinds of discourse, that I, speaking personally, do
indeed welcome. Trying to understand the design rationale for some
aspect of the language is just fine, for example -- and that's because
trying to understand any complicated artefact X is often well served by
efforts to build a mental model of how X came to be as it is, quite
apart from any interest in _changing_ X. You may not like the arguments
I present, but I'm not just "pretending" that they're good, as you
accuse me of doing: many people like them, as you can confirm for
yourself by studying the google groups archives of my posts and of the
responses to them over the years, checking out the reviews of my books,
and so on. If you just don't like reading my prose, hey, fine, many
others don't particularly care for it either (including Guido,
mostly;-); I'll be content with being helpful to, and appreciated by,
that substantial contingent of people who do like my writing.

And so, inevitably, each and every time I return to c.l.py, I find some
people who might be engaging in either kind of post -- the useful
"trying to understand" kind, or the useless "criticizing what you cannot
change" one -- and others who are clearly just flaming. And inevitably
I end up repeating once again all the (IMHO) good arguments which (IMHO)
show most criticisms to be badly conceived and most design decisions in
Python to be consistent, useful, helpful and well-founded. Why?
Because this is a _public_ forum, with many more readers than writers
for most any thread. If these were private exchanges, I'd happily set
my mail server to bounce any mail from people I know won't say anything
useful or insightful, and good riddance. But since it's a public forum,
there are likely to be readers out there who ARE honestly striving to
understand, and if they see unanswered criticisms they may not have
enough Python knowledge to see by themselves the obvious answers to
those criticisms -- so, far too often, I provide those answers, as a
service to those readers out there. I would much rather spend this time
and energy doing something that's more fun and more useful, but it's
psychologically difficult for me to see some situation that can
obviously use some help on my part, and do nothing at all about it.

Maybe one day I'll be psychologically able to use killfiles more
consistently, whenever I notice some poster that I can reliably classify
as a useless flamer, and let readers of that poster's tripe watch out
for themselves. But still I find it less painful to just drop out of
c.l.py altogether when I once again realize I just can't afford the time
to show why every flawed analysis in the world IS flawed, why every
false assertion in the world IS false, and so on -- and further realize
that there will never be any shortage of people eager to post flawed
analysis, false assertions, and so on, to any public forum.

I don't see that much difference in the frustration of having
to write:

t = f(x)
v[t] = v[t] + 1


You're repeating (necessarily) the indexing operation, which may be
unboundedly costly for a user-coded type.


That repetition is just Python's inability to optimise.


There being, in general, no necessary correlation whatsoever between the
computations performed by __getitem__ and those performed by
__setitem__, the repetition of the indexing operation is (in general)
indeed inevitable here. Python does choose not to hoist constant
subexpressions even in other cases, but here, unless one changed
semantics very deeply and backwards-incompatibly, there's nothing to
hoist. ((note carefully that I'm not claiming v[t] += 1 is inherently
different...))
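One way to see that both spellings each perform exactly one get and one set on a user-coded mapping is a toy class that logs its index operations (purely illustrative, not from the thread):

```python
class LoggingDict(dict):
    """Toy mapping that records every index operation performed on it."""
    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        self.log = []
    def __getitem__(self, key):
        self.log.append(("get", key))
        return super().__getitem__(key)
    def __setitem__(self, key, value):
        self.log.append(("set", key))
        super().__setitem__(key, value)

v = LoggingDict(a=0)
v["a"] = v["a"] + 1   # one __getitem__, one __setitem__
v["a"] += 1           # likewise: one get, one set -- no saving
print(v.log)   # [('get', 'a'), ('set', 'a'), ('get', 'a'), ('set', 'a')]
# v["a"] is now 2
```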
and the frustration of having to write

lst = f(x)
lst.sort()
lst.reverse()


Here, no operation is needlessly getting repeated.


Yes there is: the operation to look up lst in the local namespace,
although that won't be unboundedly costly.


If you're thinking of a virtual machine based on a stack (which happens
to be the case in the current Python), you can indeed imagine two
repeated elementary operations, in current bytecode LOAD_FAST for name
'lst' and POP_TOP to ignore the result of each call -- they're extremely
fast but do indeed get repeated. But that's an implementation detail
based on using a stack-based virtual machine, and therefore irrelevant
in terms of judging the _language_ (as opposed to its implementations).
Using a register-based virtual machine, name 'lst' could obviously be
left in a register after the first look-up -- no repetition at all is
_inherently_ made necessary by this aspect of language design.
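The point can be checked with the `dis` module; exact opcode names vary between CPython versions, but the repeated load of the local name and the discarding of each call's None result are visible in the disassembly:

```python
import dis

def f(lst):
    lst.sort()
    lst.reverse()

# Each statement loads 'lst' again and pops the discarded result:
dis.dis(f)
ops = [instr.opname for instr in dis.Bytecode(f)]
print(ops.count("LOAD_FAST") >= 2, ops.count("POP_TOP") >= 2)  # True True
```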

Still, I'm much happier thinking of you busy designing your own
wonderful language, than wasting your time and ours here, busy
criticizing what you cannot change.


If you don't want to waste time, just state from the beginning
that this is how GvR wanted it and people won't be able to
change it.

You shouldn't start by arguing why the language is as it should
be, because that will just prolong the discussion as
people give counter-arguments for what they think would
be better. If you know that, should people not be persuaded
by your arguments, you will fall back on GvR's authority and declare
the arguments a waste of time, you are better off putting
GvR's authority, which can't be questioned, on the table as soon
as possible.


On the other hand, _reasonable_ readers (and there are some, as shown by
the various feedback on my work that I have referred to in previous
parts of this post) can benefit by a presentation of the _excellent_
reasons underlying Python's design, and such readers would be badly
served if the flawed arguments and false assertions presented to justify
some criticisms of Python were left unanswered.
Alex
Jul 18 '05 #43

Op 2004-08-27, Alex Martelli schreef <al*****@yahoo.com>:
Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...

If you think I shouldn't voice an opinion here because GvR
wins anyhow and my opinion won't matter, fine. Just say so
from the beginning. Don't start with pretending that you
have good arguments that support the status quo because
all that matters is that GvR prefers it this way.
All good arguments in support are just a coincidence in
that case.
I do think, and I have indeed "stated so from the beginning" (many years
ago), that it's generally a waste of time and energy for people to come
charging here criticizing Python's general design and demanding changes
that won't happen anyway. There are forums essentially devoted to
debates and flamewars independently of their uselessness, and this
newsgroup is not one of them.


I don't demand changes. I have my criticisms of the language, and
I think that some arguments used to defend the language are not
well founded; when I see one of those I sometimes respond
to it. That is all. I realise no language is perfect
and I don't have the time to design the one true perfect language
myself. In general I'm happy to program in Python with the
warts I think it has. I'll just see how it evolves, and based
on that evolution and the appearance of other languages I will
decide what language to use in the future.

I hope that someday a ternary operator will arrive, but my choice
of language will hardly depend on that, and I won't ask for it
unless that particular PEP is ever reactivated.
But if someone argues there is no need for a ternary operator
I'll probably respond.
People with normal levels of perceptiveness can see the difference
between such useless rants, on one side, and, on the other, several
potentially useful kinds of discourse, that I, speaking personally, do
indeed welcome. Trying to understand the design rationale for some
aspect of the language is just fine, for example -- and that's because
trying to understand any complicated artefact X is often well served by
efforts to build a mental model of how X came to be as it is, quite
apart from any interest in _changing_ X. You may not like the arguments
I present, but I'm not just "pretending" that they're good, as you
accuse me of doing: many people like them, as you can confirm for
yourself by studying the google groups archives of my posts and of the
responses to them over the years, checking out the reviews of my books,
and so on.
The number of people that like your arguments is irrelevant to me.
If I don't think it is a good argument chances are I will respond
to it.
If you just don't like reading my prose, hey, fine, many
others don't particularly care for it either (including Guido,
mostly;-); I'll be content with being helpful to, and appreciated by,
that substantial contingent of people who do like my writing.

And so, inevitably, each and every time I return to c.l.py, I find some
people who might be engaging in either kind of post -- the useful
"trying to understand" kind, or the useless "criticizing what you cannot
change" one -- and others who are clearly just flaming.
The problem IMO is that often enough, when a useful
trying-to-understand article arrives, the answers are not limited to
explaining what is going on, but often include some advocacy
of why the choice made in Python was the correct one.

This invites people who are less happy with that particular choice
to argue why that choice isn't as good as the first responder may
have led them to believe, even if they don't particularly want the
language to change.

And inevitably
I end up repeating once again all the (IMHO) good arguments which (IMHO)
show most criticisms to be badly conceived and most design decisions in
Python to be consistent, useful, helpful and well-founded. Why?
Because this is a _public_ forum, with many more readers than writers
for most any thread. If these were private exchanges, I'd happily set
my mail server to bounce any mail from people I know won't say anything
useful or insightful, and good riddance. But since it's a public forum,
there are likely to be readers out there who ARE honestly striving to
understand, and if they see unanswered criticisms they may not have
enough Python knowledge to see by themselves the obvious answers to
those criticisms -- so, far too often, I provide those answers, as a
service to those readers out there.


Well, the same works the other way around. There are people who
think that some of the choices Python made are not as consistent,
useful, helpful and well-founded as some would like us to believe,
and who want those things known too.
>> I don't see that much difference in the frustration of having
>> to write:
>>
>> t = f(x)
>> v[t] = v[t] + 1
>
> You're repeating (necessarily) the indexing operation, which may be
> unboundedly costly for a user-coded type.


That repetion is just pythons inabilty to optimise.


There being, in general, no necessary correlation whatsoever between the
computations performed by __getitem__ and those performed by
__setitem__,


Maybe that is the problem here. I think one could argue that a C++
approach here would have been better, where v[t] would result in
an lvalue, from which a value could be extracted or which could
be set to a value, depending on which side of an assignment it
was found. And no, I'm not asking that Python be changed
this way.

--
Antoon Pardon
Jul 18 '05 #44

Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
In this case I think the practicality of method chaining beats
the purity of not allowing side-effects in print statements and
of having only one obvious way to do things.


Especially since the whole decorator thing is essentially about method
chaining.
Jul 18 '05 #45

Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...
Do I really need to point out that a.extend(b) doesn't work for tuples
and strings, while a+=b works as polymorphically as feasible on all
these types?
That extend doesn't work for strings and tuples is irrelevant.
For those types that have an extend, a.extend(b) is equivalent
to a+=b


It's perfectly relevant, because not all types have extend and += works
para-polymorphically anyway.
In other words there is no reason to have an extend member for
lists.
If lists were being designed from scratch today, there would be a design
decision involved: give them a nicely named normal method 'extend' that
is a synonym for __iadd__, so that the callable bound method can be
nicely extracted as 'mylist.extend' and passed around / stored somewhere
/ etc etc, or demand that people wanting to pass the callable bound
method around use 'mylist.__iadd__' which is somewhat goofied and less
readable. I'm glad I do not have to make that decision myself, but if I
did I would probably tend to err on the side of minimalism -- no
synonyms.
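The readability trade-off just described can be seen by extracting both bound methods from a list (a small sketch):

```python
lst = [1, 2]

grow = lst.extend        # nicely named bound method, easy to pass around
grow([3, 4])

uglier = lst.__iadd__    # same in-place effect, goofier to read
uglier([5])

print(lst)               # [1, 2, 3, 4, 5]
```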

However, lists had extend before += existed, so clearly they kept it for
backwards compatibility. Similarly, dicts keep their method has_key
even though it later became just a synonym for __contains__, etc etc.

If the point you're trying to make here is that Python chooses to be
constrained by backwards compatibility, keeping older approaches around
as new ones get introduced, I do not believe I ever heard anybody
arguing otherwise. You may know that at some unspecified time in the
future a Python 3.0 version will be designed, unconstrained by strict
needs of backwards compatibility and specifically oriented to removing
aspects that have become redundant. Guido has stated so repeatedly,
although he steadfastly refuses to name a precise date. At that time
every aspect of redundancy will be carefully scrutinized, extend versus
_iadd__ and has_key versus __contains__ being just two examples; any
such redundancy that still remain in Python 3.0 will then have done so
by deliberate, specific decision, rather than due to the requirements of
backwards compatibility.

A "greenfield design", an entirely new language designed from scratch,
has no backwards compatibility constraints -- there do not exist million
of lines of production code that must be preserved. One can also decide
to develop a language without special regards to backwards
compatibility, routinely breaking any amount of working code, but that
would be more appropriate to a language meant for such purposes as
research and experimentation, rather than for writing applications in.

furthermore a+=b doesn't work polymorphically feasable because
the result of

a=c
a+=b

is different depending on the type of c.
Indeed it's _para_-polymorphic, exactly as I said, not fully
polymorphic. In many cases, as you code, you know you have no other
references bound to the object in question, and in that case the
polymorphism applies.
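The distinction under discussion, sketched concretely: with a second name bound to the same object, `+=` on a list is visible through that name, while on a tuple it merely rebinds.

```python
c = [1, 2]
a = c
a += [3]          # in-place extension of the shared list
print(c)          # [1, 2, 3] -- c sees the change

c = (1, 2)
a = c
a += (3,)         # builds a new tuple and rebinds only 'a'
print(c)          # (1, 2) -- unchanged
print(a)          # (1, 2, 3)
```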
It should be pretty obvious, I think. So, if you want to
get an AttributeError exception when 'a' is a tuple or str, a.extend(b)
is clearly the way to go -- if you want para-polymorphic behavior in
those cases, a+=b. Isn't it obvious, too?
No it isn't obvious. No design is so stupid that you can't find
an example for it's use. That you have found a specific use
here doesn't say anything.


I can silently suffer MOST spelling mistakes, but please, PLEASE do not
write "it's" where you mean "its", or viceversa: it's the most horrible
thing you can do to the poor, old, long-suffering English language, and
it makes me physically ill. Particularly in a paragraph as content-free
as this one of yours that I've just quoted, where you're really saying
nothing at all, you could AT LEAST make an attempt to respect the rules
of English, if not of logic and common sense.

On to the substance: your assertion is absurd. You say it isn't obvious
that a.extend(b) will raise an exception if a is bound to a str or
tuple, yet it patently IS obvious, given that str and tuple do not have
a method named 'extend'. Whether that's stupid or clever is a
completely different issue, and one which doesn't make your "No it isn't
obvious" assertion any closer to sanity one way or another.

in-place mutation. *IN GENERAL*, the effects of a+=b and a=a+b may
differ, though in specific cases ('a' being immutable, or of a mutable
type which strangely chooses to define __add__ but not __iadd__) they
may be identical.


which makes them para-polymorphic infeasable.


I don't know what "infeasable" means -- it's a word I cannot find in the
dictionary -- and presumably, if it means something like "unfeasible",
you do not know what the construct 'para-polymorphic' means (it means:
polymorphic under given environmental constraints -- I constructed it
from a common general use of the prefix 'para').

Like for a+b vs b+a: in general they may differ, but
they won't differ if the types involved just happen to have commutative
addition, or if a and b are equal or identical objects, i.e., in various
special cases.

"You get different effects *in general*" does not rule out that there
may be special cases (immutable types for one issue,


If those specific cases can be half of the total cases, I wouldn't
call the remaining cases *in general*.


There is no sensible way to specify how to count "cases" of types that
have or don't have commutative addition -- anybody can code their own
types and have their addition operation behave either way. Therefore,
it makes no sense to speak of "half the total cases".

Still, the English expression "in general" is ambiguous, as it may be
used to mean either "in the general case" (that's how it's normally used
in mathematical discourse in English, for example) or "in most cases"
(which is how you appear to think it should exclusively be used).

The point is that there is a difference between what is obvious in
general and what is obvious within a certain tradition. If python
Absolutely true: the fact that a cross like + stands for addition is
only obvious for people coming from cultures in which that symbol has
been used to signify addition for centuries, for example. There is
nothing intrinsic in the graphics of the glyph '+' that makes it
'obviously' mean 'addition'.
would limit itself to only one obvious way for those things that
are obvious in general that would be one way.
I cannot think of _any_ aspect of a programming language that might
pertain to 'things that are obvious in general' as opposed to culturally
determined traits -- going left to right, using ASCII rather than other
alphabets, using '+' to indicate addition, and so on, and so forth.
Please give examples of these 'things that are obvious in general' where
you think Python might 'limit oneself to only one obvious way'.
But here you are defending one thing that is only obvious through
tradition, by pointing out that something that hasn't had the time
to become a tradition isn't obvious.
When there is one operator to do one job, _in the context of that set of
operator_, it IS obviously right to use that operator, rather than using
two operators which, combined, give the same effect. I claim that, no
matter what symbols you use to represent the operators, "TO a, ADD b" is
'more obvious' than "FROM a, SUBTRACT the NEGATIVE of b", because the
former requires one operator, binary ADD, the latter requires two,
binary SUBTRACT and unary NEGATIVE. I do not claim that this is
necessarily so in any culture or tradition whatsoever: I do claim it is
true for cultures sufficiently influenced by Occam's Razor, "Entities
are not to be multiplied beyond necessity", and that the culture to
which Python addresses itself is in fact so influenced. ((If you feel
unable to relate to a culture influenced by Occam's Razor, then it is
quite possible that Python is in fact not suitable for you)).

Personally I don't find the use of "+" as a concat operator obvious.
There are types for which both addition and concatenation can be
useful operators. Using the same symbol for both will make
it just that much harder to implement such types, and as a result
there is no obvious interface for such types.
True, if I were designing a language from scratch I would surely
consider the possiibly of using different operators for addition and
concatenation, and, similarly, for multiplication and repetition --
there are clearly difficult trade-offs here. On one side, in the future
I may want to have a type that has both addition and concatenation
(presumably a numeric array); on the other, if concatenation is a
frequent need in the typical use cases of the language it's hard to
think of a neater way to express it than '+, in this culture (where, for
example, PL/I's concatenation operator '||' has been appropriated by C
to mean something completely different, 'logical or' -- now, using '||'
for concatenation would be very confusing to a target audience that is
more familiar with C than with PL/I or SQL...). Any design choice in
the presence of such opposite constraints can hardly be 'obvious' (and
in designing a language from scratch there is an enormously high number
of such choices to be made -- not independently from each other,
either).

But note that the fact that choosing to use the same operator was not an
_obvious_ choice at the time the language was designed has nothing to do
with the application of the Zen of Python point to 'how to concatenate
two strings'. Python _is_ designed in such a way that the task "how do
I concatenate the strings denoted by names a and b" has one obvious
answer: a+b. This is because of how Python is designed (with + between
sequences meaning concatenation) and already-mentioned cultural aspects
(using a single operator that does job X is the obvious way in a culture
influenced by Occam's Razor to do job X). All alternatives require
multiple operations ('JOIN the LIST FORMED BY ITEMS a and b' -- you have
to form an intermediate list, or tuple, and then join it, for example)
and therefore are not obvious under these conditions. This is even
sometimes unfortunate, since
for piece in makepieces(): bigstring += piece
is such a performance disaster (less so in Python 2.4, happily!), yet
people keep committing it because it IS an "attractive nuisance" -- an
OBVIOUS solution that is not the RIGHT solution. That it's obvious to
most beginners is proven by the fact that so many beginners continue to
do it, even though ''.join(makepieces()) is shorter and faster. I once
hoped that sum(makepieces()) could field this issue, but Guido decided
that having an alternative to ''.join would be bad and had me take the
code out of 'sum' to handle string arguments. Note that I do not
_whine_ about it, even though it meant giving up both one of my pet
ideas _and_ some work I had already done, rather I admit it's his
call... and I use his language rather than making my own because over
the years I've learned that _overall_ his decisions make a better
language than mine would, even though I may hotly differ with him
regarding a few specific decisions out of the huge numbers needed to
build and grow a programming language.
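For reference, the two idioms contrasted above, with a hypothetical `makepieces` standing in for whatever produces the fragments in the text's example:

```python
def makepieces():
    # hypothetical stand-in for whatever generates the string pieces
    return ["spam", "and", "eggs"]

# The 'attractive nuisance': repeated += rebuilds the string each time
bigstring = ""
for piece in makepieces():
    bigstring += piece

# The recommended linear-time form
joined = "".join(makepieces())
print(bigstring == joined)   # True
```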

If I didn't think that, I wouldn't use Python, of course: besides the
possibility of making my own languages, there are many truly excellent
very high level languages to choose among -- Lisp, Smalltalk, Haskell,
ML of many stripes, Erlang, Ruby. I think I could be pretty happy with
any of these... just not quite as happy as I am with Python, therefore
it is with Python that I stick!

If some way of doing it
is useful then I think it should be included, and the fact that
it introduces more than one obvious way to do some things shouldn't
count for much.


This is exactly Perl's philosophy, of course.


No it isn't. Perl offers you choice in a number of situations
where a number of the alternatives don't offer you anything useful,
other than a way to do things differently and eliminate a few characters.


And for some people eliminating some characters is very important and
makes those alternatives preferable and useful to them, according to
their criteria.

Sure, you shouldn't go the Perl way, where things seem to have
been introduced just for the sake of having more than one obvious way
to do things. But eliminating possibilities (method chaining)
just because you don't like them and because they would create
more than one obvious way to do things seems just as bad to
me.


If a language should not eliminate possibilities because its designer
does not like those possibilities, indeed if it's BAD for a language
designer to omit from his language the possibilities he dislikes, what
else should a language designer do then, except include every
possibility that somebody somewhere MIGHT like?


So if you think it is good for a language designer to omit what
he dislikes. Do you think it is equally good for a language
designer to add just because he likes it. And if you think so,
do you think the earlier versions of perl, where we can think
the language was still mainly driven by what Larry Wall liked,
was a good language.


Do you know how to use the question mark punctuation character? It's
hard to say whether you're asking questions or making assertions, when
your word order suggests one thing and your punctuation says otherwise.

"You know a design is finished, not when there is nothing left to add,
but when there is nothing left to take away" (Antoine de Saint Exupery,
widely quoted and differently translated from French). There is no
necessary symmetry between adding features and avoiding them.

But sure, it's a designer's job to add what he likes and thinks
necessary and omit what he dislikes and thinks redundant or worse. I
met Perl when Perl was at release 3.something, and by that time it was
already florid with redundancy -- I believe it was designed that way
from the start, with "&foo if $blah;" and "if($blah) {&foo;}" both
included because some people would like one and others would like the
other, 'unless' as a synonym of 'if not' for similar reasons, etc, etc,
with a design principle based on the enormous redundancy of natural
language (Wall's field of study). ((However, I have no experience with
the very first few releases of Perl)). At the time when I met Perl 3 I
thought it was the best language for my needs under Unix given the
alternatives I believed I had (sh and its descendants, awk -- Rexx was
not available for Unix then, Python I'd never heard of, Lisp would have
cost me money, etc, etc), which is why I used it for years (all the way
to Perl 4 and the dawn of Perl 5...) -- but, no, I never particularly
liked its florid redundancy, its lack of good data structures (at the
time, I do understand the current Perl is a bit better there!), and the
need for stropping just about every identifier. Why do you ask? I do
not see the connection between my opinion of Perl and anything else we
were discussing.

I can understand that a designer has to make choices, but
if the designer can allow a choice and has no other arguments
to limit that choice than that he doesn't like one alternative
then that is IMO a bad design decision.
Ah, you're making a common but deep mistake here: the ability to DO
something great, and the ability to explain WHY one has acted in one way
or another in the process of doing that something, are not connected.

Consider a musician composing a song: the musician's ability to choose a
sequence of notes that when played will sound just wonderful is one
thing, his ability to explain WHY he's put a Re there instead of a Mi is
quite another issue. Would you say "if a musician could have used a
note and has no other arguments to omit that note than that he doesn't
like it then that is a bad music composition decision"? I think it's
absurd to infer, from somebody's inability to explain a decision to your
satisfaction, _or at all_, that the decision is bad.

"Those who can, do, those who can't, explain" may come closer (except
that there _are_ a few musicians, language designers, architects, and
other creative types, who happen to be good at both doing and
explaining, but they're a minority, I believe).

I've never made any claim about Guido's skill as an explainer or
debater, please note. I do implicitly claim he's great at language
design, by freely choosing to use the language he's designed when there
are so many others I could just as freely choose among. (_Your_ use of
Python, on the other hand, is obviously totally contradictory with your
opinion, which you just expressed, that it's a horribly badly designed
language, since its designer is not good at arguing for each and
every one of the countless decisions he's made -- to disallow
possibility a, possibility b, possibility c, and so on, and so forth).

target = mydict[foo].bar[23].zepp
target.pop(xu1)
target.sort()
target.pop(xu3)
target.reverse()
target.pop(xu7)


I find this a questionable practice. What if you need to make the list
empty at some time? The most obvious way to do so after a number of
such statements would be:

target = []

But of course that won't work.


That would be 'obvious' only to someone so totally ignorant of Python's
most fundamental aspects that I _cringe_ to think of that someone using
Python. By asserting it would be obvious, you raise serious
doubts about your competence in Python use.

Assigning to a bare name NEVER mutates the object to which that name
previously referred, if any. NEVER.

Therefore, thinking of assigning to a bare name as a way of mutating an
object is not obvious -- on the contrary, it's absurd, in Python.

One obvious way is:

target[:] = []

"assigning to the CONTENTS of the object" does mutate it, and this works
just fine, of course. Unfortunately there is another way, also obvious:

del target[:]

"deleting the CONTENTS of the object". This will also work just fine.
Alas, it's only _preferable_ that the obvious way be just one, and we
cannot always reach the results we would prefer.

So, your assertion that this is a questionable practice proves
untenable. But then, as this thread shows, _most_ of your assertions
are untenable, so you're clearly comfortable with the fact. I guess it
goes well with freely choosing to use a language which you consider so
badly designed!

If a language goes so far as to make a particular way of coding
impossible while that would have been the preferred coding style for
most of the project members, then such a limitation can hinder the
decision to agree upon a certain style instead of helping it.
And in this case the team should definitely choose another language,
just like you should do instead of wasting your time using Python, and
yours AND ours whining against it.

I also think this attitude is appalling. Python is for consenting
adults I hear. But that doesn't seem to apply here, as python
seems to want to enforce a certain coding style instead of
letting consenting adults work it out among themselves.


Python most definitely does not multiply entities beyond necessity in
order to allow multiple equivalent coding styles -- it's that Occam
Razor thing again, see. If a team wants enormous freedom of design,
short of designing their own language from scratch, they can choose
among Lisp, Scheme, Dylan -- all good languages with enormously powerful
MACRO systems which let you go wild in ways languages without macros
just can't match. Of course, it's likely that nobody besides the
original team can maintain their code later -- that's the flip side of
that freedom... it can go all the way to designing your own language,
and who else but you will know it so they can advise, consult, maintain,
and so on, once you choose to avail yourself of that freedom?

Python values uniformity -- values the ability of somebody "from outside
the team" to read the code, advise and consult about it, and maintain it
later, higher than it values the possibility of giving the team *yet
another way* to design their own language... why would you NEED another
Lisp? There IS one, go and use THAT (or if you can't stand parentheses,
use Dylan -- not far from Lisp with different surface syntax after all).

I also appreciate this uniformity highly -- it lets me advise and
consult all manners of teams using Python, it makes my books and courses
and presentations more useful to them, it lets me turn for advice and
consultancy to the general community regarding my own projects and
teams, all without difficulty. What could possibly be "appalling" in
not wanting to be yet another Lisp, yet another Perl, and so on?! Why
shouldn't there be on this Earth ONE language which makes software
maintenance easier, ONE language which cares more about the ease of
reading others' code than about the ease of writing that code?! Now
THAT absolutism, this absurd attitude of yours that wants to wipe out
from the face of the Earth the ONLY language so focused on uniformity,
egoless and ownerless code, community, maintenance, etc, to turn it into
yet another needless would-be clone of Lisp, Perl, etc... *THAT* is
truly appalling indeed!

Great, so, I repeat: go away and design your language, one that WILL
impress you with its design. Here, you're just wasting your precious
time and energy, as well of course as ours.


That you waste yours is entirely your choice; nobody forces your hand
to reply to me.


Absolutely my choice, of course. But I have a reasonable motivation,
which I have already explained: there may be other readers who would
be ill-served by leaving your untenable assertions, etc etc,
unchallenged, when those assertions &c are so easy to tear into small
bloody pieces and deserve nothing better.

YOUR motivation for using a language you consider badly designed, one
whose underlying culture you find APPALLING (!your choice of word!), and
then spending your time spewing venom against it, is, on the other hand,
totally mysterious.

Practicality beats purity: needing to polymorphically concatenate two
sequences of any kind, without caring if one gets modified or not, is a
reasonably frequent need and is quite well satisfied by += for example.


It isn't. Either you know what types the variables are and then
using a different operator depending on the type is no big deal,
or you don't know what type the variables are and then not caring
if one gets modified or not, is a disaster in waiting.


Your assertion is, once again, totally false and untenable.

def frooble(target, how_many, generator, *args, **kwds):
    for i in xrange(how_many):
        target += generator(i, *args, **kwds)
    return target

Where is the "disaster in waiting" here? The specifications of
'frooble' are analogous to those of '+=': if you pass it a first
argument that is mutable it will extend it, otherwise it obviously
won't.
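A concrete run of this (re-sketched with `range`, since `xrange` existed only in the Python 2 of the thread; the `frooble` name and the lambdas are just illustrations):

```python
def frooble(target, how_many, generator, *args, **kwds):
    # += extends a mutable target in place, or rebinds an immutable one
    for i in range(how_many):
        target += generator(i, *args, **kwds)
    return target

lst = [0]
result = frooble(lst, 3, lambda i: [i])
print(result)          # [0, 0, 1, 2]
print(result is lst)   # True -- the list argument was extended in place

tup = (0,)
result = frooble(tup, 3, lambda i: (i,))
print(result)          # (0, 0, 1, 2) -- a freshly built tuple
print(tup)             # (0,) -- the tuple argument is untouched
```

The same code path handles both cases; only the mutability of the first argument decides whether the caller's object is modified.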
Alex
Jul 18 '05 #46

P: n/a
al*****@yahoo.com (Alex Martelli) writes:
[...]
Maybe one day I'll be psychologically able to use killfiles more
consistently, whenever I notice some poster that I can reliably classify
as a useless flamer, and let readers of that poster's tripe watch out
for themselves. But still I find it less painful to just drop out of
c.l.py altogether when I once again realize I just can't afford the time
to show why every flawed analysis in the world IS flawed, why every
false assertion in the world IS false, and so on -- and further realize
that there will never be any shortage of people eager to post flawed
analysis, false assertions, and so on, to any public forum.

[...]

It can be amusing, in a sadistic sort of way, to watch you attempt to
nail to the floor every protruding flabby piece of argument, no matter
how peripheral or repetitious. It's not *always* as edifying as other
ways you could spend your time, though...

But I do see the temptation :-/
John
Jul 18 '05 #47

P: n/a
On 2004-08-27, Alex Martelli <al*****@yahoo.com> wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
...
> It should be pretty obvious, I think. So, if you want to
> get an AttributeError exception when 'a' is a tuple or str, a.extend(b)
> is clearly the way to go -- if you want para-polymorphic behavior in
> those cases, a+=b. Isn't it obvious, too?
No it isn't obvious. No design is so stupid that you can't find
an example for it's use. That you have found a specific use
here doesn't say anything.


I can silently suffer MOST spelling mistakes, but please, PLEASE do not
write "it's" where you mean "its", or viceversa: it's the most horrible
thing you can do to the poor, old, long-suffering English language, and
it makes me physically ill. Particularly in a paragraph as content-free
as this one of yours that I've just quoted, where you're really saying
nothing at all, you could AT LEAST make an attempt to respect the rules
of English, if not of logic and common sense.

On to the substance: your assertion is absurd. You say it isn't obvious
that a.extend(b) will raise an exception if a is bound to a str or
tuple, yet it patently IS obvious, given that str and tuple do not have
a method named 'extend'.


That is a non sequitur: the fact that something is a given doesn't
make that something obvious. I dare say that if you have two pieces
of otherwise equal code, one that uses extend and the other that uses
+=, it will not be obvious to the reader that you intend the first to
work only with lists and the other to work with strings and tuples
too. He can probably figure it out, but IMO it is not the clearest
way to make that distinction.

Whether that's stupid or clever is a
completely different issue, and one which doesn't make your "No it isn't
obvious" assertion any closer to sanity one way or another.


IMO, obvious means that it is the first thing that comes to mind
when someone reads the code. IMO it is not obvious in that sense.
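For readers following along, the two behaviors the posters are arguing about look like this (a small sketch):

```python
a = [1, 2]
a.extend((3, 4))       # lists have .extend; it accepts any iterable
print(a)               # [1, 2, 3, 4]

t = (1, 2)
try:
    t.extend((3, 4))   # tuples have no such method
except AttributeError as e:
    print(e)           # AttributeError: tuple has no 'extend'

t += (3, 4)            # += is "para-polymorphic": rebinds t to a new tuple
print(t)               # (1, 2, 3, 4)
```

So `a.extend(b)` insists on a mutable sequence and fails loudly otherwise, while `a += b` works for lists, tuples, and strings alike, mutating in place only when it can.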

--
Antoon Pardon
Jul 18 '05 #48
