
Attack a sacred Python Cow

Hi everyone,

I'm a big Python fan who used to be involved semi regularly in
comp.lang.python (lots of lurking, occasional posting) but kind of
trailed off a bit. I just wrote a frustration inspired rant on my
blog, and I thought it was relevant enough as a wider issue to the
Python community to post here for your discussion and consideration.

This is not flamebait. I love Python, and I'm not out to antagonise
the community. I also realise that one of the issues I raise is way
too ingrained to be changed now. I'd just like to share my thinking on
a misstep in Python's guiding principles that has done more harm than
good IMO. So anyway, here's the post.

I've become utterly convinced that at least one criticism leveled at
my favourite overall programming language, Python, is utterly true and
fair. After quite a while away from writing Python code, I started
last night on a whim to knock up some code for a prototype of an idea
I once had. It's going swimmingly; the Python Imaging Library (PIL), which I'd
never used before, seems quick, intuitive, and has all the
features I need for this project. As for Python itself, well, my heart
still belongs to whitespace delimitation. All the basics of Python
coding are there in my mind like I never stopped using them, or like
I've been programming in this language for 10 years.

Except when it comes to Classes. I added some classes to code that had
previously just been functions, and you know what I did - or rather,
forgot to do? Put in the 'self'. In front of some of the variable
accesses, but more noticeably, at the start of *every single method
argument list.* This can no longer be blamed as a hangover from
Java - I've written a ton more code, more recently in Python than in
Java or any other OO language. What's more, every time I go back to
Python after a break of more than about a week or so, I start making
this 'mistake' again. The perennial justification for this 'feature'
of the language? That old Python favourite, "Explicit is better than
implicit."

I'm sorry, but EXPLICIT IS NOT NECESSARILY BETTER THAN IMPLICIT.
Assembler is explicit FFS. Intuitive, clever, dependable, expected,
well-designed *implicit* behaviour is one of the chief reasons why I
use a high level language. Implicitly garbage collect old objects for
me? Yes, please!

I was once bitten by a Python wart I felt was bad enough to raise and
spend some effort advocating change for on comp.lang.python (never got
around to doing a PEP; partly laziness, partly young and inexperienced
enough to be intimidated at the thought. Still am, perhaps.)

The following doesn't work as any sane, reasonable person would
expect:

# Blog code, not tested
class A():
    def __eq__(self, obj):
        return True

a = A()
b = []
assert a == b
assert not (a != b)

The second assertion fails. Why? Because coding __eq__, the most
obvious way to make a class have equality based comparisons, buys you
nothing from the != operator. != isn't (by default) a synonym for the
negation of == (unlike in, say, every other language ever); not only
will Python let you make them mean different things, without
documenting this fact - it actively encourages you to do so.

There were a disturbingly high number of people defending this
(including one quite renowned Pythonista, think it might have been
Effbot). Some had the temerity to fall back on "Explicit is better
than implicit: if you want != to work, you should damn well code
__ne__!"

Why, for heaven's sake, should I have to, when in 99.99% of use cases
(and of those 0.01% instances quoted in the argument at the time only
one struck me as remotely compelling) every programmer is going to
want __ne__ to be the logical negation of __eq__? Why, dear Python,
are you making me write evil Java-style language power reducing
boilerplate to do the thing you should be doing yourself anyway?
What's more, every programmer is going to unconsciously expect it to
work this way, and be as utterly mystified as I am when it fails to
do so. Don't tell me to RTFM and don't tell me to be explicit. I'll
repeat myself - if I wanted to be explicit, I'd be using C and
managing my own memory thank you very much. Better yet, I'd explicitly
and graphically swear - swear in frustration at this entrenched design
philosophy madness that afflicts my favourite language.
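For the record, the boilerplate being demanded looks something like this (a minimal sketch; the Point class is invented purely for illustration):

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return (self.x, self.y) == (other.x, other.y)

    def __ne__(self, other):
        # The method this post argues should not be necessary to write:
        result = self.__eq__(other)
        if result is NotImplemented:
            return result
        return not result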

I think the real problem with "explicit is better than implicit",
though, is that while you can see the underlying truth it's trying to
get at (which is perhaps better expressed by Ruby's more equivocal,
less dependable, but more useful Principle of Least Surprise), in its
stated form it's actually kind of meaningless and is used mainly in
defence of warts - no, let's call them what they are, language
design *bugs*.

You see, the problem is, there's no such thing as explicit in
programming. It's not a question of not doing things implicitly; it's a
question of doing the most sensible thing implicitly. For example, this
Python code:

some_obj.some_meth(some_arg1, some_arg2)

is implicitly equivalent to

SomeClass.some_meth(some_obj, some_arg1, some_arg2)

which in turn gives us self as a reference to some_obj, and Python's
OO model merrily pretends it's the same as Java's when in fact it's a
smarter version that just superficially looks the same.
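A minimal sketch of that equivalence (the class and method names are invented for illustration):

class Greeter(object):
    def greet(self, name):
        return "hello, %s" % name

g = Greeter()

# These two calls do the same thing; the first is sugar for the second.
assert g.greet("world") == Greeter.greet(g, "world")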

The problem is that the explicit requirement to have self at the start
of every method is something that should be shipped off to the
implicit category. You should have to be explicit, yes - explicit when
you want the *other* behaviour, of self *not* being an argument,
because that's the more unusual, less likely case.

Likewise,

a != b

is implicitly equivalent to something like calling this function (may
not be correct, it's a while since I was heavily involved in this
issue):

def not_equal(a, b):
    if hasattr(a, "__ne__"): return a.__ne__(b)
    if hasattr(b, "__ne__"): return b.__ne__(a)
    if hasattr(a, "__cmp__"): return not (a.__cmp__(b) == 0)
    if hasattr(b, "__cmp__"): return not (b.__cmp__(a) == 0)
    return not (a is b)

There's absolutely nothing explicit about this. I wasn't arguing for
making behaviour implicit; I was arguing for changing the stupid
implicit behaviour to something more sensible and less surprising.

The sad thing is there are plenty of smart Python programmers who will
justify all kinds of idiocy in the name of their holy crusade against
the implicit.

If there was one change I could make to Python, it would be to get
that damn line out of the Zen.
Jul 24 '08
On Jul 27, 12:39 pm, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:
Derek Martin wrote:
On Sun, Jul 27, 2008 at 08:19:17AM +0000, Steven D'Aprano wrote:
>You take the name down to a single letter. As I suggested in an earlier
post on this thread, why not take it down to zero letters?
The question isn't "why not", but "why". The status quo works well as it
is, even if it isn't perfect. Prove that implicit self is a good idea --
or at least prove that it is an idea worth considering.
Come on, this sounds like a schoolyard argument. This comes down to a
matter of style, and as such, is impossible to prove. It's largely a
question of individual preference.
That said, the argument in favor is rather simple:
1. This is an extremely common idiom in Python
2. It is completely unnecessary, and the language does not suffer for
making it implicit
3. Making it implicit reduces typing, reduces opportunities for
mistakes, and arguably increases consistency.

"arguably", indeed, cf below.
As for the latter part of #3, self (or some other variable) is
required in the parameter list of object methods,

It's actually the parameter list of the *function* that is used as the
implementation of a method. Not quite the same thing. And then,
consistency mandates that the target object of the method is part of the
parameter list of the *function*, since that's how you make objects
available to a function.
however when the
method is *called*, it is omitted.

Certainly not. You need to look up the corresponding attribute *on a
given object* to get the method. Whether you write

some_object.some_method()

or

some_function(some_object)

you still need to explicitly mention some_object.
It is implied, supplied by Python.

Neither. The target object is passed to the function by the method
object, which is itself returned by the __get__ method of function
objects, which is one possible application of the more general
descriptor protocol (the same protocol that is used for computed
attributes). IOW, there's nothing specific to 'methods' here, just the
use of two general features (functions and the descriptor protocol).
FWIW, you can write your own callable, and write it so it behaves just
like a function here:

import types

class MyCallable(object):
    def __call__(self, obj):
        print "calling %s with %s" % (self, obj)
    def __get__(self, instance, cls):
        return types.MethodType(self.__call__, instance, cls)

class Foo(object):
    bar = MyCallable()

print Foo.bar
f = Foo()
f.bar()
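A plain function behaves the same way through the descriptor protocol; a small sketch reusing the f and Foo defined above ('greet' is an invented name, Python 2 spelling as in the snippet):

def greet(obj):
    print "greeting %s" % obj

bound = greet.__get__(f, Foo)   # same mechanism that makes f.bar work
bound()                         # prints "greeting <Foo object ...>"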
Thus when an object method is called, it must be called with one fewer
arguments than those which are defined. This can be confusing,
especially to new programmers.

This is confusing as long as you insist on saying that what you
"def"ined is a method - which is not the case.
It can also be argued that it makes the code less ugly, though again,
that's a matter of preference.
It's not enough to show that a change "isn't bad" -- you have to show
that it is actively good.
But he did... he pointed out that *it saves work*, without actually
being bad. Benefit, without drawback. Sounds good to me!
"Don't need to look at the method signature" is not an argument in favour
of implicit self.
Yes, actually, it is.

It isn't, since there's no "method signature" to look at !-)
If there is a well-defined feature of Python
which provides access to the object within itself,

The point is that you don't get access to the object "within itself".
You get access to an object *within a function*.

The fact that a function is defined within a class statement doesn't
imply any "magic"; it just creates a function object, binds it to a name,
and makes that object an attribute of the class. You have the very same
result by defining the function outside the class statement and binding
it within the class statement, by defining the function outside the
class and binding it to the class outside the class statement, by
binding the name to a lambda within the class statement etc...
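A minimal sketch of that point (the names are invented for illustration):

def describe(obj):
    return "I am %r" % obj

class Widget(object):
    pass

# Binding after the fact gives the same result as a def inside the class body.
Widget.describe = describe

w = Widget()
print w.describe()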
then the
opportunities for mistakes when someone decides to use something else
are lessened.
You don't need to look at the method signature when you're using an
explicit self either.
That isn't necessarily true. If you're using someone else's code, and
they didn't use "self" -- or worse yet, if they chose this variable's
name randomly throughout their classes -- then you may well need to
look back to see what was used.
It's bad programming, but the world is full of bad programmers, and we
don't always have the choice not to use their code. Isn't one of
Python's goals to minimize opportunities for bad programming?

Nope. That's Java's goal. Python's goals are to maximize opportunities
for good programming, which is quite different.
Providing a keyword equivalent to self and removing the need to name
it in object methods is one way to do that.

It's also a way to make Python more complicated than it needs to be. At
least with the current state, you define your functions the same way
regardless of where they are defined, and the implementation is
(relatively) easy to explain. Special-casing function definitions that
happen within a class statement would only introduce a special case.
Then you'd have to explain why you need to specify the target object in
the function's parameters when the function is defined outside the class
but not when it's defined within the class.

IOW: there's one arguably good reason to drop the target object from
functions used as method implementations, which is to make Python look
more like Java, and there are at least two good reasons to keep it the way
it is, which are simplicity (no special case) and consistency (no
special case).

Anyway, the BDFL has the final word, and it looks like he's not going to
change anything here - but anyone is free to propose a PEP, aren't they?
The issue here has nothing to do with the inner workings of the Python
interpreter. The issue is whether an arbitrary name such as "self"
needs to be supplied by the programmer.

Neither I nor the person to whom you replied here (as far as I can
tell) is suggesting that Python adopt the syntax of Java or C++, in
which member data or functions can be accessed the same as local
variables. Any suggestion otherwise is a red herring.

All I am suggesting is that the programmer have the option of
replacing "self.member" with simply ".member", since the word "self"
is arbitrary and unnecessary. Otherwise, everything would work
*EXACTLY* the same as it does now. This would be a shallow syntactical
change with no effect on the inner workings of Python, but it could
significantly unclutter code in many instances.

The fact that you seem to think it would change the inner functioning
of Python just shows that you don't understand the proposal.
Jul 27 '08 #101
On Jul 27, 3:11 pm, "Russ P." <Russ.Paie...@gmail.com> wrote:
[Russ P.'s preceding post, quoted in full, snipped]
It just occurred to me that Python could allow the ".member" access
regardless of the name supplied in the argument list:

class Whatever:

    def fun(self, cat):

        .cat = cat        # proposed shorthand, not valid Python today
        self.cat += 1

This allows the programmer to use ".cat" and "self.cat"
interchangeably. If the programmer intends to use only the ".cat"
form, the first argument becomes arbitrary. Allowing him to use an
empty argument or "." would simply tell the reader of the code that
the ".cat" form will be used exclusively.

When I write a function in which a data member will be used several
times, I usually do something like this:

data = self.data

so I can avoid the clutter of repeated use of "self.data". If I could
just use ".data", I could avoid most of the clutter without the extra
line of code renaming the data member. A bonus is that it becomes
clearer at the point of usage that ".data" is member data rather than
a local variable.

Jul 27 '08 #102
On Sun, 27 Jul 2008 12:33:16 -0700, Russ P. wrote:
On Jul 27, 1:19 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
>On Sat, 26 Jul 2008 17:14:46 -0700, Russ P. wrote:
You take the name down to a single letter. As I suggested in an
earlier post on this thread, why not take it down to zero letters?

The question isn't "why not", but "why". The status quo works well as
it is, even if it isn't perfect. Prove that implicit self is a good
idea -- or at least prove that it is an idea worth considering.

"I don't like typing self" doesn't convince me. The same argument could
be made typing parentheses, colons, commas, etc. We could end up with
something like this:

class Foo base
    def method x y z
        .args = list x y z

That's not necessarily wrong, but it's not Python.

And what does that have to do with my suggestion? Absolutely nothing.
Not at all. You're suggesting a change to Python's syntax. I've suggested
a couple more changes to Python syntax. I don't intend them to be taken
seriously, but only to illustrate a point that syntax defines how a
language is written. You want to change that.

It's a red herring that you seem to be using to obscure the fact that
you have no rational argument to make.
I don't have to make a rational argument for keeping the status quo. That
status quo just *is*. You want people to change, you need to convince
them that such a change is not just "not bad" but a serious advantage,
enough to make up for all the work required to implement it.

I'm listening. Tell me why removing self is not merely harmless, but
actively better.

[...]

>By "better" do you mean "uglier"? If so, I agree with you. If not, then
I disagree that it is better.

You seem to be freaked out by an empty argument. Actually, it bothers me
a bit too, which is why I suggested that a period could be used as the
first argument to indicate that, like Clint Eastwood in The Good, the
Bad, and the Ugly, "self" had no name here.
Well there you go now. How should we *talk* about this piece of code? Try
writing a comment or docstring, or even sitting down with a fellow
programmer and discussing it. What do you call this implicit Object With
No Name?

def fun( , cat):
    .cat = cat          # assumes that the Object With No Name has 'cat'

versus

def fun(self, cat):
    self.cat = cat      # assumes that self has 'cat'

Before you suggest that people will continue to call the first argument
"self" but just not write it down anywhere, I suggest that's a terrible
idea and one which will confuse a lot of people. "Where's this 'self'
defined? I can't find it anywhere!"

A slightly better suggestion is "the instance", but that fails here:

class C(object):
    def method(, other):
        assert isinstance(other, C)
        .cat = other    # assumes that the instance has 'cat'
                        # er, that is to say, the implicit instance,
                        # not the other instance
The ability to talk easily about the code is invaluable. Implicit self
makes it harder to talk about the code.
[...]
>Even uglier than the first. Convince me there's a benefit.

Actually, I think it's elegant. And I'll bet that if Guido had suggested
it, you would think it was beautiful.
Oh please. I think the syntax for ternary if is ugly, and Guido came up
with that, and it doesn't even use punctuation.
Why force a name to be used when none is needed?
But a name is needed.

class Foo(base1, base2, base3):
    def meth(self, arg):
        super(Foo, self).meth(arg)
        print self
        try:
            value = _cache[self]
        except KeyError:
            value = some_long_calculation(self)
How do you pass self to arbitrary functions without a name?

--
Steven
Jul 27 '08 #103
On Sun, 27 Jul 2008 16:04:43 -0400, Colin J. Williams wrote:
>For those who don't like the way the empty first argument looks, maybe
something like this could be allowed:

def fun( ., cat):
I don't see the need for the comma in fun.
<tongue firmly in cheek>
Or the parentheses and colon. Can we remove them too?
--
Steven
Jul 27 '08 #104
Russ P. wrote:
On Jul 27, 12:39 pm, Bruno Desthuilliers
All I am suggesting is that the programmer have the option of
replacing "self.member" with simply ".member", since the word "self"
is arbitrary and unnecessary.
I presume you are proposing the opposite also, that ".member" would
internally be expanded to "self.member".

As I said before, that, or something very like it (it is hard to exactly
compare underspecified proposals), has been suggested and rejected, even if
no one gave you the exact reference. For one thing, as Guido
noted, a single . can be hard to see and easy to miss, depending on
one's eyesight, environmental lighting, and exact display medium,
including font.

I suspect Guido's other reasons have been covered, but I do not want to
misquote him. I will leave you to search the pydev list archives.
Otherwise, everything would work *EXACTLY* the same as it does now.
If I understand you, that would mean that .attribute would raise
NameError: name 'self' is not defined
if used anywhere where 'self' was not defined.

Jul 28 '08 #105


Russ P. wrote:
When I write a function in which a data member will be used several
times, I usually do something like this:

data = self.data

so I can avoid the clutter of repeated use of "self.data".
Another reason people do this is for speed, even if self.data is used
just once but in a loop.
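A small sketch of that speed idiom (the names are invented for illustration):

class Accumulator(object):
    def __init__(self):
        self.data = []

    def extend_squares(self, values):
        data = self.data        # one attribute lookup instead of one per iteration
        append = data.append    # the same trick applied to the bound method
        for v in values:
            append(v * v)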

Jul 28 '08 #106
On Jul 27, 3:54 pm, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
On Sun, 27 Jul 2008 12:33:16 -0700, Russ P. wrote:
On Jul 27, 1:19 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
On Sat, 26 Jul 2008 17:14:46 -0700, Russ P. wrote:
You take the name down to a single letter. As I suggested in an
earlier post on this thread, why not take it down to zero letters?
The question isn't "why not", but "why". The status quo works well as
it is, even if it isn't perfect. Prove that implicit self is a good
idea -- or at least prove that it is an idea worth considering.
"I don't like typing self" doesn't convince me. The same argument could
be made typing parentheses, colons, commas, etc. We could end up with
something like this:
class Foo base
    def method x y z
        .args = list x y z
That's not necessarily wrong, but it's not Python.
And what does that have to do with my suggestion? Absolutely nothing.

Not at all. You're suggesting a change to Python's syntax. I've suggested
a couple more changes to Python syntax. I don't intend them to be taken
seriously, but only to illustrate a point that syntax defines how a
language is written. You want to change that.
But the syntax change I am suggesting is trivial compared to the
draconian examples you gave.
It's a red herring that you seem to be using to obscure the fact that
you have no rational argument to make.

I don't have to make a rational argument for keeping the status quo. That
status quo just *is*. You want people to change, you need to convince
them that such a change is not just "not bad" but a serious advantage,
enough to make up for all the work required to implement it.
I'm listening. Tell me why removing self is not merely harmless, but
actively better.

[...]
I thought I did just that. I am very meticulous about the appearance
of my code, and the less cluttered the better. That's one of the main
reasons that I use Python. My suggestion would be relatively trivial
to implement, yet it would dramatically reduce clutter. You may not
agree, but I think the case is strong.
By "better" do you mean "uglier"? If so, I agree with you. If not, then
I disagree that it is better.
You seem to be freaked out by an empty argument. Actually, it bothers me
a bit too, which is why I suggested that a period could be used as the
first argument to indicate that, like Clint Eastwood in The Good, the
Bad, and the Ugly, "self" had no name here.

Well there you go now. How should we *talk* about this piece of code? Try
writing a comment or docstring, or even sitting down with a fellow
programmer and discussing it. What do you call this implicit Object With
No Name?
How do Java and C++ programmers talk about the instance for which the
method was called? I wasn't aware that that was a problem for them.
def fun( , cat):
    .cat = cat          # assumes that the Object With No Name has 'cat'

versus

def fun(self, cat):
    self.cat = cat      # assumes that self has 'cat'

Before you suggest that people will continue to call the first argument
"self" but just not write it down anywhere, I suggest that's a terrible
idea and one which will confuse a lot of people. "Where's this 'self'
defined? I can't find it anywhere!"
Any programmer who can't get past that point needs to find a new line
of work -- such as moving furniture.
A slightly better suggestion is "the instance", but that fails here:

class C(object):
    def method(, other):
        assert isinstance(other, C)
        .cat = other    # assumes that the instance has 'cat'
                        # er, that is to say, the implicit instance,
                        # not the other instance

The ability to talk easily about the code is invaluable. Implicit self
makes it harder to talk about the code.
I can only imagine what you must think about lambda functions and list
comprehensions.
[...]
Even uglier than the first. Convince me there's a benefit.
Actually, I think it's elegant. And I'll bet that if Guido had suggested
it, you would think it was beautiful.

Oh please. I think the syntax for ternary if is ugly, and Guido came up
with that, and it doesn't even use punctuation.
Off topic, but I happen to like Guido's ternary "if." I use it
wherever I can, within reason.
Why force a name to be used when none is needed?

But a name is needed.

class Foo(base1, base2, base3):
    def meth(self, arg):
        super(Foo, self).meth(arg)
        print self
        try:
            value = _cache[self]
        except KeyError:
            value = some_long_calculation(self)

How do you pass self to arbitrary functions without a name?
I didn't say you *never* need the name. If you need it, then use it.
But if you don't need it, you shouldn't be forced to use a name just
for the sake of having a name.
Jul 28 '08 #107
On Jul 28, 4:59 am, "Russ P." <Russ.Paie...@gmail.com> wrote:
On Jul 27, 3:11 am, alex23 <wuwe...@gmail.com> wrote:
On Jul 27, 4:26 pm, "Russ P." <Russ.Paie...@gmail.com> wrote:
On Jul 26, 11:18 pm, Terry Reedy <tjre...@udel.edu> wrote:
The use of <nothing>'.' has been suggested before and rejected.
Where and why?
Google is your friend: http://mail.python.org/pipermail/pyt...il/000793.html

What Guido rejected there is most certainly *not*
what I suggested. I agree with Guido on that one.
Orly?

Ian Bicking wrote: "I propose that the self argument be removed from
method definitions."

Philip Eby suggested:
def .aMethod(arg1, arg2):
    return .otherMethod(arg1*2+arg2)
Guido shot them all down by stating:
[Y]ou're proposing to hide a
fundamental truth in Python, that methods are "just" functions whose
first argument can be supplied using syntactic sugar
Any more reading comprehension we can do for you?
Jul 28 '08 #108
On Sun, Jul 27, 2008 at 09:39:26PM +0200, Bruno Desthuilliers wrote:
As for the latter part of #3, self (or some other variable) is
required in the parameter list of object methods,

It's actually the parameter list of the *function* that is used as the
implementation of a method. Not quite the same thing.
The idea that Python behaves this way is new to me. For example, the
tutorials make no mention of it:

http://docs.python.org/tut/node11.ht...00000000000000

The Python reference manual has very little to say about classes,
indeed. If it's discussed there, it's buried somewhere I could not
easily find it.
consistency mandates that the target object of the method is part of
the parameter list of the *function*, since that's how you make
objects available to a function.
Fair enough, but I submit that this distinction is abstruse, and
poorly documented, and also generally not something the average
application developer should want to or have to care about... it's of
interest primarily to computer scientists and language enthusiasts.
The language should prefer to hide such details from the people using
it.
however when the method is *called*, it is omitted.

Certainly not.
Seems not so certain to me... We disagree, even after your careful
explanation. See below.
You need to lookup the corresponding attribute *on a given object*
to get the method. Whether you write

some_object.some_method()

or

some_function(some_object)

you still need to explicitly mention some_object.
But these two constructs are conceptually DIFFERENT, whether or not
their implementation is the same or similar. The first says that
some_method is defined within the name space of some_object. The
second says that some_object is a parameter of some_function...

Namespace != parameter!!!!!!!!!

To many people previously familiar with OO programming in other
languages (not just Java or C++), but not intimately familiar with
Python's implementation details, the first also implies that
some_method is inherently part of some_object, in which case
explicitly providing a parameter to pass in the object naturally seems
kind of crazy. The method can and should have implicit knowledge of
what object it has been made a part. Part of the point of using
objects is that they do have special knowledge of themselves... they
(generally) manipulate data that's part of the object. Conceptually,
the idea that an object's methods can be defined outside of the scope
of the object, and need to be told what object they are part
of/operating on is somewhat nonsensical...
Thus when an object method is called, it must be called with one fewer
arguments than those which are defined. This can be confusing,
especially to new programmers.

This is confusing as long as you insist on saying that what you
"def"ined is a method - which is not the case.
I can see now the distinction, but please pardon my prior ignorance,
since the documentation says it IS the case, as I pointed out earlier.
Furthermore, as you described, defining the function within the scope
of a class binds a name to the function and then makes it a method of
the class. Once that happens, *the function has become a method*.

To be perfectly honest, the idea that an object method can be defined
outside the scope of an object (i.e. where the code has no reason to
have any knowledge of the object) seems kind of gross to me... another
Python wart. One which could occasionally be useful I suppose, but a
wart nonetheless. This seems inherently not object-oriented at all,
for reasons I've already stated. It also strikes me as a feature
designed to encourage bad programming practices.

Even discounting that, if Python had a keyword which referenced the
object of which a given piece of code was a part, e.g. self, then a
function written to be an object method could use this keyword *even
if it is defined outside of the scope of a class*. The self keyword,
once the function was bound to an object, would automatically refer to
the correct object. If the function were called outside of the
context of an object, then referencing self would result in an
exception.

You'll probably argue that this takes away your ability to define a
function and subsequently use it both as a stand-alone function and
also as a method. I'm OK with that -- while it might occasionally
be useful, I think if you feel the need to do this, it probably means
your program design is wrong/bad. More than likely what you really
needed was to define a class that had the function as a method, and
another class (or several) that inherits from the first.

The point is that you don't get access to the object "within itself".
You get access to an object *within a function*.
Thus methods are not really methods at all, which would seem to
suggest that Python's OO model is inherently broken (albeit by design,
and perhaps occasionally to good effect).
The fact that a function is defined within a class statement doesn't
imply any "magic",
It does indeed -- it does more than imply. It states outright that
the function is defined within the namespace of that object, and as
such that it is inherently part of that object. So why should it need
to be explicitly told about the object of which it is already a part?

It further does indeed imply, to hordes of programmers experienced
with OO programming in other languages, that as a member, property,
attribute, or what ever you care to call it, of the object, it should
have special knowledge about the object of which it is a part. It
just so happens that in Python, this implication is false.
IOW: there's one arguably good reason to drop the target object from
functions used as method implementations, which is to make Python look
more like Java
No, that's not the reason. I don't especially like Java, nor do I use
it. The reason is to make the object model behave more intuitively.
>, and there are at least two good reasons to keep it the way it is,
which are simplicity (no special case) and consistency (no special
case).
Clearly a lot of people find that it is less simple TO USE. The point
of computers is to make hard things easier... if there is a task that
is annoying, or tedious, or repetitive, it should be done by code, not
humans. This is something that Python should do automatically for its
users.

--
Derek D. Martin
http://www.pizzashack.org/
GPG Key ID: 0x81CFE75D

Jul 28 '08 #109
In message
<63**********************************@a1g2000hsb.googlegroups.com>,
s0****@gmail.com wrote:
On Jul 26, 6:47 pm, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
>In message
<024ace13-f72f-4093-bcc9-f8a339c32...@v1g2000pra.googlegroups.com>,

s0s...@gmail.com wrote:
On Jul 24, 5:01 am, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
>In message
<52404933-ce08-4dc1-a558-935bbbae7...@r35g2000prm.googlegroups.com>,
Jordan wrote:
Except when it comes to Classes. I added some classes to code that
had previously just been functions, and you know what I did - or
rather, forgot to do? Put in the 'self'. In front of some of the
variable accesses, but more noticeably, at the start of *every single
method argument list.*
>The reason is quite simple. Python is not truly an "object-oriented"
language. It's sufficiently close to fool those accustomed to OO ways
of doing things, but it doesn't force you to do things that way. You
still have the choice. An implicit "self" would take away that choice.
By that logic, C++ is not OO.

Yes it is, because it has "this".

You mean the keyword "this"? It's just a feature. How does that make a
difference on being or not being OO?
Because it was one of the things the OP was complaining about (see above).
Jul 28 '08 #110
On Jul 27, 2:39 pm, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:
Derek Martin wrote:
It's bad programming, but the world is full of bad programmers, and we
don't always have the choice not to use their code. *Isn't one of
Python's goals to minimize opportunities for bad programming?

Nope. That's Java's goal. Python's goals are to maximize opportunities
for good programming, which is quite different.
+1 QOTW
Jul 28 '08 #111
On Jul 27, 6:21 pm, Terry Reedy <tjre...@udel.edu> wrote:
[Terry Reedy's reply, quoted in full, snipped]
After thinking about this a bit more, let me try to be more specific.

Forget about the empty first argument and the "." for the first
argument. Just let the first argument be "self" or anything the
programmer chooses. No change there.

If access is needed to "self.var", let it be accessible as either
"self.var" or simply ".var". Ditto for "self.method()", which would be
accessible as ".method()".

In other words, wherever ".var" appears, let it be interpreted as
"<arg1>.var". If the first argument is "self", then it should be
equivalent to "self.var". If the first argument is "snafu", then
".var" should be equivalent to "snafu.var".

I can't think of any technical problem with this proposal, but I may
be overlooking something. If so, please let me know.

This proposal should be relatively easy to implement, and it would
reduce code clutter significantly. (I could probably write a
pre-processor to implement it myself in less than a day.)
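For illustration, here is roughly what the shorthand would mean in terms of today's Python (the hypothetical syntax appears only in the comments; the Counter class is invented for the example):

# Hypothetical, under the proposal:
#     class Counter(object):
#         def bump(self, n):
#             .count = .count + n      # read as "<first arg>.count"
#
# Equivalent current Python:
class Counter(object):
    def __init__(self):
        self.count = 0

    def bump(self, n):
        self.count = self.count + n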
Jul 28 '08 #112
On Jul 27, 8:38 pm, alex23 <wuwe...@gmail.com> wrote:
[alex23's post, quoted in full, snipped]
Dude, I agree with Guido completely on this one. You
seem to be clueless about the issue here. You're the
one with the reading comprehension problem. Please
quit wasting my time with your irrelevant crap.
Jul 28 '08 #113
On Jul 27, 8:58 pm, castironpi <castiro...@gmail.com> wrote:
On Jul 27, 2:39 pm, Bruno Desthuilliers

<bdesth.quelquech...@free.quelquepart.fr> wrote:
Derek Martin wrote:
It's bad programming, but the world is full of bad programmers, and we
don't always have the choice not to use their code. Isn't one of
Python's goals to minimize opportunities for bad programming?
Nope. That's Java's goal. Python's goals are to maximize opportunities
for good programming, which is quite different.
Oh, gosh, that is so clever. What a bunch of crap.
+1 QOTW
Do you realize what an insult that is to everyone else who has posted
here in the past week?
Jul 28 '08 #114
On Jul 27, 4:26 pm, "Russ P." <Russ.Paie...@gmail.com> wrote:
Terry Reedy <tjre...@udel.edu> wrote:
The use of <nothing>'.' has been suggested before and rejected.
Where and why?
Dude, I agree with Guido completely on this one. You
seem to be clueless about the issue here. You're the
one with the reading comprehension problem. Please
quit wasting my time with your irrelevant crap.
I pointed you at a thread -where it had been suggested and rejected-.
And I'm the clueless one?

I don't think I'm the one wasting anyone's time here, but fine. I've
got far better things to do with my time than waste it talking to you.
Jul 28 '08 #115
On Jul 27, 9:44 pm, alex23 <wuwe...@gmail.com> wrote:
On Jul 27, 4:26 pm, "Russ P." <Russ.Paie...@gmail.com> wrote:
Terry Reedy <tjre...@udel.edu> wrote:
The use of <nothing>'.' has been suggested before and rejected.
Where and why?
Dude, I agree with Guido completely on this one. You
seem to be clueless about the issue here. You're the
one with the reading comprehension problem. Please
quit wasting my time with your irrelevant crap.

I pointed you at a thread -where it had been suggested and rejected-.
And I'm the clueless one?

I don't think I'm the one wasting anyone's time here, but fine. I've
got far better things to do with my time than waste it talking to you.
What was "suggested in rejected" on the thread you pointed me to was
not what I suggested. Not even close. Get it, genius?
Jul 28 '08 #116
Derek Martin wrote:
Regardless of how it's implemented, it's such a common idiom to use
self to refer to object instances within a class in Python that it
ought to be more automatic. Personally, I kind of like the idea of
using @ and thinking of it more like an operator... Kind of like
dereferencing a pointer, only with an implied pointer name.

class foo:
    def __init__():
        @.increment = 2

    def bar(a):
        return a + @.increment

I'm sure all the Pythonistas will hate this idea though... ;-) To be
honest, it smacks a little of Perl's magic variables, which I actually
hate with a passion. This is the only place in Python I'd consider
doing something like this.
I think the biggest reason why an implicit self is bad is because it
prevents monkey-patching of existing class objects. Right now I can add
a new method to any existing class with a simple attribute assignment, like so
(adding a new function to an existing instance object isn't so simple,
but ah well):

def a(self, x, y):
    self.x = x
    self.y = y

class Test(object):
    pass

Test.setxy = a

b = Test()

b.setxy(4, 4)

print b.x, b.y

If self was implicit, none of this would work. Now this contrived
example is not useful, and maybe not even correct, but I have patched
existing classes on several occasions using this method. How could
Python retain its dynamic nature and still have an implicit self? How
would the interpreter know when to add the self variable and when not to?
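For what it's worth, the instance-level case mentioned above can be handled too, with help from the types module; a sketch reusing the a and Test defined above (Python 2 spelling):

import types

c = Test()
# Bind the function to this one instance only; other instances are unaffected.
c.setxy = types.MethodType(a, c, Test)
c.setxy(1, 2)
print c.x, c.y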
Jul 28 '08 #117
Derek Martin wrote:
Furthermore, as you described, defining the function within the scope
of a class binds a name to the function and then makes it a method of
the class. Once that happens, *the function has become a method*.
If you mean that a user-defined function object becomes a different
class of object when bound to a class attribute name, that is wrong.
Once a function, always a function. It may be called an 'instance
method' but it is still a function. Any function object can be an
attribute of multiple classes, without inheritance, or of none.

When a function attribute is accessed via an instance of the class, it
is *wrapped* with a bound method object that basically consists of
references to the function and instance. When the 'bound method' is
called, the instance is inserted in front of the other arguments to be
matched with the first parameter.

In 2.0, functions accessed through the class were rather uselessly
wrapped as an 'unbound method', but those wrappers have been discarded
in 3.0.
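A small sketch of that wrapping, using only the built-in machinery (the Dog class is invented for illustration; Python 2 attribute names):

class Dog(object):
    def speak(self):
        return "woof"

d = Dog()
bound = d.speak                 # a bound method wrapping the function and the instance
print bound.im_func             # the plain function object
print bound.im_self             # the instance it will supply as the first argument
print bound() == Dog.speak(d)   # True: the wrapper just fills in 'd' for 'self'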
To be perfectly honest, the idea that an object method can be defined
outside the scope of an object (i.e. where the code has no reason to
have any knowledge of the object) seems kind of gross to me...
I happen to like the simplicity that "def statements (and lambda
expressions) create function objects." Period.

....
It does indeed -- it does more than imply. It states outright that
the function is defined within the namespace of that object,
True.
and as such that it is inherently part of that object.
False. That does not follow. Python objects generally exist
independently of each other. Think of them as existing in a nameless
dataspace if you want. Collection/container objects collect/contain
references to their members, just as a club roster does, but they only
metaphorically 'contain' their members. Any object can be a member of
any number of collections, just as humans can join any number of clubs
and groups. In mathematical set theory, membership is also non-exclusive.
So why should it need
to be explicitly told about the object of which it is already a part?
Because it is not a 'part' of a class in the sense you seem to mean.

What is true is that functions have a read-only reference to the global
namespace of the module in which they are defined. But they do not have
to be a member of that namespace.

Terry Jan Reedy

Jul 28 '08 #118
On Jul 27, 10:32 pm, Terry Reedy <tjre...@udel.edu> wrote:
[Terry Reedy's post, quoted in full, snipped]
This whole discussion reminds me of discussions I saw on comp.lang.ada
several years ago when I had a passing interest in Ada.

My memory on this is a bit fuzzy, but IIRC Ada 95 did not support
standard OO "dot" syntax of the form

myObject.myFunction(args)

Instead, "myfunction" was just a "regular" function that took
"myObject" and "args" as arguments. It was called as

myFunction(myObject, args)

It was put into the appropriate package or subpackage where it
belonged rather than in a class definition. Namespaces were defined by
a package hierarchy rather than by classes (which is actually more
logical, but that's another topic).

Well, so many people demanded the "dot" notation that it was finally
implemented in Ada 2005. So now user can use the more familiar dot
notation, but my understanding is that it is just "syntactic sugar"
for the old notation.

So when Python people go out of their way to point out that class
"methods" in Python are implemented as regular functions, that seems
fairly obvious to me -- but perhaps only because of my passing
familiarity with Ada.
Jul 28 '08 #119
On Jul 27, 10:55 pm, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
In message
<6385b0a8-f7f3-4dc3-91be-e6f158ffb...@a1g2000hsb.googlegroups.com>,

s0s...@gmail.com wrote:
On Jul 26, 6:47 pm, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
In message
<024ace13-f72f-4093-bcc9-f8a339c32...@v1g2000pra.googlegroups.com>,
s0s...@gmail.com wrote:
On Jul 24, 5:01 am, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
In message
<52404933-ce08-4dc1-a558-935bbbae7...@r35g2000prm.googlegroups.com>,
Jordan wrote:
Except when it comes to Classes. I added some classes to code that
had previously just been functions, and you know what I did - or
rather, forgot to do? Put in the 'self'. In front of some of the
variable accesses, but more noticeably, at the start of *every single
method argument list.*
The reason is quite simple. Python is not truly an "object-oriented"
language. It's sufficiently close to fool those accustomed to OO ways
of doing things, but it doesn't force you to do things that way. You
still have the choice. An implicit "self" would take away that choice.
By that logic, C++ is not OO.
Yes it is, because it has "this".
You mean the keyword "this"? It's just a feature. How does that make a
difference on being or not being OO?

Because it was one of the things the OP was complaining about (see above).
Wrong. What the OP complains about has no bearing on what makes a
language OO or not.

Jul 28 '08 #120
On Jul 27, 5:14 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
On Sat, 26 Jul 2008 15:58:16 -0700, Carl Banks wrote:
On Jul 26, 5:07 pm, Terry Reedy <tjre...@udel.edu> wrote:
Whether or not one should write 'if x' or 'if x != 0' [typo corrected]
depends on whether one means the general 'if x is any non-null object
for which bool(x) == True' or the specific 'if x is anything other than
numeric zero'. The two are not equivalent. Ditto for the length
example.
Can you think of any use cases for the former? And I mean something
where it can't be boiled down to a simple explicit test for the sorts of
arguments you're expecting; something that really takes advantage of the
"all objects are either true or false" paradigm.

But why do you need the explicit test?
I asked you first, buddy.
[snip attempt to reverse argument]
The best thing I can come up with out of my mind is cases where you want
to check for zero or an empty sequence, and you want to accept None as
an alternative negative as well. But that's pretty weak.

You might find it pretty weak, but I find it a wonderful, powerful
feature.
Powerful? You've got to be kidding me. If I have a function

create_object(name)

where one creates an anonymous object by passing an empty string,
behold! now I can also create an anonymous object by passing None.
You call that powerful? I call it simple convenience, and not
something that we'd suffer much for for not having. But it's still
the one thing I can think of that can't be replaced by a simple
explicit test.

I recently wrote a method that sequentially calls one function after
another with the same argument, looking for the first function that
claims a match by returning a non-false result. It looked something like
this:

def match(arg, *functions):
    for func in functions:
        if func(arg):
            return func

I wanted the function itself, not the result of calling the function. I
didn't care what the result was, only that it was something (indicates a
match) or nothing (no match). In one application, the functions might
return integers or floats; in another they might return strings. In a
third, they might return re match objects or None. I don't need to care,
because my code doesn't make any assumptions about the type of the result.
Couldn't you write the function to return None on no match, then test
if func(arg) is None? That way would seem a lot more natural to me.
As an added bonus, you don't have to return some sort of wrapped
object if suddenly you decide that you want to match a zero.
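A minimal sketch of that alternative, assuming every function is rewritten so that None (and only None) signals "no match":

def match(arg, *functions):
    # Variant of the code above: the test is explicit, and the functions
    # are free to return 0, "" or any other value as a genuine match.
    for func in functions:
        if func(arg) is not None:
            return func
    return None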

Sorry, can't give it credit for the use case I was asking for. I want
something where "if x" will do but a simple explicit test won't.
Carl Banks
Jul 28 '08 #121
In message
<c5**********************************@y21g2000hsf.googlegroups.com>,
s0****@gmail.com wrote:
On Jul 27, 10:55 pm, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
>In message
<6385b0a8-f7f3-4dc3-91be-e6f158ffb...@a1g2000hsb.googlegroups.com>,

s0s...@gmail.com wrote:
On Jul 26, 6:47 pm, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
In message
<024ace13-f72f-4093-bcc9-f8a339c32...@v1g2000pra.googlegroups.com>,
>s0s...@gmail.com wrote:
On Jul 24, 5:01 am, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
>In message
<52404933-ce08-4dc1-a558-935bbbae7...@r35g2000prm.googlegroups.com>,
>Jordan wrote:
Except when it comes to Classes. I added some classes to code
that had previously just been functions, and you know what I did
- or rather, forgot to do? Put in the 'self'. In front of some of
the variable accesses, but more noticably, at the start of *every
single method argument list.*
>The reason is quite simple. Python is not truly an
"object-oriented" language. It's sufficiently close to fool those
accustomed to OO ways of doing things, but it doesn't force you to
do things that way. You still have the choice. An implicit "self"
would take away that choice.
By that logic, C++ is not OO.
>Yes it is, because it has "this".
You mean the keyword "this"? It's just a feature. How does that make a
difference on being or not being OO?

Because it was one of the things the OP was complaining about (see
above).

Wrong.
Reread what the OP said.
Jul 28 '08 #122
castironpi <ca********@gmail.com> writes:
>I think you misunderstood him. What he wants is to write

class foo:
    def bar(arg):
        self.whatever = arg + 1

instead of

class foo:
    def bar(self, arg):
        self.whatever = arg + 1

so 'self' should *automatically* only be inserted in the function
declaration, and *manually* be typed for attributes.

There's a further advantage:

class A:
    def get_auxclass(self, b, c):
        class B:
            def auxmeth(self2, d, e):
                #here, ...
        return B
In auxmeth, self would refer to the B instance. In get_auxclass, it
would refer to the A instance. If you wanted to access the A instance
in auxmeth, you'd have to use

class A:
    def get_auxclass(b, c):
        a_inst = self
        class B:
            def auxmeth(d, e):
                self    # the B instance
                a_inst  # the A instance
        return B
This seems pretty natural to me (innermost scope takes precedence),
and AFAIR this is also how it is done in Java.
Best,

-Nikolaus

--
»It is not worth an intelligent man's time to be in the majority.
By definition, there are already enough people to do that.«
-J.H. Hardy

PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C

Jul 28 '08 #123
Bruno Desthuilliers <bd*****************@free.quelquepart.fr> writes:
The fact that a function is defined within a class statement doesn't
imply any "magic": it just creates a function object, binds it to a
name, and makes that object an attribute of the class. You have the
very same result by defining the function outside the class statement
and binding it within the class statement, by defining the function
outside the class and binding it to the class outside the class
statement, by binding the name to a lambda within the class statement
etc...
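As a small illustration of that equivalence (the names below are made up for the sketch), binding an externally defined function to a class gives exactly the behaviour of a def written in the class body:

def greet(obj):                     # a plain function, defined outside any class
    print "hello from", obj

class Spam(object):
    pass

Spam.greet = greet                  # bound to the class after the fact

s = Spam()
s.greet()                           # behaves as if 'def greet(obj): ...' had
                                    # appeared inside the class body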
But why can't the current procedure to resolve method calls be changed
to automatically define a 'self' variable in the scope of the called
function, instead of binding its first argument?
Best,

-Nikolaus

--
»It is not worth an intelligent man's time to be in the majority.
By definition, there are already enough people to do that.«
-J.H. Hardy

PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C

Jul 28 '08 #124
"Russ P." <Ru**********@gmail.comwrites:
The issue here has nothing to do with the inner workings of the Python
interpreter. The issue is whether an arbitrary name such as "self"
needs to be supplied by the programmer.

All I am suggesting is that the programmer have the option of
replacing "self.member" with simply ".member", since the word "self"
is arbitrary and unnecessary. Otherwise, everything would work
*EXACTLY* the same as it does now. This would be a shallow syntactical
change with no effect on the inner workings of Python, but it could
significantly unclutter code in many instances.

The fact that you seem to think it would change the inner
functioning of Python just shows that you don't understand the
proposal.

So how would you translate this into a Python with implicit self, but
without changing the procedure for method resolution?

def will_be_a_method(self, a):
    # Do something with self and a
    pass

class A:
    pass

a = A()
a.method = will_be_a_method
It won't work unless you change the interpreter to magically insert a
'self' variable into the scope of a function when it is called as a
method.

I'm not saying that that's a bad thing, but it certainly requires some
changes to Python's internals.
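For comparison, a rough sketch of how that example behaves under today's explicit-self rules, with no interpreter changes (the printing body is made up):

def will_be_a_method(self, a):
    print "got", a, "on", self

class A(object):
    pass

a = A()

A.method = will_be_a_method     # assigned on the *class*: the lookup on 'a'
a.method(42)                    # goes through the descriptor protocol, so
                                # 'a' is passed as 'self' automatically

a.also = will_be_a_method       # assigned on the *instance*: just a plain
a.also(a, 42)                   # function in a.__dict__, so 'self' is passed by hand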
Best,

-Nikolaus

--
»It is not worth an intelligent man's time to be in the majority.
By definition, there are already enough people to do that.«
-J.H. Hardy

PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C

Jul 28 '08 #125
Michael Torrie <to*****@gmail.com> writes:
I think the biggest reason why an implicit self is bad is because it
prevents monkey-patching of existing class objects. Right now I can add
a new method to any existing class just with a simple attribute like so
(adding a new function to an existing instance object isn't so simple,
but ah well):

def a(self, x, y):
    self.x = x
    self.y = y

class Test(object):
    pass

Test.setxy = a

b = Test()

b.setxy(4,4)

print b.x, b.y

If self was implicit, none of this would work.
No, but it could work like this:

def a(x, y):
    self.x = x
    self.y = y

class Test(object):
    pass

Test.setxy = a
b = Test()

# Still all the same until here

# Since setxy is called as an instance method, it automatically
# gets a 'self' variable and everything works nicely
b.setxy(4,4)

# This throws an exception, since self is undefined
a(4,4)
Best,

-Nikolaus

--
»It is not worth an intelligent man's time to be in the majority.
By definition, there are already enough people to do that.«
-J.H. Hardy

PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C

Jul 28 '08 #126
On Jul 28, 1:55 am, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
In message
<c578790e-dfb4-4c7f-8647-282ab5f8a...@y21g2000hsf.googlegroups.com>,

s0s...@gmail.com wrote:
On Jul 27, 10:55 pm, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
In message
<6385b0a8-f7f3-4dc3-91be-e6f158ffb...@a1g2000hsb.googlegroups.com>,
s0s...@gmail.com wrote:
On Jul 26, 6:47 pm, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
In message
<024ace13-f72f-4093-bcc9-f8a339c32...@v1g2000pra.googlegroups.com>,
s0s...@gmail.com wrote:
On Jul 24, 5:01 am, Lawrence D'Oliveiro <l...@geek-
central.gen.new_zealand> wrote:
In message

<52404933-ce08-4dc1-a558-935bbbae7...@r35g2000prm.googlegroups.com>,
Jordan wrote:
Except when it comes to Classes. I added some classes to code
that had previously just been functions, and you know what I did
- or rather, forgot to do? Put in the 'self'. In front of some of
the variable accesses, but more noticably, at the start of *every
single method argument list.*
The reason is quite simple. Python is not truly an
"object-oriented" language. It's sufficiently close to fool those
accustomed to OO ways of doing things, but it doesn't force you to
do things that way. You still have the choice. An implicit "self"
would take away that choice.
By that logic, C++ is not OO.
Yes it is, because it has "this".
You mean the keyword "this"? It's just a feature. How does that make a
difference on being or not being OO?
Because it was one of the things the OP was complaining about (see
above).
Wrong.

Reread what the OP said.
Stop quoting only portions of my posts that lead to misinterpretation
of them. Next time you quote, be sure to quote this (which I also
mentioned in the previous post):

What the OP complains about has no relevance on what makes a language
OO or not.

Do you believe otherwise?

Jul 28 '08 #127
Nikolaus Rath wrote:
No, but it could work like this:

def a(x, y):
    self.x = x
    self.y = y
Frankly this would make reading and debugging the code by a third party
a nightmare. Rather than calling the variable self as I did in my
example, I could do it in a much better way:

def method(my_object, a, b):
    my_object.a = a
    my_object.b = b
Now if I saw this function standalone, I'd immediately know what it was
doing. In fact, I can even unit test this function by itself, without
even having to know that later on it's monkey-patched into an existing
class.
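A sketch of the kind of standalone test being alluded to here; FakeThing and the test function are invented for the example:

def method(my_object, a, b):        # the standalone function from above
    my_object.a = a
    my_object.b = b

class FakeThing(object):            # any object with assignable attributes will do
    pass

def test_method_sets_attributes():
    fake = FakeThing()
    method(fake, 1, 2)              # call the plain function directly; no class,
    assert fake.a == 1              # no monkey-patching needed
    assert fake.b == 2

test_method_sets_attributes()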

With your idea, I might get the picture this function should be used as
a method in some object because of the self reference, but I can't test
the method by itself. Trying to call it would instantly result in an
exception. And if this was a large function, I might not even see the
self reference right away.
Jul 28 '08 #128
Nikolaus Rath wrote:
That's true. But out of curiosity: why is changing the interpreter such
a bad thing? (If we suppose for now that the change itself is a good
idea).
Round and round and round we go.

Because by doing as you suggest you'd be altering the internal
consistency of how python deals with objects, particularly function
objects. By keeping all functions as function objects, we're able to
provide a consistent view of things both to the programmer and the
interpreter. If we break this consistency then we're saying, well this
is always a function, except in this case, and except in this case, when
it's dealt with specially. This does not fit at all with python's
established mantras.

In python, "def" does only one thing: it creates a function object and
binds it to a name. That's it! Making what def does context-sensitive
seems pretty silly to me, and without any real benefit. As said before,
adding functions to a class as attributes after the fact becomes a chore,
because those functions can't be unit tested, and can't be clearly read
and understood by other programmers.

Jul 28 '08 #129
Cutting to the crux of the discussion...

On Sun, 27 Jul 2008 23:45:26 -0700, Carl Banks wrote:
I want something where "if x" will do but a simple explicit test won't.
Explicit tests aren't simple unless you know what type x is. If x could
be of any type, you can't write a simple test. Does x have a length? Is
it a number? Maybe it's a fixed-length circular buffer, and the length is
non-zero even when it's empty? Who knows? How many cases do you need to
consider?

Explicit tests are not necessarily simple for custom classes. Testing for
emptiness could be arbitrarily complex. That's why we have __nonzero__,
so you don't have to fill your code with complex expressions like (say)

if len(x.method()[x.attribute]) > -1

Instead you write it once, in the __nonzero__ method, and never need to
think about it again.

In general, you should write "if x" instead of an explicit test whenever
you care whether or not x is something (true), as opposed to nothing
(false), but you don't care what the type-specific definition of
something vs. nothing actually is.

To put it another way... using "if x" is just a form of duck-typing. Let
the object decide itself whether it is something or nothing.
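For instance, a minimal sketch in Python 2 spelling; the Playlist class is invented, and __nonzero__ is the hook being discussed:

class Playlist(object):
    # The object itself defines what "something" vs "nothing" means.
    def __init__(self):
        self._tracks = []
    def add(self, track):
        self._tracks.append(track)
    def __nonzero__(self):          # Python 2 truth hook (__bool__ in Python 3)
        return len(self._tracks) != 0

p = Playlist()
if p:                               # duck-typed: Playlist decides for itself
    print "something to play"
else:
    print "nothing to play"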
--
Steven
Jul 28 '08 #130
On Sun, 27 Jul 2008 21:42:37 -0700, Russ P. wrote:
>+1 QOTW

Do you realize what an insult that is to everyone else who has posted
here in the past week?
Actually I don't. I hadn't realised that when a person believes that
somebody has made an especially clever, witty, insightful or fun remark,
that's actually a put-down of all the other people whose remarks weren't
quite as clever, witty, insightful or fun.

But now that I've had this pointed out to me, why, I see insults
everywhere! Tonight, my wife said to me that she liked my new shirt, so I
replied "What's the matter, you think my trousers are ugly?"
--
Steven
Jul 28 '08 #131
On 28 Jul 2008 14:07:44 GMT, Steven D'Aprano
<st***@remove-this-cybersource.com.au> wrote:
On Sun, 27 Jul 2008 21:42:37 -0700, Russ P. wrote:
>+1 QOTW
>
Do you realize what an insult that is to everyone else who has posted
here in the past week?

Actually I don't. I hadn't realised that when a person believes that
somebody has made an especially clever, witty, insightful or fun remark,
that's actually a put-down of all the other people whose remarks weren't
quite as clever, witty, insightful or fun.

But now that I've had this pointed out to me, why, I see insults
everywhere! Tonight, my wife said to me that she liked my new shirt, so I
replied "What's the matter, you think my trousers are ugly?"
--
Steven
It is difficult to not offend the insult-sensitive.
Jul 28 '08 #132
On 28 Jul., 06:42, "Russ P." <Russ.Paie...@gmail.com> wrote:
On Jul 27, 8:58 pm, castironpi <castiro...@gmail.com> wrote:
On Jul 27, 2:39 pm, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:
Derek Martin wrote:
It's bad programming, but the world is full of bad programmers, and we
don't always have the choice not to use their code. Isn't one of
Python's goals to minimize opportunities for bad programming?
Nope. That's Java's goal. Python's goals are to maximize opportunities
for good programming, which is quite different.

Oh, gosh, that is so clever. What a bunch of crap.
+1 QOTW

Do you realize what an insult that is to everyone else who has posted
here in the past week?
Nothing glues a community together so well as a common enemy. Or even
better: two enemies i.e. Perl and Java in Pythons case. On the other
hand, some enemies have to be ignored or declared to be not an enemy
( Ruby ), although oneself is clearly an enemy for them. The same
antisymmetry holds for Python and Java. Java is an enemy for Python
but Python is not worth for Java to be an enemy as long as it can be
ignored. C++ and Java are enemies for each other. Same holds for Java
and C#.
Jul 28 '08 #133
On Jul 28, 2:26 am, Nikolaus Rath <Nikol...@rath.org> wrote:
castironpi <castiro...@gmail.com> writes:
I think you misunderstood him. What he wants is to write
class foo:
    def bar(arg):
        self.whatever = arg + 1
instead of
class foo:
    def bar(self, arg):
        self.whatever = arg + 1
so 'self' should *automatically* only be inserted in the function
declaration, and *manually* be typed for attributes.
There's a further advantage:
class A:
    def get_auxclass(self, b, c):
        class B:
            def auxmeth(self2, d, e):
                #here, ...
        return B

In auxmeth, self would refer to the B instance. In get_auxclass, it
would refer to the A instance. If you wanted to access the A instance
in auxmeth, you'd have to use

class A:
    def get_auxclass(b, c):
        a_inst = self
        class B:
            def auxmeth(d, e):
                self    # the B instance
                a_inst  # the A instance
        return B

This seems pretty natural to me (innermost scope takes precedence),
and AFAIR this is also how it is done in Java.
True. Net keystrokes are down in this method. Consider this:

class A:
    def get_auxclass(b, c):
        a_inst = self
        class B:
            @staticmethod   #<--- change
            def auxmeth(d, e):
                self    # -NOT- the B instance
                a_inst  # the A instance
        return B

What are the semantics here? An error, because no 'self' is allowed in
staticmethod-wrapped functions? Or the A instance, just like a_inst?

Do you find no advantage to being able to give 'self' different names
in different cases?
Jul 28 '08 #134
On Jul 28, 9:07 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
On Sun, 27 Jul 2008 21:42:37 -0700, Russ P. wrote:
+1 QOTW
Do you realize what an insult that is to everyone else who has posted
here in the past week?

Actually I don't. I hadn't realised that when a person believes that
somebody has made an especially clever, witty, insightful or fun remark,
that's actually a put-down of all the other people whose remarks weren't
quite as clever, witty, insightful or fun.

But now that I've had this pointed out to me, why, I see insults
everywhere! Tonight, my wife said to me that she liked my new shirt, so I
replied "What's the matter, you think my trousers are ugly?"

--
Steven
No insult was intended. The writer stated that where Java minimizes
bad, Python maximizes good. This is a non-trivial truth, and a non-
trivial observation. Also, clever. I agreed and said so, and
compliments go a long way. Do you?
everywhere! Tonight, my wife said to me that she liked my new shirt, so I
replied "What's the matter, you think my trousers are ugly?"
Arf, arf.

--
For my special power, I want immunity to insults.
Jul 28 '08 #135
On Jul 28, 4:23 am, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:
Russ P. wrote:
On Jul 27, 3:11 pm, "Russ P." <Russ.Paie...@gmail.com> wrote:
On Jul 27, 12:39 pm, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:
Derek Martin wrote:
On Sun, Jul 27, 2008 at 08:19:17AM +0000, Steven D'Aprano wrote:
You take the name down to a single letter. As I suggested in an earlier
post on this thread, why not take it down to zero letters?
The question isn't "why not", but "why". The status quo works well as it
is, even if it isn't perfect. Prove that implicit self is a good idea --
or at least prove that it is an idea worth considering.
Come on, this sounds like a schoolyard argument. This comes down to a
matter of style, and as such, is impossible to prove. It's largely a
question of individual preference.
That said, the argument in favor is rather simple:
1. This is an extremely common idiom in Python
2. It is completely unnecessary, and the language does not suffer for
making it implicit
3. Making it implicit reduces typing, reduces opportunities for
mistakes, and arguably increases consistency.
"arguably", indeed, cf below.
As for the latter part of #3, self (or some other variable) is
required in the parameter list of object methods,
It's actually the parameter list of the *function* that is used as the
implementation of a method. Not quite the same thing. And then,
consistency mandates that the target object of the method is part of the
parameter list of the *function*, since that's how you make objects
available to a function.
however when the
method is *called*, it is omitted.
Certainly not. You need to look up the corresponding attribute *on a
given object* to get the method. Whether you write
some_object.some_method()
or
some_function(some_object)
you still need to explicitly mention some_object.
It is implied, supplied by Python.
Neither. The target object is passed to the function by the method
object, which is itself returned by the __get__ method of function
objects, which is one possible application of the more general
descriptor protocol (the same protocol that is used for computed
attributes). IOW, there's nothing specific to 'methods' here, just the
use of two general features (functions and the descriptor protocol).
FWIW, you can write your own callable, and write it so it behave just
like a function here:
import types

class MyCallable(object):
    def __call__(self, obj):
        print "calling %s with %s" % (self, obj)
    def __get__(self, instance, cls):
        return types.MethodType(self.__call__, instance, cls)

class Foo(object):
    bar = MyCallable()

print Foo.bar
f = Foo()
f.bar()
Thus when an object method is called, it must be called with one fewer
argument than it defines. This can be confusing,
especially to new programmers.
This is confusing as long as you insist on saying that what you
"def"ined is a method - which is not the case.
It can also be argued that it makes the code less ugly, though again,
that's a matter of preference.
It's not enough to show that a change "isn't bad" -- you have to show
that it is actively good.
But he did... he pointed out that *it saves work*, without actually
being bad. Benefit, without drawback. Sounds good to me!
"Don't need to look at the method signature" is not an argument in favour
of implicit self.
Yes, actually, it is.
It isn't, since there's no "method signature" to look at !-)
If there is a well-defined feature of Python
which provides access to the object within itself,
The point is that you don't get access to the object "within itself".
You get access to an object *within a function*.
The fact that a function is defined within a class statement doesn't
imply any "magic", it just creates a function object, bind it to a name,
and make that object an attribute of the class. You have the very same
result by defining the function outside the class statement and binding
it within the class statement, by defining the function outside the
class and binding it to the class outside the class statement, by
binding the name to a lambda within the class statement etc...
then the
opportunities for mistakes when someone decides to use something else
are lessened.
You don't need to look at the method signature when you're using an
explicit self either.
That isn't necessarily true. If you're using someone else's code, and
they didn't use "self" -- or worse yet, if they chose this variable's
name randomly throughout their classes -- then you may well need to
look back to see what was used.
It's bad programming, but the world is full of bad programmers, and we
don't always have the choice not to use their code. Isn't one of
Python's goals to minimize opportunities for bad programming?
Nope. That's Java's goal. Python's goals are to maximize opportunities
for good programming, which is quite different.
Providing a keyword equivalent to self and removing the need to name
it in object methods is one way to do that.
It's also a way to make Python more complicated than it needs to be. At
least with the current state, you define your functions the same way
regardless of how they are defined, and the implementation is
(relatively) easy to explain. Special-casing function definitions that
happen within a class statement would only introduce a special case.
Then you'd have to explain why you need to specify the target object in
the function's parameters when the function is defined outside the class
but not when it's defined within the class.
IOW : there's one arguably good reason to drop the target object from
functions used as methods implementation, which is to make Python looks
more like Java, and there's at least two good reason to keep it the way
it is, which are simplicity (no special case) and consistency (no
special case).
Anyway, the BDFL has the final word, and it looks like he's not going to
change anything here - but anyone is free to propose a PEP, aren't they?
The issue here has nothing to do with the inner workings of the Python
interpreter. The issue is whether an arbitrary name such as "self"
needs to be supplied by the programmer.
Neither I nor the person to whom you replied to here (as far as I can
tell) is suggesting that Python adopt the syntax of Java or C++, in
which member data or functions can be accessed the same as local
variables. Any suggestion otherwise is a red herring.
All I am suggesting is that the programmer have the option of
replacing "self.member" with simply ".member", since the word "self"
is arbitrary and unnecessary. Otherwise, everything would work
*EXACTLY* the same as it does now. This would be a shallow syntactical
change with no effect on the inner workings of Python, but it could
significantly unclutter code in many instances.
The fact that you seem to think it would change the inner functioning
of Python just shows that you don't understand the proposal.
It just occurred to me that Python could allow the ".member" access
regardless of the name supplied in the argument list:
class Whatever:
    def fun(self, cat):
        .cat = cat
        self.cat += 1
This allows the programmer to use ".cat" and "self.cat"
interchangeably. If the programmer intends to use only the ".cat"
form, the first argument becomes arbitrary. Allowing him to use an
empty argument or "." would simply tell the reader of the code that
the ".cat" form will be used exclusively.
When I write a function in which a data member

Python has nothing like "data member" (nor "member function" etc). It
has class attributes and instance attributes, period. *Python is not
C++*. So please stop using C++ terms which have nothing to do with
Python's object model.
will be used several
times, I usually do something like this:
data = self.data
so I can avoid the clutter of repeated use of "self.data". If I could
just use ".data", I could avoid most of the clutter without the extra
line of code renaming the data member.

The main reason to alias an attribute is to avoid the overhead of
attribute lookup.
A bonus is that it becomes
clearer at the point of usage that ".data" is member data rather than
a local variable.

I totally disagree. The dot character is less obvious than the 'self.'
sequence, so your proposition is bad at least wrt/ readability (it's
IMHO bad for other reasons too but I won't continue beating that poor
dead horse...)
Man, you are one dense dude! Can I give you a bit of personal advice?
I suggest you quit advertising your denseness in public.

Letting "self" (or whatever the first argument was) be implied in
".cat" does absolutely *NOTHING* to change the internal workings of
the Python interpreter. It's a very simple idea that you insist on
making complicated. As I said, I could write a pre-processor myself to
implement it in less than a day.

As for "dot" being less obvious than "self.", no kidding? Hey, "self."
is less obvious than "extraspecialme.", so why don't you start using
the latter? Has it occurred to you that the difference between 1.000
and 1000 is just a dot? Can you see the difference, Mr. Magoo?

Your posts here are typical. I'm trying to make a suggestion to reduce
the clutter in Python code, and you throw tomatoes mindlessly.

You seem to think that being a "regular" on this newsgroup somehow
gives you special status. I sure wish I had one tenth the time to
spend here that you have. But even if I did, I have far more important
work to do than to "hang out" on comp.lang.python all day every day.
Man, what a waste of a life. Well, I guess it keeps you off the
streets at least.

Jul 28 '08 #136
On Jul 28, 7:07 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
On Sun, 27 Jul 2008 21:42:37 -0700, Russ P. wrote:
+1 QOTW
Do you realize what an insult that is to everyone else who has posted
here in the past week?

Actually I don't. I hadn't realised that when a person believes that
somebody has made an especially clever, witty, insightful or fun remark,
that's actually a put-down of all the other people whose remarks weren't
quite as clever, witty, insightful or fun.

But now that I've had this pointed out to me, why, I see insults
everywhere! Tonight, my wife said to me that she liked my new shirt, so I
replied "What's the matter, you think my trousers are ugly?"

--
Steven
That would all be true if the comment that was called "QOTW" was
indeed clever or, for that matter, true. It was neither.

The idea that Python does not try to discourage bad programming
practice is just plain wrong. Ask yourself why Python doesn't allow
assignment within a conditional test ("if x = 0"), for example. Or,
why it doesn't allow "i++" or "++i"? I'll leave it as an exercise for
the reader to give more examples.

Also, the whole idea of using indentation to define the logical
structure of the code is really a way to ensure that the indentation
structure is consistent with the logical structure. Now, is that a way
to "encourage good practice," or is it a way to "discourage bad
practice"? The notion that the two concepts are "very different" (as
the "QOTW" claimed) is just plain nonsense.
Jul 28 '08 #137
On Jul 28, 10:00 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
Cutting to the crux of the discussion...

On Sun, 27 Jul 2008 23:45:26 -0700, Carl Banks wrote:
I want something where "if x" will do but a simple explicit test won't.

Explicit tests aren't simple unless you know what type x is. If x could
be of any type, you can't write a simple test. Does x have a length? Is
it a number? Maybe it's a fixed-length circular buffer, and the length is
non-zero even when it's empty? Who knows? How many cases do you need to
consider?
Use case, please. I'm asking for code, not arguments. Please give me
a piece of code where you can write "if x" that works but a simple
explicit test won't.

(Note: I'm not asking you to prove that "if len(x)!=0" might fail for
some contrived, poorly-realized class you could write. I already know
you can do that.)
Carl Banks
Jul 28 '08 #138
Steven D'Aprano wrote:
On Sun, 27 Jul 2008 23:45:26 -0700, Carl Banks wrote:
>I want something where "if x" will do but a simple explicit test won't.

Explicit tests aren't simple unless you know what type x is.
If you don't even know a duck-type for x, you have no business invoking any
methods on that object.

If you do know a duck-type for x, then you also know which explicit test to perform.
Explicit tests are not necessarily simple for custom classes. Testing for
emptiness could be arbitrarily complex. That's why we have __nonzero__,
so you don't have to fill your code with complex expressions like (say)

if len(x.method()[x.attribute]) > -1

Instead you write it once, in the __nonzero__ method, and never need to
think about it again.
Okay, so you have this interesting object property that you often need to test
for, so you wrap the code for the test up in a method, because that way you only
need to write the complex formula once. I'm with you so far. But then you
decide to name the method "__nonzero__", instead of some nice descriptive name?
What's up with that?

This is the kind of code I would write:
class C:
    def attribute_is_nonnegative(self):
        return len(self.method()[self.attribute]) > -1
    ...

c = get_a_C()
if c.attribute_is_nonnegative():
    ...

Now suppose you were reading these last few lines and got to wondering if
get_a_C might ever return None.

The answer is obviously no. get_a_C must always return a C object or something
compatible. If not, it's a bug and an AttributeError will ensue. The code
tells you that. By giving the method a name the intent of the test is perfectly
documented.

In comparison, I gather you would write something like this:
class C:
    def __nonzero__(self):
        return len(self.method()[self.attribute]) > -1
    ...

c = get_a_C()
if c:
    ...

Again, the question is, can get_a_C return None? Well that's hard to say
really. It could be that "if c" is intended to test for None. Or it could be
intended to call C.__nonzero__. Or it could be cleverly intended to test
not-None and C.__nonzero__ at the same time. It may be impossible to discern
the writer's true intent.

Even if we find out that C.__nonzero__ is called, what was it that __nonzero__
did again? Did it test for the queue being non-empty? Did it test for the
queue being not-full? Did it test whether for the consumer thread is running?
Did it test for if there are any threads blocked on the queue? Better dig up
the class C documentation and find out, because there is no single obvious
interpretation of what it means for an object to evaluate to true.

"if x" is simple to type, but not so simple to read. "if x.namedPredicate()" is
harder to type, but easier to read. I prefer the latter because code is read
more often than it is written.

regards,
Anders
Jul 28 '08 #139
On Mon, 28 Jul 2008 13:22:37 -0700, Carl Banks wrote:
On Jul 28, 10:00 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
>Cutting to the crux of the discussion...

On Sun, 27 Jul 2008 23:45:26 -0700, Carl Banks wrote:
I want something where "if x" will do but a simple explicit test
won't.

Explicit tests aren't simple unless you know what type x is. If x could
be of any type, you can't write a simple test. Does x have a length? Is
it a number? Maybe it's a fixed-length circular buffer, and the length
is non-zero even when it's empty? Who knows? How many cases do you need
to consider?

Use case, please. I'm asking for code, not arguments. Please give me a
piece of code where you can write "if x" that works but a simple
explicit test won't.
I gave you a piece of code, actual code from one of my own projects. If
you wouldn't accept that evidence then, why would you accept it now?

It isn't that explicit tests will fail, it is that explicit tests are
more work for no benefit. You keep talking about "simple explicit tests",
but it seems to me that you're missing something absolutely fundamental:
"if x" is simpler than "if x!=0" and significantly simpler than "if len
(x)!=0". Even if you discount the evidence of the character lengths (4
versus 7 versus 12) just use the dis module to see what those expressions
are compiled into. Or use timeit to see the execution speed.
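A rough way to check both claims yourself; the helper functions below are made up for the comparison, and the output is elided:

import dis, timeit

def implicit_test(x):
    if x:
        return True
    return False

def explicit_test(x):
    if len(x) != 0:
        return True
    return False

dis.dis(implicit_test)      # the bare truth test is a load and a jump
dis.dis(explicit_test)      # the len() form loads a global, calls it and compares first

print timeit.Timer('implicit_test([])',
                   'from __main__ import implicit_test').timeit()
print timeit.Timer('explicit_test([])',
                   'from __main__ import explicit_test').timeit()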

So I'm not really sure what your point is. Yes, for vast amounts of code,
there's no *need* to write "if x". If x is always a number, you can
replace it with "if x != 0" and it will still work. Big deal. I never
said differently. But if I don't care what type x is, why should I write
code that cares what type x is?

All you're doing is writing an expression that does more work than it
needs to. Although the amount of work is trivial for built-ins, it's
still more than necessary. But provided x is always the right sort of
duck, your code will work. It will be longer, more verbose, slower, fail
in unexpected ways if x is an unexpected type, and it goes against the
spirit of duck-typing, but it will work.
--
Steven
Jul 29 '08 #140
On Tue, 29 Jul 2008 01:19:00 +0200, Anders J. Munch wrote:
Steven D'Aprano wrote:
>On Sun, 27 Jul 2008 23:45:26 -0700, Carl Banks wrote:
>>I want something where "if x" will do but a simple explicit test
won't.

Explicit tests aren't simple unless you know what type x is.

If you don't even know a duck-type for x, you have no business invoking
any methods on that object.
Have you tried to make "if x" fail?

Pull open an interactive interpreter session and try. You might learn
something.

If you do know a duck-type for x, then you also know which explicit test
to perform.
>Explicit tests are not necessarily simple for custom classes. Testing
for emptiness could be arbitrarily complex. That's why we have
__nonzero__, so you don't have to fill your code with complex
expressions like (say)

if len(x.method()[x.attribute]) > -1

Instead you write it once, in the __nonzero__ method, and never need to
think about it again.

Okay, so you have this interesting object property that you often need
to test for, so you wrap the code for the test up in a method, because
that way you only need to write the complex formula once. I'm with you
so far. But then you decide to name the method "__nonzero__", instead
of some nice descriptive name?
What's up with that?
Dude. Dude. Just... learn some Python before you embarrass yourself
further.

http://www.python.org/doc/ref/customization.html


--
Steven
Jul 29 '08 #141
Bruno Desthuilliers <bd*****************@free.quelquepart.fr> writes:
Boy, I don't know who you think you're talking to, but you're
obviously out of luck here. I'm 41, married, our son is now a
teenager, I have a happy social life, quite a lot of work, and no
time to waste in the streets. And FWIW, name-calling won't buy you
much here.
It has, at least, long ago bought him a place in my kill-file. Seeing
your side of the conversation, I can only confirm that decision as
correct.

--
\ “bash awk grep perl sed, df du, du-du du-du, vi troff su fsck |
`\ rm * halt LART LART LART!” -The Swedish BOFH, |
_o__) alt.sysadmin.recovery |
Ben Finney
Jul 29 '08 #142
On Tue, 29 Jul 2008 00:23:02 +0000, Steven D'Aprano wrote:
Dude. Dude. Just... learn some Python before you embarrass yourself
further.

I'm sorry Anders, that was a needlessly harsh thing for me to say. I
apologize for the unpleasant tone.

Still, __nonzero__ is a fundamental part of Python's behaviour. You
should learn about it.

--
Steven
Jul 29 '08 #143
On Jul 28, 5:44 pm, Ben Finney <bignose+hates-s...@benfinney.id.au>
wrote:
Bruno Desthuilliers <bdesth.quelquech...@free.quelquepart.fr> writes:
Boy, I don't know who you think you're talking to, but you're
obviously out of luck here. I'm 41, married, our son is now a
teenager, I have a happy social life, quite a lot of work, and no
time to waste in the streets. And FWIW, name-calling won't buy you
much here.

It has, at least, long ago bought him a place in my kill-file. Seeing
your side of the conversation, I can only confirm that decision as
correct.
Now there's a classic reply from another comp.lang.python regular. All
he needs is one side of the conversation to know what's going on!

I'd really like to know where both of you guys find the time to spend
here, because unless you type extremely fast, I figure it's about
10-20 hours per week every week. Sorry, but if you also have "quite a
lot of work," I don't believe that leaves much time for a "happy
social life."

Each and every time I get involved in an extended discussion about
Python here, I end up realizing that I've wasted a lot of time trying
to reason with unreasonable people who are infatuated with Python and
incapable of recognizing any deficiencies.

I will thank you for one thing. By taking your best shot at my
suggestion and coming up with nothing significant or relevant, you
have given me confidence to go ahead with a PEP. I may just do it. If
it is rejected, then so be it. Unlike you, they will have to give me a
valid reason to reject it.
Jul 29 '08 #144
On Jul 28, 12:08 pm, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:
It's a very simple idea that you insist on
making complicated. As I said, I could write a pre-processor myself to
implement it in less than a day.

Preprocessors are not a solution. Sorry.
I never said that a pre-processor is a good permanent solution, but
the simple fact that my proposal can be done with a simple pre-
processor means that it does not change the inner workings of the
Python interpreter.
Oh, you can't stand people disagreeing with you, is that it?
I can stand people disagreeing with me all day long, so long as they
are making sense. You are simply making the issue more complicated
that it is because you are apparently incapable of understanding the
substance of it.
You seem to think that being a "regular" on this newsgroup somehow
gives you special status.

Why so ? Because I answer to your proposition and not agree with your
arguments ??? C'mon, be serious, you have the right to post your
proposition here, I have the right to post my reaction to your
proposition, period. Grow up, boy.
Boy, I don't know who you think you're talking to, but you're obviously
out of luck here. I'm 41, married, our son is now a teenager, I have a
happy social life, quite a lot of work, and no time to waste in the
streets. And FWIW, name-calling won't buy you much here.
There you go again. Do you think you are proving something by
addressing me as "boy"? I'll make a little wager with you. I'll bet
that what I am using Python for dwarfs in importance what you are
using it for. Care to put the cards on the table, big man?

I'll tell you something else while I'm at it. I'm ten years older than
you, but I pump iron and stay in excellent shape. If you addressed me
as "boy" in person, I'd be more than happy to teach you a badly needed
lesson you won't soon forget, big man.
Jul 29 '08 #145
Ben Finney <bi****************@benfinney.id.au> writes:
It has, at least, long ago bought him a place in my kill-file.
Seeing your side of the conversation, I can only confirm that
decision as correct.
This should perhaps say "seeing the parts of his communication that
leak through by being quoted in others's posts, I can only confirm my
decision as correct."

--
\ “Pinky, are you pondering what I'm pondering?” “Wuh, I think |
`\ so, Brain, but wouldn't anything lose its flavor on the bedpost |
_o__) overnight?” -_Pinky and The Brain_ |
Ben Finney
Jul 29 '08 #146
On Jul 29, 4:46 am, "Russ P." <Russ.Paie...@gmail.com> wrote:
As I said, I could write a pre-processor myself to
implement it in less than a day.
So WHY DON'T YOU WRITE IT ALREADY?

If you're meeting so much resistance to your idea, why not scratch
your own damn itch and just do it?

Or doesn't that afford you as many chances to insult others while
feeling smugly superior?
Jul 29 '08 #147
On Jul 28, 8:44 pm, alex23 <wuwe...@gmail.com> wrote:
On Jul 29, 4:46 am, "Russ P." <Russ.Paie...@gmail.com> wrote:
As I said, I could write a pre-processor myself to
implement it in less than a day.

So WHY DON'T YOU WRITE IT ALREADY?
I'm working on something else right now if you don't mind, but I'll
get to it in good time.

Conceptually, the matter is simple. All I need to do is to grab the
first formal argument of each def, then search for occurrences of any
word in the body of the def that starts with a dot, and insert that
first argument in front of it.

I expect the "hard" part will be breaking up the body of the def into
"words." I could just split each line on white space, except for
situations like

x+=.zzz

So I need to account for the fact that operators do not need to be
surrounded by spaces. That's the hardest part I can think of off the
top of my head.
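A rough sketch of such a pass along those lines; expand_dot_members and its regexes are invented for illustration, and it deliberately ignores strings, comments and nested scopes (it does skip numeric literals like .5):

import re

def expand_dot_members(source):
    # Remember the first formal argument of the most recent 'def', then turn
    # a bare '.name' into '<first_arg>.name' on the lines that follow it.
    def_re = re.compile(r'^\s*def\s+\w+\s*\(\s*(\w+)')
    dot_re = re.compile(r'(?<![\w.\)\]])\.([A-Za-z_]\w*)')
    first_arg = None
    out = []
    for line in source.splitlines():
        m = def_re.match(line)
        if m:
            first_arg = m.group(1)
        elif first_arg:
            line = dot_re.sub(first_arg + r'.\1', line)
        out.append(line)
    return '\n'.join(out)

print expand_dot_members(
    "class Whatever:\n"
    "    def fun(self, cat):\n"
    "        .cat = cat\n"
    "        x = 1\n"
    "        x += .cat\n")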

Maybe I'll encounter an insurmountable problem and realize that the
idea can't work in general. If so, then so be it. Certainly, no one on
this thread has anticipated such a problem. Had someone pointed out an
actual technical problem with the idea, I would have gladly thanked
them. But I got a load of irrelevant crap instead, not to mention
being addressed as "boy."
If you're meeting so much resistance to your idea, why not scratch
your own damn itch and just do it?

Or doesn't that afford you as many chances to insult others while
feeling smugly superior?
This coming from a guy who insulted my reading comprehension ability
-- when he was the one who was wrong!
Jul 29 '08 #148
Bruno Desthuilliers <bd*****************@free.quelquepart.fr> writes:
Nikolaus Rath wrote:
>Michael Torrie <to*****@gmail.com> writes:

(snip)
>>In short, unlike what most of the implicit self advocates are
saying, it's not just a simple change to the python parser to do
this. It would require a change in the interpreter itself and how it
deals with classes.


That's true. But out of curiosity: why is changing the interpreter such
a bad thing? (If we suppose for now that the change itself is a good
idea).

Because it would very seriously break a *lot* of code ?
Well, Python 3 will break lots of code anyway, won't it?
Best,

-Nikolaus

--
»It is not worth an intelligent man's time to be in the majority.
By definition, there are already enough people to do that.«
-J.H. Hardy

PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C

Jul 29 '08 #149
On Jul 28, 8:15 pm, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
On Mon, 28 Jul 2008 13:22:37 -0700, Carl Banks wrote:
On Jul 28, 10:00 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
Cutting to the crux of the discussion...
On Sun, 27 Jul 2008 23:45:26 -0700, Carl Banks wrote:
I want something where "if x" will do but a simple explicit test
won't.
Explicit tests aren't simple unless you know what type x is. If x could
be of any type, you can't write a simple test. Does x have a length? Is
it a number? Maybe it's a fixed-length circular buffer, and the length
is non-zero even when it's empty? Who knows? How many cases do you need
to consider?
Use case, please. I'm asking for code, not arguments. Please give me a
piece of code where you can write "if x" that works but a simple
explicit test won't.

I gave you a piece of code, actual code from one of my own projects. If
you wouldn't accept that evidence then, why would you accept it now?
I would accept as "evidence" something that satisfies my criteria,
which your example did not: it could have easily (and more robustly)
been written with a simple explicit test. I am looking for one that
can't.

You keep bringing up this notion of "more complex with no benefit",
which I'm simply not interested in talking about that at this time,
and I won't respond to any of your points. I am seeking the answer to
one question: whether "if x" can usefully do something a simple
explicit test can't. Everyone already knows that "if x" requires
fewer keystrokes and parses to fewer nodes.
Carl Banks
Jul 29 '08 #150

