Bytes | Developer Community

Death to tuples!

It seems that the distinction between tuples and lists has slowly been
fading away. What we call "tuple unpacking" works fine with lists on
either side of the assignment, and iterators on the values side. IIRC,
"apply" used to require that the second argument be a tuple; it now
accepts sequences, and has been deprecated in favor of *args, which
accepts not only sequences but iterators.

Is there any place in the language that still requires tuples instead
of sequences, except for use as dictionary keys?
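For concreteness, the dictionary-key case the question concedes is easy to demonstrate (the %-formatting line is an additional tuple-only spot, not one raised in the post):

```python
# Dictionary keys must be hashable: a tuple works, a list raises.
d = {(1, 2): "point"}
print(d[(1, 2)])          # point

try:
    d[[1, 2]] = "point"   # lists are mutable, hence unhashable
except TypeError as e:
    print("list as key:", e)

# Old-style %-formatting also special-cases tuples: multiple
# arguments must be packed into a tuple, not a list.
print("%d/%d" % (3, 4))   # 3/4
```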

If not, then it's not clear that tuples as a distinct data type still
serves a purpose in the language. In which case, I think it's
appropriate to consider doing away with tuples. Well, not really: just
changing their intended use, changing the name to note that, and
tweaking the implementation to conform to this.

The new intended use is as an immutable sequence type, not a
"lightweight C struct". The new name to denote this new use -
following in the footsteps of the set type - is "frozenlist". The
changes to the implementation would be adding any non-mutating methods
of list to tuple, which appears to mean "index" and "count".
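A minimal sketch of what that hypothetical frozenlist could look like, with the two methods spelled out explicitly (note that modern Python tuples eventually did grow index() and count(), which is essentially this proposal):

```python
# Hypothetical "frozenlist": an immutable sequence carrying the
# non-mutating list methods. Inherits everything else from tuple.
class frozenlist(tuple):
    def count(self, value):
        # number of occurrences of value
        return sum(1 for item in self if item == value)

    def index(self, value):
        # position of first occurrence; ValueError if absent
        for i, item in enumerate(self):
            if item == value:
                return i
        raise ValueError("%r is not in frozenlist" % (value,))

fl = frozenlist([1, 2, 2, 3])
print(fl.count(2))   # 2
print(fl.index(3))   # 3
```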

Removing the tuple type is clearly a Py3K action. Adding frozenlist
could be done now. Whether or not we could make tuple an alias for
frozenlist before Py3K is an interesting question.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 28 '05
On 2005-12-01, Mike Meyer <mw*@mired.org> wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
I know what happens, I would like to know, why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time.
Not at definition time. Just as other expressions in the function are
not evaluated at definition time.


The idiom to get a default argument evaluated at call time with the
current behavior is:

def f(arg = None):
    if arg is None:
        arg = BuildArg()

What's the idiom to get a default argument evaluated at definition
time if it were as you suggested?


Well there are two possibilities I can think of:

1)
arg_default = ...
def f(arg = arg_default):
    ...

2)
def f(arg = None):
    if arg is None:
        arg = default

--
Antoon Pardon
Dec 1 '05 #51
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
On 2005-12-01, Mike Meyer <mw*@mired.org> wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
I know what happens, I would like to know, why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time.
Not at definition time. Just as other expressions in the function are
not evaluated at definition time.
The idiom to get a default argument evaluated at call time with the
current behavior is:

def f(arg = None):
    if arg is None:
        arg = BuildArg()

What's the idiom to get a default argument evaluated at definition
time if it were as you suggested?


Well there are two possibilities I can think of:

1)
arg_default = ...
def f(arg = arg_default):
    ...


Yuch. Mostly because it doesn't work:

arg_default = ...
def f(arg = arg_default):
    ...

arg_default = ...
def g(arg = arg_default):
    ...

That one looks like an accident waiting to happen.
2)
def f(arg = None):
    if arg is None:
        arg = default


Um, that's just rewriting the first one in an uglier fashion, except
you omitted setting the default value before the function.

This may not have been the reason it was done in the first place, but
this loss of functionality would seem to justify the current behavior.
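The loss of functionality Mike refers to is easiest to see in the classic shared-default pitfall, which the arg=None idiom exists to avoid (the function names here are illustrative):

```python
# A mutable default is evaluated once, at definition time, and the
# same list object is then shared across every call.
def broken(item, acc=[]):
    acc.append(item)
    return acc

print(broken(1))  # [1]
print(broken(2))  # [1, 2] -- same list as the first call!

# The idiom from the thread: use None and build a fresh list per call.
def fixed(item, acc=None):
    if acc is None:
        acc = []
    acc.append(item)
    return acc

print(fixed(1))   # [1]
print(fixed(2))   # [2]
```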

And, just for fun:

def setdefaults(**defaults):
    def maker(func):
        def called(*args, **kwds):
            defaults.update(kwds)
            func(*args, **defaults)
        return called
    return maker

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Dec 1 '05 #52
On 1 Dec 2005 09:24:30 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
On 2005-11-30, Duncan Booth <du**********@invalid.invalid> wrote:
Antoon Pardon wrote:
The left one is equivalent to:

__anon = []
def Foo(l):
    ...

Foo(__anon)
Foo(__anon)

So, why shouldn't:

res = []
for i in range(10):
    res.append(i*i)

be equivalent to:

__anon = list()
...

res = __anon
for i in range(10):
    res.append(i*i)
Because the empty list expression '[]' is evaluated when the expression
containing it is executed.


This doesn't follow. It is not because this is how it is now, that that
is the way it should be.

I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be executed
when a function is called, where '[]' is contained in the expression
determining the default value.
Ok, but "[]" (without the quotes) is just one possible expression, so
presumably you have to follow your rules for all default value expressions.
Plain [] evaluates to a fresh new empty list whenever it is evaluated,
but that's independent of scope. An expression in general may depend on
names that have to be looked up, which requires not only a place to look
for them, but also persistence of the name bindings. So def foo(arg=PI*func(x)): ...
means that at call-time you would have to find 'PI', 'func', and 'x' somewhere.
Where & how?
1) If they should be re-evaluated in the enclosing scope, as default arg expressions
are now, you can just write foo(PI*func(x)) as your call. So you would be asking
for foo() to be an abbreviation of that. Which would give you a fresh list if
foo was defined def foo(arg=[]): ...

Of course, if you wanted just the expression value as now at def time, you could write
def foo(...):...; foo.__default0=PI*fun(x) and later call foo(foo.__default0), which is
what foo() effectively does now.

2) Or did you want the def code to look up the bindings at def time and save them
in, say, a tuple __deftup0=(PI, func, x) that captures the def-time bindings in the scope
enclosing the def, so that when foo is called, it can do arg = _deftup0[0]*_deftup0[1](_deftup0[2])
to initialize arg and maybe trigger some side effects at call time.

3) Or did you want to save the names themselves, __default0_names=('PI', 'func', 'x')
and look them up at foo call time, which is tricky as things are now, but could be done?
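Option (1) above, re-evaluating the default expression at call time, can be approximated today by making the default a zero-argument callable; PI, func and x are the hypothetical names from Bengt's example:

```python
# Deferring the default expression with a thunk: PI, func and x are
# looked up when the thunk runs, i.e. at call time.
PI = 3.0
func = lambda v: v * 10
x = 2

def foo(arg=lambda: PI * func(x)):
    if callable(arg):   # crude sentinel test: assumes real args aren't callable
        arg = arg()
    return arg

print(foo())            # 60.0 -- evaluated now, with the current x
x = 5
print(foo())            # 150.0 -- picks up the rebound x
```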
The left has one list created outside the body of the function, the
right one has two lists created outside the body of the function. Why
on earth should these be the same?

Why on earth should it be the same list, when a function is called
and is provided with a list as a default argument?
It's not "provided with a list" -- it's provided with a _reference_ to a list.
You know this by now, I think. Do you want clone-object-on-new-reference semantics?
A sort of indirect value semantics? If you do, and you think that ought to be
default semantics, you don't want Python. OTOH, if you want a specific effect,
why not look for a way to do it either within python, or as a graceful syntactic
enhancement to python? E.g., def foo(arg{expr}):... could mean evaluate arg as you would like.
Now the ball is in your court to define "as you would like" (exactly and precisely ;-)


Because the empty list expression '[]' is evaluated when the
expression containing it is executed.
Again you are just stating the specific choice python has made.
Not why they made this choice.

Why are you interested in the answer to this question? ;-) Do you want
to write an accurate historical account, or are you expressing discomfort
from having had to revise your mental model of other programming languages
to fit Python? Or do you want to try to enhance Python in some way?

I see no reason why your and my question should be answered
differently.
We are agreed on that, the answers should be the same, and indeed they are.
In each case the list is created when the expression (an assignment or a
function definition) is executed. The behaviour, as it currently is, is
entirely self-consistent.

I think perhaps you are confusing the execution of the function body with
the execution of the function definition. They are quite distinct: the
function definition evaluates any default arguments and creates a new
function object binding the code with the default arguments and any scoped
variables the function may have.


I know what happens, I would like to know, why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time.
Not at definition time. Just as other expressions in the function are
not evaluated at definition time.

Maybe it was just easier, and worked very well, and no one showed a need
for doing it differently that couldn't easily be handled. If you want
an expression evaluated at call time, why don't you write it at the top
of the function body instead of lobbying for a change to the default arg
semantics? The answer could be a scoping problem, I suppose. Is there
something you'd like that couldn't be handled with (an efficient sugary version of)

sentinel = object()
def foo(arg=(sentinel, lambda: expr)):
    if type(arg) is tuple and len(arg) == 2 and arg[0] is sentinel:
        arg = arg[1]()
    ...

or would the expression evaluation maybe not suit once beyond expr being just []?
I'm trying to move off "why" onto "what" ;-)
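Here is that sentinel/lambda pattern in runnable form, using a fresh [] as the deferred expr (note the evaluation line calls the lambda, element [1] of the pair):

```python
# Bengt's (sentinel, thunk) pattern: the thunk runs at call time,
# so each call gets a freshly evaluated default.
sentinel = object()

def foo(arg=(sentinel, lambda: [])):
    if type(arg) is tuple and len(arg) == 2 and arg[0] is sentinel:
        arg = arg[1]()   # call the deferred expression
    arg.append("x")
    return arg

print(foo())   # ['x'] -- a fresh list each call
print(foo())   # ['x'], not ['x', 'x']
```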

So when these kinds of expressions are evaluated at definition time,
I don't see what would be so problematic if other expressions were
evaluated at definition time too.
If the system tried to delay the evaluation until the function was called
you would get surprising results as variables referenced in the default
argument expressions could have changed their values.


This would be no more surprising than having a variable referenced in a
normal expression change values between two evaluations.

Sure, you could have it work that way, but would it really be useful?

Is this a matter of thinking up some sugar for

def foo(arg=None):
    if arg is None: arg = []

or what are we pursuing?
Hm, I was just going to say it might be nice to have a builtin standard sentinel,
or a convention for using something as such. I don't really like manufacturing
sentinel=object() when I need something other than None. So it just occurred to me
maybe

def foo(arg=NotImplemented):
    if arg is NotImplemented: arg = []

maybe SENTINEL could be defined similarly as a builtin constant.
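That suggestion does work as stated, since NotImplemented is an ordinary builtin singleton (though it has a defined meaning for binary operators, so a dedicated SENTINEL would be cleaner); it even frees up None as a usable argument value:

```python
# Using the builtin NotImplemented singleton as the sentinel, so no
# module-level object() needs manufacturing.
def foo(arg=NotImplemented):
    if arg is NotImplemented:
        arg = []
    return arg

print(foo())        # []
print(foo([1]))     # [1]
print(foo(None))    # None -- None itself is now a legitimate value
```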

Regards,
Bengt Richter
Dec 2 '05 #53

Bengt Richter wrote:

Because the empty list expression '[]' is evaluated when the
expression containing it is executed.


Again you are just stating the specific choice python has made.
Not why they made this choice.

Why are you interested in the answer to this question? ;-) Do you want
to write an accurate historical account, or are you expressing discomfort
from having had to revise your mental model of other programming languages
to fit Python? Or do you want to try to enhance Python in some way?


My WAG :

Because it is usually presented as "this is the best way" rather than
"this is the python way". For the former, I think people would be
curious why it is best (or better than other considered alternatives),
as a learning exercise maybe.

Dec 2 '05 #54
On 2005-12-01, Mike Meyer <mw*@mired.org> wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
On 2005-12-01, Mike Meyer <mw*@mired.org> wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
I know what happens, I would like to know, why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time.
Not at definition time. Just as other expressions in the function are
not evaluated at definition time.

The idiom to get a default argument evaluated at call time with the
current behavior is:

def f(arg = None):
    if arg is None:
        arg = BuildArg()

What's the idiom to get a default argument evaluated at definition
time if it were as you suggested?
Well there are two possibilities I can think of:

1)
arg_default = ...
def f(arg = arg_default):
    ...


Yuch. Mostly because it doesn't work:

arg_default = ...
def f(arg = arg_default):
    ...

arg_default = ...
def g(arg = arg_default):
    ...

That one looks like an accident waiting to happen.


It's not because accidents can happen, that it doesn't work.
IMO that accidents can happen here is because python
doesn't allow a name to be marked as a constant or unrebindable.
This may not have been the reason it was done in the first place, but
this loss of functionality would seem to justify the current behavior.

And, just for fun:

def setdefaults(**defaults):
def maker(func):
def called(*args, **kwds):
defaults.update(kwds)
func(*args, **defaults)
return called
return maker


So it seems that with a decorator there would be no loss of
functionality.

--
Antoon Pardon
Dec 2 '05 #55
On 2005-12-02, Bengt Richter <bo**@oz.net> wrote:
On 1 Dec 2005 09:24:30 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
On 2005-11-30, Duncan Booth <du**********@invalid.invalid> wrote:
Antoon Pardon wrote:

> The left one is equivalent to:
>
> __anon = []
> def Foo(l):
>     ...
>
> Foo(__anon)
> Foo(__anon)

So, why shouldn't:

res = []
for i in range(10):
    res.append(i*i)

be equivalent to:

__anon = list()
...

res = __anon
for i in range(10):
    res.append(i*i)

Because the empty list expression '[]' is evaluated when the expression
containing it is executed.
This doesn't follow. It is not because this is how it is now, that that
is the way it should be.

I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be executed
when a function is called, where '[]' is contained in the expression
determining the default value.
Ok, but "[]" (without the quotes) is just one possible expression, so
presumably you have to follow your rules for all default value expressions.
Plain [] evaluates to a fresh new empty list whenever it is evaluated,


Yes, one of the questions I have is why python people would consider
it a problem if it wasn't.

Personally I expect the following pieces of code

a = <const expression>
b = <same expression>

to be equivalent with

a = <const expression>
b = a

But that isn't the case when the const expression is a list.

A person looking at:

a = [1 , 2]

sees something resembling

a = (1 , 2)

Yet the two are treated very differently. As far as I understand the
first is translated into some kind of list((1,2)) statement while
the second is built at compile time and just bound.

This seems to go against the pythonic spirit of explicit is
better than implicit.

It also seems to go against the way default arguments are treated.
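The observable difference between the two displays is identity and mutability, not spelling:

```python
# Two list displays always build two distinct objects; mutating one
# never touches the other. That is why [] in a default can't simply
# be "the same list" across calls without surprising someone.
a = [1, 2]
b = [1, 2]
print(a == b)    # True  -- equal contents
print(a is b)    # False -- always distinct objects
a.append(3)
print(b)         # [1, 2] -- b is unaffected
```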
but that's independent of scope. An expression in general may depend on
names that have to be looked up, which requires not only a place to look
for them, but also persistence of the name bindings. so def foo(arg=PI*func(x)): ...
means that at call-time you would have to find 'PI', 'func', and 'x' somewhere.
Where & how?
1) If they should be re-evaluated in the enclosing scope, as default arg expressions
are now, you can just write foo(PI*func(x)) as your call.
I may be a bit pedantic. (Read that as I probably am)

But you can't necessarily write foo(PI*func(x)) as your call, because PI
and func may not be in scope where the call is made.
So you would be asking
for foo() to be an abbreviation of that. Which would give you a fresh list if
foo was defined def foo(arg=[]): ...
This was my first thought.
Of course, if you wanted just the expression value as now at def time, you could write
def foo(...):...; foo.__default0=PI*fun(x) and later call foo(foo.__default0), which is
what foo() effectively does now.

2) Or did you want the def code to look up the bindings at def time and save them
in, say, a tuple __deftup0=(PI, func, x) that captures the def-time bindings in the scope
enclosing the def, so that when foo is called, it can do arg = _deftup0[0]*_deftup0[1](_deftup0[2])
to initialize arg and maybe trigger some side effects at call time.
This is tricky, I think it would depend on how foo(arg=[]) would be
translated.

2a) _deftup0=([],), with a subsequent arg = _deftup0[0]
or
2b) _deftup0=(list, ()), with subsequently arg = _deftup0[0](_deftup0[1])
My feeling is that this proposal would create a lot of confusion.

Something like def f(arg = s) might give very different results
depending on s being a list or a tuple.
3) Or did you want to save the names themselves, __default0_names=('PI', 'func', 'x')
and look them up at foo call time, which is tricky as things are now, but could be done?
No, this would make for some kind of dynamic scoping, I don't think it
would mingle with the static scoping python has now.

> The left has one list created outside the body of the function, the
> right one has two lists created outside the body of the function. Why
> on earth should these be the same?

Why on earth should it be the same list, when a function is called
and is provided with a list as a default argument?

It's not "provided with a list" -- it's provided with a _reference_ to a list.
You know this by now, I think. Do you want clone-object-on-new-reference semantics?
A sort of indirect value semantics? If you do, and you think that ought to be
default semantics, you don't want Python. OTOH, if you want a specific effect,
why not look for a way to do it either within python, or as a graceful syntactic
enhancement to python? E.g., def foo(arg{expr}):... could mean evaluate arg as you would like.
Now the ball is in your court to define "as you would like" (exactly and precisely ;-)


I didn't start my question because I wanted something to change in
python. It was just something I wondered about. Now I wouldn't
mind python to be enhanced at this point, so should the python
people decide to work on this, I'll give you my proposal. Using your
syntax.

def foo(arg{expr}):
    ...

should be translated something like:

class _def: pass

def foo(arg = _def):
    if arg is _def:
        arg = expr
    ...

I think this is equivalent with your first proposal and probably
not worth the trouble, since it is not that difficult to get
the behaviour one wants.

I think such a proposal would be most advantageous for newbies
because the two possibilities for default values would make them
think about what the differences are between the two, so they
are less likely to be confused about the def f(l=[]) case.
Because the empty list expression '[]' is evaluated when the
expression containing it is executed.


Again you are just stating the specific choice python has made.
Not why they made this choice.

Why are you interested in the answer to this question? ;-)


Because my impression is that a number of decisions were made
that are inconsistent with each other. I'm just trying to
understand how that came about.
Do you want
to write an accurate historical account, or are you expressing discomfort
from having had to revise your mental model of other programming languages
to fit Python? Or do you want to try to enhance Python in some way?
If there is discomfort, it is more because revising my mental model
to fit Python in one aspect doesn't translate into understanding
other aspects of Python well enough.
I know what happens, I would like to know, why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time.
Not at definition time. Just as other expressions in the function are
not evaluated at definition time.

Maybe it was just easier, and worked very well, and no one showed a need
for doing it differently that couldn't easily be handled. If you want
an expression evaluated at call time, why don't you write it at the top
of the function body instead of lobbying for a change to the default arg
semantics?


I'm not lobbying for a change. You are probably right that this is again
the "Practical beats purity" rule working again. But IMO the python
people are making use of that rule too much, making the total language
less practical as a whole.

Purity is often practical, because it makes it easier to infer knowledge
from things you already know. If you break purity for practicality
you may make one specific aspect easier to understand, but make it
less practical to understand the language as a whole.

Personally I'm someone for whom purity is practical in most cases.
If a language is pure/consistent it makes the language easier to
learn and understand, because your knowledge of one part of the
language will carry over to other parts.

Isn't it practical that strings, tuples and lists all treat '[]'
similarly for accessing an individual item in the sequence? That means
I just have to learn what v[x] means for tuples and I know what
it means for lists, strings and a lot of other things.

Having a count method for lists but not for tuples breaks that
consistency and means I have to look it up for each sequence type
to see whether or not it has that method. Not that practical IMO.
[ ... ]

or what are we pursuing?


What I'm pursuing I think is that people would think about what
impractical effects can arise when you drop purity for practicality.

My impression is that when purity is balanced against practicality
this balancing is only done on a local scale, without considering
what practicality is lost over the whole language by pursuing
practicality in a local aspect.

--
Antoon Pardon
Dec 2 '05 #56
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
Well there are two possibilities I can think of:

1)
arg_default = ...
def f(arg = arg_default):
...


Yuch. Mostly because it doesn't work:

arg_default = ...
def f(arg = arg_default):
...

arg_default = ...
def g(arg = arg_default):

That one looks like an accident waiting to happen.

It's not because accidents can happen, that it doesn't work.
IMO that accidents can happen here is because python
doesn't allow a name to be marked as a constant or unrebindable.


Lots of "accidents" could be fixed if Python marked names in various
ways: with a type, or as only being visible to certain other types, or
whatever. A change that requires such a construct in order to be
usable probably needs rethinking.

Even if that weren't a problem, this would still require introducing
a new variable into the global namespace for every such
argument. Unlike other similar constructs, you *can't* clean them up,
because the whole point is that they be around later.

The decorator was an indication of a possible solution. I know it
fails in some cases, and it probably fails in others as well.
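One concrete way the decorator fails: defaults.update(kwds) mutates the captured dict, so an override in one call leaks into every subsequent call (a return is added here so the results are visible; greet is an illustrative name):

```python
# Mike's setdefaults decorator, with the failure mode demonstrated.
def setdefaults(**defaults):
    def maker(func):
        def called(*args, **kwds):
            defaults.update(kwds)        # <- persists across calls
            return func(*args, **defaults)
        return called
    return maker

@setdefaults(greeting="hello")
def greet(name, greeting=None):
    return "%s, %s" % (greeting, name)

print(greet("Ann"))                  # hello, Ann
print(greet("Bob", greeting="yo"))   # yo, Bob
print(greet("Cid"))                  # yo, Cid -- the override leaked!
```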

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Dec 2 '05 #57
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
On 2005-12-02, Bengt Richter <bo**@oz.net> wrote:
On 1 Dec 2005 09:24:30 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
On 2005-11-30, Duncan Booth <du**********@invalid.invalid> wrote:
Antoon Pardon wrote:
I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be executed
when a function is called, where '[]' is contained in the expression
determining the default value.
Ok, but "[]" (without the quotes) is just one possible expression, so
presumably you have to follow your rules for all default value expressions.
Plain [] evaluates to a fresh new empty list whenever it is evaluated,

Yes, one of the questions I have is why python people would consider
it a problem if it wasn't.


That would make [] behave differently from [compute_a_value()]. This
doesn't seem like a good idea.
Personally I expect the following pieces of code
a = <const expression>
b = <same expression>
to be equivalent with
a = <const expression>
b = a
But that isn't the case when the const expression is a list.
It isn't the case when the const expression is a tuple, either:

>>> x = (1, 2)
>>> (1, 2) is x
False

or an integer:

>>> a = 1234
>>> a is 1234
False
Every value (in the sense of a syntactic element that's a value, and
not a keyword, variable, or other construct) occurring in a program
should represent a different object. If the compiler can prove that a
value can't be changed, it's allowed to use a single instance for all
occurrences of that value. Is there *any* language that behaves
differently from this?
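In CPython this is directly observable: a tuple of constants is stored once in the code object and reused, while a list display builds a fresh object on every execution (the identity result is implementation behavior, not a language guarantee):

```python
# Constant reuse in CPython: the tuple literal lives in co_consts and
# the same object is returned on every call; the list is rebuilt.
def t():
    return (1, 2)

def l():
    return [1, 2]

print(t() is t())   # True in CPython -- same constant object reused
print(l() is l())   # False -- BUILD_LIST runs each call
```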
A person looking at:
a = [1 , 2]
sees something resembling
a = (1 , 2)
Yet the two are treated very differently. As far as I understand the
first is translated into some kind of list((1,2)) statement while
the second is built at compile time and just bound.
No, that translation doesn't happen. [1, 2] builds a list of
values. (1, 2) builds and binds a constant, which is only possible
because it, unlike [1, 2], *is* a constant. list(1, 2) calls the
function "list" on a pair of values:
>>> def f():
...     a = [1, 2]
...     b = list(1, 2)
...     c = (1, 2)
...
>>> dis.dis(f)

2 0 LOAD_CONST 1 (1)
3 LOAD_CONST 2 (2)
6 BUILD_LIST 2
9 STORE_FAST 0 (a)

3 12 LOAD_GLOBAL 1 (list)
15 LOAD_CONST 1 (1)
18 LOAD_CONST 2 (2)
21 CALL_FUNCTION 2
24 STORE_FAST 2 (b)

4 27 LOAD_CONST 3 ((1, 2))
30 STORE_FAST 1 (c)
33 LOAD_CONST 0 (None)
36 RETURN_VALUE
This seems to go against the pythonic spirit of explicit is
better than implicit.
Even if "[arg]" were just syntactic sugar for "list(arg)", why would
that be "implicit" in some way?
It also seems to go against the way default arguments are treated.


Only if you don't understand how default arguments are treated.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Dec 2 '05 #58
On 2 Dec 2005 13:05:43 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
On 2005-12-02, Bengt Richter <bo**@oz.net> wrote:
On 1 Dec 2005 09:24:30 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
On 2005-11-30, Duncan Booth <du**********@invalid.invalid> wrote:
Antoon Pardon wrote:

Personnaly I expect the following pieces of code

a = <const expression>
b = <same expression>

to be equivallent with

a = <const expression>
b = a

But that isn't the case when the const expression is a list.
ISTM the line above is a symptom of a bug in your mental Python source interpreter.
It's a contradiction. A list can't be a "const expression".
We probably can't make real progress until that is debugged ;-)
Note: assert "const expression is a list" should raise a mental exception ;-)
A person looking at:

a = [1 , 2]

English: let a refer to a mutable container object initialized to contain
an ordered sequence of specified elements 1 and 2.

sees something resembling

a = (1 , 2)

English: let a refer to an immutable container object initialized to contain
an ordered sequence of specified elements 1 and 2.
Yet the two are treated very differently. As far as I understand the
first is translated into some kind of list((1,2)) statement while

They are of course different in that two different kinds of objects
(mutable vs immutable containers) are generated. This can allow an optimization
in the one case, but not generally in the other.

the second is built at compile time and just bound.

a = (1, 2) is built at compile time, but a = (x, y) would not be, since x and y
can't generally be known at compile time. This is a matter of optimization, not
semantics. a = (1, 2) _could_ be built with the same code as a = (x, y), picking up
1 and 2 constants as arguments to a dynamic construction of the tuple, done in the
identical way as the construction would be done with x and y. But that is a red herring
in this discussion, if we are talking about the language rather than the implementation.

This seems to go against the pythonic spirit of explicit is
better than implicit.

Unless you accept that '[' is explicitly different from '(' ;-)

It also seems to go against the way default arguments are treated.

I suspect another bug ;-)
but that's independent of scope. An expression in general may depend on
names that have to be looked up, which requires not only a place to look
for them, but also persistence of the name bindings. so def foo(arg=PI*func(x)): ...
means that at call-time you would have to find 'PI', 'func', and 'x' somewhere.
Where & how?
1) If they should be re-evaluated in the enclosing scope, as default arg expressions
are now, you can just write foo(PI*func(x)) as your call.


I may be a bit pedantic. (Read that as I probably am)

But you can't necessarily write foo(PI*func(x)) as your call, because PI
and func may not be in scope where the call is made.

Yes, I was trying to make you notice this ;-)
So you would be asking
for foo() to be an abbreviation of that. Which would give you a fresh list if
foo was defined def foo(arg=[]): ...
This was my first thought.

[...]
> Why on earth should it be the same list, when a function is called
> and is provided with a list as a default argument?

It's not "provided with a list" -- it's provided with a _reference_ to a list.
You know this by now, I think. Do you want clone-object-on-new-reference semantics?
A sort of indirect value semantics? If you do, and you think that ought to be
default semantics, you don't want Python. OTOH, if you want a specific effect,
why not look for a way to do it either within python, or as a graceful syntactic
enhancement to python? E.g., def foo(arg{expr}):... could mean evaluate arg as you would like.
Now the ball is in your court to define "as you would like" (exactly and precisely ;-)


I didn't start my question because I wanted something to change in
python. It was just something I wondered about. Now I wouldn't

I wonder if this "something" will still exist once you get
assert "const expression is a list" to raise a mental exception ;-)
mind python to be enhanced at this point, so should the python
people decide to work on this, I'll give you my proposal. Using your
syntax.

def foo(arg{expr}):
    ...

should be translated something like:

class _def: pass

def foo(arg = _def):
    if arg is _def:
        arg = expr
    ...

I think this is equivalent with your first proposal and probably
not worth the trouble, since it is not that difficult to get
the behaviour one wants.

Again, I'm not "proposing" anything except to help lay out evidence.
The above is just a spelling of a typical idiom for mutable default
value initialization.

I think such a proposal would be most advantageous for newbies
because the two possibilities for default values would make them
think about what the differences are between the two, so they
are less likely to be confused about the def f(l=[]) case.
So are you saying it's not worth the trouble or that it would be
worth the trouble to help newbies?
[...]
Again you are just stating the specific choice python has made.
Not why they made this choice.

Why are you interested in the answer to this question? ;-)


Because my impression is that a number of decisions were made
that are inconsistent with each other. I'm just trying to
understand how that came about.

An inconsistency in our impression of the world
is not an inconsistency in the world ;-)
Do you want
to write an accurate historical account, or are you expressing discomfort
from having had to revise your mental model of other programming languages
to fit Python? Or do you want to try to enhance Python in some way?


If there is discomfort, it is more because revising my mental model
to fit Python in one aspect doesn't translate into understanding
other aspects of Python well enough.

An example?
I know what happens; I would like to know why they made this choice. One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time.
Not at definition time. Just as other expressions in the function are
not evaluated at definition time.
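The definition-time behaviour is easy to observe (a sketch; the function name is made up):

```python
import time

def stamp(t=time.time()):   # the default expression runs once, at 'def' time
    return t

a = stamp()
time.sleep(0.05)
b = stamp()
assert a == b               # both calls see the value captured at definition
assert stamp(0) == 0        # an explicit argument still overrides it
```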

Maybe it was just easier, and worked very well, and no one showed a need
for doing it differently that couldn't easily be handled. If you want
an expression evaluated at call time, why don't you write it at the top
of the function body instead of lobbying for a change to the default arg
semantics?


I'm not lobbying for a change. You are probably right that this is again
the "Practical beats purity" rule working again. But IMO the python

I didn't say this was a case of "Practical beats purity" -- I said "maybe
it was just easier" -- which doesn't necessarily mean impure to me, nor
that the more difficult choice would have been better ;-)
In fact I think the way default args work now works fine.
If I were to make a list of things to change, that would not be at the top.
people are making use of that rule too much, making the total language
less practical as a whole. IMO this is hand waving unless you can point to specifics, and a kind of
unseemly propaganda/innuendo if you can't.

Purity is often practical, because it makes it easier to infer knowledge
from things you already know. If you break the purity for the practical
you may make one specific aspect easier to understand, but make it
less practical to understand the language as a whole. That is a good point, but to have the real moral standing to talk about purity
one has to be able to demonstrate it, which is really hard.

Personally I'm someone for whom purity is practical in most cases.
If a language is pure/consistent it makes the language easier to
learn and understand, because your knowledge of one part of the
language will carry over to other parts. I agree, so long as the "knowledge of one part" is not a misconception.

Isn't it practical that strings, tuples and lists all treat '[]'
similarly for accessing an individual item in the sequence? That means
I just have to learn what v[x] means for tuples and I know what
it means for lists, strings and a lot of other things. But not all, since your example v[x] requires that v not be a dict.

Having a count method for lists but not for tuples breaks that
consistency and means I have to look up for each sequence
whether or not it has that method. Not that practical, IMO. But I think you are having the wrong expectation of v[x] syntax.
It will generate code that looks for __getitem__ and having __getitem__
means that iteration syntax may access it if __iter__ does not preempt,
but if you expect this to guarantee the presence of other methods
such as count, then you are misreading v[x].
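A sketch of that reading of v[x], using a made-up class that supports subscription but carries no list methods:

```python
class Squares:
    """Supports subscription (and hence old-style iteration) only."""
    def __getitem__(self, i):
        if not 0 <= i < 5:
            raise IndexError(i)
        return i * i

v = Squares()
assert v[3] == 9                      # v[x] works via __getitem__
assert list(v) == [0, 1, 4, 9, 16]   # iteration falls back on __getitem__
assert not hasattr(v, 'count')       # but nothing implies list methods
```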

OTOH, you can have some expectations of list(v), which if it succeeds
will give you the list methods. As mentioned in another post, I think
if iter were a type, iter(v) could return an iterator object that could
have all the methods one might think appropriate for all sequences, and
could thus be a way of unifying sequence usage. iter could also allow
some handy methods that return further specialized iterators (a la itertools)
rather than consuming itself to return a specific result like a count.
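A small illustration of the list(v) route (in today's Python tuples have count themselves, so take this as the general pattern rather than a necessity):

```python
data = (1, 2, 2, 3)             # any sequence or iterable will do
as_list = list(data)            # if this succeeds, the full list API follows
assert as_list.count(2) == 2
assert as_list.index(3) == 3
```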
[ ... ]

or what are we pursuing?
What I'm pursuing, I think, is that people would think about what
impractical effects can arise when you drop purity for practicality.

I think this is nicely said and important. I wish it were possible
to arrive at a statement like this without wading through massive irrelevancies ;-)

My impression is that when purity is balanced against practicality,
this balancing is only done on a local scale, without considering
what practicality is lost over the whole language by pursuing
practicality in a local aspect.

It is hard to demonstrate personal impressions to others, since
we are not Vulcans capable of mind-melds, so good concrete examples
are critical, along with prose that focuses attention on the aspects
to be demonstrated. Good luck, considering that what may seem
like a valid impression to you may seem like a mis-reading to others ;-)

BTW, I am participating in this thread more out of interest in
the difficulties of human communication than in the topic per se,
so I am probably OT ;-)

Regards,
Bengt Richter
Dec 2 '05 #59
Op 2005-12-02, Bengt Richter schreef <bo**@oz.net>:
On 2 Dec 2005 13:05:43 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
On 2005-12-02, Bengt Richter <bo**@oz.net> wrote:
On 1 Dec 2005 09:24:30 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:

On 2005-11-30, Duncan Booth <du**********@invalid.invalid> wrote:
> Antoon Pardon wrote:
>
Personally I expect the following pieces of code

a = <const expression>
b = <same expression>

to be equivalent to

a = <const expression>
b = a

But that isn't the case when the const expression is a list.

ISTM the line above is a symptom of a bug in your mental Python source interpreter.
It's a contradiction. A list can't be a "const expression".
We probably can't make real progress until that is debugged ;-)
Note: assert "const expression is a list" should raise a mental exception ;-)


Why should "const expression is a list" raise a mental exception with
me? I think it should have raised a mental exception with the designer.
If there is a problem with const list expression, maybe the language
shouldn't have something that looks so much like one?

This seems to go against the pythonic spirit of explicit is
better than implicit.

Unless you accept that '[' is explicitly different from '(' ;-)
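A sketch of what that explicit difference buys (CPython's reuse of tuple constants is an implementation detail, so it is only remarked on, not asserted):

```python
def f():
    return [1, 2, 3]    # '[' constructs a fresh, mutable list per call

def g():
    return (1, 2, 3)    # '(' yields an immutable value

assert f() is not f()   # two distinct list objects every time
assert f() == [1, 2, 3]
assert g() == (1, 2, 3)
# In CPython g() is g() is typically True (the tuple lives in co_consts),
# but the language does not promise it.
```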

It also seems to go against the way default arguments are treated.
I suspect another bug ;-)
The question is where is the bug? You can start from the idea that
the language is how it was defined and thus by definition correct
and so any problem is user problem.

You can also notice that a specific construct is a stumbling block
for a lot of new people and wonder if that doesn't say something
about the design.
Do you want
to write an accurate historical account, or are you expressing discomfort
from having had to revise your mental model of other programming languages
to fit Python? Or do you want to try to enhance Python in some way?


If there is discomfort, then that has more to do with the fact that
revising my mental model to fit Python in one aspect doesn't translate
into understanding other aspects of Python well enough.

An example?


Well there is the documentation about function calls, which states
something like the first positional argument provided will go
to the first parameter, ... and that default values will be used
for parameters not filled by arguments. Then you stumble on
the built-in function range with the signature:

range([start,] stop[, step])

Why, if you provide only one argument, does it go to the second
parameter?
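The oddity is easy to see at the interpreter (Python 3 spelling, where range is lazy):

```python
# One argument fills the *second* parameter of the documented signature:
assert list(range(5)) == [0, 1, 2, 3, 4]      # stop=5, start defaults to 0
assert list(range(2, 5)) == [2, 3, 4]         # start=2, stop=5
assert list(range(2, 10, 3)) == [2, 5, 8]     # start, stop, step
```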

Why are a number of constructs for specifying/creating a value/object
limited to subscriptions? Why is it impossible to do the following:

a = ...
f(...)
a = 3:8
tree.keys('a':'b')

Why is how you can work with defaults in slices not similar to
how you work with defaults in calls. You can do:

lst[:7]

So why can't you call range as follows:

range(,7)
lst[::] is a perfectly acceptable slice, so why doesn't 'slice()' work?
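A sketch contrasting the two default mechanisms:

```python
lst = list(range(10))

# Slices may omit any bound:
assert lst[:7] == [0, 1, 2, 3, 4, 5, 6]
assert lst[::] == lst

# The analogous call syntax is a SyntaxError:
ok = True
try:
    compile("range(,7)", "<sketch>", "eval")
except SyntaxError:
    ok = False
assert not ok

# slice() spells the omitted bounds explicitly instead:
assert slice(None, 7).start is None and slice(None, 7).stop == 7
```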

Positional arguments must come before keyword arguments, but when
you somehow need to do the following:

foo(arg0, *args, kwd = value)

You suddenly find out the above is illegal and it should be written

foo(arg0, kwd = value, *args)
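A runnable check of the two spellings (in today's Python 3 both call forms are accepted; in the 2.x of this thread only the second one was legal):

```python
def foo(arg0, *args, **kwargs):
    return arg0, args, kwargs

# The ordering the thread calls illegal -- fine in modern Python:
assert foo(1, 2, 3, kwd='value') == (1, (2, 3), {'kwd': 'value'})
# The ordering the thread says you had to use -- still accepted too:
assert foo(1, kwd='value', *[2, 3]) == (1, (2, 3), {'kwd': 'value'})
```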

people are making use of that rule too much, making the total language
less practical as a whole.

IMO this is hand waving unless you can point to specifics, and a kind of
unseemly propaganda/innuendo if you can't.


IMO the use of negative indexing is the prime example in this case.
Sure it is practical that if you want the last element of a list,
you can just use -1 as a subscript. However, in a lot of cases -1
is just as out of bounds as an index greater than the list length.

At one time I was given lower and upper limits for a slice from a list;
both could range from 0 to len - 1. But I really needed three slices:
lst[low:up], lst[low-1:up-1] and lst[low+1:up+1].
Getting lst[low+1:up+1] wasn't a problem; the way Python treats
slices gave me just what I wanted, even if low and up were too big.
But when low or up was zero, lst[low-1:up-1] gave trouble.

If I want lst[low:up] in reverse, then the following works in general:

lst[up-1 : low-1 : -1]

Except of course when low or up is zero.
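The zero boundary case runs like this:

```python
lst = list('abcde')
low, up = 1, 4
assert lst[up-1:low-1:-1] == ['d', 'c', 'b']    # reversed lst[low:up]

low = 0
# low-1 == -1 now silently means "the last element", not "before index 0":
assert lst[up-1:low-1:-1] == []                 # not ['d', 'c', 'b', 'a']
assert lst[up-1::-1] == ['d', 'c', 'b', 'a']    # zero needs its own spelling
```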
Of course I can make a subclass of list that works as I want, but IMO that
is the other way around. People should use a subclass for special cases,
like indexes that wrap around, not use a subclass to remove the special
casing, that was put in the base class.

Of course this example fits between the examples above, and some of
those probably will fit here too.
or what are we pursuing?


What I'm pursuing, I think, is that people would think about what
impractical effects can arise when you drop purity for practicality.

I think this is nicely said and important. I wish it were possible
to arrive at a statement like this without wading through massive irrelevancies ;-)


Well I hope you didn't have to wade so much this time.
BTW, I am participating in this thread more out of interest in
the difficulties of human communication than in the topic per se,
so I am probably OT ;-)


Well I hope you are having a good time anyway.

--
Antoon Pardon
Dec 5 '05 #60

I was wondering why python doesn't contain a way to make things "const"?

If it were possible to "declare" variables at the time they are bound to
objects that they should not allow modification of the object, then we would
have a concept _orthogonal_ to data types themselves and, as a by-product, a
way to declare tuples as constant lists.

So this could look like this:

const l = [1, 2, 3]

def foo( const l ): ...

and also

const d = { "1" : 1, "2" : 2, ... }

etc.

It seems to me that implementing that feature would be fairly easy.
All that would be needed is a flag with each variable.

Just my tupence,
Gabriel.
--
/-----------------------------------------------------------------------\
| Any intelligent fool can make things bigger, more complex, |
| or more violent. It takes a touch of genius - and a lot of courage - |
| to move in the opposite direction. (Einstein) |
\-----------------------------------------------------------------------/
Dec 14 '05 #61
On Wed, 14 Dec 2005 10:57:05 +0100, Gabriel Zachmann wrote:
I was wondering why python doesn't contain a way to make things "const"?

If it were possible to "declare" variables at the time they are bound to
objects that they should not allow modification of the object, then we would
have a concept _orthogonal_ to data types themselves and, as a by-product, a
way to declare tuples as constant lists.
In an earlier thread, somebody took me to task for saying that Python
doesn't have variables, but names and objects instead.

This is another example of the mental confusion that occurs when you think
of Python having variables. Some languages have variables. Some do not. Do
not apply the rules of behaviour of C (which has variables) to Python
(which does not).

Python already has objects which do not allow modification of the object.
They are called tuples, strings, ints, floats, and other immutables.
So this could look like this:

const l = [1, 2, 3]
(As an aside, it is poor practice to use "l" for a name, because it is too
easy to mistake for 1 in many fonts. I always use capital L for quick
throw-away lists, like using x or n for numbers or s for a string.)

Let's do some tests with the constant list, which I will call L:
py> L.count(2)
1
py> L.append(4)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ConstantError: can't modify constant object

(Obviously I've faked that last error message.) So far so good: we can
call list methods on L, but we can't modify it.

But now look what happens when we rebind the name L:

py> L = 2
py> print L
2

Rebinding the name L doesn't do anything to the object that L pointed to.
That "constant list" will still be floating in memory somewhere. If L was
the only reference to it, then it will be garbage collected and the memory
it uses reclaimed.
Now, let's look at another problem with the idea of constants for Python:

py> L = [1, 2, 3] # just an ordinary modifiable list
py> const D = {1: "hello world", 2: L} # constant dict

Try to modify the dictionary:

py> D[0] = "parrot"
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ConstantError: can't modify constant object

So far so good.

py> L.append(4)

What should happen now? Should Python allow the modification of ordinary
list L? If it does, then this lets you modify constants through the back
door: we've changed one of the items of a supposedly unchangeable dict.

But if we *don't* allow the change to take place, we've locked up an
ordinary, modifiable list simply by putting it inside a constant. This
will be a great way to cause horrible side-effects: you have some code
which accesses an ordinary list, and expects to be able to modify it. Some
other piece of code, could be in another module, puts that list inside a
constant, and *bam* your code will break when you try to modify your list.
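This is exactly the situation with Python's existing immutables, shown here with a tuple wrapping a list:

```python
L = [1, 2, 3]
t = (L,)                    # an immutable tuple holding a mutable list

rebind_failed = False
try:
    t[0] = "parrot"         # the tuple itself refuses item assignment
except TypeError:
    rebind_failed = True
assert rebind_failed

L.append(4)                 # but the list inside is still freely mutable
assert t == ([1, 2, 3, 4],)
```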

Let me ask you this: what problem are you trying to solve by adding
constants to Python?
It seems to me that implementing that feature would be fairly easy.
All that would be needed is a flag with each variable.


Surely not. Just adding a flag to objects would not actually implement the
change in behaviour you want. You need to code changes to the parser to
recognise the new keyword, and you also need code to actually make objects
unmodifiable.

--
Steven.

Dec 14 '05 #62
Gabriel Zachmann wrote:
[...]

It seems to me that implementing that feature would be fairly easy.
All that would be needed is a flag with each variable.

It seems to me like it should be quite easy to add a sixth forward gear
to my car, but I'm quite sure an auto engineer would quickly be able to
point out several reasons why it wasn't, as well as questioning my
"need" for a sixth gear in the first place.

Perhaps you could explain why the absence of const objects is a problem?

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Dec 14 '05 #63
Gabriel Zachmann wrote:

I was wondering why python doesn't contain a way to make things "const"?

If it were possible to "declare" variables at the time they are bound to
objects that they should not allow modification of the object, then we
would have a concept _orthogonal_ to data types themselves and, as a
by-product, a way to declare tuples as constant lists.
[...]
It seems to me that implementing that feature would be fairly easy.
All that would be needed is a flag with each variable.


Nope, that's not all you need; in fact, your definition of 'const'
conflates two sorts of constants.

Consider:
const l = 1
l = 2 # error?
And:
const l = []
l.append(foo) # error?
with its more general:
const foo = MyClass()
foo.myMethod() # error? myMethod might mutate.
And none of this can prevent:
d = {}
const foo = [d]
d['bar'] = 'baz'


The first "constant" is the only well-defined one in Python: a constant
name. A "constant" name would prohibit rebinding of the name for the
scope of the name. Of course, it can't prevent whatsoever mutation of
the object which is referenced by the name.

Conceptually, a constant name would be possible in a python-like
language, but it would require significant change to the language to
implement; possibly something along the lines of name/attribute
unification (because with properties it's possible to have
nearly-constant[1] attributes on class instances).

The other form of constant, that of a frozen object, is difficult
(probably impossible) to do for a general object: without knowing ahead
of time the effects of any method invocation, it is very difficult to
know whether the object will be mutated. Combine this with exec/eval
(as the most absurd level of generality), and I'd argue that it is
probably theoretically impossible.

For more limited cases, and for more limited definitions of immutable,
and ignoring completely the effects of extremely strange code, you might
be able to hack something together with a metaclass (or do something
along the lines of a frozenset). I wouldn't recommend it just for
general use.
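For the limited-case hack just mentioned, a minimal sketch (the class name and the "frozen after __init__" policy are assumptions, and per the footnote it cannot stop direct __dict__ fiddling):

```python
class Frozen:
    """Sketch: instances refuse attribute rebinding after __init__."""
    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            object.__setattr__(self, name, value)
        object.__setattr__(self, '_frozen', True)

    def __setattr__(self, name, value):
        if getattr(self, '_frozen', False):
            raise AttributeError("instance is frozen")
        object.__setattr__(self, name, value)

p = Frozen(x=1, y=2)
assert (p.x, p.y) == (1, 2)
raised = False
try:
    p.x = 99                 # rebinding an attribute is refused...
except AttributeError:
    raised = True
assert raised and p.x == 1
p.__dict__['x'] = 99         # ...but __dict__ is still wide open
assert p.x == 99
```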

Really, the single best purpose of constant names/objects is for
compiler optimization, which CPython doesn't do as-of-yet. When it
does, possibly through the PyPy project, constants will more likely be
discovered automatically from analysis of running code.

[1] -- barring straight modification of __dict__
Dec 14 '05 #64
Gabriel Zachmann wrote:

I was wondering why python doesn't contain a way to make things "const"?

If it were possible to "declare" variables at the time they are bound to
objects that they should not allow modification of the object, then we
would have a concept _orthogonal_ to data types themselves and, as a
by-product, a way to declare tuples as constant lists.

So this could look like this:

const l = [1, 2, 3]


That was a bit confusing. Is it the name 'l' or the list
object [1, 2, 3] that you want to make const? If you want
to make the list object immutable, it would make more sense
to write "l = const [1, 2, 3]". I don't quite see the point
though.

If you could write "const l = [1, 2, 3]", that should logically
mean that the name l is fixed to the (mutable) list object
that initially contains [1, 2, 3], i.e. l.append(6) is OK,
but l = 'something completely different' in the same scope
as "const l = [1, 2, 3]" would be forbidden.

Besides, what's the use case for mutable numbers for instance,
when you always use freely rebindable references in your source
code to refer to these numbers? Do you want to be able to play
nasty tricks like this?
f = 5
v = f
v++
print f

6

It seems to me that you don't quite understand what the
assignment operator does in Python. Please read
http://effbot.org/zone/python-objects.htm

Dec 14 '05 #65
On Wed, 14 Dec 2005, Steven D'Aprano wrote:
On Wed, 14 Dec 2005 10:57:05 +0100, Gabriel Zachmann wrote:
I was wondering why python doesn't contain a way to make things "const"?

If it were possible to "declare" variables at the time they are bound
to objects that they should not allow modification of the object, then
we would have a concept _orthogonal_ to data types themselves and, as a
by-product, a way to declare tuples as constant lists.
In an earlier thread, somebody took me to task for saying that Python
doesn't have variables, but names and objects instead.


I'd hardly say it was a taking to task - that phrase implies
authoritativeness on my part! :)
This is another example of the mental confusion that occurs when you
think of Python having variables.
What? What does this have to do with it? The problem here - as Christopher
and Magnus point out - is the conflation in the OP's mind of the idea of a
variable, and of the object referenced by that variable. He could have
expressed the same confusion using your names-values-and-bindings
terminology - just replace 'variable' with 'name'. The expression would be
nonsensical, but it's nonsensical in the variables-objects-and-pointers
terminology too.
Some languages have variables. Some do not.


Well, there is the lambda calculus, I guess ...

tom

--
The sky above the port was the colour of television, tuned to a dead
channel
Dec 14 '05 #66
On Wed, 14 Dec 2005 18:35:51 +0000, Tom Anderson wrote:
On Wed, 14 Dec 2005, Steven D'Aprano wrote:
On Wed, 14 Dec 2005 10:57:05 +0100, Gabriel Zachmann wrote:
I was wondering why python doesn't contain a way to make things "const"?

If it were possible to "declare" variables at the time they are bound
to objects that they should not allow modification of the object, then
we would have a concept _orthogonal_ to data types themselves and, as a
by-product, a way to declare tuples as constant lists.


In an earlier thread, somebody took me to task for saying that Python
doesn't have variables, but names and objects instead.


I'd hardly say it was a taking to task - that phrase implies
authoritativeness on my part! :)
This is another example of the mental confusion that occurs when you
think of Python having variables.


What? What does this have to do with it? The problem here - as Christopher
and Magnus point out - is the conflation in the OP's mind of the idea of a
variable, and of the object referenced by that variable. He could have
expressed the same confusion using your names-values-and-bindings
terminology - just replace 'variable' with 'name'. The expression would be
nonsensical, but it's nonsensical in the variables-objects-and-pointers
terminology too.


If the OP was thinking names-and-bindings, he would have immediately
realised there is a difference between unmodifiable OBJECTS and
unchangeable NAMES, a distinction which doesn't appear to have even passed
his mind.

"Variable" is a single entity of name+value, so it makes perfect sense to
imagine a variable with a constant, unchangeable value. But a name+object
is two entities, and to implement constants you have to have both
unmodifiable objects and names that can't be rebound -- and even that may
not be sufficient.

--
Steven.

Dec 14 '05 #67
