Bytes IT Community

Beginner question - How to effectively pass a large list

P: n/a
Hi folks,

Python can only support passing by value in function calls (right?). I'm
wondering how to efficiently pass a large parameter, such as a large list or
dictionary.

It could be achieved with a pointer in C++; is there such a way in Python?

Thanks in advance.
J.R.
Jul 18 '05 #1
46 Replies


P: n/a
J.R. wrote:
Hi folks,

Python can only support passing by value in function calls (right?),


let's see:

>>> l = range(5)
>>> def modif(alist): alist[0] = 'allo'
...
>>> l
[0, 1, 2, 3, 4]
>>> modif(l)
>>> l
['allo', 1, 2, 3, 4]

Er... Ok, let's try something else :

>>> class Toto:
...     pass
...
>>> t = Toto()
>>> t.name = "Toto"
>>> def rename(t): t.name = "Titi"
...
>>> t
<__main__.Toto instance at 0x402dccec>
>>> t.name
'Toto'
>>> rename(t)
>>> t
<__main__.Toto instance at 0x402dccec>
>>> t.name
'Titi'
Well... You may want to read more about bindings (not 'assignments') in
Python.

(hint : it already *does* work like pointers in C[++] - at least for
mutable objects).

Bruno

Jul 18 '05 #2

P: n/a
On Mon, 15 Dec 2003 15:14:48 +0800, J.R. wrote:
Python can only support passing by value in function calls (right?)


Wrong. Function parameters in Python are always passed by reference,
not by value. The local function parameter gets a binding to the same
object that was passed, so there are not two copies of the object in
memory.

Of course, because of the way that Python treats assignment, you can
then change that local parameter within the function and it will re-bind
to the new value, without altering the original value passed.
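Both halves of that explanation can be checked at the interpreter; here is a small sketch (written in modern Python 3 syntax, which this thread predates):

```python
def rebind(items):
    # Assigning to the local name only re-binds it inside this function;
    # the caller's binding is untouched.
    items = ["replaced"]
    return items

def mutate(items):
    # Changing the object through the shared reference is visible to
    # every name bound to it, including the caller's.
    items.append("added")
    return items

data = [1, 2]
rebind(data)
print(data)   # [1, 2] -- the re-binding had no effect outside
mutate(data)
print(data)   # [1, 2, 'added'] -- the one shared list was changed
```

No copy of the list is ever made; both functions receive the very same object the caller passed.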

--
\ "I went to a fancy French restaurant called 'Déjà Vu'. The head |
`\ waiter said, 'Don't I know you?'" -- Steven Wright |
_o__) |
Ben Finney <http://bignose.squidly.org/>
Jul 18 '05 #3

P: n/a
In article <3f***********************@news.free.fr>,
Bruno Desthuilliers <bd***********@removeme.free.fr> wrote:
....
(hint : it already *does* work like pointers in C[++] - at least for
mutable objects).


For any and all objects, irrespective of mutability.

Donn Cave, do**@u.washington.edu
Jul 18 '05 #4

P: n/a
Thanks for the response.

I got the following conclusion after reading your reply and other documents:

Actually, Python is passing the identity (i.e. memory address) of each
parameter, and it will bind to a local name within the function.

Right?

Thanks,
J.R.

"Donn Cave" <do**@u.washington.edu> wrote in message
news:do************************@nntp3.u.washington.edu...
In article <3f***********************@news.free.fr>,
Bruno Desthuilliers <bd***********@removeme.free.fr> wrote:
...
(hint : it already *does* work like pointers in C[++] - at least for
mutable objects).


For any and all objects, irrespective of mutability.

Donn Cave, do**@u.washington.edu

Jul 18 '05 #5

P: n/a

"J.R." <j.*****@motorola.com> wrote in message
news:br**********@newshost.mot.com...
I got following conclusion after reading your reply and other documents:

Actually, the python is passing the identity (i.e. memory address) of each
parameter, and it will bind to a local name within the function.

Some months ago, there was a long thread on
whether Python function calling is 'call by name',
'call by value', or something else. Without
reiterating long discussion, I think from a user's
viewpoint, it is best considered 'call by
name-binding', with the value-carrying object
'passing' being by cross-namespace binding. One
can think of the return process as being similar
in that the return object is substituted for the
function name as if it had been bound to that
name.

The usage I believe promoted by Knuth and
supported by some here defines 'parameter' as the
within-function local name and 'argument' as the
outside value/object bound to that name at a
particular function call. Yes, CPython does that
with ids that are memory addresses, but that is
implementation rather than part of the language
definition itself. Who knows what we do when we
act as Python interpreters!
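Terry's distinction can be observed without talking about addresses at all: the parameter name ends up bound to the very same object as the argument. A hedged Python 3 sketch (the function name is made up for illustration):

```python
def probe(parameter):
    # The local name `parameter` is bound to the caller's object itself;
    # no copy is made, however large the argument is.
    return id(parameter)

argument = list(range(1_000_000))
# Identical identity inside and outside the call. That CPython's ids
# happen to be memory addresses is an implementation detail, not part
# of the language definition.
print(probe(argument) == id(argument))   # True
```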

Terry J. Reedy


Jul 18 '05 #6

P: n/a
In article <br**********@newshost.mot.com>,
"J.R." <j.*****@motorola.com> wrote:
Thanks for the response.

I got following conclusion after reading your reply and other documents:

Actually, the python is passing the identity (i.e. memory address) of each
parameter, and it will bind to a local name within the function.


That should work. This notion of binding an object to a name or
data structure is all over Python, as you have probably noticed,
and it will be time well spent if you experiment with it a little.

For example, how could you verify that arrays are passed without
copying? Unless you are unusually dense for a programmer or have
no access to a Python interpreter, that could be no more than a
couple of minutes work. As penance for having failed to do this,
I assign a more mysterious problem to you:

def f(d=[]):
    d.append(0)
    print d

f()
f()

Explain results. When is d bound?

Here's an easier one:

a = {'z': 0}
b = [a] * 4
b[0]['z'] = 1
print b

If the result is anywhere near a surprise to you, then you would
do well to stick to this line of inquiry if you want to do anything
more than the most trivial Python programming.
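For readers following along, the second snippet turns on the same binding rule: sequence repetition copies references, not objects. A sketch of what an interpreter shows (Python 3 print syntax):

```python
a = {'z': 0}
b = [a] * 4        # four references to the SAME dict, not four copies
b[0]['z'] = 1      # mutate it through any one of them...
print(b)           # ...and all four elements reflect the change

# Every element is literally the same object:
print(all(item is a for item in b))   # True
```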

Donn Cave, do**@u.washington.edu
Jul 18 '05 #7

P: n/a
> As penance for having failed to do this,
> I assign a more mysterious problem to you:
>
> def f(d=[]):
>     d.append(0)
>     print d
> f()
> f()
>
> Explain results. When is d bound?


I found that the "problem" shown above is caused by the Python principle
"everything is an object".

Here the function "f" is an object as well; the function object f is created
once the function is defined. And there is a tuple attribute (func_defaults)
inside the function object that records the default arguments.
>>> def f(d=[]):
...     print id(d)
...     d.append(0)
...     print d
...
>>> f.func_defaults
([],)
>>> id(f.func_defaults[0])
11279792
>>> f()
11279792
[0]
>>> f()
11279792
[0, 0]
>>> f([1])
11279952
[1, 0]

1. No value is passed for the default argument
The name "d" is bound to the first element of f.func_defaults. Since the
function "f" is an object, which is kept alive as long as some name
(currently "f") refers to it, the list in func_defaults accumulates
across invocations.

2. A value is passed for the default argument
The name "d" is bound to the passed object, as shown by the different
identity above.

I think we could eliminate the accumulation by changing the function
as follows:

def f(d=[]):
    d = d + [0]
    print d

J.R.
Jul 18 '05 #8

P: n/a
* J.R. spake thusly:
def f(d=[]):
    d.append(0)
    print d
f()
f()
Explain results. When is d bound?
When is this issue going to be resolved? Enough newbie-pythoners have
made this mistake now.

Why not evaluate the parameter lists at calltime instead of definition
time? This should work the same way as lambdas.
>>> f = lambda: []
>>> a = f()
>>> b = f()
>>> a.append(1)
>>> print a, b
[1] []
Maybe this could be defined in a similar way to remind of the
"lazy-evaluation":

def getvalue(cls):
    return "OK"

class SomeClass:
    def blapp(something: getvalue(), other: []):
        print something, other

This way, the lambda forms defined after 'something' and 'other' are
evaluated each time the function is called without supplying those
parameters.

The lambda forms could be evaluated as if within the class-block, and
therefore they might actually use other values defined within that
namespace. However, this might again confuse users, as subclassed
attributes and instance attributes would not be resolved that way.

(Note that default-parameter lambdas today do NOT resolve this way:

>>> class Blapp:
...     ting = 15
...     def fix(self, per=lambda: ting):
...         print per()
...
>>> a = Blapp()
>>> a.fix()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 4, in fix
  File "<stdin>", line 3, in <lambda>
NameError: global name 'ting' is not defined

Finally, things could also be done like this, to avoid confusion to all
those new folks (our main goal):

def blapp(something=getvalue(), other=[]):

getvalue() should be called if the something parameter is not specified
(i.e. the expression something=getvalue() is evaluated), and likewise for
other. Although this would break existing code and would need to be
delayed to at least 3.0 and implemented via __future__.

I must say I can't see the reason to not delay evaluation now that we
have nested scopes. This way, even this would work:
class A:
    default_height = 100
    default_width = 200
    def make_picture(self, height=default_height,
                     width=default_width):
        self.gui.do_blabla(height, width)

a = A()
a.make_picture()
A.default_width = 150
a.make_picture() # Now with new default width
One might argue that this could benefit from resolving
self.default_width instead; that would still require setting
height=None and testing inside make_picture.
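The late-binding behaviour Stian wants can already be had with the sentinel idiom he alludes to in that last sentence; a sketch (Python 3 syntax, with a return value standing in for the hypothetical self.gui call):

```python
class A:
    default_height = 100
    default_width = 200

    def make_picture(self, height=None, width=None):
        # Resolve the defaults at call time, so later changes to the
        # class attributes are picked up by existing instances.
        if height is None:
            height = self.default_height
        if width is None:
            width = self.default_width
        return (height, width)   # stand-in for self.gui.do_blabla(...)

a = A()
print(a.make_picture())   # (100, 200)
A.default_width = 150
print(a.make_picture())   # (100, 150) -- the new default is seen
```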

--
Stian Søiland                Being able to break security doesn't make
Trondheim, Norway            you a hacker more than being able to hotwire
http://stain.portveien.to/   cars makes you an automotive engineer. [ESR]
Jul 18 '05 #9

P: n/a

"Stian Søiland" <st***@stud.ntnu.no> wrote in
message
news:sl******************@ozelot.stud.ntnu.no...
When is this issue going to be resolved? Enough newbie-pythoners have made this mistake now.
I am puzzled as to why. When I learned Python, I
read something to the effect that default value
expressions are evaluated at definition time. I
understood that the resulting objects were saved
for later use (parameter binding) when needed (as
default for value not given). I believed this and
that was that.
Why not evaluate the parameter lists at calltime instead of definition time?


Run time code belongs in the body of the function
and the corresponding byte code in the code
object. To me anyway.

Terry J. Reedy
Jul 18 '05 #10

P: n/a
Stian Søiland wrote:
When is this issue going to be resolved? Enough newbie-pythoners have
made this mistake now.


Changes are rarely if ever made to Python for the sole reason
of reducing newbie mistakes. There needs to be a payoff for
long-term use of the language as well.

In this case, evaluating the default args at call time would
have a negative payoff, since it would slow down every call to
the function in cases where the default value doesn't need
to be evaluated more than once.
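The cost Greg describes shows up in object identity: a definition-time default is built once and shared by every call, whereas call-time semantics would have to rebuild it each time. A sketch of the difference (function names are made up; Python 3 syntax):

```python
def with_default(table={'1': 'A', '2': 'B'}):
    # The dict literal is evaluated once, at def time; every call that
    # omits `table` receives the same already-built object.
    return table

def call_time_style(table=None):
    # What call-time evaluation would imply: construct a fresh dict on
    # every call that omits the argument.
    if table is None:
        table = {'1': 'A', '2': 'B'}
    return table

print(with_default() is with_default())        # True: one shared default
print(call_time_style() is call_time_style())  # False: rebuilt per call
```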

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #11

P: n/a
"Greg Ewing (using news.cis.dfn.de)" <g2********@sneakemail.com> writes:
Changes are rarely if ever made to Python for the sole reason
of reducing newbie mistakes.


print 3/2
Jul 18 '05 #12

P: n/a
"Greg Ewing (using news.cis.dfn.de)" <g2********@sneakemail.com> writes:
In this case, evaluating the default args at call time would
have a negative payoff, since it would slow down every call to
the function in cases where the default value doesn't need
to be evaluated more than once.


In those cases the compiler should notice it and generate appropriate
code to evaluate the default arg just once. In many of the cases it
can put a static value into the .pyc file.
Jul 18 '05 #13

P: n/a
On Wed, 17 Dec 2003 06:20:54 +0000 (UTC), st***@stud.ntnu.no (Stian Søiland) wrote:
* J.R. spake thusly:
> def f(d=[]):
>     d.append(0)
>     print d
> f()
> f()
> Explain results. When is d bound?
When is this issue going to be resolved? Enough newbie-pythoners have
made this mistake now.

It works as designed. The default parameter value bindings are made at def time.
If you want to do them at call time, the idiom is

def f(d=None):
    if d is None: d = []
    d.append(0)
    print d

Why not evaluate the parameter lists at calltime instead of definition
time? This should work the same way as lambdas.

Lambdas do work the same as defs, except for the automatic name binding
and the limitation to an expression as the body.

>>> f = lambda: []
>>> a = f()
>>> b = f()
>>> a.append(1)
>>> print a, b
[1] []

The comparable def code to your lambda would be

def f(): return []

The above is misguiding attention away from your point, IMO.
Maybe this could be defined in a similar way to remind of the
"lazy-evaluation":

def getvalue(cls):
    return "OK"

class SomeClass:
    def blapp(something: getvalue(), other: []):
        print something, other
This might be an interesting concise spelling to accomplish what the following does:
(BTW, you need a self for normal methods)
>>> class Defer(object):
...     def __init__(self, fun): self.fun = fun
...
>>> class SomeClass(object):
...     def blapp(self, something=Defer(lambda: getvalue()), other=Defer(lambda: [])):
...         if isinstance(something, Defer): something = something.fun()
...         if isinstance(other, Defer): other = other.fun()
...         print something, other
...
>>> def getvalue(): return 'gotten_value'
...
>>> sc = SomeClass()
>>> sc.blapp()
gotten_value []
>>> sc.blapp(1)
1 []
>>> sc.blapp('one', 'two')
one two
This way, the lambda forms defined after 'something' and 'other' are
evaluated each time the function is called without supplying those
parameters.

The lambda forms could be evaluated as if within the class-block, and
therefore they might actually use other values defined within that
namespace. However, this might again confuse users, as subclassed
attributes and instance attributes would not be resolved that way.

Yes, messy. ISTM better to let them be defined in the same scope
as the def, as in my example above.

(Note that default-parameter lambdas today do NOT resolve this way:

>>> class Blapp:
...     ting = 15
...     def fix(self, per=lambda: ting):
...         print per()
...
>>> a = Blapp()
>>> a.fix()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 4, in fix
  File "<stdin>", line 3, in <lambda>
NameError: global name 'ting' is not defined

Right, though you could get the functionality if you wanted to.
Finally, things could also be done like this, to avoid confusion to all
those new folks (our main goal):

If they're confused because their preconceptions are filtering out
anything discordant, pandering to them would not serve the language.

def blapp(something=getvalue(), other=[]):

getvalue() should be called if the something parameter is not specified
(i.e. the expression something=getvalue() is evaluated), and likewise for
other.

No. I like your version with the lambda-colons much better.

Although this would break existing code and need to be delayed to at
least 3.0 and implemented in the __future__.

I must say I can't see the reason to not delay evaluation now that we
have nested scopes. This way, even this would work:

Are you sure you counted your negatives there? ;-) You're assuming
default_height will refer to A.default_height here:
class A:
    default_height = 100
    default_width = 200
    def make_picture(self, height=default_height,
                     width=default_width):
        self.gui.do_blabla(height, width)
IMO better:

class A:
    default_height = 100
    default_width = 200
    def make_picture(self, height: A.default_height,
                     width: A.default_width):
        self.gui.do_blabla(height, width)
a = A()
a.make_picture()
A.default_width = 150
a.make_picture() # Now with new default width
One might argue that this could benefit from resolving
self.default_width instead; that would still require setting
height=None and testing inside make_picture.


Or maybe have the implicit lambda take an implicit self arg, bound like a
method if it is the default arg of a method, i.e., so you could write

class A:
    default_height = 100
    default_width = 200
    def make_picture(self, height: self.default_height,
                     width: self.default_width):
        self.gui.do_blabla(height, width)

Then when the function was called, it would behave as if you had written something like
>>> class A(object):
...     default_height = 100
...     default_width = 200
...     def make_picture(self, height=Defer(lambda self: self.default_height),
...                            width=Defer(lambda self: self.default_width)):
...         if isinstance(height, Defer): height = height.fun.__get__(self)()
...         if isinstance(width, Defer): width = width.fun.__get__(self)()
...         self.gui.do_blabla(height, width)
...     class gui(object):  # something to catch the above ;-/
...         def do_blabla(h, w): print 'h=%r, w=%r' % (h, w)
...         do_blabla = staticmethod(do_blabla)
...
>>> a = A()
>>> a.make_picture()
h=100, w=200
>>> a.make_picture('one')
h='one', w=200
>>> a.make_picture('one', 'two')
h='one', w='two'

Obviously the nested class gui is just to make the self.gui.do_blabla call work as spelled ;-)

I doubt if this is going to make it, but I think it's feasible without backwards breakage.
Write a PEP if you want to pursue something like that, but don't get overexcited ;-)

Regards,
Bengt Richter
Jul 18 '05 #14

P: n/a
On Tue, 16 Dec 2003 16:21:00 +0800, "J.R." <j.*****@motorola.com>
wrote:
Actually, the python is passing the identity (i.e. memory address) of each
parameter, and it will bind to a local name within the function.

Right?


Nope.
This is one case where understanding something of the insides of
Python helps. Basically, Python variables are dictionary entries.
The variable values are the dictionary values associated with
the variable names, which are the dictionary keys.

Thus when you pass an argument to a function you are passing a
dictionary key. When the function uses the argument it looks up
the dictionary and uses the value found there.

This applies to all sorts of things in Python including modules -
a local dictionary associated with the module, and classes -
another dictionary. Dictionaries are fundamental to how Python
works, and memory addresses per se play no part in the proceedings.
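Alan's dictionaries can be inspected from Python itself; a small sketch (Python 3 syntax, hypothetical names) showing a module-level key and a function-local key naming the same object:

```python
x = [1, 2, 3]   # creates the key 'x' in the module's namespace dict

def f(y):
    # The parameter name 'y' is a key in f's local namespace, bound to
    # the same object the key 'x' names in the module's namespace.
    return (y is globals()['x'], 'y' in locals())

print(f(x))   # (True, True)
```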

HTH

Alan G.

Author of the Learn to Program website
http://www.freenetpages.co.uk/hp/alan.gauld
Jul 18 '05 #15

P: n/a
Terry Reedy wrote:

"Stian Søiland" <st***@stud.ntnu.no> wrote in
message
news:sl******************@ozelot.stud.ntnu.no...
When is this issue going to be resolved? Enough

newbie-pythoners have
made this mistake now.


I am puzzled as to why. When I learned Python, I
read something to the effect that default value
expressions are evaluated at definition time. I
understood that the resulting objects were saved
for later use (parameter binding) when needed (as
default for value not given). I believed this and
that was that.

I am puzzled as to why you're puzzled. Not everyone who reads the
manual pays attention to the time-of-evaluation explanation, if the
manual they're using even covers it. Not everyone stops and says, "Oh my
God, I don't know whether this is evaluated when the function is
defined or called. I'd better find out." (And of course, not everyone
reads the manual.)

It seems that most people who haven't thought about time of evaluation
tend to expect it to be evaluated when the function is called; I know
I would expect this. (I think I even made the mistake.)

--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #16

P: n/a
Paul Rubin wrote:


"Greg Ewing (using news.cis.dfn.de)" <g2********@sneakemail.com> writes:
In this case, evaluating the default args at call time would
have a negative payoff, since it would slow down every call to
the function in cases where the default value doesn't need
to be evaluated more than once.


In those cases the compiler should notice it and generate appropriate
code to evaluate the default arg just once. In many of the cases it
can put a static value into the .pyc file.


In a perfect world, that would be a good way to do it. However,
Python is NOT in the business of deciding whether an arbitrary object
is constant or not, except maybe in the parsing stages. Internally,
it's just not built that way.

If I were designing, I would definitely make it the language's (and
extension writers') business, because there is a lot of opportunity
for optimization.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #17

P: n/a
Stian Søiland wrote:


* J.R. spake thusly:
> def f(d=[]):
>     d.append(0)
>     print d
> f()
> f()
> Explain results. When is d bound?


When is this issue going to be resolved? Enough newbie-pythoners have
made this mistake now.

Why not evaluate the parameter lists at calltime instead of definition
time? This should work the same way as lambdas.

Consider something like this:

def func(param=((1,2),(3,4),(5,6),(7,8))):
    whatever

Do you really want to be building a big-ass nested tuple every time
the function is called?

Python evaluates default args at time of definition mostly for
performance reasons (and maybe so we could simulate closures before we
had real closures). My gut feeling is, moving the evaluation to call
time would be too much of a performance hit to justify it.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #18

P: n/a
Carl Banks <im*****@aerojockey.invalid> writes:
It seems that most people who haven't thought about time of evaluation
tend to expect it to be evaluated when the function is called; I know
I would expect this. (I think I even made the mistake.)


The principle of least astonishment then suggests that Python made
a suboptimal choice.
Jul 18 '05 #19

P: n/a
Carl Banks <im*****@aerojockey.invalid> writes:
Consider something like this:

def func(param=((1,2),(3,4),(5,6),(7,8))):
    whatever

Do you really want to be building a big-ass nested tuple every time
the function is called?
Come on, the compiler can easily recognize that that list is constant.
Python evaluates default args at time of definition mostly for
performance reasons (and maybe so we could simulate closures before we
had real closures). My gut feeling is, moving the evaluation to call
time would be too much of a performance hit to justify it.


Python takes so many other performance hits for the sake of
convenience and/or clarity that this particular one would be miniscule
by comparison.
Jul 18 '05 #20

P: n/a
JCM
Alan Gauld <al********@btinternet.com> wrote:
On Tue, 16 Dec 2003 16:21:00 +0800, "J.R." <j.*****@motorola.com>
wrote:
Actually, the python is passing the identity (i.e. memory address) of each
parameter, and it will bind to a local name within the function.

Right?
Nope.
This is one case where understanding something of the insides of
Python helps. Basically Python variables are dictionary entries.
The variable values are the dictionary values associated with
the variable names, which are the dictionary keys.

Thus when you pass an argument to a function you are passing a
dictionary key. When the function uses the argument it looks up
the dictionary and uses the value found there.

This applies to all sorts of things in Python including modules -
a local dictionary associated with the module, and classes -
another dictionary. Dictionaries are fundamental to how Python
works and memory addresses per se play no part in the proceedings.


You're talking about the implementation of the interpreter. I
wouldn't have used the term "memory address" as J.R. did, as this also
implies something about the implementation, but it does make sense to
say object IDs/object references are passed into functions and bound
to names/variables.
Jul 18 '05 #21

P: n/a
On Wed, 17 Dec 2003 18:46:10 GMT, al********@btinternet.com (Alan Gauld) wrote:
On Tue, 16 Dec 2003 16:21:00 +0800, "J.R." <j.*****@motorola.com>
wrote:
Actually, the python is passing the identity (i.e. memory address) of each
parameter, and it will bind to a local name within the function.

Right?
Depends on what you mean by "python" ;-) Python the language doesn't pass memory
addresses, but an implementation of Python might very well. The distinction is
important, or implementation features will be misconstrued as language features.

I suspect Alan is trying to steer you away from discussing implementation. I think
it can be useful to talk about both, if the discussion can be plain about which
it is talking about.
Nope.

IMO that is a little too dismissive ;-)
This is one case where understanding something of the insides of
Python helps. Basically Python variables are dictionary entries.
The variable values are the the dictionary values associated with
the variable names which are the dictionary keys.

Thus when you pass an argument to a function you are passing a
dictionary key. When the function uses the argument it looks up
the dictionary and uses the value found there.

That's either plain wrong or mighty misleading. IOW this glosses
over (not to say mangles) some important details, and not just of
the implementation. E.g., when you write

foo(bar)

you are not passing a "key" (bar) to foo. Yes, foo will get access to
the object indicated by bar by looking in _a_ "dictionary". But
when foo accesses bar, it will be using as "key" the parameter name specified
in the parameter list of the foo definition, which will be found in _another_
"dictionary", i.e., the one defining the local namespace of foo. I.e., if foo is

def foo(x): return 2*x

and we call foo thus
bar = 123
print foo(bar)

What happens is that 'bar' is a key in the global (or enclosing) scope and 'x' is a
key in foo's local scope. Thus foo never sees 'bar', it sees 'x'. It is the job of
the function-calling implementation to bind parameter name x to the same thing as the specified
arg name (bar here) is bound to, before the first line in foo is executed. You could write

globals()['bar'] = 123
print foo(globals()['bar'])

and

def foo(x): return locals()['x']*2

to get the flavor of what's happening. Functions would be little more than global-access macros
if it were not for the dynamic of binding local function parameter names to the call-time args.
This applies to all sorts of things in Python including modules -
a local dictionary associated with the module, and classes -
another dictionary. Dictionaries are fundamental to how Python
works and memory addresses per se play no part in the procedings.

Well, ISTM that is contrasting implementation and semantics. IOW, memory addresses may
(and very likely do) or may not play a part in the implementation, but Python
the language is not concerned with that for its _definition_ (though of course implementers
and users are concerned about _implementation_ for performance reasons).

I think the concept of name space is more abstract and more helpful in encompassing the
various ways of finding objects by name that python implements. E.g., when you interactively
type dir(some_object), you will get a list of key names, but typically not from one single dictionary.
There is potentially a complex graph of "dictionaries" to search according to specific rules
defining order for the name in question. Thus one could speak of the whole collection of visible
names in that process as a (complex) name space, or one could speak of a particular dict as
implementing a (simple) name space.

HTH

Regards,
Bengt Richter
Jul 18 '05 #22

P: n/a
"J.R." <j.*****@motorola.com> wrote in message news:<br**********@newshost.mot.com>...

1. There is no value passed to the default argument
The name "d" is bound to the first element of the f.func_defaults. Since the
function "f" is an
object, which will be kept alive as long as there is name (current is "f")
refered to it, the
list in the func_defaults shall be accumulated by each invoking.

....

I think we could eliminate such accumulation effect by changing the function
as follow:
def f(d=[]):
    d = d + [0]
    print d


And the reason this eliminates the accumulation is that the assignment
('d = d + [0]') rebinds the name 'd' to the new list object ([] +
[0]), ie. it no longer points to the first value in f.func_defaults.

What surprised me was that the facially equivalent:

def f(d=[]):
    d += [0]
    print d

did not do so. Apparently '+=' applied to lists acts like
list.append, rather than as the assignment operator it looks like.
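The difference Asun noticed can be pinned down by watching the function's stored defaults (spelled `__defaults__` in Python 3): `+` builds a new list and rebinds the local name, while `+=` on a list calls `__iadd__`, which extends the existing list in place. A sketch:

```python
def plus(d=[]):
    d = d + [0]      # list.__add__: a new list; the local name is
    return d         # rebound and the stored default is never touched

def plus_equals(d=[]):
    d += [0]         # list.__iadd__: extends the default list in place
    return d

plus(); plus()
print(plus.__defaults__)         # ([],)     -- still pristine
plus_equals(); plus_equals()
print(plus_equals.__defaults__)  # ([0, 0],) -- accumulated, like append
```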
Jul 18 '05 #23

P: n/a
On Thu, Dec 18, 2003 at 03:29:55PM -0800, Asun Friere wrote:
"J.R." <j.*****@motorola.com> wrote in message news:<br**********@newshost.mot.com>...

1. There is no value passed to the default argument
The name "d" is bound to the first element of the f.func_defaults. Since the
function "f" is an
object, which will be kept alive as long as there is name (current is "f")
refered to it, the
list in the func_defaults shall be accumulated by each invoking.

...

I think we could eliminate such accumulation effect by changing the function
as follow:
>> def f(d=[]):

d = d+[0]
print d


And the reason this eliminates the accumulation is that the assignment
('d = d + [0]') rebinds the name 'd' to the new list object ([] +
[0]), ie. it no longer points to the first value in f.func_defaults.

What surprised me was that the facially equivalent:

def f(d=[]):
    d += [0]
    print d

did not do so. Apparently '+=' applied to lists acts like
list.append, rather than as the assignment operator it looks like.


list.extend, to be more precise, or list.__iadd__ to be completely precise
:) The maybe-mutate-maybe-rebind semantics of += lead me to avoid its use
in most circumstances.

Jp


Jul 18 '05 #24

P: n/a
Paul Rubin wrote:
"Greg Ewing (using news.cis.dfn.de)" <g2********@sneakemail.com> writes:
Changes are rarely if ever made to Python for the sole reason
of reducing newbie mistakes.

print 3/2


That's not a counterexample! There are sound non-newbie-related
reasons for wanting to fix that.

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #25

P: n/a
Paul Rubin wrote:
In those cases the compiler should notice it and generate appropriate
code to evaluate the default arg just once.


How is the compiler supposed to know? In the general case
it requires reading the programmer's mind.

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #26

P: n/a
On Thu, 18 Dec 2003 18:25:27 +1300, "Greg Ewing (using
news.cis.dfn.de)" <g2********@sneakemail.com> wrote:
Stian S°iland wrote:
When is this issue going to be resolved? Enough newbie-pythoners have
made this mistake now.


Changes are rarely if ever made to Python for the sole reason
of reducing newbie mistakes. There needs to be a payoff for
long-term use of the language as well.


I strongly concur with this principle. Too often, I've been on a team
that goes to such efforts to make a system easy for newbies to learn
that the normal/advanced users are then handicapped by a dumbed-down
interface. E.g., after you've performed some process enough times, do
you *really* need a step-by-step wizard?

--dang
Jul 18 '05 #27

P: n/a
Paul Rubin wrote:


Carl Banks <im*****@aerojockey.invalid> writes:
Consider something like this:

def func(param=((1,2),(3,4),(5,6),(7,8))):
    whatever

Do you really want to be building a big-ass nested tuple every time
the function is called?


Come on, the compiler can easily recognize that that list is constant.


Yes, but that doesn't account for all expensive parameters. What
about this:

DEFAULT_LIST = ((1,2),(3,4),(5,6),(7,8))

def func(param=DEFAULT_LIST):
    pass

Or this:

import external_module

def func(param=external_modules.create_constant_object()):
    pass

Or how about this:

def func(param={'1': 'A', '2': 'B', '3': 'C', '4': 'D'}):
    pass
The compiler couldn't optimize any of the above cases.

Python evaluates default args at time of definition mostly for
performance reasons (and maybe so we could simulate closures before we
had real closures). My gut feeling is, moving the evaluation to call
time would be too much of a performance hit to justify it.


Python takes so many other performance hits for the sake of
convenience and/or clarity that this particular one would be miniscule
by comparison.

Well, I don't have any data, but my gut feeling is this would be
somewhat more than "miniscule" performance hit. Seeing how pervasive
default arguments are, I'm guessing it would be a very significant
slowdown if default arguments had to be evaluated every call.

But since I have no numbers, I won't say anything more about it.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #28

On Sat, 20 Dec 2003 01:43:00 GMT, Carl Banks <im*****@aerojockey.invalid> wrote:
Paul Rubin wrote:


Carl Banks <im*****@aerojockey.invalid> writes:
Consider something like this:

def func(param=((1,2),(3,4),(5,6),(7,8))):
whatever

Do you really want to be building a big-ass nested tuple every time
the function is called?
Come on, the compiler can easily recognize that that list is constant.


Yes, but that doesn't account for all expensive parameters. What
about this:

DEFAULT_LIST = ((1,2),(3,4),(5,6),(7,8))

def func(param=DEFAULT_LIST):
pass

Or this:

import external_module

def func(param=external_module.create_constant_object()):
pass

Or how about this:

def func(param={'1': 'A', '2': 'B', '3': 'C', '4': 'D'}):
pass
The compiler couldn't optimize any of the above cases.

For the DEFAULT_LIST (tuple?) and that particular dict literal, why not?

Python evaluates default args at time of definition mostly for
performance reasons (and maybe so we could simulate closures before we
had real closures). My gut feeling is, moving the evaluation to call
time would be too much of a performance hit to justify it.


Python takes so many other performance hits for the sake of
convenience and/or clarity that this particular one would be miniscule
by comparison.

Well, I don't have any data, but my gut feeling is this would be
somewhat more than "miniscule" performance hit. Seeing how pervasive
default arguments are, I'm guessing it would be a very significant
slowdown if default arguments had to be evaluated every call.

But since I have no numbers, I won't say anything more about it.

Don't know if I got this right, but

[18:32] /d/Python23/Lib>egrep -c 'def .*=' *py |cut -d: -f 2|sum
Total = 816
[18:32] /d/Python23/Lib>egrep -c 'def ' *py |cut -d: -f 2|sum
Total = 4454

would seem to suggest pervasive ~ 816/4454
or a little less than 20%

Of course that says nothing about which are typically called in hot loops ;-)
But I think it's a bad idea as a default way of operating anyway. You can
always program call-time evaluations explicitly. Maybe some syntactic sugar
could be arranged, but I think I would rather have some sugar for the opposite
instead -- i.e., being able to code a block of preset locals evaluated and bound
locally like current parameter defaults, but not being part of the call signature.

Regards,
Bengt Richter
Jul 18 '05 #29

Bengt Richter wrote:


On Sat, 20 Dec 2003 01:43:00 GMT, Carl Banks <im*****@aerojockey.invalid> wrote:
Paul Rubin wrote:


Carl Banks <im*****@aerojockey.invalid> writes:
Consider something like this:

def func(param=((1,2),(3,4),(5,6),(7,8))):
whatever

Do you really want to be building a big-ass nested tuple every time
the function is called?

Come on, the compiler can easily recognize that that list is constant.
Yes, but that doesn't account for all expensive parameters. What
about this:

DEFAULT_LIST = ((1,2),(3,4),(5,6),(7,8))

def func(param=DEFAULT_LIST):
pass

Or this:

import external_module

def func(param=external_module.create_constant_object()):
pass

Or how about this:

def func(param={'1': 'A', '2': 'B', '3': 'C', '4': 'D'}):
pass
The compiler couldn't optimize any of the above cases.


For the DEFAULT_LIST (tuple?) and that particular dict literal, why not?

Well, the value of DEFAULT_LIST is not known at compile time (unless, I
suppose, this happens to be in the main module or command prompt).
The literal is not a constant, so the compiler couldn't optimize this.

(Remember, the idea is that default parameters should be evaluated at
call time, which would require the compiler to put the evaluations
inside the function's pseudo-code. The compiler could optimize default
parameters by evaluating them at compile time: but you can only do
that with constants, for obvious reasons.)
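For concreteness, the def-time evaluation being discussed can be seen directly; a minimal sketch:

```python
def f(d=[]):          # the [] is evaluated once, when the def statement runs
    d.append(1)
    return d

first = f()
second = f()
assert first is second        # every defaulted call gets the same list object
assert first == [1, 1]        # so the appends pile up across calls
```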

Well, I don't have any data, but my gut feeling is this would be
somewhat more than "miniscule" performance hit. Seeing how pervasive
default arguments are, I'm guessing it would be a very significant
slowdown if default arguments had to be evaluated every call.

But since I have no numbers, I won't say anything more about it.

Don't know if I got this right, but

[18:32] /d/Python23/Lib>egrep -c 'def .*=' *py |cut -d: -f 2|sum
Total = 816
[18:32] /d/Python23/Lib>egrep -c 'def ' *py |cut -d: -f 2|sum
Total = 4454

would seem to suggest pervasive ~ 816/4453
or a little less than 20%


Well, if you don't like the particular adjective I used, feel free to
substitute another. This happens a lot to me in c.l.p (see Martelli).
All I'm saying is, default arguments are common in Python code, and
slowing them down is probably going to be a significant performance
hit.

(You probably underestimated a little bit anyways: some functions
don't get to the default arguments until the second line.)

Of course that says nothing about which are typically called in hot
loops ;-) But I think it's a bad idea as a default way of operating
anyway. You can always program call-time evaluations
explicitly. Maybe some syntactic sugar could be arranged, but I think
I would rather have some sugar for the opposite instead -- i.e.,
being able to code a block of preset locals evaluated and bound
locally like current parameter defaults, but not being part of the
call signature.


Well, personally, I don't see much use for non-constant default
arguments, as we have them now, whereas they would be useful if you
could get a fresh copy. And, frankly, the default arguments feel like
they should be evaluated at call time. Now that we have nested
scopes, there's no need for them to simulate closures. So, from a
purely language perspective, I think they ought to be evaluated at
call time.

The only thing is, I very much doubt I'd be willing to take the
performance hit for it.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #30

Carl Banks <im*****@aerojockey.invalid> writes:
The only thing is, I very much doubt I'd be willing to take the
performance hit for it.


If you don't like performance hits, you're using the wrong language :).

Seriously, this hasn't been an issue in Common Lisp, which in general
pays far more attention to performance issues than Python ever has.
Jul 18 '05 #31

Carl Banks wrote:
Well, personally, I don't see much use for non-constant default
arguments, as we have them now, whereas they would be useful if you
could get a fresh copy.


I disagree completely. If a function modifies one of its arguments, this is
a fundamental property of that function. If using a default value for that
argument causes the function to swallow the changes it performs, the
function no longer has consistent behavior.

On the other hand, the following function has consistent, if silly,
behavior:
default_list = []

def append_5_to_a_list(which_list = default_list):
which_list.append(5)
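Exercised, it shows exactly the consistent sharing described:

```python
default_list = []

def append_5_to_a_list(which_list=default_list):
    which_list.append(5)

append_5_to_a_list()
append_5_to_a_list()
# Every defaulted call mutates the one shared list, by design.
assert default_list == [5, 5]
```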
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #32

Quoth Jp Calderone <ex*****@intarweb.us>:
| On Thu, Dec 18, 2003 at 03:29:55PM -0800, Asun Friere wrote:
....
|> What surprised me was that the facially equivalent:
|> >>> def f (d=[]) :
|> ... d += [0]
|> ... print d
|> did not do so. Apparently '+=' in regards to lists acts like
|> list.append, rather than as the assignment operator it looks like.
|
| list.extend, to be more precise, or list.__iadd__ to be completely precise
| :) The maybe-mutate-maybe-rebind semantics of += lead me to avoid its use
| in most circumstances.

This tacky feature certainly ought to be considered for the
chopping block in version 3, for exactly that reason.
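For reference, the maybe-mutate-maybe-rebind split Jp describes looks like this:

```python
L = [1, 2]
alias = L
L += [3]              # list: += calls list.__iadd__ and mutates in place
assert alias == [1, 2, 3]   # the alias sees the mutation
assert L is alias

t = (1, 2)
u = t
t += (3,)             # tuple: += builds a new tuple and rebinds the name t
assert u == (1, 2)    # u still refers to the original tuple
assert t == (1, 2, 3)
```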

Donn Cave, do**@drizzle.com

PS. Good analysis, JR, I learned something from it.
Jul 18 '05 #33

P: n/a
Paul Rubin wrote:
Carl Banks <im*****@aerojockey.invalid> writes:
The only thing is, I very much doubt I'd be willing to take the
performance hit for it.


If you don't like performance hits, you're using the wrong language :).

Seriously, this hasn't been an issue in Common Lisp, which in general
pays far more attention to performance issues than Python ever has.


Well, if you're right, I'm all for it.

But I hope the reason you want this is because it's less surprising,
more intuitive, more useful, or whatnot, and not just because Common
Lisp did it that way. Because people who can't see the side of any
computer programming argument except in the context of Common Lisp are
just pathetic.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #34

Rainer Deyke wrote:
Carl Banks wrote:
Well, personally, I don't see much use for non-constant default
arguments, as we have them now, whereas they would be useful if you
could get a fresh copy.


I disagree completely. If a function modifies one of its arguments, this is
a fundamental property of that function. If using a default value for that
argument causes the function to swallow the changes it performs, the
function no longer has consistent behavior.

On the other hand, the following function has consistent, if silly,
behavior:
default_list = []

def append_5_to_a_list(which_list = default_list):
which_list.append(5)

I said a non-constant default argument wasn't useful. As evidence
against, you suggest that the function is "consistent."

Now, do have any evidence that non-constant, default arguments (as
they are now) are USEFUL?

(I don't agree with your "consistent" theory, anyways. The function
would be treating all arguments consistently: it's just that it'd get
a fresh copy of the default arguments each call.)
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #35

P: n/a
Carl Banks wrote:
Now, do have any evidence that non-constant, default arguments (as
they are now) are USEFUL?
def draw_pixel(x, y, color, surface=screen):
screen[y][x] = color
(I don't agree with your "consistent" theory, anyways. The function
would be treating all arguments consistently: it's just that it'd get
a fresh copy of the default arguments each call.)


A function that mutates its arguments should not be called with "fresh"
arguments, implicitly or explicitly. If the purpose of the function is to
modify its arguments, then doing so would throw away the effect of the
function. If the purpose of the function is not to modify its arguments,
then it shouldn't do so.
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #36

P: n/a
Rainer Deyke wrote:
def draw_pixel(x, y, color, surface=screen):
screen[y][x] = color


This should of course be:

def draw_pixel(x, y, color, surface=screen):
surface[y][x] = color
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #37

P: n/a
Carl Banks <im*****@aerojockey.invalid> writes:
But I hope the reason you want this is because it's less surprising,
more intuitive, more useful, or whatnot, and not just because Common
Lisp did it that way. Because people who can't see the side of any
computer programming argument except in the context of Common Lisp are
just pathetic.


It avoids the need for ridiculous kludges to check whether there is a
real arg there or not, etc. I'd prefer that Python had used the CL
method in the first place since I find the Python method bizarre and
counterintuitive. However, changing it now would introduce
incompatibility that's harder to justify. So we probably have to live
with it.
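The kludge in question is, presumably, the familiar None-sentinel idiom:

```python
def f(d=None):
    if d is None:   # sentinel test: was a real argument passed?
        d = []      # if not, build a fresh list on every call
    d.append(1)
    return d

assert f() == [1]
assert f() == [1]          # a new list each time, unlike f(d=[])
assert f() is not f()
```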
Jul 18 '05 #38

P: n/a
On Sat, 20 Dec 2003 03:55:22 GMT, Carl Banks <im*****@aerojockey.invalid> wrote:
Bengt Richter wrote:


On Sat, 20 Dec 2003 01:43:00 GMT, Carl Banks <im*****@aerojockey.invalid> wrote:
Paul Rubin wrote:
Carl Banks <im*****@aerojockey.invalid> writes:
> Consider something like this:
>
> def func(param=((1,2),(3,4),(5,6),(7,8))):
> whatever
>
> Do you really want to be building a big-ass nested tuple every time
> the function is called?

Come on, the compiler can easily recognize that that list is constant.

Yes, but that doesn't account for all expensive parameters. What
about this:

DEFAULT_LIST = ((1,2),(3,4),(5,6),(7,8))

def func(param=DEFAULT_LIST):
pass

Or this:

import external_module

def func(param=external_module.create_constant_object()):
pass

Or how about this:

def func(param={'1': 'A', '2': 'B', '3': 'C', '4': 'D'}):
pass
The compiler couldn't optimize any of the above cases.
For the DEFAULT_LIST (tuple?) and that particular dict literal, why not?

Well, the value of DEFAULT_LIST is not known at compile time (unless, I
suppose, this happens to be in the main module or command prompt).
The literal is not a constant, so the compiler couldn't optimize this.


Well, according to the argument, we would be dealing with an optimizing compiler,
so presumably the compiler would see a name DEFAULT_LIST and simply compile a
call-time binding of param to whatever DEFAULT_LIST was bound to, and not bother
further. It could notice that the DEFAULT_LIST binding was still undisturbed, and
that it was to an immutable tuple with no mutable elements, which ISTM is effectively
a constant, but that analysis would be irrelevant, since the semantics would be
copying pre-existing binding (which is pretty optimized anyway).

The dict literal looks to me to be made up entirely of immutable keys and values, so
the value of that literal expression seems to me to be a constant. If you had call time
evaluation, you would be evaluating that expression each time, and the result would be
a fresh mutable dict with that constant initial value each time. ISTM that could be
optimized as param=private_dict_compile_time_created_from_literal.copy().
OTOH, if you used a pre-computed binding like DEFAULT_LIST, and wrote

SHARED_DICT = {'1': 'A', '2': 'B', '3': 'C', '4': 'D'}
def func(param=SHARED_DICT):
pass

then at def-time the compiler would not see the literal, but rather a name bound to
a mutable dict instance. The call-time effect would be to bind param to whatever SHARED_DICT
happened to be bound to, just like for DEFAULT_LIST. But the semantics, given analysis that
showed no change to the SHARED_DICT _binding_ before the func call, would be to share a single
mutable dict instance. This is unlike the semantics of

def func(param={'1': 'A', '2': 'B', '3': 'C', '4': 'D'}):
pass

which implies a fresh mutable dict instance bound to param, with the same initial value
(thus "constant" in a shallow sense at least, which in this case is fully constant).
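That copy-a-precomputed-constant optimization can be spelled out by hand in current Python; a sketch:

```python
# The constant initial value is computed once, at module load time.
_FUNC_DEFAULT = {'1': 'A', '2': 'B', '3': 'C', '4': 'D'}

def func(param=None):
    if param is None:
        # Each defaulted call receives a fresh dict with the same
        # initial value, without re-evaluating the literal.
        param = _FUNC_DEFAULT.copy()
    return param

assert func() == {'1': 'A', '2': 'B', '3': 'C', '4': 'D'}
assert func() is not func()     # equal contents, distinct mutable objects
```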

(Remember, the idea is that default parameters should be evaluated at
call time, which would require the compiler to put the evaluations
inside the function's pseudo-code. The compiler could optimize default
parameters by evaluating them at compile time: but you can only do
that with constants, for obvious reasons.)

Yes, but note the difference between evaluating a name and a fixed-value literal
expression, as noted above.
Well, I don't have any data, but my gut feeling is this would be
somewhat more than "miniscule" performance hit. Seeing how pervasive
default arguments are, I'm guessing it would be a very significant
slowdown if default arguments had to be evaluated every call.

But since I have no numbers, I won't say anything more about it.
Don't know if I got this right, but

[18:32] /d/Python23/Lib>egrep -c 'def .*=' *py |cut -d: -f 2|sum
Total = 816
[18:32] /d/Python23/Lib>egrep -c 'def ' *py |cut -d: -f 2|sum
Total = 4454

would seem to suggest pervasive ~ 816/4454
or a little less than 20%


Well, if you don't like the particular adjective I used, feel free to
substitute another. This happens a lot to me in c.l.p (see Martelli).

Sorry, I didn't mean to make anything "happen to" you, especially if it was unpleasant ;-)
I just meant to pick up on "pervasive" and "numbers" and try to provide some anecdotal data.
All I'm saying is, default arguments are common in Python code, and
slowing them down is probably going to be a significant performance
hit.

Probably in specific cases, but other cases could have no hit at all, given optimization.

(You probably underestimated a little bit anyways: some functions
don't get to the default arguments until the second line.)

Agreed.

Of course that says nothing about which are typically called in hot
loops ;-) But I think it's a bad idea as a default way of operating
anyway. You can always program call-time evaluations
explicitly. Maybe some syntactic sugar could be arranged, but I think
I would rather have some sugar for the opposite instead -- i.e.,
being able to code a block of preset locals evaluated and bound
locally like current parameter defaults, but not being part of the
call signature.
Well, personally, I don't see much use for non-constant default
arguments, as we have them now, whereas they would be useful if you
could get a fresh copy. And, frankly, the default arguments feel like
they should be evaluated at call time. Now that we have nested
scopes, there's no need for them to simulate closures. So, from a
purely language perspective, I think they ought to be evaluated at
call time.

I'd worry a bit about the meaning of names used in initialization expressions
if their values are to be looked up at call time. E.g., do you really want

a = 2
def foo(x=a): print 'x =', x
...
...
a = 'eh?'
foo()

to print 'eh?' By the time you are past a lot of ...'s, ISTM the code intent is not
so clear. But you can make dynamic access to the current a as a default explicit by

class Defer(object):
def __init__(self, lam): self.lam = lam

def foo(x=Defer(lambda:a)):
if isinstance(x, Defer): x=x.lam()
print 'x =', x

The semantics are different. I'd prefer to have the best of both worlds and be able
to do both, as now, though I might not object to some nice syntactic sugar along the
lines suggested by OP Stian Søiland. E.g., short spelling for the above Defer effect:

def foo(x:a): print 'x =', x

The only thing is, I very much doubt I'd be willing to take the
performance hit for it.

Moore+PyPy => less worry about that in future, I think.

Regards,
Bengt Richter
Jul 18 '05 #39

P: n/a
On Sat, 20 Dec 2003 06:56:07 GMT, Carl Banks <im*****@aerojockey.invalid> wrote:
[...]
I said a non-constant default argument wasn't useful. As evidence
against, you suggest that the function is "consistent."

Now, do have any evidence that non-constant, default arguments (as
they are now) are USEFUL?


From zenmeister Tim Peters' fixedpoint module (argue with that ;-):

def _tento(n, cache={}):
try:
return cache[n]
except KeyError:
answer = cache[n] = 10L ** n
return answer

That's useful, but IMO not the optimal syntax, because cache here is really not
being used as a parameter. That's why I would like a way to bind locals similarly
without being part of the calling signature. E.g.,

def _tento(n)(
# preset bindings evaluated here at def-time
cache={}
):
try:
return cache[n]
except KeyError:
answer = cache[n] = 10L ** n
return answer

Regards,
Bengt Richter
Jul 18 '05 #40

P: n/a
Bengt Richter wrote:


On Sat, 20 Dec 2003 03:55:22 GMT, Carl Banks <im*****@aerojockey.invalid> wrote:
Bengt Richter wrote:


On Sat, 20 Dec 2003 01:43:00 GMT, Carl Banks <im*****@aerojockey.invalid> wrote:

Paul Rubin wrote:
>
>
> Carl Banks <im*****@aerojockey.invalid> writes:
>> Consider something like this:
>>
>> def func(param=((1,2),(3,4),(5,6),(7,8))):
>> whatever
>>
>> Do you really want to be building a big-ass nested tuple every time
>> the function is called?
>
> Come on, the compiler can easily recognize that that list is constant.

Yes, but that doesn't account for all expensive parameters. What
about this:

DEFAULT_LIST = ((1,2),(3,4),(5,6),(7,8))

def func(param=DEFAULT_LIST):
pass

Or this:

import external_module

def func(param=external_module.create_constant_object()):
pass

Or how about this:

def func(param={'1': 'A', '2': 'B', '3': 'C', '4': 'D'}):
pass
The compiler couldn't optimize any of the above cases.

For the DEFAULT_LIST (tuple?) and that particular dict literal, why not?

Well, the value of DEFAULT_LIST is not known at compile time (unless, I
suppose, this happens to be in the main module or command prompt).
The literal is not a constant, so the compiler couldn't optimize this.


Well, according to the argument, we would be dealing with an
optimizing compiler, so presumably the compiler would see a name
DEFAULT_LIST and simply compile a call-time binding of param to
whatever DEFAULT_LIST was bound to, and not bother further. It could
notice that the DEFAULT_LIST binding was still undisturbed, and that
it was to an immutable tuple with no mutable elements, which ISTM is
effectively a constant, but that analysis would be irrelevant, since
the semantics would be copying pre-existing binding (which is pretty
optimized anyway).

The dict literal looks to me to be made up entirely of immutable
keys and values, so the value of that literal expression seems to me
to be a constant. If you had call time evaluation, you would be
evaluating that expression each time, and the result would be a
fresh mutable dict with that constant initial value each time. ISTM
that could be optimized as
param=private_dict_compile_time_created_from_literal.copy(). OTOH,
if you used a pre-computed binding like DEFAULT_LIST, and wrote

SHARED_DICT = {'1': 'A', '2': 'B', '3': 'C', '4': 'D'}
def func(param=SHARED_DICT):
pass
then at def-time the compiler would not see the literal, but rather
a name bound to a mutable dict instance. The call-time effect would
be to bind param to whatever SHARED_DICT happened to be bound to,
just like for DEFAULT_LIST. But the semantics, given analysis that
showed no change to the SHARED_DICT _binding_ before the func call,
would be to share a single mutable dict instance. This is unlike the
semantics of

def func(param={'1': 'A', '2': 'B', '3': 'C', '4': 'D'}):
pass

which implies a fresh mutable dict instance bound to param, with the
same initial value (thus "constant" in a shallow sense at least,
which in this case is fully constant).


Good analysis.

Well, personally, I don't see much use for non-constant default
arguments, as we have them now, whereas they would be useful if you
could get a fresh copy. And, frankly, the default arguments feel like
they should be evaluated at call time. Now that we have nested
scopes, there's no need for them to simulate closures. So, from a
purely language perspective, I think they ought to be evaluated at
call time.


I'd worry a bit about the meaning of names used in initialization expressions
if their values are to be looked up at call time. E.g., do you really want

a = 2
def foo(x=a): print 'x =', x
...
...
a = 'eh?'
foo()

to print 'eh?' By the time you are past a lot of ...'s, ISTM the
code intent is not so clear. But you can make dynamic access to the
current a as a default explicit by

class Defer(object):
def __init__(self, lam): self.lam = lam

def foo(x=Defer(lambda:a)):
if isinstance(x, Defer): x=x.lam()
print 'x =', x

The semantics are different. I'd prefer to have the best of both
worlds and be able to do both, as now, though I might not object to
some nice syntactic sugar along the lines suggested by OP Stian
Søiland. E.g., short spelling for the above Defer effect:

def foo(x:a): print 'x =', x


All good points; doing something like this always seems to have
further repercussions.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #41

Rainer Deyke wrote:


Carl Banks wrote:
Now, do have any evidence that non-constant, default arguments (as
they are now) are USEFUL?


def draw_pixel(x, y, color, surface=screen):
screen[y][x] = color


Was that so hard? (Although this would still work if surface was
evaluated call time.)

(I don't agree with your "consistent" theory, anyways. The function
would be treating all arguments consistently: it's just that it'd get
a fresh copy of the default arguments each call.)


A function that mutates its arguments should not be called with "fresh"
arguments, implicitly or explicitly. If the purpose of the function is to
modify its arguments, then doing so would throw away the effect of the
function. If the purpose of the function is not to modify its arguments,
then it shouldn't do so.

In the function:

def a(b=[]):
pass

b=[] is either part of the function, or part of the definition. If
it's part of the definition, it gets evaulated once, and the function
gets the same object each time. If it's part of the function, it gets
evaluated every call, and b gets a new list every time. Either way is
consistent.

Your ideas about what a function's purpose is are just not relevant to
whether the time of evaluation is consistent.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #42

P: n/a
Carl Banks wrote:
Your ideas about what a function's purpose is are just not relevant to
whether the time of evaluation is consistent.


I'm not talking about whether or not time of evaluation is consistent -
clearly either way is consistent so long as it is consistently used. I'm
saying that getting a fresh copy of default args is useless and dangerous
because it only helps with functions which shouldn't be written in the first
place.
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #43

P: n/a
Rainer Deyke wrote:


Carl Banks wrote:
Your ideas about what a function's purpose is are just not relevant to
whether the time of evaluation is consistent.


I'm not talking about whether or not time of evaluation is consistent -
clearly either way is consistent so long as it is consistently used. I'm
saying that getting a fresh copy of default args is useless and dangerous
because it only helps with functions which shouldn't be written in the first
place.

Well, ok. I don't agree that it's dangerous, and there are certainly
useful functions that modify their arguments that could benefit from a
fresh copy.

def top_secret_code(a,b,c,stub=[]):
stub.extend([f(a,b),g(b,c),g(c,a)])
return stub
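Under today's def-time semantics, that shared stub accumulates across defaulted calls, which is exactly why a fresh copy would help here; a simplified sketch (f and g are not defined in the thread, so arithmetic stands in for them):

```python
def top_secret_code(a, b, c, stub=[]):
    stub.extend([a + b, b + c, c + a])   # arithmetic standing in for f and g
    return stub

r1 = top_secret_code(1, 2, 3)
r2 = top_secret_code(1, 2, 3)
assert r1 is r2                          # both calls returned the shared default
assert r2 == [3, 5, 4, 3, 5, 4]          # results pile up call after call
```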
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #44

P: n/a
Carl Banks wrote:
Well, ok. I don't agree that it's dangerous, and there are certainly
useful functions that modify their arguments that could benefit from a
fresh copy.

def top_secret_code(a,b,c,stub=[]):
stub.extend([f(a,b),g(b,c),g(c,a)])
return stub


This is exactly the kind of function I would call inconsistent. Called with
three arguments, it creates and returns a new list. Called with four, it
modifies an existing list and returns a reference to it. I find this highly
counterintuitive. If I was a user of this function and I learned about the
three argument form first, I would expect the four argument form to leave
its fourth argument unmodified and return a new list.
--
Rainer Deyke - ra*****@eldwood.com - http://eldwood.com
Jul 18 '05 #45

P: n/a
"Donn Cave" <do**@drizzle.com> wrote in message news:<1071900300.434091@yasure>...
Quoth Jp Calderone <ex*****@intarweb.us>:
| The maybe-mutate-maybe-rebind semantics of += lead me to avoid its use
| in most circumstances.

This tacky feature certainly ought to be considered for the
chopping block in version 3, for exactly that reason.


And go back to 'someVariableName = someVariableName + 1' to do
the simple increment? The various 'assignment' operators are
popular with good reason imho.

Perhaps what should be considered is making sure that operators which
appear to be assignment operators actually (and consistently) behave as
such.
Jul 18 '05 #46

Ouch! That sounds painful! ;)
Jul 18 '05 #47
