Bytes IT Community

determining the number of output arguments

Hello,

def test(data):

    i = ?  # this is the line I have trouble with

    if i == 1: return data
    else: return data[:i]

a,b,c,d = test([1,2,3,4])

How can I set i based on the number of output arguments defined in
(a,b,c,d)?

Thank you,
Darren
Jul 18 '05 #1
66 Replies


On Sun, 14 Nov 2004 17:12:24 -0500, Darren Dale <dd**@cornell.edu>
declaimed the following in comp.lang.python:
Hello,

def test(data):

    i = ?  # this is the line I have trouble with

    if i == 1: return data
    else: return data[:i]

a,b,c,d = test([1,2,3,4])

How can I set i based on the number of output arguments defined in
(a,b,c,d)?

This is rather confusing...

What do you expect to receive for

a,b,c,d = test([1,2,3,4,5])

Note, you are only passing ONE argument into the function, and
apparently wanting a tuple in return...

Problem: in the case of mismatched lengths (4 items in
destination, and 5 in the list) you get an exception. Otherwise you
could just use

a,b,c,d = tuple([1,2,3,4])
If you are actually trying to work with an unknown number of
input arguments (rather than a single argument of a list)...
>>> def test(*data):
...     print len(data)
...
>>> test([1,2,3,4])
1
>>> test(1,2,3,4)
4

>>> def test(*data):
...     return data
...
>>> test([1,2,3,4])
([1, 2, 3, 4],)
>>> test(1,2,3,4)
(1, 2, 3, 4)

>>> def test(*data):
...     return list(data)
...
>>> test([1,2,3,4])
[[1, 2, 3, 4]]
>>> test(1,2,3,4)
[1, 2, 3, 4]

Thank you,
Darren
--
==============================================================
wl*****@ix.netcom.com  |  Wulfraed  Dennis Lee Bieber  KD6MOG
wu******@dm.net        |       Bestiaria Support Staff
==============================================================
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>

Jul 18 '05 #2

Darren Dale wrote:
Hello,

def test(data):

    i = ?  # this is the line I have trouble with

    if i == 1: return data
    else: return data[:i]

a,b,c,d = test([1,2,3,4])

How can I set i based on the number of output arguments defined in
(a,b,c,d)?


Something like this:

def test(*args, **kwargs):
    i = len(args) + len(kwargs)

should work. But note that the usage example you gave will result in i
having a value of 1 -- you're passing in a single argument (which is a
list).
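A runnable sketch of that point, in modern Python (the helper name `count_args` is mine): the count reflects what is passed *into* the call, not how many names appear on the left of an assignment.

```python
def count_args(*args, **kwargs):
    # counts the arguments received by the call itself
    return len(args) + len(kwargs)

assert count_args([1, 2, 3, 4]) == 1   # one positional argument: the list
assert count_args(1, 2, 3, 4) == 4
assert count_args(1, 2, x=3) == 3
```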

Of course, if you're always going to be passing a sequence into your
function, and you want to get the length of that sequence, then it's
pretty simple:

def test(data):
    i = len(data)
    return data[:i]

Note, however, that this function is effectively a no-op as it stands.
Presumably you're intending to do something to transform data, which may
change its length? Otherwise, it would be simpler to just modify the
list (or a copy of it) in-place in a for loop / list comp, and not worry
about the length at all.

Jeff Shannon
Technician/Programmer
Credit International
Jul 18 '05 #3

Fernando Perez <fp*******@yahoo.com> wrote:
I suspect that by playing very nasty tricks with sys._getframe(), the dis
and the inspect modules, you probably _could_ get to this information, at
least if the caller is NOT a C extension module. But I'm not even 100%
sure this works, and it would most certainly be the kind of black magic I'm
sure you are not asking about. But given the level of expertise here, I
better cover my ass ;-)


Yep, that's the cookbook recipe Jp was mentioning -- Sami Hangaslammi's
recipe 284742, to be precise. Yep, it _is_ going into the printed 2nd
edition (which I'm supposed to be working on right now -- deadline
closing in, help, help!-).
Alex
Jul 18 '05 #4

Alex Martelli wrote:
Fernando Perez <fp*******@yahoo.com> wrote:
I suspect that by playing very nasty tricks with sys._getframe(), the dis
and the inspect modules, you probably _could_ get to this information, at
least if the caller is NOT a C extension module. But I'm not even 100%
sure this works, and it would most certainly be the kind of black magic I'm
sure you are not asking about. But given the level of expertise here, I
better cover my ass ;-)


Yep, that's the cookbook recipe Jp was mentioning -- Sami Hangaslammi's
recipe 284742, to be precise. Yep, it _is_ going into the printed 2nd
edition (which I'm supposed to be working on right now -- deadline
closing in, help, help!-).


Well, I feel pretty good now: I didn't see Jp's mention of this, and just
guessed it should be doable with those three tools. I just looked it up, and
it seems it's exactly what I had in mind :) Cute hack, but I tend to agree
with Scott Daniels' comment that this kind of cleverness tends to promote
rather unreadable code. Maybe I just haven't seen a good use for it, but I
think I'd rather stick with more explicit mechanisms than this.

Anyway, is it true that this will only work for non-extension code? If you are
being called from a C extension, dis & friends are toast, no?

Cheers,

f

Jul 18 '05 #5

Fernando Perez <fp*******@yahoo.com> wrote:
Yep, that's the cookbook recipe Jp was mentioning -- Sami Hangaslammi's
recipe 284742, to be precise. Yep, it _is_ going into the printed 2nd
edition (which I'm supposed to be working on right now -- deadline
closing in, help, help!-).
Well, I feel pretty good now: I didn't see Jp's mention of this, and just
guessed it should be doable with those three tools. I just looked it up, and
it seems it's exactly what I had in mind :) Cute hack, but I tend to agree
with Scott Daniels' comment that this kind of cleverness tends to promote
rather unreadable code. Maybe I just haven't seen a good use for it, but I
think I'd rather stick with more explicit mechanisms than this.


Yeah, but "once and only once" is a great principle of programming. Any
time you have to say something _TWICE_ there's something wrong going on.

So,

a, b, c, d = lotsa[:4]

_should_ properly give the impression of a code smell, if your "once and
only once" sense is finely tuned. What's the business of that ':4' on
the RHS? Showing the compiler that you can count correctly?! You're
having to tell twice that you're getting four items into separate
variables, once by listing exactly four variables on the LHS, and
another time by that ':4' on the RHS. IMHO, that's just as bogus as
struct.unpack's limitation of not having any way to indicate explicitly
'and all the rest of the bytes goes here', and for similar reasons.

I think it would be better to have a way to say 'and all the rest'.
Lacking that, some automated way to count how many items are being
unpacked on the LHS is probably second-best.
Anyway, is it true that this will only work for non-extension code? If
you are being called from a C extension, dis & friends are toast, no?


Yep, whenever your function isn't being called as the only item on the
RHS of a multiple assignment, counting how many items are being unpacked
on the LHS of that nonexistent or inapplicable multiple assignment is
right out. Presumably, any way to count the number of items in this
fashion will need a way to indicate "not applicable", though it's not
obvious whether raising an exception, or returning a clearly bogus value
such as 0, is most useful.

Another case where a specific number of items is requested, which is not
(syntactically speaking) a multiple assignment, is assignment to an
_extended_ slice of, e.g., a list (only; assignment to a slice with a
stride of 1 is happy with getting whatever number of items are coming).
I don't particularly LIKE writing:
L[x:y:z] = len(L[x:y:z]) * [foo]
i.e. having to extract the extended slice first, on the RHS, just to
gauge how many times I must repeat foo. However, if I squint in just
the right way, I _can_ try to convince myself that this _isn't_ really
violating "once and only once"... and I do understand how hard it would
be to allow a 'how many items are needed' function to cover this
case:-(.
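A concrete modern-Python illustration of the length requirement being discussed: extended (strided) slice assignment demands an exact item count, while a plain stride-1 slice accepts any number of incoming items.

```python
L = list(range(10))

# an extended slice must receive exactly as many items as it covers
try:
    L[1:-1:2] = [0]
except ValueError:
    pass  # wrong length: raises

L[1:-1:2] = len(L[1:-1:2]) * [0]   # rebinds L[1], L[3], L[5], L[7]
assert L == [0, 0, 2, 0, 4, 0, 6, 0, 8, 9]

# a stride-1 slice is happy with whatever number of items is coming
L[2:4] = [0]
assert L == [0, 0, 0, 4, 0, 6, 0, 8, 9]
```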
Alex
Jul 18 '05 #6


al*****@yahoo.com (Alex Martelli) wrote:

Fernando Perez <fp*******@yahoo.com> wrote:
Yep, that's the cookbook recipe Jp was mentioning -- Sami Hangaslammi's
recipe 284742, to be precise. Yep, it _is_ going into the printed 2nd
edition (which I'm supposed to be working on right now -- deadline
closing in, help, help!-).
Well, I feel pretty good now: I didn't see Jp's mention of this, and just
guessed it should be doable with those three tools. I just looked it up, and
it seems it's exactly what I had in mind :) Cute hack, but I tend to agree
with Scott Daniels' comment that this kind of cleverness tends to promote
rather unreadable code. Maybe I just haven't seen a good use for it, but I
think I'd rather stick with more explicit mechanisms than this.


Yeah, but "once and only once" is a great principle of programming. Any
time you have to say something _TWICE_ there's something wrong going on.

So,

a, b, c, d = lotsa[:4]

_should_ properly give the impression of a code smell, if your "once and
only once" sense is finely tuned. What's the business of that ':4' on
the RHS? Showing the compiler that you can count correctly?! You're
having to tell twice that you're getting four items into separate
variables, once by listing exactly four variables on the LHS, and
another time by that ':4' on the RHS. IMHO, that's just as bogus as
struct.unpack's limitation of not having any way to indicate explicitly
'and all the rest of the bytes goes here', and for similar reasons.


The slicing on the right is not so much to show the compiler that you
know how to count, it is to show the runtime that you are looking for
a specified slice of lotsa. How would you like the following two cases
to be handled by your desired Python, and how would that make more sense
than what is done now?

a,b,c = [1,2,3,4]
a,b,c = [1,2]

I think it would be better to have a way to say 'and all the rest'.
Lacking that, some automated way to count how many items are being
unpacked on the LHS is probably second-best.
Is it worth a keyword, or would a sys._getframe()/bytecode hack be
sufficient? In the latter case, I'm sure you could come up with such a
mechanism, and when done, maybe you want to offer it up as a recipe in
the cookbook *wink*.

Another case where a specific number of items is requested, which is not
(syntactically speaking) a multiple assignment, is assignment to an
_extended_ slice of, e.g., a list (only; assignment to a slice with a
stride of 1 is happy with getting whatever number of items are coming).
I don't particularly LIKE writing:
L[x:y:z] = len(L[x:y:z]) * [foo]

I much prefer...

for i in xrange(x, y, z):
    L[i] = foo

But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).

- Josiah

Jul 18 '05 #7

On Mon, 15 Nov 2004 19:44:06 -0800, Josiah Carlson wrote:
I think it would be better to have a way to say 'and all the rest'.
Lacking that, some automated way to count how many items are being
unpacked on the LHS is probably second-best.


Is it worth a keyword, or would a sys._getframe()/bytecode hack be
sufficient?


Who needs a keyword?

a, b, *c = [1, 2, 3, 4]
a, b, *c = [1, 2]

In the latter case I'd expect c to be the empty tuple.

Clear parallels to function syntax:
>>> def f(a, b, *c):
...     print c
...
>>> f(1, 2, 3, 4)
(3, 4)
>>> f(1, 2)
()

No parallel for **, but... *shrug* who cares?
Jul 18 '05 #8

Josiah Carlson <jc******@uci.edu> wrote:
a, b, c, d = lotsa[:4]

_should_ properly give the impression of a code smell, if your "once and
only once" sense is finely tuned. What's the business of that ':4' on
the RHS? Showing the compiler that you can count correctly?! You're ...
The slicing on the right is not so much to show the compiler that you
know how to count, it is to show the runtime that you are looking for
a specified slice of lotsa. How would you like the following two cases
to be handled by your desired Python, and how would that make more sense
than what is done now?

a,b,c = [1,2,3,4]
a,b,c = [1,2]
I would like to get exceptions in these cases, which, being exactly what
IS done now, makes exactly as much sense as itself. Note that there is
no indicator in either of these forms that non-rigid unpacking is
desired. Assuming the indicator for 'and all the rest' were a prefix
star, then:

a, b, *c = [1, 2, 3, 4]
a, b, *c = [1, 2]

should both set a to 1, b to 2, and c respectively to [3, 4] and []. Of
course there would still be failing cases:

a, b, *c = [1]

this should still raise -- 'a, b, *c' needs at least 2 items to unpack.
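For the record, this exact semantics later landed in the language: Python 3.0's PEP 3132 ("Extended Iterable Unpacking") behaves precisely as described here, binding the starred name to a list.

```python
a, b, *c = [1, 2, 3, 4]
assert (a, b, c) == (1, 2, [3, 4])

a, b, *c = [1, 2]
assert c == []

try:
    a, b, *c = [1]  # still raises: needs at least 2 items to unpack
except ValueError:
    pass
```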

I think it would be better to have a way to say 'and all the rest'.
Lacking that, some automated way to count how many items are being
unpacked on the LHS is probably second-best.


Is it worth a keyword, or would a sys._getframe()/bytecode hack be
sufficient? In the latter case, I'm sure you could come up with such a
mechanism, and when done, maybe you want to offer it up as a recipe in
the cookbook *wink*.


Two recipes in the 2nd ed of the CB can be combined to that effect
(well, nearly; c becomes an _iterator_ over 'all the rest'), one by
Brett Cannon and one by Sami Hangaslammi. Not nearly as neat and clean
as if the language did it, of course.

Another case where a specific number of items is requested, which is not
(syntactically speaking) a multiple assignment, is assignment to an
_extended_ slice of, e.g., a list (only; assignment to a slice with a
stride of 1 is happy with getting whatever number of items are coming).
I don't particularly LIKE writing:
L[x:y:z] = len(L[x:y:z]) * [foo]


I much prefer...

for i in xrange(x, y, z):
    L[i] = foo


Not the same semantics, in the general case. For example:

L[1:-1:2] = ...

rebinds (len(L)/2)-1 items; your version rebinds no items, since
xrange(1, -1, 2) is empty. To simulate the effect that assigning to an
extended slice has, you have to take a very different tack:

for i in xrange(*slice(x, y, z).indices(len(L))):
    L[i] = foo

and that's still quite a bit less terse and elegant, as is usually the
case for fussy index-based looping whenever it's decently avoidable.
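A modern-Python check of that equivalence; note that `slice.indices` returns a `(start, stop, step)` tuple, which must be splatted into `range`.

```python
L = list(range(10))
for i in range(*slice(1, -1, 2).indices(len(L))):
    L[i] = 0

# same effect as assigning to the extended slice directly
M = list(range(10))
M[1:-1:2] = len(M[1:-1:2]) * [0]
assert L == M == [0, 0, 2, 0, 4, 0, 6, 0, 8, 9]
```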
But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).


It may be that in your specific line of work there is no opportunity or
usefulness for the stride argument. Statistically, however, it's
somewhat more likely that you're not taking advantage of the
opportunities because, given your dislike, you don't even notice them --
as opposed to the opportunities not existing at all, or you noticing
them, evaluating them against the lower-level index-based-looping
alternatives, and selecting the latter. If you think that extended
slices are sort of equivalent to xrange, as above shown, for example,
it's not surprising that you're missing their actual use cases.
Alex
Jul 18 '05 #9


Jeremy Bowers <je**@jerf.org> wrote:

On Mon, 15 Nov 2004 19:44:06 -0800, Josiah Carlson wrote:
I think it would be better to have a way to say 'and all the rest'.
Lacking that, some automated way to count how many items are being
unpacked on the LHS is probably second-best.
Is it worth a keyword, or would a sys._getframe()/bytecode hack be
sufficient?


Who needs a keyword?

a, b, *c = [1, 2, 3, 4]
a, b, *c = [1, 2]

I'll post the same thing that was posted by James Knight on python-dev
about this precise syntax...
James Knight wrote:
Two threads on the topic, with rejections by Guido:
http://mail.python.org/pipermail/pyt...er/030349.html
http://mail.python.org/pipermail/pyt...st/046684.html

Guido's rejection from
http://mail.python.org/pipermail/pyt...t/046794.html:
For the record, let me just say that I'm -1 on adding this feature
now. It's only a minor convenience while it's a fairly big change to
the parser and code generator, but most of all, Python's growth needs
to be reduced. If we were to add every "nice" feature that was
invented (or present in other languages), Python would lose one of its
main charms, which is that it is really a rather simple language that
can be learned and put to production quickly.

So while I appreciate the idea (which BTW has been proposed before) I
think it's not worth adding now, and if you don't hear from me again
on this topic it's because I haven't changed my mind...

Guido hasn't updated his stance, so don't hold your breath.

- Josiah

Jul 18 '05 #10

Alex Martelli wrote:
Josiah Carlson <jc******@uci.edu> wrote:

But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).


It may be that in your specific line of work there is no opportunity or
usefulness for the stride argument. Statistically, however, it's


Indeed, those of us from the Numeric/numarray side of things rely every day on
extended slicing, and consider it an absolute necessity for writing compact,
clean, readable numerical python code.

It's no surprise that this syntax (as far as I know) originated from the needs
of the scientific computing community, a group where python is picking up users
every day.

Cheers,

f

Jul 18 '05 #11

Fernando Perez <fp*******@yahoo.com> wrote:
Alex Martelli wrote:
Josiah Carlson <jc******@uci.edu> wrote:

But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).


It may be that in your specific line of work there is no opportunity or
usefulness for the stride argument. Statistically, however, it's


Indeed, those of us from the Numeric/numarray side of things rely every
day on extended slicing, and consider it an absolute necessity for writing
compact, clean, readable numerical python code.

It's no surprise that this syntax (as far as I know) originated from the
needs of the scientific computing community, a group where python is
picking up users every day.


Definitely no suprise to me -- although I didn't come to Python by way
of scientific programming, I do have a solid background in that field,
and the concept of addressing an array with a stride is very obvious to
me. Indeed, I was disappointed, early on, that lists didn't support
extended slicing, and very happy when we were able to add it to them.
Alex

Jul 18 '05 #12

On Mon, 15 Nov 2004 23:43:55 -0800, Josiah Carlson wrote:
Guido hasn't updated his stance, so don't hold your breath.


I'm not in favor of it either. I just think that *if* it were going in, it
shouldn't be a "keyword".

I think variable size tuple returns are a code smell. Now that I think of
it, *tuple* returns are a code smell. (Remember, "code smells" are strong
hints of badness, not proof.) Of the easily thousands of functions in
Python I've written, less than 10 have returned a tuple that was expected
to be unpacked.

Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much. One of
the instances I do have is a tree iterator that on every "next()" returns
a depth *and* the current node, because the code to track the depth based
on the results of running the iterator is better kept in the iterator than
in the many users of that iterator. But I don't like it; I'd rather make
it a property of the iterator itself or something, but there isn't a
code-smell-free way to do that, either, as the iterator is properly a
method of a certain class, and trying to pull it out into its own class
would entail lots of ugly accessing the inner details of another class.
Jul 18 '05 #13

Jeremy Bowers wrote:
the instances I do have is a tree iterator that on every "next()" returns
a depth and the current node, because the code to track the depth based
on the results of running the iterator is better kept in the iterator than
in the many users of that iterator. But I don't like it; I'd rather make
it a property of the iterator itself or something, but there isn't a
code-smell-free way to do that, either, as the iterator is properly a
method of a certain class, and trying to pull it out into its own class
would entail lots of ugly accessing the inner details of another class.


You could yield Indent/Dedent (possibly the same class) instances whenever
the level changes - provided that the length of sequences of nodes with the
same depth does not approach one.

Peter
Jul 18 '05 #14

On Tue, 16 Nov 2004 17:58:52 +0100, Peter Otten wrote:
You could yield Indent/Dedent (possibly the same class) instances whenever
the level changes - provided that the length of sequences of nodes with the
same depth does not approach one.


In this case, the depth of the node is multiplied by some indentation
parameter, or some similar operation, and it occurs in three or four places, so
the duplication of the

if token == INDENT:
    depth += 1
elif token == DEDENT:
    depth -= 1
    if depth == 0:
        abort or something

three or four times was starting to smell itself.

Jul 18 '05 #15


Jeremy Bowers <je**@jerf.org> wrote:

On Mon, 15 Nov 2004 23:43:55 -0800, Josiah Carlson wrote:
Guido hasn't updated his stance, so don't hold your breath.
I'm not in favor of it either. I just think that *if* it were going in, it
shouldn't be a "keyword".

I think variable size tuple returns are a code smell. Now that I think of
it, *tuple* returns are a code smell. (Remember, "code smells" are strong
hints of badness, not proof.) Of the easily thousands of functions in
Python I've written, less than 10 have returned a tuple that was expected
to be unpacked.


I agree with you on the one hand (I also think that variable-length
tuple returns are smelly), and have generally returned tuples of the
same length whenever possible. However, I can't agree with you on
general tuple returns. Why? For starters, dict.[iter]items(),
struct.unpack() and various client socket libraries in the standard
library that return both status codes and status messages/data on
command completion (smtplib, nntplib, imaplib).
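`struct.unpack` in particular always hands back a tuple, even for a single field:

```python
import struct

# two little-endian shorts in, a 2-tuple out
assert struct.unpack("<hh", struct.pack("<hh", 1, 2)) == (1, 2)

# even a single field comes back as a 1-tuple
assert struct.unpack("<h", struct.pack("<h", 7)) == (7,)
```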

Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much. One of
the instances I do have is a tree iterator that on every "next()" returns
a depth *and* the current node, because the code to track the depth based
on the results of running the iterator is better kept in the iterator than
in the many users of that iterator. But I don't like it; I'd rather make
it a property of the iterator itself or something, but there isn't a
code-smell-free way to do that, either, as the iterator is properly a
method of a certain class, and trying to pull it out into its own class
would entail lots of ugly accessing the inner details of another class.


The real question is whether /every/ return of more than one item
deserves to have its own non-tuple instance, and whether one really
wants the called function to define names for attributes on that
returned instance. Me, I'm leaning towards no. Two, three or even
four-tuple returns, to me, seem reasonable, and in the case of struct,
whatever suits the program/programmer. Anything beyond that should
probably be a class, but I don't think that the Python language should
artificially restrict itself when common sense would keep most people
from:
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z = range(26)

...or at least I would hope.

- Josiah

Jul 18 '05 #16


al*****@yahoo.com (Alex Martelli) wrote:

Josiah Carlson <jc******@uci.edu> wrote:
a, b, c, d = lotsa[:4]

_should_ properly give the impression of a code smell, if your "once and
only once" sense is finely tuned. What's the business of that ':4' on
the RHS? Showing the compiler that you can count correctly?! You're ...
The slicing on the right is not so much to show the compiler that you
know how to count, it is to show the runtime that you are looking for
a specified slice of lotsa. How would you like the following two cases
to be handled by your desired Python, and how would that make more sense
than what is done now?

a,b,c = [1,2,3,4]
a,b,c = [1,2]
I would like to get exceptions in these cases, which, being exactly what
IS done now, makes exactly as much sense as itself. Note that there is
no indicator in either of these forms that non-rigid unpacking is
desired. Assuming the indicator for 'and all the rest' were a prefix
star, then:

a, b, *c = [1, 2, 3, 4]
a, b, *c = [1, 2]

should both set a to 1, b to 2, and c respectively to [3, 4] and []. Of
course there would still be failing cases:

a, b, *c = [1]

this should still raise -- 'a, b, *c' needs at least 2 items to unpack.


The only limitation right now is Guido. That is, you need to convince
Guido, and likely get the syntax implemented. See James Knight's post
on 11/12/2004 in python-dev (or my recent quoting here) with a quote
from Guido in regards to this syntax.

I think it would be better to have a way to say 'and all the rest'.
Lacking that, some automated way to count how many items are being
unpacked on the LHS is probably second-best.
I have another response to this...what would be shorter, using some
automated 'how many items are on the left' discovery mechanism, or just
putting in the ':4'?

Certainly some symbol could be reused for something like...
a,b,c = L[:%] #first part
a,b,c = L[%:] #last part, automatically negating the value

But is something like the following even desirable?
a,b,c = L[i:%:j] #automatically calculate the ending index
a,b,c = L[%:i:j] #automatically calculate the start index

Regardless, I'm not sure I particularly like any symbol that could be
placed in either set of slices. If it is not some single-character
symbol, then it is actually shorter to just count them.

Is it worth a keyword, or would a sys._getframe()/bytecode hack be
sufficient? In the latter case, I'm sure you could come up with such a
mechanism, and when done, maybe you want to offer it up as a recipe in
the cookbook *wink*.


Two recipes in the 2nd ed of the CB can be combined to that effect
(well, nearly; c becomes an _iterator_ over 'all the rest'), one by
Brett Cannon and one by Sami Hangaslammi. Not nearly as neat and clean
as if the language did it, of course.


What is to stop the recipes from wrapping that final iterator with a
list() call?

Another case where a specific number of items is requested, which is not
(syntactically speaking) a multiple assignment, is assignment to an
_extended_ slice of, e.g., a list (only; assignment to a slice with a
stride of 1 is happy with getting whatever number of items are coming).
I don't particularly LIKE writing:
L[x:y:z] = len(L[x:y:z]) * [foo]


I much prefer...

for i in xrange(x, y, z):
    L[i] = foo


Not the same semantics, in the general case. For example:

L[1:-1:2] = ...


And I would just use...

for i in xrange(1, len(L)-1, 2):
    L[i] = ...

As in anything, if there is more than one way to do something, at least
a bit of translation is required.

rebinds (len(L)/2)-1 items; your version rebinds no items, since
xrange(1, -1, 2) is empty. To simulate the effect that assigning to an
extended slice has, you have to take a very different tack:

for i in xrange(*slice(x, y, z).indices(len(L))):
    L[i] = foo

and that's still quite a bit less terse and elegant, as is usually the
case for fussy index-based looping whenever it's decently avoidable.


You know, terse != elegant. While extended slice assignments are terse,
I would not consider them elegant. Elegant is quicksort, the
Floyd-Warshall algorithm for APSP, Borůvka's MST algorithm, etc. Being
able to say "from here to here with this stride", that's a language
convenience, and its use is on par with using fcn(*args, **kwargs).
Have I used it? Sure, a few times. My most memorable experience is
using a similar functionality in C with MPI. Unbelievably useful for
chopping up data for distribution and reintegration. Was it elegant, I
wouldn't ever make such a claim, it was a library feature, and extended
slice assignments in Python are a language feature. A useful language
feature for a reasonably sized subset of the Python community certainly,
but elegant, not so much.
Using an imperative programming style with Python (using indices to
index a sequence), I thought, was to be encouraged; why else would
xrange and range be offered? Oh, I know, because people use 'fussy
index-based looping' in C/C++, Java, pascal...and don't want to change
the way they develop. Or maybe because not every RHS is a sequence, and
sequence indexing is the more general case which works for basically
everything (with minimal translation).

But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).


It may be that in your specific line of work there is no opportunity or
usefulness for the stride argument. Statistically, however, it's
somewhat more likely that you're not taking advantage of the
opportunities because, given your dislike, you don't even notice them --
as opposed to the opportunities not existing at all, or you noticing
them, evaluating them against the lower-level index-based-looping
alternatives, and selecting the latter. If you think that extended
slices are sort of equivalent to xrange, as above shown, for example,
it's not surprising that you're missing their actual use cases.


It is the case that I have rarely needed to replace non-contiguous
sections of lists. It has been a few years since I used MPI and the
associated libraries. Recently in those cases that I have found such a
need, I find using xrange to be far more readable. It's an opinion thing,
and we seem to differ in the case of slice assignments (and certainly a
few other things I am aware of). How about we agree to disagree?

I have also found little use of them in the portions of the standard
library that I peruse on occasion, which in my opinion, defines what it
means to be Pythonic (though obviously many extended slice usages are
application-specific).
- Josiah

Jul 18 '05 #17

Jeremy Bowers wrote:
On Tue, 16 Nov 2004 17:58:52 +0100, Peter Otten wrote:
You could yield Indent/Dedent (possibly the same class) instances
whenever the level changes - provided that the length of sequences of
nodes with the same depth does not approach one.


In this case, the depth of the node is multiplied by some indentation
parameter, or some similar operation, and it occurs in three or four places, so
the duplication of the

if token == INDENT:
    depth += 1
elif token == DEDENT:
    depth -= 1
    if depth == 0:
        abort or something

three or four times was starting to smell itself.


I guess I don't understand, so I wrote a simple example:
DEDENT = object()
INDENT = object()

def _walk(items):
    for item in items:
        if isinstance(item, list):
            yield INDENT
            for child in _walk(item):
                yield child
            yield DEDENT
        else:
            yield item

class Tree(object):
    def __init__(self, data):
        self.data = data
    def __iter__(self):
        for item in _walk(self.data):
            yield item

class WalkBase(object):
    def __call__(self, tree):
        dispatch = {
            DEDENT: self.dedent,
            INDENT: self.indent,
        }
        default = self.default
        for item in tree:
            dispatch.get(item, default)(item)

class PrintIndent(WalkBase):
    def __init__(self):
        self._indent = ""
    def indent(self, node):
        self._indent += "    "
    def dedent(self, node):
        self._indent = self._indent[:-4]
    def default(self, node):
        print self._indent, node

class PrintXml(WalkBase):
    def indent(self, node):
        print "<node>"
    def dedent(self, node):
        print "</node>"
    def default(self, node):
        print "<leaf>%s</leaf>" % node
    def __call__(self, tree):
        print "<tree>"
        super(PrintXml, self).__call__(tree)
        print "</tree>"

if __name__ == "__main__":
    tree = Tree([
        0,
        [1, 2, 3],
        [4,
         [5, 6, 7],
         [8, [9, 10]]],
        [11],
        12,
    ])
    for i, Class in enumerate([PrintIndent, PrintXml]):
        print "%d " % i * 5
        Class()(tree)
I think taking actions on indent/dedent "events" is easier and simpler than
keeping track of a numerical depth value, and at least the PrintXml example
would become more complicated if you wanted to infer the beginning/end of a
level from the change in the depth.
I do check the depth level twice (isinstance(item, list) and
dispatch.get()), but I think the looser coupling is worth it.
If you are looking for the cool version of such a dispatch mechanism,
Phillip J. Eby's article

http://peak.telecommunity.com/DevCen...sitorRevisited

(found in the Daily Python URL) might be interesting.

Peter



Jul 18 '05 #18

P: n/a
Greg Ewing wrote:
Jeremy Bowers wrote:
Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much.

While I suspect you may be largely right, I
find myself wondering why this should be so.


*Is* it largely right? I don't think so. As you said, there doesn't seem to be
anything "wrong" with passing multiple pieces of data _into_ a function, so why
should we assume that complex data passing should be so one way?

One of my biggest pet peeves about programming in C, for example, is that you
are often forced to wrap stuff up into a structure just to pass data back and
forth - you create the structure, populate it, send it, and then pull out the data.

In many, many cases this is nothing more than poor-man's tuple unpacking, and
you end up with lots of extra work and lots of one-off structures. Also annoying
is the use of "out" parameters, which is again basically manual tuple unpacking.

Python isn't too different from C wrt deciding when to move from "bare"
parameters to a structure or object - is the number of parameters becoming
cumbersome? are the interrelationships becoming complex? will I need to use
those same parameters as a group elsewhere? etc. The difference is that Python
facilitates more natural data passing on the return.

A function like divmod is a prime example; rather than having

div = 0
mod = 0
divmod(x,y, &div, &mod)

we instead have the much more elegant

div, mod = divmod(x,y)

We can definitely come up with some hints or guidelines ("if your function
returns 10 parameters, that's probably bad" or "if the parameters are tightly
coupled and/or are often used & passed along together to different functions,
you probably should wrap them into an object"), but I don't think returning
tuples is _generally_ a sign of anything good or bad.
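To make the contrast concrete, here is a small sketch of my own in the same spirit as divmod (the function name is illustrative, not from the post):

```python
def minmax(seq):
    """Compute two related results and hand them back as one tuple."""
    lo = hi = seq[0]
    for x in seq[1:]:
        if x < lo:
            lo = x
        elif x > hi:
            hi = x
    return lo, hi

# Unpacked naturally at the call site -- no out-parameters, no one-off struct.
low, high = minmax([3, 1, 4, 1, 5, 9, 2, 6])
```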

-Dave
Jul 18 '05 #19

On Wed, 17 Nov 2004 16:49:03 +1300, Greg Ewing
<gr**@cosc.canterbury.ac.nz> wrote:
Jeremy Bowers wrote:
Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much.


While I suspect you may be largely right, I
find myself wondering why this should be so. We
don't seem to have any trouble with multiple inputs
to a function, so why should multiple outputs be
a bad thing? What is the reason for this asymmetry?


If there is an asymmetry, it is in *favor* of returning tuples for
multiple outputs. Using a tuple removes some of the coupling between
the caller and the function. Note that the function signature defines
names for its internal use, and the caller is not required to know the
internal names (unless kw parameter passing is used). Forcing names on
the output introduces *more* coupling, and makes the design less
flexible.

In practical terms, it's important to note that generic utility
functions often do not return a "full class", but just a bunch of
results that may or not be stored into a single class. Requiring that
a class to be defined just to return a bunch of data is overkill.

Finally, I assume that it's considered standard practice in Python to
use tuples to store arbitrary collections of data, pretty much like C
structs. The only difference is that the members are not named (and
thus loosely coupled). This should not be a big deal for *most*
situation. The structure of the return tuple can be documented in the
doc string. For complex structures, returning a dict is good
possibility (although in this case the fact that the names are defined
introduces some degree of coupling). Of course, if the case is complex
enough, declaring a class is the way to go, but then, it's a design
decision.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #20

In article <ma**************************************@python.o rg>,
Dave Brueck <da**@pythonapocrypha.com> wrote:
Greg Ewing wrote:
Jeremy Bowers wrote:
Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much.

While I suspect you may be largely right, I
find myself wondering why this should be so.


*Is* it largely right? I don't think so. As you said, there doesn't seem to be
anything "wrong" with passing multiple pieces of data _into_ a function, so why
should we assume that complex data passing should be so one way?


What's up with keyword arguments, then? I don't think you can argue
that on one hand keyword arguments are a valuable feature, but plain
by-order tuples are a totally satisfactory interface on the other hand.
One of my biggest pet peeves about programming in C, for example, is that you
are often forced to wrap stuff up into a structure just to pass data back and
forth - you create the structure, populate it, send it, and then pull out the
data.

In many, many cases this is nothing more than poor-man's tuple unpacking, and
you end up with lots of extra work and lots of one-off structures. Also
annoying
is the use of "out" parameters, which is again basically manual tuple
unpacking.


Take stat(), which for those who haven't used it reports a number
of characteristics of a file. It's not the ideal example, because
it's big (10 items) and it's already "fixed" (has attributes like
a class instance.) But its worst problem has nothing to do with
the size, or whether or not the items can be referred to by name.
As long as they can be referred to by position, we're stuck with
the exact same items on all platforms, in all situations, for the
rest of eternity. Whether some of them might not really be supported
on some platforms, or it omits some useful values on others. It's
not very flexible. C stat(2) doesn't have this problem.
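A quick sketch (mine, using a modern Python's os and tempfile modules) of the positional lock-in being described:

```python
import os
import tempfile

# os.stat's result is a fixed-length record: any caller that indexes it
# by position locks in the exact field layout for good.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
st = os.stat(f.name)
os.unlink(f.name)
assert st[6] == st.st_size == 5   # the same field, by position and by name
```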

I don't know if I'd go along with "largely right", but it could
fall somewhere between "some truth in it" and "completely whacked."

Donn Cave, do**@u.washington.edu
Jul 18 '05 #21

Greg Ewing:
Maybe things would be better if we had "dict unpacking":

a, c, b = {'a': 1, 'b': 2, 'c': 3}

would give a == 1, c == 3, b == 2. Then we could
accept outputs by keyword as well as inputs...


Tuple returns do seem trickier than multiple arguments, partly due to
being more novel for me and partly because the function definition does not
document the tuple in code, although there is often a comment. Perhaps
something like:

def Transform(filename) -> (share, permissions, lock):
....
return (s, p, l)
....
(s=share, p=permissions) = Transform(name)

Neil
Jul 18 '05 #22

On Wed, 17 Nov 2004 16:49:03 +1300, Greg Ewing
<gr**@cosc.canterbury.ac.nz> wrote:
Maybe things would be better if we had "dict unpacking":

a, c, b = {'a': 1, 'b': 2, 'c': 3}

would give a == 1, c == 3, b == 2. Then we could
accept outputs by keyword as well as inputs...


Dicts are not the best way to make it, because they are not ordered
:-) But perhaps we could have something like "named tuples"; immutable
objects, like tuples, with the property that a unique name can be
associated to every item. Couple it with a decorator, and it can be
written like this:

@returns('year','month','day')
def today():
...
return year, month, day

The return of this function would be a 'named tuple'. It could be
treated like a sequence (using __getitem__), or like a dict (using
__getattr__). This way, we could have the best of both worlds; the
simplicity of tuples and the convenience of named attribute access.
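A rough sketch of how such a decorator could behave (all names here, including returns and _Named, are hypothetical, and the machinery is mine, not part of the proposal):

```python
class _Named(tuple):
    """Tuple whose items can also be read as attributes."""
    names = ()
    def __getattr__(self, name):
        try:
            return self[type(self).names.index(name)]
        except ValueError:
            raise AttributeError(name)

def returns(*names):
    """Hypothetical decorator: give the function's result tuple named items."""
    def deco(func):
        cls = type(func.__name__ + '_result', (_Named,), {'names': names})
        def wrapper(*args, **kwargs):
            return cls(func(*args, **kwargs))
        return wrapper
    return deco

@returns('year', 'month', 'day')
def today():
    return 2004, 11, 17

t = today()
assert t == (2004, 11, 17)   # still behaves as a plain tuple
assert t.month == 11         # but items are also reachable by name
```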

(warning: now I am going to give myself enough rope to hang everyday
in my life... and please, flames off :-)

What if 'named tuples' were supported natively? Entering into pre-PEP
mode (a really dangerous thing to do), the idea is that a sequence
could have a __names__ property, that would contain a tuple of names.
On __set__, this property would automatically build a hash using the
names & and the tuple contents. On __get__, it would simply return a
tuple. (I think that it is better to expose the __names__ attribute as
a tuple, and not as a dict; it's clean, more tuple-like, and hides a
lot of the complexity).

(BTW, I still think that using tuples is convenient, flexible, and as
loosely coupled as it can get. But having named access is also
convenient sometimes).

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #23

Donn Cave wrote:
In article <ma**************************************@python.o rg>,
Dave Brueck <da**@pythonapocrypha.com> wrote:
Greg Ewing wrote:
Jeremy Bowers wrote:
Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much.
While I suspect you may be largely right, I
find myself wondering why this should be so.


*Is* it largely right? I don't think so. As you said, there doesn't seem to be
anything "wrong" with passing multiple pieces of data _into_ a function, so why
should we assume that complex data passing should be so one way?

What's up with keyword arguments, then? I don't think you can argue
that on one hand keyword arguments are a valuable feature, but plain
by-order tuples are a totally satisfactory interface on the other hand.


I don't think anybody is putting forth the opinion that by-order tuples are
totally satisfactory. I just disagree that returning a tuple is generally a sign
of something wrong with your function.

-Dave
Jul 18 '05 #24

On Wed, 17 Nov 2004 19:46:02 -0200, Carlos Ribeiro <ca********@gmail.com> wrote:
The return of this function would be a 'named tuple'. It could be
treated like a sequence (using __getitem__), or like a dict (using
__getattr__). This way, we could have the best of both worlds; the
simplicity of tuples and the convenience of named attribute access.


Well.. I think I should have googled for it *before* hitting the send button :-)

There are a few recipes in the Python Cookbook to deal with named
tuples. I've collected some pointers for those who may be interested
into this topic:

* Tuples with Named Elements via Spawning -- Derrick Wallace
http://aspn.activestate.com/ASPN/Coo.../Recipe/303770

* Tuples with named elements -- Andrew Durdin
http://aspn.activestate.com/ASPN/Coo.../Recipe/303439

* super tuples -- Gonçalo Rodrigues
http://aspn.activestate.com/ASPN/Coo.../Recipe/218485

* Tuples with named elements - using metaclasses -- Andrew Durdin
http://aspn.activestate.com/ASPN/Coo.../Recipe/303481

* Just van Rossum's NamedTuple
http://just.letterror.com/ltrwiki/Ju...m_2fNamedTuple

* Christos Georgiou -- an old reference to a similar problem (named
structs) on Google Groups
http://groups.google.com/groups?selm...bo6k%404ax.com

* Another reference to the same code posted above, from the author's website
http://www.sil-tec.gr/~tzot/python/TupleStruct.py

One of the reasons I became interested on this is related to my
previous problems with 'unordered' dicts in some contexts, specially
when implementing some metaclass tricks. We've had some discussion
here about two months ago, and one of the comments was that it would
be useful if the locals() dict used in a class definition stored order
information. I see some parallels between the two problems (ordered
dicts and named tuples). The fact that named tuples are immutable by
definition should help in this case.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #25

Carlos Ribeiro <ca********@gmail.com> wrote:
But perhaps we could have something like "named tuples"; immutable
objects, like tuples, with the property that a unique name can be
associated to every item.


Of course we should -- I've lost count of how many recipes in the
cookbook I've merged that were implementing that idea in umpteen ways,
not counting several other ideas that only flew by in this NG. Since
standard modules time, os (for stat), resource (dunno if any others),
started returning this kind of supertuples, their convenience has been
obvious to all. I do believe they SHOULD _be_ tuples (that matters when
they're the only RHS argument of a % formatting operator) and it should
also be easy to get from them a name->value mapping (for % formatting
with named-items format style, and the like). Definitely PEP time...
who's gonna carry the torch?
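For the record, this is essentially what collections.namedtuple, added to the standard library in Python 2.6 (two years after this thread), eventually provided:

```python
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
p = Point(3, 4)
assert p == (3, 4)            # it is still a real tuple ...
assert p.x == 3               # ... with named access on top
assert "%s,%s" % p == "3,4"   # so it works as the sole RHS of %-formatting
assert dict(p._asdict()) == {'x': 3, 'y': 4}   # and yields a name->value mapping
```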
Alex
Jul 18 '05 #26

On Wed, 17 Nov 2004 16:49:03 +1300, Greg Ewing <gr**@cosc.canterbury.ac.nz> wrote:
Jeremy Bowers wrote:
Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much.

ISTM that a tuple _is_ a class that wraps content for return as a _single value_,
just like a custom container class instance. What's the problem? The fact that you
can write

r,g,b = foo()

instead of

t = foo()
r = t[0]
g = t[1]
b = t[2]

(or some other verbose object-content-accessing code)
is very nice, if you ask me. There are lots of uses for
small ordered sets of values where no explicit naming
is required, any more than it is in a call to def foo(r,g,b): ...

I agree that unpacking long tuples to a series of local names
is bug-prone, just like a similar arg list for function call, but
it is efficient and sometimes that's worth the debugging of a
couple of lines of code. I don't think programmers should arbitrarily
be prevented from using tuples as they see fit.

Since the alternative of returning a dict of keyworded values is now
syntactically so simple (i.e., return dict(a=a_value, b=b_value, c=etc),
ISTM this is mostly a design/style issue, and I disagree with the idea
that returning small tuples is generally bad design.
While I suspect you may be largely right, I
find myself wondering why this should be so. We
don't seem to have any trouble with multiple inputs
to a function, so why should multiple outputs be
a bad thing? What is the reason for this asymmetry?

I suspect that most tuples returned are small, and the too-broad
criticism of tuple returning is therefore largely inappropriate ;-)
Actually, it is not multiple outputs. It is a single tuple.

Perhaps it has something to do with positional vs.
keyword arguments. If a function has too many input
arguments to remember what order they go in, we
always have the option of specifying them by
keyword. But we can't do that with the return-a-tuple-
and-unpack technique for output arguments -- it's
strictly positional.

We can (in current python) as easily return a dict as a tuple,
e.g., return dict(a=1, b=2, c=3) for your dict below.

Maybe things would be better if we had "dict unpacking":

a, c, b = {'a': 1, 'b': 2, 'c': 3}

Yes, that is an interesting idea. It's part of the larger issue
of programmatic creation of bindings in the local namespace. The
trouble is that locals() effectively generates a snapshot dict
of current local bindings, and it's possible to mutate this dict,
but it's not possible (w/o black magic) to get the mods back into
the actual local bindings. If locals() returned a proxy for the
local namespace that could rebind local names, we could write

localsproxy().update({'a': 1, 'b': 2, 'c': 3})

instead of your line above. Likewise if foo returned a dict,

localsproxy().update(foo())

Maybe keyword unpacking could spell that with a '**' assignment target,
e.g.,

** = foo() # update local bindings with all legal-name bindings in returned dict

Hm, maybe you could use the same sugar for attributes, e.g.,

obj.** = foo() # sugar for obj.__dict__.update(foo())

IWT it would be acceptable to limit binding to existing bindings, so that frame structure
would not have to be altered. Still, as Carlos pointed out, formal parameter names
are private to a function, and their raison d'etre is to decouple function code internal
naming from external naming. Returning named values (whether using dict per se or the
attribute dict of a custom object, etc) creates coupling of a kind again. Of course, so
does any custom object with named attributes or methods.
would give a == 1, c == 3, b == 2. Then we could
accept outputs by keyword as well as inputs...

I think

a, c, b = {'a': 1, 'b': 2, 'c': 3}

probably needs an unpacking indicator in the syntax, e.g.,

a, c, b = **{'a': 1, 'b': 2, 'c': 3}

And I think this should be sugar for the effect

a = {'a': 1, 'b': 2, 'c': 3}['a']
c = {'a': 1, 'b': 2, 'c': 3}['c']
b = {'a': 1, 'b': 2, 'c': 3}['b']

I.e., if the dict has additional content, it is not an error.
Hm, do you want to go for a dict.get-like default value in that?

a, c, b = **(default_value){'a': 1, 'b': 2}

would be sugar for

a = {'a': 1, 'b': 2}.get('a', default_value)
c = {'a': 1, 'b': 2}.get('c', default_value)
b = {'a': 1, 'b': 2}.get('b', default_value)

Hm, better stop ;-)
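In the absence of that sugar, operator.itemgetter already gives a close spelling of keyed unpacking today (a sketch of mine, not from the post):

```python
from operator import itemgetter

d = {'a': 1, 'b': 2, 'c': 3}
# today's nearest spelling of the proposed  a, c, b = **d
a, c, b = itemgetter('a', 'c', 'b')(d)
assert (a, b, c) == (1, 2, 3)
```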

Regards,
Bengt Richter
Jul 18 '05 #27

On Wed, 17 Nov 2004 23:56:31 +0100, Alex Martelli <al*****@yahoo.com> wrote:
Carlos Ribeiro <ca********@gmail.com> wrote:
But perhaps we could have something like "named tuples"; immutable
objects, like tuples, with the property that a unique name can be
associated to every item.


Of course we should -- I've lost count of how many recipes in the
cookbook I've merged that were implementing that idea in umpteen ways,
not counting several other ideas that only flew by in this NG. Since
standard modules time, os (for stat), resource (dunno if any others),
started returning this kind of supertuples, their convenience has been
obvious to all. I do believe they SHOULD _be_ tuples (that matters when
they're the only RHS argument of a % formatting operator) and it should
also be easy to get from them a name->value mapping (for % formatting
with named-items format style, and the like). Definitely PEP time...
who's gonna carry the torch?


*After* I posted that message, I *did* some research (wrong order, I
know). And you're right, the situation now is pretty much like
Tanenbaum's famous quote: "the nice thing about standards is that
there are so many of them to choose from".

I am tempted to start writing this PEP. I think that I have a pretty
good idea about what do I want (which by itself isn't worth very
much). Anyway, it's a starting point:

1. Do not change the syntax. This leaves out some of the fancy
proposals, but greatly improves the acceptance chances.

2. Named tuples should, for all practical purposes, be an extension of
standard tuples.

3. Conventional access returns a tuple. __getitem__ works as in a
tuple, and the object itself is represented (by repr() and str()) as a
tuple.

4. Named attribute access is supported by __getattr__. Names are
looked up on the magic __names__ attribute of the tuple.

5. On slicing, a named tuple should return another named tuple. This
means that the __names__ tuple has to be sliced also.

6. Although useful, a new named tuple *cannot* be built directly from
a dict, as in this example:

tuple({'a':1, 'b':2})

....because the dict isn't ordered, and there is no way to guarantee
that the tuple would be constructed in the correct order. This will
only be possible if the dict stores the ordering information (btw, the
'correct' order is the definition order; it's the only one that can't
be inferred later at runtime, and alphabetical ordering can be always
obtained by a simple sort).

(However, *if* ordered dicts ever make it into the language, then a
conversion between 'named tuples' and 'ordered dicts' would become
natural -- think about it as an 'adaptation' :-)

7. Now for the controversial stuff. NamedTuples can be implemented as
a regular class -- there are many recipes to look at and choose from.
However, I think that is possible to implement named attribute access
as an "improvement" of regular sequence types, with no syntax changes.

The idea is that any tuple could be turned into a named tuple by
assigning a sequence of names to its __names__ magic attribute. If the
attribute isn't assigned (as it's the case with plain tuples), then
the named attribute is disabled. BTW, adding names to an unnamed tuple
*does not* violate the immutability of the tuple.

The usage would be as follows:

time = now()
time.__names__ = ('hour', 'minute', 'second')
print time.hour

The reasoning behind this proposal is as follows:

7.1. It's fully backwards-compatible.

7.2. It allows for easy annotation of existing tuples.

7.3. It allows for internal optimization. By using an internal mapping
directly wired into the sequence, it is possible to hide the mapping
mechanism, and also to make it work faster. This also adds a safety
layer, and helps to avoid direct 'peeking' at the internal mapping
structure that will be exposed by a native Python implementation.

The *biggest* problem of this proposal is that it's MUCH more complex
-- orders of magnitude, perhaps -- and also, that it may present
problems for other implementations besides CPython. Anyway, it's
something that I think deserves discussion.
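A minimal sketch of point 7's semantics (the subclass below stands in for the proposed built-in support, since real tuples reject attribute assignment; the class name is mine):

```python
class NTuple(tuple):
    """A plain tuple that can optionally be annotated with item names."""
    def __getattr__(self, name):
        try:
            return self[self.__dict__.get('_names', ()).index(name)]
        except ValueError:
            raise AttributeError(name)

    def _get_names(self):
        return self.__dict__.get('_names', ())

    def _set_names(self, names):
        # trap the obvious errors on assignment, per the proposal
        if not isinstance(names, tuple):
            raise TypeError("__names__ must be a tuple")
        self.__dict__['_names'] = names

    __names__ = property(_get_names, _set_names)

t = NTuple((10, 30, 0))
t.__names__ = ('hour', 'minute', 'second')
assert t.hour == 10 and t[1] == 30   # named and positional access coexist
assert isinstance(t, tuple)          # and it is still a real tuple
```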
--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #28

Carlos Ribeiro wrote:
4. Named attribute access is supported by __getattr__. Names are
looked up on the magic __names__ attribute of the tuple.

5. On slicing, a named tuple should return another named tuple. This
means that the __names__ tuple has to be sliced also.


Hm. If __names__ is a tuple, then does that tuple have a __names__
attribute as well?

(In practice, I can't imagine any reason why tuple.__names__.__names__
should ever be anything other than None, but the potential recursiveness
makes me nervous...)

Jeff Shannon
Technician/Programmer
Credit International

Jul 18 '05 #29

On Wed, 17 Nov 2004 17:13:45 -0800, Jeff Shannon <je**@ccvcorp.com> wrote:
Carlos Ribeiro wrote:
4. Named attribute access is supported by __getattr__. Names are
looked up on the magic __names__ attribute of the tuple.

5. On slicing, a named tuple should return another named tuple. This
means that the __names__ tuple has to be sliced also.


Hm. If __names__ is a tuple, then does that tuple have a __names__
attribute as well?

(In practice, I can't imagine any reason why tuple.__names__.__names__
should ever be anything other than None, but the potential recursiveness
makes me nervous...)


Humm. The worst case is if it's done in a circular fashion, as in:

mytuple.__names__ = nametuple
nametuple.__names__ = mytuple

That's weird. The best that I can imagine now is that it would be
illegal to assign a named tuple to the __names__ member of another
tuple.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #30

Carlos Ribeiro wrote:
On Wed, 17 Nov 2004 17:13:45 -0800, Jeff Shannon <je**@ccvcorp.com> wrote:
Carlos Ribeiro wrote:

4. Named attribute access is supported by __getattr__. Names are
looked up on the magic __names__ attribute of the tuple.

5. On slicing, a named tuple should return another named tuple. This
means that the __names__ tuple has to be sliced also.


Hm. If __names__ is a tuple, then does that tuple have a __names__
attribute as well?

(In practice, I can't imagine any reason why tuple.__names__.__names__
should ever be anything other than None, but the potential recursiveness
makes me nervous...)

Humm. The worst case is if it's done in a circular fashion, as in:

mytuple.__names__ = nametuple
nametuple.__names__ = mytuple

That's weird. The best that I can imagine now is that it would be
illegal to assign a named tuple to the __names__ member of another
tuple.

it should be possible to avoid a recursive problem:

>>> a = ('1', '2')
>>> a.__names__ = ('ONE', 'TWO')
>>> a[0]
'1'
>>> a.ONE
'1'
>>> a[0] is a.ONE
True
>>> b = (3, 4)
>>> b.__names__ = a
>>> b[0]
3
>>> b.1
3
>>> b.ONE
Traceback (most recent call last):
  File "<interactive input>", line 1, in ?
IndexError: tuple index out of range

>>> a = (1, 2)
>>> a.__names__ = ('ONE', 'TWO')
>>> a[0]
1
>>> a.ONE
1
>>> a[0] is a.ONE
True
>>> b = (3, 4)
>>> b.__names__ = a
>>> b[0]
3
>>> b.1
Traceback (most recent call last):
  File "<interactive input>", line 1, in ?
TypeError: tuple must contain strings

or maybe __names__ could be a property method and do some validation on assignment.

>>> a = (1, 2)
>>> a.__names__ = ('ONE', 'TWO')
>>> a[0]
1
>>> a.ONE
1
>>> a[0] is a.ONE
True
>>> b = (3, 4)
>>> b.__names__ = a
Traceback (most recent call last):
  File "<interactive input>", line 1, in ?
TypeError: tuple must contain strings

bryan
Jul 18 '05 #31

Carlos Ribeiro wrote:
@returns('year','month','day')
def today():
...
return year, month, day


I was also thinking of suggesting an extension to
the return statement:

return year = y, month = m, day = d

but I thought one radical idea per post would
be plenty. :-)
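Short of new syntax, the closest spelling available today is returning a dict built with keyword arguments (a sketch of mine, not from the post):

```python
def today():
    y, m, d = 2004, 11, 17
    # stands in for the proposed  return year = y, month = m, day = d
    return dict(year=y, month=m, day=d)

r = today()
assert r['month'] == 11   # results retrieved by name, not position
```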

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury, | A citizen of NewZealandCorp, a |
Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. |
gr**@cosc.canterbury.ac.nz +--------------------------------------+

Jul 18 '05 #32

On Wed, 17 Nov 2004 23:23:02 -0200, Carlos Ribeiro <ca********@gmail.com> wrote:
On Wed, 17 Nov 2004 17:13:45 -0800, Jeff Shannon <je**@ccvcorp.com> wrote:
Carlos Ribeiro wrote:
>4. Named attribute access is supported by __getattr__. Names are
>looked up on the magic __names__ attribute of the tuple.
>
>5. On slicing, a named tuple should return another named tuple. This
>means that the __names__ tuple has to be sliced also.
>
>


Hm. If __names__ is a tuple, then does that tuple have a __names__
attribute as well?

(In practice, I can't imagine any reason why tuple.__names__.__names__
should ever be anything other than None, but the potential recursiveness
makes me nervous...)


Humm. The worst case is if it's done in a circular fashion, as in:

mytuple.__names__ = nametuple
nametuple.__names__ = mytuple

That's weird. The best that I can imagine now is that it would be
illegal to assign a named tuple to the __names__ member of another
tuple.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com


Just to introduce a different perspective, a viewer for unnamed sequences,
rather than a new type with names:
>>> class SeqVu(object):
...     """create sequence viewer object"""
...     def __init__(self, names=''):
...         for i, name in enumerate(names.split()):
...             setattr(self, name, i)
...     def __call__(self, tup):
...         """accept tup and return self for named access"""
...         self.tup = tup; return self
...     def __setitem__(self, i, tup):
...         """provide assignment target for tuples to view"""
...         self.tup = tup
...     def __getattribute__(self, name):
...         """get tuple item by name"""
...         return object.__getattribute__(self, 'tup')[
...             object.__getattribute__(self, name)]
...
>>> by3names = SeqVu('zero one two')
>>> t = range(5)
>>> by3names(t).one
1
>>> by3names(t).two
2
>>> for by3names[:] in [(i, chr(i)) for i in xrange(65, 70)]:
...     print by3names.one, by3names.zero
...
A 65
B 66
C 67
D 68
E 69

Think of this as compost for the thought garden, not a bouquet ;-)

Regards,
Bengt Richter
Jul 18 '05 #33

On Thu, 18 Nov 2004 02:57:23 GMT, Bryan <be*****@yahoo.com> wrote:
[...]
it should be possible to avoid a recursive problem:

a = ('1', '2')
a.__names__ = ('ONE', 'TWO')

I guess you are working with a patched version of python?

>>> a = ('1', '2')
>>> a.__names__ = ('ONE', 'TWO')
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: 'tuple' object has only read-only attributes (assign to .__names__)

Regards,
Bengt Richter
Jul 18 '05 #34

Bengt Richter wrote:
On Thu, 18 Nov 2004 02:57:23 GMT, Bryan <be*****@yahoo.com> wrote:
[...]
it should be possible to avoid a recursive problem:

>a = ('1', '2')
>a.__names__ = ('ONE', 'TWO')
I guess you are working with a patched version of python?
>>> a = ('1', '2')
>>> a.__names__ = ('ONE', 'TWO')

Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'tuple' object has only read-only attributes (assign to .__names__)

Regards,
Bengt Richter


We were discussing a feature that doesn't yet exist. I believe it's possible to
get around the recursion issue by only allowing the __names__ attribute to be
accessed in the immediate explicitly called tuple. I really do like the idea of
named tuple elements.

bryan
Jul 18 '05 #35

On Wed, 17 Nov 2004 22:58:56 +0000, Bengt Richter wrote:
On Wed, 17 Nov 2004 16:49:03 +1300, Greg Ewing <gr**@cosc.canterbury.ac.nz> wrote:
Jeremy Bowers wrote:
Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much.

ISTM that a tuple _is_ a class that wraps content for return as a _single value_,
just like a custom container class instance. What's the problem? The fact that you
can write


Generally, generally, generally, I say again, generally.

You take every example mentioned to date, which is largely:

* OS code trying to be as thin as possible (stat, socket)
* math-type code where the tuple really is the class, as you suggest
* more-or-less contrived one- or two-liner code

and you've still got way, way, way less than 1% of the code of the vast
majority of programs.

Like any other code smell, there are times to use it. But I say it's a
*smell*, and those of you implicitly reading that to mean "Returning
tuples is *never* a good idea" are doing me a disservice; please go look
up "code smell" and the word "generally".

If you use it every once in a while where it is the right solution, great.
I do too. If you're using it every other function in a module, and it
isn't a thin wrapper around some other library, you've got a code smell.
(It's probably a class or two trying to get out.) You've got a *big* code
smell if you are unpacking those return values many times, because now
you've hard coded the size of the tuple into the code in many places.

Named tuples eliminate the latter, and make the former weaker. But I
wouldn't add named tuples because of this use case, I'd add them because
we could use immutable dictionaries, and this just happens to come along
for the ride (as long as you aren't required to fully consume the tuple on
the left side, otherwise the smell remains).
Jul 18 '05 #36

On Thu, 18 Nov 2004 05:51:00 -0500, Jeremy Bowers wrote:
* math-type code where the tuple really is the class, as you suggest


By the way, Bengt, just to be explicit, except for this particular "you"
the parent message is addressed generally.
Jul 18 '05 #37

Bengt Richter <bo**@oz.net> wrote:
...
Jeremy Bowers wrote:
Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much.

ISTM that a tuple _is_ a class that wraps content for return as a _single
value_, just like a custom container class instance. What's the problem?
The fact that you


None that I can see, no matter how many times Jeremy repeats
"generally".
is very nice, if you ask me. There are lots of uses for
small ordered sets of values where no explicit naming
is required, any more than it is in a call to def foo(r,g,b): ...
Yes, I agree.
IWT it would be acceptable to limit binding to existing bindings, so that
frame structure would not have to be altered.
Looks right to me.
Still, as Carlos pointed out, formal parameter names
are private to a function,
? No they're not -- a caller knows them, in order to be able to call
with named argument syntax.
I.e., if the dict has additional content, it is not an error
Agreed, by analogy with '%(foo)s %(bar)s' % somedict.
Hm, do you want to go for a dict.get-like default value in that?


No, enriching dicts to support an optional default value (or default
value factory) looks like a separate fight to me. As does syntax sugar,
IMHO. If we can get the semantics in 2.5 with a PEP, then maybe sugar
can follow once the usefulness is established to GvR's satisfaction.
Alex
Jul 18 '05 #38

P: n/a
On Thu, 18 Nov 2004 05:04:16 GMT, Bryan <be*****@yahoo.com> wrote:
Bengt Richter wrote:

On Thu, 18 Nov 2004 02:57:23 GMT, Bryan <be*****@yahoo.com> wrote:
[...]
it should be possible to avoid a recursive problem:
>>a = ('1', '2')
>>a.__names__ = ('ONE', 'TWO')


I guess you are working with a patched version of python?
>>> a = ('1', '2')
>>> a.__names__ = ('ONE', 'TWO')

Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'tuple' object has only read-only attributes (assign to .__names__)

Regards,
Bengt Richter


we were discussing a feature that doesn't yet exist. i believe it's possible to
get around the recursion issue by only allowing the __names__ attribute to be
accessed in the immediate explicitly called tuple. i really do like the idea of
named tuple elements.

There are a few requirements that can be imposed to avoid problems.
First, __names__ is clearly a property, accessed via get & set (which
allows trapping some errors). It should accept only tuples as an
argument (not lists) to avoid potential problems with external
references and mutability of the names. As for the validation, I'm not
sure if it's a good idea to check for strings; maybe just checking that
the 'names' stored in the tuple are immutable (or perhaps 'hashable')
is enough.
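A rough sketch of those requirements as a tuple subclass (the class name, validation rules, and attribute-lookup fallback here are my own illustration, not an agreed design):

```python
class NamedTuple(tuple):
    """Tuple whose positions can also be read by name (illustrative only)."""

    def __new__(cls, values, names):
        # Enforce the constraints discussed above: names must be a tuple,
        # with exactly one name per value.
        if not isinstance(names, tuple):
            raise TypeError("names must be a tuple, not %r" % type(names))
        if len(names) != len(values):
            raise ValueError("need exactly one name per value")
        self = tuple.__new__(cls, values)
        self._names = names
        return self

    @property
    def __names__(self):          # read-only, as discussed above
        return self._names

    def __getattr__(self, name):  # fall back to positional lookup by name
        try:
            return self[self._names.index(name)]
        except ValueError:
            raise AttributeError(name)

t = NamedTuple(('1', '2'), ('ONE', 'TWO'))
print(t.ONE, t[1], t.__names__)
```

Since __names__ is a property with no setter, the recursion and mutation concerns above never arise: the names are fixed at construction time.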

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #39

P: n/a
On Thu, 18 Nov 2004 08:56:30 +0100, Alex Martelli <al*****@yahoo.com> wrote:
Bengt Richter <bo**@oz.net> wrote:
Still, as Carlos pointed out, formal parameter names
are private to a function,


? No they're not -- a caller knows them, in order to be able to call
with named argument syntax.


In my original post I pointed out that named arguments are an
exception to this rule. Most of the time, the caller does not *need*
to know the internal argument names. Named arguments are entirely
optional; in fact, if you provide the arguments in the correct order,
you can avoid using them in almost every situation (**kw being the
exception).
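That optionality is easy to demonstrate with a small example (the function and its date format are invented for illustration); the caller only touches the parameter names if it chooses to:

```python
def ymd(year, month, day):
    # Callers may pass these positionally or by name; the names are
    # part of the interface only for those who choose to use them.
    return "%04d-%02d-%02d" % (year, month, day)

print(ymd(2004, 11, 18))                  # positional
print(ymd(day=18, year=2004, month=11))   # same call, reordered by name
```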

The point was: fixing names on the return value introduces coupling.
But after thinking a little more, the issue does not seem as clear to
me as before. After all, *some* coupling is always needed, and
positional return values are also "coupled" to some extent. I think
the named-tuples idea is nice because it is entirely symmetrical with
named arguments: it lets the programmer use the shortcut positional
notation or named access, at will, for the same function call.
I.e., if the dict has additional content, it is not an error


Agreed, by analogy with '%(foo)s %(bar)s' % somedict.
Hm, do you want to go for a dict.get-like default value in that?


No, enriching dicts to support an optional default value (or default
value factory) looks like a separate fight to me. As does syntax sugar,
IMHO. If we can get the semantics in 2.5 with a PEP, then maybe sugar
can follow once the usefulness is established to GvR's satisfaction.


I'm also trying to focus on tuples, so I'm leaving dicts out of this
for now. Clearly, though, named tuples & immutable dicts are closely
related; of the two, named tuples have an advantage because they
are naturally ordered.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #40

P: n/a
Carlos Ribeiro wrote:
There are a few requirements that can be imposed to avoid problems.
First, __names__ is clearly a property, accessed via get & set (which
allows trapping some errors). It should accept only tuples as an
argument (not lists) to avoid potential problems with external
references and mutability of the names. As for the validation, I'm not
sure if it's a good idea to check for strings; maybe just checking that
the 'names' stored in the tuple are immutable (or perhaps 'hashable')
is enough.


Your idea of a __names__ attribute suffers from a problem that the common
use case would be to return a tuple with appropriate names. Right now you
can do that easily in one statement but if you have to assign to an
attribute it becomes messy.

An alternative would be to add a named argument to the tuple constructor so
we can do:

return tuple(('1', '2'), names=('ONE', 'TWO'))

and that would set the __names__ property. If you allow this then you can
make the __names__ property readonly and the problem of introducing a cycle
in the __names__ attributes goes away.
Jul 18 '05 #41

P: n/a
On 18 Nov 2004 12:58:49 GMT, Duncan Booth <du**********@invalid.invalid> wrote:
Carlos Ribeiro wrote:
There are a few requirements that can be imposed to avoid problems.
First, __names__ is clearly a property, accessed via get & set (which
allows trapping some errors). It should accept only tuples as an
argument (not lists) to avoid potential problems with external
references and mutability of the names. As for the validation, I'm not
sure if it's a good idea to check for strings; maybe just checking that
the 'names' stored in the tuple are immutable (or perhaps 'hashable')
is enough.


Your idea of a __names__ attribute suffers from a problem that the common
use case would be to return a tuple with appropriate names. Right now you
can do that easily in one statement but if you have to assign to an
attribute it becomes messy.

An alternative would be to add a named argument to the tuple constructor so
we can do:

return tuple(('1', '2'), names=('ONE', 'TWO'))

and that would set the __names__ property. If you allow this then you can
make the __names__ property readonly and the problem of introducing a cycle
in the __names__ attributes goes away.


I think that a better way to solve the problem is to create a names
method on the tuple itself:

return ('1', '2').names('ONE', 'TWO')

It's shorter and cleaner, and it avoids a potential argument against
named parameters for the tuple constructor -- none of the standard
constructors take named parameters to set extended behavior, as far as
I know.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #42

P: n/a
Carlos Ribeiro wrote:
On 18 Nov 2004 12:58:49 GMT, Duncan Booth <du**********@invalid.invalid> wrote:
Carlos Ribeiro wrote:

There are a few requirements that can be imposed to avoid problems.
First, __names__ is clearly a property, accessed via get & set (which
allows trapping some errors). It should accept only tuples as an
argument (not lists) to avoid potential problems with external
references and mutability of the names. As for the validation, I'm not
sure if it's a good idea to check for strings; maybe just checking that
the 'names' stored in the tuple are immutable (or perhaps 'hashable')
is enough.


Your idea of a __names__ attribute suffers from a problem that the common
use case would be to return a tuple with appropriate names. Right now you
can do that easily in one statement but if you have to assign to an
attribute it becomes messy.

An alternative would be to add a named argument to the tuple constructor so
we can do:

return tuple(('1', '2'), names=('ONE', 'TWO'))

and that would set the __names__ property. If you allow this then you can
make the __names__ property readonly and the problem of introducing a cycle
in the __names__ attributes goes away.

I think that a better way to solve the problem is to create a names
method on the tuple itself:

return ('1', '2').names('ONE', 'TWO')

It's shorter and cleaner, and it avoids a potential argument against
named parameters for the tuple constructor -- none of the standard
constructors take named parameters to set extended behavior, as far as
I know.


but doesn't this feel more pythonic and more consistent?

return ('ONE':'1', 'TWO':'2')

Jul 18 '05 #43

P: n/a
On Thu, 18 Nov 2004 14:21:58 GMT, Bryan <be*****@yahoo.com> wrote:
Carlos Ribeiro wrote:
I think that a better way to solve the problem is to create a names
method on the tuple itself:

return ('1', '2').names('ONE', 'TWO')

It's shorter and clean, and avoids a potential argument against named
parameters for the tuple constructor -- none of the standard
contructors take named parameters to set extended behavior as far as I
know.


but doesn't this feel more pythonic and more consistent?

return ('ONE':'1', 'TWO':'2')


The problem is that it involves changing the language, and at this
point, the idea is to devise a solution that *doesn't* need to change
the language. But it's a possibility for the future, after a simpler
version of the same basic feature is approved and implemented.
--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #44

P: n/a
Jeremy Bowers wrote:
You take every example mentioned to date, which is largely:

* OS code trying to be as thin as possible (stat, socket)
* math-type code where the tuple really is the class, as you suggest
* more-or-less contrived one- or two-liner code

and you've still got way, way, way less than 1% of the code of the vast
majority of programs.

Like any other code smell, there are times to use it. But I say it's a
*smell*, and those of you implicitly reading that to mean "Returning
tuples is *never* a good idea" are doing me a disservice; please go look
up "code smell" and the word "generally".
Well, the first page I found that listed the term "code smell" also mentioned
"code wants to be simple", and having scanned through a good sized chunk of
Python code looking to see where multiple return values were used, I am now of
the opinion that, not only are multiple return values _not_ automatically code
smell, but they are one of the key ingredients to avoiding monolithic functions
(and thus they help keep code simple).

The code I looked through had about 4500 return statements. Of those, roughly
350 were returns with multiple values. Of those, 26 had 4 or more return values
- if code smell is a hint that something _might_ be wrong, I'd say those 26 have
code smell, but the remaining 320 or so do not automatically raise concern (I
inspected those 26 in detail and saw only 4 or 5 cases that'd be removed next
time the code got refactored).
If you use it every once in a while where it is the right solution, great.
I do too. If you're using it every other function in a module, and it
isn't a thin wrapper around some other library, you've got a code smell.
I found that multiple return values were not often used in "public" functions
(those meant to be accessible from outside the module). Instead the most common
use case was between functions that had been separated to divide the problem
into smaller, testable code chunks (i.e. - rather than having a huge function to
parse a log line, perform computations, and send the results to output, there
was a function that'd take the raw data, call a helper function to parse it,
call another helper to compute the result, and call another to send the results
off for output).

Without the ability to return multiple values, it would be much more cumbersome
to split the code into small, easily understood chunks because for any
non-trivial task, each logical "step" in that task often has multiple inputs as
well as multiple outputs.
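A hedged sketch of that style (the log format, helper names, and scoring rules here are invented for illustration, not taken from the code Dave reviewed):

```python
def parse(line):
    """Split a hypothetical 'timestamp level message' log line."""
    timestamp, level, message = line.split(" ", 2)
    return timestamp, level, message

def score(level):
    """Map a level to (severity, is_alert) -- invented rules."""
    severity = {"INFO": 0, "WARN": 1, "ERROR": 2}.get(level, -1)
    return severity, severity >= 2

def summarize(line):
    # Each step stays small and testable because the intermediate
    # results travel between helpers as multiple return values.
    timestamp, level, message = parse(line)
    severity, is_alert = score(level)
    return timestamp, severity, is_alert

print(summarize("2004-11-18 ERROR disk full"))
```

Each helper here is called from exactly one place and its results are unpacked immediately, which matches the tight caller-callee coupling described above.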

After reviewing the code, I think that the use of multiple return values is, in
and of itself, _almost never_ a good hint of a problem - way too many false
positives (a better rule of thumb would be that code that returns BIG tuples
has code smell).
(It's probably a class or two trying to get out.)
No, at least not in the code I looked at - in the vast majority of the uses of
multiple return values, the coupling between the functions was already very
tight (specialized) and the callee-caller relationship was one-to-one or
one-to-few (the function returning multiple values was called by one or very few
functions), and the data didn't move around between function as a unit or group,
such that making a class would have been pure overhead like many C structures -
never reused, and used only to cross the "bridge" between the two functions
(immediately unpacked and discarded upon reception).
You've got a *big* code
smell if you are unpacking those return values many times, because now
you've hard coded the size of the tuple into the code in many places.


Yes, this was mentioned previously.

-Dave
Jul 18 '05 #45

P: n/a
Carlos Ribeiro <ca********@gmail.com> wrote:

Dicts are not the best way to make it, because they are not ordered
:-) But perhaps we could have something like "named tuples";

@returns('year','month','day')
def today():
...
return year, month, day


As someone has mentioned, usage of tuples is not very scalable and
tends to lend itself to carcass entries in the long run, or simply to
breaking compatibility. But I guess in your case you want to do
something like printing out the results, so, as long as all items
are involved, an order would be beneficial.

------------------

For whatever it's worth, here are two more approaches for unordered
keyword argument and return.

(1) Usage of structures certainly is painful in C++, but not in
Python. What I often use is some generic object:

class Generic: pass

def today(p):
print p.message
r = Generic()
r.year, r.month, r.day = 2004, 11, 18
return r

p=Generic()
p.message = 'Hello!'
result = today(p)

I suspect a large number of people use this approach. Generic objects
are also good for pickling/serialization. (By the way, why is the new
style class "object()" made so that no dynamic attributes can be
assigned to it? Compared to a dictionary, a generic object is very
helpful as a syntax gadget and is used by many people. I am a bit
surprised that it's not part of the language. Why make so many people
resort to the "class Generic: pass" trick?)
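As it happens, later Pythons grew exactly this gadget: types.SimpleNamespace (added in Python 3.3) is the stdlib's "Generic" class, so the trick no longer needs hand-rolling:

```python
from types import SimpleNamespace

def today():
    # Attributes can be assigned freely, unlike a bare object().
    r = SimpleNamespace()
    r.year, r.month, r.day = 2004, 11, 18
    return r

r = today()
print(r.year, r.month, r.day)
```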

(2) Use the function object itself:
def f():
if not hasattr(f,'x'):
f.x = 1
else:
f.x += 1
f.y, f.z = 2*f.x, 3*f.x

f()
print f.x, f.y, f.z
f()
print f.x, f.y, f.z

Of course, this approach has more limited applicability. (Not good for
multithreaded case, not good for renaming the function object or
passing it around.)

Hung Jung
Jul 18 '05 #46

P: n/a
Carlos Ribeiro wrote:
An alternative would be to add a named argument to the tuple
constructor so we can do:

return tuple(('1', '2'), names=('ONE', 'TWO'))

and that would set the __names__ property. If you allow this then you
can make the __names__ property readonly and the problem of
introducing a cycle in the __names__ attributes goes away.


I think that a better way to solve the problem is to create a names
method on the tuple itself:

return ('1', '2').names('ONE', 'TWO')

It's shorter and cleaner, and it avoids a potential argument against
named parameters for the tuple constructor -- none of the standard
constructors take named parameters to set extended behavior, as far as
I know.

It doesn't have to be a named parameter; it could just be a second
positional parameter, but I think a named parameter reads better.

The problem with that is that you are still breaking the rule that tuples
should be immutable. Currently there is no easy way to copy a tuple:
>>> a = (1, 2)
>>> b = tuple(a)
>>> b is a
True
>>> import copy
>>> b = copy.copy(a)
>>> b is a
True

Once I have a tuple I know it isn't going to change no matter what happens.
You could say that the names method can only be called once, but even then
the tuple is still changing after its creation and you are still going to
have to introduce some way for me to take a tuple with one set of names and
create a new tuple with a different set of names such as changing the
behaviour of the tuple constructor and the copy.copy function.

Another way to avoid the named argument would be to provide tuple with a
factory method. How about:

return tuple.named((1, 2), ('one', 'two'))
Jul 18 '05 #47

P: n/a
Carlos Ribeiro <ca********@gmail.com> wrote in message news:<ma**************************************@python.org>...
On Thu, 18 Nov 2004 05:04:16 GMT, Bryan <be*****@yahoo.com> wrote:
There are a few requirements that can be imposed to avoid problems.
First, __names__ is clearly a property, accessed via get & set (which
allows trapping some errors). It should accept only tuples as an
argument (not lists) to avoid potential problems with external
references and mutability of the names. As for the validation, I'm not
sure if it's a good idea to check for strings; maybe just checking that
the 'names' stored in the tuple are immutable (or perhaps 'hashable')
is enough.


With no changes to the language, the same can be achieved by using the db_row module.
Jul 18 '05 #48

P: n/a
Not quite the syntax you want, but better imho since it doesn't
involve name redundancy:

locals().update( {'a': 1, 'b': 2, 'c': 3} )
Greg Ewing <gr**@cosc.canterbury.ac.nz> wrote in message news:<30*************@uni-berlin.de>...

Maybe things would be better if we had "dict unpacking":

a, c, b = {'a': 1, 'b': 2, 'c': 3}

would give a == 1, c == 3, b == 2. Then we could
accept outputs by keyword as well as inputs...

Jul 18 '05 #49

P: n/a
On 18 Nov 2004 10:05:23 -0800
fi**************@gmail.com (Lonnie Princehouse) wrote:
Not quite the syntax you want, but better imho since it doesn't
involve name redundancy:

locals().update( {'a': 1, 'b': 2, 'c': 3} )


Are you sure it will work with locals?
>>> def f(d):
...     locals().update(d)
...     print a
...
>>> f({'a': 1})
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 3, in f
NameError: global name 'a' is not defined

Or even:

>>> def f(d):
...     a = 1
...     locals().update(d)
...     print a
...
>>> f({'a': 2})
1
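Indeed, CPython documents that writes to the dict returned by locals() inside a function are not guaranteed to affect the actual local variables. A working spelling of Greg's "dict unpacking" that needs no language change is operator.itemgetter (the dict contents below are just the example values from this thread):

```python
from operator import itemgetter

d = {'a': 1, 'b': 2, 'c': 3}

# Pick the keys in whatever order the left-hand side wants them.
a, c, b = itemgetter('a', 'c', 'b')(d)
print(a, b, c)
```

Extra keys in the dict are simply ignored, matching the semantics discussed earlier in the thread.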

--
Denis S. Otkidach
http://www.python.ru/ [ru]
Jul 18 '05 #50


This discussion thread is closed

Replies have been disabled for this discussion.