Bytes | Developer Community

determining the number of output arguments

Hello,

def test(data):
    i = ?   # This is the line I have trouble with
    if i == 1: return data
    else: return data[:i]

a,b,c,d = test([1,2,3,4])

How can I set i based on the number of output arguments defined in
(a,b,c,d)?

Thank you,
Darren
Jul 18 '05
Carlos Ribeiro wrote:
2. Named tuples should, for all practical purposes, be an extension of standard tuples.


Yes -- and explicitly so. A tuple that has names ought not be a tuple
at all, but rather a tuple subtype. The benefit of adding __names__ as
a tuple attribute is lost on me.

Let's add libraries, not language changes:
class namedtuplewrapper(tuple):
    """
    Subclasses wrap existing tuples, providing names.
    """

    _names_ = []

    def __getattr__(self, name):
        try:
            x = self._names_.index(name)
        except ValueError:
            raise AttributeError, 'no such field: %s' % name
        if x >= 0:
            return self[x]

class namedtuple(namedtuplewrapper):
    """
    Sugar for subclasses that construct named tuples from
    positional arguments.
    """

    def __new__(cls, *args):
        return tuple.__new__(cls, args)

if __name__ == '__main__':

    # namedtuple example

    class foo(namedtuple):
        _names_ = ['one', 'two', 'three', 'four', 'five']

    f = foo(1, 2, 3, 4, 5)
    assert f.one + f.four == f.five == 5

    # wrapper example

    class loctime(namedtuplewrapper):
        _names_ = [
            'year', 'month', 'day',
            'hour', 'min', 'sec',
            'wday', 'yday', 'isdst'
        ]

    import time
    loc = loctime(time.localtime())
    print loc.year, loc.month, loc.day

    # arbitrary naming of instances...
    loc._names_ = ['a', 'b', 'c']
    print loc.a
-- Graham
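Graham's subclass approach is essentially what the standard library later shipped as collections.namedtuple (Python 2.6+). The foo example above translates roughly as:

```python
from collections import namedtuple

# Modern spelling of the foo example; Foo instances are real tuple subtypes.
Foo = namedtuple('Foo', ['one', 'two', 'three', 'four', 'five'])
f = Foo(1, 2, 3, 4, 5)
assert f.one + f.four == f.five == 5
assert isinstance(f, tuple)
a, b, c, d, e = f  # positional unpacking still works
```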

Jul 18 '05 #51
Bengt Richter wrote:
Maybe keyword unpacking could spell that with a '**' assignment target,
e.g.,

** = foo() # update local bindings with all legal-name bindings in returned dict


I would *not* like to see this. ISTM that, when reading a function, I
should be able to see where every name in that function came from.
Having global values appear from outside of the function is bad enough;
introducing a new way to magically create variables whose names I can't
see would be (IMO) very, very bad.

Jeff Shannon
Technician/Programmer
Credit International
Jul 18 '05 #52
Hung Jung Lu wrote:
def today(p):
    print p.message
    r = Generic()
    r.year, r.month, r.day = 2004, 11, 18
    return r
[snip]
I suspect a large number of people use this approach. Generic objects
are also good for pickling/serialization. (By the way, why is the new
style class "object()" made so that no dynamic attributes can be
assigned to it?
My understanding is that this is for efficiency reasons... I remember
some older discussions, but they're kinda hard to google since 'object'
isn't a very good query term... Personally, I don't really care about
being able to assign attributes dynamically to an object() instance, but
I would like to be able to do something like:
r = object(year=2004, month=11, day=18)
r.day, r.month, r.year
(18, 11, 2004)

where object's __init__ takes keyword arguments to designate attributes.
This would let you use object basically as a record. If I remember
right though, the problem with this is that it introduces overhead for
all classes that inherit from object. (I kinda thought it had something
to do with defining __slots__, but I'll wait for someone with more
knowledge in this area to fill in the details...)
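The restriction under discussion is easy to see in a quick sketch (not from the post): a bare object() instance allots no per-instance __dict__, while any trivial subclass gains one:

```python
o = object()
try:
    o.x = 1            # fails: plain object instances have no __dict__
    raised = False
except AttributeError:
    raised = True
assert raised

class Record(object):
    pass

r = Record()
r.x = 1                # works: the subclass carries a per-instance __dict__
assert r.x == 1
```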

def f():
    if not hasattr(f, 'x'):
        f.x = 1
    else:
        f.x += 1
    f.y, f.z = 2*f.x, 3*f.x

f()
print f.x, f.y, f.z
f()
print f.x, f.y, f.z

Of course, this approach has more limited applicability. (Not good for
multithreaded case, not good for renaming the function object or
passing it around.)


Just to clarify for anyone who wasn't following closely, this is "not
good for renaming the function object" in cases like:
>>> def f():
...     if not hasattr(f, 'x'):
...         f.x = 1
...     else:
...         f.x += 1
...     f.y, f.z = 2*f.x, 3*f.x
...
>>> g = f
>>> f = 2
>>> g()

Traceback (most recent call last):
File "<interactive input>", line 1, in ?
File "<interactive input>", line 3, in f
AttributeError: 'int' object has no attribute 'x'

This is a little contrived here, but the point is that relying *inside*
a function on the name that the function is def'd with is probably not a
good idea.
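One way to sidestep the rename fragility, sketched as an alternative rather than anything proposed in the thread, is to keep the state in a closure so the function never looks itself up by name:

```python
import itertools

def make_f():
    counter = itertools.count(1)
    def f():
        # State lives in the enclosing cell, not on a name looked up
        # at call time, so rebinding the outer name is harmless.
        x = next(counter)
        return x, 2 * x, 3 * x
    return f

f = make_f()
g = f
f = 2          # rebinding f no longer breaks g()
assert g() == (1, 2, 3)
assert g() == (2, 4, 6)
```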

Steve
Jul 18 '05 #53
Steven Bethard wrote:
My understanding is that this is for efficiency reasons... I remember
some older discussions, but they're kinda hard to google since 'object'
isn't a very good query term... Personally, I don't really care about
being able to assign attributes dynamically to an object() instance, but
I would like to be able to do something like:
 r = object(year=2004, month=11, day=18)
 r.day, r.month, r.year

(18, 11, 2004)


Given that the necessary class is literally a 3-liner, I'm not sure a language
extension is truly needed:

In [1]: class bunch:
...: def __init__(self,**kw):
...: self.__dict__.update(kw)
...:

In [2]: r=bunch(year=2004,month=11,day=18)

In [3]: r.day,r.month,r.year
Out[3]: (18, 11, 2004)

Cheers,

f

Jul 18 '05 #54
Fernando Perez wrote:
Steven Bethard wrote:

[snip]
> r = object(year=2004, month=11, day=18)
> r.day, r.month, r.year


(18, 11, 2004)

Given that the necessary class is literally a 3-liner, I'm not sure a language
extension is truly needed:

In [1]: class bunch:
...: def __init__(self,**kw):
...: self.__dict__.update(kw)
...:


How do you think I generated the code above? ;)

Even at 3 lines, do you really want to rewrite those every time you need
this functionality? I don't see what would really be wrong with at
least putting this in a stdlib module somewhere (collections perhaps?)

Heck, I can write a set class in only a few more lines:
>>> class set(object):
...     def __init__(self, seq):
...         self._dict = dict.fromkeys(seq)
...     def __iter__(self):
...         return iter(self._dict)
...     def add(self, item):
...         self._dict[item] = None
...
>>> s = set([1, 3, 3, 5, 2, 7, 5])
>>> list(s)
[1, 2, 3, 5, 7]
>>> s.add(2)
>>> list(s)
[1, 2, 3, 5, 7]
>>> s.add(8)
>>> list(s)
[1, 2, 3, 5, 7, 8]

But I don't think that's a good reason for not having a builtin set class.

The idea of a 'bunch', 'record', 'struct', 'object with attributes',
etc. gets asked for at least a couple times a month. It might be nice
if that functionality was available *somewhere*, whether it be in object
(not likely, I believe) or in a new class, e.g. 'record'.
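For what it's worth, this functionality did eventually land in the standard library as types.SimpleNamespace (Python 3.3+), a stdlib "bunch":

```python
from types import SimpleNamespace

r = SimpleNamespace(year=2004, month=11, day=18)
assert (r.day, r.month, r.year) == (18, 11, 2004)
r.hour = 12            # attributes remain fully dynamic
assert r.hour == 12
```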

On the other hand, I usually find that in the few places where I have
used a record like this, I eventually replace the struct with a real
class...

Steve
Jul 18 '05 #55
Steven Bethard wrote:
Fernando Perez wrote:
Steven Bethard wrote:
[snip]
>> r = object(year=2004, month=11, day=18)
>> r.day, r.month, r.year

(18, 11, 2004)

Given that the necessary class is literally a 3-liner, I'm not sure a
language extension is truly needed:

In [1]: class bunch:
...: def __init__(self,**kw):
...: self.__dict__.update(kw)
...:


How do you think I generated the code above? ;)


:)
On the other hand, I usually find that in the few places where I have
used a record like this, I eventually replace the struct with a real
class...


Yes, that's true. IPython has this fairly fancy Struct module, which is
yet-another-shot at the same thing. It started as the above 3-liner, and
ended up growing into a fairly complex class:

planck[IPython]> wc -l Struct.py
376 Struct.py

Not huge, but certainly more than 3 lines :)

I guess I was only arguing for the 3-line version being fairly trivial even to
rewrite on the fly as needed. But if someone does add a fully functional
contribution of this kind, with enough bells and whistles to cover the more
advanced cases, I'd be +100 on that :)

Best,

f

Jul 18 '05 #56
On Thu, 18 Nov 2004 08:56:30 +0100, al*****@yahoo.com (Alex Martelli) wrote:
Bengt Richter <bo**@oz.net> wrote:
... [...]
Still, as Carlos pointed out, formal parameter names
are private to a function,
(If I misunderstood and misrepresented what Carlos wrote, I apologize)
? No they're not -- a caller knows them, in order to be able to call
with named argument syntax.

I guess you mean keyword arguments, so yes, but for other args I would argue that the caller's
knowledge of formal parameter names really only serves mnemonic purposes.
I.e., once the calling code is written, e.g. an obfuscated-symbols
version of the function may be substituted with no change in program
behavior. So in that sense, there is no coupling: the writer of the
calling code is not forced to use any of the function's internal names
in _his_ source code. This is in contrast with e.g. a returned dict or
name-enhanced tuple: he _is_ forced to use the given actual names in _his_ code,
much like keyword arguments again.

That, incidentally, is why Carlos' decorator suggestion looked good to me. I.e.,
if it could be used to wrap an outside function that returns a plain tuple,
the choice of names would be back in the caller's source code again. Let' see if I can find
it to quote ... not far away...

"""
@returns('year','month','day')
def today():
...
return year, month, day
"""
Applying that (ignoring optimization ;-) to an outside function:

@returns('year','month','day')
def today(*args,**kw):
return outside_function(*args, **kw)

which by some magic would create a today function that returned
a named tuple, with names chosen by the calling coder.
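The @returns decorator is never implemented in the thread; a minimal sketch of what that magic might look like, built on collections.namedtuple (which postdates this discussion), could be:

```python
from collections import namedtuple
import functools

def returns(*names):
    # Hypothetical decorator from the thread: rebrand a function's
    # plain-tuple result as a named tuple, with names chosen
    # on the caller's side of the fence.
    def deco(func):
        result_type = namedtuple(func.__name__ + '_result', names)
        @functools.wraps(func)
        def wrapper(*args, **kw):
            return result_type(*func(*args, **kw))
        return wrapper
    return deco

@returns('year', 'month', 'day')
def today():
    return (2004, 11, 18)

t = today()
assert t.year == 2004                  # name-based access
assert tuple(t) == (2004, 11, 18)      # still a plain tuple underneath
```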

Regards,
Bengt Richter
Jul 18 '05 #57
On Thu, 18 Nov 2004 11:16:31 -0800, Jeff Shannon <je**@ccvcorp.com> wrote:
Bengt Richter wrote:
Maybe keyword unpacking could spell that with a '**' assignment target,
e.g.,

** = foo() # update local bindings with all legal-name bindings in returned dict
I would *not* like to see this. ISTM that, when reading a function, I
should be able to see where every name in that function came from.
Having global values appear from outside of the function is bad enough;
introducing a new way to magically create variables whose names I can't

                            ^^^^^^
Not create, just rebind existing locals.
see would be (IMO) very, very bad.


All things in moderation. But I agree that you would have to know and trust foo ;-)

Regards,
Bengt Richter
Jul 18 '05 #58
On Thu, 18 Nov 2004 21:14:54 +0300, "Denis S. Otkidach" <od*@strana.ru> wrote:
On 18 Nov 2004 10:05:23 -0800
fi**************@gmail.com (Lonnie Princehouse) wrote:
Not quite the syntax you want, but better imho since it doesn't
involve name redundancy:

locals().update( {'a': 1, 'b': 2, 'c': 3} )
Are you sure it will work with locals?


I think he was alluding to proposed functionality, not as it
currently works. I thought it might be possible to have a locals()-like
proxy that would update _existing_ locals, so that your second example
would work.
>>> def f(d):
...     locals().update(d)
...     print a
...
>>> f({'a': 1})
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 3, in f
NameError: global name 'a' is not defined

Or even:

>>> def f(d):
...     a = 1
...     locals().update(d)  # or: locals_proxy().update(d) -- rebind existing local names matching keys in d
...     print a
...
>>> f({'a': 2})
1

--
Denis S. Otkidach
http://www.python.ru/ [ru]


Regards,
Bengt Richter
Jul 18 '05 #59
Bengt Richter <bo**@oz.net> wrote:
? No they're not -- a caller knows them, in order to be able to call
with named argument syntax.
I guess you mean keyword arguments, so yes,


I hate to call them 'keyword' arguments when they aren't (check with the
keyword module: it will confirm they aren't keywords!-).
but for other args I would argue that the caller's
knowledge of formal parameter names really only serves mnemonic purposes.
How does that differ from returning a tuple-with-names?
I.e., once the calling code is written, e.g. an obfuscated-symbols
version of the function may be substituted with no change in program
behavior. So in that sense, there is no coupling: the writer of the
If the caller doesn't use argument names, you can change argument names
without breaking the caller.

If the caller doesn't use field names in a returned tuple-with-names,
you can change the field names without breaking the caller.

I don't see any difference.
calling code is not forced to use any of the function's internal names
Same for a tuple-with-names being returned.
in _his_ source code. This is in contrast with e.g. a returned dict or
name-enhanced tuple: he _is_ forced to use the given actual names in _his_
code, much like keyword arguments again.


Dicts are another issue. As for tuples-with-names, no way:

a, b, c, d, e, f, g, h, i = time.localtime()

this works, of course, and the caller doesn't have to know which field
is named tm_day and which one tm_hour or whatever else, unless the
caller WANTS to use such names for mnemonic purposes.

The situation is as strictly parallel to passing functions to ordinary
Python functions as two issues can ever be in programming: foo(a, b, c)
works but so does foo(bar=b, baz=a, fee=c) if that's the calling style
the caller prefers for mnemonic/clarity/style reasons.
Alex

Jul 18 '05 #60
On Fri, 19 Nov 2004 10:58:17 +0100, al*****@yahoo.com (Alex Martelli) wrote:
Bengt Richter <bo**@oz.net> wrote:
>? No they're not -- a caller knows them, in order to be able to call
>with named argument syntax.
> I guess you mean keyword arguments, so yes,


I hate to call them 'keyword' arguments when they aren't (check with the
keyword module: it will confirm they aren't keywords!-).

Of the language no, but the error message is suggestive:
>>> def foo(a, b, c): print a, b, c
...
>>> foo(z=123)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: foo() got an unexpected keyword argument 'z'
OTOH, I have been arguing from an untested assumption *<8^P. I didn't
realize that 'keyword' style named parameter passing could be used
with the ordinary named parameters!
>>> foo(b=2, a=1, c=3)
1 2 3

That definitely is more than mnemonic. So the formal ordered parameter names
are not just internal information, they are part of the interface (which
you can optionally ignore). Really sorry if I misled anyone on that ;-/
but for other args I would argue that the caller's
knowledge of formal parameter names really only serves mnemonic purposes.


How does that differ from returning a tuple-with-names?

I guess you are referring to those names' also being for optional use, i.e.,
that you can unpack a tuple-with-names the old fashioned way if you want. Hm...
but that might lead to ambiguities if we are to have automatic name-driven unpacking.
I.e.,

c,a,b = tuple_with_names

might be different from

c,a,b = tuple(tuple_with_names)

and

a,b = tuple_with_names

might be a legal extraction of a and b (whatever the order), but

a,b = tuple(tuple_with_names)[1:]

would need the [1:] to avoid an unpacking error.
That makes me wonder whether

a,b,c,etc = someobj

will need an UNPACK_SEQUENCE that looks for a name-lookup
capability in someobj before it looks for an iter capability
to do sequential unpacking. I think probing __getitem__ would
cause problems, since dict is already capable of both __iter__
and __getitem__, and the priority is already defined:
>>> a, b = {'x':1, 'y':2}
>>> a, b
('y', 'x')
And since the underlying tuple would have to support __iter__,
I'm not sure how to generate code for name-driven unpacking
without having a __getnamedvalue__ method separate from __getitem__,
and which would have priority over __iter__. And then what do
you do with all those left-hand-side namelists that you mostly
won't need unless the RHS evaluates to an object with a __getnamedvalue__
method? Maybe we need an explicit operator for name-driven unpacking, e.g.,

a,b <- someobj

Then plain old __getitem__ could be used on any object that currently supports it,
and the name list would only be compiled into the code for explicit a,b <- something.

Obviously this could take care of unpacking tuple-with-name objects too.
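As a sketch of an alternative (not raised in the thread): no new syntax is needed if the operation gets a name, since operator.attrgetter already performs name-driven extraction in whatever order the caller chooses:

```python
import operator
import time

loc = time.localtime()
# Pull fields by name, in any order the caller likes, ignoring position.
hour, year = operator.attrgetter('tm_hour', 'tm_year')(loc)
assert year == loc.tm_year and hour == loc.tm_hour
```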
I.e., once the calling code is written, e.g. an obfuscated-symbols
version of the function may be substituted with no change in program
behavior. So in that sense, there is no coupling: the writer of the
If the caller doesn't use argument names, you can change argument names
without breaking the caller.

That makes sense now, but I've been a caller who never used argument names
for the ordered parameters ;-)
If the caller doesn't use field names in a returned tuple-with-names,
you can change the field names without breaking the caller.

I don't see any difference.

Nor do I now, except for the above aspect of ambiguity in automated
serial vs name-driven unpacking.
calling code is not forced to use any of the function's internal names
Same for a tuple-with-names being returned.

Ok, yes, you can ignore the names. But if you do use names, you live with the
name choices of the function coder (both ways, as I've learned).
in _his_ source code. This is in contrast with e.g. a returned dict or
name-enhanced tuple: he _is_ forced to use the given actual names in _his_
code, much like keyword arguments again.
Dicts are another issue. As for tuples-with-names, no way:

But this is ignoring the names. See above re name-driven unpacking ;-)
a, b, c, d, e, f, g, h, i = time.localtime()

this works, of course, and the caller doesn't have to know which field
is named tm_day and which one tm_hour or whatever else, unless the
caller WANTS to use such names for mnemonic purposes.

Ok, but name-driven unpacking will have to have other-than-plain-assignment
syntax, it now seems to me.

The situation is as strictly parallel to passing functions to ordinary
Python functions as two issues can ever be in programming: foo(a, b, c)
works but so does foo(bar=b, baz=a, fee=c) if that's the calling style
the caller prefers for mnemonic/clarity/style reasons.

Can you believe I've gone years without integrating the fact that any
named python parameter can be passed in name=value form? I've only used
them with ** syntax. Sheesh. Habits can be unnecessarily constraining ;-/

Regards,
Bengt Richter
Jul 18 '05 #61
On Fri, 19 Nov 2004 10:58:17 +0100, Alex Martelli <al*****@yahoo.com> wrote:
Bengt Richter <bo**@oz.net> wrote:
? No they're not -- a caller knows them, in order to be able to call
with named argument syntax.

I guess you mean keyword arguments, so yes,


I hate to call them 'keyword' arguments when they aren't (check with the
keyword module: it will confirm they aren't keywords!-).
but for other args I would argue that the caller's
knowledge of formal parameter names really only serves mnemonic purposes.


How does that differ from returning a tuple-with-names?


There is a small difference. In some instances, it's *necessary* to
know the name of the argument; for example, if you want to provide a
partial argument list (the only plausible option involves knowing the
default values of the omitted arguments so you can provide all
arguments as positional ones, but that ends up being about the same
as far as coupling is concerned).

On the other hand, when a named tuple is used to return a value the
caller isn't required to know the name of the argument. He can
*always* refer to it positionally. (The difference is that there are no
default values in the "return signature", if we may speak of such a beast.)
I.e., once the calling code is written, e.g. an obfuscated-symbols
version of the function may be substituted with no change in program
behavior. So in that sense, there is no coupling: the writer of the


If the caller doesn't use argument names, you can change argument names
without breaking the caller.

If the caller doesn't use field names in a returned tuple-with-names,
you can change the field names without breaking the caller.

I don't see any difference.
calling code is not forced to use any of the function's internal names


Same for a tuple-with-names being returned.
in _his_ source code. This is in contrast with e.g. a returned dict or
name-enhanced tuple: he _is_ forced to use the given actual names in _his_
code, much like keyword arguments again.


Dicts are another issue. As for tuples-with-names, no way:

a, b, c, d, e, f, g, h, i = time.localtime()

this works, of course, and the caller doesn't have to know which field
is named tm_day and which one tm_hour or whatever else, unless the
caller WANTS to use such names for mnemonic purposes.

The situation is as strictly parallel to passing functions to ordinary
Python functions as two issues can ever be in programming: foo(a, b, c)
works but so does foo(bar=b, baz=a, fee=c) if that's the calling style
the caller prefers for mnemonic/clarity/style reasons.


As I pointed out above, it's not *strictly* parallel. However, I
concede that knowledge about the positional parameters also introduces
a great deal of coupling, more than I assumed at first.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #62
Steven Bethard <st************@gmail.com> wrote:
Fernando Perez wrote:
Steven Bethard wrote:
>> r = object(year=2004, month=11, day=18)

Given that the necessary class is literally a 3-liner, ...

Even at 3 lines, do you really want to rewrite those every time you need
this functionality?


I have written the one-liner "class Generic: pass" all too many times.
:)

Generic objects can be used to represent hierarchical data in tree
fashion. As I said, generic objects are also good for
pickling/serialization. We are talking about communication between
multiple applications, across time and/or space. Other representations
include dictionaries or XML. But you can tell that when it comes to
compactness and ease of use, generic objects are the way to go. In
fact, most common hierarchical data structures only need: (1) the
generic object, (2) list, (3) basic types (numbers and strings.)

It seems odd that there is no standard generic object in Python.

Actually, if one thinks outside Python, in prototypish languages,
generic objects are more fundamental. They are the building block of
everything else. You don't build your 3-liner generic object from
something else... all on the contrary, you build everything else from
your generic object.

The syntax "object(year=2004, month=11, day=18)" certainly is nice. I
wouldn't be surprised that somewhere, someone has already written some
programming language that uses this syntax for their fundamental
generic object.
On the other hand, I usually find that in the few places where I have
used a record like this, I eventually replace the struct with a real
class...


This is true for single programs. It's true for function arguments or
outputs. In those places, generic objects are good as the quick and
easy way of using hierarchical data structure without the need of
formally defining a class. Once things deserve to be replaced by real
classes, they are replaced.

This is not true for pickling/serialization purpose. When you have
pickled data, you don't want to have to search for the definition of
the classes, which you or someone else may have written years ago. You
want to be able to unpickle/unserialize your data and use it, without
the need of class definition. Yes, dictionary can be used, but you:

(a) either use mydata['a']['b']['c']['d'] instead of mydata.a.b.c.d,
or

(b) have a class to convert dictionary-based back to object-based
(hence we come back to the problem: where is that code file of the
class that some guy wrote 7 years ago?)

If I have avoided anything more complicated than "class Generic:
pass", it's because this is a one-liner that I can remember how to
type anytime. :) Now, if in the language there is something standard,
even this trick won't be necessary. From all what I can see, "class
Generic: pass" will stay as the preferred choice for many people, for
a long time to come. Simplicity counts.

Hung Jung
Jul 18 '05 #63
Oops, let's try that again:

| class namedtuplewrapper(tuple):
| """
| wraps an existing tuple, providing names.
| """
|
| _names_ = []
|
| def __getattr__(self, name):
| try:
| x = self._names_.index(name)
| except ValueError:
| raise AttributeError, 'no such field: %s' % name
| if x >= 0:
| return self[x]
|
| class namedtuple(namedtuplewrapper):
| """
| Sugar for a class that constructs named tuples from
| positional arguments.
| """
|
| def __new__(cls, *args):
| return tuple.__new__(cls, args)
|
|
| if __name__ == '__main__':
|
| # namedtuple example
|
| class foo(namedtuple):
| _names_ = ['one', 'two', 'three', 'four', 'five']
|
| f = foo(1, 2, 3, 4, 5)
| assert f.one + f.four == f.five
|
|
| # wrapper example
|
| class loctime(namedtuplewrapper):
| _names_ = [
| 'year', 'month', 'day',
| 'hour', 'min', 'sec',
| 'wday', 'yday', 'isdst'
| ]
|
| import time
| print time.localtime()
| loc = loctime(time.localtime())
| print loc.year, loc.month, loc.day
|
| # arbitrary naming...
| loc._names_ = ['a', 'b', 'c']
| print loc.a

-- Graham

Jul 18 '05 #64
Hung Jung Lu wrote:
Steven Bethard wrote:
>>>r = object(year=2004, month=11, day=18)

[snip a bunch of good arguments for including generic objects]

It does sound like there's some support for putting something like this
into the language. My feeling is that the right place to start would be
to put such an object into the collections module. (If necessary, it
could get moved to builtins later.)

Is this something a PEP should be written for?

Steve
Jul 18 '05 #65
Steven Bethard wrote:
[snip a bunch of good arguments for including generic objects]

It does sound like there's some support for putting something like
this into the language. My feeling is that the right place to start
would be to put such an object into the collections module. (If
necessary, it could get moved to builtins later.)

Is this something a PEP should be written for?

It would need a PEP before it could have a chance of being included in
the standard lib. And if it gets rejected, at least you'd then have a
record of some rationale for *not* having a standard Generic/Bunch class.

So, yeah, if you want this to be anything other than an ephemeral Usenet
conversation, a PEP would be the next step, I think. :)

Jeff Shannon
Technician/Programmer
Credit International

Jul 18 '05 #66
Bengt Richter <bo**@oz.net> wrote:
...
I hate to call them 'keyword' arguments when they aren't (check with the
keyword module: it will confirm they aren't keywords!-). ...
TypeError: foo() got an unexpected keyword argument 'z'
Yeah, it's official python terminology, just hateful.
OTOH, I have been arguing from an untested assumption *<8^P. I didn't
realize that 'keyword' style named parameter passing could be used
with the ordinary named parameters!
Ah, OK; I hadn't realized that misapprehension on your part.
but for other args I would argue that the caller's
knowledge of formal parameter names really only serves mnemonic purposes.


How does that differ from returning a tuple-with-names?

I guess you are referring to those names' also being for optional use,
i.e., that you can unpack a tuple-with-names the old fashioned way if you
want. Hm...


Yes, I took that for granted.
but that might lead to ambiguities if we are to have automatic name-driven
unpacking. I.e.,

c,a,b = tuple_with_names

might be different from

c,a,b = tuple(tuple_with_names)
It had better not be, otherwise the tuples-with-names returned by
standard library modules such as time, os, resource, would behave
differently from these new tuple_with_names. IOW, we don't want
name-driven unpacking from these tuples with names -- at least, I most
assuredly don't. Are these tuple_with_names not iterables?! Then they
must be unpackable this way like ANY other iterable:
>>> a, b, c = dict(a=23, b=45, c=33)
>>> print a, b, c
a c b

If you're talking about backwards incompatible changes (Python 3000) it
might be good to separate the thread from one which I was participating
in because of potential interest for Python 2.5. As long as we're not
considering backwards incompatible changes, it's unthinkable to have
iterables that can't be unpacked, even though the results might not be
what you might expect.

method? Maybe we need an explicit operator for name-driven
unpacking, e.g.,

Probably some kind of distinguished syntax, though a new
assignment-operator seems way overkill for the tiny benefit.

Alex
Jul 18 '05 #67
