Bytes | Developer Community
Question about idioms for clearing a list

I know that the standard idioms for clearing a list are:

(1) mylist[:] = []
(2) del mylist[:]

I guess I'm not in the "slicing frame of mind", as someone put it, but
can someone explain what the difference is between these and:

(3) mylist = []

Why are (1) and (2) preferred? I think the first two are changing the
list in-place, but why is that better? Isn't the end result the same?
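To make the question concrete, here is a minimal sketch of where the three spellings diverge (the names a and b below are made up for illustration):

```python
a = [1, 2, 3]
b = a                   # b and a now name the SAME list object

a[:] = []               # idiom (1): empty the list in place
assert a == [] and b == [] and a is b   # b sees the change too

a = [1, 2, 3]
b = a
a = []                  # idiom (3): rebind the name a to a NEW list
assert a == []
assert b == [1, 2, 3]   # the old list lives on, still reachable via b
assert a is not b
```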

Thanks in advance.
--
Steven.
Jan 31 '06
A slim lady in a brown overcoat appears and says with a silly
French accent: "Lizten very carefully, I vill zay ziz only onze."

BTW, I happen to reply to Ed's post now, but I'm really responding
to a fairly common attitude which I find somewhat counterproductive.
I hope people are openminded and able to see the point I'm trying
to make.

Ed Singleton wrote:
The point is that having to use del to clear a list appears to the
inexperienced as being an odd shaped brick when they've already used
the .clear() brick in other places.
Agreed. The smart way to go from this stage of surprise is
not to assume that Python is broken, but to try to understand
how lists are different from e.g. dicts, and why the so-much-
smarter-than-me Python designers made it like this. Hubris is
considered a virtue in the Perl community. While not mentioned
so much, I'd say that humility is considered a virtue here.

Not that Python is perfect, but when you don't get a
"sorry, this change would break existing code, won't happen
until Python 3.0"-response, but a "study this more"-response,
the smart thing is to open your mind and try to fully grok
this.

There is a very strong ambition among the Python developers
to avoid duplication of features. This is one of the keys to
Python's ease of use and readability. Don't bother suggesting
synonyms. While there are thousands of situations where adding
just another method would make life easier in the short run,
life would be much harder if there were thousands of extra
methods in the Python core!

It isn't always possible to use one approach to a problem in all
cases. If two approaches are used, they don't usually overlap.
The "extra" approach is only used where the normal approach
doesn't work.
Having bricks that work in lots of places makes the language
'guessable'. "I've never cleared a list before, but I've cleared
dictionaries and I guess the same way would work here".
I think we both agree that Python is very useful in this regard.
It's more consistent than other languages I've worked with, and
when things seem inconsistent, that's probably deliberate, and
the apparent lack of consistency is a hint that we need to grok
how these cases are different.

You really happen to arrive at this from the wrong direction. :)
The .clear() methods in dicts and sets are there because there
is no other convenient way to empty these containers. There is
no support in these kinds of containers to refer to the whole
contents without copying. There is no <something> which lets
you do "del aDict[<something>]" to clean a dict. You can do
"for key in aDict.keys(): del aDict[key]" (looping over a copy of
the keys, since deleting entries while iterating over the dict
itself fails), but since this is a fairly common thing to do,
there is a shortcut for that called .clear().
I'm pretty sure the reason to implement it was speed rather than
convenience. As you know, "There should be one-- and preferably
only one --obvious way to do it." "Although practicality beats
purity." In other words, moving a very common loop from Python
to C was more important than a minimal interface. Don't forget
to "import this" and ponder a bit...
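Spelled out, the loop and the shortcut do the same job; a rough sketch (note the loop must go over a copy of the keys, as above):

```python
d = {'spam': 1, 'eggs': 2}

# The spelled-out loop: iterate over a copy of the keys, since
# deleting entries while iterating over the dict itself fails.
for key in list(d):
    del d[key]
assert d == {}

d = {'spam': 1, 'eggs': 2}
d.clear()               # the shortcut: one C-level call for a common task
assert d == {}
```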

Statements and operators are really fundamental in Python. We
don't support n = 1.add(2), since we have the '+' operator.
You can't do x.clear() to remove arbitrary variables x from a
namespace, but del x works for every x, whether it's a dict,
a module or an int etc. Python basically just uses methods when
the statements and operators aren't enough.

To really understand more about this, it might be useful to
ask why we can't use del to somehow empty dicts instead. In
the best case, that might possibly lead to the removal or at
least deprecation of the .clear() method in Python 3.0, but I
doubt that people would find some way that had the required
beauty.
The problem is you have to be very careful when talking about this,
not to use the C-word, because that's the hobgoblin of little minds,
whereas in almost every other field it is considered an important part
of usability.


That hobgoblin-phrase isn't very helpful. First of all, not all
who respond to questions at comp.lang.python know what they are
talking about. This is the internet. Take everything you read with
a grain of salt. But another thing I've learnt in my years as a
consultant, is that when experienced people directly and strongly
reject an idea, they usually do that based on intuition and
experience, and even if they are arguing poorly for their case,
it's in your best interest to try to understand why they react
like they do, or you'll fall down in that tar pit they were trying
to warn you about. It's like being in a foreign country and meeting
someone who is waving his arms and shouting incomprehensibly to
you. Just because you don't understand what he's saying, you shouldn't
assume that he's just talking gibberish and can be safely ignored.

The smart approach is neither to stubbornly repeat your already
rejected idea, nor to try to crush the arguments of those who
oppose you, but rather to openly and humbly try your best to see
their side of this. Don't just accept it. Try to figure out why
these really bright people have taken this standpoint. They've
probably heard your question 20 times before, and might be bored
to go through all the arguments again.

(Part of this is to figure out who are the guys you should really
listen to. If people are allowed to commit code to the Python code
base, that's a hint that they know what they are talking about...
http://sourceforge.net/project/membe...?group_id=5470
In other words, you can safely ignore me... ;^)

I'm sure Python can and will be improved in various ways, and that
some people who are fairly new to Python might come up with bright
ideas. I also suspect that those old farts (like me) who have used
it for ages might respond in a stubborn and reactionary way. That
doesn't change the fact that the overwhelming amount of
comp.lang.python threads about flaws in Python and suggestions for
improvements in the core would make Python worse rather than better
if they were implemented.

I know that the last time (the one time) I claimed that Python had
a really bizarre bug, it turned out that I was just being blind and
sloppy.
Feb 8 '06 #51

<bo****@gmail.com> wrote in message
news:11*********************@g14g2000cwa.googlegroups.com...
I am confused. Could you explain this? I was under the impression
stated above (that mappings don't support slicing), until I read the
language reference. I don't think it is slicing in the list-slicing
sense, but it does use the term "extended slicing".

http://www.python.org/doc/2.4.2/ref/slicings.html


I believe that extended slicing was introduced for multidimensional slicing
of multidimensional arrays in Numerical Python. Apparently, since such
arrays are not indexed sequentially, they are considered as mappings. The
paragraph on extended slicing is much clearer if one knows how they are
used in one of the numerical Python packages.

In the mathematical sense, a sequence can be considered to be a mapping of
counts to whatever. In some languages, sequences have been implemented
literally as a mapping by 'associative arrays', the equivalent of Python's
dict. And dicts are sometimes used in Python to implement sparse arrays.

A set is not a mapping unless its members are all ordered pairs, as in a
dict.

Terry Jan Reedy

Feb 8 '06 #52

"Magnus Lycka" <ly***@carmen.se> wrote in message
news:ds**********@wake.carmen.se...
Statements and operators are really fundamental in Python. We
don't support n = 1.add(2), since we have the '+' operator.


Actually, since the 2.2 union of type and new-style classes, which gave
attributes to all types ....
>>> 1 .__add__(2)   # note space after 1
3
>>> 1.__add__(2)    # ambiguous, interpreter won't guess
SyntaxError: invalid syntax
>>> 1..__add__(2.)
3.0

I only see this as useful for making bound methods:

>>> inc = 1 .__add__
>>> inc(2)
3
>>> dub = 2 .__mul__
>>> dub(2)
4

which is certainly nicer than the older method of writing a 'makeoper'
function that returned a nested function with either a default param or a
closure.

Terry Jan Reedy

Feb 8 '06 #53
Magnus Lycka wrote:
Ed Singleton wrote:
The point is that having to use del to clear a list appears to the
inexperienced as being an odd shaped brick when they've already used
the .clear() brick in other places.

Agreed. The smart way to go from this stage of surprise is
not to assume that Python is broken, but to try to understand
how lists are different from e.g. dicts, and why the so-much-
smarter-than-me Python designers made it like this.


That two of Python's three built-in mutable collections support
clear() is clearly a historical artifact.
Not that Python is perfect, but when you don't get a
"sorry, this change would break existing code, won't happen
until Python 3.0"-response, but a "study this more"-response,
the smart thing is to open your mind and try to fully grok
this.


The original question was about idioms and understanding, but
there's more to the case for list.clear. Python is "duck typed".
Consistency is the key to polymorphism: type X will work as an
actual parameter if and only if X has the required methods and
they do the expected things.

Emptying out a collection is logically the same thing whether
that collection is a list, set, dictionary, or user-defined
SortedBag. When different types can support the same operation,
they should also support the same interface. That's what
enables polymorphism.
--
--Bryan
Feb 8 '06 #54
Bryan Olson wrote:
The original question was about idioms and understanding, but
there's more to the case for list.clear. Python is "duck typed".
Consistency is the key to polymorphism: type X will work as an
actual parameter if and only if X has the required methods and
they do the expected things.

Emptying out a collection is logically the same thing whether
that collection is a list, set, dictionary, or user-defined
SortedBag. When different types can support the same operation,
they should also support the same interface. That's what
enables polymorphism.


I agree that emptying is logically the same thing for all of these
types. Beyond that, they don't seem to have a lot in common. It's
quite possible to support a duck typing approach that works for all
sorts of sequences, but it's fairly meaningless to use ducktyping
for conceptually different types such as dicts, lists and sets.

Do you really have a usecase for this? It seems to me that your
argument is pretty hollow.

For lists, which are mutable sequences, you add new data with .insert,
.append or .extend. You replace or remove existing data using indexing
l[x] or slicing l[x:y] in del or assignment statements. You can also
remove data with .pop or .remove. These overlapping methods have
specific uses. l.remove(x) is short for del l[l.index(x)] (it's also
faster, and that sometimes matters) and .pop() is there to support
stack-like behaviour.
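A quick sketch of how those overlapping methods relate (sample values made up):

```python
l = [10, 20, 30, 20]
l.remove(20)                    # removes the FIRST matching value
assert l == [10, 30, 20]

l = [10, 20, 30, 20]
del l[l.index(20)]              # the longhand spelling of the same thing
assert l == [10, 30, 20]

stack = [1, 2, 3]
top = stack.pop()               # .pop() supports stack-like use
assert top == 3
assert stack == [1, 2]
```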

Dicts use indexing d[x] or .update to either add new data or to replace
existing data. There is no distinction between these operations in
dicts, since dicts are semantically so different from sequences. They
have content, but no order, no specific "positions". You can delete
one item at a time with del d[x], but since slices don't make sense
for dicts, there is a d.clear() method to achieve this common task
quickly. One could imagine that it was allowed to write "del d[]" or
something like that, but then we also expect x = d[] and d[] = x to
work... We probably don't want that.

Sets are also semantically different, and thus use a different set of
operations. They share the lack of order with dicts, but they aren't
pairs, so the semantics is naturally different. They don't support
indexing at all, since they neither have order nor keys.

As far as I understand, the only operation which is currently used
by all three collections is .pop, but that takes a different number
of parameters, since these collections are conceptually different!

Then we have Queues. They have a different purpose, and again, a
different API, since they provide features such as blocking or non-
blocking reads and writes.
Feb 9 '06 #55
Magnus Lycka wrote:
Bryan Olson wrote:
The original question was about idioms and understanding, but
there's more to the case for list.clear. Python is "duck typed".
Consistency is the key to polymorphism: type X will work as an
actual parameter if and only if X has the required methods and
they do the expected things.

Emptying out a collection is logically the same thing whether
that collection is a list, set, dictionary, or user-defined
SortedBag. When different types can support the same operation,
they should also support the same interface. That's what
enables polymorphism.

I agree that emptying is logically the same thing for all of these
types. Beyond that, they don't seem to have a lot in common. It's
quite possible to support a duck typing approach that works for all
sorts of sequences, but it's fairly meaningless to use ducktyping
for conceptually different types such as dicts, lists and sets.

Do you really have a usecase for this? It seems to me that your
argument is pretty hollow.


Sure:

if item_triggering_end in collection:
    handle_end(whatever)
    collection.clear()
Or maybe moving everything from several collections into
a single union:

big_union = set()
for collection in some_iter:
    big_union.update(t)
    collection.clear()

[...] As far as I understand, the only operation which is currently used
by all three collections is .pop, but that takes a different number
of parameters, since these collections are conceptually different!


They all support len, iteration, and membership tests.

Many algorithms make sense for either sets or lists. Even if they
cannot work on every type of collection, that's no reason not
to help them be as general as logic allows.
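That shared protocol is easy to check; a small sketch:

```python
# len(), membership tests, and iteration work uniformly across the
# built-in collections, which is what generic code can rely on:
for c in ([1, 2], set([1, 2]), {1: 'x', 2: 'y'}):
    assert len(c) == 2          # len
    assert 1 in c               # membership test
    assert sorted(c) == [1, 2]  # iteration (a dict yields its keys)
```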

--
--Bryan
Feb 10 '06 #56
Bryan Olson <fa*********@nowhere.org> wrote:
...
I agree that emptying is logically the same thing for all of these
types. Beyond that, they don't seem to have a lot in common. It's ... Do you really have a usecase for this? It seems to me that your
argument is pretty hollow.


Sure:

if item_triggering_end in collection:
    handle_end(whatever)
    collection.clear()

Or maybe moving everything from several collections into
a single union:

big_union = set()
for collection in some_iter:
    big_union.update(t)
    collection.clear()


I was thinking of something different again, from a use case I did have:

def buncher(sourceit, sentinel, container, adder, clearer):
    for item in sourceit:
        if item == sentinel:
            yield container
            clearer()
        else:
            adder(item)
    yield container

s = set()
for setbunch in buncher(src, '', s, s.add, s.clear): ...

d = dict()
for dictbunch in buncher(src, '', d, lambda x: d.setdefault(x, ''),
                         d.clear): ...

L = list()
for listbunch in buncher(src, '', L, L.append,
                         lambda: L.__setslice__(0, len(L), [])): ...

the dict case is semi-goofy (since some arbitrary value must be set, one
ends up with a lambda willy nilly), but the list case is even worse,
with that horrid "lambda calling __setslice__" (eek).

BTW, we do have other mutable collections...:

import collections
q = collections.deque()
for qbunch in buncher(src, '', q, q.append, q.clear): ...

just as neat as a set.

So what is the rationale for having list SO much harder to use in such a
way, than either set or collections.deque? (For dict, I can see the
rationale for not having an 'addkey', even though the presence of class
method 'fromkeys' weakens that rationale... but for list, I cannot see
any reason that makes sense to me).
Alex
Feb 10 '06 #57
[Alex Martelli]
I was thinking of something different again, from a use case I did have:

def buncher(sourceit, sentinel, container, adder, clearer):
for item in sourceit:
if item == sentinel:
yield container
clearer()
else
adder(item)
yield container

s = set()
for setbunch in buncher(src, '', s, s.add, s.clear): ...
I'm curious, what is the purpose of emptying and clearing the same
container? ISTM that the for-loop's setbunch assignment would then be
irrelevant since id(setbunch)==id(s). IOW, the generator return
mechanism is not being used at all (as the yielded value is constant
and known in advance to be identical to s).

Just for jollies, I experimented with other ways to do the same thing:

from itertools import chain, groupby

def buncher(sourceit, sentinel, container, updater, clearer):
    # Variant 1: use iter() to do sentinel detection and
    # use updater() for fast, high volume updates/extensions
    it = iter(sourceit)
    for item in it:
        updater(chain([item], iter(it.next, sentinel)))
        yield container
        clearer()

s = set()
for setbunch in buncher(src, '', s, s.update, s.clear):
    print setbunch, id(setbunch)

def buncher(sourceit, sentinel, container, updater, clearer):
    # Variant 2: use groupby() to do the bunching and
    # use updater() for fast, high volume updates/extensions
    for k, g in groupby(sourceit, lambda x: x != sentinel):
        if k:
            updater(g)
            yield container
            clearer()

s = set()
for setbunch in buncher(src, '', s, s.update, s.clear):
    print setbunch

Of course, if you give-up the seemingly unimportant in-place update
requirement, then all three versions get simpler to implement and call:

def buncher(sourceit, sentinel, constructor):
    # Variant 3: return a new collection for each bunch
    for k, g in groupby(sourceit, lambda x: x != sentinel):
        if k:
            yield constructor(g)

for setbunch in buncher(src, '', set):
    print setbunch

Voila, the API is much simpler; there's no need to initially create
the destination container; and there's no need for adaptation functions
because the constructor APIs are polymorphic:

constructor = list
constructor = set
constructor = dict.fromkeys
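Run against a small made-up source, the constructor-based version (Variant 3 above) behaves like this; the sample src is an assumption for illustration:

```python
from itertools import groupby

def buncher(sourceit, sentinel, constructor):
    # Yield a new collection for each run of non-sentinel items.
    for k, g in groupby(sourceit, lambda x: x != sentinel):
        if k:
            yield constructor(g)

src = ['a', 'b', '', 'c', '', 'd']   # made-up sample input
bunches = list(buncher(src, '', list))
assert bunches == [['a', 'b'], ['c'], ['d']]

setbunches = list(buncher(src, '', set))
assert setbunches == [set(['a', 'b']), set(['c']), set(['d'])]
```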

[Alex]
d = dict()
for dictbunch in buncher(src, '', d, lambda x: d.setdefault(x, ''),
                         d.clear): ...

L = list()
for listbunch in buncher(src, '', L, L.append,
                         lambda: L.__setslice__(0, len(L), [])): ...
Hmm, is your original buncher a candidate for adapters? For instance,
could the buncher try to adapt any collection input to support its
required API of generic adds, clears, updates, etc.?

[Alex] So what is the rationale for having list SO much harder to use in such a
way, than either set or collections.deque?


Sounds like a loaded question ;-)

If you're asking why lists don't have a clear() method, the answer is
that they already had two ways to do it (slice assignment and slice
deletion) and Guido must have valued API compactness over collection
polymorphism. The latter is also evidenced by set.add() vs
list.append() and by the two pop() methods having different
signatures.

If you're asking why your specific case looked so painful, I suspect
that it only looked hard because the adaptation was force-fit into a
lambda (the del-statement or slice assignment won't work as an
expression). You would have had similar difficulties embedding
try/except logic or a print-statement. Guido would, of course,
recommend using a plain def-statement:

L = list()
def L_clearer(L=L):
    del L[:]
for listbunch in buncher(src, '', L, L.append, L_clearer):
    print listbunch

While I question why in-place updating was needed in your example, it
did serve as a nice way to show off various approaches to adapting
non-polymorphic APIs for a generic consumer function with specific
needs.

Nice post,
Raymond

Feb 10 '06 #58

Raymond Hettinger wrote:
[Alex]
So what is the rationale for having list SO much harder to use in such a
way, than either set or collections.deque?
Sounds like a loaded question ;-)

If you're asking why lists don't have a clear() method, the answer is
that they already had two ways to do it (slice assignment and slice
deletion) and Guido must have valued API compactness over collection
polymorphism. The latter is also evidenced by set.add() vs
list.append() and by the two pop() methods having different
signatures.

Sounds to me that it is a preference (style, whatever), rather than, as
some other posts in this thread argued, "del L[:]" being inherently better.

If you're asking why your specific case looked so painful, I suspect
that it only looked hard because the adaptation was force-fit into a
lambda (the del-statement or slice assignment won't work as an
expression). You would have had similar difficulties embedding
try/except logic or a print-statement. Guido would, of course,
recommend using a plain def-statement:

L = list()
def L_clearer(L=L):
    del L[:]
for listbunch in buncher(src, '', L, L.append, L_clearer):
    print listbunch

Is that really "clearer"? While it is still very localized (just read a
few lines up for the definition), buncher(src, '', L, L.append, L.clear)
seems clearer to me, especially as there are two similar constructs
on set/dict above; even Alex's lambda form conveys more info, IMO.
Using the new partial function, maybe it can be written as:

buncher(src, '', L, L.append, partial(L.__delslice__, 0, sys.maxint))

# assuming a list can have at most maxint items

Feb 10 '06 #59
Bryan Olson wrote:
Magnus Lycka wrote:
Do you really have a usecase for this? It seems to me that your
argument is pretty hollow.
Sure:

if item_triggering_end in collection:
    handle_end(whatever)
    collection.clear()

Or maybe moving everything from several collections into
a single union:

big_union = set()
for collection in some_iter:
    big_union.update(t)
    collection.clear()


I don't understand the second one. Where did 't' come from?
Anyway, tiny code snippets are hardly usecases. Are these from
real code? If they are, why isn't there support for emptying
lists? Have you patched your Python? Didn't you actually need
to support lists?

I still don't see any convincing usecase for the kind of
ducktyping you imply. There are certainly situations where
people have used lists or dicts before there were sets in
Python, and want to support both variants for a while at least,
but since their APIs are so different for these types, .clear()
seems like a non-issue.

If this was a problem in the real world, I bet we'd see a lot
of code with functions similar to this:

def clear(container):
    try:
        del container[:]
    except TypeError:
        container.clear()

If you *do* have this problem, this is a very simple workaround.
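A quick check of that workaround against the three built-in mutable collections (sketch; the slice deletion succeeds for lists and raises TypeError for dicts and sets, which then fall through to .clear()):

```python
def clear(container):
    try:
        del container[:]        # lists (and other sliceable sequences)
    except TypeError:
        container.clear()       # dicts and sets

for c in ([1, 2], {'a': 1}, set([1, 2])):
    clear(c)
    assert len(c) == 0
```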
As far as I understand, the only operation which is currently used
by all three collections is .pop, but that takes a different number
of parameters, since these collections are conceptually different!


They all support len, iteration, and membership tests.


Ok. Forgot that. id(), str() and repr() as well. Still, after almost
10 years of Python programming I can't remember that I ever ran into
a situation where I ever needed one single piece of code to empty
an arbitrary container. It's trivial to solve, so I wouldn't have
stopped to think about it for even a minute if it happened, but I
still don't think it happened. This was never a problem for me, and
I don't think I saw anyone else complain about it either, and I've
seen plenty of complaints! ;)

I can understand the argument about making it easy to remember how
to perform an action. I think the current situation is correct. To
introduce redundancy in this case (del x[:] <==> x.clear()) would not
be an improvement of Python. In the long run, such a strategy of
synonyms would make Python much more like Perl, and we don't want
that. So I can understand that the question pops up though (but
not why it gets such proportions).

I don't buy this duck-typing argument though. Considering how little
it would change in unifying these divergent APIs, it still sounds
as hollow to me.
Many algorithms make sense for either sets or lists. Even if they
cannot work on every type of collection, that's no reason not
to help them be as general as logic allows.

>>> class BryansList(list):
...     add = list.append
...     def clear(self):
...         del self[:]
...
>>> b = BryansList([1, 2, 3, 4, 5])
>>> b
[1, 2, 3, 4, 5]
>>> b.add(6)
>>> b.clear()
>>> b
[]

Happy now? You can keep it, I don't need it. :)
Most of us consider minimal interfaces a virtue.
Feb 10 '06 #60

Magnus Lycka wrote:
>>> class BryansList(list):
...     add = list.append
...     def clear(self):
...         del self[:]
...
>>> b = BryansList([1, 2, 3, 4, 5])
>>> b
[1, 2, 3, 4, 5]
>>> b.add(6)
>>> b.clear()
>>> b
[]

Happy now? You can keep it, I don't need it. :)
Most of us consider minimal interfaces a virtue.


What kind of performance penalty are we talking about here ? list being
such a fundamental thing, no one would like to use a slower version
just for the clear/add method. And if it is a "use when you really need
to", it would make the code harder to understand as it would be
"sometimes it is BryansList, sometimes it is builtin list".

That said, I don't find clear() very useful, since unless one needs to
pass a single list object around and save it for future use (which can
be a source of subtle bugs), just lst=[] is usually good enough for
localized usage.

Feb 10 '06 #61
Magnus Lycka wrote:
Bryan Olson wrote:
Magnus Lycka wrote:
Do you really have a usecase for this? It seems to me that your
argument is pretty hollow.

Sure:

if item_triggering_end in collection:
    handle_end(whatever)
    collection.clear()

Or maybe moving everything from several collections into
a single union:

big_union = set()
for collection in some_iter:
    big_union.update(t)
    collection.clear()

I don't understand the second one. Where did 't' come from?


Cut-and-paste carelessness. Meant to update with 'collection'.
Anyway, tiny code snippets are hardly usecases.
The task is the usecase.

[...] I still don't see any convincing usecase for the kind of
ducktyping you imply.


I didn't say I could convince you. I said that when different
types can support the same operation, they should also support
the same interface. That's what enables polymorphism.
--
--Bryan
Feb 10 '06 #62
> > If you're asking why lists don't have a clear() method, the answer is
that they already had two ways to do it (slice assignment and slice
deletion) and Guido must have valued API compactness over collection
polymorphism. The latter is also evidenced by set.add() vs
list.append() and by the two pop() methods having different
signatures.

[bonono]
Sounds to me that it is a preference (style, whatever), rather than, as
some other posts in this thread argued, "del L[:]" being inherently better.


It was simply a design decision reflecting Guido's values on language
economics.

If you're asking why your specific case looked so painful, I suspect
that it only looked hard because the adaptation was force-fit into a
lambda (the del-statement or slice assignment won't work as an
expression). You would have had similar difficulties embedding
try/except logic or a print-statement. Guido would, of course,
recommend using a plain def-statement:

L = list()
def L_clearer(L=L):
    del L[:]
for listbunch in buncher(src, '', L, L.append, L_clearer):
    print listbunch

Is that really "clearer"? While it is still very localized (just read a
few lines up for the definition), buncher(src, '', L, L.append, L.clear)
seems clearer to me, especially as there are two similar constructs
on set/dict above,


Hmm, my post was so long that the main points were lost:

* the example was tricky only because of the unnecessary in-place
update requirement

* eliminating that requirement solves the adaptation problem and
simplifies the client code

* the constructor API is polymorphic, use it

* adding clear() doesn't help with the other API variations between
set, list, dict, etc.

* Guido's decision for distinct APIs is intentional (i.e. set.add vs
list.append)

* Alex's adapter PEP is likely a better solution for forcing
polymorphism on unlike APIs

* When a lambda becomes awkward, Guido recommends a separate def

* Guido has no sympathy for atrocities resulting from squeezing
everything into one line

* Alex's example can be simplified considerably:

def buncher(sourceit, sentinel, constructor):
    for k, g in groupby(sourceit, lambda x: x != sentinel):
        if k:
            yield constructor(g)

for setbunch in buncher(src, '', set):
    print setbunch

* The improved version has no need for list.clear(). End of story.
Raymond

Feb 10 '06 #63
Raymond Hettinger wrote:
[...]
If you're asking why lists don't have a clear() method, the answer is
that they already had two ways to do it (slice assignment and slice
deletion) and Guido must have valued API compactness over collection
polymorphism.


That's a decision from long ago. Now that we have sets and
the iterable protocol, the case is quite different.
--
--Bryan
Feb 10 '06 #64
Bryan Olson wrote:
Magnus Lycka wrote:
Bryan Olson wrote:
big_union = set()
for collection in some_iter:
    big_union.update(t)
    collection.clear()


I don't understand the second one. Where did 't' come from?


Cut-and-paste carelessness. Meant to update with 'collection'.


If some_iter gives you dicts, the code above will throw away
your values, and put the set of keys in big_union. Is that what
you meant to do? I suspect most people would find this somewhat
surprising. For sets and "BryanLists" it will put a set of all
the contents of those collections in big_union. I think this
just verifies my previous arguments. It's rarely meaningful to
write functions that are meaningful for all builtin collections.
Feb 10 '06 #65
Magnus Lycka wrote:
Bryan Olson wrote:
Magnus Lycka wrote:
Bryan Olson wrote:

big_union = set()
for collection in some_iter:
    big_union.update(t)
    collection.clear()
I don't understand the second one. Where did 't' come from?

Cut-and-past carelessness. Meant to update with 'collection'.

If some_iter gives you dicts, the code above will throw away
your values, and put the set of keys in big_union. Is that what
you meant to do?


It can be quite useful. Python only recently added the set
type. Previously, the usual way to implement sets with
efficient membership testing was to use a dict where the
keys map to some irrelevant value. The above will work for
the old technique as well as for the set type.
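A sketch of that old dict-as-set technique feeding the same update call (sample data made up):

```python
# Old-style "set": a dict whose keys are the members, values irrelevant
old_set = {'a': 1, 'b': 1}

big_union = set()
big_union.update(old_set)        # iterating a dict yields its keys
big_union.update(set(['b', 'c']))
big_union.update(['c', 'd'])     # lists work too

assert big_union == set(['a', 'b', 'c', 'd'])
```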

I suspect most people would find this somewhat
surprising. For sets and "BryanLists" it will put a set of all
the contents of those collections in big_union.
That was the description: moving everything from several
collections into a single union.

I think this
just verifies my previous arguments. It's rarely meaningful to
write functions that are meaningful for all builtin collections.


That's a nonsense argument, polymorphism doesn't have to work
over every type to be useful. Functions meaningful for either
sets or lists are common; that's enough justification for
giving corresponding operations the same interface.
--
--Bryan
Feb 10 '06 #66

This discussion thread is closed

Replies have been disabled for this discussion.

Similar topics

21 posts views Thread by JustSomeGuy | last post: by
4 posts views Thread by lallous | last post: by
2 posts views Thread by Brian | last post: by
2 posts views Thread by Wong CS | last post: by
7 posts views Thread by situ | last post: by
2 posts views Thread by Antony Clements | last post: by
9 posts views Thread by Thomas Ploch | last post: by
1 post views Thread by CARIGAR | last post: by
reply views Thread by suresh191 | last post: by
reply views Thread by harlem98 | last post: by
1 post views Thread by Geralt96 | last post: by
reply views Thread by harlem98 | last post: by
By using this site, you agree to our Privacy Policy and Terms of Use.