Bytes | Software Development & Data Engineering Community


reduce() anomaly?

This seems like it ought to work, according to the
description of reduce(), but it doesn't. Is this
a bug, or am I missing something?

Python 2.3.2 (#1, Oct 20 2003, 01:04:35)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> d1 = {'a':1}
>>> d2 = {'b':2}
>>> d3 = {'c':3}
>>> l = [d1, d2, d3]
>>> d4 = reduce(lambda x, y: x.update(y), l)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'
>>> d4 = reduce(lambda x, y: x.update(y), l, {})
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'update'

- Steve.
Jul 18 '05
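For reference, both calls fail for the same reason: dict.update mutates in place and returns None, so the lambda hands None to the next reduce step. A minimal sketch of spellings that do work (shown for modern Python, where reduce lives in functools):

```python
from functools import reduce  # reduce is no longer a builtin in modern Python

d1, d2, d3 = {'a': 1}, {'b': 2}, {'c': 3}
l = [d1, d2, d3]

# dict.update returns None, so make the lambda return the accumulator itself:
d4 = reduce(lambda acc, d: (acc.update(d) or acc), l, {})
assert d4 == {'a': 1, 'b': 2, 'c': 3}

# A plain loop says the same thing more clearly:
d5 = {}
for d in l:
    d5.update(d)
assert d5 == d4
```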
Bengt Richter wrote:
...
Nope -- itertools is not about CONSUMERS of iterators, which this one
would be. All itertools entries RETURN iterators.
Ok, but what about returning an iterator -- e.g., funumerate(f, seq) --
that supplies f(x),x pairs like enumerate(seq) supplies i,x?
That doesn't seem to conflict with Raymond Hettinger's vision (he's the
itertools guru), but you might check with him directly.
[I'd suggest extending enumerate, but I already want to pass optional
range parameters there, so one could control the numeric values
returned, e.g., enumerate(seq, <params>) corresponding to
zip(xrange(<params>), seq).]
I know Raymond wants to be able to control the starting point if Guido
will allow that extension (RH is also enumerate's author) but I don't
know if he's interested in the stride and stop values, too.
[BTW, should xrange() default to xrange(0, sys.maxint, 1)?]
I doubt xrange has enough of a future to be worth extending it. I'd
rather have an irange returning an iterator (and given that itertools.count
already does basically the job you mention, treating irange without args
as an error may be more useful). No special reason to single out sys.maxint
when the int/long distinction is rapidly withering, btw.
Given that the sort method of lists now has an optional key= argument, I
Sorry, I try to remember to always say something like "now (in the 2.4
pre-alpha on CVS)" but sometimes I do forget to repeat this every time I
mention 2.4 novelties.
This is a new one on me:

>>> seq.sort(key=lambda x: x)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: sort() takes no keyword arguments


In 2.3, yes. Get a 2.4 CVS snapshot and you'll see it work.

Do you mean the comparison function? Or is there something else now too?
I'm beginning to infer that key= is actually a keyword arg for a
_function_ to get a "key" value from a composite object (in which case
Right.
ISTM "getkeyvalue" or "valuefunc" would be a better name). But IMO "key"
suggests it will be used on elements x like x[key], not passing a
definition key=whatever and then using key(x) to get values.
The concept and terminology of a "sort key" or "sorting key" is very
popular, so the concision of the 'key' attribute name was chosen in
preference to your longer suggestions. E.g., to sort a list of strings
in order of increasing string length (and otherwise stably),

thelist.sort(key=len)

was deemed preferable to

thelist.sort(getkeyvalue=len)

Considering that the key= parameter is meant to take the place of most
uses of DSU (and thus likely become extremely popular), I concur with
the current choice for the name. However, it's not etched in stone yet:
2.4 is at the pre-alpha stage. You can check the python-dev archives
for past discussions of the issue, then join python-dev to offer your
contributions, if you think they're helpful; this is a course of action
that is always open to anybody, of course.
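For concreteness, the two spellings above compare like this (key= works in 2.4+; this sketch uses modern Python):

```python
# key=len sorts by length; ties keep their original order (stable sort).
thelist = ['ccc', 'a', 'bb', 'dd']
thelist.sort(key=len)
assert thelist == ['a', 'bb', 'dd', 'ccc']  # 'bb' stays ahead of 'dd'

# The older decorate-sort-undecorate (DSU) idiom that key= replaces:
other = ['ccc', 'a', 'bb', 'dd']
decorated = [(len(s), i, s) for i, s in enumerate(other)]  # index breaks ties
decorated.sort()
other = [s for _, _, s in decorated]
assert other == ['a', 'bb', 'dd', 'ccc']
```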

think the obvious approach would be to add the same optional argument to
min
and max with exactly the same semantics. I.e., just as today:

x = max(somelist)
somelist.sort()
assert x == somelist[-1]

we'd also have

x = max(somelist, key=whatever)
somelist.sort(key=whatever)
assert x == somelist[-1]

I think I like it, other than the name. Maybe s/key/valuefunc/ ;-)


If max and min do acquire such an optional argument, it will of course
have the same name and semantics as it does for sort.

That would be max(seq, key=len) in my proposal.


That's a nice option for max (and min, and ??), but ISTM that it would


max and min are it (unless heapq or bisect grow new _classes_ where such
comparison-flexibility would also be natural to have; in the current,
function-centered state of heapq, such an optional argument would not
fit well, and the situation for bisect is not crystal-clear either way).
also be nice to have a factory for efficient iterators of this kind.
It would probably be pretty efficient then to write

maxlen, maxitem = max(funumerate(len, seq))
Yes, roughly as efficient as it is today (2.4, again) to write the
less concise
max(izip(imap(len, seq), seq))
[for a RE-iterable seq] using more general itertools entries. [If
seq is not necessarily reiterable, you'd need to add a tee to this;
again, see the itertools in the current 2.4 snapshot].

However, this does not deal with the issue of seq items needing
to be comparable (and at reasonable cost, too) when they have the
same value for len(item). max and min with an optional argument
would know to ONLY compare the "sort key", just like sort does,
NEVER the items themselves. So, to fully emulate the proposed
max and min, you need to throw in an enumerate as well.
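A sketch of that full emulation (izip/imap are 2.4 itertools entries; plain zip and map are already lazy in modern Python, so they are used here). The enumerate index settles ties, so the items themselves are never compared:

```python
# Emulating the proposed max(seq, key=len) with general iterator tools.
seq = [{'a': 1}, {'b': 2, 'c': 3}, {'d': 4}]  # dicts aren't comparable

# Without enumerate, a tie on len(item) would fall through to comparing
# two dicts and fail; the unique index decides ties first instead.
length, (index, item) = max(zip(map(len, seq), enumerate(seq)))
assert length == 2
assert item == {'b': 2, 'c': 3}
```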
or

def longest(seq):
return max(funumerate(len, seq))[-1]

and it would be available as infrastructure for other efficient loops in
addition to being tied in to specific sequence processors like max and
min.


The only issue here is whether izip(imap(len, seq), seq) is frequent
enough to warrant a new itertools entry. As it cannot take the place
of the proposed optional argument to min and max, it doesn't really
affect them directly. The general interest of
izip(imap(len, seq), enumerate(seq))
is, I think, too low to warrant a special entry just to express it.
Alex

Jul 18 '05 #151
Andrew Dalke wrote:
Alex:
I have already suggested, in a post on this thread's direct ancestor 9 days
ago, the 100%-equivalent substitution in pure, simple, faster Python:


I've only been skimming this thread. Now upon reread I see you also
looked at the standard library. No fair -- I'm supposed to be the only
one who does that! :)


On python-dev one had better come prepared with just such a preliminary
check for the inevitable objections.

Nope -- itertools is not about CONSUMERS of iterators, which this one

would
be. All itertools entries RETURN iterators.


Yup. There's no good, existing, natural place for such a function, that
I can tell.


I agree. Which is why I've been claiming that we need a module of
"iterator consumers" (accumulators?) in 2.4 to get full advantage
from the new genexp's and itertools enhancements.

Given that the sort method of lists now has an optional key= argument, I
think the obvious approach would be to add the same optional argument to

min
and max with exactly the same semantics. I.e., just as today:


That follows Moshe's comment that a "Decorate Max Undecorate",
in parallel to Decorate Sort Undecorate. Nice.


Hadn't seen Moshe comment on it, but, yes, it IS the same idea.

we'd also have

x = max(somelist, key=whatever)
somelist.sort(key=whatever)
assert x == somelist[-1]


Is 'key' also an iterable? ...


No, a callable (just like in 2.4's sort).

That would be max(seq, key=len) in my proposal.


Ahh, it's the function used to get the keys. What about 'getkey'
as a name?


See my comments to Bengt on this thread. Summarizing: 2.4 is in
pre-alpha, therefore suggestions to change it will be quite
appropriate on python-dev. The name and semantics of such an
optional argument must of course be the same as whatever ends up
being accepted for lists' sort method (currently, it's key=).

I looked through the back thread but didn't see your function
definition. Is it backwards compatible to the existing max
function?... You know, I never realized how strange max was
in the first place, so that
max(1, 2)
and
max( (1, 2) )
are both valid. It must assume that if there's only one argument
then the latter form is appropriate. Then your form must say
Yes:
>>> max(23)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: iteration over non-sequence

that if there's only one non-keyword argument then it's of
the form

max(seq, **args)

But
max(a, b, key=len)
seems reasonable as well. Then again, so does
max(seq, len)


max(seq, len) must of course remain compatible with today. Keyword
arguments are handled separately (in built-in functions).

a perfectly natural extension, it seems to me. I've found such total and
extreme opposition to this perfectly natural extension in private
correspondence about it with another Python committer that I've so far
delayed proposing it on python-dev -- for reasons that escape me it would
appear to be highly controversial rather than perfectly obvious.


My guess is that reaction is based on the need to have both forms
max(a, b)
and
max(seq)
and allow a key function to be passed in, and make it feel
natural without running into problems when the two might be
mixed up.


Interestingly enough, this objection wasn't raised. If it were,
then it would easily be squashed by analogy with somelist.sort:
somelist.sort(foo) must mean to use foo as the comparator func,
for backwards compatibility, so that one MUST give the argument
name, somelist.sort(key=foo), to use a key-extracting func instead.
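Modern Python dropped the positional comparator entirely, but functools.cmp_to_key lets one sketch the contrast between the two calling styles:

```python
from functools import cmp_to_key  # adapts an old-style comparator to a key

words = ['bb', 'a', 'ccc']

# Old style: a two-argument comparator (what sort(foo) meant in 2.x).
by_cmp = sorted(words, key=cmp_to_key(lambda a, b: len(a) - len(b)))

# New style: key= names a one-argument key extractor.
by_key = sorted(words, key=len)

assert by_cmp == by_key == ['a', 'bb', 'ccc']
```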

even though
the tasks handled by the standard library are heavily slanted
to string and text processing, networking &c, and (of course)
"pure infrastructure", rather than typical application tasks.


Yeah, I worry about that bias error when I do my analysis.
I really should also scan some of the other (proprietary) code
bases I have access to, but then it leads to claims of being
irreproducible. Besides, despite being a scientific programmer,
I mostly do "string and text processing, networking &c."


A corpus of freely available code from the Vaults of Parnassus
might make for a useful resource for such tasks.
Alex

Jul 18 '05 #152
Ville Vainio wrote:
...
[stuff that should remain builtins]
iter, len, pow [for the crucial 3-arguments case], range (or some
preferable variant that returns an iterator), and zip seem pretty


+enumerate


Yep, it sure should be there.

are quite handy for interactive use, such as locals, globals,
dir, vars, ..., are not all that frequently used in programs --
so they might live in a module that the interactive mode gets
automatically, rather than being built-ins.


locals and globals seem to have a natural place in builtins IMHO.


Why? They only allow a limited form of introspection. Why are
they more essentially "built-in" than other stuff in inspect?

All of this would be perfect for the mailing list on Python 3.0
if the latter existed. Posting it to c.l.py makes it unlikely
Guido will ever consider the discussion's results, anyway. The


How about the wiki at python.org?


Sure, one could write things up there, but I don't think it's a
medium as suitable to debate as a newsgroup or mailing list.
Alex

Jul 18 '05 #153
Alex Martelli <al***@aleax.it> writes:

[ locals() and globals() ]
Why? They only allow a limited form of introspection. Why are
they more essentially "built-in" than other stuff in inspect?


Somehow their existence in builtins has a connotation that we are
referring to the "current" namespace, instead of, say,
inspect.locals() (which might bear the connotation that we are talking
about locals of the module inspect).

I've mostly used locals() and globals() as "%(myvar)s" % locals(), and
never for inspect-ish tasks.
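That formatting idiom in a small sketch:

```python
def greet(name):
    # locals() returns the local namespace as a dict, so "%(name)s"
    # interpolates the local variable by name:
    return "Hello, %(name)s!" % locals()

assert greet("world") == "Hello, world!"
```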
How about the wiki at python.org?


Sure, one could write things up there, but I don't think it's a
medium as suitable to debate as a newsgroup or mailing list.


Yes, it isn't. How about a "py3k-sig"?

--
Ville Vainio http://www.students.tut.fi/~vainio24
Jul 18 '05 #154
Douglas Alan wrote:
Alex Martelli <al***@aleax.it> writes:
But to be consistent with your other arguments, no doubt you'd argue
for a .sort() followed by [-1] as "more general" than max...
That would have a larger big O time growth value -- most likely
O(n * log n) vs. O(n), for reasonable implementations. And while I


If the sequence is carefully randomized, yes. If the sequence has
any semblance of pre-existing order, the timsort is amazingly good
at exploiting it, so that in many real-world cases it DOES run as
O(N).
wouldn't sweat a factor of 2 for a feature of a RAD or scripting
language, I would be more concerned about moving to a larger big O
value.
Me too! That's why I'd like to make SURE that some benighted soul
cannot code:

onebigstring = reduce(str.__add__, lotsofstrings)

and get O(N squared) performance, versus the O(N) performance of
the correct Python idiom, ''.join(lotsofstrings). At least sum
does give an error message, AND a reminder that ''.join should be
used, when you try sum(lotsofstrings, '') -- reduce just slows
your program down by a LOT silently and underhandedly...
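Both behaviors are easy to check (a sketch for modern Python, where reduce lives in functools):

```python
from functools import reduce

lotsofstrings = ['ab'] * 1000

# O(N**2): every step copies the whole accumulated string so far.
slow = reduce(str.__add__, lotsofstrings)

# O(N): join computes the total size once, then copies each piece once.
fast = ''.join(lotsofstrings)
assert slow == fast and len(fast) == 2000

# sum() refuses strings outright and points at the right idiom:
raised = False
try:
    sum(lotsofstrings, '')
except TypeError as e:
    raised = 'join' in str(e)
assert raised
```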

for all items in the sequence of numbers". But, reduce as it exists
in Python today cannot be clearly defined EXCEPT by such an
iteration with some generic "callable which accepts two arguments"
specified in lieu of the operator.add. Given this total generality,
even though hardly ever does it make sense to USE it, reduce becomes
quite complex.


Your proposed extension to max() and min() has all the same problems.


Not at all. Maybe you have totally misunderstood "my proposed
extension"? There is NO "callable which accepts two arguments"
involved -- and therefore no complications at all. Indeed, what
I propose is simply to ensure that under all circumstances

x = max(iterable, key=k)

binds x to exactly the same value which would be be bound by

x = list.sorted(iterable, key=k)[-1]

in Python 2.4 (max can of course guarantee O(N) behavior, while
list.sorted cannot -- this optimization, as well as the slight
simplification, would be my motivations for having key= in max).
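That equivalence can be checked directly in modern Python (where sorted() is a builtin rather than the 2.4 list.sorted); note it is exact only when no two items share a key, since max returns the first maximal item while a stable sort leaves the last one at position [-1]:

```python
import random

data = ['a', 'bb', 'ccc', 'dddd', 'eeeee']  # all key values distinct
random.shuffle(data)

# The proposed invariant: max with key= agrees with sorting by key
# and taking the last element -- but max runs in O(N).
x = max(data, key=len)
assert x == sorted(data, key=len)[-1] == 'eeeee'
```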
But reasonable programmers don't abuse this generality, and so there
So, you're claiming that ALL people who were defending 'reduce' by
posting use cases which DID "abuse this generality" are unreasonable?
this urge to be stifled. Don't take my word for it -- ask Paul
Graham. I believe he was even invited to give the Keynote Address at
a recent PyCon.
However, if you agree with Paul Graham's theories on language design,
you should be consistent, and use Lisp. If you consider Python to be
preferable, then there must be some point on which you disagree with
him. In my case, I would put "simplicity vs generality" issues as
the crux of my own disagreements with Dr. Graham.

So, Python is not designed as PG would design it (else it would be
Lisp). It's designed with far greater care for simplicity, and for
practicality, and with a jaundiced eye against excess generality.
Indeed, 'sum' itself lost substantial generality between my original
conception of it and the way Guido eventually accepted it into the
built-ins -- and Guido's on record as regretting that he did not
remove even _more_ generality from it.

You've even argued in this thread, incorrectly, that reduce could be
eliminated if only all binary operators were able to accept arbitrary
numbers of arguments. This, plus your insistence that 'reduce' is
"just like" APL's / (which does NOT accept arbitrary functions on its
left -- just specific operators such as +), indicate a _misconception_
of reduce on your part. I'm sure you don't hold it consciously, but
these miscommunications indicate that even you, reduce's paladin, do
NOT properly grasp reduce "intuitively".


Just what is it that I don't grasp again? I think my position is
clear: I have no intention to abuse reduce(), so I don't worry myself
with ways in which I might be tempted to.


Yet you want reduce to keep accepting ANY callable that takes two
arguments as its first argument, differently from APL's / (which does
NOT accept arbitrary functions on its left); and you claimed that reduce
could be removed if add, mul, etc, would accept arbitrary numbers of
arguments. This set of stances is not self-consistent.

Having (e.g.) add accept a sequence argument (like max does), or, for
explicitness and potentially performance, grow an add.reduce attribute
just like in Numeric, would give no particular problems. I'd still
want (just like Numeric does) to have sum as a name for add.reduce,
because it's by far the most common case


So, now you *do* want multiple obviously right ways to do the same
thing?


sum(sequence) is the obviously right way to sum the numbers that are
the items of sequence. If that maps to add.reduce(sequence), no problem;
nobody in their right mind would claim the latter as "the one obvious
way", exactly because it IS quite un-obvious.

and avoids getting into the issue of what the (expletive deleted)
does "reducing" have to do with anything that "add.reduce" does
(it's really a curve ball compared with the many meanings of
"reduce" in English, after all).


The English word "reduce" certainly has multiple meanings, but so
does "sum". It can be a noun or a verb, for instance. It can mean


Any noun can be verbed, but that's a problem, or feature, of English
as a natural language, and unrelated to programming languages.

The point is that the primary meaning of "reduce" is "diminish",
and when you're summing (positive:-) numbers you are not diminishing
anything whatsoever ... unless you think in terms of multidimensional
arrays and diminishing dimensionality, but while that's quite OK in
APL or Numeric, which DO have multidimensional arrays, it's silly in
Python proper, which doesn't. The primary meaning of "sum" is "sum",
so I have never met anybody having the slightest problem understanding
or using it (including both people with English as their mother
tongue, and others, all the way to people with near-zero command of
English: it helps, here, that "summare" is a _Latin_ word -- so is
"reducere", but with the same primary meaning as in English:-).
"summary" or "gist" in addition to addition. It also can be confusing
by appearing to be just a synonym for "add". Now people might have
trouble remembering what the difference between sum() and add() is.
Got any relevant experience teaching Python? I have plenty and I have
never met ANY case of the "trouble" you mention.

In Computer Science, however, "reduce" typically only has one meaning
when provided as a function in a language, and programmers might as
well learn that sooner than later.


I think you're wrong. "reduce dimensionality of a multi-dimensional
array by 1 by operating along one axis" is one such meaning, but there
are many others. For example, the second Google hit for "reduce
function" gives me:

http://www.gits.nl/dg/node65.html

where 'reduce' applies to rewriting for multi-dot grammars, and
the 5th hit is

http://www.dcs.ed.ac.uk/home/stg/NOTES/node31.html

which uses a much more complicated generalization:

reduce(g, e, m, n, f) = g(e, g(f(m), g(f(m+1), ... g(f(n-1), g(f(n))) ... )))

not to mention Python's own __reduce__, etc. And the 'reduce'
given at
http://w3future.com/html/stories/hop.xml
is what we might code as sum(map(templateFunction, sequence))
while
http://csdl.computer.org/comp/trans/...4/i0364abs.htm
deals with "the derivation of general methods for the L/sub 2/
approximation of signals by polynomial splines" and defines
REDUCE as "prefilter and down-sampler" (which is exactly as I
might expect it to be defined in any language dealing mostly
with signal processing, of course).

So, it's quite sensible for people to be confused about the
meaning of 'reduce' within a programming language.

But, most importantly, such a design would avoid the veritable traps
to which the current, too-general 'reduce' subjects the poor learner
who's quite reasonably going to believe that all that generality
MUST be good for something if it's been designed into a built-in.
We've seen quite a few such inappropriate uses on this thread, after
all.


That's very easy to fix:

FAQ
---
Q. Should I ever pass a function with side effects into reduce() or
map()?

A. No.

(Unless the side-effect is just printing out debugging information,
or saving away statistics, or something like that.)


Designing an over-general approach, and "fixing it in the docs" by
telling people not to use 90% of the generality they so obviously
get, is not a fully satisfactory solution. Add in the caveats about
not using reduce(str.__add__, manystrings), etc, and any reasonable
observer would agree that reduce had better be redesigned.

Again, I commend APL's approach, also seen with more generality in Numeric
(in APL you're stuck with the existing operator on the left of + -- in
Numeric you can, in theory, write your own ufuncs), as saner. While not
quite as advisable, allowing callables such as operator.add to take
multiple arguments would afford a similarly _correctly-limited generality_
effect. reduce + a zillion warnings about not using most of its potential
is just an unsatisfactory combination.
Alex

Jul 18 '05 #155
Terry Reedy wrote:
"Alex Martelli" <al***@aleax.it> wrote in message
news:%N********************@news2.tin.it...
to be consistent with your other arguments, no doubt you'd argue for
a .sort() followed by [-1] as "more general" than max...
By my definition of "more general", a function that returns all order
statistics of a list is trivially more general than one that returns
any fewer, including just one. It conveys a superset of the


Of course. So...?
infomation conveyed by less general functions and answers a superset
of the questions they answer. If you add a control parameter
specifying which order statistics to return, then less general
functions have a restricted domain.

So I have no idea your intent in appearing to challenge this and how
you think it contributes to your overall argument. Is your idea of
'more general' so different?


Hmmm, nobody else seemed to have any problem understanding my quip,
including the poster at which it was directed. Let me spell it out
for you: a few people are attacking 'sum' and defending 'reduce'
because the latter is "more general". Why, so is sorting a sequence
"more general" than finding its maximum; for "consistency" with their
other arguments (which by this quip I'm alleging are no good) they
should then be clamoring for the removal of 'max' to leave only 'sort'.
(I did expect an O()-based objection and punctually got it, ready for
my repartee about reduce(str.add, lotsastrings) being O(N squared),
etc, etc).

So, no, I don't think my idea of "more general" is different from
yours: e.g., a function that, given a sequence, returns its length
AND the number of times 'None' occurs as an item, is more general
than one which just returns the length. That does not make it in
any intrinsic way "necessarily preferable" -- and THAT is my point.
Alex

Jul 18 '05 #156
Terry Reedy wrote:
"Alex Martelli" <al***@aleax.it> wrote in message
news:8Y*******************@news1.tin.it...
Dave Brueck wrote:
...
>> results = [ func(x) for x in sequence ]
>> ... instead of ...
>> results = sequence.map(func) ??
> Because I find the first much more readable
I entirely agree with both points.
For this pair, I like the second better. Different aesthetics.


I guess. I just can't imagine EVERY iterable automagically
growing a 'map' method without feeling shudders of terror at
the total ugliness and gratuitous "blackmagicness" of the idea.

They're even clearer when the contrast is between, e.g.:
results = [ x+23 for x in sequence ]
and:
results = sequence.map(lambda x: x+23)
where using the HOF approach forces you
to understand (and read) lambda too.


Here I might take the first. 'lambda' is something I feel 'stuck'
with.


I cherish being able to use the same construct, list comprehension,
whether I already have a callable ready or not.

Would the hypothetical
results = sequence.map(func x: x+23)
be any better?
Just replacing the keyword 'lambda' with 'func'? If you were
designing a green-field language, and couldn't find any other way
to express callable literals -- so it only came down to a 2-way
choice between lambda and func as keywords with the same semantics,
I guess I would suggest func as the lesser evil.
How about a very hypothetical (post ``==repr deprecation)
results = sequence..map(`x+23`)


How would this notation imply that x is an argument rather than,
say, a global?
Alex

Jul 18 '05 #157
Terry Reedy wrote:

"Alex Martelli" <al***@aleax.it> wrote in message
news:IT********************@news2.tin.it...
I'd rather put map/filter/reduce in a 'legacy' module -- warts and
all, e.g. map's horrible habit of None-padding shorter sequences,
EEK -- and have a designed-from-scratch functional module without
the warts.
What a liberating thought. I sometimes forget that the only thing I
am really stuck with when programming in Python is the core syntax (in
the Ref Manual) and that I am free to wrap or replace anything else
(in the Lib Manual). For a project I hope to start soon, I have been
worried about how to work around or explain away the deficiencies of
Python's of reduce(). Instead, I should just write the version I want
for my purposes (especially since speed does not matter). So your
voluminous writing on this has benefited at least one person.


Glad to hear this.

Probably, yes. What functions, as opposed to types, do you
think should absolutely be in the builtins rather than in a separate
module, and (unless it's obvious) why?


I have thought some about this also. Type objects could be put/left
in types, but as constructors, they are too useful and basic not to
have immediately available (unless builtins was reduced to nearly
nothing).


Yes, I think the types should stay right in the builtins.

Of course __import__ had better stay or we won't
be able to get anything imported, for example:-).


Does it have to be in builtins for import statements to work?


No, it would be technically possible to redefine "import"'s semantics
to take it from anywhere.
Direct use of __import__ seems relatively rare.


It depends on what you're comparing against, I guess, but yes,
it's not a staple of most programming styles. It's there more
to be overridden than to be directly called.

The attribute-related functions hasattr, getattr,
setattr, and delattr, seem very fundamental
(but I'd want the same status for the item-functions
currently over in operator),
as well as
(at least some of them -- delattr isn't -- but I do think that

trying to
discriminate too finely would make things too hard to learn)
very frequently used in code.


I had to re-lineate this to parse apart the three thoughts. A static
frequency-of-use analysis for some large code collection would be
interesting. I might write something eventually.


I'm pretty sure that checking for existence and getting the value
are more frequent operations than setting a new value, and that, in
turn, more frequent than deletion. That's "just the way it is" --
it's much the same as in, e.g., SQL (SELECT overwhelming more used,
INSERT and UPDATE less often, DELETE quite rarely).

iter

I group this with type constructor objects, even though not one
itself


It's a factory function, yes. Not technically a type, but the
connection in terms of use is indeed close.
len

unless made a method, keep in


A method of every sequence and container on Earth? Eeek.
pow [for the crucial 3-arguments case]

very specialized and rarely used? move to math as pow3
range (or some preferable variant that returns an iterator)

how about iter(3), etc, currently an arg type error?
think of 3 as the sentinel value.
is not typing '0,' worth the wartiness of optional first arg?
and zip seem pretty fundamental

agree except for pow


Yes, I can see pow could be controversial here.
Alex

Jul 18 '05 #158
"Andrew Dalke" <ad****@mindspring.com> writes:
Surely he was talking about implementing "list-alikes"...?
Yes, I think you're right about that, and I misinterpreted
his statement.


Yes, I was talking about "list-alikes".
Then again, is the change that "all Python list-alikes must
implement stable sort" or that "Python native lists shall
now and forever implement stable sort"?
The change was that native lists will implement stable sort. My gripe
is that if native lists are required to sort stably and list-alikes
can sort unstably, then list.sort and listalike.sort act differently
in a way that can lead to subtle bugs.
That's a statement only that list.sort shall be stable and
not that all .sort() methods must be stable.


It would be icky if some .sort() methods are required to be stable but
others are not.

Note that the most obvious way to implement sort() in C is to call the
standard C library qsort function, which is unstable.
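From the caller's side, the guarantee is concrete; a small sketch of what a stable sort promises (and what a qsort-backed sort may not):

```python
# Stability: records with equal keys keep their original relative order.
records = [('alice', 2), ('bob', 1), ('carol', 2), ('dave', 1)]
records.sort(key=lambda r: r[1])
assert records == [('bob', 1), ('dave', 1), ('alice', 2), ('carol', 2)]

# This is what enables multi-pass sorting: sort by the secondary key
# first, then by the primary; ties in the primary keep secondary order.
```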
Jul 18 '05 #159
In article <7x************@ruckus.brouhaha.com>,
Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
That's a statement only that list.sort shall be stable and
not that all .sort() methods must be stable.


It would be icky if some .sort() methods are required to be stable but
others are not.

Note that the most obvious way to implement sort() in C is to call the
standard C library qsort function, which is unstable.


You're complaining that stability makes implementing a list-alike's sort
trickier. However it also can make using sort simpler. Which do you
think happens more often?

--
David Eppstein http://www.ics.uci.edu/~eppstein/
Univ. of California, Irvine, School of Information & Computer Science
Jul 18 '05 #160
