does
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
Intuitively, assuming list.sort() does a minimal number of comparisons to
achieve the sort, I'd say the answer is yes. But I don't feel quite comfortable
with the intuition... can anyone think of a more solid argumentation ?
 BB

"On naÃ®t tous les mÃ¨tres du mÃªme monde" 19 2978
Boris Borcic wrote:
does
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
Intuitively, assuming list.sort() does a minimal number of comparisons to
achieve the sort, I'd say the answer is yes. But I don't feel quite
comfortable with the intuition... can anyone think of a more solid
argumentation ?
Anecdotal evidence suggests the answer is no:
>>> histo = {}
>>> for i in xrange(1000):
...     t = tuple(sorted(range(3), lambda x, y: cmp(random.random(), 0.5)))
...     histo[t] = histo.get(t, 0) + 1
...
>>> sorted(histo.values())
[60, 62, 64, 122, 334, 358]
versus:
>>> histo = {}
>>> for i in xrange(1000):
...     t = tuple(sorted(range(3), key=lambda x: random.random()))
...     histo[t] = histo.get(t, 0) + 1
...
>>> sorted(histo.values())
[147, 158, 160, 164, 183, 188]
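For what it's worth, Peter's experiment can be reproduced on modern Python 3, where the cmp argument and xrange are gone; the sketch below (function names are mine, not from the thread) uses functools.cmp_to_key to emulate the coin-flip comparator:

```python
import random
from collections import Counter
from functools import cmp_to_key

def histogram(sorter, trials=10000):
    """Tally which permutation of [0, 1, 2] each call to `sorter` produces."""
    counts = Counter()
    for _ in range(trials):
        counts[tuple(sorter([0, 1, 2]))] += 1
    return counts

def random_cmp_sort(seq):
    # The approach under discussion: each comparison is an independent coin flip.
    return sorted(seq, key=cmp_to_key(lambda x, y: -1 if random.random() < 0.5 else 1))

def random_key_sort(seq):
    # The alternative: sort on an independent random key per element.
    return sorted(seq, key=lambda x: random.random())

print(sorted(histogram(random_cmp_sort).values()))  # noticeably skewed
print(sorted(histogram(random_key_sort).values()))  # roughly flat
```

The skew of the cmp-based version mirrors Peter's 2.x figures above; the key-based version produces roughly equal counts for all six permutations.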
Peter
Boris Borcic wrote:
does
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
Intuitively, assuming list.sort() does a minimal number of comparisons to
achieve the sort, I'd say the answer is yes. But I don't feel quite comfortable
with the intuition... can anyone think of a more solid argumentation ?
Why not use the supplied shuffle method?
random.shuffle(x)
Dustan wrote:
Boris Borcic wrote:
does
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
Intuitively, assuming list.sort() does a minimal number of comparisons to
achieve the sort, I'd say the answer is yes. But I don't feel quite comfortable
with the intuition... can anyone think of a more solid argumentation ?
Why not use the supplied shuffle method?
random.shuffle(x)
or check out this thread: http://groups.google.com/group/comp....vc=2&q=shuffle
Iain
[ Boris Borcic]
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
Say len(x) == N. With Python's current sort, the conjecture is true
if and only if N <= 2.
Intuitively, assuming list.sort() does a minimal number of comparisons to
achieve the sort,
No idea what that could mean in a rigorous, algorithm-independent way.
I'd say the answer is yes. But I don't feel quite comfortable
with the intuition... can anyone think of a more solid argumentation ?
If a list is already sorted, or reverse-sorted, the minimum number of
comparisons needed to determine that is N-1, and Python's current sort
achieves that. It first looks for the longest ascending or descending
run. If comparison outcomes are random, it will decide "yes, already
sorted" with probability 1/2**(N-1), and likewise for reverse-sorted.
When N > 2, those are larger than the "correct" probabilities 1/(N!).
So, e.g., when N=3, the sort will leave the list alone 1/4th of the
time (and will reverse the list in place another 1/4th). That makes
the identity and reversal permutations much more likely than "they
should be", and the bias is worse as N increases.
Of course random.shuffle(x) is the intended way to effect a
permutation uniformly at random.
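Tim's probabilities are easy to check numerically; a small sketch (mine, for illustration) comparing 1/2**(N-1) against 1/N!:

```python
from math import factorial

# With random comparison outcomes, the run-detection step reports "already
# sorted" with probability 1/2**(N-1); a uniform shuffle would give the
# identity permutation probability 1/N!. The two agree only for N <= 2.
for n in range(2, 7):
    p_run = 1 / 2 ** (n - 1)
    p_uniform = 1 / factorial(n)
    print(f"N={n}: 1/2**(N-1)={p_run:.4f}  1/N!={p_uniform:.4f}  "
          f"bias x{p_run / p_uniform:.1f}")
```

For N=2 the bias factor is exactly 1, matching the "true if and only if N <= 2" claim; for N=3 it is already 1.5, and it grows factorially from there.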
Boris Borcic <bb*****@gmail.com> writes:
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
You really can't assume anything like that. Sorting assumes an order
relation on the items being sorted, which means if a < b and b < c,
then a < c. If your comparison operation doesn't preserve that
property then the sort algorithm could do anything, including looping
forever or crashing.
Paul Rubin wrote:
Boris Borcic <bb*****@gmail.com> writes:
>x.sort(cmp = lambda x,y : cmp(random.random(),0.5)) pick a random shuffle of x with uniform distribution ?
You really can't assume anything like that. Sorting assumes an order
relation on the items being sorted, which means if a < b and b < c,
then a < c. If your comparison operation doesn't preserve that
property then the sort algorithm could do anything, including looping
forever or crashing.
Not if it does the minimum number of comparisons to achieve the sort, in which
case it won't ever call cmp() if the result is predetermined by the order
relation and previous comparisons, so that it will never get from this
comparison function a system of answers that's not consistent with an order
relation. That's obvious at least in the case where random.random() == 0.5 never
occurs (and at first sight even the latter case shouldn't change it).
Best, BB

"On naît tous les mètres du même monde"
Thanks for these details. BB
Tim Peters wrote:
[ Boris Borcic]
>x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
Say len(x) == N. With Python's current sort, the conjecture is true
if and only if N <= 2.
>Intuitively, assuming list.sort() does a minimal number of comparisons to achieve the sort,
No idea what that could mean in a rigorous, algorithm-independent way.
>I'd say the answer is yes. But I don't feel quite comfortable with the intuition... can anyone think of a more solid argumentation ?
If a list is already sorted, or reverse-sorted, the minimum number of
comparisons needed to determine that is N-1, and Python's current sort
achieves that. It first looks for the longest ascending or descending
run. If comparison outcomes are random, it will decide "yes, already
sorted" with probability 1/2**(N-1), and likewise for reverse-sorted.
When N > 2, those are larger than the "correct" probabilities 1/(N!).
So, e.g., when N=3, the sort will leave the list alone 1/4th of the
time (and will reverse the list in place another 1/4th). That makes
the identity and reversal permutations much more likely than "they
should be", and the bias is worse as N increases.
Of course random.shuffle(x) is the intended way to effect a
permutation uniformly at random.
Boris Borcic wrote:
Paul Rubin wrote:
>Boris Borcic <bb*****@gmail.com> writes:
>>x.sort(cmp = lambda x,y : cmp(random.random(),0.5)) pick a random shuffle of x with uniform distribution ?
You really can't assume anything like that. Sorting assumes an order relation on the items being sorted, which means if a < b and b < c, then a < c. If your comparison operation doesn't preserve that property then the sort algorithm could do anything, including looping forever or crashing.
Not if it does the minimum number of comparisons to achieve the sort, in
which case it won't ever call cmp() if the result is predetermined by
the order relation and previous comparisons, so that it will never get
[read: "by /the assumption of an order relation/ and previous comparisons"]
from this comparison function a system of answers that's not consistent
with an order relation. That's obvious at least in the case where
random.random() == 0.5 never occurs (and at first sight even the latter
case shouldn't change it).
this (the idea that /if/ the algorithm was optimal it would only sample the
random comparison function up to a system of answers consistent with an order
relation) is actually what prompted my question,
iow "is it good for anything ?"
Best, BB

"On naît tous les mètres du même monde"
Boris Borcic wrote:
Paul Rubin wrote:
>Boris Borcic <bb*****@gmail.com> writes:
>>x.sort(cmp = lambda x,y : cmp(random.random(),0.5)) pick a random shuffle of x with uniform distribution ?
You really can't assume anything like that. Sorting assumes an order relation on the items being sorted, which means if a < b and b < c, then a < c. If your comparison operation doesn't preserve that property then the sort algorithm could do anything, including looping forever or crashing.
Not if it does the minimum number of comparisons to achieve the sort, in
which case it won't ever call cmp() if the result is predetermined by
the order relation and previous comparisons, so that it will never get
from this comparison function a system of answers that's not consistent
with an order relation. That's obvious at least in the case where
random.random() == 0.5 never occurs (and at first sight even the latter
case shouldn't change it).
To be more convincing... assume the algorithm is optimal and calls
cmp(x,y). Either the previous calls (+ assuming an order relation) give no
conclusion as to the possible result of the call, in which case the result
can't contradict an order relation; or the previous calls (+ assumption)
provide only partial information; iow the algorithm knows for example x<=y
and needs to determine which of x<y or x==y is the case; in which situation
the comparison function could break the assumption of an order relation by
answering e.g. "x>y".
But is it possible for a sort algorithm assuming an order relation to reach
e.g. x<=y but neither x<y nor x==y, using calls to the comparison function
involving either x or y and some other variables? No, since the axiom of
order applies to data that never has the form of weak inequalities, and thus
will never infer an inequality that's not strict.
Ok, this isn't well written, but I have to run.
Cheers, BB

"On naît tous les mètres du même monde".
Boris Borcic <bb*****@gmail.com> writes:
To be more convincing... assume the algorithm is optimal and calls
That assumption is not even slightly realistic. Real-world sorting
algorithms almost never do a precisely minimal amount of comparison.
Boris Borcic wrote:
does
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
Intuitively, assuming list.sort() does a minimal number of comparisons to
achieve the sort, I'd say the answer is yes. But I don't feel quite comfortable
with the intuition... can anyone think of a more solid argumentation ?
Try this:
x.sort(key=lambda x: random.random())
Raymond
Boris Borcic wrote:
does
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
Intuitively, assuming list.sort() does a minimal number of comparisons to
achieve the sort, I'd say the answer is yes.
You would be mistaken (except for the trivial case of 2 elements).
In m uniform, independent, random 2-way choices, there are 2**m
equally probable outcomes. We can map multiple random outcomes
to the same final output, but each will still have probability of the
form n/2**m, where n is an integer.
A random permutation requires that we generate outputs with
probability 1/(n!). For n>2, we cannot reach that probability using a
limited number of 2-way choices.
Have you ever looked at the problem of making a perfectly uniform
1-in-3 choice when the only source of randomness is a perfect random
bit generator? The algorithms terminate with probability 1, but are
non-terminators in that there is no finite number of steps in which
they must terminate.
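The 1-in-3 problem Bryan mentions is classically solved by rejection sampling; a minimal sketch (my code, not Bryan's):

```python
import random

def one_in_three():
    """Uniform draw from {0, 1, 2} using only fair coin flips.

    Two flips give a uniform value in {0, 1, 2, 3}; reject 3 and retry.
    Terminates with probability 1, but there is no fixed number of flips
    within which it must terminate -- exactly the behaviour described above.
    """
    while True:
        v = 2 * random.getrandbits(1) + random.getrandbits(1)
        if v < 3:
            return v
```

Each two-flip attempt succeeds with probability 3/4, so the expected number of flips is 2 * 4/3 = 8/3, even though the worst case is unbounded.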

Bryan
Boris Borcic wrote:
does
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
Intuitively, assuming list.sort() does a minimal number of comparisons to
achieve the sort, I'd say the answer is yes. But I don't feel quite comfortable
with the intuition... can anyone think of a more solid argumentation ?
 BB

"On naît tous les mètres du même monde"
Someone has already said this, but I'm not sure we can ignore exactly
what algorithm we are using. With that in mind, I'll just arbitrarily
choose an algorithm to use for analysis. I know you want something that
works in N*log(N) where N is the length of the list, but I'm going to
ignore that and consider selection sort for the sake of a "more solid
argument".
In that case, you would NOT achieve a "uniform" distribution. I quote
that because I'm going to make up a definition, which I hope
corresponds to the official one ;). To me, uniform will mean if we look
at any position in the list, element e has 1/N chance of being there.
Let e be the element which was in the first position to begin with.
What are its chances of being there after being "sorted"? Well, in
order for it to still be there, it would have to survive N rounds of
selection. In each selection, it has 1/2 chance of surviving. That
means its total chance of survival is 1/(2**N), which is much less than
1/N. QED!
***
After writing that down, I thought of an argument for an N*log(N)
algorithm, which would cause the resulting list to be uniformly random:
tournament sort (I think this is also called binary tree sort). How
many rounds does an element have to win in order to come out on top? A
number which approaches log2(N). This is like before, except this
element doesn't have to survive as many rounds; therefore, it's total
chance of coming out on top is 1/(2**log2(N)) == 1/N. Hoorah!
***
After considering that, I realized that even if your idea to shuffle a
list did work (can't tell because I don't know how Python's sort
works), it would not be an optimal way to shuffle a list even though
Python uses an N*log(N) sort (at least I hope so :P). This is because
you can shuffle in time proportional to the length of the list.
You can accomplish this by doing what I will call "random selection".
It would be like linear selection, except you wouldn't bother checking
the head against every other element. When you want to figure out what
to put at the head, just pick at random! I suspect this is what Python
does in random.shuffle, since it is rather easy to see it would
result in something uniform (I swear I haven't looked at the source
code for random.shuffle :P).
From: "danielx" <da********@berkeley.edu>
Date: 22 Jul 2006 01:43:30 -0700
Boris Borcic wrote:
does
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
pick a random shuffle of x with uniform distribution ?
....
Let e be the element which was in the first position to begin with.
What are its chances of being there after being "sorted"? Well, in
order for it to still be there, it would have to survive N rounds of
selection. In each selection, it has 1/2 chance of surviving. That
means its total chance of survival is 1/(2**N), which is much less than
1/N. QED!
This proof makes sense to me if the sorting algorithm makes a random
decision every time it swaps.
Couldn't we easily get an n*log(n) shuffle by performing a _single_
mapping of each element in the collection to a pair made up of a
random key and its value, and then sorting based on the keys and
stripping them out? i.e., in O(n) time, turn
"2 clubs", "2 diamonds", "2 hearts", "2 spades", "3 clubs"
into something like
[0.395, "2 clubs"], [0.158, "2 diamonds"], [0.432, "2 hearts"], [0.192, "2 spades"], [0.266, "3 clubs"]
and then in O(n*log(n)) time, sort it into
[0.158, "2 diamonds"], [0.192, "2 spades"], [0.266, "3 clubs"], [0.395, "2 clubs"], [0.432, "2 hearts"]
and then strip out the keys again in O(n) time?
Or perhaps this has been discussed already and I missed that part? I
just joined the list... apologies if this is a repeat.
You can accomplish this by doing what I will call "random selection".
It would be like linear selection, except you wouldn't bother checking
the head against every other element. When you want to figure out what
to put at the head, just pick at random! I suspect this is what Python
does in random.shuffle, since it is rather easy to see it would
result in something uniform (I swear I haven't looked at the source
code for random.shuffle :P).
But, after making the linear selection, don't you still need to use
O(log(n)) time to find and remove the item from the collection? I
don't know much about Python's internal data structures, but if
there's an array in there, you need O(n) time to move up everything
after the deleted item; if there's a list, you need O(n) time to find
the element you picked at random; if there's a balanced tree, you
could find it and delete it in O(log(n)) time...
Help me review my undergraduate data structures here  is there some
way to pick each item _exactly_once_ in O(n) time?
Dave Wonnacott
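The decorate-sort-undecorate scheme David describes can be sketched as follows (illustrative code, names mine):

```python
import random

def dsu_shuffle(items):
    """Shuffle by pairing each item with an independent random key, sorting
    on the keys, and stripping them: O(n) + O(n log n) + O(n) overall."""
    decorated = [(random.random(), item) for item in items]  # O(n) decorate
    decorated.sort(key=lambda pair: pair[0])                 # O(n log n) sort
    return [item for _, item in decorated]                   # O(n) undecorate
```

Sorting on pair[0] alone matters: if the tuples themselves were compared, a (vanishingly unlikely) key collision would fall back to comparing the items, which may not even be orderable. This is effectively what sorted(items, key=lambda x: random.random()) does, since the key function is called exactly once per element.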
David G. Wonnacott wrote:
From: "danielx" <da********@berkeley.edu>
Date: 22 Jul 2006 01:43:30 -0700
Boris Borcic wrote:
does
>
x.sort(cmp = lambda x,y : cmp(random.random(),0.5))
>
pick a random shuffle of x with uniform distribution ?
...
Let e be the element which was in the first position to begin with.
What are its chances of being there after being "sorted"? Well, in
order for it to still be there, it would have to survive N rounds of
selection. In each selection, it has 1/2 chance of surviving. That
means its total chance of survival is 1/(2**N), which is much less than
1/N. QED!
This proof makes sense to me if the sorting algorithm makes a random
decision every time it swaps.
Couldn't we easily get an n*log(n) shuffle by performing a _single_
mapping of each element in the collection to a pair made up of a
random key and its value, and then sorting based on the keys and
stripping them out? i.e., in O(n) time, turn
"2 clubs", "2 diamonds", "2 hearts", "2 spades", "3 clubs"
into something like
[0.395, "2 clubs"], [0.158, "2 diamonds"], [0.432, "2 hearts"], [0.192, "2 spades"], [0.266, "3 clubs"]
and then in O(n*log(n)) time, sort it into
[0.158, "2 diamonds"], [0.192, "2 spades"], [0.266, "3 clubs"], [0.395, "2 clubs"], [0.432, "2 hearts"]
and then strip out the keys again in O(n) time?
Or perhaps this has been discussed already and I missed that part? I
just joined the list... apologies if this is a repeat.
Yes, that would work beautifully. You could use the key argument of
list.sort. What we are talking about though, is using the cmp argument.
I.e. will list.sort(cmp=lambda:???) be able to give you a shuffle? I
think most of us think this depends on what sort algorithm Python uses.
>
You can accomplish this by doing what I will call "random selection".
It would be like linear selection, except you wouldn't bother checking
the head against every other element. When you want to figure out what
to put at the head, just pick at random! I suspect this is what Python
does in random.shuffle, since it is rather easy to see it would
result in something uniform (I swear I haven't looked at the source
code for random.shuffle :P).
But, after making the linear selection, don't you still need to use
O(log(n)) time to find and remove the item from the collection? I
don't know much about Python's internal data structures, but if
there's an array in there, you need O(n) time to move up everything
after the deleted item; if there's a list, you need O(n) time to find
the element you picked at random; if there's a balanced tree, you
could find it and delete it in O(log(n)) time...
I'm almost sure there's a C array back there as well (well, not
technically an array, but something from malloc), because indexing a
Python list is supposed to be "fast". It would take constant time to
put something at the head. Just swap with the thing that's already
there. Since you have to do this swap for each position, it takes time
proportional to N.
When I originally came up with this idea, I was thinking you would not
choose a new head among previously chosen heads. But if you do allow
that, I think it will still be uniform. So my original idea was
something like this:
1 2 3 4
# ^ head pos
stage 0: we need to choose something to put in pos 0. We can choose
anything to go there.
(swap)
3 2 1 4
# ^ head pos
stage 1: we need to choose something to put in pos 1. We can choose
among things in positions greater than or equal to 1, i.e. we may choose
among 2 1 4.
etc.
This is what I meant when I said this would be like linear selection,
because once something is in its correct position, it doesn't get
moved. But you might also be able to achieve a uniform sort if in
stages 1 and up, you are still allowed to choose anything you want to
be the head. I'll let someone else figure it out :P.
>
Help me review my undergraduate data structures here  is there some
way to pick each item _exactly_once_ in O(n) time?
I think I answered this in the first segment of this post. Let me know
if I don't seem clear.
>
Dave Wonnacott
David G. Wonnacott wrote:
Couldn't we easily get an n*log(n) shuffle...
Why are you trying to get an O(n*log(n)) shuffle when an O(n) shuffle
algorithm is well known and implemented in Python as random.shuffle()?
Ross Ridge
Ross Ridge wrote:
David G. Wonnacott wrote:
Couldn't we easily get an n*log(n) shuffle...
Why are you trying to get an O(n*log(n)) shuffle when an O(n) shuffle
algorithm is well known and implemented in Python as random.shuffle()?
I think David is referring to this: "don't you still need to use
O(log(n)) time to find and remove the item from the collection?"
The answer for him is no: as far as I know, the Python list is a
randomaccess structure, so looking up 2 items and swapping them runs
in constant time. You perform that N times to shuffle the sequence, so
it runs in O(N).

Ben Sizer
Paul Rubin wrote:
Boris Borcic <bb*****@gmail.com> writes:
>To be more convincing... assume the algorithm is optimal and calls
That assumption is not even slightly realistic. Real-world sorting
algorithms almost never do a precisely minimal amount of comparison.
'optimal' or 'minimal amount of comparisons' does not mean here to make just
the right n-1 comparisons between successive items to verify the order of n
items, but rather that no strict subset of the comparisons made to sort some
data is sufficient to sort that data.
IOW "minimal" in "minimal amount of comparisons" refers not to the total
ordering by size of the sets of comparisons made, but to the partial ordering by
set inclusion of these sets.
And in this sense, it is clear that quicksort for instance is optimal*
It is easy to see, when you detail this algorithm, that never during its run
is the result of a comparison it makes preordained by comparisons already
made; iow, it doesn't make superfluous or redundant comparisons.
Do you mean quicksort-like algorithms aren't real-world ?
Best, BB

*assuming a variant, like the following for illustration, that doesn't waste
info on equal items.
def qsort(items, cmp):
    if len(items) < 2:
        return items
    pivot = items[0]
    divide = [[pivot], [], []]
    for item in items[1:]:
        # cmp(item, pivot) is -1, 0 or 1, and indexes the three buckets:
        # divide[0] collects equals, divide[1] greater, divide[-1] lesser.
        divide[cmp(item, pivot)].append(item)
    return qsort(divide[-1], cmp) + divide[0] + qsort(divide[1], cmp)
I wrote :
>
...in this sense, it is clear that quicksort for instance is optimal*
It is easy to see, when you detail this algorithm, that never during its
run is the result of a comparison it makes preordained by comparisons
already made; iow, it doesn't make superfluous or redundant comparisons.
The above is true, but...
While an algorithm may never call for a comparison that has a result
predetermined by previous comparisons, it doesn't follow that the property would
hold true for the exact same comparisons done in a different order.
In particular, any sort algorithm must have compared successive items to be
able to conclude their order, so that any system of comparisons that allows
one to sort a dataset of n items must contain the strictly minimal system of
n-1 comparisons of successive items.
This doesn't change the conclusion that an algorithm that never does a
comparison with a result that could be deduced from previous comparisons, will
only sample a random comparison function up to a system of comparisons that's
consistent with an order relation between the items.