Bytes | Software Development & Data Engineering Community
Generate a sequence of random numbers that sum up to 1?

I am at my wit's end.

I want to generate a certain number of random numbers.
This is easy, I can repeatedly do uniform(0, 1) for
example.

But, I want the random numbers just generated sum up
to 1 .

I am not sure how to do this. Any idea? Thanks.

Apr 22 '06 #1
Anthony Liu wrote:
I am at my wit's end.

I want to generate a certain number of random numbers.
This is easy, I can repeatedly do uniform(0, 1) for
example.

But, I want the random numbers just generated sum up
to 1 .

I am not sure how to do this. Any idea? Thanks.
numbers.append(random.uniform(0, 1.0 - sum(numbers)))

might help, perhaps.

or

scaled = [x/sum(numbers) for x in numbers]

Mel.

Apr 22 '06 #2
Anthony Liu wrote:
But, I want the random numbers just generated sum up
to 1 .


This seems like an odd request. Might I ask what it's for?

Generating random numbers in [0,1) that are both uniform and sum to 1 looks
like an unsatisfiable task. Each number you generate restricts the
possibilities for future numbers. E.g. if the first number is 0.5, all
future numbers must be < 0.5 (indeed, must *sum* to 0.5). You'll end up
with a distribution increasingly skewed towards smaller numbers the more
you generate. I can't imagine what that would be useful for.

If that's not a problem, do this: generate the numbers, add them up, and
divide each by the sum.

nums = [random.uniform(0, 1) for x in range(0, 100)]
total = sum(nums)   # don't shadow the builtin 'sum'
norm = [x/total for x in nums]

Of course now the numbers aren't uniform over [0,1) anymore.

Also note that the sum of the normalized numbers will be very close to 1,
but slightly off due to representation issues. If that level of accuracy
matters, you might consider generating your rands as integers and then
fp-dividing by the sum (or just store them as integers/fractions).
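As a rough sketch of that integer approach, the stdlib fractions module makes the sum exact (the weight range here is an arbitrary illustrative choice):

```python
import random
from fractions import Fraction

# Draw integer weights, then divide exactly with Fraction:
weights = [random.randint(1, 1000) for _ in range(10)]
total = sum(weights)
parts = [Fraction(w, total) for w in weights]

print(parts)
print(sum(parts))  # exactly 1, no floating-point drift
```

Since Fraction arithmetic is exact, the normalized parts always sum to precisely 1.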
Apr 22 '06 #3
On Sat, 2006-04-22 at 03:16 +0000, Edward Elliott wrote:
If that level of accuracy
matters, you might consider generating your rands as integers and then
fp-dividing by the sum (or just store them as integers/fractions).


Or using decimal module: http://docs.python.org/lib/module-decimal.html
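A hedged sketch of how that might look; the fix-up of the last element to force an exact total is my own addition, not something from the decimal docs:

```python
import random
from decimal import Decimal

raw = [Decimal(str(random.random())) for _ in range(10)]
total = sum(raw)
scaled = [x / total for x in raw]           # each division rounds to context precision
scaled[-1] = Decimal(1) - sum(scaled[:-1])  # force the total to be exactly 1

print(sum(scaled) == Decimal(1))  # compares equal to 1
```

Without the final fix-up, the rounded quotients would typically sum to something a few ulps away from 1.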

--
Felipe.

Apr 22 '06 #4
Anthony Liu <an***********@yahoo.com> wrote:
...
As a matter of fact, given that we have to specify the
number of states for an HMM, I would like to create a
specified number of random floating numbers whose sum
is 1.0.


def forAL(N):
    N_randoms = [random.random() for x in xrange(N)]
    total = sum(N_randoms)
    return [x/total for x in N_randoms]
Does this do what you want? Of course, the resulting numbers are not
independent, but then the constraints you pose would contradict that.
Alex
Apr 22 '06 #5

Anthony Liu wrote:
I am at my wit's end.

I want to generate a certain number of random numbers.
This is easy, I can repeatedly do uniform(0, 1) for
example.

But, I want the random numbers just generated sum up
to 1 .

I am not sure how to do this. Any idea? Thanks.


--------------------------------------------------------------
import random

def partition(start=0, stop=1, eps=5):
    d = stop - start
    vals = [start + d * random.random() for _ in range(2 * eps)]
    vals = [start] + vals + [stop]
    vals.sort()
    return vals

P = partition()

intervals = [ P[i:i+2] for i in range(len(P)-1) ]

deltas = [ x[1] - x[0] for x in intervals ]

print deltas

print sum(deltas)
---------------------------------------------------------------

Gerard

Apr 22 '06 #6

Gerard Flanagan wrote:
Anthony Liu wrote:
I am at my wit's end.

I want to generate a certain number of random numbers.
This is easy, I can repeatedly do uniform(0, 1) for
example.

But, I want the random numbers just generated sum up
to 1 .

I am not sure how to do this. Any idea? Thanks.


--------------------------------------------------------------
import random

def partition(start=0, stop=1, eps=5):
    d = stop - start
    vals = [start + d * random.random() for _ in range(2 * eps)]
    vals = [start] + vals + [stop]
    vals.sort()
    return vals

P = partition()

intervals = [ P[i:i+2] for i in range(len(P)-1) ]

deltas = [ x[1] - x[0] for x in intervals ]

print deltas

print sum(deltas)
---------------------------------------------------------------


def partition(N=5):
    vals = sorted(random.random() for _ in range(2 * N))
    vals = [0] + vals + [1]
    for j in range(2 * N + 1):
        yield vals[j:j+2]

deltas = [ x[1]-x[0] for x in partition() ]

print deltas

print sum(deltas)

[0.10271966686994982, 0.13826576491042208, 0.064146913555132801,
0.11906452454467387, 0.10501198456091299, 0.011732423830768779,
0.11785369256442912, 0.065927165520102249, 0.098351305878176198,
0.077786747076205365, 0.099139810689226726]
1.0

Apr 22 '06 #7
Gerard Flanagan wrote:
Gerard Flanagan wrote:
Anthony Liu wrote:
I am at my wit's end.

I want to generate a certain number of random numbers.
This is easy, I can repeatedly do uniform(0, 1) for
example.

But, I want the random numbers just generated sum up
to 1 .

I am not sure how to do this. Any idea? Thanks.


--------------------------------------------------------------
import random

def partition(start=0, stop=1, eps=5):
    d = stop - start
    vals = [start + d * random.random() for _ in range(2 * eps)]
    vals = [start] + vals + [stop]
    vals.sort()
    return vals

P = partition()

intervals = [ P[i:i+2] for i in range(len(P)-1) ]

deltas = [ x[1] - x[0] for x in intervals ]

print deltas

print sum(deltas)
---------------------------------------------------------------


def partition(N=5):
    vals = sorted(random.random() for _ in range(2 * N))
    vals = [0] + vals + [1]
    for j in range(2 * N + 1):
        yield vals[j:j+2]

deltas = [ x[1]-x[0] for x in partition() ]

print deltas

print sum(deltas)


finally:

---------------------------------------------------------------
def distribution(N=2):
    p = [0] + sorted(random.random() for _ in range(N - 1)) + [1]
    for j in range(N):
        yield p[j+1] - p[j]

spread = list(distribution(10))

print spread
print sum(spread)
---------------------------------------------------------------
Gerard

Apr 22 '06 #8
Gerard Flanagan <gr********@yahoo.co.uk> wrote:
def distribution(N=2):
    p = [0] + sorted(random.random() for _ in range(N - 1)) + [1]
    for j in range(N):
        yield p[j+1] - p[j]

spread = list(distribution(10))

print spread
print sum(spread)


This is simpler, easier to prove correct and most likely quicker.

def distribution(N=2):
    L = [random.uniform(0, 1) for _ in xrange(N)]
    sumL = sum(L)
    return [l/sumL for l in L]

spread = distribution(10)
print spread
print sum(spread)

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Apr 23 '06 #9
Nick Craig-Wood wrote:
Gerard Flanagan <gr********@yahoo.co.uk> wrote:
def distribution(N=2):
    p = [0] + sorted(random.random() for _ in range(N - 1)) + [1]
    for j in range(N):
        yield p[j+1] - p[j]

spread = list(distribution(10))

print spread
print sum(spread)


This is simpler, easier to prove correct and most likely quicker.

def distribution(N=2):
    L = [random.uniform(0, 1) for _ in xrange(N)]
    sumL = sum(L)
    return [l/sumL for l in L]


simpler:- ok

easier to prove correct:- in what sense?

quicker:- slightly slower in fact (using xrange in both functions).
This must be due to 'uniform' - using random() rather than uniform(0,1)
then yes, it's quicker. Roughly tested, I get yours (and Alex
Martelli's) to be about twice as fast. (2<=N<1000, probably greater
difference as N increases).
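For anyone curious, the gap is easy to measure with the stdlib timeit module; the absolute numbers vary by machine, so treat this as a sketch (written for modern Python):

```python
import timeit

t_random = timeit.timeit("random.random()", setup="import random", number=100_000)
t_uniform = timeit.timeit("random.uniform(0, 1)", setup="import random", number=100_000)

# uniform(0, 1) pays for an extra Python-level function call and arithmetic
print("random.random():  %.4fs" % t_random)
print("random.uniform(): %.4fs" % t_uniform)
```

On most machines random.random() comes out noticeably ahead, which matches Gerard's observation above.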

All the best.

Gerard

Apr 23 '06 #10
I'm surprised no one has pursued a course of subtraction rather than
division. Say you want 10 numbers:

>>> s = 1.0
>>> n = []
>>> for x in xrange(9):
...     value = random.random() * s
...     n.append(value)
...     s -= value
...
>>> n.append(s)
>>> n
[0.7279111122901516, 0.082128708606867745, 0.0080516733577621798,
0.12122060245902817, 0.0034460458833209676, 0.0021046234724371184,
0.054109424914363845, 0.00035750970249204185, 0.00051175075536832372,
0.00015854855820800087]
>>> sum(n)
1.0
Either:
1) Just because they're *ordered* doesn't mean they're not *random*,
or
2) You all now know why I'm not a mathematician. ;)

It seems to me that the only constraint on the randomness of my results
is the OP's constraint: that they sum to 1. I'd be fascinated to learn
if and why that wouldn't work.
Robert Brewer
System Architect
Amor Ministries
fu******@amor.org

Apr 23 '06 #11
fumanchu <fu******@amor.org> wrote:
I'm surprised no one has pursued a course of subtraction rather than
division. Say you want 10 numbers:

>>> s = 1.0
>>> n = []
>>> for x in xrange(9):
...     value = random.random() * s
...     n.append(value)
...     s -= value
...
>>> n.append(s)
>>> n
[0.7279111122901516, 0.082128708606867745, 0.0080516733577621798,
0.12122060245902817, 0.0034460458833209676, 0.0021046234724371184,
0.054109424914363845, 0.00035750970249204185, 0.00051175075536832372,
0.00015854855820800087]
>>> sum(n)
1.0
Either:
1) Just because they're *ordered* doesn't mean they're not *random*,
or
2) You all now know why I'm not a mathematician. ;)

It seems to me that the only constraint on the randomness of my results
is the OP's constraint: that they sum to 1. I'd be fascinated to learn
if and why that wouldn't work.


n[0] is uniformly distributed between 0 and 1; n[1] is not -- not sure
how to characterize its distribution, but it's vastly skewed to favor
smaller values -- and further n[x] values for x>1 are progressively more
and more skewed similarly.

Such total disuniformity, where the very distribution of each value is
skewed by the preceding one, may still be "random" for some sufficiently
vague meaning of "random", but my intuition suggests it's unlikely to
prove satisfactory for the OP's purposes.
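A quick simulation makes the skew concrete (a sketch; the trial count and n=5 are arbitrary choices of mine):

```python
import random

def subtractive(n):
    """Build n numbers summing to 1 by repeatedly taking a random slice of what's left."""
    s, out = 1.0, []
    for _ in range(n - 1):
        v = random.random() * s
        out.append(v)
        s -= v
    out.append(s)
    return out

trials = [subtractive(5) for _ in range(20000)]
means = [sum(t[i] for t in trials) / len(trials) for i in range(5)]
print(means)  # roughly [0.5, 0.25, 0.125, 0.0625, 0.0625]: each later slot shrinks
```

Each position's expected value halves relative to the remainder, which is exactly the progressive skew described above.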
Alex
Apr 23 '06 #12
Alex Martelli wrote:
fumanchu <fu******@amor.org> wrote:
I'm surprised no one has pursued a course of subtraction rather than
division. Say you want 10 numbers:

>>> s = 1.0
>>> n = []
>>> for x in xrange(9):
...     value = random.random() * s
...     n.append(value)
...     s -= value
...
>>> n.append(s)
>>> n
[0.7279111122901516, 0.082128708606867745, 0.0080516733577621798,
0.12122060245902817, 0.0034460458833209676, 0.0021046234724371184,
0.054109424914363845, 0.00035750970249204185, 0.00051175075536832372,
0.00015854855820800087]
>>> sum(n)
1.0

Either:
1) Just because they're *ordered* doesn't mean they're not *random*,
or
2) You all now know why I'm not a mathematician. ;)

It seems to me that the only constraint on the randomness of my results
is the OP's constraint: that they sum to 1. I'd be fascinated to learn
if and why that wouldn't work.


n[0] is uniformly distributed between 0 and 1; n[1] is not -- not sure
how to characterize its distribution, but it's vastly skewed to favor
smaller values -- and further n[x] values for x>1 are progressively more
and more skewed similarly.

Such total disuniformity, where the very distribution of each value is
skewed by the preceding one, may still be "random" for some sufficiently
vague meaning of "random", but my intuition suggests it's unlikely to
prove satisfactory for the OP's purposes.


[digression]

All of this discussion about whether the distribution of values is uniform or
not doesn't mean much until one has defined "uniformity," or equivalently,
"distance" in the space we're talking about. In this case, we're talking about
the unit n-simplex space (SS^n) which has elements S=(s_1, s_2, ... s_n) where
sum(S) = 1 and s_i >= 0. I favor the Aitchison distance:

import numpy as np

def aitchison_distance(x, y):
    """Compute the Aitchison distance between two vectors in simplex space."""
    lx = np.log(x)
    ly = np.log(y)
    lgx = np.mean(lx, axis=-1)
    lgy = np.mean(ly, axis=-1)
    diff = (lx - lgx) - (ly - lgy)
    return np.sqrt(np.sum(diff * diff))

Note that zeros yield infinities, so the borders of the unit simplex are
infinitely farther away from other points in the interior. Consequently,
generating "uniform" random samples from this space is as impractical as it is
to draw "uniform" random samples from the entire infinite real number line. It's
also not very interesting. However, one can transform SS^n into RR^(n-1) and
back again, so drawing numbers from a multivariate normal of whatever mean and
covariance you like will give you "nice" simplicial data and quite possibly even
realistic data, too, depending on your model. I like using the isometric
log-ratio transform ("ilr" transform) for this.
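As a hedged sketch, one common orthonormal basis for the ilr transform can be written as follows; this particular (Helmert-like) basis is an assumption of mine, and the compositional-data literature gives the general construction:

```python
import numpy as np

def ilr(x):
    """Map a composition x (positive entries) from the simplex S^n
    to R^(n-1) using one standard Helmert-like orthonormal basis."""
    lx = np.log(np.asarray(x, dtype=float))
    n = len(lx)
    return np.array([
        np.sqrt(i / (i + 1.0)) * (np.mean(lx[:i]) - lx[i])
        for i in range(1, n)
    ])

print(ilr([1/3, 1/3, 1/3]))  # the barycentre maps to (numerically) zero
```

Note the log-ratio structure makes the result invariant to rescaling the input, so any positive vector and its normalized composition map to the same point.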

Good Google search terms: "compositional data", Aitchison

But of course, for the OP's purpose of creating synthetic Markov chain
transition tables, generating some random vectors uniformly on [0, 1)^n and
normalizing them to sum to 1 works a treat. Don't bother with anything else.

--
Robert Kern
ro*********@gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Apr 23 '06 #13
Alex Martelli wrote:
Such total disuniformity, where the very distribution of each value is
skewed by the preceding one, may still be "random" for some sufficiently
vague meaning of "random", but my intuition suggests it's unlikely to
prove satisfactory for the OP's purposes.


It does seem very odd. If you could restrict the range, you could get an
unskewed distribution. Set range = (0, 2*sum/cnt) and you can generate cnt
numbers whose sum will tend towards sum (because the average value will be
sum/cnt):

target_sum = 1
cnt = 100
upper = 2.0 * target_sum / cnt  # 0.02; renamed to avoid shadowing the builtin 'max'
nums = [random.uniform(0, upper) for x in range(0, cnt)]
real_sum = sum(nums)  # 0.975... in one sample run

If the sum has to be exact, you can set the last value to reach it:

nums[-1] = target_sum - sum(nums[:-1])
print sum(nums) # 1.0

which skews the sample ever so slightly. And check for negatives in case
the sum exceeded the target.

If the exact count doesn't matter, just generate random nums until you're
within some delta of the target sum.
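A sketch of that delta idea; choosing the per-draw cap as 2*delta is my own detail, and it guarantees the final sum lands inside the window on the first pass:

```python
import random

target, delta = 1.0, 0.01
step = 2 * delta  # cap each draw at the window width so the last draw can't overshoot it

nums, s = [], 0.0
while s < target - delta:
    v = random.uniform(0, step)
    nums.append(v)
    s += v

print(len(nums), sum(nums))  # count varies run to run; the sum is within delta of 1
```

Before the last draw we have s < target - delta, and the draw adds less than 2*delta, so the final sum sits in [target - delta, target + delta).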

Basically, there are usually better options than the problem as originally posed.
Actually, now that I reread it, the OP never said the range had to be [0,1).
So maybe we read too much into the original phrasing. If you need
anything resembling a uniform distribution, scaling the results afterward
is not the way to go.
Apr 23 '06 #14

"fumanchu" <fu******@amor.org> wrote in message
news:11**********************@j33g2000cwa.googlegr oups.com...
I'm surprised noone has pursued a course of subtraction rather than
division.


I believe someone did mention the subtraction method in one of the initial
responses. But the problem is this. If you independently sample n numbers
from a given distribution and then rescale, then the scaled numbers still
all have the same distribution (and are uniform in that sense). In the
subtraction method, each comes from a different distribution, as others
explained, with the nth being radically different from the first.
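A small simulation backs this up: after rescaling, every position has the same marginal distribution, so the per-slot means all agree (a sketch; the trial count is arbitrary):

```python
import random

def rescaled(n):
    xs = [random.random() for _ in range(n)]
    total = sum(xs)
    return [x / total for x in xs]

trials = [rescaled(5) for _ in range(20000)]
mean_by_slot = [sum(t[i] for t in trials) / len(trials) for i in range(5)]
print(mean_by_slot)  # every slot hovers around 0.2: identical marginals after rescaling
```

Contrast this with the subtraction method above, where the slot means drop off geometrically.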

Terry Jan Reedy

Apr 24 '06 #15
