Sample uniformly at random from an n-dimensional unit simplex

Sampling uniformly at random from an n-dimensional unit simplex is the fancy way to say that you want n random numbers such that
they are all non-negative,
they sum to one, and
every possible vector of n non-negative numbers that sum to one is equally likely.
In the n=2 case you want to sample uniformly from the segment of the line x+y=1 (i.e., y=1-x) that lies in the positive quadrant.
In the n=3 case you're sampling from the triangle-shaped part of the plane x+y+z=1 that is in the positive octant of R3:
(Image from http://en.wikipedia.org/wiki/Simplex.)
Note that picking n uniform random numbers and then normalizing them so they sum to one does not work. You end up with a bias towards less extreme numbers.
Similarly, picking n-1 uniform random numbers and then taking the nth to be one minus the sum of them also introduces bias.
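A quick way to see that bias empirically (a minimal Mathematica sketch of mine: for a correct uniform sampler with n=3, each coordinate should follow a Beta[1,2] distribution, i.e. have density 2(1-x)):

(* Naive approach: normalize n uniforms; compare the first coordinate
   of n = 3 samples against the Beta[1, 2] density it should have. *)
naive[n_] := (#/Total[#] &)[RandomReal[{0, 1}, n]]
Show[
 Histogram[Table[naive[3][[1]], {20000}], Automatic, "PDF"],
 Plot[PDF[BetaDistribution[1, 2], x], {x, 0, 1}]]

The histogram visibly bunches around 1/3 instead of following the straight-line density, which is exactly the bias toward less extreme numbers described above.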
Wikipedia gives two algorithms to do this correctly: http://en.wikipedia.org/wiki/Simplex#Random_sampling
(Though the second one currently claims to only be correct in practice, not in theory. I'm hoping to clean that up or clarify it when I understand this better. I initially stuck in a "WARNING: such-and-such paper claims the following is wrong" on that Wikipedia page and someone else turned it into the "works only in practice" caveat.)
Finally, the question:
What do you consider the best implementation of simplex sampling in Mathematica (preferably with empirical confirmation that it's correct)?

This code can work:
samples[n_] := Differences[Join[{0}, Sort[RandomReal[{0, 1}, n - 1]], {1}]]
Basically you just choose n - 1 places on the interval [0,1] to split it up then take the size of each of the pieces using Differences.
A quick run of Timing on this shows that it's a little faster than Janus's first answer.

After a little digging around, I found this page which gives a nice implementation of the Dirichlet Distribution. From there it seems like it would be pretty simple to follow Wikipedia's method 1. This seems like the best way to do it.
As a test:
In[14]:= RandomReal[DirichletDistribution[{1,1}],WorkingPrecision->25]
Out[14]= {0.8428995243540368880268079,0.1571004756459631119731921}
In[15]:= Total[%]
Out[15]= 1.000000000000000000000000
A plot of 100 samples:
[Plot: http://www.public.iastate.edu/~zdavkeos/simplex-sample.png]

I'm with zdav: the Dirichlet distribution seems to be the easiest way ahead, and the algorithm for sampling the Dirichlet distribution which zdav refers to is also presented on the Wikipedia page on the Dirichlet distribution.
Implementation-wise, it is a bit of an overhead to do the full Dirichlet distribution first, as all you really need is n random Gamma[1,1] samples. Compare below:
Simple implementation
SimplexSample[n_, opts : OptionsPattern[RandomReal]] :=
  (#/Total[#]) & @ RandomReal[GammaDistribution[1, 1], n, opts]
Full Dirichlet implementation
DirichletDistribution /: Random`DistributionVector[
    DirichletDistribution[alpha_?(VectorQ[#, Positive] &)], n_Integer, prec_?Positive] :=
  Block[{gammas},
    gammas = Map[RandomReal[GammaDistribution[#, 1], n, WorkingPrecision -> prec] &, alpha];
    Transpose[gammas]/Total[gammas]]
SimplexSample2[n_, opts : OptionsPattern[RandomReal]] :=
  (#/Total[#]) & @ RandomReal[DirichletDistribution[ConstantArray[1, {n}]], opts]
Timing
Timing[Table[SimplexSample[10,WorkingPrecision-> 20],{10000}];]
Timing[Table[SimplexSample2[10,WorkingPrecision-> 20],{10000}];]
Out[159]= {1.30249,Null}
Out[160]= {3.52216,Null}
So the full Dirichlet is a factor of 3 slower. If you need m > 1 sample points at a time, you could probably win further by doing (#/Total[#] &) /@ RandomReal[GammaDistribution[1, 1], {m, n}], as in the sketch below.
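For concreteness, here is that batched version spelled out (the name SimplexSampleBatch is mine, not from the answer above):

(* Draw m points on the simplex at once: an m x n matrix of
   Gamma[1,1] variates, with each row normalized to sum to 1. *)
SimplexSampleBatch[m_, n_] :=
  (#/Total[#] &) /@ RandomReal[GammaDistribution[1, 1], {m, n}]
SimplexSampleBatch[3, 4]  (* three points, each with 4 coordinates summing to 1 *)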

Here's a nice concise implementation of the second algorithm from Wikipedia:
SimplexSample[n_] := Rest@# - Most@# &[Sort@Join[{0, 1}, RandomReal[{0, 1}, n - 1]]]
That's adapted from here: http://www.mofeel.net/1164-comp-soft-sys-math-mathematica/14968.aspx
(Originally it had Union instead of Sort@Join -- the latter is slightly faster.)
(See comments for some evidence that this is correct!)
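One cheap empirical check (my addition, not from the linked thread): under uniform sampling on the simplex, each coordinate should follow a Beta[1, n-1] distribution, so you can compare a coordinate's histogram against that pdf:

(* Using SimplexSample as defined above, compare coordinate 1 for
   n = 5 against the Beta[1, 4] density. *)
With[{n = 5},
 Show[
  Histogram[Table[SimplexSample[n][[1]], {20000}], Automatic, "PDF"],
  Plot[PDF[BetaDistribution[1, n - 1], x], {x, 0, 1}]]]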

I have created an algorithm for uniform random generation over a simplex. You can find the details in the paper in the following link:
http://www.tandfonline.com/doi/abs/10.1080/03610918.2010.551012#.U5q7inJdVNY
Briefly speaking, you can use the following recursion formulas to find random points over the n-dimensional simplex:

x_1 = 1 - R_1^(1/(n-1))
x_k = (1 - (x_1 + ... + x_(k-1))) * (1 - R_k^(1/(n-k))),  for k = 2, ..., n-1
x_n = 1 - (x_1 + ... + x_(n-1))

where the R_k are independent random numbers uniformly distributed between 0 and 1.
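A sketch of that recursion in Mathematica (my transcription of the formulas above, not code from the paper):

(* Build x_1 .. x_n one coordinate at a time from uniform R_k. *)
SimplexSampleRec[n_] := Module[{r = RandomReal[{0, 1}, n - 1], x = {}},
  Do[AppendTo[x, (1 - Total[x]) (1 - r[[k]]^(1/(n - k)))], {k, 1, n - 1}];
  Append[x, 1 - Total[x]]]
SimplexSampleRec[4]  (* four non-negative coordinates summing to 1 *)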
Now I am trying to make an algorithm to generate uniform random samples from a constrained simplex, that is, the intersection of a simplex and a convex body.

Old question, and I'm late to the party, but this method is much faster than the accepted answer if implemented efficiently.
In Mathematica code:
#/Total[#, {2}] & @ Log @ RandomReal[{0, 1}, {n, d}]
In plain English, you generate an n-by-d matrix of random numbers uniformly distributed between 0 and 1. Then take the Log of everything. Then normalize each row, dividing each element in the row by the row total. Now you have n samples uniformly distributed over the (d-1) dimensional simplex.
I found this method here: https://mathematica.stackexchange.com/questions/33652/uniformly-distributed-n-dimensional-probability-vectors-over-a-simplex
I'll admit, I'm not sure why it works, but it passes every statistical test I can think of. If anyone has a proof of why this method works, I'd love to see it! (One observation that points at a proof: -Log of a Uniform(0,1) variable is Exponential(1), i.e. Gamma(1,1), and the signs cancel in the normalization, so this is the same gamma-normalization construction used in the Dirichlet answers above.)


What is a simple formula for a non-iterative random number sequence?

I would like to have a function f(x) that gives good pseudo-random numbers in uniform distribution according to value x. I am aware of linear congruential generators, however these work in iterations, i.e. I provide the initial seed and then I get a sequence of random values one by one. This is not what I want, because if a want to get let's say 200000th number in the sequence, I have to compute numbers 1 ... 199999. I need a function that is given by one simple formula that uses basic operations such as +, *, mod, etc. I am also aware of hash functions but I didn't find any that suits these needs. I might come up with some function myself, but I'd like to use something that's been tested to give decent pseudo-random values. Is there anything like that being used?
You might consider multiplicative congruential generators. These are linear congruential generators without the additive constant: X_{i+1} = (a * X_i) % c for suitable constants a and c. Expanding this out for a few iterations will convince you that X_k = (a^k * X_0) % c, where X_0 is your seed value. This can be calculated in O(log k) time using fast modular exponentiation. No need to calculate the first 199,999 values to get the 200,000th - you can find it in about 18 steps, since log2(200,000) is roughly 17.6.
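A sketch of that skip-ahead in Mathematica (the Lehmer/Park-Miller constants below are standard, but treat the whole thing as illustrative rather than a recommended generator):

(* k-th state of X_{i+1} = (a * X_i) % c, computed directly as
   X_k = (a^k * X_0) % c; PowerMod is fast modular exponentiation. *)
a = 16807; c = 2147483647; x0 = 42;
nthState[k_] := Mod[PowerMod[a, k, c] * x0, c]
nthState[200000]  (* no need to step through the first 199999 values *)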
Actually, for an LCG with additive constant it works as well. There is a paper by F. Brown, "Random Number Generation with Arbitrary Stride", Trans. Am. Nucl. Soc. (Nov. 1994). Based on this paper there is a reasonable LCG with decent quality and a log2(N) skip-ahead feature, used by the well-known Monte Carlo package MCNP5. A C++ implementation is here: https://github.com/Iwan-Zotow/LCG-PLE63/. A further development of this idea (RNGs with logarithmic skip-ahead) is the pretty decent family of generators at http://www.pcg-random.org/
You could use a simple encryption algorithm that can encrypt the numbers 1, 2, 3, ... Since encryption is reversible, each input number will have a unique output. The 200000th number in your sequence is encrypt(key, 200000). Use DES for 64 bit numbers, AES for 128 bit numbers and you can roll your own simple Feistel cipher for 32 bit or 16 bit numbers.
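As a sketch of that idea for 32-bit values, here is a toy 4-round Feistel cipher over 16-bit halves in Mathematica. The round function and keys are arbitrary choices of mine, purely illustrative; what matters is that any Feistel network is a bijection, so distinct inputs always give distinct outputs:

(* encrypt[n, keys] is then the n-th value of a non-iterative sequence. *)
round[x_, k_] := BitAnd[BitXor[x*x + k, BitShiftRight[x, 3]], 65535]
encrypt[v_, keys_] := Module[{l = BitShiftRight[v, 16], r = BitAnd[v, 65535]},
  Do[{l, r} = {r, BitXor[l, round[r, k]]}, {k, keys}];
  l*65536 + r]
encrypt[200000, {17, 313, 5, 999}]  (* the 200000th "random" number *)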

Understanding "randomness"

I can't get my head around this, which is more random?
rand()
OR:
rand() * rand()
I'm finding it a real brain teaser - could you help me out?
EDIT:
Intuitively I know that the mathematical answer will be that they are equally random, but I can't help but think that if you "run the random number algorithm" twice when you multiply the two together you'll create something more random than just doing it once.
Just a clarification
Although the previous answers are right whenever you try to spot the randomness of a pseudo-random variable or its multiplication, you should be aware that while Random() is usually uniformly distributed, Random() * Random() is not.
Example
This is a uniform random distribution sample simulated through a pseudo-random variable:
BarChart[BinCounts[RandomReal[{0, 1}, 50000], 0.01]]
While this is the distribution you get after multiplying two random variables:
BarChart[BinCounts[RandomReal[{0, 1}, 50000] *
   RandomReal[{0, 1}, 50000], 0.01]]
So, both are “random”, but their distribution is very different.
Another example
While 2 * Random() is uniformly distributed:
BarChart[BinCounts[2 * RandomReal[{0, 1}, 50000], 0.01]]
Random() + Random() is not!
BarChart[BinCounts[RandomReal[{0, 1}, 50000] +
   RandomReal[{0, 1}, 50000], 0.01]]
The Central Limit Theorem
The Central Limit Theorem states that the sum of Random() tends to a normal distribution as terms increase.
With just four terms you get:
BarChart[BinCounts[
  RandomReal[{0, 1}, 50000] + RandomReal[{0, 1}, 50000] +
   RandomReal[{0, 1}, 50000] + RandomReal[{0, 1}, 50000], 0.01]]
And here you can see the road from a uniform to a normal distribution by adding up 1, 2, 4, 6, 10 and 20 uniformly distributed random variables.
Edit
A few credits
Thanks to Thomas Ahle for pointing out in the comments that the probability distributions shown in the last two images are known as the Irwin-Hall distribution.
Thanks to Heike for her wonderful torn[] function.
I guess both methods are as random, although my gut feeling would say that rand() * rand() is less random, because it would yield more zeroes: as soon as one rand() is 0, the total becomes 0.
Neither is 'more random'.
rand() generates a predictable set of numbers based on a pseudo-random seed (usually based on the current time, which is always changing). Multiplying two consecutive numbers in the sequence generates a different, but equally predictable, sequence of numbers.
Addressing whether this will reduce collisions, the answer is no. It will actually increase collisions due to the effect of multiplying two numbers where 0 < n < 1. The result will be a smaller fraction, causing a bias in the result towards the lower end of the spectrum.
Some further explanations. In the following, 'unpredictable' and 'random' refer to the ability of someone to guess what the next number will be based on previous numbers, ie. an oracle.
Given seed x which generates the following list of values:
0.3, 0.6, 0.2, 0.4, 0.8, 0.1, 0.7, 0.3, ...
rand() will generate the above list, and rand() * rand() will generate:
0.18, 0.08, 0.08, 0.21, ...
Both methods will always produce the same list of numbers for the same seed, and hence are equally predictable by an oracle. But if you look at the results for multiplying the two calls, you'll see they are all under 0.3 despite a decent distribution in the original sequence. The numbers are biased because of the effect of multiplying two fractions. The resulting number is always smaller, therefore much more likely to be a collision despite still being just as unpredictable.
Oversimplification to illustrate a point.
Assume your random function only outputs 0 or 1.
random() is one of (0,1), but random()*random() is one of (0,0,0,1)
You can clearly see that the chances to get a 0 in the second case are in no way equal to those to get a 1.
When I first posted this answer I wanted to keep it as short as possible so that a person reading it will understand from a glance the difference between random() and random()*random(), but I can't keep myself from answering the original ad litteram question:
Which is more random?
Being that random(), random()*random(), random()+random(), (random()+1)/2 or any other combination that doesn't lead to a fixed result have the same source of entropy (or the same initial state in the case of pseudorandom generators), the answer would be that they are equally random (The difference is in their distribution). A perfect example we can look at is the game of Craps. The number you get would be random(1,6)+random(1,6) and we all know that getting 7 has the highest chance, but that doesn't mean the outcome of rolling two dice is more or less random than the outcome of rolling one.
Here's a simple answer. Consider Monopoly. You roll two six sided dice (or 2d6 for those of you who prefer gaming notation) and take their sum. The most common result is 7 because there are 6 possible ways you can roll a 7 (1,6 2,5 3,4 4,3 5,2 and 6,1). Whereas a 2 can only be rolled on 1,1. It's easy to see that rolling 2d6 is different than rolling 1d12, even if the range is the same (ignoring that you can get a 1 on a 1d12, the point remains the same). Multiplying your results instead of adding them is going to skew them in a similar fashion, with most of your results coming up in the middle of the range. If you're trying to reduce outliers, this is a good method, but it won't help making an even distribution.
(And oddly enough it will increase low rolls as well. Assuming your randomness starts at 0, you'll see a spike at 0 because it will turn whatever the other roll is into a 0. Consider two random numbers between 0 and 1 (inclusive) and multiplying. If either result is a 0, the whole thing becomes a 0 no matter the other result. The only way to get a 1 out of it is for both rolls to be a 1. In practice this probably wouldn't matter but it makes for a weird graph.)
The obligatory xkcd ...
It might help to think of this in more discrete numbers. Consider that you want to generate random numbers between 1 and 36, so you decide the easiest way is to throw two fair, 6-sided dice and multiply the results. You get this:
       1   2   3   4   5   6
   -------------------------
 1 |   1   2   3   4   5   6
 2 |   2   4   6   8  10  12
 3 |   3   6   9  12  15  18
 4 |   4   8  12  16  20  24
 5 |   5  10  15  20  25  30
 6 |   6  12  18  24  30  36
So we have 36 cells, but not all of the numbers they contain are fairly represented: many numbers between 1 and 36 (7, 11, 13, ...) don't occur at all, while others (like 6 and 12) occur four times each. Numbers near the center diagonal (bottom-left corner to top-right corner) occur with the highest frequency.
The same principles which describe the unfair distribution between dice apply equally to floating point numbers between 0.0 and 1.0.
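You can tally that table exactly in one line (a Mathematica aside of mine, not part of the original answer):

(* Frequencies of each product of two dice: 6 and 12 occur four
   times each, while 7, 11, 13, ... never occur at all. *)
Sort[Tally[Times @@@ Tuples[Range[6], 2]]]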
Some things about "randomness" are counter-intuitive.
Assuming flat distribution of rand(), the following will get you non-flat distributions:
high bias: sqrt(rand(range^2))
bias peaking in the middle: (rand(range) + rand(range))/2
low bias: range - sqrt(rand(range^2))
There are lots of other ways to create specific bias curves. I did a quick test of rand() * rand() and it gets you a very non-linear distribution.
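If you want to eyeball those bias curves, a quick Mathematica sketch (with range = 1):

Histogram[Sqrt[RandomReal[{0, 1}, 50000]]]   (* biased high *)
Histogram[(RandomReal[{0, 1}, 50000] + RandomReal[{0, 1}, 50000])/2]   (* peak in the middle *)
Histogram[1 - Sqrt[RandomReal[{0, 1}, 50000]]]   (* biased low *)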
Most rand() implementations have some period. I.e. after some enormous number of calls the sequence repeats. The sequence of outputs of rand() * rand() repeats in half the time, so it is "less random" in that sense.
Also, without careful construction, performing arithmetic on random values tends to cause less randomness. A poster above cited "rand() + rand() + rand() ..." (k times, say) which will in fact tend to k times the mean value of the range of values rand() returns. (It's a random walk with steps symmetric about that mean.)
Assume for concreteness that your rand() function returns a uniformly distributed random real number in the range [0,1). (Yes, this example allows infinite precision. This won't change the outcome.) You didn't pick a particular language and different languages may do different things, but the following analysis holds with modifications for any non-perverse implementation of rand(). The product rand() * rand() is also in the range [0,1) but is no longer uniformly distributed. In fact, the product is as likely to be in the interval [0,1/4) as in the interval [1/4,1). More multiplication will skew the result even further toward zero. This makes the outcome more predictable. In broad strokes, more predictable == less random.
Pretty much any sequence of operations on uniformly random input will be nonuniformly random, leading to increased predictability. With care, one can overcome this property, but then it would have been easier to generate a uniformly distributed random number in the range you actually wanted rather than wasting time with arithmetic.
"random" vs. "more random" is a little bit like asking which Zero is more zero'y.
In this case, rand is a PRNG, so not totally random. (in fact, quite predictable if the seed is known). Multiplying it by another value makes it no more or less random.
A true crypto-type RNG will actually be random. And running values through any sort of function cannot add more entropy to it, and may very likely remove entropy, making it no more random.
The concept you're looking for is "entropy," the "degree" of disorder of a string of bits. The idea is easiest to understand in terms of the concept of "maximum entropy". An approximate definition of a string of bits with maximum entropy is that it cannot be expressed exactly in terms of a shorter string of bits (i.e. using some algorithm to expand the smaller string back to the original string).
The relevance of maximum entropy to randomness stems from the fact that if you pick a number "at random", you will almost certainly pick a number whose bit string is close to having maximum entropy, that is, it can't be compressed. This is our best understanding of what characterizes a "random" number.
So, if you want to make a random number out of two random samples which is "twice" as random, you'd concatenate the two bit strings together. Practically, you'd just stuff the samples into the high and low halves of a double length word.
On a more practical note, if you find yourself saddled with a crappy rand(), it can sometimes help to xor a couple of samples together --- although, if it's truly broken even that procedure won't help.
The accepted answer is quite lovely, but there's another way to answer your question. PachydermPuncher's answer already takes this alternative approach, and I'm just going to expand it out a little.
The easiest way to think about information theory is in terms of the smallest unit of information, a single bit.
In the C standard library, rand() returns an integer in the range 0 to RAND_MAX, a limit that may be defined differently depending on the platform. Suppose RAND_MAX happens to be defined as 2^n - 1 where n is some integer (this happens to be the case in Microsoft's implementation, where n is 15). Then we would say that a good implementation would return n bits of information.
Imagine that rand() constructs random numbers by flipping a coin to find the value of one bit, and then repeating until it has a batch of 15 bits. Then the bits are independent (the value of any one bit does not influence the likelihood of other bits in the same batch having a certain value). So each bit considered independently is like a random number between 0 and 1 inclusive, and is "evenly distributed" over that range (as likely to be 0 as 1).
The independence of the bits ensures that the numbers represented by batches of bits will also be evenly distributed over their range. This is intuitively obvious: if there are 15 bits, the allowed range is zero to 2^15 - 1 = 32767. Every number in that range is a unique pattern of bits, such as:
010110101110010
and if the bits are independent then no pattern is more likely to occur than any other pattern. So all possible numbers in the range are equally likely. And so the reverse is true: if rand() produces evenly distributed integers, then those numbers are made of independent bits.
So think of rand() as a production line for making bits, which just happens to serve them up in batches of arbitrary size. If you don't like the size, break the batches up into individual bits, and then put them back together in whatever quantities you like (though if you need a particular range that is not a power of 2, you need to shrink your numbers, and by far the easiest way to do that is to convert to floating point).
Returning to your original suggestion, suppose you want to go from batches of 15 to batches of 30, ask rand() for the first number, bit-shift it by 15 places, then add another rand() to it. That is a way to combine two calls to rand() without disturbing an even distribution. It works simply because there is no overlap between the locations where you place the bits of information.
This is very different to "stretching" the range of rand() by multiplying by a constant. For example, if you wanted to double the range of rand() you could multiply by two - but now you'd only ever get even numbers, and never odd numbers! That's not exactly a smooth distribution and might be a serious problem depending on the application, e.g. a roulette-like game supposedly allowing odd/even bets. (By thinking in terms of bits, you'd avoid that mistake intuitively, because you'd realise that multiplying by two is the same as shifting the bits to the left (greater significance) by one place and filling in the gap with zero. So obviously the amount of information is the same - it just moved a little.)
Such gaps in number ranges can't be griped about in floating point number applications, because floating point ranges inherently have gaps in them that simply cannot be represented at all: an infinite number of missing real numbers exist in the gap between each two representable floating point numbers! So we just have to learn to live with gaps anyway.
As others have warned, intuition is risky in this area, especially because mathematicians can't resist the allure of real numbers, which are horribly confusing things full of gnarly infinities and apparent paradoxes.
But at least if you think it terms of bits, your intuition might get you a little further. Bits are really easy - even computers can understand them.
As others have said, the easy short answer is: No, it is not more random, but it does change the distribution.
Suppose you were playing a dice game. You have some completely fair, random dice. Would the die rolls be "more random" if before each die roll, you first put two dice in a bowl, shook it around, picked one of the dice at random, and then rolled that one? Clearly it would make no difference. If both dice give random numbers, then randomly choosing one of the two dice will make no difference. Either way you'll get a random number between 1 and 6 with even distribution over a sufficient number of rolls.
I suppose in real life such a procedure might be useful if you suspected that the dice might NOT be fair. If, say, the dice are slightly unbalanced so one tends to give 1 more often than 1/6 of the time, and another tends to give 6 unusually often, then randomly choosing between the two would tend to obscure the bias. (Though in this case, 1 and 6 would still come up more than 2, 3, 4, and 5. Well, I guess depending on the nature of the imbalance.)
There are many definitions of randomness. One definition of a random series is that it is a series of numbers produced by a random process. By this definition, if I roll a fair die 5 times and get the numbers 2, 4, 3, 2, 5, that is a random series. If I then roll that same fair die 5 more times and get 1, 1, 1, 1, 1, then that is also a random series.
Several posters have pointed out that random functions on a computer are not truly random but rather pseudo-random, and that if you know the algorithm and the seed they are completely predictable. This is true, but most of the time completely irrelevant. If I shuffle a deck of cards and then turn them over one at a time, this should be a random series. If someone peeks at the cards, the result will be completely predictable, but by most definitions of randomness this will not make it less random. If the series passes statistical tests of randomness, the fact that I peeked at the cards will not change that fact. In practice, if we are gambling large sums of money on your ability to guess the next card, then the fact that you peeked at the cards is highly relevant. If we are using the series to simulate the menu picks of visitors to our web site in order to test the performance of the system, then the fact that you peeked will make no difference at all. (As long as you do not modify the program to take advantage of this knowledge.)
EDIT
I don't think I could fit my response to the Monty Hall problem into a comment, so I'll update my answer.
For those who didn't read Belisarius' link, the gist of it is: a game show contestant is given a choice of 3 doors. Behind one is a valuable prize, behind the others something worthless. He picks door #1. Before revealing whether it is a winner or a loser, the host opens door #3 to reveal that it is a loser. He then gives the contestant the opportunity to switch to door #2. Should the contestant do this or not?
The answer, which offends many people's intuition, is that he should switch. The probability that his original pick was the winner is 1/3, that the other door is the winner is 2/3. My initial intuition, along with that of many other people, is that there would be no gain in switching, that the odds have just been changed to 50:50.
After all, suppose that someone switched on the TV just after the host opened the losing door. That person would see two remaining closed doors. Assuming he knows the nature of the game, he would say that there is a 1/2 chance of each door hiding the prize. How can the odds for the viewer be 1/2 : 1/2 while the odds for the contestant are 1/3 : 2/3 ?
I really had to think about this to beat my intuition into shape. To get a handle on it, understand that when we talk about probabilities in a problem like this, we mean, the probability you assign given the available information. To a member of the crew who put the prize behind, say, door #1, the probability that the prize is behind door #1 is 100% and the probability that it is behind either of the other two doors is zero.
The crew member's odds are different than the contestant's odds because he knows something the contestant doesn't, namely, which door he put the prize behind. Likewise, the contestant's odds are different than the viewer's odds because he knows something that the viewer doesn't, namely, which door he initially picked. This is not irrelevant, because the host's choice of which door to open is not random. He will not open the door the contestant picked, and he will not open the door that hides the prize. If these are the same door, that leaves him two choices. If they are different doors, that leaves only one.
So how do we come up with 1/3 and 2/3? When the contestant originally picked a door, he had a 1/3 chance of picking the winner. I think that much is obvious. That means there was a 2/3 chance that one of the other doors is the winner. If the host gave him the opportunity to switch without giving any additional information, there would be no gain. Again, this should be obvious. But one way to look at it is to say that there is a 2/3 chance that he would win by switching. But he has 2 alternatives. So each one has only 2/3 divided by 2 = 1/3 chance of being the winner, which is no better than his original pick. Of course we already knew the final result; this just calculates it a different way.
But now the host reveals that one of those two choices is not the winner. So of the 2/3 chance that a door he didn't pick is the winner, he now knows that 1 of the 2 alternatives isn't it. The other might or might not be. So he no longer has 2/3 divided by 2. He has zero for the open door and 2/3 for the closed door.
Consider you have a simple coin flip problem where even is considered heads and odd is considered tails. The logical implementation is:
rand() mod 2
Over a large enough distribution, the number of even numbers should equal the number of odd numbers.
Now consider a slight tweak:
rand() * rand() mod 2
If one of the results is even, then the entire result should be even. Consider the 4 possible outcomes (even * even = even, even * odd = even, odd * even = even, odd * odd = odd). Now, over a large enough distribution, the answer should be even 75% of the time.
I'd bet heads if I were you.
This comment is really more of an explanation of why you shouldn't implement a custom random function based on your method than a discussion on the mathematical properties of randomness.
When in doubt about what will happen to the combinations of your random numbers, you can use the lessons you learned in statistical theory.
In the OP's situation he wants to know the outcome of X*X = X^2 where X is a random variable distributed as Uniform[0,1]. We'll use the CDF technique since it's just a one-to-one mapping.
Since X ~ Uniform[0,1], its pdf is: f_X(x) = 1 for 0 <= x <= 1.
We want the transformation Y = X^2, thus y = x^2.
Find the inverse x(y): sqrt(y) = x. This gives us x as a function of y.
Next, find the derivative dx/dy: d/dy (sqrt(y)) = 1/(2 sqrt(y)).
The distribution of Y is given as: f_Y(y) = f_X(x(y)) |dx/dy| = 1/(2 sqrt(y)).
We're not done yet; we have to get the domain of Y. Since 0 <= x < 1, we have 0 <= x^2 < 1,
so Y is in the range [0, 1).
If you want to check that the pdf of Y is indeed a pdf, integrate it over the domain: the integral of 1/(2 sqrt(y)) from 0 to 1 is indeed 1. Also, notice that the shape of the function looks like what belisarius posted.
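To double-check that derivation numerically, a small Mathematica sketch (my addition):

(* Histogram of X^2 for X ~ Uniform[0,1], against the derived
   pdf 1/(2 Sqrt[y]). *)
Show[
 Histogram[RandomReal[{0, 1}, 50000]^2, Automatic, "PDF"],
 Plot[1/(2 Sqrt[y]), {y, 0.001, 1}, PlotRange -> {0, 5}]]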
As for things like X1 + X2 + ... + Xn (where Xi ~ Uniform[0,1]), we can just appeal to the Central Limit Theorem, which works for any distribution whose moments exist. This is actually why the Z-test exists.
Other techniques for determining the resulting pdf include the Jacobian transformation (which is the generalized version of the cdf technique) and the MGF technique.
EDIT: As a clarification, do note that I'm talking about the distribution of the resulting transformation and not its randomness. That's actually a separate discussion. Also, what I actually derived was for (rand())^2. For rand() * rand() it's much more complicated, and in any case it won't result in a uniform distribution of any sort.
It's not exactly obvious, but rand() is typically more random than rand()*rand(). What's important is that this isn't actually very important for most uses.
But firstly, they produce different distributions. This is not a problem if that is what you want, but it does matter. If you need a particular distribution, then ignore the whole “which is more random” question. So why is rand() more random?
The core of why rand() is more random (under the assumption that it is producing floating-point random numbers with the range [0..1], which is very common) is that when you multiply two FP numbers together with lots of information in the mantissa, you get some loss of information off the end; there are just not enough bits in an IEEE double-precision float to hold all the information that was in two IEEE double-precision floats uniformly randomly selected from [0..1], and those extra bits of information are lost. Of course, it doesn't matter that much since you (probably) weren't going to use that information, but the loss is real. It also doesn't really matter which distribution you produce (i.e., which operation you use to do the combination). Each of those random numbers has (at best) 52 bits of random information - that's how much an IEEE double can hold - and if you combine two or more into one, you're still limited to having at most 52 bits of random information.
Most uses of random numbers don't use even close to as much randomness as is actually available in the random source. Get a good PRNG and don't worry too much about it. (The level of “goodness” depends on what you're doing with it; you have to be careful when doing Monte Carlo simulation or cryptography, but otherwise you can probably use the standard PRNG as that's usually much quicker.)
Floating-point randoms are based, in general, on an algorithm that produces an integer between zero and a certain range. As such, by using rand()*rand(), you are essentially saying int_rand()*int_rand()/rand_max^2 - meaning you are excluding any prime number / rand_max^2.
That changes the randomized distribution significantly.
rand() is uniformly distributed on most systems, and difficult to predict if properly seeded. Use that unless you have a particular reason to do math on it (i.e., shaping the distribution to a needed curve).
Multiplying numbers would end up in a smaller solution range depending on your computer architecture.
If the display of your computer shows 16 digits, rand() would be, say, 0.1234567890123,
and multiplied by a second rand(), 0.1234567890123, it would give 0.0152415-something;
you'd definitely find fewer distinct solutions if you repeated the experiment 10^14 times.
Most of these distributions happen because you have to limit or normalize the random number.
We normalize it to be all positive, fit within a range, and even to fit within the constraints of the memory size for the assigned variable type.
In other words, because we have to limit the random call between 0 and X (X being the size limit of our variable) we will have a group of "random" numbers between 0 and X.
Now when you add the random number to another random number the sum will be somewhere between 0 and 2X...this skews the values away from the edge points (the probability of adding two small numbers together and two big numbers together is very small when you have two random numbers over a large range).
Think of the case where you have a number that is close to zero and you add it to another random number: it will certainly get bigger and move away from 0. (This will be true of large numbers as well, since it is unlikely that the random function returns two large numbers (numbers close to X) in a row.)
Now if you were to setup the random method with negative numbers and positive numbers (spanning equally across the zero axis) this would no longer be the case.
Say, for instance, RandomReal[{-x, x}, 50000]: you would get an even distribution of numbers on the negative and positive side, and if you were to add the random numbers together they would maintain their "randomness".
Now I'm not sure what would happen with the Random() * Random() with the negative to positive span...that would be an interesting graph to see...but I have to get back to writing code now. :-P
There is no such thing as more random. It is either random or not. Random means "hard to predict". It does not mean non-deterministic. Both random() and random() * random() are equally random if random() is random. Distribution is irrelevant as far as randomness goes. If a non-uniform distribution occurs, it just means that some values are more likely than others; they are still unpredictable.
Since pseudo-randomness is involved, the numbers are very much deterministic. However, pseudo-randomness is often sufficient in probability models and simulations. It is pretty well known that making a pseudo-random number generator complicated only makes it difficult to analyze. It is unlikely to improve randomness; it often causes it to fail statistical tests.
The desired properties of the random numbers are important: repeatability and reproducibility, statistical randomness, (usually) uniformly distributed, and a large period are a few.
Concerning transformations on random numbers: as someone said, the sum of two or more uniformly distributed variables tends toward a normal distribution. This is the additive central limit theorem. It applies regardless of the source distribution as long as all distributions are independent and identical. The multiplicative central limit theorem says the product of two or more independent and identically distributed random variables tends toward lognormal. The graph someone else created looks exponential, but it is really (approximately) lognormal. So random() * random() is approximately lognormally distributed (although the factors may not be independent since numbers are pulled from the same stream). This may be desirable in some applications. However, it is usually better to generate one random number and transform it to a lognormally-distributed number. Random() * random() may be difficult to analyze.
For more information, consult my book at www.performorama.org. The book is under construction, but the relevant material is there. Note that chapter and section numbers may change over time. Chapter 8 (probability theory) -- sections 8.3.1 and 8.3.3, chapter 10 (random numbers).
We can compare two arrays of numbers with regard to randomness by using
Kolmogorov complexity.
If the sequence of numbers cannot be compressed, then it is the most random we can reach at this length...
I know that this type of measurement is more of a theoretical option...
Actually, when you think about it, rand() * rand() is less random than rand(). Here's why.
Essentially, there are the same number of odd numbers as even numbers. So say that 0.04325 is odd, 0.388 is even, 0.4 is even, and 0.15 is odd.
That means that rand() has an equal chance of being an even or odd decimal.
On the other hand, rand() * rand() has its odds stacked a bit differently.
Lets say:
double a = rand();
double b = rand();
double c = a * b;
a and b both have a 50% chance of being even or odd. Knowing that
even * even = even
even * odd = even
odd * odd = odd
odd * even = even
means that there is a 75% chance that c is even, while only a 25% chance it's odd, making the value of rand() * rand() more predictable than rand(), and therefore less random.
Use a linear feedback shift register (LFSR) that implements a primitive polynomial.
The result will be a sequence of 2^n - 1 pseudo-random numbers, i.e., none repeating within the sequence, where n is the number of bits in the LFSR (the all-zeros state never occurs) - resulting in a uniform distribution over the nonzero values.
http://en.wikipedia.org/wiki/Linear_feedback_shift_register
http://www.xilinx.com/support/documentation/application_notes/xapp052.pdf
Use a "random" seed based on microseconds of your computer clock or maybe a subset of the md5 result on some continuously changing data in your file system.
For example, a 32-bit LFSR will generate 2^32 - 1 unique numbers in sequence (no 2 alike) starting from a given nonzero seed.
The sequence will always be in the same order, but the starting point will be different (obviously) for different seeds.
So, if a possibly repeating sequence between seedings is not a problem, this might be a good choice.
I've used 128-bit LFSR's to generate random tests in hardware simulators using a seed which is the md5 results on continuously changing system data.
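For illustration, a sketch of a 16-bit Fibonacci LFSR in Mathematica, using the well-known primitive polynomial x^16 + x^14 + x^13 + x^11 + 1 from the Wikipedia article above (the seed and output handling are my own arbitrary choices):

(* One LFSR step: XOR the taps at bit positions 0, 2, 3, 5 of the
   current state, shift right, and feed the new bit in at bit 15. *)
lfsrStep[s_] := BitOr[BitShiftRight[s, 1],
  BitShiftLeft[BitXor[BitGet[s, 0], BitGet[s, 2], BitGet[s, 3], BitGet[s, 5]], 15]]
NestList[lfsrStep, 16^^ACE1, 10]  (* first few states; period is 2^16 - 1 *)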
Assuming that rand() returns a number between [0, 1) it is obvious that rand() * rand() will be biased toward 0. This is because multiplying x by a number between [0, 1) will result in a number smaller than x. Here is the distribution of 10000 more random numbers:
google.charts.load("current", { packages: ["corechart"] });
google.charts.setOnLoadCallback(drawChart);

function drawChart() {
  var i;
  var randomNumbers = [];
  for (i = 0; i < 10000; i++) {
    randomNumbers.push(Math.random() * Math.random());
  }
  var chart = new google.visualization.Histogram(document.getElementById("chart-1"));
  var data = new google.visualization.DataTable();
  data.addColumn("number", "Value");
  randomNumbers.forEach(function(randomNumber) {
    data.addRow([randomNumber]);
  });
  chart.draw(data, {
    title: randomNumbers.length + " rand() * rand() values between [0, 1)",
    legend: { position: "none" }
  });
}
<script src="https://www.gstatic.com/charts/loader.js"></script>
<div id="chart-1" style="height: 500px">Generating chart...</div>
If rand() returns an integer between [x, y] then you have the following distribution. Notice the number of odd vs even values:
google.charts.load("current", { packages: ["corechart"] });
google.charts.setOnLoadCallback(drawChart);
document.querySelector("#draw-chart").addEventListener("click", drawChart);

function randomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

function drawChart() {
  var min = Number(document.querySelector("#rand-min").value);
  var max = Number(document.querySelector("#rand-max").value);
  if (min >= max) {
    return;
  }
  var i;
  var randomNumbers = [];
  for (i = 0; i < 10000; i++) {
    randomNumbers.push(randomInt(min, max) * randomInt(min, max));
  }
  var chart = new google.visualization.Histogram(document.getElementById("chart-1"));
  var data = new google.visualization.DataTable();
  data.addColumn("number", "Value");
  randomNumbers.forEach(function(randomNumber) {
    data.addRow([randomNumber]);
  });
  chart.draw(data, {
    title: randomNumbers.length + " rand() * rand() values between [" + min + ", " + max + "]",
    legend: { position: "none" },
    histogram: { bucketSize: 1 }
  });
}
<script src="https://www.gstatic.com/charts/loader.js"></script>
<input type="number" id="rand-min" value="0" min="0" max="10">
<input type="number" id="rand-max" value="9" min="0" max="10">
<input type="button" id="draw-chart" value="Apply">
<div id="chart-1" style="height: 500px">Generating chart...</div>
OK, so I will try to add some value to complement the other answers by saying that you are creating and using a random number generator.
Random number generators are devices (in a very general sense) that have multiple characteristics which can be modified to fit a purpose. Some of them (off the top of my head) are:
Entropy: as in Shannon entropy
Distribution: statistical distribution (Poisson, normal, etc.)
Type: what is the source of the numbers (algorithm, natural event, combination of these, etc.) and the algorithm applied.
Efficiency: rapidity or complexity of execution.
Patterns: periodicity, sequences, runs, etc.
and probably more...
In most answers here, distribution is the main point of interest, but by mixing and matching functions and parameters you create new ways of generating random numbers, which will have different characteristics, some of whose evaluation may not be obvious at first glance.
It's easy to show that the sum of two random numbers is not necessarily uniformly random. Imagine you have a 6-sided die and roll it. Each number has a 1/6 chance of appearing. Now say you had 2 dice and summed the result. The distribution of those sums is not uniform - the 11 possible sums (2 through 12) are not each equally likely. Why? Because certain sums appear more often than others; there are multiple partitions of them. For example, the number 2 is the sum of 1+1 only, but 7 can be formed by 3+4 or 4+3 or 5+2, etc., so it has a larger chance of coming up.
Therefore, applying a transform, in this case addition, to a random function does not make it more random, or necessarily preserve randomness. In the case of the dice above, the distribution is skewed to 7 and is therefore less random.
As others have already pointed out, this question is hard to answer since every one of us has their own picture of randomness in their head.
That is why I would highly recommend you take some time and read through this site to get a better idea of randomness:
http://www.random.org/
To get back to the real question.
There is no "more random" or "less random" here:
both only appear random!
In both cases - just rand() or rand() * rand() - the situation is the same:
after a few billion numbers the sequence will repeat(!).
It appears random to the observer, because the observer does not know the whole sequence, but the computer has no true random source - so it cannot produce randomness either.
e.g.: Is the weather random?
We do not have enough sensors or knowledge to determine whether the weather is random or not.
The answer would be: it depends. You might hope that rand()*rand() would be more random than rand(), but note:
both answers depend on the bit size of your value;
in most cases you generate numbers with a pseudo-random algorithm (which is mostly a number generator that depends on your computer clock, and is not that random);
you should make your code more readable (and not invoke some random voodoo god of random with this kind of mantra).
If you agree with any of the above, I suggest you go for the simple "rand()". Your code would be more readable (you wouldn't ask yourself why you wrote this for, well, more than 2 seconds), and it would be easy to maintain (if you want to replace your rand function with a super_rand).
If you want better randomness, I would recommend you stream it from any source that provides enough noise (radio static), and then a simple rand() should be enough.

If there are M different boxes and N identical balls

and we need to put these balls into the boxes.
How many states could there be?
This is part of a computer simulation puzzle. I've almost forgotten all my math knowledge.
I believe you are looking for the Multinomial Coefficient.
I will check myself and expand my answer.
Edit:
If you take a look at the wikipedia article I gave a link to, you can see that the M and N you defined in your question correspond to the m and n defined in the Theorem section.
This means that your question corresponds to: "What is the number of possible coefficient orderings when expanding a polynomial raised to an arbitrary power?", where N is the power, and M is the number of variables in the polynomial.
In other words:
What you are looking for is to sum over the multinomial coefficients of a polynomial of M variables when expanded after being raised to the power of N.
The exact equations are a bit long, but they are explained very clearly in Wikipedia.
Why is this true:
The multinomial coefficient gives you the number of ways to order identical balls between baskets when grouped into a specific grouping (for example, 5 balls grouped into 3, 1, and 1 - in this case N=5 and M=3). When summing over all grouping options you get all possible combinations.
I hope this helped you out.
These notes explain how to solve the "balls in boxes" problem in general: whether the balls are labeled or not, whether the boxes are labeled or not, whether you have to have at least one ball in each box, etc.
This is a basic combinatorial question (distribution of identical objects into non-identical slots).
The number of states is [(N+M-1) choose (M-1)].
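In Mathematica this is a one-liner, with a brute-force cross-check for a small case (a sketch; the name count is mine):

(* Stars and bars: N identical balls into M distinct boxes. *)
count[nBalls_, mBoxes_] := Binomial[nBalls + mBoxes - 1, mBoxes - 1]
count[4, 3]                           (* 15 *)
Length[FrobeniusSolve[{1, 1, 1}, 4]]  (* also 15: all {a,b,c} >= 0 with a+b+c == 4 *)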

How to check if m n-sized vectors are linearly independent?

Disclaimer
This is not strictly a programming question, but most programmers sooner or later have to deal with math (especially algebra), so I think the answer could turn out to be useful to someone else in the future.
Now the problem
I'm trying to check if m vectors of dimension n are linearly independent. If m == n you can just build a matrix using the vectors and check if the determinant is != 0. But what if m < n?
Any hints?
See also this video lecture.
Construct a matrix of the vectors (one row per vector), and perform Gaussian elimination on this matrix. If any of the matrix rows cancels out (reduces to all zeros), the vectors are not linearly independent.
The trivial case is when m > n: in that case, they cannot be linearly independent.
Construct a matrix M whose rows are the vectors and determine the rank of M. If the rank of M is less than m (the number of vectors) then there is a linear dependence. In the algorithm to determine the rank of M you can stop the procedure as soon as you obtain one row of zeros, but running the algorithm to completion has the added bonus of providing the dimension of the spanning set of the vectors. Oh, and the algorithm to determine the rank of M is merely Gaussian elimination.
Take care for numerical instability. See the warning at the beginning of chapter two in Numerical Recipes.
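In Mathematica, the whole rank test is one built-in call (a quick sketch):

(* Rank < number of vectors  <=>  the vectors are linearly dependent. *)
vecs = {{1, 2, 3, 4}, {2, 4, 6, 8}, {0, 1, 0, 1}};
MatrixRank[vecs] < Length[vecs]  (* True: row 2 is twice row 1 *)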
If m<n, you will have to do some operation on them (there are multiple possibilities: Gaussian elimination, orthogonalization, etc., almost any transformation which can be used for solving equations will do) and check the result (eg. Gaussian elimination => zero row or column, orthogonalization => zero vector, SVD => zero singular number)
However, note that this question is a bad question for a programmer to ask, and this problem is a bad problem for a program to solve. That's because every linearly dependent set of n<m vectors has a different set of linearly independent vectors nearby (eg. the problem is numerically unstable)
I have been working on this problem these days.
Previously, I found some algorithms regarding Gaussian or Gauss-Jordan elimination, but most of those algorithms only apply to square matrices, not general matrices.
To apply for general matrix, one of the best answers might be this:
http://rosettacode.org/wiki/Reduced_row_echelon_form#MATLAB
You can find both pseudo-code and source code in various languages.
As for me, I transformed the Python source code to C++, because the C++ code provided at the above link is somewhat complex and inappropriate to implement in my simulation.
Hope this will help you, and good luck ^^
If computing power is not a problem, probably the best way is to find singular values of the matrix. Basically you need to find eigenvalues of M'*M and look at the ratio of the largest to the smallest. If the ratio is not very big, the vectors are independent.
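A sketch of that singular-value test in Mathematica (the example matrix is mine; its second row is almost, but not exactly, a multiple of the first):

(* Nearly dependent rows give a tiny smallest singular value. *)
m = {{1., 2., 3.}, {2., 4., 6.000001}};
svs = SingularValueList[m];
Min[svs]/Max[svs]  (* a very small ratio signals (near-)dependence *)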
Another way to check that m row vectors are linearly independent, when put in a matrix M of size mxn, is to compute
det(M * M^T)
i.e. the determinant of an m×m square matrix. It will be zero if and only if M has some dependent rows. However, Gaussian elimination should in general be faster.
Sorry man, my mistake...
The source code provided at the above link turns out to be incorrect - at least the Python code I have tested, and the C++ code I transformed from it, do not generate the right answer all the time. (For the example at the above link, though, the result is correct.)
To test the Python code, simply replace the mtx with
[30,10,20,0],[60,20,40,0]
and the returned result would be like:
[1,0,0,0],[0,1,2,0]
Nevertheless, I have got a way out of this. It's just that this time I transformed the Matlab source code of the rref function to C++. You can run Matlab and use the command "type rref" to get the source code of rref.
Just notice that if you are working with some really large or really small values, make sure to use the long double datatype in C++. Otherwise, the result will be truncated and inconsistent with the Matlab result.
I have been conducting large simulations in ns2, and all the observed results are sound.
Hope this will help you and anyone else who has encountered the problem...
A very simple way, that is not the most computationally efficient, is to simply remove random rows until m=n and then apply the determinant trick.
m < n: remove rows (make the vectors shorter) until the matrix is square, and then
m = n: check if the determinant is 0 (as you said)
m < n (the number of vectors is greater than their length): they are linearly dependent (always).
The reason, in short, is that any solution to the system of m x n equations is also a solution to the n x n system of equations (you're trying to solve Av=0). For a better explanation, see Wikipedia, which explains it better than I can.

Approximating nonparametric cubic Bezier

What is the best way to approximate a cubic Bezier curve? Ideally I would want a function y(x) which would give the exact y value for any given x, but this would involve solving a cubic equation for every x value, which is too slow for my needs, and there may be numerical stability issues as well with this approach.
Would this be a good solution?
Just solve the cubic.
If you're talking about Bezier plane curves, where x(t) and y(t) are cubic polynomials, then y(x) might be undefined or have multiple values. An extreme degenerate case would be the line x= 1.0, which can be expressed as a cubic Bezier (control point 2 is the same as end point 1; control point 3 is the same as end point 4). In that case, y(x) has no solutions for x != 1.0, and infinite solutions for x == 1.0.
A method of recursive subdivision will work, but I would expect it to be much slower than just solving the cubic. (Unless you're working with some sort of embedded processor with unusually poor floating-point capacity.)
You should have no trouble finding code that solves a cubic and that has already been thoroughly tested and debugged. If you implement your own solution using recursive subdivision, you won't have that advantage.
Finally, yes, there may be numerical stability problems, like when the point you want is near a tangent, but a subdivision method won't make those go away. It will just make them less obvious.
EDIT: responding to your comment, but I need more than 300 characters.
I'm only dealing with bezier curves where y(x) has only one (real) root. Regarding numerical stability, using the formula from http://en.wikipedia.org/wiki/Cubic_equation#Summary, it would appear that there might be problems if u is very small. – jtxx000
The wackypedia article is math with no code. I suspect you can find some cookbook code that's more ready-to-use somewhere - maybe Numerical Recipes or the ACM collected algorithms.
To your specific question, and using the same notation as the article, u is only zero or near zero when p is also zero or near zero. They're related by the equation:
u^6 + q u^3 == p^3 / 27
Near zero, you can use the approximation:
q u^3 == p^3 / 27
or: p / (3u) == cube root of q
So the computation of x from u should contain something like:
(fabs(u) >= somesmallvalue) ? (p / u / 3.0) : cuberoot (q)
How "near" zero is near? Depends on how much accuracy you need. You could spend some quality time with Maple or Matlab looking at how much error is introduced for what magnitudes of u. Of course, only you know how much accuracy you need.
The article gives 3 formulas for u for the 3 roots of the cubic. Given the three u values, you can get the 3 corresponding x values. The 3 values for u and x are all complex numbers with an imaginary component. If you're sure that there has to be only one real solution, then you expect one of the roots to have a zero imaginary component, and the other two to be complex conjugates. It looks like you have to compute all three and then pick the real one. (Note that a complex u can correspond to a real x!) However, there's another numerical stability problem there: floating-point arithmetic being what it is, the imaginary component of the real solution will not be exactly zero, and the imaginary components of the non-real roots can be arbitrarily close to zero. So numeric round-off can result in you picking the wrong root. It would be helpful if there's some sanity check from your application that you could apply there.
If you do pick the right root, one or more iterations of Newton-Raphson can improve its accuracy a lot.
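For what it's worth, here is a sketch of the "just solve the cubic" route in Mathematica, leaning on NSolve rather than a hand-rolled Cardano (the control points are arbitrary, chosen so that x(t) is monotonic and y(x) is single-valued):

(* Cubic Bezier with control points p1..p4; yOfX solves x(t) == x0
   for t in [0,1], then evaluates y at that t. *)
{p1, p2, p3, p4} = {{0, 0}, {1, 2}, {2, -1}, {3, 1}};
bez[t_] = (1 - t)^3 p1 + 3 (1 - t)^2 t p2 + 3 (1 - t) t^2 p3 + t^3 p4;
yOfX[x0_] := bez[t][[2]] /.
  First[NSolve[{bez[t][[1]] == x0, 0 <= t <= 1}, t, Reals]]
yOfX[1.5]  (* 0.5 for these control points *)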
Yes, the de Casteljau algorithm would work for you. However, I don't know if it will be faster than solving the cubic equation by Cardano's method.
