Uniform sampling (by volume) within a cone - math

I'm looking for an algorithm that can generate points within a cone with a flat bottom (a disk).
I have the normalized axis along which the cone is being created (for our purposes let's just say it is the y-axis, so (0, 1, 0)) and the angle of the cone (let's say it is 45 degrees).
The only resources I could find online generate vectors within a cone, but they are based on sampling a sphere, so at the bottom you get a kind of "snow-cone" effect instead of a disk at the bottom.
That is done with the following pseudocode:
// Sample phi uniformly on [0, 2PI]
float phi = rand(0, 1) * 2 * PI
// Sample u uniformly from [cos(angle), 1]
float u = rand(0, 1) * (1 - cos(angle * PI/180)) + cos(angle * PI/180)
vec3 v = vec3(sqrt(1 - u^2) * cos(phi), u, sqrt(1 - u^2) * sin(phi))
The below picture is what I am going for. Having the ability to generate samples either on the surface or inside would be nice as well:

I could explain my solution in detail using integrals and probability distributions, but the lack of MathJax on this site makes that difficult. I'll keep my explanation at a simple level, but it should be clear. I'll also make the solution a little more general than you ask: we want a random point inside a right circular cone of height a and radius of base b, and we want the point with uniform sampling over the volume of that cone. This method directly chooses a random point in the cone without any rejection testing.
First let's consider the small cone of height h inside that larger cone, both cones with the same apex and parallel bases. The two cones are of course similar figures, and the square-cube law says that the volume of the smaller cone varies as the cube of its height. That height varies from 0 to a and we want its cube to be uniform over that range. Therefore we choose h to vary with the cube root of a uniform random variable, and we get (in Python 3 code),
h = a * (random()) ** (1/3)
We next consider the circular region that is the base of that smaller cone of height h. The radius of that base is (b / a) * h, by similar triangles. Now think of a smaller circular region of radius r inside that larger circular region, both circles in the same plane and with the same center. The area of the smaller circle varies with the square of its radius, so to get a uniform area over its range we take the square root of a uniform random variable. We get
r = (b / a) * h * sqrt(random())
We now want the angle t (for theta) of a point on the circumference of that smaller circle of radius r. The angle in radians obviously does not depend on the other factors, so we just use a uniform random variable to get
t = 2 * pi * random()
We now use those three random variables h, r, and t to choose our point inside the starting cone. If the apex of the cone is at the origin and the axis of the cone is along the positive y-axis, so that the center of the base is (0, a, 0) and a point on the circumference of the base is (b, a, 0), you can choose
x = r * cos(t)
y = h
z = r * sin(t)
When you asked about generating samples "on the surface" you did not clarify if you mean just the side (or is it "sides"?) of the cone, just the base, or the entire surface. Your second graphic appears to mean just the side, but I'll give code for all three.
The side only
Again we use a smaller cone of height h inside the larger cone. Its surface area varies as the square of its height, so we take the square root of a uniform random variable. The circle in its base is fixed, if our point is to be on the surface, and again the angle is just uniform. So we get
h = a * sqrt(random())
r = (b / a) * h
t = 2 * pi * random()
Use the same code for x, y, and z I used above for the interior of the cone to get the final random point on the side surface of the cone.
The base only
This is much like choosing a point in the interior, except the height is predetermined to equal the height of the entire cone. We get the following, somewhat simplified code:
h = a
r = b * sqrt(random())
t = 2 * pi * random()
Again, use the previous code for the final x, y, and z.
The entire surface
Here we can first decide, at random, whether to place our point on the base or on the surface, then place the point in one of the two ways above. The area of the base of a cone of height a and base radius b is pi * b * b while the surface area of the cone's side is pi * b * sqrt(a*a + b*b). We use the ratio of the base to the total of those areas to choose which subsurface to use for our point:
if random() < b / (b + sqrt(a*a + b*b)):
return point_on_base(a, b)
else:
return point_on_side(a, b)
Use my codes above for the side and base to complete that code.
Here are simple matplotlib 3D scatter plots of 10,000 random points, first inside the cone then on its side surface. Note that I made the apex angle 45°, as your text states but unlike your pictures. Viewing these from other angles seems to confirm that they are uniform in volume or area.

Related

Uniformly distribute n points inside an ellipse

How do you uniformly distribute n points inside an ellipse given n and its minor axis length (major axis can be assumed to be the x axis and size 1)? Or if that is impossible, how do you pick n points such that the smallest distance between any two is maximized?
Right now I am uneasy about running expensive electron-repulsion simulators (in hopes that there is a better solution like the sunflower function in this question to distribute n points in a circle). n will most likely be between 10 and 100 points, but it would be nice if it worked well for all n.
If the ellipse is centered at (0,0), if a=1 is the major radius, b is the minor radius, and the major axis is horizontal, then the ellipse has equation x' A x = 1 where A is the matrix
| 1    0    |
| 0    1/b² |
Now, here is a way to uniformly sample inside an ellipse with equation x' A x = r². Take the upper triangular Cholesky factor of A, say U. Here, U is simply
| 1    0   |
| 0    1/b |
Pick a point y at random, uniformly, in the circle centered at (0,0) and with radius r. Then x = U^{-1}y is a point uniformly picked at random in the ellipse.
This method works in arbitrary dimension, not only in the two-dimensional case (replacing "circle" with "hypersphere").
So, for our present case, here is the pseudo-code, assuming random() uniformly generates a number between 0 and 1:
R = sqrt(random())
theta = random() * 2 * pi
x1 = R * cos(theta)
x2 = b * R * sin(theta)
Here is R code to generate n points:
runif_ellipse <- function(n, b){
  R <- sqrt(runif(n))
  theta <- runif(n) * 2*pi
  cbind(R*cos(theta), b*R*sin(theta))
}
points <- runif_ellipse(n = 1000, b = 0.7)
plot(points, asp = 1, pch = 19)
Rather simple approach:
Make an initial estimate of the spacing, D = sqrt(pi*b/N), where b is the minor axis length.
Generate a triangular grid (equilateral triangles give the densest packing) of points with cell size D. Count the number of points lying inside the given ellipse.
If that count is smaller than N, decrease D; if it is larger, increase D. Repeat until exactly N points are inside.
The dependence CountInside <=> D is monotone for a fixed starting point, so you can use binary search to get the result faster; see the sketch below.
There might be awkward cases with 2-4 symmetric points near the border that leave or enter simultaneously. If you catch this case, shift the starting point a bit.

Arrange X amount of things evenly around a point in 3d space

If I have X amount of things (let's just say 300),
is there an algorithm that will arrange these things somewhat evenly around a central point? Like a 100-sided die or a 3D mesh of a sphere?
I'd rather have the things somewhat evenly spaced like this,
rather than this polar way.
P.S. For those interested, wondering why I want to do this:
Well I'm doing these for fun, and after completing #7 I decided I'd like to represent the array of wires in 3d in Unity and watch them operate in a slowed down manner.
Here is a simple transformation that maps a uniform sample in the rectangle [0, 2 pi] x [-1, 1] onto a uniform sample on the sphere of radius r:
T(phi, z) = (r cos(phi) sqrt(1 - z^2), r sin(phi) sqrt(1 - z^2), r z)
The reason why this transformation produces uniform samples on the sphere is that the area of any region T(U), obtained by transforming a region U of the rectangle, depends only on the area of U, not on its shape or position.
To prove this mathematically it is enough to verify that the norm of the cross product
| ∂T/∂phi x ∂T/∂z |
is constant (the area on the sphere is the integral of this norm with respect to phi and z).
Summarizing
To produce a random sample uniformly distributed on the sphere of radius r, do the following:
Produce a random sample (phi_1, ..., phi_n) uniformly distributed in [0, 2 pi].
Produce a random sample (z_1, ..., z_n) uniformly distributed in [-1, 1].
For every pair (phi_j, z_j) calculate T(phi_j, z_j) using the formula above.
Here's a three-step approach. 1a) Make more points than you need. 1b) Remove some. 2) Adjust the rest.
1a) To make more points than you need, take any quasiregular polyhedron with faces that tessellate (triangles, squares, diamonds). Tessellate the spherical faces by subdivision, generating more vertices. For example, if you use the regular icosahedron you get geodesic domes. (Subdivide by 2 and you get the dual of the C60 buckyball.) Working out exact formulas isn't hard. The number of new vertices per face is quadratic in the subdivision.
1b) Randomly remove enough points to get you down to your target number.
2) Use a force-directed layout algorithm to redistribute the vertices over the sphere. The underlying force graph is just the one provided by the nearest neighbors in your underlying tessellation.
There are other ways to do step 1), such as just generating random points in any distribution. There is an advantage to starting with a quasiregular figure, though. Force-directed algorithms have a reputation for poor convergence in some cases. By starting with something that's already mostly optimal, you'll bypass most of the convergence problems you might otherwise have.
One elegant solution I came across recently is a spherical fibonacci lattice (http://extremelearning.com.au/how-to-evenly-distribute-points-on-a-sphere-more-effectively-than-the-canonical-fibonacci-lattice/)
The nice thing about it is that you can specify the exact number of points you want
// C# Code example
Vector3[] SphericalFibonacciLattice(int n) {
    Vector3[] res = new Vector3[n];
    float goldenRatio = (1.0f + MathF.Sqrt(5.0f)) * 0.5f;
    for (int i = 0; i < n; i++)
    {
        float theta = 2.0f * MathF.PI * i / goldenRatio;
        float phi = MathF.Acos(1.0f - 2.0f * (i + 0.5f) / n);
        Vector3 p = new Vector3(MathF.Cos(theta) * MathF.Sin(phi),
                                MathF.Sin(theta) * MathF.Sin(phi),
                                MathF.Cos(phi));
        res[i] = p;
    }
    return res;
}
The linked article extends on this to create an even more uniform distribution, but even this basic version creates very nice results.

Generating random points on a surface of an n-dimensional torus

I'd like to generate random points lying on the surface of an n-dimensional torus. I have found formulas for generating points on the surface of a 3-dimensional torus:
x = (c + a * cos(v)) * cos(u)
y = (c + a * cos(v)) * sin(u)
z = a * sin(v)
u, v ∈ [0, 2 * pi); c, a > 0.
My question is now: how can these formulas be extended to n dimensions? Any help on the matter would be much appreciated.
I guess that you can do this recursively. Start with a full orthonormal basis of your vector space, and let the current location be the origin. At each step, choose a point in the plane spanned by the first two coordinate vectors, i.e. take w1 = cos(t)*v1 + sin(t)*v2. Shift the other basis vectors, i.e. w2 = v3, w3 = v4, …. Also take a step from your current position in the direction w1, with the radius r1 chosen up front. When you only have a single basis vector remaining, then the current point is a point on the n-dimensional torus of the outermost recursive call.
Note that while the above may be used to choose points randomly, it won't choose them uniformly. That would likely be a much harder question, and you definitely should ask about the math of that on Math SE or perhaps on Cross Validated (Statistics SE) to get the math right before you worry about implementation.
An n-torus (n being the dimensionality of the surface of the torus; a bagel or doughnut is therefore a 2-torus, not a 3-torus) is a smooth mapping of an n-rectangle. One way to approach this is to generate points on the rectangle and then map them onto the torus. Aside from the problem of figuring out how to map a rectangle onto a torus (I don't know it off-hand), there is the problem that the resulting distribution of points on the torus is not uniform even if the distribution of points is uniform on the rectangle. But there must be a way to adjust the distribution on the rectangle to make it uniform on the torus.
Merely generating u and v uniformly will not necessarily sample uniformly from a torus surface. An additional step is needed.
J.F. Williamson, "Random selection of points distributed on curved surfaces", Physics in Medicine & Biology 32(10), 1987, describes a general method of choosing a uniformly random point on a parametric surface. It is an acceptance/rejection method that accepts or rejects each candidate point depending on its stretch factor (norm-of-gradient). To use this method for a parametric surface, several things have to be known about the surface, namely—
x(u, v), y(u, v) and z(u, v), which are functions that generate 3-dimensional coordinates from two dimensional coordinates u and v,
The ranges of u and v,
g(point), the norm of the gradient ("stretch factor") at each point on the surface, and
gmax, the maximum value of g for the entire surface.
For the 3-dimensional torus with the parameterization you give in your question, g and gmax are the following:
g(u, v) = a * (c + cos(v) * a).
gmax = a * (a + c).
The algorithm to generate a uniform random point on the surface of a 3-dimensional torus with torus radius c and tube radius a is then as follows (where RNDEXCRANGE(x,y) returns a number in [x,y) uniformly at random, and RNDRANGE(x,y) returns a number in [x,y] uniformly at random):
// Maximum stretch factor for torus
gmax = a * (a + c)
while true
    u = RNDEXCRANGE(0, pi * 2)
    v = RNDEXCRANGE(0, pi * 2)
    x = cos(u)*(c+cos(v)*a)
    y = sin(u)*(c+cos(v)*a)
    z = sin(v)*a
    // Norm of gradient (stretch factor)
    g = a*abs(c+cos(v)*a)
    if g >= RNDRANGE(0, gmax)
        // Accept the point
        return [x, y, z]
    end
end
If you have n-dimensional torus generating formulas, a similar approach can be used to generate uniform random points on that torus (accept a candidate point if norm-of-gradient equals or exceeds a random number in [0, gmax), where gmax is the maximum norm-of-gradient).

Generate a random point within a circle (uniformly)

I need to generate a uniformly random point within a circle of radius R.
I realize that by just picking a uniformly random angle in the interval [0 ... 2π), and uniformly random radius in the interval (0 ... R) I would end up with more points towards the center, since for two given radii, the points in the smaller radius will be closer to each other than for the points in the larger radius.
I found a blog entry on this over here but I don't understand his reasoning. I suppose it is correct, but I would really like to understand from where he gets (2/R²)×r and how he derives the final solution.
Update: 7 years after posting this question I still hadn't received a satisfactory answer on the actual question regarding the math behind the square root algorithm. So I spent a day writing an answer myself. Link to my answer.
How to generate a random point within a circle of radius R:
r = R * sqrt(random())
theta = random() * 2 * PI
(Assuming random() gives a value between 0 and 1 uniformly)
If you want to convert this to Cartesian coordinates, you can do
x = centerX + r * cos(theta)
y = centerY + r * sin(theta)
Why sqrt(random())?
Let's look at the math that leads up to sqrt(random()). Assume for simplicity that we're working with the unit circle, i.e. R = 1.
The average distance between points should be the same regardless of how far from the center we look. This means for example, that looking on the perimeter of a circle with circumference 2 we should find twice as many points as the number of points on the perimeter of a circle with circumference 1.
Since the circumference of a circle (2πr) grows linearly with r, it follows that the number of random points should grow linearly with r. In other words, the desired probability density function (PDF) grows linearly. Since a PDF should have an area equal to 1 and the maximum radius is 1, we have PDF(r) = 2r for 0 ≤ r ≤ 1.
So we know what the desired density of our random values should look like.
Now: How do we generate such a random value when all we have is a uniform random value between 0 and 1?
We use a trick called inverse transform sampling
From the PDF, create the cumulative distribution function (CDF)
Mirror this along y = x
Apply the resulting function to a uniform value between 0 and 1.
Sounds complicated? Let me insert a blockquote with a little side track that conveys the intuition:
Suppose we want to generate a random point with the following distribution:
That is
1/5 of the points uniformly between 1 and 2, and
4/5 of the points uniformly between 2 and 3.
The CDF is, as the name suggests, the cumulative version of the PDF. Intuitively: While PDF(x) describes the number of random values at x, CDF(x) describes the number of random values less than x.
In this case the CDF would look like:
To see how this is useful, imagine that we shoot bullets from left to right at uniformly distributed heights. As the bullets hit the line, they drop down to the ground:
See how the density of the bullets on the ground correspond to our desired distribution! We're almost there!
The problem is that for this function, the y axis is the output and the x axis is the input. We can only "shoot bullets from the ground straight up"! We need the inverse function!
This is why we mirror the whole thing; x becomes y and y becomes x:
We call this CDF⁻¹. To get values according to the desired distribution, we use CDF⁻¹(random()).
…so, back to generating random radius values where our PDF equals 2x.
Step 1: Create the CDF:
Since we're working with reals, the CDF is expressed as the integral of the PDF.
CDF(x) = ∫ 2x dx = x²
Step 2: Mirror the CDF along y = x:
Mathematically this boils down to swapping x and y and solving for y:
CDF: y = x²
Swap: x = y²
Solve: y = √x
CDF⁻¹: y = √x
Step 3: Apply the resulting function to a uniform value between 0 and 1
CDF⁻¹(random()) = √random()
Which is what we set out to derive :-)
Let's approach this like Archimedes would have.
How can we generate a point uniformly in a triangle ABC, where |AB|=|BC|? Let's make this easier by extending to a parallelogram ABCD. It's easy to generate points uniformly in ABCD. We uniformly pick a random point X on AB and Y on BC and choose Z such that XBYZ is a parallelogram. To get a uniformly chosen point in the original triangle we just fold any points that appear in ADC back down to ABC along AC.
Now consider a circle. In the limit we can think of it as infinitely many isosceles triangles ABC with B at the origin and A and C on the circumference vanishingly close to each other. We can pick one of these triangles simply by picking an angle theta. So we now need to generate a distance from the center by picking a point in the sliver ABC. Again, extend to ABCD, where D is now twice the radius from the circle center.
Picking a random point in ABCD is easy using the above method. Pick a random point on AB. Uniformly pick a random point on BC. I.e. pick a pair of random numbers x and y uniformly on [0,R], giving distances from the center. Our triangle is a thin sliver, so AB and BC are essentially parallel. So the point Z is simply a distance x+y from the origin. If x+y > R we fold back down.
Here's the complete algorithm for R=1. I hope you agree it's pretty simple. It uses trig, but you can give a guarantee on how long it'll take, and how many random() calls it needs, unlike rejection sampling.
t = 2*pi*random()
u = random()+random()
r = if u>1 then 2-u else u
[r*cos(t), r*sin(t)]
Here it is in Mathematica.
f[] := Block[{u, t, r},
  u = Random[] + Random[];
  t = Random[] 2 Pi;
  r = If[u > 1, 2 - u, u];
  {r Cos[t], r Sin[t]}
]
ListPlot[Table[f[], {10000}], AspectRatio -> Automatic]
Here is a fast and simple solution.
Pick two random numbers in the range (0, 1), namely a and b. If b < a, swap them. Your point is (b*R*cos(2*pi*a/b), b*R*sin(2*pi*a/b)).
You can think about this solution as follows. If you took the circle, cut it, then straightened it out, you'd get a right-angled triangle. Scale that triangle down, and you'd have a triangle from (0, 0) to (1, 0) to (1, 1) and back again to (0, 0). All of these transformations change the density uniformly. What you've done is uniformly picked a random point in the triangle and reversed the process to get a point in the circle.
Note that with a uniformly chosen radius the point density is inversely proportional to the radius, hence instead of picking r from [0, r_max], pick from [0, r_max^2], then compute your coordinates as:
x = sqrt(r) * cos(angle)
y = sqrt(r) * sin(angle)
This will give you uniform point distribution on a disk.
http://mathworld.wolfram.com/DiskPointPicking.html
Think about it this way. If you have a rectangle where one axis is radius and one is angle, and you take the points inside this rectangle that are near radius 0, these will all fall very close to the origin (that is, close together on the circle). However, the points near radius R will all fall near the edge of the circle (that is, far apart from each other).
This might give you some idea of why you are getting this behavior.
The factor that's derived on that link tells you how much corresponding area in the rectangle needs to be adjusted to not depend on the radius once it's mapped to the circle.
Edit: So what he writes in the link you share is, "That’s easy enough to do by calculating the inverse of the cumulative distribution, and we get for r:".
The basic premise is here that you can create a variable with a desired distribution from a uniform by mapping the uniform by the inverse function of the cumulative distribution function of the desired probability density function. Why? Just take it for granted for now, but this is a fact.
Here's my somewhat intuitive explanation of the math. The density function f(r) with respect to r has to be proportional to r itself. Understanding this fact is part of any basic calculus book. See the sections on polar area elements. Some other posters have mentioned this.
So we'll call it f(r) = C*r;
This turns out to be most of the work. Now, since f(r) should be a probability density, you can easily see that by integrating f(r) over the interval (0,R) you get that C = 2/R^2 (this is an exercise for the reader.)
Thus, f(r) = 2*r/R^2
OK, so that's how you get the formula in the link.
Then, the final part is going from the uniform random variable u in (0,1) you must map by the inverse function of the cumulative distribution function from this desired density f(r). To understand why this is the case you need to find an advanced probability text like Papoulis probably (or derive it yourself.)
Integrating f(r) you get F(r) = r^2/R^2
To find the inverse function of this you set u = r^2/R^2 and then solve for r, which gives you r = R * sqrt(u)
This totally makes sense intuitively too: u = 0 should map to r = 0. Also, u = 1 should map to r = R. Also, it goes by the square root function, which makes sense and matches the link.
Let ρ (radius) and φ (azimuth) be two random variables corresponding to polar coordinates of an arbitrary point inside the circle. If the points are uniformly distributed then what is the distribution function of ρ and φ?
For any r: 0 < r < R the probability of the radius coordinate ρ being less than r is
P[ρ < r] = P[point is within a circle of radius r] = S1 / S0 = (r/R)²
Where S1 and S0 are the areas of circle of radius r and R respectively.
So the CDF can be given as:
CDF(r) = 0           if r <= 0
         (r/R)**2    if 0 < r <= R
         1           if r > R
And PDF:
PDF = d/dr(CDF) = 2 * (r/R**2) (0 < r <= R).
Note that for R=1 the random variable sqrt(X), where X is uniform on [0, 1), has this exact CDF (because P[sqrt(X) < y] = P[X < y**2] = y**2 for 0 < y <= 1).
The distribution of φ is obviously uniform from 0 to 2*π. Now you can create random polar coordinates and convert them to Cartesian using trigonometric equations:
x = ρ * cos(φ)
y = ρ * sin(φ)
Can't resist posting Python code for R=1.
from matplotlib import pyplot as plt
import numpy as np
rho = np.sqrt(np.random.uniform(0, 1, 5000))
phi = np.random.uniform(0, 2*np.pi, 5000)
x = rho * np.cos(phi)
y = rho * np.sin(phi)
plt.scatter(x, y, s = 4)
You will get
The reason why the naive solution doesn't work is that it gives a higher probability density to points closer to the circle center. In other words, with a uniformly chosen radius, the disk of radius R/2 receives a point with probability 1/2, yet it contains only a quarter of the area (and therefore should receive only a quarter of the points).
Therefore we want a radius probability density to have the following property:
The probability of choosing a radius smaller or equal to a given r has to be proportional to the area of the circle with radius r. (because we want to have a uniform distribution on the points and larger areas mean more points)
In other words we want the probability of choosing a radius between [0,r] to be equal to its share of the overall area of the circle. The total circle area is pi*R^2, and the area of the circle with radius r is pi*r^2. Thus we would like the probability of choosing a radius between [0,r] to be (pi*r^2)/(pi*R^2) = r^2/R^2.
Now comes the math:
The probability of choosing a radius between [0,r] is the integral of p(r) dr from 0 to r (that's just because we add all the probabilities of the smaller radii). Thus we want integral(p(r)dr) = r^2/R^2. We can clearly see that R^2 is a constant, so all we need to do is figure out which p(r), when integrated would give us something like r^2. The answer is clearly r * constant. integral(r * constant dr) = r^2/2 * constant. This has to be equal to r^2/R^2, therefore constant = 2/R^2. Thus you have the probability distribution p(r) = r * 2/R^2
Note: Another more intuitive way to think about the problem is to imagine that you are trying to give each circle of radius r a probability density equal to the proportion of the number of points it has on its circumference. Thus a circle which has radius r will have 2 * pi * r "points" on its circumference. The total number of points is pi * R^2. Thus you should give the circle r a probability equal to (2 * pi * r) / (pi * R^2) = 2 * r/R^2. This is much easier to understand and more intuitive, but it's not quite as mathematically sound.
It really depends on what you mean by 'uniformly random'. This is a subtle point and you can read more about it on the wiki page here: http://en.wikipedia.org/wiki/Bertrand_paradox_%28probability%29, where the same problem, giving different interpretations to 'uniformly random' gives different answers!
Depending on how you choose the points, the distribution could vary, even though they are uniformly random in some sense.
It seems like the blog entry is trying to make it uniformly random in the following sense: If you take a sub-circle of the circle, with the same center, then the probability that the point falls in that region is proportional to the area of the region. That, I believe, is attempting to follow the now standard interpretation of 'uniformly random' for 2D regions with areas defined on them: probability of a point falling in any region (with area well defined) is proportional to the area of that region.
Here is my Python code to generate num random points from a circle of radius rad:
import matplotlib.pyplot as plt
import numpy as np
rad = 10
num = 1000
t = np.random.uniform(0.0, 2.0*np.pi, num)
r = rad * np.sqrt(np.random.uniform(0.0, 1.0, num))
x = r * np.cos(t)
y = r * np.sin(t)
plt.plot(x, y, "ro", ms=1)
plt.axis([-15, 15, -15, 15])
plt.show()
I think that in this case using polar coordinates complicates the problem; it would be much easier to pick random points in a square with sides of length 2R and then keep the points (x,y) such that x^2+y^2 <= R^2.
Solution in Java and the distribution example (2000 points)
public void getRandomPointInCircle() {
    double t = 2 * Math.PI * Math.random();
    double r = Math.sqrt(Math.random());
    double x = r * Math.cos(t);
    double y = r * Math.sin(t);
    System.out.println(x);
    System.out.println(y);
}
Based on the previous solution https://stackoverflow.com/a/5838055/5224246 from @sigfpe.
I once used this method:
This may be totally unoptimized (i.e. it uses an array of points, so it's unusable for big circles) but gives a random enough distribution. You could skip the creation of the matrix and draw directly if you wish to. The method is to randomize all points in a rectangle that fall inside the circle.
bool[,] getMatrix(System.Drawing.Rectangle r) {
    bool[,] matrix = new bool[r.Width, r.Height];
    return matrix;
}

void fillMatrix(ref bool[,] matrix, Vector center) {
    double radius = center.X;
    Random r = new Random();
    for (int y = 0; y < matrix.GetLength(0); y++) {
        for (int x = 0; x < matrix.GetLength(1); x++)
        {
            double distance = (center - new Vector(x, y)).Length;
            if (distance < radius) {
                matrix[x, y] = r.NextDouble() > 0.5;
            }
        }
    }
}

private void drawMatrix(Vector centerPoint, double radius, bool[,] matrix) {
    var g = this.CreateGraphics();
    Bitmap pixel = new Bitmap(1, 1);
    pixel.SetPixel(0, 0, Color.Black);
    for (int y = 0; y < matrix.GetLength(0); y++)
    {
        for (int x = 0; x < matrix.GetLength(1); x++)
        {
            if (matrix[x, y]) {
                g.DrawImage(pixel, new PointF((float)(centerPoint.X - radius + x), (float)(centerPoint.Y - radius + y)));
            }
        }
    }
    g.Dispose();
}

private void button1_Click(object sender, EventArgs e)
{
    System.Drawing.Rectangle r = new System.Drawing.Rectangle(100, 100, 200, 200);
    double radius = r.Width / 2;
    Vector center = new Vector(r.Left + radius, r.Top + radius);
    Vector normalizedCenter = new Vector(radius, radius);
    bool[,] matrix = getMatrix(r);
    fillMatrix(ref matrix, normalizedCenter);
    drawMatrix(center, radius, matrix);
}
First we generate a cdf[x] which is
The probability that a point is less than distance x from the centre of the circle. Assume the circle has a radius of R.
obviously if x is zero then cdf[0] = 0
obviously if x is R then the cdf[R] = 1
obviously if x = r, for some 0 < r < R, then cdf[r] = (Pi r^2)/(Pi R^2)
This is because each "small area" on the circle has the same probability of being picked, so the probability is proportional to the area in question. And the area within distance x of the centre of the circle is Pi x^2.
so cdf[x] = x^2/R^2 because the Pi factors cancel each other out
we have cdf[x]=x^2/R^2 where x goes from 0 to R
So we solve for x
R^2 cdf[x] = x^2
x = R Sqrt[ cdf[x] ]
We can now replace cdf with a random number from 0 to 1
x = R Sqrt[ RandomReal[{0,1}] ]
Finally
r = R Sqrt[ RandomReal[{0,1}] ];
theta = 360 deg * RandomReal[{0,1}];
{r,theta}
we get the polar coordinates
{0.601168 R, 311.915 deg}
This might help people interested in choosing an algorithm for speed; the fastest method is (probably?) rejection sampling.
Just generate a point within the unit square and reject it until it is inside the circle. E.g. (pseudo-code):
def sample(r=1):
    while True:
        x = random(-1, 1)
        y = random(-1, 1)
        if x*x + y*y <= 1:
            return (r * x, r * y)
Although it may run more than once or twice sometimes (and it is not constant time or suited for parallel execution), it is much faster because it doesn't use complex formulas like sin or cos.
The area element in a circle is dA = r dr dphi. That extra factor of r is what breaks the idea of simply choosing r and phi uniformly: phi can indeed be uniform, but r cannot be, otherwise you over-sample "the bull's eye" relative to the boundary. For a uniform distribution over the disk, the density of r must be proportional to r itself, i.e. P(radius <= r) = r^2/R^2.
So to generate points evenly distributed over the circle, pick phi from a uniform distribution and r with density proportional to r.
Alternatively use the Monte Carlo (rejection) method proposed by Mehrdad.
EDIT
To pick an r with density proportional to r, pick a random x uniformly from the interval [0, 1] and calculate r = R*sqrt(x).
To calculate a random phi, pick a random x from the interval [0, 1] and calculate phi = 2*pi*x.
You can also use your intuition.
The area of a circle is pi*r^2
For r=1
This gives us an area of pi. Let us assume that we have some kind of function f that would uniformly distribute N=10 points inside a circle. The ratio here is 10 / pi.
Now we double the radius and the number of points
For r=2 and N=20
This gives an area of 4pi and the ratio is now 20/4pi or 10/2pi. The ratio will get smaller and smaller the bigger the radius is, because its growth is quadratic and the N scales linearly.
To fix this we can just say
x = r^2
sqrt(x) = r
If you would generate a vector in polar coordinates like this
length = random_0_1();
angle = random_0_2pi();
More points would land around the center.
length = sqrt(random_0_1());
angle = random_0_2pi();
length is not uniformly distributed anymore, but the vector will now be uniformly distributed.
There is a linear relationship between the radius and the number of points "near" that radius, so he needs to use a radius distribution that also makes the number of data points near a radius r proportional to r.
I don't know if this question is still open for a new solution with all the answer already given, but I happened to have faced exactly the same question myself. I tried to "reason" with myself for a solution, and I found one. It might be the same thing as some have already suggested here, but anyway here it is:
In order for two elements of the circle's surface to be equal, assuming equal dr's, we must have dtheta1/dtheta2 = r2/r1. Writing the probability for such an element as P(r, theta) = P{r1 < r < r1 + dr, theta1 < theta < theta1 + dtheta1} = f(r, theta)*dr*dtheta1, and setting the two probabilities (for r1 and r2) equal, we arrive at (assuming r and theta are independent) f(r1)/r1 = f(r2)/r2 = constant, which gives f(r) = c*r. The rest, determining the constant c, follows from the condition that f(r) be a PDF.
I am still not sure about the exact '(2/R²)×r', but what is apparent is that the number of points that should land within radius r has to grow like r², not like r.
Check it this way: with the standard (uniform radius) generation, the number of points at some angle theta with radius between 0.1R and 0.2R equals the number with radius between 0.6R and 0.7R, since both intervals have width 0.1R. But the area covered between 0.6R and 0.7R is much larger than the area between 0.1R and 0.2R, so the same number of points gets spread sparsely over a much larger area; this I assume you already know. The function generating the radius therefore must not be linear: since the cumulative count out to radius r must be proportional to r², the uniform seed value has to pass through a square root before it is used (afterwards it is used linearly inside the sin and cos functions). That is why the radius comes from the square root of a uniform value rather than from the value itself. I hope this makes it a little clearer.
Such a fun problem.
The rationale for why a uniformly drawn radius over-samples points near the origin is explained multiple times above. We account for that by taking the square root of U[0,1].
Here's a general solution for a positive r in Python 3.
import numpy
import math
import matplotlib.pyplot as plt

def sq_point_in_circle(r):
    """
    Generate a random point in an r radius circle
    centered around the start of the axis
    """
    t = 2*math.pi*numpy.random.uniform()
    R = (numpy.random.uniform(0,1) ** 0.5) * r
    return(R*math.cos(t), R*math.sin(t))

R = 200   # Radius
N = 1000  # Samples

points = numpy.array([sq_point_in_circle(R) for i in range(N)])
plt.scatter(points[:, 0], points[:, 1])
A programmer solution:
Create a bit map (a matrix of boolean values). It can be as large as you want.
Draw a circle in that bit map.
Create a lookup table of the circle's points.
Choose a random index in this lookup table.
const int RADIUS = 64;
const int MATRIX_SIZE = RADIUS * 2;

bool matrix[MATRIX_SIZE][MATRIX_SIZE] = {0};

struct Point { int x; int y; };

Point lookupTable[MATRIX_SIZE * MATRIX_SIZE];
int numberOfOnBits = 0;

void init()
{
    for (int x = 0 ; x < MATRIX_SIZE ; ++x)
    {
        for (int y = 0 ; y < MATRIX_SIZE ; ++y)
        {
            // circle of radius RADIUS centred at (RADIUS, RADIUS) in the bitmap
            if ((x - RADIUS) * (x - RADIUS) + (y - RADIUS) * (y - RADIUS) < RADIUS * RADIUS)
            {
                matrix[x][y] = true;
                lookupTable[numberOfOnBits].x = x;
                lookupTable[numberOfOnBits].y = y;
                ++numberOfOnBits;
            } // if
        } // for
    } // for
} // ()

Point choose()
{
    int randomIndex = randomInt(numberOfOnBits);
    return lookupTable[randomIndex];
} // ()
The bitmap is only necessary for the explanation of the logic. This is the code without the bitmap:
const int RADIUS = 64;
const int MATRIX_SIZE = RADIUS * 2;

struct Point { int x; int y; };

Point lookupTable[MATRIX_SIZE * MATRIX_SIZE];
int numberOfOnBits = 0;

void init()
{
    for (int x = 0 ; x < MATRIX_SIZE ; ++x)
    {
        for (int y = 0 ; y < MATRIX_SIZE ; ++y)
        {
            // circle of radius RADIUS centred at (RADIUS, RADIUS)
            if ((x - RADIUS) * (x - RADIUS) + (y - RADIUS) * (y - RADIUS) < RADIUS * RADIUS)
            {
                lookupTable[numberOfOnBits].x = x;
                lookupTable[numberOfOnBits].y = y;
                ++numberOfOnBits;
            } // if
        } // for
    } // for
} // ()

Point choose()
{
    int randomIndex = randomInt(numberOfOnBits);
    return lookupTable[randomIndex];
} // ()
1) Choose a random X between -1 and 1.
var X:Number = Math.random() * 2 - 1;
2) Using the circle formula, calculate the maximum and minimum values of Y given that X and a radius of 1:
var YMin:Number = -Math.sqrt(1 - X * X);
var YMax:Number = Math.sqrt(1 - X * X);
3) Choose a random Y between those extremes:
var Y:Number = Math.random() * (YMax - YMin) + YMin;
4) Incorporate your location and radius values in the final value:
var finalX:Number = X * radius + pos.x;
var finalY:Number = Y * radius + pos.y;

Hexagonal Grid Coordinates To Pixel Coordinates

I am working with a hexagonal grid. I have chosen to use this coordinate system because it is quite elegant.
This question talks about generating the coordinates themselves, and is quite useful. My issue now is in converting these coordinates to and from actual pixel coordinates. I am looking for a simple way to find the center of a hexagon with coordinates x,y,z. Assume (0,0) in pixel coordinates is at (0,0,0) in hex coords, and that each hexagon has an edge of length s. It seems to me like x, y, and z should each move my coordinate a certain distance along an axis, but they are interrelated in an odd way that I can't quite wrap my head around.
Bonus points if you can go the other direction and convert any (x,y) point in pixel coordinates to the hex that point belongs in.
For clarity, let the "hexagonal" coordinates be (r,g,b) where r, g, and b are the red, green, and blue coordinates, respectively. The coordinates (r,g,b) and (x,y) are related by the following:
y = 3/2 * s * b
b = 2/3 * y / s
x = sqrt(3) * s * ( b/2 + r)
x = - sqrt(3) * s * ( b/2 + g )
r = (sqrt(3)/3 * x - y/3 ) / s
g = -(sqrt(3)/3 * x + y/3 ) / s
r + b + g = 0
Derivation:
I first noticed that any horizontal row of hexagons (which should have a constant y-coordinate) had a constant b coordinate, so y depended only on b. Each hexagon can be broken into six equilateral triangles with sides of length s; the centers of the hexagons in one row are one and a half side-lengths above/below the centers in the next row (or, perhaps easier to see, the centers in one row are 3 side lengths above/below the centers two rows away), so for each change of 1 in b, y changes 3/2 * s, giving the first formula. Solving for b in terms of y gives the second formula.
The hexagons with a given r coordinate all have centers on a line perpendicular to the r axis at the point on the r axis that is 3/2 * s from the origin (similar to the above derivation of y in terms of b). The r axis has slope -sqrt(3)/3, so a line perpendicular to it has slope sqrt(3); the point on the r axis and on the line has coordinates (3sqrt(3)/4 * s * r, -3/4 * s * r); so an equation in x and y for the line containing the centers of the hexagons with r-coordinate r is y + 3/4 * s * r = sqrt(3) * (x - 3sqrt(3)/4 * s * r). Substituting for y using the first formula and solving for x gives the second formula. (This is not how I actually derived this one, but my derivation was graphical with lots of trial and error and this algebraic method is more concise.)
The set of hexagons with a given r coordinate is the horizontal reflection of the set of hexagons with that g coordinate, so whatever the formula is for the x coordinate in terms of r and b, the x coordinate for that formula with g in place of r will be the opposite. This gives the third formula.
The fourth and fifth formulas come from substituting the second formula for b and solving for r or g in terms of x and y.
The final formula came from observation, verified by algebra with the earlier formulas.

Resources