How to calculate the length of a curve of a math function?

I have curve C and I want to compute the length of the curve between two of its points A and B:
f(x) = x² + 2x
C(x, f(x))
A(-2.0, f(-2.0))
B(0.5, f(0.5))
so x = <-2.0, 0.5>
How do I calculate the curve length between points A and B?
Of course I want to know how to calculate it on paper (by hand) :)
Thank you ;)

You can simply sample many (n) points along the curve and add up the distances between consecutive ones, approximating your curve with many small line segments. That is actually how curve integration works as the number of points goes to infinity. Without any higher math we can set n to some big enough value and sum the segments in an O(n) for loop. For example in C++ like this:
#include <math.h>

double f(double x){ return (x*x)+x+x; } // your function

double length(double x0,double x1,int n) // length of f(x) x=<x0,x1>
{
    int e;
    double x,dx,y,dy,l;
    y=f(x0); dx=(x1-x0)/double(n-1); l=0.0; // start y and length
    for (e=1,x=x0+dx;e;x+=dx)               // loop through whole x range
    {
        if (x>=x1) { x=x1; e=0; }           // end?
        dy=y; y=f(x); dy=y-dy;              // y=f(x) and dy is y-oldy
        l+=sqrt((dx*dx)+(dy*dy));           // add line length
    }
    return l;                               // return length
}
Use it like this (cout also needs #include <iostream> and using namespace std):
cout << length(-2.0,0.5,10) << endl;
cout << length(-2.0,0.5,100) << endl;
cout << length(-2.0,0.5,1000) << endl;
cout << length(-2.0,0.5,10000) << endl;
cout << length(-2.0,0.5,100000) << endl;
cout << length(-2.0,0.5,1000000) << endl;
When the result starts saturating, stop increasing n, as you have found your solution (with some error, of course). Here are the results on my machine:
4.57118083390485
4.30516477250995
4.30776425810517
4.30551273287911
4.30528771762491
4.30526521739629
So we can round the answer to, for example, 4.305...
Of course, if you compute the curve integral algebraically instead of this, then you can obtain the precise answer in O(1) (if it is integrable in closed form, of course)...
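For reference, a minimal sketch of that algebraic route, assuming SymPy is available; it applies the standard arc length formula L = ∫ sqrt(1 + f'(x)²) dx over x = <-2.0, 0.5>:
import sympy as sp

x = sp.symbols('x')
f = x**2 + 2*x
# arc length integrand: sqrt(1 + f'(x)^2)
L = sp.integrate(sp.sqrt(1 + sp.diff(f, x)**2), (x, -2, sp.Rational(1, 2)))
print(sp.N(L, 15))   # roughly 4.30526271747890, matching the saturated loop result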

Here is Python 3 code that will approximate the length of an arc of a function graph. It is designed for continuous functions, though no computer program can do the infinitely many calculations needed to get the true result.
"""Compute the arc length of the curve defined by y = x**2 + 2*x for
-2 <= x <= 0.5 without using calculus.
"""
from math import hypot
def arclength(f, a, b, tol=1e-6):
"""Compute the arc length of function f(x) for a <= x <= b. Stop
when two consecutive approximations are closer than the value of
tol.
"""
nsteps = 1 # number of steps to compute
oldlength = 1.0e20
length = 1.0e10
while abs(oldlength - length) >= tol:
nsteps *= 2
fx1 = f(a)
xdel = (b - a) / nsteps # space between x-values
oldlength = length
length = 0
for i in range(1, nsteps + 1):
fx0 = fx1 # previous function value
fx1 = f(a + i * (b - a) / nsteps) # new function value
length += hypot(xdel, fx1 - fx0) # length of small line segment
return length
def f(x):
return x**2 + 2*x
print(arclength(f, -2.0, 0.5, 1e-10))
You can set the "tolerance" for the result. This routine basically follows the mathematical definition of the length of an arc. It approximates the curve with joined line segments and calculates the combined length of the segments. The number of segments is doubled until two consecutive length approximations are closer than the given tolerance. In the graphic below the blue segments are added together, then the red segments, and so on. Since a line is the shortest distance between two points, all the approximations and the final answer will be less than the true answer (unless round-off or other errors occur in the calculations).
The answer given by that code is
4.3052627174649505
The result from calculus, reduced to a decimal number, is
4.305262717478898
so my result is a little low, as expected, and is within the desired tolerance.
My routine does have some features to reduce computations and improve accuracy, but more could be done. Ask if you need more, such as the calculus closed form of the answer. Warning--that answer involves the inverse hyperbolic sine function.
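For the record, if I apply that formula (L = ∫ sqrt(1 + f'(x)²) dx with f'(x) = 2x + 2 and the substitution u = 2x + 2), the closed form comes out as
L = (3·sqrt(10) + 2·sqrt(5) + asinh(3) + asinh(2)) / 4 ≈ 4.3052627175
which matches the decimal value above.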

Related

Fixed point multiplication in assembly (x86)

I want to multiply and divide an unsigned 8.8 fixed-point number in the
ax register by 1.00125 and store the result in ax as well.
I know that fixed point multiplication/division requires some extra steps
but I have no idea how to implement those in assembly.
Help is greatly appreciated.
If you care about accuracy, 1.00125 can't be stored exactly in any integer format or in any floating point format because it's a repeating fraction in binary (in binary it's 1.000000000101000111101011100001010001111010111...b where that 00001010001111010111 sequence repeats forever). For this reason I'd convert it into the rational number 801/800; and then do x * 1.00125 = (x * 801) / 800 (possibly with "round to nearest" on the division).
If you don't care about accuracy, then the more bits you can use for "1.00125" the closer the result will be to the correct answer. With 8 bits ("1.7 fixed point") the closest you can get is 1.0000000b, which means you can just skip the multiplication (x * 1.00125 = x). With 16 bits ("1.15 fixed point") the closest you can get is 1.000000000101001b (or 1.001251220703125 in decimal).
However, you can cheat more. Specifically, you can significantly increase accuracy with the same number of bits by doing (x * 1) + (x * 0.00125). E.g. instead of having a 16 bit constant like 1.000000000101001b (where 9 bits are zeros), you can have a 16-bit constant like 0.0000000001010001111010111b (where the 16 bits are the last 16 bits without any of the leading zeros). In this case the constant is very close (like 0.00124999880) rather than "less close" (like 1.001220703125 was).
Ironically, with only 16 bits, this "0.00125" is more accurate than a 32-bit floating point representation of 1.00125 can be.
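As a quick sanity check (a small Python sketch; the variable names are mine), this is where the 41943 used below comes from, i.e. 0.00125 scaled by 2^25 and rounded:
frac = 0.00125
scaled = round(frac * 2**25)   # constant to feed the multiply
print(scaled)                  # 41943
print(scaled / 2**25)          # ~0.0012499988079, the value actually being multiplied by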
So.. in assembly (assuming everything is unsigned) it might look like:
;ax = x << 8 (or x as an 8.8 fixed point number)
mov cx,ax ;cx = x << 8
mov bx,41943 ;bx = 41943 = 0.00124999880 << 25
mul bx ;dx:ax = (x << 8) * (0.00124999880 << 25) = x * 0.00124999880 << 33
;dx = x * 0.00124999880 << 17
shr dx,9 ;dx = x * 0.00124999880 << 17 >> 9 = x * 0.00124999880 << 8, carry flag = last bit shifted out
adc dx,0 ;Round up to nearest (add 1 if last bit shifted out was set)
lea ax,[dx+cx] ;ax = x << 8 + x * 0.00124999880 << 8 = x * 1.00124999880 << 8
Of course the problem here is that converting it back to "8.8 fixed point" ruins most of the accuracy anyway. To keep most of the accuracy, you could use a 32-bit result ("8.24 fixed point") instead. This might look like:
;ax = x << 8 (or x as an 8.8 fixed point number)
mov cx,ax ;cx = x << 8
mov bx,41943 ;bx = 41943 = 0.00124999880 << 25
mul bx ;dx:ax = (x << 8) * (0.00124999880 << 25) = x * 0.00124999880 << 33
add ax,1 << 8 ;To cause the following shift to round to nearest
adc dx,0
shrd ax,dx,9
shr dx,9 ;dx:ax = x * 0.00124999880 << 33 >> 9 = x * 0.00124999880 << 24
;cx:0 = x << 24
add dx,cx ;dx:ax = x << 24 + x * 0.00124999880 << 24 = x * 1.00124999880 << 24
The other problem is that there's potential overflow. E.g. if x was 0xFF.FF (or about 255.996) the result would be something like 256.32 which is too big to fit in an "8.8" or "8.24" or "8.anything" fixed point format. To avoid that problem you can just increase the number of integer bits (and reduce the accuracy by 1 bit) - e.g. make the result "9.7 fixed point", or "9.23 fixed point".
The important points here are:
a) For "fixed point" calculations, every operation (multiplication, division, addition, ...) causes the decimal point to move.
b) Because the decimal point keeps moving, it's best to adopt a standard notation for where the decimal point is at each step. My way is to include an "explicit shift" in the comments (e.g. "x << 8" rather than just "x"). This "explicit shift documented in the comments" makes it easy to determine where the decimal point moves, and if/how much you need to shift left/right to convert to a different fixed point format.
c) For good code, you need to pay attention to accuracy and overflow, and this causes the decimal point to move around even more (and makes the use of a "standard notation for where the decimal point is" more important).
An easy solution is to just use the x87 floating point unit to do the multiplication. Assuming real mode with nasm (untested):
example:
push bp
mov bp, sp ; establish stack frame
push ax
push ax ; make space for quotient
fild word [bp-2] ; load number
fld st0 ; duplicate top of stack
fmul dword [factor] ; compute product
fistp word [bp-2]
fmul dword [invfac] ; compute quotient
fistp word [bp-4]
pop dx ; quotient
pop ax ; product
pop bp ; tear down stack frame
ret
factor dd 1.00125
invfac dd 0.99875156 ; approximately 1/1.00125
This leaves the quotient in dx and the product in ax. Rounding is done according to the rounding mode configured in the x87 FPU (which should be rounding to nearest by default).
One thing to understand about fixed-point multiplication is that the point of the result is the point of operand 1 plus the point of operand 2.
Thus, when multiplying two numbers with the fixed point at zero places, we get a result with the fixed point at zero places.
And when multiplying two numbers with the fixed point at 8 (binary) places, we get a number with the fixed point at 16 places.
So you need to scale such a result back down as needed.
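A tiny Python sketch of that rule with made-up values: multiplying two 8.8 numbers yields a raw product with 16 fractional bits, which must then be scaled back down.
a = int(1.5  * 2**8)       # 1.5  as 8.8 fixed point -> 384
b = int(2.25 * 2**8)       # 2.25 as 8.8 fixed point -> 576
raw = a * b                # point positions add: the raw product is 16.16 fixed point
print(raw / 2**16)         # 3.375, the exact product
print((raw >> 8) / 2**8)   # 3.375 again, after rescaling the result back to 8.8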

Return values for arccosine?

I implemented a day/night shader built on the basis that only pixels on the side of an object that is facing the directional light source are illuminated. I calculate this based on the unit vectors between the directional light's position and the position of the pixel in 3D space:
float3 direction = normalize(Light.Position - Object.Position);
float theta = abs(acos(dot(normalize(Object.Position), direction))) * 180 / 3.14f;
if (theta < 180)
color = float3(1.0f);
else
color = float3(0.2f);
return float4(color, 1.0f);
This works well, but since I am brushing up on my math lately, it got me thinking that I should make sure I understand what acos is returning.
Mathematically, I know that the arccosine should give me an angle in radians from a value between -1 and 1, while cosine should give me a value between -1 and 1 from an angle in radians.
The documentation states that the input value should be between -1 and 1 for acos, which follows that idea, but it doesn't tell me whether the return value is 0 to π, -π to π, 0 to 2π, or some similar range.
Return Value
The arccosine of the x parameter.
Type Description
Name  Template Type               Component Type  Size
x     scalar, vector, or matrix   float           any
ret   same as input x             float           same dimension(s) as input x
HLSL doesn't really give me a way to test this very easily, so I'm wondering if anyone has any documentation on this.
What is the return range for the HLSL function acos?
I went through some testing on this topic and have discovered that the HLSL version of acos returns a value between 0 and π. I proved this to be true with the following:
n = 0..3
d = [0, 90, 180, 181]
r = π / 180 * d[n]
c = cos(r)
a = acos(c)
The following is the result of the evaluations for d[n]:
d[0] returns a = 0.
d[1] returns a = π/2.
d[2] returns a = π.
d[3] returns a ~= 3.12....
This tells us that the return value for acos stays true to the usual principal value range for arccosine:
0 ≤ acos(x) ≤ π
It also remains consistent with the identity acos(cos(θ)) = θ for θ in [0, π]; inputs from outside that range fold back into it, as the d[3] case shows.
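As a quick cross-check outside HLSL (a small Python sketch, assuming the HLSL intrinsic follows the same principal-value convention as math.acos):
import math
for deg in (0, 90, 180, 181):
    theta = math.radians(deg)
    print(deg, math.acos(math.cos(theta)))
# 0 -> 0.0, 90 -> 1.5707..., 180 -> 3.1415..., 181 -> 3.1241... (folded back into [0, pi])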
I have provided feedback to Microsoft with regards to the lack of detailed documentation on HLSL intrinsic functions in comparison to more common languages.

How to compute normals for a segment line in 3D

I have exported some hair particles from Blender (a hairstyle). These are composed of several lines (GL_LINES). My OpenGL program displays these particles without any problem. Now I just want to apply light properties to these particles. Blender does not export the normal vectors so I need to compute them by myself. I know the following rule:
If we define a line segment as [AB] in two dimensions,
we have dx = xB - xA and dy = yB - yA, then the normals are N1(-dy, dx) and N2(dy, -dx).
I hope I did not make any mistake.
But I don't know the rule for a line segment in 3D space, where I add the z dimension to my line segment coordinates (for instance A(5, 2, 3) and B(0, 0, -5)).
Can anyone help me?
Since Aki forgot that comments aren't answers:
Lines in 3D space don't have a normal. Technically, lines in 2D space don't have a normal either; they have two normals.
There are an infinite number of directions that are perpendicular to a line in 3D space. All of these normals are in the same plane, but with different directions. Without some more advanced algorithm (likely based on adjacent lines), there is no way to pick one of these normals over another.
If you assume that you can get two vectors to begin with (and it looks like that's what you are saying), call them v and w; to get a normal vector, take the cross product. It's not a bad idea to normalize v and w to begin with, depending on the situation. The cross product is given by:
v × w = (v_2 w_3 − v_3 w_2, v_3 w_1 − v_1 w_3, v_1 w_2 − v_2 w_1)
Here v_i is the i-th component of v and so on; symbols written next to each other are multiplied. You, of course, have plus or minus this vector, giving two possibilities.
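For example, a small Python sketch using the points from the question and an arbitrarily chosen second vector (NumPy assumed; any linear algebra library will do):
import numpy as np

A = np.array([5.0, 2.0, 3.0])
B = np.array([0.0, 0.0, -5.0])
v = A - B                        # direction of the segment
w = np.array([0.0, 0.0, 1.0])    # any vector not parallel to v
n = np.cross(v, w)
n /= np.linalg.norm(n)           # one unit normal; -n is the other
print(n, np.dot(n, v))           # dot product is ~0, so n is perpendicular to [AB]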
I had a similar question, and even used the indefinite article "a". Some have suggested there is no normal to a 3D line segment by saying there is an infinite number of them, yet they miss the indefinite article "a" --- which I take to mean any one of the infinitely many.
What happens when someone does not have two vectors to start with?
Let vector be the unit vector of the line segment (or of the vector itself).
Create a rotation matrix around vector to obtain one of the infinitely many normals.
It took some time, but I checked this using the Eigen template library and 10000 random test samples. Here is the code:
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <cmath>

Eigen::MatrixXd samples = Eigen::MatrixXd::Random(10000, 3); // 10000x3 matrix filled with random numbers in (-1,1)
for (int i = 0; i < 10000; ++i)
{
    Eigen::Vector3d vector(samples(i, 0), samples(i, 1), samples(i, 2));
    vector.normalize();
    Eigen::Vector3d zaxis(0, 0, 1);
    Eigen::Vector3d xaxis = zaxis.cross(vector);
    xaxis.normalize();
    Eigen::Vector3d yaxis = vector.cross(xaxis);
    yaxis.normalize();
    Eigen::Matrix3d m;
    m(0, 0) = xaxis(0); m(0, 1) = yaxis(0); m(0, 2) = vector(0);
    m(1, 0) = xaxis(1); m(1, 1) = yaxis(1); m(1, 2) = vector(1);
    m(2, 0) = xaxis(2); m(2, 1) = yaxis(2); m(2, 2) = vector(2);
    // one of two easy points to use to get 1 of the infinite normals --- the other being (1, 0, 0)
    Eigen::Vector3d point(0, 1, 0);
    point = m * point;
    point.normalize();
    auto norm = point.cross(vector);
    norm.normalize();              // 1 of an infinite number of normals
    auto check = norm.dot(vector); // verify with dot product
    if (std::abs(check) >= 1e-12)
    {
        // complain
    }
}

Generate a random point within a circle (uniformly)

I need to generate a uniformly random point within a circle of radius R.
I realize that by just picking a uniformly random angle in the interval [0 ... 2π), and uniformly random radius in the interval (0 ... R) I would end up with more points towards the center, since for two given radii, the points in the smaller radius will be closer to each other than for the points in the larger radius.
I found a blog entry on this over here but I don't understand his reasoning. I suppose it is correct, but I would really like to understand where he gets (2/R²)×r from and how he derives the final solution.
Update: 7 years after posting this question I still hadn't received a satisfactory answer on the actual question regarding the math behind the square root algorithm. So I spent a day writing an answer myself. Link to my answer.
How to generate a random point within a circle of radius R:
r = R * sqrt(random())
theta = random() * 2 * PI
(Assuming random() gives a value between 0 and 1 uniformly)
If you want to convert this to Cartesian coordinates, you can do
x = centerX + r * cos(theta)
y = centerY + r * sin(theta)
Why sqrt(random())?
Let's look at the math that leads up to sqrt(random()). Assume for simplicity that we're working with the unit circle, i.e. R = 1.
The average distance between points should be the same regardless of how far from the center we look. This means for example, that looking on the perimeter of a circle with circumference 2 we should find twice as many points as the number of points on the perimeter of a circle with circumference 1.
Since the circumference of a circle (2πr) grows linearly with r, it follows that the number of random points should grow linearly with r. In other words, the desired probability density function (PDF) grows linearly. Since a PDF should have an area equal to 1 and the maximum radius is 1, we have
PDF(x) = 2x for 0 ≤ x ≤ 1
So we know how the desired density of our random values should look.
Now: How do we generate such a random value when all we have is a uniform random value between 0 and 1?
We use a trick called inverse transform sampling
From the PDF, create the cumulative distribution function (CDF)
Mirror this along y = x
Apply the resulting function to a uniform value between 0 and 1.
Sounds complicated? Let me insert a blockquote with a little side track that conveys the intuition:
Suppose we want to generate a random point with the following distribution:
That is
1/5 of the points uniformly between 1 and 2, and
4/5 of the points uniformly between 2 and 3.
The CDF is, as the name suggests, the cumulative version of the PDF. Intuitively: While PDF(x) describes the number of random values at x, CDF(x) describes the number of random values less than x.
In this case the CDF would look like:
To see how this is useful, imagine that we shoot bullets from left to right at uniformly distributed heights. As the bullets hit the line, they drop down to the ground:
See how the density of the bullets on the ground correspond to our desired distribution! We're almost there!
The problem is that for this function, the y axis is the output and the x axis is the input. We can only "shoot bullets from the ground straight up"! We need the inverse function!
This is why we mirror the whole thing; x becomes y and y becomes x:
We call this CDF⁻¹. To get values according to the desired distribution, we use CDF⁻¹(random()).
…so, back to generating random radius values where our PDF equals 2x.
Step 1: Create the CDF:
Since we're working with reals, the CDF is expressed as the integral of the PDF.
CDF(x) = ∫ 2x dx = x²
Step 2: Mirror the CDF along y = x:
Mathematically this boils down to swapping x and y and solving for y:
CDF: y = x²
Swap: x = y²
Solve: y = √x
CDF⁻¹: y = √x
Step 3: Apply the resulting function to a uniform value between 0 and 1
CDF⁻¹(random()) = √random()
Which is what we set out to derive :-)
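If you want a quick numerical sanity check of that result (a small Python sketch of mine): since CDF(x) = x², about 25% of the values √random() should fall below 0.5.
import random
samples = [random.random() ** 0.5 for _ in range(100000)]
print(sum(s < 0.5 for s in samples) / len(samples))   # prints roughly 0.25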
Let's approach this like Archimedes would have.
How can we generate a point uniformly in a triangle ABC, where |AB|=|BC|? Let's make this easier by extending to a parallelogram ABCD. It's easy to generate points uniformly in ABCD. We uniformly pick a random point X on AB and Y on BC and choose Z such that XBYZ is a parallelogram. To get a uniformly chosen point in the original triangle we just fold any points that appear in ADC back down to ABC along AC.
Now consider a circle. In the limit we can think of it as infinitely many isosceles triangles ABC with B at the origin and A and C on the circumference vanishingly close to each other. We can pick one of these triangles simply by picking an angle theta. So we now need to generate a distance from the center by picking a point in the sliver ABC. Again, extend to ABCD, where D is now twice the radius from the circle center.
Picking a random point in ABCD is easy using the above method. Pick a random point on AB. Uniformly pick a random point on BC. I.e. pick a pair of random numbers x and y uniformly on [0,R] giving distances from the center. Our triangle is a thin sliver so AB and BC are essentially parallel. So the point Z is simply a distance x+y from the origin. If x+y>R we fold back down.
Here's the complete algorithm for R=1. I hope you agree it's pretty simple. It uses trig, but you can give a guarantee on how long it'll take, and how many random() calls it needs, unlike rejection sampling.
t = 2*pi*random()
u = random()+random()
r = if u>1 then 2-u else u
[r*cos(t), r*sin(t)]
Here it is in Mathematica.
f[] := Block[{u, t, r},
u = Random[] + Random[];
t = Random[] 2 Pi;
r = If[u > 1, 2 - u, u];
{r Cos[t], r Sin[t]}
]
ListPlot[Table[f[], {10000}], AspectRatio -> Automatic]
Here is a fast and simple solution.
Pick two random numbers in the range (0, 1), namely a and b. If b < a, swap them. Your point is (b*R*cos(2*pi*a/b), b*R*sin(2*pi*a/b)).
You can think about this solution as follows. If you took the circle, cut it, then straightened it out, you'd get a right-angled triangle. Scale that triangle down, and you'd have a triangle from (0, 0) to (1, 0) to (1, 1) and back again to (0, 0). All of these transformations change the density uniformly. What you've done is uniformly picked a random point in the triangle and reversed the process to get a point in the circle.
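A small Python sketch of that recipe as I read it (function and variable names are my own):
import math, random

def random_point_in_circle(R=1.0):
    a, b = random.random(), random.random()
    if b < a:
        a, b = b, a                      # ensure b >= a
    if b == 0.0:                         # avoid division by zero in the (rare) a = b = 0 case
        return (0.0, 0.0)
    angle = 2 * math.pi * a / b
    return (b * R * math.cos(angle), b * R * math.sin(angle))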
Note that with the naive approach the point density is inversely proportional to the radius; hence instead of picking r from [0, r_max], pick from [0, r_max^2], then compute your coordinates as:
x = sqrt(r) * cos(angle)
y = sqrt(r) * sin(angle)
This will give you uniform point distribution on a disk.
http://mathworld.wolfram.com/DiskPointPicking.html
Think about it this way. If you have a rectangle where one axis is radius and one is angle, the points inside this rectangle that are near radius 0 will all fall very close to the origin (that is, close together on the circle). However, the points near radius R will all fall near the edge of the circle (that is, far apart from each other).
This might give you some idea of why you are getting this behavior.
The factor that's derived on that link tells you how much corresponding area in the rectangle needs to be adjusted to not depend on the radius once it's mapped to the circle.
Edit: So what he writes in the link you share is, "That’s easy enough to do by calculating the inverse of the cumulative distribution, and we get for r:".
The basic premise is here that you can create a variable with a desired distribution from a uniform by mapping the uniform by the inverse function of the cumulative distribution function of the desired probability density function. Why? Just take it for granted for now, but this is a fact.
Here's my somewhat intuitive explanation of the math. The density function f(r) with respect to r has to be proportional to r itself. Understanding this fact is part of any basic calculus book; see the sections on polar area elements. Some other posters have mentioned this.
So we'll call it f(r) = C*r;
This turns out to be most of the work. Now, since f(r) should be a probability density, you can easily see that by integrating f(r) over the interval (0,R) you get that C = 2/R^2 (this is an exercise for the reader.)
Thus, f(r) = 2*r/R^2
OK, so that's how you get the formula in the link.
Then, the final part is going from the uniform random variable u in (0,1) you must map by the inverse function of the cumulative distribution function from this desired density f(r). To understand why this is the case you need to find an advanced probability text like Papoulis probably (or derive it yourself.)
Integrating f(r) you get F(r) = r^2/R^2
To find the inverse function of this you set u = r^2/R^2 and then solve for r, which gives you r = R * sqrt(u)
This totally makes sense intuitively too: u = 0 should map to r = 0, and u = 1 should map to r = R. Also, it goes by the square root function, which makes sense and matches the link.
Let ρ (radius) and φ (azimuth) be two random variables corresponding to polar coordinates of an arbitrary point inside the circle. If the points are uniformly distributed then what is the distribution function of ρ and φ?
For any r: 0 < r < R the probability of the radius coordinate ρ being less than r is
P[ρ < r] = P[point is within a circle of radius r] = S1 / S0 = (r/R)²
where S1 and S0 are the areas of the circles of radius r and R respectively.
So the CDF can be given as:
CDF(r) = 0 for r ≤ 0
CDF(r) = (r/R)² for 0 < r ≤ R
CDF(r) = 1 for r > R
And the PDF:
PDF(r) = d/dr CDF(r) = 2r/R² for 0 < r ≤ R.
Note that for R=1 the random variable sqrt(X), where X is uniform on [0, 1), has exactly this CDF (because P[sqrt(X) < y] = P[X < y²] = y² for 0 < y ≤ 1).
The distribution of φ is obviously uniform from 0 to 2*π. Now you can create random polar coordinates and convert them to Cartesian using trigonometric equations:
x = ρ * cos(φ)
y = ρ * sin(φ)
Can't resist posting Python code for R=1.
from matplotlib import pyplot as plt
import numpy as np
rho = np.sqrt(np.random.uniform(0, 1, 5000))
phi = np.random.uniform(0, 2*np.pi, 5000)
x = rho * np.cos(phi)
y = rho * np.sin(phi)
plt.scatter(x, y, s = 4)
plt.show()
You will get a uniformly filled disk.
The reason why the naive solution doesn't work is that it gives a higher probability density to the points closer to the circle center. In other words the circle that has radius r/2 has probability r/2 of getting a point selected in it, but it has area (number of points) pi*r^2/4.
Therefore we want a radius probability density to have the following property:
The probability of choosing a radius smaller or equal to a given r has to be proportional to the area of the circle with radius r. (because we want to have a uniform distribution on the points and larger areas mean more points)
In other words we want the probability of choosing a radius between [0,r] to be equal to its share of the overall area of the circle. The total circle area is pi*R^2, and the area of the circle with radius r is pi*r^2. Thus we would like the probability of choosing a radius between [0,r] to be (pi*r^2)/(pi*R^2) = r^2/R^2.
Now comes the math:
The probability of choosing a radius between [0,r] is the integral of p(r) dr from 0 to r (that's just because we add all the probabilities of the smaller radii). Thus we want integral(p(r)dr) = r^2/R^2. We can clearly see that R^2 is a constant, so all we need to do is figure out which p(r), when integrated would give us something like r^2. The answer is clearly r * constant. integral(r * constant dr) = r^2/2 * constant. This has to be equal to r^2/R^2, therefore constant = 2/R^2. Thus you have the probability distribution p(r) = r * 2/R^2
Note: Another more intuitive way to think about the problem is to imagine that you are trying to give each circle of radius r a probability density equal to the proportion of the number of points it has on its circumference. Thus a circle which has radius r will have 2 * pi * r "points" on its circumference. The total number of points is pi * R^2. Thus you should give the circle r a probability equal to (2 * pi * r) / (pi * R^2) = 2 * r/R^2. This is much easier to understand and more intuitive, but it's not quite as mathematically sound.
It really depends on what you mean by 'uniformly random'. This is a subtle point and you can read more about it on the wiki page here: http://en.wikipedia.org/wiki/Bertrand_paradox_%28probability%29, where the same problem, giving different interpretations to 'uniformly random' gives different answers!
Depending on how you choose the points, the distribution could vary, even though they are uniformly random in some sense.
It seems like the blog entry is trying to make it uniformly random in the following sense: If you take a sub-circle of the circle, with the same center, then the probability that the point falls in that region is proportional to the area of the region. That, I believe, is attempting to follow the now standard interpretation of 'uniformly random' for 2D regions with areas defined on them: probability of a point falling in any region (with area well defined) is proportional to the area of that region.
Here is my Python code to generate num random points from a circle of radius rad:
import matplotlib.pyplot as plt
import numpy as np
rad = 10
num = 1000
t = np.random.uniform(0.0, 2.0*np.pi, num)
r = rad * np.sqrt(np.random.uniform(0.0, 1.0, num))
x = r * np.cos(t)
y = r * np.sin(t)
plt.plot(x, y, "ro", ms=1)
plt.axis([-15, 15, -15, 15])
plt.show()
I think that in this case using polar coordinates complicates the problem. It would be much easier to pick random points in a square with sides of length 2R and then select the points (x,y) such that x² + y² ≤ R².
Solution in Java and the distribution example (2000 points)
public void getRandomPointInCircle() {
    double t = 2 * Math.PI * Math.random();
    double r = Math.sqrt(Math.random());
    double x = r * Math.cos(t);
    double y = r * Math.sin(t);
    System.out.println(x);
    System.out.println(y);
}
Based on the previous solution https://stackoverflow.com/a/5838055/5224246 from @sigfpe
I once used this method:
This may be totally unoptimized (i.e. it uses an array of points, so it's unusable for big circles) but it gives a random enough distribution. You could skip the creation of the matrix and draw directly if you wish. The method is to randomize all points in a rectangle that fall inside the circle.
bool[,] getMatrix(System.Drawing.Rectangle r) {
    bool[,] matrix = new bool[r.Width, r.Height];
    return matrix;
}

void fillMatrix(ref bool[,] matrix, Vector center) {
    double radius = center.X;
    Random r = new Random();
    for (int y = 0; y < matrix.GetLength(0); y++) {
        for (int x = 0; x < matrix.GetLength(1); x++)
        {
            double distance = (center - new Vector(x, y)).Length;
            if (distance < radius) {
                matrix[x, y] = r.NextDouble() > 0.5;
            }
        }
    }
}

private void drawMatrix(Vector centerPoint, double radius, bool[,] matrix) {
    var g = this.CreateGraphics();
    Bitmap pixel = new Bitmap(1,1);
    pixel.SetPixel(0, 0, Color.Black);
    for (int y = 0; y < matrix.GetLength(0); y++)
    {
        for (int x = 0; x < matrix.GetLength(1); x++)
        {
            if (matrix[x, y]) {
                g.DrawImage(pixel, new PointF((float)(centerPoint.X - radius + x), (float)(centerPoint.Y - radius + y)));
            }
        }
    }
    g.Dispose();
}

private void button1_Click(object sender, EventArgs e)
{
    System.Drawing.Rectangle r = new System.Drawing.Rectangle(100,100,200,200);
    double radius = r.Width / 2;
    Vector center = new Vector(r.Left + radius, r.Top + radius);
    Vector normalizedCenter = new Vector(radius, radius);
    bool[,] matrix = getMatrix(r);
    fillMatrix(ref matrix, normalizedCenter);
    drawMatrix(center, radius, matrix);
}
First we generate a cdf[x] which is
The probability that a point is less than distance x from the centre of the circle. Assume the circle has a radius of R.
Obviously, if x is zero then cdf[0] = 0.
Obviously, if x is R then cdf[R] = 1.
For x in between, cdf[x] = (Pi x^2)/(Pi R^2).
This is because each "small area" in the circle has the same probability of being picked, so the probability is proportional to the area in question, and the area within distance x of the centre of the circle is Pi x^2.
So cdf[x] = x^2/R^2, because the Pi terms cancel each other out.
we have cdf[x]=x^2/R^2 where x goes from 0 to R
So we solve for x
R^2 cdf[x] = x^2
x = R Sqrt[ cdf[x] ]
We can now replace cdf with a random number from 0 to 1
x = R Sqrt[ RandomReal[{0,1}] ]
Finally
r = R Sqrt[ RandomReal[{0,1}] ];
theta = 360 deg * RandomReal[{0,1}];
{r,theta}
we get the polar coordinates
{0.601168 R, 311.915 deg}
This might help people interested in choosing an algorithm for speed; the fastest method is (probably?) rejection sampling.
Just generate a point within the unit square and reject it until it is inside a circle. E.g (pseudo-code),
def sample(r=1):
    while True:
        x = random(-1, 1)
        y = random(-1, 1)
        if x*x + y*y <= 1:
            return (x * r, y * r)
Although it may need to run more than once or twice sometimes (and it is not constant time or suited for parallel execution), it is much faster on average because it doesn't use expensive functions like sin or cos; the probability of accepting a sample is π/4, so it takes about 1.27 attempts on average.
The area element in a circle is dA=rdr*dphi. That extra factor r destroyed your idea to randomly choose a r and phi. While phi is distributed flat, r is not, but flat in 1/r (i.e. you are more likely to hit the boundary than "the bull's eye").
So to generate points evenly distributed over the circle pick phi from a flat distribution and r from a 1/r distribution.
Alternatively use the Monte Carlo method proposed by Mehrdad.
EDIT
To pick a random r flat in 1/r you could pick a random x from the interval [1/R, infinity] and calculate r=1/x. r is then distributed flat in 1/r.
To calculate a random phi pick a random x from the interval [0, 1] and calculate phi=2*pi*x.
You can also use your intuition.
The area of a circle is pi*r^2
For r=1
This gives us an area of pi. Let us assume that we have some kind of function f that would uniformly distribute N=10 points inside a circle. The ratio here is 10 / pi.
Now we double the radius and the number of points.
For r=2 and N=20
This gives an area of 4pi and the ratio is now 20/4pi, or 10/2pi. The ratio will get smaller and smaller the bigger the radius is, because the area grows quadratically while N scales linearly.
To fix this we can just say
x = r^2
sqrt(x) = r
If you would generate a vector in polar coordinates like this
length = random_0_1();
angle = random_0_2pi();
More points would land around the center.
length = sqrt(random_0_1());
angle = random_0_2pi();
length is not uniformly distributed anymore, but the vector will now be uniformly distributed.
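A quick numeric contrast of the two variants (a small Python sketch of mine): for points uniform over the disk the mean radius is 2/3, while the naive version gives a mean of 1/2.
import random

N = 100000
naive = sum(random.random() for _ in range(N)) / N          # mean length without the sqrt
fixed = sum(random.random() ** 0.5 for _ in range(N)) / N   # mean length with the sqrt
print(naive, fixed)   # roughly 0.50 vs 0.67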
There is a linear relationship between the radius and the number of points "near" that radius, so he needs to use a radius distribution that also makes the number of data points near a radius r proportional to r.
I don't know if this question is still open for a new solution with all the answer already given, but I happened to have faced exactly the same question myself. I tried to "reason" with myself for a solution, and I found one. It might be the same thing as some have already suggested here, but anyway here it is:
In order for two elements of the circle's surface to be equal, assuming equal dr's, we must have dtheta1/dtheta2 = r2/r1. Writing the probability for such an element as P(r, theta) = P{ r1 < r < r1 + dr, theta1 < theta < theta1 + dtheta1 } = f(r, theta)*dr*dtheta1, and setting the two probabilities (for r1 and r2) equal, we arrive (assuming r and theta are independent) at f(r1)/r1 = f(r2)/r2 = constant, which gives f(r) = c*r. The rest, determining the constant c, follows from the condition that f(r) must be a PDF.
I am still not sure about the exact '(2/R²)×r', but what is apparent is that the number of points that need to be distributed up to a given r must grow like r², not r.
Check it this way: the number of points at some angle theta with radius between 0.1R and 0.2R would, with the standard generation, equal the number of points with radius between 0.6R and 0.7R, since both intervals have the same width of 0.1R. But the area covered between 0.6R and 0.7R is much larger than the area covered between 0.1R and 0.2R, so an equal number of points would be sparsely spread over the larger area (this you probably already know). So the function generating the random radii must not be linear but quadratic in its cumulative count; and since the generated value is afterwards used linearly inside the sin and cos functions, the radius must be obtained from the square root of the uniform seed rather than from the seed itself. I hope this makes it a little clearer.
Such a fun problem.
The rationale -- that with a naive choice the probability density of a point decreases as the distance from the origin increases -- is explained multiple times above. We account for that by taking the square root of U[0,1].
Here's a general solution for a positive r in Python 3.
import numpy
import math
import matplotlib.pyplot as plt
def sq_point_in_circle(r):
    """
    Generate a random point in an r radius circle
    centered around the start of the axis
    """
    t = 2*math.pi*numpy.random.uniform()
    R = (numpy.random.uniform(0,1) ** 0.5) * r
    return (R*math.cos(t), R*math.sin(t))

R = 200  # Radius
N = 1000 # Samples

points = numpy.array([sq_point_in_circle(R) for i in range(N)])
plt.scatter(points[:, 0], points[:, 1])
plt.show()
A programmer solution:
Create a bit map (a matrix of boolean values). It can be as large as you want.
Draw a circle in that bit map.
Create a lookup table of the circle's points.
Choose a random index in this lookup table.
const int RADIUS = 64;
const int MATRIX_SIZE = RADIUS * 2;
bool matrix[MATRIX_SIZE][MATRIX_SIZE] = {0};

struct Point { int x; int y; };
Point lookupTable[MATRIX_SIZE * MATRIX_SIZE];
int numberOfOnBits = 0;

void init()
{
    for (int x = 0 ; x < MATRIX_SIZE ; ++x)
    {
        for (int y = 0 ; y < MATRIX_SIZE ; ++y)
        {
            int dx = x - RADIUS, dy = y - RADIUS; // centre the circle in the bitmap
            if (dx * dx + dy * dy < RADIUS * RADIUS)
            {
                matrix[x][y] = true;
                lookupTable[numberOfOnBits].x = x;
                lookupTable[numberOfOnBits].y = y;
                ++numberOfOnBits;
            } // if
        } // for
    } // for
} // ()

Point choose()
{
    int randomIndex = randomInt(numberOfOnBits);
    return lookupTable[randomIndex];
} // ()
The bitmap is only necessary for the explanation of the logic. This is the code without the bitmap:
const int RADIUS = 64;
const int MATRIX_SIZE = RADIUS * 2;

struct Point { int x; int y; };
Point lookupTable[MATRIX_SIZE * MATRIX_SIZE];
int numberOfOnBits = 0;

void init()
{
    for (int x = 0 ; x < MATRIX_SIZE ; ++x)
    {
        for (int y = 0 ; y < MATRIX_SIZE ; ++y)
        {
            int dx = x - RADIUS, dy = y - RADIUS; // centre the circle in the grid
            if (dx * dx + dy * dy < RADIUS * RADIUS)
            {
                lookupTable[numberOfOnBits].x = x;
                lookupTable[numberOfOnBits].y = y;
                ++numberOfOnBits;
            } // if
        } // for
    } // for
} // ()

Point choose()
{
    int randomIndex = randomInt(numberOfOnBits);
    return lookupTable[randomIndex];
} // ()
1) Choose a random X between -1 and 1.
var X:Number = Math.random() * 2 - 1;
2) Using the circle formula, calculate the maximum and minimum values of Y given that X and a radius of 1:
var YMin:Number = -Math.sqrt(1 - X * X);
var YMax:Number = Math.sqrt(1 - X * X);
3) Choose a random Y between those extremes:
var Y:Number = Math.random() * (YMax - YMin) + YMin;
4) Incorporate your location and radius values in the final value:
var finalX:Number = X * radius + pos.x;
var finalY:Number = Y * radius + pos.y;

Hexagonal tiles and finding their adjacent neighbours

I'm developing a simple 2D board game using hexagonal tile maps. I've read several articles (including the gamedev ones, which are linked every time there's a question on hexagonal tiles) on how to draw hexes on the screen and how to manage movement (though much of it I had already done before). My main problem is finding the adjacent tiles based on a given radius.
This is how my map system works:
(0,0) (0,1) (0,2) (0,3) (0,4)
(1,0) (1,1) (1,2) (1,3) (1,4)
(2,0) (2,1) (2,2) (2,3) (2,4)
(3,0) (3,1) (3,2) (3,3) (3,4)
etc...
What I'm struggling with is the fact that I can't just 'select' the adjacent tiles by using for(x-range;x+range;x++); for(y-range;y+range;y++); because it selects unwanted tiles. In the example I gave, selecting the (1,1) tile with a range of 1 would also give me the (3,0) tile (the ones I actually need being (0,1)(0,2)(1,0)(1,2)(2,1)(2,2)), which is kind of adjacent to the tile because of the way the array is structured, but it's not really what I want to select. I could just brute force it, but that wouldn't be elegant and would probably not cover every aspect of selecting within a radius.
Can someone point me in the right direction here?
What is a hexagonal grid?
What you can see above are the two grids. It's all in the way you number your tiles and the way you understand what a hexagonal grid is. The way I see it, a hexagonal grid is nothing more than a deformed orthogonal one.
The two hex tiles I've circled in purple are theoretically still adjacent to 0,0. However, due to the deformation we've gone through to obtain the hex-tile grid from the orthogonal one, the two are no longer visually adjacent.
Deformation
What we need to understand is the deformation happened in a certain direction, along a [(-1,1) (1,-1)] imaginary line in my example. To be more precise, it is as if the grid has been elongated along that line, and squashed along a line perpendicular to that. So naturally, the two tiles on that line got spread out and are no longer visually adjacent. Conversely, the tiles (1, 1) and (-1, -1) which were diagonal to (0, 0) are now unusually close to (0, 0), so close in fact that they are now visually adjacent to (0, 0). Mathematically however, they are still diagonals and it helps to treat them that way in your code.
Selection
The image I show illustrates a radius of 1. For a radius of two, you'll notice that (2, -2) and (-2, 2), along with tiles like (2, -1) and (1, -2), are the ones that should not be included in the selection. And so on. In general, for a selection of radius r, a tile whose offsets have opposite signs and whose absolute offsets sum to more than r should not be selected. Other than that, your selection algorithm can be the same as for a square-tiled grid.
Just make sure you have your axis set up properly on the hexagonal grid, and that you are numbering your tiles accordingly.
Implementation
Let's expand on this for a bit. We now know that movement along any direction in the grid costs us 1. And movement along the stretched direction costs us 2. See (0, 0) to (-1, 1) for example.
Knowing this, we can compute the shortest distance between any two tiles on such a grid, by decomposing the distance into two components: a diagonal movement and a straight movement along one of the axis.
For example, for the distance between (1, 1) and (-2, 5) on a normal grid we have:
Normal distance = (1, 1) - (-2, 5) = (3, -4)
That would be the distance vector between the two tiles were they on a square grid. However we need to compensate for the grid deformation so we decompose like this:
(3, -4) = (3, -3) + (0, -1)
As you can see, we've decomposed the vector into one diagonal one (3, -3) and one straight along an axis (0, -1).
We now check to see if the diagonal one is along the deformation axis which is any point (n, -n) where n is an integer that can be either positive or negative.
(3, -3) does indeed satisfy that condition, so this diagonal vector is along the deformation. This means that the length (or cost) of this vector, instead of being 3, will be double that: 6.
So to recap. The distance between (1, 1) and (-2, 5) is the length of (3, -3) plus the length of (0, -1). That is distance = 3 * 2 + 1 = 7.
Implementation in C++
Below is the implementation in C++ of the algorithm I have explained above:
int ComputeDistanceHexGrid(const Point & A, const Point & B)
{
    // compute distance as we would on a normal grid
    Point distance;
    distance.x = A.x - B.x;
    distance.y = A.y - B.y;

    // compensate for grid deformation
    // grid is stretched along (-n, n) line so points along that line have
    // a distance of 2 between them instead of 1
    // to calculate the shortest path, we decompose it into one diagonal movement (shortcut)
    // and one straight movement along an axis
    Point diagonalMovement;
    int lesserCoord = abs(distance.x) < abs(distance.y) ? abs(distance.x) : abs(distance.y);
    diagonalMovement.x = (distance.x < 0) ? -lesserCoord : lesserCoord; // keep the sign
    diagonalMovement.y = (distance.y < 0) ? -lesserCoord : lesserCoord; // keep the sign

    Point straightMovement;
    // one of x or y should always be 0 because we are calculating a straight
    // line along one of the axis
    straightMovement.x = distance.x - diagonalMovement.x;
    straightMovement.y = distance.y - diagonalMovement.y;

    // calculate distance
    size_t straightDistance = abs(straightMovement.x) + abs(straightMovement.y);
    size_t diagonalDistance = abs(diagonalMovement.x);

    // if we are traveling diagonally along the stretch deformation we double
    // the diagonal distance
    if ( (diagonalMovement.x < 0 && diagonalMovement.y > 0) ||
         (diagonalMovement.x > 0 && diagonalMovement.y < 0) )
    {
        diagonalDistance *= 2;
    }

    return straightDistance + diagonalDistance;
}
Now, given the ComputeDistanceHexGrid function implemented above, you can have a naive, unoptimized implementation of a selection algorithm that will ignore any tiles further away than the specified selection range:
int _tmain(int argc, _TCHAR* argv[])
{
    // your radius selection now becomes your usual orthogonal algorithm
    // except you eliminate hex tiles too far away from your selection center
    // for(x-range;x+range;x++); for(y-range;y+range;y++);
    Point selectionCenter = {1, 1};
    int range = 1;

    for ( int x = selectionCenter.x - range;
          x <= selectionCenter.x + range;
          ++x )
    {
        for ( int y = selectionCenter.y - range;
              y <= selectionCenter.y + range;
              ++y )
        {
            Point p = {x, y};
            if ( ComputeDistanceHexGrid(selectionCenter, p) <= range )
                cout << "(" << x << ", " << y << ")" << endl;
            else
            {
                // do nothing, skip this tile since it is out of selection range
            }
        }
    }

    return 0;
}
For a selection point (1, 1) and a range of 1, the above code will display the expected result:
(0, 0)
(0, 1)
(1, 0)
(1, 1)
(1, 2)
(2, 1)
(2, 2)
Possible optimization
For optimizing this, you can include the logic of knowing how far a tile is from the selection point (logic found in ComputeDistanceHexGrid) directly into your selection loop, so you can iterate the grid in a way that avoids out of range tiles altogether.
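As a sketch of that idea (my own Python translation of the distance rule above, so treat the shortcut as an assumption): a tile at offset (dx, dy) is within range r when the offsets have the same sign (or one is zero) and max(|dx|, |dy|) <= r, and otherwise when |dx| + |dy| <= r, so the loop can skip out-of-range tiles directly.
def tiles_in_range(cx, cy, r):
    result = []
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            if dx * dy >= 0:                 # same sign or along an axis: the cheap diagonal applies
                dist = max(abs(dx), abs(dy))
            else:                            # opposite signs: movement along the stretched direction
                dist = abs(dx) + abs(dy)
            if dist <= r:
                result.append((cx + dx, cy + dy))
    return result

print(tiles_in_range(1, 1, 1))   # same seven tiles as the C++ example above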
Simplest method I can think of...
minX = x-range; maxX = x+range
select (minX,y) to (maxX, y), excluding (x,y) if that's what you want to do
for each i from 1 to range:
if y+i is odd then maxX -= 1, otherwise minX += 1
select (minX, y+i) to (maxX, y+i)
select (minX, y-i) to (maxX, y-i)
It may be a little off; I just worked it through in my head. But at the very least, it's an idea of what you need to do.
In C'ish:
void select(int x, int y) {
    /* todo: implement this */
    /* should ignore coordinates that are out of bounds */
}

void selectRange(int x, int y, int range) {
    int minX = x - range, maxX = x + range;
    for (int i = minX; i <= maxX; ++i) {
        if (i != x) select(i, y);
    }
    for (int yOff = 1; yOff <= range; ++yOff) {
        if ((y+yOff) % 2 == 1) --maxX; else ++minX;
        for (int i = minX; i <= maxX; ++i) {
            select(i, y+yOff);
            select(i, y-yOff);
        }
    }
}
