So I searched the internet looking for programs implementing Cramer's Rule, and there were a few, but apparently these examples were for fixed-size matrices only, like 2x2 or 4x4.
However, I am looking for a way to solve an NxN matrix. I started by asking the user for the size of the matrix and for its values, but I don't know how to move on from there.
I guess my next step is to apply Cramer's rule and get the answers, but I just don't know how. This is the step I'm missing. Can anybody help me, please?
First, you need to calculate the determinant of your system matrix - that is, the matrix that consists of the coefficients (from the left-hand side of the equations) - let it be D.
Then, to calculate the value of a certain variable, take the matrix of your system (from the previous step), replace the coefficients of the corresponding column with the constant terms (from the right-hand side), calculate the determinant of the resulting matrix - let it be C - and divide C by D.
A bit more about the replacement from the previous step: say your matrix is 3x3 (as in the image) - so you have a system of equations where every a coefficient is multiplied by x, every b by y, every c by z, and the ds are the constant terms. So, to calculate y, you replace the coefficients that are multiplied by y - the bs in this case - with the ds.
You perform the second step for every variable and your system gets solved.
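Putting both steps together - a minimal sketch in Python, using numpy.linalg.det for the determinants (the function name and the example system are mine, not from the question):

import numpy as np

def cramer_solve(A, d):
    # A: n x n coefficient matrix, d: right-hand side vector of length n
    A = np.asarray(A, dtype=float)
    d = np.asarray(d, dtype=float)
    D = np.linalg.det(A)                # determinant of the system matrix
    if abs(D) < 1e-12:
        raise ValueError("singular system: Cramer's rule does not apply")
    x = np.empty(len(d))
    for i in range(len(d)):
        Ai = A.copy()
        Ai[:, i] = d                    # replace column i with the constant terms
        x[i] = np.linalg.det(Ai) / D    # C / D
    return x

# example: x + 2y = 5, 3x + 4y = 6  ->  x = -4, y = 4.5
print(cramer_solve([[1, 2], [3, 4]], [5, 6]))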
You can find an example at https://rosettacode.org/wiki/Cramer%27s_rule#C
Although the specific example deals with a 4x4 matrix, the code is written to accommodate any size square matrix.
What you need is to calculate the determinant: Cramer's rule just reduces solving an NxN system to computing determinants of NxN matrices.
If N is not big, you can use Cramer's rule (see code below), which is quite straightforward. However, this method is not efficient; if your N is big, you need to resort to other methods, such as LU decomposition.
Assuming your data is double and the result can be held by a double:
#include <stdio.h>
#include <stdlib.h>

/* Determinant by cofactor expansion along the first row.
 * matrix is an n*n array in row-major order. O(n!) - fine for small n only. */
double det(double *matrix, int n) {
    if (n <= 1) return matrix[0];
    double *subMatrix = malloc((n - 1) * (n - 1) * sizeof(double));
    double result = 0.0;
    for (int i = 0; i < n; ++i) {
        /* build the minor: drop row 0 and column i */
        for (int j = 0; j < n - 1; ++j) {
            for (int k = 0; k < i; ++k)
                subMatrix[j * (n - 1) + k] = matrix[(j + 1) * n + k];
            for (int k = i + 1; k < n; ++k)
                subMatrix[j * (n - 1) + (k - 1)] = matrix[(j + 1) * n + k];
        }
        /* alternate the signs of the cofactors */
        if (i % 2 == 0)
            result += matrix[0 * n + i] * det(subMatrix, n - 1);
        else
            result -= matrix[0 * n + i] * det(subMatrix, n - 1);
    }
    free(subMatrix);
    return result;
}

int main(void) {
    double matrix[] = { 1, 2, 3, 4, 5, 6, 7, 8, 2, 6, 4, 8, 3, 1, 1, 2 };
    printf("%lf\n", det(matrix, 4));
    return 0;
}
I'm trying to implement 2D Perlin noise to create Minecraft-like terrain (Minecraft doesn't actually use 2D Perlin noise) without overhangs or caves and stuff.
The way I'm doing it is by creating a [50][20][50] array of cubes, where [20] is the maximum height of the terrain and the height values will be determined with Perlin noise. I will then fill that array with cubes.
I've been reading this article and I don't understand how to compute the 4 gradient vectors and use them in my code. Does every adjacent cell, such as [2][3] and [2][4], have 4 different gradient vectors?
Also, I've read that the general Perlin noise function also takes a numeric value that is used as a seed - where do I put that in this case?
I'm going to explain Perlin noise using working code, and without relying on other explanations. First you need a way to generate a pseudo-random float at a 2D point. Each point should look random relative to the others, but the trick is that the same coordinates should always produce the same float. We can use any hash function to do that - not just the one that Ken Perlin used in his code. Here's one:
static float noise2(int x, int y) {
int n = x + y * 57;
n = (n << 13) ^ n;
return (float) (1.0-((n*(n*n*15731+789221)+1376312589)&0x7fffffff)/1073741824.0);
}
I use this to generate a "landscape", landscape[i][j] = noise2(i,j) (which I then convert to an image), and it always produces the same thing:
...
But that looks too random - like the hills and valleys are too densely packed. We need a way of "stretching" each random point over, say, 5 points. And for the values between those "key" points, you want a smooth gradient:
static float stretchedNoise2(float x_float, float y_float, float stretch) {
// stretch
x_float /= stretch;
y_float /= stretch;
// the whole part of the coordinates
int x = (int) Math.floor(x_float);
int y = (int) Math.floor(y_float);
// the decimal part - how far between the two points yours is
float fractional_X = x_float - x;
float fractional_Y = y_float - y;
// we need to grab the 4x4 nearest points to do cubic interpolation
double[] p = new double[4];
for (int j = 0; j < 4; j++) {
double[] p2 = new double[4];
for (int i = 0; i < 4; i++) {
p2[i] = noise2(x + i - 1, y + j - 1);
}
// interpolate each row
p[j] = cubicInterp(p2, fractional_X);
}
// then interpolate between each row's result
return (float) cubicInterp(p, fractional_Y);
}
public static double cubicInterp(double[] p, double x) {
return cubicInterp(p[0],p[1],p[2],p[3], x);
}
public static double cubicInterp(double v0, double v1, double v2, double v3, double x) {
double P = (v3 - v2) - (v0 - v1);
double Q = (v0 - v1) - P;
double R = v2 - v0;
double S = v1;
return P * x * x * x + Q * x * x + R * x + S;
}
If you don't understand the details, that's ok - I don't know how Math.cos() is implemented, but I still know what it does. And this function gives us stretched, smooth noise.
The stretchedNoise2 function generates a "landscape" at a certain scale (big or small) - a landscape of random points with smooth slopes between them. Now we can generate a sequence of landscapes on top of each other:
public static double perlin2(float xx, float yy) {
double noise = 0;
noise += stretchedNoise2(xx, yy, 5) * 1; // sample 1
noise += stretchedNoise2(xx, yy, 13) * 2; // twice as influential
// you can keep repeating different variants of the above lines
// some interesting variants are included below.
return noise / (1+2); // make sure you sum the multipliers above
}
To put it more accurately, we get the weighted average of the points from each sample.
( sample 1 + 2 * sample 2 ) / 3 = combined landscape
When you stack a bunch of smooth noise together, usually about 5 samples of increasing "stretch", you get Perlin noise. (If you understand the last sentence, you understand Perlin noise.)
There are other implementations that are faster because they do the same thing in different ways, but because it is no longer 1983 and because you are getting started with writing a landscape generator, you don't need to know about all the special tricks and terminology they use to understand Perlin noise or do fun things with it. For example:
// These are drop-in lines for the accumulation inside perlin2:
// 1
float smearX = stretchedNoise2(xx, yy, 99) * 99;
float smearY = stretchedNoise2(xx, yy, 99) * 99;
noise += stretchedNoise2(xx + smearX, yy + smearY, 13) * 1;
// 2
float smearX2 = stretchedNoise2(xx, yy, 9) * 19;
float smearY2 = stretchedNoise2(xx, yy, 9) * 19;
noise += stretchedNoise2(xx + smearX2, yy + smearY2, 13) * 1;
// 3
noise += Math.cos( stretchedNoise2(xx, yy, 5) * 4 ) * 1;
About Perlin noise
Perlin noise was developed to generate random continuous surfaces (actually, procedural textures). Its main feature is that the noise is always continuous over space.
From the article:
Perlin noise is a function for generating coherent noise over a space. Coherent noise means that for any two points in the space, the value of the noise function changes smoothly as you move from one point to the other -- that is, there are no discontinuities.
Simply put, Perlin noise looks like this:
_ _ __
\ __/ \__/ \__
\__/
But this certainly is not Perlin noise, because there are gaps:
_ _
\_ __/
___/ __/
Calculating the noise (or crushing gradients!)
As @markspace said, Perlin noise is mathematically hard. Let's simplify by generating 1D noise.
Imagine the following 1D space:
________________
Firstly, we define a grid (or points in 1D space):
1 2 3 4
________________
Then, we randomly choose a noise value for each grid point (this value is equivalent to the gradient in the 2D noise):
1 2 3 4
________________
-1 0 0.5 1 // random noise value
Now, calculating the noise value at a grid point is easy: just pick the value:
noise(3) => 0.5
But the noise value for an arbitrary point p needs to be calculated based on the closest grid points p1 and p2, using their values and influence:
// in 1D, a grid point's influence is 1 minus its distance to p
noise(p) => noise(p1) * influence(p1) + noise(p2) * influence(p2)
noise(2.5) => noise(2) * influence(2, 2.5) + noise(3) * influence(3, 2.5)
=> 0 * 0.5 + 0.5 * 0.5 => 0.25
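Putting this 1D scheme into code - a minimal Python sketch (the hash used for the per-grid-point values is an arbitrary choice, mirroring @mk.'s noise2):

import math

def grid_value(i):
    # deterministic pseudo-random value in (-1, 1] for grid point i
    n = (i << 13) ^ i
    return 1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0

def noise1d(p):
    p1 = math.floor(p)   # nearest grid point to the left
    p2 = p1 + 1          # nearest grid point to the right
    t = p - p1           # distance from p1, in [0, 1)
    # influence = 1 - distance: closer grid points weigh more
    return grid_value(p1) * (1 - t) + grid_value(p2) * t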
The end! Now we are able to calculate 1D noise, just add one dimension for 2D. :-)
Hope it helps you understand! Now read @mk.'s answer for working code and have happy noises!
Edit:
Follow up question in the comments:
I read in the Wikipedia article that the gradient vector in 2D Perlin noise should have length 1 (unit circle) and a random direction. Since the vector has X and Y, how do I do that exactly?
This can easily be lifted and adapted from the original Perlin noise code. Pseudocode below.
gradient.x = random()*2 - 1;
gradient.y = random()*2 - 1;
normalize_2d( gradient );
Where normalize_2d is:
// normalizes a 2d vector
function normalize_2d(v)
size = square_root( v.x * v.x + v.y * v.y );
v.x = v.x / size;
v.y = v.y / size;
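As for the seed that the question mentions: one common approach (an assumption about your setup, not the only way) is to derive the per-grid-point randomness from a seeded table, so the same seed always reproduces the same terrain. A Python sketch with made-up mixing constants:

import math
import random

def make_gradient_fn(seed):
    # build a seeded lookup table of gradient angles
    rng = random.Random(seed)
    angles = [rng.uniform(0, 2 * math.pi) for _ in range(256)]
    def gradient(ix, iy):
        # map grid point (ix, iy) to a table entry (hypothetical mixing constants)
        return angles[(ix * 31 + iy * 57) & 255]
    return gradient

grad = make_gradient_fn(seed=42)
gx, gy = math.cos(grad(2, 3)), math.sin(grad(2, 3))   # unit-length gradient vector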
// Compute Perlin noise at coordinates x, y
function perlin(float x, float y) {
// Determine grid cell coordinates
int x0 = (x > 0.0 ? (int)x : (int)x - 1);
int x1 = x0 + 1;
int y0 = (y > 0.0 ? (int)y : (int)y - 1);
int y1 = y0 + 1;
// Determine interpolation weights
// Could also use higher order polynomial/s-curve here
float sx = x - (double)x0;
float sy = y - (double)y0;
// Interpolate between grid point gradients
float n0, n1, ix0, ix1, value;
n0 = dotGridGradient(x0, y0, x, y);
n1 = dotGridGradient(x1, y0, x, y);
ix0 = lerp(n0, n1, sx);
n0 = dotGridGradient(x0, y1, x, y);
n1 = dotGridGradient(x1, y1, x, y);
ix1 = lerp(n0, n1, sx);
value = lerp(ix0, ix1, sy);
return value;
}
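The snippet above assumes two helpers that aren't shown: lerp and dotGridGradient. A minimal sketch of what they might look like, in Python (the hash-based gradient choice is an assumption; any scheme returning a fixed pseudo-random unit vector per grid point works):

import math

def lerp(a, b, t):
    # linear interpolation between a and b by t in [0, 1]
    return a + t * (b - a)

def gradient(ix, iy):
    # fixed pseudo-random unit vector for grid point (ix, iy) (hypothetical hash)
    h = (ix * 374761393 + iy * 668265263) & 0xffffffff
    h = ((h ^ (h >> 13)) * 1274126177) & 0xffffffff
    angle = h / 0xffffffff * 2 * math.pi
    return math.cos(angle), math.sin(angle)

def dotGridGradient(ix, iy, x, y):
    # dot product of the grid point's gradient with the offset vector to (x, y)
    gx, gy = gradient(ix, iy)
    return gx * (x - ix) + gy * (y - iy)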
I would like to uniformly distribute a predetermined set of points within a circle. By uniform distribution, I mean they should all be equally distanced from each other (hence a random approach won't work). I tried a hexagonal approach, but I had problems consistently reaching the outermost radius.
My current approach is a nested for loop where each outer iteration reduces the radius & number of points, and each inner loop evenly drops points on the new radius. Essentially, it's a bunch of nested circles. Unfortunately, it's far from even. Any tips on how to do this correctly?
The goals of having a uniform distribution within the area and a uniform distribution on the boundary conflict; any solution will be a compromise between the two. I augmented the sunflower seed arrangement with an additional parameter alpha that indicates how much one cares about the evenness of the boundary.
alpha=0 gives the typical sunflower arrangement, with jagged boundary:
With alpha=2 the boundary is smoother:
(Increasing alpha further is problematic: Too many points end up on the boundary).
The algorithm places n points, of which the kth point (counting from k=1) is put at distance sqrt(k-1/2) from the origin (scaled so that the outermost radius is 1), with polar angle 2*pi*k/phi^2, where phi is the golden ratio. Exception: the last alpha*sqrt(n) points are placed on the outer boundary of the circle, and the polar radius of the other points is scaled to account for that. This computation of the polar radius is done in the function radius.
It is coded in MATLAB.
function sunflower(n, alpha) % example: n=500, alpha=2
clf
hold on
b = round(alpha*sqrt(n)); % number of boundary points
phi = (sqrt(5)+1)/2; % golden ratio
for k=1:n
r = radius(k,n,b);
theta = 2*pi*k/phi^2;
plot(r*cos(theta), r*sin(theta), 'r*');
end
end
function r = radius(k,n,b)
if k>n-b
r = 1; % put on the boundary
else
r = sqrt(k-1/2)/sqrt(n-(b+1)/2); % apply square root
end
end
Might as well tag on my Python translation.
from math import sqrt, sin, cos, pi
phi = (1 + sqrt(5)) / 2 # golden ratio
def sunflower(n, alpha=0, geodesic=False):
points = []
angle_stride = 360 * phi if geodesic else 2 * pi / phi ** 2
b = round(alpha * sqrt(n)) # number of boundary points
for k in range(1, n + 1):
r = radius(k, n, b)
theta = k * angle_stride
points.append((r * cos(theta), r * sin(theta)))
return points
def radius(k, n, b):
if k > n - b:
return 1.0
else:
return sqrt(k - 0.5) / sqrt(n - (b + 1) / 2)
# example
if __name__ == '__main__':
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
points = sunflower(500, alpha=2, geodesic=False)
xs = [point[0] for point in points]
ys = [point[1] for point in points]
ax.scatter(xs, ys)
ax.set_aspect('equal') # display as square plot with equal axes
plt.show()
Stumbled across this question and the answer above (so all cred to user3717023 & Matt).
Just adding my translation into R here, in case someone else needed that :)
library(tibble)
library(dplyr)
library(ggplot2)
sunflower <- function(n, alpha = 2, geometry = c('planar','geodesic')) {
b <- round(alpha*sqrt(n)) # number of boundary points
phi <- (sqrt(5)+1)/2 # golden ratio
r <- radius(1:n,n,b)
theta <- 1:n * ifelse(geometry[1] == 'geodesic', 360*phi, 2*pi/phi^2)
tibble(
x = r*cos(theta),
y = r*sin(theta)
)
}
radius <- function(k,n,b) {
ifelse(
k > n-b,
1,
sqrt(k-1/2)/sqrt(n-(b+1)/2)
)
}
# example:
sunflower(500, 2, 'planar') %>%
ggplot(aes(x,y)) +
geom_point()
Building on top of @OlivelsAWord's answer, here is a Python implementation using numpy:
import numpy as np
import matplotlib.pyplot as plt
def sunflower(n: int, alpha: float) -> np.ndarray:
# Number of points on the boundary and inside the circle, respectively.
n_exterior = np.round(alpha * np.sqrt(n)).astype(int)
n_interior = n - n_exterior
# Ensure there are still some points in the inside...
if n_interior < 1:
raise RuntimeError(f"Parameter 'alpha' is too large ({alpha}), all "
f"points would end-up on the boundary.")
# Generate the angles. The factor k_theta corresponds to 2*pi/phi^2.
k_theta = np.pi * (3 - np.sqrt(5))
angles = np.linspace(k_theta, k_theta * n, n)
# Generate the radii.
r_interior = np.sqrt(np.linspace(0, 1, n_interior))
r_exterior = np.ones((n_exterior,))
r = np.concatenate((r_interior, r_exterior))
# Return Cartesian coordinates from polar ones.
return r * np.stack((np.cos(angles), np.sin(angles)))
# NOTE: say the returned array is called s. The layout is such that s[0,:]
# contains X values and s[1,:] contains Y values. Change the above to
# return r.reshape(n, 1) * np.stack((np.cos(angles), np.sin(angles)), axis=1)
# if you want s[:,0] and s[:,1] to contain X and Y values instead.
if __name__ == '__main__':
fig, ax = plt.subplots()
# Let's plot three sunflowers with different values of alpha!
for alpha in (0, 1, 2):
s = sunflower(500, alpha)
# NOTE: the 'alpha=0.5' parameter is to control transparency, it does
# not have anything to do with the alpha used in 'sunflower' ;)
ax.scatter(s[0], s[1], alpha=0.5, label=f"alpha={alpha}")
# Display as square plot with equal axes and add a legend. Then show the result :)
ax.set_aspect('equal')
ax.legend()
plt.show()
Adding my Java implementation of the previous answers with an example (Processing).
int n = 2000; // count of nodes
Float alpha = 2.; // constant that can be adjusted to vary the geometry of points at the boundary
ArrayList<PVector> vertices = new ArrayList<PVector>();
Float scaleFactor = 200.; // scale points beyond their 0.0-1.0 range for visualisation;
void setup() {
size(500, 500);
// Test
vertices = sunflower(n, alpha);
displayTest(vertices, scaleFactor);
}
ArrayList<PVector> sunflower(int n, Float alpha) {
Double phi = (1 + Math.sqrt(5)) / 2; // golden ratio
Double angle = 2 * PI / Math.pow(phi, 2); // value used to calculate theta for each point
ArrayList<PVector> points = new ArrayList<PVector>();
Long b = Math.round(alpha*Math.sqrt(n)); // number of boundary points
Float theta, r, x, y;
for (int i = 1; i < n + 1; i++) {
r = radius(i, n, b.floatValue());
theta = i * angle.floatValue();
x = r * cos(theta);
y = r * sin(theta);
PVector p = new PVector(x, y);
points.add(p);
}
return points;
}
Float radius(int k, int n, Float b) {
if (k > n - b) {
return 1.0;
} else {
Double r = Math.sqrt(k - 0.5) / Math.sqrt(n - (b+1) / 2);
return r.floatValue();
}
}
void displayTest(ArrayList<PVector> points, Float size) {
for (int i = 0; i < points.size(); i++) {
Float x = size * points.get(i).x;
Float y = size * points.get(i).y;
pushMatrix();
translate(width / 2, height / 2);
ellipse(x, y, 5, 5);
popMatrix();
}
}
Here's my Unity implementation.
Vector2[] Sunflower(int n, float alpha = 0, bool geodesic = false){
    float phi = (1 + Mathf.Sqrt(5)) / 2;   // golden ratio
    // 360*phi for the geodesic variant (as in the Python version), 2*pi/phi^2 radians otherwise
    float angle_stride = geodesic ? 360 * phi : 2 * Mathf.PI / (phi * phi);
    float radius(float k, float nn, float b)
    {
        return k > nn - b ? 1 : Mathf.Sqrt(k - 0.5f) / Mathf.Sqrt(nn - (b + 1) / 2);
    }
    int b = (int)(alpha * Mathf.Sqrt(n));  // number of boundary points
    List<Vector2> points = new List<Vector2>();
    for (int k = 1; k <= n; k++)           // k starts at 1, matching the other versions
    {
        float r = radius(k, n, b);
        float theta = k * angle_stride;
        points.Add(new Vector2(r * Mathf.Cos(theta), r * Mathf.Sin(theta)));
    }
    return points.ToArray();
}
I have 2 tables of values and want to scale the first one so that it matches the 2nd one as well as possible. Both have the same length. If both are drawn as graphs in a diagram, they should be as close to each other as possible. But I do not want quadratic weights, just simple linear weights.
My problem is that I have no idea how to actually compute the best scaling factor, because of the Abs function.
Some pseudocode:
//given:
float[] table1= ...;
float[] table2= ...;
//wanted:
float factor= ???; // I have no idea how to compute this
float remainingDifference=0;
for(int i=0; i<length; i++)
{
float scaledValue=table1[i] * factor;
//Sum up the differences. I use the Abs function because negative differences are differences too.
remainingDifference += Abs(scaledValue - table2[i]);
}
I want to compute the scaling factor so that the remainingDifference is minimal.
Simple linear weights are hard to handle, like you said.
a_n = the sequence to match (table2 in your code)
b_n = the sequence being scaled (table1)
c = scaling factor
Your residual function is (sums are from i=1 to N, the number of points):
SUM( |a_i - c*b_i| )
Taking the derivative with respect to c yields:
d/dc SUM( |a_i - c*b_i| )
= SUM( -b_i * (a_i - c*b_i)/|a_i - c*b_i| )
Setting to 0 and solving for c is hard. I don't think there's an analytic way of doing that. You may want to try https://math.stackexchange.com/ to see if they have any bright ideas.
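If a numeric answer is good enough, though, the L1 objective is convex in c, so a generic scalar minimizer finds it directly - a sketch in Python with SciPy (the sample tables are made up):

import numpy as np
from scipy.optimize import minimize_scalar

table1 = np.array([1.0, 2.0, 3.0, 4.0])   # b: sequence being scaled
table2 = np.array([2.1, 3.9, 6.2, 7.8])   # a: sequence to match

# minimize SUM(|a_i - c*b_i|) over c
result = minimize_scalar(lambda c: np.abs(table2 - c * table1).sum())
print(result.x)   # best scaling factor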
However if you work with quadratic weights, it becomes significantly simpler:
d/dc SUM( (a_i - c*b_i)^2 )
= SUM( -2*b_i*(a_i - c*b_i) )
= -2 * ( SUM(a_i*b_i) - c*SUM(b_i^2) ) = 0
=> SUM(a_i*b_i) - c*SUM(b_i^2) = 0
=> c = SUM(a_i*b_i) / SUM(b_i^2)
I strongly suggest the latter approach if you can.
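In code, the quadratic version is a one-liner - e.g. in Python (same made-up tables as in the sketch above):

import numpy as np

table1 = np.array([1.0, 2.0, 3.0, 4.0])   # b: sequence being scaled
table2 = np.array([2.1, 3.9, 6.2, 7.8])   # a: sequence to match

# least-squares factor: c = SUM(a_i*b_i) / SUM(b_i^2)
c = np.dot(table2, table1) / np.dot(table1, table1)
print(c)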
I would suggest trying some sort of variant on Newton-Raphson.
Construct a function Diff(k) that looks at the difference in area between your two graphs between fixed markers A and B.
Mathematically, I guess it would be integral ( x = A to B ){ |f(x) - k * g(x)| } dx
anyway realistically you could just subtract the values,
like if you range from X = -10 to 10 and you have a data point for f(i) and g(i) on each integer i in [-10, 10] (i.e. 21 data points),
then you just sum( i = -10 to 10 ){ |f(i) - k * g(i)| }
basically you would expect this function to look like a parabola -- there will be an optimum k, and deviating slightly from it in either direction will increase the overall area difference
and the bigger the difference, you would expect the bigger the gap
so, this should be a pretty smooth function ( if you have a lot of data points )
so you want to minimise Diff(k)
so you want to find where the derivative, i.e. d/dk Diff(k), is 0
so just do Newton-Raphson on this new function D'(k)
kick it off at k=1 and it should zone in on a solution pretty fast
that's probably going to give you an optimal computation time
if you want something simpler, just start with some k1 and k2 where the derivative D'(k) is on either side of 0
so say D'(1.5) = -3 and D'(2.9) = 7
so then you would pick a k say 3/10 of the way (10 = 7 - -3) between 1.5 and 2.9
and depending on whether that yields a positive or negative value, use it as the new k1 or k2, rinse and repeat
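A sketch of that bracketing idea in Python (the tables f and g, the numerical derivative, and the starting bracket are assumptions):

def l1_diff(k, f, g):
    # total absolute difference between the curves for scaling factor k
    return sum(abs(fi - k * gi) for fi, gi in zip(f, g))

def best_factor(f, g, k1, k2, h=1e-6, tol=1e-9, max_iter=200):
    # false position on a numerical derivative of l1_diff;
    # assumes d(k1) < 0 < d(k2), i.e. the bracket straddles the minimum
    d = lambda k: (l1_diff(k + h, f, g) - l1_diff(k - h, f, g)) / (2 * h)
    d1, d2 = d(k1), d(k2)
    k = (k1 + k2) / 2
    for _ in range(max_iter):
        k = k1 + (k2 - k1) * (-d1) / (d2 - d1)   # e.g. 3/10 of the way for -3 and 7
        dk = d(k)
        if dk == 0 or (k2 - k1) < tol:
            break
        if dk < 0:
            k1, d1 = k, dk
        else:
            k2, d2 = k, dk
    return k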
In case anyone stumbles upon this in the future, here is some code (C++).
The trick is to first sort the sample pairs by the scaling factor that would make each pair match exactly, then iterate from both ends toward the factor that yields the minimum absolute deviation (L1 norm).
Everything except the sort has a linear run time, so the total runtime is O(n log n).
#include <algorithm>
#include <cmath>
#include <memory>

/*
 * Find x so that the sum over std::abs(pA[i]-pB[i]*x) from i=0 to (n-1) is minimal
 * Then return x
 */
float linearFit(const float* pA, const float* pB, int n)
{
/*
* Algebraic solution is not possible for the general case
* => iterative algorithm
*/
if (n < 0)
throw "linearFit has invalid argument: expected n >= 0";
if (n == 0)
return 0;//If there is nothing to fit, any factor is a perfect fit (sum is always 0)
if (n == 1)
return pA[0] / pB[0];//return x so that pA[0] = pB[0]*x
//If you don't like this, use a std::vector :P
std::unique_ptr<float[]> targetValues_(new float[n]);
std::unique_ptr<int[]> indices_(new int[n]);
//Get proper pointers:
float* targetValues = targetValues_.get();//The value for x that would cause pA[i] = pB[i]*x
int* indices = indices_.get(); //Indices of useful (not nan and not infinity) target values
//The code above guarantees n > 1, so it is safe to get these pointers:
int m = 0;//Number of useful target values
for (int i = 0; i < n; i++)
{
float a = pA[i];
float b = pB[i];
float targetValue = a / b;
targetValues[i] = targetValue;
if (std::isfinite(targetValue))
{
indices[m++] = i;
}
}
if (m <= 0)
return 0;
if (m == 1)
return targetValues[indices[0]];//If there is only one target value, then it has to be the best one.
//sort the indices by target value
std::sort(indices, indices + m, [&](int ia, int ib){
return targetValues[ia] < targetValues[ib];
});
//Start from the extremes and meet at the optimal solution somewhere in the middle:
int l = 0;
int r = m - 1;
// m >= 2 is guaranteed => l < r
float penaltyFactorL = std::abs(pB[indices[l]]);
float penaltyFactorR = std::abs(pB[indices[r]]);
while (l < r)
{
if (l == r - 1 && penaltyFactorL == penaltyFactorR)
{
break;
}
if (penaltyFactorL < penaltyFactorR)
{
l++;
if (l < r)
{
penaltyFactorL += std::abs(pB[indices[l]]);
}
}
else
{
r--;
if (l < r)
{
penaltyFactorR += std::abs(pB[indices[r]]);
}
}
}
//return the best target value
if (l == r)
return targetValues[indices[l]];
else
return (targetValues[indices[l]] + targetValues[indices[r]])*0.5;
}
I'm in need of help solving an issue that came up in one of my small robot experiments. The basic idea is that each little robot has the ability to approximate the distance from itself to an object; however, the approximation I'm getting is way too rough, and I'm hoping to calculate something more accurate.
So:
Input: A list of vertices (v_1, v_2, ..., v_n) - the robots - and an unknown vertex v_* - the object
Output: The coordinates of the unknown vertex v_*
Each vertex v_1 to v_n has well-known coordinates (supplied by calling getX() and getY() on the vertex), and it is possible to get the approximate range to v_* by calling getApproximateDistance(v_*). That function returns two values, minDistance and maxDistance - the actual distance lies between these.
So what I've been trying to do to obtain the coordinates for v_* is to use trilateration. However, I can't seem to find a formula for doing trilateration with limits (lower and upper bounds), so that's really what I'm looking for (I'm not good enough at math to figure it out myself).
Note: is triangulation the way to go instead?
Note: I would also love to know a way to make performance/accuracy trade-offs.
An example of data:
Vertex   getX()   getY()   minDistance   maxDistance
v_1      2        2        0.5           1
v_2      1        2        0.3           1
v_3      1.5      1        0.3           0.5
Picture to show data: http://img52.imageshack.us/img52/6414/unavngivetcb.png
It's obvious that the approximation for v_1 can be better than [0.5; 1], as the figure that the above data creates is a small cut of an annulus (limited by v_3). But how would I calculate that, and possibly find an approximation within that figure (which may be concave)?
Would this be better suited for MathOverflow?
I would go for a simple discrete approach. The implicit formula for an annulus is trivial, and the intersection of multiple annuli, if there are many of them, can be computed somewhat efficiently with a scanline-based approach.
For high accuracy with fast computation, an option is a multiresolution approach (i.e. first starting in low-res and then recomputing in high-res only the samples that are close to a valid point).
A small Python toy I wrote can generate a 400x400 pixel image of the intersection area in about 0.5 secs (this is the kind of computation that would get a 100x speedup if done in C).
# x, y, r0, r1
data = [(2.0, 2.0, 0.5, 1.0),
        (1.0, 2.0, 0.3, 1.0),
        (1.5, 1.0, 0.3, 0.5)]

# bounding box of the zone where all annuli can overlap
x0 = max(x - r1 for x, y, r0, r1 in data)
y0 = max(y - r1 for x, y, r0, r1 in data)
x1 = min(x + r1 for x, y, r0, r1 in data)
y1 = min(y + r1 for x, y, r0, r1 in data)

def hit(x, y):
    # a point is valid only if it lies inside every annulus
    for cx, cy, r0, r1 in data:
        if not (r0**2 <= ((x - cx)**2 + (y - cy)**2) <= r1**2):
            return False
    return True

res = 400
step = 16
WHITE, GREY, BLACK = 255, 192, 0
img = bytearray(res * res)   # starts all black

# Low-res pass: mark cells that (or whose neighbors) contain valid points
cells = {}
for i in range(0, res, step):
    y = y0 + i * (y1 - y0) / res
    for j in range(0, res, step):
        x = x0 + j * (x1 - x0) / res
        if hit(x, y):
            for h in range(-step*2, step*3, step):
                for v in range(-step*2, step*3, step):
                    cells[(i+v, j+h)] = True

# High-res pass: recompute only the marked cells
for i in range(0, res, step):
    for j in range(0, res, step):
        if cells.get((i, j), False):
            img[i * res + j] = GREY
            img[(i + step - 1) * res + j] = GREY
            img[(i + step - 1) * res + (j + step - 1)] = GREY
            img[i * res + (j + step - 1)] = GREY
            for v in range(step):
                y = y0 + (i + v) * (y1 - y0) / res
                for h in range(step):
                    x = x0 + (j + h) * (x1 - x0) / res
                    if hit(x, y):
                        img[(i + v)*res + (j + h)] = WHITE

with open("result.pgm", "wb") as f:
    f.write(b"P5\n%i %i 255\n" % (res, res) + bytes(img))
Another interesting option could be using a GPU, if available. Starting from a white picture and drawing the exterior of each annulus in black will, at the end, leave the intersection area in white.
For example with Python/Qt the code for doing this computation is simply:
img = QImage(res, res, QImage.Format_RGB32)
dc = QPainter(img)
dc.fillRect(0, 0, res, res, QBrush(QColor(255, 255, 255)))
dc.setPen(Qt.NoPen)
dc.setBrush(QBrush(QColor(0, 0, 0)))
for x, y, r0, r1 in data:
xa1 = (x - r1 - x0) * res / (x1 - x0)
xb1 = (x + r1 - x0) * res / (x1 - x0)
ya1 = (y - r1 - y0) * res / (y1 - y0)
yb1 = (y + r1 - y0) * res / (y1 - y0)
xa0 = (x - r0 - x0) * res / (x1 - x0)
xb0 = (x + r0 - x0) * res / (x1 - x0)
ya0 = (y - r0 - y0) * res / (y1 - y0)
yb0 = (y + r0 - y0) * res / (y1 - y0)
p = QPainterPath()
p.addEllipse(QRectF(xa0, ya0, xb0-xa0, yb0-ya0))
p.addEllipse(QRectF(xa1, ya1, xb1-xa1, yb1-ya1))
p.addRect(QRectF(0, 0, res, res))
dc.drawPath(p)
and the computation part for an 800x800 resolution image takes about 8ms (and I'm not sure it's hardware accelerated).
If only the barycenter of the intersection is to be computed then there is no memory allocation at all. For example a "brute-force" approach is just a few lines of C
typedef struct TReading {
double x, y, r0, r1;
} Reading;
int hit(double xx, double yy,
Reading *readings, int num_readings)
{
while (num_readings--)
{
double dx = xx - readings->x;
double dy = yy - readings->y;
double d2 = dx*dx + dy*dy;
if (d2 < readings->r0 * readings->r0) return 0;
if (d2 > readings->r1 * readings->r1) return 0;
readings++;
}
return 1;
}
int computeLocation(Reading *readings, int num_readings,
int resolution,
double *result_x, double *result_y)
{
// Compute bounding box of interesting zone
double x0 = -1E20, y0 = -1E20, x1 = 1E20, y1 = 1E20;
for (int i=0; i<num_readings; i++)
{
if (readings[i].x - readings[i].r1 > x0)
x0 = readings[i].x - readings[i].r1;
if (readings[i].y - readings[i].r1 > y0)
y0 = readings[i].y - readings[i].r1;
if (readings[i].x + readings[i].r1 < x1)
x1 = readings[i].x + readings[i].r1;
if (readings[i].y + readings[i].r1 < y1)
y1 = readings[i].y + readings[i].r1;
}
// Scan processing
double ax = 0, ay = 0;
int total = 0;
for (int i=0; i<=resolution; i++)
{
double yy = y0 + i * (y1 - y0) / resolution;
for (int j=0; j<=resolution; j++)
{
double xx = x0 + j * (x1 - x0) / resolution;
if (hit(xx, yy, readings, num_readings))
{
ax += xx; ay += yy; total += 1;
}
}
}
if (total)
{
*result_x = ax / total;
*result_y = ay / total;
}
return total;
}
And on my PC it can compute the barycenter with resolution = 100 in 0.08 ms (x=1.50000, y=1.383250) or with resolution = 400 in 1.3 ms (x=1.500000, y=1.383308). Of course a double-step speedup could be implemented even for the barycenter-only version.
I would switch from "max/min" to trying to minimize an error function. That gets you to the problem discussed at Finding a point that best fits the intersection of n spheres which is more tractable than intersecting a series of complicated shapes. (And what if one robot's sensor is messed up and it gives an impossible value? That variation will still usually give a reasonable answer.)
Not sure about your case, but in a typical robotics application you're going to be reading sensors periodically and crunching the data. If that's the case, you're trying to estimate the location based on noisy data, and that's a common problem.
As a simple (less rigorous) method, you could take the existing position and adjust it toward or away from each known point: take the measured distance to the target minus the present distance to the target, multiply that delta (error) by some value between 0 and 1, and move your estimated position that much toward the target. Repeat for each target, then repeat each time you get a new set of measurements (a sketch of this update loop is below).
The multiplier will have an effect like a low-pass filter: smaller values will give you a more stable position estimate with slower response to movement. For the distance, use the average of the min and max. If you can put tighter bounds on the range to one target, you can increase the multiplier closer to 1 for just that target.
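A minimal sketch of that loop in Python (the function name, the 0.2 gain, and the use of the question's sample beacons are my assumptions):

import math

def update_estimate(est, beacons, gain=0.2):
    # est: current (x, y) guess; beacons: list of (x, y, minDistance, maxDistance)
    ex, ey = est
    for bx, by, dmin, dmax in beacons:
        measured = (dmin + dmax) / 2     # use the average of min and max range
        dx, dy = ex - bx, ey - by
        current = math.hypot(dx, dy)
        if current == 0:
            continue                     # direction undefined; skip this beacon
        error = measured - current       # positive: move away, negative: move closer
        ex += gain * error * dx / current
        ey += gain * error * dy / current
    return ex, ey

# call this each time a new set of measurements arrives
est = (1.5, 1.5)
for _ in range(50):
    est = update_estimate(est, [(2, 2, 0.5, 1), (1, 2, 0.3, 1), (1.5, 1, 0.3, 0.5)])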
This is of course a crude position estimator. The math guys can probably be more rigorous, but also more complicated. And the solution definitely doesn't need to involve intersecting areas or working with geometric shapes.