RSME calculates how close the predicted value is to the actual value, but for a point cloud there are 2 things I am confused about:
How do we know which point corresponds to which point, i.e. which pairs get subtracted from each other?
Point clouds are 3-dimensional since each point has xyz values, so how do people turn those 3 values into one RSME value?
First of all, it's RMSE, not RSME. It stands for Root Mean Square Error:
https://en.wikipedia.org/wiki/Root-mean-square_deviation
With 3D coordinates you can compare component-wise, or however else you choose to define a distance measure (for example the Euclidean distance between a point pair). Then you plug this into the RMSE formula. Essentially this means comparing an expected value to your observed value.
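For illustration, assuming the correspondences are already known (the clouds are the same size and points are paired by index) and using the Euclidean distance per pair, a minimal Java sketch could look like this:

    // Minimal sketch: RMSE between two point clouds whose points are already
    // paired by index (the correspondence is assumed to be known here).
    public class PointCloudRmse {

        /** points[i] = {x, y, z} */
        static double rmse(double[][] predicted, double[][] actual) {
            if (predicted.length != actual.length) {
                throw new IllegalArgumentException("clouds must have the same size");
            }
            double sumSquared = 0.0;
            for (int i = 0; i < predicted.length; i++) {
                double dx = predicted[i][0] - actual[i][0];
                double dy = predicted[i][1] - actual[i][1];
                double dz = predicted[i][2] - actual[i][2];
                // squared Euclidean distance of this point pair
                sumSquared += dx * dx + dy * dy + dz * dz;
            }
            return Math.sqrt(sumSquared / predicted.length);
        }

        public static void main(String[] args) {
            double[][] a = {{0, 0, 0}, {1, 1, 1}};
            double[][] b = {{0, 0, 1}, {1, 1, 0}};
            System.out.println(rmse(a, b)); // 1.0
        }
    }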
As for the point correspondence - this depends on the algorithm of choice. Probably one of the most famous examples is ICP:
https://de.wikipedia.org/wiki/Iterative_Closest_Point_Algorithm
In a nutshell for every point of one cloud, the closest point of the other cloud is determined. Then an error measure is calculated and lastly points are transformed. This is done an arbitrary number of times, depending on the desired precision.
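To make the correspondence step concrete, here is a rough sketch of the brute-force matching that one ICP iteration starts from: for every point of cloud A, find the closest point of cloud B. Real implementations use a spatial index such as a k-d tree and then estimate and apply a rigid transform, both of which are omitted here; the helper could sit next to the RMSE sketch above:

    // Rough sketch of the matching step of one ICP iteration:
    // for every point in cloud A, find the index of the closest point in cloud B.
    static int[] closestPoints(double[][] cloudA, double[][] cloudB) {
        int[] match = new int[cloudA.length];
        for (int i = 0; i < cloudA.length; i++) {
            double best = Double.MAX_VALUE;
            for (int j = 0; j < cloudB.length; j++) {
                double dx = cloudA[i][0] - cloudB[j][0];
                double dy = cloudA[i][1] - cloudB[j][1];
                double dz = cloudA[i][2] - cloudB[j][2];
                double d2 = dx * dx + dy * dy + dz * dz; // squared distance is enough for comparison
                if (d2 < best) {
                    best = d2;
                    match[i] = j;
                }
            }
        }
        return match; // match[i] = index of B's point paired with A's point i
    }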
Since I strongly suspect that you are indeed looking for ICP, here is the description as to how they are put together:
https://en.wikipedia.org/wiki/Iterative_closest_point
Other than that you will have to do some reading yourself.
Problem
I want to find
The first root
The first local minimum/maximum
of a black-box function in a given range.
The function has the following properties:
It's continuous and differentiable.
It's a combination of constant and periodic functions. All periods are known.
(It would be better if this could be done with weaker assumptions.)
What is the fastest way to get the root and the extremum?
Do I need more assumptions or bounds of the function?
What I've tried
I know I can use root-finding algorithm. What I don't know is how to find the first root efficiently.
It needs to be fast enough to run within a few milliseconds with a precision of 1.0 over a range of 1.0e+8, which is the problem.
Since the range could be quite large and it should be precise enough, I can't brute-force it by checking all the possible subranges.
I considered the bisection method, but it's too slow to find the first root when the function has only one big root in the range, since every subrange would have to be checked.
It's preferable if the solution is in Java, but any similar language is fine.
Background
I want to calculate when an arbitrary celestial object reaches a certain height.
It's a configuration-defined virtual object, so I can't assume anything about the object.
It's not easy to get either an analytical solution or a simple approximation because various coordinates are involved.
I decided to find a numerical solution for this.
For a general black box function, this can't really be done. Any root finding algorithm on a black box function can't guarantee that it has found all the roots or any particular root, even if the function is continuous and differentiable.
The property of being periodic gives a bit more hope, but you can still have periodic functions with infinitely many roots in a bounded domain. Given that your function relates to celestial objects, this isn't likely to happen. Assuming your periodic functions are sinusoidal, I believe you can get away with checking subranges on the order of one-quarter of the shortest period (out of all the periodic components).
Maybe try Brent's Method on the shortest quarter period subranges?
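For example (a sketch only, in Java since that is the preferred language here): scan the range in steps of a quarter of the shortest known period, and as soon as the function changes sign, narrow that bracket down. Plain bisection is used inside the bracket for brevity; Brent's method would converge faster. The quarter-period step is a heuristic that assumes roughly sinusoidal components, as discussed above.

    import java.util.function.DoubleUnaryOperator;

    // Sketch only: scan [a, b] in quarter-period steps, then bisect the first
    // bracket where the sign changes.
    public class FirstRoot {

        static double firstRoot(DoubleUnaryOperator f, double a, double b,
                                double shortestPeriod, double tol) {
            double step = shortestPeriod / 4.0;
            double x0 = a;
            double f0 = f.applyAsDouble(x0);
            while (x0 < b) {
                if (f0 == 0.0) return x0;               // sampled exactly on a root
                double x1 = Math.min(x0 + step, b);
                double f1 = f.applyAsDouble(x1);
                if (f0 * f1 < 0.0) {                    // sign change: the first root is bracketed
                    double lo = x0, hi = x1;
                    while (hi - lo > tol) {             // plain bisection; Brent's method would be faster
                        double mid = 0.5 * (lo + hi);
                        if (f.applyAsDouble(lo) * f.applyAsDouble(mid) <= 0.0) hi = mid;
                        else lo = mid;
                    }
                    return 0.5 * (lo + hi);
                }
                x0 = x1;
                f0 = f1;
            }
            return f0 == 0.0 ? x0 : Double.NaN;         // NaN: no sign change detected in [a, b]
        }

        public static void main(String[] args) {
            // toy example: first root of sin(x) - 0.5 on [0, 1e8]; shortest period is 2*PI
            double r = firstRoot(x -> Math.sin(x) - 0.5, 0.0, 1e8, 2.0 * Math.PI, 1e-6);
            System.out.println(r);                      // ~0.5236, i.e. pi/6
        }
    }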
Another approach would be to apply your root finding algorithm iteratively. If your range is (a, b), then apply your algorithm to that range to find a root at say c < b. Then apply your algorithm to the range (a, c) to find a root in that range. Continue until no more roots are found. The last root you found is a good candidate for your minimum root.
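A sketch of that iterative idea in Java, assuming you already have some root finder for a range (the RangeRootFinder interface below is hypothetical and just stands in for whatever routine you use; it should return some root in (lo, hi), or NaN if it finds none):

    import java.util.function.DoubleUnaryOperator;

    // Sketch of the "keep searching to the left of the last root" idea.
    public class LeftmostRoot {

        // hypothetical stand-in for your existing root finder
        interface RangeRootFinder {
            double find(DoubleUnaryOperator f, double lo, double hi);
        }

        static double leftmostRoot(DoubleUnaryOperator f, double a, double b,
                                   RangeRootFinder finder, double eps) {
            double best = Double.NaN;
            double hi = b;
            double c;
            while (!Double.isNaN(c = finder.find(f, a, hi))) {
                best = c;       // leftmost root found so far
                hi = c - eps;   // keep looking strictly to the left of it
            }
            return best;        // NaN if no root was found in (a, b) at all
        }
    }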
A black-box function over any range? You cannot even be sure it has a continuous domain over that range. What kind of solutions are you looking for? Natural numbers, integers, real numbers, complex? These are all questions that greatly impact the answer.
So the first thing should be determining what kind of number you accept as the result.
Second is having some kind of protection against limits of the function, which can blow up your calculations as the function heads towards plus or minus infinity.
Since we are touching on the topic of limits: your candidate could edge towards zero and look like a solution, yet never actually reach 0 and become one. This depends on your margin of error, i.e. how close something has to be to be considered good enough.
I think your simplest-to-implement bet for real-number solutions (I assume those) is to take an interval and apply this divide-and-conquer algorithm (a rough sketch follows after the list):
Take the lower border, the upper border, and the middle value (or an approximate middle value if the borders have infinitely many decimals).
Evaluate the function at all 3 values, with some kind of protection against infinities.
Remember all 3 values and their results in an array (3 value-result pairs).
Remember the current best value (the one closest to a solution) in a separate variable (a single value-result pair).
Step forward: repeat the above on the 1st-2nd value range and on the 2nd-3rd value range.
Get a new value-result pair that is closest to a solution.
Clear the old value-result pairs and replace them with the new ones from this iteration, while remembering the overall best value-result pair.
Repeat the above until you reach the precision you want, and watch memory explode with each iteration; keep in mind that you will get exponential growth of values there. It can be improved if you, say, take one interval and go as deep as you want, remember the best value-result pair, then delete all other memory and move on to the next interval and dig deep there.
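A rough sketch of that idea in Java (the depth limit, the "value closest to zero" criterion and the finiteness guard are my own choices; this version recurses depth-first per half-interval, so memory stays small, but the number of evaluations still grows exponentially with depth):

    import java.util.function.DoubleUnaryOperator;

    // Rough sketch of the divide-and-conquer idea above: repeatedly split the
    // interval at its midpoint and keep the sampled value closest to zero.
    public class IntervalSearch {

        static double bestX = Double.NaN;
        static double bestAbs = Double.POSITIVE_INFINITY;

        static void search(DoubleUnaryOperator f, double lo, double hi, int depth) {
            double mid = 0.5 * (lo + hi);
            for (double x : new double[] {lo, mid, hi}) {
                double y = f.applyAsDouble(x);
                if (Double.isFinite(y) && Math.abs(y) < bestAbs) {  // guard against infinities / NaN
                    bestAbs = Math.abs(y);
                    bestX = x;
                }
            }
            if (depth == 0) return;
            search(f, lo, mid, depth - 1);   // dig into the lower half
            search(f, mid, hi, depth - 1);   // dig into the upper half
        }

        public static void main(String[] args) {
            search(x -> x * x - 2.0, 0.0, 2.0, 20);   // toy example: approximates sqrt(2)
            System.out.println(bestX + " -> " + bestAbs);
        }
    }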
I need to write a function that returns one of the numbers (-2,-1,0,1,2) randomly, but I need the average of the output to be a specific number (say, 1.2).
I saw similar questions, but all the answers seem to rely on the target range being wide enough.
Is there a way to do this (without saving state) with this small selection of possible outputs?
UPDATE: I want to use this function for (randomized) testing, as a stub for an expensive function which I don't want to run. The consumer of this function runs it a couple of hundred times and takes an average. I've been using a simple randint function, but the average is always very close to 0, which is not realistic.
Point is, I just need something simple that won't always average to 0. I don't really care what the actual average is. I may have asked the question wrong.
Do you really mean to require that specific value to be the average, or rather the expected value? In other words, if the generated sequence were to contain an extraordinary number of small values in its initial part, should the rest of the sequence try to compensate for that to get the overall average right? I assume not; I assume you want all your samples to be computed independently (after all, you said you don't want any state), in which case you can only control the expected value.
If you assign a probability p(i) to each of your possible choices i, then the expected value will be the sum of these choices, weighted by their probabilities:
EV = -2*p(-2) - 1*p(-1) + 1*p(1) + 2*p(2) = 1.2
As additional constraints you have to require that each of these probabilities is non-negative, and that the above four add up to a value less than 1, with the remainder taken by the fifth probability p(0).
There are many possible assignments which satisfy these requirements, and any one will do what you asked for. Which of them are reasonable for your application depends on what that application does.
You can use a PRNG which generates variables uniformly distributed in the range [0,1), and then map these to the cases you described by taking the cumulative sums of the probabilities as cut points.
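For example, one assignment that satisfies these constraints is p(-2)=0, p(-1)=0, p(0)=0.2, p(1)=0.4, p(2)=0.4, giving EV = 0*0.2 + 1*0.4 + 2*0.4 = 1.2 (these particular probabilities are just one possible choice). A stateless sampler using the cumulative sums as cut points could then look like this (Java sketch):

    import java.util.Random;

    // Sketch of a stateless sampler over {-2,-1,0,1,2} with expected value 1.2.
    // The probabilities below are just one of many valid assignments:
    // p(-2)=0.0, p(-1)=0.0, p(0)=0.2, p(1)=0.4, p(2)=0.4
    public class BiasedRandom {
        private static final int[]    VALUES = {-2, -1, 0, 1, 2};
        private static final double[] PROBS  = {0.0, 0.0, 0.2, 0.4, 0.4};
        private static final Random   RNG    = new Random();

        static int sample() {
            double u = RNG.nextDouble();      // uniform in [0, 1)
            double cumulative = 0.0;
            for (int i = 0; i < VALUES.length; i++) {
                cumulative += PROBS[i];       // cut points are the cumulative sums
                if (u < cumulative) return VALUES[i];
            }
            return VALUES[VALUES.length - 1]; // guard against rounding error
        }

        public static void main(String[] args) {
            double sum = 0;
            int n = 100_000;
            for (int i = 0; i < n; i++) sum += sample();
            System.out.println(sum / n);      // should hover around 1.2
        }
    }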
I am having trouble finding a proper similarity measure for clustering. I have around 3000 arrays of sets, where each set contains features of a certain domain (e.g., numbers, colors, days, alphabets, etc.). I'll explain my problem with an example.
Let's assume I have only 2 arrays (a1 & a2) and I want to find the similarity between them. Each array contains 4 sets (in my actual problem there are 250 sets (domains) per array) and a set can be empty.
a1: {a,b}, {1,4,6}, {mon, tue, wed}, {red, blue,green}
a2: {b,c}, {2,4,6}, {}, {blue, black}
I have come up with a similarity measure using the Jaccard index (denoted as J):
sim(a1,a2) = [J(a1[0], a2[0]) + J(a1[1], a2[1]) + ... + J(a1[3], a2[3])]/4
Note: I divide by the total number of sets (4 in the above example) to keep the similarity between 0 and 1.
Is this a proper similarity measure, and are there any flaws in this approach? I am applying the Jaccard index to each set separately because I want to compare the similarity between related domains (i.e. color with color, etc.).
I am not aware of any other proper similarity measure for my problem.
Further, can I use this similarity measure for clustering purposes?
This should work for most clustering algorithms. Don't use k-means - it can handle numeric vector spaces only. But you have a vector-of-sets type of data.
You may want to use a different mean than the arithmetic average for combining the four Jaccard measures. Try the harmonic or geometric means. See, the average over 250 values will likely be somewhere close to 0.5 all the time, so you need a mean that is more "aggressive".
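For concreteness, here is a quick sketch of that similarity in Java, with a switch between the arithmetic mean (the formula from the question) and the geometric mean suggested above; the convention that two empty sets count as identical is my own choice and may not fit your data:

    import java.util.*;

    // Sketch: Jaccard per domain (set position), combined across domains.
    public class SetArraySimilarity {

        static double jaccard(Set<String> a, Set<String> b) {
            if (a.isEmpty() && b.isEmpty()) return 1.0;   // convention: two empty sets count as identical
            Set<String> inter = new HashSet<>(a);
            inter.retainAll(b);
            Set<String> union = new HashSet<>(a);
            union.addAll(b);
            return (double) inter.size() / union.size();
        }

        // arithmetic mean matches the formula in the question; the geometric mean
        // is the "more aggressive" alternative suggested above
        static double similarity(List<Set<String>> a1, List<Set<String>> a2, boolean geometric) {
            double sum = 0.0, product = 1.0;
            for (int i = 0; i < a1.size(); i++) {          // domain i is only compared with domain i
                double j = jaccard(a1.get(i), a2.get(i));
                sum += j;
                product *= j;
            }
            return geometric ? Math.pow(product, 1.0 / a1.size()) : sum / a1.size();
        }

        public static void main(String[] args) {
            List<Set<String>> a1 = List.of(Set.of("a", "b"), Set.of("1", "4", "6"),
                    Set.of("mon", "tue", "wed"), Set.of("red", "blue", "green"));
            List<Set<String>> a2 = List.of(Set.of("b", "c"), Set.of("2", "4", "6"),
                    Set.of(), Set.of("blue", "black"));
            System.out.println(similarity(a1, a2, false)); // arithmetic mean of the four Jaccard values
        }
    }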
So the plan sounds good. Just try it: implement this similarity and plug it into various clustering algorithms and see if they find something. I like OPTICS for exploring data and distance functions, as the OPTICS plot can be very indicative of whether (or not!) there is something to be found based on the distance function. If the plot is too flat, there just is not much to be found; it is like a representative sample of the distances in the data set...
I use ELKI, and they even have a tutorial on adding custom distance functions: http://elki.dbs.ifi.lmu.de/wiki/Tutorial/DistanceFunctions although you can probably just compute the distances with whatever tool you like and write them to a similarity matrix. At 3000 objects this will remain very manageable, 4200000 doubles is just a few MB.
I have a linear regression equation from school, which gives a value between 1 and -1 indicating whether or not a set of data points is close enough to a linear function,
and the equation given here
http://people.hofstra.edu/stefan_waner/realworld/calctopic1/regression.html
under best fit of a line. I would like to use these to do simple gesture detection based on a point in 3-space (x,y,z) - forward, back, left, right, up, down. First I would see if they fall on a line in 2 of the 3 dimensions, then I would see if that line's slope approached zero or infinity.
Is this fast enough for functional gesture recognition? If not, could someone propose an alternative algorithm?
If I've understood your question correctly then (1) the calculation you describe here would probably be plenty fast enough, (2) it may not actually do what you want, and (3) the stuff that'll be slow in an actual implementation would lie elsewhere.
So, I think you're proposing to do this. (1) Identify the positions of ... something ... (the user's hand, perhaps) in three-dimensional space, at several successive times. (2) For (say) each of {x,y} and {x,z}, look at those two coordinates of each point, compute the correlation coefficient (which is what your formula describes) and see whether it's close to +-1. (3) If both correlation coefficients are close to +-1 then the points lie approximately on a straight line; calculate the gradient of that line (using a formula similar to that of the correlation coefficient). (4) If the gradients are both very close to 0 or +- infinity, then your line is approximately parallel to one axis, which is the case you're trying to recognize.
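For reference, here is a small sketch of the calculation in steps (2) and (3), written from the standard formulas for the Pearson correlation coefficient and the least-squares slope. Note that the denominators become zero exactly when one coordinate is constant, i.e. the axis-parallel case, so that needs special handling:

    // Sketch: Pearson correlation coefficient and least-squares slope for one
    // coordinate pair (e.g. x vs. y), as described in steps (2) and (3) above.
    public class LineFit {

        static double correlation(double[] x, double[] y) {
            int n = x.length;
            double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                sx += x[i];  sy += y[i];
                sxx += x[i] * x[i];  syy += y[i] * y[i];  sxy += x[i] * y[i];
            }
            double num = n * sxy - sx * sy;
            double den = Math.sqrt(n * sxx - sx * sx) * Math.sqrt(n * syy - sy * sy);
            return num / den;   // close to +-1 means "nearly a line"; den == 0 if a coordinate is constant
        }

        static double slope(double[] x, double[] y) {
            int n = x.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                sx += x[i];  sy += y[i];
                sxx += x[i] * x[i];  sxy += x[i] * y[i];
            }
            return (n * sxy - sx * sy) / (n * sxx - sx * sx);  // gradient of the best-fit line
        }
    }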
1: Is it fast enough? You might perhaps be sampling at 50 frames per second or thereabouts, and your gestures might take a second to execute. So you'll have somewhere on the order of 50 positions. So, the total number of arithmetic operations you'll need is maybe a few hundred (including a modest number of square roots). In the worst case, you might be doing this in emulated floating-point on a slow ARM processor or something; in that case, each arithmetic operation might take a couple of hundred cycles, so the whole thing might be 100k cycles, which for a really slow processor running at 100MHz would be about a millisecond. You're not going to have any problem with the time taken to do this calculation.
2: Is it the right thing? It's not clear that it's the right calculation. For instance, suppose your user's hand moves back and forth rapidly several times along the x-axis; that will give you a positive result; is that what you want? Suppose the user attempts the gesture you want but moves at slightly the wrong angle; you may get a negative result. Suppose they move exactly along the x-axis for a bit and then along the y-axis for a bit; then the projections onto the {x,y}, {x,z} and {y,z} planes will all pass your test. These all seem like results you might not want.
3: Is it where the real cost will lie? This all assumes you've already got (x,y,z) coordinates. Getting those is probably going to be more expensive than processing them. For instance, if you have a camera-based system of some kind then there'll be some nontrivial image processing for every frame. Or perhaps you're integrating up data from accelerometers (which, by the way, is likely to give nasty inaccurate position results); the chances are that you're doing some filtering and other calculations to get position data. I bet that the cost of performing a calculation like this one will be substantially less than the cost of getting the coordinates in the first place.
I have an asymmetric directed graph with a set of probabilities (so the likelihood that a person will move from point A to B, or point A to C, etc). Given a route through all the points, I would like to calculate the likelihood that each choice made in the route is a good choice.
As an example, suppose a graph of just 2 points.
// In a matrix, the probabilities might look like:
//      A     B
[     0.0   0.9    // A
      0.1   0.0 ]  // B
So the probability of moving from A to B is 0.9 and from B to A is 0.1. Given the route A->B, how correct is the first point (A), and how correct is the second point (B).
Suppose I have a bigger matrix with a route that goes A->B->C->D. So, some examples of what I would like to know:
How likely is it that A comes before B,C, & D
How likely is it that B comes after A
How likely is it that C & D come after B
Basically, at each point, I want to know the likelihood that the previous points come before the current and also the likelihood that the following points come after. I don't need something that is statistically sound. Just an indicator that I can use for relative comparisons. Any ideas?
update: I see that this question is not useful to everyone but the answer is really useful to me so I've tried to make the description of the problem more clear and will include my answer shortly in case it helps someone.
I don't think that's possible efficiently. If there was an algorithm to calculate the probability that a point was in the wrong position, you could simply work out which position was least wrong for each point, and thus calculate the correct order. The problem is essentially the same as finding the optimal route.
The subsidiary question is what the probability is 'of', here. Can the probability be 100%? How would you know?
Part of the reason the travelling salesman problem is hard is that there is no way to know that you have the optimal solution except looking at all the solutions and finding that it is the shortest.
Replace the probability matrix (p) with -log(p); finding the shortest path in that matrix would then solve your problem.
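As a sketch of that suggestion: treating the step probabilities as independent, the most probable path between two points is the shortest path under weights w = -log(p). Floyd-Warshall is used below only because it is short; zero probabilities become infinite weights, i.e. unusable edges:

    // Sketch of the -log(p) idea: most probable path = shortest path under -log weights.
    public class MostProbablePath {

        static double[][] toWeights(double[][] p) {
            int n = p.length;
            double[][] w = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    w[i][j] = (i == j) ? 0.0
                            : (p[i][j] > 0 ? -Math.log(p[i][j]) : Double.POSITIVE_INFINITY);
            return w;
        }

        static double[][] shortestPaths(double[][] w) {   // Floyd-Warshall, all pairs
            int n = w.length;
            double[][] d = new double[n][n];
            for (int i = 0; i < n; i++) d[i] = w[i].clone();
            for (int k = 0; k < n; k++)
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        if (d[i][k] + d[k][j] < d[i][j]) d[i][j] = d[i][k] + d[k][j];
            return d;
        }

        public static void main(String[] args) {
            double[][] p = {{0, 0.9}, {0.1, 0}};           // the 2-point example above
            double[][] d = shortestPaths(toWeights(p));
            // probability of the most likely A->B path = exp(-distance)
            System.out.println(Math.exp(-d[0][1]));        // 0.9
        }
    }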
After much thought, I came up with something that suits my needs. It still has the same problem that getting an accurate answer would require checking every possible route. However, in my case, checking only the direct routes and the first-level indirect routes is enough to give an idea of how "correct" my answer is.
First I need the confidence for each probability. This is a separate calculation and is contained in a separate matrix (that maps 1 to 1 to the probability matrix). I just take the 1.0-confidenceInterval for each probability.
If I have a route A->B->C->D, I calculate a "correctness indicator" for each point. Essentially I am taking some sort of weighted average over the direct route and the first level of indirect routes.
Some examples:
Denote P(A,B) as probability that A comes before B
Denote C(A,B) as confidence in the probability that A comes before B
Denote P'(A,C) as the probability that A comes before C based on the indirect route A->B->C
At point B, likelihood that A comes before it:
indicator = P(A,B)*C(A,B)/C(A,B)
At point C, likelihood that A & B come before:
P'(A,C) = P(A,B)*P(B,C)
C'(A,C) = C(A,B)*C(B,C)
indicator = [P(A,C)*C(A,C) + P(B,C)*C(B,C) + P'(A,C)*C'(A,C)]/[C(A,C)+C(B,C)+C'(A,C)]
So this gives me some sort of indicator that is always between 0 and 1 and takes the first-level indirect routes into account (from->indirectPoint->to). It seems to provide the rough estimation I was looking for. It is not a great answer, but it does provide some estimate, and since nothing else provides anything better, it is suitable.
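For what it's worth, here is a small sketch of the indicator at point C for the route A->B->C->D, written directly from the formulas above (p holds the probabilities, conf the confidences; indices 0..3 stand for A..D):

    // Sketch of the indicator at point C for the route A->B->C->D.
    // p[i][j] = probability that i comes before j, conf[i][j] = confidence in that probability.
    public class RouteIndicator {

        static double indicatorAtC(double[][] p, double[][] conf) {
            int A = 0, B = 1, C = 2;
            double pIndirect = p[A][B] * p[B][C];        // P'(A,C), via the route A->B->C
            double cIndirect = conf[A][B] * conf[B][C];  // C'(A,C)
            double num = p[A][C] * conf[A][C]            // direct: A before C
                       + p[B][C] * conf[B][C]            // direct: B before C
                       + pIndirect * cIndirect;          // indirect: A->B->C
            double den = conf[A][C] + conf[B][C] + cIndirect;
            return num / den;                            // confidence-weighted average in [0, 1]
        }
    }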