What unit does `getFitnessScore()` in the IterativeClosestPoint class from PCL return? - point-cloud-library

I use the pcl::IterativeClosestPoint method from the Point Cloud Library (PCL).
As of right now its documentation seems to be offline; there is a copy in the Google cache, and also a tutorial.
It is possible to call icp.getFitnessScore() to get the mean squared distance between the points of the two clouds, but I can't find any information on what unit this value is expressed in. Does anyone know what the number I get there means? For example, my output was 0.0003192. That seems low, but I have no clue whether it is meters, centimeters, feet, or whatever.
Thank you very much.

What kind of unit does icp.getFitnessScore() use?
As Joy said in his comment, the unit is the same as that of your input data.
For example, your input point cloud might come from an .obj file, where a point is stored like v 9.322 -1.0778 0.44997. The number returned by icp.getFitnessScore() is then expressed in the same unit as the point coordinates (strictly speaking, since it is a mean of squared distances, in that unit squared).
Does anyone know what the number I get there means?
The number you get represents the mean squared distance from each point in the source to its closest point in the target.
That is to say, if you assume that every point in the source has a corresponding point in the target, and that the correspondence set comes from closest-point data association, then the number is the mean squared distance over all those correspondences. This can be seen from the source code below.
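In symbols (my own notation, not from the original answer): with T the final transformation, p_i the source points, q_nn(i) the target point closest to the transformed p_i, and N the number of correspondences used,
score = (1/N) * sum_i || T * p_i - q_nn(i) ||^2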
To make more sense of the score, you might want to filter out correspondences that are far apart from each other (the two point clouds might only partially overlap), and the function actually has an optional parameter max_range that does exactly this.
The method getFitnessScore() is defined in pcl::Registration, the base class of pcl::IterativeClosestPoint. The optional parameter max_range defaults to std::numeric_limits<double>::max(), as you can see in the declaration:
/** \brief Obtain the Euclidean fitness score (e.g., sum of squared distances from the source to the target)
* \param[in] max_range maximum allowable distance between a point and its correspondence in the target
* (default: double::max)
*/
inline double
getFitnessScore (double max_range = std::numeric_limits<double>::max ());
And the source code of this function is:
template <typename PointSource, typename PointTarget, typename Scalar> inline double
pcl::Registration<PointSource, PointTarget, Scalar>::getFitnessScore (double max_range)
{
  double fitness_score = 0.0;

  // Transform the input dataset using the final transformation
  PointCloudSource input_transformed;
  transformPointCloud (*input_, input_transformed, final_transformation_);

  std::vector<int> nn_indices (1);
  std::vector<float> nn_dists (1);

  // For each point in the source dataset
  int nr = 0;
  for (size_t i = 0; i < input_transformed.points.size (); ++i)
  {
    // Find its nearest neighbor in the target
    tree_->nearestKSearch (input_transformed.points[i], 1, nn_indices, nn_dists);

    // Deal with occlusions (incomplete targets)
    if (nn_dists[0] <= max_range)
    {
      // Add to the fitness score
      fitness_score += nn_dists[0];
      nr++;
    }
  }

  if (nr > 0)
    return (fitness_score / nr);
  else
    return (std::numeric_limits<double>::max ());
}
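For context, here is a minimal sketch of how the score is typically read off after an alignment; the helper function and variable names are my own, not from the question or the PCL docs.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Aligns 'source' to 'target' and returns the fitness score.
// The score is a mean of squared nearest-neighbor distances, so it is in the
// squared units of the input coordinates (e.g. m^2 if the clouds are in meters).
double alignAndScore (const pcl::PointCloud<pcl::PointXYZ>::Ptr &source,
                      const pcl::PointCloud<pcl::PointXYZ>::Ptr &target)
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource (source);
  icp.setInputTarget (target);

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align (aligned);

  return icp.getFitnessScore ();
}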

Related

Given 3 or more numbers or vectors, how do I interpolate between them based on a percentage?

I want to know the most basic math principles I need to interpolate a value between 3 or more other values, based on a linear percentage, as it would be applied in programming.
For example, say I have "0", "100", "200", and I want the number that's at "50%". The math would then return something like "100" because 100 is at 50%.
Another example: I have 3 points somewhere in 3D space. If I do "75%" then the result would be a point that is exactly halfway between point 2 and 3, or if I do "25%" then it'll be half-way between 1 and 2.
Game engines like Unity use something like this for blending between multiple animations on a character, for another example.
What I've brainstormed so far is that I would somehow take the input value and find whichever 2 neighboring "points" are closest to it (much harder in 3D or 2D space but manageable in 1D), then simply lerp between those two, but that requires me to figure out what percentage each of those points is at individually, and remap from "0 to 100%" to "A% to B%". I think it would work, but it seems kind of complicated to me.
If possible, I'd like answers to include a C# example or language-agnostic pseudocode, just so I can understand the math.
A simple example for scalar float objects using piecewise linear interpolation:
#include <math.h> // for floor()
const int n=3;                // number of your objects
float x[n]={ 0.5,2.0,10.0 };  // your objects
float get_object(float t)     // linearly interpolate objects x[] based on parameter t = <0,1>; the return value has the same type as your objects
{
    int ix;
    float x0,x1; // the same type as your objects
    // get segment ix and local parameter t
    t*=n; ix=(int)floor(t); t-=ix;
    // clamp ix so that t outside <0,1> (including t=1 exactly) cannot index out of bounds
    if (ix<0) ix=0;
    if (ix>n-1) ix=n-1;
    // get closest known points x0,x1
    x0=x[ix]; ix++;
    if (ix<n) x1=x[ix]; else return x0;
    // interpolate
    return x0+(x1-x0)*t;
}
So if t=0 it returns the first object in x[], if t=1 it returns the last, and anything in between is linearly interpolated (see the short usage sketch below). The idea is simply to multiply t by the number of segments or points (depending on how you handle the edge cases); the integer part of the result gives the index of the 2 objects closest to the wanted one, and the fractional part of the multiplied t directly gives the interpolation parameter in the range <0,1> between those two closest points.
In case your objects do not all have the same weight, or are not uniformly sampled, you need to add weighted interpolation or use a higher-order polynomial (quadratic, cubic, ...).
You can use this for "any" object type T; you just have to implement the operations T+T, T-T and T*float if they are not already present.
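For instance, a quick usage sketch (my own illustration, assuming x[] and get_object() from the snippet above are in scope):
float a = get_object(0.0f); // 0.5  (first object)
float b = get_object(0.5f); // 6.0  (t*n = 1.5, so halfway between x[1] = 2.0 and x[2] = 10.0)
float c = get_object(1.0f); // 10.0 (last object)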
If your game objects are on the same line, try this code.
public Transform objStart;
public Transform objEnd;
public Transform square;
public float distance;

// percent .5 means 50%
[Range(0f,1f)]
public float percent;
public Vector3 distancePercentPosition;

// Start is called before the first frame update
void Start()
{
}

// Update is called once per frame
void Update()
{
    // get the distance between the two objects
    distance = Vector3.Magnitude(objEnd.position - objStart.position);
    // get the position based on percent
    distancePercentPosition = (objEnd.position - objStart.position).normalized * percent * distance;
    square.position = objStart.position + distancePercentPosition;
}
Once you get the position between the two objects, you can place your game object at any position along the line based on the percent.
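As a side note (my observation, not part of the original answer): because the normalized direction is multiplied back by the full distance, the computation simplifies to the standard linear interpolation formula
position = objStart.position + (objEnd.position - objStart.position) * percent
which is exactly what Vector3.Lerp(objStart.position, objEnd.position, percent) returns for percent in [0, 1].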

What is the difference between these two implementations of a recursive algorithm?

I am doing a leetcode problem.
A robot is located at the top-left corner of a m x n grid (marked 'Start' in the diagram below).
The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked 'Finish' in the diagram below).
How many possible unique paths are there?
So I tried this implementation first and got an "exceeds runtime" error (I forget the exact term, but it means the implementation is too slow). So I changed it to version 2, which uses an array to save the results. I honestly don't know how the recursion works internally or why these two implementations have different efficiency.
Version 1 (slow):
class Solution {
    // int res[101][101]={{0}};
public:
    int uniquePaths(int m, int n) {
        if (m==1 || n==1) return 1;
        else {
            return uniquePaths(m-1,n) + uniquePaths(m,n-1);
        }
    }
};
Version 2 (faster):
class Solution {
    int res[101][101]={{0}};
public:
    int uniquePaths(int m, int n) {
        if (m==1 || n==1) return 1;
        else {
            if (res[m-1][n]==0) res[m-1][n] = uniquePaths(m-1,n);
            if (res[m][n-1]==0) res[m][n-1] = uniquePaths(m,n-1);
            return res[m-1][n] + res[m][n-1];
        }
    }
};
Version 1 is slower because you are calculating the same data again and again. I'll try to explain this with a different problem, but I assume you know the Fibonacci numbers. You can calculate any Fibonacci number with the following recursive algorithm:
fib(n):
    if n == 0 then return 0
    if n == 1 then return 1
    return fib(n-1) + fib(n-2)
But what are you actually calculating? If you want to find fib(5) you need to calculate fib(4) and fib(3), and then to calculate fib(4) you need to calculate fib(3) again! Take a look at the image to fully understand:
The same situation occurs in your code: you compute uniquePaths(m,n) even if you have already calculated it before. To avoid that, your second version uses an array to store the computed data, so you don't have to compute a value again once res[m][n] != 0.
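To make the caching idea concrete, here is a minimal C++ sketch of the same memoization applied to the Fibonacci example above (illustrative code, not from the original answer):
#include <vector>

// Memoized Fibonacci: each fib(k) is computed at most once and then reused.
long long fib(int n, std::vector<long long> &memo)
{
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];            // already computed, reuse it
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

// usage:
//   std::vector<long long> memo(n + 1, -1);
//   long long result = fib(n, memo);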

Calculating bank angle between two objects

I have a drone following a path for movement. That is, it doesn't use a rigidbody so I don't have access to velocity or magnitude and such. It follows the path just fine, but I would like to add banking to it when it turns left or right. I use a dummy object in front of the drone, thinking I could calculate the bank/tilt amount using the transform vectors from the two objects.
I've been working on this for days as I don't have a lot of math skills. Basically I've been copying pieces of code trying to get things to work. Nothing I do works to make the drone bank. The following code manages to spin (not bank).
// Update is called once per frame
void Update () {
    Quaternion rotation = Quaternion.identity;
    Vector3 dir = (dummyObject.transform.position - this.transform.position).normalized;
    float angle = Vector3.Angle( dir, transform.up );
    float rollAngle = CalculateRollAngle(angle);
    rotation.SetLookRotation(dir, transform.right); // + rollIntensity * smoothRoll * right);
    rotation *= Quaternion.Euler(new Vector3(0, 0, rollAngle));
    transform.rotation = rotation;
}

/// <summary>
/// Calculates roll and smooths it (to compensate for the non-C2-continuous control points algorithm)
/// </summary>
/// <returns>The roll angle.</returns>
/// <param name="rollFactor">Roll factor.</param>
float CalculateRollAngle(float rollFactor)
{
    smoothRoll = Mathf.Lerp(smoothRoll, rollFactor, rollSmoothing * Time.deltaTime);
    float angle = Mathf.Atan2(1, smoothRoll * rollIntensity);
    angle *= Mathf.Rad2Deg;
    angle -= 90;
    TurnRollAngle = angle;
    angle += RollOffset;
    return angle;
}
Assuming you have waypoints the drone is following, you should figure out the angle between the last two (i.e. your "now-facing" and "will be facing" directions). The easy way is to use Vector2.Angle.
I would use this angle to determine the amount I'll tilt the drone's body: the sharper the turn, the harder the banking. I would use a ratio value (public initially so I can manipulate it from the editor).
Next, instead of doing any math I would rely on the engine to do the rotation for me, so I would go for the Transform.Rotate function. In case banking can go too high and look silly, I would set a maximum for it and Clamp my calculated banking angle between zero and that max.
Without knowing exactly what you do and how, it's not easy to give perfect code, but for a better understanding of the above, here's some (untested, i.e. pseudo) code for the solution I visualize:
public float turnSpeed = 7.0f;      //the drone will "rotate toward the new waypoint" by this speed
//bankSpeed+turnBankRatio must be two times "faster" (and/or a smaller degree) than turning, see details in 'EDIT' for why:
public float bankSpeed = 14.0f;     //banking speed
public float turnBankRatio = .5f;   //90 degree turn == 45 degree banking
private float turnAngle = 0.0f;     //this is the 'x' degree turning angle we'll "Lerp"
private float turnAngleABS = 0.0f;  //same as turnAngle but as an absolute value. Stored to avoid Mathf.Abs() in Update()!
private float bankAngle = 0.0f;     //banking degree
private bool isTurning = false;     //are we turning right now?

//when the action is fired for the drone to go for the next waypoint, call this guy
private void TurningTrigger() {
    //remove this line after testing, it's some extra safety
    if (isTurning) { Debug.LogError("oops! must not be possible!"); return; }
    Vector2 droneOLD2DAngle = GetGO2DPos(transform.position);
    //do the code you do for the turning/rotation of the drone here!
    //or use the next waypoint's .position as the new angle if you are OK
    //with the snippet doing the turning for you along with banking. then:
    Vector2 droneNEW2DAngle = GetGO2DPos(transform.position);
    turnAngle = Vector2.Angle(droneOLD2DAngle, droneNEW2DAngle); //turn degree
    turnAngleABS = Mathf.Abs(turnAngle); //avoiding Mathf.Abs() in Update()
    bankAngle = turnAngle * turnBankRatio; //bank angle
    //you can remove this after testing. This is to make sure banking can
    //do a full run before the drone hits the next waypoint!
    if ((turnAngle * turnSpeed) < (bankAngle * bankSpeed)) {
        Debug.LogError("Banking degree too high, or banking speed too low to complete maneuver!");
    }
    //you can clamp or set turnAngle based on a min/max here
    isTurning = true; //all values were set, turning and banking can start!
}

//get 2D position of a GO (simplified)
private Vector2 GetGO2DPos(Vector3 worldPos) {
    return new Vector2(worldPos.x, worldPos.z);
}

private void Update() {
    if (isTurning) {
        //assuming the drone is banking to the "side" and "side" only
        transform.Rotate(0, 0, bankAngle * Time.deltaTime * bankSpeed, Space.Self); //banking
        //if the drone is facing the next waypoint already, set
        //isTurning to false
    } else if (turnAngleABS > 0.0f) {
        //reset back to the original position (with the same speed as above)
        //at least "normal speed" is a must, otherwise the drone might hit the
        //next waypoint before the banking reset can finish!
        float bankAngle_delta = bankAngle * Time.deltaTime * bankSpeed;
        transform.Rotate(0, 0, -1 * bankAngle_delta, Space.Self);
        turnAngleABS -= (bankAngle_delta > 0.0f) ? bankAngle_delta : -1 * bankAngle_delta;
    }
    //the banking was probably not set back to exactly 0, as Time.deltaTime
    //is not a fixed value. if this happened and looks ugly, reset the
    //drone's "z" to Quaternion.identity.z. if it also looks ugly,
    //you need to test that you don't """over bank""" in the above code
    //by comparing bankAngle_delta + 'calculated banking angle' against
    //the identity.z value, and reset bankAngle_delta if it's too high/low.
    //when you are done, your turning animation is over, so:
}
Again, this code might not perfectly fit your needs (or even compile :P), so focus on the idea and the approach, not the code itself. Sorry for not being able to put something together and test it myself right now, but I hope I helped. Cheers!
EDIT: Instead of a wall of text, I tried to answer your question in code (still not perfect, but the goal is not to do the job for you, but to help with some snippets and ideas :)
So. Basically, what you have is a distance and an "angle" between two waypoints. This distance, together with your drone's flight/walk/whatever speed (which I don't know), determines the maximum amount of time available for:
1. Turning, so the drone will face in the new direction
2. Banking to the side, and back to zero/"normal"
As there is twice as much action on the banking side, it either has to be done faster (bankSpeed), or over a smaller angle (turnBankRatio), or both, depending on what looks nice and feels real, what your preference is, etc. So it's 100% subjective. It's also your call whether the drone turns and banks quickly and then approaches the next waypoint, or does things at a slow pace, turning just a little when it has a lot of time/distance and doing things fast only when it has to.
As for isTurning:
You set it to true when the drone has reached a waypoint and heads out to the next one AND the variables to (turn and) bank were set properly. When do you set it to false? It's up to you, but the goal is to do so when the maneuver is finished (this was buggy in the snippet the first time, as this "optimal status" was never possible to reach), so the drone can "reset banking". For further details on what's going on, see the code comments. Again, this is just a snippet to support you with a possible solution for your problem. Give it some time and understand what's going on. It really is easy, you just need some time to cope ;) Hope this helps! Enjoy and cheers! :)

Potential floating point issue with cosine acceleration curve

I am using a cosine curve to apply a force to an object over the range [0, pi]. By my calculations, that should give me a sine curve for the velocity, which at t = pi/2 should be 1.0f.
However, for the simplest of examples, I get a top speed of 0.753.
Now if this is a floating point issue, that is fine, but that is a very significant error so I am having trouble accepting that it is (and if it is, why is there such a huge error computing these values).
Some code:
// the function that gives the force to apply (totalTime = pi, maxForce = 1.0 in this example)
return ((Mathf.Cos(time * (Mathf.PI / totalTime)) * maxForce));
// the engine stores this value and in the next fixed update applies it to the rigidbody
// the mass is 1 so isn't affecting the result
engine.ApplyAccelerateForce(applyingForce * ship.rigidbody2D.mass);
Update
There is no gravity being applied to the object, no other objects in the world for it to interact with and no drag. I'm also using a RigidBody2D so the object is only moving on the plane.
Update 2
OK, I have tried a super simple example and I get the result I am expecting, so there must be something in my code. I will update once I have isolated what is different.
For the record, super simple code:
float forceThisFrame;
float startTime;

// Use this for initialization
void Start () {
    forceThisFrame = 0.0f;
    startTime = Time.fixedTime;
}

// Update is called once per frame
void Update () {
    float time = Time.fixedTime - startTime;
    if(time <= Mathf.PI)
    {
        forceThisFrame = Mathf.Cos (time);
        if(time >= (Mathf.PI /2.0f) - 0.01f && time <= (Mathf.PI /2.0f) + 0.01f)
        {
            print ("Speed: " + rigidbody2D.velocity);
        }
    }
    else
    {
        forceThisFrame = 0.0f;
    }
}

void FixedUpdate()
{
    rigidbody2D.AddForce(forceThisFrame * Vector2.up);
}
Update 3
I have changed my original code to match the above example as closely as I can (remaining differences listed below) and I still get the discrepancy.
Here are my results of velocity against time. Neither of them makes sense to me: with a constant force of 1 N, that should result in the linear velocity function v(t) = t, but that isn't quite what either example produces.
Remaining differences:
The code that is "calculating" the force (now just returning 1) is being run via a non-unity DLL, though the code itself resides within a Unity DLL (can explain more but can't believe this is relevant!)
The behaviour that is applying the force to the rigid body is a separate behaviour.
One is moving a cube in an empty enviroment, the other is moving a Model3D and there is a plane nearby - tried a cube with same code in broken project, same problem
Other than that, I can't see any difference and I certainly can't see why any of those things would affect it. They both apply a force of 1 on an object every fixed update.
For the cosine case this isn't a floating point issue, per se, it's an integration issue.
[In your 'fixed' acceleration case there are clearly also minor floating point issues].
Obviously acceleration is proportional to force (F = ma) but you can't just simply add the acceleration to get the velocity, especially if the time interval between frames is not constant.
Simplifying things by assuming that the inter-frame acceleration is constant, and therefore following v = u + at (or equivalently Δv = a·Δt), you need to scale the effect of the acceleration in proportion to the time elapsed since the last frame. It follows that the smaller Δt is, the more accurate your integration.
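Here is a minimal sketch of that integration outside Unity (plain C++; the cosine acceleration and unit mass match the question's setup, the step size is my own assumption):
#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979323846;
    const double dt = 0.02;        // assumed time step, e.g. one fixed update
    double v = 0.0;                // velocity, starting from rest
    double vAtHalfPi = 0.0;        // velocity recorded near t = pi/2

    for (double t = 0.0; t < pi; t += dt)
    {
        double a = std::cos(t);    // acceleration = force / mass, with mass = 1
        v += a * dt;               // v = u + a*dt: scale by the elapsed time
        if (t <= pi / 2.0)
            vAtHalfPi = v;
    }

    // With a small, consistent dt this is close to the analytic value sin(pi/2) = 1.
    std::printf("velocity near t = pi/2: %f\n", vAtHalfPi);
    return 0;
}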
This was a multi-part problem that started with me not fully understanding Update vs. FixedUpdate in Unity; see this question on GameDev.SE for more info on that part.
My "fix" from that was advancing a timer along with the fixed update so as not to apply the force incorrectly. The problem, as demonstrated by Eric Postpischil, was that FixedUpdate, despite its name, is not called every 0.02 s but instead at most every 0.02 s. The fix for this was, in my Update, to apply some scaling to the force to accommodate missed fixed updates. My code ended up looking something like:
Called From Update
float oldTime = time;
time = Time.fixedTime - startTime;
float variableFixedDeltaTime = time - oldTime;
float fixedRatio = variableFixedDeltaTime / Time.fixedDeltaTime;
if(time <= totalTime)
{
    applyingForce = forceFunction.GetValue(time) * fixedRatio;

    Vector2 currentVelocity = ship.rigidbody2D.velocity;
    Vector2 direction = new Vector2(ship.transform.right.x, ship.transform.right.y);
    float velocityAlongDir = Vector2.Dot(currentVelocity, direction);
    float velocityPrediction = velocityAlongDir + (applyingForce * lDeltaTime);

    if(time > 0.0f && // we are not interested if we are just starting
       ((velocityPrediction < 0.0f && velocityAlongDir > 0.0f ) ||
        (velocityPrediction > 0.0f && velocityAlongDir < 0.0f ) ))
    {
        float ratio = Mathf.Abs((velocityAlongDir / (applyingForce * lDeltaTime)));
        applyingForce = applyingForce * ratio;
        // We have reversed the direction so we must have arrived
        Deactivate();
    }

    engine.ApplyAccelerateForce(applyingForce);
}
Where ApplyAccelerateForce does:
public void ApplyAccelerateForce(float requestedForce)
{
    forceToApply += requestedForce;
}
Called from FixedUpdate
rigidbody2D.AddForce(forceToApply * new Vector2(transform.right.x, transform.right.y));
forceToApply = 0.0f;

Usage of Map and Translate Functions in Processing

New to Processing, working on understanding this code:
import com.onformative.leap.LeapMotionP5;
import java.util.*;

LeapMotionP5 leap;
LinkedList<Integer> values;

public void setup() {
    size(800, 300);
    frameRate(120); // Specifies the number of frames to be displayed every second
    leap = new LeapMotionP5(this);
    values = new LinkedList<Integer>();
    stroke(255);
}

int lastY = 0;

public void draw() {
    translate(0, 180); // (x, y, z)
    background(0);
    if (values.size() >= width) {
        values.removeFirst();
    }
    values.add((int) leap.getVelocity(leap.getHand(0)).y);
    System.out.println((int) leap.getVelocity(leap.getHand(0)).y);
    int counter = 0;
    for (Integer val : values) {
        val = (int) map(val, 0, 1500, 0, height);
        line(counter, val, counter - 1, lastY);
        point(counter, val);
        lastY = val;
        counter++;
    }
    line(0, map(1300, 0, 1500, 0, height), width, map(1300, 0, 1500, 0, height)); // (x1, y1, x2, y2)
}
It basically draws a graph of the movement detected on the y axis using the Leap Motion sensor. The output looks like this:
I eventually need to do something similar to this that would detect amplitude instead of velocity, simultaneously on all 3 axes instead of just the y.
The use of map() and translate() is what's really confusing me. I've read the definitions of these functions on the Processing website, so I know what they are and the syntax, but what I don't understand is the why (which is arguably the most important part).
I am asking if someone can provide simple examples that explain the WHY behind using these 2 functions. For instance, given a program that needs to do A, B, and C, with data foo, y, and x, you would use map() or translate() because A, B, and C.
I think programming guides often overlook this important aspect, but to me it is very important to truly understand a function.
Bonus points for explaining:
for (Integer val : values) and LinkedList<Integer> values; (can't find any documentation on the Processing website for these)
Thanks!
First, we'll do the easiest one. LinkedList is a data structure similar to ArrayList, which you may be more familiar with. If not, it's just a list of values (of the type between the angle brackets, in this case Integer) that you can insert into and remove from. It's a bit complicated on the inside, but if it doesn't appear in the Processing documentation, it's a safe bet that it's built into Java itself (Java documentation).
This line:
for (Integer val : values)
is called a "for-each" or "foreach" loop, which has plenty of very good explanation on the internet, but I'll give a brief explanation here. If you have some list (perhaps a LinkedList, perhaps an ArrayList, whatever) and want to do something with all the elements, you might do something like this:
for(int i = 0; i < values.size(); i++){
    println(values.get(i)); //or whatever
    println(values.get(i) * 2);
    println(pow(values.get(i),3) - 2*pow(values.get(i),2) + values.get(i));
}
If you're doing a lot of manipulation with each element, it quickly gets tedious to write out values.get(i) each time. The solution would be to capture values.get(i) into some variable at the start of the loop and use that everywhere instead. However, this is not 100% elegant, so Java has a built-in way to do this, which is the for-each loop. The code
for (Integer val : values){
    //use val
}
is equivalent to
for(int i = 0; i < values.size(); i++){
    int val = values.get(i);
    //use val
}
Hopefully that makes sense.
map() takes a number in one linear system and maps it onto another linear system. Imagine if I were an evil professor and wanted to give students random grades from 0 to 100. I have a function that returns a random decimal between 0 and 1, so I can now do map(rand(),0,1,0,100); and it will convert the number for me! In this example, you could also just multiply by 100 and get the same result, but it is usually not so trivial. In this case, you have a sensor reading between 0 and 1500, but if you just plotted that value directly, sometimes it would go off the screen! So you have to scale it to an appropriate scale, which is what that does. 1500 is the max that the reading can be, and presumably we want the maximum graphing height to be at the edge of the screen.
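For reference, the arithmetic behind that remapping is just a linear rescale; here is a minimal standalone sketch (written in C++, with a made-up name remap so it is not confused with Processing's built-in map()):
// Remap value from the range [inMin, inMax] to the range [outMin, outMax].
float remap(float value, float inMin, float inMax, float outMin, float outMax)
{
    return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}

// e.g. remap(750.0f, 0.0f, 1500.0f, 0.0f, 300.0f) == 150.0f:
// half of the sensor range lands at half of a 300-pixel-tall window.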
I'm not familiar with your setup, but it looks like the readings can be negative, which means they might get graphed off the screen too. The better solution would be to map the readings from -1500,1500 to 0,height, but it looks like they chose to do it a different way. Whenever you call a drawing function in Processing (e.g. point(x,y)), it draws the pixels at (x,y) offset from (0,0). Sometimes you don't want to draw relative to (0,0), so the translate() function lets you change what things are drawn relative to. In this case, translating allows you to plot a point (x,0) somewhere in the middle of the screen rather than on the edge.
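In other words (my summary, not from the original answer): after translate(tx, ty), a call like point(x, y) effectively draws at screen coordinates (x + tx, y + ty). Here translate(0, 180) moves the origin down to roughly the middle of the 300-pixel-tall window, so both positive and negative velocity readings stay visible.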
Hope that helps!
