Why are the min and max vectors for pcl::CropBox 4-dimensional? - point-cloud-library

The setMin and setMax methods of the pcl::CropBox filter each take an Eigen::Vector4f as a parameter.
Why 4f and not 3f? What's the fourth dimension for?

What's the Vector4f for?
The fourth component is the homogeneous coordinate. For example, (3, 4, 5, 1) and (6, 8, 10, 2) are the same point in the homogeneous coordinate system; you can normalize (a, b, c, d) to (a/d, b/d, c/d, 1).
The easy answer is: just set the last component to 1.
Why does PCL CropBox need a Vector4f?
Because pcl::CropBox can handle an arbitrary box transformation via setTransform.
A transformation is usually represented as a 4x4 matrix, shown below, where r is a 3x3 rotation matrix and t is a 3D translation vector:
[[r0, r1, r2, t0],
[r3, r4, r5, t1],
[r6, r7, r8, t2],
[ 0, 0, 0, 1]]
It's simply easier to multiply the 4x4 matrix with a homogeneous (4x1) coordinate vector.
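For concreteness, here is a minimal sketch of the usual call pattern; the point type, box size, and sample points are arbitrary choices for illustration, not anything taken from the question:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/crop_box.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  cloud->push_back(pcl::PointXYZ(0.5f, 0.5f, 0.5f));  // inside the box
  cloud->push_back(pcl::PointXYZ(5.0f, 5.0f, 5.0f));  // outside the box

  pcl::CropBox<pcl::PointXYZ> crop;
  crop.setInputCloud(cloud);
  // x, y, z limits plus the homogeneous coordinate, which is simply left at 1
  crop.setMin(Eigen::Vector4f(-1.0f, -1.0f, -1.0f, 1.0f));
  crop.setMax(Eigen::Vector4f( 1.0f,  1.0f,  1.0f, 1.0f));

  pcl::PointCloud<pcl::PointXYZ> cropped;
  crop.filter(cropped);  // keeps only the first point
  return 0;
}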
Feel free to ask more questions; I can update this answer.

Related

Scale 3D-Points in Plane

I have some points (3D) all on the same (known) plane. Now I want to scale these points within the plane as opposed to the whole 3D space.
Is there some quick solution for this e.g. a modified scaling matrix?
Can someone help me?
Thanks.
EDIT: I'm looking more for an idea/pseudocode for how to do this. If you want, use MATLAB or some other convenient language.
Your plane can be known by three non-collinear points P0, P1, P2, or by its implicit equation,
A.x + B.y + C.z + D = 0
In the first case, consider the vector P0P1 and normalize it (U = P0P1/|P0P1|). Then compute a second vector orthogonal to the first, V = P0P2 - (P0P2.U).U, and normalize it.
In the second case you can take the three intersection points with the axes, (-D/A, 0, 0), (0, -D/B, 0), (0, 0, -D/C) and you are back in the first case (but mind degenerate cases).
Use the two vectors to compute the desired 2D coordinates of any point P = (X, Y, Z) by the dot products
(x, y) = (P.U, P.V)
(This transform is a rotation that makes P0P1 parallel to the x axis and brings the plane of P0P1P2 into the xy plane.)
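As a rough sketch of how this turns into the scaling asked about (the Vec3 type and the scaleInPlane helper are made up for illustration; scaling here is uniform about P0 by a factor s):

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a) { return mul(a, 1.0 / std::sqrt(dot(a, a))); }

// Scale point P by factor s within the plane through P0, P1, P2, keeping P0 fixed.
Vec3 scaleInPlane(Vec3 P, Vec3 P0, Vec3 P1, Vec3 P2, double s)
{
    Vec3 U = normalize(sub(P1, P0));                    // first in-plane axis
    Vec3 W = sub(P2, P0);
    Vec3 V = normalize(sub(W, mul(U, dot(W, U))));      // second axis, orthogonal to U
    double x = dot(sub(P, P0), U);                      // 2D coordinates in the plane
    double y = dot(sub(P, P0), V);
    return add(P0, add(mul(U, s * x), mul(V, s * y)));  // scale and map back to 3D
}

int main()
{
    // Example: scale {1, 1, 0} by 2 within the z = 0 plane, about the origin; result is {2, 2, 0}.
    Vec3 q = scaleInPlane({1, 1, 0}, {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, 2.0);
    (void)q;
    return 0;
}

The same U, V pair also lets you scale by different factors along each in-plane axis if the scaling doesn't have to be uniform.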

Vector Projection Confusion

If I have a vector (0, 0, 9) and I want to project it onto the vector (0, 0.7, 0.7), shouldn't that give me a vector of (0, 9, 9)?
I am using the following formula
Vector3.Dot (vector, normal) * normal.magnitude * normal;
which is returning (0, 0.45, 0.45). What have I missed? Isn't the returned vector supposed to end at the same z position as the vector being projected?
When you project a vector onto another vector, the result's magnitude is always less than or equal to the magnitude of the original vector.
Given a normalized vector u, project v onto u with the formula:
proj_u v = ⟨u, v⟩ u
Or, in code,
Vector3.Dot(vector, normal) * normal
The result of projecting (0, 0, 9) onto (0, 1/√2, 1/√2) is (0, 4.5, 4.5).
Note that normal.magnitude is always 1, unless normal is not "normal" (in which case you named your variable wrong); and if it's not normal, you need to divide by the squared magnitude, not multiply.
Vector3.Dot(vector, direction) * direction
    * (1.0f / direction.sqrMagnitude)
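The same idea as a small standalone sketch (plain C++ with a made-up Vec3 helper), using the general form (a·b / b·b)·b so it also works when b is not unit length:

#include <cstdio>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Projection of a onto b: (a . b / b . b) * b.
// If b is already unit length, the division by dot(b, b) changes nothing.
static Vec3 project(Vec3 a, Vec3 b)
{
    double k = dot(a, b) / dot(b, b);
    return {k * b.x, k * b.y, k * b.z};
}

int main()
{
    Vec3 p = project({0, 0, 9}, {0, 0.7071, 0.7071});
    std::printf("%g %g %g\n", p.x, p.y, p.z);  // roughly 0 4.5 4.5
    return 0;
}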

Solving a matrix differential equation with Mathematica

I need to solve this equation in Mathematica:
d/dx v(x) = A . v(x)
here v is the column vector {v1(x),v2(x),v3(x),v4(x)} and
A is a 4 x 4 matrix.
I want to solve for the functions v1, v2, v3, v4 with any initial conditions.
The range of x is from 0 to 1000.
How do I write Mathematica code for this type of differential equation using NDSolve?
So, if you have some horrible matrix
A = RandomReal[0.1, {4, 4}]; (* A horrible matrix *)
which we make anti-symmetric (so the solution is oscillatory)
A = A - Transpose@A;
Define the vector of functions and their initial conditions
v[x_] := {v1[x], v2[x], v3[x], v4[x]};
init = v[0] == RandomReal[1, 4]
Then the NDSolve command looks like
sol = NDSolve[LogicalExpand[v'[x] == A.v[x] && init],
{v1, v2, v3, v4}, {x, 0, 1000}]
And the solutions can be plotted with
Plot[Evaluate[v[x] /. sol], {x, 0, 1000}]
Note that the above differential equation is a linear, first-order equation with constant coefficients, so it is simply solved using a matrix exponential.
However, if the matrix A was a function of x, then analytic solutions become hard, but the numerical code stays the same.
For example, try:
A = RandomReal[1/10, {4, 4}] - Exp[-RandomReal[1/100, {4, 4}] x^2];
A = A - Transpose@A;
This can produce more interesting solutions.
I wanted to do the same with a matrix instead of the vector v. As long as the equation can be read correctly without knowing whether the symbol represents a vector or a matrix, NDSolve deduces its shape from the initial condition; however, when the dimensionality of a variable is explicit, as in:
M'[t]==a[t]*IdentityMatrix[2]+M[t]
it fails.
An "ordinary" solution is to define matrix explicitly and flatten it when giving as a list of variables.
However I omitted this issue (and many relatex syntax problems) just introducing a reduntant variable which only role is to be the identity matrix but without introducing a list (matrix is 2d list, so Mathematica acts as while adding lists to each other, generating the error):
eqn = {w'[t] == a[t]*identity[t] + w[t], a'[t] == 2, identity'[t] == {{0, 0}, {0, 0}}}
init={ w[0] == {{1, 2}, {2, 1}}, a[0] == 1, identity[0] == {{1, 0}, {0, 1}}}
sol = NDSolve[Join[eqn, init], {w, a, identity}, {t, 0, 1}]
Some evidence of work:
Plot[{Evaluate[w[t] /. sol][[1, 1, 1]], Evaluate[w[t] /. sol][[1, 1, 2]]}, {t, 0, 1}]
Try something like this (I do not have Mathematica on my home notebook :))
NDSolve[Thread[{v1'[x], v2'[x], v3'[x], v4'[x]} ==
   {{a11, a12, a13, a14}, {a21, a22, a23, a24}, {a31, a32, a33, a34}, {a41, a42, a43, a44}}.{v1[x], v2[x], v3[x], v4[x]}],
 {v1, v2, v3, v4}, {x, 0, 1000}]
PS: you can rewrite it in a different way, replacing the matrix equation with an explicit set of scalar equations, if you want:
{v1'[x]==a11*v1[x]+a12*v2[x]+a13*v3[x]+a14*v4[x], v2'[x]==a21*v1[x]+a22*v2[x]+a23*v3[x]+a24*v4[x], and so on...}

Vector transformations in OpenGL ES

Why are vector transformations done in reverse order in OpenGL ES? Is it because the vectors are stored in column-matrix form? It seems that they have made things unnecessarily difficult.
It probably has to do with the fact that vector transformations aren't commutative. Changing the order can give you a different result.
A simple thought experiment proves the point:
For a unit vector (1, 0, 0), a +90 degree rotation about the z-axis, followed by a +90 degree rotation about the x-axis results in a vector (0, 0, 1).
If you start with the +90 degree rotation about the x-axis, followed by the +90 degree rotation about the z-axis, you'll have a vector (0, 1, 0).
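A quick sketch that reproduces this thought experiment with plain 3x3 rotation matrices (no OpenGL involved; the helper names are made up), just to show that swapping the order changes the result:

#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

static Mat3 rotX(double a) { return {{{1, 0, 0}, {0, std::cos(a), -std::sin(a)}, {0, std::sin(a), std::cos(a)}}}; }
static Mat3 rotZ(double a) { return {{{std::cos(a), -std::sin(a), 0}, {std::sin(a), std::cos(a), 0}, {0, 0, 1}}}; }

static Vec3 apply(const Mat3& m, const Vec3& v)
{
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

int main()
{
    const double a = std::acos(0.0);  // +90 degrees in radians
    const Vec3 v{1, 0, 0};
    Vec3 zx = apply(rotX(a), apply(rotZ(a), v));  // z first, then x: approx (0, 0, 1)
    Vec3 xz = apply(rotZ(a), apply(rotX(a), v));  // x first, then z: approx (0, 1, 0)
    std::printf("(%g, %g, %g) vs (%g, %g, %g)\n", zx[0], zx[1], zx[2], xz[0], xz[1], xz[2]);
    return 0;
}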

Compute average distance from point to line segment and line segment to line segment

I'm searching for an algorithm to calculate the average distance between a point and a line segment in 3D. So given two points A(x1, y1, z1) and B(x2, y2, z2) that represent line segment AB, and a third point C(x3, y3, z3), what is the average distance between each point on AB to point C?
I'm also interested in the average distance between two line segments. So given segment AB and CD, what is the average distance from each point on AB to the closest point on CD?
I haven't had any luck with the web searches I've tried, so any suggestions would be appreciated.
Thanks.
First, the distance between two points is the square root of the sum of the squares of the pairwise differences of the coordinates.
(For example, the distance from (0,0,0) to (1,1,1) is sqrt(3) but this works for arbitrary points in any number of dimensions.)
This distance is known as the l2-norm (lower-case L) or Euclidean norm.
Write norm(A,B) for the distance between points A and B.
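In code this is only a few lines; a minimal 3D sketch (not taken from any of the answers):

#include <cmath>

// Euclidean (l2) distance between two 3D points A and B.
static double norm(const double A[3], const double B[3])
{
    const double dx = A[0] - B[0], dy = A[1] - B[1], dz = A[2] - B[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}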
On to the interesting problem of average distances...
(Note that finding the minimum distance from a point to a line or between line segments is a much more common problem. There was an answer here with good pointers for that problem but it seems it was deleted in the meantime.)
To find the average distance from a point C to a line segment AB, consider the distance to an arbitrary point between A and B, namely (1-k)A+kB where k ranges from 0 to 1.
That's norm(C, (1-k)A+kB).
So the average distance is the integral from k = 0 to 1 of norm(C, (1-k)A+kB).
Mathematica can do that integral for any specific A, B, and C.
Here's a Mathematica implementation:
avgd[A_,B_,C_] := Integrate[Sqrt@Dot[(1-k)*A+k*B-C, (1-k)*A+k*B-C], {k, 0, 1}]
The integrand can also be written Norm[(1-k)*A+k*B-C]. Either way, Mathematica can do it for specific points but can't integrate it symbolically, though apparently David got it to do so somehow.
Here's David's example from the comments:
> avgd[{0, 0, 0}, {4, 0, 0}, {4, 3, 0}] // N
3.73594
For the problem of the average distance between two line segments, in theory I think this should work:
avgd[A_,B_,C_,D_] := Integrate[Norm[(1-k)A+k*B - (1-j)C - j*D], {k,0,1}, {j,0,1}]
But Mathematica seems to choke on that even for specific points, let alone symbolically.
If you mean what I think you mean by "average" (and "distance," i.e. the L2 norm mentioned by dreeves), here's a procedure that I think should work for finding the average distance between a point and a line segment. You'll need a function dot(A,B) which takes the dot product of two vectors.
// given vectors (points) A, B, C
K1 = dot(A-C,A-C)
K2 = 2*dot(B-A,A-C)
K3 = dot(B-A,B-A)
L1 = sqrt(K3*(K1+K2+K3))
L2 = sqrt(K3*K1)
N = 4*K3*L1 + 2*K2*(L1-L2) + (K2*K2-4*K1*K3)*log((K2+2*L2)/(2*K3+K2+2*L1))
D = N / (8*K3^1.5)
Assuming I've transcribed everything correctly, D will then be the average distance.
This is basically just pseudocode for evaluating the result of an integral that I did in Mathematica. There may be some neat computational shortcut for this but if there is, I don't know it. (And unless there is one, I'd question how much you really need to do this computation)
If you want to find the average distance from the closest point on a line segment CD to all points on AB, in most cases the closest point will be either C or D so you can just check both of those to see which is closer (probably using some minimum-distance calculation as referenced in other answers). The only exception is when CD and AB are parallel and you could run a perpendicular from one to the other, in which case you'd have to define your requirements more precisely.
If you wanted to find the average distance between all points on CD and all points on AB... it could be done with a double integral, though I shudder to think how complicated the resulting formula would be.
Well, if analysis fails, reach for a computer and do a silly amount of calculation until you get a feel for the numbers ...
I too have a copy of Mathematica. To keep things simple, since a triangle must lie in a plane, I've worked the following in 2D space. To keep things extra simple, I specify a point at {0,0} and a line segment from {1,0} to {0,1}. The average distance from point to line must be, if it is meaningful, the average length of all the lines which could be drawn from {0,0} to anywhere on the line segment. Of course, there are an awful lot of such lines, so let's start with, say, 10. In Mathematica this might be computed as
Mean[Table[EuclideanDistance[{0, 0}, {1 - k, 0 + k}], {k, 0, 1, 10.0^-1}]]
which gives 0.830255. The next step is obvious, make the number of lines I measure larger. In fact, let's make a table of averages as the exponent of 10.0 gets smaller (they're negative !). In Mathematica:
Table[Mean[Table[EuclideanDistance[{0, 0}, {1 - k, 0 + k}], {k, 0, 1,
10.0^-i}]], {i, 0, 6}]
which produces:
{1, 0.830255, 0.813494, 0.811801, 0.811631, 0.811615, 0.811613}
Following this approach I re-worked @Dave's example (ignoring the third dimension):
Table[Mean[Table[EuclideanDistance[{0, 0}, {4, 0 + 3 k}], {k, 0, 1,
10.0^-i}]], {i, 0, 6}]
which gives:
{9/2, 4.36354, 4.34991, 4.34854, 4.34841, 4.34839, 4.34839}
This does not agree with what @dreeves says @Dave's algorithm computes. (Note, though, that this reworking measures distances from {0, 0} to the segment from {4, 0} to {4, 3}, which is not the same configuration as @Dave's example of the point {4, 3, 0} and the segment from {0, 0, 0} to {4, 0, 0}.)
EDIT: OK, so I've wasted some more time on this. For the simple example I used in the first place, that is, with a point at {0,0} and a line segment extending from {0,1} to {1,0}, I define a function in Mathematica (as ever), like this:
fun2[k_] := EuclideanDistance[{0, 0}, {0 + k, 1 - k}]
Now, this is integratable. Mathematica gives:
In[13]:= Integrate[fun2[k], {k, 0, 1}]
Out[13]= 1/4 (2 + Sqrt[2] ArcSinh[1])
Or, if you'd rather have numbers, this:
In[14]:= NIntegrate[fun2[k], {k, 0, 1}]
Out[14]= 0.811613
which is what the purely numerical approach I took earlier gives.
I'm now going to get back to work, and leave it to you all to generalise this to an arbitrary triangle defined by a point and the end-points of a line segment.
