I am currently trying to understand the calculation of the mean curvature for a 3D surface, where one coordinate is a function of the other two coordinates.
Looking at Wikipedia (https://en.wikipedia.org/wiki/Mean_curvature#Surfaces_in_3D_space), under "For the special case of a surface defined as a function of two coordinates, e.g. z = S(x,y)", they give this formula:
2H = div( grad(z - S) / |grad(z - S)| )
   = div( (-S_x, -S_y, 1) / sqrt(1 + S_x^2 + S_y^2) )
What I don't understand here is the grad(z - S). If z = S(x,y), then I would think that z is the same as S and thus z - S equals 0.
I tried to follow the cited literature, but I didn't find what I was looking for.
Apparently I am misunderstanding something here, and z is not the same as S?
Any help would be appreciated.
z-S(x,y) is a function of 3 variables, the gradient of which is (-S_x,-S_y,1), see the second line. Then you normalize this gradient vector and compute the divergence of the normalized vector field.
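For what it's worth, here is a minimal SymPy sketch (my own illustration, not from the question or the answer) that carries out this recipe for an arbitrary example surface S = x^2 + y^2, treating z - S as a function of all three variables exactly as described above:

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
S = x**2 + y**2                                      # example surface z = S(x, y)

F = z - S                                            # level-set function z - S(x, y)
g = sp.Matrix([sp.diff(F, v) for v in (x, y, z)])    # gradient = (-S_x, -S_y, 1)
n = g / sp.sqrt(g.dot(g))                            # normalized gradient

# divergence of the normalized field gives 2H; n[2] has no z-dependence,
# so the third term contributes nothing
two_H = sp.simplify(sum(sp.diff(n[i], v) for i, v in enumerate((x, y, z))))
print(two_H)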
I want to optimize a function for a multivariate surface given a range of values for one variable.
For example, take the equation for the following quadratic surface:
z = x + x^2 + xy + y^2 + y.
How would I find values of y that maximize z given all possible values of x? The result should be a line along the surface that maximizes z at every value of x.
I have found a lot of resources online that explain how to find maxima and minima, as well as saddle points, but I am not sure if that approach will be relevant - the slope of the surface along that line will usually not be 0, so I don't think it makes sense to use derivatives here.
I am new to calculus and mathematical optimization. I would be thrilled if someone would point me to a resource that could help me out with this problem.
Thank you!
Given a set of coordinates corresponding to a closed shape, I want to calculate the total absolute curvature, which requires calculating the curvature for each point, taking the absolute value, and summing them. Simple enough.
I used the answer to this question to calculate the curvature from a matrix of x y coordinates (xymat) and get what I thought would be the total absolute curvature:
sum(abs(predict(smooth.spline(xymat), deriv = 2)$y))
The problem is that total absolute curvature has a minimum value of 2*pi and is exactly that for circles, but this code is evaluating to values less than 2*pi:
library(purrr)
xymat <- map_df(data.frame(degrees=seq(0:360)),
function(theta) data.frame(x = sin(theta), y = cos(theta)))
sum(abs(predict(smooth.spline(xymat), deriv = 2)$y))
This returns 1.311098 instead of the expected value of 6.283185.
If I change the df parameter of smooth.spline to 3 as in the previous answer, the returned value is 3.944053, still shy of 2*pi (the df value smooth.spline calculated for itself was 2.472213).
Is there a better way to calculate curvature? Is smooth.spline parameterized by arc length or will incorporating it (somehow) rescue this calculation?
Okay, a few things before we begin. You're using degrees in your seq (0 to 360 degrees), but R's trig functions work in radians, so this will give you incorrect results. You can check that this is wrong by taking cos(360) in R, which isn't 1. This is explained in the documentation for the trig functions under Details.
So let's change your function to this
xymat <- map_df(data.frame(degrees=seq(0,2*pi,length=360)),
function(theta) data.frame(x = sin(theta), y = cos(theta)))
If you plot this, this indeed looks like a circle.
Let's actually restrict this to the lower half of the circle. If you put a spline through this without understanding the symmetry and looking at the plot, chances are that you'll get a horizontal line through the circle.
Why? Because the spline doesn't know that it's symmetric above and below y = 0. The spline is trying to fit a function that explains the "data", not trace an arc. It splits the difference between the two symmetric sets of points around y = 0.
If we restrict the spline to the lower half of the circle, i.e. the points with theta between pi/2 and 3*pi/2 (rows 91 to 270, where x runs from 1 down to -1), like this:
lower.semicircle <- data.frame(predict(smooth.spline(xymat[91:270,], all.knots = T)))
And let's fit a spline through it.
lower.semicircle.pred<-data.frame(predict(smooth.spline(lower.semicircle, all.knots = T)))
Note that I'm not using the deriv function here. That is for a different problem in the cars example to which you linked. You want total absolute curvature and they are looking at rate of change of curvature.
What we have now is an approximation to a lower semicircle using splines. Now you want the distance between all of the little sequential points like in the integral from the wikipedia page.
Let's calculate all of the little arc distances using a distance matrix. This literally calculates the Euclidean distance between each point and every other point.
all.pairwise.distances.in.the.spline.approx <- dist(lower.semicircle.pred, diag = F)
dist.matrix <- as.matrix(all.pairwise.distances.in.the.spline.approx)
# the subdiagonal holds the distance from each point to the next point along the curve
seq.of.distances.you.want <- dist.matrix[row(dist.matrix) == col(dist.matrix) + 1]
This last object is what you need to sum across.
sum(seq.of.distances.you.want)
...which evaluates to [1] 3.079 for the lower semicircle, around half of your 2*pi expected value.
It's not perfect, but splines have problems with edge effects.
I am working on a project of interpolating sample data {(x_i, y_i)}, where the inputs x_i lie in 4D space and the outputs y_i lie in 3D space. I need to generate two look-up tables, one for each direction. I managed to generate the 4D -> 3D table, but the 3D -> 4D one is tricky. The sample data are not on regular grid points, and the mapping is not one-to-one. Is there any known method to treat this situation? I did some searching online, but what I found only covers 3D -> 3D mappings, which is not suitable for this case. Thank you!
To answer the questions of Spektre:
X(3D) -> Y(4D) is the case 1X -> nY
I want to generate a table such that for any given X we can find the value of Y. The sample data do not occupy the whole domain of X, but that is fine; we only need accuracy for points inside the domain of the sample data. For example, we have a sample like {(x1,x2,x3) -> (y1,y2,y3,y4)}. It is possible that we also have a sample {(x1,x2,x3) -> (y1_1,y2_1,y3_1,y4_1)} for the same input, but that is OK. We need a table in which any (a,b,c) in space X corresponds to ONE (e,f,g,h) in space Y. There might be more than one choice, but we only need one. (Sorry if the symbols are confusing.)
One possible way to deal with this: since I have already established a smooth mapping from Y -> X, I can use Newton's method or any other method to reverse-search the point y for any given x. But it is not accurate enough and is time consuming, because I need to do a search for each point in the table, and the error is the sum of the model error and the search error.
So I want to know whether it is possible to find a mapping that interpolates the sample data directly, instead of doing the kind of search described in point 3.
You are looking for projections/mappings
As you mentioned, you have a projection X(3D) -> Y(4D) which is not one-to-one in your case, so which case is it: (1 X -> n Y), (n X -> 1 Y), or (n X -> m Y)?
You want to use a look-up table
I assume you just want to generate all X for a given Y. The problem with non-(1 to 1) mappings is that you can use a look-up table only if it has
all valid points
or the mapping has some geometric or mathematical symmetry (for example, the distances between points in X and Y space are similar, and the mapping is continuous).
You cannot interpolate between generic mapped points, so the question is what kind of mapping/projection you have in mind.
First, the 1->1 projections/mappings and interpolation
if your X -> Y projection mapping is suitable for interpolation
then for 3D -> 4D use tri-linear interpolation. Find the closest 8 points (one in each direction along each axis, forming a grid cube) and interpolate between them along all 3 input dimensions (for each of the 4 output components).
if your X <- Y projection mapping is suitable for interpolation
then for 4D -> 3D use quatro-linear interpolation. Find the closest 16 points (forming a grid hypercube) and interpolate between them along all 4 input dimensions (for each of the 3 output components).
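Not part of the original answer, but if your samples do happen to lie on a regular 3D grid (the question says they generally do not), the tri-linear case can be done directly with SciPy; a short sketch with made-up gridded data, building one interpolator per Y component:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# hypothetical gridded samples: Y has shape (len(x1), len(x2), len(x3), 4)
x1 = x2 = x3 = np.linspace(0.0, 1.0, 5)
X1, X2, X3 = np.meshgrid(x1, x2, x3, indexing='ij')
Y = np.stack([X1 + X2, X2 * X3, X3 - X1, X1 * X2 * X3], axis=-1)

# one tri-linear interpolator per output component
interps = [RegularGridInterpolator((x1, x2, x3), Y[..., k]) for k in range(4)]
query = np.array([[0.3, 0.7, 0.1]])
print([f(query)[0] for f in interps])      # interpolated 4D output at the query point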
Now what about 1->n or n->m projections/mappings
That depends solely on the projection/mapping properties, of which I know nothing. Try to provide an example of your datasets; adding an image would be best.
[edit1] 1 X <- n Y
I would still use quatro-linear interpolation. You will still need to search your Y table, but if you group it like a 4D grid then it should be easy enough.
find the 16 closest points in the Y-table to your input Y point
These points should be the closest points to your Y in the +/- direction along every axis. In 3D it looks like this:
the red point is your input Y point
the blue points are the found closest points (the grid); they do not need to be as symmetric as in the image.
Please do not ask me to draw a 4D example that makes sense :) (at least not for a sober mind)
interpolation
Find the corresponding X points. If there is more than one per point, choose the one closer to the others ... Now you should have 16 X points and 16+1 Y points. Then, from the Y points, you just need to calculate the distances along the lines from your input Y point. These distances are used as the parameters for the linear interpolations. Normalize them to <0,1>, where
0 means 'left' and 1 means 'right' point
0.5 means exact middle
You will need this scalar distance in each Y-domain dimension. Now just compute all the X points along the linear interpolations until you get the corresponding red point in the X-domain.
With tri-linear interpolation (3D) there are 4+2+1 = 7 linear interpolations (as in the image). For quatro-linear interpolation (4D) there are 8+4+2+1 = 15 linear interpolations.
linear interpolation
X = X0 + (X1-X0)*t
X is interpolated point
X0,X1 are the 'left','right' points
t is the distance parameter <0,1>
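To make the counting above concrete, here is a rough Python sketch (mine, not Spektre's code) that chains the X = X0 + (X1-X0)*t rule into the 4+2+1 = 7 interpolations of the tri-linear (3D) case; the quatro-linear (4D) case just adds one more reduction step:

import numpy as np

def lerp(x0, x1, t):
    # the rule above: X = X0 + (X1 - X0)*t
    return x0 + (x1 - x0) * t

def trilerp(corners, t):
    # corners: shape (2, 2, 2, k) array of X values at the cell corners
    # t: (t0, t1, t2) normalized distances in <0,1> along each axis
    c = np.asarray(corners, dtype=float)
    c = lerp(c[0], c[1], t[0])       # 4 interpolations along the first axis
    c = lerp(c[0], c[1], t[1])       # 2 interpolations along the second axis
    return lerp(c[0], c[1], t[2])    # 1 final interpolation along the third axis

# example: corners of the unit cube carrying their own coordinates as values
grid = np.array([[[[i, j, k] for k in (0, 1)] for j in (0, 1)] for i in (0, 1)], dtype=float)
print(trilerp(grid, (0.5, 0.25, 0.75)))   # -> [0.5  0.25 0.75]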
I have two 3D vectors called A and B that each hold only a 3D position. I know how to find the angle along the unit circle ranging from 0-360 degrees with the atan2 function by doing:
EDIT: (my atan2 function made no sense, now it should find the "y-angle" between 2 vectors):
toDegrees(atan2(A.x-B.x,A.z-B.z))+180
But that gives me the Y angle between the 2 vectors.
I need to find the X angle between them. It has to do with using the x, y and z position values. Not the x and z only, because that gives the Y angle between the two vectors.
I need the X angle; I know it sounds vague, but I don't know how else to explain it. Maybe, for example, you have a camera in 3D space: if you look up or down, you rotate around the x-axis. But now I need to get the "up/down" angle between the 2 vectors. If I rotate that 3D camera around the y-axis, the x-axis doesn't change. So with the 2 vectors, no matter what the "y-angle" is between them, the x-angle between the 2 vectors will stay the same if the y-angle changes, because it's the "up/down" angle, like in the camera.
Please help? I just need a line of math/pseudocode, or explanation. :)
atan2(crossproduct.length,scalarproduct)
The reason for using atan2 instead of arccos or arcsin is accuracy. arccos behaves very badly close to 0 degrees: small computation errors in the argument lead to disproportionately big errors in the result. arcsin has the same problem close to 90 degrees.
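Not from the answer itself, but a small NumPy sketch of the atan2(|cross product|, dot product) recipe, in case a concrete version helps:

import numpy as np

def angle_deg(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    cross = np.linalg.norm(np.cross(a, b))   # length of the cross product
    dot = np.dot(a, b)                       # scalar (dot) product
    return np.degrees(np.arctan2(cross, dot))

print(angle_deg([1, 0, 0], [1, 1, 0]))       # 45.0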
Computing the altitude angle
OK, it might be I finally understood your comment below about the result being independent of the y angle, and about how it relates to the two vectors. It seems you are not really interested in two vectors and the angle between these two, but instead you're interested in the difference vector and the angle that one forms against the horizontal plane. In a horizontal coordinate system (often used in astronomy), that angle would be called “altitude” or “elevation”, as opposed to the “azimuth” you compute with the formula in your (edited) question. “altitude” closely relates to the “tilt” of your camera, whereas “azimuth” relates to “panning”.
We still have a 2D problem. One coordinate of the 2D vector is the y coordinate of the difference vector. The other coordinate is the length of the vector after projecting it on the horizontal plane, i.e. sqrt(x*x + z*z). The final solution would be
x = A.x - B.x
y = A.y - B.y
z = A.z - B.z
alt = toDegrees(atan2(y, sqrt(x*x + z*z)))
az = toDegrees(atan2(-x, -z))
The order (A - B as opposed to B - A) was chosen such that “A above B” yields a positive y and therefore a positive altitude, in accordance with your comment below. The minus signs in the azimuth computation above should replace the + 180 in the code from your question, except that the range now is [-180, 180] instead of your [0, 360]. Just to give you an alternative, choose whichever you prefer. In effect you compute the azimuth of B - A either way. The fact that you use a different order for these two angles might be somewhat confusing, so think about whether this really is what you want, or whether you want to reverse the sign of the altitude or change the azimuth by 180°.
Orthogonal projection
For reference, I'll include my original answer below, for those who are actually looking for the angle of rotation around some fixed x axis, the way the original question suggested.
If this x angle you mention in your question is indeed the angle of rotation around the x axis, as the camera example suggests, then you might want to think about it this way: set the x coordinate to zero, and you will end up with 2D vectors in the y-z plane. You can think of this as an orthogonal projection onto said plane. Now you are back to a 2D problem and can tackle it there.
Personally I'd simply call atan2 twice, once for each vector, and subtract the resulting angles:
toDegrees(atan2(A.z, A.y) - atan2(B.z, B.y))
The x=0 is implicit in the above formula simply because I only operate on y and z.
I haven't fully understood the logic behind your single atan2 call yet, but the fact that I have to think about it this long indicates that I wouldn't want to maintain it, at least not without a good explanatory comment.
I hope I understood your question correctly, and this is the thing you're looking for.
Just like with 2D vectors, you calculate the angle from the cosine given by their dot product.
You don't need atan; you can always go for the dot product, since it is a fundamental operation on vectors, and then use acos to get the angle. With dot and length standing for the usual dot-product and vector-length helpers:
double cosTheta = dot(A, B) / (length(A) * length(B));
double angleInDegrees = acos(cosTheta) * 180.0 / PI;
I have a set of points (with unknown coordinates) and the distance matrix. I need to find the coordinates of these points in order to plot them and show the solution of my algorithm.
I can set one of these points at the coordinate (0,0) to simplify things, and find the others. Can anyone tell me if it's possible to find the coordinates of the other points, and if yes, how?
Thanks in advance!
EDIT
Forgot to say that I need the coordinates in x-y only.
The answers based on angles are cumbersome to implement and can't be easily generalized to data in higher dimensions. A better approach is that mentioned in my and WimC's answers here: given the distance matrix D(i, j), define
M(i, j) = 0.5*(D(1, j)^2 + D(i, 1)^2 - D(i, j)^2)
which should be a positive semi-definite matrix with rank equal to the minimal Euclidean dimension k in which the points can be embedded. The coordinates of the points can then be obtained from the k eigenvectors v(i) of M corresponding to non-zero eigenvalues q(i): place the vectors sqrt(q(i))*v(i) as columns in an n x k matrix X; then each row of X is a point. In other words, sqrt(q(i))*v(i) gives the ith component of all of the points.
The eigenvalues and eigenvectors of a matrix can be obtained easily in most programming languages (e.g., using GSL in C/C++, using the built-in function eig in Matlab, using Numpy in Python, etc.)
Note that this particular method always places the first point at the origin, but any rotation, reflection, or translation of the points will also satisfy the original distance matrix.
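If it helps, here is a minimal NumPy sketch of that recipe (my own illustration; the small 3-4-5 triangle matrix at the end is just a test case):

import numpy as np

def coords_from_distances(D, k=2):
    D = np.asarray(D, dtype=float)
    # M(i, j) = 0.5*(D(1, j)^2 + D(i, 1)^2 - D(i, j)^2), written with 0-based indexing
    M = 0.5 * (D[0, :]**2 + D[:, 0:1]**2 - D**2)
    q, V = np.linalg.eigh(M)            # eigenvalues in ascending order
    q, V = q[::-1], V[:, ::-1]          # largest eigenvalues first
    q = np.clip(q[:k], 0.0, None)       # guard against tiny negative eigenvalues
    return V[:, :k] * np.sqrt(q)        # each row is a recovered point

# example: a 3-4-5 right triangle; the result matches it up to rotation/reflection,
# with the first point at the origin
D = np.array([[0, 3, 4],
              [3, 0, 5],
              [4, 5, 0]], dtype=float)
print(coords_from_distances(D))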
Step 1, arbitrarily assign one point P1 as (0,0).
Step 2, arbitrarily assign a second point P2 along the positive x axis: (Dp1p2, 0).
Step 3, find a point P3 that is not collinear with P1 and P2, i.e. one for which none of the following (approximately) hold:
Dp1p2 ~= Dp1p3 + Dp2p3
Dp1p3 ~= Dp1p2 + Dp2p3
Dp2p3 ~= Dp1p3 + Dp1p2
and place that point in the "positive" y half-plane (if it meets any of these criteria, the point lies on the P1P2 axis and should be placed there).
Use the law of cosines to determine the angle A at P1:
cos (A) = (Dp1p2^2 + Dp1p3^2 - Dp2p3^2)/(2*Dp1p2* Dp1p3)
P3 = (Dp1p3 * cos (A), Dp1p3 * sin(A))
You have now successfully fixed an orthonormal coordinate frame and placed three points in it.
Step 4: To determine each of the other points Pn, repeat step 3 to get a tentative coordinate (Xn, Yn).
Compare the distance between (Xn, Yn) and (X3, Y3) to Dp3pn in your matrix. If they match, you have successfully identified the coordinates of point n. Otherwise, point n is at (Xn, -Yn).
Note there is an alternative to step 4, but it is too much math for a Saturday afternoon.
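Here is a rough Python sketch of steps 1-4 (my own, not tested against noisy or degenerate inputs; the 4-point example matrix describes a 3-by-4 rectangle):

import numpy as np

def place_points(D):
    D = np.asarray(D, dtype=float)
    n = len(D)
    P = np.zeros((n, 2))
    P[1] = (D[0, 1], 0.0)                  # step 2: P2 on the positive x-axis
    for i in range(2, n):
        d1, d2 = D[0, i], D[1, i]
        # step 3: law of cosines for the angle at P1 between P1->P2 and P1->Pi
        cosA = (D[0, 1]**2 + d1**2 - d2**2) / (2 * D[0, 1] * d1)
        cosA = np.clip(cosA, -1.0, 1.0)
        x, y = d1 * cosA, d1 * np.sqrt(1.0 - cosA**2)
        if i == 2:
            P[i] = (x, y)                  # P3 goes into the positive y half-plane
        else:
            # step 4: choose the sign of y that reproduces the distance to P3
            err_pos = abs(np.hypot(x - P[2, 0], y - P[2, 1]) - D[2, i])
            err_neg = abs(np.hypot(x - P[2, 0], -y - P[2, 1]) - D[2, i])
            P[i] = (x, y) if err_pos <= err_neg else (x, -y)
    return P

D = np.array([[0, 3, 4, 5],
              [3, 0, 5, 4],
              [4, 5, 0, 3],
              [5, 4, 3, 0]], dtype=float)
print(place_points(D))                     # (0,0), (3,0), (0,4), (3,4)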
If for points p, q, and r you have pq, qr, and rp in your matrix, you have a triangle.
Wherever you have a triangle in your matrix you can compute one of two solutions for that triangle (independent of a Euclidean transform of the triangle on the plane). That is, for each triangle you compute, its mirror image is also a triangle that satisfies the distance constraints on p, q, and r. The fact that there are two solutions even for a triangle leads to the chirality problem: you have to choose the chirality (orientation) of each triangle, and not all choices may lead to a feasible solution to the problem.
Nevertheless, I have some suggestions. If the number of entries is small, consider using simulated annealing. You could incorporate chirality into the annealing step. This will be slow for large systems, and it may not converge to a perfect solution, but for some problems it's the best you can do.
The second suggestion will not give you a perfect solution either, but it will distribute the error: the method of least squares. In your case the objective function will be the error between the distances in your matrix and the actual distances between your points.
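A least-squares fit along those lines could look like this (my own sketch using scipy.optimize.least_squares; the small matrix is only an illustration, and the recovered layout is determined only up to rotation, reflection and translation):

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.distance import pdist

def residuals(flat_xy, D):
    # difference between the candidate pairwise distances and the given ones
    P = flat_xy.reshape(-1, 2)
    return pdist(P) - D[np.triu_indices(len(P), k=1)]

D = np.array([[0, 3, 4],
              [3, 0, 5],
              [4, 5, 0]], dtype=float)
rng = np.random.default_rng(0)
x0 = rng.normal(size=(len(D), 2)).ravel()   # random starting layout
fit = least_squares(residuals, x0, args=(D,))
print(fit.x.reshape(-1, 2))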
This is a math problem: deriving the coordinate matrix X given only its distance matrix.
However, there is an efficient solution to this -- multidimensional scaling (MDS), which does some linear algebra. Simply put, it requires a pairwise Euclidean distance matrix D, and the output is an estimated coordinate matrix Y (perhaps rotated), which is an approximation to X. For programming purposes, just use sklearn.manifold.MDS in Python.
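A minimal sketch of that, assuming scikit-learn is installed (the 3-by-3 matrix is only an illustration):

import numpy as np
from sklearn.manifold import MDS

D = np.array([[0, 3, 4],
              [3, 0, 5],
              [4, 5, 0]], dtype=float)
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
Y = mds.fit_transform(D)    # estimated coordinates, up to rotation/reflection/translation
print(Y)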
The "eigenvector" method given by the favourite replies above is very general and automatically outputs a set of coordinates as the OP requested, however I noticed that that algorithm does not even ask for a desired orientation (rotation angle) for the frame of the output points, the algorithm chooses that orientation all by itself!
People who use it might want to know at what angle the frame will be tipped before hand so I found an equation which gives the answer for the case of up to three input points, however I have not had time to generalize it to n-points and hope someone will do that and add it to this discussion. Here are the three angles the output sides will form with the x-axis as a function of the input side lengths:
angle side a = arcsin(sqrt(((c+b+a)*(c+b-a)*(c-b+a)*(-c+b+a)*(c^2-b^2)^2)/(a^4*((c^2+b^2-a^2)^2+(c^2-b^2)^2))))*180/Pi/2
angle side b = arcsin(sqrt(((c+b+a)*(c+b-a)*(c-b+a)*(-c+b+a)*(c^2+b^2-a^2)^2)/(4*b^4*((c^2+b^2-a^2)^2+(c^2-b^2)^2))))*180/Pi/2
angle side c = arcsin(sqrt(((c+b+a)*(c+b-a)*(c-b+a)*(-c+b+a)*(c^2+b^2-a^2)^2)/(4*c^4*((c^2+b^2-a^2)^2+(c^2-b^2)^2))))*180/Pi/2
Those equations also lead directly to a solution of the OP's problem of finding the coordinates of each point: the side lengths are already given by the OP as input, and my equations give the slope of each side with respect to the x-axis of the solution, thus revealing the vector for each side of the polygon answer; summing those sides by vector addition up to a desired vertex produces the coordinates of that vertex. So if anyone can extend my angle equations to handle more than three input lengths (though I note that might be impossible?), it could be a very fast way to the general solution of the OP's question, since the slow parts of the algorithms given above, such as "least squares fitting" or "matrix equation solving", might be avoidable.