I read the article on https://www.value-at-risk.net/cubic-spline-interpolation/
I understand all of it, but I don't know how to get the values for the matrix:
I know that there is something like h_i = x_{i+1} - x_i
I visited several websites and read different explanations, but I never found out how exactly to arrive at these values in the matrix.
The matrix is just the system of equations encoded in matrix form so it can be solved easily via the matrix inverse.
For example, the second line of the matrix, (8,4,2,1,0,0,0,0), after the matrix multiplication means this:
a3*2^3 + a2*2^2 + a1*2^1 + a0 = 5
which is your p1(2)=5. The lines are:
p1(1)=1
p1(2)=5
p2(2)=5
p2(3)=4
p1'(2)-p2'(2)=0
p1''(2)-p2''(2)=0
p1''(1)=0
p2''(3)=0
So, for example, the last matrix line, (0,0,0,0,18,2,0,0), is this:
18*b3 + 2*b2 = 0
If we differentiate p2(t) twice:
p2(t)   = b3*t^3 + b2*t^2 + b1*t + b0
p2'(t)  = 3*b3*t^2 + 2*b2*t + b1
p2''(t) = 2*3*b3*t + 1*2*b2 = 6*b3*t + 2*b2
Now for t=3 we get:
p2''(3) = 6*b3*3 + 2*b2 = 18*b3 + 2*b2
And encoded into the matrix (last line):
(0,0,0,0,18,2,0,0) * (a3,a2,a1,a0,b3,b2,b1,b0) = 0
which matches your example. I hope it is clear now.
Beware: this example of yours is just for the y axis. Since you have a 2D curve, you need to do the same for the x axis in the same manner.
Now let's rewrite the matrix equation again:
M*A=B
where M is your 8x8 matrix, A=(a3,a2,a1,a0,b3,b2,b1,b0) and B=(1,5,5,4,0,0,0,0). You can solve this like this:
inverse(M)*M*A = inverse(M)*B
A = inverse(M)*B
So you obtain A, which holds the coefficients of your p1, p2 polynomials, from B, which holds your positions (y coordinates in one pass and x coordinates in the next). That gives you the p1x, p1y, p2x, p2y polynomials, which is what you need for the interpolation.
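If it helps, here is a minimal sketch of that solve in Python/NumPy (my own code, not from the linked article), assembling the eight conditions listed above for the knots (1,1), (2,5), (3,4); np.linalg.solve is used rather than forming inverse(M) explicitly:

import numpy as np

# rows correspond, in order, to the eight conditions listed above
M = np.array([
    [ 1,  1, 1, 1,   0,  0,  0, 0],   # p1(1) = 1
    [ 8,  4, 2, 1,   0,  0,  0, 0],   # p1(2) = 5
    [ 0,  0, 0, 0,   8,  4,  2, 1],   # p2(2) = 5
    [ 0,  0, 0, 0,  27,  9,  3, 1],   # p2(3) = 4
    [12,  4, 1, 0, -12, -4, -1, 0],   # p1'(2)  - p2'(2)  = 0
    [12,  2, 0, 0, -12, -2,  0, 0],   # p1''(2) - p2''(2) = 0
    [ 6,  2, 0, 0,   0,  0,  0, 0],   # p1''(1) = 0
    [ 0,  0, 0, 0,  18,  2,  0, 0],   # p2''(3) = 0
], dtype=float)
B = np.array([1, 5, 5, 4, 0, 0, 0, 0], dtype=float)

A = np.linalg.solve(M, B)             # A = (a3, a2, a1, a0, b3, b2, b1, b0)
a3, a2, a1, a0, b3, b2, b1, b0 = A

Run it once with the y coordinates in B and once with the x coordinates, exactly as described above.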
However, this approach is a bit backward; usually predefined polynomial forms such as splines or Bézier curves, with defined properties like continuity, linearity, etc., are used instead (no need for a matrix inverse). But if you need custom properties, like in this example, then you do not have much of a choice.
For more info see How can i produce multi point linear interpolation?
I have a scalar function which is obtained by iterative calculations. I wish to differentiate (find the directional derivative of) the values with respect to a matrix, elementwise. How should I employ the finite-difference approximation in this case? Does diff or gradient help here? Note that I only want numerical derivatives.
The typical code that I would work on is:
n=4;
for i=1:n
for x(i)=-2:0.04:4;
for y(i)=-2:0.04:4;
A(:,:,i)=[sin(x(i)), cos(y(i));2sin(x(i)),sin(x(i)+y(i)).^2];
B(:,:,i)=[sin(x(i)), cos(x(i));3sin(y(i)),cos(x(i))];
R(:,:,i)=horzcat(A(:,:,i),B(:,:,i));
L(i)=det(B(:,:,i)'*A(:,:,i)B)(:,:,i));
%how to find gradient of L with respect to x(i), y(i)
grad_L=tr((diff(L)/diff(R)')*(gradient(R))
endfor;
endfor;
endfor;
I know that the last part for grad_L would give a syntax error saying the dimensions don't match. How do I proceed to solve this? Note that the gradient, or directional derivative, of a scalar function f of a matrix variable X is given by nabla(f) = trace((partial f / partial x_{ij}) * X_dot), where x_{ij} denotes the elements of the matrix and X_dot denotes the gradient of the matrix X.
Both your code and explanation are very confusing. You're using an iteration of n = 4, but you don't do anything with your inputs or outputs, and you overwrite everything, so I will ignore the n aspect for now since you don't seem to be making any use of it. Furthermore, you have many syntax errors which look more like maths or pseudocode rather than an attempt to write valid Matlab / Octave.
But essentially you seem to be asking: "I have a function which, for each (x,y) coordinate on a 2D grid, calculates a scalar output L(x,y)", where the calculation leading to L involves multiplying matrices together and taking the determinant of the product. Here's how to produce such an array L:
X = -2 : 0.04 : 4;
Y = -2 : 0.04 : 4;
X_indices = 1 : length(X);
Y_indices = 1 : length(Y);
for Ind_x = X_indices
  for Ind_y = Y_indices
    x = X(Ind_x);  y = Y(Ind_y);
    A = [sin(x), cos(y); 2 * sin(x), sin(x+y)^2];
    B = [sin(x), cos(x); 3 * sin(y), cos(x)    ];
    L(Ind_x, Ind_y) = det(B.' * A * B);
  end
end
You then want to obtain the gradient of L, which is of course a vector-valued output. To obtain this (ignoring the maths you mentioned for a second): if you're basically trying to use the gradient function correctly, then you just apply it directly to L, specify the grid vectors X and Y so that it knows the spacing between the elements of L, and collect the output as a two-element array, so that you capture both the x and y vector components of the gradient:
[gLx, gLy] = gradient(L, X, Y);
I'd like to implement image morphing, for which I need to be able to deform the image with a given set of points and their destination positions (where they will be "dragged" to). I am looking for a simple and easy solution that gets the job done; it doesn't have to look great or be extremely fast.
This is an example of what I need:
Let's say I have an image and a set of only one deforming point [0.5,0.5], which will have its destination at [0.6,0.5] (or we can say its movement vector is [0.1,0.0]). This means I want to move the very center pixel of the image by 0.1 to the right. Neighboring pixels within some given radius r need, of course, to be "dragged along" a little with this pixel.
My idea was to do it like this:
I'll make a function mapping the source image positions to destination positions depending on the deformation point set provided.
I will then have to find the inverse function of this function, because I have to perform the transformation by going through destination pixels and seeing "where the point had to come from to come to this position".
My function from step 1 looked like this:
p2 = p1 + ( 1 / ( (distance(p1,p0) / r)^2 + 1 ) ) * s
where
p0 ([x,y] vector) is the deformation point position.
p1 ([x,y] vector) is any given point in the source image.
p2 ([x,y] vector) is the position, to where p1 will be moved.
s ([x,y] vector) is movement vector of deformation point and says in which direction and how far p0 will be dragged.
r (scalar) is the radius, just some number.
I have a problem with step number 2. The calculation of the inverse function seems a little too complex to me, and so I wonder:
If there is an easy solution for finding the inverse function, or
if there is a better function for which finding the inverse function is simple, or
if there is an entirely different way of doing all this that is simple?
Here's the solution in Python. I did what Yves Daoust recommended and simply tried to use the forward function as the inverse function (switching the source and destination). I also altered the function slightly; changing the exponents and other values produces different results. Here's the code:
from PIL import Image
import math

def vector_length(vector):
    return math.sqrt(vector[0] ** 2 + vector[1] ** 2)

def points_distance(point1, point2):
    return vector_length((point1[0] - point2[0], point1[1] - point2[1]))

def clamp(value, minimum, maximum):
    return max(min(value, maximum), minimum)

## Warps an image according to given points and shift vectors.
#
#  @param image input image
#  @param points list of (x, y, dx, dy) tuples
#  @return warped image

def warp(image, points):
    result = Image.new("RGB", image.size, "black")
    image_pixels = image.load()
    result_pixels = result.load()

    for y in range(image.size[1]):
        for x in range(image.size[0]):
            offset = [0, 0]

            for point in points:
                point_position = (point[0] + point[2], point[1] + point[3])
                shift_vector = (point[2], point[3])
                helper = 1.0 / (3 * (points_distance((x, y), point_position) / vector_length(shift_vector)) ** 4 + 1)
                offset[0] -= helper * shift_vector[0]
                offset[1] -= helper * shift_vector[1]

            coords = (clamp(x + int(offset[0]), 0, image.size[0] - 1), clamp(y + int(offset[1]), 0, image.size[1] - 1))
            result_pixels[x, y] = image_pixels[coords[0], coords[1]]

    return result

image = Image.open("test.png")
image = warp(image, [(210, 296, 100, 0), (101, 97, -30, -10), (77, 473, 50, -100)])
image.save("output.png", "PNG")
You don't need to construct the direct function and invert it. Directly compute the inverse function, by swapping the roles of the source and destination points.
You need some form of bivariate interpolation; have a look at radial basis function interpolation. It requires solving a linear system of equations.
Inverse distance weighting (similar to your proposal) is the easiest to implement but I am afraid it will give disappointing results.
https://en.wikipedia.org/wiki/Multivariate_interpolation#Irregular_grid_.28scattered_data.29
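To make the suggestion concrete, here is a rough Python sketch of the inverse-mapping idea with radial basis functions (my own code, assuming SciPy's scipy.interpolate.Rbf and the same (x, y, dx, dy) control-point tuples as the code above; the function name warp_rbf is mine):

import numpy as np
from PIL import Image
from scipy.interpolate import Rbf

def warp_rbf(image, points):
    # points: list of (x, y, dx, dy); the displacement field is interpolated
    # with RBFs centred on the *destination* positions, i.e. source and
    # destination are swapped as suggested above (backward mapping).
    dst_x = np.array([p[0] + p[2] for p in points], dtype=float)
    dst_y = np.array([p[1] + p[3] for p in points], dtype=float)
    back_dx = Rbf(dst_x, dst_y, [-p[2] for p in points])
    back_dy = Rbf(dst_x, dst_y, [-p[3] for p in points])

    w, h = image.size
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))          # destination grid
    sx = np.clip(np.rint(xs + back_dx(xs, ys)).astype(int), 0, w - 1)
    sy = np.clip(np.rint(ys + back_dy(xs, ys)).astype(int), 0, h - 1)

    src = np.asarray(image.convert("RGB"))
    return Image.fromarray(src[sy, sx])

Note that a plain RBF field does not decay to zero far from the control points; if the rest of the image should stay pinned, add zero-displacement anchor points (for example at the four corners) to the points list.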
I'm getting ellipses as level curves of a fitted dataset. After selecting a particular ellipse, I would like to report it as a center point, semi-major and semi-minor axis lengths, and a rotation angle. In other words, I would like to transform (using Mathematica) my ellipse equation from the form:
Ax^2 + By^2 + Cx + Dy + Exy + F = 0
to a more standard form:
((xCos[alpha] - ySin[alpha] - h)^2)/(r^2) + ((xSin[alpha] + yCos[alpha] - k)^2)/(s^2) = 1
where (h,k) is the center, alpha is the rotation angle, and r and s are the semi-axes.
The actual equation I'm attempting to transform is
1.68052 x - 9.83173 x^2 + 4.89519 y - 1.19133 x y - 9.70891 y^2 + 6.09234 = 0
I know the center point is the fitted maximum, which is:
{0.0704526, 0.247775}
I posted a version of this answer on Math SE since it benefits a lot from proper mathematical typesetting. The example there is simpler as well, and there are some extra details.
The following description follows the German Wikipedia article Hauptachsentransformation. Its English counterpart, according to inter-wiki links, is principal component analysis. I find the former article a lot more geometric than the latter. The latter has a strong focus on statistical data, though, so it might be useful for you nevertheless.
Rotation
Your ellipse is described as
        [A    E/2]   [x]             [x]
[x y] * [E/2  B  ] * [y]  +  [C D] * [y]  +  F = 0
First you identify the rotation. You do this by identifying the eigenvalues and eigenvectors of this 2×2 matrix. These eigenvectors will form an orthogonal matrix describing your rotation: its entries are the Sin[alpha] and Cos[alpha] from your formula.
With your numbers, you get
[A    E/2]   [-0.74248   0.66987]   [-10.369    0     ]   [-0.74248  -0.66987]
[E/2  B  ] = [-0.66987  -0.74248] * [  0       -9.1715] * [ 0.66987  -0.74248]
The first of the three factors is the matrix formed by the eigenvectors, each normalized to unit length. The central matrix has the eigenvalues on the diagonal, and the last one is the transpose of the first. If you multiply the vector (x,y) with that last matrix, then you will change the coordinate system in such a way that the mixed term vanishes, i.e. the x and y axes are parallel to the main axes of your ellipse. This is just what happens in your desired formula, so now you know that
Cos[alpha] = -0.74248 (-0.742479398678 with more accuracy)
Sin[alpha] = 0.66987 ( 0.669868899516)
Translation
If you multiply the row vector [C D] in the above formula with the first of the three matrices, then this effect will exactly cancel the multiplication of (x, y) by the third matrix. Therefore in that changed coordinate system, you use the central diagonal matrix for the quadratic term, and this product for the linear term.
                     [-0.74248   0.66987]
[1.68052  4.89519] * [-0.66987  -0.74248]  =  [-4.5269  -2.5089]
Now you have to complete the square independently for x and y, and you end up with a form from which you can read the center coordinates.
-10.369x² -4.5269x = -10.369(x + 0.21829)² + 0.49408
-9.1715y² -2.5089y = -9.1715(y + 0.13677)² + 0.17157
h = -0.21829 (-0.218286476695)
k = -0.13677 (-0.136774259156)
Note that h and k describe the center in the already rotated coordinate system; to obtain the original center you'd multiply again with the first matrix:
[-0.74248   0.66987]   [-0.21829]   [0.07045]
[-0.66987  -0.74248] * [-0.13677] = [0.24778]
which fits your description.
Scaling
The completed squares above contributed some more terms to the constant factor F:
6.09234 + 0.49408 + 0.17157 = 6.7580
Now you move this to the right side of the equation, then divide the whole equation by this number so that you get the = 1 from your desired form. Then you can deduce the radii.
1     -10.369
--  = -------  = 1.5344
r²    -6.7580

1     -9.1715
--  = -------  = 1.3571
s²    -6.7580
r = 0.80730 (0.807304599162099)
s = 0.85840 (0.858398019487315)
Verifying the result
Now let's check that we didn't make any mistakes. With the parameters we found, you can piece together the equation
((-0.74248*x - 0.66987*y + 0.21829)^2)/(0.80730^2)
+ (( 0.66987*x - 0.74248*y + 0.13677)^2)/(0.85840^2) = 1
Move the 1 to the left side, and multiply by -6.7580, and you should end up with the original equation. Expanding that (with the extra precision versions printed in parentheses), you'll get
-9.8317300000 x^2
-1.1913300000 x y
+1.6805200000 x
-9.7089100000 y^2
+4.8951900000 y
+6.0923400000
which is a perfect match for your input.
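The whole calculation above can also be cross-checked numerically. Here is a small NumPy sketch of the same three steps (my own code, not part of the original answer; the eigenvector signs and order returned by eigh may differ from the hand calculation, but the recovered center and semi-axes are the same):

import numpy as np

# conic  A x^2 + B y^2 + C x + D y + E x y + F = 0, coefficients from the question
A, B, C, D, E, F = -9.83173, -9.70891, 1.68052, 4.89519, -1.19133, 6.09234

# Rotation: eigen-decomposition of the quadratic-form matrix
Q = np.array([[A, E / 2], [E / 2, B]])
eigvals, eigvecs = np.linalg.eigh(Q)          # columns of eigvecs are unit eigenvectors

# Translation: rotate the linear term, then complete the square
lin = np.array([C, D]) @ eigvecs              # linear term in the rotated frame
center_rot = -lin / (2 * eigvals)             # (h, k) in the rotated frame
const = F - np.sum(lin**2 / (4 * eigvals))    # constant term after completing the square

# Scaling: divide by -const to reach the "= 1" form
radii = np.sqrt(-const / eigvals)             # (r, s)

print("center (original frame):", eigvecs @ center_rot)   # ~ (0.0705, 0.2478)
print("semi-axes:", radii)                                 # ~ (0.8073, 0.8584)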
If you have h and k, you can use Lagrange Multipliers to maximize / minimize the function (x-h)^2+(y-k)^2 subject to the constraint of being on the ellipse. The maximum distance will be the major radius, the minimum distance the minor radius, and alpha will be how much they are rotated from horizontal.
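As a sketch of that setup (my notation, with the general conic from the question): maximize/minimize f(x,y) = (x-h)^2 + (y-k)^2 subject to g(x,y) = Ax^2 + By^2 + Cx + Dy + Exy + F = 0. The Lagrange conditions grad f = lambda * grad g read 2(x-h) = lambda*(2Ax + Ey + C) and 2(y-k) = lambda*(2By + Ex + D), to be solved together with g(x,y) = 0 (for example with Mathematica's Solve or NSolve); the largest and smallest values of Sqrt[f] over the solutions are the semi-major and semi-minor axes.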
I have two sequences of length n and m. Each is a sequence of points of the form (x,y) and represents a curve in an image. I need to find how different (or similar) these sequences are, given that:
one sequence is likely longer than the other (i.e., one can be half or a quarter as long as the other, but if they trace approximately the same curve, they are the same)
these sequences could be in opposite directions (i.e., sequence 1 goes from left to right, while sequence 2 goes from right to left)
I looked into some difference estimates like Levenshtein as well as edit-distances in structural similarity matching for protein folding, but none of them seem to do the trick. I could write my own brute-force method but I want to know if there is a better way.
Thanks.
Do you mean that you are trying to match curves that have been translated in x,y coordinates? One technique from image processing is to use chain codes [I'm looking for a decent reference, but all I can find right now is this] to encode each sequence and then compare those chain codes. You could take the sum of the differences (modulo 8) and if the result is 0, the curves are identical. Since the sequences are of different lengths and don't necessarily start at the same relative location, you would have to shift one sequence and do this again and again, but you only have to create the chain codes once. The only way to detect if one of the sequences is reversed is to try both the forward and reverse of one of the sequences. If the curves aren't exactly alike, the sum will be greater than zero but it is not straightforward to tell how different the curves are simply from the sum.
This method will not be rotationally invariant. If you need a method that is rotationally invariant, you should look at Boundary-Centered Polar Encoding. I can't find a free reference for that, but if you need me to describe it, let me know.
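If it helps, here is a rough Python sketch of the chain-code encoding itself (my own code; it assumes each sequence is an 8-connected chain of integer pixel coordinates, i.e. consecutive points are neighbouring pixels, otherwise you would resample first):

# Freeman 8-direction chain code: 0 = east, then counter-clockwise
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def sign(v):
    return (v > 0) - (v < 0)

def chain_code(points):
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        step = (sign(x1 - x0), sign(y1 - y0))
        if step != (0, 0):                 # skip repeated points
            codes.append(DIRECTIONS[step])
    return codes

Comparing the two code lists (shifting one and trying its reverse, as described above) is then a matter of iterating over the lists.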
A method along these lines might work:
For both sequences:
Fit a curve through the sequence. Make sure that you have a continuous one-to-one function from [0,1] to points on this curve. That is, for each (real) number between 0 and 1, this function returns a point on the curve belonging to it. By tracing the function for all numbers from 0 to 1, you get the entire curve.
One way to fit a curve would be to draw a straight line between each pair of consecutive points (it is not a nice curve, because it has sharp bends, but it might be fine for your purpose). In that case, the function can be obtained by calculating the total length of all the line segments (Pythagoras). The point on the curve corresponding to a number Y (between 0 and 1) is the point that has a distance of Y * (total length of all line segments) from the first point of the sequence, measured by traveling along the line segments (!!).
Now, after we have obtained such a function F(double) for the first sequence, and G(double) for the second sequence, we can calculate the similarity as follows:
double epsilon = 0.01;
double curveDistanceSquared = 0.0;
for (double d = 0.0; d < 1.0; d = d + epsilon)
{
    Point pointOnCurve1 = F(d);
    Point pointOnCurve2 = G(d);
    // alternatively, use G(1.0-d) to check whether the second sequence is reversed
    double distanceOfPoints = pointOnCurve1.EuclideanDistance(pointOnCurve2);
    curveDistanceSquared = curveDistanceSquared + distanceOfPoints * distanceOfPoints;
}
similarity = 1.0 / curveDistanceSquared;
Possible improvements:
-Find an improved way to fit the curves. Note that you still need the function that traces the curve for the above method to work.
-When calculating the distance, consider reparametrizing the function G in such a way that the distance is minimized. (This means you have an increasing function R, such that R(0) = 0 and R(1) = 1, but which is otherwise general.) When calculating the distance you use
Point pointOnCurve1 = F(d);
Point pointOnCurve2 = G(R(d));
and then try to choose R in such a way that the distance is minimized. (To see what happens, note that G(R(d)) also traces the curve.)
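Here is a small Python sketch of this approach (my own code; it implements the piecewise-linear F described above and the distance loop, trying both directions of the second curve; all names are mine):

import math

def make_curve_function(points):
    # F: [0, 1] -> point on the piecewise-linear curve through `points`,
    # parameterized by normalized arc length
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]

    def F(d):
        target = d * total
        for i in range(1, len(cum)):
            if target <= cum[i] or i == len(cum) - 1:
                seg = cum[i] - cum[i - 1]
                t = 0.0 if seg == 0 else (target - cum[i - 1]) / seg
                (x0, y0), (x1, y1) = points[i - 1], points[i]
                return ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
    return F

def curve_distance(seq1, seq2, epsilon=0.01):
    # squared-distance score; lower means more similar
    F, G = make_curve_function(seq1), make_curve_function(seq2)
    ds = [i * epsilon for i in range(int(round(1.0 / epsilon)))]
    forward  = sum((F(d)[0] - G(d)[0]) ** 2 + (F(d)[1] - G(d)[1]) ** 2 for d in ds)
    backward = sum((F(d)[0] - G(1.0 - d)[0]) ** 2 + (F(d)[1] - G(1.0 - d)[1]) ** 2 for d in ds)
    return min(forward, backward)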
Why not do some sort of curve-fitting procedure (least squares, whether it be ordinary or non-linear) and see if the coefficients on the shape parameters are the same? If you run it as a panel-data sort of model, there are explicit statistical tests for whether sets of parameters are significantly different from one another. That would solve the problem of the same curve being sampled at different resolutions.
Step 1: Canonicalize the orientation. For example, let's say that all curves start at the endpoint with the lowest lexicographic order.
def inCanonicalOrientation(path):
    return path if path[0] < path[-1] else path[::-1]
Step 2: You can either be roughly accurate, or very accurate. If you wish to be very accurate, calculate a spline, or fit both curves to a polynomial of appropriate degree, and compare coefficients. If you'd like just a rough estimate, do as follows:
def resample(path, numPoints):
    totalLength = pathLength(path)  #write this function
    segments = generateSegments(path)
    currentSegment = next(segments)
    lengthSoFar = 0.0
    for i in range(numPoints):
        samplePosition = i / (numPoints - 1) * totalLength
        while samplePosition > lengthSoFar + currentSegment.length:
            lengthSoFar += currentSegment.length
            currentSegment = next(segments)
        howFar = (samplePosition - lengthSoFar) / currentSegment.length
        yield Point((1 - howFar) * currentSegment.start + howFar * currentSegment.end)
This can be modified from a linear resampling to something better.
def error(pathA, pathB):
    pathA = inCanonicalOrientation(pathA)
    pathB = inCanonicalOrientation(pathB)

    higherResolution = max([len(pathA), len(pathB)])
    resampledA = list(resample(pathA, higherResolution))
    resampledB = list(resample(pathB, higherResolution))

    error = sum(
        abs(pointInA - pointInB)
        for pointInA, pointInB in zip(resampledA, resampledB)
    )
    averageError = error / higherResolution
    normalizedError = error / Z(resampledA)
    return normalizedError
Where Z is something like the "diameter" of your path, perhaps the maximum Euclidean distance between any two points in a path.
I would use a curve-fitting procedure, but also throw in a constant term, i.e. 0 = B0 + B1*X + B2*Y + B3*X*Y + B4*X^2, etc. This would catch the translational variance, and then you can do a statistical comparison of the estimated coefficients of the curves formed by the two sets of points as a way of classifying them. I'm assuming you'll have to do bivariate interpolation if the data form arbitrary curves in the x-y plane.
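A hedged sketch of the fit-and-compare-coefficients part in Python/NumPy (my own code; it does the implicit least-squares fit via SVD and a simple coefficient comparison, not the formal statistical tests mentioned above; all names are mine):

import numpy as np

def implicit_quadratic_fit(points):
    # least-squares fit of 0 = B0 + B1*x + B2*y + B3*x*y + B4*x^2 + B5*y^2
    # to an (n, 2) array of points; returns a unit-length coefficient vector
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    design = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    _, _, vt = np.linalg.svd(design)
    coeffs = vt[-1]                       # right singular vector with the smallest singular value
    return coeffs / np.linalg.norm(coeffs)

def coefficient_similarity(points_a, points_b):
    # 1.0 = same implicit curve (up to scale); values near 0 = very different shapes
    ca, cb = implicit_quadratic_fit(points_a), implicit_quadratic_fit(points_b)
    return abs(np.dot(ca, cb))            # the overall sign of each fit is arbitrary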
Does anyone know how to minimize a function containing an integral in MATLAB? The function looks like this:
L = Int(t=0,t=T)[(AR - x)dt], where A is a system parameter, and R and x are related through:
dR/dt = axRY - bR, where a and b are constants.
dY/dt = -xRY
I read somewhere that I can use fminbnd and quad in combination but I am not able to make it work. Any suggestions?
Perhaps you could give more details of your integral, e.g. where is the missing bracket in [AR-x)dt]? Is there any dependence of x on t, or can we integrate dR/dt = axR - bR to give R=C*exp((a*x-b)*t)? In any case, to answer your question on fminbnd and quad, you could set A,C,T,a,b,xmin and xmax (the last two are the range you want to look for the min over) and use:
[x fval] = fminbnd(@(x) quad(@(t) A*C*exp((a*x-b)*t) - x, 0, T), xmin, xmax)
This finds x that minimizes the integral.
If I didn't get it wrong, you are trying to minimize with respect to t:
\int_0^t{(AR-x) dt}
Well, then you just need to find the zeros of:
AR - x
This is just math, not MATLAB ;)
Here's some manipulation of your equations that might help.
Combining the second and third equations you gave gives
dR/dt = -a*(dY/dt)-bR
Now if we solve for R on the righthand side and plug it into the first equation you gave we get
L = Int(t=0,t=T)[(-A/b*(dR/dt + a*dY/dt) - x)dt]
Now we can integrate the first term to get:
L = -A/b*[R(T) - R(0) + a*(Y(T) - Y(0))] - Int(t=0,t=T)[(x)dt]
So now all that matters with regard to R and Y are the endpoints. In fact, you may as well define a new function, Z, which equals R + a*Y. Then you get
L = -A/b*[Z(T) - Z(0)] - Int(t=0,t=T)[(x)dt]
This next part I'm not as confident in. The integral of x with respect to t will give some function which is evaluated at t = 0 and t = T. This function we will call X to give:
L = -A/b*[Z(T) - Z(0)] - X(T) + X(0)
This equation holds true for all T, so we can set T to t if we want to.
L = -A/b*[Z(t) - Z(0)] - X(t) + X(0)
Also, we can group a lot of the constants together and call them C to give
X(t) = -A/b*Z(t) + C
where
C = A/b*Z(0) + X(0) - L
So I'm not sure what else to do with this, but I've shown that the integral of x(t) is linearly related to Z(t) = R(t) + a*Y(t). It seems to me that there are many equations that solve this. Anyone else see where to go from here? Any problems with my math?