Calculation of tangents for cardinal spline curves - math

I am reading an article about cubic Hermite interpolation. In the cardinal spline curve section they give a formula to calculate the tangent at each point:
T_i = a * (P_{i+1} - P_{i-1})
However, if I have only two points P1 and P2, then to find T1 I need
T1 = a * (P2 - P0).
But what should my P0 point be? Similarly, to find T2 I would need to know P3. Can anyone clarify this?

You're right; this formula only makes sense for the inner points of your spline, which have neighbours on both sides. For the endpoints you have to get the tangent from other constraints. The common solutions are:
supply manually chosen tangent points
choose the tangent such that the curvature at the endpoint is zero; this is referred to as the natural boundary condition
choose periodic boundary conditions, that is, make the tangents at the start and end point equal. Then you only have to specify one of the tangents. For a closed spline, you can get the last tangent from the natural boundary condition.
These ideas are usually brought up in the context of cubic splines, which require solving a system of linear equations to get the polynomial coefficients of each segment, because they minimize the total curvature of the whole spline; but they should be applicable to your Hermite splines, too.
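For illustration, here is a minimal NumPy sketch (mine, not from the article) of the cardinal tangent formula with a simple one-sided difference as the endpoint fallback; replacing those two endpoint lines with manually supplied vectors, or with vectors derived from a zero-curvature condition, gives the boundary treatments listed above.
import numpy as np

def cardinal_tangents(points, a=0.5):
    # T_i = a * (P_{i+1} - P_{i-1}) for interior points.
    # The endpoints use a one-sided difference as a simple fallback;
    # swap these two lines for another boundary condition if needed.
    P = np.asarray(points, dtype=float)
    T = np.empty_like(P)
    T[1:-1] = a * (P[2:] - P[:-2])
    T[0] = a * (P[1] - P[0])
    T[-1] = a * (P[-1] - P[-2])
    return T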

Related

Bend a mesh along a spline

I am trying to bend a mesh along a spline curve and I am currently out of ideas. At first I thought I could just add the spline point vectors to the mesh's vertices, but I am looking for a more optimized version of that.
So: how can I bend a mesh along a spline, so that the mesh, with some forward axis vector, follows the spline, bends according to it, and also repeats along the spline?
I believe there are many ways to do what you want. Some years ago I worked out an approach using Conformal Geometric Algebra. Of course you can also do it with conventional 3D math, as described in the papers Instant Mesh Deformation and Deformation styles for spline-based skeletal animation.
A simple method is as follows:
Your spline is a function S(t): R -> R^3; it takes a scalar t in [0,1] and gives you a point in R^3.
Step 1: Project each mesh vertex onto the spline curve. The projection is orthogonal in the sense that it follows the direction of a normal vector to the curve. So mesh vertex v_i is projected to a point v'_i on the spline where S(t_i) = v'_i. Form the vector p_i = v_i - v'_i (which is normal to the curve), so each mesh vertex can be expressed as:
v_i = S(t_i) + p_i
Step 2: Compute an orthogonal coordinate system at each point of the spline. That coordinate system is known as the Frenet-Serret frame. The first vector to determine is the tangent to the curve; it is uniquely defined as the derivative of S(t), so the tangent is T = dS(t)/dt. The other two vectors, the normal N and the binormal B, can be computed in different ways; check the above reference papers for that.
Step 3: Express the vector p_i (from step 1) in terms of the Frenet-Serret frame at the point S(t_i), so that p_i is a linear combination of T, N and B. Create a matrix A with columns T, N and B. You need to find x_i such that:
A x_i = p_i
That can be solved by inverting the matrix A (since A is orthonormal, taking the transpose suffices). So each mesh vertex can be computed as:
v_i = S(t_i) + A x_i
You can store the pair (t_i, x_i) instead of v_i (you don't need to store v_i anymore since you can compute it from t_i and x_i).
Step 4: To deform the mesh, translate the spline control points, then recompute the Frenet-Serret frame at each spline point (taking the derivative of S(t) to compute T and updating N and B as suggested in the above reference papers). Once you have the updated T, N and B, rebuild the matrix A and compute the mesh vertex positions using the formula from step 3.
Results can be seen in pictures of the above mentioned papers.
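If it helps, here is a rough NumPy sketch of the binding and deformation steps under simplifying assumptions of mine: the spline is sampled as a polyline, the orthogonal projection is replaced by a closest-sample search, and N and B come from a fixed up vector rather than the frame-update schemes of the referenced papers. All names are illustrative.
import numpy as np

def frame_at(spline_pts, i, up=np.array([0.0, 1.0, 0.0])):
    # Approximate frame at sample i: T from a finite difference, N and B from a
    # fixed "up" vector (degenerates if the tangent is parallel to "up").
    T = spline_pts[min(i + 1, len(spline_pts) - 1)] - spline_pts[max(i - 1, 0)]
    T = T / np.linalg.norm(T)
    B = np.cross(T, up)
    B = B / np.linalg.norm(B)
    N = np.cross(B, T)
    return np.column_stack((T, N, B))   # orthonormal matrix A with columns T, N, B

def bind(vertices, spline_pts):
    # Steps 1-3: for each vertex store (i, x_i), where i indexes the closest spline
    # sample and x_i is the offset expressed in that sample's frame.
    bound = []
    for v in vertices:
        i = int(np.argmin(np.linalg.norm(spline_pts - v, axis=1)))
        A = frame_at(spline_pts, i)
        p = v - spline_pts[i]            # offset from the curve
        bound.append((i, A.T @ p))       # A is orthonormal, so A^-1 = A^T
    return bound

def deform(bound, new_spline_pts):
    # Step 4: recompute vertex positions after the spline samples have moved.
    return np.array([new_spline_pts[i] + frame_at(new_spline_pts, i) @ x
                     for i, x in bound])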

Hermite spline tangent estimation

I'm trying to come up with an algorithm or method that will let me estimate the tangents' magnitudes (the directions are given) such that the interpolated spline is the best fit for a given curve (blue dots).
In general the points are in 3D space, but even a 2D solution will put me on track.
This is the Hermite spline form:
Vector3D p0, p1, m0 (tangent at p0), m1 (tangent at p1)
p = (2*t^3 - 3*t^2 + 1)*p0 + (t^3-2*t^2+t)*m0 + (3*t^2 - 2*t^3)*p1 + (t^3-t^2)*m1;
My intuition suggests some sort of least-squares method using the m0 and m1 magnitudes as the unknowns (could I apply least squares to each coordinate's equation?), but somehow it should involve projecting and transforming that spline equation onto the vector p1-p0 so that instead of returning a 3D point it returns just the distance of that point to the base p1-p0. But I guess I got lost there :(
Perhaps the solution is easy and I would really love some light into this darkness.
Thanks in advance for your time!
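Since the Hermite form above is linear in m0 and m1, one way to realise the least-squares idea is to fix the given directions and solve only for the two magnitudes, which is an ordinary linear least-squares problem. Here is a sketch of that idea; the chord-length parameterisation of the sample points and all names are assumptions of mine, not part of the question.
import numpy as np

def fit_tangent_magnitudes(p0, p1, d0, d1, samples):
    # Find scalars a, b so that m0 = a*d0 and m1 = b*d1 give the best least-squares
    # fit of the Hermite curve to the sample points. d0, d1 are unit directions.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d0, d1 = np.asarray(d0, float), np.asarray(d1, float)
    q = np.asarray(samples, dtype=float)

    # Crude parameter estimate for each sample: normalised chord length.
    chord = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(q, axis=0), axis=1))]
    t = chord / chord[-1]

    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = 3*t**2 - 2*t**3
    h11 = t**3 - t**2

    # Residual per sample/coordinate: q - (h00*p0 + h01*p1) = a*h10*d0 + b*h11*d1
    A = np.column_stack([np.outer(h10, d0).ravel(), np.outer(h11, d1).ravel()])
    b = (q - np.outer(h00, p0) - np.outer(h01, p1)).ravel()
    (a_mag, b_mag), *_ = np.linalg.lstsq(A, b, rcond=None)
    return a_mag * d0, b_mag * d1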

How can I produce multi-point linear interpolation? [closed]

I have a linear interpolation method. It calculates the interpolated value y0 when (x1,y1), (x2,y2) and x0 are known. But I need to do that when multiple points are known.
I am not talking about Bilinear or Trilinear interpolation.
For multi-point interpolation there are 3 options:
piecewise linear interpolation
choose the 2 points closest to your known coordinate; if you use a parameter, then select the points whose parameter range contains it, rescale that parameter range to the interpolation range (usually <0,1>) and interpolate as ordinary linear interpolation (see the sketch after this list).
An example of linear DDA on integers, and more, is here:
Precise subpixel line drawing algorithm (rasterization algorithm)
polynomial interpolation
This is not linear! Take all known points, compute an n-th degree polynomial from them (by Lagrange polynomial, by edge conditions, by regression/curve fitting, or by whatever else) and compute the point for a given parameter as a function of this polynomial. Usually you have one polynomial per axis; the more points and/or the higher the degree of the polynomial, the less stable the result (oscillations).
piecewise polynomial interpolation
It is a combination of #1 and #2 (n is kept low to avoid oscillations). You need to order the point sequence properly to manage continuity between segments, and the edge conditions must take the previous and next segment into account...
here Piecewise interpolation cubic example
here How to construct own interpolation 3th degree polynomial
here How to construct own interpolation 4th degree polynomial
here point call sequence and BEZIER cubic as interpolation cubic
[notes]
SPLINE, BEZIER, ... are approximation curves, not interpolation curves (they do not necessarily pass through the control points). There is a way to convert between different types of curves by recomputing the control points. For example see this:
Interpolation cubic vs. Bezier cubic
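As a minimal illustration of option #1 (piecewise linear interpolation), here is a small NumPy sketch; the function name and the assumption that the x values are sorted are mine.
import numpy as np

def piecewise_linear(xs, ys, x0):
    # Find the segment containing x0, rescale to the local parameter t in <0,1>,
    # and apply the ordinary two-point linear interpolation inside that segment.
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    i = int(np.clip(np.searchsorted(xs, x0) - 1, 0, len(xs) - 2))
    t = (x0 - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])
For example, piecewise_linear([0, 1, 3], [0, 2, 1], 2.0) returns 1.5, the value halfway along the second segment.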

How to determine whether a polynomial curve is monotonic on an interval [a,b]?

I have a polynomial curve, and I want to find all monotonic curve segments and the corresponding intervals programmatically.
What's the best way to do this...
I want to avoid solving equations like f'(x) = 0;
using some nice numerical method to do this, like bisection, is preferred.
The f'(x) expression is available.
Thanks.
To add some details: I have a curve in 2D space, and its polynomial is
x: f(t)
y: g(t)
t is in [0,1]
So, if I want to get its monotonic curve segments, I must know the positions of t where its tangent vector is (1,0).
One direct way to solve this is to set up the equation f'(x) = 0.
But I want to use the most efficient way to do this.
For example, I tried a recursive approach:
Divide the range [0,1] into four parts, and check whether the four tangents' projections onto the vector (1,0) point in the same direction and the two points are close enough. If not, continue to divide each range into 4 parts, until the projections onto (1,0) and (0,1) point in the same direction and the points are close enough.
I think you will have to find the roots of f'(x) using a numerical method (feel free to implement any root-seeking algorithm you want, Wikipedia has a list). The roots will be those points where the gradient reaches zero; say x1, x2, x3.
You then have a set of intervals (-inf, x1), (x1, x2), etc.; continuity of the polynomial ensures that the gradient will be either always positive or always negative between a particular pair of consecutive roots.
So evaluating the sign of the gradient at a point within each interval will tell you whether that interval is monotonically increasing or not. If you don't care about a "strictly" increasing section, you can patch together adjacent intervals that have positive gradient (a point of inflection will show up as one of the f'(x) = 0 roots).
As an alternative to computing the roots of f', you can also use Sturm Sequences.
They allow counting the number of roots (here, the roots of f') in an interval.
The monotonic curve segments are delimited by the roots of f'(x). You can find the roots by using an iterative algorithm like Newton's method.
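To make the root-based approach concrete, here is a small NumPy sketch; it assumes the polynomial coefficients are available (highest degree first) and the names are mine. It finds the real roots of f' inside the interval and classifies each sub-interval by the sign of f' at its midpoint.
import numpy as np

def monotonic_intervals(coeffs, lo=0.0, hi=1.0):
    # Returns a list of (a, b, 'increasing' | 'decreasing') covering [lo, hi].
    f = np.poly1d(coeffs)
    df = f.deriv()
    roots = sorted(r.real for r in df.roots
                   if abs(r.imag) < 1e-12 and lo < r.real < hi)
    cuts = [lo] + roots + [hi]
    out = []
    for a, b in zip(cuts[:-1], cuts[1:]):
        slope = df((a + b) / 2.0)        # sign of f' is constant on (a, b)
        out.append((a, b, 'increasing' if slope >= 0 else 'decreasing'))
    return out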

approximation methods

I attached an image (source: piccy.info).
It shows a diagram of a function defined on given points, for example at points x = 1..N.
Another diagram, drawn as a semitransparent curve, is what I want to get from the original one, i.e. I want to approximate the original function so that it becomes smooth.
Are there any methods for doing that?
I have heard about the least squares method, which can be used to approximate a function by a straight line or by a parabola. But I do not need to approximate by a parabola.
I probably need to approximate it by a trigonometric function.
So are there any methods for doing that?
And one more idea: is it possible to use the least squares method for this problem if we derive it for trigonometric functions?
One more question!
If I use the discrete Fourier transform and think of the function as a sum of waves, maybe the noise has special features by which we can identify it; then we could set the corresponding frequencies to zero and perform the inverse Fourier transform.
If you think that is possible, what can you suggest for identifying the frequency of the noise?
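For what it's worth, that DFT idea can be prototyped in a few NumPy lines. The cutoff below is an arbitrary placeholder of mine; choosing it, typically by inspecting the magnitude spectrum for where the strong low-frequency content ends, is exactly the open question.
import numpy as np

def fft_lowpass(y, keep=10):
    # Keep only the 'keep' lowest-frequency bins and invert the transform.
    Y = np.fft.rfft(y)
    Y[keep:] = 0
    return np.fft.irfft(Y, n=len(y))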
Unfortunately many of the solutions presented here don't solve the problem and/or are plain wrong.
There are many approaches, and each is built for specific conditions and requirements you must be aware of!
a) Approximation theory: If you have a very sharply defined function without errors (given either by a definition or by data) and you want to trace it as exactly as possible, you use polynomial or rational approximation with Chebyshev or Legendre polynomials, i.e. you approximate the function by a polynomial or, if it is periodic, by a Fourier series.
b) Interpolation: If you have a function where some points (but not the whole curve!) are given and you need a function that passes through those points, you can use several methods:
Newton-Gregory, Newton with divided differences, Lagrange, Hermite, Spline
c) Curve fitting: You have a function with given points and you want to draw a curve of a given (!) functional form which approximates the data as closely as possible. There are linear and nonlinear algorithms for this case.
Your drawing implies:
It is not remotely like a mathematical function.
It is not sharply defined by data or function
You need to fit the curve, not some points.
What you want and need is
d) Smoothing: Given a curve or datapoints with noise or rapidly changing elements, you only want to see the slow changes over time.
You can do that with LOESS as Jacob suggested (but I find that overkill, especially because choosing a reasonable span needs some experience). For your problem, I simply recommend the running average as suggested by Jim C.
http://en.wikipedia.org/wiki/Running_average
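For completeness, a running average is essentially a one-liner with NumPy; the window length below is just an example value.
import numpy as np

def running_average(y, window=5):
    # Moving average via convolution with a box kernel; the first and last few
    # samples are biased because the kernel runs off the edge of the data.
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode='same')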
Sorry, cdonner and Orendorff, your proposals are well-intentioned but completely wrong, because you are using the right tools for the wrong problem.
These guys used a sixth-degree polynomial to fit climate data and embarrassed themselves completely:
http://scienceblogs.com/deltoid/2009/01/the_australians_war_on_science_32.php
http://network.nationalpost.com/np/blogs/fullcomment/archive/2008/10/20/lorne-gunter-thirty-years-of-warmer-temperatures-go-poof.aspx
Use loess in R (free).
E.g. here the loess function approximates a noisy sine curve (figure source: stowers-institute.org).
As you can see, you can tweak the smoothness of the curve with the span parameter.
Here's some sample R code from here:
Step-by-Step Procedure
Let's take a sine curve, add some "noise" to it, and then see how the loess "span" parameter affects the look of the smoothed curve.
Create a sine curve and add some noise:
period <- 120
x <- 1:120
y <- sin(2*pi*x/period) + runif(length(x),-1,1)
Plot the points on this noisy sine curve:
plot(x,y, main="Sine Curve + 'Uniform' Noise")
mtext("showing loess smoothing (local regression smoothing)")
Apply loess smoothing using the default span value of 0.75:
y.loess <- loess(y ~ x, span=0.75, data.frame(x=x, y=y))
Compute loess smoothed values for all points along the curve:
y.predict <- predict(y.loess, data.frame(x=x))
Plot the loess smoothed curve along with the points that were already plotted:
lines(x,y.predict)
You could use a digital filter like an FIR filter. The simplest FIR filter is just a running average. For more sophisticated treatment, look at something like an FFT.
This is called curve fitting. The best way to do this is to find a numeric library that can do it for you. Here is a page showing how to do this using scipy; the picture on that page (source: scipy.org) shows what the code does.
Now it's only 4 lines of code, but the author doesn't explain it at all. I'll try to explain briefly here.
First you have to decide what form you want the answer to be. In this example the author wants a curve of the form
f(x) = p0 cos (2π/p1 x + p2) + p3 x
You might instead want the sum of several curves. That's OK; the formula is an input to the solver.
The goal of the example, then, is to find the constants p0 through p3 to complete the formula. scipy can find this array of four constants. All you need is an error function that scipy can use to see how close its guesses are to the actual sampled data points.
from numpy import cos, pi
from scipy import optimize
fitfunc = lambda p, x: p[0]*cos(2*pi/p[1]*x+p[2]) + p[3]*x # Target function
errfunc = lambda p: fitfunc(p, Tx) - tX # Distance to the target function
errfunc takes just one parameter: an array of length 4. It plugs those constants into the formula and calculates an array of values on the candidate curve, then subtracts the array of sampled data points tX. The result is an array of error values; presumably scipy will take the sum of the squares of these values.
Then just put some initial guesses in and scipy.optimize.leastsq crunches the numbers, trying to find a set of parameters p where the error is minimized.
p0 = [-15., 0.8, 0., -1.] # Initial guess for the parameters
p1, success = optimize.leastsq(errfunc, p0[:])
The result p1 is an array containing the four constants. success is 1, 2, 3, or 4 if the solver actually found a solution. (If errfunc is sufficiently crazy, the solver can fail.)
This looks like a polynomial approximation. You can play with polynomials in Excel ("Add Trendline" on a chart, select Polynomial, then increase the order to the level of approximation that you need). It shouldn't be too hard to find an algorithm/code for that.
Excel can show the equation that it came up with for the approximation, too.
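Outside Excel, the same kind of polynomial trendline can be reproduced with NumPy's polyfit; here is a sketch (degree 3 is an arbitrary example, and as the earlier answer warns, high degrees tend to oscillate).
import numpy as np

def poly_trendline(x, y, degree=3):
    coeffs = np.polyfit(x, y, degree)   # least-squares polynomial coefficients
    return np.poly1d(coeffs)            # callable polynomial, e.g. p(x_new)
Calling p = poly_trendline(x, y) and then p(x) evaluates the fitted curve at the sample points, much like reading values off the Excel trendline.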
