I have a line graph plotted using the R plot function (plot(df)).
I want to get the length of the line itself between two points on the graph (e.g., between x(1) and x(3)). How can I do this?
1. If your function is defined over a fine grid of points, you can compute the length of the line segment between each pair of consecutive points and add them up. Pythagoras is your friend here: the segment from (x_i, y_i) to (x_{i+1}, y_{i+1}) has length √(δx_i² + δy_i²), and the arc length is the sum of these segment lengths.
To the extent that the points are not close enough together for the function to be essentially linear between them, this will tend to (generally only slightly) underestimate the arc length.
Note that if your x-values are stored in increasing order, these δx and δy values can be obtained directly by differencing (in R that's diff()); a short code sketch appears after this list.
2. If you have a functional form for y as a function of x, you can apply the integral for the arc length, i.e. integrate
∫ √[1+(dy/dx)²] dx
between a and b. This is essentially just the calculation in 1 taken to the limit.
3. If both x and y are parametric functions of another variable (t, say), the above integral simplifies (keeping track of the Jacobian) to integrating
∫ √[(dx/dt)²+(dy/dt)²] dt
between a and b
(Note the direct parallel to 1.)
4. If you don't have a convenient-to-integrate functional form in 2. or 3., you can use numerical quadrature; this can be quite efficient (which is handy when the derivative function is expensive to evaluate).
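As a rough illustration of 1 and 4, here is a minimal Python/numpy/scipy sketch (rather than R; the function and argument names are my own, and in 4 the derivative dy/dx is assumed to be known):

import numpy as np
from scipy.integrate import quad

def polyline_length(x, y):
    # 1: sum of straight-line segment lengths between consecutive points (Pythagoras)
    dx, dy = np.diff(x), np.diff(y)
    return float(np.sum(np.sqrt(dx**2 + dy**2)))

def arc_length(dydx, a, b):
    # 4: numerically integrate sqrt(1 + (dy/dx)^2) between a and b
    integrand = lambda t: np.sqrt(1.0 + dydx(t)**2)
    length, _ = quad(integrand, a, b)
    return length

# e.g. arc length of y = x^2 between x = 0 and x = 1:
# arc_length(lambda t: 2*t, 0.0, 1.0)

In R, the first calculation is essentially sum(sqrt(diff(x)^2 + diff(y)^2)) over the points between your two x-values.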
Disclaimer: I have no experience with Sage, programming, or computer calculations in general.
I want to expand a polynomial in Sage. The input is a factored polynomial and I need a certain coefficient. However, since the polynomial has 30 factors, my computer won't do it.
Should I look for somebody with a better computer or are 30 factors simply too much?
Here is my sage code:
R.<x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8,x_9,x_10,x_11,x_12> = QQbar[]
f = (x_1-x_2)*(x_1-x_3)*(x_1-x_9)*(x_1-x_10)*(x_2-x_3)*(x_2-x_10)*(x_2-x_11)*(x_2-x_12)*(x_3-x_4)*(x_4-x_11)*(x_4-x_5)*(x_4-x_6)*(x_4-x_11)*(x_5-x_6)*(x_5-x_10)*(x_5-x_11)*(x_5-x_12)*(x_6-x_7)*(x_6-x_12)*(x_7-x_9)*(x_7-x_8)*(x_7-x_12)*(x_8-x_9)*(x_8-x_10)*(x_8-x_11)*(x_8-x_12)*(x_9-x_10)*(x_10-x_11)*(x_10-x_12)*(x_11-x_12);
c = f.coefficient({x_1:2,x_2:2,x_3:2,x_4:2,x_5:2,x_6:2,x_7:2,x_8:2,x_9:2,x_10:5,x_11:5,x_12:5}); c
Just some background. I'm trying to solve an instance of list edge colouring with the combinatorial Nullstellensatz.
https://en.wikipedia.org/wiki/List_edge-coloring
Given a graph G=(V,E) we associate a variable x_i with each vertex i in V. The graph monomial eps(G) is defined as the product \prod_{ij \in E} (x_i-x_j). (Note that we fixed an orientation of the edges, but that's not important here.)
Suppose that there are lists of colours assigned to the vertices, such that vertex i has a list of size a(i). Then, by the combinatorial Nullstellensatz, there is a colouring from those lists (i.e. each vertex receives a colour from its list and two adjacent vertices do not receive the same colour) if the coefficient of \prod_{i \in V} x_i^{a(i)-1} is non-zero in eps(G).
I want to apply this to the line graph of the graph G(M) with incidence matrix:
M = Matrix([[0,0,0,3,3,0,3],[0,0,0,0,3,3,3],[0,0,0,3,0,3,3],[0,0,0,3,3,0,3],[3,0,3,0,0,0,6],[3,3,0,0,0,0,6],[0,3,3,0,0,0,6],[3,3,3,6,6,6,0]])
(Here the sizes of the lists are indicated by the integers.)
I believe that it takes so long because your coefficients are in QQbar, and arithmetic in QQbar is much slower than over QQ, for example. Is there a good reason for not using QQ?
If I change the coefficient ring to QQ, Sage fairly quickly tells me that c is 0:
sage: R.<x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8,x_9,x_10,x_11,x_12> = QQ[]
sage: f = (x_1-x_2)*(x_1-x_3)*(x_1-x_9)*(x_1-x_10)*(x_2-x_3)*(x_2-x_10)*(x_2-x_11)*(x_2-x_12)*(x_3-x_4)*(x_4-x_11)*(x_4-x_5)*(x_4-x_6)*(x_4-x_11)*(x_5-x_6)*(x_5-x_10)*(x_5-x_11)*(x_5-x_12)*(x_6-x_7)*(x_6-x_12)*(x_7-x_9)*(x_7-x_8)*(x_7-x_12)*(x_8-x_9)*(x_8-x_10)*(x_8-x_11)*(x_8-x_12)*(x_9-x_10)*(x_10-x_11)*(x_10-x_12)*(x_11-x_12)
sage: c = f.coefficient({x_1:2,x_2:2,x_3:2,x_4:2,x_5:2,x_6:2,x_7:2,x_8:2,x_9:2,x_10:5,x_11:5,x_12:5})
sage: c
0
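As an aside: for larger instances it may be less error-prone to build the graph monomial eps(G) from an edge list rather than typing the 30 factors by hand. A minimal Sage sketch (the edge list below just reproduces the factors of f above, including the repeated (4,11) factor; the coefficient ring QQ is as in the answer):

R = PolynomialRing(QQ, ['x_%d' % i for i in range(1, 13)])
x = dict(zip(range(1, 13), R.gens()))
edges = [(1,2),(1,3),(1,9),(1,10),(2,3),(2,10),(2,11),(2,12),(3,4),(4,11),
         (4,5),(4,6),(4,11),(5,6),(5,10),(5,11),(5,12),(6,7),(6,12),(7,9),
         (7,8),(7,12),(8,9),(8,10),(8,11),(8,12),(9,10),(10,11),(10,12),(11,12)]
f = prod(x[i] - x[j] for (i, j) in edges)      # the graph monomial eps(G)
c = f.coefficient({x[1]:2, x[2]:2, x[3]:2, x[4]:2, x[5]:2, x[6]:2,
                   x[7]:2, x[8]:2, x[9]:2, x[10]:5, x[11]:5, x[12]:5})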
I have two sequences of length n and m. Each is a sequence of points of the form (x,y) and represents a curve in an image. I need to find how different (or similar) these sequences are, given that:
one sequence is likely longer than the other (i.e., one can be half or a quarter as long as the other, but if they trace approximately the same curve, they are the same)
these sequences could be in opposite directions (i.e., sequence 1 goes from left to right, while sequence 2 goes from right to left)
I looked into some difference estimates like Levenshtein as well as edit-distances in structural similarity matching for protein folding, but none of them seem to do the trick. I could write my own brute-force method but I want to know if there is a better way.
Thanks.
Do you mean that you are trying to match curves that have been translated in x,y coordinates? One technique from image processing is to use chain codes [I'm looking for a decent reference, but all I can find right now is this] to encode each sequence and then compare those chain codes. You could take the sum of the differences (modulo 8) and if the result is 0, the curves are identical. Since the sequences are of different lengths and don't necessarily start at the same relative location, you would have to shift one sequence and do this again and again, but you only have to create the chain codes once. The only way to detect if one of the sequences is reversed is to try both the forward and reverse of one of the sequences. If the curves aren't exactly alike, the sum will be greater than zero but it is not straightforward to tell how different the curves are simply from the sum.
This method will not be rotationally invariant. If you need a method that is rotationally invariant, you should look at Boundary-Centered Polar Encoding. I can't find a free reference for that, but if you need me to describe it, let me know.
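To make the chain-code idea concrete, here is a rough Python sketch (my own helper names; it assumes the curves have already been resampled so consecutive points are roughly one step apart, and the shifting/reversal described above would be applied on top of this):

import math

def chain_code(points):
    # map each step between consecutive points to one of 8 directions (0 = east, counter-clockwise)
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    return codes

def chain_difference(codes_a, codes_b):
    # element-wise difference modulo 8; 0 everywhere means the shapes match
    return sum((a - b) % 8 for a, b in zip(codes_a, codes_b))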
A method along these lines might work:
For both sequences:
Fit a curve through the sequence. Make sure that you have a continuous one-to-one function from [0,1] to points on this curve. That is, for each (real) number between 0 and 1, this function returns a point on the curve belonging to it. By tracing the function for all numbers from 0 to 1, you get the entire curve.
One way to fit a curve would be to draw a straight line between each pair of consecutive points (it is not a smooth curve, because it has sharp bends, but it might be fine for your purpose). In that case, the function can be obtained by first calculating the total length of all the line segments (Pythagoras). The point corresponding to a number Y (between 0 and 1) is then the point at distance Y * (total length of all line segments) from the first point of the sequence, measured by travelling along the line segments.
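A rough Python sketch of such a function for the piecewise-linear case (my own helper; it assumes each sequence is a list of (x, y) tuples):

import math

def curve_point(path, t):
    # return the point a fraction t (0..1) along the polyline, measured by arc length
    seg_lengths = [math.dist(p, q) for p, q in zip(path, path[1:])]
    target = t * sum(seg_lengths)
    for (p, q), length in zip(zip(path, path[1:]), seg_lengths):
        if target <= length:
            f = target / length if length > 0 else 0.0
            return (p[0] + f * (q[0] - p[0]), p[1] + f * (q[1] - p[1]))
        target -= length
    return path[-1]

F(d) is then curve_point(sequence1, d) and G(d) is curve_point(sequence2, d).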
Now, after we have obtained such a function F(double) for the first sequence, and G(double) for the second sequence, we can calculate the similarity as follows:
double epsilon = 0.01;
double curveDistanceSquared = 0.0;
for (double d = 0.0; d < 1.0; d = d + epsilon)
{
    Point pointOnCurve1 = F(d);
    Point pointOnCurve2 = G(d);
    // alternatively, use G(1.0 - d) to check whether the second sequence is reversed
    double distanceOfPoints = pointOnCurve1.EuclideanDistance(pointOnCurve2);
    curveDistanceSquared = curveDistanceSquared + distanceOfPoints * distanceOfPoints;
}
similarity = 1.0 / curveDistanceSquared;
Possible improvements:
-Find an improved way to fit the curves. Note that you still need the function that traces the curve for the above method to work.
-When calculating the distance, consider reparametrizing the function G in such a way that the distance is minimized. This means you pick an increasing function R such that R(0) = 0 and R(1) = 1, but which is otherwise arbitrary. When calculating the distance you use
Point pointOnCurve1 = F(d);
Point pointOnCurve2 = G(R(d));
and then try to choose R in such a way that the distance is minimized. (To see what happens, note that G(R(d)) also traces the curve.)
Why not do some sort of curve-fitting procedure (least squares, whether ordinary or non-linear) and see if the coefficients on the shape parameters are the same? If you run it as a panel-data sort of model, there are explicit statistical tests for whether sets of parameters are significantly different from one another. That would solve the problem of the same curve being sampled at different resolutions.
Step 1: Canonicalize the orientation. For example, let's say that all curves start at the endpoint with the lowest lexicographic order.
def inCanonicalOrientation(path):
    # reverse the path if needed so the lexicographically smaller endpoint comes first
    return path if path[0] < path[-1] else path[::-1]
Step 2: You can either be roughly accurate, or very accurate. If you wish to be very accurate, calculate a spline, or fit both curves to a polynomial of appropriate degree, and compare coefficients. If you'd like just a rough estimate, do as follows:
import math

def pathLength(path):
    # total length of the polyline through the points
    return sum(math.dist(p, q) for p, q in zip(path, path[1:]))

def resample(path, numPoints):
    # yield numPoints points evenly spaced (by arc length) along the polyline
    total = pathLength(path)
    segments = list(zip(path, path[1:]))
    segIndex = 0
    lengthBefore = 0.0  # combined length of the segments already passed
    for i in range(numPoints):
        samplePosition = i / (numPoints - 1) * total
        while (segIndex < len(segments) - 1 and
               samplePosition > lengthBefore + math.dist(*segments[segIndex])):
            lengthBefore += math.dist(*segments[segIndex])
            segIndex += 1
        start, end = segments[segIndex]
        segLen = math.dist(start, end)
        howFar = (samplePosition - lengthBefore) / segLen if segLen > 0 else 0.0
        yield tuple((1 - howFar) * s + howFar * e for s, e in zip(start, end))
This can be modified from a linear resampling to something better.
def error(pathA, pathB):
    pathA = inCanonicalOrientation(pathA)
    pathB = inCanonicalOrientation(pathB)
    higherResolution = max(len(pathA), len(pathB))
    resampledA = list(resample(pathA, higherResolution))
    resampledB = list(resample(pathB, higherResolution))
    totalError = sum(
        math.dist(pointInA, pointInB)
        for pointInA, pointInB in zip(resampledA, resampledB)
    )
    averageError = totalError / higherResolution   # per-point average, if you prefer that normalization
    normalizedError = totalError / Z(pathA, pathB)  # Z: scale factor, see below
    return normalizedError
Where Z is something like the "diameter" of your path, perhaps the maximum Euclidean distance between any two points in a path.
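For instance, a simple (O(n²)) version of that Z, matching the Z(pathA, pathB) call above (my own placeholder):

def Z(pathA, pathB):
    # "diameter": the maximum Euclidean distance between any two points of a path
    def diameter(path):
        return max(math.dist(p, q) for p in path for q in path)
    return max(diameter(pathA), diameter(pathB))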
I would use a curve-fitting procedure, but also throw in a constant term, i.e. 0 = B0 + B1*X + B2*Y + B3*X*Y + B4*X^2, etc. This would catch the translational variance, and then you can do a statistical comparison of the estimated coefficients of the curves formed by the two sets of points as a way of classifying them. I'm assuming you'll have to do bi-variate interpolation if the data form arbitrary curves in the x-y plane.
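A rough numpy sketch of fitting such an implicit polynomial by least squares (taking the singular vector with the smallest singular value of the design matrix; the function name and the choice of quadratic terms are mine), so that the coefficient vectors of two curves can then be compared:

import numpy as np

def implicit_quadratic_coeffs(points):
    # least-squares fit of 0 = B0 + B1*X + B2*Y + B3*X*Y + B4*X^2 + B5*Y^2
    x, y = np.asarray(points, dtype=float).T
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    coeffs = vt[-1]                           # direction minimizing ||A * coeffs||
    return coeffs / np.linalg.norm(coeffs)    # normalize; the overall scale (and sign) is arbitrary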
I am looking for an (almost everywhere) differentiable function f(p1, p2, p3, p4) that given four points will give me a scale-agnostic measure for co-planarity. It is zero if the four points lie on the same plane and positive otherwise. Scale-agnostic means that, when I uniformly scale all points the planarity measure will return the same.
I came up with something that is quite complex and not easy to optimize. Define u=p2-p1, v=p3-p1, w=p4-p1. Then the planarity measure is:
[(u x v) * w]² / (|u x v|² |w|²)
where x means cross product and '*' means dot product.
The numerator is simply (the square of) six times the volume of the tetrahedron defined by the four points, and the denominator is a normalizing factor that makes this measure simply the squared cosine of an angle. Because angles do not change under uniform scaling, this function satisfies all my requirements.
Does anybody know of something simpler?
Alex.
Edit:
I eventually used an Augmented Lagrangian method to perform optimization, so I don't need it to be scale agnostic. Just using the constraint (u x v) * w = 0 is enough, as the optimization procedure finds the correct Lagrange multiplier to compensate for the scale.
Your method seems OK; I'd do something like this for an efficient implementation:
Take u, v, w as you did
Normalize them: various tricks exist to evaluate the inverse square root efficiently with whatever precision you want, like this jewel. Most modern processors have builtins for this operation.
Take f = |det(u, v, w)| ( = |(u x v) . w| ). There are fast direct implementations for 3x3 determinants; see @batty's answer to this question.
This amounts to what you do without the squares. It is still homogeneous and almost everywhere differentiable. Take the square of the determinant if you want something differentiable everywhere.
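A small numpy sketch of this recipe (the function name is mine; the result is 0 when the four points are coplanar and is unchanged by uniform scaling, assuming the points are distinct):

import numpy as np

def coplanarity(p1, p2, p3, p4):
    # normalize the edge vectors, then take |det| of the 3x3 matrix they form
    u, v, w = (np.asarray(p, dtype=float) - np.asarray(p1, dtype=float) for p in (p2, p3, p4))
    u, v, w = (e / np.linalg.norm(e) for e in (u, v, w))
    return abs(np.linalg.det(np.array([u, v, w])))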
EDIT: @phkahler implicitly suggested using the ratio of the radius of the inscribed sphere to the radius of the circumscribed sphere as a measure of planarity. This is a bounded differentiable function of the points, invariant under scaling. However, it is at least as difficult to compute as what you (and I) suggest; in particular, computing the radius of the circumscribed sphere is very sensitive to round-off errors.
A measure that should be symmetric with respect to point reorderings is:
((u x v).w)^2/(|u||v||w||u-v||u-w||v-w|)
which is proportional to the volume of the tetrahedron squared divided by all 6 edge lengths. It is not simpler than your formula or Alexandre C.'s, but it is not much more complicated. However, it does become unnecessarily singular when any two points coincide.
A better-behaved, order-insensitive formula is:
let a = u x v
b = v x w
c = w x u
(a.w)^2/(|a| + |b| + |c| + |a+b+c|)^3
which is something like the volume of the tetrahedron divided by the surface area, but raised to appropriate powers to make the whole thing scale-insensitive. This is also a bit more complex than your formula, but it works unless all 4 points are collinear.
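A hedged numpy sketch of that second, order-insensitive formula (again, the function name is mine):

import numpy as np

def coplanarity_symmetric(p1, p2, p3, p4):
    # a volume-like numerator over a surface-area-like denominator; both scale as length^6
    u, v, w = (np.asarray(p, dtype=float) - np.asarray(p1, dtype=float) for p in (p2, p3, p4))
    a, b, c = np.cross(u, v), np.cross(v, w), np.cross(w, u)
    n = np.linalg.norm
    return np.dot(a, w)**2 / (n(a) + n(b) + n(c) + n(a + b + c))**3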
How about
|(u x v) * w| / |u|^3
(and you can change |x| to (x)^2 if you think it's simpler).
I am trying to generate a 3D tube along a spline. I have the coordinates of the spline (x1,y1,z1 - x2,y2,z2 - etc.), which you can see in the illustration in yellow. At those points I need to generate circles, whose vertices are to be connected at a later stage. The circles need to be perpendicular to the 'corners' of two line segments of the spline to form a correct tube. Note that the number of segments is kept low for illustration purposes.
[apparently I'm not allowed to post images so please view the image at this link]
http://img191.imageshack.us/img191/6863/18720019.jpg
I have got as far as being able to calculate the vertices of each ring at each point of the spline, but they all lie in the same plane, i.e. at the same angle. I need them to be rotated according to their 'legs' (as A and B are to C, for instance).
I've been thinking this over and thought of the following:
two line segments can be seen as 2 vectors (in illustration A & B)
the corner (in illustration C) is where a ring of vertices needs to be calculated
I need to find the plane on which all of those vertices will reside
I can then use this plane (or rather its normal vector?) to calculate new vectors from the center point, which is C
and find their x,y,z using radius * sin and cos
However, I'm really confused on the math part of this. I read about the dot product but that returns a scalar which I don't know how to apply in this case.
Can someone point me into the right direction?
[edit]
To give a bit more info on the situation:
I need to construct a buffer of floats, which -in groups of 3- describe vertex positions and will be connected by OpenGL ES, given another buffer with indices to form polygons.
To give shape to the tube, I first created an array of floats, which -in groups of 3- describe control points in 3d space.
Then, along with a variable for segment density, I pass these control points to a function that uses them to create a Catmull-Rom spline and returns this in the form of another array of floats which -again in groups of 3- describe vertices of the Catmull-Rom spline.
On each of these vertices, I want to create a ring of vertices which also can differ in density (amount of smoothness / vertices per ring).
All former vertices (the control points and those that describe the Catmull-Rom spline) are discarded.
Only the vertices that form the tube rings will be passed to OpenGL, which in turn will connect those to form the final tube.
I have got as far as being able to create the Catmull-Rom spline and create rings at the positions of its vertices; however, the rings all lie in planes at the same angle, instead of following the spline's path.
[/edit]
Thanks!
Suppose you have a parametric curve such as:
xx[t_] := Sin[t];
yy[t_] := Cos[t];
zz[t_] := t;
Which gives:
The tangent vector to our curve is formed by the derivatives in each direction. In our case
Tg[t_]:= {Cos[t], -Sin[t], 1}
The plane orthogonal to that vector is found by solving the implicit equation:
Tg[t].{x - xx[t], y - yy[t], z - zz[t]} == 0
In our case this is:
-t + z + Cos[t] (x - Sin[t]) - (y - Cos[t]) Sin[t] == 0
Now we find a circle in that plane, centered at the curve. i.e:
c[{x_, y_, z_, t_}] := (x - xx[t])^2 + (y - yy[t])^2 + (z - zz[t])^2 == r^2
Solving both equations, you get the equation for the circles:
HTH!
Edit
And by drawing a lot of circles, you may get a (not efficient) tube:
Or with a good Graphics 3D library:
Edit
Since you insist :) here is a program to calculate the circle at junctions.
a = {1, 2, 3}; b = {3, 2, 1}; c = {2, 3, 4};
l1 = Line[{a, b}];
l2 = Line[{b, c}];
k = Cross[(b - a), (c - b)] + b; (*Cross Product*)
angle = -ArcCos[(a - b).(c - b)/(Norm[(a - b)] Norm[(c - b)])]/2;
q = RotationMatrix[angle, k - b].(a - b);
circle[t_] := (k - b)/Norm[k - b] Sin[t] + (q)/Norm[q] Cos[t] + b;
Show[{Graphics3D[{
Red, l1,
Blue, l2,
Black, Line[{b, k}],
Green, Line[{b, q + b}]}, Axes -> True],
ParametricPlot3D[circle[t], {t, 0, 2 Pi}]}]
Edit
Here you have the mesh constructed by this method. It is not pretty, IMHO:
I don't know what your language of choice is, but if you speak MatLab there are already a few implementations available. Even if you are using another language, some of the code might be clear enough to inspire a reimplementation.
The key point is that if you don't want your tube to twist when you connect the vertices, you cannot determine the basis locally, but need to propagate it along the curve. The Frenet frame, as proposed by jalexiou, is one option but simpler stuff works fine as well.
I did a simple MatLab implementation called tubeplot.m in my formative years (based on a simple non-Frenet propagation), and googling it, I can see that Anders Sandberg from kth.se has done a (re?)implementation with the same name, available at http://www.nada.kth.se/~asa/Ray/Tubeplot/tubeplot.html.
Edit:
The following is pseudocode for the simple implementation in tubeplot.m. I have found it to be quite robust.
The plan is to propagate two normals a and b along the curve, so
that at each point on the curve a, b and the tangent to the curve
will form an orthogonal basis which is "as close as possible" to the
basis used in the previous point.
Using this basis we can find points on the circumference of the tube.
// *** Input/output ***
// v[0]..v[N-1]: Points on your curve as vectors
// No neighbours should overlap
// nvert: Number of vertices around tube, integer.
// rtube: Radius of tube, float.
// xyz: (N, nvert)-array with vertices of the tube as vectors
// *** Initialization ***
// 1: Tangent vectors
for i=1 to N-2:
dv[i]=v[i+1]-v[i-1]
dv[0]=v[1]-v[0], dv[N-1]=v[N-1]-v[N-2]
// 2: An initial value for a (must not be parallel to dv[0]):
idx=<index of smallest component of abs(dv[0])>
a=[0,0,0], a[idx]=1.0
// *** Loop ***
for i = 0 to N-1:
b=normalize(cross(a,dv[i]));
a=normalize(cross(dv[i],b));
for j = 0 to nvert-1:
th=j*2*pi/nvert
xyz[i,j]=v[i] + cos(th)*rtube*a + sin(th)*rtube*b
Implementation details: You can probably speed things up by precalculating the cos and sin. Also, to get robust behaviour, you should fuse input points closer than, say, 0.1*rtube, or at least test that all the dv vectors are non-zero.
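For reference, a rough numpy translation of the pseudocode above (my own sketch; points is an (N, 3) array of curve points):

import numpy as np

def tube_vertices(points, nvert, rtube):
    # rings of vertices around the curve, propagating the normal a along the curve
    v = np.asarray(points, dtype=float)
    N = len(v)
    dv = np.empty_like(v)
    dv[1:-1] = v[2:] - v[:-2]             # central differences for the tangents
    dv[0], dv[-1] = v[1] - v[0], v[-1] - v[-2]
    a = np.zeros(3)
    a[np.argmin(np.abs(dv[0]))] = 1.0     # initial a, not parallel to dv[0]
    th = np.linspace(0.0, 2 * np.pi, nvert, endpoint=False)
    xyz = np.empty((N, nvert, 3))
    for i in range(N):
        b = np.cross(a, dv[i]); b /= np.linalg.norm(b)
        a = np.cross(dv[i], b); a /= np.linalg.norm(a)
        xyz[i] = v[i] + rtube * (np.cos(th)[:, None] * a + np.sin(th)[:, None] * b)
    return xyz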
HTH
You need to look at the Frenet formulas in differential geometry. See figure 2.1 for an example with a helix.
Surfaces & Curves
Taking the cross product of the line segment and the up vector will give you a vector at right angles to both (unless the line segment points exactly up or down), which I'll call horizontal. Taking the cross product of horizontal and the line segment will give you another vector that's at right angles to the line segment and to the first one (let's call it vertical). You can then get the circle coords as lineStart + cos(theta) * horizontal + sin(theta) * vertical for theta in 0 to 2*Pi, after scaling horizontal and vertical to the desired radius.
Edit: To get the points for the mid-point between two segments, use the sum of the two line segment vectors to find the average.
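A small numpy sketch of that construction (my own helper names; up is assumed not to be parallel to the segment direction):

import numpy as np

def ring(center, direction, radius, nvert, up=(0.0, 0.0, 1.0)):
    # a circle of nvert points around center, perpendicular to direction
    d = np.asarray(direction, dtype=float)
    horizontal = np.cross(d, up)
    horizontal /= np.linalg.norm(horizontal)   # fails if direction is parallel to up
    vertical = np.cross(horizontal, d)
    vertical /= np.linalg.norm(vertical)
    th = np.linspace(0.0, 2 * np.pi, nvert, endpoint=False)
    return (np.asarray(center, dtype=float)
            + radius * (np.cos(th)[:, None] * horizontal + np.sin(th)[:, None] * vertical))

# At a joint between two segments, use the sum of the two segment direction vectors
# as direction, per the edit above.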