Flexagon Simulation - math

What is the best way to simulate a flexagon?
My best guess at a starting point is to represent the faces and edges, and simulate transformations based on where edges meet. I'm thinking that in the process of implementing a transformation, it will be apparent when folding in a given direction is physically impossible.
I'm going to try to figure this out by experimentation, but it definitely feels like the kind of problem where a gap in my facility with mathematics is holding me back.
Edit: To clarify, I'm interested in what sort of data structures I could use to represent a flexagon and how I can manipulate those data structures to simulate the folding of a flexagon.

If you write all of the invariants of the flexagon as a system of equations, small deviations around legal states may be written as a linear system. For instance, the stiffness of a piece of paper between (x1,y1) and (x2,y2) enforces
(x1 - x2)**2 + (y1 - y2)**2 - L**2 == 0
This can be softened to
chi2 = (x1 - x2)**2 + (y1 - y2)**2 - L**2 + other constraints...
Derivatives of chi2 with respect to x1, x2, y1, y2 yield linear equations. A system of linear equations is a matrix, and an eigenvalue/eigenvector decomposition of that matrix gives you linear combinations of the x1, x2, y1, y2 parameters that are easy or hard to bend. The eigenvectors are a basis set of possible directions and each one's corresponding eigenvalue tells you how hard it is to bend in that direction. Larger eigenvalues correspond to more tightly constrained directions.
A problem with the above is that if there are any directions that are truly allowed, that is, the derivative of chi2 along some direction p in parameter space is 0 (the original constraint is absolutely satisfied), then the matrix is singular and can't be inverted to get the eigensystem. If you only want to know what those absolutely allowed directions are, you can compute the null space of the matrix instead of its eigensystem. However, I suspect (never having played with a flexagon) that the "allowed" directions involve a little bit of bending, in which case chi2 is small but nonzero. Then you'd be looking for small but nonzero eigenvalues. Other degrees of freedom are allowed and uninteresting, such as translation or rotation of the whole object. To turn it into a pure eigensystem problem (no null space at all), add constraints to the system with arbitrarily small constants lambda:
chi2 += lambda_x * (x1 + x2)**2/4.0 + lambda_y * (y1 + y2)**2/4.0
You'll recognize them in your solution because they'll vary as you vary each lambda. (The example above gives a penalty lambda_x to translating in x and lambda_y to translating in y.)
In terms of implementation, you can use any linear algebra software to compute solutions and check for variation with the lambdas. I used Python to prototype a problem like this (detector alignment in high energy physics, in which the constraints are measurements like "this detector is 3 cm from that detector" and the chi2 was derived from the uncertainties "3 cm +- 0.1 cm") and then ported the solution to C++ (BLAS) for production.

The Numpy library for Python had enough linear algebra (it's BLAS under the hood), though I also used the generic, non-linear minimizers in Scipy to debug the matrix solution. The hardest part is getting the indexes to line up right, which is necessary when casting it as a matrix and not when you give an objective function to a generic minimizer (because you use variable names instead).

This is more of a Matlab or Mathematica problem, so if you're more comfortable with one of them, use it instead. This problem will require a lot of trial and error, so use the most interactive system possible (one with a good REPL or worksheet/notebook-style interface).
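A minimal sketch of the matrix step (assuming Python/NumPy; the single bar constraint, the test configuration, and the lambda values below are all made up for illustration):

import numpy as np

# Parameters p = (x1, y1, x2, y2): one stiff bar between the two points, plus the
# small translation penalties, i.e.
#   chi2 = (x1 - x2)**2 + (y1 - y2)**2 - L**2
#          + lam_x * (x1 + x2)**2 / 4.0 + lam_y * (y1 + y2)**2 / 4.0
lam_x = lam_y = 1e-6

# Constant matrix of second derivatives of chi2 with respect to (x1, y1, x2, y2).
bar = 2.0 * np.array([[ 1,  0, -1,  0],
                      [ 0,  1,  0, -1],
                      [-1,  0,  1,  0],
                      [ 0, -1,  0,  1]], dtype=float)
pen = 0.5 * np.array([[lam_x, 0,     lam_x, 0    ],
                      [0,     lam_y, 0,     lam_y],
                      [lam_x, 0,     lam_x, 0    ],
                      [0,     lam_y, 0,     lam_y]], dtype=float)
H = bar + pen

eigvals, eigvecs = np.linalg.eigh(H)        # H is symmetric
for val, vec in zip(eigvals, eigvecs.T):
    print(f"{val:10.3e}  {np.round(vec, 3)}")
# The two large eigenvalues are relative motions of the endpoints, which the bar
# term resists; the two that scale with lam_x / lam_y are the whole-object
# translations penalized by hand, exactly as described above.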
It can also be helpful to draw a graph of the connections (graph-theory graph, not a plot), on which to label their constraints. For me, that was a necessary first step before writing out the equations.
It might also help to visualize the system by writing a set of functions that take parameter values (x1, etc.) and draw the figure with OpenGL (or another 3-D mesh renderer). This can show you if some constraint is being violated, because the mesh tiles would pass through each other. It can also help you identify the degrees of freedom represented by each eigenvector: vary the parameters by the linear combination represented by the eigenvector and you'll see if it's just translating/rotating or if it's doing some interesting twist or fold.
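Along the same lines, a very small 2-D matplotlib sketch (the bar and the displacement direction are made up rather than taken from an actual eigenvector) that overlays the original and displaced configurations:

import numpy as np
import matplotlib.pyplot as plt

p0 = np.array([0.0, 0.0, 1.0, 0.0])                          # (x1, y1, x2, y2): the original bar
direction = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2.0)    # e.g. "translate in x"
p1 = p0 + 0.3 * direction                                    # exaggerate the motion to make it visible

for p, style in [(p0, "o-"), (p1, "o--")]:
    x1, y1, x2, y2 = p
    plt.plot([x1, x2], [y1, y2], style)
plt.axis("equal")
plt.show()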

Related

Support Vector Machine Geometrical Intuition

Hi,
I'm having a hard time trying to understand why, in the equation of the hyperplane of a support vector machine, there is a 1 after >=: w.x + b >= 1 (why this 1?). I know that it could be something about the intersection point on the y axis, but I cannot relate that to the support vectors and to their meaning for classification.
Can anyone please explain why the equation has that 1 (or -1)?
Thank you.
The 1 is just an algebraic simplification, which comes in handy in the later optimization.
First, notice that all three hyperplanes can be denoted as
w'x+b= 0
w'x+b=+A
w'x+b=-A
If we fixed the norm of the normal w, ||w|| = 1, then the above would have one solution with some A depending on the data; let's call that solution v and c (the values of the optimal w and b respectively). But if we let w have any norm, then we can easily see that if we put
w'x+b= 0
w'x+b=+1
w'x+b=-1
then there is a unique w (and b) which satisfies these equations, given by w = v/A, b = c/A, because
(v/A)'x + (c/A) = 0  (when v'x + c = 0)   // for the middle hyperplane
(v/A)'x + (c/A) = +1 (when v'x + c = +A)  // for the positive hyperplane
(v/A)'x + (c/A) = -1 (when v'x + c = -A)  // for the negative hyperplane
In other words - we assume that these "supporting vectors" satisfy the equation w'x + b = +/-1 for future simplification, and we can do it, because for any solution satisfying v'x + c = +/-A there is a solution of our equation (with a different norm of w).
So once we have these simplifications, our optimization problem simplifies to the minimization of ||w|| (maximization of the size of the margin, which can now be expressed as 2/||w||). If we stayed with the "normal" equation with the (not fixed!) A value, then the maximization of the margin would involve one more "dimension" - we would have to search through w, b, A to find the triple which maximizes it (as the "restrictions" would be of the form y(w'x+b) > A). Now we just search through w and b (and in the dual formulation - just through alpha, but that is a whole new story).
This step is not required. You can build an SVM without it, but this makes things simpler - the Ockham's razor rule.
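As a quick numeric sanity check of the rescaling argument above (a sketch with made-up 1-D numbers, assuming NumPy):

import numpy as np

# Made-up 1-D example: v'x + c = 0 with v = 2, c = -6, and supporting points at
# x = 4 and x = 2, where v'x + c = +A and -A with A = 2.
v, c, A = np.array([2.0]), -6.0, 2.0
x_pos, x_neg = np.array([4.0]), np.array([2.0])

w, b = v / A, c / A                   # the rescaled solution
print(w @ x_pos + b)                  # +1.0
print(w @ x_neg + b)                  # -1.0
print(2.0 / np.linalg.norm(w))        # 2.0: the margin 2/||w||, i.e. the distance between the supporting points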
This boundary is called the "margin" and must be maximized; to do that you have to minimize ||w||.
The aim of SVM is to find a hyperplane able to maximize the distances between the two groups.
However there are infinite solutions (see figure: move the optimal hyperplane along the perpendicular vector) and we need to fix at least the boundaries: the +1 or -1 is a common convention to avoid these infinite solutions.
Formally you have to optimize r ||w|| and we set a boundary condition r ||w|| = 1.

Mathematical library to compare similarities in graphs of data for a high level language (e.g. Javascript)?

I'm looking for something that I guess is rather sophisticated and might not exist publicly, but hopefully it does.
I basically have a database with lots of items which all have values (y) that correspond to other values (x). E.g. one of these items might look like:
x | 1 | 2 | 3 | 4 | 5
y | 12 | 14 | 16 | 8 | 6
This is just a random example. Now, there are thousands of these items, all with their own set of x and y values. The range between one x and the next is not fixed and may differ for every item.
What I'm looking for is a library where I can plug in all these sets of Xs and Ys and tell it to return things like the most common item (sets of x and y that follow a comparable curve / progression), and the ability to check whether a certain set is at least x% comparable with another set.
By comparable I mean the slope of the curve if you were to draw a graph of the data. So, not actually the static values but rather the detection of events, such as a high increase followed by a slow decrease, etc.
Due to my low amount of experience in mathematics I'm not quite sure what the thing I'm looking for is called, and thus have trouble explaining what I need. Hopefully I gave enough pointers for someone to point me in the right direction.
I'm mostly interested in a library for javascript, but if there is no such thing any library would help, maybe I can try to port what I need.
About Markov Cluster(ing) again, of which I happen to be the author, and your application. You mention you are interested in trend similarity between objects. This is typically computed using Pearson correlation. If you use the mcl implementation from http://micans.org/mcl/, you'll also obtain the program 'mcxarray'. This can be used to compute Pearson correlations between e.g. rows in a table. It might be useful to you. It is able to handle missing data - in a simplistic approach, it just computes correlations on those indices for which values are available for both. If you have further questions I am happy to answer them -- with the caveat that I usually like to cc replies to the mcl mailing list so that they are archived and available for future reference.
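If you just want to see the trend-similarity part on its own before reaching for mcxarray, a tiny sketch of pairwise Pearson correlation between rows (assuming NumPy; the table is made up apart from the example item in the question):

import numpy as np

# Each row is one item's y values on a shared x grid.
data = np.array([
    [12, 14, 16,  8,  6],
    [11, 13, 15,  9,  5],
    [ 3,  2,  1,  4,  8],
], dtype=float)

corr = np.corrcoef(data)     # pairwise Pearson correlation between rows
print(np.round(corr, 2))     # rows 0 and 1 follow the same trend; row 2 does not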
What you're looking for is an implementation of Markov clustering. It is often used for finding groups of similar sequences. Porting it to Javascript, well... If you're really serious about this analysis, you should drop Javascript as soon as possible and move on to R. Javascript is not meant to do this kind of calculation, and it is far too slow for it. R is a statistical package with a lot already implemented. It is also designed specifically for very speedy matrix calculations, and most of the language is vectorized (meaning you don't need for-loops to apply a function over a vector of values; it happens automatically).
For the markov clustering, check http://www.micans.org/mcl/
An example of an implementation : http://www.orthomcl.org/cgi-bin/OrthoMclWeb.cgi
Now you also need to define a "distance" between your sets. As you are interested in the events and not the values, you could give every item an extra attribute being a vector with the differences y[i] - y[i-1] (in R : diff(y) ). The distance between two items can then be calculated as the sum of squared differences between y1[i] and y2[i].
This allows you to construct a distance matrix of your items, and on that one you can call the mcl algorithm. Unless you work on linux, you'll have to port that one.
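A minimal sketch of that difference-based distance (in Python rather than R or Javascript; the items are made up, and mcl itself is not called here):

import numpy as np

# Made-up items, all sampled on the same x grid so the difference vectors line up.
items = np.array([
    [12, 14, 16,  8,  6],
    [11, 13, 15,  9,  5],
    [ 3,  2,  1,  4,  8],
], dtype=float)

deltas = np.diff(items, axis=1)     # y[i] - y[i-1] per item, like R's diff(y)
# Distance between two items = sum of squared differences of their delta vectors.
dist = np.array([[np.sum((a - b)**2) for b in deltas] for a in deltas])
print(np.round(dist, 1))            # a distance matrix like this is what the clustering step consumes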
What you're wanting to do is ANOVA, or ANalysis Of VAriance. If you run the numbers through an ANOVA test, it'll give you information about the dataset that will help you compare one to another. I was unable to locate a Javascript library that would perform ANOVA, but there are plenty of programs that are capable of it. Excel can perform ANOVA from a plugin. R is a stats package that is free and can also perform ANOVA.
Hope this helps.
Something simple is (assuming all the graphs have 5 points, and x = 1,2,3,4,5 always)
Take u1 = the first point of y, i.e. y1
Take u2 = y2 - y1
...
Take u5 = y5 - y4
Now consider the vector u as a point in 5-dimensional space. You can use simple clustering algorithms, like k-means.
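A hedged sketch of that idea (Python, with scikit-learn's KMeans standing in for the "simple clustering algorithms"; the data and the cluster count are made up):

import numpy as np
from sklearn.cluster import KMeans

# Each row of u is (y1, y2 - y1, ..., y5 - y4) for one made-up item.
ys = np.array([
    [12, 14, 16,  8,  6],
    [11, 13, 15,  9,  5],
    [ 3,  2,  1,  4,  8],
    [ 4,  3,  2,  5,  9],
], dtype=float)
u = np.column_stack([ys[:, 0], np.diff(ys, axis=1)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(u)
print(labels)    # items with similar shapes should land in the same cluster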
EDIT: You should not aim for something too complicated as long as you go with javascript. If you want to go with Java, I can suggest something based on PCA (requiring the use of singular value decomposition, which is too complicated to be implemented efficiently in JS).
Basically, it goes like this: Take as previously a (possibly large) linear representation of data, perhaps differences of components of x, of y, absolute values. For instance you could take
u = (x1, x2 - x1, ..., x5 - x4, y1, y2 - y1, ..., y5 - y4)
You compute the vector u for each sample. Call ui the vector u for the ith sample. Now, form the matrix
M_{ij} = dot product of ui and uj
and compute its SVD. Now, the N most significant singular values (i.e. those above some "similarity threshold") give you N clusters.
The corresponding columns of the matrix U in the SVD give you an orthonormal family B_k, k = 1..N. The squared ith component of B_k gives you the probability that the ith sample belongs to cluster K.
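A rough sketch of that SVD step (Python/NumPy rather than Java; the u vectors and the similarity threshold are made up):

import numpy as np

# Made-up feature vectors u, one row per sample (e.g. differences of x and y components).
u = np.array([
    [1, 1,  1, 2, -8, -2],
    [1, 1,  1, 2, -7, -2],
    [1, 1, -1, 1,  3,  4],
], dtype=float)

M = u @ u.T                           # M_ij = dot product of u_i and u_j
U, s, Vt = np.linalg.svd(M)
threshold = 0.05 * s[0]               # arbitrary "similarity threshold"
N = int(np.sum(s > threshold))        # number of clusters kept
for k in range(N):
    membership = U[:, k]**2           # squared components ~ cluster membership weights
    print(k, np.round(membership, 2))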
If it is OK to use Java you really should have a look at Weka. It is possible to access all features via Java code. Maybe you'll find a Markov clustering, but if not, they have a lot of other clustering algorithms and it's really easy to use.

Fastest numerical solution of a real cubic polynomial?

R question: Looking for the fastest way to NUMERICALLY solve a bunch of arbitrary cubics known to have real coeffs and three real roots. The polyroot function in R is reported to use Jenkins-Traub's algorithm 419 for complex polynomials, but for real polynomials the authors refer to their earlier work. What are the faster options for a real cubic, or more generally for a real polynomial?
The numerical solution for doing this many times in a reliable, stable manner involves: (1) Form the companion matrix, (2) find the eigenvalues of the companion matrix.
You may think this is a harder problem to solve than the original one, but this is how the solution is implemented in most production code (say, Matlab).
For the polynomial:
p(t) = c0 + c1 * t + c2 * t^2 + t^3
the companion matrix is:
[[0 0 -c0],[1 0 -c1],[0 1 -c2]]
Find the eigenvalues of such matrix; they correspond to the roots of the original polynomial.
For doing this very fast, download the eigenvalue subroutines from LAPACK, compile them, and link them to your code. Do this in parallel if you have too many (say, about a million) sets of coefficients.
Notice that the coefficient of t^3 is one, if this is not the case in your polynomials, you will have to divide the whole thing by the coefficient and then proceed.
Good luck.
Edit: NumPy and Octave also depend on this methodology for computing the roots of polynomials. See, for instance, this link.
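For reference, a minimal NumPy sketch of the same recipe (the coefficients are made up; numpy.roots already uses the companion-matrix eigenvalue approach internally):

import numpy as np

# p(t) = c0 + c1*t + c2*t^2 + t^3, chosen here so the roots are 1, 2 and 3.
c0, c1, c2 = -6.0, 11.0, -6.0

companion = np.array([
    [0.0, 0.0, -c0],
    [1.0, 0.0, -c1],
    [0.0, 1.0, -c2],
])
print(np.linalg.eigvals(companion))   # the roots 1, 2, 3 in some order
print(np.roots([1.0, c2, c1, c0]))    # the same thing via numpy's own solver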
The fastest known way (that I'm aware of) to find the real solutions of a system of arbitrary polynomials in n variables is polyhedral homotopy. A detailed explanation is probably beyond a StackOverflow answer, but essentially it's a path algorithm that exploits the structure of each equation using toric geometries. Google will give you a number of papers.
Perhaps this question is better suited for mathoverflow?
Fleshing out Arietta's answer above:
> a <- c(1,3,-4)
> m <- matrix(c(0,0,-a[1],1,0,-a[2],0,1,-a[3]), byrow=T, nrow=3)
> roots <- eigen(m, symm=F, only.values=T)$values
Whether this is faster or slower than using the cubic solver in the GSL package (as suggested by knguyen above) is a matter of benchmarking it on your system.
Do you need all 3 roots or just one? If just one, I would think Newton's Method would work ok. If all 3 then it might be problematic in circumstances where two are close together.
1) Solve P' = 0 (the derivative polynomial); its two roots separate the three roots of the cubic. See there to know how to do it properly. Call those roots a and b (with a < b)
2) For the middle root, use a few steps of bisection between a and b, and when you're close enough, finish with Newton's method.
3) For the min and max root, "hunt" the solution. For the max root:
Start with x0 = b, x1 = b + (b - a) * lambda, where lambda is a moderate number (say 1.6)
do x_n = b + (x_{n - 1} - a) * lambda until P(x_n) and P(b) have different signs
Perform bisection + Newton between x_{n - 1} and x_n
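A hedged Python sketch of those three steps (the cubic and the iteration counts are made up; the bisection/Newton polishing is kept deliberately simple):

import numpy as np

# Made-up monic cubic with three real roots: P(t) = t^3 - 6t^2 + 11t - 6 (roots 1, 2, 3).
P  = lambda t: t**3 - 6*t**2 + 11*t - 6
dP = lambda t: 3*t**2 - 12*t + 11

# Step 1: the roots a < b of P' separate the three roots of P.
a, b = sorted(np.roots([3, -12, 11]).real)

def bisect_then_newton(lo, hi, newton_steps=3):
    # Assumes P changes sign on [lo, hi]: bisection to get close, then Newton to polish.
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if P(lo) * P(mid) <= 0:
            hi = mid
        else:
            lo = mid
    x = 0.5 * (lo + hi)
    for _ in range(newton_steps):
        x -= P(x) / dP(x)
    return x

# Step 2: the middle root lies between a and b.
middle = bisect_then_newton(a, b)

# Step 3: "hunt" outward for the largest root until the sign flips, then refine.
lam = 1.6
x_prev, x_n = b, b + (b - a) * lam
while P(x_n) * P(b) > 0:
    x_prev, x_n = x_n, b + (x_n - a) * lam
largest = bisect_then_newton(x_prev, x_n)
print(middle, largest)    # roughly 2.0 and 3.0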
The common methods are available: Newton's Method, Bisection Method, Secant, Fixed point iteration, etc. Google any one of them.
If you have a non-linear system on the other hand (e.g. a system of N polynomial equations in N unknowns), a method such as high-order Newton may be used.
Have you tried looking into the GSL package http://cran.r-project.org/web/packages/gsl/index.html?

How to plot implicit equations

What is the usual method or algorithm used to plot implicit equations of 2 variables?
I am talking about equations such as,
sin(x*y)*y = 20
x*x - y*y = 1
Etc.
Does anyone know how Maple or Matlab do this? My target language is C#.
Many thanks!
One way to do this is to sample the function on a regular, 2D grid. Then you can run an algorithm like marching squares on the resulting 2D grid to draw iso-contours.
In a related question, someone also linked to the gnuplot source code. It's fairly complex, but might be worth going through. You can find it here: http://www.gnuplot.info/
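A minimal sketch of the grid-sampling idea (Python/matplotlib rather than C#; plt.contour does the iso-contour extraction, so this only illustrates the approach rather than a hand-rolled marching squares):

import numpy as np
import matplotlib.pyplot as plt

# Sample f(x, y) = x*x - y*y - 1 on a regular grid and draw the zero iso-contour.
x = np.linspace(-3, 3, 400)
y = np.linspace(-3, 3, 400)
X, Y = np.meshgrid(x, y)
F = X*X - Y*Y - 1

plt.contour(X, Y, F, levels=[0])   # the implicit curve x*x - y*y = 1
plt.show()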
Iterate the value of x across the range you want to plot. For each fixed value of x, solve the equation numerically using a method such as interval bisection or the Newton-Raphson method (for which you can calculate the derivative using implicit differentiation, or perhaps differentiate numerically). This will give you the corresponding value of y for a given x. In most cases, you won't need too many iterations to get a very precise result, and it's very efficient anyway.
Note that you will need to transform the equation into the form f(x) = 0, though this is always trivial. The nice thing about this method is that it works just as well the other way round (i.e. taking a fixed range of y and computing x per value).
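A rough sketch of that column-by-column approach (Python, using interval bisection; the equation, search interval, and grid are made up):

import numpy as np

# Plot the branch y >= 0 of the implicit curve f(x, y) = x*x - y*y - 1 = 0.
f = lambda x, y: x*x - y*y - 1

def solve_for_y(x, lo=0.0, hi=10.0, iters=60):
    # Bisection on y for a fixed x; returns None if f does not change sign on [lo, hi].
    if f(x, lo) * f(x, hi) > 0:
        return None
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(x, lo) * f(x, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

points = [(x, solve_for_y(x)) for x in np.linspace(0.0, 3.0, 61)]
points = [(x, y) for x, y in points if y is not None]   # drop the x values with no solution
print(points[:3])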
There are multiple methods. The easiest algorithm I could find is described here:
https://homepages.warwick.ac.uk/staff/David.Tall/pdfs/dot1986b-implicit-fns.pdf and it describes what Noldorin has described to you.
The most complex one, which seems to be the one that can actually handle a lot of special cases, is described here:
https://academic.oup.com/comjnl/article/33/5/402/480353
I think in Matlab you give an array as input for x. Then for every x it calculates y, and draws a line from (x0, y0) to (x1, y1), then from (x1, y1) to (x2, y2), and so on.

Approximating nonparametric cubic Bezier

What is the best way to approximate a cubic Bezier curve? Ideally I would want a function y(x) which would give the exact y value for any given x, but this would involve solving a cubic equation for every x value, which is too slow for my needs, and there may be numerical stability issues as well with this approach.
Would this be a good solution?
Just solve the cubic.
If you're talking about Bezier plane curves, where x(t) and y(t) are cubic polynomials, then y(x) might be undefined or have multiple values. An extreme degenerate case would be the line x= 1.0, which can be expressed as a cubic Bezier (control point 2 is the same as end point 1; control point 3 is the same as end point 4). In that case, y(x) has no solutions for x != 1.0, and infinite solutions for x == 1.0.
A method of recursive subdivision will work, but I would expect it to be much slower than just solving the cubic. (Unless you're working with some sort of embedded processor with unusually poor floating-point capacity.)
You should have no trouble finding code that solves a cubic and has already been thoroughly tested and debugged. If you implement your own solution using recursive subdivision, you won't have that advantage.
Finally, yes, there may be numerical stability problems, like when the point you want is near a tangent, but a subdivision method won't make those go away. It will just make them less obvious.
EDIT: responding to your comment, but I need more than 300 characters.
I'm only dealing with bezier curves where y(x) has only one (real) root. Regarding numerical stability, using the formula from http://en.wikipedia.org/wiki/Cubic_equation#Summary, it would appear that there might be problems if u is very small. – jtxx000
The wackypedia article is math with no code. I suspect you can find some cookbook code that's more ready-to-use somewhere. Maybe Numerical Recipes or the ACM Collected Algorithms.
To your specific question, and using the same notation as the article, u is only zero or near zero when p is also zero or near zero. They're related by the equation:
u^6 + q u^3 == p^3 / 27
Near zero, you can use the approximation:
q u^3 == p^3 / 27
or p / (3u) == cube root of q
So the computation of x from u should contain something like:
(fabs(u) >= somesmallvalue) ? (p / u / 3.0) : cuberoot (q)
How "near" zero is near? Depends on how much accuracy you need. You could spend some quality time with Maple or Matlab looking at how much error is introduced for what magnitudes of u. Of course, only you know how much accuracy you need.
The article gives 3 formulas for u for the 3 roots of the cubic. Given the three u values, you can get the 3 corresponding x values. The 3 values for u and x are all complex numbers with an imaginary component. If you're sure that there has to be only one real solution, then you expect one of the roots to have a zero imaginary component, and the other two to be complex conjugates. It looks like you have to compute all three and then pick the real one. (Note that a complex u can correspond to a real x!) However, there's another numerical stability problem there: floating-point arithmetic being what it is, the imaginary component of the real solution will not be exactly zero, and the imaginary components of the non-real roots can be arbitrarily close to zero. So numeric round-off can result in you picking the wrong root. It would be helpful if there's some sanity check from your application that you could apply there.
If you do pick the right root, one or more iterations of Newton-Raphson can improve its accuracy a lot.
Yes, the de Casteljau algorithm would work for you. However, I don't know if it will be faster than solving the cubic equation by Cardano's method.
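For completeness, a small sketch of de Casteljau evaluation of one cubic segment (Python; the control points are made up). To get y(x) from it you would still have to search for the parameter t whose x(t) matches your x, e.g. by bisection on t:

def de_casteljau(p0, p1, p2, p3, t):
    # Evaluate a cubic Bezier at parameter t by repeated linear interpolation.
    lerp = lambda a, b: tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))
    a, b, c = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    d, e = lerp(a, b), lerp(b, c)
    return lerp(d, e)

# Made-up control points; t runs from 0 to 1 along the curve.
pts = [(0.0, 0.0), (0.3, 1.0), (0.7, 1.0), (1.0, 0.0)]
print(de_casteljau(*pts, 0.5))    # (0.5, 0.75), the point at the middle of the parameter range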
