Expand large polynomial with Sage (List colouring, combinatorial Nullstellensatz) - graph

Disclaimer: I have no experience with Sage, programming, or computer calculations in general.
I want to expand a polynomial in Sage. The input is a factored polynomial and I need a certain coefficient. However, since the polynomial has 30 factors, my computer won't finish the computation.
Should I look for somebody with a better computer, or are 30 factors simply too many?
Here is my sage code:
R.<x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8,x_9,x_10,x_11,x_12> = QQbar[]
f = (x_1-x_2)*(x_1-x_3)*(x_1-x_9)*(x_1-x_10)*(x_2-x_3)*(x_2-x_10)*(x_2-x_11)*(x_2-x_12)*(x_3-x_4)*(x_4-x_11)*(x_4-x_5)*(x_4-x_6)*(x_4-x_11)*(x_5-x_6)*(x_5-x_10)*(x_5-x_11)*(x_5-x_12)*(x_6-x_7)*(x_6-x_12)*(x_7-x_9)*(x_7-x_8)*(x_7-x_12)*(x_8-x_9)*(x_8-x_10)*(x_8-x_11)*(x_8-x_12)*(x_9-x_10)*(x_10-x_11)*(x_10-x_12)*(x_11-x_12);
c = f.coefficient({x_1:2,x_2:2,x_3:2,x_4:2,x_5:2,x_6:2,x_7:2,x_8:2,x_9:2,x_10:5,x_11:5,x_12:5}); c
Just some background. I'm trying to solve an instance of list edge colouring with the combinatorial Nullstellensatz.
https://en.wikipedia.org/wiki/List_edge-coloring
Given a graph G=(V,E) we associate a variable x_i with each vertex i in V. The graph monomial eps(G) is defined as the product \prod_{ij \in E} (x_i-x_j). (Note that we fixed an orientation of the edges, but that's not important here.)
Suppose that there are lists of colours assigned to the vertices, such that vertex i has a list of size a(i). Then, by the combinatorial Nullstellensatz, there is a colouring from those lists (i.e. each vertex receives a colour from its list and two adjacent vertices do not receive the same colour) if the coefficient of \prod_{i \in V} x_i^{a(i)-1} in eps(G) is non-zero.
I want to apply this to the line graph of the graph G(M) with incidence matrix:
M = Matrix([[0,0,0,3,3,0,3],[0,0,0,0,3,3,3],[0,0,0,3,0,3,3],[0,0,0,3,3,0,3],[3,0,3,0,0,0,6],[3,3,0,0,0,0,6],[0,3,3,0,0,0,6],[3,3,3,6,6,6,0]])
(Here the sizes of the lists are indicated by the integers.)
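For reference, here is a small Sage sketch of how a graph monomial and its Nullstellensatz coefficient can be built from an edge list. The edge list and list sizes below are illustrative placeholders (a 4-cycle with lists of size 2), not the actual line graph of G(M):

# Illustrative sketch: eps(G) for a 4-cycle, lists of size 2 at every vertex
n = 4
R = PolynomialRing(QQ, n, 'x')
x = R.gens()
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
eps = prod(x[i] - x[j] for (i, j) in edges)        # the graph monomial
c = eps.coefficient({x[i]: 1 for i in range(n)})   # coefficient of x0*x1*x2*x3
print(c)   # non-zero, so a colouring from lists of size 2 exists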

I believe that it takes so long because your coefficients are in QQbar, and arithmetic in QQbar is much slower than over QQ, for example. Is there a good reason for not using QQ?
If I change the coefficient ring to QQ, Sage fairly quickly tells me that c is 0:
sage: R.<x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8,x_9,x_10,x_11,x_12> = QQ[]
sage: f = (x_1-x_2)*(x_1-x_3)*(x_1-x_9)*(x_1-x_10)*(x_2-x_3)*(x_2-x_10)*(x_2-x_11)*(x_2-x_12)*(x_3-x_4)*(x_4-x_11)*(x_4-x_5)*(x_4-x_6)*(x_4-x_11)*(x_5-x_6)*(x_5-x_10)*(x_5-x_11)*(x_5-x_12)*(x_6-x_7)*(x_6-x_12)*(x_7-x_9)*(x_7-x_8)*(x_7-x_12)*(x_8-x_9)*(x_8-x_10)*(x_8-x_11)*(x_8-x_12)*(x_9-x_10)*(x_10-x_11)*(x_10-x_12)*(x_11-x_12)
sage: c = f.coefficient({x_1:2,x_2:2,x_3:2,x_4:2,x_5:2,x_6:2,x_7:2,x_8:2,x_9:2,x_10:5,x_11:5,x_12:5})
sage: c
0

How to identify the roots of an equation by plotting its real and imaginary parts

This is more of a general maths question (it might even be silly). But in high school we learn to identify the roots of an equation via its plot, right?
For example, for the equation
y = x^2 - 1
The plot would show us the roots: they are where the curve crosses the x-axis, so at ±1.
Now, if we said that the equation had a real and an imaginary part, so that it is
y = x^2 - 1 + (x^2 - 0.5)i
as given in the Mathematica screenshot, then we have a real part which crosses zero, and an imaginary part which also crosses zero but at a different x. So my question is: is it possible to identify the roots of such an equation by simply looking at the real and imaginary parts of the plot?
Note: part of my confusion is that if I use FindRoot in Mathematica, I get either 0.877659 - 0.142424i or -0.877659 + 0.142424i. So there might be some fundamental property in maths I don't know about which prevents one from identifying the roots of a complex function by separating its real and imaginary parts...
we have a real part which crosses zero, and an imaginary part which also crosses zero but at a different x.
Those are graphs of the real and imaginary parts plotted for real values of x. If they both crossed the horizontal axis at the same point(s), that would mean the equation has real root(s), since both real and imaginary parts would be zero for some real value of x. However, this equation has no real roots, so the crossing points are different.
So my question is: is it possible to identify the roots of such an equation by simply looking at the real and imaginary parts of the plot?
f(x) = x^2 - 1 + i (x^2 - 0.5) is a complex function of a complex variable, which maps a complex variable x = a + i b to the complex value f(x) = Re(f(x)) + i Im(f(x)).
Each of Re(f(x)) and Im(f(x)) is a real function of a complex variable. Such functions can be plotted in 3D by representing x = a + i b as a point in the (a, b) plane, and the value of the function along the third dimension, say c. For example, f(x) has the following graphs for the real and imaginary parts.
The cross-sections of the two surfaces by the horizontal plane c = 0 are pairs of curves where each function is zero, respectively. It follows that the intersections of those curves are the points where Re(f(x)) = Im(f(x)) = 0, which means they are the roots of the equation f(x) = 0.
Since f(x) = 0 is a quadratic equation, it must have two roots, and those two points are in fact ±(0.877659 - 0.142424 i), as can be verified by direct calculation.
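The roots can also be checked numerically. A quick Python sketch (collecting terms first: (1 + i)·x² = 1 + 0.5i, so x = ±sqrt((1 + 0.5i)/(1 + i))):

import numpy as np
# principal square root of (1 + 0.5j)/(1 + 1j), and its negative
roots = np.sqrt((1 + 0.5j) / (1 + 1j)) * np.array([1, -1])
print(roots)                                          # ±(0.877659 - 0.142424j)
print([r**2 - 1 + 1j * (r**2 - 0.5) for r in roots])  # both ≈ 0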

Linear regression / line finding for non-function lines

I want to find a line given a number of points that lie approximately along it. The line is in 2D space and is defined by two points, or by one point and an angle. What would be the algorithm for this?
There's a lot about this on SO, elsewhere on the internet, and in Numerical Recipes, but all the examples seem to focus on the function form of the line (y = ax + b), which does not work well for (almost) vertical lines.
I could detect whether the line is more horizontal or more vertical and swap coordinates in the latter case, but maybe there is a more elegant solution?
I'm using C# at the moment, but I can probably translate from any language.
I'm sorry I can't provide a reference, but here's how:
Suppose your N (2d) data points are p[] and you want to find a vector a and a scalar d to minimise
E = Sum{ i | sqr( a'*p[i] - d) }/N
(The line is { q | a'*q = d }, with a constrained to unit length; E is then the sum of the squares of the distances of the data points from the line.)
Some tedious algebra shows that
E = a'*C*a + sqr(d - a'*M)
where M is the mean and C the covariance of the data, ie
M = Sum{ i | p[i] } / N
C = Sum{ i | (p[i]-M)*(p[i]-M)' } / N
E will be minimised by choosing d = a'*M, and a to be an eigenvector of C corresponding to the smaller eigenvalue.
So the algorithm is:
Compute M and C
Find the smaller eigenvalue of C and the corresponding eigenvector a
Compute d = a'*M
(Note that the same thing works in higher dimensions too. For example in 3d we would find the 'best' plane).
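A minimal numpy sketch of the above (the function and variable names are my own):

import numpy as np

def fit_line(points):
    # points: (N, 2) array; returns unit normal a and offset d of {q | a'q = d}
    M = points.mean(axis=0)                      # mean of the data
    C = np.cov(points, rowvar=False, bias=True)  # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)         # eigenvalues in ascending order
    a = eigvecs[:, 0]                            # eigenvector of the smaller eigenvalue
    return a, a @ M

# example: points scattered around the vertical line x = 3
pts = np.array([[3.0, y] for y in range(10)]) + np.random.normal(0, 0.05, (10, 2))
a, d = fit_line(pts)
print(a, d)   # a ≈ (±1, 0), d ≈ ±3; vertical lines need no special casing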

partially reconstruct information of function convoluted with boxcar kernel

the function (f) I want to reconstruct partially could look like this:
The following properties are known:
It consists only of alternating plateaus (high/low).
So the first derivative is zero everywhere except at the edges, where it is undefined.
The function was convoluted with a kernel fulfilling the following conditions:
It is a boxcar function
Its center is at x=0
Its integral is 1.
I want to reconstruct only the positions of the edges of the original function (f) from the convolution result (c). So just these positions are of interest to me:
If the convolution kernel width (k) is less than the minimum plateau width (b, 40 in the example above) of f, c looks as follows:
(The width of the box car convolution kernel here is k=31.)
In that case it is easy to reconstruct the edge positions:
I look for (possibly broad) extrema, and between two neighbouring extrema [e1_x, e1_y] and [e2_x, e2_y] (one of them is a minimum and one a maximum, of course), I search for the x0 fulfilling: c(x0) = (e1_y + e2_y) / 2.
The reconstructed edge positions look like this:
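A rough Python sketch of this midpoint-crossing approach for the k < b case (scipy is assumed available; broad plateau extrema come out as runs of indices and may need merging in practice):

import numpy as np
from scipy.signal import argrelextrema

def edge_positions(c, order=5):
    # collect local maxima and minima of the convolved signal c
    hi = argrelextrema(c, np.greater_equal, order=order)[0]
    lo = argrelextrema(c, np.less_equal, order=order)[0]
    ext = np.unique(np.concatenate([hi, lo]))
    edges = []
    for e1, e2 in zip(ext[:-1], ext[1:]):
        half = 0.5 * (c[e1] + c[e2])   # half-way level between neighbouring extrema
        seg = c[e1:e2 + 1]
        edges.append(e1 + int(np.argmin(np.abs(seg - half))))
    return edges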
But if k > b my approach fails:
(k=57)
Is there a possibility to calculate the original edge positions in f, also for the k > b cases, if the kernel g (and so k) and c are known?

Finding the coordinates of points from distance matrix

I have a set of points (with unknown coordinates) and the distance matrix. I need to find the coordinates of these points in order to plot them and show the solution of my algorithm.
I can set one of these points at the coordinate (0,0) to simplify, and find the others. Can anyone tell me if it's possible to find the coordinates of the other points, and if yes, how?
Thanks in advance!
EDIT
Forgot to say that I need the coordinates on x-y only
The answers based on angles are cumbersome to implement and can't be easily generalized to data in higher dimensions. A better approach is the one mentioned in my and WimC's answers here: given the distance matrix D(i, j), define
M(i, j) = 0.5*(D(1, j)^2 + D(i, 1)^2 - D(i, j)^2)
which should be a positive semi-definite matrix with rank equal to the minimal Euclidean dimension k in which the points can be embedded. The coordinates of the points can then be obtained from the k eigenvectors v(i) of M corresponding to non-zero eigenvalues q(i): place the vectors sqrt(q(i))*v(i) as columns in an n x k matrix X; then each row of X is a point. In other words, sqrt(q(i))*v(i) gives the ith component of all of the points.
The eigenvalues and eigenvectors of a matrix can be obtained easily in most programming languages (e.g., using GSL in C/C++, using the built-in function eig in Matlab, using Numpy in Python, etc.)
Note that this particular method always places the first point at the origin, but any rotation, reflection, or translation of the points will also satisfy the original distance matrix.
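For illustration, a small numpy sketch of this construction (the function name and the unit-square example are mine):

import numpy as np

def coords_from_distances(D, k=2):
    D2 = np.asarray(D, dtype=float) ** 2
    # M(i, j) = 0.5*(D(1, j)^2 + D(i, 1)^2 - D(i, j)^2), 1-based index 1 -> row 0
    M = 0.5 * (D2[0, :][None, :] + D2[:, 0][:, None] - D2)
    q, v = np.linalg.eigh(M)                   # eigenvalues in ascending order
    q, v = q[::-1][:k], v[:, ::-1][:, :k]      # keep the k largest
    return v * np.sqrt(np.maximum(q, 0))       # rows are the recovered points

# example: pairwise distances of a unit square, recovered up to rotation/reflection
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(coords_from_distances(D))   # the first point lands at the origin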
Step 1, arbitrarily assign one point P1 as (0,0).
Step 2, arbitrarily assign one point P2 along the positive x axis: (Dp1p2, 0).
Step 3, find a point P3 such that
Dp1p2 ≠ Dp1p3 + Dp2p3
Dp1p3 ≠ Dp1p2 + Dp2p3
Dp2p3 ≠ Dp1p3 + Dp1p2
(i.e. P3 is not collinear with P1 and P2), and set that point in the "positive" y domain. If it meets any of these criteria with equality, the point must instead be placed on the P1P2 axis.
Use the cosine law to determine the angle at P1, and from it the coordinates:
cos (A) = (Dp1p2^2 + Dp1p3^2 - Dp2p3^2)/(2*Dp1p2* Dp1p3)
P3 = (Dp1p3 * cos (A), Dp1p3 * sin(A))
You have now fixed a coordinate frame and placed three points in it.
Step 4: To determine any other point Pn, repeat step 3 to obtain a tentative coordinate (Xn, Yn).
Compare the distance {(Xn, Yn), (X3, Y3)} to Dp3pn in your matrix. If it is identical, you have successfully identified the coordinate for point n. Otherwise, the point n is at (Xn, -Yn).
Note there is an alternative to step 4, but it is too much math for a Saturday afternoon
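A hedged Python sketch of steps 1-4 (it assumes a consistent planar distance matrix with D[0][1] > 0 and does not handle degenerate configurations):

import numpy as np

def place_points(D):
    D = np.asarray(D, dtype=float)
    n = len(D)
    P = np.zeros((n, 2))
    P[1] = (D[0, 1], 0.0)                        # step 2: P2 on the positive x axis
    for i in range(2, n):
        # step 3: angle at P1 from the cosine law
        cosA = (D[0, 1]**2 + D[0, i]**2 - D[1, i]**2) / (2 * D[0, 1] * D[0, i])
        A = np.arccos(np.clip(cosA, -1.0, 1.0))
        x, y = D[0, i] * np.cos(A), D[0, i] * np.sin(A)
        # step 4: resolve the sign of y against the distance to P3
        if i > 2 and abs(np.hypot(x - P[2, 0], y - P[2, 1]) - D[2, i]) > 1e-9:
            y = -y
        P[i] = (x, y)
    return P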
If for points p, q, and r you have pq, qr, and rp in your matrix, you have a triangle.
Wherever you have a triangle in your matrix you can compute one of two solutions for that triangle (independent of a Euclidean transform of the triangle on the plane). That is, for each triangle you compute, its mirror image is also a triangle that satisfies the distance constraints on p, q, and r. The fact that there are two solutions even for a triangle leads to the chirality problem: you have to choose the chirality (orientation) of each triangle, and not all choices may lead to a feasible solution to the problem.
Nevertheless, I have some suggestions. If the number of entries is small, consider using simulated annealing. You could incorporate chirality into the annealing step. This will be slow for large systems, and it may not converge to a perfect solution, but for some problems it's the best you can do.
The second suggestion, the method of least squares, will not give you a perfect solution either, but it will distribute the error. In your case the objective function is the discrepancy between the distances in your matrix and the actual distances between your candidate points.
This is a maths problem: deriving the coordinate matrix X given only its distance matrix.
However, there is an efficient solution to this, Multidimensional Scaling (MDS), which does some linear algebra. Simply put, it requires a pairwise Euclidean distance matrix D, and the output is an estimated coordinate matrix Y (perhaps rotated) which is an approximation of X. In practice, just use sklearn.manifold.MDS in Python.
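A minimal usage sketch with a toy distance matrix (three collinear points):

import numpy as np
from sklearn.manifold import MDS

D = np.array([[0, 1, 2],
              [1, 0, 1],
              [2, 1, 0]], dtype=float)
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
Y = mds.fit_transform(D)   # coordinates recovered up to rotation/reflection/translation
print(Y)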
The "eigenvector" method given by the favourite replies above is very general and automatically outputs a set of coordinates as the OP requested, however I noticed that that algorithm does not even ask for a desired orientation (rotation angle) for the frame of the output points, the algorithm chooses that orientation all by itself!
People who use it might want to know at what angle the frame will be tipped before hand so I found an equation which gives the answer for the case of up to three input points, however I have not had time to generalize it to n-points and hope someone will do that and add it to this discussion. Here are the three angles the output sides will form with the x-axis as a function of the input side lengths:
angle side a = arcsin(sqrt(((c+b+a)*(c+b-a)*(c-b+a)*(-c+b+a)*(c^2-b^2)^2)/(a^4*((c^2+b^2-a^2)^2+(c^2-b^2)^2))))*180/Pi/2
angle side b = arcsin(sqrt(((c+b+a)*(c+b-a)*(c-b+a)*(-c+b+a)*(c^2+b^2-a^2)^2)/(4*b^4*((c^2+b^2-a^2)^2+(c^2-b^2)^2))))*180/Pi/2
angle side c = arcsin(sqrt(((c+b+a)*(c+b-a)*(c-b+a)*(-c+b+a)*(c^2+b^2-a^2)^2)/(4*c^4*((c^2+b^2-a^2)^2+(c^2-b^2)^2))))*180/Pi/2
Those equations also lead directly to a solution to the OP's problem of finding the coordinates of each point: the side lengths are already given as input, and my equations give the slope of each side versus the x-axis of the solution, thus revealing the vector for each side of the polygon answer. Summing those sides through vector addition up to a desired vertex then produces the coordinate of that vertex. So if anyone can extend my angle equations to handle more than three input lengths (though I note that might be impossible), it might be a very fast route to the general solution of the OP's question, since the slow parts of the algorithms given above, like least-squares fitting or matrix equation solving, might be avoidable.

Some help rendering the Mandelbrot set

I have been given some work to do with the fractal visualisation of the Mandelbrot set.
I'm not looking for a complete solution (naturally); I'm asking for help with regard to the orbits of complex numbers.
Say I have a given Complex number derived from a point on the complex plane. I now need to iterate over its orbit sequence and plot points according to whether the orbits increase by orders of magnitude or not.
How do I gather the orbit of a complex number? Any guidance is much appreciated (links etc.), as are any pointers on the Math functions needed to test the orbit sequence, e.g. Math.pow().
I'm using Java but that's not particularly relevant here.
Thanks again,
Alex
When you display the Mandelbrot set, you simply translate the real and imaginary axes into x and y coordinates, respectively.
So, for example, the complex number 4.5 + 0.27i translates into x = 4.5, y = 0.27.
The Mandelbrot set is all points where the equation Z = Z² + C never reaches a value where |Z| >= 2, but in practice you include all points where the value doesn't exceed 2 within a specific number of iterations, for example 1000. To get the colorful renderings that you usually see of the set, you assign different colors to points outside the set depending on how fast they reach the limit.
As these are complex numbers, the iteration is really Zr + Zi·i = (Zr + Zi·i)² + Cr + Ci·i. Splitting it into its real and imaginary parts gives two real update equations, Zr' = Zr² - Zi² + Cr and Zi' = 2·Zr·Zi + Ci, and then it's just plain algebra. C is the coordinate of the point that you want to test, and the initial value of Z is zero.
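A minimal escape-time sketch of that split iteration (Python for brevity; the loop translates directly to Java):

def escape_time(cr, ci, max_iter=1000):
    # returns the iteration at which |Z| exceeds 2, or max_iter if it never does
    zr = zi = 0.0
    for k in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci   # Z = Z^2 + C, split
        if zr * zr + zi * zi > 4.0:     # compare |Z|^2 with 4 to avoid sqrt
            return k
    return max_iter

print(escape_time(-0.5, 0.5))   # never escapes: in the set
print(escape_time(1.0, 1.0))    # escapes almost immediately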
Here's an image from my multi-threaded Mandelbrot generator :)
Actually, the Mandelbrot set is the set of complex numbers for which the iteration remains bounded.
So the only points in the Mandelbrot set are that big boring colour in the middle, and all of the pretty colours you see are doing nothing more than representing the rate at which points near the boundary (but on the wrong side) spin off to infinity.
In mathspeak,
M = {c in C : the sequence z_k stays bounded as k -> inf} where z_0 = c, z_(k+1) = z_k^2 + c
i.e. pick any complex number c. Now to determine whether it is in the set, repeatedly iterate z_(k+1) = z_k^2 + c starting from z_0 = c; z_k will either stay bounded or run off to infinity. If it stays bounded as k tends to infinity, then c is in the set. Otherwise not.
It is possible to prove that once |z_k| > 2, the sequence will not stay bounded. This is a good exercise in optimisation: testing |z_k|^2 > 4 gives the same bailout and saves you the expensive sqrt() function.
Wolfram Mathworld has a nice site talking about the Mandelbrot set.
A Complex class will be most helpful.
Maybe an example like this will stimulate some thought. I wouldn't recommend using an Applet.
You have to know how to add, subtract, multiply, divide, and take powers of complex numbers, in addition to functions like sine, cosine, exponential, etc. If you don't know those, I'd start there.
The book that I was taught from was Ruel V. Churchill "Complex Variables".
For fun, here is an obfuscated PostScript program that renders the set:
/d{def}def/u{dup}d[0 -185 u 0 300 u]concat/q 5e-3 d/m{mul}d/z{A u m B u
m}d/r{rlineto}d/X -2 q 1{d/Y -2 q 2{d/A 0 d/B 0 d 64 -1 1{/f exch d/B
A/A z sub X add d B 2 m m Y add d z add 4 gt{exit}if/f 64 d}for f 64 div
setgray X Y moveto 0 q neg u 0 0 q u 0 r r r r fill/Y}for/X}for showpage
