How to find which subset of bitfields xor to another bitfield? - math

I have a somewhat math-oriented problem. I have a bunch of bitfields and would like to calculate what subset of them to xor together to achieve a certain other bitfield, or, if there isn't a way to do it, discover that no such subset exists.
I'd like to do this using a free library, rather than original code, and I'd strongly prefer something with Python bindings (using Python's built-in math libraries would be acceptable as well, but I want to port this to multiple languages eventually). Also it would be good to not take the memory hit of having to expand each bit to its own byte.
Some further clarification: I only need a single solution. My matrices are the opposite of sparse. I'm very interested in keeping the runtime to an absolute minimum, so using algorithmically fancy methods for inverting matrices is strongly preferred. Also, it's very important that the specific given bitfield be the one outputted, so a technique which just finds a subset which xor to 0 doesn't quite cut it.
And I'm generally aware of Gaussian elimination. I'm trying to avoid doing this from scratch!
Cross-posted to MathOverflow, because it isn't clear what the right place for this question is: https://mathoverflow.net/questions/41036/how-to-find-which-subset-of-bitfields-xor-to-another-bitfield

Mathematically speaking, XOR of two bits can be treated as addition in the two-element field F_2.
You want to solve a system of equations over F_2. For bitfields with bits (a_0, a_1, ..., a_n), (b_0, b_1, ..., b_n), (c_0, c_1, ..., c_n) and a target (r_0, r_1, ..., r_n), you get the equations:
x * a_0 + y * b_0 + z * c_0 = r_0
x * a_1 + y * b_1 + z * c_1 = r_1
...
x * a_n + y * b_n + z * c_n = r_n
(where you look for x, y, z).
You could program this as a simple integer linear problem with glpk, and probably with lp_solve (but I don't remember if it will fit). These might work very slowly though, as they are trying to solve a much more general problem.
After googling for a while, it seems that this page might be a good place to start looking for code. From the descriptions, it seems that Dixon and LinBox could be a good fit.
Anyway, I think asking at mathoverflow might give you more precise answers. If you do, please link your question here.
Update: Sagemath uses M4RI for solving this problem. This makes it (for me) a very good recommendation.
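For example, a minimal Sage sketch (Sage's dense GF(2) matrices are the M4RI-backed ones); the matrix has one row per bit position and one column per bitfield, as in the equations above, with made-up values:

# In a Sage session; dense GF(2) matrices here are backed by M4RI
A = matrix(GF(2), [[1, 0, 1],    # bit 0 of bitfields a, b, c
                   [0, 1, 1],    # bit 1
                   [1, 1, 0]])   # bit 2
r = vector(GF(2), [0, 1, 1])
x = A.solve_right(r)             # x == (1, 0, 1): xor the 1st and 3rd bitfields
# solve_right raises ValueError if no such subset exists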

For small instances that easily fit in memory, this is just solving a linear system over F_2, so try mod-2 Gaussian elimination. For very large sparse instances, like those that occur in factoring (sieve) algorithms, look up the Wiedemann algorithm.

It's possible to have multiple subsets xor to the same value; do you care about finding all subsets?
A perhaps heavy-handed approach would be to filter the powerset of bitfields. In Haskell:
import Data.Bits

xorsTo :: Int -> [Int] -> [[Int]]
xorsTo target fields = filter xorsToTarget (powerset fields)
  where xorsToTarget f = foldl xor 0 f == target

powerset :: [a] -> [[a]]
powerset []     = [[]]
powerset (x:xs) = powerset xs ++ map (x:) (powerset xs)
Not sure if there is a way to do this without generating the powerset. (In the worst case, it is possible for the solution to actually be the entire powerset).

Expanding on liori's answer above, we have a linear system of equations (mod 2):
a_0, b_0, c_0, ... | r_0
a_1, b_1, c_1, ... | r_1
...                | ...
a_n, b_n, c_n, ... | r_n
Gaussian elimination can be used to solve the system. Mod 2, the row-addition operation becomes an XOR, so this is computationally much simpler than using a generic linear-systems solver.
So, if a_0 is zero, we swap up a row that has a 1 in the a position. Then perform an XOR (using row 0) on any other row whose a bit is a 1. Then repeat using row 1 and column b, then row 2 and column c, and so on.
If you end up with a row of zeroes that has a non-zero in the r column, then no such subset exists.
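A sketch of that procedure in pure Python, packing each row (one equation per bit position) into an integer so that the row operation is a single XOR; the function name and layout are my own:

def solve_xor_subset(fields, target, width):
    # Row i encodes the equation for bit position i: bit j of the row is
    # bit i of fields[j]; the extra bit m is the augmented (target) column.
    m = len(fields)
    rows = []
    for i in range(width):
        r = 0
        for j, f in enumerate(fields):
            r |= ((f >> i) & 1) << j
        r |= ((target >> i) & 1) << m
        rows.append(r)
    # Gauss-Jordan elimination over GF(2)
    pivots = []
    rank = 0
    for col in range(m):
        for k in range(rank, width):
            if (rows[k] >> col) & 1:
                rows[rank], rows[k] = rows[k], rows[rank]   # swap the pivot row up
                for other in range(width):
                    if other != rank and (rows[other] >> col) & 1:
                        rows[other] ^= rows[rank]           # the mod-2 row addition
                pivots.append(col)
                rank += 1
                break
    # a leftover non-zero row can only have the target bit set: no solution
    if any(rows[k] for k in range(rank, width)):
        return None
    # free variables default to 0; pivot variables read off the target column
    coeffs = [0] * m
    for row_idx, col in enumerate(pivots):
        coeffs[col] = (rows[row_idx] >> m) & 1
    return coeffs

# e.g. solve_xor_subset([0b0110, 0b1010, 0b0011], 0b1111, 4) -> [1, 1, 1]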

Related

gcd of two numbers, one of which is too large

I was going through a question which asks to calculate gcd(a-b, a^n+b^n) % (10^9+7), where a, b, n can be as large as 10^12.
I am able to solve this for very small a, b and n, and Fermat's theorem also didn't seem to work. I reached the conclusion that if a and b are coprime then this will always give me a gcd of 2, but for the rest I am not able to get it.
I just need a little hint about what I am doing wrong when computing the gcd for large numbers. I also tried computing x^y by taking the modulus at each step to find the gcd, but that didn't work either.
Need just direction and I will make my way.
Thanks in advance.
You are correct that a^n + b^n is too large to compute, and that working mod 10^9 + 7 at each step doesn't provide a way to compute the answer. But you can still use modular exponentiation by squaring with a different modulus, namely a - b.
Key observations:
1) gcd(a-b,a^n + b^n) = gcd(d,a^n + b^n) where d = abs(a-b)
2) gcd(d,a^n + b^n) = gcd(d,r) where r = (a^n + b^n) % d
3) r can be feasibly computed with modular exponentiation by squaring
The point of 1) is that different programming languages have different conventions for handling negative numbers in the mod operator. Taking the absolute value avoids such complications, though mathematically it doesn't make a difference. The key idea is that it is perfectly feasible to do the first step of the Euclidean algorithm for computing gcds. All you need is the remainder upon division of the larger by the smaller of the two numbers. After the first step is done, all of the numbers are in the feasible range.
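In Python, the three observations translate directly (a sketch; math.gcd and three-argument pow do the heavy lifting):

from math import gcd

MOD = 10**9 + 7

def answer(a, b, n):
    d = abs(a - b)                         # observation 1
    if d == 0:
        # gcd(0, a^n + b^n) = a^n + b^n, reducible mod 10^9+7 directly
        return (pow(a, n, MOD) + pow(b, n, MOD)) % MOD
    r = (pow(a, n, d) + pow(b, n, d)) % d  # observations 2 and 3
    return gcd(d, r) % MOD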

Solve a particular linear system efficiently in julia

I use Julia's linear equation solver res = X\b extensively. I have to use it millions of times in my program because of parameter variation. This was working fine because I was using small dimensions (up to 30), but now that I want to analyse bigger systems, up to 1000, the linear solver is no longer efficient.
I think there can be a workaround. However, I must say that sometimes my X matrix is dense and sometimes it is sparse, so I need something that works well for both cases.
The b vector is all zeroes except for one entry, which is always 1 (actually it is always the last entry). Moreover, I don't need the whole res vector, just its first entry.
If your problem is of the form (A - µI)x = b, where µ is a variable parameter and A, b are fixed, you might work with diagonalization.
Let A = PDP° where P° denotes the inverse of P. Then (PDP° - µI)x = b can be transformed to
(D - µI)P°x = P°b,
P°x = P°b / (D - µI),
x = P(P°b / (D - µI)).
(the / operation denotes division of the respective vector elements by the scalars D_ii - µ.)
After you have diagonalized A, computing a solution for any µ reduces to two matrix/vector products, or a single one if you can also precompute P°b.
Numerical instability will show up in the vicinity of the eigenvalues of A.
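A minimal sketch of this precomputation in Python/NumPy for illustration (the question is about Julia, where eigen and a backslash solve give the same steps); the matrix and sizes below are placeholders:

import numpy as np

n = 1000
A = np.random.rand(n, n)          # placeholder for the fixed system matrix
b = np.zeros(n); b[-1] = 1.0      # all zeros except the last entry

# one-time cost: diagonalize A = P D P°, with D = diag(w)
w, P = np.linalg.eig(A)           # may be complex for a nonsymmetric A
Pinv_b = np.linalg.solve(P, b)    # precompute P°b

def first_entry_of_x(mu):
    # x = P ((P°b) / (w - µ)); only row 0 of P is needed for x[0]
    # for a real solvable system the imaginary parts cancel up to roundoff
    return (P[0, :] @ (Pinv_b / (w - mu))).real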
Usually when people talk about speeding up linear solvers res = X \ b, it's for multiple b's. But since your b isn't changing, and you just keep changing X, none of those tricks apply.
The only way to speed this up, from a mathematical perspective, seems to be to ensure that Julia is picking the fastest solver for X \ b: if you know X is positive-definite, use Cholesky, etc. Matlab's flowcharts for how it picks the solver for X \ b, for dense and sparse X, are available; most likely Julia implements something close to these flowcharts too, but maybe you can find some way to simplify or shortcut it.
All programming-related speedups can then be applied: multiple threads (while each individual solver is probably already multi-threaded, it may be worth running multiple solvers in parallel when each solver uses fewer threads than cores), @simd if you're willing to dive into the solvers themselves, OpenCL/CUDA libraries, etc.
The best approach for efficiency would be to use JuliaMath/IterativeSolvers.jl. For A * x = b problems, I would recommend x = lsmr(A, b).
The second-best alternative would be to give a bit more information to the compiler: instead of x = inv(A'A) * A' * b, do x = inv(cholfact(A'A)) * A' * b if the Cholesky decomposition works for you. Otherwise, you could try U, S, V = svd(A) and x = V * diagm(1 ./ S) * U' * b.
I'm unsure whether x = pinv(A) * b is optimized, but it might be slightly more efficient than x = A \ b.

how to solve mathematical expectation in hackerrank 20/20 hack february 2014

So this problem was given in the Hackerrank 20/20 Hack February:
Let's consider a random permutation p_1, p_2, ..., p_N of the numbers 1, 2, ..., N and calculate the value F = (X_2 + ... + X_{N-1})^K, where X_i equals 1 if one of the following two conditions holds: p_{i-1} < p_i > p_{i+1} or p_{i-1} > p_i < p_{i+1}, and X_i equals 0 otherwise. What is the expected value of F?
Constraints: 1000 <= N <= 10^9, 1 <= K <= 5
I thought it was an Eulerian-number-related problem. As the contest is over, I can see the solutions, but I don't understand any of them. Are there any tricks?
so a few words about my "solution" ;)
What I basically did:
1) write a brute force solver (obviously for N << 20)
-> this solver won't handle high values of N, as given in the constraints
2) analyze the output of the solutions to these (invalid) inputs
-> observe that with K=1, the output follows a straight line
-> K=2, is a quadratic function
-> K=3, is a cubic function, and so on
3) find the parameters for each function (K=1 to 5) by using a solver or, as I did it, Wolfram Alpha ;)
-> additionally I "normalized" each parameter to only have one division afterwards
4) use any programming language / big integer class to solve the correct inputs in O(1)
I'm pretty sure that one can come up with these parameters in a very clever way, but for me, during the contest, this solution was easy and fast enough without having to think too much about the "why" ;)
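For concreteness, a rough Python reconstruction of steps 1 to 3 (my own sketch, not the contestant's code; np.polyfit stands in for the solver / Wolfram Alpha step):

import itertools
import numpy as np

def brute_expected_F(N, K):
    # step 1: brute force over all N! permutations (only feasible for small N)
    total = 0
    perms = list(itertools.permutations(range(1, N + 1)))
    for p in perms:
        # X_i over the interior positions (0-based i = 1 .. N-2)
        x = sum(1 for i in range(1, N - 1)
                if p[i-1] < p[i] > p[i+1] or p[i-1] > p[i] < p[i+1])
        total += x ** K
    return total / len(perms)

# steps 2-3: the expected value follows a degree-K polynomial in N, so fit it
K = 2
Ns = list(range(4, 11))
coeffs = np.polyfit(Ns, [brute_expected_F(N, K) for N in Ns], K)
# step 4: evaluate the fitted polynomial (with exact coefficients recovered)
# for the huge N allowed by the constraints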

Get branch points of equation

I have a general function f(z, a), where z and a are both real, and f takes on real values for all z except in some interval (z1, z2), where it becomes complex. How do I determine z1 and z2 (which will be in terms of a) using Mathematica (or is this possible)? What are the limitations?
For a test example, consider the function f[z_,a_]=Sqrt[(z-a)(z-2a)]. For real z and a, this takes on real values except in the interval (a,2a), where it becomes imaginary. How do I find this interval in Mathematica?
In general, I'd like to know how one would go about finding it mathematically for a general case. For a function with just two variables like this, it'd probably be straightforward to do a contour plot of the Riemann surface and observe the branch cuts. But what if it is a multivariate function? Is there a general approach that one can take?
What you have appears to be a Riemann surface parametrized by 'a'. Consider the algebraic (or analytic) relation g(a,z) = 0 that would be spawned from this branch of a parametrized Riemann surface. In this case it is simply g^2 - (z - a)*(z - 2*a) == 0. More generally it might be obtained using GroebnerBasis, as below (no guarantee this will always work without some amount of user intervention).
grelation = First[GroebnerBasis[g - Sqrt[(z - a)*(z - 2*a)], {z, a, g}]]
Out[472]= 2 a^2 - g^2 - 3 a z + z^2
A necessary condition for the branch points, as functions of the parameter 'a', is that the zero set for 'g' not give a (single valued) function in a neighborhood of such points. This in turn means that the partial derivative of this relation with respect to g vanishes (this is from the implicit function theorem of multivariable calculus). So we find where grelation and its derivative both vanish, and solve for 'z' as a function of 'a'.
Solve[Eliminate[{grelation == 0, D[grelation, g] == 0}, g], z]
Out[481]= {{z -> a}, {z -> 2 a}}
Daniel Lichtblau
Wolfram Research
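For comparison, the same elimination can be sketched in Python with SymPy (an illustrative translation, with my own variable names):

import sympy as sp

a, z, g = sp.symbols('a z g')
grelation = g**2 - (z - a)*(z - 2*a)   # the algebraic relation from above (up to sign)
# branch points: grelation = 0 and d(grelation)/dg = 0 simultaneously
print(sp.solve([grelation, sp.diff(grelation, g)], [g, z], dict=True))
# solutions: z = a and z = 2*a (both with g = 0)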
For polynomial systems (and some class of others), Reduce can do the job.
E.g.
In[1]:= Reduce[Element[{a, z}, Reals]
&& !Element[Sqrt[(z - a) (z - 2 a)], Reals], z]
Out[1]= (a < 0 && 2a < z < a) || (a > 0 && a < z < 2a)
This type of approach also works (often giving very complicated solutions for functions with many branch cuts) for other combinations of elementary functions I checked.
To find the branch cuts (as opposed to the simple class of branch points you're interested in) in general, I don't know of a good approach. The best place to find the detailed conventions that Mathematica uses is at the functions.wolfram site.
I do remember reading a good paper on this a while back... I'll try to find it....
That's right! The easiest approach I've seen for branch cut analysis uses the unwinding number. There's a paper about this, "Reasoning about the elementary functions of complex analysis", in the journal "Artificial Intelligence and Symbolic Computation". It and similar papers can be found at one of the authors' homepages: http://www.apmaths.uwo.ca/~djeffrey/offprints.html.
For general functions you cannot make Mathematica calculate it. Even for polynomials, finding an exact answer takes time; I believe Mathematica uses some sort of quantifier elimination in Reduce, which is expensive.
Without any restrictions on your functions (are they polynomials, continuous, smooth?), one can easily construct functions which Mathematica cannot simplify further:
f[x_,y_] := Abs[Zeta[y+0.5+x*I]]*I
If this function is real for an arbitrary x and any -0.5 < y < 0 or 0 < y < 0.5, then you will have found a counterexample to the Riemann hypothesis, and I'm sure Mathematica cannot give a correct answer.

higher order linear regression

I have the matrix system:
A x B = C
A is a by n and B is n by b. Both A and B are unknown but I have partial information about C (I have some values in it but not all) and n is picked to be small enough that the system is expected to be over constrained. It is not required that all rows in A or columns in B are over constrained.
I'm looking for something like least-squares linear regression to find a best fit for this system. (Note: I know there will not be a single unique solution; all I want is one of the best solutions.)
To make a concrete example: all the a's and b's are unknown, all the c's are known, and the ?'s are ignored. I want to find a least-squares solution taking into account only the known c's.
[ a11, a12 ]                                      [ c11, c12, c13, c14, ?   ]
[ a21, a22 ]   [ b11, b12, b13, b14, b15 ]        [ c21, c22, c23, c24, c25 ]
[ a31, a32 ] x [ b21, b22, b23, b24, b25 ] = C ~= [ c31, c32, c33, ?,   c35 ]
[ a41, a42 ]                                      [ ?,   ?,   c43, c44, c45 ]
[ a51, a52 ]                                      [ c51, c52, c53, c54, c55 ]
Note that if B is trimmed to b11 and b21 only and the unknown row 4 chomped out, then this is almost a standard least squares linear regression problem.
This problem is ill-posed as described.
Let A and B be scalars and C = 5. You are asking to solve
a * b = 5
which has an infinite number of solutions.
One approach, based on the information provided above, is to minimize the function g defined as
g(A, B) = ||AB - C||_F^2 = trace((AB - C)* (AB - C))
using Newton's method or a quasi-Newton approach (BFGS). (You can easily compute the gradient here.) Here M* denotes the transpose of M, multiplication is implicit, and ||.||_F is the Frobenius norm.
As this is an inherently nonlinear problem, standard linear algebra approaches do not apply.
If you provide more information, I may be able to help more.
Some more questions: I think the issue here is that, without more information, there is no "best solution". We need a more concrete idea of what we are looking for. One idea could be a "sparsest" solution. This is a hot area of research, with some of the best minds in the world working on it (see the work of Terry Tao et al. on the nuclear norm). This problem, although tractable, is still hard.
Unfortunately, I am not yet able to comment, so I will add my comments here. As said below, LM is a great approach to solving this; it is just one of the Newton-type approaches to either the optimization problem or the nonlinear solving problem.
Here is an idea, using the example you gave above. Let's define two new vectors, V and U, each with 21 elements (exactly the same number of defined elements in C).
V is precisely the known elements of C, column-ordered, so (in Matlab notation)
V = [c11; c21; c31; c51; c12; ...; c55]
U is a column ordering of the product AB, leaving out the elements corresponding to '?' in matrix C. Collecting all the variables into x, we have
x = [a11, a21, ..., a52, b11, b21, ..., b25]
and f(x) = U as defined above. We can now try to solve f(x) = V with your favorite nonlinear least-squares method.
As an aside, although a poster below recommended simulated annealing, I recommend against it. There are some problems where it works, but it is a heuristic. When you have powerful analytic methods such as Gauss-Newton or LM, I say use them (in my own experience, that is).
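As an illustration, a minimal sketch of this U/V residual setup with SciPy's Levenberg-Marquardt driver; the mask below is made up to match the 5x5 example above:

import numpy as np
from scipy.optimize import least_squares

a_dim, b_dim, n = 5, 5, 2
C = np.random.rand(a_dim, b_dim)    # stand-in for the known data
mask = np.ones((a_dim, b_dim), dtype=bool)
mask[0, 4] = mask[2, 3] = mask[3, 0] = mask[3, 1] = False   # the '?' entries

def residuals(x):
    # f(x) - V: compare AB with C only at the 21 known entries
    A = x[:a_dim * n].reshape(a_dim, n)
    B = x[a_dim * n:].reshape(n, b_dim)
    return (A @ B - C)[mask]

x0 = np.random.rand(a_dim * n + n * b_dim)        # 20 unknowns vs 21 residuals
fit = least_squares(residuals, x0, method='lm')   # Levenberg-Marquardt
A_hat = fit.x[:a_dim * n].reshape(a_dim, n)
B_hat = fit.x[a_dim * n:].reshape(n, b_dim)

Note that method='lm' requires at least as many residuals as unknowns, which the 21 known entries satisfy here.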
A wild guess: A singular value decomposition might do the trick?
I have no idea on how to deal with your missing values, so I'm going to ignore that problem.
There are no unique solutions. To find a best solution you need some sort of metric to judge them by. I'm going to suppose you want to use a least-squares metric, i.e. the best-guess values of A and B are those that minimize the sum of the numbers [C_ij - (AB)_ij]^2.
One thing you didn't mention is how to determine the value you are going to use for n. In short, we can come up with 'good' solutions if 1 <= n <= b. This is because 1 <= rank(C) <= b, where rank(C) is the dimension of the column space of C. Note that this assumes a >= b; to be more correct we would write 1 <= rank(C) <= min(a, b).
Now, suppose you have chosen n such that 1 <= n <= b. You are going to minimize the residual sum of squares if you choose the columns of A such that span(A) = span(first n left singular vectors of C). If you don't have any other good reasons, just choose the columns of A to be the first n left singular vectors of C. Once you have chosen A, you can get the values of B in the usual linear-regression way, i.e. B = (A'A)^(-1) A' C.
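A sketch of that recipe in Python/NumPy, assuming for the moment a fully known C (handling the '?' entries needs something like the iterative schemes in the other answers):

import numpy as np

def low_rank_fit(C, n):
    # columns of A span the space of the first n left singular vectors of C
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    A = U[:, :n]
    # with A fixed, B is the usual least-squares solution (here A'A = I)
    B = np.linalg.lstsq(A, C, rcond=None)[0]
    return A, B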
You have a couple of options. The Levenberg-Marquardt algorithm is generally recognized as the best least-squares method. A free implementation is available here. However, if the calculation is fast and you have a decent number of parameters, I would strongly suggest a Monte Carlo method such as simulated annealing.
You start with some set of parameters for the answer, and then you change one of them by a random percentage up to a maximum. You then calculate the fitness function for your system. Now, here's the trick: you don't throw away the bad answers. You accept them with a Boltzmann probability distribution,
P = exp(-(x - x0)/T)
where T is a temperature parameter and x - x0 is the current fitness value minus the previous one. After a set number of iterations, you decrease T by a fixed amount (this is called the cooling schedule). You then repeat this process for another random parameter. As T decreases, fewer poor solutions are accepted, and eventually the procedure becomes a "greedy search" accepting only the solutions that improve the fit. If your system has many free parameters (> 10 or so), this is really the only way to go where you will have any chance of getting to a global minimum. This fitting method takes about 20 minutes to write in code, and a couple of hours to tweak. Hope this helps.
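A generic sketch of that procedure in Python (the step function and cooling constants are placeholders to tune per problem):

import math
import random

def anneal(x0, fitness, step, T=1.0, cooling=0.95, per_T=100, T_min=1e-6):
    # minimize `fitness`, accepting uphill moves with Boltzmann probability
    x, fx = x0, fitness(x0)
    best, fbest = x, fx
    while T > T_min:
        for _ in range(per_T):
            cand = step(x)                   # perturb one random parameter
            fc = fitness(cand)
            if fc < fx or random.random() < math.exp(-(fc - fx) / T):
                x, fx = cand, fc             # keep even some worse answers
                if fx < fbest:
                    best, fbest = x, fx
        T *= cooling                         # the cooling schedule
    return best, fbest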
FYI, Wolfram has a nice discussion of this in the context of the traveling salesman problem, and I've been using it very successfully to solve some very difficult global minimization problems. It is slower than LM methods, but much better in most difficult/relatively large cases.
Based on the realization that trimming B to a single column and then removing the rows with unknowns converts this to very nearly a known problem, one approach (sketched in code below) would be to:
1) seed A with random values.
2) solve for each column of B independently.
3) rework the problem to solve for each row of A given the B values from step 2.
4) repeat from step 2 until things settle out.
I have no clue if that is even stable.
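A minimal sketch of this loop in Python/NumPy (function and variable names are mine; mask marks the known entries of C):

import numpy as np

def alternate(C, mask, n, iters=50):
    a_dim, b_dim = C.shape
    A = np.random.rand(a_dim, n)                 # 1) seed A with random values
    B = np.zeros((n, b_dim))
    for _ in range(iters):
        for j in range(b_dim):                   # 2) each column of B independently
            rows = mask[:, j]
            B[:, j] = np.linalg.lstsq(A[rows], C[rows, j], rcond=None)[0]
        for i in range(a_dim):                   # 3) each row of A given the new B
            cols = mask[i, :]
            A[i, :] = np.linalg.lstsq(B[:, cols].T, C[i, cols], rcond=None)[0]
    return A, B                                  # 4) repeat until it settles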
