I have a dumb question, and I am embarrassed to even ask.
Due to my limited knowledge of math, I couldn't figure out what to search for.
I'm dealing with the following equation:
[(a*x)^b]*c=d
where ^ stands for XOR and * for multiplication.
How can I isolate x?
[(a*x)^b]*c=d
[(a*x)^b]=d/c
(a*x)^b^b=(d/c)^b //double xor with b retrieves initial value
(a*x)=(d/c)^b
x = ((d/c)^b) / a
Based on the properties of XOR, the following hold:
A xor A = 0
B xor 0 = B
Plus, it's commutative and associative. The rest is plain equation-solving math. (In integer arithmetic the two divisions are exact whenever the original equation has an integer solution: c divides d, and a divides (d/c)^b.)
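A quick way to convince yourself is to run the steps in both directions; here's a minimal Python check, with arbitrary example values:

a, b, c, x = 7, 0b1011, 3, 42

d = ((a * x) ^ b) * c            # forward: [(a*x) ^ b] * c = d
recovered = ((d // c) ^ b) // a  # backward: x = ((d/c) ^ b) / a
assert recovered == x
print(recovered)                 # 42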
I was going through a question which asks you to calculate gcd(a-b, a^n + b^n) % (10^9+7), where a, b and n can be as large as 10^12.
I am able to solve this for very small a, b and n, and Fermat's theorem also didn't seem to work. I reached the conclusion that if a and b are coprime then this will always give me a gcd of 2, but for the rest I am not able to get it.
I just need a little hint about what I am doing wrong when computing the gcd for large numbers. I also tried computing x^y by taking the modulo at each step, but that also didn't work.
I need just a direction and I will make my way.
Thanks in advance.
You are correct that a^n + b^n is too large to compute directly and that working mod 10^9 + 7 at each step doesn't provide a way to compute the answer. But you can still use modular exponentiation by squaring with a different modulus, namely a-b.
Key observations:
1) gcd(a-b,a^n + b^n) = gcd(d,a^n + b^n) where d = abs(a-b)
2) gcd(d,a^n + b^n) = gcd(d,r) where r = (a^n + b^n) % d
3) r can be feasibly computed with modular exponentiation by squaring
The point of 1) is that different programming languages have different conventions for handling negative numbers in the mod operator. Taking the absolute value avoids such complications, though mathematically it doesn't make a difference. The key idea is that it is perfectly feasible to do the first step of the Euclidean algorithm for computing gcds. All you need is the remainder upon division of the larger by the smaller of the two numbers. After the first step is done, all of the numbers are in the feasible range.
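A minimal Python sketch of those three observations (solve is just an illustrative name; the a == b corner case is handled separately since then d = 0):

from math import gcd

MOD = 10**9 + 7

def solve(a, b, n):
    d = abs(a - b)                         # observation 1
    if d == 0:
        # gcd(0, a^n + b^n) = a^n + b^n = 2*a^n; reduce it mod 10^9+7 directly
        return 2 * pow(a, n, MOD) % MOD
    r = (pow(a, n, d) + pow(b, n, d)) % d  # observations 2 and 3
    return gcd(d, r) % MOD

print(solve(10, 4, 3))  # gcd(6, 1064) = 2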
I'm trying to write code which decrypts any Affine cipher.
Now, I found that the decryption function is:
y = a^(-1) * (x - b) mod 26
The problem is: when x is smaller than b, the answer is negative.
I know that it is a math question rather than a coding question, but I hope someone can help me.
It's actually a question that straddles maths and programming.
Firstly, mathematicians and programmers use "mod" somewhat differently.
Mathematicians use it as a statement about the equation they have just written. When they say "a = b + c mod m", what they mean is that "a = b + c" in modulo-m arithmetic.
Programmers, on the other hand, use mod as an operator that provides the remainder after integer division.
Secondly, there are multiple ways of defining integer division ("floored division", "truncated division" and "Euclidean division") and hence multiple ways of defining the modulo operator.
Unfortunately, what you need for your algorithm is the "remainder after floored division", but what your programming language is giving you is the "remainder after truncated division".
One possible fix is to simply add an if statement.
if (y < 0) y += 26
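In Python this fix isn't needed, because % is already the remainder after floored division; a minimal sketch (affine_decrypt is just an illustrative name, and pow(a, -1, m) needs Python 3.8+):

def affine_decrypt(x, a, b, m=26):
    a_inv = pow(a, -1, m)       # modular inverse; requires gcd(a, m) == 1
    return a_inv * (x - b) % m  # Python's % is never negative for positive m

In C, Java, or C#, the % operator truncates, so the if (y < 0) y += 26 fix above is exactly what you need.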
Is there an algorithm that can solve a non-linear congruence in modular arithmetic? I read that such a problem is classified as NP-complete.
In my specific case the congruence is of the form:
x^3 + ax + b congruent to 0 (mod 2^64)
where a and b are known constants and I need to solve it for x.
Look at Hensel's lemma.
Yes, the general problem is NP-complete.
This is because boolean algebra is arithmetic modulo 2! So any 3SAT formula can be rewritten as an equivalent expression in arithmetic modulo 2. Checking if a 3SAT formula is satisfiable becomes equivalent to checking if the corresponding arithmetic expression can be 1 or not.
For example, a AND b becomes a*b in arithmetic.
NOT a is 1-a, etc.
But in your case, talking about NP-completeness makes no sense, as it is one specific problem.
Also, lhf is right. Hensel's lifting lemma can be used. The basic essence is that to solve P(x) == 0 (mod 2^(e+1)) we can solve P(x) == 0 (mod 2^e) and 'lift' those solutions to mod 2^(e+1).
Here is a pdf explaining how to use that: http://www.cs.xu.edu/math/math302/04f/PolyCongruences.pdf
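To make the lifting concrete, here is a minimal Python sketch for this specific cubic. It brute-forces the lift one bit at a time rather than using the full Newton-style Hensel step, and note that for degenerate a, b the root list can grow large:

def roots_cubic_mod_2_64(a, b):
    # All x with x^3 + a*x + b == 0 (mod 2^64), found by lifting the
    # solutions mod 2^e to mod 2^(e+1), one bit at a time.
    f = lambda x, m: (x * x * x + a * x + b) % m
    roots = [x for x in range(2) if f(x, 2) == 0]  # solve mod 2 first
    for e in range(1, 64):
        m = 1 << (e + 1)
        # a root z mod 2^e can only lift to z or z + 2^e mod 2^(e+1)
        roots = [z + t for z in roots for t in (0, 1 << e) if f(z + t, m) == 0]
    return roots

print(len(roots_cubic_mod_2_64(3, 4)))  # number of roots of x^3 + 3x + 4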
I have a somewhat math-oriented problem. I have a bunch of bitfields and would like to calculate what subset of them to xor together to achieve a certain other bitfield, or if there isn't a way to do it discover that no such subset exists.
I'd like to do this using a free library, rather than original code, and I'd strongly prefer something with Python bindings (using Python's built-in math libraries would be acceptable as well, but I want to port this to multiple languages eventually). Also it would be good to not take the memory hit of having to expand each bit to its own byte.
Some further clarification: I only need a single solution. My matrices are the opposite of sparse. I'm very interested in keeping the runtime to an absolute minimum, so using algorithmically fancy methods for inverting matrices is strongly preferred. Also, it's very important that the specific given bitfield be the one outputted, so a technique which just finds a subset which xor to 0 doesn't quite cut it.
And I'm generally aware of gaussian elimination. I'm trying to avoid doing this from scratch!
cross-posted to mathoverflow, because it isn't clear what the right place for this question is - https://mathoverflow.net/questions/41036/how-to-find-which-subset-of-bitfields-xor-to-another-bitfield
Mathematically speaking, the XOR of two bits can be treated as addition in the field F_2.
You want to solve a system of equations over F_2. For four bitfields with bits (a_0, a_1, ..., a_n), (b_0, b_1, ..., b_n), (c_0, c_1, ..., c_n), (r_0, r_1, ..., r_n), you get the equations:
x * a_0 + y * b_0 + z * c_0 = r_0
x * a_1 + y * b_1 + z * c_1 = r_1
...
x * a_n + y * b_n + z * c_n = r_n
(where you look for x, y, z).
You could program this as a simple integer linear problem with glpk, or probably lp_solve (but I don't remember if it will fit). These might work very slowly though, as they are trying to solve a much more general problem.
After googling for a while, it seems that this page might be a good start looking for code. From descriptions it seems that Dixon and LinBox could be a good fit.
Anyway, I think asking at mathoverflow might give you more precise answers. If you do, please link your question here.
Update: Sagemath uses M4RI for solving this problem. This makes it (for me) a very good recommendation.
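For what it's worth, the Sage route looks roughly like this (a sketch from memory of the Sage API, so double-check against the docs; column j of A is bitfield j and r is the target bitfield):

A = matrix(GF(2), [[1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 0]])
r = vector(GF(2), [1, 1, 0])
x = A.solve_right(r)  # raises ValueError if no solution exists
print(x)              # a 0/1 vector saying which bitfields to XOR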
For small instances that easily fit in memory, this is just solving a linear system over F_2, so try mod-2 Gaussian elimination. For very large sparse instances, like those that occur in factoring (sieve) algorithms, look up the Wiedemann algorithm.
It's possible to have multiple subsets xor to the same value; do you care about finding all subsets?
A perhaps heavy-handed approach would be to filter the powerset of bitfields. In Haskell:
import Data.Bits

xorsTo :: Int -> [Int] -> [[Int]]
xorsTo target fields = filter xorsToTarget (powerset fields)
  where xorsToTarget f = (foldl xor 0 f) == target

powerset :: [a] -> [[a]]
powerset [] = [[]]
powerset (x:xs) = powerset xs ++ map (x:) (powerset xs)
Not sure if there is a way to do this without generating the powerset. (In the worst case, it is possible for the solution to actually be the entire powerset).
Expanding on liori's answer above, we have a linear system of equations (in modulo-2 arithmetic):
a0, b0, c0 ...| r0
a1, b1, c1 ...| r1
... |
an, bn, cn ...| rn
Gaussian elimination can be used to solve the system. In modulo-2 arithmetic, the row-addition operation becomes an XOR operation. It is much simpler computationally to do this than to use a generic linear-systems solver.
So, if a0 is zero, we swap up a row that has a 1 in the a position. Then perform an XOR (using row 0) on any other row whose "a" bit is a 1. Then repeat using row 1 and column b, then row 2 and column c, etc.
If you get a row of zeroes with a non-zero in the r column, then the desired subset does not exist.
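Here is a minimal Python sketch of that elimination, using plain integers as bit rows so there's no byte-per-bit memory hit. It organizes the elimination as an XOR linear basis that remembers, via a mask, which original fields combined to make each basis vector (xor_subset and the mask bookkeeping are illustrative, not a library API):

def xor_subset(fields, target):
    # Return indices of a subset of `fields` whose XOR equals `target`,
    # or None if no such subset exists.
    basis = {}                   # leading bit -> (vector, mask)
    for i, v in enumerate(fields):
        mask = 1 << i            # invariant: v == XOR of the fields in mask
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = (v, mask)
                break
            bv, bm = basis[lead]
            v ^= bv              # eliminate v's leading bit
            mask ^= bm
    mask = 0                     # now express target in the basis
    while target:
        lead = target.bit_length() - 1
        if lead not in basis:
            return None          # target is outside the span: no subset
        bv, bm = basis[lead]
        target ^= bv
        mask ^= bm
    return [i for i in range(len(fields)) if mask >> i & 1]

print(xor_subset([0b101, 0b011, 0b110], 0b110))  # [0, 1]: 101 ^ 011 == 110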
I'm working on coding the Pohlig-Hellman algorithm, but I am having trouble understanding the steps in the algorithm based on its definition.
Going by the Wiki of the algorithm:
I know the first part 1) is to calculate the prime factorization of p-1, which is fine.
However, I am not sure what I need to do in step 2), where you calculate the coefficients:
Let x2 = c0 + c1(2).
125^(180/2) = 125^90 ≡ 1 (mod 181), so c0 = 0.
125^(180/4) = 125^45 ≡ 1 (mod 181), so c1 = 0.
Thus, x2 = 0 + 0(2) = 0.
and 3) put the coefficients together and solve using the Chinese remainder theorem.
Can someone help by explaining this in plain English, or in pseudocode? I want to code the solution myself obviously, but I cannot make any more progress unless I understand the algorithm.
Note: I have done a lot of searching for this, and I read S. Pohlig and M. Hellman (1978), "An Improved Algorithm for Computing Logarithms over GF(p) and its Cryptographic Significance", but it's still not really making sense to me.
Thanks in advance
Update:
How come q (125) stays constant in this example, whereas in this other example it appears like he is calculating a new q each time?
To be more specific, I don't understand how the following is computed:
Now divide 7531 by a^c0 to get
7531 * a^(-2) ≡ 6735 (mod p).
Let's start with the main idea behind Pohlig-Hellman. Assume that we are given y, g and p and that we want to find x such that
y == g^x (mod p).
(I'm using == to denote an equivalence relation.) To simplify things, I'm also assuming that the order of g is p-1, i.e. the smallest positive k with 1 == g^k (mod p) is k = p-1.
An inefficient method to find x would be to simply try all values in the range 1 .. p-1.
Somewhat better is the "baby-step giant-step" method, which requires O(p^(1/2)) arithmetic operations. Both methods are quite slow for large p. Pohlig-Hellman is a significant improvement when p-1 has many factors. I.e., assume that
p-1 = n * r
Then what Pohlig and Hellman propose is to solve the equation
y^n == (g^n)^z (mod p).
If we take logarithms to the base g on both sides, this is the same as
n * log_g(y) == log_g(y^n) == n * z (mod p-1).
n can be divided out, giving
log_g(y) == z (mod r).
Hence x == z (mod r).
This is an improvement, since we only have to search a range 0 .. r-1 for a solution of z. And again "Baby-step giant-step" can be used to improve the search for z. Obviously, doing this once is not a complete solution yet. I.e. one has to repeat the algorithm above for every prime factor r of p-1 and then to use the Chinese remainder theorem to find x from the partial solutions. This works nicely if p-1 is square free.
If p-1 is divisible by a prime power, then a similar idea can be used. For example, let's assume that p-1 = m * q^k.
In the first step, we compute z such that x == z (mod q) as shown above. Next we want to extend this to a solution x == z' (mod q^2). E.g., if p-1 = m * q^2, then this means that we have to find z' such that
y^m == (g^m)^z' (mod p).
Since we already know that z' == z (mod q), z' must be in the set {z, z+q, z+2q, ..., z+(q-1)q}. Again we could either do an exhaustive search for z' or improve the search with "baby-step giant-step". This step is repeated for every exponent of q, i.e. from knowing x mod q^i we iteratively derive x mod q^(i+1).
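To make these steps concrete, here is a minimal Python sketch of the whole procedure (bsgs, crt and pohlig_hellman are illustrative names, not from any particular library; it assumes p is prime, g has order p-1, and the factorization of p-1 is supplied as {q: k} pairs):

from math import isqrt

def bsgs(g, y, p, n):
    # baby-step giant-step: find z in [0, n) with g^z == y (mod p)
    m = isqrt(n) + 1
    table = {}
    e = 1
    for j in range(m):                    # baby steps: store g^j -> j
        table.setdefault(e, j)
        e = e * g % p
    inv_gm = pow(pow(g, m, p), p - 2, p)  # g^(-m) via Fermat, since p is prime
    gamma = y % p
    for i in range(m):                    # giant steps: y * g^(-i*m)
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * inv_gm % p
    return None

def crt(residues, moduli):
    # combine x == r_i (mod m_i) for pairwise coprime moduli
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        t = (r - x) * pow(m, -1, mi) % mi  # pow(m, -1, mi) needs Python 3.8+
        x, m = x + m * t, m * mi
    return x

def pohlig_hellman(g, y, p, factors):
    # solve g^x == y (mod p), given factors = {q: k} with p-1 = product of q^k
    n = p - 1
    residues, moduli = [], []
    for q, k in factors.items():
        g_i = pow(g, n // q**k, p)        # generator of the subgroup of order q^k
        y_i = pow(y, n // q**k, p)
        gamma = pow(g_i, q**(k - 1), p)   # element of order q
        x_i = 0
        for j in range(k):
            # strip the digits found so far, then shift digit j down to order q
            rhs = pow(y_i * pow(g_i, n - x_i, p) % p, q**(k - 1 - j), p)
            x_i += bsgs(gamma, rhs, p, q) * q**j
        residues.append(x_i)
        moduli.append(q**k)
    return crt(residues, moduli)

print(pohlig_hellman(5, 21, 23, {2: 1, 11: 1}))  # 13, since 5^13 == 21 (mod 23)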
I'm coding it up myself right now (Java). I'm using Pollard's rho to find the small prime factors of p-1, then using Pohlig-Hellman to solve for a DSA private key, y = g^x. I am having the same problem.
UPDATE: "To be more specific, I don't understand how the following is computed: Now divide 7531 by a^c0 to get 7531 * a^(-2) ≡ 6735 (mod p)."
If you compute the modular inverse of a^c0 (i.e. multiply by a^(-c0) mod p), it will make sense.
Regards