I'm trying to write code that decrypts any affine cipher.
Now, I found that the decryption function is:
y = a^(-1) * (x - b) mod 26
The problem is: when x is smaller than b, the result is negative.
I know this is more of a maths question than a coding question, but I hope someone can help me.
It's actually a question that straddles maths and programming.
Firstly, mathematicians and programmers use "mod" somewhat differently.
Mathematicians use it as a statement about the equation they have just written: when they say "a = b + c mod m", what they mean is that "a = b + c" in modulo-m arithmetic.
Programmers, on the other hand, use mod as an operator that gives the remainder after integer division.
Secondly, there are multiple ways of defining integer division ("floored division", "truncated division" and "Euclidean division") and hence multiple ways of defining the modulo operator.
Unfortunately, what you need for your algorithm is the "remainder after floored division", but what your programming language is giving you is the "remainder after truncated division".
One possible fix is to simply add an if statement.
if (y < 0) y += 26
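In Python, by the way, the % operator already gives the remainder after floored division, so no correction is needed there. Here is a minimal sketch of affine decryption (the names are illustrative; it assumes uppercase A-Z text and a key (a, b) with gcd(a, 26) = 1):

def affine_decrypt(ciphertext, a, b):
    a_inv = pow(a, -1, 26)          # modular inverse of a mod 26 (Python 3.8+)
    plaintext = []
    for ch in ciphertext:
        x = ord(ch) - ord('A')      # map 'A'..'Z' to 0..25
        y = (a_inv * (x - b)) % 26  # Python's % is never negative for a positive modulus
        plaintext.append(chr(y + ord('A')))
    return ''.join(plaintext)

In a C-style language you would keep the explicit "if (y < 0) y += 26;" after the % operation.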
I was going through a question which asks to calculate gcd(a-b, a^n+b^n) % (10^9+7), where a, b, n can be as large as 10^12.
I am able to solve this for very small values of a, b and n, and Fermat's theorem also didn't seem to work. I reached the conclusion that if a and b are coprime then this will always give me a gcd of 2, but for the other cases I am not able to get it.
I just need a little hint about what I am doing wrong when computing the gcd for large numbers. I also tried computing x^y by taking the modulo at each step, but that also didn't work.
I just need a direction and I will make my way.
Thanks in advance.
You are correct that a^n + b^n is too large to compute and that working mod 10^9 + 7 at each step doesn't provide a way to compute the answer. But you can still use modular exponentiation by squaring with a different modulus, namely a - b.
Key observations:
1) gcd(a-b,a^n + b^n) = gcd(d,a^n + b^n) where d = abs(a-b)
2) gcd(d,a^n + b^n) = gcd(d,r) where r = (a^n + b^n) % d
3) r can be feasibly computed with modular exponentiation by squaring
The point of 1) is that different programming languages have different conventions for handling negative numbers in the mod operator. Taking the absolute value avoids such complications, though mathematically it doesn't make a difference. The key idea is that it is perfectly feasible to do the first step of the Euclidean algorithm for computing gcds. All you need is the remainder upon division of the larger by the smaller of the two numbers. After the first step is done, all of the numbers are in the feasible range.
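A minimal Python sketch of these observations (illustrative, and it assumes a != b so that d > 0):

from math import gcd

def solve(a, b, n, MOD=10**9 + 7):
    d = abs(a - b)
    # First Euclidean step: reduce a^n + b^n modulo d with modular
    # exponentiation by squaring; pow(a, n, d) never builds the huge power.
    r = (pow(a, n, d) + pow(b, n, d)) % d
    return gcd(d, r) % MOD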
We covered the MASH-2 hash function in a college course, and in the exam we are confronted with questions asking us to calculate something like (62500^257) mod 238194151 using only a scientific calculator. Now, I know some theory about a^b (mod n), but the problem above is hard to calculate even by hand; I think it would take about 15 minutes to solve. I would like to know if there is a faster way to do this, or even if there is a way to do it in binary (convert the numbers to binary and then do some manipulations). I need to be able to do this by hand with a scientific calculator.
In this special case the prime factor decomposition of a = 62500 = 2² ⋅ 5⁶ is very simple.
You can use this to calculate (2²)²⁵⁷ and (5⁶)²⁵⁷ first and then calculate the product.
But the problem I see is that for n = 238194151 my scientific calculator cannot calculate n² correctly. If your calculator can do this, it should be no problem.
Since the factors are pairwise coprime you could also use the CRT, but I'm not sure if you can find the prime factorization n = 13 ⋅ 59 ⋅ 310553 with only a scientific calculator. If so, this will make it much easier: you just calculate a²⁵⁷ mod (13⋅59) and a²⁵⁷ mod 310553 and put the results together with the CRT.
You can also just use exponentiation by squaring, so you only have to calculate 8 squares (since 257 = 2⁸ + 1).
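As a quick check (or just to see the squaring chain written out), here is the procedure in Python; 257 = 2⁸ + 1, so eight squarings plus one final multiplication by the base give the answer:

a, n = 62500, 238194151
x = a % n
for _ in range(8):        # x becomes a^2, a^4, ..., a^256  (mod n)
    x = (x * x) % n
result = (x * a) % n      # a^256 * a = a^257  (mod n)
assert result == pow(a, 257, n)
print(result)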
I have to write code to evaluate the value of the following sum:
( pow(1,k) + pow(2,k) + ... + pow(n,k) ) % MOD
for given values of n, k, and MOD.
I have tried searching the internet and found an equation, but it involves zeta functions and seems difficult to implement. I want a simpler approach. Note that the value of n is large, so we cannot simply brute-force it within the time limit.
Newton's identities might be of help. Calculate the coefficients of the polynomial with 1..n as roots; that's pretty trivial. Then use the identities.
It's just the first thing that comes to mind when I see sums of powers.
I think it is nicely compatible with modular arithmetic: there are only multiplications and additions.
I must admit that Newton's identities are only a rearrangement of the terms, so there is not much speed gain here.
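For what it's worth, here is a minimal Python sketch of that rearrangement (illustrative names; building the elementary symmetric polynomials of 1..n is still O(n*k) work, so as said above it won't beat brute force for large n):

def power_sum_newton(n, k, MOD):
    m = min(n, k)
    # e[j] = j-th elementary symmetric polynomial of {1, ..., n}, mod MOD
    e = [1] + [0] * m
    for x in range(1, n + 1):
        for j in range(min(m, x), 0, -1):
            e[j] = (e[j] + e[j - 1] * x) % MOD
    # Newton's identities: p_i = sum_{j=1..i-1} (-1)^(j-1) e_j p_(i-j) + (-1)^(i-1) i e_i
    p = [0] * (k + 1)      # p[i] = 1^i + 2^i + ... + n^i  (mod MOD)
    p[0] = n % MOD
    for i in range(1, k + 1):
        acc, sign = 0, 1
        for j in range(1, min(i, m) + 1):
            term = i * e[j] if j == i else e[j] * p[i - j]
            acc = (acc + sign * term) % MOD
            sign = -sign
        p[i] = acc
    return p[k]

# power_sum_newton(3, 2, 10**9 + 7) == 14   (1 + 4 + 9)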
JUST USE PYTHON
k = int(input("Enter value for K: "))
n = int(input("Enter value for N: "))
mod = int(input("Enter value for MOD: "))
total = 0
for i in range(1, n + 1):
    total += pow(i, k, mod)  # pow with a modulus keeps the intermediate values small
result = total % mod
print(result)
Maybe this code is going to help.
I agree that math.stackexchange.com is a better bet.
But here are random facts that, depending on parameters, may make the problem more manageable.
First, factor MOD, solve for each prime power factor, then use the Chinese Remainder Theorem to find the answer for MOD. Thus without loss of generality, you may assume that MOD is a prime power.
Next, note that j^k mod MOD is periodic in j with period MOD. The sum therefore splits into floor(n / MOD) copies of the block 1^k + ... + MOD^k plus a partial block of n mod MOD terms, so after accounting for the full blocks you can effectively replace n by n mod MOD.
Next, if MOD = p^i and j is not divisible by p, then j^((p-1) * p^(i-1)) is 1 mod MOD, so we can reduce the size of k.
Of course, if k and n are both smaller than MOD and MOD is prime, this will not help you at all. (Which, depending on how this problem arises, may well be the case.)
(If k is small enough, there are explicit formulas that you can produce for the sum. But it seems that for you k can be large enough to make that approach intractable.)
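If it helps, here is a minimal sketch of the CRT recombination step mentioned above (the function names are mine; it assumes you already have the answer modulo each pairwise-coprime factor of MOD, and pow(m1, -1, m2) needs Python 3.8+):

def crt_pair(r1, m1, r2, m2):
    # Combine x = r1 (mod m1) and x = r2 (mod m2) for coprime m1, m2:
    # solve r1 + m1*t = r2 (mod m2)  =>  t = (r2 - r1) * m1^(-1)  (mod m2)
    t = ((r2 - r1) * pow(m1, -1, m2)) % m2
    return (r1 + m1 * t) % (m1 * m2), m1 * m2

def crt(residues, moduli):
    r, m = residues[0], moduli[0]
    for r2, m2 in zip(residues[1:], moduli[1:]):
        r, m = crt_pair(r, m, r2, m2)
    return r

# e.g. if the power sum is 3 mod 8 and 5 mod 9, then mod 72 it is:
# crt([3, 5], [8, 9]) == 59    (59 % 8 == 3, 59 % 9 == 5)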
I was looking at lecture 2 of the Stanford machine learning lecture series taught by Professor Andrew Ng and had a question about something that I think might be quite rudimentary but just isn't clicking in my head. So let us consider two vectors θ and x, where both vectors contain real numbers.
Let h(x) be a function (in this specific case called the hypothesis) defined by:
h(x) = Σ_{i=0}^{n} θ_i * x_i = θ^T * x
I don't understand the last part, where he says h(x) is also equal to θ^T * x.
If someone could clarify this concept for me I would be very grateful.
It's just basic linear algebra; it follows from the definition of matrix-vector multiplication:
if θ and x are both (n+1) × 1 matrices (column vectors), then θ^T is a 1 × (n+1) row vector, and the product θ^T x is a 1 × 1 matrix whose single entry is θ_0 x_0 + θ_1 x_1 + ... + θ_n x_n, i.e. exactly the summation above.
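A tiny numerical check of the identity, if it helps (illustrative values, n = 2):

import numpy as np

theta = np.array([1.0, 2.0, 3.0])   # θ_0, θ_1, θ_2
x     = np.array([4.0, 5.0, 6.0])   # x_0, x_1, x_2

explicit_sum = sum(theta[i] * x[i] for i in range(3))  # θ_0*x_0 + θ_1*x_1 + θ_2*x_2
dot_product  = theta @ x                               # θ^T x, the same number (32.0)

assert explicit_sum == dot_product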
I have a somewhat math-oriented problem. I have a bunch of bitfields and would like to calculate what subset of them to xor together to achieve a certain other bitfield, or if there isn't a way to do it discover that no such subset exists.
I'd like to do this using a free library, rather than original code, and I'd strongly prefer something with Python bindings (using Python's built-in math libraries would be acceptable as well, but I want to port this to multiple languages eventually). Also it would be good to not take the memory hit of having to expand each bit to its own byte.
Some further clarification: I only need a single solution. My matrices are the opposite of sparse. I'm very interested in keeping the runtime to an absolute minimum, so using algorithmically fancy methods for inverting matrices is strongly preferred. Also, it's very important that the specific given bitfield be the one outputted, so a technique which just finds a subset which xor to 0 doesn't quite cut it.
And I'm generally aware of Gaussian elimination. I'm trying to avoid doing this from scratch!
Cross-posted to MathOverflow, because it isn't clear what the right place for this question is: https://mathoverflow.net/questions/41036/how-to-find-which-subset-of-bitfields-xor-to-another-bitfield
Mathematically speaking, the XOR of two bits can be treated as addition in the field F_2.
You want to solve a system of equations over F_2. For four bitfields with bits (a_0, a_1, ..., a_n), (b_0, b_1, ..., b_n), (c_0, c_1, ..., c_n), (r_0, r_1, ..., r_n), you get the equations:
x * a_0 + y * b_0 + z * c_0 = r_0
x * a_1 + y * b_1 + z * c_1 = r_1
...
x * a_n + y * b_n + z * c_n = r_n
(where you look for x, y, z).
You could program this as a simple integer linear problem with glpk, or probably lp_solve (but I don't remember if it will fit). These might work very slowly though, as they are trying to solve a much more general problem.
After googling for a while, it seems that this page might be a good start looking for code. From descriptions it seems that Dixon and LinBox could be a good fit.
Anyway, I think asking at mathoverflow might give you more precise answers. If you do, please link your question here.
Update: Sagemath uses M4RI for solving this problem. This makes it (for me) a very good recommendation.
For small instances that easily fit in memory, this is just solving a linear system over F_2, so try mod-2 Gaussian elimination. For very large sparse instances, like those that occur in factoring (sieve) algorithms, look up the Wiedemann algorithm.
It's possible to have multiple subsets xor to the same value; do you care about finding all subsets?
A perhaps heavy-handed approach would be to filter the powerset of bitfields. In Haskell:
import Data.Bits

xorsTo :: Int -> [Int] -> [[Int]]
xorsTo target fields = filter xorsToTarget (powerset fields)
  where xorsToTarget f = (foldl xor 0 f) == target

powerset [] = [[]]
powerset (x:xs) = powerset xs ++ map (x:) (powerset xs)
Not sure if there is a way to do this without generating the powerset. (In the worst case, it is possible for the solution to actually be the entire powerset).
Expanding on liori's answer above, we have a linear system of equations (in modulo-2 arithmetic):
a0, b0, c0 ...| r0
a1, b1, c1 ...| r1
... |
an, bn, cn ...| rn
Gaussian elimination can be used to solve the system. In modulo 2, the add row operation becomes an XOR operation. It is much simpler computationally to do this than to use a generic linear systems solver.
So, if a0 is zero we swap up a row that has a 1 in the a position. Then perform an XOR (using row 0) on any other row whose "a" bit is 1. Then repeat using row 1 and column b, then row 2 and column c, etc.
If you get a row of zeroes with a non-zero in the r column, then no such subset exists.
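For completeness, here is a minimal Python sketch of GF(2) elimination that keeps each bitfield packed in a plain int, so no bit gets expanded to a byte. The names are illustrative; it tracks, for every reduced row, which original bitfields were XORed together, and it returns the indices of one subset whose XOR equals the target, or None if no such subset exists:

def xor_subset(fields, target):
    basis = {}                   # leading-bit position -> (reduced value, combination mask)
    for i, f in enumerate(fields):
        v, combo = f, 1 << i     # combo records which original fields were XORed in
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = (v, combo)
                break
            bv, bc = basis[lead]
            v ^= bv              # the row operation: XOR away the leading bit
            combo ^= bc
    # Now express the target in terms of the reduced rows
    t, combo = target, 0
    while t:
        lead = t.bit_length() - 1
        if lead not in basis:
            return None          # a target bit that no row can clear: no subset exists
        bv, bc = basis[lead]
        t ^= bv
        combo ^= bc
    return [i for i in range(len(fields)) if (combo >> i) & 1]

# Example: fields 0b1100 and 0b1010 XOR to 0b0110
# xor_subset([0b1100, 0b1010, 0b0001], 0b0110) -> [0, 1]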