I am wondering how to calculate x^y mod z, where x and y are very large (they can't fit in a 64-bit integer) and z fits in a 64-bit integer.
And one thing: please don't give answers like "x^y mod z is the same as (x mod z)^y mod z".
Here is the standard method, in pseudo-code:
function powerMod(b, e, m)
    x := 1
    while e > 0
        if e % 2 == 1
            x := (b * x) % m
            e := e - 1
        else
            b := (b * b) % m
            e := e / 2
    return x
The algorithm is known as exponentiation by squaring or the square and multiply method.
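For example, here is a direct translation of that pseudocode into Python (a sketch only; Python integers are arbitrary precision, so the very large x and y are not a problem, and the built-in three-argument pow(b, e, m) does the same job):

def power_mod(b, e, m):
    # right-to-left square-and-multiply: computes (b ** e) % m without ever forming b ** e
    x = 1
    b %= m
    while e > 0:
        if e % 2 == 1:
            x = b * x % m
            e -= 1
        else:
            b = b * b % m
            e //= 2
    return x

print(power_mod(123456789, 987654321, 1000000007))  # same result as pow(123456789, 987654321, 1000000007)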
If you are using Java, then turn x, y, and z into BigInteger.
BigInteger xBI = new BigInteger(x);
BigInteger yBI = new BigInteger(y);
BigInteger zBI = new BigInteger(z);
BigInteger answer = xBI.modPow(yBI,zBI);
Suppose I want to find any values of x and y that satisfy x * W + y * D = P.
This can be done using the extended Euclidean algorithm:
int exgcd(int a, int b, int &x, int &y)
{
    if (b == 0)
    {
        x = 1;
        y = 0;
        return a;
    }
    int g, xi, yi;
    g = exgcd(b, a % b, xi, yi);
    x = yi;
    y = xi - (a / b * yi);
    return g;
}
But this will be just some arbitrary x and y satisfying the equation.
Suppose I add an additional constraint: I want any x, y, z such that
x >= 0, y >= 0, z >= 0 and x + y + z = n.
How can I efficiently (please share code/pseudocode if possible) find all such x, y and z?
My question boils down to finding any x and y (using the extended Euclidean algorithm) which
1) satisfy a linear equation
2) fall in a given range
Here is the link to the question if you want.
OK, so we have 2 equations and 3 unknowns, so doing some simple mathematics we can find the equations we need to solve:
x * W + y * D + z * 0 = p
And
x + y + z = n
given
x,y,z >=0
So first we reduce one unknown by looping over one of the unknowns, say z.
We iterate z from 0 to n, and our new equations become
x * w + y * d = p
And
x + y = m   (where m is n - z for the value of z in the current iteration)
Now we have 2 equations and 2 unknowns
So our solution for x and y can now be found by substituting x = m - y (from the second equation) into the first equation,
which makes it
(m - y) * w + y * d = p
This can be reduced to
y = (p - m * w) / (d - w)
and
x = m - y
You can stop at the first value of z for which x and y are nonnegative integers.
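Here is a minimal Python sketch of that loop (the function name and the handling of the degenerate case d == w are my own additions, not from the original answer):

def find_xyz(w, d, p, n):
    # try each z in 0..n and solve x + y = m, x*w + y*d = p for the remaining pair
    for z in range(n + 1):
        m = n - z
        if d == w:
            # degenerate case: the equation becomes m*w = p, and any split of m works
            if m * w == p:
                return m, 0, z
            continue
        num = p - m * w
        if num % (d - w) == 0:
            y = num // (d - w)
            x = m - y
            if x >= 0 and y >= 0:
                return x, y, z
    return None  # no solution

print(find_xyz(3, 5, 21, 5))  # (2, 3, 0): 2*3 + 3*5 = 21 and 2 + 3 + 0 = 5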
Given is a bitstream (continuous string of bits too long to be processed at once) and the result should be a matching stream of base20 numbers.
The process is simple for a small number of bits:
Assuming the most significant bit is on the right:
110010011 = decimal 403 (1 * 1 + 1 * 2 + 1 * 16 + 1 * 128 + 1 * 256)
403 / 20 = 20 R 3
20 / 20 = 1 R 0
1 / 20 = 0 R 1
Result is [3, 0, 1] = 3 * 1 + 0 * 20 + 1 * 400
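Here is a small Python sketch of exactly those steps (Python integers are unbounded, so this only works when the whole bit string is available at once, which is what the rest of the question is trying to avoid relying on):

def bits_to_base20(bits):
    # bits is a string with the most significant bit on the right, as in the example
    value = sum(int(b) << i for i, b in enumerate(bits))  # "110010011" -> 403
    digits = []                                           # base-20 digits, least significant first
    while value > 0:
        value, r = divmod(value, 20)
        digits.append(r)
    return digits or [0]

print(bits_to_base20("110010011"))  # [3, 0, 1]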
But what if there are too many bits to be converted to a decimal number in one step?
My approach was to do both processes in a loop: convert the bits to a decimal number and convert that decimal number down to base-20 digits. This requires the multipliers (position values) to be reduced while walking through the bits, because otherwise they quickly grow too large to be calculated properly; the 64th bit would have to be multiplied by 2^64, and so on.
Note: I understood the question to mean that a bitstream of unknown length arrives over an unknown duration, and a live conversion from base 2 to base 20 should be made.
I do not believe this can be done in a single go. The problem is that base 20 and base 2 have no common ground, and the rules of modular arithmetic do not allow us to solve the problem cleanly.
(a+b) mod n = ( (a mod n) + (b mod n) ) mod n
(a*b) mod n = ( (a mod n) * (b mod n) ) mod n
(a^m) mod n = ( (a mod n)^m ) mod n
Now if you have a number A written in base p and q (p < q) as
A = Sum[a[i] p^i, i=0->n] = Sum[b[i] q^i, i=0->n]
Then we know that b[0] = A mod q. However, we do not know A and hence, the above tells us that
b[0] = A mod q = Sum[a[i] p^i, i=0->n] mod q
= Sum[ (a[i] p^i) mod q, i=0->n] mod q
= Sum[ ( (a[i] mod q) (p^i mod q) ) mod q, i=0->n] mod q
This implies that:
If you want to know the lowest digit b0 of a number in base q, you need to have the knowledge of the full number.
This can only be simplified if q = p^m, as
b[0] = A mod q = Sum[a[i] p^i, i=0->n] mod q
= Sum[ (a[i] p^i) mod q, i=0->n] mod q
= Sum[ a[i] p^i, i=0->m-1]
So in short: since q = 20 is not a power of p = 2, I have to say no, it cannot be done in a single pass. Furthermore, remember that I only spoke about the first digit in base q, not yet about the i-th digit.
As an example, imagine a bit stream of 1000 zeros followed by a single 1. This represents the number 2^1000. The first digit is easy, but to get any other digit ... you are essentially in a rather tough spot.
Is it safe to replace a/(b*c) with a/b/c when using integer division on positive integers a, b, c, or am I at risk of losing information?
I did some random tests and couldn't find an example of a/(b*c) != a/b/c, so I'm pretty sure it's safe, but I'm not quite sure how to prove it.
Thank you.
Mathematics
As mathematical expressions, ⌊a/(bc)⌋ and ⌊⌊a/b⌋/c⌋ are equivalent whenever b is nonzero and c is a positive integer (and in particular for positive integers a, b, c). The standard reference for these sorts of things is the delightful book Concrete Mathematics: A Foundation for Computer Science by Graham, Knuth and Patashnik. In it, Chapter 3 is mostly on floors and ceilings, and this is proved on page 71 as part of a far more general result, equation (3.10): ⌊f(x)⌋ = ⌊f(⌊x⌋)⌋ whenever f is a continuous, monotonically increasing function with the property that f(x) being an integer implies x is an integer.
In (3.10), you can define x = a/b (mathematical, i.e. real, division) and f(x) = x/c (exact division again), and plug those into ⌊f(x)⌋ = ⌊f(⌊x⌋)⌋ (after verifying that the conditions on f hold here) to get ⌊a/(bc)⌋ on the LHS equal to ⌊⌊a/b⌋/c⌋ on the RHS.
If we don't want to rely on a reference in a book, we can prove ⌊a/(bc)⌋ = ⌊⌊a/b⌋/c⌋ directly using their methods. Note that with x = a/b (the real number), what we're trying to prove is that ⌊x/c⌋ = ⌊⌊x⌋/c⌋. So:
if x is an integer, then there is nothing to prove, as x = ⌊x⌋.
Otherwise, ⌊x⌋ < x, so ⌊x⌋/c < x/c which means that ⌊⌊x⌋/c⌋ ≤ ⌊x/c⌋. (We want to show it's equal.) Suppose, for the sake of contradiction, that ⌊⌊x⌋/c⌋ < ⌊x/c⌋ then there must be a number y such that ⌊x⌋ < y ≤ x and y/c = ⌊x/c⌋. (As we increase a number from ⌊x⌋ to x and consider division by c, somewhere we must hit the exact value ⌊x/c⌋.) But this means that y = c*⌊x/c⌋ is an integer between ⌊x⌋ and x, which is a contradiction!
This proves the result.
Programming
#include <stdio.h>

int main() {
    unsigned int a = 142857;
    unsigned int b = 65537;
    unsigned int c = 65537;
    printf("a/(b*c) = %u\n", a / (b * c));  /* %u for unsigned int */
    printf("a/b/c = %u\n", a / b / c);
}
prints (with 32-bit integers),
a/(b*c) = 1
a/b/c = 0
(I used unsigned integers as overflow behaviour for them is well-defined, so the above output is guaranteed. With signed integers, overflow is undefined behaviour, so the program can in fact print (or do) anything, which only reinforces the point that the results can be different.)
But if you don't have overflow, then the values you get in your program are equal to their mathematical values (that is, a/(b*c) in your code is equal to the mathematical value ⌊a/(bc)⌋, and a/b/c in code is equal to the mathematical value ⌊⌊a/b⌋/c⌋), which we've proved are equal. So it is safe to replace a/(b*c) in code by a/b/c when b*c is small enough not to overflow.
While b*c could overflow (in C) for the original computation, a/b/c can't overflow, so we don't need to worry about overflow for the forward replacement a/(b*c) -> a/b/c. We would need to worry about it the other way around, though.
Let x = a/b/c. Then a/b == x*c + y for some y < c, and a == (x*c + y)*b + z for some z < b.
Thus, a == x*b*c + y*b + z. Since y*b + z is at most b*c - 1, we get x*b*c <= a < (x+1)*b*c, and hence a/(b*c) == x.
Thus, a/b/c == a/(b*c), and replacing a/(b*c) by a/b/c is safe.
Nested floor division can be reordered as long as you keep track of your divisors and dividends.
#python3.x
x // m // n = x // (m * n)
#python2.x
x / m / n = x / (m * n)
Proof (sucks without LaTeX :( ) in python3.x:
Let k = x // m
then k <= x / m < k + 1 (by the definition of floor division)
so k / n <= x / (m * n) < (k + 1) / n
Taking floors of the left inequality gives k // n <= x // (m * n)
Now, suppose k // n < x // (m * n)
then x / (m * n) >= x // (m * n) >= k // n + 1
so x / m >= n * (k // n) + n >= k + 1 (since n * (k // n) >= k - n + 1)
and this contradicts the statement above that x / m < k + 1
so k // n = x // (m * n)
and (x // m) // n = x // (m * n)
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions#Nested_divisions
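As a quick sanity check of the identity (not part of the original answer, just a brute-force verification over small values in Python 3):

# exhaustively verify x // m // n == x // (m * n) for small nonnegative x and positive m, n
assert all(
    x // m // n == x // (m * n)
    for x in range(500)
    for m in range(1, 20)
    for n in range(1, 20)
)
print("identity holds for all tested values")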
Link to the question:
http://codeforces.com/contest/615/problem/D
Link to the solution:
http://codeforces.com/contest/615/submission/15260890
In the code below I am not able to understand why 1 is subtracted from MOD,
where MOD = 1000000007.
ll d = 1;
ll ans = 1;
for (auto x : cnt) {
    ll cnt = x.se;
    ll p = x.fi;
    ll fp = binPow(p, (cnt + 1) * cnt / 2, MOD);
    ans = binPow(ans, (cnt + 1), MOD) * binPow(fp, d, MOD) % MOD;
    d = d * (x.se + 1) % (MOD - 1); // why ??
}
Apart from the fact that the code does not make much sense out of context as it is, there is Fermat's little theorem:
Whenever MOD is a prime number, as 10^9+7 is, one can reduce exponents by multiples of (MOD-1) as for any a not a multiple of MOD
a ^ (MOD-1) == 1 mod MOD.
Which means that
a^b == a ^ (b mod (MOD-1)) mod MOD.
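A quick illustration in Python (the concrete values of a and b are arbitrary examples, not taken from the problem):

MOD = 10**9 + 7
a, b = 12345, 987654321
# the exponent can be reduced mod (MOD - 1) because MOD is prime and a is not a multiple of MOD
print(pow(a, b, MOD) == pow(a, b % (MOD - 1), MOD))  # True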
As to the code, which is efficient for its task, consider n=m*p^e where m is composed of primes smaller than p.
Then for each factor f of m there are factors 1*f, p*f, p^2*f,...,p^e*f of n. The product over all factors of n thus is the product over
p^(0+1+2+...+e) * f^(e+1) = p^( e*(e+1)/2 ) * f^(e+1)
over all factors f of m. Writing the number of factors of m as d and the product of the factors of m as ans results in the combined formulas
ans = ans^( e+1 ) * p^( d*e*(e+1)/2 )
d = d*(e+1)
which can now be recursively applied to the list of prime factors and their multiplicities.
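To make the recurrence concrete, here is a hedged Python sketch that computes the product of all divisors of n modulo MOD; the trial-division factorisation is an added assumption, not part of the original C++ snippet:

MOD = 10**9 + 7

def product_of_divisors(n):
    # factorise n by trial division (assumed helper, not in the original code)
    factors = {}
    m, p = n, 2
    while p * p <= m:
        while m % p == 0:
            factors[p] = factors.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1

    ans = 1  # product of the divisors processed so far, mod MOD
    d = 1    # number of divisors processed so far, kept mod (MOD - 1) because it is used as an exponent
    for p, e in factors.items():
        fp = pow(p, e * (e + 1) // 2, MOD)
        ans = pow(ans, e + 1, MOD) * pow(fp, d, MOD) % MOD
        d = d * (e + 1) % (MOD - 1)
    return ans

print(product_of_divisors(12))  # divisors 1, 2, 3, 4, 6, 12 -> product 1728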
I've got stuck on this problem :
Given a, b and c, three
natural numbers (such that 1 <= a, b, c <= 10^9), you are supposed to find the last digit of the number a^b^c.
The first thing I thought of was the O(log n) algorithm for raising a to the power n:
int acc = 1; // accumulator
while (n > 0) {
    if (n % 2 == 1)
        acc *= a;
    a = a * a;
    n /= 2;
}
Obviously, some basic math might help, like the "last digit" stuff :
Last_digit(2^n) = Last_digit(2^(n%4))
Where n%4 is the remainder of the division n/4
In a nutshell, I've tried to combine these, but I couldn't find the right approach.
Some help would really be appreciated.
The problem is that b^c may be very large. So you want to reduce it before using the standard modular exponentiation.
You can remark that a^(b^c) MOD 10 can have a maximum of 10 different values.
Because of the pigeonhole principle, there will be a number p such that for some r:
a^r MOD 10 = a^(p+r) MOD 10
p <= 10
r <= 10
This implies that for any q:
a^r MOD 10 = a^r*a^p MOD 10
= (a^r*a^p)*a^p MOD 10
= ...
= a^(r+q*p) MOD 10
For any n = s+r+q*p, with s < p you have:
a^n MOD 10 = a^s*a^(r+q*p) MOD 10
= a^s*a^r MOD 10
= a^((n-r) MOD p)*a^r MOD 10
You can just replace n = b^c in the previous equation.
You only need to compute (b^c - r) MOD p, where p <= 10, which is easily done, and then compute a^((b^c - r) MOD p) * a^r MOD 10.
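Here is a small Python sketch of this approach (the cycle search and the variable names are my own additions, not from the answer):

def last_digit(a, b, c):
    # last digit of a**(b**c) for a, b, c >= 1
    base = a % 10
    # find the pre-period r and period p of base**n mod 10 for n = 1, 2, ...
    seen = {}                # value -> first exponent at which it appeared
    val, n = base, 1
    while val not in seen:
        seen[val] = n
        val = val * base % 10
        n += 1
    r = seen[val]            # pre-period (mod 10 this is always 1, so b**c >= r holds)
    p = n - r                # period length, at most 4 mod 10
    # a**(b**c) mod 10 = a**(r + ((b**c - r) mod p)) mod 10
    e = (pow(b, c, p) - r) % p
    return pow(base, r + e, 10)

print(last_digit(2, 3, 3))   # 2**27 = 134217728 ends in 8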
Like I mentioned in my comments, this really doesn't have much to do with smart algorithms. The problem can be reduced completely using some elementary number theory. This will yield an O(1) algorithm.
The Chinese remainder theorem says that if we know some number x modulo 2 and modulo 5, we know it modulo 10. So finding a^b^c modulo 10 can be reduced to finding a^b^c modulo 2 and a^b^c modulo 5. Fermat's little theorem says that for any prime p, if p does not divide a, then a^(p-1) = 1 (mod p), so a^n = a^(n mod (p-1)) (mod p). If p does divide a, then obviously a^n = 0 (mod p) for any n > 0. Note that x^n = x (mod 2) for any n>0, so a^b^c = a (mod 2).
What remains is to find a^b^c mod 5, which reduces to finding b^c mod 4. Unfortunately, we can use neither the Chinese remainder theorem nor Fermat's little theorem here. However, mod 4 there are only 4 possibilities for b, so we can check them separately. If we start with b = 0 (mod 4) or b = 1 (mod 4), then of course b^c = b (mod 4). If we have b = 2 (mod 4) then it is easily seen that b^c = 2 (mod 4) if c = 1, and b^c = 0 (mod 4) if c > 1. If b = 3 (mod 4) then b^c = 3 (mod 4) if c is odd, and b^c = 1 (mod 4) if c is even. This gives us b^c (mod 4) for any b and c, which then gives us a^b^c (mod 5), all in constant time.
Finally with a^b^c = a (mod 2) we can use the Chinese remainder theorem to find a^b^c (mod 10). This requires a mapping between (x (mod 2), y (mod 5)) and z (mod 10). The Chinese remainder theorem only tells us that this mapping is bijective, it doesn't tell us how to find it. However, there are only 10 options, so this is easily done on a piece of paper or using a little program. Once we find this mapping we simply store it in an array, and we can do the entire calculation in O(1).
By the way, this would be the implementation of my algorithm in python:
# this table only needs to be calculated once
# can also be hard-coded
mod2mod5_to_mod10 = [[0 for i in range(5)] for j in range(2)]
for i in range(10):
    mod2mod5_to_mod10[i % 2][i % 5] = i

[a, b, c] = [int(input()) for i in range(3)]

if a % 5 == 0:
    abcmod5 = 0
else:
    bmod4 = b % 4
    if bmod4 == 0 or bmod4 == 1:
        bcmod4 = bmod4
    elif bmod4 == 2:
        if c == 1:
            bcmod4 = 2
        else:
            bcmod4 = 0
    else:
        if c % 2 == 0:
            bcmod4 = 1
        else:
            bcmod4 = 3
    abcmod5 = ((a % 5) ** bcmod4) % 5

abcmod2 = a % 2
abcmod10 = mod2mod5_to_mod10[abcmod2][abcmod5]
print(abcmod10)