what is the result of modulo operation on (a-b)%n - math

Recently I solved a problem where I had to compute (a-b)%n. The results were self-explanatory when a-b was a positive number, but for negative numbers the results I got seemed confusing. I just wanted to know how we can calculate this result for negative numbers.
Any links dealing with the properties of the modulo operator are most welcome.

http://en.m.wikipedia.org/wiki/Modulo_operation
In many programming languages (C, Java) the modulo operator is defined so that the modulus has the same sign as the first operand. This means that the following equation holds:
(-a) % n = -(a % n)
For example, -8%3 would be -2, since 8%3 is 2.
Others, such as Python, instead compute a % n as the nonnegative remainder when dividing by n, which means (when a is not a multiple of n)
(-a) % n = n - (a % n)
For example, -8%3 is 1 because 3-(8%3) is 3-2 is 1.
Note that in modular arithmetic adding or subtracting any multiple of n does not change the result because "equality" (or congruence if you prefer that term) is defined with respect to divisibility: X is equal to 0 if it is a multiple of n, and A is equal to B if A-B is a multiple of n. For example -2 is equal to 1 modulo 3 because -2-1 = -3 is divisible by 3.
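A quick way to see the two conventions side by side in Python: the % operator gives a result with the sign of the divisor, while math.fmod (used here purely as an illustration) follows the C convention where the sign of the dividend wins:

```python
import math

# Python's % gives a result with the same sign as the divisor n
print(-8 % 3)            # 1, since -8 = (-3)*3 + 1
print(8 % -3)            # -1

# math.fmod follows the C convention: result has the sign of the dividend
print(math.fmod(-8, 3))  # -2.0, since -8 = (-2)*3 - 2
```

Both answers are congruent modulo 3 (1 and -2 differ by 3), which is exactly the point made above.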


How to solve the following recurrence relation

How do I solve the following recurrence relation?
f(n+2) = 2*f(n+1) - f(n) + 2 where n is even
f(n+2) = 3*f(n) where n is odd
f(1) = f(2) = 1
For odd n I could solve the recurrence, and it turns out to be a geometric series with common ratio 3.
When n is even I could find and solve the homogeneous part of the recurrence relation by substituting f(n) = r^n. The solution comes out to be r = 1 (a repeated root), so the homogeneous solution is c1 + c2*n. But how do I solve for the particular integral part? Am I on the right track? Are there any other approaches?
The recurrence for odd n is very easy to solve with the substitution you tried:
Substituting this into the recurrence for even n:
Attempt #1
Make a general substitution of the form:
Note that the exponent is n/2 instead of n, based on the odd recurrence; this is purely a matter of choice.
Matching the same types of terms:
But this solution doesn't work with the boundary condition f(2) = 1:
Attempt #2
It turns out that a second exponential term is required:
As before, one of the exponential terms needs to match 3^(n/2):
The last equation has solutions d = 0 and d = -1; obviously only the non-trivial one is useful:
The final solution for all n ≥ 2:
Alternative method
Longer, but (at least to me) more intuitive: expand the recurrence m times:
Observe the pattern:
The additive factor of 2 is present for odd number of expansions m but cancels out for even m.
Each expansion adds a factor of 2 * 3^(n/2-m) for odd m, and subtracts it for even m.
Each expansion also adds a factor of f(n-2m) for even m, and subtracts it for odd m.
Combining these observations to write a general closed form expression for the m-th expansion:
Using the standard formula for geometric series in the last step.
Recursion stops at f(2) = 1:
The same result as before.
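As a sanity check, the end result can be verified numerically. The closed form written below is my own reconstruction from the derivation sketched above, so treat it as an assumption to confirm: f(n) = 3^((n-1)/2) for odd n, and f(n) = (3^(n/2) + 3*(-1)^(n/2))/2 + 1 for even n.

```python
def f_rec(n):
    # direct iteration of the original recurrence
    f = {1: 1, 2: 1}
    for m in range(1, n):
        if m % 2 == 1:          # n odd:  f(n+2) = 3*f(n)
            f[m + 2] = 3 * f[m]
        else:                   # n even: f(n+2) = 2*f(n+1) - f(n) + 2
            f[m + 2] = 2 * f[m + 1] - f[m] + 2
    return f[n]

def f_closed(n):
    # closed form reconstructed from the derivation above (my assumption)
    if n % 2 == 1:
        return 3 ** ((n - 1) // 2)
    return (3 ** (n // 2) + 3 * (-1) ** (n // 2)) // 2 + 1

# the two definitions agree on a range of n
print(all(f_rec(n) == f_closed(n) for n in range(1, 30)))  # True
```

For example f(4) = 7, f(6) = 13, f(8) = 43 under both definitions.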

Mixing function for non power of 2 integer intervals

I'm looking for a mixing function that given an integer from an interval [0, n) returns a random-looking integer from the same interval. The interval size n will typically be a composite non power of 2 number. I need the function to be one to one. It can only use O(1) memory, O(1) time is strongly preferred. I'm not too concerned about randomness of the output, but visually it should look random enough (see next paragraph).
I want to use this function as a pixel shuffling step in a realtime-ish renderer to select the order in which pixels are rendered (The output will be displayed after a fixed time and if it's not done yet this gives me a noisy but fast partial preview). Interval size n will be the number of pixels in the render (n = 1920*1080 = 2073600 would be a typical value). The function must be one to one so that I can be sure that every pixel is rendered exactly once when finished.
I've looked at the reversible building blocks used by hash prospector, but these are mostly specific to power of 2 ranges.
The only other method I could think of is multiply by large prime, but it doesn't give particularly nice random looking outputs.
What are some other options here?
Here is one solution based on the idea of primitive roots modulo a prime:
If a is a primitive root mod p then the function g(i) = a^i % p is a permutation of the nonzero elements which are less than p. This corresponds to the Lehmer prng. If n < p, you can get a permutation of 0, ..., n-1 as follows: Given i in that range, first add 1, then repeatedly multiply by a, taking the result mod p, until you get an element which is <= n, at which point you return the result - 1.
To fill in the details, this paper contains a table which gives a series of primes (all of which are close to various powers of 2) and corresponding primitive roots which are chosen so that they yield a generator with good statistical properties. Here is a part of that table, encoded as a Python dictionary in which the keys are the primes and the primitive roots are the values:
d = {32749: 30805,
65521: 32236,
131071: 66284,
262139: 166972,
524287: 358899,
1048573: 444362,
2097143: 1372180,
4194301: 1406151,
8388593: 5169235,
16777213: 9726917,
33554393: 32544832,
67108859: 11526618,
134217689: 70391260,
268435399: 150873839,
536870909: 219118189,
1073741789: 599290962}
Given n (in a certain range -- see the paper if you need to expand that range), you can find the smallest p which works:
def find_p_a(n):
    for p in sorted(d.keys()):
        if n < p:
            return p, d[p]
Once you know n and the matching p, a, the following function is a permutation of 0 ... n-1:
def f(i,n,p,a):
    x = a*(i+1) % p
    while x > n:
        x = a*x % p
    return x-1
For a quick test:
n = 2073600
p,a = find_p_a(n) # p = 2097143, a = 1372180
nums = [f(i,n,p,a) for i in range(n)]
print(len(set(nums)) == n) #prints True
The average number of multiplications in f() is p/n, which in this case is 1.011 and will never be more than 2 (or only very slightly more, since the p are not exact powers of 2). In practice this method is not fundamentally different from your "multiply by a large prime" approach, but here the factor is chosen more carefully, and the fact that sometimes more than one multiplication is required adds to the apparent randomness.

People - Apple Puzzle [Inspired by client-puzzle protocol]

I am learning about the client-puzzle protocol and I have a question about counting the possible solutions. Instead of going into the dry protocol facts, here is a scenario:
Let's say I have x people and y apples:
Each person must have at least 1 apple
Each person can have at most z apples.
Is there a formula to calculate the number of scenarios?
Example:
4 people [x], 6 apples [y], 15 MAX apples [z]
No. of scenarios calculated by hand: 10.
If my number is very huge, I hope to calculate it using a formula.
Thank you for any help.
Your problem is equivalent to "find the number of ways you can get x by adding together z numbers, each of which lies between min and max." A sample Python implementation:
def possible_sums(x, z, min, max):
    if min*z > x or max*z < x:
        return 0
    if z == 1:
        if x >= min and x <= max:
            return 1
        else:
            return 0
    total = 0
    #iterate from min, up to and including max
    for i in range(min, max+1):
        total += possible_sums(x-i, z-1, min, max)
    return total

print(possible_sums(6, 4, 1, 15))
Result:
10
This function can become quite expensive when called with large numbers, but runtime can be improved with memoization. How this can be accomplished depends on the language, but the conventional Python approach is to store previously calculated values in a dictionary.
def memoize(fn):
    results = {}
    def f(*args):
        if args not in results:
            results[args] = fn(*args)
        return results[args]
    return f

@memoize
def possible_sums(x, z, min, max):
    #rest of code goes here

Now print(possible_sums(60, 40, 1, 150)), which would have taken a very long time to calculate, returns 2794563003870330 in an instant.
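As a side note, the same memoization is available from the standard library via functools.lru_cache. A minimal sketch (the parameter names lo/hi are my own, renamed to avoid shadowing the min/max builtins):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def possible_sums(x, z, lo, hi):
    # count the ways to write x as a sum of z numbers, each in [lo, hi]
    if lo * z > x or hi * z < x:
        return 0
    if z == 1:
        return 1 if lo <= x <= hi else 0
    return sum(possible_sums(x - i, z - 1, lo, hi) for i in range(lo, hi + 1))

print(possible_sums(6, 4, 1, 15))  # 10
```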
There are ways to do this mathematically. It is similar to asking how many ways there are to roll a total of 10 on 3 6-sided dice (x=3, y=10, z=6). You can implement this in a few different ways.
One approach is to use inclusion-exclusion. The number of ways to write y as a sum of x positive numbers with no maximum is (y-1 choose x-1) by the stars-and-bars argument. The number of ways to write y as a sum of x positive numbers so that a particular set of s of them are each at least z+1 is 0 if y-x-sz is negative, and (y-1-sz choose x-1) if it is nonnegative. Inclusion-exclusion then gives the count as the sum, over the nonnegative values of s for which y-x-sz is nonnegative, of (-1)^s (x choose s)(y-1-sz choose x-1).
You can use generating functions. You can let powers of some variable, say t, hold the total, and the coefficients say how many combinations there are with that total. Then you are asking for the coefficient of t^y in (t+t^2+...+t^z)^x. You can compute this in a few ways.
One approach is with dynamic programming, computing coefficients of (t+t^2+...+t^z)^k for k up to x. The naive approach is probably fast enough: You can compute this for k=1, 2, 3, ..., x. It is a bit faster to use something like repeated squaring, e.g., to compute the 87th power, you could expand 87 in binary as 64+16+4+2+1=0b1010111 (written as a binary literal). You could compute the 1st, 2nd, 4th, 16th, and 64th powers by squaring and multiply these, or you could compute the 0b1, 0b10, 0b101, 0b1010, 0b10101, 0b101011, and 0b1010111 powers by squaring and multiplying to save a little space.
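The naive dynamic-programming version can be sketched as follows (the function name and details are mine; it multiplies the polynomial x times and truncates everything above degree y, since higher terms cannot contribute):

```python
def coeff_count(x, y, z):
    # coefficient of t^y in (t + t^2 + ... + t^z)^x,
    # computed by x naive polynomial multiplications, truncated at degree y
    poly = [1]                      # start with the constant polynomial 1
    for _ in range(x):
        new = [0] * min(len(poly) + z, y + 1)
        for i, c in enumerate(poly):
            for j in range(1, z + 1):
                if i + j < len(new):
                    new[i + j] += c
        poly = new
    return poly[y] if y < len(poly) else 0

print(coeff_count(4, 6, 15))   # 10, matching the apple example
print(coeff_count(3, 10, 6))   # 27, ways to roll a total of 10 on 3 dice
```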
Another approach is to use the binomial theorem twice.
(t+t^2+...+t^z)^x = t^x ((t^z-1)/(t-1))^x
= t^x (t^z-1)^x (t-1)^-x.
The binomial theorem with exponent x lets us rewrite (t^z-1)^x as a sum of (-1)^s t^(z(x-s))(x choose s) where s ranges from 0 to x. It also lets us rewrite (t-1)^-x as an infinite sum of (r+x-1 choose x-1)t^r over nonnegative r. Then we can pick out the finite set of terms which contribute to the coefficient of t^y (r = y-x-sz), and we get the same sum as by inclusion-exclusion above.
For example, suppose we have x=1000, y=1100, z=30. The value is about 1.29 x 10^144.
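The inclusion-exclusion sum above can be written in a few lines (the function name is mine; math.comb needs Python 3.8+, and it conveniently returns 0 when s exceeds x):

```python
from math import comb

def count_distributions(x, y, z):
    # sum over s of (-1)^s * C(x, s) * C(y - 1 - s*z, x - 1),
    # for all nonnegative s with y - x - s*z >= 0
    total = 0
    s = 0
    while y - x - s * z >= 0:
        total += (-1) ** s * comb(x, s) * comb(y - 1 - s * z, x - 1)
        s += 1
    return total

print(count_distributions(4, 6, 15))  # 10, the hand-counted answer above
print(count_distributions(3, 10, 6))  # 27, ways to roll 10 with three dice
```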

Comparison Sort - Theoretical

Can someone explain the solution of this problem to me?
Suppose that you are given a sequence of n elements to sort. The input sequence
consists of n/k subsequences, each containing k elements. The elements in a given
subsequence are all smaller than the elements in the succeeding subsequence and
larger than the elements in the preceding subsequence. Thus, all that is needed to
sort the whole sequence of length n is to sort the k elements in each of the n/k
subsequences. Show an Ω(n lg k) lower bound on the number of comparisons
needed to solve this variant of the sorting problem.
Solution:
Let S be a sequence of n elements divided into n/k subsequences each of length k
where all of the elements in any subsequence are larger than all of the elements
of a preceding subsequence and smaller than all of the elements of a succeeding
subsequence.
Claim
Any comparison-based sorting algorithm to sort S must take Ω(n lg k) time in the
worst case.
Proof
First notice that, as pointed out in the hint, we cannot prove the lower
bound by multiplying together the lower bounds for sorting each subsequence.
That would only prove that there is no faster algorithm that sorts the subsequences
independently. This was not what we are asked to prove; we cannot introduce any
extra assumptions.
Now, consider the decision tree of height h for any comparison sort for S. Since
the elements of each subsequence can be in any order, any of the k! permutations
could correspond to the final sorted order of a subsequence. And, since there are n/k such
subsequences, each of which can be in any order, there are (k!)^(n/k) permutations
of S that could correspond to the sorting of some input order. Thus, any decision
tree for sorting S must have at least (k!)^(n/k) leaves. Since a binary tree of height h
has no more than 2^h leaves, we must have 2^h ≥ (k!)^(n/k), or h ≥ lg((k!)^(n/k)). We
therefore obtain
h ≥ lg((k!)^(n/k))
= (n/k) lg(k!)
≥ (n/k) lg((k/2)^(k/2))
= (n/2) lg(k/2)
The third line comes from k! having its k/2 largest terms being at least k/2 each.
(We implicitly assume here that k is even. We could adjust with floors and ceilings
if k were odd.)
Since there exists at least one path in any decision tree for sorting S that has length
at least (n/2) lg(k/2), the worst-case running time of any comparison-based sorting
algorithm for S is Ω((n/2) lg(k/2)) = Ω(n lg k).
Can someone walk me through the steps in the code block? Especially the step where lg(k!) becomes lg((k/2)^(k/2)).
I've reprinted the math below:
(1)      h ≥ lg((k!)^(n/k))
(2)        = (n/k) lg(k!)
(3)        ≥ (n/k) lg((k/2)^(k/2))
(4)        = (n/2) lg(k/2)
Let's walk through this. Going from line (1) to line (2) uses properties of logarithms. Similarly, going from line (3) to line (4) uses properties of logarithms and the fact that (n / k)(k / 2) = (n / 2). So the tricky step is going from line (2) to line (3).
The claim here is the following:
For all k, k! ≥ (k / 2)^(k / 2)
Intuitively, the idea is as follows. Consider k! = k(k - 1)(k - 2)...(2)(1). If you'll notice, half of these terms are greater than k / 2 and half of them are smaller. If we drop all the terms that are less than k / 2, we get something (close to) the following:
k! ≥ k(k - 1)(k - 2)...(k / 2)
Now, each of these remaining terms is at least k / 2, so we have that
k! ≥ k(k - 1)(k - 2)...(k / 2) ≥ (k/2)(k/2)...(k/2)
This is the product of (k / 2) with itself (k / 2) times, so it's equal to (k / 2)^(k / 2). This math isn't precise because the logic for odd and even values is a bit different, but using essentially this idea you get a sketch of the proof of the earlier result.
To summarize: from (1) to (2) and from (3) to (4) uses properties of logarithms, and from (2) to (3) uses the above result.
Hope this helps!
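As an empirical sanity check of the k! ≥ (k/2)^(k/2) claim (my own addition; restricted to even k and integer arithmetic to sidestep the odd/even subtleties mentioned above):

```python
from math import factorial

# check k! >= (k/2)^(k/2) for even k up to 40
for k in range(2, 41, 2):
    assert factorial(k) >= (k // 2) ** (k // 2), k
print("inequality holds for all even k up to 40")
```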

finding a/b mod c

I know this may seem like a math question, but I just saw this in a contest and I really want to know how to solve it.
We have
a (mod c)
and
b (mod c)
and we're looking for the value of the quotient
(a/b) (mod c)
Any ideas?
In the ring of integers modulo C, these expressions are equivalent:
A / B (mod C)
A * (1/B) (mod C)
A * B^(-1) (mod C)
Thus you need to find B^(-1), the multiplicative inverse of B modulo C. You can find it using e.g. the extended Euclidean algorithm.
Note that not every number has a multiplicative inverse for the given modulus.
Specifically, B^(-1) exists if and only if gcd(B, C) = 1 (i.e. B and C are coprime).
See also
Wikipedia/Modular multiplicative inverse
Wikipedia/Extended Euclidean algorithm
Modular multiplicative inverse: Example
Suppose we want to find the multiplicative inverse of 3 modulo 11.
That is, we want to find
x = 3^(-1) (mod 11)
x = 1/3 (mod 11)
3x = 1 (mod 11)
Using the extended Euclidean algorithm, you will find that:
x = 4 (mod 11)
Thus, the modular multiplicative inverse of 3 modulo 11 is 4. In other words:
A / 3 == A * 4 (mod 11)
Naive algorithm: brute force search
One way to solve this:
3x = 1 (mod 11)
Is to simply try all values x = 0..10 and see if the equation holds true. For a small modulus this algorithm may be acceptable, but the extended Euclidean algorithm is much better asymptotically.
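A sketch of the extended-Euclidean approach in Python (the helper names egcd/modinv are my own; since Python 3.8 the builtin pow(b, -1, c) computes the same inverse):

```python
def egcd(a, b):
    # returns (g, x, y) such that a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(b, c):
    g, x, _ = egcd(b, c)
    if g != 1:
        raise ValueError("no inverse: gcd(b, c) != 1")
    return x % c

print(modinv(3, 11))    # 4, matching the worked example above
print(pow(3, -1, 11))   # 4, the same via the builtin (Python 3.8+)
```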
There are potentially many answers. If all you have is k = B mod C, then B could be any k + C*N for integer N.
This means B could potentially be very large. So large, in fact, as to make the ordinary quotient A/B approach zero.
However, that's just one way to interpret the question.
I think it can be written as (but I'm not sure):
(a/b) % c = (a % (b*c)) / b
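This identity does hold whenever b evenly divides a: writing a = q*b, we get a % (b*c) = b*(q % c), and dividing by b leaves q % c, which is exactly (a/b) % c. A quick brute-force check of that claim:

```python
# check (a/b) % c == (a % (b*c)) / b whenever b divides a
ok = True
for q in range(50):               # a = q*b, so b always divides a
    for b in range(1, 10):
        for c in range(1, 10):
            a = q * b
            if (a // b) % c != (a % (b * c)) // b:
                ok = False
print(ok)  # True
```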
