Binary modulo operation

My apologies, this question has now been redirected to this web page in the math forum.
Empirically, I observe that (a+b+c) mod 2 = (a-b-c) mod 2. For example:
1+2+3 = 6, 6 mod 2 = 0
1-2-3 = -4, -4 mod 2 = 0
1+2+4 = 7, 7 mod 2 = 1
1-2-4 = -5, -5 mod 2 = 1
It seems that this only holds when we use the binary modulus (mod 2).
Is there any formal proof for this?

Not sure why this ended up on SO. As James said in the comments, these questions should be asked on math.stackexchange. But since it is here:
I.   a + b + c = a - b - c + 2(b + c)
II.  2(b + c) ≡ 0 (mod 2), ergo
III. a + b + c ≡ a - b - c (mod 2)
Edit, since it was requested: The generalisation of II would require n to be a divisor of 2 to fulfill
2(b + c) ≡ 0 (mod n)
for all b and c, which means that n is either 1 or 2.

The reason this works mod 2 is exactly because there are only two residues: 0 and 1. And thus it is true that for any x
x ≡ -x mod 2
Thus a + b ≡ a - b mod 2
Obviously this is not true for any other modulus: for any n > 2 you can create a simple counter-example to (a+b+c) ≡ (a-b-c) (mod n):
a = n
b = 0
c = 1
(a + b + c) mod n = 1
(a - b - c) mod n = n - 1
Obviously n - 1 is not equal to 1 if n > 2. Actually most of the triplets (a, b, c) would be counter-examples for any n > 2.
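Both the identity and the counter-example are easy to check with a few lines of Python (my addition; note that Python's % operator already returns a non-negative result for a positive modulus, matching the convention used here):

for a in range(5):
    for b in range(5):
        for c in range(5):
            assert (a + b + c) % 2 == (a - b - c) % 2   # always holds mod 2

n = 3                        # any modulus n > 2 fails, e.g. on (n, 0, 1)
a, b, c = n, 0, 1
print((a + b + c) % n, (a - b - c) % n)   # prints: 1 2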

Related

integer programming: need help to formulate a constraint

I am trying to formulate a constraint for my math model. The constraint's goal is:
if A = 1 and B = 1 then C <= D
otherwise (A or B or both are 0) there is no constraint.
A and B are binary variables. C and D are integer numbers.
So far I was able to come up with this equation:
M(A - 1) - (B - 1) + C <= D (where M is a big number)
but this formulation does not hold when A = 1 and B = 0.
You could do this in two steps: first introduce a variable X representing the logical AND of A and B:
X >= A + B - 1
X <= A
X <= B
Then use X to express the inequality:
C - M(1-X) <= D
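As a quick sanity check, the whole linearization can be brute-forced over small ranges in Python (a sketch; the bounds on C and D and the value of M are illustrative assumptions):

M = 1000  # "big M": must exceed the largest possible value of C - D

def satisfied(A, B, C, D):
    # For binary A and B, the constraints X >= A + B - 1, X <= A, X <= B
    # force X to equal (A AND B), so we can compute X directly here.
    X = 1 if (A == 1 and B == 1) else 0
    # The big-M constraint is binding only when X = 1.
    return C - M * (1 - X) <= D

for A in (0, 1):
    for B in (0, 1):
        for C in range(-10, 11):
            for D in range(-10, 11):
                want = (C <= D) if (A == 1 and B == 1) else True
                assert satisfied(A, B, C, D) == want
print("C <= D is enforced exactly when A = 1 and B = 1")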

Is f(n) = 1000n + 4500 lg n + 54n O(n)?

I was given the following question in my test:
Is f(n) = 1000n + 4500 lg n + 54n O(n)?
I answered this question by applying the following definition:
Definition of O(n): a function f(n) is O(n) if there exist two positive constants c and k such that 0 <= f(n) <= cn for all n >= k. If we can show that such constants c and k exist, then the function is O(n) (and if they don't exist, then the function grows faster than O(n)).
Solution:
0 ≤ 1000n + 4500 lg n + 54n ≤ cn
0 ≤ 4000 + 9000 + 216 ≤ 4c when k = 4
0 ≤ 3304 ≤ c
0 ≤ 8000 + 13500 + 432 ≤ 8c when n = 8 > k
0 ≤ 21932 ≤ 8c
0 ≤ 2741.5 ≤ c (last time c = 3304 but now it is 2741.5... as n increases, c is not constant!)
Conclusion:
This function is not O(n) - we can't find constant values c and k because they simply don't exist.
Is my solution correct?
0 ≤ 2741.5 ≤ c (last time c = 3304 but now it is 2741.5... as n increases, c is not constant!)
The flaw in your solution is that if you stick with the original value of c, the constraint is still satisfied. It is not the actual value of the constants that matters, simply that there exists a pair of constants c and k for which the inequality is satisfied for all n > k.
I don't know what level of rigor is required (by your teachers) in an answer to that question. However, a rigorous solution would require a mathematical proof (from first principles or established theorems) that such c and k either do exist¹ or cannot exist.
¹ A pair of c and k that you can prove satisfies the constraint for all n > k would be a sufficient proof.
log2 n < n, so 1000n + 4500 log2 n + 54n ≤ 1000n + 4500n + 54n.
Just add up the coefficients: for k = 1 and c = 1000 + 4500 + 54 = 5554, f(n) ≤ c*n for all n ≥ k. Therefore f is O(n).
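For what it's worth, here is a quick numeric spot-check of that bound in Python (samples only, not a substitute for the argument above):

from math import log2

c, k = 5554, 1
for n in (1, 2, 4, 8, 100, 10**6):
    f = 1000 * n + 4500 * log2(n) + 54 * n
    assert 0 <= f <= c * n   # holds for every sampled n >= k
print("bound holds on all samples")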

last digit of a^b^c

I've got stuck on this problem:
"Given three natural numbers a, b and c (such that 1 <= a, b, c <= 10^9), you are supposed to find the last digit of the number a^b^c."
My first thought was the O(log n) algorithm for raising a to the power n.
int acc = 1; // accumulator
while (n > 0) {
    if (n % 2 == 1)
        acc *= a;
    a = a * a;
    n /= 2;
}
Obviously, some basic math might help, like the "last digit" trick:
Last_digit(2^n) = Last_digit(2^(n%4))
where n%4 is the remainder of the division n/4.
In a nutshell, I've tried to combine these, but I couldn't find the right way. Some help would really be appreciated.
The problem is that b^c may be very large. So you want to reduce it before using the standard modular exponentiation.
You can remark that a^(b^c) MOD 10 can take at most 10 different values.
Because of the pigeonhole principle, there will be a number p such that for some r:
a^r MOD 10 = a^(p+r) MOD 10
p <= 10
r <= 10
This implies that for any q:
a^r MOD 10 = a^r * a^p MOD 10
           = (a^r * a^p) * a^p MOD 10
           = ...
           = a^(r + q*p) MOD 10
For any n = s + r + q*p, with s < p, you have:
a^n MOD 10 = a^s * a^(r + q*p) MOD 10
           = a^s * a^r MOD 10
           = a^((n - r) MOD p) * a^r MOD 10
You can just substitute n = b^c in the previous equation. You then only have to compute (b^c - r) MOD p, where p <= 10, which is easily done, and finally compute a^((b^c - r) MOD p) * a^r MOD 10.
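Here is a short Python sketch of this approach (the function name is my own addition, and pow's three-argument form is used to compute b^c MOD p without ever forming b^c):

def last_digit(a, b, c):
    # Find r and p with a^r = a^(r+p) (MOD 10) by walking the powers of a
    # until a value repeats; both r and p are at most 10.
    seen = {}                      # power of a mod 10 -> first exponent
    value, k = a % 10, 1
    while value not in seen:
        seen[value] = k
        value = (value * a) % 10
        k += 1
    r, p = seen[value], k - seen[value]
    # b^c is huge, but only (b^c - r) MOD p is needed; pow(b, c, p)
    # computes b^c MOD p cheaply.
    e = r + (pow(b, c, p) - r) % p
    return pow(a, e, 10)

print(last_digit(2, 3, 4))         # 2^81 ends in 2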
Like I mentioned in my comments, this really doesn't have much to do with smart algorithms. The problem can be reduced completely using some elementary number theory. This will yield an O(1) algorithm.
The Chinese remainder theorem says that if we know some number x modulo 2 and modulo 5, we know it modulo 10. So finding a^b^c modulo 10 can be reduced to finding a^b^c modulo 2 and a^b^c modulo 5. Fermat's little theorem says that for any prime p, if p does not divide a, then a^(p-1) = 1 (mod p), so a^n = a^(n mod (p-1)) (mod p). If p does divide a, then obviously a^n = 0 (mod p) for any n > 0. Note that x^n = x (mod 2) for any n>0, so a^b^c = a (mod 2).
What remains is to find a^b^c mod 5, which reduces to finding b^c mod 4. Unfortunately, we can use neither the Chinese remainder theorem nor Fermat's little theorem here. However, mod 4 there are only 4 possibilities for b, so we can check them separately. If we start with b = 0 (mod 4) or b = 1 (mod 4), then of course b^c = b (mod 4). If we have b = 2 (mod 4) then it is easily seen that b^c = 2 (mod 4) if c = 1, and b^c = 0 (mod 4) if c > 1. If b = 3 (mod 4) then b^c = 1 (mod 4) if c is even, and b^c = 3 (mod 4) if c is odd. This gives us b^c (mod 4) for any b and c, which then gives us a^b^c (mod 5), all in constant time.
Finally with a^b^c = a (mod 2) we can use the Chinese remainder theorem to find a^b^c (mod 10). This requires a mapping between (x (mod 2), y (mod 5)) and z (mod 10). The Chinese remainder theorem only tells us that this mapping is bijective, it doesn't tell us how to find it. However, there are only 10 options, so this is easily done on a piece of paper or using a little program. Once we find this mapping we simply store it in an array, and we can do the entire calculation in O(1).
By the way, this would be the implementation of my algorithm in Python:
# this table only needs to be calculated once
# can also be hard-coded
mod2mod5_to_mod10 = [[0 for i in range(5)] for j in range(2)]
for i in range(10):
    mod2mod5_to_mod10[i % 2][i % 5] = i

[a, b, c] = [int(input()) for i in range(3)]

if a % 5 == 0:
    abcmod5 = 0
else:
    bmod4 = b % 4
    if bmod4 == 0 or bmod4 == 1:
        bcmod4 = bmod4
    elif bmod4 == 2:
        if c == 1:
            bcmod4 = 2
        else:
            bcmod4 = 0
    else:
        if c % 2 == 0:
            bcmod4 = 1
        else:
            bcmod4 = 3
    abcmod5 = ((a % 5) ** bcmod4) % 5

abcmod2 = a % 2
abcmod10 = mod2mod5_to_mod10[abcmod2][abcmod5]
print(abcmod10)
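As a quick cross-check (my own addition, reusing the mod2mod5_to_mod10 table built above), the same case analysis can be packaged as a function and compared against direct computation for inputs small enough to evaluate b**c directly:

def last_digit(a, b, c):
    # same logic as the program above, wrapped for testing
    if a % 5 == 0:
        abcmod5 = 0
    else:
        bmod4 = b % 4
        if bmod4 in (0, 1):
            bcmod4 = bmod4
        elif bmod4 == 2:
            bcmod4 = 2 if c == 1 else 0
        else:
            bcmod4 = 1 if c % 2 == 0 else 3
        abcmod5 = ((a % 5) ** bcmod4) % 5
    return mod2mod5_to_mod10[a % 2][abcmod5]

for a in range(1, 30):
    for b in range(1, 8):
        for c in range(1, 5):
            assert last_digit(a, b, c) == pow(a, b ** c, 10)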

Calculating the level of insertion based on the size of the tree

If I have a graph structure that looks like the following
   a          level-1
  b c         level-2
 c d e        level-3
e f g h       level-4
 ......       level-n
a points to b and c
b points to c and d
c points to d and e
and so on
How can I calculate n from the size (number of existing nodes) of the graph/tree?
The number of nodes present if the height is h is given by
1 + 2 + 3 + ... + h = h(h + 1) / 2
This means that one simple option would be to take the total number of nodes n and do a simple binary search to find the value of h such that h(h + 1) / 2 = n.
Alternatively, since n = h(h + 1) / 2, you can note that
n = h(h + 1) / 2
2n = h² + h
0 = h² + h - 2n
Now you have a quadratic equation (in h) that you can solve to directly get back the value of h. The solution is
h = (-1 ± √(1 + 8n)) / 2
If you take the minus branch, you'll get back a negative number, so you should take the positive branch and compute
(-1 + √(1 + 8n)) / 2
to directly get back h.
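For example, in Python (a sketch; math.isqrt keeps the arithmetic exact even for very large n):

import math

def height_from_size(n):
    # positive root of h^2 + h - 2n = 0; for non-triangular n this is
    # the height of the largest complete triangle that fits
    return (-1 + math.isqrt(1 + 8 * n)) // 2

print(height_from_size(10))   # 1 + 2 + 3 + 4 = 10 nodes -> prints 4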
Hope this helps!

No idea how to solve SICP exercise 1.11

Exercise 1.11:
A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3) if n > 3. Write a procedure that computes f by means of a recursive process. Write a procedure that computes f by means of an iterative process.
Implementing it recursively is simple enough. But I couldn't figure out how to do it iteratively. I tried comparing with the Fibonacci example given, but I didn't know how to use it as an analogy. So I gave up (shame on me) and Googled for an explanation, and I found this:
(define (f n)
  (if (< n 3)
      n
      (f-iter 2 1 0 n)))

(define (f-iter a b c count)
  (if (< count 3)
      a
      (f-iter (+ a (* 2 b) (* 3 c))
              a
              b
              (- count 1))))
After reading it, I understand the code and how it works. But what I don't understand is the process needed to get from the recursive definition of the function to this. I don't get how the code could have formed in someone's head.
Could you explain the thought process needed to arrive at the solution?
You need to capture the state in some accumulators and update the state at each iteration.
If you have experience in an imperative language, imagine writing a while loop and tracking information in variables during each iteration of the loop. What variables would you need? How would you update them? That's exactly what you have to do to make an iterative (tail-recursive) set of calls in Scheme.
In other words, it might help to start thinking of this as a while loop instead of a recursive definition. Eventually you'll be fluent enough with recursive-to-iterative transformations that you won't need the extra help to get started.
For this particular example, you have to look closely at the three function calls, because it's not immediately clear how to represent them. However, here's the likely thought process: (in Python pseudo-code to emphasise the imperativeness)
Each recursive step keeps track of three things:
f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3)
So I need three pieces of state to track the current, the last and the penultimate values of f. (that is, f(n-1), f(n-2) and f(n-3).) Call them a, b, c. I have to update these pieces inside each loop:
for _ in range(2, n):
    a, b, c = NEWVALUE, a, b   # all three updated simultaneously
return a
So what's NEWVALUE? Well, now that we have representations of f(n-1), f(n-2) and f(n-3), it's just the recursive equation:
for _ in range(2, n):
    a, b, c = a + 2 * b + 3 * c, a, b
return a
Now all that's left is to figure out the initial values of a, b and c. But that's easy, since we know that f(n) = n if n < 3.
if n < 3: return n

a = 2   # f(n-1) where n = 3
b = 1   # f(n-2)
c = 0   # f(n-3)

# now start off counting at 3
for _ in range(3, n + 1):
    a, b, c = a + 2 * b + 3 * c, a, b
return a
That's still a little different from the Scheme iterative version, but I hope you can see the thought process now.
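For completeness, here is that final pseudo-code assembled into a runnable Python function (my own consolidation; the asserted values agree with the hand calculation in the next answer):

def f(n):
    if n < 3:
        return n
    a, b, c = 2, 1, 0                      # f(2), f(1), f(0)
    for _ in range(3, n + 1):
        a, b, c = a + 2 * b + 3 * c, a, b  # simultaneous update
    return a

assert [f(i) for i in range(6)] == [0, 1, 2, 4, 11, 25]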
I think you are asking how one might discover the algorithm naturally, outside of a 'design pattern'.
It was helpful for me to look at the expansion of f(n) at each value of n:
f(0) = 0 |
f(1) = 1 | all known values
f(2) = 2 |
f(3) = f(2) + 2f(1) + 3f(0)
f(4) = f(3) + 2f(2) + 3f(1)
f(5) = f(4) + 2f(3) + 3f(2)
f(6) = f(5) + 2f(4) + 3f(3)
Looking closer at f(3), we see that we can calculate it immediately from the known values.
What do we need to calculate f(4)?
We need to at least calculate f(3) + [the rest]. But as we calculate f(3), we calculate f(2) and f(1) as well, which we happen to need for calculating [the rest] of f(4).
f(3) = f(2) + 2f(1) + 3f(0)
          ↘       ↘
f(4) = f(3) + 2f(2) + 3f(1)
So, for any number n, I can start by calculating f(3), and reuse the values I use to calculate f(3) to calculate f(4)...and the pattern continues...
f(3) = f(2) + 2f(1) + 3f(0)
          ↘       ↘
f(4) = f(3) + 2f(2) + 3f(1)
          ↘       ↘
f(5) = f(4) + 2f(3) + 3f(2)
Since we will reuse them, let's give them names a, b, c, subscripted with the step we are on, and walk through a calculation of f(5):
Step 1: f(3) = f(2) + 2f(1) + 3f(0), or f(3) = a1 + 2b1 + 3c1
where
a1 = f(2) = 2,
b1 = f(1) = 1,
c1 = 0
since f(n) = n for n < 3.
Thus:
f(3) = a1 + 2b1 + 3c1 = 4
Step 2: f(4) = f(3) + 2a1 + 3b1
So:
a2 = f(3) = 4 (calculated above in step 1),
b2 = a1 = f(2) = 2,
c2 = b1 = f(1) = 1
Thus:
f(4) = 4 + 2*2 + 3*1 = 11
Step 3: f(5) = f(4) + 2a2 + 3b2
So:
a3 = f(4) = 11 (calculated above in step 2),
b3 = a2 = f(3) = 4,
c3 = b2 = f(2) = 2
Thus:
f(5) = 11 + 2*4 + 3*2 = 25
Throughout the above calculation we capture the state of the previous calculation and pass it to the next step, in particular:
a_step = the result computed at step - 1
b_step = a_(step - 1)
c_step = b_(step - 1)
Once I saw this, coming up with the iterative version was straightforward.
Since the post you linked to describes a lot about the solution, I'll try to only give complementary information.
You're trying to define a tail-recursive function in Scheme here, given a (non-tail) recursive definition.
The base case of the recursion (f(n) = n if n < 3) is handled by both functions. Note that the check in the first function cannot simply be dropped: if it were just

(define (f n)
  (f-iter 2 1 0 n))

then f-iter would return 2 for n = 0 and n = 1, which is wrong.
The general form would be:
(define (f-iter ... n)
  (if (base-case? n)
      base-result
      (f-iter ...)))
Note I didn't fill in parameters for f-iter yet, because you first need to understand what state needs to be passed from one iteration to another.
Now, let's look at the dependencies of the recursive form of f(n). It references f(n - 1), f(n - 2), and f(n - 3), so we need to keep around these values. And of course we need the value of n itself, so we can stop iterating over it.
So that's how you come up with the tail-recursive call: we compute f(n) to use as f(n - 1), rotate f(n - 1) to f(n - 2) and f(n - 2) to f(n - 3), and decrement count.
If this still doesn't help, please try to ask a more specific question — it's really hard to answer when you write "I don't understand" given a relatively thorough explanation already.
I'm going to come at this from a slightly different angle than the other answers here, focusing on how coding style can make the thought process behind an algorithm like this easier to comprehend.
The trouble with Bill's approach, quoted in your question, is that it's not immediately clear what meaning is conveyed by the state variables, a, b, and c. Their names convey no information, and Bill's post does not describe any invariant or other rule that they obey. I find it easier both to formulate and to understand iterative algorithms if the state variables obey some documented rules describing their relationships to each other.
With this in mind, consider this alternative formulation of the exact same algorithm, which differs from Bill's only in having more meaningful variable names for a, b and c and an incrementing counter variable instead of a decrementing one:
(define (f n)
  (if (< n 3)
      n
      (f-iter n 2 0 1 2)))

(define (f-iter n
                i
                f-of-i-minus-2
                f-of-i-minus-1
                f-of-i)
  (if (= i n)
      f-of-i
      (f-iter n
              (+ i 1)
              f-of-i-minus-1
              f-of-i
              (+ f-of-i
                 (* 2 f-of-i-minus-1)
                 (* 3 f-of-i-minus-2)))))
Suddenly the correctness of the algorithm - and the thought process behind its creation - is simple to see and describe. To calculate f(n):
We have a counter variable i that starts at 2 and climbs to n, incrementing by 1 on each call to f-iter.
At each step along the way, we keep track of f(i), f(i-1) and f(i-2), which is sufficient to allow us to calculate f(i+1).
Once i=n, we are done.
What did help me was running the process manually with pencil and paper, using the hint the author gave for the Fibonacci example:
a <- a + b
b <- a
Translating this to the new problem, this is how you push the state forward in the process:
a <- a + (b * 2) + (c * 3)
b <- a
c <- b
So you need a function with an interface that accepts 3 variables: a, b, c. And it needs to call itself using the process above:
(define (f-iter a b c)
  (f-iter (+ a (* b 2) (* c 3)) a b))
If you run and print each variable for each iteration starting with (f-iter 1 0 0), you'll get something like this (it will run forever of course):
  a    b    c
==============
  1    0    0
  1    1    0
  3    1    1
  8    3    1
 17    8    3
 42   17    8
100   42   17
235  100   42
...
Can you see the answer? You get it by summing columns b and c for each iteration. I must admit I found it by doing some trial and error. The only thing left is a counter to know when to stop; here is the whole thing:
(define (f n)
  (f-iter 1 0 0 n))

(define (f-iter a b c count)
  (if (= count 0)
      (+ b c)
      (f-iter (+ a (* b 2) (* c 3)) a b (- count 1))))
A function f is defined by the rule that f(n) = n, if n<3 and f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3), if n > 3. Write a procedure that computes f by means of a recursive process.
It is already written:
f(n) = n,                                 (* if *) n < 3
     = f(n - 1) + 2f(n - 2) + 3f(n - 3),  (* if *) n > 3
Believe it or not, there was once such a language. To write this down in another language is just a matter of syntax. And by the way, the definition as you (mis)quote it has a bug, which is now very apparent and clear.
Write a procedure that computes f by means of an iterative process.
Iteration means going forward (there's your explanation!) as opposed to the recursion's going backwards at first, to the very lowest level, and then going forward calculating the result on the way back up:
f(0) = 0
f(1) = 1
f(2) = 2
f(n)   = f(n - 1) + 2f(n - 2) + 3f(n - 3)
       = a + 2b + 3c
f(n+1) = f(n) + 2f(n - 1) + 3f(n - 2)
       = a' + 2b' + 3c'   where
         a' = f(n)   = a + 2b + 3c,
         b' = f(n-1) = a,
         c' = f(n-2) = b
......
This thus describes the problem's state transitions as
(n, a, b, c) -> (n+1, a+2*b+3*c, a, b)
We could code it as
g (n, a, b, c) = g (n+1, a+2*b+3*c, a, b)
but of course it wouldn't ever stop. So we must instead have
f n = g (2, 2, 1, 0)
  where
  g (k, a, b, c) = g (k+1, a+2*b+3*c, a, b),  (* if *) k < n
  g (k, a, b, c) = a,                         otherwise
and this is already exactly like the code you asked about, up to syntax.
Counting up to n is more natural here, following our paradigm of "going forward", but counting down to 0 as the code you quote does is of course entirely equivalent.
The corner cases and possible off-by-one errors are left as an exercise, being non-interesting technicalities.
