How to solve bitwise inequalities?

Given two integers L and R, define XR = L ⊕ (L+1) ⊕ (L+2) ⊕ (L+3) ⊕ ... ⊕ R (where ⊕ is bitwise XOR). Is there any integer M with L ≤ M ≤ R such that (XR ⊕ M) < XR?
Constraint:
1≤ L ≤ R ≤ 10^16
Limits:
1s, 512 MB
I know how to compute XR. But how do I find M?
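Not an official solution, but here's one way this might look in Python: a sketch that computes XR with the standard period-4 prefix-XOR trick, then uses the observation that (XR ⊕ M) < XR exactly when XR has a 1 at the position of M's highest set bit (all function names here are mine):

```python
def xor_0_to_n(n):
    # XOR of 0, 1, ..., n follows a period-4 pattern.
    return [n, 1, n + 1, 0][n % 4]

def xor_range(l, r):
    # XOR of l, l+1, ..., r via two prefix XORs.
    return xor_0_to_n(r) ^ xor_0_to_n(l - 1)

def find_m(l, r):
    # Returns some M in [l, r] with (XR ^ M) < XR, or None if none exists.
    xr = xor_range(l, r)
    for b in range(xr.bit_length()):
        if xr >> b & 1:
            # Numbers whose highest set bit is b lie in [2^b, 2^(b+1) - 1].
            # XORing such an M into XR clears bit b, so the result shrinks.
            lo, hi = max(l, 1 << b), min(r, (1 << (b + 1)) - 1)
            if lo <= hi:
                return lo
    return None

print(find_m(2, 4))  # XR = 2 XOR 3 XOR 4 = 5, and 5 XOR 4 = 1 < 5, so M = 4
```

This is a constant number of bit checks per query (at most ~54 bits for R ≤ 10^16), so it comfortably fits the 1 s limit.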

Related

Asymptotic bounds and Big O notation

Is it right to say the following: suppose we have two monotonically increasing functions f, g such that f(n) = Ω(n) and f(g(n)) = O(n). Then I want to conclude that g(n) = O(n).
I think this is a false claim, and I've been trying to construct a counterexample to show that it's false, but after many attempts I'm starting to think otherwise.
Can you please provide an explanation or example if this is a false claim, or a way to prove it if it's a correct one?
I believe this claim is true. Here's a proof.
Suppose that f(n) = Ω(n). That means that there are constants c, n0 such that
f(n) ≥ cn for any n ≥ n0. (1)
Similarly, since f(g(n)) = O(n), we know that there are constants d, n1 such that
f(g(n)) ≤ dn for any n ≥ n1. (2)
Now, there are two options. The first is that g(n) = O(1), in which case we're done because g(n) is then O(n). The second case is that g(n) ≠ O(1), in which case g grows without bound. That means that there is an n2 such that g(n2) ≥ n0 (g grows without bound, so it eventually overtakes n0) and n2 ≥ n1 (just pick a big n2).
Now, pick any n ≥ n2. Since n ≥ n2, we have that g(n) ≥ g(n2) ≥ n0 because g is monotone increasing, and therefore by (1) we see that
f(g(n)) ≥ cg(n).
Since n ≥ n2 ≥ n1, we can combine this inequality with equation (2) to see that
dn ≥ f(g(n)) ≥ cg(n).
so, in particular, we have that
g(n) ≤ (d / c)n
for all n ≥ n2, so g(n) = O(n).
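As a quick sanity check of the proof (my own toy example, not from the question): take f(n) = 2n and g(n) = 3n, so the hypotheses hold with c = 2 and d = 6, and the derived bound g(n) ≤ (d/c)·n = 3n is exactly tight:

```python
c, d = 2, 6
f = lambda n: 2 * n
g = lambda n: 3 * n

ns = range(1, 1000)
assert all(f(n) >= c * n for n in ns)        # f(n) = Omega(n), witness c = 2
assert all(f(g(n)) <= d * n for n in ns)     # f(g(n)) = O(n), witness d = 6
assert all(g(n) <= (d / c) * n for n in ns)  # conclusion: g(n) <= 3n, i.e. O(n)
```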

Recurrence relations and asymptotic complexity

I am trying to understand the asymptotic relationship between f(n) = n^cos n and g(n) = n. I am told that this pair has no relationship in terms of Big O, little o, Big Omega, little omega, or Theta, something about the oscillations of cos n. Can someone help me understand this behavior?
When I use L'Hôpital's rule on my calculator, I get undefined.
The function n^cos n is O(n). Since -1 ≤ cos n ≤ 1, the function n^cos n is always bounded between n^(-1) and n^1, so in particular it's always upper-bounded by O(n). However, it's not Ω(n), because for any number n0 and any constant c, you can find an n > n0 where n^cos n < cn. One way to do this is to look for choices of n where cos n is negative; the value of n^(-ε) for any ε > 0 will eventually be smaller than cn for any c.
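To see the oscillation numerically, here's a small script (mine, not part of the original answer) that samples n^cos(n) near the integers where cos(n) is close to ±1; the value swings between roughly n and roughly 1/n, which is why no single Ω(n) lower bound can hold:

```python
import math

for n in range(2, 200):
    cn = math.cos(n)
    if abs(cn) > 0.99:  # n is close to a multiple of pi
        print(f"n={n:3d}  cos(n)={cn:+.4f}  n^cos(n)={n ** cn:12.6f}")
# e.g. n = 22 gives cos(n) close to -1, so n^cos(n) is about 1/22,
# while n = 44 gives cos(n) close to +1, so n^cos(n) is about 44.
```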
Hope this helps!

Proving big O of statement [closed]

Closed. This question is off-topic and is not currently accepting answers. Closed 11 years ago.
I am having a hard time proving that n^k is O(2^n) for all k. I tried taking lg of both sides to get k·lg n = n, but this is wrong. I am not sure how else I can prove this.
To show that n^k is O(2^n), note that
n^k = (2^(lg n))^k = 2^(k lg n)
So now you want to find an n0 and c such that for all n ≥ n0,
2^(k lg n) ≤ c · 2^n
Now, let's let c = 1 and then consider what happens when n = 2^m for some m. If we do this, we get
2^(k lg n) ≤ c · 2^n = 2^n
2^(k lg 2^m) ≤ 2^(2^m)
2^(km) ≤ 2^(2^m)
And, since 2^n is a monotonically increasing function, this is equivalent to
km ≤ 2^m
Now, let's finish things off. Suppose we let m = max{k, 4}, so k ≤ m. Thus we have that
km ≤ m^2
We also have that
m^2 ≤ 2^m
since m^2 ≤ 2^m holds for any m ≥ 4, and our choice of m = max{k, 4} guarantees m ≥ 4. Combining these, we get that
km ≤ 2^m
which is equivalent to what we wanted to show above. Consequently, if we pick any n ≥ 2^m = 2^max{4, k}, it will be true that n^k ≤ 2^n. Thus, by the formal definition of big-O notation, we get that n^k = O(2^n).
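A quick empirical check of the bound just derived (my own script, not part of the proof): with c = 1 and n0 = 2^max{4, k}, we expect n^k ≤ 2^n from n0 onward.

```python
for k in range(1, 8):
    n0 = 2 ** max(4, k)
    # check the first hundred values from the derived threshold onward
    assert all(n ** k <= 2 ** n for n in range(n0, n0 + 100)), k
print("n^k <= 2^n held at every sampled point")
```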
I think this math is right; please let me know if I'm wrong!
Hope this helps!
I can't comment yet, so I will make this an answer.
Instead of reducing the equation like you have been trying to do, you should try to find an n0 and an M that satisfy the formal definition of big O notation found here: http://en.wikipedia.org/wiki/Big_O_notation#Formal_definition
Something along the lines of n0 = M = k might work (I haven't written it out, so maybe that doesn't work; that's just to give you an idea).

finding a/b mod c

I know this may seem like a math question, but I just saw this in a contest and I really want to know how to solve it.
We have
a (mod c)
and
b (mod c)
and we're looking for the value of the quotient
(a/b) (mod c)
Any ideas?
In the ring of integers modulo C, these expressions are equivalent:
A / B (mod C)
A * (1/B) (mod C)
A * B^(-1) (mod C)
Thus you need to find B^(-1), the multiplicative inverse of B modulo C. You can find it using e.g. the extended Euclidean algorithm.
Note that not every number has a multiplicative inverse for the given modulus.
Specifically, B^(-1) exists if and only if gcd(B, C) = 1 (i.e. B and C are coprime).
See also
Wikipedia/Modular multiplicative inverse
Wikipedia/Extended Euclidean algorithm
Modular multiplicative inverse: Example
Suppose we want to find the multiplicative inverse of 3 modulo 11.
That is, we want to find
x = 3^(-1) (mod 11)
x = 1/3 (mod 11)
3x = 1 (mod 11)
Using the extended Euclidean algorithm, you will find that:
x = 4 (mod 11)
Thus, the modular multiplicative inverse of 3 modulo 11 is 4. In other words:
A / 3 == A * 4 (mod 11)
Naive algorithm: brute force search
One way to solve this:
3x = 1 (mod 11)
is to simply try all values x = 0, 1, ..., 10 and see if the equation holds true. For a small modulus this algorithm may be acceptable, but the extended Euclidean algorithm is much better asymptotically.
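Here's a minimal Python sketch of both approaches (my code, not the answerer's): mod_inverse uses the extended Euclidean algorithm, and brute_force_inverse is the naive search just described. Recent Python (3.8+) can also compute the inverse directly as pow(b, -1, c).

```python
def egcd(a, b):
    # Extended Euclidean algorithm:
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(b, c):
    # Multiplicative inverse of b modulo c, or None if gcd(b, c) != 1.
    g, x, _ = egcd(b, c)
    return x % c if g == 1 else None

def brute_force_inverse(b, c):
    # The naive search described above: try every x in 0..c-1.
    return next((x for x in range(c) if b * x % c == 1), None)

# The worked example above: the inverse of 3 modulo 11 is 4.
assert mod_inverse(3, 11) == 4 == brute_force_inverse(3, 11)
```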
There are potentially many answers. When all you have is k = B mod C, then B could be any k + C·N for any integer N.
This means B could potentially be very large; so large, in fact, as to make A/B approach zero.
However, that's just one way to respond.
I think it can be written as (but I'm not sure):
(a/b) % c = (a % (b*c)) / b
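For what it's worth, that identity does hold in the special case where b divides a exactly, i.e. for plain integer division rather than a modular inverse. A quick empirical check (my snippet):

```python
import random

for _ in range(10_000):
    b = random.randint(1, 100)
    c = random.randint(1, 100)
    a = random.randint(0, 10**6) * b  # ensure b divides a exactly
    assert (a // b) % c == (a % (b * c)) // b
```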

Pohlig–Hellman algorithm for computing discrete logarithms

I'm working on coding the Pohlig-Hellman algorithm, but I am having trouble understanding the steps based on the definition of the algorithm.
Going by the Wiki of the algorithm:
I know the first part, 1), is to calculate the prime factorization of p-1, which is fine.
However, I am not sure what I need to do in step 2), where you calculate the coefficients:
Let x2 = c0 + c1·2.
125^(180/2) = 125^90 ≡ 1 (mod 181), so c0 = 0.
125^(180/4) = 125^45 ≡ 1 (mod 181), so c1 = 0.
Thus, x2 = 0 + 0 = 0.
and in step 3), where you put the coefficients together and solve with the Chinese remainder theorem.
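The two congruences quoted in step 2) can be reproduced directly (my snippet; p = 181 and y = 125 come from the quoted example):

```python
p, y = 181, 125
print(pow(y, (p - 1) // 2, p))  # 1, hence c0 = 0
# Because c0 = 0, dividing y by g^c0 = g^0 = 1 leaves y unchanged,
# which is why 125 reappears unchanged in the second congruence.
print(pow(y, (p - 1) // 4, p))  # 1, hence c1 = 0
```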
Can someone help by explaining this in plain English or in pseudocode? I want to code the solution myself, obviously, but I cannot make any more progress unless I understand the algorithm.
Note: I have done a lot of searching for this, and I read S. Pohlig and M. Hellman (1978), "An Improved Algorithm for Computing Logarithms over GF(p) and its Cryptographic Significance", but it's still not really making sense to me.
Thanks in advance
Update:
How come q (125) stays constant in this example, whereas in this example it appears he is calculating a new q each time?
To be more specific, I don't understand how the following is computed:
Now divide 7531 by a^c0 to get
7531 · a^(-2) ≡ 6735 (mod p).
Let's start with the main idea behind Pohlig-Hellman. Assume that we are given y, g and p, and that we want to find x such that
y == g^x (mod p).
(I'm using == to denote an equivalence relation.) To simplify things, I'm also assuming that the order of g is p-1, i.e. the smallest positive k with 1 == g^k (mod p) is k = p-1.
An inefficient method to find x would be to simply try all values in the range 1 .. p-1.
Somewhat better is the "baby-step giant-step" method, which requires O(p^0.5) arithmetic operations. Both methods are quite slow for large p. Pohlig-Hellman is a significant improvement when p-1 has many factors. I.e. assume that
p-1 = n r
Then what Pohlig and Hellman propose is to solve the equation
y^n == (g^n)^z (mod p).
If we take logarithms to the base g on both sides, this is the same as
n log_g(y) == log_g(y^n) == n z (mod p-1).
n can be divided out, giving
log_g(y) == z (mod r).
Hence x == z (mod r).
This is an improvement, since we only have to search the range 0 .. r-1 for a solution of z. And again, "baby-step giant-step" can be used to improve the search for z. Obviously, doing this once is not a complete solution yet: one has to repeat the algorithm above for every prime factor r of p-1 and then use the Chinese remainder theorem to find x from the partial solutions. This works nicely if p-1 is square-free.
If p-1 is divisible by a prime power, then a similar idea can be used. For example, let's assume that p-1 = m q^k.
In the first step, we compute z such that x == z (mod q) as shown above. Next we want to extend this to a solution x == z' (mod q^2). E.g. if p-1 = m q^2, then this means that we have to find z' such that
y^m == (g^m)^z' (mod p).
Since we already know that z' == z (mod q), z' must be in the set {z, z+q, z+2q, ..., z+(q-1)q}. Again, we could either do an exhaustive search for z' or improve the search with "baby-step giant-step". This step is repeated for every exponent of q; that is, from knowing x mod q^i we iteratively derive x mod q^(i+1).
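Putting the whole procedure together, here is a compact Python sketch (mine, following the steps above: factor p-1, solve each prime-power piece one base-q digit at a time, then combine with the Chinese remainder theorem). It brute-forces each digit where a real implementation would use baby-step giant-step, assumes g generates the full group mod p, and needs Python 3.8+ for pow(x, -1, m):

```python
def factorize(n):
    # Trial-division factorization: returns {prime: exponent}.
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def dlog_prime_power(y, g, p, q, k):
    # Find x mod q^k where y == g^x (mod p), assuming g has order p - 1.
    # Lifts one base-q digit at a time: x = c0 + c1*q + ... + c_{k-1}*q^(k-1).
    n = p - 1
    x = 0
    gamma = pow(g, n // q, p)  # an element of order q
    for i in range(k):
        # Strip off the digits found so far, then expose the next digit:
        # h equals gamma^(c_i).
        h = pow(y * pow(g, -x, p) % p, n // q ** (i + 1), p)
        c, cur = 0, 1
        while cur != h:  # brute-force the digit c in 0..q-1
            cur = cur * gamma % p
            c += 1
        x += c * q ** i
    return x

def crt(residues, moduli):
    # Chinese remainder theorem for pairwise-coprime moduli.
    x, m = 0, 1
    for r, mod in zip(residues, moduli):
        t = (r - x) * pow(m, -1, mod) % mod
        x += m * t
        m *= mod
    return x

def pohlig_hellman(y, g, p):
    # Solve y == g^x (mod p), assuming g generates the full group mod p.
    residues, moduli = [], []
    for q, k in factorize(p - 1).items():
        residues.append(dlog_prime_power(y, g, p, q, k))
        moduli.append(q ** k)
    return crt(residues, moduli)

# Small demo: 5 generates the multiplicative group mod 23, and 5^13 % 23 == 21.
assert pohlig_hellman(21, 5, 23) == 13
```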
I'm coding it up myself right now (in Java). I'm using Pollard's rho to find the small prime factors of p-1, then using Pohlig-Hellman to solve for a DSA private key, y = g^x. I am having the same problem.
UPDATE: "To be more specific, I don't understand how the following is computed: now divide 7531 by a^c0 to get 7531 · a^(-2) ≡ 6735 (mod p)."
If you find the modular inverse of a^c0, it will make sense.
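Concretely (a hedged reconstruction: the quoted numbers match the classic textbook example with p = 8101 and a = 6, but the post itself doesn't state p, so treat those values as my assumption):

```python
p, a, y = 8101, 6, 7531  # p and a assumed, not stated in the post
c0 = 2                   # the digit divided out, per the quoted step
print(y * pow(a, -c0, p) % p)  # prints 6735, matching the quoted result
```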
Regards
