I want to create a poor man's version of a password complexity checker. I determine the rough character set the password uses and its length; the search space is then charset ^ length. To compare this against a single value, I want the smallest x that, when used as the exponent of 2, gives something larger than the search space. In more mathy language, I want this:
given a and b, find the smallest x where a^b < 2^x
My math sucks. Is there a quick and easy way to calculate this?
Maybe my math doesn't suck quite that much.
2^x == a^b
define a = 2^c, i.e. c = log2(a)
2^x == (2^c)^b
2^x == 2^(c*b)
x == c*b
x == log2(a) * b
You can solve this by using logarithms. Taking the logarithm of both sides yields
x > b * log(a) / log(2)
If you want the smallest integer for which the inequality holds, round up the right-hand side. In Python this can be implemented as:
import math

def find_x(a, b):
    return math.ceil(b * math.log(a) / math.log(2))
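For a quick sanity check (assuming, say, a 62-character alphanumeric alphabet and a 10-character password, so a = 62 and b = 10):

>>> find_x(62, 10)
60

That is, the search space of such a password is just under 2^60.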
I'm trying to find the time complexity (big O) of a recursive formula.
I tried to find a solution; you can see the formula and my attempt below:
Like Brenner said, your last assumption is false. Here is why. Let's take the definition of big-O from the Wikipedia page (using n instead of x):
f(n) = O(g(n)) if and only if there exist constants c, n0 s.t. |f(n)| <= c |g(n)| for all n >= n0.
We want to check if O(2^(n^2)) = O(2^n). Clearly, 2^(n^2) is in O(2^(n^2)), so let's pick f(n) = 2^(n^2) and check whether it is in O(2^n). Put this into the above formula:
there exist c, n0: 2^(n^2) <= c * 2^n for all n >= n0
Let's see if we can find suitable constants n0 and c for which the above is true, or derive a contradiction to prove that it is not:
Take the log on both sides:
log(2^(n^2)) <= log(c * 2^n)
Simplify:
n^2 * log(2) <= log(c) + n * log(2)
Divide by log(2):
n^2 <= log(c)/log(2) + n
It's easy to see now that there is no c, n0 for which the above holds for all n >= n0, thus O(2^(n^2)) = O(2^n) is not a valid assumption.
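To make this concrete: even with a huge constant like c = 2^100 (so log(c)/log(2) = 100), the inequality n^2 <= 100 + n already fails at n = 11, and choosing a larger c only postpones the failure; it never holds for all sufficiently large n.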
The last assumption you've specified with the question mark is false! Do not make such assumptions.
The rest of the manipulations you've supplied seem to be correct, but they actually don't get you anywhere.
You should have finished this exercise in the middle of your draft:
T(n) = O(T(1)^(3^log2(n)))
And that's it. That's the solution!
You could actually claim that
3^log2(n) == n^log2(3) ==~ n^1.585
and then you get:
T(n) = O(T(1)^(n^1.585))
which is somewhat similar to the manipulations you've made in the second part of the draft.
So you can also leave it like this. But you cannot mess with the exponent. Changing the value of the exponent changes the big-O classification.
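If the identity 3^log2(n) == n^log2(3) looks surprising, note that both sides equal 2^(log2(3) * log2(n)). A quick numeric sanity check in plain Python (nothing here is from the original post):

import math

n = 64
print(3 ** math.log2(n))   # 729.0, since log2(64) = 6 and 3^6 = 729
print(n ** math.log2(3))   # also 729, up to floating-point rounding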
I'm reading a book about cryptography (I've translated the terms from Spanish to English) and I don't understand how to calculate the inverse within this field (the original question used the term "body" instead of "field", a literal translation from languages like Spanish or German).
Encrypting with a monoalphabetic substitution by pure decimation:
Equivalences:
Ci: encrypted letter
a: decimation constant
Mi: unencrypted (plaintext) letter
mod: modulo operation (we take the remainder)
n: number of letters in the encryption alphabet
Spanish alphabet: ABCDEFGHIJKLMNÑOPQRSTUVWXYZ
· Encryption: Ci = a* Mi mod n
For example --> we will encrypt the letter C (C is at position 2, starting from 0) with a=20 and with the Spanish alphabet (n=27) --> Ci = 20*2 mod 27 = 40 mod 27 = 13 => N
· Decryption: a^(-1) * Ci mod n
HERE IS THE PROBLEM
a^(-1) is the inverse of the decimation factor in the field of integers mod n (the "body" n); in other words: inverse(a, n). I've googled and tried to do some calculations, but I don't obtain the correct result ---> inverse(a, n) = inverse(20, 27) = 16 (and the gcd condition holds, so the inverse should exist).
For example:
22^(-1) * 13 mod 27 != 16
To find the modular (multiplicative) inverse in your example you have to find x such that (22 * x) % 27 == 1.
There are a variety of different ways you can do this mathematically. Note that in general, an inverse exists only if gcd(a, n) == 1.
If you want to write a simple algorithm for your example, try this Python code:
def inverse(a, n):
    for x in range(n):
        if (a * x) % n == 1:
            return x
This gives:
>>> inverse(22, 27)
16
>>> inverse(20, 27)
23
As mentioned in the comments below your question, there may well be better functions for computing the modular inverse in existing libraries for your favourite programming language.
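For larger moduli the brute-force loop gets slow; a common alternative is the extended Euclidean algorithm. Here is a minimal Python sketch (the function name inverse_ext is my own, not from the question), and in Python 3.8+ the built-in pow(a, -1, n) gives the same result:

def inverse_ext(a, n):
    # Extended Euclidean algorithm: keep a coefficient t such that
    # a*t is congruent to r (mod n) for the current remainder r.
    t, new_t = 0, 1
    r, new_r = n, a
    while new_r != 0:
        q = r // new_r
        t, new_t = new_t, t - q * new_t
        r, new_r = new_r, r - q * new_r
    if r != 1:
        raise ValueError("a is not invertible modulo n (gcd != 1)")
    return t % n

>>> inverse_ext(20, 27)
23
>>> pow(20, -1, 27)   # Python 3.8+
23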
I have spent the last 5 hours searching for an answer. Even though I have found many answers, they have not helped in any way.
What I am basically looking for is a mathematical, arithmetic-only representation of the bitwise XOR operator for any 32-bit unsigned integers.
Even though this sounds really simple, nobody (at least it seems so) has managed to find an answer to this question.
I hope we can brainstorm, and find a solution together.
Thanks.
XOR for any numerical input between 0 and 1, inclusive:
a + b - ab(1 + a + b - ab)
XOR for binary input:
a + b - 2ab or (a-b)²
Derivation
Basic Logical Operators
NOT = (1-x)
AND = x*y
From those operators we can get...
OR = (1-(1-a)(1-b)) = a + b - ab
Note: If a and b are mutually exclusive, then their AND condition will always be zero - from a Venn diagram perspective, this means there is no overlap. In that case, we could write OR = a + b, since a*b = 0 for all values of a and b.
2-Factor XOR
Defining XOR as (a OR b) AND (NOT (a AND b)):
(a OR b) --> (a + b - ab)
(NOT (a AND b)) --> (1 - ab)
AND these conditions together to get...
(a + b - ab)(1 - ab) = a + b - ab(1 + a + b - ab)
Computational Alternatives
If the input values are binary, then the power terms can be ignored to arrive at simplified, computationally equivalent forms.
a + b - ab(1 + a + b - ab) = a + b - ab - a²b - ab² + a²b²
If x is binary (either 1 or 0), then we can disregard powers since 1² = 1 and 0² = 0...
a + b - ab - a²b - ab² + a²b² -- remove powers --> a + b - 2ab
XOR (binary) = a + b - 2ab
Binary also allows other equations to be computationally equivalent to the one above. For instance...
Given (a-b)² = a² + b² - 2ab
If input is binary we can ignore powers, so...
a² + b² - 2ab -- remove powers --> a + b - 2ab
Allowing us to write...
XOR (binary) = (a-b)²
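As a sanity check, here is a tiny Python snippet (mine, not part of the original answer) that confirms the full algebraic form and both binary shortcuts agree with the bitwise operator on the four discrete points:

for a in (0, 1):
    for b in (0, 1):
        full = a + b - a*b*(1 + a + b - a*b)
        assert full == a + b - 2*a*b == (a - b)**2 == (a ^ b)
print("all four (a, b) combinations match")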
Multi-Factor XOR
XOR = (1 - A*B*C...)(1 - (1-A)(1-B)(1-C)...)
Excel VBA example...
Function ArithmeticXOR(R As Range, Optional EvaluateEquation = True)
    Dim AndOfNots As String
    Dim AndGate As String
    For Each c In R
        AndOfNots = AndOfNots & "*(1-" & c.Address & ")"
        AndGate = AndGate & "*" & c.Address
    Next
    AndOfNots = Mid(AndOfNots, 2)
    AndGate = Mid(AndGate, 2)
    'Now all we want is (Not(AndGate) AND Not(AndOfNots))
    ArithmeticXOR = "(1 - " & AndOfNots & ")*(1 - " & AndGate & ")"
    If EvaluateEquation Then
        ArithmeticXOR = Application.Evaluate(ArithmeticXOR)
    End If
End Function
Any n of k
These same methods can be extended to allow for any n number out of k conditions to qualify as true.
For instance, out of three variables a, b, and c, if you're willing to accept any two conditions, then you want a&b or a&c or b&c. This can be arithmetically modeled from the composite logic...
(a && b) || (a && c) || (b && c) ...
and applying our translations...
1 - (1-ab)(1-ac)(1-bc)...
This can be extended to any n number out of k conditions. There is a pattern of variable and exponent combinations, but this gets very long; however, you can simplify by ignoring powers for a binary context. The exact pattern is dependent on how n relates to k. For n = k-1, where k is the total number of conditions being tested, the result is as follows:
c1 + c2 + c3 + ... + ck - n*∏
Where c1 through ck are all of the n-variable combinations and ∏ is the product of all k variables.
For instance, "true if 3 of 4 conditions are met" would be
abc + abe + ace + bce - 3abce
This makes perfect logical sense since what we have is the additive OR of AND conditions minus the overlapping AND condition.
If you begin looking at n = k-2, k-3, etc., the pattern becomes more complicated because we have more overlaps to subtract out. If this is fully extended to the smallest value, n = 1, then we arrive at nothing more than a regular OR condition.
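To double-check the 3-of-4 example above, this short Python snippet (mine, not from the answer) compares the formula against a direct count over all 16 binary combinations:

from itertools import product

for a, b, c, e in product((0, 1), repeat=4):
    formula = a*b*c + a*b*e + a*c*e + b*c*e - 3*a*b*c*e
    expected = 1 if (a + b + c + e) >= 3 else 0
    assert formula == expected
print("formula matches 'at least 3 of 4' on all 16 cases")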
Thinking About Non-Binary Values and the Fuzzy Region
The actual algebraic XOR equation a + b - ab(1 + a + b - ab) is much more complicated than the computationally equivalent binary equations like x + y - 2xy and (x-y)². Does this mean anything, and is there any value to this added complexity?
Obviously, for this to matter, you'd have to care about the decimal values outside of the discrete points (0,0), (0,1), (1,0), and (1,1). Why would this ever matter? Sometimes you want to relax the integer constraint for a discrete problem. In that case, you have to look at the premises used to convert logical operators to equations.
When it comes to translating Boolean logic into arithmetic, your basic building blocks are the AND and NOT operators, with which you can build both OR and XOR.
OR = (1-(1-a)(1-b)(1-c)...)
XOR = (1 - a*b*c...)(1 - (1-a)(1-b)(1-c)...)
So if you're thinking about the decimal region, then it's worth thinking about how we defined these operators and how they behave in that region.
Non-Binary Meaning of NOT
We expressed NOT as 1-x. Obviously, this simple equation works for binary values of 0 and 1, but the thing that's really cool about it is that it also provides the fractional or percent-wise complement for values between 0 and 1. This is useful since NOT is also known as the complement in Boolean logic, and when it comes to sets, NOT refers to everything outside of the current set.
Non-Binary Meaning of AND
We expressed AND as x*y. Once again, obviously it works for 0 and 1, but its effect is a little more arbitrary for values between 0 and 1, where multiplication results in partial truths (decimal values) diminishing each other. It's possible to imagine that you would want to model truth as being averaged or accumulative in this region. For instance, if two conditions are hypothetically half true, is the AND condition only a quarter true (0.5 * 0.5), or is it entirely true (0.5 + 0.5 = 1), or does it remain half true ((0.5 + 0.5) / 2)? As it turns out, the quarter truth is actually correct for conditions that are entirely discrete, and the partial truth represents probability. For instance, will you flip tails (binary condition, 50% probability) both now AND again a second time? The answer is 0.5 * 0.5 = 0.25, or 25% true. Accumulation doesn't really make sense because it's basically modeling an OR condition (remember, OR can be modeled by + when the AND condition is not present, so summation is characteristically OR). The average makes sense if you're looking at agreement and measurements, but it's really modeling a hybrid of AND and OR. For instance, ask 2 people to say, on a scale of 1 to 10, how much they agree with the statement "It is cold outside". If they both say 5, then the truth of the statement "It is cold outside" is 50%.
Non-Binary Values in Summary
The takeaway from this look at non-binary values is that we can capture actual logic in our choice of operators and construct equations from the ground up, but we have to keep numerical behavior in mind. We are used to thinking about logic as discrete (binary) and about computer processing as discrete, but non-binary logic is becoming more and more common and can help make problems that are difficult with discrete logic easier or possible to solve. You'll need to give thought to how values interact in this region and how to translate them into something meaningful.
"mathematical, arithmetic only representation" are not correct terms anyway. What you are looking for is a function which goes from IxI to I (domain of integer numbers).
Which restrictions would you like to have on this function? Only linear algebra? (+ , - , * , /) then it's impossible to emulate the XOR operator.
If instead you accept some non-linear operators like Max() Sgn() etc, you can emulate the XOR operator with some "simpler" operators.
Given that (a-b)(a-b) quite obviously computes xor for a single bit, you could construct a function with the floor or mod arithmetic operators to split the bits out, then xor them, then sum to recombine. (a-b)(a-b) = a² - 2·a·b + b², so one bit of xor gives a polynomial with 3 terms.
Without floor or mod, the different bits interfere with each other, so you're stuck with looking at a solution which is a polynomial interpolation treating the input a, b as a single value: a xor b = g(a · 2^32 + b)
The polynomial has 2^64 - 1 terms, though it will be symmetric in a and b as xor is commutative, so you only have to calculate half of the coefficients. I don't have the space to write it out for you.
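Here is a minimal Python sketch of that bit-splitting idea (the name arithmetic_xor is mine): it isolates each bit with // and %, applies (a-b)² per bit, and recombines with the matching power of two:

def arithmetic_xor(a, b, bits=32):
    result = 0
    for i in range(bits):
        abit = (a // 2**i) % 2                   # i-th bit of a
        bbit = (b // 2**i) % 2                   # i-th bit of b
        result += ((abit - bbit) ** 2) * 2**i    # per-bit xor, weighted back into place
    return result

>>> arithmetic_xor(0xDEADBEEF, 0x12345678) == (0xDEADBEEF ^ 0x12345678)
True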
I wasn't able to find any solution for 32-bit unsigned integers but I've found some solutions for 2-bit integers which I was trying to use in my Prolog program.
One of my solutions (which uses exponentiation and modulo) is described in this StackOverflow question and the others (some without exponentiation, pure algebra) can be found in this code repository on Github: see different xor0 and o_xor0 implementations.
The nicest xor representation for 2-bit uints seems to be: xor(A,B) = (A + B*((-1)^A)) mod 4.
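A quick brute-force check of that 2-bit formula in Python (my own snippet, using ** for the exponentiation):

for A in range(4):
    for B in range(4):
        assert (A + B * (-1)**A) % 4 == (A ^ B)
print("matches bitwise xor for all 2-bit pairs")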
Solution with +, -, *, / expressed as an Excel formula (where cells A2 to A5 and cells B1 to E1 contain the numbers 0-3) to be inserted in cells B2 to E5:
(1-$A2)*(2-$A2)*(3-$A2)*($A2+B$1)/6 - $A2*(1-$A2)*(3-$A2)*($A2+B$1)/2 + $A2*(1-$A2)*(2-$A2)*($A2-B$1)/6 + $A2*(2-$A2)*(3-$A2)*($A2-B$1)/2 - B$1*(1-B$1)*(3-B$1)*$A2*(3-$A2)*(6-4*$A2)/2 + B$1*(1-B$1)*(2-B$1)*$A2*($A2-3)*(6-4*$A2)/6
It may be possible to adapt and optimize this solution for 32-bit unsigned integers. It's complicated, and it uses logarithms, but it seems to be the most universal one as it can be used on any integer number. Additionally, you'll have to check whether it really works for all number combinations.
I do realize that this is sort of an old topic, but the question is worth answering and yes, this is possible using an algorithm. And rather than go into great detail about how it works, I'll just demonstrate with a simple example (written in C):
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
typedef unsigned long
number;
number XOR(number a, number b)
{
number
result = 0,
/*
The following calculation just gives us the highest power of
two (and thus the most significant bit) for this data type.
*/
power = pow(2, (sizeof(number) * 8) - 1);
/*
Loop until no more bits are left to test...
*/
while(power != 0)
{
result *= 2;
/*
The != comparison works just like the XOR operation.
*/
if((power > a) != (power > b))
result += 1;
a %= power;
b %= power;
power /= 2;
}
return result;
}
int main()
{
srand(time(0));
for(;;)
{
number
a = rand(),
b = rand();
printf("a = %lu\n", a);
printf("b = %lu\n", b);
printf("a ^ b = %lu\n", a ^ b);
printf("XOR(a, b) = %lu\n", XOR(a, b));
getchar();
}
}
I think this relation might help in answering your question:
A + B = (A XOR B) + 2*(A AND B)
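A quick numeric check: with A = 6 and B = 3, A XOR B = 5 and A AND B = 2, so (A XOR B) + 2*(A AND B) = 5 + 4 = 9 = 6 + 3 = A + B.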
(a-b)*(a-b) is the right answer. The only one? I guess so!
Is 2^(n+1) = O(2^n)?
I believe that this one is correct because n+1 ~= n.
Is 2^(2n) = O(2^n)?
This one seems like it would use the same logic, but I'm not sure.
The first case is obviously true - you just multiply the constant C by 2.
Current answers to the second part of the question look like handwaving to me, so I will try to give a proper mathematical explanation. Let's assume that the second part is true; then, by the definition of big-O, there would have to be a constant c such that 2^(2n) <= c * 2^n for all sufficiently large n, i.e. 2^n <= c,
which is clearly wrong, because there is no such constant that satisfies such an inequality.
Claim: 2^(2n) != O(2^n)
Proof by contradiction:
Assume: 2^(2n) = O(2^n)
Which means, there exists c>0 and n_0 s.t. 2^(2n) <= c * 2^n for all n >= n_0
Dividing both sides by 2^n, we get: 2^n <= c * 1
Contradiction! 2^n is not bounded by a constant c.
Therefore 2^(2n) != O(2^n)
Note that 2^(n+1) = 2*(2^n) and 2^(2n) = (2^n)^2
From there, either use the rules of Big-O notation that you know, or use the definition.
I'm assuming you just left off the O() notation on the left side.
O(2^(n+1)) is the same as O(2 * 2^n), and you can always pull out constant factors, so it is the same as O(2^n).
However, constant factors are the only thing you can pull out. 2^(2n) can be expressed as (2^n)(2^n), and 2^n isn't a constant. So, the answers to your questions are yes and no.
2^(n+1) = O(2^n) because 2^(n+1) = 2^1 * 2^n = O(2^n).
Suppose 2^(2n) = O(2^n). Then there exists a constant c such that, for n beyond some n0, 2^(2n) <= c * 2^n. Dividing both sides by 2^n, we get 2^n <= c. There are no values of c and n0 that can make this true, so the hypothesis is false and 2^(2n) != O(2^n).
To answer these questions, you must pay attention to the definition of big-O notation. So you must ask:
is there any constant C such that 2^(n+1) <= C(2^n) (provided that n is big enough)?
And the same goes for the other example: is there any constant C such that 2^(2n) <= C(2^n) for all n that is big enough?
Work on those inequalities and you'll be on your way to the solution.
We will use the identity a^(m*n) = (a^m)^n = (a^n)^m.
Now,
2^(2*n) = (2^n)^2 = (2^2)^n
so
(2^2)^n = 4^n
and hence
2^(2n) = O(4^n).
Obviously, the rate of growth of 2^n is less than that of 4^n.
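For a concrete feel: at n = 10, 2^n = 1,024 while 2^(2n) = 4^n = 1,048,576; the ratio 2^(2n) / 2^n = 2^n itself grows without bound, which is why no constant c can bridge the gap.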
Let me give you a solution that makes sense instantly.
Assume that 2^n = X
(since n is a variable, X is also a variable); then 2^(2n) = X^2. So, we are basically asking whether X^2 = O(X). You probably already know this is untrue from easier asymptotic-notation exercises you've done.
For instance:
X^2 = O(X^2) is true.
X^2+X+1 = O(X^2) is also true.
X^2+X+1 = O(X) is untrue.
X^2 = O(X) is also untrue.
Think about polynomials.