How to set a square root to only be whole - math

I can't seem to find any kind of answer to this, but if I have an expression like the square root of (x^2 - 4n), where 4n is a constant, how can I choose x so that the expression gives a whole number?
I know setting x to n+1 works, but I'm looking for an algorithm that would generate all solutions.

So, the problem is to find all pairs of integers (x, m) such that:
sqrt(x^2 - 4n) = m
We have:
x^2 - 4n = m^2
or
x^2 - m^2 = 4n
so
(x + m)(x - m) = 4n
Now, 2 divides 4n, so it must divide (x + m) or (x - m). But (x + m) and (x - m) differ by 2m, so if 2 divides one of them it divides the other too. Thus a := (x + m)/2 and b := (x - m)/2 are both integers. Therefore
a*b = n
So, it is just a matter of factoring n as a*b in all possible ways and recovering x and m from the equations above:
x = a + b.
m = a - b.
Your solution x = n+1 corresponds to the trivial factorization n = n*1 where a=n and b=1.
UPDATE
Here is an algorithm that prints all pairs (x, m)
1. [Initialize] a := n.
2. [Check] If n % a = 0, then
   b := n / a;
   print(a + b); print(a - b).
3. [Decrement] a := a - 1.
4. [End?] If a * a > n, go to Step 2.
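For example, here is a direct Python version of these steps (my own sketch, not the poster's code); it loops over the smaller factor b instead of counting a down, which also picks up the m = 0 pair when n happens to be a perfect square.
from math import isqrt

def whole_square_roots(n):
    # All pairs (x, m) with x >= m >= 0 and x^2 - 4*n = m^2,
    # one for each factorization n = a*b with a >= b.
    pairs = []
    for b in range(1, isqrt(n) + 1):
        if n % b == 0:
            a = n // b
            pairs.append((a + b, a - b))   # x = a + b, m = a - b
    return pairs

print(whole_square_roots(12))   # [(13, 11), (8, 4), (7, 1)]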

Related

Implementation of Pohlig-Hellman unable to solve for large exponents

I'm trying to create an implementation of the Pohlig-Hellman algorithm in order to create a utility to craft / exploit backdoors in implementations of the Diffie-Hellman protocol. This project is inspired by this 2016 white-paper by NCC Group.
I currently have an implementation, here, that works for relatively small exponents – i.e. given a congruence g^x = h (mod n), for some specially-crafted modulus n = pq, where p and q are prime, my implementation can solve for values of x smaller than min{ p, q }.
However, if x is larger than the smallest prime factor of n, then my implementation gives an incorrect solution. I suspect that the issue may not be with my implementation of Pohlig-Hellman itself, but with the arguments I am passing to it. All the code can be found at the link provided above, but I'll copy the relevant snippets here:
#
# Implementation of Pohlig-Hellman algorithm
#
# The `crt` function implements the Chinese Remainder Theorem, and the `pollard` function implements
# Pollard's Rho algorithm for discrete logarithms (see /dph/crt.py and /dph/pollard.py).
#
def pohlig(G, H, P, factors):
    g = [pow(G, divexact(P - 1, f), P) for f in factors]
    h = [pow(H, divexact(P - 1, f), P) for f in factors]
    if Config.verbose:
        x = []
        total = len(factors)
        for i, (gi, hi) in enumerate(zip(g, h), start=1):
            print('Solving discrete logarithm {}/{}...'.format(str(i).rjust(len(str(total))), total))
            result = pollard(gi, hi, P)
            x.append(result)
            print(f'x = 0x{result.digits(16)}')
    else:
        x = [pollard(gi, hi, P) for gi, hi in zip(g, h)]
    return crt(x, factors)
Above is my implementation of Pohlig-Hellman, and below is where I call it to exploit a backdoor in some implementation of the Diffie-Hellman protocol.
def _exp(args):
    g = args.g
    h = args.h
    p_factors = list(map(mpz, args.p_factors.split(',')))
    try:
        p_factors.remove(2)
    except ValueError:
        pass
    q_factors = list(map(mpz, args.q_factors.split(',')))
    try:
        q_factors.remove(2)
    except ValueError:
        pass
    p = 2 * _product(*p_factors) + 1
    q = 2 * _product(*q_factors) + 1
    if Config.verbose:
        print(f'p = 0x{p.digits(16)}')
        print(f'q = 0x{q.digits(16)}')
        print()
        print(f'Compute the discrete logarithm modulo `p`')
        print(f'-----------------------------------------')
    px = pohlig(g % p, h % p, p, p_factors)
    if Config.verbose:
        print()
        print(f'Compute the discrete logarithm modulo `q`')
        print(f'-----------------------------------------')
    qx = pohlig(g % q, h % q, q, q_factors)
    if Config.verbose:
        print()
    x = crt([px, qx], [p, q])
    print(f'x = 0x{x.digits(16)}')
Here is a summary of what I am doing:
1. Choose a prime p = 2 * prod{ p_i } + 1, where p_i denotes a set of primes.
2. Choose a prime q = 2 * prod{ q_j } + 1, where q_j denotes a set of primes.
3. Inject n = pq as the backdoor modulus in some implementation of Diffie-Hellman.
4. Wait for a victim (e.g. Alice computes A = g^a (mod n), and Bob computes B = g^b (mod n)).
5. Solve for Alice's or Bob's secret exponent, a or b, and compute their shared secret key, K = A^b = B^a (mod n).
Step #5 is done by performing the Pohlig-Hellman algorithm twice to solve for x (mod p) and x (mod q), and then the Chinese Remainder Theorem is used to solve for x (mod n).
EDIT
The x that I am referring to in the description of step #5 is either Alice's secret exponent, a, or Bob's secret exponent, b, depending on which we choose to solve for, since only one is needed to compute the shared secret key, K.
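For concreteness, here is a small self-contained Python sketch of that structure (my own toy code with made-up names and a toy prime, not the implementation above): it solves g^x = h (mod p) for a single prime p = 2*3*5*7 + 1 by taking a brute-force discrete log in each prime-order subgroup and recombining the residues with the CRT.
from math import prod

def crt_combine(residues, moduli):
    # Chinese Remainder Theorem for pairwise-coprime moduli.
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)
    return total % M

def dlog_bruteforce(g, h, p, order):
    # Find k with g^k = h (mod p), 0 <= k < order, by trial multiplication.
    acc = 1
    for k in range(order):
        if acc == h:
            return k
        acc = acc * g % p
    raise ValueError('no discrete log found')

def pohlig_hellman_toy(g, h, p, factors):
    # Solve g^x = h (mod p), where p - 1 factors into the given coprime parts.
    residues = []
    for f in factors:
        gi = pow(g, (p - 1) // f, p)   # generator of the order-f subgroup
        hi = pow(h, (p - 1) // f, p)
        residues.append(dlog_bruteforce(gi, hi, p, f))
    return crt_combine(residues, factors)

p, factors = 211, [2, 3, 5, 7]   # 211 = 2*3*5*7 + 1 is prime; 3 is a primitive root
g, secret = 3, 151
h = pow(g, secret, p)
assert pohlig_hellman_toy(g, h, p, factors) == secret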

Given a list of coefficients, create a polynomial

I want to create a polynomial with given coefficients. This seems very simple, but nothing I have found so far does what I want.
For example, in an environment like this:
n = 11
K = GF(4,'a')
R = PolynomialRing(GF(4,'a'),"x")
x = R.gen()
a = K.gen()
v = [1,a,0,0,1,1,1,a,a,0,1]
Given a list/vector v of length n (I will set this n and v at the beginning), I want to get the polynomial v(x) = v[0] + v[1]*x + ... + v[n-1]*x^(n-1), i.e. the sum of the v[i]*x^i.
(Actually, after getting this v(x), I am going to build the quotient ring GF(4,'a')[x] / <x^n - v(x)>.) Then I will say:
S = R.quotient(x^n-v(x), 'y')
y = S.gen()
But I couldn't figure out how to write v(x).
This is a frequently asked question in many places, so it is worth leaving the answer here even though it is very simple:
I just wrote R(v) and it gave me the polynomial:
n = 11
K = GF(4,'a')
R = PolynomialRing(GF(4,'a'),"x")
x = R.gen()
a = K.gen()
v = [1,a,0,0,1,1,1,a,a,0,1]
R(v)
x^10 + a*x^8 + a*x^7 + x^6 + x^5 + x^4 + a*x + 1
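As a side note (my addition, not part of the original answer): the same polynomial can also be built explicitly from the definition, which makes the v[i]*x^i sum visible, and then used directly in the quotient construction. This is ordinary Sage/Python in the same session:
v_x = sum(c * x**i for i, c in enumerate(v))
print(v_x == R(v))               # True
S = R.quotient(x**n - v_x, 'y')
y = S.gen()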
Basically (that is, ignoring the specifics of your polynomial ring) you have a list/vector v of length n and you require a polynomial which is the sum of all v[i]*x^i. Note that this sum equals the matrix product V.X where V is a one row matrix (essentially equal to the vector v) and X is a column matrix consisting of powers of x. In Maxima you could write
v: [1,a,0,0,1,1,1,a,a,0,1]$
n: length(v)$
V: matrix(v)$
X: genmatrix(lambda([i,j], x^(i-1)), n, 1)$
V.X;
The output is
x^10 + a*x^8 + a*x^7 + x^6 + x^5 + x^4 + a*x + 1

collision detection between two lines

This is a fairly simple question. I need an equation to determine whether two 2-dimensional lines collide with each other. If they do, I also need to know the X and Y position of the collision.
Put them both in general form (Ax + By = C). If the coefficients A and B of one line are proportional to those of the other, the lines are parallel. Otherwise, treat them as two simultaneous equations and solve for x and y.
Let the two lines A and B be represented in slope-intercept form: y = mx + b,
where m is the slope of the line.
If A and B are parallel, their slopes are equal;
otherwise they collide (intersect) at a point T(x, y).
To find the coordinates of T you have to solve an easy equation:
A: y = mx + b
B: y = Mx + B
y(A) = y(B) means mx + b = Mx + B, which yields x = (B - b)/(m - M), and by substituting
this x into line A we find y = (m*(B - b))/(m - M) + b,
so T = ((B - b)/(m - M), (m*(B - b))/(m - M) + b).
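For reference, here is a small Python sketch of the same computation (my own illustration, with made-up names); it takes both lines in slope-intercept form and returns None when the slopes are equal, i.e. when the lines are parallel. Note that vertical lines cannot be written in this form.
def intersect(m, b, M, B):
    # Intersection of y = m*x + b and y = M*x + B; None if parallel.
    if m == M:
        return None
    x = (B - b) / (m - M)
    y = m * x + b
    return (x, y)

print(intersect(1.0, 0.0, -1.0, 4.0))   # (2.0, 2.0)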

equivalent expressions

I'm trying to figure out equivalent expressions for the following equations using bitwise, addition, and/or subtraction operators. I know there's supposed to be an answer (which furthermore generalizes to work for any modulus 2^a - 1, where a is a power of 2), but for some reason I can't seem to figure out what the relation is.
Initial expressions:
x = n % (2^32-1);
c = (int)n / (2^32-1); // ints are 32-bit, but x, c, and n may have a greater number of bits
My procedure for the first expression was to take n modulo 2^32, then try to make up the difference between the two moduli. I'm having trouble with this second part.
x = (n & 0xFFFFFFFF) + difference // how do I calculate difference?
I know that the difference n%(2^32) - n%(2^32-1) is periodic (with a period of 2^32*(2^32-1)), and there's a "spike up" starting at multiples of 2^32-1 and ending at 2^32. After each multiple of 2^32, the difference plot decreases by 1 (hopefully my description makes sense).
Similarly, the second expression could be calculated in a similar fashion:
c = (n >> 32) + makeup // how do I calculate makeup?
I think makeup steadily increases by 1 at multiples of 2^32-1 (and decreases by 1 at multiples of 2^32), though I'm having trouble expressing this idea in terms of the available operators.
You can use these identities:
n mod (x - 1) = (((n div x) mod (x - 1)) + ((n mod x) mod (x - 1))) mod (x - 1)
n div (x - 1) = (n div x) + (((n div x) + (n mod x)) div (x - 1))
The first comes from (a*b + c) mod d = ((a mod d) * (b mod d) + (c mod d)) mod d.
The second comes from expanding n = a*x + b = a*(x - 1) + (a + b) and dividing by x - 1.
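To make the identities concrete, here is a short Python sketch (my own, with x = 2^32) that applies them using only shifts, masks, additions and comparisons, and checks the results against Python's built-in // and %:
import random

M = (1 << 32) - 1                     # the modulus 2^32 - 1

def mod_m(n):
    # Fold n with the first identity until it drops below 2^32.
    while n > M:
        n = (n >> 32) + (n & 0xFFFFFFFF)
    return 0 if n == M else n

def div_m(n):
    # Accumulate the quotient with the second identity.
    q = 0
    while n >= M:
        hi, lo = n >> 32, n & 0xFFFFFFFF
        if hi == 0:                   # here n == 2^32 - 1 exactly
            return q + 1
        q += hi
        n = hi + lo
    return q

for _ in range(1000):
    n = random.getrandbits(96)
    assert mod_m(n) == n % M and div_m(n) == n // M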
I think I've figured out the answer to my question:
Compute c first, then use the result to compute x. This assumes that the comparison returns 1 for true and 0 for false, and that the shifts are all logical shifts.
c = (n>>32) + ((n & 0xFFFFFFFF) >= (0xFFFFFFFF - (n>>32)))
x = (0xFFFFFFFE - (n & 0xFFFFFFFF) - ((c - (n>>32))<<32)-c) & 0xFFFFFFFF
edit: changed x (only need to keep lower 32 bits, rest is "junk")

Can someone explain Mathematical Induction (to prove a recursive method)

Can someone explain mathematical induction to prove a recursive method? I am a freshman computer science student and I have not yet taken Calculus (I have had up through Trig). I kind of understand it, but I have trouble when asked to write out an induction proof for a recursive method.
Here is an explanation by example:
Let's say you have the following formula that you want to prove:
sum(i | i <- [1, n]) = n * (n + 1) / 2
This formula provides a closed form for the sum of all integers between 1 and n.
We will start by proving the formula for the simple base case of n = 1. In this case, both sides of the formula reduce to 1. This in turn means that the formula holds for n = 1.
Next, we will prove that if the formula holds for a value n, then it also holds for the next value, n + 1. In other words, if the following is true:
sum(i | i <- [1, n]) = n * (n + 1) / 2
Then the following is also true:
sum(i | i <- [1, n + 1]) = (n + 1) * (n + 2) / 2
To do so, let's start with the first side of the last formula:
s1 = sum(i | i <- [1, n + 1]) = sum(i | i <- [1, n]) + (n + 1)
That is, the sum of all integers between 1 and n + 1 is equal to the sum of integers between 1 and n, plus the last term n + 1.
Since we are basing this proof on the condition that the formula holds for n, we can write:
s1 = n * (n + 1) / 2 + (n + 1) = (n + 1) * (n / 2 + 1) = (n + 1) * (n + 2) / 2 = s2
As you can see, we have arrived at the second side of the formula we are trying to prove, which means that the formula does indeed hold.
This finishes the inductive proof, but what does it actually mean?
1. The formula is correct for n = 1.
2. If the formula is correct for n, then it is correct for n + 1.
From 1 and 2, we can say: if the formula is correct for n = 1, then it is correct for 1 + 1 = 2. Since we proved the case of n = 1, the case of n = 2 is indeed correct.
We can repeat this process again: the case of n = 2 is correct, so the case of n = 3 is correct. This reasoning can go on ad infinitum; the formula is correct for all integer values of n >= 1.
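(Not part of the proof, but as a quick numerical sanity check of the closed form in Python:)
for n in range(1, 100):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2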
induction != Calc!!!
I can get N guys drunk with 10*N beers.
Base Case: 1 guy
I can get one guy drunk with 10 beers
Inductive step: given p(i), prove p(i + 1).
I can get i guys drunk with 10 * i beers; if I add another guy, I can get him drunk with 10 more beers. Therefore, I can get i + 1 guys drunk with 10 * (i + 1) beers.
p(1) -> p(2) -> p(3) -> ...
Discrete Math is easy!
First, you need a base case. Then you need an inductive step that takes you from some step n to step n + 1. In the inductive step you use the inductive hypothesis: the assumption that the claim holds for step n. Finally, you use that assumption to prove step n + 1.
