How to get the true Euclidean remainder? - math

I have two unsigned longs a and q and I would like to find a number n between 0 and q-1 such that n + a is divisible by q (without overflow).
In other words, I'm trying to find a (portable) way of computing (-a)%q which lies between 0 and q-1. (The sign of that expression is implementation-defined in C89.) What's a good way to do this?

What you're looking for is mathematically equivalent to (q - a) mod q, which in turn is equivalent to (q - (a mod q)) mod q. I think you should therefore be able to compute this as follows:
unsigned long result = (q - (a % q)) % q;



is n! in the order of Theta((n+1)!)? can you show me a proof? [duplicate]

What about (n-1)!?
Also if you could show me a proof that would help me understand better.
I'm stuck on this one.
To show that (n+1)! is in O(n!) you have to show that there is a constant c so that for all big enough n (n > n0) the inequality
(n+1)! < c n!
holds. However since (n+1)! = (n+1) n! this simplifies to
n+1 < c
which clearly does not hold since c is a constant and n can be arbitrarily large.
On the other hand, (n-1)! is in O(n!). The proof is left as an exercise.
(n+1)! = (n+1) * n!
O((n+1) * n!) = O(n*n! + n!) = O(n*n!), which strictly dominates O(n!)
(n-1)! = n! / n
O((n-1)!) = O(n!/n), which is strictly dominated by O(n!)
I wasn't formally introduced to algorithmic complexity, so take what I write with a grain of salt.
That said, we know n^3 grows much faster than n, right?
Well, since (n + 1)! = (n - 1)! * n * (n + 1),
comparing (n + 1)! to (n - 1)! is like comparing n^3 to n.
Sorry, I don't have a proof, but expanding the factorial as above should lead to one.

Reversing mod for a function?

I'm trying to solve this equation:
(b(ax+b ) - c) % n = e
Where everything is given except x
I tried the approach of :
(A + x) % B = C
(B + C - A) % B = x
where A is (-c), and then manually solved for x given my other substitutions, but I am not getting the correct output. Would I possibly need to use the extended Euclidean algorithm? Any help would be appreciated! I understand this question has been asked before; I tried the solutions there, but they don't work for me.
(b*(a*x+b) - c) % n = e
can be rewritten as:
(b*a*x) % n = (e - b*b + c) % n
x = ((e - b*b + c) * modular_inverse(b*a, n)) % n
where the modular inverse of u, modular_inverse(u, n), is a number v such that u*v % n == 1. See this question for code to calculate the modular inverse.
Some caveats:
When simplifying modular equations, you can never simply divide, you need to multiply with the modular inverse.
There is no straightforward formula to calculate the modular inverse, but there is a simple, quick algorithm to calculate it, similar to calculating the gcd.
The modular inverse doesn't always exist.
Depending on the programming language, when one or both arguments are negative, the result of modulo can also be negative.
As every solution works for every x modulo n, for small n only the numbers from 0 till n-1 need to be tested, so in many cases a simple loop is sufficient.
What language are you doing this in, and are the variables constant?
Here's a quick brute-force way to determine the possible values of x in Java (note that % in Java can yield a negative result, so it is normalized before comparing against e):
for (int x = -1000; x < 1000; x++) {
    if (((b * (a * x + b) - c) % n + n) % n == e) {
        System.out.println(x);
    }
}

Math-ish recursion to formula

Ok, so this is an application of existing mathematical practices, but I can't really apply them to my case.
So, I have x of a currency to increase the level of a game-object y for cost z.
z is calculated as cost(y.lvl) = c_1 * c_2^y.lvl / c_3, where the c's are constants.
I am seeking an efficient way to calculate, how often I can increase the level of y, given x. Currently I'm using a loop that does something like this:
double tempX = x;
int counter = 0;
while (tempX >= cost(y.lvl + counter)) {
    tempX -= cost(y.lvl + counter);
    counter++;
}
The problem is that, in some cases, this loop has to iterate too many times to stay performant.
What I am looking for is essentially a function
int howManyCanBeBought(x, y.lvl), which calculates its result in one go instead of looping many times.
I've read about transforming recurrences into generating functions and then into closed formulas, but I didn't get the math behind it. Is there an easy way to do it?
If I understand correctly, you're looking for the largest n such that:
Σ_{i=0..n} (c1 / c3) · c2^(lvl+i) ≤ x
Dividing by the constant factor:
Σ_{i=0..n} c2^i ≤ (c3 / (c1 · c2^lvl)) · x
Using the formula for the sum of a geometric series:
(c2^(n+1) - 1) / (c2 - 1) ≤ (c3 / (c1 · c2^lvl)) · x
And solving for the maximum integer:
n = floor( log_c2( (c3 · (c2 - 1) / (c1 · c2^lvl)) · x + 1 ) - 1 )

Finding time complexity of recursive formula

I'm trying to find time complexity (big O) of a recursive formula.
I tried to find a solution, you may see the formula and my solution below:
Like Brenner said, your last assumption is false. Here is why: Let's take the definition of O(n) from the Wikipedia page (using n instead of x):
f(n) = O(g(n)) if and only if there exist constants c, n0 s.t. |f(n)| <= c |g(n)| for all n >= n0.
We want to check if O(2^(n^2)) = O(2^n). Clearly, 2^(n^2) is in O(2^(n^2)), so let's pick f(n) = 2^(n^2) and check whether it is in O(2^n). Put this into the above formula:
exists c, n0: 2^(n^2) <= c * 2^n for all n >= n0
Let's see if we can find suitable constants n0 and c for which the above is true, or derive a contradiction to prove that it is not:
Take the log on both sides:
log(2^(n^2)) <= log(c * 2^n)
Simplify:
n^2 * log(2) <= log(c) + n * log(2)
Divide by log(2):
n^2 <= log(c)/log(2) + n
It's easy to see now that there is no c, n0 for which the above holds for all n >= n0, thus O(2^(n^2)) = O(2^n) is not a valid assumption.
The last assumption you've specified with the question mark is false! Do not make such assumptions.
The rest of the manipulations you've supplied seem to be correct. But they actually bring you nowhere.
You should have finished this exercise in the middle of your draft:
T(n) = O(T(1)^(3^log2(n)))
And that's it. That's the solution!
You could actually claim that
3^log2(n) == n^log2(3) ≈ n^1.585
and then you get:
T(n) = O(T(1)^(n^1.585))
which is somewhat similar to the manipulations you've made in the second part of the draft.
So you can also leave it like this. But you cannot mess with the exponent. Changing the value of the exponent changes the big-O classification.

Finding the upper bound of a mathematical function (function analysis)

I am trying to understand Big-O notation through a book I have, which covers Big-O using functions, although I am a bit confused. The book says that f(n) is O(g(n)) where g(n) is an upper bound of f(n). So I understand that means g(n) gives the maximum rate of growth for f(n) at larger values of n,
and that there exists an n_0 beyond which c·g(n) (where c is some constant) bounds f(n).
But what I am confused about is these examples of finding Big-O for mathematical functions.
The book says: find the upper bound for f(n) = n^4 + 100n^2 + 50.
They then state that n^4 + 100n^2 + 50 <= 2n^4 (unsure why the 2n^4),
then they somehow find n_0 = 11 and c = 2. I understand why the big O is O(n^4), but I am just confused about the rest.
This is all discouraging as I don't understand but I feel like this is an important topic that I must understand.
If any one is curious the book is Data Structures and Algorithms Made Easy by Narasimha Karumanchi
Not sure if this post belongs here or on the math board.
Preparations
First, let's state, loosely, the definition of f being in O(g(n)) (note: O(g(n)) is a set of functions, so to be precise, we say that f is in O(...), rather than f(n) being in O(...)).
If a function f(n) is in O(g(n)), then c · g(n) is an upper bound on
f(n), for some constant c such that f(n) is always ≤ c · g(n),
for large enough n (i.e. , n ≥ n0 for some constant n0).
Hence, to show that f(n) is in O(g(n)), we need to find a set of constants (c, n0) that fulfils
f(n) < c · g(n), for all n ≥ n0, (+)
but this set is not unique. I.e., the problem of finding the constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist an infinite amount of different such pairs.
Showing that f ∈ O(n^4)
Now, let's proceed and look at the example that confused you:
Find an upper asymptotic bound for the function
f(n) = n^4 + 100n^2 + 50 (*)
One straight-forward approach is to express the lower-order terms in (*) in terms of the higher order terms, specifically, w.r.t. bounds (... < ...).
Hence, we see if we can find a lower bound on n such that the following holds
100n^2 + 50 ≤ n^4, for all n ≥ ???, (i)
We can easily find when equality holds in (i) by solving the equation
m = n^2, m > 0
m^2 - 100m - 50 = 0
(m - 50)^2 - 50^2 - 50 = 0
(m - 50)^2 = 2550
m = 50 ± sqrt(2550) = { m > 0, single root } ≈ 100.5
=> n ≈ { n > 0 } ≈ 10.025
Hence, (i) holds for n ≳ 10.025, but we'd much rather present this bound on n with a neat integer value, hence rounding up to 11:
100n^2 + 50 ≤ n^4, for all n ≥ 11, (ii)
From (ii) it's apparent that the following holds
f(n) = n^4 + 100n^2 + 50 ≤ n^4 + n^4 = 2 · n^4, for all n ≥ 11, (iii)
And this relation is exactly (+) with c = 2, n0 = 11 and g(n) = n^4, and hence we've shown that f ∈ O(n^4). Note again, however, that the choice of constants c and n0 is one of convenience and is not unique. Since we've shown that (+) holds for one set of constants (c, n0), it holds for infinitely many such choices (e.g., it naturally holds for c = 10 and n0 = 20, and so on).
