I got this problem in an interview recently:
Given a set of numbers X = [X_1, X_2, ..., X_n] where X_i <= 500 for 1 <= i <= n, increment the numbers (only positive increments) so that every element in the set shares a common divisor >= 2, and such that the sum of all increments is minimized.
For example, if X = [5, 7, 7, 7, 7] the new set would be X = [7, 7, 7, 7, 7], since you can add 2 to X_1 for a total increment of 2. X = [6, 8, 8, 8, 8] has a common divisor of 2 but is not optimal, since it requires a total increment of 5 (add 1 to the 5 and 1 to each of the four 7s).
I had a seemingly working solution (as in it passed all the test cases) that loops through the prime numbers < 500 and, for each X_i in X, finds the closest multiple of that prime greater than or equal to X_i.
function closest_multiple(x, y)
    return ceil(x/y)*y

min_increment = inf
for each prime_number < 500:
    total_increment = 0
    for each element X_i in X:
        total_increment += closest_multiple(X_i, prime_number) - X_i
    min_increment = min(min_increment, total_increment)
return min_increment
It's technically O(n), but is there a better way to solve this? Dynamic programming has been suggested to me, but I'm unsure how it would fit in here.
Constant-bounded entries case
When X_i is bounded by a constant, the best time you can achieve asymptotically is O(n), since it takes at least that long to read all of your inputs. There are some practical improvements:
Filter out duplicates, so you work with a list of (element, frequency) pairs.
Early stopping in your loop.
Faster computation of closest_multiple(x, p) - x. This is slightly hardware/language dependent, but a single integer modulus op is almost certainly faster than an int -> float cast, float division, a ceiling() call, and a multiplication on numbers of the same magnitude.
freq_counts <- Initialize-Counter(X)   // List of (element, freq) pairs
min_increment = inf
for each prime_number < 500:
    total_increment = 0
    for each pair (X_i, freq) in freq_counts:
        // The outer "% prime_number" makes the cost 0 when prime_number already divides X_i
        total_increment += ((prime_number - X_i % prime_number) % prime_number) * freq
        if total_increment >= min_increment: break
    min_increment = min(min_increment, total_increment)
return min_increment
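For concreteness, here's a minimal runnable Python version of that loop (the function name and the use of sympy.primerange are my additions, not part of the original pseudocode):

from collections import Counter
from sympy import primerange

def min_total_increment_bounded(X, bound=500):
    freq_counts = Counter(X)                 # (element, freq) pairs
    min_increment = float("inf")
    for p in primerange(2, bound):           # all primes < 500
        total = 0
        for x, freq in freq_counts.items():
            total += ((p - x % p) % p) * freq   # 0 if p already divides x
            if total >= min_increment:          # early stopping
                break
        min_increment = min(min_increment, total)
    return min_increment

print(min_total_increment_bounded([5, 7, 7, 7, 7]))  # 2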
Large entries case
With uniformly chosen random data, the answer almost always comes from using 2 as the divisor, and much larger prime divisors are vanishingly unlikely to win. However, let's solve for the worst case.
Here, let max(X) = M, so that our input size is O(n (log M)) bits. We want a solution that's sub-exponential in that input size, so finding all primes below M (or even sqrt(M)) is out of the question. We're looking for any prime that gives us a min-total-increment; we'll call such a prime a min-prime. After finding such a prime, we can get the min-total-increment in linear time. We'll use a factoring approach along with two observations.
Observation 1: The answer is always at most n, since the increment needed for the prime 2 to divide X_i is at most 1.
Observation 2: We're trying to find primes that divide X_i, or a number slightly larger than X_i, for a large fraction of our entries X_i. Let Consecutive-Product-Divisors[i] be the set of all primes dividing either X_i or X_i + 1, which I'll abbreviate CPD[i]. This is exactly the set of all primes which divide X_i * (1 + X_i).
(Obs. 2 Continued) If U is a known upper bound on our answer (here, at most n), and p is a min-prime for X, then p must divide either X_i or X_i + 1 for at least n - U/2 of our entries: every index where p divides neither X_i nor X_i + 1 contributes at least 2 to the total increment, so there can be at most U/2 such indices. Use frequency counts on the CPD sets to find all such primes.
Once you have a list of candidate primes (all min-primes are guaranteed to be in this list), you can test each one individually using your original algorithm. Since a number k has at most O(log k) distinct prime divisors, there are at most O(n log M) distinct primes dividing the numbers
[X_1*(1 + X_1), X_2*(1 + X_2), ..., X_n*(1 + X_n)], which bounds the size of our candidate list. It's possible to lower this bound with some more careful analysis, but it likely won't strongly affect the asymptotic runtime of the whole algorithm.
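A sketch of this candidate-generation step in Python; sympy.factorint is just my choice of factoring routine here (factoring is the bottleneck, and any factoring algorithm can be plugged in):

from collections import Counter
from sympy import factorint

def candidate_primes(X):
    # Observation 1 gives the upper bound U = n, so by Observation 2 a
    # min-prime must divide X_i or X_i + 1 for at least n - n/2 entries.
    n = len(X)
    counts = Counter()
    for x in X:
        # Primes dividing x*(x + 1) = primes dividing x or x + 1 (CPD[i]).
        counts.update(factorint(x * (x + 1)).keys())
    return [p for p, c in counts.items() if c >= n / 2]

Each surviving candidate is then tested with the linear-time cost computation from the bounded case.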
Improving the complexity for large entries
The complexity of this solution is hard to write in short form, because the bottleneck is factoring n numbers of maximum size M, plus O(n^2 log M) arithmetic (i.e. addition, subtraction, multiply, modulo) operations on numbers of maximum size M. That doesn't mean the runtime is unknown: If you select any integer factoring algorithm and large-integer-arithmetic algorithms, you can derive the runtime exactly. Unfortunately, because of factoring, the best known runtime of the above algorithm is super-polynomial (but sub-exponential).
How can we do better? I did find a more complicated solution, based on greatest common divisors (GCDs) and a dynamic-programming-like table, that runs in polynomial time (although likely much slower on non-astronomical-size inputs) since it doesn't rely on factoring.
The solution relies on the fact that at least one of the following two statements is true:
The number 2 is a min-prime for X, or
For at least one value of i, 1 <= i <= n there is an optimal solution where X_i remains unincremented, i.e. where one of the divisors of X_i produces a min-total-increment.
GCD-Based polynomial time algorithm
We can test 2 and all small primes quickly for their minimum costs. In fact, we'll test all primes p <= n, which we can do in polynomial time, and then factor these primes out of each X_i and its first n increments. This leads us to the following algorithm:
// Given: input list X = [X_1, X_2, ... X_n].
// Subroutine compute-min-cost(list A, int p) is
// just the inner loop of the above algorithm.

min_increment = inf;
for each prime p <= n:
    min_increment = min(min_increment, compute-min-cost(X, p));

// Initialize empty, 2-D, n x (n+1) list Y[n][n+1] of offset X-values
for all 1 <= i <= n:
    for all 0 <= j <= n:
        Y[i][j] <- X[i] + j;

for each prime p <= n:   // Factor out all small prime divisors from Y
    for each Y[i][j]:
        while Y[i][j] % p == 0:
            Y[i][j] /= p;

for all 1 <= i <= n:   // Loop 1
    // Y[i][0] is the test 'unincremented' entry.
    // Initialize empty hash-tables 'costs' and 'new_costs'.
    // Keys of the hash-tables are GCDs;
    // values are a running sum of increment-costs for that GCD.
    costs[Y[i][0]] = 0;
    for all 1 <= k <= n:   // Loop 2
        if i == k: continue;
        clear all entries from new_costs   // or reinitialize to empty
        for all 0 <= j <= n:   // Loop 3
            for each Key in costs:   // Loop 4
                g = GCD(Key, Y[k][j]);
                if g == 1: continue;
                if g is not a key in new_costs:
                    new_costs[g] = j + costs[Key];
                else:
                    new_costs[g] = min(new_costs[g], j + costs[Key]);
        swap(costs, new_costs);
    if costs is not empty:
        min_increment = min(min_increment, smallest Value in costs);

return min_increment;
The correctness of this solution follows from the previous two observations, and the (unproven, but straightforward) fact that there is a list
[X_1 + r_1, X_2 + r_2, ... , X_n + r_n] (with 0 <= r_i <= n for all i) whose GCD is a divisor with minimum increment cost.
The runtime of this solution is trickier: GCDs can easily be computed in O(log^2(M)) time, and the list of all primes up to n can be computed in low poly(n) time. From the loop structure of the algorithm, to prove a polynomial bound on the whole algorithm, it suffices to show that the maximum size of our 'costs' hash-table is polynomial in log M. This is where the 'factoring-out' of small primes comes into play. After iteration k of Loop 2, the entries in costs are (Key, Value) pairs, where each Key is the GCD of k + 1 elements:
our initial Y[i][0], and [Y[1][j_1], Y[2][j_2], ..., Y[k][j_k]] for some 0 <= j_l <= n. The Value for that Key is the minimum increment sum needed for this divisor (i.e. the sum of the j_l) over all possible choices of j_l.
There are at most O(log M) unique prime divisors of Y[i][0]. Each such prime divides at most one key in our 'costs' table at any time: since we've factored out all prime divisors up to n, any remaining prime divisor p > n can divide at most one of the n + 1 consecutive numbers in any [X_k, X_k + 1, ..., X_k + n]. This means the overall algorithm is polynomial, with a runtime below O(n^4 log^3(M)).
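For reference, here's a compact Python transcription of the whole algorithm (my own rendering of the pseudocode above; sympy.primerange supplies the small primes):

from math import gcd
from sympy import primerange

def min_total_increment(X):
    n = len(X)
    small_primes = list(primerange(2, n + 1))       # all primes p <= n
    best = min((sum((-x) % p for x in X) for p in small_primes),
               default=float("inf"))
    # Y[i][j] = X[i] + j with all small primes factored out.
    Y = [[x + j for j in range(n + 1)] for x in X]
    for p in small_primes:
        for row in Y:
            for j in range(n + 1):
                while row[j] % p == 0:
                    row[j] //= p
    for i in range(n):                 # Loop 1: X[i] stays unincremented
        # Key: GCD so far; Value: min increment sum. A key of 1 is useless.
        costs = {Y[i][0]: 0} if Y[i][0] > 1 else {}
        for k in range(n):             # Loop 2
            if k == i:
                continue
            new_costs = {}
            for j in range(n + 1):     # Loop 3: offset chosen for X[k]
                for key, c in costs.items():   # Loop 4
                    g = gcd(key, Y[k][j])
                    if g > 1 and (g not in new_costs or c + j < new_costs[g]):
                        new_costs[g] = c + j
            costs = new_costs
        if costs:
            best = min(best, min(costs.values()))
    return best

print(min_total_increment([5, 7, 7, 7, 7]))  # 2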
From here, the open questions are whether a simpler algorithm exists, and how much better than this bound you can achieve. You can definitely optimize this algorithm (including with the early stopping and frequency counts from before). It's also likely that better bounds on the number of large, distinct prime divisors of consecutive numbers would show this solution already beats the stated runtime, but a simplification of this solution would be very interesting.
I need to perform calculations on random batches of very large integers. I have a function that compares the numbers for certain properties and returns a value based on those properties. Since the batches and the numbers themselves can be very large, I want to speed up the process by utilizing the GPU.
Here is a short version of what I have running purely on the CPU now.
using Statistics

function check(M)
    val = 0
    # some code that calculates val based on M, e.g. the mean
    val = mean(M)
    return val
end

function distribution(N, n, exp) # N = batch size, n = number of batches, exp = exponent of the upper limit of the integers
    avg = 0
    M = zeros(BigInt, N)
    for i = 1:n
        M = rand(1:BigInt(10)^exp, N)
        avg += check(M)
    end
    avg /= n
    println(avg, ":", N)
end

# example
distribution(10^3, 10^6, 100)
I have briefly used CUDAnative in Julia but I don't know how to implement the BigInt calculations. That package would be preferred but others are fine as well. Any help is appreciated.
BigInts are CPU-only, since they are not implemented in Julia itself (they wrap the GMP C library and so cannot be compiled into GPU kernels); see [1].
I need to calculate the speed difference between performing a Montgomery multiplication (pages 602-603) with a word/register size of 32 vs. 64 bits.
So far, this is what I understand:
x and y are represented by multiple-word arrays of length n, where n = m/w and w is the register size (either 32 or 64).
The total number of single-precision multiplications in Montgomery multiplication is n*(2 + 2*n), where n is the length of the word arrays.
I will assume that a single-precision multiplication takes 1 clock cycle on each of the computers.
How can I put all this together to represent the number of clock cycles needed for a Montgomery multiplication on a computer with 32-bit or 64-bit registers?
The number of cycles for a multiple-precision Montgomery multiplication would indeed be n*(2 + 2*n) if all the intermediate single-precision operands and results could be kept in registers. For cryptographic operations this is hardly possible, since m is usually 1024 bits or larger: assuming 32-bit registers, computing x*y*R^-1 mod m would require 96 registers just to store the operands (3*(1024/32)). In fact you need to take memory accesses into account to answer this question.
A rewrite of the algorithm with memory accesses (assuming multiplications can be done in parallel with loads/stores):
For i from 0 to n: a_i <- 0
For i from 0 to (n - 1) do the following:
    Fetch a_0
    Fetch y_0
    Fetch x_i
    Compute u_i <- (a_0 + x_i*y_0)*m' mod b. Store u_i in a register
    c = 0   (computing A <- (A + x_i*y + u_i*m)/b)
    For j from 0 to (n - 1):
        Fetch a_j
        Fetch y_j
        Compute (c,v) = a_j + x_i*y_j + c. Fetch m_j
        Compute (c,v) = (c,v) + u_i*m_j. If j > 0: Store a_{j-1} <- v
    Store a_n <- c and a_{n-1} <- v
If A >= m then A <- A - m.
Return(A).
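To put concrete numbers on this, here's a small Python sketch that counts the operations in the listing above, assuming one cycle per single-precision multiplication and per memory access (a simplification; it also ignores the additions and the possible overlap of multiplications with loads/stores mentioned above):

def montgomery_op_counts(m_bits, w):
    n = m_bits // w                # words per operand
    mults = n * (2 + 2 * n)        # single-precision multiplications
    loads = n * (3 + 3 * n)        # 3 per outer iteration + 3 per inner step
    stores = n * (n + 1)           # a_{j-1} stores plus final a_n, a_{n-1}
    return mults, loads, stores

for w in (32, 64):
    mults, loads, stores = montgomery_op_counts(1024, w)
    print(f"w={w}: {mults} mults, {loads} loads, {stores} stores")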
Hope this helps.
OK, so this is an application of existing mathematical practice, but I can't really apply it to my case.
I have x of a currency to increase the level of a game object y at cost z.
z is calculated as cost(y.lvl) = c_1 * c_2^(y.lvl) / c_3, where the c's are constants.
I am seeking an efficient way to calculate how many times I can increase the level of y, given x. Currently I'm using a loop that does something like this:
double tempX = x;
int counter = 0;
while(tempX >= cost(y.lvl + counter)){
    tempX -= cost(y.lvl + counter);
    counter++;
}
The problem is that in some cases this loop has to iterate too many times to stay performant.
What I am looking for is essentially a function
int howManyCanBeBought(x, y.lvl), which calculates its result in a single pass instead of looping many times.
I've read something about transforming recurrences into generating functions and then into closed formulas, but I didn't get the math behind it. Is there an easy way to do it?
If I understand correctly, you're looking for the largest n such that:

    Σ_{i=0..n} (c_1/c_3) * c_2^(lvl+i) ≤ x

Dividing by the constant factor:

    Σ_{i=0..n} c_2^i ≤ (c_3 / (c_1 * c_2^lvl)) * x

Using the formula for the sum of a geometric series:

    (c_2^(n+1) - 1) / (c_2 - 1) ≤ (c_3 / (c_1 * c_2^lvl)) * x

And solving for the maximum integer:

    n = floor(log_{c_2}((c_3 * (c_2 - 1) / (c_1 * c_2^lvl)) * x + 1) - 1)
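In code, that closed form becomes a constant-time function. A Python sketch (the function name is mine, and the adjustment loops guard against floating-point error at the boundary; assumes c2 > 1):

import math

def how_many_can_be_bought(x, lvl, c1, c2, c3):
    # Largest n with sum_{i=0..n} c1*c2^(lvl+i)/c3 <= x; returns -1 if
    # even the first purchase is unaffordable. Purchases made = n + 1.
    arg = c3 * (c2 - 1) * x / (c1 * c2 ** lvl) + 1
    n = math.floor(math.log(arg, c2) - 1)
    def total(m):   # exact cost of buying levels lvl .. lvl + m
        return c1 * c2 ** lvl * (c2 ** (m + 1) - 1) / ((c2 - 1) * c3)
    while n >= 0 and total(n) > x:   # nudge down if the log overshot
        n -= 1
    while total(n + 1) <= x:         # nudge up if it undershot
        n += 1
    return n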
In a recent interview I was asked the following question: there is a function random2() which returns 0 or 1 with equal probability (0.5). Write implementations of random4() and random3() using random2().
It was easy to implement random4() like this:

if(random2())
    return random2();
return random2() + 2;
But I had difficulty with random3(). The only implementation I could come up with:
uint32_t sum = 0;
for (uint32_t i = 0; i != N; ++i)
    sum += random2();
return sum % 3;
This implementation of random3() is based on my intuition only. I'm not sure it is actually correct, because I can't mathematically prove its correctness. Can somebody help me with this question, please?
random3:
Not sure if this is the most efficient way, but here's my take:
x = random2 + 2*random2
What can happen:
0 + 0 = 0
0 + 2 = 2
1 + 0 = 1
1 + 2 = 3
The above are all the possibilities of what can happen, thus each has equal probability, so...
(p(x=c) is the probability that x = c)
p(x=0) = 0.25
p(x=1) = 0.25
p(x=2) = 0.25
p(x=3) = 0.25
Now while x = 3, we just keep generating another number, thus giving equal probability to 0, 1, 2. More technically, the probability mass at x = 3 is redistributed across the other outcomes with each retry, so p(x=3) tends to 0 and the probability of each of the others tends to 1/3.
Code:

do
    val = random2() + 2*random2();
while (val == 3);
return val;
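A quick empirical check of the rejection method in Python (random2 is stubbed with random.getrandbits, my stand-in for the interview's primitive):

import random
from collections import Counter

def random2():
    return random.getrandbits(1)

def random3():
    # Rejection sampling: throw away the outcome 3 and retry.
    while True:
        val = random2() + 2 * random2()
        if val != 3:
            return val

print(Counter(random3() for _ in range(300000)))
# Each of 0, 1, 2 should appear roughly 100000 times.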
random4:
Let's run through your code:
if(random2())
return random2();
return random2() + 2;
First call has 50% chance of 1 (true) => returns either 0 or 1 with 50% * 50% probability, thus 25% each
First call has 50% chance of 0 (false) => returns either 2 or 3 with 50% * 50% probability, thus 25% each
Thus your code generates 0,1,2,3 with equal probability.
Update inspired by e4e5f4's answer:
For a more deterministic answer than the one I provided above...
Generate some large number by calling random2 a bunch of times and mod the result by the desired number.
This won't be exactly the right probability for each, but it will be close.
So, building a 32-bit integer by calling random2 32 times, with target N = 3:
Total numbers: 4294967296
Number of x's such that x%3 = 1 or 2: 1431655765
Number of x's such that x%3 = 0: 1431655766
Probability of 1 or 2 (each): 0.33333333325572311878204345703125
Probability of 0: 0.3333333334885537624359130859375
So within 0.00000002% of the correct probability, seems pretty close.
Code:

sum = 0;
for (int i = 0; i < 32; i++)
    sum = 2*sum + random2();
return sum % N;
Note:
As pjr pointed out, this is, in general, far less efficient than the rejection method above. The probability of the rejection method reaching the same number of calls to random2 (i.e. 32, assuming this is the slowest operation) is 0.25^(32/2) = 0.0000000002 = 0.00000002%. Together with the fact that this method isn't exact, that strongly favours the rejection method. Lowering the bit count decreases the running time but increases the error, and it would probably need to be lowered quite a bit (thus reaching a high error) to approach the average running time of the rejection method.
It is useful to note that the above algorithm has a maximum running time, while the rejection method does not. If your random number generator is totally broken for some reason, the rejection method could keep generating the rejected number and run for quite a while or forever, but the for-loop above will run exactly 32 times regardless of what happens.
Using modulo (%) is not recommended because it introduces bias. The mapping is only exact if n is a power of 2; otherwise some kind of rejection is involved, as the other answer suggests.
Another generic approach is to emulate built-in PRNGs:
Generate 32 random2() calls and pack them into a 32-bit integer
Get a random number in the range (0, 1) by dividing it by the maximum integer value
Simply multiply this number by n (= 3, 4, ... 73, and so on) and floor it to get the desired output
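A minimal Python sketch of this approach (random2 is again a stub; dividing by 2^32 rather than the maximum 32-bit value keeps the result strictly below 1, so the floor stays in range):

import random

def random2():
    return random.getrandbits(1)   # stand-in for the given generator

def random_n(n, bits=32):
    # Pack fair bits into an integer, scale to [0, 1), stretch to [0, n).
    r = 0
    for _ in range(bits):
        r = (r << 1) | random2()
    # Note: still slightly biased unless n is a power of 2.
    return int(r / 2 ** bits * n)

print([random_n(3) for _ in range(10)])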