Easy dynamic programming recursive formula (UVa 147 coin change) - recursion

The problem is about coin change: "how many ways can you change 3, 5, 10 dollars if you have 5c, 10c, ..." coins.
http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=83
The problem has been solved many times on various blogs (solution here).
In DP, the hardest thing is to understand the relation between subproblems and derive the formula (optimal substructure).
I only quote the actual for loop that stores the number of ways into a 2D table, like the linked solution does:
for (int i = 2; i <= NCHANGES; ++i) {
    for (int m = 1; m <= MAX_AMOUNT; ++m) {
        if (m >= coins[i])
            n[i][m] = n[i-1][m] + n[i][m - coins[i]];
        else
            n[i][m] = n[i-1][m];
    }
}
=================================
The actual important code:
if (m >= coins[i])
    n[i][m] = n[i-1][m] + n[i][m - coins[i]];
else
    n[i][m] = n[i-1][m];
My thinking, by example:
(else case)
I have the amount 5 cents and 1 coin type to use: 5c. There is only 1 way: 5c = 1 * 5c
(store n[5][coin(5)]).
I have the amount 5c and 2 coin types to use: 5c and 10c. I can't use BOTH 5c and 10c, so I fall back to the 1 way of doing it (store 1 in the table for n[5][coin(5,10)]).
That's why, for this case, n[i][m] = n[i-1][m].
Can you explain the first if case, n[i][m] = n[i-1][m] + n[i][m - coins[i]]?

OK, I found it on a website covering the same problem.
The coin change recurrence:
a[i][j] = a[i-1][j]                  (d[i] > j)
(If the coin can't be used, then don't use it.)
a[i][j] = a[i-1][j] + a[i][j-d[i]]   (d[i] <= j)
(If the coin can be used, either don't use it OR use it: a[i-1][j] counts the ways that skip coin i entirely, and a[i][j-d[i]] counts the ways that use coin i at least once, since after spending one coin of value d[i] the same coin may still be reused.)
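For completeness, here is a minimal Python sketch of that recurrence, written with a rolling 1-D table instead of the 2-D one above (the coin values in the example call are just illustrative, not the exact UVa 147 denominations):

def count_ways(coins, amount):
    # ways[m] = number of ways to make amount m with the coins processed so far
    ways = [0] * (amount + 1)
    ways[0] = 1  # one way to make 0: use no coins
    for c in coins:
        for m in range(c, amount + 1):
            # "don't use coin c" is already stored in ways[m];
            # "use coin c at least once" adds ways[m - c]
            ways[m] += ways[m - c]
    return ways[amount]

print(count_ways([5, 10], 20))  # 3 ways: 5*4, 5*2+10, 10*2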

Related

Sherlock and Cost on Hackerrank

It's about this dynamic programming challenge.
If you have a hard time understanding the problem, see also AbhishekVermaIIT's post.
Basically, you get an array B as input and you construct array A. For this array A you need to maximize the sum of absolute(A[i] - A[i-1]) for i = 1 to N. How do you construct array A? For every element A[i] you can choose either the value 1 or B[i]. (As you will deduce from the problem description, any value strictly between these two never helps.)
And I came up with this recursive Java solution (without memoization):
// assumes static imports of java.lang.Math.max and java.lang.Math.abs
static int costHelper(int[] arr, int i) {
    if (i < 1) return 0;
    int q = max(abs(1 - arr[i-1]) + costHelper(arr, i-1),
                abs(arr[i] - arr[i-1]) + costHelper(arr, i-1));
    int[] arr1 = new int[i];
    for (int j = 0; j < arr1.length-1; j++) {
        arr1[j] = arr[j];
    }
    arr1[i-1] = 1;
    int r = max(abs(1 - 1) + costHelper(arr1, i-1),
                abs(arr[i] - 1) + costHelper(arr1, i-1));
    return max(q, r);
}

static int cost(int[] arr) {
    return costHelper(arr, arr.length-1);
}

public static void main(String[] args) {
    int[] arr = {55, 68, 31, 80, 57, 18, 34, 28, 76, 55};
    int result = cost(arr);
    System.out.println(result);
}
Basically, I start at the end of the array and check which choice maximizes the absolute difference between the last element and the one before it. There are 4 cases:
(1 - arr[i-1])
(arr[i] - arr[i-1])
(1 - 1) // I know, it is not necessary.
(arr[i] - 1)
For the 3rd and 4th case I construct a new array, one element smaller than the input array, with a 1 as its last element.
Now, the result for arr = 55 68 31 80 57 18 34 28 76 55 according to Hackerrank should be 508, but I get 564.
Since it has to be 508, I guess the chosen array should be 1 68 1 80 1 1 34 1 76 1.
For other arrays I get the right answer. For example:
79 6 40 68 68 16 40 63 93 49 91 --> 642 (OK)
100 2 100 2 100 --> 396 (OK)
I don't understand what is wrong with this algorithm.
I'm not sure exactly what's happening in your particular solution, but I suspect the issue is that the recursive function has only one dimension, i: we need a way to identify the best previous solution f(i-1) both when B_(i-1) was chosen and when 1 was chosen at that point, so that we can pick whichever is best with respect to f(i). (It might help if you added a description of your algorithm in words.)
Let's look at the brute-force dynamic program: let m[i][j1] represent the best sum-of-abs-diff in A[0..i] when A_i is j1. Then, generally:
m[i][j1] = max(abs(j1 - j0) + m[i-1][j0])
for j0 in [1..B_(i-1)] and j1 in [1..B_i]
Python code:
def cost(arr):
    if len(arr) == 1:
        return 0
    m = [[float('-inf')] * 101 for i in xrange(len(arr))]
    for i in xrange(1, len(arr)):
        for j0 in xrange(1, arr[i-1] + 1):
            for j1 in xrange(1, arr[i] + 1):
                m[i][j1] = max(m[i][j1], abs(j1 - j0) + (m[i-1][j0] if i > 1 else 0))
    return max(m[len(arr) - 1])
That works but times out since we are looping potentially 100*100*10^5 iterations.
I haven't thought through the proof for it, but, as you suggest, apparently we can choose only from either 1 or B_i for each A_i for an optimal solution. This allows us to choose between those directly in a significantly more efficient solution that won't time out:
def cost(arr):
    if len(arr) == 1:
        return 0
    m = [[float('-inf')] * 2 for i in xrange(len(arr))]
    for i in xrange(1, len(arr)):
        for j0 in [1, arr[i-1]]:
            for j1 in [1, arr[i]]:
                a_i = 0 if j1 == 1 else 1
                b_i = 0 if j0 == 1 else 1
                m[i][a_i] = max(m[i][a_i], abs(j1 - j0) + (m[i-1][b_i] if i > 1 else 0))
    return max(m[len(arr) - 1])
This is a bottom-up tabulation but we could easily convert it to a recursive one using the same idea.
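For illustration, here is a rough sketch of what that recursive (top-down, memoized) version could look like; the helper name and the use of functools.lru_cache are my own choices, not part of the original answer:

from functools import lru_cache

def cost(arr):
    if len(arr) == 1:
        return 0

    @lru_cache(maxsize=None)
    def best(i, choice):
        # choice 0 means A[i] = 1, choice 1 means A[i] = arr[i]
        if i == 0:
            return 0
        cur = 1 if choice == 0 else arr[i]
        return max(abs(cur - prev) + best(i - 1, prev_choice)
                   for prev_choice, prev in ((0, 1), (1, arr[i - 1])))

    n = len(arr)
    return max(best(n - 1, 0), best(n - 1, 1))

print(cost((55, 68, 31, 80, 57, 18, 34, 28, 76, 55)))  # 508, per the question above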
Here is the JavaScript code with memoization:
// dp is assumed to be an n x 2 array pre-filled with -1
function cost(B, n, val) {
    if (n == -1) {
        return 0;
    }
    let prev1 = 0, prev2 = 0;
    if (n != 0) {
        if (dp[n-1][0] == -1)
            dp[n-1][0] = cost(B, n-1, 1);
        if (dp[n-1][1] == -1)
            dp[n-1][1] = cost(B, n-1, B[n]);
        prev1 = dp[n-1][0];
        prev2 = dp[n-1][1];
    }
    prev1 = prev1 + Math.abs(val - 1);
    prev2 = prev2 + Math.abs(val - B[n]);
    return Math.max(prev1, prev2);
}
where B -> the given array, n -> total length, val -> 1 or B[n], the value chosen by the calling function.
Initial call -> Math.max(cost(B, n-2, 1), cost(B, n-2, B[n-1]));
BTW, this took me around 3 hrs; it could have been done much more easily with the iterative method. :p
// dp[i][0] is the best sum when a[i] = b[i], dp[i][1] when a[i] = 1
dp[i][0] = max(dp[i-1][0] + abs(b[i] - b[i-1]), dp[i-1][1] + abs(b[i] - 1));
dp[i][1] = max(dp[i-1][1] + abs(1 - 1),         dp[i-1][0] + abs(b[i-1] - 1));
Initially all the elements in dp have the value 0.
We know the optimal value at every i is either b[i] or 1, so the final answer is:
max(dp[n-1][0], dp[n-1][1])
dp[i][0] signifies a[i] = b[i] and dp[i][1] signifies a[i] = 1.
So at every i we take the maximum over dp[i-1][0] (previous element is b[i-1]) and dp[i-1][1] (previous element is 1).
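A minimal Python sketch of that recurrence (my own transcription of the two dp columns above, not code from the answer):

def cost(b):
    # dp0 = best sum so far with a[i] = b[i]; dp1 = best sum with a[i] = 1
    dp0 = dp1 = 0
    for i in range(1, len(b)):
        new_dp0 = max(dp0 + abs(b[i] - b[i-1]), dp1 + abs(b[i] - 1))
        new_dp1 = max(dp1, dp0 + abs(b[i-1] - 1))
        dp0, dp1 = new_dp0, new_dp1
    return max(dp0, dp1)

print(cost([55, 68, 31, 80, 57, 18, 34, 28, 76, 55]))  # 508
print(cost([100, 2, 100, 2, 100]))                     # 396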

Sum of combinations of numbers

I want to solve a mathematical problem in the fastest possible way.
I have the set of natural numbers from 1 to n, for example {1,2,3,4,5} with n=5, and I want to calculate a formula like this:
s = 1*2*3*4 + 1*2*3*5 + 1*2*4*5 + 1*3*4*5 + 2*3*4*5
As you can see, each term in the sum is a product of n-1 numbers from the set. For example, in (1*2*3*4), 5 is excluded, and in (1*2*3*5), 4 is excluded. I know some of the partial products are repeated; for example (1*2) appears in 3 of the terms. How can I solve this problem with the least number of multiplications?
Sorry for bad English.
Thanks.
Here is a way that does not "cheat" by replacing multiplication with repeated addition or by using division. The idea is to replace your expression with
1*2*3*4 + 5*(1*2*3 + 4*(1*2 + 3*(1 + 2)))
This used 9 multiplications for the numbers 1 through 5. In general I think the multiplication count would be one less than the (n-1)th triangular number, n * (n - 1) / 2 - 1. Here is Python code that stores intermediate factorial values to reduce the number of multiplications to just 6, or in general 2 * n - 4, and the addition count to the same (but half of them are just adding 1):
def f(n):
    fact = 1
    term = 2
    sum = 3
    for j in range(2, n):
        fact *= j
        term = (j + 1) * sum
        sum = fact + term
    return sum
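As a quick sanity check (my addition) against the worked example above:

print(f(5))  # 274, which matches 1*2*3*4 + 1*2*3*5 + 1*2*4*5 + 1*3*4*5 + 2*3*4*5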
The only way to find which algorithm is the fastest is to code all of them in one language, and run each using a timer.
The following would be the most straightforward answer.
from functools import reduce  # needed in Python 3; in Python 2, reduce is a builtin

def f(n):
    result = 0
    nList = [i+1 for i in range(n)]
    for i in range(len(nList)):
        result += reduce(lambda x, y: x*y, (nList[:i] + nList[i+1:]))
    return result
Walkthrough: use the reduce function to multiply the elements of each sublist of length n-1 and add the product to the variable result.
If you just want to minimise the number of multiplications, you can replace all the multiplications by additions, like this:
# Compute 1*2*...*n using only additions
def mult_all(n):
    if n == 1:
        return 1
    res = 0
    # add 1*2*...*(n-1) a total of n times
    for i in range(n):
        res += mult_all(n - 1)
    return res

# Compute the sum of 1*2*...*(i-1)*(i+1)*...*n over all i,
# as 1*2*...*(n-1) + (sum of 1*2*...*(i-1)*(i+1)*...*(n-1)) * n
def sum_of_mult_all_but_one(n):
    if n == 1:
        return 1  # the empty product; returning 0 here makes the recurrence undercount
    res = mult_all(n - 1)
    for i in range(n):
        res += sum_of_mult_all_but_one(n - 1)
    return res
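A quick check (my addition) against the n = 5 example from the question:

print(sum_of_mult_all_but_one(5))  # 274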
Here is an answer that works in JavaScript. It is not the fastest way because it is not optimized, but it should work if you just want to find the answer.
function combo(n) {
    var mult = 1;
    var sum = 0;
    for (var i = 1; i <= n; i++) {
        mult = 1;
        for (var j = 1; j <= n; j++) {
            if (j != i) {
                mult = mult * j;
            }
        }
        sum += mult;
    }
    return sum;
}
alert(combo(n)); // n must be defined by the caller, e.g. combo(5) === 274

Math Problem: Scale a graph so that it matches another

I have 2 tables of values and want to scale the first one so that it matches the 2nd one as well as possible. Both have the same length. If both are drawn as graphs in a diagram they should be as close to each other as possible. But I do not want quadratic weights, just simple linear (absolute-difference) weights.
My problem is that I have no idea how to actually compute the best scaling factor because of the Abs function.
Some pseudocode:
//given:
float[] table1 = ...;
float[] table2 = ...;
//wanted:
float factor = ???; // I have no idea how to compute this
float remainingDifference = 0;
for (int i = 0; i < length; i++)
{
    float scaledValue = table1[i] * factor;
    //Sum up the differences. I use the Abs function because negative differences are differences too.
    remainingDifference += Abs(scaledValue - table2[i]);
}
I want to compute the scaling factor so that the remainingDifference is minimal.
Simple linear weights are hard, like you said.
a_n = first sequence
b_n = second sequence
c = scaling factor
Your residual function is (sums are from i=1 to N, the number of points):
SUM( |a_i - c*b_i| )
Taking the derivative with respect to c yields:
d/dc SUM( |a_i - c*b_i| )
= SUM( -b_i * (a_i - c*b_i)/|a_i - c*b_i| )
Setting this to 0 and solving for c is hard. I don't think there's an analytic way of doing that. You may want to try https://math.stackexchange.com/ to see if they have any bright ideas.
However, if you work with quadratic weights, it becomes significantly simpler:
d/dc SUM( (a_i - c*b_i)^2 )
= SUM( 2*(a_i - c*b_i) * -b_i )
= -2 * SUM( a_i*b_i - c*b_i^2 ) = 0
=> SUM(a_i*b_i) - c*SUM(b_i^2) = 0
=> c = SUM(a_i*b_i) / SUM(b_i^2)
I strongly suggest the latter approach if you can.
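A tiny Python sketch of that closed form (my addition; the helper name is mine, and in the question's terms a plays the role of the target table and b the table being scaled):

def best_quadratic_scale(a, b):
    # least-squares scaling factor: minimizes sum((a_i - c*b_i)^2)
    return sum(x * y for x, y in zip(a, b)) / sum(y * y for y in b)

table2 = [1.0, 2.0, 3.0]
table1 = [0.6, 0.9, 1.6]
print(best_quadratic_scale(table2, table1))  # factor to multiply table1 by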
I would suggest trying some sort of variant on Newton Raphson.
Construct a function Diff(k) that looks at the difference in area between your two graphs between fixed markers A and B.
Mathematically, I guess it would be integral( x = A to B ){ f(x) - k * g(x) } dx.
Realistically you could just subtract the values: if you range from x = -10 to 10 and have a data point for f(i) and g(i) on each integer i in [-10, 10] (i.e. 21 data points), then you just take sum( i = -10 to 10 ){ f(i) - k * g(i) }.
Basically you would expect this function to look like a parabola: there will be an optimum k, deviating slightly from it in either direction will increase the overall area difference, and the bigger the deviation, the bigger the gap. So this should be a pretty smooth function (if you have a lot of data points).
You want to minimise Diff(k), i.e. find where its derivative d/dk Diff(k) = 0, so just do Newton-Raphson on this new function D'(k). Kick it off at k = 1 and it should zero in on a solution pretty fast.
That's probably going to give you an optimal computation time.
If you want something simpler, just start with some k1 and k2 whose Diff values lie on either side of 0. Say Diff(1.5) = -3 and Diff(2.9) = 7; then you would pick a k, say 3/10 of the way (10 = 7 - (-3)) between 1.5 and 2.9, and, depending on whether that yields a positive or negative value, use it as the new k1 or k2. Rinse and repeat.
In case anyone stumbles upon this in the future, here is some code (C++).
The trick is to first sort the samples by the scaling factor that would give a perfect fit for that pair of samples. Then, starting from both ends, iterate toward the factor that results in the minimum absolute deviation (L1 norm).
Everything except the sort runs in linear time => the runtime is O(n log n).
#include <algorithm>
#include <cmath>
#include <memory>

/*
 * Find x so that the sum over std::abs(pA[i]-pB[i]*x) from i=0 to (n-1) is minimal
 * Then return x
 */
float linearFit(const float* pA, const float* pB, int n)
{
    /*
     * An algebraic solution is not possible for the general case
     * => iterative algorithm
     */
    if (n < 0)
        throw "linearFit has invalid argument: expected n >= 0";
    if (n == 0)
        return 0; // If there is nothing to fit, any factor is a perfect fit (sum is always 0)
    if (n == 1)
        return pA[0] / pB[0]; // return x so that pA[0] = pB[0]*x

    // If you don't like this, use a std::vector :P
    std::unique_ptr<float[]> targetValues_(new float[n]);
    std::unique_ptr<int[]> indices_(new int[n]);

    // Get proper pointers:
    float* targetValues = targetValues_.get(); // The value for x that would cause pA[i] = pB[i]*x
    int* indices = indices_.get();             // Indices of useful (not nan and not infinity) target values

    // The code above guarantees n > 1, so it is safe to get these pointers:
    int m = 0; // Number of useful target values
    for (int i = 0; i < n; i++)
    {
        float a = pA[i];
        float b = pB[i];
        float targetValue = a / b;
        targetValues[i] = targetValue;
        if (std::isfinite(targetValue))
        {
            indices[m++] = i;
        }
    }
    if (m <= 0)
        return 0;
    if (m == 1)
        return targetValues[indices[0]]; // If there is only one target value, then it has to be the best one.

    // Sort the indices by target value
    std::sort(indices, indices + m, [&](int ia, int ib){
        return targetValues[ia] < targetValues[ib];
    });

    // Start from the extremes and meet at the optimal solution somewhere in the middle:
    int l = 0;
    int r = m - 1;
    // m >= 2 is guaranteed => l < r
    float penaltyFactorL = std::abs(pB[indices[l]]);
    float penaltyFactorR = std::abs(pB[indices[r]]);
    while (l < r)
    {
        if (l == r - 1 && penaltyFactorL == penaltyFactorR)
        {
            break;
        }
        if (penaltyFactorL < penaltyFactorR)
        {
            l++;
            if (l < r)
            {
                penaltyFactorL += std::abs(pB[indices[l]]);
            }
        }
        else
        {
            r--;
            if (l < r)
            {
                penaltyFactorR += std::abs(pB[indices[r]]);
            }
        }
    }

    // Return the best target value
    if (l == r)
        return targetValues[indices[l]];
    else
        return (targetValues[indices[l]] + targetValues[indices[r]]) * 0.5;
}

Divide by 10 using bit shifts?

Is it possible to divide an unsigned integer by 10 by using pure bit shifts, addition, subtraction and maybe multiply? Using a processor with very limited resources and slow divide.
Editor's note: this is not actually what compilers do, and gives the wrong answer for large positive integers ending with 9, starting with div10(1073741829) = 107374183 not 107374182. It is exact for smaller inputs, though, which may be sufficient for some uses.
Compilers (including MSVC) do use fixed-point multiplicative inverses for constant divisors, but they use a different magic constant and shift on the high-half result to get an exact result for all possible inputs, matching what the C abstract machine requires. See Granlund & Montgomery's paper on the algorithm.
See Why does GCC use multiplication by a strange number in implementing integer division? for examples of the actual x86 asm gcc, clang, MSVC, ICC, and other modern compilers make.
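As a rough illustration of the exact multiply-and-shift trick (my own sketch, not the compilers' actual output): for unsigned 32-bit inputs, the magic constant ceil(2^35 / 10) = 0xCCCCCCCD with a total shift of 35 gives the exact quotient for every 32-bit value:

def div10_u32(n):
    # n is assumed to fit in 32 unsigned bits; taking the high half of the
    # 64-bit product and shifting by 3 more (35 in total) equals n // 10 exactly
    return (n * 0xCCCCCCCD) >> 35

for n in (0, 9, 10, 99, 1073741829, 2**32 - 1):
    assert div10_u32(n) == n // 10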
This is a fast approximation that's inexact for large inputs
It's even faster than the exact division via multiply + right-shift that compilers use.
You can use the high half of a multiply result for divisions by small integral constants. Assume a 32-bit machine (code can be adjusted accordingly):
#include <stdint.h>

int32_t div10(int32_t dividend)
{
    int64_t invDivisor = 0x1999999A;
    return (int32_t) ((invDivisor * dividend) >> 32);
}
What's going on here is that we're multiplying by a close approximation of 1/10 * 2^32 and then dividing out the 2^32. This approach can be adapted to different divisors and different bit widths.
This works great for the ia32 architecture, since its IMUL instruction will put the 64-bit product into edx:eax, and the edx value will be the wanted value. Viz (assuming dividend is passed in eax and quotient returned in eax)
div10 proc
    mov  edx, 1999999Ah   ; load 1/10 * 2^32
    imul eax              ; edx:eax = dividend / 10 * 2^32
    mov  eax, edx         ; eax = dividend / 10
    ret
endp
Even on a machine with a slow multiply instruction, this will be faster than a software or even hardware divide.
Though the answers given so far match the actual question, they do not match the title. So here's a solution heavily inspired by Hacker's Delight that really uses only bit shifts.
unsigned divu10(unsigned n) {
    unsigned q, r;
    q = (n >> 1) + (n >> 2);
    q = q + (q >> 4);
    q = q + (q >> 8);
    q = q + (q >> 16);
    q = q >> 3;
    r = n - (((q << 2) + q) << 1);
    return q + (r > 9);
}
I think that this is the best solution for architectures that lack a multiply instruction.
Of course you can, if you can live with some loss in precision. If you know the range of your input values, you can come up with a bit shift and a multiplication that is exact.
Here are some examples of how you can divide by 10, 60, ..., as described in this blog on formatting time the fastest way possible.
temp = (ms * 205) >> 11; // 205/2048 is nearly the same as /10
To expand Alois's answer a bit, we can extend the suggested y = (x * 205) >> 11 to a few more multipliers/shifts:
y = (ms * 1) >> 3 // first error 8
y = (ms * 2) >> 4 // 8
y = (ms * 4) >> 5 // 8
y = (ms * 7) >> 6 // 19
y = (ms * 13) >> 7 // 69
y = (ms * 26) >> 8 // 69
y = (ms * 52) >> 9 // 69
y = (ms * 103) >> 10 // 179
y = (ms * 205) >> 11 // 1029
y = (ms * 410) >> 12 // 1029
y = (ms * 820) >> 13 // 1029
y = (ms * 1639) >> 14 // 2739
y = (ms * 3277) >> 15 // 16389
y = (ms * 6554) >> 16 // 16389
y = (ms * 13108) >> 17 // 16389
y = (ms * 26215) >> 18 // 43699
y = (ms * 52429) >> 19 // 262149
y = (ms * 104858) >> 20 // 262149
y = (ms * 209716) >> 21 // 262149
y = (ms * 419431) >> 22 // 699059
y = (ms * 838861) >> 23 // 4194309
y = (ms * 1677722) >> 24 // 4194309
y = (ms * 3355444) >> 25 // 4194309
y = (ms * 6710887) >> 26 // 11184819
y = (ms * 13421773) >> 27 // 67108869
Each line is a single, independent calculation, and you'll see your first "error"/incorrect result at the value shown in the comment. You're generally better off taking the smallest shift for a given error value, as this minimises the extra bits needed to store the intermediate value in the calculation; e.g. (x * 13) >> 7 is "better" than (x * 52) >> 9 as it needs two fewer bits of overhead, while both start to give wrong answers above 68.
If you want to calculate more of these, the following (Python) code can be used:
def mul_from_shift(shift):
    mid = 2**shift + 5.
    return int(round(mid / 10.))
And I did the obvious thing to calculate when this approximation starts to go wrong:
def first_err(mul, shift):
    i = 1
    while True:
        y = (i * mul) >> shift
        if y != i // 10:
            return i
        i += 1
(note that // is used for "integer" division, i.e. it truncates/rounds towards zero)
the reason for the "3/1" pattern in errors (i.e. 8 repeats 3 times followed by 9) seems to be due to the change in bases, i.e. log2(10) is ~3.32. if we plot the errors we get the following:
where the relative error is given by: mul_from_shift(shift) / (1<<shift) - 0.1
Considering Kuba Ober's response, here is another one in the same vein.
It uses iterative approximation of the result, but I wouldn't expect any surprising performance.
Let's say we have to find x where x = v / 10.
We'll use the inverse operation v = x * 10, because it has the nice property that when x = a + b, then x * 10 = a * 10 + b * 10.
Let's use x as the variable holding the best approximation of the result so far. When the search ends, x will hold the result. We set each bit b of x from the most significant to the least significant, one by one, and compare (x + b) * 10 with v. If it is smaller than or equal to v, then the bit b is set in x. To test the next bit, we simply shift b one position to the right (divide by two).
We can avoid the multiplication by 10 by holding x * 10 and b * 10 in other variables.
This yields the following algorithm to divide v by 10.
uint16_t x = 0, x10 = 0, b = 0x1000, b10 = 0xA000;
while (b != 0) {
    uint16_t t = x10 + b10;
    if (t <= v) {
        x10 = t;
        x |= b;
    }
    b10 >>= 1;
    b >>= 1;
}
// x = v / 10
Edit: to get the algorithm of Kuba Ober, which avoids the need for the variable x10, we can subtract b10 from v instead. In this case x10 isn't needed anymore. The algorithm becomes
uint16_t x = 0, b = 0x1000, b10 = 0xA000;
while (b != 0) {
    if (b10 <= v) {
        v -= b10;
        x |= b;
    }
    b10 >>= 1;
    b >>= 1;
}
// x = v / 10
The loop may be unrolled, and the different values of b and b10 may be precomputed as constants.
On architectures that can only shift one place at a time, a series of explicit comparisons against decreasing powers of two multiplied by 10 might work better than the solution from Hacker's Delight. Assuming a 16-bit dividend:
uint16_t div10(uint16_t dividend) {
    uint16_t quotient = 0;
#define div10_step(n) \
    do { if (dividend >= (n*10)) { quotient += n; dividend -= n*10; } } while (0)
    div10_step(0x1000);
    div10_step(0x0800);
    div10_step(0x0400);
    div10_step(0x0200);
    div10_step(0x0100);
    div10_step(0x0080);
    div10_step(0x0040);
    div10_step(0x0020);
    div10_step(0x0010);
    div10_step(0x0008);
    div10_step(0x0004);
    div10_step(0x0002);
    div10_step(0x0001);
#undef div10_step
    if (dividend >= 5) ++quotient; // round the result (optional)
    return quotient;
}
Well, division is just repeated subtraction, so yes. Shift right by 1 (divide by 2). Now subtract 5 from the result, counting the number of times you do the subtraction until the value is less than 5. The result is the number of subtractions you did. Oh, and dividing is probably going to be faster.
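A rough Python sketch of that idea (my addition); note that floor(floor(v/2)/5) equals floor(v/10) for non-negative integers, so the repeated subtraction gives the exact quotient:

def div10(v):
    v >>= 1          # divide by 2 with a shift
    q = 0
    while v >= 5:    # divide by 5 via repeated subtraction
        v -= 5
        q += 1
    return q

assert all(div10(v) == v // 10 for v in range(1000))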
A hybrid strategy of shift right then divide by 5 using the normal division might get you a performance improvement if the logic in the divider doesn't already do this for you.
I've designed a new method in AVR assembly, with lsr/ror and sub/sbc only. It divides by 8, then subtracts the number divided by 64 and 128, then subtracts the 1,024th and the 2,048th parts, and so on. It works very reliably (including exact rounding) and quickly (370 microseconds at 1 MHz).
The source code is here for 16-bit-numbers:
http://www.avr-asm-tutorial.net/avr_en/beginner/DIV10/div10_16rd.asm
The page that comments this source code is here:
http://www.avr-asm-tutorial.net/avr_en/beginner/DIV10/DIV10.html
I hope that it helps, even though the question is ten years old.
brgs, gsc
The code from elemakil's comments can be found here: https://doc.lagout.org/security/Hackers%20Delight.pdf, page 233, "Unsigned divide by 10 [and 11]."

OR-multiplication on big integers

Multiplication of two n-bit numbers A and B can be understood as a sum of shifts:
(A << i1) + (A << i2) + ...
where i1, i2, ... are the positions of the bits that are set to 1 in B.
Now let's replace PLUS with OR to get the new operation I actually need:
(A << i1) | (A << i2) | ...
This operation is quite similar to regular multiplication, for which there exist many faster algorithms (Schönhage–Strassen, for example).
Is there a similar algorithm for the operation I presented here?
The size of the numbers is 6000 bits.
edit:
For some reason I have no link/button to post comments (any idea why?), so I will edit my question instead.
I am indeed looking for a faster-than-O(n^2) algorithm for the operation defined above.
And yes, I am aware that it is not ordinary multiplication.
Is there a similar algorithm? I think probably not.
Is there some way to speed things up beyond O(n^2)? Possibly. If you consider a number A to be the analogue of the polynomial A(x) = Σ a_n x^n, where the a_n are the binary digits of A, then your operation with bitwise ORs (let's call it A ⊕ B) can be expressed as follows, where "⇔" means "analogue":
A ⇔ A(x) = Σ a_n x^n
B ⇔ B(x) = Σ b_n x^n
C = A ⊕ B ⇔ C(x) = f(A(x)·B(x)) = f(V(x)), where f(V(x)) = f(Σ v_n x^n) = Σ u(v_n) x^n with u(v_n) = 0 if v_n = 0 and u(v_n) = 1 otherwise.
Basically you are doing the equivalent of taking two polynomials and multiplying them together, then identifying all the nonzero terms. From a bit-string standpoint, this means treating the bitstring as an array of samples of zeros or ones, convolving the two arrays, and collapsing the resulting samples that are nonzero. There are fast convolution algorithms that are O(n log n), using FFTs for instance, and the "collapsing" step here is O(n)... but somehow I wonder if the O(n log n) evaluation of fast convolution treats something (like multiplication of large integers) as O(1) so you wouldn't actually get a faster algorithm. Either that, or the constants for orders of growth are so large that you'd have to have thousands of bits before you got any speed advantage. ORing is so simple.
edit: there appears to be something called "binary convolution" (see this book for example) that sounds awfully relevant here, but I can't find any good links to the theory behind it and whether there are fast algorithms.
edit 2: maybe the term is "logical convolution" or "bitwise convolution"... here's a page from CPAN (bleah!) talking a little about it along with Walsh and Hadamard transforms which are kind of the bitwise equivalent to Fourier transforms... hmm, no, that seems to be the analog for XOR rather than OR.
You can do this in O(#1-bits in A * #1-bits in B) time:
a_bitnums = set(x for x in range(A.bit_length()) if (1 << x) & A)
b_bitnums = set(x for x in range(B.bit_length()) if (1 << x) & B)
c = 0
for a_bit in a_bitnums:
    for b_bit in b_bitnums:
        c |= 1 << (a_bit + b_bit)
This might be worthwhile if A and B are sparse in the number
of 1 bits present.
I presume you are asking for the name of the additive technique you have given, when you write "Is there a similar algorithm for the operation I presented here?"...
Have you looked at the peasant multiplication technique?
Please read the Wikipedia description if you do not understand the 3rd column in the example below.
 B  X   A
27  X  15 : 1
13     30 : 1
 6     60 : 0
 3    120 : 1
 1    240 : 1
B is 27 == binary form 11011b
27 x 15 = 15 + 30 + 120 + 240
        = 15<<0 + 15<<1 + 15<<3 + 15<<4
        = 405
Sound familiar?
Here is your algorithm:
Choose the smaller number as your A
Initialize C as your result area
while B is not zero,
    if the lsb of B is 1, add A to C
    left shift A once
    right shift B once
C has your multiplication result (unless you rolled over sizeof C)
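A small Python sketch (my addition) of the same loop with the addition replaced by OR, i.e. the operation the question actually asks about:

def or_multiply(a, b):
    # same peasant-style loop, but combining the shifted copies of a with OR
    c = 0
    while b:
        if b & 1:
            c |= a
        a <<= 1
        b >>= 1
    return c

print(bin(or_multiply(0b10011, 0b10001)))  # 0b100110011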
Update If you are trying to get a fast algorithm for the shift and OR operation across 6000 bits,
there might actually be one. I'll think a little more on that.
It would appear like 'blurring' one number over the other. Interesting.
A rather crude example here,
110000011 X 1010101 would look like
      110000011
    110000011
  110000011
110000011
---------------
111111111111111
The number of 1s in the two numbers will decide the amount of blurring towards a number with all its bits set.
Wonder what you want to do with it...
Update 2: This is the nature of the shift+OR operation with two 6000-bit numbers.
The result will be about 12000 bits, of course.
The operation can be done with two bit streams, but it need not be done in its entirety.
The 'middle' part of the 12000-bit stream will almost certainly be all 1s (provided both numbers are non-zero).
The problem will be in identifying the depth to which we need to process this operation to get both ends of the 12000-bit stream.
The pattern at the two ends of the stream will depend on the longest runs of consecutive 1s present in both numbers.
I have not yet got to a clean algorithm for this. I have updated this for anyone else wanting to recheck or go further from here. Also, describing why you need such an operation might motivate further interest :-)
The best I could come up with is to use a fast out in the looping logic. Combined with the possibility of using the non-zero approach described by themis, you can answer your question by inspecting less than 2% of the N^2 problem.
Below is some code that gives the timing for numbers that are between 80% and 99% zero.
When the numbers get to around 88% zero, using themis' approach becomes the better choice (it was not coded in the sample below, though).
This is not a highly theoretical solution, but it is practical.
OK, here is some "theory" of the problem space:
Basically, each bit for X (the output) is the OR summation of the bits on the diagonal of a grid constructed by having the bits of A along the top (MSB to LSB left to right) and the bits of B along the side (MSB to LSB from top to bottom). Since the bit of X is 1 if any on the diagonal is 1, you can perform an early out on the cell traversal.
The code below does this and shows that even for numbers that are ~87% zero, you only have to check ~2% of the cells. For more dense (more 1's) numbers, that percentage drops even more.
In other words, I would not worry about tricky algorithms and just do some efficient logic checking. I think the trick is to look at the bits of your output as the diagonals of the grid, as opposed to the bits of A shift-ORed with the bits of B. The trickiest thing in this case is keeping track of the bits you can look at in A and B and how to index them properly.
Hopefully this makes sense. Let me know if I need to explain this a bit further (or if you find any problems with this approach).
NOTE: If we knew your problem space a bit better, we could optimize the algorithm accordingly. If your numbers are mostly non-zero, then this approach is better than themis', since his would result in more computation and storage space being needed (sizeof(int) * NNZ).
NOTE 2: This assumes the data is basically bits, and I am using .NET's BitArray to store and access the data. I don't think this would cause any major headaches when translated to other languages. The basic idea still applies.
using System;
using System.Collections;

namespace BigIntegerOr
{
    class Program
    {
        private static Random r = new Random();

        private static BitArray WeightedToZeroes(int size, double pctZero, out int nnz)
        {
            nnz = 0;
            BitArray ba = new BitArray(size);
            for (int i = 0; i < size; i++)
            {
                ba[i] = (r.NextDouble() < pctZero) ? false : true;
                if (ba[i]) nnz++;
            }
            return ba;
        }

        static void Main(string[] args)
        {
            // make sure there are enough bytes to hold the 6000 bits
            int size = (6000 + 7) / 8;
            int bits = size * 8;
            Console.WriteLine("PCT ZERO\tSECONDS\t\tPCT CELLS\tTOTAL CELLS\tNNZ APPROACH");
            for (double pctZero = 0.8; pctZero < 1.0; pctZero += 0.01)
            {
                // fill the "BigInts"
                int nnzA, nnzB;
                BitArray a = WeightedToZeroes(bits, pctZero, out nnzA);
                BitArray b = WeightedToZeroes(bits, pctZero, out nnzB);

                // this is the answer "BigInt" that is at most twice the size minus 1
                int xSize = bits * 2 - 1;
                BitArray x = new BitArray(xSize);

                int LSB, MSB;
                LSB = MSB = bits - 1;

                // stats
                long cells = 0;
                DateTime start = DateTime.Now;

                for (int i = 0; i < xSize; i++)
                {
                    // compare using the diagonals
                    for (int bit = LSB; bit < MSB; bit++)
                    {
                        cells++;
                        x[i] |= (b[MSB - bit] && a[bit]);
                        if (x[i]) break;
                    }
                    // update the window over the bits
                    if (LSB == 0)
                    {
                        MSB--;
                    }
                    else
                    {
                        LSB--;
                    }
                    //Console.Write(".");
                }

                // stats
                TimeSpan elapsed = DateTime.Now.Subtract(start);
                double pctCells = (cells * 100.0) / (bits * bits);
                Console.WriteLine(pctZero.ToString("p") + "\t\t" + elapsed.TotalSeconds.ToString("00.000") + "\t\t" +
                    pctCells.ToString("00.00") + "\t\t" + cells.ToString("00000000") + "\t" + (nnzA * nnzB).ToString("00000000"));
            }
            Console.ReadLine();
        }
    }
}
Just use any FFT polynomial multiplication algorithm and transform all resulting coefficients that are greater than or equal to 1 into 1.
Example:
10011 * 10001
[1 x^4 + 0 x^3 + 0 x^2 + 1 x^1 + 1 x^0] * [1 x^4 + 0 x^3 + 0 x^2 + 0 x^1 + 1 x^0]
== [1 x^8 + 0 x^7 + 0 x^6 + 1 x^5 + 2 x^4 + 0 x^3 + 0 x^2 + 1 x^1 + 1 x^0]
-> [1 x^8 + 0 x^7 + 0 x^6 + 1 x^5 + 1 x^4 + 0 x^3 + 0 x^2 + 1 x^1 + 1 x^0]
-> 100110011
For an example of the algorithm, check:
http://www.cs.pitt.edu/~kirk/cs1501/animations/FFT.html
BTW, it is of linearithmic complexity, i.e., O(n log(n))
Also see:
http://everything2.com/title/Multiplication%2520using%2520the%2520Fast%2520Fourier%2520Transform
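A hedged Python sketch of that idea (my addition, using NumPy's FFT rather than the linked animation's code): multiply the bit-polynomials via FFT, round, and clamp every nonzero coefficient to 1:

import numpy as np

def or_multiply_fft(a_bits, b_bits):
    """a_bits, b_bits: '0'/'1' strings, most significant bit first."""
    a = np.array([int(c) for c in reversed(a_bits)], dtype=float)
    b = np.array([int(c) for c in reversed(b_bits)], dtype=float)
    out_len = len(a) + len(b) - 1
    size = 1
    while size < out_len:
        size *= 2
    # polynomial multiplication = convolution, done here via FFT in O(n log n)
    coeffs = np.fft.irfft(np.fft.rfft(a, size) * np.fft.rfft(b, size), size)
    coeffs = np.rint(coeffs[:out_len]).astype(int)
    # clamp every coefficient >= 1 down to 1, then read the bits back out
    return ''.join('1' if c >= 1 else '0' for c in reversed(coeffs))

print(or_multiply_fft('10011', '10001'))  # 100110011, as in the example above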

Resources