Determining the big Oh for (n-1)+(n-1)

I have been trying to get my head around this particular complexity computation, but everything I read about this type of recursion tells me it is of type big O(2^n). However, if I add a counter to the code and check how many times it iterates for a given n, it seems to follow the curve of 4^n instead. Maybe I just misunderstood, as I placed a count++; inside the scope.
Is this not of type big O(2^n)?
public int test(int n)
{
    if (n == 0)
        return 0;
    else
        return test(n - 1) + test(n - 1);
}
I would appreciate any hints or explanation on this! I'm completely new to this complexity calculation and this one has thrown me off track.
//Regards

int test(int n)
{
    printf("%d\n", n);
    if (n == 0) {
        return 0;
    }
    else {
        return test(n - 1) + test(n - 1);
    }
}
With a printout at the top of the function, running test(8) and counting the number of times each n is printed yields this output, which clearly shows 2^n growth.
$ ./test | sort | uniq -c
256 0
128 1
64 2
32 3
16 4
8 5
4 6
2 7
1 8
(uniq -c counts the number of times each line occurs. 0 is printed 256 times, 1 128 times, etc.)
Perhaps you mean you got a result of O(2^(n+1)), rather than O(4^n)? If you add up all of these numbers you'll get 511, which for n=8 is 2^(n+1) - 1.
If that's what you meant, then that's fine: O(2^(n+1)) = O(2 * 2^n) = O(2^n)

First off: the 'else' statement is redundant, since the if already returns when it evaluates to true.
On topic: every call spawns 2 further calls, which spawn 2 calls themselves, etc. etc. As such, for n=1 the function is called 2 times, plus the originating call (3 in total). For n=2 there are 4 leaf calls plus 2 plus 1 (7 in total), then 15, then 31, etc. In general that is 2^(n+1) - 1 calls. The complexity is therefore clearly 2^n, since the constant factor is absorbed by the exponential.
I suspect your counter wasn't properly reset between calls.

Let x(n) be the number of calls of test that reach the base case.
x(0) = 1
x(n) = 2 * x(n - 1) = 2 * 2 * x(n - 2) = 2 * 2 * ... * 2
There is a total of n twos, hence 2^n base-case calls (and 2^(n+1) - 1 calls overall, which is still O(2^n)).
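If you want to verify the count empirically, here is a quick instrumented sketch (Python, my own addition, not from the answers above) that tallies every call:
def test(n, counter):
    counter[0] += 1  # count every call, including the originating one
    if n == 0:
        return 0
    return test(n - 1, counter) + test(n - 1, counter)

for n in range(9):
    calls = [0]
    test(n, calls)
    print(n, calls[0])  # prints 2**(n+1) - 1 calls, which is O(2**n)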

The complexity T(n) of this function can easily be shown to equal c + 2*T(n-1). The recurrence given by
T(0) = 0
T(n) = c + 2*T(n-1)
has as its solution c*(2^n - 1). It's O(2^n).
Now, if you take the input size of your function to be m = lg n, as might be appropriate in this scenario (the number of bits needed to represent n, the true input size), then the running time is O(2^(2^m)), doubly exponential in m.

Google Foobar Fuel Injection Perfection

Problem:
Fuel Injection Perfection
Commander Lambda has asked for your help to refine the automatic quantum antimatter fuel injection system for her LAMBCHOP doomsday device. It's a great chance for you to get a closer look at the LAMBCHOP - and maybe sneak in a bit of sabotage while you're at it - so you took the job gladly.
Quantum antimatter fuel comes in small pellets, which is convenient since the many moving parts of the LAMBCHOP each need to be fed fuel one pellet at a time. However, minions dump pellets in bulk into the fuel intake. You need to figure out the most efficient way to sort and shift the pellets down to a single pellet at a time.
The fuel control mechanisms have three operations:
1. Add one fuel pellet
2. Remove one fuel pellet
3. Divide the entire group of fuel pellets by 2 (due to the destructive energy released when a quantum antimatter pellet is cut in half, the safety controls will only allow this to happen if there is an even number of pellets)
Write a function called solution(n) which takes a positive integer as a string and returns the minimum number of operations needed to transform the number of pellets to 1. The fuel intake control panel can only display a number up to 309 digits long, so there won't ever be more pellets than you can express in that many digits.
For example:
solution(4) returns 2: 4 -> 2 -> 1
solution(15) returns 5: 15 -> 16 -> 8 -> 4 -> 2 -> 1
Test cases
Inputs: (string) n = "4" Output: (int) 2
Inputs: (string) n = "15" Output: (int) 5
my code:
def solution(n):
    n = int(n)
    if n == 2:
        return 1
    if n % 2 != 0:
        return min(solution(n + 1), solution(n - 1)) + 1
    else:
        return solution(int(n / 2)) + 1
This is the solution that I came up with; it passes 4 out of 10 of the test cases. It seems to be working fine, so I'm wondering if the failures are because of the extensive runtime. I thought of applying memoization, but I'm not sure how to do it (or if it is even possible). Any help would be greatly appreciated :)
There are several issues to consider:
First, you don't handle the n == "1" case properly (operations = 0).
Next, by default, Python has a limit of 1000 recursions. If we compute the log2 of a 309 digit number, we expect to make a minimum of 1025 divisions to reach 1. And if each of those returns an odd result, we'd need to triple that to 3075 recursive operations. So, we need to bump up Python's recursion limit.
Finally, for each of those divisions that does return an odd value, we'll be spawning two recursive division trees (+1 and -1). These trees will not only increase the number of recursions, but can also be highly redundant. Which is where memoization comes in:
import sys
from functools import lru_cache

sys.setrecursionlimit(3333)  # estimated by trial and error

@lru_cache()
def solution(n):
    n = int(n)
    if n <= 2:
        return n - 1
    if n % 2 == 0:
        return solution(n // 2) + 1
    return min(solution(n + 1), solution(n - 1)) + 1

print(solution("4"))
print(solution("15"))
print(solution(str(10**309 - 1)))
OUTPUT
> time python3 test.py
2
5
1278
0.043u 0.010s 0:00.05 100.0% 0+0k 0+0io 0pf+0w
>
So, bottom line is handle "1", increase your recursion limit, and add memoization. Then you should be able to solve this problem easily.
There are more memory- and runtime-efficient ways to solve the problem, which is what Google is testing for with their constraints. Every time you recurse into a function, you put another call on the stack, or 2 calls when you recurse twice per call. While it may seem basic, a while loop was a lot faster for me.
Think of the number in binary: whenever you have a streak of 1s longer than 1 at the LSB end of the number, it makes sense to add 1 (which flips that streak to all 0s and adds one bit to the overall length), then shift right until you find another 1 in the LSB position. You can solve it in a fixed block of memory in O(n) (n being the number of bits) using just a while loop; a worked trace follows.
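For example, take n = 15 = 0b1111, a streak of four 1s. Adding 1 gives 0b10000, and four right shifts then reach 1: five operations in total, matching solution("15") = 5 above, whereas subtracting 1 first would cost at least six.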
If you don't want or can't use functools, you can build your own cache this way:
cache = {}

def solution_rec(n):
    n = int(n)
    if n in cache:
        return cache[n]
    if n <= 1:
        result = 0
    elif n == 2:
        result = 1
    elif n % 2 == 0:
        result = solution_rec(n // 2) + 1
    else:
        result = min(solution_rec(n + 1), solution_rec(n - 1)) + 1
    cache[n] = result
    return result
However, even if it runs much faster and makes far fewer recursive calls, it is still too many recursive calls for Python's default configuration if you test the 309-digit limit.
It works if you set sys.setrecursionlimit to 1562.
An implementation of @rreagan3's solution, with the exception that an input of 3 should lead to a subtraction rather than an addition, even though 3 has a streak of 1s on the LSB side:
def solution(n):
    n = int(n)
    count = 0
    while n > 1:
        if n & 1 == 0:
            n >>= 1
        elif n & 2 and n != 3:
            n += 1
        else:
            n -= 1  # can also be: n &= -2
        count += 1
    return count
Demo: https://replit.com/@blhsing/SlateblueVeneratedFactor

Dynamic Programming: Child running up a staircase

I'm starting to practice Dynamic Programming and I just can't wrap my head around this question:
Question:
A child is running up a staircase with n steps and can hop either 1 step, 2 steps, or 3 steps at a time. Implement a method to count how many possible ways the child can run up the stairs.
The solution from the cracking the coding interview book is like this:
"If we thought about all the paths to the nth step, we could just build them off the paths to the three previous steps. We can get up to the nth stop by any of the following:
Going to the (n-1) step and hopping 1 step
Going to the (n-2) step and hopping 2 steps
Going to the (n-3) step and hopping 3 steps"
Therefore, to find the solution you just add the number of these paths together!
That's what loses me! Why isn't the answer like this: add the number of those paths, then add 3? Since if you are on step n-1, n-2, or n-3, there are 3 ways to get to the nth step? I understand that if you write down the answers for the first 4 base cases (assuming that n=0 returns 1), you can see the Fibonacci-like pattern. But you may also not see it, so it's difficult.
And then they came up with this code:
public static int countWaysDP(int n, int[] map) {
    if (n < 0)
        return 0;
    else if (n == 0)
        return 1;
    else if (map[n] > -1)
        return map[n];
    else {
        map[n] = countWaysDP(n - 1, map) + countWaysDP(n - 2, map) + countWaysDP(n - 3, map);
        return map[n];
    }
}
So my second question: how does it return 1 when n == 0? Even if I accept that fact, I still can't figure out a way to solve it if I return 0 when n == 1.
Hope this makes sense.
Thank you
Here is how I wrapped my head around this-
From the book -
On the very last hop, up to the nth step, the child could have
done either a single, double, or triple step hop. That is, the last
move might have been a single step hop from step n-1, a double
step hop from step n-2, or a triple step hop from n-3. The
total number of ways of reaching the last step is therefore the sum of
the number of ways of reaching each of the last three steps
You are correctly contemplating -
Why isn't the answer like this: add number of those paths then add 3 ?
Since if you are on step n-1 or n-2 or n-3, there are 3 ways to get
the nth step?
The problem with such a base case is that it will be applicable only if n >= 3. You clearly will not add 3 if there are only 2 steps.
Let's break down the individual cases and understand what exactly is the base case here.
n=0
There are no stairs to climb.
Total number of ways = 0
n=1
Total number of ways = 1StepHop from (n-1)
Number of ways to do 1StepHop from Step 0(n-1) = 1
Total number of ways = 1
n=2
Total number of ways = 2StepHop from (n-2) + 1StepHop from (n-1)
Number of ways to do 2StepHop to reach Step 2 from Step 0(n-2) = 1
Number of ways to do 1StepHop to reach Step 2 from Step 1(n-1) = 1 (Previous answer for n=1)
Total number of ways = 1 + 1 = 2
n=3
Total number of ways = 3StepHop from (n-3) + 2StepHop from (n-2) + 1StepHop from (n-1)
Number of ways to do 3StepHop to reach Step 3 from Step 0 (n-3) = 1
Number of ways to do 2StepHop to reach Step 3 from Step 1 (n-2) = 1 (from the previous answer for n=1)
Number of ways to do 1StepHop to reach Step 3 from Step 2 (n-1) = 2 (from the previous answer for n=2)
Total number of ways = 1 + 1 + 2 = 4
Observation -
As you can see from above, we are correctly accounting for the last step in each case. Adding one for each of -> 1StepHop from n-1, 2StepHop from n-2 and 3StepHop from n-3.
Now looking at the code, the case where we return 1 if n==0 is a bit counter-intuitive, since we already saw that the answer should be 0 if n==0:
public static int countWaysDP(int n, int[] map) {
    if (n < 0)
        return 0;
    else if (n == 0)
        return 1;    // <------------- this case is counter-intuitive
    else if (map[n] > -1)
        return map[n];
    else {
        map[n] = countWaysDP(n - 1, map) + countWaysDP(n - 2, map) + countWaysDP(n - 3, map);
        return map[n];
    }
}
From the observation, you can see that this counter-intuitive case of n==0 is actually the one accounting for the final step: the 1StepHop from n-1, the 2StepHop from n-2, or the 3StepHop from n-3.
So hitting n==0 case makes sense only during recursion - which will happen only when the initial value of n is greater than 0.
A more complete solution to this problem may have a driver method which handles that case outside of the core recursive algorithm -
int countWays(int n) {
    if (n <= 0) return 0;
    int[] map = new int[n + 1];
    for (int i = 0; i < n + 1; i++) {
        map[i] = -1;
    }
    return countWaysDP(n, map);
}
Hope this is helpful.
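If it helps to see the same recurrence without recursion, here is a bottom-up sketch (Python, my own illustration, not from the book), where the counter-intuitive base case shows up as ways[0] = 1:
def count_ways(n):
    if n <= 0:
        return 0
    ways = [0] * (n + 1)
    ways[0] = 1  # the "empty path": one way to be standing at the bottom
    for i in range(1, n + 1):
        ways[i] = ways[i - 1]       # 1-step hop from i-1
        if i >= 2:
            ways[i] += ways[i - 2]  # 2-step hop from i-2
        if i >= 3:
            ways[i] += ways[i - 3]  # 3-step hop from i-3
    return ways[n]

print([count_ways(i) for i in range(1, 6)])  # [1, 2, 4, 7, 13]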
You can find the solution on
https://github.com/CrispenGari/Triple-Step-Algorithim/blob/master/main.cpp .
int count_Ways(int n){
    if(n < 0){
        return 0;
    }else if(n == 0){
        return 1;
    }else{
        return count_Ways(n-1) + count_Ways(n-2) + count_Ways(n-3);
    }
}

int main(){
    cout << "Enter number of stairs: ";
    int n;
    cin >> n;
    cout << "There are " << count_Ways(n) << " possible ways the child can run up the stairs." << endl;
    return 0;
}

Confused about prime number checking function

I came across a question on Stack Overflow about how to check if a number is prime. The answer was the code below. The function int is_prime(int num) returns 1 when the number is prime and 0 otherwise.
int is_prime(int num)
{
    if (num <= 1) return 0;
    if (num % 2 == 0 && num > 2) return 0;
    for (int i = 3; i < num / 2; i += 2)
    {
        if (num % i == 0)
            return 0;
    }
    return 1;
}
All the logic in the if statements makes sense to me except for the for loop expressions. I don't get why the division in i < num / 2 happens and why i += 2 is used. Sure, one is there to advance the counter and the other is to halt the loop, but why halve the number, and why increment by two? Any reasonable explanation will be appreciated. Thanks.
Regarding the loop's increment:
The second if (if (num % 2 == 0)) checks if the number is even, and terminates the function if it is. If the function isn't terminated, we know that it's odd, and thus, may only be divisible by other odd numbers. Hence, the loop starts at 3 and checks the number against a series of odd numbers - i.e., increments the potential divisor by 2 on each iteration.
Regarding the loop's stop condition:
The smallest integer larger than 1 is 2. Thus, the largest integer that could ever properly divide an integer n is n/2. Thus, the loop works its way up to num/2. If it didn't find a divisor for num by the time it reaches num/2, it has no chance of ever finding one, so it's pointless to keep going.
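If you want to experiment with the loop bounds, here is a direct Python port of the same logic (a sketch, my own addition), cross-checked against a naive divisor test on small inputs:
def is_prime(num):
    # mirrors the C version: trial division by odd numbers up to num / 2
    if num <= 1:
        return 0
    if num % 2 == 0 and num > 2:
        return 0
    for i in range(3, num // 2, 2):
        if num % i == 0:
            return 0
    return 1

naive = lambda n: n > 1 and all(n % d for d in range(2, n))
assert all(bool(is_prime(n)) == naive(n) for n in range(200))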

I am trying to solve recursion by hand

I am self-taught and thought that I understood recursion, but I cannot solve this problem:
What is returned by the call recur(12)?
What is returned by the call recur (25)?
public static int recur (int y)
{
    if (y <= 3)
        return y % 4;
    return recur(y-2) + recur(y-1) + 1;
}
Would someone please help me with understanding how to solve these problems?
First of all, I assume you mean:
public static int recur(int y)
The results of this method can be discovered by placing a print statement at the beginning of the method:
public static int recur(int y)
{
    System.out.println(y);
    if (y <= 3)
        return y % 4;
    return recur(y-2) + recur(y-1) + 1;
}
I am not sure what you mean by "what is returned", though, because there are several returns. Anyway, these are the steps to figure this out:
1. Is 12 <= 3? No, so call recur(10). Don't proceed to the next recursion statement yet.
2. Is 10 <= 3? No, so call recur(8). Don't proceed to the next recursion statement yet.
3. Continue this pattern until y <= 3 is true, then return y % 4 (whatever that number may be).
4. Now you are ready to go to the second recursive statement in the most recent recur() call: recur(y - 1). Is y <= 3? If so, return y % 4. If not, repeat a process similar to step 1.
5. Once you return, add the results: recur(y - 2) + recur(y - 1) + 1. This will be a number, of course.
6. Continue this process for many iterations.
Recursion is difficult to follow and understand sometimes even for advanced programmers.
Here is a very common (and similar) problem for you to look into:
Java recursive Fibonacci sequence
I hope this helps!
Good luck!
I have removed the modulus there, since any nonnegative y less than 4 is unchanged by % 4, so I ended up with:
public static int recur (int y)
{
    return y <= 3 ?
        y :
        recur(y-2) + recur(y-1) + 1;
}
I like to start off by testing the base cases: what happens for y when it is 0, 1, 2, or 3? It returns the argument, so 0, 1, 2, 3, of course.
What about 4? Well, then it's not less than or equal to 3, and you get to replace it with recur(4-2) + recur(4-1) + 1, which is recur(2) + recur(3) + 1. Now you can solve each of the recur calls, since we established earlier that they return their argument, so you end up with 2 + 3 + 1 = 6.
Now doing this for 12 or 25 is exactly the same, just with more steps. Here is 5, which has just one more step:
recur(5); //=>
recur(3) + recur(4) + 1; //==>
recur(3) + ( recur(2) + recur(3) + 1 ) + 1; //==>
3 + 2 + 3 + 1 + 1; // ==>
10
So in reality the recursion pauses the current call until its subcalls have produced answers the current call can add together. I could also have worked this bottom-up instead, reusing each previously calculated value as I went.
You should have enough info to do any y.
This is nothing more than an augmented Fibonacci sequence.
The first four terms are defined as 0, 1, 2, 3. Thereafter, each term is the sum of the previous two terms, plus one. This +1 augmentation is where it differs from the classic Fibonacci sequence. Just add up the series by hand:
0
1
2
3
3+2+1 = 6
6+3+1 = 10
10+6+1 = 17
17+10+1 = 28
...
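If you want to check a hand computation, a direct transcription of the method (a Python sketch, my own addition) carries the series out to the values asked about:
def recur(y):
    # direct transcription of the Java method
    return y % 4 if y <= 3 else recur(y - 2) + recur(y - 1) + 1

print(recur(12))  # 321
print(recur(25))  # 167760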

OR-multiplication on big integers

Multiplication of two n-bit numbers A and B can be understood as a sum of shifts:
(A << i1) + (A << i2) + ...
where i1, i2, ... are the positions of the bits that are set to 1 in B.
Now lets replace PLUS with OR to get new operation I actually need:
(A << i1) | (A << i2) | ...
This operation is quite similar to regular multiplication, for which many fast algorithms exist (Schönhage-Strassen, for example).
Is there a similar algorithm for the operation I presented here?
The size of the numbers is 6000 bits.
edit:
For some reason I have no link/button to post comments (any idea why?), so I will edit my question instead.
I am indeed searching for a faster-than-O(n^2) algorithm for the operation defined above.
And yes, I am aware that it is not ordinary multiplication.
Is there a similar algorithm? I think probably not.
Is there some way to speed things up beyond O(n^2)? Possibly. If you consider a number A to be the analogue of the polynomial A(x) = Σ a_n x^n, where a_n are the binary digits of A, then your operation with bitwise ORs (let's call it A ⊕ B) can be expressed as follows, where "⇔" means "analogue":
A ⇔ A(x) = Σ a_n x^n
B ⇔ B(x) = Σ b_n x^n
C = A ⊕ B ⇔ C(x) = f(A(x) * B(x)) = f(V(x)), where f(V(x)) = f(Σ v_n x^n) = Σ u(v_n) x^n, with u(v_n) = 0 if v_n = 0 and u(v_n) = 1 otherwise.
Basically you are doing the equivalent of taking two polynomials and multiplying them together, then identifying all the nonzero terms. From a bit-string standpoint, this means treating the bitstring as an array of samples of zeros or ones, convolving the two arrays, and collapsing the resulting samples that are nonzero. There are fast convolution algorithms that are O(n log n), using FFTs for instance, and the "collapsing" step here is O(n)... but somehow I wonder if the O(n log n) evaluation of fast convolution treats something (like multiplication of large integers) as O(1) so you wouldn't actually get a faster algorithm. Either that, or the constants for orders of growth are so large that you'd have to have thousands of bits before you got any speed advantage. ORing is so simple.
edit: there appears to be something called "binary convolution" (see this book for example) that sounds awfully relevant here, but I can't find any good links to the theory behind it and whether there are fast algorithms.
edit 2: maybe the term is "logical convolution" or "bitwise convolution"... here's a page from CPAN (bleah!) talking a little about it along with Walsh and Hadamard transforms which are kind of the bitwise equivalent to Fourier transforms... hmm, no, that seems to be the analog for XOR rather than OR.
You can do this in O(#1-bits in A * #1-bits in B):
a_bitnums = set(x : ((1 << x) & A) != 0)
b_bitnums = set(x : ((1 << x) & B) != 0)
c = 0
for a_bit in a_bitnums:
    for b_bit in b_bitnums:
        c |= 1 << (a_bit + b_bit)
This might be worthwhile if A and B are sparse in the number of 1 bits present.
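Here is that pseudocode as runnable Python (my own sketch); the assertion reuses the "blurring" example that appears further down this thread:
def or_mul(A, B):
    # OR 1 << (a_bit + b_bit) into c for every pair of set bits,
    # so the cost is (#1-bits in A) * (#1-bits in B)
    a_bitnums = [i for i in range(A.bit_length()) if (A >> i) & 1]
    b_bitnums = [j for j in range(B.bit_length()) if (B >> j) & 1]
    c = 0
    for a_bit in a_bitnums:
        for b_bit in b_bitnums:
            c |= 1 << (a_bit + b_bit)
    return c

assert or_mul(0b110000011, 0b1010101) == 0b111111111111111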
I presume you are asking for the name of the additive technique you have given, when you write "Is there a similar algorithm for the operation I presented here?"...
Have you looked at the Peasant multiplication technique?
Please read up on the Wikipedia description if you do not get the 3rd column in this example.
 B   X    A
27   x   15 : 1
13       30 : 1
 6       60 : 0
 3      120 : 1
 1      240 : 1
B is 27 == binary form 11011b
27x15 = 15 + 30 + 120 + 240
= 15<<0 + 15<<1 + 15<<3 + 15<<4
= 405
Sound familiar?
Here is your algorithm (a code sketch follows below):
Choose the smaller number as your A
Initialize C as your result area
while B is not zero:
    if the lsb of B is 1, add A to C
    left shift A once
    right shift B once
C has your multiplication result (unless you rolled over sizeof C)
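In code, the loop above looks like this (a Python sketch, my own transcription); swapping += for |= turns it into the shift-and-OR variant the question asks about:
def peasant_mul(A, B):
    C = 0
    while B:
        if B & 1:    # lsb of B is 1
            C += A   # use C |= A for the OR variant
        A <<= 1      # left shift A once
        B >>= 1      # right shift B once
    return C

assert peasant_mul(15, 27) == 405  # the worked example above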
Update If you are trying to get a fast algorithm for the shift and OR operation across 6000 bits,
there might actually be one. I'll think a little more on that.
It would appear like 'blurring' one number over the other. Interesting.
A rather crude example: 110000011 X 1010101 would look like
      110000011
    110000011
  110000011
110000011
---------------
111111111111111
The number of 1s in the two numbers will decide the amount of blurring towards a number with all its bits set.
Wonder what you want to do with it...
Update 2: This is the nature of the shift+OR operation with two 6000-bit numbers.
- The result will be 12000 bits, of course.
- The operation can be done with two bit streams, but need not be done in its entirety.
- The 'middle' part of the 12000-bit stream will almost certainly be all 1s (provided both numbers are non-zero).
- The problem will be in identifying the depth to which we need to process this operation to get both ends of the 12000-bit stream.
- The pattern at the two ends of the stream will depend on the longest runs of consecutive 1s present in the two numbers.
I have not yet got to a clean algorithm for this. Have updated for anyone else wanting to recheck or go further from here. Also, describing the need for such an operation might motivate further interest :-)
The best I could come up with is to use a fast out on the looping logic. Combined with the possibility of using the non-zero approach described by themis, you can answer your question by inspecting less than 2% of the N^2 problem.
Below is some code that gives the timing for numbers that are between 80% and 99% zero.
When the numbers get around 88% zero, using themis' approach switches to being better (was not coded in the sample below, though).
This is not a highly theoretical solution, but it is practical.
OK, here is some "theory" of the problem space:
Basically, each bit for X (the output) is the OR summation of the bits on the diagonal of a grid constructed by having the bits of A along the top (MSB to LSB left to right) and the bits of B along the side (MSB to LSB from top to bottom). Since the bit of X is 1 if any on the diagonal is 1, you can perform an early out on the cell traversal.
The code below does this and shows that even for numbers that are ~87% zero, you only have to check ~2% of the cells. For more dense (more 1's) numbers, that percentage drops even more.
In other words, I would not worry about tricky algorithms and just do some efficient logic checking. I think the trick is to look at the bits of your output as the diagonals of the grid, as opposed to the bits of A shift-OR'd with the bits of B. The trickiest thing in this case is keeping track of the bits you can look at in A and B and how to index the bits properly.
Hopefully this makes sense. Let me know if I need to explain this a bit further (or if you find any problems with this approach).
NOTE: If we knew your problem space a bit better, we could optimize the algorithm accordingly. If your numbers are mostly non-zero, then this approach is better than themis's, since his would result in more computations and more storage space needed (sizeof(int) * NNZ).
NOTE 2: This assumes the data is basically bits, and I am using .NET's BitArray to store and access the data. I don't think this would cause any major headaches when translated to other languages. The basic idea still applies.
using System;
using System.Collections;

namespace BigIntegerOr
{
    class Program
    {
        private static Random r = new Random();

        private static BitArray WeightedToZeroes(int size, double pctZero, out int nnz)
        {
            nnz = 0;
            BitArray ba = new BitArray(size);
            for (int i = 0; i < size; i++)
            {
                ba[i] = (r.NextDouble() < pctZero) ? false : true;
                if (ba[i]) nnz++;
            }
            return ba;
        }

        static void Main(string[] args)
        {
            // make sure there are enough bytes to hold the 6000 bits
            int size = (6000 + 7) / 8;
            int bits = size * 8;

            Console.WriteLine("PCT ZERO\tSECONDS\t\tPCT CELLS\tTOTAL CELLS\tNNZ APPROACH");
            for (double pctZero = 0.8; pctZero < 1.0; pctZero += 0.01)
            {
                // fill the "BigInts"
                int nnzA, nnzB;
                BitArray a = WeightedToZeroes(bits, pctZero, out nnzA);
                BitArray b = WeightedToZeroes(bits, pctZero, out nnzB);

                // this is the answer "BigInt" that is at most twice the size minus 1
                int xSize = bits * 2 - 1;
                BitArray x = new BitArray(xSize);

                int LSB, MSB;
                LSB = MSB = bits - 1;

                // stats
                long cells = 0;
                DateTime start = DateTime.Now;

                for (int i = 0; i < xSize; i++)
                {
                    // compare using the diagonals
                    for (int bit = LSB; bit < MSB; bit++)
                    {
                        cells++;
                        x[i] |= (b[MSB - bit] && a[bit]);
                        if (x[i]) break;
                    }

                    // update the window over the bits
                    if (LSB == 0)
                    {
                        MSB--;
                    }
                    else
                    {
                        LSB--;
                    }
                    //Console.Write(".");
                }

                // stats
                TimeSpan elapsed = DateTime.Now.Subtract(start);
                double pctCells = (cells * 100.0) / (bits * bits);
                Console.WriteLine(pctZero.ToString("p") + "\t\t" + elapsed.TotalSeconds.ToString("00.000") + "\t\t" +
                    pctCells.ToString("00.00") + "\t\t" + cells.ToString("00000000") + "\t" + (nnzA * nnzB).ToString("00000000"));
            }

            Console.ReadLine();
        }
    }
}
Just use any FFT polynomial multiplication algorithm and transform all resulting coefficients that are greater than or equal to 1 into 1.
Example:
10011 * 10001
[1 x^4 + 0 x^3 + 0 x^2 + 1 x^1 + 1 x^0] * [1 x^4 + 0 x^3 + 0 x^2 + 0 x^1 + 1 x^0]
== [1 x^8 + 0 x^7 + 0 x^6 + 1 x^5 + 2 x^4 + 0 x^3 + 0 x^2 + 1 x^1 + 1 x^0]
-> [1 x^8 + 0 x^7 + 0 x^6 + 1 x^5 + 1 x^4 + 0 x^3 + 0 x^2 + 1 x^1 + 1 x^0]
-> 100110011
For an example of the algorithm, check:
http://www.cs.pitt.edu/~kirk/cs1501/animations/FFT.html
BTW, it is of linearithmic complexity, i.e., O(n log(n))
Also see:
http://everything2.com/title/Multiplication%2520using%2520the%2520Fast%2520Fourier%2520Transform
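To make the clamping step concrete, here is a small sketch (my own illustration, not from the linked pages; numpy.convolve is itself O(n^2), but it stands in for the FFT-based convolution you would use on 6000-bit inputs):
import numpy as np

def or_mul_fft_style(A, B):
    # treat the bit strings as polynomial coefficients (MSB first),
    # multiply the polynomials by convolution, then clamp every
    # nonzero coefficient to 1
    a = [int(bit) for bit in bin(A)[2:]]
    b = [int(bit) for bit in bin(B)[2:]]
    coeffs = np.convolve(a, b)
    return int(''.join('1' if c else '0' for c in coeffs), 2)

assert or_mul_fft_style(0b10011, 0b10001) == 0b100110011  # the example above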
