First:
boolean isPrime(int n) {
    if (n < 2)
        return false;
    return isPrime(n, 2);
}

private boolean isPrime(int n, int i) {
    if (i == n)
        return true;
    return (n % i == 0) ? false : isPrime(n, i + 1);
}
Second:
boolean isPrime(int n) {
    if (n < 0) n = -n;
    if (n < 2) return false;
    if (n == 2) return true;
    if (n % 2 == 0) return false;
    return rec_isPrime(n, 3);
}

boolean rec_isPrime(int n, int div) {
    if (div * div > n) return true;
    if (n % div == 0) return false;
    return rec_isPrime(n, div + 2);
}
Please explain why the second solution is better than the first. I offered the first solution in an exam and my points were reduced under the claim that the solution is not efficient. I want to know what the big difference is.
So this is a test question, and I always keep in mind that some professors have a colorful prerogative, but I can see a few reasons one might claim the first is slower:
When calculating primes you really only need to test whether other primes are factors. The second solution seeds with an odd number, 3, then adds 2 on every recursive call, which skips checking even factors and cuts the number of calls needed roughly in half.
And as #uselpa pointed out, the second code snippet stops once the candidate factor's square is greater than n. At that point every possible odd factor has effectively been ruled out (any factor larger than sqrt(n) would pair with one smaller), so n can be declared prime much sooner than in the first version, which counts all the way up to n before declaring it prime.
One may also argue that since the first tests for evenness inside the recursive function instead of in the outer method like the second does, it puts unnecessary frames on the call stack.
I have also seen some claims that ternary operations are slower than if-else checks, so your professor may hold this belief. [Personally, I am not convinced there is a performance difference.]
Hope this helps. Was fun to think about some primes!
The first solution has a complexity of O(n), as it takes linear time; the second solution takes O(sqrt(n)) due to this line of code: if (div * div > n) return true;, because looking for a divisor past the square root is not required. For more details see: Why do we check up to the square root of a prime number to determine if it is prime?
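To make the difference concrete, here is a rough Java sketch (mine, not from either answer) that simply bolts call counters onto the two recursive helpers from the question; the class and method names are made up. For a prime such as 997 the first helper needs about n calls, the second only about sqrt(n)/2.

public class PrimeCallCount {
    static long firstCalls = 0, secondCalls = 0;

    // First version: one call per candidate divisor, counting up to n
    static boolean isPrimeFirst(int n, int i) {
        firstCalls++;
        if (i == n) return true;
        return (n % i == 0) ? false : isPrimeFirst(n, i + 1);
    }

    // Second version: odd divisors only, stopping once div * div > n
    static boolean isPrimeSecond(int n, int div) {
        secondCalls++;
        if (div * div > n) return true;
        if (n % div == 0) return false;
        return isPrimeSecond(n, div + 2);
    }

    public static void main(String[] args) {
        int n = 997; // a prime
        System.out.println(isPrimeFirst(n, 2) + " after " + firstCalls + " calls");   // 996 calls
        System.out.println(isPrimeSecond(n, 3) + " after " + secondCalls + " calls"); // 16 calls
    }
}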
I'm looking for some way to shorten an iterator by some condition. A bit like an inverse filter but it stops iterating at the first true value. Let's call it until(f). Where:
iterator.until(f)
Would return an iterator that only runs until f is true once.
Let's use an example of finding the next prime number.
We have some structure containing known primes and a function to extend it.
// Structure for caching known prime numbers
struct PrimeGenerator {
    primes: Vec<i64>,
}

impl PrimeGenerator {
    // Create a new prime generator
    fn new() -> Self {
        let primes = vec![2, 3];
        Self { primes }
    }

    // Extend the list of known primes by 1
    fn extend_by_one(&mut self) {
        let mut next_option = self.primes.last().unwrap() + 2;
        while self.primes.iter().any(|x| next_option % x == 0) { // This is the relevant line
            next_option += 2;
        }
        self.primes.push(next_option);
    }
}
Now this snippet is a bit too exhaustive, as we should only have to check up to the square root of next_option, so I was looking for some method that would shorten the iterator based on a condition, letting me write something like:
self.iter().until(|x| x*x > next_option).any(|x| next_option%x == 0)
Is there any similar pattern available?
Looks like your until is just an inverted take_while: keep taking elements while the condition has not yet become true. Inverting the any into an all at the same time (De Morgan) gives:
self.primes.iter().take_while(|&&x| x * x <= next_option).all(|&x| next_option % x != 0)
In one of the previous intro to CS exams there was a question: calculate the space and time complexity of the function f1 as a function of n. Assume that the time complexity of malloc(n) is O(1) and its space complexity is O(n).
int f1(int n) {
    if (n < 3)
        return 1;
    int* arr = (int*) malloc(sizeof(int) * n);
    f1(f1(n - 3));
    free(arr);
    return n;
}
The official solution is: time complexity: O(2^(n/3)), space complexity: O(n^2)
I tried to solve it but I didn't know how, until I saw a note in my notebook that said: since the function returns n, we can treat f(f(n-3)) as f(n-3)+f(n-3), or as 2f(n-3). In this case the question becomes very similar to this one: Space complexity of recursive function
I tried solving it this way and I got the correct answer.
For the time complexity:
T(n)=2T(n-3)+1 , T(0)=1
T(n-3)=2T(n-3*2)+1
T(n)=2*2T(n-3*2)+2+1
T(n-3*2)=2T(n-3*3)+1
T(n)=2*2*2T(n-3*3)+2*2+2+1
...
T(n)=(2^k)T(n-3*k)+2^(k-1)+...+2^2+2+1
n-3*k=0
k=n/3
===> 2^(n/3)+...+2^2+2+1=2^(n/3)[1+(1/2)+(1/2^2)+...]=2^(n/3)*constant
Thus I got O(2^(n/3))
For the space complexity: the recursion depth is n/3, and every frame along that chain still holds its malloc'd array (of sizes n, n-3, n-6, ...) until it returns, so the peak space is on the order of (n/3)*n, thus O(n^2).
My questions:
Why can we treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3)?
If the function didn't return n but changed it, for example: return n/3 instead of return n, then how do we solve it? Do we treat it as 2f1((n-3)/3)?
If we can't always treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3), then how do we draw the recursion tree, and how do we write and solve the recurrence T(n)?
Why can we treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3)?
Because i) the nested f1 is evaluated first, and its return value is used to call the outer f1; so these nested calls are equivalent to:
int result = f1(n - 3);
f1(result);
... and ii) the return value of f1 is just its argument (except for the base case, but it doesn't matter asymptotically), so the above is further equivalent to:
f1(n - 3);
f1(n - 3); // result = n - 3
If the function didn't return n but changed it, for example: return n/3 instead of return n, then how do we solve it? Do we treat it as 2f1((n-3)/3)?
Only the outer call is affected. Again, using the equivalent expression from before:
f1(n - 3); // = (n - 3) / 3
f1((n - 3) / 3);
i.e. just f1(n - 3) + f1((n - 3) / 3) for your example.
If we can't always treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3), then how do we draw the recursion tree, and how do we write and solve the recurrence T(n)?
You can always separate them into two separate calls as above, and again remember that only the second call is affected by the return result. If this is different to n - 3 then you would need a recursion tree instead of simple expansion. Depends on the specific problem, needless to say.
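If you want to sanity-check the official answer empirically, here is a small instrumented sketch of mine (in Java rather than C; the counters calls, liveBytes and peakBytes are stand-ins for the malloc/free bookkeeping, not part of the original function):

public class F1Check {
    static long calls = 0;
    static long liveBytes = 0, peakBytes = 0;

    // Mirrors the C function; the counters model malloc(sizeof(int) * n) and free(arr)
    static int f1(int n) {
        calls++;
        if (n < 3) return 1;
        liveBytes += 4L * n;                        // "malloc" of n ints
        peakBytes = Math.max(peakBytes, liveBytes);
        f1(f1(n - 3));
        liveBytes -= 4L * n;                        // "free"
        return n;
    }

    public static void main(String[] args) {
        f1(30);
        // calls follows T(n) = 2T(n-3) + 1, i.e. about 2^(n/3): 2047 here for n = 30.
        // peakBytes is the sum of the simultaneously live blocks n, n-3, n-6, ..., 3,
        // which grows like n^2.
        System.out.println("calls = " + calls + ", peak bytes = " + peakBytes);
    }
}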
I am struggling to understand this recursion used in the dynamic programming example. Can anyone explain how it works? The objective is to find the least number of coins for a value.
// f(n) = 1 + min f(n-d) for all denominations d
Pseudocode:
int memo[128]; // initialized to -1
int min_coin(int n)
{
    if (n < 0) return INF;
    if (n == 0) return 0;
    if (memo[n] != -1)
    int ans = INF;
    for (int i = 0; i < num_denomination; ++i)
    {
        ans = min(ans, min_coin(n - denominations[i]));
    }
    return memo[n] = ans + 1; // when does this get called?
}
This particular example is explained very well in this article at Topcoder.
Basically this recursion is using the solutions to smaller problems (least number of coins for a smaller n) to find the solution for the overall problem. The dynamic programming aspect of this is the memoization of the solutions to the sub-problems so they don't have to be recalculated every time.
And yes - there are {} missing as ring0 mentioned in his comment - the recursion should only be executed if the sub-problem has not been solved before.
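For reference, this is roughly what the routine looks like with the missing check completed; a sketch in Java for concreteness, where the denominations array, the INF value and the method name minCoin are placeholders I chose, not part of the original pseudocode:

import java.util.Arrays;

public class MinCoin {
    static final int INF = Integer.MAX_VALUE / 2;       // "infinity" that survives ans + 1
    static final int[] denominations = {1, 5, 10, 25};  // placeholder denominations
    static final int[] memo = new int[128];
    static { Arrays.fill(memo, -1); }                   // memo initialized to -1, as in the pseudocode

    static int minCoin(int n) {
        if (n < 0) return INF;                 // overshot: this branch cannot form the amount
        if (n == 0) return 0;                  // zero coins make zero
        if (memo[n] != -1) return memo[n];     // the missing early return: reuse the cached sub-answer
        int ans = INF;
        for (int d : denominations) {
            ans = Math.min(ans, minCoin(n - d));
        }
        return memo[n] = ans + 1;              // cache, then return: one coin on top of the best sub-answer
    }

    public static void main(String[] args) {
        System.out.println(minCoin(63));       // 6 with the placeholder coins: 25 + 25 + 10 + 1 + 1 + 1
    }
}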
To answer the owner's question, when does this get called?: in a recursive solution, the same function is called by itself... but it eventually returns. When does it return? From the moment the function stops calling itself:
f(a) {
    if (a > 0) f(a-1);
    display "x"
}

f(5);
f(5) would call f(4), which in turn calls f(3), which calls f(2), which calls f(1), which calls f(0).
In f(0), a is 0, so it does not call f(); it displays "x" and then returns. It returns to the previous f(1) which, after calling f(0) - done - also displays "x". f(1) ends, f(2) displays "x", ..., up to f(5). You get 6 "x"s.
In other terms, as ring0 has already mentioned - it gets called when the program reaches the base case and starts to unwind by going back up the stack (call frames). For a similar case using a factorial example, see this:
#!/usr/bin/env perl
use strict;
use IO::Handle;
use Carp qw(cluck);
STDOUT->autoflush(1);
STDERR->autoflush(1);
sub factorial {
    my $v = shift;
    dummy_func();
    return 1 if $v == 1;
    print "Variable v value: $v and its address:", \$v, "\ncurrent sub factorial addr:", \&factorial, "\n", "-" x 40;
    return $v * factorial($v - 1);
}

sub dummy_func {
    cluck;
}
factorial(5);
I got this question in an interview. This seems to me a messed-up Fibonacci-sequence sum generator, and it gives a stack overflow because the exit condition is wrong: if (n == 0) should be if (n < 3). What should be the precise answer to this question? What was expected as an answer?
foo (int n) {
    if (n == 0) return 1;
    return foo(n-1) + foo(n-2);
}
UPDATE
What does this recursive function do?
What do you infer when you see these 3 lines of code? No debugging.
Yes, it's just a recursive Fibonacci method with an incorrect base case (although I don't think I'd use n < 3 either... it depends on how you define it).
As for "what was expected as an answer" - that would depend on the question. Were you asked exactly the question in the title of your post? If so, the answer is "it recurses forever until it blows up the stack when you pass it any value other than 0" - with an explanation, of course. (It will never end because either n-1 isn't 0, or n-2 isn't 0, or both.)
EDIT: The above answers the first question. To answer "What do you infer when you see these 3 lines of code" - I would infer that the developer in question has never run the code with a value other than 0, or doesn't care about whether or not the code works.
The problem with the code posted is that to evaluate foo(1) we need to find foo(0) and foo(-1); foo(-1) then needs foo(-2) and foo(-3), and so on. This keeps adding calls to foo() until there is no more stack space, resulting in a stack overflow. How many calls to foo are made before that depends on the size of the call stack, which is implementation-specific.
When I see these lines of code I immediately get the impression that whoever wrote it hasn't thought about all the possible inputs that could be fed to the function.
To make a recursive Fibonacci function that doesn't fail for foo(1) or a negative input we get:
foo (int n) {
    if (n < 0) return 0;
    if (n == 0) return 1;
    return foo(n-1) + foo(n-2);
}
Edit: perhaps the return value for a negative number should be something else, as the Fibonacci sequence isn't normally defined for negative indexes.
However if we use the extension that fib(-n) = (-1)^(n+1) fib(n) we could get the following code:
int fib(int n) {
    if (n == -1) {
        return 1;
    } else if (n < 0) {
        // (-1)^(-n+1) as a sign; note that ^ in C is XOR, so compute the sign explicitly
        return ((-n) % 2 == 0 ? -1 : 1) * fib(-n);
    } else if (n == 0) {
        return 1;
    } else {
        return fib(n-1) + fib(n-2);
    }
}
Suppose you call foo(1): it will make two sub-calls, foo(0) and foo(-1). As you can see, once you get to foo(-1), n will decrease indefinitely and will never reach a terminating condition.
The precise answer would be:
foo (int n) {
    if (n <= 1) return 1; // assuming the 0th and 1st fib are 1
    return foo(n-1) + foo(n-2);
}
It's a recursive Fibonacci function with the 0th element being 1.
Ignoring the incorrect termination condition, the code is the "naive" recursive implementation of the Fibonacci function. It has exponential time complexity, since each call spawns two more calls. It would work fine for small n, but if n is greater than, say, 50 (I'm just guessing that number), it would take "forever" to run.
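To put a number on "exponential": the naive version makes on the order of 1.618^n calls. A memoized variant computes each value only once; this is a sketch of mine (the HashMap cache is my addition, but it keeps the same 0th-element-is-1 convention used above):

import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static final Map<Integer, Long> memo = new HashMap<>();

    // Same convention as the answers above: fib(0) = fib(1) = 1
    static long fib(int n) {
        if (n <= 1) return 1;
        Long cached = memo.get(n);
        if (cached != null) return cached;      // each n is computed only once
        long value = fib(n - 1) + fib(n - 2);
        memo.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fib(50));            // returns immediately; the naive version would take orders of magnitude longer
    }
}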
A theoretical question here about the base or halting case in a recursive method: what are the standards for it?
I mean, is it normal for it not to have a body, just a return statement?
Is it always like the following:
if (input operation value)
    return sth;
Do you have different thoughts about it?
The pattern for recursive functions is that they look something like this:
f( value )
    if ( test value )
        return value
    else
        return f( simplify value )
I don't think you can say much more than that about general cases.
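As a concrete instance of that shape (my example, not from the answer), here value is n, the test is "n has a single digit", and simplification replaces n by the sum of its digits:

// Digital root of a non-negative n: a literal instance of the pattern above
static int digitalRoot(int n) {
    if (n < 10)
        return n;                          // test succeeded: return the value as-is
    int sumOfDigits = 0;
    for (int m = n; m > 0; m /= 10)
        sumOfDigits += m % 10;             // simplify the value
    return digitalRoot(sumOfDigits);       // recurse on the simpler value
}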
The base case is there to terminate the recursion (to avoid it becoming infinite). There's no standard for the base case; any input that is simple enough to be solved directly can be chosen as one.
For example, this is perfectly valid:
int factorial (int n) {
    if (n <= 5) {
        // Not just a return statement
        int x = 1;
        while (n > 0) {
            x *= n;
            --n;
        }
        return x;
    } else {
        return n * factorial(n-1);
    }
}
In some cases, your base case is
return literal
In some cases, your base case is not simply "return a literal".
There cannot be a "standard" -- it depends on your function.
The "Syracuse function" (http://en.wikipedia.org/wiki/Collatz_conjecture), for example, doesn't have a trivial base case or a trivial literal value.
"Do you have different thoughts about it?" isn't really a sensible question.
The recursion has to terminate, that's all. A trivial tail recursion may have a "base case" that returns a literal, or it may be a calculation. A more complex recursion may not have a trivial "base case".
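For instance, a recursive step counter for that Syracuse/Collatz rule (my sketch, in Java) has a one-line stopping condition, yet nothing about the shape of the recursion makes it obvious the condition is ever reached; that it is, for every positive n, is exactly the open conjecture:

public class Collatz {
    // Counts how many steps n takes to reach 1 under the Collatz rule.
    static long collatzSteps(long n) {
        if (n == 1) return 0;                      // the stopping condition is trivial to write...
        if (n % 2 == 0) return 1 + collatzSteps(n / 2);
        return 1 + collatzSteps(3 * n + 1);        // ...but proving it is always reached is not
    }

    public static void main(String[] args) {
        System.out.println(collatzSteps(27));      // 111 steps
    }
}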
It depends entirely on the particular recursive function. In general, it can't be an empty return; statement, though - for any recursive function that returns a value, the base case should also return a value of that type, since func(base) is also a perfectly valid call. For example, a recursive factorial function would return a 1 as the base value.