The example code here solves a Project Euler problem:
Starting with the number 1 and moving to the right in a clockwise direction, a 5 by 5 spiral is formed as follows:
21 22 23 24 25
20  7  8  9 10
19  6  1  2 11
18  5  4  3 12
17 16 15 14 13
It can be verified that the sum of the numbers on the diagonals is
101.
What is the sum of the numbers on the diagonals in a 1001 by 1001
spiral formed in the same way?
but my question is a matter of functional programming style rather than about how to get the answer (I already have it). I am trying to teach myself a bit about functional programming by avoiding imperative loops in my solutions, and so came up with the following recursive function to solve problem 28:
let answer =
    let dimensions = 1001
    let max_number = dimensions * dimensions
    let rec loop total increment increment_count current =
        if current > max_number then total
        else
            // after four corners at the same spacing, the gap between corners grows by 2
            let new_inc, new_inc_count =
                if increment_count = 4 then increment + 2, 1
                else increment, increment_count + 1
            loop (total + current) new_inc new_inc_count (current + increment)
    loop 0 2 1 1
However, it seems to me my function is a bit of a mess. The following imperative version is shorter and clearer, even after taking into account the fact that F# forces you to explicitly declare variables as mutable and doesn't include a += operator:
let answer =
    let dimensions = 1001
    let mutable total = 1
    let mutable increment = 2
    let mutable current = 1
    for spiral_layer_index in {1 .. (dimensions - 1) / 2} do
        for increment_index in {1 .. 4} do
            current <- current + increment
            total <- total + current
        increment <- increment + 2
    total
Disregarding the fact that people with more maths ability have solved the problem analytically, is there a better way to do this in a functional style? I also tried using Seq.unfold to create a sequence of values and then piping the resulting sequence into Seq.sum, but this ended up being even messier than my recursive version.
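For reference, one way such a Seq.unfold version might look (an illustrative sketch only, reusing the same state as the recursive loop above, and not necessarily the attempt referred to; answer_unfold is just a placeholder name):

let answer_unfold =
    let dimensions = 1001
    let max_number = dimensions * dimensions
    (1, 2, 1)   // state: (current value, step to the next corner, which of the four equal steps comes next)
    |> Seq.unfold (fun (current, increment, count) ->
        if current > max_number then None
        else
            let next =
                if count = 4 then (current + increment, increment + 2, 1)
                else (current + increment, increment, count + 1)
            Some (current, next))
    |> Seq.sum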
Since you didn't describe the problem you're trying to solve, this answer is based only on the F# code you posted. I agree that the functional version is a bit messy, but I believe it could be clearer. I don't really understand the nested for loop in your imperative solution:
for increment_index in {1 .. 4} do
    current <- current + increment
    total <- total + current
You're not using increment_index for anything, so you can unroll the four iterations: together they add current + increment, current + 2*increment, current + 3*increment and current + 4*increment to the total, which collapses to:
total <- total + 4*current + 10*increment
current <- current + 4*increment
Then your imperative solution becomes:
let mutable total = 0
let mutable increment = 2
let mutable current = 1
for spiral_layer_index in {1 .. (dimensions - 1) / 2} do
    total <- total + 4*current + 10*increment
    current <- current + 4*increment
    increment <- increment + 2
total
If you rewrite this as a recursive function, it becomes just:
let rec loop index (total, current, increment) =
    if index > (dimensions - 1) / 2 then total
    else loop (index + 1) (total + 4*current + 10*increment,
                           current + 4*increment, increment + 2)
let total = loop 1 (0, 1, 2)
The same thing could also be written using Seq.fold like this (this is even more "functional", because in functional programming you use recursion only to implement basic functions, like fold, that can then be re-used):
let total, _, _ =
    {1 .. (dimensions - 1) / 2}
    |> Seq.fold (fun (total, current, increment) _ ->
        (total + 4*current + 10*increment, current + 4*increment, increment + 2)) (0, 1, 2)
NOTE: I'm not sure if this actually implements what you want. It is just a simplification of your imperative solution, and then a rewrite of that using a recursive function...
In fact, this is Project Euler Problem 28 and my F# solution circa November 21, 2011 is quite similar to one suggested in Tomas' answer:
let problem028 () =
    [1..500]
    |> List.fold (fun (accum, last) n ->
        (accum + 4*last + 20*n, last + 8*n)) (1, 1)
    |> fst
Indeed, the original problem can be solved with a simple one-liner: a fold over the list of rings whose corners lie on the diagonals, threading through the accumulated sum and the value of the current diagonal element. Folding is one of the major idioms of functional programming; there is a great classic paper, "A tutorial on the universality and expressiveness of fold", that covers many important facets of this core pattern.
Related
For example,
n = 4: (4x1), so 1 way
n = 10: (4x1, 6x1) or (10x1), so 2 ways
Is there an equation that can express the number of ways?
You have used the recurrence-relation tag, and yes, it is possible to use a recurrence to calculate the number of ways:
P(N) = P(N-10) + P(N-6) + P(N-4)
P(0) = 1, and P(N) = 0 for N < 0
Explanation: you can reach the sum N from a sum of (N-10) cents plus a 10-cent coin, and so on for the other denominations.
For rather large values of N the plain recursive algorithm will take too long, so one could build a dynamic programming algorithm to accelerate the calculation (DP reuses the values already calculated for smaller sums).
Suppose you have a list of denominations. In your case it is A = [4,6,10]. So suppose you have the following things:
A = [4,6,10]
Length of list A = N
Sum = K
The problem can be written as:
# Given the list of denominations A, its length N, and the target sum K:
P(A, N, K) = 1                                  if K = 0
           = 0                                  if K < 0 or N = 0
           = P(A, N-1, K) + P(A, N, K - A[N])   # skip denomination A[N], or use it (again)
As we can see, sub-problems are re-used, so DP will work wonderfully.
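To make that concrete, here is a small bottom-up F# sketch of this DP (F# only because it is the language used elsewhere on this page; countWays is an illustrative name of mine, and the denominations [4; 6; 10] come from the example in the question):

let countWays (denominations: int list) (target: int) =
    // ways.[k] holds the number of ways to form the sum k.
    // Processing one denomination at a time keeps the count order-independent,
    // so 4+6 and 6+4 are counted as a single way.
    let ways = Array.zeroCreate (target + 1)
    ways.[0] <- 1
    for coin in denominations do
        for k in coin .. target do
            ways.[k] <- ways.[k] + ways.[k - coin]
    ways.[target]

With the numbers from the question, countWays [4; 6; 10] 4 gives 1 and countWays [4; 6; 10] 10 gives 2.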
So in my text book there is this example of a recursive function written in F#:
let rec gcd = function
    | (0, n) -> n
    | (m, n) -> gcd (n % m, m);;
With this function, my text book gives an example by executing:
gcd(36,116);;
and since m = 36 and not 0, it of course goes to the second clause, like this:
gcd(116 % 36,36)
gcd(8,36)
gcd(36 % 8,8)
gcd(4,8)
gcd(8 % 4,4)
gcd(0,4)
and now it hits the first clause, so the whole thing evaluates to 4.
What I don't get is this (%) percentage sign/operator, or whatever it is called in this context. For instance, I don't get how
116 % 36 = 8
I have turned this over in my head so many times and I can't figure out how it turns into 8.
I know this is probably a silly question for those of you who know this, but I would very much appreciate your help all the same.
% is a questionable version of modulo, which is the remainder of an integer division.
For positive operands, you can think of % as the remainder of the division. See for example Wikipedia on Euclidean Division. Consider 9 % 4: 4 fits into 9 twice. But two times four is only eight. Thus, there is a remainder of one.
If there are negative operands, % effectively ignores the signs to calculate the remainder and then uses the sign of the dividend as the sign of the result. This corresponds to the remainder of an integer division that rounds to zero, i.e. -2 / 3 = 0.
This is a mathematically unusual definition of division and remainder that has some bad properties. Normally, when calculating modulo n, adding or subtracting n on the input has no effect. Not so for this operator: 2 % 3 is not equal to (2 - 3) % 3.
I usually have the following defined to get useful remainders when there are negative operands:
/// Euclidean remainder, the proper modulo operation
let inline (%!) a b = (a % b + b) % b
So far, this operator was valid for all cases I have encountered where a modulo was needed, while the raw % repeatedly wasn't. For example:
When filling rows and columns from a single index, you could calculate rowNumber = index / nCols and colNumber = index % nCols. But if index and colNumber can be negative, this mapping becomes invalid, while Euclidean division and remainder remain valid.
If you want to normalize an angle to [0, 2π), angle %! (2. * System.Math.PI) does the job, while the "normal" % might give you a headache.
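A small illustration of the difference (the names are mine, and the values in the comments are what F# evaluates these expressions to):

let inline (%!) a b = (a % b + b) % b   // same operator as defined above

let rawRemainder = (-1) % 3    // -1 : % keeps the sign of the dividend
let euclidean    = (-1) %! 3   //  2 : %! stays in 0 .. b-1 for positive b
let fromQuestion = 116 % 36    //  8 : the case asked about in the question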
Because
116 / 36 = 3
116 - (3*36) = 8
Basically, the % operator, known as the modulo operator, divides one number by another and gives back the remainder that is left over. A common first use for it is checking whether a number is even or odd, by doing something like this in F#:
let firstUsageModulo = 55 % 2 = 0 // false, because 55 % 2 leaves 1, not 0
When it gives 8 in your example, it means that 36 goes into 116 three times and 8 is what is left over.
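In code, the relationship this answer describes looks like this (a minimal F# illustration; the names are mine and the values are shown in the comments):

let quotient  = 116 / 36             // 3, integer division truncates
let remainder = 116 - quotient * 36  // 8, what is left over
let viaModulo = 116 % 36             // 8, % computes exactly that remainder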
Just to help you in future with similar problems: in IDEs such as Xamarin Studio and Visual Studio, if you hover the mouse cursor over an operator such as % you should get a tooltip, thus:
[Screenshot: modulo operator tooltip]
Even if you don't understand the tool tip directly, it'll give you something to google.
I'm trying to figure out an equation: f(n) = f(n-1) + 3n^2 - n. I also have values to use for f(1), f(2), and f(3). How would I go about solving this?
You would usually use recursion but, whether you do that or an iterative solution, you're missing (or simply haven't shown us) a vital bit of information: the terminating condition, such as f(1) = 1 (for example).
With that extra piece of information, you could code up a recursive solution relatively easily, such as the following pseudo-code:
define f(n):
    if n == 1:
        return 1
    return f(n-1) + (3 * n * n) - n
As an aside, that's not actually Fibonacci, which is the specific 1, 1, 2, 3, 5, 8, 13, ... sequence.
It can be said to be Fibonacci-like but it's actually more efficient to do this one recursively since it only involves one self-referential call per level whereas Fibonacci needs two:
define f(n):
    if n <= 2:
        return 1
    return f(n-2) + f(n-1)
And if you're one of those paranoid types who doesn't like recursion (and I'll admit freely it can have its problems in the real world of limited stack depths), you could opt for the iterative version.
define f(n):
    if n == 1:
        return 1
    parent = 1
    for num = 2 to n inclusive:
        result = parent + (3 * num * num) - num
        parent = result
    return result
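For concreteness, here is a direct F# translation of the recursive and iterative pseudo-code versions of f above (F# only because it is the language used elsewhere on this page; fRec and fIter are illustrative names of mine, and the same example base case f(1) = 1 is assumed):

let rec fRec n =
    if n = 1 then 1
    else fRec (n - 1) + 3 * n * n - n

let fIter n =
    // fold over 2 .. n, carrying the previous value along as the accumulator
    [2 .. n] |> List.fold (fun parent num -> parent + 3 * num * num - num) 1

Both give f(2) = 11 and f(3) = 35.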
If you ask this question on a programming site such as Stack Overflow, you can expect to get code as an answer.
On the other hand, if you are looking for a closed formula for f(n), then you should direct your question to a specialised StackExchange site such as Computer Science.
Note: what you are looking for is called the repertoire method. It can be used to solve your problem (the closed formula is very simple).
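For what it's worth, with that same example base case f(1) = 1, telescoping the recurrence gives a simple closed formula (a sketch of the algebra, not part of the original answer):

f(n) = f(1) + \sum_{k=2}^{n} (3k^2 - k)
     = 1 + 3\left(\frac{n(n+1)(2n+1)}{6} - 1\right) - \left(\frac{n(n+1)}{2} - 1\right)
     = n^2(n+1) - 1

which matches f(2) = 11 and f(3) = 35 from the code above.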
I am a university student studying Racket/Scheme and C as introductory courses for my CS degree.
I have read online that it is generally best practice to use iteration as opposed to recursion in C, because recursion is expensive due to the cost of saving stack frames onto the call stack, etc.
Now in a functional language like Scheme, recursion is used all the time. I know that tail recursion is a huge benefit in Scheme, and it is my understanding that it only requires one stack frame (can anybody clarify this?) no matter how deep the recursion goes.
My question is: what about non-tail recursion? Does each function application get saved on the callstack? If I could get a brief overview of how this works or point me to a resource I would be grateful; I can't seem to find one anywhere that explicitly states this.
Tail call elimination is required by Scheme. A recursive call that is not in tail position will require an additional stack frame for each call.
For a moment, let us assume that JavaScript supports tail call optimization. The second of these function definitions will then use only one stack frame, while the first, on account of the +, will require an additional stack frame for every call.
function sum(n) {
    if (n === 0)
        return n;
    return n + sum(n - 1);
}

function sum(n) {
    function doSum(total, n) {
        if (n === 0)
            return total;
        return doSum(total + n, n - 1);
    }
    return doSum(0, n);
}
Many recursive functions can be rewritten for tail call optimization by threading the partial result through an accumulator parameter instead of building it up on the stack.
Conceptually, the invocations for the first definition, for sum(3), look like this:
3 + sum(2)
3 + sum(2) = 3 + 2 + sum(1)
3 + sum(2) = 3 + 2 + sum(1) = 3 + 2 + 1 + sum(0)
3 + sum(2) = 3 + 2 + sum(1) = 3 + 2 + 1 + sum(0) = 3 + 2 + 1 + 0
3 + sum(2) = 3 + 2 + sum(1) = 3 + 2 + 1 + sum(0) = 6
3 + sum(2) = 3 + 2 + sum(1) = 6
3 + sum(2) = 6
6
The invocations for the second definition look like this:
doSum(0, 3) = doSum(3, 2) = doSum(5, 1) = doSum(6, 0) = 6
Yes, a call in a non-tail position needs to add something to the stack so it knows how to resume work when the call returns. (For a more thorough explanation of stacks, tail calls, and non-tail calls, see Steele's paper Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO linked from the lambda papers page at readscheme.org.)
But Racket (and many other Schemes, and some other languages) implement "the stack" so that even if you have deep recursion, you won't run out of stack space. In other words, Racket has no stack overflows. One reason for this is that the techniques for supporting deep recursion coincide with the techniques for supporting first class continuations, which the Scheme standard also requires. You can read about them in Implementation Strategies for First-Class Continuations by Clinger et al.
Recently I have been studying recursion; how to write it, analyze it, etc. I have thought for a while that recurrence and recursion were the same thing, but some problems on recent homework assignments and quizzes have me thinking there are slight differences, that 'recurrence' is the way to describe a recursive program or function.
This has all been very Greek to me until recently, when I realized that there is something called the 'master theorem' used to write the 'recurrence' for problems or programs. I've been reading through the wikipedia page, but, as usual, things are worded in such a way that I don't really understand what it's talking about. I learn much better with examples.
So, a few questions:
Let's say you are given this recurrence:
r(n) = 2*r(n-2) + r(n-1)
r(1) = r(2) = 1
Is this, in fact, in the form of the master theorem? If so, in words, what is it saying? If you were to be trying to write a small program or a tree of recursion based on this recurrence, what would that look like? Should I just try substituting numbers in, seeing a pattern, then writing pseudocode that could recursively create that pattern, or, since this may be in the form of the master theorem, is there a more straightforward, mathematical approach?
Now, lets say you were asked to find the recurrence, T(n), for the number of additions performed by the program created from the previous recurrence. I can see that the base case would probably be T(1) = T(2) = 0, but I'm not sure where to go from there.
Basically, I am asking how to go from a given recurrence to code, and the opposite. Since this looks like the master theorem, I'm wondering if there is a straightforward and mathematical way of going about it.
EDIT: Okay, I've looked through some of my past assignments to find another example of where I'm asked 'to find the recurrence', which is the part of this question I'm having the most trouble with.
Recurrence that describes in the best way the number of addition operations in the following program fragment (when called with l == 1 and r == n):
int example(int A[], int l, int r) {
    if (l == r)
        return 2;
    return (A[l] + example(A, l+1, r));
}
A few years ago, Mohamad Akra and Louay Bazzi proved a result that generalizes the Master method -- it's almost always better. You really shouldn't be using the Master Theorem anymore...
See, for example, this writeup: http://courses.csail.mit.edu/6.046/spring04/handouts/akrabazzi.pdf
Basically, get your recurrence to look like equation 1 in the paper, pick off the coefficients, and integrate the expression in Theorem 1.
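For reference, the Akra-Bazzi result in that writeup has roughly this shape (quoted from memory, so check the linked handout for the exact regularity conditions on g and the allowed perturbation terms):

T(x) = g(x) + \sum_{i=1}^{k} a_i \, T(b_i x), \qquad a_i > 0,\; 0 < b_i < 1
\implies T(x) = \Theta\!\left( x^{p} \left( 1 + \int_{1}^{x} \frac{g(u)}{u^{p+1}} \, du \right) \right),
\quad \text{where } p \text{ satisfies } \sum_{i=1}^{k} a_i b_i^{\,p} = 1.

Like the Master theorem, this targets divide-and-conquer recurrences in which the subproblem size shrinks by a constant factor.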
Zachary:
Let's say you are given this recurrence:
r(n) = 2*r(n-2) + r(n-1); r(1) = r(2) = 1
Is this, in fact, in the form of the master theorem? If so, in words, what is it saying?
I think what your recurrence relation is saying is that for a function r with parameter n (representing the total number of data sets you're inputting), whatever you get at the nth position of the data set is the output of the (n-1)th position plus twice the result of the (n-2)th position, with no non-recursive work being done. When you try to solve a recurrence relation, you're trying to express it in a way that doesn't involve recursion.
However, I don't think that it is in the correct form for the Master Theorem method. Your statement is a "second order linear recurrence relation with constant coefficients". Apparently, according to my old Discrete Math textbook, that's the form you need to have in order to solve the recurrence relation.
Here's the form that they give:
r(n) = a*r(n-1) + b*r(n-2) + f(n)
where 'a' and 'b' are some constants and f(n) is some function of n. In your statement, a = 1, b = 2, and f(n) = 0. Whenever f(n) is equal to zero, the recurrence relation is known as "homogeneous". So, your expression is homogeneous.
I don't think that you can solve a homogeneous recurrence relation using the Master Theorem, because f(n) = 0. None of the cases of the Master Theorem allow for that, because n-to-the-power-of-anything can't equal zero. I could be wrong, because I'm not really an expert at this, but I don't think it's possible to solve a homogeneous recurrence relation using the Master Method.
I think the way to solve a homogeneous recurrence relation is to go through the following steps:
1) Form the characteristic equation, which is something of the form of:
x^k - c[1]*x^(k-1) - c[2]*x^(k-2) - ... - c[k-1]*x - c[k] = 0
If you've only got 2 recursive terms in your homogeneous recurrence relation, then you only need to change your equation into a quadratic equation, where
x^2 - a*x - b = 0
This is because a recurrence relation of the form of
r(n) = a*r(n-1) + b*r(n-2)
Can be re-written as
r(n) - a*r(n-1) - b*r(n-2) = 0
2) After your recurrence relation is rewritten as a characteristic equation, next find the roots (x[1] and x[2]) of the characteristic equation.
3) With your roots, your solution will now be one of the two forms:
if x[1]!=x[2]
c[1]*x[1]^n + c[2]*x[2]^n
else
c[1]*x[1]^n + n*c[2]*x[2]^n
for when n>2.
4) With the new form of your recursive solution, you use the initial conditions (r(1) and r(2)) to find c[1] and c[2]
Going with your example here's what we get:
1)
r(n) = 1*r(n-1) + 2*r(n-2)
=> x^2 - x - 2 = 0
2) Solving for x with the quadratic formula (here a = 1, b = -1, c = -2):
x = (1 +- sqrt((-1)^2 - 4(1)(-2)))/(2(1)) = (1 +- 3)/2
x[1] = (1 + 3)/2 = 2
x[2] = (1 - 3)/2 = -1
3) Since x[1] != x[2], your solution has the form:
c[1]*(2)^n + c[2]*(-1)^n
4) Now, use your initial conditions to find the two constants c[1] and c[2]:
c[1]*(2)^1 + c[2]*(-1)^1 = 2*c[1] - c[2] = 1
c[1]*(2)^2 + c[2]*(-1)^2 = 4*c[1] + c[2] = 1
Adding the two equations gives 6*c[1] = 2, so c[1] = 1/3 and c[2] = -1/3, and the closed form is
r(n) = (2^n - (-1)^n)/3
As a check, this gives r(1) = 1, r(2) = 1, r(3) = 3 and r(4) = 5, which is exactly what the recurrence produces.
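A quick way to sanity-check that closed form against the original recurrence (an F# sketch, only because F# is the language used elsewhere on this page; rRec and rClosed are illustrative names of mine):

let rec rRec n =
    if n = 1 || n = 2 then 1
    else 2 * rRec (n - 2) + rRec (n - 1)

let rClosed n = (pown 2 n - pown (-1) n) / 3

// [1 .. 15] |> List.forall (fun n -> rRec n = rClosed n)  evaluates to true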
Zachary:
Recurrence that describes in the best way the number of addition operations in the following program fragment (when called with l == 1 and r == n):
int example(int A[], int l, int r) {
    if (l == r)
        return 2;
    return (A[l] + example(A, l+1, r));
}
Here are the step counts for your given code when r > l (writing n = r - l + 1 for the number of elements still in play):
int example(int A[], int l, int r) {     // no steps by itself
    if (l == r)                          // 1 step
        return 2;                        // only executed in the base case
    return (A[l] + example(A, l+1, r));  // 1 step plus the recursive cost T(n-1)
}
Total: T(n) = 2 + T(n-1) for n > 1
Else, when r == l (that is, n = 1), T(1) = 2, because the if-statement and the return each require 1 step per execution.
Your method, written in code using a recursive function, would look like this:
function r(int n)
{
    if (n == 2) return 1;
    if (n == 1) return 1;
    return 2 * r(n-2) + r(n-1); // I guess we're assuming n > 2
}
I'm not sure what "recurrence" is, but a recursive function is simply one that calls itself.
Recursive functions need an escape clause (some non-recursive case - for example, "if n==1 return 1") to prevent a Stack Overflow error (i.e., the function gets called so much that the interpreter runs out of memory or other resources)
A simple program that would implement that would look like:
public int r(int input) {
    if (input == 1 || input == 2) {
        return 1;
    } else {
        return 2 * r(input - 2) + r(input - 1);
    }
}
You would also need to make sure that the input is not going to cause an infinite recursion, for example, if the input at the beginning was less than 1. If this is not a valid case, then return an error, if it is valid, then return the appropriate value.
"I'm not exactly sure what 'recurrence' is either"
The definition of a "recurrence relation" is a sequence of numbers "whose domain is some infinite set of integers and whose range is a set of real numbers", with the additional condition that the function describing this sequence "defines one member of the sequence in terms of a previous one."
And, the objective behind solving them, I think, is to go from a recursive definition to one that isn't. Say if you had T(0) = 2 and T(n) = 2 + T(n-1) for all n>0, you'd have to go from the expression "T(n) = 2 + T(n-1)" to one like "2n+2".
sources:
1) "Discrete Mathematics with Graph Theory - Second Edition", by Edgar G. Goodair and Michael M. Parmenter
2) "Computer Algorithms C++," by Ellis Horowitz, Sartaj Sahni, and Sanguthevar Rajasekaran.