Assistance with comparisons in primitive recursion

I have been given the following question in a tutorial about recursive functions and am not sure how to compare two arguments.
Show that the sign function, sign: N -> N, is primitive recursive, where
sign(x) = 1 if x > 0
sign(x) = 0 if x = 0
Note the number of arguments of sign.
I already know that I will use
succ(zero(x))
to assign 1 and
zero(x)
to assign 0, but I do not know how to compare x to zero.
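If it helps: no explicit comparison is needed, because the primitive recursion scheme itself distinguishes the base case 0 from the successor case. A sketch of the standard construction (proj_1 is the usual projection, which here simply discards the recursive value):
sign(0) = 0
sign(y + 1) = g(y, sign(y)),  where  g(y, z) = succ(zero(proj_1(y, z))) = 1
So sign(0) is 0, and the sign of any successor is succ(zero(...)) = 1; the comparison with zero is performed by the recursion itself.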


How do I solve a system of second order differential equations using Octave?

I am trying to solve the two equations below for my project. I was unable to find any straightforward guide to solving these differential equations.
I am trying to plot a graph of h over time for the equations, with a given initial h, mu, r, theta, g, L, and the derivatives of h and theta with respect to t. Is this possible? And if so, how?
(The two equations were given as an image.)
I tried to type the equations into Octave with the given conditions, but there seems to be an error that I am unable to identify.
Whatever numerical software you use, Octave or another (one that does not do symbolic computation), the first step is to transform your system of N coupled ordinary differential equations (ODEs) of order p into a system of p*N coupled ODEs of order 1.
This is always done by setting intermediate derivatives as new variables.
For your system, setting H = dh/dt and J = dtheta/dt, the two second-order equations become a system of four first-order ODEs:
dh/dt = H
dtheta/dt = J
dH/dt = (the second derivative of h, solved from your first equation)
dJ/dt = (the second derivative of theta, solved from your second equation)
Then, with Octave, and as explained in doc lsode, you define a function, say Xdot = dsys(X, t), that codes this system (lsode passes the state vector and the time). X is the vector [h, theta, H, J], and Xdot is the returned vector of their respective derivatives, as defined by the right-hand sides of the first-order ODEs.
So the first two elements of Xdot are trivial: Xdot = [X(3) X(4) ...].
Of course, dsys() must also use the parameters M, g, m, µ, L, and r. As far as I understand, they can't be passed as additional arguments to dsys(), so you must define them before calling lsode.
For initial states, you must define the vector X0=[h0, theta0, H0, J0] of known initial values.
You must then define the vector of increasing times >= 0 at which you want to compute the values of X. For instance, t = 0:100. 0 must be the first element of t.
Finally, call Xt = lsode(@dsys, X0, t). After that you should get:
Xt(:,1) are the values of h(t)
Xt(:,2) are the values of theta(t)
Xt(:,3) are the values of H(t)=(dh/dt)(t)
Xt(:,4) are the values of J(t)=(dtheta/dt)(t)
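If it helps to see the whole recipe as one runnable piece, here is a minimal parallel sketch in R with the deSolve package (it just mirrors the lsode steps above; the real right-hand sides were given as images in the question, so the dH and dJ expressions and the parameter values below are placeholders to replace):
library(deSolve)

dsys <- function(t, X, parms) {       # X = [h, theta, H, J]
  h <- X[1]; theta <- X[2]; H <- X[3]; J <- X[4]
  with(as.list(parms), {
    dH <- -g             # placeholder: put the real expression for d2h/dt2 here
    dJ <- -mu * J        # placeholder: put the real expression for d2theta/dt2 here
    list(c(H, J, dH, dJ))
  })
}

parms <- c(M = 1, g = 9.81, m = 0.1, mu = 0.05, L = 2, r = 0.5)  # placeholder values
X0 <- c(h = 1, theta = 0, H = 0, J = 0)   # initial state [h0, theta0, H0, J0]
t  <- seq(0, 100, by = 0.1)               # times at which to evaluate the solution
Xt <- ode(y = X0, times = t, func = dsys, parms = parms)
head(Xt)                                  # columns: time, h, theta, H, J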

Vector with functions as elements

I'm trying to compute this integral in R numerically (the formula was given as an image):
where cm and cf are functions that I already know and gamma is a parameter that is also known.
What I want to do is compute the integral for a = 18, 19, 20, ..., 65.
So basically, I would like to construct a vector of size 48 in which the first element is pi(18), the second pi(19), and so on until pi(65).
Is it possible to do this in R?
I have only tried to compute the integrand inside lambda (just to see if it works), in the following way:
integrand <- vector(mode="numeric")
for (i in 1:48){
integrand[i] <- function(a.f){exp(0.83-0.071*a.f)*
exp(-0.970774-0.077159*a.f)*
(1/sqrt(((-0.67+0.133*a.f)^{2})*pi))*exp(-(1/(-0.67+0.133*a.f)^{2})*(i-a.f- 2)^{2})}
}
But I obtain the error:
"incompatible types (from closure to double) in subassignment type fix"
Therefore I have no clue how to solve my initial integral.
If I am not mistaken, you are almost there. R is so powerful because it is vectorized, which we can use to get the result. (Note that i, the loop index left over from your code, must be defined first:)
i <- 18  # whichever index you are computing the integrand for
a.f <- 1:48
integrand <- exp(0.83-0.071*a.f)*
  exp(-0.970774-0.077159*a.f)*
  (1/sqrt(((-0.67+0.133*a.f)^2)*pi))*
  exp(-(1/(-0.67+0.133*a.f)^2)*(i-a.f-2)^2)
This gives a vector, integrand, of length 48 with the values.
Is that something that you want/need?
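If you also want the integrals pi(a) themselves rather than just the integrand, one sketch is to wrap the integrand in a function of the loop index and call integrate() once per index. The limits of integration were not shown in the question, so the 0 and 120 below are placeholder assumptions:
integrand <- function(a.f, i) {
  exp(0.83 - 0.071*a.f) *
    exp(-0.970774 - 0.077159*a.f) *
    (1/sqrt(((-0.67 + 0.133*a.f)^2)*pi)) *
    exp(-(1/(-0.67 + 0.133*a.f)^2)*(i - a.f - 2)^2)
}
pi.vals <- sapply(1:48, function(i)
  integrate(integrand, lower = 0, upper = 120, i = i)$value)
# pi.vals[1] corresponds to pi(18), ..., pi.vals[48] to pi(65),
# following the i-to-a correspondence in the question's own loop
integrate() passes the extra argument i through to the integrand, so no global i is needed.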

lchoose function in R

My understanding of the lchoose function in R is simply that lchoose(a,b) = log(choose(a,b)).
However, I found that:
temp <- 7.9999993
k <- 8
choose(temp,k)
[1] 0
lchoose(temp,k)
[1] 0
log(choose(temp,k))
[1] -Inf
So lchoose is not log of the choose function output.
Why is this happening?
In the discrete case (i.e. integer n), choose(n,k) computes the number of distinct k-element subsets of a set of n elements, so if k > n you would be counting subsets with more elements than the set itself. Since there are no such subsets, the answer is zero.
For real n the function can still be computed, but it must agree with the discrete meaning at integer values, and R treats an n that is within a small numerical tolerance of an integer as that integer. Your n = 7.9999993 is close enough to 8 to fall into this integer case, which is why the two functions behave as in the discrete setting rather than evaluating the real-valued formula. If you look at the definition of the binomial function with real n (see here) you'll see why the answer comes out zero; I have tried to explain it, hopefully, in an intuitive manner.
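One documented detail worth adding: per R's help page, lchoose(n,k) actually computes log(abs(choose(n,k))), so away from such edge cases the two agree up to the absolute value. A quick check:
choose(10, 3)        # 120
lchoose(10, 3)       # 4.787492, i.e. log(120)
log(choose(10, 3))   # 4.787492

choose(0.5, 2)       # 0.5*(0.5 - 1)/2 = -0.125 (generalized coefficient)
lchoose(0.5, 2)      # log(abs(-0.125)) = -2.079442
log(choose(0.5, 2))  # NaN, with a warning: this is where the abs() matters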

Fast Fourier Transform Pseudocode?

The purpose of the following code is to convert a polynomial from coefficient representation into value representation by dividing it into its odd and even powers and then recursing on the smaller polynomials.
function FFT(A, w)
Input: Coefficient representation of a polynomial A(x) of degree ≤ n-1, where n is a power of 2; w, an nth root of unity.
Output: Value representation A(w^0),...,A(w^(n-1))
if w = 1: return A(1)
express A(x) in the form A_e(x^2) + x*A_o(x^2) /* where A_e holds the coefficients of
the even powers and A_o those of the odd powers */
call FFT(A_e, w^2) to evaluate A_e at even powers of w
call FFT(A_o, w^2) to evaluate A_o at even powers of w
for j = 0 to n-1:
compute A(w^j) = A_e(w^(2j)) + w^j * A_o(w^(2j))
return A(w^0),...,A(w^(n-1))
What is the for loop being used for?
Why is the pseudocode only adding the smaller polynomials? Doesn't it need to subtract them too (to calculate A(-x))? Isn't that what the algorithm is completely based on: adding and subtracting the smaller polynomials to halve the number of evaluation points?*
Why are powers of "w" being evaluated as opposed to "x"?
I am not too sure if this belongs here, since the question is quite mathematical. If you feel this question is off-topic, I would appreciate it if you moved it to a site where you feel it would be more appropriate, rather than just closing it.
*Pseudocode is from Algorithms by S. Dasgupta, page 71.
The loop is the combine step, not the recursion: the recursion happens in the two FFT calls above it, which produce the values of A_e and A_o at the (n/2)th roots of unity, and the loop then assembles the n values A(w^0),...,A(w^(n-1)) from them.
The subtraction is there, just implicit. Since w^(n/2) = -1, we have w^(j+n/2) = -w^j, so for j in the second half of the loop the term w^j * A_o(w^(2j)) enters with a minus sign; running j over all of 0..n-1 therefore produces both the "plus" and the "minus" combinations you are asking about.
Powers of w are evaluated, rather than arbitrary x, because a value representation needs A at n points, and choosing those points to be the nth roots of unity is what makes the even/odd split pay off: squaring the points w^0,...,w^(n-1) yields only n/2 distinct points, so each half-size subproblem is again an FFT of the same form.
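To make the combine step concrete, here is a direct transcription of the pseudocode into R (a sketch: it assumes the input length is a power of 2 and uses w = exp(2*pi*i/n), so it matches R's built-in fft() up to the sign convention chosen for the root):
fft_rec <- function(A) {
  n <- length(A)
  if (n == 1) return(as.complex(A))    # w = 1: a degree-0 polynomial, so A(1)
  w  <- exp(2i * pi / n)               # principal nth root of unity
  Ae <- fft_rec(A[seq(1, n, by = 2)])  # even-power coefficients, at powers of w^2
  Ao <- fft_rec(A[seq(2, n, by = 2)])  # odd-power coefficients, at powers of w^2
  vals <- complex(n)
  for (j in 0:(n - 1)) {               # the combine loop from the pseudocode
    idx <- (j %% (n/2)) + 1            # w^(2j) cycles through only n/2 points
    vals[j + 1] <- Ae[idx] + w^j * Ao[idx]  # for j >= n/2, w^j = -w^(j - n/2)
  }
  vals
}
round(fft_rec(c(1, 2, 3, 4)), 10)      # compare with fft(c(1, 2, 3, 4))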

Curve fitting: Find the smoothest function that satisfies a list of constraints

Consider the set of non-decreasing surjective (onto) functions from (-inf,inf) to [0,1].
(Typical CDFs satisfy this property.)
In particular, for any real number x, 0 <= f(x) <= 1.
The logistic function is perhaps the most well-known example.
We are now given some constraints in the form of a list of x-values and for each x-value, a pair of y-values that the function must lie between.
We can represent that as a list of {x,ymin,ymax} triples such as
constraints = {{0, 0, 0}, {1, 0.00311936, 0.00416369}, {2, 0.0847077, 0.109064},
{3, 0.272142, 0.354692}, {4, 0.53198, 0.646113}, {5, 0.623413, 0.743102},
{6, 0.744714, 0.905966}}
Graphically that looks like this:
[plot of the constraints as error bars (source: yootles.com)]
We now seek a curve that respects those constraints.
For example:
[plot of an example curve satisfying the constraints (source: yootles.com)]
Let's first try a simple interpolation through the midpoints of the constraints:
mids = ({#1, Mean[{#2,#3}]}&) @@@ constraints
f = Interpolation[mids, InterpolationOrder->0]
Plotted, f looks like this:
[plot of f (source: yootles.com)]
That function is not surjective. Also, we'd like it to be smoother.
We can increase the interpolation order but now it violates the constraint that its range is [0,1]:
[plot of a higher-order interpolation leaving [0,1] (source: yootles.com)]
The goal, then, is to find the smoothest function that satisfies the constraints:
Non-decreasing.
Tends to 0 as x approaches negative infinity and tends to 1 as x approaches infinity.
Passes through a given list of y-error-bars.
The first example I plotted above seems to be a good candidate but I did that with Mathematica's FindFit function assuming a lognormal CDF.
That works well in this specific example but in general there need not be a lognormal CDF that satisfies the constraints.
I don't think you've specified enough criteria to make the desired CDF unique.
If the only criteria that must hold are:
CDF must be "fairly smooth" (see below)
CDF must be non-decreasing
CDF must pass through the "error bar" y-intervals
CDF must tend toward 0 as x --> -Infinity
CDF must tend toward 1 as x --> Infinity.
then perhaps you could use Monotone Cubic Interpolation.
This will give you a C^1 (continuously differentiable) function which,
unlike cubic splines, is guaranteed to be monotone when given monotone data.
This leaves open the question of exactly what data you should use to generate the
monotone cubic interpolation. If you take the center point (mean) of each error
bar, are you guaranteed that the resulting data points are monotonically
increasing? If not, you might as well make some arbitrary choice to guarantee
that the points you select are monotonically increasing (since the criteria do not force the solution to be unique).
Now what to do about the last data point? Is there an X which is guaranteed to
be larger than any x in the constraints data set? Perhaps you can again make an
arbitrary choice of convenience and pick some very large X and put (X,1) as the
final data point.
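For illustration, here is a minimal R sketch of this suggestion (the thread is otherwise Mathematica; splinefun's "monoH.FC" method is Fritsch-Carlson monotone cubic interpolation, and the anchor points (-5, 0) and (12, 1) are arbitrary placeholder choices of the kind described above):
x <- c(-5, 0, 1, 2, 3, 4, 5, 6, 12)
y <- c(0, 0,                           # anchors and error-bar midpoints
       mean(c(0.00311936, 0.00416369)),
       mean(c(0.0847077,  0.109064)),
       mean(c(0.272142,   0.354692)),
       mean(c(0.53198,    0.646113)),
       mean(c(0.623413,   0.743102)),
       mean(c(0.744714,   0.905966)),
       1)
g <- splinefun(x, y, method = "monoH.FC")  # monotone, C^1
curve(g, -5, 12)   # non-decreasing and within [0,1] between the anchors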
Comment 1: Your problem can be broken into 2 sub-problems:
Given exact points (x_i, y_i) through which the CDF must pass, how do you generate the CDF? I suspect there are infinitely many possible solutions, even with the infinite-smoothness constraint.
Given y-error-bars, how should you pick the (x_i, y_i)? Again, there are infinitely many possible solutions. Some additional criteria may need to be added to force a unique choice, and they would probably make the problem even harder than it currently is.
Comment 2: Here is a way to use monotonic cubic interpolation, and satisfy criteria 4 and 5:
The monotonic cubic interpolation (let's call it f) maps R --> R.
Let CDF(x) = exp(-exp(f(x))). Then CDF: R --> (0,1). If we could find the appropriate f, then by defining CDF this way, we could satisfy criteria 4 and 5.
To find f, transform the CDF constraints (x_0,y_0),...,(x_n,y_n) using the transformation xhat_i = x_i, yhat_i = log(-log(y_i)). This is the inverse of the CDF transformation. If the y_i's were increasing, then the yhat_i's are decreasing.
Now apply monotone cubic interpolation to the (x_hat,y_hat) data points to generate f. Then finally, define CDF(x) = exp(-exp(f(x))). This will be a monotonically increasing function from R --> (0,1), which passes through the points (x_i,y_i).
This, I think, satisfies all the criteria 2--5. Criteria 1 is somewhat satisfied, though there certainly could exist smoother solutions.
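And a matching R sketch of this transform-then-interpolate construction (midpoints again stand in for the chosen (x_i, y_i); the exact y = 0 at x = 0 is nudged to a small positive placeholder value, since log(-log(y)) needs y strictly inside (0,1)):
x <- c(0, 1, 2, 3, 4, 5, 6)
y <- c(1e-6,                           # placeholder for the exact 0 at x = 0
       mean(c(0.00311936, 0.00416369)),
       mean(c(0.0847077,  0.109064)),
       mean(c(0.272142,   0.354692)),
       mean(c(0.53198,    0.646113)),
       mean(c(0.623413,   0.743102)),
       mean(c(0.744714,   0.905966)))
f   <- splinefun(x, log(-log(y)), method = "monoH.FC")  # f is decreasing
cdf <- function(t) exp(-exp(f(t)))                      # maps R into (0,1)
cdf(x)             # recovers the chosen y values at the knots
curve(cdf, 0, 6)   # monotone and strictly inside (0,1)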
I have found a solution that gives reasonable results for a variety of inputs.
I start by fitting a model -- once to the low ends of the constraints, and again to the high ends.
I'll refer to the mean of these two fitted functions as the "ideal function".
I use this ideal function to extrapolate to the left and to the right of where the constraints end, as well as to interpolate between any gaps in the constraints.
I compute values for the ideal function at regular intervals, including all the constraints, from where the function is nearly zero on the left to where it's nearly one on the right.
At the constraints, I clip these values as necessary to satisfy the constraints.
Finally, I construct an interpolating function that goes through these values.
My Mathematica implementation follows.
First, a couple helper functions:
(* Distance from x to the nearest member of list l. *)
listdist[x_, l_List] := Min[Abs[x - #] & /@ l]
(* Return a value x for the variable var such that expr/.var->x is at least (or
at most, if dir is -1) t. *)
invertish[expr_, var_, t_, dir_:1] := Module[{x = dir},
While[dir*(expr /. var -> x) < dir*t, x *= 2];
x]
And here's the main function:
(* Return a non-decreasing interpolating function that maps from the
reals to [0,1] and that is as close as possible to expr[var] without
violating the given constraints (a list of {x,ymin,ymax} triples).
The model, expr, will have free parameters, params, so first do a
model fit to choose the parameters to satisfy the constraints as well
as possible. *)
cfit[constraints_, expr_, params_, var_] :=
Block[{xlist,bots,tops,loparams,hiparams,lofit,hifit,xmin,xmax,gap,aug,bests},
xlist = First /@ constraints;
bots = Most /@ constraints; (* bottom points of the constraints *)
tops = constraints /. {x_, _, ymax_} -> {x, ymax};
(* fit a model to the lower bounds of the constraints, and
to the upper bounds *)
loparams = FindFit[bots, expr, params, var];
hiparams = FindFit[tops, expr, params, var];
lofit[z_] = (expr /. loparams /. var -> z);
hifit[z_] = (expr /. hiparams /. var -> z);
(* find x-values where the fitted function is very close to 0 and to 1 *)
{xmin, xmax} = {
Min@Append[xlist, invertish[expr /. hiparams, var, 10^-6, -1]],
Max@Append[xlist, invertish[expr /. loparams, var, 1-10^-6]]};
(* the smallest gap between x-values in constraints *)
gap = Min[(#2 - #1 &) @@@ Partition[Sort[xlist], 2, 1]];
(* augment the constraints to fill in any gaps and extrapolate so there are
constraints everywhere from where the function is almost 0 to where it's
almost 1 *)
aug = SortBy[Join[constraints, Select[Table[{x, lofit[x], hifit[x]},
{x, xmin,xmax, gap}],
listdist[#[[1]],xlist]>gap&]], First];
(* pick a y-value from each constraint that is as close as possible to
the mean of lofit and hifit *)
bests = ({#1, Clip[(lofit[#1] + hifit[#1])/2, {#2, #3}]} &) @@@ aug;
Interpolation[bests, InterpolationOrder -> 3]]
For example, we can fit to a lognormal, normal, or logistic function:
g1 = cfit[constraints, CDF[LogNormalDistribution[mu,sigma], z], {mu,sigma}, z]
g2 = cfit[constraints, CDF[NormalDistribution[mu,sigma], z], {mu,sigma}, z]
g3 = cfit[constraints, 1/(1 + c*Exp[-k*z]), {c,k}, z]
Here's what those look like for my original list of example constraints:
[plot of the three fitted curves with the constraints (source: yootles.com)]
The normal and logistic are nearly on top of each other and the lognormal is the blue curve.
These are not quite perfect.
In particular, they aren't quite monotone.
Here's a plot of the derivatives:
Plot[{g1'[x], g2'[x], g3'[x]}, {x, 0, 10}]
[plot of the derivatives (source: yootles.com)]
That reveals some lack of smoothness as well as the slight non-monotonicity near zero.
I welcome improvements on this solution!
You can try to fit a Bézier curve through the midpoints. Specifically, I think you want a C^2 continuous curve.
