Creating function from jacobian in Maxima - substitution

I want to use Maxima to do linear stability analysis as function of r:
f(x):=rx + x^3 - x^5
A:solve(f(x)=0,x)
J:jacobian([f(x)],[x])
Now, for each element in A, I want to check the sign of J as a function of r. In general, I want a function of r that tells me whether there exists any eigenvalue of J with positive real part.

Maybe you know this already, but: multiplication in Maxima is indicated by an asterisk. So you have to write:
f(x):=r*x + x^3 - x^5;
I don't see any problem with your approach so far. The Jacobian is a 1 by 1 matrix, so it is trivial to compute the eigenvalue. Then substitute the solutions for x into it and look at the real part (function realpart).
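For example, a rough sketch in Maxima (the equations returned by solve have the form x = ..., so subst can apply each one directly to the single Jacobian entry):

f(x) := r*x + x^3 - x^5;
A : solve(f(x) = 0, x);
J : jacobian([f(x)], [x]);
/* real part of the single eigenvalue at each fixed point, as a function of r */
stab : map(lambda([eq], realpart(subst(eq, J[1,1]))), A);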


How can I “translate” a statistical model defined on paper to the computer using R?

I initially posted this question on stats.stackexchange.com, but it was closed for being focused on programming. Hopefully, I can get some help here.
I will not put many theoretical details here to make it simple, but my final goal is to implement a Hidden Markov Model using R.
Although I am fine with the theoretical model construction, when I tried to implement it, I realized that I do not know basic things about computational statistics. My question goes in this direction.
Let $X$ and $Y$ be random variables such that $X \in \{0, 1\}$ with $P(X = 0) = p$ and $P(X = 1) = 1 - p$, and $Y \mid X = x \sim N(x, \sigma^2)$. If $P$ denotes distribution, how can I compute
$$P(X = x \mid Y = y) = \frac{P(X = x)\, P(Y = y \mid X = x)}{\sum_{x'} P(X = x')\, P(Y = y \mid X = x')}$$
using R?
I mean, what is the exact meaning of this multiplication of distributions (one discrete and one continuous)? How can I do this using R? The answer is obviously a function of $y$, but how is it represented in my code?
Is there any change if $Y$ is also discrete? How would it affect the implemented code?
I know my questions are not very specific, but I am very lost on how to start. My goal with this question is to understand how I can "translate" what I have written on paper to the computer.
Translation
The equations describe how to compute the probability distribution of X given an observation of Y=y and values for parameters p and sigma. Ultimately, you want to implement a function p_X_given_Y that takes a value of Y and returns a probability distribution for X. A good place to start is to implement the two functions used in the RHS of the expression. Something like,
p_X <- function (x, p=0.5) { switch(as.character(x), "0"=p, "1"=1-p, 0) }
p_Y_given_X <- function (y, x, sigma=1) { dnorm(y, x, sd=sigma) }
Note that p and sigma are picked arbitrarily here. These functions can then be used to define the p_X_given_Y function:
p_X_given_Y <- function (y) {
  # numerators: for each x \in X
  ps <- sapply(c("0" = 0, "1" = 1),
               function (x) { p_X(x) * p_Y_given_X(y, x) })
  # divide out denominator
  ps / sum(ps)
}
which can be used like:
> p_X_given_Y(y=0)
# 0 1
# 0.6224593 0.3775407
> p_X_given_Y(y=0.5)
# 0 1
# 0.5 0.5
> p_X_given_Y(y=2)
# 0 1
# 0.1824255 0.8175745
These numbers should make intuitive sense (given p=0.5): Y=0 is more likely to come from X=0, Y=0.5 is equally likely to be X=0 or X=1, etc. This is only one way of implementing it, where the idea is to return the "distribution of X", which in this case is simply a named numeric vector whose names ("0", "1") correspond to the support of X and whose values correspond to the probability masses.
Some alternative implementations might be:
a p_X_given_Y(x,y) that also takes a value for x and returns the corresponding probability mass
a p_X_given_Y(y) that returns another function that takes an x argument and returns the corresponding probability mass (i.e., the probability mass function)
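For instance, a minimal sketch of the second alternative, reusing p_X and p_Y_given_X from above (p_X_given_Y_fn is a hypothetical name):

p_X_given_Y_fn <- function (y) {
  ps <- sapply(c("0" = 0, "1" = 1),
               function (x) { p_X(x) * p_Y_given_X(y, x) })
  ps <- ps / sum(ps)
  # return the probability mass function of X | Y = y
  function (x) unname(ps[as.character(x)])
}

pmf <- p_X_given_Y_fn(y = 0)
pmf(0)  # ~0.622, matching the first row above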

Solving system of ODEs in vector/matrix form in R (with deSolve?)

So I want to ask whether there's any way to define and solve a system of differential equations in R using matrix notation.
I know usually you do something like
lotka_volterra <- function(t, a, b, c, d, x, y) {
  dx <- a*x + b*x*y
  dy <- d*x*y - c*y
  return(list(c(dx, dy)))
}
But I want to do
lotka_volterra <- function(t, M, v, x) {
  dx <- x * (M %*% x) + v * x
  return(list(dx))
}
where x is a vector of length 2, M is a 2×2 matrix, and v is a vector of length 2. I.e., I want to define the system of differential equations using matrix/vector notation.
This is important because my actual system is significantly more complex: I don't want to define 11 separate differential equations with 100+ scalar parameters; I'd rather define 1 differential equation with 1 matrix of interaction parameters and 1 vector of growth parameters.
I can define the function as above, but when it comes to using the ode function from deSolve, it expects parms to be passed as a named vector of parameters, which of course does not accept non-scalar values.
Is this at all possible in R with deSolve, or another package? If not, I'll look into perhaps using MATLAB or Python, though I don't know how it's done in either of those languages at present.
Many thanks,
H
With my low reputation (points), I apologize for posting this as an answer when it should probably just be a comment. Going back: have you tried this link? In addition, as an alternative approach, have you tried MANOPT, a MATLAB toolbox? It is open source, just like R. I came across MANOPT in a paper whose problem boils down to solving a system of ODEs involving purely matrices.
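That said, deSolve can in fact handle this directly: parms is passed through to your function unchanged, so it can be a list containing a matrix and a vector rather than a flat named vector of scalars. A minimal sketch, with made-up values for M and v:

library(deSolve)

lv <- function (t, x, parms) {
  with(parms, {
    dx <- x * (M %*% x) + v * x  # matrix/vector form of the right-hand side
    list(c(dx))
  })
}

M <- matrix(c(0, -0.5, 0.5, 0), nrow = 2)  # made-up interaction matrix
v <- c(1, -1)                              # made-up growth rates
out <- ode(y = c(1, 1), times = seq(0, 10, by = 0.1),
           func = lv, parms = list(M = M, v = v))
head(out)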

Find the second derivative of a log likelihood function

I'm interested in finding the values of the second derivatives of the log-likelihood function for logistic regression with respect to all of my m predictor variables.
Essentially I want to make a vector of the $m$ values $\partial^2 L / \partial \beta_j^2$, where $j$ goes from 1 to $m$.
I believe the second derivative should be $-\sum_{i=1}^{n} x_{ij}^2 \, e^{x_i \beta} / (1 + e^{x_i \beta})^2$ and I am trying to code it in R. I did something dumb when trying to code it and was wondering if there was some sort of sapply function I could use to do it more easily.
Here's the code I tried (I know the sum in the for loop doesn't really do anything, so I wasn't sure how to sum those values).
for (j in 1:m) {
  for (i in 1:n) {
    d2.l[j] <- -1*(sum((x.center[i,j]^2)*(exp(logit[i])/((1 + exp(logit[i])^2)))))
  }
}
And logit is just a vector consisting of Xβ if that's not clear.
I'm hazy on the maths (and it's hard to read LaTeX), but purely on the programming side: if logit is a vector with indices i=1,...,n and x.center is an n×m matrix, then
for (j in 1:m)
  d2.l[j] <- -sum( x.center[,j]^2 * exp(logit)/(1+exp(logit))^2 )
where the sum sums over i.
If you want to do it "vector-ish", you can take advantage of the fact that matrix * vector operations (your x.center * exp(logit)/...) happen column-wise in R, which suits your equation:
-colSums(x.center^2 * exp(logit)/(1+exp(logit))^2)
For what it's worth, although the latter is "slicker", I will often use the explicit loop (as in the first example), purely for readability. Otherwise, when I come back in a month's time, I get very confused about my i's and j's and what is being summed over when.
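As a quick self-contained sanity check that the two versions agree (the names x.center, logit, m, n are from the question; the data here is made up):

set.seed(1)
n <- 5; m <- 3
x.center <- matrix(rnorm(n * m), n, m)
logit <- as.vector(x.center %*% rnorm(m))  # X beta as a plain vector
w <- exp(logit) / (1 + exp(logit))^2

loop <- numeric(m)
for (j in 1:m)
  loop[j] <- -sum(x.center[, j]^2 * w)

vec <- -colSums(x.center^2 * w)  # w recycles down the columns
all.equal(loop, vec)             # TRUE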

Reformulating a quadratic program suitable for R

My problem is one which should be quite common in statistical inference:
min{(P - k)'S(P - k)} subject to k >= 0
So my choice variable is k, a 3x1 vector. The 3x1 vector P and 3x3 matrix S are known. Is it possible to reformulate this problem so I can use R's solve.QP quadratic programming solver? This solver requires the problem to be in the form
min{-d'b + 0.5 b' D b} subject to A'b >= b_0.
So here the choice vector is b. Is there a way I can make my problem fit into solve.QP? Thanks so much for any help.
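For what it's worth, the reformulation is direct. Expanding the objective gives (P - k)'S(P - k) = P'SP - 2P'Sk + k'Sk, and the constant P'SP can be dropped, leaving 0.5 k'(2S)k - (2SP)'k. So, assuming S is symmetric positive definite (which solve.QP requires of Dmat), set Dmat = 2S, dvec = 2SP, and encode k >= 0 as Amat = I, bvec = 0. A sketch with made-up values:

library(quadprog)

set.seed(1)
P <- c(1, -2, 3)                     # made-up 3x1 vector
S <- crossprod(matrix(rnorm(9), 3))  # made-up symmetric p.d. matrix

sol <- solve.QP(Dmat = 2 * S, dvec = 2 * as.vector(S %*% P),
                Amat = diag(3), bvec = rep(0, 3))
sol$solution  # the minimizing k, with all components >= 0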

univariate nonlinear optimization with quadratic constraint in R

I have a quadratic function f, where f = function (x) {2+.1*x+.23*(x*x)}. Let's say I have another quadratic function g, where g = function (x) {3+.4*x-.60*(x*x)}.
Now, I want to maximize f given the constraints 1. g>0 and 2. 600<x<650
I have tried the packages optim, constrOptim, and optimize. optimize does one-dimensional optimization, but without constraints, and I couldn't understand constrOptim. I need to do this using R. Please help.
P.S. In this example the values may be erratic, as I have given two random quadratic functions, but basically I want to maximize a quadratic function subject to a quadratic constraint.
If you solve g(x)=0 for x by the usual quadratic formula, that just gives you another set of bounds on x. If your x^2 coefficient is negative, then g(x) > 0 between the solutions; otherwise g(x) > 0 outside the solutions, i.e. within (-Inf, x1) and (x2, Inf).
Here, g(x) > 0 for -1.927 < x < 2.59, so your two constraints cannot both be satisfied (g(x) is LESS THAN 0 for 600 < x < 650).
But supposing your second condition was 1 < x < 5, then you'd just combine the solution from g(x)>0 with that interval to get 1 < x < 2.59, and then maximise f in that interval using standard univariate optimisation.
And you don't even need to run an optimisation algorithm. Your target f is quadratic. If the coefficient of x^2 is positive, the maximum is going to be at one of your limits of x, so you only have a small number of values to try. If the coefficient of x^2 is negative, then the maximum is either at a limit or at the point where f(x) peaks (solve f'(x)=0), if that is within your limits.
So you can do this exactly: there are just a few conditions to test, then some intervals to compute, and then some values of f at those interval limits to calculate.
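As a sketch of that recipe in R, using the toy f and g above and the hypothetical interval 1 < x < 5 (not the original 600 < x < 650, which is infeasible):

f <- function (x) 2 + .1*x + .23*x^2
# roots of g(x) = 3 + .4*x - .60*x^2, coefficients in increasing order
r <- sort(Re(polyroot(c(3, .4, -.60))))  # approx -1.927 and 2.594
lo <- max(1, r[1]); hi <- min(5, r[2])   # feasible interval: (1, 2.594)
# f has a positive x^2 coefficient, so its maximum over an interval
# is attained at an endpoint; just compare the two
cand <- c(lo, hi)
cand[which.max(f(cand))]                 # ~2.594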
