I have the following Julia code that evaluates the log-likelihood function for a spatial autoregressive (SAR) model:
function like_sar2(betas,rho,sige,y,x,W)
    n = length(y)
    A = speye(n) - rho*W
    e = y - x*betas - rho*sparse(W)*y
    epe = e'*e
    tmp2 = 1/(2*sige)
    llike = -(n/2)*log(pi) - (n/2)*log(sige) + log(det(A)) - tmp2*epe
end
I am trying to maximize this function, but I'm not sure how to pass the differently sized function inputs so that the Optim.jl package will accept them. I have tried the following:
optimize(like_sar2,[betas;rho;sige;y;x;W],BFGS())
and
optimize(like_sar2,tuple(betas,rho,sige,y,x,W),BFGS())
In the first case, the concatenation in brackets fails with a dimension mismatch, and in the second, Optim doesn't accept tuples.
I'd like to maximize this likelihood function and have the optimizer return the numerical Hessian matrix (via the Optim options) so that I can compute t-statistics for the parameters.
If there is an easier way to obtain the numerical Hessian for such a function I'd use that, but it appears that packages like ForwardDiff only accept functions of a single (vector) input.
Any help would be greatly appreciated!
Not 100% sure I correctly understand how your function works, but it seems to me like you're using the likelihood to estimate the coefficient vector beta, with the other input variables fixed. The way to do this would be to amend the function as follows:
using Optim
# Initialize some parameters
coeffs = rand(10)
rho = 0.1
ys = rand(10)
xs = rand(10,10)
Wmat = rand(10,10)
sige=0.5
# Construct likelihood with parameters fixed at pre-defined values
function like_sar2(β::Vector{Float64}, ρ=rho, σε=sige, y=ys, x=xs, W=Wmat)
    n = length(y)
    A = speye(n) - ρ*W
    ε = y - x*β - ρ*sparse(W)*y
    epe = ε'*ε
    tmp2 = 1/(2*σε)
    llike = -(n/2)*log(π) - (n/2)*log(σε) + log(det(A)) - tmp2*epe
end
# Optimize, with starting value zero for all beta coefficients
# (note: optimize() minimizes, so pass the negative of the log-likelihood to maximize it)
optimize(β -> -like_sar2(β), zeros(10), NelderMead())
If you need to optimize over more than just your beta parameters (in the autoregressive models I've worked with, the autocorrelation parameter was often estimated jointly with the other coefficients), you can do this by packing it into the beta vector and unpacking it within the function, like so:
append!(coeffs,rho)
function like_sar3(coeffs::Vector{Float64}, σε=sige, y=ys, x=xs, W=Wmat)
    β = coeffs[1:10]; ρ = coeffs[11]
    n = length(y)
    A = speye(n) - ρ*W
    ε = y - x*β - ρ*sparse(W)*y
    epe = ε'*ε
    tmp2 = 1/(2*σε)
    llike = -(n/2)*log(π) - (n/2)*log(σε) + log(det(A)) - tmp2*epe
end
The key is that you end up with one vector of inputs to pass into your function.
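To get at the numerical Hessian (and hence t-statistics) asked about in the question, the same packed-vector trick helps: once the likelihood is a function of a single vector, it can be differentiated with ForwardDiff. The following is only a sketch, assuming like_sar3 as defined above (so sige stays fixed); if ForwardDiff struggles with the sparse-matrix pieces, Calculus.hessian (finite differences) is a drop-in replacement.
using Optim, ForwardDiff
negll(θ) = -like_sar3(θ)                      # Optim minimizes, so negate the log-likelihood
res   = optimize(negll, zeros(11), BFGS())    # 10 betas plus rho
θhat  = Optim.minimizer(res)
H     = ForwardDiff.hessian(negll, θhat)      # numerical Hessian of the negative log-likelihood
se    = sqrt.(diag(inv(H)))                   # asymptotic standard errors
tstat = θhat ./ se                            # t-statistics for [β; ρ]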
I'm trying to use Julia (0.5) and Convex.jl (with the ECOS solver) to work out, for a portfolio of 2 stocks, how to distribute my allocations (in percent) across both stocks so that I maximize my portfolio return and minimize my risk (standard deviation of returns). In other words, I want to MAXIMIZE the Sharpe ratio, which is computed from the percentages I hold in each of the 2 stocks, and have the solver figure out the optimal allocation (it should tell me I need x% of stock 1 and (1-x)% of stock 2). The only real constraint is that the percent allocations sum to 100%. The code below runs, but does not give me the optimal weights/allocations I'm expecting (which are 36.3% for Supertech and 63.7% for Slowpoke). The solver instead comes back with 50/50.
My intuition is that I either have the objective function modeled incorrectly for the solver, or I need to do more with constraints. I don't have a good grasp on convex optimization, so I'm winging it. Also, my objective function uses the .value attribute of the variable to get a numeric output, and I suspect I should be working with the Variable expression object itself instead.
The question is: is what I'm trying to achieve something the Convex solver is designed for, and I just need to model the objective function and constraints better, or do I have to iterate over the weights and brute-force it?
Code with comments:
using Convex, ECOS
Supertech = [-.2; .1; .3; .5];
Slowpoke = [.05; .2; -.12; .09];
A = reshape([Supertech; Slowpoke],4,2)
mlen = size(A)[1]
R = vec(mean(A,1))
n=rank(A)
w = Variable(n)
c1 = sum(w) == 1;
λ = .01
w.value = [λ; 1-λ]
sharpe_ratio = sqrt(mlen) * (w.value' * R) / sqrt(sum(vec(w.value' .* w.value) .* vec(cov(A,1,false))))
# sharpe_ratio will be maximized at 1.80519 when w.value = [λ, 1-λ] where λ = .363
p = maximize(sharpe_ratio,c1);
solve!(p, ECOSSolver(verbose = false)); # when verbose=true, it says the problem is 'degenerate', presumably because I don't have enough constraints...
println(w.value) # expecting to get [.363; .637] here but I get [0.5; 0.5]
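For reference, the brute-force fallback mentioned in the question can be sketched without Convex.jl at all: sweep λ over a grid and evaluate the same Sharpe-ratio formula. This assumes A, R and mlen as defined above and reuses the same (Julia 0.5-era) cov call; it is only an illustration, not a fix for the Convex.jl model.
Σ = cov(A, 1, false)
best_λ, best_sr = 0.0, -Inf
for λ in 0:0.001:1
    w = [λ; 1 - λ]                            # allocation to Supertech and Slowpoke
    sr = sqrt(mlen) * dot(w, R) / sqrt(sum((w * w') .* Σ))
    if sr > best_sr
        best_λ, best_sr = λ, sr
    end
end
println((best_λ, best_sr))                    # expected to land near λ ≈ 0.363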
I'm trying to get a 95% confidence interval around some predicted values, but haven't been able to work out how.
Basically, I estimated a growth curve like this:
set.seed(123)
dat=data.frame(size=rnorm(50,10,3),age=rnorm(50,5,2))
S <- function(t,ts,C,K) ((C*K)/(2*pi))*sin(2*pi*(t-ts))
sommers <- function(t,Linf,K,t0,ts,C)
  Linf*(1-exp(-K*(t-t0)-S(t,ts,C,K)+S(t0,ts,C,K)))
model <- nls(size~sommers(age,Linf,K,t0,ts,C),data=dat,
             start=list(Linf=10,K=4.7,t0=2.2,C=0.9,ts=0.1))
I have independent size measurements for which I would like to predict the age. The inverse of the function is not entirely straightforward, so I calculated it like this:
model.out=coef(model)
S.out <- function(t)
  ((model.out[[4]]*model.out[[2]])/(2*pi))*sin(2*pi*(t-model.out[[5]]))
sommers.out <- function(t)
  model.out[[1]]*(1-exp(-model.out[[2]]*(t-model.out[[3]])-S.out(t)+S.out(model.out[[3]])))
inverse = function (f, lower = -100, upper = 100) {
  function (y) uniroot((function (x) f(x) - y), lower = lower, upper = upper)[1]
}
sommers.inverse = inverse(sommers.out, 0, 25)
x= sommers.inverse(10) #this works with my complete dataset, but not with this fake one
Although this works fine, I need the 95% confidence interval around this estimate (x). For linear models there is, for example, predict(..., interval = "confidence"). I could also somehow bootstrap the function to get the quantiles associated with the parameters (though I didn't find how), and then use the extremes of those to calculate the maximum and minimum predictable values. But that doesn't really look like the right way of doing this...
Any help would be greatly appreciated.
EDIT after answer:
So this worked (explained in Ben Bolker's book, see answer):
library(MASS)  # for mvrnorm
vmat = mvrnorm(1000, mu = coef(mfit), Sigma = vcov(mfit))
dist = numeric(1000)
for (i in 1:1000) {dist[i] = sommers_inverse(9.938,vmat[i,])}
quantile(dist, c(0.025, 0.975))
On the rather bad fake data I gave, this of course works rather horribly. But on the real data (which I have trouble recreating here), this is OK!
Unless I'm mistaken, you're going to have to use either regular (parametric) bootstrapping or a method called "population predictive intervals" (e.g., see section 5 of chapter 7 of Bolker 2008), which assumes that the sampling distribution of your parameters is multivariate Normal. However, I think you may have bigger problems, unless I've somehow messed up your model in adapting it ...
Generate data (note that random data may actually be bad for testing your model - see below ...)
set.seed(123)
dat <- data.frame(size=rnorm(50,10,3),age=rnorm(50,5,2))
S <- function(t,ts,C,K) ((C*K)/(2*pi))*sin(2*pi*(t-ts))
sommers <- function(t,Linf,K,t0,ts,C)
  Linf*(1-exp(-K*(t-t0)-S(t,ts,C,K)+S(t0,ts,C,K)))
Plot the data and the initial curve estimate:
plot(size~age,data=dat,ylim=c(0,16))
agevec <- seq(0,10,length=1001)
lines(agevec,sommers(agevec,Linf=10,K=4.7,t0=2.2,ts=0.1,C=0.9))
I had trouble with nls so I used minpack.lm::nls.lm, which is slightly more robust. (There are other options here, e.g. calculating the derivatives and providing the gradient function, or using AD Model Builder or Template Model Builder, or using the nls2 package.)
For nls.lm we need a function that returns the residuals:
sommers_fn <- function(par,dat) {
  with(c(as.list(par),dat), size - sommers(age,Linf,K,t0,ts,C))
}
library(minpack.lm)
mfit <- nls.lm(fn=sommers_fn,
               par=list(Linf=10,K=4.7,t0=2.2,C=0.9,ts=0.1),
               dat=dat)
coef(mfit)
## Linf K t0 C ts
## 10.6540185 0.3466328 2.1675244 136.7164179 0.3627371
Here's our problem:
plot(size~age,data=dat,ylim=c(0,16))
lines(agevec,sommers(agevec,Linf=10,K=4.7,t0=2.2,ts=0.1,C=0.9))
with(as.list(coef(mfit)), {
  lines(agevec,sommers(agevec,Linf,K,t0,ts,C),col=2)
  abline(v=t0,lty=2)
  abline(h=c(0,Linf),lty=2)
})
With this kind of fit, the results of the inverse function are going to be extremely unstable: the fitted curve is non-monotonic, so the inverse is one-to-many, and the number of solutions depends sensitively on the parameter values ...
sommers_pred <- function(x,pars) {
  with(as.list(pars), sommers(x,Linf,K,t0,ts,C))
}
sommers_pred(6,coef(mfit)) ## s(6)=9.93
sommers_inverse <- function (y, pars, lower = -100, upper = 100) {
  uniroot(function(x) sommers_pred(x,pars) - y, c(lower, upper))$root
}
sommers_inverse(9.938, coef(mfit)) ## 0.28
If I pick my interval very carefully I can get back the correct answer ...
sommers_inverse(9.938, coef(mfit), 5.5, 6.2)
Maybe your model will be better behaved with more realistic data. I hope so ...
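For completeness, here is a rough sketch of the parametric-bootstrap alternative mentioned at the top of this answer (the multivariate-Normal "population" approach is what the question's EDIT shows). It reuses sommers_fn, sommers_pred and sommers_inverse from above; the residual standard deviation and the refit-per-replicate loop are my own additions, so treat the details as assumptions.
set.seed(101)
resid_hat <- sommers_fn(coef(mfit), dat)      ## residuals at the fitted parameters
sigma_hat <- sd(resid_hat)
nboot <- 1000
ages_boot <- rep(NA_real_, nboot)
for (i in seq_len(nboot)) {
  ## simulate new sizes from the fitted curve plus Gaussian noise, then refit
  size_sim <- sommers_pred(dat$age, coef(mfit)) + rnorm(nrow(dat), 0, sigma_hat)
  fit_i <- try(nls.lm(fn = sommers_fn,
                      par = as.list(coef(mfit)),
                      dat = data.frame(age = dat$age, size = size_sim)),
               silent = TRUE)
  if (!inherits(fit_i, "try-error")) {
    ages_boot[i] <- tryCatch(sommers_inverse(9.938, coef(fit_i)),
                             error = function(e) NA_real_)
  }
}
quantile(ages_boot, c(0.025, 0.975), na.rm = TRUE)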
I am trying to calculate the turning point of a few functions for which I have estimated the coefficient and constant from a regression. I'm using the optimize function for this, as my curves are all linear.
My function looks like:
F <- function(x){
  beta * x + alpha
}
Note that beta and alpha are both vectors here. When running the optimisation with optimize, I get the following error:
Error in optimize(F, interval = c(10, 20), lower = (10), :
invalid function value in 'optimize'
Is this because optimize expects a scalar-valued function, so beta and alpha need to be single values rather than vectors? If anyone knows a better way of doing this, please do contribute!
Thank you in advance :)
If the functions are linear, then they will be at a minimum at the lower end of the range when beta >= 0, and at the upper end of the range when beta <= 0 - no need to use optimize().
It's not entirely clear what you're expecting the code to do - if you want it to return an x for each set of parameters, look at optim() instead and have F return the sum, or run optimize on each set of parameters in turn using an apply() function or a loop (a sketch of that follows the example below).
One other thing is that your syntax is a bit wonky - I imagine that you mean:
> F<- function(x){
+ beta* x + alpha
+ }
> alpha <- 1
> beta <- 2
> optimize(F,c(10,20))
$minimum
[1] 10.00006
$objective
[1] 21.00011
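As an illustration of the per-parameter-set approach mentioned above (a sketch only; the alphas and betas here are made-up values standing in for your regression estimates):
alphas <- c(1, 0.5, -2)
betas  <- c(2, -1, 0.3)
results <- mapply(function(a, b) {
  opt <- optimize(function(x) b * x + a, interval = c(10, 20))
  c(xmin = opt$minimum, fmin = opt$objective)
}, alphas, betas)
results   # one column of (xmin, fmin) per (alpha, beta) pair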
I am trying to find the best PDF for continuous data with an unknown distribution, using the density function in R. Now, given a new data point, I want to find the probability density at this point based on the kernel density estimate obtained from the density function's result.
How can I do that?
If your new point will be within the range of values produced by density, it's fairly easy to do -- I'd suggest using approx (or approxfun if you need it as a function) to handle the interpolation between the grid-values.
Here's an example:
set.seed(2937107)
x <- rnorm(10,30,3)
dx <- density(x)
xnew <- 32.137
approx(dx$x,dx$y,xout=xnew)
If we plot the density and the new point we can see it's doing what you need:
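The plot isn't reproduced here, but it can be recreated with something like the following (my own plotting commands, not part of the original answer):
plot(dx, main = "Kernel density estimate")
points(approx(dx$x, dx$y, xout = xnew), col = "red", pch = 19)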
This will return NA if the new value would need to be extrapolated. If you want to handle extrapolation, I'd suggest direct computation of the KDE for that point (using the bandwidth from the KDE you have).
This is one year old, but nevertheless, here is a complete solution. Let's call
d <- density(xs)
and define h = d$bw. Your kernel density estimate is completely determined by
the elements of xs,
the bandwidth h,
the kernel function used.
Given a new value t, you can compute the estimated density y(t) with the following function, which assumes you used Gaussian kernels for the estimation (the default for density).
myKDE <- function(t){
  kernelValues <- rep(0, length(xs))
  for (i in 1:length(xs)) {
    transformed <- (t - xs[i]) / h                                 # standardized distance to each data point
    kernelValues[i] <- dnorm(transformed, mean = 0, sd = 1) / h    # scaled Gaussian kernel
  }
  return(sum(kernelValues) / length(xs))                           # average over data points = KDE at t
}
What myKDE does is evaluate y(t) directly from the definition of the kernel density estimate.
See the documentation for dnorm:
dnorm(data_point, its_mean, its_stdev)
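A quick sanity check (my own illustration, assuming some data xs): myKDE should agree closely with the value interpolated from density()'s grid, as in the earlier answer.
set.seed(1)
xs <- rnorm(100)
d <- density(xs)
h <- d$bw
myKDE(0.5)                          # direct evaluation of the KDE at t = 0.5
approx(d$x, d$y, xout = 0.5)$y      # interpolated value from density()'s grid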