I want to fit a curve; the model is shown in the following picture.
In this model there are four parameters (alpha, beta, gamma, and Rd) to optimize. The data for I and P(I) are already available, and the function DEoptim() from the DEoptim package in R is used. This is my code:
library(DEoptim)
I <-c(1.200,49.600,99.200,148.500,199.300,399.375,598.200,799.500,1099.600,1398.100,1698.600,1844.333)
pn <-c(-0.0495485,0.4166522,0.8954644,1.4592700,1.9931400,2.9114072,3.0808183,3.2427603,3.3916783,3.6078660,4.1020850,4.0947913)
fn.piao <- function(alpha,beta,gamma,Rd){
pn <- (alpha-alpha*beta*par)/(1+gamma*par)-Rd
}
lower <- c(0,0,0,0)
upper <- c(1,1,1,100)
DE.control <- list(itermax=500,NP=100)
DE.piao <- DEoptim(lower,upper,fn=fn.piao,par=par,control=DE.control)
but R shows the following error:
Error in DEoptim(lower, upper, fn = fn.piao, par = par, control =
DE.control) : object is not a matrix
In the paper (K. M. Mullen, 2011), it says the ith element of lower and upper applies to the ith parameter, so I set the vectors lower and upper to represent alpha, beta, gamma, and Rd. I want to ask where my mistake is and how to fix it.
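For reference, DEoptim() takes the objective function as its first argument; that function receives the full parameter vector and must return a single value to be minimized, with extra data passed through the ... arguments. Below is a minimal sketch of that calling convention as a least-squares fit. It reuses the model expression from the code above, and since the picture of the model is not shown here, the expression and the name fn.rss are only illustrative.
library(DEoptim)
# Objective: residual sum of squares between observed pn and the model prediction.
# par = c(alpha, beta, gamma, Rd); the data vectors I and pn are forwarded via `...`.
fn.rss <- function(par, I, pn) {
  alpha <- par[1]; beta <- par[2]; gamma <- par[3]; Rd <- par[4]
  pred <- (alpha - alpha * beta * I) / (1 + gamma * I) - Rd  # expression taken from the question
  sum((pn - pred)^2)
}
DE.piao <- DEoptim(fn = fn.rss, lower = c(0, 0, 0, 0), upper = c(1, 1, 1, 100),
                   control = DEoptim.control(itermax = 500, NP = 100),
                   I = I, pn = pn)
DE.piao$optim$bestmem  # fitted alpha, beta, gamma, Rd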
I am wondering what is wrong with the following R code (R Markdown)? I keep getting an error message for the last line that says "Error in h(x.n, df = N - 2) : unused argument (df = N - 2)". I am very confused because my TA looked at my code and told me that it should run perfectly.
For context, this is the problem I am working on:
library(MASS)
library(tidyverse)
library(hypergeo)
set.seed(1)
rm(list=ls())
N=7
Nsim=10000
rho=0
Sigma=matrix(c(1,rho,rho,1),2,2)
Sigma
mu=c(0,0)
r_vec=matrix(NaN,nrow=1,ncol=Nsim)
# mvrnorm simulates from a multivariate normal distribution: N = 7, correlation
# matrix Sigma. Before, X was fixed, but now it is random, with a formal dependence
# on Y that I can control. Compute rho hat and see if on average it gives the
# correct rho. Check how serious the bias is when the expected value of rho hat
# is not equal to rho. I want a feeling for whether this is something I should
# worry about or not.
for (i in 1:Nsim){
data=mvrnorm(N, mu, Sigma)
r_vec[i]=cor(data[,1],data[,2])
}
mean(r_vec)
update.packages("deSolve")
x.n=seq(-1,1,0.1)
sim_rho0<-function(Nsim,N,rho){
rho=rho
mu=c(0,0)
Sigma=matrix(c(1,rho,rho,1),nrow=2)
r_vec=matrix(NaN,nrow=Nsim)
for (i in 1:Nsim){
data=mvrnorm(N, mu, Sigma)
r_vec[i]=cor(data[,1],data[,2])
}
# here we compute t, which should have a t_{N-2} distribution. This is different here; trying to reconstruct it. Not a mathematical proof. Might be a mistake.
#range of values and plotting density for each one
h<- function(N,rho,x.n){
rho=rho
a <- ((N-2)*(gamma(N-1))*(1-rho^2)^(N-1)/2*(1-x.n^2)^(N-4)/2)/((2*pi)*(sqrt(N-1/2))((1-x.n*rho)^(N-3/2)))
b <- hypergeo(1/2, 1/2, (2*N-1/2), ((x.n*rho)+1)/2)
h2 = a*b
return(h2)
}
t=r_vec*sqrt(N-2)/(1-r_vec^2)
x.n=seq(-1,1,0.1)
y.n= h(N=10, rho=0.8, x.n=x.n)
df=tibble(X=t)
df2=tibble(x=x.n,y=y.n)
ggplot()+geom_histogram(data=df, aes(x=X,y=..density..),binwidth=0.2,
color="black", fill="white")+ geom_line(data = df2, aes(x = x, y = y),
color = "red")+xlim(-5,5)
}
rho=0.8
Nsim=3000
N=10
sim_rho0(Nsim,N,rho)
You've defined the function h with the arguments N, rho, and x.n. Then you call it with the argument df, which h does not have, so you get the error. You need to call h with the correct arguments: don't leave out N and rho, and if the value x.n should be passed to the function argument x.n, specify it by name rather than positionally. I also recommend following a style guide, e.g. https://style.tidyverse.org/.
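To illustrate the argument matching (my example, not part of the original code): h is defined as h <- function(N, rho, x.n), so a call that passes an argument named df fails, while a call that uses h's own argument names works.
h <- function(N, rho, x.n) { N + rho + sum(x.n) }  # stand-in body, same signature as in the question
# h(x.n, df = N - 2)
# Error in h(x.n, df = N - 2) : unused argument (df = N - 2)
y.n <- h(N = 10, rho = 0.8, x.n = seq(-1, 1, 0.1))  # named arguments match the definition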
I am attempting to run a two-state HMM using a lognormal distribution. I have read Michelot and Langrock (2019) regarding choosing starting parameters by inspecting the data in a histogram and then running iterations in parallel, which has worked for my gamma distribution. Identifying the starting parameters for the lognormal distribution is troubling me, however. Do I plot the log of my step length distribution and then attempt to extract starting parameters, or do I use the same starting parameters as for my gamma distribution and rely on stepDist="lnorm"?
My code for the lognormal attempt currently looks like this:
library(parallel)  # detectCores, makeCluster, clusterExport, parLapply
library(moveHMM)   # fitHMM
ncores <- detectCores() - 1
cl <- makeCluster(getOption("cl.cores", ncores))
clusterExport(cl, list("data", "fitHMM"))
niter <- 20
allPar0 <- lapply(as.list(1:niter), function(x) {
stepMean0 <- runif(2,
min = c(x,y),
max = c(y,z))
stepSD0 <- runif(2,
min = c(x,y),
max = c(y,z))
angleMean0 <- c(0, 0)
angleCon0 <- runif(2,
min = c(a,b),
max = c(a,b))
stepPar0 <- c(stepMean0, stepSD0)
anglePar0 <- c(angleMean0, angleCon0)
return(list(step = stepPar0, angle = anglePar0))
})
# Fit the niter models in parallel
logP <- parLapply(cl = cl, X = allPar0, fun = function(par0) {
m <- fitHMM(data = data, nbStates = 2, stepDist = "lnorm", stepPar0 = par0$step,
anglePar0 = par0$angle)
return(m)
})
# Extract likelihoods of fitted models
likelihoodL <- unlist(lapply(logP, function(m) m$mod$minimum))
likelihoodL
# Index of best fitting model (smallest negative log-likelihood)
whichbestpL <- which.min(likelihoodL)
bestL <- logP[[whichbestpL]]
bestL
If I use negative values taken from a plot of the log step lengths of the data, then I get the error:
Error in checkForRemoteErrors(val) :
7 nodes produced errors; first error: Check the step parameters bounds (the initial parameters should be strictly between the bounds of their parameter space).
If I use the same starting parameter values that I used for my gamma distribution, then I get the error:
Error in unserialize(node$con) :
embedded nul in string: 'X\n\0\0\0\003\0\004\002\0\0\003\005\0\0\0'
Please could someone shed some light on how I'm failing at this?
Thank you!
Unfortunately, I can't tell for sure what the problem is from the code you included. If you don't get an error when you run fitHMM outside of parLapply, then it suggests that the problem is in how you choose the values of x, y, and z in your code.
The first parameter of the log-normal distribution can be negative or positive, and it is actually the mean of the logarithm of the step length. So, to find good starting values for this, you should look at a histogram of the log step lengths (e.g., following the dedicated moveHMM vignette). The second parameter is the standard deviation of the log step lengths, and this should be strictly positive (but could also be chosen based on the spread of the histogram of log step lengths).
To summarise, you should choose all the initial values based on plots of the log step lengths (rather than the step lengths themselves), and you should not use the same ranges of values for stepMean0 and stepSD0 (because the former can be negative or positive, whereas the latter is positive). Hopefully, this should help you choose x, y, and z.
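As a concrete illustration of that advice, here is a sketch of how the runif() bounds might be chosen from the log step lengths; it assumes data is the moveHMM prepData output with step lengths in data$step, and the quantile choices are only a starting point.
# Log step lengths (drop zero and missing steps before taking logs)
logsteps <- log(data$step[!is.na(data$step) & data$step > 0])
hist(logsteps, breaks = 30)  # eyeball plausible state means and spreads
# Location parameter of the log-normal: the mean of the log step lengths,
# which may be negative or positive, so draw starting values from the observed range.
stepMean0 <- runif(2, min = quantile(logsteps, c(0.1, 0.5)),
                      max = quantile(logsteps, c(0.5, 0.9)))
# Scale parameter: the sd of the log step lengths, strictly positive,
# so keep the lower bound of the draw above zero.
stepSD0 <- runif(2, min = 0.1, max = 2 * sd(logsteps))
stepPar0 <- c(stepMean0, stepSD0)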
I used kernel estimation to get a non-parametric probability density function. Then I want to compare the 'distance' between the tails of two kernel distributions of continuous variables, using Kullback-Leibler divergence. I have tried the following code:
kl_l <- function(x,y) {
integrand <- function(x,y) {
f.x <- fitted(density(x, bw="nrd0"))
f.y <- fitted(density(y, bw="nrd0"))
return((log(f.x)-log(f.y))*f.x)
}
return(integrate(integrand, lower=-Inf,upper=quantile(density(x, bw="nrd0"),0.25))$value)
#the Kullback-Leibler equation
}
When I run kl_l(a, b) for a and b, 19 continuous variables, it returns the error:
Error in density(y, bw = "nrd0") : argument "y" is missing, with no default
Is there any way to calculate this?
(If anyone wants to see the actual equation: https://www.bankofengland.co.uk/-/media/boe/files/working-paper/2019/attention-to-the-tails-global-financial-conditions-and-exchange-rate-risks.pdf page 13.)
In short, I think you just need to move the f.x and f.y outside the integrand (and possibly replace fitted with approxfun):
kl_l <- function(x, y) {
f.x <- approxfun(density(x, bw = "nrd0"))
f.y <- approxfun(density(y, bw = "nrd0"))
integrand <- function(z) {
return((log(f.x(z)) - log(f.y(z))) * f.x(z))
}
return(integrate(integrand, lower = -Inf, upper = quantile(density(x, bw="nrd0"), 0.25))$value)
#the Kullback-Leibler equation
}
Expanding a little:
Looking at the paper you referenced, it appears as though you need to first create the two fitted distributions f and g. So if your variable a contains observations under the 1-standard-deviation increase in global financial conditions, and b contains the observations under average global financial conditions, you can create two functions as in your example:
f <- approxfun(density(a))
g <- approxfun(density(b))
Then define the integrand:
integrand <- function(x) log(f(x) / g(x)) * f(x)
The upper bound:
upper <- quantile(density(b, bw = "nrd0"), 0.25)
And finally do the integration on x within the specified bounds. Note that each value of x in the numerical computation has to go into both f and g; in your function kl_l, x and y were going into the integrand separately, which I think is incorrect, and in any case integrate would only have operated on the first argument.
integrate(integrand, lower = -Inf, upper = upper)$value
One thing to watch for is that approxfun returns NA for values outside the range covered by the density estimate, which can break the integration, so you'll need to handle those (for example by treating the density as zero there, if that is what you expect).
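Here is a sketch of one way to handle those NA values, reusing a, b, and upper from above; treating the density as zero outside the estimated support is my assumption, not part of the original answer.
# yleft/yright = 0 makes approxfun return 0 (instead of NA) outside the density's range
f <- approxfun(density(a), yleft = 0, yright = 0)
g <- approxfun(density(b), yleft = 0, yright = 0)
integrand <- function(x) {
  fx <- f(x)
  gx <- g(x)
  ifelse(fx > 0 & gx > 0, log(fx / gx) * fx, 0)  # zero contribution where either density is zero
}
integrate(integrand, lower = -Inf, upper = upper)$value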
I am trying to optimize a function in R.
The function is the likelihood function of the negative binomial when estimating only the mu parameter. This should not be a problem, since the function clearly has just one maximum. But I am not able to reach the desired result.
The function to be optimized is:
EMV <- function(data, par) {
Mi <- par
Phi <- 2
N <- NROW(data)
Resultado <- log(Mi/(Mi + Phi))*sum(data) + N*Phi*log(Phi/(Mi + Phi))
return(Resultado)
}
The data is a vector of negative binomial variables with parameters 2 and 2 (rnegbin from the MASS package):
data <- rnegbin(10000, mu = 2, theta = 2)
When I plot the function with mu as the variable, using the following code:
x <- seq(0.1, 100, 0.02)
z <- EMV(data,0.1)
for (aux in x) {z <- rbind(z, EMV(data,aux))}
z <- z[2:NROW(z)]
plot(x,z)
I get the following curve:
And the x value at which z is maximal is close to the parameter value, 2:
x[which.max(z)]
But the optimization is not working with BFGS:
Error in optim(par = theta, fn = EMV, data = data, method = "BFGS") :
non-finite finite-difference value [1]
And it does not reach the right value using SANN either; for example:
$par
[1] 5.19767e-05
$value
[1] -211981.8
$counts
function gradient
10000 NA
$convergence
[1] 0
$message
NULL
The questions are:
What am I doing wrong?
Is there a way to tell optim that the parameter should be bigger than 0?
Is there a way to tell optim that I want to maximize the function? (I am afraid optim is minimizing by default and heading toward a very small value where the function returns its smallest values.)
Minimization or Maximization?
Although ?optim says it can do maximization, that is only mentioned in brackets, so minimization is the default:
fn: A function to be minimized (or maximized) ...
Thus, if we want to maximize an objective function, we need to multiply it by -1 and then minimize it. This is quite a common situation: in statistics we often want to find the maximum log-likelihood, so to use optim() we minimize the negative log-likelihood instead.
Which method to use?
If we only do 1D minimization, we should use method "Brent". This method allows us to specify a lower bound and an upper bound for the search region. Searching starts from one bound and moves toward the other until it hits the minimum or reaches the boundary. Such a specification can help you constrain your parameters. For example, if you don't want mu to be smaller than 0, just set lower = 0.
When we move to 2D or higher dimensions, we should resort to "BFGS". In this case, if we want to constrain one of our parameters, say a, to be positive, we can take the log transform log_a = log(a) and reparameterize our objective function using log_a. Now log_a is free of constraints. The same approach works when we want to constrain multiple parameters to be positive.
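A minimal sketch of that reparameterization (a toy objective of my own, not from the original answer), for a two-parameter function where a must stay positive:
# Toy objective with minimum at a = 2, b = -1, where a must be positive
f_repar <- function(par) {
  a <- exp(par[1])  # back-transform: par[1] is log_a, which is unconstrained
  b <- par[2]
  (a - 2)^2 + (b + 1)^2
}
fit <- optim(par = c(log_a = 0, b = 0), fn = f_repar, method = "BFGS")
exp(fit$par["log_a"])  # recovered a on the original, positive scale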
How to change your code?
EMV <- function(data, par) {
Mi <- par
Phi <- 2
N <- NROW(data)
Resultado <- log(Mi/(Mi + Phi))*sum(data) + N*Phi*log(Phi/(Mi + Phi))
return(-1 * Resultado)
}
# theta is the starting value from the question; Brent searches within [lower, upper]
optim(par = theta, fn = EMV, data = data, method = "Brent", lower = 0, upper = 1E5)
The help file for optim says: "By default optim performs minimization, but it will maximize if control$fnscale is negative." So if you either multiply your function output by -1 or change the control object input, you should get the right answer.
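For completeness, a sketch of that fnscale variant, keeping the question's original EMV (which returns the value to be maximized); the starting value and bounds here are illustrative:
# fnscale = -1 tells optim to maximize, so EMV can be left un-negated
optim(par = 1, fn = EMV, data = data, method = "Brent",
      lower = 1e-6, upper = 1e5, control = list(fnscale = -1))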
I am doing some Extreme Value analysis. I don't want to use the fevd package, for a variety of reasons (the first being that I want to be able to tweak some things that I cannot tweak otherwise). I wrote my own code. It is mostly very simple, and I thought I had solved everything. But for some parameter combinations, the Hessian coming out of my log-likelihood analysis (based on optim) will not be correct.
Going over it one step at a time, my code (or a selected part of it) looks like this:
# routines for non stationary
Log_lik_GEV <- function(dataIN,scaleIN,shapeIN,locationIN){
# simply calculate the negative log likelihood value for a set of X and parameters, for the GPD
#xi, mu, sigma - xi is the shape parameter, mu the location parameter, and sigma is the scale parameter.
# shape = xi
# location = mu
# scale = beta
library(fExtremes)
#dgev Density of the GEV Distribution, dgev(x, xi = 1, mu = 0, sigma = 1)
LLvalues <- dgev(dataIN, xi = shapeIN, mu = locationIN, beta = scaleIN)
NLL <- -sum(log(LLvalues[is.finite(LLvalues)]))
return(NLL)
}
function_MLE <- function(par , dataIN){
scoreLL <- 0
shape_param <- par[1]
scale_param <- par[2]
location_param <- par[3]
scoreLL <- Log_lik_GEV(dataIN, scale_param, shape_param, location_param)
if (abs(shape_param) > 0.3) scoreLL <- scoreLL*10000000
if ((scale_param) <= 0) {
scale_param <- abs(scale_param)
par[2] <- abs(scale_param)
scoreLL <- scoreLL*1000000000
}
sum(scoreLL)
}
kernel_estimation <- function(dati_AM, shape_o, scale_o, location_o) {
paramOUT <- optim(par = c(shape_o, scale_o, location_o), fn = function_MLE, dataIN = dati_AM, control = list(maxit = 3000, reltol = 0.00000001), hessian = TRUE)
# calculation std errors
covmat <- solve(paramOUT$hessian)
stde <- sqrt(diag(covmat))
print(covmat)
print('')
result <- list(shape_gev =paramOUT$par[1], scale_gev = paramOUT$par[2],location_gev =paramOUT$par[3], var_covar = covmat)
return(result)
}
Everything works great in some cases. If I run my routines and the fevd routines, I get exactly the same results. In some cases (in my specific case, when shape = -0.29, so strongly negative/Weibull), my routine will give negative variances and funky Hessians. It is not always wrong, but some parameter combinations clearly do not give a valid Hessian. (Note: the parameters are still estimated correctly, meaning they are identical to the fevd results, but the covariance matrix is completely off.)
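One quick diagnostic that could be added inside kernel_estimation, right after the optim call (my suggestion, not from the original post): check whether the returned Hessian is positive definite; if any eigenvalue is non-positive, solve(paramOUT$hessian) can produce negative diagonal entries, i.e. negative variances.
# A Hessian of the negative log-likelihood at a true minimum should be positive
# definite (all eigenvalues > 0); otherwise the "covariance matrix" is meaningless.
eig <- eigen(paramOUT$hessian, symmetric = TRUE)$values
print(eig)
if (any(eig <= 0)) warning("Hessian is not positive definite; standard errors are unreliable")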
I found this post that compared the Hessians from two procedures, and indeed optim seems to be flaky. However, if I simply substitute maxLik into my routine, it just doesn't converge at all (even in those cases where convergence was happening before).
paramOUT = maxLik(function_MLE, start =c(shape_o, scale_o, location_o),
dataIN=dati_AM, method ='NR' )
I tried to give different initial values - even the correct ones - but it just doesn't converge.
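One thing worth noting about the maxLik API (my observation, not something established in the post): maxLik maximizes its objective, while function_MLE above returns a negative log-likelihood meant for minimization, so the sign would need flipping before handing it to maxLik, for example:
library(maxLik)
# maxLik maximizes, so pass the (positive) log-likelihood rather than the NLL
function_LL <- function(par, dataIN) -function_MLE(par, dataIN)
paramOUT <- maxLik(function_LL, start = c(shape_o, scale_o, location_o),
                   dataIN = dati_AM, method = "NR")
summary(paramOUT)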
I am not supplying data because I think the optim routine is used correctly in my example; simply, the numerical results are not stable for some parameter combinations. My questions are:
1) Am I missing something in the way I use maxLik?
2) Are there other optimization routines, besides maxLik, from which I can extract the hessian?
thanks