Integral and numeric optimization (nlminb) in R

I am having issues with an optimization problem involving numerical estimation of an integral which contains an unknown variable.
Numerically estimating an integral is simple enough: just use the integrate function in R. However, I am trying to estimate a rather unpleasant integral that requires optimization, since it contains an unknown variable and a constraint. I am using the nlminb function, but the result is highly incorrect. The idea is that the integral, evaluated at the unknown variable, should be constrained to be smaller than or equal to 1-l, where l is between 0 and 1.
The code is the following:
integrand <- function(x, p) {
  dnorm(x, 0, 1) * (1 - dnorm((qnorm(p) - sqrt(0.12) * x) / sqrt(1 - 0.12), 0, 1))^800
}
and it is the variable p which is unknown.
The objective function to be minimised is the following:
objective <- function(p) {
  PoD <- integrate(integrand, lower = -Inf, upper = Inf, p = p)$value
  PoD - 0.5
}
test <- nlminb(0.015, objective = objective, lower = 0, upper = 1)$par*100
Edited to reflect mistakes in the objective function and the integral; the same issue remains.
I think my mistake is not specifying which variable to minimise. The optimisation just returns the starting value given to nlminb, multiplied by 100.
The authors of the paper used dummy variables and showed that l = 0.5 should give p = 0.15%.
Thank you for your time.

Of course it does: as written, your objective function does not depend on p, so nlminb has nothing to vary and returns the starting value. Make sure p is passed through to the integrand:
integrand <- function(x, p) {
  dnorm(x, 0, 1) * (1 - dnorm((qnorm(p) - sqrt(0.12) * x) / sqrt(1 - 0.12), 0, 1))^800
}
objective <- function(p) {
  PoD <- integrate(integrand, lower = -Inf, upper = Inf, p = p)$value
  PoD - 0.5
}
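A further point, not from the original answer: nlminb minimises the objective as written, and PoD - 0.5 can go negative, so the minimiser need not be the p at which the integral equals 1 - l. If the goal is to find that p (with l = 0.5, so the target is 0.5), one hedged sketch is to minimise the squared deviation instead; the starting value and the lower bound below are illustrative choices, not from the thread:
# Hedged sketch, assuming the intended problem is root-finding for the integral
l <- 0.5
objective_sq <- function(p) {
  PoD <- integrate(integrand, lower = -Inf, upper = Inf, p = p)$value
  (PoD - (1 - l))^2
}
nlminb(0.0015, objective = objective_sq, lower = 1e-8, upper = 1)$par * 100  # result in percent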

Related

How to calculate Kullback-Leibler divergence of kernel estimates in R

I used kernel estimation to get a non-parametric probability density function. Then I want to compare the tail 'distance' between two kernel distributions of continuous variables, using the Kullback-Leibler divergence. I have tried the following code:
kl_l <- function(x, y) {
  integrand <- function(x, y) {
    f.x <- fitted(density(x, bw = "nrd0"))
    f.y <- fitted(density(y, bw = "nrd0"))
    return((log(f.x) - log(f.y)) * f.x)
  }
  return(integrate(integrand, lower = -Inf, upper = quantile(density(x, bw = "nrd0"), 0.25))$value)
  # the Kullback-Leibler equation
}
When I run kl_l(a, b), where a and b are 19 continuous variables, it returns an error:
Error in density(y, bw = "nrd0") : argument "y" is missing, with no default
Is there any way to calculate this?
(If anyone wants to see the actual equation: https://www.bankofengland.co.uk/-/media/boe/files/working-paper/2019/attention-to-the-tails-global-financial-conditions-and-exchange-rate-risks.pdf page 13.)
In short, I think you just need to move the f.x and f.y outside the integrand (and possibly replace fitted with approxfun):
kl_l <- function(x, y) {
  f.x <- approxfun(density(x, bw = "nrd0"))
  f.y <- approxfun(density(y, bw = "nrd0"))
  integrand <- function(z) {
    return((log(f.x(z)) - log(f.y(z))) * f.x(z))
  }
  return(integrate(integrand, lower = -Inf, upper = quantile(density(x, bw = "nrd0"), 0.25))$value)
  # the Kullback-Leibler equation
}
Expanding a little:
Looking at the paper you referenced, it appears as though you need to first create the two fitted distributions f and g. So if your variable a contains observations under the 1-standard-deviation increase in global financial conditions, and b contains the observations under average global financial conditions, you can create two functions as in your example:
f <- approxfun(density(a))
g <- approxfun(density(b))
Then define the integrand:
integrand <- function(x) log(f(x) / g(x)) * f(x)
The upper bound:
upper <- quantile(density(b, bw = "nrd0"), 0.25)
And finally do the integration on x within the specified bounds. Note that each value of x in the numerical computation has to go into both f and g; in your function kl_l, the x and y were separately going into the integrand, which I think is incorrect; and in any case, integrate will only have operated on the first variable.
integrate(integrand, lower = -Inf, upper = upper)$value
One thing to check for is that approxfun returns NA for values outside the range specified in the density, which can mess up your operation, so you'll need to adjust for those (if you expect the density to go to zero, for example).
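As a hedged illustration of that last point (not from the original answer), one option is to wrap the interpolators so that they fall back to a tiny positive value outside the fitted range, keeping log() finite. Here a and b are the two data vectors from the question, the fallback 1e-12 is an arbitrary choice, and the empirical 25% quantile of b is used as a stand-in upper bound:
# Hedged sketch: guard against NA outside the range fitted by density()
make_pdf <- function(v) {
  raw <- approxfun(density(v, bw = "nrd0"))
  function(z) {
    out <- raw(z)
    out[is.na(out)] <- 1e-12  # arbitrary small density instead of NA
    out
  }
}
f <- make_pdf(a)
g <- make_pdf(b)
integrate(function(z) log(f(z) / g(z)) * f(z), lower = -Inf, upper = quantile(b, 0.25))$value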

R: conditional expected value

Hello everybody (this is my first post in here)!
I'm having a problem with finding the conditional expected value for a given distribution.
Suppose that we need to find E( x | x > 0.5 ), where x has a GEV (generalised extreme value) distribution with density dgev(x, xi, sigma, mu). What I was trying to do was
library(evir)
func1 <- function(x) {x*dgev(x, xi, sigma, mu)}
integral <- integrate(func1, lower = 0.5, upper = 10000, subdivisions = 10000)
cond.exp.val <- as.numeric(integral[1])/(1-q)
where q is the value that gives qgev(q, xi, sigma, mu) = 0.5, used for normalisation.
The result depends heavily on the 'upper' argument of the integrate() function, and for higher values of this argument the integral diverges. As my distribution parameters are
xi <- 0.81
sigma <- 0.0067
mu <- 0.0072
this integration should be feasible and convergent. Do you have any idea what I am doing wrong, or is there a built-in R function that can calculate such a conditional expected value?
Generally, you are advised to use Inf rather than a large number when integrating the right tail of a density. See details in ?integrate. I took your description of q as being a value obtained by iteration and I stopped when I got within 4 decimal places of 0.5 using q <- 0.99315:
qgev(.99315, xi, sigma, mu)
[1] 0.4998413
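As an aside, assuming pgev from the same package follows the same argument convention as the qgev call above (an assumption on my part), you could obtain q directly from the distribution function instead of iterating:
# Hedged sketch: q is the CDF evaluated at 0.5 (argument order assumed to match qgev above)
q <- pgev(0.5, xi, sigma, mu)
q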
You also used the extraction from your integral variable incorrectly. Use either "[[" or "$" when working with lists:
func1 <- function(x) {x*dgev(x, xi, sigma, mu)}
integral <- integrate(func1, lower = 0.5, upper = Inf, subdivisions = 10000)
(cond.exp.val <- integral[[1]]/(1-.99315)) # `as.numeric` not needed
#[1] 2.646068
I have concerns that your description of how to get q was misleading, since values above 1 should not be an expectation derived from a statistical PDF.

R: Convergence problems with numerical integration

Not sure if this numerical methods problem should really be here or on Cross Validated, but since I have a nice reproducible example I thought I would start here.
I am going to be estimating and fitting a bunch of distributions both to some large data sets and to data sets generated randomly from similar distributions. As part of this process I will be generating estimates for the conditional mean of various value ranges, including truncated and non-truncated values of the right tail.
The function cr_moment below, given a pdf function for dfun and parameters for that function in params calculates the unconditional mean of that distribution. Given the upper, lower, or both bounds, it calculates the conditional mean for the range specified by those bounds, using the singly- or doubly-truncated distribution for those bounds. The function beneath it, cr_gb2, specializes cr_moment to the generalized beta distribution of the second kind. Finally, the parameter values supplied beneath that approximate the unadjusted current-dollar household income distribution from the US Census/BLS Current Population Survey for the year 2000. McDonald & Ransom 2008. (Also, kudos to Mikko Marttila on this list for help with coding this function).
This function gives me a failure-to-converge error, copied below, for various lower bounds and an upper bound equal to 4.55e8 or higher, but not at 4.54e8. The kth moment of the GB2 exists for k < shape1 * shape3, here about 2.51. This is a nice smooth unimodal function being integrated over a finite interval, and I don't know why it is failing to converge or what to do about it. For other parameter values, but not this one, I have also seen convergence problems at the low end, for lower bounds ranging from 6 to a couple of hundred.
Error in integrate(f = prob_interval, lower = lb, upper = ub, subdivisions = 100L):
the integral is probably divergent
455 million is above the highest observable income level by one or two orders of magnitude, but given a wider range of parameter values and using hill-climbing algorithms to fit real and simulated data, I think I will hit this wall many times. I know very little about numerical methods in a case like this and don't really know where to start. Help and suggestions greatly appreciated.
cr_moment <- function(lb = -Inf, ub = Inf, dfun, params, v = 1, ...){
  x_pdf <- function(X){
    X^v * do.call(what = dfun, args = c(list(x = X), params))
  }
  prob_interval <- function(X){
    do.call(what = dfun, args = c(list(x = X), params))
  }
  integral_val <- integrate(f = x_pdf, lower = lb, upper = ub)
  integral_prob <- integrate(f = prob_interval, lower = lb, upper = ub)
  crm <- integral_val[[1]] / integral_prob[[1]]
  out <- list(value = integral_val[[1]], probability = integral_prob[[1]],
              cond_moment = crm)
  out
}
library(GB2)
cr_gb2 <- function(lb = -Inf, ub = Inf, v = 1, params){
  cr_moment(lb, ub, dfun = dgb2, params = get("params"))
}
GB2_params <- list(shape1 = 2.2474, scale = 58441.5, shape2 = 0.6186, shape3 = 1.118)
cr_gb2(lb=1, ub= 4.55e8, params = GB2_params)
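A hedged aside, not from the thread: one workaround that sometimes helps with "probably divergent" errors over a very wide finite range is to split the integration region and, following the advice in the conditional-expected-value answer above, hand the far tail to integrate() with an infinite upper bound rather than a huge finite one. A minimal sketch reusing GB2_params; the split point 1e6 is an arbitrary choice:
gb2_pdf <- function(x) do.call(dgb2, c(list(x = x), GB2_params))
piece1 <- integrate(gb2_pdf, lower = 1, upper = 1e6)$value
piece2 <- integrate(gb2_pdf, lower = 1e6, upper = Inf)$value
piece1 + piece2  # total probability mass above 1 (the mass beyond 4.55e8 should be negligible)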

DEoptim does not return optimal parameters

I am trying to use DEoptim to optimize the parameters of the Heston pricing model (NMOF package). My goal is to minimize the difference between the real option price and the Heston price. However, when running my code, DEoptim does not save the best result but always displays the value obtained using the initial parameters, not the optimized ones. Unfortunately, I'm totally new to R (and any kind of programming), so I cannot seem to fix the problem.
My data, for one exemplary subset of an option, looks like this:
#Load data
#Real option price
C0116_P=as.vector(c(1328.700000, 1316.050000, 1333.050000, 1337.900000, 1344.800000))
#Strike price
C0116_K=as.vector(c(500, 500, 500, 500, 500))
#Time to maturity in years
C0116_T_t=as.vector(c(1.660274, 1.657534, 1.654795, 1.652055, 1.649315))
#Interest rate percentage
C0116_r=as.vector(c(0.080000, 0.080000, 0.090000, 0.090000, 0.090000))
#Dividend yield percentage
C0116_DY=as.vector(c(2.070000, 2.090000, 2.070000, 2.070000,2.060000))
#Price underlying
C0116_SP_500_P=as.vector(c(1885.08, 1872.83, 1888.03, 1892.49, 1900.53))
In the next step, I want to define the function to minimize (the difference between the real price and the Heston price) and set some initial parameters. To optimize, I am running a loop which unfortunately, at the end, only returns the difference between the real option price and the Heston price using the initial parameters as the best value, and not the actual parameters that minimize the difference.
#Load packages
require(NMOF)
require(DEoptim)
#Initial parameters
v0=0.2
vT=0.2
rho=0.2
k=0.2
sigma=0.2
#Define function
error_heston <- function(x) {
  error <- P - callHestoncf(S, X, tau, r, q, v0, vT, rho, k, sigma)
  return(error)
}
#Run optimization
outDEoptim <- matrix()
for (i in 1:5) {
  # I only want the parameters v0, vT, rho, k and sigma to change; that is why I kept the others constant
  lower <- c(C0116_P[i], C0116_SP_500_P[i], C0116_K[i], C0116_T_t[i],
             C0116_r[i]/100, C0116_DY[i]/100, 0.0001, 0.0001, -1, 0.0001, 0.0001)
  upper <- c(C0116_P[i], C0116_SP_500_P[i], C0116_K[i], C0116_T_t[i],
             C0116_r[i]/100, C0116_DY[i]/100, 10, 10, 1, 10, 10)
  outDEoptim <- DEoptim(error_heston, lower, upper, DEoptim.control(VTR = 0, itermax = 100))
  print(outDEoptim$opti$bestval)
  i = i + 1
}
Any help is much appreciated!
One of the first problems is that your objective function only has one argument (the parameters to optimize), so all the other objects used inside the function must be looked up elsewhere. It's better practice to pass them explicitly.
Plus, many of the necessary values aren't defined in your example (e.g. S, X, etc). All the parameters you want to optimize will be passed to your objective function via the first argument. It can help clarify things if you explicitly assign each element inside your objective function. So a more robust objective function definition is:
# Define objective function
error_heston <- function(x, P, S, K, tau, r, q) {
  v0    <- x[1]
  vT    <- x[2]
  rho   <- x[3]
  k     <- x[4]
  sigma <- x[5]
  error <- abs(P - callHestoncf(S, K, tau, r, q, v0, vT, rho, k, sigma))
  return(error)
}
Also note that I took the absolute error. DEoptim is going to minimize the objective function, so it would try to make P - callHestoncf() as negative as possible, when you want it to be close to zero instead.
You specified the box constraints upper and lower even for the parameters that don't vary. It's best to only have DEoptim generate a population for the parameters that do vary, so I removed the non-varying parameters from the box constraints. I also defined them outside the for loop.
# Only need to set bounds for varying parameters
lower <- c(1e-4, 1e-4, -1, 1e-4, 1e-4)
upper <- c( 10, 10, 1, 10, 10)
Now to the actual DEoptim call. Here is where you will pass the values for all the non-varying parameters. You set them as named arguments to the DEoptim call, as I've done below.
i <- 1
outDEoptim <- DEoptim(error_heston, lower, upper,
                      DEoptim.control(VTR = 0, itermax = 100),
                      P = C0116_P[i], S = C0116_SP_500_P[i], K = C0116_K[i],
                      tau = C0116_T_t[i], r = C0116_r[i], q = C0116_DY[i])
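Once a run finishes, the optimized parameters and the corresponding objective value are stored in the optim component of the result:
outDEoptim$optim$bestmem  # best parameter vector (v0, vT, rho, k, sigma)
outDEoptim$optim$bestval  # objective value at that vector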
I only ran one iteration of the for loop, because the callHestoncf() function frequently throws an error when its numerical integration routine fails, which stops the optimization. You should look into the cause of that, and ask a new question if you have trouble.
I also noticed you specified one of the non-varying inputs incorrectly. Your dividend yield percentages are 100 times too large. Your non-varying inputs should be:
# Real option price
C0116_P <- c(1328.70, 1316.05, 1333.05, 1337.90, 1344.80)
# Strike price
C0116_K <- c(500, 500, 500, 500, 500)
# Time to maturity in years
C0116_T_t <- c(1.660274, 1.657534, 1.654795, 1.652055, 1.649315)
# Interest rate percentage
C0116_r <- c(0.08, 0.08, 0.09, 0.09, 0.09)
# Dividend yield percentage
C0116_DY <- c(2.07, 2.09, 2.07, 2.07, 2.06) / 100
# Price underlying
C0116_SP_500_P <- c(1885.08, 1872.83, 1888.03, 1892.49, 1900.53)
As an aside, you should take a little time to format your code better. It makes it more readable, which should help you avoid typo-like errors.

Error in optim(): searching for global minimum for a univariate function

I am trying to optimize a function in R.
The function is the likelihood function of the negative binomial distribution when estimating only the mu parameter. This should not be a problem, since the function clearly has a single maximum. But I am not able to reach the desired result.
The function to be optimized is:
EMV <- function(data, par) {
  Mi <- par
  Phi <- 2
  N <- NROW(data)
  Resultado <- log(Mi/(Mi + Phi))*sum(data) + N*Phi*log(Phi/(Mi + Phi))
  return(Resultado)
}
Data is a vector of negative binomial variables with parameters 2 and 2
library(MASS)  # rnegbin() comes from MASS
data <- rnegbin(10000, mu = 2, theta = 2)
When I plot the function with mu as the variable, using the following code:
x <- seq(0.1, 100, 0.02)
z <- EMV(data,0.1)
for (aux in x) {z <- rbind(z, EMV(data,aux))}
z <- z[2:NROW(z)]
plot(x,z)
I get the following curve:
And the value of x that maximizes z is close to the parameter value of 2:
x[which.max(z)]
But the optimization does not work with BFGS:
Error in optim(par = theta, fn = EMV, data = data, method = "BFGS") :
non-finite finite-difference value [1]
And it does not reach the right value using SANN either, for example:
$par
[1] 5.19767e-05
$value
[1] -211981.8
$counts
function gradient
10000 NA
$convergence
[1] 0
$message
NULL
The questions are:
What am I doing wrong?
Is there a way to tell optim that the parameter should be bigger than 0?
Is there a way to tell optim that I want to maximize the function? (I am afraid optim is trying to minimize and is going to a very small value where the function returns its smallest values.)
Minimization or Maximization?
Although ?optim says it can do maximization, that is only mentioned in parentheses, so minimization is the default:
fn: A function to be minimized (or maximized) ...
Thus, if we want to maximize an objective function, we need to multiply it by -1 and then minimize it. This is quite a common situation: in statistics we often want to find the maximum log-likelihood, so to use optim() we minimize the negative log-likelihood.
Which method to use?
If we are only doing 1D minimization, we should use method "Brent". This method lets us specify a lower bound and an upper bound, and the search is confined to that interval. Such a specification can help you constrain your parameters. For example, if you don't want mu to be smaller than 0, just set lower = 0.
When we move to 2D or higher dimensions, we should resort to "BFGS". In this case, if we want to constrain one of our parameters, say a, to be positive, we can take the log transform log_a = log(a) and reparameterize the objective function in terms of log_a. Now log_a is unconstrained. The same goes when we want to constrain multiple parameters to be positive.
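A small hedged sketch of that reparameterization idea, using a made-up normal log-likelihood rather than the negative binomial above, just to show the mechanics:
# Hedged sketch: keep sd > 0 by optimizing over log(sd) instead of sd
neg_loglik <- function(par, data) {
  mu    <- par[1]        # unconstrained
  sigma <- exp(par[2])   # back-transform: exp() guarantees sigma > 0
  -sum(dnorm(data, mean = mu, sd = sigma, log = TRUE))
}
set.seed(1)
y <- rnorm(200, mean = 3, sd = 2)
optim(par = c(0, log(1)), fn = neg_loglik, data = y, method = "BFGS")$par
# the second element is log(sigma); exp() it to recover sigma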
How to change your code?
EMV <- function(data, par) {
  Mi <- par
  Phi <- 2
  N <- NROW(data)
  Resultado <- log(Mi/(Mi + Phi))*sum(data) + N*Phi*log(Phi/(Mi + Phi))
  return(-1 * Resultado)  # negated so that minimization by optim maximizes the likelihood
}
# par = theta is your starting value, as before
optim(par = theta, fn = EMV, data = data, method = "Brent", lower = 0, upper = 1E5)
The help file for optim says: "By default optim performs minimization, but it will maximize if control$fnscale is negative." So if you either multiply your function output by -1 or change the control object input, you should get the right answer.
