The integral is probably divergent

# integrand: squared logistic function times a normal density with sd exp(lsig)
fun1 = function(y, mu=mu0, lsig=lsig0) {
  res = 1/(exp(-y)+1)^2 * 1/sqrt(2*pi)/exp(lsig) * exp(-(y-mu)^2/2/exp(lsig)^2)
  return(res)
}
# negative log-likelihood; fun2 and fun3 are analogous integrands (not shown)
fun4 = function(para=c(mu1,lsig1)) {
  mu1 = para[1]
  lsig1 = para[2]
  res = n1 * log(noze(integrate(fun1,-Inf,Inf,mu=mu1,lsig=lsig1)$value)) +
        n3 * log(noze(integrate(fun2,-Inf,Inf,mu=mu1,lsig=lsig1)$value)) +
        n2 * log(noze(integrate(fun3,-Inf,Inf,mu=mu1,lsig=lsig1)$value))
  return(-res)
}
# floor tiny values so log() stays finite
noze = function(x) {
  if (x < 1e-100) { x = 1e-100 }
  return(x)
}
optim(c(0.5,2),fun4,method="L-BFGS-B",lower=c(-5,-3),upper=c(3.5,3.5))$par
I have to find the two parameters of the function 'fun4', which uses the integral of 'fun1' ('fun2' and 'fun3' are slightly different from 'fun1').
I encountered an error 'Error in integrate(fun1, -Inf, Inf, -3.9538, -3) :
the integral is probably divergent'
Using a scatterplot, I found that fun1 is close to zero almost everywhere except on (-4.2,-3.7).
Thus, integrating over that interval only gives an (approximately) correct integral.
> integrate(fun1,-4.2,-3.6,-3.9538,-3)
0.0003558953 with absolute error < 3e-11
This can be confirmed using nearby parameter values:
> integrate(fun1,-Inf,Inf,-3.9538,-3.1)
0.0003555906 with absolute error < 2.6e-05
> integrate(fun1,-Inf,Inf,-3.9538,-2.85)
0.0003564842 with absolute error < 3.7e-06
If the interval is too wide, integrate gives an incorrect result:
> integrate(fun1,-5,5,-3.9538,-3)
0.0003558953 with absolute error < 2.3e-08
> integrate(fun1,-15,15,-3.9538,-3)
3.492547e-11 with absolute error < 6.5e-11
> integrate(fun1,-30,30,-3.9538,-3)
1.980146e-41 with absolute error < 3.4e-41
> integrate(fun1,-50,50,-3.9538,-3)
0 with absolute error < 0
> integrate(fun1,-Inf,Inf,-3.9538,-3)
Error in integrate(fun1, -Inf, Inf, -3.9538, -3) :
the integral is probably divergent
If I only had to integrate once, I could find an interval where 'fun1' is large enough and integrate over that interval alone.
But the problem is that I use the optim function, which tries various parameter values while searching for a minimizer of 'fun4'.
Using (-Inf,Inf) gives an error, and any fixed wide interval gives incorrect integrals.
Is there a good method to solve this?

A convolution with a Gaussian kernel can be computed using Gauss-Hermite quadrature, and there is an R package for that: https://cran.r-project.org/web/packages/gaussquad/gaussquad.pdf
Some test code:
library(gaussquad)
n.quad <- 128  # integration order
# get the weights/abscissas as a data frame with 2 columns and n.quad rows
rule <- ghermite.h.quadrature.rules(n.quad, mu = 0.0)[[n.quad]]
# test function: integrate f(x) = 1 against the weight exp(-x^2) over (-Inf, Inf);
# the answer should be sqrt(pi)
f <- function(x) {
  1.0
}
q <- ghermite.h.quadrature(f, rule)
print(q - sqrt(pi))
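For the question's fun1, the Gaussian factor can be absorbed into the weights via the substitution y = mu + sqrt(2)*exp(lsig)*x, leaving only the squared-logistic factor to evaluate at the shifted nodes. A minimal sketch (gh_fun1 is a name I've introduced; it assumes the rule object built above, with abscissa/weight columns x and w):
gh_fun1 <- function(mu, lsig, rule) {
  sig <- exp(lsig)
  y <- mu + sqrt(2) * sig * rule$x            # shifted, scaled Hermite nodes
  sum(rule$w * 1/(exp(-y) + 1)^2) / sqrt(pi)  # the weights absorb the normal density
}
# e.g. gh_fun1(-3.9538, -3, rule) should approximate integrate(fun1, -4.2, -3.6, -3.9538, -3)
Because the nodes move with mu and exp(lsig), this stays accurate for whatever parameters optim tries, avoiding both the divergence error and the fixed-interval problem.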

Related

MLE of the parameters of a PDF written as an infinite sum of terms

My question relates to the use of R to derive maximum likelihood estimates of parameters when a probability distribution is expressed as an infinite sum, such as the one below due to Rao, Girija et al.
I wanted to see whether I could reproduce the maximum likelihood estimates obtained by these authors (who used Matlab rather than R) when the model is applied to a given set of data. My attempt is given below; it throws several warnings that "longer object length is not a multiple of shorter object length". I know why I am getting this warning, but I do not know how to remedy it. How can I edit my code to overcome this?
Also, is there a better way to handle infinite sums? Here I am just using an arbitrarily large number of terms (n = 1000).
library(bbmle)
svec <- list(c=1,lambda=1)
x <- scan(textConnection("0.1396263 0.1570796 0.2268928 0.2268928 0.2443461 0.3141593 0.3839724 0.4712389 0.5235988 0.5934119 0.6632251 0.6632251 0.6981317 0.7679449 0.7853982 0.8203047 0.8377580 0.8377580 0.8377580 0.8377580 0.8726646 0.9250245 0.9773844 0.9948377 1.0122910 1.0122910 1.0646508 1.0995574 1.1170107 1.1170107 1.1170107 1.1344640 1.1344640 1.1868239 1.2217305 1.2740904 1.3613568 1.3613568 1.3613568 1.4486233 1.4486233 1.5358897 1.5358897 1.5358897 1.5707963 1.6057029 1.6057029 1.6231562 1.6580628 1.6755161 1.7104227 1.7453293 1.7976891 1.8500490 1.9722221 2.0594885 2.4085544 2.6703538 2.6703538 2.7052603 3.5604717 3.7524579 3.8920842 3.9444441 4.1364303 4.1538836 4.2411501 4.2586034 4.3633231 4.3807764 4.4854962 4.6774824 4.9741884 5.5676003 5.9864793 6.1086524"))
dL <- function(x, c, lambda, n = 1000, log=TRUE) {
  k <- 0:n
  r <- log(sum(lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda))))
  if (log) return(r) else return(exp(r))
}
dat <- data.frame(x)
m1 <- mle2( x ~ dL(c,lambda),
            data=dat,
            start=svec,
            control=list(parscale=unlist(svec)),
            method="L-BFGS-B",
            lower=c(0,0)
          )
I suggest starting out with that algorithm and making a density function that can be tested for proper behavior by integrating it over its range of definition, (0, 2*pi). You are calling it a "probability function", but that is a term I associate with CDFs rather than with densities (PDFs):
dL <- function(x, c=1, lambda=1, n = 1000, log=FALSE) {
  k <- 0:n
  r <- sum(lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda)))
  if (log) log(r) else r   # return the log-density only when requested
}
vdL <- Vectorize(dL)
integrate(vdL, 0,2*pi)
#0.999841 with absolute error < 9.3e-06
LL <- function(x, c, lambda){ -sum( log( vdL(x, c, lambda))) }
(I think you were trying to pack too much into your log-likelihood function, so I decided to break the steps apart.)
When I ran that version, I got a warning message from the final mle2 step that I didn't like, and I thought it might be the case that this density function was occasionally returning zero or negative values, so this was my final version:
dL <- function(x, c=1, lambda=1, n = 1000) {
  k <- 0:n
  # floor the sum at 1e-8 so that log() in the likelihood stays finite
  r <- max( sum(lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda))), 1e-8)
  r
}
vdL <- Vectorize(dL)
integrate(vdL, 0,2*pi)
#0.999841 with absolute error < 9.3e-06
LL <- function(x, c, lambda){ -sum( log( vdL(x, c, lambda))) }
(m0 <- mle2(LL,start=list(c=0.2,lambda=1),data=list(x=x)))
#------------------------
Call:
mle2(minuslogl = LL, start = list(c = 0.2, lambda = 1), data = list(x = x))
Coefficients:
c lambda
0.9009665 1.1372237
Log-likelihood: -116.96
(The warning and the warning-free LL numbers were the same.)
So I think you were attempting to pack too much into your definition of the log-likelihood function and got tripped up somewhere. There should have been two summations: one for the density approximation and a second for the sum of the log-likelihood. The vectors in those summations had different lengths, hence the warning you were seeing. Unpacking the steps allowed success, at least to the extent of not throwing errors. I'm not sure what that density represents and cannot verify correctness.
As for whether there is a better way to approximate an infinite series, the answer hinges on what is known about the rate of convergence of the partial sums, and on whether you can set a tolerance for comparing successive terms and stop the calculation after a smaller number of terms, as sketched below.
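A minimal sketch of that idea, assuming the terms eventually decrease monotonically (dL_adaptive, tol, and kmax are hypothetical names, not part of the original code):
dL_adaptive <- function(x, c = 1, lambda = 1, tol = 1e-10, kmax = 1e5) {
  total <- 0
  k <- 0
  repeat {
    # same term as in dL; exp(-lambda*u) equals (exp(-u))^lambda
    term <- lambda * c * (x + 2*k*pi)^(-c-1) * exp(-lambda * (x + 2*k*pi)^(-c))
    total <- total + term
    # stop once a new term no longer changes the running sum appreciably
    if (k >= kmax || term < tol * total) break
    k <- k + 1
  }
  total
}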
When I look at the density, it makes me wonder if it applies to some scattering process:
curve(vdL(x, c=.9, lambda=1.137), 0.00001, 2*pi)
You can examine the speed of convergence by looking at the ratios of successive terms. Here's a function that does that for the first ten or so terms at an arbitrary x (note that it reproduces the length-recycling warning from the question):
> ratios <- function(x, c=1, lambda=1) {lambda*c*(x+2*(1:11)*pi)^(-c-1)*(exp(-(x+2*(1:10)*pi)^(-c))^(lambda))/lambda*c*(x+2*(0:10)*pi)^(-c-1)*(exp(-(x+2*(0:10)*pi)^(-c))^(lambda)) }
> ratios(0.5)
[1] 1.015263e-02 1.017560e-04 1.376150e-05 3.712618e-06 1.392658e-06 6.351874e-07 3.299032e-07 1.880054e-07
[9] 1.148694e-07 7.409595e-08 4.369854e-08
Warning message:
In lambda * c * (x + 2 * (1:11) * pi)^(-c - 1) * (exp(-(x + 2 * :
longer object length is not a multiple of shorter object length
> ratios(0.05)
[1] 1.755301e-08 1.235632e-04 1.541082e-05 4.024074e-06 1.482741e-06 6.686497e-07 3.445688e-07 1.952358e-07
[9] 1.187626e-07 7.634088e-08 4.443193e-08
Warning message:
In lambda * c * (x + 2 * (1:11) * pi)^(-c - 1) * (exp(-(x + 2 * :
longer object length is not a multiple of shorter object length
That looks like pretty rapid convergence to me, so I'm guessing that you could use only the first 20 terms and get similar results. With 20 terms the results look like:
> integrate(vdL, 0,2*pi)
0.9924498 with absolute error < 9.3e-06
> (m0 <- mle2(LL,start=list(c=0.2,lambda=1),data=list(x=x)))
Call:
mle2(minuslogl = LL, start = list(c = 0.2, lambda = 1), data = list(x = x))
Coefficients:
c lambda
0.9542066 1.1098169
Log-likelihood: -117.83
Since you never interpret a log-likelihood in isolation but rather look at differences, I'm guessing that this minor difference will not adversely affect your inferences.

DEoptim package - Argument is missing with no default

I'm trying to use DEoptim to find the global minimum of z over -1 < x < 1, -1 < y < 1, but I'm getting Error in FUN(newX[, i], ...) : argument "y" is missing, with no default, and I don't know what I'm supposed to do about the missing "y".
install.packages("Rmpfr")
install.packages("DEoptim")
library(gmp)
library(Rmpfr)
library(parallel) # https://cran.r-project.org/web/packages/DEoptim/vignettes/DEoptim.pdf
library(DEoptim)
z = function(x,y) {
(exp(sin(60.0*x)) + sin(50.0*exp(y)) + sin(80.0*sin(x)) + sin(sin(70.0*y)) - sin(10.0*(x+y)) + (x*x+y*y)/4.0)
}
optimized_Minimum <- DEoptim(z, lower = c(-1,-1), upper = c(1,1),
control=list(storepopfrom=1, trace=FALSE))
# optimized_Minimum <- optim(z, lower = c(-1,-1), upper = c(1,1), method = "Brent")
DEoptim does not expect you to pass two separate arguments to your function (x and y), but you can still solve for multiple variables.
As with optim, you need the objective to take a single parameter vector rather than two separate variables; see the wrapper sketched below.
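For the z in the question, a minimal wrapper might look like this (zv is a name I've introduced):
zv <- function(xy) z(xy[1], xy[2])  # unpack the length-2 parameter vector
optimized_Minimum <- DEoptim(zv, lower = c(-1,-1), upper = c(1,1),
                             control=list(storepopfrom=1, trace=FALSE))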
I tested this with the functions from the linked solution and it worked:
fxcalc <- function(s,t){ (1-(1-(parametros$ap/xm)^(s))^t)*100 }
suma <- function(s,t){ (parametros$fx-fxcalc(s,t))^2 }
func <- function(st){
  s <- st[1]
  t <- st[2]
  sum(suma(s,t))
}
optimized_Minimum <- DEoptim(func, lower = c(-1,-1), upper = c(1,1),
                             control=list(storepopfrom=1, trace=FALSE))
summary(optimized_Minimum)
***** summary of DEoptim object *****
best member : 1 1
best value : 0
after : 200 generations
fn evaluated : 402 times
*************************************

Issue with constrOptim

When doing constrained optimization using the constrOptim function, I sometimes get the following error message:
Error in optim(theta.old, fun, gradient, control = control, method = method, :
initial value in 'vmmin' is not finite
Example
x <- c(-0.2496881061155757641767394261478330008685588836669921875,
0.0824038146359631351600683046854101121425628662109375,
0.25000000111421105675191256523248739540576934814453125)
nw <- length(x)
ui <- diag(1, nrow = nw)
ui <- rbind(ui, rep(0, nw))
ui[cbind(2:(nw + 1), 1:nw)] <- -1
ci <- rep(-0.8 / (nw + 1), nw + 1)
constrOptim(theta = rep(0, nw), f = function(theta) mean((theta - x)^2),
            grad = function(theta) 2 * (theta - x), ui = ui, ci = ci,
            method = "BFGS")
What I know
The problem occurs during the iterations inside constrOptim, when the result comes so close to the boundary that almost all points evaluated by the BFGS optimizer are NaNs (excluding the initial point). In this case, BFGS will sometimes return an optimal value of NaN and a corresponding minimizing parameter outside the constraint set.
In constrOptim, the objective function fed to BFGS is given by
R <- function(theta, theta.old, ...) {
  ui.theta <- ui %*% theta
  gi <- ui.theta - ci
  if (any(gi < 0)) {
    return(NaN)
  }
  gi.old <- ui %*% theta.old - ci
  bar <- sum(gi.old * log(gi) - ui.theta)
  if (!is.finite(bar))
    bar <- -Inf
  f(theta, ...) - mu * bar
}
My question
It seems to me that the obvious solution to the problem is simply to return sign(mu) * Inf instead of NaN whenever any gi < 0, but could this fix lead to other problems?
After normalizing the gradient properly
constrOptim(theta = rep(0, nw), f = function(theta) mean((theta - x)^2),
            grad = function(theta) 2 / nw * (theta - x), ui = ui, ci = ci,
            method = "BFGS")
I can no longer replicate the problem. It seems the issue was caused by a mismatch between the objective (a mean) and the gradient I originally supplied (that of a sum), which mis-weighted the gradient of the objective function relative to the gradient of the logarithmic barrier term in the internal gradient.
However, I still think that returning Inf outside the boundary would be more robust than returning NaN.
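For reference, the proposed change amounts to a one-line edit of that internal objective; a sketch, assuming mu > 0 as in minimization, so infeasible points are reported as infinitely bad rather than undefined:
R <- function(theta, theta.old, ...) {
  ui.theta <- ui %*% theta
  gi <- ui.theta - ci
  if (any(gi < 0)) {
    return(sign(mu) * Inf)  # proposed: +Inf outside the feasible region instead of NaN
  }
  gi.old <- ui %*% theta.old - ci
  bar <- sum(gi.old * log(gi) - ui.theta)
  if (!is.finite(bar))
    bar <- -Inf
  f(theta, ...) - mu * bar
}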

Integrating beta-function-like function over compact support [0,alpha], alpha < 1

I would like to integrate the following function, named betalog:
g <- function(x,a,b){
  if (a < 0 | b < 0) stop()
  temp <- (a-1)*log(x) + (b-1)*log(1-x)
  return( exp(temp) )
}
betalog <- function(x,a,b)
{
  temp <- g(x=x,a=a,b=b) * log(x/(1-x))
  return( temp )
}
The function g is the integrand of the beta function. In theory, betalog should be integrable over any interval [0, alpha] with 0 < alpha < 1, provided a > 0 and b > 0.
However, I cannot numerically integrate betalog when a is very small:
a <- 0.00001
b <- 1
alpha <- 0.5
integrate(betalog,a=a,b=b,lower=0,upper=alpha,subdivisions=1000000L)
Error in integrate(betalog, a = a, b = b, lower = 0, upper = alpha, subdivisions =
1000000L) :
non-finite function value
In fact, I cannot even compute the incomplete beta function with R's integrate function when a is very small:
integrate(g,a=a,b=b,lower=0,upper=alpha,subdivisions=1000000L)
Error in integrate(g, a = a, b = b, lower = 0, upper = alpha, subdivisions = 1000000L) :
roundoff error is detected in the extrapolation table
Can anyone give me a tip for integrating such an incomplete-beta-like function in R?
> betalog(0, a, b)
[1] -Inf
Your function is singular at the lower bound. Recall that to compute an improper integral, you replace the singular bound with a dummy variable and take the limit from the appropriate side towards that bound. In particular,
> integrate(betalog,a=a,b=b,lower=0.000001,upper=alpha,subdivisions=10000000L)
-94.60292 with absolute error < 0.00014
> integrate(betalog,a=a,b=b,lower=.Machine$double.xmin * 1000,upper=alpha,subdivisions=10000000L)
-244894.7 with absolute error < 10
> integrate(betalog,a=a,b=b,lower=.Machine$double.xmin,upper=alpha,subdivisions=10000000L)
Error in integrate(betalog, a = a, b = b, lower = .Machine$double.xmin, :
non-finite function value
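One way to follow that prescription numerically is to shrink the lower bound and watch whether the values settle; a sketch (the eps grid and the names eps/vals are mine):
eps <- 10^-(6:15)
vals <- sapply(eps, function(e)
  integrate(betalog, lower = e, upper = alpha, a = a, b = b,
            subdivisions = 1000000L)$value)
cbind(eps, vals)  # values that keep growing as eps shrinks would suggest divergence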
I suspect that your integral diverges, but establishing that might be tricky, since even state-of-the-art symbolic algebra systems can't prove it:
http://www.wolframalpha.com/input/?i=Integral%28x%5E%280.00001+-1%29+ln%28x%2F%281-x%29%29%2C+x%2C0%2C+0.5%29
Whatever the case, R is not the correct tool for this problem.

Error in runs test: "missing value where TRUE/FALSE needed"

I'm fairly new to R, and I'm trying to simulate some data, fit a model to it, and do a runs test on the residuals. However, I get a strange error when performing the runs test.
Here is my code:
library(TSA)
d = arima.sim(list(ma=c(0.5)), n=5000)
model = arima(d, order=c(0, 0, 1), include.mean=FALSE)
runs(model$residuals)
The error is:
Error in if (pvalue > 0.5) pvalue <- 1 - pvalue :
missing value where TRUE/FALSE needed
What does this error mean?
I'm not familiar with this package, but I can tell that either your data is a special case or this package (the runs function, at least) needs a re-review.
pdf <- pdf/sum(pdf)                   # pdf contains Inf, so this becomes NaN
mu <- 1 + 2 * n1 * n2/(n1 + n2)
if (r1 <= mu)                         # this branch is taken
  pvalue <- sum(pdf[(1:l2) <= r1])    # pvalue is not calculated: pdf is all NaN
if (r1 > mu)                          # there should be some R versions without else...
  pvalue <- sum(pdf[(1:l2) >= r1])
if (pvalue > 0.5)                     # this gives you the error: pvalue is NaN
  pvalue <- 1 - pvalue
A few lines above, pdf picks up the Inf values:
for (i in seq(4, l2, 2)) {  # when i is 166, pdf gets its first Inf value
  r <- (i - 2)/2
  f[r + 1] <- (n1 - r) * (n2 - r)/r/r * f[r]
  pdf[i] <- f[r + 1]
}
I can't go any further, because I know neither what such data is supposed to look like nor what results you should get from the function. Try taking a look inside the function yourself; I've had enough :-) The missing else in the code above is not the only odd thing in it. Hope that helped.
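For what it's worth, the overflow in that recursion could be sidestepped by computing the null distribution on the log scale. A sketch using the classical Wald-Wolfowitz runs distribution (my assumption about what runs() targets; p_even and p_odd are hypothetical names), with lchoose keeping everything finite:
# P(R = 2k) and P(R = 2k+1) for n1 observations above and n2 below the reference value
p_even <- function(k, n1, n2)
  exp(log(2) + lchoose(n1-1, k-1) + lchoose(n2-1, k-1) - lchoose(n1+n2, n1))
p_odd <- function(k, n1, n2)
  exp(lchoose(n1-1, k) + lchoose(n2-1, k-1) - lchoose(n1+n2, n1)) +
  exp(lchoose(n1-1, k-1) + lchoose(n2-1, k) - lchoose(n1+n2, n1))
Each probability is assembled from log binomial coefficients, so no intermediate product can overflow the way f[r + 1] does in the package's loop.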
