Generating/simulating numbers from the truncated GPD in R

I'm trying to generate a random sample from the truncated GPD (Generalized Pareto Distribution). In order to do just that, I've begun with writing the CDF and quantile functions of the GPD:
## CDF of the GPD
pGPD <- function(q, xi = 1, mu = 0, beta = 1, lower.tail = TRUE) {
  shape    <- xi
  location <- mu
  scale    <- beta
  # Probability:
  p <- .pepd(q, location, scale, shape, lower.tail)
  # Return value:
  p
}

## Quantile function (inverse of the CDF) of the GPD
qGPD <- function(p, xi = 1, mu = 0, beta = 1, lower.tail = TRUE) {
  shape    <- xi
  location <- mu
  scale    <- beta
  # Quantiles:
  q <- .qepd(p, location, scale, shape, lower.tail)
  # Return value:
  q
}

## Generate random numbers from the GPD
rGPD <- function(n, xi = 1, mu = 0, beta = 1) {
  shape    <- xi
  location <- mu
  scale    <- beta
  # Random variates:
  r <- .repd(n, location, scale, shape)
  # Return value:
  r
}
.pepd <- function(q, location = 0, scale = 1, shape = 0, lower.tail = TRUE) {
  # Check:
  stopifnot(min(scale) > 0)
  stopifnot(length(shape) == 1)
  # Probability:
  q <- pmax(q - location, 0) / scale
  if (shape == 0) {
    p <- 1 - exp(-q)
  } else {
    p <- pmax(1 + shape * q, 0)
    p <- 1 - p^(-1 / shape)
  }
  # Lower tail:
  if (!lower.tail)
    p <- 1 - p
  # Return value:
  p
}

.qepd <- function(p, location = 0, scale = 1, shape = 0, lower.tail = TRUE) {
  # Check:
  stopifnot(min(scale) > 0)
  stopifnot(length(shape) == 1)
  stopifnot(min(p, na.rm = TRUE) >= 0)
  stopifnot(max(p, na.rm = TRUE) <= 1)
  # Lower tail:
  if (lower.tail)
    p <- 1 - p
  # Quantiles:
  if (shape == 0) {
    q <- location - scale * log(p)
  } else {
    q <- location + scale * (p^(-shape) - 1) / shape
  }
  # Return value:
  q
}

.repd <- function(n, location = 0, scale = 1, shape = 0) {
  # Check:
  stopifnot(min(scale) > 0)
  stopifnot(length(shape) == 1)
  # Random variates:
  if (shape == 0) {
    r <- location + scale * rexp(n)
  } else {
    r <- location + scale * (runif(n)^(-shape) - 1) / shape
  }
  # Return value:
  r
}
This all works perfectly. Now I want to generate numbers from the truncated GPD, and to do that I've used the following relation:
Q_{tGPD}(p) = Q_{GPD}(p * F_{GPD}(T)),
where Q denotes the quantile function of the distribution in its subscript and F_{GPD}(T) is the CDF of the GPD evaluated at the truncation point T. Using this, I've written the following code:
## Quantiles of the truncated GPD
qtGPD <- function(p, q, xi = 1, mu = 0, beta = 1, lower.tail = TRUE) {
  ans <- qGPD(p * pGPD(q, xi, mu, beta, lower.tail),
              xi, mu, beta, lower.tail)
  print(paste0("Generated from the ", 100 * p, "th% quantile"))
  return(ans)
}

## Random numbers from the truncated GPD
rtGPD <- function(n, q, xi = 1, mu = 0, beta = 1, lower.tail = TRUE) {
  qtGPD(p = runif(n), q, xi, mu, beta, lower.tail)
}
But now, if I want to generate numbers from the GPD truncated at, for example, the 99% quantile, using rtGPD doesn't work, because my p value keeps changing. So what am I doing wrong, or how can I fix this? All I want is to generate numbers from the truncated GPD at the 99% quantile, or at the 97.5% quantile, or... you get the idea.
Thanks in advance!
EDIT: For example, if you run the following code:
set.seed(10)
A= rGPD(10)
sort(A)
qtGPD(0.99,2)
rtGPD(10,2)
You should get a vector A of random values from the GPD, which can be bigger than 1, as expected.
With the command qtGPD(0.99, 2), one obtains
[1] "Generated from the 99th% quantile"
[1] 1.941176
which is also OK. But if you then run rtGPD(10,2), a function that I want to give me random values for the truncated GPD, you get different values for p in runif(10), all generated from different quantiles. I just want to generate/simulate random numbers for the truncated GPD at a certain quantile, for example the 99% quantile. But this code isn't letting me do that.
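As an aside (my addition, not part of the original question), the varying p values are what inverse-transform sampling requires: p is drawn from runif(n) while the truncation point q stays fixed, so every simulated value lands below q. A quick check, assuming the definitions above:
## Quick check of the truncated sampler (illustrative)
set.seed(10)
B <- rtGPD(10, q = 2)   # the print() inside qtGPD() shows one entry per random p
sort(B)
max(B) < 2              # TRUE: all draws lie below the truncation point q = 2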

Related

MCMC for estimating negative binomial distribution

I want to estimate the parameters of a negative binomial distribution using the Metropolis-Hastings MCMC algorithm. In other words, I have the sample:
set.seed(42)
y <- rnbinom(20, size = 3, prob = 0.2)
and I want to write an algorithm that estimates the size and prob parameters.
My work so far
I defined the prior distribution of size as Poisson:
prior_r <- function(r) {
  return(dpois(r, lambda = 2, log = TRUE))
}
and the prior distribution of prob as uniform on [0, 1]:
prior_prob <- function(prob) {
  return(dunif(prob, min = 0, max = 1, log = TRUE))
}
Moreover, for simplicity, I defined the log-likelihood and joint probability functions:
loglikelihood <- function(data, r, prob) {
  loglikelihoodValue <- sum(dnorm(data, mean = r, sd = prob, log = TRUE))
  return(loglikelihoodValue)
}
joint <- function(r, prob) {
  data <- y
  return(loglikelihood(data, r, prob) + prior_r(r) + prior_prob(prob))
}
Finally, the whole algorithm:
run_mcmc <- function(startvalue, iterations) {
  chain <- array(dim = c(iterations + 1, 2))
  chain[1, ] <- startvalue
  for (i in 1:iterations) {
    proposal_r <- rpois(1, lambda = chain[i, 1])
    proposal_prob <- chain[i, 2] + runif(1, min = -0.2, max = 0.2)
    quotient <- joint(proposal_r, proposal_prob) - joint(chain[i, 1], chain[i, 2])
    if (runif(1, 0, 1) < min(1, exp(quotient))) {
      chain[i + 1, ] <- c(proposal_r, proposal_prob)
    } else {
      chain[i + 1, ] <- chain[i, ]
    }
  }
  return(chain)
}
The problem
The problem I'm having is that even when I run it with starting values very close to the correct ones:
iterations <- 2000
startvalue <- c(4, 0.25)
res <- run_mcmc(startvalue, iterations)
I obtain a posterior distribution that is obviously wrong. For example:
> colMeans(res)
[1] 11.963018 0.994533
As you can see, size ends up very close to 12 and prob very close to 1.
Do you know what causes this?
Change dnorm in loglikelihood to dnbinom and fix the proposal for prob so it doesn't go outside (0,1):
set.seed(42)
y <- rnbinom(20, size = 3, prob = 0.2)

prior_r <- function(r) {
  return(dpois(r, lambda = 2, log = TRUE))
}

prior_prob <- function(prob) {
  return(dunif(prob, min = 0, max = 1, log = TRUE))
}

loglikelihood <- function(data, r, prob) {
  loglikelihoodValue <- sum(dnbinom(data, size = r, prob = prob, log = TRUE))
  return(loglikelihoodValue)
}

joint <- function(r, prob) {
  return(loglikelihood(y, r, prob) + prior_r(r) + prior_prob(prob))
}

run_mcmc <- function(startvalue, iterations) {
  chain <- array(dim = c(iterations + 1, 2))
  chain[1, ] <- startvalue
  for (i in 1:iterations) {
    proposal_r <- rpois(1, lambda = chain[i, 1])
    proposal_prob <- chain[i, 2] +
      runif(1, min = max(-0.2, -chain[i, 2]), max = min(0.2, 1 - chain[i, 2]))
    quotient <- joint(proposal_r, proposal_prob) - joint(chain[i, 1], chain[i, 2])
    if (runif(1, 0, 1) < min(1, exp(quotient))) {
      chain[i + 1, ] <- c(proposal_r, proposal_prob)
    } else {
      chain[i + 1, ] <- chain[i, ]
    }
  }
  return(chain)
}

iterations <- 2000
startvalue <- c(4, 0.25)
res <- run_mcmc(startvalue, iterations)
colMeans(res)
#> [1] 3.1009495 0.1988177
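A possible follow-up (my addition, not part of the original answer): before reading off posterior means, it is common to drop a burn-in period and inspect trace plots of the chain; a minimal sketch assuming res is the chain returned above:
## Simple chain diagnostics (illustrative)
burnin <- 500
post <- res[-(1:burnin), ]                    # discard burn-in iterations

par(mfrow = c(1, 2))
plot(post[, 1], type = "l", ylab = "size")    # trace plot for size
plot(post[, 2], type = "l", ylab = "prob")    # trace plot for prob

colMeans(post)                                # posterior means after burn-in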

My variogram code gives a different result from variog()

I am writing code to produce a variogram. To validate my result, I checked it against geoR::variog(), but the two variograms are different.
I tried to read the code of variog() to see what happens under the hood, but there is so much going on that I can't follow it. My code uses the parameters X-coordinate, Y-coordinate, data value, number of lags, minimum lag value, lag interval, azimuth (angle in degrees; 90 corresponds to the vertical direction), angle tolerance (in degrees) and maximum bandwidth.
variogram <- function(xcor, ycor, data, nlag, minlag, laginv, azm, atol, maxbandw) {
  dl <- length(data)
  lowangle <- azm - atol
  upangle <- azm + atol
  gamlag <- integer(nlag)
  n <- integer(nlag)
  dist <- pairdist(xcor, ycor)
  maxd <- max(dist)
  llag <- seq(minlag, minlag + (nlag - 1) * laginv, by = laginv)
  hlag <- llag + laginv
  for (i in 1:dl) {
    for (j in i:dl) {
      if (i != j) {
        if (xcor[j] - xcor[i] == 0)
          theta <- 90
        else
          theta <- 180 / pi * atan((ycor[j] - ycor[i]) / (xcor[j] - xcor[i]))
        for (k in 1:nlag) {
          d <- dist[j, i]
          b <- abs(d * sin(theta - azm))
          if ((llag[k] <= d & d < hlag[k]) & (lowangle <= theta & theta < upangle) & (b <= maxbandw)) {
            gamlag[k] <- gamlag[k] + (data[i] - data[j])^2
            n[k] <- n[k] + 1
          }
        }
      }
    }
  }
  gamlag <- ifelse(n == 0, NA, gamlag / (2 * n))
  tmp <- data.frame("lag" = llag, "gamma" = gamlag)
  return(tmp)
}
Function call for the above code:
ideal_variogram_2 <- variogram(data3[,1], data3[,2], data3[,3], 18, 0, 0.025, 90, 45, 1000000)
ideal_variogram_2 <- na.omit(ideal_variogram_2)
plot(ideal_variogram_2$lag, ideal_variogram_2$gamma, main = "Using my code")
Function call for variog():
geodata1 <- as.geodata(data3, coords.col = 1:2, data.col = 3)
ideal_variogram_1 <- variog(geodata1, coords = geodata1$coords, data = geodata1$data, option = "bin", uvec = seq(0, 0.45, by = 0.025), direction = pi/2, tolerance = pi/4)
df <- data.frame(u = ideal_variogram_1$u, v = ideal_variogram_1$v)
plot(df$u, df$v, main = "Using variog()")
The two variograms that I got are at the following link: Variogram
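One way to inspect the discrepancy more directly (my addition, not part of the original question) is to overlay the two empirical variograms on a single plot; a minimal sketch, assuming ideal_variogram_1 and ideal_variogram_2 exist as computed above (this only visualizes the difference, it does not explain it):
## Overlay both empirical variograms for a direct visual comparison (sketch)
plot(ideal_variogram_1$u, ideal_variogram_1$v,
     pch = 16, xlab = "lag", ylab = "semivariance",
     main = "variog() vs. custom code")
points(ideal_variogram_2$lag, ideal_variogram_2$gamma, pch = 1, col = "red")
legend("topleft", legend = c("geoR::variog()", "custom variogram()"),
       pch = c(16, 1), col = c("black", "red"))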

Integrate quadratic B-splines in R

I am working with a function that depends on a quadratic B-spline interpolation estimated up front by the cobs function from the R package of the same name. The estimated knots and corresponding coefficients are given in the code below.
Further on, I require the integral of this function from 0 to some value, for example 0.6 or 0.7. Since my function is strictly positive, the integral should increase as the upper bound increases. However, this is not the case for some values, as shown below for 0.6 and 0.7:
library(cobs)
b <- 0.6724027
xi1 <- 0.002541667
xi2 <- 2.509625
knots <- c(5.000010e-06, 8.700000e-05, 3.420000e-04, 1.344000e-03, 5.292000e-03, 2.082900e-02, 8.198800e-02, 3.227180e-01, 1.270272e+00, 5.000005e+00)
coef <- c(2.509493, 2.508141, 2.466733, 2.378368, 2.239769, 2.063977, 1.874705, 1.601780, 1.288163, 1.262683, 1.432729)
fn <- function(x) {
  z <- (2 - b) * (cobs:::.splValue(2, knots, coef, x, 0) - 2 * x * xi1) / xi2 - b
  return(z)
}
x <- seq(0, 0.7, 0.0001)
plot(x, fn(x), type = 'l')
integrate(f = fn, 0, 0.6)
# 0.1049019 with absolute error < 1.2e-15
integrate(f = fn, 0, 0.7)
# 0.09714124 with absolute error < 1.1e-15
I know I could integrate the cobs:::.splValue function directly and transform the result accordingly. However, I am interested in why this strange behaviour occurs.
I think the algorithm used by integrate() is not behaving well under these conditions. For example, if you modify the lower limit, it works as expected:
> integrate(f = fn, 0.1, 0.6)
0.06794357 with absolute error < 7.5e-16
> integrate(f = fn, 0.1, 0.7)
0.07432096 with absolute error < 8.3e-16
This is common with numerical integration methods; you have to choose a method on a case-by-case basis.
I'm using the trapezoidal rule to integrate over the same region and it works well (original code):
composite.trapezoid <- function(f, a, b, n) {
  if (is.function(f) == FALSE) {
    stop('f must be a function with one parameter (variable)')
  }
  h <- (b - a) / n
  j <- 1:(n - 1)
  xj <- a + j * h
  approx <- (h / 2) * (f(a) + 2 * sum(f(xj)) + f(b))
  return(approx)
}
> composite.trapezoid(f = fn, 0, 0.6, 10000)
[1] 0.1079356
> composite.trapezoid(f = fn, 0, 0.7, 10000)
[1] 0.1143195
If we analyze the behavior of the integral close to the 0.65 region, we can see that there is a problem with the first approach (it is not smooth):
tst <- sapply(seq(0.5, 0.8, length.out = 100), function(upper) {
  integrate(f = fn, 0, upper)[[1]]
})
plot(seq(0.5, 0.8, length.out = 100), tst)
and that the trapezoid rule behaves better:
tst2 <- sapply(seq(0.5, 0.8, length.out = 100), function(upper) {
  composite.trapezoid(f = fn, 0, upper, 10000)[[1]]
})
plot(seq(0.5, 0.8, length.out = 100), tst2)
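A further workaround along the same lines (my addition, not part of the original answer): since integrate() uses adaptive quadrature that can overlook variation concentrated near the spline knots close to zero, one can split the integration range at the knots and sum the pieces; a sketch assuming fn and knots are defined as above, and whether this fully resolves the discrepancy here would still need to be checked:
## Split the range at the spline knots so integrate() must visit each piece (sketch)
breaks <- sort(unique(c(0, knots[knots < 0.7], 0.7)))
pieces <- sapply(seq_len(length(breaks) - 1), function(i) {
  integrate(f = fn, lower = breaks[i], upper = breaks[i + 1])$value
})
sum(pieces)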

How to insert a numeric gradient inside constrOptim

I was doing maximum likelihood estimation using optim() and it was quite easy. It's a generalized logistic distribution with 4 parameters and a couple of restrictions, all listed in the likelihood function:
genlogis.loglikelihood <- function(param = c(sqrt(2/pi), 0.5, 2, 0), x) {
  if (length(param) < 3 | length(param) > 4) {
    stop('Incorrect number of parameters: param = c(a,b,p,location)')
  }
  if (length(param) == 3) {
    # warning('Location parameter is set to 0')
    location <- 0
  }
  if (length(param) == 4) {
    location <- param[4]
  }
  a <- param[1]
  b <- param[2]
  p <- param[3]
  if (!missing(a)) {
    if (a < 0) {
      stop('The argument "a" must be positive.')
    }
  }
  if (!missing(b)) {
    if (b < 0) {
      stop('The argument "b" must be positive.')
    }
  }
  if (!missing(p)) {
    if (p < 0) {
      stop('The argument "p" must be positive.')
    }
  }
  if (p == 0 && b > 0 && a > 0) {
    stop('If "p" equals 0, "b" or "a" must be 0, otherwise there is an identifiability problem.')
  }
  if (b == 0 && a == 0) {
    stop('The distribution is not defined for "a" and "b" equal to 0 simultaneously.')
  }
  z <- sum(log((a + b * (1 + p) * abs(x - location)^p) *
                 exp(-((x - location) * (a + b * abs(x - location)^p))))) -
       sum(2 * log(exp(-((x - location) * (a + b * abs(x - location)^p))) + 1))
  if (!is.finite(z)) {
    z <- 1e+20
  }
  return(-z)
}
I wrote its likelihood function and it worked flawlessly this way:
opt <- function(parameters, data) {
  optim(par = parameters, fn = genlogis.loglikelihood, x = data,
        lower = c(0.00001, 0.00001, 0.00001, -Inf),
        upper = c(Inf, Inf, Inf, Inf), method = 'L-BFGS-B')
}
opt(c(0.3, 1.01, 2.11, 3.5), faithful$eruptions)
Since optim() computes the gradient numerically, I had no real problem there.
Then I wanted to switch to constrOptim(), because the boundaries on the first three parameters are actually 0 and not a small number. The problem I face is that the grad argument has to be specified, and I can't differentiate the function by hand to supply a gradient, so I have to compute it numerically as in optim(). It works if I set grad = NULL, but then I get the Nelder-Mead method and I want BFGS.
I've tried it this way, but without much success:
opt2 <- function(initial, data) {
  ui <- rbind(c(1, 0, 0, 0), c(0, 1, 0, 0), c(0, 0, 1, 0))
  ci <- c(0, 0, 0)
  constrOptim(theta = initial, f = genlogis.loglikelihood(param, x),
              grad = numDeriv::grad(func = function(x, param) genlogis.loglikelihood(param, x),
                                    param = theta, x = data),
              x = data, ui = ui, ci = ci)
}
Your notation is a bit complicated, maybe that confused you.
opt2 <- function(parameters, data) {
  fn <- function(p) genlogis.loglikelihood(p, x = data)
  gr <- function(p) numDeriv::grad(fn, p)
  ui <- rbind(c(1, 0, 0, 0), c(0, 1, 0, 0), c(0, 0, 1, 0))
  ci <- c(0, 0, 0)
  constrOptim(theta = parameters, f = fn, grad = gr,
              ui = ui, ci = ci, method = "BFGS")
}
opt2(c(0.3, 1.01, 2.11, 3.5), faithful$eruptions)
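As a small usage note (my addition): constrOptim() returns the same kind of list as optim(), so the fit from the call above can be inspected like this:
## Inspect the constrained fit (illustrative)
fit <- opt2(c(0.3, 1.01, 2.11, 3.5), faithful$eruptions)
fit$par          # estimated (a, b, p, location)
fit$value        # negative log-likelihood at the optimum
fit$convergence  # 0 indicates successful convergence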

Automatically solve an equation of `pt` for `ncp`

I wonder if it is possible to efficiently change ncp in the below code such that x becomes .025 and .975 (within rounding error).
x <- pt(q = 5, df = 19, ncp = ?)
----------
Clarification
q = 5 and df = 19 (above) are just two hypothetical numbers; q and df could be any other two numbers. What I expect is a function/routine that takes q and df as input.
What is wrong with uniroot?
f <- function (ncp, alpha) pt(q = 5, df = 19, ncp = ncp) - alpha
par(mfrow = c(1,2))
curve(f(ncp, 0.025), from = 5, to = 10, xname = "ncp", main = "0.025")
abline(h = 0)
curve(f(ncp, 0.975), from = 0, to = 5, xname = "ncp", main = "0.975")
abline(h = 0)
So for 0.025 case, the root lies in (7, 8); for 0.975 case, the root lies in (2, 3).
uniroot(f, c(7, 8), alpha = 0.025)$root
#[1] 7.476482
uniroot(f, c(2, 3), alpha = 0.975)$root
#[1] 2.443316
---------
(After some discussion...)
OK, now I see your ultimate goal. You want to implement this equation solver as a function, with input q and df. So they are unknown, but fixed. They might come out of an experiment.
Ideally there would be an analytical solution, i.e., ncp could be written as a formula in terms of q, df and alpha; that would be great. However, this is not possible for the t-distribution.
A numerical solution is the way to go, but uniroot is not a great option for this purpose, as it relies on a "plot - view - guess - specify" workflow. The answer by loki is also crude, though an improvement: it is a grid search with a fixed step size. Start from a value near 0, say 0.001, keep increasing it and check the approximation error, and stop when the error fails to decrease.
This naturally leads to the idea of numerical optimization with a Newton or quasi-Newton method. In the 1D case, we can use the function optimize. It uses a variable step size in its search, so it converges faster than a fixed-step-size search.
Let's define our function as:
ncp_solver <- function(alpha, q, df) {
  ## objective function: we minimize the squared approximation error
  obj_fun <- function(ncp, alpha = alpha, q = q, df = df) {
    (pt(q = q, df = df, ncp = ncp) - alpha)^2
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-37.62, 37.62), alpha = alpha, q = q, df = df)
  ## post processing
  oo <- unlist(oo, use.names = FALSE)  ## list to numeric vector
  oo[2] <- sqrt(oo[2])                 ## squared error to absolute error
  ## return
  setNames(oo, c("ncp", "abs.error"))
}
Note that -37.62 / 37.62 is chosen as the lower / upper bound for ncp, as that is the maximum supported by the t-distribution in R (see ?dt).
For example, let's try this function with q = 5 and df = 19, as given in your question:
ncp_solver(alpha = 0.025, q = 5, df = 19)
# ncp abs.error
#7.476472e+00 1.251142e-07
The result is a named vector, with ncp and absolute approximation error.
Similarly we can do:
ncp_solver(alpha = 0.975, q = 5, df = 19)
# ncp abs.error
#2.443347e+00 7.221928e-07
----------
Follow up
Is it possible for alpha in ncp_solver() to take c(.025, .975) at once?
Why not wrap it up for "vectorization":
sapply(c(0.025, 0.975), ncp_solver, q = 5, df = 19)
# [,1] [,2]
#ncp 7.476472e+00 2.443347e+00
#abs.error 1.251142e-07 7.221928e-07
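Equivalently (my addition), base R's Vectorize() can build a wrapper that accepts a vector of alpha values directly:
## Vectorize over alpha (illustrative)
ncp_solver_vec <- Vectorize(ncp_solver, vectorize.args = "alpha")
ncp_solver_vec(alpha = c(0.025, 0.975), q = 5, df = 19)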
How come 0.025 gives the upper bound of the confidence interval, while 0.975 gives the lower bound? Shouldn't this relationship be reversed?
No surprise. By default, pt computes the lower-tail probability. If you want the "right" relationship, set lower.tail = FALSE in pt:
ncp_solver <- function(alpha, q, df) {
  ## objective function: we minimize the squared approximation error
  obj_fun <- function(ncp, alpha = alpha, q = q, df = df) {
    (pt(q = q, df = df, ncp = ncp, lower.tail = FALSE) - alpha)^2
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-37.62, 37.62), alpha = alpha, q = q, df = df)
  ## post processing
  oo <- unlist(oo, use.names = FALSE)  ## list to numeric vector
  oo[2] <- sqrt(oo[2])                 ## squared error to absolute error
  ## return
  setNames(oo, c("ncp", "abs.error"))
}
Now you see:
ncp_solver(0.025, 5, 19)[[1]] ## use "[[" not "[" to drop name
#[1] 2.443316
ncp_solver(0.975, 5, 19)[[1]]
#[1] 7.476492
--------
Bug report and fix
It was reported to me that the above ncp_solver is unstable. For example:
ncp_solver(alpha = 0.025, q = 0, df = 98)
# ncp abs.error
#-8.880922 0.025000
On the other hand, if we double-check with uniroot here:
f <- function (ncp, alpha) pt(q = 0, df = 98, ncp = ncp, lower.tail = FALSE) - alpha
curve(f(ncp, 0.025), from = -3, to = 0, xname = "ncp"); abline(h = 0)
uniroot(f, c(-2, -1.5), 0.025)$root
#[1] -1.959961
So there is clearly something wrong with ncp_solver.
It turns out that we cannot use too wide a bound like c(-37.62, 37.62). If we narrow it to c(-35, 35), it works fine.
Also, to avoid tolerance problems, we can change the objective function from squared error to absolute error:
ncp_solver <- function(alpha, q, df) {
  ## objective function: we minimize the absolute approximation error
  obj_fun <- function(ncp, alpha = alpha, q = q, df = df) {
    abs(pt(q = q, df = df, ncp = ncp, lower.tail = FALSE) - alpha)
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-35, 35), alpha = alpha, q = q, df = df)
  ## post processing and return
  oo <- unlist(oo, use.names = FALSE)  ## list to numeric vector
  setNames(oo, c("ncp", "abs.error"))
}
ncp_solver(alpha = 0.025, q = 0, df = 98)
# ncp abs.error
#-1.959980e+00 9.190327e-07
Damn, this is a pretty annoying bug. But relax now.
Report on warning messages from pt
I also received some reports of annoying warning messages from pt:
ncp_solver(0.025, -5, 19)
# ncp abs.error
#-7.476488e+00 5.760562e-07
#Warning message:
#In pt(q = q, df = df, ncp = ncp, lower.tail = FALSE) :
# full precision may not have been achieved in 'pnt{final}'
I am not too sure what is going on here, but so far I have not observed misleading results. Therefore, I decided to suppress those warnings from pt using suppressWarnings:
ncp_solver <- function(alpha, q, df) {
  ## objective function: we minimize the absolute approximation error
  obj_fun <- function(ncp, alpha = alpha, q = q, df = df) {
    abs(suppressWarnings(pt(q = q, df = df, ncp = ncp, lower.tail = FALSE)) - alpha)
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-35, 35), alpha = alpha, q = q, df = df)
  ## post processing and return
  oo <- unlist(oo, use.names = FALSE)  ## list to numeric vector
  setNames(oo, c("ncp", "abs.error"))
}
ncp_solver(0.025, -5, 19)
# ncp abs.error
#-7.476488e+00 5.760562e-07
OK, quiet now.
You could use two while loops like this:
i <- 0.001
lowerFound <- FALSE
while (!lowerFound) {
  x <- pt(q = 5, df = 19, ncp = i)
  if (round(x, 3) == 0.025) {
    lowerFound <- TRUE
    print(paste("Lower is", i))
    lower <- i
  } else {
    i <- i + 0.0005
  }
}

i <- 0.001
upperFound <- FALSE
while (!upperFound) {
  x <- pt(q = 5, df = 19, ncp = i)
  if (round(x, 3) == 0.975) {
    upperFound <- TRUE
    print(paste("Upper is", i))
    upper <- i
  } else {
    i <- i + 0.0005
  }
}

c(Lower = lower, Upper = upper)
# Lower Upper
# 7.4655 2.4330
Of course, you can adapt the increment in i <- i + ... or change the check if (round(x, ...) == ...) to fit your specific accuracy needs.
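To connect this with the clarification in the question (my addition, not part of the original answer), the same grid search can be wrapped into a function that takes q and df as input; a minimal sketch, with ncp_grid_search as a hypothetical name and a cap on the number of steps so the loop cannot run forever:
## Grid search for ncp generalized to arbitrary q and df (illustrative sketch)
ncp_grid_search <- function(target, q, df, start = 0.001, step = 0.0005,
                            digits = 3, max_steps = 2e5) {
  ncp <- start
  for (k in seq_len(max_steps)) {
    if (round(pt(q = q, df = df, ncp = ncp), digits) == target) return(ncp)
    ncp <- ncp + step
  }
  stop("No ncp found in the scanned range; adjust start, step or max_steps.")
}

ncp_grid_search(0.025, q = 5, df = 19)  # approx. 7.4655, as in the loop above
ncp_grid_search(0.975, q = 5, df = 19)  # approx. 2.4330, as in the loop above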
I know this is an old question, but there is now a one-line solution to this problem using the conf.limits.nct() function in the MBESS package.
install.packages("MBESS")
library(MBESS)
result <- conf.limits.nct(t.value = 5, df = 19)
result
$Lower.Limit
[1] 2.443332
$Prob.Less.Lower
[1] 0.025
$Upper.Limit
[1] 7.476475
$Prob.Greater.Upper
[1] 0.025
$Lower.Limit is the result where pt = 0.975
$Upper.Limit is the result where pt = 0.025
pt(q = 5, df = 19, ncp = result$Lower.Limit)
[1] 0.975
pt(q = 5, df = 19, ncp = result$Upper.Limit)
[1] 0.025
