R: extract parameter estimates from an object of class 'mle'

I was wondering how one extracts the estimated parameters stored in an R object of class 'mle'.
Here is an example:
x <- matrix(rnorm(300), ncol = 3)
x[x > 1] <- 1
require(tmvtnorm)
fit1 <- mle.tmvnorm(X = x, lower = rep(-Inf, 3), upper = rep(1, 3))
Now, fit1 is an object of class:
class(fit1)
[1] "mle"
attr(,"package")
[1] "stats4
"
fit1 itself gives me:
fit1
Call:
mle(minuslogl = function (mu_1 = 0, mu_2 = 0, mu_3 = 0, sigma_1.1 = 1,
    sigma_1.2 = 0, sigma_1.3 = 0, sigma_2.2 = 1, sigma_2.3 = 0,
    sigma_3.3 = 1)
{
    nf <- names(formals())
    theta <- sapply(nf, function(x) {
        eval(parse(text = x))
    })
    mean <- theta[1:n]
    if (cholesky) {
        L <- inv_vech(theta[-(1:n)])
        L[lower.tri(L, diag = FALSE)] <- 0
        sigma <- t(L) %*% L
    }
    else {
        sigma <- inv_vech(theta[-(1:n)])
    }
    if (det(sigma) <= 0 || any(diag(sigma) < 0)) {
        return(.Machine$integer.max)
    }
    f <- -(sum(dmvnorm(X, mean, sigma, log = TRUE)) - nrow(X) *
        log(pmvnorm(lower = lower, upper = upper, mean = mean,
            sigma = sigma)))
    if (is.infinite(f) || is.na(f)) {
        return(.Machine$integer.max)
    }
    f
}, start = as.list(c(0, 0, 0, 1, 0, 0, 1, 0, 1)), method = "BFGS",
    fixed = list())
Coefficients:
mu_1 mu_2 mu_3 sigma_1.1 sigma_1.2 sigma_1.3
0.64218198 1.51720543 0.97047201 1.73395947 -0.03889188 0.14627774
sigma_2.2 sigma_2.3 sigma_3.3
2.18020597 0.38822509 1.49854600
My question is: how do I extract these coefficients from the object fit1?
Thanks again for your time, and for your help in answering this question!

coef is a generic function which extracts model coefficients from objects returned by modeling functions. coefficients is an alias for it.
Usage
coef(object, ...)
coefficients(object, ...)
So, coef(fit1) should work; since 'mle' is an S4 class, you can also read the slot directly with fit1@coef.
https://stat.ethz.ch/R-manual/R-devel/library/stats/html/coef.html
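For example, with the fit above (the numbers match the printed output and will vary with the simulated data):
coef(fit1)
      mu_1       mu_2       mu_3  sigma_1.1  sigma_1.2  sigma_1.3
0.64218198 1.51720543 0.97047201 1.73395947 -0.03889188 0.14627774
 sigma_2.2  sigma_2.3  sigma_3.3
2.18020597 0.38822509 1.49854600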

Sorry for this silly question: I will keep it just in case someone ends up looking.
fit1@coef
mu_1 mu_2 mu_3 sigma_1.1 sigma_1.2 sigma_1.3
0.64218198 1.51720543 0.97047201 1.73395947 -0.03889188 0.14627774
sigma_2.2 sigma_2.3 sigma_3.3
2.18020597 0.38822509 1.49854600
solves the query. Duh!
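For completeness: stats4 also provides vcov() and summary() methods for 'mle' objects, so the following work as well:
vcov(fit1)     # estimated variance-covariance matrix of the parameters
summary(fit1)  # coefficients with standard errors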

Related

MCMC for estimating negative binomial distribution

I want to estimate the parameters of a negative binomial distribution using the MCMC Metropolis-Hastings algorithm. In other words, I have a sample:
set.seed(42)
y <- rnbinom(20, size = 3, prob = 0.2)
and I want to write an algorithm that estimates the size and prob parameters.
My work so far
I defined the prior distribution of size as Poisson:
prior_r <- function(r) {
  return(dpois(r, lambda = 2, log = TRUE))
}
And the prior distribution of prob as uniform on [0, 1]:
prior_prob <- function(prob) {
  return(dunif(prob, min = 0, max = 1, log = TRUE))
}
Moreover, for simplicity, I defined the log-likelihood and joint probability functions:
loglikelihood <- function(data, r, prob) {
  loglikelihoodValue <- sum(dnorm(data, mean = r, sd = prob, log = TRUE))
  return(loglikelihoodValue)
}
joint <- function(r, prob) {
  data <- y
  return(loglikelihood(data, r, prob) + prior_r(r) + prior_prob(prob))
}
Finally, the whole algorithm:
run_mcmc <- function(startvalue, iterations) {
  chain <- array(dim = c(iterations + 1, 2))
  chain[1, ] <- startvalue
  for (i in 1:iterations) {
    proposal_r <- rpois(1, lambda = chain[i, 1])
    proposal_prob <- chain[i, 2] + runif(1, min = -0.2, max = 0.2)
    quotient <- joint(proposal_r, proposal_prob) - joint(chain[i, 1], chain[i, 2])
    if (runif(1, 0, 1) < min(1, exp(quotient))) {
      chain[i + 1, ] <- c(proposal_r, proposal_prob)
    } else {
      chain[i + 1, ] <- chain[i, ]
    }
  }
  return(chain)
}
The problem
The problem is that even when I run it with starting values very close to the correct ones:
iterations <- 2000
startvalue <- c(4, 0.25)
res <- run_mcmc(startvalue, iterations)
I obtain a posterior distribution which is obviously wrong. For example:
> colMeans(res)
[1] 11.963018 0.994533
As you can see, size ends up very close to 12, and prob ends up at 1.
Do you know what is causing this?
Your log-likelihood uses dnorm, i.e., a normal likelihood, rather than the negative binomial one. Change dnorm to dnbinom, and also fix the proposal for prob so that it cannot step outside (0, 1):
set.seed(42)
y <- rnbinom(20, size = 3, prob = 0.2)
prior_r <- function(r) {
  return(dpois(r, lambda = 2, log = TRUE))
}
prior_prob <- function(prob) {
  return(dunif(prob, min = 0, max = 1, log = TRUE))
}
loglikelihood <- function(data, r, prob) {
  loglikelihoodValue <- sum(dnbinom(data, size = r, prob = prob, log = TRUE))
  return(loglikelihoodValue)
}
joint <- function(r, prob) {
  return(loglikelihood(y, r, prob) + prior_r(r) + prior_prob(prob))
}
run_mcmc <- function(startvalue, iterations) {
  chain <- array(dim = c(iterations + 1, 2))
  chain[1, ] <- startvalue
  for (i in 1:iterations) {
    proposal_r <- rpois(1, lambda = chain[i, 1])
    proposal_prob <- chain[i, 2] + runif(1, min = max(-0.2, -chain[i, 2]), max = min(0.2, 1 - chain[i, 2]))
    quotient <- joint(proposal_r, proposal_prob) - joint(chain[i, 1], chain[i, 2])
    if (runif(1, 0, 1) < min(1, exp(quotient))) {
      chain[i + 1, ] <- c(proposal_r, proposal_prob)
    } else {
      chain[i + 1, ] <- chain[i, ]
    }
  }
  return(chain)
}
iterations <- 2000
startvalue <- c(4, 0.25)
res <- run_mcmc(startvalue, iterations)
colMeans(res)
#> [1] 3.1009495 0.1988177
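One refinement worth adding, not part of the answer above: the first draws still reflect the starting value, so it is common to drop a burn-in period before summarizing the chain. A minimal sketch (the burn-in length is a hypothetical choice; tune it to your chain):
burn <- 500
colMeans(res[-(1:burn), ])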

R code for simulating stochastic asset price path

Consider the following model for the evolution of an asset's price: x_t = x_{t-1} * exp(mu + sigma * z_t), where z_t is +1 with probability p and -1 with probability 1 - p.
This is what I have done (in R). I could not find a function that randomly outputs +1 or -1, so I decided to adapt the built-in rbinom function.
## This code is in R
rm(list = ls())
library(dplyr)
library(dint)
library(magrittr)
library(stats)
path = function(T, mu, sigma, p, x0) {
  x = rep(NA, T)
  x[1] = x0
  for(i in 2:T){
    z = if_else(rbinom(1, 1, p) == 0, -1, 1)
    x[i] = x[i-1] * exp(mu + sigma*z)
  }
  return(x)
}
## Just some testing
x_sim = path(T = 4, mu = 0, sigma = 0.01, p = 0.5, x0 = 100)
## Actual answer
Np = 10000
mc = matrix(nrow = 17, ncol = Np)
for(j in 1:Np){
  mc[,j] = path(T = 17, mu = 0, sigma = 0.01, p = 0.5, x0 = 100)
}
test = mc[2:nrow(mc), ] >= 100
sum_test = colSums(test)
comp = sum(sum_test >= 1)/length(sum_test)
prob = 1 - comp
Does this make sense? Any help/tips/advice would be much appreciated. Thanks!
Staying close to your code, I came up with the following. Intuitively, the probability should be rather low given these parameters; I get a probability of about 6.7%, which is roughly what I get if I run your code with the parameters from the assignment.
simpath <- function(t, mu, sigma, p, x0, seed){
  # set seed
  if(!missing(seed)){
    set.seed(seed)
  }
  # set up matrix for storing the results
  res <- matrix(c(1:t, rep(NA, t*2)), ncol = 3)
  colnames(res) <- c('t', 'z_t', 'x_t')
  res[, 'z_t'] <- sample(c(1, -1), size = t, prob = c(p, 1-p), replace = TRUE)
  res[1, 3] <- x0
  for(i in 2:t){
    res[i, 3] <- res[i-1, 3] * exp(mu + sigma*res[i, 2])
  }
  return(res)
}
x_sim <- simpath(t = 4, mu = 0, sigma = 0.01, p = 0.5, x0 = 100, seed = 123)
x_sim2 <- simpath(t = 36, mu = 0, sigma = 0.03, p = 0.5, x0 = 100, seed = 123)
## Actual answer
Np <- 100000
mc <- matrix(nrow = 36, ncol = Np)
for (j in 1:Np){
  mc[, j] <- simpath(t = 36, mu = 0, sigma = 0.03, p = 0.5, x0 = 100)[, 3]
}
test <- mc > 100
sum_test <- colSums(test)
comp <- sum(sum_test == 0)/length(sum_test)
prob <- comp
> prob
[1] 0.06759
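As a side note, the whole experiment can be vectorized. Below is a sketch of my own (not from the answer above) that builds all paths at once via cumulative sums of the log-increments; it should give a probability close to the one above:
set.seed(123)
Np <- 100000; t <- 36; mu <- 0; sigma <- 0.03; x0 <- 100
# one column per path, t - 1 random +1/-1 steps each
z <- matrix(sample(c(1, -1), (t - 1) * Np, replace = TRUE), ncol = Np)
# x_t = x0 * exp(cumulative sum of mu + sigma * z_t)
paths <- x0 * exp(apply(mu + sigma * z, 2, cumsum))
# probability that a path never exceeds x0 after the start
mean(colSums(paths > x0) == 0)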

4D Monte Carlo integration in R

How do I integrate a function
F(X[1], X[2], X[3], X[4])
of four variables over four dimensions using the Monte Carlo method? I mean
\int_0^1 \int_0^1 \int_0^1 \int_0^1 X_1 X_2 X_3 X_4 \, dX_1 \, dX_2 \, dX_3 \, dX_4
Update
The function is:
library(kdensity)  # for kdensity()
data1 <- rnorm(100, 0, 1)
data2 <- rnorm(100, 0.1, 0.5)
data3 <- rnorm(100, 0.2, 0.8)
data4 <- rnorm(100, 0.3, 0.9)
kernel1 <- kdensity(data1, kernel = 'gaussian')
kernel2 <- kdensity(data2, kernel = 'gaussian')
kernel3 <- kdensity(data3, kernel = 'gaussian')
kernel4 <- kdensity(data4, kernel = 'gaussian')
f <- function(X) {
  return(X[1] * kernel1(X[1]) * kernel2(X[2]) * kernel3(X[3]) * kernel4(X[4]))
}
and I want to integrate it:
\int_0^1 \int_0^1 \int_0^1 \int_0^1 f \, dX_1 \, dX_2 \, dX_3 \, dX_4
Because the integrand factors into a product of univariate functions, you can rewrite the nested integral as a product of one-dimensional integrals.
Thus, the solution below may serve as an example (with random seed set.seed(1)):
g <- Vectorize(function(ker) {integrate(ker,0,1)}$value)
gE <- function(ker) {integrate(function(x) x*ker(x),0,1)}$value
res1 <- prod(c(gE(kernel1),g(c(kernel2,kernel3,kernel4))))
such that
> res1
[1] 0.01559343
Nested Integral: you need to rewrite your function f first, i.e.,
f <- function(x1, x2, x3, x4) {
  return(x1 * kernel1(x1) * kernel2(x2) * kernel3(x3) * kernel4(x4))
}
res2 <- integrate(Vectorize(function(x4)
  integrate(Vectorize(function(x3, x4)
    integrate(Vectorize(function(x2, x3, x4)
      integrate(f, 0, 1, x2, x3, x4)$value),
      0, 1, x3, x4)$value),
    0, 1, x4)$value),
  0, 1)$value
such that
> res2
[1] 0.01559343
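Since the question asks for Monte Carlo specifically, here is a minimal plain Monte Carlo sketch for the same integral. On [0, 1]^4 the volume is 1, so the integral is simply the mean of the integrand at uniformly drawn points (this assumes kernel1, ..., kernel4 accept vector input, which kdensity estimates do):
set.seed(1)
N <- 1e6
u1 <- runif(N); u2 <- runif(N); u3 <- runif(N); u4 <- runif(N)
# mean of the integrand at uniform points estimates the integral
res3 <- mean(u1 * kernel1(u1) * kernel2(u2) * kernel3(u3) * kernel4(u4))
res3  # should agree with res1 and res2 up to Monte Carlo error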

How use an expression object as input in sum function in R

I want to use the content of my input object inside my min.RSS function, but I get the following error:
Error in sum((input[[1]])) : invalid 'type' (language) of argument
I don't understand why sum is not able to read the content of input[[1]], while it works when I copy and paste the content of input[[1]] directly into sum.
Here is my code:
var = mtcars
var$mpg2 = var$mpg
var$mpg = NULL
mDegree = 10
mList = c()
for (i in 1:mDegree){
  mList = c(mList, paste0('var[[', i, ']]*'))
}
mListBis = c()
for (j in 1:mDegree){
  mListBis = c(mListBis, paste0('par[', j + 1, ']+'))
}
mChar = "par[1]+"
for (k in 1:mDegree){
  mChar = paste0(mChar, mList[k], mListBis[k])
}
mChar = strtrim(mChar, nchar(mChar) - 1)
mChar = paste0(mChar, '-', 'var[[', i + 1, ']]')
mChar = paste0('(', mChar, ')^2')
input = parse(text = mChar)
min.RSS <- function(data, par) {
  with(data,
       sum((input[[1]])))
}
#########################################################
#########################################################
(result <- optim(par = c(0, 1, 0, 1, 1, 0, 1, 0, 1, 0), min.RSS, data = var))
My main objective is to make the following code more flexible:
mtcars <- data.frame(mtcars)
dat <- data.frame(y = mtcars$mpg, x = mtcars$wt)
min.RSS <- function(data, par) {
  with(data, sum((par[1] + par[2] * x - y)^2))
}
(result <- optim(par = c(0, 1), min.RSS, data = dat))
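For the record, here is a sketch of one way to fix the error above (no answer was given in the thread): sum() cannot take an unevaluated language object, so the parsed expression has to be evaluated with eval() first. Called inside min.RSS, eval() finds par in the function frame and var through the enclosing (global) environment. Note also that the generated expression references par[1] through par[11], so optim needs 11 starting values:
min.RSS <- function(data, par) {
  # evaluate the parsed expression; `par` is local, `var` is found globally
  sum(eval(input[[1]]))
}
(result <- optim(par = rep(0, 11), min.RSS, data = var))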

Automatically solve an equation of `pt` for `ncp`

I wonder if it is possible to efficiently change ncp in the below code such that x becomes .025 and .975 (within rounding error).
x <- pt(q = 5, df = 19, ncp = ?)
----------
Clarification
q = 5 and df = 19 (above) are just two hypothetical numbers; q and df could be any other two numbers. What I expect is a function/routine that takes q and df as input.
What is wrong with uniroot?
f <- function (ncp, alpha) pt(q = 5, df = 19, ncp = ncp) - alpha
par(mfrow = c(1,2))
curve(f(ncp, 0.025), from = 5, to = 10, xname = "ncp", main = "0.025")
abline(h = 0)
curve(f(ncp, 0.975), from = 0, to = 5, xname = "ncp", main = "0.975")
abline(h = 0)
So for 0.025 case, the root lies in (7, 8); for 0.975 case, the root lies in (2, 3).
uniroot(f, c(7, 8), alpha = 0.025)$root
#[1] 7.476482
uniroot(f, c(2, 3), alpha = 0.975)$root
#[1] 2.443316
---------
(After some discussion...)
OK, now I see your ultimate goal. You want to implement this equation solver as a function, with input q and df. So they are unknown, but fixed. They might come out of an experiment.
Ideally, if there were an analytical solution, i.e., if ncp could be written as a formula in terms of q, df and alpha, that would be great. However, this is not possible for the t-distribution.
A numerical solution is the way to go, but uniroot is not a great option for this purpose, as it relies on a "plot, view, guess, specify" workflow. The answer by loki (below) is also crude, though an improvement: it is a grid search with a fixed step size, starting from a value near 0, say 0.001, increasing it step by step and checking the approximation error.
This suggests numerical optimization with a Newton or quasi-Newton method. In the 1D case we can use the function optimize, which adapts its step size during the search and therefore converges faster than a fixed-step search.
Let's define our function as:
ncp_solver <- function (alpha, q, df) {
  ## objective function: we minimize squared approximation error
  obj_fun <- function (ncp, alpha = alpha, q = q, df = df) {
    (pt(q = q, df = df, ncp = ncp) - alpha) ^ 2
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-37.62, 37.62), alpha = alpha, q = q, df = df)
  ## post processing
  oo <- unlist(oo, use.names = FALSE)  ## list to numerical vector
  oo[2] <- sqrt(oo[2])  ## squared error to absolute error
  ## return
  setNames(oo, c("ncp", "abs.error"))
}
Note, -37.62 / 37.62 is chosen as lower / upper bound for ncp, as it is the maximum supported by t-distribution in R (read ?dt).
For example, let's try this function with q = 5 and df = 19, as given in your question:
ncp_solver(alpha = 0.025, q = 5, df = 19)
# ncp abs.error
#7.476472e+00 1.251142e-07
The result is a named vector, with ncp and absolute approximation error.
Similarly we can do:
ncp_solver(alpha = 0.975, q = 5, df = 19)
# ncp abs.error
#2.443347e+00 7.221928e-07
----------
Follow up
Is it possible for alpha in ncp_solver() to take c(.025, .975) together?
Why not wrap it up for "vectorization":
sapply(c(0.025, 0.975), ncp_solver, q = 5, df = 19)
# [,1] [,2]
#ncp 7.476472e+00 2.443347e+00
#abs.error 1.251142e-07 7.221928e-07
How come 0.025 gives the upper bound of the confidence interval, while 0.975 gives the lower bound? Should this relationship be reversed?
No surprise. By default pt computes the lower-tail probability. If you want the "right" relationship, set lower.tail = FALSE in pt:
ncp_solver <- function (alpha, q, df) {
  ## objective function: we minimize squared approximation error
  obj_fun <- function (ncp, alpha = alpha, q = q, df = df) {
    (pt(q = q, df = df, ncp = ncp, lower.tail = FALSE) - alpha) ^ 2
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-37.62, 37.62), alpha = alpha, q = q, df = df)
  ## post processing
  oo <- unlist(oo, use.names = FALSE)  ## list to numerical vector
  oo[2] <- sqrt(oo[2])  ## squared error to absolute error
  ## return
  setNames(oo, c("ncp", "abs.error"))
}
Now you see:
ncp_solver(0.025, 5, 19)[[1]] ## use "[[" not "[" to drop name
#[1] 2.443316
ncp_solver(0.975, 5, 19)[[1]]
#[1] 7.476492
--------
Bug report and fix
It has been reported to me that the above ncp_solver is unstable. For example:
ncp_solver(alpha = 0.025, q = 0, df = 98)
# ncp abs.error
#-8.880922 0.025000
But on the other hand, if we double check with uniroot here:
f <- function (ncp, alpha) pt(q = 0, df = 98, ncp = ncp, lower.tail = FALSE) - alpha
curve(f(ncp, 0.025), from = -3, to = 0, xname = "ncp"); abline(h = 0)
uniroot(f, c(-2, -1.5), 0.025)$root
#[1] -1.959961
So there is clearly something wrong with ncp_solver.
Well, it turns out that we cannot use too wide a bound like c(-37.62, 37.62); if we narrow it to c(-35, 35), it works.
Also, to avoid tolerance problems, we can change the objective function from squared error to absolute error:
ncp_solver <- function (alpha, q, df) {
  ## objective function: we minimize absolute approximation error
  obj_fun <- function (ncp, alpha = alpha, q = q, df = df) {
    abs(pt(q = q, df = df, ncp = ncp, lower.tail = FALSE) - alpha)
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-35, 35), alpha = alpha, q = q, df = df)
  ## post processing and return
  oo <- unlist(oo, use.names = FALSE)  ## list to numerical vector
  setNames(oo, c("ncp", "abs.error"))
}
ncp_solver(alpha = 0.025, q = 0, df = 98)
# ncp abs.error
#-1.959980e+00 9.190327e-07
Damn, this is a pretty annoying bug. But relax now.
Report on warning messages from pt
I have also received reports of annoying warning messages from pt:
ncp_solver(0.025, -5, 19)
# ncp abs.error
#-7.476488e+00 5.760562e-07
#Warning message:
#In pt(q = q, df = df, ncp = ncp, lower.tail = FALSE) :
# full precision may not have been achieved in 'pnt{final}'
I am not too sure what is going on here, but so far I have not observed misleading results. Therefore, I decided to suppress those warnings from pt using suppressWarnings:
ncp_solver <- function (alpha, q, df) {
  ## objective function: we minimize absolute approximation error
  obj_fun <- function (ncp, alpha = alpha, q = q, df = df) {
    abs(suppressWarnings(pt(q = q, df = df, ncp = ncp, lower.tail = FALSE)) - alpha)
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-35, 35), alpha = alpha, q = q, df = df)
  ## post processing and return
  oo <- unlist(oo, use.names = FALSE)  ## list to numerical vector
  setNames(oo, c("ncp", "abs.error"))
}
ncp_solver(0.025, -5, 19)
# ncp abs.error
#-7.476488e+00 5.760562e-07
OK, quiet now.
You could use two while loops like this:
i <- 0.001
lowerFound <- FALSE
while(!lowerFound){
  x <- pt(q = 5, df = 19, ncp = i)
  if (round(x, 3) == 0.025){
    lowerFound <- TRUE
    print(paste("Lower is", i))
    lower <- i
  } else {
    i <- i + 0.0005
  }
}
i <- 0.001
upperFound <- FALSE
while(!upperFound){
  x <- pt(q = 5, df = 19, ncp = i)
  if (round(x, 3) == 0.975){
    upperFound <- TRUE
    print(paste("Upper is", i))
    upper <- i
  } else {
    i <- i + 0.0005
  }
}
c(Lower = lower, Upper = upper)
# Lower Upper
# 7.4655 2.4330
Of course, you can adapt the increment in i <- i + ... or change the check if (round(x, ...) == ...) to fit your specific accuracy needs.
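For instance, a vectorized variant of the same grid search (my sketch; it relies on pt being vectorized over ncp) avoids the explicit loops:
grid <- seq(0.001, 20, by = 0.0005)
vals <- pt(q = 5, df = 19, ncp = grid)
c(Lower = grid[which.min(abs(vals - 0.025))],
  Upper = grid[which.min(abs(vals - 0.975))])
# picks the grid points nearest the uniroot solutions (about 7.4765 and 2.4435)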
I know this is an old question, but there is now a one-line solution to this problem using the conf.limits.nct() function in the MBESS package.
install.packages("MBESS")
library(MBESS)
result <- conf.limits.nct(t.value = 5, df = 19)
result
$Lower.Limit
[1] 2.443332
$Prob.Less.Lower
[1] 0.025
$Upper.Limit
[1] 7.476475
$Prob.Greater.Upper
[1] 0.025
$Lower.Limit is the ncp for which pt equals 0.975; $Upper.Limit is the ncp for which pt equals 0.025:
pt(q = 5, df = 19, ncp = result$Lower.Limit)
[1] 0.975
pt(q = 5, df = 19, ncp = result$Upper.Limit)
[1] 0.025
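As far as I recall, conf.limits.nct() also accepts a conf.level argument (defaulting to 0.95, which corresponds exactly to the 0.025/0.975 pair above), so other tail probabilities should be reachable the same way; please verify against ?conf.limits.nct:
# hypothetical usage; check the argument name in ?conf.limits.nct
conf.limits.nct(t.value = 5, df = 19, conf.level = 0.90)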
