Parameter estimation of a bivariate normal-lognormal mixture model in R

I have to create a model that is a mixture of a normal and a log-normal distribution. To build it, I need to estimate the two covariance matrices and the mixing parameter (7 parameters in total) by maximizing the log-likelihood function. This maximization has to be performed with the nlm routine.
As I use relative data, the means are known and equal to 1.
I have already done this in 1 dimension (with one set of relative data) and it works well. However, when I introduce the second set of relative data I get illogical results for the correlation and many warning messages (25 in total).
To estimate these parameters, I first defined the log-likelihood function using the two functions dmvnorm and dlnorm.rplus. Then I assigned starting values to the parameters and finally used the nlm routine to estimate them (see the script below).
P <- read.ascii.grid("d:/Documents/JOINT_FREQUENCY/grid_E727_P-3000.asc", return.header = FALSE)
V <- read.ascii.grid("d:/Documents/JOINT_FREQUENCY/grid_E727_V-3000.asc", return.header = FALSE)
p <- c(P)                  # transform matrix into a vector
v <- c(V)
p <- p[!is.na(p)]          # remove NA values
v <- v[!is.na(v)]
p_rel <- p/mean(p)         # transform the data to relative values
v_rel <- v/mean(v)
PV <- cbind(p_rel, v_rel)  # create a two-column matrix
L <- function(par, p_rel, v_rel) {
  return(-sum(log((1 - par[7]) * dmvnorm(PV, mean = c(1, 1),
                     sigma = matrix(c(par[1]^2, par[1]*par[2]*par[3],
                                      par[1]*par[2]*par[3], par[2]^2), nrow = 2, ncol = 2)) +
                   par[7] * dlnorm.rplus(PV, meanlog = c(1, 1),
                     varlog = matrix(c(par[4]^2, par[4]*par[5]*par[6],
                                       par[4]*par[5]*par[6], par[5]^2), nrow = 2, ncol = 2)))))
}
par.start <- c(0.74, 0.66, 0.40, 1.4, 1.2, 0.4, 0.5)  # starting values for the parameters
result <- nlm(L, par.start, v_rel = v_rel, p_rel = p_rel, hessian = TRUE, iterlim = 200, check.analyticals = TRUE)
Warning messages:
1: In log(eigen(sigma, symmetric = TRUE, only.values = TRUE)$values) :
NaNs produced
2: In sqrt(2 * pi * det(varlog)) : NaNs produced
3: In nlm(L, par.start, p_rel = p_rel, v_rel = v_rel, hessian = TRUE) :
NA/Inf replaced by maximum positive value
4: In log(eigen(sigma, symmetric = TRUE, only.values = TRUE)$values) :
NaNs produced
... and so on, up to warning 25.
par.hat <- result$estimate
cat("sigN_p =", par[1],"\n","sigN_v =", par[2],"\n","rhoN =", par[3],"\n","sigLN_p =", par [4],"\n","sigLN_v =", par[5],"\n","rhoLN =", par[6],"\n","mixing parameter =", par[7],"\n")
sigN_p = 0.5403361
sigN_v = 0.6667375
rhoN = 0.6260181
sigLN_p = 1.705626
sigLN_v = 1.592832
rhoLN = 0.9735974
mixing parameter = 0.8113369
Does anyone know what is wrong with my model, or how I should go about finding these parameters in 2 dimensions?
Thank you very much for taking the time to look at my question.
Regards,
Gladys Hertzog

When I do these kinds of optimization problems, I find that it's important to make sure that all the variables that I'm optimizing over are constrained to plausible values. For example, standard deviation variables have to be positive, and from knowledge of the situation that I'm modelling I'll probably be able to put an upper bound on all my standard deviation variables as well. So if s is one of my standard deviation variables, and if m is the maximum value that I want it to take, instead of working with s I'll solve for the variable z, which is related to s via
s = m / (1 + exp(-z))
In that formula, z is unconstrained, but s must lie between 0 and m. This is vital because optimization routines where the variables are not constrained to take plausible values will often try completely implausible values while they're trying to bound the solution. Implausible values often cause problems with e.g. precision, which then result in NaNs etc. The general formula that I use for constraining a single variable x to lie between a and b is
x = a + (b - a) / (1 + exp(-z))
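For instance, a minimal sketch of that reparameterisation in R (the helper names and the bound of 3 are purely illustrative):
to_bounded <- function(z, a, b) a + (b - a) / (1 + exp(-z))  # maps any real z into (a, b)
to_unbounded <- function(x, a, b) log((x - a) / (b - x))     # inverse, for converting starting values
to_bounded(0, 0, 3)       # midpoint of (0, 3), i.e. 1.5
to_unbounded(0.74, 0, 3)  # unconstrained value corresponding to a standard deviation of 0.74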
However, regarding your particular problem, where you're looking for covariance matrices, a more sophisticated approach is necessary than simply bounding all the individual variables. Covariance matrices must be positive semi-definite, so if you simply optimize over the individual values in the matrix, the optimization will probably fail (producing NaNs) whenever a matrix that isn't positive definite is fed into the likelihood function. To get around this problem, one approach is to solve for the Cholesky decomposition of the covariance matrix instead of the covariance matrix itself. My guess is that this is probably what's causing your optimization to fail.
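To illustrate that idea (a sketch only, not the poster's code; chol_to_cov and L2 are made-up names, and the dmvnorm/dlnorm.rplus calls are reused from the question):
chol_to_cov <- function(theta) {
  # build a 2x2 covariance matrix from the 3 free entries of an upper-triangular
  # Cholesky factor; the result is positive semi-definite for any real theta
  U <- matrix(c(theta[1], theta[2],
                0,        theta[3]), nrow = 2, byrow = TRUE)
  t(U) %*% U
}
L2 <- function(par, PV) {
  sigmaN  <- chol_to_cov(par[1:3])   # normal-component covariance
  sigmaLN <- chol_to_cov(par[4:6])   # lognormal-component covariance
  w       <- 1 / (1 + exp(-par[7]))  # mixing parameter constrained to (0, 1)
  -sum(log((1 - w) * dmvnorm(PV, mean = c(1, 1), sigma = sigmaN) +
             w * dlnorm.rplus(PV, meanlog = c(1, 1), varlog = sigmaLN)))
}
The 7 parameters passed to nlm are then all unconstrained, and the covariance matrices and mixing weight are recovered from the estimate with chol_to_cov and the logistic transform above.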

Related

Is there a correction I can apply to negative values within a probability matrix produced by matexpo {ape}?

I'm simulating discrete character data using the function rTraitDisc {ape} in R with a variety of model matrices. I've not encountered any issues with scaling when all state changes are possible. However, when I supply an ordered model with 8 or more possible states, the function breaks down and returns the error shown below:
## library
library(ape)
## read in tree
data("bird.orders")
## build model
model.matrix <- matrix(c(0,0.1,0,0,0,0,0,0,
0.1,0,0.1,0,0,0,0,0,
0,0.1,0,0.1,0,0,0,0,
0,0,0.1,0,0.1,0,0,0,
0,0,0,0.1,0,0.1,0,0,
0,0,0,0,0.1,0,0.1,0,
0,0,0,0,0,0.1,0,0.1,
0,0,0,0,0,0,0.1,0), 8)
## run function
rTraitDisc(phy = bird.orders, model = model.matrix)
Error message:
Error in sample.int(k, size = 1, FALSE, prob = p) : negative probability
Having dug a little deeper, it seems that when there are 8 or more states but only one possible transition (e.g. if the ancestral state is 0, only a transition to state 1 should be possible in an ordered matrix), the function matexpo produces a probability matrix with negative values for the shortest branch of the tree (0.5). As these probabilities are used by sample.int as the "prob" argument, the negative probabilities cause the function to break down.
## get number of states
k <- ncol(model.matrix)
## get equilibrium relative frequencies
freq = rep(1/k, k)
## match number of elements in model
freq <- rep(freq, each = k)
## get Q matrix
Q <- model.matrix * freq
diag(Q) <- 0
diag(Q) <- -rowSums(Q)
## get minimum edge length
min.el <- min(bird.orders$edge.length)
## run matexpo
matexpo(Q*min.el)
How do I deal with these negative values in this context? Is there a correction I can/should apply?

Trying to plot loglikelihood of Cauchy distribution for different values of theta in R

I am trying to plot the log-likelihood function of the Cauchy distribution for varying values of theta (location parameter). These are my observations:
obs<-c(1.77,-0.23,2.76,3.80,3.47,56.75,-1.34,4.24,3.29,3.71,-2.40,4.53,-0.07,-1.05,-13.87,-2.53,-1.74,0.27,43.21)
Here is my log-likelihood function:
ll_c<-function(theta,x_values){
n<-length(x_values)
logl<- -n*log(pi)-sum(log(1+(x_values-theta)^2))
return(logl)
}
and I've tried making a plot by using this code:
x<-seq(from=-10,to=10,by=0.1);length(x)
theta_null<-NULL
for (i in x){
theta_log<-ll_c(i,counts)
theta_null<-c(theta_null,theta_log)
}
plot(theta_null)
The graph does not look right and for some reason the length of x and theta_null differs.
I am assuming that theta is your location parameter (the scale is set to 1 in my example). You should obtain the same result using a t-distribution with 1 df and shifting the observations by theta. I left some comments in the code as guidance.
obs = c(1.77,-0.23,2.76,3.80,3.47,56.75,-1.34,4.24,3.29,3.71,-2.40,4.53,-0.07,-1.05,-13.87,-2.53,-1.74,0.27,43.21)
ll_c=function(theta, obs)
{
# Compute the log-likelihood for obs and a value of theta (location)
logl= sum(dcauchy(obs, location = theta, scale = 1, log = T))
return(logl)
}
# Loop for possible values of theta(obs given)
x = seq(from=-10,to=10,by=0.1)
ll = NULL
for (i in x)
{
ll = c(ll, ll_c(i, obs))
}
# Plot log-lik vs possible value of theta
plot(x, ll)
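As a quick check of the remark above that a t-distribution with 1 df applied to the shifted observations gives the same density as the Cauchy (the value of theta here is arbitrary):
theta <- 2
all.equal(dcauchy(obs, location = theta, scale = 1, log = TRUE),
          dt(obs - theta, df = 1, log = TRUE))  # should be TRUE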
It is hard to say exactly what you are experiencing without more info. But I'll make an educated guess.
First of all, we can simplify this a lot by using the *t family of functions for the t distribution, since the Cauchy distribution is just the t distribution with df = 1. So your calculations could have been done using
theta_null <- NULL
for (i in x)
  theta_null <- c(theta_null, sum(dt(obs, 1, ncp = i, log = TRUE)))
Note that the constant term involving n doesn't actually matter for any practical purposes. We are usually interested in minimizing/maximizing the likelihood, in which case all constants are irrelevant.
Now if we use this approach, we can quite quickly notice something by printing the values:
print(head(theta_null))
[1] -Inf -Inf -Inf -Inf -Inf -Inf
So I am assuming what you are experiencing is that many of your values are "almost" negative infinity, and maybe these are not stored correctly in your outcome vector. I can't see that this should be the case from your code, but this would be my initial guess.

Fitting a truncated binomial distribution to data in R

I have discrete count data indicating the number of successes in 10 binomial trials for a pilot sample of 46 cases. (Larger samples will follow once I have the analysis set up.) The zero class (no successes in 10 trials) is missing, i.e. each datum is an integer value between 1 and 10 inclusive. I want to fit a truncated binomial distribution with no zero class, in order to estimate the underlying probability p. I can do this adequately on an Excel spreadsheet using least squares with Solver, but because I want to calculate bootstrap confidence intervals on p, I am trying to implement it in R.
Frankly, I am struggling to understand how to code this. This is what I have so far:
d <- detections.data$x
# load required packages
library(fitdistrplus)
library(truncdist)
library(mc2d)
ptruncated.binom <- function(q, p) {
ptrunc(q, "binom", a = 1, b = Inf, p)
}
dtruncated.binom <- function(x, p) {
dtrunc(x, "binom", a = 1, b = Inf, p)
}
fit.tbin <- fitdist(d, "truncated.binom", method="mle", start=list(p=0.1))
I have had lots of error messages which I have solved by guesswork, but the latest one has me stumped and I suspect I am totally misunderstanding something.
Error in checkparamlist(arg_startfix$start.arg, arg_startfix$fix.arg, :
'start' must specify names which are arguments to 'distr'.
I think this means I must specify starting values for x in dtrunc and q in ptrunc, but I am really unclear what they should be.
Any help would be very gratefully received.

How to find the minimum floating-point value accepted by betareg package?

I'm doing a beta regression in R, which requires values between 0 and 1, endpoints excluded, i.e. (0,1) instead of [0,1].
I have some 0 and 1 values in my dataset, so I'd like to convert them to the smallest possible neighbor, such as 0.0000...0001 and 0.9999...9999. I've used .Machine$double.xmin (which gives me 2.225074e-308), but betareg() still gives an error:
invalid dependent variable, all observations must be in (0, 1)
If I use 0.000001 and 0.999999, I get a different set of errors:
1: In betareg.fit(X, Y, Z, weights, offset, link, link.phi, type, control) :
failed to invert the information matrix: iteration stopped prematurely
2: In sqrt(wpp) :
Error in chol.default(K) :
the leading minor of order 4 is not positive definite
Only if I use 0.0001 and 0.9999 can I run without errors. Is there any way I can improve on these minimum values with betareg? Or should I just be happy with that?
Try it with eps (the displacement from 0 and 1) first equal to 1e-4 (as you have here) and then with 1e-3. If the results of the models don't differ in any way you care about, that's great. If they do, you need to be very careful, because it suggests your answers will be very sensitive to assumptions.
In the example below, the dispersion parameter phi changes a lot, but the intercept and slope parameters don't change very much.
If you do find that the parameters change by a worrying amount for your particular data, then you need to think harder about the process by which zeros and ones arise, and model that process appropriately, e.g.
a censored-data model: zeros/ones arise through a minimum/maximum detection threshold; model the zero/one values as actually lying somewhere in the tails; or
a hurdle/zero-one inflation model: zeros and ones arise through a separate process from the rest of the data; use a binomial or multinomial model to characterize zero vs. (0,1) vs. one, then use a Beta regression on the (0,1) component (a rough sketch follows below).
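As an illustration only (not part of the original answer), a minimal version of the hurdle idea, reusing the dd data frame simulated in the sample-data block further down:
dd$zero <- as.numeric(dd$y == 0)                             # indicator for the zero class
m_zero  <- glm(zero ~ x, family = binomial, data = dd)       # model P(y == 0)
m_beta  <- betareg(y ~ x, data = subset(dd, y > 0 & y < 1))  # Beta regression on the interior values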
Questions about these steps are probably more appropriate for CrossValidated than for SO.
sample data
set.seed(101)
library(betareg)
dd <- data.frame(x=rnorm(500))
rbeta2 <- function(n, prob=0.5, d=1) {
rbeta(n, shape1=prob*d, shape2=(1-prob)*d)
}
dd$y <- rbeta2(500,plogis(1+5*dd$x),d=1)
dd$y[dd$y<1e-8] <- 0
trial fitting function
ss <- function(eps) {
dd <- transform(dd,
y=pmin(1-eps,pmax(eps,y)))
m <- try(betareg(y~x,data=dd))
if (inherits(m,"try-error")) return(rep(NA,3))
return(coef(m))
}
ss(0) ## fails
ss(1e-8) ## fails
ss(1e-4)
## (Intercept) x (phi)
## 0.3140810 1.5724049 0.7604656
ss(1e-3) ## also fails
ss(1e-2)
## (Intercept) x (phi)
## 0.2847142 1.4383922 1.3970437
ss(5e-3)
## (Intercept) x (phi)
## 0.2870852 1.4546247 1.2029984
try it for a range of values
evec <- seq(-4,-1,length=51)
res <- t(sapply(evec, function(e) ss(10^e)) )
library(ggplot2)
ggplot(data.frame(e=10^evec,reshape2::melt(res)),
aes(e,value,colour=Var2))+
geom_line()+scale_x_log10()

Conducting MLE for multivariate case (bivariate normal) in R

The case is that I am trying to construct an MLE algorithm for a bivariate normal case. Yet, I am stuck somewhere: there seems to be no error, but when I run the script it ends up with a warning.
I have a sample of size n (a fixed constant; I tried with 100, but it can be anything else) from a bivariate normal distribution with mean vector = (0,0) and covariance matrix = matrix(c(2.2,1.8,1.8,3),2,2)
I've tried several optimization functions (including nlm(), mle(), spg() and optim()) to maximize the likelihood function (,or minimize neg-likelihood), but there are warnings or errors.
require(MASS)
require(tmvtnorm)
require(BB)
require(matrixcalc)
I've defined the first likelihood function as follows:
bvrt_ll = function(mu,sigma,rho,sample)
{
n = nrow(sample)
mu_hat = c(mu[1],mu[2])
p = length(mu)
if(sigma[1]>0 && sigma[2]>0)
{
if(rho<=1 && rho>=-1)
{
sigma_hat = matrix(c(sigma[1]^2
,sigma[1]*sigma[2]*rho
,sigma[1]*sigma[2]*rho
,sigma[2]^2),2,2)
stopifnot(is.positive.definite(sigma_hat))
neg_likelihood = (n*p/2)*log(2*pi) + (n/2)*log(det(sigma_hat)) + 0.5*sum(((sample-mu_hat)%*%solve(sigma_hat)%*%t(sample-mu_hat)))
return(neg_likelihood)
}
}
else NA
}
I preferred this one since I could set the constraints for the sigmas and rho, but when I use mle()
> mle(minuslogl = bvrt_ll ,start = list(mu = mu_est,sigma=sigma_est,rho =
rho_est)
+ ,method = "BFGS")
Error in optim(start, f, method = method, hessian = TRUE, ...) :
(list) object cannot be coerced to type 'double'
I also tried nlm and spg from package BB, but they did not help either. When I tried the same function without defining the constraints (inside the likelihood, not in the optimization function), I got some results, but with warnings: both nlm and spg reported that the process failed because the covariance matrix was not positive definite, even though it was. I think that happened during iteration, when the intermediate covariance matrix may not have been positive definite, and because I did not define the constraints.
Thus, I need to construct an MLE algorithm for the bivariate normal; where is my mistake?
NOTE: I also tried the optimization functions with the following (I am not sure I did it correctly):
neg_likelihood = function(mu,sigma,rho)
{
if(rho>=-1 && rho<=1)
{
-sum(mvtnorm::dmvnorm(x=sample_10,mean=mu
,sigma = matrix(c(sigma[1]^2
,sigma[1]*sigma[2]*rho,sigma[1]*sigma[2]*rho
,sigma[2]^2),2,2),log = T))
}
else NA
}
Any help is appreciated.
Thanks.
EDIT: mu is a vector of length 2 specifying the population means, sigma is a vector of length 2 specifying the population standard deviations of the random variables, and rho is a scalar, the correlation coefficient between the two random variables.
You can do it in closed form, so there is no need for numerical optimization. See wiki. Just use colMeans and cov, and take note of the method argument in help("cov") and this comment:
The denominator n - 1 is used which gives an unbiased estimator of the
(co)variance for i.i.d. observations. These functions return NA when
there is only one observation (whereas S-PLUS has been returning NaN),
and fail if x has length zero.
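For instance, a minimal sketch of that closed-form estimate (the simulated sample and seed are only illustrative; note that cov divides by n - 1, so it is rescaled to obtain the MLE, which divides by n):
library(MASS)
set.seed(1)
X <- mvrnorm(100, mu = c(0, 0), Sigma = matrix(c(2.2, 1.8, 1.8, 3), 2, 2))
n <- nrow(X)
mu_hat    <- colMeans(X)           # MLE of the mean vector
sigma_hat <- cov(X) * (n - 1) / n  # rescale the unbiased estimator to the MLE
mu_hat
sigma_hat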
