How to use the gamma distribution equation in R

I am using R to fit a GLM with a Gamma distribution (inverse link). I would like to use the model equation to obtain values of my predictors given a known response value. I know that the equation of a Gamma GLM with inverse link is 1/μ = b0 + b1*x_1i; can you confirm that I should substitute the mean of my response values for μ?
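For reference: in a Gamma GLM, μ is the conditional mean of the response given the predictors, so the fitted relation 1/μ = b0 + b1*x can be inverted directly. A minimal sketch, assuming a fitted model fit <- glm(y ~ x, family = Gamma(link = "inverse")) (names here are illustrative, not from the question):
b  <- coef(fit)                  # b[1] = b0 (intercept), b[2] = b1 (slope)
mu <- 5                          # a known mean response, for illustration
x_for_mu <- (1/mu - b[1])/b[2]   # solve 1/mu = b0 + b1*x for the predictor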

Related

How do I specify the dispersion parameter when computing the confidence interval for a GLM?

I have a model of exponential decay in the form Y = exp{a + bX + cW}. In R, I represent this as a generalized linear model (GLM) using a gamma random component with log link function.
fitted <- glm(Y ~ X + W, family=Gamma(link='log'))
I know from this post that for the standard errors to really represent an exponential rather than gamma random component, I need to specify the dispersion parameter as being 1 when I call summary.
summary(fitted, dispersion=1)
summary(fitted) # not the same!
Now, I want to find the 95% confidence intervals for my estimates of a, b, c. However, there seems to be no way to pass the dispersion parameter to confint, even though it should affect the confidence intervals (because it affects the standard errors).
confint(fitted)
confint(fitted, dispersion=1) # same as the last confint :(
So, in order to get the confidence intervals corresponding to an exponential rather than gamma random component, how do I specify the dispersion parameter when computing the confidence interval for a GLM?
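One workaround (a sketch, not from the original post) is to note that confint() on a GLM profiles the likelihood, and to build Wald-type intervals by hand instead, taking the standard errors from the summary with the dispersion fixed at 1:
est <- coef(summary(fitted, dispersion = 1))           # estimates and SEs at dispersion = 1
lower <- est[, "Estimate"] + qnorm(0.025) * est[, "Std. Error"]
upper <- est[, "Estimate"] + qnorm(0.975) * est[, "Std. Error"]
cbind(lower, upper)                                    # Wald 95% intervals for a, b, c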

Inverse of the lognormal distribution

I need to find the inverse of a given lognormal distribution.
Since there is no inbuilt function in R for inverse lognormal, I need to design my own.
I have this lognormal distribution for a random variable 'x'
f_lambda <- function(x, mu, sig) {dlnorm(x, meanlog = mu, sdlog = sig, log = FALSE)}  # lognormal density f(x)
On wikipedia it says
G(y) = 1- F(1/y)
where G(y) is the inverse distribution to F(x) and X = 1/Y.
But I am confused as to how to encode F(1/y) in R, and what to use to define that distribution - mu or 1/mu.
I have estimates of mu and sigma for F(x).
Thanks in advance.
In general, the quantile function is the inverse of a cumulative distribution function. This really means:
Q(p) = F^{-1}(p), i.e. F(Q(p)) = p for p in (0, 1)
which means that to find the inverse of the lognormal distribution you can use
qlnorm()
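For example (illustrative values of mu and sig, not from the question):
mu <- 0; sig <- 1                              # hypothetical parameter estimates
x <- qlnorm(0.95, meanlog = mu, sdlog = sig)   # x such that F(x) = 0.95
plnorm(x, meanlog = mu, sdlog = sig)           # recovers 0.95, confirming the inverse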

How does lmer (from the R package lme4) compute log likelihood?

I'm trying to understand the function lmer. I've found plenty of information about how to use the command, but not much about what it's actually doing (save for some cryptic comments here: http://www.bioconductor.org/help/course-materials/2008/PHSIntro/lme4Intro-handout-6.pdf). I'm playing with the following simple example:
library(data.table)
library(lme4)
options(digits = 15)
n <- 1000   # number of observations
m <- 100    # number of groups
data <- data.table(id = sample(1:m, n, replace = TRUE), key = "id")
b <- rnorm(m)                         # true random group effects
data$y <- b[data$id] + rnorm(n)*0.1   # response: group effect plus noise
fitted <- lmer(y ~ (1|id), data = data, verbose = TRUE)
fitted
I understand that lmer is fitting a model of the form Y_{ij} = beta + B_i + epsilon_{ij}, where epsilon_{ij} and B_i are independent normals with variances sigma^2 and tau^2 respectively. If theta = tau/sigma is fixed, I computed the unbiased, minimum-variance estimate for beta to be
c = sum_{i,j} alpha_i y_{ij}
where
alpha_i = lambda/(1 + theta^2 n_i)
lambda = 1/[\sum_i n_i/(1+theta^2 n_i)]
n_i = number of observations from group i
I also computed the following unbiased estimate for sigma^2:
s^2 = \sum_{i,j} alpha_i (y_{ij} - c)^2 / (1 + theta^2 - lambda)
These estimates seem to agree with what lmer produces. However, I can't figure out how log likelihood is defined in this context. I calculated the probability density to be
pd(Y_{ij}=y_{ij}) = \prod_{i,j}[f_sigma(y_{ij}-ybar_i)]
* prod_i[f_{sqrt(sigma^2/n_i+tau^2)}(ybar_i-beta) sigma sqrt(2 pi/n_i)]
where
ybar_i = \sum_j y_{ij}/n_i (the mean of observations in group i)
f_sigma(x) = 1/(sqrt{2 pi} sigma) exp(-x^2/(2 sigma^2)) (normal density with sd sigma)
But log of the above is not what lmer produces. How is log likelihood computed in this case (and for bonus marks, why)?
Edit: Changed notation for consistency, struck out incorrect formula for the standard deviation estimate.
The links in the comments contained the answer. Below I've put what the formulae simplify to in this simple example, since the results are somewhat intuitive.
lmer fits a model of the form Y_{ij} = beta + B_i + epsilon_{ij}, where epsilon_{ij} and B_i are independent normals with variances sigma^2 and tau^2 respectively. The joint probability density of Y and B is therefore
pd(Y = y, B = b) = \prod_i [ f_tau(b_i) \prod_j f_sigma(y_{ij} - beta - b_i) ]
where
f_s(x) = 1/(sqrt{2 pi} s) exp(-x^2/(2 s^2)) (normal density with sd s).
The likelihood is obtained by integrating this with respect to b (which isn't observed) to give
L(beta, sigma, tau) = \prod_{i,j} f_sigma(y_{ij} - ybar_i) * \prod_i [ sigma sqrt(2 pi/n_i) f_{sqrt(sigma^2/n_i + tau^2)}(ybar_i - beta) ]
where n_i is the number of observations from group i, and ybar_i is the mean of observations from group i. This is somewhat intuitive, since the first term captures spread within each group, which should have variance sigma^2, and the second captures the spread between groups. Note that sigma^2/n_i + tau^2 is the variance of ybar_i.
However, by default (REML=T) lmer maximises not the likelihood but the "REML criterion", obtained by additionally integrating this with respect to beta to give
L_R(sigma, tau) = L(c, sigma, tau) * sigma sqrt(2 pi lambda)
where lambda is as defined in the question and c is given below.
Maximising likelihood (REML=F)
If theta = tau/sigma is fixed, we can explicitly find the beta and sigma which maximise the likelihood. They turn out to be
beta_hat = c = \sum_{i,j} alpha_i y_{ij}
sigma_hat^2 = (1/n) [ \sum_{i,j} (y_{ij} - ybar_i)^2 + \sum_i n_i (ybar_i - c)^2 / (1 + theta^2 n_i) ]
Note sigma_hat^2 has two terms, for variation within and between groups, and beta_hat is somewhere between the mean of the y_{ij} and the mean of the ybar_i, depending on the value of theta.
Substituting these into the likelihood, we can express the log likelihood l in terms of theta only:
l(theta) = -(n/2) log(2 pi sigma_hat^2) - (1/2) \sum_i log(1 + theta^2 n_i) - n/2
lmer iterates to find the value of theta which minimises the deviance -2l. In the output, -2l and l are shown in the fields "deviance" and "logLik" (if REML=F) respectively.
Maximising restricted likelihood (REML=T)
Since the REML criterion doesn't depend on beta, we use the same estimate beta_hat = c as above. We estimate sigma to maximise the REML criterion:
sigma_hat^2 = (1/(n-1)) [ \sum_{i,j} (y_{ij} - ybar_i)^2 + \sum_i n_i (ybar_i - c)^2 / (1 + theta^2 n_i) ]
The restricted log likelihood l_R is given by
l_R(theta) = -((n-1)/2) log(2 pi sigma_hat^2) - (1/2) \sum_i log(1 + theta^2 n_i) + (1/2) log(lambda) - (n-1)/2
In the output of lmer, -2 l_R and l_R are shown in the fields "REMLdev" and "logLik" (if REML=T) respectively.
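As a sanity check, the REML=F formulas above can be compared numerically against lmer's own logLik. A sketch, assuming the (corrected) simulation code from the question; getME(fit, "theta") returns theta = tau/sigma for this one-random-intercept model:
fit  <- lmer(y ~ (1|id), data = data, REML = FALSE)
th   <- getME(fit, "theta")                        # theta = tau/sigma at the optimum
ni   <- table(data$id)                             # n_i, observations per group
ybar <- tapply(data$y, data$id, mean)              # ybar_i, group means
lam  <- 1/sum(ni/(1 + th^2*ni))                    # lambda as defined in the question
cc   <- sum(lam*ni/(1 + th^2*ni) * ybar)           # c = beta_hat
s2   <- (sum((data$y - ybar[as.character(data$id)])^2) +
         sum(ni*(ybar - cc)^2/(1 + th^2*ni)))/n    # sigma_hat^2 (REML=F)
ll   <- -n/2*log(2*pi*s2) - sum(log(1 + th^2*ni))/2 - n/2
c(formula = ll, lmer = as.numeric(logLik(fit)))    # should agree to optimiser tolerance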

Determining the values of beta in a logistic regression equation

I read about logistic regression on Wikipedia, and it gives an equation in which the output z depends on the values of beta1, beta2, beta3, and so on, which are termed the regression coefficients, along with beta0, the intercept.
How do you determine the values of these coefficients and intercept given a sample set where we know the output probability and the values of the input dependent factors?
Use R. Here's a video: www.youtube.com/watch?v=Yv05RjKpEKY
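For a concrete illustration (a minimal sketch with made-up data, not from the original answer): glm() estimates the coefficients by maximum likelihood, fitted via iteratively reweighted least squares.
set.seed(1)                                   # hypothetical example data
x1 <- rnorm(100); x2 <- rnorm(100)
p  <- plogis(-1 + 2*x1 + 0.5*x2)              # true model: z = beta0 + beta1*x1 + beta2*x2
y  <- rbinom(100, 1, p)
fit <- glm(y ~ x1 + x2, family = binomial)    # maximum-likelihood fit (IRLS)
coef(fit)                                     # estimates of beta0, beta1, beta2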

How to fit a negative binomial distribution in R while incorporating censoring

I need to fit Y_ij ~ NegBin(m_ij, k), i.e. a negative binomial distribution, to a count. However, the data I have observed are censored: I know the value of y_ij, but the true count could be more than that value. Writing down the log likelihood for this problem gives:
ll = \sum_{i=1}^n w_i [ c_i log(P(Y_ij = y_ij | X_ij)) + (1 - c_i) log(1 - \sum_{r=1}^{32} P(Y_ij = r | X_ij)) ]
where X_ij is the row of the design matrix (with the covariates of interest), w_i is the weight for each observation, c_i is the censoring indicator (1 if fully observed, 0 if censored), y_ij is the response variable, and P(Y_ij = y_ij | X_ij) is the negative binomial probability with mean m_ij = exp(X_ij \beta) and overdispersion parameter k.
Does anyone know if there is built-in code in R that could be used to obtain this?
Check this paper out: Regression Models for Count Data in R
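If nothing built-in fits, one option is to maximise the log likelihood directly with optim(), using dnbinom() and pnbinom(). A minimal sketch under assumed names (y, X, cens, w stand in for the question's y_ij, X_ij, c_i, w_i), using the standard right-censoring survivor term P(Y > y) in place of the fixed cutoff:
negll <- function(par, y, X, cens, w) {
  beta <- par[-length(par)]
  k    <- exp(par[length(par)])              # dispersion, kept positive via log scale
  mu   <- exp(drop(X %*% beta))              # m_ij = exp(X_ij beta)
  ll_obs  <- dnbinom(y, size = k, mu = mu, log = TRUE)             # fully observed terms
  ll_cens <- pnbinom(y, size = k, mu = mu, lower.tail = FALSE,
                     log.p = TRUE)                                 # P(Y > y) for censored terms
  -sum(w * (cens * ll_obs + (1 - cens) * ll_cens))
}
# fit <- optim(c(rep(0, ncol(X)), 0), negll, y = y, X = X, cens = cens, w = w)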
