How to make lm() output a Welch t-test in R

I've been told, and have seen examples, that a linear model and a t-test are basically the same test; the t-test is just a specialized linear model with dummy-coded predictors. Is there a way to get lm to output the same t values, p-values, confidence intervals, and standard errors as the standard t.test function in R, where the default value of the var.equal argument is FALSE?
For example, the outputs of lm and t.test below are different:
data("mtcars")
#these outputs below give me different values
summary(lm(mpg ~ am, mtcars))
t.test(mpg ~ am, mtcars)
What I want is for lm to produce the same values as the t.test function, which by default runs a Welch t-test. How would I do that?

First off, there is a great post on Cross Validated, How are regression, the t-test, and the ANOVA all versions of the general linear model?, that gives a lot of background information on the relationship between a t-test, linear regression and ANOVA.
In essence, the p-value from a t-test corresponds to the p-value of the slope parameter in a linear model.
In your case, you need to compare
t.test(mpg ~ am, mtcars, alternative = "two.sided", var.equal = T)
#
# Two Sample t-test
#
#data: mpg by am
#t = -4.1061, df = 30, p-value = 0.000285
#alternative hypothesis: true difference in means is not equal to 0
#95 percent confidence interval:
# -10.84837 -3.64151
#sample estimates:
#mean in group 0 mean in group 1
# 17.14737 24.39231
fit <- lm(mpg ~ as.factor(am), mtcars)
summary(fit)
#
#Call:
#lm(formula = mpg ~ as.factor(am), data = mtcars)
#
#Residuals:
# Min 1Q Median 3Q Max
#-9.3923 -3.0923 -0.2974 3.2439 9.5077
#
#Coefficients:
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 17.147 1.125 15.247 1.13e-15 ***
#as.factor(am)1 7.245 1.764 4.106 0.000285 ***
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 4.902 on 30 degrees of freedom
#Multiple R-squared: 0.3598, Adjusted R-squared: 0.3385
#F-statistic: 16.86 on 1 and 30 DF, p-value: 0.000285
Note that the p-values agree.
A few comments:
as.factor(am) turns am into a categorical variable.
To match the assumptions of the linear model (where the error term epsilon ~ N(0, sigma^2) has the same variance everywhere), we need to call t.test with var.equal = T, which assumes the variance to be the same for measurements from both groups.
The difference in the sign of the t value comes from the different choice of reference level for the "categorised" am; a small sketch of how to flip it follows.
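If you want the sign of the lm t value to match t.test (which reports mean in group 0 minus mean in group 1), one option, sketched here as an illustration rather than as part of the original answer, is to make group 1 the reference level:
summary(lm(mpg ~ relevel(as.factor(am), ref = "1"), mtcars))
# the coefficient for level 0 is now -7.245 with t = -4.106, matching the sign of the t.test output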
To get the same group means in the linear model, we can remove the intercept
lm(mpg ~ as.factor(am) - 1, mtcars)
#
#Call:
#lm(formula = mpg ~ as.factor(am) - 1, data = mtcars)
#
#Coefficients:
#as.factor(am)0 as.factor(am)1
# 17.15 24.39

An assumption of linear regression is that the residuals are normally distributed with mean 0 and constant variance. Therefore your t.test and your regression summary will give consistent results only if you assume that the variances of the two groups are equal (i.e. var.equal = TRUE in t.test).
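If you do want Welch-style inference from a regression fit, one route, sketched here as a supplement (it is not part of the answers above and assumes the sandwich and lmtest packages are installed), is to combine lm with heteroskedasticity-consistent (HC2) standard errors. For a single two-level predictor, the HC2 standard error of the group coefficient equals the Welch standard error, so the t value matches the one from t.test (up to sign); the degrees of freedom, and therefore the p-value, still differ slightly because the residual df is used rather than the Satterthwaite approximation.
library(sandwich)
library(lmtest)
fit <- lm(mpg ~ factor(am), data = mtcars)
# HC2 covariance: for a two-group dummy this gives SE = sqrt(s0^2/n0 + s1^2/n1),
# i.e. the same standard error the Welch test uses
coeftest(fit, vcov. = vcovHC(fit, type = "HC2"))
t.test(mpg ~ am, data = mtcars)  # compare the t value (sign flipped) and the SE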

Related

Simulate data for logistic regression with fixed r2

I would like to simulate data for a logistic regression where I can specify the explained variance beforehand. Have a look at the code below. I simulate four independent variables and specify that each logit coefficient should be of size log(2) = 0.69. This works nicely: the explained variance (I report Cox & Snell's r2) is 0.34.
However, I need to specify the regression coefficients in such a way that a pre-specified r2 results from the regression. So if I would like to produce an r2 of, let's say, exactly 0.1, how do the coefficients need to be specified? I am struggling with this.
# Create independent variables
sigma.1 <- matrix(c(1,    0.25, 0.25, 0.25,
                    0.25, 1,    0.25, 0.25,
                    0.25, 0.25, 1,    0.25,
                    0.25, 0.25, 0.25, 1), nrow=4, ncol=4)
mu.1 <- rep(0,4)
n.obs <- 500000
library(MASS)
sample1 <- as.data.frame(mvrnorm(n = n.obs, mu.1, sigma.1, empirical=FALSE))
# Create latent continuous response variable
sample1$ystar <- 0 + log(2)*sample1$V1 + log(2)*sample1$V2 + log(2)*sample1$V3 + log(2)*sample1$V4
# Construct binary response variable
sample1$prob <- exp(sample1$ystar) / (1 + exp(sample1$ystar))
sample1$y <- rbinom(n.obs,size=1,prob=sample1$prob)
# Logistic regression
logreg <- glm(y ~ V1 + V2 + V3 + V4, data=sample1, family=binomial)
summary(logreg)
The output is:
Call:
glm(formula = y ~ V1 + V2 + V3 + V4, family = binomial, data = sample1)
Deviance Residuals:
Min 1Q Median 3Q Max
-3.7536 -0.7795 -0.0755 0.7813 3.3382
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.002098 0.003544 -0.592 0.554
V1 0.691034 0.004089 169.014 <2e-16 ***
V2 0.694052 0.004088 169.776 <2e-16 ***
V3 0.693222 0.004079 169.940 <2e-16 ***
V4 0.699091 0.004081 171.310 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 693146 on 499999 degrees of freedom
Residual deviance: 482506 on 499995 degrees of freedom
AIC: 482516
Number of Fisher Scoring iterations: 5
And Cox and Snell's r2 gives:
library(pscl)
pR2(logreg)["r2ML"]
r2ML
0.3436523
If you add a random error term to the ystar variable, making ystar.r, and then work with that, you can tweak the standard deviation until it meets your specification (a sketch of automating this search follows the code below).
sample1$ystar.r <- sample1$ystar+rnorm(n.obs, 0, 3.8) # tried a few values
sample1$prob <- exp(sample1$ystar.r) / (1 + exp(sample1$ystar.r))
sample1$y <- rbinom(n.obs,size=1,prob=sample1$prob)
logreg <- glm(y ~ V1 + V2 + V3 + V4, data=sample1, family=binomial)
summary(logreg) # the estimates "shrink"
pR2(logreg)["r2ML"]
#-------
r2ML
0.1014792
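A sketch (my own addition, not from the answer above) of automating the "tweak until it matches" step: search over the SD of the added noise until the simulated Cox & Snell r2 hits a target value. It assumes the objects from the question's code (sample1, n.obs) and the pscl package; find_noise_sd and target_r2 are illustrative names.
library(pscl)
find_noise_sd <- function(target_r2, lower = 0.5, upper = 20) {
  r2_gap <- function(sd_noise) {
    dat <- sample1
    dat$y <- rbinom(n.obs, size = 1,
                    prob = plogis(dat$ystar + rnorm(n.obs, 0, sd_noise)))
    fit <- glm(y ~ V1 + V2 + V3 + V4, data = dat, family = binomial)
    unname(pR2(fit)["r2ML"]) - target_r2
  }
  # r2 falls as the noise SD grows, so the gap changes sign between lower and upper;
  # because the data are re-simulated on each call, the root is only approximate
  uniroot(r2_gap, lower = lower, upper = upper)$root
}
# find_noise_sd(0.1) should land near the SD of roughly 3.8 found by hand above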
R-squared (and its variations) is a random variable, as it depends on your simulated data. If you simulate data with the exact same parameters multiple times, you'll most likely get different values for R-squared each time. Therefore, you cannot produce a simulation where the R-squared will be exactly 0.1 just by controlling the parameters.
On the other hand, since it's a random variable, you could potentially simulate your data from a conditional distribution (conditioning on a fixed value of R-squared), but you would need to work out what that conditional distribution looks like (the math might get really ugly here; Cross Validated is the more appropriate place for that part).

Likelihood ratio test: 'models were not all fitted to the same size of dataset'

I'm an absolute R beginner and need some help with my likelihood ratio tests for my univariate analyses. Here's the code:
#Univariate analysis for conscientiousness (categorical)
library(lmtest)  # provides lrtest()
fit <- glm(BCS_Bin~Conscientiousness_cat,data=dat,family=binomial)
summary(fit)
#Likelihood ratio test
fit0<-glm(BCS_Bin~1, data=dat, family=binomial)
summary(fit0)
lrtest(fit, fit0)
The results are:
Call:
glm(formula = BCS_Bin ~ Conscientiousness_cat, family = binomial,
data = dat)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.8847 -0.8847 -0.8439 1.5016 1.5527
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.84933 0.03461 -24.541 <2e-16 ***
Conscientiousness_catLow 0.11321 0.05526 2.049 0.0405 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 7962.1 on 6439 degrees of freedom
Residual deviance: 7957.9 on 6438 degrees of freedom
(1963 observations deleted due to missingness)
AIC: 7961.9
Number of Fisher Scoring iterations: 4
And:
Call:
glm(formula = BCS_Bin ~ 1, family = binomial, data = dat)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.8524 -0.8524 -0.8524 1.5419 1.5419
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.82535 0.02379 -34.69 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 10251 on 8337 degrees of freedom
Residual deviance: 10251 on 8337 degrees of freedom
(65 observations deleted due to missingness)
AIC: 10253
Number of Fisher Scoring iterations: 4
For my LRT:
Error in lrtest.default(fit, fit0) :
models were not all fitted to the same size of dataset
I understand that this is happening because there are different numbers of missing observations? That's because the data come from a large questionnaire, and many more dropouts had occurred by the question assessing my predictor variable (conscientiousness) than by the outcome variable (body condition score/BCS). So I simply have more data for BCS than for conscientiousness, for example (the same error comes up for many of my other variables too).
In order to run the likelihood ratio test, the model with just the intercept has to be fit to the same observations as the model that includes Conscientiousness_cat. So, you need the subset of the data that has no missing values for Conscientiousness_cat:
dat_subset = dat[complete.cases(dat[, "Conscientiousness_cat"]), ]
You can run both models on this subset of the data and the likelihood ratio test should run without error.
In your case, you could also do:
dat_subset = dat[!is.na(dat$Conscientiousness_cat), ]
However, it's nice to have complete.cases handy when you want a subset of a data frame with no missing values across multiple variables.
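Putting that first option together, here is a minimal sketch (my own illustration, reusing dat_subset from above):
library(lmtest)
fit <- glm(BCS_Bin ~ Conscientiousness_cat, data = dat_subset, family = binomial)
fit0 <- glm(BCS_Bin ~ 1, data = dat_subset, family = binomial)
lrtest(fit, fit0)  # both models are now fitted to the same observations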
Another option, more convenient if you're going to run multiple models but also more complex, is to first fit whatever model uses the largest number of variables from your data (since that model will exclude the largest number of observations due to missingness) and then use the update function to reduce it to models with fewer variables. We just need to make sure that update uses the same observations each time, which we do with the wrapper function defined below. Here's an example using the built-in mtcars data frame:
library(lmtest)
dat = mtcars
# Create some missing values in mtcars
dat[1, "wt"] = NA
dat[5, "cyl"] = NA
dat[7, "hp"] = NA
# Wrapper function to ensure the same observations are used for each
# updated model as were used in the first model
# From https://stackoverflow.com/a/37341927/496488
update_nested <- function(object, formula., ..., evaluate = TRUE) {
  update(object = object, formula. = formula., data = object$model, ..., evaluate = evaluate)
}
m1 = lm(mpg ~ wt + cyl + hp, data=dat)
m2 = update_nested(m1, . ~ . - wt) # Remove wt
m3 = update_nested(m1, . ~ . - cyl) # Remove cyl
m4 = update_nested(m1, . ~ . - wt - cyl) # Remove wt and cyl
m5 = update_nested(m1, . ~ . - wt - cyl - hp) # Remove all three variables (i.e., model with intercept only)
lrtest(m5,m4,m3,m2,m1)
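As a quick, illustrative check (not part of the original answer) that all of the updated models really were fitted to the same observations, you can compare their numbers of observations; with the three NAs created above, each should report 29:
sapply(list(m1, m2, m3, m4, m5), nobs)
# all five values are identical, so lrtest no longer complains about dataset sizes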

How to obtain Poisson's distribution "lambda" from R glm() coefficients

My R script produces the glm() coefficients below.
What is the Poisson lambda, then? It should be ~3.0, since that's what I used to create the distribution.
Call:
glm(formula = h_counts ~ ., family = poisson(link = log), data = pois_ideal_data)
Deviance Residuals:
Min 1Q Median 3Q Max
-22.726 -12.726 -8.624 6.405 18.515
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 8.222532 0.015100 544.53 <2e-16 ***
h_mids -0.363560 0.004393 -82.75 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 11451.0 on 10 degrees of freedom
Residual deviance: 1975.5 on 9 degrees of freedom
AIC: 2059
Number of Fisher Scoring iterations: 5
random_pois = rpois(10000,3)
h=hist(random_pois, breaks = 10)
mean(random_pois) #verifying that the mean is close to 3.
h_mids = h$mids
h_counts = h$counts
pois_ideal_data <- data.frame(h_mids, h_counts)
pois_ideal_model <- glm(h_counts ~ ., pois_ideal_data, family=poisson(link=log))
summary_ideal=summary(pois_ideal_model)
summary_ideal
What are you doing here???!!! You used a glm to fit a distribution???
Well, it is not impossible to do, but it is done like this:
set.seed(0)
x <- rpois(10000,3)
fit <- glm(x ~ 1, family = poisson())
i.e., we fit data with an intercept-only regression model.
fit$fitted[1]
# 3.005
This is the same as:
mean(x)
# 3.005
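Since the default link for the Poisson family is the log, the intercept itself is log(lambda), so (a small supplementary check on the same fit as above) you can also read lambda straight off the coefficient:
exp(coef(fit))
# (Intercept)
#      ~3.005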
It looks like you're trying to do a Poisson fit to aggregated or binned data; that's not what glm does. I took a quick look for canned ways of fitting distributions to binned data but couldn't find one; it looks like earlier versions of the bda package might have offered this, but not any more.
At root, what you need to do is set up a negative log-likelihood function, minus the sum of (# counts) * log(prob(count | lambda)), and minimize it using optim(); the solution given below using the bbmle package is a little more complex up front, but gives you added benefits like easily computing confidence intervals.
Set up data:
set.seed(101)
random_pois <- rpois(10000,3)
tt <- table(random_pois)
dd <- data.frame(counts=unname(c(tt)),
val=as.numeric(names(tt)))
Here I'm using table rather than hist because histograms on discrete data are fussy (having integer cutpoints often makes things confusing because you have to be careful about right- vs left-closure)
Set up density function for binned Poisson data (to work with bbmle's formula interface, the first argument must be called x, and it must have a log argument).
dpoisbin <- function(x, val, lambda, log = FALSE) {
  probs <- dpois(val, lambda, log = TRUE)
  r <- sum(x * probs)
  if (log) r else exp(r)
}
Fit lambda (log link helps prevent numerical problems/warnings from negative lambda values):
library(bbmle)
m1 <- mle2(counts~dpoisbin(val,exp(loglambda)),
data=dd,
start=list(loglambda=0))
all.equal(unname(exp(coef(m1))),mean(random_pois),tol=1e-6) ## TRUE
exp(confint(m1))
## 2.5 % 97.5 %
## 2.972047 3.040009

Multiple correlation coefficient in R

I am looking for a way to calculate the multiple correlation coefficient in R (http://en.wikipedia.org/wiki/Multiple_correlation). Is there a built-in function to calculate it?
I have one dependent variable and three independent ones.
I have not been able to find it online. Any ideas?
The easiest way to calculate what you seem to be asking for when you refer to 'the multiple correlation coefficient' (i.e. the correlation between two or more independent variables on the one hand and one dependent variable on the other) is to fit a multiple linear regression, predicting the values of the variable treated as dependent from the values of the variables treated as independent, and then calculate the coefficient of correlation between the predicted and observed values of the dependent variable.
Here, for example, we create a linear model called mpg.model, with mpg as the dependent variable and wt and cyl as the independent variables, using the built-in mtcars dataset:
> mpg.model <- lm(mpg ~ wt + cyl, data = mtcars)
Having created the above model, we correlate the observed values of mpg (which are embedded in the object, within the model data frame) with the predicted values for the same variable (also embedded):
> cor(mpg.model$model$mpg, mpg.model$fitted.values)
[1] 0.9111681
R will in fact do this calculation for you, but without telling you so, when you ask it to create the summary of a model (as in Brian's answer): the summary of an lm object contains R-squared, which is the square of the coefficient of correlation between the predicted and observed values of the dependent variable. So an alternative way to get the same result is to extract R-squared from the summary.lm object and take the square root of it, thus:
> sqrt(summary(mpg.model)$r.squared)
[1] 0.9111681
I feel that I should point out, however, that the term 'multiple correlation coefficient' is ambiguous.
The built-in function lm gives at least one version, not sure if this is what you are looking for:
fit <- lm(yield ~ N + P + K, data = npk)
summary(fit)
Gives:
Call:
lm(formula = yield ~ N + P + K, data = npk)
Residuals:
Min 1Q Median 3Q Max
-9.2667 -3.6542 0.7083 3.4792 9.3333
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 54.650 2.205 24.784 <2e-16 ***
N1 5.617 2.205 2.547 0.0192 *
P1 -1.183 2.205 -0.537 0.5974
K1 -3.983 2.205 -1.806 0.0859 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 5.401 on 20 degrees of freedom
Multiple R-squared: 0.3342, Adjusted R-squared: 0.2343
F-statistic: 3.346 on 3 and 20 DF, p-value: 0.0397
More info on what's going on at ?summary.lm and ?lm.
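Tying this back to the previous answer (my own note, not part of the original answers): under that definition, the multiple correlation coefficient for this fit is simply the square root of the Multiple R-squared reported above, assuming fit is the model created above.
sqrt(summary(fit)$r.squared)
# about 0.578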
Try this:
# load sample data
data(mtcars)
# calculate the correlation coefficients between all variables in `mtcars`
# using the built-in cor() function
M <- cor(mtcars)
# M is a matrix of correlation coefficients, which you can display just by
# running
print(M)
# If you want to plot the correlation coefficients
library(corrplot)
corrplot(M, method="number",type= "lower",insig = "blank", number.cex = 0.6)

Custom regression equation in R

I have a set of data in R and I want to run a regression to test for correlation using custom coefficients.
Example:
x = lm(a ~ b + c + d, data=data, weights=weights)
That gives me coefficients for b, c, and d, but I just want to give b, c, and d my own coefficients and find, for example, the r^2. How would I do so?
Let's assume your predetermined coefficients are a four-element numeric vector named vec (the intercept first, then the slopes for b, c and d) and that none of b, c or d are factors or character vectors:
(x = lm(a ~ 1, data=data,
        offset=apply(data[c("b","c","d")], 1, function(z) sum(c(1, z) * vec))))
This should produce a model whose predictions are built from your specified coefficients (they enter through the offset rather than being estimated). You will probably also want to run:
summary(x)
As always... if you want tested code, then provide a dataset for testing. With the mtcars dataframe:
m1 = lm(mpg ~ carb + wt, data=mtcars)
vec <- coef(m1)
(x = lm(mpg ~ 1, data=mtcars,
        offset=apply(mtcars[c("carb","wt")], 1,
                     function(x){ sum(c(1, x) * vec) })))
Call:
lm(formula = mpg ~ 1, data = mtcars, offset = apply(mtcars[c("carb",
"wt")], 1, function(x) {
sum( c(1, x) * vec)
}))
Coefficients:
(Intercept)
-7.85e-17
So the offset model (with the coefficients used in the offset) is essentially an exact fit to the m1 model.
@BondedDust's method will be more efficient in the long run, but just for illustration, here's a simple example of how to create your own function to calculate R-squared for any regression coefficients you choose. We'll use the mtcars data set, which is built into R.
Assume a regression model that predicts "mpg" using the independent variables "carb" and "wt". a, b, and c are the three regression parameters that we need to provide to the function.
# Function to calculate R-squared
R2 = function(a, b, c) {
  # Calculate the residual sum of squares from the regression model
  SSresid = sum(((a + b*mtcars$carb + c*mtcars$wt) - mtcars$mpg)^2)
  # Calculate the total sum of squares
  SStot = sum((mtcars$mpg - mean(mtcars$mpg))^2)
  # Calculate and return the R-squared for the regression model
  return(1 - SSresid/SStot)
}
Now let's run the function. First let's see if our function matches the R-squared calculated by lm. We'll do this by creating a regression model in R, then we'll use the coefficients from that model and calculate the R-squared using our function and see if it matches the output from lm:
# Create regression model
m1 = lm(mpg ~ carb + wt, data=mtcars)
summary(m1)
Call:
lm(formula = mpg ~ carb + wt, data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-4.5206 -2.1223 -0.0467 1.4551 5.9736
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 37.7300 1.7602 21.435 < 2e-16 ***
carb -0.8215 0.3492 -2.353 0.0256 *
wt -4.7646 0.5765 -8.265 4.12e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.839 on 29 degrees of freedom
Multiple R-squared: 0.7924, Adjusted R-squared: 0.7781
F-statistic: 55.36 on 2 and 29 DF, p-value: 1.255e-10
From the summary, we can see that the R-squared is 0.7924. Let's see what we get from the function we just created. All we need to do is feed our function the three regression coefficients listed in the summary above. We can hard-code those numbers, or we can extract the coefficients from the model object m1 (which is what I've done below):
R2(coef(m1)[1], coef(m1)[2], coef(m1)[3])
[1] 0.7924425
Now let's calculate the R-squared for other choices of the regression coefficients:
a = 37; b = -1; c = -3.5
R2(a, b, c)
[1] 0.5277607
a = 37; b = -2; c = -5
R2(a, b, c)
[1] 0.0256494
To check lots of values of a parameter at once, you can, for example, use sapply. The code below returns the R-squared for values of c ranging from -7 to -3 in increments of 0.1 (with the other two parameters set to the values returned by lm):
sapply(seq(-7,-3,0.1), function(x) R2(coef(m1)[1],coef(m1)[2],x))
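For what it's worth, here is a slightly more general sketch (my own illustration, not part of the answer above) that computes R-squared for any formula and coefficient vector via model.matrix, so the predictors don't have to be hard-coded; R2_custom is an illustrative name.
R2_custom <- function(formula, coefs, data) {
  X <- model.matrix(formula, data)                      # design matrix, intercept column included
  y <- model.response(model.frame(formula, data))       # observed response
  1 - sum((y - X %*% coefs)^2) / sum((y - mean(y))^2)   # 1 - SSresid/SStot
}
R2_custom(mpg ~ carb + wt, coef(m1), mtcars)          # matches summary(m1)$r.squared (~0.7924)
R2_custom(mpg ~ carb + wt, c(37, -1, -3.5), mtcars)   # same as R2(37, -1, -3.5) above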
