R: Creating a loop for two models?

I currently have the following code with two functions that calculate the model fit for two distinct models. The only difference is in the lm() call, where + log(v2) has been added in model 2.
R code
dat <- data.frame(clicks = runif(30, 1, 100), v1 = runif(30, 1, 100), v2 = runif(30, 1, 100))
p0 <- 1 # number of parameters in lm()
p1 <- 2 # number of parameters in lm()
n <- nrow(dat) - 1
## Model 1 Loop
model1 <- function(x) {
  fit <- lm(log(clicks) ~ log(v1), data = dat, subset = 1:x, model = FALSE)
  pred <- predict(fit, newdata = dat[x+1, ])
  c(summary(fit)$r.squared)
}
## Model 1 Regression
result_m1 <- t(sapply(p0:n, model1))
data.frame(result_m1)
## Model 2 Loop
model2 <- function(x) {
  fit <- lm(log(clicks) ~ log(v1) + log(v2), data = dat, subset = 1:x, model = FALSE)
  pred <- predict(fit, newdata = dat[x+1, ])
  c(summary(fit)$r.squared)
}
## Model 2 Regression
result_m2 <- t(sapply(p1:n, model2))
data.frame(result_m2)
Question: Can I somehow create a function that implements a loop for the different models only, instead of repeating the calculation for every model?
I have something like this in mind but wasn't able to implement it: http://www.ats.ucla.edu/stat/r/pages/looping_strings.htm
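For reference, the idea from that link amounts to looping over a list of formulas, roughly like this sketch (model_formulas and roll_fit are placeholder names; it reuses dat, p0, p1 and n from above):
## a rough sketch of looping over formulas instead of duplicating functions
model_formulas <- list(
  m1 = log(clicks) ~ log(v1),
  m2 = log(clicks) ~ log(v1) + log(v2)
)
roll_fit <- function(form, p) {
  sapply(p:n, function(x) {
    fit <- lm(form, data = dat, subset = 1:x, model = FALSE)
    summary(fit)$r.squared
  })
}
results <- Map(roll_fit, model_formulas, c(p0, p1))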

I don't see the point in recreating by hand what can easily be done with the model-selection functions in available packages.
library(leaps)
library(dplyr)
b <- regsubsets(clicks ~ ., data=dat, nbest=10, nvmax=2) # exhaustive model selection (up to 10 best models per size; at most 2 predictors)
coef(b, 1:3) # returns the coefficients of the 3 models in this case
[[1]]
(Intercept)          v1
 60.8067570  -0.2665699

[[2]]
(Intercept)          v2
49.96974177 -0.05227489

[[3]]
(Intercept)          v1          v2
62.02323816 -0.26422966 -0.02676747
summary(b)$rsq # provides the r.squared values of the 3 models
[1] 0.067952759 0.002366681 0.068568059
Running predictions is a tad more complicated.
all.mods <- summary(b)$which[,-1] # logical matrix of predictor combinations
all.mods
     v1    v2
1  TRUE FALSE
1 FALSE  TRUE
2  TRUE  TRUE
RHS <- lapply(seq(nrow(all.mods)), function(m) summary(b)$which[m,-1] %>% which %>% names %>% paste(., collapse="+"))
RHS
[[1]]
[1] "v1"
[[2]]
[1] "v2"
[[3]]
[1] "v1+v2"
lm.form <- lapply(RHS, function(m) parse(text=paste("lm(clicks ~", m, ", data=dat)")))
lm.mods <- lapply(lm.form, eval) # return list of all lm.mods generated
The list lm.mods can subsequently be used with predict and new data.
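For instance, a minimal prediction sketch (new_obs is a hypothetical new observation with the same predictor columns):
new_obs <- data.frame(v1 = 50, v2 = 50) # hypothetical new observation
sapply(lm.mods, predict, newdata = new_obs) # one prediction per model
As an aside, reformulate(m, "clicks") would build the same formulas without resorting to parse/eval.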

Running a regression model over 30 specific set.seed values automatically using R

I am working with several linear regression models.
I want to run a linear regression model with 30 different set.seed values.
For clarification, I only share the code with two regression models and 10 seeds (in my project I have 12 regression models and each one should be run with 30 different seeds).
I need a solution that runs all 30 seeds for one linear regression model automatically, so that I can step away from my laptop while it runs; then I would do the same for the second regression model.
Is there a way to run the code over the 30 different seeds automatically, so that I get a result for each seed?
I hope everything is clear and I am happy to clarify more.
NOTE
Bear in mind that I have four related blocks with each regression model, so any change to set.seed or createFolds may affect the other blocks.
EDIT1
The dataset used
wdbc <- read.delim("airfoil_self_noise.dat",header=F)
wdbcc=as.data.frame(scale(wdbc))
#set.seed(1)
#set.seed(2)
#set.seed(3)
#set.seed(4)
...
library(caret) # for createFolds()
k = 30
folds <- createFolds(wdbcc$V6, k = k, list = TRUE, returnTrain = TRUE)
## Ordinary Least Square regression ##
#Block A
lm = list() # note: this shadows the lm() function, but R still finds the function when it is called
for (i in 1:k) {
  lm[[i]] = lm(V6 ~ ., data = wdbcc[folds[[i]], ])
}
#Block B
lm_coef = list()
lm_coef_var = list()
for(j in 1:(lm[[1]]$coefficients %>% length())){
  for(i in 1:k){
    lm_coef[[i]] = lm[[i]]$coefficients[j]
    lm_coef_var[[j]] = lm_coef %>% unlist() %>% var()
  }
}
#Block C
lm_var = unlist(lm_coef_var)
lm_df = cbind(coefficients = lm[[1]]$coefficients%>% names() %>% as.data.frame()
, variance = lm_var %>% as.data.frame())
colnames(lm_df) = c("coefficients", "variance_lm")
#Block D
lm_var_sum = sum(lm_var)
## PQSQ-Regression ##
X=list()
Y=list()
for (i in 1:k) {
  n=wdbcc[folds[[i]],-6]
  m=wdbcc[folds[[i]],6]
  X[[i]]=n
  Y[[i]]=m
}
#Block A
lmPQSQ1 = list()
for (i in 1:k) {
  lmPQSQ1[[i]] = PQSQRegression(X[[i]],Y[[i]],0.01,data = wdbcc[folds[[i]],])
}
lmmPQSQ1=list()
for (i in 1:k) {
  L=list(coefficients = c(lmPQSQ1[[i]][[2]],lmPQSQ1[[i]][[1]]))
  lmmPQSQ1[[i]]=L
}
#Block B
lm_coefPQSQ1 = list()
lm_coef_varPQSQ1 = list()
for(j in 1:(lmmPQSQ1[[1]]$coefficients %>% length())){
  for(i in 1:k){
    lm_coefPQSQ1[[i]] = lmmPQSQ1[[i]]$coefficients[j]
    lm_coef_varPQSQ1[[j]] = lm_coefPQSQ1 %>% unlist() %>% var()
  }
}
#Block C
lm_varPQSQ1 = unlist(lm_coef_varPQSQ1)
lm_dfPQSQ1 = variance = lm_varPQSQ1 %>% as.data.frame()
#Block D
PQSQ1_var_sum = sum(lm_varPQSQ1)
If I understand you correctly, you want to regress V6 on all the other variables using both OLS and a LAD model. You want to select k=30 random "folds" using createFolds and repeat the process n=30 times. As a result you want the variances for each repetition and each coefficient.
I would wrap the fitting part into a function FX, generate n=30 seeds (with sample, or pick specific ones), and loop over them with lapply to repeat FX n=30 times.
FX <- function(seed, data, k=30) {
  set.seed(seed) ## sets seed for each iteration
  folds <- createFolds(data[, "V6"], k=k, list=TRUE, returnTrain=TRUE) ## folds
  ## OLS
  lm1 <- lapply(folds, function(folds) lm(V6 ~ ., data=data[folds, ]))
  lm.coefs <- t(sapply(lm1, coef)) ## lm coefficients
  ## LAD
  lad1 <- lapply(folds, function(folds) lad(V6 ~ ., data=data[folds, ], method="BR"))
  lad.coefs <- t(sapply(lad1, coef)) ## lad coefficients
  ## calculate column variances for both coef matrices
  ## use `attr<-` to add the seed as an attribute if you want
  return(`attr<-`(cbind(lm=apply(lm.coefs, 2, var), lad=apply(lad.coefs, 2, var)),
                  "seed", seed))
}
seeds <- 1:30 ## specific seeds 1, 2, ... 30
## if you want non-consecutive specific seeds, do:
# set.seed(42) ## set some initial seed
# n <- 30 ## n. o. seeds
# seeds <- sample(1:1e6, n) ## sample seeds for `FX`
res <- lapply(seeds, FX, data=wdbcc) ## lapply loop
Result
This results in a list of length 30 with variance matrices for each repetition, each model, and each coefficient.
res[1:2] ## first two lists
# [[1]]
# lm lad
# (Intercept) 9.104280e-06 1.273920e-05
# V1 2.609623e-05 6.992753e-05
# V2 7.082099e-05 2.075875e-05
# V3 1.352299e-05 1.209651e-05
# V4 7.986000e-06 9.273005e-06
# V5 5.545298e-05 1.535849e-05
# attr(,"seed")
# [1] 1
#
# [[2]]
# lm lad
# (Intercept) 4.558722e-06 2.031707e-05
# V1 2.256583e-05 9.291900e-05
# V2 6.519648e-05 2.768443e-05
# V3 1.800889e-05 9.983524e-06
# V4 1.131813e-05 1.174496e-05
# V5 3.866105e-05 1.022452e-05
# attr(,"seed")
# [1] 2
length(res)
# [1] 30
To calculate the sum of variances for each seed you may use colSums in an sapply.
# sum of variances
sov <- t(sapply(res, colSums))
dim(sov)
# [1] 30 2
head(sov)
# lm lad
# [1,] 1.829835e-04 0.0001401535
# [2,] 1.603091e-04 0.0001728735
# [3,] 1.003093e-04 0.0001972869
# [4,] 1.460591e-04 0.0001508251
# [5,] 9.915082e-05 0.0001262106
# [6,] 1.425996e-04 0.0001478449
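Since each result matrix carries its seed as an attribute, you can also label the rows of sov accordingly (just a convenience sketch):
rownames(sov) <- sapply(res, attr, "seed") ## row names = seeds used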
To understand what one iteration of the lapply does, consider this:
## provide the values of first iteration for arguments of function `FX`
seed <- 1
data <- wdbcc
k <- 30
## first iteration of `lapply`
set.seed(seed)
folds <- createFolds(data[, "V6"], k=k, list=TRUE, returnTrain=TRUE) ## folds
## OLS
lm1 <- lapply(folds, function(folds) lm(V6 ~ ., data=data[folds, ]))
lm.coefs <- t(sapply(lm1, coef)) ## lm coefficients
dim(lm.coefs)
# [1] 30 6
head(lm.coefs)
# (Intercept) V1 V2 V3 V4 V5
# Fold01 -0.0039130125 -0.5806272 -0.3564769 -0.4804492 0.2271908 -0.2805472
# Fold02 0.0013260444 -0.5863764 -0.3533327 -0.4759213 0.2253128 -0.2874691
# Fold03 0.0006791787 -0.5890787 -0.3678586 -0.4832066 0.2220979 -0.2739124
# Fold04 -0.0010721593 -0.5868079 -0.3722466 -0.4895328 0.2227811 -0.2749657
# Fold05 0.0021856620 -0.5850165 -0.3495360 -0.4810657 0.2235410 -0.2936287
# Fold06 0.0001486909 -0.5872607 -0.3677774 -0.4848523 0.2275780 -0.2823764
## LAD (same as OLS)
lad1 <- lapply(folds, function(folds) lad(V6 ~ ., data=data[folds, ], method="BR"))
lad.coefs <- t(sapply(lad1, coef)) ## lad coefficients
## return, throws variances for each coefficient of each model in a matrix
## the seed is added as an attribute, to be able to identify it later
res.1 <- `attr<-`(cbind(var.lm=apply(lm.coefs, 2, var),
                        var.lad=apply(lad.coefs, 2, var)),
                  "seed", seed)
res.1
# var.lm var.lad
# (Intercept) 9.104280e-06 1.273920e-05
# V1 2.609623e-05 6.992753e-05
# V2 7.082099e-05 2.075875e-05
# V3 1.352299e-05 1.209651e-05
# V4 7.986000e-06 9.273005e-06
# V5 5.545298e-05 1.535849e-05
# attr(,"seed")
# [1] 1
Compare res.1 with the first element of list res above.
sov.1 <- colSums(res.1)
sov.1
# var.lm var.lad
# 0.0001829835 0.0001401535
Compare sov.1 with the first row of matrix sov above.
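Both comparisons can be checked programmatically (a sketch; the column names differ between the two objects, so compare values only):
all.equal(c(res.1), c(res[[1]]))            # TRUE: same variances as res[[1]]
all.equal(unname(sov.1), unname(sov[1, ]))  # TRUE: same column sums as sov[1, ]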
Edit
For regression functions with matrix notation, such as lm.fit, we may use model.matrix and do the subsetting beforehand; see the lm2.coefs line in the function, and compare the lm and lm2 columns in res.2 below: they are equal. (lm.fit is also faster than lm because it omits unnecessary calculations, and you just need the coefficients, so you may actually replace the lm line with the lm.fit one. There might also be a way to do this for lad using lsfit, but honestly I'm too unfamiliar with lad to provide that solution.)
Also notice that, for the sake of brevity, I merged the two lines per model into one by using sapply directly on the $coefficients. sapply works like lapply but returns a matrix; note that we need to transpose it.
FX2 <- function(seed, data, k=30) {
  set.seed(seed) ## sets seed for each iteration
  folds <- createFolds(data[, "V6"], k=k, list=TRUE, returnTrain=TRUE) ## draw folds
  lm.coefs <- t(sapply(folds, function(f) lm(V6 ~ ., data=data[f, ])$coef))
  lm2.coefs <- t(sapply(folds, function(f) {
    data2 <- data[f, ]
    lm.fit(x=model.matrix(V6 ~ ., data2), y=data2[, "V6"])$coef
  }))
  lad.coefs <- t(sapply(folds, function(f) lad(V6 ~ ., data=data[f, ], method="BR")$coef))
  return(`attr<-`(cbind(lm=apply(lm.coefs, 2, var),
                        lm2=apply(lm2.coefs, 2, var),
                        lad=apply(lad.coefs, 2, var)),
                  "seed", seed))
}
seeds <- 1:30
res.2 <- lapply(seeds, FX2, data=wdbcc) ## lapply loop
res.2[1:2]
# [[1]]
# lm lm2 lad
# (Intercept) 9.104280e-06 9.104280e-06 1.273920e-05
# V1 2.609623e-05 2.609623e-05 6.992753e-05
# V2 7.082099e-05 7.082099e-05 2.075875e-05
# V3 1.352299e-05 1.352299e-05 1.209651e-05
# V4 7.986000e-06 7.986000e-06 9.273005e-06
# V5 5.545298e-05 5.545298e-05 1.535849e-05
# attr(,"seed")
# [1] 1
#
# [[2]]
# lm lm2 lad
# (Intercept) 4.558722e-06 4.558722e-06 2.031707e-05
# V1 2.256583e-05 2.256583e-05 9.291900e-05
# V2 6.519648e-05 6.519648e-05 2.768443e-05
# V3 1.800889e-05 1.800889e-05 9.983524e-06
# V4 1.131813e-05 1.131813e-05 1.174496e-05
# V5 3.866105e-05 3.866105e-05 1.022452e-05
# attr(,"seed")
# [1] 2
Data and libraries:
invisible(lapply(c("caret", "L1pack"), library, character.only=TRUE))
wdbcc <- read.delim("airfoil_self_noise.dat", header=F)
wdbcc[] <- lapply(wdbcc, scale)

R Loop For Extracting Values

hsb2 <- read.csv("https://stats.idre.ucla.edu/stat/data/hsb2.csv")
names(hsb2)
varlist <- names(hsb2)[8:11]
models <- lapply(varlist, function(x) {
  lm(substitute(read ~ i, list(i = as.name(x))), data = hsb2)
})
## look at the first element of the list, model 1
models[[1]]
The code above generates a series of simple regression models for different independent variables. I then want to extract the coefficient and standard error for each of the variables listed in varlist. My attempt is shown below.
ATTEMPT = lapply(1:length(models), function(x) {
  cbind(cov, coef(summary(models[[x]]))[2, 1:2])}) # fails: `cov` is not a defined object here (only the base function cov)
My hoped-for output would show three columns: variable, coefficient, std. error.
How about:
ATTEMPT2 = lapply(1:length(models), function(x) {
  cf <- coef(summary(models[[x]]))
  data.frame(Variable=rownames(cf)[2],
             Estimate=cf[2,1],
             Std.Error=cf[2,2])})
(df2 <- do.call("rbind", ATTEMPT2))
# Variable Estimate Std.Error
# 1 write 0.6455300 0.06168323
# 2 math 0.7248070 0.05827449
# 3 science 0.6525644 0.05714318
# 4 socst 0.5935322 0.05317162
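Equivalently, you can lapply over the models directly instead of over indices (a stylistic sketch; ATTEMPT3 is a placeholder name):
ATTEMPT3 <- do.call("rbind", lapply(models, function(m) {
  cf <- coef(summary(m))  ## coefficient matrix of one model
  data.frame(Variable = rownames(cf)[2], Estimate = cf[2, 1], Std.Error = cf[2, 2])
}))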

Fast post hoc computation using R

I have a large dataset on which I would like to perform post hoc computations:
dat = as.data.frame(matrix(runif(10000*300), ncol = 10000, nrow = 300))
dat$group = rep(letters[1:3], 100)
Here is my code:
start <- Sys.time()
vars <- names(dat)[-ncol(dat)]
aov.out <- lapply(vars, function(x) {
  lm(substitute(i ~ group, list(i = as.name(x))), data = dat)})
TukeyHSD.out <- lapply(aov.out, function(x) TukeyHSD(aov(x)))
Sys.time() - start
Time difference of 4.033335 mins
It takes about 4 minutes; are there more efficient and elegant ways to perform post hoc tests using R?
Thanks a lot
Your example is too big. For illustration of the idea I use a small one.
set.seed(0)
dat = as.data.frame(matrix(runif(2*300), ncol = 2, nrow = 300))
dat$group = rep(letters[1:3], 100)
Why do you call aov on a fitted "lm" model? That basically refits the same model.
Have a read on Fitting a linear model with multiple LHS first. lm is the workhorse of aov, so you can pass a multiple LHS formula to aov. The model has class c("maov", "aov", "mlm", "lm").
response_names <- names(dat)[-ncol(dat)]
form <- as.formula(sprintf("cbind(%s) ~ group", toString(response_names)))
fit <- do.call("aov", list(formula = form, data = quote(dat))) ## do.call stores the expanded formula (not the symbol `form`) in the fitted call
Now the issue is that there is no "maov" method for TukeyHSD, so we need a hack.
TukeyHSD relies on the residuals of the fitted model. In the c("aov", "lm") case the residuals are a vector, but in the c("maov", "aov", "mlm", "lm") case they form a matrix. The following demonstrates the hack.
aov_hack <- fit
aov_hack[c("coefficients", "fitted.values")] <- NULL ## don't need them
aov_hack[c("contrasts", "xlevels")] <- NULL ## don't need them either
attr(aov_hack$model, "terms") <- NULL ## don't need it
class(aov_hack) <- c("aov", "lm") ## drop "maov" and "mlm"
## the following elements are mandatory for `TukeyHSD`
## names(aov_hack)
#[1] "residuals" "effects" "rank" "assign" "qr"
#[6] "df.residual" "call" "terms" "model"
N <- length(response_names) ## number of response variables
result <- vector("list", N)
for (i in 1:N) {
  ## change response variable in the formula
  aov_hack$call[[2]][[2]] <- as.name(response_names[i])
  ## change residuals
  aov_hack$residuals <- fit$residuals[, i]
  ## change effects
  aov_hack$effects <- fit$effects[, i]
  ## change "terms" object and attribute
  old_tm <- terms(fit) ## old "terms" object in the model
  old_tm[[2]] <- as.name(response_names[i]) ## change response name in terms
  new_tm <- terms.formula(formula(old_tm)) ## new "terms" object
  aov_hack$terms <- new_tm ## replace `aov_hack$terms`
  ## replace data in the model frame
  aov_hack$model[1] <- data.frame(fit$model[[1]][, i])
  names(aov_hack$model)[1] <- response_names[i]
  ## run `TukeyHSD` on `aov_hack`
  result[[i]] <- TukeyHSD(aov_hack)
}
result[[1]] ## for example
# Tukey multiple comparisons of means
# 95% family-wise confidence level
#
#Fit: aov(formula = V1 ~ group, data = dat)
#
#$group
# diff lwr upr p adj
#b-a -0.012743870 -0.1043869 0.07889915 0.9425847
#c-a -0.022470482 -0.1141135 0.06917254 0.8322109
#c-b -0.009726611 -0.1013696 0.08191641 0.9661356
I have used a "for" loop. Replace it with a lapply if you want.
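For example, a sketch of the lapply version; each iteration works on its own local copy of aov_hack (here named ah), so nothing leaks between responses:
result <- lapply(seq_len(N), function(i) {
  ah <- aov_hack ## local copy of the prepared object
  ah$call[[2]][[2]] <- as.name(response_names[i])
  ah$residuals <- fit$residuals[, i]
  ah$effects <- fit$effects[, i]
  old_tm <- terms(fit)
  old_tm[[2]] <- as.name(response_names[i])
  ah$terms <- terms.formula(formula(old_tm))
  ah$model[1] <- data.frame(fit$model[[1]][, i])
  names(ah$model)[1] <- response_names[i]
  TukeyHSD(ah)
})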

Rolling regression and prediction with lm() and predict()

I need to apply lm() to an enlarging subset of my dataframe dat, while making prediction for the next observation. For example, I am doing:
fit model                predict
----------------------   ----------------
dat[1:3, ]               dat[4, ]
dat[1:4, ]               dat[5, ]
.                        .
.                        .
dat[1:(nrow(dat)-1), ]   dat[nrow(dat), ]
I know what I should do for a particular subset (related to this question: predict() and newdata - How does this work?). For example, to predict the last row, I do
dat1 = dat[1:(nrow(dat)-1), ]
dat2 = dat[nrow(dat), ]
fit = lm(log(clicks) ~ log(v1) + log(v12), data=dat1)
predict.fit = predict(fit, newdata=dat2, se.fit=TRUE)
How can I do this automatically for all subsets, and potentially extract what I want into a table?
From fit, I'd need the summary(fit)$adj.r.squared;
From predict.fit I'd need predict.fit$fit value.
Thanks.
(Efficient) solution
This is what you can do:
p <- 3 ## number of parameters in lm()
n <- nrow(dat) - 1
## a function to return what you desire for subset dat[1:x, ]
bundle <- function(x) {
  fit <- lm(log(clicks) ~ log(v1) + log(v12), data = dat, subset = 1:x, model = FALSE)
  pred <- predict(fit, newdata = dat[x+1, ], se.fit = TRUE)
  c(summary(fit)$adj.r.squared, pred$fit, pred$se.fit)
}
## rolling regression / prediction
result <- t(sapply(p:n, bundle))
colnames(result) <- c("adj.r2", "prediction", "se")
Note that I have done several things inside the bundle function:
I have used the subset argument to select the subset to fit;
I have used model = FALSE to avoid saving the model frame, which saves memory.
Overall, there is no explicit loop; sapply does the iteration.
Fitting starts from p, the minimum number of data points required to fit a model with p coefficients;
fitting terminates at nrow(dat) - 1, as we need to hold out at least the final row for prediction.
Test
Example data (with 30 "observations")
dat <- data.frame(clicks = runif(30, 1, 100), v1 = runif(30, 1, 100),
                  v12 = runif(30, 1, 100))
Applying the code above gives 27 rows of results in total (output truncated to the first 5 rows):
          adj.r2 prediction        se
[1,]         NaN   3.881068       NaN
[2,] 0.106592619   3.676821 0.7517040
[3,] 0.545993989   3.892931 0.2758347
[4,] 0.622612495   3.766101 0.1508270
[5,] 0.180462206   3.996344 0.2059014
The first column is the adjusted R-squared value of the fitted model, while the second column is the prediction. The first value for adj.r2 is NaN because the first model we fit has 3 coefficients for 3 data points, so no sensible statistics are available. The same happens to se: the fitted model passes exactly through those 3 points (all residuals are 0), leaving zero residual degrees of freedom, so no residual standard error can be estimated and the prediction comes without an uncertainty estimate.
I just made up some random data to use for this example. I'm calling the object data because that was what it was called in the question at the time that I wrote this solution (call it anything you like).
data <- data.frame(v1=rnorm(100),v2=rnorm(100),clicks=rnorm(100))
data1 = data[1:(nrow(data)-1), ]
data2 = data[nrow(data), ]
for(i in 3:nrow(data)){
  nam <- paste("predict", i, sep = "")
  nam1 <- paste("fit", i, sep = "")
  nam2 <- paste("summary_fit", i, sep = "")
  fit = lm(clicks ~ v1 + v2, data=data[1:i,])
  tmp <- predict(fit, newdata=data2, se.fit=TRUE)
  tmp1 <- fit
  tmp2 <- summary(fit)
  assign(nam, tmp)
  assign(nam1, tmp1)
  assign(nam2, tmp2)
}
All of the results you want will be stored in the data objects this creates.
For example:
> summary_fit10$r.squared
[1] 0.3087432
You mentioned in the comments that you'd like a table of results. You can programmatically create tables of results from the 3 types of output objects like this:
rm(data,data1,data2,i,nam,nam1,nam2,fit,tmp,tmp1,tmp2)
frames <- ls()
frames.fit <- frames[1:98] #change index or use pattern matching as needed
frames.predict <- frames[99:196]
frames.sum <- frames[197:294]
fit.table <- data.frame(intercept=NA,v1=NA,v2=NA,sourcedf=NA)
for(i in 1:length(frames.fit)){
  tmp <- get(frames.fit[i])
  fit.table <- rbind(fit.table, c(tmp$coefficients[[1]], tmp$coefficients[[2]], tmp$coefficients[[3]], frames.fit[i]))
}
fit.table
intercept v1 v2 sourcedf
2 -0.0647017971121678 1.34929652763687 -0.300502017324518 fit10
3 -0.0401617893034109 -0.034750571912636 -0.0843076273486442 fit100
4 0.0132968863522573 1.31283604433593 -0.388846211083564 fit11
5 0.0315113918953643 1.31099122173898 -0.371130010135382 fit12
6 0.149582794027583 0.958692838785998 -0.299479715938493 fit13
7 0.00759688947362175 0.703525856001948 -0.297223988673322 fit14
8 0.219756240025917 0.631961979610744 -0.347851129205841 fit15
9 0.13389223748979 0.560583832333355 -0.276076134872669 fit16
10 0.147258022154645 0.581865844000838 -0.278212722024832 fit17
11 0.0592160359650468 0.469842498721747 -0.163187274356457 fit18
12 0.120640756525163 0.430051839741539 -0.201725012088506 fit19
13 0.101443924785995 0.34966728554219 -0.231560038360121 fit20
14 0.0416637001406594 0.472156988919337 -0.247684504074867 fit21
15 -0.0158319749710781 0.451944113682333 -0.171367482879835 fit22
16 -0.0337969739950376 0.423851304105399 -0.157905431162024 fit23
17 -0.109460218252207 0.32206642419212 -0.055331391802687 fit24
18 -0.100560410735971 0.335862465403716 -0.0609509815266072 fit25
19 -0.138175283219818 0.390418411384468 -0.0873106257144312 fit26
20 -0.106984355317733 0.391270279253722 -0.0560299858019556 fit27
21 -0.0740684978271464 0.385267011513678 -0.0548056844433894 fit28
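A similar loop builds a table from the prediction objects (a sketch following the same pattern; pred.table is a placeholder name):
pred.table <- data.frame(prediction=NA, se=NA, sourcedf=NA)
for(i in 1:length(frames.predict)){
  tmp <- get(frames.predict[i])  ## a predict(..., se.fit=TRUE) result
  pred.table <- rbind(pred.table, c(tmp$fit, tmp$se.fit, frames.predict[i]))
}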

Best way to extract a reference category from a glm model?

I'm writing a function that accepts a full and a reduced glm object to summarize interaction results for a variable of interest varofint and the interaction variable interaction_var (by performing a lrtest and using svycontrast on the full object to extract results for varofint for each level of interaction_var). Sample data:
x <- data.frame(outcome=rbinom(100,1,.3),varofint=rnorm(100), interaction_var=sample(letters[1:3],100,replace=TRUE))
reduced <- glm(outcome~varofint+interaction_var,data=x)
full <- glm(outcome~varofint*interaction_var,data=x)
I'd like to know the best way to extract a reference category for said (full) glm model. I could obviously do something like
levels(full$data$interaction_var)[1]
but would this be a "safe" method to extract a reference category given inputs to the contrasts argument? It seems like, given the option to select SAS contrasts, this method could produce a level of interaction_var that isn't the one used as the reference category in the model. Would the following be safer?
mf <- model.frame(full)
setdiff(rownames(contrasts(mf[, "interaction_var"])), colnames(contrasts(mf[, "interaction_var"])))
or similarly
names(which(apply(contrasts(mf[, "interaction_var"]),1,function(.v){all(.v==0)})))
Am I missing a simpler way to extract the reference category?
Here is a function for this task:
refCat <- function(model, var) {
  cs <- attr(model.matrix(model), "contrasts")[[var]]
  if (is.character(cs)) {
    if (cs == "contr.treatment")
      ref <- 1
    else stop("No treatment contrast")
  }
  else {
    zeroes <- !cs
    ones <- cs == 1
    stopifnot(all(zeroes | ones))
    cos <- colSums(ones)
    stopifnot(all(cos == 1))
    ros <- rowSums(ones)
    stopifnot(sum(!ros) == 1 && sum(ros) != ncol(cs))
    ref <- which(!ros)
  }
  return(levels(model$data[[var]])[ref])
}
The function will stop if the variable var is not represented as treatment contrasts.
Examples:
refCat(reduced, "interaction_var")
# [1] "a"
refCat(full, "interaction_var")
# [1] "a"
A bit late, but dummy.coef() could work... the first value in each variable's element of its output is the reference category.
# since R 4.0.0, data.frame() does not produce factors by default
x <- data.frame(
  outcome = rbinom(100, 1, .3),
  varofint = rnorm(100),
  interaction_var = sample(letters[1:3], 100, replace = TRUE),
  stringsAsFactors = TRUE
)
reduced <- glm(outcome ~ varofint + interaction_var, data = x)
full <- glm(outcome ~ varofint * interaction_var, data = x)
d <- dummy.coef(full)
d
# Full coefficients are
#
# (Intercept): 0.310136
# varofint: -0.07247677
# interaction_var: a b c
# 0.00000000 0.07017833 -0.05891015
# varofint:interaction_var: a b c
# 0.00000000 -0.14824179 -0.04123618
d$interaction_var
# a b c
# 0.00000000 0.07017833 -0.05891015
d$interaction_var[1]
# a
# 0
names(d$interaction_var[1])
# [1] "a"
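If you prefer this packaged like refCat above, a minimal sketch (refCatDummy is a placeholder name; it assumes treatment-style coding, where the reference level comes first with a zero coefficient):
refCatDummy <- function(model, var) {
  names(dummy.coef(model)[[var]])[1]  ## first level is the reference
}
refCatDummy(full, "interaction_var")
# [1] "a"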
