I have a linear model where one of the independent variables is a factor, and I am trying to make predictions on a data set that contains a new factor level (a level that wasn't in the data set the model was estimated on). I want to make predictions for the observations with the new factor level by manually specifying the coefficient that will be applied to the factor. For example, suppose I estimate daily sales volumes for three types of stores, and I then introduce a fourth type of store into the dataset. I have no historical data for it, but I might assume it will behave like some weighted combination of the other store types, for which I have model coefficients.
If I try to apply predict.lm() to the new data, I get an error telling me that the factor has new levels (which makes sense).
df <- data.frame(y=rnorm(100), x1=factor(rep(1:4,25)))
lm1 <- lm(y ~ x1, data=df)
newdata <- data.frame(y=rnorm(100), x1=factor(rep(1:5,20)))
predict(lm1, newdata)
Error in model.frame.default(Terms, newdata, na.action = na.action, xlev = object$xlevels) :
  factor x1 has new levels 5
I could do the prediction manually by simply multiplying the coefficients by the individual columns in the data.frame. However, this is cumbersome given that the real model I'm working with has many variables and interaction terms, and I want to be able to easily cycle through various model specifications by changing the model formula. Is there a way for me to essentially add a new coefficient to a model object and then use it to make forecasts? If not, is there another approach that is less cumbersome than setting up the entire prediction step manually?
Assuming you want level 5 to be evenly weighted across the four known levels, you can build the model matrix for the new data, plug in the 25% weights, and multiply by the coefficients from the model:
n.mat <- model.matrix(~ x1, data = newdata)   # design matrix, incl. a column for level 5
n.mat[n.mat[, 5] == 1, 2:4] <- .25            # level-5 rows: weight levels 2-4 at 25% each
n.mat <- n.mat[, -5]                          # drop the level-5 column (no coefficient for it)
n.prediction <- n.mat %*% coef(lm1)           # manual predictions
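A slightly more general version of the same idea, wrapped as a helper so the weights can be anything (a sketch for the simple one-factor case; predict_mixed and its arguments are my own names, not from any package, and interaction terms would need more care):

predict_mixed <- function(fit, newdata, new_level_col, weights) {
  # Design matrix for the new data; includes a dummy column for the new level
  X <- model.matrix(delete.response(terms(fit)), data = newdata)
  rows <- X[, new_level_col] == 1
  # For rows with the new level, overwrite the dummy columns named in 'weights'
  X[rows, names(weights)] <- matrix(weights, sum(rows), length(weights), byrow = TRUE)
  X <- X[, names(coef(fit))]   # keep only columns the model has coefficients for
  drop(X %*% coef(fit))
}
predict_mixed(lm1, newdata, "x15", c(x12 = .25, x13 = .25, x14 = .25))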
Here is what you could do:
Using rbind, stack the training and test datasets.
Factorize the predictors.
Split the stack back into the training and test datasets.
This way all the levels will be present in both datasets (a sketch follows below).
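A minimal sketch of that recipe, with placeholder names (train, test, and the factor column type are assumptions, not from the question):

train$set <- "train"; test$set <- "test"   # tag rows so we can split again later
full <- rbind(train, test)                 # stack: every level now appears
full$type <- factor(full$type)             # factorize on the combined data
train2 <- full[full$set == "train", ]      # split back into the two sets
test2  <- full[full$set == "test", ]

One caveat: lm() itself drops unused factor levels when fitting, so this recipe mainly helps with modelling functions that keep the full level set.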
Suppose I am given data like the following.
set.seed(1005)   # set before any random draws so the example is reproducible
id1 <- rep(1:10, 2)
trt <- c(rep(1, 10), rep(0, 10))
outcome <- rnorm(20)
# make trt missing with probability roughly proportional to |outcome|
missing <- c()
for (i in outcome) {
  if (rbinom(1, 1, 0.8 * abs(i / max(abs(outcome)))) == 1) {
    missing <- c(missing, which(outcome == i))
  }
}
missing
trt[missing] <- NA
dat <- data.frame(id = id1, trt = as.factor(trt), outcome = outcome)
a1 <- mice::mice(dat, method = c('', 'logreg', ''))
I can use mice to impute the data and then conduct my analysis on a1, where I assume that outcome and id predict trt by logistic regression (in fact, only outcome predicts trt here). a1$formulas$trt accesses the formula used for the imputation. I want to modify this formula so that there is a constant offset.
forms_a1 <- a1$formulas
forms_a1$trt <- as.formula(trt ~ outcome + offset(2))
mice::mice(dat, method = c('', 'logreg', ''), formulas = forms_a1)
However, mice gives the following output.
 iter imp variable
  1   1  trt
Error in model.frame.default(formula, data = data, na.action = na.pass) :
  variable lengths differ (found for 'offset(2)')
Q1: How do I offset the intercept here? I think I could add an extra column as the offset variable and modify the formula and predictorMatrix. The δ shift is implemented here (https://stefvanbuuren.name/fimd/sec-sensitivity.html) for the continuous-variable case; however, it seems that this may change the estimated coefficients.
Q2: If I wanted to offset a slope instead, say by 10*id in the above formula, how would I do so?
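A sketch of the extra-column idea from Q1. The error above arises because offset(2) has length 1 while the other variables have length 20; storing the offset in a data-length column avoids at least that error (off is a hypothetical column name, and whether the logreg imputation method honors the offset internally is untested):

dat$off <- 2                                    # constant offset as a real column
forms_a1 <- a1$formulas
forms_a1$trt <- trt ~ outcome + offset(off)
# method now needs one entry per column, including 'off'
mice::mice(dat, method = c('', 'logreg', '', ''), formulas = forms_a1)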
I am doing a multi-level meta-analysis. Many studies have several subgroups, and when I make a forest plot the rows are those subgroups. There are 60 of them; I would rather plot by study, which would give 25 rows and be more appropriate. Does anyone have an idea how to do this forest plot?
I did it this way:
full.model <- rma.mv(yi = yi,
                     V = vi,
                     slab = Author,
                     data = df,
                     random = ~ 1 | Author/Study,
                     test = "t",
                     method = "REML")
forest(full.model)
It is not clear to me whether you want to aggregate to the Author level or to the Study level. If there are multiple rows of data for particular studies, then the model isn't really complete, and you would want to add another random intercept for the level of the individual estimates within studies. Essentially, the lowest random effect should have as many values for nlvls in the output as there are estimates (k).
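For instance, with a row identifier for the individual estimates (esid is a hypothetical column; one id per row of the dataset):

df$esid <- 1:nrow(df)   # one id per estimate
full.model <- rma.mv(yi = yi, V = vi, slab = Author, data = df,
                     random = ~ 1 | Author/Study/esid,
                     test = "t", method = "REML")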
Let's first tackle the case where we have a multilevel structure with two levels: studies, and multiple estimates within studies (for some technical reasons, some might call this a three-level model, but let's not get into that). I will use a fully reproducible example for illustration, based on the dat.konstantopoulos2011 dataset, where we have districts and schools within districts. We fit a multilevel model of the same type as yours with:
library(metafor)
dat <- dat.konstantopoulos2011
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data=dat)
res
We can aggregate the estimates to the district level using the aggregate() function, specifying the marginal var-cov matrix of the estimates from the model to account for their non-independence. (Note that this uses aggregate.escalc(), which only works with escalc objects, so if your dataset is not one, you need to convert it first - see help(aggregate.escalc) for details.)
agg <- aggregate(dat, cluster=dat$district, V=vcov(res, type="obs"))
agg
You will find that if you then fit an equal-effects model to these aggregated estimates, the results are identical to what you obtained from the multilevel model (we use an equal-effects model since the heterogeneity accounted for by the multilevel model is already encapsulated in vcov(res, type="obs")):
rma(yi, vi, method="EE", data=agg)
So, we can now use these aggregated values in a forest plot:
with(agg, forest(yi, vi, slab=district))
My guess based on your description is that you actually have an additional level that you should include in the model, and that you want to aggregate to the intermediate level. This is a tad more complicated, since aggregate() isn't meant for that. Just for illustration, say we use year as another (higher) level; I will also mess a bit with the data so that all three variance components are non-zero (again, purely for illustration):
dat$yi[dat$year == 1976] <- dat$yi[dat$year == 1976] + 0.8
res <- rma.mv(yi, vi, random = ~ 1 | year/district/school, data=dat)
res
Now, instead of aggregate(), we can accomplish the same thing with a multivariate model, including the intermediate level as a factor and again using vcov(res, type="obs") as the var-cov matrix:
agg <- rma.mv(yi, V=vcov(res, type="obs"), mods = ~ 0 + factor(district), data=dat)
agg
The coefficients of this model are now the aggregated values, and the var-cov matrix of the coefficients is the var-cov matrix of these aggregated values:
coef(agg)
vcov(agg)
They are not all independent (since we haven't aggregated to the highest level), so if we want to check that we can obtain the same results as from the multilevel model, we must account for this dependency:
rma.mv(coef(agg), V=vcov(agg), method="EE")
Again, exactly the same results. So now we use these coefficients, with the diagonal of vcov(agg) as their sampling variances, in the forest plot:
forest(coef(agg), diag(vcov(agg)), slab=names(coef(agg)))
The forest plot cannot show the dependency that remains among these values, so if one were to meta-analyze the aggregated values using only diag(vcov(agg)) as their sampling variances, the results would not be identical to those from the full multilevel model. But there isn't really a way around that; the plot is just a visualization of the aggregated estimates, and the CIs shown are correct.
You need to specify your own grouping in a new column of the data and use this as the new random effect:
df$study_group <- c(1,1,1,2,2,3,4,5,5,5)   # example grouping
full.model <- rma.mv(yi = yi,
                     V = vi,
                     slab = Author,
                     data = df,
                     random = ~ 1 | study_group,
                     test = "t",
                     method = "REML")
forest(full.model)
I'm using the caret package with the leaps package to choose the number of variables in a linear regression. How do I extract the model with the lowest RMSE that uses mdl$bestTune number of variables? If this can't be done, are there functions in other packages you would recommend that allow LOOCV of a stepwise linear regression and actually let me find the final model?
Below is reproducible code. From mdl$bestTune I can tell that the number of variables should be 4 (even though I would have hoped for 3). It seems like I should be able to extract the variables from the third row of summary(mdl$finalModel), but I'm not sure how I would do this in the general case and not just for this example.
library(caret)
set.seed(101)
x <- matrix(rnorm(36*5), nrow=36)
colnames(x) <- paste0("V", 1:5)
y <- 0.2*x[,1] + 0.3*x[,3] + 0.5*x[,4] + rnorm(36) * .0001
train.control <- trainControl(method="LOOCV")
mdl <- train(x=x, y=y, method="leapSeq", trControl = train.control, trace=FALSE)
coef(mdl$finalModel, as.double(mdl$bestTune))
mdl$bestTune
summary(mdl$finalModel)
mdl$results
Here's the context behind my question in case it's of interest. I have historical monthly returns for hundreds of mutual funds. Each fund's returns will be a dependent variable that I'd like to regress against a set of returns on a handful (e.g. 5) of factors. For each fund I want to run a stepwise regression. I expect only 1 to 3 of the five factors to be significant for any fund.
You can use:
coef(mdl$finalModel, unlist(mdl$bestTune))
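mdl$bestTune holds the chosen subset size, and coef() on the regsubsets object returns the coefficients for the model of that size. The names of that vector are the selected variables (plus the intercept), which answers the "which variables" part:

names(coef(mdl$finalModel, unlist(mdl$bestTune)))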
I'm learning to implement robust GLMs in R, but I can't figure out why I am unable to get glmrob to predict values from my regression models when some columns are dropped due to collinearity. Specifically, when I use the predict function on a glmrob fit, it always gives NA for all values. I don't observe this when predicting values from the same data and model using glm. It doesn't seem to matter what data I use: as long as there is an NA coefficient in the fitted model (and the NA isn't the last coefficient in the coefficient vector), predict does not work.
This behavior holds for all datasets and models I have tried where an internal column is dropped due to collinearity. Below is a fake data set where two columns are dropped from the model, giving two NAs in the coefficient list. Both glm and glmrob give nearly identical coefficients, yet predict only works with the glm model. So my question is: what don't I understand about robust regression that would prevent my glmrob models from generating predicted values?
library(robustbase)

# Make fake data with two categorical predictors; location is nested within
# category, so two dummy columns are aliased and get NA coefficients
df <- data.frame(category = rep(c("A", "B", "C"), each = 6))
df$location <- rep(1:6, each = 3)
val <- rep(c(500, 50, 5000), each = 6) + rep(c(50, 100, 25, 200, 100, 1), each = 3)
df$value <- rpois(NROW(df), val)

# Note that predict works if we omit the newdata parameter; however, I need the
# newdata param, so I use the original data frame here as a stand-in.
mod <- glm(value ~ category + as.factor(location), data = df, family = poisson)
predict(mod, newdata = df)   # works fine
mod <- glmrob(value ~ category + as.factor(location), data = df, family = poisson)
predict(mod, newdata = df)   # predicts NA for all values
I've been digging into this and have concluded that the problem does not lie in my understanding of robust regression; rather, it is a bug in the robustbase package. The predict.lmrob function does not correctly pick the necessary coefficients from the model before the prediction: it needs to pick the first x non-NA coefficients (where x is the rank of the model matrix), but instead it merely picks the first x coefficients without checking whether they are NA. This explains why the problem only surfaces for models where the NA isn't the last coefficient in the coefficient vector.
To fix this, I copied the predict.lmrob source using:
getAnywhere(predict.lmrob)
and created my own replacement function. In this function I made a single modification to the code:
...
p <- object$rank
if (is.null(p)) {
    df <- Inf
    p <- sum(!is.na(coef(object)))
    # piv <- seq_len(p)                  # old code
    piv <- which(!is.na(coef(object)))   # new code
}
else {
    p1 <- seq_len(p)
    piv <- if (p)
        qr(object)$pivot[p1]
}
...
I've run a few hundred datasets using this change and it has worked well.
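If patching the function isn't appealing, the same idea works by hand: drop the NA (aliased) coefficients and compute the linear predictor yourself. A minimal sketch, reusing mod and df from the example above:

b  <- coef(mod)                                 # includes NAs for aliased terms
X  <- model.matrix(~ category + as.factor(location), data = df)
ok <- !is.na(b)                                 # keep only estimable columns
eta <- drop(X[, ok, drop = FALSE] %*% b[ok])    # linear predictor
exp(eta)                                        # poisson log link -> response scale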
I have some data with missing values for one variable, and I want to be able to create (random) predictions for what these could be. Here's my first thought:
# miss indexes the observations whose response (column 4) is missing
library(MASS)
model <- glm.nb(data[-miss, 4] ~ ., data = data[-miss, -4])
predict(model, newdata = data[miss, -4])
However, if I repeat the last line, it gives the same answers over and over: it returns the predicted mean response given the data and the model. I want a random prediction which incorporates variance, i.e. a random draw from the distribution of the response of an observation with those predictors under the given model.
It could have something to do with the pred.var argument, but I'm unsure how to use that.
Suppose we have data like this:
set.seed(101)
dd <- data.frame(x=(1:20)*0.1)
dd$y <- rnbinom(20,mu=exp(dd$x),size=1)
## make some missing values
miss <- c(2,3,5)
dd$y[miss] <- NA
Now fit a model:
m1 <- MASS::glm.nb(y ~ x, data = dd, na.action = na.exclude)
Now use predictions from that model to get the expected mean value and rnbinom to generate the random values ...
p <- predict(m1, newdata = dd, type = "response")
randvals <- rnbinom(length(p), mu = p, size = m1$theta)
(This gives random values for every element, not just the missing ones, but you can obviously pick out just the ones you want.) It would be nice if the simulate method did this, but it's not quite flexible enough ...
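For example, to fill in only the missing responses under the setup above:

dd$y[miss] <- randvals[miss]   # replace only the NAs with random draws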