Comparing percent change of model coefficients in R

I am working through step 3 of purposeful model-building from Hosmer and Lemeshow, which suggests comparing the percent change in coefficients between a full model [Iris.mod1] and a reduced model [Iris.mod2]. I would like to automate this step if possible.
Right now I have the following code:
# Make Species a binomial DV
iris = subset(iris, iris$Species != 'virginica')
iris$Species = as.numeric(ifelse(iris$Species == 'setosa', 1, 0))

# Build models
Iris.mod1 = glm(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
                data = iris, family = binomial())
Iris.mod2 = glm(Species ~ Sepal.Length + Petal.Length,
                data = iris, family = binomial())
The dataset I am actually using has about 93 variables and 1.7 million rows, but I am using the iris data just for this example.
#Try to see if any coefficients changed by > 20%
paste(names(which((summary(Iris.mod1)$coefficients[2:
(nrow(summary(Iris.mod1)$coefficients)),1] -
(summary(Iris.mod2)$coefficients[2:
(nrow(summary(Iris.mod2)$coefficients)),1]/
(summary(Iris.mod1)$coefficients[2:nrow(summary(Iris.mod1)$coefficients)),1]
> 0.2 == TRUE)))))
However, this code is full of errors and I am lost in a sea of parentheses.
Is there an efficient way to determine which variables' coefficients changed by more than 20%?
Thank you in advance.

The broom package is really nice for making data frames of model coefficients and terms. We can use that to get things in a workable format:
library(broom)
m_list = list(m1 = Iris.mod1, m2 = Iris.mod2)
t_list = lapply(m_list, tidy)

library(dplyr)
library(tidyr)
bind_rows(t_list, .id = "mod") %>%
  select(term, estimate, mod) %>%
  spread(key = mod, value = estimate) %>%
  mutate(p_change = (m2 - m1) / m1 * 100,
         p_change_gt_20 = p_change > 20)
#           term         m1        m2 p_change p_change_gt_20
# 1  (Intercept)  -6.556265 -65.84266 904.2709           TRUE
# 2 Petal.Length -19.053588 -49.04616 157.4117           TRUE
# 3  Petal.Width -25.032928        NA       NA             NA
# 4 Sepal.Length   9.878866  37.56141 280.2199           TRUE
# 5  Sepal.Width   7.417640        NA       NA             NA
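If you're on a newer tidyr, pivot_wider() replaces spread(), and comparing the absolute value catches coefficients that shrink as well as grow; a small variation on the pipeline above (same t_list as before, and the intercept filter is just a suggestion, since the Hosmer-Lemeshow step is about the covariates):
bind_rows(t_list, .id = "mod") %>%
  filter(term != "(Intercept)") %>%   # the intercept is usually not of interest here
  select(term, estimate, mod) %>%
  pivot_wider(names_from = mod, values_from = estimate) %>%
  mutate(p_change = (m2 - m1) / m1 * 100,
         p_change_gt_20 = abs(p_change) > 20)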


Running random forest algorithm with one variable

I'm running the random forest algorithm with a single predictor.
RF_MODEL <- randomForest(x=Data_[,my_preds], y=as.factor(Data_$P_A), data=Data_, ntree=1000, importance =T)
But I got this error message:
Error in if (n == 0) stop("data (x) has 0 rows") :
  l'argument est de longueur nulle
(the French locale message means "the argument is of length zero")
Does this mean that we can't use RF with one variable?
The issue here is that when you specify x in randomForest(), x should be "a data frame or a matrix of predictors, or a formula describing the model to be fitted". You are passing a vector, Data_[, my_preds], where I assume my_preds is a string giving the column name; subsetting a single column of a data frame returns a vector by default.
You can use drop = FALSE to keep x as a one-column data frame.
RF_MODEL <- randomForest(x = Data_[,my_preds, drop = FALSE],
y = as.factor(Data_$P_A),
data = Data_,
ntree = 1000, importance = TRUE)
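To see the difference directly (using iris as a stand-in for your Data_):
class(iris[, "Sepal.Width"])                # "numeric"    -- a plain vector
class(iris[, "Sepal.Width", drop = FALSE])  # "data.frame" -- what randomForest expects for x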
We can demonstrate using the iris dataset.
library(randomForest)
randomForest(x = iris[, "Sepal.Width"], y = iris$Species, data = iris)
Error in if (n == 0) stop("data (x) has 0 rows") :
argument is of length zero
Using drop = FALSE:
randomForest(x = iris[, "Sepal.Width", drop = FALSE], y = iris$Species, data = iris)
Call:
 randomForest(x = iris[, "Sepal.Width", drop = FALSE], y = iris$Species, data = iris)
               Type of random forest: classification
                     Number of trees: 500
No. of variables tried at each split: 1

        OOB estimate of  error rate: 52.67%
Confusion matrix:
           setosa versicolor virginica class.error
setosa         31          2        17        0.38
versicolor      3         20        27        0.60
virginica      17         13        20        0.60
You might also consider using a formula to avoid this issue:
randomForest(Species ~ Sepal.Width, data = iris)

How would I get the pattern of errors on test items for a logistic regression model?

I am trying to analyse the pattern of errors (accuracy) on test items for the model I coded below. I would like to find out how often setosa and versicolor irises are incorrectly classified as virginica, and how often virginica irises are incorrectly classified as not virginica. Can this be done? Any suggestions would be great. Here are my logistic regression model and a classifier built from it...
library(datasets)
iris$dummy_virginica_iris <- 0
iris$dummy_virginica_iris[iris$Species == 'virginica'] <- 1
iris$dummy_virginica_iris
# Logistic regression model.
glm <- glm(dummy_virginica_iris ~ Petal.Width + Sepal.Width,
           data = iris,
           family = 'binomial')
summary(glm)

# Classifier.
glm.pred <- predict(glm, type = "response")
virginica <- ifelse(glm.pred > .5, TRUE, FALSE)
You can create a new vector to separate the flowers into virginica / non-virginica like this:
species <- as.character(iris$Species)
species[species != "virginica"] <- "non-virginica"
Then you can just tabulate this against your model's predictions as a 2 x 2 contingency table:
result <- table(virginica, species)
print(result)
#          species
# virginica non-virginica virginica
#     FALSE            96         3
#     TRUE              4        47
This allows for easy calculation of the sensitivity, specificity, and accuracy of your model:
sensitivity <- result[2, 2] / sum(result[, 2])
specificity <- result[1, 1] / sum(result[, 1])
accuracy <- (result[1, 1] + result[2, 2]) / sum(result)
sensitivity
# [1] 0.94
specificity
# [1] 0.96
accuracy
# [1] 0.9533333
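If you also want the error pattern broken down by the original three species, i.e. how often setosa and versicolor each get called virginica, you can tabulate the predictions against iris$Species itself (continuing from the code above):
# rows: the model's TRUE/FALSE virginica call; columns: the actual species
table(predicted_virginica = virginica, actual = iris$Species)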

Linear regression with `lm()`: prediction interval for aggregated predicted values

I'm using predict.lm(fit, newdata=newdata, interval="prediction") to get predictions and their prediction intervals (PI) for new observations. Now I would like to aggregate (sum and mean) these predictions and their PI's based on an additional variable (i.e. a spatial aggregation on the zip code level of predictions for single households).
I learned from Stack Exchange that you cannot aggregate the prediction intervals of single predictions just by aggregating the limits of those intervals. The post is very helpful for understanding why this can't be done, but I have a hard time translating it into actual code.
Here's a reproducible example:
library(dplyr)
set.seed(123)
data(iris)
#Split dataset in training and prediction set
smp_size <- floor(0.75 * nrow(iris))
train_ind <- sample(seq_len(nrow(iris)), size = smp_size)
train <- iris[train_ind, ]
pred <- iris[-train_ind, ]
#Fit regression model
fit1 <- lm(Petal.Width ~ Petal.Length, data=train)
#Fit multiple linear regression model
fit2 <- lm(Petal.Width ~ Petal.Length + Sepal.Width + Sepal.Length, data=train)
# Predict Petal.Width for new data incl. prediction intervals for each prediction
predictions1 <- predict(fit1, newdata = pred, interval = "prediction")
predictions2 <- predict(fit2, newdata = pred, interval = "prediction")
# Aggregate data by summing predictions for species
# NOT correct for prediction intervals
predictions_agg1 <- data.frame(predictions1, Species = pred$Species) %>%
  group_by(Species) %>%
  summarise_all(funs(sum, mean))

predictions_agg2 <- data.frame(predictions2, Species = pred$Species) %>%
  group_by(Species) %>%
  summarise_all(funs(sum, mean))
I couldn't find a good tutorial or package that describes how to properly aggregate predictions and their PIs in R when using predict.lm(). Is there something out there? I would highly appreciate it if you could point me in the right direction on how to do this in R.
Your question is closely related to a thread I answered 2 years ago: linear model with `lm`: how to get prediction variance of sum of predicted values. It provides an R implementation of Glen_b's answer on Cross Validated. Thanks for quoting that Cross Validated thread; I didn't know about it, and perhaps I can leave a comment there linking to this Stack Overflow thread.
I have polished my original answer, wrapping up line-by-line code cleanly into easy-to-use functions lm_predict and agg_pred. Solving your question is then simplified to applying those functions by group.
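For readers who don't want to follow the link, here is a minimal sketch of what functions with that interface could look like, built only from the standard lm prediction-variance formulas. The argument names mirror the usage below but are otherwise my own assumptions; the polished versions live in the linked Q & A.
## Sketch only: point predictions plus the covariance of the predicted means,
## from which aggregated CIs and PIs can be formed.
lm_predict <- function(lmObject, newdata, diag = TRUE) {
  Xp  <- model.matrix(delete.response(terms(lmObject)), newdata)
  fit <- drop(Xp %*% coef(lmObject))               # point predictions
  V   <- Xp %*% vcov(lmObject) %*% t(Xp)           # covariance of the predicted means
  list(fit = fit,
       var.fit = if (diag) diag(V) else V,         # diag = FALSE keeps the full matrix
       sig2 = sigma(lmObject)^2,
       df = lmObject$df.residual)
}

## Aggregate with a weight vector w (all 1s for a sum, 1/n for an average);
## needs lm_predict(..., diag = FALSE) so the full covariance matrix is available.
agg_pred <- function(w, predObject, alpha = 0.95) {
  mu      <- sum(w * predObject$fit)                        # aggregated prediction
  var.mu  <- drop(crossprod(w, predObject$var.fit %*% w))   # variance of the aggregated mean
  var.sum <- var.mu + sum(w^2) * predObject$sig2            # add residual variance for a PI
  tq <- qt(1 - (1 - alpha) / 2, predObject$df)
  list(mean = mu, var = var.mu,
       CI = c(lower = mu - tq * sqrt(var.mu),  upper = mu + tq * sqrt(var.mu)),
       PI = c(lower = mu - tq * sqrt(var.sum), upper = mu + tq * sqrt(var.sum)))
}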
Consider the iris example in your question, and the second model fit2 for demonstration.
set.seed(123)
data(iris)
#Split dataset in training and prediction set
smp_size <- floor(0.75 * nrow(iris))
train_ind <- sample(seq_len(nrow(iris)), size = smp_size)
train <- iris[train_ind, ]
pred <- iris[-train_ind, ]
#Fit multiple linear regression model
fit2 <- lm(Petal.Width ~ Petal.Length + Sepal.Width + Sepal.Length, data=train)
We split pred by group Species, then apply lm_predict (with diag = FALSE) to all sub data frames.
oo <- lapply(split(pred, pred$Species), lm_predict, lmObject = fit2, diag = FALSE)
To use agg_pred we need to specify a weight vector whose length equals the number of observations in the group. We can determine this by consulting the length of fit in each oo[[i]]:
n <- lengths(lapply(oo, "[[", 1))
#    setosa versicolor  virginica
#        11         13         14
If the aggregation operation is a sum, we do
w <- lapply(n, rep.int, x = 1)
#List of 3
# $ setosa : num [1:11] 1 1 1 1 1 1 1 1 1 1 ...
# $ versicolor: num [1:13] 1 1 1 1 1 1 1 1 1 1 ...
# $ virginica : num [1:14] 1 1 1 1 1 1 1 1 1 1 ...
SUM <- Map(agg_pred, w, oo)
SUM[[1]] ## result for the first group, for example
#$mean
#[1] 2.499728
#
#$var
#[1] 0.1271554
#
#$CI
# lower upper
#1.792908 3.206549
#
#$PI
# lower upper
#0.999764 3.999693
sapply(SUM, "[[", "CI") ## some nice presentation for CI, for example
# setosa versicolor virginica
#lower 1.792908 16.41526 26.55839
#upper 3.206549 17.63953 28.10812
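As a quick sanity check, the aggregated mean for a sum is just the sum of the individual point predictions, so you can verify it against plain predict():
# should agree with SUM[["setosa"]]$mean
sum(predict(fit2, newdata = split(pred, pred$Species)$setosa))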
If the aggregation operation is an average, we rescale w by n and call agg_pred again.
w <- mapply("/", w, n)
#List of 3
# $ setosa : num [1:11] 0.0909 0.0909 0.0909 0.0909 0.0909 ...
# $ versicolor: num [1:13] 0.0769 0.0769 0.0769 0.0769 0.0769 ...
# $ virginica : num [1:14] 0.0714 0.0714 0.0714 0.0714 0.0714 ...
AVE <- Map(agg_pred, w, oo)
AVE[[2]] ## result for the second group, for example
#$mean
#[1] 1.3098
#
#$var
#[1] 0.0005643196
#
#$CI
# lower upper
#1.262712 1.356887
#
#$PI
# lower upper
#1.189562 1.430037
sapply(AVE, "[[", "PI") ## some nice presentation for PI, for example
# setosa versicolor virginica
#lower 0.09088764 1.189562 1.832255
#upper 0.36360845 1.430037 2.072496
This is great! Thank you so much! There is one thing I forgot to mention: in my actual application I need to sum ~300,000 predictions, so the full variance-covariance matrix would be about 700 GB in size. Do you have any idea whether there is a computationally more efficient way to get directly to the sum over the variance-covariance matrix?
Use the fast_agg_pred function provided in the revision of the original Q & A. Let's start it all over.
set.seed(123)
data(iris)
#Split dataset in training and prediction set
smp_size <- floor(0.75 * nrow(iris))
train_ind <- sample(seq_len(nrow(iris)), size = smp_size)
train <- iris[train_ind, ]
pred <- iris[-train_ind, ]
#Fit multiple linear regression model
fit2 <- lm(Petal.Width ~ Petal.Length + Sepal.Width + Sepal.Length, data=train)
## list of new data
newdatlist <- split(pred, pred$Species)
n <- sapply(newdatlist, nrow)
#    setosa versicolor  virginica
#        11         13         14
If the aggregation operation is a sum, we do
w <- lapply(n, rep.int, x = 1)
SUM <- mapply(fast_agg_pred, w, newdatlist,
MoreArgs = list(lmObject = fit2, alpha = 0.95),
SIMPLIFY = FALSE)
If the aggregation operation is an average, we do
w <- mapply("/", w, n)
AVE <- mapply(fast_agg_pred, w, newdatlist,
MoreArgs = list(lmObject = fit2, alpha = 0.95),
SIMPLIFY = FALSE)
Note that we can't use Map in this case as we need to provide more arguments to fast_agg_pred. Use mapply in this situation, with MoreArgs and SIMPLIFY.
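The trick that makes this memory-cheap is to collapse the weights into the design matrix first, so you only ever need a length-p vector instead of the n-by-n covariance matrix. A hedged sketch of a function with that interface (again, the real implementation is in the linked Q & A; the argument names just match the calls above):
fast_agg_pred <- function(w, newdata, lmObject, alpha = 0.95) {
  Xp <- model.matrix(delete.response(terms(lmObject)), newdata)
  xw <- drop(crossprod(Xp, w))                          # t(Xp) %*% w, length p
  mu <- sum(xw * coef(lmObject))                        # aggregated prediction
  var.mu  <- drop(crossprod(xw, vcov(lmObject) %*% xw)) # variance of the aggregated mean
  var.sum <- var.mu + sum(w^2) * sigma(lmObject)^2      # residual part of the PI variance
  tq <- qt(1 - (1 - alpha) / 2, lmObject$df.residual)
  list(mean = mu, var = var.mu,
       CI = c(lower = mu - tq * sqrt(var.mu),  upper = mu + tq * sqrt(var.mu)),
       PI = c(lower = mu - tq * sqrt(var.sum), upper = mu + tq * sqrt(var.sum)))
}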

Extracting outputs from a list and save in a data frame

I'm doing some modeling experiments and I need to present the output for multiple models in a specific format for further analysis.
Here is some code to generate multiple models:
# This generates the data
resp <- sample(0:1, 100, TRUE)
x1 <- c(rep(5, 20), rep(0, 15), rep(2.5, 40), rep(17, 25))
x2 <- c(rep(23, 10), rep(5, 10), rep(15, 40), rep(1, 25), rep(2, 15))
x3 <- c(rep(2, 10), rep(50, 10), rep(1, 40), rep(112, 25), rep(22, 15))
dat <- data.frame(resp, x1, x2, x3)

# This builds multiple models
InitLOogModel <- list()
n <- 3
for (i in 1:n) {
  ### Create training and testing data
  ## 80% of the sample size
  # Note that I didn't set a seed, so a new random split is performed in every iteration.
  smp_sizelogis <- floor(0.8 * nrow(dat))
  train_indlogis <- sample(seq_len(nrow(dat)), size = smp_sizelogis)
  trainlogis <- dat[train_indlogis, ]
  testlogis <- dat[-train_indlogis, ]
  InitLOogModel[[i]] <- glm(resp ~ ., data = trainlogis, family = binomial)
}
Here is the output:
InitLOogModel
[[1]]

Call:  glm(formula = resp ~ ., family = binomial, data = trainlogis)

Coefficients:
(Intercept)           x1           x2           x3
  -0.007270     0.004585    -0.015271    -0.009911

Degrees of Freedom: 79 Total (i.e. Null);  76 Residual
Null Deviance:      106.8
Residual Deviance: 104.5    AIC: 112.5

[[2]]

Call:  glm(formula = resp ~ ., family = binomial, data = trainlogis)

Coefficients:
(Intercept)           x1           x2           x3
   1.009670    -0.058227    -0.058783    -0.008337

Degrees of Freedom: 79 Total (i.e. Null);  76 Residual
Null Deviance:      110.1
Residual Deviance: 108.1    AIC: 116.1

[[3]]

Call:  glm(formula = resp ~ ., family = binomial, data = trainlogis)

Coefficients:
(Intercept)           x1           x2           x3
    1.51678     -0.06482     -0.07868     -0.01440

Degrees of Freedom: 79 Total (i.e. Null);  76 Residual
Null Deviance:      110.5
Residual Deviance: 106.3    AIC: 114.3
Note that the output here is a list. Now this is the output I need to create as a data frame (let's call it outDF):
Model  Intercept         x1         x2         x3
    1   -0.00727   0.004585  -0.015271  -0.009911
    2    1.00967  -0.058227  -0.058783  -0.008337
    3    1.51678  -0.06482   -0.07868   -0.0144
Note that the numbers inside each column in outDF are just the regression coefficients. This is how to get them for Model 1 for example:
as.data.frame(coef(summary(InitLOogModel[[1]]))[,1])
You can loop through your list of models and grab the desired summary information with sapply:
as.data.frame(t(sapply(InitLOogModel, function(x) coef(summary(x))[,1])))
#   (Intercept)         x1          x2            x3
# 1   0.5047799 0.01932560 -0.01268125 -0.0041356214
# 2  -1.2712605 0.11281741  0.06717180  0.0050441023
# 3  -0.7052121 0.08568746  0.03964437  0.0003167443
sapply in this case creates a column of coefficients for each model. Since we want the models to be the rows instead of the columns, we use t to transpose the result.
The sapply approach in @josliber's answer is reasonable, but I would tend to prefer to leave the results in a list and combine them afterward. The principle is that the simplification sapply performs is a convenience only -- if it is not convenient, don't use it. Just combine the results in whatever way makes sense for your specific situation. This principle leads to the following:
do.call(rbind, lapply( InitLOogModel, coef))
I know that coef() returns a vector here, and since each model has the same coefficients, it makes sense to rbind them. Notice that I avoid taking the summary of each model, since that doesn't produce anything needed for the result we want to achieve.
Of course do.call(rbind, ...) returns a matrix instead of a data.frame. If a data.frame is desired, the matrix can be converted with as.data.frame:
as.data.frame(do.call(rbind, lapply(InitLOogModel, coef)))
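If you also want the Model index column and column names shown in the desired outDF, you can tack them on afterwards:
outDF <- as.data.frame(do.call(rbind, lapply(InitLOogModel, coef)))
names(outDF)[names(outDF) == "(Intercept)"] <- "Intercept"
outDF <- cbind(Model = seq_len(nrow(outDF)), outDF)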
EDIT:
Inspired by @jake-kaupp's answer, here is how I would do it in the tidyverse.
The combining of coefficients looks very similar to the base R approach above:
library(tidyverse)

map(InitLOogModel, coef) %>%
  reduce(rbind)
The for-loop used to construct the list of models can be replaced with
library(modelr)

smp_sizelogis <- floor(0.8 * nrow(dat))
rows <- seq_len(nrow(dat))

rerun(3, dat %>%
        resample(sample(rows, size = smp_sizelogis))) %>%
  map(function(x) glm(resp ~ ., family = binomial, data = x))
Putting the whole thing together gives us
smp_sizelogis <- floor(0.8 * nrow(dat))
rows <- seq_len(nrow(dat))

rerun(3, dat %>%
        resample(sample(rows, size = smp_sizelogis))) %>%
  map(function(x) glm(resp ~ ., family = binomial, data = x)) %>%
  map(coef) %>%
  reduce(rbind)
The main advantages over @jake-kaupp's answer are that a) we don't compute things we don't need, and b) we never stuff the results into a data.frame, so we never have to think about how to get the pieces we want back out.
You could also use a tidyverse solution, which I personally find produces easier-to-read code, at the expense of using more packages.
EDIT: While @Ista may be right about the nested-data-frame approach appearing complex, it has the appeal of keeping the full steps of the analysis, from data to model to model details. This approach doesn't compute anything extra; it simply manipulates the data into the requested result.
I also prefer it to keeping things in lists of data frames, as I find it makes things easier to access for downstream work. It boils down to preference and how well each method fits your workflow.
library(tidyverse)
library(broom)

smp_sizelogis <- floor(0.8 * nrow(dat))
rows <- seq_len(nrow(dat))

analysis <- rerun(3, resample(dat, sample(rows, size = smp_sizelogis))) %>%
  tibble(data = .) %>%
  add_rownames("model_number") %>%
  mutate(model = map(data, ~glm('resp ~ .', family = binomial, data = .))) %>%
  mutate(coefs = map(model, tidy))

analysis %>%
  unnest(coefs) %>%
  select(model_number, term, estimate) %>%
  spread(term, estimate) %>%
  select(-`(Intercept)`)
# A tibble: 3 × 4
  model_number          x1         x2          x3
*        <chr>       <dbl>      <dbl>       <dbl>
1            1 -0.08160034 0.03156254 0.008613346
2            2 -0.04740939 0.04084883 0.004282003
3            3 -0.05980735 0.01625652 0.002075468

Using neural networks neuralnet in R to predict factor values

I am using the neuralnet package, with several inputs to predict an output.
Originally, my output is a factor variable, and I saw the error:
Error in neurons[[i]] %*% weights[[i]] :
requires numeric/complex matrix/vector arguments
When I converted the output to a numeric variable, the error disappeared. Is there a way to fit a neural network with a factor output?
I adapted code that I found at this site, which uses the iris dataset with the neuralnet package to predict iris species from the morphological data.
Without a reproducible example, I'm not sure if this applies to your case. The key here was to convert each level of the factor response into its own binary variable. The prediction step is also a bit different from other models in R: you choose the factor level with the highest score.
Example code:
library(neuralnet)

# Make training and validation data
set.seed(1)
train <- sample(nrow(iris), nrow(iris) * 0.5)
valid <- seq(nrow(iris))[-train]
iristrain <- iris[train, ]
irisvalid <- iris[valid, ]

# Binarize the categorical output
iristrain <- cbind(iristrain, iristrain$Species == 'setosa')
iristrain <- cbind(iristrain, iristrain$Species == 'versicolor')
iristrain <- cbind(iristrain, iristrain$Species == 'virginica')
names(iristrain)[6:8] <- c('setosa', 'versicolor', 'virginica')

# Fit model
nn <- neuralnet(
  setosa + versicolor + virginica ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
  data = iristrain,
  hidden = c(3)
)
plot(nn)

# Predict
comp <- compute(nn, irisvalid[-5])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('setosa', 'versicolor', 'virginica')[idx]
table(pred, irisvalid$Species)
# pred         setosa versicolor virginica
#   setosa         23          0         0
#   versicolor      1         21         7
#   virginica       0          1        22
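To turn that table into an overall validation accuracy:
mean(pred == as.character(irisvalid$Species))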
This might raise warnings:
nn <- neuralnet(
  setosa + versicolor + virginica ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
  data = iristrain,
  hidden = c(3)
)
So replace it with:
nn <- neuralnet(
  setosa + versicolor + virginica ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
  data = iristrain, hidden = 3, lifesign = "full")
If this does not work:
comp <- compute(nn, irisvalid[-5])
then use
comp <- neuralnet::compute(nn, irisvalid[,1:4])
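As an aside, the manual binarization above can be written more compactly with nnet::class.ind, assuming you don't mind pulling in the nnet package:
library(nnet)
# adds setosa / versicolor / virginica indicator columns in one step
iristrain <- cbind(iristrain, class.ind(iristrain$Species))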
