Using neural networks (neuralnet) in R to predict factor values

I am using the neuralnet package with several inputs to predict an output.
Originally my output was a factor variable, and I saw the error:
Error in neurons[[i]] %*% weights[[i]] :
  requires numeric/complex matrix/vector arguments
When I converted the output to a numeric variable, the error disappeared. Is there a way to fit a neural network with a factor output?

I adapted code that I found at this site, which uses the iris dataset with the neuralnet package to predict iris species from the morphological measurements.
Without a reproducible example, I'm not sure whether this applies to your case. The key here is to convert each level of the factor response into its own binary variable. Prediction is a bit different from other models in R: you choose the factor level with the highest score.
Example code:
library(neuralnet)

# Make training and validation data
set.seed(1)
train <- sample(nrow(iris), nrow(iris) * 0.5)
valid <- seq(nrow(iris))[-train]
iristrain <- iris[train, ]
irisvalid <- iris[valid, ]

# Binarize the categorical output
iristrain <- cbind(iristrain, iristrain$Species == 'setosa')
iristrain <- cbind(iristrain, iristrain$Species == 'versicolor')
iristrain <- cbind(iristrain, iristrain$Species == 'virginica')
names(iristrain)[6:8] <- c('setosa', 'versicolor', 'virginica')

# Fit model
nn <- neuralnet(
  setosa + versicolor + virginica ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
  data = iristrain,
  hidden = c(3)
)
plot(nn)

# Predict: take the factor level with the highest score
comp <- compute(nn, irisvalid[-5])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('setosa', 'versicolor', 'virginica')[idx]
table(pred, irisvalid$Species)
#pred         setosa versicolor virginica
#  setosa         23          0         0
#  versicolor      1         21         7
#  virginica       0          1        22
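Incidentally, the three cbind() calls above can be collapsed into one step with base R's model.matrix(), which one-hot encodes a factor directly (a sketch; the `- 1` drops the intercept so every level gets its own 0/1 column):

```r
# One-hot encode Species; '- 1' removes the intercept so each of the
# three levels becomes its own 0/1 indicator column.
onehot <- model.matrix(~ Species - 1, data = iris)
colnames(onehot) <- sub("^Species", "", colnames(onehot))  # drop the 'Species' prefix
iris_encoded <- cbind(iris, onehot)
```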

This might raise warnings:
nn <- neuralnet(
  setosa + versicolor + virginica ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
  data = iristrain,
  hidden = c(3)
)
So replace it with:
nn <- neuralnet(
  setosa + versicolor + virginica ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
  data = iristrain, hidden = 3, lifesign = "full")
If this does not work:
comp <- compute(nn, irisvalid[-5])
then use
comp <- neuralnet::compute(nn, irisvalid[,1:4])

Related

Auto VIF (Variable importance factor) for variable analysis

I want to calculate an "auto" VIF for each variable, one versus the others. For example, in the iris dataset I want Sepal.Width to act as the target value and the others as explanatory variables.
First I remove the Species column so that only the numeric variables remain. Then I want to loop over each variable and test it against the others. Finally I want the VIF results to be stored in a list.
This is what I have tried:
library(car)
library(dplyr)
library(tidyr)

iris_clean <- iris %>%
  select(-Species)
col_names <- colnames(iris)

i <- 1
for (col in col_names) {
  regr <- lm(col ~ ., data = iris_clean)
  list[i] <- vif(regr)
  i <- i + 1
}
For some reason I get an error:
Error in model.frame.default(formula = col ~ ., data = iris_clean, drop.unused.levels = TRUE) :
  variable lengths differ (found for 'Sepal.Length')
which I don't understand, because the variables all have the same length. Any help will be greatly appreciated.
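For what it's worth, the error happens because col is a character string, not a column: lm(col ~ ., ...) tries to use the length-1 vector col as the response against the 150-row predictors, hence "variable lengths differ". Building a real formula from the string avoids this; a minimal base-R sketch:

```r
# 'col' is just a string; turn it into a formula before calling lm()
col <- "Sepal.Width"
f <- reformulate(".", response = col)  # builds Sepal.Width ~ .
fit <- lm(f, data = iris[, 1:4])       # numeric columns only
```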
You can try this -
library(car)
library(dplyr)

iris_clean <- iris %>% select(-Species)
col_names <- colnames(iris_clean)

result <- vector('list', length(col_names))
for (i in seq_along(col_names)) {
  regr <- lm(paste0(col_names[i], ' ~ .'), data = iris_clean)
  result[[i]] <- vif(regr)
}
result
#[[1]]
# Sepal.Width Petal.Length  Petal.Width
#    1.270815    15.097572    14.234335
#
#[[2]]
#Sepal.Length Petal.Length  Petal.Width
#    4.278282    19.426391    14.089441
#
#[[3]]
#Sepal.Length  Sepal.Width  Petal.Width
#    3.415733     1.305515     3.889961
#
#[[4]]
#Sepal.Length  Sepal.Width Petal.Length
#    6.256954     1.839639     7.557780
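As a base-R cross-check (no car needed): the VIFs of a set of predictors equal the diagonal of the inverse of their correlation matrix, which is the same as 1/(1 - R²) from regressing each predictor on the others. A sketch for the first list entry (target Sepal.Length, i.e. predictors Sepal.Width, Petal.Length, Petal.Width):

```r
# VIFs are the diagonal of the inverse correlation matrix of the predictors
X <- iris[, c("Sepal.Width", "Petal.Length", "Petal.Width")]
vifs <- diag(solve(cor(X)))
vifs  # should match result[[1]] from the loop above
```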

How would I get the pattern of errors on test items for a logistic regression model?

I am trying to analyse the pattern of errors (accuracy) on test items for the model I coded below. I would like to find out how often setosa and versicolor species of iris are incorrectly classified as virginica, and how often virginica species of iris are incorrectly classified as not virginica. Could this be done? Any suggestions would be great. Here are my logistic regression model and a classifier built from it...
library(datasets)

iris$dummy_virginica_iris <- 0
iris$dummy_virginica_iris[iris$Species == 'virginica'] <- 1
iris$dummy_virginica_iris

# Logistic regression model.
glm <- glm(dummy_virginica_iris ~ Petal.Width + Sepal.Width,
           data = iris,
           family = 'binomial')
summary(glm)

# Classifier.
glm.pred <- predict(glm, type = "response")
virginica <- ifelse(glm.pred > .5, TRUE, FALSE)
You can create a new vector to separate the flowers into virginica / non-virginica like this:
species <- as.character(iris$Species)
species[species != "virginica"] <- "non-virginica"
Then you can just tabulate this against your model's predictions as a 2 x 2 contingency table:
result <- table(virginica, species)
print(result)
#          species
# virginica non-virginica virginica
#     FALSE            96         3
#     TRUE              4        47
Which allows for easy calculations of sensitivity, specificity and accuracy of your model like this:
sensitivity <- result[2, 2] / sum(result[, 2])
specificity <- result[1, 1] / sum(result[, 1])
accuracy <- (result[1, 1] + result[2, 2]) / sum(result)
sensitivity
# [1] 0.94
specificity
# [1] 0.96
accuracy
# [1] 0.9533333
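To answer the per-species part of the question directly (which species the false positives come from), you can also cross-tabulate the predictions against the original three-level Species factor rather than the collapsed one. A self-contained sketch rebuilding the model from the question:

```r
# Rebuild the binary response and logistic model from the question
iris$dummy_virginica_iris <- as.integer(iris$Species == "virginica")
fit <- glm(dummy_virginica_iris ~ Petal.Width + Sepal.Width,
           data = iris, family = "binomial")
virginica <- predict(fit, type = "response") > 0.5

# Rows: predicted virginica or not; columns: the true species,
# so misclassified setosa and versicolor show up separately
conf <- table(predicted = virginica, actual = iris$Species)
conf
```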

Extracting a list of R2 from within lm() based on variable in multiple regression in R

I've performed a multiple regression analysis on a dataset in R using lm() and I am able to extract the coefficients for each day of year using the function below. I would also like to extract the R2 for each day of year but this doesn't seem to work in the same way.
This is pretty much the same question as:
Print R-squared for all of the models fit with lmList
but when I try this I get 'Error: $ operator is invalid for atomic vectors'. I would also like to include it in the same function if possible. How can I extract the R2 for each doy in this way?
#Create MR function for extracting coefficients
getCoef <- function(df) {
  coefs <- lm(y ~ T + P + L + T * L + P * L, data = df)$coef
  names(coefs) <- c("intercept", "T", "P", "L", "T_L", "P_L")
  coefs
}

#Extract coefficients for each doy
coefs.MR_uM <- ddply(MR_uM, ~ doy, getCoef)
The point is that r.squared is stored in summary(lm(...)), not in lm(...). Here is a version of your function that also extracts R2:
library(plyr)
df <- iris

#Create MR function for extracting coefficients and R2
getCoef <- function(df) {
  model <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = df)
  coefs <- model$coef
  names(coefs) <- c("intercept", "Sepal.Width", "Petal.Length", "Petal.Width")
  R2 <- summary(model)$r.squared
  names(R2) <- c("R2")
  c(coefs, R2)
}

#Extract coefficients and R2 for each Species
coefs.MR_uM <- ddply(df, ~ Species, getCoef)
coefs.MR_uM # output
#     Species intercept Sepal.Width Petal.Length Petal.Width        R2
#1     setosa  2.351890   0.6548350    0.2375602   0.2521257 0.5751375
#2 versicolor  1.895540   0.3868576    0.9083370  -0.6792238 0.6050314
#3  virginica  0.699883   0.3303370    0.9455356  -0.1697527 0.7652193
As suggested by Parfait, you don't need plyr::ddply(); you can use do.call(rbind, by(df, df$Species, getCoef)).
Hope this helps!
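A minimal sketch of that plyr-free alternative (same model as above, base R only; by() returns a list of per-group vectors which do.call(rbind, ...) stacks into a matrix):

```r
# Per-group coefficients plus R2 without plyr
getCoefR2 <- function(df) {
  model <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = df)
  c(coef(model), R2 = summary(model)$r.squared)
}
out <- do.call(rbind, by(iris, iris$Species, getCoefR2))
out  # one row per Species
```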

Linear regression with `lm()`: prediction interval for aggregated predicted values

I'm using predict.lm(fit, newdata=newdata, interval="prediction") to get predictions and their prediction intervals (PI) for new observations. Now I would like to aggregate (sum and mean) these predictions and their PI's based on an additional variable (i.e. a spatial aggregation on the zip code level of predictions for single households).
I learned from StackExchange that you cannot aggregate the prediction intervals of single predictions just by aggregating the limits of those intervals. The linked post is very helpful for understanding why this can't be done, but I have a hard time translating that insight into actual code.
Here's a reproducible example:
library(dplyr)

set.seed(123)
data(iris)

#Split dataset into training and prediction sets
smp_size <- floor(0.75 * nrow(iris))
train_ind <- sample(seq_len(nrow(iris)), size = smp_size)
train <- iris[train_ind, ]
pred <- iris[-train_ind, ]

#Fit simple linear regression model
fit1 <- lm(Petal.Width ~ Petal.Length, data = train)
#Fit multiple linear regression model
fit2 <- lm(Petal.Width ~ Petal.Length + Sepal.Width + Sepal.Length, data = train)

#Predict Petal.Width for new data incl. prediction intervals for each prediction
predictions1 <- predict(fit1, newdata = pred, interval = "prediction")
predictions2 <- predict(fit2, newdata = pred, interval = "prediction")

#Aggregate data by summing predictions per Species
#NOT correct for prediction intervals
predictions_agg1 <- data.frame(predictions1, Species = pred$Species) %>%
  group_by(Species) %>%
  summarise_all(funs(sum, mean))
predictions_agg2 <- data.frame(predictions2, Species = pred$Species) %>%
  group_by(Species) %>%
  summarise_all(funs(sum, mean))
I couldn't find a good tutorial or package describing how to properly aggregate predictions and their PIs in R when using predict.lm(). Is there something out there? I would highly appreciate it if you could point me in the right direction on how to do this in R.
Your question is closely related to a thread I answered 2 years ago: linear model with `lm`: how to get prediction variance of sum of predicted values. It provides an R implementation of Glen_b's answer on Cross Validated. Thanks for quoting that Cross Validated thread; I didn't know it; perhaps I can leave a comment there linking the Stack Overflow thread.
I have polished my original answer, wrapping up line-by-line code cleanly into easy-to-use functions lm_predict and agg_pred. Solving your question is then simplified to applying those functions by group.
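Those two helpers are defined in the linked answer rather than reproduced in this thread. Assuming they follow Glen_b's formulas (aggregate the fitted values with weights w, with variance w'Vw for the CI and an extra sigma^2 * sum(w^2) term for the PI), a minimal base-R sketch of what they might look like (the names and return format are my assumptions, not the linked answer's verbatim code):

```r
# Hypothetical re-implementation of lm_predict / agg_pred (assumed behavior)
lm_predict <- function(lmObject, newdata) {
  Xp <- model.matrix(delete.response(terms(lmObject)), newdata)
  fit <- drop(Xp %*% coef(lmObject))
  V <- Xp %*% vcov(lmObject) %*% t(Xp)  # covariance of the fitted means
  list(fit = fit, V = V, sig2 = sigma(lmObject)^2, df = df.residual(lmObject))
}

agg_pred <- function(w, pred, alpha = 0.95) {
  mu <- sum(w * pred$fit)                      # aggregated prediction
  var_mean <- drop(t(w) %*% pred$V %*% w)      # variance of the aggregated mean
  var_pred <- var_mean + pred$sig2 * sum(w^2)  # plus residual noise, for the PI
  tq <- qt(1 - (1 - alpha) / 2, pred$df)
  list(mean = mu, var = var_mean,
       CI = c(lower = mu - tq * sqrt(var_mean), upper = mu + tq * sqrt(var_mean)),
       PI = c(lower = mu - tq * sqrt(var_pred), upper = mu + tq * sqrt(var_pred)))
}
```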
Consider the iris example in your question, and the second model fit2 for demonstration.
set.seed(123)
data(iris)
#Split dataset in training and prediction set
smp_size <- floor(0.75 * nrow(iris))
train_ind <- sample(seq_len(nrow(iris)), size = smp_size)
train <- iris[train_ind, ]
pred <- iris[-train_ind, ]
#Fit multiple linear regression model
fit2 <- lm(Petal.Width ~ Petal.Length + Sepal.Width + Sepal.Length, data=train)
We split pred by group Species, then apply lm_predict (with diag = FALSE) on all sub data frames.
oo <- lapply(split(pred, pred$Species), lm_predict, lmObject = fit2, diag = FALSE)
To use agg_pred we need to specify a weight vector whose length equals the number of data points. We can determine this by consulting the length of fit in each oo[[i]]:
n <- lengths(lapply(oo, "[[", 1))
#    setosa versicolor  virginica
#        11         13         14
If the aggregation operation is a sum, we do
w <- lapply(n, rep.int, x = 1)
#List of 3
# $ setosa : num [1:11] 1 1 1 1 1 1 1 1 1 1 ...
# $ versicolor: num [1:13] 1 1 1 1 1 1 1 1 1 1 ...
# $ virginica : num [1:14] 1 1 1 1 1 1 1 1 1 1 ...
SUM <- Map(agg_pred, w, oo)
SUM[[1]] ## result for the first group, for example
#$mean
#[1] 2.499728
#
#$var
#[1] 0.1271554
#
#$CI
# lower upper
#1.792908 3.206549
#
#$PI
# lower upper
#0.999764 3.999693
sapply(SUM, "[[", "CI") ## some nice presentation for CI, for example
# setosa versicolor virginica
#lower 1.792908 16.41526 26.55839
#upper 3.206549 17.63953 28.10812
If the aggregation operation is an average, we rescale w by n and call agg_pred:
w <- mapply("/", w, n)
#List of 3
# $ setosa : num [1:11] 0.0909 0.0909 0.0909 0.0909 0.0909 ...
# $ versicolor: num [1:13] 0.0769 0.0769 0.0769 0.0769 0.0769 ...
# $ virginica : num [1:14] 0.0714 0.0714 0.0714 0.0714 0.0714 ...
AVE <- Map(agg_pred, w, oo)
AVE[[2]] ## result for the second group, for example
#$mean
#[1] 1.3098
#
#$var
#[1] 0.0005643196
#
#$CI
# lower upper
#1.262712 1.356887
#
#$PI
# lower upper
#1.189562 1.430037
sapply(AVE, "[[", "PI") ## some nice presentation for PI, for example
# setosa versicolor virginica
#lower 0.09088764 1.189562 1.832255
#upper 0.36360845 1.430037 2.072496
This is great! Thank you so much! There is one thing I forgot to mention: in my actual application I need to sum ~300,000 predictions which would create a full variance-covariance matrix which is about ~700GB in size. Do you have any idea if there is a computationally more efficient way to directly get to the sum of the variance-covariance matrix?
Use the fast_agg_pred function provided in the revision of the original Q & A. Let's start it all over.
set.seed(123)
data(iris)
#Split dataset in training and prediction set
smp_size <- floor(0.75 * nrow(iris))
train_ind <- sample(seq_len(nrow(iris)), size = smp_size)
train <- iris[train_ind, ]
pred <- iris[-train_ind, ]
#Fit multiple linear regression model
fit2 <- lm(Petal.Width ~ Petal.Length + Sepal.Width + Sepal.Length, data=train)
## list of new data
newdatlist <- split(pred, pred$Species)
n <- sapply(newdatlist, nrow)
#setosa versicolor virginica
# 11 13 14
If the aggregation operation is a sum, we do
w <- lapply(n, rep.int, x = 1)
SUM <- mapply(fast_agg_pred, w, newdatlist,
              MoreArgs = list(lmObject = fit2, alpha = 0.95),
              SIMPLIFY = FALSE)
If the aggregation operation is an average, we do
w <- mapply("/", w, n)
AVE <- mapply(fast_agg_pred, w, newdatlist,
              MoreArgs = list(lmObject = fit2, alpha = 0.95),
              SIMPLIFY = FALSE)
Note that we can't use Map in this case, because we need to pass extra arguments to fast_agg_pred; use mapply in this situation, with MoreArgs and SIMPLIFY = FALSE.

Comparing percent change of model coefficients

I am working through step 3 of purposeful model-building from Hosmer and Lemeshow, which suggests comparing the percent change in coefficients between a full model [Iris.mod1] and a reduced model [Iris.mod2]. I would like to automate this step if possible.
Right now I have the following code:
#Make Species a binomial DV
iris <- subset(iris, iris$Species != 'virginica')
iris$Species <- as.numeric(ifelse(iris$Species == 'setosa', 1, 0))

#Build models
Iris.mod1 <- glm(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
                 data = iris, family = binomial())
Iris.mod2 <- glm(Species ~ Sepal.Length + Petal.Length,
                 data = iris, family = binomial())
The dataset I am actually using has about 93 variables and 1.7 million rows. But I am using the iris data just for this example.
#Try to see if any coefficients changed by > 20%
paste(names(which((summary(Iris.mod1)$coefficients[2:
(nrow(summary(Iris.mod1)$coefficients)),1] -
(summary(Iris.mod2)$coefficients[2:
(nrow(summary(Iris.mod2)$coefficients)),1]/
(summary(Iris.mod1)$coefficients[2:nrow(summary(Iris.mod1)$coefficients)),1]
> 0.2 == TRUE)))))
However, this code is full of errors and I am lost in a sea of parentheses.
Is there an efficient way to determine which variables coefficient changed by more than 20%?
Thank you in advance.
The broom package is really nice for making data frames of model coefficients and terms. We can use that to get things in a workable format:
library(broom)
m_list = list(m1 = Iris.mod1, m2 = Iris.mod2)
t_list = lapply(m_list, tidy)
library(dplyr)
library(tidyr)
bind_rows(t_list, .id = "mod") %>%
  select(term, estimate, mod) %>%
  spread(key = mod, value = estimate) %>%
  mutate(p_change = (m2 - m1) / m1 * 100,
         p_change_gt_20 = p_change > 20)
#           term         m1        m2 p_change p_change_gt_20
# 1  (Intercept)  -6.556265 -65.84266 904.2709           TRUE
# 2 Petal.Length -19.053588 -49.04616 157.4117           TRUE
# 3  Petal.Width -25.032928        NA       NA             NA
# 4 Sepal.Length   9.878866  37.56141 280.2199           TRUE
# 5  Sepal.Width   7.417640        NA       NA             NA
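If you'd rather stay in base R, the same comparison can be done directly on the coefficient vectors (a sketch using the models from the question; note that glm() will warn about fitted probabilities of 0 or 1 here because setosa and versicolor are perfectly separable in iris):

```r
# Fit the two models from the question on a fresh copy of iris
iris2 <- subset(iris, Species != 'virginica')
iris2$Species <- as.numeric(iris2$Species == 'setosa')
mod1 <- glm(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
            data = iris2, family = binomial())
mod2 <- glm(Species ~ Sepal.Length + Petal.Length, data = iris2, family = binomial())

# Percent change on the terms the two models share
common <- intersect(names(coef(mod1)), names(coef(mod2)))
p_change <- (coef(mod2)[common] - coef(mod1)[common]) / coef(mod1)[common] * 100
names(which(abs(p_change) > 20))  # coefficients that moved by more than 20%
```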
