Random Forest prediction intervals using a categorical response variable - r

I'm trying to get prediction intervals from a random forest model that has a categorical response variable. Ideally, I would like to see how confident the model is when classifying an observation into a given response category.
On the last line of my code you'll see a predict() call that works when the interval = argument is not included. When I include interval =, I get an error. Any idea how to get prediction intervals for the output?
# Load libraries
library(data.table)
library(randomForest)
library(caret)
# Set seed
set.seed(123)
# Load the necessary data
df.0 <- diamonds
setDT(df.0)
# set up the cross-validation parameters
control <- trainControl(method = "repeatedcv",
                        number = 10)
metric <- "Accuracy"
mtry <- seq(from = 1,
            to = length(unique(df.0$cut)),
            by = 1)
tunegrid <- expand.grid(mtry = mtry)
# Add rownames so we can use as index
df.0[, indexNum := .I]
trainer <- df.0[ ,.SD[sample(x = .N, size = (.N * 0.9))], by = cut] # Pull 90% of each cut into training
tester <- df.0[!trainer, on = c("indexNum")]
# Remove index number
tester <- tester[, ":=" (indexNum = NULL)]
trainer <- trainer[, ":=" (indexNum = NULL)]
# build a model and assess its accuracy via 10-fold cross validation
rf_mod <-
  train(
    x = trainer[, .(x, y, z, depth, table)],
    y = trainer$cut,
    method = "rf",
    metric = "Accuracy",
    tuneGrid = tunegrid
  )
# check out which mtry value was best
plot(rf_mod)
# test the model against the test data
cut_pred <- predict(rf_mod, newdata = tester[, .(x, y, z, depth, table)], interval = "prediction")
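As far as I know, neither randomForest nor caret's "rf" method returns prediction intervals for a classification model. As a hedged sketch (assuming the rf_mod and tester objects defined above), the per-class probabilities from predict() with type = "prob" give a comparable measure of how confident the model is in each cut category:
# Hedged sketch: class probabilities instead of a prediction interval;
# assumes rf_mod and tester from the code above.
cut_prob <- predict(rf_mod, newdata = tester[, .(x, y, z, depth, table)], type = "prob")
head(cut_prob)  # one column of class probability per cut level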

Related

predict() function in R is not providing prediction in R console

My training data has 87620 rows and 5 columns. My test data has the same number of rows and columns. When I use a CART model to predict "Default" (that is the target variable), the model works and provides me with predictions.
When I use a validation data set that has 6 columns and only 19561 rows, and does not have the Default variable, and then run
View(validationsetpreds.CART3.3x)
I get the attached Validationsetpreds picture.
When I perform the same command using the test data set I get the attached Testsetpreds picture.
set.seed(123)
loans_training$Default <- as.factor(loans_training$Default)#Make the default variable categorical
loans_test$Default <- as.factor(loans_test$Default)#Make the default variable categorical
loans_training$term <- as.factor(loans_training$term)
loans_test$term <- as.factor(loans_test$term)
#Standardize datasets
library(psych)
library(caret)
preprocess.train.z <- preProcess(loans_training[1:5], method = c("center", "scale"))
preprocess.train.z
loans_train.z <- predict(preprocess.train.z,loans_training[1:5])
describe(loans_train.z)
View(loans_train.z)
summary(loans_train.z$Default)
preprocess.test.z <- preProcess(loans_test[1:5], method = c("center", "scale"))
preprocess.test.z
loans_test.z <- predict(preprocess.test.z,loans_test[1:5])
describe(loans_test.z)
View(loans_test.z)
summary(loans_train.z$Default)
(22417 * 2.3) + 22417
#Resampling subroutine
rare.record.indices <- which(loans_train.z$Default == "1")
rare.indices.resampled <- sample(x = rare.record.indices,size = 51559, replace = TRUE)
rare.records.resampled <- loans_train.z[rare.indices.resampled,]
loans_train.3.3x <- rbind(loans_train.z, rare.records.resampled)
table(loans_train.3.3x$Default)
#Develop 3.3x CART model
TC <- trainControl(method = "CV", number = 10)
fit.CART.3.3x <- train(Default ~ ., data = loans_train.3.3x, method = "rpart", trControl = TC)
fit.CART.3.3x$resample
testsetpreds.CART3.3x <- predict(fit.CART.3.3x,loans_test.z)
table(loans_test.z$Default, testsetpreds.CART3.3x)
testsetpreds.CART3.3x
#Predictions
set.seed(123)
loans_validation$grade <- as.character(loans_validation$grade)#Make the grade variable categorical
loans_validation$term <- as.factor(loans_validation$term)#Make the term variable categorical
loans_validation$Index <- as.factor(loans_validation$Index)#Make the Index variable categorical
#Standardize dataset
library(psych)
library(caret)
preprocess.validation.z <- preProcess(loans_validation[1:6], method = c("center", "scale"))
preprocess.validation.z
loans_validation.z <- predict(preprocess.validation.z,loans_validation[1:6])
#Predict Defaults using Cart
validationsetpreds.CART3.3x <- predict(fit.CART.3.3x,loans_validation.z)
View(validationsetpreds.CART3.3x)
Any help would be greatly appreciated :)
How would I apply this to the validation data set?
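One thing I would check (a hedged sketch, not a verified fix): the validation set is standardized with a preProcess object fitted on the validation data itself, while the model was trained on data scaled by preprocess.train.z. Assuming the first five columns of loans_validation are the same predictors used in training, reusing the training preProcess object keeps everything on the same scale:
# Hedged sketch: reuse the training preProcess object on the validation set;
# assumes loans_validation[1:5] holds the same predictor columns as loans_training[1:5].
loans_validation.z <- predict(preprocess.train.z, loans_validation[1:5])
validationsetpreds.CART3.3x <- predict(fit.CART.3.3x, loans_validation.z)
head(validationsetpreds.CART3.3x)  # a factor of predicted Default classes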

R: Multiclass Matrices

I am working with the R programming language. I am trying to learn how to make a "confusion matrix" for multiclass variables (e.g. How to construct the confusion matrix for a multi class variable).
Suppose I generate some data and fit a decision tree model :
#load libraries
library(rpart)
library(caret)
#generate data
a <- rnorm(1000, 10, 10)
b <- rnorm(1000, 10, 5)
d <- rnorm(1000, 5, 10)
group_1 <- sample( LETTERS[1:3], 1000, replace=TRUE, prob=c(0.33,0.33,0.34) )
e = data.frame(a,b,d, group_1)
e$group_1 = as.factor(e$group_1)
#split data into train and test set
trainIndex <- createDataPartition(e$group_1, p = .8,
                                  list = FALSE,
                                  times = 1)
training <- e[trainIndex,]
test <- e[-trainIndex,]
fitControl <- trainControl(## 5-fold CV
                           method = "repeatedcv",
                           number = 5,
                           ## repeated once
                           repeats = 1)
#fit decision tree model
TreeFit <- train(group_1 ~ ., data = training,
                 method = "rpart2",
                 trControl = fitControl)
From here, I am able to store the results into a "confusion matrix":
pred <- predict(TreeFit,test)
table_example <- table(pred,test$group_1)
This satisfies my requirements - but this "table" requires me to manually calculate the different accuracy metrics of "A", "B" and "C" (as well as the total accuracy).
My question: Is it possible to use the caret::confusionMatrix() command for this problem?
e.g.
pred <- predict(TreeFit, test, type = "prob")
labels_example <- as.factor(ifelse(pred[,2]>0.5, "1", "0"))
con <- confusionMatrix(labels_example, test$group_1)
This way, I would be able to directly access the accuracy measurements from the confusion matrix. E.g. metric = con$overall[1]
Thanks
Is this what you're looking for?
pred <- predict(TreeFit, test)
con <- confusionMatrix(pred, test$group_1)  # confusionMatrix(data = predictions, reference = truth)
con
con$overall[1]
Same output as in:
table(pred, test$group_1)
Plus accuracy metrics.
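The per-class metrics for A, B and C are also in the returned object, so nothing has to be computed by hand:
con$byClass  # per-class sensitivity, specificity, balanced accuracy, ...
con$table    # the underlying confusion matrix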

How to find the optimal value for K in K-nearest neighbors using R?

My dataset contains 5851 observations, and is split into a train (3511 observations) and test (2340 observations) set. I now want to train a model using KNN, with two variables. I want to do 10-fold CV, repeated 5 times, using the ROC metric and the one-standard-error rule, with the variables preprocessed. The code is shown below.
set.seed(44780)
ctrl_repcvSE <- trainControl(method = "repeatedcv", number = 10, repeats = 5,
                             summaryFunction = twoClassSummary, classProbs = TRUE,
                             selectionFunction = "oneSE")
tune_grid <- expand.grid(k = 45:75)
mod4 <- train(purchased ~ total_policies + total_contrib,
              data = mhomes_train, method = "knn",
              trControl = ctrl_repcvSE, metric = "ROC",
              tuneGrid = tune_grid, preProcess = c("center", "scale"))
The problem I have is that I have already tried many different values of K (e.g. K = 10:20, 30:40, 50:60, 150:160, as well as different tuning lengths). Every time, the chosen value for K is the last one in the range; for example, with K = 70:80 the chosen value is K = 80. That tells me I should look further, because if the chosen value sits at the upper boundary there are probably better values of K above 80. How do I eventually find the optimal one?
The assignment only specifies: For k-nearest neighbours, explore reasonable values of k using the total_policies and total_contrib variables only.
Welcome to Stack Overflow. Your question isn't easy to answer.
For k-nearest neighbours I use another function, knn3, which is part of the caret package.
I'll give an example using the iris dataset. We try to get the accuracy of our model for different values of k and plot those accuracies.
library(data.table)
library(tidyverse)
library(scales)
library(caret)
dt <- as.data.table(iris)
# converting and scaling data ----
dt$Species <- dt$Species %>% as.factor()
dt$Sepal.Length <- dt$Sepal.Length %>% scale()
dt$Sepal.Width <- dt$Sepal.Width %>% scale()
dt$Petal.Length <- dt$Petal.Length %>% scale()
dt$Petal.Width <- dt$Petal.Width %>% scale()
# remove in the real run ----
set.seed(1234567)
# split data into train and test - 3:1 ----
train_index <- createDataPartition(dt$Species, p = 0.75, list = FALSE)
train <- dt[train_index, ]
test <- dt[-train_index, ]
# values to check for k ----
K_VALUES <- 20:1
test_acc <- numeric(0)
train_acc <- numeric(0)
# calculate different models for each value of k ----
for (x in K_VALUES){
  model <- knn3(Species ~ ., data = train, k = x)
  pred_test <- predict(model, test, type = "class")
  pred_test_acc <- confusionMatrix(table(pred_test,
                                         test$Species))$overall["Accuracy"]
  test_acc <- c(test_acc, pred_test_acc)
  pred_train <- predict(model, train, type = "class")
  pred_train_acc <- confusionMatrix(table(pred_train,
                                          train$Species))$overall["Accuracy"]
  train_acc <- c(train_acc, pred_train_acc)
}
data <- data.table(x = K_VALUES, train = train_acc, test = test_acc)
# plot a validation curve ----
plot_data <- gather(data, "type", "value", -x)
g <- qplot(x = x,
           y = value,
           data = plot_data,
           color = type,
           geom = "path",
           xlim = c(max(K_VALUES), min(K_VALUES) - 1))
print(g)
Now find a k with a good accuracy for your test data. That's the value you're looking for.
Disclaimer: this is simplified, but the approach should help you solve your problem.
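Staying within caret::train as in the question, a hedged follow-up sketch (assuming mhomes_train and the ctrl_repcvSE object defined above): widen the tuning grid until the resampled ROC profile clearly peaks and then declines, and read the selected k from the fit instead of from the boundary of the grid.
# Hedged sketch: a wider grid plus the resampling profile; assumes mhomes_train
# and ctrl_repcvSE from the question.
tune_grid_wide <- expand.grid(k = seq(5, 305, by = 10))
mod_wide <- train(purchased ~ total_policies + total_contrib,
                  data = mhomes_train, method = "knn",
                  trControl = ctrl_repcvSE, metric = "ROC",
                  tuneGrid = tune_grid_wide, preProcess = c("center", "scale"))
plot(mod_wide)     # ROC versus k; look for the peak, not the grid boundary
mod_wide$bestTune  # the k selected under the oneSE rule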

Metric Accuracy not applicable for regression models

I am trying to investigate my model in R with machine learning. In general, training the model does not work well.
# # Logistic regression multiclass
for (i in 1:30) {
  # split data into training/test
  trainPhyIndex <- createDataPartition(subs_phy$Methane, p = 10/17, list = FALSE)
  trainingPhy <- subs_phy[trainPhyIndex,]
  testingPhy <- subs_phy[-trainPhyIndex,]
  # Pre-process predictor values
  trainXphy <- trainingPhy[, names(trainingPhy) != "Methane"]
  preProcValuesPhy <- preProcess(x = trainXphy, method = c("center", "scale"))
  # repeated cross-validation to avoid over-fitting
  fitControlPhyGLMNET <- trainControl(method = "repeatedcv",
                                      number = 10,
                                      repeats = 4,
                                      savePredictions = "final",
                                      classProbs = TRUE)
  fit_glmnet_phy <- train(Methane ~ .,
                          trainingPhy,
                          method = "glmnet",
                          tuneGrid = expand.grid(
                            .alpha = 0.1,
                            .lambda = 0.00023),
                          metric = "Accuracy",
                          trControl = fitControlPhyGLMNET)
  pred_glmnet_phy <- predict(fit_glmnet_phy, testingPhy)
  # Get the confusion matrix to see accuracy value
  u <- union(pred_glmnet_phy, testingPhy$Methane)
  t <- table(factor(pred_glmnet_phy, u), factor(testingPhy$Methane, u))
  accu_glmnet_phy <- confusionMatrix(t)
  # accu_glmnet_phy <- confusionMatrix(pred_glmnet_phy, testingPhy$Methane)
  glmnetstatsPhy[(nrow(glmnetstatsPhy) + 1),] = accu_glmnet_phy$overall
}
glmnetstatsPhy
glmnetstatsPhy
The program always stops at the fit_glmnet_phy <- train(Methane ~ ., ...) command and shows:
Metric Accuracy not applicable for regression models
I have no idea what causes this error.
I also attached the type of Methane.
Try normalizing the input columns and converting the output column to a factor. This helped me resolve a similar issue.
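A hedged sketch of the second part (the error usually means caret sees a numeric outcome and therefore fits a regression); assuming subs_phy$Methane actually holds class labels stored as numbers, converting it to a factor with valid R level names makes train() treat the problem as classification:
# Hedged sketch: convert the outcome to a factor before calling train();
# make.names() is used because classProbs = TRUE needs syntactically valid level names.
subs_phy$Methane <- factor(make.names(subs_phy$Methane))
levels(subs_phy$Methane)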

R caret: leave subject out cross validation with data subset for training?

I want to perform leave-subject-out cross-validation with R caret (cf. this example) but only use a subset of the data in training for creating the CV models. Still, the left-out CV partition should be used as a whole, as I need to test on all data of a left-out subject (no matter if it's millions of samples that cannot be used in training due to computational restrictions).
I've created a minimal 2-class classification example using the subset and index parameters of caret::train and caret::trainControl to achieve this. From my observation this should solve the problem, but I have a hard time actually ensuring that the evaluation is still done in a leave-subject-out way. Maybe someone with experience in this task could shed some light on this:
library(plyr)
library(caret)
library(pROC)
library(ggplot2)
# with diamonds we want to predict cut and look at results for different colors = subjects
d <- diamonds
d <- d[d$cut %in% c('Premium', 'Ideal'),] # make a 2 class problem
d$cut <- factor(d$cut)
indexes_data <- c(1,5,6,8:10)
indexes_labels <- 2
# population independent CV indexes for trainControl
index <- llply(unique(d[,3]), function(cls) c(which(d[,3]!=cls)))
names(index) <- paste0('sub_', unique(d[,3]))
str(index) # indexes used for training models with CV = OK
m3 <- train(x = d[,indexes_data],
            y = d[,indexes_labels],
            method = 'glm',
            metric = 'ROC',
            subset = sample(nrow(d), 5000), # does this subset the data used for training and obtaining models, but not the left out partition used for estimating CV performance?
            trControl = trainControl(returnResamp = 'final',
                                     savePredictions = T,
                                     classProbs = T,
                                     summaryFunction = twoClassSummary,
                                     index = index))
str(m3$resample) # all samples used once = OK
# performance over all subjects
myRoc <- roc(predictor = m3$pred[,3], response = m3$pred$obs)
plot(myRoc, main = 'all')
# performance for individual subjects
l_ply(unique(m3$pred$Resample), .fun = function(cls) {
  pred_sub <- m3$pred[m3$pred$Resample == cls,]
  myRoc <- roc(predictor = pred_sub[,3], response = pred_sub$obs)
  plot(myRoc, main = cls)
})
Thanks for your time!
Using both the index and indexOut parameters in caret::trainControl at the same time seems to do the trick (thanks to Max for the hint in this question). Here is the updated code:
library(plyr)
library(caret)
library(pROC)
library(ggplot2)
str(diamonds)
# with diamonds we want to predict cut and look at results for different colors = subjects
d <- diamonds
d <- d[d$cut %in% c('Premium', 'Ideal'),] # make a 2 class problem
d$cut <- factor(d$cut)
indexes_data <- c(1,5,6,8:10)
indexes_labels <- 2
# population independent CV partitions for training and left out partitions for evaluation
indexes_populationIndependence_subjects <- 3
index <- llply(unique(d[,indexes_populationIndependence_subjects]), function(cls) c(which(d[,indexes_populationIndependence_subjects]!=cls)))
names(index) <- paste0('sub_', unique(d[,indexes_populationIndependence_subjects]))
indexOut <- llply(index, function(part) (1:nrow(d))[-part])
names(indexOut) <- paste0('sub_', unique(d[,indexes_populationIndependence_subjects]))
# subsample partitions for training
index <- llply(index, function(i) sample(i, 1000))
m3 <- train(x = d[,indexes_data],
            y = d[,indexes_labels],
            method = 'glm',
            metric = 'ROC',
            trControl = trainControl(returnResamp = 'final',
                                     savePredictions = T,
                                     classProbs = T,
                                     summaryFunction = twoClassSummary,
                                     index = index,
                                     indexOut = indexOut))
m3$resample # seems OK
str(m3$pred) # seems OK
myRoc <- roc(predictor = m3$pred[,3], response = m3$pred$obs)
plot(myRoc, main = 'all')
# analyze results per subject
l_ply(unique(m3$pred$Resample), .fun = function(cls) {
  pred_sub <- m3$pred[m3$pred$Resample == cls,]
  myRoc <- roc(predictor = pred_sub[,3], response = pred_sub$obs)
  plot(myRoc, main = cls)
})
Still, I'm not absolutely sure whether this actually does the estimation in a population-independent way, so if anybody knows the details, please share your thoughts!
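One way to reassure yourself (a hedged check, assuming column 3 of d is the subject/color as in the code above): with savePredictions = TRUE, caret stores the row indices behind each resample's held-out predictions, so you can confirm that every resample covers exactly one left-out subject.
# Hedged check: each resample's held-out predictions should come from a single color.
held_out <- sapply(unique(m3$pred$Resample), function(s) {
  rows <- m3$pred$rowIndex[m3$pred$Resample == s]
  paste(unique(as.character(d$color[rows])), collapse = ", ")
})
held_out  # expect one distinct color per resample, matching the sub_* names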
