I have built a glm model using the R package "caret" and I'd like to assess its performance using RMSE. The two RMSEs computed below are different, and I wonder which one is the real RMSE?
Also, how can I extract each fold (5*5=25 in total) of the training data, test data, and predicted data (based on the optimal tuned parameter) from the model?
library(caret)
data("mtcars")
set.seed(100)
mydata = mtcars[, -c(8,9)]
model_glm <- train(
  hp ~ .,
  data = mydata,
  method = "glm",
  metric = "RMSE",
  preProcess = c('center', 'scale'),
  trControl = trainControl(
    method = "repeatedcv",
    number = 5,
    repeats = 5,
    verboseIter = TRUE
  )
)
GLM.pred = predict(model_glm, subset(mydata, select = -hp))
RMSE(pred = GLM.pred, obs = mydata$hp) # 21.89
model_glm$results$RMSE # 32.16
With the following code, I get:
sqrt(mean((mydata$hp - predict(model_glm)) ^ 2))
[1] 21.89127
This confirms that RMSE(pred = GLM.pred, obs = mydata$hp) is the RMSE of the model's predictions on the full dataset.
Also, you have
model_glm$resample$RMSE
[1] 28.30254 34.69966 25.55273 25.29981 40.78493 31.91056 25.05311 41.83223 26.68105 23.64629 27.98388 25.98827 45.26982 37.28214
[15] 38.13617 31.14513 23.35353 42.05274 34.04761 35.17733 28.28838 35.89639 21.42580 45.17860 29.13998
which is the RMSE of each of the 25 CV resamples. Also, we have
mean(model_glm$resample$RMSE)
32.16515
So, 32.16 is the average RMSE over the 25 CV resamples (the cross-validated estimate of out-of-sample error), while 21.89 is the RMSE on the original dataset (the in-sample fit).
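For the second part of the question (extracting the folds), here is a minimal sketch, not part of the original answer: it refits the model with savePredictions = "final" so the hold-out predictions are stored. The $pred, $control$index and $control$indexOut slots are standard parts of a caret train object, and with repeatedcv the resample labels follow the "Fold1.Rep1" pattern.
# Refit with savePredictions = "final" to keep the hold-out predictions
# of the 25 resamples (5 folds x 5 repeats).
library(caret)
data("mtcars")
set.seed(100)
mydata <- mtcars[, -c(8, 9)]

model_glm <- train(
  hp ~ ., data = mydata,
  method = "glm", metric = "RMSE",
  preProcess = c("center", "scale"),
  trControl = trainControl(method = "repeatedcv", number = 5, repeats = 5,
                           savePredictions = "final")
)

# Hold-out predictions for every resample: columns pred, obs, rowIndex, Resample
head(model_glm$pred)

# Row indices used for training / held out in each of the 25 resamples
str(model_glm$control$index, list.len = 3)
str(model_glm$control$indexOut, list.len = 3)

# Example: rebuild the test data and predictions of one resample
fold1 <- subset(model_glm$pred, Resample == "Fold1.Rep1")
test_fold1 <- mydata[fold1$rowIndex, ]
cbind(test_fold1, predicted_hp = fold1$pred)[1:3, ]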
I need to do a four-fold nested repeated cross-validation to train a model.
I wrote the following code, which has the inner cross-validation, but now I'm struggling to create the outer loop.
fitControl <- trainControl(## 10-fold CV
  method = "repeatedcv",
  number = 10,
  ## repeated five times
  repeats = 5,
  savePredictions = TRUE,
  classProbs = TRUE,
  summaryFunction = twoClassSummary)

model_SVM_P <- train(Group ~ ., data = training_set,
                     method = "svmPoly",
                     trControl = fitControl,
                     verbose = FALSE,
                     tuneLength = 5)
I made an attempt to solve the problem:
ntrain = length(training_set)
train.ext = createFolds(training_set, k = 4, returnTrain = TRUE)
test.ext = lapply(train.ext, function(x) (1:ntrain)[-x])

for (i in 1:4) {
  model_SVM_P <- train(Group ~ ., data = training_set[train.ext[[i]]],
                       method = "svmRadial",
                       trControl = fitControl,
                       verbose = FALSE,
                       tuneLength = 5)
}
But it didn't work.
How can I do this outer loop?
The rsample package implements the outer loop in the nested_cv() function; see its documentation.
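For the outer structure itself, here is a minimal sketch (my addition, not from the vignette): mtcars is only a placeholder for your training_set, and the fold counts mirror the caret setup above (4-fold outer CV repeated 5 times, 10-fold inner CV).
library(rsample)

# Outer loop: 4-fold CV repeated 5 times; inner loop: 10-fold CV of each
# outer analysis (training) set.
results <- nested_cv(
  mtcars,                                  # placeholder; use your training_set
  outside = vfold_cv(v = 4, repeats = 5),
  inside  = vfold_cv(v = 10)
)

results                         # one row per outer split
results$inner_resamples[[1]]    # the 10 inner folds of the first outer split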
To evaluate the models trained by nested_cv, have a look at this vignette, which shows where the "heavy lifting" is done:
# `object` is an `rsplit` object in `results$inner_resamples`
summarize_tune_results <- function(object) {
# Return row-bound tibble that has the 25 bootstrap results
map_df(object$splits, tune_over_cost) %>%
# For each value of the tuning parameter, compute the
# average RMSE which is the inner bootstrap estimate.
group_by(cost) %>%
summarize(mean_RMSE = mean(RMSE, na.rm = TRUE),
n = length(RMSE),
.groups = "drop")
}
tuning_results <- map(results$inner_resamples, summarize_tune_results)
This code applies the tune_over_cost function to every hyperparameter value and every inner split (fold) of the training data; the held-out part of each split is what rsample calls the "assessment" data.
Please check out the vignette for more useful code including parallelization.
I am trying to investigate my model in R with machine learning. In general, training the model does not work well.
# Logistic regression, multiclass
for (i in 1:30) {
  # split data into training/test sets
  trainPhyIndex <- createDataPartition(subs_phy$Methane, p = 10/17, list = FALSE)
  trainingPhy <- subs_phy[trainPhyIndex, ]
  testingPhy <- subs_phy[-trainPhyIndex, ]
  # Pre-process predictor values
  trainXphy <- trainingPhy[, names(trainingPhy) != "Methane"]
  preProcValuesPhy <- preProcess(x = trainXphy, method = c("center", "scale"))
  # repeated CV to guard against over-fitting
  fitControlPhyGLMNET <- trainControl(method = "repeatedcv",
                                      number = 10,
                                      repeats = 4,
                                      savePredictions = "final",
                                      classProbs = TRUE)
  fit_glmnet_phy <- train(Methane ~ .,
                          data = trainingPhy,
                          method = "glmnet",
                          tuneGrid = expand.grid(.alpha = 0.1,
                                                 .lambda = 0.00023),
                          metric = "Accuracy",
                          trControl = fitControlPhyGLMNET)
  pred_glmnet_phy <- predict(fit_glmnet_phy, testingPhy)
  # Get the confusion matrix to see the accuracy value
  u <- union(pred_glmnet_phy, testingPhy$Methane)
  t <- table(factor(pred_glmnet_phy, u), factor(testingPhy$Methane, u))
  accu_glmnet_phy <- confusionMatrix(t)
  # accu_glmnet_phy <- confusionMatrix(pred_glmnet_phy, testingPhy$Methane)
  glmnetstatsPhy[(nrow(glmnetstatsPhy) + 1), ] = accu_glmnet_phy$overall
}
glmnetstatsPhy
The program always stops at the fit_glmnet_phy <- train(Methane ~ ., ...) command and shows:
Metric Accuracy not applicable for regression models
I have no idea what causes this error.
I have also attached a screenshot showing the type of the Methane column.
Try normalizing the input columns and converting the output column to a factor. This helped me resolve a similar issue.
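A minimal sketch of that suggestion (my addition; the data here is a toy stand-in, since subs_phy is not shared): once the outcome is a factor with syntactically valid level names, caret treats the problem as classification and Accuracy becomes a valid metric.
library(caret)
set.seed(1)

# toy stand-in for subs_phy: numeric predictors, outcome stored as numeric codes
subs_phy_toy <- data.frame(x1 = rnorm(60), x2 = rnorm(60),
                           Methane = sample(1:3, 60, replace = TRUE))

# Convert the outcome to a factor with valid level names (required when
# classProbs = TRUE); left as numeric, train() runs a regression and
# rejects metric = "Accuracy".
subs_phy_toy$Methane <- factor(make.names(subs_phy_toy$Methane))

fit <- train(Methane ~ ., data = subs_phy_toy,
             method = "glmnet",
             preProcess = c("center", "scale"),   # normalize the inputs
             metric = "Accuracy",
             trControl = trainControl(method = "cv", number = 5,
                                      classProbs = TRUE))
fit$results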
I am doing a stack of models in R as follows:
ctrl <- trainControl(method="repeatedcv", number=5, repeats=3, returnResamp="final", savePredictions="final", classProbs=TRUE, selectionFunction="oneSE", verboseIter=TRUE)
models_stack <- caretStack(
model_list,
data=train_data,
tuneLength=10,
method="glmnet",
metric="ROC",
trControl=ctrl
)
1) Why am I seeing the following error? What can I do? I am stuck now.
Timing stopped at: 0.89 0.005 0.91
Error in (function (x, y, family = c("gaussian", "binomial", "poisson", : unused argument (data = list(c(-0.00891097103286995, 0.455282701499392, 0.278236211515583, 0.532932725880776, 0.511036607368827, 0.688757947257125, -0.560727863490874, -0.21768155316146, 0.642219917023467, 0.220363129901216, 0.591732278371339, 1.02850020403572, -1.02417799431585, 0.806359545011601, -1.21490317454699, -0.671361009441299, 0.927344615788642, -0.10449847318776, 0.595493217624868, -1.05586363903119, -0.138457794869817, -1.026253562838, -1.38264471633224, -1.32900800143341, 0.0383617314263342, -0.82222313323842, -0.644251885665736, -0.174126438952992, 0.323934240274895, -0.124613523895458, 0.299359713721601, -0.723599218327519, -0.156528054435544, -0.76193093842169, 0.863217455799044, -1.01340448660914, -0.314365383747751, 1.19150804114605, 0.314703439577839, 1.55580594654149, -0.582911462615421, -0.515291378382375, 0.305142268138296, 0.513989405541095, -1.85093305614114, 0.436468060668601, -2.18997828727424, 1.12838871469007, -1.17619542016998, -0.218175589380355
2) Is there not supposed to be a "data" parameter? If I need to use a different dataset for my level-1 supervisor model, what can I do?
3) Also, I wanted to use AUC/ROC but got these errors:
The metric "AUC" was not in the result set. Accuracy will be used instead.
and
The metric "ROC" was not in the result set. Accuracy will be used instead.
I saw some online examples where ROC was used; is it just not available for this model? What metrics can I use besides Accuracy for this model, and if I want to use ROC, what are my options?
As requested by @RLave, this is how my model_list is built:
grid.xgboost <- expand.grid(.nrounds = c(40, 50, 60), .eta = c(0.2, 0.3, 0.4),
                            .gamma = c(0, 1), .max_depth = c(2, 3, 4),
                            .colsample_bytree = c(0.8), .subsample = c(1),
                            .min_child_weight = c(1))
grid.rf <- expand.grid(.mtry=3:6)
model_list <- caretList(y ~.,
data=train_data_0,
trControl=ctrl,
tuneList=list(
xgbTree=caretModelSpec(method="xgbTree", tuneGrid=grid.xgboost),
rf=caretModelSpec(method="rf", tuneGrid=grid.rf)
)
)
My train_data_0 and train_data both come from the same dataset. The predictors in my dataset are all numeric and the label is binary.
Your post contains three questions:
Why am I seeing the following error? What can I do? I am stuck now.
caretStack is not supposed to have a data parameter; the data it uses is generated from the predictions of the models in the caretList. Take a look at this reproducible example:
library(caret)
library(caretEnsemble)
library(mlbench)
Using the Sonar data set:
data(Sonar)
Create a grid for xgboost hyperparameter tuning:
grid.xgboost <- expand.grid(.nrounds = c(40, 50, 60),
.eta = c(0.2, 0.3, 0.4),
.gamma = c(0, 1),
.max_depth = c(2, 3, 4),
.colsample_bytree = c(0.8),
.subsample = c(1),
.min_child_weight = c(1))
Create a grid for rf tuning:
grid.rf <- expand.grid(.mtry = 3:6)
Create the train control:
ctrl <- trainControl(method="cv",
number=5,
returnResamp = "final",
savePredictions = "final",
classProbs = TRUE,
selectionFunction = "oneSE",
verboseIter = TRUE,
summaryFunction = twoClassSummary)
Tune the models:
model_list <- caretList(Class ~.,
data = Sonar,
trControl = ctrl,
tuneList = list(
xgbTree = caretModelSpec(method="xgbTree",
tuneGrid = grid.xgboost),
rf = caretModelSpec(method = "rf",
tuneGrid = grid.rf))
)
Create the stacked ensemble:
models_stack <- caretStack(
model_list,
tuneLength = 10,
method ="glmnet",
metric = "ROC",
trControl = ctrl
)
2) Is there not supposed to have a "data" parameter? If i need to use a different dataset for my level 1 supervisor model what I can do?
caretStack needs only the predictions from the base models; to create an ensemble of models trained on different data, you must create a new caretList with the appropriate data specified there, as sketched below.
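For instance (a sketch reusing ctrl, grid.xgboost and grid.rf from the example above; other_train_data is a hypothetical second training set with the same Class outcome):
# Train the base models on the other data set, then stack them as before.
model_list_2 <- caretList(Class ~ .,
                          data = other_train_data,   # hypothetical second data set
                          trControl = ctrl,
                          tuneList = list(
                            xgbTree = caretModelSpec(method = "xgbTree",
                                                     tuneGrid = grid.xgboost),
                            rf = caretModelSpec(method = "rf",
                                                tuneGrid = grid.rf)))

models_stack_2 <- caretStack(model_list_2,
                             method = "glmnet",
                             metric = "ROC",
                             trControl = ctrl)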
3) Also I wanted to use AUC/ROC but got these errors
The easiest way to use AUC as the metric is to set summaryFunction = twoClassSummary in trainControl.
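As an illustration using the Sonar data loaded in the example above (fit_roc is my own name; this assumes the glmnet package is installed): with classProbs = TRUE and twoClassSummary, the ROC, Sens and Spec columns appear in the results and metric = "ROC" becomes a valid choice.
ctrl_roc <- trainControl(method = "cv",
                         number = 5,
                         classProbs = TRUE,
                         summaryFunction = twoClassSummary)

fit_roc <- train(Class ~ ., data = Sonar,
                 method = "glmnet",
                 metric = "ROC",          # now a valid choice
                 trControl = ctrl_roc)

fit_roc$results[, c("alpha", "lambda", "ROC", "Sens", "Spec")]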
I want to extend the random forest so that each leaf contains a naive Bayes regression instead of an average. As a first step, I tried to use mob() to fit a linearModel in each node. I got the following error:
Error in root.matrix(crossprod(process)) : matrix is not positive semidefinite
Here is my code:
require(data.table)
require(party)
require(randomForest)  # needed for randomForest() below

set.seed(123)
# car.data has no header row, so read it with header = FALSE
data1 <- read.csv('https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data', header = FALSE)
colnames(data1) <- c("BuyingPrice","Maintenance","NumDoors","NumPersons","BootSpace","Safety","Condition")
# Split into Train and Validation sets
# Training Set : Validation Set = 70 : 30 (random)
set.seed(100)
train <- sample(nrow(data1), 0.7*nrow(data1), replace = FALSE)
TrainSet <- data1[train,]
ValidSet <- data1[-train,]
summary(TrainSet)
summary(ValidSet)
# Create a Random Forest model with default parameters
model1 <- randomForest(Condition ~ ., data = TrainSet, importance = TRUE)
model1
# Fine tuning parameters of Random Forest model
model2 <- randomForest(Condition ~ ., data = TrainSet, ntree = 500, mtry = 6, importance = TRUE)
model2
fmBH <- mob(Condition ~ BuyingPrice + Maintenance | NumDoors+ NumPersons + BootSpace + Safety ,
data = TrainSet, model = linearModel)
I'm building a random forest on some data from work (which I can't share; there are 15k observations), using the caret train function for cross-validation. The accuracy of the model is very low: 0.9%.
Here's the code I used:
set.seed(512)
n <- nrow(my_data)
train_indices <- sample(1:n)
my_folds <- createFolds(train_indices, k=5)
model <- train(ICNumber ~ ., tuneGrid = data.frame(mtry = c(32), min.node.size = 1, splitrule = "gini"),
data = my_data, method = "ranger",
trControl = trainControl(verboseIter = TRUE, savePredictions = T, index=my_folds))
print(model$resample)
--Edit
As Gilles noticed, the fold indices are wrongly constructed and training is done on only 20% of the observations, but even if I fix this by adding returnTrain = T, I'm still getting near-zero accuracy.
--Edit
model$resample produces this:
   Accuracy        Kappa Resample
0.026823683 0.0260175246    Fold1
0.002615234 0.0019433907    Fold2
0.002301118 0.0017644472    Fold3
0.001637733 0.0007026352    Fold4
0.010187315 0.0094986595    Fold5
Now if I do the cross validation by hand like this:
set.seed(512)
n <- nrow(my_data)
train_indices <- sample(1:n)
my_folds <- createFolds(train_indices, k=5)
for (fold in my_folds) {
train_data <- my_data[-fold,]
test_data <- my_data[fold,]
model <- train(ICNumber ~ ., tuneGrid = data.frame(mtry = c(32), min.node.size = 1, splitrule = "gini"),
data = train_data, method = "ranger",
trControl = trainControl(method = "none"))
p <- predict(model, test_data)
e <- ifelse(p == test_data$ICNumber, T, F)
print(sum(e) / nrow(test_data))
}
I get the following accuracy:
[1] 0.743871
[1] 0.7566957
[1] 0.7380645
[1] 0.7390181
[1] 0.7311168
I was expecting to get about the same accuracy values. What am I doing wrong in train? Or is the manual prediction code wrong?
--Edit
Furthermore, this code works well on the Soybean data, and I can reproduce the results from Gilles below.
--Edit
--Edit2
Here are some details about my data:
15493 obs. of 17 variables:
ICNumber is a string with 1531 different values; these are the classes
the other 16 variables are factors with 33 levels
--Edit2
--Edit3
My last experiment was to drop the observations for all classes occurring fewer than 10 times; 12k observations across 396 classes remained. For this dataset, the manual and automatic cross-validation accuracies match...
--Edit3
It was a tricky one ! ;-)
The error comes from a misuse of the index option in trainControl.
According to the help page, index should be:
a list with elements for each resampling iteration. Each list element is a vector of integers corresponding to the rows used for training at that iteration.
In your code you provided the integers corresponding to the rows that should be removed
from the training dataset, instead of providing the integers corresponding to the
rows that should be used...
You can change that by using createFolds(train_indices, k=5, returnTrain = T) instead
of createFolds(train_indices, k=5).
Note also that internally, as far as I know, caret creates folds that are balanced relative
to the classes that you want to predict. So the code should ideally be more like:
createFolds(my_data[train_indices, "Class"], k=5, returnTrain = T), particularly
if the classes are not balanced...
Here is a reproducible example with the Soybean dataset
library(caret)
#> Loading required package: lattice
#> Loading required package: ggplot2
data(Soybean, package = "mlbench")
my_data <- droplevels(na.omit(Soybean))
Your code (the training data here is much smaller than expected; you use only 20% of the data, hence the lower accuracy).
You also get some warnings due to the absence of some classes in the training datasets (because of the class imbalance and the reduced training set).
set.seed(512)
n <- nrow(my_data)
train_indices <- sample(1:n)
my_folds <- createFolds(train_indices, k=5)
model <- train(Class ~ ., tuneGrid = data.frame(mtry = c(32), min.node.size = 1, splitrule = "gini"),
data = my_data, method = "ranger",
trControl = trainControl(verboseIter = F, savePredictions = T,
index=my_folds))
#> Warning: Dropped unused factor level(s) in dependent variable: rhizoctonia-
#> root-rot.
#> Warning: Dropped unused factor level(s) in dependent variable: downy-
#> mildew.
print(model$resample)
#> Accuracy Kappa Resample
#> 1 0.7951002 0.7700909 Fold1
#> 2 0.5846868 0.5400131 Fold2
#> 3 0.8440980 0.8251373 Fold3
#> 4 0.8822222 0.8679453 Fold4
#> 5 0.8444444 0.8263563 Fold5
Corrected code, just with returnTrain = T (here you really use 80% of the data for training...)
set.seed(512)
n <- nrow(my_data)
train_indices <- sample(1:n)
my_folds <- createFolds(train_indices, k=5, returnTrain = T)
model <- train(Class ~ ., tuneGrid = data.frame(mtry = c(32), min.node.size = 1, splitrule = "gini"),
data = my_data, method = "ranger",
trControl = trainControl(verboseIter = F, savePredictions = T,
index=my_folds))
print(model$resample)
#> Accuracy Kappa Resample
#> 1 0.9380531 0.9293371 Fold1
#> 2 0.8750000 0.8583687 Fold2
#> 3 0.9115044 0.9009814 Fold3
#> 4 0.8660714 0.8505205 Fold4
#> 5 0.9107143 0.9003825 Fold5
Compare this to your loop. There are still some small differences, so maybe there is still something that I don't understand.
set.seed(512)
n <- nrow(my_data)
train_indices <- sample(1:n)
my_folds <- createFolds(train_indices, k=5)
for (fold in my_folds) {
train_data <- my_data[-fold,]
test_data <- my_data[fold,]
model <- train(Class ~ ., tuneGrid = data.frame(mtry = c(32), min.node.size = 1, splitrule = "gini"),
data = train_data, method = "ranger",
trControl = trainControl(method = "none"))
p <- predict(model, test_data)
e <- ifelse(p == test_data$Class, T, F)
print(sum(e) / nrow(test_data))
}
#> [1] 0.9380531
#> [1] 0.875
#> [1] 0.9115044
#> [1] 0.875
#> [1] 0.9196429
Created on 2018-03-09 by the reprex package (v0.2.0).
To expand on the excellent answer by Gilles: apart from the mistake in specifying the indexes used for testing and training, to get a fully reproducible model with algorithms that involve a stochastic process, like random forest, you should specify the seeds argument in trainControl. The length of this argument should equal the number of resamples + 1 (for the final model):
library(caret)
library(mlbench)
data(Sonar)
set.seed(512)
n <- nrow(Sonar)
train_indices <- sample(1:n)
my_folds <- createFolds(train_indices, k = 5, returnTrain = T)
model <- train(Class ~ .,
tuneGrid = data.frame(mtry = c(32),
min.node.size = 1,
splitrule = "gini"),
data = Sonar,
method = "ranger",
trControl = trainControl(verboseIter = F,
savePredictions = T,
index = my_folds,
seeds = rep(512, 6))) #this is the important part
model$resample
#output
Accuracy Kappa Resample
1 0.8536585 0.6955446 Fold1
2 0.8095238 0.6190476 Fold2
3 0.8536585 0.6992665 Fold3
4 0.7317073 0.4786127 Fold4
5 0.8372093 0.6681367 Fold5
Now let's do the resampling manually:
for (fold in my_folds) {
train_data <- Sonar[fold,]
test_data <- Sonar[-fold,]
model <- train(Class ~ .,
tuneGrid = data.frame(mtry = c(32),
min.node.size = 1,
splitrule = "gini"),
data = train_data,
method = "ranger",
trControl = trainControl(method = "none",
seeds = 512)) #use the same seeds as above
p <- predict(model, test_data)
e <- ifelse(p == test_data$Class, T, F)
print(sum(e) / nrow(test_data))
}
#output
[1] 0.8536585
[1] 0.8095238
[1] 0.8536585
[1] 0.7317073
[1] 0.8372093
@semicolo If you can reproduce this example on the Sonar data set, but not with your own data, then the problem is in the data set, and any further insight will require investigating the data in question.
It looks like the train function transforms the class column into a factor; in my dataset a lot of the classes (about 20%) have fewer than 4 observations. When splitting the set by hand, the factor is constructed after the split, so every factor value has at least one observation.
But during the automatic cross-validation, the factor is constructed on the full dataset, and after the splits some factor levels have no observations at all. This seems to somehow mess up the accuracy. This probably calls for a separate question; thanks to Gilles and missuse for their help.
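A small base-R illustration of this effect (my own sketch, not from the discussion above):
# A factor built on the full data keeps every level, even ones absent from a
# given resample; a factor built after the split does not.
full <- factor(c("a", "a", "b", "c"))   # levels: a, b, c

split_rows <- c(1, 2, 3)                # a resample that misses class "c"
table(full[split_rows])                 # level "c" still present, with count 0
table(droplevels(full[split_rows]))     # level "c" is gone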