How do I choose the optimal df (degrees of freedom) for my splines?
I used Poisson regression with splines to adjust for non-linear changes. Using the caret package, I used the train function with method = "gamSpline", and it only tested 3 values of df.
model <- train(
  RBC ~ elapsed,
  obgyn_aleph,
  method = "gamSpline",
  trControl = trainControl(
    method = "cv",
    number = 10,
    verboseIter = TRUE
  )
)
Aggregating results
Selecting tuning parameters
Fitting df = 3 on full training set
Is this the default? If so, how can I change it?
Tnx,
Daniel
Yes, by default train tries only tuneLength = 3 candidate values for each tuning parameter. The tuneGrid argument allows the user to specify a custom grid of tuning parameters, in this case df:
model <- train(
  RBC ~ elapsed,
  obgyn_aleph,
  method = "gamSpline",
  trControl = trainControl(
    method = "cv",
    number = 10,
    verboseIter = TRUE
  ),
  tuneGrid = data.frame(df = seq(2, 20, by = 2))
)
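Once the grid search finishes, you can inspect which df was chosen and how the resampled performance varied across the grid (a quick check using standard caret accessors):

model$bestTune   # the df value selected by cross-validation
model$results    # resampled performance for each df in the grid
plot(model)      # performance profile across the tuning grid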
Related
I am trying to build an SVM model using the caret package. After tuning the parameters, how can we build the model using the optimal parameters, so that we don't need to tune them again the next time we use the model? Thanks.
library(caret)
data("mtcars")
set.seed(100)

mydata <- mtcars[, -c(8, 9)]

model_svmr <- train(
  hp ~ .,
  data = mydata,
  tuneLength = 10,
  method = "svmRadial",
  metric = "RMSE",
  preProcess = c('center', 'scale'),
  trControl = trainControl(
    method = "repeatedcv",
    number = 5,
    repeats = 2,
    verboseIter = TRUE
  )
)
model_svmr$bestTune
The results show that sigma = 0.1263203 and C = 4. How can we build an SVM model using these tuned parameters?
From this page in the caret package's documentation:
In cases where the model tuning values are known, train can be used to fit the model to the entire training set without any resampling or parameter tuning. Using the method = "none" option in trainControl can be used.
In your case, that would look like:
library(caret)
data("mtcars")
set.seed(100)

mydata2 <- mtcars[, -c(8, 9)]

model_svmr <- train(
  hp ~ .,
  data = mydata2,
  method = "svmRadial",
  trControl = trainControl(method = "none"),       # telling caret not to re-tune
  tuneGrid = data.frame(sigma = 0.1263203, C = 4)  # specifying the parameters
)
Here we have removed all arguments relating to the tuning, namely tuneLength, metric and preProcess.
Note that plot.train, resamples, confusionMatrix.train and several other functions will not work with this object but predict.train and others will.
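With the tuning values fixed, prediction works as usual through predict.train. A minimal sketch (re-using mydata2 for illustration; in practice newdata would be unseen observations):

preds <- predict(model_svmr, newdata = mydata2)  # dispatches to predict.train
head(preds)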
I am using the e1071 tune function to optimize an SVM model, and I would like to optimize for F1 instead of accuracy. I found on this post, Optimize F-score in e1071 package, that I need to define a new error.fun. The problem is that the function shown in that post was never confirmed as the solution, and it does not work for me. If I knew the variable names for the predictions from each iteration of tune, I could write a function to calculate F1, but I don't know how to get those values. How can I calculate F1 and use it to optimize model parameters with tune in e1071? My code is as follows:
tuned <- tune.svm(PriYN ~ ., data = dataset, kernel = "radial", probability = TRUE,
                  gamma = 10^(-5:-1), cost = 10^(-3:1),
                  tunecontrol = tune.control(cross = 10))
Using {caret}:
ctrl <- trainControl(method = "repeatedcv",        # choose your CV method
                     number = 5,                   # according to CV method
                     repeats = 2,                  # according to CV method
                     summaryFunction = prSummary,  # to tune on the F1 score
                     classProbs = TRUE,
                     verboseIter = TRUE
                     # sampling = "smote"          # you can also try 'smote' resampling
                     )
Then tune your model:
set.seed(2202)

svm_model <- train(target ~ ., data = training,
                   method = "svmRadial",
                   # preProcess = c("center", "scale"),
                   tuneLength = 10,
                   metric = "F",   # the metric used for tuning is the F1 score
                   trControl = ctrl)
svm_model
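After tuning, svm_model$results lists the F score (along with precision and recall) for every candidate sigma/C pair; note that prSummary requires the MLmetrics package. To double-check F1 on held-out data, a sketch assuming a hold-out data frame called testing with the same columns as training:

preds <- predict(svm_model, newdata = testing)
confusionMatrix(preds, testing$target, mode = "prec_recall")  # reports precision, recall and F1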
I am using the random forest and support vector machine methods from the caret package in R. I want to calculate the AUC under the ROC curve in both cases; however, I do not know how to do it in this particular case. My outcome is coded as 0 and 1. Here is an example of the code I am using:
set.seed(123)
cvCtrl <- trainControl(method = "cv", number = 10)
rf_moded <- train(readm30 ~ ., data = train, method = "rf", trControl = cvCtrl)
Do you want to train the model with ROC? Then you need the following:
For trainControl:
control <- trainControl(method = 'cv', number = 10,
                        savePredictions = 'final', classProbs = TRUE,
                        summaryFunction = twoClassSummary)
And in train:
train(
  outcome ~ .,
  data = data,
  method = method,
  trControl = control,
  metric = "ROC"
)
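One caveat: classProbs = TRUE requires the outcome to be a factor whose levels are valid R variable names, so a 0/1 coded outcome needs relabelling first. A minimal sketch using the readm30 outcome from the question:

# relabel 0/1 so that class probabilities (and hence ROC) can be computed
train$readm30 <- factor(train$readm30, levels = c(0, 1), labels = c("No", "Yes"))

rf_mod <- train(readm30 ~ ., data = train, method = "rf",
                trControl = control, metric = "ROC")
rf_mod$results  # the ROC column holds the cross-validated AUC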
I have a model like the following:
library(mlbench)
data(Sonar)
library(caret)
set.seed(998)

my_data <- Sonar

fitControl <-
  trainControl(
    method = "boot632",
    number = 10,
    classProbs = TRUE,
    savePredictions = TRUE,
    summaryFunction = twoClassSummary
  )
model <- train(
  Class ~ .,
  data = my_data,
  method = "xgbTree",
  trControl = fitControl,
  metric = "ROC"
)
How do I plot the ROC curve for this model? As I understand it, the probabilities must be saved (which I did in trainControl), but because of the random sampling that bootstrapping uses to generate a 'test' set, I am not sure how caret calculates the ROC value or how to generate a curve.
To isolate the class probabilities for the best performing parameters, I am doing:
for (a in 1:length(model$bestTune)) {
  model$pred <-
    model$pred[model$pred[, paste(colnames(model$bestTune)[a])] == model$bestTune[1, a], ]
}
Please advise.
Thanks!
First, an explanation:
If you are not going to check how each possible hyperparameter combination predicted on each sample in each re-sample, you can set savePredictions = "final" in trainControl to save space:
fitControl <-
  trainControl(
    method = "boot632",
    number = 10,
    classProbs = TRUE,
    savePredictions = "final",
    summaryFunction = twoClassSummary
  )
After running the model:
model <- train(
  Class ~ .,
  data = my_data,
  method = "xgbTree",
  trControl = fitControl,
  metric = "ROC"
)
The results of interest are in model$pred.
Here you can check how many samples were tested in each re-sample (the example above uses number = 10 bootstrap repetitions):
nrow(model$pred[model$pred$Resample == "Resample01",])
#83
caret always provides predictions from rows not used to build the model.
nrow(my_data) #208
83/208 is roughly the expected share of held-out (out-of-bag) samples for boot632.
Now to build the ROC curve. You have several options here:
- Average the probability for each sample across re-samples and use that (this is usual for CV, since every sample is held out the same number of times, but it can be done with boot as well; a sketch of this is shown further below).
- Plot everything as is, without averaging.
- Plot a ROC curve for each re-sample.
I will show you the second approach:
Create a data frame of class probabilities and true outcomes:
for_lift = data.frame(Class = model$pred$obs, xgbTree = model$pred$R)
Plot the ROC curve:
pROC::plot.roc(pROC::roc(response = for_lift$Class,
                         predictor = for_lift$xgbTree,
                         levels = c("M", "R")),
               lwd = 1.5)
You can also do this with ggplot2. To do so, I find it easiest to make a lift object using the caret function lift:
lift_obj = lift(Class ~ xgbTree, data = for_lift, class = "R")
The class argument specifies which class the probabilities refer to.
library(ggplot2)

ggplot(lift_obj$data) +
  geom_line(aes(1 - Sp, Sn, color = liftModelVar)) +
  scale_color_discrete(guide = guide_legend(title = "method"))
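For the first option (averaging the probability for each sample across re-samples), something along these lines should work, since model$pred keeps the rowIndex of each held-out observation:

# average the class "R" probability per original row over all re-samples
avg_pred <- aggregate(R ~ rowIndex, data = model$pred, FUN = mean)

pROC::plot.roc(pROC::roc(response = my_data$Class[avg_pred$rowIndex],
                         predictor = avg_pred$R,
                         levels = c("M", "R")),
               lwd = 1.5)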
Is it possible to select a different operating point on the ROC curve in the caret train function, instead of using metric = "ROC" (which I believe maximizes the AUC)?
For example:
random.forest.orig <- train(pass ~ x + y,
                            data = meter.train,
                            method = "rf",
                            tuneGrid = tune.grid,
                            metric = "ROC",
                            trControl = train.control)
Specifically, I have a two-class problem (fail or pass) and I want to maximize the number of fail predictions while still maintaining a fail accuracy (i.e. negative predictive value) of >80%; that is, for every 10 fails I predict, at least 8 of them should be correct.
You can customize the caret::trainControl() object to use AUC, instead of accuracy, to tune the parameters of your models. Please check the caret documentation for details. (The built-in function, twoClassSummary, will compute the sensitivity, specificity and area under the ROC curve).
Note: in order to compute class probabilities, the pass outcome must be a factor.
Below is an example using 5-fold CV:
fitControl <- caret::trainControl(
  method = "cv",
  number = 5,
  summaryFunction = twoClassSummary,
  classProbs = TRUE,
  verboseIter = TRUE
)
So your code will be adjusted a bit:
random.forest.orig <- train(pass ~ x + y,
                            data = meter.train,
                            method = "rf",
                            tuneGrid = tune.grid,
                            metric = "ROC",
                            trControl = fitControl)
# Print model to console to examine the output
random.forest.orig
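If you also want to watch the negative predictive value while tuning, you can pass a custom summaryFunction to trainControl. A minimal sketch (the name npvSummary is made up; it assumes "fail" is one of the levels of pass):

npvSummary <- function(data, lev = NULL, model = NULL) {
  out <- twoClassSummary(data, lev, model)  # ROC, Sens, Spec as before
  npv <- caret::negPredValue(data$pred, data$obs, negative = "fail")
  c(out, NPV = npv)
}
# Use summaryFunction = npvSummary in trainControl(); NPV will then appear
# next to ROC in random.forest.orig$results.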