Possibly overfitted classification tree but with stable prediction error - r

I have a question regarding rpart and overfitting. My goal is only to do well on prediction. My dataset is large, almost 20000 points. Using around 2.5% of these points for training I get a prediction error around 50%, but using 97.5% of the data for training I get around 30%. Since I am using so much data for training, I guess there is a risk of overfitting.
I run this 1000 times with random training/test splits plus pruning of the tree, which is some sort of cross-validation if I have understood it correctly, and I get pretty much stable results (the same prediction error and importance of variables).
Can overfitting still be a problem, even though I have run this 1000 times and the prediction error is stable?
I also have a question regarding correlation between my explanatory variables. Can that be a problem in CART (as it is with regression)? In regression I would maybe use the Lasso to try to deal with the correlation. How can I deal with the correlation in my classification tree?
When I plot the complexity parameter (cp) table I get this graph:
[cptree plot]
Here is the code I am running (I have repeated this 1000 times with different random splits each time).
library(rpart)

set.seed(1) # For reproducibility
train_frac = 0.975
n = dim(beijing_data)[1]
# Split into training and testing data
ii = sample(seq(1, n), n*train_frac)
data_train = beijing_data[ii,]
data_test = beijing_data[-ii,]
# Grow a full tree (no pre-pruning): minsplit = 0 and cp = 0
fit = rpart(as.factor(PM_Dongsi_levels)~DEWP+HUMI+PRES+TEMP+Iws+
            precipitation+Iprec+wind_dir+tod+pom+weekend+month+
            season+year+day,
            data = data_train, minsplit = 0, cp = 0)
plotcp(fit)
# Find the cp value with the minimum cross-validated error and prune the tree
cp_fit = fit$cptable[which.min(fit$cptable[,"xerror"]),"CP"]
pfit = prune(fit, cp = cp_fit)
pp <- predict(pfit, newdata = data_test, type = "class")
err = sum(data_test[,"PM_Dongsi_levels"] != pp)/length(pp)
print(err)
Link to beijing_data (as a RData-file so you can reproduce my example)
https://www.dropbox.com/s/6t3lcj7f7bqfjnt/beijing_data.RData?dl=0

The question is quite complex and it will be very hard to answer comprehensively. I will try to provide some insights and references for further reading.
Correlated features do not pose as severe a problem for tree-based methods as they do for models that use a hyper-plane as the classification boundary. When there are multiple correlated features, the tree will just pick one and the rest will be ignored. However, correlated features often cloud the interpretability of such a model, mask interactions and so on. Tree-based models can also benefit from the removal of such variables since they will have a smaller space to search. Here is a decent resource on trees. Also check these videos 1, 2 and 3 and the ISLR book.
Models based on a single tree tend not to perform as well as hyper-plane based methods. So if you are interested mainly in the quality of prediction, you should explore models based on ensembles of trees, such as bagging and boosting models. Popular implementations of bagging and boosting in R are randomForest and xgboost. Both can be utilized with little to no experience and can result in good predictions. Here is a resource on how to use the popular R machine learning library caret to tune a random forest. Another resource is the R mlr library, which provides great wrappers for many ML-related tasks; for instance, here is a short blog post on model-based optimization of xgboost.
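For instance, a minimal random forest sketch on your data (assuming the same data_train / data_test objects and formula as in your rpart code; ntree = 500 is just a starting value) could look like this:
library(randomForest)

set.seed(1)
# Note: randomForest does not accept missing values, so handle NAs first if present
rf_fit <- randomForest(as.factor(PM_Dongsi_levels) ~ DEWP + HUMI + PRES + TEMP + Iws +
                         precipitation + Iprec + wind_dir + tod + pom + weekend +
                         month + season + year + day,
                       data = data_train, ntree = 500, importance = TRUE)

rf_pred <- predict(rf_fit, newdata = data_test, type = "class")
mean(rf_pred != data_test$PM_Dongsi_levels) # Test-set error rate
varImpPlot(rf_fit)                          # Variable importance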
Re-sampling strategy for model validation varies with the task and the available data. With 20k rows I would probably use 50-60% for training, 20% for validation and 20-30% as a test set. The 50-60% training set I would use to select a suitable ML method, features, hyper-parameters and so on by repeated K-fold cross-validation (2-3 times repeated 4-5-fold or similar). The 20% validation set I would use to fine-tune things and to get a feel for how well my cross-validation results on the training set generalize. When I am satisfied with everything, I would use the test set as final proof that I have a good model. Here are some resources on re-sampling: 1, 2, 3 and nested resampling.
In your situation I would use
z <- caret::createDataPartition(data$y, p = 0.6, list = FALSE)
train <- data[z,]
test <- data[-z,]
to split the data into train and test sets. I would then repeat the process to split the test set again with p = 0.5 (see the sketch below).
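A sketch of that second split (reusing the test object created above; y is the placeholder for your outcome column):
z2 <- caret::createDataPartition(test$y, p = 0.5, list = FALSE)
validation <- test[z2, ]
test       <- test[-z2, ]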
On the train data I would use this tutorial on random forests to tune the mtry and ntree parameters (the Extend Caret section), using 5-fold repeated cross-validation in caret and a grid search.
control <- trainControl(method = "repeatedcv", number = 5, repeats = 3)
tunegrid <- expand.grid(.mtry = c(1:15), .ntree = c(200, 500, 700, 1000, 1200, 1500))
and so on, as detailed in the mentioned link.
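Note that the stock method = "rf" in caret tunes only mtry; tuning ntree as in the grid above requires the custom caret model described in the tutorial's Extend Caret section. A minimal sketch that tunes mtry alone with a fixed ntree, reusing the control object above (y again stands for the placeholder outcome), might look like:
library(caret)

set.seed(1)
tunegrid_mtry <- expand.grid(.mtry = 1:15)

rf_tuned <- train(y ~ ., data = train,        # y must be a factor for classification
                  method = "rf", ntree = 500, # ntree is fixed here, not tuned
                  metric = "Accuracy",
                  tuneGrid = tunegrid_mtry, trControl = control)

print(rf_tuned)
plot(rf_tuned) # Accuracy as a function of mtry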
On a final note, the more data you have to train on, the less likely you are to over-fit.

Related

LASSO analysis (glmnet package). Can I loop the analysis and the results extraction?

I'm using the package glmnet. I need to run several LASSO analyses for the calibration of a large number of variables (% reflectance at each wavelength throughout the spectrum) against one dependent variable. I have a couple of doubts about the procedure and the results that I would like to resolve. I show my provisional code below:
First I split my data in training (70% of n) and testing sets.
smp_size <- floor(0.70 * nrow(mydata))
set.seed(123)
train_ind <- sample(seq_len(nrow(mydata)), size = smp_size)
train <- mydata[train_ind, ]
test <- mydata[-train_ind, ]
Then I separate the target trait (y) and the independent variables (x) for each set as follows:
vars.train <- train[3:2153]
vars.test <- test[3:2153]
x.train <- data.matrix(vars.train)
x.test <- data.matrix(vars.test)
y.train <- train$X1
y.test <- test$X1
Afterwards, I run a cross-validated LASSO model on the training set and extract and write out the non-zero coefficients at lambda.min. This is because one of my concerns here is to note which variables (wavebands of the reflectance spectrum) are selected by the model.
install.packages("glmnet")
library(glmnet)
cv.lasso.1 <- cv.glmnet(y=y.train, x= x.train, family="gaussian", nfolds =
5, standardize=TRUE, alpha=1)
coef(cv.lasso.1,s=cv.lasso.1$lambda.min) # Using lambda min.
(cv.lasso.1)
install.packages("broom")
library(broom)
c <- tidy(coef(cv.lasso.1, s="lambda.min"))
write.csv(c, file = "results")
Finally, I use the function “predict” with the object “cv.lasso.1” (the model obtained previously) on the variables of the testing set (x.test) in order to get predictions, and I run the correlation between the predicted and the actual values of y for the testing set.
predict.1.2 <- predict(cv.lasso.1, newx = x.test, type = "response",
                       s = "lambda.min")
cor.test(x = c(predict.1.2), y = c(y.test))
This is a simplified version of the code and it has worked without problems so far. The point is that I would like to make a loop (of one hundred repetitions) of the whole procedure and get the non-zero coefficients of the cross-validated model as well as the correlation coefficient of the predicted vs actual values (for the testing set) for each repetition. I've tried but couldn't get any clear results. Can someone give me a hint?
thanks!
In general, running repeated analyses of the same type over and over on the same data can be tricky, and in your case it may not be necessary in the way you have outlined it.
If you are trying to find the most predictive variables, you can use PCA (Principal Component Analysis) to select the directions with the most variation within and between variables. However, PCA does not consider your outcome at all, so if you have a poor model design it will pick the least correlated structure in your data, which may not be predictive, and you should be very aware of all variables in the set. This would be a way of reducing the dimensionality of your data for a linear or logistic regression of some sort.
You can read about it here
yourPCA <- prcomp(yourData,
                  center = TRUE,
                  scale. = TRUE)
Scaling and centering are essential to making these models work right: they put your variables on a common footing by setting means to 0 and standard deviations to 1. Unless you know what you are doing, I would leave those settings as they are. And if you have skewed or kurtotic data, you might need to address this prior to PCA. Run this ONLY on your predictors; keep your target/outcome variable out of the data set.
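As a rough sketch of how the principal component scores could then feed a regression (assuming the x.train / x.test matrices and y.train / y.test vectors from the question; keeping 10 components is an arbitrary choice):
pca <- prcomp(x.train, center = TRUE, scale. = TRUE)

# Use the first 10 component scores as predictors (arbitrary cut-off)
train_pc <- data.frame(y = y.train, pca$x[, 1:10])
fit_pc <- lm(y ~ ., data = train_pc)

# Project the test set onto the same components before predicting
test_pc <- data.frame(predict(pca, newdata = x.test)[, 1:10])
pred_pc <- predict(fit_pc, newdata = test_pc)
cor(pred_pc, y.test)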
If you have a classification problem you are looking to resolve with much data, try an LDA, Linear Discriminant Analysis which looks to reduce variables by optimizing the variance of each predictor with respect to the OUTCOME variable...it specifically considers your outcome.
require(MASS)
yourLDA <- lda(formula = outcome ~ .,
               data = yourdata)
You can also set the prior probabilities in LDA if you know what the global probability of each class is, or you can leave them out and R's lda will estimate the class probabilities from the training set. You can read about that here:
LDA from MASS package
So this gets you headed in the right direction for reducing the complexity of your data via feature selection in a computationally solid way. As for building the most robust model via repeated model building, this is known as cross-validation. There is a cv.glm function in the boot package which can help you get this taken care of in a safe way.
You can use the following as a rough guide:
require(boot)
# Fit a glm on a data frame that contains the outcome plus all predictors,
# then cross-validate it with cv.glm
yourGLM <- glm(outcomeVariable ~ ., data = yourData, family = "gaussian")
yourCVGLM <- cv.glm(data = yourData, glmfit = yourGLM, K = 100)
Here K = 100 specifies 100-fold cross-validation: 100 models are fitted and evaluated on different random subsets of your current data OBSERVATIONS, not your variables.
So the process is twofold: reduce variables using one of the two methods above, then use cross-validation to build a single model from repeated trials without cumbersome loops!
Read about cv.glm here
Try starting on page 41, but look over the whole thing. The repeated sampling you are after is called bootstrapping; it is powerful and available for many different model types.
Not as much code as you might hope for, but it should point you in a decent direction.

How to build regression models and then compare their fits with data held out from the model training-testing?

I have been building a couple different regression models using the caret package in R in order to make predictions about how fluorescent certain genetic sequences will become under certain experimental conditions.
I have followed the basic protocol of splitting my data into two sets: one "training-testing set" (80%) and one "hold-out set" (20%), the former of which would be utilized to build the models, and the latter would be used to test them in order to compare and pick the final model, based on metrics such as their R-squared and RMSE values. One such guide of the many I followed can be found here (http://www.kimberlycoffey.com/blog/2016/7/16/compare-multiple-caret-run-machine-learning-models).
However, I run into a block in that I do not know how to test and compare the different models based on how well they can predict the scores in the hold-out set. In the guide I linked to above, the author uses a confusion matrix in order to calculate the specificity and accuracy for each model after building a predict.train object that applies the recently built models to the hold-out set of data (which is referred to as test in the link). However, confusionMatrix can only be applied to classification models, where the outcome (or response) is a categorical value (as far as my research has indicated; please correct me if this is incorrect, as I have not been able to conclude beyond doubt that this is the case).
I have found that the resamples method is capable of comparing multiple models against each other (source: https://www.rdocumentation.org/packages/caret/versions/6.0-77/topics/resamples), but it cannot take into account how the new models fit with the data that I excluded from the training-testing sessions.
I tried to create prediction objects using the recently built models and the hold-out data, then calculate R-squared and RMSE values using caret's R2 and RMSE functions. But I'm not sure if such an approach is the best possible way to compare the models and pick the best one.
At this point, I should note that all the model building methods I am using are based on linear regression, since I need to be able to extract the coefficients and apply them in a separate Python script.
Another option I considered was setting a threshold on my outcome, wherein any genetic sequence with a fluorescence value over 100 is considered useful, while sequences scoring under 100 are not. This would allow me to utilize confusionMatrix. But I'm not sure how I should implement this within my R code to create these two classes in my outcome variable. I'm further concerned that this approach might make it difficult to apply my regression models to other data and make predictions.
For what it's worth, each of the predictors is either an integer or a float, and their values are not normally distributed.
Here is the code I have been using thus far:
library(caret)

data <- read.table("mydata.csv")
sorted_Data <- data[order(data$fluorescence, decreasing = TRUE), ]

splitprob <- 0.8
traintestindex <- createDataPartition(sorted_Data$fluorescence, p = splitprob, list = FALSE)
holdoutset  <- sorted_Data[-traintestindex, ]
trainingset <- sorted_Data[traintestindex, ]
traindata <- trainingset[c('x1', 'x2', 'x3', 'x4', 'x5', 'fluorescence')]

cvCtrl <- trainControl(method = "repeatedcv", number = 20, repeats = 20, verboseIter = FALSE)

modelglmStepAIC <- train(fluorescence ~ ., traindata, method = "glmStepAIC",
                         preProc = c("center", "scale"), trControl = cvCtrl)
model_rlm <- train(fluorescence ~ ., traindata, method = "rlm",
                   preProc = c("center", "scale"), trControl = cvCtrl)

# Predict via the train objects so the center/scale pre-processing is applied
pred_glmStepAIC <- predict(modelglmStepAIC, newdata = holdoutset)
pred_rlm <- predict(model_rlm, newdata = holdoutset)

glmStepAIC_r2 <- R2(pred_glmStepAIC, holdoutset$fluorescence)
glmStepAIC_rmse <- RMSE(pred_glmStepAIC, holdoutset$fluorescence)
rlm_r2 <- R2(pred_rlm, holdoutset$fluorescence)
rlm_rmse <- RMSE(pred_rlm, holdoutset$fluorescence)
The out-of-sample performance measures offered by caret are RMSE, MAE and the squared correlation between fitted and observed values (called R2). See more info here: https://topepo.github.io/caret/measuring-performance.html
At least in time series regression context, RMSE is the standard measure for out-of-sample performance of regression models.
I would advise against discretising a continuous outcome variable, because you are essentially throwing away information by doing so.
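As a sketch of how the hold-out comparison could be organized (reusing the prediction vectors and holdoutset from the question's code), caret's postResample reports RMSE, Rsquared and MAE in one call, which makes a side-by-side comparison straightforward:
library(caret)

# RMSE, Rsquared and MAE for each model on the same held-out data
rbind(
  glmStepAIC = postResample(pred_glmStepAIC, holdoutset$fluorescence),
  rlm        = postResample(pred_rlm, holdoutset$fluorescence)
)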

Xgboost dealing with imbalanced classification data

I have a dataset of some 20000 training examples, on which I want to do a binary classification.
The problem is that the dataset is heavily imbalanced, with only around 1000 examples in the positive class. I am trying to use xgboost (in R) for doing my prediction.
I have tried oversampling and undersampling, and no matter what I do the predictions always result in classifying everything as the majority class.
I tried reading this article on how to tune parameters in xgboost.
https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/
However, it only mentions which parameters help with imbalanced datasets, not how to tune them.
I would appreciate any advice on tuning the learning parameters of xgboost to handle imbalanced datasets, and also on how to generate the validation set in such cases.
According to the XGBoost documentation, the scale_pos_weight parameter is the one dealing with imbalanced classes. See the documentation here:
scale_pos_weight [default=1]: Controls the balance of positive and negative weights, useful for unbalanced classes. A typical value to consider: sum(negative cases) / sum(positive cases). See Parameters Tuning for more discussion. Also see the Higgs Kaggle competition demo for examples: R, py1, py2, py3.
Try something like this in R
library(xgboost)

bstSparse <- xgboost(data = xgbTrain, max_depth = 4, eta = 0.2, nthread = 2, nrounds = 200,
                     eval_metric = "auc",
                     eval_metric = "logloss",
                     scale_pos_weight = 48, colsample_bytree = 0.7,
                     gamma = 2.5,
                     objective = "binary:logistic")
Here scale_pos_weight reflects the imbalance; my baseline incidence rate is ~4%. Use hyper-parameter optimization, and you can include scale_pos_weight in that search too.
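The value suggested by the documentation can be computed directly from the training labels; a minimal sketch assuming a 0/1 label vector y_train:
# sum(negative cases) / sum(positive cases), as suggested by the docs
spw <- sum(y_train == 0) / sum(y_train == 1)
spw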
A technique useful with neural networks is to introduce some noise into the observations.
In R there is the 'jitter' function to do this.
For your 1000 rare cases, apply only a small amount of jitter to their features to give you another 1000 cases.
Run your code again and see if the predictions are now picking up any of the positive class.
You can experiment with more added cases and/or varying the amount of jitter.
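A minimal sketch of that idea, assuming a data frame train_df with a 0/1 column label and numeric feature columns:
set.seed(1)

# Take the rare (positive) cases and perturb their numeric features slightly
positives <- train_df[train_df$label == 1, ]
feature_cols <- setdiff(names(positives), "label")

jittered <- positives
jittered[feature_cols] <- lapply(positives[feature_cols],
                                 function(x) jitter(x, factor = 0.5))

# Augmented training set: originals plus the jittered copies
train_aug <- rbind(train_df, jittered)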
HTH, cousin_pete

Pruning rule based classification tree (PART algorithm)

I am using the PART algorithm in R (via the package RWeka) for multi-class classification. The target attribute is the time bucket in which an invoice will be paid by the customer (like 7-15 days, 15-30 days etc.). I am using the following code for fitting and predicting from the model:
library(RWeka)

fit <- PART(DELAY_CLASS ~ AMT_TO_PAY + NUMBER_OF_CREDIT_DAYS + AVG_BASE_PRICE, data = trainingData)
predictedTrainingValues <- predict(fit, trainingData)
Using this model, I am getting around 82% accuracy on the training data, but accuracy on the test data comes out around 59%. I understand that I am over-fitting the model. I tried to reduce the number of predictor variables (the predictor variables in the code above are already a reduced set), but it is not helping much. Reducing the number of variables improves accuracy on the test data to around 61% and reduces the accuracy on the training data to around 79%.
Since the PART algorithm is based on partial decision trees, another option could be to prune the tree. But I am not aware of how to prune the tree for the PART algorithm. Searching the internet, I found that the FOIL criterion can be used for pruning rule-based algorithms, but I have not been able to find an implementation of the FOIL criterion in R or RWeka.
My question is how to prune the tree for the PART algorithm; any other suggestions to improve accuracy on the test data are also welcome.
Thanks in advance!!
NOTE: I calculate accuracy as the number of correctly classified instances divided by the total number of instances.
In order to prune the tree with PART you need to specify it in the control argument of the function:
There is a complete list of the commands you can pass into the control argument here
I quote some of the options here which are relevant to pruning:
Valid options are:
-C <confidence>
Set confidence threshold for pruning. (Default: 0.25)
-M <number>
Set minimum number of instances per leaf. (Default: 2)
-R
Use reduced-error pruning.
-N <number>
Set number of folds for reduced-error pruning. One fold is used as the pruning set. (Default: 3)
It looks like the C option above might be of help to you, and then maybe R, N and M.
In order to use those in the function do:
fit <- PART(DELAY_CLASS ~ AMT_TO_PAY + NUMBER_OF_CREDIT_DAYS + AVG_BASE_PRICE,
data= trainingData,
control = Weka_control(R = TRUE, N = 5, M = 100)) #random choices
On a separate note for the accuracy metric:
Comparing the accuracy between the training set and the test set to determine over-fitting is not optimal in my opinion. The model was trained on the training set, and therefore you expect it to work better there than on the test set. A better check is cross-validation: try performing a 10-fold cross-validation first (you could use caret's train function) and then compare the average cross-validation accuracy to your test set's accuracy. I think this will be better. If you do not know what cross-validation is: in general it splits your training set into smaller training and test portions, trains on the training portion and evaluates on the held-out portion, repeating over the folds. You can read more about it here.
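As a sketch of that check using RWeka's own cross-validation helper (an alternative to caret; the Weka_control values are just examples):
library(RWeka)

fit <- PART(DELAY_CLASS ~ AMT_TO_PAY + NUMBER_OF_CREDIT_DAYS + AVG_BASE_PRICE,
            data = trainingData,
            control = Weka_control(R = TRUE, N = 5, M = 100))

# 10-fold cross-validated accuracy estimated on the training data
cv_eval <- evaluate_Weka_classifier(fit, numFolds = 10, seed = 1)
cv_eval # Prints accuracy and related statistics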

R glm - how to do multiple cross-validation

I have train data which I randomly split in two parts:
70% -> train_train
30% -> train_cv (for cross-validation)
I fit a glm (glmnet) model using train_train, then cross-validate with train_cv.
My problem is that a different random split for train_train and train_cv returns different cross-validation results (evaluated using Area Under the Curve, "AUC"):
AUC = 0.6381583 the 1st time
AUC = 0.6164524 the 2nd time
Is there a way to run multiple cross-validations, without duplicating the code?
There are some confusing things here. I think what you are describing is more of a standard train/test split; the word cross-validation is usually used differently. So you've held out 30% of the data for testing, which is good, and you can use that to find out how optimistic your train-set estimate of AUC is. But of course the estimate depends on how you do the train/test split, and it would be good to know how much this test performance varies. You can use multiple runs of cross-validation to achieve this.
Cross-validation is slightly different from just using a holdout set. Five-fold cross-validation, for example, involves the following steps:
1. Randomly split the full dataset into five equal-sized parts.
2. For i = 1 to 5, fit the model on all the data except the i-th part.
3. Evaluate AUC on the part that was held out from the fit.
4. Average the five AUC results.
This process can be repeated multiple times to estimate the mean and variance of the out of sample estimate.
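A minimal manual sketch of those steps, assuming a data frame train with a binary response column and using AUC from ROCR (object names are placeholders):
library(ROCR)

set.seed(1)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(train))) # Random fold assignment

aucs <- sapply(1:k, function(i) {
  fit  <- glm(response ~ ., data = train[folds != i, ], family = binomial)
  prob <- predict(fit, newdata = train[folds == i, ], type = "response")
  pred <- prediction(prob, train$response[folds == i])
  performance(pred, "auc")@y.values[[1]]
})

mean(aucs) # Average out-of-fold AUC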
The R package cvTools allows you to do this. For example
library(ROCR)
library(cvTools)

# cvTools calls the cost function as cost(observed, predicted)
calc_AUC <- function(act, pred) {
  u <- prediction(pred, act)
  performance(u, "auc")@y.values[[1]]
}

cvFit(m, data = train, y = train$response,
      cost = calc_AUC, predictArgs = list(type = "response"))
will perform 5-fold cross-validation of the model m using AUC as the performance metric. cvFit also takes the arguments K (number of cross-validation folds) and R (number of times to perform the cross-validation with different random splits).
See http://en.wikipedia.org/wiki/Cross-validation_(statistics) for more info on cross-validation.
