R - factor examcard has new levels

I built a classification model in R using C5.0, shown below:
library(C50)
library(caret)
a = read.csv("All_SRN.csv")
set.seed(123)
inTrain <- createDataPartition(a$anatomy, p = .70, list = FALSE)
training <- a[ inTrain,]
test <- a[-inTrain,]
Tree <- C5.0(anatomy ~ ., data = training,
             trControl = trainControl(method = "repeatedcv", repeats = 10,
                                      classProb = TRUE))
TreePred <- predict(Tree, test)
The training set has features such as examcard, coil_used, anatomy_region, bodypart_anatomy and anatomy (the target class). All of the features are categorical variables. There are roughly 10,000 rows in total, which I divided into training and test sets. The learner worked well with the training and test sets partitioned in a 70:30 ratio, but the problem comes when I provide a test set with new values, as below:
TreePred <- predict(Tree, test_add)
Here, test_add contains the existing test set plus a set of new values. On execution, the learner fails to classify the new values and throws the following error:
Error in model.frame.default(object$Terms, newdata, na.action = na.action, : factor examcard has new levels
I tried to merge the new factor levels with the existing ones using:
Tree$xlevels[["examcard"]] <- union(Tree$xlevels[["examcard"]], levels(test_add$examcard))
But this wasn't much help, since the code exited with the following message and didn't yield any useful result:
predict code called exit with value 1
The feature examcard carries a good deal of weight in the classification, so it can't be ignored. How can this set of values be classified?

You cannot create predictions for factor levels in your test set that are absent from your training set: your model has learned nothing for these new factor levels.
If you are doing a 70/30 split, you need to repartition your data using caret::createDataPartition...
... or your own stratified sampling function, to ensure that all levels are represented in the training set. Use the "split-apply-combine" approach: split the data set by examcard, apply the 70/30 split within each subset, then combine the training subsets and the testing subsets.
See this question for more details.
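For illustration, here is a rough base-R sketch of that split-apply-combine split, reusing the a and examcard names from the question (adjust to your own data); it samples 70% within each examcard level so every level lands in the training set:
set.seed(123)
by_level <- split(seq_len(nrow(a)), a$examcard)
train_idx <- unlist(lapply(by_level, function(idx) {
  # ceiling() keeps levels with only one row entirely in the training set
  idx[sample.int(length(idx), size = ceiling(0.70 * length(idx)))]
}))
training <- a[train_idx, ]
test <- a[-train_idx, ]
# sanity check: should return character(0)
setdiff(unique(as.character(test$examcard)), unique(as.character(training$examcard)))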

Related

Random Forest model yields incorrect predictions despite having accuracy of over 99 percent

For an ML course, I am supposed to build a model on the training set to predict the variable "classe" on a validation set. I removed all unnecessary variables from the training set, used cross-validation to prevent over-fitting, and made sure the validation set matched the training set in terms of which columns were removed. When I predict classe on the validation set, it predicts classe A for every row, and I know this is incorrect.
I included the entire script below.
Where did I go wrong?
library(caret)
download.file("https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv", "train.csv")
download.file("https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv", "test.csv")
train <- read.csv("./train.csv")
val <- read.csv("./test.csv")
#getting rid of columns with NAs
nas <- sapply(train, function(x) sum(is.na(x)))
train <- train[, nas<1900]
#removing near zero variance columns
remove <- nearZeroVar(train)
train <- train[, -remove]
#create partition in our training set
set.seed(8675309)
inTrain <- createDataPartition(train$classe, p = .7, list = FALSE)
training <- train[inTrain,]
testing <- train[-inTrain,]
model <- train(classe ~ ., method = "rf", data = training)
confusionMatrix(predict(model, testing), testing$classe)
#make sure validation set has same features as training set
trainforvalid <- subset(training, select = -classe)
val <- val[, colnames(trainforvalid)]
predict(model, val)
#the above step yields all predictions as classe A
This might be happening because the data are unbalanced. If the data have many more data points for class A than for class B, the model may simply learn to always predict class A.
Try using a metric better suited to this case, such as the F1 score.
I also recommend techniques like oversampling or undersampling to mitigate the class-imbalance issue.
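As a rough sketch of both suggestions (reusing the training and testing objects from the question, and assuming your caret version supports the sampling option in trainControl, which down-samples the majority classes inside each resample):
table(training$classe)    # inspect how unbalanced the classes actually are
ctrl <- trainControl(method = "cv", number = 3, sampling = "down")
model_bal <- train(classe ~ ., method = "rf", data = training, trControl = ctrl)
confusionMatrix(predict(model_bal, testing), testing$classe)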

Get row number for prediction with caret

I use caret a lot for my machine learning tasks in R and I like it a lot.
But I face the following problem:
I train a model in caret, say a linear regression with lm()
When I want to score new data, I do: predict(model, new_data)
When new_data contains missing values in my predictors, predict returns no prediction for those rows, instead of, say, NA
Is it possible to either:
return a prediction for all rows in new_data, with NA where a prediction is not possible, or
return the predictions plus the row number of the data frame each prediction corresponds to?
E.g. like the mlr package does with an id column that shows which row each prediction corresponds to.
Here is the link to the mlr-predict page with more details:
mlr-package: predict with row-id
Any help greatly appreciated!
You can identify the cases with missing values prior to running caret::train() by creating a new column with the row names in your data set, since these default to the row numbers in the data frame.
Using the Sonar data set from the mlbench package as an illustration:
library(mlbench)
data(Sonar)
library(caret)
set.seed(95014)
# add row numbers
Sonar$rowId <- rownames(Sonar)
# create training & testing data sets
inTraining <- createDataPartition(Sonar$Class, p = .75, list=FALSE)
training <- Sonar[inTraining,]
testing <- Sonar[-inTraining,]
# set column 60 to NA for some values in test data
testing[48:51,60] <- NA
testing[!complete.cases(testing),"rowId"]
...and the output:
> testing[!complete.cases(testing),"rowId"]
[1] "193" "194" "200" "206"
You can then run predict() on the rows of the test data set that have complete cases. Again using the Sonar data set, with a random forest model and 3-fold cross-validation to speed up processing:
fitControl <- trainControl(method = "cv",number = 3)
# rowId is an identifier, not a predictor, so exclude it from the formula
fit <- train(Class ~ . - rowId, data = training, method = "rf", trControl = fitControl)
predicted <- predict(fit,testing[complete.cases(testing),])
Another way to handle this situation is to use an imputation strategy to eliminate the missing values for the independent variables in your model. My article on GitHub, Strategies for Handling Missing Values, links to a number of research papers on this topic.
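If you also want one prediction slot per test row, keyed by rowId, a small sketch (not part of the original answer) is to predict only on the complete cases and leave NA elsewhere:
ok <- complete.cases(testing)
result <- data.frame(rowId = testing$rowId, prediction = NA_character_,
                     stringsAsFactors = FALSE)
result$prediction[ok] <- as.character(predict(fit, testing[ok, ]))
result[!ok, ]    # the incomplete rows keep an NA prediction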

Sampling with many categories

I'm using linear regression on a dataset with many categorical variables, each containing several categories, up to 45 in one of them.
I'm sampling the data this way:
## 70% of the sample size
smp_size <- floor(0.7 * nrow(plot_data))
## set the seed to make your partition reproducible
set.seed(888)
train_ind <- sample(seq_len(nrow(plot_data)), size = smp_size)
train <- plot_data[train_ind, ]
test <- plot_data[-train_ind, ]
Then I make the model like this:
linear_model = lm(train$dependent_variable~., data = train)
The problem is that whenever I try to predict and work with the testing set, the testing set contains some categories that the training set does not.
pred_data = predict(linear_model, newdata = test)
This gives me the following error:
Error in model.frame.default(Terms, newdata, na.action = na.action, xlev = object$xlevels) :
factor origin has new levels someCategory1, SomeCategory2
Is there a way to ensure that all the categories appear in both the training and testing sets, or is there a workaround for this?
I ended up removing the observations with new levels from the test set. I know this has its limitations and that the OSR2 loses reliability, but it got the job done:
test = na.omit(remove_missing_levels (fit=linear_model, test_data=test));
I found the remove_missing_levels function here.
It requires this library:
install.packages("magrittr");
library(magrittr);
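If you'd rather not rely on that helper, a minimal sketch of the same idea (assuming the linear_model and test objects from the question) is to keep only the test rows whose factor levels appear among the levels stored in the fitted lm:
keep <- rep(TRUE, nrow(test))
for (v in names(linear_model$xlevels)) {
  # linear_model$xlevels lists, per factor, the levels seen during training
  keep <- keep & (as.character(test[[v]]) %in% linear_model$xlevels[[v]])
}
test_known <- droplevels(test[keep, ])
pred_data <- predict(linear_model, newdata = test_known)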

How to create a learning curve (bias/variance) from the output of caret::train

I am new to the caret library. I would like to use the train function to run cross-validation on my dataset (using the rpart method for classification). My goal is to produce learning curves using the data returned from my call to train. The learning curve would plot the dataset size on the x-axis, with the error of the predictions on the training and cross-validation sets plotted as a function of dataset size.
My question is, does caret make predictions on both the training and cv folds? If the answer is yes, how would I go about extracting that data?
Assuming the answer is yes, here is a simple code sample that you could append to to illustrate:
library(MASS)
data(biopsy)
biopsy <- biopsy[, -1]
names(biopsy) <- c("thick", "u.size", "u.shape", "adhsn", "s.size", "nucl", "chrom", "n.nuc", "mit", "class")
biopsy.v2 <- na.omit(biopsy)
set.seed(1)
ind <- sample(2, nrow(biopsy.v2), replace = TRUE, prob = c(0.7, 0.3))
biop.train <- biopsy.v2[ind == 1, ]
tr.model <- caret::train(class ~ ., data = biop.train, method = 'rpart',
                         trControl = trainControl(method = "cv", number = 4,
                                                  verboseIter = FALSE,
                                                  savePredictions = "final"))
#Can I extract train and cv accuracies from tr.model?
Thanks.
note: I realize that I may need to call train repeatedly with different samples of my dataset (assuming caret doesn't also support this), and that is not reflected in the code sample here.
You can try this:
A data frame with predictions for each resample:
tr.model$pred
A data frame with columns for each performance metric. Each row corresponds to each resample:
tr.model$resample
A data frame with the final parameters:
tr.model$bestTune
A data frame with the resampled performance metrics for each value of the tuning parameters:
tr.model$results
To specify repeated CV:
trainControl(..., repeats = n)
where n is an integer (the number of complete sets of folds to compute)
EDIT: to determine which samples were in the held-out (test) folds:
the relevant information is in the tr.model$pred data frame:
tr.model$pred[tr.model$pred$Resample=="Fold1",4:5]
tr.model$pred[tr.model$pred$Resample=="Fold2",4:5]
tr.model$pred[tr.model$pred$Resample=="Fold3",4:5]
tr.model$pred[tr.model$pred$Resample=="Fold4",4:5]
samples that were not in a given held-out fold were in that resample's training fold
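For the learning curve itself, here is a rough sketch (not from the original answer) that refits the model on growing fractions of biop.train and records both the resampled CV accuracy and the apparent training accuracy at each size:
fracs <- c(0.2, 0.4, 0.6, 0.8, 1.0)
set.seed(1)
lc <- do.call(rbind, lapply(fracs, function(p) {
  idx <- sample(nrow(biop.train), size = floor(p * nrow(biop.train)))
  sub <- biop.train[idx, ]
  m <- caret::train(class ~ ., data = sub, method = "rpart",
                    trControl = caret::trainControl(method = "cv", number = 4))
  data.frame(n = nrow(sub),
             cv_acc = max(m$results$Accuracy),            # best CV accuracy over cp
             train_acc = mean(predict(m, sub) == sub$class))
}))
lc    # plot train_acc and cv_acc against n to get the learning curve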

Can I do predict.glmnet on test data with different number of predictor variables?

I used glmnet to build a predictive model on a training set with ~200 predictors and 100 samples, for a binomial regression/classification problem.
I selected the best model (16 predictors), which gave me the maximum AUC. I have an independent test set containing only those 16 predictors that made it into the final model from the training set.
Is there any way to use predict.glmnet, based on the optimal model from the training set, with a new test set that has data only for the variables that made it into the final model?
glmnet requires exactly the same variables (same number and names) from the training data set to be present in the validation/test set. For example:
library(caret)
library(glmnet)
df <- ... # a dataframe with 200 variables, some of which you want to predict on
# & some of which you don't care about.
# Variable 13 ('Response.Variable') is the dependent variable.
# Variables 1-12 & 14-113 are the predictor variables
# All training/testing & validation datasets are derived from this single df.
# Split dataframe into training & testing sets
inTrain <- createDataPartition(df$Response.Variable, p = .75, list = FALSE)
Train <- df[ inTrain, ] # Training dataset for all model development
Test <- df[ -inTrain, ] # Final sample for model validation
# Run logistic regression , using only specified predictor variables
logCV <- cv.glmnet(x = data.matrix(Train[, c(1:12, 14:113)]), y = Train[, 13],
                   family = 'binomial', type.measure = 'auc')
# Test model over final test set, using specified predictor variables
# Create field in dataset that contains predicted values
Test$prob <- predict(logCV, type = "response",
                     newx = data.matrix(Test[, c(1:12, 14:113)]), s = 'lambda.min')
For a completely new set of data, you could constrain the new df to the necessary variables using some variant of the following method:
new.df <- ... # new df w/ 1,000 variables, which include all predictor variables used
# in developing the model
# Create object with requisite predictor variable names that we specified in the model
predictvars <- c('PredictorVar1', 'PredictorVar2', 'PredictorVar3',
... 'PredictorVarK')
new.df$prob <- predict(logCV, type = "response",
                       newx = data.matrix(new.df[names(new.df) %in% predictvars]),
                       s = 'lambda.min')
# the above method limits the new df of 1,000 variables to
# whatever the requisite variable names or indices go into the
# model.
Additionally, glmnet only deals with matrices. This is probably why you're getting the error you posted in the comment to your question. Some users (myself included) have found that as.matrix() doesn't resolve the issue, whereas data.matrix() seems to work (hence its use in the code above). This issue is addressed in a thread or two on SO.
I assume that all variables in the new dataset to be predicted also need to be formatted the same way as they were in the dataset used for model development. I usually pull all of my data from the same source, so I haven't encountered what glmnet does when the formatting differs.
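As a quick sanity check before predicting, a small sketch (reusing the logCV, Train and new.df objects above) lists the predictors that carry nonzero coefficients at lambda.min and flags any columns the model expects but the new data lacks:
cf <- coef(logCV, s = "lambda.min")
nonzero <- setdiff(rownames(cf)[which(cf[, 1] != 0)], "(Intercept)")
nonzero                                   # the predictors retained by the model
setdiff(colnames(data.matrix(Train[, c(1:12, 14:113)])), names(new.df))   # should be empty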
