Use of randomForest() for classification in R?
I originally had a data frame composed of 12 columns in N rows. The last column is my class (0 or 1). I had to convert my entire data frame to numeric with
training <- sapply(training.temp,as.numeric)
But then I thought I needed the class column to be a factor column to use the randomForest() tool as a classifier, so I did
training[,"Class"] <- factor(training[,ncol(training)])
I then proceeded to create the forest with
training_rf <- randomForest(Class ~., data = trainData, importance = TRUE, do.trace = 100)
But I'm getting two errors:
1: In Ops.factor(training[, "Status"], factor(training[, ncol(training)])) :
'<=' not meaningful for factors (roughly translated from a localized message)
2: In randomForest.default(m, y, ...) :
The response has five or fewer unique values. Are you sure you want to do regression?
I would appreciate it if someone could point out the formatting mistake I'm making.
Thanks!
So the issue is actually quite simple: it turns out my training data was an atomic vector (a matrix, as returned by sapply), so it first had to be converted back to a data frame. I needed to add the following line:
training <- as.data.frame(training)
Problem solved!
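For context, a minimal sketch of why the conversion matters (the column names are made up for illustration): sapply() returns a numeric matrix, and a matrix cannot hold a factor column, so the class column can only become a factor after converting back to a data frame:

```r
# Toy stand-in for training.temp (hypothetical columns).
training.temp <- data.frame(x1 = c("1", "2", "3", "4"),
                            x2 = c("5", "6", "7", "8"),
                            Class = c("0", "1", "0", "1"))

training <- sapply(training.temp, as.numeric)  # returns a matrix, not a data frame
is.matrix(training)  # TRUE: assigning factor() into a matrix column just stores integer codes

training <- as.data.frame(training)            # back to a data frame
training$Class <- as.factor(training$Class)    # now the column really is a factor
is.factor(training$Class)
```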
First, your coercion to a factor is not working because of syntax errors. Second, you should always use indexing when specifying a RF model. Here are the changes to your code that should make it work.
training <- sapply(training.temp,as.numeric)
training[,"Class"] <- as.factor(training[,"Class"])
training_rf <- randomForest(x=training[,1:(ncol(training)-1)], y=training[,"Class"],
importance=TRUE, do.trace=100)
# You can also coerce to a factor directly in the model statement
training_rf <- randomForest(x=training[,1:(ncol(training)-1)], y=as.factor(training[,"Class"]),
importance=TRUE, do.trace=100)
How can I include both my categorical and numeric predictors in my elastic net model?
As a note beforehand, I should mention that I am working with highly sensitive medical data that is protected by HIPAA. I cannot share real data with dput; it would be illegal to do so. That is why I made a fake dataset and explained my process, to help reproduce the error. I have been trying to estimate an elastic net model in R using glmnet, but I keep getting an error and I am not sure what is causing it. The error happens when I go to train the data, and it sounds like it has something to do with the data type and matrix. I have provided a sample dataset. Then I set the outcomes and certain predictors to be factors. After setting certain variables to be factors, I label them. Next, I create an object with the column names of the predictors I want to use; that object is pred.names.min. Then I partition the data into the training and test data frames: 65% in the training, 35% in the test. With the trainControl function, I specify a few things I want to have happen with the model: random parameters for lambda and alpha, as well as the leave-one-out method. I also specify that it is a classification model (categorical outcome). In the last step, I specify the training model, telling it to use all of the predictor variables in the pred.names.min object on the trainingset data frame.
library(dplyr)
library(tidyverse)
library(glmnet)
library(caret)

#creating sample dataset
df <- data.frame(
  "BMIfactor" = c(1,2,3,2,3,1,2,1,3,2,1,3,1,1,3,2,3,2,1,2,1,3),
  "age" = c(0,4,8,1,2,7,4,9,9,2,2,1,8,6,1,2,9,2,2,9,2,1),
  "L_TartaricacidArea" = c(0,1,1,0,1,1,1,0,0,1,0,1,1,0,1,0,0,1,1,0,1,1),
  "Hydroxymethyl_5_furancarboxylicacidArea_2" = c(1,1,0,1,0,0,1,0,1,1,0,1,1,0,1,1,0,1,0,1,0,1),
  "Anhydro_1.5_D_glucitolArea" = c(8,5,8,6,2,9,2,8,9,4,2,0,4,8,1,2,7,4,9,9,2,2),
  "LevoglucosanArea" = c(6,2,9,2,8,6,1,8,2,1,2,8,5,8,6,2,9,2,8,9,4,2),
  "HexadecanolArea_1" = c(4,9,2,1,2,9,2,1,6,1,2,6,2,9,2,8,6,1,8,2,1,2),
  "EthanolamineArea" = c(6,4,9,2,1,2,4,6,1,8,2,4,9,2,1,2,9,2,1,6,1,2),
  "OxoglutaricacidArea_2" = c(4,7,8,2,5,2,7,6,9,2,4,6,4,9,2,1,2,4,6,1,8,2),
  "AminopentanedioicacidArea_3" = c(2,5,5,5,2,9,7,5,9,4,4,4,7,8,2,5,2,7,6,9,2,4),
  "XylitolArea" = c(6,8,3,5,1,9,9,6,6,3,7,2,5,5,5,2,9,7,5,9,4,4),
  "DL_XyloseArea" = c(6,9,5,7,2,7,0,1,6,6,3,6,8,3,5,1,9,9,6,6,3,7),
  "ErythritolArea" = c(6,7,4,7,9,2,5,5,8,9,1,6,9,5,7,2,7,0,1,6,6,3),
  "hpresponse1" = c(1,0,1,1,0,1,1,0,0,1,0,0,1,0,1,1,1,0,1,0,0,1),
  "hpresponse2" = c(1,0,1,0,0,1,1,1,0,1,0,1,0,1,1,0,1,0,1,0,0,1))

#setting variables as factors
df$hpresponse1 <- as.factor(df$hpresponse1)
df$hpresponse2 <- as.factor(df$hpresponse2)
df$BMIfactor <- as.factor(df$BMIfactor)
df$L_TartaricacidArea <- as.factor(df$L_TartaricacidArea)
df$Hydroxymethyl_5_furancarboxylicacidArea_2 <- as.factor(df$Hydroxymethyl_5_furancarboxylicacidArea_2)

#labeling factor levels
df$hpresponse1 <- factor(df$hpresponse1, labels = c("group1.2", "group3.4"))
df$hpresponse2 <- factor(df$hpresponse2, labels = c("group1.2.3", "group4"))
df$L_TartaricacidArea <- factor(df$L_TartaricacidArea, labels = c("No", "Yes"))
df$Hydroxymethyl_5_furancarboxylicacidArea_2 <- factor(df$Hydroxymethyl_5_furancarboxylicacidArea_2, labels = c("No", "Yes"))
df$BMIfactor <- factor(df$BMIfactor, labels = c("<40", ">=40and<50", ">=50"))

#creating list of predictor names
pred.start.min <- which(colnames(df) == "BMIfactor"); pred.start.min
pred.stop.min <- which(colnames(df) == "ErythritolArea"); pred.stop.min
pred.names.min <- colnames(df)[pred.start.min:pred.stop.min]

#partition data into training and test (65%/35%)
set.seed(2)
n = floor(nrow(df)*0.65)
train_ind = sample(seq_len(nrow(df)), size = n)
trainingset = df[train_ind,]
testingset = df[-train_ind,]

#specifying that I want to use the leave one out cross-
#validation method and use "random" as search for elasticnet
tcontrol <- trainControl(method = "LOOCV", search = "random", classProbs = TRUE)

#training model
elastic_model1 <- train(as.matrix(trainingset[, pred.names.min]),
                        trainingset$hpresponse1, data = trainingset,
                        method = "glmnet", trControl = tcontrol)

After I run the last chunk of code, I end up with this error:

Error in { : task 1 failed - "error in evaluating the argument 'x' in selecting a method for function 'as.matrix': object of invalid type "character" in 'matrix_as_dense()'"
In addition: There were 50 or more warnings (use warnings() to see the first 50)

I tried removing the "as.matrix" argument:

elastic_model1 <- train(trainingset[, pred.names.min], trainingset$hpresponse1,
                        data = trainingset, method = "glmnet", trControl = tcontrol)

It still produces the same error and warnings. When I tried to make none of the predictors factors (but kept the outcome as a factor), this is the error I get:

Error: At least one of the class levels is not a valid R variable name; This will cause errors when class probabilities are generated because the variables names will be converted to X0, X1 . Please use factor levels that can be used as valid R variable names (see ?make.names for help).

How can I fix this?
How can I use my predictors (both the numeric and categorical ones) without producing an error?
glmnet does not handle factors well. The current recommendation is to dummy-code the factors and re-code to numeric where possible: Using LASSO in R with categorical variables
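As a minimal base-R sketch of that dummy-coding (variable names borrowed from the question, with made-up values): model.matrix() expands each factor into 0/1 indicator columns, producing the fully numeric design matrix that glmnet expects:

```r
# Toy data: one factor predictor, one numeric predictor, a factor outcome.
df <- data.frame(
  BMIfactor = factor(c("<40", ">=40and<50", ">=50", "<40")),
  age       = c(10, 20, 30, 40),
  outcome   = factor(c("group1.2", "group3.4", "group1.2", "group3.4"))
)

# model.matrix() dummy-codes the factor; [, -1] drops the intercept column,
# since glmnet adds its own intercept.
x <- model.matrix(outcome ~ BMIfactor + age, data = df)[, -1]
y <- df$outcome

is.numeric(x)  # the design matrix is numeric, so glmnet/caret can consume it
```

The resulting x has one column per non-reference factor level plus one per numeric predictor, and can be passed as the x argument to glmnet() or caret::train().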
Error with RandomForest in R because of "too many categories"
I'm trying to train an RF model in R, but when I try to define the model:

rf <- randomForest(labs ~ ., data = as.matrix(dd.train))

it gives me the error:

Error in randomForest.default(m, y, ...) : Can not handle categorical predictors with more than 53 categories.

Any idea what it could be? And no, before you say "You have some categorical variable with more than 53 categories": no, all variables but labs are numeric.

Tim Biegeleisen: Read the last line of my question and you will see why it is not the same as the one you are linking!
Edited to address followup from OP

I believe using as.matrix in this case implicitly creates factors. It is also not necessary for this package; you can keep the data as a data frame, but you will need to make sure that any unused factor levels are dropped, using droplevels (or something similar). There are many reasons an unused factor level may be in your data set, but a common one is a dropped observation. Below is a quick example that reproduces your error:

library('randomForest')

#making a toy data frame
x <- data.frame('one'   = c(1,1,1,1,1,seq(50)),
                'two'   = c(seq(54),NA),
                'three' = seq(55),
                'four'  = seq(55))
x$one <- as.factor(x$one)
x <- na.omit(x) #getting rid of an NA. Note this removes the whole row.

randomForest(one ~., data = as.matrix(x)) #your first error
randomForest(one ~., data = x)            #your second error

x <- droplevels(x)
randomForest(one ~., data = x)            #OK
How to use predict from a model stored in a list in R?
I have a dataframe dfab that contains 2 columns that I used as arguments to generate a series of linear models, as follows:

models = list()
for (i in 1:10){
    models[[i]] = lm(fc_ab10 ~ (poly(nUs_ab, i)), data = dfab)
}

dfab has 32 observations and I want to predict fc_ab10 for only one value. I thought of doing so:

newdf = data.frame(newdf = nUs_ab)
newdf[] = 0
newdf[1,1] = 56
prediction = predict(models[[1]], newdata = newdf)

First I tried writing newdf as a data frame with only one position, but since there are 32 observations in the dataset on which the model was built, I thought I had to provide at least 32 points as well. I don't think this is necessary, though. Every time I run that piece of code I am given the following error:

Error: variable 'poly(nUs_ab, i)' was fitted with type "nmatrix.1" but type "numeric" was supplied.
In addition: Warning message:
In Z/rep(sqrt(norm2[-1L]), each = length(x)) :
  longer object length is not a multiple of shorter object length

I thought all I needed to use predict was an lm model and the predictor (the number 56) given in a column-named data frame. Obviously, I am mistaken. How can I fix this issue? Thanks.
newdf should be a data.frame with column name nUs_ab, otherwise R won't be able to know which column to operate upon (i.e., generate the prediction design matrix). So the following code should work newdf = data.frame(nUs_ab = 56) prediction = predict(models[[1]], newdata = newdf)
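A minimal runnable sketch of that fix, with made-up data standing in for dfab: the newdata data frame needs exactly one row, and its column name must match the predictor name used in the model formula:

```r
# Hypothetical stand-ins for dfab's two columns (32 observations, as in the question).
set.seed(1)
dfab <- data.frame(nUs_ab = 1:32, fc_ab10 = (1:32)^2 + rnorm(32))

# Fit the same series of polynomial models.
models <- list()
for (i in 1:10) {
  models[[i]] <- lm(fc_ab10 ~ poly(nUs_ab, i), data = dfab)
}

# One-row newdata whose column name matches the predictor in the formula.
newdf <- data.frame(nUs_ab = 56)
prediction <- predict(models[[1]], newdata = newdf)
length(prediction)  # a single predicted value
```

No padding to 32 rows is needed: predict() builds the design matrix from whatever rows newdata contains, as long as the variable names line up.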
Kaggle Digit Recognizer Using SVM (e1071): Error in predict.svm(ret, xhold, decision.values = TRUE) : Model is empty
I am trying to solve the Digit Recognizer competition in Kaggle and I ran into this error. I loaded the training data and adjusted its values by dividing by the maximum pixel value, which is 255. After that, I tried to build my model. Here goes my code:

Given_Training_data <- get(load("Given_Training_data.RData"))
Given_Testing_data <- get(load("Given_Testing_data.RData"))

Maximum_Pixel_value = max(Given_Training_data)
Tot_Col_Train_data = ncol(Given_Training_data)

training_data_adjusted <- Given_Training_data[, 2:ncol(Given_Training_data)]/Maximum_Pixel_value
testing_data_adjusted <- Given_Testing_data[, 2:ncol(Given_Testing_data)]/Maximum_Pixel_value

label_training_data <- Given_Training_data$label
final_training_data <- cbind(label_training_data, training_data_adjusted)

smp_size <- floor(0.75 * nrow(final_training_data))
set.seed(100)
training_ind <- sample(seq_len(nrow(final_training_data)), size = smp_size)

training_data1 <- final_training_data[training_ind, ]
train_no_label1 <- as.data.frame(training_data1[,-1])
train_label1 <- as.data.frame(training_data1[,1])

svm_model1 <- svm(train_label1, train_no_label1) #This line is throwing an error

Error:

Error in predict.svm(ret, xhold, decision.values = TRUE) : Model is empty!

Please kindly share your thoughts. I am not looking for an answer, but rather some idea that guides me in the right direction, as I am in a learning phase. Thanks.

Update to the question:

trainlabel1 <- train_label1[sapply(train_label1, function(x) !is.factor(x) | length(unique(x))>1)]
trainnolabel1 <- train_no_label1[sapply(train_no_label1, function(x) !is.factor(x) | length(unique(x))>1)]
svm_model2 <- svm(trainlabel1, trainnolabel1, scale = F)

It didn't help either.
Read the manual (https://cran.r-project.org/web/packages/e1071/e1071.pdf):

svm(x, y = NULL, scale = TRUE, type = NULL, ...)

Arguments:

x: a data matrix, a vector, or a sparse matrix (object of class Matrix provided by the Matrix package, or of class matrix.csr provided by the SparseM package, or of class simple_triplet_matrix provided by the slam package).
y: a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression).

Therefore, the main problems are that your call to svm is switching the data matrix and the response vector, and that you are passing the response vector as integer, resulting in a regression model. Furthermore, you are also passing the response vector as a single-column data frame, which is not exactly how you are supposed to do it. Hence, if you change the call to:

svm_model1 <- svm(train_no_label1, as.factor(train_label1[, 1]))

it will work as expected. Note that training will take some minutes to run.

You may also want to remove features that are constant (where the values in the respective column of the training data matrix are all identical), since these will not influence the classification.
I don't think you need to scale the data manually, since svm will do it itself, unlike most neural network packages. You can also use the formula version of svm instead of the matrix-and-vector form:

svm(result ~ ., data = your_training_set)

In your case, I guess you want to make sure the result is used as a factor, because you want a label like 1, 2, 3, not 1.5467, which would be a regression. I can debug it if you can share the data: Given_Training_data.RData
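A minimal sketch of the formula interface (using the built-in iris data in place of the digit data, which isn't shared here): because the response column is a factor, svm() fits a classifier rather than a regression.

```r
library(e1071)  # assumes the e1071 package is installed

# iris$Species is already a factor, so svm() runs in classification mode.
svm_model <- svm(Species ~ ., data = iris)

# Predictions on the feature columns come back as a factor of class labels.
pred <- predict(svm_model, iris[, -5])
is.factor(pred)
```

Had Species been stored as an integer code instead, svm() would have silently fitted eps-regression, which is the "1.5467 instead of a label" situation described above.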
Subscript out of bound error in predict function of randomforest
I am using random forest for prediction, and in the predict(fit, test_feature) line I get the following error. Can someone help me to overcome this? I did the same steps with another dataset and had no error, but I get an error here.

Error:

Error in x[, vname, drop = FALSE] : subscript out of bounds

training_index <- createDataPartition(shufflled[,487], p = 0.8, times = 1)
training_index <- unlist(training_index)

train_set <- shufflled[training_index,]
test_set <- shufflled[-training_index,]

accuracies <- c()
k = 10
n = floor(nrow(train_set)/k)

for(i in 1:k){
    sub1 <- ((i-1)*n+1)
    sub2 <- (i*n)
    subset <- sub1:sub2
    train <- train_set[-subset, ]
    test <- train_set[subset, ]
    test_feature <- test[ ,-487]
    True_Label <- as.factor(test[ ,487])
    fit <- randomForest(x = train[ ,-487], y = as.factor(train[ ,487]))
    prediction <- predict(fit, test_feature) #The error line
    correctlabel <- prediction == True_Label
    t <- table(prediction, True_Label)
}
I had a similar problem a few weeks ago. To get around it, you can do this:

df$label <- factor(df$label)

Instead of as.factor, try just the generic factor function. Also, try naming your label variable first.
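A small base-R sketch of why re-calling factor() can matter here (toy labels, not the asker's data): subsetting a factor keeps the original level set, including levels that no longer occur, while factor() rebuilds the levels from the values actually present:

```r
labels_all <- factor(c("a", "b", "c"))
labels_sub <- labels_all[1:2]   # subsetting keeps the now-unused level "c"

nlevels(labels_sub)             # still 3 levels
nlevels(factor(labels_sub))     # factor() recomputes the levels: only 2 remain
```

Levels that exist in the model but never appear in the data are one way randomForest's bookkeeping can end up out of sync with what predict() sees.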
Are there identical column names in your training and validation x? I had the same error message and solved it by renaming my column names because my data was a matrix and their colnames were all empty, i.e. "".
Your question is not very clear; anyway, I'll try to help. First of all, check your data to see the distribution of levels of your various predictors and outcomes. You may find that some of your predictor or outcome levels are very highly skewed, or that some are very rare. I got that error when I was trying to predict a very rare outcome with a heavily tuned random forest, so some of the predictor levels were not actually in the training data; thus a factor level appeared in the test data that the training data considered out of bounds. Alternatively, check the names of your variables before calling predict() to make sure they match. Without your data files, it's hard to tell why your first example worked. For example, you can try:

names(test) <- names(train)
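As a small illustration of both mismatches at once (toy data, hypothetical column names): predict() looks up columns in the new data by the variable names stored in the fitted model, so the test columns must carry the same names, and any factor column should be forced onto the training levels:

```r
train <- data.frame(x = c(1, 2, 3, 4), g = factor(c("a", "b", "a", "b")))
test  <- data.frame(x = c(5, 6),       grp = factor(c("a", "c")))  # wrong name, unseen level

# Align the column names with the training data...
names(test) <- names(train)

# ...and force the factor onto the training levels; the unseen "c"
# becomes NA instead of an out-of-bounds level.
test$g <- factor(test$g, levels = levels(train$g))

identical(names(test), names(train))
levels(test$g)  # same level set as the training factor
```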
Add the expression dimnames(test_feature) <- NULL before prediction <- predict(fit, test_feature)