SVM prediction is dropping values - R

I'm running an SVM model on a dataset. Fitting the model on the training data works fine, but when I run it on the prediction/test data, it seems to be dropping rows for some reason: when I try to add 'pred_SVM' back into the dataset, the lengths are different.
Below is my code
#SVM MODEL
SVM_swim <- svm(racetime_mins ~ event_date + event_month + year + event_id +
                  gender + place + distance + New_Condition + raceNo_Updated +
                  handicap_mins + points + Wind_Speed_knots + Air_Temp_Celsius +
                  Water_Temp_Celsius + Wave_Height_m,
                data = SVMTrain, kernel = "linear")
summary(SVM_swim)
#Predict Race_Time Using Test Data
pred_SVM <- predict(SVM_swim, SVMTest, type ="response")
View(pred_SVM)
#Add predicted Race_Times back into the test dataset.
SVMTest$Pred_RaceTimes<- pred_SVM
View(SVMTest) #Returns 13214 rows
View(pred_SVM) #Returns 12830
Error in `$<-.data.frame`(`*tmp*`, "Pred_RaceTimes", value = c(`2` = 27.1766438249356, :
  replacement has 12830 rows, data has 13214

As mentioned in the comment, you need to get rid of the NA values in your dataset. svm() handles them for you, which is why the pred_SVM output is calculated without the NA rows.
To test whether there are NA values in your data, just run: sum(is.na(SVMTest))
I am pretty sure you will see a number greater than zero.
Before starting to build your SVM model, get rid of all NA values with
dataset <- dataset[complete.cases(dataset), ]
Then, after separating your data into train and test sets, you can run:
SVM_swim <- svm(.....,data = SVMTrain, kernel='linear')
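If you prefer to keep the incomplete rows in the test set, you can also re-align the predictions by row name instead of by position. The sketch below reproduces the same behaviour with base lm() and the built-in airquality data (which contains NAs), so it runs without the e1071 package; predict() is called with na.action = na.omit to mimic what svm()'s predict method does by default.

```r
# airquality contains NAs; fit on the complete rows, predict on the rest
cc    <- complete.cases(airquality)
train <- airquality[cc, ]
test  <- airquality[!cc, ]   # every row here has at least one NA

fit <- lm(Ozone ~ Wind + Temp + Solar.R, data = train)

# na.omit silently drops rows whose predictors contain NA,
# which is exactly how the prediction vector ends up shorter
pred <- predict(fit, test, na.action = na.omit)
length(pred) < nrow(test)    # TRUE: some rows were dropped

# Re-align by row name instead of position before adding a column
test$Pred_Ozone <- NA
test[names(pred), "Pred_Ozone"] <- pred
```

The names of the prediction vector are the row names of the rows that survived na.omit, which is what makes the name-based assignment safe.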

Related

subscript out of bounds Error, Random Forest Model

I'm trying to use a random forest model to predict Gender based on Height, Weight and Number of siblings. I've gotten the data from a much larger data set that contains dozens of variables, but I've cleaned it into this "clean" data.frame with NA values omitted and only the 4 variables I care about, the last column being Gender.
I've tried fiddling with the code and searching everywhere but I can't find a concrete fix.
Here's the code:
ind <- sample(nrow(clean),0.8*nrow(clean))
train <- clean[ind,]
test <- clean[-ind,]
rf <- randomForest(Gender ~ ., data = train[,1:4], ntree = 20)
pred <- predict(rf, newdata = test[,-c(length(test))])
cm <- table(test$Gender, pred)
cm
and here's the output:
Error in `[.default`(table(observed = y, predicted = out.class), levels(y), : subscript out of bounds
Traceback:
1. randomForest(Gender ~ ., data = train[, 1:4], ntree = 20)
2. randomForest.formula(Gender ~ ., data = train[, 1:4], ntree = 20)
3. randomForest.default(m, y, ...)
4. table(observed = y, predicted = out.class)[levels(y), levels(y)]
5. `[.table`(table(observed = y, predicted = out.class), levels(y),
. levels(y))
6. NextMethod()
The problem is likely that your test data contains a factor level that was not present in your training data, so when the model goes to assign the outcome, it has no basis to do so.
It is impossible to say for sure without sample data, but that is the most likely scenario. Try setting a seed with set.seed(3), then change the seed number (set.seed(28), and so on) a few times to see whether you find a split where you do not get the error.
Compare a data frame that triggers the error with one that does not, to see what is missing.
EDIT:
Also, try running str(train) and str(test) to be sure the fields have remained the same. You can share that if you like by editing your post.
If any of the columns are factors with under-represented levels (say a factor has 10 levels but only 8 appear in the training set, with 9 or 10 in the test set), that can be a problem. The levels should be balanced if you are trying to create a predictor for all possible outcomes.
If nothing else works, you can set a seed and remove predictors one at a time until it runs correctly, then look to see how the train and test sets are different in that removed column.
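As a quick diagnostic, you can list the values that occur in the test rows but never in the training rows. A sketch with made-up data (Country is a hypothetical predictor; note that levels() alone won't reveal this, because subsetting a factor keeps all original levels):

```r
# Hypothetical data: the value "C" of Country never appears in the training rows
clean <- data.frame(
  Height  = c(170, 180, 165, 175, 160, 185),
  Weight  = c(65, 80, 55, 75, 50, 90),
  Country = factor(c("A", "A", "B", "B", "A", "C")),
  Gender  = factor(c("M", "M", "F", "F", "F", "M"))
)
train <- clean[1:4, ]   # never sees Country == "C"
test  <- clean[5:6, ]

# Values present in test but absent from train
setdiff(unique(as.character(test$Country)),
        unique(as.character(train$Country)))
#> [1] "C"
```

Running this check over every factor column of your real data should point you at the offending variable faster than re-sampling seeds.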

Getting a warning using predict function in R

I have a data set of 400 observations which I divided in 2 separate sets one for training (300 observations) and one for testing (100 observations). I am trying to create a step function regression, the problem is once I try to use the model in order to predict values form the test set I get a warning:
Warning message: 'newdata' had 100 rows but variables found have 300 rows
The variable I am trying to predict is Income and the explanatory variable is called Age.
This is the code:
fit = lm(Income ~ cut(training$Age, 4), data = training)
predict(fit,test)
Instead of getting 100 predictions based on the test data, I get a warning and 300 predictions based on the training data.
I read about other people having this question, and usually the answer has to do with the name of the variable being different in the data set and in the model, but I don't think this is the problem here, because with a regular simple regression I don't get a warning:
lm.fit=lm(Income~Age,data = training)
predict(lm.fit,test)
There are a number of problems here, so it will take several steps to get to a good answer. You did not provide data, so I will use other data that produces the same kind of error message. The built-in data set iris has 4 continuous variables; I will arbitrarily select two for use here, then apply code just like yours:
MyData = iris[,3:4]
set.seed(2017) # for reproducibility
T = sample(150, 100)
training = MyData[ T, ]
test = MyData[-T, ]
fit=lm(Petal.Width ~ cut(training$Petal.Length, 4), data=training)
predict(fit,test)
Warning message:
'newdata' had 50 rows but variables found have 100 rows
So I am getting the same type of error.
cut changes the continuous variable Petal.Length into a factor with 4 levels. You built your model on the factor, but when you try to predict the new values, you just passed in test, which still has the continuous values (Age in your data; Petal.Length in mine). To evaluate the predict statement, R needs to evaluate cut(test$Petal.Length, 4) as part of the process. Look at what that means.
C1 = cut(training$Petal.Length, 4)
C2 = cut(test$Petal.Length, 4)
levels(C1)
[1] "(0.994,2.42]" "(2.42,3.85]" "(3.85,5.28]" "(5.28,6.71]"
levels(C2)
[1] "(1.09,2.55]" "(2.55,4]" "(4,5.45]" "(5.45,6.91]"
The levels are completely different. There is no way that your model can be used on these different levels. You can see the bin boundaries for C1 so it is tempting to just use those boundaries and partition the test data.
levels(C1)
[1] "(0.994,2.42]" "(2.42,3.85]" "(3.85,5.28]" "(5.28,6.71]"
CutPoints = c(0.994, 2.42, 3.85, 5.28, 6.71)
C2 = cut(test$Petal.Length, breaks=CutPoints, include.lowest=TRUE)
But under careful examination, you will see that this did not work. Print out a relevant piece of the data:
C2[42:46]
[1] (5.28,6.71] (5.28,6.71] <NA> (3.85,5.28] (3.85,5.28]
C2[44] is undefined. Why? One of the values in the test set fell outside the range of values for the training set, so it does not belong in any bin.
test$Petal.Length[44]
[1] 6.9
So what you really need to do is impose no lower limit or upper limit.
## cut the training data to get cut points
C1 = cut(training$Petal.Length, 4)
levels(C1)
[1] "(0.994,2.42]" "(2.42,3.85]" "(3.85,5.28]" "(5.28,6.71]"
CutPoints = c(-Inf, 2.42, 3.85, 5.28, Inf)
It may be easiest to just make new data.frames with the binned data
Binned.training = training
Binned.training$Petal.Length = cut(training$Petal.Length, CutPoints)
Binned.test = test
Binned.test$Petal.Length = cut(test$Petal.Length, CutPoints)
fit=lm(Petal.Width ~ Petal.Length, data=Binned.training)
predict(fit,Binned.test)
## No errors
This will work for your test data and any data that you get in the future.
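The same recipe can be wrapped in a small helper (bin_with_train_breaks is a name of my own invention, not a standard function) that derives equal-width breaks from the training variable only, replaces the outer boundaries with -Inf/Inf, and bins both sets consistently:

```r
# Bin train and test with breaks derived from the training data only
bin_with_train_breaks <- function(train_x, test_x, n_bins = 4) {
  brks <- seq(min(train_x), max(train_x), length.out = n_bins + 1)
  brks[1] <- -Inf                 # open-ended outer bins, so no
  brks[length(brks)] <- Inf       # future value falls outside every bin
  list(train = cut(train_x, brks), test = cut(test_x, brks))
}

# Same iris split as above
MyData <- iris[, 3:4]
set.seed(2017)
idx <- sample(150, 100)
b <- bin_with_train_breaks(MyData$Petal.Length[idx], MyData$Petal.Length[-idx])

identical(levels(b$train), levels(b$test))  # TRUE: same factor levels
anyNA(b$test)                               # FALSE: no out-of-range NAs
```

Because both calls to cut() use the same breaks vector, the factor levels are guaranteed to match, which is exactly what predict needs.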

Used Predict function on New Dataset with different Columns

Using "stackloss" data in R, I created a regression model as seen below:
stackloss.lm = lm(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,data=stackloss)
stackloss.lm
newdata = data.frame(Air.Flow=stackloss$Air.Flow, Water.Temp= stackloss$Water.Temp, Acid.Conc.=stackloss$Acid.Conc.)
Suppose I get a new data set and would need predict its "stack.loss" based on the previous model as seen below:
#suppose I need to used my model on a new set of data
stackloss$predict1[-1] <- predict(stackloss.lm, newdata)
I get this error:
Error in `$<-.data.frame`(`*tmp*`, "predict1", value = numeric(0)) :
replacement has 0 rows, data has 21
Is there a way to use the predict function on a different data set with the same columns but different rows?
Thanks in advance.
You can predict into a new data set of whatever length you want, you just need to make sure you assign the results to an existing vector of appropriate size.
This line causes a problem:
stackloss$predict1[-1] <- predict(stackloss.lm, newdata)
because you can't assign to and subset a non-existent vector at the same time. This also doesn't work:
dd <- data.frame(a=1:3)
dd$b[-1]<-1:2
The stackloss data you used to fit the model will always have the same number of rows, so re-assigning new values to that data.frame doesn't make sense. If you want to use a smaller dataset to predict on, that's fine:
stackloss.lm = lm(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,data=stackloss)
newdata = head(data.frame(Air.Flow=stackloss$Air.Flow, Water.Temp= stackloss$Water.Temp, Acid.Conc.=stackloss$Acid.Conc.),5)
predict(stackloss.lm, newdata)
1 2 3 4 5
38.76536 38.91749 32.44447 22.30223 19.71165
Since the result has the same number of values as newdata has rows (n = 5), it makes sense to attach them to newdata. It would not make sense to attach them to stackloss, which has a different number of rows (n = 21):
newdata$predict1 <- predict(stackloss.lm, newdata)
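To answer the question directly: yes. Any data frame works as newdata, whatever its number of rows, as long as its column names match the ones used in the formula. A minimal sketch (new_obs is a made-up observation, not from the stackloss data):

```r
# Built-in stackloss data, same model as in the question
stackloss.lm <- lm(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
                   data = stackloss)

# A brand-new observation, never seen during fitting
new_obs <- data.frame(Air.Flow = 60, Water.Temp = 20, Acid.Conc. = 85)
predict(stackloss.lm, new_obs)   # one prediction for the one row
```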

Multiple regression predicting using R, predicting a data.frame

I have been given data in a data.frame called petrol which has 125 rows and the following columns:
hydrcarb, tanktemp, disptemp, tankpres, disppres, sqrtankpres, sqrdisppres
I have been asked to delete the last 25 rows from petrol and fit a model on the first 100 rows, where hydrcarb is the response variable and the rest are the explanatory variables, then use the fitted model to predict for the remaining 25.
This is what I have done so far:
#make a new table that only contains first 100
petrold <- petrol[-101:-125,]
petrold
#FITTING THE MODEL
petrol.lmB <- lm(hydrcarb~ tanktemp + disptemp + tankpres + disppres + sqrtankpres + sqrdisppres, data=petrol)
#SELECT LAST 25 ROWS FROM PETROL
last25rows <-petrol[101:125,c('tanktemp','disptemp','tankpres','disppres','sqrtankpres','sqrdisppres')]
#PREDICT LAST 25 ROWS
predict(petrold,last25rows[101,c('tanktemp','disptemp','tankpres','disppres','sqrtankpres','sqrdisppres')])
I know I have done something wrong for my predict command since R gives me the error message:
Error in UseMethod("predict") :
no applicable method for 'predict' applied to an object of class "data.frame"
So I am not sure how to get predicted values for hydrcarb for 25 different sets of data.
Alex A. already pointed out that predict expects a model as its first argument. In addition, you should pass predict all the rows you want to predict at once. Besides, I recommend subsetting your data frame "on the fly" instead of creating unnecessary copies. Lastly, there's a shorter way to write the formula you pass to lm:
# data for example
data(Seatbelts)
petrol <- as.data.frame(Seatbelts[1:125, 1:7])
colnames(petrol) <- c("hydrcarb", "tanktemp", "disptemp", "tankpres", "disppres", "sqrtankpres", "sqrdisppres")
# fit model using observations 1:100
petrol.lmB <- lm(hydrcarb ~ ., data = petrol[1:100,])
#predict last 25 rows
predict(petrol.lmB, newdata = petrol[101:125,])

Use of randomforest() for classification in R?

I originally had a data frame composed of 12 columns and N rows. The last column is my class (0 or 1). I had to convert my entire data frame to numeric with
training <- sapply(training.temp,as.numeric)
But then I thought I needed the class column to be a factor column to use the randomForest() tool as a classifier, so I did
training[,"Class"] <- factor(training[,ncol(training)])
I proceed to creating the tree with
training_rf <- randomForest(Class ~., data = trainData, importance = TRUE, do.trace = 100)
But I'm getting two errors:
1: In Ops.factor(training[, "Status"], factor(training[, ncol(training)])) :
<= this is not relevant for factors (roughly translated)
2: In randomForest.default(m, y, ...) :
The response has five or fewer unique values. Are you sure you want to do regression?
I would appreciate it if someone could point out the formatting mistake I'm making.
Thanks!
So the issue is actually quite simple. It turns out my training data was an atomic vector. So it first had to be converted as a data frame. So I needed to add the following line:
training <- as.data.frame(training)
Problem solved!
First, your coercion to a factor is not working because sapply() returns a matrix, not a data frame. Second, you should always use indexing when specifying a RF model. Here are changes to your code that should make it work.
training <- as.data.frame(sapply(training.temp, as.numeric))
training[,"Class"] <- as.factor(training[,"Class"])
training_rf <- randomForest(x=training[,1:(ncol(training)-1)], y=training[,"Class"],
importance=TRUE, do.trace=100)
# You can also coerce to a factor directly in the model statement
training_rf <- randomForest(x=training[,1:(ncol(training)-1)], y=as.factor(training[,"Class"]),
importance=TRUE, do.trace=100)
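The root cause is visible in a couple of lines (a sketch with made-up data): sapply() over a data frame simplifies its result to a matrix, and a matrix column cannot hold a factor, which is why the as.data.frame() step matters before the factor conversion.

```r
# Tiny stand-in for the question's data, all columns read in as character
training.temp <- data.frame(x1 = c("1", "2", "3"),
                            x2 = c("4", "5", "6"),
                            Class = c("0", "1", "0"),
                            stringsAsFactors = FALSE)

m <- sapply(training.temp, as.numeric)
is.matrix(m)                       # TRUE: sapply simplified to a matrix

training <- as.data.frame(m)       # convert back before creating the factor
training$Class <- factor(training$Class)
is.factor(training$Class)          # TRUE: the response is now a real factor
```

With the response stored as a factor in a data frame, randomForest() will run in classification mode rather than warning about a regression on few unique values.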
