I came across some code while studying machine learning in R, using the Vanderbilt Titanic dataset available HERE. It is part of a class without a live instructor or other resources I could use to answer my own questions. The ultimate goal of the exercise is to predict survival from the other observed data. We have split the data into training and test sets, and running str(training) returns:
> str(training)
'data.frame': 917 obs. of 14 variables:
$ pclass : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
$ survived : Factor w/ 2 levels "0","1": 2 2 1 1 2 2 1 2 2 2 ...
$ name : chr "Allen, Miss. Elisabeth Walton" "Allison, Master. Hudson Trevor" "Allison, Miss. Helen Loraine" "Allison, Mrs. Hudson J C (Bessie Waldo Daniels)" ...
$ sex : Factor w/ 2 levels "female","male": 1 2 1 1 2 1 2 1 1 1 ...
$ age : num 29 0.92 2 25 48 63 71 18 24 26 ...
$ sibsp : int 0 1 1 1 0 1 0 1 0 0 ...
$ parch : int 0 2 2 2 0 0 0 0 0 0 ...
$ ticket : chr "24160" "113781" "113781" "113781" ...
$ fare : num 211.3 151.6 151.6 151.6 26.6 ...
$ cabin : chr "B5" "C22 C26" "C22 C26" "C22 C26" ...
$ embarked : Factor w/ 4 levels "","C","Q","S": 4 4 4 4 4 4 2 2 2 4 ...
$ boat : chr "2" "11" "" "" ...
$ body : int NA NA NA NA NA NA 22 NA NA NA ...
$ home.dest: chr "St Louis, MO" "Montreal, PQ / Chesterville, ON" "Montreal, PQ / Chesterville, ON" "Montreal, PQ / Chesterville, ON" ...
My question is twofold. The first step in this process was to label and apply a function to the "factor variables" like so:
factor_vars <- c('pclass', 'sex', 'embarked', 'survived')
training[factor_vars] <- lapply(training[factor_vars], function(x) as.factor(x))
I understand the factor_vars assignment here, as those variables are clearly labelled as Factor when calling str(training). My question is: why are we running the lapply() call at all? It appears to simply re-classify the factor variables as factors. What is really happening in the training[factor_vars] <- lapply(training[factor_vars], function(x) as.factor(x)) line of code?
The next step was to impute the missing values of the age variable.
impute_variables <- c('pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked')
mice_model <- mice(training[,impute_variables], method='rf')
Why was that specific subset of variables selected as impute_variables? What was the basis for including things like sex but not boat?
Why are we subsetting the training data within the mice() function to only act on the impute_variables columns?
The output returned by mice_model is:
iter imp variable
1 1 age
1 2 age
1 3 age
1 4 age
1 5 age
2 1 age
2 2 age
2 3 age
2 4 age
2 5 age
3 1 age
3 2 age
3 3 age
3 4 age
3 5 age
4 1 age
4 2 age
4 3 age
4 4 age
4 5 age
5 1 age
5 2 age
5 3 age
5 4 age
5 5 age
Where in any of the above code did we explicitly tell the mice() function to impute age?
Short Answer: the instructor of the course routinely gives ambiguous and confusing examples.
Long Answer: As LAP pointed out, mice() imputes any variable fed into it that has missing values. In this particular case, of the columns passed to mice(), only a single one had ANY missing values - age; the other impute_variables serve as predictors for that imputation. Why the instructor chose that particular subset of predictors is anybody's guess. He did not explain why he was doing so in the book.
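For what it's worth, a quick sanity check along these lines (using the training data frame and impute_variables from the question) would confirm that age is the only one of those columns with missing values:
# Count the NAs per selected column; only age should be non-zero
colSums(is.na(training[impute_variables]))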
In R, how can I make subcolumns (data frames nested inside a data frame) sit at the same column level?
For example, the attributes of my dataframe are these:
$ perc0: num 14.4 16.9 31.1 37.7
$ z :'data.frame': 4 obs. of 1 variable:
..$ x0: Factor w/ 254 levels "Boy.Numbercigs.0",..: 192 193 239 240
$ v :'data.frame': 4 obs. of 3 variables:
..$ GENDER: Factor w/ 2 levels "Boy","Girl": 1 2 1 2
..$ X2 : Factor w/ 1 level "EDUCATION": 1 1 1 1
..$ X3 : Factor w/ 2 levels "High School",..: 1 1 2 2
As you can see, in order to reach column x0 I must use df$z$x0, or for X2, df$v$X2. How can I "flatten" the columns of this dataframe so that I can call x0 or X2 directly, e.g. df$x0, df$X2?
I know that I can take every column and assign it to a new dataframe one by one, but I hope a function exists that does this directly.
Printing the dataframe (e.g. by typing df) gives:
perc0 x0 v.GENDER v.X2 v.X3
1 14.39 Boy.EDUCATION.High School Boy EDUCATION High School
2 16.86 Girl.EDUCATION.High School Girl EDUCATION High School
3 31.06 Boy.EDUCATION.Secondary Boy EDUCATION Secondary
4 37.69 Girl.EDUCATION.Secondary Girl EDUCATION Secondary
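One possible approach (a sketch, assuming the df structure shown above) is to rebuild the data frame, since data.frame() promotes nested data frame columns to the top level, and then strip the parent prefix from the resulting names:
# Unnest: produces columns named perc0, z.x0, v.GENDER, v.X2, v.X3
flat <- do.call(data.frame, df)
# Drop the "z." / "v." prefixes so the columns can be addressed directly
names(flat) <- sub("^[^.]+\\.", "", names(flat))
flat$x0   # instead of df$z$x0
flat$X2   # instead of df$v$X2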
>data
ACC_ID REG PRBLT OPP_TYPE_DESC PARENT_ID ACCT_NM INDUSTRY_ID BUY PWR REV QTY
11316456 No 90 A 2122628569 INF 7379 10190.82 6500 1
11456476 Yes 1 I 2385888136 Module 9199 17441.72 466.5 31
13453245 No 10 D 2122628087 Wooden 3559 44279.21 2500 500
15674568 No 1 I 2702074521 Nine 7379 183218.8 25.91 1
Above is the given dataset. When I load it in R, I have the following structure:
>str(data)
$ ACC_ID : int 11316974 11620677 11865091 ...
$ REG : Factor w/ 2 levels "No ","Yes ": 1 2 1 1 1 1 1 1 1 1 ...
$ PRBLT : int 90 1 10 1 30 30 10 1 60 1 ...
$ OPP_TYPE_DESC : Factor w/ 3 levels "D",..: 3 2 1 2 1 1 1 3 3 2 ...
$ PARENT_ID : num 2.12e+09 2.39e+09 2.12e+09 2.70e+09 2.12e+09 ...
$ ACCT_NM : Factor w/ 20 levels "Marketing Vertical",..: 10 15 20 17 8 16 2 14 7 11 ...
$ INDUSTRY_ID : int 7379 9199 3559 7379 2711 7374 7371 8742 4813 2111 ..
$ BUY PWR : num 1014791 17442 ...
$ REV : num 6500 46617 250000 25564 20000 ...
$ QTY : int 1 31 500 1 6 100 ...
But I would like R to somehow automatically treat the fields below as factors instead of int (with the help of statistical modelling or any other technique), since these are not continuous fields but categorical nominal ones:
ACC_ID
PARENT_ID
INDUSTRY_ID
The REV and QTY columns, on the other hand, should be left as is.
Also, the analysis should not be specific to the data and columns shown here; the logic must be applicable to any dataset (with different columns) that we load into R.
Can there be any method through which this is possible? Any ideas are welcome.
Thank you
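There is no purely statistical way to be certain that a numeric column is an identifier, but a naming-convention heuristic is one common compromise. A minimal sketch follows; the "ID$" pattern is an assumption about your column names, not a general rule:
# Convert every numeric/integer column whose name ends in "ID" to a factor
id_like <- grepl("ID$", names(data), ignore.case = TRUE) & sapply(data, is.numeric)
data[id_like] <- lapply(data[id_like], as.factor)
str(data)   # ACC_ID, PARENT_ID and INDUSTRY_ID should now be factors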
I asked a question about this earlier this morning, but I am deleting that one and posting here with better wording.
I created my first machine learning model using train and test data. I returned a confusion matrix and saw some summary stats.
I would now like to apply the model to new data to make predictions but I don't know how.
Context: Predicting monthly "churn" cancellations. Target variable is "churned" and it has two possible labels "churned" and "not churned".
head(tdata)
months_subscription nvk_medium org_type churned
1 25 none Community not churned
2 7 none Sports clubs not churned
3 28 none Sports clubs not churned
4 18 unknown Religious congregations and communities not churned
5 15 none Association - Professional not churned
6 9 none Association - Professional not churned
Here's me training and testing:
library("klaR")
library("caret")
# import data
test_data_imp <- read.csv("tdata.csv")
# subset only required vars
# had to remove "revenue" since all churned records are 0 (need last price point)
variables <- c("months_subscription", "nvk_medium", "org_type", "churned")
tdata <- test_data_imp[variables]
#training
rn_train <- sample(nrow(tdata),
floor(nrow(tdata)*0.75))
train <- tdata[rn_train,]
test <- tdata[-rn_train,]
model <- NaiveBayes(churned ~., data=train)
# testing
predictions <- predict(model, test)
confusionMatrix(test$churned, predictions$class)
Everything up till here works fine.
Now I have new data, structured and laid out the same way as tdata above. How can I apply my model to this new data to make predictions? Intuitively, I was expecting a new column cbinded onto the data that holds the predicted class for each record.
I tried this:
## prediction ##
# import data
data_imp <- read.csv("pdata.csv")
pdata <- data_imp[variables]
actual_predictions <- predict(model, pdata)
#append to data and output (as head by default)
predicted_data <- cbind(pdata, actual_predictions$class)
# output
head(predicted_data)
This threw errors:
actual_predictions <- predict(model, pdata)
Error in object$tables[[v]][, nd] : subscript out of bounds
In addition: Warning messages:
1: In FUN(1:6433[[4L]], ...) :
Numerical 0 probability for all classes with observation 1
2: In FUN(1:6433[[4L]], ...) :
Numerical 0 probability for all classes with observation 2
3: In FUN(1:6433[[4L]], ...) :
Numerical 0 probability for all classes with observation 3
How can I apply my model to the new data? I'd like a new data frame with a column that holds the predicted class.
**Following a comment, here are head() and str() of the new data for prediction:**
head(pdata)
months_subscription nvk_medium org_type churned
1 26 none Community not churned
2 8 none Sports clubs not churned
3 30 none Sports clubs not churned
4 19 unknown Religious congregations and communities not churned
5 16 none Association - Professional not churned
6 10 none Association - Professional not churned
> str(pdata)
'data.frame': 6433 obs. of 4 variables:
$ months_subscription: int 26 8 30 19 16 10 3 5 14 2 ...
$ nvk_medium : Factor w/ 16 levels "cloned","CommunityIcon",..: 9 9 9 16 9 9 9 3 12 9 ...
$ org_type : Factor w/ 21 levels "Advocacy and civic activism",..: 8 18 18 14 6 6 11 19 6 8 ...
$ churned : Factor w/ 1 level "not churned": 1 1 1 1 1 1 1 1 1 1 ...
This is most likely caused by a mismatch in the encoding of factors in the training data (variable tdata in your case) and the new data used in the predict function (variable pdata), typically that you have factor levels in the test data that are not present in the training data. Consistency in the encoding of the features must be enforced by you, because the predict function will not check it. Therefore, I suggest that you double-check the levels of the features nvk_medium and org_type in the two variables.
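For example, a quick check along these lines (using the tdata and pdata names from the question) would list any levels present in the new data but not in the training data:
# Levels in the prediction data that the model never saw during training
setdiff(levels(pdata$nvk_medium), levels(tdata$nvk_medium))
setdiff(levels(pdata$org_type), levels(tdata$org_type))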
The error message:
Error in object$tables[[v]][, nd] : subscript out of bounds
is raised when evaluating a given feature (the v-th feature) in a data point, in which nd is the numeric value of the factor corresponding to the feature. You also have warnings, indicating that the posterior probabilities for all the cases in data points ("observation") 1, 2, and 3 are all zero, but it is not clear if this is also related to the encoding of the factors...
To reproduce the error that you are seeing, consider the following toy data (from http://amunategui.github.io/binary-outcome-modeling/), which has a set of features somewhat similar to that in your data:
# Data setup
# From http://amunategui.github.io/binary-outcome-modeling/
titanicDF <- read.csv('http://math.ucdenver.edu/RTutorial/titanic.txt', sep='\t')
titanicDF$Title <- as.factor(
  ifelse(grepl('Mr ',  titanicDF$Name), 'Mr',
  ifelse(grepl('Mrs ', titanicDF$Name), 'Mrs',
  ifelse(grepl('Miss', titanicDF$Name), 'Miss', 'Nothing'))))
titanicDF$Age[is.na(titanicDF$Age)] <- median(titanicDF$Age, na.rm=T)
titanicDF$Survived <- as.factor(titanicDF$Survived)
titanicDF <- titanicDF[c('PClass', 'Age', 'Sex', 'Title', 'Survived')]
# Separate into training and test data
inds_train <- sample(1:nrow(titanicDF), round(0.5 * nrow(titanicDF)), replace = FALSE)
Data_train <- titanicDF[inds_train, , drop = FALSE]
Data_test <- titanicDF[-inds_train, , drop = FALSE]
with:
> str(Data_train)
'data.frame': 656 obs. of 5 variables:
$ PClass : Factor w/ 3 levels "1st","2nd","3rd": 1 3 3 3 1 1 3 3 3 3 ...
$ Age : num 35 28 34 28 29 28 28 28 45 28 ...
$ Sex : Factor w/ 2 levels "female","male": 2 2 2 1 2 1 1 2 1 2 ...
$ Title : Factor w/ 4 levels "Miss","Mr","Mrs",..: 2 2 2 1 2 4 3 2 3 2 ...
$ Survived: Factor w/ 2 levels "0","1": 2 1 1 1 1 2 1 1 2 1 ...
> str(Data_test)
'data.frame': 657 obs. of 5 variables:
$ PClass : Factor w/ 3 levels "1st","2nd","3rd": 1 1 1 1 1 1 1 1 1 1 ...
$ Age : num 47 63 39 58 19 28 50 37 25 39 ...
$ Sex : Factor w/ 2 levels "female","male": 2 1 2 1 1 2 1 2 2 2 ...
$ Title : Factor w/ 4 levels "Miss","Mr","Mrs",..: 2 1 2 3 3 2 3 2 2 2 ...
$ Survived: Factor w/ 2 levels "0","1": 2 2 1 2 2 1 2 2 2 2 ...
Then everything goes as expected:
model <- NaiveBayes(Survived ~ ., data = Data_train)
# This will work
pred_1 <- predict(model, Data_test)
> str(pred_1)
List of 2
$ class : Factor w/ 2 levels "0","1": 1 2 1 2 2 1 2 1 1 1 ...
..- attr(*, "names")= chr [1:657] "6" "7" "8" "9" ...
$ posterior: num [1:657, 1:2] 0.8352 0.0216 0.8683 0.0204 0.0435 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:657] "6" "7" "8" "9" ...
.. ..$ : chr [1:2] "0" "1"
However, if the encoding is not consistent, e.g.:
# Mess things up, by "displacing" the factor values (i.e., 'Nothing'
# will now be encoded as number 5, which was not present in the
# training data)
Data_test_2 <- Data_test
Data_test_2$Title <- factor(
as.character(Data_test_2$Title),
levels = c("Dr", "Miss", "Mr", "Mrs", "Nothing")
)
> str(Data_test_2)
'data.frame': 657 obs. of 5 variables:
$ PClass : Factor w/ 3 levels "1st","2nd","3rd": 1 1 1 1 1 1 1 1 1 1 ...
$ Age : num 47 63 39 58 19 28 50 37 25 39 ...
$ Sex : Factor w/ 2 levels "female","male": 2 1 2 1 1 2 1 2 2 2 ...
$ Title : Factor w/ 5 levels "Dr","Miss","Mr",..: 3 2 3 4 4 3 4 3 3 3 ...
$ Survived: Factor w/ 2 levels "0","1": 2 2 1 2 2 1 2 2 2 2 ...
then:
> pred_2 <- predict(model, Data_test_2)
Error in object$tables[[v]][, nd] : subscript out of bounds
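A minimal sketch of one way to repair such a mismatch is to force the test factor back onto the training levels (any value not among them would become NA):
# Re-encode Title using the levels the model was trained on
Data_test_2$Title <- factor(as.character(Data_test_2$Title),
                            levels = levels(Data_train$Title))
pred_2 <- predict(model, Data_test_2)   # now behaves like pred_1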
I have a large data set which looks like so:
str(ldt)
'data.frame': 116105 obs. of 11 variables:
$ s : Factor w/ 35 levels "1","10","11",..: 1 1 1 1 1 1 1 1 1 1 ...
$ PM : Factor w/ 3 levels "C","F","NF": 3 3 3 3 3 3 3 3 3 3 ...
$ day : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
$ block : Factor w/ 3 levels "1","2","3": 2 2 2 2 2 2 2 2 2 2 ...
$ item : chr "parity" "grudoitong" "gunirec" "pirul" ...
$ C : logi TRUE TRUE TRUE TRUE TRUE FALSE ...
$ S : Factor w/ 2 levels "Nonword","Word": 2 1 1 1 2 2 2 1 2 1 ...
$ R : Factor w/ 2 levels "Nonword","Word": 2 1 1 1 2 1 2 1 2 1 ...
$ RT : num 0.838 1.026 0.93 0.553 0.815 ...
When I compute means by factor from this data set and then take the mean of those means, the result is slightly different from the mean of the whole data set. It is different again when I split by more factors and take the mean of those means. For example:
mean(ldt$RT[ldt$C])
[1] 0.6630013
mean(tapply(ldt$RT[ldt$C],list(s=ldt$s[ldt$C], PM= ldt$PM[ldt$C]),mean))
[1] 0.6638781
mean(tapply(ldt$RT[ldt$C],list(s=ldt$s[ldt$C], day = ldt$day[ldt$C], item=ldt$S[ldt$C], PM=ldt$PM[ldt$C]),mean))
[1] 0.6648401
What on earth is causing this discrepancy? The only thing I can imagine is that the subset means are getting rounded off. Is that why the answers differ? What is the exact mechanism at work here?
Thank you
The mean of means is not the same as the mean of all numbers.
Simple example: Take the dataset
1,3,5,6,7
The mean of 1 and 3 obviously is 2, the mean of 5,6,7 is 6.
The mean of the means therefore would be 4.
However, we have 1+3+5+6+7 = 22 and 22/5 = 4.4.
Thus, your problem is on the mathematical side of the calculation, not with your code.
To overcome this problem you would have to use a weighted mean, i.e., weight each summand of the outer mean by the number of values in its group divided by the total number of observations. In our example:
2/5 * 2 + 3/5 * 6 = 4.4
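In R, the same reconciliation can be sketched with the numbers from the example above:
x <- c(1, 3, 5, 6, 7)
g <- c("a", "a", "b", "b", "b")
group_means <- tapply(x, g, mean)      # a = 2, b = 6
mean(group_means)                      # 4: the unweighted mean of means
weighted.mean(group_means, table(g))   # 4.4: matches mean(x)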
I am training an SVM using my training data (e1071 package in R). The following is the information about my data:
> str(train)
'data.frame': 891 obs. of 10 variables:
$ survived: int 0 1 1 1 0 0 0 0 1 1 ...
$ pclass : int 3 1 3 1 3 3 1 3 3 2 ...
$ name : Factor w/ 15 levels "capt","col","countess",..: 12 13 9 13 12 12 12 8 13 13
$ sex : Factor w/ 2 levels "female","male": 2 1 1 1 2 2 2 2 1 1 ...
$ age : num 22 38 26 35 35 ...
$ ticket : Factor w/ 533 levels "110152","110413",..: 516 522 531 50 473 276 86 396
$ fare : num 7.25 71.28 7.92 53.1 8.05 ...
$ cabin : Factor w/ 9 levels "a","b","c","d",..: 9 3 9 3 9 9 5 9 9 9 ...
$ embarked: Factor w/ 4 levels "","C","Q","S": 4 2 4 4 4 3 4 4 4 2 ...
$ family : int 1 1 0 1 0 0 0 4 2 1 ...
I train it as follows:
library(e1071)
model1 <- svm(survived~.,data=train, type="C-classification")
No problem here. But when I predict as:
pred <- predict(model1,test)
I get the following error:
Error in newdata[, object$scaled, drop = FALSE] :
(subscript) logical subscript too long
I also tried removing the "ticket" predictor from both the train and test data, but I still get the same error. What is the problem?
There might be a difference in the number of levels of one of the factors in the 'test' dataset.
Run str(test) and check that the factor variables have the same levels as the corresponding variables in the 'train' dataset.
For example, the output below shows that my.test$foo has only 4 levels:
str(my.train)
'data.frame': 554 obs. of 7 variables:
....
$ foo: Factor w/ 5 levels "C","Q","S","X","Z": 2 2 4 3 4 4 4 4 4 4 ...
str(my.test)
'data.frame': 200 obs. of 7 variables:
...
$ foo: Factor w/ 4 levels "C","Q","S","X": 3 3 3 3 1 3 3 3 3 3 ...
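A short sketch of how one might check and align them (using the my.train and my.test names from this example):
# Any levels that exist in one data set but not the other?
setdiff(levels(my.train$foo), levels(my.test$foo))
# One hedged fix: give the test factor exactly the training levels
my.test$foo <- factor(my.test$foo, levels = levels(my.train$foo))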
That's correct. The train data contains 2 blanks for embarked; because of this there is one extra categorical level for the blanks, and that is why you are getting this error:
$ Embarked : Factor w/ 4 levels "","C","Q","S": 4 2 4 4 4 3 4 4 4 2 ...
The first level is the blank one.
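One possible cleanup (a sketch; embarked in lower case as in the question's train data) is to recode the blanks as NA and drop the empty level so that train and test share the same level set:
train$embarked[train$embarked == ""] <- NA   # the 2 blank rows become NA
train$embarked <- droplevels(train$embarked) # removes the unused "" level
# svm()'s default na.action (na.omit) will then skip those two rows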
I encountered the same problem today. It turned out that the svm model in the e1071 package can only use rows as the observations, which means one row is one sample rather than one column. If you use columns as the samples and rows as the variables, this error will occur.
Probably your data is fine (no new levels in the test data) and you just need a small trick, after which prediction works:
# Temporarily prepend one training row so that test.df inherits the
# factor levels of train.df, then drop that row again:
test.df = rbind(train.df[1,], test.df)
test.df = test.df[-1,]
This trick comes from R Random Forest - type of predictors in new data do not match. I ran into this problem today, used the trick above, and it solved the problem.
I have been playing with that data set as well. I know this was a long time ago, but one thing you can do is explicitly include only the columns you feel will add to the model, like so:
fit <- svm(Survived~Pclass + Sex + Age + SibSp + Parch + Fare + Embarked, data=train)
This eliminated the problem for me by dropping the columns (like ticket number) that contribute no relevant information.
Another possible issue was that I had forgotten to make some of my independent variables factors; fixing that also resolved my code.