lattice plot error: need finite 'xlim' values - R

Whenever I try to plot across factors I keep getting this error.
Here is what my data looks like:
str(dataWithNoNa)
## 'data.frame': 17568 obs. of 4 variables:
## $ steps : num 1.717 0.3396 0.1321 0.1509 0.0755 ...
## $ date : Factor w/ 61 levels "2012-10-01","2012-10-02",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ interval: int 0 5 10 15 20 25 30 35 40 45 ...
## $ dayType : Factor w/ 2 levels "Weekday","Weekend": 1 1 1 1 1 1 1 1 1 1 ...
I am trying to plot using the lattice plotting system using Weekday/Weekend as a factor.
Here is what I tried:
plot(dataWithNoNa$steps ~ dataWithNoNa$interval | dataWithNoNa$dayType, type = "l")
Error in plot.window(...) : need finite 'xlim' values
I even checked to make sure my data had no NAs:
sum(is.na(dataWithNoNa$interval))
## [1] 0
sum(is.na(dataWithNoNa$steps))
## [1] 0
What am I doing wrong?

Base plot() does not understand the | conditioning operator; that is lattice formula syntax, so use xyplot() from the lattice package instead. With some sample data:
df <- data.frame(
  steps = c(1.717, 0.3396, 0.1321, 0.1509, 0.0755),
  interval = c(0, 5, 10, 15, 20),
  dayType = c(1, 1, 1, 2, 2)
)
Try this:
library(lattice)
xyplot(steps ~ interval | factor(dayType), data = df)
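If you want the same line-style panels as the base plot() attempt, xyplot() takes the same type argument. A minimal sketch against the questioner's dataWithNoNa (column names taken from the str() output above; the panel layout is an assumption):
library(lattice)

# one panel per dayType level, drawing steps against interval as a line
xyplot(steps ~ interval | dayType, data = dataWithNoNa, type = "l",
       layout = c(1, 2), xlab = "Interval", ylab = "Steps")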

Related

R round correlate function from corrr package

I'm creating a correlation table using the correlate function in the corrr package. Here is my code and a screenshot of the output.
correlation_table <- corrr::correlate(salary_professor_dataset_cor_table,
                                      method = "pearson")
correlation_table
I think this would look better and be easier to read if I could round off the values in the correlation table. I tried this code:
correlation_table <- round(corrr::correlate(salary_professor_dataset_cor_table,
                                            method = "pearson"), 2)
But I get this error:
Error in Math.data.frame(list(term = c("prof_rank_factor", "yrs.since.phd", : non-numeric variable(s) in data frame: term
The non-numeric variables part of this error message doesn't make sense to me. When I look at the structure I only see integer or numeric variable types.
'data.frame': 397 obs. of 6 variables:
$ prof_rank_factor : num 3 3 1 3 3 2 3 3 3 3 ...
$ yrs.since.phd : int 19 20 4 45 40 6 30 45 21 18 ...
$ yrs.service : int 18 16 3 39 41 6 23 45 20 18 ...
$ salary : num 139750 173200 79750 115000 141500 ...
$ sex_factor : num 1 1 1 1 1 1 1 1 1 2 ...
$ discipline_factor: num 2 2 2 2 2 2 2 2 2 2 ...
How can I clean up this correlation table with rounded values?
correlate() returns a tibble, so after building it you can loop across only the numeric columns and round those:
library(dplyr)
corrr::correlate(salary_professor_dataset_cor_table,
                 method = "pearson") %>%
  mutate(across(where(is.numeric), round, digits = 2))
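Note that passing extra arguments through across() this way is deprecated in more recent dplyr releases; an anonymous-function form (a sketch, result assumed identical) avoids the warning:
corrr::correlate(salary_professor_dataset_cor_table,
                 method = "pearson") %>%
  mutate(across(where(is.numeric), ~ round(.x, digits = 2)))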
We can also use options(digits = 2), which controls how many significant digits are printed rather than changing the stored values:
options(digits = 2)
correlation_table <- corrr::correlate(salary_professor_dataset_cor_table,
                                      method = "pearson")
correlation_table

How to apply Naive Bayes model to new data

I asked a question about this this morning but am deleting that and posting here with better wording.
I created my first machine learning model using train and test data. I returned a confusion matrix and saw some summary stats.
I would now like to apply the model to new data to make predictions but I don't know how.
Context: Predicting monthly "churn" cancellations. Target variable is "churned" and it has two possible labels "churned" and "not churned".
head(tdata)
months_subscription nvk_medium org_type churned
1 25 none Community not churned
2 7 none Sports clubs not churned
3 28 none Sports clubs not churned
4 18 unknown Religious congregations and communities not churned
5 15 none Association - Professional not churned
6 9 none Association - Professional not churned
Here's me training and testing:
library("klaR")
library("caret")
# import data
test_data_imp <- read.csv("tdata.csv")
# subset only required vars
# had to remove "revenue" since all churned records are 0 (need last price point)
variables <- c("months_subscription", "nvk_medium", "org_type", "churned")
tdata <- test_data_imp[variables]
#training
rn_train <- sample(nrow(tdata),
                   floor(nrow(tdata) * 0.75))
train <- tdata[rn_train,]
test <- tdata[-rn_train,]
model <- NaiveBayes(churned ~., data=train)
# testing
predictions <- predict(model, test)
confusionMatrix(test$churned, predictions$class)
Everything up till here works fine.
Now I have new data, structured and laid out the same way as tdata above. How can I apply my model to this new data to make predictions? Intuitively, I was expecting to cbind a new column holding the predicted class for each record.
I tried this:
## prediction ##
# import data
data_imp <- read.csv("pdata.csv")
pdata <- data_imp[variables]
actual_predictions <- predict(model, pdata)
#append to data and output (as head by default)
predicted_data <- cbind(pdata, actual_predictions$class)
# output
head(predicted_data)
This threw the following errors:
actual_predictions <- predict(model, pdata)
Error in object$tables[[v]][, nd] : subscript out of bounds
In addition: Warning messages:
1: In FUN(1:6433[[4L]], ...) :
Numerical 0 probability for all classes with observation 1
2: In FUN(1:6433[[4L]], ...) :
Numerical 0 probability for all classes with observation 2
3: In FUN(1:6433[[4L]], ...) :
Numerical 0 probability for all classes with observation 3
How can I apply my model to the new data? I'd like a new data frame with a new column that has the predicted class?
**Following a comment, here are head and str of the new data for prediction:**
head(pdata)
months_subscription nvk_medium org_type churned
1 26 none Community not churned
2 8 none Sports clubs not churned
3 30 none Sports clubs not churned
4 19 unknown Religious congregations and communities not churned
5 16 none Association - Professional not churned
6 10 none Association - Professional not churned
> str(pdata)
'data.frame': 6433 obs. of 4 variables:
$ months_subscription: int 26 8 30 19 16 10 3 5 14 2 ...
$ nvk_medium : Factor w/ 16 levels "cloned","CommunityIcon",..: 9 9 9 16 9 9 9 3 12 9 ...
$ org_type : Factor w/ 21 levels "Advocacy and civic activism",..: 8 18 18 14 6 6 11 19 6 8 ...
$ churned : Factor w/ 1 level "not churned": 1 1 1 1 1 1 1 1 1 1 ...
This is most likely caused by a mismatch in the encoding of factors in the training data (variable tdata in your case) and the new data used in the predict function (variable pdata), typically that you have factor levels in the test data that are not present in the training data. Consistency in the encoding of the features must be enforced by you, because the predict function will not check it. Therefore, I suggest that you double-check the levels of the features nvk_medium and org_type in the two variables.
The error message:
Error in object$tables[[v]][, nd] : subscript out of bounds
is raised when evaluating a given feature (the v-th feature) in a data point, in which nd is the numeric value of the factor corresponding to the feature. You also have warnings, indicating that the posterior probabilities for all the cases in data points ("observation") 1, 2, and 3 are all zero, but it is not clear if this is also related to the encoding of the factors...
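One way to enforce that consistency (a sketch, assuming tdata and pdata are the training and prediction data frames from the question) is to rebuild the factors in the new data with the levels used for training:
# align the factor encodings of the new data with the training data
for (v in c("nvk_medium", "org_type")) {
  pdata[[v]] <- factor(as.character(pdata[[v]]), levels = levels(tdata[[v]]))
}
# any value whose level was never seen in training becomes NA here and
# should be inspected or recoded before calling predict()
summary(pdata)
actual_predictions <- predict(model, pdata)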
To reproduce the error that you are seeing, consider the following toy data (from http://amunategui.github.io/binary-outcome-modeling/), which has a set of features somewhat similar to that in your data:
# Data setup
# From http://amunategui.github.io/binary-outcome-modeling/
titanicDF <- read.csv('http://math.ucdenver.edu/RTutorial/titanic.txt', sep='\t')
titanicDF$Title <- as.factor(
  ifelse(grepl('Mr ', titanicDF$Name), 'Mr',
    ifelse(grepl('Mrs ', titanicDF$Name), 'Mrs',
      ifelse(grepl('Miss', titanicDF$Name), 'Miss', 'Nothing'))))
titanicDF$Age[is.na(titanicDF$Age)] <- median(titanicDF$Age, na.rm=T)
titanicDF$Survived <- as.factor(titanicDF$Survived)
titanicDF <- titanicDF[c('PClass', 'Age', 'Sex', 'Title', 'Survived')]
# Separate into training and test data
inds_train <- sample(1:nrow(titanicDF), round(0.5 * nrow(titanicDF)), replace = FALSE)
Data_train <- titanicDF[inds_train, , drop = FALSE]
Data_test <- titanicDF[-inds_train, , drop = FALSE]
with:
> str(Data_train)
'data.frame': 656 obs. of 5 variables:
$ PClass : Factor w/ 3 levels "1st","2nd","3rd": 1 3 3 3 1 1 3 3 3 3 ...
$ Age : num 35 28 34 28 29 28 28 28 45 28 ...
$ Sex : Factor w/ 2 levels "female","male": 2 2 2 1 2 1 1 2 1 2 ...
$ Title : Factor w/ 4 levels "Miss","Mr","Mrs",..: 2 2 2 1 2 4 3 2 3 2 ...
$ Survived: Factor w/ 2 levels "0","1": 2 1 1 1 1 2 1 1 2 1 ...
> str(Data_test)
'data.frame': 657 obs. of 5 variables:
$ PClass : Factor w/ 3 levels "1st","2nd","3rd": 1 1 1 1 1 1 1 1 1 1 ...
$ Age : num 47 63 39 58 19 28 50 37 25 39 ...
$ Sex : Factor w/ 2 levels "female","male": 2 1 2 1 1 2 1 2 2 2 ...
$ Title : Factor w/ 4 levels "Miss","Mr","Mrs",..: 2 1 2 3 3 2 3 2 2 2 ...
$ Survived: Factor w/ 2 levels "0","1": 2 2 1 2 2 1 2 2 2 2 ...
Then everything goes as expected:
model <- NaiveBayes(Survived ~ ., data = Data_train)
# This will work
pred_1 <- predict(model, Data_test)
> str(pred_1)
List of 2
$ class : Factor w/ 2 levels "0","1": 1 2 1 2 2 1 2 1 1 1 ...
..- attr(*, "names")= chr [1:657] "6" "7" "8" "9" ...
$ posterior: num [1:657, 1:2] 0.8352 0.0216 0.8683 0.0204 0.0435 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:657] "6" "7" "8" "9" ...
.. ..$ : chr [1:2] "0" "1"
However, if the encoding is not consistent, e.g.:
# Mess things up, by "displacing" the factor values (i.e., 'Nothing'
# will now be encoded as number 5, which was not present in the
# training data)
Data_test_2 <- Data_test
Data_test_2$Title <- factor(
as.character(Data_test_2$Title),
levels = c("Dr", "Miss", "Mr", "Mrs", "Nothing")
)
> str(Data_test_2)
'data.frame': 657 obs. of 5 variables:
$ PClass : Factor w/ 3 levels "1st","2nd","3rd": 1 1 1 1 1 1 1 1 1 1 ...
$ Age : num 47 63 39 58 19 28 50 37 25 39 ...
$ Sex : Factor w/ 2 levels "female","male": 2 1 2 1 1 2 1 2 2 2 ...
$ Title : Factor w/ 5 levels "Dr","Miss","Mr",..: 3 2 3 4 4 3 4 3 3 3 ...
$ Survived: Factor w/ 2 levels "0","1": 2 2 1 2 2 1 2 2 2 2 ...
then:
> pred_2 <- predict(model, Data_test_2)
Error in object$tables[[v]][, nd] : subscript out of bounds

Difficulty creating ROC curve using library(ROCR) in R

I have a simple 2-column data frame. The labels (binary) are Benign and Malignant, and the predictor is a five-point ordinal variable. Here is the summary:
'data.frame': 127 obs. of 2 variables:
$ GRADE : Ord.factor w/ 5 levels "benign"<"likely benign"<..: 1 1 1 1 1 1 1 1 1 1 ...
$ BENIGN.MALIGN: Factor w/ 2 levels "Benign","Malignant": 1 1 1 1 1 1 1 1 1 1 ...
But when I use:
pred<-prediction(myTable[[1]],myTable[[2]])
I get this error message:
Error in prediction(myTable[[1]], myTable[[2]]) :
Format of predictions is invalid.
What can I do to rectify this? Thanks
If you are using the grade as a score and have verified or assumed that the intervals of the score are equidistant, you can convert the 5-point Likert scale to numeric form as follows:
pred <- prediction(as.numeric(myTable[[1]]), myTable[[2]])
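From there the usual ROCR workflow applies; a minimal sketch (assuming column 1 of myTable holds the grade and column 2 the Benign/Malignant label):
library(ROCR)

pred <- prediction(as.numeric(myTable[[1]]), myTable[[2]])
# ROC curve: true positive rate against false positive rate
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf)
# area under the curve
performance(pred, measure = "auc")@y.values[[1]]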

can't plot predict line in R

I'm using these data:
'data.frame': 1584 obs. of 3 variables:
$ Individual: Factor w/ 3 levels "AG201","AG202",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Used : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
$ NDVI : int 4724 4576 4894 4297 4670 4932 4346 3810 3481 4058 ...
I'm fitting a glm with "NDVI" as a continuous explanatory variable and then plotting the model over the scatterplot of the data (I'm reproducing the script from The R Book, Crawley, p. 596).
model <- glm(Used ~ NDVI, binomial)
xv <- seq(0, 10000, 0.2)
yv <- predict(model, list(NDVI = xv), type = "response")
plot(NDVI, Used)
lines(xv, yv)
My problem is that no line appears on my graph...
Any idea what's wrong?
Following Gavin's insight, here's a suggestion:
plot(NDVI, as.numeric(Used)-1 )
lines(xv,yv)
Factors are integer vectors starting at 1L, with values assigned by default in alphabetical order of the labels. So you should be OK: "no" < "yes" means the no's are 1 and the yes's are 2, and subtracting 1 shifts them down to the correct [0, 1] scale. You may also want to look at str(yv).
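Putting the pieces together, a sketch of the whole sequence (assuming the data frame from the question is called df and that "yes" is the second factor level, so it maps to 1 after the shift):
# fit the binomial GLM; glm() treats the second factor level as the "success"
model <- glm(Used ~ NDVI, family = binomial, data = df)

# predict over the observed range of NDVI rather than 0-10000
xv <- seq(min(df$NDVI), max(df$NDVI), length.out = 500)
yv <- predict(model, newdata = data.frame(NDVI = xv), type = "response")

# plot the 0/1 responses and overlay the fitted probability curve
plot(df$NDVI, as.numeric(df$Used) - 1, xlab = "NDVI", ylab = "P(used)")
lines(xv, yv)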

Factors in aov()

I ran into a weird problem following example 1 in the R Guide.
Here is the example:
> datafilename="http://personality-project.org/r/datasets/R.appendix1.data"
> data.ex1 = read.table(datafilename,header=T) #read the data into a table
> aov.ex1 = aov(Alertness~Dosage,data=data.ex1) #do the analysis of variance
> summary(aov.ex1) #show the summary table
But when I applied aov() to my own data, things changed.
> test.data <- data.frame(fac=letters[c(1:3,1:3)], x=1:6)
> test.result <- aov(fac~x, data=test.data)
Error in storage.mode(y) <- "double" :
invalid to change the storage mode of a factor
In addition: Warning message:
In model.response(mf, "numeric") :
using type="numeric" with a factor response will be ignored
I'm totally confused. What's the difference between test.data and data.ex1 in the R Guide example?
> str(test.data)
'data.frame': 6 obs. of 2 variables:
$ fac: Factor w/ 3 levels "a","b","c": 1 2 3 1 2 3
$ x : int 1 2 3 4 5 6
> str(data.ex1)
'data.frame': 18 obs. of 2 variables:
$ Dosage : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 2 2 2 2 ...
$ Alertness: int 30 38 35 41 27 24 32 26 31 29 ...
It should be aov(x ~ fac, data = test.data), which works. The formula needs to be response ~ factor, not factor ~ response.
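For reference, a sketch of the corrected call on the toy data from the question:
test.data <- data.frame(fac = letters[c(1:3, 1:3)], x = 1:6)
# numeric response on the left, factor on the right
test.result <- aov(x ~ fac, data = test.data)
summary(test.result)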
