Error in is.data.frame(data) : object 'test_data' not found - r

I am new to R programming and am trying to create a logistic regression model for the first time.
While creating the model, I am getting the error below:
m<-glm(ad~.,data=test_data,family='binomial')
Error in terms.formula(formula, data = data) :
'.' in formula and no 'data' argument
Code:
college<- read.csv(file.choose(),header=T)
head(college)
set.seed(2020)
split_data<- sample.split(college_final$admit,SplitRatio=3/4)
split_data
train_data<- subset(split_data,split==T)
train_data
test_data<-subset(split_data,split==F)
test_data
model<-glm(admit~.,data=test_data,family='binomial')
model
summary(model)
I tried looking in the R community for the same issue, but nothing was mentioned about it.

I don't have enough rep to leave a comment, but when I tried to reproduce the data I got an error while creating test_data, so I think the issue is with the subsetting. (I wonder if you got an error there too?)
In this case, we want test_data to be a data frame rather than a vector. Try str(test_data) to see whether it is a data.frame.
If not, try replacing
train_data<- subset(split_data,split==T)
test_data<-subset(split_data,split==F)
With
train_data <- subset(college, split_data == T)
test_data <- subset(college, split_data == F)
And run glm again.
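Putting it all together, a minimal end-to-end sketch of the intended workflow (assuming the data frame is named college, admit is the response, and sample.split comes from the caTools package) could look like this; note that the model is normally fit on the training split rather than the test split:
library(caTools)  # provides sample.split()
college <- read.csv(file.choose(), header = TRUE)
set.seed(2020)
# sample.split() returns a logical vector, so use it to index the original data frame
split_data <- sample.split(college$admit, SplitRatio = 3/4)
train_data <- subset(college, split_data == TRUE)
test_data <- subset(college, split_data == FALSE)
# fit the logistic regression on the training data
model <- glm(admit ~ ., data = train_data, family = "binomial")
summary(model)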

Related

Error in panel regression in case of different independent variable r

I am trying to run Fama Macbeth regression by the following code:
require(foreign)
require(plm)
require(lmtest)
fpmg <- pmg(return~max_1,df_all_11, index=c("yearmonth","firms" ))
Fama<-fpmg
coeftest(Fama)
It works when I regress using the independent variable named 'max_1'. However, when I change it and use another independent variable named 'ivol_1', the result is an error. The code is:
require(foreign)
require(plm)
require(lmtest)
fpmg <- pmg(return~ivol_1,df_all_11, index=c("yearmonth","firms" ))
Fama<-fpmg
coeftest(Fama)
The error message is like this:
Error in pmg(return ~ ivol_1, df_all_11, index = c("yearmonth", "firms")) :
Insufficient number of time periods
or sometimes the error is like this
Error in model.frame.default(terms(formula, lhs = lhs, rhs = rhs, data = data, :
object is not a matrix
For your convenience, I am sharing my data with you. The data link is
data frame
I am wondering why this happens with a different variable from the same data frame. I would be grateful if you could help solve this problem.
This problem can be solved with the mice function.
library(mice)
library(dplyr)
require(foreign)
require(plm)
require(lmtest)
df_all_11 <- read.csv("df_all_11.csv.part", sep = ",", header = TRUE, stringsAsFactors = FALSE)
x<-data.frame(ivol_1=df_all_11$ivol_1,month=df_all_11$Month)
imputed_Data <- mice(x, m=3, maxit =5, method = 'pmm', seed = 500)
completeData <- complete(imputed_Data, 3)
df_all_11<-mutate(df_all_11,ivol_1=completeData$ivol_1)
fpmg2 <- pmg(return~ivol_1,df_all_11, index=c("yearmonth","firms"))
coeftest(fpmg2)
This problem occurs because the variable ivol_1 has a lot of NAs, so you should impute the NAs first and then run the pmg function.
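As a quick check before imputing (a minimal sketch, assuming df_all_11 has already been read in as above), you can count the missing values to confirm that ivol_1 is the problematic variable while max_1 is not:
# number of missing values in each candidate predictor
colSums(is.na(df_all_11[, c("max_1", "ivol_1")]))
# share of rows with a missing ivol_1
mean(is.na(df_all_11$ivol_1))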

R: Missing data causes error with XGBoost / sparse.model.matrix

As far as I understand, XGBoost should have the benefit of dealing with missing data. However, whenever I test the Boston housing set with a few NAs added, I get the error:
The length of labels must equal to the number of rows in the input data
The code I am running is
trainm <- sparse.model.matrix(class ~ ., data = train)
train_label <- train[,"class"]
train_matrix <- xgb.DMatrix(data = as.matrix(trainm), label = train_label)
When I don't add the NAs, everything runs fine. I am pretty sure the issue is that rows containing NAs are dropped when the sparse matrix is built, which causes the length mismatch, but I am not sure how to address it.
My code is here.
Any feedback that can help me would be highly appreciated.
You need to take corrective action for handling NAs while building the sparse model matrix; other than that, there is no problem with your code/data. This is the modified code:
library(Matrix)    # provides sparse.model.matrix()
library(xgboost)   # provides xgb.DMatrix()
# keep rows with NAs instead of dropping them when building the model matrix
options(na.action = 'na.pass')
trainm <- sparse.model.matrix(class ~ ., data = train)
train_label <- train$class
train_matrix <- xgb.DMatrix(data = trainm, label = train_label)
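One caveat worth adding (not part of the original answer): options(na.action = 'na.pass') changes a global option, so you may want to restore R's default once the matrix has been built:
# restore the default NA handling afterwards
options(na.action = 'na.omit')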

Subscript out of bound error in predict function of randomforest

I am using random forest for prediction, and at the predict(fit, test_feature) line I get the following error. Can someone help me overcome this? I did the same steps with another dataset and had no error, but I get one here.
Error: Error in x[, vname, drop = FALSE] : subscript out of bounds
training_index <- createDataPartition(shufflled[,487], p = 0.8, times = 1)
training_index <- unlist(training_index)
train_set <- shufflled[training_index,]
test_set <- shufflled[-training_index,]
accuracies <- c()
k = 10
n = floor(nrow(train_set)/k)
for(i in 1:k){
  sub1 <- ((i-1)*n+1)
  sub2 <- (i*n)
  subset <- sub1:sub2
  train <- train_set[-subset, ]
  test <- train_set[subset, ]
  test_feature <- test[ ,-487]
  True_Label <- as.factor(test[ ,487])
  fit <- randomForest(x = train[ ,-487], y = as.factor(train[ ,487]))
  prediction <- predict(fit, test_feature) # The error line
  correctlabel <- prediction == True_Label
  t <- table(prediction, True_Label)
}
I had a similar problem a few weeks ago.
To get around the problem, you can do this:
df$label <- factor(df$label)
Instead of as.factor, try the plain factor() function. Also, try naming your label variable first.
Are there identical column names in your training and validation x?
I had the same error message and solved it by renaming my columns, because my data was a matrix and its colnames were all empty, i.e. "".
Your question is not very clear; anyway, I will try to help you.
First of all, check your data to see the distribution of levels in your various predictors and outcomes.
You may find that some of your predictor levels or outcome levels are very highly skewed, or that some outcomes or predictor levels are very rare. I got that error when I was trying to predict a very rare outcome with a heavily tuned random forest, and some of the predictor levels were not actually in the training data. Thus a factor level appears in the test data that the training data thinks is out of bounds.
Alternatively, check the names of your variables before calling predict() to make sure that the variable names match.
Without your data files, it's hard to tell why your first example worked.
For example, you can try:
names(test) <- names(train)
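If the problem really is factor levels that appear in the test data but not in the training data, a common workaround (a rough sketch, assuming train and test share the same column names) is to rebuild each factor column of the test set with the levels seen during training:
# force every factor column in the test set to use the training levels
for (col in names(train)) {
  if (is.factor(train[[col]])) {
    test[[col]] <- factor(test[[col]], levels = levels(train[[col]]))
  }
}
Any test value that was never seen during training then becomes NA, which is usually easier to diagnose than a subscript error.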
Add the expression
dimnames(test_feature) <- NULL
before
prediction <- predict(fit, test_feature)

Error in predict.randomForest

I was hoping someone would be able to help me out with an issue I am having with the prediction function of the randomForest package in R. I keep getting the same error when I try to predict my test data:
Here's my code so far:
extractFeatures <- function(RCdata) {
  features <- c(4, 9:13, 17:20)
  fea <- RCdata[, features]
  fea$Week <- as.factor(fea$Week)
  fea$Age_Range <- as.factor(fea$Age_Range)
  fea$Race <- as.factor(fea$Race)
  fea$Referral_Source <- as.factor(fea$Referral_Source)
  fea$Referral_Source_Category <- as.factor(fea$Referral_Source_Category)
  fea$Rehire <- as.factor(fea$Rehire)
  fea$CLFPR_.HS <- as.factor(fea$CLFPR_.HS)
  fea$CLFPR_HS <- as.factor(fea$CLFPR_HS)
  fea$Job_Openings <- as.factor(fea$Job_Openings)
  fea$Turnover <- as.factor(fea$Turnover)
  return(fea)
}
gp <- runif(nrow(RCdata))
RCdata <- RCdata[order(gp), ]
train <- RCdata[1:4600, ]
test <- RCdata[4601:6149, ]
rf <- randomForest(extractFeatures(train), suppressWarnings(as.factor(train$disposition_category)), ntree=100, importance=TRUE)
testpredict <- predict(rf, extractFeatures(test))
"Error in predict.randomForest(rf, extractFeatures(test)) :
Type of predictors in new data do not match that of the training data."
I have tried adding in the following line to the code, and still receive the same error:
testpredict <- predict(rf, extractFeatures(test), type="prob")
I found the source of the error to be the fact that the training data has a level or two that are not found in the test data. But when I tried another suggestion I found online, adjusting the levels of the test data to those of the training data, I kept getting NULL values in the fields I am using in both the training and test sets.
levels(test$Referral)
NULL
I can see the levels when I wrap the column in as.factor(), however:
levels(as.factor(test$Referral))
So then I tried the same suggestion I found online, adjusting the levels of the test data to equal those of the training data, using the following assignment, and received an error:
levels(as.factor(test$Referral)) -> levels(as.factor(train$Referral))
Error in `levels<-.factor`(`*tmp*`, value = c(... :
number of levels differs
I am sure there is something simple I am missing (I am still very new to R), so any insight you can provide would be unbelievably helpful. Thanks!

lm function throws an error in terms.formula() in R

I am trying to run a linear model on the training data frame, but it is not giving me any output.
It gives me an error saying:
Error in terms.formula(formula, data = data) :
'.' in formula and no 'data' argument
Code
n <- ncol(training)
input <- as.data.frame(training[,-n])
fit <- lm(training[,n] ~.,data = training[,-n])
There's no need to remove the column from the data to perform this operation, and it's best to use names.
Say that your last column is called response. Then run this:
lm(response ~ ., data=training)
It's hard to say that this is the formula that you need. If you provide a reproducible example, that will become clear.
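If the name of the response column is only known at run time (a small sketch, assuming the response is the last column of training), you can build the formula programmatically instead of subsetting the data frame:
# build the formula from the response column's name rather than removing columns
n <- ncol(training)
response_name <- names(training)[n]
fml <- as.formula(paste(response_name, "~ ."))
fit <- lm(fml, data = training)
summary(fit)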
