What is an alternative to leaps() that can handle NAs?

I need to apply the branch-and-bound method to choose the best model. leaps() from the leaps package works well, but only if the data has no NA values; otherwise it throws an error:
library(leaps)
# dummy data
x <- matrix(rnorm(100), ncol = 4)
# convert to 0,1,2 - this is genetic data, NA = NoCall
x <- matrix(round(runif(100) * 10) %% 3, ncol = 4)
# introduce NA = NoCall
x[1, 1] <- NA
# response, case or control
y <- rep(c(0, 1, 1, 0, 1), 5)
leaps(x, y)
Error in leaps.setup(x, y, wt = wt, nbest = nbest, nvmax = NCOL(x) + int, :
NA/NaN/Inf in foreign function call (arg 4)
Using only complete.cases() is not an option, as I would lose 80% of the data.
What is an alternative to leaps() that can handle NAs? I am writing my own function to do something similar, but it is getting big and clunky; I feel like I am reinventing the wheel...
UPDATE:
I have tried using stepAIC() and face the same problem with missing data:
Error in stepAIC(fit) :
number of rows in use has changed: remove missing values?

You may try bestglm::bestglm, where the branch-and-bound method can be specified. The NAs can be handled by the na.action argument, as in glm. See here for additional information:
http://cran.r-project.org/web/packages/bestglm/vignettes/bestglm.pdf
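For instance, a minimal sketch using the dummy data above (the column names x1..x4 are made up here; bestglm expects a data frame with the predictors first and the response as the last column):
library(bestglm)
Xy <- data.frame(x, y)
names(Xy) <- c("x1", "x2", "x3", "x4", "y")
# the IC shown is illustrative; NA handling may still need attention, per the vignette
fit <- bestglm(Xy, family = binomial, IC = "AIC")
fit$BestModel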

This is a stats problem, as AIC can't compare models built with different data sets. So to compare models with and without certain variables, you need to remove the rows with missing values for those variables. You may need to "reconsider your modeling strategy", to quote Ben Bolker. Otherwise you may also want to look into variants of AIC; a quick Google search brings up a recent JASA article that might be a good starting point.
- Aaron

Related

Puzzling error in svyby of survey package

I am using the svyby function from the survey R package, and I get an error I don't know how to deal with.
At first I used the variable cntry for grouping; next I used essround; and it all worked smoothly. But when I use their combination, ~ cntry + essround, it returns an error.
I am puzzled how it can work separately for each grouping but doesn't work for combined grouping.
This is somehow related to missing data: when I drop all the incomplete rows (i.e. using na.omit(dat) instead of dat when defining the survey design), it starts working. But I don't want to drop all the missing values. I thought the na.rm argument of svymean should deal with it. Note that the variables cntry and essround do not contain any missing values.
library("survey")
s.w <- svydesign(ids = ~1, data = dat, weights = dat[,weight])
svyby(~ Security, by=~ essround, s.w, svymean, na.rm=T) # Works
svyby(~ Security, by=~ cntry, s.w, svymean, na.rm=T) # Also works
svyby(~ Security, by=~ essround+cntry, s.w, svymean, na.rm=T) # Gives an error
Error in tapply(1:NROW(x), list(factor(strata)), function(index) { :
arguments must have same length
So my question is - how to make it work?
UPDATE.
Sorry, I misread the documentation. The problem is solved by adding na.rm.all = TRUE to the svyby function.
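For completeness, the working call would then be (same objects as above):
svyby(~ Security, by = ~ essround + cntry, s.w, svymean,
      na.rm = TRUE, na.rm.all = TRUE)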
Forgive me for the late answer, but I was just looking for a solution to a similar problem and have solved it for myself just now. Check whether you have any empty cells in your cross-tabulation of essround, cntry, and Security (using table()). If you do, try transforming the grouping variables into ordered factors with ordered(), explicitly naming your levels with the levels argument, before you run svyby(). Ordered factors will show a frequency of 0 in a cross-tabulation, while regular factors will drop empty cells.
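A sketch of that suggestion, deriving the level sets from the data itself rather than naming them by hand:
# empty cells show up in the cross-tabulation
table(dat$essround, dat$cntry)
# explicit ordered factors for the grouping variables
dat$cntry <- ordered(dat$cntry, levels = sort(unique(dat$cntry)))
dat$essround <- ordered(dat$essround, levels = sort(unique(dat$essround)))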
I don't know exactly why, but here's how I resolved the same issue. It seems to have something to do with the way svyby deals with NA data, even if you specify na.rm=T. I made subsets of my data frame and found that the error occurs if a subset is smaller than a certain threshold (it was 500 in my case, but the exact value is to be determined) AND contains NAs; it works fine for other subsets, e.g. bigger than 10,000 with NAs, or smaller than 500 without NAs. In your case, there is probably a subset with essround == x & cntry == y which is small and where Security is NA. So clean the NAs out of the data BEFORE you run svyby (by removal, estimation, or separate grouping; it's up to you) and then try again. It worked for me.

Calculating MSE: why are these two ways giving different results?

I have some doubts regarding the calculation of MSE in R.
I have tried two different ways and I am getting two different results. I want to know which one is the correct way of finding the MSE.
First:
model1 <- lm(data=d, x ~ y)
rmse_model1 <- mean((d - predict(model1))^2)
Second:
mean(model1$residuals^2)
In principle, they should give you the same result. But in the first option, you should use d$x. If you just use d, R's recycling rule will repeat predict(model1) twice (as d has two columns) and the computation will also involve d$y.
Note that it is recommended to include na.rm = TRUE in the call to mean, and newdata = d in the call to predict, in the first option. This makes your code robust to missing values in your data. On the other hand, you don't need to worry about NAs in the second option, as lm automatically drops NA cases. You may have a look at this thread for the potential effect of this feature: Aligning Data frame with missing values.
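Putting those corrections together (the model is x ~ y, so the observed values are d$x):
mse1 <- mean((d$x - predict(model1, newdata = d))^2, na.rm = TRUE)
mse2 <- mean(model1$residuals^2)
# with the fixes above, mse1 and mse2 should agree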

problems with mice in R: cannot coerce class '"mids"' into a data.frame

I have a dataset with about 11,500 rows and 15 factors. I only need to impute values for 3 of the factors, with only 2 of the factors having any significant number of missing values. I have been trying to use mice to create imputed datasets, and I am using the following code:
dataset<-read.csv("filename.csv",header=TRUE)
model<-success~1+course+medium+ethnicity+gender+age+enrollment+HSGPA+GPA+Pell+ethnicity*medium
library(mice)
vempty<-c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
v12<-c(0,0,0,0,0,0,0,1,1,1,1,0,1,1,1)
v13<-c(0,0,0,0,0,0,0,1,1,1,1,1,0,1,1)
v14<-c(0,0,0,0,0,0,0,1,1,1,1,1,1,0,1)
list<-list(vempty,vempty,vempty,vempty,vempty,vempty,vempty,vempty,vempty,vempty,vempty,v12,v13,v14,vempty)
predmatrix<-do.call(rbind,list)
MIdataset<-mice(dataset,m=2,predictorMatrix=predmatrix)
MIoutput<- pool(glm(model, data=MIdataset, family=binomial))
After this code, I get the error message:
Error in as.data.frame.default(data) :
cannot coerce class '"mids"' into a data.frame
I'm totally at a loss as to what this means. I had no trouble doing this same analysis just deleting the missing data and using regular glm. I'd also like to fit a multilevel logistic model on the imputed datasets using lmer (that's the next step after I get this to work with glm), so if there is anything I am doing wrong that will also impact that next step, that would be good to know too. I've tried searching for this error on the internet and I'm not getting anywhere. I'm just really learning R, so I'm also not that familiar with the environment yet.
Thanks for your time!
You need to apply the with.mids function. I think the last line in your code should look like this:
pool(with(MIdataset, glm(formula(model), family = binomial)))
You could also build the call from a string, though note that the parsed expression must be evaluated:
expr <- 'glm(success ~ course, family = binomial)'
pool(with(MIdataset, eval(parse(text = expr))))
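Putting it together with the objects from the question, spelling the formula out literally inside with() so that it is evaluated against each completed dataset, might look like this:
fits <- with(MIdataset, glm(success ~ course + medium + ethnicity + gender +
                              age + enrollment + HSGPA + GPA + Pell +
                              ethnicity * medium, family = binomial))
summary(pool(fits))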

Error "x and xtest must have the same number of columns" when using randomForest

I am getting an error when I am trying to use randomForest in R.
When I enter
basic3prox <- randomForest(activity ~.,data=train,proximity=TRUE,xtest=valid)
where train is a dataframe of training data and valid is a dataframe of test data,
I get the following error
Error in randomForest.default(m, y, ...) :
x and xtest must have same number of columns
But they do have the same number of columns. I used subset() to get them from the same original dataset, and when I run dim() I get
dim(train)
[1] 3237 563
dim(valid)
[1] 2630 563
So I am at a loss to figure out what is wrong here.
No, they don't; train has 562 predictor columns and 1 decision column (activity), so valid must have 562 columns (and the corresponding decision column must be passed to the ytest argument).
So the invocation should look like:
randomForest(activity ~ ., data = train, proximity = TRUE,
             xtest = valid[, names(valid) != 'activity'],
             ytest = valid[, 'activity'])
However, this is a dirty hack which will fail for more complex formulae and thus it shouldn't be used (even the authors tried to prohibit it, as Joran pointed out in comments). The correct, easier and faster way is to use separate objects for predictors and decisions instead of formulae, like this:
randomForest(trainPredictors, trainActivity, proximity = TRUE,
             xtest = testPredictors, ytest = testActivity)
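For instance, assuming activity is the response column in both data frames, those objects could be built like this:
trainPredictors <- train[, names(train) != "activity"]
trainActivity   <- train$activity
testPredictors  <- valid[, names(valid) != "activity"]
testActivity    <- valid$activity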
Maybe it is not a bug. If dim() reports different numbers, it means the training data and validation data really do have different dimensions. I have encountered this problem. My solution was the following: first, use names() to show the variables in the training data and in the validation data; you may see that they have different variables. Second, use setdiff() to "subtract" the surplus variables (if the training data has more variables than the validation data, drop the surplus variables from the training data, and vice versa), as sketched below. After that, the training data and validation data have the same variables, and you can use randomForest.
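A hypothetical sketch of that names()/setdiff() check:
# columns present in one data frame but not the other
extra_in_train <- setdiff(names(train), names(valid))
extra_in_valid <- setdiff(names(valid), names(train))
# drop the surplus columns so both sides match
train <- train[, !(names(train) %in% extra_in_train)]
valid <- valid[, !(names(valid) %in% extra_in_valid)]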

Handling missing/incomplete data in R--is there function to mask but not remove NAs?

As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance:
Many R functions have an na.rm flag that, when set to TRUE, removes the NAs:
> v <- mean(c(5, NA, 6, 12, NA, 87, 9, NA, 43, 67), na.rm = TRUE)
> v
[1] 32.71429
But if you want to deal with NAs before the function call, you need to do something like this:
To remove each NA from a vector:
vx <- vx[!is.na(vx)]
To replace each NA in a vector with a 0:
vx <- ifelse(is.na(vx), 0, vx)
To remove each entire row that contains an NA from a data frame:
dfx <- dfx[complete.cases(dfx), ]
All of these approaches permanently remove the NAs, or the rows that contain them.
Sometimes this isn't quite what you want, though: making an NA-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has no NA values itself, but which lost rows to a prior call to complete.cases()).
To be as clear as possible about what I'm looking for: Python/NumPy has a masked array class, with a mask method, which lets you conceal, but not remove, NAs during a function call. Is there an analogous facility in R?
Exactly what to do with missing data -- which may be flagged as NA if we know it is missing -- may well differ from domain to domain.
To take an example related to time series, where you may want to skip, or fill, or interpolate, or interpolate differently... the (very useful and popular) zoo package has a whole family of functions for NA handling:
zoo::na.approx zoo::na.locf
zoo::na.spline zoo::na.trim
allowing you to approximate (using different algorithms), carry observations forward or backward, interpolate with splines, or trim.
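A small sketch of two of these helpers (results shown as comments):
library(zoo)
z <- zoo(c(1, NA, NA, 4, 5))
na.approx(z)  # linear interpolation: 1 2 3 4 5
na.locf(z)    # last observation carried forward: 1 1 1 4 5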
Another example would be the numerous missing imputation packages on CRAN -- often providing domain-specific solutions. [ So if you call R a DSL, what is this? "Sub-domain specific solutions for domain specific languages" or SDSSFDSL? Quite a mouthful :) ]
But for your specific question: no, I am not aware of a bit-level flag in base R that allows you to mark observations as 'to be excluded'. I presume most R users would resort to functions like na.omit() et al or use the na.rm=TRUE option you mentioned.
It's good practice to look at the data and infer the type of missingness: is it MCAR (missing completely at random), MAR (missing at random), or MNAR (missing not at random)? Based on these three types, you can study the underlying structure of the missing values and conclude whether imputation is applicable at all (you're lucky if it's not MNAR, because in that case missing values are considered non-ignorable and are related to some unknown underlying influence, factor, process, variable... whatever).
Chapter 3 of "Interactive and Dynamic Graphics for Data Analysis with R and GGobi" by Di Cook and Deborah Swayne is a great reference on this topic.
You'll see the norm package in action in that chapter, but the Hmisc package also has data imputation routines. See also Amelia, cat (for imputing categorical missing values), mi, mitools, VIM, and vmv (for missing data visualisation).
Honestly, I still don't quite understand whether your question is about statistics or about R's missing-data imputation capabilities. I reckon I've provided good references on the second; as for the first: you can replace your NAs with a measure of central tendency (mean, median, or similar), which reduces the variability; or with a random constant "pulled out" of the observed (recorded) cases; or you can run a regression with the NA-containing variable as the criterion and the other variables as predictors, then assign the predicted values to the NAs... it's an elegant way to deal with NAs, but quite often it won't go easy on your CPU (I have a Celeron at 1.1GHz, so I have to be gentle).
This is an optimization problem... there's no definite answer; you should decide which method you're sticking with, and why. But it's always good practice to look at the data! =)
Be sure to check Cook & Swayne - it's an excellent, skilfully written guide. "Linear Models with R" by Faraway also contains a chapter about missing values.
So there.
Good luck! =)
The function na.exclude() sounds like what you want, although it's only an option for some (important) functions.
In the context of fitting and working with models, R has a family of generic functions for dealing with NAs: na.fail(), na.pass(), na.omit(), and na.exclude(). These are, in turn, arguments for some of R's key modeling functions, such as lm(), glm(), and nls() as well as functions in MASS, rpart, and survival packages.
All four generic functions basically act as filters. na.fail() will only pass the data through if there are no NAs; otherwise it fails. na.pass() passes all cases through. na.omit() and na.exclude() will both leave out cases with NAs and pass the other cases through. But na.exclude() attaches an attribute that tells functions processing the resulting object to take the NAs into account. You can see this attribute with attributes(na.exclude(some_data_frame)). Here's a demonstration of how na.exclude() alters the behavior of predict() in the context of a linear model.
fakedata <- data.frame(x = c(1, 2, 3, 4), y = c(0, 10, NA, 40))
## We can tell the modeling function how to handle the NAs
r_omitted <- lm(x~y, na.action="na.omit", data=fakedata)
r_excluded <- lm(x~y, na.action="na.exclude", data=fakedata)
predict(r_omitted)
# 1 2 4
# 1.115385 1.846154 4.038462
predict(r_excluded)
# 1 2 3 4
# 1.115385 1.846154 NA 4.038462
Your default na.action, by the way, is determined by options("na.action"); it starts out as na.omit, but you can change it.
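For instance, to inspect or change the session default:
getOption("na.action")             # typically "na.omit"
options(na.action = "na.exclude")  # make na.exclude the default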
