How to extract aggregated imputed data from R-package 'mice'?

I have a question regarding the aggregation of imputed data as created by the R-package 'mice'.
As far as I understand it, the complete() command of 'mice' is used to extract the imputed values of, e.g., the first imputation. However, when running a total of ten imputations, I am not sure which imputed values to extract. Does anyone know how to extract the (aggregated) imputed data across all imputations?
Since I would like to enter the data into MS Excel and perform further calculations in another software tool, such a command would be very helpful.
Thank you for your comments. A simple example (from 'mice' itself) can be found below:
R> library("mice")
R> nhanes
R> imp <- mice(nhanes, seed = 23109) #create imputation
R> complete(imp) # extract a completed data set (by default the first of the five)
How can I aggregate the five imputed data sets and extract the imputed values to Excel?

I had a similar issue.
I used the code below, which works well enough for numeric variables.
For the others, I thought about randomly choosing one of the imputed results (because averaging can disrupt them).
My suggested code (for numeric variables) is:
tempData <- mice(data, m = 5, maxit = 50, meth = 'pmm', seed = 500)
completedData <- complete(tempData, 'long')
a <- aggregate(completedData[, 3:6], by = list(completedData$.id), FUN = mean)
You should then join the results back, as sketched below.
I think 'Hmisc' is a better package for this.
If you have already found a nicer / more elegant / built-in solution, please share it with us.
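For illustration, a minimal sketch of that aggregation and join on the nhanes example from the question (assuming, as above, that all imputed variables are numeric so averaging makes sense):
library(mice)
imp <- mice(nhanes, m = 5, meth = 'pmm', seed = 500, printFlag = FALSE)
long <- complete(imp, 'long')   # columns: .imp, .id, then the data columns
# average the data columns over the m imputations, one row per original case
avg <- aggregate(long[, -(1:2)], by = list(.id = long$.id), FUN = mean)
head(avg)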

You should use complete(imp, action = "long") to get values for each imputation. If you look at ?complete, you will find:
complete(x, action = 1, include = FALSE)
Arguments:
x: An object of class mids as created by the function mice().
action: If action is a scalar between 1 and x$m, the function returns the data with imputation number action filled in. Thus, action = 1 returns the first completed data set, action = 2 returns the second completed data set, and so on. The value of action can also be one of the following strings: 'long', 'broad', 'repeated'. See 'Details' for the interpretation.
include: Flag to indicate whether the original data with the missing values should be included. This requires that action is specified as 'long', 'broad' or 'repeated'.
So, the default is to return the first completed data set. In addition, the action argument can also be a string: 'long', 'broad', or 'repeated'. If you pass 'long', it will give you the data in long format, with all imputations stacked. You can also set include = TRUE if you want the original data with the missing values included.
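Since the goal in the question was to move the data into MS Excel, one possible final step is to export the long-format data to CSV (the file name here is just an example):
long <- complete(imp, action = 'long', include = TRUE)
# write all m completed data sets (stacked) to a CSV file that Excel can open
write.csv(long, file = 'imputations_long.csv', row.names = FALSE)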

OK, but you still have to choose one imputed data set for further analyses... I think the best option is to analyse each imputation and pool the results afterwards:
fit <- with(data = imp, expr = lm(bmi ~ hyp + chl))
pool(fit)
But I also assume it's not forbidden to use just one of the imputed data sets ;)

How to deal with NaN values in R?

I'm testing for random intercepts in preparation for growth curve modeling.
I first created a wide subset and then converted it to a long data set.
When fitting ModelM1 <- gls(ent_act ~ 1, data = school_l) on the long data set, I get an error message because I have missing values; in my long subset these values appear as NaN.
After applying temp <- na.omit(school_l$ent_act), I can fit ModelM1. But when I fit ModelM2 <- lme(temp ~ 1, random = ~1|ID, data = school_l), I get the error message that my variables are of unequal lengths.
How can I deal with those missing values?
Any ideas or recommendations?
What you might have success with is making a temp data frame where you remove entire rows, indexed by the negation of the missing condition, !is.na(school_l$ent_act):
temp <- school_l[ !is.na(school_l$ent_act), ]
Then re-run the lme call on that subset; there should now be no mismatch of variable lengths.
ModelM2 <- lme(ent_act ~ 1, random = ~1|ID, data = temp)
Note that using school_l is potentially confusing, because it looks so much like school_1 when viewed in a font such as Times.
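As an alternative sketch, the nlme fitting functions also accept an na.action argument, so lme() can drop the incomplete rows itself (assuming the NaN entries are the only incomplete values):
library(nlme)
# na.omit drops the rows with missing ent_act before the model is fit
ModelM2 <- lme(ent_act ~ 1, random = ~1|ID, data = school_l, na.action = na.omit)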

Matched Sampling package or function for large dataset

I need an R package or function that will allow me to match controls to cases in a large dataset of 5 million subjects.
I have tried a few packages; my problems are summarized below. I only tried to match on a single covariate, and I will most likely need to match on several.
Package MatchIt: the nearest neighbor, optimal, and genetic methods all run for hours and hours. The "cem" method runs really quickly, but I need to know which cases were matched/unmatched so I can do further analysis on the matched subset. Running match.data() on the cem results only supplies the weights to be used in a regression, not the matched subset. The paired function in cem would work if I wanted one-to-one matching, but I want to retain as many controls as possible.
matchControls() in the e1071 package: runs for a long time and then returns "not able to allocate vector of size 1352 GB".
Match() from the Matching package: just runs and runs...
quickmatch() from the quickmatch package: it ran quickly, but I am not sure I'm using the function correctly, or how to extract the matched data from the "qm_matching" object it returns. Below is my attempt using quickmatch on fake data.
library(MatchIt)
library(cem)
library(Matching)
library(rgenoud)
library(quickmatch)
set.seed(100)
control_df <- data.frame(Group = factor("Control"), value = rnorm(1400000, 95, 2))
set.seed(101)
treatment_df <- data.frame(Group = factor("Treatment"), value = c(rnorm(500000, 92, 2), rnorm(100000, 50, 5)))
dat <- rbind(control_df, treatment_df)
covariate_balance(dat$Group, dat$value, matching = NULL,
                  normalize = TRUE, all_differences = TRUE)
my_distances <- distances(dat, dist_variables = c("value"))
matchedDat <- quickmatch(my_distances, dat$Group)
matchedDat.df <- data.frame(matchedDat)
I'm not sure what to do with the returned object. I think quickmatch may be the most viable option. The covariate_balance result shows a decent amount of imbalance between the Control and Treatment groups, so some amount of matching can be done.
Specifically, how do I obtain matched results, i.e. flag the subjects that were successfully matched between Control and Treatment? The cluster_label column in matchedDat.df implies that the function is creating a large number of clusters; how, if at all, can I restrict this?
Any help with respect to speeding up some of the functions above, or new suggestions, would be appreciated.
After a more careful reading of the cem documentation, I think I have a solution to my problem using either the MatchIt package or the cem package.
library(cem)
library(tidyverse)
set.seed(100)
control_df <- data.frame(Group = factor("Control"), value = rnorm(1400000, 95, 2))
set.seed(101)
treatment_df <- data.frame(Group = factor("Treatment"), value = c(rnorm(500000, 92, 2), rnorm(100000, 50, 5)))
dat <- rbind(control_df, treatment_df) %>% rownames_to_column()
cem.match <- cem(treatment = "Group", baseline.group = "Control", data = dat, keep.all = TRUE, drop = "rowname")
matchedData <- data.frame(Group.check = cem.match$groups, matched = cem.match$matched, weights = cem.match$w) %>%
  rownames_to_column() %>%
  inner_join(dat, by = "rowname") %>%
  filter(matched == TRUE)
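For comparison, a rough sketch of the same idea via MatchIt's "cem" method: the weights component of the returned object is zero for unmatched units, so the matched subset can be flagged directly (the treat column is a helper added here, since matchit expects a binary treatment indicator):
library(MatchIt)
dat$treat <- as.integer(dat$Group == "Treatment")  # 1 = Treatment, 0 = Control
m.out <- matchit(treat ~ value, data = dat, method = "cem")
matched <- dat[m.out$weights > 0, ]                # weight 0 marks unmatched subjects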

Data frame/list of correlated variables specified above a certain limit

I am creating a correlation matrix, and via the findCorrelation() function from the caret package I am identifying parameters that have a correlation higher than 0.75 with another parameter.
After that I remove the correlated parameters returned by the findCorrelation command.
highlyCorrelated <- findCorrelation(correlationMatrix, cutoff = 0.75, verbose = FALSE)
correlated_var <- colnames(data[, highlyCorrelated])
data.dat <- data[!(names(data) %in% correlated_var)]
For completeness' sake, when presenting later results, I want to show a list of which parameters were removed, and also because of which correlation(s).
Is there a way to generate a data frame that contains the removed parameter in the first column, and in the following columns the parameter(s) that it was correlated with?
I can look up specific correlations using:
correlationMatrix[correlationMatrix[x, ] > 0.75, x]
where x is an identified parameter with a correlation higher than 0.75 with other parameter(s). But I am not sure how to turn this into a data frame or table in order to present the findings.
Help is much appreciated!
Regards,
Eddy
I got somewhere using the packages plyr and rowr:
cor.table <- matrix(, nrow = 0, ncol = 0)
for (i in sort(highlyCorrelated)) {
  cor.table.i <- c(paste(colnames(correlationMatrix)[c(i)], ":"),
                   paste(names(correlationMatrix[abs(correlationMatrix[i, ]) > 0.75, i])))
  cor.table <- cbind.fill(cor.table, cor.table.i, fill = NA)
}
cor.table <- t(cor.table[c(-1)])
It's a bit of a workaround, and maybe not the prettiest, but at least I get something I can export.
I can't get rid of the fact that the output shows each parameter as correlated with itself, for some reason.
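A sketch that avoids the self-correlation issue (assuming correlationMatrix and highlyCorrelated as above): drop each parameter's own index before looking up its partners, then pad the rows to a common width:
partners <- lapply(sort(highlyCorrelated), function(i) {
  hits <- which(abs(correlationMatrix[i, ]) > 0.75)
  hits <- setdiff(hits, i)                    # exclude the parameter itself
  colnames(correlationMatrix)[hits]
})
names(partners) <- colnames(correlationMatrix)[sort(highlyCorrelated)]
n <- max(lengths(partners))
# one row per removed parameter, padded with NA to equal length
cor.table <- t(sapply(partners, function(p) c(p, rep(NA, n - length(p)))))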

R -- make a prediction based upon multiple fields (factors), but not the entire data frame

OK, I have a data frame with 250 observations of 9 variables. For simplicity, let's just label them A through I.
I've done all the standard stuff (converting things to int or factor, creating the data partition, test and train sets, etc.).
What I want to do is use columns A and B to predict column E. I don't want to use the entire set of nine columns, just these three, when I make my prediction.
I tried only using the limited columns in the prediction, like this:
myPred <- predict(rfModel, newdata=myData)
where rfModel is my model, and myData is a data frame containing only the two fields I want to use. Unfortunately, I get the following error:
Error in predict.randomForest(rfModel, newdata = myData) :
variables in the training data missing in newdata
Honestly, I'm very new to R, and I'm not even sure this is feasible. I think all the data I'm collecting (the nine fields) is important for training, but I can't figure out how to make a prediction using just the "resultant" field (in this case field E) and the other two fields (A and B), while keeping the other important data.
Any advice is greatly appreciated. I can post some of the code if necessary.
I'm just trying to learn more about things like this.
I assume you used the random forest method:
library(randomForest)
# only A and B are listed as predictors, so only they enter the model
model <- randomForest(E ~ A + B, data = train)
pred <- predict(model, newdata = test)
As you can see, in this example only the A and B columns are used to build the model; the others are left out of model building (though not removed from the dataset). If you want to include all of them, use E ~ . instead. This also means that if you build your model on all columns, you need those columns in the test set too; predict won't work without them. If the test data have only the A and B columns, the model has to be built on them alone.
Hope this helps.
As I mentioned in my comment above, perhaps you should be building your model using only the A and B columns. If you can't, or don't want to do this, then one workaround would be to simply plug in the median values for the other columns when calling predict. Something like this:
myData <- data.frame(A = data$A, B = data$B,
                     C = median(data$C), D = median(data$D),
                     F = median(data$F), G = median(data$G),
                     H = median(data$H), I = median(data$I))
# the column names must match the predictors the model was trained with
myPred <- predict(rfModel, newdata = myData)
This would allow you to keep your current model, built with 9 predictors. Of course, you would be assuming average behavior for all predictors except A and B, so the result might not behave too differently from a model built solely on A and B.

perform function on pairs of columns

I am trying to run some Monte Carlo simulations on animal position data. So far, I have sampled 100 X and Y coordinates, 100 times. This results in a list of 200. I then convert this list into a data frame that is more conducive to the eventual function I want to run on each sample (kernel.area).
Now I have a data frame with 200 columns, and I would like to perform the kernel.area function on each successive pair of columns.
I can't reproduce my own data here very well, so I've tried to give a basic example just to show the structure of the data frame I'm working with. I've included the for loop I've tried so far; I am still an R novice and would appreciate any suggestions.
# generate a data frame representing X and Y positions
df <- data.frame(x = seq(1:200), y = seq(1:200))
# 100 replications of sampling 100 "positions"
resamp <- replicate(100, df[sample(nrow(df), 100), ])
# convert to data frame (kernel.area needs an xy data frame)
df2 <- do.call("rbind", resamp[1:2, ])
# xy positions need to be in columns for kernel.area
df3 <- t(df2)
# edit: kernel.area requires an id field, but I am only dealing with one
# individual, so I'll construct a fake one of the same length as the positions
id <- replicate(100, c("id"))
id <- data.frame(id)
Here is the structure of the for loop I've tried (edited since first post):
for (j in seq(1, ncol(df3) - 1, 2)) {
  kud <- kernel.area(df3[, j:(j + 1)], id = id, kern = "bivnorm", unin = c("m"), unout = c("km2"))
  print(kud)
}
My end goal is to calculate kernel.area for each resampling event (i.e. rows 1:100 for every pair of columns up to 200) and to be able to combine the results in a data frame. However, after running the loop, I get this error message:
Error in df[, 1] : incorrect number of dimensions
Edit: I realised my id format was not the same as my data frame, so I changed it, and now I get the error:
Error in kernelUD(xy, id, h, grid, same4all, hlim, kern, extent) :
id should have the same length as xy
First, a disclaimer: I have never worked with the package adehabitat, which has a function kernel.area and which I assume you are using. Perhaps you could confirm which package contains the function in question.
I think there are a couple of suggestions I can make that are independent of knowledge of the specific package, though.
The first lies in the creation of df3. This should probably be df3 <- t(df2), but that is most likely correct in your actual code and just a typo in your post.
The second suggestion has to do with the way you subset df3 in the loop. j:j+1 is just a single number, since : has a higher precedence than + (see ?Syntax for the order in which mathematical operations are evaluated in R). To get the desired two columns, use j:(j+1) instead, as demonstrated below.
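A quick demonstration of the precedence point:
j <- 3
j:j + 1      # evaluates as (3:3) + 1, i.e. the single number 4
j:(j + 1)    # 3 4: the two column indices you want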
EDIT:
When loading adehabitat, I was warned to "Be careful" and to use the related new packages, among which is adehabitatHR, which also contains a kernel.area function. It has slightly different syntax and behavior, but perhaps it would be worthwhile to examine. Using adehabitatHR (I had to install from source, since the package is not available for R 2.15.0), I was able to do the following.
library(adehabitatHR)
for (j in seq(1, ncol(df3) - 1, 2)) {
  kud <- kernelUD(SpatialPoints(df3[, j:(j + 1)]), kern = "bivnorm")
  kernAr <- kernel.area(kud, unin = c("m"), unout = c("km2"))
  print(kernAr)
}
detach(package:adehabitatHR, unload = TRUE)
This prints results; note that kernelUD() is called first, and kernel.area() is then applied to its output.
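Since the stated end goal was to combine the results in a data frame, here is a minimal sketch of one way to collect the areas instead of printing them (df3 as constructed above; the iter column is just an illustrative label for the resampling event):
res <- list()
for (j in seq(1, ncol(df3) - 1, 2)) {
  kud <- kernelUD(SpatialPoints(df3[, j:(j + 1)]), kern = "bivnorm")
  # kernel.area() returns one area per percentage level; keep them as one row
  res[[length(res) + 1]] <- data.frame(iter = (j + 1) / 2,
                                       t(kernel.area(kud, unin = "m", unout = "km2")))
}
allAreas <- do.call(rbind, res)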
