I have used the mice package to calculate imputations (10 iterations, 5 imputations). Because I am new to this area, my 'methodologist' - who is very patient with me! - wants to judge the imputed values themselves (so not the completed sets). I can't seem to find a way to collect all imputed values in one clear data frame.
The data are about youngsters who answered a lot of questions on a 5-point Likert scale. I have several imputation objects per age category. For example:
With my command imp_val_15plus <- Filter(Negate(is.null), imp_15plus$imp) I can see all imputed values per question and per id. So for example imp_val_15plus[1:2] gives:
$X02_07
1 2 3 4 5
qwertyuiop123456789 4 4 4 4 4
$X02_12
1 2 3 4 5
adfghjkl09823430233 2 2 5 2 2
zcvnmoi987412597800 1 2 1 1 2
So here are two questions (X02_07 and X02_12). The first had one NA (id qwe...789) and the latter had two NAs (ids adf...0233 and zcv...7800).
I would like to have a dataframe like this:
q_nr id 1 2 3 4 5
$X02_07 qwertyuiop123456789 4 4 4 4 4
$X02_12 adfghjkl09823430233 2 2 5 2 2
$X02_12 zcvnmoi987412597800 1 2 1 1 2
So I thought of a way to extract the values I need and then try to use a loop for all these values. I tried to extract the values:
names(imp_val_15plus[1]) gives me the question number [1] "X02_07"
row.names(imp_val_15plus[[1]]) gives me the id number [1] "qwertyuiop123456789"
But then I go wrong with the imputed values.
With as.integer(imp_val_15plus[[1]]) I get [1] 3 3 3 3 3 instead of what I wanted, [1] 4 4 4 4 4. The threes make sense, given the factor levels available for question X02_07: normally there would be the factor levels 1 - 5, but none of the youngsters used a 1, so my levels for this question are 2 - 5.
Have a look at str(imp_val_15plus[[1]]); it gives:
'data.frame': 1 obs. of 5 variables:
$ 1: Factor w/ 4 levels "2","3","4","5": 3
..- attr(*, "contrasts")= num [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "2" "3" "4" "5"
.. .. ..$ : chr "2" "3" "4"
$ 2: Factor w/ 4 levels "2","3","4","5": 3
..- attr(*, "contrasts")= num [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "2" "3" "4" "5"
.. .. ..$ : chr "2" "3" "4"
etc., etc.
It makes sense that I got the threes: 3 is the index of the level "4" within the levels "2","3","4","5". How do I obtain the value itself (the 4) instead of the 3? Or is there another way to present all imputed values (not the completed set!!) in a neat way?
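For reference, a sketch of one way to build the desired data frame (assuming imp_val_15plus is the filtered $imp list shown above). The key point is converting each factor with as.character() before as.integer(), which recovers the level labels (the 4) rather than the internal codes (the 3):

```r
# Sketch: flatten the list of per-question imputation data frames into one
# long data frame. as.character() first returns the factor labels, not the
# internal level codes.
imp_df <- do.call(rbind, lapply(names(imp_val_15plus), function(q) {
  block <- imp_val_15plus[[q]]
  vals  <- as.data.frame(lapply(block, function(x) as.integer(as.character(x))))
  names(vals) <- seq_along(vals)          # imputation numbers 1..5
  data.frame(q_nr = q, id = row.names(block), vals,
             row.names = NULL, check.names = FALSE)
}))
```

The resulting imp_df has one row per imputed cell, with columns q_nr, id, and one column per imputation, as in the desired layout.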
In the exemplary workflow of the iNEXT vignette, the ciliates dataset is analyzed to show diversity estimation based on raw species incidence data. ciliates is a List of 3 object: 3 habitats with multiple samples each. The dataset I am planning to analyze with iNEXT consists of species abundance data from 4 habitats with a varying number of samples.
This is what the ciliates dataset structure looks like:
List of 3
$ EtoshaPan : int [1:365, 1:19] 0 0 0 0 0 0 0 0 0 0 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:365] "Acaryophrya.collaris" ...
.. ..$ : chr [1:19] "x53" "x54" "x55" "x56" ...
$ CentralNamibDesert : int [1:365, 1:17] 0 0 0 0 0 1 0 0 0 0 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:365] "Acaryophrya.collaris" ...
.. ..$ : chr [1:17] "x31" "x32" "x34" "x35" ...
$ SouthernNamibDesert: int [1:365, 1:15] 0 0 0 0 0 0 0 0 0 0 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:365] "Acaryophrya.collaris" ...
.. ..$ : chr [1:15] "x9" "x17" "x19" "x20" ...
When I convert my abundance data to raw incidence data and create a List of 4 object (4 habitats with several samples each), just like the exemplary ciliates dataset, the iNEXT algorithm and the ggiNEXT visualization work smoothly. I am able to choose the number of samples of the most extensively sampled habitat as "endpoint", and the diversity curves of all habitats are extrapolated to this point. This is what our dataset structure looks like (after converting to raw incidence data):
List of 4
$ Leafs : num [1:314, 1:29] 0 0 0 0 0 0 1 1 0 0 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:314] "Foram1" "Foram2" "Foram3" "Foram4" ...
.. ..$ : chr [1:29] "Blatt_3B_1" "Blatt_3B_2" "Blatt_3B_3" "Blatt_3B_4" ...
$ Sprouts : num [1:314, 1:20] 0 0 1 1 1 0 1 0 0 0 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:314] "Foram1" "Foram2" "Foram3" "Foram4" ...
.. ..$ : chr [1:20] "Wurz_3B_1" "Wurz_3B_2" "Wurz_3B_3" "Wurz_3B_4" ...
$ Redalgae : num [1:314, 1:16] 0 0 1 0 1 0 1 0 0 0 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:314] "Foram1" "Foram2" "Foram3" "Foram4" ...
.. ..$ : chr [1:16] "ReAl_Co_1" "ReAl_Co_2" "ReAl_Co_3" "ReAl_Co_4" ...
$ Posidonia: num [1:314, 1:49] 0 0 0 0 0 0 1 1 0 0 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:314] "Foram1" "Foram2" "Foram3" "Foram4" ...
.. ..$ : chr [1:49] "Blatt_3B_1" "Blatt_3B_2" "Blatt_3B_3" "Blatt_3B_4" ...
However, we would also like to estimate diversity based on our original abundance data. When passing our List of 4 object with abundance data to the iNEXT function with datatype = "abundance", the following error appears:
Error in FUN(X[[i]], ...) : invalid data structure
So it seems that for abundance data, iNEXT does not work with complex list objects consisting of several habitats with multiple samples each, but only with list objects like the spider one:
List of 2
$ Girdled: num [1:26] 46 22 17 15 15 9 8 6 6 4 ...
$ Logged : num [1:37] 88 22 16 15 13 10 8 8 7 7 ...
Why is this the case? Imagine that for inter- and extrapolation I am not interested in diversity as a function of individuals (as shown in the vignette's workflow for the spider dataset), but as a function of sampling units. Without being able to pass a list object such as ciliates (with abundance data in this case, of course) to the iNEXT algorithm, I would have to pool all samples within each habitat into one single vector to make it work with iNEXT and to be able to compare the habitats. This way, however, we would lose the information stored in each individual sample.
Has this issue occurred before? Is there a simple solution for this problem I am currently not aware of? What is the reason behind not being able to analyze abundance data stored in a more complex list object such as ciliates with the iNEXT algorithm?
Thanks for the help.
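For what it's worth, the pooling workaround described above can be sketched in a couple of lines (abund_list is a placeholder name for the List of 4 with species in rows and samples in columns):

```r
# Sketch: pool the samples of each habitat by summing over rows (species),
# turning each species-by-sample matrix into a single abundance vector,
# i.e. the spider-like structure that iNEXT accepts for abundance data.
pooled <- lapply(abund_list, rowSums)
str(pooled)   # each element is now a plain numeric vector
# iNEXT::iNEXT(pooled, q = 0, datatype = "abundance")
```

This is only a sketch of the pooling the question mentions; as noted, it discards the per-sample information.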
EDIT: This is not a course with a live instructor and I cannot ask him directly in any way. If it were, I wouldn't be wasting your time here.
I am taking an R class that deals with the basics of machine learning. We are working with the Vanderbilt Titanic dataset available HERE. The goal is to use the R mice package to impute missing age values. I've already split my data into train and test samples, and str(training) outputs:
'data.frame': 917 obs. of 14 variables:
$ pclass : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
$ survived : Factor w/ 2 levels "0","1": 2 2 1 1 2 2 1 2 2 2 ...
$ name : chr "Allen, Miss. Elisabeth Walton" "Allison, Master. Hudson Trevor" "Allison, Miss. Helen Loraine" "Allison, Mrs. Hudson J C (Bessie Waldo Daniels)" ...
$ sex : Factor w/ 2 levels "female","male": 1 2 1 1 2 1 2 1 1 1 ...
$ age : num 29 0.92 2 25 48 63 71 18 24 26 ...
$ sibsp : int 0 1 1 1 0 1 0 1 0 0 ...
$ parch : int 0 2 2 2 0 0 0 0 0 0 ...
$ ticket : chr "24160" "113781" "113781" "113781" ...
$ fare : num 211.3 151.6 151.6 151.6 26.6 ...
$ cabin : chr "B5" "C22 C26" "C22 C26" "C22 C26" ...
$ embarked : Factor w/ 4 levels "","C","Q","S": 4 4 4 4 4 4 2 2 2 4 ...
$ boat : chr "2" "11" "" "" ...
$ body : int NA NA NA NA NA NA 22 NA NA NA ...
$ home.dest: chr "St Louis, MO" "Montreal, PQ / Chesterville, ON" "Montreal, PQ / Chesterville, ON" "Montreal, PQ / Chesterville, ON" ...
The instructor then goes on to write:
factor_vars <- c('pclass', 'sex', 'embarked', 'survived')
training[factor_vars] <- lapply(training[factor_vars], function(x) as.factor(x))
impute_variables <- c('pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked')
mice_model <- mice(training[,impute_variables], method='rf')
mice_output <- complete(mice_model)
mice_output
I understand the factor_vars piece - these variables are labelled as factors in the structure output. What I don't understand is how the impute_variables were chosen or what they mean exactly. Are they arbitrarily chosen, perhaps on the basis that the instructor believed things like 'pclass' (which is the indicator for steerage, coach, or first class) may help predict age (with older people being able to afford first class perhaps) while things like 'cabin' would have no relevance?
Furthermore, in the line mice_model <- mice(training[,impute_variables], method='rf'), which portion of the function is declaring that we want to be imputing the age of the passengers?
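For context: as far as I understand mice, nothing in that call singles out age. mice imputes every variable in the data frame it receives that contains missing values. A sketch for checking this (using the mice_model object from above):

```r
# Sketch: mice fills in every column of the supplied data that has NAs.
colSums(is.na(training[, impute_variables]))            # which columns have NAs
names(Filter(function(x) nrow(x) > 0, mice_model$imp))  # variables mice imputed
head(mice_model$imp$age)                                # the imputed age values
```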
I'm using the aggregate function to summarise some data. The data is loans data; I have the ContractNum and LoanAmount. I want to aggregate the data by StartDate, count the number of loans, and average the loan amount.
Here is a sample of the data and the function that I use:
ContractNum <- c("RHL-1","RHL-2","RHL-3","RHL-3")
StartDate <- c("2016-11-01","2016-11-01","2016-12-01","2016-12-01")
LoanPurpose <- c("Personal","Personal","HomeLoan","Investment")
LoanAmount <- c(200,500,600,150)
dat <- data.frame(ContractNum,StartDate,LoanPurpose,LoanAmount)
aggr.data <- aggregate(
  cbind(LoanAmount, ContractNum) ~ StartDate + LoanPurpose,
  data = dat,
  FUN = function(x) c(count = mean(x), length(x))
)
When I look at the results of the aggregate function, they look ok:
> aggr.data
StartDate LoanPurpose LoanAmount.count LoanAmount.V2 ContractNum.count ContractNum.V2
1 2016-12-01 HomeLoan 600 1 3.0 1.0
2 2016-12-01 Investment 150 1 3.0 1.0
3 2016-11-01 Personal 350 2 1.5 2.0
But when I look at the structure of it, it seems to have created a sub-list:
> str(aggr.data)
'data.frame': 3 obs. of 4 variables:
$ StartDate : Factor w/ 2 levels "2016-11-01","2016-12-01": 2 2 1
$ LoanPurpose: Factor w/ 3 levels "HomeLoan","Investment",..: 1 2 3
$ LoanAmount : num [1:3, 1:2] 600 150 350 1 1 2
..- attr(*, "dimnames")=List of 2
.. ..$ : NULL
.. ..$ : chr "count" ""
$ ContractNum: num [1:3, 1:2] 3 3 1.5 1 1 2
..- attr(*, "dimnames")=List of 2
.. ..$ : NULL
.. ..$ : chr "count" ""
How do I get rid of this sub-list so that I can access each column the way I would normally access a data frame? I understand that in the code I've asked for a mean of ContractNum, which is not meaningful, but I can just get rid of that column.
Thank you
Just do do.call(data.frame, ...) on aggr.data to unnest the matrix columns:
aggr.data <- do.call(data.frame, aggr.data)
str(aggr.data)
#'data.frame': 3 obs. of 6 variables:
# $ StartDate : Factor w/ 2 levels "2016-11-01","2016-12-01": 2 2 1
# $ LoanPurpose : Factor w/ 3 levels "HomeLoan","Investment",..: 1 2 3
# $ LoanAmount.count : num 600 150 350
# $ LoanAmount.V2 : num 1 1 2
# $ ContractNum.count: num 3 3 1.5
# $ ContractNum.V2 : num 1 1 2
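After the do.call step, each former matrix column is an ordinary atomic vector, so normal data-frame access works; for example:

```r
# The flattened columns can now be accessed the usual way:
aggr.data$LoanAmount.count           # per-group mean loan amount
aggr.data$ContractNum.V2             # per-group loan count
aggr.data$ContractNum.count <- NULL  # drop the meaningless mean of ContractNum
```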
I've used the package haven to read SPSS data into R. All seems ok, except that when I try to subset the data it doesn't seem to behave correctly. Here's the code (I don't have SPSS to create example data and can't post the real stuff):
require(haven)
df <- read_spss("filename1.sav")
tmp <- df[as_factor(df$variable1) == "factor1",]
tmp <- tmp[!is.na(tmp$variable2), ]
The above df has NA values scattered throughout. I expected the above to subset the data, keeping only rows where variable1 is "factor1" and discarding all rows with NA in variable2. The first subset works as expected, but the second does not: it removes rows, yet NAs are still present.
I suspect the issue has something to do with the way haven structures the imported data and uses the class labelled instead of an actual factor variable, but it's over my head. Anyone know what could be happening and how to accomplish the same?
Here's the structure of df, variable1 and variable2:
> str(df)
'data.frame': 4573 obs. of 316 variables:
> str(df$variable1)
Class 'labelled' atomic [1:4573] 9 9 9 14 8 8 2 4 8 16 ...
..- attr(*, "labels")= Named num [1:18] 1 2 3 4 5 6 7 8 9 10 ...
.. ..- attr(*, "names")= chr [1:18] "factor1" "factor2" "factor3" "factor4" ...
> str(df$variable2)
Class 'labelled' atomic [1:4573] 3 NA 3 NA 3 NA 1 1 NA NA ...
..- attr(*, "labels")= Named num [1:3] 1 2 3
.. ..- attr(*, "names")= chr [1:3] "Sponsor" "Not a Sponsor" "Don't Know"
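A note on a likely cause, plus a sketch of a workaround: df$variable1 == "factor1" evaluates to NA wherever variable1 is NA, and indexing a data frame with an NA logical produces all-NA rows, which is why NAs reappear after the first subset. Wrapping the comparison in which() drops those rows (a sketch against the column names above, untested without the .sav file):

```r
# Sketch: which() discards the NA results of the comparison, so no
# all-NA rows are carried into the subset.
tmp <- df[which(as_factor(df$variable1) == "factor1"), ]
tmp <- tmp[!is.na(tmp$variable2), ]
```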
When working on a hierarchical/multilevel/panel dataset, it may be very useful to adopt a package which returns the within- and between-group standard deviations of the available variables.
This is something that with the following data in Stata can be easily done through the command
xtsum, i(momid)
I have searched, but I cannot find any R package which can do that.
edit:
Just to fix ideas, an example of hierarchical dataset could be this:
son_id mom_id hispanic mom_smoke son_birthweigth
1 1 1 1 3950
2 1 1 0 3890
3 1 1 0 3990
1 2 0 1 4200
2 2 0 1 4120
1 3 0 0 2975
2 3 0 1 2980
The "multilevel" structure is given by the fact that each mother (higher level) has two or more sons (lower level). Hence, each mother defines a group of observations.
Accordingly, each dataset variable can vary either both between and within mothers, or only between mothers. son_birthweigth varies between mothers but also within the same mother; hispanic, instead, is fixed for a given mother.
For example, the within-mother variance of son_birthweigth is:
# mom1 means
bwt_mean1 <- (3950+3890+3990)/3
bwt_mean2 <- (4200+4120)/2
bwt_mean3 <- (2975+2980)/2
# Within-mother variance for birthweigth
((3950-bwt_mean1)^2 + (3890-bwt_mean1)^2 + (3990-bwt_mean1)^2 +
(4200-bwt_mean2)^2 + (4120-bwt_mean2)^2 +
(2975-bwt_mean3)^2 + (2980-bwt_mean3)^2)/(7-1)
While the between-mother variance is:
# overall mean of birthweigth:
# mean <- sum(data$son_birthweigth)/length(data$son_birthweigth)
mean <- (3950+3890+3990+4200+4120+2975+2980)/7
# between variance:
((bwt_mean1-mean)^2 + (bwt_mean2-mean)^2 + (bwt_mean3-mean)^2)/(3-1)
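The same quantities can be checked programmatically; a base-R sketch, assuming the example rows above are in a data frame dat:

```r
# Within- and between-mother variation for son_birthweigth,
# reproducing the hand calculations above.
dat <- data.frame(
  mom_id          = c(1, 1, 1, 2, 2, 3, 3),
  son_birthweigth = c(3950, 3890, 3990, 4200, 4120, 2975, 2980)
)
mom_means  <- tapply(dat$son_birthweigth, dat$mom_id, mean)
grand_mean <- mean(dat$son_birthweigth)
# within: deviations of each son from his own mother's mean
within_var  <- sum((dat$son_birthweigth -
                    ave(dat$son_birthweigth, dat$mom_id))^2) / (nrow(dat) - 1)
# between: deviations of the mother means from the grand mean
between_var <- sum((mom_means - grand_mean)^2) / (length(mom_means) - 1)
```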
I don't know what your Stata command should reproduce, but to answer the second part of the question about hierarchical structure: it is easy to do this with a list.
For example, you can define a structure like this:
tree = list(
"var1" = list(
"panel" = list(type ='p',mean = 1,sd=0)
,"cluster" = list(type = 'c',value = c(5,8,10)))
,"var2" = list(
"panel" = list(type ='p',mean = 2,sd=0.5)
,"cluster" = list(type="c",value =c(1,2)))
)
To create this, lapply is convenient for working with lists:
tree <- lapply(list('var1','var2'),function(x){
ll <- list(panel= list(type ='p',mean = rnorm(1),sd=0), ## I use symbols here, not names
           cluster= list(type = 'c',value = rnorm(3))) ## R prefers symbols
})
names(tree) <-c('var1','var2')
You can view the structure with str:
str(tree)
List of 2
$ var1:List of 2
..$ panel :List of 3
.. ..$ type: chr "p"
.. ..$ mean: num 0.284
.. ..$ sd : num 0
..$ cluster:List of 2
.. ..$ type : chr "c"
.. ..$ value: num [1:3] 0.0722 -0.9413 0.6649
$ var2:List of 2
..$ panel :List of 3
.. ..$ type: chr "p"
.. ..$ mean: num -0.144
.. ..$ sd : num 0
..$ cluster:List of 2
.. ..$ type : chr "c"
.. ..$ value: num [1:3] -0.595 -1.795 -0.439
Edit after OP clarification
I think the package reshape2 is what you want; I will demonstrate it here.
The idea is that, in order to do the multilevel analysis, we need to reshape the data:
first, divide the variables into two groups, identifier variables and measured variables.
library(reshape2)
dat.m <- melt(dat,id.vars=c('son_id','mom_id')) ## other columns are measured
str(dat.m)
'data.frame': 21 obs. of 4 variables:
$ son_id : Factor w/ 3 levels "1","2","3": 1 2 3 1 2 1 2 1 2 3 ...
$ mom_id : Factor w/ 3 levels "1","2","3": 1 1 1 2 2 3 3 1 1 1 ...
$ variable: Factor w/ 3 levels "hispanic","mom_smoke",..: 1 1 1 1 1 1 1 2 2 2 ...
$ value : num 1 1 1 0 0 0 0 1 0 0 ..
Once you have the data in "molten" form, you can "cast" it to rearrange it into the shape that you want:
# per-mom means for all variables
acast(dat.m,variable~mom_id,mean)
1 2 3
hispanic 1.0000000 0 0.0
mom_smoke 0.3333333 1 0.5
son_birthweigth 3943.3333333 4160 2977.5
# within-mother sum of squared deviations for each variable
acast(dat.m,variable~mom_id,function(x) sum((x-mean(x))^2))
1 2 3
hispanic 0.0000000 0 0.0
mom_smoke 0.6666667 0 0.5
son_birthweigth 5066.6666667 3200 12.5
## overall mean of each variable
acast(dat.m,variable~.,mean)
[,1]
hispanic 0.4285714
mom_smoke 0.5714286
son_birthweigth 3729.2857143
I know this question is four years old, but recently I wanted to do the same in R and came up with the following function. It depends on dplyr and tibble. Here df is the data frame, columns is a numeric vector of column indices to summarise, and individuals is the name of the column identifying the individuals.
xtsumR<-function(df,columns,individuals){
df<-dplyr::arrange_(df,individuals)
panel<-tibble::tibble()
for (i in columns){
v<-df %>% dplyr::group_by_() %>%
dplyr::summarize_(
mean=mean(df[[i]]),
sd=sd(df[[i]]),
min=min(df[[i]]),
max=max(df[[i]])
)
v<-tibble::add_column(v,variacao="overall",.before=-1)
v2<-aggregate(df[[i]],list(df[[individuals]]),"mean")[[2]]
sdB<-sd(v2)
varW<-df[[i]]-rep(v2,times=table(df[[individuals]])) # subtract each individual's mean (df is sorted by individuals)
varW<-varW+mean(df[[i]])
sdW<-sd(varW)
minB<-min(v2)
maxB<-max(v2)
minW<-min(varW)
maxW<-max(varW)
v<-rbind(v,c("between",NA,sdB,minB,maxB),c("within",NA,sdW,minW,maxW))
panel<-rbind(panel,v)
}
var<-rep(names(df)[columns])
n1<-rep(NA,length(columns))
var<-c(rbind(var,n1,n1))
panel$var<-var
panel<-panel[c(6,1:5)]
names(panel)<-c("variable","variation","mean","standard.deviation","min","max")
panel[3:6]<-as.numeric(unlist(panel[3:6]))
panel[3:6]<-round(unlist(panel[3:6]),2)
return(panel)
}