Subset columns based on another column's missing values - R

My dataset is pretty big: about 2,000 variables and 1,000 observations.
I want to run a model for each variable, using the other variables as predictors.
To do so, for each dependent variable I need to drop any variable that has missing values in rows where the dependent variable does not.
For instance, for variable "A" I need to drop variables C and D, because they have missing values in rows where A does not. For variable "C" I can keep variable "D".
data <- read.table(text="
A B C D
1 3 9 4
2 1 3 4
NA NA 3 5
4 2 NA NA
2 5 4 3
1 1 1 2",header=T,sep="")
I think I need to make a loop to go through each variable.

I think this gets what you need:
for (i in 1:ncol(data)) {
  # keep only the rows where column 'i' (the column we currently care about) is not NA
  tmp <- data[!is.na(data[, i]), ]
  # column 'i' now has no NA values, so drop every other column
  # that still contains NAs
  tmp <- tmp[sapply(tmp, function(x) !any(is.na(x)))]
  # run your model on 'tmp'
}
For each iteration of i, the tmp data frame looks like:
'data.frame': 5 obs. of 2 variables:
$ A: int 1 2 4 2 1
$ B: int 3 1 2 5 1
'data.frame': 5 obs. of 2 variables:
$ A: int 1 2 4 2 1
$ B: int 3 1 2 5 1
'data.frame': 5 obs. of 2 variables:
$ C: int 9 3 3 4 1
$ D: int 4 4 5 3 2
'data.frame': 5 obs. of 2 variables:
$ C: int 9 3 3 4 1
$ D: int 4 4 5 3 2
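If the end goal is to fit, say, a linear model of each column on the remaining usable columns, a minimal sketch along the same lines might look like the following (the choice of lm and reformulate here is just an illustration, not part of the original question):
models <- vector("list", ncol(data))
names(models) <- names(data)
for (i in seq_along(data)) {
  # same filtering as above: drop rows where column 'i' is NA,
  # then drop any remaining column that still contains NAs
  tmp <- data[!is.na(data[, i]), ]
  tmp <- tmp[sapply(tmp, function(x) !any(is.na(x)))]
  predictors <- setdiff(names(tmp), names(data)[i])
  # regress the current column on whatever complete columns remain
  models[[i]] <- lm(reformulate(predictors, response = names(data)[i]), data = tmp)
}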

Here is a way to get the usable variables for any column you choose:
getVars <- function(data, col) {
  tmp <- !sapply(data[!is.na(data[[col]]), ], function(x) any(is.na(x)))
  names(data)[tmp & names(data) != col]
}
PS: I'm on my phone, so I didn't test the above or have the chance to style it properly.
EDIT: Styling fixed!
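For reference, on the example data above the function should behave like this (a quick sanity check, not taken from the original answer):
getVars(data, "A")
## [1] "B"
getVars(data, "C")
## [1] "D"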

Related

What is the difference between df[1] and df[,1] (in data frames)?

I've noticed they give the same result, except that df[1] returns the column as a data frame while df[,1] returns a vector. I've also noticed they give exactly the same result with tibbles. Is that all there is to it?
The "[" function has (at least) two different forms. When used on a dataframe which is a special form of a list with two arguments it returns the contents of the rows and columns specified columns. It does have an optional argument "drop" whose default is TRUE. If it is set to FALSE, then you get the subset as a dataframeWhen used with one argument, it returns the columns itself without loss of the "data.frame" class attribute. The columns are actually lists in their own right.
The other extraction function, "[[" also returns the contents only.
dat <- data.frame(A=1:10,B=letters[1:10])
> str(dat[1:5,])
'data.frame': 5 obs. of 2 variables:
$ A: int 1 2 3 4 5
$ B: Factor w/ 10 levels "a","b","c","d",..: 1 2 3 4 5
> str(dat[1:5,1])
int [1:5] 1 2 3 4 5
> str(dat[1])
'data.frame': 10 obs. of 1 variable:
$ A: int 1 2 3 4 5 6 7 8 9 10
> str(dat[[1]])
int [1:10] 1 2 3 4 5 6 7 8 9 10
> str(dat[,1,drop=FALSE])
'data.frame': 10 obs. of 1 variable:
$ A: int 1 2 3 4 5 6 7 8 9 10
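The question also mentions tibbles. As a small sketch (assuming the tibble package is available): tibbles never drop down to a vector with [, which is why df[1] and df[,1] look the same there, while [[ still returns the underlying vector.
library(tibble)
tb <- as_tibble(data.frame(A = 1:10, B = letters[1:10]))
str(tb[1])    # one-column tibble
str(tb[, 1])  # still a one-column tibble: no dropping by default
str(tb[[1]])  # the underlying integer vector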

Reverse Coding Certain Columns in R

I have a dataset with 49 columns.
'data.frame': 1351 obs. of 47 variables:
$ ID : Factor w/ 1351 levels "PID0001","PID0002",..: 1 2 3 4 5 6 7 8 9 10 ...
$ Survey: int 1 2 1 1 2 2 2 1 1 2 ...
$ hsinc1: int 2 4 4 4 5 4 3 3 1 1 ...
$ hsinc2: int 2 3 3 3 4 3 3 3 1 1 ...
$ hsinc3: int 4 4 2 3 3 4 5 4 5 5 ...
$ hsinc4: int 4 4 4 4 4 4 4 4 5 4 ...
$ hfair1: int 2 2 2 1 1 1 1 2 1 2 ...
$ hfair2: int 4 5 5 4 5 5 5 5 5 5 ...
$ hfair3: int 4 5 4 3 5 4 3 3 5 5 ...
etc ...
I want to reverse code columns 5,6,8,9,10,12,13,14,17 and 18 such that a score of 5 becomes a score of 1, and 4 becomes 2 etc.
At first, I thought this was achievable using the psych::reverse.code() function, so I tried the following, with the -1s corresponding to columns 5, 6, 8, 9, 10, 12, 13, 14, 17 and 18:
library('psych')
keys <-c(1,1,1,1,-1,-1,1,-1,-1,-1,1,-1,-1,-1,1,1,-1,-1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)
df_rev <- reverse.code(keys, items = df, mini = rep(1,49), maxi = rep(5,49))
However, when I run this code, I get the following error:
Error in items %*% keys.d :
requires numeric/complex matrix/vector arguments
Can anybody help with this, please?
Another method I have just been trying is to create a subset of the original data frame, with just the columns I want to reverse code:
data_to_rev <- df[c(5,6,8,9,10,12,13,14,17,18)]
And then reverse coding this subset:
keys <- c(-1,-1,-1,-1,-1,-1,-1,-1,-1,-1)
df_rev <- reverse.code(keys, items = data_to_rev, mini = rep(1,10), maxi = rep(5,10))
This works successfully. All variables are now reverse coded like I need them. However, how do I get this subset of reverse coded values and place it back into the original data frame - overwriting the old (non-reversed) columns?
Any help would be hugely appreciated, thank you!
EDIT - SOLUTION
I think I have managed to solve it using @MikeH's help.
I created a subset of just the participant IDs (the factor variable):
data_ID <- df[1]
And then used:
data_rev <- reverse.code(keys, items = df[,-1], mini = rep(1,46), maxi = rep(5,46))
This leaves me with 2 data frames/subsets:
1 with all the participant ID's.
1 with all their data and columns 5,6,8,9,10,12,13,14,17 and 18 reverse coded.
I then used: data_final <- cbind(data_ID, data_rev) to join the 2 subsets back together.
Can anyone see anything wrong with this method? I think it has worked upon visual inspection...
Since the scale runs from 1 to 5, you can reverse-code those columns directly in base R:
df[c(5,6,8,9,10,12,13,14,17,18)] <- 6 - df[c(5,6,8,9,10,12,13,14,17,18)]
An efficient way is to write the reversing function yourself and apply it to the columns you want:
library(data.table)
start <- 1
end <- 5
myrev <- function(x) end + start - x
dt <- data.table(x = c(1, 2, 1, 4), y = c(2, 5, 4, 1))
cols <- 1:2
dt[, (cols) := lapply(.SD, myrev), .SDcols = cols]
Or, operating on .SD directly:
dt[, (cols) := end + start - .SD, .SDcols = cols]
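As a quick sanity check (a minimal, self-contained sketch on the toy dt above; the printed format may differ slightly across data.table versions), either form should reverse both columns like this:
library(data.table)
start <- 1
end <- 5
dt <- data.table(x = c(1, 2, 1, 4), y = c(2, 5, 4, 1))
cols <- 1:2
dt[, (cols) := lapply(.SD, function(x) end + start - x), .SDcols = cols]
dt
##    x y
## 1: 5 4
## 2: 4 1
## 3: 5 2
## 4: 2 5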

Function in R that creates dummy variables if a condition is met

I am looking to create a function that will convert any factor variable with more than 4 levels into a dummy variable. The dataset has ~2311 columns, so I would really need to create a function. Your help would be immensely appreciated.
I have compiled the code below and was hoping to get it to work.
library(dummies)
# example function
for (i in names(Final_Dataset)) {
  if (count(Final_Dataset[i]) > 4) {
    y <- Final_Dataset[i]
    Final_Dataset <- cbind(Final_Dataset, dummy(y, sep = "_"))
  }
}
I was also considering an alternative approach: get the numbers of all the columns that need to be dummied, then loop through the columns and, whenever a column's number is in that set, create dummy variables from it.
Example data
fct = data.frame(a = as.factor(letters[1:10]), b = 1:10, c = as.factor(sample(letters[1:4], 10, replace = T)), d = as.factor(letters[10:19]))
str(fct)
'data.frame': 10 obs. of 4 variables:
$ a: Factor w/ 10 levels "a","b","c","d",..: 1 2 3 4 5 6 7 8 9 10
$ b: int 1 2 3 4 5 6 7 8 9 10
$ c: Factor w/ 4 levels "a","b","c","d": 2 4 1 3 1 1 2 3 1 2
$ d: Factor w/ 10 levels "j","k","l","m",..: 1 2 3 4 5 6 7 8 9 10
# keep columns with more than 4 factors
fact_cols = sapply(fct, function(x) is.factor(x) && length(levels(x)) > 4)
# create dummy variables for subset (omit intercept)
dummy_cols = model.matrix(~. -1, fct[, fact_cols])
# cbind new data
out_df = cbind(fct[, !fact_cols], dummy_cols)
You could get all the columns with more than a given number of levels (n = 4) with something like
which(sapply(Final_Dataset, function (c) length(levels(c)) > n))
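For instance, on the small fct data frame above (with n = 4) this should flag columns a and d:
n <- 4
which(sapply(fct, function(c) length(levels(c)) > n))
## a d
## 1 4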

Aggregate command in R to combine rows based on unique ID - output data structure?

I'm sure there's a super-easy answer to this. I am trying to combine ratings on subjects based on their unique ID. Here is a test dataset (called Aggregate_Test) I created, where the ID is unique to the subject and the StaticScore was given by different raters:
ID StaticScore
1 6
2 7
1 5
2 6
3 7
4 8
3 4
4 5
After reading other posts carefully, I used aggregate to create the following dataset with new columns:
StaticAggregate<-aggregate(StaticScore ~ ID, Aggregate_Test, c)
> StaticAggregate
ID StaticScore.1 StaticScore.2
1 1 6 5
2 2 7 6
3 3 7 4
4 4 8 5
This data frame has the following str:
> str(StaticAggregate)
'data.frame': 4 obs. of 2 variables:
$ ID : num 1 2 3 4
$ StaticScore: num [1:4, 1:2] 6 7 7 8 5 6 4 5
If I try to create a new variable by subtracting StaticScore.2 from StaticScore.1, I get the following error:
Staticdiff<-StaticScore.1-StaticScore.2
Error: object 'StaticScore.1' not found
So, please help me - what is this data structure created by aggregate? A matrix? How could I convert StaticScore.1 and StaticScore.2 to separate variables, or barring that, what is the notation to subtract one from the other to create a new variable?
We can do a dcast to create a wide format from long and subtract those columns to create the 'StaticDiff'
library(data.table)
dcast(setDT(Aggregate_Test), ID ~ paste0("StaticScore", rowid(ID)),
      value.var = "StaticScore")[, StaticDiff := StaticScore1 - StaticScore2]
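Run on the example data, that should produce something like the following (a self-contained sketch; the exact print format depends on the data.table version):
library(data.table)
Aggregate_Test <- data.table(ID = c(1, 2, 1, 2, 3, 4, 3, 4),
                             StaticScore = c(6, 7, 5, 6, 7, 8, 4, 5))
dcast(Aggregate_Test, ID ~ paste0("StaticScore", rowid(ID)),
      value.var = "StaticScore")[, StaticDiff := StaticScore1 - StaticScore2][]
##    ID StaticScore1 StaticScore2 StaticDiff
## 1:  1            6            5          1
## 2:  2            7            6          1
## 3:  3            7            4          3
## 4:  4            8            5          3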
Regarding the specific question about the aggregate behavior: we are just concatenating (c) the 'StaticScore' values by 'ID', and aggregate's default behavior in that case is to create a matrix column
StaticAggregate<-aggregate(StaticScore ~ ID, Aggregate_Test, c)
This can be checked by looking at the str(StaticAggregate)
str(StaticAggregate)
#'data.frame': 4 obs. of 2 variables:
#$ ID : int 1 2 3 4
#$ StaticScore: int [1:4, 1:2] 6 7 7 8 5 6 4 5
How do we change it to normal columns?
It can be done with do.call(data.frame, ...):
StaticAggregate <- do.call(data.frame, StaticAggregate)
Check the str again
str(StaticAggregate)
#'data.frame': 4 obs. of 3 variables:
# $ ID : int 1 2 3 4
# $ StaticScore.1: int 6 7 7 8
# $ StaticScore.2: int 5 6 4 5
Now we can do the calculation as shown in the OP's post
StaticAggregate$Staticdiff <- with(StaticAggregate, StaticScore.1-StaticScore.2)
StaticAggregate
# ID StaticScore.1 StaticScore.2 Staticdiff
#1 1 6 5 1
#2 2 7 6 1
#3 3 7 4 3
#4 4 8 5 3
As the str output shown in the question indicates, StaticAggregate is a two column data.frame whose second column is a two column matrix, StaticScore. We can display the matrix like this:
StaticAggregate$StaticScore
## [,1] [,2]
## [1,] 6 5
## [2,] 7 6
## [3,] 7 4
## [4,] 8 5
To create a new column with the difference:
transform(StaticAggregate, diff = StaticScore[, 1] - StaticScore[, 2])
## ID StaticScore.1 StaticScore.2 diff
## 1 1 6 5 1
## 2 2 7 6 1
## 3 3 7 4 3
## 4 4 8 5 3
Note that there are no columns in StaticAggregate or in StaticAggregate$StaticScore named StaticScore.1 and StaticScore.2. StaticScore.1 in the heading of the data.frame print output just denotes the first column of the StaticScore matrix.
The reason that the matrix has no column names is that the aggregate function c does not produce them. If we change the original aggregate to this then they would have names:
StaticAggregate2 <- aggregate(StaticScore ~ ID, Aggregate_Test, setNames, c("A", "B"))
StaticAggregate2
## ID StaticScore.A StaticScore.B
## 1 1 6 5
## 2 2 7 6
## 3 3 7 4
## 4 4 8 5
Now we can write this using the column names of the matrix:
StaticAggregate2$StaticScore[, "A"]
## [1] 6 7 7 8
StaticAggregate2$StaticScore[, "B"]
## [1] 5 6 4 5
Note that the way R's aggregate works has a significant advantage: it allows simpler access to the results, because the kth column of the matrix is the kth result of the aggregate function. This is in contrast to having the (k+1)st column of the data.frame represent the kth result. That may not seem like much of a simplification here, but for more complex problems it can be, if you need to access the statistics matrix. Of course, you can always flatten it to 3 columns if you want
do.call(data.frame, StaticAggregate)
but once you think about it for a while you may find that the structure it provides is actually more convenient.
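For instance (a small illustration using the same Aggregate_Test data, not part of the original answer), if the aggregation function returns several named statistics at once, they land in one matrix column that can be indexed by statistic name:
ag <- aggregate(StaticScore ~ ID, Aggregate_Test,
                function(x) c(mean = mean(x), spread = diff(range(x))))
ag$StaticScore[, "mean"]    # the first statistic for every ID
ag$StaticScore[, "spread"]  # the second statistic for every ID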

When to use na.omit versus complete.cases

I have the following code comparing na.omit and complete.cases:
> mydf
AA BB
1 2 2
2 NA 5
3 6 8
4 5 NA
5 9 6
6 NA 1
>
>
> na.omit(mydf)
AA BB
1 2 2
3 6 8
5 9 6
>
> mydf[complete.cases(mydf),]
AA BB
1 2 2
3 6 8
5 9 6
>
> str(na.omit(mydf))
'data.frame': 3 obs. of 2 variables:
$ AA: int 2 6 9
$ BB: int 2 8 6
- attr(*, "na.action")=Class 'omit' Named int [1:3] 2 4 6
.. ..- attr(*, "names")= chr [1:3] "2" "4" "6"
>
>
> str(mydf[complete.cases(mydf),])
'data.frame': 3 obs. of 2 variables:
$ AA: int 2 6 9
$ BB: int 2 8 6
>
> identical(na.omit(mydf), mydf[complete.cases(mydf),])
[1] FALSE
Are there any situations where one or the other should be used, or are they effectively the same?
It is true that na.omit and complete.cases are functionally the same when complete.cases is applied to all columns of your object (e.g. data.frame):
R> all.equal(na.omit(mydf),mydf[complete.cases(mydf),],check.attributes=F)
[1] TRUE
But I see two fundamental differences between these two functions (there may very well be additional differences). First, na.omit adds an na.action attribute to the object, providing information about how the data was modified with respect to missing values. I imagine a trivial use case for this as something like:
foo <- function(data) {
  data <- na.omit(data)
  # the "na.action" attribute records which rows were dropped
  n <- length(attr(data, "na.action"))
  message(sprintf("Note: %i rows removed due to missing values.", n))
  # do something with data
}
##
R> foo(mydf)
Note: 3 rows removed due to missing values.
where we provide the user with some relevant information. I'm sure a more creative person could find (and probably has found) better uses of the na.action attribute, but you get the point.
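For reference, the attribute itself on the example mydf looks roughly like this:
attr(na.omit(mydf), "na.action")
## 2 4 6
## 2 4 6
## attr(,"class")
## [1] "omit"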
Second, complete.cases allows for partial manipulation of missing values, e.g.
R> mydf[complete.cases(mydf[,1]),]
AA BB
1 2 2
3 6 8
4 5 NA
5 9 6
Depending on what your variables represent, you may feel comfortable imputing values for column BB, but not for column AA, so using complete.cases like this allows you finer control.
