How to get na.omit with data.table to only omit NAs in each column - r

Let's say I have
az<-data.table(a=1:6,b=6:1,c=4)
az[b==4,c:=NA]
az
   a b  c
1: 1 6  4
2: 2 5  4
3: 3 4 NA
4: 4 3  4
5: 5 2  4
6: 6 1  4
I can get the sum of all the columns with
az[,lapply(.SD,sum)]
    a  b  c
1: 21 21 NA
This is what I want for a and b but c is NA. This is seemingly easy enough to fix by doing
az[,lapply(na.omit(.SD),sum)]
    a  b  c
1: 18 17 20
This is what I want for c, but I didn't want to omit the values of a and b in the rows where c is NA. This is a contrived example; in my real data there could be 1000+ columns with random NAs throughout. Is there a way to get na.omit, or something else, to act per column instead of on the whole table, without looping through each column as a vector?

Expanding on my comment:
Many base functions allow you to decide how to treat NA. For example, sum has the argument na.rm:
az[,lapply(.SD,sum,na.rm=TRUE)]
In general, you can also use the function na.omit on each vector individually:
az[,lapply(.SD,function(x) sum(na.omit(x)))]

Related

R Order only one factor level (or column if after) to affect order long to wide (using spread)

I have a problem after changing my dataset from long to wide (using spread, from the tidyr library on the Result_Type column). I have the following example df:
Group<-c("A","A","A","B","B","B","C","C","C","D", "D")
Result_Type<-c("Final.Result", "Verification","Test", "Verification","Final.Result","Fast",
"Verification","Fast", "Final.Result", "Test", "Final.Result")
Result<-c(7,1,8,7,"NA",9,10,12,17,50,11)
df<-data.frame(Group, Result_Type, Result)
df
Group Result_Type Result
1 A Final.Result 7
2 A Verification 1
3 A Test 8
4 B Verification 7
5 B Final.Result NA
6 B Fast 9
7 C Verification 10
8 C Fast 12
9 C Final.Result 17
10 D Test 50
11 D Final.Result 11
In the column Result_type there are many possible result types and in some datasets I have Result_Type 's that will not occur in other datasets. However, one level: Final.Resultdoes occur in every dataset.
Also: This is example data but the actual data has many different columns, and as these differ across the datasets I use, I used spread (from the tidyr library) so I don't have to give any specific column names other than my target columns.
library("tidyr")
df_spread<-spread(df, key = Result_Type, value = Result)
Group Fast Final.Result Test Verification
1 A <NA> 7 8 1
2 B 9 NA <NA> 7
3 C 12 17 <NA> 10
4 D <NA> 11 50 <NA>
What I would like is that, once I convert the dataset from long to wide, Final.Result is the first column; how the rest of the columns are arranged doesn't matter. So I would like it to be like this (without calling any names of the other spread columns, or using order index numbers):
Group Final.Result Fast Test Verification
1 A 7 <NA> 8 1
2 B NA 9 <NA> 7
3 C 17 12 <NA> 10
4 D 11 <NA> 50 <NA>
I saw some answers indicating that you can reverse the order of the spread columns, or turn off spread's ordering, but that doesn't guarantee that Final.Result is always the first of the spread columns.
I hope I am making myself clear, it's a little complicated to explain. If someone needs extra info I will be happy to explain more!
spread creates columns in the order of the key column's factor levels. Within the tidyverse, forcats::fct_relevel is a convenience function for rearranging factor levels. The default is that the level(s) you specify will be moved to the front.
library(dplyr)
library(tidyr)
...
levels(df$Result_Type)
#> [1] "Fast" "Final.Result" "Test" "Verification"
Calling fct_relevel will put "Final.Result" as the first level, keeping the rest of the levels in their previous order.
reordered <- df %>%
mutate(Result_Type = forcats::fct_relevel(Result_Type, "Final.Result"))
levels(reordered$Result_Type)
#> [1] "Final.Result" "Fast" "Test" "Verification"
Adding that into your pipeline puts Final.Result as the first column after spreading.
df %>%
mutate(Result_Type = forcats::fct_relevel(Result_Type, "Final.Result")) %>%
spread(key = Result_Type, value = Result)
#> Group Final.Result Fast Test Verification
#> 1 A 7 <NA> 8 1
#> 2 B NA 9 <NA> 7
#> 3 C 17 12 <NA> 10
#> 4 D 11 <NA> 50 <NA>
Created on 2018-12-14 by the reprex package (v0.2.1)
One option is to refactor Result_Type to put Final.Result as the first level:
df$Result_Type <- factor(df$Result_Type,
                         levels = c("Final.Result",
                                    as.character(unique(df$Result_Type)[unique(df$Result_Type) != "Final.Result"])))
spread(df, key = Result_Type, value = Result)
Group Final.Result Verification Test Fast
1 A 7 1 8 NA
2 B NA 7 NA 9
3 C 17 10 NA 12
4 D 11 NA 50 NA
If you'd like, you can use this opportunity to also sort the rest of the columns however you want.
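If you only need to move a single level to the front, base R's relevel() is a shorter alternative sketch; it assumes Result_Type is a factor, which the factor() call ensures:

```r
library(tidyr)

df <- data.frame(
  Group = c("A","A","A","B","B","B","C","C","C","D","D"),
  Result_Type = c("Final.Result","Verification","Test","Verification","Final.Result",
                  "Fast","Verification","Fast","Final.Result","Test","Final.Result"),
  Result = c(7, 1, 8, 7, NA, 9, 10, 12, 17, 50, 11)
)

# relevel() moves the ref level to the front, keeping the rest in order
df$Result_Type <- relevel(factor(df$Result_Type), ref = "Final.Result")
spread(df, key = Result_Type, value = Result)
```

After the relevel, Final.Result is the first factor level, so spread places it directly after Group.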

Remove duplicates while keeping NA in R

I have data that looks like the following:
a<-data.frame(ID=c("A","B","C","C",NA,NA),score=c(1,2,3,3,5,6),stringsAsFactors=FALSE)
print(a)
ID score
A 1
B 2
C 3
C 3
<NA> 5
<NA> 6
I am trying to remove duplicates without R treating <NA> as duplicates to get the following:
b<-data.frame(ID=c("A","B","C",NA,NA),score=c(1,2,3,5,6),stringsAsFactors=FALSE)
print(b)
ID score
A 1
B 2
C 3
<NA> 5
<NA> 6
I have tried the following:
b<-a[!duplicated(a$ID),]
library(dplyr)
b<-distinct(a,ID)
print(b)
But both treat <NA> as a duplicate ID and remove one, but I want to keep all instances of <NA>. Thoughts? Thank you!
A straightforward approach is to break the original data frame into two parts: rows where ID is NA and rows where it is not. Apply your distinct filter to the non-NA part, then combine the data frames back together:
a<-data.frame(ID=c("A","B","C","C",NA,NA),score=c(1,2,3,3,5,6),stringsAsFactors=FALSE)
aprime<-a[!is.na(a$ID),]
aNA<-a[is.na(a$ID),]
b<-aprime[!duplicated(aprime$ID),]
b<-rbind(b, aNA)
With a little work, this can be reduced to one or two lines of code.
Using dplyr:
a %>% group_by(ID, score) %>% distinct()
# A tibble: 5 x 2
# Groups: ID, score [5]
ID score
<chr> <dbl>
1 A 1
2 B 2
3 C 3
4 <NA> 5
5 <NA> 6
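The same idea works without explicit grouping: passing both columns to distinct() keeps the NA rows because their scores differ. Note this sketch relies on the NA rows not being identical in every column:

```r
library(dplyr)

a <- data.frame(ID = c("A", "B", "C", "C", NA, NA),
                score = c(1, 2, 3, 3, 5, 6),
                stringsAsFactors = FALSE)

# Duplicate (ID, score) pairs are dropped; both NA rows survive
# because (NA, 5) and (NA, 6) are distinct pairs
distinct(a, ID, score)
```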
Found a very simple way to do this using the base duplicated() function.
b<-a[!duplicated(a$ID, incomparables = NA),]
Setting incomparables = NA tells duplicated() never to mark NA values as duplicates, so all NA rows are kept in the result.
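Putting the accepted one-liner together as a runnable sketch on the question's data:

```r
a <- data.frame(ID = c("A", "B", "C", "C", NA, NA),
                score = c(1, 2, 3, 3, 5, 6),
                stringsAsFactors = FALSE)

# incomparables = NA: NA values are never flagged as duplicates,
# so only the repeated "C" row is removed
b <- a[!duplicated(a$ID, incomparables = NA), ]
b
#     ID score
# 1    A     1
# 2    B     2
# 3    C     3
# 5 <NA>     5
# 6 <NA>     6
```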

Getting stale values on using ifelse in a dataframe

Hi I am aggregating values from two columns and creating a final third column, based on priorities. If values in column 1 are missing or are NA then I go for column 2.
df=data.frame(internal=c(1,5,"",6,"NA"),external=c("",6,8,9,10))
df
  internal external
1        1
2        5        6
3                 8
4        6        9
5       NA       10
df$final <- df$internal
df$final <- ifelse((df$final=="" | df$final=="NA"),df$external,df$final)
df
  internal external final
1        1              2
2        5        6     3
3                 8     4
4        6        9     4
5       NA       10     2
How can I get the final value as 8 and 10 for rows 3 and 5, instead of the 4 and 2 I'm getting? I don't know what's wrong, but these values don't make any sense to me.
The issue arises because R converts your values to factors, and ifelse returns the factors' underlying integer codes rather than their labels.
Your code will work fine with
df=data.frame(internal=c(1,5,"",6,"NA"),external=c("",6,8,9,10),stringsAsFactors = FALSE)
PS: this hideous conversion to factors definitely belongs in the R Inferno, http://www.burns-stat.com/pages/Tutor/R_inferno.pdf
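A related sketch, assuming the data can be cleaned up: if the missing entries are stored as real NAs rather than the strings "" and "NA", the priority logic becomes a simple is.na() test and no factor coercion occurs:

```r
# Same shape of data, but with genuine NAs instead of "" / "NA" strings
df <- data.frame(internal = c(1, 5, NA, 6, NA),
                 external = c(NA, 6, 8, 9, 10))

# Prefer internal; fall back to external where internal is missing
df$final <- ifelse(is.na(df$internal), df$external, df$internal)
df$final
# [1]  1  5  8  6 10
```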

How to delete rows from a dataframe that contain n*NA

I have a number of large datasets with ~10 columns and ~200,000 rows. Not all columns contain values for each row, although at least one column must contain a value for the row to be present. I would like to set a threshold for how many NAs are allowed in a row.
My Dataframe looks something like this:
ID q r s t u v w x y z
A 1 5 NA 3 8 9 NA 8 6 4
B 5 NA 4 6 1 9 7 4 9 3
C NA 9 4 NA 4 8 4 NA 5 NA
D 2 2 6 8 4 NA 3 7 1 32
And I would like to be able to delete the rows that contain more than 2 cells containing NA to get
ID q r s t u v w x y z
A 1 5 NA 3 8 9 NA 8 6 4
B 5 NA 4 6 1 9 7 4 9 3
D 2 2 6 8 4 NA 3 7 1 32
complete.cases removes all rows containing any NA, and I know one can delete rows that contain NA in specific columns, but is there a way to make it non-specific about which columns contain NA, and instead consider how many NAs a row has in total?
Alternatively, this dataframe is generated by merging several dataframes using
file1<-read.delim("~/file1.txt")
file2<-read.delim(file=args[1])
file1<-merge(file1,file2,by="chr.pos",all=TRUE)
Perhaps the merge function could be altered?
Thanks
Use rowSums. To remove rows from a data frame (df) that contain precisely n NA values:
df <- df[rowSums(is.na(df)) != n, ]
or to remove rows that contain n or more NA values:
df <- df[rowSums(is.na(df)) < n, ]
in both cases, of course, replacing n with the required number.
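A small sketch of the rowSums approach on made-up data (column names are illustrative, not from the question):

```r
dat <- data.frame(q = c(1, 5, NA, 2),
                  r = c(5, NA, 9, 2),
                  s = c(NA, 4, NA, 6),
                  t = c(3, 6, NA, 8))

# is.na(dat) is a logical matrix; rowSums counts TRUEs per row.
# Keep rows with at most 2 NAs; the third row (3 NAs) is dropped.
dat[rowSums(is.na(dat)) <= 2, ]
```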
If dat is the name of your data.frame the following will return what you're looking for:
keep <- rowSums(is.na(dat)) < 2
dat <- dat[keep, ]
What this is doing:
is.na(dat)
# returns a matrix of T/F
# note that when adding logicals
# T == 1, and F == 0
rowSums(.)
# quickly computes the total per row
# since your task is to identify the
# rows with a certain number of NA's
rowSums(.) < 2
# for each row, determine if the sum
# (which is the number of NAs) is less
# than 2 or not. Returns T/F accordingly
We use the output of this last statement to
identify which rows to keep. Note that it is not necessary to actually store this last logical.
If d is your data frame, try this:
d <- d[rowSums(is.na(d)) < 2,]
This will return a dataset where at most two values per row are missing:
dfrm[ apply(dfrm, 1, function(r) sum(is.na(r)) <= 2 ), ]

Removing NAs when multiplying columns

This is a really simple question, but I am hoping someone will be able to help me avoid extra lines of unnecessary code. I have a simple dataframe:
Df.1 <- data.frame(A = c(5,4,7,6,8,4),B = (c(1,5,2,4,9,1)),C=(c(2,3,NA,5,NA,9)))
What I want to do is produce an extra column which is the multiplication of A, B and C, which I will then cbind to the original dataframe.
So, I would normally use:
attach(Df.1)
D<-A*B*C
But obviously where the NAs are in column C, I get an NA in variable D. I don't want to exclude all the NA rows, rather just ignore the NA values in this column (and then the value in D would simply be the multiplication of A and B, or, where C was available, A*B*C).
I know I could simply replace the NAs with 1s so the calculation remains unchanged, or use if statements, but I was wondering what the simplest way of doing this is?
Any ideas?
You can use prod which has an na.rm argument. To do it by row use apply:
apply(Df.1,1,prod,na.rm=TRUE)
[1] 10 60 14 120 72 36
As @James said, prod and apply will work, but you don't need to waste memory storing the result in a separate variable, or even cbinding it:
Df.1$D = apply(Df.1, 1, prod, na.rm=T)
Assigning the new variable in the data frame directly will work.
> Df.1 <- data.frame(A = c(5,4,7,6,8,4),B = (c(1,5,2,4,9,1)),C=(c(2,3,NA,5,NA,9)))
> Df.1
A B C
1 5 1 2
2 4 5 3
3 7 2 NA
4 6 4 5
5 8 9 NA
6 4 1 9
> Df.1$D = apply(Df.1, 1, prod, na.rm=T)
> Df.1$D
[1] 10 60 14 120 72 36
> Df.1
A B C D
1 5 1 2 10
2 4 5 3 60
3 7 2 NA 14
4 6 4 5 120
5 8 9 NA 72
6 4 1 9 36
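An alternative sketch that avoids the row-wise apply(): substitute 1 (the multiplicative identity) for NA only inside the product, so rows with a missing C fall back to A*B. This stays fully vectorized:

```r
Df.1 <- data.frame(A = c(5, 4, 7, 6, 8, 4),
                   B = c(1, 5, 2, 4, 9, 1),
                   C = c(2, 3, NA, 5, NA, 9))

# ifelse swaps NA for 1 only for the multiplication;
# column C itself is left untouched
Df.1$D <- Df.1$A * Df.1$B * ifelse(is.na(Df.1$C), 1, Df.1$C)
Df.1$D
# [1]  10  60  14 120  72  36
```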
