I want to fuse multiple date fields that contain NAs using piping in R. The data looks like:
dd <- data.frame(id=c("a","b","c","d"),
f1=as.Date(c(NA, "2012-03-24", NA,NA)),
f2=as.Date(c("2010-01-24", NA, NA,NA)),
f3=as.Date(c(NA, NA, "2014-11-22", NA)))
dd
id f1 f2 f3
1 a <NA> 2010-01-24 <NA>
2 b 2012-03-24 <NA> <NA>
3 c <NA> <NA> 2014-11-22
4 d <NA> <NA> <NA>
I know how to do it the R base way:
unlist(apply(dd[,c("f1","f2","f3")],1,na.omit))
f2 f1 f3
"2010-01-24" "2012-03-24" "2014-11-22"
So that is not the point of my question. I'm in the process of learning piping and dplyr, so I want to pipe this operation. I've tried:
library(dplyr)
dd %>% mutate(f=na.omit(c(f1,f2,f3)))
Error in mutate_impl(.data, dots) :
Column `f` must be length 4 (the number of rows) or one, not 3
It doesn't work because of the row with all NAs. Without that row, it would work:
dd[-4,] %>% mutate(f=na.omit(c(f1,f2,f3)))
id f1 f2 f3 f
1 a <NA> 2010-01-24 <NA> 2012-03-24
2 b 2012-03-24 <NA> <NA> 2010-01-24
3 c <NA> <NA> 2014-11-22 2014-11-22
Any idea how to do it properly?
BTW, my question is different from this and this, as I want to use piping, and because my field is a date field, I cannot use sum with na.rm=T.
Thanks
We can use coalesce to create the new column,
library(dplyr)
dd %>%
transmute(newcol = coalesce(f1, f2, f3)) #%>%
#then `filter` the rows to remove the NA elements
#and `pull` as a `vector` (if needed)
#filter(!is.na(newcol)) %>%
#pull(newcol)
# newcol
#1 2010-01-24
#2 2012-03-24
#3 2014-11-22
#4 <NA>
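If there are more than a few such columns, here is a small sketch (assuming all the date columns share the "f" prefix) that fuses them without naming each one:
library(dplyr)
# do.call() hands every f* column of dd to coalesce() as a separate argument
dd %>% mutate(f = do.call(coalesce, select(., starts_with("f"))))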
I am trying to identify any participant taking statins in a dataset of over 1 million rows, and to subset the data based on this. I have a vector of all the codes for these medications (I've just made a few up for demonstration purposes), and I would like to create a function that searches through the data frame and flags any case with a medication code that starts with any of the strings in the medications vector.
The df looks like this:
ID readcode_1 readcode_2 generic_name
1 1001 bxd1 1146785342 Simvastatin
2 1002 <NA> <NA> <NA>
3 1003 <NA> <NA> Pravastatin
4 1004 <NA> <NA> <NA>
5 1005 bxd4 45432344 <NA>
6 1006 <NA> <NA> <NA>
7 1007 <NA> <NA> <NA>
8 1008 <NA> <NA> <NA>
9 1009 <NA> <NA> <NA>
10 1010 bxde <NA> <NA>
11 1011 <NA> <NA> <NA>
Ideally, I'd like the end product to look like this:
ID readcode_1 readcode_2 generic_name
1 1001 bxd1 1146785342 Simvastatin
3 1003 <NA> <NA> Pravastatin
5 1005 bxd4 45432344 <NA>
10 1010 bxde <NA> <NA>
Here is my code so far (it doesn't currently work):
#create vector with list of medication codes of interest
medications <- c("bxd", "Simvastatin", "1146785342", "45432344", "Pravastatin")
# look through all columns (apart from IDs in first column) and if any of them start with the codes listed in the medications vector, return a 1
df$statin_prescribed <- apply(df[, -1], 1, function(x) {
if(any(x %in% startsWith(x, medications))) {
return(1)
} else {
return(0)
}
})
# subset to include only individuals prescribed statins
df <- subset(df, statin_prescribed == 1)
The part that doesn't seem to work is startsWith(x, medications).
Please let me know if you have any suggestions and, additionally, whether there is alternative code that may be more time-efficient!
This is a solution using the dplyr package
library(dplyr)
df %>%
filter_at(vars(-ID), any_vars(grepl(paste(medications, collapse = "|"), .)))
Small explanation: we keep all rows where at least one variable (excluding ID) matches one of the values inside medications. Note that grepl() matches anywhere in the string, not only at the start; to require a genuine "starts with", anchor the pattern with "^" (see the sketch after the base R output below).
Output
# ID readcode_1 readcode_2 generic_name
# 1 1001 bxd1 1146785342 Simvastatin
# 2 1003 <NA> <NA> Pravastatin
# 3 1005 bxd4 45432344 <NA>
# 4 1010 bxde <NA> <NA>
Another solution in base R with a similar rationale is the following
df[apply(df[,-1], 1, function(x) {any(grepl(paste(medications, collapse = "|"), x))}),]
Output is the same (except the row indices, which I believe are not relevant):
# ID readcode_1 readcode_2 generic_name
# 1 1001 bxd1 1146785342 Simvastatin
# 3 1003 <NA> <NA> Pravastatin
# 5 1005 bxd4 45432344 <NA>
# 10 1010 bxde <NA> <NA>
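As an aside, the original attempt failed because startsWith(x, medications) recycles its two arguments pairwise and returns a logical vector, so x %in% startsWith(...) ends up comparing codes against TRUE/FALSE values. Here is a sketch of two ways to test a genuine "starts with", assuming the same df and medications as above:
# anchor the regex so it only matches at the beginning of each code
pattern <- paste0("^(", paste(medications, collapse = "|"), ")")
df[apply(df[, -1], 1, function(x) any(grepl(pattern, x))), ]
# or regex-free: compare every code against every prefix
df[apply(df[, -1], 1, function(x) {
  any(outer(as.character(x), medications, startsWith), na.rm = TRUE)
}), ]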
After some benchmarking tests, the base R solution seems to be around 5x faster than the dplyr one, so I suggest the base R solution if time efficiency is your main concern.
microbenchmark::microbenchmark(
df %>% filter_at(vars(-ID), any_vars(grepl(paste(medications, collapse = "|"), .))),
df[apply(df[,-1], 1, function(x) {any(grepl(paste(medications, collapse = "|"), x))}),],
times = 100
)
# Unit: microseconds
#                                                                                         expr    min      lq     mean  median      uq    max neval
#         df %>% filter_at(vars(-ID), any_vars(grepl(paste(medications, collapse = "|"), .))) 1958.4 1989.55 2146.993 2041.30 2149.05 7851.1   100
# df[apply(df[, -1], 1, function(x) { any(grepl(paste(medications, collapse = "|"), x)) }), ]  341.7  352.50  405.972  380.25  401.55 2154.0   100
I have a data frame which consists of a single column of some very messy JSON data. I would like to convert the JSON entries in that column to additional columns in the same data frame, I have a messy solution, but it will be tedious and long to apply it to my actual dataset.
Here is my sample data frame:
sample.df <- data.frame(id = c(101, 102, 103, 104),
json_col = c('[{"foo_a":"bar"}]',
'[{"foo_a":"bar","foo_b":"bar"}]',
'[{"foo_a":"bar","foo_c":2}]',
'[{"foo_a":"bar","foo_b":"bar","foo_c":2,"nested_col":{"foo_d":"bar","foo_e":3}}]'),
startdate = as.Date(c('2010-11-1','2008-3-25','2007-3-14','2006-2-21')))
In reality my data frame has over 100,000 entries and consists of multiple JSON columns to which I need to apply the solution to this question; there are also several levels of nesting (i.e. nested lists within nested lists).
Here is my solution:
j.col <- sample.df[2]
library(jsonlite)
j.l <- apply(j.col, 1, jsonlite::fromJSON, flatten = T)
library(dplyr)
l.as.df <- bind_rows(lapply(j.l,data.frame))
new.df <- cbind(sample.df$id, l.as.df, sample.df$startdate)
My solution is a roundabout method where I extract the column with the JSON from the data frame, convert the JSON into a second data frame, and then combine the two data frames into a third data frame. This will be long and tedious to do with my actual data, not to mention that it is inelegant. How can I do this without having to create the additional data frames?
Thanks in advance for any help!
Here's another approach that will spare you the intermediate dataframes:
library(dplyr)
library(jsonlite)
new.df <- sample.df %>%
rowwise() %>%
do(data.frame(fromJSON(.$json_col, flatten = T))) %>%
ungroup() %>%
bind_cols(sample.df %>% select(-json_col))
print(new.df)
# # A tibble: 4 x 7
# foo_a foo_b foo_c nested_col.foo_d nested_col.foo_e id startdate
# <chr> <chr> <int> <chr> <int> <dbl> <date>
# 1 bar   <NA>     NA <NA>               NA   101 2010-11-01
# 2 bar   bar      NA <NA>               NA   102 2008-03-25
# 3 bar   <NA>      2 <NA>               NA   103 2007-03-14
# 4 bar   bar       2 bar                 3   104 2006-02-21
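One caveat: bind_cols() pastes columns together purely by position, so this relies on fromJSON() producing exactly one row per input JSON string with row order preserved. A minimal guard (a sketch, run after the pipeline above):
# each input row must yield exactly one output row for bind_cols() to line up
stopifnot(nrow(new.df) == nrow(sample.df))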
library(dplyr)
library(tidyr)
library(purrr)
library(jsonlite)
sample.df %>%
mutate(
json_parsed = map(json_col, ~ fromJSON(., flatten=TRUE))
) %>%
unnest(json_parsed)
# id
# 1 101
# 2 102
# 3 103
# 4 104
# json_col
# 1 [{"foo_a":"bar"}]
# 2 [{"foo_a":"bar","foo_b":"bar"}]
# 3 [{"foo_a":"bar","foo_c":2}]
# 4 [{"foo_a":"bar","foo_b":"bar","foo_c":2,"nested_col":{"foo_d":"bar","foo_e":3}}]
# startdate foo_a foo_b foo_c nested_col.foo_d nested_col.foo_e
# 1 2010-11-01 bar <NA> NA <NA> NA
# 2 2008-03-25 bar bar NA <NA> NA
# 3 2007-03-14 bar <NA> 2 <NA> NA
# 4 2006-02-21 bar bar 2 bar 3
If you want to reduce dependencies, you can drop purrr and instead use:
...
json_parsed = lapply(.$json_col, fromJSON, flatten=TRUE)
...
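Put together, that purrr-free variant would look like this (a sketch; it assumes json_col is character, so wrap it in as.character() if your data frame stored it as a factor):
library(dplyr)
library(tidyr)
library(jsonlite)
sample.df %>%
  mutate(json_parsed = lapply(json_col, fromJSON, flatten = TRUE)) %>%
  unnest(json_parsed)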
I think this will work. The main idea is that we collapse json_col into a single character string that we can then pass to fromJSON(), which takes care of the rest.
library(stringi)
library(jsonlite)
sample.df$json_col<- as.character(sample.df$json_col)
json_obj<- paste(sample.df$json_col, collapse = "")
json_obj<- stri_replace_all_fixed(json_obj, "][", ",")
new.df<- cbind(sample.df$id, fromJSON(json_obj), sample.df$startdate)
> new.df
# sample.df$id foo_a foo_b foo_c nested_col.foo_d nested_col.foo_e
#1          101   bar  <NA>    NA             <NA>               NA
#2          102   bar   bar    NA             <NA>               NA
#3          103   bar  <NA>     2             <NA>               NA
#4          104   bar   bar     2              bar                3
# sample.df$startdate
#1 2010-11-01
#2 2008-03-25
#3 2007-03-14
#4 2006-02-21
Make sure that the cbind part works correctly! In this case it did, but make sure that your overall manipulations don't change the order of the rows.
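A quick way to guard against that (a small sketch): check that the parsed JSON yields exactly one row per input row before binding.
# cbind() matches by position only, so the row counts must agree
stopifnot(nrow(fromJSON(json_obj)) == nrow(sample.df))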
I have data that looks like the following:
moo <- data.frame(Farm = c("A","B",NA,NA,"A","B"),
Barn_Yard = c("A","A",NA,"A",NA,"B"),
stringsAsFactors=FALSE)
print(moo)
Farm Barn_Yard
A A
B A
<NA> <NA>
<NA> A
A <NA>
B B
I am attempting to combine the columns into one variable: if the two columns are the same, the result is that shared value; if both have data, the result is what is in the Farm column; if both are <NA>, the result is <NA>; and if only one has a value, the result is that value. Thus, in this instance the result would be:
oink <- data.frame(Animal_House = c("A","B",NA,"A","A","B"),
stringsAsFactors = FALSE)
print(oink)
Animal_House
A
B
<NA>
A
A
B
I have tried the unite function from tidyr but it doesn't give me exactly what I want. Any thoughts? Thanks!
dplyr::coalesce does exactly that, substituting any NA values in the first vector with the value from the second:
library(dplyr)
moo <- data.frame(Farm = c("A","B",NA,NA,"A","B"),
Barn_Yard = c("A","A",NA,"A",NA,"B"),
stringsAsFactors = FALSE)
oink <- moo %>% mutate(Animal_House = coalesce(Farm, Barn_Yard))
oink
#> Farm Barn_Yard Animal_House
#> 1 A A A
#> 2 B A B
#> 3 <NA> <NA> <NA>
#> 4 <NA> A A
#> 5 A <NA> A
#> 6 B B B
If you want to discard the original columns, use transmute instead of mutate.
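For example, to keep only the fused column:
# transmute() drops Farm and Barn_Yard, returning just the new column
moo %>% transmute(Animal_House = coalesce(Farm, Barn_Yard))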
A less succinct option is to use a couple of ifelse() statements, but this could be useful if you wish to introduce another condition or column into the mix; see the case_when() sketch after the code.
moo <- data.frame(Farm = c("A","B",NA,NA,"A","B"),
Barn_Yard = c("A","A",NA,"A",NA,"B"),
stringsAsFactors = FALSE)
moo$Animal_House <- with(moo, ifelse(is.na(Farm) & is.na(Barn_Yard), NA,
                              ifelse(!is.na(Barn_Yard) & is.na(Farm), Barn_Yard,
                                     Farm)))
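If you do end up stacking more conditions, dplyr::case_when() tends to scale better than nested ifelse() calls. A sketch of the same logic (an alternative, not from the original answer):
library(dplyr)
moo %>%
  mutate(Animal_House = case_when(
    !is.na(Farm)      ~ Farm,           # Farm wins whenever it is present
    !is.na(Barn_Yard) ~ Barn_Yard,      # otherwise fall back to Barn_Yard
    TRUE              ~ NA_character_   # both missing
  ))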
I have a data frame that looks like:
d<-data.frame(id=(1:9),
grp_id=(c(rep(1,3), rep(2,3), rep(3,3))),
a=rep(NA, 9),
b=c("No", rep(NA, 3), "Yes", rep(NA, 4)),
c=c(rep(NA,2), "No", rep(NA,6)),
d=c(rep(NA,3), "Yes", rep(NA,2), "No", rep(NA,2)),
e=c(rep(NA, 7), "No", NA),
f=c(NA, "No", rep(NA,3), "No", rep(NA,2), "No"))
>d
id grp_id a b c d e f
1 1 1 NA No <NA> <NA> <NA> <NA>
2 2 1 NA <NA> <NA> <NA> <NA> No
3 3 1 NA <NA> No <NA> <NA> <NA>
4 4 2 NA <NA> <NA> Yes <NA> <NA>
5 5 2 NA Yes <NA> <NA> <NA> <NA>
6 6 2 NA <NA> <NA> <NA> <NA> No
7 7 3 NA <NA> <NA> No <NA> <NA>
8 8 3 NA <NA> <NA> <NA> No <NA>
9 9 3 NA <NA> <NA> <NA> <NA> No
Within each group (grp_id) there is only 1 "Yes" or "No" value associated with each of the columns a:f.
I'd like to create a single row for each grp_id to get a data frame that looks like the following:
grp_id a b c d e f
1 NA No No <NA> <NA> No
2 NA Yes <NA> Yes <NA> No
3 NA <NA> <NA> No No No
I recognize that the tidyr package is probably the best tool and the first steps are likely to be
d %>%
group_by(grp_id) %>%
summarise()
I would appreciate help with the commands within summarise, or any solution really. Thanks.
We can use summarise_at and subset the first non-NA element
library(dplyr)
d %>%
group_by(grp_id) %>%
summarise_at(2:7, funs(.[!is.na(.)][1]))
# A tibble: 3 x 7
# grp_id a b c d e f
# <dbl> <lgl> <fctr> <fctr> <fctr> <fctr> <fctr>
#1 1 NA No No <NA> <NA> No
#2 2 NA Yes <NA> Yes <NA> No
#3 3 NA <NA> <NA> No No No
In the example dataset, columns 'a' to 'f' are all factors, some having only a 'No' level. If the output needs to be standardized so that all columns share the same levels, call factor() with levels specified as c('Yes', 'No') inside the summarise_at, as spelled out below.
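That variant in full:
d %>%
  group_by(grp_id) %>%
  summarise_at(2:7, funs(factor(.[!is.na(.)][1], levels = c("Yes", "No"))))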
We can use aggregate. No packages are used.
YN <- function(x) c(na.omit(as.character(x)), NA)[1]
aggregate(d[3:8], d["grp_id"], YN)
giving:
## grp_id a b c d e f
## 1 1 <NA> No No <NA> <NA> No
## 2 2 <NA> Yes <NA> Yes <NA> No
## 3 3 <NA> <NA> <NA> No No No
The above gives character columns. If you prefer factor columns then use this:
YNfac <- function(x) factor(YN(x), c("No", "Yes"))
aggregate(d[3:8], d["grp_id"], YNfac)
Note: Other alternate implementations of YN are:
YN <- function(x) sort(as.character(x), na.last = TRUE)[1]
YN <- function(x) if (all(is.na(x))) NA_character_ else na.omit(as.character(x))[1]
library(zoo)
YN <- function(x) na.locf0(as.character(x), fromLast = TRUE)[1]
You've received some good answers but neither of them actually uses the tidyr package. (The summarize() and summarize_at() family of functions is from dplyr.)
In fact, a tidyr-only solution for your problem is very doable.
d %>%
gather(col, value, -id, -grp_id, factor_key=TRUE) %>%
na.omit() %>%
select(-id) %>%
spread(col, value, fill=NA, drop=FALSE)
The only hard part is ensuring that you get the a column in your output. For your example data, it is entirely NA. The trick is the factor_key=TRUE argument to gather() and the drop=FALSE argument to spread(). Without those two arguments being set, the output would not have an a column, and would only have columns with at least one non-NA entry.
Here's a description of how it works:
gather(col, value, -id, -grp_id, factor_key=TRUE) %>%
This tidies your data -- it effectively replaces columns a - f with new columns col and value, forming a long-formated "tidy" data frame. The entries in the col column are letters a - f. And because we've used factor_key=TRUE, this column is a factor with levels, not just a character vector.
na.omit() %>%
This removes all the NA values from the long data.
select(-id) %>%
This eliminates the id column.
spread(col, value, fill=NA, drop=FALSE)
This re-widens the data, using the values in the col column to define new column names, and the values in the value column to fill in the entries of the new columns. When data is missing, a value of fill (here NA) is used instead. And the drop=FALSE means that when col is a factor, there will be one column per level of the factor, no matter whether that level appears in the data or not. This, along with setting col to be a factor, is what gets a as an output column.
I personally find this approach more readable than the approaches that require subsetting or lapply tricks. Additionally, this approach will fail loudly if your data is not actually one-hot (more than one non-NA value per group and column), whereas other approaches may "work" and silently give you unexpected output. The downside of this approach is that the output columns a - f are character vectors, not factors. If you need factor output, you should be able to add (untested)
mutate(value = factor(value, levels=c('Yes', 'No', NA))) %>%
anywhere between the gather() and spread() functions to ensure factor output.
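Assembled, the pipeline with the factor conversion in place would be (treat as a sketch; the mutate() line is the untested part flagged above):
library(dplyr)
library(tidyr)
d %>%
  gather(col, value, -id, -grp_id, factor_key = TRUE) %>%
  na.omit() %>%
  mutate(value = factor(value, levels = c('Yes', 'No', NA))) %>%
  select(-id) %>%
  spread(col, value, fill = NA, drop = FALSE)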
Trying to re-organise a data set (sdevDFC) into a matrix with my latitude (Lat) as row names and longitude (Lon) as column names, then filling in the matrix with values respective to the coordinates.
stand_dev_m <- matrix(data=sdevDFC$SDev, nrow=length(sdevDFC$Lat), ncol=length(sdevDFC$Lon), byrow=TRUE, dimnames = list(sdevDFC$Lat, sdevDFC$Lon))
The column and row names appear as they should, but the data fills in so that all values within a given column are identical, which should not be the case, as none of my values ever repeat.
I've tried byrow = FALSE to see if the problem also occurred then (it does), and I've also used colnames and rownames instead of dimnames (changes nothing).
I would appreciate any insight into what I may be doing wrong here. I'm also new to this platform, so I apologise if I've missed a guideline or another question that's similar.
Example data:
df <- data.frame(LON=1:5,
LAT=11:15,
VAL=letters[1:5],
stringsAsFactors=F)
The problem is that matrix() does not match values to the row and column names: it simply fills the nrow x ncol cells by recycling the data vector, which is why whole columns come out identical. You could instead reshape the data, for example:
library(dplyr)
library(tidyr)
rn <- df$LON  # save what will become the rownames
df1 <- df %>%
  spread(LAT, VAL, fill = NA) %>%
  select(-LON) %>%
  setNames(., df$LAT)
rownames(df1) <- rn
Output
11 12 13 14 15
1 a <NA> <NA> <NA> <NA>
2 <NA> b <NA> <NA> <NA>
3 <NA> <NA> c <NA> <NA>
4 <NA> <NA> <NA> d <NA>
5 <NA> <NA> <NA> <NA> e
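A base R alternative (a sketch, assuming exactly one value per LAT/LON pair, as in the example data): pre-allocate an all-NA matrix with the desired dimnames, then fill it by name-based matrix indexing.
lat <- sort(unique(df$LAT))
lon <- sort(unique(df$LON))
m <- matrix(NA_character_, nrow = length(lat), ncol = length(lon),
            dimnames = list(lat, lon))
# a two-column character matrix indexes cells by (row name, column name)
m[cbind(as.character(df$LAT), as.character(df$LON))] <- df$VAL
m
This also keeps latitude on the rows and longitude on the columns, as the question originally asked.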