Merge/combine rows with same ID and Date in R - r

I have an Excel database like the one below. The Excel form had room to enter only 3 drug details, so wherever a patient has more than 3 drugs, the extras were entered on another row with the same PID and Date.
Is there a way I can merge the rows in R so that each patient's records end up in a single row? In the example below, rows 1 & 2 and rows 4 & 6 need to be merged.
Thanks.
Row  PID  Date        Drug1  Dose1  Drug2  Dose2  Drug3  Dose3  Age  Place
1    11A  25/10/2021  RPG    12     NAT    34     QRT    5      45   PMk
2    11A  25/10/2021  BET    10     SET    43     BLT    45
3    12B  20/10/2021  ATY    13     LTP    3      CRT    3      56   GTL
4    13A  22/10/2021  GGS    7      GSF    12     ERE    45     45   RKS
5    13A  26/10/2021  BRT    9      ARR    4      GSF    34     46   GLO
6    13A  22/10/2021  DFS    5
7    14B  04/08/2021  GDS    2      TRE    55     HHS    34     25   MTK

Up front, the two methods below are completely different, not equivalents in "base R vs dplyr". I'm sure either can be translated to the other.
dplyr
The premise here is to first reshape/pivot the data longer so that each Drug/Dose is on its own line, renumber them appropriately, and then bring it back to a wide state.
NOTE: frankly, I usually prefer to deal with data in a long format, so consider keeping it in its state immediately before pivot_wider. This means you'd need to bring Age and Place back into it somehow.
Why? A long format deals very well with many types of aggregation; ggplot2 really really prefers data in the long format; I dislike seeing and having to deal with all of the NA/empty values that will invariably happen with this wide format, since many PIDs don't have (e.g.) Drug6 or later. This seems subjective, but it can really be an objective change/improvement to data-mangling, depending on your workflow.
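To make that concrete, here is a tiny standalone sketch (hypothetical data, base R only, column names invented for illustration) of why the long layout aggregates so easily:

```r
# Hypothetical long-format drug data (names invented for illustration)
long <- data.frame(
  PID  = c("11A", "11A", "12B"),
  Drug = c("RPG", "BET", "ATY"),
  Dose = c(12, 10, 13)
)

# Per-patient summaries fall out directly; no NA-padded Drug4/Dose4 columns needed
aggregate(Dose ~ PID, data = long, FUN = sum)
table(long$PID)
```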
library(dplyr)
# library(tidyr) # pivot_longer, pivot_wider
dat0 <- select(dat, PID, Date, Age, Place) %>%
  group_by(PID, Date) %>%
  summarize(across(everything(), ~ .[!is.na(.) & nzchar(trimws(.))][1]))
dat %>%
  select(-Age, -Place) %>%
  tidyr::pivot_longer(
    -c(Row, PID, Date),
    names_to = c(".value", "iter"),
    names_pattern = "^([^0-9]+)([123]?)$") %>%
  arrange(Row, iter) %>%
  group_by(PID, Date) %>%
  mutate(iter = row_number()) %>%
  select(-Row) %>%
  tidyr::pivot_wider(
    c("PID", "Date"), names_sep = "",
    names_from = "iter", values_from = c("Drug", "Dose")) %>%
  left_join(dat0, by = c("PID", "Date"))
# # A tibble: 5 x 16
# # Groups: PID, Date [5]
# PID Date Drug1 Drug2 Drug3 Drug4 Drug5 Drug6 Dose1 Dose2 Dose3 Dose4 Dose5 Dose6 Age Place
# <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <int> <int> <int> <int> <int> <int> <int> <chr>
# 1 11A 25/10/2021 RPG NAT QRT BET "SET" "BLT" 12 34 5 10 43 45 45 PMk
# 2 12B 20/10/2021 ATY LTP CRT <NA> <NA> <NA> 13 3 3 NA NA NA 56 GTL
# 3 13A 22/10/2021 GGS GSF ERE DFS "" "" 7 12 45 5 NA NA 45 RKS
# 4 13A 26/10/2021 BRT ARR GSF <NA> <NA> <NA> 9 4 34 NA NA NA 46 GLO
# 5 14B 04/08/2021 GDS TRE HHS <NA> <NA> <NA> 2 55 34 NA NA NA 25 MTK
Notes:
I broke out dat0 early, since Age and Place don't really fit into the pivot/renumber/pivot mindset.
base R
Here's a base R method that splits the data (according to your grouping criteria: PID and Date), finds the Drug/Dose columns that need to be renumbered, renames them, and then merges all of the frames back together.
spl <- split(dat, ave(rep(1L, nrow(dat)), dat[,c("PID", "Date")], FUN = seq_along))
spl
# $`1`
# Row PID Date Drug1 Dose1 Drug2 Dose2 Drug3 Dose3 Age Place
# 1 1 11A 25/10/2021 RPG 12 NAT 34 QRT 5 45 PMk
# 3 3 12B 20/10/2021 ATY 13 LTP 3 CRT 3 56 GTL
# 4 4 13A 22/10/2021 GGS 7 GSF 12 ERE 45 45 RKS
# 5 5 13A 26/10/2021 BRT 9 ARR 4 GSF 34 46 GLO
# 7 7 14B 04/08/2021 GDS 2 TRE 55 HHS 34 25 MTK
# $`2`
# Row PID Date Drug1 Dose1 Drug2 Dose2 Drug3 Dose3 Age Place
# 2 2 11A 25/10/2021 BET 10 SET 43 BLT 45 NA
# 6 6 13A 22/10/2021 DFS 5 NA NA NA
nms <- lapply(spl, function(x) grep("^(Drug|Dose)", colnames(x), value = TRUE))
nms <- data.frame(i = rep(names(nms), lengths(nms)), oldnm = unlist(nms))
nms$grp <- gsub("[0-9]+$", "", nms$oldnm)
nms$newnm <- paste0(nms$grp, ave(nms$grp, nms$grp, FUN = seq_along))
nms <- split(nms, nms$i)
newspl <- Map(function(x, nm) {
  colnames(x)[ match(nm$oldnm, colnames(x)) ] <- nm$newnm
  x
}, spl, nms)
newspl[-1] <- lapply(newspl[-1], function(x) x[, c("PID", "Date", grep("^(Drug|Dose)", colnames(x), value = TRUE)), drop = FALSE])
newspl
# $`1`
# Row PID Date Drug1 Dose1 Drug2 Dose2 Drug3 Dose3 Age Place
# 1 1 11A 25/10/2021 RPG 12 NAT 34 QRT 5 45 PMk
# 3 3 12B 20/10/2021 ATY 13 LTP 3 CRT 3 56 GTL
# 4 4 13A 22/10/2021 GGS 7 GSF 12 ERE 45 45 RKS
# 5 5 13A 26/10/2021 BRT 9 ARR 4 GSF 34 46 GLO
# 7 7 14B 04/08/2021 GDS 2 TRE 55 HHS 34 25 MTK
# $`2`
# PID Date Drug4 Dose4 Drug5 Dose5 Drug6 Dose6
# 2 11A 25/10/2021 BET 10 SET 43 BLT 45
# 6 13A 22/10/2021 DFS 5 NA NA
Reduce(function(a, b) merge(a, b, by = c("PID", "Date"), all = TRUE), newspl)
# PID Date Row Drug1 Dose1 Drug2 Dose2 Drug3 Dose3 Age Place Drug4 Dose4 Drug5 Dose5 Drug6 Dose6
# 1 11A 25/10/2021 1 RPG 12 NAT 34 QRT 5 45 PMk BET 10 SET 43 BLT 45
# 2 12B 20/10/2021 3 ATY 13 LTP 3 CRT 3 56 GTL <NA> NA <NA> NA <NA> NA
# 3 13A 22/10/2021 4 GGS 7 GSF 12 ERE 45 45 RKS DFS 5 NA NA
# 4 13A 26/10/2021 5 BRT 9 ARR 4 GSF 34 46 GLO <NA> NA <NA> NA <NA> NA
# 5 14B 04/08/2021 7 GDS 2 TRE 55 HHS 34 25 MTK <NA> NA <NA> NA <NA> NA
Notes:
The underlying premise of this is that you want to merge the rows onto previous rows. This means (to me) using base::merge or dplyr::full_join; two good links for understanding these concepts, in case you are not aware: How to join (merge) data frames (inner, outer, left, right), What's the difference between INNER JOIN, LEFT JOIN, RIGHT JOIN and FULL JOIN?
To do that, we need to determine which rows are duplicates of previous; further, we need to know how many previous same-key rows there are. There are a few ways to do this, but I think the easiest is with base::split. In this case, no PID/Date combination has more than two rows, but if you had one combination that mandated a third row, spl would be length-3, and the resulting names would go out to Drug9/Dose9.
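As a standalone illustration (toy key vector, not the OP's data) of how ave(..., FUN = seq_along) produces the per-group occurrence counter that split uses:

```r
key <- c("11A", "11A", "12B", "13A", "13A", "13A")
# Number each repeat of a key: 1 for its first occurrence, 2 for the second, ...
occ <- ave(rep(1L, length(key)), key, FUN = seq_along)
occ
# [1] 1 2 1 1 2 3
# Splitting on this counter groups all "first rows" together, all "second rows", etc.
split(key, occ)
```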
The second portion (nms <- ...) is where we work on the names. The first few steps create a nms dataframe that we'll use to map from old to new names. Since we're concerned about contiguous numbering through all multi-row groups, we aggregate on the base (number removed) of the Drug/Dose names, so that we number all Drug columns from Drug1 through how many there are.
Note: this assumes that there are always perfect pairs of Drug#/Dose#; if there is ever a mismatch, then the numbering will be suspect.
We end with nms being a split dataframe, just like spl of the data. This is useful and important, since we'll Map (zip-like lapply) them together.
The third block updates spl with the new names. The result in newspl is just renaming of the columns so that when we merge them together, no column-duplication will occur.
One additional step here is removing unrelated columns from the 2nd and subsequent frame in the list. That is, we keep Age and Place in the first such frame but remove it from the rest. My assumption (based on the NA/empty nature of those fields in duplicate rows) is that we only want to keep the first row's values.
The last step is to iteratively merge them together. The Reduce function is nice for this.
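A minimal standalone illustration of that Reduce pattern (toy frames, not the OP's data):

```r
f1 <- data.frame(PID = c("11A", "12B"), Drug1 = c("RPG", "ATY"))
f2 <- data.frame(PID = "11A", Drug4 = "BET")

# merge() is folded over the list pairwise, left to right;
# all = TRUE keeps PIDs that are missing from a later frame (full outer join)
Reduce(function(a, b) merge(a, b, by = "PID", all = TRUE), list(f1, f2))
```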

Update:
With the help of @akrun (see here: Use separate after mutate and across)
We could:
library(dplyr)
library(stringr)
library(tidyr)
df %>%
  group_by(PID) %>%
  summarise(across(everything(), ~toString(.))) %>%
  mutate(across(everything(), ~ list(tibble(col1 = .) %>%
    separate(col1, into = str_c(cur_column(), 1:3), sep = ",\\s+", fill = "left", extra = "drop")))) %>%
  unnest(c(PID, Row, Date, Drug1, Dose1, Drug2, Dose2, Drug3, Dose3, Age, Place)) %>%
  distinct() %>%
  select(-1, -2)
PID3 Row1 Row2 Row3 Date1 Date2 Date3 Drug11 Drug12 Drug13 Dose11 Dose12 Dose13 Drug21 Drug22 Drug23 Dose21 Dose22 Dose23 Drug31 Drug32 Drug33 Dose31 Dose32 Dose33 Age1 Age2 Age3 Place1 Place2 Place3
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 11A NA 1 2 NA 25/10/2021 25/10/2021 NA RPG BET NA 12 10 NA NAT SET NA 34 43 NA QRT BLT NA 5 45 NA 45 NA NA PMk NA
2 12B NA NA 3 NA NA 20/10/2021 NA NA ATY NA NA 13 NA NA LTP NA NA 3 NA NA CRT NA NA 3 NA NA 56 NA NA GTL
3 13A 4 5 6 22/10/2021 26/10/2021 22/10/2021 GGS BRT DFS 7 9 5 GSF ARR NA 12 4 NA ERE GSF NA 45 34 NA 45 46 NA RKS GLO NA
4 14B NA NA 7 NA NA 04/08/2021 NA NA GDS NA NA 2 NA NA TRE NA NA 55 NA NA HHS NA NA 34 NA NA 25 NA NA MTK
First answer:
Keeping the excellent explanation of @r2evans in mind, we could do it this way if really desired.
library(dplyr)
df %>%
  group_by(PID) %>%
  summarise(across(everything(), ~toString(.)))
Output:
PID Row Date Drug1 Dose1 Drug2 Dose2 Drug3 Dose3 Age Place
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 11A 1, 2 25/10/2021, 25/10/2021 RPG, BET 12, 10 NAT, SET 34, 43 QRT, BLT 5, 45 45, NA PMk, NA
2 12B 3 20/10/2021 ATY 13 LTP 3 CRT 3 56 GTL
3 13A 4, 5, 6 22/10/2021, 26/10/2021, 22/10/2021 GGS, BRT, DFS 7, 9, 5 GSF, ARR, NA 12, 4, NA ERE, GSF, NA 45, 34, NA 45, 46, NA RKS, GLO, NA
4 14B 7 04/08/2021 GDS 2 TRE 55 HHS 34 25 MTK

Another tidyverse-based solution, with a pivot_longer followed by a pivot_wider:
library(tidyverse)
# Note that my dataframe does not contain column Row
df %>%
  mutate(across(starts_with("Dose"), as.character)) %>%
  pivot_longer(!c(PID, Date, Age, Place), names_to = "trm") %>%
  group_by(PID, Date) %>%
  fill(Age, Place) %>%
  mutate(trm = paste(trm, 1:n(), sep = "_")) %>%
  ungroup %>%
  pivot_wider(c(PID, Date, Age, Place), names_from = trm) %>%
  rename_with(~ paste0("Drug", 1:length(.x)), starts_with("Drug")) %>%
  rename_with(~ paste0("Dose", 1:length(.x)), starts_with("Dose")) %>%
  mutate(across(starts_with("Dose"), as.numeric))
#> # A tibble: 5 × 16
#> PID Date Age Place Drug1 Dose1 Drug2 Dose2 Drug3 Dose3 Drug4 Dose4 Drug5
#> <chr> <chr> <int> <chr> <chr> <dbl> <chr> <dbl> <chr> <dbl> <chr> <dbl> <chr>
#> 1 11A 25/10… 45 PMk RPG 12 NAT 34 QRT 5 BET 10 SET
#> 2 12B 20/10… 56 GTL ATY 13 LTP 3 CRT 3 <NA> NA <NA>
#> 3 13A 22/10… 45 RKS GGS 7 GSF 12 ERE 45 DFS 5 <NA>
#> 4 13A 26/10… 46 GLO BRT 9 ARR 4 GSF 34 <NA> NA <NA>
#> 5 14B 04/08… 25 MTK GDS 2 TRE 55 HHS 34 <NA> NA <NA>
#> # … with 3 more variables: Dose5 <dbl>, Drug6 <chr>, Dose6 <dbl>

a data.table approach
library(data.table)
DT <- fread("Row PID Date Drug1 Dose1 Drug2 Dose2 Drug3 Dose3 Age Place
1 11A 25/10/2021 RPG 12 NAT 34 QRT 5 45 PMk
2 11A 25/10/2021 BET 10 SET 43 BLT 45
3 12B 20/10/2021 ATY 13 LTP 3 CRT 3 56 GTL
4 13A 22/10/2021 GGS 7 GSF 12 ERE 45 45 RKS
5 13A 26/10/2021 BRT 9 ARR 4 GSF 34 46 GLO
6 13A 22/10/2021 DFS 5
7 14B 04/08/2021 GDS 2 TRE 55 HHS 34 25 MTK")
DT
# Melt to long format
ans <- melt(DT, id.vars = c("PID", "Date"),
            measure.vars = patterns(drug = "^Drug", dose = "^Dose"),
            na.rm = TRUE)
# Paste and collapse, using ; as separator
ans <- ans[, lapply(.SD, paste0, collapse = ";"), by = .(PID, Date)]
# Split strings on ;
ans[, paste0("Drug", 1:length(tstrsplit(ans$drug, ";"))) := tstrsplit(drug, ";")]
ans[, paste0("Dose", 1:length(tstrsplit(ans$dose, ";"))) := tstrsplit(dose, ";")]
# Join Age + Place data
ans[DT[!is.na(Age), ], `:=`(Age = i.Age, Place = i.Place), on = .(PID, Date)]
ans[, -c("variable", "drug", "dose")]
# PID Date Drug1 Drug2 Drug3 Drug4 Drug5 Drug6 Dose1 Dose2 Dose3 Dose4 Dose5 Dose6 Age Place
# 1: 11A 25/10/2021 RPG BET NAT SET QRT BLT 12 10 34 43 5 45 45 PMk
# 2: 12B 20/10/2021 ATY LTP CRT <NA> <NA> <NA> 13 3 3 <NA> <NA> <NA> 56 GTL
# 3: 13A 22/10/2021 GGS DFS GSF ERE <NA> <NA> 7 5 12 45 <NA> <NA> 45 RKS
# 4: 13A 26/10/2021 BRT ARR GSF <NA> <NA> <NA> 9 4 34 <NA> <NA> <NA> 46 GLO
# 5: 14B 04/08/2021 GDS TRE HHS <NA> <NA> <NA> 2 55 34 <NA> <NA> <NA> 25 MTK

Another answer to add to the mix.
Reading data from this page
require(rvest)
require(tidyverse)
d = read_html("https://stackoverflow.com/q/69787018/694915") %>%
  html_nodes("table") %>%
  html_table(fill = TRUE)
List of dose per PID and DATE
# first table
d[[1]] -> df
df %>%
  pivot_longer(
    cols = starts_with("Drug"),
    values_to = "Drug"
  ) %>%
  select(!name) %>%
  pivot_longer(
    cols = starts_with("Dose"),
    values_to = "Dose"
  ) %>%
  select(!name) %>%
  drop_na() %>%
  pivot_wider(
    names_from = Drug,
    values_from = Dose,
    values_fill = list(0)
  ) -> dose
The variable dose contains this data (output screenshot: https://i.stack.imgur.com/lc3iN.png).
Not that elegant as previous ones, but is an idea to see the whole treatment per PID.

Related

How to delete missing observations for a subset of columns: the R equivalent of dropna(subset) from python pandas

Consider a dataframe in R where I want to drop row 6 because it has missing observations for the variables var1:var3. But the dataframe has valid observations for id and year. See code below.
In python, this can be done in two ways:
use df.dropna(subset = ['var1', 'var2', 'var3'], inplace=True)
use df.set_index(['id', 'year']).dropna()
How to do this in R with tidyverse?
library(tidyverse)
df <- tibble(id = c(seq(1,10)), year = c(seq(2001,2010)),
             var1 = c(sample(1:100, 10, replace=TRUE)),
             var2 = c(sample(1:100, 10, replace=TRUE)),
             var3 = c(sample(1:100, 10, replace=TRUE)))
df[3, 4] = NA
df[6, 3:5] = NA
df[8, 3:4] = NA
df[10, 4:5] = NA
We may use complete.cases
library(dplyr)
df %>%
  filter(if_any(var1:var3, complete.cases))
Output:
# A tibble: 9 x 5
id year var1 var2 var3
<int> <int> <int> <int> <int>
1 1 2001 48 55 82
2 2 2002 22 83 67
3 3 2003 89 NA 19
4 4 2004 56 1 38
5 5 2005 17 58 35
6 7 2007 4 30 94
7 8 2008 NA NA 36
8 9 2009 97 100 80
9 10 2010 37 NA NA
We can use pmap for this case also:
library(dplyr)
library(purrr)
df %>%
  filter(!pmap_lgl(., ~ {x <- c(...)[-c(1, 2)]; all(is.na(x))}))
# A tibble: 9 x 5
id year var1 var2 var3
<int> <int> <int> <int> <int>
1 1 2001 90 55 77
2 2 2002 77 5 18
3 3 2003 17 NA 70
4 4 2004 72 33 33
5 5 2005 10 55 77
6 7 2007 22 81 17
7 8 2008 NA NA 46
8 9 2009 93 28 100
9 10 2010 50 NA NA
Or we could also use the complete.cases function in pmap, as suggested by @akrun:
df %>%
  filter(pmap_lgl(select(., 3:5), ~ any(complete.cases(c(...)))))
You can use if_any in filter -
library(dplyr)
df %>% filter(if_any(var1:var3, Negate(is.na)))
# id year var1 var2 var3
# <int> <int> <int> <int> <int>
#1 1 2001 14 99 43
#2 2 2002 25 72 76
#3 3 2003 90 NA 15
#4 4 2004 91 7 32
#5 5 2005 69 42 7
#6 7 2007 57 83 41
#7 8 2008 NA NA 74
#8 9 2009 9 78 23
#9 10 2010 93 NA NA
In base R, we can use rowSums to select rows which have at least 1 non-NA value.
cols <- grep('var', names(df))
df[rowSums(!is.na(df[cols])) > 0, ]
If looking for complete cases, use the following (kernel of this is based on other answers):
library(tidyverse)
df <- tibble(id = c(seq(1,10)), year=c(seq(2001,2010)),
var1 = c(sample(1:100, 10, replace=TRUE)),
var2 = c(sample(1:100, 10, replace=TRUE)),
var3 = c(sample(1:100, 10, replace=TRUE)))
df[3,4] = NA
df[6,3:5] = NA
df[8,3:4] = NA
df[10,4:5] = NA
df %>% filter(!if_any(var1:var3, is.na))
#> # A tibble: 6 x 5
#> id year var1 var2 var3
#> <int> <int> <int> <int> <int>
#> 1 1 2001 13 28 26
#> 2 2 2002 61 77 58
#> 3 4 2004 95 38 58
#> 4 5 2005 38 34 91
#> 5 7 2007 85 46 14
#> 6 9 2009 45 60 40
Created on 2021-06-24 by the reprex package (v2.0.0)

How to create a new column with the derivative of a set of time series values

I'm looking for help with R. I want to add three columns to existing data frames that contain time-series test scores with many NA values. The first new column should hold the first available test score and the second the last available test score. The third should hold the derivative for each row: the difference between the first and last scores divided by the number of tests that have passed. Importantly, some of those past tests have NA values, but I still want to count them in the divisor; NA values that come after the last available test score should not be counted.
Some explanation of my data:
I have a couple of data frames that all contain test scores for different people. The people are the rows and each column is a test score: column T1 is the first score, T2 the second (gathered a week later), and so on. Some people started sooner than others and therefore have more scores available; some scores at the beginning and in the middle are missing for various reasons. See the two examples below, where the index column is the actual index of the data frame and not a separate column. Some numbers (like 3) are missing from the index because that person had only NA values in their row, which I removed. It is important for me that the index stays this way.
Example 1 (test A):
INDEX  T1  T2  T3  T4  T5  T6
1      NA  NA  NA   3   4   5
2      57  57  57  57  NA  NA
4      44  NA  NA  NA  NA  NA
5       9  11  11  17  12  NA
Example 2 (test B):
INDEX  T1  T2  T3  T4
1      NA  NA  NA  17
2      11  16  20  20
4       1  20  NA  NA
5      20  20  20  20
My goal now is to add to these data frames the three columns mentioned before. For example 1 this would look like:
INDEX  T1  T2  T3  T4  T5  T6  FirstScore  LastScore  Derivative
1      NA  NA  NA   3   4   5           3          5        0.33
2      57  57  57  57  NA  NA          57         57        0
4      44  NA  NA  NA  NA  NA          44         44        0
5       9  11  11  17  12  NA           9         12        0.6
And for example 2:
INDEX  T1  T2  T3  T4  FirstScore  LastScore  Derivative
1      NA  NA  NA  17          17         17        0
2      11  16  20  20          11         20        2.25
4       1  20  NA  NA           1         20        9.5
5      20  20  20  20          20         20        0
I hope I have made myself clear and that someone can help me, thanks in advance!
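One way to sketch the requested columns in base R (a hedged sketch, assuming the score columns are named T1, T2, ...; the helper names scores and stats are invented):

```r
df1 <- data.frame(
  INDEX = c(1L, 2L, 4L, 5L),
  T1 = c(NA, 57L, 44L, 9L), T2 = c(NA, 57L, NA, 11L),
  T3 = c(NA, 57L, NA, 11L), T4 = c(3L, 57L, NA, 17L),
  T5 = c(4L, NA, NA, 12L),  T6 = c(5L, NA, NA, NA)
)

scores <- df1[grep("^T", names(df1))]
stats <- t(apply(scores, 1, function(x) {
  idx   <- which(!is.na(x))            # positions of available scores
  first <- unname(x[idx[1]])
  last  <- unname(x[idx[length(idx)]])
  # divide by the index of the last available score,
  # so trailing NAs are not counted
  c(FirstScore = first, LastScore = last,
    Derivative = (last - first) / max(idx))
}))
cbind(df1, stats)
```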
Using one pmap_*:
pmap_dfr(df1, ~ {c(...) %>% t %>% as.data.frame() %>%
  mutate(first_score = first(na.omit(c(...)[-1])),
         last_score = last(na.omit(c(...)[-1])),
         deriv = (last_score - first_score)/max(which(!is.na(c(...)[-1]))))})
INDEX T1 T2 T3 T4 T5 T6 first_score last_score deriv
1 1 NA NA NA 3 4 5 3 5 0.3333333
2 2 57 57 57 57 NA NA 57 57 0.0000000
3 4 44 NA NA NA NA NA 44 44 0.0000000
4 5 9 11 11 17 12 NA 9 12 0.6000000
In dplyr only, using cur_data() and avoiding rowwise(), which slows down the operations:
df1 %>%
  group_by(INDEX) %>%
  mutate(first_score = c_across(starts_with('T'))[min(which(!is.na(cur_data())))],
         last_score = c_across(starts_with('T'))[max(which(!is.na(cur_data()[1:6])))],
         deriv = (last_score - first_score)/max(which(!is.na(cur_data()[1:6]))))
I think you can use the following solution. It surprisingly turned out to be a little verbose and convoluted, but I think it is quite effective. I assumed that if the last available score is not actually the last T, I need to detect its index and divide the difference by that, meaning NA values after the last score do not count; otherwise the difference is divided by the number of all Ts available.
library(dplyr)
library(purrr)
df %>%
  select(T1:T6) %>%
  pmap(., ~ {x <- c(...)[!is.na(c(...))]; c(x[1], x[length(x)])}) %>%
  exec(rbind, !!!.) %>%
  as_tibble() %>%
  set_names(c("First", "Last")) %>%
  bind_cols(df) %>%
  relocate(First, Last, .after = last_col()) %>%
  rowwise() %>%
  mutate(Derivative = ifelse(!is.na(T6) & T6 == Last, (Last - First)/(length(df)-1),
                             (Last - First)/last(which(c_across(T1:T6) == Last))))
# First Sample Data
# A tibble: 4 x 10
# Rowwise:
INDEX T1 T2 T3 T4 T5 T6 First Last Derivative
<int> <int> <int> <int> <int> <int> <int> <int> <int> <dbl>
1 1 NA NA NA 3 4 5 3 5 0.333
2 2 57 57 57 57 NA NA 57 57 0
3 4 44 NA NA NA NA NA 44 44 0
4 5 9 11 11 17 12 NA 9 12 0.6
Second Sample Data
df2 %>%
  select(T1:T4) %>%
  pmap(., ~ {x <- c(...)[!is.na(c(...))]; c(x[1], x[length(x)])}) %>%
  exec(rbind, !!!.) %>%
  as_tibble() %>%
  set_names(c("First", "Last")) %>%
  bind_cols(df2) %>%
  relocate(First, Last, .after = last_col()) %>%
  rowwise() %>%
  mutate(Derivative = ifelse(!is.na(T4) & T4 == Last, (Last - First)/(length(df2)-1),
                             (Last - First)/last(which(c_across(T1:T4) == Last))))
# A tibble: 4 x 8
# Rowwise:
INDEX T1 T2 T3 T4 First Last Derivative
<int> <int> <int> <int> <int> <int> <int> <dbl>
1 1 NA NA NA 17 17 17 0
2 2 11 16 20 20 11 20 2.25
3 4 1 20 NA NA 1 20 9.5
4 5 20 20 20 20 20 20 0
Here's a tidyverse solution with no hardcoding. First I pivot longer, then extract the stats for each INDEX.
library(tidyverse)
df1 %>%
  pivot_longer(-INDEX, names_to = "time", names_prefix = "T", names_transform = list(time = as.integer)) %>%
  filter(!is.na(value)) %>%
  group_by(INDEX) %>%
  summarize(FirstScore = first(value), LastScore = last(value), divisor = max(time)) %>%
  mutate(Derivative = (LastScore - FirstScore) / divisor) %>%
  right_join(df1) %>%
  select(INDEX, T1:T6, FirstScore, LastScore, Derivative)
for this output:
# A tibble: 4 x 10
INDEX T1 T2 T3 T4 T5 T6 FirstScore LastScore Derivative
<int> <int> <int> <int> <int> <int> <int> <int> <int> <dbl>
1 1 NA NA NA 3 4 5 3 5 0.333
2 2 57 57 57 57 NA NA 57 57 0
3 4 44 NA NA NA NA NA 44 44 0
4 5 9 11 11 17 12 NA 9 12 0.6
Output for the 2nd data set (same code, with df2 and its column range T1:T4 swapped in):
# A tibble: 4 x 8
  INDEX    T1    T2    T3    T4 FirstScore LastScore Derivative
  <int> <int> <int> <int> <int>      <int>     <int>      <dbl>
1     1    NA    NA    NA    17         17        17       0
2     2    11    16    20    20         11        20       2.25
3     4     1    20    NA    NA          1        20       9.5
4     5    20    20    20    20         20        20       0
Sample data
df1 <- data.frame(
INDEX = c(1L, 2L, 4L, 5L),
T1 = c(NA, 57L, 44L, 9L),
T2 = c(NA, 57L, NA, 11L),
T3 = c(NA, 57L, NA, 11L),
T4 = c(3L, 57L, NA, 17L),
T5 = c(4L, NA, NA, 12L),
T6 = c(5L, NA, NA, NA)
)
df2 <- data.frame(
INDEX = c(1L, 2L, 4L, 5L),
T1 = c(NA, 11L, 1L, 20L),
T2 = c(NA, 16L, 20L, 20L),
T3 = c(NA, 20L, NA, 20L),
T4 = c(17L, 20L, NA, 20L)
)
You could also do:
df1 %>%
  rowwise() %>%
  mutate(firstScore = first(na.omit(c_across(T1:T6))),
         lastScore = last(na.omit(c_across(T1:T6))),
         Derivative = (lastScore - firstScore)/max(which(!is.na(c_across(T1:T6)))))
# A tibble: 4 x 10
# Rowwise:
INDEX T1 T2 T3 T4 T5 T6 firstScore lastScore Derivative
<int> <int> <int> <int> <int> <int> <int> <int> <int> <dbl>
1 1 NA NA NA 3 4 5 3 5 0.333
2 2 57 57 57 57 NA NA 57 57 0
3 4 44 NA NA NA NA NA 44 44 0
4 5 9 11 11 17 12 NA 9 12 0.6

How to split a data set with duplicated informations based on date

I have this situation:
ID date Weight
1 2014-12-02 23
1 2014-10-02 25
2 2014-11-03 27
2 2014-09-03 45
3 2014-07-11 56
3 NA 34
4 2014-10-05 25
4 2014-08-09 14
5 NA NA
5 NA NA
And I would like split the dataset in this, like this:
1-
ID date Weight
1 2014-12-02 23
1 2014-10-02 25
2 2014-11-03 27
2 2014-09-03 45
4 2014-10-05 25
4 2014-08-09 14
2- Lowest Date
ID date Weight
3 2014-07-11 56
3 NA 34
5 NA NA
5 NA NA
I tried this for second dataset:
dt <- dt[order(dt$ID, dt$date), ]
dt.2=dt[duplicated(dt$ID), ]
but it didn't work.
Get the IDs for which date is NA and then subset based on that:
NA_ids <- unique(df$ID[is.na(df$date)])
subset(df, !ID %in% NA_ids)
# ID date Weight
#1 1 2014-12-02 23
#2 1 2014-10-02 25
#3 2 2014-11-03 27
#4 2 2014-09-03 45
#7 4 2014-10-05 25
#8 4 2014-08-09 14
subset(df, ID %in% NA_ids)
# ID date Weight
#5 3 2014-07-11 56
#6 3 <NA> 34
#9 5 <NA> NA
#10 5 <NA> NA
Using dplyr, we can create a new column which has TRUE/FALSE for each ID based on presence of NA and then use group_split to split into list of two.
library(dplyr)
df %>%
  group_by(ID) %>%
  mutate(NA_ID = any(is.na(date))) %>%
  ungroup %>%
  group_split(NA_ID, keep = FALSE)
The above dplyr logic can also be implemented in base R by using ave and split
df$NA_ID <- with(df, ave(is.na(date), ID, FUN = any))
split(df[-4], df$NA_ID)

Cleaning a data.frame in a semi-reshape/semi-aggregate fashion

First time posting something here, forgive any missteps in my question.
In my example below I've got a data.frame where the unique identifier is the tripID with the name of the vessel, the species code, and a catch metric.
> testFrame1 <- data.frame('tripID' = c(1,1,2,2,3,4,5),
'name' = c('SS Anne','SS Anne', 'HMS Endurance', 'HMS Endurance','Salty Hippo', 'Seagallop', 'Borealis'),
'SPP' = c(101,201,101,201,102,102,103),
'kept' = c(12, 22, 14, 24, 16, 18, 10))
> testFrame1
tripID name SPP kept
1 1 SS Anne 101 12
2 1 SS Anne 201 22
3 2 HMS Endurance 101 14
4 2 HMS Endurance 201 24
5 3 Salty Hippo 102 16
6 4 Seagallop 102 18
7 5 Borealis 103 10
I need a way to basically condense the data.frame so that there is only one row per tripID, as shown below.
> testFrame1
tripID name SPP kept SPP.1 kept.1
1 1 SS Anne 101 12 201 22
2 2 HMS Endurance 101 14 201 24
3 3 Salty Hippo 102 16 NA NA
4 4 Seagallop 102 18 NA NA
5 5 Borealis 103 10 NA NA
I've looked into tidyr and reshape but neither of those can deliver quite what I'm asking for. Is there anything out there that does this quasi-reshaping?
Here are two alternatives using base::reshape and data.table::dcast:
1) base R
reshape(transform(testFrame1,
                  timevar = ave(tripID, tripID, FUN = seq_along)),
        idvar = c("tripID", "name"),
        timevar = "timevar",
        direction = "wide")
# tripID name SPP.1 kept.1 SPP.2 kept.2
#1 1 SS Anne 101 12 201 22
#3 2 HMS Endurance 101 14 201 24
#5 3 Salty Hippo 102 16 NA NA
#6 4 Seagallop 102 18 NA NA
#7 5 Borealis 103 10 NA NA
2) data.table
library(data.table)
setDT(testFrame1)
dcast(testFrame1, tripID + name ~ rowid(tripID), value.var = c("SPP", "kept"))
# tripID name SPP_1 SPP_2 kept_1 kept_2
#1: 1 SS Anne 101 201 12 22
#2: 2 HMS Endurance 101 201 14 24
#3: 3 Salty Hippo 102 NA 16 NA
#4: 4 Seagallop 102 NA 18 NA
#5: 5 Borealis 103 NA 10 NA
Great reproducible post considering it's your first. Here's a way to do it with dplyr and tidyr -
library(dplyr)
library(tidyr)
testFrame1 %>%
  group_by(tripID, name) %>%
  summarise(
    SPP = toString(SPP),
    kept = toString(kept)
  ) %>%
  ungroup() %>%
  separate("SPP", into = c("SPP", "SPP.1"), sep = ", ", extra = "drop", fill = "right") %>%
  separate("kept", into = c("kept", "kept.1"), sep = ", ", extra = "drop", fill = "right")
# A tibble: 5 x 6
tripID name SPP SPP.1 kept kept.1
<dbl> <chr> <chr> <chr> <chr> <chr>
1 1.00 SS Anne 101 201 12 22
2 2.00 HMS Endurance 101 201 14 24
3 3.00 Salty Hippo 102 <NA> 16 <NA>
4 4.00 Seagallop 102 <NA> 18 <NA>
5 5.00 Borealis 103 <NA> 10 <NA>

Repeated measures in messy format, need help to tidy

I have a very large data set containing weekly weights that have been coded with week of study and the weight at that visit. There are some missing visits and the data is not currently aligned.
df <- data.frame(ID=1:3, Week_A=c(6,6,7), Weight_A=c(23,24,23), Week_B=c(7,7,8),
Weight_B=c(25,26,27), Week_C=c(8,9,9), Weight_C=c(27,26,28))
df
ID Week_A Weight_A Week_B Weight_B Week_C Weight_C
1 1 6 23 7 25 8 27
2 2 6 24 7 26 9 26
3 3 7 23 8 27 9 28
I would like to align the data by week number (ideal output below).
df_ideal <- data.frame (ID=1:3, Week_6=c(23,24,NA), Week_7=c(25,26,23),
Week_8=c(27,NA,27), Week_9=c(NA,26,28))
df_ideal
ID Week_6 Week_7 Week_8 Week_9
1 1 23 25 27 NA
2 2 24 26 NA 26
3 3 NA 23 27 28
I would appreciate some help with this, even to find a starting point to manipulate this data to an easier to manage format.
A tidyverse solution:
df <- data.frame(ID=1:3,
Week_A=c(6,6,7),
Weight_A=c(23,24,23),
Week_B=c(7,7,8),
Weight_B=c(25,26,27),
Week_C=c(8,9,9),
Weight_C=c(27,26,28))
library(tidyverse)
df_long <- df %>% gather(key="v", value="value", -ID) %>%
separate(v, into=c("v1", "v2")) %>%
spread(v1, value) %>%
complete(ID, Week) %>%
arrange(Week, ID)
df_long
# A tibble: 12 x 4
# ID Week v2 Weight
# <int> <dbl> <chr> <dbl>
# 1 1 6 A 23
# 2 2 6 A 24
# 3 3 6 <NA> NA
# 4 1 7 B 25
# 5 2 7 B 26
# 6 3 7 A 23
# 7 1 8 C 27
# 8 2 8 <NA> NA
# 9 3 8 B 27
#10 1 9 <NA> NA
#11 2 9 C 26
#12 3 9 C 28
df_wide <- df_long %>% select(-v2) %>%
spread(Week, Weight, sep="_")
df_wide
# A tibble: 3 x 5
# ID Week_6 Week_7 Week_8 Week_9
# <int> <dbl> <dbl> <dbl> <dbl>
#1 1 23 25 27 NA
#2 2 24 26 NA 26
#3 3 NA 23 27 28
Personally, I'd keep using df_long instead of df_wide, as it is a tidy data frame, while df_wide is not.
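To illustrate that point (a hypothetical follow-up computation, not part of the question), summaries by week are one-liners on the long layout:

```r
# A standalone stand-in with the same shape as df_long above
df_long <- data.frame(
  ID     = c(1L, 2L, 3L, 1L, 2L, 3L),
  Week   = c(6, 6, 6, 7, 7, 7),
  Weight = c(23, 24, NA, 25, 26, 23)
)

# Mean weight per week; the formula interface drops NA rows automatically
aggregate(Weight ~ Week, data = df_long, FUN = mean)
```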
Here is a possible approach using the data.table package
library(data.table)
#convert into a data.table
setDT(df)
#convert into a long format
mdat <- melt(df, id.vars="ID", measure.vars=patterns("^Week", "^Weight", cols=names(df)))
#pivot into desired output
ans <- dcast(mdat, ID ~ value1, value.var="value2")
ans output:
ID 6 7 8 9
1: 1 23 25 27 NA
2: 2 24 26 NA 26
3: 3 NA 23 27 28
And if you really need the "Week_" in your column names, you can use
setnames(ans, names(ans)[-1L], paste0("Week_", names(ans)[-1L]))
Another tidyverse solution using a double-gather with a final spread
df %>%
  gather(k, v, -ID, -starts_with("Weight")) %>%
  separate(k, into = c("k1", "k2")) %>%
  unite(k1, k1, v) %>%
  gather(k, v, starts_with("Weight")) %>%
  separate(k, into = c("k3", "k4")) %>%
  filter(k2 == k4) %>%
  select(-k2, -k3, -k4) %>%
  spread(k1, v)
# ID Week_6 Week_7 Week_8 Week_9
#1 1 23 25 27 NA
#2 2 24 26 NA 26
#3 3 NA 23 27 28
In base R, it's a double reshape, firstly to long and then back to wide on a different variable:
tmp <- reshape(df, idvar="ID", varying=lapply(c("Week_","Weight_"), grep, names(df)),
v.names=c("time","Week"), direction="long")
reshape(tmp, idvar="ID", direction="wide", sep="_")
# ID Week_6 Week_7 Week_8 Week_9
#1.1 1 23 25 27 NA
#2.1 2 24 26 NA 26
#3.1 3 NA 23 27 28