Ascending group by date - r

I am not able to sort my groups by date in ascending order. Please help!
df <- data.frame(A = c('a1','a1','b1','b1','b1','c2','d2','d2'),
B = c("2017-02-20","2018-02-14","2017-02-06","2018-02-27","2017-02-29","2017-02-28","2017-02-09","2017-02-10"))
Code:
df %>% group_by(A) %>% arrange(A, as.Date(B))
I am getting the wrong result, as the b1 group did not sort:
A B
<fctr> <fctr>
1 a1 2017-02-20
2 a1 2018-02-14
3 b1 2017-02-06
4 b1 2018-02-27
5 b1 2017-02-29
6 c2 2017-02-28
7 d2 2017-02-09
8 d2 2017-02-10

You can see that 2017-02-29 is not a real date; there are only 28 days in February 2017. So, when you convert your column B to Date, that value becomes NA. Fix that entry and your code should work.
Also, you probably do not need to group_by A:
library(dplyr)
#>
df <- data.frame(A = c('a1','a1','b1','b1','b1','c2','d2','d2'),
B = c("2017-02-20","2018-02-14","2017-02-06","2018-02-27","2017-02-29","2017-02-28","2017-02-09","2017-02-10"))
as.Date(df$B)
#> [1] "2017-02-20" "2018-02-14" "2017-02-06" "2018-02-27" NA
#> [6] "2017-02-28" "2017-02-09" "2017-02-10"
df%>%arrange(A, as.Date(B))
#> A B
#> 1 a1 2017-02-20
#> 2 a1 2018-02-14
#> 3 b1 2017-02-06
#> 4 b1 2018-02-27
#> 5 b1 2017-02-29
#> 6 c2 2017-02-28
#> 7 d2 2017-02-09
#> 8 d2 2017-02-10
Created on 2019-09-16 by the reprex package (v0.2.1)
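For completeness, here is a sketch with the invalid entry corrected (I am assuming 2017-02-28 was the intended date; substitute whatever the real value should be), after which the sort works for the b1 group:

```r
library(dplyr)

df <- data.frame(A = c('a1','a1','b1','b1','b1','c2','d2','d2'),
                 B = c("2017-02-20","2018-02-14","2017-02-06","2018-02-27",
                       "2017-02-28", # corrected from the invalid "2017-02-29"
                       "2017-02-28","2017-02-09","2017-02-10"))

# with no NA dates, the b1 rows now sort: 2017-02-06, 2017-02-28, 2018-02-27
df %>% arrange(A, as.Date(B))
```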

Related

Include in data frame 1 averaged values from several other data frames and based on varying time intervals

I have a data frame with several variables, and whose first columns look like this:
library(lubridate) # needed below for ymd() and week()
Place <- c(rep("PlaceA",14),rep("PlaceB",15))
Group_Id <- c(rep("A1",5),rep("A1",6),rep("A2",3),rep("B1",6),rep("B2",4),rep("B2",5))
Time <- as.Date(c("2018-01-15","2018-02-03","2018-02-27","2018-03-10","2018-03-18","2019-02-02","2019-03-01","2019-03-15","2019-03-28","2019-04-05","2019-04-12","2018-02-01",
"2018-03-01","2018-04-07","2018-01-17","2018-01-27","2018-02-17","2018-03-03","2018-04-02","2018-04-25","2018-03-03","2018-03-18","2018-04-08","2018-04-20",
"2019-01-23","2019-02-09","2019-02-27","2019-03-12","2019-03-30"))
FollowUp <- c("start",paste("week",week(ymd(Time[2:5]))),
"start",paste("week",week(ymd(Time[7:11]))),
"start",paste("week",week(ymd(Time[13:14]))),
"start",paste("week",week(ymd(Time[16:20]))),
"start",paste("week",week(ymd(Time[22:24]))),
"start",paste("week",week(ymd(Time[26:29]))))
exprmt <- c(rep(1,5),rep(2,6),rep(3,3),rep(4,6),rep(5,4),rep(6,5))
> df1
Place Group_Id Time exprmt FollowUp
1 PlaceA A1 2018-01-15 1 start
2 PlaceA A1 2018-02-03 1 week 5
3 PlaceA A1 2018-02-27 1 week 9
4 PlaceA A1 2018-03-10 1 week 10
5 PlaceA A1 2018-03-18 1 week 11
6 PlaceA A1 2019-02-02 2 start
7 PlaceA A1 2019-03-01 2 week 9
8 PlaceA A1 2019-03-15 2 week 11
9 PlaceA A1 2019-03-28 2 week 13
10 PlaceA A1 2019-04-05 2 week 14
11 PlaceA A1 2019-04-12 2 week 15
12 PlaceA A2 2018-02-01 3 start
13 PlaceA A2 2018-03-01 3 week 9
14 PlaceA A2 2018-04-07 3 week 14
15 PlaceB B1 2018-01-17 4 start
16 PlaceB B1 2018-01-27 4 week 4
17 PlaceB B1 2018-02-17 4 week 7
18 PlaceB B1 2018-03-03 4 week 9
19 PlaceB B1 2018-04-02 4 week 14
20 PlaceB B1 2018-04-25 4 week 17
21 PlaceB B2 2018-03-03 5 start
22 PlaceB B2 2018-03-18 5 week 11
23 PlaceB B2 2018-04-08 5 week 14
24 PlaceB B2 2018-04-20 5 week 16
25 PlaceB B2 2019-01-23 6 start
26 PlaceB B2 2019-02-09 6 week 6
27 PlaceB B2 2019-02-27 6 week 9
28 PlaceB B2 2019-03-12 6 week 11
29 PlaceB B2 2019-03-30 6 week 13
For each Place (more than 2 in my actual data), I have a separate data frame with temperature records by hours. For example:
set.seed(1032)
t <- c(seq.POSIXt(from = ISOdate(2018,01,01),to = ISOdate(2018,06,01), by = "hour"),seq.POSIXt(from = ISOdate(2019,01,01),to = ISOdate(2019,06,01), by = "hour"))
temp_A <- runif(length(t),min = 5, max = 25)
temp_B <- runif(length(t),min = 3, max = 32)
data_A <- data.frame(t,temp_A)
data_B <- data.frame(t,temp_B)
> head(data_A)
t temp_A
1 2018-01-01 12:00:00 14.24961
2 2018-01-01 13:00:00 21.64925
3 2018-01-01 14:00:00 21.77058
4 2018-01-01 15:00:00 13.31673
5 2018-01-01 16:00:00 16.10350
6 2018-01-01 17:00:00 17.64567
I need to add a column in df1 with the average temperature for each time interval, by Place, Group_Id and exprmt: the first row of each group should be NaN; then I need the average over each time interval. Note that for each Place, the temperature data are in a separate data frame.
I tried something like this, but it is not working:
df1 <- df1 %>% group_by(Place,Group_Id,exprmt) %>% mutate(
temp = case_when(FollowUp == "start" & Place == "PlaceA" ~ NA,
FollowUp == FollowUp[c(2:n())] & Place == "PlaceA" ~ mean(temp_A[c(which(date(temp_A$t))==lag(Time,1):which(date(temp_A$t))==Time),2]),
)
)
I found information on how to calculate averages over multiple data frames (e.g. this or this), but this is not what I am looking for. I would like to do it without a loop. My expected result is ("etc" stands for "and so on"):
> df1
Place Group_Id Time exprmt FollowUp expected
1 PlaceA A1 2018-01-15 1 start NaN
2 PlaceA A1 2018-02-03 1 week 5 mean temp_A between 2018-01-15 and 2018-02-03
3 PlaceA A1 2018-02-27 1 week 9 mean temp_A between 2018-02-03 and 2018-02-27
4 PlaceA A1 2018-03-10 1 week 10 mean temp_A between 2018-02-27 and 2018-03-10
5 PlaceA A1 2018-03-18 1 week 11 mean temp_A between 2018-03-10 and 2018-03-18
6 PlaceA A1 2019-02-02 2 start NaN
7 PlaceA A1 2019-03-01 2 week 9 mean temp_A between 2019-02-02 and 2019-03-01
8 PlaceA A1 2019-03-15 2 week 11 etc
9 PlaceA A1 2019-03-28 2 week 13 etc
10 PlaceA A1 2019-04-05 2 week 14 etc
11 PlaceA A1 2019-04-12 2 week 15 etc
12 PlaceA A2 2018-02-01 3 start etc
13 PlaceA A2 2018-03-01 3 week 9 etc
14 PlaceA A2 2018-04-07 3 week 14 etc
15 PlaceB B1 2018-01-17 4 start NaN
16 PlaceB B1 2018-01-27 4 week 4 mean temp_B between 2018-01-17 and 2018-01-27
17 PlaceB B1 2018-02-17 4 week 7 etc
18 PlaceB B1 2018-03-03 4 week 9 etc
19 PlaceB B1 2018-04-02 4 week 14 etc
20 PlaceB B1 2018-04-25 4 week 17 etc
21 PlaceB B2 2018-03-03 5 start etc
22 PlaceB B2 2018-03-18 5 week 11 etc
23 PlaceB B2 2018-04-08 5 week 14 etc
24 PlaceB B2 2018-04-20 5 week 16 etc
25 PlaceB B2 2019-01-23 6 start etc
26 PlaceB B2 2019-02-09 6 week 6 etc
27 PlaceB B2 2019-02-27 6 week 9 etc
28 PlaceB B2 2019-03-12 6 week 11 etc
29 PlaceB B2 2019-03-30 6 week 13 etc
Any help will be appreciated!
I suggest a detailed step-by-step solution (using the data.table, lubridate and gtools libraries) that tries not to lose the reader. Please find a reprex below.
Reprex
1. DATA PREPARATION
library(data.table)
library(lubridate)
library(gtools)
# Convert the dataframe 'df1' into a data.table and add the helper variable 'StartTime'
setDT(df1)[, StartTime := shift(Time,1), by = .(Place, Group_Id, exprmt)][]
setcolorder(df1, c("Place", "Group_Id", "FollowUp", "exprmt", "StartTime", "Time"))
# Convert the 'StartTime' and 'Time' columns into class 'POSIXct' in ymd_hms format
# with the function 'ymd_TO_ymd_hms'
ymd_TO_ymd_hms <- function(x,y) as_datetime(as.double(as.POSIXct(x)+3600), tz = y)
sel_cols <- c("StartTime", "Time")
df1[, (sel_cols) := lapply(.SD, ymd_TO_ymd_hms, "GMT"), .SDcols = sel_cols][, Time := Time - 3600]
# Here is to what 'df1' looks like:
df1
#> Place Group_Id FollowUp exprmt StartTime Time
#> 1: PlaceA A1 start 1 <NA> 2018-01-14 23:00:00
#> 2: PlaceA A1 week 5 1 2018-01-15 00:00:00 2018-02-02 23:00:00
#> 3: PlaceA A1 week 9 1 2018-02-03 00:00:00 2018-02-26 23:00:00
#> 4: PlaceA A1 week 10 1 2018-02-27 00:00:00 2018-03-09 23:00:00
#> 5: PlaceA A1 week 11 1 2018-03-10 00:00:00 2018-03-17 23:00:00
#> 6: PlaceA A1 start 2 <NA> 2019-02-01 23:00:00
#> 7: PlaceA A1 week 9 2 2019-02-02 00:00:00 2019-02-28 23:00:00
#> 8: PlaceA A1 week 11 2 2019-03-01 00:00:00 2019-03-14 23:00:00
#> 9: PlaceA A1 week 13 2 2019-03-15 00:00:00 2019-03-27 23:00:00
#> 10: PlaceA A1 ...
# Convert the dataframes 'data_A' and 'data_B' into data.tables
setDT(data_A)
setDT(data_B)
2. EXPAND ROWS OF 'df1' BY DATE RANGE USING 'StartTime' and 'Time'
df1_time_seq <- df1[!is.na(StartTime) # remove rows where StartTime is NA
][, .(Place = Place, Group_Id = Group_Id, FollowUp = FollowUp, exprmt = exprmt,
      Time_seq = seq(from = StartTime, to = Time, by = "hour")),
  by = 1:nrow(df1[!is.na(StartTime)])]
df1_time_seq
#> nrow Place Group_Id FollowUp exprmt Time_seq
#> 1: 1 PlaceA A1 week 5 1 2018-01-15 00:00:00
#> 2: 1 PlaceA A1 week 5 1 2018-01-15 01:00:00
#> 3: 1 PlaceA A1 week 5 1 2018-01-15 02:00:00
#> 4: 1 PlaceA A1 week 5 1 2018-01-15 03:00:00
#> 5: 1 PlaceA A1 week 5 1 2018-01-15 04:00:00
#> ---
#> 9784: 23 PlaceB B2 week 13 6 2019-03-29 19:00:00
#> 9785: 23 PlaceB B2 week 13 6 2019-03-29 20:00:00
#> 9786: 23 PlaceB B2 week 13 6 2019-03-29 21:00:00
#> 9787: 23 PlaceB B2 week 13 6 2019-03-29 22:00:00
#> 9788: 23 PlaceB B2 week 13 6 2019-03-29 23:00:00
3. JOINS
# Merge 'data_A' and 'data_B' on 't'
data_merge <- merge(data_A, data_B, by = 't')
# Merge 'df1_time_seq' and 'data_merge' on 'Time_seq' = 't' and add a column 'temp' filled with 'temp_A' values when 'Place == PlaceA' and 'temp_B' values when 'Place == PlaceB'
df1_time_seq_merge <- merge(df1_time_seq, data_merge, by.x = "Time_seq", by.y = "t")[, temp := fcase(Place == "PlaceA", temp_A,
Place == "PlaceB", temp_B)
][, `:=` (temp_A = NULL, temp_B = NULL)
][]
df1_time_seq_merge
#> Time_seq nrow Place Group_Id FollowUp exprmt temp
#> 1: 2018-01-15 00:00:00 1 PlaceA A1 week 5 1 10.618465
#> 2: 2018-01-15 01:00:00 1 PlaceA A1 week 5 1 16.156850
#> 3: 2018-01-15 02:00:00 1 PlaceA A1 week 5 1 6.806842
#> 4: 2018-01-15 03:00:00 1 PlaceA A1 week 5 1 21.036855
#> 5: 2018-01-15 04:00:00 1 PlaceA A1 week 5 1 21.578569
#> ---
#> 9784: 2019-04-11 18:00:00 9 PlaceA A1 week 15 2 16.646570
#> 9785: 2019-04-11 19:00:00 9 PlaceA A1 week 15 2 12.362436
#> 9786: 2019-04-11 20:00:00 9 PlaceA A1 week 15 2 24.853746
#> 9787: 2019-04-11 21:00:00 9 PlaceA A1 week 15 2 22.553074
#> 9788: 2019-04-11 22:00:00 9 PlaceA A1 week 15 2 21.020600
4. SUMMARIZE 'df1_time_seq_merge'
# Summarize df1_time_seq_merge to get the mean of 'temp' by group in the 'expected' variable
df1_mean <- df1_time_seq_merge[, .(expected = mean(temp)), by = .(Place, Group_Id, exprmt, FollowUp)]
df1_mean
#> Place Group_Id exprmt FollowUp expected
#> 1: PlaceA A1 1 week 5 15.17243
#> 2: PlaceB B1 4 week 4 19.26662
#> 3: PlaceB B1 4 week 7 17.32940
#> 4: PlaceA A2 3 week 9 14.92409
#> 5: PlaceA A1 1 week 9 14.86734
#> 6: PlaceB B1 4 week 9 18.36255
#> 7: PlaceA A1 1 week 10 14.75482
#> 8: PlaceA A2 3 week 14 14.86063
#> 9: PlaceB B1 4 week 14 17.35101
#> 10: PlaceB B2 5 week 11 17.93565
#> 11: PlaceA A1 1 week 11 14.86273
#> 12: PlaceB B2 5 week 14 16.77532
#> 13: PlaceB B1 4 week 17 18.00866
#> 14: PlaceB B2 5 week 16 18.15545
#> 15: PlaceB B2 6 week 6 17.95428
#> 16: PlaceA A1 2 week 9 14.96347
#> 17: PlaceB B2 6 week 9 16.85704
#> 18: PlaceB B2 6 week 11 17.23744
#> 19: PlaceA A1 2 week 11 15.22046
#> 20: PlaceB B2 6 week 13 17.33922
#> 21: PlaceA A1 2 week 13 14.58677
#> 22: PlaceA A1 2 week 14 15.24341
#> 23: PlaceA A1 2 week 15 15.87080
#> Place Group_Id exprmt FollowUp expected
5. FINAL JOIN BETWEEN 'df1' AND 'df1_MEAN'
DF_Results <- merge(df1, df1_mean, by = c("Place", "Group_Id", "exprmt", "FollowUp"), all.x = TRUE)[, Time := Time + 3600][]
6. CLEANING 'DF_Results' TO GET THE DESIRED OUTPUT
ymd_hms_TO_ymd <- function(x) as_date(as.POSIXct(x))
DF_Results[, `:=` (StartTime = NULL, Time = lapply(Time, ymd_hms_TO_ymd))]
setcolorder(DF_Results, c("Place", "Group_Id", "exprmt", "Time", "FollowUp", "expected"))
DF_Results <- DF_Results[gtools::mixedorder(FollowUp, decreasing = FALSE)]
setorder(DF_Results, Place, Group_Id, exprmt)
DF_Results
#> Place Group_Id exprmt Time FollowUp expected
#> 1: PlaceA A1 1 2018-01-15 start NA
#> 2: PlaceA A1 1 2018-02-03 week 5 15.17243
#> 3: PlaceA A1 1 2018-02-27 week 9 14.86734
#> 4: PlaceA A1 1 2018-03-10 week 10 14.75482
#> 5: PlaceA A1 1 2018-03-18 week 11 14.86273
#> 6: PlaceA A1 2 2019-02-02 start NA
#> 7: PlaceA A1 2 2019-03-01 week 9 14.96347
#> 8: PlaceA A1 2 2019-03-15 week 11 15.22046
#> 9: PlaceA A1 2 2019-03-28 week 13 14.58677
#> 10: PlaceA A1 2 2019-04-04 week 14 15.24341
#> 11: PlaceA A1 2 2019-04-11 week 15 15.87080
#> 12: PlaceA A2 3 2018-02-01 start NA
#> 13: PlaceA A2 3 2018-03-01 week 9 14.92409
#> 14: PlaceA A2 3 2018-04-06 week 14 14.86063
#> 15: PlaceB B1 4 2018-01-17 start NA
#> 16: PlaceB B1 4 2018-01-27 week 4 19.26662
#> 17: PlaceB B1 4 2018-02-17 week 7 17.32940
#> 18: PlaceB B1 4 2018-03-03 week 9 18.36255
#> 19: PlaceB B1 4 2018-04-01 week 14 17.35101
#> 20: PlaceB B1 4 2018-04-24 week 17 18.00866
#> 21: PlaceB B2 5 2018-03-03 start NA
#> 22: PlaceB B2 5 2018-03-18 week 11 17.93565
#> 23: PlaceB B2 5 2018-04-07 week 14 16.77532
#> 24: PlaceB B2 5 2018-04-19 week 16 18.15545
#> 25: PlaceB B2 6 2019-01-23 start NA
#> 26: PlaceB B2 6 2019-02-09 week 6 17.95428
#> 27: PlaceB B2 6 2019-02-27 week 9 16.85704
#> 28: PlaceB B2 6 2019-03-12 week 11 17.23744
#> 29: PlaceB B2 6 2019-03-30 week 13 17.33922
#> Place Group_Id exprmt Time FollowUp expected
Created on 2021-11-24 by the reprex package (v2.0.1)
Sharing the results with temperature data for 2 places. You can generalize this either by joining everything into a single data object (if the total number of places is small) or by using an ifelse statement.
library(data.table)
setDT(df1)
setDT(data_A) # converting to data.table
setDT(data_B) # converting to data.table
Merge the temperature data to get a single data object:
data_AB <- merge(data_A, data_B, by = 't')
Create a lag column of the Time variable, by Place, Group_Id and exprmt:
df1[,':='(LAG_DATE = shift(Time, type = 'lag')), by = .(Place, Group_Id, exprmt)]
Use apply with a user-defined function to subset the temperature data based on consecutive time periods, and data.table functionality along with lapply to get the mean for those subsets.
Here I have assumed the Place column can somehow be joined/mapped on some condition with the temperature data.
As in the example shared, temp_A/temp_B can be formed by concatenating 'temp_' and the 6th character of the Place column:
df1[,':='(EXPECTED = apply(cbind(LAG_DATE, Time, Place), 1, function(x) {
x1 <- as.Date(as.numeric(x[1]), origin = '1970-01-01')
x2 <- as.Date(as.numeric(x[2]), origin = '1970-01-01')
Place <- as.character(x[3])
Mean_Value <- ifelse(is.na(x1), NaN, data_AB[as.Date(t) >= x1 &
as.Date(t) <= x2, lapply(.SD, mean), .SDcols = paste('temp_', substr(Place, 6,
6), sep = '')])
return(as.numeric(Mean_Value))
}
))]

Combine data by several conditions in R

I want to merge two data frames according to two conditions:
by the same ID (only IDs in the first data frame are retained)
if date_mid (from dat2) is between date_begin and date_end (both from dat1), keep the result (from dat2); if not, mark it as "NA"
Also, I want to drop a row if the ID in the combined data already has a result (either healthy or sick). In the example below I want to drop the 3rd and 12th rows.
First data (dat1):
dat1 <- tibble(ID = c(paste0(rep("A"), 1:10), "A2", "A10"),
date_begin = seq(as.Date("2020/1/1"), by = "month", length.out = 12),
date_end = date_begin + 365)
dat1
# A tibble: 12 x 3
ID date_begin date_end
<chr> <date> <date>
1 A1 2020-01-01 2020-12-31
2 A2 2020-02-01 2021-01-31
3 A3 2020-03-01 2021-03-01
4 A4 2020-04-01 2021-04-01
5 A5 2020-05-01 2021-05-01
6 A6 2020-06-01 2021-06-01
7 A7 2020-07-01 2021-07-01
8 A8 2020-08-01 2021-08-01
9 A9 2020-09-01 2021-09-01
10 A10 2020-10-01 2021-10-01
11 A2 2020-11-01 2021-11-01
12 A10 2020-12-01 2021-12-01
Second data (dat2):
dat2 <- tibble(ID = c(paste0(rep("A"), 1:4), paste0(rep("A"), 9:15), "A2"),
date_mid = seq(as.Date("2020/1/1"), by = "month", length.out = 12) + 100,
result = rep(c("healthy", "sick"), length = 12))
dat2
# A tibble: 12 x 3
ID date_mid result
<chr> <date> <chr>
1 A1 2020-04-10 healthy
2 A2 2020-05-11 sick
3 A3 2020-06-09 healthy
4 A4 2020-07-10 sick
5 A9 2020-08-09 healthy
6 A10 2020-09-09 sick
7 A11 2020-10-09 healthy
8 A12 2020-11-09 sick
9 A13 2020-12-10 healthy
10 A14 2021-01-09 sick
11 A15 2021-02-09 healthy
12 A2 2021-03-11 sick
I have tried left_join as below:
left_join(dat1, dat2, by = "ID") %>%
mutate(result = ifelse(date_mid %within% interval(date_begin, date_end), result, NA))
# A tibble: 14 x 5
ID date_begin date_end date_mid result
<chr> <date> <date> <date> <chr>
1 A1 2020-01-01 2020-12-31 2020-04-10 healthy
2 A2 2020-02-01 2021-01-31 2020-05-11 sick
3 A2 2020-02-01 2021-01-31 2021-03-11 NA
4 A3 2020-03-01 2021-03-01 2020-06-09 healthy
5 A4 2020-04-01 2021-04-01 2020-07-10 sick
6 A5 2020-05-01 2021-05-01 NA NA
7 A6 2020-06-01 2021-06-01 NA NA
8 A7 2020-07-01 2021-07-01 NA NA
9 A8 2020-08-01 2021-08-01 NA NA
10 A9 2020-09-01 2021-09-01 2020-08-09 NA
11 A10 2020-10-01 2021-10-01 2020-09-09 NA
12 A2 2020-11-01 2021-11-01 2020-05-11 NA
13 A2 2020-11-01 2021-11-01 2021-03-11 sick
14 A10 2020-12-01 2021-12-01 2020-09-09 NA
As I mentioned, I want to drop the 3rd and 12th rows of ID A2, since A2 already has a result of either healthy or sick in the 2nd and 13th rows.
The exact result that I want is something like this (only 2 rows of A2):
# A tibble: 12 x 5
ID date_begin date_end date_mid result
<chr> <date> <date> <date> <chr>
1 A1 2020-01-01 2020-12-31 2020-04-10 healthy
2 A2 2020-02-01 2021-01-31 2020-05-11 sick
3 A3 2020-03-01 2021-03-01 2020-06-09 healthy
4 A4 2020-04-01 2021-04-01 2020-07-10 sick
5 A5 2020-05-01 2021-05-01 NA NA
6 A6 2020-06-01 2021-06-01 NA NA
7 A7 2020-07-01 2021-07-01 NA NA
8 A8 2020-08-01 2021-08-01 NA NA
9 A9 2020-09-01 2021-09-01 2020-08-09 NA
10 A10 2020-10-01 2021-10-01 2020-09-09 NA
11 A2 2020-11-01 2021-11-01 2021-03-11 sick
12 A10 2020-12-01 2021-12-01 2020-09-09 NA
Any pointer is appreciated, thanks.
If there is more than one row for an ID in the result after joining, keep only the non-NA rows. This can be written in dplyr as:
library(dplyr)
library(lubridate)
left_join(dat1, dat2, by = "ID") %>%
mutate(result = ifelse(date_mid %within% interval(date_begin, date_end), result, NA)) %>%
group_by(ID, date_begin, date_end) %>%
filter(if(n() > 1) !is.na(result) else TRUE) %>%
ungroup
# ID date_begin date_end date_mid result
# <chr> <date> <date> <date> <chr>
# 1 A1 2020-01-01 2020-12-31 2020-04-10 healthy
# 2 A2 2020-02-01 2021-01-31 2020-05-11 sick
# 3 A3 2020-03-01 2021-03-01 2020-06-09 healthy
# 4 A4 2020-04-01 2021-04-01 2020-07-10 sick
# 5 A5 2020-05-01 2021-05-01 NA NA
# 6 A6 2020-06-01 2021-06-01 NA NA
# 7 A7 2020-07-01 2021-07-01 NA NA
# 8 A8 2020-08-01 2021-08-01 NA NA
# 9 A9 2020-09-01 2021-09-01 2020-08-09 NA
#10 A10 2020-10-01 2021-10-01 2020-09-09 NA
#11 A2 2020-11-01 2021-11-01 2021-03-11 sick
#12 A10 2020-12-01 2021-12-01 2020-09-09 NA

How to create another table in R to calculate the difference?

I have a data frame as below:
ID      Parameter value
123-01  a1        x
123-02  a1        x
123-01  b3        x
123-02  b3        x
124-01  a1        x
125-01  a1        x
126-01  a1        x
124-01  b3        x
125-01  b3        x
126-01  b3        x
I would like to find the sample IDs that end with "-02", and calculate the difference between values that share the same first three digits of the ID, by parameter.
For example, calculate the difference between 123-01 and 123-02 for parameter a1, then the difference between 123-01 and 123-02 for parameter b3, and so on.
In the end, I want a table that contains:
ID   Parameter DiffValue
123  a1        y
123  b3        y
127  a1        y
127  b3        y
How can I do it?
I tried to use dplyr's filter to create a table that only contains the duplicates, but then how do I match it back to the original table and do the calculation?
Try to do it this way:
library(tidyverse)
df <- read.table(text = "ID Parameter value
123-01 a1 10
123-02 a1 10
123-01 b3 10
123-02 b3 10
124-01 a1 10
125-01 a1 10
126-01 a1 10
124-01 b3 10
125-01 b3 10
126-01 b3 10", header = T)
df %>%
arrange(Parameter, ID) %>%
separate(ID, into = c("id_grp", "n"), sep = "-", remove = F) %>%
group_by(Parameter, id_grp) %>%
mutate(diff_value = c(NA, diff(value))) %>%
select(-c(id_grp, n))
#> Adding missing grouping variables: `id_grp`
#> # A tibble: 10 x 5
#> # Groups: Parameter, id_grp [8]
#> id_grp ID Parameter value diff_value
#> <chr> <chr> <chr> <int> <int>
#> 1 123 123-01 a1 10 NA
#> 2 123 123-02 a1 10 0
#> 3 124 124-01 a1 10 NA
#> 4 125 125-01 a1 10 NA
#> 5 126 126-01 a1 10 NA
#> 6 123 123-01 b3 10 NA
#> 7 123 123-02 b3 10 0
#> 8 124 124-01 b3 10 NA
#> 9 125 125-01 b3 10 NA
#> 10 126 126-01 b3 10 NA
Created on 2021-01-26 by the reprex package (v0.3.0)
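If you want exactly the summary table from the question, one row per ID prefix that actually has a "-02" sample, the same idea can be extended with a filter; a sketch (assuming the "-02" row always sorts after the "-01" row within a group):

```r
library(tidyverse)

df <- read.table(text = "ID Parameter value
123-01 a1 10
123-02 a1 10
123-01 b3 10
123-02 b3 10
124-01 a1 10", header = TRUE)

df %>%
  separate(ID, into = c("id_grp", "n"), sep = "-", remove = FALSE) %>%
  group_by(Parameter, id_grp) %>%
  arrange(n, .by_group = TRUE) %>%       # ensure -01 comes before -02
  mutate(DiffValue = c(NA, diff(value))) %>%
  filter(!is.na(DiffValue)) %>%          # drop groups with no second sample
  ungroup() %>%
  select(ID = id_grp, Parameter, DiffValue)
```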

get previous value to the current value

How can I get the previous value of each group into a new column C? The starting value for each group will be empty, as it has no previous value in its group.
Can dplyr perform this?
Code:
df <- data.frame(A = c('a1','a1','b1','b1','b1','c2','d2','d2'),
B = c("2017-02-20","2018-02-14","2017-02-06","2017-02-27","2017-02-29","2017-02-28","2017-02-09","2017-02-10"))
Dataframe:
A B
a1 2017-02-20
a1 2018-02-14
b1 2017-02-06
b1 2017-02-27
b1 2017-02-29
c2 2017-02-28
d2 2017-02-09
d2 2017-02-10
Expected Output
A B C
a1 2017-02-20
a1 2018-02-14 2017-02-20
b1 2017-02-06
b1 2017-02-27 2017-02-06
b1 2017-02-29 2017-02-27
c2 2017-02-28
d2 2017-02-09
d2 2017-02-10 2017-02-09
You could use the lag function from dplyr:
df <- data.frame(A = c('a1','a1','b1','b1','b1','c2','d2','d2'),
B = c("2017-02-20","2018-02-14","2017-02-06",
"2017-02-27","2017-02-29","2017-02-28",
"2017-02-09","2017-02-10"))
library(dplyr)
df %>%
group_by(A) %>%
mutate(C = lag(B, 1, default = NA))
This will apply the lag function within each group of "A".
Output:
# A tibble: 8 x 3
# Groups: A [4]
A B C
<fct> <fct> <fct>
1 a1 2017-02-20 NA
2 a1 2018-02-14 2017-02-20
3 b1 2017-02-06 NA
4 b1 2017-02-27 2017-02-06
5 b1 2017-02-29 2017-02-27
6 c2 2017-02-28 NA
7 d2 2017-02-09 NA
8 d2 2017-02-10 2017-02-09
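For reference, the same per-group lag can be done in base R with ave; a minimal sketch with no dplyr dependency:

```r
df <- data.frame(A = c('a1','a1','b1','b1','b1','c2','d2','d2'),
                 B = c("2017-02-20","2018-02-14","2017-02-06","2017-02-27",
                       "2017-02-29","2017-02-28","2017-02-09","2017-02-10"),
                 stringsAsFactors = FALSE)

# ave() applies the function within each level of A;
# c(NA, head(x, -1)) shifts the values down by one position
df$C <- ave(df$B, df$A, FUN = function(x) c(NA, head(x, -1)))
df
```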

how to perform merge or join operation in R with two different dataframe size

I have two data frames A and B of different sizes, and I am trying to left-join or merge them based on certain conditions. Can anyone help me with how to join two tables in R? I am using a1, a2 and b1, b2 to join the two data frames.
df A
a1 a2 a3 a4
1 1 2017-04-25 2017-05-24
1 1 2017-05-25 2017-06-24
2 3 2017-04-25 2017-05-24
3 4 2017-04-25 2017-05-24
4 5 2017-04-25 2017-05-24
4 5 2017-05-25 2017-06-24
4 7 2017-04-25 2017-05-24
5 8 2017-04-25 2017-05-24
5 8 2017-05-25 2017-06-24
df B
b1 b2 b3 b4 b5
1 1 2017-04-20 2017-05-02 M
2 3 2017-03-27 2017-05-19 A
3 4 2017-04-20 2017-05-22 B
4 5 2017-04-21 2017-05-12 N
4 7 2017-05-02 2017-05-09 L
5 8 2017-05-15 2017-05-04 U
Dimension of the first dataframe
> dim(A)
[1] 506335 5
dimensions of the second data frame
> dim(B)
[1] 716776 6
tried below left join in R
left_join(A, B, a1=b1, a2 = b2, a3 > b3 , a4 < b4)
Error:
Error in common_by(by, x, y) : object 'b3' not found
Tried merge operation operation but getting below error
merge(A,B,by=c("a1","a2", "a3 > b3" , "a4 < b4"))
Error:
Error in ungroup_grouped_df(x) :
object 'dplyr_ungroup_grouped_df' not found
From what I gather you are trying to:
1- Merge the DF by their first two columns
2- Filter the DF where these conditions are met: a3 > b3, a4 < b4
require(dplyr)
DF <- left_join(A,B, a1=b1, a2=b2) %>% filter(a3 > b3 , a4 < b4)
As Andrew Gustar has commented, you are trying to merge and filter at the same time. Instead, do the merge first, then the filter. It also looks like you're working with dates, so they need to be formatted correctly.
The code below can all be carried out in one chain, but I've broken it down to make it easier to understand.
For example, using the tidyverse dplyr and lubridate packages:
library(dplyr)
library(lubridate)
# load in your data
textA <- "a1 a2 a3 a4
1 1 2017-04-25 2017-05-24
1 1 2017-05-25 2017-06-24
2 3 2017-04-25 2017-05-24
3 4 2017-04-25 2017-05-24
4 5 2017-04-25 2017-05-24
4 5 2017-05-25 2017-06-24
4 7 2017-04-25 2017-05-24
5 8 2017-04-25 2017-05-24
5 8 2017-05-25 2017-06-24"
textB <- "b1 b2 b3 b4 b5
1 1 2017-04-20 2017-05-02 M
2 3 2017-03-27 2017-05-19 A
3 4 2017-04-20 2017-05-22 B
4 5 2017-04-21 2017-05-12 N
4 7 2017-05-02 2017-05-09 L
5 8 2017-05-15 2017-05-04 U"
# make dataframes
dfA <- read.table(text = textA, header = T)
dfB <- read.table(text = textB , header = T)
# now do the merging - when merging on more than one column, combine them using c
dfout <- left_join(x = dfA, y = dfB, by = c("a1" = "b1", "a2" = "b2"))
# now switch your a3, a4, b3, and b4 columns to dates format using the ymd function
dfout <- dfout %>% mutate_at(vars(a3:b4), ymd)
# finally the filtering
dfout <- dfout %>% filter(a3 > b3)
This returns:
a1 a2 a3 a4 b3 b4 b5
1 1 1 2017-04-25 2017-05-24 2017-04-20 2017-05-02 M
2 1 1 2017-05-25 2017-06-24 2017-04-20 2017-05-02 M
3 2 3 2017-04-25 2017-05-24 2017-03-27 2017-05-19 A
4 3 4 2017-04-25 2017-05-24 2017-04-20 2017-05-22 B
5 4 5 2017-04-25 2017-05-24 2017-04-21 2017-05-12 N
6 4 5 2017-05-25 2017-06-24 2017-04-21 2017-05-12 N
7 5 8 2017-05-25 2017-06-24 2017-05-15 2017-05-04 U
Note that filtering again (using code below) on a4 < b4 returns a dataframe with 0 rows.
dfout %>% mutate_at(vars(a3:b4), ymd) %>% filter(a3 > b3) %>% filter(a4 < b4)
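As a side note, newer versions of dplyr (1.1.0 and later) let you express the inequality conditions directly in the join via join_by, avoiding the separate filter step; a sketch using a subset of the sample data:

```r
library(dplyr)
library(lubridate)

dfA <- read.table(text = "a1 a2 a3 a4
1 1 2017-04-25 2017-05-24
2 3 2017-04-25 2017-05-24", header = TRUE)
dfB <- read.table(text = "b1 b2 b3 b4 b5
1 1 2017-04-20 2017-05-02 M
2 3 2017-03-27 2017-05-19 A", header = TRUE)

# convert the date columns first, as in the answer above
dfA <- dfA %>% mutate(across(a3:a4, ymd))
dfB <- dfB %>% mutate(across(b3:b4, ymd))

# equality and inequality conditions in a single join (requires dplyr >= 1.1.0)
dfout2 <- left_join(dfA, dfB,
                    by = join_by(a1 == b1, a2 == b2, a3 > b3, a4 < b4))
```

As with the merge-then-filter approach, the a4 < b4 condition matches no rows in this sample data, so the unmatched left-hand rows come back with NA in the b columns.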
