R: Count by id, number of occurrences in a predefined time interval

I want to compute a column that counts the number of occurrences for a particular ID, looking backward over a predefined time interval (e.g. 2 days).
I have the following data structure (see code below) in R and want to compute the column countLast2d automatically:
userID <- c(1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3)
datetime <-c("2015-07-02 13:20:00", "2015-07-03 13:20:00", "2015-07-04 01:20:00",
"2015-07-10 01:20:00", "2015-07-23 01:20:00", "2015-07-23 06:08:00", "2015-07-24 06:08:00",
"2015-09-02 09:01:00", "2015-08-19 11:41:00", "2015-08-19 14:38:00", "2015-08-19 17:36:00",
"2015-08-19 20:33:00", "2015-08-19 23:30:00", "2015-08-19 23:46:00", "2015-08-19 05:19:00",
"2015-09-13 17:02:00", "2015-10-01 00:32:00", "2015-10-01 00:50:00")
The outcome should take on these values:
countLast2d <- c(0,1,2,0,0,1,2,0,0,1,0,0,0,1,0,0,0,1)
df <- data.frame(userID, countLast2d, datetime)
df$datetime = as.POSIXct(strptime(df$datetime, format = "%Y-%m-%d %H:%M:%S"))
In Excel, I would use the following formula:
=COUNTIFS([datecolumn],"<"&[date cell in that row],[datecolumn],">="&[date cell in that row]-2,[idcolumn],[id cell in that row])
(So for example [C2]=+COUNTIFS($B:$B,"<"&$B2,$B:$B,">="&$B2-2,$A:$A,$A2), where column A contains the id and column B the date.)
I already asked this question once before (https://stackoverflow.com/questions/30998596/r-count-number-of-occurences-by-id-in-the-last-48h) but didn't include an example, so sorry for asking again.

Here's a solution:
df <- data.frame(
  userID = c(1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3),
  datetime = as.POSIXct(c(
    '2015-07-02 13:20:00','2015-07-03 13:20:00','2015-07-04 01:20:00',
    '2015-07-10 01:20:00','2015-07-23 01:20:00','2015-07-23 06:08:00',
    '2015-07-24 06:08:00','2015-09-02 09:01:00','2015-08-19 11:41:00',
    '2015-08-19 14:38:00','2015-08-19 17:36:00','2015-08-19 20:33:00',
    '2015-08-19 23:30:00','2015-08-19 23:46:00','2015-08-19 05:19:00',
    '2015-09-13 17:02:00','2015-10-01 00:32:00','2015-10-01 00:50:00'
  ))
)
window <- as.difftime(2, units = 'days')
df$countLast2d <- sapply(1:nrow(df), function(r)
  sum(df$userID == df$userID[r] &
      df$datetime < df$datetime[r] &
      df$datetime >= df$datetime[r] - window))
df
## userID datetime countLast2d
## 1 1 2015-07-02 13:20:00 0
## 2 1 2015-07-03 13:20:00 1
## 3 1 2015-07-04 01:20:00 2
## 4 1 2015-07-10 01:20:00 0
## 5 1 2015-07-23 01:20:00 0
## 6 1 2015-07-23 06:08:00 1
## 7 1 2015-07-24 06:08:00 2
## 8 1 2015-09-02 09:01:00 0
## 9 2 2015-08-19 11:41:00 1
## 10 2 2015-08-19 14:38:00 2
## 11 2 2015-08-19 17:36:00 3
## 12 2 2015-08-19 20:33:00 4
## 13 2 2015-08-19 23:30:00 5
## 14 2 2015-08-19 23:46:00 6
## 15 2 2015-08-19 05:19:00 0
## 16 3 2015-09-13 17:02:00 0
## 17 3 2015-10-01 00:32:00 0
## 18 3 2015-10-01 00:50:00 1
Note that this differs from your expected output because your expected output is incorrect for userID==2.
This solution will work regardless of the ordering of df, which is essential for your example df because it is unordered (or at least not perfectly ordered) for userID==2.
Edit: Here's a possibility, using by() to group by userID and comparing each element only against lower-index elements, under the assumption that only those elements can be in the lookback window:
df2 <- df[order(df$userID, df$datetime),]
df2$countLast2d <- do.call(c, by(df2$datetime, df2$userID, function(x)
  c(0, sapply(2:length(x), function(i) sum(x[1:(i-1)] >= x[i] - window)))))
df2
## userID datetime countLast2d
## 1 1 2015-07-02 13:20:00 0
## 2 1 2015-07-03 13:20:00 1
## 3 1 2015-07-04 01:20:00 2
## 4 1 2015-07-10 01:20:00 0
## 5 1 2015-07-23 01:20:00 0
## 6 1 2015-07-23 06:08:00 1
## 7 1 2015-07-24 06:08:00 2
## 8 1 2015-09-02 09:01:00 0
## 15 2 2015-08-19 05:19:00 0
## 9 2 2015-08-19 11:41:00 1
## 10 2 2015-08-19 14:38:00 2
## 11 2 2015-08-19 17:36:00 3
## 12 2 2015-08-19 20:33:00 4
## 13 2 2015-08-19 23:30:00 5
## 14 2 2015-08-19 23:46:00 6
## 16 3 2015-09-13 17:02:00 0
## 17 3 2015-10-01 00:32:00 0
## 18 3 2015-10-01 00:50:00 1
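For larger data, a vectorised variant of the same grouped idea (my own sketch, not part of the original answer) replaces the inner sapply() with findInterval(); it assumes df2 and window are defined as above and that each userID group is sorted by datetime:
# For row i of a sorted group x, the count is (i - 1) minus the number of
# timestamps strictly before x[i] - window, which findInterval() returns
# with left.open = TRUE.
df2$countLast2d <- do.call(c, by(df2$datetime, df2$userID, function(x)
  seq_along(x) - 1L - findInterval(x - window, x, left.open = TRUE)))
This reproduces the output above for this data; with exact timestamp ties it behaves like the by() version (earlier-index ties are counted), whereas the first solution's strict datetime comparison excludes them.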

Related

Find un-arrangeable consecutive time intervals with exactly n days difference

I have data as follows and need to group rows based on dates such that time_right + 1 = time_left (in another row). The group id is equal to the minimum id of the records that satisfy this condition.
input = data.frame(id = c(1:6),
                   time_left  = c("2016-01-01", "2016-09-05", "2016-09-06",
                                  "2016-09-08", "2016-09-12", "2016-09-15"),
                   time_right = c("2016-09-07", "2016-09-11", "2016-09-12",
                                  "2016-09-14", "2016-09-18", "2016-09-21"))
Input
id time_left time_right
1 1 2016-01-01 2016-09-07
2 2 2016-09-05 2016-09-11
3 3 2016-09-06 2016-09-12
4 4 2016-09-08 2016-09-14
5 5 2016-09-12 2016-09-18
6 6 2016-09-15 2016-09-21
Output:
id time_left time_right group_id
1 1 2016-01-01 2016-09-07 1
2 2 2016-09-05 2016-09-11 2
3 3 2016-09-06 2016-09-12 3
4 4 2016-09-08 2016-09-14 1
5 5 2016-09-12 2016-09-18 2
6 6 2016-09-15 2016-09-21 1
Is there any way to do this with dplyr?
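No answer is included here, but one possible sketch (my own, mixing base R iteration with dplyr, and using only the chaining rule described above) is:
library(dplyr)
df <- input %>%
  mutate(time_left  = as.Date(time_left),
         time_right = as.Date(time_right),
         group_id   = id)
# Iterate until the ids stabilise: a row whose time_left is exactly one day
# after another row's time_right inherits that row's (smaller) group_id.
repeat {
  old <- df$group_id
  for (i in seq_len(nrow(df))) {
    parent <- which(df$time_right + 1 == df$time_left[i])
    if (length(parent) > 0)
      df$group_id[i] <- min(df$group_id[i], df$group_id[parent])
  }
  if (identical(old, df$group_id)) break
}
df
#   id  time_left time_right group_id
# 1  1 2016-01-01 2016-09-07        1
# 2  2 2016-09-05 2016-09-11        2
# 3  3 2016-09-06 2016-09-12        3
# 4  4 2016-09-08 2016-09-14        1
# 5  5 2016-09-12 2016-09-18        2
# 6  6 2016-09-15 2016-09-21        1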

How to print a date when the input is number of days since 01-01-60?

I received a set of dates, but it turns out that time is reported in days since 01-01-1960 in this specific data set.
D_INDDTO
1 20758
2 20856
3 21062
4 19740
5 21222
6 21203
The specific date of interest for Patient 1 is 20758 days since 01-01-60
I want to create a new covariate u$date containing the specific date of interest in %d%m%y format. I tried
library(tidyverse)
u %>% mutate(date=as.date(D_INDDTO,origin="1960-01-01")
But that did not solve it.
u <- structure(list(D_INDDTO = c(20758, 20856, 21062, 19740, 21222,
21203, 20976, 20895, 18656, 18746)), row.names = c(NA, 10L), class = "data.frame")
Try this:
#Code 1
u %>% mutate(date=as.Date("1960-01-01")+D_INDDTO)
Output:
D_INDDTO date
1 20758 2016-10-31
2 20856 2017-02-06
3 21062 2017-08-31
4 19740 2014-01-17
5 21222 2018-02-07
6 21203 2018-01-19
7 20976 2017-06-06
8 20895 2017-03-17
9 18656 2011-01-29
10 18746 2011-04-29
Or this:
#Code 2
u %>% mutate(date=as.Date(D_INDDTO,origin="1960-01-01"))
Output:
D_INDDTO date
1 20758 2016-10-31
2 20856 2017-02-06
3 21062 2017-08-31
4 19740 2014-01-17
5 21222 2018-02-07
6 21203 2018-01-19
7 20976 2017-06-06
8 20895 2017-03-17
9 18656 2011-01-29
10 18746 2011-04-29
Or this:
#Code 3
u %>% mutate(date=format(as.Date(D_INDDTO,origin="1960-01-01"),'%d%m%y'))
Output:
D_INDDTO date
1 20758 311016
2 20856 060217
3 21062 310817
4 19740 170114
5 21222 070218
6 21203 190118
7 20976 060617
8 20895 170317
9 18656 290111
10 18746 290411
If more customization is required:
#Code 4
u %>% mutate(date=format(as.Date(D_INDDTO,origin="1960-01-01"),'%d-%m-%Y'))
Output:
D_INDDTO date
1 20758 31-10-2016
2 20856 06-02-2017
3 21062 31-08-2017
4 19740 17-01-2014
5 21222 07-02-2018
6 21203 19-01-2018
7 20976 06-06-2017
8 20895 17-03-2017
9 18656 29-01-2011
10 18746 29-04-2011
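Equivalently, without dplyr, a plain base R assignment gives the same result as Code 4 (just a usage note, not from the original answer):
u$date <- format(as.Date(u$D_INDDTO, origin = "1960-01-01"), "%d-%m-%Y")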

Min and max value based on another column and combine those in R

So I basically have a while loop function that puts 1's in the "algorithm_column" based on the highest percentages in the "percent" column, until a certain total percentage is reached (90% or so). The remaining rows that are not taken into account get a value of 0 in the "algorithm_column" (Create while loop function that takes next largest value untill condition is met).
I want to show, based on what the loop function found, the min and max times of the "timeinterval" column (the min is where the 1's start and the max is the last row with a 1; the 0's are out of scope), and then finally create a time interval from this.
So for the data below, I want to create another column, let's say "total_time", containing the span from the min time 09:00 (where the 1's start in the algorithm_column) until 11:15, which gives a time interval of 02:15 hours in the "total_time" column.
algorithm
# pc4 timeinterval stops percent idgroup algorithm_column
#1 5464 08:45:00 1 1.3889 1 0
#2 5464 09:00:00 5 6.9444 2 1
#3 5464 09:15:00 8 11.1111 3 1
#4 5464 09:30:00 7 9.7222 4 1
#5 5464 09:45:00 5 6.9444 5 1
#6 5464 10:00:00 10 13.8889 6 1
#7 5464 10:15:00 6 8.3333 7 1
#8 5464 10:30:00 4 5.5556 8 1
#9 5464 10:45:00 7 9.7222 9 1
#10 5464 11:00:00 6 8.3333 10 1
#11 5464 11:15:00 5 6.9444 11 1
#12 5464 11:30:00 8 11.1111 12 0
I have multiple pc4 groups, so it should look at every group and calculate a total_time for each group respectively.
I have this function, but I'm not sure whether it is what I need.
test <- function(x) {
  ind <- x[["algorithm$algorithm_column"]] == 0
  Mx <- max(x[["timeinterval"]][ind], na.rm = TRUE)
  ind <- x[["algorithm$algorithm_column"]] == 1
  Mn <- min(x[["timeinterval"]][ind], na.rm = TRUE)
  list(Mn, Mx) ## or return(list(Mn, Mx))
}
test(algorithm)
Here is a dplyr solution.
library(dplyr)
algorithm %>%
  mutate(tmp = cumsum(c(0, diff(algorithm_column) != 0))) %>%
  filter(algorithm_column == 1) %>%
  group_by(pc4, tmp) %>%
  summarise(first = first(timeinterval),
            last = last(timeinterval)) %>%
  select(-tmp)
## A tibble: 1 x 3
## Groups: pc4 [1]
# pc4 first last
# <int> <fct> <fct>
#1 5464 09:00:00 11:15:00
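Since the original goal was a total_time column, the first/last pair can be turned into a duration. This is my own extension of the answer above (not part of it), assuming algorithm is the data shown in the Data block below and that timeinterval holds clock times as text or factor levels:
library(dplyr)
intervals <- algorithm %>%
  mutate(tmp = cumsum(c(0, diff(algorithm_column) != 0))) %>%
  filter(algorithm_column == 1) %>%
  group_by(pc4, tmp) %>%
  summarise(first = first(timeinterval),
            last  = last(timeinterval)) %>%
  select(-tmp)
# parse the clock times (the date part is irrelevant) and take the
# difference in hours: 09:00:00 to 11:15:00 gives 2.25 hours
intervals %>%
  mutate(total_time = as.numeric(difftime(
    as.POSIXct(as.character(last),  format = "%H:%M:%S"),
    as.POSIXct(as.character(first), format = "%H:%M:%S"),
    units = "hours")))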
Data.
algorithm <- read.table(text = "
pc4 timeinterval stops percent idgroup algorithm_column
1 5464 08:45:00 1 1.3889 1 0
2 5464 09:00:00 5 6.9444 2 1
3 5464 09:15:00 8 11.1111 3 1
4 5464 09:30:00 7 9.7222 4 1
5 5464 09:45:00 5 6.9444 5 1
6 5464 10:00:00 10 13.8889 6 1
7 5464 10:15:00 6 8.3333 7 1
8 5464 10:30:00 4 5.5556 8 1
9 5464 10:45:00 7 9.7222 9 1
10 5464 11:00:00 6 8.3333 10 1
11 5464 11:15:00 5 6.9444 11 1
12 5464 11:30:00 8 11.1111 12 0
", header = TRUE)

How to calculate number of hours from a fixed start point that varies among levels of a variable

The dataframe df1 summarizes detections of different individuals (ID) through time (Datetime). As a short example:
library(lubridate)
df1<- data.frame(ID= c(1,2,1,2,1,2,1,2,1,2),
Datetime= ymd_hms(c("2016-08-21 00:00:00","2016-08-24 08:00:00","2016-08-23 12:00:00","2016-08-29 03:00:00","2016-08-27 23:00:00","2016-09-02 02:00:00","2016-09-01 12:00:00","2016-09-09 04:00:00","2016-09-01 12:00:00","2016-09-10 12:00:00")))
> df1
ID Datetime
1 1 2016-08-21 00:00:00
2 2 2016-08-24 08:00:00
3 1 2016-08-23 12:00:00
4 2 2016-08-29 03:00:00
5 1 2016-08-27 23:00:00
6 2 2016-09-02 02:00:00
7 1 2016-09-01 12:00:00
8 2 2016-09-09 04:00:00
9 1 2016-09-01 12:00:00
10 2 2016-09-10 12:00:00
I want to calculate for each row, the number of hours (Hours_since_begining) since the first time that the individual was detected.
I would expect something like this (it may contain some mistakes since I did the calculations by hand):
> df1
ID Datetime Hours_since_begining
1 1 2016-08-21 00:00:00 0
2 2 2016-08-24 08:00:00 0
3 1 2016-08-23 12:00:00 60 # Number of hours between "2016-08-21 00:00:00" (first time detected the Ind 1) and "2016-08-23 12:00:00"
4 2 2016-08-29 03:00:00 115
5 1 2016-08-27 23:00:00 167 # Number of hours between "2016-08-21 00:00:00" (first time detected the Ind 1) and "2016-08-27 23:00:00"
6 2 2016-09-02 02:00:00 210
7 1 2016-09-01 12:00:00 276
8 2 2016-09-09 04:00:00 380
9 1 2016-09-01 12:00:00 276
10 2 2016-09-10 12:00:00 412
Does anyone know how to do it?
Thanks in advance!
You can do this:
library(tidyverse)
# first get the min datetime by ID
min_datetime_id <- df1 %>%
  group_by(ID) %>%
  summarise(min_datetime = min(Datetime))
# join with df1 and compute the time difference
df1 <- df1 %>%
  left_join(min_datetime_id, by = "ID") %>%
  mutate(Hours_since_beginning = as.numeric(difftime(Datetime, min_datetime, units = "hours")))
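An equivalent that skips the intermediate join (my own sketch, not part of the original answer) computes the offset from each ID's first detection directly inside the group:
library(dplyr)
df1 <- df1 %>%
  group_by(ID) %>%
  mutate(Hours_since_beginning =
           as.numeric(difftime(Datetime, min(Datetime), units = "hours"))) %>%
  ungroup()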

Summations by conditions on another row dealing with time

I am looking to run a cumulative sum at every row for values that occur in two columns before and after that point. In this case I have the volume of 2 incident types at every given minute over two days. I want to create a column which, for each row, adds all the incidents that occurred before and after, by type. SUMIF from Excel comes to mind, but I'm not sure how to port that over to R:
EDIT: ADDED set.seed and easier numbers
I have the following data set:
set.seed(42)
library(data.table)
master_min =
  setDT(
    data.frame(master_min = seq(
      from = as.POSIXct("2016-1-1 0:00", tz = "America/New_York"),
      to   = as.POSIXct("2016-1-2 23:00", tz = "America/New_York"),
      by   = "min"
    ))
  )
incident1 = round(runif(2821, min = 0, max = 10))
incident2 = round(runif(2821, min = 0, max = 10))
master_min = head(cbind(master_min, incident1, incident2), 5)
How do I essentially compute the following logic:
for each row, sum all the incident1s that occurred before that row's timestamp and all the incident2s that occurred after that row's timestamp? It would be great to get a data.table solution, if not a dplyr one, as I am working with a large dataset. Below is a before and after for the data:
BEFORE:
master_min incident1 incident2
1: 2016-01-01 00:00:00 9 6
2: 2016-01-01 00:01:00 9 5
3: 2016-01-01 00:02:00 3 5
4: 2016-01-01 00:03:00 8 6
5: 2016-01-01 00:04:00 6 9
AFTER THE CALCULATION:
master_min incident1 incident2 new_column
1: 2016-01-01 00:00:00 9 6 25
2: 2016-01-01 00:01:00 9 5 29
3: 2016-01-01 00:02:00 3 5 33
4: 2016-01-01 00:03:00 8 6 30
5: 2016-01-01 00:04:00 6 9 29
If I understand correctly:
# Cumsum of incident1, without current row:
master_min$sum1 <- cumsum(master_min$incident1) - master_min$incident1
# Reverse cumsum of incident2, without current row:
master_min$sum2 <- rev(cumsum(rev(master_min$incident2))) - master_min$incident2
# Your new column:
master_min$new_column <- master_min$sum1 + master_min$sum2
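A data.table phrasing of the same idea (my own sketch, added because the question asked for a data.table-friendly solution; it modifies whatever master_min currently holds by reference):
library(data.table)
# incident1 strictly before each row plus incident2 strictly after it
master_min[, new_column := (cumsum(incident1) - incident1) +
                            (rev(cumsum(rev(incident2))) - incident2)]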
Update
The following two lines can do the job:
master_min$sum1 <- cumsum(master_min$incident1)
master_min$sum2 <- sum(master_min$incident2) - cumsum(master_min$incident2)
I rewrote the example a bit to show a more comprehensive structure:
library(data.table)
master_min <-
  setDT(
    data.frame(master_min = seq(
      from = as.POSIXct("2016-1-1 0:00", tz = "America/New_York"),
      to   = as.POSIXct("2016-1-1 0:09", tz = "America/New_York"),
      by   = "min"
    ))
  )
set.seed(2)
incident1 = as.integer(runif(10, min = 0, max = 10))
incident2 = as.integer(runif(10, min = 0, max = 10))
master_min = cbind(master_min, incident1, incident2)
Now master_min looks like this
> master_min
master_min incident1 incident2
1: 2016-01-01 00:00:00 1 5
2: 2016-01-01 00:01:00 7 2
3: 2016-01-01 00:02:00 5 7
4: 2016-01-01 00:03:00 1 1
5: 2016-01-01 00:04:00 9 4
6: 2016-01-01 00:05:00 9 8
7: 2016-01-01 00:06:00 1 9
8: 2016-01-01 00:07:00 8 2
9: 2016-01-01 00:08:00 4 4
10: 2016-01-01 00:09:00 5 0
Apply transformations
master_min$sum1 <- cumsum(master_min$incident1)
master_min$sum2 <- sum(master_min$incident2) - cumsum(master_min$incident2)
Results
> master_min
master_min incident1 incident2 sum1 sum2
1: 2016-01-01 00:00:00 1 5 1 37
2: 2016-01-01 00:01:00 7 2 8 35
3: 2016-01-01 00:02:00 5 7 13 28
4: 2016-01-01 00:03:00 1 1 14 27
5: 2016-01-01 00:04:00 9 4 23 23
6: 2016-01-01 00:05:00 9 8 32 15
7: 2016-01-01 00:06:00 1 9 33 6
8: 2016-01-01 00:07:00 8 2 41 4
9: 2016-01-01 00:08:00 4 4 45 0
10: 2016-01-01 00:09:00 5 0 50 0
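To recover the requested new_column from these helper columns (a note of mine, not part of the original answer): sum1 includes the current row, so subtract it back out before adding sum2:
master_min$new_column <- (master_min$sum1 - master_min$incident1) + master_min$sum2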
