I have this CSV data:
Date Kilometer
2015-01-01 15:56:00 1
2015-01-01 17:40:00 2
2015-01-02 14:38:00 4
2015-01-02 14:45:00 3
and I would like to group by date and sum the Kilometer values, like this:
Date Kilometer
2015-01-01 3
2015-01-02 7
We can use data.table
library(data.table)
library(lubridate)
setDT(df)[, .(Kilometer = sum(Kilometer)), .(Date = date(Date))]
This can be done using dplyr and lubridate
library(dplyr)
df %>% group_by(Date = lubridate::date(Date)) %>% summarise(Kilometer=sum(Kilometer))
Date Kilometer
(date) (int)
1 2015-01-01 3
2 2015-01-02 7
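For completeness, here is a base-R sketch of the same daily aggregation (assuming df holds the data above, with Date either POSIXct or a "YYYY-MM-DD HH:MM:SS" character column):
# base R: truncate the timestamp to a Date, then sum Kilometer per day
aggregate(Kilometer ~ Date, data = transform(df, Date = as.Date(Date)), FUN = sum)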
I have a dataset of observations with start and end dates. I would like to calculate the moving average difference between the start and end dates.
I've included an example dataset below.
require(dplyr)
df <- data.frame(id = c(1, 2, 3),
                 start = c("2019-01-01", "2019-01-10", "2019-01-05"),
                 end = c("2019-02-01", "2019-01-15", "2019-01-10"))
df[,c("start", "end")] <- lapply(df[,c("start", "end")], as.Date)
id start end
1 2019-01-01 2019-02-01
2 2019-01-10 2019-01-15
3 2019-01-05 2019-01-10
The overall date ranges are 2019-01-01 to 2019-02-01. I would like to calculate the average difference between the start and end dates for each of the dates in that range.
The result would look exactly like this. I've included the actual values for the averages that should show up:
date avg
2019-01-01 0
2019-01-02 1
2019-01-03 2
2019-01-04 3
2019-01-05 4
2019-01-06 3
2019-01-07 4
2019-01-08 5
2019-01-09 6
2019-01-10 7
2019-01-11 5.5
. .
. .
. .
Creating a reproducible example:
df <- data.frame(id = c(1, 2, 3, 4),
                 start = c("2019-01-01", "2019-01-01", "2019-01-10", "2019-01-05"),
                 end = c("2019-01-04", "2019-01-05", "2019-01-12", "2019-01-08"))
df[,c("start", "end")] <- lapply(df[,c("start", "end")], as.Date)
df
Returns:
id start end
1 2019-01-01 2019-01-04
2 2019-01-01 2019-01-05
3 2019-01-10 2019-01-12
4 2019-01-05 2019-01-08
Then using the group_by function from dplyr:
library(dplyr)
df %>%
  group_by(start) %>%
  summarize(avg = mean(end - start)) %>%
  rename(date = start)
Returns:
date avg
<time> <time>
2019-01-01 3.5 days
2019-01-05 3.0 days
2019-01-10 2.0 days
Editing the answer as per comments.
Creating the df:
require(dplyr)
df <- data.frame(id = c(1, 2, 3),
                 start = c("2019-01-01", "2019-01-10", "2019-01-05"),
                 end = c("2019-02-01", "2019-01-15", "2019-01-10"))
df[,c("start", "end")] <- lapply(df[,c("start", "end")], as.Date)
Create dates for every start-end combination:
# gives the list of all dates within each start-end frame and calculates the difference
datesList = lapply(1:nrow(df), function(i) {
  tibble('date' = seq.Date(from = df$start[i], to = df$end[i], by = 1),
         'start' = df$start[i]) %>%
    dplyr::mutate(diff = date - start)
})
Finally, group by the date and compute the average to give output exactly like the one in the question:
finalDf = bind_rows(datesList) %>%
  dplyr::filter(diff != 0) %>%
  dplyr::group_by(date) %>%
  dplyr::summarise(avg = mean(diff, na.rm = TRUE))
The output thus becomes:
# A tibble: 31 x 2
date avg
<date> <time>
1 2019-01-02 1.0 days
2 2019-01-03 2.0 days
3 2019-01-04 3.0 days
4 2019-01-05 4.0 days
5 2019-01-06 3.0 days
6 2019-01-07 4.0 days
7 2019-01-08 5.0 days
8 2019-01-09 6.0 days
9 2019-01-10 7.0 days
10 2019-01-11 5.5 days
# … with 21 more rows
Let me know if it works.
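For what it's worth, here is a more compact sketch of the same idea using purrr and tidyr (it assumes the same df as above; avg comes out as a plain number of days rather than a difftime):
library(dplyr)
library(tidyr)
library(purrr)
df %>%
  mutate(date = map2(start, end, seq, by = "1 day")) %>%  # list-column of all dates per row
  unnest(cols = date) %>%                                 # one row per (id, date)
  mutate(diff = as.numeric(date - start)) %>%
  filter(diff != 0) %>%
  group_by(date) %>%
  summarise(avg = mean(diff))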
I'm new here, so I apologize if I miss any conventions.
I have a ~2000-row dataset with data on unique cases happening in a three-year period. Each case has a start date and an end date. I want to get a new dataframe that shows how many cases occur per week in this three-year period.
The structure of the dataset I have is like this:
ID Start_Date End_Date
1 2015-01-04 2017-11-02
2 2015-01-05 2015-10-26
3 2015-01-07 2015-03-04
4 2015-01-12 2016-05-17
5 2015-01-15 2015-04-08
6 2015-01-21 2016-07-31
7 2015-01-21 2015-07-16
8 2015-01-22 2015-03-03
This problem could be solved more easily with the sqldf package, but I chose to stick with dplyr (a rough sqldf sketch is included after the result below).
The approach:
library(dplyr)
library(lubridate)
# First create a data frame having all weeks from chosen start date to end date.
# 2015-01-01 to 2017-12-31
df_week <- data.frame(weekStart = seq(floor_date(as.Date("2015-01-01"), "week"),
                                      as.Date("2017-12-31"), by = 7))
df_week <- df_week %>%
  mutate(weekEnd = weekStart + 7,
         weekNum = as.character(weekStart, "%V-%Y"),
         dummy = TRUE)
# The dummy column is only there for joining purposes.
# The head looks like:
#> head(df_week)
# weekStart weekEnd weekNum dummy
#1 2014-12-28 2015-01-04 52-2014 TRUE
#2 2015-01-04 2015-01-11 01-2015 TRUE
#3 2015-01-11 2015-01-18 02-2015 TRUE
#4 2015-01-18 2015-01-25 03-2015 TRUE
#5 2015-01-25 2015-02-01 04-2015 TRUE
#6 2015-02-01 2015-02-08 05-2015 TRUE
# Prepare the data as mentioned in OP
df <- read.table(text = "ID Start_Date End_Date
1 2015-01-04 2017-11-02
2 2015-01-05 2015-10-26
3 2015-01-07 2015-03-04
4 2015-01-12 2016-05-17
5 2015-01-15 2015-04-08
6 2015-01-21 2016-07-31
7 2015-01-21 2015-07-16
8 2015-01-22 2015-03-03", header = TRUE, stringsAsFactors = FALSE)
df$Start_Date <- as.Date(df$Start_Date)
df$End_Date <- as.Date(df$End_Date)
df <- df %>% mutate(dummy = TRUE) # just for joining
# Use dplyr to join, filter and then group on week to find number of cases
# in each week
df_week %>%
  left_join(df, by = "dummy") %>%
  select(-dummy) %>%
  # keep weeks that overlap the case interval (this also catches cases that
  # start and end within a single week)
  filter(weekStart <= End_Date & weekEnd >= Start_Date) %>%
  group_by(weekStart, weekEnd, weekNum) %>%
  summarise(cases = n())
# Result
# weekStart weekEnd weekNum cases
# <date> <date> <chr> <int>
# 1 2014-12-28 2015-01-04 52-2014 1
# 2 2015-01-04 2015-01-11 01-2015 3
# 3 2015-01-11 2015-01-18 02-2015 5
# 4 2015-01-18 2015-01-25 03-2015 8
# 5 2015-01-25 2015-02-01 04-2015 8
# 6 2015-02-01 2015-02-08 05-2015 8
# 7 2015-02-08 2015-02-15 06-2015 8
# 8 2015-02-15 2015-02-22 07-2015 8
# 9 2015-02-22 2015-03-01 08-2015 8
#10 2015-03-01 2015-03-08 09-2015 8
# ... with 139 more rows
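And, for comparison, a rough sketch of the sqldf route mentioned at the top of this answer. It assumes df_week and df as built above; the dummy column is not needed because the overlap condition can go straight into the join, and SQLite may hand the date columns back as day counts that need as.Date(..., origin = "1970-01-01").
library(sqldf)
sqldf("SELECT w.weekStart, w.weekEnd, w.weekNum, COUNT(*) AS cases
       FROM df_week w
       JOIN df d
         ON w.weekStart <= d.End_Date AND w.weekEnd >= d.Start_Date
       GROUP BY w.weekStart, w.weekEnd, w.weekNum")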
Welcome to SO!
Before solving the problem, make sure the required packages are installed by running
install.packages(c("tidyr","dplyr","lubridate"))
if you haven't installed them yet.
I'll present a modern R solution next; those packages are magic.
This is one way to solve it:
library(readr)
library(dplyr)
library(lubridate)
raw_data <- 'id start_date end_date
1 2015-01-04 2017-11-02
2 2015-01-05 2015-10-26
3 2015-01-07 2015-03-04
4 2015-01-12 2016-05-17
5 2015-01-15 2015-04-08
6 2015-01-21 2016-07-31
7 2015-01-21 2015-07-16
8 2015-01-22 2015-03-03'
curated_data <- read_delim(raw_data, delim = "\t") %>%
  # convert the date column to Date format, assuming the dates are yyyy-mm-dd
  mutate(start_date = as.Date(start_date)) %>%
  # count how many weeks have passed since the earliest start date in the data
  mutate(weeks_lapse = as.integer((start_date - min(start_date)) / dweeks(1)))
curated_data %>%
  group_by(weeks_lapse) %>%            # group by week
  summarise(cases_per_week = n())      # count the cases in each week
And the solution is:
# A tibble: 3 x 2
weeks_lapse cases_per_week
<int> <int>
1 0 3
2 1 2
3 2 3
I'm getting started with R, so please bear with me.
For example, I have this data.table (or data.frame) object:
Time Station count_starts count_ends
01/01/2015 00:30 A 2 3
01/01/2015 00:40 A 2 1
01/01/2015 00:55 B 1 1
01/01/2015 01:17 A 3 1
01/01/2015 01:37 A 1 1
My end goal is to group the "Time" column by hour and sum count_starts and count_ends for each hourly time and station:
Time Station sum(count_starts) sum(count_ends)
01/01/2015 01:00 A 4 4
01/01/2015 01:00 B 1 1
01/01/2015 02:00 A 4 2
I did some research and found out that I should use the xts library.
Thanks for helping me out
UPDATE:
I converted the type of transactions$Time to POSIXct, so the xts package should be able to use the timeseries directly.
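For reference, here is a minimal xts sketch of the hourly aggregation, assuming transactions$Time is already POSIXct as described above. Note that xts only works along the time axis, so Station would have to be handled by splitting the data first, which is why the answers below use other tools.
library(xts)
# build an xts object from the numeric count columns, indexed by the POSIXct Time
x <- xts(transactions[, c("count_starts", "count_ends")], order.by = transactions$Time)
# sum both columns within each hour; endpoints() marks the last row of every hour
period.apply(x, endpoints(x, on = "hours"), colSums)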
Using base R, we can still do the above; the only difference is that the hour label will be one less for all of them:
dat=read.table(text = "Time Station count_starts count_ends
'01/01/2015 00:30' A 2 3
'01/01/2015 00:40' A 2 1
'01/01/2015 00:55' B 1 1
'01/01/2015 01:17' A 3 1
'01/01/2015 01:37' A 1 1",
header = TRUE, stringsAsFactors = FALSE)
dat$Time = cut(strptime(dat$Time, "%m/%d/%Y %H:%M"), "hour")
aggregate(. ~ Time + Station, dat, sum)
Time Station count_starts count_ends
1 2015-01-01 00:00:00 A 4 4
2 2015-01-01 01:00:00 A 4 2
3 2015-01-01 00:00:00 B 1 1
You can use the order function to rearrange the table or even the sort.POSIXlt function:
m = aggregate(. ~ Time + Station, dat, sum)
m[order(m[, 1]), ]
Time Station count_starts count_ends
1 2015-01-01 00:00:00 A 4 4
3 2015-01-01 00:00:00 B 1 1
2 2015-01-01 01:00:00 A 4 2
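If you want the hour labels to follow the question's convention (00:30 grouped under 01:00), one rough base-R trick is to shift every time forward by an hour before flooring. This is only a sketch: unlike lubridate::ceiling_date, it also pushes times that fall exactly on the hour into the next slot.
# assumes dat is re-read as above, so Time is still the raw character column
dat$HourEnd <- cut(strptime(dat$Time, "%m/%d/%Y %H:%M") + 3600, "hour")
aggregate(cbind(count_starts, count_ends) ~ HourEnd + Station, dat, sum)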
A solution using dplyr and lubridate. The key is to use ceiling_date to convert the date-time column to an hourly time step, and then group and summarise the data.
library(dplyr)
library(lubridate)
dt2 <- dt %>%
  mutate(Time = mdy_hm(Time)) %>%
  mutate(Time = ceiling_date(Time, unit = "hour")) %>%
  group_by(Time, Station) %>%
  summarise(`sum(count_starts)` = sum(count_starts),
            `sum(count_ends)` = sum(count_ends)) %>%
  ungroup()
dt2
# # A tibble: 3 x 4
# Time Station `sum(count_starts)` `sum(count_ends)`
# <dttm> <chr> <int> <int>
# 1 2015-01-01 01:00:00 A 4 4
# 2 2015-01-01 01:00:00 B 1 1
# 3 2015-01-01 02:00:00 A 4 2
DATA
dt <- read.table(text = "Time Station count_starts count_ends
'01/01/2015 00:30' A 2 3
'01/01/2015 00:40' A 2 1
'01/01/2015 00:55' B 1 1
'01/01/2015 01:17' A 3 1
'01/01/2015 01:37' A 1 1",
header = TRUE, stringsAsFactors = FALSE)
Explanation
mdy_hm is the function that converts the string to a date-time class; it stands for "month-day-year hour-minute", and the right choice depends on the structure of the string. ceiling_date rounds a date-time object up to the specified unit. group_by groups the data by the given variables, and summarise computes the summary for each group.
There are basically two steps required:
1) Round the Time down to the nearest 1-hour window:
library(data.table)
library(lubridate)
data = data.table(Time = c('01/01/2015 00:30', '01/01/2015 00:40', '01/01/2015 00:55',
                           '01/01/2015 01:17', '01/01/2015 01:37'),
                  Station = c('A', 'A', 'B', 'A', 'A'),
                  count_starts = c(2, 2, 1, 3, 1),
                  count_ends = c(3, 1, 1, 1, 1))
data[, Time_conv := as.POSIXct(strptime(Time, '%m/%d/%Y %H:%M'))]  # month/day/year, matching the question's format
data[, Time_round := floor_date(Time_conv, unit = "1 hour")]
2) Aggregate the data.table obtained above to get the desired result:
New_data = data[, list(count_starts_sum = sum(count_starts),
                       count_ends_sum = sum(count_ends)),
                by = c('Time_round', 'Station')]  # group by both the rounded hour and the station
Let's say I have a dataframe of timestamps with the corresponding number of tickets sold at that time.
Timestamp ticket_count
(time) (int)
1 2016-01-01 05:30:00 1
2 2016-01-01 05:32:00 1
3 2016-01-01 05:38:00 1
4 2016-01-01 05:46:00 1
5 2016-01-01 05:47:00 1
6 2016-01-01 06:07:00 1
7 2016-01-01 06:13:00 2
8 2016-01-01 06:21:00 1
9 2016-01-01 06:22:00 1
10 2016-01-01 06:25:00 1
I want to know how to calculate the number of tickets sold within a certain time frame of each ticket. For example, I want to calculate the number of tickets sold up to 15 minutes after each ticket. In this case, the first row would have three tickets, the second row would have four tickets, etc.
Ideally, I'm looking for a dplyr solution, as I want to do this for multiple stores with a group_by() function. However, I'm having a little trouble figuring out how to hold each Timestamp fixed for a given row while simultaneously searching through all Timestamps via dplyr syntax.
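For reference, a minimal construction of the example data above as a data.frame df with a POSIXct Timestamp column (the form the first answer below assumes):
df <- data.frame(
  Timestamp = as.POSIXct(c("2016-01-01 05:30:00", "2016-01-01 05:32:00",
                           "2016-01-01 05:38:00", "2016-01-01 05:46:00",
                           "2016-01-01 05:47:00", "2016-01-01 06:07:00",
                           "2016-01-01 06:13:00", "2016-01-01 06:21:00",
                           "2016-01-01 06:22:00", "2016-01-01 06:25:00")),
  ticket_count = c(1, 1, 1, 1, 1, 1, 2, 1, 1, 1)
)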
In the current development version of data.table, v1.9.7, non-equi joins are implemented. Assuming your data.frame is called df and the Timestamp column is POSIXct type:
require(data.table) # v1.9.7+
window = 15L # minutes
(counts = setDT(df)[.(t=Timestamp+window*60L), on=.(Timestamp<t),
.(counts=sum(ticket_count)), by=.EACHI]$counts)
# [1] 3 4 5 5 5 9 11 11 11 11
# add that as a column to original data.table by reference
df[, counts := counts]
For each row in t, all rows where df$Timestamp < that row are fetched, and by=.EACHI instructs the expression sum(ticket_count) to run for each row in t. That gives your desired result.
Hope this helps.
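Since the question asked for a dplyr option, here is a rough dplyr sketch of the same logic (all tickets sold before each row's Timestamp plus 15 minutes). It loops over rows with sapply, so it is O(n^2) and only reasonable for small data.
library(dplyr)
df %>%
  mutate(counts_dplyr = sapply(seq_along(Timestamp), function(i)
    sum(ticket_count[Timestamp < Timestamp[i] + 15 * 60])))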
This is a simpler version of the ugly one I wrote earlier.
# install.packages('dplyr')
library(dplyr)
your_data %>%
  mutate(timestamp = as.POSIXct(timestamp, format = '%m/%d/%Y %H:%M'),
         ticket_count = as.numeric(ticket_count)) %>%
  mutate(window = cut(timestamp, '15 min')) %>%
  group_by(window) %>%
  dplyr::summarise(tickets = sum(ticket_count))
window tickets
(fctr) (dbl)
1 2016-01-01 05:30:00 3
2 2016-01-01 05:45:00 2
3 2016-01-01 06:00:00 3
4 2016-01-01 06:15:00 3
Here is a solution using data.table, also incorporating different stores.
Example data:
library(data.table)
dt <- data.table(Timestamp = as.POSIXct("2016-01-01 05:30:00") + seq(60, 120000, by = 60),
                 ticket_count = sample(1:9, 2000, TRUE),
                 store = rep(c("A", "B", "C", "D"), 500))
Now apply the following:
ts <- dt$Timestamp
for (x in ts) {
  end <- x + 900
  dt[Timestamp <= end & Timestamp >= x, CS := sum(ticket_count), by = store]
}
This gives you
Timestamp ticket_count store CS
1: 2016-01-01 05:31:00 3 A 13
2: 2016-01-01 05:32:00 5 B 20
3: 2016-01-01 05:33:00 3 C 19
4: 2016-01-01 05:34:00 7 D 12
5: 2016-01-01 05:35:00 1 A 15
---
1996: 2016-01-02 14:46:00 4 D 10
1997: 2016-01-02 14:47:00 9 A 9
1998: 2016-01-02 14:48:00 2 B 2
1999: 2016-01-02 14:49:00 2 C 2
2000: 2016-01-02 14:50:00 6 D 6
I have a data.table with the following shape:
date_from date_until value
2015-01-01 2015-01-03 100
2015-01-02 2015-01-05 50
2015-01-02 2015-01-04 10
...
What I want to do is calculate, for every date in the year, the cumulative sum of all values whose date range covers that date. For the first row, the value 100 would be relevant for every day from 2015-01-01 until 2015-01-03. I want to add up all values which are relevant for a given date.
So, in the end there would be a data.table like this:
date value
2015-01-01 100
2015-01-02 160
2015-01-03 160
2015-01-04 60
2015-01-05 50
Is there any easy way to do this with data.table?
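For reference, a minimal construction of the example data.table shown above (only the rows actually listed), so the snippets below can be run as-is:
library(data.table)
dt <- data.table(date_from  = c("2015-01-01", "2015-01-02", "2015-01-02"),
                 date_until = c("2015-01-03", "2015-01-05", "2015-01-04"),
                 value      = c(100, 50, 10))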
dt[, .(date = seq(as.Date(date_from, '%Y-%m-%d'),
                  as.Date(date_until, '%Y-%m-%d'),
                  by = '1 day'),
       value),
   by = 1:nrow(dt)][, sum(value), by = date]
# date V1
#1: 2015-01-01 100
#2: 2015-01-02 160
#3: 2015-01-03 160
#4: 2015-01-04 60
#5: 2015-01-05 50
And another option using foverlaps:
# convert to Date for ease
dt[, date_from := as.Date(date_from, '%Y-%m-%d')]
dt[, date_until := as.Date(date_until, '%Y-%m-%d')]
# all of the dates
alldates = dt[, do.call(seq, c(as.list(range(c(date_from, date_until))), by = '1 day'))]
# foverlaps to find the intersections
foverlaps(dt, data.table(date_from = alldates, date_until = alldates,
key = c('date_from', 'date_until')))[,
sum(value), by = date_from]
# date_from V1
#1: 2015-01-01 100
#2: 2015-01-02 160
#3: 2015-01-03 160
#4: 2015-01-04 60
#5: 2015-01-05 50