newdf <- data.frame(date = as.Date(c("2021-01-04", "2021-01-05", "2021-01-06", "2021-01-07")),
                    time = c("10:32:29", "11:25", "12:18:42", "09:58"))
This is my data frame. I want to calculate the time difference between two consecutive days in hours. Note that some time values do not contain seconds, so they first have to be converted to a standard form. Could you please suggest a method that solves all of these problems? This is all in R.
Paste the date and time together into one column, use parse_date_time to convert the value to a standard format (POSIXct), and use difftime to calculate the difference between consecutive times in hours.
library(dplyr)
library(tidyr)
library(lubridate)
newdf %>%
  unite(datetime, date, time, sep = ' ') %>%
  mutate(datetime = parse_date_time(datetime, c('Ymd HMS', 'Ymd HM')),
         difference_in_hours = round(as.numeric(difftime(datetime, lag(datetime),
                                                         units = 'hours')), 2))
# datetime difference_in_hours
#1 2021-01-04 10:32:29 NA
#2 2021-01-05 11:25:00 24.88
#3 2021-01-06 12:18:42 24.90
#4 2021-01-07 09:58:00 21.66
I have the following code, and as you can see the system calculates the difference correctly, but I would like to keep only the first 5 characters, i.e. 3.449 etc., and discard the 'hours' part.
When I try to convert the value to character, it does not work.
Any help will be appreciated.
library(dplyr)
df %>%
  mutate(difference = difftime(DateTime_Start, DateTime_End, units = 'hours'))
# DateTime_Start DateTime_End difference
#1 2021-02-02 16:42:11 2021-02-02 16:43:15 0.000000 hours
#2 2021-02-02 20:10:14 2021-02-02 20:11:55 3.449754 hours
I have tried converting the value to character so I can use substr() to extract the digits, but the conversion to character fails.
You can convert to numeric and round:
round(as.numeric(df$difference), 3)
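If you want the cleaned value directly in the data frame, here is a minimal sketch that folds that conversion into the pipeline (assuming the same df as above):
library(dplyr)
df %>%
  mutate(difference = round(as.numeric(difftime(DateTime_Start, DateTime_End,
                                                units = 'hours')), 3))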
I have a dataset that looks like this; the readingdate is in POSIXct format. I want to extract the date into one field and the time into another field in R. I'm trying to avoid base R as much as possible, so if you can do this with lubridate that would be great. I want the newly extracted fields to be in the right format, because my ultimate goal is to plot the time (x) against total items sold (y) in order to determine what time of day the highest sale is made. Thanks for your help.
If I understood correctly, R reads your dates and times correctly when you import your data (because they are in POSIXct format), but you cannot extract the date and the time in the right format from your date-time column.
Considering that you have a data.frame in R, like this:
date_time Sold
1 2020-01-01 03:16:01 2
2 2020-01-02 02:15:12 2
3 2020-01-03 08:26:11 3
4 2020-01-04 09:29:14 2
5 2020-01-05 12:06:06 1
6 2020-01-06 08:08:11 3
lubridate does not offer a function that extracts the time component directly, so you have to extract it piece by piece with the hour(), minute() and second() functions and then concatenate these components with paste(). For the dates, you can use as.Date() to extract them and then format() to present them the way you want.
library(lubridate)
library(dplyr)
library(magrittr)
tab <- tab %>%
  mutate(
    date = as.Date(date_time),
    hour = hour(date_time),
    minute = minute(date_time),
    second = second(date_time)
  ) %>%
  mutate(
    format_date = format(date, "%m/%d/%Y"),
    format_hour = paste(hour, minute, second, sep = ":")
  )
Resulting in this:
tab %>% select(format_date, format_hour) %>% head()
format_date format_hour
1 01/01/2020 12:4:23
2 01/02/2020 3:19:13
3 01/03/2020 8:6:24
4 01/04/2020 6:28:2
5 01/05/2020 2:16:20
6 01/06/2020 12:8:28
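As a side note, not part of the original answer: if you prefer zero-padded times (e.g. 08:06:24 rather than 8:6:24), base format() applied directly to the POSIXct column gives them in one step, which may also make the later plotting easier:
tab <- tab %>%
  mutate(format_hour = format(date_time, "%H:%M:%S"))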
So I have this sample of UTC timestamps and a bunch of other data. I would like to group my data by date. This means I do not need the hours/mins/secs, and I would like a new df that shows the number of actions grouped by day.
I tried using lubridate to pull out the date, but I can't get the origin right.
DATA
hw0 <- read.table(text =
'ID timestamp action
4f.. 20160305195246 visitPage
75.. 20160305195302 visitPage
77.. 20160305195312 checkin
42.. 20160305195322 checkin
8f.. 20160305195332 searchResultPage
29.. 20160305195342 checkin', header = T)
Here's what I tried
library(dplyr)
library(lubridate) #this will allow us to extract the date
daily <- hw0 %>%
mutate(date=date(as.POSIXct(timestamp),origin='1970-01-01'))
daily <- daily %>%
group_by(date)
I am unsure what to use as the origin, and the error says this value is incorrect. Ultimately, I expect the code to return a new df with a variable (date) listing the unique dates, as well as how many of the different actions there are on each day.
Assuming the numbers at the end are 24-hour times, you can use:
daily = hw0 %>%
  mutate(date = as.POSIXct(as.character(timestamp), format = '%Y%m%d%H%M%S'))
You can use as.Date instead if you want to get rid of the hour times. You only need to supply the origin when you give a numeric argument, which as.POSIXct interprets as the number of seconds since the origin. In your case you should just give it a character vector and supply the date format.
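For instance, if you only need the calendar date, a sketch along the same lines (strptime stops matching after the date part of the string, so the trailing time digits are ignored):
daily = hw0 %>%
  mutate(date = as.Date(as.character(timestamp), format = '%Y%m%d'))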
lubridate also has the ymd_hms() function, which can parse the timestamp, and the floor_date() function, which would help here.
library(tidyverse)
daily <- hw0 %>%
  mutate(time = ymd_hms(timestamp, tz = 'UTC'),
         date = floor_date(time, unit = 'day'))
lubridate also has parse_date_time, which is a nice mix of the two solutions above.
library(tidyverse)
library(lubridate)
hw0 %>%
  mutate(timestamp = parse_date_time(timestamp, orders = "%Y%m%d%H%M%S"))
ID timestamp action
1 4f.. 2016-03-05 19:52:46 visitPage
2 75.. 2016-03-05 19:53:02 visitPage
3 77.. 2016-03-05 19:53:12 checkin
4 42.. 2016-03-05 19:53:22 checkin
5 8f.. 2016-03-05 19:53:32 searchResultPage
6 29.. 2016-03-05 19:53:42 checkin
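To finish the original goal (how many of each action per day), a sketch building on the parsed column; count() is the dplyr shorthand for group_by() plus tally():
hw0 %>%
  mutate(timestamp = parse_date_time(timestamp, orders = "%Y%m%d%H%M%S"),
         date = as.Date(timestamp)) %>%
  count(date, action)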
Right now I have two dataframes. One contains over 11 million rows of a start date, end date, and other variables. The second dataframe contains daily values for heating degree days (basically a temperature measure).
set.seed(1)
library(lubridate)
date.range <- ymd(paste(2008, 3, 1:31, sep = "-"))
daily <- data.frame(date = date.range, value = runif(31, min = 0, max = 45))
intervals <- data.frame(start = daily$date[1:5], end = daily$date[c(6, 9, 15, 24, 31)])
In reality my daily dataframe has every day for 9 years and my intervals dataframe has entries that span over arbitrary dates in this time period. What I wanted to do was to add a column to my intervals dataframe called nhdd that summed over the values in daily corresponding to that time interval (end exclusive).
For example, in this case the first entry of this new column would be
sum(daily$value[1:5])
and the second would be
sum(daily$value[2:8])
and so on.
I tried using the following code
intervals <- mutate(intervals,nhdd=sum(filter(daily,date>=start&date<end)$value))
This is not working, and I think it might have something to do with not referencing the columns correctly, but I'm not sure where to go.
I'd really like to use dplyr to solve this and not a loop, because 11 million rows will take long enough even with dplyr. I tried using more of lubridate, but dplyr doesn't seem to support the Period class.
Edit: I'm actually using dates from as.Date now instead of lubridate, but the basic question of how to refer to a different dataframe from within mutate still stands.
eps <- .Machine$double.eps
library(dplyr)
intervals %>%
  rowwise() %>%
  # subtracting eps from end makes the interval end-exclusive
  mutate(nhdd = sum(daily$value[between(daily$date, start, end - eps)]))
# start end nhdd
#1 2008-03-01 2008-03-06 144.8444
#2 2008-03-02 2008-03-09 233.4530
#3 2008-03-03 2008-03-15 319.5452
#4 2008-03-04 2008-03-24 531.7620
#5 2008-03-05 2008-03-31 614.2481
In case you find the dplyr solution a bit slow (mostly due to rowwise()), you might want to use data.table for pure speed:
library(data.table)
setkey(setDT(intervals), start, end)
setDT(daily)[, date1 := date]  # foverlaps needs an interval in x, so duplicate the date
foverlaps(daily, intervals, by.x = c("date", "date1"))[, sum(value), by = c("start", "end")]
# start end V1
#1: 2008-03-01 2008-03-06 144.8444
#2: 2008-03-02 2008-03-09 233.4530
#3: 2008-03-03 2008-03-15 319.5452
#4: 2008-03-04 2008-03-24 531.7620
#5: 2008-03-05 2008-03-31 614.2481
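If your data.table version supports non-equi joins, the end-exclusive interval sum can also be expressed directly, without the date1 helper column; a sketch, not part of the original answer:
library(data.table)
setDT(daily)[setDT(intervals), .(nhdd = sum(value)),
             on = .(date >= start, date < end), by = .EACHI]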
The below is an example of the data I have.
date time size filename day.of.week
1 2015-01-16 5:36:12 1577 01162015053400.xml Friday
2 2015-01-16 5:38:09 2900 01162015053600.xml Friday
3 2015-01-16 5:40:09 3130 01162015053800.xml Friday
What I would like to do is sum up the size of the files for each hour.
I would like a resulting data table that looks like:
date hour size
2015-01-16 5 7607
2015-01-16 6 10000
So forth and so on.
But I can't quite seem to get the output I need.
I've tried ddply and aggregate, but I'm summing up the entire day; I'm not sure how to break it down by the hour in the time column.
And I've got multiple days' worth of data, so it's not only for that one day: it runs from that day, almost every day, until yesterday.
Thanks!
The following should do the trick, assuming your example data are stored in a data frame called "test":
library(lubridate) # for hms and hour functions
test$time <- hms(test$time)
test$hour <- factor(hour(test$time))
library(dplyr)
test %>%
  select(-time) %>% # drop the Period column; dplyr doesn't handle this S4 class well
  group_by(date, hour) %>%
  summarise(size = sum(size))
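On the three example rows this yields a single group: date 2015-01-16, hour 5, size 7607 (1577 + 2900 + 3130).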
You can use data.table
library(data.table)
dt <- as.data.table(df)
# Define a time stamp column.
dt[, timestamp := as.POSIXct(strptime(paste(date, time), format = "%Y-%m-%d %H:%M:%S"))]
# Aggregate size by hour
dt[, .(size = sum(size)), by = .(hour = as.POSIXct(round(timestamp, "hours")))]
The benefit is that data.table is blazing fast!
Use a compound group_by(day, hour); that will do it.
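For example, a minimal sketch of that idea with lubridate's hms() and hour(), assuming the example data frame is called test as above:
library(dplyr)
library(lubridate)
test %>%
  mutate(hour = hour(hms(time))) %>%
  group_by(date, hour) %>%
  summarise(size = sum(size))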
If you combine your date and time columns into a single POSIXct column named when (similar to a previous answer, i.e. df$when <- as.POSIXct(strptime(paste(df$date, df$time), format = "%Y-%m-%d %H:%M:%S"))), you could use:
aggregate(df[c("size")], FUN = sum, by = list(d = as.POSIXct(trunc(df$when, "hours"))))