I have a data frame with date-times recorded in different time zones, and I want to compare each one with the current time in that time zone. Specifically, I want to add 1 hour to the "Date and time" column below and then compare the result with the current time in that row's time zone. For example, for the first row the time zone is EDT and the current time in EDT is 2017-07-18 10:20.
Date and time TZ
2017-07-08 16:00 EDT
2017-07-17 15:30 PDT
2017-07-17 11:00 EDT
2017-07-17 20:00 EDT
2017-07-17 10:00 EDT
2017-07-13 15:00 PDT
Here EDT is "America/New_York" and PDT is the Pacific time zone.
The raw data only had the time; later, using the city names, I created a column indicating whether each row is "EDT" or "PDT". I'm not sure how to proceed from here. I tried the approach from "this time zone not changing when specified", without success.
It's really tricky working with time zones, and my system's default time zone is "America/New_York", so I'm not sure whether what I tried was wrong.
Can anyone give me an idea of how to get the current local time into a column?
My desired output is:
Date and time | TZ | Localtime(current)
2017-07-08 16:00 | EDT | 2017-07-18 10:24:19 EDT
2017-07-17 15:30 | PDT | 2017-07-18 09:25:19 PDT
2017-07-17 11:00 | CDT | 2017-07-18 09:25:19 CDT
2017-07-17 20:00 | EDT | 2017-07-18 23:02:19 EDT
2017-07-17 10:00 | EDT | 2017-07-18 10:24:19 EDT
2017-07-13 15:00 | PDT | 2017-07-18 09:25:19 PDT
library(lubridate)
currentTime <- Sys.time()
# Map the TZ abbreviations to full Olson time zone names
tzs <- c("America/Los_Angeles", "America/Chicago", "America/New_York")
names(tzs) <- c("PDT", "CDT", "EDT")
# Current time expressed in each row's own time zone
lapply(tzs[data$TZ], with_tz, time = currentTime)
As suggested, use with_tz() from lubridate and loop over the rows.
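For example, to store the result as a formatted column, a rough sketch assuming the data frame is called data with a TZ column as above (the column name Localtime is made up for illustration):
# Current time rendered in each row's own zone, with the zone abbreviation appended
data$Localtime <- vapply(
  tzs[data$TZ],
  function(tz) format(with_tz(currentTime, tz), "%Y-%m-%d %H:%M:%S %Z"),
  character(1)
)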
If you don't want/need to use lubridate, then this will give the same result, using the tzs and currentTime objects from @troh's answer:
Map(`attr<-`, currentTime, "tzone", tzs[data$TZ])
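For the original comparison question, a minimal sketch assuming the timestamp column is named Date.and.time and holds "%Y-%m-%d %H:%M" strings (later_than_now is an illustrative name): parse each timestamp in its own zone, add one hour, and compare with the current instant. POSIXct values mark instants, so the comparison does not depend on the display zone.
# Parse each row in its own time zone, add one hour, then compare with "now"
data$later_than_now <- mapply(
  function(x, tz) as.POSIXct(x, format = "%Y-%m-%d %H:%M", tz = tz) + 3600 > currentTime,
  data$Date.and.time,
  tzs[data$TZ]
)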
I'm getting the wrong time from moment.js when I pass a UTC date-time; I expect it to convert to 7:30 PM in Melbourne.
Example:
var myUTCTime = moment("2020-12-02 09:30:00.0000000 +00:00").utc();
Wed Dec 02 2020 09:30:00 GMT+0000
var melbourne = moment("2020-12-02 09:30:00.0000000 +00:00").utc().tz("Australia/Melbourne");
Wed Dec 02 2020 20:30:00 GMT+1100
Expecting Melbourne to be Wed Dec 02 2020 19:30:00 GMT+1000
7:30 PM (19:30) Melbourne Time = 9:30 AM (9:30) UTC
The following URL shows the conversion chart:
http://www.timebie.com/timezone/universalmelbourne.php
That's moment taking daylight saving time into account (Melbourne observes DST in December).
For other dates it's fine; I tried May:
moment("2020-05-02 09:30:00.000 +00:00").utc().toString()
"Sat May 02 2020 09:30:00 GMT+0000"
moment("2020-05-02 09:30:00.000 +00:00").utc().tz("Australia/Melbourne").toString();
"Sat May 02 2020 19:30:00 GMT+1000"
OK, this is making me crazy.
I have several datasets with time values that need to be rolled up into 15 minute intervals.
I found a solution here that works beautifully on one dataset, but on the next one I try I get weird results. I have a column of character data representing dates and times:
BeginTime
-------------------------------
1 1/3/19 1:50 PM
2 1/3/19 1:30 PM
3 1/3/19 4:56 PM
4 1/4/19 11:23 AM
5 1/6/19 7:45 PM
6 1/7/19 10:15 PM
7 1/8/19 12:02 PM
8 1/9/19 10:43 PM
And I'm using the following code (which is exactly what I used on the other dataset, except for the names):
df$by15 = cut(mdy_hm(df$BeginTime), breaks="15 min")
but what I get is:
BeginTime by15
-------------------------------------------------------
1 1/3/19 1:50 PM 2019-01-03 13:36:00
2 1/3/19 1:30 PM 2019-01-03 13:21:00
3 1/3/19 4:56 PM 2019-01-03 16:51:00
4 1/4/19 11:23 AM 2019-01-04 11:21:00
5 1/6/19 7:45 PM 2019-01-06 19:36:00
6 1/7/19 10:15 PM 2019-01-07 22:06:00
7 1/8/19 12:02 PM 2019-01-08 11:51:00
8 1/9/19 10:43 PM 2019-01-09 22:36:00
9 1/10/19 11:25 AM 2019-01-10 11:21:00
Any suggestions on why I'm getting such random times instead of the 15-minute intervals I'm looking for? Like I said, this worked fine on the other data set.
You can use the lubridate::round_date() function, which will roll your datetime data up to 15-minute intervals as follows:
library(lubridate) # To handle datetime data
library(dplyr) # For data manipulation
# Creating dataframe
df <-
data.frame(
BeginTime = c("1/3/19 1:50 PM", "1/3/19 1:30 PM", "1/3/19 4:56 PM",
"1/4/19 11:23 AM", "1/6/19 7:45 PM", "1/7/19 10:15 PM",
"1/8/19 12:02 PM", "1/9/19 10:43 PM")
)
df %>%
# First we parse the data in order to convert it from string format to datetime
  mutate(by15 = parse_date_time(BeginTime, '%m/%d/%y %I:%M %p'),
         # We roll the data up by rounding to the nearest 15-minute interval
         by15 = round_date(by15, "15 mins"))
#
#        BeginTime                by15
#   1/3/19 1:50 PM 2019-01-03 13:45:00
#   1/3/19 1:30 PM 2019-01-03 13:30:00
#   1/3/19 4:56 PM 2019-01-03 17:00:00
#  1/4/19 11:23 AM 2019-01-04 11:30:00
#   1/6/19 7:45 PM 2019-01-06 19:45:00
#  1/7/19 10:15 PM 2019-01-07 22:15:00
#  1/8/19 12:02 PM 2019-01-08 12:00:00
#  1/9/19 10:43 PM 2019-01-09 22:45:00
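As an aside on why cut() looked random: with breaks = "15 min", cut.POSIXt anchors the breaks at the earliest timestamp in the data (only the seconds are zeroed), so if that minimum falls at, say, six minutes past a quarter-hour, every interval is offset by six minutes. If you instead want each time snapped down to the clock-aligned quarter-hour it falls in, rather than rounded to the nearest one, here is a minimal sketch with floor_date(), using the same df as above (by15_floor is an illustrative name):
library(lubridate)
# Snap each time down to the quarter-hour boundary it falls in
# (13:50 -> 13:45, 16:56 -> 16:45), independent of the data's minimum timestamp
df$by15_floor <- floor_date(
  parse_date_time(df$BeginTime, "%m/%d/%y %I:%M %p"),
  "15 mins"
)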
I would like some help. A clinic has several doctors, and each one has a specific consultation time range, for example 07:00 to 12:00, 12:00 to 17:00, or 09:00 to 15:00. What is the SQL statement to display only the records whose start_time and end_time range matches the current time?
fields:
start_time | end_time
07:00:00 | 12:30:00
09:00:00 | 15:00:00
12:30:00 | 17:00:00
07:00:00 | 17:00:00
That is, in the morning, display only the records that fall within 07:00:00 to 12:30:00 based on the current time; if it is afternoon, show only the records that fall within 12:30:00 to 17:00:00.
Thanks.
I am using POSIXct to convert my datetime. Once the data is read from the file I use the following:
DF$datetime <- as.POSIXct(DF$datetime, format="%m/%d/%Y %H:%M", "EET")
The problem comes when the time changes from EEST to EET. My data ends up as follows:
2018-10-28 from 03:00:00 until 03:25:00 parsed as EEST (e.g. 2018-10-28 03:25:00 EEST)
2018-10-28 from 03:26:00 until 03:59:00 parsed as EET (e.g. 2018-10-28 03:26:00 EET)
If I use UTC instead of EET it works fine. Any suggestions on how I can avoid this situation?
I have my dates in the following formats: Wed Apr 25 2018 00:00:00 GMT-0700 (Pacific Standard Time), 43167, or Fri May 18 2018 00:00:00 GMT-0700 (PDT), all mixed in one column. What would be the easiest way to convert all of these to a simple YYYY-mm-dd format (e.g. 2018-04-13)? Here is the column:
dates <- c('Fri May 18 2018 00:00:00 GMT-0700 (PDT)',
'43203',
'Wed Apr 25 2018 00:00:00 GMT-0700 (Pacific Standard Time)',
'43167','43201',
'Fri May 18 2018 00:00:00 GMT-0700 (PDT)',
'Tue May 29 2018 00:00:00 GMT-0700 (Pacific Standard Time)',
'Tue May 01 2018 00:00:00 GMT-0700 (PDT)',
'Fri May 25 2018 00:00:00 GMT-0700 (Pacific Standard Time)',
'Fri Apr 06 2018 00:00:00 GMT-0700 (PDT)','43173')
Expected output: 2018-05-18, 2018-04-13, 2018-04-25, ...
I believe similar questions have been asked several times before. However, there
is a crucial point which needs special attention:
What is the origin for the dates given as integer (or as character string which can be converted to integer to be exact)?
If the data is imported from the Windows version of Excel, origin = "1899-12-30" has to be used. For details, see the Example section in help(as.Date) and the Other Applications section of the R Help Desk article by Gabor Grothendieck and Thomas Petzoldt.
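A quick sanity check of that origin with one of the serial numbers from the question:
# 43203 interpreted with the Windows Excel origin
as.Date(43203, origin = "1899-12-30")
# [1] "2018-04-13"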
For conversion of the date time strings, the mdy_hms() function from the lubridate package is used. In addition, I am using data.table syntax for its conciseness:
library(data.table)
data.table(dates)[!dates %like% "^\\d+$", new_date := as.Date(lubridate::mdy_hms(dates))][
is.na(new_date), new_date := as.Date(as.integer(dates), origin = "1899-12-30")][]
dates new_date
1: Fri May 18 2018 00:00:00 GMT-0700 (PDT) 2018-05-18
2: 43203 2018-04-13
3: Wed Apr 25 2018 00:00:00 GMT-0700 (Pacific Standard Time) 2018-04-25
4: 43167 2018-03-08
5: 43201 2018-04-11
6: Fri May 18 2018 00:00:00 GMT-0700 (PDT) 2018-05-18
7: Tue May 29 2018 00:00:00 GMT-0700 (Pacific Standard Time) 2018-05-29
8: Tue May 01 2018 00:00:00 GMT-0700 (PDT) 2018-05-01
9: Fri May 25 2018 00:00:00 GMT-0700 (Pacific Standard Time) 2018-05-25
10: Fri Apr 06 2018 00:00:00 GMT-0700 (PDT) 2018-04-06
11: 43173 2018-03-14
Apparently, the assumption that the origin belonging to the Windows version of Excel should be used seems to hold.
If only a vector of Date values is required:
data.table(dates)[!dates %like% "^\\d+$", new_date := as.Date(lubridate::mdy_hms(dates))][
is.na(new_date), new_date := as.Date(as.integer(dates), origin = "1899-12-30")][, new_date]
[1] "2018-05-18" "2018-04-13" "2018-04-25" "2018-03-08" "2018-04-11" "2018-05-18"
[7] "2018-05-29" "2018-05-01" "2018-05-25" "2018-04-06" "2018-03-14"