Subtracting time from current time in R

I am designing a flexdashboard for my office.
I have a column (let's say Entry time) containing timestamps (e.g. 2020-06-01 20:30). I want to remove the rows for which the difference between the current time and the entry time is greater than 24 hours. Can you please help?

If you are a tidyverse person you can do this pretty easily with lubridate and filter, and then use select to keep the columns you want after filtering.
require(lubridate)
require(tidyverse)

df <- df %>%
  mutate(time_difference = interval(ymd_hm(start_column), ymd_hm(end_column))) %>%
  filter(as.numeric(time_length(time_difference, 'hour')) <= 24) %>%
  select(-time_difference)
This takes a data frame and creates a new column containing a lubridate interval. time_length then gives the duration in hours, which is coerced to numeric (just in case, as some date-like objects are strings under the hood) inside a filter that keeps only rows within 24 hours. The last line uses select simply to remove the time_difference column created to do the filtering.
This will all be saved back into the original data frame.
Do check the syntax before you go; without data to test it on, I may have missed a closing parenthesis or something somewhere.
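Applied to the question's own scenario, the same pattern would look something like the sketch below; the column name entry_time is an assumption, and now() from lubridate supplies the current time.
library(lubridate)
library(dplyr)

df <- df %>%
  # interval from each entry time to the current time
  mutate(time_difference = interval(ymd_hm(entry_time), now())) %>%
  # keep only rows entered within the last 24 hours
  filter(time_length(time_difference, "hour") <= 24) %>%
  select(-time_difference)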

Related

Date / Time calculations

I'm trying to calculate the difference between 2 dates/times. My problem is that each date and time is in a separate column (see screenshot). Following is the formula I have been using:
=IF(RC[-1]-RC[-4] =0,"",RC[-1]-RC[-4])
This worked until the 2 date columns weren't the same day.
I'm having trouble trying to combine the dates and times within the formula. I could write a macro to do this, or I could combine each date and time pairing into one column if that makes it easier. I'd rather not combine them, as separate columns are easier for the user base.
Any help or suggestions would be greatly appreciated. Thanks in advance for your help....
First concatenate the date and time:
=CONCATENATE(TEXT(A2,"mm/dd/yyyy")&" "&TEXT(B2,"hh:mm:ss"))
then subtract them.
Otherwise, you can subtract the dates and times directly, without adding any extra columns:
=(CONCATENATE(TEXT(C2,"mm/dd/yyyy")&" "&TEXT(D2,"hh:mm:ss AM/PM"))-CONCATENATE(TEXT(A2,"mm/dd/yyyy")&" "&TEXT(B2,"hh:mm:ss AM/PM")))*24
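For readers working in R rather than Excel, the same difference in hours can be computed by pasting the date and time columns together and using difftime. This is only an illustrative sketch, not part of the original answer; the data frame and the column names A, B, C, D are assumptions mirroring the screenshot layout.
# assumed layout: start date/time in columns A/B, end date/time in columns C/D
dat <- data.frame(
  A = c("06/01/2020", "06/01/2020"),
  B = c("08:00:00",   "22:30:00"),
  C = c("06/02/2020", "06/01/2020"),
  D = c("10:00:00",   "23:45:00")
)

start <- as.POSIXct(paste(dat$A, dat$B), format = "%m/%d/%Y %H:%M:%S")
end   <- as.POSIXct(paste(dat$C, dat$D), format = "%m/%d/%Y %H:%M:%S")

# elapsed time in hours, matching the Excel formula's *24 conversion
dat$hours <- as.numeric(difftime(end, start, units = "hours"))
dat$hours
# 26.00  1.25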

Creating a Time Series with Half Hourly Data in R

This is my first time ever asking a question on Stack Overflow and I'm a programming novice so any advice as to how to improve my question asking abilities would be appreciated.
Onto my question: I have two csv files, one containing three columns (date time in dd/mm/yyyy hh:(00 or 30) format, production of a certain product, and demand for said product), and the other containing several columns (decomposition of the date time into year, month, day, hour, and whether it is :00 or :30 represented by 1 or 2 respectively, alongside several columns for independent variables which may affect production/demand of said product).
I've only played around with the first csv file, converting the string into a datetime object but the ts() function won't recognise the datetime objects as my times. I've tried adjusting the frequency parameter but ultimately failed and have no idea how to create a time series using half hourly data. Would appreciate any help.
Thanks in advance!
My suggestion is to apply difftime over all your time data. For instance, as in the following code, you can use your initial time (the time of the first record) as time_start for every comparison and the other times as time_finish. It then returns the time intervals as numbers of seconds, and you are ready to use the other column values as the values at those time stamps.
interval <- as.integer(difftime(strptime(time_finish, "%d/%m/%Y %H:%M"),
                                strptime(time_start,  "%d/%m/%Y %H:%M"),
                                units = "secs"))
The result is a vector of elapsed seconds (0, 1800, 3600, ... for consecutive half-hourly records).
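A minimal self-contained sketch of that idea on half-hourly data; the timestamps and production values below are invented, and the zoo step at the end is just one possible way to attach values to the offsets, not part of the original answer.
# toy half-hourly data in the question's dd/mm/yyyy hh:mm format
times      <- c("01/06/2020 00:00", "01/06/2020 00:30", "01/06/2020 01:00")
production <- c(5.1, 4.8, 5.3)

time_start  <- strptime(times[1], "%d/%m/%Y %H:%M")   # first record
time_finish <- strptime(times,    "%d/%m/%Y %H:%M")   # every record

interval <- as.integer(difftime(time_finish, time_start, units = "secs"))
interval
# 0 1800 3600

# one way to use the offsets as a time index
library(zoo)
z <- zoo(production, order.by = interval)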

Extract data for all days for last 30 days from R data frame

I am totally new to the R environment and I'm stuck on date operations. The scenario is: I have a daily database of customer activity for a certain store, and I need to extract the last 30 days of data counting back from the current date.
In other words, suppose today is 18-NOV-2014; I need all the data from 18-OCT-2014 till today in a separate data frame. What kind of iteration logic should I write in R to extract it?
You don't need an iteration. Assuming your data.frame is called X and the date column DATE, you could write:
X$DATE <- as.Date(X$DATE, format = '%d-%b-%Y')
The 'format' argument is there to match the date format you specify in your question (%b for the abbreviated month name). Then, to get the rows you are interested in, something like:
X[X$DATE >= Sys.Date() - 30, ]
which keeps all the rows whose date falls after today minus 30 days.
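A quick self-contained illustration of those two steps; the data frame and its values are made up, and the fixed reference date simply stands in for Sys.Date() so the example is reproducible.
X <- data.frame(
  DATE  = c("18-NOV-2014", "01-NOV-2014", "15-SEP-2014"),
  SALES = c(100, 250, 80)
)
X$DATE <- as.Date(X$DATE, format = '%d-%b-%Y')

ref    <- as.Date("2014-11-18")      # stands in for Sys.Date() in the question's scenario
recent <- X[X$DATE >= ref - 30, ]    # keeps the 18-NOV-2014 and 01-NOV-2014 rows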
Does this help at all?

Index xts using string and return only observations at that exact time

I have an xts time series in R and am using the very handy ability to subset the time series based on a string, for example
time_series["17/06/2006 12:00:00"]
This will return the nearest observation to that date/time - which is very handy in many situations. However, in this particular situation I only want to return the elements of the time series which are at that exact time. Is there a way to do this in xts using a nice date/time string like this?
In a more general case (I don't have this problem immediately now, but suspect I may run into it soon) - is it possible to extract the closest observation within a certain period of time? For example, the closest observation to the given date/time, assuming it is within 10 minutes of the given date/time - otherwise just discard that observation.
I suspect this more general case may require me writing a function to do this - which I am happy to do - I just wanted to check whether the more specific case (or the general case) was already catered for in xts.
AFAIK, the only way to do this is to use a subset that begins at the time you're interested in, then get the first observation of that.
e.g.
first(time_series["2006-06-17 12:00:00/2006-06-17 12:01"])
or, more generally, to get the 12:00 price every day, you can subset down to 1 minute of each day, then split by days and extract the first observation of each.
do.call(rbind, lapply(split(time_series["T12:00:00/T12:01"],'days'), first))
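For concreteness, here is a toy illustration of both subsets; the series, its times and its values are all invented for the example.
library(xts)

# toy series: one observation every 15 minutes over two days
idx <- seq(as.POSIXct("2006-06-17 00:00:00", tz = "UTC"),
           as.POSIXct("2006-06-18 23:45:00", tz = "UTC"), by = "15 min")
time_series <- xts(seq_along(idx), order.by = idx)

# first observation in the short window starting at 12:00 on 2006-06-17 (empty if none)
first(time_series["2006-06-17 12:00:00/2006-06-17 12:01"])

# the 12:00 observation of every day in the series
do.call(rbind, lapply(split(time_series["T12:00:00/T12:01"], "days"), first))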
Here's a thread where Jeff (the xts author) contemplates adding the functionality you want
http://r.789695.n4.nabble.com/Find-first-trade-of-day-in-xts-object-td3598441.html#a3599887

Specific date format conversion problems in R

Basically I want to know why as.Date(200322,format="%Y%W") gives me NA. While we are at it, I would appreciate any advice on a data structure for repeated cross-sections (aka a pseudo-panel) in R.
I did get aggregate() to (sort of) work, but it is not flexible enough; for example, it drops data in some columns when I omit the missing values.
Specifically, I have a survey that is repeated weekly for a couple of years, with a bunch of similar questions whose answers I would like to combine, average, condition on and plot in both dimensions. Getting the date conversion right should presumably help me towards my goal with the zoo package or something similar.
Any input is appreciated.
Update: thanks for the string suggestion, but as you can see in your own example, the %W part doesn't work: it only picks up the year and fills in the current day, whereas I need to set a specific week (and leave the day blank).
Use a string as the first argument to as.Date() and select a specific weekday (format %w, values 0-6). There are seven possible dates in each week, so strptime needs more information to select a unique date; otherwise the current day and month are returned.
> as.Date(paste("200947", "0", sep="-"), format="%Y%W-%w")
[1] "2009-11-22"
