I have a CSV file that contains many thousands of timestamped data points. The file includes the following columns: Date, Tag, East, North and DistFromMean. The following is a sample of the data in the file:
The data is recorded approximately every 15 minutes for 12 tags over a month. What I want to do is select subsets from the data, starting from the first date entry, e.g. every 3 hours; but because the tags transmit at slightly different rates, I need a minimum and maximum value around each start and end time.
I have found a related previous question but don't understand the answer well enough to implement it.
The solution could first ask for the Tag number, then the period required, perhaps in minutes from the start time (e.g. every 3 hrs, or 180 minutes), then the minimum and maximum of the time range, both of which would be constant for whatever period was used. The minimum and maximum would probably need to be plus and minus 6 minutes around the selected period.
As the code below shows, I've managed to read in the file, convert the Date column to POSIXlt, and extract data within a specific time frame, but the bit I'm stuck on is extracting the data at every nth minute and within a range.
TestData <- read.csv("TestData.csv", header = TRUE, as.is = TRUE)
TestData$Date <- strptime(TestData$Date, "%d/%m/%Y %H:%M")  # parse to POSIXlt
# rows within a specific time frame
TestData[TestData$Date >= as.POSIXlt("2014-02-26 7:10:00") &
         TestData$Date <  as.POSIXlt("2014-02-26 7:18:00"), ]
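One way to tackle the missing piece, as a sketch rather than a definitive implementation (the function subset_by_period and its arguments are mine, not from the question): build target times every period_min minutes from the first entry for the chosen tag, then keep the row closest to each target, provided it falls within the tolerance.

subset_by_period <- function(data, tag, period_min = 180, tol_min = 6) {
  d <- data[data$Tag == tag, ]
  d$Date <- as.POSIXct(d$Date)               # POSIXct is easier to sequence
  d <- d[order(d$Date), ]
  targets <- seq(min(d$Date), max(d$Date), by = period_min * 60)  # by is in seconds
  picked <- lapply(targets, function(t) {
    w <- d[abs(difftime(d$Date, t, units = "mins")) <= tol_min, ]
    if (nrow(w) == 0) return(NULL)           # no reading near this target
    w[which.min(abs(difftime(w$Date, t, units = "mins"))), ]      # closest row
  })
  do.call(rbind, picked)
}

every3h <- subset_by_period(TestData, tag = 1, period_min = 180, tol_min = 6)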
I have a dataset in .csv, and I have added my own column in the CSV holding the total time taken for a task to be completed. Two other columns contain the start time and the end time, and they are what I calculated the total-time-taken column from. The start and end time columns are in the datetime format 5/7/2018 16:13, while the total-time-taken column is in the format 0:08:20 (H:MM:SS).
I understand that for datetimes it is possible to use as.Date or as.POSIXlt to change the variable type from a factor to a date. Is there a function I can use to convert my total-time-taken column (from a factor) so that I can use it to plot scatterplots/plots in general? I tried as.numeric, but the numbers that come out are gibberish and do not correspond to the original times.
If you want to plot the total time taken for each row, then I would suggest just plotting that difference as seconds. Here is a snippet showing how you can convert your start or end date into a numerical value:
start <- "5/7/2018 16:13"
start_date <- as.POSIXct(start, format = "%d/%m/%Y %H:%M")  # day/month/year
as.numeric(start_date)   # seconds since the epoch
[1] 1530799980
The above is a UNIX timestamp: the number of seconds since the epoch (January 1, 1970). But since you want a difference between start and end times, this detail does not really matter; the difference between the two numeric values is valid either way.
If you want to use minutes, hours, or some other time unit, then you can easily convert.
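For the total-time-taken column itself, a small sketch (my addition, not part of the original answer, assuming the values look like "0:08:20"): as.difftime parses H:MM:SS strings directly, which avoids the gibberish you get from calling as.numeric on a factor.

total <- c("0:08:20", "1:02:05")                # example duration strings
secs <- as.numeric(as.difftime(as.character(total), format = "%H:%M:%S"),
                   units = "secs")              # as.character guards against factors
secs
[1]  500 3725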
Is there a way to window filter dates by a number of days excluding weekends?
I know you can use the between function for filtering between two specific dates, but I only know one of the two dates; the other should be 4 days prior, counted in business days only (not counting weekends).
A pseudo-example of what I am looking for: given this Wednesday, I want to filter everything up to 4 business days beforehand:
window(z, start = as.POSIXct("2017-09-13"), end = as.POSIXct("2017-09-20"))
Another example: if I am given this Friday's date, the start date would be the preceding Monday.
Ideally, I want to be able to play with the window value.
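As a hedged sketch (the helper business_days_before is my name, not from any library): walk backwards one calendar day at a time, counting only weekdays, until n business days have been consumed.

business_days_before <- function(end, n) {
  d <- as.Date(end)
  while (n > 0) {
    d <- d - 1
    if (!(format(d, "%u") %in% c("6", "7"))) n <- n - 1  # %u: 6 = Sat, 7 = Sun
  }
  d
}

start <- business_days_before("2017-09-20", 4)   # 2017-09-14, a Thursday
window(z, start = as.POSIXct(start), end = as.POSIXct("2017-09-20"))

Whether the end day itself counts as one of the n days is a matter of convention; use n = 5 to reproduce the 2017-09-13 start in the example above.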
I have an Uber dataset containing the variables pickup point, request time, drop time, and a date variable without month and year.
I need code that calculates idle time and stores it in a new variable, idle time. The calculation is as follows:
If the pickup points are the same for consecutive rows but the date differs between them, the value should be NA; otherwise it is the difference between the first row's drop time and the second row's pickup (request) time. I have done this in Excel and need to do it in R.
Attached is a screenshot of the data in Excel.
Try something like this, if it is what you are looking for:

df$idle <- NA                                   # initialise the new column
for (i in 2:nrow(df)) {
  if (df$Pickup.point[i] != df$Pickup.point[i - 1]) {
    df$idle[i] <- NA                            # different pickup point
  } else if (df$Date[i] != df$Date[i - 1]) {
    df$idle[i] <- NA                            # different day
  } else {
    df$idle[i] <- df$Req[i] - df$Drop[i - 1]    # request time minus previous drop
  }
}
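The same logic can be vectorised, which helps on larger data; this variant is my addition, assuming Req and Drop are numeric or POSIXct values (note that ifelse drops the difftime class, so the result is in the underlying units):

n <- nrow(df)
same_point <- df$Pickup.point[-1] == df$Pickup.point[-n]   # row i vs row i-1
same_date  <- df$Date[-1] == df$Date[-n]
df$idle <- c(NA, ifelse(same_point & same_date, df$Req[-1] - df$Drop[-n], NA))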
I have two time series.
Each point in either time series is for a week. A week here is not exactly a calendar week: the first week of a calendar year always starts on Jan 1, the following weeks of that year run consecutively from there, and the last week of the year may contain more than 7 days but no more than 13.
The first time series, A, is stored in a compressed (.gz) text file A.gz, which looks like this (each line holds a week and the corresponding value, separated by a comma):
week,value
20060101-20060107,0
20060108-20060114,5
...
20061217-20061223,0
20061224-20061230,0
20070101-20070107,0
20070108-20070114,4
...
20150903-20150909,0
20150910-20150916,1
The second time series, B, is similarly stored in a compressed (.gz) text file B.gz, but covers only a subset of A's period. It looks like:
week,value
20130122-20130128,509
20130129-20130204,204
...
20131217-20131223,150
20131224-20131231,148.0
20140101-20140107,365.0
20140108-20140114,45.0
...
20150305-20150311,0
20150312-20150318,364
How can I calculate the cross-correlation between the two time series A and B (up to a specified maximum lag), and plot A and B in a single plot, in R?
Thanks
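A hedged sketch of one approach (my addition; it assumes the common weeks of A and B line up one-to-one after merging on the week column): read both files, align them, run ccf from base R's stats package, and overlay the two series.

A <- read.csv(gzfile("A.gz"), stringsAsFactors = FALSE)
B <- read.csv(gzfile("B.gz"), stringsAsFactors = FALSE)

AB <- merge(A, B, by = "week", suffixes = c(".A", ".B"))   # weeks in both series
AB <- AB[order(AB$week), ]      # "YYYYMMDD-..." strings sort chronologically

ccf(AB$value.A, AB$value.B, lag.max = 20)   # cross-correlation up to lag 20

wk <- as.Date(substr(AB$week, 1, 8), "%Y%m%d")             # week start dates
plot(wk, AB$value.A, type = "l", xlab = "week", ylab = "value")
lines(wk, AB$value.B, col = "red")
legend("topleft", c("A", "B"), col = c("black", "red"), lty = 1)

Merging on week restricts the comparison to the overlapping period, which is what the cross-correlation needs; plot the full series A separately if you want its whole range.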
I have a data set (3.2 million rows) in R consisting of pairs of time (milliseconds) and volts. The sensor that gathers the data only runs during the day, so the time is actually the milliseconds since start-up on that day.
For example, if the sensor runs 12 hours per day, then the maximum possible time value for one day is 43,200,000 ms (12h * 60m * 60s * 1000ms).
The data is continually added to a single file, which means there are many overlapping time values:
X: [1,2,3,4,5,1,2,3,4,5,1,2,3,4,5...] // example if range was 1-5 for one day
Y: [voltage readings at each point in time...]
I would like to separate each "run" into its own data frame so that I can see individual days clearly. Currently, when I plot the entire data set, it is incredibly muddy, because all of the days are drawn on top of each other in a single plot. Thanks for any help.
If your data.frame df has columns X and Y, you can use diff to find every time X goes down (meaning a new day, it sounds like):
df$Day <- cumsum(c(1, diff(df$X) < 0))   # increment the day counter whenever X drops
Day1 <- df[df$Day == 1, ]                # first day's run
plot(Day1$X, Day1$Y)
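To get every day as its own data frame, as the question asks, split does it in one step; this addition is mine, not part of the original answer:

days <- split(df, df$Day)          # list with one data frame per day
plot(days[[2]]$X, days[[2]]$Y)     # e.g. the second day's run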