I have a data frame with an Id column and a Date column.
Essentially, I would like to create a third column (Diff) that calculates the difference between dates, preferably grouped by ID.
I have constructed a POSIXlt vector with the following code:
c_time <- as.POSIXlt(df$Gf_Date)
a <- difftime(c_time[1:(length(c_time)-1)], c_time[2:length(c_time)], units = "weeks")
However, when I try to cbind it onto my data.frame, it errors with
"arguments imply differing number of rows"
because a is one row shorter than the original data.frame.
Any help would be greatly appreciated.
Since the difference can only be taken between two subsequent dates, it is undefined for the first entry. Therefore a reasonable choice would be to set that first value to NA.
This might work:
c_time <- as.POSIXlt(df$Gf_Date)
a <- c(NA, `units<-`(diff(c_time), "weeks"))
cbind(df,diff.dates=a)
(Hat tip to @thelatemail for a valuable suggestion that simplified the definition of a.)
PS: Note that the differences in a may have a different sign compared to your original approach. Depending on the convention that you prefer, you can use a <- -a to convert between the two.
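Since the question also asks for the differences grouped by ID, here is a minimal base-R sketch of one way to get them (an assumption-laden example: it reuses the Id and Gf_Date column names from the question and assumes the rows are sorted by date within each Id):
df$Gf_Date <- as.POSIXct(df$Gf_Date)
# per-Id differences in weeks; the first entry of each group is NA
df$Diff <- ave(as.numeric(df$Gf_Date), df$Id,
               FUN = function(d) c(NA, diff(d) / (7 * 24 * 60 * 60)))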
I have a data set from Douglas Montgomery's book Introduction to Time Series Analysis & Forecasting.
I created a data frame called pharm from this spreadsheet. We only have two variables but they're repeated over several columns. I'd like to take all odd "Week" columns past the 2nd column and stack them under the 1st Week column in order. Conversely I'd like to do the same thing with the even "Sales, in thousands" columns. Here's what I've tried so far:
pharm2 <- data.frame(week = c(pharm$week, pharm[,3], pharm[,5], pharm[,7]),
                     sales = c(pharm$sales, pharm[,4], pharm[,6], pharm[,8]))
This works because there aren't many columns, but I need a way to do this more efficiently because hard coding won't be practical with many columns. Does anyone know a more efficient way to do this?
If the columns are alternating, just subset with a recycled logical vector, unlist, and create a new data.frame:
out <- data.frame(week = unlist(pharm[c(TRUE, FALSE)]),
                  sales = unlist(pharm[c(FALSE, TRUE)]))
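To see why the recycled logical vector works, here is a toy sketch (the data are made up):
# c(TRUE, FALSE) is recycled across the columns, picking columns 1, 3, ...;
# c(FALSE, TRUE) picks columns 2, 4, ...
toy <- data.frame(w1 = 1:2, s1 = 3:4, w2 = 5:6, s2 = 7:8)
names(toy[c(TRUE, FALSE)]) # "w1" "w2"
names(toy[c(FALSE, TRUE)]) # "s1" "s2"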
You may use the seq() function to generate the sequences of indices that extract the alternating columns:
pharm2 <- data.frame(week = unlist(pharm[seq(1, ncol(pharm), 2)]),
                     sales = unlist(pharm[seq(2, ncol(pharm), 2)]))
I have a question about the use of a logical expression in combination with a variable.
Imagine that I have a data frame with multiple rows that each contain a date saved as 2021-09-25T06:04:35.689Z.
I also have a variable that contains yesterday's date as '2021-09-24', created with yesterday <- Sys.Date() - 1.
How do I filter the rows in my data frame based on the date of yesterday which is stored in the variable 'yesterday'?
To solve my problem, I have looked at multiple posts, for example:
Using grep to help subset a data frame
I am well aware that this question might be a duplicate. However, the existing questions do not provide the help that I need. I hope that one of you can help me.
As an initial matter, it looks like you have a vector instead of a data frame (only one column). If you really do have a data frame and only ran str() on one column, the very similar technique at the end will work for you.
The first thing to know is that your dates are stored as character strings, while your yesterday object has class Date. R will not compare objects of different types meaningfully, so you need to convert at least one of the two objects.
I suggest converting both to the POSIXct format so that you do not lose any information in your dates column but can still compare it to yesterday. Make sure to set the timezone to the same as your system time (mine is "America/New_York").
Dates <- c("2021-09-09T06:04:35.689Z", "2021-09-09T06:04:35.690Z", "2021-09-09T06:04:35.260Z", "2021-09-24T06:04:35.260Z")
Dates <- gsub("T", " ", Dates)
Dates <- gsub("Z", "", Dates)
Dates <- as.POSIXct(Dates, format = '%Y-%m-%d %H:%M:%OS', tz = "America/New_York")
yesterday <- Sys.time()-86400 #the number of seconds in one day
Now you can tell R to ignore the time and only compare the dates.
trunc(Dates, units = c("days")) == trunc(yesterday, units = c("days"))
The other part of your question was about filtering. The easiest way to filter is subsetting. You first ask R for the indices of the matching values in your vector (or column) by wrapping your comparison in the which() function.
Indices <- which(trunc(Dates, units = c("days")) == trunc(yesterday, units = c("days")))
None of the dates in your str() results match yesterday, so I added one at the end that does. Calling which() returns 4, which tells you that the fourth item in your vector matches yesterday's date. If more dates matched, it would return more values. I saved the results in Indices.
We can then use Indices to subset your vector or data frame.
Filtered_Dates <- Dates[Indices]
Filtered_Dataframe <- df[Indices,] #note the comma, which indicates that we are filtering rows instead of columns.
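As a side note, if you only care about the calendar date, a shorter alternative (a sketch, reusing the Dates vector from above) is to convert to Date and compare with yesterday directly:
yesterday <- Sys.Date() - 1 # a Date this time, not POSIXct
Filtered_Dates <- Dates[as.Date(Dates, tz = "America/New_York") == yesterday]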
I have a dataframe with timestamps of several transportations from A to B, plus information about the material (volume, weight, etc.).
I recreated the important parts of the raw Excel sheet I use.
My first step is to calculate the time each transport took by simply subtracting the dates, as I only need daily precision. I put all the times in a numeric vector to make further calculations and plots easy.
BUT:
I'd like to perform a regression analysis on it. I know how to create an lm.
My problem is that, due to several NAs, my numeric vector of "transport days" is shorter than the columns in the df.
How can I merge the columns from the df with my numeric vector so that the transport times match the corresponding materials again?
Are you looking for something like
library(dplyr)
df %>%
  mutate(diff = as.numeric(t4 - t1))
You then have a time-difference column while the volume column is still in the df. You can tell lm() how to deal with NAs anyway, so you don't need to drop them (and I don't think you were dropping them anyway).
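For example, a minimal sketch of an lm() call with an explicit NA policy (volume is a hypothetical predictor taken from your material columns; na.omit drops incomplete rows):
df2 <- df %>% mutate(diff = as.numeric(t4 - t1))
fit <- lm(diff ~ volume, data = df2, na.action = na.omit) # volume is assumed, adjust to your column
summary(fit)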
Consider the following simulation snippet:
k <- 1:5
x <- seq(0,10,length.out = 100)
dsts <- lapply(1:length(k), function(i) cbind(x = x, distri = dchisq(x, k[i]), i))
dsts <- do.call(rbind,dsts)
Why does this code throw an error (dsts is a matrix):
subset(dsts, i == 1)
#Error in subset.matrix(dsts, i == 1) : object 'i' not found
Even this one:
colnames(dsts)[3] <- 'iii'
subset(dsts, iii == 1)
But not this one (matrix coerced to a data.frame):
subset(as.data.frame(dsts), i == 1)
This one, however, works, because x is already defined in the surrounding environment:
subset(dsts, x > 500)
The error occurs in subset.matrix() on this line:
else if (!is.logical(subset))
Is this a bug that should be reported to R Core?
The behavior you are describing is by design and is documented on the ?subset help page.
From the help page:
For data frames, the subset argument works on the rows. Note that subset will be evaluated in the data frame, so columns can be referred to (by name) as variables in the expression (see the examples).
In R, data.frames and matrices are very different types of objects. If this is causing a problem, you are probably using the wrong data structure for your data. Matrices are really only necessary if you need matrix arithmetic. If you are thinking of your columns as different attributes for row observations, then you should be storing your data in a data.frame in the first place. You could store all your values in a simple vector where every three values represent one observation, but that would also be a poor choice of data structure for your data. I'm not sure if you were trying to be more efficient by choosing a matrix, but it seems like just the wrong choice.
A data.frame is stored as a named list, while a matrix is stored as a dimensioned vector. A list can be used as an environment, which makes it easy to evaluate variable names in that context. The biggest difference between the two is that data.frames can hold columns of different classes (numerics, characters, dates) while matrices can only hold values of exactly one data type. You cannot always convert between the two without a loss of information.
Things like $ only work with data.frames as well.
dd <- data.frame(x=1:10)
dd$x
mm <- matrix(1:10, ncol=1, dimnames=list(NULL, "x"))
mm$x # Error
If you want to subset a matrix, you are better off using standard [ subsetting rather than the subset() function.
dsts[dsts[, "i"] == 1, ]
This behavior has been a part of R for a very long time. Any change to it would likely break existing code that relies on variables being evaluated in a certain context. I think the problem lies with whoever told you to use a matrix in the first place: rather than cbind(), you should have used data.frame().
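For instance, here is a minimal sketch of the same simulation built as a data.frame from the start (only the cbind() call changes), after which subset() behaves as expected:
k <- 1:5
x <- seq(0, 10, length.out = 100)
# build each piece as a data.frame instead of a matrix
dsts <- do.call(rbind, lapply(seq_along(k), function(i)
  data.frame(x = x, distri = dchisq(x, k[i]), i = i)))
subset(dsts, i == 1) # works: i is found as a column of the data.frame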
A dummy column for a column c and a given value x equals 1 if c == x and 0 otherwise. Usually, when creating dummies for a column c, one excludes one value x of choice, as the last dummy column adds no information beyond the already existing dummy columns.
Here's how I'm trying to create a long list of dummies for a column firm, in a data.table:
values <- unique(myDataTable$firm)
cols <- paste('d', as.character(values[-1]), sep='_') # gives us nice d_value names for columns
# the [-1]: I arbitrarily do not create a dummy for the first unique value
myDataTable[, (cols) := lapply(values[-1], function(x) firm == x)]
This code reliably worked for previous columns, which had fewer unique values. firm, however, has more:
str(values)
num [1:3082] 51560090 51570615 51603870 51604677 51606085 ...
I get a warning when trying to add the columns:
Warning message:
truelength (6198) is greater than 1000 items over-allocated (length = 36). See ?truelength. If you didn't set the datatable.alloccol option very large, please report this to datatable-help including the result of sessionInfo().
As far as I can tell, all the columns I need are still there. Can I just ignore this issue? Will it slow down future computations? I'm not sure what to make of this, or of the relevance of truelength.
Taking Arun's comment as an answer.
You should use the alloc.col function to pre-allocate columns in your data.table, choosing a number larger than the expected ncol.
alloc.col(myDataTable, 3200)
Additionally, depending on how you consume the data, I would recommend considering reshaping your wide table into a long table, see EAV. Then you only need one column per data type.
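A rough sketch of such a wide-to-long reshape with data.table's melt() (the id column name here is hypothetical; the d_* columns are the dummies created above):
library(data.table)
long <- melt(myDataTable,
             id.vars = "id",                  # hypothetical key column
             measure.vars = patterns("^d_"),  # all dummy columns at once
             variable.name = "dummy",
             value.name = "flag")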