Eliminate the non-repeating dates in R

I have two zoo series, one of stock returns and the other of market returns. The thing is that my market return series contains holidays (for example, the 4th of July) and the stock series doesn't. I want to compare the dates of the two series and eliminate the dates that are not in my stock series, so that I end up with zoo series of the same length. Thank you very much in advance.
Best, Tom.

Perform a right join (or left join if the inputs are reversed):
merge(market, stock, all = c(FALSE, TRUE))
Next time please provide sample data.
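Since no sample data was given, here is a minimal sketch with two made-up zoo series standing in for market and stock:
library(zoo)
# market has an extra observation on a holiday (2023-07-04) that stock lacks
market <- zoo(c(0.010, 0.020, -0.010), as.Date(c("2023-07-03", "2023-07-04", "2023-07-05")))
stock  <- zoo(c(0.005, 0.015), as.Date(c("2023-07-03", "2023-07-05")))
# keep only the dates present in stock, so both series have the same length
merge(market, stock, all = c(FALSE, TRUE))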

Related

How to convert a weekly dataset into a time series in R

I have a weekly dataset that starts on 1986-01-03 and ends on 2022-10-07.
The problem arises when I forecast the time series with ARIMA + GARCH, because the date at T0 is wrong, i.e. it shows 1975.
The function that I used to convert the dataset into a time series is below; I think the problem is there, since it doesn't pick up the right dates.
FutureWeekly <- ts(WeeklyFuture$FutureWeekly, start = c(1986, 1), end = c(2022, 10), frequency = 52)
Does anyone know another way to convert a weekly dataset to a time series?
These are the first rows of my dataset; I then have to transform them into returns (diff(log(FutureWeekly))) to do the ARMA + GARCH.
Try this:
futures <- c(WeeklyFuture$FutureWeekly)  # convert the column to a plain vector
FutureWeekly <- ts(futures, start = c(1986, 1), frequency = 52)  # week 1 of 1986
One of the things ts() demands is a vector of values. Note also that start and end are given as c(year, period) pairs, not calendar dates, so with frequency = 52 the first week of 1986 is c(1986, 1); ts() fills in the rest from the length of the data.
Assuming you have full, unbroken weekly data for the entire period, these two things should solve the problem.
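If the calendar dates matter (as they usually do for return series), indexing by the actual dates with zoo avoids the frequency = 52 approximation altogether. A minimal sketch, assuming WeeklyFuture also has a Date column holding the week-ending dates (that column name is an assumption):
library(zoo)
# index the weekly prices by their real dates instead of a (year, week) pair
weekly_z <- zoo(WeeklyFuture$FutureWeekly, order.by = as.Date(WeeklyFuture$Date))
# log returns for the ARMA + GARCH step
weekly_ret <- diff(log(weekly_z))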

Quantmod - Chopping data and constructing a matrix of return series

I am having trouble with the R assignment I am working on this semester.
Here is the part of the task that I am confused about:
iv. Download 3 month TBill rate from Fred for the same sample period 01/01/1993 to 12/31/2013.
Useful Hints: You may have to chop the data to match the sample period.
v. Construct a matrix of return series combining Stock, S&P500, and TBill for the sample period.
Useful Hints:
Note that the rownames for the TBill may not match those of the other two return series, as the dates do not match, although the month and year do.
You have to construct the row names for each of the series as Year – Month format (e.g. 1993-01) or delete the rownames from T-bill before you can combine all three series into one Return matrix.
You have to convert the Return matrix to a dataframe before you use the lm() function.
I tried the call below, like I have used getSymbols before for SPY and AAPL, but it pulls the entire data set rather than the specific date range. How can I chop the data so it fits the desired date range?
getSymbols('TB3MS', src = 'FRED', from = "1993-01-01", to = "2013-12-31")
Next, how would I go about constructing the matrix of return series combining all of the stocks? Can anyone point me in the right direction?
Filtering an xts object: see examples in the xts documentation ?xts.
# filter 1993 until 2013
TB3MS["1993/2013"]
But these dates are off: the T-bill observations are dated the first day of the month, while the stock dates are the last day of the month. With coredata() you can extract the T-bill values and stick them into the other time series, provided the rows match.
Taking the data example from your previous question, you could do something like this (I'm using more steps than needed; you could combine a few statements into one):
# create monthly returns of the spy data and give the column a better name than monthly.returns
spy_returns <- monthlyReturn(SPY)
colnames(spy_returns) <- "SPY_returns"
# filter the tbill data
TB3MS_1993_2013 <- TB3MS["1993/2013"]
# add tbill data to spy data
spy_returns$TB3MS <- coredata(TB3MS_1993_2013)
Merging xts objects can just be done with merge. They will be merged on the dates.
merge(spy_returns, aapl_returns) would combine these two. If you have a lot of tickers, use Reduce (check help and SO on how to use Reduce with merge) but better would be to use the tidyquant package if allowed.
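A minimal sketch of the Reduce approach, assuming aapl_returns has been built the same way as spy_returns above (the object names are placeholders):
# merge any number of xts return series on their dates
returns <- Reduce(merge, list(spy_returns, aapl_returns))
# the assignment asks for a data frame before calling lm()
returns_df <- as.data.frame(returns)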

How to get monthly cross-sectional time series into zoo using R

I want to get a panel data set into zoo so that it captures both month and year. My data set looks like this, and the data can be downloaded from HERE.
The best I could do is:
dat <- read.csv("dat_lag.csv")
zdat <- read.zoo(dat, format = "%d/%m/%Y")
However, I could only do this by including column 1 (Date) and column 4 (Day) in my data set. Is there a cleverer way to get both month and year into zoo using R without including the Date and Day columns? Thanks in advance for any help.
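One possible sketch, assuming the file has separate Year and Month columns (the column names are assumptions, since the file itself is not shown here), is to build a yearmon index directly and skip the Date and Day columns:
library(zoo)
dat <- read.csv("dat_lag.csv")
# build a year-month index from the assumed Year and Month columns
idx <- as.yearmon(as.Date(paste(dat$Year, dat$Month, 1, sep = "-")))
# index the remaining value columns by year-month
zdat <- zoo(dat[, setdiff(names(dat), c("Date", "Day", "Year", "Month"))], order.by = idx)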

How do I make periods out of times in R?

I have 10 million+ data points which look like:
Identifier Times Data
6597104 2015-05-01 04:08:05 0.15512575543732
In order to study these I want to add a Period (1, 2, ...) column, so the oldest row with the 6597104 identifier is period 1, the second oldest is period 2, and so on. However, the times come irregularly, so I can't just make it a time series object.
Does anyone know how to do this? Thanks in advance.
Let's call your data frame data.
First sort it, oldest first, using
data <- data[order(data$Times), ]
Then add a new column called Period:
for (i in 1:nrow(data)) {
  data$Period[i] <- paste("Period", i, sep = " ")
}
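The loop above numbers every row consecutively; if the numbering should restart at 1 for each Identifier, as the question implies, a vectorised sketch with ave() would be (assuming the columns are named Identifier and Times as shown):
# sort oldest first, then number the rows within each Identifier
data <- data[order(data$Identifier, data$Times), ]
data$Period <- ave(seq_len(nrow(data)), data$Identifier, FUN = seq_along)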

Compute average over sliding time interval (7 days ago/later) in R

I've seen a lot of solutions for working with groups of times or dates, like using aggregate to sum daily observations into weekly observations, or other solutions to compute a moving average, but I haven't found a way to do what I want, which is to pluck relative dates out of data keyed by an additional variable.
I have daily sales data for a bunch of stores. So that is a data.frame with columns
store_id date sales
It's nearly complete, but there are some missing data points, and those missing data points are having a strong effect on our models (I suspect). So I used expand.grid to make sure we have a row for every store and every date, but at this point the sales data for those missing data points are NAs. I've found solutions like
dframe[is.na(dframe)] <- 0
or
dframe$sales[is.na(dframe$sales)] <- mean(dframe$sales, na.rm = TRUE)
but I'm not happy with the RHS of either of those. I want to replace missing sales data with our best estimate, and the best estimate of sales for a given store on a given date is the average of the sales 7 days prior and 7 days later. E.g. for Sunday the 8th, the average of Sunday the 1st and Sunday the 15th, because sales depend significantly on the day of the week.
So I guess I can use
dframe$sales[is.na(dframe$sales)] <- my_func(dframe)
where my_func(dframe) replaces every store's sales data with the average of that store's sales 7 days prior and 7 days later (ignoring, for the first go-around, the situation where one of those data points is also missing), but I have no idea how to write my_func in an efficient way.
How do I match up the store_id and the dates 7 days prior and future without using a terribly inefficient for loop? Preferably using only base R packages.
Something like:
dframe$sales <- with(
  dframe,
  ave(sales, store_id, FUN = function(x) {
    naw <- which(is.na(x))                               # rows where sales is missing
    x[naw] <- rowMeans(cbind(x[naw + 7], x[naw - 7]))    # mean of 7 days later and 7 days earlier
    x
  })
)
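Note that this sketch assumes dframe is sorted by date within each store_id (so that an offset of 7 rows really is 7 days) and that naw - 7 and naw + 7 stay within bounds; the first and last week of each store would need a guard in practice.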
