Merging three time series in R

library(qrmdata)
library(xts)
library(dplyr)
#load the data
data("EUR_USD")
data("SP500_const")
data("EURSTX_const")
#select stock and period
walmart <- data.frame(nr = SP500_const['2005-05-20/2015-05-19',"WMT"])
danone <- data.frame(nr = EURSTX_const['2005-05-20/2015-05-19',"BN.PA"])
exrate <- data.frame(nr = EUR_USD['2005-05-20/2015-05-19',])
#omit 'NA' entries
walmart <- na.omit(walmart)
danone <- na.omit(danone)
exrate <- na.omit(exrate)
I want to merge the three time series walmart, danone and exrate into one time series, but I only want those days in it for which I have data in all three time series.
I tried to merge danone and walmart first using
z <- merge(danone,walmart, join='inner')
which should merge danone and walmart (keeping only the days for which I have data in both), but it doesn't give me the output described above.

You can use inner_join() from the dplyr library:
walmart$date<-rownames(walmart)
danone$date<-rownames(danone)
exrate$date<-rownames(exrate)
a<-inner_join(inner_join(walmart, danone,by = "date" ), exrate, by = "date")
> head(a)
WMT date BN.PA EUR.USD
1 37.52 2005-05-20 24.8986 1.2561
2 38.05 2005-05-23 25.0496 1.2585
3 37.89 2005-05-24 25.1000 1.2586
4 37.61 2005-05-25 25.0832 1.2605
5 37.62 2005-05-26 25.3013 1.2513
6 37.59 2005-05-27 25.2006 1.2580
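A side note on the original merge() attempt: the join argument is only honoured while the objects are still xts; after the data.frame() conversion, merge() dispatches to merge.data.frame(), which has no join argument. A minimal sketch that stays in xts (same qrmdata objects as above, assuming merge.xts() with join = "inner" behaves as documented) would be:
library(qrmdata)
library(xts)
data("EUR_USD"); data("SP500_const"); data("EURSTX_const")
# keep each series as xts and drop its own NA days first
wmt <- na.omit(SP500_const['2005-05-20/2015-05-19', "WMT"])
bn  <- na.omit(EURSTX_const['2005-05-20/2015-05-19', "BN.PA"])
fx  <- na.omit(EUR_USD['2005-05-20/2015-05-19', ])
# an inner join keeps only the dates present in all three series
z <- merge(wmt, bn, fx, join = "inner")
head(z)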

Related

Series of correlation matrices in R

Given the following baby fragment:
d1=as.Date('April 26, 2001',format='%B %d, %Y')
d2=as.Date('April 27, 2001',format='%B %d, %Y')
d3=as.Date('April 28, 2001',format='%B %d, %Y')
tibble(DATE=c(d1,d1,d2,d2,d3,d3), Symbol=c("A","B","A","B","A","B"), voladj=c(0.2, 0.3, -0.2, -0.1, 0.3, 0.2))
resulting in
# A tibble: 6 x 3
DATE Symbol voladj
<date> <chr> <dbl>
1 2001-04-26 A 0.2
2 2001-04-26 B 0.3
3 2001-04-27 A -0.2
4 2001-04-27 B -0.1
5 2001-04-28 A 0.3
6 2001-04-28 B 0.2
I am trying to compute a series of correlation/covariance matrices: cor at time d2, cor at time d3, etc. Ideally the data is exponentially weighted. What options do I have in R? To make things a bit more spicy, a symbol C may show up at some point, too. I was thinking of computing the outer product (a rank-1 matrix) at times t1, t2, t3, and then using a simple moving mean.
A potential output could be the following:
DATE cov
<date>
1 2001-04-26 M1
2 2001-04-27 M2
3 2001-04-28 M3
where M_i are matrices (or frames), such as
M_1 =
        A    B
   A  1.0   c1
   B   c1  1.0
etc.
Obviously this gets more interesting once more symbols are involved.
Updated answer, given comments
Here is an approach using quantmod to retrieve 5 stocks for three weeks from Yahoo Finance. We combine the Close variable from the xts objects into a data frame, generate week identifiers with lubridate::week(), split() it by week, and calculate covariance matrices for each week using lapply().
library(quantmod)
from.dat <- as.Date("12/03/19",format="%m/%d/%y")
to.dat <- as.Date("12/24/19",format="%m/%d/%y")
theSymbols <- c("AAPL","AXP","BA","CAT","CSCO")
getSymbols(theSymbols,from=from.dat,to=to.dat,src="yahoo")
#combine to single data frame
combinedData <- data.frame(date = as.Date(rownames(as.data.frame(AAPL))),
AAPL$AAPL.Close,
AXP$AXP.Close,
BA$BA.Close,
CAT$CAT.Close,
CSCO$CSCO.Close)
colnames(combinedData) <- c("date","AAPL","AXP","BA","CAT","CSCO")
# split by week
library(lubridate)
combinedData$week <- week(combinedData$date)
symbolsByWeek <- split(combinedData,as.factor(combinedData$week))
covariances <- lapply(symbolsByWeek, function(x){
  cov(x[,-c(1,7)])
})
covariances[[1]]
...and the output:
> covariances[[1]]
AAPL AXP BA CAT CSCO
AAPL 19.4962156 7.0959976 3.9093027 5.4158116 -0.66194433
AXP 7.0959976 3.0026695 2.0175793 2.2569625 -0.18793832
BA 3.9093027 2.0175793 10.4511473 1.8555752 0.55619975
CAT 5.4158116 2.2569625 1.8555752 1.8335361 -0.11141911
CSCO -0.6619443 -0.1879383 0.5561997 -0.1114191 0.07287982
>
Original answer
Here is an approach using quantmod to retrieve Dow 30 data for four days from Yahoo Finance, apply() and do.call() with rbind() to massage it into a single data frame, and split() to split by day to produce daily covariance matrices.
library(quantmod)
from.dat <- as.Date("12/02/19",format="%m/%d/%y")
to.dat <- as.Date("12/06/19",format="%m/%d/%y")
theSymbols <- c("AAPL","AXP","BA","CAT","CSCO","CVX","XOM","GS","HD","IBM",
"INTC","JNJ","KO","JPM","MCD","MMM","MRK","MSFT","NKE","PFE","PG",
"TRV","UNH","UTX","VZ","V","WBA","WMT","DIS","DOW")
getSymbols(theSymbols,from=from.dat,to=to.dat,src="yahoo")
# since quantmod::getSymbols() writes named xts objects, need to use
# get() with the symbol names to access each data frame
# e.g. head(get(theSymbols[[1]]))
# convert to list
symbolData <- lapply(theSymbols, function(x){
  y <- as.data.frame(get(x))
  colnames(y) <- c("open","high","low","close","volume","adjusted")
  # add date and symbol name to output data frames
  y$date <- rownames(y)
  y$symbol <- x
  y
})
#combine to single data frame
combinedData <- do.call(rbind,symbolData)
# split by day
symbolsByDay <- split(combinedData,as.factor(combinedData$date))
covariances <- lapply(symbolsByDay, function(x){
  cov(x[,1:6]) # only use first 6 columns
})
# print first covariance matrix
covariances[1]
...and the output:
> covariances[1]
$`2019-12-02`
open high low close volume adjusted
open 5956.289 5962.359 5811.514 5818.225 -9.274871e+07 5809.939
high 5962.359 5968.557 5817.580 5824.272 -9.314473e+07 5816.005
low 5811.514 5817.580 5671.809 5678.470 -9.188418e+07 5670.276
close 5818.225 5824.272 5678.470 5685.467 -9.155485e+07 5677.246
volume -92748711.735 -93144729.578 -91884178.312 -91554853.356 4.365841e+13 -90986549.261
adjusted 5809.939 5816.005 5670.276 5677.246 -9.098655e+07 5669.171
>
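One thing neither answer covers is the exponential weighting mentioned in the question. Here is a rough sketch of the outer-product idea described there (hypothetical helper ewma_cov(); ret is assumed to be a numeric matrix of returns with dates in rows and symbols in columns, and lambda = 0.94 is an arbitrary RiskMetrics-style decay choice):
ewma_cov <- function(ret, lambda = 0.94) {
  # recursive update: S_t = lambda * S_{t-1} + (1 - lambda) * r_t r_t'
  S <- cov(ret)                  # initialise with the full-sample covariance (a convenience, not essential)
  out <- vector("list", nrow(ret))
  for (t in seq_len(nrow(ret))) {
    r <- as.numeric(ret[t, ])    # cross-section of returns at date t
    S <- lambda * S + (1 - lambda) * tcrossprod(r)  # rank-1 outer-product update
    out[[t]] <- S
  }
  names(out) <- rownames(ret)
  out                            # one covariance matrix per date
}
A new symbol appearing mid-sample would need the return matrix to be padded (e.g. with zeros or NA handling) before calling this, which the sketch does not attempt.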

Split Quarterly data into monthly data using R

Please see the sample data below.
I want to convert the quarterly sale data (with a start date and end date) into monthly sale data.
For example:
Data set A, Row 1 will be split into Data set B, Rows 1, 2 and 3 for June, July and August separately; the sale is pro rata based on the number of days in each month, and all other columns stay the same;
Data set A, Row 2 will pick up where Row 1 left off (it ends on 5/09/2017) to form a complete September.
Is there an efficient way to do this? The actual data is a CSV file of roughly 100K x 15, which will be split into a new data set of approximately 300K x 15 for monthly analysis.
Some key characteristics of the sample data:
The start day of the first quarterly sales record is the day the customer joins, so it could be any day;
All sales are quarterly, with lengths varying between 90, 91 and 92 days, but it is also possible to have incomplete quarterly sales data when a customer leaves during the quarter.
Sample Question:
Customer.ID Country Type Sale Start..Date End.Date Days
1 1 US Commercial 91 7/06/2017 5/09/2017 91
2 1 US Commercial 92 6/09/2017 6/12/2017 92
3 2 US Casual 25 10/07/2017 3/08/2017 25
4 3 UK Commercial 64 7/06/2017 9/08/2017 64
Sample Answer:
Customer.ID Country Type Sale Start.Date End.Date Days
1 1 US Commercial 24 7/06/2017 30/06/2017 24
2 1 US Commercial 31 1/07/2017 31/07/2017 31
3 1 US Commercial 31 1/08/2017 31/08/2017 31
4 1 US Commercial 30 1/09/2017 30/09/2017 30
5 1 US Commercial 31 1/10/2017 31/10/2017 31
6 1 US Commercial 30 1/11/2017 30/11/2017 30
7 1 US Commercial 6 1/12/2017 6/12/2017 6
8 2 US Casual 22 10/07/2017 31/07/2017 22
9 2 US Casual 3 1/08/2017 3/08/2017 3
10 3 UK Commercial 24 7/06/2017 30/06/2017 24
11 3 UK Commercial 31 1/07/2017 31/07/2017 31
12 3 UK Commercial 9 1/08/2017 9/08/2017 9
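For reference, the pro-rata rule above is just sale * (days of the month falling in the period) / (total days in the period). A quick sanity check on Row 1 of the sample question, using base R only (dates hard-coded here purely for illustration):
# Row 1: sale of 91 spread over 7/06/2017 - 5/09/2017 (91 days, inclusive)
period <- seq(as.Date("2017-06-07"), as.Date("2017-09-05"), by = "day")
days_per_month <- table(format(period, "%Y-%m"))   # 24, 31, 31, 5 days
91 * days_per_month / length(period)                # pro-rata sale per month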
I just ran CIAndrews' code. It seems to work for the most part, but it is very slow when run on a dataset with 10,000 rows. I eventually cancelled the execution after a few minutes of waiting. There's also an issue with the number of days: For example, July has 31 days, but the days variable only shows thirty. It's true that 31-1 = 30, but the first day should be counted as well.
The code below only takes about 21 seconds on my 2015 MacBook Pro (not including data generation), and takes care of the other problem, too.
library(tidyverse)
library(lubridate)
# generate data -------------------------------------------------------------
set.seed(666)
# assign variables
customer <- sample.int(n = 2000, size = 10000, replace = T)
country <- sample(c("US", "UK", "DE", "FR", "IS"), 10000, replace = T)
type <- sample(c("commercial", "casual", "other"), 10000, replace = T)
start <- sample(seq(dmy("7/06/2011"), today(), by = "day"), 10000, replace = T)
days <- sample(85:105, 10000, replace = T)
end <- start + days
sale <- sample(500:3000, 10000, replace = T)
# generate dataframe of artificial data
df_quarterly <- tibble(customer, country, type, sale, start, end, days)
# split quarters into months ----------------------------------------------
# initialize empty list with length == nrow(dataframe)
list_date_dfs <- vector(mode = "list", length = nrow(df_quarterly))
# for-loop generates new dates and adds as dataframe to list
for (i in 1:length(list_date_dfs)) {
  # transfer dataframe row to variable `row`
  row <- df_quarterly[i,]
  # correct end date so split successful when interval doesn't cover full month
  end_corr <- row$end + day(row$start) - day(row$end)
  # use lubridate to compute first and last days of relevant months
  m_start <- seq(row$start, end_corr, by = "month") %>%
    floor_date(unit = "month")
  m_end <- m_start + days_in_month(m_start) - 1
  # replace first and last elements with original dates
  m_start[1] <- row$start
  m_end[length(m_end)] <- row$end
  # compute the number of days per month as well as sales per month
  # correct difference by adding 1
  m_days <- as.integer(m_end - m_start) + 1
  m_sale <- (row$sale / sum(m_days)) * m_days
  # add tibble to list
  list_date_dfs[[i]] <- tibble(customer = row$customer,
                               country = row$country,
                               type = row$type,
                               sale = m_sale,
                               start = m_start,
                               end = m_end,
                               days = m_days)
}
# bind dataframe list elements into single dataframe
df_monthly <- bind_rows(list_date_dfs)
It's not pretty, as it uses multiple functions and loops and consists of several operations:
# Creating the dataset
library(tidyr)
customer <- c(1,1,2,3)
country <- c("US","US","US","UK")
type <- c("Commercial","Commercial","Casual","Commercial")
sale <- c(91,92,25,64)
Start <- as.Date(c("7/06/2017","6/09/2017","10/07/2017","7/06/2017"),"%d/%m/%Y")
Finish <- as.Date(c("5/09/2017","6/12/2017","3/08/2017","9/08/2017"),"%d/%m/%Y")
days <- c(91,92,25,64)
df <- data.frame(customer,country, type,sale, Start,Finish,days)
# Function to split per month
library(zoo)
addrowFun <- function(y){
  temp <- do.call("rbind", by(y, 1:nrow(y), function(x) with(x, {
    eom <- as.Date(as.yearmon(Start), frac = 1)
    if (eom < Finish)
      data.frame(customer, country, type, Start = c(Start, eom+1), Finish = c(eom, Finish))
    else x
  })))
  return(temp)
}
loop <- df
for(i in 1:10){ #not all months are split up at once
loop <- addrowFun(loop)
}
# Calculating the days per month
loop$days <- as.numeric(difftime(loop$Finish,loop$Start, units="days"))
# Creating the function to get the monthly sales pro rata
sumFun <- function(x){
  tempSum <- df[x$Start >= df$Start & x$Finish <= df$Finish & df$customer == x$customer,]
  totalSale <- sum(tempSum$sale)
  totalDays <- sum(tempSum$days)
  return(x$days / totalDays * totalSale)
}
for(i in 1:length(loop$customer)){
  loop$sale[i] <- sumFun(loop[i,])
}
loop
CiAndrews,
Thanks for the help and patience. I have managed to get the answer with a small change: I replaced "rbind" with "rbind.fill" from the "plyr" package, and everything runs smoothly after that.
Please see the head of sample2.csv below
customer country type sale Start Finish days
1 43108181108 US Commercial 3330 17/11/2016 24/02/2017 99
2 43108181108 US Commercial 2753 24/02/2017 23/05/2017 88
3 43108181108 US Commercial 3043 13/02/2018 18/05/2018 94
4 43108181108 US Commercial 4261 23/05/2017 18/08/2017 87
5 43103703637 UK Casual 881 4/11/2016 15/02/2017 103
6 43103703637 UK Casual 1172 26/07/2018 1/11/2018 98
Please see the code below:
library(tidyr)
#read data and change the start and finish to data type
data <- read.csv("Sample2.csv")
data$Start <- as.Date(data$Start, "%d/%m/%Y")
data$Finish <- as.Date(data$Finish, "%d/%m/%Y")
customer <- data$customer
country <- data$country
days <- data$days
Finish <- data$Finish
Start <- data$Start
sale <- data$sale
type <- data$type
df <- data.frame(customer, country, type, sale, Start, Finish, days)
# Function to split per month
library(zoo)
library(plyr)
addrowFun <- function(y){
  temp <- do.call("rbind.fill", by(y, 1:nrow(y), function(x) with(x, {
    eom <- as.Date(as.yearmon(Start), frac = 1)
    if (eom < Finish)
      data.frame(customer, country, type, Start = c(Start, eom+1), Finish = c(eom, Finish))
    else x
  })))
  return(temp)
}
loop <- df
for(i in 1:10){ #not all months are split up at once
loop <- addrowFun(loop)
}
# Calculating the days per month
loop$days <- as.numeric(difftime(loop$Finish,loop$Start, units="days"))
# Creating the function to get the monthly sales pro rata
sumFun <- function(x){
  tempSum <- df[x$Start >= df$Start & x$Finish <= df$Finish & df$customer == x$customer,]
  totalSale <- sum(tempSum$sale)
  totalDays <- sum(tempSum$days)
  return(x$days / totalDays * totalSale)
}
for(i in 1:length(loop$customer)){
  loop$sale[i] <- sumFun(loop[i,])
}
loop
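As a side note on why that swap helps (my reading of the difference between the two functions, not something stated in the thread): base rbind() requires every piece to have exactly the same columns, while plyr::rbind.fill() pads missing columns with NA. A tiny illustration:
library(plyr)
a <- data.frame(x = 1, y = 2)
b <- data.frame(x = 3)      # no column y
# rbind(a, b) would fail here because the columns differ
rbind.fill(a, b)            # binds the rows and fills the missing y with NA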

Create 10,000 date data.frames with fake years based on a 365-day window

Here is my time period range:
start_day = as.Date('1974-01-01', format = '%Y-%m-%d')
end_day = as.Date('2014-12-21', format = '%Y-%m-%d')
df = as.data.frame(seq(from = start_day, to = end_day, by = 'day'))
colnames(df) = 'date'
I need to create 10,000 data.frames with different fake years of 365 days each. This means that each of the 10,000 data.frames needs to have a different start and end of year.
In total df has 14,965 days which, divided by 365 days, gives 41 years. In other words, df needs to be grouped in 10,000 different ways into 41 years (of 365 days each).
The start of each year has to be random, so it can be 1974-10-03, 1974-08-30, 1976-01-03, etc., and the remaining dates at the end of df need to be recycled back to the starting ones.
The grouped fake years need to appear in a third column of the data.frames.
I would put all the data.frames into a list, but I don't know how to create the function that generates 10,000 different year start dates and then groups each data.frame into 41 years using a 365-day window.
Can anyone help me?
@gringer gave a good answer, but it solved only 90% of the problem:
dates.df <- data.frame(replicate(10000, seq(sample(df$date, 1),
length.out=365, by="day"),
simplify=FALSE))
colnames(dates.df) <- 1:10000
What I need is 10,000 columns with 14,965 rows of dates taken from df, recycled once the end of df is reached.
I tried changing length.out = 14965, but R does not recycle the dates.
Another option could be to use length.out = 1 and then append the remaining rows of df to each column while keeping the same order:
dates.df <- data.frame(replicate(10000, seq(sample(df$date, 1),
length.out=1, by="day"),
simplify=FALSE))
colnames(dates.df) <- 1:10000
How can I add the remaining df rows to each col?
The seq method also works if the to argument is unspecified, so it can be used to generate a specific number of days starting at a particular date:
> seq(from=df$date[20], length.out=10, by="day")
[1] "1974-01-20" "1974-01-21" "1974-01-22" "1974-01-23" "1974-01-24"
[6] "1974-01-25" "1974-01-26" "1974-01-27" "1974-01-28" "1974-01-29"
When used in combination with replicate and sample, I think this will give what you want in a list:
> replicate(2,seq(sample(df$date, 1), length.out=10, by="day"), simplify=FALSE)
[[1]]
[1] "1985-07-24" "1985-07-25" "1985-07-26" "1985-07-27" "1985-07-28"
[6] "1985-07-29" "1985-07-30" "1985-07-31" "1985-08-01" "1985-08-02"
[[2]]
[1] "2012-10-13" "2012-10-14" "2012-10-15" "2012-10-16" "2012-10-17"
[6] "2012-10-18" "2012-10-19" "2012-10-20" "2012-10-21" "2012-10-22"
Without the simplify=FALSE argument, it produces an array of integers (i.e. R's internal representation of dates), which is a bit trickier to convert back to dates. A slightly more convoluted way to do this and produce Date output is to use data.frame on the unsimplified replicate result. Here's an example that will produce a 10,000-column data frame with 365 dates in each column (takes about 5s to generate on my computer):
dates.df <- data.frame(replicate(10000, seq(sample(df$date, 1),
length.out=365, by="day"),
simplify=FALSE));
colnames(dates.df) <- 1:10000;
> dates.df[1:5,1:5];
1 2 3 4 5
1 1988-09-06 1996-05-30 1987-07-09 1974-01-15 1992-03-07
2 1988-09-07 1996-05-31 1987-07-10 1974-01-16 1992-03-08
3 1988-09-08 1996-06-01 1987-07-11 1974-01-17 1992-03-09
4 1988-09-09 1996-06-02 1987-07-12 1974-01-18 1992-03-10
5 1988-09-10 1996-06-03 1987-07-13 1974-01-19 1992-03-11
To get the date wraparound working, a slight modification can be made to the original data frame, pasting a copy of itself on the end:
df <- as.data.frame(c(seq(from = start_day, to = end_day, by = 'day'),
seq(from = start_day, to = end_day, by = 'day')));
colnames(df) <- "date";
This is easier to code for downstream; the alternative would be a double seq for each result column, with additional calculations for the start/end and if statements to deal with boundary cases.
Now, instead of doing date arithmetic, the result columns are subset from the doubled data frame (where the arithmetic is already done), starting with a date in the first half of the frame and taking the next 14,965 values. I'm using nrow(df)/2 for more generic code:
dates.df <-
  as.data.frame(lapply(sample.int(nrow(df)/2, 10000),
                       function(startPos){
                         df$date[startPos:(startPos+nrow(df)/2-1)];
                       }));
colnames(dates.df) <- 1:10000;
>dates.df[c(1:5,(nrow(dates.df)-5):nrow(dates.df)),1:5];
1 2 3 4 5
1 1988-10-21 1999-10-18 2009-04-06 2009-01-08 1988-12-28
2 1988-10-22 1999-10-19 2009-04-07 2009-01-09 1988-12-29
3 1988-10-23 1999-10-20 2009-04-08 2009-01-10 1988-12-30
4 1988-10-24 1999-10-21 2009-04-09 2009-01-11 1988-12-31
5 1988-10-25 1999-10-22 2009-04-10 2009-01-12 1989-01-01
14960 1988-10-15 1999-10-12 2009-03-31 2009-01-02 1988-12-22
14961 1988-10-16 1999-10-13 2009-04-01 2009-01-03 1988-12-23
14962 1988-10-17 1999-10-14 2009-04-02 2009-01-04 1988-12-24
14963 1988-10-18 1999-10-15 2009-04-03 2009-01-05 1988-12-25
14964 1988-10-19 1999-10-16 2009-04-04 2009-01-06 1988-12-26
14965 1988-10-20 1999-10-17 2009-04-05 2009-01-07 1988-12-27
This takes a bit less time now, presumably because the date values have been pre-calculated.
Try this one, using subsetting instead:
start_day = as.Date('1974-01-01', format = '%Y-%m-%d')
end_day = as.Date('2014-12-21', format = '%Y-%m-%d')
date_vec <- seq.Date(from=start_day, to=end_day, by="day")
Now, I create a vector long enough so that I can use easy subsetting later on:
date_vec2 <- rep(date_vec,2)
Now, create the random start dates for 100 instances (replace this with 10000 for your application):
random_starts <- sample(1:14965, 100)
Now, create a list of dates by simply subsetting date_vec2 with your desired length:
dates <- lapply(random_starts, function(x) date_vec2[x:(x+14964)])
date_df <- data.frame(dates)
names(date_df) <- 1:100
date_df[1:5,1:5]
1 2 3 4 5
1 1997-05-05 2011-12-10 1978-11-11 1980-09-16 1989-07-24
2 1997-05-06 2011-12-11 1978-11-12 1980-09-17 1989-07-25
3 1997-05-07 2011-12-12 1978-11-13 1980-09-18 1989-07-26
4 1997-05-08 2011-12-13 1978-11-14 1980-09-19 1989-07-27
5 1997-05-09 2011-12-14 1978-11-15 1980-09-20 1989-07-28
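Two small additions, offered as hypothetical sketches rather than as part of the answers above: the wraparound can also be done with modular indexing instead of doubling the vector, and the fake-year grouping column the question asks for can be added with rep() (14,965 / 365 = 41 groups exactly):
# wrap indices around the original 14,965-day vector instead of doubling it
n <- length(date_vec)
random_starts <- sample.int(n, 100)            # use 10000 for the full problem
wrapped <- lapply(random_starts, function(s) {
  idx <- ((s:(s + n - 1)) - 1) %% n + 1        # 1-based wraparound indices
  date_vec[idx]
})
names(wrapped) <- paste0("col", seq_along(wrapped))
date_df <- as.data.frame(wrapped)
# fake-year label for the third column: 41 years of 365 days each
fake_year <- rep(seq_len(ceiling(n / 365)), each = 365, length.out = n)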

Using lapply to output values between date ranges within different factor levels

I have 2 dataframes, one representing daily sales figures of different stores (df1) and one representing when each store has been audited (df2). I need to create a new dataframe displaying sales information from each site for the week leading up to each audit (i.e. the audit dates in df2). Some example data, first for the daily sales figures from different stores across a certain period:
Dates <- as.data.frame(seq(as.Date("2015/12/30"), as.Date("2016/4/7"),"day"))
Sales <- as.data.frame(matrix(sample(0:50, 30*10, replace=TRUE), ncol=3))
df1 <- cbind(Dates,Sales)
colnames(df1) <- c("Dates","Site.A","Site.B","Site.C")
And for the dates of each audit across different stores:
Store<- c("Store.A","Store.A","Store.B","Store.C","Store.C")
Audit_Dates <- as.data.frame(as.POSIXct(c("2016/1/4","2016/3/1","2016/2/1","2016/2/1","2016/3/1")))
df2 <- as.data.frame(cbind(Store,Audit_Dates ))
colnames(df2) <- c("Store","Audit_Dates")
Of note is that there will be an uneven number of dates within each output (i.e. there may not be a full week's worth of information prior to some store audits). I have previously asked a question addressing a similar problem, Creating a dataframe from an lapply function with different numbers of rows. Below is an answer from that question which would work if I were only considering information from one store:
library(lubridate)
##Data input
Store.A_Dates <- as.data.frame(seq(as.Date("2015/12/30"), as.Date("2016/4/7"),"day"))
Store.A_Sales <- as.data.frame(matrix(sample(0:50, 10*10, replace=TRUE), ncol=1))
Store.A_df1 <- cbind(Store.A_Dates,Store.A_Sales)
colnames(Store.A_df1) <- c("Store.A_Dates","Store.A_Sales")
Store.A_df2 <- as.Date(c("2016/1/3","2016/3/1"))
##Output
Store.A_output<- lapply(Store.A_df2, function(x) {Store.A_df1[difftime(Store.A_df1[,1], x - days(7)) >= 0 & difftime(Store.A_df1[,1], x) <= 0, ]})
n1 <- max(sapply(Store.A_output, nrow))
output <- data.frame(lapply(Store.A_output, function(x) x[seq_len(n1),]))
But I don't know how I would get this for multiple sites.
Try this:
# Renamed vars for my convenience...
colnames(df1) <- c("t","Store.A","Store.B","Store.C")
colnames(df2) <- c("Store","t")
library(tidyr)
library(dplyr)
# Gather df1 so that df1 and df2 have the same format:
df1 = gather(df1, Store, Sales, -t)
head(df1)
t Store Sales
1 2015-12-30 Store.A 16
2 2015-12-31 Store.A 24
3 2016-01-01 Store.A 8
4 2016-01-02 Store.A 42
5 2016-01-03 Store.A 7
6 2016-01-04 Store.A 46
# This lapply call does not iterate over actual values, just indexes, which allows
# you to subset the data comfortably:
r <- lapply(1:nrow(df2), function(i) {
  audit.t = df2[i, "t"]                        # time of audit
  audit.s = df1[, "Store"] == df2[i, "Store"]  # store audited
  df = df1[audit.s, ]                          # data from audited store
  df[, "audited"] = audit.t                    # add extra column with audit date
  week_before = difftime(df[, "t"], audit.t - (7*24*3600)) >= 0
  week_audit  = difftime(df[, "t"], audit.t) <= 0
  df[week_before & week_audit, ]
})
Does this give you the proper subsets?
Also, to summarise your results:
r = do.call("rbind", r) %>%
group_by(audited, Store) %>%
summarise(sales = sum(Sales))
r
audited Store sales
<time> <chr> <int>
1 2016-01-04 Store.A 97
2 2016-02-01 Store.B 156
3 2016-02-01 Store.C 226
4 2016-03-01 Store.A 115
5 2016-03-01 Store.C 187

How do I calculate a monthly rate of change from a daily time series in R?

I'm beginning to get my feet wet with R, and I'm brand new to time series concepts. Can anyone point me in the right direction to calculate a monthly % change, based on a daily data point? I want the change between the first and last data points of each month. For example:
tseries data:
1/1/2000 10.00
...
1/31/2000 10.10
2/1/2000 10.20
...
2/28/2000 11.00
I'm looking for a return data frame of the form:
1/31/2000 .01
2/28/2000 .0784
Ideally, I'd be able to calculate from the endpoint of the prior month to the endpoint of current month, but I'm supposing partitioning by month is easier as a starting point. I'm looking at packages zoo and xts, but am still stuck. Any takers? Thanks...
Here's one way to do it using plyr and ddply.
I use ddply sequentially, first to get the first and last rows of each month, and again to calculate the monthlyReturn.
(Perhaps using xts or zoo might be easier, I am not sure.)
#Using plyr and the data in df
df$Date <- as.POSIXlt(as.Date(df$Date, "%m/%d/%Y"))
df$Month <- (df$Date$mon + 1) #0 = January
sdf <- df[,-1] #drop the Date Column, ddply doesn't like it
library("plyr")
#this function is called with 2 row data frames
monthlyReturn <- function(df) {
  (df$Value[2] - df$Value[1]) / (df$Value[1])
}
adf <- ddply(sdf, .(Month), function(x) x[c(1, nrow(x)), ]) #get first and last values for each Month
mon.returns <- ddply(adf, .(Month), monthlyReturn)
Here's the data I used to test it out:
> df
Date Value
1 1/1/2000 10.0
2 1/31/2000 10.1
3 2/1/2000 10.2
4 2/28/2000 11.0
5 3/1/2000 10.0
6 3/31/2000 24.1
7 5/10/2000 510.0
8 5/22/2000 522.0
9 6/04/2000 604.0
10 7/03/2000 10.1
11 7/30/2000 7.2
12 12/28/2000 11.0
13 12/30/2000 3.0
> mon.returns
Month V1
1 1 0.01000000
2 2 0.07843137
3 3 1.41000000
4 5 0.02352941
5 6 0.00000000
6 7 -0.28712871
7 12 -0.72727273
Hope that helps.
Here is another way to do this (using the quantmod package):
This calculates the monthly return from the daily price of AAPL.
library(quantmod) # load the quantmod package
getSymbols("AAPL") # download daily price for stock AAPL
monthlyReturn = periodReturn(AAPL,period="monthly")
monthlyReturn2014 = periodReturn(AAPL,period="monthly",subset='2014:') # for 2014
This is a pretty old thread, but for reference, here is a data.table solution using the same data as @Ram:
df <- structure(list(Date = structure(c(10957, 10987, 10988, 11015, 11017, 11047,
                                        11087, 11099, 11112, 11141, 11168, 11319, 11321),
                                      class = "Date"),
                     Value = c(10, 10.1, 10.2, 11, 10, 24.1, 510, 522, 604, 10.1, 7.2, 11, 3)),
                row.names = c(NA, -13L), class = "data.frame")
It's essentially a one-liner that uses the data.table::month function:
library(data.table)
setDT(df)[ , diff(Value) / Value[1], by= .(month(Date))]
This will produce the change relative to the first recorded day in each month. If the change relative to the last day is preferred, then the expression in the middle should be changed to diff(Value) / Value[2].
1) No packages. Try this:
DF <- read.table(text = Lines)
fmt <- "%m/%d/%Y"
ym <- format(as.Date(DF$V1, format = fmt), "%Y-%m")
ret <- function(x) diff(range(x))/x[1]
ag <- aggregate(V2 ~ ym, DF, ret)
giving:
> ag
ym V2
1 2000-01 0.01000000
2 2000-02 0.07843137
We could convert this to "ts" class, if desired. Assuming no missing months:
ts(ag$V2, start = 2000, freq = 12)
giving:
Jan Feb
2000 0.01000000 0.07843137
2) It's a bit easier if you use the zoo or xts time series packages. fmt and ret are from above:
library(zoo)
z <- read.zoo(text = Lines, format = fmt)
z.ret <- aggregate(z, as.yearmon, ret)
giving:
> z.ret
Jan 2000 Feb 2000
0.01000000 0.07843137
If you already have a data.frame DF then the read.zoo statement could be replaced with z <- read.zoo(DF, format = fmt) or omit the format arg if the first column is of "Date" class.
If "ts" class were desired then use as.ts(z.ret)
Note: The input Lines is:
Lines <- "1/1/2000 10.00
1/31/2000 10.10
2/1/2000 10.20
2/28/2000 11.00"
The ROC function in the TTR package will do this. You can use to.monthly() or endpoints() (see "From daily time series to weekly time series in R xts object") first if you will only be looking at monthly behaviour.
library(TTR)
# data.monthly <- to.monthly( data, indexAt='periodEnd' ) # if OHLC data
# OR
data.monthly <- data[ endpoints(data, on="months", k=1), ]
data.roc <- ROC(data.monthly, n = 1, type = "discrete")
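To tie this back to the toy data in the question, here is a minimal sketch (assuming xts input, and that what you want is the change from one month-end to the next, the "ideal" case mentioned in the question):
library(xts)
library(TTR)
prices <- xts(c(10.00, 10.10, 10.20, 11.00),
              order.by = as.Date(c("2000-01-01", "2000-01-31",
                                   "2000-02-01", "2000-02-28")))
month_end <- prices[endpoints(prices, on = "months")]  # last observation of each month
ROC(month_end, n = 1, type = "discrete")
# the first value is NA; the February value is (11.00 - 10.10) / 10.10,
# i.e. the change from the January month-end to the February month-end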
