Weekends in a Month in R

I am trying to prepare an xreg series for my ARIMA model, and I will use the number of weekends in each month for it. I can find results for a single year, but when the span is longer than a year (as it usually is), I couldn't find a way. Here is what I have so far:
dates <- seq(from=as.Date("2001-01-01"), to=as.Date("2010-12-31"), by = "day")
wd <- weekdays(dates)
aylar <- months(dates[which(wd == "Sunday" | wd == "Saturday")])
table(aylar)
What I want is to gather all the months' weekends grouped not only by month but also by year, so that I get a series of the same length as my original forecast series.

Here is my solution:
library(chron)
library(dplyr)
library(lubridate)
# weekend dates and the month each one falls in
month <- months(dates[chron::is.weekend(dates)])
day <- dates[chron::is.weekend(dates)]
# create data.frame with one row per weekend day
df <- data.frame(date = day, month = month, year = chron::years(day))
# two weekend days make one weekend
df %>% group_by(year, month) %>% summarize(weekends = floor(n()/2))
# year month weekends
# <dbl> <fctr> <dbl>
#1 2001 April 4
#2 2001 August 4
#3 2001 December 5
#4 2001 February 4
#5 2001 January 4
#6 2001 July 4
#7 2001 June 4
#8 2001 March 4
#9 2001 May 4
#10 2001 November 4
## ... with 110 more rows
I hope this is a starting point for your work.
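If you prefer to avoid chron, here is a minimal sketch of the same idea using only lubridate and dplyr; it counts weekend days (Saturday plus Sunday) per year/month directly, so divide by two if you want whole weekends:
library(dplyr)
library(lubridate)
dates <- seq(from = as.Date("2001-01-01"), to = as.Date("2010-12-31"), by = "day")
# one row per year/month with the number of weekend days
data.frame(date = dates) %>%
  filter(wday(date, week_start = 1) >= 6) %>%  # 6 = Saturday, 7 = Sunday
  group_by(year = year(date), month = month(date)) %>%
  summarize(weekend_days = n(), .groups = "drop")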

Related

R Calculate change in Weekly values Year on Year (with additional complication)

I have a data set of daily values. It spans from Dec 1, 2018 to April 1, 2020.
The columns are "date" and "value". As shown here:
date <- c("2018-12-01","2000-12-02", "2000-12-03",
...
"2020-03-30","2020-03-31","2020-04-01")
value <- c(1592, 1825, 1769, 1909, 2022, ..., 2287, 2169, 2366, 2001, 2087, 2099, 2258)
df <- data.frame(date,value)
What I would like to do is sum the values by week and then calculate the week-over-week change from the current to the previous year.
I know that I can sum by week using the following function:
Data_week <- df %>% group_by(week = cut(as.Date(date), "week")) %>% mutate(summed = sum(value))
My questions are twofold:
1) How do I sum by week and then manipulate the data frame so that I can calculate the week-over-week change (e.g. the week of Dec 1, 2019 vs. the week of Dec 1, 2018)?
2) How can I do the above using a "customized" week? Let's say I want to define a week as moving 7 days back from the latest date I have data for, e.g. the latest week would start on March 26th (April 1st minus 7 days).
We can use lag from dplyr to help and also some convenience functions from lubridate.
library(dplyr)
library(lubridate)
df %>%
  mutate(year = year(date)) %>%
  group_by(week = week(date), year) %>%
  summarize(summed = sum(value)) %>%
  arrange(year, week) %>%
  ungroup() %>%
  mutate(change = summed - lag(summed))
# week year summed change
# <dbl> <dbl> <dbl> <dbl>
# 1 48 2018 3638. NA
# 2 49 2018 15316. 11678.
# 3 50 2018 13283. -2033.
# 4 51 2018 15166. 1883.
# 5 52 2018 12885. -2281.
# 6 53 2018 1982. -10903.
# 7 1 2019 14177. 12195.
# 8 2 2019 14969. 791.
# 9 3 2019 14554. -415.
#10 4 2019 12850. -1704.
#11 5 2019 1907. -10943.
If you would like to define "weeks" in different ways, there are also isoweek and epiweek. See this answer for a great explanation of your options.
Data
set.seed(1)
df <- data.frame(date = seq.Date(from = as.Date("2018-12-01"), to = as.Date("2019-01-29"), "days"), value = runif(60,1500,2500))
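For the second part of the question (weeks counted back in 7-day blocks from the latest date in the data), here is a hedged sketch, assuming df has a Date-class date column and a numeric value column as above:
library(dplyr)
df %>%
  # 0 = the most recent 7 days, 1 = the 7 days before that, and so on
  mutate(weeks_back = as.integer(max(date) - date) %/% 7,
         week_start = max(date) - 7 * (weeks_back + 1) + 1) %>%
  group_by(weeks_back, week_start) %>%
  summarize(summed = sum(value), .groups = "drop") %>%
  arrange(week_start) %>%
  mutate(change = summed - lag(summed))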

How to calculate the average year

I have a 20-year monthly XTS time series
Jan 1990 12.3
Feb 1990 45.6
Mar 1990 78.9
..
Jan 1991 34.5
..
Dec 2009 89.0
I would like to get the average (12-month) year, or
Jan xx
Feb yy
...
Dec kk
where xx is the average of every January, yy of every February, and so on.
I have tried apply.yearly and lapply, but these return one value, which is the 20-year total average.
Would you have any suggestions? I appreciate it.
The lubridate package could be useful for you. I would use the functions year() and month() in conjunction with aggregate():
library(xts)
library(lubridate)
#set up some sample data
dates = seq(as.Date('2000/01/01'), as.Date('2005/01/01'), by="month")
df = data.frame(rand1 = runif(length(dates)), rand2 = runif(length(dates)))
my_xts = xts(df, dates)
#get the mean by year
aggregate(my_xts$rand1, by=year(index(my_xts)), FUN=mean)
This outputs something like:
2000 0.5947939
2001 0.4968154
2002 0.4941752
2003 0.5291211
2004 0.6631564
To find the mean for each month you can do:
#get the mean by month
aggregate(my_xts$rand1, by=month(index(my_xts)), FUN=mean)
which will output something like
1 0.5560279
2 0.6352220
3 0.3308571
4 0.6709439
5 0.6698147
6 0.7483192
7 0.5147294
8 0.3724472
9 0.3266859
10 0.5331233
11 0.5490693
12 0.4642588
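If your series is a plain monthly ts object rather than xts, the same "average year" can be computed in base R; a minimal sketch, assuming the series is called x:
# x is assumed to be a monthly ts, e.g. ts(values, start = c(1990, 1), frequency = 12)
avg_year <- tapply(as.numeric(x), cycle(x), mean)  # cycle() gives the month position 1..12
names(avg_year) <- month.abb
avg_year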

How to find out how many trading days in each month in R?

I have a data frame like this. The time span is 10 years. Because it is Chinese market data and China has lunar holidays, the holidays fall at different times each year in terms of the Western calendar.
When it is a holiday, the stock market does not open, so it is a non-trading day. Weekends are non-trading days too.
I want to find out which month of which year has the fewest trading days and, most importantly, what that number is.
There are no repeated days.
date change open high low close volume
1 1995-01-03 -1.233 637.72 647.71 630.53 639.88 234518
2 1995-01-04 2.177 641.90 655.51 638.86 653.81 422220
3 1995-01-05 -1.058 656.20 657.45 645.81 646.89 430123
4 1995-01-06 -0.948 642.75 643.89 636.33 640.76 487482
5 1995-01-09 -2.308 637.52 637.55 625.04 625.97 509851
6 1995-01-10 -2.503 616.16 617.60 607.06 610.30 606925
If there are no repeated days, you can count the days per month and year with:
library(data.table) "maxx"))), .Names = c("X2005", "X2006", "X2007", "X2008"))
library(lubridate)
dt <- as.data.table(dt)  # dt is the data frame of trading days shown above
dt_days <- dt[, .(count_day = .N), by = .(year(date), month(date))]
Then you only need to do this to get the min:
dt_days[count_day==min(count_day)]
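Equivalently, a hedged dplyr sketch of the same count, assuming the data frame is called df and its date column is of class Date:
library(dplyr)
library(lubridate)
df %>%
  group_by(year = year(date), month = month(date)) %>%
  summarize(trading_days = n(), .groups = "drop") %>%
  filter(trading_days == min(trading_days))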
The chron and bizdays packages deal with business days, but neither actually contains a usable calendar of holidays, which limits their usefulness.
We will use chron below, assuming you have defined the .Holidays vector of dates that are holidays. (If you run the code below without doing that, only weekends will be excluded, since the default .Holidays vector supplied by chron has very few dates in it.) DF has 120 rows (one row for each year/month), and the last line subsets it to the month in each year having the fewest business days.
library(chron)
library(zoo)
st <- as.yearmon("2001-01")
en <- as.yearmon("2010-12")
ym <- seq(st, en, 1/12) # sequence of year/months of interest
# no of business days in each yearmonth
busdays <- sapply(ym, function(x) {
  s <- seq(as.Date(x), as.Date(x, frac = 1), "day")
  sum(!is.weekend(s) & !is.holiday(s))
})
# data frame with one row per year/month
yr <- as.integer(ym)
DF <- data.frame(year = yr, month = cycle(ym), yearmon = ym, busdays)
# data frame with one row per year
wx.min <- ave(busdays, yr, FUN = function(x) which.min(x) == seq_along(x))
DF[wx.min == 1, ]
giving:
year month yearmon busdays
2 2001 2 Feb 2001 20
14 2002 2 Feb 2002 20
26 2003 2 Feb 2003 20
38 2004 2 Feb 2004 20
50 2005 2 Feb 2005 20
62 2006 2 Feb 2006 20
74 2007 2 Feb 2007 20
95 2008 11 Nov 2008 20
98 2009 2 Feb 2009 20
110 2010 2 Feb 2010 20
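For completeness, a hedged sketch of supplying an explicit holiday calendar to chron's is.holiday(); the dates below are made-up placeholders, not a real exchange calendar:
library(chron)
# hypothetical holiday calendar; replace with the real market holidays
my_holidays <- as.chron(as.Date(c("2001-01-01", "2001-01-24", "2001-10-01")))
s <- seq(as.Date("2001-01-01"), as.Date("2001-01-31"), "day")
sum(!is.weekend(s) & !is.holiday(s, holidays = my_holidays))  # business days in Jan 2001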

Aggregate a column by defined weeks in R [closed]

I have the following table and I need to aggregate columns 4 and 5 (acres_purchase and Date_Diff) based on the weeks defined below for a given month.
For example, for any given month my weekly definition for the purchase_Date column is as follows:
wk1: days 1-6
wk2: days 7-12
wk3: days 13-18
wk4: days 19-24
wk5: days 25-31
Year County purchase_Date acres_purchase Date_Diff
2010 Cache 9/28/2009 30.5 1
2010 Cache 10/1/2009 5.0 4
2010 Cache 10/3/2009 10.2 3
2010 Cache 10/5/2009 20 3
2010 Cache 10/7/2009 15 5
2010 Cache 10/13/2009 5 1
2010 Cache 10/14/2009 6 2
2010 Cache 10/19/2009 25 7
2010 Cache 10/25/2009 12 3
2010 Cache 10/30/2009 2 1
Output:
Year County purchase_Date Week purchase_by_date Date_Diff
2010 Cache 9/28/2009 Sep-wk5 30.5 1
2010 Cache 10/1/2009 Oct-wk1 35.2 10
2010 Cache 10/7/2009 Oct-wk2 15 5
2010 Cache 10/13/2009 Oct-wk3 11 3
2010 Cache 10/19/2009 Oct-wk4 25 7
2010 Cache 10/25/2009 Oct-wk5 14 4
Is there a way that I can achieve "output" table in R?
Any help is appreciated.
First, convert purchase_Date to Date class, then extract the day of the month as purchase_Day:
df1$purchase_Date <- as.Date(df1$purchase_Date, format= "%m/%d/%Y")
df1$purchase_Day <- as.numeric(format(df1$purchase_Date, "%d"))
Define a helper function to assign each range of days to the correct week.
weekGroup <- function(x) {
  if (x <= 6) {
    week <- "wk1"
  } else if (x <= 12) {
    week <- "wk2"
  } else if (x <= 18) {
    week <- "wk3"
  } else if (x <= 24) {
    week <- "wk4"
  } else {
    week <- "wk5"
  }
  return(week)
}
Pass each day to our helper function:
df1$week <- sapply(df1$purchase_Day, weekGroup)
Pull the month into a separate column, and convert to numeric
df1$month <- as.numeric(format(df1$purchase_Date, "%m"))
month.abb is a built-in character vector of month abbreviations. Use the numeric month to index it:
df1$monthAbb <- sapply(df1$month, function(x) month.abb[x])
Combine week and monthAbb
df1$monthWeek <- paste(df1$monthAbb,df1$week, sep="-")
And @cmaher basically provided this already, but for completeness, the final summary:
require(dplyr)
df1 %>%
  group_by(Year, County, monthWeek) %>%
  summarise(purchaseDate = min(purchase_Date),
            acres = sum(acres_purchase),
            date_diff = sum(Date_Diff))
Year County monthWeek purchaseDate acres date_diff
<int> <fctr> <chr> <date> <dbl> <int>
1 2010 Cache Oct-wk1 2009-10-01 35.2 10
2 2010 Cache Oct-wk2 2009-10-07 15.0 5
3 2010 Cache Oct-wk3 2009-10-13 11.0 3
4 2010 Cache Oct-wk4 2009-10-19 25.0 7
5 2010 Cache Oct-wk5 2009-10-25 14.0 4
6 2010 Cache Sep-wk5 2009-09-28 30.5 1
Assuming your purchase_Date variable is of class Date, you can use lubridate::day() and base::findInterval to segment your dates:
df$Week <- findInterval(lubridate::day(df$purchase_Date), c(7, 13, 19, 25, 32)) + 1
df$Week <- as.factor(paste(lubridate::month(df$purchase_Date), df$Week, sep = "-"))
# purchase_Date Week
# 2017-10-01 10-1
# 2017-10-02 10-1
# 2017-10-03 10-1
# ...
# 2017-10-29 10-5
# 2017-10-30 10-5
# 2017-10-31 10-5
Then, one way to achieve your target output is with dplyr like so:
df %>%
  group_by(Year, County, Week) %>%
  summarize(
    purchase_Date = min(purchase_Date),
    purchase_by_date = sum(acres_purchase),
    Date_Diff = sum(Date_Diff))
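If you want labels like Oct-wk1 instead of 10-1, a small hedged variation of the same findInterval approach:
wk <- findInterval(lubridate::day(df$purchase_Date), c(7, 13, 19, 25)) + 1
df$Week <- paste0(month.abb[lubridate::month(df$purchase_Date)], "-wk", wk)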

Sum daily values into monthly values

I am trying to sum daily rainfall values into monthly totals for a record over 100 years in length. My data takes the form:
Year Month Day Rain
1890 1 1 0
1890 1 2 3.1
1890 1 3 2.5
1890 1 4 15.2
In the example above I want R to sum all the days of rainfall in January 1890, then February 1890, March 1890.... through to December 2010. I guess what I'm trying to do is create a loop to sum values. My output file should look like:
Year Month Rain
1890 1 80.5
1890 2 72.4
1890 3 66.8
1890 4 77.2
Any easy way to do this?
Many thanks.
You can use dplyr for some pleasing syntax
library(dplyr)
df %>%
group_by(Year, Month) %>%
summarise(Rain = sum(Rain))
In some cases it can be beneficial to convert it to a time-series class like xts, then you can use functions like apply.monthly().
Data (note that the column is lower-case rain here, while the question's data uses Rain):
df <- data.frame(
Year = rep(1890,5),
Month = c(1,1,1,2,2),
Day = 1:5,
rain = rexp(5)
)
> head(df)
Year Month Day rain
1 1890 1 1 0.1528641
2 1890 1 2 0.1603080
3 1890 1 3 0.5363315
4 1890 2 4 0.6368029
5 1890 2 5 0.5632891
Convert it to xts and use apply.monthly():
library(xts)
dates <- with(df, as.Date(paste(Year, Month, Day), format("%Y %m %d")))
myXts <- xts(df$rain, dates)
> head(apply.monthly(myXts, sum))
[,1]
1890-01-03 0.8495036
1890-02-05 1.2000919
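If you prefer year/month labels instead of the last-day-of-month dates that apply.monthly() uses as the index, one hedged alternative is aggregating a zoo series by zoo::as.yearmon:
library(zoo)
monthly <- aggregate(as.zoo(myXts), as.yearmon, sum)  # same sums as above, indexed by year/month
head(monthly)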
