Been a little stuck on this for a couple days.
Let's say I have a cohort of 2 people.
Person 1 was in the cohort from 01/01/2000 to 01/03/2001 (dates are DD/MM/YYYY).
Person 2 was in the cohort from 01/01/1999 to 31/12/2001.
This means person 1 was in the cohort for all of 2000 and 25% of 2001 (counting January through March as three of twelve months).
Person 2 was in the cohort for all of 1999, all of 2000, and all of 2001.
Adding this together means that, in total, the cohort contributed 1 year of person-time in 1999,
2 years of person-time in 2000, and 1.25 years of person-time in 2001.
Does anyone know of any R functions that might help with dividing up/summing time elapsed between dates like this? I could write it all from scratch, but I'd like to use existing functions if they're out there, and Google has got me nowhere.
Thanks!
Using data.table and lubridate:
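(The code below assumes a data.table called Data with columns Person, Start, and End; a minimal construction of that input for this example, which is my assumption rather than part of the original answer, would be:)
library(data.table)
library(lubridate)
# Assumed example input; Start/End parsed as day/month/year
Data <- data.table(Person = 1:2,
                   Start = dmy(c("01/01/2000", "01/01/1999")),
                   End = dmy(c("01/03/2001", "31/12/2001")))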
# Expand each person's interval into one row per year, anchored at the anniversary of Start
Data <- Data[, .(Start, Start2 = seq(Start, End, by = "year"), End), by = .(Person)]
Data[, End2 := Start2 + years(1) - days(1)]
# Where the original Start/End fall outside a given yearly slice, use the slice boundaries instead
Data[year(Start2) != year(Start), Start := Start2]
Data[year(End2) != year(End), End := End2]
# Contribution = number of months covered in that year (inclusive), as a fraction of 12
Data[, c("Year", "Contribution") := list(year(Start), (month(End) - month(Start) + 1)/12)]
# Sum contributions across persons, by year
Data <- Data[, .(Contribution = sum(Contribution)), by = .(Year)][order(Year)]
Which gives:
> Data
Year Contribution
1: 1999 1.00
2: 2000 2.00
3: 2001 1.25
This is a possible generalized tidyverse approach, also using lubridate. It creates a row for each calendar year a person was present, along with the appropriate time interval for each person-year. The intersection of the calendar-year interval and the person's interval gives that person's contribution for the year, which is summed at the end. Note that Jan 1 to Mar 1 would be treated here as 2 months, i.e. a 1/6-of-a-year contribution (not 25%).
library(lubridate)
library(tidyverse)

df <- data.frame(
  person = c("Person 1", "Person 2"),
  start = c("01/01/2000", "01/01/1999"),
  end = c("01/03/2001", "31/12/2001")
)
df$start <- dmy(df$start)
df$end <- dmy(df$end)
df %>%
  mutate(date_int = interval(start, end),
         year = map2(year(start), year(end), seq)) %>%
  unnest(year) %>%
  mutate(
    year_int = interval(
      as.Date(paste0(year, '-01-01')), as.Date(paste0(year, '-12-31'))
    ),
    year_sect = intersect(date_int, year_int)
  ) %>%
  group_by(year) %>%
  summarise(contribute = signif(sum(as.numeric(year_sect, "years")), 2))
Output
year contribute
<int> <dbl>
1 1999 1
2 2000 2
3 2001 1.2
Related
I want to calculate log returns for a stock in R. The issue is that my financial year is from April 1 to March 31. I have tried using packages tidyquant and tidyverse. The code I have tried is as follows:
library(tidyquant)
RIL<- tq_get("RELIANCE.NS") # download the stock price data of Reliance Industries Limited listed on NSE of India. The data is from January 2011 to May 2021.
library(tidyverse)
RIL1 <- RIL %>% mutate(CalYear = year(date),
                       Month = month(date),
                       FinYear = if_else(Month < 4, CalYear, CalYear + 1)) # This creates a new variable, FinYear: if the month is > 3 (i.e. April onwards), the financial year is the calendar year + 1; otherwise it is the calendar year itself.
RIL_Returns <- RIL1 %>%
  group_by(FinYear) %>%
  tq_transmute(select = adjusted,
               mutate_fun = periodReturn,
               period = "yearly",
               type = "log") # This part of the code has the problem.
From this code, I get two log-return values per year, which can't be right. I want a table with columns FinYear and Log_Returns, where Log_Returns is defined as ln(adjusted close price on the last trading day of a given FinYear / adjusted close price on the first trading day of that FinYear). How can I do this?
Perhaps this is not the most elegant solution, but I think it works: I obtained the first and last day of each year manually and computed the log returns accordingly.
# Get data
library("tibble")
library("tidyquant")
RIL<- tq_get("RELIANCE.NS")
RIL1 <- RIL %>% mutate(CalYear = year(date),
                       Month = month(date),
                       FinYear = if_else(Month < 4, CalYear, CalYear + 1))
# Get minimum and max dates in each year
start_dates = c()
end_dates = c()
for (year in format(min(RIL1$date), "%Y"):format(max(RIL1$date), "%Y")) {
  start_dates = c(start_dates,
                  min(RIL1$date[format(RIL1$date, "%Y") == format(as.Date(ISOdate(year, 1, 1)), "%Y")]))
  end_dates = c(end_dates,
                max(RIL1$date[format(RIL1$date, "%Y") == format(as.Date(ISOdate(year, 1, 1)), "%Y")]))
}
# Get filtered data
RIL2 <- RIL1[(RIL1$date %in% start_dates | RIL1$date %in% end_dates),]
# Get log returns, even indexes represent end of each year rows
end_adjusted = RIL2$adjusted[1:length(RIL2$adjusted) %% 2 == 0]
beginning_adjusted = RIL2$adjusted[1:length(RIL2$adjusted) %% 2 != 0]
log_returns = log(end_adjusted/beginning_adjusted)
# Put log returns and years in a tibble.
result = tibble(log_returns ,format(RIL2$date[1:length(RIL2$date) %% 2 == 0], "%Y"))
# Result
result
Outputs
# A tibble: 11 x 2
log_returns `format(RIL2$date[1:length(RIL2$date)%%2 == 0],…
<dbl> <chr>
1 -0.412 2011
2 0.185 2012
3 0.0739 2013
4 0.0117 2014
5 0.145 2015
6 0.0743 2016
7 0.537 2017
8 0.215 2018
9 0.306 2019
10 0.287 2020
11 0.0973 2021
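Note that the loop above picks the first and last trading day of each calendar year rather than each financial year. For the financial-year grouping described in the question, a rough, untested dplyr sketch (reusing RIL1 from above) might look like:
RIL1 %>%
  group_by(FinYear) %>%
  arrange(date) %>%
  summarise(Log_Returns = log(last(adjusted) / first(adjusted)))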
My data is in a dataframe which has a structure like this:
df2 <- data.frame(Year = c("2007"), Week = c(1:12), Measurement = c(rnorm(12, mean = 4, sd = 1)))
Unfortunately I do not have the complete date (e.g. days are missing) for each measurement, only the Year and the Weeks (these are ISO weeks).
Now I want to aggregate the median of a month's worth of measurements (i.e. the weekly measurements in each month of the given year) into a new column, Months. I did not find a convenient way to do this without having the exact day of each measurement available. Any input is much appreciated!
When it is necessary to allocate a week to a single month, the rule for the first week of the year might be applied, although ISO 8601 does not consider this case (see Wikipedia).
For example, the 5th week of 2007 belongs to February, because the Thursday of the 5th week was the 1st of February.
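A quick check of this claim with the ISOweek package (used in the code below):
library(ISOweek)
# Thursday (day 4) of ISO week 5 of 2007 is 1 February 2007
ISOweek2date("2007-W05-4")
# [1] "2007-02-01"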
I am using the data.table and ISOweek packages. See the example for how to compute the month of a week; then you can do any aggregation by month.
require(data.table)
require(ISOweek)
df2 <- data.table(Year = c("2007"), Week = c(1:12),
Measurement = c(rnorm(12, mean = 4, sd = 1)))
# Generate Thursday as year, week of the year, day of week according to ISO 8601
df2[, thursday_ISO := paste(Year, sprintf("W%02d", Week), 4, sep = "-")]
# Convert Thursday to date format
df2[, thursday_date := ISOweek2date(thursday_ISO)]
# Compute month
df2[, month := format(thursday_date, "%m")]
df2
A suggestion by Uwe to compute a year-month string directly:
# Compute year-month
df2[, yr_mon := format(ISOweek2date(sprintf("%s-W%02d-4", Year, Week)), "%Y-%m")]
df2
And finally you can aggregate into a new table, or add the median as a column.
df2[, median(Measurement), by = yr_mon]
df2[, median := median(Measurement), by = yr_mon]
df2
If I understand correctly, you don't know the exact day, only the week number and year. My answer takes the first day of the year as a starting date and then computes one-week intervals based on that. You can probably refine the answer.
Based on an answer by mnel, using the lubridate package.
library(lubridate)
# Prepare week, month, year information ready for the merge
# Make sure you have all the necessary dates
wmy <- data.frame(Day = seq(ymd('2007-01-01'), ymd('2007-04-01'),
                            by = 'weeks'))
wmy <- transform(wmy,
                 Week = isoweek(Day),
                 Month = month(Day),
                 Year = isoyear(Day))
# Merge this information with your data
merge(df2, wmy, by = c("Year", "Week"))
Year Week Measurement Day Month
1 2007 1 3.704887 2007-01-01 1
2 2007 10 1.974533 2007-03-05 3
3 2007 11 4.797286 2007-03-12 3
4 2007 12 4.291169 2007-03-19 3
5 2007 2 4.305010 2007-01-08 1
6 2007 3 3.374982 2007-01-15 1
7 2007 4 3.600008 2007-01-22 1
8 2007 5 4.315184 2007-01-29 1
9 2007 6 4.887142 2007-02-05 2
10 2007 7 4.155411 2007-02-12 2
11 2007 8 4.711943 2007-02-19 2
12 2007 9 2.465862 2007-02-26 2
Using dplyr you can try:
require(dplyr)
df2 %>%
  mutate(Date = as.Date(paste("1", Week, Year, sep = "-"), format = "%w-%W-%Y"),
         Year_Mon = format(Date, "%Y-%m")) %>%
  group_by(Year_Mon) %>%
  summarise(result = median(Measurement))
As #djhrio pointed out, Thursday is used to determine which month a week belongs to. So simply switch paste("1", to paste("4", in the code above, as shown below.
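That is, the pipeline above with Thursday (weekday code 4 in %w) as the anchor day:
df2 %>%
  mutate(Date = as.Date(paste("4", Week, Year, sep = "-"), format = "%w-%W-%Y"),
         Year_Mon = format(Date, "%Y-%m")) %>%
  group_by(Year_Mon) %>%
  summarise(result = median(Measurement))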
This can be done relatively simply in dplyr.
library(dplyr)
df2 %>%
mutate(Month = rep(1:3, each = 4)) %>%
group_by(Month) %>%
summarise(MonthlyMedian = stats::median(Measurement))
Basically, add a new column to define your months. I'm presuming since you don't have days, you are going to allocate 4 weeks per month?
Then you just group by your Month variable and calculate the median. Very simple
Hope this helps
I have the following survival dataset that I would like to split into intervals at January 1st of each year. For example, for personid 1220, I would make the splits at 1912-01-01, 1913-01-01, 1914-01-01, and 1915-01-01. I tried to use survSplit, but it only accepts numeric cut points. Can you please let me know if there is any other way?
In the dataset below, time = EndDate - StartDate. Here is what I have so far:
test.ts <- survSplit(Surv(time, censor) ~ .,
                     data = test,
                     cut = seq(0, 1826.25, 365.25),
                     episode = "tgroup")
but that only splits at yearly intervals measured from each person's start date, not at January 1st.
ID EndDate StartDate censor time status
1 1220 1915-03-01 1911-10-04 1 1244 Alive
3 4599 1906-02-15 1903-05-16 1 1006 Alive
4 6375 1899-04-10 1896-10-27 1 895 Alive
6 6386 1929-10-05 1922-01-26 0 1826 Outmigrated
7 6389 1933-12-08 1929-10-05 1 1525 Outmigrated
8 6390 1932-01-17 1927-07-24 1 1638 Dead 0-4 yrs
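For reference, one direction that keeps survSplit (a rough, untested sketch; column names taken from the printout above) is to express the start and stop times on the numeric date scale, so that the cut points can be the actual January 1st dates:
library(survival)
# Sketch: numeric dates as (start, stop) times, cut at each January 1st in the study period
test$start_num <- as.numeric(test$StartDate)
test$end_num <- as.numeric(test$EndDate)
jan1 <- as.numeric(seq(as.Date("1897-01-01"), as.Date("1934-01-01"), by = "year"))
test.ts <- survSplit(Surv(start_num, end_num, censor) ~ .,
                     data = test, cut = jan1, episode = "tgroup")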
Not sure I understood what you wanted, but if you want to replicate the information in your data frame for each year in the StartDate–EndDate range, you can do:
library(tidyverse)
library(lubridate)
df %>%
  as_tibble() %>%
  mutate(
    RangeYear = map2(StartDate, EndDate, function(start, end) {
      start <- `if`(day(start) == 1 && month(start) == 1,
                    year(start),
                    year(start) + 1)
      seq(start, year(end))
    })
  ) %>%
  unnest(RangeYear)
I have data for hospitalisations that records date of admission and the number of days spent in the hospital:
ID date ndays
1 2005-06-01 15
2 2005-06-15 60
3 2005-12-25 20
4 2005-01-01 400
4 2006-06-04 15
I would like to create a dataset of days spent at the hospital per year, and therefore I need to deal with cases like ID 3, whose stay goes over the end of the year, and ID 4, whose stay is longer than one year. There is also the problem that some people already have a record in the next year, and in that case I would like to add the 'surplus' days to that record.
So far I have come up with this solution:
library(lubridate)
ndays_new <- ifelse(as.Date(paste(year(data$date), "12-31", sep = "-"),
                            format = "%Y-%m-%d") - data$date < data$ndays,
                    as.Date(paste(year(data$date), "12-31", sep = "-"),
                            format = "%Y-%m-%d") - data$date,
                    data$ndays)
However, I can't think of a way to take the 'surplus' days that go over the end of the year and assign them to a new record starting in the next year. Can anyone point me to a good solution? I use dplyr, so solutions with that package would be especially welcome, but I'm willing to try any other tool if needed.
My solution isn't compact, but I tried to employ dplyr and did the following. I initially changed the column names for my own understanding. I calculated another date (date.2) by adding ndays to date.1. If the years of date.1 and date.2 match, you do not have to consider the following year; if they do not match, you do. ndays.2 is basically ndays for the following year. Then I reshaped the data using do(). After filtering out unnecessary rows with NAs, I changed date to year and aggregated the data by ID and year.
library(dplyr)

rename(mydf, date.1 = date, ndays.1 = ndays) %>%
  mutate(date.1 = as.POSIXct(date.1, format = "%Y-%m-%d"),
         date.2 = date.1 + (60 * 60 * 24) * ndays.1,
         ndays.2 = ifelse(as.character(format(date.1, "%Y")) == as.character(format(date.2, "%Y")), NA,
                          date.2 - as.POSIXct(paste0(as.character(format(date.2, "%Y")), "-01-01"), format = "%Y-%m-%d")),
         ndays.1 = ifelse(ndays.2 %in% NA, ndays.1, ndays.1 - ndays.2)) %>%
  do(data.frame(ID = .$ID, date = c(.$date.1, .$date.2), ndays = c(.$ndays.1, .$ndays.2))) %>%
  filter(complete.cases(ndays)) %>%
  mutate(date = as.numeric(format(date, "%Y"))) %>%
  rename(year = date) %>%
  group_by(ID, year) %>%
  summarise(ndays = sum(ndays))
# ID year ndays
#1 1 2005 15
#2 2 2005 60
#3 3 2005 7
#4 3 2006 13
#5 4 2005 365
#6 4 2006 50
I have a data frame with a long list of dates in one column and values in another column, which looks like this:
set.seed(1234)
df <- data.frame(date = as.Date(c('2010-09-05', '2011-09-06', '2010-09-13',
                                  '2011-09-14', '2010-09-23', '2011-09-24',
                                  '2010-10-05', '2011-10-06', '2010-10-13',
                                  '2011-10-14', '2010-10-23', '2011-10-24')),
                 value = rnorm(12))
I need to calculate the mean value in each 10-day period of each month, irrespective of year, like this:
dfNeeded <- data.frame(datePeriod = c('period.Sept0.10', 'period.Sept11.20', 'period.Sept21.30',
                                      'period.Oct0.10', 'period.Oct11.20', 'period.Oct21.31'),
                       meanValue = c(mean(df$value[c(1, 2)]),
                                     mean(df$value[c(3, 4)]),
                                     mean(df$value[c(5, 6)]),
                                     mean(df$value[c(7, 8)]),
                                     mean(df$value[c(9, 10)]),
                                     mean(df$value[c(11, 12)])))
Is there a fast way of doing this?
Here is a way to do it which uses the lubridate package for month and day extraction, but you could also do it with base R date functions:
library(lubridate)
df$period <- paste(month(df$date),cut(day(df$date),breaks=c(0,10,20,31)),sep="-")
aggregate(df$value, list(period=df$period), mean)
Which gives :
period x
1 10-(0,10] -0.5606859
2 10-(10,20] -0.7272449
3 10-(20,31] -0.7377896
4 9-(0,10] -0.4648183
5 9-(10,20] -0.6306283
6 9-(20,31] 0.4675903
This approach with format.Date and integer division (%/%) should be reasonably fast:
tapply(df$value, list( format(df$date, "%b"), as.POSIXlt(df$date)$mday %/% 10), mean)
0 1 2
Oct -0.560686 -0.727245 -0.73779
Sep -0.464818 -0.630628 0.46759
I'm not sure how it would compare to the aggregate approach:
aggregate(df$value, list( format(df$date, "%b"), as.POSIXlt(df$date)$mday %/% 10), mean)
Group.1 Group.2 x
1 Oct 0 -0.560686
2 Sep 0 -0.464818
3 Oct 1 -0.727245
4 Sep 1 -0.630628
5 Oct 2 -0.737790
6 Sep 2 0.467590