R identifying and aggregating balances with no history

I have a group of accounts with balances over 4 months. I want to sum the balances that have just appeared in that particular month. This is what I have so far.
One account originated (new) each month.
Accounts <- c('A','B','C','A','B','C','A','B','C')
Dates <- as.Date(c('2016-01-31', '2016-01-31','2016-01-31','2016-02-28','2016-02-28','2016-02-28','2016-03-31','2016-03-31','2016-03-31'))
Balances <- c(100,NA,NA,90,50,NA,80,40,120)
Origination <- data.frame(Dates,Accounts,Balances)
library(reshape2)
Origination <- dcast(Origination,Dates ~ Accounts, value.var = "Balances")
Origination$Originated <- apply(Origination[2:4],1,function(x) ifelse(sum(is.na(x))==nrow(Origination),NA,tail(na.omit(x),1)))
Origination <- melt(Origination, id = c("Dates"))
Origination <-dcast(Origination, variable ~ Dates, value.var = "value")
variable 2016-01-31 2016-02-28 2016-03-31
1 A 100 90 80
2 B NA 50 40
3 C NA NA 120
4 Originated 100 50 120
This creates an origination table with a row called Originated. In the first month we only have the 100; in the second month A has amortized to 90 but a new account appears at 50; and in the last month we have the amortized A and B along with the new C at 120. The Originated row captures it exactly as I want.
But if I introduce a fourth account so that two accounts originate in month 2 (B at 50 and C at 10), the code picks just the last amount (10) and not the sum of the two that originated that month, i.e. 50 (B) plus 10 (C).
Accounts <- c('A','B','C','D','A','B','C','D','A','B','C','D')
Dates <- as.Date(c('2016-01-31', '2016-01-31','2016-01-31','2016-01-31','2016-02-28','2016-02-28','2016-02-28','2016-02-28','2016-03-31','2016-03-31','2016-03-31','2016-03-31'))
Balances <- c(100,NA,NA,NA,90,50,10,NA,80,40,5,120)
Origination <- data.frame(Dates,Accounts,Balances)
library(reshape2)
Origination <- dcast(Origination,Dates ~ Accounts, value.var = "Balances")
Origination$Originated <- apply(Origination[2:4],1,function(x) ifelse(sum(is.na(x))==nrow(Origination),NA,tail(na.omit(x),1)))
Origination <- melt(Origination, id = c("Dates"))
Origination <-dcast(Origination, variable ~ Dates, value.var = "value")
variable 2016-01-31 2016-02-28 2016-03-31
1 A 100 90 80
2 B NA 50 40
3 C NA 10 5
4 D NA NA 120
5 Originated 100 10 5
So the ask is: how do I sum the balances of the newly added accounts, across A through D, for each date? Perhaps I am overthinking it. The result I would like is this:
variable 2016-01-31 2016-02-28 2016-03-31
1 A 100 90 80
2 B NA 50 40
3 C NA 10 5
4 D NA NA 120
5 Originated 100 60 120
Help is much appreciated.
Aksel

I have finally found a way to get the output I want. Here is the answer for those who are interested. Note that it operates on the wide table from the first dcast (Dates in rows, accounts in columns), before the melt step.
# TRUE wherever the account already had a balance in the previous month
sel <- rbind(FALSE, !is.na(head(Origination[-1], -1)))
#sel
# A B C D
#[1,] FALSE FALSE FALSE FALSE
#[2,] TRUE FALSE FALSE FALSE
#[3,] TRUE TRUE TRUE FALSE
# zero out carried-over balances, then sum what remains: the new originations
rowSums(replace(Origination[-1], sel, 0), na.rm=TRUE)
#[1] 100 60 120
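For completeness, the same result can also be computed directly on the long data, with no reshaping at all. Here is a minimal sketch using dplyr (not part of the original solution; it assumes the long Origination data frame from the top of the question, before any dcast):
library(dplyr)
Origination %>%
  arrange(Accounts, Dates) %>%
  group_by(Accounts) %>%
  # an account originates on its first non-NA balance
  mutate(new = !is.na(Balances) & cumsum(!is.na(Balances)) == 1) %>%
  group_by(Dates) %>%
  summarise(Originated = sum(Balances[new]))
#        Dates Originated
# 1 2016-01-31        100
# 2 2016-02-28         60
# 3 2016-03-31        120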

Related

How to apply function to each row of data.table

I am trying to calculate the net present value (NPV) with library(financial) for each observation of a cash flow held in data.table format. Here is my cash flow:
library(data.table)
dt <- data.table(id=c(1,2,3,4), Year1=c(NA, 30, 40, NA), Year2=c(20, 30, 20 ,70), Year3=c(60, 40, 0, 10))
To calculate the NPV and add it to the data.table:
library(financial)
npv <- apply(dt, 1, function(x) cf(na.omit(x[-1]), i = 20)$tab[, 'NPV'])
dt[, NPV:=npv]
which returns:
id Year1 Year2 Year3 NPV
1: 1 NA 20 60 70.00000
2: 2 30 30 40 82.77778
3: 3 40 20 0 56.66667
4: 4 NA 70 10 78.33333
How can I directly apply the cf function to each row and update the data.table with the result?
FYI: in my real dataset there are over 50 columns.
We can try a join-based approach:
dt[melt(dt, id.var = "id")[, .(NPV = cf(value[!is.na(value)], i = 20)$tab[, "NPV"]),
   by = id], on = 'id']
# id Year1 Year2 Year3 NPV
#1: 1 NA 20 60 70.00000
#2: 2 30 30 40 82.77778
#3: 3 40 20 0 56.66667
#4: 4 NA 70 10 78.33333
Rewriting the cf function to only calculate the part that is needed will speed things up dramatically:
dt[, NPV := {x <- na.omit(unlist(.SD)); sum(x * sppv(20,0:(length(x)-1)))}, by=id]
# id Year1 Year2 Year3 NPV
#1: 1 NA 20 60 70.00000
#2: 2 30 30 40 82.77778
#3: 3 40 20 0 56.66667
#4: 4 NA 70 10 78.33333
In fact, this could probably be vectorised now.... hmmm, let me think!
We could try making our own npv function for use in this example.
dcf <- function(x, r, t0=FALSE){
  # calculates discounted cash flows (DCF) given a cash flow and discount rate
  #
  # x  - cash flow vector
  # r  - vector of discount rates, in decimals; a single value will be recycled
  # t0 - if TRUE, the cash flow starts in year 0, i.e. the discount rate for the
  #      first period is set to zero; default is FALSE
  if(length(r)==1){
    r <- rep(r, length(x))
    if(t0==TRUE){r[1] <- 0}
  }
  x/cumprod(1+r)
}
npv <- function(x, r, t0=FALSE){
  # calculates net present value (NPV) given a cash flow and discount rate
  #
  # x  - cash flow vector
  # r  - discount rate, in decimals
  # t0 - if TRUE, the cash flow starts in year 0; default is FALSE
  sum(dcf(x, r, t0))
}
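As a quick sanity check, applying npv to the id = 2 cash flow from the question reproduces the cf-based result above (t0 = TRUE because, as noted below, the financial package does not discount the first cash flow):
npv(c(30, 30, 40), r = 0.2, t0 = TRUE)
# [1] 82.77778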
Now, whenever you want to apply(x,1,f), melt/gather/nest instead.
Unless you are intending to totally mislead the users of your data, you should never drop NAs when calculating NPV: doing so discounts the cash flows to different points in time. Replace NA with 0 instead. I also see that the package you intended to use discounts cash flows to year 0, basically meaning that the first cash flow (in Year1) is not discounted. (These two points are why the NPVs for id 1 and id 4 below differ from the earlier results: the NA is kept as a zero cash flow in Year1, and the later cash flows are each discounted one more period.)
library(data.table)
npv_dt <- melt(dt, id.vars = "id")[is.na(value), value := 0][
  order(variable), .(NPV = npv(x = value, r = 0.2, t0 = TRUE)), by = "id"]
setkey(dt, id)
setkey(npv_dt, id)
npv_dt[dt]
#> id NPV Year1 Year2 Year3
#> 1: 1 58.33333 NA 20 60
#> 2: 2 82.77778 30 30 40
#> 3: 3 56.66667 40 20 0
#> 4: 4 65.27778 NA 70 10

Finding newest data older than a specific date in R

I have two data.frames (call them dataset.new and dataset.old) that both contain information about some individuals. These individuals all have an identification number (a variable we can call "individual") that occurs in both data.frames, and each frame has information on when the data was collected, stored in a column we can call "some.date".
The second of these two data.frames (dataset.old) contains historical data for the individuals, i.e. values of some other variables measured at other times, so each individual appears many times in dataset.old.
What I wish to do is the following: for each individual in dataset.new, find the row in dataset.old that is the newest while still older than the observation in dataset.new. For individuals that have no such date present in dataset.old, I want NA returned.
This is perhaps easiest illustrated through some example data, presented below.
dataset.new
individual some.date
1 1 2016-05-01
2 2 2016-01-28
3 7 2016-03-03
dataset.old
individual some.date
1 1 2016-01-12
2 1 2015-12-30
3 1 2016-04-27
4 1 2016-05-02
5 2 2015-11-15
6 2 2012-01-27
7 2 2016-02-06
8 3 2016-04-30
9 3 2016-01-27
10 4 2016-03-01
11 4 2011-01-16
In this example, I am looking for a way to get the following output:
individual row.nr
1 1 3
2 2 5
3 7 NA
since those rows correspond to the newest data in dataset.old that still is older than the data in dataset.new.
I have a code that solves the problem, but it is too slow for the data that I have in mind (which has well over 20 000 rows in dataset.new and many, many more in dataset.old). My solution is basically a loop over all individuals, subsetting the data at each stage.
find.previous <- function(dataset.old, individual, some.new.date){
  # only look at the individual in question
  subsetted.dataset <- dataset.old[dataset.old[, "individual"] == individual, ]
  # keep only the rows with data measured BEFORE the new date
  subsetted.dataset <- subsetted.dataset[subsetted.dataset[, "some.date"] < some.new.date, ]
  # safe, since we have already made sure all remaining dates precede the new date
  row.index <- which.min(some.new.date - subsetted.dataset[, "some.date"])
  # output the row that held that information
  ifelse(length(row.index) != 0, as.integer(rownames(subsetted.dataset[row.index, ])), NA)
}
output <- matrix(ncol = 2, nrow = 0)
for(i in 1:nrow(dataset.new)){
  output <- rbind(output,
                  cbind(dataset.new[, "individual"][i],
                        find.previous(dataset.old, dataset.new[, "individual"][i],
                                      dataset.new[, "some.date"][i])))
}
colnames(output) <- c("individual", "row.nr")
output
Any help on how to solve this problem would be greatly appreciated. I have tried using my Google skills as well as reading other posts here on Stack Overflow, but without success.
The example data can be replicated by copying the following lines of code:
dataset.new <- data.frame(individual=c(1, 2, 7), some.date=as.Date(c("2016-05-01", "2016-01-28", "2016-03-03")))
dataset.old <- data.frame(individual=c(1,1,1,1,2,2,2,3,3,4,4), some.date=as.Date(c("2016-01-12", "2015-12-30", "2016-04-27", "2016-05-02", "2015-11-15", "2012-01-27", "2016-02-06", "2016-04-30", "2016-01-27", "2016-03-01", "2011-01-16")))
You can solve this efficiently with a merge.
First create the row-number variable you want in dataset.old. Then merge dataset.new with dataset.old on individual (a left join, i.e. merge(lhs, rhs, all.x = TRUE)). This gets you:
dataset.old
individual new.date old.date old.rownumber
1 1 2016-05-01 2016-01-12 1
2 1 2016-05-01 2015-12-30 2
3 1 2016-05-01 2016-04-27 3
4 1 2016-05-01 2016-05-02 4
5 2 2016-01-28 2015-11-15 5
6 2 2016-01-28 2012-01-27 6
7 2 2016-01-28 2016-02-06 7
8 7 2016-03-03 NA NA
Subset to new.date > old.date or is.na(old.date):
dataset.old
individual new.date old.date old.rownumber
1 1 2016-05-01 2016-01-12 1
2 1 2016-05-01 2015-12-30 2
3 1 2016-05-01 2016-04-27 3
5 2 2016-01-28 2015-11-15 5
6 2 2016-01-28 2012-01-27 6
8 7 2016-03-03 NA NA
Subset to old.date == max(old.date) or is.na(old.date) grouped by individual.
dataset.old
individual new.date old.date old.rownumber
3 1 2016-05-01 2016-04-27 3
5 2 2016-01-28 2015-11-15 5
8 7 2016-03-03 NA NA
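In base R, those three steps might look roughly like this (a sketch, not from the original answer; it works on copies so the input frames are left untouched):
old <- dataset.old
new <- dataset.new
old$old.rownumber <- seq_len(nrow(old))
names(old)[names(old) == "some.date"] <- "old.date"
names(new)[names(new) == "some.date"] <- "new.date"
m <- merge(new, old, by = "individual", all.x = TRUE)
m <- m[is.na(m$old.date) | m$new.date > m$old.date, ]
# per individual, keep the latest old.date (or the unmatched NA row)
do.call(rbind, lapply(split(m, m$individual), function(d)
  d[is.na(d$old.date) | d$old.date == max(d$old.date),
    c("individual", "old.rownumber")]))
# individual old.rownumber
#          1             3
#          2             5
#          7            NA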
Edit:
I'm partial to data.table. The code would look something like:
library(data.table)
setDT(dataset.old); setDT(dataset.new)
dataset.old[, old.rownumber := 1:.N]
setnames(dataset.old, "some.date", "old.date")
setnames(dataset.new, "some.date", "new.date")
dataset.merge <- merge(dataset.old, dataset.new, by = "individual", all.x = TRUE)
dataset.merge <- dataset.merge[new.date > old.date | is.na(old.date)]
dataset.merge[, .SD[old.date == max(old.date) | is.na(old.date)], by = individual]
We can skip the explicit NA handling by taking the square root before finding the minimum: differences where the old date is after the new date come out negative, the square root coerces them to missing, and which.min ignores them:
library(dplyr)
dataset.old$rn <- 1:nrow(dataset.old)
# which.min over the square root: negative differences become NaN and are skipped
minp <- function(x) if(!length(m <- which.min(as.numeric(x)^.5))) NA else m
mrg <- merge(dataset.new, dataset.old, by = "individual", all.x = TRUE)
mrg %>%
  group_by(individual) %>%
  summarise(row.nr = rn[minp(some.date.x - some.date.y)])
# A tibble: 3 x 2
# individual row.nr
# <int> <int>
# 1 1 3
# 2 2 5
# 3 7 NA
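To see the trick in isolation, here it is on a toy difference vector (the negative entry stands for an old date that is after the new date; R warns about the NaN produced):
as.numeric(c(-3, 2, 5))^.5
# [1]      NaN 1.414214 2.236068
which.min(as.numeric(c(-3, 2, 5))^.5)
# [1] 2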

R: Create a column of averages based upon groups of four rows

> head(df)
person week target actual drop_out organization agency
1: QJ1 1 30 19 TRUE BB LLC
2: GJ2 1 30 18 FALSE BB LLC
3: LJ3 1 30 22 TRUE CC BBR
4: MJ4 1 30 24 FALSE CC BBR
5: PJ5 1 35 55 FALSE AA FUN
6: EJ6 1 35 50 FALSE AA FUN
There are around 30 weeks in the dataset, with a repeating person ID each week.
I want to look at each person's values FOUR weeks at a time (so weeks 1-4, 5-8, 9-12, and so on). For each of these chunks, I want to add up all the "actual" values and divide them by the sum of the "target" values. Then we could put that value in a column called "monthly percent."
As per Shape's recommendation, I've created a month column like so:
fullReshapedDT$month <- with(fullReshapedDT, ceiling(week/4))
Trying to figure out how to iterate over the month column and calculate averages now. Trying something like this, but it obviously doesn't work:
fullReshapedDT[,.(monthly_attendance = actual/target,by=.(person_id, month)]
Have you tried creating a group variable? It will allow you to group operations by the four-week period:
setDT(df1)[,grps:=ceiling(week/4) #Create 4-week groups
][,sum(actual)/sum(target), .(person, grps) #grouped operations
][,grps:=NULL][] #Remove unnecessary columns
# person V1
# 1: QJ1 1.1076923
# 2: GJ2 1.1128205
# 3: LJ3 0.9948718
# 4: MJ4 0.6333333
# 5: PJ5 1.2410256
# 6: EJ6 1.0263158
# 7: QJ1 1.2108108
# 8: GJ2 0.6378378
# 9: LJ3 0.9891892
# 10: MJ4 0.8564103
# 11: PJ5 1.1729730
# 12: EJ6 0.8666667
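Relative to the attempt in the question: by must be an argument of [, not part of the .() list, and the ratio should be of sums. Keeping the month column created earlier, the call would be something like this (a sketch; monthly_percent is just an illustrative name, and person is the ID column shown in head(df)):
fullReshapedDT[, .(monthly_percent = sum(actual) / sum(target)),
               by = .(person, month)]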

Calculate time until next different (successful) event in grouped longitudinal data

I have a dataset of people's appointment attendance. When they miss an appointment, I want to calculate the number of days until they next attend, or return NA if they never do.
In the process of asking this question, I came up with a solution: calculate the number of days between events, then take the reverse cumulative sum of these (see here), grouping by patient and by change in attendance state (see here). I'm posting this in case it helps others, or in case someone spots a mistake or comes up with a better approach.
library(dplyr)
df <- data.frame(
id = rep(c("A","B"), each = 5),
event = c(FALSE, FALSE, TRUE, TRUE, FALSE,
FALSE, TRUE, FALSE, TRUE, TRUE),
date = as.Date(c("2016-01-02","2016-02-10","2016-02-12","2016-07-05","2016-12-28",
"2016-01-16","2016-02-11","2016-02-15","2016-04-20","2016-10-23")))
df %>%
  # Sort data (if not already)
  arrange(id, date) %>%
  group_by(id) %>%
  mutate(
    # Calculate days before next appointment
    days_next_event = lead(date) - date,
    # Identify change in attend status
    event_chng_n = cumsum(event != lag(event, default = 1))) %>%
  group_by(id, event_chng_n) %>%
  mutate(
    # Calculate days before next change in event ('cumsum' not defined for "difftime" objects)
    days_next_chng = rev(cumsum(rev(as.numeric(
      ifelse(is.na(days_next_event), 0, days_next_event)
    )))),
    # Calculate days before next success
    days_next_success = ifelse(event, 0, rev(cumsum(rev(
      as.numeric(days_next_event)
    )))))
Source: local data frame [10 x 7]
Groups: id, event_chng_n [7]
id event date days_next_event event_chng_n days_next_chng days_next_success
(fctr) (lgl) (date) (dfft) (int) (dbl) (dbl)
1 A FALSE 2016-01-02 39 days 1 41 41
2 A FALSE 2016-02-10 2 days 1 2 2
3 A TRUE 2016-02-12 144 days 2 320 0
4 A TRUE 2016-07-05 176 days 2 176 0
5 A FALSE 2016-12-28 NA days 3 0 NA
6 B FALSE 2016-01-16 26 days 1 26 26
7 B TRUE 2016-02-11 4 days 2 4 0
8 B FALSE 2016-02-15 65 days 3 65 65
9 B TRUE 2016-04-20 186 days 4 186 0
10 B TRUE 2016-10-23 NA days 4 0 0
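The workhorse here is the rev(cumsum(rev(x))) idiom, which turns a vector of gaps into a running count of days remaining in the group. For example, for the two FALSE rows of id A (gaps of 39 and 2 days):
rev(cumsum(rev(c(39, 2))))
# [1] 41  2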

Merging overlapping dataframes in R

Okay, so I have two different data frames (df1 and df2) which, to simplify things, have an ID, a date, and the score on a test. In each data frame a person (ID) has taken the test on multiple dates. Looking between the two data frames, some of the people are listed in df1 but not in df2, and vice versa, but some are listed in both, and they can overlap differently.
I want to combine all the data into one frame, but the tricky part is that if a score from df1 and a score from df2 for the same ID were taken within 7 days of each other (I can do this with a subtracted-dates column), I want to combine them into one row.
In essence, for every ID there will be one row with both scores written separately if they were taken within 7 days of each other; if not, there will be two separate rows, one with the score from df1 and one from df2, along with all the other scores that might not be listed in both.
EX:
df1
ID Date1(yyyymmdd) Score1
1 20140512 50
1 20140501 30
1 20140703 50
1 20140805 20
3 20140522 70
3 20140530 10
df2
ID Date2(yyyymmdd) Score2
1 20140530 40
1 20140622 20
1 20140702 10
1 20140820 60
2 20140522 30
2 20140530 80
Wanted_df
ID Date1(yyyymmdd) Score1 Date2(yyyymmdd) Score2
1 20140512 50
1 20140501 30
1 20140703 50 20140702 10
1 20140805 20
1 20140530 40
1 20140622 20
1 20140820 60
3 20140522 70
3 20140530 10
2 20140522 30
2 20140530 80
Alright. I feel bad about the bogus outer-join answer (which may be possible in a library I don't know about, but there are advantages to using an RDBMS sometimes...), so here is a hacky workaround. It assumes that all the joins will be at most one-to-one, which you've said is OK.
# ensure the date columns are Date type
df1$Date1 <- as.Date(as.character(df1$Date1), format="%Y%m%d")
df2$Date2 <- as.Date(as.character(df2$Date2), format="%Y%m%d")
# ensure the dfs are sorted
df1 <- df1[order(df1$ID, df1$Date1),]
df2 <- df2[order(df2$ID, df2$Date2),]
# initialize the output df3, which starts as everything from df1 and NA from df2
df3 <- cbind(df1, Date2=NA, Score2=NA)
library(plyr) # for rbind.fill
for (j in 1:nrow(df2)){
  # see if there are any rows of test1 you could join test2 to
  join_rows <- which(df3[,"ID"]==df2[j,"ID"] & abs(df3[,"Date1"]-df2[j,"Date2"])<7)
  # if so, join it to the first one (see discussion)
  if(length(join_rows)>0){
    df3[min(join_rows),"Date2"] <- df2[j,"Date2"]
    df3[min(join_rows),"Score2"] <- df2[j,"Score2"]
  # if not, add a new row of just the test2
  } else df3 <- rbind.fill(df3, df2[j,])
}
df3 <- df3[order(df3$ID, df3$Date1, df3$Date2),]
row.names(df3) <- NULL # i hate these
df3
# ID Date1 Score1 Date2 Score2
# 1 1 2014-05-01 30 <NA> NA
# 2 1 2014-05-12 50 <NA> NA
# 3 1 2014-07-03 50 2014-07-02 10
# 4 1 2014-08-05 20 <NA> NA
# 5 1 <NA> NA 2014-05-30 40
# 6 1 <NA> NA 2014-06-22 20
# 7 1 <NA> NA 2014-08-20 60
# 8 2 <NA> NA 2014-05-22 30
# 9 2 <NA> NA 2014-05-30 80
# 10 3 2014-05-22 70 <NA> NA
# 11 3 2014-05-30 10 <NA> NA
I couldn't get the rows in the same sort order as yours, but they look the same.
Short explanation: For each row in df2, see if there's a row in df1 you can "join" it to. If not, stick it at the bottom of the table. In the initialization and rbinding, you'll see some hacky ways of assigning blank rows or columns as placeholders.
Why this is a bad hacky workaround: for large data sets, the rbinding of df3 to itself will consume more and more memory. The loop is definitely not optimal and its search does not exploit the fact that the tables are sorted. If by some chance the test were taken twice within a week, you would see some unexpected behavior (duplicates from df2, etc).
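For larger data, a non-equi join avoids the row-by-row loop entirely. Here is a sketch using data.table (not from the original answers; like the loop above, it assumes at most one df2 match per df1 row, since multiple matches would duplicate rows):
library(data.table)
setDT(df1); setDT(df2)
# same criterion as the loop above: abs(Date1 - Date2) < 7, i.e. at most 6 days apart
df1[, `:=`(lo = Date1 - 6, hi = Date1 + 6)]
# right join keeping all df1 rows; unmatched rows get NA for Date2/Score2
matched <- df2[df1, on = .(ID, Date2 >= lo, Date2 <= hi),
               .(ID, Date1 = i.Date1, Score1 = i.Score1,
                 Date2 = x.Date2, Score2 = x.Score2)]
# df2 rows with no df1 row within the window, then stack the two pieces
unmatched <- df2[!df1, on = .(ID, Date2 >= lo, Date2 <= hi)]
rbind(matched, unmatched, fill = TRUE)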
Use an outer join with an absolute value limit on the date difference. (An outer join of A and B keeps all rows of both A and B.) For example:
library(sqldf)
sqldf("select a.*, b.* from df1 a outer join df2 b on a.ID = b.ID and abs(a.Date1 - b.Date2) <=7")
Note that your date variables will have to be true dates. If they are currently characters or integers, you need to do something like df1$Date1 <- as.Date(as.character(df1$Date1), format="%Y%m%d"), etc.
