Merge/join data frame/table based on criteria (> or <) in R

I have a data frame with weekly data by Section. Each Section has approximately 104 weeks' worth of data, and there are 83 sections in total.
I have a second data frame with the Start and End week by Section that I want to filter the main data frame on.
In both tables the Week is a combination of year and week, e.g. 201501, and the week part always runs from 1 to 52.
So in the example below I want to filter Section A by weeks 201401 to 201404, and Section B by weeks 201551 to 201603.
I initially thought I could add a column to my Weeks_Filter data frame holding a sequential week number running from each section's Start to its End (duplicating each row for each week), then merge the two tables and keep all the data from the Weeks_Filter table (all.y = TRUE). That worked on a small sample, but I don't know how to generate the sequential weeks, since they can span different years.
Week <- c("201401","201402","201403","201404","201405", "201451", "201552", "201601", "201602", "201603")
Section <- c(rep("A",5),rep("B",5))
df <- data.frame(cbind(Week, Section))
Section <- c("A", "B")
Start <- c("201401","201551")
End <- c("201404","201603")
Weeks_Filter <- data.frame(cbind(Section, Start, End))
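(In case the sequential-week idea above is still of interest: here is a minimal sketch of a helper that expands a Start/End pair into the full run of year-weeks, handling the rollover from week 52 to week 1. week_seq is a hypothetical name, not from any package.)
week_seq <- function(start, end) {
  y <- start %/% 100   # year part of YYYYWW
  w <- start %% 100    # week part of YYYYWW
  out <- integer(0)
  while (y * 100 + w <= end) {
    out <- c(out, y * 100 + w)
    w <- w + 1
    if (w > 52) { w <- 1; y <- y + 1 }  # roll over into the next year
  }
  out
}
week_seq(201551L, 201603L)
# [1] 201551 201552 201601 201602 201603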

The latest development version of data.table adds non-equi joins (and in older versions you can use foverlaps):
setDT(df) # convert to data.table in place
setDT(Weeks_Filter)
# fix the column types - you have factors currently, converting to integer
df[, Week := as.integer(as.character(Week))]
Weeks_Filter[, `:=`(Start = as.integer(as.character(Start)),
                    End = as.integer(as.character(End)))]
# the actual magic
df[df[Weeks_Filter, on = .(Section, Week >= Start, Week <= End), which = T]]
# Week Section
#1: 201401 A
#2: 201402 A
#3: 201403 A
#4: 201404 A
#5: 201552 B
#6: 201601 B
#7: 201602 B
#8: 201603 B
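Since the answer above mentions foverlaps for older versions, here is a hedged sketch of that route, assuming the integer conversions above have already been run (foverlaps needs an interval on both sides, so the single Week column is duplicated into Start/End on a copy):
df2 <- copy(df)[, `:=`(Start = Week, End = Week)]
setkey(Weeks_Filter, Section, Start, End)  # foverlaps requires a keyed y table
foverlaps(df2, Weeks_Filter, by.x = c("Section", "Start", "End"),
          type = "within", nomatch = 0L)[, .(Week, Section)]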

Using dplyr you can:
combine your data frames
group by Section
filter based on the Start and End columns
One problem is that your 'weeks' are characters and become factors the way you've encoded them. I took the shortcut and just made them numeric, but I'd recommend using lubridate to make these proper Date class vectors.
library(dplyr)
tempdf <- full_join(df, Weeks_Filter)
tempdf$Week <- as.numeric(as.character(tempdf$Week))
tempdf$Start <- as.numeric(as.character(tempdf$Start))
tempdf$End <- as.numeric(as.character(tempdf$End))
tempdf_filt <- tempdf %>%
  group_by(Section) %>%
  filter(Week >= Start,
         Week <= End)
It looks like there's a typo in your data ("201451" should probably be "201551"), but otherwise this returns what you want:
> tempdf_filt
Source: local data frame [8 x 4]
Groups: Section [2]
Week Section Start End
(dbl) (fctr) (dbl) (dbl)
1 201401 A 201401 201404
2 201402 A 201401 201404
3 201403 A 201401 201404
4 201404 A 201401 201404
5 201552 B 201551 201603
6 201601 B 201551 201603
7 201602 B 201551 201603
8 201603 B 201551 201603
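As a follow-up to the lubridate suggestion above, here is a hedged sketch of one way to turn the year-week strings into Date vectors (it approximates week w as w - 1 weeks after January 1 of that year; true ISO weeks need more care, and to_week_date is a hypothetical helper name):
library(lubridate)
to_week_date <- function(yw) {
  yw <- as.character(yw)
  ymd(paste0(substr(yw, 1, 4), "-01-01")) + weeks(as.integer(substr(yw, 5, 6)) - 1)
}
to_week_date(c("201401", "201552"))
# [1] "2014-01-01" "2015-12-24"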

Perhaps creating a vector of all desired weeks would work for the filter. Here is a rough example using base R:
# get weeks, zero-padded to two digits
allWeeks <- as.character(1:52)
allWeeks <- ifelse(nchar(allWeeks) == 1, paste0("0", allWeeks), allWeeks)
# get all year-weeks (2016 is needed for 201601-201603; rep() with each= avoids
# paste0's element-wise recycling, which would pair alternating years with weeks)
allWeeks <- paste0(rep(2014:2016, each = 52), allWeeks)
# filter the vector to select the desired weeks
keepWeeks <- allWeeks[grep("201(40[1-4]|55[12]|60[123])", allWeeks)]
dfKeeper <- df[df$Week %in% keepWeeks, ]
I tried to construct a regular expression that would capture the periods that you want, but you may have to adjust it a bit.

require(data.table)
df <- merge(df, Weeks_Filter)
df[, -1] <- apply(df[, -1], 2, function(x) as.numeric(as.character(x)))
df <- data.table(df)
df[Week >= Start & Week <= End, .SD, by = Section]
The Output is,
Section Start End Week
1: A 201401 201404 201401
2: A 201401 201404 201402
3: A 201401 201404 201403
4: A 201401 201404 201404
5: B 201551 201603 201552
6: B 201551 201603 201601
7: B 201551 201603 201602
8: B 201551 201603 201603

Related

Rank most recent scores of students within a given date's 30-day window

Following is what my dataframe/data.table looks like. The rank column is my desired calculated field.
library(data.table)
df <- fread('
Name Score Date Rank
John 42 1/1/2018 3
Rob 85 12/31/2017 2
Rob 89 12/26/2017 1
Rob 57 12/24/2017 1
Rob 53 08/31/2017 1
Rob 72 05/31/2017 2
Kate 87 12/25/2017 1
Kate 73 05/15/2017 1
')
df[,Date:= as.Date(Date, format="%m/%d/%Y")]
I am trying to calculate the rank of each student at every given point in time in the data, within a 30-day window. For that, I need to fetch the most recent scores of all students at a given point in time and then apply the rank function.
In the 1st row, as of 1/1/2018, John has two competitors in the past 30-day window: Rob, with a most recent score of 85 on 12/31/2017, and Kate, with a most recent score of 87 on 12/25/2017; both of those dates fall within the 30-day window before 1/1/2018. John gets a rank of 3 with the lowest score of 42. If only one student falls within a given row's 30-day window, the rank is 1.
In the 3rd row the date is 12/26/2017, so Rob's score as of 12/26/2017 is 89. Only one other student falls in the 30-day window before 12/26/2017: Kate, with her most recent score (87) on 12/25/2017. Within that window Rob's score of 89 is higher than Kate's 87, so Rob gets rank 1.
I was thinking of using the framework from "Efficient way to perform running total in the last 365 day window", but I am struggling to think of a way to fetch the most recent scores of all students at a given point in time before applying rank.
This seems to work:
ranks = df[.(d_dn = Date - 30L, d_up = Date), on = .(Date >= d_dn, Date <= d_up),
  allow.cartesian = TRUE][, .(LatestScore = last(Score)), by = .(Date = Date.1, Name)]
setorder(ranks, Date, -LatestScore)
ranks[, r := rowid(Date)]
df[ranks, on=.(Name, Date), r := i.r]
Name Score Date Rank r
1: John 42 2018-01-01 3 3
2: Rob 85 2017-12-31 2 2
3: Rob 89 2017-12-26 1 1
4: Rob 57 2017-12-24 1 1
5: Rob 53 2017-08-31 1 1
6: Rob 72 2017-05-31 2 2
7: Kate 87 2017-12-25 1 1
8: Kate 73 2017-05-15 1 1
... using last since the Cartesian join seems to sort and we want the latest measurement.
How the update join works
The i. prefix means it's a column from i in the x[i, ...] join, and the assignment := is always in x. So it's looking up each row of i in x and where matches are found, copying values from i to x.
Another way that is sometimes useful is to look up x rows in i, something like df[, r := ranks[df, on=.(Name,Date), x.r]] in which case x.r is still from the ranks table (now in the x position relative to the join).
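A tiny self-contained illustration of the update-join mechanics described above (toy data, not from the question):
library(data.table)
x <- data.table(k = c("a", "b"), v = 1:2)
i <- data.table(k = "b", new = 99L)
x[i, on = "k", v := i.new]  # rows of x matched by i get i's value copied in
x
#    k  v
# 1: a  1
# 2: b 99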
There's also...
ranks = df[CJ(Name = Name, Date = Date, unique=TRUE), on=.(Name, Date), roll=30, nomatch=0]
setnames(ranks, "Score", "LatestScore")
# and then use the same last three lines above
I'm not sure about efficiency of one vs another, but I guess it depends on number of Names, frequency of measurement and how often measurement days coincide.
A solution that uses data.table though not sure if it is the most efficient usage:
df[.(iName=Name, iScore=Score, iDate=Date, StartDate=Date-30, EndDate=Date),
   .(Rank=frank(-c(iScore[1L], .SD[Name != iName, max(Score), by=.(Name)]$V1),
                ties.method="first")[1L]),
   by=.EACHI,
   on=.(Date >= StartDate, Date <= EndDate)]
Explanation:
1) The outer square brackets do a non-equi join within a date range (i.e. between 30 days ago and the latest date for each row). Try studying the output below against the input data (note: OrigDate is only created in the EDIT further down, so add it first if you run this):
df[, OrigDate := Date]
df[.(iName=Name, iScore=Score, iDate=Date, StartDate=Date-30, EndDate=Date),
   c(.(RowGroup=.GRP),
     .SD[, .(Name, Score, Date, OrigDate, iName, iScore, iDate, StartDate, EndDate)]),
   by=.EACHI,
   on=.(Date >= StartDate, Date <= EndDate)]
2) .EACHI is to perform j calculations for each row of i.
3) Inside j, iScore[1L] is the score for the current row, and .SD[Name != iName] takes the scores not belonging to the student in the current row. Then we use max(Score) for each of those other students within the 30-day window.
4) Concatenate all these scores and calculate the rank for the score of the current row while taking care of ties by taking the first tie.
Note:
see ?data.table to understand what i, j, by, on, and .EACHI refer to.
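For readers new to .EACHI, a minimal toy example (not from the question) showing that j runs once per row of i, over that row's matches:
library(data.table)
dt <- data.table(id = c(1L, 1L, 2L), val = c(10, 20, 30))
dt[.(id = c(1L, 2L)), .(n = .N, total = sum(val)), on = "id", by = .EACHI]
#    id n total
# 1:  1 2    30
# 2:  2 1    30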
EDIT after comments by OP:
I would add an OrigDate column and find the scores that match the latest date.
df[, OrigDate := Date]
df[, OrigDate := Date]
df[.(iName=Name, iScore=Score, iDate=Date, StartDate=Date-30, EndDate=Date),
   .(Name=iName, Score=iScore, Date=iDate,
     Rank=frank(-c(iScore[1L],
                   .SD[Name != iName, Score[OrigDate==max(OrigDate)], by=.(Name)]$V1),
                ties.method="first")[1L]),
   by=.EACHI,
   on=.(Date >= StartDate, Date <= EndDate)]
I came up with the following partial solution, but ran into a problem: is it possible that two people will occur with the same date?
If not, have a look at the following piece of code:
library(tidyverse) # easy manipulation
library(lubridate) # time handling
# This function can be added to
get_top <- function(df, date_sel) {
  temp <- df %>%
    filter(Date > date_sel - months(1)) %>%  # look one month into the past from the given date
    group_by(Name) %>%                       # and for each occurring name
    summarise(max_score = max(Score)) %>%    # find the maximal score
    arrange(desc(max_score)) %>%             # sort them
    mutate(Rank = 1:n())                     # and rank them
  temp
}
Now you have to look up each name in that table for the given date and return its rank; one way is sketched below.
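A hedged sketch of that lookup step, applying get_top() row by row with purrr (this inherits the helper's months(1) approximation of the 30-day window):
library(purrr)
df$Rank <- map2_dbl(df$Name, df$Date, function(nm, d) {
  tops <- get_top(df, d)            # ranking table as of date d
  tops$Rank[match(nm, tops$Name)]   # this person's rank in it
})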
library(data.table)
library(magrittr)
setorder(df, -Date)
fun <- function(i){
  # for each name, take its most recent row dated at or before row i's date
  # (df is sorted by descending Date), then rank the scores whose dates fall
  # within the 30 days before row i's date
  df[i:nrow(df), head(.SD, 1), by = Name] %$%
    rank(-Score[Date > df$Date[i] - 30])[1]
}
df[, rank := sapply(1:.N, fun)]
This can be done by joining to df those rows of df that are within 30 days behind it (or on the same date) and have scores greater than or equal to it. Then, for each original row and joined-row Name, keep only the most recent joined row for that Name. The count of the remaining joined rows for each original df row is the rank.
library(sqldf)
sqldf("with X as
(select a.rowid r, a.*, max(b.Date) Date
from df a join df b
on b.Date between a.Date - 30 and a.Date and b.Score >= a.Score
group by a.rowid, b.Name)
select Name, Date, Score, count(*) Rank
from X
group by r
order by r")
giving:
Name Date Score Rank
1 John 2018-01-01 42 3
2 Rob 2017-12-31 85 2
3 Rob 2017-12-26 89 1
4 Rob 2017-12-24 57 1
5 Rob 2017-08-31 53 1
6 Rob 2017-05-31 72 2
7 Kate 2017-12-25 87 1
8 Kate 2017-05-15 73 1
A tidyverse solution (dplyr + tidyr):
df %>%
  complete(Name, Date) %>%
  group_by(Name) %>%
  mutate(last_score_date = `is.na<-`(Date, is.na(Score))) %>%
  fill(Score, last_score_date) %>%
  filter(!is.na(Score) & Date - last_score_date < 30) %>%
  group_by(Date) %>%
  mutate(Rank = rank(-Score)) %>%
  right_join(df)
# # A tibble: 8 x 5
# # Groups: Date [?]
# Name Date Score last_score_date Rank
# <chr> <date> <int> <date> <dbl>
# 1 John 2018-01-01 42 2018-01-01 3
# 2 Rob 2017-12-31 85 2017-12-31 2
# 3 Rob 2017-12-26 89 2017-12-26 1
# 4 Rob 2017-12-24 57 2017-12-24 1
# 5 Rob 2017-08-31 53 2017-08-31 1
# 6 Rob 2017-05-31 72 2017-05-31 2
# 7 Kate 2017-12-25 87 2017-12-25 1
# 8 Kate 2017-05-15 73 2017-05-15 1
We add all missing combinations of Date and Name.
Then we create a column last_score_date, equal to Date when Score isn't NA.
By filling NAs downward, Score becomes the latest score.
We filter out NAs and keep only scores that are less than 30 days old.
That's our table of valid scores by date.
From there it's easy to add ranks,
and a final right_join on the original table gives us the expected output.
data
library(data.table)
df <- fread('
Name Score Date
John 42 01/01/2018
Rob 85 12/31/2017
Rob 89 12/26/2017
Rob 57 12/24/2017
Rob 53 08/31/2017
Rob 72 05/31/2017
Kate 87 12/25/2017
Kate 73 05/15/2017
')
df[,Date:= as.Date(Date, format="%m/%d/%Y")]

How to diagonally subtract different columns in R

I have a dataset of a hypothetical exam.
id <- c(1,1,3,4,5,6,7,7,8,9,9)
test_date <- c("2012-06-27","2012-07-10","2013-07-04","2012-03-24","2012-07-22", "2013-09-16","2012-06-21","2013-10-18", "2013-04-21", "2012-02-16", "2012-03-15")
result_date <- c("2012-07-29","2012-09-02","2013-08-01","2012-04-25","2012-09-01","2013-10-20","2012-07-01","2013-10-31", "2013-05-17", "2012-03-17", "2012-04-20")
library(dplyr) # for as_data_frame()
data1 <- as_data_frame(id)
data1$test_date <- test_date
data1$result_date <- result_date
colnames(data1)[1] <- "id"
"id" indicates the ID of the students who have taken a particular exam. "test_date" is the date the students took the test and "result_date" is the date when the students' results are posted. I'm interested in finding out which students retook the exam BEFORE the result of that exam session was released, e.g. students who knew that they have underperformed and retook the exam without bothering to find out their scores. For example, student with "id" 1 took the exam for the second time on "2012-07-10" which was before the result date for his first exam - "2012-07-29".
I tried to:
data1 %>%
  group_by(id) %>%
  arrange(id, test_date) %>%
  filter(n() >= 2) %>% # to only get info on students who have taken the exam more than once, and then merge it back in with the original data set using a join function
So essentially, I want to create a new column called "re_test" where it would equal 1 if a student retook the exam BEFORE receiving the result of a previous exam and 0 otherwise (those who retook after seeing their marks or those who did not retake).
I have tried mutate in order to find cases where the date differences are either positive or negative by subtracting the 2nd test_date from the 1st result_date:
mutate(data1, re_test = result_date - lead(test_date, default = first(test_date)))
However, this leads to mixing up students with different ids. I tried to split, but mutate won't work on a list of data frames, so now I'm stuck:
split(data1, data1$id)
Just to add on, this is a part of the desired result:
data2 <- as_data_frame(id <- c(1,1,3,4))
data2$test_date_result <- c("2012-06-27","2012-07-10", "2013-07-04","2012-03-24")
data2$result_date_result <- c("2012-07-29","2012-09-02","2013-08-01","2012-04-25")
data2$re_test <- c(1, 0, 0, 0)
Apologies for the verbosity and hope I was clear enough.
Thanks a lot in advance!
library(reshape2)
library(dplyr)
# first melt so that we can sequence by date
data1m <- data1 %>%
  melt(id.vars = "id", measure.vars = c("test_date", "result_date"), value.name = "event_date")
# any two tests in a row is a flag - use dplyr::lag to compare with the previous row
data1mc <- data1m %>%
  arrange(id, event_date) %>%
  group_by(id) %>%
  mutate(multi_test = (variable == "test_date" & lag(variable == "test_date"))) %>%
  filter(multi_test)
# id variable event_date multi_test
# 1 1 test_date 2012-07-10 TRUE
# 2 9 test_date 2012-03-15 TRUE
## join back to the original
data1 %>%
  left_join(data1mc %>% select(id, event_date, multi_test),
            by = c("id" = "id", "test_date" = "event_date"))
I have a piecewise answer that may work for you. I first create a data.frame called student that contains the re-test information, and then join it with the data1 object. If a student re-took the test multiple times, it will compare the last test to the first, which is a flaw, but I'm unsure whether students can re-test multiple times.
student <- data1 %>%
  group_by(id) %>%
  summarise(retest = (test_date[length(test_date)] < result_date[1]) == TRUE)
Some re-test values were NA; these were individuals who only took the test once. I set these to FALSE here, but you can retain the NAs, as they do contain information.
student$retest[is.na(student$retest)] <- FALSE
Join the two data.frames to a single object called data2.
data2 <- left_join(data1, student, by='id')
I am sure there are more elegant ways to approach this. I did this by taking advantage of the structure of your data (sorted by id) and the lag function, which can refer to previous records while dealing with the current record.
library(dplyr) # lag() here must be dplyr's, not stats::lag
### Ensure data are sorted by ID ###
data1 <- arrange(data1, id)
### Create a flag for those that repeated ###
data1$repeater <- ifelse(lag(data1$id) == data1$id, 1, 0)
### I chose to do this on all the data; you could filter on the repeater flag first ###
data1$timegap <- as.Date(data1$result_date) - as.Date(data1$test_date)
data1$lagdate <- as.Date(data1$test_date) - lag(as.Date(data1$result_date))
### Display results where the repeater flag is 1 and there is a negative time lag ###
data1[data1$repeater == 1 & !is.na(data1$repeater) & as.numeric(data1$lagdate) < 0, ]
# A tibble: 2 × 6
id test_date result_date repeater timegap lagdate
<dbl> <chr> <chr> <dbl> <time> <time>
1 1 2012-07-10 2012-09-02 1 54 days -19 days
2 9 2012-03-15 2012-04-20 1 36 days -2 days
I went with a simple shift comparison. 1 line of code.
data1 <- data.frame(id = c(1,1,3,4,5,6,7,7,8,9,9), test_date = c("2012-06-27","2012-07-10","2013-07-04","2012-03-24","2012-07-22", "2013-09-16","2012-06-21","2013-10-18", "2013-04-21", "2012-02-16", "2012-03-15"), result_date = c("2012-07-29","2012-09-02","2013-08-01","2012-04-25","2012-09-01","2013-10-20","2012-07-01","2013-10-31", "2013-05-17", "2012-03-17", "2012-04-20"))
data1$re_test <- unlist(lapply(split(data1, data1$id), function(x)
  ifelse(as.Date(x$test_date) > c(NA, as.Date(x$result_date[-nrow(x)])), 0, 1)))
data1
id test_date result_date re_test
1 1 2012-06-27 2012-07-29 NA
2 1 2012-07-10 2012-09-02 1
3 3 2013-07-04 2013-08-01 NA
4 4 2012-03-24 2012-04-25 NA
5 5 2012-07-22 2012-09-01 NA
6 6 2013-09-16 2013-10-20 NA
7 7 2012-06-21 2012-07-01 NA
8 7 2013-10-18 2013-10-31 0
9 8 2013-04-21 2013-05-17 NA
10 9 2012-02-16 2012-03-17 NA
11 9 2012-03-15 2012-04-20 1
I think there is benefit in leaving the NAs, but if you really want all the others as zero, simply:
data1$re_test <- ifelse(is.na(data1$re_test), 0, data1$re_test)
data1
id test_date result_date re_test
1 1 2012-06-27 2012-07-29 0
2 1 2012-07-10 2012-09-02 1
3 3 2013-07-04 2013-08-01 0
4 4 2012-03-24 2012-04-25 0
5 5 2012-07-22 2012-09-01 0
6 6 2013-09-16 2013-10-20 0
7 7 2012-06-21 2012-07-01 0
8 7 2013-10-18 2013-10-31 0
9 8 2013-04-21 2013-05-17 0
10 9 2012-02-16 2012-03-17 0
11 9 2012-03-15 2012-04-20 1
Let me know if you have any questions, cheers.

generate list of dates based on one date in r

I am new to R and am finding it difficult to generate a series of rows where each generated row has a calculated date.
For example, going from a dataset like this:
Name date_birth
Greg 01/02/2015
Fred 02/02/2015
...to generate the following:
Name date_birth age date_atage
Greg 01/02/2015 0 01/02/2015
Greg 01/02/2015 1 02/02/2015
Greg 01/02/2015 2 03/02/2015
Fred 02/02/2015 0 02/02/2015
Fred 02/02/2015 1 03/02/2015
Fred 02/02/2015 2 04/02/2015
I have been studying sites like R-bloggers, general instructional blogs, and this site, and I have been trying to figure out a loop involving seq() so that for each individual (e.g. Greg, Fred, etc.) the process can be repeated, with dates calculated and placed in their own rows. Your first thought may be that this is simpler to do in Excel, but it isn't: I need to repeat this for over 800 individuals (not just Greg and Fred), and for up to 300 days of age.
We can use data.table
library(data.table)
setDT(df1)[, .(date_birth,
               date_at_age = format(seq(as.Date(date_birth, "%d/%m/%Y"),
                                        length.out = 3, by = "1 day"), "%d/%m/%Y")),
           by = Name][, age := seq_len(.N) - 1, by = Name][]
# Name date_birth date_at_age age
#1: Greg 01/02/2015 01/02/2015 0
#2: Greg 01/02/2015 02/02/2015 1
#3: Greg 01/02/2015 03/02/2015 2
#4: Fred 02/02/2015 02/02/2015 0
#5: Fred 02/02/2015 03/02/2015 1
#6: Fred 02/02/2015 04/02/2015 2
This is a long-form way of getting to the same place that data.table will take you.
Have a look at how dates work in R. I've taken your original format and converted it to a Date (second line of code below). See http://strftime.org/ for more format codes.
Set some dummy data:
df = data.frame(name=c("Gregg", "Joan"), DOB=c("01/02/2015", "02/02/2015"), stringsAsFactors=F)
Make date format:
df$DOB = as.Date(df$DOB, format="%d/%m/%Y")
Loop over each name, making 301 rows (ages 0 to 300) and adding the age in days to the DOB:
df = lapply(1:nrow(df), function(i){
  x = data.frame(name = rep(df[i, 1], times = 301),
                 DoB = rep(df[i, 2], times = 301),
                 age = 0:300)
  x$newDate = x$DoB + x$age
  x
})
Convert list to a data frame:
df = do.call("rbind.data.frame", df)
Check output:
head(df)
Setup
df <- cbind(c("Greg","Fred"),c("01/02/2015","02/02/2015"))
max_age <- 2
start_at <- 0
Script
new_df <- data.frame(rep(NA,(max_age+1)*dim(df)[1]))
new_df[,1] <- rep(df[,1],each=max_age-start_at+1) #Names
new_df[,2] <- rep(df[,2],each=max_age-start_at+1) #Birth date
new_df[,3] <- rep(seq(from=start_at,to=max_age),dim(df)[1]) #Age
library(lubridate)
new_df[,4] <- dmy(new_df[,2]) + days(new_df[,3]) #Date at age
colnames(new_df) <- c("names","date_birth","age","date_at_age")

Count number of rows meeting criteria in another table - R Programming

I have two tables, one with property listings and another with contacts made for a property (i.e. if someone is interested in the property they will "contact" the owner).
Sample "listings" table below:
listings <- data.frame(id = c("6174", "2175", "9176", "4176", "9177"), city = c("A", "B", "B", "B" ,"A"), listing_date = c("01/03/2015", "14/03/2015", "30/03/2015", "07/04/2015", "18/04/2015"))
listings$listing_date <- as.Date(listings$listing_date, "%d/%m/%Y")
listings
# id city listing_date
#1 6174 A 01/03/2015
#2 2175 B 14/03/2015
#3 9176 B 30/03/2015
#4 4176 B 07/04/2015
#5 9177 A 18/04/2015
Sample "contacts" table below:
contacts <- data.frame(id = c("6174", "6174", "6174", "6174", "2175", "2175", "2175", "9176", "9176", "4176", "4176", "9177"), contact_date = c("13/03/2015", "14/04/2015", "27/03/2015", "13/04/2015", "15/03/2015", "16/03/2015", "17/03/2015", "30/03/2015", "01/06/2015", "08/05/2015", "09/05/2015", "23/04/2015"))
contacts$contact_date <- as.Date(contacts$contact_date, "%d/%m/%Y")
contacts
# id contact_date
#1 6174 2015-03-13
#2 6174 2015-04-14
#3 6174 2015-03-27
#4 6174 2015-04-13
#5 2175 2015-03-15
#6 2175 2015-03-16
#7 2175 2015-03-17
#8 9176 2015-03-30
#9 9176 2015-06-01
#10 4176 2015-05-08
#11 4176 2015-05-09
#12 9177 2015-04-23
Problem
1. I need to count the number of contacts made for a property within 'x' days of listing. The output should be a new column added to "listings" with the number of contacts:
Sample ('x' = 30 days)
listings
# id city listing_date ngs
#1 6174 A 2015-03-01 2
#2 2175 B 2015-03-14 3
#3 9176 B 2015-03-30 1
#4 4176 B 2015-04-07 0
#5 9177 A 2015-04-18 1
I have done this with a for loop; it is horribly slow on live data:
n <- nrow(listings)
mat <- vector("integer", n)
for (i in 1:n) {
  mat[i] <- nrow(contacts[contacts$id == listings[i, "id"] &
                          as.numeric(contacts$contact_date - listings[i, "listing_date"]) <= 30, ])
}
listings$ngs <- mat
2. I need to prepare a histogram of the number of contacts vs. days, with 'x' as a variable, through the manipulate function. I can't figure out a way to do all this inside manipulate.
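(A hedged sketch of the manipulate part, since the answers below focus on the first problem. manipulate only works inside RStudio, and count_contacts is a hypothetical helper built on a match/tapply count like the ones further down:)
library(manipulate)  # RStudio-only
count_contacts <- function(x) {
  idx <- match(contacts$id, listings$id)
  days_since <- as.numeric(contacts$contact_date - listings$listing_date[idx])
  tapply(days_since <= x, contacts$id, sum)  # contacts within x days, per listing
}
manipulate(
  hist(count_contacts(x), main = paste("Contacts within", x, "days of listing"),
       xlab = "# contacts per listing"),
  x = slider(1, 90, initial = 30)
)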
Here's a possible solution using data.table rolling joins
library(data.table)
# key `listings` by the proper columns in order to perform the binary join
setkey(setDT(listings), id, listing_date)
# perform a binary rolling join while extracting the matched indices and counting them
indx <- data.table(listings[contacts, roll = 30, which = TRUE])[, .N, by = V1]
# Joining back to `listings` by proper rows while assigning the counts by reference
listings[indx$V1, ngs := indx$N]
# id city listing_date ngs
# 1: 2175 B 2015-03-14 3
# 2: 4176 B 2015-04-07 NA
# 3: 6174 A 2015-03-01 2
# 4: 9176 B 2015-03-30 1
# 5: 9177 A 2015-04-18 1
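If you prefer 0 over NA for listings with no contacts (as in the desired output), one extra line by reference should do it:
listings[is.na(ngs), ngs := 0L]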
I'm not sure if your actual id values are factors, but I'll start by making them numeric. Using them as factors will cause you problems:
listings$id <- as.numeric(as.character(listings$id))
contacts$id <- as.numeric(as.character(contacts$id))
Then the strategy is to calculate a "days since listing" value for each contact and add it to your contacts data.frame. Then aggregate this new data.frame (in your example, the sum of contacts within 30 days) and merge the resulting count back into your original data.
contacts$ngs <- contacts$contact_date - listings$listing_date[match(contacts$id, listings$id)]
a <- aggregate(ngs ~ id, data = contacts, FUN = function(x) sum(x <= 30))
merge(listings, a)
# id city listing_date ngs
# 1 2175 B 2015-03-14 3
# 2 4176 B 2015-04-07 0
# 3 6174 A 2015-03-01 2
# 4 9176 B 2015-03-30 1
# 5 9177 A 2015-04-18 1
Or:
indx <- match(contacts$id, listings$id)
days_since <- contacts$contact_date - listings$listing_date[indx]
n <- with(contacts[days_since <= 30, ], tapply(id, id, length))
n[is.na(n)] <- 0
listings$n <- n[match(listings$id, names(n))]
It's similar to Thomas' answer but utilizes tapply and match instead of aggregate and merge.
You could use the dplyr package. First merge the data:
all.data <- merge(contacts,listings,by = "id")
Set a target number of days:
number.of.days <- 30
Then group the data by id (group_by), exclude the results that are not within the time frame (filter), and count the number of occurrences/rows (summarise).
result <- all.data %>%
  group_by(id) %>%
  filter(contact_date > listing_date + number.of.days) %>%
  summarise(count.of.contacts = length(id))
I think there are a number of ways this could be potentially solved but I have found dplyr to be very helpful in a lot circumstances.
EDIT:
Sorry, I should have thought about that a little more. Does this work?
result <- all.data %>%
  group_by(id, city, listing_date) %>%
  summarise(ngs = length(id[which(contact_date < listing_date + number.of.days)]))
I don't think zero-count results can be passed sensibly through the filter stage (understandably, since the goal is usually the opposite). I'm not sure what impact the which() component has on processing time; it's likely slower than using filter, but it might not matter.
Using dplyr for your first problem:
left_join(contacts, listings, by = c("id" = "id")) %>%
  filter(abs(listing_date - contact_date) < 30) %>%
  group_by(id) %>%
  summarise(cnt = n()) %>%
  right_join(listings)
And the output is:
id cnt city listing_date
1 6174 2 A 2015-03-01
2 2175 3 B 2015-03-14
3 9176 1 B 2015-03-30
4 4176 NA B 2015-04-07
5 9177 1 A 2015-04-18
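To match the desired output's 0 for listing 4176 (instead of NA), one hedged follow-up, assuming the pipeline above has been saved to a hypothetical res, is tidyr::replace_na:
library(tidyr)
res %>% mutate(cnt = replace_na(cnt, 0L))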
I'm not sure I understand your second question to answer it.

Order multiple columns in R

Sample data:
now <- data.frame(id=c(123,123,123,222,222,222,135,135,135),year=c(2002,2001,2003,2006,2007,2005,2001,2002,2003),freq=c(3,1,2,2,3,1,3,1,2))
Desired output:
wanted <- data.frame(id=c(123,123,123,222,222,222,135,135,135),year=c(2001,2002,2003,2005,2006,2007,2001,2002,2003),freq=c(1,2,3,1,2,3,1,2,3))
This solution works, but I'm getting a memory error ("cannot assign 134kb..."):
ddply(now,.(id), transform, year=sort(year))
Please note that I need a speed-efficient solution, as I have a data frame of 300K rows and 50 columns. Thanks.
You can use dplyr to sort it (which is called arrange in dplyr). dplyr is also faster than plyr.
wanted <- now %>% arrange(id, year)
# or: wanted <- arrange(now, id, year)
> wanted
# id year freq
#1 123 2001 1
#2 123 2002 3
#3 123 2003 2
#4 135 2001 3
#5 135 2002 1
#6 135 2003 2
#7 222 2005 1
#8 222 2006 2
#9 222 2007 3
You could do the same with base R:
wanted <- now[order(now$id, now$year),]
However, there is a difference between your now and wanted data frames for id == 123 and year 2002 (in your now df the freq is 3, while it is 2 in the wanted df). Based on your question, I assume this is a typo and that you did not actually want to change the freq values.
You could use a base R function here
now <- now[order(now$id, now$year), ]
or data.table for faster performance
library(data.table)
setDT(now)[order(id, year)]
or
now <- data.table(now, key = c("id", "year"))
or
setDT(now)
setkey(now, id, year)
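Since the question asks specifically about speed, here is a hedged benchmarking sketch on synthetic data at the stated 300K-row scale (the microbenchmark package is assumed installed; results will vary by machine):
library(microbenchmark)
library(data.table)
n <- 3e5
big <- data.frame(id = sample(1e4, n, TRUE), year = sample(2000:2010, n, TRUE))
bigDT <- as.data.table(big)
microbenchmark(
  base = big[order(big$id, big$year), ],       # base R ordering
  data.table = bigDT[order(id, year)],         # data.table's optimized forder
  times = 10
)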
