How to subset a data frame based on date differences?

I have two data frames, and I want to subset specific rows of df2. Here are df1 and df2:
df1:
Sdate columnA D
2020-05-14 DD 1
2020-05-14 FF 5
2020-05-14 EE 6
2020-05-14 GG 7
df2:
Sdate ColA C
2020-04-13 NN 1
2020-04-13 XX 1
2020-04-14 VV 5
2020-04-15 DD 6
2020-04-16 AA 7
Here are the steps to get my final output:
I need to calculate the date difference between df1's [1,1], which is "2020-05-14", and df2's [1,1], which is "2020-04-13".
I need to figure out whether the difference is larger than 10 days.
Finally, if it is larger than 10 days, I want to delete the rows having the oldest date in df2. Because 2020-04-13 is the oldest date in df2, I want to delete the first two rows of df2.
"2020-05-14" - "2020-04-13" is 31 days, which is larger than 10, so my final output for df2 should be:
Sdate ColA C
2020-04-14 VV 5
2020-04-15 DD 6
2020-04-16 AA 7
I tried the following code:
df2 <- ifelse(as.numeric(as.Date(as.character(df1[1,1]), format="%Y-%m-%d") -
                         as.Date(as.character(df2[1,1]), format="%Y-%m-%d")) > 10,
              subset(df2, Sdate != df2[1,1]), print("Pass"))
I tested this code separately in three pieces, and each piece worked well, but the combined code above does not: df2 is just gone after running it.
What should I change to get what I want?

The ifelse() call fails because ifelse() is vectorized: it returns a result with the same shape as its test (length 1 here), so it cannot return a whole data frame; a scalar condition calls for a plain if () ... else ... instead. You can use dplyr for this. I have provided a method where you don't need to compare against the first row, but can simply take the minimum date.
library(dplyr)
new_df <- df2 %>%
  mutate(
    isOldest = Sdate == min(Sdate),
    deleteOldest = as.integer(min(df1$Sdate) - min(Sdate)) > 10
  ) %>%
  filter(!(isOldest & deleteOldest))
If instead you actually do need just a comparison of the first row:
new_df <- df2 %>%
  mutate(
    isOldest = Sdate == df2$Sdate[1],
    deleteOldest = as.integer(df1$Sdate[1] - df2$Sdate[1]) > 10
  ) %>%
  filter(!(isOldest & deleteOldest))
Hope this is what you need. The data frames I used are below.
df1 <- data.frame(
  Sdate = as.Date('2020-05-14'),
  columnA = c('DD', 'FF', 'EE', 'GG'),
  D = c(1, 5, 6, 7),
  stringsAsFactors = FALSE
)
df2 <- data.frame(
  Sdate = as.Date(c(rep('2020-04-13', 2), '2020-04-14', '2020-04-15', '2020-04-16')),
  colA = c('NN', 'XX', 'VV', 'DD', 'AA'),
  C = c(1, 1, 5, 6, 7),
  stringsAsFactors = FALSE
)
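For completeness, here is a minimal base R sketch of the direct fix to the original attempt (a scalar condition needs if () ... else ... rather than the vectorized ifelse(); this assumes Sdate is already a Date in both data frames):
# delete the rows carrying the oldest date only when the gap exceeds 10 days
if (as.numeric(df1$Sdate[1] - min(df2$Sdate)) > 10) {
  df2 <- subset(df2, Sdate != min(Sdate))
} else {
  print("Pass")
}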

Related

How can I see if any values in one data frame exist in any other data frame?

I have eight data frames, all of which contain an id field, and I want to know whether any of the id values are shared among the eight data frames.
I'm not looking for an intersection (where the values are common across all data frames); I simply want to know those instances where they appear in any of the other data frames.
Let's say that one of the data frames looks like this:
id TestDay
1 66 m
2 90 t
3 71 w
4 59 th
5 38 f
6 84 sa
7 15 su
8 89 m
9 18 t
10 93 w
11 88 th
12 42 f
13 10 sa
14 33 su
15 49 m
16 51 t
17 80 w
18 32 th
19 1 f
20 91 sa
21 58 su
If you wish to create eight sample data frames, you can do so by using this code eight times (with different data frame names, naturally):
x <- data.frame(id = sample(1:100, 21, FALSE), TestDay = rep(c("m","t","w","th","f","sa","su"), 3))
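(Equivalently, all eight sample frames could be built in one go as a named list; a sketch using base setNames() and lapply():)
dfs <- setNames(lapply(1:8, function(i) {
  data.frame(id = sample(1:100, 21, FALSE),
             TestDay = rep(c("m", "t", "w", "th", "f", "sa", "su"), 3))
}), paste0("df", 1:8))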
I want to know if any of the id values listed here appear in any of the other seven data frames, and conversely, whether any of the id values listed in any of the other seven data frames exist in this one.
How can this be done?
Combine all the data frames into one data frame, with a unique identifier value which distinguishes each source data frame.
I created two data frames here, with the data column representing the data frame number.
library(dplyr)
x1 <- data.frame(id = round(runif(21, 1, 21)), TestDay = rep(c("m","t","w","th","f","sa","su"), 3))
x2 <- data.frame(id = round(runif(21, 1, 21)), TestDay = rep(c("m","t","w","th","f","sa","su"), 3))
combine_data <- bind_rows(x1, x2, .id = 'data')
Group by the id column and count how many data frames each id is present in.
combine_data %>%
  group_by(id) %>%
  summarise(count_unique = n_distinct(data))
You can add filter(count_unique > 1) to the above chain to get the ids which are present in more than one data frame.
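For example, the full chain with that filter applied would look like this:
combine_data %>%
  group_by(id) %>%
  summarise(count_unique = n_distinct(data)) %>%
  filter(count_unique > 1)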
To add to @Ronak's answer, you can also concatenate (c()) the data frame numbers using summarise(). This tells you which data frames each ID comes from.
df1 <- data.frame(id = letters[1:3])
df2 <- data.frame(id = letters[4:6])
df3 <- data.frame(id = letters[5:10])
library(tidyverse)
df <- list(df1, df2, df3)
df4 <- df %>%
  bind_rows(.id = "df_num") %>%
  mutate(df_num = as.integer(df_num)) %>%
  group_by(id) %>%
  summarise(
    df_found = list(c(df_num)),
    df_n = map_int(df_found, length)
  )
df4
We can use data.table methods:
Get the datasets in a list and rbind them with rbindlist, tagging each source with idcol.
Grouped by 'id', get the count of unique 'data' values with uniqueN.
library(data.table)
rbindlist(list(x1, x2), idcol = 'data')[, .(count_unique = uniqueN(data)), by = id]
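A sketch of the same count_unique > 1 filter, chained on in data.table style:
rbindlist(list(x1, x2), idcol = 'data')[
  , .(count_unique = uniqueN(data)), by = id][count_unique > 1]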

Reshape origin-destination data

I need to turn this data frame:
df1 <- data.frame(A = c(1,2,3), B = c(2,1,4), Flow = c(50,30,20))
into a data frame like this:
df2 <- data.frame(A = c(1,3), B = c(3,4), AtoB = c(50,20), BtoA = c(20, NA))
I am trying to reshape it with dplyr. Is there an existing function or a way to do that?
An option would be to create an identifier column between 'A' and 'B' with labels 'AtoB'/'BtoA' based on the minimum value in each row, then replace the values in 'A' and 'B' with the row-wise minimum and maximum (pmin/pmax), and finally spread the output back to 'wide' format.
library(dplyr)
library(tidyr)
df1 %>%
  mutate(grpIdent = case_when(A == pmin(A, B) ~ 'AtoB', TRUE ~ 'BtoA'),
         A1 = pmin(A, B), B1 = pmax(A, B)) %>%
  select(A = A1, B = B1, grpIdent, Flow) %>%
  spread(grpIdent, Flow)
# A B AtoB BtoA
#1 1 2 50 30
#2 3 4 20 NA
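As a side note, in tidyr 1.0.0 and later, spread() is superseded by pivot_wider(); an equivalent sketch of the same chain:
df1 %>%
  mutate(grpIdent = case_when(A == pmin(A, B) ~ 'AtoB', TRUE ~ 'BtoA'),
         A1 = pmin(A, B), B1 = pmax(A, B)) %>%
  select(A = A1, B = B1, grpIdent, Flow) %>%
  pivot_wider(names_from = grpIdent, values_from = Flow)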
Using base R (this might require introducing a blank or blanks); it is also assumed that the to-and-fro values are entered in succession.
new_df <- cbind(df1[seq(1, nrow(df1), by = 2), ], df1[seq(2, nrow(df1), by = 2), ])[, -c(4, 5)]
names(new_df) <- c("A", "B", "AtoB", "BtoA")
new_df
Result:
# A B AtoB BtoA
#1 1 2 50 30
#3 3 4 20 30

merge data.frames based on year and fill in missing values

I have two data.frames that I want to merge together. The first is:
datess <- seq(as.Date('2005-01-01'), as.Date('2009-12-31'), 'days')
sample <- data.frame(matrix(ncol = 3, nrow = length(datess)))
colnames(sample) <- c('Date', 'y', 'Z')
sample$Date <- datess
The second:
a <- data.frame(matrix(ncol = 3, nrow = 5))
colnames(a) <- c('a', 'y', 'Z')
a$Z <- c(1, 3, 4, 5, 2)
a$a <- c(2005, 2006, 2007, 2008, 2009)
a$y <- c('abc', 'def', 'ijk', 'xyz', 'thanks')
And I'd like the merged one to match the year and then fill in the rest of the values for every day of that year.
Date y Z
2005-01-01 abc 1
2005-01-02 abc 1
2005-01-03 abc 1
{cont}
2009-12-31 thanks 2
So far, three different approaches have been posted:
using match()
using dplyr
using merge()
There is a fourth approach called update join suggested by Frank in chat:
library(data.table)
setDT(sample)[, yr := year(Date)][setDT(a), on = .(yr = a), `:=`(y = i.y, Z = i.Z)]
which turned out to be the fastest and most concise of the four: it updates sample by reference, filling in y and Z in place without building a new table.
Benchmark results:
To decide which of the approaches is the most efficient in terms of speed, I've set up a benchmark using the microbenchmark package.
Unit: microseconds
expr min lq mean median uq max neval
create_data 248.827 291.116 316.240 302.0655 323.588 665.298 100
match 4488.685 4545.701 4752.226 4649.5355 4810.763 6881.418 100
dplyr 6086.609 6275.588 6513.997 6385.2760 6625.229 8535.979 100
merge 2871.883 2942.490 3183.712 3004.6025 3168.096 5616.898 100
update_join 1484.272 1545.063 1710.651 1659.8480 1733.476 3434.102 100
As sample is modified, it has to be created anew before each benchmark run. This is done by a function which is included in the benchmark as well (create_data). The times for create_data need to be subtracted from the other timings.
So, even for this small data set of about 1800 rows, update join is the fastest: nearly twice as fast as the runner-up merge, which is followed by match, with dplyr last, more than 4 times slower than update join (with the time for create_data subtracted).
Benchmark code
library(data.table)
library(magrittr)
datess <- seq(as.Date('2005-01-01'), as.Date('2009-12-31'), 'days')
a <- data.frame(Z = c(1, 3, 4, 5, 2),
                a = 2005:2009,
                y = c('abc', 'def', 'ijk', 'xyz', 'thanks'),
                stringsAsFactors = FALSE)
setDT(a)
make_sample <- function() data.frame(Date = datess, y = NA_character_, Z = NA_real_)
microbenchmark::microbenchmark(
  create_data = make_sample(),
  match = {
    sample <- make_sample()
    matched <- match(format(sample$Date, "%Y"), a$a)
    sample$y <- a$y[matched]
    sample$Z <- a$Z[matched]
  },
  dplyr = {
    sample <- make_sample()
    sample <- sample %>%
      dplyr::mutate(a = format(Date, "%Y") %>% as.numeric) %>%
      dplyr::select(-y, -Z) %>%
      dplyr::inner_join(a, by = "a")
  },
  merge = {
    sample <- make_sample()
    sample2 <- data.frame(Date = datess)
    sample2$a <- lubridate::year(sample2$Date)
    sample <- base::merge(sample2, a, by = "a")
  },
  update_join = {
    sample <- make_sample()
    setDT(sample)[, yr := year(Date)][a, on = .(yr = a), `:=`(y = i.y, Z = i.Z)]
  }
)
You can use match:
matched <- match(format(sample$Date, "%Y"), a$a)
sample$y <- a$y[matched]
sample$Z <- a$Z[matched]
If y and Z are always empty in sample, you do not need them there, so all you have to do is join on year, like this:
library(dplyr)
sample %>%
  mutate(a = format(Date, "%Y") %>% as.numeric) %>%
  select(-y, -Z) %>%
  inner_join(a, by = "a")
Is there anything speaking against having a column with the year in your new df? If not, you could generate one in sample and use the merge function:
require(lubridate) # to make generating the year easy
sample2 <- data.frame(Date = datess)
sample2$a <- year(sample2$Date)
df <- merge(sample2, a, by = "a")
This will result in something like this:
head(df)
a Date y Z
1 2005 2005-01-01 abc 1
2 2005 2005-01-02 abc 1
3 2005 2005-01-03 abc 1
4 2005 2005-01-04 abc 1
5 2005 2005-01-05 abc 1
6 2005 2005-01-06 abc 1
You could then remove the year column again if it bothers you.
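For instance, dropping the helper column afterwards is a one-liner in base R:
df$a <- NULL  # remove the year column once the merge is done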

R applying a data frame on another data frame

I have two data frames.
set.seed(1234)
df <- data.frame(
  id = factor(rep(1:24, each = 10)),
  price = runif(20) * 100,
  quantity = sample(1:100, 240, replace = T)
)
df2 <- data.frame(
  id = factor(seq(1:24)),
  eq.quantity = sample(1:100, 24, replace = T)
)
I would like to use df2$eq.quantity to find, within each level of the factor variable id, the row of df whose quantity is closest in absolute value. I would like to do that for each id in df2 and bind the results into a new data frame, called results.
I can do it like this for each individual ID:
d.1 <- df2[df2$id == 1, 2]
df.1 <- subset(df, id == 1)
id.1 <- df.1[which.min(abs(df.1$quantity - d.1)), ]
Which would give the solution:
id price quantity
1 66.60838 84
But I would really like a smarter solution that also gathers the results into a data frame, so that doing it manually would look kind of like this:
results <- cbind(id.1, id.2, etc..., id.24)
I had some trouble giving this question a good name.
data.tables are smart!
Adding this to your current example...
library(data.table)
dt = data.table(df)
dt2 = data.table(df2)
setkey(dt, id)
setkey(dt2, id)
dt[dt2, dif := abs(quantity - eq.quantity)]
dt[, list(price = price[which.min(dif)], quantity = quantity[which.min(dif)]), by = id]
result:
dt[, list(price = price[which.min(dif)], quantity = quantity[which.min(dif)]), by = id]
id price quantity
1: 1 66.6083758 84
2: 2 29.2315840 19
3: 3 62.3379442 63
4: 4 54.4974836 31
5: 5 66.6083758 6
6: 6 69.3591292 13
...
Merge the two datasets and use lapply to perform the function on each id:
df3 <- merge(df, df2, all.x = TRUE, by = "id")
diffvar <- function(i) {
  df4 <- subset(df3, id == i)
  df4[which.min(abs(df4$quantity - df4$eq.quantity)), ]
}
resultslist <- lapply(levels(df3$id), diffvar)
Combine the resulting list elements into a data frame:
resultsdf <- data.frame(matrix(unlist(resultslist), ncol = 4, byrow = T))
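Note that matrix(unlist(...)) coerces all columns to a single type; a base alternative that keeps the original column types is:
resultsdf <- do.call(rbind, resultslist)  # row-bind the one-row data frames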
Or, more easily:
library(plyr)
resultsdf <- ddply(df3, .(id), function(x) x[which.min(abs(x$quantity - x$eq.quantity)), ])
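For reference, a dplyr equivalent of the ddply() call above, as a sketch assuming dplyr >= 1.0.0 (where slice_min() is available):
library(dplyr)
resultsdf <- df3 %>%
  group_by(id) %>%
  slice_min(abs(quantity - eq.quantity), n = 1, with_ties = FALSE) %>%  # keep the closest row per id
  ungroup()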
