Delete duplicates with multiple grouping conditions - r

I want to delete duplicates with multiple grouping conditions but always get far fewer results than expected.
The dataframe compares two companies per year, like this:
year c1 c2
2000  a  b
2000  a  c
2000  a  d
2001  a  b
2001  b  d
2001  a  c
For every c1 I want to look at c2 and delete rows whose (c1, c2) pair already appears in the previous year.
I found a similar problem, but with just one company column. Here are some of my attempts so far:
df <- df %>%
  group_by(c1, c2) %>%
  mutate(dup = n() > 1) %>%
  group_split() %>%
  map_dfr(~ if (unique(.x$dup) & (.x$year[2] - .x$year[1]) == 1) {
    .x %>% slice_head(n = 1)
  } else {
    .x
  }) %>%
  select(-dup) %>%
  arrange(year)
df <- sqldf("select a.*
             from df a
             left join df b on b.c1 = a.c1 and b.c2 = a.c2 and b.year = a.year - 1
             where b.year is null")
The desired output for the example would be:
year c1 c2
2000  a  b
2000  a  c
2000  a  d
2001  b  d

Assuming you want to check for duplicates in the previous year only, I'll show it on a modified sample:
library(tidyverse)
df <- read.table(header = T, text = 'year c1 c2
2000 a b
2000 a c
2000 a d
2001 a b
2001 b d
2001 a c
2002 a d')
df %>%
  filter(map2_lgl(df$year, paste(df$c1, df$c2),
                  ~ !paste(.x - 1, .y) %in% paste(df$year, df$c1, df$c2)))
#> year c1 c2
#> 1 2000 a b
#> 2 2000 a c
#> 3 2000 a d
#> 4 2001 b d
#> 5 2002 a d
Created on 2021-07-08 by the reprex package (v2.0.0)
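To unpack the paste() trick above: for each row, build a key from the previous year plus the c1/c2 pair, and drop the row if that key already exists in the data. The same logic written step by step (a sketch; the helper names are mine, not from the original answer):
keys <- paste(df$year, df$c1, df$c2)            # key for every existing row
prev_keys <- paste(df$year - 1, df$c1, df$c2)   # the same rows shifted one year back
df[!prev_keys %in% keys, ]                      # keep rows whose previous-year key is absent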

Some of the other solutions won't work because I think they ignore the fact that you will probably have many years and want to eliminate duplicates from only the prior year.
Here is something fairly simple. You could do this in some map function or whatnot, but sometimes a simple loop does just fine. For each year of data, use anti_join() to return only those values from the current year which are not in the prior year. Then just restack the data.
df_split <- df %>%
  group_split(year)

for (this_year in 2:length(df_split)) {
  df_split[[this_year]] <- df_split[[this_year]] %>%
    anti_join(df_split[[this_year - 1]], by = c("c1", "c2"))
}

bind_rows(df_split)
# # A tibble: 4 x 3
# year c1 c2
# <int> <chr> <chr>
# 1 2000 a b
# 2 2000 a c
# 3 2000 a d
# 4 2001 b d
Edit
Another approach is to add a dummy column for the prior year and just use an anti_join() with that. This is probably what I would do.
df %>%
  mutate(prior_year = year - 1) %>%
  anti_join(df, by = c(prior_year = "year", "c1", "c2")) %>%
  select(-prior_year)

You can also use the following solution.
library(dplyr)
library(purrr)
df %>%
  filter(pmap_int(list(df$c1, df$c2, df$year), ~ df %>%
           filter(year %in% c(..3, ..3 - 1)) %>%
           rowwise() %>%
           mutate(output = all(c(..1, ..2) %in% c_across(c1:c2))) %>%
           pull(output) %>% sum) < 2)
# AnilGoyal's modified data set
year c1 c2
1 2000 a b
2 2000 a c
3 2000 a d
4 2001 b d
5 2002 a d

This will only keep the data you want.
Here data is your data frame.
data[!duplicated(data[, 2:3]), ]  # keep only the first occurrence of each c1/c2 pair

I think this is pretty simple with base duplicated() using the fromLast option to get the last rather than the first entry. (It does assume the data are ordered by year.)
dat[!duplicated(dat[2:3], fromLast=TRUE), ] # negate logical vector in i-position
year c1 c2
3 2000 a d
4 2001 a b
5 2001 b d
6 2001 a c
I do get a different result than you said was expected so maybe I misunderstood the specifications?

Assuming that you indeed wanted to keep the last year, as stated in the question (but contrary to your example table), you could simply use slice:
library(dplyr)
df <- data.frame(year = c("2000", "2000", "2000", "2001", "2001", "2001"),
                 c1 = c("a", "a", "a", "a", "b", "a"),
                 c2 = c("b", "c", "d", "b", "d", "c"))
df %>% group_by(c1, c2) %>%
  slice_tail() %>% arrange(year, c1, c2)
Use slice_head(), if you wanted the first year.
Here is the documentation: slice
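For reference, the slice_head() variant on the same grouped data would be (a minimal sketch using the df defined above):
df %>% group_by(c1, c2) %>%
  slice_head() %>% arrange(year, c1, c2)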

Related

R: update the values in df1 based on data in df2

Hi, I have two data frames (df1 and df2) with two shared variables (ID and Yr). I want to update the values of a third variable (value) in df1 with the new data from df2. But the code below does not update the values in df1; it seems the values are not passed to the corresponding cells in df1.
df1 <- data.frame(ID = c("a", "b", "c", "d", "e"),
                  Yr = c(2000, 2001, 2002, 2003, 2004),
                  value = c(100, 100, 100, 100, 100))
df2 <- data.frame(ID = c("a", "b", "c"),
                  Yr = c(2000, 2001, 2002),
                  valuenew = c(200, 150, 120))
for (i in 1:nrow(df2)) {
  id <- df2[i, 'ID']
  year <- df2[i, 'Yr']
  valuenew <- df2[i, 'valuenew']
  df1[which(df1$ID == id & df1$Yr == year), 'value'] <- valuenew
}
The desired result:
ID Yr value
a 2000 200
b 2001 150
c 2002 120
d 2003 100
e 2004 100
Here is the real data I use, with which none of these solutions works:
df1
head(df1, 5)
CoreID Yr FluxTot
1 Asmund2000_Greenland coast_4001 1987 0.3239693
2 Asmund2000_Greenland coast_4001 1986 0.2864100
3 Asmund2000_Greenland coast_4001 1985 0.2488508
4 Asmund2000_Greenland coast_4001 1984 0.2964794
5 Asmund2000_Greenland coast_4001 1983 0.3441080
df2
head(df2, 5)
CoreID Yr GamfitHgdep
1 Beal2015_Mount Logan 2000 0.01105077
2 Eyrikh2017_Belukha glacier 2000 0.02632597
3 Zheng2014_Mt. Oxford 2000 0.01377599
4 Zheng2014_Agassiz 2000 0.01940151
5 Zheng2014_NEEM-2010-S3 2000 -0.01483026
#merged database
m<-merge(df1, df2)
head(m,5)
CoreID Yr FluxTot GamfitHgdep
1 Beal2014_Yanacocha 2000 0.003365556 0.024941373
2 Beal2014_Yanacocha 2001 0.003423333 0.027831253
3 Beal2014_Yanacocha 2002 0.003481111 -0.002908330
4 Beal2014_Yanacocha 2003 0.003538889 -0.004591100
5 Beal2014_Yanacocha 2004 0.003596667 0.005189858
Below is the exact code I used, which failed. It makes no difference if the value-assigning part is replaced with any of the other solutions. No warning or error is raised.
library(readxl)
library(dplyr)
metal <- 'Hg'
df <- read_excel('All core data.xlsx', 'Sheet1')
df <- data.frame(df)
df1 <- df[which(df$Metal == metal), ]
rownames(df1) <- seq(length = nrow(df1))
head(df1, 5)
dfgam <- read_excel('GAM prediction.xlsx', 'Sheet1')
df2 <- data.frame(dfgam)
head(df2, 5)
for (i in 1:nrow(df2)) {
  coreid <- df2[i, 'CoreID']
  year <- df2[i, 'Yr']
  predicted <- df2[i, 'GamfitHgdep']
  df1[which(df1$CoreID == coreid & df1$Yr == year), 'FluxTot'] <- predicted
}
After running the code, the values in df1 have not changed. For instance, the value should be 0.024941373, as shown in head(m, 5).
Since dplyr version 1.0.0, you can use rows_update for this:
dplyr::rows_update(
  df1,
  rename(df2, value = valuenew),
  by = c("ID", "Yr")
)
# ID Yr value
# 1 a 2000 200
# 2 b 2001 150
# 3 c 2002 120
# 4 d 2003 100
# 5 e 2004 100
We could use a join for this, for example left_join:
library(dplyr)
left_join(df1, df2, by = "ID") %>%
  mutate(value = ifelse(!is.na(valuenew), valuenew, value)) %>%
  select(ID, Yr = Yr.x, value)
ID Yr value
1 a 2000 200
2 b 2001 150
3 c 2002 120
4 d 2003 100
5 e 2004 100
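A variant of the same join that also requires the year to match (my sketch, not part of the original answer), using coalesce() to fall back to the old value where df2 has no match:
library(dplyr)
left_join(df1, df2, by = c("ID", "Yr")) %>%
  mutate(value = coalesce(valuenew, value)) %>%
  select(ID, Yr, value)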
Option using data.table:
df1 <- data.frame(ID = c("a", "b", "c", "d", "e"),
                  Yr = c(2000, 2001, 2002, 2003, 2004),
                  value = c(100, 100, 100, 100, 100))
df2 <- data.frame(ID = c("a", "b", "c"),
                  Yr = c(2000, 2001, 2002),
                  valuenew = c(200, 150, 120))
library(data.table)
setDT(df1)[df2, value := i.valuenew, on = .(ID, Yr)]
df1
#> ID Yr value
#> 1: a 2000 200
#> 2: b 2001 150
#> 3: c 2002 120
#> 4: d 2003 100
#> 5: e 2004 100
Created on 2022-07-05 by the reprex package (v2.0.1)
Your example is working and updating df1 just fine.
However, to add one more solution, you can try the lines below without using a for loop or attaching extra packages:
key <- paste(df1$ID, df1$Yr)
values <- setNames(df2$value, paste(df2$ID, df2$Yr))[key]
df1$value[!is.na(values)] <- values[!is.na(values)]
Maybe something worth mentioning in general for your problem: make sure you don't have any duplicated ID/Yr combinations in df2...
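A quick sanity check for that (a sketch):
anyDuplicated(df2[c("ID", "Yr")])  # 0 means every ID/Yr combination in df2 is unique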
EDIT:
Sorry, I was terrible at helping you! Providing just another working solution is not helpful at all. So here's my attempt to help you further.
First, check that you have the classes/types that you expect for the columns that you compare.
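For example (a sketch, using the column names from your real data):
str(df1[c("CoreID", "Yr")])  # are the classes what you expect?
str(df2[c("CoreID", "Yr")])  # e.g. is Yr numeric in one frame but character in the other?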
Next, usually I'd recommend placing a browser() in your code (e.g. before the assignment/last line in your example):
for (i in 1:nrow(df2)) {
  id <- df2[i, 'ID']
  year <- df2[i, 'Yr']
  valuenew <- df2[i, 'valuenew']
  browser()
  df1[which(df1$ID == id & df1$Yr == year), 'value'] <- valuenew
}
This is especially helpful if you need to debug a function. However in your case you can step through your for loop manually, which is a bit simpler to handle:
Assign the first value to your iterator i <- 1 and run the code inside your for loop. Is which(df1$ID == id & df1$Yr == year) really returning what you expect?
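In code, that first manual step is roughly (same objects as in your loop):
i <- 1
id <- df2[i, 'ID']
year <- df2[i, 'Yr']
which(df1$ID == id & df1$Yr == year)  # does this return the row index you expect?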
If you can't find any issues, increment i by 1 and proceed with debugging...
You can try this for loop
for (i in 1:nrow(df1)) {
  y <- which(df1$Yr[i] == df2$Yr)  # note: this matches on Yr only, not on ID
  if (length(y) > 0) df1$value[i] <- df2$valuenew[y]
}
Output
ID Yr value
1 a 2000 200
2 b 2001 150
3 c 2002 120
4 d 2003 100
5 e 2004 100

R - Return column name for row where first given value is found

I am trying to find the first occurrence of a FALSE in a dataframe for each row. My rows are specific occurrences and the columns are dates. I would like to be able to find the date of the first FALSE so that I can use that value to find a return date.
An example structure of my dataframe:
df <- data.frame(ID = c(1, 2, 3),
                 '2001' = c(TRUE, TRUE, TRUE),
                 '2002' = c(FALSE, TRUE, FALSE),
                 '2003' = c(TRUE, FALSE, TRUE))
I want to end up with a second dataframe or list that contains the ID and the column name that identifies the first instance of a FALSE.
For example :
ID | Date
1 | 2002
2 | 2003
3 | 2002
I do not know the mechanism to find such a result.
The actual dataframe contains a couple thousand rows so I unfortunately can't do it by hand.
I am a new R user so please don't refrain from suggesting things you might expect a more experienced R user to have already thought about.
Thanks in advance
Try this using tidyverse functions. You can reshape the data to long format and then filter for FALSE values. If there are some duplicated rows, the second filter can avoid them. Here is the code:
library(dplyr)
library(tidyr)
#Code
newdf <- df %>% pivot_longer(-ID) %>%
  group_by(ID) %>%
  filter(value == F) %>%
  filter(!duplicated(value)) %>%
  select(-value) %>%
  rename(Myname = name)
Output:
# A tibble: 3 x 2
# Groups: ID [3]
ID Myname
<dbl> <chr>
1 1 2002
2 2 2003
3 3 2002
Another option, without using duplicated(), is row_number() to extract the first value (row_number() == 1):
library(dplyr)
library(tidyr)
#Code 2
newdf <- df %>% pivot_longer(-ID) %>%
  group_by(ID) %>%
  filter(value == F) %>%
  mutate(V = ifelse(row_number() == 1, 1, 0)) %>%
  filter(V == 1) %>%
  select(-c(value, V)) %>%
  rename(Myname = name)
Output:
# A tibble: 3 x 2
# Groups: ID [3]
ID Myname
<dbl> <chr>
1 1 2002
2 2 2003
3 3 2002
Or using base R with apply() and a generic function:
#Code 3: for each row, take the name of the first column that is FALSE
out <- data.frame(df[, 1, drop = F],
                  Res = apply(df[, -1], 1, function(x) names(x)[min(which(x == F))]))
Output:
ID Res
1 1 2002
2 2 2003
3 3 2002
We can use max.col with ties.method = 'first' after inverting the logical values.
cbind(df[1], Date = names(df[-1])[max.col(!df[-1], ties.method = 'first')])
# ID Date
#1 1 2002
#2 2 2003
#3 3 2002
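Unpacking that one-liner step by step (a sketch, assuming the df from the question):
inv <- !df[-1]                              # invert: the first FALSE in each row becomes the first TRUE
idx <- max.col(inv, ties.method = 'first')  # column position of that first TRUE, per row
names(df[-1])[idx]                          # map those positions back to the year column names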

Subset a dataframe, calculate the mean and populate a dataframe in a loop in R

I have a set of 85 possible combinations from two variables, one with five values (years) and one with 17 values (locations). I make a dataframe that has the years in the first column and the locations in the second column. For each combination of year and location I want to calculate the weighted mean value and then add it to the third column, according to the year and location values.
My code is as follows:
for (i in unique(data1$year)) {
  for (j in unique(data1$location)) {
    data2 <- crossing(data1$year, data1$location)
    dataname <- subset(data1, year %in% i & location %in% j)
    result <- weighted.mean(dataname$length, dataname$raising_factor, na.rm = T)
  }
}
The result I get puts the last calculated mean in the third column of every row.
How can I get it to fill in according to the matching year and location combination?
Thanks.
A base R option would be by
by(df[c('x', 'y')], df[c('group', 'year')],
   function(x) weighted.mean(x[, 1], x[, 2]))
Based on #LAP's example
As #A.Suleiman suggested, we can use dplyr::group_by.
Example data:
df <- data.frame(group = rep(letters[1:5], each = 4),
                 year = rep(2001:2002, 10),
                 x = 1:20,
                 y = rep(c(0.3, 1, 1/0.3, 0.4), each = 5))
library(dplyr)
df %>%
  group_by(group, year) %>%
  summarise(test = weighted.mean(x, y))
# A tibble: 10 x 3
# Groups: group [?]
group year test
<fctr> <int> <dbl>
1 a 2001 2.000000
2 a 2002 3.000000
3 b 2001 6.538462
4 b 2002 7.000000
5 c 2001 10.538462
6 c 2002 11.538462
7 d 2001 14.000000
8 d 2002 14.214286
9 e 2001 18.000000
10 e 2002 19.000000

R: Consolidating duplicate observations?

I have a large data frame with approximately 500,000 observations (identified by "ID") and 150+ variables. Some observations only appear once; others appear multiple times (upwards of 10 or so). I would like to "collapse" these multiple observations so that there is only one row per unique ID and all the information in columns 2:150 is concatenated. I do not need any calculations run on these observations, just a quick munging.
I've tried:
df.new <- group_by(df,"ID")
and also:
library(data.table)
dt = data.table(df)
dt.new <- dt[, lapply(.SD, na.omit), by = "ID"]
and unfortunately neither have worked. Any help is appreciated!
Using base R:
df <- data.frame(ID = c("a", "a", "b", "b", "b", "c", "d", "d"),
                 day = c("1", "2", "3", "4", "5", "6", "7", "8"),
                 year = c(2016, 2017, 2017, 2016, 2017, 2016, 2017, 2016),
                 stringsAsFactors = F)
> df
ID day year
1 a 1 2016
2 a 2 2017
3 b 3 2017
4 b 4 2016
5 b 5 2017
6 c 6 2016
7 d 7 2017
8 d 8 2016
Do:
z <- aggregate(df[, 2:3],
               by = list(id = df$ID),
               function(x){ paste0(x, collapse = "/") })
Result:
> z
id day year
1 a 1/2 2016/2017
2 b 3/4/5 2017/2016/2017
3 c 6 2016
4 d 7/8 2017/2016
EDIT
If you want to avoid "collapsing" NA do:
z <- aggregate(df[, 2:3],
               by = list(id = df$ID),
               function(x){ paste0(x[!is.na(x)], collapse = "/") })
For a data frame like:
> df
ID day year
1 a 1 2016
2 a 2 NA
3 b 3 2017
4 b 4 2016
5 b <NA> 2017
6 c 6 2016
7 d 7 2017
8 d 8 2016
The result is:
> z
id day year
1 a 1/2 2016
2 b 3/4 2017/2016/2017
3 c 6 2016
4 d 7/8 2017/2016
I have had a similar problem in the past, but I wasn't dealing with several copies of the same data. It was in many cases just 2 instances and in some cases 3 instances. Below was my approach. Hopefully, it will help.
idx <- duplicated(df$key) | duplicated(df$key, fromLast = TRUE)  # index of the duplicate entries; fromLast also flags the original
dupes <- df[idx, ]      # get duplicated values
non_dupes <- df[!idx, ] # get all non-duplicated values
temp <- dupes %>%
  group_by(key) %>%     # roll up the duplicated ones
  fill_(colnames(dupes), .direction = "down") %>%
  fill_(colnames(dupes), .direction = "up") %>%
  slice(1)
Then it is easy to merge back the temp and the non_dupes.
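For example, with dplyr loaded (the object name is mine):
df_clean <- bind_rows(temp, non_dupes)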
EDIT
I would highly recommend filtering df down to only the relevant population as much as possible beforehand, as this process could take some time.
What about?
df %>%
  group_by(ID) %>%
  summarise_each(funs(paste0(., collapse = "/")))
Or reproducible...
iris %>%
  group_by(Species) %>%
  summarise_each(funs(paste0(., collapse = "/")))
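As a side note from me: summarise_each() has since been deprecated, so on current dplyr versions the same idea can be written with across() (a sketch):
library(dplyr)
iris %>%
  group_by(Species) %>%
  summarise(across(everything(), ~ paste0(.x, collapse = "/")))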

Scale relative to a value in each group (via dplyr)

I have a set of time series, and I want to scale each of them relative to their value in a specific interval. That way, each series will be at 1.0 at that time and change proportionally.
I can't figure out how to do that with dplyr.
Here's a working example using a for loop:
library(dplyr)
data <- expand.grid(
  category = LETTERS[1:3],
  year = 2000:2005)
data$value <- runif(nrow(data))

# the first time point in the series
baseYear <- 2002

# for each category, divide all the values by the category's value in the base year
for (category in as.character(levels(factor(data$category)))) {
  data[data$category == category, ]$value <-
    data[data$category == category, ]$value /
      data[data$category == category & data$year == baseYear, ]$value[[1]]
}
Edit: Modified the question such that the base time point is not indexable. Sometimes the "time" column is actually a factor, which isn't necessarily ordinal.
This solution is very similar to #thelatemail's, but I think it's sufficiently different to merit its own answer because it chooses the index based on a condition:
data %>%
  group_by(category) %>%
  mutate(value = value/value[year == baseYear])
# category year value
#... ... ... ...
#7 A 2002 1.00000000
#8 B 2002 1.00000000
#9 C 2002 1.00000000
#10 A 2003 0.86462789
#11 B 2003 1.07217943
#12 C 2003 0.82209897
(Data output has been truncated. To replicate these results, set.seed(123) when creating data.)
Use first in dplyr, ensuring you use order_by so that "first" means the chronologically first value rather than whatever row happens to come first:
data %>%
  group_by(category) %>%
  mutate(value = value / first(value, order_by = year))
Something like this:
data %>%
  group_by(category) %>%
  mutate(value = value/value[1]) %>%
  arrange(category, year)
Result:
# category year value
#1 A 2000 1.0000000
#2 A 2001 0.2882984
#3 A 2002 1.5224308
#4 A 2003 0.8369343
#5 A 2004 2.0868684
#6 A 2005 0.2196814
#7 B 2000 1.0000000
#8 B 2001 0.5952027
