Remove row conditionally from data frame - r

How can I remove rows conditionally from a data frame?
For example, I have:
Apple, 2001
Apple, 2002
Apple, 2003
Apple, 2004
Banana, 2001
Banana, 2002
Banana, 2003
Candy, 2001
Candy, 2002
Candy, 2003
Candy, 2004
Dog, 2001
Dog, 2002
Dog, 2004
Water, 2002
Water, 2003
Water, 2004
Then, I want to keep only the groups that contain every year from 2001 to 2004, i.e.:
Apple, 2001
Apple, 2002
Apple, 2003
Apple, 2004
Candy, 2001
Candy, 2002
Candy, 2003
Candy, 2004

Using data.table, check whether all of 2001:2004 are present (%in%) in the 'year' column for each group of 'Col1', and if so return the Subset of Data.table (.SD):
library(data.table)
setDT(df1)[, if(all(2001:2004 %in% year)) .SD, by = Col1]
# Col1 year
#1: Apple 2001
#2: Apple 2002
#3: Apple 2003
#4: Apple 2004
#5: Candy 2001
#6: Candy 2002
#7: Candy 2003
#8: Candy 2004
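An equivalent data.table check (a sketch on the same df1, not from the original answer) counts the distinct target years seen in each group:
library(data.table)
# a group is kept only when all four years 2001-2004 occur at least once
setDT(df1)[, if (uniqueN(year[year %in% 2001:2004]) == 4L) .SD, by = Col1]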
data
df1 <- structure(list(Col1 = c("Apple", "Apple", "Apple", "Apple", "Banana",
"Banana", "Banana", "Candy", "Candy", "Candy", "Candy", "Dog",
"Dog", "Dog", "Water", "Water", "Water"), year = c(2001L, 2002L,
2003L, 2004L, 2001L, 2002L, 2003L, 2001L, 2002L, 2003L, 2004L,
2001L, 2002L, 2004L, 2002L, 2003L, 2004L)), .Names = c("Col1",
"year"), class = "data.frame", row.names = c(NA, -17L))

With base R, we can use ave to get the desired result:
df1[ave(df1$year, df1$Col1, FUN = function(x) all(2001:2004 %in% x)) == 1, ]
# Col1 year
#1 Apple 2001
#2 Apple 2002
#3 Apple 2003
#4 Apple 2004
#8 Candy 2001
#9 Candy 2002
#10 Candy 2003
#11 Candy 2004
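A small variant of the same ave idea (a sketch, same df1) indexes with a logical vector instead of comparing against 1:
# ave() fills every row of a group with the group's TRUE/FALSE (stored as 1/0)
df1[as.logical(ave(df1$year, df1$Col1, FUN = function(x) all(2001:2004 %in% x))), ]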

dplyr approach:
library(dplyr) # or library(tidyverse)
df1 %>%
  group_by(Col1) %>%
  filter(all(2001:2004 %in% year))
Within each group, filter(TRUE) keeps all of the group's rows, while filter(FALSE) drops them all, so groups missing any of 2001-2004 are removed entirely.
Output:
Source: local data frame [8 x 2]
Groups: Col1 [2]
Col1 year
<chr> <int>
1 Apple 2001
2 Apple 2002
3 Apple 2003
4 Apple 2004
5 Candy 2001
6 Candy 2002
7 Candy 2003
8 Candy 2004
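That group-level behaviour is easy to see on a tiny toy example (hypothetical data, just a sketch):
library(dplyr)
toy <- data.frame(g = c("a", "a", "b"), x = 1:3)
# n() > 1 evaluates to a single TRUE/FALSE per group, keeping or dropping the whole group
toy %>% group_by(g) %>% filter(n() > 1)
# both "a" rows survive; the lone "b" row is dropped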

Related

R Dataframe By-Value Filter

Suppose I have a dataset that looks like the one below:
Person Year From To
Peter 2001 Apple Microsoft
Peter 2006 Microsoft IBM
Peter 2010 IBM Facebook
Peter 2016 Facebook Apple
Kate 2003 Microsoft Google
Jimmy 2001 Samsung IBM
Jimmy 2004 IBM Google
Jimmy 2009 Google Facebook
I want to filter by person and only keep people who worked at IBM sometime (either in the From or in the To column). Furthermore, I only want to keep the records before people move away from IBM (that is, before "IBM" first appears in the From column). Thus, I want something like below:
Person Year From To
Peter 2001 Apple Microsoft
Peter 2006 Microsoft IBM
Jimmy 2001 Samsung IBM
A possible solution with dplyr:
library(dplyr)
df %>%
  group_by(Person) %>%
  filter(To == "IBM" | lead(To) == "IBM") %>%
  ungroup()
# A tibble: 3 x 4
Person Year From To
<chr> <int> <chr> <chr>
1 Peter 2001 Apple Microsoft
2 Peter 2006 Microsoft IBM
3 Jimmy 2001 Samsung IBM
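If a person could have several rows before reaching IBM, a more general sketch (not part of the original answer, and assuming rows are already ordered by Year within each Person) keeps everything up to and including the first arrival at IBM:
library(dplyr)
df %>%
  group_by(Person) %>%
  # drop people who never reach IBM
  filter(any(To == "IBM")) %>%
  # keep rows until the first row whose To is "IBM" has passed
  filter(cumsum(lag(To == "IBM", default = FALSE)) == 0) %>%
  ungroup()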
Data
df <- structure(list(Person = c("Peter", "Peter", "Peter", "Peter",
"Kate", "Jimmy", "Jimmy", "Jimmy"), Year = c(2001L, 2006L, 2010L,
2016L, 2003L, 2001L, 2004L, 2009L), From = c("Apple", "Microsoft",
"IBM", "Facebook", "Microsoft", "Samsung", "IBM", "Google"),
To = c("Microsoft", "IBM", "Facebook", "Apple", "Google",
"IBM", "Google", "Facebook")), class = "data.frame", row.names = c(NA, -8L))

r conditional drop rows by Group

This is my dataset
Group Status From To
Blue No 1994 2000
Red No 1994 1997
Red Yes 1998 2002
Yellow No 1994 2014
Yellow Yes 2015 2021
Purple No 1994 1997
I'd like to get rid of the rows with Status = No, but only where they belong to a Group that appears more than once.
For instance, Group = Red and Group = Yellow each have 2 rows, so I'd like to drop the Status = No row within those two groups. The final dataset would look like this:
Group Status From To
Blue No 1994 2000
Red Yes 1998 2002
Yellow Yes 2015 2021
Purple No 1994 1997
Any suggestions regarding this are much appreciated. Thanks.
You can return the rows with Status = 'Yes' when the number of rows in the group is greater than 1, and keep the group as-is otherwise.
library(dplyr)
df %>%
  group_by(Group) %>%
  filter(if (n() > 1) Status == 'Yes' else TRUE) %>%
  ungroup()
# Group Status From To
# <chr> <chr> <int> <int>
#1 Blue No 1994 2000
#2 Red Yes 1998 2002
#3 Yellow Yes 2015 2021
#4 Purple No 1994 1997
For this data, since 'Yes' sorts after 'No', we can also sort and keep the first row per Group:
df %>%
  arrange(Group, desc(Status)) %>%
  distinct(Group, .keep_all = TRUE)
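The same logic as a data.table sketch (assuming the df defined under data below):
library(data.table)
# in multi-row groups keep only the 'Yes' rows; single-row groups are kept as they are
setDT(df)[, if (.N > 1) .SD[Status == "Yes"] else .SD, by = Group]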
data
It is easier to help if you provide data in a reproducible format
df <- structure(list(Group = c("Blue", "Red", "Red", "Yellow", "Yellow",
"Purple"), Status = c("No", "No", "Yes", "No", "Yes", "No"),
From = c(1994L, 1994L, 1998L, 1994L, 2015L, 1994L), To = c(2000L,
1997L, 2002L, 2014L, 2021L, 1997L)),
class = "data.frame", row.names = c(NA, -6L))

Calculating sums of observation in time intervals in a df [duplicate]

This question already has answers here:
Aggregate one data frame by time intervals from another data frame
I've posted this as another question, but realised I've got my sample data wrong.
I've got two separate datasets. df1 looks like this:
loc_ID year observations
nin212 2002 90
nin212 2003 98
nin212 2004 102
cha670 2001 18
cha670 2002 19
cha670 2003 21
df2 looks like this:
loc_ID start_year end_year
nin212 2002 2003
nin212 2003 2004
cha670 2001 2002
cha670 2002 2003
I want to calculate the number of observations in the time intervals (start_year to end_year) per loc_ID. In the example above, I would like to achieve this final dataset:
loc_ID start_year end_year observations
nin212 2002 2003 188
nin212 2003 2004 200
cha670 2001 2002 37
cha670 2002 2003 40
How could I do this?
We can do a non-equi join
library(data.table)
setDT(df2)[, observations := setDT(df1)[df2, sum(observations),
on = .(loc_ID, year >= start_year, year <= end_year),
by = .EACHI]$V1]
Output:
df2
# loc_ID start_year end_year observations
#1: nin212 2002 2003 188
#2: nin212 2003 2004 200
#3: cha670 2001 2002 37
#4: cha670 2002 2003 40
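For comparison, a dplyr sketch of the same aggregation (assuming dplyr >= 1.1.0, which provides join_by() with inequality conditions; row order may differ):
library(dplyr)
# run on the original df2, before the observations column is added above
df2 %>%
  left_join(df1, join_by(loc_ID, start_year <= year, end_year >= year)) %>%
  group_by(loc_ID, start_year, end_year) %>%
  summarise(observations = sum(observations), .groups = "drop")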
data
df1 <- structure(list(loc_ID = c("nin212", "nin212", "nin212", "cha670",
"cha670", "cha670"), year = c(2002L, 2003L, 2004L, 2001L, 2002L,
2003L), observations = c(90L, 98L, 102L, 18L, 19L, 21L)),
class = "data.frame", row.names = c(NA, -6L))
df2 <- structure(list(loc_ID = c("nin212", "nin212", "cha670", "cha670"
), start_year = c(2002L, 2003L, 2001L, 2002L), end_year = c(2003L,
2004L, 2002L, 2003L)), class = "data.frame", row.names = c(NA,
-4L))

Summarise based on number of observations per year in a time-series

I've got a long dataframe like this:
year value town
2001 0.15 ny
2002 0.19 ny
2002 0.14 ca
2001 NA ny
2002 0.15 ny
2002 0.12 ca
2001 NA ny
2002 0.13 ny
2002 0.1 ca
I want to calculate a mean value per year and per town, like this:
df %>% group_by(year, town) %>% summarise(mean_year = mean(value, na.rm=T))
However, I only want to summarise those town/year combinations that have at least 2 non-NA values. In the example above, I don't want to summarise year 2001 for ny because it has only 1 non-NA value.
So the output would be like this:
town year mean_year
ny 2001 NA
ny 2002 0.156
ca 2002 0.12
Try this:
df %>%
  group_by(year, town) %>%
  summarise(mean_year = ifelse(sum(!is.na(value)) >= 2, mean(value, na.rm = TRUE), NA))
# A tibble: 3 x 3
# Groups: year [2]
year town mean_year
<int> <chr> <dbl>
1 2001 ny NA
2 2002 ca 0.12
3 2002 ny 0.157
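A closely related sketch (same df) uses if/else with an explicit NA_real_ so the summary column stays numeric in every group:
library(dplyr)
df %>%
  group_by(year, town) %>%
  summarise(mean_year = if (sum(!is.na(value)) >= 2) mean(value, na.rm = TRUE) else NA_real_,
            .groups = "drop")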
dput
df <- structure(list(year = c(2001L, 2002L, 2002L, 2001L, 2002L, 2002L,
2001L, 2002L, 2002L), value = c(0.15, 0.19, 0.14, NA, 0.15, 0.12,
NA, 0.13, 0.1), town = c("ny", "ny", "ca", "ny", "ny", "ca",
"ny", "ny", "ca")), class = "data.frame", row.names = c(NA, -9L
))

stacking/melting multiple columns into multiple columns in R

I am trying to melt/stack/gather multiple specific columns of a dataframe into 2 columns, retaining all the others.
I have tried many answers on Stack Overflow without success (some are below). I basically have a situation similar to this post:
Reshaping multiple sets of measurement columns (wide format) into single columns (long format)
only with many more columns to retain and combine. It is important to mention that my year columns are factors, and I have many more columns than in the sample listed below, so I want to refer to columns by name rather than position.
> df
ID Code Country year.x value.x year.y value.y year.x.x value.x.x
1 A USA 2000 34.33422 2001 35.35241 2002 42.30042
1 A Spain 2000 34.71842 2001 39.82727 2002 43.22209
3 B USA 2000 35.98180 2001 37.70768 2002 44.40232
3 B Peru 2000 33.00000 2001 37.66468 2002 41.30232
4 C Argentina 2000 37.78005 2001 39.25627 2002 45.72927
4 C Peru 2000 40.52575 2001 40.55918 2002 46.62914
I tried using pivot_longer from tidyr based on the post above, which seemed very similar, but it resulted in various errors depending on what I did:
pivot_longer(df,
cols = -c(ID, Code, Country),
names_to = c(".value", "group"),
names_sep = ".")
I also played with melt from reshape2 in various ways, which either melted only the value columns or only the year columns. For example:
new.df <- reshape2:::melt(df, id.var = c("ID", "Code", "Country"), measure.vars=c("value.x", "value.y", "value.x.x", "value.y.y", "value.x.x.x", "value.y.y.y"), value.name = "value", variable.vars=c('year.x','year.y', "year.x.x", "year.y.y", "year.x.x.x", "year.y.y.y", "value.x", variable.name = "year")
I also tried dplyr gather based on other posts but I find it extremely difficult to understand the help page and posts.
To be clear, this is what I am looking to achieve:
ID Code Country year value
1 A USA 2000 34.33422
1 A Spain 2000 34.71842
3 B USA 2000 35.98180
3 B Peru 2000 33.00000
4 C Argentina 2000 37.78005
4 C Peru 2000 40.52575
1 A USA 2001 35.35241
1 A Spain 2001 39.82727
3 B USA 2001 37.70768
3 B Peru 2001 37.66468
4 C Argentina 2001 39.25627
4 C Peru 2001 40.55918
1 A USA 2002 42.30042
etc.
I really appreciate the help here.
We can specify the names_pattern:
library(tidyr)
library(dplyr)
df %>%
  pivot_longer(cols = -c(ID, Code, Country),
               names_to = c(".value", "group"),
               names_pattern = "(.*)\\.(.*)")
Or use names_sep with an escaped ., since according to ?pivot_longer:
names_sep takes the same specification as separate(), and can either be a numeric vector (specifying positions to break on) or a single string (specifying a regular expression to split on).
This means the separator is treated as a regular expression, so an unescaped . matches any character rather than the literal dot. To match the literal dot, either escape it or place it inside square brackets:
pivot_longer(df,
             cols = -c(ID, Code, Country),
             names_to = c(".value", "group"),
             names_sep = "\\.")
# A tibble: 18 x 6
# ID Code Country group year value
# <int> <chr> <chr> <chr> <int> <dbl>
# 1 1 A USA x 2000 34.3
# 2 1 A USA y 2001 35.4
# 3 1 A USA z 2002 42.3
# 4 1 A Spain x 2000 34.7
# 5 1 A Spain y 2001 39.8
# 6 1 A Spain z 2002 43.2
# 7 3 B USA x 2000 36.0
# 8 3 B USA y 2001 37.7
# 9 3 B USA z 2002 44.4
#10 3 B Peru x 2000 33
#11 3 B Peru y 2001 37.7
#12 3 B Peru z 2002 41.3
#13 4 C Argentina x 2000 37.8
#14 4 C Argentina y 2001 39.3
#15 4 C Argentina z 2002 45.7
#16 4 C Peru x 2000 40.5
#17 4 C Peru y 2001 40.6
#18 4 C Peru z 2002 46.6
Update
For the updated dataset (df2), first rename the columns with duplicated suffixes so that each name contains a single dot, then pivot:
library(stringr)
df2 %>%
  rename_at(vars(matches("year|value")),
            ~ str_replace(., "^([^.]+\\.[^.]+)\\.([^.]+)$", "\\1\\2")) %>%
  pivot_longer(cols = -c(ID, Code, Country),
               names_to = c(".value", "group"),
               names_pattern = "(.*)\\.(.*)")
Or, without the rename, use a regex lookbehind in names_sep:
df2 %>%
  pivot_longer(cols = -c(ID, Code, Country),
               names_to = c(".value", "group"),
               names_sep = "(?<=year|value)\\.")
data
df <- structure(list(ID = c(1L, 1L, 3L, 3L, 4L, 4L), Code = c("A",
"A", "B", "B", "C", "C"), Country = c("USA", "Spain", "USA",
"Peru", "Argentina", "Peru"), year.x = c(2000L, 2000L, 2000L,
2000L, 2000L, 2000L), value.x = c(34.33422, 34.71842, 35.9818,
33, 37.78005, 40.52575), year.y = c(2001L, 2001L, 2001L, 2001L,
2001L, 2001L), value.y = c(35.35241, 39.82727, 37.70768, 37.66468,
39.25627, 40.55918), year.z = c(2002L, 2002L, 2002L, 2002L, 2002L,
2002L), value.z = c(42.30042, 43.22209, 44.40232, 41.30232, 45.72927,
46.62914)), class = "data.frame", row.names = c(NA, -6L))
df2 <- structure(list(ID = c(1L, 1L, 3L, 3L, 4L, 4L), Code = c("A",
"A", "B", "B", "C", "C"), Country = c("USA", "Spain", "USA",
"Peru", "Argentina", "Peru"), year.x = c(2000L, 2000L, 2000L,
2000L, 2000L, 2000L), value.x = c(34.33422, 34.71842, 35.9818,
33, 37.78005, 40.52575), year.y = c(2001L, 2001L, 2001L, 2001L,
2001L, 2001L), value.y = c(35.35241, 39.82727, 37.70768, 37.66468,
39.25627, 40.55918), year.x.x = c(2002L, 2002L, 2002L, 2002L,
2002L, 2002L), value.x.x = c(42.30042, 43.22209, 44.40232, 41.30232,
45.72927, 46.62914)), class = "data.frame", row.names = c(NA,
-6L))
