Use Negation with Select in dplyr 0.7.x

I'm trying to write a function that needs to exclude a user passed variable from the resultant data frame. I'm also taking this opportunity to learn a bit more about the new dplyr syntax.
The function acts like a cross join for data frames. I want to use it as a clean way of duplicating data across parameters of a function.
The function works as follows:
crossjoin_df <- function(df1, df2, temp_col = ".k") {
  df1 <- df1 %>%
    mutate(!!temp_col := 1)
  df2 <- df2 %>%
    mutate(!!temp_col := 1)
  out <- left_join(df1, df2, by = temp_col)
  # I'm trying to replace the next line
  out[, !names(out) == temp_col]
}
params <- data.frame(k = c(11,10),
n = c(27,26))
data <- data.frame(a = 1:3,
b = 4:6)
crossjoin_df(params, data) # 6 row data set
I want to see if it's possible to replace the last statement with a piped select statement. However, the negation does not seem to be working.
I am able to get something like:
out %>% select(!!temp_col)
to work, but that obviously only selects .k. I am not able to get anything like:
out %>% select(-!!temp_col)
to work.

You'll need rlang, the backend package for dplyr that enables tidy eval, whether you want to keep using strings, in which case you'll need sym to turn the string into a symbol that can be unquoted:
library(dplyr)
params <- data.frame(k = c(11,10),
n = c(27,26))
data <- data.frame(a = 1:3,
b = 4:6)
crossjoin_df <- function(df1, df2, temp_col = ".k") {
  df1 <- df1 %>% mutate(!!temp_col := 1)
  df2 <- df2 %>% mutate(!!temp_col := 1)
  left_join(df1, df2, by = temp_col) %>%
    select(-!!rlang::sym(temp_col))
}
crossjoin_df(params, data)
#> k n a b
#> 1 11 27 1 4
#> 2 11 27 2 5
#> 3 11 27 3 6
#> 4 10 26 1 4
#> 5 10 26 2 5
#> 6 10 26 3 6
...or switch to full tidy eval, in which case you'll need quo_name to turn a quosure into a name:
crossjoin_df <- function(df1, df2, temp_col = .k) {
  temp_col <- enquo(temp_col)
  df1 <- df1 %>% mutate(!!rlang::quo_name(temp_col) := 1)
  df2 <- df2 %>% mutate(!!rlang::quo_name(temp_col) := 1)
  left_join(df1, df2, by = rlang::quo_name(temp_col)) %>%
    select(-!!temp_col)
}
crossjoin_df(params, data)
#> k n a b
#> 1 11 27 1 4
#> 2 11 27 2 5
#> 3 11 27 3 6
#> 4 10 26 1 4
#> 5 10 26 2 5
#> 6 10 26 3 6
Alternatively, just use tidyr::crossing:
tidyr::crossing(params, data)
#> k n a b
#> 1 11 27 1 4
#> 2 11 27 2 5
#> 3 11 27 3 6
#> 4 10 26 1 4
#> 5 10 26 2 5
#> 6 10 26 3 6
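If you are on a much newer dplyr than the 0.7.x in the question (1.1.0 or later), there is also a dedicated cross-join verb; a hedged sketch:
dplyr::cross_join(params, data)
Note that tidyr::crossing() additionally de-duplicates and sorts its inputs, which cross_join() does not, so the two only agree when the inputs are already unique.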

You can use one_of, and then negate the selection with -:
out %>% select(-one_of(temp_col))
crossjoin_df <- function(df1, df2, temp_col = ".k") {
  # `$`(df1, temp_col) <- 1
  df1 <- df1 %>%
    mutate(!!temp_col := 1)
  # `$`(df2, temp_col) <- 1
  df2 <- df2 %>%
    mutate(!!temp_col := 1)
  left_join(df1, df2, by = temp_col) %>% select(-one_of(temp_col))
}
params <- data.frame(k = c(11,10),
n = c(27,26))
data <- data.frame(a = 1:3,
b = 4:6)
crossjoin_df(params, data)
# k n a b
#1 11 27 1 4
#2 11 27 2 5
#3 11 27 3 6
#4 10 26 1 4
#5 10 26 2 5
#6 10 26 3 6

This should work as well:
out %>% select_(paste0("-",temp_col))
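Note that select_() has since been deprecated and one_of() is superseded in current tidyselect; on a modern dplyr the string-based negation would be written roughly like this (a hedged sketch, not applicable to 0.7.x):
out %>% select(-all_of(temp_col))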

Related

R iterating by group and mapping values based on column value

I have the following data frame in R:
df <- data.frame(name = c('p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end'),
time = c(1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31),
target = c(1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2),
comb = c(0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1))
And another data frame:
data <- data.frame(time = c(2,5,8,14,14,20,21,26,28,28),
name = c('a','b','c','d','e','f','g','h','i','j'))
If we take a look at df and sort the data by target and comb, we will notice that there are basically "groups". For example, for target=1 and comb=0 there are four entries p1_start, p1_end, p2_start, p2_end, and it is the same for all other target/comb combinations.
On the other side, data contains entries where time is a timestamp.
Goal: I want to map the values from both data frames based on time.
Example: The first entry of data has time=2, meaning it happened between p1_start and p1_end, so it should get the values target=1 and comb=0 mapped to the data data frame.
Example 2: The entries of data with time=14 happened between p2_start and p2_end, so they should get the values target=1 and comb=1 mapped to the data data frame.
Idea: I thought I would iterate over df by target and comb and, for each combination, check whether there are rows in data whose time falls between the corresponding start and end. The check could be done with the following command:
data[which(data$time > p1_start & data$time < p2_end),]
Once I get the rows, it is easy to append the values.
Problem: how could I do the iteration? I tried with the following:
df %>%
group_by(target, comb) %>%
print(data[which(data$time > df$p1_start & data$time < df$p2_end),])
But I am getting an error that time has not been initialized
Your problem is best known as performing a non-equi join: we need to find the range in one dataframe that corresponds to each value in one or more given vectors. This is handled well by the data.table package.
We would first transform your df into a format suitable for performing the join and then join data with df by time <= end while time >= start. Here is the code:
library(data.table)
setDT(df)[, c("type", "name") := tstrsplit(name, "_", fixed = TRUE)]
df <- dcast(df, ... ~ name, value.var = "time")
cols <- c("target", "comb", "type")
setDT(data)[df, (cols) := mget(paste0("i.", cols)), on = .(time<=end, time>=start)]
After dcast, df looks like this
target comb type end start
1: 1 0 p1 3 1
2: 1 0 p2 7 5
3: 1 1 p1 11 9
4: 1 1 p2 15 13
5: 2 0 p1 19 17
6: 2 0 p2 23 21
7: 2 1 p1 27 25
8: 2 1 p2 31 29
And the output is
> data
time name target comb type
1: 2 a 1 0 p1
2: 5 b 1 0 p2
3: 8 c NA NA <NA>
4: 14 d 1 1 p2
5: 14 e 1 1 p2
6: 20 f NA NA <NA>
7: 21 g 2 0 p2
8: 26 h 2 1 p1
9: 28 i NA NA <NA>
10: 28 j NA NA <NA>
Here is a tidyverse solution:
library(tidyr)
library(dplyr)
df %>%
  rename(name_df = name) %>%
  mutate(x = time + 1) %>%
  pivot_longer(
    cols = c(time, x),
    names_to = "helper",
    values_to = "time"
  ) %>%
  right_join(data, by = "time") %>%
  select(time, name, target, comb)
time name target comb
<dbl> <chr> <dbl> <dbl>
1 2 a 1 0
2 5 b 1 0
3 8 c 1 0
4 14 d 1 1
5 14 e 1 1
6 20 f 2 0
7 21 g 2 0
8 26 h 2 1
9 28 i 2 1
10 28 j 2 1
df <- data.frame(name = c('p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end'),
time = c(1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31),
target = c(1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2),
comb = c(0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1))
data <- data.frame(time = c(2,5,8,14,14,20,21,26,28,28),
name = c('a','b','c','d','e','f','g','h','i','j'))
library(fuzzyjoin)
library(tidyverse)
tmp <- df %>%
  separate(name,
           into = c("p", "period"),
           sep = "_",
           remove = TRUE) %>%
  pivot_wider(
    id_cols = c(p, target, comb),
    names_from = period,
    values_from = time
  ) %>%
  select(-p)
fuzzy_left_join(
  x = data,
  y = tmp,
  by = c("time" = "start",
         "time" = "end"),
  match_fun = list(`>=`, `<=`))
#> time name target comb start end
#> 1 2 a 1 0 1 3
#> 2 5 b 1 0 5 7
#> 3 8 c NA NA NA NA
#> 4 14 d 1 1 13 15
#> 5 14 e 1 1 13 15
#> 6 20 f NA NA NA NA
#> 7 21 g 2 0 21 23
#> 8 26 h 2 1 25 27
#> 9 28 i NA NA NA NA
#> 10 28 j NA NA NA NA
Created on 2022-01-11 by the reprex package (v2.0.1)
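As a side note, on dplyr 1.1.0 or later the same non-equi join can be written directly with join_by(); a hedged sketch reusing the tmp table built above:
library(dplyr)
left_join(data, tmp, by = join_by(time >= start, time <= end))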

Generate same random sample each time in loop using sample_frac

How can I generate the same random sample each time in a loop using sample_frac?
library(dplyr)
tbl = tibble(val = 1:50)
for (i in 1:3) {
  tbl_sample = tbl %>% sample_frac(0.1)
  print(tbl_sample)
}
Create a list and then assign to each list element by looping over the sequence (print only prints the output to the console; it doesn't store anything):
n <- 3
lst1 <- vector('list', n)
for(i in seq_len(n)) {
  lst1[[i]] <- tbl %>%
    sample_frac(0.1)
}
If we also want to keep track of the original row indices of the sampled rows:
for(i in seq_len(n)) {
  lst1[[i]] <- tbl %>%
    mutate(rn = row_number()) %>%
    slice_sample(prop = 0.1)
}
In addition, this can be done automatically with replicate:
lst1 <- replicate(n, tbl %>%
  sample_frac(0.1), simplify = FALSE)
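If "the same random sample each time" is meant literally, i.e. every iteration should return an identical sample, a hedged option is to reset the RNG seed before each draw (the seed value 123 here is arbitrary):
for(i in seq_len(n)) {
  set.seed(123)   # same seed before every draw, so every list element is identical
  lst1[[i]] <- tbl %>%
    sample_frac(0.1)
}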
We can use purrr and loop through 1:3, with the same expression as the .f argument:
library(purrr)
library(dplyr)
map(1:3, ~{
tbl_sample = tbl %>% sample_frac(0.1)
head(tbl_sample)
})
[[1]]
# A tibble: 5 x 1
val
<int>
1 1
2 20
3 35
4 32
5 19
[[2]]
# A tibble: 5 x 1
val
<int>
1 45
2 24
3 42
4 46
5 10
[[3]]
# A tibble: 5 x 1
val
<int>
1 35
2 23
3 28
4 49
5 43
You may want each sample in its own column of a dataframe. For that we can use imap_dfc:
imap_dfc(1:3, ~{
tbl %>% sample_frac(0.1) %>% head %>% set_names(paste0('sample_', .y))
})
# A tibble: 5 x 3
sample_1 sample_2 sample_3
<int> <int> <int>
1 49 17 8
2 29 9 6
3 25 35 50
4 9 4 44
5 34 45 3
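If you instead want the samples stacked long-form, with an index column saying which draw each row came from, a hedged variant is map_dfr with its .id argument:
map_dfr(1:3, ~ tbl %>% sample_frac(0.1), .id = "draw")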

Binding rows based on common id

I have a very simple case where I want to combine several data frames into one, based on the common id elements of a particular data frame.
Example:
id <- c(1, 2, 3)
x <- c(10, 12, 14)
data1 <- data.frame(id, x)
id <- c(2, 3)
x <- c(20, 22)
data2 <- data.frame(id, x)
id <- c(1, 3)
x <- c(30, 32)
data3 <- data.frame(id, x)
Which gives us,
$data1
id x
1 1 10
2 2 12
3 3 14
$data2
id x
1 2 20
2 3 22
$data3
id x
1 1 30
2 3 32
Now, I want to combine all three data frames based on the id's of the data3. The expected output should look like
> comb
id x
1 1 10
2 1 NA
3 1 30
4 3 14
5 3 22
6 3 32
I am trying the following, but not getting the expected output.
library(dplyr)
library(tidyr)
combined <- bind_rows(data1, data2, data3, .id = "id") %>% arrange(id)
Any idea how to get the expected output?
Does this work:
library(dplyr)
library(tidyr)
data1 %>%
  full_join(data2, by = 'id') %>%
  full_join(data3, by = 'id') %>%
  arrange(id) %>%
  right_join(data3, by = 'id') %>%
  pivot_longer(cols = -id) %>%
  select(-name) %>%
  distinct()
# A tibble: 6 x 2
id value
<dbl> <dbl>
1 1 10
2 1 NA
3 1 30
4 3 14
5 3 22
6 3 32
Combine the 3 dataframes into one with bind_rows and use filter to keep only the id's present in the 3rd dataframe.
library(dplyr)
library(tidyr)
bind_rows(data1, data2, data3, .id = "new_id") %>%
filter(id %in% id[new_id == 3]) %>%
complete(new_id, id)
# new_id id x
# <chr> <dbl> <dbl>
#1 1 1 10
#2 1 3 14
#3 2 1 NA
#4 2 3 22
#5 3 1 30
#6 3 3 32
A pure base R solution can also do it:
lst <- list(data1, data2, data3)
reshape(
subset(
reshape(
do.call(rbind, Map(cbind, lst, grp = seq_along(lst))),
idvar = "id",
timevar = "grp",
direction = "wide"
),
id %in% lst[[3]]$id
),
idvar = "id",
varying = -1,
direction = "long"
)[c("id", "x")]
which gives
id x
1.1 1 10
3.1 3 14
1.2 1 NA
3.2 3 22
1.3 1 30
3.3 3 32
Using base R
do.call(rbind, unname(lapply(mget(ls(pattern = "^data\\d+$")), \(x) {
x1 <- subset(x, id %in% data3$id)
v1 <- setdiff(data3$id, x1$id)
if(length(v1) > 0) rbind(x1, cbind(id = v1, x = NA)) else x1
})))
-output
id x
1 1 10
3 3 14
2 3 22
11 1 NA
12 1 30
21 3 32
bind_rows(data1, data2, data3, .id = 'grp') %>%
  complete(id, grp) %>%
  select(-grp) %>%
  filter(id %in% data3$id)
# A tibble: 6 x 2
id x
<dbl> <dbl>
1 1 10
2 1 NA
3 1 30
4 3 14
5 3 22
6 3 32

How can I remove rows with the same value in 2 or more rows in R

I have a dataframe in the following format with ID's and A/B's. The dataframe is very long, over 3000 ID's.
id   type
1    A
2    B
3    A
4    A
5    B
6    A
7    B
8    A
9    B
10   A
11   A
12   A
13   B
...  ...
I need to remove all rows (the A's and the following B) wherever more than one A appears in a row. I don't want to remove just the duplicates: if there are two or more consecutive A's, I want to remove all of those A's and the B that follows, up to the next A.
id   type
1    A
2    B
6    A
7    B
8    A
9    B
...  ...
Do I need a loop for this problem? I'd appreciate any help, thank you!
This might be what you want:
First, define a function that notes the indices of what you want to remove:
row_sequence <- function(value) {
  inds <- which(value == lead(value))
  sort(unique(c(inds, inds + 1, inds + 2)))
}
Apply the function to your dataframe by first extracting the rows that you want to remove into df1 and, second, anti_joining df with df1 to obtain the final dataframe:
library(dplyr)
df1 <- df %>% slice(row_sequence(type))
df2 <- df %>%
anti_join(., df1)
Result:
df2
id type
1 1 A
2 2 B
3 6 A
4 7 B
5 8 A
6 9 B
Data:
df <- data.frame(
id = 1:13,
type = c("A","B","A","A","B","A","B","A","B","A","A","A","B")
)
I assumed there is only one B after a series of duplicated A values; if that is not the case, just let me know and I will modify the code:
library(dplyr)
library(tidyr)
library(data.table)
df %>%
  mutate(rles = data.table::rleid(type)) %>%
  group_by(rles) %>%
  mutate(rles = ifelse(length(rles) > 1, NA, rles)) %>%
  ungroup() %>%
  mutate(rles = ifelse(!is.na(rles) & is.na(lag(rles)) & type == "B", NA, rles)) %>%
  drop_na() %>%
  select(-rles)
# A tibble: 6 x 2
id type
<int> <chr>
1 1 A
2 2 B
3 6 A
4 7 B
5 8 A
6 9 B
Data
df <- read.table(header = TRUE, text = "
id type
1 A
2 B
3 A
4 A
5 B
6 A
7 B
8 A
9 B
10 A
11 A
12 A
13 B")

join and sum columns together R

I have a dataframe:
df <- data.frame(ca = c("a","b","a","c","b", "b"),
f = c(3,4,0,NA,3, 4),
f2 = c(NA,5,6,1,9, 7),
f3 = c(3,0,6,3,0, 8))
I want to join (sum) my columns "f" and "f2" and name the result "f_new".
Example:
df <- data.frame(ca = c("a","b","a","c","b", "b"),
f_new = c(3,9,6,1,12, 11),
f3 = c(3,0,6,3,0, 8))
Do you have an idea of how to do this with summarise, spread, group_by?
Using dplyr you can do this:
df %>%
rowwise() %>%
mutate(f_new=sum(f, f2, na.rm = T))
# A tibble: 6 x 5
# ca f f2 f3 f_new
# <fct> <dbl> <dbl> <dbl> <dbl>
#1 a 3 NA 3 3
#2 b 4 5 0 9
#3 a 0 6 6 6
#4 c NA 1 3 1
#5 b 3 9 0 12
#6 b 4 7 8 11
This method keeps all the original columns and, thanks to na.rm = TRUE, simply ignores NA values in the sum.
Here is an answer using tidyverse methods from dplyr and tidyr
library(tidyverse)
df <- data.frame(ca = c("a","b","a","c","b", "b"),
f = c(3,4,0,NA,3, 4),
f2 = c(NA,5,6,1,9, 7),
f3 = c(3,0,6,3,0, 8))
df %>%
replace_na(list(f = 0, f2 = 0)) %>%
mutate(f_new = f + f2)
#> ca f f2 f3 f_new
#> 1 a 3 0 3 3
#> 2 b 4 5 0 9
#> 3 a 0 6 6 6
#> 4 c 0 1 3 1
#> 5 b 3 9 0 12
#> 6 b 4 7 8 11
dplyr can do this quite nicely with the following code. rowwise() allows you to consider each row separately, and the mutate() command sums whatever columns you want. The na.rm = TRUE handles the case where you have NA's and want to ignore them; as a comment mentioned, without it the sum will be NA if any of the summed values is NA.
library(dplyr)
df %>%
rowwise() %>%
mutate(f_new = sum(f,f2, na.rm = TRUE))
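If you want to avoid rowwise() on larger data, a hedged vectorised alternative is rowSums() over the two columns via dplyr's across() (this assumes dplyr 1.0 or later):
library(dplyr)
df %>%
  mutate(f_new = rowSums(across(c(f, f2)), na.rm = TRUE)) %>%
  select(ca, f_new, f3)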
