I have the following dataset:
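(The dataset was posted as a picture; here is a hypothetical reconstruction consistent with the differences quoted below, assuming two sub_groups per group and group labels A, B, C:)
df <- data.frame(group = rep(c("A", "B", "C"), each = 2),
                 sub_group = rep(1:2, 3),
                 value = c(10, 0, 0, 20, 30, 31))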
I want to calculate the difference between values within each group, taking the value of sub_group 1 as the first term of the difference: 10 - 0 = 10; 0 - 20 = -20; 30 - 31 = -1. I want to do this in R.
I know that it would be something like this, but I do not know how to put the sub_group into the code:
library(tidyverse)
df %>%
  group_by(group) %>%
  summarise(difference = diff(value))
Edited answer after OP's comment:
The OP clarified that the data are not sorted by sub_group within every group, so I added arrange() after group_by(). The OP further clarified that the value at sub_group == 1 should always be the first term of the difference.
Below I demonstrate how to achieve this with an example of 3 sub_groups within every group. The code rests on the assumption that the lowest value of sub_group is 1. After taking the difference, I drop each group's first sub_group row.
library(tidyverse)
df <- tibble(group = rep(LETTERS[1:3], each = 3),
             sub_group = rep(1:3, 3),
             value = c(10, 0, 5, 0, 20, 15, 30, 31, 10))
df
#> # A tibble: 9 × 3
#> group sub_group value
#> <chr> <int> <dbl>
#> 1 A 1 10
#> 2 A 2 0
#> 3 A 3 5
#> 4 B 1 0
#> 5 B 2 20
#> 6 B 3 15
#> 7 C 1 30
#> 8 C 2 31
#> 9 C 3 10
df |>
  group_by(group) |>
  arrange(group, sub_group) |>
  mutate(value = first(value) - value) |>
  slice(2:n())
#> # A tibble: 6 × 3
#> # Groups: group [3]
#> group sub_group value
#> <chr> <int> <dbl>
#> 1 A 2 10
#> 2 A 3 5
#> 3 B 2 -20
#> 4 B 3 -15
#> 5 C 2 -1
#> 6 C 3 20
Created on 2022-10-18 with reprex v2.0.2
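The code above relies on sub_group == 1 sorting first within each group. If that is not guaranteed, a sketch that indexes the sub_group == 1 value explicitly (assuming exactly one such row per group) avoids the ordering assumption:
df |>
  group_by(group) |>
  mutate(difference = value[sub_group == 1] - value) |>
  filter(sub_group != 1) |>
  ungroup()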
P.S. (from the original answer)
In the example data, you show the wrong difference for group C; it should read -1. I am convinced that most people here would appreciate it if you posted your example data as code, or at least as text that can be copied, rather than as a picture.
Related
I want to filter my grouped dataframe based on the number of occurrences of a specific value within a group.
Some exemplary data:
data <- data.frame(ID = sample(c("A", "B", "C", "D"), 100, replace = TRUE),
                   rt = runif(100, 0.2, 1),
                   lapse = sample(1:2, 100, replace = TRUE))
The “lapse” column is my filter variable in this case.
I want to exclude every “ID” group that contains more than 15 rows with “lapse” == 2!
data %>% group_by(ID) %>% count(lapse == 2)
So if, for example, group “A” contains 17 rows with “lapse” == 2, it should be filtered entirely from the dataframe.
First I created some reproducible data using set.seed and checked the number of values per group. It turns out that in this case only group D has more than 15 rows with lapse == 2. You can use filter and sum the rows with lapse == 2 per group like this:
set.seed(7)
data <- data.frame(ID = sample(c("A", "B", "C", "D"), 100, replace = TRUE),
                   rt = runif(100, 0.2, 1),
                   lapse = sample(1:2, 100, replace = TRUE))
library(dplyr)
# Check n values per group
data %>%
  group_by(ID, lapse) %>%
  summarise(n = n())
#> # A tibble: 8 × 3
#> # Groups: ID [4]
#> ID lapse n
#> <chr> <int> <int>
#> 1 A 1 8
#> 2 A 2 7
#> 3 B 1 13
#> 4 B 2 15
#> 5 C 1 18
#> 6 C 2 6
#> 7 D 1 17
#> 8 D 2 16
data %>%
  group_by(ID) %>%
  filter(!(sum(lapse == 2) > 15))
#> # A tibble: 67 × 3
#> # Groups: ID [3]
#> ID rt lapse
#> <chr> <dbl> <int>
#> 1 B 0.517 2
#> 2 C 0.589 1
#> 3 C 0.598 2
#> 4 C 0.715 1
#> 5 B 0.475 2
#> 6 C 0.965 1
#> 7 B 0.234 1
#> 8 B 0.812 2
#> 9 C 0.517 1
#> 10 B 0.700 1
#> # … with 57 more rows
Created on 2023-01-08 with reprex v2.0.2
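For comparison, the same filter can be sketched in base R, using ave() to count the lapse == 2 rows per ID (with the seed above, this keeps groups A, B, and C):
# keep rows whose ID has at most 15 entries with lapse == 2
data[ave(data$lapse == 2, data$ID, FUN = sum) <= 15, ]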
I am trying to figure out a dplyr-specific way of continuing a sequence of numbers when there are NAs in that column.
For example I have this dataframe:
library(tibble)
dat <- tribble(
  ~x,       ~group,
  1,        "A",
  2,        "A",
  NA_real_, "A",
  NA_real_, "A",
  1,        "B",
  NA_real_, "B",
  3,        "B"
)
dat
#> # A tibble: 7 × 2
#> x group
#> <dbl> <chr>
#> 1 1 A
#> 2 2 A
#> 3 NA A
#> 4 NA A
#> 5 1 B
#> 6 NA B
#> 7 3 B
I would like this one:
#> # A tibble: 7 × 2
#> x group
#> <dbl> <chr>
#> 1 1 A
#> 2 2 A
#> 3 3 A
#> 4 4 A
#> 5 1 B
#> 6 2 B
#> 7 3 B
When I try this I get a warning which makes me think I am probably approaching this incorrectly:
library(dplyr)
dat %>%
  group_by(group) %>%
  mutate(n = n()) %>%
  mutate(new_seq = seq_len(n))
#> Warning in seq_len(n): first element used of 'length.out' argument
#> Warning in seq_len(n): first element used of 'length.out' argument
#> # A tibble: 7 × 4
#> # Groups: group [2]
#> x group n new_seq
#> <dbl> <chr> <int> <int>
#> 1 1 A 4 1
#> 2 2 A 4 2
#> 3 NA A 4 3
#> 4 NA A 4 4
#> 5 1 B 3 1
#> 6 NA B 3 2
#> 7 3 B 3 3
It's easier if you do it in one go. Your approach is not 'wrong'; it's just that seq_len() needs a single integer, and you are giving it the whole column n (a vector), so seq_len() warns and uses only its first value.
dat %>%
  group_by(group) %>%
  mutate(x = seq_len(n()))
Note that row_number might be even easier here:
dat %>%
  group_by(group) %>%
  mutate(x = row_number())
We could use rowid() from data.table directly if the intention is to create the sequence and the group size is just an intermediate column:
library(data.table)
library(dplyr)
dat %>%
  mutate(new_seq = rowid(group))
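For dat above, rowid(group) returns the within-group row index directly: 1, 2, 3, 4 for group A and 1, 2, 3 for group B, which matches the desired x.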
The issue with using a column right after it is created is that it is no longer a single value, as shown in Maël's post. If we need to do that, wrap the column in first(), since seq_len() is not vectorized (though the intermediate column is not needed here anyway):
dat %>%
  group_by(group) %>%
  mutate(n = n()) %>%
  mutate(new_seq = seq_len(first(n)))
A base R option using ave() (which works in a similar way to group_by() in dplyr):
> transform(dat, x = ave(x, group, FUN = seq_along))
x group
1 1 A
2 2 A
3 3 A
4 4 A
5 1 B
6 2 B
7 3 B
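ave() splits x by group, applies seq_along() within each chunk, and writes the results back in the original row positions, so the NAs are simply replaced by their within-group index.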
I'm following up on this question. My LIST of data.frames below is made from my data. However, LIST is missing the paper column (the name(s) of the missing column(s) are always provided), which is available in the original data.
I was wondering how to put the missing paper column back into LIST to achieve my DESIRED_LIST below?
I tried the solution suggested in this answer (lapply(LIST, function(x)data[do.call(paste, data[names(x)]) %in% do.call(paste, x),])) but it doesn't produce my DESIRED_LIST.
A Base R or tidyverse solution is appreciated.
Reproducible data and code are below.
m2="
paper study sample comp ES bar
1 1 1 1 1 7
1 2 2 2 2 6
1 2 3 3 3 5
2 3 4 4 4 4
2 3 4 4 5 3
2 3 4 5 6 2
2 3 4 5 7 1"
data <- read.table(text=m2,h=T)
LIST <- list(data.frame(study = 1,         sample = 1,         comp = 1),
             data.frame(study = rep(3, 4), sample = rep(4, 4), comp = c(4, 4, 5, 5)),
             data.frame(study = c(2, 2),   sample = c(2, 3),   comp = c(2, 3)))

DESIRED_LIST <- list(data.frame(paper = 1,         study = 1,         sample = 1,         comp = 1),
                     data.frame(paper = rep(2, 4), study = rep(3, 4), sample = rep(4, 4), comp = c(4, 4, 5, 5)),
                     data.frame(paper = rep(1, 2), study = c(2, 2),   sample = c(2, 3),   comp = c(2, 3)))
Please find a solution with the package data.table. Is this what you were looking for?
Reprex 1
library(data.table)
cols_to_remove <- c("ES")
split(setDT(data)[, (cols_to_remove) := NULL], by = c("paper", "study"))
#> $`1.1`
#> paper study sample comp
#> 1: 1 1 1 1
#>
#> $`1.2`
#> paper study sample comp
#> 1: 1 2 2 2
#> 2: 1 2 3 3
#>
#> $`2.3`
#> paper study sample comp
#> 1: 2 3 4 4
#> 2: 2 3 4 4
#> 3: 2 3 4 5
#> 4: 2 3 4 5
Created on 2021-11-06 by the reprex package (v2.0.1)
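Note that split(..., by = c("paper", "study")) names each list element by the combined grouping key, which is why the elements above are labelled 1.1, 1.2, and 2.3; the (cols_to_remove) := NULL step deletes the ES column from data by reference beforehand.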
EDIT
Please find solution 2 with the package dplyr
Reprex 2
library(dplyr)
drop.cols <- c("ES")
data %>%
  group_by(paper, study) %>%
  select(-all_of(drop.cols)) %>%
  group_split()
#> <list_of<
#> tbl_df<
#> paper : integer
#> study : integer
#> sample: integer
#> comp : integer
#> >
#> >[3]>
#> [[1]]
#> # A tibble: 1 x 4
#> paper study sample comp
#> <int> <int> <int> <int>
#> 1 1 1 1 1
#>
#> [[2]]
#> # A tibble: 2 x 4
#> paper study sample comp
#> <int> <int> <int> <int>
#> 1 1 2 2 2
#> 2 1 2 3 3
#>
#> [[3]]
#> # A tibble: 4 x 4
#> paper study sample comp
#> <int> <int> <int> <int>
#> 1 2 3 4 4
#> 2 2 3 4 4
#> 3 2 3 4 5
#> 4 2 3 4 5
Created on 2021-11-07 by the reprex package (v2.0.1)
Consider ave() to create a grouping column (needed because of the repeated rows) and then run a merge for each list element:
DESIRED_LIST_SO <- lapply(
  LIST,
  function(df) merge(
    transform(data, grp = ave(paper, paper, study, sample, comp, FUN = seq_along)),
    transform(df, grp = ave(study, study, sample, comp, FUN = seq_along)),
    by = c("study", "sample", "comp", "grp")
  )[c("paper", "study", "sample", "comp")]
)
all.equal(DESIRED_LIST, DESIRED_LIST_SO)
[1] TRUE
(Consider keeping the unique identifiers, ES and bar, in the desired list to avoid the duplicate rows.)
A tidyverse solution. First, create a look-up table, data2, which contains the four target columns; mutate(across(.fns = as.numeric)) makes the column types consistent and may not be needed. Second, use map() to apply left_join() to all data frames in LIST. LIST2 and DESIRED_LIST are identical.
data2 <- data %>%
  distinct(paper, study, sample, comp) %>%
  mutate(across(.fns = as.numeric))

LIST2 <- map(LIST, function(x) {
  x2 <- x %>%
    left_join(data2, by = names(x)) %>%
    select(all_of(names(data2)))
  return(x2)
})
# Check if the results are the same
identical(DESIRED_LIST, LIST2)
# [1] TRUE
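A base R analogue of the same look-up idea is sketched below; note that merge() does not preserve the original row order, so the result may differ from DESIRED_LIST in row order and column classes:
data2 <- unique(data[c("paper", "study", "sample", "comp")])
LIST3 <- lapply(LIST, function(x) {
  merge(x, data2, by = c("study", "sample", "comp"), sort = FALSE)[names(data2)]
})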
Using dplyr (preferably), I am trying to calculate the group mean for each observation while excluding that observation from the group.
It seems that this should be doable with a combination of rowwise() and group_by(), but the two functions cannot be used simultaneously.
Given this data frame:
# data_frame() is deprecated; tibble() behaves the same
df <- tibble(grouping = rep(LETTERS[1:5], 3),
             value = 1:15) %>%
  arrange(grouping)
df
#> Source: local data frame [15 x 2]
#>
#> grouping value
#> (chr) (int)
#> 1 A 1
#> 2 A 6
#> 3 A 11
#> 4 B 2
#> 5 B 7
#> 6 B 12
#> 7 C 3
#> 8 C 8
#> 9 C 13
#> 10 D 4
#> 11 D 9
#> 12 D 14
#> 13 E 5
#> 14 E 10
#> 15 E 15
I'd like to get the group mean for each observation with that observation excluded from the group, resulting in:
#> grouping value special_mean
#> (chr) (int)
#> 1 A 1 8.5 # i.e. (6 + 11) / 2
#> 2 A 6 6 # i.e. (1 + 11) / 2
#> 3 A 11 3.5 # i.e. (1 + 6) / 2
#> 4 B 2 9.5
#> 5 B 7 7
#> 6 B 12 4.5
#> 7 C 3 ...
I've attempted nesting rowwise() inside a function called by do(), but haven't gotten it to work, along these lines:
special_avg <- function(chunk) {
  chunk %>%
    rowwise() #%>%
  # filter or something...?
}

df %>%
  group_by(grouping) %>%
  do(special_avg(.))
No need to define a custom function; instead we can simply sum all elements of the group, subtract the current value, and divide by the number of elements per group minus 1.
df %>%
  group_by(grouping) %>%
  mutate(special_mean = (sum(value) - value) / (n() - 1))
# grouping value special_mean
# (chr) (int) (dbl)
#1 A 1 8.5
#2 A 6 6.0
#3 A 11 3.5
#4 B 2 9.5
#5 B 7 7.0
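A quick arithmetic check of the identity for group A (values 1, 6, 11):
(sum(c(1, 6, 11)) - 1) / (3 - 1)
#> [1] 8.5
This matches mean(c(6, 11)) from the expected output above.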
I came across this old question just by chance and wondered whether there is a general solution that would work for other aggregation functions besides mean(), e.g., max() as requested by jlesuffleur, or median().
The idea is to omit the current row from the aggregate by looping over the rows within each group:
library(dplyr)
df %>%
  group_by(grouping) %>%
  mutate(special_mean = sapply(1:n(), function(i) mean(value[-i])))
grouping value special_mean
<chr> <int> <dbl>
1 A 1 8.5
2 A 6 6
3 A 11 3.5
4 B 2 9.5
5 B 7 7
...
This will work for max() as well
df %>%
  group_by(grouping) %>%
  mutate(special_max = sapply(1:n(), \(i) max(value[-i])))
grouping value special_max
<chr> <int> <int>
1 A 1 11
2 A 6 11
3 A 11 6
4 B 2 12
5 B 7 12
6 B 12 7
...
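The same pattern should also work for median(), mentioned above; an untested sketch along the same lines:
df %>%
  group_by(grouping) %>%
  mutate(special_median = sapply(1:n(), \(i) median(value[-i])))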
For the sake of completeness, here is also a data.table solution:
library(data.table)
setDT(df)[, special_mean := sapply(1:.N, function(i) mean(value[-i])), by = grouping][]
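Note that setDT(df) converts df to a data.table by reference, and the trailing [] only forces the result to print.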