dplyr: max value in a group, excluding the value in each row? - r

I have a data frame that looks as follows:
> df <- data_frame(g = c('A', 'A', 'B', 'B', 'B', 'C'), x = c(7, 3, 5, 9, 2, 4))
> df
Source: local data frame [6 x 2]
g x
1 A 7
2 A 3
3 B 5
4 B 9
5 B 2
6 C 4
I know how to add a column with the maximum x value for each group g:
> df %>% group_by(g) %>% mutate(x_max = max(x))
Source: local data frame [6 x 3]
Groups: g
g x x_max
1 A 7 7
2 A 3 7
3 B 5 9
4 B 9 9
5 B 2 9
6 C 4 4
But what I would like to get is the maximum x value for each group g, excluding the x value in each row.
For the given example, the desired output would look like this:
Source: local data frame [6 x 3]
Groups: g
g x x_max x_max_exclude
1 A 7 7 3
2 A 3 7 7
3 B 5 9 9
4 B 9 9 5
5 B 2 9 9
6 C 4 4 NA
I thought I might be able to use row_number() to remove particular elements and take the max of what remained, but hit warning messages and got incorrect -Inf output:
> df %>% group_by(g) %>% mutate(x_max = max(x), r = row_number(), x_max_exclude = max(x[-r]))
Source: local data frame [6 x 5]
Groups: g
g x x_max r x_max_exclude
1 A 7 7 1 -Inf
2 A 3 7 2 -Inf
3 B 5 9 1 -Inf
4 B 9 9 2 -Inf
5 B 2 9 3 -Inf
6 C 4 4 1 -Inf
Warning messages:
1: In max(c(4, 9, 2)[-1:3]) :
no non-missing arguments to max; returning -Inf
2: In max(c(4, 9, 2)[-1:3]) :
no non-missing arguments to max; returning -Inf
3: In max(c(4, 9, 2)[-1:3]) :
no non-missing arguments to max; returning -Inf
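My guess at what's going wrong (though I'm not sure it's the whole story): inside mutate(), r is the full vector of row numbers for the group rather than a per-row scalar, so x[-r] removes every element and max() is called on an empty vector. For example, with group B's values:
x <- c(5, 9, 2)    # the x values within group B
r <- seq_along(x)  # what row_number() produces for that group: 1 2 3
x[-r]              # numeric(0): every element removed
max(x[-r])         # warning; returns -Inf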
What is the most {readable, concise, efficient} way to get this output in dplyr? Any insight into why my attempt using row_number() doesn't work would also be much appreciated. Thanks for the help.

You could try:
df %>%
group_by(g) %>%
arrange(desc(x)) %>%
mutate(max = ifelse(x == max(x), x[2], max(x)))
Which gives:
#Source: local data frame [6 x 3]
#Groups: g
#
# g x max
#1 A 7 3
#2 A 3 7
#3 B 9 5
#4 B 5 9
#5 B 2 9
#6 C 4 NA
Benchmark
I've tried the solutions so far on the benchmark:
df <- data.frame(g = sample(LETTERS, 10e5, replace = TRUE),
x = sample(1:10, 10e5, replace = TRUE))
library(microbenchmark)
mbm <- microbenchmark(
steven = df %>%
group_by(g) %>%
arrange(desc(x)) %>%
mutate(max = ifelse(x == max(x), x[2], max(x))),
eric = df %>%
group_by(g) %>%
mutate(x_max = max(x),
x_max2 = sort(x, decreasing = TRUE)[2],
x_max_exclude = ifelse(x == x_max, x_max2, x_max)) %>%
select(-x_max2),
arun = setDT(df)[order(x), x_max_exclude := c(rep(x[.N], .N-1L), x[.N-1L]), by=g],
times = 50
)
#Arun's data.table solution is the fastest:
# Unit: milliseconds
# expr min lq mean median uq max neval cld
# steven 158.58083 163.82669 197.28946 210.54179 212.1517 260.1448 50 b
# eric 223.37877 228.98313 262.01623 274.74702 277.1431 284.5170 50 c
# arun 44.48639 46.17961 54.65824 47.74142 48.9884 102.3830 50 a

Interesting problem. Here's one way using data.table:
require(data.table)
setDT(df)[order(x), x_max_exclude := c(rep(x[.N], .N-1L), x[.N-1L]), by=g]
The idea is to order by column x and, on those ordered indices, group by g. Since we have the ordered indices, for the first .N-1 rows the max excluding the current row is the value at position .N, and for the .N-th row it's the value at position .N-1.
.N is a special variable that holds the number of observations in each group.
I'll leave it to you and/or the dplyr experts to translate this (or answer with another approach).
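For reference, one possible dplyr translation of that logic (an untested sketch, assuming a dplyr version that supports arrange(.by_group = TRUE); the single-row group needs a special case, and unlike the data.table assignment this reorders the rows within each group):
df %>%
  group_by(g) %>%
  arrange(x, .by_group = TRUE) %>%
  mutate(x_max_exclude = if (n() == 1L) NA_real_
                         else c(rep(x[n()], n() - 1L), x[n() - 1L])) %>%
  ungroup()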

This is the best I've come up with so far. Not sure if there's a better way.
df %>%
group_by(g) %>%
mutate(x_max = max(x),
x_max2 = sort(x, decreasing = TRUE)[2],
x_max_exclude = ifelse(x == x_max, x_max2, x_max)) %>%
select(-x_max2)

Another way with a functional:
df %>% group_by(g) %>% mutate(x_max_exclude = max_exclude(x))
Source: local data frame [6 x 3]
Groups: g
g x x_max_exclude
1 A 7 3
2 A 3 7
3 B 5 9
4 B 9 5
5 B 2 9
6 C 4 NA
We write a function called max_exclude that does the operation that you describe.
max_exclude <- function(v) {
  res <- c()
  for(i in seq_along(v)) {
    res[i] <- suppressWarnings(max(v[-i]))
  }
  res <- ifelse(!is.finite(res), NA, res)
  as.numeric(res)
}
It works with base R too:
df$x_max_exclude <- with(df, ave(x, g, FUN=max_exclude))
Source: local data frame [6 x 3]
g x x_max_exclude
1 A 7 3
2 A 3 7
3 B 5 9
4 B 9 5
5 B 2 9
6 C 4 NA
Benchmark
Here's a lesson, kids: beware of for loops!
big.df <- data.frame(g=rep(LETTERS[1:4], each=1e3), x=sample(10, 4e3, replace=T))
microbenchmark(
plafort_dplyr = big.df %>% group_by(g) %>% mutate(x_max_exclude = max_exclude(x)),
plafort_ave = big.df$x_max_exclude <- with(big.df, ave(x, g, FUN=max_exclude)),
StevenB = (big.df %>%
group_by(g) %>%
mutate(max = ifelse(row_number(desc(x)) == 1, x[row_number(desc(x)) == 2], max(x)))
),
Eric = df %>%
group_by(g) %>%
mutate(x_max = max(x),
x_max2 = sort(x, decreasing = TRUE)[2],
x_max_exclude = ifelse(x == x_max, x_max2, x_max)) %>%
select(-x_max2),
Arun = setDT(df)[order(x), x_max_exclude := c(rep(x[.N], .N-1L), x[.N-1L]), by=g]
)
Unit: milliseconds
expr min lq mean median uq max neval
plafort_dplyr 75.219042 85.207442 89.247409 88.203225 90.627663 179.553166 100
plafort_ave 75.907798 84.604180 87.136122 86.961251 89.431884 104.884294 100
StevenB 4.436973 4.699226 5.207548 4.931484 5.364242 11.893306 100
Eric 7.233057 8.034092 8.921904 8.414720 9.060488 15.946281 100
Arun 1.789097 2.037235 2.410915 2.226988 2.423638 9.326272 100

Related

efficient way to rowwise mutate with sample

For each 0 in x, I want to randomly insert a number between 1 and 10, but I'm looking for an efficient way to do this in dplyr and/or data.table as I have a very large dataset (10m rows).
library(tidyverse)
df <- data.frame(x = 1:10)
df[4, 1] = 0
df[6, 1] = 0
df
# x
# 1 1
# 2 2
# 3 3
# 4 0
# 5 5
# 6 0
# 7 7
# 8 8
# 9 9
# 10 10
This doesn't work, as it replaces each zero with the same value:
set.seed(1)
df %>%
mutate(x2 = ifelse(x == 0, sample(1:10, 1), x))
# x x2
# 1 1 1
# 2 2 2
# 3 3 3
# 4 0 9
# 5 5 5
# 6 0 9
# 7 7 7
# 8 8 8
# 9 9 9
# 10 10 10
It can be achieved with rowwise(), though that is slow on a large dataset:
set.seed(1)
#use rowwise
df %>%
rowwise() %>%
mutate(x2 = ifelse(x == 0, sample(1:10, 1), x))
# x x2
# <dbl> <dbl>
# 1 1 1
# 2 2 2
# 3 3 3
# 4 0 9
# 5 5 5
# 6 0 4
# 7 7 7
# 8 8 8
# 9 9 9
# 10 10 10
Any suggestions to speed this up?
Thanks
Not in tidyverse, but you could just do something like this:
is_zero <- (df$x == 0)
replacements <- sample(1:10, sum(is_zero))
df$x[is_zero] <- replacements
Of course, you can collapse that down if you'd like.
df$x[df$x == 0] <- sample(1:10, sum(df$x == 0))
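One caveat (my note, not part of the original answer): sample() without replacement errors once there are more than 10 zeros, so on a large dataset you would presumably want replace = TRUE, as the benchmark below does:
df$x[df$x == 0] <- sample(1:10, sum(df$x == 0), replace = TRUE)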
Using the above solutions and microbenchmark and a slight modification to the dataset for setup:
library(data.table)
library(tidyverse)
df <- data.frame(x = 1:100000, y = rbinom(100000, size = 1, 0.5)) %>%
mutate(x = ifelse(y == 0, 0, x)) %>%
dplyr::select(-y)
dt <- setDT(df)
test <- microbenchmark::microbenchmark(
base1 = {
df$x[df$x == 0] <- sample(1:10, sum(df$x == 0), replace = T)
},
dplyr1 = {
df %>%
mutate(x2 = replace(x, which(x == 0), sample(1:10, sum(x == 0), replace = T)))
},
dplyr2 = {
df %>% group_by(id=row_number()) %>%
mutate(across(c(x),.fns = list(x2 = ~ ifelse(.==0, sample(1:10, 1, replace = T), .)) )) %>%
ungroup() %>% select(-id)
},
data.table = {
dt[x == 0, x := sample(1:10, .N, replace = T)]
},
times = 500L
)
test
# Unit: microseconds
# expr min lq mean median uq max neval cld
# base1 733.7 785.9 979.0938 897.25 1137.0 1839.4 500 a
# dplyr1 5207.1 5542.1 6129.2276 5967.85 6476.0 21790.7 500 a
# dplyr2 15963406.4 16156889.2 16367969.8704 16395715.00 16518252.9 19276215.5 500 b
# data.table 1547.4 2229.3 2422.1278 2455.60 2573.7 15076.0 500 a
I thought data.table would be quickest but the base solution seems best (assuming I've set up the microbenchmark correctly?).
EDIT based on @chinsoon12's comment
1e5 rows:
Unit: microseconds
expr min lq mean median uq max neval cld
base1 730.4 839.30 1380.465 1238.00 1322.85 28977.3 500 a
data.table 1394.8 1831.85 2030.215 1946.95 2060.40 29821.9 500 b
1e6 rows:
Unit: milliseconds
expr min lq mean median uq max neval cld
base1 9.8703 11.6596 16.030715 11.76195 12.04145 326.0118 500 b
data.table 2.3772 2.7939 3.855672 3.04700 3.25900 61.4083 500 a
At 1e6 rows, data.table is the quickest.
Maybe try with across() from dplyr in this way:
library(tidyverse)
#Data
df <- data.frame(x = 1:10)
df[4, 1] = 0
df[6, 1] = 0
#Code
df %>% group_by(id=row_number()) %>%
mutate(across(c(x),.fns = list(x2 = ~ ifelse(.==0, sample(1:10, 1), .)) )) %>%
ungroup() %>% select(-id)
Output:
# A tibble: 10 x 2
x x_x2
<dbl> <dbl>
1 1 1
2 2 2
3 3 3
4 0 5
5 5 5
6 0 6
7 7 7
8 8 8
9 9 9
10 10 10
I am adding a different answer because there are already votes on the base option I provided. But here is a dplyr way using replace().
library(dplyr)
df %>%
mutate(x2 = replace(x, which(x == 0), sample(1:10, sum(x == 0))))
Here is a data.table option using similar logic to Adam's answer. This filters for rows that meet your criterion (x == 0) and then draws .N samples from 1:10 (without a grouping variable, .N is the number of rows of the filtered data.table).
library(data.table)
set.seed(1)
setDT(df)[x == 0, x := sample(1:10, .N)]
df
x
1: 1
2: 2
3: 3
4: 9
5: 5
6: 4
7: 7
8: 8
9: 9
10: 10

Row Minimum except certain columns

I have the data frame below. I need to find the row min and max, excluding the few columns that are characters.
df
x y z
1 1 1 a
2 2 5 b
3 7 4 c
I need
df
x y z Min Max
1 1 1 a 1 1
2 2 5 b 2 5
3 7 4 c 4 7
Another dplyr possibility could be:
df %>%
mutate(Max = do.call(pmax, select_if(., is.numeric)),
Min = do.call(pmin, select_if(., is.numeric)))
x y z Max Min
1 1 1 a 1 1
2 2 5 b 5 2
3 7 4 c 7 4
Or a variation proposed by @G. Grothendieck:
df %>%
mutate(Min = pmin(!!!select_if(., is.numeric)),
Max = pmax(!!!select_if(., is.numeric)))
Another base R solution: subset only the numeric columns, then use apply over each row to get the minimum and maximum value with range.
cbind(df, t(apply(df[sapply(df, is.numeric)], 1, function(x)
setNames(range(x, na.rm = TRUE), c("min", "max")))))
# x y z min max
#1 1 1 a 1 1
#2 2 5 b 2 5
#3 7 4 c 4 7
1) This one-liner uses no packages:
transform(df, min = pmin(x, y), max = pmax(x, y))
giving:
x y z min max
1 1 1 a 1 1
2 2 5 b 2 5
3 7 4 c 4 7
2) If you have many columns and don't want to list them all or determine yourself which are numeric then this also uses no packages.
ix <- sapply(df, is.numeric)
transform(df, min = apply(df[ix], 1, min), max = apply(df[ix], 1, max))
If your actual data has NAs and if you want to ignore them when taking the min or max then min, max, pmin and pmax all take an optional na.rm = TRUE argument.
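For example (a small illustration of na.rm, using a modified copy of df rather than the original data):
df2 <- transform(df, y = c(1, NA, 4))
transform(df2, min = pmin(x, y, na.rm = TRUE), max = pmax(x, y, na.rm = TRUE))
#   x  y z min max
# 1 1  1 a   1   1
# 2 2 NA b   2   2
# 3 7  4 c   4   7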
Note
Lines <- "x y z
1 1 1 a
2 2 5 b
3 7 4 c"
df <- read.table(text = Lines)
1) We can use select_if to select the columns that are numeric, then get the rowwise min and max with pmin and pmax (via reduce), and bind the result back to the original dataset:
library(dplyr)
library(purrr)
df %>%
select_if(is.numeric) %>%
transmute(Min = reduce(., pmin, na.rm = TRUE),
Max = reduce(., pmax, na.rm = TRUE)) %>%
bind_cols(df, .)
# x y z Min Max
#1 1 1 a 1 1
#2 2 5 b 2 5
#3 7 4 c 4 7
NOTE: Here, we use only a single select_if call.
2) The same can be done in base R (no packages used)
i1 <- names(which(sapply(df, is.numeric)))
df['Min'] <- do.call(pmin, c(df[i1], na.rm = TRUE))
df['Max'] <- do.call(pmax, c(df[i1], na.rm = TRUE))
Also, as stated in the comments, this is a generalized option. If there are only two columns, just doing pmin(x, y) or pmax(x, y) is possible, but that hard-codes the columns and doesn't check whether they are numeric, so it is not a general solution.
NOTE: All of the solutions mentioned here were either posted here first or come from the comments with the OP.
data
df <- structure(list(x = c(1L, 2L, 7L), y = c(1L, 5L, 4L), z = c("a",
"b", "c")), class = "data.frame", row.names = c("1", "2", "3"
))

What's a tidyverse approach to iterating over rows in a data frame when vectorisation is not feasible?

I want to know the best way to iterate over rows of a data frame when the value of a variable at row n depends on the value of variable(s) at row n-1 and/or n-2. Ideally I would like to do this in a "tidyverse" way, perhaps with purrr::pmap().
For example, say I have this data frame:
library(dplyr)
x <- tibble(t = c(1:10),
a = c(seq(100, 140, 10), rep(NA_real_, 5)),
b = c(runif(5), rep(NA_real_, 5)),
c = c(runif(5), rep(NA_real_, 5)))
x
#> # A tibble: 10 x 4
#> t a b c
#> <int> <dbl> <dbl> <dbl>
#> 1 1 100 0.750 0.900
#> 2 2 110 0.898 0.657
#> 3 3 120 0.731 0.000137
#> 4 4 130 0.208 0.696
#> 5 5 140 0.670 0.882
#> 6 6 NA NA NA
#> 7 7 NA NA NA
#> 8 8 NA NA NA
#> 9 9 NA NA NA
#> 10 10 NA NA NA
I have known values up to time (t) = 5. Beyond that, I wish to project values, using the following formulae:
a = lag(a) * 1.1
b = a * lag(b)
c = b * lag(a, 2)
This code achieves the desired output, but it's a clunky, horrible for loop that scales poorly to larger datasets:
for(i in 1:nrow(x)) {
  x <- x %>%
    mutate(a = if_else(!is.na(a), a, lag(a, 1) * 1.1),
           b = if_else(!is.na(b), b, a * lag(b, 1)),
           c = if_else(!is.na(c), c, b * lag(a, 2)))
}
x
#> # A tibble: 10 x 4
#> t a b c
#> <int> <dbl> <dbl> <dbl>
#> 1 1 100 7.50e- 1 9.00e- 1
#> 2 2 110 8.98e- 1 6.57e- 1
#> 3 3 120 7.31e- 1 1.37e- 4
#> 4 4 130 2.08e- 1 6.96e- 1
#> 5 5 140 6.70e- 1 8.82e- 1
#> 6 6 154 1.03e+ 2 1.34e+ 4
#> 7 7 169. 1.75e+ 4 2.45e+ 6
#> 8 8 186. 3.26e+ 6 5.02e+ 8
#> 9 9 205. 6.68e+ 8 1.13e+11
#> 10 10 225. 1.51e+11 2.80e+13
I think that for this sort of intrinsically iterative process it is genuinely hard to beat a for loop. The method proposed by @Shree depends on the NAs being contiguous and starting in a known spot.
Here is my mild improvement on your loop, which I think is more readable, runs about 2.5 times faster, and will probably scale better than your approach, which mixes vectorized operations with the loop. By moving out of the tidyverse altogether and embracing a rowwise loop that really works on each row one at a time, we gain some efficiency on both counts:
method_peter <- function(x){
  for(i in 2:nrow(x)){
    x[i, "a"] <- ifelse(is.na(x[i, "a"]), x[i - 1, "a"] * 1.1, x[i, "a"])
    x[i, "b"] <- ifelse(is.na(x[i, "b"]), x[i, "a"] * x[i - 1, "b"], x[i, "b"])
    x[i, "c"] <- ifelse(is.na(x[i, "c"]), x[i, "b"] * x[i - 2, "a"], x[i, "c"])
  }
  return(x)
}
There are doubtless more efficiencies possible, and of course this is an ideal candidate for a rewrite in C++ :).
This is about twice as fast as your method as seen by this:
method_matt <- function(x){
  for(i in 1:nrow(x)) {
    x <- x %>%
      mutate(a = if_else(!is.na(a), a, lag(a, 1) * 1.1),
             b = if_else(!is.na(b), b, a * lag(b, 1)),
             c = if_else(!is.na(c), c, b * lag(a, 2)))
  }
  return(x)
}
set.seed(123)
x <- tibble(t = c(1:10),
a = c(seq(100, 140, 10), rep(NA_real_, 5)),
b = c(runif(5), rep(NA_real_, 5)),
c = c(runif(5), rep(NA_real_, 5)))
stopifnot(identical(method_matt(x), method_peter(x)))
library(microbenchmark)
microbenchmark(
method_matt(x),
method_peter(x)
)
which returns:
Unit: milliseconds
expr min lq mean median uq max neval
method_matt(x) 24.1975 25.50925 30.64438 26.33310 31.8681 74.5093 100
method_peter(x) 10.0005 10.56050 13.33751 11.06495 13.5913 42.0568 100
@Shree's method is much faster again and is ideal for the example data, but I'm not sure it is flexible enough to work in all your use cases.
I would like to see a tidyverse solution if there is one.
Edit: Added tidyverse approach
Here's a readable and flexible tidyverse approach. The downside is that it is very slow.
accumutate <- function(df, ...){
  df %>%
    group_by(row_number()) %>%
    nest() %>%
    pull(data) %>%
    accumulate(function(x, y){ bind_rows(x, y) %>% mutate(!!!enquos(...)) }) %>%
    .[[length(.)]]
}
x %>%
accumutate(a = ifelse(is.na(a), 1.1 * lag(a,1), a)) %>%
accumutate(b = ifelse(is.na(b), a * lag(b), b)) %>%
accumutate(c = ifelse(is.na(c),b * lag(a, 2), c))
#> # A tibble: 10 x 4
#> t a b c
#> <int> <dbl> <dbl> <dbl>
#> 1 1 100 2.88e- 1 4.56e- 2
#> 2 2 110 7.88e- 1 5.28e- 1
#> 3 3 120 4.09e- 1 8.92e- 1
#> 4 4 130 8.83e- 1 5.51e- 1
#> 5 5 140 9.40e- 1 4.57e- 1
#> 6 6 154 1.45e+ 2 1.88e+ 4
#> 7 7 169. 2.45e+ 4 3.43e+ 6
#> 8 8 186. 4.57e+ 6 7.04e+ 8
#> 9 9 205. 9.37e+ 8 1.59e+11
#> 10 10 225. 2.11e+11 3.94e+13
Created on 2020-10-07 by the reprex package (v0.3.0)
Here's another approach that you might find interesting. It's not concise or especially readable, but it's tidyverse (or at least functionally) inspired. And it performs fairly well.
It uses a semigroup pattern, converting the mutate expressions into binary functions, creating corresponding lists and then using accumulate.
library(tidyverse)
library(dplyr)
library(microbenchmark)
options(width =100)
set.seed(123)
# Create the data frame
x <- tibble(t = c(1:100),
a = c(seq(100, 140, 10), rep(NA_real_,100- 5)),
b = c(runif(5), rep(NA_real_, 100-5)),
c = c(runif(5), rep(NA_real_, 100-5)))
a_mappend <- function(a1, a2) {
  ifelse(is.na(a2), a1 * 1.1, a2)
}
b_mappend <- function(ab1, ab2) {
  list(a = ab2$a, b = ifelse(is.na(ab2$b), ab2$a * ab1$b, ab2$b))
}
c_mappend <- function(abc12, abc23) {
  list(abc1 = list(a = abc12$abc2$a, b = abc12$abc2$b, c = abc12$abc2$c),
       abc2 = list(a = abc23$abc2$a, b = abc23$abc2$b,
                   c = ifelse(is.na(abc23$abc2$c), abc12$abc1$a * abc23$abc2$b, abc23$abc2$c)))
}
method_ian <- function(x) {
  x %>%
    mutate(a = accumulate(a, a_mappend)) %>%
    mutate(b = list(a, b) %>%
             pmap(~ list(a = .x, b = .y)) %>%
             accumulate(b_mappend) %>% map_dbl(~ .x$b)) %>%
    mutate(c = list(a, b, c, c(a[-1], NA), c(b[-1], NA), c(c[-1], NA)) %>%
             pmap(~ list(abc1 = list(a = ..1, b = ..2, c = ..3),
                         abc2 = list(a = ..4, b = ..5, c = ..6))) %>%
             accumulate(c_mappend) %>% map_dbl(~ .x$abc1$c))
}
method_matt <- function(x){
  for(i in 1:nrow(x)) {
    x <- x %>%
      mutate(a = if_else(!is.na(a), a, lag(a, 1) * 1.1),
             b = if_else(!is.na(b), b, a * lag(b, 1)),
             c = if_else(!is.na(c), c, b * lag(a, 2)))
  }
  return(x)
}
method_peter <- function(x){
  for(i in 2:nrow(x)){
    x[i, "a"] <- ifelse(is.na(x[i, "a"]), x[i - 1, "a"] * 1.1, x[i, "a"])
    x[i, "b"] <- ifelse(is.na(x[i, "b"]), x[i, "a"] * x[i - 1, "b"], x[i, "b"])
    x[i, "c"] <- ifelse(is.na(x[i, "c"]), x[i, "b"] * x[i - 2, "a"], x[i, "c"])
  }
  return(x)
}
stopifnot(identical(method_matt(x), method_ian(x)))
microbenchmark( method_matt(x), method_peter(x), method_ian(x))
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> method_matt(x) 324.90086 330.93192 337.46518 334.55447 338.38461 426.30457 100
#> method_peter(x) 208.27498 211.60526 213.59438 212.66088 214.36421 242.59854 100
#> method_ian(x) 13.06774 13.43105 14.30003 13.86428 14.32263 19.54843 100
Created on 2020-10-06 by the reprex package (v0.3.0)
I don't think there's any simple way in tidyverse to do calculations with row-dependencies. Something with Reduce or gather + spread could be possible, but I don't expect them to score points on readability.
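For the record, here is roughly what the Reduce idea would look like for column a alone (just a sketch of the pattern, with an illustrative column name a_filled; chaining the same trick for b and c is where the readability falls apart):
x$a_filled <- Reduce(function(prev, cur) if (is.na(cur)) prev * 1.1 else cur,
                     x$a, accumulate = TRUE)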
Anyways, on the bright side, your calculations are vectorizable using the dplyr and zoo packages -
x %>%
mutate(
a = ifelse(is.na(a), na.locf(a) * 1.1^(t-5), a),
b = ifelse(is.na(b), na.locf(b) * c(rep(1, 5), cumprod(a[6:n()])), b),
c = ifelse(is.na(c), b * lag(a, 2), c)
)
# A tibble: 10 x 4
t a b c
<int> <dbl> <dbl> <dbl>
1 1 100 1.85e- 1 9.43e- 1
2 2 110 7.02e- 1 1.29e- 1
3 3 120 5.73e- 1 8.33e- 1
4 4 130 1.68e- 1 4.68e- 1
5 5 140 9.44e- 1 5.50e- 1
6 6 154 1.45e+ 2 1.89e+ 4
7 7 169. 2.46e+ 4 3.45e+ 6
8 8 186. 4.59e+ 6 7.07e+ 8
9 9 205. 9.40e+ 8 1.59e+11
10 10 225. 2.12e+11 3.95e+13
Data -
set.seed(2)
x <- tibble(t = c(1:10),
a = c(seq(100, 140, 10), rep(NA_real_, 5)),
b = c(runif(5), rep(NA_real_, 5)),
c = c(runif(5), rep(NA_real_, 5)))

Counting amount of zeros within a "melted" data frame

Hi, I'm learning R and I'm trying to count how many zeros there are within the melted data. That is, I want to know how many zeros correspond to columns b and c and print the two results out.
I generated an example:
library(reshape)
library(plyr)
library(dplyr)
id = c(1,2,3,4,5,6,7,8,9,10)
b = c(0,0,5,6,3,7,2,8,1,8)
c = c(0,4,9,87,0,87,0,4,5,0)
test = data.frame(id,b,c)
test_melt = melt(test, id.vars = "id")
test_melt
I imagine that I should create an if statement, something like
if (test$value == 0) {print()}, but how can I tell R to count zeros for columns that have been melted?
With your data:
test_melt %>%
group_by(variable) %>%
summarize(zeroes = sum(value == 0))
# # A tibble: 2 x 2
# variable zeroes
# <fctr> <int>
# 1 b 2
# 2 c 4
Base R:
aggregate(test_melt$value, by = list(variable = test_melt$variable),
FUN = function(x) sum(x == 0))
# variable x
# 1 b 2
# 2 c 4
... and for curiosity:
library(microbenchmark)
microbenchmark(
dplyr = group_by(test_melt, variable) %>% summarize(zeroes = sum(value == 0)),
base1 = aggregate(test_melt$value, by = list(variable = test_melt$variable), FUN = function(x) sum(x == 0)),
# @PankajKaundal's suggested "formula" notation reads easier
base2 = aggregate(value ~ variable, test_melt, function(x) sum(x == 0))
)
# Unit: microseconds
# expr min lq mean median uq max neval
# dplyr 916.421 986.985 1069.7000 1022.1760 1094.7460 2272.636 100
# base1 647.658 682.302 783.2065 715.3045 765.9940 1905.411 100
# base2 813.219 867.737 950.3247 897.0930 959.8175 2017.001 100
sum(test_melt$value == 0)
This should do it if you just want the total count of zeros across the melted columns (for per-column counts, see the grouped approaches above).
This might help. Is this what you're looking for?
> test_melt[4] <- 1
> test_melt2 <- aggregate(V4 ~ value + variable, test_melt, sum)
> test_melt2
value variable V4
1 0 b 2
2 1 b 1
3 2 b 1
4 3 b 1
5 5 b 1
6 6 b 1
7 7 b 1
8 8 b 2
9 0 c 4
10 4 c 2
11 5 c 1
12 9 c 1
13 87 c 2
V4 is the count

dplyr filter: Get rows with minimum of variable, but only the first if multiple minima

I want to make a grouped filter using dplyr, in a way that within each group only that row is returned which has the minimum value of variable x.
My problem is: As expected, in the case of multiple minima all rows with the minimum value are returned. But in my case, I only want the first row if multiple minima are present.
Here's an example:
df <- data.frame(
A=c("A", "A", "A", "B", "B", "B", "C", "C", "C"),
x=c(1, 1, 2, 2, 3, 4, 5, 5, 5),
y=rnorm(9)
)
library(dplyr)
df.g <- group_by(df, A)
filter(df.g, x == min(x))
As expected, all minima are returned:
Source: local data frame [6 x 3]
Groups: A
A x y
1 A 1 -1.04584335
2 A 1 0.97949399
3 B 2 0.79600971
4 C 5 -0.08655151
5 C 5 0.16649962
6 C 5 -0.05948012
With ddply, I would have approached the task this way:
library(plyr)
ddply(df, .(A), function(z) {
z[z$x == min(z$x), ][1, ]
})
... which works:
A x y
1 A 1 -1.04584335
2 B 2 0.79600971
3 C 5 -0.08655151
Q: Is there a way to approach this in dplyr? (For speed reasons)
Update
With dplyr >= 0.3 you can use the slice function in combination with which.min, which would be my favorite approach for this task:
df %>% group_by(A) %>% slice(which.min(x))
#Source: local data frame [3 x 3]
#Groups: A
#
# A x y
#1 A 1 0.2979772
#2 B 2 -1.1265265
#3 C 5 -1.1952004
Original answer
For the sample data, it is also possible to use two filters after each other:
group_by(df, A) %>%
filter(x == min(x)) %>%
filter(1:n() == 1)
Just for completeness: Here's the final dplyr solution, derived from the comments of @hadley and @Arun:
library(dplyr)
df.g <- group_by(df, A)
filter(df.g, rank(x, ties.method="first")==1)
For what it's worth, here's a data.table solution, to those who may be interested:
# approach with setting keys
dt <- as.data.table(df)
setkey(dt, A,x)
dt[J(unique(A)), mult="first"]
# without using keys
dt <- as.data.table(df)
dt[dt[, .I[which.min(x)], by=A]$V1]
This can be accomplished by using row_number combined with group_by. row_number handles ties by assigning a rank not only by the value but also by the relative order within the vector. To get the first row of each group with the minimum value of x:
df.g <- group_by(df, A)
filter(df.g, row_number(x) == 1)
For more information see the dplyr vignette on window functions.
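To illustrate the tie handling (a small aside of my own, using dplyr's ranking functions on a throwaway vector v):
v <- c(1, 1, 2)
row_number(v)  # 1 2 3 -- ties broken by order of appearance
min_rank(v)    # 1 1 3 -- tied values share the same rank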
dplyr offers the slice_min function, which does the job with the argument with_ties = FALSE:
library(dplyr)
df %>%
group_by(A) %>%
slice_min(x, with_ties = FALSE)
Output :
# A tibble: 3 x 3
# Groups: A [3]
A x y
<fct> <dbl> <dbl>
1 A 1 0.273
2 B 2 -0.462
3 C 5 1.08
Another way to do it:
set.seed(1)
x <- data.frame(a = rep(1:2, each = 10), b = rnorm(20))
x <- dplyr::arrange(x, a, b)
dplyr::filter(x, !duplicated(a))
Result:
a b
1 1 -0.8356286
2 2 -2.2146999
Could also be easily adapted for getting the row in each group with maximum value.
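For instance, to get the row with the maximum b in each group instead, the same pattern would presumably be (my sketch):
x <- dplyr::arrange(x, a, dplyr::desc(b))
dplyr::filter(x, !duplicated(a))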
In case you are looking to filter the minima of x and then the minima of y, an intuitive way to do it is just using filtering functions:
> df
A x y
1 A 1 1.856368296
2 A 1 -0.298284187
3 A 2 0.800047796
4 B 2 0.107289719
5 B 3 0.641819999
6 B 4 0.650542284
7 C 5 0.422465687
8 C 5 0.009819306
9 C 5 -0.482082635
df %>% group_by(A) %>%
filter(x == min(x), y == min(y))
# A tibble: 3 x 3
# Groups: A [3]
A x y
<chr> <dbl> <dbl>
1 A 1 -0.298
2 B 2 0.107
3 C 5 -0.482
This code will filter the minima of x and y.
Also you can do a double filter
that looks even more readable:
df %>% group_by(A) %>%
filter(x == min(x)) %>%
filter(y == min(y))
# A tibble: 3 x 3
# Groups: A [3]
A x y
<chr> <dbl> <dbl>
1 A 1 -0.298
2 B 2 0.107
3 C 5 -0.482
I like sqldf for its simplicity:
sqldf("select A,min(X),y from 'df.g' group by A")
Output:
A min(X) y
1 A 1 -1.4836989
2 B 2 0.3755771
3 C 5 0.9284441
For the sake of completeness, here's the base R answer:
df[with(df, ave(x, A, FUN = \(x) rank(x, ties.method = "first")) == 1), ]
# A x y
#1 A 1 0.1076158
#4 B 2 -1.3909084
#7 C 5 0.3511618
