I have a vector of threshold values, thresholds, and another vector, x. I'd like to create a new vector, say vec_sum, of the same length as thresholds, that stores, for each element of thresholds, the sum of values of x larger than this element.
What is the fastest way of doing this?
The naive way I'm doing it is
vec_sum <- rep(NA, length(thresholds))
for (i in seq_along(thresholds)) {
  vec_sum[i] <- sum(x[x > thresholds[i]])
}
In case it helps, thresholds is already sorted.
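(For reference, an editorial sketch rather than part of the original question: the same computation as a one-liner. This is just the loop in functional form and not necessarily faster.)
vec_sum <- vapply(thresholds, function(th) sum(x[x > th]), numeric(1))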
Here is another solution using cumsum:
f1 <- function(v, th){
  v2 <- v[order(v)]                     # sort the values
  v2s <- rev(cumsum(rev(v2)))           # v2s[i] = sum of v2[i], ..., v2[n]
  return(v2s[findInterval(th, v2) + 1]) # first position whose value exceeds th
}
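A quick sanity check of f1 on a tiny vector (my own illustrative example, not from the original answer):
f1(c(1, 2, 3, 5), c(2, 4))
# [1] 8 5    # 3 + 5 = 8, and just 5
One edge case to be aware of: if a threshold is at least max(v), findInterval(th, v2) returns length(v), so f1 indexes one past the end of v2s and returns NA rather than 0.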
Here are some tests and a comparison with the other answer (and its example data) by Ronak:
f2 <- function(x, thresholds){
  if (all(x < thresholds[1])) return(rep(0, length(thresholds)))
  if (all(x > thresholds[length(thresholds)])) return(rep(sum(x), length(thresholds)))
  return(rev(cumsum(rev(tapply(x,
    findInterval(x, thresholds, left.open = TRUE), sum)[-1]))))
}
test_th <- c(3, 5, 10)
test_x <- c(2, 3, 1, 19, 4, 6, 5, 15, 7:14, 16:18, 20)
vec_sum <- rep(NA, length(test_th))
for (i in seq_along(test_th)) {
  vec_sum[i] <- sum(test_x[test_x > test_th[i]])
}
all(dplyr::near(f1(test_x, test_th), vec_sum))
# [1] TRUE
all(dplyr::near(f2(test_x, test_th), vec_sum))
# [1] TRUE
set.seed(123)
test_x <- rnorm(10000)
test_th <- sort(rnorm(100)) ## f2 requires sorted threshold values
vec_sum <- rep(NA, length(test_th))
for (i in seq_along(test_th)) {
  vec_sum[i] <- sum(test_x[test_x > test_th[i]])
}
all(dplyr::near(f1(test_x, test_th), vec_sum))
# [1] TRUE
all(dplyr::near(f2(test_x, test_th), vec_sum))
# [1] FALSE
# Warning message:
# In x - y : longer object length is not a multiple of shorter object length
(f2 fails here because tapply only creates groups for intervals that actually contain values of x; when some threshold intervals are empty, its result is shorter than thresholds, hence the length warning.)
library(microbenchmark)
microbenchmark(
a = f1(test_x, test_th),
b = f2(test_x, test_th)
)
# Unit: microseconds
# expr min lq mean median uq max neval
# a 587.116 682.864 900.3572 694.713 703.726 10647.206 100
# b 1157.213 1203.063 1260.0663 1223.600 1258.552 2143.069 100
Not sure if this is any faster, but we can use findInterval to cut x by thresholds, take the sum of each group with tapply, and take the cumsum in reverse.
as.integer(rev(cumsum(rev(tapply(x,
findInterval(x, thresholds, left.open = TRUE), sum)[-1]))))
Tested on
thresholds <- c(3, 5, 10)
x <- c(2, 3, 1, 19, 4, 6, 5, 15, 7:14, 16:18, 20) #1:20 in random order
vec_sum <- rep(NA, length(thresholds))
for (i in seq_along(thresholds)) {
  vec_sum[i] <- sum(x[x > thresholds[i]])
}
vec_sum
#[1] 204 195 155
Using the proposed solution
as.integer(rev(cumsum(rev(tapply(x,
findInterval(x, thresholds, left.open = TRUE), sum)[-1]))))
#[1] 204 195 155
Explaining the answer: findInterval returns the group to which each value of x belongs.
findInterval(x, thresholds, left.open = TRUE)
#[1] 0 0 0 3 1 2 1 3 2 2 2 2 3 3 3 3 3 3 3 3
We use tapply to get the sum of each group:
tapply(x, findInterval(x, thresholds, left.open = TRUE), sum)
# 0 1 2 3
# 6 9 40 155
The 0-group should be excluded, since its values are smaller than all the thresholds (hence the -1). The entry for group 1 should also contain the sums of groups 2 and 3, and the entry for group 2 should contain the sum of group 3. So we reverse the sequence and take the cumsum:
cumsum(rev(tapply(x, findInterval(x, thresholds, left.open = TRUE), sum)[-1]))
# 3 2 1
#155 195 204
To get it in original order and to match it with threshold we reverse it again
rev(cumsum(rev(tapply(x, findInterval(x, thresholds, left.open = TRUE), sum)[-1])))
# 1 2 3
#204 195 155
Edge Cases:
If all the values are below the first threshold, or all of them are above the last one, we need an extra check and should return the following (these are the checks f2 above adds):
if (all(x < thresholds[1])) rep(0, length(thresholds))
if (all(x > thresholds[length(thresholds)])) rep(sum(x), length(thresholds))
I want a variable such as desired_output, based on a cumulative sum over cumsumover, where the cumsum function resets every time it reaches the next number in thresh.
cumsumover <- c(1, 2, 7, 4, 2, 5)
thresh <- c(3, 7, 11)
desired_output <- c(3, 3 ,7 ,11 ,11 ,11) # same length as cumsumover
This question is similar, but I can't wrap my head around the code:
dplyr / R cumulative sum with reset
Compared to similar questions my condition is specified in a vector of different length than the cumsumover.
Any help would be greatly appreciated. Bonus if both a base R and a tidyverse approach is provided.
In base R, we can use cut with breaks derived from thresh and labels of the same length as thresh.
cut(cumsum(cumsumover), breaks = c(0, thresh[-1], max(cumsum(cumsumover))),
    labels = letters[seq_along(thresh)])
#[1] a a b c c c
The first break is 0 and the last break is max(cumsum(cumsumover)), so anything beyond the last element of thresh is still assigned the last label.
If we want the labels to be thresh instead of letters:
cut(cumsum(cumsumover), breaks = c(0, thresh[-1], max(cumsum(cumsumover))),
    labels = thresh)
#[1] 3 3 7 11 11 11
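An equivalent base R formulation (my sketch, not from the original answer) skips the factor machinery by indexing thresh directly with findInterval; it mirrors the .bincode approach shown below:
thresh[findInterval(cumsum(cumsumover), c(-Inf, thresh[-1]))]
#[1]  3  3  7 11 11 11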
Here is another solution:
data:
cumsumover <- c(1, 2, 7, 4, 2, 5)
thresh <- c(3, 7, 11)
code:
outp <- letters[1:3] # to make solution more general
cumsumover_copy <- cumsumover # I use <<- inside sapply, so I make a copy to stay safe
unlist(
  sapply(seq_along(thresh), function(x) {
    cs_over <- cumsum(cumsumover_copy)
    ntimes <- sum(cs_over <= thresh[x])
    cumsumover_copy <<- cumsumover_copy[-(1:ntimes)]
    return(rep(outp[x], ntimes))
  })
)
result:
#[1] "a" "a" "b" "c" "c" "c"
Using .bincode you can do this:
thresh[.bincode(cumsum(cumsumover), c(-Inf,thresh[-1],Inf))]
[1] 3 3 7 11 11 11
.bincode is used internally by cut, which adds labels and checks on top, so calling .bincode directly is more efficient:
x <- rep(cumsum(cumsumover), 10000)
microbenchmark::microbenchmark(
  bincode = thresh[.bincode(x, c(-Inf, thresh[-1], Inf))],
  cut = cut(x, breaks = c(-Inf, thresh[-1], Inf), labels = thresh))
# Unit: microseconds
# expr min lq mean median uq max neval
# bincode 450.2 459.75 654.794 482.10 642.20 5028.4 100
# cut 1739.3 1864.90 2622.593 2215.15 2713.25 12194.8 100
I have a sorted vector, let's say
v <- c(1, 1, 2, 3, 5, 8, 13, 21, 34)
Now I want to find the index i of the first element which is bigger than for example a <- 15.
I could do something like i <- which(v > a)[1].
But I want to exploit the fact that v is sorted, which I don't think which cares about.
I could write it myself and divide the interval recursively in halves and search in those partial intervals...
Is there any built-in solution? As usual the main issue is speed and my own function would be slower surely.
Thank you.
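(An editorial note, not part of the original question: for a sorted vector, base R's findInterval already performs a binary search, so the index of the first element greater than a can be obtained without Rcpp.)
v <- c(1, 1, 2, 3, 5, 8, 13, 21, 34)
a <- 15
findInterval(a, v) + 1 # findInterval counts the elements <= a
# [1] 8
v[findInterval(a, v) + 1]
# [1] 21
If a >= max(v), the result is length(v) + 1, i.e. out of range, so callers should check for that case.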
For the speed-gluttons:
a <- 10
v <- sort(runif(1e7,0,1000));
Rcpp::cppFunction('int min_index(NumericVector v, double a) {
  // std::lower_bound does a binary search for the first element >= a
  // (use std::upper_bound for strictly greater); the index returned is 0-based
  NumericVector::iterator low = std::lower_bound(v.begin(), v.end(), a);
  return (low - v.begin());
}')
microbenchmark::microbenchmark(which(v > a)[1], min_index(v, a), unit="relative")
#Unit: relative
# expr min lq mean median uq max neval
#which(v > a)[1] 61299.15 67211.58 14346.42 8797.526 8683.39 11163.27 100
#min_index(v, a) 1.00 1.00 1.00 1.000 1.00 1.00 100
There is uniroot. It uses bisection and is faster on much longer vectors.
v <- c(1,1,2,3,5,8,13,21,34)
a <- 15
root <- uniroot(f = function(x) v[x] - a, interval = c(1, length(v)))
my_index <- floor(root$root)
Just wondering if the following may be useful:
Filter(function(x) x > 15, v)[1]
#[1] 21
Find(function(x) x > 15, v, right = FALSE, nomatch = NULL)
#[1] 21
Position(function(x) x > 15, v, right = FALSE, nomatch = NA_integer_)
#[1] 8
which isn't exactly slow, so what about min(which()):
v <- c(1,1,2,3,5,8,13,21,34)
system.time(
  print(min(which(v > 5)))
)
# [1] 6
#    user  system elapsed
#       0       0       0
Suppose that my vector numbers contains c(1,2,3,5,7,8), and I wish to find whether it contains 3 consecutive numbers, which in this case are 1, 2, 3.
numbers = c(1,2,3,5,7,8)
difference = diff(numbers) # the difference output would be 1, 1, 2, 2, 1
To verify that there are 3 consecutive integers in my numbers vector, I've tried the following with little reward.
rep(1,2)%in%difference
The above code works in this case, but if my difference vector = (1,2,2,2,1), it would still return TRUE even though the "1"s are not consecutive.
Using diff and rle, something like this should work:
result <- rle(diff(numbers))
any(result$lengths>=2 & result$values==1)
# [1] TRUE
In response to the comments below: my previous answer was only testing for runs of exactly length 3, excluding longer ones. Changing the == to >= fixes this. It also works for runs involving negative numbers:
> numbers4 <- c(-2, -1, 0, 5, 7, 8)
> result <- rle(diff(numbers4))
> any(result$lengths>=2 & result$values==1)
[1] TRUE
Benchmarks!
I am including a couple of functions of mine. Feel free to add yours. To qualify, you need to write a general function that tells whether a vector x contains n or more consecutive numbers. I provide a unit-test function below.
The contenders:
flodel.filter <- function(x, n, incr = 1L) {
  if (n > length(x)) return(FALSE)
  x <- as.integer(x)
  is.cons <- tail(x, -1L) == head(x, -1L) + incr
  any(filter(is.cons, rep(1L, n-1L), sides = 1, method = "convolution") == n-1L,
      na.rm = TRUE)
}
flodel.which <- function(x, n, incr = 1L) {
  is.cons <- tail(x, -1L) == head(x, -1L) + incr
  any(diff(c(0L, which(!is.cons), length(x))) >= n)
}
thelatemail.rle <- function(x, n, incr = 1L) {
  result <- rle(diff(x))
  any(result$lengths >= n-1L & result$values == incr)
}
improved.rle <- function(x, n, incr = 1L) {
  result <- rle(diff(as.integer(x)) == incr)
  any(result$lengths >= n-1L & result$values)
}
carl.seqle <- function(x, n, incr = 1) {
  if(!is.numeric(x)) x <- as.numeric(x)
  z <- length(x)
  y <- x[-1L] != x[-z] + incr
  i <- c(which(y | is.na(y)), z)
  any(diff(c(0L, i)) >= n)
}
Unit tests:
check.fun <- function(fun)
  stopifnot(
    fun(c(1,2,3), 3),
    !fun(c(1,2), 3),
    !fun(c(1), 3),
    !fun(c(1,1,1,1), 3),
    !fun(c(1,1,2,2), 3),
    fun(c(1,1,2,3), 3)
  )
check.fun(flodel.filter)
check.fun(flodel.which)
check.fun(thelatemail.rle)
check.fun(improved.rle)
check.fun(carl.seqle)
Benchmarks:
x <- sample(1:10, 1000000, replace = TRUE)
library(microbenchmark)
microbenchmark(
  flodel.filter(x, 6),
  flodel.which(x, 6),
  thelatemail.rle(x, 6),
  improved.rle(x, 6),
  carl.seqle(x, 6),
  times = 10)
# Unit: milliseconds
# expr min lq median uq max neval
# flodel.filter(x, 6) 96.03966 102.1383 144.9404 160.9698 177.7937 10
# flodel.which(x, 6) 131.69193 137.7081 140.5211 185.3061 189.1644 10
# thelatemail.rle(x, 6) 347.79586 353.1015 361.5744 378.3878 469.5869 10
# improved.rle(x, 6) 199.35402 200.7455 205.2737 246.9670 252.4958 10
# carl.seqle(x, 6) 213.72756 240.6023 245.2652 254.1725 259.2275 10
After diff, you can check for any two consecutive 1s:
numbers = c(1,2,3,5,7,8)
difference = diff(numbers) == 1
## [1] TRUE TRUE FALSE FALSE TRUE
## find at least one pair of consecutive TRUEs
any(tail(difference, -1) &
    head(difference, -1))
## [1] TRUE
It's nice to see home-grown solutions here.
Fellow Stack Overflow user Carl Witthoft posted a function he named seqle() and shared it here.
The function looks like this:
seqle <- function(x, incr = 1) {
  if(!is.numeric(x)) x <- as.numeric(x)
  n <- length(x)
  y <- x[-1L] != x[-n] + incr
  i <- c(which(y | is.na(y)), n)
  list(lengths = diff(c(0L, i)),
       values = x[head(c(0L, i) + 1L, -1L)])
}
Let's see it in action. First, some data:
numbers1 <- c(1, 2, 3, 5, 7, 8)
numbers2 <- c(-2, 2, 3, 5, 6, 7, 8)
numbers3 <- c(1, 2, 2, 2, 1, 2, 3)
Now, the output:
seqle(numbers1)
# $lengths
# [1] 3 1 2
#
# $values
# [1] 1 5 7
#
seqle(numbers2)
# $lengths
# [1] 1 2 4
#
# $values
# [1] -2 2 5
#
seqle(numbers3)
# $lengths
# [1] 2 1 1 3
#
# $values
# [1] 1 2 2 1
#
Of particular interest to you is the "lengths" in the result.
Another interesting point is the incr argument. Here we can set the increment to, say, 2 and look for sequences where the difference between the numbers is two. So, for the first vector, we would expect the sequence of 3, 5, and 7 to be detected.
Let's try:
> seqle(numbers1, incr = 2)
$lengths
[1] 1 1 3 1
$values
[1] 1 2 3 8
So we can see that, with incr = 2, we have runs of length 1 (1), 1 (2), 3 (3, 5, 7), and 1 (8).
How does it work with ECII's second challenge? Seems OK!
> numbers4 <- c(-2, -1, 0, 5, 7, 8)
> seqle(numbers4)
$lengths
[1] 3 1 2
$values
[1] -2 5 7
Simple but works
numbers = c(-2,2,3,4,5,10,6,7,8)
x1 <- c(diff(numbers), 0)
x2 <- c(0, diff(numbers[-1]), 0)
x3 <- c(0, diff(numbers[c(-1, -2)]), 0, 0)
rbind(x1, x2, x3)
colSums(rbind(x1, x2, x3)) == 3 # TRUE where a triplet of consecutive integers starts
[1] FALSE TRUE TRUE FALSE FALSE FALSE TRUE FALSE FALSE
sum(colSums(rbind(x1, x2, x3)) == 3) # how many triplets of consecutive integers occur
[1] 3
which(colSums(rbind(x1, x2, x3)) == 3) # the starting locations of the triplets
[1] 2 3 7
Note that this will not work for consecutive negative intervals such as c(-2, -1, 0), because of how diff() works.
GOAL
Given a list of lists, my goal is to reverse its structure (in R).
That is, I want the elements of the nested lists to become elements of the top-level list.
An example probably specifies my purpose better. Given:
z <- list(z1 = list(a = 1, b = 2, c = 3), z2 = list(b = 4, a = 1, c = 0))
I want an output equivalent to the following R object:
o <- list(a = list(z1 = 1, z2 = 1), b = list(z1 = 2, z2 = 4), c = list(z1 = 3, z2 = 0))
SOLUTIONS
MY SOLUTION
I created my own solution, which I am attaching below, but let me know if there is a better one.
revert_list_str_1 <- function(ls) {
  res <- lapply(names(ls[[1]]), function(n, env) {
    name <- paste(n, 'elements', sep = '_')
    assign(name, vector('list', 0))
    inner <- sapply(ls, function(x) {
      assign(name, c(get(name), x[which(names(x) == n)]))
    })
    names(inner) <- names(ls)
    inner
  })
  names(res) <- names(ls[[1]])
  res
}
Executing str(revert_list_str_1(z)), I obtain the following output, corresponding to what I wanted.
List of 3
$ a:List of 2
..$ z1: num 1
..$ z2: num 1
$ b:List of 2
..$ z1: num 2
..$ z2: num 4
$ c:List of 2
..$ z1: num 3
..$ z2: num 0
But as I said I'd like to investigate (and learn) the existence of a more elegant and dynamic solution.
In fact, my solution fully works only if all the nested lists have the same names (possibly in a different order); this is because of names(ls[[1]]). I would also point out that it acts only on lists of two levels, like the one reported.
So, do you know other solutions that are more dynamic? Can rapply and/or Filter functions be useful for this task?
end edit 1.
ANALYSIS OF PROPOSED SOLUTIONS
I've done a little analysis of the proposed solutions. Thanks, you all!
The analysis consists of verifying the following points for all functions:
1. accepted classes (nested list elements):
1.1. types preserved even when elements have different (atomic) types
1.2. objects contained in the elements preserved (e.g. a matrix)
2. columns considered (by 'columns' I mean the names of the nested lists):
2.1. non-common columns ignored
2.2. non-common columns preserved
2.3. works also when columns do not match (rather than relying only on the names of the first nested list)
In all these cases the classification 'yes' is understood positively, except for point 2.1, where 'not' is the positive outcome.
These are all the functions I've considered (the comments relate to the analysis items mentioned above):
# yes 1.1
# yes 1.2
# yes 2.1, not 2.2, not 2.3
revert_list_str_1 <- function(ls) { # @leodido
  # see above
}
# not 1.1
# not 1.2
# not 2.1, not 2.2, not 2.3
revert_list_str_2 <- function(ls) { # @mnel
  # convert each component of the list to a data.frame,
  # then rbind them so that named elements are matched
  x <- data.frame((do.call(rbind, lapply(ls, data.frame))))
  # convert each column into an appropriately named list
  o <- lapply(as.list(x), function(i, nam) as.list(`names<-`(i, nam)), nam = rownames(x))
  o
}
# yes 1.1
# yes 1.2
# yes 2.1, not 2.2, yes 2.3
revert_list_str_3 <- function(ls) { # @mnel
  # unique names
  nn <- Reduce(unique, lapply(ls, names))
  # convert from matrix to list; `[` is used to ensure correct ordering
  as.list(data.frame(do.call(rbind, lapply(ls, `[`, nn))))
}
# yes 1.1
# yes 1.2
# yes 2.1, not 2.2, yes 2.3
revert_list_str_4 <- function(ls) { # @Josh O'Brien
  # get sub-elements in the same order
  x <- lapply(ls, `[`, names(ls[[1]]))
  # stack and reslice
  apply(do.call(rbind, x), 2, as.list)
}
# not 1.1
# not 1.2
# not 2.1, not 2.2, not 2.3
revert_list_str_5 <- function(ls) { # @mnel
  apply(data.frame((do.call(rbind, lapply(ls, data.frame)))), 2, as.list)
}
# not 1.1
# not 1.2
# not 2.1, yes 2.2, yes 2.3
revert_list_str_6 <- function(ls) { # @baptiste + @Josh O'Brien
  # requires library(reshape); note the original snippet used the global `z` here
  b <- recast(ls, L2 ~ L1)
  apply(b, 1, as.list)
}
# yes 1.1
# yes 1.2
# not 2.1, yes 2.2, yes 2.3
revert_list_str_7 <- function(ll) { # @Josh O'Brien
  nms <- unique(unlist(lapply(ll, function(X) names(X))))
  ll <- lapply(ll, function(X) setNames(X[nms], nms))
  ll <- apply(do.call(rbind, ll), 2, as.list)
  lapply(ll, function(X) X[!sapply(X, is.null)])
}
CONSIDERATIONS
From this analysis it emerges that:
functions revert_list_str_7 and revert_list_str_6 are the most flexible regarding the names of the nested lists
functions revert_list_str_4 and revert_list_str_3, followed by my own function, are complete enough and good trade-offs
the most complete function overall is revert_list_str_7
BENCHMARKS
To complete the work, I've done some little benchmarks (with the microbenchmark R package) on these functions (times = 1000 for each benchmark).
BENCHMARK 1
Input:
list(z1 = list(a = 1, b = 2, c = 3), z2 = list(a = 0, b = 3, d = 22, f = 9)).
Results:
Unit: microseconds
expr min lq median uq max
1 func_1 250.069 467.5645 503.6420 527.5615 2028.780
2 func_3 204.386 393.7340 414.5485 429.6010 3517.438
3 func_4 89.922 173.7030 189.0545 194.8590 1669.178
4 func_6 11295.463 20985.7525 21433.8680 21934.5105 72476.316
5 func_7 348.585 387.0265 656.7270 691.2060 2393.988
Winner: revert_list_str_4.
BENCHMARK 2
Input:
list(z1 = list(a = 1, b = 2, c = 'ciao'), z2 = list(a = 0, b = 3, c = 5)).
revert_list_str_6 is excluded because it does not support different types of nested child elements.
Results:
Unit: microseconds
expr min lq median uq max
1 func_1 249.558 483.2120 502.0915 550.7215 2096.978
2 func_3 210.899 387.6835 400.7055 447.3785 1980.912
3 func_4 92.420 170.9970 182.0335 192.8645 1857.582
4 func_7 257.772 469.9280 477.8795 487.3705 2035.101
Winner: revert_list_str_4.
BENCHMARK 3
Input:
list(z1 = list(a = 1, b = m, c = 'ciao'), z2 = list(a = 0, b = 3, c = m)).
m is a 3x3 integer matrix, and revert_list_str_6 has been excluded again.
Results:
Unit: microseconds
expr min lq median uq max
1 func_1 261.173 484.6345 503.4085 551.6600 2300.750
2 func_3 209.322 393.7235 406.6895 449.7870 2118.252
3 func_4 91.556 174.2685 184.5595 196.2155 1602.983
4 func_7 252.883 474.1735 482.0985 491.9485 2058.306
Winner: revert_list_str_4. Again!
end edit 2.
CONCLUSION
First of all: thanks to all, wonderful solutions.
In my opinion, if you know in advance that your list will have nested lists with the same names, revert_list_str_4 is the winner as the best compromise between performance and support for different types.
The most complete solution is revert_list_str_7, although its full flexibility costs on average about 2.5x in performance compared to revert_list_str_4 (useful if your nested lists have different names).
Edit:
Here's a more flexible version that will work on lists whose elements don't necessarily contain the same set of sub-elements.
fun <- function(ll) {
  nms <- unique(unlist(lapply(ll, function(X) names(X))))
  ll <- lapply(ll, function(X) setNames(X[nms], nms))
  ll <- apply(do.call(rbind, ll), 2, as.list)
  lapply(ll, function(X) X[!sapply(X, is.null)])
}
## An example of an 'unbalanced' list
z <- list(z1 = list(a = 1, b = 2),
z2 = list(b = 4, a = 1, c = 0))
## Try it out
fun(z)
Original answer
z <- list(z1 = list(a = 1, b = 2, c = 3), z2 = list(b = 4, a = 1, c = 0))
zz <- lapply(z, `[`, names(z[[1]])) ## Get sub-elements in same order
apply(do.call(rbind, zz), 2, as.list) ## Stack and reslice
EDIT -- working from @Josh O'Brien's suggestion and my own improvements
The problem was that do.call(rbind, ...) was not calling rbind.data.frame, which does some matching of names. rbind.data.frame should work, because data.frames are lists and each sublist is a list, so we could just call it directly.
apply(do.call(rbind.data.frame, z), 1, as.list)
However, while this may be succinct, it is slow because do.call(rbind.data.frame, ...) is inherently slow.
Something like (in two steps)
# convert each component of z to a data.frame,
# then rbind them so that named elements are matched
x <- data.frame((do.call(rbind, lapply(z, data.frame))))
# convert each column into an appropriately named list
o <- lapply(as.list(x), function(i,nam) as.list(`names<-`(i, nam)), nam = rownames(x))
o
$a
$a$z1
[1] 1
$a$z2
[1] 1
$b
$b$z1
[1] 2
$b$z2
[1] 4
$c
$c$z1
[1] 3
$c$z2
[1] 0
And an alternative
# unique names
nn <- Reduce(unique,lapply(z, names))
# convert from matrix to list; `[` is used to ensure correct ordering
as.list(data.frame(do.call(rbind,lapply(z, `[`, nn))))
reshape can get you close,
library(reshape)
b = recast(z, L2~L1)
split(b[,-1], b$L2)
The recently released purrr contains a function, transpose, whose purpose is to 'revert' a list structure. There is a major caveat to the transpose function: it matches on position and not name (https://cran.r-project.org/web/packages/purrr/purrr.pdf). This means that it is not the correct tool for benchmark 1 above. I therefore only consider benchmarks 2 and 3 below.
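A quick illustration of the position-matching caveat (my own sketch; the exact output shape may vary across purrr versions):
library(purrr)
z <- list(z1 = list(a = 1, b = 2, c = 3), z2 = list(b = 4, a = 1, c = 0))
str(transpose(z)) # sub-elements are paired by position, so z1$a is grouped
                  # with z2$b; inspect the names to see the mismatch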
Benchmark 2
B2 <- list(z1 = list(a = 1, b = 2, c = 'ciao'), z2 = list(a = 0, b = 3, c = 5))
revert_list_str_8 <- function(ll) { # @z109620
  transpose(ll)
}
microbenchmark(revert_list_str_1(B2), revert_list_str_3(B2), revert_list_str_4(B2), revert_list_str_7(B2), revert_list_str_8(B2), times = 1e3)
Unit: microseconds
expr min lq mean median uq max neval
revert_list_str_1(B2) 228.752 254.1695 297.066646 268.8325 293.5165 4501.231 1000
revert_list_str_3(B2) 211.645 232.9070 277.149579 250.9925 278.6090 2512.361 1000
revert_list_str_4(B2) 79.673 92.3810 112.889130 100.2020 111.4430 2522.625 1000
revert_list_str_7(B2) 237.062 252.7030 293.978956 264.9230 289.1175 4838.982 1000
revert_list_str_8(B2) 2.445 6.8440 9.503552 9.2880 12.2200 148.591 1000
Clearly the transpose function is the winner! It also requires much less code.
Benchmark 3
B3 <- list(z1 = list(a = 1, b = m, c = 'ciao'), z2 = list(a = 0, b = 3, c = m))
microbenchmark(revert_list_str_1(B3), revert_list_str_3(B3), revert_list_str_4(B3), revert_list_str_7(B3), revert_list_str_8(B3), times = 1e3)
Unit: microseconds
expr min lq mean median uq max neval
revert_list_str_1(B3) 229.242 253.4360 280.081313 266.877 281.052 2818.341 1000
revert_list_str_3(B3) 213.600 232.9070 271.793957 248.304 272.743 2739.646 1000
revert_list_str_4(B3) 80.161 91.8925 109.713969 98.980 108.022 2403.362 1000
revert_list_str_7(B3) 236.084 254.6580 287.274293 264.922 280.319 2718.628 1000
revert_list_str_8(B3) 2.933 7.3320 9.140367 9.287 11.243 55.233 1000
Again, transpose outperforms all others.
The problem with the benchmark tests above is that they focus on very small lists. For this reason, the numerous loops nested within functions 1-7 do not pose too much of a problem. As the size of the list, and therefore the number of iterations, increases, the speed advantage of transpose will likely grow.
The purrr package is awesome! It does a lot more than revert lists. In combination with the dplyr package, purrr makes it possible to do most of your coding in the powerful and beautiful functional programming paradigm. Thank the lord for Hadley!
How about this simple solution? It is completely general, and almost as fast as Josh O'Brien's original answer that assumed common internal names (#4).
zv <- unlist(unname(z), recursive=FALSE)
ans <- split(setNames(zv, rep(names(z), lengths(z))), names(zv))
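A quick check (my addition) that this reproduces the structure from the question:
str(ans)
# List of 3
#  $ a:List of 2
#   ..$ z1: num 1
#   ..$ z2: num 1
#  ... (matches the str() output shown earlier for revert_list_str_1)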
And here is a general version that is robust to not having names:
invertList <- function(z) {
  zv <- unlist(unname(z), recursive = FALSE)
  zind <- if (is.null(names(zv))) sequence(lengths(z)) else names(zv)
  if (!is.null(names(z)))
    zv <- setNames(zv, rep(names(z), lengths(z)))
  ans <- split(zv, zind)
  if (is.null(names(zv)))
    ans <- unname(ans)
  ans
}
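For instance (my own example), on a fully unnamed list it falls back to positional grouping:
invertList(list(list(1, 2), list(3, 4)))
# returns list(list(1, 3), list(2, 4)) -- grouped by position, with no names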
I'd like to add a further solution to this valuable collection (to which I have turned many times):
revert_list_str_9 <- function(x) do.call(Map, c(c, x))
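To see why this works (my annotation): for the two-element z used in this question, the call expands as below, so the i-th sub-elements of each inner list are combined positionally. Note that because the combiner is c(), scalar entries come back as named vectors such as c(z1 = 1, z2 = ...) rather than one-element lists.
# do.call(Map, c(c, z)) is equivalent to:
Map(c, z1 = z$z1, z2 = z$z2)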
If this were code golf, we'd have a clear winner! Of course, this requires the individual list entries to be in the same order. This can be extended, using various ideas from above, such as
revert_list_str_10 <- function(x) {
  nme <- names(x[[1]]) # from revert_list_str_4
  do.call(Map, c(c, lapply(x, `[`, nme)))
}
revert_list_str_11 <- function(x) {
  nme <- Reduce(unique, lapply(x, names)) # from revert_list_str_3
  do.call(Map, c(c, lapply(x, `[`, nme)))
}
Performance-wise it's also not too shabby. If everything is properly sorted, we have a new base R solution to beat; if not, the timings are still very competitive.
z <- list(z1 = list(a = 1, b = 2, c = 3), z2 = list(b = 4, a = 1, c = 0))
microbenchmark::microbenchmark(
revert_list_str_1(z), revert_list_str_2(z), revert_list_str_3(z),
revert_list_str_4(z), revert_list_str_5(z), revert_list_str_7(z),
revert_list_str_9(z), revert_list_str_10(z), revert_list_str_11(z),
times = 1e3
)
#> Unit: microseconds
#> expr min lq mean median uq max
#> revert_list_str_1(z) 51.946 60.9845 67.72623 67.2540 69.8215 1293.660
#> revert_list_str_2(z) 461.287 482.8720 513.21260 490.5495 498.8110 1961.542
#> revert_list_str_3(z) 80.180 89.4905 99.37570 92.5800 95.3185 1424.012
#> revert_list_str_4(z) 19.383 24.2765 29.50865 26.9845 29.5385 1262.080
#> revert_list_str_5(z) 499.433 525.8305 583.67299 533.1135 543.4220 25025.568
#> revert_list_str_7(z) 56.647 66.1485 74.53956 70.8535 74.2445 1309.346
#> revert_list_str_9(z) 6.128 7.9100 11.50801 10.2960 11.5240 1591.422
#> revert_list_str_10(z) 8.455 10.9500 16.06621 13.2945 14.8430 1745.134
#> revert_list_str_11(z) 14.953 19.8655 26.79825 22.1805 24.2885 2084.615
Unfortunately, this is not my creation, but exists courtesy of @thelatemail.