Generalize R %in% operator to match tuples

I spent a while the other day looking for a way to check if a row vector is contained in some set of row vectors in R. Basically, I want to generalize the %in% operator to match a tuple instead of each entry in a vector. For example, I want:
row.vec = c("A", 3)
row.vec
# [1] "A" "3"
data.set = rbind(c("A",1),c("B",3),c("C",2))
data.set
# [,1] [,2]
# [1,] "A" "1"
# [2,] "B" "3"
# [3,] "C" "2"
row.vec %tuple.in% data.set
# [1] FALSE
for my made-up operator %tuple.in%, because the row vector c("A",3) is not a row of data.set. Using the %in% operator gives:
row.vec %in% data.set
# [1] TRUE TRUE
because "A" and 3 are in data.set, which is not what I want.
I have two questions. First, are there any good existing solutions to this?
Second, since I couldn't find existing solutions (even if they exist), I tried to write my own function to do it. It works for an input matrix of row vectors, but I'm wondering if anyone can propose improvements:
is.tuple.in <- function(matrix1, matrix2){
  # Apply rbind() so that matrix1 has columns even if it is a row vector.
  matrix1 = rbind(matrix1)
  if(ncol(matrix1) != ncol(matrix2)){
    stop("Matrices must have the same number of columns.")
  }
  # Now check for the first row and handle other rows recursively
  row.vec = matrix1[1,]
  tuple.found = FALSE
  for(i in 1:nrow(matrix2)){
    # If we find a match, then this row exists in matrix2 and we can break the loop
    if(all(row.vec == matrix2[i,])){
      tuple.found = TRUE
      break
    }
  }
  # If there are more rows to be checked, use a recursive call
  if(nrow(matrix1) > 1){
    return(c(tuple.found, is.tuple.in(matrix1[2:nrow(matrix1),], matrix2)))
  } else {
    return(tuple.found)
  }
}
I see a couple of problems with this that I'm not sure how to fix. First, I'd like the base case to be clear at the start of the function. I didn't manage to do this because I pass matrix1[2:nrow(matrix1),] in the recursive call, which produces an error if matrix1 has only one row. So instead of reaching a case where matrix1 is empty, I have an if condition at the end deciding whether more iterations are necessary.
Second, I think the use of rbind() at the start is sloppy, but I needed it for when matrix1 had been reduced to a single row. Without using rbind(), ncol(matrix1) produced an error in the 1-row case. I figure my trouble here has to do with a lack of knowledge about R data types.
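For reference, both problems trace back to R dropping dimensions when a matrix subset is a single row; here is a minimal sketch of the drop = FALSE idiom (not used above) that keeps a one-row subset as a matrix:
m <- rbind(c("A", 1), c("B", 3))
m[2, ]                      # dimensions dropped: a plain character vector
# [1] "B" "3"
m[2, , drop = FALSE]        # still a 1-row matrix, so ncol() keeps working
ncol(m[2, , drop = FALSE])
# [1] 2
With drop = FALSE, the recursive call could pass matrix1[-1, , drop = FALSE] and the rbind() workaround would no longer be needed.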
Any help would be appreciated.

I'm wondering if you have made this a bit more complicated than it is. For example,
set.seed(1618)
vec <- c(1,3)
mat <- matrix(rpois(1000,3), ncol = 2)
rownames(mat) <- 1:nrow(mat)
mat[sapply(1:nrow(mat), function(x) all(vec %in% mat[x, ])), ]
# gives me this
# [,1] [,2]
# 6 3 1
# 38 3 1
# 39 3 1
# 85 1 3
# 88 1 3
# 89 1 3
# 95 3 1
# 113 1 3
# ...
You could subset this further if you care about the order, or you could modify the function slightly:
mat[sapply(1:nrow(mat), function(x)
  all(paste(vec, collapse = '') %in% paste(mat[x, ], collapse = ''))), ]
# [,1] [,2]
# 85 1 3
# 88 1 3
# 89 1 3
# 113 1 3
# 133 1 3
# 139 1 3
# 187 1 3
# ...
another example with a longer vector
set.seed(1618)
vec <- c(1,4,5,2)
mat <- matrix(rpois(10000, 3), ncol = 4)
rownames(mat) <- 1:nrow(mat)
mat[sapply(1:nrow(mat), function(x) all(vec %in% mat[x, ])), ]
# [,1] [,2] [,3] [,4]
# 57 2 5 1 4
# 147 1 5 2 4
# 279 1 2 5 4
# 303 1 5 2 4
# 437 1 5 4 2
# 443 1 4 5 2
# 580 5 4 2 1
# ...
I see a couple that match:
mat[sapply(1:nrow(mat), function(x)
  all(paste(vec, collapse = '') %in% paste(mat[x, ], collapse = ''))), ]
# [,1] [,2] [,3] [,4]
# 443 1 4 5 2
# 901 1 4 5 2
# 1047 1 4 5 2
but only three are exact matches
for your single row case:
vec <- c(1,4,5,2)
mat <- matrix(c(1,4,5,2), ncol = 4)
rownames(mat) <- 1:nrow(mat)
mat[sapply(1:nrow(mat), function(x)
  all(paste(vec, collapse = '') %in% paste(mat[x, ], collapse = ''))), ]
# [1] 1 4 5 2
here is a simple function with the above code
is.tuplein <- function(vec, mat, exact = TRUE) {
  rownames(mat) <- 1:nrow(mat)
  if (exact)
    tmp <- mat[sapply(1:nrow(mat), function(x)
      all(paste(vec, collapse = '') %in% paste(mat[x, ], collapse = ''))), ]
  else
    tmp <- mat[sapply(1:nrow(mat), function(x) all(vec %in% mat[x, ])), ]
  return(tmp)
}
is.tuplein(vec = vec, mat = mat)
# [1] 1 4 5 2
seems to work, so let's make our own %in% operator:
`%tuple%` <- function(x, y) is.tuplein(vec = x, mat = y, exact = TRUE)
`%tuple1%` <- function(x, y) is.tuplein(vec = x, mat = y, exact = FALSE)
and try her out
set.seed(1618)
c(1,2,3) %tuple% matrix(rpois(1002,3), ncol = 3)
# [,1] [,2] [,3]
# 133 1 2 3
# 190 1 2 3
# 321 1 2 3
set.seed(1618)
c(1,2,3) %tuple1% matrix(rpois(1002,3), ncol = 3)
# [,1] [,2] [,3]
# 48 2 3 1
# 64 2 3 1
# 71 1 3 2
# 73 3 1 2
# 108 3 1 2
# 112 1 3 2
# 133 1 2 3
# 166 2 1 3

Does this do what you want (even for more than 2 columns)?
paste(row.vec,collapse="_") %in% apply(data.set,1,paste,collapse="_")
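A quick check with the example data from the question (same row.vec and data.set as above):
row.vec <- c("A", 3)
data.set <- rbind(c("A", 1), c("B", 3), c("C", 2))
paste(row.vec, collapse = "_") %in% apply(data.set, 1, paste, collapse = "_")
# [1] FALSE
One caveat: pick a separator that cannot occur inside the entries themselves, or concatenation collisions could produce false matches.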

Related

R values that go into matrix multiplication

What is the fastest approach to saving unique values that go into matrix multiplication (without 0)?
For example, if I have a data.table object
library(data.table)
A = data.table(j3=c(3,0,3),j5=c(0,5,5),j7=c(0,7,0),j8=c(8,0,8))
I would like to see which unique values go into A*transpose(A) (or as.matrix(A) %*% as.matrix(t(A))). Right now, I can do it using for loops as:
B = t(A)
L = list()
models = c('A1','A2','A3')
for (i in 1:nrow(A)) {
  for (j in 1:ncol(B)) {
    u = union(unlist(A[i,]), B[,j])
    u = u[u != 0] # remove 0
    L[[paste(models[i], models[j])]] = u
  }
}
However, is there a faster and more RAM-efficient way? The output doesn't have to be a list object, as in my case, it can be a data.table (data.frame) as well. Also, the order of values is not important. For example, 3 5 8 is as good as 5 3 8, 8 5 3 etc.
Any help is appreciated.
EDIT: So as.matrix(A) %*% as.matrix(t(A)) is:
[,1] [,2] [,3]
[1,] 73 0 73
[2,] 0 74 25
[3,] 73 25 98
The first element is calculated as 3*3+0*0+0*0+8*8 = 73, the second element is 3*0+0*5+0*7+8*0 = 0, etc. I need the unique numbers that go into this calculation, but without 0.
Therefore outputs (saved in the list L) are:
> L
$`A1 A1`
[1] 3 8
$`A1 A2`
[1] 3 8 5 7
$`A1 A3`
[1] 3 8 5
$`A2 A1`
[1] 5 7 3 8
$`A2 A2`
[1] 5 7
$`A2 A3`
[1] 5 7 3 8
$`A3 A1`
[1] 3 5 8
$`A3 A2`
[1] 3 5 8 7
$`A3 A3`
[1] 3 5 8
Once again, the output doesn't have to be a list object. I would prefer data.table if it is doable. Is it possible to rewrite my approach as an Rcpp function?
Potential optimizations
Following up on #user2554330's answer, note that if A is an m-by-n matrix, then AAᵀ = A %*% t(A) (equivalently tcrossprod(A)) is an m-by-m symmetric matrix. AAᵀ[i, j] and AAᵀ[j, i] are computed using the same entries of A, so you only need to inspect m*(m+1)/2 pairs of rows of A, not m*m.
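As a small illustration of that count, here is one way to enumerate exactly those m*(m+1)/2 row pairs (the same sequence()/rep.int() trick used by f and g below):
m <- 3
a <- seq_len(m)
cbind(sequence(a), rep.int(a, a)) # the 6 = 3*4/2 pairs (i, j) with i <= j
#      [,1] [,2]
# [1,]    1    1
# [2,]    1    2
# [3,]    2    2
# [4,]    1    3
# [5,]    2    3
# [6,]    3    3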
You can do even better by finding and caching the unique elements of each row before pairing them. Preprocessing in this way avoids redundant computation and should noticeably improve performance when m << n.
Limitations
Another aspect of the problem is how unique works under the hood. unique has an argument nmax that you can use to specify an expected maximum number of unique elements. From ?duplicated:
Except for factors, logical and raw vectors the default nmax = NA is equivalent to nmax = length(x). Since a hash table of size 8*nmax bytes is allocated, setting nmax suitably can save large amounts of memory. For factors it is automatically set to the smaller of length(x) and the number of levels plus one (for NA). If nmax is set too small there is liable to be an error: nmax = 1 is silently ignored.
Long vectors are supported for the default method of duplicated, but may only be usable if nmax is supplied.
These comments apply to unique as well. Since you have a 300-by-4e+07 matrix, you would be evaluating (with preprocessing):
unique(<4e+07-length vector>), 300 times,
unique(<up to 8e+07-length vector>), 299*300/2 times.
That can consume a lot of memory if you don't know anything about your matrix that might allow you to set nmax. And it can take a long time if you don't have access to many CPUs.
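For reference, nmax is forwarded to unique's default method; here is a minimal sketch of setting it when an upper bound on the number of distinct values is known:
x <- sample.int(30L, 1e6L, replace = TRUE) # at most 30 distinct values
u <- unique(x, nmax = 30L)                 # caps the size of the internal hash table
length(u)                                  # almost surely 30 with this many draws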
So I agree with comments asking you to consider why you need to do this at all and whether your underlying problem has a nicer solution.
Two answers
FWIW, here are two approaches to your general problem that actually take advantage of symmetry. f and g are without and with preprocessing. [[.utri allows you to extract elements from the return value, an m*(m+1)/2-length list, as if it were an m-by-m matrix. as.matrix.utri constructs the full, symmetric m-by-m list matrix.
f <- function(A, nmax = NA) {
  a <- seq_len(nrow(A))
  J <- cbind(sequence(a), rep.int(a, a))
  FUN <- function(i) {
    if (i[1L] == i[2L]) {
      x <- A[i[1L], ]
    } else {
      x <- c(A[i[1L], ], A[i[2L], ])
    }
    unique.default(x[x != 0], nmax = nmax)
  }
  res <- apply(J, 1L, FUN, simplify = FALSE)
  class(res) <- "utri"
  res
}
g <- function(A, nmax = NA) {
  l <- lapply(asplit(A, 1L), function(x) unique.default(x[x != 0], nmax = nmax))
  a <- seq_along(l)
  J <- cbind(sequence(a), rep.int(a, a))
  FUN <- function(i) {
    if (i[1L] == i[2L]) {
      l[[i[1L]]]
    } else {
      unique.default(c(l[[i[1L]]], l[[i[2L]]]))
    }
  }
  res <- apply(J, 1L, FUN, simplify = FALSE)
  class(res) <- "utri"
  res
}
`[[.utri` <- function(x, i, j) {
  stopifnot(length(i) == 1L, length(j) == 1L)
  class(x) <- NULL
  if (i <= j) {
    x[[i + (j * (j - 1L)) %/% 2L]]
  } else {
    x[[j + (i * (i - 1L)) %/% 2L]]
  }
}
as.matrix.utri <- function(x) {
  p <- length(x)
  n <- as.integer(round(0.5 * (-1 + sqrt(1 + 8 * p))))
  i <- rep.int(seq_len(n), n)
  j <- rep.int(seq_len(n), rep.int(n, n))
  r <- i > j
  ir <- i[r]
  i[r] <- j[r]
  j[r] <- ir
  res <- x[i + (j * (j - 1L)) %/% 2L]
  dim(res) <- c(n, n)
  res
}
Here is a simple test on a 4-by-4 integer matrix:
mkA <- function(m, n) {
  A <- sample(0:(n - 1L), size = as.double(m) * n, replace = TRUE,
              prob = rep.int(c(n - 1, 1), c(1L, n - 1L)))
  dim(A) <- c(m, n)
  A
}
set.seed(1L)
A <- mkA(4L, 4L)
A
## [,1] [,2] [,3] [,4]
## [1,] 0 0 2 3
## [2,] 0 1 0 0
## [3,] 2 1 0 3
## [4,] 1 2 0 0
identical(f(A), gA <- g(A))
## [1] TRUE
gA[[1L, 1L]] # used for 'tcrossprod(A)[1L, 1L]'
## [1] 2 3
gA[[1L, 2L]] # used for 'tcrossprod(A)[1L, 2L]'
## [1] 2 3 1
gA[[2L, 1L]] # used for 'tcrossprod(A)[2L, 1L]'
## [1] 2 3 1
gA # under the hood, an 'm*(m+1)/2'-length list
## [[1]]
## [1] 2 3
##
## [[2]]
## [1] 2 3 1
##
## [[3]]
## [1] 1
##
## [[4]]
## [1] 2 3 1
##
## [[5]]
## [1] 1 2 3
##
## [[6]]
## [1] 2 1 3
##
## [[7]]
## [1] 2 3 1
##
## [[8]]
## [1] 1 2
##
## [[9]]
## [1] 2 1 3
##
## [[10]]
## [1] 1 2
##
## attr(,"class")
## [1] "utri"
mgA <- as.matrix(gA) # the full, symmetric, 'm'-by-'m' list matrix
mgA
## [,1] [,2] [,3] [,4]
## [1,] integer,2 integer,3 integer,3 integer,3
## [2,] integer,3 1 integer,3 integer,2
## [3,] integer,3 integer,3 integer,3 integer,3
## [4,] integer,3 integer,2 integer,3 integer,2
mgA[1L, ] # used for first row of 'tcrossprod(A)'
## [[1]]
## [1] 2 3
##
## [[2]]
## [1] 2 3 1
##
## [[3]]
## [1] 2 3 1
##
## [[4]]
## [1] 2 3 1
## If you need names
dimnames(mgA) <- rep.int(list(sprintf("A%d", seq_len(nrow(mgA)))), 2L)
mgA["A1", ]
## $A1
## [1] 2 3
##
## $A2
## [1] 2 3 1
##
## $A3
## [1] 2 3 1
##
## $A4
## [1] 2 3 1
## If you need an 'm'-by-'m' 'data.table' result
DT <- data.table::as.data.table(mgA)
DT
## A1 A2 A3 A4
## 1: 2,3 2,3,1 2,3,1 2,3,1
## 2: 2,3,1 1 1,2,3 1,2
## 3: 2,3,1 1,2,3 2,1,3 2,1,3
## 4: 2,3,1 1,2 2,1,3 1,2
And here are two benchmarks on two large integer matrices, showing that preprocessing can help quite a bit:
set.seed(1L)
A <- mkA(100L, 1e+04L)
microbenchmark::microbenchmark(f(A), g(A), times = 10L, setup = gc(FALSE))
## Unit: milliseconds
##  expr       min        lq      mean    median        uq      max neval
##  f(A) 2352.0572 2383.3100 2435.7954 2403.8968 2431.6214 2619.553    10
##  g(A)  843.0206  852.5757  858.7262  858.2746  863.8239  881.450    10
A <- mkA(100L, 1e+06L)
microbenchmark::microbenchmark(f(A), g(A), times = 10L, setup = gc(FALSE))
## Unit: seconds
##  expr       min        lq      mean    median        uq       max neval
##  f(A) 290.93327 295.54319 302.57001 301.17810 307.50226 318.14203    10
##  g(A)  72.85608  73.83614  76.67941  76.57313  77.78056  83.73388    10
Perhaps we can try this
f <- function(A, models) {
  AA <- replace(A, A == 0, NA)
  setNames(
    c(t(outer(
      1:nrow(A),
      1:nrow(A),
      Vectorize(function(x, y) unique(na.omit(c(t(AA[c(x, y)])))))
    ))),
    t(outer(models, models, paste))
  )
}
which gives
$`A1 A1`
[1] 3 8
$`A1 A2`
[1] 3 8 5 7
$`A1 A3`
[1] 3 8 5
$`A2 A1`
[1] 5 7 3 8
$`A2 A2`
[1] 5 7
$`A2 A3`
[1] 5 7 3 8
$`A3 A1`
[1] 3 5 8
$`A3 A2`
[1] 3 5 8 7
$`A3 A3`
[1] 3 5 8
If you care about the speed, you can try
lst <- asplit(replace(A, A == 0, NA), 1)
mat <- matrix(list(), nrow = nrow(A), ncol = nrow(A))
mat[lower.tri(mat)] <- combn(lst, 2, function(...) unique(na.omit(unlist(...))), simplify = FALSE)
mat[upper.tri(mat)] <- t(mat)[upper.tri(mat)]
diag(mat) <- Map(function(x) unname(x)[!is.na(x)], lst)
L <- c(t(mat))
Thanks for posting the additional information in your edits. From what you posted, it appears that for all pairs of rows of a matrix or data table A, you want the unique non-zero values in those two rows.
To do that efficiently I'd suggest ensuring that A is a matrix. Row indexing in dataframes or data tables is a lot slower than doing so in matrices. (Column indexing can be faster, but I doubt if it's worth transposing the table to get that.)
Once you have a matrix, A[i, ] is a vector containing the values in row i, and that's a pretty fast calculation. You want the unique non-zero values in c(A[i, ], A[j, ]). The unique function will produce this, but won't leave out the zeros. I'd suggest experimenting. Depending on the contents of each row, it is conceivable that leaving the zeros out of the rows first before computing the unique entries could be either faster or slower than calculating all the unique values and deleting 0 afterwards.
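That experiment could look like the following minimal sketch (hypothetical data; microbenchmark is just one convenient timer):
set.seed(1)
x <- sample(0:9, 1e6, replace = TRUE)
microbenchmark::microbenchmark(
  filter_first = unique(x[x != 0]),     # drop zeros, then deduplicate
  unique_first = setdiff(unique(x), 0), # deduplicate, then drop zero
  times = 20L
)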
You say you want to do this for a few hundred rows, but each row is very long. I'd guess you won't be able to improve much on nested loops: the time will be spent on each entry, not on the loops. However, you could experiment with vectorization using the apply() function, e.g.
nrows <- nrow(A)  # assuming A is the matrix discussed above
result <- vector("list", nrows)
for (i in 1:nrows)
  result[[i]] <- apply(A, 1, function(row) setdiff(unique(c(row, A[i, ])), 0))
This will give a list of lists; if you want to examine entry i, j, you can use result[[c(i,j)]].
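Recursive indexing with a vector behaves like chained [[ ]] extraction; a tiny illustration:
r <- list(list("a", "b"), list("c", "d"))
identical(r[[c(2, 1)]], r[[2]][[1]])
# [1] TRUE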

Sampling a number of individuals in subgroups with no repeating group constellation in R

I have a number of individuals that I want to divide - randomly - into subgroups of size groupsize. I want to repeat this process n_groups times, with no repeating group constellation.
How can I achieve this in R?
I tried the following so far:
set.seed(1)
individuals <- 1:6
groupsize <- 3
n_groups <- 4
for(i in 1:n_groups) { print(sample(individuals, groupsize))}
[1] 1 4 3
[1] 1 2 6
[1] 3 2 6
[1] 3 1 5
...but I am not sure whether that really rules out repeating constellations?
Edit: After looking at the first suggestions and answers, I realized that another restriction could be interesting to me (sorry for not seeing it upfront).
Is there (in the concrete example above) a way to ensure that every individual was in contact with every other individual?
Based on your edited question, I assume that you want to make sure that all individuals are in at least one subgroup?
Then this might be the solution:
individuals <- 1:6
groupsize <- 3
n_groups <- 4
#sample groups
library(RcppAlgos)
# initialise
answer <- matrix()
# If the length of all unique elements in the answer is smaller than
# the number of individuals, take a new sample
while (length(unique(as.vector(answer))) < length(individuals)) {
  answer <- comboSample(individuals, groupsize, n = n_groups)
  # Line below is for demonstration only
  # answer <- comboSample(individuals, groupsize, n = n_groups, seed = 123)
}
# sample answer with seed = 123 (see commented line above)
# [,1] [,2] [,3]
# [1,] 1 3 4
# [2,] 1 3 6
# [3,] 2 3 5
# [4,] 2 3 4
Test with groups that do not contain every individual:
# Test with the following matrix
# [,1] [,2] [,3]
# [1,] 1 2 3
# [2,] 1 3 4
# [3,] 1 4 5
# [4,] 2 3 4
# Note that individual '6' is not present
answer <- matrix(c(1,2,3,1,3,4,1,4,5,2,3,4), nrow = 4, ncol = 3)
while (length(unique(as.vector(answer))) < length(individuals)) {
  answer <- comboSample(individuals, groupsize, n = n_groups)
}
# is recalculated to (in this case) the following answer
# [,1] [,2] [,3]
# [1,] 4 5 6
# [2,] 3 4 5
# [3,] 1 3 6
# [4,] 2 4 5
PASSED ;-)
You can use while to dynamically update your combination set, which avoids duplicates, e.g.,
res <- c()
while (length(res) < pmin(n_groups, choose(length(individuals), groupsize))) {
  v <- list(sort(sample(individuals, groupsize)))
  if (!v %in% res) res <- c(res, v)
}
which gives
> res
[[1]]
[1] 2 5 6
[[2]]
[1] 2 3 6
[[3]]
[1] 1 5 6
[[4]]
[1] 1 2 6

Find all combinations of a set of numbers that add up to a certain total

I've seen a few solutions to similar problems, but they all require iteration over the number of items to be added together.
Here's my goal: from a list of numbers, find all of the combinations (without replacement) that add up to a certain total. For example, if I have the numbers 1,1,2,3,5 and total 5, it should return {5}, {2,3}, and {1,1,3}.
I was trying to use combn but it required you to specify the number of items in each combination. Is there a way to do it that allows for solution sets of any size?
This is precisely what combo/permuteGeneral from RcppAlgos (I am the author) were built for. Since we have repetition of specific elements in our sample vector, we will be finding combinations of multisets that meet our criteria. Note that this is different than the more common case of generating combinations with repetition where each element is allowed to be repeated m times. For many combination generating functions, multisets pose problems as duplicates are introduced and must be dealt with. This can become a bottleneck in your code if the size of your data is decently large. The functions in RcppAlgos handle these cases efficiently without creating any duplicate results. I should mention that there are a couple of other great libraries that handle multisets quite well: multicool and arrangements.
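To see the duplicate issue concretely, compare what plain combn does with a repeated element (a small side illustration, not part of the benchmarks below):
combn(c(1, 1, 2), 2)
#      [,1] [,2] [,3]
# [1,]    1    1    1
# [2,]    1    2    2
Columns 2 and 3 are the same pair (1, 2), generated twice because the two 1s occupy different positions.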
Moving on to the task at hand, we can utilize the constraint arguments of comboGeneral to find all combinations of our vector that meet a specific criteria:
vec <- c(1,1,2,3,5) ## using variables from #r2evans
uni <- unique(vec)
myRep <- rle(vec)$lengths
ans <- 5
library(RcppAlgos)
lapply(seq_along(uni), function(x) {
  comboGeneral(uni, x, freqs = myRep,
               constraintFun = "sum",
               comparisonFun = "==",
               limitConstraints = ans)
})
[[1]]
[,1]
[1,] 5
[[2]]
[,1] [,2]
[1,] 2 3
[[3]]
[,1] [,2] [,3]
[1,] 1 1 3
[[4]]
[,1] [,2] [,3] [,4] ## no solutions of length 4
These functions are highly optimized and extend well to larger cases. For example, consider the following example that would produce over 30 million combinations:
## N.B. Using R 4.0.0 with new updated RNG introduced in 3.6.0
set.seed(42)
bigVec <- sort(sample(1:30, 40, TRUE))
rle(bigVec)
Run Length Encoding
lengths: int [1:22] 2 1 2 3 4 1 1 1 2 1 ...
values : int [1:22] 1 2 3 4 5 7 8 9 10 11 ...
bigUni <- unique(bigVec)
bigRep <- rle(bigVec)$lengths
bigAns <- 199
len <- 12
comboCount(bigUni, len, freqs = bigRep)
[1] 32248100
All 280,000+ results are returned very quickly:
system.time(bigTest <- comboGeneral(bigUni, len, freqs = bigRep,
                                    constraintFun = "sum",
                                    comparisonFun = "==",
                                    limitConstraints = bigAns))
   user  system elapsed
  0.273   0.004   0.271
head(bigTest)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
[1,] 1 1 2 3 4 25 26 26 26 27 28 30
[2,] 1 1 2 3 5 24 26 26 26 27 28 30
[3,] 1 1 2 3 5 25 25 26 26 27 28 30
[4,] 1 1 2 3 7 24 24 26 26 27 28 30
[5,] 1 1 2 3 7 24 25 25 26 27 28 30
[6,] 1 1 2 3 7 24 25 26 26 26 28 30
nrow(bigTest)
[1] 280018
all(rowSums(bigTest) == bigAns)
[1] TRUE
Addendum
I must mention that generally when I see a problem like "finding all combinations that sum to a particular number" my first thought is integer partitions. For example, in the related problem Getting all combinations which sum up to 100 in R, we can easily solve with the partitions library. However, this approach does not extend to the general case (as we have here) where the vector contains specific repetition, or where the vector contains values that don't easily convert to an integer equivalent (e.g. the vector c(0.1, 0.2, 0.3, 0.4) can easily be treated as 1:4; however, treating c(3.98486, 7.84692, 0.0038937, 7.4879) as integers and subsequently applying an integer partitions approach would require an extravagant amount of computing power, rendering this method useless).
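For the simple integer case, the partitions approach mentioned above looks roughly like this (a sketch assuming the partitions package, whose parts(n) enumerates the partitions of n as columns):
library(partitions)
parts(5) # each column is one partition of 5
# [1,] 5 4 3 3 2 2 1
# [2,] 0 1 2 1 2 1 1
# [3,] 0 0 0 1 1 1 1
# [4,] 0 0 0 0 0 1 1
# [5,] 0 0 0 0 0 0 1
Note that this enumerates all partitions of 5, not just those restricted to a given multiset, which is exactly the limitation described above.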
I took your combn idea and looped over the possible sizes of the sets.
func = function(x, total){
  M = length(x)
  y = NULL
  for (m in 1:M){
    tmp = combn(x, m)
    ind = which(colSums(tmp) == total)
    if (length(ind) > 0){
      for (j in 1:length(ind))
        y = c(y, list(tmp[, ind[j]]))
    }
  }
  return(unique(lapply(y, sort)))
}
x = c(1,1,2,3,5,8,13)
> func(x, 15)
[[1]]
[1] 2 13
[[2]]
[1] 1 1 13
[[3]]
[1] 2 5 8
[[4]]
[1] 1 1 5 8
[[5]]
[1] 1 1 2 3 8
Obviously, this will have problems as M grows since tmp will get big pretty quickly and the length of y can't be (maybe?) pre-determined.
Similar to mickey's answer, we can use combn inside another looping mechanism. I'll use lapply:
vec <- c(1,1,2,3,5)
ans <- 5
Filter(length, lapply(seq_len(length(vec)),
  function(i) {
    v <- combn(vec, i)
    v[, colSums(v) == ans, drop = FALSE]
  }))
# [[1]]
# [,1]
# [1,] 5
# [[2]]
# [,1]
# [1,] 2
# [2,] 3
# [[3]]
# [,1]
# [1,] 1
# [2,] 1
# [3,] 3
You can omit the Filter(length, ...) portion, though it may then return a number of empty matrices. They're easy enough to deal with and ignore; I just thought removing them would be aesthetically preferred.
This method gives you matrices with one candidate combination per column, so
ans <- 4
Filter(length, lapply(seq_len(length(vec)),
  function(i) {
    v <- combn(vec, i)
    v[, colSums(v) == ans, drop = FALSE]
  }))
# [[1]]
# [,1] [,2]
# [1,] 1 1
# [2,] 3 3
# [[2]]
# [,1]
# [1,] 1
# [2,] 1
# [3,] 2
If duplicates are a problem, you can always do:
Filter(length, lapply(seq_len(length(vec)),
  function(i) {
    v <- combn(vec, i)
    v <- v[, colSums(v) == ans, drop = FALSE]
    v[, !duplicated(t(v)), drop = FALSE]
  }))
# [[1]]
# [,1]
# [1,] 1
# [2,] 3
# [[2]]
# [,1]
# [1,] 1
# [2,] 1
# [3,] 2
Now here is a solution involving gtools:
# Creating lists of all permutations of the vector x
df1 <- gtools::permutations(n = length(x), r = length(x), v = 1:length(x),
                            repeats.allowed = FALSE)
ls1 <- list()
for (j in 1:nrow(df1)) ls1[[j]] <- x[df1[j, 1:ncol(df1)]]
# Taking all cumulative sums and filtering entries equaling our magic number
sumsCum <- t(vapply(1:length(ls1), function(j) cumsum(ls1[[j]]), numeric(length(x))))
indexMN <- which(sumsCum == magicNumber, arr.ind = TRUE)
finalList <- list()
for (j in 1:nrow(indexMN)) {
  magicRow <- indexMN[j, 1]
  magicCol <- 1:indexMN[j, 2]
  finalList[[j]] <- ls1[[magicRow]][magicCol]
}
finalList <- unique(finalList)
where x = c(1,1,2,3,5) and magicNumber = 5. This is a first draft, I am sure it can be improved here and there.
Not the most efficient but the most compact so far: padding x with n-1 zeros lets a single combn of size n cover subsets of every smaller size, with the zeros stripped back out inside the function:
x <- c(1,1,2,3,5)
n <- length(x)
res <- 5
unique(combn(c(x,rep(0,n-1)), n, function(x) x[x!=0][sum(x)==res], FALSE))[-1]
# [[1]]
# [1] 1 1 3
#
# [[2]]
# [1] 2 3
#
# [[3]]
# [1] 5
#

interleave rows of matrix stored in a list in R

I want to create an interleaved matrix from a list of matrices.
Example input:
> l <- list(a=matrix(1:4,2),b=matrix(5:8,2))
> l
$a
[,1] [,2]
[1,] 1 3
[2,] 2 4
$b
[,1] [,2]
[1,] 5 7
[2,] 6 8
Expected output:
1 3
5 7
2 4
6 8
I have checked the interleave function in gdata but it does not show this behaviour for lists. Any help appreciated.
Here is a one-liner:
do.call(rbind, l)[order(sequence(sapply(l, nrow))), ]
# [,1] [,2]
# [1,] 1 3
# [2,] 5 7
# [3,] 2 4
# [4,] 6 8
To help understand, the matrices are first stacked on top of each other with do.call(rbind, l), then the rows are extracted in the right order:
sequence(sapply(l, nrow))
# a1 a2 b1 b2
# 1 2 1 2
order(sequence(sapply(l, nrow)))
# [1] 1 3 2 4
It will work with any number of matrices and it will do "the right thing" (subjective) even if they don't have the same number of rows.
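For example, with matrices of different heights (a quick check of the claim above; the shorter matrix simply runs out first):
l2 <- list(a = matrix(1:4, 2), b = matrix(5:10, 3))
do.call(rbind, l2)[order(sequence(sapply(l2, nrow))), ]
#      [,1] [,2]
# [1,]    1    3
# [2,]    5    8
# [3,]    2    4
# [4,]    6    9
# [5,]    7   10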
Rather than reinventing the wheel, you can just modify the existing function to get to your destination.
The interleave function from "gdata" starts with ... to let you specify a number of data.frames or matrices to put together. The first few lines of the function look like this:
head(interleave)
#
# 1 function (..., append.source = TRUE, sep = ": ", drop = FALSE)
# 2 {
# 3 sources <- list(...)
# 4 sources[sapply(sources, is.null)] <- NULL
# 5 sources <- lapply(sources, function(x) if (is.matrix(x) ||
# 6 is.data.frame(x))
You can just rewrite lines 1 and 3 as I did in this Gist to create a list version of interleave (here, I've called it Interleave)
head(Interleave)
#
# 1 function (myList, append.source = TRUE, sep = ": ", drop = FALSE)
# 2 {
# 3 sources <- myList
# 4 sources[sapply(sources, is.null)] <- NULL
# 5 sources <- lapply(sources, function(x) if (is.matrix(x) ||
# 6 is.data.frame(x))
Does it work?
l <- list(a=matrix(1:4,2),b=matrix(5:8,2), c=matrix(9:12,2))
Interleave(l)
# [,1] [,2]
# a 1 3
# b 5 7
# c 9 11
# a 2 4
# b 6 8
# c 10 12

Naming elements of matrix dimensions one at a time, when dimnames is NULL

When dimnames is currently NULL, is it possible to rename a matrix's dimensions one at a time?
For example, this fails:
mtx <- matrix(1:16,4)
dimnames(mtx)[[2]][1] <- 'col1'
with Error in dimnames(mtx)[[2]][1] <- "col1" : 'dimnames' must be a list
However this works:
mtx <- matrix(1:16,4)
dimnames(mtx)[[1]] <- letters[1:4]
dimnames(mtx)[[2]] <- LETTERS[1:4]
dimnames(mtx)[[2]][1] <- 'col1'
dimnames(mtx)[[2]][2] <- 'col2'
My objective is to separately replace dimnames(mtx)[[2]][1] and dimnames(mtx)[[2]][2] etc ... if this is not possible, I can re-write the loop.
Thanks folks, I have ended up with the below -- I pass the names in via prepend:
mtxNameSticker <- function(mtx, prepend = NULL, MARGIN = 2)
{
  max <- if (MARGIN == 1) nrow(mtx) else ncol(mtx)
  if (is.null(prepend)) prepend <- if (MARGIN == 2) 'C' else 'R'
  if (length(prepend) == 1) prepend <- paste0(prepend, 1:dim(mtx)[[MARGIN]])
  dimnames(mtx)[[MARGIN]] <- seq(from = 1, by = 1, length.out = dim(mtx)[[MARGIN]])
  for (i in 1:max) {
    dimnames(mtx)[[MARGIN]][i] <- prepend[i]
  }
  return(mtx)
}
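For what it's worth, a quick usage sketch with the defaults (MARGIN = 2, prepend NULL), where the expected result is columns named C1 through C4:
mtx <- matrix(1:16, 4)
mtxNameSticker(mtx)
#      C1 C2 C3 C4
# [1,]  1  5  9 13
# [2,]  2  6 10 14
# [3,]  3  7 11 15
# [4,]  4  8 12 16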
For as long as dimnames is NULL and not an appropriate list, you cannot make assignments to it at particular positions. One easy way to create a dummy but complete list of dimnames is to run:
dimnames(mtx) <- lapply(dim(mtx), seq_len)
mtx
# 1 2 3 4
# 1 1 5 9 13
# 2 2 6 10 14
# 3 3 7 11 15
# 4 4 8 12 16
Then, you can make assignments one at a time like you were wishing:
dimnames(mtx)[[2]][1] <- 'col1'
mtx
# col1 2 3 4
# 1 1 5 9 13
# 2 2 6 10 14
# 3 3 7 11 15
# 4 4 8 12 16
You are assigning a vector even though you are asked to supply a list.
Try this:
R> M <- matrix(1:4,2,2)
R> M
[,1] [,2]
[1,] 1 3
[2,] 2 4
R>
Columns:
R> M1 <- M; dimnames(M1) <- list(NULL, c("a","b")); M1
a b
[1,] 1 3
[2,] 2 4
R>
Rows:
R> M2 <- M; dimnames(M2) <- list(c("A","B"), NULL); M2
[,1] [,2]
A 1 3
B 2 4
R>
In response to your comment: #DirkEddelbuettel is correct, you are assigning a vector to what should be a list.
The reason is that you are assigning to dimnames while it is still NULL (not yet assigned).
Consider how R evaluates the following:
x <- NULL
x[[2]][1] <- 'col1'
str(x)
## chr [1:2] NA "col1"
R returns a vector of length 2, not a list of length 2.
For your assignment to work, R would have to evaluate
x <- NULL
x[[2]][1] <- 'col1'
str(x)
to give
## List of 2
## $ : NULL
## $ : chr "col1"
This is what would happen if x had originally been defined as x <- list(NULL, NULL).
However, dimnames must be NULL or a list of vectors of the appropriate lengths.
The following does work (and is really #flodel's solution):
dimnames(mtx) <- list(character(nrow(mtx)), character(ncol(mtx)))
# or
# dimnames(mtx) <- lapply(dim(mtx), character)
dimnames(mtx)[[2]][1] <- 'col1'
It seems you are allowed to set the name of the dimension without actually having any names for the dimension:
dimnames(mtx) = list(NULL,col1=NULL)
mtx
# col1
# [,1] [,2] [,3] [,4]
# [1,] 1 5 9 13
# [2,] 2 6 10 14
# [3,] 3 7 11 15
# [4,] 4 8 12 16
