I am trying to solve the following problem in R. Generically, given a range [a, b], I want to generate sequences from it that have length n and whose elements pairwise differ by at least d.
I was thinking of using seq() but you can only create evenly-spaced sequences using this function.
This may be what you are after: generate all permutations of the possible difference values that could occur in a sequence of size n, and then check which of them satisfy your requirement that the terminal value be b.
This is quite intensive and slow for larger vectors, but should return all possible valid sequences (unless I've made a mistake).
# sequence length of n which includes a, b
# therefore need to find n - 1 values (then check that last val of cumsum == b)
# vals must be greater than or equal to d
# vals have an upper bound of b - ((n - 1) * d), i.e. the case where all but one value equal d
library(gtools)
library(matrixStats)
# parameters
a = 1
b = 20
n = 5
d = 2
# possible values that differences can be
poss_diffs <- d:(b - ((n - 1) * d))
# generate all possible permutations of differences
diff_perms_n <- permutations(n = length(poss_diffs), r = n - 1, v = poss_diffs)
# turn differences into sequences, add column for the a value
seqs_n <- matrixStats::rowCumsums(cbind(a, diff_perms_n))
# filter to only valid sequences, last column == b
valid_seqs <- seqs_n[seqs_n[, ncol(seqs_n)] == b, ]
# check that diffs are all at least d
valid_seqs_diffs <- matrixStats::rowDiffs(valid_seqs)
print(head(valid_seqs))
print(head(valid_seqs_diffs))
# > print(head(valid_seqs))
# [,1] [,2] [,3] [,4] [,5]
# [1,] 1 3 6 10 20
# [2,] 1 3 6 11 20
# [3,] 1 3 6 12 20
# [4,] 1 3 6 14 20
# [5,] 1 3 6 15 20
# [6,] 1 3 6 16 20
# > print(head(valid_seqs_diffs))
# [,1] [,2] [,3] [,4]
# [1,] 2 3 4 10
# [2,] 2 3 5 9
# [3,] 2 3 6 8
# [4,] 2 3 8 6
# [5,] 2 3 9 5
# [6,] 2 3 10 4
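One thing to note: gtools::permutations() defaults to repeats.allowed = FALSE, so the approach above never repeats a step size. If sequences with repeated steps should also count, a hedged base R sketch using expand.grid can enumerate the differences with repetition and filter on the target sum (a, b, n, d as above; poss_diffs2 and valid_seqs2 are new illustrative names):
# enumerate every vector of n - 1 step sizes (repetition allowed), keep those
# whose steps sum to b - a, and turn each into a sequence via cumsum
poss_diffs2 <- d:(b - a - (n - 2) * d)             # largest possible single step
diffs <- as.matrix(expand.grid(rep(list(poss_diffs2), n - 1)))
keep <- rowSums(diffs) == (b - a)                  # steps must bridge a to b exactly
valid_seqs2 <- t(apply(cbind(a, diffs[keep, , drop = FALSE]), 1, cumsum))
head(valid_seqs2)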
I have a number of individuals that I want to divide, randomly, into subgroups of size groupsize. I want to repeat this process n_groups times, with no repeating group constellation.
How can I achieve this in R?
I tried the following so far:
set.seed(1)
individuals <- 1:6
groupsize <- 3
n_groups <- 4
for(i in 1:n_groups) { print(sample(individuals, groupsize))}
[1] 1 4 3
[1] 1 2 6
[1] 3 2 6
[1] 3 1 5
...but I am not sure whether that really avoids repeating constellations.
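For illustration, one quick way to check whether such draws repeat is to sort each drawn group and test for duplicated constellations (a sketch using the variables defined above):
# collect the draws as sorted vectors and look for duplicated constellations
set.seed(1)
draws <- replicate(n_groups, sort(sample(individuals, groupsize)), simplify = FALSE)
any(duplicated(draws))  # TRUE would mean a constellation repeated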
Edit: After looking at the first suggestions and answers, I realized that another restriction could be interesting to me (sorry for not seeing it upfront).
Is there (in the concrete example above) a way to ensure, that every individual was in contact with every other individual?
Based on your edited question, I assume that you want to make sure that all individuals are in at least one subgroup?
Then this might be the solution:
individuals <- 1:6
groupsize <- 3
n_groups <- 4
#sample groups
library(RcppAlgos)
#initialise
answer <- matrix()
# If the length of all unique elements in the answer is smaller than
# the number of individuals, take a new sample
while (length(unique(as.vector(answer))) < length(individuals)) {
  answer <- comboSample(individuals, groupsize, n = n_groups)
  # Line below is for demonstration only
  #answer <- comboSample(individuals, groupsize, n = n_groups, seed = 123)
}
# sample answer with seed = 123 (see commented line above)
# [,1] [,2] [,3]
# [1,] 1 3 4
# [2,] 1 3 6
# [3,] 2 3 5
# [4,] 2 3 4
Test for groups that do not contain every individual:
# Test with the following matrix
# [,1] [,2] [,3]
# [1,] 1 2 3
# [2,] 1 3 4
# [3,] 1 4 5
# [4,] 2 3 4
# Note that individual '6' is not present
answer <- matrix(c(1,2,3,1,3,4,1,4,5,2,3,4), nrow = 4, ncol = 3)
while (length(unique(as.vector(answer))) < length(individuals)) {
  answer <- comboSample(individuals, groupsize, n = n_groups)
}
# is recalculated to (in this case) the following answer
# [,1] [,2] [,3]
# [1,] 4 5 6
# [2,] 3 4 5
# [3,] 1 3 6
# [4,] 2 4 5
PASSED ;-)
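Note that the loop above only guarantees that every individual appears in at least one subgroup. If the edit is read more strictly, i.e. every individual should have shared a group with every other individual, a hedged check on the result might look like this (using individuals and the answer matrix from above):
# check whether each pair of individuals appears together in at least one row of 'answer'
all_pairs <- combn(individuals, 2, simplify = FALSE)
pair_met <- sapply(all_pairs, function(p) any(apply(answer, 1, function(g) all(p %in% g))))
all(pair_met)  # TRUE only if everyone has met everyone else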
You can use while to dynamically update your combination set, which avoids duplicates, e.g.,
res <- c()
while (length(res) < pmin(n_groups, choose(length(individuals), groupsize))) {
  v <- list(sort(sample(individuals, groupsize)))
  if (!v %in% res) res <- c(res, v)
}
which gives
> res
[[1]]
[1] 2 5 6
[[2]]
[1] 2 3 6
[[3]]
[1] 1 5 6
[[4]]
[1] 1 2 6
I have a large set of size M (let's say 10), and I want to, repeatedly for a certain number of occasions (let's say 13), randomly split it into M/N smaller groups of size N (let's say 2). I'd like no element in the large set to be in a repeating group until they have been in a small group with everyone else. (The actual problem here: I have a class of 10 people and I want to split them into 5 pairs for a duration of 13 weeks, but I don't want anyone to be in a repeat pairing until they have been in a pairing with everyone in the class.)
How can I do this? I started by [generating non-repeating permutations from my larger group][1], but the trouble I am having is that these unique permutations don't necessarily yield unique groups. (Someone seems to have posed this same question, but [it was resolved in Python][2]. I don't understand Python, and so I'm looking for an easy R solution.)
Any help much appreciated.
Edit: Thanks to all for suggestions. I realize my original question wasn't exactly clear. The solutions suggested below work well when I only want to split the set into a single subset of size N, each time. But my problem is actually that I want to split the set into M/N subsets of size N. For example, in the case of my class, I want to split the 10 students into 5 pairs of 2 on 13 different occasions, and I want pairs to be unique until they no longer can be (i.e., after 9 occasions have passed). Unless I'm failing to see how they can be applied, I don't think any of these solutions quite solves this problem.
I see that the OP has provided a solution based on the linked math.so answer, but I would like to provide a working implementation of the other answer on that page, which gets to the heart of this problem. That answer mentions the Round-robin tournament. From the Wikipedia page, the algorithm is straightforward.
One simply fixes a position in a matrix and rotates the other indices clockwise. Given M initial players, there are M - 1 unique rounds. Thus, for our given situation, we can only obtain 9 unique sets of groups.
Below is a very straightforward base R implementation:
roll <- function(x, n) {
  if (n == 0)
    return(x)
  c(tail(x, n), head(x, -n))
}

RoundRobin <- function(m, n) {
  m <- as.integer(m)
  n <- as.integer(n)
  if (m %% 2L != 0L) {
    m <- m + 1L
  }
  myRounds <- list(n)
  myRounds[[1]] <- 1:m
  for (i in 2:n) {
    myRounds[[i]] <- myRounds[[i - 1L]]
    myRounds[[i]][2:m] <- roll(myRounds[[i]][-1], 1)
  }
  lapply(myRounds, matrix, nrow = 2)
}
The roll function was obtained from this answer.
Here is sample output for 10 students and 4 weeks:
RoundRobin(10, 4)
[[1]]
[,1] [,2] [,3] [,4] [,5]
[1,] 1 3 5 7 9
[2,] 2 4 6 8 10
[[2]]
[,1] [,2] [,3] [,4] [,5]
[1,] 1 2 4 6 8
[2,] 10 3 5 7 9
[[3]]
[,1] [,2] [,3] [,4] [,5]
[1,] 1 10 3 5 7
[2,] 9 2 4 6 8
[[4]]
[,1] [,2] [,3] [,4] [,5]
[1,] 1 9 2 4 6
[2,] 8 10 3 5 7
When we hit the 10th week, we see our first repeat "round".
RoundRobin(10, 13)[c(1, 2, 9, 10, 11)]
[[1]]
[,1] [,2] [,3] [,4] [,5] ## <- first week
[1,] 1 3 5 7 9
[2,] 2 4 6 8 10
[[2]]
[,1] [,2] [,3] [,4] [,5] ## <- second week
[1,] 1 2 4 6 8
[2,] 10 3 5 7 9
[[3]]
[,1] [,2] [,3] [,4] [,5] ## <- ninth week
[1,] 1 4 6 8 10
[2,] 3 5 7 9 2
[[4]]
[,1] [,2] [,3] [,4] [,5] ## <- tenth week
[1,] 1 3 5 7 9
[2,] 2 4 6 8 10
[[5]]
[,1] [,2] [,3] [,4] [,5] ## <- eleventh week
[1,] 1 2 4 6 8
[2,] 10 3 5 7 9
Note that this is a deterministic algorithm and, given its simplicity, it is pretty efficient. For example, if you have 1000 students and want to find all 999 unique pairings, you can run this function without fear:
system.time(RoundRobin(1000, 999))
user system elapsed
0.038 0.001 0.039
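Since the schedule is deterministic, one small follow-up (a sketch) in case you also want the nine unique rounds handed out to weeks in random order:
# shuffle the unique rounds into a random weekly order
set.seed(42)
weeks <- RoundRobin(10, 9)[sample(9)]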
I think you maybe want something like this. It will produce a data frame with the unique combinations in its rows. These are sampled randomly until all unique combinations are exhausted; thereafter, if more samples are required, it samples randomly with replacement from the unique combinations:
library(magrittr)  # provides the %>% pipe used below

create_groups <- function(M, N, samples)
{
  df <- seq(N) %>%
    lapply(function(x) M) %>%
    do.call(expand.grid, .) %>%
    apply(1, sort) %>%
    t() %>%
    as.data.frame() %>%
    unique()
  df <- df[apply(df, 1, function(x) !any(duplicated(x))), ]
  df <- df[sample(nrow(df)), ]
  if (samples <= nrow(df)) return(df[seq(samples), ])
  rbind(df, df[sample(seq(nrow(df)), samples - nrow(df), TRUE), ])
}
It's easy to see how it works if we want groups of 4 elements from 5 objects (there are only 5 possible combinations):
create_groups(letters[1:5], 4, 5)
#> V1 V2 V3 V4
#> 1 a b d e
#> 2 a b c d
#> 3 a c d e
#> 4 b c d e
#> 5 a b c e
We have randomly ordered samples of 4 objects drawn from the set, with no repeats. (The elements within each sample are ordered alphabetically, however.)
If we want more than 5 samples, the algorithm ensures that all unique combinations are exhausted before resampling:
create_groups(letters[1:5], 4, 6)
#> V1 V2 V3 V4
#> 1 a b c e
#> 2 a c d e
#> 3 a b d e
#> 4 b c d e
#> 5 a b c d
#> 6 a b d e
Here we see there are no repeated rows until row 6, which is a repeat of row 3.
For the example in your question, there are 45 unique combinations of 2 elements drawn from 10 objects, so we get no repeats in our 13 samples:
create_groups(1:10, 2, 13)
#> V1 V2
#> 1 7 8
#> 2 4 10
#> 3 2 8
#> 4 3 10
#> 5 3 9
#> 6 1 8
#> 7 4 9
#> 8 8 9
#> 9 7 9
#> 10 4 6
#> 11 5 7
#> 12 9 10
#> 13 4 7
I am not sure whether combn + sample can work for your goal:
# assuming M <- 1:10, N <- 2 and i <- 13 (number of occasions), as in the question
as.data.frame(t(combn(M, N))[sample(K <- choose(length(M), N), i, replace = K < i), ])
which gives
V1 V2
1 4 9
2 4 8
3 1 9
4 6 10
5 5 9
6 2 10
7 3 7
8 7 8
9 6 7
10 1 7
11 6 8
12 5 6
13 3 8
With apologies to all for not writing a clear question, here is a solution based on the approach suggested in this post. (Depending on the seed, it can get stuck, and if weeks is larger, the code to recycle old groups has to be adjusted a little.)
library(dplyr)  # provides %>% and combine(), used below
set.seed(1)
m<-10
n<-2
weeks<-13
groupmat<-combn(m,n)
students <- c(1:m)
pickedpairs <- matrix(
  data = NA,
  nrow = n,
  ncol = 0
)
while (ncol(pickedpairs) < ((m - 1) * (m / n))) {
  thisweekspairs <- matrix(sample(students), nrow = n, ncol = m / n)
  # check if this week's pairs are already in pickedpairs; if so, skip the iteration
  pairsprez <- lapply(1:ncol(thisweekspairs), function(j) {
    # j <- 1
    apply(pickedpairs, 2, function(x) sum(x %in% thisweekspairs[, j]) == n)
  }) %>% Reduce(f = "|") %>% sum
  if (pairsprez >= 1) {
    pickedpairs <<- pickedpairs
  } else {
    pickedpairs <<- cbind(pickedpairs, thisweekspairs)
  }
  print(ncol(pickedpairs))
}
uniquepairs <- lapply(1:(ncol(pickedpairs)/(m/n)),function(i) {
  pickedpairs[, (1 + (m/n)*(i-1)):((m/n)*i)]
})
#generate weeks' number of unique pairs
combine(
  uniquepairs,
  uniquepairs[sample(1:length(uniquepairs), weeks - length(uniquepairs))]
)
We could use slice_sample with combn
library(dplyr)
library(purrr)
# assuming M <- 1:10, N <- 2 and i <- 13, as in the question
combn(M, N, simplify = FALSE) %>%
invoke(rbind, .) %>%
as_tibble %>%
slice_sample(n = i)
# A tibble: 13 x 2
# V1 V2
# <int> <int>
# 1 4 5
# 2 3 8
# 3 9 10
# 4 5 7
# 5 8 9
# 6 3 9
# 7 5 10
# 8 4 10
# 9 2 5
#10 5 6
#11 6 9
#12 2 7
#13 4 9
I have the following list of numbers (1,3,4,5,7,9,10,12,15) and I want to find out all the possible combinations of 3 numbers from this list that would sum to 20.
My research on stackoverflow has led me to this post:
Finding all possible combinations of numbers to reach a given sum
There is a solution provided by Mark which stands as follows:
subset_sum = function(numbers, target, partial = 0) {
  if (any(is.na(partial))) return()
  s = sum(partial)
  if (s == target) print(sprintf("sum(%s)=%s", paste(partial[-1], collapse = "+"), target))
  if (s > target) return()
  for (i in seq_along(numbers)) {
    n = numbers[i]
    remaining = numbers[(i + 1):length(numbers)]
    subset_sum(remaining, target, c(partial, n))
  }
}
However, I am having a hard time trying to tweak this code to match my problem. Or maybe there is a simpler solution?
I want the output in R to show me the list of numbers.
Any help would be appreciated.
You can use the combn function and filter to meet your criteria. I have performed the calculation below in two steps, but it can be done in a single step too.
v <- c(1,3,4,5,7,9,10,12,15)
AllComb <- combn(v, 3) #generates all combination taking 3 at a time.
PossibleComb <- AllComb[,colSums(AllComb) == 20] #filter those with sum == 20
#Result: 6 sets of 3 numbers (column-wise)
PossibleComb
# [,1] [,2] [,3] [,4] [,5] [,6]
# [1,] 1 1 1 3 3 4
# [2,] 4 7 9 5 7 7
# [3,] 15 12 10 12 10 9
#
# Result in list
split(PossibleComb, col(PossibleComb))
# $`1`
# [1] 1 4 15
#
# $`2`
# [1] 1 7 12
#
# $`3`
# [1] 1 9 10
#
# $`4`
# [1] 3 5 12
#
# $`5`
# [1] 3 7 10
#
# $`6`
# [1] 4 7 9
combn also has a FUN parameter which we can use to return the output as a list, and then Filter the list elements based on the condition:
Filter(function(x) sum(x) == 20, combn(v, 3, FUN = list))
#[[1]]
#[1] 1 4 15
#[[2]]
#[1] 1 7 12
#[[3]]
#[1] 1 9 10
#[[4]]
#[1] 3 5 12
#[[5]]
#[1] 3 7 10
#[[6]]
#[1] 4 7 9
data
v <- c(1,3,4,5,7,9,10,12,15)
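For completeness, the recursive subset_sum() function quoted in the question also works when called directly on these numbers; note that, unlike the combn approach, it reports matching subsets of any size, not only triples. A minimal sketch:
v <- c(1, 3, 4, 5, 7, 9, 10, 12, 15)
subset_sum(v, 20)  # prints lines such as "sum(1+4+15)=20" for every subset summing to 20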
For example, suppose I want to generate all possible permutations of the series 1:10 taken 3 at a time. But the 3 numbers chosen have to be in ascending order, so 3,4,5 is acceptable but not 5,4,3. The second condition is that they can't have jumps; they have to be consecutive. Hence, 1,2,4 is unacceptable. How can I get this in R?
We can create the combinations of numbers using combn, then subset the columns with a logical index that checks whether the differences between rows are all equal to 1, and transpose the output:
m1 <- combn(1:10, 3)
t(m1[,colSums(diff(m1)==1)==2])
# [,1] [,2] [,3]
#[1,] 1 2 3
#[2,] 2 3 4
#[3,] 3 4 5
#[4,] 4 5 6
#[5,] 5 6 7
#[6,] 6 7 8
#[7,] 7 8 9
#[8,] 8 9 10
These consist of the sequences 1:3, 2:4, ..., 8:10. In general, to obtain all such subsequences of length k among 1:n, you can start with the smallest 1:k and keep adding 1 to its elements:
subseq <- function(n,k) if (1 <= k && k <= n) outer(1:k, 0:(n-k), "+")
The sequences are in the columns, already in lexicographic order. Since no sorting is actually done, this is an O(kn) algorithm, which is asymptotically optimal.
Example: subseq(10,3) produces
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 1 2 3 4 5 6 7 8
[2,] 2 3 4 5 6 7 8 9
[3,] 3 4 5 6 7 8 9 10
A slightly faster R implementation might avoid outer like this:
subseq <- function(n=10, k=3) if (1 <= k && k <= n) matrix(rep(0:(n-k), each=k), k) + 1:k
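A quick sanity check, if you want to convince yourself the two versions agree (a sketch; f_outer and f_rep are just illustrative names for the two definitions above):
f_outer <- function(n, k) if (1 <= k && k <= n) outer(1:k, 0:(n - k), "+")
f_rep <- function(n, k) if (1 <= k && k <= n) matrix(rep(0:(n - k), each = k), k) + 1:k
all(f_outer(10, 3) == f_rep(10, 3))  # same values in the same positions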
I have a numeric vector called area of length 166860. It consists of 412 different segments, most of length 405 and some of length 809. I have their start and end ids.
My goal is to extract them and put them in a matrix/data frame with 412 columns.
Right now, I'm trying this code:
m = matrix(NA,ncol=412, nrow=809)
for (j in 1:412){
  temp.start = start.ids[j]
  temp.end = end.ids[j]
  m[,j] = area[temp.start:temp.end]
}
But I just end up with this error message:
"Error in m[, j] = area[temp.start:temp.end] :
number of items to replace is not a multiple of replacement length"
Here's a quite easy approach:
Example data:
area <- c(1:4, 1:5, 1:6, 1:3)
# [1] 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6 1 2 3
start.ids <- which(area == 1)
# [1] 1 5 10 16
end.ids <- c(which(area == 1)[-1] - 1, length(area))
# [1] 4 9 15 18
Create a list with one-row matrices:
mats <- mapply(function(x, y) t(area[seq(x, y)]), start.ids, end.ids)
# [[1]]
# [,1] [,2] [,3] [,4]
# [1,] 1 2 3 4
#
# [[2]]
# [,1] [,2] [,3] [,4] [,5]
# [1,] 1 2 3 4 5
#
# [[3]]
# [,1] [,2] [,3] [,4] [,5] [,6]
# [1,] 1 2 3 4 5 6
#
# [[4]]
# [,1] [,2] [,3]
# [1,] 1 2 3
Use the function rbind.fill.matrix from the plyr package to create the matrix and transpose it (t):
library(plyr)
m <- t(rbind.fill.matrix(mats))
# [,1] [,2] [,3] [,4]
# 1 1 1 1 1
# 2 2 2 2 2
# 3 3 3 3 3
# 4 4 4 4 NA
# 5 NA 5 5 NA
# 6 NA NA 6 NA
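If you would rather avoid the plyr dependency, a hedged base R sketch with the same example data: pad each segment to the longest length with length<- (which fills with NA) and let sapply assemble one column per segment.
# pad each segment with NA up to the longest segment length, then bind as columns
segs <- mapply(function(x, y) area[x:y], start.ids, end.ids, SIMPLIFY = FALSE)
m2 <- sapply(segs, function(v) { length(v) <- max(lengths(segs)); v })
m2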
You are building the matrix with nrow = 809, so each column has length 809, and matrices cannot have columns of varying length. This means the value you assign to a column must either have length 809 or a length that recycles evenly into 809. From the manual on ?matrix:
If there are too few elements in data to fill the matrix, then the elements in data are recycled. If data has length zero, NA of an appropriate type is used for atomic vectors (0 for raw vectors) and NULL for lists.
As another answer suggests, you could instead assign row-wise with m[j, ], but then the matrix would need 809 columns and 412 rows. Keeping your original 412-column layout, the fix is to pad the value you are assigning with NA so that what goes into each column is always of length 809:
m = matrix(NA, ncol = 412, nrow = 809)
for (j in 1:412){
  temp.start = start.ids[j]
  temp.end = end.ids[j]
  val <- area[temp.start:temp.end]
  m[, j] = c(val, rep(NA, 809 - length(val)))  # pad shorter segments with NA
}
How about this? I've manufactured some sample data:
#here are the random sets of numbers - length either 405 or 809
nums<-lapply(1:412,function(x)runif(sample(c(405,809),1)))
#this represents your numeric (one list of all the numbers)
nums.vec<-unlist(nums)
#get data about the series (which you have)
nums.lengths<-sapply(nums,function(x)length(x))
nums.starts<-cumsum(c(1,head(nums.lengths,-1)))
nums.ends<-nums.starts+nums.lengths-1
new.vec<-unlist(lapply(1:412,function(x){
  v<-nums.vec[nums.starts[x]:nums.ends[x]]
  c(v,rep(0,(809-length(v))))
}))
matrix(new.vec,ncol=412)
What about
m[j,] = area[temp.start:temp.end]
?
Edit:
a <- area[temp.start:temp.end]
m[1:length(a),j] <- a
Maybe others have better answers. As I see it, you have two options:
Change m[,j] to m[1:length(area[temp.start:temp.end]),j] and then you will not get an error but you would have some NA's left.
Use a list of matrices instead, so you would get different dimensions for each matrix (see the sketch below).
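A sketch of the second option, using the start.ids and end.ids from the question: keep each segment as its own element of a list, so the differing lengths never clash (each element could also be wrapped in a matrix if needed).
# one list element per segment; no padding required
segments <- lapply(seq_along(start.ids), function(j) area[start.ids[j]:end.ids[j]])
lengths(segments)  # a mix of 405 and 809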