R constrained all combinations [duplicate] - r

I am trying to randomly sample 7 numbers from 0 to 7 (with replacement), but subject to the constraint that the numbers chosen add up to 7. So for instance, the output 0 1 1 2 3 0 0 is okay, but the output 1 2 3 4 5 6 7 is not. Is there a way to use the sample command with added constraints?
I intend to use the replicate() function with the sample command as an argument, to return a list of N different vectors from the sample command. The way I am currently using the sample command (without any constraints), I need N to be very large to obtain enough vectors that sum to exactly 7. I figure there must be an easier way to do this!
Here is my code for that part:
x <- replicate(100000, sample(0:7, 7, replace=T))
Ideally, I want 10,000 or 100,000 vectors in x to sum to 7, but would need an enormous N value to do this. Thanks for any help.

To make sure you're sampling uniformly, you could just generate all the permutations and limit to those that sum to 7:
library(gtools)
perms <- permutations(8, 7, 0:7, repeats.allowed=T)
perms7 <- perms[rowSums(perms) == 7,]
From nrow(perms7), we see there are only 1716 possible permutations that sum to 7. Now you can uniformly sample from the permutations:
set.seed(144)
my.perms <- perms7[sample(nrow(perms7), 100000, replace=T),]
head(my.perms)
# [,1] [,2] [,3] [,4] [,5] [,6] [,7]
# [1,] 0 0 0 2 5 0 0
# [2,] 1 3 0 1 2 0 0
# [3,] 1 4 1 1 0 0 0
# [4,] 1 0 0 3 0 3 0
# [5,] 0 2 0 0 0 5 0
# [6,] 1 1 2 0 0 2 1
An advantage of this approach is that it's easy to see that we're sampling uniformly at random. Also, it's quite quick -- building perms7 took 0.3 seconds on my computer and building a 1 million-row my.perms took 0.04 seconds. If you need to draw many vectors this will be quite a bit quicker than a recursive approach because you're just using matrix indexing into perms7 instead of generating each vector separately.
Here's a distribution of counts of numbers in the sample:
# 0 1 2 3 4 5 6 7
# 323347 188162 102812 51344 22811 8629 2472 423
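For reference, the tabulation above can be reproduced and sanity-checked against the exact composition counts with something like this (a sketch, not the answer's exact code):
table(my.perms)              # observed counts of each value 0..7 over the 700,000 entries
mean(perms7 == 0) * 7e5      # exact expected number of zeros under uniform sampling, ~323077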

Start with all zeroes, add one to any element, do 7 times:
sumTo = function(){
  v = rep(0, 7)
  for(i in 1:7){
    addTo = sample(7)[1]
    v[addTo] = v[addTo] + 1
  }
  v
}
Or equivalently, just choose which of the 7 elements you are going to increment in one sample of length 7, then tabulate those, making sure you tabulate up to 7:
sumTo = function(){tabulate(sample(7, 7, replace = TRUE), 7)}
> sumTo()
[1] 2 1 0 0 4 0 0
> sumTo()
[1] 1 3 1 0 1 0 1
> sumTo()
[1] 1 1 0 2 1 0 2
I don't know if this will produce a uniform sample from all possible combinations...
The distribution of individual elements over 100,000 reps is:
> X = replicate(100000,sumTo())
> table(X)
X
0 1 2 3 4 5 6
237709 277926 138810 38465 6427 627 36
Didn't hit a 0,0,0,0,0,0,7 that time!
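A quick check of that doubt (my own sketch, not part of the answer): under this scheme a vector like 0,0,0,0,0,0,7 is far rarer than it would be under uniform sampling of the 1716 compositions.
p_multinom <- 7 * (1/7)^7   # pick the lucky slot, then hit it on all 7 draws
p_uniform  <- 7 / 1716      # 7 such vectors among the 1716 compositions of 7
c(p_multinom, p_uniform)    # ~8.5e-06 vs ~4.1e-03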

This recursive algorithm will output a distribution with a higher probability for large numbers than the other solutions. The idea is to throw a random number y in 0:7 in any of the seven available slots, then repeat with a random number in 0:(7-y), etc:
sample.sum <- function(x = 0:7, n = 7L, s = 7L) {
  if (n == 1) return(s)
  x <- x[x <= s]
  y <- sample(x, 1)
  sample(c(y, Recall(x, n - 1L, s - y)))
}
set.seed(123L)
sample.sum()
# [1] 0 4 0 2 0 0 1
Drawing 100,000 vectors took 11 seconds on my machine and here is the distribution I get:
# 0 1 2 3 4 5 6 7
# 441607 98359 50587 33364 25055 20257 16527 14244
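The tabulation above was presumably produced with something along these lines (a sketch; the answer does not show the exact code):
X <- replicate(100000, sample.sum())
table(X)   # counts of each value 0..7 across all 700,000 entries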

There may be an easier and/or more elegant way, but here's a brute-force method using the LSPM:::.nPri function. The link includes the definition for an R-only version of the algorithm, for those interested.
#install.packages("LSPM", repos="http://r-forge.r-project.org")
library(LSPM)
# generate all possible permutations, since there are only ~2.1e6 of them
# (this takes < 40s on my 2.2Ghz laptop)
x <- lapply(seq_len(8^7), nPri, n=8, r=7, replace=TRUE)
# set each permutation that doesn't sum to 7 to NULL
y <- lapply(x, function(p) if(sum(p-1) != 7) NULL else p-1)
# subset all non-NULL permutations
z <- y[which(!sapply(y, is.null))]
Now you can sample from z and be assured that you're getting a permutation that sums to 7.
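Drawing from z might then look like this (a sketch, assuming z as built above):
z[[sample(length(z), 1)]]                                             # one random vector summing to 7
draws <- do.call(rbind, z[sample(length(z), 10000, replace = TRUE)])  # 10,000 draws as a matrix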

I find this question intriguing and gave it some extra thought. Another (more general) approach to sampling (approximately) uniformly from all feasible solutions, without generating and storing all permutations (which is clearly not possible with many more than 7 numbers), is a simple MCMC implementation in R using sample():
S <- c(0, 1, 1, 2, 3, 0, 0)  # initial solution
N <- 100                     # number of dependent samples (or burn-in period)
series <- numeric(N)
for(i in 1:N){
  b <- sample(1:length(S), 2, replace=FALSE)  # pick 2 elements at random
  opt <- sum(S[-b])                           # sum of the complementary elements
  a <- sample(0:(7-opt), 1)                   # sample a substitute
  S[b[1]] <- a                                # change the two elements
  S[b[2]] <- 7 - opt - a
}
S  # new sample
This is of course really fast for a few samples. The "distribution":
#"distribution" N=100.000: 0 1 2 3 4 5 6 7
# 321729 189647 103206 52129 22287 8038 2532 432
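A sketch of how such a tabulation could be produced (my reconstruction; the answer does not show the exact code): wrap the chain in a function and tabulate many runs.
mcmc_sample <- function(S = c(0, 1, 1, 2, 3, 0, 0), N = 100) {
  for (i in 1:N) {
    b <- sample(length(S), 2)        # pick 2 elements at random
    opt <- sum(S[-b])                # sum of the complementary elements
    a <- sample(0:(7 - opt), 1)      # sample a substitute value
    S[b[1]] <- a
    S[b[2]] <- 7 - opt - a
  }
  S
}
tabulate(replicate(100000, mcmc_sample()) + 1, nbins = 8)   # 10^7 MCMC steps, takes a while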
Of course in this case, where it is actually possible to find and store all combinations, and if you want a huge sample from all feasible outcomes, just use partitions::compositions(7, 7), as also suggested by Josh O'Brien in the comments. This avoids calculating all the permutations when only a small fraction is needed:
perms7 <- partitions::compositions(7, 7)
> tabulate(perms7[, sample(ncol(perms7), 100000, TRUE)] + 1, 8)
# "distribution" N = 100,000: 0 1 2 3 4 5 6 7
# 323075 188787 102328 51511 22754 8697 2413 435

Related

Create a matrix of 0s and 1s, such that each row has only one 1 and each column has at least two 1s

I want to create a 100*4 matrix of 0s and 1s, such that each row has only one 1 and each column has at least two 1s, in R.
MyMat <- as.matrix(rsparsematrix(nrow=100, ncol=4, nnz = 100))
I am thinking of rsparsematrix but yet I am not sure how to apply my required conditions.
Edit: my other try would be dummy_cols, but no matter what I do I am still stuck on applying the two conditions. I guess there must be a more straightforward way of creating such a matrix.
1) A matrix consisting of 25 4x4 identity matrices stacked one on top of each other satisfies these requirements
m <- matrix(1, 25) %x% diag(4)
2) Exchanging the two arguments of %x% would also work and gives a different matrix which also satisfies this.
3) Any permutation of the rows and the columns of the two solution matrices in (1) and (2) would also satisfy the conditions.
m[sample(100), sample(4)]
4) If the objective is to generate a random table containing 0/1 values whose row sums are each 1 and whose column sums are each 25 then use r2dtable:
r <- r2dtable(1, rep(1, 100), rep(25, 4))[[1]]
5) or if it is desired to allow any column sums of at least 2 then:
rsums <- rep(1, 100)
csums <- rmultinom(1, 92, rep(0.25, 4)) + 2
r <- r2dtable(1, rsums, csums)[[1]]
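A quick sanity check (sketch) that the generated table r meets both conditions:
all(rowSums(r) == 1)   # every row has exactly one 1
all(colSums(r) >= 2)   # every column has at least two 1s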
Stochastically, with two rules:
All rows must have exactly one 1; and
All columns must have at least two 1s.
I control the first implicitly by construction; I test against the second.
nr <- 100 ; nc <- 4
set.seed(42)
lim <- 10000
while (lim > 0) {
  lim <- lim - 1
  M <- t(replicate(nr, sample(c(1, rep(0, nc-1)))))
  if (all(colSums(M > 0) >= 2)) break
}
head(M)
# [,1] [,2] [,3] [,4]
# [1,] 1 0 0 0
# [2,] 0 0 0 1
# [3,] 0 0 0 1
# [4,] 0 1 0 0
# [5,] 0 0 0 1
# [6,] 0 1 0 0
colSums(M)
# [1] 25 30 21 24
lim
# [1] 9999
My use of lim is hardly needed in this example, but is there as a mechanism to stop this from running infinitely: if you change the dimensions and/or the rules, it might become highly unlikely or infeasible to meet all rules, so this keeps the execution time limited. (10000 is completely arbitrary.)
My point in the comment is that it would be rather difficult to find a 100x4 matrix that matches rule 1 but does not match rule 2. In fact, since the probabilities of a 0 or a 1 in any one cell are 0.75 and 0.25, respectively, the probability that a column (among 100 rows) contains fewer than two 1s is around 1.1e-11.
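That 1.1e-11 figure can be checked directly with a binomial calculation (a sketch):
# probability that a given column of 100 cells (each 1 with probability 0.25) has fewer than two 1s
dbinom(0, 100, 0.25) + dbinom(1, 100, 0.25)   # ~1.1e-11, equivalently pbinom(1, 100, 0.25)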
Here is a simple way to generate the 100 rows with the 1's randomly positioned and then create the matrix by transposing the rows object. The matrix generation is wrapped by a while loop (THX r2evans) to ensure each column contains at least two 1's.
minval <- 0
while(minval < 2) {
  rows <- replicate(100, sample(c(0,0,0,1), 4))
  m <- t(rows)
  minval <- min(colSums(m))
}
m
[,1] [,2] [,3] [,4]
[1,] 0 0 0 1
[2,] 1 0 0 0
[3,] 0 0 0 1
[4,] 0 0 1 0
[5,] 1 0 0 0
[6,] 0 0 0 1
[7,] 1 0 0 0
[8,] 0 0 1 0
[9,] 0 1 0 0
[10,] 1 0 0 0
Code:
v <- tabulate(sample(1:4, 100-2*4, replace=TRUE), nbins=4) + 2
m <- diag(length(v))[sample(rep(seq_along(v), v)),]
Result check:
> dim(m)
[1] 100 4
> range(rowSums(m))
[1] 1 1
> range(colSums(m))
[1] 20 30
This works with any matrix size - just adjust the numbers 4 and 100. The first one controls the number of columns and the second one - the number of rows:
v <- tabulate(sample(1:10, 200-2*10, replace=TRUE), nbins=10) + 2
m <- diag(length(v))[sample(rep(seq_along(v), v)),]
> dim(m)
[1] 200 10
> range(rowSums(m))
[1] 1 1
> range(colSums(m))
[1] 15 31
Explanation: this works backwards from the properties of the resulting matrix. If you have 100 rows and 4 columns, with each row having only one 1 then the matrix will have 100 1s in total. Which means that the sum of all column-sums should also be 100. So we start with a vector of numbers (summing up to 100) which represents how many 1s each column will have. Say this vector is c(50,25,20,5). This tells us that there will be 50 rows of the form (1,0,0,0), 25 rows with the form (0,1,0,0), and so on. The final step is to generate all these rows and shuffle them.
The trick here:
v <- tabulate(sample(1:4, 100-2*4, replace=TRUE), nbins=4) + 2
Is to generate random column-sums while making sure the minimum is at least 2. We do this by generating values summing up to 92 and then adding 2 to each value (which, with 4 columns, ends up as additional 8).
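A quick check (sketch) that the column-sum vector v behaves as described:
sum(v)   # 100: one 1 per row in total
min(v)   # at least 2 in every column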

Average length of runs selected from a rasterbrick in R

I am working with a RasterBrick "a" with thousands of layers; a closer description is not necessary for my problem. I am using the following function to create a RasterLayer of the total number of runs of at least 5 days with values greater than 1 (one layer in the brick is one day):
indices <- rep(1:69, each=90)
ff <- function(x, na.rm=TRUE){
  y <- x > 1
  n <- ave(y, cumsum(y == 0), FUN = cumsum)
  sum(n == 5)
}
Y <- stackApply(a, indices, fun=ff)
This works great, I tested that. In a similar manner, I wrote a new function:
fff <- function(x, na.rm = TRUE){
  y <- x > 1
  n <- ave(y, cumsum(y == 0), FUN = cumsum)
  mean(n[n >= 5])
}
X <- stackApply(a, indices, fun=fff)
Using this function, I wanted to create a RasterLayer of the average lengths of those runs of at least 5 days. It seems reasonable, but it does not work correctly. For example, when there is a run of 6 days (satisfying my criterion of value > 1), it counts two runs, one of 5 and another of 6, and thus the average is 5.5 instead of 6. I am not sure how to adjust my function fff. If there is a way to do it, it would be great; otherwise I would be grateful if anyone shares another way to calculate the means of those runs. Thanks!
In the future, please include a minimal, reproducible, self-contained example. Do not describe the behavior of your code, but show it. Also, be very clear about the question. As-is, it is hard to see that your question is not about raster data at all, as you are looking for a function that works on any numeric vector (that you may then apply to raster data).
You are looking for a function that finds local maxima of at least 5 in the cumulative run length of values > 1, and then averages these local maxima.
You have this function
ff <- function(x, na.rm=TRUE){
  y <- x > 1
  n <- ave(y, cumsum(y == 0), FUN = cumsum)
  sum(n == 5)
}
Example data
x <- c(-1:10, -1:3, -1:6)
x
# [1] -1 0 1 2 3 4 5 6 7 8 9 10 -1 0 1 2 3 -1 0 1 2 3 4 5 6
ff(x)
# [1] 2
(two local maxima that are at least 5)
To write the function you want we can start with what we have
y <-x > 1
n <- ave(y,cumsum(y == 0), FUN = cumsum)
n
# [1] 0 0 0 1 2 3 4 5 6 7 8 9 0 0 0 1 2 0 0 0 1 2 3 4 5
In this case, you need to find the numbers 9 and 5. You can start with
n[n<5] <- 0
n
# [1] 0 0 0 0 0 0 0 5 6 7 8 9 0 0 0 0 0 0 0 0 0 0 0 0 5
And now we can use diff to find the local maxima. These are the values for which the difference with the previous value is negative. Note the zero added to n to consider the last element of the vector.
i <- which(diff(c(n, 0)) < 0)
i
# [1] 12 25
n[i]
# [1] 9 5
Such that we can put the above together in a function like this
f <- function(x) {
  y <- x > 1
  n <- ave(y, cumsum(y == 0), FUN = cumsum)
  n[n < 5] <- 0
  i <- which(diff(c(n, 0)) < 0)
  mean(n[i])
}
f(x)
# [1] 7
If you have NAs you may do
f <- function(x) {
  y <- x > 1
  y[is.na(y)] <- FALSE
  n <- ave(y, cumsum(y == 0), FUN = cumsum)
  n[n < 5] <- 0
  i <- which(diff(c(n, 0)) < 0)
  mean(n[i])
}
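Applying it to the raster data would then look roughly like this (a sketch, assuming the RasterBrick a and indices from the question; the anonymous wrapper just absorbs the na.rm argument that stackApply may pass):
library(raster)
X <- stackApply(a, indices, fun = function(x, na.rm = TRUE) f(x))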

overlapping segments R

There is a data frame I am working with; its two columns denote the start and end of a chunk.
I need to know how many of these chunks are present at every position from 0 to 23110906. Sometimes the chunks overlap, and sometimes a region has no chunk covering it at all. It is like segments in R, but I don't need a visualisation; I just need a way to quickly find the number of chunks at every position. Is there an easy way?
Here's some data
m = matrix(c(10, 20, 25, 30), 2)
An IRanges notion is coverage()
> cvg = coverage(IRanges(start=m[,1], end=m[,2]))
> cvg
integer-Rle of length 30 with 4 runs
Lengths: 9 10 6 5
Values : 0 1 2 1
Which is a compact run-length encoding; query at the ith location
> cvg[22]
integer-Rle of length 1 with 1 run
Lengths: 1
Values : 2
> runValue(cvg[22])
[1] 2
Do math on the Rle
> cvg > 1
logical-Rle of length 30 with 3 runs
Lengths: 19 6 5
Values : FALSE TRUE FALSE
or coerce to an integer vector
> as(cvg, "integer")
[1] 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 1
This
> cumsum(tabulate(m[,1], 30)) - cumsum(tabulate(m[,2], 30))
[1] 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 1 1 1 1 1 0
will also be reasonably fast.
Note subtle differences between these, from differences in the notion of whether the ends are included (IRanges: yes; tabulate: no) in the range. If these are actually genome coordinates then GenomicRanges is the place to go, to account for seqname (chromosome) and strand.
The data structure you are looking for is called interval tree, which is a type of sorted binary tree that contains (guess what) intervals, each of which usually has start and end positions.
I never used an interval tree to store points as you need, but I guess you can define your intervals as interval.start = interval.end.
Building the tree will take linear time and querying the intervals of your data frame will take logarithmic time, which is much faster than pteetor's quadratic time approach.
The R package IRanges from Bioconductor may help you. I would try the function findOverlaps() and then table() the results. I invite you to read the documentation to see whether it fits your specific needs.
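A small sketch of that idea on the toy matrix m from the answer above (the queried positions are arbitrary; countOverlaps() is a convenient shortcut for findOverlaps() plus table()):
library(IRanges)
ir  <- IRanges(start = m[, 1], end = m[, 2])     # the two chunks from the example
pos <- IRanges(start = c(5, 15, 22), width = 1)  # positions to query
countOverlaps(pos, ir)                           # 0 1 2: number of chunks covering each position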
I took that matrix and examined the overlaps: there were only five intervals with any overlap, and none overlapping two others, assuming they were ordered by their starting positions:
> sum( mat[1:28,2] > mat[2:29,1] )
[1] 5
> sum( mat[1:27,2] > mat[3:29,1] )
[1] 0
So which ones were they?
> which( mat[1:28,2] > mat[2:29,1] )
[1] 19 21 23 25 28
So it seemed rather wasteful of machine resources and time to create a vector that is 23 million items long; it would be a lot easier to simply build a function that counts the number of intervals containing any particular position:
fchunk <- function(pos) {sum( mat[ , 1] <= pos & mat[,2] >= pos)}
#--------
> fchunk(16675330)
[1] 2
> fchunk(16675329)
[1] 1
These are the intervals where there are 2:
sapply( which( mat[1:28,2] > mat[2:29,1] ) ,
function(int1) c( mat[int1+1, 1], mat[int1, 2] ) )
#--------
[,1] [,2] [,3] [,4] [,5]
n7 16675330 18097680 20233612 21288777 22847516
n8 16724700 18445265 20741145 22780817 22967567
If you really want the count at every position -- all 23,110,906 positions -- this code will tell you.
countChunks = function(i) sum(dfrm$n7 <= i & i <= dfrm$n8)
counts = sapply(1:23110906, countChunks)
But it's very slow. Faster code would require some clever optimization to eliminate the (very) redundant counting done by these two lines.
If you simply want the count at one position, i, just call countChunks(i).
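If the count at all 23,110,906 positions is really needed, a vectorized sketch (assuming 1-based integer positions and start/end columns n7 and n8 in dfrm) avoids the per-position loop:
L <- 23110906
starts_le <- cumsum(tabulate(dfrm$n7, nbins = L))                   # chunks starting at or before i
ends_lt   <- c(0, head(cumsum(tabulate(dfrm$n8, nbins = L)), -1))   # chunks ending strictly before i
counts    <- starts_le - ends_lt                                    # chunks covering position i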

R: sample() command subject to a constraint


How to efficiently retrieve top K-similar vectors by cosine similarity using R?

I'm working on a high-dimensional problem (~4k terms) and would like to retrieve the top k similar items (by cosine similarity), and can't afford to do a pair-wise calculation.
My training set is a 6 million x 4k matrix and I would like to make predictions for a 600k x 4k matrix.
What is the most efficient way to retrieve the k-similar items for each item in my 600k x 4k matrix?
Ideally, I would like to get a matrix which is 600k x 10 (i.e., top 10-similar items for each of the 600k items).
ps: I've researched the SO website and found almost all "cosine similarity in R" questions refer to cosine_sim(vector1, vector2). But this question refers to cosine_sim(matrix1, matrix2).
Update
The following code uses a naive method to find the cosine similarity between each row in the testset and every row in the training set.
set.seed(123)
train<-matrix(round(runif(30),0),nrow=6,ncol=5)
set.seed(987)
test<-matrix(round(runif(20),0),nrow=4,ncol=5)
train
[1,] 0 1 1 0 1
[2,] 1 1 1 1 1
[3,] 0 1 0 1 1
[4,] 1 0 1 1 1
[5,] 1 1 0 1 0
[6,] 0 0 0 1 0
test
[1,] 0 1 1 0 0
[2,] 1 0 1 0 1
[3,] 1 0 0 0 0
[4,] 1 0 0 1 1
coSim <- function(mat1, mat2, topK){
  require(plyr)
  # mat2: the test set
  # mat1: the training set. We will find the cosine similarity between each row
  #       in the test set and every row in the training set.
  # topK: user input. For each row in the test set we return the indices of the
  #       'topK' most similar rows from the training set.
  # Set up an empty result matrix. nrow(result) is the size of the cartesian
  # product of the rows of mat1 & mat2.
  result <- matrix(rep(NA, nrow(mat1)*nrow(mat2)), nrow=nrow(mat1)*nrow(mat2), ncol=3)
  k <- 1
  for(i in 1:nrow(mat2)){
    for(j in 1:nrow(mat1)){
      result[k,1] <- i
      result[k,2] <- j
      result[k,3] <- crossprod(mat1[j,], mat2[i,]) /
        sqrt(crossprod(mat1[j,]) * crossprod(mat2[i,]))
      k <- k + 1
    }
  }
  # Sort the result matrix by the cosine similarity found for each row in the
  # test set; convert to a data frame to keep the topK per group.
  result <- as.data.frame(result)
  colnames(result) <- c("testRowId", "trainRowId", "CosineSimilarity")
  result <- ddply(result, "testRowId",
                  function(x) head(x[order(x$CosineSimilarity, decreasing = TRUE), ], topK))
  resultMat <- matrix(result$trainRowId, nrow=nrow(mat2), ncol=topK, byrow=TRUE)
  finalResult <- list(similarity=result, index=resultMat)
}
system.time(cosineSim<-coSim(train, test, topK=2)) #0.12 secs
cosineSim
$similarity
testRowId trainRowId CosineSimilarity
1 1 1 0.8164966
2 1 2 0.6324555
3 2 4 0.8660254
4 2 2 0.7745967
5 3 5 0.5773503
6 3 4 0.5000000
7 4 4 0.8660254
8 4 2 0.7745967
$index
[,1] [,2]
[1,] 1 2
[2,] 4 2
[3,] 5 4
[4,] 4 2
set.seed(123)
train<-matrix(round(runif(1000000),0),nrow=5000,ncol=200)
set.seed(987)
test<-matrix(round(runif(400000),0),nrow=2000,ncol=200)
system.time(cosineSim<-coSim(train, test, topK=50)) #380secs
When I run the same function with a 5000x200 matrix for training and a 2000x200 matrix for testing, it took over 380 secs.
Ideally, I would like to see some ideas where I do not have to calculate the similarity between each and every row. If that is not possible, some pointers on how to vectorise the above code would be helpful.
No need to compute the similarity for every row. You can use this instead:
coSim2 <- function(mat1, mat2, topK){
  # similarity computation:
  xy <- tcrossprod(mat1, mat2)
  xx <- rowSums(mat1^2)
  yy <- rowSums(mat2^2)
  result <- xy/sqrt(outer(xx, yy))
  # top similar rows from train (per row in test):
  top <- apply(result, 2, order, decreasing=TRUE)[1:topK,]
  result_df <- data.frame(testRowId=c(col(top)), trainRowId=c(top))
  result_df$CosineSimilarity <- result[as.matrix(result_df[,2:1])]
  list(similarity=result_df, index=t(top))
}
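The key step is that tcrossprod(mat1, mat2) computes all train x test dot products in one call, and dividing by the outer product of the row norms turns them into cosines. A small self-contained check (a sketch, independent of the data above):
set.seed(1)
A <- matrix(rnorm(12), nrow = 3)   # 3 "train" rows, 4 columns
B <- matrix(rnorm(8),  nrow = 2)   # 2 "test" rows, 4 columns
cos_vec  <- tcrossprod(A, B) / sqrt(outer(rowSums(A^2), rowSums(B^2)))
cos_loop <- outer(1:3, 1:2, Vectorize(function(i, j)
  sum(A[i, ] * B[j, ]) / sqrt(sum(A[i, ]^2) * sum(B[j, ]^2))))
all.equal(cos_vec, cos_loop)       # TRUE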
Test data (I've reduced your train matrix)
set.seed(123)
train<-matrix(round(runif(100000),0),nrow=500,ncol=200)
set.seed(987)
test<-matrix(round(runif(400000),0),nrow=2000,ncol=200)
Result:
> system.time(cosineSim<-coSim(train, test, topK=50)) #380secs
user system elapsed
41.71 1.59 43.72
> system.time(cosineSim2<-coSim2(train, test, topK=50)) #380secs
user system elapsed
0.46 0.02 0.49
Using your full 5000 x 200 train matrix, coSim2 runs in 7.8 sec.
Also note:
> any(cosineSim$similarity != cosineSim2$similarity)
[1] FALSE
> any(cosineSim$index != cosineSim2$index)
[1] FALSE
You can't use identical because my function returns integers instead of doubles for row IDs.
