I have data in the form of item lists stored as strings. There are around 7300 rows, and the lists look like "23.53.24.33.55" and so on.
Each number is a code for an item, and I'm calculating the Sorensen-Dice distance between the itemsets. There are 238 unique item codes.
I start with a dataset matrix_ob.csv which has 7300 rows and 239 columns, where each list as shown above is one-hot encoded into binary values indicating the presence or absence of each item in the row (so in the example above there would be 233 zeroes and 5 ones, for 23, 53, 24, 33 and 55). I originally did this with for loops, which worked when the datasets were smaller, but with this many rows a for loop seems like the wrong approach: the final distance matrix has dimensions 7300x7300, which takes significantly longer to compute.
I take the two lists being compared and add their one-hot encoded values, so each of the 238 binary values can take the value 0, 1 or 2, where 2 means the item code is present in both lists. If the itemsets are identical the distance is 0; if they are completely different, the distance is 1. The third case is when only some items are shared, for instance "23.25.27" and "23.53.24.33.55" have 1 item in common (23). I count the 0s, 1s and 2s and plug them into the formula for the Dice coefficient, 2|A intersect B| / (|A| + |B|), which with these counts is 2*n2 / (2*n2 + n1); the Sorensen-Dice distance is simply 1 minus the coefficient. (In the example: n2 = 1 and n1 = 6, so the coefficient is 2/8 = 0.25 and the distance is 0.75.)
Currently this is the code I use:
sorensen_dice <- matrix(0, nrow = nrow(matrix_ob), ncol = nrow(matrix_ob))
for (i in 1:nrow(sorensen_dice)) {
  for (j in 1:nrow(sorensen_dice)) {
    if (i == j) {
      sorensen_dice[i, j] <- 0
    } else {
      temp <- matrix_ob[i, ] + matrix_ob[j, ]  # entries are 0, 1 or 2
      n2 <- sum(temp == 2)                     # items present in both sets
      n1 <- sum(temp == 1)                     # items present in exactly one set
      if (n2 == 0) {
        sorensen_dice[i, j] <- 1               # nothing in common
      } else {
        a <- 2 * n2
        p <- a + n1                            # |A| + |B|
        sorensen_dice[i, j] <- 1 - (a / p)
      }
    }
  }
  print(i)
}
colnames(sorensen_dice)<-data_ob$LIST
rownames(sorensen_dice)<-data_ob$LIST
write.csv(sorensen_dice,file="sorensen_dice_ob.csv")
Each of the ~7300 outer iterations takes a minute to execute, which I can see from the print line. However, I don't understand how to vectorise something like this, and there are no functions for this distance measure in R. What would be a good way to vectorise these calculations? I'm essentially running a nested for loop 7300 times. I thought I could restrict the loops to one half of the matrix, since the matrix is symmetric, but that feels like a paper-thin solution when I don't properly understand how to vectorise a nested for loop.
Alternatively, the philentropy package has the distance() function, which offers both "sorensen" and "dice" as methods; which of these is the Sorensen-Dice distance?
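One way to vectorise the whole computation, for what it's worth (a sketch, not from the post above; it assumes matrix_ob has been reduced to a plain numeric 0/1 matrix of itemsets by item codes, with any ID column dropped): the pairwise intersection sizes are just the matrix product of the one-hot matrix with its own transpose, and the itemset sizes are the row sums.
M <- as.matrix(matrix_ob)  # assumed: numeric 0/1, rows = itemsets, 238 item columns
inter <- tcrossprod(M)     # inter[i, j] = |A_i intersect A_j|, same as M %*% t(M)
sizes <- rowSums(M)        # |A_i| for every itemset
# Dice coefficient = 2|A intersect B| / (|A| + |B|); the distance is 1 minus that
sorensen_dice <- 1 - (2 * inter) / outer(sizes, sizes, "+")
This produces the full symmetric 7300x7300 matrix (with zeros on the diagonal) in a handful of matrix operations instead of ~53 million loop iterations.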
I have an RNA-seq dataset (rows = samples, columns = genes) which goes into a clustering function. The genes are grouped into n clusters labelled with numbers, where the genes belonging to cluster 0 are the non-clustered ones. The clustered genes go back into the clustering function, and again we get n clusters labelled with numbers, 0 again being the non-clustered genes. The process repeats until no further genes are classified into cluster 0. I need to loop through this process and return the final clustering result along with the union of the genes belonging to cluster 0 at each iteration. I'm aware this could be done with either while or repeat. I tried using repeat but it is not working, and the problem is that it's not really clear to me how to set this up properly.
# define my dataset
dat <- my_dataset
repeat {
  # run the clustering
  aa <- cluster(dat)
  # if cluster 0 has length 0 (no genes), assign the clustering result to Mod and stop the loop
  if (length(which(aa$colors == 0)) == 0) {
    Mod <- aa
    break
  # otherwise, store the genes belonging to cluster 0 and set up a new dataset made up of the clustered genes
  } else {
    noPass <- rownames(dat)[aa$colors == 0]
    dat <- dat[, which(aa$colors != 0)]
  }
  return(list(aa, noPass))
}
Any suggestion is really appreciated.
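As written, the loop has three problems: return() is called inside the repeat block on every pass, noPass is overwritten each iteration instead of accumulating the union, and since genes are columns the selection should use colnames() rather than rownames(). A minimal corrected sketch (assuming cluster() returns an object whose colors vector is aligned with the columns of dat; the wrapper name is made up):
cluster_until_stable <- function(dat) {
  noPass <- character(0)             # union of cluster-0 genes over all iterations
  repeat {
    aa <- cluster(dat)
    if (!any(aa$colors == 0)) break  # no gene left in cluster 0: done
    noPass <- union(noPass, colnames(dat)[aa$colors == 0])
    dat <- dat[, aa$colors != 0, drop = FALSE]
  }
  list(Mod = aa, noPass = noPass)
}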
This sounds like a suitable problem for recursion:
define a function that will cluster the elements in its argument; if there are none left-out from the clustering, then return the elements, otherwise call itself using only the elements that are still in, and so on until there are none left-out.
cluster_until_none_left_out <- function(elements) {
  aa <- cluster(data[elements])
  if (sum(aa$colors == 0) == 0) {
    return(list(aa, elements))
  } else {
    return(cluster_until_none_left_out(elements[aa$colors != 0]))
  }
}
It's a bit difficult to provide a full, running example without any real data, but here's a very simple one using mock data consisting of a vector of random integers, which a mock 'clustering' function splits into those that are greater than half the mean (the single 'cluster'), and those that are less than this (the left-out, 'unclustered' elements).
data <- sample(1:100)
cluster <- function(vec) {
  answer <- data.frame(vec)
  answer$colors <- ifelse(vec > mean(vec) / 2, 1, 0)
  return(answer)
}
sum(cluster(data)$colors == 0)
# shows that one round of clustering leaves some elements in cluster 0
initial_elements=1:100
clustering_elements=cluster_until_none_left_out(initial_elements)[[2]]
sum(cluster(data[clustering_elements])$colors == 0)
# 0 (now nobody left in cluster 0)
The final clustering is now returned together with the indices of the elements it uses. The 'left-out' elements (cluster 0) are all the rest.
cluster_0 = initial_elements[ -clustering_elements ]
Preface: I am a fairly novice R user; I've used SAS my entire adult life and am not used to working with matrices either.
I am currently working on a project for an evolutionary biology class that requires running the rbinom function through a nested loop over a matrix.
The probability in the first row is set to 0.1, but the values in subsequent rows must use the probability from the previous row, and I cannot figure out how to reference the value in the previous row. My code is below; if anybody knows the syntax for this, I would greatly appreciate it! Currently I have it set to i-1, but I know that's not right.
# equation: rbinom(1, 2*N, p) / (2*N)
p <- 0.1
N <- 10
T <- 5  # number of generations
L <- 3  # number of independent SNPs
alleles <- matrix(nrow = T, ncol = L)  # initialize a matrix of allele frequencies each generation
alleles[1, ] <- p  # initialize first row to equal p
for (j in 1:ncol(alleles)) {
  for (i in 2:nrow(alleles)) {
    alleles[i, j] <- rbinom(1, 2 * N, i - 1) / (2 * N)
  }
}
alleles
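Presumably the intended recurrence is that each generation draws with the allele frequency realised in the previous generation, i.e. the previous row of the same column, alleles[i - 1, j], rather than the loop index i - 1. A sketch of the corrected loop:
for (j in 1:ncol(alleles)) {
  for (i in 2:nrow(alleles)) {
    # use the frequency realised in the previous generation as the success probability
    alleles[i, j] <- rbinom(1, 2 * N, alleles[i - 1, j]) / (2 * N)
  }
}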
I've got a big problem.
I've got a large raster (rows = 180, columns = 480, number of cells = 86400).
At first I binarized it (so that there are only 1s and 0s) and then I labelled the clusters (cells that are 1 and connected to each other got the same label).
Now I need to calculate all the distances between the cells that are NOT 0.
There are quite a lot of them, and that's my big problem.
I did this to get the coordinates of the cells I'm interested in (the positions, i.e. cell numbers, of the cells that are not 0):
V <- getValues(label)         # values of the labelled raster
Vu <- 1:max(V)                # all cluster labels
pos <- which(V %in% Vu)       # cell numbers of the non-zero cells
XY <- xyFromCell(label, pos)  # their coordinates
This works very well, so XY is a matrix which contains all the coordinates (of cells that are not 0). But now I'm struggling: I need to calculate the distances between ALL of these coordinates, and then put each distance into one of 43 distance bins. It's kind of like this (just an example):
0 < x < 0.2: bin 1
0.2 < x < 0.4: bin 2
...and so on, up to bin 43.
When I use this:
pD=pointDistance(XY,lonlat=FALSE)
R says it's not possible to allocate a vector of this size; it's getting too large.
Then I thought I could do this (create an empty data frame df, or something like that, and let pointDistance run over every single row of XY):
df <- NULL
for (i in 1:nrow(XY)) {
  pD <- pointDistance(XY, XY[i, ], lonlat = FALSE)
  pDbin <- as.matrix(table(cut(pD, breaks = seq(0, 8.6, by = 0.2), labels = 1:43)))
  df <- cbind(df, pDbin)
  df <- apply(df, 1, FUN = function(x) sum(x))
}
This works when I try it with, e.g., the first 50 rows of XY. But when I use it on the whole XY matrix it takes too much time (sometimes the XY matrix contains 10000 xy-coordinates).
Does anyone have an idea how to do this faster?
I don't know whether this will be fast or not, but I recommend you try the following.
Say you have a data frame (or matrix) with a value of 0 or 1 in each cell. To find the coordinates, all you have to do is write the code below:
cord_matrix <- which(dataframe == 1, arr.ind = TRUE)
Now you get the coordinate matrix with row and column indices.
To find the Euclidean distances, use the dist() function. It looks like this:
dist_vector <- dist(cord_matrix)
It returns the lower triangle of the distance matrix as a "dist" object, which can be transformed into a vector or a symmetric matrix. Now all you have to do is compute the bins according to your requirement.
Let me know if this works within your memory limits.
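For the binning step, a sketch using cut() (assuming the distances are on the same scale as the 0-8.6 breaks from the question; distances outside that range would come back as NA):
# count how many pairwise distances fall into each of the 43 bins
bin_counts <- table(cut(as.vector(dist_vector), breaks = seq(0, 8.6, by = 0.2), labels = 1:43))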
I have a matrix containing the values 0, 1, and 2; 99% of the values are 0. The matrix has 1 million rows and 700 columns, and every row has at least one non-zero value.
I need to compute the distance between each pair of columns, using this formula for the distance between columns x and y:
D = (1/(2L)) * sum_{i=1}^{L} |x_i - y_i|, where L = 1 million is the number of rows.
I wrote a piece of R code, but it's taking too long to compute. Is it possible to use dynamic programming to do it faster? Here is my code:
# mac is the matrix
nCols <- ncol(mac)
nRows <- nrow(mac)
# the pairwise distance matrix
distMat <- matrix(data = -1, nrow = nCols, ncol = nCols)
abs.dist <- function(x) { return(abs(x[1] - x[2])) }
for (i in 1:(nCols - 1)) {
  for (j in (i + 1):nCols) {
    d1 <- apply(mac[, c(i, j)], 1, abs.dist)
    k <- sum(d1) / (2 * nRows)
    distMat[i, j] <- k
    distMat[j, i] <- k
  }
}
for (i in 1:nCols) distMat[i, i] <- 0
Thanks a lot for any help!
I will just summarize what is in the comments already:
# mac is the matrix
nCols <- ncol(mac)
nRows <- nrow(mac)
# the pairwise distance matrix
distMat <- matrix(data = -1, nrow = nCols, ncol = nCols)
for (i in 1:(nCols - 1)) {
  for (j in (i + 1):nCols) {
    d1 <- abs(mac[, i] - mac[, j])  # vectorised over rows: no apply() needed
    k <- sum(d1) / (2 * nRows)
    distMat[i, j] <- k
    distMat[j, i] <- k
  }
}
diag(distMat) <- 0
This is approximately 100 times faster for a 2000x500 matrix.
It took about half a minute for a 1e6x700 matrix.
Computing a distance matrix means you need (n^2-n)/2 operations. I'm not surprised it is taking a while.
Since you need all pairs, these calculations have to be done independently, so dynamic programming will not help: DP helps when you build the solution from smaller overlapping subproblems, and everything here is independent (as far as I know).
You said most entries are 0. Try looking at a sparse matrix library. This blog post may give you some ideas for doing this in R.
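For what it's worth, the pairwise loop can be removed entirely (a sketch, not from the answer above): for values in {0, 1, 2}, |a - b| = a + b - 2*min(a, b), and min(a, b) = [a >= 1][b >= 1] + [a >= 2][b >= 2], so the whole matrix of absolute-difference sums reduces to two cross-products, which the Matrix package can compute on sparse inputs:
library(Matrix)                            # sparse matrices, since ~99% of the entries are 0

B1 <- Matrix(mac >= 1, sparse = TRUE) * 1  # indicator: entry is at least 1
B2 <- Matrix(mac >= 2, sparse = TRUE) * 1  # indicator: entry is at least 2
s  <- colSums(mac)                         # column sums

# sum_i |x_i - y_i| for every pair of columns, via two sparse cross-products
absSums <- outer(s, s, "+") - 2 * (as.matrix(crossprod(B1)) + as.matrix(crossprod(B2)))
distMat <- absSums / (2 * nrow(mac))       # the distance from the question's formula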
Ok I have the following problem:
I have a matrix of ranks in R, which I got by ranking asset returns: ranks >= 3 get an NA, ranks < 3 get the rank number, and if some assets share a rank, there are fewer NAs in that row. Here are two example rows of the rank matrix and two example rows of a matrix with returns.
ranks<-matrix(c(1,1,2,NA,NA, 1,2,NA,NA,NA),nrow=2,ncol=5)
returns<-matrix(c(0.3,0.1,-0.5,-0.7,0.2,0.1,0.4,0.05,-0.7,-0.3),nrow=2,ncol=5)
Now, if all assets are bought in equal parts for our portfolio, I can calculate the average return with:
Mat.Ret<-returns*ranks
Mean.Ret<-rowMeans(Mat.Ret,na.rm=TRUE)
However, I want the option of giving a vector of weights for the two ranks, where the weights say what percentage each particular rank should have in my portfolio. As an example, we have the vector
weights<-c(0.7,0.3)
Now how would I use this in my code? I basically want to calculate ranks * returns * weights. If there is only ONE rank 1 and ONE rank 2 in a row, the code works. But how would I make this variable? One solution would be to count, for each rank, how many times it occurs in a particular row, divide the weight by this count, and then multiply this "net weight" * rank * returns.
But I have no clue how to do this in code... any help?
UPDATE AFTER FIRST COMMENTS
OK, I want to keep it flexible so that the weights adjust depending on how many times a certain rank is given. A user can choose the top 5 ranked assets, so none or several assets may share ranks, and the distribution of weights must be very flexible. I've written a function which doesn't work yet, since I'm obviously not yet experienced enough with the matrix and vector selection syntax. This is what I've got so far:
ranks <- apply(ranks, 1, function(x) distributeWeightsPerMatrixRow(x, weights))

distributeWeightsPerMatrixRow <- function(MatrixRow, Weights) {
  if (length(Weights) == length(MatrixRow[!is.na(MatrixRow)])) {
    MatrixRow <- Weights[MatrixRow]
  } else {
    for (i in 1:length(MatrixRow)) {
      if (!is.na(MatrixRow[i])) {
        EqWeights <- length(MatrixRow[MatrixRow == MatrixRow[i]])
        MatrixRow[i] <- sum(Weights[MatrixRow[i]:(MatrixRow[i] + EqWeights - 1)]) / EqWeights
      }
    }
  }
  return(MatrixRow)
}
EDIT2:
The function seems to work; however, the resulting ranks object is now the transposed version of the original matrix, without the column names.
Since your ranks are integers above zero, you can use the rank matrix for indexing the weights vector:
mat.weights <- weights[ranks]
mat.weighted.ret <- returns * ranks * mat.weights
Update based on comment.
I suppose you're looking for something like this:
if (length(unique(na.omit(as.vector(ranks)))) == 1) {
  mat.weights <- (!is.na(ranks)) * 0.5
} else {
  mat.weights <- weights[ranks]
}
mat.weighted.ret <- returns * ranks * mat.weights
If there is only one rank, all weights become 0.5.
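As for the transposed result mentioned in EDIT2: apply() over rows always returns its per-row results as columns, so the output has to be transposed back and the dimnames restored. A sketch (ranks_weighted is just an illustrative name):
# apply() over rows returns one column per input row, so transpose the result back
ranks_weighted <- t(apply(ranks, 1, function(x) distributeWeightsPerMatrixRow(x, weights)))
dimnames(ranks_weighted) <- dimnames(ranks)  # restore the row and column names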