I am trying to implement a block bootstrap procedure, but I haven't figured out a way of doing this efficiently.
My data.frame has the following structure:
CHR POS var_A var_B
1 192 0.9 0.7
1 2000 0.8 0.3
2 3 0.21 0.76
2 30009 0.36 0.15
...
The first column is the chromosome identification, the second column is the position, and the last two columns are variables for which I want to calculate a correlation. The problem is that the rows are not independent of one another: the closer two rows are in position, the more dependent they are. So I cannot simply do cor(df$var_A, df$var_B).
The way around this that is commonly used with this type of data is a block bootstrap. That is, I need to divide my data into blocks of length X, randomly select one row inside each block, and then calculate my statistic of interest. Note, however, that these blocks need to be defined based on the column POS, not on the row number. Also, this procedure needs to be done separately for each chromosome.
I tried to implement this, but I came up with the slowest code possible (it didn't even finish running) and I am not 100% sure it works.
x = 1000        # block length, defined on POS
cors = numeric()
iter = 1000
for (j in 1:iter) {
  df = freq[0, ]
  for (chr in unique(freq$CHR)) {
    t = freq[freq$CHR == chr, ]
    fim = t[nrow(t), 2]   # last position on this chromosome
    i = t[1, 2]           # start of the current block
    f = i + x             # end of the current block
    while (f < fim) {
      rows = which(t$POS >= i & t$POS < f)
      if (length(rows) > 0) {
        s = rows[sample(length(rows), 1)]   # pick one row inside the block
        df = rbind(df, t[s, ])
      }
      i = f
      f = f + x
    }
  }
  cors = c(cors, cor(df$var_A, df$var_B))
}
Could anybody help me out? I am sure there is a more efficient way of doing this.
Thank you in advance.
One efficient approach would be to use the boot package, whose functions include parallel-processing capabilities.
In particular, tsboot, the time-series bootstrap function, will select ordered blocks of data. This could work if your POS variable is some kind of ordered observation.
The boot package functions are great, but they need a little help first. With the plain boot() function you wrap the statistic of interest in a function that takes the data and an index vector; that index argument is how the bootstrap-generated indices are passed to your statistic. tsboot() is slightly different: it calls the statistic directly on each block-resampled series, so the wrapper only needs the data itself.
cor_hat <- function(data) cor(y = data[, "var_A"], x = data[, "var_B"])
Note cor_hat in the call below. The sim = "fixed", l = 1000 arguments indicate that you want fixed blocks of length l = 1000. However, you could use blocks of any size, say 5 or 10, if you're trying to capture nearest-neighbour dynamics moving over time. The parallel = "multicore" argument speaks for itself, but it may need to be "snow" if you are using Windows.
library(boot)
tsboot(data, cor_hat, R = 1000, sim = "fixed", l = 1000, parallel = "multicore", ncpus = 4)
In addition, page 194 of Elements of Statistical Learning provides a good example of the framework using the traditional boot function, all of which is relevant to tsboot.
Hope that helps, good luck.
Justin
I hope I understood you right:
# needed for round_any()
library(plyr)
res <- lapply(unique(freq$CHR), function(x) {
  freq_sel <- freq[freq$CHR == x, ]
  blocks <- lapply(seq(1, round_any(max(freq_sel$POS), 1000, ceiling), 1000),
                   function(ix) freq_sel[freq_sel$POS > ix & freq_sel$POS <= ix + 999, ])
  do.call(rbind, lapply(blocks, function(x) if (nrow(x) > 1) x[sample(1:nrow(x), 1), ] else x))
})
This should return a list with an entry for each chromosome. Within each entry, there's an observation per 1kb-block if present. The number of blocks is determined by the maximum POS value.
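To make that usable for the bootstrap, here is a small follow-up sketch (my addition, assuming res is the list built above): collapsing the per-chromosome list into one data frame gives a single bootstrap replicate of the correlation, which you would then repeat.
boot_df <- do.call(rbind, res)       # one row per CHR/block combination
cor(boot_df$var_A, boot_df$var_B)    # one bootstrap replicate of the correlation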
EDIT:
library(doParallel)
library(foreach)
library(plyr)
cl <- makeCluster(detectCores())
registerDoParallel(cl)
res <- foreach(x = unique(freq$CHR), .packages = 'plyr') %dopar% {
  freq_sel <- freq[freq$CHR == x, ]
  blocks <- lapply(seq(1, round_any(max(freq_sel$POS), 1000, ceiling), 1000),
                   function(ix) freq_sel[freq_sel$POS > ix & freq_sel$POS <= ix + 999, ])
  do.call(rbind, lapply(blocks, function(x) if (nrow(x) > 1) x[sample(1:nrow(x), 1), ] else x))
}
stopCluster(cl)
This is a simple parallelisation with foreach over chromosomes. It could be better to restructure the function and base the parallel processing on another level (such as the 1000 iterations, or maybe the blocks). In any case, I can only stress again what I said in my comment: before you work on parallelising your code, you should be sure that it's as efficient as possible. That means you might want to look into the boot package or similar to get an increase in efficiency. That said, with the number of iterations you're planning, parallel processing might be useful once you're comfortable with your function.
So, after a while I came up with an answer to my problem. Here it goes.
You'll need the package dplyr.
library(dplyr)

l = 1000

teste = freq %>%
  mutate(w = ceiling(POS/l)) %>%
  group_by(CHR, w) %>%
  sample_n(1)
This code creates a new variable named w based on the position in the genome (POS). This variable w is the window to which each row was assigned, and it depends on l, which is the length of your window.
You can repeat this code several times, each time sampling one row per window/CHR (with the sample_n(1)), and apply whatever statistic of interest you want.
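For completeness, here is a hedged sketch of that repetition (my addition, assuming dplyr is loaded and freq is the data frame from the question); the 2.5% and 97.5% quantiles then give a percentile bootstrap interval.
library(dplyr)

l = 1000
iter = 1000

boot_cors = replicate(iter, {
  freq %>%
    mutate(w = ceiling(POS/l)) %>%
    group_by(CHR, w) %>%
    sample_n(1) %>%          # one row per window per chromosome
    ungroup() %>%
    summarise(r = cor(var_A, var_B)) %>%
    pull(r)
})

quantile(boot_cors, c(0.025, 0.975))  # percentile bootstrap interval for the correlation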
I have the following MATLAB code and I'm working on translating it to R:
nproc=40
T=3
lambda=4
tarr = zeros(1, nproc);
i = 1;
while (min(tarr(i,:)) <= T)
    tarr = [tarr; tarr(i, :) - log(rand(1, nproc))/lambda];
    i = i + 1;
end
tarr2=tarr';
X=min(tarr2);
stairs(X, 0:size(tarr, 1)-1);
It is the Poisson process from the renewal-process perspective. I've done my best in R, but something is wrong in my code:
nproc<-40
T<-3
lambda<-4
i<-1
tarr=array(0,nproc)
lst<-vector('list', 1)
while(min(tarr[i]<=T)){
  tarr<-tarr[i]-log((runif(nproc))/lambda)
  i=i+1
  print(tarr)
}
tarr2=tarr^-1
X=min(tarr2)
plot(X, type="s")
The loop prints a random number of arrays, and only the last one is saved in tarr afterwards.
The result has to look like...
Thank you in advance. All interesting and supportive comments will be rewarded.
Adding on to the previous comment, there are a few things happening in the MATLAB script that are not in the R code:
[tarr; tarr(i, :)-log(rand(1, nproc))/lambda]: from my understanding, you are adding another row to your matrix and populating it with tarr(i, :) - log(rand(1, nproc))/lambda.
You will need to use a different method, as MATLAB and R handle this type of thing differently.
One glaring thing that stands out to me is that you seem to be using R: tarr[i] and M: tarr(i, :) as equals, where these are very different. What I think you are trying to achieve is all the columns in a given row i, so in R that would look like tarr[i, ].
Now the use of min is also different, as R: min() will return the minimum of the whole matrix (just one number), whereas M: min() returns the minimum value of each column. For this in R you can use Rfast::colMins from the Rfast package.
The stairs part is something I am not very familiar with, but something like ggplot2::qplot(..., geom = "step") may work.
Now, I have tried to create something that works in R, but I am not really sure what the required output is. Nevertheless, hopefully some of the basics can help you get it done on your side. Below is a quick attempt!
nproc <- 40
T0 <- 3
lambda <- 4
i <- 1
tarr <- matrix(rep(0, nproc), nrow = 1, ncol = nproc)
while(min(tarr[i, ]) <= T0){
  # Major alteration: create a temporary row from the previous row in tarr
  temp <- matrix(tarr[i, ] - log((runif(nproc))/lambda), nrow = 1)
  # Join the temp row to the tarr matrix
  tarr <- rbind(tarr, temp)
  i = i + 1
}
# I am not sure what was meant by tarr' in the matlab script I took it as inverse of tarr
# which in matlab is tarr.^(-1)??
tarr2 = tarr^(-1)
library(ggplot2)
library(Rfast)
min_for_each_col <- colMins(tarr2, value = TRUE)
qplot(seq_along(min_for_each_col), sort(min_for_each_col), geom="step")
As you can see, I have sorted min_for_each_col so that the plot is actually a stair plot and not some random stepwise plot. I think there is a problem, since from the MATLAB code 0:size(tarr2, 1)-1 gives the number of rows less 1, but I can't figure out why, if grabbing colMins (and there are 40 columns), we would create around 20 steps. But I might be completely misunderstanding! Also, I have changed T to T0, since in R T is shorthand for TRUE and is not good to overwrite!
Hope this helps!
I downloaded GNU Octave today to actually run the MATLAB code. After watching the code run, I made a few tweaks to the great answer by #Croote.
nproc <- 40
T0 <- 3
lambda <- 4
i <- 1
tarr <- matrix(rep(0, nproc), nrow = 1, ncol = nproc)
while(min(tarr[i, ]) <= T0){
  temp <- matrix(tarr[i, ] - log(runif(nproc))/lambda, nrow = 1)  # fixed paren
  tarr <- rbind(tarr, temp)
  i = i + 1
}
tarr2 = t(tarr) #takes transpose
library(ggplot2)
library(Rfast)
min_for_each_col <- colMins(tarr2, value = TRUE)
qplot(seq_along(min_for_each_col), sort(min_for_each_col), geom="step")
Edit: Some extra plotting tweaks -- seems to be closer to the original
qplot(min_for_each_col, seq_along(min_for_each_col), geom="step", ylab="", xlab="")
# or with ggplot2
library(dplyr)   # for the %>% pipe
df1 <- cbind(min_for_each_col, 1:length(min_for_each_col)) %>% as.data.frame
colnames(df1)[2] <- "index"
ggplot() +
  geom_step(data = df1, mapping = aes(x = min_for_each_col, y = index), color = "blue") +
  labs(x = "", y = "")
I'm not too familiar with renewal processes or MATLAB, so bear with me if I misunderstood the intention of your code. That said, let's break down your R code step by step and see what is happening.
The first 4 lines assign numbers to variables.
The fifth line creates an array with 40 (nproc) zeros.
The sixth line (which doesn't seem to be used later) creates an empty vector with mode 'list'.
The seventh line starts a while loop. I suspect this line is supposed to say while the min value of tarr is less than or equal to T ...
or it's supposed to say while i is less than or equal to T ...
It actually takes the minimum of a single boolean value (tarr[i] <= T). Now this can work because TRUE and FALSE are treated like numbers. Namely:
TRUE == 1 # returns TRUE
FALSE == 0 # returns TRUE
TRUE == 0 # returns FALSE
FALSE == 1 # returns FALSE
However, since the value of tarr[i] depends on a random number (see line 8), this could lead to the same code running differently each time it is executed. This might explain why the code "prints a random number of arrays".
The eighth line seems to overwrite the assignment of tarr with the computation on the right. Thus it takes the single value of tarr[i] and subtracts from it the natural log of runif(nproc) divided by 4 (lambda) -- which gives 40 different values. These forty different values from the last time through the loop are stored in tarr.
If you want to store all forty values from each time through the loop, I'd suggest storing them in, say, a matrix or data frame instead. If that's what you want to do, here's an example of storing it in a matrix:
for (i in 1:nrow(yourMatrix)) {
  # computations
  yourMatrix[i, ] <- rowCreatedByComputations
}
See this answer for more info about that. Also, since it's a set number of values per run, you could keep them in a vector and simply append to the vector each loop like this:
vector <- c(vector,newvector)
The ninth line increases i by one.
The tenth line prints tarr.
The eleventh line closes the loop statement.
Then, after the loop, tarr2 is assigned 1/tarr. Again, this will be 40 values from the last time through the loop (line 8).
Then X is assigned the min value of tarr2.
This single value is plotted in the last line.
Also note that runif samples from the uniform distribution -- if you're looking for a Poisson distribution see: Poisson
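As a side note (my addition, not part of the original answer): the -log(rand)/lambda term in the MATLAB code is drawing exponential inter-arrival times, which is the standard way to build a Poisson process, and the direct R equivalent is rexp().
lambda <- 4
interarrival <- rexp(40, rate = lambda)   # same distribution as -log(runif(40))/lambda
arrival_times <- cumsum(interarrival)     # one sample path of the Poisson process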
Hope this helped! Let me know if there's more I can do to help.
I am simulating data and filling a matrix using a for loop in R. Currently the loop is running slower than I would like. I've done some work to vectorize some of the variables to improve the loop's speed, but it is still taking some time. I believe the
mat[j,year] <- sum(vec==1)/x
part of the loop is slowing things down. I've looked into filling matrices more efficiently, but could not find anything to help my current problem. Eventually this will be used as part of a shiny app, so all of the variables I assign will need to be easily assigned different values.
Any advice to speed up the loop or more efficiently write this loop would be greatly appreciated.
Here is the loop:
# These variables are all specified because they need to change with different simulations
num.sims <- 20
time <- 50
mat <- matrix(nrow = num.sims, ncol = time)
x <- 1000
init <- 0.5 * x
vec <- vector(length = x)
ratio <- 1
freq <- -0.4
freq.vec <- numeric(nrow(mat))
## start a loop
for (j in 1:num.sims) {
  vec[1:init] <- 1; vec[(init + 1):x] <- 2
  year <- 2
  freq.vec[j] <- sum(vec == 1) / x
  for (i in 1:(x * (time - 1))) {
    freq.1 <- sum(vec == 1) / x; freq.2 <- 1 - freq.1
    fit.ratio <- exp(freq * (freq.1 - 0.5) + log(ratio))
    Pr.1 <- fit.ratio * freq.1 / (fit.ratio * freq.1 + freq.2)
    vec[ceiling(x * runif(1))] <- sample(c(1, 2), 1, prob = c(Pr.1, 1 - Pr.1))
    ## record data
    if (i %% x == 0) {
      mat[j, year] <- sum(vec == 1) / x
      year <- year + 1
    }
  }
}
The inner loop is what is slowing you down. You're doing x number of iterations to update each cell in the matrix. Since each trip to modify vec depends on the previous iteration, this would be difficult to simplify. #Andrew Feierman is probably correct that this would benefit from being moved to C++, at least the four lines before the if statement.
Alternatively, this only takes 10-20 seconds to run. Unless you're going to scale this up or run it many times, it might not be worth the trouble to speed it up. If you do keep it as is, you could put a progress bar in Shiny to let the user know things are still working.
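A rough sketch of that progress bar (my addition; it assumes the simulation loop runs inside a reactive context in your Shiny server function):
library(shiny)

num.sims <- 20   # from the question

# inside server(), e.g. in an observeEvent() or eventReactive():
withProgress(message = "Running simulations", value = 0, {
  for (j in 1:num.sims) {
    # ... body of the outer simulation loop from the question goes here ...
    incProgress(1 / num.sims, detail = paste("simulation", j))
  }
})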
Depending on how often you will need to call this loop, it could be worth rewriting it in C++. R itself is largely implemented in C, and compiled C++ code will typically run many, many times faster than even efficient R code.
sourceCpp() from the Rcpp package is a good place to start: https://www.rdocumentation.org/packages/Rcpp/versions/0.12.11/topics/sourceCpp
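To make that concrete, here is a hedged sketch (my own addition, untested against the original, and the function name sim_one_run is made up) of moving the inner loop into C++ with Rcpp::cppFunction. It also keeps a running count of 1s rather than recomputing sum(vec == 1) on every iteration.
library(Rcpp)

cppFunction('
NumericVector sim_one_run(IntegerVector start, int x, int time, double freq, double ratio) {
  IntegerVector vec = clone(start);   // work on a copy of the starting population
  NumericVector out(time - 1);        // one recorded frequency per year (years 2..time)
  int n1 = sum(vec == 1);             // running count of type-1 individuals
  int year = 0;
  for (int i = 1; i <= x * (time - 1); i++) {
    double freq1 = (double) n1 / x;
    double freq2 = 1.0 - freq1;
    double fit_ratio = std::exp(freq * (freq1 - 0.5) + std::log(ratio));
    double pr1 = fit_ratio * freq1 / (fit_ratio * freq1 + freq2);
    int idx = (int) std::floor(R::runif(0.0, 1.0) * x);    // individual to replace
    int newval = (R::runif(0.0, 1.0) < pr1) ? 1 : 2;
    n1 += (newval == 1) - (vec[idx] == 1);
    vec[idx] = newval;
    if (i % x == 0) out[year++] = (double) n1 / x;          // record once per "year"
  }
  return out;
}')

# Hypothetical usage with the objects from the question:
# mat[j, 2:time] <- sim_one_run(c(rep(1, init), rep(2, x - init)), x, time, freq, ratio)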
I've inherited some R code and it runs incredibly slowly. Most of the time is spent evaluating functions of the form below (there are about 15 such functions with different integrands G):
TMin <- 0.5
F <- function(t, d) {
  result <- ifelse((d > 0) & (t > TMin),
                   mapply(function(t, d) integrate(G, lower = 0, upper = t, t, d)$value, t, d),
                   0)
  return(result)
}
For testing, I'm using the following dummy function, but in the real code the Gs are much more complicated, involving exp(), log(), dlnorm(), plnorm(), etc.
G <- function(x, t, d) {
  mean(rnorm(1e5))   # expensive call whose result is discarded; it stands in for a slow integrand
  x + t - d
}
F will be calculated around 2 million times in the worst case.
The function gets called in 3 different ways, either:
t is a single number and d is a numeric vector or,
t is a numeric vector and d is a single number or,
t is a numeric vector and d is a numeric vector
Is there a (simple) way to speed up this function?
So far I've tried variations along the lines of the following (to get rid of the ifelse):
F2 <- function(t, d) {
  TempRes <- mapply(function(t, d) integrate(G, lower = 0, upper = t, t, d)$value, t, d)
  TempRes[(d <= 0) | (t <= TMin)] <- 0
  result <- TempRes
  return(result)
}
and
F3 <- function(t, d) {
  result <- rep(0, max(length(t), length(d)))
  test <- ((d > 0) & (t > TMin))
  result[test] <- mapply(function(t, d) integrate(G, lower = 0, upper = t, t, d)$value, t, d)[test]
  return(result)
}
but they take almost exactly the same time.
You are performing a large number of independent integrations. You can speed things up by performing these integrations on separate cores simultaneously (if you have a multicore processor available). The problem is that R performs its calculations in a single-threaded manner by default. However, there are a number of packages available that allow multithreading support. I have recently answered a few similar questions here and here, with some additional info regarding the relevant packages and functions.
Additionally, as #Mike Dunlavey already mentioned, you should avoid performing the integrations for values of t and d that do not match your criteria. (You are currently performing unneeded function evaluations for these values and then overwriting the result with 0 afterwards.)
I have added a possible improvement below. Note that you will have to create a separate file with your function G included in order to evaluate it on the cluster nodes. In the code below it is assumed that this file is called functionG.R
The snippet:
library(doParallel)

F4 <- function(t, d) {
  results <- vector(mode = "numeric", length = max(length(t), length(d)))  # zero vector
  logicalVector <- ((d > 0) & (t > TMin))
  relevantT <- t[logicalVector]
  relevantD <- d[logicalVector]   # when d is a single element, NA values are created
  if (length(relevantT) > 1 | length(relevantD) > 1) {
    if (length(d) == 1)   # d is only one element instead of a vector --> replicate it
      relevantD <- rep(d, length(relevantT))
    if (length(t) == 1)   # t is only one element instead of a vector --> replicate it
      relevantT <- rep(t, length(relevantD))
    cl <- makeCluster(detectCores())
    registerDoParallel(cl)
    clusterEvalQ(cl, eval(parse("functionG.R")))
    integrationResults <- foreach(i = 1:length(relevantT), .combine = "c") %dopar% {
      integrate(G, lower = 0, upper = relevantT[i], relevantT[i], relevantD[i])$value
    }
    stopCluster(cl)
    results[logicalVector] <- integrationResults
  } else if (length(relevantT) == 1) {   # cluster overhead not needed
    results[logicalVector] <- integrate(G, lower = 0, upper = relevantT, relevantT, relevantD)$value
  }
  return(results)
}
My CPU contains 6 physical cores with hyperthreading enabled (x2). These are the results:
> t = -5000:20000
> d = -5000:20000
>
> start = Sys.time()
> testF3 = F3(t,d)
> timeNeededF3 = Sys.time()-start
>
> start = Sys.time()
> testF4 = F4(t,d)
> timeNeededF4 = Sys.time()-start;
> timeNeededF3
Time difference of 3.452825 mins
> timeNeededF4
Time difference of 29.52558 secs
> identical(testF3,testF4)
[1] TRUE
It seems that the cores are constantly in use while running this code. However, you can potentially optimize this code further by presplitting the data more efficiently across the cores and then using an apply-type function on each core.
If more optimization is required you could also take a deeper look at the integrate function. You can potentially play around with the settings and obtain a performance gain by allowing a less strict numerical approximation. As an alternative you could implement your own simple version of adaptive Simpson quadrature and play around with the discrete stepsizes. Most likely you could obtain massive performance increases like this (if you are able/willing to allow more error in the approximation).
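For illustration, here is a minimal sketch of such an adaptive Simpson rule (my addition, not from the original post); the integrand is assumed to be a function of a single numeric argument, and loosening tol trades accuracy for speed.
simpson <- function(f, a, b) {
  m <- (a + b) / 2
  (b - a) / 6 * (f(a) + 4 * f(m) + f(b))
}

adaptive_simpson <- function(f, a, b, tol = 1e-6) {
  m <- (a + b) / 2
  whole  <- simpson(f, a, b)
  halves <- simpson(f, a, m) + simpson(f, m, b)
  if (abs(halves - whole) < 15 * tol) {
    halves + (halves - whole) / 15   # Richardson extrapolation correction
  } else {
    adaptive_simpson(f, a, m, tol / 2) + adaptive_simpson(f, m, b, tol / 2)
  }
}

# Hypothetical usage for a single (t, d) pair, wrapping G as a function of x only:
# adaptive_simpson(function(x) G(x, t = 2, d = 1), 0, 2, tol = 1e-4)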
EDIT:
Updated the code in order for it to work in all scenarios: d and/or t valid/invalid numbers or vectors.
REPLY TO COMMENTS
#mawir: you are correct. ifelse(test, yes, no) returns the corresponding yes values for the rows where test evaluates to TRUE, and the respective no values for the rows where test evaluates to FALSE. However, it WILL first have to evaluate your entire yes expression in order to create the yes vector of length(test). This piece of code demonstrates this:
> t = -5000:5
> d = -5000:5
>
> start = Sys.time()
> testF1 = F(t,d)
> timeNeededF1 = Sys.time()-start
> timeNeededF1
Time difference of 43.31346 secs
>
> start = Sys.time()
> testF4 = F4(t,d)
> timeNeededF4 = Sys.time()-start
> timeNeededF4
Time difference of 2.284134 secs
Only the last 5 values of t and d are relevant in this scenario.
However, inside the original F function the ifelse evaluates the mapply on all d and t values first in order to create the yes vector. This is why the function execution takes so long. Next, it selects the elements for which the conditions are met, or 0 otherwise. The F4 function works around this issue.
Furthermore, you say that you obtain a speedup in the scenario where t and d are non-vectors. However, in this scenario no parallelisation is used. You should normally obtain the maximum speedup in the scenarios where one or both of t and d are vectors.
ANOTHER EDIT, in response to Roland's comment:
You could potentially replace clusterEvalQ(cl,eval(parse("functionG.R"))) with clusterExport(cl,"G") if you prefer not to create a separate function file(s).
As a generality, the place to look is in the innermost loop, and you can speed it up either by making it take less time or by calling it fewer times. You have an inner loop running mapply, but then you extract element [test] from it. Does that mean all the other elements are discarded? If so, why bother spending time to calculate the extra elements?
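In other words, something like the following sketch (my own addition, an assumed variant of F3 with the filtering done before the mapply call rather than after it):
F5 <- function(t, d) {
  n <- max(length(t), length(d))
  t <- rep_len(t, n)   # recycle a scalar argument to full length
  d <- rep_len(d, n)
  result <- numeric(n)
  keep <- (d > 0) & (t > TMin)
  if (any(keep)) {
    result[keep] <- mapply(function(t, d) integrate(G, lower = 0, upper = t, t, d)$value,
                           t[keep], d[keep])
  }
  result
}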
I made some functions in R to match chemical mass spectra (a matrix with two columns holding integer masses and intensities) against libraries of such spectra, based on a custom spectral similarity function and on matching the so-called retention index of the compounds, i.e. the elution time (see here for an example: http://webbook.nist.gov/cgi/cbook.cgi?ID=C630035&Mask=200). For that, the list element "RI" of each record has to be compared to the one in the library, and when the absolute deviation is less than a given tolerance, the best spectral library match should be added to my records. Below is some code that I wrote to do this, but the problem is that it is far too slow for my purposes (I typically have about 1000 sample spectra and 200 000 library spectra). I tried parallelizing it, but that also didn't seem to help much.
Any thoughts on how I could make the code below more efficient, e.g. using more vectorization, inline C code, or some other R tricks? I am aware of the general advice in this respect, but don't quite see how to apply it easily in this case (and I am not so proficient in C yet, unfortunately). Also, how could I add a progress bar when using sfLapply? Would it help to put my spectra in one big (sparse, as there are lots of zeros) matrix first, to avoid the merge step in the spectral similarity function? Or to use additional criteria, such as only considering spectra where the biggest/most intense peak in the query spectrum has the same mass as in the library spectrum (or is contained in the set of, say, the 5 biggest peaks of the library spectrum)? Anyway, any thoughts on how to speed up this task would be much appreciated!
EDIT: one residual query I still have is how I could avoid making a complete copy of the sample records recs in function addbestlibmatches1, and instead change in place only the records for which there is a library match. Also, passing the selection of the library records for which there is a retention index match (in function addbestlibmatch) is probably not efficient. Any thoughts on how I could avoid this?
# EXAMPLE DATA
rec1=list(RI=1100,spectrum=as.matrix(cbind(mz=c(57,43,71,41,85,56,55,70,42,84,98,99,39,69,58,113,156),int=c(999,684,396,281,249,173,122,107,94,73,51,48,47,47,37,33,32))))
randrec=function() list(RI=runif(1,1000,1200),spectrum=as.matrix(cbind(mz=seq(30,600,1),int=round(runif(600-30+1,0,999)))))
# spectral library
libsize=2000 # my real lib has 200 000 recs
lib=lapply(1:libsize,function(i) randrec())
lib=append(list(rec1),lib)
# sample spectra
ssize=100 # I usually have around 1000
s=lapply(1:ssize,function(i) randrec())
s=append(s,list(rec1)) # we add the first library record as the last sample spectrum, so this should match
# SPECTRAL SIMILARITY FUNCTION
SpecSim=function (ms1,ms2,log=F) {
alignment = merge(ms1,ms2,by=1,all=T)
alignment[is.na(alignment)]=0
if (nrow(alignment)!=0) {
alignment[,2]=100*alignment[,2]/max(alignment[,2]) # normalize base peak intensities to 100
alignment[,3]=100*alignment[,3]/max(alignment[,3])
if (log==T) {alignment[,2]=log2(alignment[,2]+1);alignment[,3]=log2(alignment[,3]+1)} # work on log2 intensity scale if requested
u = alignment[,2]; v = alignment[,3]
similarity_score = as.vector((u %*% v) / (sqrt(sum(u^2)) * sqrt(sum(v^2))))
similarity_score[is.na(similarity_score)]=-1
return(similarity_score)} else return(-1) }
# FUNCTION TO CALCULATE SIMILARITY VECTOR OF SPECTRUM WITH LIBRARY SPECTRA
SpecSimLib=function(spec,lib,log=F) {
sapply(1:length(lib), function(i) SpecSim(spec,lib[[i]]$spectrum,log=log)) }
# FUNCTION TO ADD BEST MATCH OF SAMPLE RECORD rec IN SPECTRAL LIBRARY lib TO ORIGINAL RECORD
# we only compare spectra when list element RI in the sample record is within tol of that in the library
# when there is a spectral match > specsimcut within a RI devation less than tol,
# we add the record nrs in the library with the best spectral matches, the spectral similarity and the RI deviation to recs
addbestlibmatch=function(rec,lib,xvar="RI",tol=10,log=F,specsimcut=0.8) {
nohit=list(record=-1,specmatch=NA,RIdev=NA)
selected=abs(sapply(lib, "[[", xvar)-rec[[xvar]])<tol
if (sum(selected)!=0) {
specsims=SpecSimLib(rec$spectrum,lib[selected],log) # HOW CAN I AVOID PASSING THE RIGHT LIBRARY SUBSET EACH TIME?
maxspecsim=max(specsims)
if (maxspecsim>specsimcut) {besthsel=which(specsims==maxspecsim)[[1]] # nr of best hit among selected elements, in case of ties we just take the 1st hit
idbesth=which(selected)[[besthsel]] # record nr of best hit in library lib
return(modifyList(rec,list(record=idbesth,specsim=specsims[[besthsel]],RIdev=rec[[xvar]]-lib[[idbesth]][[xvar]])))}
else {return(rec)} } else {return(rec)}
}
# FUNCTION TO ADD BEST LIBRARY MATCHES TO RECORDS RECS
library(pbapply)
addbestlibmatches1=function(recs,lib,xvar="RI",tol=10,log=F,specsimcut=0.8) {
pblapply(1:length(recs), function(i) addbestlibmatch(recs[[i]],lib,xvar,tol,log,specsimcut))
}
# PARALLELIZED VERSION
library(snowfall)
addbestlibmatches2=function(recs,lib,xvar="RI",tol=10,log=F,specsimcut=0.8,cores=4) {
sfInit(parallel=TRUE,cpus=cores,type="SOCK")
sfExportAll()
sfLapply(1:length(recs), function(i) addbestlibmatch(recs[[i]],lib,xvar,tol,log,specsimcut))
sfStop()
}
# TEST TIMINGS
system.time(addbestlibmatches1(s,lib))
#|++++++++++++++++++++++++++++++++++++++++++++++++++| 100%
#user system elapsed
#83.60 0.06 83.82
system.time(addbestlibmatches2(s,lib))
#user system elapsed - a bit better, but not much
#2.59 0.74 42.37
Without looking at all of your code in detail, I think there is room for improvement in the SpecSim function without going all C++ yet. You're using merge, which coerces your matrices to data.frames. That is always going to be bad for performance. Most of your code's time is probably spent in merge() and subsetting; data.frame subsetting is slow, while matrix or vector subsetting is fast.
SpecSim2 <- function(ms1, ms2, log = F) {
  i <- unique(c(ms1[, 1], ms2[, 1]))
  y <- x <- numeric(length(i))
  x[match(ms1[, 1], i)] <- ms1[, 2]
  y[match(ms2[, 1], i)] <- ms2[, 2]
  x <- 100 * x / max(x)
  y <- 100 * y / max(y)
  if (log) {
    x <- log2(x + 1)
    y <- log2(y + 1)
  }
  similarity.score <- x %*% y / (sqrt(sum(x^2)) * sqrt(sum(y^2)))
  if (is.na(similarity.score)) {
    return(-1)
  } else {
    return(similarity.score)
  }
}
Here are some timings for the rewrite and the original, respectively:
> system.time(addbestlibmatches1(s,lib))
|++++++++++++++++++++++++++++++++++++++++++++++++++| 100%
user system elapsed
4.16 0.00 4.17
> system.time(addbestlibmatches1(s,lib))
|++++++++++++++++++++++++++++++++++++++++++++++++++| 100%
user system elapsed
34.25 0.02 34.34
That may not get you all the speed you need, but it's better than an 8x improvement...
As to how to deal with the addbestlibmatches(), I think you need to rethink the problem as a matrix problem. Rather than using lists to hold your library, use a vector for your RI values for both the library and the samples. Then you can do your initial screen like this:
selected <- abs(outer(sRI, libRI, FUN = "-")) < 10
If your library is one big matrix (masses x spectra), then you can subset the library for selected spectra and calculate the distance between a sample and the entire portion of the library that was selected in one pass like this:
SpecSimMat <- function(x, lib, log = F) {
  stopifnot(require(matrixStats))
  x <- 100 * x / max(x)
  lib <- 100 * sweep(lib, 2, colMaxs(lib), "/")   # scale each library spectrum to a max of 100
  x %*% lib / (sqrt(sum(x^2)) * sqrt(colSums(lib^2)))
}
where x is a sample and lib is a matrix of the selected spectra (masses x spectra). In this way, you're subsetting a matrix (fast) and then performing a series of matrix operations (fast).
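A hypothetical usage sketch (my addition; sRI, libRI, specvec and libmat are assumed objects: the sample/library retention-index vectors, one sample spectrum, and the library spectra on a common mass grid):
selected <- abs(outer(sRI, libRI, FUN = "-")) < 10   # samples x library candidate matrix
cand <- which(selected[1, ])                         # library columns to score for sample 1
sims <- SpecSimMat(specvec, libmat[, cand, drop = FALSE])
best <- cand[which.max(sims)]                        # index of the best-matching library record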
I'm analyzing large sets of data using the following script:
M <- c_alignment
c_check <- function(x) {
  if (x == c_1) {
    1
  } else {
    0
  }
}
both_c_check <- function(x) {
  if (x[res_1] == c_1 && x[res_2] == c_1) {
    1
  } else {
    0
  }
}
variance_function <- function(x, y) {
  sqrt(x * (1 - x)) * sqrt(y * (1 - y))
}
frames_total <- nrow(M)
cols <- ncol(M)
c_vector <- apply(M, 2, max)
freq_vector <- matrix(nrow = sum(c_vector))
co_freq_matrix <- matrix(nrow = sum(c_vector), ncol = sum(c_vector))
insertion <- 0
res_1_insertion <- 0
for (res_1 in 1:cols) {
  for (c_1 in 1:c_vector[res_1]) {
    res_1_insertion <- res_1_insertion + 1
    insertion <- insertion + 1
    res_1_subset <- sapply(M[, res_1], c_check)
    freq_vector[insertion] <- sum(res_1_subset) / frames_total
    res_2_insertion <- 0
    for (res_2 in 1:cols) {
      if (is.na(co_freq_matrix[res_1_insertion, res_2_insertion + 1])) {
        for (c_2 in 1:max(c_vector[res_2])) {
          res_2_insertion <- res_2_insertion + 1
          both_res_subset <- apply(M, 1, both_c_check)
          co_freq_matrix[res_1_insertion, res_2_insertion] <- sum(both_res_subset) / frames_total
          co_freq_matrix[res_2_insertion, res_1_insertion] <- sum(both_res_subset) / frames_total
        }
      }
    }
  }
}
covariance_matrix <- (co_freq_matrix - crossprod(t(freq_vector)))
variance_matrix <- matrix(outer(freq_vector, freq_vector, variance_function), ncol = length(freq_vector))
correlation_coefficient_matrix <- covariance_matrix/variance_matrix
A model input would be something like this:
1 2 1 4 3
1 3 4 2 1
2 3 3 3 1
1 1 2 1 2
2 3 4 4 2
What I'm calculating is the binomial covariance for each state found in M[,i] with each state found in M[,j]. Each row is the state found for that trial, and I want to see how the states of the columns co-vary.
Clarification: I'm finding the covariance of two multinomial distributions, but I'm doing it through binomial comparisons.
The input is a 4200 x 510 matrix, and the c value for each column is about 15 on average. I know for loops are terribly slow in R, but I'm not sure how I can use the apply function here. If anyone has a suggestion as to properly using apply here, I'd really appreciate it. Right now the script takes several hours. Thanks!
I thought of writing a comment, but I have too much to say.
First of all, if you think apply goes faster, look at "Is R's apply family more than syntactic sugar?". It might be, but it's far from guaranteed.
Next, please don't grow matrices as you move through your code; that slows your code down incredibly. Preallocate the matrix and fill it up; that can increase your code's speed more than tenfold. You're growing different vectors and matrices throughout your code, which is insane (forgive me the strong language).
Then, look at the help page of ?subset and the warning given there:
This is a convenience function intended for use interactively. For
programming it is better to use the standard subsetting functions like
[, and in particular the non-standard evaluation of argument subset
can have unanticipated consequences.
Always. Use. Indices.
Further, you recalculate the same values over and over again. fre_res_2, for example, is calculated for every res_2 and state_2 as many times as you have combinations of res_1 and state_1. That's just a waste of resources. Pull out of your loops whatever you don't need to recalculate, and save it in matrices you can simply access again.
Heck, now that I'm at it: please use vectorized functions. Think again and see what you can drag out of the loops. This is what I see as the core of your calculation:
cov <- (freq_both - (freq_res_1)*(freq_res_2)) /
(sqrt(freq_res_1*(1-freq_res_1))*sqrt(freq_res_2*(1-freq_res_2)))
As I see it, you can construct matrices freq_both, freq_res_1 and freq_res_2 and use them as input for that one line. That will give you the whole covariance matrix (don't call it cov, cov is a function). Exit loops. Enter fast code.
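As a hedged sketch of what that could look like (my addition; freq_both is assumed to be the precomputed co-frequency matrix and freq_vec the vector of marginal frequencies):
k <- length(freq_vec)
freq_res_1 <- matrix(freq_vec, nrow = k, ncol = k)   # element [i, j] equals freq_vec[i]
freq_res_2 <- t(freq_res_1)                          # element [i, j] equals freq_vec[j]
covariance  <- freq_both - freq_res_1 * freq_res_2
correlation <- covariance /
  (sqrt(freq_res_1 * (1 - freq_res_1)) * sqrt(freq_res_2 * (1 - freq_res_2)))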
Given the fact I have no clue what's in c_alignment, I'm not going to rewrite your code for you, but you definitely should get rid of the C way of thinking and start thinking R.
Let this be a start: The R Inferno
It's not really the 4-way nested loops but the way your code grows memory on each iteration; that's happening 4 times, where I've placed # ** on the cbind and rbind lines. Standard advice in R (and MATLAB and Python) in situations like this is to allocate in advance and then fill in. That's what the apply functions do: they allocate a list as long as the known number of results, assign each result to its slot, and then merge all the results together at the end. In your case you could just allocate the correct-size matrix in advance and assign into it at those 4 points (roughly speaking). That should be as fast as the apply family, and you might find it easier to code.
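To make the preallocate-and-fill pattern concrete, here is a small generic sketch (my addition, not tied to the covariance code above):
n_results <- 1000
out <- matrix(NA_real_, nrow = n_results, ncol = 4)  # allocate the full result once
for (i in seq_len(n_results)) {
  out[i, ] <- rnorm(4)                               # fill row i in place; no rbind/cbind in the loop
}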