Generating all gapped k-mer sequences from string - r

I am interested in generating all gapped k-mers from a sequence, with a gapped k-mer defined as a subsequence of length k separated by up to m positions from another subsequence of length k. So, for example, for the sequence CAGAT, "the gappy pair kernel with k = 1 and m = 2 finds pairs of monomers with zero to two irrelevant positions in between, i.e. it finds the features CA, C.G, C..A, AG, A.A, A..T, GA, G.T and AT".
replacefxn <- function(x, k, m) {
  substr(x, k + 1, k + m) <- paste(rep("X", m), collapse = "")
  return(x)
}

gappedkmersfxn <- function(x, k, m) {
  n <- (2 * k + m)
  subseq <- substring(x, seq(from = 1, to = (nchar(x) - n + 1)),
                      seq(from = n, to = nchar(x)))
  return(sapply(subseq, replacefxn, k, m))
}

allgappedkmersfxn <- function(x, k, m) {
  kmers <- list()
  for (i in 0:m) {
    kmers[[i]] <- gappedkmersfxn(x, k, i)
  }
  kmers <- unlist(kmers)
  return(kmers)
}
allgappedkmersfxn is how I have it implemented currently, but it does not add the features with no gap (m is the maximum gap, and the gap should range from 0 to m), so it does not give me all of the desired features (see the "CAGAT" example). In addition, it is very slow and inefficient when processing millions of sequences at a time. It is also coded poorly, but with limited experience in R I am not sure how to improve it.
What would the most efficient way of doing this be while making sure that all expected subsequences (ex: CAGAT -> CA, C.G, C..A, AG, A.A, A..T, GA, G.T and AT for k=1, m=2) are included in the output?
Thanks!

You may want to look at the implementation of a gappy pair kernel in the Bioconductor kebabs package.
Installation:
## try http:// if https:// URLs are not supported
source("https://bioconductor.org/biocLite.R")
biocLite("kebabs")
To generate a k = 1, m = 2 kernel:
library(kebabs)
gappyK1M2 <- gappyPairKernel(k = 1, m = 2)
To generate an explicit representation from a DNA sequence:
dnaseqs <- DNAStringSet("CAGAT")
dnaseqsrep <- getExRep(dnaseqs, gappyK1M2)
The k-mers are stored in the Dimnames slot:
dnaseqsrep@Dimnames[[2]]
[1] "A.A" "AG" "AT" "A..T" "CA" "C..A" "C.G" "GA" "G.T"

Related

Workaround for error when creating a correlation matrix of a huge matrix? [duplicate]

I have a matrix of size 55422 x 888, and so the rcorr function is producing this error:
library(Hmisc)  # rcorr() comes from the Hmisc package
M1 <- matrix(rnorm(36), nrow = 55422, ncol = 888)
cor <- rcorr(t(M1), type = "pearson")
Error in double(p * p) : vector size cannot be NA
In addition: Warning message:
In p * p : NAs produced by integer overflow
Is there anything I can do to solve this?
This isn't a complete/working solution, but will give you an idea of some of the issues.
Your correlation matrix will contain n*(n-1)/2 = 1535771331 unique elements. If each correlation takes one microsecond to compute, computing the correlation matrix will take (n^2-n)/2/(1e6*3600) = 0.42 hours and require (n^2-n)/2*8/(2^30) = 11.4 Gb of storage. These requirements are not impossible if you have a lot of RAM and time ...
In fact it's a little bit worse than this, since rcorr returns its results as a symmetric matrix (i.e., not taking advantage of the symmetry), and returns the n and P matrices as well, so the storage requirement will be approximately 5 times as great as stated above (double for the full matrix, x 2.5 because we have two double-precision and one integer matrix).
Getting to your specific question, the section on long vectors in the R internals manual discusses the maximum sizes of objects in R. The 'standard' limitation is that the total number of elements of the matrix should be less than 2^31 ((n^2-n)/2/(2^31-1) = 0.72), but the redundancy in the matrix gets you in trouble (as would the storage of the correlation, p-values, and the sample sizes).
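For reference, the back-of-envelope numbers above can be reproduced directly in R (my addition, just restating the formulas already quoted):
n <- 55422
(n^2 - n) / 2                 # 1,535,771,331 unique correlations
(n^2 - n) / 2 / (1e6 * 3600)  # hours, assuming ~1 microsecond per correlation
(n^2 - n) / 2 * 8 / 2^30      # GB needed to store the unique values as doubles
(n^2 - n) / 2 / (2^31 - 1)    # fraction of the 2^31 - 1 vector-length limit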
If you still want to go ahead, here is an implementation by A.N. Spiess, copied from here, that breaks the problem into blocks and stores the results in a disk-backed array (i.e., not in RAM). This won't get you the p-values (and it's still not clear what you're going to do with all those values ...), but it works at least up to 40,000 columns (takes about a minute).
However, it seems to crap out on your actual problem size (888 x 55422) with a too-large length. I'd have to look more closely and see if there is a limitation here we can get around ... It seems that we are actually still limited by the matrix dimensions (the maximum dimension of a single matrix is sqrt(2^31-1), approximately 46341). With more work, we could still do the block-diagonal thing and split the problem into several components ...
set.seed(101)
nc <- 55422
nr <- 888
d <- matrix(rnorm(nr * nc), ncol = nc)
t1 <- system.time(b1 <- bigcor(d))  # bigcor() is defined below
bigcor <- function(x, y = NULL, fun = c("cor", "cov"), size = 2000,
                   verbose = TRUE, ...)
{
  if (!require("ff")) stop("please install the ff package")
  fun <- match.arg(fun)
  if (fun == "cor") FUN <- cor else FUN <- cov
  if (fun == "cor") STR <- "Correlation" else STR <- "Covariance"
  if (!is.null(y) & NROW(x) != NROW(y)) stop("'x' and 'y' must have compatible dimensions!")

  NCOL <- ncol(x)
  if (!is.null(y)) YCOL <- NCOL(y)

  ## calculate remainder, largest 'size'-divisible integer and number of blocks
  REST <- NCOL %% size
  LARGE <- NCOL - REST
  NBLOCKS <- NCOL %/% size

  ## preallocate square matrix of dimension ncol(x) in 'ff' double format
  if (is.null(y)) {
    resMAT <- ff(vmode = "double", dim = c(NCOL, NCOL))
  } else {
    resMAT <- ff(vmode = "double", dim = c(NCOL, YCOL))
  }

  ## split column numbers into 'NBLOCKS' groups + remaining block
  GROUP <- rep(1:NBLOCKS, each = size)
  if (REST > 0) GROUP <- c(GROUP, rep(NBLOCKS + 1, REST))
  SPLIT <- split(1:NCOL, GROUP)

  ## create all unique combinations of blocks
  COMBS <- expand.grid(1:length(SPLIT), 1:length(SPLIT))
  COMBS <- t(apply(COMBS, 1, sort))
  COMBS <- unique(COMBS)
  if (!is.null(y)) COMBS <- cbind(1:length(SPLIT), rep(1, length(SPLIT)))

  ## initiate time counter
  timeINIT <- proc.time()

  ## iterate through each block combination, calculate correlation matrix
  ## between blocks and store them in the preallocated matrix on both
  ## symmetric sides of the diagonal
  for (i in 1:nrow(COMBS)) {
    COMB <- COMBS[i, ]
    G1 <- SPLIT[[COMB[1]]]
    G2 <- SPLIT[[COMB[2]]]

    ## if y = NULL
    if (is.null(y)) {
      if (verbose) cat(sprintf("#%d: %s of Block %s and Block %s (%s x %s) ... ",
                               i, STR, COMB[1], COMB[2], length(G1), length(G2)))
      RES <- FUN(x[, G1], x[, G2], ...)
      resMAT[G1, G2] <- RES
      resMAT[G2, G1] <- t(RES)
    } else {
      ## if y = smaller matrix or vector
      if (verbose) cat(sprintf("#%d: %s of Block %s and 'y' (%s x %s) ... ",
                               i, STR, COMB[1], length(G1), YCOL))
      RES <- FUN(x[, G1], y, ...)
      resMAT[G1, ] <- RES
    }

    if (verbose) {
      timeNOW <- proc.time() - timeINIT
      cat(timeNOW[3], "s\n")
    }
    gc()
  }
  return(resMAT)
}
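As a quick sanity check of bigcor() on a size where base cor() also fits in memory (my addition, not part of the original answer; it assumes the ff package is installed):
library(ff)
set.seed(101)
d_small <- matrix(rnorm(200 * 500), nrow = 200, ncol = 500)
b_small <- bigcor(d_small, size = 150)   # 3 blocks of 150 columns plus a 50-column remainder
all.equal(b_small[1:500, 1:500], cor(d_small), check.attributes = FALSE)
# TRUE (up to numerical tolerance)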

How to create matrix of all 2^n binary sequences of length n using recursion in R?

I know I can use expand.grid for this, but I am trying to learn actual programming. My goal is to take what I have below and use a recursion to get all 2^n binary sequences of length n.
I can do this for n = 1, but I don't understand how I would use the same function in a recursive way to get the answer for higher dimensions.
Here is for n = 1:
binseq <- function(n){
  binmat <- matrix(nrow = 2^n, ncol = n)
  r <- 0  # row counter
  for (i in 0:1) {
    r <- r + 1
    binmat[r,] <- i
  }
  return(binmat)
}
I know I probably have to use cbind in the return statement. My intuition says the return statement should be something like cbind(binseq(n-1), binseq(n)). But, honestly, I'm completely lost at this point.
The desired output should basically recursively produce this for n = 3:
binmat <- matrix(nrow = 8, ncol = 3)
r <- 0  # current row of binmat
for (i in 0:1) {
  for (j in 0:1) {
    for (k in 0:1) {
      r <- r + 1
      binmat[r,] <- c(i, j, k)
    }
  }
}
binmat
It should just be a matrix as binmat is being filled recursively.
I quickly wrote this function to generate all N^K sequences of length K over a given set of N characters. Hope it will be useful.
gen_perm <- function(str = c(""), lst = 5, levels = c("0", "1", "2")) {
  if (nchar(str) == lst) {
    cat(str, "\n")
    return(invisible(NULL))
  }
  for (i in levels) {
    gen_perm(str = paste0(str, i), lst = lst, levels = levels)
  }
}

# sample call
gen_perm(lst = 3, levels = c("x", "T", "a"))
I will return to your problem when I get more time.
UPDATE
I modified the code above to work for your problem. Note that the matrix being populated lives in the global environment, and the function uses the tmp variable to pass rows to the global environment. This was the easiest way for me to solve the problem; perhaps there are other ways.
levels <- c(0, 1)
nc <- 3
m <- matrix(numeric(0), ncol = nc)

gen_perm <- function(row = numeric(), lst = nc, levels = levels) {
  if (length(row) == lst) {
    assign("tmp", row, .GlobalEnv)
    with(.GlobalEnv, {m <- rbind(m, tmp); rownames(m) <- NULL})
    return(invisible(NULL))
  }
  for (i in levels) {
    gen_perm(row = c(row, i), lst = lst, levels = levels)
  }
}

gen_perm(lst = nc, levels = levels)
UPDATE 2
To get the expected output you provided, run
m <- matrix(numeric(0), ncol = 3)
gen_perm(lst = 3, levels = c(0,1))
m
levels specifies the set of values used to build the sequences (binary in our case), m is an empty matrix to fill up, gen_perm generates rows and appends them to the matrix m, and lst is the length of each sequence (it matches the number of columns in the matrix).
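An alternative recursive sketch (my addition, not part of the answer above) that returns the matrix directly instead of writing into the global environment, closer to the cbind idea in the question:
binseq <- function(n) {
  if (n == 1) return(matrix(0:1, nrow = 2, ncol = 1))
  smaller <- binseq(n - 1)                     # all 2^(n - 1) sequences of length n - 1
  rbind(cbind(0, smaller), cbind(1, smaller))  # prefix each once with 0 and once with 1
}
binseq(3)  # the same 8 x 3 matrix as the nested-loop example in the question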

Creating a summation formula in R

I am a beginner in R and I am given the following problem to code:
Let

X_j = (1/L) * sum_{t = D}^{D + L - 1} x_j^t,   for j = 1, ..., J,

and a second, similar summation be the summations I am trying to recreate in R.
Right now this is my code for the first summation (code snippet):
z <- 1:J
L <- 1000
D <- 0
for (k in z) {
  for (j in D:D+L-1) {
    X[k] = 1/L * sum(X[j])
  }
}
I had no idea how to create LaTeX formulas in the question, so the summations above are written out in plain text; they are what I am trying to recreate in R.
My question is, am I on the right path? I am not sure how to use the for loop to create the summation.
J <- 5
L <- 100
D <- 1  # in R we start to count at 1 (one) (thanks to Darren)
x <- matrix(1:(L*J), nrow = length(D:(D+L-1)), ncol = J)

funXj_ <- function(j, D, L) sum(x[D:(D+L-1), j], na.rm = TRUE)
X_ <- sapply(1:J, funXj_, D = D, L = L)
# 5050 15050 25050 35050 45050
I believe your x_j^t is some sort of 2-dimensional array, so I took a matrix as an example.
We're of course free to alter the bounds, e.g. J = 4, D = 2, L = 80:
sapply(1:4, funXj_, D = 2, L = 80)
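If the 1/L scaling from the question's code is also wanted, a small variant (my addition, not part of the answer above; it reuses the x, J, D and L defined there):
funXbar <- function(j, D, L) sum(x[D:(D + L - 1), j], na.rm = TRUE) / L
sapply(1:J, funXbar, D = D, L = L)
# 50.5 150.5 250.5 350.5 450.5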

Sum matrix recursively from previous operations

I have the following matrix and vector:
a <- matrix(c(1, 4, 7,
              2, 5, 8,
              3, 6, 9), nrow = 3)
b <- c(1, 1, 1)
How do I sum recursively over each row of the matrix inside a function until I obtain a desired result, using the last result to calculate the next operation, as shown:
b <- b + a[1,]
b <- b + a[2,]
b <- b + a[3,]
b <- b + a[1,]
b <- b + a[2,]
sum(b) > 100  # keep adding rows recursively until sum(b) > 100
This operation looks similar to this answer: Multiply recursiverly in r. However, here each operation uses the result of the previous ones.
Here's a recursive function to do what you're after,
# Sample Data
a <- matrix(c(1, 4, 7,
              2, 5, 8,
              3, 6, 9), nrow = 3)
b <- c(1, 1, 1)
We create a function that calls itself, with a row index that increments modulo the number of rows:
recAdd <- function(b, a, start = 1, size = NROW(a)) {
  if (sum(b) > 100) return(b)
  return(recAdd(b + a[start, ], a, start = start %% size + 1, size))
}
> recAdd(b,a)
[1] 30 38 46
EDIT: Alternatively, here's a way with no recursion at all, which is much faster when the target is large relative to the sum of the matrix (but slower on data of this size). Basically we get to take advantage of Euclidean (integer) division:
nonrecAdd <- function(b, a, target = 100) {
  Remaining <- target - sum(b)
  perloop <- sum(a)
  nloops <- Remaining %/% perloop
  Remaining <- Remaining %% perloop
  if (Remaining > 0) {
    cumulativeRowsums <- cumsum(rowSums(a))
    finalindex <- which((Remaining %/% cumulativeRowsums) == 0)[1]
    b + colSums(a) * nloops + colSums(a[1:finalindex, , drop = FALSE])
  } else {
    b + colSums(a) * nloops
  }
}
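A quick check (my addition) that the two versions agree on the sample data:
nonrecAdd(b, a)
# [1] 30 38 46   (the same result as recAdd(b, a))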

Interpreting [R] Greatest Common Divisor (GCD) (and LCM) Function in {numbers} package

I don't have a background in programming (except for wrestling with R to get things done), and I'm trying to verbalize what the greatest common divisor function in the R {numbers} package does at each step. I need help understanding the flow of steps within the function:
function (n, m)
{
  stopifnot(is.numeric(n), is.numeric(m))
  if (length(n) != 1 || floor(n) != ceiling(n) || length(m) != 1 || floor(m) != ceiling(m))
    stop("Arguments 'n', 'm' must be integer scalars.")
  if (n == 0 && m == 0)
    return(0)
  n <- abs(n)
  m <- abs(m)
  if (m > n) {
    t <- n
    n <- m
    m <- t
  }
  while (m > 0) {
    t <- n
    n <- m
    m <- t %% m
  }
  return(n)
}
<environment: namespace:numbers>
For instance, in the if (m > n) {} part, n becomes t and ultimately it becomes m? I'm afraid to ask because it may be painfully obvious, but I don't know what is going on. The same applies, I guess, to the rest of the function, with %% being, perhaps, modulo.
What it says is:
Stop if either m or n are not numeric, more than one number, or have decimals, and return the message, "Arguments 'n', 'm' must be integer scalars."
If they both are zero, return zero.
Using absolute values from now on.
Make sure that n > m, because of the algorithm we'll end up applying in the next step. If this is not the case, flip them: first place n in a temporary variable t, then assign m to n, so that the larger number now comes first in the (n, m) pair. At this point both n and m hold the original m. Finish up by retrieving the value from the temporary variable and assigning it to m.
Now they apply the modified Euclidean algorithm to find the GCD - a more efficient version of the algorithm that shortcuts the multiple subtractions, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two.
The smaller number at the beginning of the algorithm will end up being the larger after the first iteration, therefore we'll assign it to n to get ready for the second iteration. To do so, though, we need to get the current n out of the way by assigning it to the temporary variable t. After that we get the modulo resulting from dividing the original larger number (n), which now is stored in t, by the smaller number m. The result will replace the number stored in m.
As long as there is a remainder (modulo) the process will go on, this time with the initial smaller number, m playing the role of the big guy. When there is no remainder, the smaller of the numbers in that particular iteration is returned.
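To make the loop concrete, here is a short trace (added for illustration) of the while loop for GCD(48, 18), using the package's own GCD() function:
library(numbers)
GCD(48, 18)
# [1] 6
#
# Trace of the while (m > 0) loop, starting from n = 48, m = 18:
#   t = 48, n = 18, m = 48 %% 18 = 12
#   t = 18, n = 12, m = 18 %% 12 = 6
#   t = 12, n = 6,  m = 12 %% 6  = 0   -> loop stops and n = 6 is returned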
ADDENDUM:
Now that I know how to read this function, I see that it is limited to two input numbers. So I entertained myself putting together a function that works with three integers:
require(numbers)

GCF <- function(x, y, z) {
  tab.x <- tabulate(primeFactors(x))
  tab.y <- tabulate(primeFactors(y))
  tab.z <- tabulate(primeFactors(z))
  max.len <- max(length(tab.x), length(tab.y), length(tab.z))
  tab_x <- c(tab.x, rep(0, max.len - length(tab.x)))
  tab_y <- c(tab.y, rep(0, max.len - length(tab.y)))
  tab_z <- c(tab.z, rep(0, max.len - length(tab.z)))
  GCD_elem <- numeric()
  for (i in 1:max.len) {
    # raise i to the smallest exponent it has in all three factorizations
    # (i^min(...), rather than min(...) * i, so repeated prime factors are handled)
    GCD_elem[i] <- i^min(tab_x[i], tab_y[i], tab_z[i])
  }
  GrCD <- prod(GCD_elem)
  print(GrCD)
}
Also for the LCM:
LCM <- function(x, y, z) {
  tab.x <- tabulate(primeFactors(x))
  tab.y <- tabulate(primeFactors(y))
  tab.z <- tabulate(primeFactors(z))
  max.len <- max(length(tab.x), length(tab.y), length(tab.z))
  tab_x <- c(tab.x, rep(0, max.len - length(tab.x)))
  tab_y <- c(tab.y, rep(0, max.len - length(tab.y)))
  tab_z <- c(tab.z, rep(0, max.len - length(tab.z)))
  LCM_elem <- numeric()
  for (i in 1:max.len) {
    LCM_elem[i] <- i^(max(tab_x[i], tab_y[i], tab_z[i]))
  }
  LCM_elem <- LCM_elem[!LCM_elem == 0]
  LCM <- prod(LCM_elem)
  print(LCM)
}
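A quick sanity check of the two helpers (my addition; GCF here relies on the exponent form noted in the comment inside the function):
GCF(12, 18, 24)   # prints 6  = 2^1 * 3^1
LCM(12, 18, 24)   # prints 72 = 2^3 * 3^2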
