How to substitute multiple words with spaces in R?

Here is an example:
drugs<-c("Lapatinib-Ditosylate", "Caffeic-Acid-Phenethyl-Ester", "Pazopanib-HCl", "D-Pantethine")
ads<-"These are recently new released drugs Lapatinib Ditosylate, Pazopanib HCl, and Caffeic Acid Phenethyl Ester"
What I want is to correct the drug names in ads using the names in drugs, so that the desired output would be:
"These are recently new released drugs Lapatinib-Ditosylate, Pazopanib-HCl, and Caffeic-Acid-Phenethyl-Ester"

If you create a vector of the words to be replaced, you can loop over that vector and the vector of words that replace them (drugs), replacing all instances of one element in each iteration of the loop.
to_repl <- gsub('-', ' ', drugs)
for(i in seq_along(drugs))
  ads <- gsub(to_repl[i], drugs[i], ads)
ads
# "These are recently new released drugs Lapatinib-Ditosylate, Pazopanib-HCl, and Caffeic-Acid-Phenethyl-Ester"
Contrary to popular belief, for loops in R are no slower than lapply:
f_lapply <- function(ads){
  to_repl <- gsub('-', ' ', drugs)
  invisible(lapply(seq_along(to_repl), function(i) {
    ads <<- gsub(to_repl[i], drugs[i], ads)  # <<- assigns into the enclosing function's ads
  }))
  ads
}
f_loop <- function(ads){
  to_repl <- gsub('-', ' ', drugs)
  for(i in seq_along(to_repl))
    ads <- gsub(to_repl[i], drugs[i], ads)
  ads
}
f_loop(ads) == f_lapply(ads)
# [1] TRUE
microbenchmark::microbenchmark(f_loop(ads), f_lapply(ads), times = 1e4)
# Unit: microseconds
# expr min lq mean median uq max neval
# f_loop(ads) 59.488 95.180 118.0793 107.487 120.205 7426.866 10000
# f_lapply(ads) 69.333 114.462 147.9732 130.872 152.205 27283.670 10000
Or, using more general examples:
loop_over <- 1:1e5
microbenchmark::microbenchmark(
  for_loop = {for(i in loop_over) 1},
  lapply = {lapply(loop_over, function(x) 1)}
)
# Unit: milliseconds
# expr min lq mean median uq max neval
# for_loop 4.66174 5.865842 7.725975 6.354867 7.449429 35.26807 100
# lapply 94.09223 114.378778 125.149863 124.665128 134.217326 170.16889 100
loop_over <- 1:1e5
microbenchmark::microbenchmark(
  for_loop = {y <- numeric(1e5); for(i in seq_along(loop_over)) y[i] <- loop_over[i]},
  lapply = {lapply(loop_over, function(x) x)}
)
# Unit: milliseconds
# expr min lq mean median uq max neval
# for_loop 11.00184 11.49455 15.24015 12.10461 15.26050 134.139 100
# lapply 71.41820 81.14660 93.64569 87.05162 98.59295 357.219 100
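As a side note (not from either answer): the loop-vs-lapply gap shrinks as the per-element work grows, because the per-call overhead gets amortized. This is easy to probe by making the loop body heavier, for example:
microbenchmark::microbenchmark(
  for_loop = {y <- numeric(1e4); for(i in 1:1e4) y[i] <- sin(i) + sqrt(i)},
  lapply   = unlist(lapply(1:1e4, function(x) sin(x) + sqrt(x))),
  times = 100
)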

This can also be done using lapply(), which will be faster than a for loop. Modifying @IceCreamToucan's answer, the lapply version is as follows:
to_repl <- gsub('-', ' ', drugs)
invisible(lapply(seq_along(to_repl), function(i) {
  ads <<- gsub(to_repl[i], drugs[i], ads)
}))
# [1] "These are recently new released drugs Lapatinib-Ditosylate, Pazopanib-HCl, and Caffeic-Acid-Phenethyl-Ester"
Microbenchmark:
Unit: microseconds
expr min lq mean median uq max neval
lapply 80.514 87.4935 110.1103 93.304 96.1995 1902.861 100
for.loop 2285.164 2318.5665 2463.1554 2338.216 2377.4120 7510.763 100
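As a side note (not part of either answer above): if the stringi package is available, the whole set of replacements can be done in one vectorized call, with no explicit loop on the R side. A minimal sketch:
library(stringi)
to_repl <- gsub('-', ' ', drugs)
# vectorize_all = FALSE applies every pattern/replacement pair to each string
stri_replace_all_fixed(ads, pattern = to_repl, replacement = drugs,
                       vectorize_all = FALSE)
# "These are recently new released drugs Lapatinib-Ditosylate, Pazopanib-HCl, and Caffeic-Acid-Phenethyl-Ester"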

Related

R - fast way to find all vector elements that contain all search terms

The same question was answered here: R - Find all vector elements that contain all strings / patterns - str_detect grep. But the suggested solution is taking too long.
I have 73,360 observations with sentences. I want a TRUE return for matches that contain ALL search strings.
sentences <- c("blue green red",
               "blue green yellow",
               "green red yellow ")
search_terms <- c("blue","red")
pattern <- paste0("(?=.*", search_terms,")", collapse="")
grepl(pattern, sentences, perl = TRUE)
Output:
[1] TRUE FALSE FALSE
This gives the right result, but it takes a very, very long time. Is there a faster way? I tried str_detect and got the same delay.
BTW the "sentences" contain special characters like [],.- but no non-ASCII characters like ñ.
UPDATE: below are my benchmark results using the suggested methods, thanks to @onyambu's input.
Unit: milliseconds
expr min lq mean median uq max neval
OP_solution() 7033.7550 7152.0689 7277.8248 7251.8419 7391.8664 7690.964 100
map_str_detect() 2239.8715 2292.1271 2357.7432 2348.9975 2397.1758 2774.349 100
unlist_lapply_fixed() 308.1492 331.9948 345.6262 339.9935 348.9907 586.169 100
Reduce_lapply winnnnssss! Thanks @onyambu
Unit: milliseconds
expr min lq mean median uq max neval
Reduce_lapply() 49.02941 53.61291 55.96418 55.31494 56.76109 80.64735 100
unlist_lapply_fixed() 318.25518 335.58883 362.03831 346.71509 357.97142 566.95738 100
EDIT:
Another option is to loop over the search terms instead of looping through the sentences, using grepl with fixed = TRUE:
Reduce("&", lapply(search_terms, grepl, sentences, fixed = TRUE))
[1] TRUE FALSE FALSE
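Spelled out for the two search terms in this example, that Reduce call is simply the element-wise AND of one fixed-string grepl per term:
# equivalent expansion of the Reduce() one-liner for search_terms <- c("blue", "red")
grepl("blue", sentences, fixed = TRUE) & grepl("red", sentences, fixed = TRUE)
# [1]  TRUE FALSE FALSE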
benchmark
Unit: milliseconds
expr min lq mean median uq max neval
OP_solution() 80.6365 81.61575 85.76427 83.20265 87.32975 163.0302 100
map_str_detect() 546.4681 563.08570 596.26190 571.52185 603.03980 1383.7969 100
unlist_lapply_fixed() 61.8119 67.49450 71.41485 69.56290 73.77240 104.8399 100
Reduce_lapply() 3.0604 3.11205 3.406012 3.14535 3.43130 6.3526 100
Note that this is amazingly fast!
OLD POST:
Make use of the all function as shown below:
unlist(lapply(strsplit(sentences, " ", fixed = TRUE), \(x) all(search_terms %in% x)))
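To see what this does for a single sentence (note that it matches whole space-separated tokens rather than substrings, which happens to be equivalent for this data):
strsplit(sentences[1], " ", fixed = TRUE)[[1]]
# [1] "blue"  "green" "red"
all(search_terms %in% c("blue", "green", "red"))
# [1] TRUE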
The benchmark:
OP_solution <- function(){
  pattern <- paste0("(?=.*", search_terms, ")", collapse = "")
  grepl(pattern, sentences, perl = TRUE)
}
map_str_detect <- function(){
  purrr::map_lgl(
    .x = sentences,
    .f = ~ all(stringr::str_detect(.x, search_terms))
  )
}
unlist_lapply_fixed <- function() unlist(lapply(strsplit(sentences, " ", fixed = TRUE), \(x) all(search_terms %in% x)))
sentences <- rep(sentences, 10000)
microbenchmark::microbenchmark(OP_solution(), map_str_detect(),
                               unlist_lapply_fixed(), check = 'equal')
Unit: milliseconds
expr min lq mean median uq max neval
OP_solution() 80.5368 81.40265 85.14451 82.73985 86.41345 118.7052 100
map_str_detect() 542.3555 553.84080 587.15748 566.66570 607.77130 782.5189 100
unlist_lapply_fixed() 60.4955 66.94420 71.94195 69.30135 72.16735 113.6567 100
You could potentially try a mix of purrr and stringr functions to solve this:
library(tidyverse)
purrr::map_lgl(
  .x = sentences,
  .f = ~ all(stringr::str_detect(.x, search_terms))
)
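One caveat worth flagging (an untested tweak, not benchmarked above): str_detect treats its pattern as a regex by default, which is part of why the purrr/stringr route is slow here. Wrapping the terms in stringr::fixed() requests literal matching:
# hypothetical variant using literal (non-regex) matching
purrr::map_lgl(
  .x = sentences,
  .f = ~ all(stringr::str_detect(.x, stringr::fixed(search_terms)))
)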

Count number of palindromes within a string

I have written the below code to count the number of palindromic strings in a given string:
countPalindromes <- function(str){
  len <- nchar(str)
  count <- 0
  for(i in 1:len){
    for(j in i:len){
      subs <- substr(str, i, j)
      rev <- paste(rev(substring(subs, 1:nchar(subs), 1:nchar(subs))), collapse = "")
      if(subs == rev){
        count <- count + 1
      }
    }
  }
  count
}
This works fine, but the code needs to be optimized so that it executes faster.
Please suggest some ways to optimize this piece of code.
Here's a solution that uses the wonderful stringi package - just as Andre suggested - together with a wee bit of vectorization.
cp <- function(s) {
  lenstr <- stringi::stri_length(s) # Get the length
  res <- sapply(1:lenstr, function(i) {
    # Get all substrings starting at position i
    sub_string <- stringi::stri_sub(s, i, i:lenstr)
    # Count the ones that equal their own reverse
    sum(sub_string == stringi::stri_reverse(sub_string))
  })
  sum(res)
}
This should give the same result as your function
> cp("enafdemderredmedfane")
[1] 30
> countPalindromes("enafdemderredmedfane")
[1] 30
There is not much speedup for short strings, but for longer strings you can really see a benefit:
> microbenchmark::microbenchmark(countPalindromes("howdoyoudo"), cp("howdoyoudo"))
Unit: microseconds
expr min lq mean median uq max neval cld
countPalindromes("howdoyoudo") 480.979 489.6180 508.9044 494.9005 511.201 662.605 100 b
cp("howdoyoudo") 156.117 163.1555 175.4785 169.5640 179.993 324.145 100 a
Compared to
> microbenchmark::microbenchmark(countPalindromes("enafdemderredmedfane"), cp("enafdemderredmedfane"))
Unit: microseconds
expr min lq mean median uq max neval cld
countPalindromes("enafdemderredmedfane") 2031.565 2115.0305 2475.5974 2222.354 2384.151 6696.484 100 b
cp("enafdemderredmedfane") 324.991 357.6055 430.8334 387.242 478.183 1298.390 100 a
Working with a vector, the process is faster. I am thinking of eliminating the double for loop, but I cannot find an efficient way (see the sketch after the update below).
countPalindromes_new <- function(str){
  len <- nchar(str)
  strsp <- strsplit(str, "")[[1]]
  count <- 0
  for(i in 1:len){
    for(j in i:len){
      if(all(strsp[i:j] == strsp[j:i])){
        count <- count + 1
      }
    }
  }
  count
}
> microbenchmark::microbenchmark(countPalindromes("howdoyoudo"), cp("howdoyoudo"), countPalindromes_new("howdoyoudo"))
Unit: microseconds
expr min lq mean median uq max neval
countPalindromes("howdoyoudo") 869.121 933.1215 1069.68001 963.201 1022.081 6712.751 100
cp("howdoyoudo") 192.000 202.8805 243.11972 219.308 258.987 477.441 100
countPalindromes_new("howdoyoudo") 49.068 53.3340 62.32815 57.387 63.574 116.481 100
> microbenchmark::microbenchmark(countPalindromes("enafdemderredmedfane"), cp("enafdemderredmedfane"), countPalindromes_new("enafdemderredmedfane"))
Unit: microseconds
expr min lq mean median uq max neval
countPalindromes("enafdemderredmedfane") 3578.029 3800.9620 4170.0888 3987.416 4173.6550 10205.445 100
cp("enafdemderredmedfane") 391.254 438.4010 609.8782 481.708 534.6135 6116.270 100
countPalindromes_new("enafdemderredmedfane") 200.534 214.1875 235.3501 223.148 245.5475 448.854 100
UPDATE (NEW VERSION WITHOUT LENGTH-1 COMPARISONS):
countPalindromes_new2 <- function(str){
  len <- nchar(str)
  strsp <- strsplit(str, "")[[1]]
  count <- len  # every single character is a palindrome of length 1
  for(i in 1:(len - 1)){
    for(j in (i + 1):len){
      if(all(strsp[i:j] == strsp[j:i])){
        count <- count + 1
      }
    }
  }
  count
}
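For reference, a standard way to drop the double loop over substring endpoints entirely (a sketch only, not benchmarked against the versions above) is to expand outwards around each of the 2*n - 1 possible palindrome centers, so no substring is ever materialized:
countPalindromes_centers <- function(str){
  s <- strsplit(str, "")[[1]]
  n <- length(s)
  count <- 0L
  # centers 1, 3, 5, ... sit on characters (odd-length palindromes);
  # centers 2, 4, 6, ... sit between characters (even-length palindromes)
  for (center in seq_len(2L * n - 1L)) {
    i <- (center + 1L) %/% 2L   # left index
    j <- center %/% 2L + 1L     # right index
    while (i >= 1L && j <= n && s[i] == s[j]) {
      count <- count + 1L       # s[i..j] is a palindrome
      i <- i - 1L
      j <- j + 1L
    }
  }
  count
}
countPalindromes_centers("enafdemderredmedfane")
# should agree with countPalindromes("enafdemderredmedfane"), i.e. 30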
Simply: normally I'm against using new libraries everywhere. But stringi is THE library for working with strings in R.
string_vec <- c("anna","nothing","abccba")
string_rev <- stringi::stri_reverse(string_vec)
sum(string_vec == string_rev)
# [1] 2

Fast way for character matching in R

I'm trying to find whether the elements of one character vector occur within the elements of another, and I am looking for a fast way of doing it in R.
Specifically, my character alphabet is amino acids:
aa.LETTERS <- c('G','P','A','V','L','I','M','C','F','Y','W','H','K','R','Q','N','E','D','S','T')
I have a vector of peptide and protein sequences:
set.seed(1)
peptides.vec <- sapply(1:100,function(p) paste(aa.LETTERS[sample(20,ceiling(runif(1,8,12)),replace=T)],collapse=""))
proteins.vec <- sapply(1:1000,function(p) paste(aa.LETTERS[sample(20,ceiling(runif(1,200,400)),replace=T)],collapse=""))
I want to check, for each peptide sequence in peptides.vec, whether it exists in any sequence in proteins.vec.
This is one of the obvious ways of doing it:
mapping.mat <- do.call(rbind, lapply(peptides.vec, function(p){
  grepl(p, proteins.vec)
}))
Another one is using the Biostrings Bioconductor package:
require(Biostrings)
peptides.set <- AAStringSet(x=peptides.vec)
proteins.set <- AAStringSet(x=proteins.vec)
mapping.mat <- vcountPDict(peptides.set,proteins.set)
Both are slow for the dimensions I'm working with:
> microbenchmark(do.call(rbind,lapply(peptides.vec,function(p){
grepl(p,proteins.vec)
})),times=100)
Unit: milliseconds
expr min lq mean median uq max neval
do.call(rbind, lapply(peptides.vec, function(p) { grepl(p, proteins.vec) })) 477.2509 478.8714 482.8937 480.4398 484.3076 509.8098 100
> microbenchmark(vcountPDict(peptides.set,proteins.set),times=100)
Unit: milliseconds
expr min lq mean median uq max neval
vcountPDict(peptides.set, proteins.set) 283.32 284.3334 285.0205 284.7867 285.2467 290.6725 100
Any idea how to get this done faster?
As mentioned in my comment, adding fixed = TRUE will lead to some performance improvement, and "stringi" is likely to give a good boost too.
Here are some tests:
N <- as.integer(length(proteins.vec))
funOP <- function() {
  do.call(rbind, lapply(peptides.vec, function(p) grepl(p, proteins.vec)))
}
funBASE_1 <- function() {
  # Just adds "fixed = TRUE"
  do.call(rbind, lapply(peptides.vec, function(p) grepl(p, proteins.vec, fixed = TRUE)))
}
funBASE_2 <- function() {
  # Does away with the `do.call` but probably won't improve performance
  vapply(peptides.vec, function(x) grepl(x, proteins.vec, fixed = TRUE), logical(N))
}
library(stringi)
funSTRINGI <- function() {
  # Should be considerably faster
  vapply(peptides.vec, function(x) stri_detect_fixed(proteins.vec, x), logical(N))
}
library(microbenchmark)
microbenchmark(funOP(), funBASE_1(), funBASE_2(), funSTRINGI())
# Unit: milliseconds
# expr min lq mean median uq max neval
# funOP() 344.500600 348.562879 352.94847 351.585206 356.508197 371.99683 100
# funBASE_1() 128.724523 129.763464 132.55028 132.198112 135.277821 139.65782 100
# funBASE_2() 128.564914 129.831660 132.33836 131.607216 134.380077 140.46987 100
# funSTRINGI() 8.629728 8.825296 9.22318 9.038496 9.444376 11.28491 100
Go "stringi"!

R *apply vector as input; matrix as output

I'd like to apply, to each element of a vector, a function that outputs a vector.
After applying the function to each element, I should have many vectors, which I'd like to rbind into a matrix.
The code should be equivalent to the following:
my_function <- function(x) x:(x+10)
my_vec <- 1:10
x <- vector()
for(i in seq_along(my_vec)){
  x <- rbind(x, my_function(my_vec[i]))
}
Of course, my_function and my_vec are just examples.
Try:
tmp <- lapply(my_vec, my_function)
do.call(rbind, tmp)
Or, like Heroka suggested, use sapply. I prefer lapply, then bind my output the way I like (rbind/cbind) instead of potentially transposing.
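For reference, the sapply route mentioned above would look like this for the question's my_vec and my_function; sapply simplifies the results to a matrix with one column per element, hence the transpose:
# each call's result becomes a column, so t() restores the row-per-element layout
t(sapply(my_vec, my_function))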
Here is an alternative:
matrix( unlist(lapply(my_vec,my_function)), length(my_vec), byrow=TRUE )
Speed is almost the same:
library(microbenchmark)
my_function <- function(x) sin(x:(x+10))
for ( n in 1:4 )
{
  my_vec <- 1:10^n
  print(
    microbenchmark( mra68  = matrix( unlist(lapply(my_vec, my_function)), length(my_vec), byrow = TRUE ),
                    stas.g = do.call(rbind, lapply(my_vec, my_function)),
                    times = 1000 )
  )
  print("identical?")
  print( identical( matrix( unlist(lapply(my_vec, my_function)), length(my_vec), byrow = TRUE ),
                    do.call(rbind, lapply(my_vec, my_function)) ) )
}
Unit: microseconds
expr min lq mean median uq max neval
mra68 38.496 40.307 68.00539 41.213 110.052 282.148 1000
stas.g 41.213 42.572 72.86443 43.930 115.939 445.186 1000
[1] "identical?"
[1] TRUE
Unit: microseconds
expr min lq mean median uq max neval
mra68 793.002 810.212 850.4857 818.3640 865.2375 7231.669 1000
stas.g 876.786 894.901 946.8165 906.2235 966.9100 7051.873 1000
[1] "identical?"
[1] TRUE
Unit: milliseconds
expr min lq mean median uq max neval
mra68 2.605448 3.028442 5.269003 4.020940 7.807512 14.51225 1000
stas.g 2.959604 3.390071 5.823661 4.500546 8.800462 92.54977 1000
[1] "identical?"
[1] TRUE
Unit: milliseconds
expr min lq mean median uq max neval
mra68 27.29810 30.99387 51.44223 41.20167 79.46185 559.0059 1000
stas.g 33.63622 37.22420 60.10224 49.07643 92.94333 395.3315 1000
[1] "identical?"
[1] TRUE

Efficient use of vectors

I am attempting to copy one vector to another using the following syntax:
data <- NULL
for( i in 1:nrow(line)){
  data = append(data, line[i*4])
}
From what I have seen, the use of append in this way results in a lot of copying of data, which makes R very slow. What is the syntax for copying every 4th element of one vector to another, given that the vector you are copying from is of a given size?
Here are three methods with their benchmarks. You can see that preallocating the vector, as in the method2 function, is quite a bit faster, the lapply method is in the middle, and your function is the slowest.
Of course, these are 1-D vectors rather than n-D arrays, but I would expect the benchmarks to be similar or even more pronounced.
method1 <- function(line) {
  data <- NULL
  for( i in 1:length(line)){
    data = append(data, line[i])
  }
}
method2 <- function(line) {
  data <- vector(mode = "numeric", length = length(line))
  for (i in 1:length(line)) {
    data[i] <- line[i]
  }
}
library(microbenchmark)
r <- rnorm(1000)
microbenchmark(method2(r), unit="ms")
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> method2(r) 2.18085 2.279676 2.428731 2.371593 2.500495 5.24888 100
microbenchmark(lapply(r, function(x) { data<-append(data, x) }), unit="ms")
#> Unit: milliseconds
#> expr min lq
#> lapply(r, function(x) { data <- append(data, x) }) 3.014673 3.091299
#> mean median uq max neval
#> 3.287216 3.150052 3.260199 6.036501 100
microbenchmark(method1(r), unit="ms")
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> method1(r) 3.938684 3.978002 5.71831 4.020001 4.280521 98.58584 100
Didn't realize OP wanted only every fourth. Why not just use a data frame or data.table?
d <- data.frame(matrix(rnorm(1000), ncol=1))
microbenchmark(d2 <- d[seq(1,nrow(d), 4),])
#> Unit: microseconds
#> expr min lq mean median uq
#> d2 <- d[seq(1, nrow(d), 4), ] 64.846 65.9915 73.08007 67.225 73.8225
#> max neval
#> 220.438 100
library(data.table)
dt <- data.table(d)
microbenchmark(d2 <- dt[seq(1,nrow(d), 4),])
#> Unit: microseconds
#> expr min lq mean median uq
#> d2 <- dt[seq(1, nrow(d), 4), ] 298.163 315.2025 324.8793 320.554 330.416
#> max neval
#> 655.124 100
If you're trying to extract every fourth element from a vector, you could index using seq to grab the correct elements:
data <- letters[seq(4, length(letters), by=4)]
data
# [1] "d" "h" "l" "p" "t" "x"
Growing the vector one at a time as you show in your question will be slow because you will need to keep re-allocating your vector (see the second circle of The R Inferno for details). However, even pre-allocating your vector and constructing it with a for loop will be slow compared to constructing it in a single vectorized indexing operation.
To get a sense of the speed improvements, consider a comparison to the sort of method you've described, except using pre-allocation:
for.prealloc <- function(x) {
  data <- vector(mode = "numeric", length = floor(length(x)/4))
  for (i in 1:floor(length(x)/4)) {
    data[i] <- x[i*4]
  }
  data
}
josilber <- function(x) x[seq(4, length(x), by=4)]
r <- rnorm(10000)
all.equal(for.prealloc(r), josilber(r))
# [1] TRUE
library(microbenchmark)
microbenchmark(for.prealloc(r), josilber(r))
# Unit: microseconds
# expr min lq mean median uq max neval
# for.prealloc(r) 1846.014 2035.7890 2351.9681 2094.804 2244.56 5283.285 100
# josilber(r) 95.757 97.4125 125.9877 113.179 138.96 259.606 100
The approach I propose is 20x faster than using for and a pre-allocated vector (and it will be even faster than using append and a non-pre-allocated vector).
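A related base-R idiom worth knowing (a side note, not benchmarked above): logical indices recycle, so a length-4 logical mask also selects every 4th element:
r <- rnorm(16)
# the logical mask recycles across r, keeping positions 4, 8, 12, 16
identical(r[c(FALSE, FALSE, FALSE, TRUE)], r[seq(4, length(r), by = 4)])
# [1] TRUE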
