Vectorizing for loops to speed up a program in R

I'm looking for a simple vectorized approach to replace the for loop in my R code.
I have the following data frame with sentences and two dictionaries of positive and negative words:
# Create data.frame with sentences
sent <- data.frame(words = c("just right size and i love this notebook", "benefits great laptop",
"wouldnt bad notebook", "very good quality", "orgtop",
"great improvement for that bad product but overall is not good", "notebook is not good but i love batterytop"), user = c(1,2,3,4,5,6,7),
stringsAsFactors=F)
# Create pos/negWords
posWords <- c("great","improvement","love","great improvement","very good","good","right","very","benefits",
"extra","benefit","top","extraordinarily","extraordinary","super","benefits super","good","benefits great",
"wouldnt bad")
negWords <- c("hate","bad","not good","horrible")
And now I create replicates of the original data frame to simulate a big data set:
# Replicate original data.frame - big data simulation (700,000 rows of sentences)
df.expanded <- as.data.frame(replicate(100000, sent$words))
library(zoo)  # coredata() below comes from zoo
sent <- coredata(sent)[rep(seq(nrow(sent)), 100000), ]
rownames(sent) <- NULL
For my next step, I order the dictionary words by descending length (so that longer phrases are matched first) and attach their sentiment scores (positive word = 1, negative word = -1).
# Ordering words in pos/negWords
wordsDF <- data.frame(words = posWords, value = 1,stringsAsFactors=F)
wordsDF <- rbind(wordsDF,data.frame(words = negWords, value = -1))
wordsDF$lengths <- unlist(lapply(wordsDF$words, nchar))
wordsDF <- wordsDF[order(-wordsDF[,3]),]
rownames(wordsDF) <- NULL
Then I define the following function with a for loop:
# Sentiment score function
library(qdapRegex)  # provides rm_white(), used inside the function
scoreSentence2 <- function(sentence){
score <- 0
for(x in 1:nrow(wordsDF)){
matchWords <- paste("\\<",wordsDF[x,1],'\\>', sep="") # matching exact words
count <- length(grep(matchWords,sentence)) # count them
if(count){
score <- score + (count * wordsDF[x,2]) # compute score (count * sentValue)
sentence <- gsub(paste0('\\s*\\b', wordsDF[x,1], '\\b\\s*', collapse='|'), ' ', sentence) # remove matched words from wordsDF
sentence <- rm_white(sentence) # strip leftover extra whitespace (rm_white is from qdapRegex)
}
}
score
}
And I call the previous function on sentences in my data frame:
# Apply scoreSentence function to sentences
SentimentScore2 <- unlist(lapply(sent$words, scoreSentence2))
# Time consumption for 700,000 sentences in sent data.frame:
# user system elapsed
# 1054.19 0.09 1056.17
# Add sentiment score to origin sent data.frame
sent <- cbind(sent, SentimentScore2)
Desired output is:
                                    words user SentimentScore2
just right size and i love this notebook    1               2
                    benefits great laptop    2               1
                     wouldnt bad notebook    3               1
                        very good quality    4               1
                                   orgtop    5               0
.
.
.
And so forth...
Please, can anyone help me reduce the computing time of my original approach? With my beginner's programming skills in R, I'm at a dead end :-)
Any help or advice will be very much appreciated. Thank you very much in advance.

In the spirit of "teaching somebody to fish is better than giving them a fish", I'll walk you through it:
Make a copy of your code: you are going to mess it up!
Find the bottleneck:
1a: make the problem smaller:
nRep <- 100
df.expanded <- as.data.frame(replicate(nRep, sent$words))
library(zoo)
sent <- coredata(sent)[rep(seq(nrow(sent)), nRep), ]
1b: keep a reference solution: you'll be changing your code, and few activities are as good at introducing bugs as optimizing code!
sentRef <- sent
and add the same line, commented out, at the end of your code to remember where your reference came from. To make it even easier to check that you are not messing up your code, you can test it automatically at the end of your code:
library("testthat")
expect_equal(sent,sentRef)
1c: trigger the profiler around the code you want to look at:
Rprof()
SentimentScore2 <- unlist(lapply(sent$words, scoreSentence2))
Rprof(NULL)
1d: view the result, with base R:
summaryRprof()
There are also nicer tools; you can check the profileR or lineprof packages. lineprof is my tool of choice, and here it adds real value, narrowing the problem down to these two lines:
matchWords <- paste("\\<",wordsDF[x,1],'\\>', sep="") # matching exact words
count <- length(grep(matchWords,sentence)) # count them
Fix it:
3.1 Fortunately the main problem is fairly easy: the first line does not need to be inside the function, so move it out before the loop. By the way, the same applies to your paste0(). Your code becomes:
matchWords <- paste("\\<",wordsDF[,1],'\\>', sep="") # matching exact words
matchedWords <- paste0('\\s*\\b', wordsDF[,1], '\\b\\s*')
# Sentiment score function
scoreSentence2 <- function(sentence){
score <- 0
for(x in 1:nrow(wordsDF)){
count <- length(grep(matchWords[x],sentence)) # count them
if(count){
score <- score + (count * wordsDF[x,2]) # compute score (count * sentValue)
sentence <- gsub(matchedWords[x],' ', sentence) # remove matched words from wordsDF
# rm_white() from qdapRegex is no longer needed: leftover spaces don't affect the matching
}
}
score
}
That changes the execution time for 1000 reps from
5.64s to 2.32s. Not a bad investment!
3.2 The next bottleneck is the "count <-" line, but I think
shadow had just the right answer :-) Combined, we get:
matchWords <- paste("\\<",wordsDF[,1],'\\>', sep="") # matching exact words
matchedWords <- paste0('\\s*\\b', wordsDF[,1], '\\b\\s*')
# Sentiment score function
scoreSentence2 <- function(sentence){
score <- 0
for(x in 1:nrow(wordsDF)){
count <- grepl(matchWords[x],sentence) # TRUE if the expression appears
score <- score + (count * wordsDF[x,2]) # compute score (match * sentValue)
sentence <- gsub(matchedWords[x],' ', sentence) # remove matched words
# rm_white() from qdapRegex is no longer needed here either
}
score
}
That brings it down to 0.18s, about 31 times faster than the original...

You can easily vectorize your scoreSentence2 function over the sentences, since grep, grepl and gsub are already vectorized:
library(qdapRegex)  # for rm_white()
scoreSentence <- function(sentence){
score <- rep(0, length(sentence))
for(x in 1:nrow(wordsDF)){
matchWords <- paste("\\<",wordsDF[x,1],'\\>', sep="") # matching exact words
count <- grepl(matchWords, sentence) # count them
score <- score + (count * wordsDF[x,2]) # compute score (count * sentValue)
sentence <- gsub(paste0('\\s*\\b', wordsDF[x,1], '\\b\\s*', collapse='|'), ' ', sentence) # remove matched words from wordsDF
sentence <- rm_white(sentence) # collapse leftover whitespace (rm_white is from qdapRegex)
}
return(score)
}
scoreSentence(sent$words)
Note that the count does not actually count the number of times the expression appears in a sentence (neither in your version nor in mine); it only tells you whether the expression appears at all. If you want to actually count occurrences, you could use the following instead:
count <- sapply(gregexpr(matchWords, sentence), function(x) length(x[x>0]))
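For illustration, here is a sketch of the vectorized function with that counting variant dropped in; it assumes the same wordsDF as above and that qdapRegex is installed for rm_white():
library(qdapRegex)
scoreSentenceCounts <- function(sentence){
  score <- rep(0, length(sentence))
  for(x in 1:nrow(wordsDF)){
    matchWords <- paste("\\<", wordsDF[x,1], '\\>', sep="")
    # count every occurrence of the expression in each sentence
    count <- sapply(gregexpr(matchWords, sentence), function(m) length(m[m > 0]))
    score <- score + (count * wordsDF[x,2])
    sentence <- gsub(paste0('\\s*\\b', wordsDF[x,1], '\\b\\s*'), ' ', sentence) # remove matched words
    sentence <- rm_white(sentence)
  }
  return(score)
}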

Related

In R distance between two sentences: Word-level comparison by minimum edit distance

While trying to learn R, I want to implement the following algorithm. Consider the two lists below:
List 1: "crashed", "red", "car"
List 2: "crashed", "blue", "bus"
I want to find out how many actions it would take to transform 'list1' into 'list2'.
As you can see I need only two actions:
1. Replace "red" with "blue".
2. Replace "car" with "bus".
But how can we find the number of such actions automatically?
We can have several actions to transform the sentences: ADD, REMOVE, or REPLACE the words in the list.
Now, I will try my best to explain how the algorithm should work:
At the first step, I will create a table like this:
rows: i = 0,1,2,3
columns: j = 0,1,2,3
(example: value[0,0] = 0, value[0,1] = 1, ...)
          crashed  red  car
        0       1    2    3
crashed 1
blue    2
bus     3
Now I will try to fill in the table. Please note that each cell shows the number of actions needed so far to reformat the sentence (ADD, REMOVE, or REPLACE).
Consider the comparison of "crashed" with "crashed" (value[1,1]): since they are the same word, nothing needs to change, so the value is 0. Basically, we take the diagonal value, value[0,0]:
          crashed  red  car
        0       1    2    3
crashed 1       0
blue    2
bus     3
Now consider "crashed" against the second word of the sentence, "red". Since they are not the same word, we calculate the number of changes like this:
min{value[0,1], value[0,2], value[1,1]} + 1
min{1, 2, 0} + 1 = 1
Therefore, we just need to remove "red".
So, the table will look like this:
          crashed  red  car
        0       1    2    3
crashed 1       0    1
blue    2
bus     3
And we continue like this. "crashed" and "car" will be:
min{value[0,3], value[0,2], value[1,2]} + 1
min{3, 2, 1} + 1 = 2
and the table will be:
          crashed  red  car
        0       1    2    3
crashed 1       0    1    2
blue    2
bus     3
And we continue in the same way; the final result will be:
          crashed  red  car
        0       1    2    3
crashed 1       0    1    2
blue    2       1    1    2
bus     3       2    2    2
As you can see, the last number in the table gives the distance between the two sentences: value[3,3] = 2.
Basically, the algorithm should look like this:
if (characters_in_header_of_matrix[i] == characters_in_column_of_matrix[j] &
    value[i,j] == value[i+1][j-1])
then {
    get the 'DIAGONAL VALUE'   # diagonal value = value[i-1, j-1]
}
else {
    value[i,j] = min(value[i-1, j], value[i-1, j-1], value[i, j-1]) + 1
}
endif
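(For reference, the standard word-level edit-distance recurrence this table computes, with value[i,0] = i and value[0,j] = j, is:
value[i,j] = value[i-1,j-1]                                        if row word i equals column word j
value[i,j] = min(value[i-1,j], value[i,j-1], value[i-1,j-1]) + 1   otherwise)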
For comparing the elements of the two lists that appear in the header and the first column of the matrix, I have used the strcmp() function, which returns a Boolean value (TRUE or FALSE) when comparing two words. But I fail at implementing the rest.
I'd appreciate your help on this one, thanks.
The question
After some clarification in a previous post, and after the update of the post, my understanding is that Zero is asking: 'how can one iteratively count the number of word differences between two strings?'
I am unaware of any existing implementation in R, although I would be surprised if one doesn't already exist. I took a bit of time to create a simple implementation, altering the algorithm slightly for simplicity. (For anyone not interested in the details, scroll down to the two implementations: one in pure R, one using a small amount of Rcpp.) The general idea of the implementation:
1. Initialize with string_1 and string_2 of lengths n_1 and n_2.
2. Calculate the cumulative difference between the first min(n_1, n_2) elements.
3. Use this cumulative difference as the diagonal of the matrix.
4. Set the first off-diagonal elements to the very first element + 1.
5. Calculate the remaining off-diagonal elements as: diag(i) - diag(i-1) + full_matrix(i-1, j).
6. In the previous step, i iterates over diagonals and j iterates over rows/columns (either works); we start at the third diagonal, as the first 2x2 block is filled in steps 1-4.
7. Calculate the remaining abs(n_1 - n_2) elements as full_matrix[, min(n_1, n_2)] + 1:abs(n_1 - n_2), applying the latter over each value in the former, and bind them appropriately to the full_matrix.
The output is a matrix whose row and column names are the words of the corresponding strings, formatted for easier reading.
Implementation in R
library(stringr)  # for str_split()
Dist_between_strings <- function(x, y,
                                 split = " ",
                                 split_x = split, split_y = split,
                                 case_sensitive = TRUE){
#Safety checks
if(!is.character(x) || !is.character(y) ||
   nchar(x) == 0 || nchar(y) == 0)
  stop("x, y need to be non-empty character strings.")
if(length(x) != 1 || length(y) != 1)
  stop("Currently the function is not vectorized, please provide the strings individually or use lapply.")
if(!is.logical(case_sensitive))
  stop("case_sensitive needs to be logical")
#Extract variable names of our variables
# used for the dimension names later on
x_name <- deparse(substitute(x))
y_name <- deparse(substitute(y))
#Expression which when evaluated will name our output
dimname_expression <-
parse(text = paste0("dimnames(output) <- list(",make.names(x_name, unique = TRUE)," = x_names,",
make.names(y_name, unique = TRUE)," = y_names)"))
#split the strings into words
x_names <- str_split(x, split_x, simplify = TRUE)
y_names <- str_split(y, split_y, simplify = TRUE)
#are we case sensitive? If not, compare lower-cased words
if(isTRUE(case_sensitive)){
x_split <- x_names
y_split <- y_names
}else{
x_split <- str_split(tolower(x), split_x, simplify = TRUE)
y_split <- str_split(tolower(y), split_y, simplify = TRUE)
}
#Create an index in case the two are of different length
idx <- seq(1, (n_min <- min((nx <- length(x_split)),
(ny <- length(y_split)))))
n_max <- max(nx, ny)
#If we have one string that has length 1, the output is simplified
if(n_min == 1){
distances <- seq(1, n_max) - (x_split[idx] == y_split[idx])
output <- matrix(distances, nrow = nx)
eval(dimname_expression)
return(output)
}
#If not we will have to do a bit of work
output <- diag(cumsum(ifelse(x_split[idx] == y_split[idx], 0, 1)))
#The loop will fill in the off_diagonal
output[2, 1] <- output[1, 2] <- output[1, 1] + 1
if(n_min > 2)  # the 2x2 case is already complete at this point
for(i in 3:n_min){
for(j in 1:(i - 1)){
output[i,j] <- output[j,i] <- output[i,i] - output[i - 1, i - 1] + #are the words different?
output[i - 1, j] #How many words were different before?
}
}
#comparison if the list is not of the same size
if(nx != ny){
#Add the remaining words to the side that does not contain this
additional_words <- seq(1, n_max - n_min)
additional_words <- sapply(additional_words, function(x) x + output[,n_min])
#merge the additional words
if(nx > ny)
output <- rbind(output, t(additional_words))
else
output <- cbind(output, additional_words)
}
#set the dimension names,
# I would like the original variable names to be displayed, so I create an expression and evaluate it
eval(dimname_expression)
output
}
Note that the implementation is not vectorized, and as such can only take single string inputs!
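As a minimal usage sketch, one string can be compared against several candidates via lapply (the candidate strings here are made up for illustration):
# hypothetical candidate strings; in practice these could be e.g. student hand-ins
candidates <- c("crashed blue bus", "crashed red bus")
results <- lapply(candidates, function(s) Dist_between_strings("crashed red car", s))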
Testing the implementation
To test the implementation, one can use the strings given in the question. As they were said to be contained in lists, we first have to convert them to strings. Note that the function lets one split each string differently; however, by default it assumes space-separated strings. So first I'll show how to convert the lists to the correct format:
list_1 <- list("crashed","red","car")
list_2 <- list("crashed","blue","bus")
string_1 <- paste(list_1,collapse = " ")
string_2 <- paste(list_2,collapse = " ")
Dist_between_strings(string_1, string_2)
#Output for the strings in the given example:
#          string_2
# string_1  crashed blue bus
#  crashed        0    1   2
#  red            1    1   2
#  car            2    2   2
This is not exactly the requested output, but it conveys the same information, as the words are ordered as they appear in the strings.
More examples
I stated that it works for other strings as well, and this is indeed the case, so let's try some random user-made strings:
#More complicated strings
string_3 <- "I am not a blue whale"
string_4 <- "I am a cat"
string_5 <- "I am a beautiful flower power girl with monster wings"
string_6 <- "Hello"
Dist_between_strings(string_3, string_4, case_sensitive = TRUE)
Dist_between_strings(string_3, string_5, case_sensitive = TRUE)
Dist_between_strings(string_4, string_5, case_sensitive = TRUE)
Dist_between_strings(string_6, string_5)
Running these shows that they do yield the correct answers. Note that if either string has length 1, the comparison is a lot faster.
Benchmarking the implementation
Now that the implementation is accepted as correct, we would like to know how well it performs. (The uninterested reader can scroll past this section to where a faster implementation is given.) For this purpose I will use much larger strings. For a complete benchmark I should test various string sizes, but here I will only use two rather large strings of 1,000 and 2,500 words. I use the microbenchmark package in R, which contains a microbenchmark function that claims to be accurate down to nanoseconds. The function executes the code 100 times (or a user-defined number of times), returning the mean and quartiles of the run times. Because of other parts of R, such as the garbage collector, the median is usually considered a good estimate of the actual average run time of a function.
The execution and results are shown below:
#Benchmarks for larger strings
set.seed(1)
string_7 <- paste(sample(LETTERS,1000,replace = TRUE), collapse = " ")
string_8 <- paste(sample(LETTERS,2500,replace = TRUE), collapse = " ")
microbenchmark::microbenchmark(String_Comparison = Dist_between_strings(string_7, string_8, case_sensitive = FALSE))
# Unit: milliseconds
# expr min lq mean median uq max neval
# String_Comparison 716.5703 729.4458 816.1161 763.5452 888.1231 1106.959 100
Profiling
Now I find these run times very slow. One use case for the implementation could be an initial check of student hand-ins for plagiarism, where a low difference count very likely indicates plagiarism. Hand-ins can be very long and there may be hundreds of them, so I would like the run to be very fast.
To figure out how to improve my implementation, I used the profvis package with the corresponding profvis function. To profile the function I exported it to another R script that I sourced, running the code once prior to profiling to compile it and avoid profiling noise (important). The code to run the profiling is shown below.
library(profvis)
profvis(Dist_between_strings(string_7, string_8, case_sensitive = FALSE))
The profvis output (a flame graph, not reproduced here) shows a clear problem: the loop filling the off-diagonal is by far responsible for most of the runtime. Loops in R (as in Python and other interpreted languages) are notoriously slow.
Using Rcpp to improve performance
To improve the implementation, we can move the loop to C++ using the Rcpp package. This is rather simple; the code is not unlike what we would write in R, if we avoid iterators. A C++ script can be created in RStudio via File -> New File -> C++ File. The following C++ code is pasted into that file and sourced using the Source button.
//Rcpp Code
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericMatrix Cpp_String_difference_outer_diag(NumericMatrix output){
  long nrow = output.nrow();
  for(long i = 2; i < nrow; i++){   // start at the third diagonal; the 2x2 block is already filled
    for(long j = 0; j < i; j++){
      output(i, j) = output(i, i) - output(i - 1, i - 1) + // are the words different?
                     output(i - 1, j);                     // how many words differed before?
      output(j, i) = output(i, j);
    }
  }
  return output;
}
The corresponding R function needs to be altered to use this function instead of looping. The code is similar to the first function, only switching the loop for a call to the C++ function.
Dist_between_strings_cpp <- function(x, y,
split = " ",
split_x = split, split_y = split,
case_sensitive = TRUE){
#Safety checks
if(!is.character(x) || !is.character(y) ||
   nchar(x) == 0 || nchar(y) == 0)
  stop("x, y need to be non-empty character strings.")
if(length(x) != 1 || length(y) != 1)
  stop("Currently the function is not vectorized, please provide the strings individually or use lapply.")
if(!is.logical(case_sensitive))
  stop("case_sensitive needs to be logical")
#Extract variable names of our variables
# used for the dimension names later on
x_name <- deparse(substitute(x))
y_name <- deparse(substitute(y))
#Expression which when evaluated will name our output
dimname_expression <-
parse(text = paste0("dimnames(output) <- list(", make.names(x_name, unique = TRUE)," = x_names,",
make.names(y_name, unique = TRUE)," = y_names)"))
#split the strings into words
x_names <- str_split(x, split_x, simplify = TRUE)
y_names <- str_split(y, split_y, simplify = TRUE)
#are we case sensitive? If not, compare lower-cased words
if(isTRUE(case_sensitive)){
x_split <- x_names
y_split <- y_names
}else{
x_split <- str_split(tolower(x), split_x, simplify = TRUE)
y_split <- str_split(tolower(y), split_y, simplify = TRUE)
}
#Create an index in case the two are of different length
idx <- seq(1, (n_min <- min((nx <- length(x_split)),
(ny <- length(y_split)))))
n_max <- max(nx, ny)
#If we have one string that has length 1, the output is simplified
if(n_min == 1){
distances <- seq(1, n_max) - (x_split[idx] == y_split[idx])
output <- matrix(distances, nrow = nx)
eval(dimname_expression)
return(output)
}
#If not we will have to do a bit of work
output <- diag(cumsum(ifelse(x_split[idx] == y_split[idx], 0, 1)))
#The loop will fill in the off_diagonal
output[2, 1] <- output[1, 2] <- output[1, 1] + 1
if(n_max > 2)
output <- Cpp_String_difference_outer_diag(output) #Execute the c++ code
#comparison if the list is not of the same size
if(nx != ny){
#Add the remaining words to the side that does not contain this
additional_words <- seq(1, n_max - n_min)
additional_words <- sapply(additional_words, function(x) x + output[,n_min])
#merge the additional words
if(nx > ny)
output <- rbind(output, t(additional_words))
else
output <- cbind(output, additional_words)
}
#set the dimension names,
# I would like the original variable names to be displayed, so I create an expression and evaluate it
eval(dimname_expression)
output
}
Testing the C++ implementation
To be sure the implementation is correct, we check that the C++ version returns the same output.
#Test the cpp implementation
identical(Dist_between_strings(string_3, string_4, case_sensitive = TRUE),
Dist_between_strings_cpp(string_3, string_4, case_sensitive = TRUE))
#TRUE
Final benchmarks
Now is this actually faster? To see this we could run another benchmark using the microbenchmark package. The code and results are shown below:
#Final microbenchmarking
microbenchmark::microbenchmark(R = Dist_between_strings(string_7, string_8, case_sensitive = FALSE),
Rcpp = Dist_between_strings_cpp(string_7, string_8, case_sensitive = FALSE))
# Unit: milliseconds
# expr min lq mean median uq max neval
# R 721.71899 753.6992 850.21045 787.26555 907.06919 1756.7574 100
# Rcpp 23.90164 32.9145 54.37215 37.28216 47.88256 243.6572 100
The microbenchmark gives a median improvement factor of roughly 21 (= 787 / 37), a massive improvement from reimplementing just a single loop!
There is already an edit-distance function in R we can take advantage of: adist().
As it works on the character level, we'll have to assign a character to each unique word in our sentences, and stitch them together to form pseudo-words we can calculate the distance between.
s1 <- c("crashed", "red", "car")
s2 <- c("crashed", "blue", "bus")
ll <- list(s1, s2)
alnum <- c(letters, LETTERS, 0:9)            # 62 available characters
ll2 <- relist(alnum[factor(unlist(ll))], ll) # map each unique word to a single character
ll2 <- sapply(ll2, paste, collapse="")       # stitch the characters into pseudo-words
adist(ll2)
# [,1] [,2]
# [1,] 0 2
# [2,] 2 0
Main limitation here, as far as I can tell, is the number of unique characters available, which in this case is 62, but can be extended quite easily, depending on your locale. E.g: intToUtf8(c(32:126, 161:300), TRUE).
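A sketch of that extension, swapping the 62-character alphabet for a larger set of printable Unicode code points (same ll as above):
# roughly 235 characters instead of 62
big_alnum <- intToUtf8(c(32:126, 161:300), multiple = TRUE)
ll2 <- relist(big_alnum[factor(unlist(ll))], ll)
ll2 <- sapply(ll2, paste, collapse = "")
adist(ll2)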

Increasing speed using while loops: finding MULTIPLE chains of infection in R

I recently asked a question about improving performance in my code (Faster method than "while" loop to find chain of infection in R).
Background:
I'm analyzing large tables (300,000 - 500,000 rows) that store data output by a disease simulation model. In the model, animals on a landscape infect other animals; for example, animal a1 infects other animals, and the infection moves from animal to animal, branching off into "chains" of infection.
In my original question, I asked how I could output a data.frame corresponding to animal "d2"'s "chain of infection" (illustrated in the original post, outlined in green). The suggested solution worked well for one animal.
In reality, I will need to calculate chains for about 400 animals, corresponding to a subset of all animals (allanimals table).
I've included a link to an example dataset that is large enough to play with.
Here is the code for one chain, starting with animal 5497370. Note that I've slightly changed the column names from my previous question and updated the code!
The code:
allanimals <- read.csv("https://www.dropbox.com/s/0o6w29lz8yzryau/allanimals.csv?raw=1",
stringsAsFactors = FALSE)
# Here's an example animal
ExampleAnimal <- 5497370
ptm <- proc.time()
# every animal ID that occurs anywhere, excluding the placeholder -1
allanimals_ID <- setdiff(unique(c(allanimals$ID, allanimals$InfectingAnimal_ID)), -1)
# infected[i] is the index of the animal that infected animal i (NA if none)
infected <- rep(NA_integer_, length(allanimals_ID))
infected[match(allanimals$ID, allanimals_ID)] <-
  match(allanimals$InfectingAnimal_ID, allanimals_ID)
path <- rep(NA_integer_, length(allanimals_ID))
curOne <- match(ExampleAnimal, allanimals_ID)
i <- 1
while (!is.na(nextOne <- infected[curOne])) {
path[i] <- curOne
i <- i + 1
curOne <- nextOne
}
chain <- allanimals[path[seq_len(i - 1)], ]
proc.time() - ptm
# check it out
chain
I'd like to output chains for each animal in "sel.set":
sel.set <- allanimals %>%
filter(HexRow < 4 & Year == 130) %>%
pull("ID")
If possible, I'd like to store each "chain" data.frame as list with length = number of chains.
So I'll return the indices needed to access the data frame rather than all the data frame subsets. You'll just need lapply(test, function(path) allanimals[path, ]), or a more complicated function inside the lapply if you want to do other things with the data frame subsets.
One could think of simply wrapping the one-animal solution in lapply:
get_path <- function(animal) {
curOne <- match(animal, allanimals_ID)
i <- 1
while (!is.na(nextOne <- infected[curOne])) {
path[i] <- curOne
i <- i + 1
curOne <- nextOne
}
path[seq_len(i - 1)]
}
sel.set <- allanimals %>%
filter(HexRow < 4 & Year == 130) %>%
pull("ID")
system.time(
test <- lapply(sel.set, get_path)
) # 0.66 seconds
We could rewrite this function as a recursive function (this will introduce my third and last solution).
system.time(
sel.set.match <- match(sel.set, allanimals_ID)
) # 0
get_path_rec <- function(animal.match) {
`if`(is.na(nextOne <- infected[animal.match]),
NULL,
c(animal.match, get_path_rec(nextOne)))
}
system.time(
test2 <- lapply(sel.set.match, get_path_rec)
) # 0.06
all.equal(test2, test) # TRUE
This solution is 10 times as fast. I don't understand exactly why, though one plausible culprit is that get_path writes into the long, globally defined path vector, forcing R to copy it on each call, while the recursive version only builds short vectors.
Why did I want to write a recursive function? I thought you might have many cases where, for example, you want the paths of both animalX and animalY, where animalY infected animalX. When computing the path of animalX, you would then recompute the whole path of animalY.
So I wanted to use memoization to store already-computed results, and memoization works well with recursive functions. So, my last solution:
get_path_rec_memo <- memoise::memoize(get_path_rec)
memoise::forget(get_path_rec_memo)
system.time(
test3 <- lapply(sel.set.match, get_path_rec_memo)
) # 0.12
all.equal(test3, test) # TRUE
Unfortunately, this is slower than the second solution here. I hope it will prove useful on the whole dataset.

Speed up for loop in R

I have the following data.frame and dictionaries of pos/negWords:
sent <- data.frame(words = c("just right size and i love this notebook", "benefits great laptop",
"wouldnt bad notebook", "very good quality", "orgtop",
"great improvement", "notebook is not good but i love batterytop"), user = c(1,2,3,4,5,6,7),
stringsAsFactors=F)
posWords <- c("great","improvement","love","great improvement","very good","good","right","very","benefits",
"extra","benefit","top","extraordinarily","extraordinary","super","benefits super")
negWords <- c("hate","bad","not good","horrible")
And the following function, which matches words in each sentence against the pos/negWords dictionaries and computes a sentiment value according to frequency of occurrence (exact matches only):
# order words by descending length (prepare data for the function below)
wordsDF <- data.frame(words = posWords, value = 1,stringsAsFactors=F)
wordsDF <- rbind(wordsDF,data.frame(words = negWords, value = -1))
wordsDF$lengths <- unlist(lapply(wordsDF$words, nchar))
wordsDF <- wordsDF[order(-wordsDF[,3]),]
rownames(wordsDF) <- NULL
scoreSentence <- function(sentence){
  score <- 0
  for(x in 1:nrow(wordsDF)){
    match <- paste("\\<",wordsDF[x,1],'\\>', sep="") # matching exact words
    count <- length(grep(match,sentence)) # count them
    if(count){
      score <- score + (count * wordsDF[x,2]) # compute score (count * sentValue)
      sentence <- gsub(paste0('\\s*\\b', wordsDF[x,1], '\\b\\s*', collapse='|'), '', sentence) # remove words which were matched
    }
  }
  score
}
which generates the desired output when called like this:
SentimentScore <- unlist(lapply(sent$words, scoreSentence))
bbb <- cbind(sent, SentimentScore)
This results in the desired output:
words user SentimentScore
1 just right size and i love this notebook 1 2
2 benefits great laptop 2 2
3 wouldnt bad notebook 3 -1
4 very good quality 4 1
5 orgtop 5 0
6 great improvement 6 1
7 notebook is not good but i love batterytop 7 0
For those purposes a for loop was used, but I have 7,000 pos/negWords and 200,000 sentences, so it is never-ending...
Please, do you have a better solution for this task, mainly one that gives the same result in SentimentScore? :-)
I'll appreciate any advice or solution. Many thanks in advance.
First, you should run on sub-pieces of your data.frame, because list resizing during lapply probably generates a huge overhead:
ptm = proc.time(); f=lapply(1:100000, function(X){X}); print(proc.time()-ptm)
#   user  system elapsed
#  0.056   0.004   0.061
ptm = proc.time(); f=lapply(1:1000000, function(X){X}); print(proc.time()-ptm)
#   user  system elapsed
#  1.112   0.004   1.119
Here a factor of 10 in sequence size yields a factor of roughly 18 in computation time. So use small lists, then concatenate them into one big list.
Declaring a big data.frame does not take long compared to extending one, so declare it up front and then fill it from your sub-lists:
bbb = data.frame( words=sent[1], user=sent[2], scoreSentence=rep(0, nrow(sent)) )
MAX_SIZE = 10000
for ( ii in 0:(ceiling(nrow(sent)/MAX_SIZE)-1) ) {
selected_rows = (1 + ii * MAX_SIZE):min( (ii+1)*MAX_SIZE, nrow(sent) )
bbb[selected_rows, "scoreSentence"] = unlist(lapply(sent$words[selected_rows], scoreSentence))
}
MAX_SIZE must be big enough, because a for loop is slower than lapply (you want as few iterations of the for loop as possible), but not too big, or the list-extension overhead will make the program slower.
Alternative with parallelisation
Parallelisation is a good way to speed up a set of substantial calculations by running each of them on a different core. In your case, we make each calculation substantial by sending big chunks of sentences.
With mclapply from the parallel package, you send each chunk to a different process; each one is fast because its chunk is not too big. A wrapper for scoreSentence that handles a vector of row indices is required:
library(parallel)
scoreSentenceWrapper <- function(selected_rows) {
  return(unlist(lapply(sent$words[selected_rows], scoreSentence)))
}
bbb = data.frame( words=sent[1], user=sent[2], scoreSentence=rep(0, nrow(sent)) )
MAX_SIZE = 10000
mc_list = list()
mc_list[[ceiling(nrow(sent)/MAX_SIZE)]] = 0  # pre-allocate the list to its final length
for ( ii in 0:(ceiling(nrow(sent)/MAX_SIZE)-1) ) {
  mc_list[[ii+1]] = (1 + ii * MAX_SIZE):min( (ii+1)*MAX_SIZE, nrow(sent) )
}
bbb[,"scoreSentence"] = unlist(mclapply(mc_list, scoreSentenceWrapper))

Efficiency of transforming counts to percentages and index scores

I currently have the following code that produces the desired results I want (Data_Index and Data_Percentages)
Input_Data <- read.csv("http://dl.dropbox.com/u/881843/RPubsData/gd/2010_pop_estimates.csv", row.names=1, stringsAsFactors = FALSE)
Input_Data <- data.frame(head(Input_Data))
Rows <-nrow(Input_Data)
Vars <-ncol(Input_Data) - 1
#Total population column
TotalCount <- Input_Data[1]
#Total population sum
TotalCountSum <- sum(TotalCount)
Input_Data[1] <- NULL
VarNames <- colnames(Input_Data)
Data_Per_Row <- c()
Data_Index_Row <- c()
for (i in 1:Rows) {
#Proportion of all areas population found in this row
OAPer <- TotalCount[i, ] / TotalCountSum * 100
Data_Per_Col <- c()
Data_Index_Col <- c()
for(u in 1:Vars) {
# For every column value in the selected row
# the percentage of that value compared to the
# total population (TotalCount) for that row is calculated
VarPer <- Input_Data[i, u] / TotalCount[i, ] * 100
# Once the percentage is calculated the index
# score is calculated by diving this percentage
# by the proportion of the total population in that
# area compared to all areas
VarIndex <- VarPer / OAPer * 100
# Binds results for all columns in the row
Data_Per_Col <- cbind(Data_Per_Col, VarPer)
Data_Index_Col <- cbind(Data_Index_Col, VarIndex)
}
# Binds results for completed row with previously completed rows
Data_Per_Row <- rbind(Data_Per_Row, Data_Per_Col)
Data_Index_Row <- rbind(Data_Index_Row, Data_Index_Col)
}
colnames(Data_Per_Row) <- VarNames
colnames(Data_Index_Row) <- VarNames
# Changes the index scores to range from -1 to 1
OldRange <- (max(Data_Index_Row) - min(Data_Index_Row))
NewRange <- (1 - -1)
Data_Index <- (((Data_Index_Row - min(Data_Index_Row)) * NewRange) / OldRange) + -1
Data_Percentages <- Data_Per_Row
# Final outputs
Data_Index
Data_Percentages
The problem I have is that the code is very slow. I want to be able to use it on a dataset with 200,000 rows and 200 columns (which, with the code as it stands, would take around 4 days). I am sure there must be a way of speeding this process up, but I am not sure how exactly.
What the code does is take (in this example) a population counts table, divided into age bands and areas, and turn it into percentages and index scores. Currently there are two loops, so that every value in every row and column is selected individually and has calculations performed on it. I assume it is these loops that make it run slowly; are there any alternatives that produce the same results, but quicker? Thanks for any help you can offer.
This replaces your entire code. The for loop is not necessary, and neither is apply: the division can be performed on the whole matrix at once.
df <- Input_Data
total_count <- df[, 1]
total_sum <- sum(total_count)
df <- df[, -1]
# equivalent of your for-loop
oa_per <- total_count/total_sum * 100
Data_Per_Row <- df/matrix(rep(total_count, each=5), ncol=5, byrow=T)*100 # 5 = ncol(df) in this example
Data_Index_Row <- Data_Per_Row/oa_per * 100
names(Data_Per_Row) <- names(Data_Index_Row) <- names(df)
# rest of your code: identical
OldRange = max(Data_Index_Row) - min(Data_Index_Row)
NewRange = (1 - -1)
Data_Index = (((Data_Index_Row - min(Data_Index_Row)) * NewRange) / OldRange) + -1
Data_Percentages <- Data_Per_Row
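As an aside, the same row-wise division can be written without hard-coding the column count, for instance with sweep(); a minimal sketch on the df and total_count defined above:
# divide every row of df by its total count, whatever ncol(df) happens to be
Data_Per_Row <- sweep(df, 1, total_count, "/") * 100
Data_Index_Row <- Data_Per_Row / oa_per * 100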
Get rid of the "i" loop and use apply to calculate OAPer:
OAPer <- apply(TotalCount, 1,
               function(x, tcs) x/tcs*100,
               tcs = TotalCountSum)
Likewise, you can vectorize the work inside the "u" loop as well; see the sketch below. Also, some comments in your code would be appreciated.
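A minimal sketch of that vectorization, using the question's own variables (assuming Input_Data has already had its first column removed, as in the original code; R recycles the length-Rows vectors down each column, which is exactly the row-wise arithmetic of the loop):
VarPer_all <- Input_Data / TotalCount[, 1] * 100     # percentage of each row's total
Data_Index_Row <- as.matrix(VarPer_all / OAPer * 100)
Data_Per_Row <- as.matrix(VarPer_all)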

Pulling information from the first approximate match of a text string in R (and summing the total number of matches)

I'm having trouble summing approximate matches of text strings, as well as pulling information from the string that occurred first in time.
I have data that look like this:
text<-c("THEN it goes West","AT it falls East","it goes West", "it falls East", "AT it goes West")
date<-c(2008,2009,2003,2006,2011)
ID<-c(1,2,3,4,5)
data<-cbind(text,date,ID)
data<-as.data.frame(data)
Notice that the latest text strings have all-caps "THEN" and "AT" added to the earlier text strings.
I would like a table that looks like this:
ID Sum Originaltext Originaldate
[1,] "4" "3" "it goes West" "2003"
[2,] "2" "2" "it falls East" "2006"
This includes:
The ID number corresponding with the text with the earliest date (the "original" text that the others were derived from).
Sums of all approximate matches for each.
The text corresponding with the earliest date.
And the date of the text corresponding with the earliest date.
I have tens of millions of cases, so I'm having trouble automating the process.
I run Windows 7, and have access to fast-computing servers.
IDEAS
#order them backwards in time
data<-data[order(data$date, decreasing = TRUE),]
#find the strings with the latest date
pattern<-"AT|THEN"
k <- vector("list", length(data$text))
for (j in 1:length(data$text)){
k[[j]]<- grep(pattern,data$text[[j]], ignore.case=FALSE)
}
k<-subset(data$text, k==1)
k<-unique(k)
#this is a problem, because case nos. 1 and 5 are still in the dataset, but they derive from the same tweet.
From here, I can use "agrep", but I'm not sure in what context. Any help would be greatly appreciated!
NOTE: While the three answers below do answer my question the way I originally asked it, I have not mentioned that my text cases do vary even without the words "AT" and "THEN". In fact, most of them do not match exactly. I should have put this in the original question. However, I would still love an answer.
Thanks!
A data.table solution avoiding stringr. I am sure this could be improved
Dealing with text data
# make the factor columns character
.data <- lapply(data, function(x) if(is.factor(x)) {as.character(x)} else { x})
library(data.table)
DT <- as.data.table(.data)
DT[, original_text := text]
# using `%like% which is an easy data.table wrapper for grepl
DT[text %like% "^THEN", text := substr(text, 6, nchar(text))]
DT[text %like% "^AT", text := substr(text, 4, nchar(text))]
# or avoiding the two vector scans and replacing in one fell swoop
DT[,text := gsub('(^THEN )|(^AT )', '', text)]
DT[, c(sum=.N, .SD[which.min(date)]) ,by=text]
using factor levels (could be faster)
# assuming that text is a factor
DTF <- as.data.table(data)
DTF[, original_text := text]
levels_text <- DTF[, levels(text)]
new_levels <- gsub('(^THEN )|(^AT )', x= levels_text ,'')
# reset the levels
setattr(DTF[['text']], 'levels', new_levels)
# coerce to character and do the same count / min date
DTF[, c(sum=.N, .SD[which.min(date)]) ,by=list(text = as.character(text))]
I'm going to give you a base solution, but I really think this is a big problem for base R and that the data.table package is what is needed (though I don't know how to use data.table very well):
dat <- data[order(data$date), ]
Trim <- function (x) gsub("^\\s+|\\s+$", "", x)
dat$text2 <- Trim(gsub("AT|THEN", "", dat$text))
dat2 <- split(dat, dat$text2)
FUN <- function(x) {
c(ID = x[1, 3], Sum = nrow(x), Original.Text = as.character(x[1, 1]),
Original.Date = as.character(x[1, 2]))
}
data.frame(do.call(rbind, lapply(dat2, FUN)), row.names = NULL)
I don't really know how close the text strings are to each other, so maybe my exact matching is not appropriate; if that's the case, use agrep to develop a new grouping variable, as in the sketch below. Sorry for the lack of annotations, but I am pressed for time, and I think data.table is more appropriate anyway.
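For what it's worth, here is a rough sketch of how agrep could build such a grouping variable; max.distance is a tuning knob you would have to calibrate to your data:
# assign each row to its first approximate match among the rows seen so far
# (dat is sorted by date, so the first match is the earliest text)
dat$group <- NA_integer_
for (i in seq_len(nrow(dat))) {
  hit <- agrep(dat$text2[i], dat$text2[seq_len(i)], max.distance = 0.2)
  dat$group[i] <- hit[1]
}
dat2 <- split(dat, dat$group)  # then proceed with FUN as above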
EDIT: I still think data.table is the way to go, but maybe running in parallel is smart. You're on a Windows machine, so this approach will let you use multiple cores:
dat <- data[order(data$date), ]
Trim <- function (x) gsub("^\\s+|\\s+$", "", x)
dat$text2 <- Trim(gsub("AT|THEN", "", dat$text))
dat2 <- split(dat, dat$text2)
FUN <- function(x) {
c(ID = x[1, 3], Sum = nrow(x), Original.Text = as.character(x[1, 1]),
Original.Date = as.character(x[1, 2]))
}
library(parallel)
detectCores() #make sure you have > 1 core
cl <- makeCluster(mc <- getOption("cl.cores", detectCores()))
clusterExport(cl=cl, varlist=c("FUN", "dat2"), envir=environment())
x <- parLapply(cl, dat2, FUN)
stopCluster(cl) #stop the cluster
data.frame(do.call(rbind, x), row.names = NULL)
plyr might be too slow given the number of records you mention, but here is a solution for you:
library(plyr)
library(stringr)
data$original_text <- data$text
data$text[grep("^THEN", data$text)] <- str_trim(str_sub(data$text[grep("^THEN", data$text)],6))
data$text[grep("^AT", data$text)] <- str_trim(str_sub(data$text[grep("^AT", data$text)],4))
result <- ddply(data, .(text), function(x) {
sum <- nrow(x)
x <- x[which(x$date==min(x$date)),]
return(data.frame(id=unique(x$ID), Sum = sum, Originaltext = unique(x$original_text), Originaldate = unique(x$date)))
})
> result[, -1]
id Sum Originaltext Originaldate
1 4 2 it falls East 2006
2 3 3 it goes West 2003
If you have access to a multicore machine (4 or more cores), then here is an HPC solution:
library(multicore)
library(stringr)
data$original_text <- data$text
data$text[grep("^THEN", data$text)] <- str_trim(str_sub(data$text[grep("^THEN", data$text)],6))
data$text[grep("^AT", data$text)] <- str_trim(str_sub(data$text[grep("^AT", data$text)],4))
fux <- function(x) {
sum <- nrow(x)
x <- x[which(x$date==min(x$date)),]
return(data.frame(id=unique(x$ID), Sum = sum, Originaltext = unique(x$original_text), Originaldate = unique(x$date)))
}
x <- split(data, data$text)
result <- mclapply(x, fux, mc.cores = 4, mc.preschedule = TRUE)
