This is a question for anyone familiar with the 'stringdist' package.
I am trying to write a function that does the following:
Searches a very long list of characters such as this (only 16 of ~1 million shown):
> stripList
[1] "AAAAAAAAAAAAAAAAAAAAAAAAAAAADAABAAADCDDAD" "BAAAABBBDACDBABAAADDCBDADBCCBDCDDCDBCDDBA"
[3] "BDDABDCCAAABABBAACADCBDADBCCBDCDDCDBCDDBA" "AADBBACDDDBABDCABAADBCADCBDDDCCC"
[5] "BBCDBBDCCBABDBCABDBBDBDDDADCDDADDDCDDCDDD" "BDDCDACABDCCBACBADCDCBDADBCCBDCDDCDDCDDBA"
[7] "BCDBADCBBDDBBBBDCBDADBCCBDCDDCDBCDDDDAAAA" "DABDDCDACABDCCBACBADC"
[9] "CABABDDCCCCACDCCDCCDADCAAAAAAAAACADADDADA" "BAABCBBBDBCDCDDADDDDCDDADBCCBDCDD"
[11] "BBDDDACDCABDDDBBACDCBDADBCCDDCDDCDDCDDBDD" "BDDABDCCAAABABBBACADCBDADBCCBDCDDCDBCDDBA"
[13] "BDDBBBBDDBDABBACDBDCBDADBCCBDCDD" "BDDABDCCAAABABBBACADCBDADBCCBDCDDCDBCDDBA"
[15] "DABDDCDACABDCCBACBADC" "BBADBACDDBABAACABCABCDCBDADBCCBDCDDCDDDDD"
For instances of each sequence in a query list that is structured like this.
Ex:
SeqName1 # queryNames
BBCDBBDCCBABDBCA # querySeqs
SeqName2 # queryNames
BBBDCCDCCCCDDDCAAACD # querySeqs
I want to see how many times (if any) each query sequence shows up in my 'stripList', allowing for 1 insertion, 1 deletion, 1 substitution, and 1 transposition, and get an output like this:
>dt
queryNames TimesFound
SeqName1 5
SeqName2 145
To do so I am using the 'amatch' function of the 'stringdist' package in the following manner:
dt<-rapply(as.list(querySeqs), function(x) amatch(x, stripList, method = "osa", useBytes = TRUE, weight = c(d = 0.5, i = 0.5, s = 0.9, t = 0.9), maxDist=0.9))
dt<-data.frame(dt)
colnames(dt) <- "TimesFound"
dt<-cbind(queryNames,dt)
I have a few questions:
In the 'amatch' function, when using method = "osa", how is the "weight" argument interpreted? As an example, if I were to use:
method = "osa", weight = c(d = 0.5, i = 0.5, s = 0.9, t = 0.9), maxDist=0.9
am I saying that I want 90% matching of my "querySeqs"? Meaning, do those fractions pertain to "querySeqs" or my table (stripList)?
What role does "maxDist" play? (Is it also interpreted as a percentage?)
Is there a way to maximize the runtime efficiency of my code above (perhaps by using data.table, etc.)? I only ask because my actual datasets are ~2000 query sequences being searched through a list of ~1,000,000 sequences.
Is there a better way than 'amatch' to look for whole sequences (not just substrings of them like 'agrep' does)?
I apologize if these are elementary questions; the documentation on this is vague to me and, frankly, I'm still learning.
Thanks in advance.
This question seems to have been up here for a while, and I've only just found it. In short:
(1) The weights are penalties for each edit operation. They allow you to tell amatch that, e.g., a deletion or insertion is OK, but that a transposition should be penalized more heavily. Judging by your question, you can probably leave the weights as they are.
(2) maxDist tells amatch that if two strings are more than maxDist apart, they will never be considered a match. The default is zero, so only exact matches are allowed. It is not a percentage; the relevant values of maxDist depend on which distance function is used. I think you could use method='osa', maxDist=1 (allowing for a single transposition, insertion, deletion or substitution, but no combinations), or possibly maxDist=4 if you're willing to allow combinations of up to four edits. For the edit-like distances, the distance is bounded by the number of characters in the longer string. Please see the R Journal paper for the ranges of all supported distances: http://journal.r-project.org/archive/2014-1/loo.pdf (there is a short illustration after point (4) below).
(3) I am optimizing the code all the time, and version 0.9 will use multithreading. I see you are using rapply; you could probably avoid this by just using
amatch(querySeqs,stripList,method='osa',maxDist=4)
(4) At the moment, I think that amatch is the best implementation for R (but as I'm the author, I may be biased :)).
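To make the maxDist behaviour concrete, and to get a per-query count like the TimesFound column in your example (note that amatch returns a single index, the position of the closest match in the table, not a count), here is a small sketch using stringdist from the same package. It assumes querySeqs and stripList are character vectors as in your post, and the threshold of 4 edits is only an illustration; it still computes length(querySeqs) x length(stripList) distances, so expect it to take a while on ~2000 x ~1,000,000 strings.
library(stringdist)
# maxDist gates matches on the raw edit distance, not on a percentage:
stringdist("ABCD", "ABDC", method = "osa")           # 1: one transposition
amatch("ABDC", "ABCD", method = "osa", maxDist = 0)  # NA: only exact matches allowed
amatch("ABDC", "ABCD", method = "osa", maxDist = 1)  # 1: matches the first (only) table entry
# Counting how many strips each query matches within 4 edits:
TimesFound <- sapply(querySeqs, function(q)
  sum(stringdist(q, stripList, method = "osa") <= 4))
dt <- data.frame(queryNames, TimesFound)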
Related
I have a customer who sends electronic payments but doesn't bother to specify which invoices they cover. I'm left guessing which ones, and I would rather not try every single combination manually. I need some sort of pseudo-code to do it, which I can then adapt, but I'm not sure I can come up with a good algorithm myself. I'm familiar with PHP, Bash, and Python, but I can adapt.
I would need an array with the following numbers: [357.15, 223.73, 106.99, 89.96, 312.39, 120.00]. Those are the amounts of the invoices. Then I would need to find any combination of two or more of those numbers that adds up to 596.57. Once found, the program would need to tell me exactly which numbers it used to reach the sum, so I can then know which invoices got paid.
This is very similar to the Subset Sum problem and can be solved using a similar approach to the typical brute-force method used for that problem. I have to do this often enough that I keep a simple template of this algorithm handy for when I need it. What is posted below is a slightly modified version1.
This has no restrictions on whether the values are integer or float. The basic idea is to iterate over the list of input values and keep a running list of every subset that sums to less than the target value (since there might be a later value in the inputs that will yield the target). It could be modified to handle negative values as well by removing the rule that only keeps candidate subsets if they sum to less than the target. In that case, you'd keep all subsets, and then search through them at the end.
import copy

def find_subsets(base_values, target):
    possible_matches = [[0, []]]  # [[known_attainable_value, [list, of, components]], [...], ...]
    matches = []                  # we'll return ALL subsets that sum to `target`
    for base_value in base_values:
        temp = copy.deepcopy(possible_matches)  # Can't modify in loop, so use a copy
        for possible_match in possible_matches:
            new_val = possible_match[0] + base_value
            if new_val <= target:
                new_possible_match = [new_val, possible_match[1]]
                new_possible_match[1].append(base_value)
                temp.append(new_possible_match)
                if new_val == target:
                    matches.append(new_possible_match[1])
        possible_matches = temp
    return matches

find_subsets([357.15, 223.73, 106.99, 89.96, 312.39, 120.00], 596.57)  # e.g. the invoice amounts from the question
This is a very inefficient algorithm and it will blow up quickly as the size of the input grows. The Subset Sum problem is NP-Complete, so you are not likely to find a generalized solution that will work in all cases and is efficient.
1: The way lists are being used here is kludgy. If the goal was to simply find any match, the nested lists could be replaced with a dictionary, and we could exit right away once a match is found. But doing that will cause intermediate subsets that sum to the same value to also map to the same dictionary slot, so only one subset with that sum is kept. Since we need to report all matching subsets (because the values represent checks and are presumably not fungible even if the dollar amounts are equal), a dictionary won't work.
You can use itertools.combinations(t,r) to list all combinations of r elements in array t.
So we loop on the possible values of r, then on the results of itertools.combinations:
import itertools

def find_sum(t, obj):
    t = [x for x in t if x < obj]  # filter out elements which are too big
    for r in range(1, len(t)+1):   # loop on the number of elements
        for subt in itertools.combinations(t, r):  # loop on combinations of r elements
            if sum(subt) == obj:
                return subt
    return None

find_sum([1,2,3,4], 6)
# (2, 4)
find_sum([1,2,3,4], 10)
# (1, 2, 3, 4)
find_sum([1,2,3,4], 11)
# None
find_sum([35715, 22373, 10699, 8996, 31239, 12000], 59657)
# None
Rounding errors:
The code above is meant to be used with integers, rather than floats.
To use it with floats, replace the test sum(subt) == obj with the more forgiving test abs(sum(subt) - obj) < 0.01, or convert the amounts to integer cents as in the last example above.
Relevant documentation:
itertools.combinations
I'm trying to clean up some dirty data (for work). My data frame has a column of customer information (for our example, let's say store and product) in a long, weird string, as well as a column for store and a column for product. I can parse the store and the product from the string. Here is where I arrive at my problem.
Let's say (consider these vectors part of a larger data frame, prefixed with data$ if that helps; I was just working with them as vectors, thinking it might speed up the code not to pull the whole data frame):
WeirdString <- c("fname: john; lname:smith; store:Amazon Inc.; product:Echo", "fname: cindy; lname:smith; store:BestBuy; product:Ps-4","fname: jon; lname:smith; store:WALMART; product:Pants")
so I parse this to be:
WS_Store <- c("Amazon Inc.", "BestBuy", "WALMART")
WS_Prod <- c("Echo", "Ps-4", "Pants")
What's in the tables (i.e. the non-parsed columns) is:
DB_Store <- c("Amazon", "BEST BUY", "Other")
DB_Prod <- c("ECHO", "PS4", "Jeans")
I am currently using a for loop over i to grepl the "true" string against the parsed string. This takes forever, and I know R is designed for vectorized code. So my question is: how do I eliminate the loop and use something like lapply (which I tried and failed at, because I'm not savvy enough with lapply) or some other vectorized approach?
My current code:
for(i in 1:nrow(data)){ # could be i in seq_along(DB_Prod) or whatever; all vectors are the same length
  Diff_Store[i] <- !grepl(DB_Store[i], WS_Store[i], ignore.case = T)
  Diff_Prod[i]  <- !grepl(DB_Prod[i],  WS_Prod[i],  ignore.case = T)
}
I intend to append those columns back into the dataframe, as the true goal is to diagnose why the database has this problem.
If there's a better way than trying to vectorize this, I'm open to it. The data in DB_Store is restricted to a specific set of "stores" (in the table it comes from), but in the string it seems to be free-form, which is why I use the DB value as the pattern, not the x. Product is similar but less restricted, which is why some have dashes and some don't. I would love to match "close things" like Ps-4 vs. PS4, but I will probably just build a lookup table of matches once I see how weird the string gets. To be fair, the string may not match at all, which is what the Pants/Jeans pair represents. The dataset is 2.5 million records with many different "stores" and "products", and I want to compare them on the same row, not ask "is it anywhere in the database" (which is what previous questions seem to ask: whether a string is in a list of strings rather than a 1:1 comparison; and the last such question ended in a loop, which takes minutes to hours to run).
Thanks!
Please check if this works for you:
check <- function(vec_a, vec_b){
  mat <- cbind(vec_a, vec_b)
  diff <- apply(mat, 1, function(x) !grepl(pattern = x[1], x = x[2], ignore.case = TRUE))
  diff
}
Use your vectors for stores (or products) as the arguments vec_a and vec_b, respectively (example: diff_stores <- check(DB_Store, WS_Store)). This function will return a logical vector with TRUE values marking the items that weren't a match between the two original vectors. Is this what you wanted?
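If it helps, the same row-wise comparison can also be written with mapply instead of building a matrix first. This is only a sketch using the vectors from the question; mapply is still an implicit loop, so don't expect a big speed-up over the apply version.
# row-wise grepl; TRUE marks pairs that do NOT match
Diff_Store <- !mapply(grepl, pattern = DB_Store, x = WS_Store,
                      MoreArgs = list(ignore.case = TRUE), USE.NAMES = FALSE)
Diff_Prod  <- !mapply(grepl, pattern = DB_Prod, x = WS_Prod,
                      MoreArgs = list(ignore.case = TRUE), USE.NAMES = FALSE)
data.frame(DB_Store, WS_Store, Diff_Store, DB_Prod, WS_Prod, Diff_Prod)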
For my data analytics problem, I usually need to normalize names: for two names A and B, I'd consider them the same or very similar if A and B share a substantial number of common substrings, regardless of the order of those substrings.
For example, for "COLD" and c("FLOOD", "COLD/WIND CHILL"), I'd like "COLD/WIND CHILL" to come out as much more similar to "COLD" than "FLOOD" is.
My current assignment is in R. So my concrete questions are the following:
Is such a metric already defined in R?
Is it possible to provide my own implementation and somehow integrate it with R's stringdist package?
For my requirement, I could even use a simple regular-expression search: as long as I can find A in B or B in A, I'd consider their distance to be 0.
Thanks a lot!
Edit:
In the context of the following:
> vv <- c("FLOOD", "COLD/WIND CHILL")
> sapply(vv, adist, y = "COLD")
FLOOD COLD/WIND CHILL
3 11
I wish the distance from "COLD" to "COLD/WIND CHILL" were smaller than the distance from "COLD" to "FLOOD".
It seems the metric has to ignore the remaining characters that would be deleted once the matching substring is found.
Edit1:
My original problem has been solved. Here is a follow-up with a related problem using amatch from stringdist in R:
It seems that I am not able to reproduce with amatch the equivalent results I get with adist, or even with stringdist from the same package.
Below is the illustration:
vv <- c("FLOOD", "COLD/WIND CHILL")
sapply(vv, adist, y = "COLD",costs=list(deletions=0))
FLOOD COLD/WIND CHILL
2 0
stringdist("COLD", c("FLOOD", " COLD/WIND CHILL"), method = 'lv', weight=c(0.001, 0.99, 0.99, 0.99))
[1] 1.981 1.002
amatch("COLD", c("FLOOD", " COLD/WIND CHILL"), method = 'lv', weight=c(0.0001, 0.999, 0.999, 0.999), maxDist = 100)
[1] 1
In the above context, given the computation by stringdist, amatch should return 2 instead of 1.
Based on the documentation of stringdist:
"weight:
For method='osa' or 'dl', the penalty for deletion, insertion, substitution and transposition, in that order. When method='lv', the penalty for transposition is ignored. "
I have chosen the weights accordingly, removing the penalty for deletion while maximizing the penalty for the other operations. It's encouraging that stringdist shows the expected behavior with this weight setting.
I'd assume that amatch uses stringdist to do the calculation, but it seems strange that the behavior of amatch contradicts that of stringdist!
I wish to get amatch working so that I don't have to re-implement it using adist or stringdist.
Thanks for help again.
You can use adist for fuzzy distance. The distance is a generalized Levenshtein distance.
vv <- c("COLD","FLOOD")
sapply(vv,adist,y="COLD/WIND CHILL")
## COLD FLOOD
## 11 13 ## the distance to COLD < distance to FLOOD
edit after OP update:
You can play with the costs parameter to set how you want the distance to be computed in terms of deletions, substitutions, and insertions. Here, for example:
sapply(vv, adist, y = "COLD",costs=list(deletions=0))
FLOOD COLD/WIND CHILL
2 0
Here is one direction to pursue. Basically, it breaks your text up into trigrams (three-letter sequences) and returns associations between each trigram and all the others that reach the correlation level you set (here, 0.8). The glitch is that this code works only at the word level, not at the trigram level as intended. Perhaps if the text file were larger there would be a difference?
library(tm)
library("RWeka")
text <- c("FLOOD", "COLD/WIND CHILL", "OLD", "FRIGID", "FLOW")
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 3, max = 3))
corpus <- Corpus(VectorSource(text))
tdm <- TermDocumentMatrix(corpus, control = list(tokenize = BigramTokenizer))
lapply(tdm$dimnames$Terms, function(x) findAssocs(tdm, x, 0.8))
I would like to find all combinations of vector elements that match a specific condition. The function expand.grid returns all possible combinations without checking for a condition. It is possible to test for the condition after using expand.grid, but in some situations the number of possible combinations is too large to generate them all with expand.grid. So, is there a function that allows me to check a condition while generating the combinations?
This is a simplified version of the problem:
A <- seq.int(12, from=0, by=1)*15
B <- seq.int(27, from=0, by=1)*23
C <- seq.int(18, from=0, by=1)*18
D <- seq.int(33, from=0, by=1)*10
out<-expand.grid(A,B,C,D) #out is a dataframe with 235144 x 4 as dimensions
idx<-which(rowSums(out)<=400 & rowSums(out)>=300) #Only a small fraction of 'out' is needed
results <- out[idx,]
In a word, no. After all, if you knew a priori which combinations were desirable/undesirable, you could exclude them from the expansion, e.g. expand.grid(A[A<20], B[B<15], ...). In the general case, which I'm assuming is your real question, you have no simple way to exclude portions of the input vectors.
You might just want to write a multilevel loop which tests each combination in turn and saves or rejects it. This will be slow (again, unless you come up with some clever algorithm to predict regions which are all TRUE or FALSE). So, in the long run, you may be better off using some of the R-packages which partition large calculations (and datasets) so as to avoid exceeding your memory limits.
Now that I've said all that, someone's going to post a link to a package which does exactly that :-(
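For what it's worth, here is a minimal sketch of the "test each combination and keep or reject it" idea, using the A, B, C, D vectors from the question. It loops over two of the vectors, prunes branches whose partial sum already exceeds the upper bound, and only expands the remaining two vectors; all names are illustrative and not from any package.
keep <- list()
k <- 1
grid_cd <- expand.grid(C = C, D = D)  # the small inner grid is built once
for (a in A) {
  for (b in B) {
    partial <- a + b
    if (partial > 400) next           # prune: C and D are non-negative, so the sum can only grow
    s <- partial + grid_cd$C + grid_cd$D
    ok <- s >= 300 & s <= 400
    if (any(ok)) {
      keep[[k]] <- cbind(A = a, B = b, grid_cd[ok, ])
      k <- k + 1
    }
  }
}
results <- do.call(rbind, keep)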
(I've tried asking this on BioStars, but for the slight chance that someone from text mining would think there is a better solution, I am also reposting this here)
The task I'm trying to achieve is to align several sequences.
I don't have a base pattern to match against. All I know is that the "true" pattern should be of length 30 and that the sequences I have had characters deleted from them at random points.
Here is an example of such sequences, where on the left we see the real locations of the missing values, and on the right we see the sequence we actually get to observe.
My goal is to reconstruct the left column using only the sequences in the right column (based on the fact that many of the letters in each position are the same).
Real_sequence The_sequence_we_see
1 CGCAATACTAAC-AGCTGACTTACGCACCG CGCAATACTAACAGCTGACTTACGCACCG
2 CGCAATACTAGC-AGGTGACTTCC-CT-CG CGCAATACTAGCAGGTGACTTCCCTCG
3 CGCAATGATCAC--GGTGGCTCCCGGTGCG CGCAATGATCACGGTGGCTCCCGGTGCG
4 CGCAATACTAACCA-CTAACT--CGCTGCG CGCAATACTAACCACTAACTCGCTGCG
5 CGCACGGGTAAGAACGTGA-TTACGCTCAG CGCACGGGTAAGAACGTGATTACGCTCAG
6 CGCTATACTAACAA-GTG-CTTAGGC-CTG CGCTATACTAACAAGTGCTTAGGCCTG
7 CCCA-C-CTAA-ACGGTGACTTACGCTCCG CCCACCTAAACGGTGACTTACGCTCCG
Here is an example code to reproduce the above example:
ATCG <- c("A","T","C","G")
set.seed(40)
original.seq <- sample(ATCG, 30, T)
seqS <- matrix(original.seq,200,30, T)
change.letters <- function(x, number.of.changes = 15, letters.to.change.with = ATCG)
{
  number.of.changes <- sample(seq_len(number.of.changes), 1)
  new.letters <- sample(letters.to.change.with, number.of.changes, T)
  where.to.change.the.letters <- sample(seq_along(x), number.of.changes, F)
  x[where.to.change.the.letters] <- new.letters
  return(x)
}
change.letters(original.seq)
insert.missing.values <- function(x) change.letters(x, 3, "-")
insert.missing.values(original.seq)
seqS2 <- t(apply(seqS, 1, change.letters))
seqS3 <- t(apply(seqS2, 1, insert.missing.values))
seqS4 <- apply(seqS3,1, function(x) {paste(x, collapse = "")})
require(stringr)
# library(help=stringr)
all.seqS <- str_replace_all(seqS4, "-", "")  # str_replace_all, since str_replace would remove only the first "-"
# how do we align this?
data.frame(Real_sequence = seqS4, The_sequence_we_see = all.seqS)
I understand that if all I had was a string and a pattern I would be able to use
library(Biostrings)
pairwiseAlignment(...)
But in the case I present we are dealing with many sequences to align to one another (instead of aligning them to one pattern).
Is there a known method for doing this in R?
Writing an alignment algorithm in R looks like a bad idea to me, but there is an R interface to the MUSCLE algorithm in the bio3d package (function seqaln()). Be aware that you have to install the MUSCLE program itself first.
Alternatively, you can use any of the available programs (e.g. ClustalW, MAFFT, T-COFFEE) and import the resulting multiple sequence alignments into R using Bioconductor functionality. See e.g. here.
Though this is quite an old thread, I do not want to miss the opportunity to mention that, since Bioconductor 3.1, there is a package 'msa' that implements interfaces to three different multiple sequence alignment algorithms: ClustalW, ClustalOmega, and MUSCLE. The package runs on all major platforms (Linux/Unix, Mac OS, and Windows) and is self-contained in the sense that you need not install any external software. More information can be found on http://www.bioinf.jku.at/software/msa/ and http://www.bioconductor.org/packages/release/bioc/html/msa.html.
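As a rough illustration of the msa route (assuming the package has been installed from Bioconductor and that all.seqS is the character vector built in the question's example code):
library(msa)                  # also attaches Biostrings
dna <- DNAStringSet(all.seqS)
aln <- msa(dna, method = "ClustalW")
print(aln, show = "complete")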
You can perform multiple alignment in R with the DECIPHER package.
Following your example, it would look something like:
library(DECIPHER)
dna <- DNAStringSet(all.seqS)
aligned_DNA <- AlignSeqs(dna)
It is fast and at least as accurate as the other methods listed here (see the paper). I hope that helps!
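If it helps for eyeballing the result, DECIPHER can also display the alignment in a browser; a small follow-up to the code above (the highlight argument is taken from the BrowseSeqs help page, so please double-check it against your installed version):
BrowseSeqs(aligned_DNA, highlight = 0)  # highlight = 0 marks positions that differ from the consensus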
You are looking for a global alignment algorithm on multiple sequences.
Did you look at Wikipedia before asking?
First learn what global alignment is, then look for multiple sequence alignment.
Wikipedia doesn't give a lot of details about algorithms, but this paper is better.