Efficient string similarity grouping - r

Setting:
I have data on people and their parents' names, and I want to find siblings (people with identical parent names).
pdata<-data.frame(parents_name=c("peter pan + marta steward",
"pieter pan + marta steward",
"armin dolgner + jane johanna dough",
"jack jackson + sombody else"))
The expected output here would be a column indicating that the first two observations belong to family X, while the third and fourth observations are each in a separate family, e.g.:
| person_id | parents_name | family_id |
| 1 | "peter pan + marta steward" | 1 |
| 2 | "pieter pan + marta steward" | 1 |
| 3 | "armin dolgner + jane johanna dough" | 2 |
| 4 | "jack jackson + sombody else" | 3 |
Current approach:
I am flexible regarding the distance metric. Currently I use Levenshtein edit distance to match observations, allowing for two-character differences. Other variants such as "longest common substring" would be fine if they run faster.
For smaller subsamples I use stringdist::stringdist in a loop or stringdist::stringdistmatrix, but this is getting increasingly inefficient as sample size increases.
The matrix version explodes in memory once a certain sample size is reached. My terribly inefficient attempt at looping is here:
#create data of the same complexity using random last names
#(4 million obs and ~1-3 kids per parent)
pdata <- data.frame(parents_name = paste0(rep(c("peter pan + marta ",
                                                "pieter pan + marta ",
                                                "armin dolgner + jane johanna ",
                                                "jack jackson + sombody "), 1e6),
                                          stringi::stri_rand_strings(4e6, 5)))

for (i in 1:nrow(pdata)) {
  similar_fathersname0 <- stringdist::stringdist(pdata$parents_name[i],
                                                 pdata$parents_name[i:nrow(pdata)],
                                                 nthread = 4) < 2
  #[create grouping indicator]
}
My question: There should be substantial efficiency gains, e.g. because I could stop comparing strings once I have found them to be sufficiently different in something that is easier to assess, e.g. string length or the first word. The string-length variant already works and reduces complexity by a factor of ~3, but that is by far too little. Any suggestions to reduce computation time are appreciated.
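For reference, a minimal sketch of the length-blocking pre-filter I mean (purely illustrative; the variable names are my own):

#Sketch only: skip comparisons whose string lengths already differ too much,
#since a small Levenshtein distance implies a small length difference.
max_dist <- 2
len <- nchar(as.character(pdata$parents_name))
for (i in 1:nrow(pdata)) {
  j <- which(seq_along(len) > i & abs(len - len[i]) <= max_dist)
  if (length(j) == 0) next
  d <- stringdist::stringdist(pdata$parents_name[i], pdata$parents_name[j],
                              nthread = 4)
  similar <- j[d < max_dist]
  #[create grouping indicator as before]
}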
Remarks:
The strings are actually in Unicode, not in the Latin alphabet (Devanagari)
Pre-processing to drop unused characters etc. is already done

There are two challenges:
A. The parallel execution of the Levenshtein distance - instead of a sequential loop
B. The number of comparisons: if our source list has 4 million entries, theoretically we should run 16 trillion Levenshtein distance measures, which is unrealistic, even if we resolve the first challenge.
To make my use of language clear, here are our definitions:
we want to measure the Levenshtein distance between expressions.
every expression has two sections, the parent A full name and the parent B full name which are separated by a plus sign
the order of the sections matters (i.e. two expressions 1 and 2 are identical if Parent A of expression 1 = Parent A of expression 2 and Parent B of expression 1 = Parent B of expression 2. Expressions will not be considered identical if Parent A of expression 1 = Parent B of expression 2 and Parent B of expression 1 = Parent A of expression 2)
a section (or a full name) is a series of words, which are separated by spaces or dashes and correspond to the first name and last name of a person
we assume the maximum number of words in a section is 6 (your example has sections of 2 or 3 words, I assume we can have up to 6)
the sequence of words in a section matters (the section is always a first name followed by a last name and never the last name first, e.g. Jack John and John Jack are two different persons).
there are 4 million expressions
expressions are assumed to contain only English characters. Numbers, spaces, punctuation, dashes, and any non-English character can be ignored
we assume the easy matches are already done (like the exact expression matches) and we do not have to search for exact matches
Technically the goal is to find series of matching expressions in the 4-million expressions list. Two expressions are considered matching expressions if their Levenshtein distance is less than 2.
Practically we create two lists, which are exact copies of the initial 4-million expressions list. We call them the Left list and the Right list. Each expression is assigned an expression id before duplicating the list.
Our goal is to find entries in the Right list which have a Levenshtein distance of less than 2 to entries of the Left list, excluding the same entry (same expression id).
I suggest a two-step approach to resolve the two challenges separately. The first step will reduce the list of possible matching expressions, the second will simplify the Levenshtein distance measurement since we only look at very close expressions. The technology used is any traditional database server, because we need to index the data sets for performance.
CHALLENGE A
Challenge A consists of reducing the number of distance measurements. We start from a maximum of approx. 16 trillion (4 million squared) and we should not exceed a few tens or hundreds of millions.
The technique to use here consists of searching for at least one similar word in the complete expression. Depending on how the data is distributed, this will dramatically reduce the number of possible matching pairs. Alternatively, depending on the required accuracy of the result, we can also search for pairs with at least two similar words, or with at least half of similar words.
Technically I suggest putting the expression list in a table. Add an identity column to create a unique id per expression, and create 12 character columns. Then parse the expressions and put each word of each section in a separate column. This will look like the following (I have not represented all 12 columns, but the idea is below):
|id | expression | sect_a_w_1 | sect_a_w_2 | sect_b_w_1 |sect_b_w_2 |
|1 | peter pan + marta steward | peter | pan | marta |steward |
There are empty columns (since there are very few expressions with 12 words) but it does not matter.
Then we replicate the table and create an index on every sect... column.
We run 12 joins which try to find similar words, something like
SELECT L.id, R.id
FROM left_table L JOIN right_table R
ON L.sect_a_w_1 = R.sect_a_w_1
AND L.id <> R.id
We collect the output in 12 temp tables and run a union query of the 12 tables to get a short list of all expressions which have a potential matching expression with at least one identical word. This is the solution to challenge A. We now have a short list of the most likely matching pairs. This list will contain millions of records (pairs of Left and Right entries), but not billions.
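The same blocking idea can also be sketched outside a database; for example, a rough R/data.table version of the one-shared-word join (object and column names here are my own, purely illustrative) might look like this:

library(data.table)
library(stringi)

#Illustrative sketch: candidate pairs = expressions sharing at least one word
#in the same word slot (section A word 1, section A word 2, ...).
dt <- data.table(id = seq_len(nrow(pdata)),
                 expression = as.character(pdata$parents_name))
words <- stri_split_regex(dt$expression, "[\\s+-]+")
max_w <- 12
word_cols <- paste0("w", seq_len(max_w))
dt[, (word_cols) := data.table::transpose(lapply(words, `length<-`, max_w))]

pairs_list <- lapply(word_cols, function(col) {
  x <- dt[, c("id", col), with = FALSE]
  setnames(x, col, "key_word")
  x <- x[!is.na(key_word)]
  m <- merge(x, x, by = "key_word", allow.cartesian = TRUE)
  m[id.x < id.y, .(id_left = id.x, id_right = id.y)]
})
candidate_pairs <- unique(rbindlist(pairs_list))
#candidate_pairs is the short list to score with an (approximate) edit distance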
CHALLENGE B
The goal of challenge B is to process a simplified Levenshtein distance in batch (instead of running it in a loop).
We should agree on what a simplified Levenshtein distance is.
First, we agree that the Levenshtein distance of two expressions is the sum of the Levenshtein distances of all the words of the two expressions which have the same index. I mean the Levenshtein distance of two expressions is the distance of their two first words, plus the distance of their two second words, etc.
Secondly, we need to invent a simplified Levenshtein distance. I suggest using an n-gram approach with only 2-character grams which have an index absolute difference of less than 2.
e.g. the distance between peter and pieter is calculated as below
Peter
1 = pe
2 = et
3 = te
4 = er
5 = r_
Pieter
1 = pi
2 = ie
3 = et
4 = te
5 = er
6 = r_
Peter and Pieter have 4 common 2-grams with an index absolute difference of less than 2: 'et', 'te', 'er', 'r_'. There are 6 possible 2-grams in the longer of the two words, so the distance is 6 - 4 = 2. (The exact Levenshtein distance here is actually 1, a single insertion of 'i', so this measure is only an approximation of it.)
This is an approximation which will not work in all cases, but I think in our situation it will work very well. If we're not satisfied with the quality of the results, we can try 3-grams or 4-grams, or allow a gram-index difference larger than 2. But the idea is to execute far fewer calculations per pair than in the traditional Levenshtein algorithm.
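As an illustration only, a rough R version of this positional 2-gram measure (my own sketch, not the SQL implementation described below) could look like:

#Sketch: approximate distance = number of 2-grams in the longer word minus the
#number of 2-grams shared at roughly the same position (index difference < 2).
gram2 <- function(w) {
  w <- paste0(tolower(w), "_")   #pad so the last letter forms a gram
  substring(w, 1:(nchar(w) - 1), 2:nchar(w))
}

approx_word_dist <- function(a, b) {
  ga <- gram2(a); gb <- gram2(b)
  used <- rep(FALSE, length(gb))
  matches <- 0L
  for (i in seq_along(ga)) {
    j <- which(!used & gb == ga[i] & abs(seq_along(gb) - i) < 2)
    if (length(j) > 0) { matches <- matches + 1L; used[j[1]] <- TRUE }
  }
  max(length(ga), length(gb)) - matches
}

approx_word_dist("peter", "pieter")   #2, as in the example above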
Then we need to convert this into a technical solution. What I have done before is the following:
First isolate the words: since we need only to measure the distance between words, and then sum these distances per expression, we can further reduce the number of calculations by running a distinct select on the list of words (we have already prepared the list of words in the previous section).
This approach requires a mapping table which keeps track of the expression id, the section id, the word id and the word sequence number for each word, so that the original expression distance can be calculated at the end of the process.
We then have a new list which is much shorter, and contains a cross join of all words for which the 2-gram distance measure is relevant.
Then we want to batch process this 2-gram distance measurement, and I suggest doing it in a SQL join. This requires a pre-processing step which consists of creating a new temporary table which stores every 2-gram in a separate row, and keeps track of the word id, the word sequence and the section type.
Technically this is done by slicing the list of words using a series (or a loop) of substring selects, like this (assuming the word list tables - there are two copies, one Left and one Right - contain 2 columns, word_id and word):
INSERT INTO left_gram_table (word_id, gram_seq, gram)
SELECT word_id, 1 AS gram_seq, SUBSTRING(word,1,2) AS gram
FROM left_word_table
And then
INSERT INTO left_gram_table (word_id, gram_seq, gram)
SELECT word_id, 2 AS gram_seq, SUBSTRING(word,2,2) AS gram
FROM left_word_table
Etc.
Something which will make “steward” look like this (assume the word id is 152)
| pk | word_id | gram_seq | gram |
| 1 | 152 | 1 | st |
| 2 | 152 | 2 | te |
| 3 | 152 | 3 | ew |
| 4 | 152 | 4 | wa |
| 5 | 152 | 5 | ar |
| 6 | 152 | 6 | rd |
| 7 | 152 | 7 | d_ |
Don't forget to create an index on the word_id, the gram and the gram_seq columns, and the distance can be calculated with a join of the left and the right gram list, where the ON looks like
ON L.gram = R.gram
AND ABS(L.gram_seq - R.gram_seq) < 2
AND L.word_id <> R.word_id
The distance is the length of the longer of the two words minus the number of matching grams. SQL is extremely fast at such a query, and I think a simple computer with 8 GB of RAM would easily handle several hundred million rows in a reasonable time frame.
And then it's only a matter of joining the mapping table to calculate the sum of the word-to-word distances in every expression, to get the total expression-to-expression distance.

You are using the stringdist package anyway; does stringdist::phonetic() suit your needs? It computes the soundex code for each string, e.g.:
phonetic(pdata$parents_name)
[1] "P361" "P361" "A655" "J225"
Soundex is a tried-and-true method (almost 100 years old) for hashing names, and that means you don't need to compare every single pair of observations.
You might want to go further and do soundex on the first name and last name separately for father and mother.
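For example, a minimal sketch of that idea (splitting the expression and blocking on the combined soundex codes; purely illustrative and untested on real data):

library(stringdist)
library(stringi)

#Illustrative sketch: soundex the first and last word of each parent's name
#and use the combined codes as a blocking key.
parts    <- stri_split_fixed(as.character(pdata$parents_name), " + ")
parent_a <- sapply(parts, `[`, 1)
parent_b <- sapply(parts, `[`, 2)
first_last_code <- function(x) {
  w <- stri_split_regex(x, "\\s+")
  sapply(w, function(v) paste(phonetic(v[1]), phonetic(v[length(v)])))
}
block_key <- paste(first_last_code(parent_a), first_last_code(parent_b))

#only rows that share a block key need an explicit string-distance comparison
candidates <- split(seq_along(block_key), block_key)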

My suggestion is to use a data science approach to identify only similar (same cluster) names to compare using stringdist.
I have modified the code generating "parents_name" a little, adding more variability in the first and second names to get a scenario closer to reality.
num <- 4e6
#Random length
random_l <- round(runif(num, min = 5, max = 15), 0)
#Random strings in the first and second name
parent_rand_first <- stringi::stri_rand_strings(num, random_l)
ord <- sample(1:num, num, replace = FALSE)
parent_rand_second <- parent_rand_first[ord]
#Paste first and second name
parents_name <- paste(parent_rand_first, " + ", parent_rand_second)
parents_name[1:10]
Here starts the real analysis: first extract features from the names, such as global length, length of the first name, length of the second name, number of vowels and consonants in both the first and second name (and any others of interest).
After that, bind all these features and cluster the data frame into a high number of clusters (e.g. 1000):
features<-cbind(nchars,nchars_first,nchars_second,nvowels_first,nvowels_second,nconsonants_first,nconsonants_second)
n_clusters<-1000
clusters<-kmeans(features,centers = n_clusters)
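The feature vectors above (nchars, nvowels_first, ...) are not defined in the snippet; one possible way to build them (an illustrative sketch, assuming the parents_name generated above) is:

#Illustrative sketch of the feature construction assumed above.
split_names  <- strsplit(parents_name, "\\s*\\+\\s*")
first_name   <- sapply(split_names, `[`, 1)
second_name  <- sapply(split_names, `[`, 2)
count_vowels <- function(x) nchar(gsub("[^aeiouAEIOU]", "", x))

nchars             <- nchar(parents_name)
nchars_first       <- nchar(first_name)
nchars_second      <- nchar(second_name)
nvowels_first      <- count_vowels(first_name)
nvowels_second     <- count_vowels(second_name)
nconsonants_first  <- nchars_first - nvowels_first
nconsonants_second <- nchars_second - nvowels_second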
Apply stringdistmatrix only inside each cluster (containing similar couples of names):
library(stringdist)
dist_matrix <- list()
for (i in 1:n_clusters) {
  cluster_i <- clusters$cluster == i
  names_i <- as.character(parents_name[cluster_i])   #do not overwrite parents_name here
  dist_matrix[[i]] <- stringdistmatrix(names_i, names_i, "lv")
}
In dist_matrix you have the distance between each pair of elements in the cluster, and you are able to assign the family_id using this distance.
To compute the distances in each cluster (in this example) the code takes approximately 1 sec per cluster (depending on the size of the cluster); in about 15 minutes all the distances are computed.
WARNING: dist_matrix grows very fast; in your code it is better to analyze it inside the for loop, extract family_id, and then discard it.

You may improve performance by not comparing all pairs of lines.
Instead, create a new variable that helps decide whether a comparison is worth doing.
For example, create a new variable "score" containing the ordered list of letters used in parents_name (for example, if the name is "peter pan + marta steward" then the score will be "ademnprstw"), and calculate the distance only between lines whose scores match.
Of course, you can find a score that better fits your needs, and improve it a little to enable comparisons when not all of the letters used are common.
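A quick, illustrative sketch of that score in R (the function name is my own):

#Illustrative sketch: score = sorted set of distinct letters in the name.
letter_score <- function(x) {
  chars <- strsplit(gsub("[^a-z]", "", tolower(x)), "")[[1]]
  paste(sort(unique(chars)), collapse = "")
}

letter_score("peter pan + marta steward")
#"ademnprstw"

#compare only rows whose scores match
scores <- sapply(as.character(pdata$parents_name), letter_score)
candidate_groups <- split(seq_along(scores), scores)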

I faced the same performance issue a couple of years ago. I had to match people's duplicates based on their typed names. My dataset had 200k names and the matrix approach exploded. After searching for about a day for a better method, the method I'm proposing here did the job for me in a few minutes:
library(stringdist)

parents_name <- c("peter pan + marta steward",
                  "pieter pan + marta steward",
                  "armin dolgner + jane johanna dough",
                  "jack jackson + sombody else")

person_id <- 1:length(parents_name)
family_id <- vector("integer", length(parents_name))

#Looping through unassigned family ids
while (sum(family_id == 0) > 0) {
  ids <- person_id[family_id == 0]
  dists <- stringdist(parents_name[family_id == 0][1],
                      parents_name[family_id == 0],
                      method = "lv")
  matches <- ids[dists <= 3]
  family_id[matches] <- max(family_id) + 1
}

result <- data.frame(person_id, parents_name, family_id)
That way the while loop compares fewer candidates on every iteration. From there, you might implement different performance boosters, like only comparing names that share the same first letter, etc.
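For instance, a sketch of one such booster (blocking on the first letter before running the loop; this assumes the first letter is reliable, which may not hold for real typos):

#Sketch: run the family assignment separately within first-letter blocks.
first_letter <- substr(parents_name, 1, 1)
family_id <- vector("integer", length(parents_name))
next_id <- 1L
for (blk in split(seq_along(parents_name), first_letter)) {
  while (any(family_id[blk] == 0)) {
    open <- blk[family_id[blk] == 0]
    dists <- stringdist(parents_name[open[1]], parents_name[open], method = "lv")
    family_id[open[dists <= 3]] <- next_id
    next_id <- next_id + 1L
  }
}
result <- data.frame(person_id = seq_along(parents_name), parents_name, family_id)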

Making equivalence groups from a non-transitive relation does not make sense. If A is like B and B is like C, but A is not like C, how would you make families from that? Using something like soundex (that was Neal Fultz's idea, not mine) seems the only meaningful option, and it solves your performance problem too.

What I have used to reduce the permutations involved in this sort of name matching is a function that counts the syllables in the name (surname) involved. Then store this in the database as a pre-processed value. This becomes a Syllable Hash function.
Then you can choose to group words together with the same number of syllables as each other. (Although I use algorithms that allow 1 or 2 syllables difference, which may be presented as legitimate spelling / typo errors...But my research has found that 95% of misspellings share the same number of syllables)
In this case Peter and Pieter would have the same syllable count (2), but Jones and Smith do not (they have 1). (For example)
If your function does not get 1 syllable for Jones, then you may need to increase your tolerance to allow for at least 1 syllable difference in the Syllable Hash function grouping that you use. (To account for incorrect syllable function results, and to catch the matching surname correctly in the grouping)
My syllable counting function may not apply completely - as you might need to cope with non-English letter sets... (so I have not pasted the code; it's in C anyway). Mind you, the syllable count function does not have to be accurate in terms of the TRUE syllable count; it simply needs to act as a reliable hashing function - which it does. Far superior to SoundEx, which relies on the first letter being accurate.
Give it a go, you might be surprised how much improvement you get by implementing a Syllable Hash function. You may have to ask SO for help getting the function into your language.
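Since the original C code is not shown, here is a rough R approximation of the idea (simply counting runs of consecutive vowels; this is my own illustrative stand-in, not the author's function):

#Rough, illustrative syllable hash: count runs of consecutive vowels.
syllable_hash <- function(x) {
  sapply(gregexpr("[aeiouy]+", tolower(x)), function(m) sum(m > 0))
}

syllable_hash(c("peter", "pieter", "jones", "smith"))
#2 2 2 1 -- peter and pieter land in the same bucket; as noted above, a crude
#counter may give jones 2 rather than 1, hence the suggested +/-1 tolerance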

If I get it right, you want to compare every parent pair (every row in the parents_name data frame) with all other pairs (rows), and keep rows that have a Levenshtein distance smaller than or equal to 2.
I have written the following code as a starting point:
library(stringdist)

pdata <- data.frame(parents_name = c("peter pan + marta steward",
                                     "pieter pan + marta steward",
                                     "armin dolgner + jane johanna dough",
                                     "jack jackson + sombody else"))
fuzzy_match <- list()
system.time(for (i in 1:nrow(pdata)) {
  fuzzy_match[[i]] <- cbind(pdata, parents_name_2 = pdata[i, "parents_name"],
                            dist = as.integer(stringdist(pdata[i, "parents_name"],
                                                         pdata$parents_name)))
  fuzzy_match[[i]] <- fuzzy_match[[i]][fuzzy_match[[i]]$dist <= 2, ]
})
fuzzy_final <- do.call(rbind, fuzzy_match)
Does it return what you wanted?

It reproduces your output; I guess you will have to decide on the partial-matching criteria, I kept the default agrep ones:
pdata$parents_name <- as.character(pdata$parents_name)
x00 <- unique(lapply(pdata$parents_name, function(x) agrep(x, pdata$parents_name)))
x <- c()
for (i in 1:length(x00)) {
  x <- c(x, rep(i, length(x00[[i]])))
}
pdata$person_id <- seq_len(nrow(pdata))
pdata$family_id <- x

Related

Assign integral value to list of relative values

I have an assortment of syrups, each of which has a value - the amount of sugar per volume. As people blend these syrups, I track which ones are used, and created a table to get a Relative Weight of each blend. I understand > Data > Sort > Options > Custom Sort Order.
However, I really don't wish to sort each table, and am looking for a way to parse a column of this list as entered and return a column giving an integral relative value for each row, compared to the weights of the syrups in the other rows of the table.
| Unique Name | weight | Not Unique Relative Value |
| blueberry | .250 | 2 |
| raspberry | .333 | 3 |
| orange | .425 | 4 |
| tangerine | .333 | 3 |
| blackberry | .225 | 1 |
I am attempting to find the "Relative Sort", a nested function which can assign an integral value to each Unique Name by comparing the weights of the syrups. A "Lookup" only works if there is absolute equality, right?
What if someone doesn't use "blackberry syrup"? Then "blueberry" is the lightest, and should be labeled as 1.
Is this too complicated for LibreOffice Calc?
It's a recursive greater than/less than/equal to comparison?
If the problem is calculating the right-hand column from entries that have been sorted ascending by weight:
then an answer is, in C2 and copied down to suit (provided C1 is blank or 0):
=IF(B1<>B2,C1+1,C1)
Without sorting the RANK function might be simpler and adequate (though in the example returning 5 rather than 4).

Create teams of 2-people according to players preferences

Goal
Imagine an even number of individuals who have to form teams of two people. Each individual has to call an R function and enter two parameters: 1) their own name and 2) the names of the other individuals in order of preference. When everyone has voted, a function extracts the best combination of teams. The goal is to construct this function.
Example
John, Peter, Mary and Jessica need to form teams of two people. John preferentially wants to be with Peter, moderately with Mary, and would prefer not to be with Jessica. Therefore John enters the following code line:
Create_binom(MyName='John', Preferences = c('Peter', 'Mary', 'Jessica'))
The three other players do the same, they enter their names and their preferences:
Create_binom(MyName='Mary', Preferences = c('Peter', 'Jessica', 'John'))
Create_binom(MyName='Peter', Preferences = c('John', 'Jessica', 'Mary'))
Create_binom(MyName='Jessica', Preferences = c('Mary', 'John', 'Peter'))
At the last call (when the last player has called the function, entering their name and preferences), the function recognizes that all players have indicated their preferences, creates the best combination of two-person teams, and prints it. The best combination is defined as the combination that minimizes the sum of the positions of the teammates in the preferences of all players. We'll call this sum the "index" of the combination.
Index of a combination
The combination (John with Mary) and (Peter with Jessica) has associated index 2 + 3 + 2 + 3 = 10.
The combination (Mary with Jessica) and (Peter with John) has associated index 1 + 1 + 2 + 1 = 5. With this combination only one player (Mary) will not play with her best friend! It is the best possible solution in my example.
Of course, in a perfect world, everybody would be associated with his/her best friend and the index of the perfect combination would equal the number of players, but this is often not possible.
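To make the index concrete, a small illustrative sketch that scores a given set of teams against the preferences above (the function name is my own, not part of the question):

#Illustrative sketch: the "index" of a set of teams, given everyone's preferences.
prefs <- list(
  John    = c("Peter", "Mary", "Jessica"),
  Mary    = c("Peter", "Jessica", "John"),
  Peter   = c("John", "Jessica", "Mary"),
  Jessica = c("Mary", "John", "Peter")
)

pairing_index <- function(teams, prefs) {
  sum(sapply(teams, function(t) {
    match(t[2], prefs[[t[1]]]) + match(t[1], prefs[[t[2]]])
  }))
}

pairing_index(list(c("John", "Mary"), c("Peter", "Jessica")), prefs)   #10
pairing_index(list(c("Peter", "John"), c("Mary", "Jessica")), prefs)   #5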
Output of the example (Best combination for the example)
Here is what the function should print out:
list(c("Peter", "John"),c("Mary", "Jessica"))
...or any of the 8 equivalent results.
Special case
Of course, if there are several best combinations, the function should just pick one of the best combinations at random.

K Nearest Neighbor Questions

Hi, I am having trouble understanding the workings of the k-nearest-neighbor algorithm, specifically when trying to implement it in code. I am implementing this in R but mainly want to understand the process; I'm not so much worried about the code as about the method. I will post what I have, my data, and my questions:
Training Data (just a portion of it):
Feature1 | Feature2 | Class
2 | 2 | A
1 | 4 | A
3 | 10 | B
12 | 100 | B
5 | 5 | A
So far in my code:
kNN <- function(trainingData, sampleToBeClassified) {
  #file input
  train <- read.table(trainingData, sep = ",", header = TRUE)
  #get the classes (just the class column)
  labels <- as.matrix(train[, ncol(train)])
  #get the features as a matrix (every column but the class column)
  features <- as.matrix(train[, 1:(ncol(train) - 1)])
}
And for this I am calculating the "distance" using this formula:
distance <- function(x1,x2) {
return(sqrt(sum((x1 - x2) ^ 2)))
}
So is the process for the rest of the algorithm as follows?
1. Loop through every observation (in this case every row of the 2 feature columns) and calculate its distance to the sampleToBeClassified?
2. In the simple case where I want 1-nearest-neighbor classification, would I just store the observation that has the least distance to my sampleToBeClassified?
3. Whatever the closest observation is, find out what class it is, and that class becomes the class of the sampleToBeClassified?
My main question is what role do the features play in this? My instinct is that the two features together are what defines that data item as a certain class, so what should I be calculating the distance between?
Am I on the right track at all?
Thanks
It looks as though you're on the right track. The three steps in your process seem to be correct for the 1-nearest-neighbor case. For kNN, you just need to make a list of the k nearest neighbors and then determine which class is most prevalent in that list.
As for features, these are just attributes that define each instance and (hopefully) give us an indication as to what class it belongs to. For instance, if we're trying to classify animals we could use height and mass as features. So if we have an instance in the class elephant, its height might be 3.27m and its mass might be 5142kg. An instance in the class dog might have a height of 0.59m and a mass of 10.4kg. In classification, if we get something that's 0.8m tall and has a mass of 18.5kg, we know it's more likely to be a dog than an elephant.
Since we're only using 2 features here we can easily plot them on a graph with one feature as the X-axis and the other feature as the Y (it doesn't really matter which one) with the different classes denoted by different colors or symbols or something. If you plot the sample of your training data above, it's easy to see the separation between Class A and B.
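A minimal sketch of those steps in R (illustrative only, not optimized), using the training sample above:

#Illustrative kNN sketch: Euclidean distance, then majority vote among the k nearest.
kNN_classify <- function(features, labels, sample, k = 1) {
  dists <- sqrt(rowSums((features - matrix(sample, nrow(features), ncol(features),
                                           byrow = TRUE))^2))
  nearest <- order(dists)[1:k]
  names(which.max(table(labels[nearest])))
}

train_features <- matrix(c(2, 2, 1, 4, 3, 10, 12, 100, 5, 5), ncol = 2, byrow = TRUE)
train_labels   <- c("A", "A", "B", "B", "A")
kNN_classify(train_features, train_labels, c(3, 3), k = 3)   #"A"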

Calculate correlation coefficient between words?

For a text analysis program, I would like to analyze the co-occurrence of certain words in a text. For example, I would like to see that e.g. the words "Barack" and "Obama" appear more often together (i.e. have a positive correlation) than others.
This does not seem to be that difficult. However, to be honest, I only know how to calculate the correlation between two numbers, but not between two words in a text.
How can I best approach this problem?
How can I calculate the correlation between words?
I thought of using conditional probabilities, since e.g. Barack Obama is much more probable than Obama Barack; however, the problem I try to solve is much more fundamental and does not depend on the ordering of the words
The Ngram Statistics Package (NSP) is devoted precisely to this task. They have a paper online which describes the association measures they use. I haven't used the package myself, so I cannot comment on its reliability/requirements.
Well, a simple way to approach your question is by shaping the data into a 2x2 contingency table
|            | obama | not obama |
| barack     | A     | B         |
| not barack | C     | D         |
and scoring all occurring bigrams in the matrix. That way you can, for instance, use a simple chi-squared test.
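For instance, a tiny sketch with invented counts (the numbers below are made up purely for illustration):

#Illustrative 2x2 table of bigram counts: rows = "barack" present/absent,
#columns = "obama" present/absent. The counts are invented.
m <- matrix(c(50, 50,
              50, 850),
            nrow = 2, byrow = TRUE,
            dimnames = list(c("barack", "not barack"), c("obama", "not obama")))
chisq.test(m)   #a large statistic / small p-value suggests the words are associated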
I don't know how this is commonly done, but I can think of one crude way to define a notion of correlation that captures word adjacency.
Suppose the text has length N, say it is an array
text[0], text[1], ..., text[N-1]
Suppose the following words appear in the text
word[0], word[1], ..., word[k]
For each word word[i], define a vector of length N-1
X[i] = array(); // of length N-1
as follows: the jth entry of the vector is 1 if word[i] is either the jth word or the (j+1)th word of the text, and zero otherwise.
// compute the vector X[i]
for (j = 0:N-2){
if (text[j] == word[i] OR text[j+1] == word[i])
X[i][j] = 1;
else
X[i][j] = 0;
}
Then you can compute the correlation coefficient between word[a] and word[b] as the dot product between X[a] and X[b] (note that the dot product is the number of times these words are adjacent) divided by the lengths (the length is the square root of the number of appearances of the word, well maybe twice that). Call this quantity COR(X[a],X[b]). Clearly COR(X[a],X[a]) = 1, and COR(X[a],X[b]) is larger if word[a] and word[b] are often adjacent.
This can be generalized from "adjacent" to other notions of near - for example we could have chosen to use 3 word (or 4, 5, etc.) blocks instead. One can also add weights, probably do many more things as well if desired. One would have to experiment to see what is useful, if any of it is of use at all.
This problem sounds like a bigram, a sequence of two "tokens" in a larger body of text. See this Wikipedia entry, which has additional links to the more general n-gram problem.
If you want to do a full analysis, you'd most likely take any given pair of words and do a frequency analysis. E.g., the sentence "Barack Obama is the Democratic candidate for President," has 8 words, so there are 8 choose 2 = 28 possible pairs.
You can then ask statistical questions like, "in how many pairs does 'Obama' follow 'Barack', and in how many pairs does some other word (not 'Obama') follow 'Barack'? In this case, there are 7 pairs that include 'Barack' but in only one of them is it paired with 'Obama'.
Do the same for every possible word pair (e.g., "in how many pairs does 'candidate' follow 'the'?"), and you've got a basis for comparison.

Probability of 3-character string appearing in a randomly generated password

If you have a randomly generated password, consisting of only alphanumeric characters, of length 12, and the comparison is case insensitive (i.e. 'A' == 'a'), what is the probability that one specific string of length 3 (e.g. 'ABC') will appear in that password?
I know the number of total possible combinations is (26+10)^12, but beyond that, I'm a little lost. An explanation of the math would also be most helpful.
The string "abc" can appear in the first position, making the string look like this:
abcXXXXXXXXX
...where the X's can be any letter or number. There are (26 + 10)^9 such strings.
It can appear in the second position, making the string look like:
XabcXXXXXXXX
And there are (26 + 10)^9 such strings also.
Since "abc" can appear at anywhere from the first through 10th positions, there are 10*36^9 such strings.
But this overcounts, because it counts (for instance) strings like this twice:
abcXXXabcXXX
So we need to count all of the strings like this and subtract them off of our total.
Since there are 6 X's in this pattern, there are 36^6 strings that match this pattern.
I get 7+6+5+4+3+2+1 = 28 patterns like this. (If the first "abc" is at the beginning, the second can be in any of 7 places. If the first "abc" is in the second place, the second can be in any of 6 places. And so on.)
So subtract off 28*36^6.
...but that subtracts off too much: strings like the one below were counted three times in the first step and are now subtracted three times, so they end up counted zero times instead of once:
abcXabcXabcX
So we have to add each of these strings back in once. I get 4+3+2+1 + 3+2+1 + 2+1 + 1 = 20 of these patterns, meaning we have to add back in 20*(36^3).
But that math still counts this string twice (4 - 6 + 4 = 2):
abcabcabcabc
...so we have to subtract off 1.
Final answer:
10*36^9 - 28*36^6 + 20*(36^3) - 1
Divide that by 36^12 to get your probability.
See also the Inclusion-Exclusion Principle. And let me know if I made an error in my counting.
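As a quick numeric check of the final formula above (computed here in R, since the rest of the thread uses it):

#Quick numeric check of the inclusion-exclusion count above.
exact <- 10 * 36^9 - 28 * 36^6 + 20 * 36^3 - 1
exact / 36^12
#approximately 0.00021432, very close to the simple 10 / 36^3 approximation below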
If A is not equal to C, the probability P(n) of ABC occurring in a string of length n (assuming every alphanumeric symbol is equally likely) is
P(n) = P(n-1) + P(3)*[1 - P(n-3)]
where
P(0)=P(1)=P(2)=0 and P(3)=1/(36)^3
To expand on Paul R's answer: probability (for equally likely outcomes) is the number of outcomes in your event divided by the total number of possible outcomes.
There are 10 possible places where a string of length 3 can start in a string of length 12, and there are 9 remaining positions that can be filled with any alphanumeric characters, which leads to 36^9 possibilities. So the number of outcomes in your event is 10 * 36^9.
Divide that by your total number of outcomes 36^12. And your answer is 10 * 36^-3 = 0.000214
EDIT: This is not completely correct. In this solution, some cases are double counted. However, they only make a very small contribution to the probability, so this answer still agrees with the exact one to about 7 decimal places. If you want the full answer, see Nemo's answer.
