I have code that generates all possible correct strings of balanced brackets. So if the input is n = 4 there should be 4 brackets in the string, and thus the answers the code will give are {}{} and {{}}.
Now, what I would like to do is print the number of possible strings. For example, for n = 4 the outcome would be 2.
Given my code, is this possible and how would I make that happen?
Just introduce a counter.
// Change prototype to return the counter
int findBalanced(int p, int n, int o, int c)
{
    static char str[100];
    // The counter
    static int count = 0;

    if (c == n) {
        // Increment it on every printout
        count++;
        printf("%s\n", str);
        // Just return zero. This is not used anyway and will give
        // the correct result for n = 0.
        return 0;
    } else {
        if (o > c) {
            str[p] = ')';
            findBalanced(p + 1, n, o, c + 1);
        }
        if (o < n) {
            str[p] = '(';
            findBalanced(p + 1, n, o + 1, c);
        }
    }
    // Return it
    return count;
}
What you're looking for is the n-th Catalan number. You'll need to implement the binomial coefficient to calculate it, but that's pretty much it.
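If you only need the count rather than the strings themselves, you can compute it directly. Here is a minimal Java sketch (assuming n here means the number of bracket pairs) that builds C(2n, n) incrementally and divides by n + 1:

public class CatalanCount {
    // n-th Catalan number = C(2n, n) / (n + 1): the number of balanced
    // strings made of n bracket pairs.
    static long catalan(int n) {
        long binom = 1;
        // Build C(2n, n) step by step; each intermediate value is itself a
        // binomial coefficient, so the integer division is always exact.
        for (int i = 0; i < n; i++) {
            binom = binom * (2L * n - i) / (i + 1);
        }
        return binom / (n + 1);
    }

    public static void main(String[] args) {
        System.out.println(catalan(2)); // 2 -> {}{} and {{}}
        System.out.println(catalan(3)); // 5
    }
}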
Is it possible to decimate a thousand items until there is only one left, and if so, how many decimation cycles are required? By decimate, I mean remove only a tenth and leave the rest, e.g.
1st cycle: 1000 -> 900 (remove a tenth: 100)
2nd cycle: 900 -> 810 (remove a tenth: 90)
This depends on the numerical type you use.
If you use floating-point numbers, you will eventually reach a value smaller than 1 (though never exactly 1, as the output of the following Java program shows). This happens in iteration 66, since 1000 * 0.9^n first drops below 1 once n > ln(1000) / ln(10/9) ≈ 65.6:
class Main {
    public static void main(String[] args) {
        int i = 0;
        double val = 1000;
        while (val > 1) {
            val -= val/10;
            i++;
            System.out.println("val = " + val + " in iteration " + i);
        }
    }
}
Excerpt from the output:
val = 900.0 in iteration 1
val = 810.0 in iteration 2
val = 729.0 in iteration 3
...
val = 1.0611166119964726 in iteration 65
val = 0.9550049507968253 in iteration 66
If you use an integer type, you won't reach 1, because as soon as val == 9 it won't get any smaller, since the division result is 0. I added a check to catch that condition to prevent an endless loop.
Let's have a look:
class Main {
    public static void main(String[] args) {
        int i = 0;
        int val = 1000;
        while (val > 1) {
            int subtract = val/10;
            if (subtract == 0) {
                System.out.println("subtracted value is 0 for value " + val + " at iteration " + i);
                break;
            }
            val -= subtract;
            i++;
            System.out.println("val = " + val + " in iteration " + i);
        }
    }
}
Output:
val = 900 in iteration 1
val = 810 in iteration 2
...
val = 10 in iteration 49
val = 9 in iteration 50
subtracted value is 0 for value 9 at iteration 50
I have a question which asks us to reduce the string as follows.
The input is a string having only A, B or C. The output must be the length of the reduced string.
The string can be reduced by the following rule: if any 2 different letters are adjacent, these two letters can be replaced by the third letter.
E.g. ABA -> CA -> B, so the final answer is 1 (the length of the reduced string).
E.g. ABCCCCCCC
This doesn't have to become CCCCCCCC, as it can alternatively be reduced by
ABCCCCCCC -> AACCCCCC -> ABCCCCC -> AACCCC -> ABCCC -> AACC -> ABC -> AA
and here the length is 2 < (length of CCCCCCCC).
How do you go about this problem?
Thanks a lot!
To make things clear: the question states it wants the minimum length of the reduced string. So in the second example above there are 2 solutions possible, one CCCCCCCC and the other AA. So 2 is the answer, as the length of AA is 2, which is smaller than the length of CCCCCCCC = 8.
The way this question is phrased, there are only three distinct possibilities:
1. If the string has only one unique character, the length is the same as the length of the string.
2/3. If the string contains more than one unique character, the length is always either 1 or 2 (based on the layout of the characters).
Edit:
As a way of proof of concept here is some grammar and its extensions:
I should note that although this seems to me a reasonable proof that the length will reduce to either 1 or 2, I am reasonably sure that determining which of these lengths will result is not as trivial as I originally thought (you would still have to recurse through all options to find it out).
S : A|B|C|()
S : S^
where () denotes the empty string, and S^ means any combination of the previous [A, B, C, ()] characters.
Extended Grammar:
S_1 : AS^|others
S_2 : AAS^|ABS^|ACS^|others
S_3 : AAAS^|
AABS^ => ACS^ => BS^|
AACS^ => ABS^ => CS^|
ABAS^ => ACS^ => BS^|
ABBS^ => CBS^ => AS^|
ABCS^ => CCS^ | AAS^|
ACAS^ => ABS^ => CS^|
ACBS^ => AAS^ | BBS^|
ACCS^ => BCS^ => AS^|
The same thing will happen with extended grammars starting with B, and C (others). The interesting cases are where we have ACB and ABC (three distinct characters in sequence), these cases result in grammars that appear to lead to longer lengths however:
CCS^: CCAS^|CCBS^|CCCS^|
CBS^ => AS^|
CAS^ => BS^|
CCCS^|
AAS^: AAAS^|AABS^|AACS^|
ACS^ => BS^|
ABS^ => CS^|
AAAS^|
BBS^: BBAS^|BBBS^|BBCS^|
BCS^ => AS^|
BAS^ => CS^|
BBBS^|
Recursively they only lead to longer lengths when the remaining string contains their value only. However we have to remember that this case also can be simplified, since if we got to this area with say CCCS^, then we at one point previous had ABC ( or consequently CBA ). If we look back we could have made better decisions:
ABCCS^ => AACS^ => ABS^ => CS^
CBACS^ => CBBS^ => ABS^ => CS^
So in the best case at the end of the string when we make all the correct decisions we end with a remaining string of 1 character followed by 1 more character(since we are at the end). At this time if the character is the same, then we have a length of 2, if it is different, then we can reduce one last time and we end up with a length of 1.
You can generalize the result based on the individual character counts of the string. The algorithm is as follows: traverse the string and get the count of each character. Let's say
a = number of A's in the given string
b = number of B's in the given string
c = number of C's in the given string
Then you can say that the result will be:
if((a == 0 && b == 0 && c == 0) ||
   (a == 0 && b == 0 && c != 0) ||
   (a == 0 && b != 0 && c == 0) ||
   (a != 0 && b == 0 && c == 0))
{
    result = a+b+c;
}
else if(a != 0 && b != 0 && c != 0)
{
    if((a%2 == 0 && b%2 == 0 && c%2 == 0) ||
       (a%2 == 1 && b%2 == 1 && c%2 == 1))
        result = 2;
    else
        result = 1;
}
else if((a == 0 && b != 0 && c != 0) ||
        (a != 0 && b == 0 && c != 0) ||
        (a != 0 && b != 0 && c == 0))
{
    if(a%2 == 0 && b%2 == 0 && c%2 == 0)
        result = 2;
    else
        result = 1;
}
I'm assuming that you are looking for the length of the shortest possible string that can be obtained after reduction.
A simple solution would be to explore all possibilities in a greedy manner and hope that it does not explode exponentially. I'm gonna write Python pseudocode here because that's easier to comprehend (at least for me ;)):
from collections import deque

def try_reduce(string):
    queue = deque([string])
    min_length = len(string)
    while queue:
        string = queue.popleft()
        if len(string) < min_length:
            min_length = len(string)
        for i in xrange(len(string)-1):
            substring = string[i:(i+2)]
            if substring == "AB" or substring == "BA":
                queue.append(string[:i] + "C" + string[(i+2):])
            elif substring == "BC" or substring == "CB":
                queue.append(string[:i] + "A" + string[(i+2):])
            elif substring == "AC" or substring == "CA":
                queue.append(string[:i] + "B" + string[(i+2):])
    return min_length
I think the basic idea is clear: you take a queue (std::deque should be just fine), add your string into it, and then implement a simple breadth first search in the space of all possible reductions. During the search, you take the first element from the queue, take all possible substrings of it, execute all possible reductions, and push the reduced strings back to the queue. The entire space is explored when the queue becomes empty.
Let's define an automaton with the following rules (K>=0):
Incoming: A B C
Current: --------------------------
<empty> A B C
A(2K+1) A(2K+2) AB AC
A(2K+2) A(2K+3) AAB AAC
AB CA CB ABC
AAB BA ACB BC
ABC CCA AAB AAC
and all rules obtained by permutations of ABC to get the complete definition.
All input strings using a single letter are irreducible. If the input string contains at least two different letters, the final states like AB or AAB can be reduced to a single letter, and the final states like ABC can be reduced to two letters.
In the ABC case, we still have to prove that the input string can't be reduced to a single letter by another reduction sequence.
Compare two characters at a time and replace them if the two adjacent characters are not the same. To get the optimal solution, run once from the start of the string and once from the end of the string, and return the minimum value.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/* Assumed definition: for lower-case input, SUM - (x + y) yields the third letter when x != y */
#define SUM ('a' + 'b' + 'c')
int same(char* s){
int i=0;
for(i=0;i<strlen(s)-1;i++){
if(*(s+i) == *(s+i+1))
continue;
else
return 0;
}
return 1;
}
int reduceb(char* s){
int ret = 0,a_sum=0,i=0;
int len = strlen(s);
while(1){
i=len-1;
while(i>0){
if ((*(s+i)) == (*(s+i-1))){
i--;
continue;
} else {
a_sum = (*(s+i)) + (*(s+i-1));
*(s+i-1) = SUM - a_sum;
*(s+i) = '\0';
len--;
}
i--;
}
if(same(s) == 1){
return strlen(s);
}
}
}
int reducef(char* s){
int ret = 0,a_sum=0,i=0;
int len = strlen(s);
while(1){
i=0;
while(i<len-1){
if ((*(s+i)) == (*(s+i+1))){
i++;
continue;
} else {
a_sum = (*(s+i)) + (*(s+i+1));
*(s+i) = SUM - a_sum;
int j=i+1;
for(j=i+1;j<len;j++)
*(s+j) = *(s+j+1);
len--;
}
i++;
}
if(same(s) == 1){
return strlen(s);
}
}
}
int main(){
int n,i=0,f=0,b=0;
scanf("%d",&n);
int a[n];
while(i<n){
char* str = (char*)malloc(101);
scanf("%s",str);
char* strd = strdup(str);
f = reducef(str);
b = reduceb(strd);
if( f > b)
a[i] = b;
else
a[i] = f;
free(str);
free(strd);
i++;
}
for(i=0;i<n;i++)
printf("%d\n",a[i]);
}
Wouldn't a good start be to count which letter you have the most of and look for ways to remove it? Keep doing this until we only have one letter. We might have it many times but as long as it is the same we do not care, we are finished.
To avoid something like ABCCCCCCC becoming CCCCCCCC, we remove the most popular letter:
-ABCCCCCCC
-AACCCCCC
-ABCCCCC
-AACCCC
-ABCCC
-AACC
-ABC
-AA
I disagree with the previous poster who states we must have a length of 1 or 2 - what happens if I enter the start string AAA?
import java.util.LinkedList;
import java.util.List;
import java.util.Scanner;
public class Sample {
private static char[] res = {'a', 'b', 'c'};
private char replacementChar(char a, char b) {
for(char c : res) {
if(c != a && c != b) {
return c;
}
}
throw new IllegalStateException("cannot happen. you must've mucked up the resource");
}
public int processWord(String wordString) {
if(wordString.length() < 2) {
return wordString.length();
}
String wordStringES = reduceFromEnd(reduceFromStart(wordString));
if(wordStringES.length() == 1) {
return 1;
}
String wordStringSE = reduceFromStart(reduceFromEnd(wordString));
if(wordStringSE.length() == 1) {
return 1;
}
int aLen;
if(isReduced(wordStringSE)) {
aLen = wordStringSE.length();
} else {
aLen = processWord(wordStringSE);
}
int bLen;
if(isReduced(wordStringES)) {
bLen = wordStringES.length();
} else {
bLen = processWord(wordStringES);
}
return Math.min(aLen, bLen);
}
private boolean isReduced(String wordString) {
int length = wordString.length();
if(length < 2) {
return true;
}
for(int i = 1; i < length; ++i) {
if(wordString.charAt(i) != wordString.charAt(i - 1)) {
return false;
}
}
return wordString.charAt(0) == wordString.charAt(length - 1);
}
private String reduceFromStart(String theWord) {
if(theWord.length() < 2) {
return theWord;
}
StringBuilder buffer = new StringBuilder();
char[] word = theWord.toCharArray();
char curChar = word[0];
for(int i = 1; i < word.length; ++i) {
if(word[i] != curChar) {
curChar = replacementChar(curChar, word[i]);
if(i + 1 == word.length) {
buffer.append(curChar);
break;
}
} else {
buffer.append(curChar);
if(i + 1 == word.length) {
buffer.append(curChar);
}
}
}
return buffer.toString();
}
private String reduceFromEnd(String theString) {
if(theString.length() < 2) {
return theString;
}
StringBuilder buffer = new StringBuilder(theString);
int length = buffer.length();
while(length > 1) {
char a = buffer.charAt(0);
char b = buffer.charAt(length - 1);
if(a != b) {
buffer.deleteCharAt(length - 1);
buffer.deleteCharAt(0);
buffer.append(replacementChar(a, b));
length -= 1;
} else {
break;
}
}
return buffer.toString();
}
public void go() {
Scanner scanner = new Scanner(System.in);
int numEntries = Integer.parseInt(scanner.nextLine());
List<Integer> counts = new LinkedList<Integer>();
for(int i = 0; i < numEntries; ++i) {
counts.add((processWord(scanner.nextLine())));
}
for(Integer count : counts) {
System.out.println(count);
}
}
public static void main(String[] args) {
Sample solution = new Sample();
solution.go();
}
}
This is a greedy approach: it traverses the path starting with each possible pair and checks the minimum length.
import java.io.*;
import java.util.*;
class StringSim{
public static void main(String args[]){
Scanner sc = new Scanner(System.in);
StringTokenizer st = new StringTokenizer(sc.nextLine(), " ");
int N = Integer.parseInt(st.nextToken());
String op = "";
for(int i=0;i<N;i++){
String str = sc.nextLine();
op = op + Count(str) + "\n";
}
System.out.println(op);
}
public static int Count( String str){
int min = Integer.MAX_VALUE;
char pre = str.charAt(0);
boolean allSame = true;
//System.out.println("str :" + str);
if(str.length() == 1){
return 1;
}
int count = 1;
for(int i=1;i<str.length();i++){
//System.out.println("pre: -"+ pre +"- char at "+i+" is : -"+ str.charAt(i)+"-");
if(pre != str.charAt(i)){
allSame = false;
char rep = (char)(('a'+'b'+'c')-(pre+str.charAt(i)));
//System.out.println("rep :" + rep);
if(str.length() == 2)
count = 1;
else if(i==1)
count = Count(rep+str.substring(2,str.length()));
else if(i == str.length()-1)
count = Count(str.substring(0,str.length()-2)+rep);
else
count = Count(str.substring(0,i-1)+rep+str.substring(i+1,str.length()));
if(min>count) min=count;
}else if(allSame){
count++;
//System.out.println("count: " + count);
}
pre = str.charAt(i);
}
//System.out.println("min: " + min);
if(allSame) return count;
return min;
}
}
Following NominSim's observations, here is probably an optimal solution that runs in linear time with O(1) space usage. Note that it is only capable of finding the length of the smallest reduction, not the reduced string itself:
def reduce(string):
    a = string.count('a')
    b = string.count('b')
    c = string.count('c')
    if ([a,b,c].count(0) >= 2):
        return a+b+c
    elif (all(v % 2 == 0 for v in [a,b,c]) or all(v % 2 == 1 for v in [a,b,c])):
        return 2
    else:
        return 1
There is some underlying structure that can be used to solve this problem in O(n) time.
The rules given are (most of) the rules defining a mathematical group, in particular the group D_2 also sometimes known as K (for Klein's four group) or V (German for Viergruppe, four group). D_2 is a group with four elements, A, B, C, and 1 (the identity element). One of the realizations of D_2 is the set of symmetries of a rectangular box with three different sides. A, B, and C are 180 degree rotations about each of the axes, and 1 is the identity rotation (no rotation). The group table for D_2 is
|1 A B C
-+-------
1|1 A B C
A|A 1 C B
B|B C 1 A
C|C B A 1
As you can see, the rules correspond to the rules given in the problem, except that the rules involving 1 aren't present in the problem.
Since D_2 is a group, it satisfies a number of rules: closure (the product of any two elements of the group is another element), associativity (meaning (x*y)*z = x*(y*z) for any elements x, y, z; i.e., the order in which strings are reduced doesn't matter), existence of identity (there is an element 1 such that 1*x=x*1=x for any x), and existence of inverse (for any element x, there is an element x^{-1} such that x*x^{-1}=1 and x^{-1}*x=1; in our case, every element is its own inverse).
It's also worth noting that D_2 is commutative, i.e., x*y=y*x for any x,y.
Given any string of elements in D_2, we can reduce to a single element in the group in a greedy fashion. For example, ABCCCCCCC=CCCCCCCC=CCCCCC=CCCC=CC=1. Note that we don't write the element 1 unless it's the only element in the string. Associativity tells us that the order of the operations doesn't matter, e.g., we could have worked from right to left or started in the middle and gotten the same result. Let's try from the right: ABCCCCCCC=ABCCCCC=ABCCC=ABC=AA=1.
The situation of the problem is different because operations involving 1 are not allowed, so we can't just eliminate pairs AA, BB, or CC. However, the situation is not that different. Consider the string ABB. We can't write ABB=A in this case. However, we can eliminate BB in two steps using A: ABB=CB=A. Since order of operation doesn't matter by associativity, we're guaranteed to get the same result. So we can't go straight from ABB to A but we can get the same result by another route.
Such alternate routes are available whenever there are at least two different elements in a string. In particular, in each of ABB, ACC, BAA, BCC, CAA, CBB, AAB, AAC, BBA, BBC, CCA, CCB, we can act as if we have the reduction xx=1 and then drop the 1.
It follows that any string that is not homogeneous (not all the same letter) and has a double-letter substring (AA, BB, or CC) can be reduced by removing the double letter. Strings that contain just two identical letters can't be further reduced (because there is no 1 allowed in the problem), so it seems safe to hypothesize that any non-homogeneous string can be reduced to A, B, C, AA, BB, CC.
We still have to be careful, however, because CCAACC could be turned into CCCC by removing the middle pair AA, but that is not the best we can do: CCAACC=AACC=CC or AA takes us down to a string of length 2.
Another situation we have to be careful of is AABBBB. Here we could eliminate AA to end with BBBB, but it's better to eliminate the middle B's first, then whatever: AABBBB=AABB=AA or BB (both of which are equivalent to 1 in the group, but can't be further reduced in the problem).
There's another interesting situation we could have: AAAABBBB. Blindly eliminating pairs takes us to either AAAA or BBBB, but we could do better: AAAABBBB=AAACBBB=AABBBB=AABB=AA or BB.
The above indicate that eliminating doubles blindly is not necessarily the way to proceed, but nevertheless it was illuminating.
Instead, it seems as if the most important property of a string is non-homogeneity. If the string is homogeneous, stop, there's nothing we can do. Otherwise, identify an operation that preserves the non-homogeneity property if possible. I assert that it is always possible to identify an operation that preserves non-homogeneity if the string is non-homogeneous and of length four or greater.
Proof: if a 4-substring contains two different letters, a third letter can be introduced at a boundary between two different letters, e.g., AABA goes to ACA. Since one or the other of the original letters must be unchanged somewhere within the string, it follows that the result is still non-homogeneous.
Suppose instead we have a 4-substring that has three different elements, say AABC, with the outer two elements different. Then if the middle two elements are different, perform the operation on them; the result is non-homogeneous because the two outermost elements are still different. On the other hand, if the two inner elements are the same, e.g., ABBC, then they have to be different from both outermost elements (otherwise we'd only have two elements in the set of four, not three). In that case, perform either the first or third operation; that leaves either the last two elements different (e.g., ABBC=CBC) or the first two elements different (e.g., ABBC=ABA) so non-homogeneity is preserved.
Finally, consider the case where the first and last elements are the same. Then we have a situation like ABCA. The middle two elements both have to be different from the outer elements, otherwise we'd have only two elements in this case, not three. We can take the first available operation, ABCA=CCA, and non-homogeneity is preserved again.
End of proof.
We have a greedy algorithm to reduce any non-homogeneous string of length 4 or greater: pick the first operation that preserves non-homogeneity; such an operation must exist by the above argument.
We have now reduced to the case where we have a non-homogeneous string of 3 elements. If two are the same, we either have doubles like AAB etc., which we know can be reduced to a single element, or we have two elements with no double like ABA=AC=B which can also be reduced to a single element, or we have three different elements like ABC. There are six permutations, all of which =1 in the group by associativity and commutativity; all of them can be reduced to two elements by any operation; however, they can't possibly be reduced below a homogeneous pair (AA, BB, or CC) since 1 is not allowed in the problem, so we know that's the best we can do in this case.
In summary, if a string is homogeneous, there's nothing we can do; if a string is non-homogeneous and =A in the group, it can be reduced to A in the problem by a greedy algorithm which maintains non-homogeneity at each step; the same if the string =B or =C in the group; finally if a string is non-homogeneous and =1 in the group, it can be reduced by a greedy algorithm which maintains non-homogeneity as long as possible to one of AA, BB or CC. Those are the best we can do by the group properties of the operation.
Program solving the problem:
Now, since we know the possible outcomes, our program can run in O(n) time as follows: if all the letters in the given string are the same, no reduction is possible so just output the length of the string. If the string is non-homogeneous, and is equal to the identity in the group, output the number 2; otherwise output the number 1.
To quickly decide whether an element equals the identity in the group, we use commutativity and associativity as follows: just count the number of A's, B's and C's into the variables a, b, c. Replace a = a mod 2, b = b mod 2, c = c mod 2 because we can eliminate pairs AA, BB, and CC in the group. If none of the resulting a, b, c is equal to 0, we have ABC=1 in the group, so the program should output 2 because a reduction to the identity 1 is not possible. If all three of the resulting a, b, c are equal to 0, we again have the identity (A, B, and C all cancelled themselves out) so we should output 2. Otherwise the string is non-identity and we should output 1.
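Here is a minimal Java sketch of that counting rule (assuming the input contains only lower-case a, b, c); it implements just the rule described above, not the group machinery:

public class ReducedLength {
    static int reducedLength(String s) {
        int[] count = new int[3];
        boolean homogeneous = true;
        for (int i = 0; i < s.length(); i++) {
            count[s.charAt(i) - 'a']++;
            if (s.charAt(i) != s.charAt(0)) homogeneous = false;
        }
        if (homogeneous) return s.length();      // nothing can be reduced
        int a = count[0] % 2, b = count[1] % 2, c = count[2] % 2;
        // Equal parities (all even or all odd) mean the string equals the
        // identity of the group, so the best result is a homogeneous pair.
        return (a == b && b == c) ? 2 : 1;
    }

    public static void main(String[] args) {
        System.out.println(reducedLength("abccaccba")); // 1
        System.out.println(reducedLength("aabb"));      // 2
        System.out.println(reducedLength("aaa"));       // 3
    }
}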
//C# Coding
using System;
using System.Collections.Generic;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
/*
Keep all the rules in Dictionary object 'rules';
key - find string, value - replace with value
eg: find "AB" , replace with "AA"
*/
Dictionary<string, string> rules = new Dictionary<string, string>();
rules.Add("AB", "AA");
rules.Add("BA", "AA");
rules.Add("CB", "CC");
rules.Add("BC", "CC");
rules.Add("AA", "A");
rules.Add("CC", "C");
// example string
string str = "AABBCCCA";
//output
Console.WriteLine(fnRecurence(rules, str));
Console.Read();
}
//function for applying all the rules to the input string value recursively
static string fnRecurence(Dictionary<string, string> rules,string str)
{
foreach (var rule in rules)
{
if (str.LastIndexOf(rule.Key) >= 0)
{
str = str.Replace(rule.Key, rule.Value);
}
}
if(str.Length >1)
{
int find = 0;
foreach (var rule in rules)
{
if (str.LastIndexOf(rule.Key) >= 0)
{
find = 1;
}
}
if(find == 1)
{
str = fnRecurence(rules, str);
}
else
{
//if not find any exit
find = 0;
str = str;
return str;
}
}
return str;
}
}
}
Here is my C# solution.
public static int StringReduction(string str)
{
if (str.Length == 1)
return 1;
else
{
int prevAns = str.Length;
int newAns = 0;
while (prevAns != newAns)
{
prevAns = newAns;
string ansStr = string.Empty;
int i = 1;
int j = 0;
while (i < str.Length)
{
if (str[i] != str[j])
{
if (str[i] != 'a' && str[j] != 'a')
{
ansStr += 'a';
}
else if (str[i] != 'b' && str[j] != 'b')
{
ansStr += 'b';
}
else if (str[i] != 'c' && str[j] != 'c')
{
ansStr += 'c';
}
i += 2;
j += 2;
}
else
{
ansStr += str[j];
i++;
j++;
}
}
if (j < str.Length)
{
ansStr += str[j];
}
str = ansStr;
newAns = ansStr.Length;
}
return newAns;
}
}
Compare two characters at a time and replace them if the adjacent characters are not the same; run once from the start and once from the end of the string and return the minimum. Rav's solution is the same same()/reduceb()/reducef()/main() code shown in the answer above.
@Rav
this code will fail for input "abccaccba".
The solution should be just "b" (length 1), but this code won't give that. Since I can't find the right place to comment (due to low reputation or some other reason), I did it here.
This problem can be solved by a greedy approach. Try to find the best position to apply a transformation until no transformation exists. The best position is the one where the transformed character has the maximum number of distinct neighbors.
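Here is a hedged Java sketch of one reading of this heuristic (assuming the input contains only lower-case a, b, c): at each step it applies the transformation whose replacement letter differs from the most neighbors. It is a heuristic sketch, not a proven-optimal reduction:

public class GreedyReduce {
    static int greedyReduce(String input) {
        StringBuilder s = new StringBuilder(input);
        while (true) {
            int bestPos = -1, bestScore = -1;
            char bestRep = 0;
            for (int i = 0; i + 1 < s.length(); i++) {
                char x = s.charAt(i), y = s.charAt(i + 1);
                if (x == y) continue;
                char rep = (char) ('a' + 'b' + 'c' - x - y);               // the third letter
                int score = 0;
                if (i > 0 && s.charAt(i - 1) != rep) score++;              // left neighbor differs
                if (i + 2 < s.length() && s.charAt(i + 2) != rep) score++; // right neighbor differs
                if (score > bestScore) { bestScore = score; bestPos = i; bestRep = rep; }
            }
            if (bestPos < 0) break;                 // no transformation applies any more
            s.replace(bestPos, bestPos + 2, String.valueOf(bestRep));
        }
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(greedyReduce("abccccccc")); // 2 (ABCCCCCCC from the question)
    }
}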
You can solve this using 2 passes.
In the first pass you apply:
/* output1[] is a scratch buffer; getChar(x, y) is assumed to return the third letter, e.g. 'a' + 'b' + 'c' - x - y */
len = strlen (str) ;
index = 0 ;
flag = 0 ;
/* 1st pass */
for ( i = len-1 ; i > 0 ; i -- ) {
    if ( str[i] != str[i-1] ) {
        str[i-1] = getChar (str[i], str[i-1]) ;
        if (i == 1) {
            output1[index++] = str[i-1] ;
            flag = 1 ;
            break ;
        }
    }
    else output1[index++] = str[i] ;
}
if ( flag == 0 )
    output1[index++] = str[i] ;
output1[index] = '\0';
In the 2nd pass you apply the same on output1 to get the result.
So one is a forward pass and the other one is a backward pass.
int previous = a.charAt(0);
boolean same = true;
int c = 0;
for (int i = 0; i < a.length(); ++i) {
    c ^= a.charAt(i) - 'a' + 1;
    if (a.charAt(i) != previous) same = false;
}
if (same) return a.length();
if (c == 0) return 2;
else return 1;
import java.util.Scanner;
public class StringReduction {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
String str = sc.nextLine();
int length = str.length();
String result = stringReduction(str);
System.out.println(result);
}
private static String stringReduction(String str) {
String result = str.substring(0);
if(str.length()<2){
return str;
}
if(str.length() == 2){
return combine(str.charAt(0),str.charAt(1));
}
for(int i =1;i<str.length();i++){
if(str.charAt(i-1) != str.charAt(i)){
String temp = str.substring(0, i-1) + combine(str.charAt(i-1),str.charAt(i)) + str.substring(i+1, str.length());
String sub = stringReduction(temp);
if(sub.length() < result.length()){
result = sub;
}
}
}
return result;
}
private static String combine(char c1, char c2) {
if(c1 == c2){
return "" + c1 + c2;
}
else{
if(c1 == 'a'){
if(c2 == 'b'){
return "" + 'c';
}
if(c2 == 'c') {
return "" + 'b';
}
}
if(c1 == 'b'){
if(c2 == 'a'){
return "" + 'c';
}
if(c2 == 'c') {
return "" + 'a';
}
}
if(c1 == 'c'){
if(c2 == 'a'){
return "" + 'b';
}
if(c2 == 'b') {
return "" + 'a';
}
}
return null;
}
}
}
JAVASCRIPT SOLUTION:
function StringChallenge(str) {
// code goes here
if(str.length == 1) {
return 1;
} else {
let prevAns = str.length;
let newAns = 0;
while(prevAns != newAns) {
prevAns = newAns;
let ansStr = "";
let i = 1;
let j = 0;
while(i < str.length) {
if(str[i] !== str[j]) {
if(str[i] != 'a' && str[j] != 'a') {
ansStr += 'a';
} else if(str[i] != 'b' && str[j] !='b') {
ansStr +='b';
} else if(str[i] != 'c' && str[j] != 'c') {
ansStr += 'c';
}
i += 2;
j += 2;
} else {
ansStr += str[j];
j++;
i++;
}
}
if(j < str.length) {
ansStr += str[j];
}
str = ansStr;
newAns = ansStr.length;
}
return newAns;
}
}
Are there any examples of a recursive function that calls another function, which in turn calls the first one?
Example:
function1()
{
//do something
function2();
//do something
}
function2()
{
//do something
function1();
//do something
}
Mutual recursion is common in code that parses mathematical expressions (and other grammars). A recursive descent parser based on the grammar below will naturally contain mutual recursion: expression -> terms -> term -> factor -> primary -> expression (a small sketch follows the grammar).
expression
    + terms
    - terms
    terms
terms
    term + terms
    term - terms
    term
term
    factor * term
    factor / term
    factor
factor
    primary ^ factor
    primary
primary
    ( expression )
    number
    name
    name ( expression )
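For illustration, here is a small Java sketch of such a parser, trimmed down to +, -, * and / with parentheses rather than the full grammar above; expression(), term() and factor() call each other, which is the mutual recursion:

class TinyParser {
    private final String src;
    private int pos = 0;

    TinyParser(String src) { this.src = src.replace(" ", ""); }

    double expression() {                     // terms: term (('+'|'-') term)*
        double value = term();
        while (pos < src.length() && (src.charAt(pos) == '+' || src.charAt(pos) == '-')) {
            char op = src.charAt(pos++);
            value = (op == '+') ? value + term() : value - term();
        }
        return value;
    }

    double term() {                           // term: factor (('*'|'/') factor)*
        double value = factor();
        while (pos < src.length() && (src.charAt(pos) == '*' || src.charAt(pos) == '/')) {
            char op = src.charAt(pos++);
            value = (op == '*') ? value * factor() : value / factor();
        }
        return value;
    }

    double factor() {                         // factor: '(' expression ')' | number
        if (src.charAt(pos) == '(') {
            pos++;                            // consume '('
            double value = expression();      // mutual recursion back to the top rule
            pos++;                            // consume ')'
            return value;
        }
        int start = pos;
        while (pos < src.length() && (Character.isDigit(src.charAt(pos)) || src.charAt(pos) == '.')) pos++;
        return Double.parseDouble(src.substring(start, pos));
    }

    public static void main(String[] args) {
        System.out.println(new TinyParser("(1 + 2) * 3").expression()); // 9.0
    }
}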
The proper term for this is Mutual Recursion.
http://en.wikipedia.org/wiki/Mutual_recursion
There's an example on that page, I'll reproduce here in Java:
boolean even( int number )
{
    if( number == 0 )
        return true;
    else
        return odd(abs(number)-1);
}

boolean odd( int number )
{
    if( number == 0 )
        return false;
    else
        return even(abs(number)-1);
}
Where abs( n ) means return the absolute value of a number.
Clearly this is not efficient, just to demonstrate a point.
An example might be the minimax algorithm commonly used in game programs such as chess. Starting at the top of the game tree, the goal is to find the maximum value of all the nodes at the level below, whose values are defined as the minimum of the values of the nodes below that, whose values are defined as the maximum of the values below that, and so on.
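A minimal Java sketch of that mutual recursion (GameState here is a hypothetical interface used only for illustration, not from any particular library):

interface GameState {
    boolean isTerminal();
    int evaluate();                    // value from the maximizing player's point of view
    Iterable<GameState> successors();  // positions reachable by one legal move
}

class MiniMax {
    // maxValue and minValue call each other, one per level of the game tree.
    static int maxValue(GameState s, int depth) {
        if (depth == 0 || s.isTerminal()) return s.evaluate();
        int best = Integer.MIN_VALUE;
        for (GameState next : s.successors())
            best = Math.max(best, minValue(next, depth - 1)); // opponent moves next
        return best;
    }

    static int minValue(GameState s, int depth) {
        if (depth == 0 || s.isTerminal()) return s.evaluate();
        int best = Integer.MAX_VALUE;
        for (GameState next : s.successors())
            best = Math.min(best, maxValue(next, depth - 1)); // we move next
        return best;
    }
}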
I can think of two common sources of mutual recursion.
Functions dealing with mutually recursive types
Consider an Abstract Syntax Tree (AST) that keeps position information in every node. The type might look like this:
type Expr =
| Int of int
| Var of string
| Add of ExprAux * ExprAux
and ExprAux = Expr of int * Expr
The easiest way to write functions that manipulate values of these types is to write mutually recursive functions. For example, a function to find the set of free variables:
let rec freeVariables = function
| Int n -> Set.empty
| Var x -> Set.singleton x
| Add(f, g) -> Set.union (freeVariablesAux f) (freeVariablesAux g)
and freeVariablesAux (Expr(loc, e)) =
freeVariables e
State machines
Consider a state machine that is either on, off or paused with instructions to start, stop, pause and resume (F# code):
type Instruction = Start | Stop | Pause | Resume
The state machine might be written as mutually recursive functions with one function for each state:
type State = State of (Instruction -> State)
let rec isOff = function
| Start -> State isOn
| _ -> State isOff
and isOn = function
| Stop -> State isOff
| Pause -> State isPaused
| _ -> State isOn
and isPaused = function
| Stop -> State isOff
| Resume -> State isOn
| _ -> State isPaused
It's a bit contrived and not very efficient, but you could do this with a function to calculate Fibonacci numbers, as in:
fib2(n) { return fib(n-2); }
fib1(n) { return fib(n-1); }
fib(n)
{
if (n>1)
return fib1(n) + fib2(n);
else
return 1;
}
In this case its efficiency can be dramatically enhanced if the language supports memoization
In a language with proper tail calls, Mutual Tail Recursion is a very natural way of implementing automata.
Here is my coded solution: a calculator app that performs +, -, *, / operations using mutual recursion. It also checks for brackets (()) to decide the order of precedence.
Flow: expression -> term -> factor -> expression
Calculator.h
#ifndef CALCULATOR_H_
#define CALCULATOR_H_
#include <string>
using namespace std;
/****** A Calculator Class holding expression, term, factor ********/
class Calculator
{
public:
/**Default Constructor*/
Calculator();
/** Parameterized Constructor common for all exception
* @param e exception value
* */
Calculator(char e);
/**
* Function to start computation
* @param input - input expression*/
void start(string input);
/**
* Evaluates Term*
* @param input string for term*/
double term(string& input);
/* Evaluates factor*
* @param input string for factor*/
double factor(string& input);
/* Evaluates Expression*
* @param input string for expression*/
double expression(string& input);
/* Evaluates number*
* @param input string for number*/
string number(string n);
/**
* Prints calculates value of the expression
* */
void print();
/**
* Converts char to double
* @param c input char
* */
double charTONum(const char* c);
/**
* Get error
*/
char get_value() const;
/** Reset all values*/
void reset();
private:
int lock;//set lock to check extra parenthesis
double result;// result
char error_msg;// error message
};
/**Error for unexpected string operation*/
class Unexpected_error:public Calculator
{
public:
Unexpected_error(char e):Calculator(e){};
};
/**Error for missing parenthesis*/
class Missing_parenthesis:public Calculator
{
public:
Missing_parenthesis(char e):Calculator(e){};
};
/**Error if divide by zeros*/
class DivideByZero:public Calculator{
public:
DivideByZero():Calculator(){};
};
#endif
===============================================================================
Calculator.cpp
//============================================================================
// Name : Calculator.cpp
// Author : Anurag
// Version :
// Copyright : Your copyright notice
// Description : Calculator using mutual recursion in C++, Ansi-style
//============================================================================
#include "Calculator.h"
#include <iostream>
#include <string>
#include <math.h>
#include <exception>
using namespace std;
Calculator::Calculator():lock(0),result(0),error_msg(' '){
}
Calculator::Calculator(char e):result(0), error_msg(e) {
}
char Calculator::get_value() const {
return this->error_msg;
}
void Calculator::start(string input) {
try{
result = expression(input);
print();
}catch (Unexpected_error e) {
cout<<result<<endl;
cout<<"***** Unexpected "<<e.get_value()<<endl;
}catch (Missing_parenthesis e) {
cout<<"***** Missing "<<e.get_value()<<endl;
}catch (DivideByZero e) {
cout<<"***** Division By Zero" << endl;
}
}
double Calculator::expression(string& input) {
double expression=0;
if(input.size()==0)
return 0;
expression = term(input);
if(input[0] == ' ')
input = input.substr(1);
if(input[0] == '+') {
input = input.substr(1);
expression += term(input);
}
else if(input[0] == '-') {
input = input.substr(1);
expression -= term(input);
}
if(input[0] == '%'){
result = expression;
throw Unexpected_error(input[0]);
}
if(input[0]==')' && lock<=0 )
throw Missing_parenthesis(')');
return expression;
}
double Calculator::term(string& input) {
if(input.size()==0)
return 1;
double term=1;
term = factor(input);
if(input[0] == ' ')
input = input.substr(1);
if(input[0] == '*') {
input = input.substr(1);
term = term * factor(input);
}
else if(input[0] == '/') {
input = input.substr(1);
double den = factor(input);
if(den==0) {
throw DivideByZero();
}
term = term / den;
}
return term;
}
double Calculator::factor(string& input) {
double factor=0;
if(input[0] == ' ') {
input = input.substr(1);
}
if(input[0] == '(') {
lock++;
input = input.substr(1);
factor = expression(input);
if(input[0]==')') {
lock--;
input = input.substr(1);
return factor;
}else{
throw Missing_parenthesis(')');
}
}
else if (input[0]>='0' && input[0]<='9'){
string nums = input.substr(0,1) + number(input.substr(1));
input = input.substr(nums.size());
return stod(nums);
}
else {
result = factor;
throw Unexpected_error(input[0]);
}
return factor;
}
string Calculator::number(string input) {
if(input.substr(0,2)=="E+" || input.substr(0,2)=="E-" || input.substr(0,2)=="e-" || input.substr(0,2)=="e-")
return input.substr(0,2) + number(input.substr(2));
else if((input[0]>='0' && input[0]<='9') || (input[0]=='.'))
return input.substr(0,1) + number(input.substr(1));
else
return "";
}
void Calculator::print() {
cout << result << endl;
}
void Calculator::reset(){
this->lock=0;
this->result=0;
}
int main() {
Calculator* cal = new Calculator;
string input;
cout<<"Expression? ";
getline(cin,input);
while(input!="."){
cal->start(input.substr(0,input.size()-2));
cout<<"Expression? ";
cal->reset();
getline(cin,input);
}
cout << "Done!" << endl;
return 0;
}
==============================================================
Sample input-> Expression? (42+8)*10 =
Output-> 500
Top down merge sort can use a pair of mutually recursive functions to alternate the direction of merge based on level of recursion.
For the example code below, a[] is the array to be sorted, b[] is a temporary working array. For a naive implementation of merge sort, each merge operation copies data from a[] to b[], then merges b[] back to a[], or it merges from a[] to b[], then copies from b[] back to a[]. This requires n · ceiling(log2(n)) copy operations. To eliminate the copy operations used for merging, the direction of merge can be alternated based on level of recursion, merge from a[] to b[], merge from b[] to a[], ..., and switch to in place insertion sort for small runs in a[], as insertion sort on small runs is faster than merge sort.
In this example, MergeSortAtoA() and MergeSortAtoB() are the mutually recursive functions.
Example java code:
static final int ISZ = 64; // use insertion sort if size <= ISZ
static void MergeSort(int a[])
{
int n = a.length;
if(n < 2)
return;
int [] b = new int[n];
MergeSortAtoA(a, b, 0, n);
}
static void MergeSortAtoA(int a[], int b[], int ll, int ee)
{
if ((ee - ll) <= ISZ){ // use insertion sort on small runs
InsertionSort(a, ll, ee);
return;
}
int rr = (ll + ee)>>1; // midpoint, start of right half
MergeSortAtoB(a, b, ll, rr);
MergeSortAtoB(a, b, rr, ee);
Merge(b, a, ll, rr, ee); // merge b to a
}
static void MergeSortAtoB(int a[], int b[], int ll, int ee)
{
int rr = (ll + ee)>>1; // midpoint, start of right half
MergeSortAtoA(a, b, ll, rr);
MergeSortAtoA(a, b, rr, ee);
Merge(a, b, ll, rr, ee); // merge a to b
}
static void Merge(int a[], int b[], int ll, int rr, int ee)
{
int o = ll; // b[] index
int l = ll; // a[] left index
int r = rr; // a[] right index
while(true){ // merge data
if(a[l] <= a[r]){ // if a[l] <= a[r]
b[o++] = a[l++]; // copy a[l]
if(l < rr) // if not end of left run
continue; // continue (back to while)
while(r < ee){ // else copy rest of right run
b[o++] = a[r++];
}
break; // and return
} else { // else a[l] > a[r]
b[o++] = a[r++]; // copy a[r]
if(r < ee) // if not end of right run
continue; // continue (back to while)
while(l < rr){ // else copy rest of left run
b[o++] = a[l++];
}
break; // and return
}
}
}
static void InsertionSort(int a[], int ll, int ee)
{
int i, j;
int t;
for (j = ll + 1; j < ee; j++) {
t = a[j];
i = j-1;
while(i >= ll && a[i] > t){
a[i+1] = a[i];
i--;
}
a[i+1] = t;
}
}
I'm looking for a string similarity algorithm that yields better results on variable length strings than the ones that are usually suggested (levenshtein distance, soundex, etc).
For example,
Given string A: "Robert",
Then string B: "Amy Robertson"
would be a better match than
String C: "Richard"
Also, preferably, this algorithm should be language agnostic (also works in languages other than English).
Simon White of Catalysoft wrote an article about a very clever algorithm that compares adjacent character pairs that works really well for my purposes:
http://www.catalysoft.com/articles/StrikeAMatch.html
Simon has a Java version of the algorithm and below I wrote a PL/Ruby version of it (taken from the plain ruby version done in the related forum entry comment by Mark Wong-VanHaren) so that I can use it in my PostgreSQL queries:
CREATE FUNCTION string_similarity(str1 varchar, str2 varchar)
RETURNS float8 AS '
str1.downcase!
pairs1 = (0..str1.length-2).collect {|i| str1[i,2]}.reject {
|pair| pair.include? " "}
str2.downcase!
pairs2 = (0..str2.length-2).collect {|i| str2[i,2]}.reject {
|pair| pair.include? " "}
union = pairs1.size + pairs2.size
intersection = 0
pairs1.each do |p1|
0.upto(pairs2.size-1) do |i|
if p1 == pairs2[i]
intersection += 1
pairs2.slice!(i)
break
end
end
end
(2.0 * intersection) / union
' LANGUAGE 'plruby';
Works like a charm!
marzagao's answer is great. I converted it to C# so I thought I'd post it here:
Pastebin Link
using System.Collections.Generic;
using System.Text.RegularExpressions;
/// <summary>
/// This class implements string comparison algorithm
/// based on character pair similarity
/// Source: http://www.catalysoft.com/articles/StrikeAMatch.html
/// </summary>
public class SimilarityTool
{
/// <summary>
/// Compares the two strings based on letter pair matches
/// </summary>
/// <param name="str1"></param>
/// <param name="str2"></param>
/// <returns>The percentage match from 0.0 to 1.0 where 1.0 is 100%</returns>
public double CompareStrings(string str1, string str2)
{
List<string> pairs1 = WordLetterPairs(str1.ToUpper());
List<string> pairs2 = WordLetterPairs(str2.ToUpper());
int intersection = 0;
int union = pairs1.Count + pairs2.Count;
for (int i = 0; i < pairs1.Count; i++)
{
for (int j = 0; j < pairs2.Count; j++)
{
if (pairs1[i] == pairs2[j])
{
intersection++;
pairs2.RemoveAt(j);//Must remove the match to prevent "GGGG" from appearing to match "GG" with 100% success
break;
}
}
}
return (2.0 * intersection) / union;
}
/// <summary>
/// Gets all letter pairs for each
/// individual word in the string
/// </summary>
/// <param name="str"></param>
/// <returns></returns>
private List<string> WordLetterPairs(string str)
{
List<string> AllPairs = new List<string>();
// Tokenize the string and put the tokens/words into an array
string[] Words = Regex.Split(str, @"\s");
// For each word
for (int w = 0; w < Words.Length; w++)
{
if (!string.IsNullOrEmpty(Words[w]))
{
// Find the pairs of characters
String[] PairsInWord = LetterPairs(Words[w]);
for (int p = 0; p < PairsInWord.Length; p++)
{
AllPairs.Add(PairsInWord[p]);
}
}
}
return AllPairs;
}
/// <summary>
/// Generates an array containing every
/// two consecutive letters in the input string
/// </summary>
/// <param name="str"></param>
/// <returns></returns>
private string[] LetterPairs(string str)
{
int numPairs = str.Length - 1;
string[] pairs = new string[numPairs];
for (int i = 0; i < numPairs; i++)
{
pairs[i] = str.Substring(i, 2);
}
return pairs;
}
}
Here is another version of marzagao's answer, this one written in Python:
def get_bigrams(string):
    """
    Take a string and return a list of bigrams.
    """
    s = string.lower()
    return [s[i:i+2] for i in list(range(len(s) - 1))]

def string_similarity(str1, str2):
    """
    Perform bigram comparison between two strings
    and return a percentage match in decimal form.
    """
    pairs1 = get_bigrams(str1)
    pairs2 = get_bigrams(str2)
    union = len(pairs1) + len(pairs2)
    hit_count = 0
    for x in pairs1:
        for y in pairs2:
            if x == y:
                hit_count += 1
                break
    return (2.0 * hit_count) / union

if __name__ == "__main__":
    """
    Run a test using the example taken from:
    http://www.catalysoft.com/articles/StrikeAMatch.html
    """
    w1 = 'Healed'
    words = ['Heard', 'Healthy', 'Help', 'Herded', 'Sealed', 'Sold']
    for w2 in words:
        print('Healed --- ' + w2)
        print(string_similarity(w1, w2))
        print()
A shorter version of John Rutledge's answer:
def get_bigrams(string):
    '''
    Takes a string and returns a list of bigrams
    '''
    s = string.lower()
    return {s[i:i+2] for i in xrange(len(s) - 1)}

def string_similarity(str1, str2):
    '''
    Perform bigram comparison between two strings
    and return a percentage match in decimal form
    '''
    pairs1 = get_bigrams(str1)
    pairs2 = get_bigrams(str2)
    return (2.0 * len(pairs1 & pairs2)) / (len(pairs1) + len(pairs2))
Here's my PHP implementation of the suggested StrikeAMatch algorithm by Simon White. The advantages (as stated in the link) are:
A true reflection of lexical similarity - strings with small differences should be recognised as being similar. In particular, a significant substring overlap should point to a high level of similarity between the strings.
A robustness to changes of word order - two strings which contain the same words, but in a different order, should be recognised as being similar. On the other hand, if one string is just a random anagram of the characters contained in the other, then it should (usually) be recognised as dissimilar.
Language Independence - the algorithm should work not only in English, but in many different languages.
<?php
/**
* LetterPairSimilarity algorithm implementation in PHP
* @author Igal Alkon
* @link http://www.catalysoft.com/articles/StrikeAMatch.html
*/
class LetterPairSimilarity
{
/**
* @param $str
* @return mixed
*/
private function wordLetterPairs($str)
{
$allPairs = array();
// Tokenize the string and put the tokens/words into an array
$words = explode(' ', $str);
// For each word
for ($w = 0; $w < count($words); $w++)
{
// Find the pairs of characters
$pairsInWord = $this->letterPairs($words[$w]);
for ($p = 0; $p < count($pairsInWord); $p++)
{
$allPairs[] = $pairsInWord[$p];
}
}
return $allPairs;
}
/**
* @param $str
* @return array
*/
private function letterPairs($str)
{
$numPairs = mb_strlen($str)-1;
$pairs = array();
for ($i = 0; $i < $numPairs; $i++)
{
$pairs[$i] = mb_substr($str,$i,2);
}
return $pairs;
}
/**
* @param $str1
* @param $str2
* @return float
*/
public function compareStrings($str1, $str2)
{
$pairs1 = $this->wordLetterPairs(strtoupper($str1));
$pairs2 = $this->wordLetterPairs(strtoupper($str2));
$intersection = 0;
$union = count($pairs1) + count($pairs2);
for ($i=0; $i < count($pairs1); $i++)
{
$pair1 = $pairs1[$i];
$pairs2 = array_values($pairs2);
for($j = 0; $j < count($pairs2); $j++)
{
$pair2 = $pairs2[$j];
if ($pair1 === $pair2)
{
$intersection++;
unset($pairs2[$j]);
break;
}
}
}
return (2.0*$intersection)/$union;
}
}
This discussion has been really helpful, thanks. I converted the algorithm to VBA for use with Excel and wrote a few versions of a worksheet function, one for simple comparison of a pair of strings, the other for comparing one string to a range/array of strings. The strSimLookup version returns either the last best match as a string, array index, or similarity metric.
This implementation produces the same results listed in the Amazon example on Simon White's website with a few minor exceptions on low-scoring matches; not sure where the difference creeps in, could be VBA's Split function, but I haven't investigated as it's working fine for my purposes.
'Implements functions to rate how similar two strings are on
'a scale of 0.0 (completely dissimilar) to 1.0 (exactly similar)
'Source: http://www.catalysoft.com/articles/StrikeAMatch.html
'Author: Bob Chatham, bob.chatham at gmail.com
'9/12/2010
Option Explicit
Public Function stringSimilarity(str1 As String, str2 As String) As Variant
'Simple version of the algorithm that computes the similiarity metric
'between two strings.
'NOTE: This verision is not efficient to use if you're comparing one string
'with a range of other values as it will needlessly calculate the pairs for the
'first string over an over again; use the array-optimized version for this case.
Dim sPairs1 As Collection
Dim sPairs2 As Collection
Set sPairs1 = New Collection
Set sPairs2 = New Collection
WordLetterPairs str1, sPairs1
WordLetterPairs str2, sPairs2
stringSimilarity = SimilarityMetric(sPairs1, sPairs2)
Set sPairs1 = Nothing
Set sPairs2 = Nothing
End Function
Public Function strSimA(str1 As Variant, rRng As Range) As Variant
'Return an array of string similarity indexes for str1 vs every string in input range rRng
Dim sPairs1 As Collection
Dim sPairs2 As Collection
Dim arrOut As Variant
Dim l As Long, j As Long
Set sPairs1 = New Collection
WordLetterPairs CStr(str1), sPairs1
l = rRng.Count
ReDim arrOut(1 To l)
For j = 1 To l
Set sPairs2 = New Collection
WordLetterPairs CStr(rRng(j)), sPairs2
arrOut(j) = SimilarityMetric(sPairs1, sPairs2)
Set sPairs2 = Nothing
Next j
strSimA = Application.Transpose(arrOut)
End Function
Public Function strSimLookup(str1 As Variant, rRng As Range, Optional returnType) As Variant
'Return either the best match or the index of the best match
'depending on returnTYype parameter) between str1 and strings in rRng)
' returnType = 0 or omitted: returns the best matching string
' returnType = 1 : returns the index of the best matching string
' returnType = 2 : returns the similarity metric
Dim sPairs1 As Collection
Dim sPairs2 As Collection
Dim metric, bestMetric As Double
Dim i, iBest As Long
Const RETURN_STRING As Integer = 0
Const RETURN_INDEX As Integer = 1
Const RETURN_METRIC As Integer = 2
If IsMissing(returnType) Then returnType = RETURN_STRING
Set sPairs1 = New Collection
WordLetterPairs CStr(str1), sPairs1
bestMetric = -1
iBest = -1
For i = 1 To rRng.Count
Set sPairs2 = New Collection
WordLetterPairs CStr(rRng(i)), sPairs2
metric = SimilarityMetric(sPairs1, sPairs2)
If metric > bestMetric Then
bestMetric = metric
iBest = i
End If
Set sPairs2 = Nothing
Next i
If iBest = -1 Then
strSimLookup = CVErr(xlErrValue)
Exit Function
End If
Select Case returnType
Case RETURN_STRING
strSimLookup = CStr(rRng(iBest))
Case RETURN_INDEX
strSimLookup = iBest
Case Else
strSimLookup = bestMetric
End Select
End Function
Public Function strSim(str1 As String, str2 As String) As Variant
Dim ilen, iLen1, ilen2 As Integer
iLen1 = Len(str1)
ilen2 = Len(str2)
If iLen1 >= ilen2 Then ilen = ilen2 Else ilen = iLen1
strSim = stringSimilarity(Left(str1, ilen), Left(str2, ilen))
End Function
Sub WordLetterPairs(str As String, pairColl As Collection)
'Tokenize str into words, then add all letter pairs to pairColl
Dim Words() As String
Dim word, nPairs, pair As Integer
Words = Split(str)
If UBound(Words) < 0 Then
Set pairColl = Nothing
Exit Sub
End If
For word = 0 To UBound(Words)
nPairs = Len(Words(word)) - 1
If nPairs > 0 Then
For pair = 1 To nPairs
pairColl.Add Mid(Words(word), pair, 2)
Next pair
End If
Next word
End Sub
Private Function SimilarityMetric(sPairs1 As Collection, sPairs2 As Collection) As Variant
'Helper function to calculate similarity metric given two collections of letter pairs.
'This function is designed to allow the pair collections to be set up separately as needed.
'NOTE: sPairs2 collection will be altered as pairs are removed; copy the collection
'if this is not the desired behavior.
'Also assumes that collections will be deallocated somewhere else
Dim Intersect As Double
Dim Union As Double
Dim i, j As Long
If sPairs1.Count = 0 Or sPairs2.Count = 0 Then
SimilarityMetric = CVErr(xlErrNA)
Exit Function
End If
Union = sPairs1.Count + sPairs2.Count
Intersect = 0
For i = 1 To sPairs1.Count
For j = 1 To sPairs2.Count
If StrComp(sPairs1(i), sPairs2(j)) = 0 Then
Intersect = Intersect + 1
sPairs2.Remove j
Exit For
End If
Next j
Next i
SimilarityMetric = (2 * Intersect) / Union
End Function
I'm sorry, but the answer was not invented by the author. This is a well-known algorithm that was first presented by Digital Equipment Corporation and is often referred to as shingling.
http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-TN-1997-015.pdf
I translated Simon White's algorithm to PL/pgSQL. This is my contribution.
create or replace function spt1.letterpairs(in p_str varchar)
returns varchar as
$$
declare
v_numpairs integer := length(p_str)-1;
v_pairs varchar[];
begin
for i in 1 .. v_numpairs loop
v_pairs[i] := substr(p_str, i, 2);
end loop;
return v_pairs;
end;
$$ language 'plpgsql';
--===================================================================
create or replace function spt1.wordletterpairs(in p_str varchar)
returns varchar as
$$
declare
v_allpairs varchar[];
v_words varchar[];
v_pairsinword varchar[];
begin
v_words := regexp_split_to_array(p_str, '[[:space:]]');
for i in 1 .. array_length(v_words, 1) loop
v_pairsinword := spt1.letterpairs(v_words[i]);
if v_pairsinword is not null then
for j in 1 .. array_length(v_pairsinword, 1) loop
v_allpairs := v_allpairs || v_pairsinword[j];
end loop;
end if;
end loop;
return v_allpairs;
end;
$$ language 'plpgsql';
--===================================================================
create or replace function spt1.arrayintersect(ANYARRAY, ANYARRAY)
returns anyarray as
$$
select array(select unnest($1) intersect select unnest($2))
$$ language 'sql';
--===================================================================
create or replace function spt1.comparestrings(in p_str1 varchar, in p_str2 varchar)
returns float as
$$
declare
v_pairs1 varchar[];
v_pairs2 varchar[];
v_intersection integer;
v_union integer;
begin
v_pairs1 := wordletterpairs(upper(p_str1));
v_pairs2 := wordletterpairs(upper(p_str2));
v_union := array_length(v_pairs1, 1) + array_length(v_pairs2, 1);
v_intersection := array_length(arrayintersect(v_pairs1, v_pairs2), 1);
return (2.0 * v_intersection / v_union);
end;
$$ language 'plpgsql';
A version in beautiful Scala:
def pairDistance(s1: String, s2: String): Double = {
def strToPairs(s: String, acc: List[String]): List[String] = {
if (s.size < 2) acc
else strToPairs(s.drop(1),
if (s.take(2).contains(" ")) acc else acc ::: List(s.take(2)))
}
val lst1 = strToPairs(s1.toUpperCase, List())
val lst2 = strToPairs(s2.toUpperCase, List())
(2.0 * lst2.intersect(lst1).size) / (lst1.size + lst2.size)
}
String Similarity Metrics contains an overview of many different metrics used in string comparison (Wikipedia has an overview as well). Many of these metrics are implemented in the simmetrics library.
Yet another example of a metric, not included in the given overview, is compression distance (which attempts to approximate Kolmogorov complexity); it can be used for somewhat longer texts than the ones you presented.
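As an illustration (not taken from the packages mentioned), here is a rough Java sketch of normalized compression distance using java.util.zip.Deflater. Lower values mean more similar; note that deflate's per-stream overhead makes the measure unreliable on very short strings such as single names:

import java.util.zip.Deflater;

public class CompressionDistance {
    // Size of the deflate-compressed representation of data.
    static int compressedSize(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] buffer = new byte[data.length + 64];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(buffer);
        }
        deflater.end();
        return total;
    }

    // NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    static double ncd(String x, String y) {
        int cx = compressedSize(x.getBytes());
        int cy = compressedSize(y.getBytes());
        int cxy = compressedSize((x + y).getBytes());
        return (cxy - Math.min(cx, cy)) / (double) Math.max(cx, cy);
    }

    public static void main(String[] args) {
        System.out.println(ncd("the quick brown fox", "the quick brown dog"));
        System.out.println(ncd("the quick brown fox", "lorem ipsum dolor sit amet"));
    }
}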
You might also consider looking at a much broader subject of Natural Language Processing. These R packages can get you started quickly (or at least give some ideas).
And one last edit - search the other questions on this subject at SO, there are quite a few related ones.
A faster PHP version of the algorithm:
/**
*
* @param $str
* @return mixed
*/
private static function wordLetterPairs ($str)
{
$allPairs = array();
// Tokenize the string and put the tokens/words into an array
$words = explode(' ', $str);
// For each word
for ($w = 0; $w < count($words); $w ++) {
// Find the pairs of characters
$pairsInWord = self::letterPairs($words[$w]);
for ($p = 0; $p < count($pairsInWord); $p ++) {
$allPairs[$pairsInWord[$p]] = $pairsInWord[$p];
}
}
return array_values($allPairs);
}
/**
*
* @param $str
* @return array
*/
private static function letterPairs ($str)
{
$numPairs = mb_strlen($str) - 1;
$pairs = array();
for ($i = 0; $i < $numPairs; $i ++) {
$pairs[$i] = mb_substr($str, $i, 2);
}
return $pairs;
}
/**
*
* @param $str1
* @param $str2
* @return float
*/
public static function compareStrings ($str1, $str2)
{
$pairs1 = self::wordLetterPairs(mb_strtolower($str1));
$pairs2 = self::wordLetterPairs(mb_strtolower($str2));
$union = count($pairs1) + count($pairs2);
$intersection = count(array_intersect($pairs1, $pairs2));
return (2.0 * $intersection) / $union;
}
For the data I had (approx. 2300 comparisons) I had a running time of 0.58 sec with Igal Alkon's solution versus 0.35 sec with mine.
Posting marzagao's answer in C99, inspired by these algorithms:
#include <string.h>  /* strlen; strcmpi below is a non-standard case-insensitive compare, use strcasecmp on POSIX */
double dice_match(const char *string1, const char *string2) {
//check fast cases
if (((string1 != NULL) && (string1[0] == '\0')) ||
((string2 != NULL) && (string2[0] == '\0'))) {
return 0;
}
if (string1 == string2) {
return 1;
}
size_t strlen1 = strlen(string1);
size_t strlen2 = strlen(string2);
if (strlen1 < 2 || strlen2 < 2) {
return 0;
}
size_t length1 = strlen1 - 1;
size_t length2 = strlen2 - 1;
double matches = 0;
int i = 0, j = 0;
//get bigrams and compare
while (i < length1 && j < length2) {
char a[3] = {string1[i], string1[i + 1], '\0'};
char b[3] = {string2[j], string2[j + 1], '\0'};
int cmp = strcmpi(a, b);
if (cmp == 0) {
matches += 2;
}
i++;
j++;
}
return matches / (length1 + length2);
}
Some tests based on the original article:
#include <stdio.h>
void article_test1() {
char *string1 = "FRANCE";
char *string2 = "FRENCH";
printf("====%s====\n", __func__);
printf("%2.f%% == 40%%\n", dice_match(string1, string2) * 100);
}
void article_test2() {
printf("====%s====\n", __func__);
char *string = "Healed";
char *ss[] = {"Heard", "Healthy", "Help",
"Herded", "Sealed", "Sold"};
int correct[] = {44, 55, 25, 40, 80, 0};
for (int i = 0; i < 6; ++i) {
printf("%2.f%% == %d%%\n", dice_match(string, ss[i]) * 100, correct[i]);
}
}
void multicase_test() {
char *string1 = "FRaNcE";
char *string2 = "fREnCh";
printf("====%s====\n", __func__);
printf("%2.f%% == 40%%\n", dice_match(string1, string2) * 100);
}
void gg_test() {
char *string1 = "GG";
char *string2 = "GGGGG";
printf("====%s====\n", __func__);
printf("%2.f%% != 100%%\n", dice_match(string1, string2) * 100);
}
int main() {
article_test1();
article_test2();
multicase_test();
gg_test();
return 0;
}
Here is the R version:
get_bigrams <- function(str)
{
lstr = tolower(str)
bigramlst = list()
for(i in 1:(nchar(lstr)-1))
{
bigramlst[[i]] = substr(lstr, i, i+1)
}
return(bigramlst)
}
str_similarity <- function(str1, str2)
{
pairs1 = get_bigrams(str1)
pairs2 = get_bigrams(str2)
unionlen = length(pairs1) + length(pairs2)
hit_count = 0
for(x in 1:length(pairs1)){
for(y in 1:length(pairs2)){
if (pairs1[[x]] == pairs2[[y]])
hit_count = hit_count + 1
}
}
return ((2.0 * hit_count) / unionlen)
}
Building on Michael La Voie's awesome C# version, and as per the request to make it an extension method, here is what I came up with. The primary benefit of doing it this way is that you can sort a generic List by the percent match. For example, suppose you have a string field named "City" in your object. A user searches for "Chester" and you want to return results in descending order of match, so that literal matches of Chester show up before Rochester. To do this, add two new properties to your object:
public string SearchText { get; set; }
public double PercentMatch
{
get
{
return City.ToUpper().PercentMatchTo(this.SearchText.ToUpper());
}
}
Then on each object, set the SearchText to what the user searched for. Then you can sort it easily with something like:
zipcodes = zipcodes.OrderByDescending(x => x.PercentMatch);
Here's the slight modification to make it an extension method:
/// <summary>
/// Implements a string comparison algorithm
/// based on character pair similarity.
/// Source: http://www.catalysoft.com/articles/StrikeAMatch.html
/// </summary>
public static double PercentMatchTo(this string str1, string str2)
{
List<string> pairs1 = WordLetterPairs(str1.ToUpper());
List<string> pairs2 = WordLetterPairs(str2.ToUpper());
int intersection = 0;
int union = pairs1.Count + pairs2.Count;
for (int i = 0; i < pairs1.Count; i++)
{
for (int j = 0; j < pairs2.Count; j++)
{
if (pairs1[i] == pairs2[j])
{
intersection++;
pairs2.RemoveAt(j);//Must remove the match to prevent "GGGG" from appearing to match "GG" with 100% success
break;
}
}
}
return (2.0 * intersection) / union;
}
/// <summary>
/// Gets all letter pairs for each
/// individual word in the string
/// </summary>
/// <param name="str"></param>
/// <returns></returns>
private static List<string> WordLetterPairs(string str)
{
List<string> AllPairs = new List<string>();
// Tokenize the string and put the tokens/words into an array
string[] Words = Regex.Split(str, @"\s");
// For each word
for (int w = 0; w < Words.Length; w++)
{
if (!string.IsNullOrEmpty(Words[w]))
{
// Find the pairs of characters
String[] PairsInWord = LetterPairs(Words[w]);
for (int p = 0; p < PairsInWord.Length; p++)
{
AllPairs.Add(PairsInWord[p]);
}
}
}
return AllPairs;
}
/// <summary>
/// Generates an array containing every
/// two consecutive letters in the input string
/// </summary>
/// <param name="str"></param>
/// <returns></returns>
private static string[] LetterPairs(string str)
{
int numPairs = str.Length - 1;
string[] pairs = new string[numPairs];
for (int i = 0; i < numPairs; i++)
{
pairs[i] = str.Substring(i, 2);
}
return pairs;
}
My JavaScript implementation takes a string or array of strings, and an optional floor (the default floor is 0.5). If you pass it a string, it will return true or false depending on whether or not the string's similarity score is greater than or equal to the floor. If you pass it an array of strings, it will return an array of those strings whose similarity score is greater than or equal to the floor, sorted by score.
Examples:
'Healed'.fuzzy('Sealed'); // returns true
'Healed'.fuzzy('Help'); // returns false
'Healed'.fuzzy('Help', 0.25); // returns true
'Healed'.fuzzy(['Sold', 'Herded', 'Heard', 'Help', 'Sealed', 'Healthy']);
// returns ["Sealed", "Healthy"]
'Healed'.fuzzy(['Sold', 'Herded', 'Heard', 'Help', 'Sealed', 'Healthy'], 0);
// returns ["Sealed", "Healthy", "Heard", "Herded", "Help", "Sold"]
Here it is:
(function(){
var default_floor = 0.5;
function pairs(str){
var pairs = []
, length = str.length - 1
, pair;
str = str.toLowerCase();
for(var i = 0; i < length; i++){
pair = str.substr(i, 2);
if(!/\s/.test(pair)){
pairs.push(pair);
}
}
return pairs;
}
function similarity(pairs1, pairs2){
var union = pairs1.length + pairs2.length
, hits = 0;
for(var i = 0; i < pairs1.length; i++){
for(var j = 0; j < pairs2.length; j++){
if(pairs1[i] == pairs2[j]){
pairs2.splice(j--, 1);
hits++;
break;
}
}
}
return 2*hits/union || 0;
}
String.prototype.fuzzy = function(strings, floor){
var str1 = this
, pairs1 = pairs(this);
floor = typeof floor == 'number' ? floor : default_floor;
if(typeof(strings) == 'string'){
return str1.length > 1 && strings.length > 1 && similarity(pairs1, pairs(strings)) >= floor || str1.toLowerCase() == strings.toLowerCase();
}else if(strings instanceof Array){
var scores = {};
strings.map(function(str2){
scores[str2] = str1.length > 1 ? similarity(pairs1, pairs(str2)) : 1*(str1.toLowerCase() == str2.toLowerCase());
});
return strings.filter(function(str){
return scores[str] >= floor;
}).sort(function(a, b){
return scores[b] - scores[a];
});
}
};
})();
The Dice coefficient algorithm (Simon White / marzagao's answer) is implemented in Ruby in the
pair_distance_similar method in the amatch gem
https://github.com/flori/amatch
This gem also contains implementations of a number of approximate matching and string comparison algorithms: Levenshtein edit distance, Sellers edit distance, the Hamming distance, the longest common subsequence length, the longest common substring length, the pair distance metric, the Jaro-Winkler metric.
A Haskell version—feel free to suggest edits because I haven't done much Haskell.
import Data.Char
import Data.List
-- Convert a string into words, then get the pairs of words from that phrase
wordLetterPairs :: String -> [String]
wordLetterPairs s1 = concat $ map pairs $ words s1
-- Converts a String into a list of letter pairs.
pairs :: String -> [String]
pairs [] = []
pairs (x:[]) = []
pairs (x:ys) = [x, head ys]:(pairs ys)
-- Calculates the match rating for two strings
matchRating :: String -> String -> Double
matchRating s1 s2 = (numberOfMatches * 2) / totalLength
where pairsS1 = wordLetterPairs $ map toLower s1
pairsS2 = wordLetterPairs $ map toLower s2
numberOfMatches = fromIntegral $ length $ pairsS1 `intersect` pairsS2
totalLength = fromIntegral $ length pairsS1 + length pairsS2
Clojure:
(require '[clojure.set :refer [intersection]]
         '[clojure.string :refer [split]])
(defn bigrams [s]
(->> (split s #"\s+")
(mapcat #(partition 2 1 %))
(set)))
(defn string-similarity [a b]
(let [a-pairs (bigrams a)
b-pairs (bigrams b)
total-count (+ (count a-pairs) (count b-pairs))
match-count (count (intersection a-pairs b-pairs))
similarity (/ (* 2 match-count) total-count)]
similarity))
Here is another version of similarity based on the Sørensen–Dice index (marzagao's answer), this one written in C++11:
#include <string>
#include <unordered_set>
#include <algorithm>
#include <cctype>
// Upper-case copy of a string (ASCII), so the comparison is case-insensitive
std::string upper_string(const std::string& str) {
std::string upper(str);
std::transform(upper.begin(), upper.end(), upper.begin(),
[](unsigned char c) { return std::toupper(c); });
return upper;
}
/*
 * Similarity based on the Sørensen–Dice index.
 *
 * Returns the similarity between _str1 and _str2.
 */
double similarity_sorensen_dice(const std::string& _str1, const std::string& _str2) {
// Base case: if some string is empty, there is nothing in common.
if (_str1.empty() || _str2.empty()) {
return 0.0;
}
auto str1 = upper_string(_str1);
auto str2 = upper_string(_str2);
// Base case: if the strings are equal, the similarity is perfect.
if (str1 == str2) {
return 1.0;
}
// Base case: if some string does not have bigrams.
if (str1.size() < 2 || str2.size() < 2) {
return 0.0;
}
// Extract bigrams from str1
auto num_pairs1 = str1.size() - 1;
std::unordered_set<std::string> str1_bigrams;
str1_bigrams.reserve(num_pairs1);
for (unsigned i = 0; i < num_pairs1; ++i) {
str1_bigrams.insert(str1.substr(i, 2));
}
// Extract bigrams from str2
auto num_pairs2 = str2.size() - 1;
std::unordered_set<std::string> str2_bigrams;
str2_bigrams.reserve(num_pairs2);
for (unsigned int i = 0; i < num_pairs2; ++i) {
str2_bigrams.insert(str2.substr(i, 2));
}
// Find the intersection between the two sets.
int intersection = 0;
if (str1_bigrams.size() < str2_bigrams.size()) {
const auto it_e = str2_bigrams.end();
for (const auto& bigram : str1_bigrams) {
intersection += str2_bigrams.find(bigram) != it_e;
}
} else {
const auto it_e = str1_bigrams.end();
for (const auto& bigram : str2_bigrams) {
intersection += str1_bigrams.find(bigram) != it_e;
}
}
// Returns similarity coefficient.
return (2.0 * intersection) / (num_pairs1 + num_pairs2);
}
Why not another JavaScript implementation? I have also explained the algorithm below.
Algorithm
Input : France and French.
Map them both to their upper case characters (making the algorithm insensitive to case differences), then split them up into their character pairs:
FRANCE: {FR, RA, AN, NC, CE}
FRENCH: {FR, RE, EN, NC, CH}
Find their intersection: {FR, NC}
Result: 2 × 2 / (5 + 5) = 0.4, i.e. a 40% match.
Implementation
function similarity(s1, s2) {
const
set1 = pairs(s1.toUpperCase()), // [ FR, RA, AN, NC, CE ]
set2 = pairs(s2.toUpperCase()), // [ FR, RE, EN, NC, CH ]
intersection = set1.filter(x => set2.includes(x)); // [ FR, NC ]
// Tip: multiply by 200 instead of 2 to get a percentage.
return (intersection.length * 2) / (set1.length + set2.length);
}
function pairs(input) {
const tokenized = [];
for (let i = 0; i < input.length - 1; i++)
tokenized.push(input.substring(i, 2 + i));
return tokenized;
}
console.log(similarity("FRANCE", "FRENCH"));
Ranking results (word - similarity):
Sealed - 80%
Healthy - 55%
Heard - 44%
Herded - 40%
Help - 25%
Sold - 0%
From the same original source.
What about Levenshtein distance, divided by the length of the first string (or alternatively divided by the min/max/avg length of both strings)? That has worked for me so far.
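If it helps, here is a minimal Java sketch of that idea (the class and method names are mine): the usual two-row dynamic-programming edit distance, normalized here by the longer string's length so the result is a similarity between 0 and 1.
public class LevenshteinSimilarity {
    // Classic two-row dynamic-programming Levenshtein distance
    private static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                   prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    // Distance divided by the longer length, turned into a similarity in [0, 1]
    public static double similarity(String a, String b) {
        if (a.isEmpty() && b.isEmpty()) return 1.0;
        return 1.0 - (double) levenshtein(a, b) / Math.max(a.length(), b.length());
    }

    public static void main(String[] args) {
        System.out.println(similarity("Healed", "Sealed")); // ~0.83
        System.out.println(similarity("Healed", "Help"));   // 0.5
    }
}
Dividing by the first string's length instead, as suggested, also works but makes the measure asymmetric.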
Hey guys, I gave this a try in JavaScript, but I'm new to it. Does anyone know a faster way to do it?
function get_bigrams(string) {
// Takes a string and returns a list of bigrams
var s = string.toLowerCase();
var v = new Array(s.length-1);
for (var i = 0; i < v.length; i++){
v[i] = s.slice(i, i+2);
}
return v;
}
function string_similarity(str1, str2){
/*
Perform bigram comparison between two strings
and return a percentage match in decimal form
*/
var pairs1 = get_bigrams(str1);
var pairs2 = get_bigrams(str2);
var union = pairs1.length + pairs2.length;
var hit_count = 0;
for (x in pairs1){
for (y in pairs2){
if (pairs1[x] == pairs2[y]){
hit_count++;
}
}
}
return ((2.0 * hit_count) / union);
}
var w1 = 'Healed';
var word =['Heard','Healthy','Help','Herded','Sealed','Sold']
for (w2 in word){
console.log('Healed --- ' + word[w2])
console.log(string_similarity(w1,word[w2]));
}
I was looking for a pure Ruby implementation of the algorithm indicated in @marzagao's answer. Unfortunately, the link indicated by @marzagao is broken. In @s01ipsist's answer, he indicated the Ruby gem amatch, whose implementation is not pure Ruby. So I searched a little and found the gem fuzzy_match, which has a pure Ruby implementation (though this gem can use amatch) here. I hope this will help someone like me.
I've converted marzagao's answer to Java.
import org.apache.commons.lang3.StringUtils; // Add the Apache Commons Lang dependency in pom.xml
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class SimilarityComparator {
public static void main(String[] args) {
String str0 = "Nischal";
String str1 = "Nischal";
double v = compareStrings(str0, str1);
System.out.println("Similarity betn " + str0 + " and " + str1 + " = " + v);
}
private static double compareStrings(String str1, String str2) {
List<String> pairs1 = wordLetterPairs(str1.toUpperCase());
List<String> pairs2 = wordLetterPairs(str2.toUpperCase());
int intersection = 0;
int union = pairs1.size() + pairs2.size();
for (String s : pairs1) {
for (int j = 0; j < pairs2.size(); j++) {
if (s.equals(pairs2.get(j))) {
intersection++;
pairs2.remove(j);
break;
}
}
}
return (2.0 * intersection) / union;
}
private static List<String> wordLetterPairs(String str) {
List<String> AllPairs = new ArrayList<>();
String[] Words = str.split("\\s");
for (String word : Words) {
if (StringUtils.isNotBlank(word)) {
String[] PairsInWord = letterPairs(word);
Collections.addAll(AllPairs, PairsInWord);
}
}
return AllPairs;
}
private static String[] letterPairs(String str) {
int numPairs = str.length() - 1;
String[] pairs = new String[numPairs];
for (int i = 0; i < numPairs; i++) {
// substring(i, i + 2) is always within bounds here, so no try/catch is needed
pairs[i] = str.substring(i, i + 2);
}
return pairs;
}
}
Here's another C++ implementation that follows the original article and minimizes dynamic memory allocations.
It obtains the same matching values in the examples, but I think it's better to also take single-character words into account.
#include <cassert>
#include <cctype>
#include <string_view>
#include <vector>
//---------------------------------------------------------------------------
// Similarity based on Sørensen–Dice index
double calc_similarity( const std::string_view s1, const std::string_view s2 )
{
// Check banal cases
if( s1.empty() || s2.empty() )
{// Empty string is never similar to another
return 0.0;
}
else if( s1==s2 )
{// Perfectly equal
return 1.0;
}
else if( s1.length()==1 || s2.length()==1 )
{// Single (not equal) characters have zero similarity
return 0.0;
}
/////////////////////////////////////////////////////////////////////////
// Represents a pair of adjacent characters
class charpair_t final
{
public:
charpair_t(const char a, const char b) noexcept : c1(a), c2(b) {}
[[nodiscard]] bool operator==(const charpair_t& other) const noexcept { return c1==other.c1 && c2==other.c2; }
private:
char c1, c2;
};
/////////////////////////////////////////////////////////////////////////
// Collects and access a sequence of adjacent characters (skipping spaces)
class charpairs_t final
{
public:
charpairs_t(const std::string_view s)
{
assert( !s.empty() );
const std::size_t i_last = s.size()-1;
std::size_t i = 0;
chpairs.reserve(i_last);
while( i<i_last )
{
// Accepting also single-character words (the second is a space)
//if( !std::isspace(s[i]) ) chpairs.emplace_back( std::tolower(s[i]), std::tolower(s[i+1]) );
// Skipping single-character words (as in the original article)
if( std::isspace(s[i]) ) ; // Skip
else if( std::isspace(s[i+1]) ) ++i; // Skip also next
else chpairs.emplace_back( std::tolower(s[i]), std::tolower(s[i+1]) );
++i;
}
}
[[nodiscard]] auto size() const noexcept { return chpairs.size(); }
[[nodiscard]] auto cbegin() const noexcept { return chpairs.cbegin(); }
[[nodiscard]] auto cend() const noexcept { return chpairs.cend(); }
auto erase(std::vector<charpair_t>::const_iterator i) { return chpairs.erase(i); }
private:
std::vector<charpair_t> chpairs;
};
charpairs_t chpairs1{s1},
chpairs2{s2};
const double orig_avg_bigrams_count = 0.5 * static_cast<double>(chpairs1.size() + chpairs2.size());
std::size_t matching_bigrams_count = 0;
for( auto ib1=chpairs1.cbegin(); ib1!=chpairs1.cend(); ++ib1 )
{
for( auto ib2=chpairs2.cbegin(); ib2!=chpairs2.cend(); )
{
if( *ib1==*ib2 )
{
++matching_bigrams_count;
ib2 = chpairs2.erase(ib2); // Avoid to match the same character pair multiple times
break;
}
else ++ib2;
}
}
return static_cast<double>(matching_bigrams_count) / orig_avg_bigrams_count;
}