Comparing strings for a match in a vectorized way - R

I have a large data frame with two columns of strings. When the values in these columns are unequal, I want to perform an operation.
The problem is that a simple != comparison gives incorrect results: apparently, 'Tout_Inclus' and 'Tout_Inclus' are unequal.
This led me to string comparison functions like strcmp from the pracma package. However, that function is not vectorised, and my data frame has 9.6M rows, so I think looping through it would crash or take ages.
Has anyone got any vectorised methods for comparing strings?
My data frame looks like this (the two columns are colA and colB):
City_Break City_Break
City_Break City_Break
Court_Break Court_Break
Petit_Budget Petit_Budget
Pas_Cher Pas_Cher
Deals Deals_Pas_Chers
Vacances Vacances_Éco
Hôtel_Vol Hôtel_Vol
Dernière_Minute Dernière_Minute
Formule Formule_Éco
Court_Séjour Court_Séjour
Voyage Voyage_Pas_Cher
Séjour Séjour_Pas_Cher
Congés Congés_Éco
When I do something like df[colA != colB, ], it returns rows where the strings (by eye) are equal.
I've ensured the encoding is UTF-8, the strings are not factors, and I also tried removing special characters before doing the comparison.
By the way, these strings come from multiple languages.
Edit: I've already trimmed whitespace, and still no luck.

Try removing leading/trailing whitespace from both columns, and then compare:
df[trimws(df$colA, "both") != trimws(df$colB, "both"), ]

If everything else is fine (trimming, etc.), yours could be an encoding problem. In UTF-8 the same accented character can be represented by different byte sequences: either as a single precomposed code point or as a base character plus a combining modifier. It is still very strange with 'Tout_Inclus', though.
Just to have a check, from stringi package try this:
stringi::stri_compare(df$colA,df$colB, "fr_FR")
What's the output?
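If that reports differences for strings that look identical, normalising both columns to the same Unicode form before comparing usually fixes it. A minimal sketch with stringi (column names colA/colB taken from the question):
library(stringi)
# Normalise to NFC so precomposed characters and base+combining sequences
# (e.g. "é" vs "e" + U+0301) compare as equal
a <- stri_trans_nfc(df$colA)
b <- stri_trans_nfc(df$colB)
# Fully vectorised, so it scales to 9.6M rows
df[a != b, ]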

Related

R delete special characters (faster way)

I have a huge data frame with some character columns. The problem is that some of them contain "wrong" characters, like this:
mutate_all(data, funs(tolower))
> Error in mutate_impl(.data, dots) : Evaluation error: invalid input
> 'https://www.ps.f/c-w/nos-promions/v-ambght-rembment.html#modalit<e9>s'
> in 'utf8towcs'.
So I deleted the "wrong" characters (note: I can't simply strip out every special character, because I need the ":" to separate the data).
I found a solution:
library(qdap)
keep <- c(":")
data$column <- strip(data$column, keep, lower = TRUE)
See: How to remove specific special characters in R
That worked... but it is really slow. Hence my question: how can I apply a function to all my character columns that is quicker than what I just did?
EDIT
Some example what happened in my script:
View(data$column)
"CP:main:234e5qhaw/00:lcd-monitor-with-smatimge-lite"
"CP:main:234e5qhaw/00:lcd-monitor-with-smarimge-lite"
"CP:main:234e5qhaw/00:lcd-monitor-with-sartimge-lite"
"CP:main:bri953/00:faq:skça_sorulan_sorular:xc000003329:f02:9044:9512"
tolower(data$column)
Error in tolower(data$column) :
invalid input "CP:main:bri953/00:faq:skça_sorulan_sorular:xc000003329:f02:9044:9512" in 'utf8towcs'
Optimal situation: keep as much as possible from the original data. But I can imagine that "special" characters must be replaced. But I really need to keep the ":" to separate the data in a later stage.
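One direction worth trying, sketched here with base R only (the data object and column handling are assumed from the question, and the set of characters kept is an assumption you should adjust): convert every character column to valid UTF-8 first, so tolower() stops failing, then strip everything except the characters you care about.
clean_column <- function(x) {
  # Map any bytes that are invalid in the current encoding to "" so that
  # tolower()/utf8towcs no longer errors out
  x <- iconv(x, from = "", to = "UTF-8", sub = "")
  # Keep letters, digits and the separators we need (":" among them); drop the rest
  x <- gsub("[^A-Za-z0-9:/_-]", "", x)
  tolower(x)
}
is_chr <- vapply(data, is.character, logical(1))
data[is_chr] <- lapply(data[is_chr], clean_column)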

Sanitize strings for unique legal symbols in R

I want to clean up strings so they can be parsed as unique legal symbols. I intend to clean up a lot of strings, so there is an undesirable risk of duplicated symbols in the output. It would suffice to take every illegal character and replace it with its base 32 encoding. Desired behavior:
sanitize("_bad_symbol$not*a&list%$('")
## [1] "L4bad_symbolEQnotFIaEYlistEUSCQJY"
I think all I need is a complete list of possible characters to grep for. I know about letters and LETTERS, but what about everything else?
Does a better solution already exist? Because I would love that.
EDIT: I just found out about make.names() from this post. I could go with that in a pinch, but I would rather not.
With make.names() and make.unique() together, the problem is solved.
make.unique(make.names(c("asdflkj###$", "asdflkj####")))
## [1] "asdflkj...." "asdflkj.....1"
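If the dots produced by make.names() are too lossy, here is a rough, hypothetical sketch of a sanitiser that replaces each illegal character with "x" plus its hex code point. This is not the base-32 scheme shown above, and it does not handle names that start with a digit:
sanitize <- function(x) {
  vapply(x, function(s) {
    chars <- strsplit(s, "", fixed = TRUE)[[1]]
    ok <- grepl("[A-Za-z0-9_]", chars)
    # Replace each illegal character with "x" + its Unicode code point in hex,
    # so distinct inputs stay distinct after cleaning
    chars[!ok] <- sprintf("x%04X", vapply(chars[!ok], utf8ToInt, integer(1)))
    paste(chars, collapse = "")
  }, character(1), USE.NAMES = FALSE)
}
sanitize(c("asdflkj###$", "asdflkj####"))
## [1] "asdflkjx0023x0023x0023x0024" "asdflkjx0023x0023x0023x0023"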

Using grep() with Unicode characters in R

(strap in!)
Hi, I'm running into issues involving Unicode encoding in R.
Basically, I'm importing data sets that contain Unicode (UTF-8) characters, and then running grep() searches to match values. For example, say I have:
bigData <- c("foo","αβγ","bar","αβγγ (abgg)", ...)
smallData <- c("αβγ","foo", ...)
What I'm trying to do is take the entries in smallData and match them to entries in bigData. (The actual sets are matrices with columns of values, so what I'm really after is the indexes of the matches, so I can tell which row to add the values to.) I've been using
matches <- grepl(smallData[i], bigData, fixed=T)
which usually results in a vector of matches. For i = 2 it flags element 1, since "foo" is element 1 of bigData. This is peachy and all is well. But RStudio seems not to be dealing with Unicode characters properly: when I import the sets and view them, they show the character IDs.
dataset <- read_csv("[file].csv", col_names = FALSE, locale = locale())
Using View(dataset) shows "aß<U+03B3>" instead of "αβγ". The same goes for
dataset[1]
A tibble: 1x1 <chr>
[1] aß<U+03B3>
print(dataset[1])
A tibble: 1x1 <chr>
[1] aß<U+03B3>
However, and this is why I'm stuck rather than just adjusting the encoding:
paste(dataset[1])
[1] "αβγ"
Encoding(toString(dataset[1]))
[1] "UTF-8"
So it appears that R is recognizing in certain contexts that it should display Unicode characters, while in others it just sticks to--ASCII? I'm not entirely sure, but certainly a more limited set.
In any case, regardless of how it displays, what I want to do is be able to get
grep("αβγ", bigData)
[1] 2 4
However, none of the following work:
grep("αβ", bigData) #(Searching the two letters that do appear to convert)
grep("<U+03B3>",bigData,fixed=T) #(Searching the code ID itself)
grep("αβ", toString(bigData)) #(converts the whole thing to one string)
grep("\\β", bigData) #(only mentioning because it matches, bizarrely, to ß)
The only solution I've found is:
grep("\u03B3", bigData)
[1] 2 4
Which is not ideal for a couple of reasons, most jarringly that it doesn't look like it's possible to just take every <U+####> and replace it with \u####, since not every Unicode character is converted to the <U+####> format, yet none of them can be searched. (I.e., α and ß didn't turn into their Unicode keys, but they're also not searchable by themselves, so I'd have to turn them into their keys, then alter the keys into a form grep() can use, then search.)
That means I can't just regex the keys into a searchable format, and even if I could, I have a lot of entries containing characters that would need to be escaped (e.g., "(" or ")"), so having to drop the fixed=T argument would be its own headache involving nested escapes.
Anyway...I realize that a significant part of the problem is that my set apparently involves every sort of character under the sun, and it seems I have thoroughly entrapped myself in a net of regular expressions.
Is there any way of forcing a search with (arbitrary) unicode characters? Or do I have to find a way of using regular expressions to escape every ( and α in my data set? (coordinate to that second question: is there a method to convert a unicode character to its key? I can't seem to find anything that does that specific function.)
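For what it's worth, one sketch of a workaround (using stringi and the example vectors above): force everything to UTF-8 once and use fixed, literal matching, which avoids regex escaping entirely; utf8ToInt()/intToUtf8() also answer the side question about converting a character to and from its code point.
library(stringi)
bigData   <- enc2utf8(c("foo", "\u03b1\u03b2\u03b3", "bar", "\u03b1\u03b2\u03b3\u03b3 (abgg)"))
smallData <- enc2utf8(c("\u03b1\u03b2\u03b3", "foo"))
# Literal (fixed) matching: Unicode-aware and no escaping of "(" needed
which(stri_detect_fixed(bigData, smallData[1]))
# [1] 2 4
# Character <-> code point
utf8ToInt("\u03b3")   # 947, i.e. 0x03B3
intToUtf8(0x03B3)     # "γ"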

remove/replace specific words or phrases from character strings - R

I looked around both here and elsewhere and found many similar questions, but none that exactly answers mine. I need to clean up naming conventions, specifically replacing/removing certain words and phrases in a specific column/variable, not the entire dataset. I am migrating from SPSS to R; I have an example of the SPSS code below, but I am not sure how to do it in R.
EG:
"Acadia Parish" --> "Acadia" (removes Parish and space before Parish)
"Fifth District" --> "Fifth" (removes District and space before District)
SPSS syntax:
COMPUTE county=REPLACE(county,' Parish','').
There are only a few instances of this issue in a column with 32,000 cases. What needs replacing/removing varies, and the cases can repeat (there are dozens of instances of phrases containing 'Parish'), so it's much faster to spell out what needs to be removed/replaced; it's not as simple as one regular expression that removes all spaces, all characters after a specific word or character, all special characters, etc. And it must include leading spaces.
I have looked at replace(), gsub(), and other similar functions in R, but they all seem to involve creating vectors. What I'd like is syntax that looks for the characters I specify (which can include leading or trailing spaces) and replaces them with whatever I specify (which can be nothing at all); if the specified characters are not found, the case is left unchanged.
Yes, I will end up repeating the same syntax many times. It's probably easier to create a vector, but if possible I'd like the syntax I described, as there are other similar operations I need to do as well.
Thank you for looking.
> x <- c("Acadia Parish", "Fifth District")
> x2 <- gsub("^(\\w*).*$", "\\1", x)
> x2
[1] "Acadia" "Fifth"
Legend:
^ Start of the string.
() Capture group.
\w* Zero or more word characters.
.* Zero or more occurrences of any character except newline \n.
$ End of the string.
\1 Backreference to the first capture group.
Maybe I'm missing something, but I don't see why you can't simply use alternation (|) in your regex and then trim out the annoying whitespace.
string <- c("Arcadia Parish", "Fifth District")
bad_words <- c("Parish", "District") # Write all the words you want removed here!
bad_regex <- paste(bad_words, collapse = "|")
trimws( sub(bad_regex, "", string) )
# [1] "Arcadia" "Fifth"
dataframename$varname <- gsub(" Parish","", dataframename$varname)
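If the list of phrases keeps growing, another option (a sketch using stringr; the data-frame and column names are just the placeholders used above) is to pass a single named vector of pattern/replacement pairs, which keeps all the substitutions in one place:
library(stringr)
# Names are the patterns to look for, values are the replacements
replacements <- c(" Parish" = "", " District" = "")
dataframename$varname <- str_replace_all(dataframename$varname, replacements)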

Finding number of occurrences of a word in a file using R functions

I am using the following code to find the number of occurrences of the word "memory" in a file, and I am getting the wrong result. Can you please help me figure out what I am missing?
NOTE 1: The question is looking for exact occurrences of the word "memory"!
NOTE 2: What I have realised is that they really do mean exactly "memory"; even something like "memory," is not accepted. That was the part that caused the confusion, I guess. I tried it for the word "action" and the correct answer is 7; you can try it as well.
#names=scan("hamlet.txt", what=character())
names <- scan('http://pastebin.com/raw.php?i=kC9aRvfB', what=character())
Read 28230 items
> length(grep("memory",names))
[1] 9
Here's the file
The problem is really Shakespeare's use of punctuation. There are a lot of apostrophes (') in the text. When the R function scan encounters an apostrophe it assumes it is the start of a quoted string and reads all characters up until the next apostrophe into a single entry of your names array. One of these long entries happens to include two instances of the word "memory" and so reduces the total number of matches by one.
You can fix the problem by telling scan to regard all quotation marks as normal characters and not treat them specially:
names <- scan('http://pastebin.com/raw.php?i=kC9aRvfB', what=character(), quote=NULL )
Be careful when using the R implementation of grep. It does not behave in exactly the same way as the usual GNU/Linux program. In particular, the way you have used it here WILL find the number of matching words and not just the total number of matching lines as some people have suggested.
As pointed out by @andrew, my previous answer would give wrong results if a word repeats on the same line. Based on other answers/comments, this one seems OK:
names = scan('http://pastebin.com/raw.php?i=kC9aRvfB', what=character(), quote=NULL )
idxs = grep("memory", names, ignore.case = TRUE)
length(idxs)
# [1] 10
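If only exact matches should count (so tokens like "memory," are excluded), a small tweak on the same approach is to compare the scanned tokens directly, or to anchor the pattern to the whole token:
names <- scan("http://pastebin.com/raw.php?i=kC9aRvfB",
              what = character(), quote = NULL)
# Count tokens that are exactly "memory"
sum(names == "memory")
# Same idea with a regex anchored to the whole token
sum(grepl("^memory$", names))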
