Apostrophes and regular expressions; Cleaning text in R

I am working on cleaning a large collection of text. My process thus far is:
Remove any non-ASCII characters
Remove URLs
Remove email addresses
Correct kerning (i.e., "B A D" becomes "BAD")
Correct elongated words (i.e., "baaaaaad" becomes "bad")
Ensure there is a space after every comma
Replace all numerals and punctuation with a space - except apostrophes
Remove any term 22 characters or longer (anything this size is likely garbage)
Remove any single letters that are leftover
Remove any blank lines
My issue is in the next-to-last step. Originally, my code was:
gsub(pattern = "\\b\\S\\b", replacement = "", perl = TRUE)
but this wrecked any contractions that remained (which I had left in on purpose). Then I tried
gsub(pattern = "\\b(\\S^'\\s)\\b", replacement = "", perl = TRUE)
but this left a lot of single characters.
Then I realized that I needed to keep three single-letter words: "A", "I", and "O" (either case).
Any suggestions?

You can use
gsub("(?i)\\b(?<!')(?![AOI])\\p{L}\\b", "", x, perl=TRUE)
Details:
(?i) - case insensitive matching on
\b - a word boundary
(?<!') - no ' is allowed immediately on the left
(?![AOI]) - the next char cannot be A, I, or O
\p{L} - any Unicode letter
\b - a word boundary
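For example, applied to a small sample string (my own illustration, not part of the original answer), this removes stray single letters while keeping contractions and A/I/O:
x <- "I bought a n apple and j oranges from O'Neil, didn't I"
gsub("(?i)\\b(?<!')(?![AOI])\\p{L}\\b", "", x, perl = TRUE)
## [1] "I bought a  apple and  oranges from O'Neil, didn't I"
Any doubled spaces left behind can be squeezed afterwards, e.g. with gsub("\\s+", " ", x).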


gsub specific pattern and position in character string

This is probably a fairly easy fix, but I'm not as good with regex as would be ideal, so help is appreciated. I have looked elsewhere and nothing is working for me.
I am trying to standardize some names of university degrees. I need the following format:
Degree Code - Major Name
E.g., "BA - Computer Stuff"
I.e., a word, single space, dash, single space, word.
My current attempt does not handle multiple spaces on one or both sides of the dash, and if it sees no spaces, it replaces the letters on either side of the dash with a literal lowercase s; I thought \s meant whitespace and would be substituted in.
This one bit of format fixing is part of a larger mutate statement, i.e., a single line with brackets, a la the example elsewhere, will not work for me.
I have example data:
data <- data.frame( var = c("BA-English" , "BA - English" , "BA - Chemistry" , "BS - Rubber Chickens") )
data %>%
  mutate(var = gsub("\\w\\S-\\S\\w", "\\w\\s-\\s\\w", var)) -> var_fix
Any help is very much appreciated. Thank you
You can use
gsub("\\s*-\\s*", " - ", var)
## Or, if the hyphen is in between word chars
gsub("\\b\\s*-\\s*\\b", " - ", var)
See the regex demo #1 and regex demo #2.
Details:
\b - a word boundary
\s* - zero or more whitespaces
- - a hyphen
Note: in case you want to normalize hyphens, you can also consider using gsub("(*UCP)\\s*[\\p{Pd}\\x{00AD}\\x{2212}]\\s*", " - ", var, perl=TRUE) / gsub("(*UCP)\\b\\s*[\\p{Pd}\\x{00AD}\\x{2212}]\\s*\\b", " - ", var, perl=TRUE), where (*UCP) makes the word boundary and whitespace patterns Unicode-aware, \p{Pd} matches any Unicode dash, \x{00AD} matches a soft hyphen and \x{2212} matches a minus symbol.
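Applied to the example data from the question inside a mutate (a quick sketch; the var_fix name follows the question's code):
library(dplyr)
data <- data.frame(var = c("BA-English", "BA - English", "BA - Chemistry", "BS - Rubber Chickens"))
data %>%
  mutate(var = gsub("\\s*-\\s*", " - ", var)) -> var_fix
## var_fix$var is now "BA - English" "BA - English" "BA - Chemistry" "BS - Rubber Chickens"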

Can quantifiers be used in regex replacement in R?

My objective is to replace a string with a symbol repeated as many times as the string has characters, in the same way one can convert captured letters to upper case with \\U\\1. If my pattern were "...(*)...", my replacement for what is captured by (*) would be something like x\\q1 or {\\q1}x, so that I would get as many x characters as were captured by *.
Is this possible?
I am mainly thinking of sub/gsub, but you can answer with other libraries like stringi, stringr, etc.
You can use perl = TRUE or perl = FALSE and any other options as convenient.
I assume the answer may be negative, since the options seem quite limited (?gsub):
a replacement for matched pattern in sub and gsub. Coerced to character if possible. For fixed = FALSE this can include backreferences "\1" to "\9" to parenthesized subexpressions of pattern. For perl = TRUE only, it can also contain "\U" or "\L" to convert the rest of the replacement to upper or lower case and "\E" to end case conversion. If a character vector of length 2 or more is supplied, the first element is used with a warning. If NA, all elements in the result corresponding to matches will be set to NA.
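For instance, the \U case conversion mentioned in the documentation works like this (my own illustration):
sub("(\\w+)", "\\U\\1", "hello world", perl = TRUE)
## [1] "HELLO world"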
The main quantifiers are (from ?base::regex):
?
The preceding item is optional and will be matched at most once.
*
The preceding item will be matched zero or more times.
+
The preceding item will be matched one or more times.
{n}
The preceding item is matched exactly n times.
{n,}
The preceding item is matched n or more times.
{n,m}
The preceding item is matched at least n times, but not more than m times.
OK, but there seems to be an option (*) (which is not in PCRE; not sure whether it is in Perl or where...) which captures the number of characters the star quantifier was able to match (I found it at https://www.rexegg.com/regex-quantifier-capture.html), so \q1 (same reference) could then be used to refer to the first captured quantifier (and \q2, etc.). I also read that (*) is equivalent to {0,}, but I'm not sure whether that really holds for what I'm interested in.
EDIT UPDATE:
Since commenters asked, I am updating my question with a specific example provided by this interesting question. I modify the example a bit. Let's say we have a <- "I hate extra spaces elephant", and we want to keep a single space between words and the first 5 characters of each word (up to here, as in the original question), but then a dot for each remaining character (not sure if this is what was expected in the original question, but it doesn't matter), so the resulting string would be "I hate extra space. eleph..." (one . for the last s in spaces and 3 dots for the 3 letters ant at the end of elephant). So I started by keeping the first 5 characters with
gsub("(?<!\\S)(\\S{5})\\S*", "\\1", a, perl = TRUE)
[1] "I hate extra space eleph"
How should I replace the exact number of characters in \\S* by dots or any other symbol?
Quantifiers cannot be used in the replacement pattern, nor can the information about how many chars they matched.
What you need is a \G based PCRE pattern to find consecutive matches after a specific place in the string:
a <- "I hate extra spaces elephant"
gsub("(?:\\G(?!^)|(?<!\\S)\\S{5})\\K\\S", ".", a, perl = TRUE)
See the R demo and the regex demo.
Details
(?:\G(?!^)|(?<!\S)\S{5}) - the end of the previous successful match or five non-whitespace chars not preceded with a non-whitespace char
\K - a match reset operator discarding text matched so far
\S - any non-whitespace char.
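For reference, running this on the example string should give (my own run; the output is not shown in the original answer):
gsub("(?:\\G(?!^)|(?<!\\S)\\S{5})\\K\\S", ".", a, perl = TRUE)
## [1] "I hate extra space. eleph..."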
gsubfn is like gsub except the replacement can be a function which takes the match as input and outputs the replacement. The function can optionally be expressed as a formula, as we do here, replacing each string of word characters with the output of that function. No complex regular expressions are needed.
library(gsubfn)
gsubfn("\\w+", ~ paste0(substr(x, 1, 5), strrep(".", max(0, nchar(x) - 5))), a)
## [1] "I hate extra space. eleph..."
or almost the same except function is slightly different:
gsubfn("\\w+", ~ paste0(substr(x, 1, 5), substring(gsub(".", ".", x), 6)), a)
## [1] "I hate extra space. eleph..."
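If gsubfn is not available, a similar result can be obtained in base R by splitting on spaces and padding each word manually; a minimal sketch (the helper name mask_tail is made up):
mask_tail <- function(s, keep = 5) {
  words <- strsplit(s, " ", fixed = TRUE)[[1]]
  masked <- vapply(words, function(w) {
    # keep the first `keep` characters, replace the rest with dots
    paste0(substr(w, 1, keep), strrep(".", max(0, nchar(w) - keep)))
  }, character(1))
  paste(masked, collapse = " ")
}
mask_tail(a)
## [1] "I hate extra space. eleph..."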

Remove characters prior to parentheses but after the preceding comma in R

I have the following dataframe:
df<-c("red apples,(golden,red delicious),bananas,(cavendish,lady finger),golden pears","yellow pineapples,red tomatoes,(roma,vine),orange carrots")
I want to remove the word preceding a comma and parentheses so my output would yield:
[1] "golden,red delicious),cavendish,lady finger),golden pears" "yellow pineapples,roma,vine),orange carrots"
Ideally, the right parenthesis would be removed as well. But I can manage that delete with gsub.
I feel like a lookbehind might work but can't seem to code it correctly.
Thanks!
edit: I amended the dataframe so that the word I want deleted is a string of two words.
We can use base R with gsub to remove the characters. We match a word (\\w+) followed by one or more spaces (\\s+), another word (\\w+), a comma (,), and an opening parenthesis (\\(), and replace the match with a blank ("")
gsub("\\w+\\s+\\w+,\\(", "", df)
#[1] "golden,red delicious),cavendish,lady finger),golden pears"
#[2] "yellow pineapples,roma,vine),orange carrots"
Or, if the chunk to remove can also start right after a ,, we can create the pattern from characters that are not a , followed by ,(
gsub("[^,]+,\\(", "", df)
#[1] "golden,red delicious),cavendish,lady finger),golden pears"
#[2] "yellow pineapples,roma,vine),orange carrots"
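If you also want to drop the leftover right parentheses (the question says that part can be handled separately), a second gsub can be chained on, e.g. (my own addition):
gsub("\\)", "", gsub("[^,]+,\\(", "", df))
## [1] "golden,red delicious,cavendish,lady finger,golden pears"
## [2] "yellow pineapples,roma,vine,orange carrots"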
Using the tidyverse package stringr, I was able to make your data appear the way you'd want it with two function calls separated by a pipe. The pipe comes from the package magrittr which loads with dplyr and/or tidyverse.
I used stringr::str_replace_all to perform two substitutions which remove the words you wanted to take out. Note the syntax for multiple substitutions within this function.
str_replace_all(string, c("first string to get rid of" = "string to replace it with", "second string to get rid of" = "second replacement string"))
You might find it more intuitive to combine all the "get rid of" strings first, followed by all the replacement strings, but each element within the c() is the string to be replaced (in quotes) connected to its replacement (also in quotes) with "=". Each of those replaced=replacement pairs is separated by a comma.
Using str_replace_all, I first took out all text which starts with "," and ends with ",(" using the regular expression ",[a-z ]+,\\(", which refers to a comma, followed by any number of lowercase letters and spaces (allowing chunks with multiple words to be detected), followed by ",(". Note the escape for the "(". If you thought there might be capital letters you would use [a-zA-Z ] instead. In either case, note the space before the "]".
Because you wanted to take out the word, but not the comma preceding it, I replaced the removed text with ",".
This doesn't remove "red apples" in the first string because it doesn't follow a comma. The expression "^[a-z ]+,\\(" refers to any number of lowercase letters and spaces coming before ",(" at the beginning of the string (the ^ "anchors" your pattern to the beginning of the string). Therefore it removes "red apples" or any other example where the text you want to remove starts the string. For these cases, it makes sense to replace it with nothing ("") because you want the first character of the remaining string to appear at the beginning.
Together, the two substitutions remove the offending text whether it starts the string or is in the middle of it or ends it so in that sense it's more or less generalized.
str_remove_all("\\)") removes the right parentheses throughout
library(stringr)
library(magrittr)
df <- c("red apples,(golden,red delicious),bananas,(cavendish,lady finger),golden pears",
        "yellow pineapples,red tomatoes,(roma,vine),orange carrots")
str_replace_all(df, c(",[a-z ]+,\\(" = ",",
"^[a-z ]+,\\(" = "")) %>%
str_remove_all("\\)")
[1] "golden,red delicious,cavendish,lady finger,golden pears"
[2] "yellow pineapples,roma,vine,orange carrots"

using regular expressions (regex) to replace multiple patterns at the same time in R

I have a vector of strings and I want to remove -es from all strings (words) ending in either -ses or -ces at the same time. The reason I want to do it at the same time and not consecutively is that sometimes, after removing one ending, the other ending appears, and I don't want to apply this pattern to a single word twice.
I have no idea how to use two patterns at the same time, but this is the best I could:
text <- gsub("[sc]+s$", "[sc]", text)
I know the replacement is not correct, but I wonder how can I show that I want to replace it with the letter I just detected (c or s in this case). Thank you in advance.
To remove es at the end of words when it is preceded by s or c, you may use
gsub("([sc])es\\b", "\\1", text)
gsub("(?<=[sc])es\\b", "", text, perl=TRUE)
To remove them at the end of strings, you can go on using your $ anchor:
gsub("([sc])es$", "\\1", text)
gsub("(?<=[sc])es$", "", text, perl=TRUE)
The first gsub TRE pattern is ([sc])es\b: a capturing group #1 that matches either s or c, and then es is matched, and then \b makes sure the next char is not a letter, digit or _. The \1 in the replacement is the backreference to the value stored in the capturing group #1 memory buffer.
In the second example with the PCRE regex (due to perl=TRUE), (?<=[sc]) positive lookbehind is used instead of the ([sc]) capturing group. Lookbehinds are not consuming text, the text they match does not land in the match value, and thus, there is no need to restore it anyhow. The replacement is an empty string.
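A quick illustration on some made-up words (my own example, not from the original answer):
text <- c("houses", "braces", "places", "horses", "homes")
gsub("([sc])es\\b", "\\1", text)
## [1] "hous"  "brac"  "plac"  "hors"  "homes"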
Strings ending with "ces" and "ses" follow the same pattern, i.e. "*es$"
If I understand it correctly then you don't need two patterns.
Example:
x = c("ces", "ses", "mes")
gsub(pattern = "([cs])es$", replacement = "\\1", x)
[1] "c" "s" "mes"
Hope it helps.
M

Locate a sub-string with a general character '[a,z]-\n' and replace the non-general part of the sub-string '-\n'

I have text I am cleaning up in R. I want to use stringi, but am happy to use other packages.
Some of the words are broken over two lines. So I get a sub-string "halfword-\nsecondhalfword".
I also have strings that are just "----\nword" and " -\n" (and some others) that I do not want to replace.
What I want to do is identify all sub-strings "[a-z]-\n" and then keep the generic letter [a-z], but remove the -\n characters.
I do not want to remove all -\n , and I do not want to remove the letter [a-z].
Thanks!
You may make use of word boundaries to match -<LF> only in between word characters:
gsub("\\b-\n\\b", "", x)
gsub("(*UCP)\\b-\n\\b", "", x, perl=TRUE)
stringr::str_replace_all(x, "\\b-\n\\b", "")
The latter two support word boundaries between any Unicode word characters.
See the regex demo.
If you want to only remove -<LF> between letters you may use
gsub("([a-zA-Z])-\n([a-zA-Z])", "\\1\\2", x)
gsub("(\\p{L})-\n(\\p{L})", "\\1\\2", x, perl=TRUE)
stringr::str_replace_all(x, "(\\p{L})-\n(\\p{L})", "\\1\\2")
If you need to only support lowercase letters, remove A-Z in the first gsub and replace \p{L} with \p{Ll} in the latter two.
See this regex demo.
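For instance, on a small made-up string (my own illustration):
x <- "some hyphen-\nated words and a ----\nseparator"
gsub("([a-zA-Z])-\n([a-zA-Z])", "\\1\\2", x)
## [1] "some hyphenated words and a ----\nseparator"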
