Extract string within first two quotation marks using regular expressions? - r

There is a vector of strings that looks like the following (text with two or more substrings in quotation marks):
vec <- 'ab"cd"efghi"j"kl"m"'
The text within the first pair of quotation marks (cd) contains a useful identifier (cd is the desired output). I have been studying how to use regular expressions but I haven't learned how to find the first and second occurrences of something like quotation marks.
Here's how I have been getting cd:
tmp <- strsplit(vec,split="")[[1]]
paste(tmp[(which(tmp=='\"')[1]+1):(which(tmp=='\"')[2]-1)],collapse="")
"cd"
My question is: is there another way to find "cd" using regular expressions, so I can learn more about how to use them? I prefer base R solutions, but will accept an answer using packages if that's the only way. Thanks for your help.

Match everything up to the first ", then capture everything up to the next ", let .* consume the rest, and replace the whole match with the captured group.
gsub( '[^"]*"([^"]*).*', '\\1', vec)
[1] "cd"
For a detailed explanation of the regex, see this demo.
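A base R side note (my own sketch, not part of the answer above): regexpr() and regmatches() can also pull out the first quoted chunk, after which the quotes are easy to strip:
m <- regmatches(vec, regexpr('"[^"]*"', vec))  # first "..." including the quotes
gsub('"', "", m)                               # drop the quotes
#[1] "cd"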

Related

How can I edit my regex so that it captures only the substring between (and not including) quotation marks?

I'm a total novice to regex, and have a hard time wrapping my head around it. Right now I have a column filled with strings, but the only relevant text to my analysis is between quotation marks. I've tried this:
response$text <- stri_extract_all_regex(response$text, '"\\S+"')
but when I view response$text, the output comes out like this:
"\"caring\""
How do I change my regex expression so that instead the output reads:
caring
You can use
library(stringi)
response$text <- stri_extract_all_regex(response$text, '(?<=")[^\\s"]+(?=")')
Or, with stringr:
library(stringr)
response$text <- str_extract_all(response$text, '(?<=")[^\\s"]+(?=")')
However, with several words inside quotes, I'd rather use stringr::str_match_all:
library(stringr)
matches <- str_match_all(response$text, '"([^\\s"]+)"')
response$text <- lapply(matches, function(x) x[,2])
See this regex demo.
With the capturing group approach in "([^\\s"]+)", the quotes themselves are consumed by the match, which avoids spurious matches between neighbouring quoted substrings, and str_match_all is handy here because the matrices it returns contain the captured groups as well (unlike the *extract* functions).
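To stay in base R, roughly the same thing can be done with gregexpr()/regmatches() plus a gsub() to strip the quotes (a sketch, assuming you want every quoted substring per row; the example string is mine, not the OP's data):
x <- 'my answer is "caring" and "kind"'
m <- regmatches(x, gregexpr('"[^"]*"', x))
lapply(m, function(s) gsub('"', "", s))
#[[1]]
#[1] "caring" "kind"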

How to remove "\" from paste function output with quotation marks?

I'm working with the following code:
Y_Columns <- c("Y.1.1")
paste('{"ImportId":"', Y_Columns, '"}', sep = "")
The paste function produces the following output:
"{\"ImportId\":\"Y.1.1\"}"
How do I get the paste function to omit the \? Such that, the output is:
"{"ImportId":"Y.1.1"}"
Thank you for your help.
Note: I did do a search on SO to see if there were any Q's that asked "what is an escape character in R". But I didn't review all the 160 answers, only the first 20.
This is one way of demonstrating what I wrote in my comment:
out <- paste('{"ImportId":"', Y_Columns, '"}', sep = "")
out
#[1] "{\"ImportId\":\"Y.1.1\"}"
?print
print(out,quote=FALSE)
#[1] {"ImportId":"Y.1.1"}
Both R and regex patterns use escape characters so that special characters can be written in input and shown in printed output. (And sometimes regex patterns need doubled escapes.) R has a few characters that need to be "escaped" in certain situations. You illustrated one such situation: including a double-quote character inside a result that will be printed with surrounding double-quotes. If you were intending to include any single quotes inside a character value that was delimited by single quotes at the time of creation, they would have needed to be escaped as well.
out2 <- '\'quoted\''
nchar(out2)
#[1] 8 ... note that neither the delimiting single-quotes nor the backslashes are counted; the two escaped single-quotes are, giving 6 letters plus 2 quotes = 8
> out2
[1] "'quoted'" ... and the default output quote-char is a double-quote.
Here's a good Q&A to review: How to replace '+' using gsub() function in R
It has two answers, both useful: one shows how to double-escape a special character and the other shows how to use the fixed argument to get around that requirement.
And another potentially useful Q&A on the topic of handling Windows paths:
File path issues in R using Windows ("Hex digits in character string" error)
And some further useful reading suggestions: look at the series of help pages whose names start with capital letters. (Since I can never remember which one has which nugget of essential information, I tried ?Syntax first; its "See Also" list of essential reading is Arithmetic, Comparison, Control, Extract, Logic, NumericConstants, Paren, Quotes and Reserved.) I then realized that what I wanted to refer you to was most likely ?Quotes, where all the R-specific escape sequences are listed.
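As a quick illustration of what ?Quotes describes (my own example, not part of the answer): each escape sequence ends up as a single character in the stored string, which nchar() confirms:
nchar("a\tb")   # the tab counts as one character
#[1] 3
nchar("\\")     # one literal backslash
#[1] 1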

How do you remove an isolated number from a string in R?

This is a silly question, but I can't seem to find a solution in R online. I am trying to remove an isolated number from a long string. For example, I would like to remove the number 27198 from the sentence below.
x <- "hello3 my name 27198 is 5joey"
I tried the following:
gsub("[0-9]","",x)
Which results in:
"hello my name is joey"
But I want:
"hello3 my name is 5joey"
This seems really simple, but I am not well versed with regular expressions. Thanks for your help!
We can put a word boundary (\\b) on each side of one or more digits ([0-9]+) so that only stand-alone numbers match; consuming the preceding whitespace as well avoids leaving a doubled space behind:
gsub("\\s*\\b[0-9]+\\b", "", x)
#[1] "hello3 my name is 5joey"

How to remove characters before matching pattern and after matching pattern in R in one line?

I have this vector: Target <- c("tes_1123_SS1G_340T01", "tes_23_SS2G_340T021"). I want to remove anything before SS and anything after T0 (including T0).
Result I want in one line of code:
SS1G_340 SS2G_340
Code I have tried:
gsub("^.*?SS|\\T0", "", Target)
We can use str_extract
library(stringr)
str_extract(Target, "SS[^T]*")
#[1] "SS1G_340" "SS2G_340"
Try this:
gsub(".*(SS.*)T0.*","\\1",Target)
[1] "SS1G_340" "SS2G_340"
Why it works:
With regex, we can choose to keep a pattern and remove everything outside of it with a two-step process. Step 1 is to put the pattern we'd like to keep in parentheses. Step 2 is to reference the number of the parenthesised group we'd like to keep, since we might have several such groups. See the example below:
gsub(".*(SS.*)+(T0.*)","\\1",Target)
[1] "SS1G_340" "SS2G_340"
Note that I've put the T0.* in parentheses this time, but we still get the correct answer because I've told gsub to return the first of the two parentheses-bound patterns. But now see what happens if I use \\2 instead:
gsub(".*(SS.*)+(T0.*)","\\2",Target)
[1] "T01" "T021"
The .* are wild cards by the way. If you'd like to learn more about using regex in R, here's a reference that can get you started.

remove/replace specific words or phrases from character strings - R

I looked around both here and elsewhere, I found many similar questions but none which exactly answer mine. I need to clean up naming conventions, specifically replace/remove certain words and phrases from a specific column/variable, not the entire dataset. I am migrating from SPSS to R, I have an example of the code to do this in SPSS below, but I am not sure how to do it in R.
EG:
"Acadia Parish" --> "Acadia" (removes Parish and space before Parish)
"Fifth District" --> "Fifth" (removes District and space before District)
SPSS syntax:
COMPUTE county=REPLACE(county,' Parish','').
There are only a few instances of this issue in the column of 32,000 cases, but what needs replacing/removing varies and the cases repeat (there are dozens of instances of phrases containing 'Parish'), so it's much faster to code what needs to be removed/replaced; it's not as simple or clean as a regular expression that removes all spaces, all characters after a specific word or character, all special characters, etc. And it must include the leading spaces.
I have looked at replace(), gsub() and other similar commands in R, but they all seem to involve creating vectors. What I'd like is syntax that looks for characters I specify, which can include leading or trailing spaces, and replaces them with something I specify, which can be nothing at all; if it does not find the specified characters, the case is left unchanged.
Yes, I will end up repeating the same syntax many times, it's probably easier to create a vector but if possible I'd like to get the syntax I described, as there are other similar operations I need to do as well.
Thank you for looking.
> x <- c("Acadia Parish", "Fifth District")
> x2 <- gsub("^(\\w*).*$", "\\1", x)
> x2
[1] "Acadia" "Fifth"
Legend:
^ Start of pattern.
() Capturing group.
\w* Zero or more word characters.
.* Zero or more occurrences of any character except newline \n.
$ End of pattern.
\1 Backreference to the first captured group.
Maybe I'm missing something, but I don't see why you can't simply use alternation in your regex and then trim out the annoying white space.
string <- c("Acadia Parish", "Fifth District")
bad_words <- c("Parish", "District") # Write all the words you want removed here!
bad_regex <- paste(bad_words, collapse = "|")
trimws( sub(bad_regex, "", string) )
# [1] "Arcadia" "Fifth"
dataframename$varname <- gsub(" Parish","", dataframename$varname)
