regex to replace text *outside* of {} - r

I want to use regex to replace commands or tags around strings. My use case is converting LaTeX commands to bookdown commands, which means doing things like replacing \citep{*} with [#*], \ref{*} with \#ref(*), etc. However, let's stick to the generalized question:
Given a string <begin>somestring<end>, where <begin> and <end> are known and somestring is an arbitrary sequence of characters, can we use regex to substitute in <newbegin> and <newend> to get the string <newbegin>somestring<newend>?
For example, consider the LaTeX command \citep{bonobo2017}, which I want to convert to [#bonobo2017]. For this example:
<begin> = \citep{
somestring = bonobo2017
<end> = }
<newbegin> = [#
<newend> = ]
This question is basically the inverse of this question.
I'm hoping for an R or notepad++ solution.
Additional Examples
Convert \citet{bonobo2017} to #bonobo2017
Convert \ref{myfigure} to \#ref(myfigure)
Convert \section{Some title} to # Some title
Convert \emph{something important} to *something important*
I'm looking for a template regex that I can fill in my <begin>, <end>, <newbegin> and <newend> on a case-by-case basis.
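For what it's worth, a minimal sketch of such a template (convert() is just an illustrative name, and it assumes <begin> and <end> are fixed literal strings) is a single gsub() call that quotes the literals and captures what sits between them:
# Sketch of a fill-in-the-blanks template: quote <begin>/<end> literally,
# capture the text between them, and rebuild with <newbegin>/<newend>.
convert <- function(x, begin, end, newbegin, newend) {
  # \Q...\E makes begin/end match as literal text, so the regex
  # metacharacters in a LaTeX command (\, {, }) need no manual escaping
  pattern <- paste0("\\Q", begin, "\\E(.*?)\\Q", end, "\\E")
  gsub(pattern, paste0(newbegin, "\\1", newend), x, perl = TRUE)
}
convert("\\citep{bonobo2017}", "\\citep{", "}", "[#", "]")
# [1] "[#bonobo2017]"
# (a backslash inside newbegin/newend would itself need doubling, since
# gsub treats \ specially in the replacement string)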

You can try something like this with dplyr + stringr:
string = "\\citep{bonobo2017}"
begin = "\\citep{"
somestring = "bonobo2017"
end = "}"
newbegin = "[#"
newend = "]"
library(stringr)
library(dplyr)
string %>%
  str_extract(paste0("(?<=\\Q", begin, "\\E)\\w+(?=\\Q", end, "\\E)")) %>%
  paste0(newbegin, ., newend)
or:
string %>%
  str_replace_all(paste0("\\Q", begin, "\\E|\\Q", end, "\\E"), "") %>%
  paste0(newbegin, ., newend)
You can also make it a function for convenience:
convertLatex = function(string, BEGIN, END, NEWBEGIN, NEWEND){
  string %>%
    str_replace_all(paste0("\\Q", BEGIN, "\\E|\\Q", END, "\\E"), "") %>%
    paste0(NEWBEGIN, ., NEWEND)
}
convertLatex(string, begin, end, newbegin, newend)
# [1] "[#bonobo2017]"
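The same helper can be reused for the other conversions listed in the question, e.g.:
convertLatex("\\ref{myfigure}", "\\ref{", "}", "\\#ref(", ")")
# [1] "\\#ref(myfigure)"
convertLatex("\\emph{something important}", "\\emph{", "}", "*", "*")
# [1] "*something important*"
convertLatex("\\section{Some title}", "\\section{", "}", "# ", "")
# [1] "# Some title"
(Note that the str_replace_all version handles inner spaces like "Some title"; the \\w+ extraction above would not.)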
Notes:
Notice that I manually added an extra \ in "\\citep{bonobo2017}". A single \ inside an ordinary R string literal is treated as an escape character, so a literal backslash has to be written as \\. (Since R 4.0.0 you can also use a raw string such as r"(\citep{bonobo2017})" to avoid the doubling.) A quick check of this is shown after these notes.
The regex in str_extract uses a positive lookbehind and a positive lookahead to extract the somestring between begin and end.
str_replace_all takes another approach: it simply removes begin and end from string.
The "\\Q", "\\E" pair in the regex means "Backslash all nonalphanumeric characters" and "\\E" ends the expression. This is especially useful in your case since you likely have special characters in your Latex command. This expression automatically escapes them for you.

Related

Regex match after last / and first underscore

Assuming I have the following string:
string = "path/stack/over_flow/Pedro_account"
I am interested in matching the first 2 characters after the last / and before the first _. So in this case the desired output is:
Pe
What I have so far is a mix of substr and str_extract:
substr(str_extract(string, "[^/]*$"),1,2)
which of course gives the answer, but I believe there is a nice single regex for it as well, and that is what I'm looking for.
You can use
library(stringr)
str_extract(string, "(?<=/)[^/]{2}(?=[^/]*$)")
## => [1] "Pe"
See the R demo and the regex demo. Details:
(?<=/) - a location immediately preceded with a / char
[^/]{2} - two chars other than /
(?=[^/]*$) - a location immediately followed by zero or more chars other than / till the end of string.
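As a quick sanity check, the pattern only looks at the last path segment regardless of depth (the extra paths here are made up for illustration):
library(stringr)
paths <- c("path/stack/over_flow/Pedro_account",
           "a/b/c/Xy_something_else",
           "no_slashes_here")
str_extract(paths, "(?<=/)[^/]{2}(?=[^/]*$)")
# [1] "Pe" "Xy" NA    (NA because the last string contains no /)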
Using basename to get the last path component, then substr:
substr(basename("path/stack/over_flow/Pedro_account"), 1, 2)
# [1] "Pe"
Remove everything up to the last / and extract the first 2 characters.
Base R -
string = "path/stack/over_flow/Pedro_account"
substr(sub('.*/', '', string), 1, 2)
#[1] "Pe"
stringr
substr(stringr::str_remove(string, '.*/'), 1, 2)
You can use str_match with a capture group:
/ Match a / literally
([^/_]{2}) Capture 2 chars other than / or _ in group 1
[^/]* Match optional chars other than /
$ End of string
See a regex demo and an R demo.
Example
library(stringr)
string = "path/stack/over_flow/Pedro_account"
str_match(string, "/([^/_]{2})[^/]*$")[,2]
Output
[1] "Pe"

Remove all punctuation except underline between characters in R with POSIX character class

I would like to use R to remove all underscores except those between words. In the end, the code should remove underscores at the end or at the beginning of a word.
The result should be
'hello_world and hello_world'.
I want to use those pre-built POSIX classes. Right now I know how to exclude particular characters with the following code, but I don't know how to use the word boundary sequences.
test<-"hello_world and _hello_world_"
gsub("[^_[:^punct:]]", "", test, perl=T)
You can use
gsub("[^_[:^punct:]]|_+\\b|\\b_+", "", test, perl=TRUE)
See the regex demo
Details:
[^_[:^punct:]] - any punctuation except _
| - or
_+\b - one or more _ at the end of a word
| - or
\b_+ - one or more _ at the start of a word
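Running this on the test string from the question gives the expected result:
test <- "hello_world and _hello_world_"
gsub("[^_[:^punct:]]|_+\\b|\\b_+", "", test, perl = TRUE)
# [1] "hello_world and hello_world"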
One non-regex way is to split and use trimws by setting the whitespace argument to _, i.e.
paste(sapply(strsplit(test, ' '), function(i)trimws(i, whitespace = '_')), collapse = ' ')
#[1] "hello_world and hello_world"
We can remove every underscore that has a word boundary on either end, using positive lookahead and lookbehind to find such underscores. To remove underscores at the start and end of the string we use trimws.
test<-"hello_world and _hello_world_"
gsub("(?<=\\b)_|_(?=\\b)", "", trimws(test, whitespace = '_'), perl = TRUE)
#[1] "hello_world and hello_world"
You could use:
test <- "hello_world and _hello_world_"
output <- gsub("(?<![^\\W])_|_(?![^\\W])", "", test, perl=TRUE)
output
[1] "hello_world and hello_world"
Explanation of regex:
(?<![^\\W]) assert that what precedes is a non word character OR the start of the input
_ match an underscore to remove
| OR
_ match an underscore to remove, followed by
(?![^\\W]) assert that what follows is a non word character OR the end of the input

Regex, match year listed as range

I have a list of years like this:
2018-
2001-2020
1999-
2005-
I would like to create a regex to match the year with these criteria:
xxxx- matches xxxx
yyyy-nnnn matches nnnn
Can you please help me?
I've tried [[:digit:]]{4}$, or alternatively [[:digit:]]{4}-$, but they only partially work.
To get the last year in the "range" established by the - character, the cleanest way is
my $year = (split /-/, $range)[-1];
If there isn't anything after the last delimiter, split drops the trailing empty field, so the last element of its return list (obtained with index -1) is either the second given year -- as in 2001-2020 -- or the only one, as in the other examples. This performs no checking of the input.
With a regex, one way is to seek the last number in the string
my ($year) = $range =~ /([0-9]+)[^0-9]*$/;
where if you use [0-9]{4} then there is a small additional measure of checking.
The POSIX character class [[:digit:]] and its negation [[:^digit:]] (or \P{PosixDigit}) can be used instead if desired, but note that these match all manner of Unicode "digit characters," just like \d and \D do (a few hundred), on top of the ascii [0-9] (unless /a modifier is used).
A full test program, exercising both approaches:
use warnings;
use strict;
use feature 'say';

my @ranges = qw(2018- 2001-2020 1999- 2005-);

foreach my $range (@ranges) {
    my $year = (split /-/, $range)[-1];
    # Or, using the regex
    # my ($year) = $range =~ /([0-9]+)[^0-9]*$/;
    say $year;
}
Prints as desired.
We can capture the last 4 digits as a group, followed by an optional - at the end ($) of the string, and replace the whole match with the backreference (\\1) to the captured group.
sub(".*(\\d{4})-?$", "\\1", str1)
#[1] "2018" "2020" "1999" "2005"
data
str1 <- c("2018-", "2001-2020", "1999-", "2005-")
You can split the text on "-" and get the last number.
x <- c("2018-", "2001-2020", "1999-", "2005-")
sapply(strsplit(x, '-', fixed = TRUE), tail, 1)
#[1] "2018" "2020" "1999" "2005"

String recognition in IDL

I have the following strings:
F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_East_A.dat
F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_Froemke-Hoy.dat
and from each I want to extract the three variables, 1. SWIR32 2. the date and 3. the text following the date. I want to automate this process for about 200 files, so individually selecting the locations won't exactly work for me.
so I want:
variable1=SWIR32
variable2=2005210
variable3=East_A
variable4=SWIR32
variable5=2005210
variable6=Froemke-Hoy
I am going to be using these to add titles to graphs later on, but since the position of the text in each string varies I am unsure how to do this using STRMID.
I think you want to use a combination of STRPOS and STRSPLIT. Something like the following:
s = ['F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_East_A.dat', $
     'F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_Froemke-Hoy.dat']
name = STRARR(s.length)
date = name
txt = name
foreach sub, s, i do begin
  sub = STRMID(sub, 1 + STRPOS(sub, '\', /REVERSE_SEARCH))
  parts = STRSPLIT(sub, '_', /EXTRACT)
  name[i] = parts[0]
  date[i] = parts[1]
  txt[i] = STRJOIN(parts[2:*], '_')
endforeach
You could also do this with a regular expression (using just STRSPLIT) but regular expressions tend to be complicated and error prone.
Hope this helps!

grep on two strings

I'm trying to grab two different elements in a string.
The strings look like this:
str <- c('a_abc', 'b_abc', 'abc', 'z_zxy', 'x_zxy', 'zxy')
I have tried the different options in ?grep, but I can't get it right. I'm doing something like this:
grep('[_abc]:[_zxy]',str, value = TRUE)
and what I would like is,
[1] "a_abc" "b_abc" "z_zxy" "x_zxy"
any help would be appreciated.
Use normal parentheses (, not the square brackets [
grep('_(abc|zxy)',str, value = TRUE)
[1] "a_abc" "b_abc" "z_zxy" "x_zxy"
To make the grep a bit more flexible, you could do something like:
grep('_.{3}$',str, value = TRUE)
Which will match an underscore (_), followed by any character (.) three times ({3}), followed immediately by the end of the string ($).
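For reference, on the example vector this gives the same four matches:
str <- c('a_abc', 'b_abc', 'abc', 'z_zxy', 'x_zxy', 'zxy')
grep('_.{3}$', str, value = TRUE)
# [1] "a_abc" "b_abc" "z_zxy" "x_zxy"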
this should work: grep('_abc|_zxy', str, value=T)
X|Y matches when either X matches or Y matches
In this case just doing:
str[grep("_",str)]
will work... is it more complicated in your specific case?
