Extract "words" from a string - r

I have a table with 153 rows by 9 columns. My interest is in the character string in the first column: I want to extract the fourth word and create a new list from it; this list will be 153 rows by 1 column.
An example of the first two rows of column 1 of this database table:
[1] Resistance_Test DevID (Ohms) 428
[2] Diode_Test SUBLo (V) 353
"Words" are separated by spaces, so the fourth word of the first row is "428" and the fourth word of the second row is "353". How can I create a new list containing the fourth word of all 153 rows?

Use gsub() with a regular expression
x <- c("Resistance_Test DevID (Ohms) 428", "Diode_Test SUBLo (V) 353")
ptn <- "(.*? ){3}"
gsub(ptn, "", x)
[1] "428" "353"
This works because the regular expression (.*? ){3} finds exactly three ({3}) sets of characters followed by a space ((.*? )), and then replaces them with an empty string.
See ?gsub and ?regexp for more information.
If your data has structure that you don't mention in your question, then possibly the regular expression becomes even easier.
For example, if you are always interested in the last word of each line:
ptn <- "(.*? )"
gsub(ptn, "", x)
Or perhaps you know for sure that you only want the digits and can discard everything else:
ptn <- "\\D"
gsub(ptn, "", x)

You could use word() from the stringr package:
> x <- c("Resistance_Test DevID (Ohms) 428", "Diode_Test SUBLo (V) 353")
> library(stringr)
> word(string = x, start = 4, end = 4)
[1] "428" "353"
By specifying the same position for both the start and end words, you will always get the fourth word.
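As an aside (not in the original answer), end defaults to start, so word(x, 4) is equivalent, and a negative start counts words from the end of the string:
> word(string = x, start = -1)
[1] "428" "353"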
I hope this helps.

We can use sub. We match the pattern of one or more non-whitespace characters (\\S+) followed by one or more whitespace characters (\\s+), repeated 3 times ({3}), then a word captured in a group ((\\w+)), followed by the rest of the string (.*). We replace the whole match with the second backreference.
sub("(\\S+\\s+){3}(\\w+).*", "\\2", str1)
#[1] "428" "353"
This selects by the nth word, so it also works when there is extra text after that word:
sub("(\\S+\\s+){3}(\\w+).*", "\\2", str2)
#[1] "428" "353" "428"
Another option is stri_extract
library(stringi)
stri_extract_last_regex(str1, "\\w+")
#[1] "428" "353"
data
str1 <- c("Resistance_Test DevID (Ohms) 428", "Diode_Test SUBLo (V) 353")
str2 <- c(str1, "Resistance_Test DevID (Ohms) 428 something else")
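For comparison (a quick check, not part of the original answer), the last-word shortcut is not equivalent on str2, because the trailing text changes which word comes last:
stri_extract_last_regex(str2, "\\w+")
#[1] "428" "353" "else"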

If you are not familiar with regular expressions, the function strsplit can help you:
data <- c('Resistance_Test DevID (Ohms) 428', 'Diode_Test SUBLo (V) 353')
unlist(lapply(strsplit(data, ' '), function(x) x[4]))
[1] "428" "353"

Related

Regex: extract a number after a string that contains a number

Suppose I have a string:
str <- "England has 90 cases(1 discharged, 5 died); Scotland has 5 cases(2 discharged, 1 died)"
How can I grab the number of discharged cases in England?
I have tried
sub("(?i).*England has [\\d] cases(.*?(\\d+).*", "\\1", str),
It's returning the original string. Many Thanks!
We can use regmatches/gregexpr to match one or more digits (\\d+) that are followed by a space and 'discharged', extracting the number of discharged cases
as.integer(regmatches(str, gregexpr("\\d+(?= discharged)", str, perl = TRUE))[[1]])
#[1] 1 2
If it is specific only to 'England', start with 'England' followed by characters that are not a ( ([^(]+) and a (, then capture the digits (\\d+) as a group; in the replacement, specify the backreference (\\1) of the captured group
sub("England[^(]+\\((\\d+).*", "\\1", str)
#[1] "1"
Or if we go by the OP's option, the ( after "cases" should be escaped, as an unescaped ( is a metacharacter that opens a capture group. Also, \\d+ can be written without the square brackets
sub("(?i)England has \\d+ cases\\((\\d+).*", "\\1", str)
#[1] "1"
We can use str_match to capture the number before "discharged".
stringr::str_match(str, "England.*?(\\d+) discharged")[, 2]
#[1] "1"
The underlying regex is \\d+(?= discharged), a lookahead; here we take only the first match.
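A sketch of that first-match variant with str_extract (not shown in the original answer), which returns only the first match by default:
stringr::str_extract(str, "\\d+(?= discharged)")
#[1] "1"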

How would I remove the text before the initial period, the initial period itself and text after final period in a string?

I need to remove the text before the leading period (as well as the leading period) and the text following the last period from a string.
Given this string for example:
"ABCD.EF.GH.IJKL.MN"
I'd like to get the output:
[1] "IJKL"
I have tried the following:
split_string <- sub("^.*?\\.","", string)
split_string <- sub("^\\.+|\\.[^.]*$", "", string)
I believe I have it working for the final period and the text after it, which is the output I want. However, the first line needs to be executed multiple times to remove all of the text before the period in question, e.g. '.I'.
One option in base R is to capture as a group ((...)) the word followed by the dot (\\.) and the word (\\w+) till the end ($) of the string. In the replacement, use the backreference (\\1) of the captured word
sub(".*\\.(\\w+)\\.\\w+$", "\\1", str1)
#[1] "IJKL"
Here, we match characters (.*) till the . (\\. - escaped to get the literal value, because an unescaped . is a metacharacter that matches any character), followed by the captured word ((\\w+)), followed by a dot and another word at the end ($) of the string. The replacement part is mentioned above.
Or another option is regmatches/regexpr from base R
regmatches(str1, regexpr("\\w+(?=\\.\\w+$)", str1, perl = TRUE))
#[1] "IJKL"
Or another option is word from stringr
library(stringr)
word(str1, -2, sep="[.]")
#[1] "IJKL"
data
str1 <- "ABCD.EF.GH.IJKL.MN"
Here is a janky dplyr version in case the other values are of importance and you want to select them later on, just include them in the "select".
df<- data.frame(x=c("ABCD.EF.GH.IJKL.MN"))
df2<-df %>%
separate(x, into=c("var1", "var2","var3","var4","var5")) %>%
select("var4")
Split into groups at period and take the second one from last.
sapply(strsplit(str1, "\\."), function(x) x[length(x) - 1])
#[1] "IJKL"
Get indices of the periods and use substr to extract the relevant portion
sapply(str1, function(x){
  ind = gregexpr("\\.", x)[[1]]
  substr(x, ind[length(ind) - 1] + 1, ind[length(ind)] - 1)
}, USE.NAMES = FALSE)
#[1] "IJKL"
These alternatives all use no packages or regular expressions.
1) basename/dirname Assuming the test input s shown in the Note at the end, convert the dots to slashes and then use dirname and basename.
basename(dirname(chartr(".", "/", s)))
## [1] "IJKL" "IJKL"
2) strsplit Using strsplit split the strings at dot creating a list of character vectors, one vector per input string, and then for each such vector take the last 2 elements using tail and the first of those using indexing.
sapply(strsplit(s, ".", fixed = TRUE), function(x) tail(x, 2)[1])
## [1] "IJKL" "IJKL"
3) read.table It is not clear from the question what the general case is but if all the components of s have the same number of dot separated fields then we can use read.table to create a data.frame with one row per input string and one column per dot-separated component. Then take the column just before the last.
dd <- read.table(text = s, sep = ".", as.is = TRUE)
dd[[ncol(dd)-1]]
## [1] "IJKL" "IJKL"
4) substr Again, the general case is not clear but if the string of interest is always at character positions 12-15 then a simple solution is:
substr(s, 12, 15)
## [1] "IJKL" "IJKL"
Note
s <- c("ABCD.EF.GH.IJKL.MN", "ABCD.EF.GH.IJKL.MN")

Regex: how to keep all digits when splitting a string?

Question
Using a regular expression, how do I keep all digits when splitting a string?
Overview
I would like to split each element within the character vector sample.text into two elements: one of only digits and one of only the text.
Current Attempt is Dropping Last Digit
This regular expression - \\d\\s{1} - inside of base::strsplit() removes the last digit. Below is my attempt, along with my desired output.
# load necessary data -----
sample.text <-
c("111110 Soybean Farming", "0116 Soybeans")
# split string by digit and one space pattern ------
strsplit(sample.text, split = "\\d\\s{1}")
# [[1]]
# [1] "11111" "Soybean Farming"
#
# [[2]]
# [1] "011" "Soybeans"
# desired output --------
# [[1]]
# [1] "111110" "Soybean Farming"
#
# [[2]]
# [1] "0116" "Soybeans"
# end of script #
Any advice on how I can split sample.text to keep all digits would be much appreciated! Thank you.
Because you're splitting on \\d, the digit there is consumed in the regex, and not present in the output. Use lookbehind for a digit instead:
strsplit(sample.text, split = "(?<=\\d) ", perl=TRUE)
http://rextester.com/GDVFU71820
Some alternative solutions, using very simple pattern matching on the first occurrence of space:
1) Indirectly by using sub to substitute your own delimiter, then strsplit on your delimiter:
E.g. you can substitute ';' for the first space (if you know that character does not exist in your data):
strsplit( sub(' ', ';', sample.text), split=';')
2) Using regexpr and regmatches
You can effectively match on the first " " (space character), and split as follows:
regmatches(sample.text, regexpr(" ", sample.text), invert = TRUE)
Result is a list, if that's what you are after per your sample desired output:
[[1]]
[1] "111110" "Soybean Farming"
[[2]]
[1] "0116" "Soybeans"
3) Using stringr library:
library(stringr)
str_split_fixed(sample.text, " ", 2) #outputs a character matrix
[,1] [,2]
[1,] "111110" "Soybean Farming"
[2,] "0116" "Soybeans"

How to extract substring from a string?

There are some strings which show the following pattern
ABC, DEF.JHI
AB,DE.(JH)
Generally, it includes three sections, which are separated by , and . The last character can be either a normal character or something like ).
JHI
(JH)
Is there a way to do that in R?
library(stringr)
str1 <- c("ABC, DEF.JHI","AB,DE.(JH)")
str_extract(str1,perl('(?<=\\.).*'))
#[1] "JHI" "(JH)"
(?<=\\.) is a lookbehind for ., and .* then matches all the characters that follow it.
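Note that in current versions of stringr, perl() is deprecated; the default ICU regex engine handles the lookbehind directly, so a present-day sketch would simply be:
str_extract(str1, '(?<=\\.).*')
#[1] "JHI" "(JH)"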
You can just split on the . using strsplit and extract the second element.
str1 <- c("ABC, DEF.JHI","AB,DE.(JH)")
unlist(lapply(strsplit(str1, "\\."), "[", 2))
# [1] "JHI" "(JH)"
Here's another possibility:
sapply(strsplit(str1, "\\.\\(|\\.|\\)"), "[[", 2)
Riffing on @josiber's answer, you could remove the part of the string up to and including the last .
str1 <- c("ABC, DEF.JHI","AB,DE.(JH)")
gsub(".*\\.", "", str1)
# [1] "JHI" "(JH)"
EDIT
In case your third element is not always preceded by a ., to extract the final part
str1 <- c("ABC, DEF.JHI","AB,DE.(JH)", "ABC.DE, (JH)")
gsub(".*[,.]", "" , str1)
# [1] "JHI" "(JH)" " (JH)"

Count the number of all words in a string

Is there a function to count the number of words in a string?
For example:
str1 <- "How many words are in this sentence"
to return a result of 7.
Use the regular expression symbol \\W to match non-word characters, using + to indicate one or more in a row, along with gregexpr to find all matches in a string. Words are the number of word separators plus 1.
lengths(gregexpr("\\W+", str1)) + 1
This will fail with blank strings at the beginning or end of the character vector, when a "word" doesn't satisfy \\W's notion of non-word (one could work with other regular expressions, \\S+, [[:alpha:]], etc., but there will always be edge cases with a regex approach), etc. It is likely more efficient than strsplit solutions, which will allocate memory for each word. Regular expressions are described in ?regex.
Update: As noted in the comments and in a different answer by @Andri, the approach fails with (zero) and one-word strings, and with trailing punctuation
str1 = c("", "x", "x y", "x y!" , "x y! z")
lengths(gregexpr("[A-z]\\W+", str1)) + 1L
# [1] 2 2 2 3 3
Many of the other answers also fail in these or similar (e.g., multiple spaces) cases. I think my answer's caveat about the 'notion of one word' in the original answer covers problems with punctuation (solution: choose a different regular expression, e.g., [[:space:]]+), but the zero and one word cases are a problem; @Andri's solution fails to distinguish between zero and one words. So taking a 'positive' approach to finding words, one might
sapply(gregexpr("[[:alpha:]]+", str1), function(x) sum(x > 0))
Leading to
sapply(gregexpr("[[:alpha:]]+", str1), function(x) sum(x > 0))
# [1] 0 1 2 2 3
Again the regular expression might be refined for different notions of 'word'.
I like the use of gregexpr() because it's memory efficient. An alternative using strsplit() (like @user813966, but with a regular expression to delimit words) and making use of the original notion of delimiting words is
lengths(strsplit(str1, "\\W+"))
# [1] 0 1 2 2 3
This needs to allocate new memory for each word that is created, and for the intermediate list-of-words. This could be relatively expensive when the data is 'big', but probably it's effective and understandable for most purposes.
The simplest way would be:
require(stringr)
str_count("one, two three 4,,,, 5 6", "\\S+")
... counting all sequences of non-space characters (\\S+).
But what about a little function that lets us also decide which kind of words we would like to count and which works on whole vectors as well?
require(stringr)
nwords <- function(string, pseudo=F){
  ifelse( pseudo,
          pattern <- "\\S+",
          pattern <- "[[:alpha:]]+"
  )
  str_count(string, pattern)
}
nwords("one, two three 4,,,, 5 6")
# 3
nwords("one, two three 4,,,, 5 6", pseudo=T)
# 6
I use the str_count function from the stringr library with the escape sequence \w that represents:
any ‘word’ character (letter, digit or underscore in the current
locale: in UTF-8 mode only ASCII letters and digits are considered)
Example:
> str_count("How many words are in this sentence", '\\w+')
[1] 7
Of all other 9 answers that I was able to test, only two (by Vincent Zoonekynd, and by petermeissner) worked for all inputs presented here so far, but they also require stringr.
But only this solution works with all inputs presented so far, plus inputs such as "foo+bar+baz~spam+eggs" or "Combien de mots sont dans cette phrase ?".
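For instance (a quick check, assuming stringr is loaded):
str_count("foo+bar+baz~spam+eggs", '\\w+')
# [1] 5
str_count("Combien de mots sont dans cette phrase ?", '\\w+')
# [1] 7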
Benchmark:
library(stringr)
questions <-
  c(
    "", "x", "x y", "x y!", "x y! z",
    "foo+bar+baz~spam+eggs",
    "one, two three 4,,,, 5 6",
    "How many words are in this sentence",
    "How many words are in this sentence",
    "Combien de mots sont dans cette phrase ?",
    "
Day after day, day after day,
We stuck, nor breath nor motion;
"
  )
answers <- c(0, 1, 2, 2, 3, 5, 6, 7, 7, 7, 12)
score <- function(f) sum(unlist(lapply(questions, f)) == answers)
funs <-
  c(
    function(s) sapply(gregexpr("\\W+", s), length) + 1,
    function(s) sapply(gregexpr("[[:alpha:]]+", s), function(x) sum(x > 0)),
    function(s) vapply(strsplit(s, "\\W+"), length, integer(1)),
    function(s) length(strsplit(gsub(' {2,}', ' ', s), ' ')[[1]]),
    function(s) length(str_match_all(s, "\\S+")[[1]]),
    function(s) str_count(s, "\\S+"),
    function(s) sapply(gregexpr("\\W+", s), function(x) sum(x > 0)) + 1,
    function(s) length(unlist(strsplit(s," "))),
    function(s) sapply(strsplit(s, " "), length),
    function(s) str_count(s, '\\w+')
  )
unlist(lapply(funs, score))
Output (11 is the maximum possible score):
6 10 10 8 9 9 7 6 6 11
You can use strsplit and sapply functions
sapply(strsplit(str1, " "), length)
str2 <- gsub(' {2,}',' ',str1)
length(strsplit(str2,' ')[[1]])
The gsub(' {2,}',' ',str1) makes sure all words are separated by one space only, by replacing all occurrences of two or more spaces with a single space.
The strsplit(str,' ') splits the sentence at every space and returns the result in a list. The [[1]] grabs the vector of words out of that list. The length counts up how many words.
> str1 <- "How many words are in this sentence"
> str2 <- gsub(' {2,}',' ',str1)
> str2
[1] "How many words are in this sentence"
> strsplit(str2,' ')
[[1]]
[1] "How" "many" "words" "are" "in" "this" "sentence"
> strsplit(str2,' ')[[1]]
[1] "How" "many" "words" "are" "in" "this" "sentence"
> length(strsplit(str2,' ')[[1]])
[1] 7
You can use str_match_all, with a regular expression that would identify your words.
The following works with initial, final and duplicated spaces.
library(stringr)
s <- "
Day after day, day after day,
We stuck, nor breath nor motion;
"
m <- str_match_all( s, "\\S+" ) # Sequences of non-spaces
length(m[[1]])
Try this function from stringi package
require(stringi)
> s <- c("Lorem ipsum dolor sit amet, consectetur adipisicing elit.",
+ "nibh augue, suscipit a, scelerisque sed, lacinia in, mi.",
+ "Cras vel lorem. Etiam pellentesque aliquet tellus.",
+ "")
> stri_stats_latex(s)
CharsWord CharsCmdEnvir CharsWhite Words Cmds Envirs
133 0 30 24 0 0
Also from the stringi package, the straightforward function stri_count_words
stringi::stri_count_words(str1)
#[1] 7
You can use the wc function from the qdap library:
> str1 <- "How many words are in this sentence"
> wc(str1)
[1] 7
You can remove extra spaces and count the number of " " characters in the string to get the word count, using str_count() from stringr and rm_white() from qdapRegex:
library(stringr)
library(qdapRegex)
str_count(rm_white(s), " ") + 1
Try this
length(unlist(strsplit(str1," ")))
require(stringr)
str_count(x,"\\w+")
This will be fine even with double/triple spaces between words.
All the other answers have issues with more than one space between words.
Solution 7 does not give the correct result when there's just one word.
You should not just count the elements in gregexpr's result (which is -1 if there were no matches) but count the elements > 0.
Ergo:
sapply(gregexpr("\\W+", str1), function(x) sum(x>0) ) + 1
require(stringr)
Define a very simple function
str_words <- function(sentence) {
  str_count(sentence, " ") + 1
}
Check
str_words("This is a sentence with six words")
Use nchar
If the vector of strings is called x:
(nchar(x) - nchar(gsub(' ','',x))) + 1
Find the number of spaces, then add one.
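A quick check on the example sentence from the question:
x <- "How many words are in this sentence"
(nchar(x) - nchar(gsub(' ','',x))) + 1
# [1] 7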
You could use stringr functions str_split() and boundary(), which will recognize the boundaries of words while ignoring punctuation and any extra spaces
sapply(str_split("It's 12 o'clock already", boundary("word")), length)
#[1] 4
sapply(str_split(" It's >12 o'clock already ?! ", boundary("word")), length)
#[1] 4
With the stringr package, one can also write a simple script that traverses a vector of strings, for example with a for loop.
Let's say
df$text
contains a vector of strings that we are interested in analysing. First, we add additional columns to the existing dataframe df as below:
df$strings = as.integer(NA)
df$characters = as.integer(NA)
Then we run a for-loop over the vector of strings as below:
for (i in 1:nrow(df))
{
  df$strings[i] = str_count(df$text[i], '\\S+') # counts the strings
  df$characters[i] = str_count(df$text[i]) # counts the characters & spaces
}
The resulting columns strings and characters will contain the counts of words and characters, and this is achieved in one go for the whole vector of strings.
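As a side note (not part of the original answer), str_count() is itself vectorised over its first argument, so the same two columns can be filled without the loop:
df$strings <- str_count(df$text, '\\S+') # same word counts as the loop above
df$characters <- str_count(df$text) # same character counts as the loop above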
I've found the following function and regex useful for word counts, especially in dealing with single vs. double hyphens, where the former generally should not count as a word break (e.g., well-known, hi-fi), whereas a double hyphen is a punctuation delimiter that is not bounded by white-space--such as for parenthetical remarks.
txt <- "Don't you think e-mail is one word--and not two!" #10 words
words <- function(txt) {
  length(attributes(gregexpr("(\\w|\\w\\-\\w|\\w\\'\\w)+",txt)[[1]])$match.length)
}
words(txt) #10 words
Stringi is a useful package, but it over-counts words in this example because of the hyphens.
stringi::stri_count_words(txt) #11 words
There's a simple solution in Python using split() and len():
text = 'This is a test for counting words'
# default separator: space
result = len(text.split())
print("There are " + str(result) + " words.")
You can get more details at
https://www.delftstack.com/howto/python/python-count-words-in-string/
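For completeness, a rough R equivalent of that split-and-count approach (a sketch, not part of the original answer):
text <- 'This is a test for counting words'
length(strsplit(text, '\\s+')[[1]])
# [1] 7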
