I have an input file containing one paragraph, and I need to split that paragraph into two sub-paragraphs at a delimiter pattern.
paragraph.xml
<Text>
This is first line.
This is second line.
\delemiter\new\one
This is third line.
This is fourth line.
</Text>
R code:
library(XML)
doc <- xmlTreeParse("paragraph.xml")
top <- xmlRoot(doc)
text <- top[[1]]
I need to split this paragraph into two paragraphs:
paragraph1
This is first line.
This is second line.
paragraph2
This is third line.
This is fourth line.
I found the strsplit function, which looked useful, but I could not get it to split multi-line text.
Since you have XML files, it is better to use the XML package's facilities. I see you have already started using it, so here is a continuation of that approach.
library(XML)
doc <- xmlParse('paragraph.xml') ## equivalent to xmlTreeParse(..., useInternalNodes = TRUE)
## extract the text of the node Text
mytext = xpathSApply(doc,'//Text/text()',xmlValue)
## convert it to a list of lines using scan
lines <- scan(text=mytext,sep='\n',what='character')
## get the delimiter index
delim <- which(lines == "\\delemiter\\new\\one")
## get the 2 paragraphs
p1 <- lines[seq(delim-1)]
p2 <- lines[seq(delim+1,length(lines))]
Then you can use paste or write to restore the paragraph structure; for example, using write:
write(p1,"",sep='\n')
This is first line.
This is second line.
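As an aside on the strsplit() remark in the question: it can split multi-line text if you split on the newline character. A minimal sketch using the mytext extracted above (assuming \n line endings):
## alternative to scan(): split the node text on newlines
lines <- strsplit(mytext, "\n", fixed = TRUE)[[1]]
lines <- lines[nzchar(trimws(lines))]  # drop empty/whitespace-only entries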
Here is a sort of roundabout possibility, using split, grepl, and cumsum.
Some sample data:
temp <- c("This is first line.", "This is second line.",
"\\delimiter\\new\\one", "This is third line.",
"This is fourth line.", "\\delimiter\\new\\one",
"This is fifth line")
# [1] "This is first line." "This is second line." "\\delimiter\\new\\one"
# [4] "This is third line." "This is fourth line." "\\delimiter\\new\\one"
# [7] "This is fifth line"
Use split after generating "groups" by using cumsum on grepl:
temp1 <- split(temp, cumsum(grepl("delimiter", temp)))
temp1
# $`0`
# [1] "This is first line." "This is second line."
#
# $`1`
# [1] "\\delimiter\\new\\one" "This is third line." "This is fourth line."
#
# $`2`
# [1] "\\delimiter\\new\\one" "This is fifth line"
If further cleanup is desired, here's one option:
lapply(temp1, function(x) {
  x[grep("delimiter", x)] <- NA  # mark the delimiter entries as NA
  x[complete.cases(x)]           # then keep only the non-NA entries
})
# $`0`
# [1] "This is first line." "This is second line."
#
# $`1`
# [1] "This is third line." "This is fourth line."
#
# $`2`
# [1] "This is fifth line"
I need to count the lines of 221 poems and tried counting the line breaks \n.
However, some lines have double line breaks \n\n to make a new verse, and I only want these counted once. The number and position of the double line breaks are random in each poem.
Minimal working example:
library("quanteda")
poem1 <- "This is a line\nThis is a line\n\nAnother line\n\nAnd another one\nThis is the last one"
poem2 <- "Some poetry\n\nMore poetic stuff\nAnother very poetic line\n\nThis is the last line of the poem"
poems <- quanteda::corpus(c(poem1, poem2))
The resulting line count should be 5 lines for poem1 and 4 lines for poem2.
I tried stringi::stri_count_fixed(texts(poems), pattern = "\n"), but a fixed single-newline pattern cannot account for the random double line breaks.
You can use stringr::str_count with the \R+ pattern to find the number of consecutive line break sequences in the string:
> poem1 <- "This is a line\nThis is a line\n\nAnother line\n\nAnd another one\nThis is the last one"
> poem2 <- "Some poetry\n\nMore poetic stuff\nAnother very poetic line\n\nThis is the last line of the poem"
> library(stringr)
> str_count(poem1, "\\R+")
[1] 4
> str_count(poem2, "\\R+")
[1] 3
So the line count is str_count(x, "\\R+") + 1.
The \R pattern matches any line break sequence (CRLF, LF, or CR). \R+ matches a run of one or more such line break sequences.
Here is a full demo:
poem1 <- "This is a line\nThis is a line\n\nAnother line\n\nAnd another one\nThis is the last one"
poem2 <- "Some poetry\n\nMore poetic stuff\nAnother very poetic line\n\nThis is the last line of the poem"
library(stringr)
str_count(poem1, "\\R+")
# => [1] 4
str_count(poem2, "\\R+")
# => [1] 3
## Line counts:
str_count(poem1, "\\R+") + 1
# => [1] 5
str_count(poem2, "\\R+") + 1
# => [1] 4
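Since the question used stringi: stringr delegates to stringi (ICU), so the equivalent stri_count_regex() call should give the same counts, and \R treats a CRLF pair as a single break:
library(stringi)
stri_count_regex(poem1, "\\R+") + 1
# => [1] 5
stri_count_regex("line one\r\nline two\n\nline three", "\\R+") + 1
# => [1] 3 (the \r\n pair counts as one break)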
I wonder if you can change how sentences are formed. Instead of punctuation delimiting each sentence, I would like a new row/new line to form the sentence.
This is a very minimal question, so I have to guess what you intend; I'm guessing that you want to segment your documents into lines rather than sentences. There are two ways to do this: create a new corpus where each line is a document, or a new tokens object where each "token" is a line.
Both are a matter of using the *_segment() functions. Here are the two ways, with some sample text where each line is a "sentence".
library("quanteda")
## Package version: 2.0.0
txt <- c(
d1 = "Sentence one.\nSentence two is on this line.\nLine three",
d2 = "This is a single sentence."
)
cat(txt)
## Sentence one.
## Sentence two is on this line.
## Line three This is a single sentence.
To make this into tokens, we use char_segment() with the newline as the segmentation pattern, then coerce the result to a list and then to tokens:
# as tokens
char_segment(txt, pattern = "\n", remove_pattern = FALSE) %>%
as.list() %>%
as.tokens()
## Tokens consisting of 4 documents.
## d1.1 :
## [1] "Sentence one."
##
## d1.2 :
## [1] "Sentence two is on this line."
##
## d1.3 :
## [1] "Line three"
##
## d2.1 :
## [1] "This is a single sentence."
If you want to make each of the lines into a "document" that can be segmented further, then use corpus_segment() after constructing a corpus from the txt object:
# as documents
corpus(txt) %>%
corpus_segment(pattern = "\n", extract_pattern = FALSE)
## Corpus consisting of 4 documents.
## d1.1 :
## "Sentence one."
##
## d1.2 :
## "Sentence two is on this line."
##
## d1.3 :
## "Line three"
##
## d2.1 :
## "This is a single sentence."
I am trying to postprocess the LaTeX (pdf_book output) of a bookdown document to collapse biblatex citations, so that I can sort them chronologically later using \usepackage[sortcites]{biblatex}. Thus, I need to find }{ after \\autocites and replace it with ,. I am experimenting with gsub() but can't find the correct incantation.
# example input
testcase <- "text \\autocites[cf.~][]{foxMapping2000}{wattPattern1947}{runkleGap1990} text {keep}{separate}"
# desired output
"text \\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990} text {keep}{separate}"
A simple approach was to replace all }{
> gsub('\\}\\{', ',', testcase, perl=TRUE)
[1] "text \\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990} text {keep,separate}"
But this also collapses {keep}{separate}.
I then tried to replace }{ only within a 'word' (a string of characters without whitespace) starting with \\autocites, using different groups, and failed bitterly:
> gsub('(\\\\autocites)([^ \f\n\r\t\v}{}]+)((\\}\\{})+)', '\\1\\2\\3', testcase, perl=TRUE)
[1] "text \\autocites[cf.~][]{foxMapping2000}{wattPattern1947}{runkleGap1990} some text {keep}{separate}"
Addendum:
The actual document contains more lines/elements than the testcase above. Not all elements contain \\autocites and in rare cases one element has more than one \\autocites. I didn't originally think this was relevant. A more realistic testcase:
testcase2 <- c("some text",
"text \\autocites[cf.~][]{foxMapping2000}{wattPattern1947}{runkleGap1990} text {keep}{separate}",
"text \\autocites[cf.~][]{foxMapping2000}{wattPattern1947}{runkleGap1990} text {keep}{separate} \\autocites[cf.~][]{foxMapping2000}{wattPattern1947}")
A single gsub call is enough:
gsub("(?:\\G(?!^)|\\\\autocites)\\S*?\\K}{", ",", testcase, perl=TRUE)
## => [1] "text \\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990} text {keep}{separate}"
Here, (?:\G(?!^)|\\autocites) matches either the end of the previous match or the \autocites string; then \S*? matches zero or more non-whitespace chars, as few as possible; then \K discards the text matched so far from the match buffer, and the }{ substring is consumed and replaced with a comma.
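Since gsub() is vectorized over its x argument, the same call should also handle testcase2 from the addendum: each element is processed independently, and the \G alternation restarts at every \autocites, so the element with two \autocites runs is collapsed too. A sketch (untested against the real document):
gsub("(?:\\G(?!^)|\\\\autocites)\\S*?\\K}{", ",", testcase2, perl = TRUE)
## "some text" passes through unchanged; {keep}{separate} stays intact;
## both \autocites runs in the third element are collapsed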
There is also a very readable solution with one regex and one fixed-text replacement using stringr::str_replace_all:
library(stringr)
str_replace_all(testcase, "\\\\autocites\\S+", function(x) gsub("}{", ",", x, fixed=TRUE))
# => [1] "text \\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990} text {keep}{separate}"
Here, \\autocites\S+ matches \autocites and then 1+ non-whitespace chars, and gsub("}{", ",", x, fixed=TRUE) replaces (very fast) each }{ with , in the matched text.
Not the prettiest solution, but it works. This repeatedly replaces }{ with , but only where it follows \\autocites with no intervening blanks.
while(length(grep('(autocites\\S*)\\}\\{', testcase, perl=TRUE))) {
testcase = sub('(autocites\\S*)\\}\\{', '\\1,', testcase, perl=TRUE)
}
testcase
[1] "text \\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990} text {keep}{separate}"
I'll make the input string slightly bigger to make the algorithm clearer.
str <- "
text \\autocites[cf.~][]{foxMapping2000}{wattPattern1947}{runkleGap1990} text {keep}{separate}
text \\autocites[cf.~][]{wattPattern1947}{foxMapping2000}{runkleGap1990} text {keep}{separate}
"
We will first extract all the citation blocks, replace "}{" with "," inside them, and then put them back into the string.
library(stringr)
# pattern for matching citation blocks
pattern <- "\\\\autocites(\\[[^\\[\\]]*\\])*(\\{[[:alnum:]]*\\})+"
cit <- str_extract_all(str, pattern)[[1]]
cit
#> [1] "\\autocites[cf.~][]{foxMapping2000}{wattPattern1947}{runkleGap1990}"
#> [2] "\\autocites[cf.~][]{wattPattern1947}{foxMapping2000}{runkleGap1990}"
Replace in citation blocks:
newcit <- str_replace_all(cit, "\\}\\{", ",")
newcit
#> [1] "\\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990}"
#> [2] "\\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990}"
Break the original string at the places where a citation block was found:
strspl <- str_split(str, pattern)[[1]]
strspl
#> [1] "\ntext " " text {keep}{separate}\ntext " " text {keep}{separate}\n"
Insert modified citation blocks:
combined <- character(length(strspl) + length(newcit))
combined[c(TRUE, FALSE)] <- strspl  # odd positions: the text between blocks
combined[c(FALSE, TRUE)] <- newcit  # even positions: the rewritten blocks
combined
#> [1] "\ntext "
#> [2] "\\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990}"
#> [3] " text {keep}{separate}\ntext "
#> [4] "\\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990}"
#> [5] " text {keep}{separate}\n"
Paste it together to finalize:
newstr <- paste(combined, collapse = "")
newstr
#> [1] "\ntext \\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990} text {keep}{separate}\ntext \\autocites[cf.~][]{foxMapping2000,wattPattern1947,runkleGap1990} text {keep}{separate}\n"
I suspect there could be a more elegant fully-regex solution based on the same idea, but I wasn't able to find one.
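Wrapped up, the extract-replace-reinsert steps above might look like this as a small helper (a sketch; collapse_autocites is a hypothetical name, and str and pattern are the objects defined above):
collapse_autocites <- function(str, pattern) {
  cit <- str_extract_all(str, pattern)[[1]]         # citation blocks
  newcit <- str_replace_all(cit, fixed("}{"), ",")  # collapse inside blocks
  strspl <- str_split(str, pattern)[[1]]            # text between blocks
  combined <- character(length(strspl) + length(newcit))
  combined[c(TRUE, FALSE)] <- strspl                # odd slots: surrounding text
  combined[c(FALSE, TRUE)] <- newcit                # even slots: rewritten blocks
  paste(combined, collapse = "")
}
collapse_autocites(str, pattern)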
I found an incantation that works. It's not pretty:
gsub("\\\\autocites[^ ]*",
gsub("\\}\\{",",",
gsub(".*(\\\\autocites[^ ]*).*","\\\\\\1",testcase) #all those extra backslashes are there because R is ridiculous.
),
testcase)
I broke it into lines to hopefully make it a little more intelligible. Basically, the innermost gsub extracts just the autocites part (everything that follows \\autocites up to the first space), then the middle gsub replaces the }{s with commas, and the outermost gsub substitutes the middle result for the pattern extracted by the innermost one.
This will only work with a single autocites in a string, of course.
Also, see fortunes::fortune(365).
I have a large corpus of text in a vector of strings (approx. 700,000 strings). I'm trying to replace specific words/phrases within the corpus. That is, I have a vector of approx. 40,000 phrases and a corresponding vector of replacements.
I'm looking for an efficient way of solving the problem.
I can do it in a for loop, looping through each pattern + replacement, but it scales badly (three days or so!).
I've also tried qdap::mgsub(), but it seems to scale badly as well.
txt <- c("this is a random sentence containing bca sk",
"another senctence with bc a but also with zqx tt",
"this sentence contains non of the patterns",
"this sentence contains only bc a")
patterns <- c("abc sk", "bc a", "zqx tt")
replacements <- c("#a-specfic-tag-#abc sk",
"#a-specfic-tag-#bc a",
"#a-specfic-tag-#zqx tt")
#either
txt2 <- qdap::mgsub(patterns, replacements, txt)
#or
for(i in 1:length(patterns)){
txt <- gsub(patterns[i], replacements[i], txt)
}
Both solutions scale badly for my data with approx. 40,000 patterns/replacements and 700,000 txt strings.
I figure there must be a more efficient way of doing this?
If you can tokenize the texts first, then vectorized replacement is much faster. It's also faster if a) you can use a multi-threaded solution and b) you use fixed instead of regular expression matching.
Here's how to do all that in the quanteda package. The last line pastes the tokens back into a single "document" as a character vector, if that is what you want.
library("quanteda")
## Package version: 1.4.3
## Parallel computing: 2 of 12 threads used.
## See https://quanteda.io for tutorials and examples.
##
## Attaching package: 'quanteda'
## The following object is masked from 'package:utils':
##
## View
quanteda_options(threads = 4)
txt <- c(
"this is a random sentence containing bca sk",
"another sentence with bc a but also with zqx tt",
"this sentence contains none of the patterns",
"this sentence contains only bc a"
)
patterns <- c("abc sk", "bc a", "zqx tt")
replacements <- c(
"#a-specfic-tag-#abc sk",
"#a-specfic-tag-#bc a",
"#a-specfic-tag-#zqx tt"
)
This will tokenize the texts and then use fast replacement of the hashed types, using a fixed pattern match (but you could have used valuetype = "regex" for regular expression matching). By wrapping patterns inside the phrase() function, you tell tokens_replace() to look for sequences of tokens rather than individual matches, which solves the multi-word issue.
toks <- tokens(txt) %>%
tokens_replace(phrase(patterns), replacements, valuetype = "fixed")
toks
## tokens from 4 documents.
## text1 :
## [1] "this" "is" "a" "random" "sentence"
## [6] "containing" "bca" "sk"
##
## text2 :
## [1] "another" "sentence"
## [3] "with" "#a-specfic-tag-#bc a"
## [5] "but" "also"
## [7] "with" "#a-specfic-tag-#zqx tt"
##
## text3 :
## [1] "this" "sentence" "contains" "none" "of" "the"
## [7] "patterns"
##
## text4 :
## [1] "this" "sentence" "contains"
## [4] "only" "#a-specfic-tag-#bc a"
Finally if you really want to put this back into character format, then convert to a list of character types and then paste them together.
sapply(as.list(toks), paste, collapse = " ")
## text1
## "this is a random sentence containing bca sk"
## text2
## "another sentence with #a-specfic-tag-#bc a but also with #a-specfic-tag-#zqx tt"
## text3
## "this sentence contains none of the patterns"
## text4
## "this sentence contains only #a-specfic-tag-#bc a"
You'll have to test this on your large corpus, but 700k strings does not sound like too large a task. Please try this and report how it did!
Create a vector of all words in each phrase
txt1 = strsplit(txt, " ")
words = unlist(txt1)
Use match() to find the index of words to replace, and replace them
idx <- match(words, patterns)
words[!is.na(idx)] = replacements[idx[!is.na(idx)]]
Re-form the phrases and paste together
phrases = relist(words, txt1)
updt = sapply(phrases, paste, collapse = " ")
I guess this won't work if patterns can have more than one word...
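To see the limitation concretely: with the multi-word patterns above, match() never finds a hit (each element of words is a single word), so updt comes back identical to txt. With hypothetical single-word patterns it works, e.g.:
words1 <- c("this", "sentence", "contains", "zqx")
patterns1 <- c("zqx", "abc")                  # hypothetical single-word patterns
replacements1 <- c("#tag-#zqx", "#tag-#abc")
idx1 <- match(words1, patterns1)
words1[!is.na(idx1)] <- replacements1[idx1[!is.na(idx1)]]
words1
## [1] "this" "sentence" "contains" "#tag-#zqx"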
Create a map between the old and new values
map <- setNames(replacements, patterns)
Create a pattern that contains all patterns in a single regular expression
pattern = paste0("(", paste0(patterns, collapse="|"), ")")
Find all matches, and extract them
ridx <- gregexpr(pattern, txt)
m <- regmatches(txt, ridx)
Unlist, map, and relist the matches to their replacement values, and update the original vector
regmatches(txt, ridx) <- relist(map[unlist(m)], m)
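Bundled into a reusable helper, the same steps might look like this (a sketch; replace_all_patterns is a hypothetical name, and it assumes the patterns contain no regex metacharacters, otherwise escape them first):
replace_all_patterns <- function(txt, patterns, replacements) {
  map <- setNames(replacements, patterns)
  ## one alternation over all patterns
  pattern <- paste0("(", paste0(patterns, collapse = "|"), ")")
  ridx <- gregexpr(pattern, txt)
  m <- regmatches(txt, ridx)
  ## map each match to its replacement and write back in place
  regmatches(txt, ridx) <- relist(map[unlist(m)], m)
  txt
}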
This is an excerpt of my data set; it looks as follows:
Description;ID;Date
wa119:d Here comes the first row;id_112;2018/03/02
ax21:3 Here comes the second row;id_115;2018/03/02
bC230:13 Here comes the third row;id_234;2018/03/02
I want to delete the words that contain a colon. In this case those are wa119:d, ax21:3 and bC230:13, so my new data set should look as follows:
Description;ID;Date
Here comes the first row;id_112;2018/03/02
Here comes the second row;id_115;2018/03/02
Here comes the third row;id_234;2018/03/02
Unfortunately, I was not able to find a regular expression / gsub solution. Can anyone help?
Here's one approach:
## reading in your data
dat <- read.table(text ='
Description;ID;Date
wa119:d Here comes the first row;id_112;2018/03/02
ax21:3 Here comes the second row;id_115;2018/03/02
bC230:13 Here comes the third row;id:234;2018/03/02
', sep = ';', header = TRUE, stringsAsFactors = FALSE)
## \\w+ = one or more word characters
gsub('\\w+:\\w+\\s+', '', dat$Description)
## [1] "Here comes the first row"
## [2] "Here comes the second row"
## [3] "Here comes the third row"
More info on \\w, a shorthand character class equivalent to [A-Za-z0-9_]: https://www.regular-expressions.info/shorthand.html
Supposing the column you want to modify is dat:
dat <- c("wa119:d Here comes the first row",
"ax21:3 Here comes the second row",
"bC230:13 Here comes the third row")
Then you can take each element, split it into words, remove the words containing a colon, and then paste what's left back together, yielding what you want:
dat_colon_words_removed <- unlist(lapply(dat, function(string){
words <- strsplit(string, split=" ")[[1]]
words <- words[!grepl(":", words)]
paste(words, collapse=" ")
}))
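The same split-filter-paste idea fits in a single vapply() call, if you prefer (a sketch equivalent to the loop above):
vapply(strsplit(dat, " ", fixed = TRUE),
       function(w) paste(w[!grepl(":", w, fixed = TRUE)], collapse = " "),
       character(1))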
Another solution that exactly matches the OP's expected output:
#data
df <- read.table(text = "Description;ID;Date
wa119:d Here comes the first row;id_112;2018/03/02
ax21:3 Here comes the second row;id_115;2018/03/02
bC230:13 Here comes the third row;id:234;2018/03/02", stringsAsFactors = FALSE, sep="\n")
gsub("[a-zA-Z0-9]+:[a-zA-Z0-9]+\\s", "", df$V1)
#[1] "Description;ID;Date"
#[2] "Here comes the first row;id_112;2018/03/02"
#[3] "Here comes the second row;id_115;2018/03/02"
#[4] "Here comes the third row;id:234;2018/03/02"