How to extract repeated patterns from a string - r

I need to extract certain patterns from the text below.
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Budget 2016-2017
Curabitur dictum gravida mauris. Budget 2015-2016 mauris ut leo. Cras
viverra metus rhoncus sem
I need to get the 'Budget \d{4}-\d{4}' part of the text so it looks like:
[1] "Budget 2016-2017" "Budget 2015-2016"

You can get what you want with the following:
library(stringr)
string <- "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Budget 2016-2017 Curabitur dictum gravida mauris. Budget 2015-2016 mauris ut leo. Cras viverra metus rhoncus sem"
unlist(str_extract_all(string, 'Budget [0-9]{4}-[0-9]{4}'))
Result:
> unlist(str_extract_all(string, 'Budget [0-9]{4}-[0-9]{4}'))
[1] "Budget 2016-2017" "Budget 2015-2016"

Something close, using base R. Note that this gsub() approach returns only a single match (the last occurrence, because the leading .* is greedy), so it is not a full substitute for str_extract_all():
s <- "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Budget 2016-2017 Curabitur dictum gravida mauris. Budget 2015-2016 mauris ut leo. Cras viverra metus rhoncus sem"
gsub(".*(Budget [0-9]{4}-[0-9]{4}).*", "\\1", s)
[1] "Budget 2015-2016"

Related

Pull all 8 digit numbers from a data frame

I have this assignment where I need to pull all the 8-digit numbers from a text file. I've converted the text file into a data frame and now have some 67 columns with 18000 rows. There are empty cells as well.
Within this table, some 8-digit numbers exist (not in any particular row or column), which is what I want to extract.
I need all these numbers to be extracted into one single column without checking for duplicates.
The only code I've written so far:
data <- read.delim("cerupload_adsi_1_01-02-2019.txt", header = FALSE, sep="|")
You may use regmatches() and match a run of exactly 8 digits with the regex "\\d{8}". Adding word boundaries "\\b" makes this more robust by preventing matches inside longer digit runs.
Example
txt <- "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod
tempor invidunt ut labore et dolore 235462354 magna aliquyam erat, sed diam voluptua. At
vero eos et accusam et justo duo dolores et ea rebum. Stet clita 235 kasd gubergren, no sea
takimata sanctus est Lorem ipsum dolor sit amet. 12345678 Lorem ipsum dolor 345.454 sit amet,
12345678 consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et
dolore magna aliquyam erat, sed diam 345 voluptua. At vero eos et accusam et justo duo
dolores et ea rebum. Stet clita 12345.67 12345.678 kasd gubergren, no sea takimata sanctus
est Lorem ipsum dolor sit amet. 12345678"
regmatches(txt, gregexpr("\\b\\d{8}\\b", txt))
# [[1]]
# [1] "12345678" "12345678" "12345678"
First, put all of your data into a simple integer vector:
data = as.integer(unlist(data))
Next, remove any elements that weren't convertible to integers (optional):
data = data[!is.na(data)]
Next, find the integers that are 8 characters long:
data = data[nchar(as.character(data))==8]
Then, make a data.frame with the integer vector as a column:
data = data.frame(x=data)
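Put together on a small example (hypothetical data; cells mixing digits with text become NA at the as.integer() step and are dropped):
data <- data.frame(a = c("12345678", "abc", "123"), b = c("99", "87654321", NA))
data <- as.integer(unlist(data))              # coerce every cell to integer
data <- data[!is.na(data)]                    # drop cells that weren't convertible
data <- data[nchar(as.character(data)) == 8]  # keep 8-digit values
data.frame(x = data)
#          x
# 1 12345678
# 2 87654321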
Using str_extract_all from stringr. Word boundaries in the pattern ensure that exactly 8 digits are matched, rather than the tail of a longer number:
temp <- data.frame(col = unlist(stringr::str_extract_all(unlist(data), "\\b\\d{8}\\b")))
temp
# col
#1 12352318
#2 98765432
data
Tested on this sample data with two columns:
data <- data.frame(a = "This is a text with number 1234 and 12352318",
b = "More random text 123456789 98765432")

replace string with a random character from a selection

How can you take the string and replace every instance of ".", ",", " " (i.e. dot, comma or space) with one random character selected from c('|', ':', '#', '*')?
Say I have a string like this
Aenean ut odio dignissim augue rutrum faucibus. Fusce posuere, tellus eget viverra mattis, erat tellus porta mi, at facilisis sem nibh non urna. Phasellus quis turpis quis mauris suscipit vulputate. Sed interdum lacus non velit. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae;
To get one random character, we can treat the characters as a vector and then use the sample function to select one. I assume I first need to search for dot, comma or space, then use the gsub function to replace all of these?
Given your clarification, try this one (replace = TRUE is needed in general, since a string may contain more separators than there are replacement characters):
x <- c("this, is nice.", "nice, this is.")
gr <- gregexpr("[., ]", x)
regmatches(x, gr) <- lapply(lengths(gr), sample, x = c('|', ':', '#', '*'), replace = TRUE)
x
#[1] "this|*is#nice:" "nice#|this*is:"
Here is another option with chartr. Note that chartr() maps each of the three target characters to one fixed replacement, so the draw is random per character class rather than per occurrence:
pat <- paste(sample(c('|', ':', '#', '*'), 3), collapse="")
chartr('., ', pat, x)
#[1] "this|*is*nice:" "nice|*this*is:"
data
x <- c("this, is nice.", "nice, this is.")

Save .dta files with long strings in R

I have to save an R-dataset in Stata's .dta format.
The dataset contains, among other data, a single column containing long strings (column 3).
test data:
r_data <- data.frame( ae= 1, be= 2, ce= "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet"
,stringsAsFactors = FALSE )
export to dta
library(foreign)
write.dta(r_data, file = "r_data.dta")
results in this warning message:
Warning message:
In write.dta(r_data, file = "r_data.dta") :
character strings of >244 bytes in column 3 will be truncated
Furthermore, I can't open the file in Stata (14 SE) at all due to an error stating:
. use "r_data.dta"
file not Stata format
.dta file contains 1 invalid storage-type code.
File uses invalid codes other than code 0.
r(610);
How can I save longer strings as a .dta file?
An R solution is preferred because I am not experienced with Stata.
PS: The indirect route via a CSV-file does not work, because the resulting CSV-file is too big for my little RAM when importing in Stata.
Old question, but it deserves an answer:
Use the haven package to write a .dta file in Stata 14 format, which, unlike foreign::write.dta(), is not limited to 244-byte strings.
library(haven)
r_data <- data.frame(ae = 1, be = 2, ce = "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet",
stringsAsFactors = FALSE)
write_dta(r_data, "r_data.dta")
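To confirm that nothing was truncated, the file can be read back with haven (a quick sanity check, assuming r_data.dta sits in the working directory):
check <- read_dta("r_data.dta")
nchar(check$ce) == nchar(r_data$ce)  # same string length after the round trip?
# [1] TRUE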

extract semi-structured text from Word documents

I want to text-mine a set of files based on the form below. I can create a corpus where each file is a document (using tm), but I'm thinking it might be better to create a corpus where each section of the form's second table is a document with the following metadata:
Author : John Smith
DateTimeStamp: 2013-04-18 16:53:31
Description :
Heading : Current Focus
ID : Smith-John_e.doc Current Focus
Language : en_CA
Origin : Smith-John_e.doc
Name : John Smith
Title : Manager
TeamMembers : Joe Blow, John Doe
GroupLeader : She who must be obeyed
where Name, Title, TeamMembers and GroupLeader are extracted from the first table on the form. In this way, each chunk of text to be analyzed would maintain some of its context.
What is the best way to approach this? I can think of 2 ways:
somehow parse the corpus I have into child corpora.
somehow parse the document into subdocuments and make a corpus from those.
Any pointers would be much appreciated.
This is the form (image omitted):
Here is an RData file of a corpus with 2 documents. exc[[1]] came from a .doc and exc[[2]] came from a docx. They both used the form above.
Here's a quick sketch of a method; hopefully it will provoke someone more talented to stop by and suggest something more efficient and robust. Using the RData file in your question, I found that the doc and docx files have slightly different structures and so require slightly different approaches. (Though I see in the metadata that your docx is 'fake2.txt', so is it really docx? I see in your other question that you used a converter outside of R; that must be why it's txt.)
library(tm)
First get custom metadata for the doc file. I'm no regex expert, as you can see, but the nested gsub() calls roughly do: strip the punctuation, remove the label word (such as "Name"), then trim leading and trailing spaces...
# create User-defined local meta data pairs
meta(exc[[1]], type = "corpus", tag = "Name1") <- gsub("^\\s+|\\s+$","", gsub("Name", "", gsub("[[:punct:]]", '', exc[[1]][3])))
meta(exc[[1]], type = "corpus", tag = "Title") <- gsub("^\\s+|\\s+$","", gsub("Title", "", gsub("[[:punct:]]", '', exc[[1]][4])))
meta(exc[[1]], type = "corpus", tag = "TeamMembers") <- gsub("^\\s+|\\s+$","", gsub("Team Members", "", gsub("[[:punct:]]", '', exc[[1]][5])))
meta(exc[[1]], type = "corpus", tag = "ManagerName") <- gsub("^\\s+|\\s+$","", gsub("Name of your", "", gsub("[[:punct:]]", '', exc[[1]][7])))
Now have a look at the result
# inspect
meta(exc[[1]], type = "corpus")
Available meta data pairs are:
Author :
DateTimeStamp: 2013-04-22 13:59:28
Description :
Heading :
ID : fake1.doc
Language : en_CA
Origin :
User-defined local meta data pairs are:
$Name1
[1] "John Doe"
$Title
[1] "Manager"
$TeamMembers
[1] "Elise Patton Jeffrey Barnabas"
$ManagerName
[1] "Selma Furtgenstein"
Do the same for the docx file
# create User-defined local meta data pairs
meta(exc[[2]], type = "corpus", tag = "Name2") <- gsub("^\\s+|\\s+$","", gsub("Name", "", gsub("[[:punct:]]", '', exc[[2]][2])))
meta(exc[[2]], type = "corpus", tag = "Title") <- gsub("^\\s+|\\s+$","", gsub("Title", "", gsub("[[:punct:]]", '', exc[[2]][4])))
meta(exc[[2]], type = "corpus", tag = "TeamMembers") <- gsub("^\\s+|\\s+$","", gsub("Team Members", "", gsub("[[:punct:]]", '', exc[[2]][6])))
meta(exc[[2]], type = "corpus", tag = "ManagerName") <- gsub("^\\s+|\\s+$","", gsub("Name of your", "", gsub("[[:punct:]]", '', exc[[2]][8])))
And have a look
# inspect
meta(exc[[2]], type = "corpus")
Available meta data pairs are:
Author :
DateTimeStamp: 2013-04-22 14:06:10
Description :
Heading :
ID : fake2.txt
Language : en
Origin :
User-defined local meta data pairs are:
$Name2
[1] "Joe Blow"
$Title
[1] "Shift Lead"
$TeamMembers
[1] "Melanie Baumgartner Toby Morrison"
$ManagerName
[1] "Selma Furtgenstein"
If you have a large number of documents, then wrapping these meta functions in a helper applied over the whole corpus would be the way to go, as sketched below.
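A minimal sketch of that idea (assuming every document shares the line layout of the .doc file above; a for loop is used because meta()<- assigns in place):
clean <- function(line, label) {
  # strip punctuation, drop the label word, then trim surrounding spaces
  gsub("^\\s+|\\s+$", "", gsub(label, "", gsub("[[:punct:]]", "", line)))
}
for (i in seq_along(exc)) {
  meta(exc[[i]], type = "corpus", tag = "Name") <- clean(exc[[i]][3], "Name")
  meta(exc[[i]], type = "corpus", tag = "Title") <- clean(exc[[i]][4], "Title")
  meta(exc[[i]], type = "corpus", tag = "TeamMembers") <- clean(exc[[i]][5], "Team Members")
  meta(exc[[i]], type = "corpus", tag = "ManagerName") <- clean(exc[[i]][7], "Name of your")
}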
Now that we've got the custom metadata, we can subset the documents to exclude that part of the text:
# create new corpus that excludes part of doc that is now in metadata. We just use square bracket indexing to subset the lines that are the second table of the forms (slightly different for each doc type)
excBody <- Corpus(VectorSource(c(paste(exc[[1]][13:length(exc[[1]])], collapse = ","),
paste(exc[[2]][9:length(exc[[2]])], collapse = ","))))
# get rid of all the white spaces
excBody <- tm_map(excBody, stripWhitespace)
Have a look:
inspect(excBody)
A corpus with 2 text documents
The metadata consists of 2 tag-value pairs and a data frame
Available tags are:
create_date creator
Available variables in the data frame are:
MetaID
[[1]]
|CURRENT RESEARCH FOCUS |,| |,|Lorem ipsum dolor sit amet, consectetur adipiscing elit. |,|Donec at ipsum est, vel ullamcorper enim. |,|In vel dui massa, eget egestas libero. |,|Phasellus facilisis cursus nisi, gravida convallis velit ornare a. |,|MAIN AREAS OF EXPERTISE |,|Vestibulum aliquet faucibus tortor, sed aliquet purus elementum vel. |,|In sit amet ante non turpis elementum porttitor. |,|TECHNOLOGY PLATFORMS, INSTRUMENTATION EMPLOYED |,| Vestibulum sed turpis id nulla eleifend fermentum. |,|Nunc sit amet elit eu neque tincidunt aliquet eu at risus. |,|Cras tempor ipsum justo, ut blandit lacus. |,|INDUSTRY PARTNERS (WITHIN THE PAST FIVE YEARS) |,| Pellentesque facilisis nisl in libero scelerisque mattis eu quis odio. |,|Etiam a justo vel sapien rhoncus interdum. |,|ANTICIPATED PARTICIPATION IN PROGRAMS, EITHER APPROVED OR UNDER DEVELOPMENT |,|(Please include anticipated percentages of your time.) |,| Proin vitae ligula quis enim vulputate sagittis vitae ut ante. |,|ADDITIONAL ROLES, DISTINCTIONS, ACADEMIC QUALIFICATIONS AND NOTES |,|e.g., First Aid Responder, Other languages spoken, Degrees, Charitable Campaign |,|Canvasser (GCWCC), OSH representative, Social Committee |,|Sed nec tellus nec massa accumsan faucibus non imperdiet nibh. |,,
[[2]]
CURRENT RESEARCH FOCUS,,* Lorem ipsum dolor sit amet, consectetur adipiscing elit.,* Donec at ipsum est, vel ullamcorper enim.,* In vel dui massa, eget egestas libero.,* Phasellus facilisis cursus nisi, gravida convallis velit ornare a.,MAIN AREAS OF EXPERTISE,* Vestibulum aliquet faucibus tortor, sed aliquet purus elementum vel.,* In sit amet ante non turpis elementum porttitor. ,TECHNOLOGY PLATFORMS, INSTRUMENTATION EMPLOYED,* Vestibulum sed turpis id nulla eleifend fermentum.,* Nunc sit amet elit eu neque tincidunt aliquet eu at risus.,* Cras tempor ipsum justo, ut blandit lacus.,INDUSTRY PARTNERS (WITHIN THE PAST FIVE YEARS),* Pellentesque facilisis nisl in libero scelerisque mattis eu quis odio.,* Etiam a justo vel sapien rhoncus interdum.,ANTICIPATED PARTICIPATION IN PROGRAMS, EITHER APPROVED OR UNDER DEVELOPMENT ,(Please include anticipated percentages of your time.),* Proin vitae ligula quis enim vulputate sagittis vitae ut ante.,ADDITIONAL ROLES, DISTINCTIONS, ACADEMIC QUALIFICATIONS AND NOTES,e.g., First Aid Responder, Other languages spoken, Degrees, Charitable Campaign Canvasser (GCWCC), OSH representative, Social Committee,* Sed nec tellus nec massa accumsan faucibus non imperdiet nibh.,,
Now the documents are ready for text mining, with the data from the upper table moved out of the document and into the document metadata.
Of course all of this depends on the documents being highly regular. If there are different numbers of lines in the first table in each doc, then the simple indexing method might fail (give it a try and see what happens) and something more robust will be needed.
UPDATE: A more robust method
Having read the question a little more carefully, and having picked up a bit more regex, here's a method that is more robust and doesn't depend on indexing specific lines of the documents. Instead, we use regular expression lookarounds to extract text between two known words, both to build the metadata and to split the document.
Here's how we make the User-defined local meta data (a method to replace the one above)
library(gdata) # for the trim function (base R's trimws() would also work)
txt <- paste0(as.character(exc[[1]]), collapse = ",")
# inspect the document to identify the words on either side of the string
# we want, so 'Name' and 'Title' are on either side of 'John Doe'
extract <- regmatches(txt, gregexpr("(?<=Name).*?(?=Title)", txt, perl=TRUE))
meta(exc[[1]], type = "corpus", tag = "Name1") <- trim(gsub("[[:punct:]]", "", extract))
extract <- regmatches(txt, gregexpr("(?<=Title).*?(?=Team)", txt, perl=TRUE))
meta(exc[[1]], type = "corpus", tag = "Title") <- trim(gsub("[[:punct:]]","", extract))
extract <- regmatches(txt, gregexpr("(?<=Members).*?(?=Supervised)", txt, perl=TRUE))
meta(exc[[1]], type = "corpus", tag = "TeamMembers") <- trim(gsub("[[:punct:]]","", extract))
extract <- regmatches(txt, gregexpr("(?<=your).*?(?=Supervisor)", txt, perl=TRUE))
meta(exc[[1]], type = "corpus", tag = "ManagerName") <- trim(gsub("[[:punct:]]","", extract))
# inspect
meta(exc[[1]], type = "corpus")
Available meta data pairs are:
Author :
DateTimeStamp: 2013-04-22 13:59:28
Description :
Heading :
ID : fake1.doc
Language : en_CA
Origin :
User-defined local meta data pairs are:
$Name1
[1] "John Doe"
$Title
[1] "Manager"
$TeamMembers
[1] "Elise Patton Jeffrey Barnabas"
$ManagerName
[1] "Selma Furtgenstein"
Similarly, we can extract the sections of your second table into separate vectors, and then you can make them into documents and corpora, or just work on them as vectors.
txt <- paste0(as.character(exc[[1]]), collapse = ",")
CURRENT_RESEARCH_FOCUS <- trim(gsub("[[:punct:]]","", regmatches(txt, gregexpr("(?<=CURRENT RESEARCH FOCUS).*?(?=MAIN AREAS OF EXPERTISE)", txt, perl=TRUE))))
[1] "Lorem ipsum dolor sit amet consectetur adipiscing elit Donec at ipsum est vel ullamcorper enim In vel dui massa eget egestas libero Phasellus facilisis cursus nisi gravida convallis velit ornare a"
MAIN_AREAS_OF_EXPERTISE <- trim(gsub("[[:punct:]]","", regmatches(txt, gregexpr("(?<=MAIN AREAS OF EXPERTISE).*?(?=TECHNOLOGY PLATFORMS, INSTRUMENTATION EMPLOYED)", txt, perl=TRUE))))
[1] "Vestibulum aliquet faucibus tortor sed aliquet purus elementum vel In sit amet ante non turpis elementum porttitor"
And so on for the remaining headings. I hope that's a bit closer to what you're after. If not, it might be best to break your task down into a set of smaller, more focused questions and ask them separately (or wait for one of the gurus to stop by this question!).

two column beamer/sweave slide with grid graphic

I'm trying to make a presentation on ggplot2 graphics using beamer + sweave. Some slides should have two columns; the left one for the code, the right one for the resulting graphic. Here's what I tried,
\documentclass[xcolor=dvipsnames]{beamer}
\usepackage{/Library/Frameworks/R.framework/Resources/share/texmf/tex/latex/Sweave}
\usepackage[english]{babel}
\usepackage{tikz}
\usepackage{amsmath,amssymb}% AMS standards
\usepackage{listings}
\usetheme{Madrid}
\usecolortheme{dove}
\usecolortheme{rose}
\SweaveOpts{pdf=TRUE, echo=FALSE, fig=FALSE, eps=FALSE, tidy=T, width=4, height=4}
\title{Reproducible data analysis with \texttt{ggplot2} \& \texttt{R}}
\subtitle{subtitle}
\author{Baptiste Augui\'e}
\date{\today}
\institute{Here}
\begin{document}
\begin{frame}[fragile]
\frametitle{Some text to show the space taken by the title}
\begin{columns}[t] \column{0.5\textwidth}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. Ut wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper suscipit lobortis nisl ut aliquip ex ea commodo consequat. Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi.
\column{0.5\textwidth}
\begin{figure}[!ht]
\centering
<<fig=TRUE>>=
library(grid) # grid.rect() and gpar() come from the grid package
grid.rect(gp=gpar(fill="slateblue"))
@
\end{figure}
\end{columns}
\end{frame}
\begin{frame}[fragile]
\frametitle{Some text to show the space taken by the title}
\begin{columns}[t]
\column{0.5\textwidth}
<<echo=TRUE,fig=FALSE>>=
library(ggplot2)
p <-
qplot(mpg, wt, data=mtcars, colour=cyl) +
theme_grey(base_family="Helvetica")
@
\column{0.5\textwidth}
\begin{figure}[!ht]
\centering
<<fig=TRUE>>=
print(p)
@
\end{figure}
\end{columns}
\end{frame}
\end{document}
And here are the two pages of output (screenshots omitted).
I have two issues with this output:
the echoed Sweave code ignores the columns environment and spans both columns
the column margins around either graphic are unnecessarily wide
Any ideas?
Thanks.
As for the first question, the easy way is to set keep.source=TRUE in \SweaveOpts. For fancier control, see the fancyvrb package and FAQ #9 of the Sweave manual.
The width of the figure can be set by \setkeys{Gin}{width=1.0\textwidth}
Here is a slight modification:
... snip ...
\SweaveOpts{pdf=TRUE, echo=FALSE, fig=FALSE, eps=FALSE, tidy=T, width=4, height=4, keep.source=TRUE}
\title{Reproducible data analysis with \texttt{ggplot2} \& \texttt{R}}
... snip ...
\begin{document}
\setkeys{Gin}{width=1.1\textwidth}
... snip...
<<echo=TRUE,fig=FALSE>>=
library(ggplot2)
p <-
qplot(mpg,
wt,
data=mtcars,
colour=cyl) +
theme_grey(base_family=
"Helvetica")
@
