tm Package error: Error defining Document Term Matrix - r

I am analyzing the Reuters-21578 corpus, all the Reuters news articles from 1987, using the "tm" package. After importing the XML files into an R data file, I clean the text (convert to plain text, convert to lower case, remove stop words, etc., as seen below), then I try to convert the corpus to a document term matrix but receive an error message:
Error in UseMethod("Content", x) :
no applicable method for 'Content' applied to an object of class "character"
All pre-processing steps work correctly up until the document term matrix step.
I created a non-random subset of the corpus (with 4000 documents) and the document term matrix command works fine on that.
My code is below. Thanks for the help.
##Import
file <- "reut-full.xml"
reuters <- Corpus(ReutersSource(file), readerControl = list(reader = readReut21578XML))
## Convert to Plain Text Documents
reuters <- tm_map(reuters, as.PlainTextDocument)
## Convert to Lower Case
reuters <- tm_map(reuters, tolower)
## Remove Stopwords
reuters <- tm_map(reuters, removeWords, stopwords("english"))
## Remove Punctuations
reuters <- tm_map(reuters, removePunctuation)
## Stemming
reuters <- tm_map(reuters, stemDocument)
## Remove Numbers
reuters <- tm_map(reuters, removeNumbers)
## Eliminating Extra White Spaces
reuters <- tm_map(reuters, stripWhitespace)
## create a document term matrix
dtm <- DocumentTermMatrix(reuters)
Error in UseMethod("Content", x) :
no applicable method for 'Content' applied to an object of class "character"
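A likely culprit (compare the answers to the related questions below): in newer versions of tm, base R functions such as tolower are not corpus transformations, so mapping them directly can leave bare character vectors inside the corpus, which later trips up DocumentTermMatrix with exactly this 'Content' error. A minimal sketch of the usual fix, assuming the same pipeline as above:
## wrap non-tm functions in content_transformer so documents keep their class
reuters <- tm_map(reuters, content_transformer(tolower))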

Related

R, tm-error of transformation drops documents

I want to create a network based on the weight of keywords from text. Then I got an error when running the code related to tm_map:
library(tm)
library(NLP)
library(openNLP)
text = c('.......')
corp <- Corpus(VectorSource(text))
corp <- tm_map(corp, stripWhitespace)
Warning message:
In tm_map.SimpleCorpus(corp, stripWhitespace) :
transformation drops documents
corp <- tm_map(corp, tolower)
Warning message:
In tm_map.SimpleCorpus(corp, tolower) : transformation drops documents
The code was working two months ago; now I am trying it on new data and it is not working anymore. Can anyone please show me where I went wrong? Thank you.
I even tried the command below, but it doesn't work either.
corp <- tm_map(corp, content_transformer(stripWhitespace))
The code should still be working. You get a warning, not an error. This warning only appears when you have a corpus based on a VectorSource in combination with Corpus instead of VCorpus.
The reason is that the underlying code checks whether the names of the corpus content match the length of the corpus content. When the text is read in as a plain vector there are no document names, so the warning pops up. It is only a warning; no documents have been dropped.
See the difference between the two examples:
library(tm)
text <- c("this is my text with some other text and some more")
# warning based on Corpus and VectorSource
text_corpus <- Corpus(VectorSource(text))
# warning appears when running the following line
tm_map(text_corpus, content_transformer(tolower))
<<SimpleCorpus>>
Metadata: corpus specific: 1, document level (indexed): 0
Content: documents: 1
Warning message:
In tm_map.SimpleCorpus(text_corpus, content_transformer(tolower)) :
transformation drops documents
# Using VCorpus
text_corpus <- VCorpus(VectorSource(text))
# warning doesn't appear
tm_map(text_corpus, content_transformer(tolower))
<<VCorpus>>
Metadata: corpus specific: 0, document level (indexed): 0
Content: documents: 1
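Despite the warning, you can verify that nothing was dropped. A quick check, reusing the text vector from above:
text_corpus <- Corpus(VectorSource(text))
lowered <- tm_map(text_corpus, content_transformer(tolower)) # warning appears again
length(lowered) # still 1 document
as.character(lowered[[1]]) # the lower-cased text is intact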

Error in creating TermDocumentMatrix using tm package in R

I am unable to create a term-document matrix using the tm package in R; it throws the following error when I try to create one out of a preprocessed corpus:
Error in UseMethod("TermDocumentMatrix", x) :
no applicable method for 'TermDocumentMatrix' applied to an object of class
"character"
Below is the script I am using, on R v3.4.1 with tm package v0.7-1.
data <- readLines("Data/en_US/en_US_sample.txt", n = 100)
data <- Corpus(VectorSource(data))
data <- tm_map(data, removePunctuation)
data <- tm_map(data, removeNumbers)
data <- tm_map(data, content_transformer(tolower))
data <- tm_map(data, removeWords, stopwords("en"))
data <- tm_map(data, stripWhitespace)
words <- TermDocumentMatrix("data")
I believe TermDocumentMatrix requires the corpus to be in some specific text document format, so I tried coercing my corpus to PlainTextDocument using tm_map, but it doesn't solve the problem. When I load my text data using Corpus on a VectorSource, the object created shows the class SimpleCorpus, which might be the problem, but I am not totally sure.
Any help would be much appreciated. Thanks!
You did everything right; just in your last line you accidentally passed the character string "data" (note the quotation marks) to the function TermDocumentMatrix() instead of the object data.
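For completeness, a sketch of the corrected last line, simply dropping the quotation marks:
words <- TermDocumentMatrix(data) # pass the corpus object, not the string "data"
inspect(words)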

How do I keep special characters when searching for term frequencies?

I have a document containing special characters such as !, #, $, % and more, along with text. The following code is used to obtain the most frequent terms list, but when it is run, the special characters are missing from that list: e.g. if "#StackOverFlow" is a word present 100 times in the document, I get it as "StackOverFlow", without the #, in the frequent terms list. Here is my code:
review_text <- paste(rome_1$text, collapse=" ")
#The special characters are present within review_text
review_source <- VectorSource(review_text)
corpus <- Corpus(review_source)
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
dtm <- DocumentTermMatrix(corpus)
dtm2 <- as.matrix(dtm)
frequency <- colSums(dtm2)
frequency <- sort(frequency, decreasing = TRUE)
head(frequency)
Where exactly have I gone wrong here?
As you can see in the DocumentTermMatrix documentation:
This is different for a SimpleCorpus. In this case all options are processed in a fixed order in one pass to improve performance. It always uses the Boost Tokenizer (via Rcpp) and takes no custom functions as option arguments.
It seems that SimpleCorpus objects (created by the Corpus function) use a pre-defined Boost tokenizer which automatically splits words and removes punctuation (including #).
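A quick way to confirm which corpus class you are working with:
class(Corpus(VectorSource("a #hashtag")))  # "SimpleCorpus" "Corpus"
class(VCorpus(VectorSource("a #hashtag"))) # "VCorpus" "Corpus"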
You could use VCorpus instead, and remove only the punctuation characters you want, e.g.:
library(tm)
review_text <-
"I love #StackOverflow. #Stackoverflow is great, but Stackoverflow exceptions are not!"
review_source <- VectorSource(review_text)
corpus <- VCorpus(review_source) # N.B. use VCorpus here !!
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
patternRemover <- content_transformer(function(x,patternToRemove) gsub(patternToRemove,'',x))
corpus <- tm_map(corpus, patternRemover, '\\!|\\.|\\,|\\;|\\?') # remove only !.,;?
dtm <- DocumentTermMatrix(corpus,control=list(tokenize='words'))
dtm2 <- as.matrix(dtm)
frequency <- colSums(dtm2)
frequency <- sort(frequency, decreasing = TRUE)
Result:
> frequency
#stackoverflow     exceptions          great           love  stackoverflow
             2              1              1              1              1

R tm document names missing

Using the R {tm} package, I create a corpus, per usual:
mycorpus <- Corpus(DirSource(folder,pattern="txt"))
Please note I am not using an encoding variable. summary(mycorpus) shows the document names listed. However, I then apply a series of tm_map transforms (content_transformer(tolower), content_transformer(removeWords) with stopwords("SMART"), stripWhitespace), ending with
mycorpus <- tm_map(mycorpus, PlainTextDocument)
mydtm <- DocumentTermMatrix(mycorpus, control = list(...))
and I get the following when I run inspect(mydtm[1:10, intersect(colnames(mydtm), 'toyota')]) to pull out my variable of choice:
              Terms
Docs           toyota
  character(0)      0
  character(0)      0
  character(0)      0
  character(0)      0
  character(0)      1
  character(0)      0
  character(0)      0
  character(0)      0
  character(0)      1
  character(0)      0
The file names (doc ids) have disappeared. Any idea what could be causing this? More importantly, how do I reinstate the document names? Many thanks.
The code below will work for a single file. You could likely use something like list.files to read all files in the directory.
First, I would wrap the cleaning functions in a custom function. Note that the order matters and that you have to use content_transformer if the function is not from tm.
clean.corpus <- function(corpus) {
  corpus <- tm_map(corpus, removePunctuation)
  corpus <- tm_map(corpus, stripWhitespace)
  corpus <- tm_map(corpus, removeNumbers)
  corpus <- tm_map(corpus, content_transformer(tolower))
  corpus <- tm_map(corpus, removeWords, custom.stopwords)
  return(corpus)
}
Then concatenate the English stopwords with your custom words. This vector is passed in the last step of the custom function above.
custom.stopwords <- c(stopwords('english'), 'lol', 'smh')
doc <- read.csv('coffee.csv', header=TRUE)
The CSV is a data frame with one column containing the text of each tweet and another column with an ID for each tweet.
The CSV file is now in memory, so the next step is to read it in a tabular fashion with a specific mapping when making the corpus. Here the content is in a column called "text" and the unique ID is in a column called "id".
custom.reader <- readTabular(mapping=list(content="text", id="id"))
corpus <- VCorpus(DataframeSource(doc), readerControl=list(reader=custom.reader))
corpus <- clean.corpus(corpus)
The corpus creation uses the readerControl, and once that is done you can apply the pre-processing steps. Without the reader control the package assigns character(0) as the document name.
The corpus content of document 1 can be accessed with:
corpus[[1]][1]
You can review the corpus metadata for the first document with this code:
corpus[[1]][2]
So I think you need to use readTabular and readerControl in your corpus construction, no matter the source.
I was having the same problem and realized that it was due to tolower. Unlike removeNumbers, removePunctuation, removeWords, stemDocument, and stripWhitespace, tolower is not a transformation defined in the tm package. To get a list of transformations defined in the tm package that can be applied directly to a corpus, type:
getTransformations()
[1] "removeNumbers"     "removePunctuation" "removeWords"       "stemDocument"      "stripWhitespace"
Thus, in order to use tolower you must first wrap it in a content transformer so that it handles corpus objects properly.
docs <- tm_map(docs,content_transformer(tolower))
The above line of code should stop the files from being renamed to character(0).
The same trick can be applied to make any R function work with corpora. For example, for gsub the following syntax applies:
docs <- tm_map(docs, content_transformer(gsub), pattern = "internt", replacement = "internet")
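If you want to confirm that the fix worked, a quick check on the resulting matrix (assuming docs is your cleaned corpus):
dtm <- DocumentTermMatrix(docs)
head(Docs(dtm)) # should now show the document names instead of character(0)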

Convert TDM CSV file into Corpus Format in Text Mining

I am using the tm package for text mining in R. I performed the following steps:
Import the data into R and create the text corpus
dataorg <- read.csv("Report_2014.csv")
corpus <- Corpus(VectorSource(data$Resolution))
Clean the data
mystopwords <- c("through","might","much","had","got","with","these")
cleanset <- tm_map(corpus, removeWords, mystopwords)
cleanset <- tm_map(cleanset, tolower)
cleanset <- tm_map(cleanset, removePunctuation)
cleanset <- tm_map(cleanset, removeNumbers)
Create the term document matrix
tdm <- TermDocumentMatrix(cleanset)
At this point I export the TDM data to CSV in order to perform some manual cleaning of the terms:
write.csv(inspect(tdm), file="tdmfile.csv")
Now the problem is that I want to bring the cleaned TDM CSV file back into R and perform further text analysis such as clustering and frequency analysis. But I am not able to convert the CSV file back into a corpus format acceptable to the tm package algorithms, so I am not able to proceed with my text analysis.
It would be really helpful if somebody could help me convert the cleaned CSV file into a corpus format accepted by the text analysis functions of the tm package.
First read the CSV back into R:
df <- read.csv("tdmfile.csv")
Then convert the vector (referenced by the column name) into a corpus:
corpus <- Corpus(VectorSource(df$column))
If the above doesn't work, try converting the text column to UTF-8 before building the corpus:
df$column <- iconv(df$column, to = "utf-8-mac")
You read the CSV into dataorg, but I didn't see you reference it anywhere in your code; your Corpus call uses data$Resolution, and data is never defined.
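In other words, the corpus line should presumably reference the object that was actually created:
corpus <- Corpus(VectorSource(dataorg$Resolution))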
If you want to convert your CSV file into corpus format, just follow this link:
R text mining documents from CSV file (one row per doc)
