I am trying to create a term-document matrix in R from a corpus of files. But on running the code I get this error, followed by two warnings:
Error in simple_triplet_matrix(i = i, j = j, v = as.numeric(v), nrow = length(allTerms), :
'i, j' invalid
Calls: DocumentTermMatrix ... TermDocumentMatrix.VCorpus -> simple_triplet_matrix -> .Call
In addition: Warning messages:
1: In mclapply(unname(content(x)), termFreq, control) :
scheduled core 1 encountered error in user code, all values of the job will be affected
2: In simple_triplet_matrix(i = i, j = j, v = as.numeric(v), nrow = length(allTerms), :
NAs introduced by coercion
My code is given below:
library(tm)
library(RWeka)
library(tmcn.word2vec)
#Reading data
data <- read.csv("Train.csv", header=T)
#text <- data$EventDescription
#Pre-processing
corpus <- Corpus(VectorSource(data$EventDescription))
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, tolower)
corpus <- tm_map(corpus, PlainTextDocument)
#dataframe <- data.frame(text=unlist(sapply(corpus,'[',"content")))
#Reading dictionary file
dict <- scan("dictionary.txt", what='character',sep='\n')
#Bigram Tokenization
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 4))
tdm_doc <- DocumentTermMatrix(corpus,control=list(stopwords = dict, tokenize=BigramTokenizer))
tdm_dic <- DocumentTermMatrix(corpus,control=list(tokenize=BigramTokenizer, dictionary=dict))
As suggested in other answers on SO, I have tried installing the SnowballC package and the other listed ideas, but I still get the same error. Can anyone help me in this regard? Thanks in advance.
I had the same problem when creating my DocumentTermMatrix, and I solved it by removing the following command:
corpus <- tm_map(corpus, PlainTextDocument)
I had a similar error when cleaning a corpus. Adding the following line after the offending line of code resolved it. Some of the tm_map functions do not return a corpus...
corpus <- Corpus(VectorSource(corpus))
For me the problem arose after stem completion. I would suggest trying to build a TDM after each tm_map call; that will tell you which cleaning step is causing the problem.
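A minimal sketch of that check, reusing the variable names from the question above:
corpus <- Corpus(VectorSource(data$EventDescription))
corpus <- tm_map(corpus, stripWhitespace)
DocumentTermMatrix(corpus)   # still works? move on to the next cleaning step
corpus <- tm_map(corpus, removePunctuation)
DocumentTermMatrix(corpus)   # repeat after every tm_map call until one of them errors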
Best of luck!
I have a dataframe with 60,000 rows/phrases which I would like to use as stopwords and remove from the text.
I use the tm package, and I run this line after reading the csv file with the list of stopwords:
corpus <- tm_map(corpus, removeWords, df$mylistofstopwords)
but I receive this error:
In addition: Warning message:
In gsub(sprintf("(*UCP)\\b(%s)\\b", paste(sort(words, decreasing = TRUE), :
PCRE pattern compilation error
'regular expression is too large'
at ''
Is the problem that the list is too big? Is there anything I could do to fix it?
You could probably resolve your issue by splitting the stopword list into multiple parts, something like the following:
chunk <- 1000                        # how many stopwords to remove per pass
i <- 0
n <- length(df$mylistofstopwords)
while (i != n) {
  i2 <- min(i + chunk, n)            # end index of the current chunk
  corpus <- tm_map(corpus, removeWords, df$mylistofstopwords[(i+1):i2])
  i <- i2
}
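An equivalent sketch that builds the chunks up front with split() (same idea, just a different loop):
groups <- split(df$mylistofstopwords, ceiling(seq_along(df$mylistofstopwords) / chunk))
for (g in groups) corpus <- tm_map(corpus, removeWords, g)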
Or, you could just use a package that can handle long stopword lists. corpus is one such package. quanteda is another. Here's how to get a document-by-term matrix with corpus:
library(corpus)
x <- term_matrix(corpus, drop = df$mylistofstopwords)
Here, the input argument corpus can be a tm corpus.
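If you go the quanteda route instead, a rough sketch (assuming the raw text sits in a character vector called mytext; that name is illustrative, not from the question) would be:
library(quanteda)
toks <- tokens(mytext)                            # tokenize the raw text
toks <- tokens_remove(toks, df$mylistofstopwords) # drop the long stopword list
dtm  <- dfm(toks)                                 # document-feature matrix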
I am a beginner at R and am trying to create a word cloud. My code and the error message that I am unable to fix are below:
I imported a csv file consisting of tweets, made a list from the column in the csv file that contained the text of the tweets, then tried the code below, which gave me the error message that follows it:
myCorpus <- Corpus(VectorSource(tweets))
myCorpus <- tm_map(myCorpus, tolower)
myCorpus <- tm_map(myCorpus, removePunctuation)
myCorpus <- tm_map(myCorpus, removeNumbers)
myCorpus <- tm_map(myCorpus, stripWhitespace)
myCorpus <- tm_map(myCorpus, removeWords, stopwords('english'))
Error in strwidth(words[i], cex = size[i], ...) : invalid 'cex' value
In addition: Warning messages:
1: In max(freq) : no non-missing arguments to max; returning -Inf
2: In max(freq) : no non-missing arguments to max; returning -Inf
This post states that the error message is due to having words that are fewer than 3 characters in length: word cloud - Error in strwidth(words[i], cex = size[i], ...) : invalid 'cex' value
My code is different from that example, so I tried to keep only entries longer than 3 characters with the line of code below to avoid that error. However, this returns the new error message below:
myCorpus <- which(length(myCorpus)>3)
Error in UseMethod("TermDocumentMatrix", x) :
no applicable method for 'TermDocumentMatrix' applied to an object of class "c('integer', 'numeric')"
I would greatly appreciate guidance on how to fix this. Thank you very much for the help.
length() doesn't return the number of characters in a string, but the number of elements in the vector.
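For example:
length(c("hello", "world", "hi"))   # 3 -- three elements in the vector
nchar(c("hello", "world", "hi"))    # 5 5 2 -- characters in each element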
Based on the documentation, one way to filter out words of 3 or fewer characters is the following:
for (i in 1:length(myCorpus)) {
  myCorpus[[i]] <- removeWords(myCorpus[[i]], ".{3}?")
}
I couldn't resolve the error above but the similar code below successfully creates a word cloud without the error message:
file <- read.csv("file", stringsAsFactors = FALSE)
library(tm)
library(SnowballC)
library(wordcloud)
file <- Corpus(VectorSource(file))
file <- tm_map(file, PlainTextDocument)
file <- tm_map(file, removePunctuation)
file <- tm_map(file, removeWords, stopwords('english'))
file <- tm_map(file, stemDocument)
wordcloud(file, max.words = 100, random.order = FALSE)
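If passing the corpus straight to wordcloud() ever gives trouble, one common sketch is to compute the term frequencies yourself first and pass words and counts explicitly:
tdm <- TermDocumentMatrix(file)
freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)  # total count per term
wordcloud(names(freq), freq, max.words = 100, random.order = FALSE)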
I am running the following code and receiving this error:
Error in .jcall("RWekaInterfaces", "[S", "tokenize", .jcast(tokenizer,
: java.lang.NullPointerException
setwd("C:\\Users\\jbarr\\Desktop\\test)
library (tm); library (wordcloud);library (RWeka); library (tau);library(xlsx);
Comment <- read.csv("testfile.csv",stringsAsFactors=FALSE)
str(Comment)
review_source <- VectorSource(Comment)
corpus <- Corpus(review_source)
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, removeWords,stopwords(kind = "english"))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removeWords, c("member", "advise", "inform", "informed", "caller", "call","provided", "advised"))
dtm <- DocumentTermMatrix(corpus)
dtm2 <- as.matrix(dtm)
wordfreq <- colSums(dtm2)
wordfreq <- sort(wordfreq, decreasing=TRUE)
head(wordfreq, n=100)
wfreq <- head(wordfreq, 500)
set.seed(142)
words <- names(wfreq)
dark2 <- brewer.pal(6, "Dark2")
wordcloud(words[1:100], wordfreq[1:100], rot.per=0.35, scale=c(2.7, .4), colors=dark2, random.order=FALSE)
write.xlsx(wfreq, "C:\\Users\\jbarr\\Desktop\\test")
The interesting thing is, I have run this code on multiple files, and only specific ones produce the error.
Sanmeet is right - it's a problem with NAs in your data frame.
Just prior to your line review_source <- VectorSource(Comment), insert the line below:
Comment[which(is.na(Comment))] <- "NULLVALUEENTERED"
This will change all of your NA values to the phrase NULLVALUEENTERED (feel free to change that). No more NAs, and the code should run fine.
You are getting the error in the tokenizer due to NAs in your string vector Comment.
Comment <- read.csv("testfile.csv",stringsAsFactors=FALSE)
str(Comment)
length(Comment)
Comment = Comment[complete.cases(Comment)]
length(Comment)
Or you can also use is.na() as below:
Comment = Comment[!is.na(Comment)]
Now apply the preprocessing steps, create the corpus, etc.
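For instance, a sketch picking up the question's own code after the NA filtering:
review_source <- VectorSource(Comment)
corpus <- Corpus(review_source)
corpus <- tm_map(corpus, removePunctuation)
# ...and the remaining cleaning steps, DocumentTermMatrix, etc. as before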
Hope this helps.
A suggestion: I got this error when reading an Excel (.xlsx) file using:
df2 <- read.xlsx2("foobar.xlsx", sheetName = "Sheet1", startRow = 1, endRow = 0)
It appears that the value for endRow should be NULL or a valid row number. But
df2 <- read.xlsx2("foobar.xlsx", sheetName = "Sheet1")
works fine. So you might want to check your argument values and argument-to-parameter alignment.
Seems like there are NAs in your data frame. Run is.na(), remove those rows, and try running the code again. It should work.
I have been working through numerous online examples of the {tm} package in R, attempting to create a TermDocumentMatrix. Creating and cleaning a corpus has been pretty straightforward, but I consistently encounter an error when I attempt to create a matrix. The error is:
Error in UseMethod("meta", x) :
no applicable method for 'meta' applied to an object of class "character"
In addition: Warning message:
In mclapply(unname(content(x)), termFreq, control) :
all scheduled cores encountered errors in user code
For example, here is code from Jon Starkweather's text mining example. Apologies in advance for such long code, but this does produce a reproducible example. Please note that the error occurs at the end, when TermDocumentMatrix() is called.
#Read in data
policy.HTML.page <- readLines("http://policy.unt.edu/policy/3-5")
#Obtain text and remove mark-up
policy.HTML.page[186:202]
id.1 <- 3 + which(policy.HTML.page == " TOTAL UNIVERSITY </div>")
id.2 <- id.1 + 5
text.data <- policy.HTML.page[id.1:id.2]
td.1 <- gsub(pattern = "<p>", replacement = "", x = text.data,
ignore.case = TRUE, perl = FALSE, fixed = FALSE, useBytes = FALSE)
td.2 <- gsub(pattern = "</p>", replacement = "", x = td.1, ignore.case = TRUE,
perl = FALSE, fixed = FALSE, useBytes = FALSE)
text.d <- td.2; rm(text.data, td.1, td.2)
#Create corpus and clean
library(tm)
library(SnowballC)
txt <- VectorSource(text.d); rm(text.d)
txt.corpus <- Corpus(txt)
txt.corpus <- tm_map(txt.corpus, tolower)
txt.corpus <- tm_map(txt.corpus, removeNumbers)
txt.corpus <- tm_map(txt.corpus, removePunctuation)
txt.corpus <- tm_map(txt.corpus, removeWords, stopwords("english"))
txt.corpus <- tm_map(txt.corpus, stripWhitespace); #inspect(docs[1])
txt.corpus <- tm_map(txt.corpus, stemDocument)
# NOTE ERROR WHEN CREATING TDM
tdm <- TermDocumentMatrix(txt.corpus)
The link provided by jazzurro points to the solution. The following line of code
txt.corpus <- tm_map(txt.corpus, tolower)
must be changed to
txt.corpus <- tm_map(txt.corpus, content_transformer(tolower))
There are 2 reasons for this issue in tm v0.6.
1. If you are doing term-level transformations like tolower, tm_map returns a character vector instead of a PlainTextDocument.
Solution: call tolower through content_transformer, or call tm_map(corpus, PlainTextDocument) immediately after tolower.
2. If the SnowballC package is not installed and you are trying to stem the documents, this can also occur.
Solution: install.packages('SnowballC')
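Written out as a sketch, the second route and the SnowballC fix look like this:
txt.corpus <- tm_map(txt.corpus, tolower)
txt.corpus <- tm_map(txt.corpus, PlainTextDocument)   # restore the document class right after the bare tolower
install.packages('SnowballC')                         # only needed if stemming is the step that fails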
There is no need to apply content_transformer.
Create the corpus in this way:
trainData_corpus <- Corpus(VectorSource(trainData$Comments))
Try it.
I'm getting started with the tm package in R, so please bear with me and apologies for the big ol' wall of text. I have created a fairly large corpus of Socialist/Communist propaganda and would like to extract newly coined political terms (multiple words, e.g. "struggle-criticism-transformation movement").
This is a two-step question, one regarding my code so far and one regarding how I should go on.
Step 1: To do this, I wanted to identify some common ngrams first. But I get stuck very early on. Here is what I've been doing:
library(tm)
library(RWeka)
a <-Corpus(DirSource("/mycorpora/1965"), readerControl = list(language="lat")) # that dir is full of txt files
summary(a)
a <- tm_map(a, removeNumbers)
a <- tm_map(a, removePunctuation)
a <- tm_map(a , stripWhitespace)
a <- tm_map(a, tolower)
a <- tm_map(a, removeWords, stopwords("english"))
a <- tm_map(a, stemDocument, language = "english")
# everything works fine so far, so I start playing around with what I have
adtm <-DocumentTermMatrix(a)
adtm <- removeSparseTerms(adtm, 0.75)
inspect(adtm)
findFreqTerms(adtm, lowfreq=10) # find terms with a frequency higher than 10
findAssocs(adtm, "usa",.5) # just looking for some associations
findAssocs(adtm, "china",.5)
# ... and so on, and so forth, all of this works fine
The corpus I load into R works fine with most functions I throw at it. I haven't had any problems creating TDMs from my corpus, finding frequent words, associations, creating word clouds and so on. But when I try to identify ngrams using the approach outlined in the tm FAQ, I'm apparently making some mistake with the TDM constructor:
# Trigram
TrigramTokenizer <- function(x) NGramTokenizer(x,
Weka_control(min = 3, max = 3))
tdm <- TermDocumentMatrix(a, control = list(tokenize = TrigramTokenizer))
inspect(tdm)
I get this error message:
Error in rep(seq_along(x), sapply(tflist, length)) :
invalid 'times' argument
In addition: Warning message:
In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'
Any ideas? Is "a" not the right class/object? I'm confused. I assume there's a fundamental mistake here, but I'm not seeing it. :(
Step 2: Then I would like to identify ngrams that are significantly overrepresented when I compare the corpus against other corpora. For example, I could compare my corpus against a large standard English corpus. Or I could create subsets that I can compare against each other (e.g. Soviet vs. Chinese Communist terminology). Do you have any suggestions for how I should go about doing this? Any scripts/functions I should look into? Just some ideas or pointers would be great.
Thanks for your patience!
I could not reproduce your problem. Are you using the latest versions of R, tm, RWeka, etc.?
require(tm)
a <- Corpus(DirSource("C:\\Downloads\\Only1965\\Only1965"))
summary(a)
a <- tm_map(a, removeNumbers)
a <- tm_map(a, removePunctuation)
a <- tm_map(a , stripWhitespace)
a <- tm_map(a, tolower)
a <- tm_map(a, removeWords, stopwords("english"))
# a <- tm_map(a, stemDocument, language = "english")
# I also got it to work with stemming, but it takes so long...
adtm <-DocumentTermMatrix(a)
adtm <- removeSparseTerms(adtm, 0.75)
inspect(adtm)
findFreqTerms(adtm, lowfreq=10) # find terms with a frequency higher than 10
findAssocs(adtm, "usa",.5) # just looking for some associations
findAssocs(adtm, "china",.5)
# Trigrams
require(RWeka)
TrigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 3, max = 3))
tdm <- TermDocumentMatrix(a, control = list(tokenize = TrigramTokenizer))
tdm <- removeSparseTerms(tdm, 0.75)
inspect(tdm[1:5,1:5])
And here's what I get
A term-document matrix (5 terms, 5 documents)
Non-/sparse entries: 11/14
Sparsity : 56%
Maximal term length: 28
Weighting : term frequency (tf)
Docs
Terms PR1965-01.txt PR1965-02.txt PR1965-03.txt
†chinese press 0 0 0
†renmin ribao 0 1 1
— renmin ribao 2 5 2
“ chinese people 0 0 0
“renmin ribaoâ€\u009d editorial 0 1 0
etc.
Regarding your Step 2, here are some pointers to useful starting points:
http://quantifyingmemory.blogspot.com/2013/02/mapping-significant-textual-differences.html
http://tedunderwood.com/2012/08/14/where-to-start-with-text-mining/ and here's his code https://dl.dropboxusercontent.com/u/4713959/Neuchatel/NassrProgram.R
Regarding Step 1, Brian.keng gives a one-liner workaround here https://stackoverflow.com/a/20251039/3107920 that solves this issue on Mac OS X; it seems to be related to parallelisation rather than (the minor nightmare that is) Java setup on the Mac.
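As I understand the linked workaround, it boils down to forcing tm to run the tokenizer on a single core, something along these lines (a sketch, not verified on your setup):
options(mc.cores = 1)   # keep mclapply from farming the tokenizer out to worker processes
tdm <- TermDocumentMatrix(a, control = list(tokenize = TrigramTokenizer))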
You may want to access the functions explicitly, like this:
BigramTokenizer <- function(x) {
RWeka::NGramTokenizer(x, RWeka::Weka_control(min = 2, max = 3))
}
myTdmBi.d <- TermDocumentMatrix(
myCorpus.d,
control = list(tokenize = BigramTokenizer, weighting = weightTfIdf)
)
Also, some other things that came up along the way:
myCorpus.d <- tm_map(myCorpus.d, tolower) # This does not work anymore
Try this instead
myCorpus.d <- tm_map(myCorpus.d, content_transformer(tolower)) # Make lowercase
In the RTextTools package, the ngramLength argument also throws an error:
create_matrix(as.vector(C$V2), ngramLength=3)
Further to Ben's answer - I couldn't reproduce this either, but in the past I've had trouble with the plyr package and conflicting dependencies. In my case there was a conflict between Hmisc and ddply. You could try adding this line just prior to the offending line of code:
tryCatch(detach("package:Hmisc"), error = function(e) NULL)
Apologies if this is completely tangential to your problem!