Nonsense word associations in text mining in R

Hi, I ran this text analysis for word associations. However, the word associations do not make any sense. For example, I was interested in the association between "women" and other words, but the output returns nonsensical associations such as "bagthey". Does anyone know where the problem is? I attached my code below. I tried both running and not running the "Eliminate extra white spaces" line.
Data could be downloaded here: https://drive.google.com/file/d/1zaCrraYYNTXsrbfx0bG53pjxo9AMZu5M/view?usp=sharing
library(tm)         # Corpus(), tm_map(), TermDocumentMatrix(), findAssocs()
library(qdapTools)  # list_vect2df()
library(ggplot2)
library(ggthemes)   # theme_gdocs()
company <- read.csv("C:/Data.csv")
#### Set up data for analysis ####
# Create corpus: to use the tm package we first transform the dataset into a corpus
corpus_review=Corpus(VectorSource(company$review))
# Convert all text to lowercase
corpus_review=tm_map(corpus_review, tolower)
# Stem words (so that words such as "work" and "working" are not counted separately)
corpus_review=tm_map(corpus_review, stemDocument)
# Remove punctuations
corpus_review <- tm_map(corpus_review, removePunctuation, preserve_intra_word_contractions = TRUE, preserve_intra_word_dashes = TRUE, ucp = TRUE)
# Convert the text to lower case
corpus_review <- tm_map(corpus_review, content_transformer(tolower))
# Remove numbers
corpus_review <- tm_map(corpus_review, removeNumbers)
# Remove english common stopwords
corpus_review <- tm_map(corpus_review, removeWords, stopwords("english"))
# Remove own words
corpus_review=tm_map(corpus_review, removeWords,c("also", "get","like", "made", "can", "im", "just","a", "I"))
# Eliminate extra white spaces
#corpus_review <- tm_map(corpus_review, stripWhitespace)
# Text stemming - which reduces words to their root form
corpus_review <- tm_map(corpus_review, stemDocument)
# Build a term-document matrix
TextDoc_dtm <- TermDocumentMatrix(corpus_review)
dtm_m <- as.matrix(TextDoc_dtm)
# Sort by decreasing value of frequency
dtm_v <- sort(rowSums(dtm_m),decreasing=TRUE)
dtm_d <- data.frame(word = names(dtm_v),freq=dtm_v)
head(dtm_d, 5)
#### Word Associations####
associations <- findAssocs(TextDoc_dtm, "women", 0.1)
associations_df <- list_vect2df(associations)[, 2:3]
ggplot(associations_df, aes(y = associations_df[, 1])) +
  geom_point(aes(x = associations_df[, 2]),
             data = associations_df, size = 3) +
  ggtitle("Word Associations to 'the key word'") +
  theme_gdocs()
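A likely cause worth checking first (an assumption on my part, not verified against the linked data): fused terms such as "bagthey" usually appear when removePunctuation deletes a period, slash, or dash that was the only thing separating two words, e.g. "bag.They" becomes "bagthey". Replacing punctuation with a space before stripping it keeps the words apart; a minimal sketch built on the cleaning steps above:
# Sketch: turn punctuation into spaces instead of deleting it outright,
# so "bag.They" becomes "bag they" rather than "bagthey".
toSpace <- content_transformer(function(x, pattern) gsub(pattern, " ", x))
corpus_review <- tm_map(corpus_review, toSpace, "[[:punct:]]")
corpus_review <- tm_map(corpus_review, stripWhitespace)
# rebuild the term-document matrix and re-run findAssocs() on the cleaned corpus
TextDoc_dtm <- TermDocumentMatrix(corpus_review)
associations <- findAssocs(TextDoc_dtm, "women", 0.1)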

Related

R tm package dictionary matching leads to a higher frequency than the actual number of words in the text

I have been using the code below to load text as a corpus and the tm package to clean it. As a next step I load a dictionary and clean it as well. Then I match the words from the text against the dictionary to calculate a score. However, the matching results in more matches than there are actual words in the text (e.g., the competence score is 1500, but the actual number of words in the text is only 1000).
I think it is related to the stemming of the text and the dictionary, as there are fewer matches when no stemming is performed.
Do you have any ideas why this is happening?
Thank you very much.
R Code
Step 1 Storing data as corpus
library(tm)
library(here)
library(readxl)
file.path <- file.path(here("Generated Files", "Data Preparation"))
corpus <- Corpus(DirSource(file.path))
Step 2 Cleaning data
#Removing special characters
toSpace <- content_transformer(function(x, pattern) gsub(pattern, " ", x))
corpus <- tm_map(corpus, toSpace, "/")
corpus <- tm_map(corpus, toSpace, "#")
corpus <- tm_map(corpus, toSpace, "\\|")
#Convert the text to lower case
corpus <- tm_map(corpus, content_transformer(tolower))
#Remove numbers
corpus <- tm_map(corpus, removeNumbers)
#Remove english common stopwords
corpus <- tm_map(corpus, removeWords, stopwords("english"))
#Remove your own stop word
# Specify your stopwords as a character vector
corpus <- tm_map(corpus, removeWords, c("view", "pdf"))
#Remove punctuations
corpus <- tm_map(corpus, removePunctuation)
#Eliminate extra white spaces
corpus <- tm_map(corpus, stripWhitespace)
#Text stemming
corpus <- tm_map(corpus, stemDocument)
#Unique words
corpus <- tm_map(corpus, unique)
Step 3 DTM
dtm <- DocumentTermMatrix(corpus)
Step 4 Load Dictionaries
dic.competence <- read_excel(here("Raw Data", "6. Dictionaries", "Brand.xlsx"))
dic.competence <- tolower(dic.competence$COMPETENCE)
dic.competence <- stemDocument(dic.competence)
dic.competence <- unique(dic.competence)
Step 5 Count frequencies
corpus.terms = colnames(dtm)
competence = match(corpus.terms, dic.competence, nomatch=0)
Step 6 Calculate scores
competence.score = sum(competence) / rowSums(as.matrix(dtm))
competence.score.df = data.frame(scores = competence.score)
What does competence return when you run that line? I'm not sure how your dictionary is set up, so I can't say for certain what's happening there. I brought in my own random corpus text as the primary text and brought in a separate corpus as the dictionary and your code worked great. The row names of competence.score.df were the names of the different txt files in my corpus and the scores were all in a 0-1 range.
# this is my 'dictionary' of terms:
tdm <- TermDocumentMatrix(Corpus(DirSource("./corpus/corpus3")),
                          control = list(removeNumbers = TRUE,
                                         stopwords = TRUE,
                                         stemming = TRUE,
                                         removePunctuation = TRUE))
# then I used your programming and it worked as I think you were expecting
# notice what I used here for the dictionary
(competence = match(colnames(dtm),
                    Terms(tdm)[1:10], # I only used the first 10 in my test of your code
                    nomatch = 0))
(competence.score = sum(competence) / rowSums(as.matrix(dtm)))
(competence.score.df = data.frame(scores = competence.score))
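One detail that could explain scores larger than the word count (an assumption, since I can't see your dictionary): match() returns the position of each term in dic.competence rather than a 1, so sum(competence) adds up dictionary indices, not match counts. Counting hits instead, weighted by how often each term occurs, keeps the score bounded by the document length; a rough sketch using the objects built above:
# Sketch: count occurrences of dictionary terms instead of summing match() positions.
dtm_m <- as.matrix(dtm)                                 # documents x terms
hits  <- as.numeric(colnames(dtm_m) %in% dic.competence)
competence.count <- dtm_m %*% hits                      # dictionary hits per document
competence.score <- competence.count / rowSums(dtm_m)   # share of words that are dictionary words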

Remove languages other than English from corpus or data frame in R

I am currently looking to perform some text mining on 25,000 YouTube comments, which I gathered using the tuber package. I am very new to coding, and with all the different information out there this can be a bit overwhelming at times. So I have already cleaned the corpus that I created:
# Build a corpus, and specify the source to be character vectors
corpus <- Corpus(VectorSource(comments_final$textOriginal))
# Convert to lower case
corpus <- tm_map(corpus, content_transformer(tolower))
# Remove URLs
removeURL <- function(x) gsub("http[^[:space:]]*", "", x)
corpus <- tm_map(corpus, content_transformer(removeURL))
# Remove anything other than English letters or space
removeNumPunct <- function(x) gsub("[^[:alpha:][:space:]]*", "", x)
corpus <- tm_map(corpus, content_transformer(removeNumPunct))
# Add extra stopwords
myStopwords <- c(stopwords('english'), "im", "just", "one", "youre",
                 "hes", "shes", "its", "were", "theyre", "ive", "youve", "weve", "theyve", "id")
# Remove stopwords from corpus
corpus <- tm_map(corpus, removeWords, myStopwords)
# Remove extra whitespace
corpus <- tm_map(corpus, stripWhitespace)
# Remove other languages or more specifically anything with a non "a-z""0-9" character
corpus <- tm_map(corpus, content_transformer(function(s){
  gsub(pattern = '[^a-zA-Z0-9\\s]+',
       x = s,
       replacement = " ",
       ignore.case = TRUE,
       perl = TRUE)}))
# Replace word elongations using the textclean package by Tyler Rinker.
corpus <- tm_map(corpus, replace_word_elongation)
# Creating data frame from corpus
corpus_asdataframe <- data.frame(text = sapply(corpus, as.character), stringsAsFactors = FALSE)
# Due to pre-processing some rows are empty. Therefore, the empty rows should be removed.
# Remove empty rows and NAs from the data frame
corpus_asdataframe <- corpus_asdataframe[!apply(is.na(corpus_asdataframe) | corpus_asdataframe == "", 1, all), ]
corpus_asdataframe <- as.data.frame(corpus_asdataframe)
# Create corpus of clean data frame
corpus <- Corpus(VectorSource(corpus_asdataframe$corpus_asdataframe))
So now the issue is that there are a lot of Spanish and German comments in my corpus, which I would like to exclude. I thought that maybe it is possible to download an English dictionary and use an inner join to detect English words and remove all other languages. However, I am very new to coding (I am studying Business Administration and never had to do anything with computer science), so my skills are not sufficient to apply my idea to my corpus (or data frame). I really hope to find a little help here. That would be very much appreciated! Thank you and best regards from Germany!
dftest <- data.frame(
  id = 1:3,
  text = c(
    "Holla this is a spanish word",
    "English online here",
    "Bonjour, comment ça va?"
  )
)
library("cld3")
subset(dftest, detect_language(dftest$text) == "en")
## id text
## 1 1 Holla this is a spanish word
## 2 2 English online here
CREDIT: Ken Benoit at: Find in a dfm non-english tokens and remove them
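Applying the same idea to the YouTube comments above (a sketch, assuming comments_final and its textOriginal column are as described in the question), you can filter out non-English comments before building the corpus:
library(cld3)  # language detection
library(tm)
# keep only comments classified as English; detect_language() returns NA when it is unsure,
# and subset() drops those rows along with the non-English ones
comments_en <- subset(comments_final, detect_language(textOriginal) == "en")
corpus <- Corpus(VectorSource(comments_en$textOriginal))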

Issue with stemCompletion of Corpus for text mining in R (tm package)

I have a problem with the word stemming completion of my created corpus using the tm package.
Here are the most important lines of my code:
# Build a corpus, and specify the source to be character vectors
corpus <- Corpus(VectorSource(comments_final$textOriginal))
corpus
# Convert to lower case
corpus <- tm_map(corpus, content_transformer(tolower))
# Remove URLs
removeURL <- function(x) gsub("http[^[:space:]]*", "", x)
corpus <- tm_map(corpus, content_transformer(removeURL))
# Remove anything other than English letters or space
removeNumPunct <- function(x) gsub("[^[:alpha:][:space:]]*", "", x)
corpus <- tm_map(corpus, content_transformer(removeNumPunct))
# Remove stopwords
myStopwords <- c(setdiff(stopwords('english'), c("r", "big")),
                 "use", "see", "used", "via", "amp")
corpus <- tm_map(corpus, removeWords, myStopwords)
# Remove extra whitespace
corpus <- tm_map(corpus, stripWhitespace)
# Remove other languages or more specifically anything with a non "a-z" and "0-9" character
corpus <- tm_map(corpus, content_transformer(function(s){
  gsub(pattern = '[^a-zA-Z0-9\\s]+',
       x = s,
       replacement = " ",
       ignore.case = TRUE,
       perl = TRUE)
}))
# Keep a copy of the generated corpus for stem completion later as dictionary
corpus_copy <- corpus
# Stemming words of corpus
corpus <- tm_map(corpus, stemDocument, language="english")
Now, to complete the word stemming, I apply stemCompletion from the tm package.
# Completing the stemming with the generated dictionary
corpus <- tm_map(corpus, content_transformer(stemCompletion), dictionary = corpus_copy, type="prevalent")
However, this is where my corpus gets destroyed and messed up, and stemCompletion does not work properly. Peculiarly, R does not indicate an error; the code runs, but the result is terrible.
Does anybody know a solution for this? BTW, my "comments_final" data frame consists of YouTube comments, which I downloaded using the tuber package.
Thank you so much for your help in advance; I really need this for my master's thesis.
stemCompletion does seem to work in a bit of a weird way, so I came up with my own stemCompletion function and applied it to the corpus. In your case, try this:
stemCompletion2 <- function(x, dictionary) {
  # split the document into individual words
  x <- unlist(strsplit(as.character(x), " "))
  # Oddly, stemCompletion completes an empty string to
  # a word in the dictionary. Remove empty strings to avoid this.
  x <- x[x != ""]
  x <- stemCompletion(x, dictionary = dictionary)
  x <- paste(x, sep = "", collapse = " ")
  PlainTextDocument(stripWhitespace(x))
}
corpus <- lapply(corpus, stemCompletion2, corpus_copy)
corpus <- as.VCorpus(corpus)
Hope this helps!
I am new to supervised methods. Here is my way of normalising my data:
corpuscleaned1 <- tm_map(AI_corpus, removePunctuation)  ## Remove punctuation.
corpuscleaned2 <- tm_map(corpuscleaned1, stripWhitespace)  ## Remove whitespace.
corpuscleaned3 <- tm_map(corpuscleaned2, removeNumbers)  ## Remove numbers.
corpuscleaned4 <- tm_map(corpuscleaned3, stemDocument, language = "english")  ## Stem words.
corpuscleaned5 <- tm_map(corpuscleaned4, removeWords, stopwords("en"))  ## Remove stopwords.
head(AI_corpus[[1]]$content) ## Examine original txt.
head(corpuscleaned5[[1]]$content) ## Examine clean txt.
AI_corpus is my corpus of Amnesty International reports, 1993-2013.

Find 2-word phrases using tm in R

I know this has been asked multiple times. For example
Finding 2 & 3 word Phrases Using R TM Package
However, I don't know why none of these solutions works with my data. The result is always unigrams, no matter which ngram length I choose (2, 3, or 4).
Does anybody know the reason why? I suspect the encoding is the reason.
Edited: a small part of the data.
comments <- c("Merge branch 'master' of git.internal.net:/git/live/LegacyCodebase into problem_70918\n",
"Merge branch 'master' of git.internal.net:/git/live/LegacyCodebase into tm-247\n",
"Merge branch 'php5.3-upgrade-sprint6-7' of git.internal.net:/git/pn-project/LegacyCodebase into release2012.08\n",
"Merge remote-tracking branch 'dmann1/p71148-s3-callplan_mapping' into lcst-operational-changes\n",
"Merge branch 'master' of git.internal.net:/git/live/LegacyCodebase into TASK-360148\n",
"Merge remote-tracking branch 'grockett/rpr-pre' into rpr-lite\n"
)
cleanCorpus <- function(vector){
  corpus <- Corpus(VectorSource(vector), readerControl = list(language = "en_US"))
  corpus <- tm_map(corpus, removeNumbers)
  corpus <- tm_map(corpus, tolower)
  #corpus <- tm_map(corpus, stripWhitespace)
  corpus <- tm_map(corpus, removePunctuation)
  #corpus <- tm_map(corpus, PlainTextDocument)
  corpus <- tm_map(corpus, removeWords, stopwords("english"))
  return(corpus)
}
# this function is provided by a team member (in the link I posted above)
test <- function(keywords_doc){
  BigramTokenizer <- function(x)
    unlist(lapply(ngrams(words(x), 2), paste, collapse = " "), use.names = FALSE)
  # create the term-document matrix
  keywords_matrix <- TermDocumentMatrix(keywords_doc, control = list(tokenize = BigramTokenizer))
  # remove sparse terms
  keywords_naremoval <- removeSparseTerms(keywords_matrix, 0.99)
  # frequency of the words appearing
  keyword.freq <- rowSums(as.matrix(keywords_naremoval))
  subsetkeyword.freq <- subset(keyword.freq, keyword.freq >= 20)
  frequentKeywordSubsetDF <- data.frame(term = names(subsetkeyword.freq), freq = subsetkeyword.freq)
  # sorting of the words
  frequentKeywordDF <- data.frame(term = names(keyword.freq), freq = keyword.freq)
  frequentKeywordSubsetDF <- frequentKeywordSubsetDF[with(frequentKeywordSubsetDF, order(-frequentKeywordSubsetDF$freq)), ]
  frequentKeywordDF <- frequentKeywordDF[with(frequentKeywordDF, order(-frequentKeywordDF$freq)), ]
  # printing of the words
  # wordcloud(frequentKeywordDF$term, freq = frequentKeywordDF$freq, random.order = FALSE, rot.per = 0.35, scale = c(5, 0.5), min.freq = 30, colors = brewer.pal(8, "Dark2"))
  return(frequentKeywordDF)
}
corpus <- cleanCorpus(comments)
t <- test(corpus)
> head(t)
term freq
added added 6
html html 6
tracking tracking 6
common common 4
emails emails 4
template template 4
Thanks,
I haven't found the reason either, but if you are only interested in the counts, regardless of which documents the bigrams occurred in, you could get them alternatively via this pipeline:
library(tm)
library(dplyr)
library(quanteda)
# ..construct the corpus as in your post ...
corpus %>%
  unlist() %>%
  tokens() %>%
  tokens_ngrams(2:2, concatenator = " ") %>%
  unlist() %>%
  as.data.frame() %>%
  group_by_(".") %>%
  summarize(cnt = n()) %>%
  arrange(desc(cnt))
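A likely reason the tokenizer is ignored in the first place (an assumption, since I can't reproduce your environment): since tm 0.7, Corpus(VectorSource(...)) returns a SimpleCorpus, and TermDocumentMatrix() ignores a custom tokenize function for SimpleCorpus objects. Building a VCorpus instead usually makes the bigram tokenizer take effect; a sketch of the same cleaning steps on a VCorpus:
library(tm)
library(NLP)  # ngrams() and words()

# note: with a VCorpus, tolower needs to be wrapped in content_transformer()
corpus <- VCorpus(VectorSource(comments), readerControl = list(language = "en_US"))
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

BigramTokenizer <- function(x)
  unlist(lapply(ngrams(words(x), 2), paste, collapse = " "), use.names = FALSE)
keywords_matrix <- TermDocumentMatrix(corpus, control = list(tokenize = BigramTokenizer))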

Make all words uppercase in Wordcloud in R

When creating word clouds it is most common to make all the words lowercase. However, I want the word cloud to display the words in uppercase. After forcing the words to uppercase, the word cloud still displays lowercase words. Any ideas why?
Reproducible code:
library(tm)
library(wordcloud)
data <- data.frame(text = c("Creativity is the art of being ‘productive’ by using
the available resources in a skillful manner.
Scientifically speaking, creativity is part of
our consciousness and we can be creative –
if we know – ’what goes on in our mind during
the process of creation’.
Let us now look at 6 examples of creativity which blows the mind."))
text <- paste(data$text, collapse = " ")
# I am using toupper() to force the words to become uppercase.
text <- toupper(text)
source <- VectorSource(text)
corpus <- VCorpus(source, list(language = "en"))
# This is my function for cleaning the text
clean_corpus <- function(corpus){
  corpus <- tm_map(corpus, removePunctuation)
  corpus <- tm_map(corpus, removeNumbers)
  corpus <- tm_map(corpus, stripWhitespace)
  corpus <- tm_map(corpus, removeWords, c(stopwords("en")))
  return(corpus)
}
clean_corp <- clean_corpus(corpus)
data_tdm <- TermDocumentMatrix(clean_corp)
data_m <- as.matrix(data_tdm)
commonality.cloud(data_m, colors = c("#224768", "#ffc000"), max.words = 50)
This produces the following output:
It's because behind the scenes TermDocumentMatrix(clean_corp) is doing TermDocumentMatrix(clean_corp, control = list(tolower = TRUE)). If you set it to TermDocumentMatrix(clean_corp, control = list(tolower = FALSE)), then the words stay uppercase. Alternatively, you can also adjust the row names of your matrix afterwards: rownames(data_m) <- toupper(rownames(data_m)).
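Applied to the reproducible example above, only the control argument changes:
# keep the terms uppercase when building the term-document matrix
data_tdm <- TermDocumentMatrix(clean_corp, control = list(tolower = FALSE))
data_m <- as.matrix(data_tdm)
commonality.cloud(data_m, colors = c("#224768", "#ffc000"), max.words = 50)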
