I am trying to create a word cloud for my internship and I'm struggling with it; I would appreciate any help!
I have a repository of 200 PDF documents, and I have to build a word cloud of the terms that are repeated the most. For that, I build a corpus, turn it into a TDM, and use the wordcloud command. The clouds only show single words ("country", "fiscal", "debt", etc.), and I would like to add multi-word terms that I have seen frequently repeated in the papers ("fiscal rules", "stability and growth pact", etc.). Because the term-document matrix only counts how often a single word is repeated, I would need a way to count how many times these "blocks" or "tokens" show up in the text, but I can't find one that works from the corpus or the group of PDFs. Would anyone know a way to do this? I have tried tokenizing and building a dictionary, but neither seems to work when it comes to counting the frequencies.
I am attaching the code showing how I have been building the word clouds, and how I attempted to count two-word terms.
Thank you so much in advance!
This is how I got the cloud:
library(readxl)
library(tm)
library(wordcloud)
library(RColorBrewer)

files <- list.files(pattern = "pdf$")
files
Excel_Metadata <- read_excel("Y:/Excel.xlsx",
                             range = "A1:D203")
View(Excel_Metadata)
corp <- Corpus(URISource(Excel_Metadata$files),
               readerControl = list(reader = readPDF))
files.tdm <- TermDocumentMatrix(corp,
                                control = list(removePunctuation = TRUE,
                                               stopwords = TRUE,
                                               tolower = TRUE,
                                               stemming = TRUE,
                                               removeNumbers = TRUE))
inspect(files.tdm)
matrixfiles <- as.matrix(files.tdm)
v <- sort(rowSums(matrixfiles), decreasing = TRUE)
d <- data.frame(word = names(v), freq = v)
set.seed(6984)
pdf("folder/cloud.pdf")
wordcloud(words = d$word, freq = d$freq, min.freq = 30, max.words = 150,
          random.order = FALSE, rot.per = 0.35,
          colors = brewer.pal(8, "Paired"), family = "serif")
dev.off()
This is what I have been trying in order to get the two-word terms:
# attempt with the corpus and tidytext packages
install.packages("tidytext")
library(tidytext)
library(corpus)

filesver <- tidy(corp)
token <- text_tokens(filesver$text,
                     filter = text_filter(combine = "fiscal rules"))
# ...but the cloud below is still built from the single-word TDM,
# so the combined tokens never reach it
matrixfiles <- as.matrix(files.tdm)
v <- sort(rowSums(matrixfiles), decreasing = TRUE)
d <- data.frame(word = names(v), freq = v)
wordcloud(words = d$word, freq = d$freq, min.freq = 10, max.words = 150,
          random.order = FALSE, rot.per = 0.35,
          colors = brewer.pal(8, "Paired"), family = "serif")
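Since TermDocumentMatrix only ever counts single words, one workaround is to count the multi-word phrases directly in the raw text and append those counts to the frequency table before calling wordcloud. A minimal base-R sketch; the `texts` vector here is a toy stand-in for the extracted PDF text (with a real tm corpus that would come from something like sapply over the documents' content):

```r
# Toy stand-in for the extracted PDF text of two documents.
texts <- c("The fiscal rules under the stability and growth pact limit debt.",
           "New fiscal rules were adopted; fiscal rules matter for debt.")

phrases <- c("fiscal rules", "stability and growth pact")

# Count non-overlapping, case-insensitive occurrences of each phrase.
phrase_freq <- sapply(phrases, function(p) {
  hits <- gregexpr(p, tolower(texts), fixed = TRUE)
  sum(sapply(hits, function(m) sum(m > 0)))  # -1 means no match
})

phrase_freq
# fiscal rules: 3, stability and growth pact: 1
```

The resulting named vector can then be bound onto the existing frequency table with rbind(d, data.frame(word = names(phrase_freq), freq = phrase_freq)) before plotting. Alternatively, packages such as RWeka (NGramTokenizer) or quanteda (tokens_compound) can build n-gram frequencies directly.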
I am trying to use a structural topic model to look at topic changes in YouTube comments, and I am following this tutorial: https://bookdown.org/valerie_hase/TextasData_HS2021/tutorial-13-topic-modeling.html
tokens <- data_pr$content %>%
  tokens(what = "word",
         remove_punct = TRUE,
         remove_numbers = TRUE,
         remove_url = TRUE)

# applying relative pruning
dfm <- dfm_trim(dfm(tokens),
                min_docfreq = 0.02,
                max_docfreq = 0.99,
                docfreq_type = "prop", verbose = TRUE)

# removing further words
dfm <- dfm_remove(dfm, c("d", "dass", "war", "haben", "sein", "mal", "gar"))
dfm <- dfm_replace(dfm, "videos", "video")

# convert to the format stm expects
dfm_stm <- convert(dfm, to = "stm")

# STM model
model <- stm(documents = dfm_stm$documents,
             vocab = dfm_stm$vocab,
             K = 6,
             prevalence = ~ year,
             data = data_pr,
             verbose = TRUE)
The model works unless I use relative pruning or delete words that my token preprocessing didn't catch. Because the original data containing the metadata and my dfm then don't have the same number of observations, I get the following error (in this example I have only deleted some words like "dass" that my stopword list didn't find):
"Error in stm(documents = dfm_stm$documents, vocab = dfm_stm$vocab, K = 10, :
number of observations in content covariate (9913) prevalence covariate (9915) and documents (9913) are not all equal."
I am wondering how to deal with this, since even the tutorial I follow uses relative pruning of tokens.
Thank you for your help!
I tried the model without relative pruning and it works. I am now wondering if I need to preprocess all the data before creating the dfm, but that's not how the tutorials do it.
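For what it's worth, a common cause of this mismatch is that pruning leaves some documents with zero remaining tokens, and convert(dfm, to = "stm") silently drops those empty documents while data_pr keeps its original rows. Assuming that is what is happening here, the metadata can be realigned to the surviving documents; the sketch below uses dummy stand-ins (the names doc_id and kept_docs are illustrative) for the real quanteda objects:

```r
# Dummy stand-ins: 5 documents went in, but pruning emptied "text3".
data_pr   <- data.frame(doc_id = paste0("text", 1:5), year = 2018:2022)
kept_docs <- c("text1", "text2", "text4", "text5")  # docnames left after trimming

# Keep only the metadata rows whose documents survived, in the same order.
data_aligned <- data_pr[match(kept_docs, data_pr$doc_id), ]

nrow(data_aligned)  # 4, matching the number of documents passed to stm()
```

With the real quanteda objects the same idea is to drop empty documents explicitly with dfm <- dfm_subset(dfm, ntoken(dfm) > 0) before converting, and then subset data_pr by docnames(dfm), so that prevalence = ~year sees exactly one row per retained document.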
I have 13 pdf files with 250 questions in each file.
Each question starts with the question number like this:
1 Question text … key word 1 ….
a. Answer option A
b. Answer option B
c. Answer option C
d. Answer option D
2 Question text … key word 2 ….
a. Answer option A
b. Answer option B
c. Answer option C
d. Answer option D
Between the questions there is an empty line. I would like to use this empty line to delimit the separate questions.
The PDFs look like this one (starting from page 8).
I am looking for questions on the same topic using tm and quanteda packages.
The output is now like this:
[text12, 6690] xxxxx |key word|
[text13, 5908] yyyyy | key word|
How can I get the questions regarding one topic (ex. key word) sorted by the question number and file name?
Thank you!
This is my code so far:
library(tm)
library(pdftools)
library(quanteda)

files <- list.files(pattern = "pdf$")
corp <- Corpus(URISource(files),
               readerControl = list(reader = readPDF))
stopwords <- c(stopwords("english"), tolower("input$term"))
corp <- tm_map(corp, removeWords, stopwords)
opinions.tdm <- TermDocumentMatrix(corp,
                                   control = list(removePunctuation = FALSE,
                                                  stopwords = TRUE,
                                                  tolower = TRUE,
                                                  stemming = TRUE,
                                                  removeNumbers = FALSE,
                                                  stripWhitespace = FALSE,
                                                  bounds = list(global = c(3, Inf))))
# find words with a given frequency
findFreqTerms(opinions.tdm, lowfreq = 1, highfreq = 10)
ft <- findFreqTerms(opinions.tdm, lowfreq = 3, highfreq = 10)

# find questions with a given topic, e.g. asthma
corp <- as.VCorpus(corp)
corp <- tm_map(corp, content_transformer(tolower))
corp_crude <- corpus(corp)
kwic(corp_crude, window = 20, "asthma")
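kwic() returns one row per hit and can be coerced with as.data.frame(), which exposes a docname column and the token position (from); ordering by those two columns sorts the hits by file name and by position within each document. A small sketch on a dummy stand-in for as.data.frame(kwic(corp_crude, window = 20, "asthma")):

```r
# Dummy stand-in for as.data.frame(kwic(...)): one row per keyword hit.
hits <- data.frame(docname = c("text13", "text12", "text12"),
                   from    = c(5908, 6690, 120),
                   keyword = "asthma")

# Sort hits by file/document name, then by token position within the document.
hits_sorted <- hits[order(hits$docname, hits$from), ]

hits_sorted$docname  # "text12" "text12" "text13"
```

If the question number itself is needed rather than the token position, it could be pulled out of the pre context with a regular expression, since each question starts with its number.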
I want to create a word cloud with R. I want to visualize the occurrence of variable names, which may consist of more than one word and also contain special characters and numbers; for example, one variable name is "S & P 500 dividend yield".
The variable names are in a text file and are not further separated: every line of the text file contains a new variable name.
I tried the following code, but the variable names are split into separate words:
library(tm)
library(SnowballC)
library(wordcloud)
library(RColorBrewer)
# load the text:
text <- readLines("./Overview_used_series.txt")
docs <- Corpus(VectorSource(text))
inspect(docs)
# build a term-document matrix:
tdm <- TermDocumentMatrix(docs)
m <- as.matrix(tdm)
v <- sort(rowSums(m),decreasing=TRUE)
d <- data.frame(word = names(v),freq=v)
head(d, 10)
# generate the wordcloud:
pdf("Word cloud.pdf")
wordcloud(words = d$word, freq = d$freq, min.freq = 1,
max.words=200, random.order=FALSE, rot.per=0.35,
colors=brewer.pal(8, "Dark2"))
dev.off()
How can I treat the variable names, so that they are visualized in the wordcloud with their original names as in the text file?
If you have a file as you specified, with one variable name per line, there is no need to use tm. You can easily create your own word-frequency table to use as input. When using tm, it will split words on spaces and will not respect your variable names.
Starting from when the text is loaded, just create a data.frame where the frequency is set to 1, and then aggregate everything. wordcloud also accepts a data.frame like this, so you can build the word cloud directly from it. Note that I adjusted the scale a bit, because long variable names might otherwise not get printed; you will get a warning message when that happens.
I'm not inserting the resulting picture.
library(wordcloud)
library(RColorBrewer)

# text <- readLines("./Overview_used_series.txt")
text <- c("S & P 500 dividend yield", "S & P 500 dividend yield", "S & P 500 dividend yield",
          "visualize", "occurence", "variable names", "visualize", "occurence",
          "variable names")

# freq = 1 adds a column with just 1's for every value.
my_data <- data.frame(text = text, freq = 1, stringsAsFactors = FALSE)

# aggregate the data
my_agr <- aggregate(freq ~ ., data = my_data, sum)

wordcloud(words = my_agr$text, freq = my_agr$freq, min.freq = 1,
          max.words = 200, random.order = FALSE, rot.per = 0.35,
          colors = brewer.pal(8, "Dark2"), scale = c(2, .5))
I have ranked the tokens in my texts according to a criterion and they all have a value. My list looks like this:
value,token
3,tok1
2.84123,tok2
1.5,tok3
1.5,tok4
1.01,tok5
0.9,tok6
0.9,tok7
0.9,tok8
0.81,tok9
0.73,tok10
0.72,tok11
0.65,tok12
0.65,tok13
0.6451231,tok14
0.6,tok15
0.5,tok16
0.4,tok17
0.3001,tok18
0.3,tok19
0.2,tok20
0.2,tok21
0.1,tok22
0.05,tok23
0.04123,tok24
0.03,tok25
0.02,tok26
0.01,tok27
0.01,tok28
0.01,tok29
0.007,tok30
I then try to produce a word cloud with the following code:
library(tm)
library(wordcloud)

tokList <- read.table("tokens.txt", header = TRUE, sep = ',')

# create corpus
corp <- Corpus(DataframeSource(tokList))
corpPTD <- tm_map(corp, PlainTextDocument)
wordcloud(corpPTD, max.words = 50, random.order = FALSE)
Which produces:
But that is not what I want. I would like a wordcloud, where I visualize the tokens (so "tok1", "tok2", ...) according to the value that's in the table. So if the first token has a 3 then I want that word to be three times bigger than the next element in the list.
Can somebody maybe help?
Simply this will also work (assuming that your minimum value is not zero; if it is zero, filter out the corresponding tokens first):
library(RColorBrewer)
wordcloud(tokList$token, tokList$value/min(tokList$value), max.words = 50, min.freq = 1,
random.order=FALSE, colors=brewer.pal(6,"Dark2"), random.color=TRUE)
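The division by min(tokList$value) only rescales the weights so the smallest token gets exactly 1 (which is why min.freq = 1 then keeps everything); the relative sizes wordcloud draws are unchanged, since all ratios between values are preserved. A quick check with a few values from the table above:

```r
values <- c(3, 2.84123, 1.5, 0.007)  # a few values from the table above
scaled <- values / min(values)

min(scaled)  # 1, so min.freq = 1 keeps every token in the cloud
```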
I want to make a tag cloud to visualize the gene frequency.
library(wordcloud)
genes_snv <- read.csv("genes.txt", sep="", header=FALSE)
wordcloud(genes_snv$V1,
min.freq=15,
scale=c(5,0.5),
max.words=100,
random.order=FALSE,
rot.per=0.3,
colors=brewer.pal(8, "Dark2"))
This is my code, but it converts everything to lowercase (not useful with gene names). How can I avoid this?
genes.txt starts with
Fcrl5
Etv3
Etv3
Lrrc71
Lrrc71
(...)
When the freq argument is missing, wordcloud calls tm::TermDocumentMatrix, which I guess internally calls tolower before computing frequencies.
To avoid the calls to tm, we can supply our own frequencies; see the example:
library(wordcloud)
library(RColorBrewer)

# dummy data
set.seed(1)
genes <- c("Fcrl5", "Etv3", "Etv3", "Lrrc71", "Lrrc71")
genes <- unlist(sapply(genes, function(i) rep(i, sample(1:100, 1))))

# get frequency
plotDat <- as.data.frame(table(genes))

# plot
wordcloud(words = plotDat$genes, freq = plotDat$Freq,
          min.freq = 15,
          scale = c(5, 0.5),
          max.words = 100,
          random.order = FALSE,
          rot.per = 0.3,
          colors = brewer.pal(8, "Dark2"))