LDA topic model plotting by year - r

I'm trying to plot tweet topics by year from this file:
https://www.mediafire.com/file/64lzbt46v01jbe1/cleaned.xlsx/file
The code below works fine for getting the topics, but when I try to plot them by year I run into a dimensions problem:
library(readxl)
library(tm)
library(topicmodels)   # needed for LDA() and posterior()
tweets <- read_xlsx("C:/cleaned.xlsx")
mytextdata <- tweets$textdata
# Convert to tm corpus and use its API
corpus <- Corpus(VectorSource(mytextdata)) # Create corpus object
dtm <- DocumentTermMatrix(corpus)
ui = unique(dtm$i)
dtm.new = dtm[ui,]
k <- 7
ldaTopics <- LDA(dtm.new, method = "Gibbs", control=list(alpha = 0.1, seed = 77), k = k)
tmResult <- posterior(ldaTopics)
theta <- tmResult$topics
dim(theta)
dim(theta) gives 4857 rows, but I have 4876 dates in my cleaned.xlsx file, and they need to match to run this aggregate function
topic_proportion_per_decade <- aggregate(theta, by = list(decade = textdata$decade), mean)
from here
https://tm4ss.github.io/docs/Tutorial_6_Topic_Models.html
I think the problem is that the cleaned.xlsx file is not clean enough, and that's why theta is missing some rows.
But in fact I really don't know why theta is missing rows.
I also don't know how to clean the file better if that is the problem; the file looks fine to me. There are some rows that contain only numbers or non-English words, but I prefer to keep them.

The problem is that ui = unique(dtm$i) removes several documents: it keeps only the documents that still contain at least one term, so empty documents are dropped. As a result your theta doesn't have the same number of rows as the data. We can solve this by only keeping the rows which are still in theta:
library("dplyr")
library("reshape2")
library("ggplot2")
tweets_clean <- tweets %>%
  mutate(id = rownames(.)) %>%
  filter(id %in% rownames(theta)) %>% # keep only rows still in theta
  cbind(theta) %>%                    # now we can attach the topics to the data.frame
  mutate(year = format(date, "%Y"))   # make year variable
I then used dplyr functions for the aggregation, since I think it makes for easier-to-read code:
tweets_clean_yearly <- tweets_clean %>%
  group_by(year) %>%
  summarise_at(vars(as.character(1:7)), mean) %>%
  melt(id.vars = "year")
Then we can plot this:
ggplot(tweets_clean_yearly, aes(x = year, y = value, fill = variable)) +
  geom_bar(stat = "identity") +
  ylab("proportion")
Note: I tested whether theta and tweets really contained the same documents with:
tweets_clean <- tweets %>%
  mutate(id = rownames(.)) %>%
  filter(id %in% rownames(theta))
all.equal(tweets_clean$id, rownames(theta))
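As an extra sanity check (a quick sketch, using the objects defined above), you can also inspect which documents were dropped as empty by the unique(dtm$i) step:
# documents with no remaining terms; here there should be 4876 - 4857 = 19 of them
empty_docs <- setdiff(seq_len(nrow(dtm)), unique(dtm$i))
length(empty_docs)
tweets$textdata[empty_docs]   # look at the tweets that were dropped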

Related

Why is my bar chart not showing all the data

I am working on a music streaming project, and I am trying to get the top 15 global streams in 2020 and make them into an interactive graph.
It successfully showed the top 15 song names as a data frame, but it failed to show as a bar graph; I wonder where I went wrong here? It did work after I flipped the bar graph to horizontal, but the data seem a bit off.
It looks like this as a vertical bar graph:
The horizontal bar graph looks like this, but the data seem incorrect:
Here is the code I have:
library("dplyr")
library("ggplot2")
# load the .csv into R studio, you can do this 1 of 2 ways
#read.csv("the name of the .csv you downloaded from kaggle")
spotiify_origional <- read.csv("charts.csv")
spotiify_origional <- read.csv("https://raw.githubusercontent.com/info201a-au2022/project-group-1-section-aa/main/data/charts.csv")
View(spotiify_origional)
# filters down the data
# removes the track id, explicit, and duration columns
spotify_modify <- spotiify_origional %>%
  select(name, country, date, position, streams, artists, genres = artist_genres)
#returns all the data just from 2022
#this is the data set you should use on the project
spotify_2022 <- spotify_modify %>%
  filter(date >= "2022-01-01") %>%
  arrange(date) %>%
  group_by(date)
# use write.csv() to turn the new dataset into a .csv file
# generic template: write.csv(Your DataFrame, "Path to export the DataFrame\\File Name.csv", row.names = FALSE)
write.csv(spotify_2022, "/Users/oliviasapp/Documents/info201/project-group-1-section-aa/data/spotify_2022.csv" , row.names = FALSE)
# then I pushed the spotify_2022.csv to the GitHub repo
View(spotiify_origional)
spotify_2022_global <- spotify_modify %>%
  filter(date >= "2022-01-01") %>%
  filter(country == "global") %>%
  arrange(date) %>%
  group_by(streams)
View(spotify_2022_global)
top_15 <- spotify_2022_global[order(spotify_2022_global$streams, decreasing = TRUE), ]
top_15 <- top_15[1:15,]
top_15$streams <- as.numeric(top_15$streams)
View(top_15)
col_chart <- ggplot(data = top_15) +
  geom_col(mapping = aes(x = name, y = streams)) +
  ggtitle("Top 15 Songs Daily Streamed Globally") +
  theme(plot.title = element_text(hjust = 0.5))
col_chart <- col_chart + coord_cartesian(ylim = c(999000,1000000)) + coord_flip()
col_chart
Thank you so much! Any suggestions will hugely help!
top_15 <- spotify_2022_global[order(spotify_2022_global$streams, decreasing = TRUE), ]
This code sorts in decreasing order, but the streams data here is still of character type, so numbers like 999975 will be "higher" than 1M, which is why your data looks weird. One song had two weeks just under 1M which is why it shows up with ~2M.
If you use this instead you'll get more what you intended:
top_15 <- spotify_2022_global[order(as.numeric(spotify_2022_global$streams), decreasing = TRUE), ]
However, this is finding the highest song-weeks, not the highest songs, so in this case all 15 highest song-weeks were one song.
I'd suggest you group_by(name) and then summarize to get total streams by song, filter top 15, and then make name an ordered factor, e.g. with forcats::fct_reorder.
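A minimal sketch of that suggestion, assuming the spotify_2022_global data frame from the question (with streams still stored as character):
library(dplyr)
library(forcats)
library(ggplot2)
top_15_songs <- spotify_2022_global %>%
  ungroup() %>%
  mutate(streams = as.numeric(streams)) %>%
  group_by(name) %>%
  summarise(total_streams = sum(streams)) %>%   # total streams per song
  slice_max(total_streams, n = 15) %>%          # keep the 15 biggest songs
  mutate(name = fct_reorder(name, total_streams))
ggplot(top_15_songs, aes(x = name, y = total_streams)) +
  geom_col() +
  coord_flip() +
  ggtitle("Top 15 Songs Streamed Globally in 2022") +
  theme(plot.title = element_text(hjust = 0.5))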

R: Creative visualization in RStudio

I am at the final stages of a project where I have been comparing the appraisal price vs. the sold price of different properties. The complete code for data collection and tidying is below.
At this stage I am looking at different ways to visualize my data. However, I am quite new to this, so my question is whether anyone has any "new" or special ways of visualizing data that they find useful or intuitive. I have given a couple of examples of what I am able to visualize now using ggplot.
Additionally: right now my visualizations plot all 1275 observations every time. I would, however, also like to visualize the data with the mean and median of the Percentage, Sold and Tax variables, which I am most interested in. For example, to visualize the mean value of the Percentage column based on different years.
Appreciate any help!
Complete code:
#Step 1: Load needed library
library(tidyverse)
library(rvest)
library(jsonlite)
library(stringi)
library(dplyr)
library(data.table)
library(ggplot2)
#Step 2: Access the URL of where the data is located
url <- "https://www.forsvarsbygg.no/ListApi/ListContent/78635/SoldEstates/0/10/"
#Step 3: Direct JSON as format of data in URL
data <- jsonlite::fromJSON(url, flatten = TRUE)
#Step 4: Access all items in API
totalItems <- data$TotalNumberOfItems
#Step 5: Summarize all data from API
allData <- paste0('https://www.forsvarsbygg.no/ListApi/ListContent/78635/SoldEstates/0/', totalItems,'/') %>%
  jsonlite::fromJSON(., flatten = TRUE) %>%
  .[1] %>%
  as.data.frame() %>%
  rename_with(~str_replace(., "ListItems.", ""), everything())
#Step 6: removing columns not needed
allData <- allData[, -c(1,4,8,9,11,12,13,14,15)]
#Step 7: remove whitespace and change to numeric in columns SoldAmount and Tax
#https://stackoverflow.com/questions/71440696/r-warning-argument-is-not-an-atomic-vector-when-attempting-to-remove-whites/71440806#71440806
allData[c("Tax", "SoldAmount")] <- lapply(allData[c("Tax", "SoldAmount")], function(z) as.numeric(gsub(" ", "", z)))
#Step 8: Remove rows where value is NA
#https://stackoverflow.com/questions/4862178/remove-rows-with-all-or-some-nas-missing-values-in-data-frame
alldata <- allData %>%
  filter(across(where(is.numeric),
                ~ !is.na(.)))
#Step 9: Remove values below 10000 NOK in SoldAmount and Tax.
alldata <- alldata %>%
  filter_all(any_vars(is.numeric(.) & . > 10000))
#Step 10: Calculate percentage change between tax and sold amount and create new column with percent change
#df %>% mutate(Percentage = number/sum(number))
alldata_Percent <- alldata %>% mutate(Percentage = (SoldAmount-Tax)/Tax)
Visualization
# Plot Percentage difference based on County
ggplot(data = alldata_Percent, mapping = aes(x = Percentage, y = County)) +
  geom_point(size = 1.5)
#Plot County with both Date and Percentage difference
theme_set(new = ggthemes::theme_economist())
p <- ggplot(data = alldata_Percent,
            mapping = aes(x = Date, y = Percentage, colour = County)) +
  geom_line(na.rm = TRUE) +
  geom_point(na.rm = TRUE)
p
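One way to get at the mean/median-per-year view mentioned in the question, as a sketch only: it assumes the Date column can be parsed by as.Date (adjust the format if not) and builds on alldata_Percent from above.
library(lubridate)   # for year(); not loaded in the code above
yearly_summary <- alldata_Percent %>%
  mutate(Year = year(as.Date(Date))) %>%
  group_by(Year) %>%
  summarise(mean_percentage = mean(Percentage, na.rm = TRUE),
            median_percentage = median(Percentage, na.rm = TRUE))
ggplot(yearly_summary, aes(x = Year, y = mean_percentage)) +
  geom_col() +
  ylab("Mean percentage difference")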

Dataframe is too big for supercomputer

I am trying to create a matrix of donors and recipients, populated with the sum of donations produced by each pair, keeping the NAs for pairs with no donations.
It works well for small datasets (see the toy example below), but when I switch to national datasets (3M entries) several problems emerge: besides being painstakingly slow, the creation of the fill data frame consumes all the memory of the (super)computer and I get the error "Error: cannot allocate vector of size 1529.0 Gb".
How should I tackle the problem?
Thanks a lot!
library(dplyr)
library(tidyr)
library(bigmemory)
candidate_id <- c("cand_1","cand_1","cand_1","cand_2","cand_3")
donor_id <- c("don_1","don_1","don_2","don_2","don_3")
donation <- c(1,2,3.5,4,10)
df = data.frame(candidate_id,donor_id,donation)
colnames(df) <- c("candidate_id","donor_id","donation")
fill <- df %>%
  group_by(df$candidate_id, df$donor_id) %>%
  summarise(tot_donation = sum(as.numeric(donation))) %>%
  complete(df$candidate_id, df$donor_id)
fill <- unique(fill[ ,1:3])
colnames(fill) <- c("candidate_id","donor_id","tot_donation")
nrow = length(unique(df$candidate_id))
ncol = length(unique(df$donor_id))
row_names = unique(fill$candidate_id)
col_names = unique(fill$donor_id)
x <- big.matrix(nrow, ncol, init=NA,dimnames=list(row_names,col_names))
for (i in 1:nrow){
  for (j in 1:ncol){
    x[i,j] <- fill[which(fill$candidate_id == row_names[i] &
                         fill$donor_id == col_names[j]), 3]
  }
}
I see you're using unique because your output has duplicated values.
Based on this question,
you should try the following in order to avoid duplication:
fill <- df %>%
  group_by(candidate_id, donor_id) %>%
  summarise(tot_donation = sum(donation)) %>%
  ungroup %>%
  complete(candidate_id, donor_id)
Can you then try to create your desired output?
I think unique can be very resource-heavy,
so try to avoid calling it.
The tidyr version of what Benjamin suggested should be:
spread(fill, donor_id, tot_donation)
EDIT: By the way, since you tagged the question with sparse-matrix,
you could indeed use sparsity to your advantage:
library(Matrix)
library(dplyr)
df <- data.frame(
  candidate_id = c("cand_1","cand_1","cand_1","cand_2","cand_3"),
  donor_id = c("don_1","don_1","don_2","don_2","don_3"),
  donation = c(1,2,3.5,4,10),
  stringsAsFactors = TRUE  # make the id columns factors so nlevels()/levels() below work on R >= 4.0
)
summ <- df %>%
  group_by(candidate_id, donor_id) %>%
  summarise(tot_donation = sum(donation)) %>%
  ungroup
num_candidates <- nlevels(df$candidate_id)
num_donors <- nlevels(df$donor_id)
smat <- Matrix(0, num_candidates, num_donors, sparse = TRUE, dimnames = list(
  levels(df$candidate_id),
  levels(df$donor_id)
))
indices <- summ %>%
  select(candidate_id, donor_id) %>%
  mutate_all(unclass) %>%
  as.matrix
smat[indices] <- summ$tot_donation
smat
3 x 3 sparse Matrix of class "dgCMatrix"
       don_1 don_2 don_3
cand_1     3   3.5     .
cand_2     .   4.0     .
cand_3     .     .    10
You might try
library(reshape2)
dcast(fill, candidate_id ~ donor_id,
      value.var = "tot_donation",
      fun.aggregate = sum)
I don't know if it will avoid the memory issue, but it will likely be much faster than a double for loop.
I have to run to a meeting, but part of me wonders if there is a way to do this with outer.
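For completeness, a rough sketch of the outer() idea (untested at scale; note that it fills missing pairs with 0 rather than NA, and it will likely be far too slow for 3M rows):
cands <- as.character(unique(df$candidate_id))
dons  <- as.character(unique(df$donor_id))
# sum of donations for one candidate/donor pair
donation_sum <- function(cand, don) {
  sum(df$donation[df$candidate_id == cand & df$donor_id == don])
}
x <- outer(cands, dons, Vectorize(donation_sum))
dimnames(x) <- list(cands, dons)
x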

How to generate A LIST OF datasets & graphs and EXPORT them?

Here is the dataset:
# dataset called DT
library(data.table)
DT <- data.table(
  Store = rep(c("store_A","store_B","store_C","store_D","store_E"), 4),
  Amount = sample(1000, 20))
I have TWO targets to achieve:
1. Generate an INDEPENDENT grouped dataset for each group, for export to Excel/CSV files.
2. Generate an INDEPENDENT graph for each group, for export to PNG files.
*Not necessary to run both in one operation.
Constraints:
I can only perform these ONE by ONE with basic operations like:
# For dataset & CSV export
store_A <- DT %>% group_by(Store) %>% summarise(Total = sum(Amount))
fwrite(store_A,"PATH/store_A.csv")
store_B <- DT %>% group_by(Store) %>% summarise(Total = sum(Amount))
fwrite(store_B,"PATH/store_B.csv")
.....
# For graph :
Plt_A <- ggplot(store_A,aes(x = Store, y = Total)) + geom_point()
ggsave("PATH/Plt_A.png")
Plt_B <- ggplot(store_B,aes(x = Store, y = Total)) + geom_point()
ggsave("PATH/Plt_B.png")
.....
*Approaches written with for-loops can be found, but it is confusing which is more efficient and actually WORKS for generating the graphs: for loops vs. the lapply family.
As the real dataset has over 2 million rows, 70 columns, and 10k groups to generate, a for loop may run terribly SLOW and crash R itself.
The bottleneck is that the actual dataset contains 10k "Store" groups.
As everything needs to be in a loop:
require(tidyverse)
require(data.table)
setwd("Your working directory")
# dataset called DT
DT <- data.table(
  Store = rep(c("store_A","store_B","store_C","store_D","store_E"), 4),
  Amount = sample(1000, 20)) %>%
  # Arrange by store and amount
  arrange(Store, Amount) %>%
  # Nest by store, so the loop counter/index goes store by store
  nest(data = -Store)
# Export CSVs by store
for (i in 1:nrow(DT)) {
  write.csv(DT$data[[i]], paste(DT$Store[i], "csv", sep = "."))
}
# Export graphs by store
for (i in 1:nrow(DT)) {
  Graph <- DT$data[[i]] %>%
    as.data.frame() %>%
    ggplot(aes(Amount)) + geom_histogram()
  ggsave(Graph, file = paste0(DT$Store[i], ".png"), width = 14, height = 10, units = "cm")
}
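Since the question also asks about the lapply family: the same two exports can be written without an explicit loop, e.g. with purrr (a sketch, assuming the nested DT built above):
library(purrr)
# CSVs: walk over the nested data and the store names in parallel
walk2(DT$data, DT$Store, ~ write.csv(.x, paste(.y, "csv", sep = ".")))
# Graphs: build and save one histogram per store
walk2(DT$data, DT$Store, function(d, s) {
  g <- ggplot(as.data.frame(d), aes(Amount)) + geom_histogram()
  ggsave(paste0(s, ".png"), g, width = 14, height = 10, units = "cm")
})
Whether this is faster than the for loop is likely a wash; with 10k groups the file I/O will dominate either way.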

Chronological Sentiment Analysis -- Cannot group by lines

My text data is a novel in plain text. I used the packages tm and tidytext. Data processing went well and I created my TermDocumentMatrix without trouble.
text <- read_lines("GoneWithTheWind2.txt")
set.seed(314)
text <- iconv(text,'UTF-8',sub="")
myCorpus <- tm_map(myCorpus, removeWords, c(stopwords("english"),
stopwords("SMART"), mystopwords, Top200Words))
myDtm <- TermDocumentMatrix(myCorpus, control=list(minWordLength= 1))
However, I could not run the inner_join between the bing lexicon and the TermDocumentMatrix to do a chronological sentiment analysis of this novel over time. I wrote the code below based on an online example, but I did not know what to group by in count(sentiment) (I put ???? as a placeholder), because neither the plain text nor the TermDocumentMatrix has a "lines" column.
bing <- get_sentiments("bing")
m <- as.matrix(myDtm)
v <- sort(rowSums(m),decreasing=TRUE)
myNames <- names(v)
d <- data.frame(term=myNames, freq = v)
wind_polarity <- d %>%
  # Inner join to the lexicon
  inner_join(bing, by=c("term"="word")) %>%
  # Count by sentiment, ????
  count(sentiment, ????) %>%
  # Spread sentiments
  spread(sentiment, n, fill=0) %>%
  mutate(
    # Add polarity field
    polarity = positive - negative,
    # Add line number field
    line_number = row_number())
Then plot with ggplot.
I tried adding an "Index" column indicating the line number for each document (line) in text, but this column disappears somewhere in the process. Any suggestions would be highly appreciated.
Below is an approach that calculates the polarity per line (based on a minimal example of three lines). You might join your dtm with the lexicon directly to maintain the information on the counts, then turn the polarity information into a numeric representation and do your calculations per line. You can certainly rewrite the code and make it more elegant (I am not very familiar with dplyr vocabulary, sorry). I hope it helps anyway.
library(tm)
library(tidytext)
text <- c("I like coffe."
          ,"I rather like tea."
          ,"I hate coffee and tea, but I love orange juice.")
myDtm <- TermDocumentMatrix(VCorpus(VectorSource(text)),
                            control = list(removePunctuation = TRUE,
                                           stopwords = TRUE))
bing <- tidytext::get_sentiments("bing")
wind_polarity <- as.matrix(myDtm) %>%
  data.frame(terms = rownames(myDtm), ., stringsAsFactors = FALSE) %>%
  inner_join(bing, by = c("terms" = "word")) %>%
  mutate(terms = NULL,
         polarity = ifelse((.[, "sentiment"] == "positive"), 1, -1),
         sentiment = NULL) %>%
  { . * .$polarity } %>%
  mutate(polarity = NULL) %>%
  colSums
#the polarity per line which you may plot, e.g., with base or ggplot
# X1 X2 X3
# 1 1 0
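For the plotting step mentioned above, a minimal ggplot sketch using the wind_polarity vector computed by the answer:
library(ggplot2)
polarity_df <- data.frame(line = seq_along(wind_polarity),
                          polarity = as.numeric(wind_polarity))
ggplot(polarity_df, aes(x = line, y = polarity)) +
  geom_col() +
  labs(x = "Line (document)", y = "Polarity")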
