How to use docvars in Quanteda to create document variables from raw text - r

I am using Quanteda to analyze party platforms, which are txt files. I have combined two sets of party platforms into one corpus:
corp20dr <- corp20d + corp20r
summary(corp20dr)
Corpus consisting of 28 documents:
Text Types Tokens Sentences
akdem20.txt 1895 7624 332
azdem20.txt 908 2921 94
cadem20.txt 3255 19881 150
medem20.txt 355 863 39
.....................................
wvgop20.txt 1419 5013 106
wygop20.txt 428 1085 45
I would like to compare the Democratic platforms (corp20d) with the Republican platforms (corp20r). It seems I need to use docvars to make comparisons between the two groups (15 Dem, 13 GOP). When I use textplot_keyness, I expect a comparison across the two groups, but the result plots the first text against all the other texts in the corpus.
corp20dr_dfm <- dfm(corpus(corp20dr),
                    remove = stopwords("english"), stem = TRUE,
                    remove_numbers = TRUE, remove_punct = TRUE)
corp20dr_dfm
Document-feature matrix of: 28 documents, 6,595 features (85.0% sparse).
result_keyness <- textstat_keyness(corp20dr_dfm)
textplot_keyness(result_keyness,
color = c('blue', 'red'))
The result is a comparison of the Alaska platform to the "reference", which seems to be the other 27 documents. I was hoping to compare differences in word usage between the two groups of platforms (the 15 Democratic platforms against the 13 Republican platforms), but it seems I have to identify each group using docvars, and I am not sure how to do this. Any help would be appreciated.

textstat_keyness() compares a single target document against all the others (the reference), so you should group the documents by their original corpus before calling it. You can do this by using dfm_group() on a new docvar that identifies the corpus. See below for a reproducible example.
library("quanteda")
## Package version: 2.1.2
corp_a <- corpus(data_corpus_inaugural[1:5])
corp_b <- corpus(data_corpus_inaugural[6:10])
# this is the key: identifying the original corpus
# will be used to group the dfm later into just two combined "documents"
corp_a$source <- "a"
corp_b$source <- "b"
corp <- corp_a + corp_b
summary(corp)
## Corpus consisting of 10 documents, showing 10 documents:
##
## Text Types Tokens Sentences Year President FirstName
## 1789-Washington 625 1537 23 1789 Washington George
## 1793-Washington 96 147 4 1793 Washington George
## 1797-Adams 826 2577 37 1797 Adams John
## 1801-Jefferson 717 1923 41 1801 Jefferson Thomas
## 1805-Jefferson 804 2380 45 1805 Jefferson Thomas
## 1809-Madison 535 1261 21 1809 Madison James
## 1813-Madison 541 1302 33 1813 Madison James
## 1817-Monroe 1040 3677 121 1817 Monroe James
## 1821-Monroe 1259 4886 131 1821 Monroe James
## 1825-Adams 1003 3147 74 1825 Adams John Quincy
## Party source
## none a
## none a
## Federalist a
## Democratic-Republican a
## Democratic-Republican a
## Democratic-Republican b
## Democratic-Republican b
## Democratic-Republican b
## Democratic-Republican b
## Democratic-Republican b
Now we can go through the steps of forming the dfm, grouping, and getting the keyness statistics. (Here, I've removed stopwords and punctuation as well.)
# using the separate package since we are moving textstat_*() functions
# to this module package with quanteda v3 release planned in 2021
library("quanteda.textstats")
corp %>%
  tokens(remove_punct = TRUE) %>%
  tokens_remove(stopwords("en")) %>%
  dfm() %>%
  dfm_group(groups = "source") %>%
  textstat_keyness() %>%
  head()
## feature chi2 p n_target n_reference
## 1 love 11.236174 0.0008021834 10 1
## 2 mind 10.108762 0.0014756604 11 3
## 3 good 9.971163 0.0015901101 17 8
## 4 may 9.190508 0.0024327341 38 31
## 5 can 8.887529 0.0028712512 27 19
## 6 shall 7.728615 0.0054352433 23 16
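Applied to the party platforms in the question, the same pattern would look roughly like this (a sketch, not tested against your files: it assumes corp20d and corp20r are the two corpus objects from your question, and the docvar name "party" is my own choice):
library("quanteda")
library("quanteda.textstats")
# label each corpus before combining, so the grouping variable survives the "+"
corp20d$party <- "Democratic"
corp20r$party <- "Republican"
corp20dr <- corp20d + corp20r
corp20dr_grouped <- corp20dr %>%
  tokens(remove_punct = TRUE, remove_numbers = TRUE) %>%
  tokens_remove(stopwords("english")) %>%
  tokens_wordstem() %>%
  dfm() %>%
  dfm_group(groups = "party")
# with only two grouped "documents", the target/reference contrast is Dem vs GOP
result_keyness <- textstat_keyness(corp20dr_grouped, target = "Democratic")
textplot_keyness(result_keyness, color = c("blue", "red"))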

Related

Separating characters in dfm object R

Hi all,
I have imported the SOTU corpus from quanteda in R. I am somewhat new to dfm objects and want to separate the doc_id column into a name column and a year column. If this were a tibble, this code would work:
library(quanteda)
library(quanteda.corpora)
library(tidyverse)
sotu <- as_tibble(data_corpus_sotu)
sotusubsetted <- sotu %>%
separate(doc_id, c("name","year"),"-")
However, since I am new with dfm and regex, I am not sure if there is an equivalent process if I load in the data as:
library(quanteda)
library(quanteda.corpora)
library(tidyverse)
sotu <- corpus(data_corpus_sotu)
sotudfm <- dfm(sotu)
Is there some equivalent way to do this with dfm objects?
The safest method is also one that will work for any core quanteda object, meaning equally for a corpus, tokens, or dfm object. This involves using the accessor functions, not addressing the internals of the corpus or dfm objects directly, which is strongly discouraged. You can do that, but your code could break in the future if those object structures are changed. In addition, our accessor functions are generally also the most efficient method.
For this task, you want to use the docnames() function to access the document IDs; this works for the corpus as well as for the dfm.
library("quanteda")
## Package version: 2.1.2
data("data_corpus_sotu", package = "quanteda.corpora")
data.frame(doc_id = docnames(data_corpus_sotu[1:5])) %>%
tidyr::separate(doc_id, c("name", "year"), "-")
## name year
## 1 Washington 1790
## 2 Washington 1790b
## 3 Washington 1791
## 4 Washington 1792
## 5 Washington 1793
data.frame(doc_id = docnames(dfm(data_corpus_sotu[1:5]))) %>%
tidyr::separate(doc_id, c("name", "year"), "-")
## name year
## 1 Washington 1790
## 2 Washington 1790b
## 3 Washington 1791
## 4 Washington 1792
## 5 Washington 1793
You could also have taken this from the "President" and "Date" docvar fields:
data.frame(
name = data_corpus_sotu$President,
year = lubridate::year(data_corpus_sotu$Date)
) %>%
head()
## name year
## 1 Washington 1790
## 2 Washington 1790
## 3 Washington 1791
## 4 Washington 1792
## 5 Washington 1793
## 6 Washington 1794
Created on 2021-02-13 by the reprex package (v1.0.0)
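If you would rather keep these as document variables on the corpus itself (so they carry through to tokens and dfm objects built from it), a minimal sketch along the same lines (same data_corpus_sotu object; the docvar names name and year are my own choice) is:
library("quanteda")
data("data_corpus_sotu", package = "quanteda.corpora")
# split the document names once, then store both pieces with the docvars() accessor
id_parts <- tidyr::separate(
  data.frame(doc_id = docnames(data_corpus_sotu)),
  doc_id, into = c("name", "year"), sep = "-"
)
docvars(data_corpus_sotu, "name") <- id_parts$name
docvars(data_corpus_sotu, "year") <- id_parts$year
head(docvars(data_corpus_sotu))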
The following code will do exactly what you want, although it might break some operations in quanteda that look for docid_ in sotudfm@docvars, the data frame that stores the documents' relational data. For instance, it will break any filtering by sotudfm@Dimnames$docs, which is where the dimension names of the documents are listed.
sotudfm@docvars <- sotudfm@docvars %>% separate(col = docid_, c("name", "year"), "-")
> sotudfm@docvars %>% as_tibble()
# A tibble: 241 x 10
docname_ name year segid_ FirstName President Date delivery type party
<chr> <chr> <chr> <int> <chr> <chr> <date> <fct> <fct> <fct>
1 Washington-1790 Washington 1790 1 George Washington 1790-01-08 spoken SOTU Independent
2 Washington-1790b Washington 1790b 1 George Washington 1790-12-08 spoken SOTU Independent
3 Washington-1791 Washington 1791 1 George Washington 1791-10-25 spoken SOTU Independent
4 Washington-1792 Washington 1792 1 George Washington 1792-11-06 spoken SOTU Independent
5 Washington-1793 Washington 1793 1 George Washington 1793-12-03 spoken SOTU Independent
6 Washington-1794 Washington 1794 1 George Washington 1794-11-19 spoken SOTU Independent
7 Washington-1795 Washington 1795 1 George Washington 1795-12-08 spoken SOTU Independent
8 Washington-1796 Washington 1796 1 George Washington 1796-12-07 spoken SOTU Independent
9 Adams-1797 Adams 1797 1 John Adams 1797-11-22 spoken SOTU Federalist
10 Adams-1798 Adams 1798 1 John Adams 1798-12-08 spoken SOTU Federalist
Here is the code that ended up working for me:
sotudfm@docvars <- sotudfm@docvars %>%
  separate(col = docname_, c("name", "year"), "-")
This kept the doc_id intact when I ran
head(sotudfm, 10)
It appears that docid_ and docname_ are identical.

R stops scraping when there is missing data

I am using this code to loop through multiple URLs to scrape data. The code works fine until it comes to a date that has missing data. This is the error message that pops up:
Error in data.frame(away, home, away1H, home1H, awayPinnacle, homePinnacle) :
arguments imply differing number of rows: 7, 8
I am very new to coding and could not figure out how to make it keep scraping despite the missing data.
library(rvest)
library(dplyr)
get_data <- function(date) {
# Specifying URL
url <- paste0('https://classic.sportsbookreview.com/betting-odds/nba-basketball/money-line/1st-half/?date=', date)
# Reading the HTML code from website
oddspage <- read_html(url)
# Using CSS selectors to scrape away teams
awayHtml <- html_nodes(oddspage,'.eventLine-value:nth-child(1) a')
#Using CSS selectors to scrape 1Q scores
away1QHtml <- html_nodes(oddspage,'.current-score+ .first')
away1Q <- html_text(away1QHtml)
away1Q <- as.numeric(away1Q)
home1QHtml <- html_nodes(oddspage,'.score-periods+ .score-periods .current-score+ .period')
home1Q <- html_text(home1QHtml)
home1Q <- as.numeric(home1Q)
#Using CSS selectors to scrape 2Q scores
away2QHtml <- html_nodes(oddspage,'.first:nth-child(3)')
away2Q <- html_text(away2QHtml)
away2Q <- as.numeric(away2Q)
home2QHtml <- html_nodes(oddspage,'.score-periods+ .score-periods .period:nth-child(3)')
home2Q <- html_text(home2QHtml)
home2Q <- as.numeric(home2Q)
#Creating First Half Scores
away1H <- away1Q + away2Q
home1H <- home1Q + home2Q
#Using CSS selectors to scrape scores
awayScoreHtml <- html_nodes(oddspage,'.first.total')
awayScore <- html_text(awayScoreHtml)
awayScore <- as.numeric(awayScore)
homeScoreHtml <- html_nodes(oddspage, '.score-periods+ .score-periods .total')
homeScore <- html_text(homeScoreHtml)
homeScore <- as.numeric(homeScore)
# Converting away data to text
away <- html_text(awayHtml)
# Using CSS selectors to scrape home teams
homeHtml <- html_nodes(oddspage,'.eventLine-value+ .eventLine-value a')
# Converting home data to text
home <- html_text(homeHtml)
# Using CSS selectors to scrape Away Odds
awayPinnacleHtml <- html_nodes(oddspage,'.eventLine-consensus+ .eventLine-book .eventLine-book-value:nth-child(1) b')
# Converting Away Odds to Text
awayPinnacle <- html_text(awayPinnacleHtml)
# Converting Away Odds to numeric
awayPinnacle <- as.numeric(awayPinnacle)
# Using CSS selectors to scrape Pinnacle Home Odds
homePinnacleHtml <- html_nodes(oddspage,'.eventLine-consensus+ .eventLine-book .eventLine-book-value+ .eventLine-book-value b')
# Converting Home Odds to Text
homePinnacle <- html_text(homePinnacleHtml)
# Converting Home Odds to Numeric
homePinnacle <- as.numeric(homePinnacle)
# Create Data Frame
df <- data.frame(away,home,away1H,home1H,awayPinnacle,homePinnacle)
}
date_vec <- sprintf('201902%02d', 02:06)
all_data <- do.call(rbind, lapply(date_vec, get_data))
View(all_data)
I'd recommend purrr::map_dfr() instead of do.call(rbind, lapply(...)). Then you can wrap your call to get_data() with possibly(), which is a nice way to catch errors and keep going.
library(purrr)
map_dfr(date_vec, possibly(get_data, otherwise = data.frame()))
Output:
away home away1H home1H awayPinnacle homePinnacle
1 L.A. Clippers Detroit 47 65 116 -131
2 Milwaukee Washington 73 50 -181 159
3 Chicago Charlotte 60 51 192 -220
4 Brooklyn Orlando 48 44 121 -137
5 Indiana Miami 53 54 117 -133
6 Dallas Cleveland 58 55 -159 140
7 L.A. Lakers Golden State 58 63 513 -651
8 New Orleans San Antonio 50 63 298 -352
9 Denver Minnesota 61 64 107 -121
10 Houston Utah 63 50 186 -213
11 Atlanta Phoenix 58 57 110 -125
12 Philadelphia Sacramento 52 62 -139 123
13 Memphis New York 42 41 -129 114
14 Oklahoma City Boston 58 66 137 -156
15 L.A. Clippers Toronto 51 65 228 -263
16 Atlanta Washington 61 57 172 -196
17 Denver Detroit 55 68 -112 -101
18 Milwaukee Brooklyn 51 42 -211 184
19 Indiana New Orleans 53 50 -143 127
20 Houston Phoenix 63 57 -256 222
21 San Antonio Sacramento 59 63 -124 110
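If you also want to know which dates were skipped, a small variation (a sketch using purrr::safely() instead of possibly()) keeps the errors alongside the results:
library(purrr)
safe_get_data <- safely(get_data)
results <- map(date_vec, safe_get_data)
# bind the successful scrapes; a NULL result means that date errored
all_data <- map_dfr(results, "result")
failed_dates <- date_vec[map_lgl(results, ~ is.null(.x$result))]
failed_dates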

Using str_split to fill rows down data frame with number ranges and multiple numbers

I have a dataframe with crop names and their respective FAO codes. Unfortunately, some crop categories, such as 'other cereals', have multiple FAO codes, ranges of FAO codes or even worse - multiple ranges of FAO codes.
Snippet of the dataframe with the different formats for FAO codes.
> FAOCODE_crops
SPAM_full_name FAOCODE
1 wheat 15
2 rice 27
8 other cereals 68,71,75,89,92,94,97,101,103,108
27 other oil crops 260:310,312:339
31 other fibre crops 773:821
Using the following code successfully breaks down these numbers,
unlist(lapply(unlist(strsplit(FAOCODE_crops$FAOCODE, ",")), function(x) eval(parse(text = x))))
[1] 15 27 56 44 79 79 83 68 71 75 89 92 94 97 101 103 108
... but I fail to merge these numbers back into the dataframe, where every FAOCODE gets its own row.
> FAOCODE_crops$FAOCODE <- unlist(lapply(unlist(strsplit(MAPSPAM_crops$FAOCODE, ",")), function(x) eval(parse(text = x))))
Error in `$<-.data.frame`(`*tmp*`, FAOCODE, value = c(15, 27, 56, 44, :
replacement has 571 rows, data has 42
I fully understand why it doesn't merge successfully, but I can't figure out a way to fill the table with a new row for each FAOCODE as idealized below:
SPAM_full_name FAOCODE
1 wheat 15
2 rice 27
8 other cereals 68
8 other cereals 71
8 other cereals 75
8 other cereals 89
And so on...
Any help is greatly appreciated!
We can use separate_rows() to split the FAOCODE column on the commas. After that, we can loop through FAOCODE with map() and ~eval(parse(text = .x)) to evaluate each number range. Finally, we can use unnest() to expand the data frame.
library(tidyverse)
dat2 <- dat %>%
  separate_rows(FAOCODE, sep = ",") %>%
  mutate(FAOCODE = map(FAOCODE, ~eval(parse(text = .x)))) %>%
  unnest(cols = FAOCODE)
dat2
# # A tibble: 140 x 2
# SPAM_full_name FAOCODE
# <chr> <dbl>
# 1 wheat 15
# 2 rice 27
# 3 other cereals 68
# 4 other cereals 71
# 5 other cereals 75
# 6 other cereals 89
# 7 other cereals 92
# 8 other cereals 94
# 9 other cereals 97
# 10 other cereals 101
# # ... with 130 more rows
DATA
dat <- read.table(text = " SPAM_full_name FAOCODE
1 wheat 15
2 rice 27
8 'other cereals' '68,71,75,89,92,94,97,101,103,108'
27 'other oil crops' '260:310,312:339'
31 'other fibre crops' '773:821'",
header = TRUE, stringsAsFactors = FALSE)
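If you would rather avoid eval(parse()), a rough equivalent (a sketch on the same dat object) is to split the a:b ranges yourself and expand them with seq():
library(tidyverse)
dat3 <- dat %>%
  separate_rows(FAOCODE, sep = ",") %>%
  separate(FAOCODE, into = c("from", "to"), sep = ":", fill = "right", convert = TRUE) %>%
  mutate(to = coalesce(to, from),
         FAOCODE = map2(from, to, seq)) %>%
  unnest(cols = FAOCODE) %>%
  select(SPAM_full_name, FAOCODE)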

How to apply stopwords accurately in French using R

I'm trying to pull a book using the Gutenberg library and then remove French stopwords. I've been able to do this accurately in English by doing this:
twistEN <- gutenberg_download(730)
twistEN <- twistEN[118:nrow(twistEN),]
twistEN <- twistEN %>%
unnest_tokens(word, text)
data(stop_words)
twistEN <- twistEN %>%
anti_join(stop_words)
countsEN <- twistEN %>%
count(word, sort=TRUE)
top.en <- countsEN[1:20,]
I can see here that the top 20 words (by frequency) in the English version of Oliver Twist are these:
word n
<chr> <int>
1 oliver 746
2 replied 464
3 bumble 364
4 sikes 344
5 time 329
6 gentleman 309
7 jew 294
8 boy 291
9 fagin 291
10 dear 277
11 door 238
12 head 226
13 girl 223
14 night 218
15 sir 210
16 lady 209
17 hand 205
18 eyes 204
19 rose 201
20 cried 182
I'm trying to accomplish the same thing with the French version of the same novel:
twistFR <- gutenberg_download(16023)
twistFR <- twistFR[123:nrow(twistFR),]
twistFR <- twistFR %>%
unnest_tokens(word, text)
stop_french <- data.frame(word = stopwords::stopwords("fr"), stringsAsFactors = FALSE)
stop_french <- get_stopwords("fr","snowball")
as.data.frame(stop_french)
twistFR <- twistFR %>%
anti_join(stop_words, by = c('word')) %>%
anti_join(stop_french, by = c("word"))
countsFR <- twistFR %>%
count(word, sort=TRUE)
top.fr <- countsFR[1:20,]
I did alter the code for the French stopwords based on info I found online, and it is removing some stopwords. But this is the list I'm getting:
word n
<chr> <int>
1 dit 1375
2 r 1311
3 tait 1069
4 re 898
5 e 860
6 qu'il 810
7 plus 780
8 a 735
9 olivier 689
10 si 673
11 bien 656
12 tout 635
13 tre 544
14 d'un 533
15 comme 519
16 c'est 494
17 pr 481
18 pondit 472
19 juif 450
20 monsieur 424
At least half of these words should be getting captured by a stopwords list and they're not. Is there something I'm doing wrong in my code? I'm new to tidy text, so I'm sure there are better ways to get at this.
I used a few different packages to get what you want. I used the stopwords from tidystopwords, as these are based on the Universal Dependencies language models. But you could use the stopwords from snowball, the stopwords package, or the proustr package. You might even decide to use the stopwords from multiple packages, depending on your requirements and what you consider to be stopwords. All stopword lists are slightly different.
I use the udpipe package to split the text into its separate tokens. This takes longer than unnest_tokens from tidytext (but I use the default options, which include POS tagging and lemmatisation). I find that unnest_tokens doesn't work well with non-English languages.
library(gutenbergr)
library(tidystopwords)
library(udpipe)
library(dplyr)
# get twist in French
twistFR <- gutenberg_download(16023)
# Convert all lines to utf8 (needed on my system)
twistFR$text <- iconv(twistFR$text, to = "UTF-8")
# get French stopwords based on the UD language model
my_french_stopwords <- generate_stoplist(lang_name = "French")
my_french_stopwords <- data.frame(word = my_french_stopwords, stringsAsFactors = FALSE)
# download udpipe model for french language
ud_model <- udpipe_download_model(language = "french")
ud_model_fr <- udpipe_load_model(ud_model)
# set parallel.cores. Udpipe annotate can take a while as it does a lot more than just tokenizing.
ud_twistFR <- udpipe_annotate(ud_model_fr, twistFR$text[123:nrow(twistFR)], parallel.cores = 3)
# transform to data.frame
ud_twistFR_df <- data.frame(ud_twistFR, stringsAsFactors = FALSE)
# put tokens in lowercase, remove stopwords and punctuation
ud_twistFR_df <- ud_twistFR_df %>%
  mutate(token = tolower(token)) %>%
  anti_join(my_french_stopwords, by = c("token" = "word")) %>%
  filter(upos != "PUNCT")  # remove punctuation
# count tokens
ud_countsFR <- ud_twistFR_df %>%
count(token, sort=TRUE)
ud_countsFR[1:20,]
# A tibble: 20 x 2
token n
<chr> <int>
1 pas 1558
2 dit 1366
3 m. 915
4 olivier 843
5 plus 775
6 bien 652
7 répondit 469
8 juif 435
9 monsieur 412
10 bumble 367
11 enfant 355
12 sikes 341
13 jeune 336
14 air 290
15 porte 281
16 tête 279
17 encore 278
18 homme 267
19 même 261
20 demanda 257
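Since udpipe_annotate() also returns a lemma column, a small variant of the counting step (a sketch on the ud_twistFR_df built above) collapses inflected forms before counting:
# count lemmas instead of raw tokens, e.g. "dit" and "disait" both count towards "dire"
ud_twistFR_df %>%
  count(lemma = tolower(lemma), sort = TRUE) %>%
  head(20)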
It turns out that my main problem was actually not the stop words. It was that accented characters were coming through as codes instead of as the accents. I applied this:
twistFR$text <- iconv(twistFR$text, "latin1", "UTF-8")
And the situation pretty much resolved itself. I also applied the larger stopwords-iso list. Thanks for both of your comments!
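For reference, putting that fix together with the original tidytext pipeline might look like this (a sketch, assuming the same gutenbergr download as in the question):
library(gutenbergr)
library(tidytext)
library(dplyr)
twistFR <- gutenberg_download(16023)
twistFR <- twistFR[123:nrow(twistFR), ]
# re-encode so accented characters survive tokenization
twistFR$text <- iconv(twistFR$text, "latin1", "UTF-8")
# the stopwords-iso source is a larger list than the snowball default
stop_french <- data.frame(word = stopwords::stopwords("fr", source = "stopwords-iso"),
                          stringsAsFactors = FALSE)
countsFR <- twistFR %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_french, by = "word") %>%
  count(word, sort = TRUE)
head(countsFR, 20)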

R tidyr::spread duplicate error

I have the following data:
ID AGE SEX RACE COUNTRY VISITNUM VSDTC VSTESTCD VSORRES
32320058 58 M WHITE UKRAINE 2 2016-04-28 DIABP 74
32320058 58 M WHITE UKRAINE 1 2016-04-21 HEIGHT 183
32320058 58 M WHITE UKRAINE 1 2016-04-21 SYSBP 116
32320058 58 M WHITE UKRAINE 2 2016-04-28 SYSBP 116
32320058 58 M WHITE UKRAINE 1 2016-04-21 WEIGHT 109
22080090 75 M WHITE MEXICO 1 2016-05-17 DIABP 81
22080090 75 M WHITE MEXICO 1 2016-05-17 HEIGHT 176
22080090 75 M WHITE MEXICO 1 2016-05-17 SYSBP 151
I would like to reshape the data using tidyr::spread to get the following output:
ID AGE SEX RACE COUNTRY VISITNUM VSDTC DIABP SYSBP WEIGHT HEIGHT
32320058 58 M WHITE UKRAINE 2 2016-04-28 74 116 NA NA
32320058 58 M WHITE UKRAINE 1 2016-04-21 NA 116 109 183
22080090 75 M WHITE MEXICO 1 2016-05-17 81 151 NA 176
I receive duplicate errors, although I don't have duplicates in my data!
df1=spread(df,VSTESTCD,VSORRES)
Error: Duplicate identifiers for rows (36282, 36283), (59176, 59177), (59179, 59180)
Assuming I understand your question correctly:
# As many rows are identical, we should create a unique identifier column
# Let's take iris dataset as an example
# install caret package if you don't have it
install.packages("caret")
# require library
library(tidyverse)
library(caret)
# check the dataset (iris)
head(iris)
# assume that I gather all columns in iris dataset, except Species variable
# Create an unique identifier column and transform wide data to long data as follow
iris_gather<- iris %>% dplyr::mutate(ID=row_number(Species)) %>% tidyr::gather(key=Type,value=my_value,1:4)
# check first six rows
head(iris_gather)
# using *spread* to spread out the data
iris_spread<- iris_gather %>% dplyr::group_by(ID) %>% tidyr::spread(key=Type,value=my_value) %>% dplyr::ungroup() %>% dplyr::select(-ID)
# Check first six rows of iris_spread
head(iris_spread)
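Applied to the data in the question, the same idea (a sketch, assuming the data frame is called df as in the error message) is to add a counter that makes each identifier/key combination unique before spreading:
library(dplyr)
library(tidyr)
df1 <- df %>%
  group_by(ID, AGE, SEX, RACE, COUNTRY, VISITNUM, VSDTC, VSTESTCD) %>%
  mutate(obs = row_number()) %>%   # distinguishes true duplicate measurements
  ungroup() %>%
  spread(VSTESTCD, VSORRES)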
