Time series analysis of text in R

If I have some data like so:
df = data.frame(
  person = c('jim', 'john', 'pam', 'jim'),
  date   = c('2018-01-01', '2018-02-01', '2018-03-01', '2018-04-01'),
  text   = c('the lonely engineer',
             'tax season is upon us, engineers, do your taxes!',
             'i am so lonely',
             'rage coding is the best')
)
and I want to understand trending terms by date, how can I go about that? So far I have tried quanteda:
xCorp = corpus(df, text_field = 'text')

x = tokens(xCorp) %>%
  tokens_remove(
    c(stopwords('english'), 'western digital', 'wd', 'nil'),
    padding = TRUE
  ) %>%
  dfm(
    remove_numbers = TRUE,
    remove_punct = TRUE,
    remove_symbols = TRUE,
    concatenator = ' '
  )

x2 = dfm(x, groups = 'date')
This gets me part of the way there, but I'm not sure it's the best way.
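One way to push the quanteda route further, sketched here under the assumption that the quanteda.textstats package is available (that is where textstat_frequency() lives in current quanteda), is to build a cleaned dfm and then rank features per date:

library(quanteda)
library(quanteda.textstats)  # assumed: provides textstat_frequency()

xCorp = corpus(df, text_field = 'text')

# Tokenize and clean in one pass, then build the dfm
x = tokens(xCorp, remove_punct = TRUE, remove_numbers = TRUE, remove_symbols = TRUE) %>%
  tokens_remove(stopwords('english')) %>%
  dfm()

# Top 3 features per date; the grouping vector has one entry per document
textstat_frequency(x, n = 3, groups = docvars(x, 'date'))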

Using the tidyverse together with tidytext, I was able to do the following:

library(tidytext)  # unnest_tokens() comes from tidytext

df = df %>%
  group_by(date) %>%
  unnest_tokens(word, text) %>%
  count(word, sort = TRUE)
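To turn those per-date counts into something like "trending" terms, one common follow-up is tf-idf by date, which surfaces words that are unusually frequent on a given date. A minimal sketch, assuming the dplyr and tidytext packages and starting again from the original df defined at the top (before it is overwritten above); the name trending is made up for illustration:

library(dplyr)
library(tidytext)

trending <- df %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word") %>%   # drop common stop words
  count(date, word, sort = TRUE) %>%       # term counts per date
  bind_tf_idf(word, date, n) %>%           # treat each date as a "document"
  group_by(date) %>%
  slice_max(tf_idf, n = 3) %>%             # top 3 distinctive terms per date
  ungroup()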

Related

R: How to create a Drilldown Highchart using loops

While doing a job, I ran into a problem that I don't know how to solve.
I have a data frame with two columns:
date
value
It has a total of 1303 rows.
For each year there are 12 values (one per month), except the last year, which has only 7.
The task is to create a 'drilldown' style chart using the 'highcharter' library, and the problem is that I don't know how to do it efficiently.
The solution that comes to mind is not very efficient; I show it below so you can see what I mean.
# Load packages
library(tidyverse)
library(highcharter)
library(lubridate)

# Load dataset
df <- read.csv('example.csv')

# Prepare df to use
dfDD <- tibble(name = year(df$date),
               y = round(df$value, digits = 2),
               drilldown = name)

# Create a data frame to use in 'drilldown' (one for each year)
df1913 <- df %>%
  filter(year(date) == 1913) %>%
  data.frame()

df1914 <- df %>%
  filter(year(date) == 1914) %>%
  data.frame()

# Create a drilldown chart using the highcharter library
highchart() %>%
  hc_chart(type = "column") %>%
  hc_title(text = "Example Drilldown") %>%
  hc_xAxis(type = "category") %>%
  hc_legend(enabled = FALSE) %>%
  hc_plotOptions(series = list(borderWidth = 2,
                               dataLabels = list(enabled = TRUE))) %>%
  hc_add_series(data = dfDD,
                name = "Mean",
                colorByPoint = TRUE) %>%
  hc_drilldown(allowPointDrilldown = TRUE,
               series = list(list(id = 1913,
                                  data = list_parse2(df1913)),
                             list(id = 1914,
                                  data = list_parse2(df1914))))
Looking at my solution, I realized that in order to complete the chart I would have to create a subset of values for each year. I then tried to find a more efficient solution using a for loop, but so far I can't get it to work.
Is there a more efficient way to create this chart using a loop?
If it can be done in another way than with loops, I would also like to know.
Thank you for reading my question; I hope I explained myself well.
Using split and purrr::imap you could split your data by year and loop over the resulting list to convert it to the nested list object required by hc_drilldown. Note: it's important to make the id numeric and to pass an unnamed list.
library(tidyverse)
library(highcharter)
library(lubridate)

series <- split(df, year(df$date)) %>%
  purrr::imap(function(x, y) list(id = as.numeric(y), data = list_parse2(x)))

# Unname the list
names(series) <- NULL

highchart() %>%
  hc_chart(type = "column") %>%
  hc_title(text = "Example Drilldown") %>%
  hc_xAxis(type = "category") %>%
  hc_legend(enabled = FALSE) %>%
  hc_plotOptions(series = list(borderWidth = 2,
                               dataLabels = list(enabled = TRUE))) %>%
  hc_add_series(data = dfDD,
                name = "Mean",
                colorByPoint = TRUE) %>%
  hc_drilldown(allowPointDrilldown = TRUE,
               series = series)

Web scraping - SAPPLY function - Error in readBin(5L, "raw", 65536L) : Failure when receiving data from the peer

I am struggling with web scraping. Chunk A works fine, but Chunk B somehow doesn't.
Could you run chunks A and B on your computer and give me a hint about what's wrong with B?
The best solution would be code that grabs all the hyperlinks with the same name (here 'Statement'), so that I would not need so many different for loops.
Lots of kisses to everyone who is already puzzling!
Ciocclata
What I tried so far: I scraped the pages separately and checked the HTML source code.
CHUNK A (works)
library(rvest)
library(dplyr)
library(data.table)
library(lubridate)

statements = data.table(NULL)

# 2005, 2004, 2002: (YES)
for (pages_years in c(2005, 2004, 2002)) {
  link = paste0("https://www.federalreserve.gov/monetarypolicy/fomchistorical", pages_years, ".htm")
  page = read_html(link)
  statement_links = page %>%
    html_nodes(".col-md-6+ .col-md-6 p:nth-child(2) a") %>%
    html_attr("href") %>%
    paste("https://www.federalreserve.gov", ., sep = "")

  get_statement = function(statement_link) {
    statement_page = read_html(statement_link)
    statement = statement_page %>% html_nodes("td p") %>% html_text() %>% paste(collapse = ",")
    return(statement)
  }
  statement = sapply(statement_links, FUN = get_statement, USE.NAMES = FALSE)

  get_date = function(date_link) {
    date_page = read_html(date_link)
    date = date_page %>% html_nodes("i") %>% html_text()
    return(date)
  }
  date = sapply(statement_links, FUN = get_date, USE.NAMES = FALSE)
  date = format(mdy(date), "%Y-%m-%d")

  print(paste("Page:", pages_years))
  statements = rbind(statements, data.table(date, statement, stringsAsFactors = FALSE))
}

DT = statements
DT = DT[date <= 2021]
DT = DT[order(date)]
View(DT)
CHUNK B (doesn't work)
library(rvest)
library(dplyr)
library(data.table)
library(lubridate)

statements = data.table(NULL)

# 2010, 2008, 2007:
for (pages_years in c(2010, 2008, 2007)) {
  link = paste0("https://www.federalreserve.gov/monetarypolicy/fomchistorical", pages_years, ".htm")
  page = read_html(link)
  statement_links = page %>%
    html_nodes(".col-md-12 p:nth-child(1) a , .col-md-6+ .col-md-6 p:nth-child(2) a") %>%
    html_attr("href") %>%
    paste("https://www.federalreserve.gov", ., sep = "")

  get_statement = function(statement_link) {
    statement_page = read_html(statement_link)
    statement = statement_page %>% html_nodes(".hidden-sm+ .col-md-8") %>% html_text()
    return(statement)
  }
  statement = sapply(statement_links, FUN = get_statement, USE.NAMES = FALSE)

  get_date = function(date_link) {
    date_page = read_html(date_link)
    date = date_page %>% html_nodes(".article__time") %>% html_text()
    return(date)
  }
  date = sapply(statement_links, FUN = get_date, USE.NAMES = FALSE)
  date = format(mdy(date), "%Y-%m-%d")

  print(paste("Page:", pages_years))
  statements = rbind(statements, data.frame(date, statement, stringsAsFactors = FALSE))
}

DT = statements
DT = DT[date <= 2021]
DT = DT[order(date)]
View(DT)
It may be better to have a tryCatch option. With possibly/safely from purrr, this can be done more easily
library(purrr)
library(rvest)
library(dplyr)
library(data.table)   # for data.table() and the DT[...] syntax below
library(lubridate)    # for mdy()

get_statement <- possibly(function(statement_link) {
  statement_page = read_html(statement_link)
  statement = statement_page %>% html_nodes(".hidden-sm+ .col-md-8") %>% html_text()
  return(statement)
}, otherwise = NA)

get_date <- possibly(function(date_link) {
  date_page = read_html(date_link)
  date = date_page %>% html_nodes(".article__time") %>% html_text()
  return(date)
}, otherwise = NA)

# (re)initialise the results table
statements = data.table(NULL)

for (pages_years in c(2010, 2008, 2007)) {
  link = paste0("https://www.federalreserve.gov/monetarypolicy/fomchistorical", pages_years, ".htm")
  page = read_html(link)
  statement_links = page %>%
    html_nodes(".col-md-12 p:nth-child(1) a , .col-md-6+ .col-md-6 p:nth-child(2) a") %>%
    html_attr("href") %>%
    paste("https://www.federalreserve.gov", ., sep = "")

  statement = sapply(statement_links, FUN = get_statement, USE.NAMES = FALSE)
  date = sapply(statement_links, FUN = get_date, USE.NAMES = FALSE)
  date = format(mdy(date), "%Y-%m-%d")

  print(paste("Page:", pages_years))
  statements = rbind(statements, data.frame(date, statement, stringsAsFactors = FALSE))
}
#[1] "Page: 2010"
#[1] "Page: 2008"
#[1] "Page: 2007"
DT = statements
DT = DT[date<=2021]
DT = DT[order(date)]
Output:
> dim(DT)
[1] 30 2
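As an aside on the question's wish to collect every hyperlink whose visible text is "Statement": a minimal sketch is below (the helper name get_statement_links is invented here, and it assumes rvest 1.0+ for html_elements() and html_text2()). It filters all anchors on a page by their text instead of relying on positional CSS selectors:

library(rvest)

# Hypothetical helper: absolute URLs of all links on a page whose text is "Statement"
get_statement_links <- function(page_url) {
  page  <- read_html(page_url)
  links <- page %>% html_elements("a")
  keep  <- trimws(html_text2(links)) == "Statement"
  links[keep] %>%
    html_attr("href") %>%
    paste0("https://www.federalreserve.gov", .)   # hrefs on this site are relative
}

get_statement_links("https://www.federalreserve.gov/monetarypolicy/fomchistorical2010.htm")

The same loop over years could then call this helper once per page instead of maintaining a separate selector for each layout.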

How to put line breaks in the labels of a tbl_regression() table when outputting to LaTeX?

According to the documentation of the gtsummary package, you can use <br> in add_significance_stars() to break the labels of the table showing the results of the regression model in HTML, but it does not work for LaTeX.
I have tried other line-break methods such as \n, but it still does not work. How can I make line breaks in LaTeX?
Here is an example in HTML.
df <-
  mtcars %>%
  lm(mpg ~ ., data = .)

df %>%
  tbl_regression() %>%
  add_significance_stars(
    hide_se = TRUE,
    pattern = "{estimate}{stars}<br>({std.error})"
  ) %>%
  modify_header(estimate ~ "OLS<br>result")
And here is a LaTeX example.
df %>%
  tbl_regression() %>%
  add_significance_stars(
    hide_se = TRUE,
    pattern = "{estimate}{stars}<br>({std.error})"
  ) %>%
  modify_header(estimate ~ "OLS<br>result") %>%
  as_kable_extra(
    format = "latex",
    booktabs = TRUE
  )
I created a table based on the answer, but I found that this method breaks the layout when using tbl_merge().
Here is the problem code again:
# make a nested data frame
nest_df <-
  mtcars %>%
  tibble() %>%
  group_nest(vs)

# make a model function
mod_fun <- function(df) lm(mpg ~ ., data = df)

# map the function over the nested data
nest_df <-
  nest_df %>%
  mutate(model = map(data, mod_fun))

# make the tables
nest_df <-
  nest_df %>%
  mutate(
    tbl = map(
      .x = model,
      ~ tbl_regression(.x) %>%
        add_significance_stars(
          hide_se = TRUE,
          pattern = "{estimate}{stars}\\\\&({std.error})"
        ) %>%
        modify_header(estimate ~ "OLS\\\\&result")
    )
  )

# merge the tables
nest_df_m <-
  tbl_merge(
    tbls = nest_df$tbl,
    tab_spanner = c("type1", "type2")
  )

# output the merged table
nest_df_m %>%
  as_kable_extra(
    format = "latex",
    booktabs = TRUE,
    escape = FALSE
  ) %>%
  kable_styling(position = "center")
Maybe this fits your needs. You could add a line break by
adding \\\\ (which gives a \\ in the LaTeX code),
adding an & to put the std.error in the same column as the estimate, and
setting escape = FALSE in as_kable_extra().
df %>%
  tbl_regression() %>%
  add_significance_stars(
    hide_se = TRUE,
    pattern = "{estimate}{stars}\\\\&({std.error})"
  ) %>%
  modify_header(estimate ~ "OLS\\\\&result") %>%
  as_kable_extra(
    format = "latex",
    booktabs = TRUE,
    escape = FALSE
  )

Display Alternative color_bar value in Formattable Table

Is it possible to populate a formattable color_bar with an alternative display value (i.e. a value other than the value used to determine the size of the color_bar)?
In the table below, I want to override the displayed ttl values with the following:
c(1000,1230,1239,1222,1300,1323,1221)
library(tidyverse)
library(knitr)
library(kableExtra)
library(formattable)

tchart <- data.frame(id = 1:7,
                     Student = c("Billy", "Jane", "Lawrence", "Thomas", "Clyde", "Elizabeth", "Billy Jean"),
                     grade3 = c(55, 70, 75, 64, 62, 55, 76),
                     ttl = c(105, 120, 125, 114, 112, 105, 126),
                     avg = c(52.31, 53.0, 54.2, 51.9, 52.0, 52.7, 53.0))

tchart %>%
  mutate(id = cell_spec(id, "html", background = "red", color = "white", align = "center")) %>%
  mutate(grade3 = color_bar("lightgreen")(grade3)) %>%
  mutate(ttl = color_bar("lightgray")(ttl)) %>%
  mutate(avg = color_tile("white", "red")(avg)) %>%
  kable("html", escape = FALSE) %>%
  kable_styling("hover", full_width = FALSE) %>%
  column_spec(4, width = "4cm")
I checked the documentation and didn't see this as a possibility, but I was hoping there was a workaround or custom function solution.
I don't think you can quite pass it another set of values, but there are a couple of options that you might find workable.
One thing to note first is that color_bar() can accept two values - a color, and a function that will take the vector of values and transform them to numbers between 0 and 1. By default, that function is formattable::proportion(), which compares everything against the max value. But if you used your display values for ttl, you could conceivably transform the bars to be whatever length you wanted by writing your own function. (See: https://rdrr.io/cran/formattable/man/color_bar.html)
Another possibility would be to make your own formatter. Some examples here:
https://www.littlemissdata.com/blog/prettytables
So, I think you can put the numbers you want in the display, and hopefully can use a function to transform or map those values to get the bar lengths between 0 and 1 that you're looking for.
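To make that first option concrete, here is a minimal sketch (the names display_ttl, bar_ttl and bar_len are made up for illustration). color_bar()'s fun argument is applied to the column being formatted and should return widths between 0 and 1 (the default is formattable::proportion()), so a custom fun can ignore the displayed values and compute the widths from a second vector instead:

library(formattable)

display_ttl <- c(1000, 1230, 1239, 1222, 1300, 1323, 1221)  # numbers to show
bar_ttl     <- c(105, 120, 125, 114, 112, 105, 126)         # numbers driving bar width

# Ignore the displayed values (x) and derive widths from bar_ttl instead
bar_len <- function(x, ...) proportion(bar_ttl)

# Returns HTML spans showing display_ttl, with bars sized by bar_ttl
color_bar("lightgray", fun = bar_len)(display_ttl)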
Add a new variable ttl_bar to determine the size of the bar, and let the ttl variable supply the displayed value. I use gsub() to swap ttl_bar for ttl in the rendered HTML.
tchart <- data.frame(id = 1:7,
                     Student = c("Billy", "Jane", "Lawrence", "Thomas", "Clyde", "Elizabeth", "Billy Jean"),
                     grade3 = c(55, 70, 75, 64, 62, 55, 76),
                     ttl = c(1000, 1230, 1239, 1222, 1300, 1323, 1221),
                     avg = c(52.31, 53.0, 54.2, 51.9, 52.0, 52.7, 53.0),
                     ttl_bar = c(105, 120, 125, 114, 112, 105, 126))

tchart %>%
  mutate(id = cell_spec(id, "html", background = "red", color = "white", align = "center")) %>%
  mutate(grade3 = color_bar("lightgreen")(grade3)) %>%
  mutate(avg = color_tile("white", "red")(avg)) %>%
  mutate(ttl = pmap(list(ttl_bar, ttl, color_bar("lightgray")(ttl_bar)), gsub)) %>%
  select(-ttl_bar) %>%
  kable("html", escape = FALSE) %>%
  kable_styling("hover", full_width = FALSE) %>%
  column_spec(4, width = "4cm")
To be more careful, rewrite the gsub() step as mutate(ttl = pmap(list(ttl_bar, ttl, color_bar("lightgray")(ttl_bar)), ~ gsub(paste0(">", ..1, "<"), paste0(">", ..2, "<"), ..3))) so that only the displayed number between the tags is replaced.
I came up with a better way that uses the fun argument of color_bar(), as in the following code.
override = function(x, y) y / 200

tchart <- data.frame(id = 1:7,
                     Student = c("Billy", "Jane", "Lawrence", "Thomas", "Clyde", "Elizabeth", "Billy Jean"),
                     grade3 = c(55, 70, 75, 64, 62, 55, 76),
                     ttl = c(105, 120, 125, 114, 112, 105, 126),
                     avg = c(52.31, 53.0, 54.2, 51.9, 52.0, 52.7, 53.0),
                     ttl_bar = c(1000, 1230, 1239, 1222, 1300, 1323, 1221))

tchart %>%
  mutate(id = cell_spec(id, "html", background = "red", color = "white", align = "center")) %>%
  mutate(grade3 = color_bar("lightgreen")(grade3)) %>%
  mutate(avg = color_tile("white", "red")(avg)) %>%
  mutate(ttl = color_bar("lightgray", fun = override, ttl)(ttl_bar)) %>%
  select(-ttl_bar) %>%
  kable("html", escape = FALSE) %>%
  kable_styling("hover", full_width = FALSE) %>%
  column_spec(4, width = "4cm")

Disaggregate in the context of a time series

I have a dataset that I want to visualize overall and disaggregated by a few different variables. I created a flexdashboard with a toy shiny app to select the type of disaggregation, and working code to plot the correct subset.
My approach is repetitive, which is a hint that I'm missing a better way to do this. The piece that's tripping me up is the need to count by date and expand the matrix. I'm not sure how to get group counts by week in one pipe, so I do it in several steps and combine.
Thoughts?
(ps. I asked this question on RStudio Community, but I think it's probably more of a "SO question". I don't have permissions to delete it from RSC, so apologies for the cross-post.)
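For the specific sub-question of getting group counts by week in one pipe, here is a minimal sketch (assuming dplyr, tidyr and lubridate, and the dat frame created in the app's global chunk below; the name weekly is made up). The full toy app follows.

library(dplyr)
library(tidyr)
library(lubridate)

weekly <- dat %>%
  count(date = floor_date(date, "week")) %>%                  # one row per week with a count n
  complete(date = seq(min(date), max(date), by = "1 week"),   # add weeks that have no data
           fill = list(n = 0))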
---
title: "test"
output:
  flexdashboard::flex_dashboard:
    theme: bootstrap
runtime: shiny
---
```{r setup, include=FALSE}
library(flexdashboard)
library(tidyverse)
library(tibbletime)
library(dygraphs)
library(magrittr)
library(xts)
```
```{r global, include=FALSE}
set.seed(1)
dat <- data.frame(date = seq(as.Date("2018-01-01"),
                             as.Date("2018-06-30"),
                             "days"),
                  sex = sample(c("male", "female"), 181, replace = TRUE),
                  lang = sample(c("english", "spanish"), 181, replace = TRUE),
                  age = sample(20:35, 181, replace = TRUE))
dat <- sample_n(dat, 80)
```
Sidebar {.sidebar}
=====================================
```{r}
radioButtons("diss", label = "Disaggregation",
             choices = list("All" = 1, "By Sex" = 2, "By Language" = 3),
             selected = 1)
```
Page 1
=====================================
```{r}
# all
all <- reactive(
  dat %>%
    mutate(new = 1) %>%
    arrange(date) %>%
    # time series analysis
    as_tbl_time(index = date) %>% # convert to tibble time object
    select(date, new) %>%
    collapse_by('1 week', side = "start", clean = TRUE) %>%
    group_by(date) %>%
    mutate(total = sum(new, na.rm = TRUE)) %>%
    distinct(date, .keep_all = TRUE) %>%
    ungroup() %>%
    # expand matrix to include weeks without data
    complete(date = seq(date[1],
                        date[length(date)],
                        by = "1 week"),
             fill = list(total = 0))
)

# males only
males <- reactive(
  dat %>%
    filter(sex == "male") %>%
    mutate(new = 1) %>%
    arrange(date) %>%
    # time series analysis
    as_tbl_time(index = date) %>%
    select(date, new) %>%
    collapse_by('1 week', side = "start", clean = TRUE) %>%
    group_by(date) %>%
    mutate(total_m = sum(new, na.rm = TRUE)) %>%
    distinct(date, .keep_all = TRUE) %>%
    ungroup() %>%
    # expand matrix to include weeks without data
    complete(date = seq(date[1],
                        date[length(date)],
                        by = "1 week"),
             fill = list(total_m = 0))
)

# females only
females <- reactive(
  dat %>%
    filter(sex == "female") %>%
    mutate(new = 1) %>%
    arrange(date) %>%
    # time series analysis
    as_tbl_time(index = date) %>%
    select(date, new) %>%
    collapse_by('1 week', side = "start", clean = TRUE) %>%
    group_by(date) %>%
    mutate(total_f = sum(new, na.rm = TRUE)) %>%
    distinct(date, .keep_all = TRUE) %>%
    ungroup() %>%
    # expand matrix to include weeks without data
    complete(date = seq(date[1],
                        date[length(date)],
                        by = "1 week"),
             fill = list(total_f = 0))
)

# english only
english <- reactive(
  dat %>%
    filter(lang == "english") %>%
    mutate(new = 1) %>%
    arrange(date) %>%
    # time series analysis
    as_tbl_time(index = date) %>%
    select(date, new) %>%
    collapse_by('1 week', side = "start", clean = TRUE) %>%
    group_by(date) %>%
    mutate(total_e = sum(new, na.rm = TRUE)) %>%
    distinct(date, .keep_all = TRUE) %>%
    ungroup() %>%
    # expand matrix to include weeks without data
    complete(date = seq(date[1],
                        date[length(date)],
                        by = "1 week"),
             fill = list(total_e = 0))
)

# spanish only
spanish <- reactive(
  dat %>%
    filter(lang == "spanish") %>%
    mutate(new = 1) %>%
    arrange(date) %>%
    # time series analysis
    as_tbl_time(index = date) %>%
    select(date, new) %>%
    collapse_by('1 week', side = "start", clean = TRUE) %>%
    group_by(date) %>%
    mutate(total_s = sum(new, na.rm = TRUE)) %>%
    distinct(date, .keep_all = TRUE) %>%
    ungroup() %>%
    # expand matrix to include weeks without data
    complete(date = seq(date[1],
                        date[length(date)],
                        by = "1 week"),
             fill = list(total_s = 0))
)

# combine
totals <- reactive({
  all <- all()
  females <- females()
  males <- males()
  english <- english()
  spanish <- spanish()
  all %>%
    select(date, total) %>%
    full_join(select(females, date, total_f), by = "date") %>%
    full_join(select(males, date, total_m), by = "date") %>%
    full_join(select(english, date, total_e), by = "date") %>%
    full_join(select(spanish, date, total_s), by = "date")
})

# convert to xts
totals_ <- reactive({
  totals <- totals()
  xts(totals, order.by = totals$date)
})

# plot
renderDygraph({
  totals_ <- totals_()
  if (input$diss == 1) {
    dygraph(totals_[, "total"],
            main = "All") %>%
      dySeries("total", label = "All") %>%
      dyRangeSelector() %>%
      dyOptions(useDataTimezone = FALSE,
                stepPlot = TRUE,
                drawGrid = FALSE,
                fillGraph = TRUE)
  } else if (input$diss == 2) {
    dygraph(totals_[, c("total_f", "total_m")],
            main = "By sex") %>%
      dyRangeSelector() %>%
      dySeries("total_f", label = "Female") %>%
      dySeries("total_m", label = "Male") %>%
      dyOptions(useDataTimezone = FALSE,
                stepPlot = TRUE,
                drawGrid = FALSE,
                fillGraph = TRUE)
  } else {
    dygraph(totals_[, c("total_e", "total_s")],
            main = "By language") %>%
      dyRangeSelector() %>%
      dySeries("total_e", label = "English") %>%
      dySeries("total_s", label = "Spanish") %>%
      dyOptions(useDataTimezone = FALSE,
                stepPlot = TRUE,
                drawGrid = FALSE,
                fillGraph = TRUE)
  }
})
```
Update:
@Jon Spring suggested writing a function to reduce some repetition (applied below), which is a nice improvement. The basic approach is the same, however: segment, calculate, combine, plot. Is there a way to do this without breaking the data apart and putting it back together?
---
title: "test"
output:
  flexdashboard::flex_dashboard:
    theme: bootstrap
runtime: shiny
---
```{r setup, include=FALSE}
library(flexdashboard)
library(tidyverse)
library(tibbletime)
library(dygraphs)
library(magrittr)
library(xts)
```
```{r global, include=FALSE}
# generate data
set.seed(1)
dat <- data.frame(date = seq(as.Date("2018-01-01"),
                             as.Date("2018-06-30"),
                             "days"),
                  sex = sample(c("male", "female"), 181, replace = TRUE),
                  lang = sample(c("english", "spanish"), 181, replace = TRUE),
                  age = sample(20:35, 181, replace = TRUE))
dat <- sample_n(dat, 80)

# Jon Spring's function
prep_dat <- function(filtered_dat, col_name = "total") {
  filtered_dat %>%
    mutate(new = 1) %>%
    arrange(date) %>%
    # time series analysis
    tibbletime::as_tbl_time(index = date) %>% # convert to tibble time object
    select(date, new) %>%
    tibbletime::collapse_by("1 week", side = "start", clean = TRUE) %>%
    group_by(date) %>%
    mutate(total = sum(new, na.rm = TRUE)) %>%
    distinct(date, .keep_all = TRUE) %>%
    ungroup() %>%
    # expand matrix to include weeks without data
    complete(
      date = seq(date[1], date[length(date)], by = "1 week"),
      fill = list(total = 0)
    )
}
```
Sidebar {.sidebar}
=====================================
```{r}
radioButtons("diss", label = "Disaggregation",
             choices = list("All" = 1, "By Sex" = 2, "By Language" = 3),
             selected = 1)
```
Page 1
=====================================
```{r}
# all
all <- reactive(
  prep_dat(dat)
)

# males only
males <- reactive(
  prep_dat(
    dat %>%
      filter(sex == "male")
  ) %>%
    rename("total_m" = "total")
)

# females only
females <- reactive(
  prep_dat(
    dat %>%
      filter(sex == "female")
  ) %>%
    rename("total_f" = "total")
)

# english only
english <- reactive(
  prep_dat(
    dat %>%
      filter(lang == "english")
  ) %>%
    rename("total_e" = "total")
)

# spanish only
spanish <- reactive(
  prep_dat(
    dat %>%
      filter(lang == "spanish")
  ) %>%
    rename("total_s" = "total")
)

# combine
totals <- reactive({
  all <- all()
  females <- females()
  males <- males()
  english <- english()
  spanish <- spanish()
  all %>%
    select(date, total) %>%
    full_join(select(females, date, total_f), by = "date") %>%
    full_join(select(males, date, total_m), by = "date") %>%
    full_join(select(english, date, total_e), by = "date") %>%
    full_join(select(spanish, date, total_s), by = "date")
})

# convert to xts
totals_ <- reactive({
  totals <- totals()
  xts(totals, order.by = totals$date)
})

# plot
renderDygraph({
  totals_ <- totals_()
  if (input$diss == 1) {
    dygraph(totals_[, "total"],
            main = "All") %>%
      dySeries("total", label = "All") %>%
      dyRangeSelector() %>%
      dyOptions(useDataTimezone = FALSE,
                stepPlot = TRUE,
                drawGrid = FALSE,
                fillGraph = TRUE)
  } else if (input$diss == 2) {
    dygraph(totals_[, c("total_f", "total_m")],
            main = "By sex") %>%
      dyRangeSelector() %>%
      dySeries("total_f", label = "Female") %>%
      dySeries("total_m", label = "Male") %>%
      dyOptions(useDataTimezone = FALSE,
                stepPlot = TRUE,
                drawGrid = FALSE,
                fillGraph = TRUE)
  } else {
    dygraph(totals_[, c("total_e", "total_s")],
            main = "By language") %>%
      dyRangeSelector() %>%
      dySeries("total_e", label = "English") %>%
      dySeries("total_s", label = "Spanish") %>%
      dyOptions(useDataTimezone = FALSE,
                stepPlot = TRUE,
                drawGrid = FALSE,
                fillGraph = TRUE)
  }
})
```
Thanks for explaining more about your goals. I think the approach @simon-s-a suggests will simplify things. If we can run the grouping dynamically, and structure it so that we don't need to know the possible components in those groups beforehand, it will be a lot easier to maintain.
Here's a minimum viable product that rebuilds the plotting function to include the grouping logic inside it.
Once grouped by date and whatever our grouping variable is, it counts how many rows each group has, then spreads those so each group gets a column.
Then I use padr::pad to pad out any missing time rows in between, and replace all the NA's with zeros.
Finally, that data frame is converted to an xts object and fed into dygraph, which seems to handle the multiple columns automatically.
Here:
---
title: "test"
output:
  flexdashboard::flex_dashboard:
    theme: bootstrap
runtime: shiny
---
```{r setup, include=FALSE}
library(flexdashboard)
library(tidyverse)
library(tibbletime)
library(dygraphs)
library(magrittr)
library(xts)
```
```{r global, include=FALSE}
# generate data
set.seed(1)
dat <- data.frame(date = seq(as.Date("2018-01-01"),
                             as.Date("2018-06-30"),
                             "days"),
                  sex = sample(c("male", "female"), 181, replace = TRUE),
                  lang = sample(c("english", "spanish"), 181, replace = TRUE),
                  age = sample(20:35, 181, replace = TRUE))
dat <- dplyr::sample_n(dat, 80)
```
Sidebar {.sidebar}
=====================================
```{r}
radioButtons("diss", label = "Disaggregation",
             choices = list("All" = "Total",
                            "By Sex" = "sex",
                            "By Language" = "lang"),
             selected = "Total")
```
Page 1
=====================================
```{r plot}
renderDygraph({
  grp_col <- rlang::sym(input$diss) # This converts the input selection to a symbol
  dat %>%
    mutate(Total = 1) %>% # This is a hack to let us "group" by Total -- all one group
    # Here's where we unquote the symbol so that dplyr can use it
    # to refer to a column. In this case I make a dummy column
    # that's a copy of whatever column we want to group.
    mutate(my_group = !!grp_col) %>%
    # Now we make a group for every existing combination of week
    # (using lubridate::floor_date) and level of our grouping column,
    # count how many rows are in each group, and spread that to wide format.
    group_by(date = lubridate::floor_date(date, "1 week"), my_group) %>%
    count() %>%
    spread(my_group, n) %>%
    ungroup() %>%
    # padr::pad() fills in any missing weeks in the sequence with new rows;
    # then we replace all the NAs with zeroes.
    padr::pad() %>%
    replace(is.na(.), 0) %>%
    # Finally we can convert to xts and feed the wide table into dygraph.
    xts::xts(order.by = .$date) %>%
    dygraph() %>%
    dyRangeSelector() %>%
    dyOptions(
      useDataTimezone = FALSE, stepPlot = TRUE,
      drawGrid = FALSE, fillGraph = TRUE
    )
})
```
This is a good place to make a function, to shorten your code and make it less prone to error.
http://r4ds.had.co.nz/functions.html
A complicating bit is that programming with dplyr often requires wading into a framework called tidyeval, which is very powerful but can be intimidating.
https://dplyr.tidyverse.org/articles/programming.html
(Here's an alternative approach that sidesteps tidyeval: https://cran.r-project.org/web/packages/seplyr/vignettes/using_seplyr.html)
In your scenario, it's possible to avoid these challenges entirely by doing a bit of manipulation before and after your function. It's not as elegant, but works.
BTW, I can't guarantee it'll work since you didn't share a verifiable reprex (e.g. including a sample of data with the same form as yours), but it worked with the fake data I made up. (See bottom.) Sorry, I missed the chunk where your sample data was provided.
prep_dat <- function(filtered_dat, col_name = "total") {
  filtered_dat %>%
    mutate(new = 1) %>%
    arrange(date) %>%
    # time series analysis
    tibbletime::as_tbl_time(index = date) %>% # convert to tibble time object
    select(date, new) %>%
    tibbletime::collapse_by("1 week", side = "start", clean = TRUE) %>%
    group_by(date) %>%
    mutate(total = sum(new, na.rm = TRUE)) %>%
    distinct(date, .keep_all = TRUE) %>%
    ungroup() %>%
    # expand matrix to include weeks without data
    complete(
      date = seq(date[1], date[length(date)], by = "1 week"),
      fill = list(total = 0)
    )
}
Then you could call it with your filtered data and the name of the total column. This fragment should be able to replace the ~20 lines you're currently using:
males <- prep_dat(dat_fake %>%
                    filter(sex == "male")) %>%
  rename("total_m" = "total")
Fake data that I tested on:
dat_fake <- tibble(
  date = as.Date("2018-01-01") + runif(500, 0, 100),
  new = runif(500, 0, 100),
  sex = sample(c("male", "female"),
               500, replace = TRUE),
  lang = sample(c("english", "french", "spanish", "portuguese", "tagalog"),
                500, replace = TRUE)
)
I think you can make some gains by changing the order of your preparation. Right now the flow of your app is approximately:
Data => prepare all combinations => select desired visualization => make plot
Consider instead:
Data => select desired visualization => prepare required combination => make plot
This would make use of Shiny's reactivity to (re)prepare the data required for the requested plot in response to changes in the user's selection.
By way of code snippets (Sorry, I don't have sufficient familiarity with flexdashboard and tibbletime to ensure this code runs, but I hope it is enough to highlight the approach):
Your control selects the column you want to focus on (note we use "All" = "'1'" so this evaluates to a constant in the group-by, else it has to be handled separately):
radioButtons("diss", label = "Disaggregation",
             choices = list("All" = "'1'",
                            "By Sex" = "sex",
                            "By Language" = "lang",
                            "By other" = "column_name_of_'other'"),
             selected = 1)
And then use this in your group_by to prepare only the data required for the present visualization (you'll need to adjust the function suggested by @Jon_Spring in response to this earlier group-by):
preped_dat = reactive({
  dat %>%
    group_by_(input$diss) %>%
    # etc
})
Before plotting (you'll need to adjust the plotting function in response to the possible change in data format):
renderDygraph({
  totals = preped_dat()
  dygraph(totals) %>%
    dySeries("total", label = "Total") %>%
    dyRangeSelector()
})
With regard to group_by you can use group_by_ if all your arguments are text strings, or group_by(!! sym(input$diss), other_column_name) if you want to mix the text string input from your control with other column names.
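A small sketch of those grouping forms, using the dat frame from the question and the illustrative string grp in place of input$diss (note that group_by_() is superseded in current dplyr, where the .data pronoun is the usual replacement):

library(dplyr)
library(rlang)

grp <- "sex"   # e.g. the string coming from input$diss

# string-based (superseded) form
dat %>% group_by_(grp) %>% summarise(n = n())

# tidyeval form, mixing the string with an ordinary column name
dat %>% group_by(!!sym(grp), lang) %>% summarise(n = n(), .groups = "drop")

# current dplyr idiom using the .data pronoun
dat %>% group_by(.data[[grp]]) %>% summarise(n = n(), .groups = "drop")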
One possible disadvantage of this change in approach is reduced responsiveness during interactivity if your data set is large. The present approach does all the computation up front and then minimal computation on each selection, which may be preferable if you have a large amount of processing. My suggested approach has minimal up-front processing and moderate computation on each selection.
