R dashboard, googleVis options

Here is my question:
I am using the googleVis gvisTreeMap() function to illustrate a data frame. It automatically puts the labels on each block, but I want to have the numbers too.
Here is my simplified code:
col1 <- c(1,2,3,5,8)
col2 <- c("a","b","c","d","e")
fdata <- data.frame(col1,col2)
total <- data.frame(col1=sum(fdata$col1), col2="Market Share")
fdata1 <- rbind(total, fdata)
fdata1$parent="Market Share"
## Set parent variable to NA at root level
fdata1$parent[fdata1$col2=="Market Share"] <- NA
fdata1$col1.log=log(fdata1$col1)
aa <- gvisTreeMap(fdata1, "col2", "parent",
                  "col1", "col1.log",
                  options = list(width = 600, height = 500,
                                 fontSize = 16,
                                 minColor = '#EDF8FB',
                                 midColor = '#66C2A4',
                                 maxColor = '#006D2C',
                                 headerHeight = 20,
                                 fontColor = 'black',
                                 showScale = TRUE, lable = "$$"))
plot(aa)
To make it clearer: after I run the code I have five blocks with letters on them, but I want both letters and numbers.
Thanks

I changed your definition of col2 to append the size to each entry (with the exception of the root node):
fdata1$col2=paste0(fdata1$col2, " - ", fdata1$col1)
fdata1$col2[1] <- "Market Share"
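For completeness, here is a minimal sketch of the full flow with that change applied (same data and options as in the question; the only new steps are relabelling col2 and restoring the root label before building the treemap, and the misspelled lable option is dropped since the labels now come from col2):
library(googleVis)
col1 <- c(1, 2, 3, 5, 8)
col2 <- c("a", "b", "c", "d", "e")
fdata <- data.frame(col1, col2)
total <- data.frame(col1 = sum(fdata$col1), col2 = "Market Share")
fdata1 <- rbind(total, fdata)
fdata1$parent <- "Market Share"
fdata1$parent[fdata1$col2 == "Market Share"] <- NA  # root has no parent
fdata1$col1.log <- log(fdata1$col1)
## append the value to each label, then restore the root label
fdata1$col2 <- paste0(fdata1$col2, " - ", fdata1$col1)
fdata1$col2[1] <- "Market Share"
aa <- gvisTreeMap(fdata1, "col2", "parent", "col1", "col1.log",
                  options = list(width = 600, height = 500, fontSize = 16,
                                 minColor = '#EDF8FB', midColor = '#66C2A4',
                                 maxColor = '#006D2C', headerHeight = 20,
                                 fontColor = 'black', showScale = TRUE))
plot(aa)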


String match error "invalid regular expression, reason 'Out of memory'"

I have a table that is shaped like this called df (the actual table is 16,263 rows):
title            date         brand
big farm house   2022-01-01   A
ranch modern     2022-01-01   A
town house       2022-01-01   C
Then I have a table like this called match_list (the actual list is 94,000 rows):
words_for_match
farm
town
clown
beach
city
pink
And I'm trying to filter the first table to just be rows where the title contains a word in the words_for_match list. So I do this:
match_list <- match_list$words_for_match
match_list <- paste(match_list, collapse = "|")
match_list <- sprintf("\\b(%s)\\b", match_list)
df %>%
  filter(grepl(match_list, title))
But then I get the following error:
Problem while computing `..1 = grepl(match_list, subject)`.
Caused by error in `grepl()`:
! invalid regular expression, reason 'Out of memory'
If I filter the table with 94,000 rows to just 1,000 then it runs, so it appears to just be a memory issue. So I'm wondering if there's a less memory-intensive way to do this or if this is an example of needing to look beyond my computer for computation. Advice on either pathway (or other options) is welcome. Thanks!
You could process the titles sequentially: say 10 titles match 'farm'; you then no longer need to evaluate those titles against the remaining words.
Here is a simple implementation:
titles <- c("big farm house", "ranch modern", "town house")
words_for_match <- c("farm", "town", "clown", "beach", "city", "pink")

titles.to.keep <- c()
for (w in words_for_match) {
  w <- sprintf("\\b(%s)\\b", w)
  is.match <- grepl(w, titles)
  titles.to.keep <- c(titles.to.keep, titles[is.match])
  titles <- titles[!is.match]
  print(paste(length(titles), "remaining titles"))
}
titles.to.keep
If you have a prior on the frequency of the words in match_list, it's better to start with the most frequent ones.
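For example, one way to obtain such an ordering (just a sketch, assuming you are happy to estimate word frequencies from a random sample of titles) could be:
## estimate how often each word matches in a random sample of titles,
## then try the most frequent words first
sample_titles <- sample(titles, min(length(titles), 1000))
freq <- sapply(words_for_match,
               function(w) sum(grepl(sprintf("\\b%s\\b", w), sample_titles)))
words_for_match <- words_for_match[order(freq, decreasing = TRUE)]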
UPDATE
You can also mix this with your previous strategy to make it faster:
gr.size <- 20
gr.words <- split(words_for_match, ceiling(seq_along(words_for_match) / gr.size))
gr.words <- sapply(gr.words, function(words) {
  words <- paste(words, collapse = "|")
  sprintf("\\b(%s)\\b", words)
})
and then iterate over gr.words instead of words_for_match in the first code chunk.
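Putting the two ideas together, the sequential loop then becomes (same variables as above; gr.words already holds the grouped patterns):
titles.to.keep <- c()
for (w in gr.words) {
  is.match <- grepl(w, titles)                        # w is already a \\b(...|...)\\b pattern
  titles.to.keep <- c(titles.to.keep, titles[is.match])
  titles <- titles[!is.match]                         # matched titles are not re-checked
}
titles.to.keep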

Remove words per year in a corpus

I am working with a corpus of speeches spanning several years (aggregated to person-year level). I want to remove words that occur fewer than 4 times in a year (not remove them from the whole corpus, but only from the year in which they do not meet the threshold).
I have tried the following:
DT$text <- ifelse(grepl("1998", DT$session), mgsub(DT$text, words_remove_1998, ""), DT$text)
and
DT$text <- ifelse(grepl("1998", DT$session), str_remove_all(DT$text, words_remove_1998), DT$text)
and
DT$text <- ifelse(grepl("1998", DT$session), removeWords(DT$text, words_remove_1998), DT$text)
and
DT$text <- ifelse(grepl("1998", DT$session), drop_element(DT$text, words_remove_1998), DT$text)
However, none seem to work. mgsub just substitutes the whole speech with "" for 1998, whilst the other options give error messages. The reason removeWords does not work is that my words_remove_1998 vector is too large. I have tried to split the word vector and loop over the chunks (see code below), but R does not appear to like this (it runs forever).
group <- 100
n <- length(words_remove_1998)
r <- rep(1:ceiling(n / group), each = group)[1:n]
d <- split(words_remove_1998, r)

for (i in 1:length(d)) {
  DT$text <- ifelse(grepl("1998", DT$session), removeWords(DT$text, c(paste(d[[i]]))), DT$text)
}
Any suggestions for how to solve this?
Thank you for your help!
Reproducible example:
text <- rbind(c("i like ice cream"), c("banana ice cream is my favourite"), c("ice cream is not my thing"))
name <- rbind(c("Arnold Ford"), c("Arnold Ford"), c("Leslie King"))
session <- rbind("1998", "1999", "1998")
DT <- cbind(name, session, text)
words_remove_1998 <- c("like", "ice", "cream")
newtext <- rbind(c("i"), c("banana ice cream is my favourite"), c("is not my thing"))
DT <- cbind(DT, newtext)
My real word vector that I want removed contains 30k elements.
I ended up not using any of the wrapper functions, as none of them could handle the size of the data. Instead I did it the old-fashioned and simple way: separate the text into several rows, count the occurrences of each word per session (year) and person, then remove the rows whose count falls below a threshold (the same limit I used to identify the vector of words I wanted to remove). Lastly, I aggregate the data back to its initial level (person-year).
This only works because I am removing words according to a threshold. If I had a list of words that could not be removed in this way, I would have been in more trouble.
DT_separate <- separate_rows(DT, text)

df <- DT_separate %>%
  dplyr::group_by(session, text) %>%
  dplyr::mutate(count = dplyr::n())
df <- df[df$count > 5, ]

df <- aggregate(
  text ~ x,  # where x is a person-year id
  data = df,
  FUN = paste, collapse = ' '
)
names(df)[names(df) == 'text'] <- 'text2'

DT <- left_join(DT, df, by = "x")
DT$text <- DT$text2
DT <- DT[, !(colnames(DT) %in% c("text2"))]

R: Read text files with blanks and unequal number of columns

I am trying to read many text files into R using read.table. Most of the time we have clean text files which have defined columns.
The data that I am trying to read comes from ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/102317_livecattle.txt
You can see that the blanks and the length of the text files vary by report.
ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/102317_livecattle.txt
ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/100917_livecattle.txt
My objective is to read many of these text files and combine them into a dataset.
If I can read one of them, then compiling should not be an issue. However, I am running into several issues because of the format of the text file:
1) The number of FIRMS varies from report to report. For example, sometimes there will be 3 rows of data to import (i.e. 3 firms that did business on that date) and sometimes there may be 10.
2) Blanks are an issue. For example, under the FIRM section there should be a column for Deliveries (DEL) and one for Receipts (REC). When read in, the data in THIS section should look like:
df <- data.frame("FIRM_#" = c(407, 685, 800, 905),
"FIRM_NAME" = c("STRAITS FIN LLC", "R.J.O'BRIEN ASSOC", "ROSENTHAL COLLINS LL", "ADM INVESTOR SERVICE"),
"DEL" = c(1,1,15,1), "REC"= c(NA,18,NA,NA))
However, when I read this in, the formatting is all messed up and NA is not inserted for the blank values.
3) The above issues also apply to the "YARDS" and "FUTURE DELIVERIES SCHEDULED" sections of the text file.
I have tried to read in sections of the text file and then format them accordingly, but since the number of firms changes from day to day, the code does not generalize.
Any help would greatly be appreciated.
Here is an answer which starts from scratch, using rvest to download the data, and includes a lot of formatting. The general idea is to identify fixed widths that may be used to separate columns - I used a little help from SO for this purpose (link).
You could then use read.fwf() in combination with cat() and tempfile(). In my first attempt this did not work due to some formatting issues, so I added some additional lines to get the final table format.
Maybe there are more elegant options and shortcuts I have overlooked, but at least my answer should get you started. Of course, you will have to adapt the selection of lines and the identification of widths for splitting tables depending on which parts of the data you need. Once this is settled, you may loop through all the report URLs to gather the data (see the sketch after the example). I hope this helps...
library(rvest)
library(dplyr)

page <- read_html("ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/102317_livecattle.txt")

table <- page %>%
  html_text("pre") %>%
  # reformat by splitting on line breaks
  { unlist(strsplit(., "\n")) } %>%
  # select range based on strings in specific lines
  "["(., (grep("FIRM #", .):(grep(" DELIVERIES SCHEDULED", .) - 1))) %>%
  # exclude empty rows
  "["(., !grepl("^\\s+$", .)) %>%
  # fix width of table to the right
  { substring(., 1, nchar(gsub("\\s+$", "", .[1]))) } %>%
  # strip white space on the left
  { gsub("^\\s+", "", .) }

headline <- unlist(strsplit(table[1], "\\s{2,}"))

get_split_position <- function(substring, string) {
  nchar(string) - nchar(gsub(paste0("(^.*)(?=", substring, ")"), "", string, perl = TRUE))
}

# exclude first element, no split before this element
split_positions <- sapply(headline[-1], function(x) {
  get_split_position(x, table[1])
})

# exclude headline from split
table <- lapply(table[-1], function(x) {
  substring(x, c(1, split_positions + 1), c(split_positions, nchar(x)))
})
table <- do.call(rbind, table)
colnames(table) <- headline

# strip whitespace
table <- gsub("\\s+", "", table)
table <- as.data.frame(table, stringsAsFactors = FALSE)

# assign NA values
table[table == ""] <- NA

# change column type
table[, c("FIRM #", "DEL", "REC")] <- apply(table[, c("FIRM #", "DEL", "REC")], 2, as.numeric)
table
# FIRM # FIRM NAME DEL REC
# 1 407 STRAITSFINLLC 1 NA
# 2 685 R.J.O'BRIENASSOC 1 18
# 3 800 ROSENTHALCOLLINSLL 15 NA
# 4 905 ADMINVESTORSERVICE 1 NA
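As a rough sketch of that final looping step (assuming you wrap everything from read_html() down to the finished table into a hypothetical helper, say parse_report(url), returning one data frame per report), gathering several reports could look like:
## parse_report() is a hypothetical wrapper around the parsing steps above
urls <- c("ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/102317_livecattle.txt",
          "ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/100917_livecattle.txt")
all_reports <- do.call(rbind, lapply(urls, parse_report))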

Manipulating textInput in R Shiny

I am relatively new to R and even more new to Shiny (literally first day).
I would like a user to input multiple phrases separated by a comma, such as female, aged, diabetes mellitus. I have a dataframe in which one variable, MH2, contains text words. I would like to output a dataframe that contains only the rows in which all of the inputted phrases are present. Sometimes a user may input only one phrase, other times 5.
This is my ui.R
library(shiny)
library(stringr)
# load dataset
load(file = "./data/all_cardiovascular_case_reports.Rdata")
ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      textInput(inputId = "phrases",
                label = "Please enter all the MeSH terms that you would like to search, each separated by a comma:",
                value = ""),
      helpText("Example: female, aged, diabetes mellitus")
    ),
    mainPanel(DT::dataTableOutput("dataframe"))
  )
)
and here is my server.R
library(shiny)
server <- function(input, output) {
  # where all the code will go
  df <- reactive({
    # counts how many phrases there are
    num_phrases <- str_count(input$phrases, pattern = ", ") + 1
    a <- numeric(num_phrases)  # initialize vector to hold all phrases
    # create vector of all entered phrases
    for (i in 1:num_phrases) {
      a[i] <- noquote(strsplit(input$phrases, ", ")[[i]][1])
    }
    # make all phrases lowercase
    a <- tolower(a)
    # do exact case match so that each phrase is bound by "\\b"
    a <- paste0("\\b", a, sep = "")
    exact <- "\\b"
    a <- paste0(a, exact, sep = "")
    # subset dataframe over and over again until all phrases used
    for (i in 1:num_phrases) {
      final <- final[grepl(pattern = a, x = final$MH2, ignore.case = TRUE), ]
    }
    return(final)
  })
  output$dataframe <- DT::renderDataTable({ df() })
}
When I tried running renderText({num_phrases}) I consistently got 1 even when I would input multiple phrases separated by commas. Since then, whenever I try to input multiple phrases, I run into "error: subscript out of bounds." However, when I enter the words separated by a comma only versus a comma and space (entering "female,aged" instead of "female, aged") then that problem disappears, but my dataframe doesn't subset correctly. It can only subset one phrase.
Please advise.
Thanks.
I think your Shiny logic looks good, but the function for subsetting the dataframe has a few small issues. In particular:
a[i] <- noquote(strsplit(input$phrases, ", ")[[i]][1])
The indices [[i]] and [1] are in the wrong place here; it should be [[1]][i].
final <- final[grepl(pattern = a, x = final$MH2, ignore.case = TRUE), ]
You cannot match multiple patterns like this; only the first element of a will be used, which is also the warning R gives.
Example working code
I have changed input$phrases to inp_phrases here. If this script does what you want, I think you can easily copy it into your reactive, making the necessary changes (i.e. changing inp_phrases back and adding the return(result) statement); a sketch of this is shown after the example. I was also not entirely clear whether you wanted all patterns to be matched within one row, or all rows where any of the patterns were matched, so I added both; you can uncomment the one you need:
library(stringr)

# some example data
inp_phrases <- "ab, cd"
final <- data.frame(index = c(1, 2, 3, 4),
                    MH2 = c("ab cd ef", "ab ef", "cd ef ab", "ef gx"),
                    stringsAsFactors = FALSE)

# this could become just two lines:
a <- sapply(strsplit(inp_phrases, ", ")[[1]], function(x) tolower(noquote(x)))
a <- paste0("\\b", a, "\\b")

# Two options here, uncomment the one you need.
# Top one: match any pattern in a. Bottom: match all patterns in a
# indices <- grepl(pattern = paste(a, collapse = "|"), x = final$MH2, ignore.case = TRUE)
indices <- colSums(do.call(rbind, lapply(a, function(x) grepl(pattern = x, x = final$MH2, ignore.case = TRUE)))) == length(a)

result <- final[indices, ]
Returns:
  index      MH2
1     1 ab cd ef
3     3 cd ef ab
... with the second version of indices (match all) or
  index      MH2
1     1 ab cd ef
2     2    ab ef
3     3 cd ef ab
... with the first version of indices (match any)
Hope this helps!
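For reference, here is a rough sketch of how that could sit inside your reactive (an assumption about the rest of your app; final here is the data frame loaded from your .Rdata file, and the match-all variant is used):
df <- reactive({
  # split the comma-separated input and build word-boundary patterns
  a <- sapply(strsplit(input$phrases, ", ")[[1]], function(x) tolower(noquote(x)))
  a <- paste0("\\b", a, "\\b")
  # keep only rows of `final` in which every entered phrase occurs in MH2
  indices <- colSums(do.call(rbind, lapply(a, function(x)
    grepl(pattern = x, x = final$MH2, ignore.case = TRUE)))) == length(a)
  result <- final[indices, ]
  return(result)
})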

Map over data frame columns, apply function to data if column meets condition

I'm pulling data from the Google Analytics API, processing it locally, then knitting an .Rmd file into text, tables, and visualisations. As part of the knitting/tabling process, I'm doing some basic formatting (e.g. rounding off percentages and adding % signs).
For this question, I have toPercent(), which works fine if used like this:
toPercent <- function(percentData){
  percentData <- round(percentData, 2)
  percentData <- mapply(toString, percentData)
  percentData <- paste(percentData, "%", sep = "")
}
devices <- toPercent(devices$avgSessionDuration)
However, manually applying the function for every table is time-intensive, so I created percentCheck() to look for columns that match my criteria:
percentCheck <- function(data){
  data[, grep("rate|percent", names(data), ignore.case = TRUE)] <-
    toPercent(data[, grep("rate|percent", names(data), ignore.case = TRUE)])
}
devices <- percentCheck(devices)
But I know this doesn't work on a dataset with multiple matches (e.g. a column for exitRate and a column for bounceRate).
Q1: Have I written toPercent() in a way that won't return multiple values to one entry?
Q2: How can I structure percentCheck() to map over the dataset and only apply toPercent() if the column name includes a given string?
Version/Packages:
R version 3.1.1 (2014-07-10) -- "Sock it to Me"
library(rga)
library(knitr)
library(stargazer)
Data:
> dput(devices)
structure(list(deviceCategory = c("desktop", "mobile", "tablet"
), sessions = c(817, 38, 1540), avgSessionDuration = c(153.424888853179,
101.942758538617, 110.270988142292), bounceRate = c(39.0192297391397,
50.2915625371891, 50.1343873517787), exitRate = c(25.3257456030279,
32.0236280487805, 29.0991902834008)), .Names = c("deviceCategory",
"sessions", "avgSessionDuration", "bounceRate", "exitRate"), row.names = c(NA,
-3L), class = "data.frame")
How about this modification:
percentCheck <- function(data){
  idx <- grepl("rate|percent", names(data), ignore.case = TRUE)
  data[idx] <- lapply(data[idx], function(x) paste0(sprintf("%.2f", round(x, 2)), "%"))
  return(data)
}
Here, I first used grepl to create an index of the columns which meet the specified criteria. Then this index is used in lapply to apply a function to all these columns; the function applied is similar to your toPercent function, only written a bit more compactly.
Now you can apply it to your whole data set in one go:
percentCheck(devices)
# deviceCategory sessions avgSessionDuration bounceRate exitRate
#1 desktop 817 153.4249 39.02% 25.33%
#2 mobile 38 101.9428 50.29% 32.02%
#3 tablet 1540 110.2710 50.13% 29.10%
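Since the question mentions knitting these into tables, one possible usage (just a sketch, assuming the table is rendered with knitr::kable()) is to apply the formatting right before tabling:
library(knitr)
kable(percentCheck(devices))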
