I need to build a "profile" of a data set, showing the number of data entries that lie between two values. I have been able to achieve the result using the "group_by" function; however, the resultant output is not in a format that I can use further down my workflow. Here is that output:
What I need is something that looks like this:
I've not been able to populate the "Data Count" column; it is there for illustration.
The code I am using is as follows:
library(formattable)
PML_Start = 0
PML_Max = 100000000
PML_Interval = 5000000
Lower_Band <- currency(seq(PML_Start, PML_Max-PML_Interval, PML_Interval),digits=0)
Upper_Band <- currency(seq(PML_Start+PML_Interval,PML_Max,PML_Interval),digits = 0)
PML_Profile <- data.frame("Lower Band" = Lower_Band, "Upper Band" = Upper_Band, "Data Count" = NA, check.names = FALSE)
I now cannot figure out how to populate this table further. I gave this a go, but didn't really believe it would work:
PML_Profile <- Profiles_on_Data_Provided_26_9_17 %>%
  group_by(Lower_Band) %>%
  summarise("Premium" = sum(Profiles_on_Data_Provided_26_9_17$`Written Premium - Total`))
Any thoughts?
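A minimal sketch of one possible way to populate the count column (untested; 'PML' here is a placeholder for whichever column actually holds the values being profiled):
# cut() assigns each value to a band using the same break points as the profile table
breaks <- seq(PML_Start, PML_Max, PML_Interval)
bands <- cut(Profiles_on_Data_Provided_26_9_17$PML,
             breaks = breaks, include.lowest = TRUE, right = FALSE)
# table() counts entries per band, including empty bands, so the result lines up with the 20 rows
PML_Profile$`Data Count` <- as.integer(table(bands))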
I'm a newbie to the bigstatsr package. I have an SQLite database which I want to convert to an FBM matrix of 40k rows (genes) by 60k columns (samples) for later consumption. I found examples of how to populate the matrix with random values, but I'm not sure what would be the best way to populate it with values from my SQLite database.
Currently I do it sequentially, here's some mock code:
library(bigstatsr)
library(RSQLite)
library(dplyr)
number_genes <- 50e3
number_samples <- 70e3
large_genomic_matrix <- bigstatsr::FBM(nrow = number_genes,
                                       ncol = number_samples,
                                       type = "double",
                                       backingfile = "fbm_large_genomic_matrix")
# Code to get a single df at the time
database_connection <- dbConnect(RSQLite::SQLite(), "database.sqlite")
sample_index_counter <- 1
for (current_sample in vector_with_sample_names) {
  # pull this sample's values from the genomic_data table
  sqlite_df <- dplyr::tbl(database_connection, "genomic_data") %>%
    dplyr::filter(sample == current_sample) %>%
    dplyr::collect()
  large_genomic_matrix[, sample_index_counter] <- sqlite_df$value
  sample_index_counter <- sample_index_counter + 1
}
big_write(large_genomic_matrix, "large_genomic_matrix.out", every_nrow = 1000, progress = interactive())
I have two questions:
Is there a way of populating the matrix more efficiently? I'm not sure if big_apply could be used here, or perhaps foreach.
Do I always have to use big_write in order to load my matrix later? If so, why can't I just use the .bk file?
Thanks in advance
That is a very good first try on your own.
What is inefficient here is running dplyr::filter(sample == current_sample) for every single sample. I would try to use match() first to get the indices. Then, populating each column individually would still be a bit inefficient; as you said, you could use big_apply() to do this by blocks.
big_write() is for writing the FBM to some text file (e.g. csv). What you want here is to use FBM()$save() (second line of the example in the README), and then use big_attach() on the .rds file (next line of the README).
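A rough sketch of those two points (untested; it reuses the object, table, and column names from the mock code above, and assumes each sample's rows come back from SQLite in gene order):
library(bigstatsr)
library(RSQLite)
library(dplyr)
# create the FBM and save its descriptor to an .rds next to the .bk file
large_genomic_matrix <- FBM(nrow = number_genes, ncol = number_samples,
                            type = "double",
                            backingfile = "fbm_large_genomic_matrix")$save()
database_connection <- dbConnect(RSQLite::SQLite(), "database.sqlite")
genomic_tbl <- dplyr::tbl(database_connection, "genomic_data")
# fill the FBM by blocks of columns instead of one column at a time
big_apply(large_genomic_matrix, a.FUN = function(X, ind) {
  block_samples <- vector_with_sample_names[ind]
  # one query for the whole block of samples
  block_df <- genomic_tbl %>%
    dplyr::filter(sample %in% block_samples) %>%
    dplyr::collect()
  # match() maps each returned row to its column within the block
  j <- match(block_df$sample, block_samples)
  for (k in seq_along(ind)) {
    X[, ind[k]] <- block_df$value[j == k]
  }
  NULL
}, block.size = 500)
# in a later session, re-attach the matrix from the .rds (no big_write needed)
large_genomic_matrix <- big_attach("fbm_large_genomic_matrix.rds")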
I currently have a list made up of around 80+ data frames. What I would like to do is loop a chunk of code over each individual data frame within the list, without naming each one individually or splitting them out into separate data frames to work on.
Currently I split the list into each individual data frame using the below code:
dat5split <- setNames(split(dat5, dat5$CODE), paste0("df", unique(dat5$CODE)))
list2env(dat5split, globalenv())
I then work through each data frame individually:
# call in SPC function and write to 'results10000'
results10000 <- SPC_XBAR(df10000, vol_n, seasonality)
results10000 <- results10000 %>%
  cbind(Spec = df10000$CODE) %>%
  subset(`table_n` == 1)
results10000 <- results10000[order(results10000$tpd), ]
results10000$Date <- as.Date(cbind(Date = df10000$CENSUS_DATE))
# call in SPC function and write to 'results10001'
results10001 <- SPC_XBAR(df10001, vol_n, seasonality)
results10001 <- results10001 %>%
  cbind(Spec = df10001$CODE) %>%
  subset(`table_n` == 1)
results10001 <- results10001[order(results10001$tpd), ]
results10001$Date <- as.Date(cbind(Date = df10001$CENSUS_DATE))
Currently I call in the function 'SPC_XBAR', with vol_n and seasonality set earlier in the code. The script passes the values to the function, which then assigns the results to 'results10000', 'results10001', etc. I then do a small bit of data wrangling on each newly created data frame before feeding the results back into SQL Server at the end.
As you can see, each one is being individually hard-coded, which is not efficient.
What I would like to do is to loop a chunk of code for each individual data frame within the list, without naming each one individually.
I believe a loop would solve this issue but I am a little inexperienced when it comes to the ability to create a loop around it. Any advice would be much appreciated.
Cheers
Have you considered using lapply instead of a loop to iterate over the list? See ?lapply for details.
EDIT: To elaborate a bit more... what happens if you do this:
myFunction <- function(x) {
  results <- SPC_XBAR(x, vol_n, seasonality)
  results <- results %>%
    cbind(Spec = x$CODE) %>%
    subset(`table_n` == 1)
  results <- results[order(results$tpd), ]
  results$Date <- as.Date(cbind(Date = x$CENSUS_DATE))
  results
}
lapply(dat5split, myFunction)
I would expect it to return a list of the resulting datasets.
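If you later need everything as one table (for example to push it back to SQL Server in a single step), you could keep the list and bind it afterwards, something along these lines:
results_list <- lapply(dat5split, myFunction)
# one combined data frame; the Spec column added inside myFunction keeps the originating CODE
results_all <- do.call(rbind, results_list)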
I'm using fingertipsR to obtain public health data.
There are indicators at different geographic levels and these indicators are also grouped at profile level.
Here's some code:
library(fingertipsR)
library(fingertipscharts)
library(tidyverse)
library(ggthemes)
fingertips_stats()
inds<-indicators_unique()
profs<-profiles()
It's possible to pull unique indicators for a profile like this, and then to add a column, like this:
smoking<-indicators_unique(ProfileID = 18,DomainID = NULL)%>%mutate(prof_id="18")
What I'd like to do is:
for each unique profile ID, generate a dataframe of indicators. There are 53 unique profiles.
uniqueprofs<-as_tibble(unique(profs$ProfileID))
How can I purrr through this, or loop?
I am routinely stuck on these iteration type problems.
EDIT:
So, if you Ctrl + click on
indicators_unique
you'll see the bit:
df <- unique(df[, c("IndicatorID", "IndicatorName")])
I copied all of the function and called it something else:
function (ProfileID = NULL, DomainID = NULL, path)
{
  if (missing(path))
    path <- fingertips_endpoint()
  # fingertips_ensure_api_available(endpoint = path)
  df <- indicators(ProfileID, DomainID, path = path)
  df <- unique(df[, c("IndicatorID", "IndicatorName", "ProfileID")])
  return(df)
}
And I now get a dataframe containing the ProfileID. If I add "DomainID" I can have that too....
Edit:
Annoyingly, I've asked a similar question and updated it with dplyr group_by and group_walk
I can do this:
inds%>%group_by(ProfileID)%>%group_walk(~ write.csv(.x, paste0(.y$ProfileID, ".csv")))
How do I group_walk and write the dataframes/tibbles to the environment, rather than writing them to a drive and then loading them back in?
Start with some minimal initial code:
library(fingertipsR)
library(tidyverse)
profs<-profiles()
indicators_unique is already vectorized, so rather than loading the ProfileIDs into a tibble, put them in a list and then you can do a simple:
unique_profs <- list(unique(profs$ProfileID))
indicators_unique(ProfileID = unique_profs, DomainID = NULL)
The issue is adding your desired prof_id column. I'm not familiar with these packages. Is there any dataframe that links ProfileID to either IndicatorID or IndicatorName that you can do a join on?
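If there is no such lookup table, one alternative (a sketch, untested) is to call indicators_unique once per ID with purrr and let the element names become the prof_id column:
library(purrr)
unique_prof_ids <- unique(profs$ProfileID)
# map_dfr() with .id turns the names set by set_names() into a prof_id column
inds_by_profile <- unique_prof_ids %>%
  set_names() %>%
  map_dfr(~ indicators_unique(ProfileID = .x, DomainID = NULL), .id = "prof_id")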
I am trying to build a data frame with book id, title, author, rating, collection, start and finish date from the LibraryThing api with my personal data. I am able to get a nested list fairly easily, and I have figured out how to build a data frame with everything but the dates (perhaps in not the best way but it works). My issue is with the dates.
The list I'm working with normally has 20 elements, but it adds the startfinishdates element only if I added dates to the book in my account. This is causing two issues:
If it were always there, I could extract it like everything else (it would just be NA most of the time), and I could use cbind to get it lined up correctly with the other information.
When I extract it using the name and get an object with fewer elements, I don't have a way to join it back to everything else (it doesn't have the book id).
Ultimately, I want to build this data frame, so an answer that tells me how to pull out the book id and associate it with each startfinishdates entry (so that I can join on book id) is acceptable. I would just add that to the code I have.
I'm also open to learning a better approach from the jump and re-designing the entire thing as I have not worked with lists much in R and what I put together was after much trial and error. I do want to use R though, as ultimately I am going to use this to create an R Markdown page for my web site (for instance, a plot that shows finish dates of books).
You can run the code below and get the data (no api key required).
library(jsonlite)
library(tidyverse)
library(assertr)
data<-fromJSON("http://www.librarything.com/api_getdata.php?userid=cau83&key=392812157&max=450&showCollections=1&responseType=json&showDates=1")
books.lst<-data$books
#create df from json
create.df <- function(item){
  df <- map_df(.x = books.lst, ~.x[[item]])
  df2 <- t(df)
  return(df2)
}
ids<-create.df(1)
titles<-create.df(2)
ratings<-create.df(12)
authors<-create.df(4)
#need to get the book id when i build the date df's
startdates.df<-map_df(.x=books.lst,~.x$startfinishdates) %>% select(started_stamp,started_date)
finishdates.df<-map_df(.x=books.lst,~.x$startfinishdates) %>% select(finished_stamp,finished_date)
collections.df<-map_df(.x=books.lst,~.x$collections)
#from assertr: will create a vector of same length as df with all values concatenated
collections.v<-col_concat(collections.df, sep = ", ")
#assemble df
books.df<-as.data.frame(cbind(ids,titles,authors,ratings,collections.v))
names(books.df)<-c("ID","Title","Author","Rating","Collections")
books.df <- books.df %>% mutate(ID = as.character(ID), Title = as.character(Title), Author = as.character(Author),
                                Rating = as.character(Rating), Collections = as.character(Collections))
This approach is outside the tidyverse meta-package. Using base R, you can make it work with the following code.
Map will apply the user-defined function to each element of data$books (provided as its second argument) and extract the required fields for your data.frame. Reduce will then take all the individual data frames and merge (reduce) them into a single data.frame, booksdf.
library(jsonlite)
data<-fromJSON("http://www.librarything.com/api_getdata.php?userid=cau83&key=392812157&max=450&showCollections=1&responseType=json&showDates=1")
booksdf = Reduce(function(x, y) { rbind(x, y) },
  Map(function(x) {
    lenofelements = length(x)
    # startfinishdates is only present when dates were entered, hence the length check
    if (lenofelements > 20) {
      if (!is.null(x$startfinishdates$started_date)) {
        started_date = x$startfinishdates$started_date
      } else {
        started_date = NA
      }
      if (!is.null(x$startfinishdates$started_stamp)) {
        started_stamp = x$startfinishdates$started_stamp
      } else {
        started_stamp = NA
      }
      if (!is.null(x$startfinishdates$finished_date)) {
        finished_date = x$startfinishdates$finished_date
      } else {
        finished_date = NA
      }
      if (!is.null(x$startfinishdates$finished_stamp)) {
        finished_stamp = x$startfinishdates$finished_stamp
      } else {
        finished_stamp = NA
      }
    } else {
      started_stamp = NA
      started_date = NA
      finished_stamp = NA
      finished_date = NA
    }
    book_id = x$book_id
    title = x$title
    author = x$author_fl
    rating = x$rating
    collections = paste(unlist(x$collections), collapse = ",")
    return(data.frame(ID = book_id, Title = title, Author = author, Rating = rating,
                      Collections = collections, Started_date = started_date, Started_stamp = started_stamp,
                      Finished_date = finished_date, Finished_stamp = finished_stamp))
  }, data$books))
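As a side note, since Map already returns a list of single-row data frames, the pairwise Reduce can be replaced by a single do.call(rbind, ...), which avoids repeatedly copying the growing data frame. A tiny self-contained illustration of the equivalence:
dfs <- list(data.frame(a = 1), data.frame(a = 2), data.frame(a = 3))
do.call(rbind, dfs)  # same result as Reduce(function(x, y) rbind(x, y), dfs)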
I am using the package qualtRics in TERR in Spotfire to pull in data directly from specific surveys in Qualtrics. The code I am using is:
registerApiKey(API.TOKEN = "xxxx")
df <- getSurvey(surveyID = "xxxx",
                root_url = "https://az1.qualtrics.com", verbose = TRUE)
My output df is a data table. I have 2 different surveys that I am pulling in 4 different times, 2 of those times I am unpivoting data, for a total of 4 data tables.
I want to be able to refresh this data. If I click Reload Data or try to refresh each table individually, nothing happens. I'm assuming I need to add some code that refreshes the data function (?), and I am trying to avoid replacing the data tables each time because, for 2 of those, I have to manually select which columns I am unpivoting (and I have 75+ columns).
Is there a way I can accomplish what I'm looking for? I am a beginner Spotfire/R user, so I am learning as I go!
I am not able to comment on your question as I don't have enough permission, so I am keeping this as a separate answer.
Replacing the table each time is a good idea; that way you can fix your number of columns for pivoting/unpivoting.
R code:
# build the output table with a fixed set of columns and bind it to df
row <- data.frame(Data_Points = nrows,
                  Col1 = col1, Col2 = col2, YStart = y1, YEnd = y2)
row <- cbind(df, row)
return(row)
You can also list your fixed columns in a Document Property and loop over it in your data function.
Instead of using Spotfire's pivot/unpivot, you can try doing the unpivot within the R code of the data function.
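A minimal sketch of that idea (untested; the ResponseID column name is an assumption, so substitute whatever id columns your surveys actually have), using tidyr inside the same data function that pulls the survey:
library(qualtRics)
library(tidyr)
registerApiKey(API.TOKEN = "xxxx")
df <- getSurvey(surveyID = "xxxx",
                root_url = "https://az1.qualtrics.com", verbose = TRUE)
# unpivot everything except the respondent id into question/answer pairs,
# coercing mixed column types to character so they can share one value column
df_long <- pivot_longer(df, cols = -ResponseID,
                        names_to = "Question", values_to = "Answer",
                        values_transform = list(Answer = as.character))
The data function would then return df_long as its output table, so the unpivot no longer has to be configured by hand in Spotfire.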