I have two reports made in flexdashboards inside the same project.
The only difference is the data source; the data wrangling is the same for both. Object names are identical because the code is essentially a copy (the only lines that differ are the data-load lines, filtered by country).
Every chunk has this option set to force fresh data every time:
{r cache=FALSE}
Accessing my flexdashboard Germany.Rmd renders OK. But if I open the other flexdashboard, France.Rmd, the data and charts are from Germany.
I suppose the objects (df, plot, ...) sharing the same names are conflicting even with caching disabled.
Is there any way to solve this without creating a separate project for every country?
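If both dashboards are being rendered programmatically in the same R session, one minimal way to rule out shared objects is to give each render its own clean environment. A sketch (the loop itself is illustrative, not part of the original project):
for (rmd in c("Germany.Rmd", "France.Rmd")) {
  # envir = new.env() keeps objects like df and plot from one knit
  # out of the next one
  rmarkdown::render(rmd, envir = new.env(parent = globalenv()))
}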
I'm currently working on an R Shiny app that uses googlesheets4 to pull in a large dataset from GoogleSheets on app launch. Loading this dataset takes ~2 minutes, which stalls my entire app's load time.
The only visual in my app is based on this GoogleSheets data, so it is very dependent on this specific dataset. Once the dataset is pulled into my app, it is filtered and becomes much smaller (85,000 rows -> 1,000 rows). The GoogleSheet data is updated every day, so I don't have the luxury of downloading it once and storing it as a .csv forever.
There are two different fixes I have tried, both unsuccessful... curious if anyone has any thoughts.
Have a separate app running. My first idea was to create an entirely separate Shiny app whose sole purpose would be to pull the GoogleSheets df once a day. Once it pulled it, it would do the necessary data cleaning to get it down to ~1,000 rows and then push the smaller df to a different GoogleSheet. My original app with the visual would then always reference that new GoogleSheet (which would take much less time to load).
The problem I ran into here is that I couldn't figure out how to write a new GoogleSheets doc using googlesheets4. If anyone has any idea how to do that it would be much appreciated.
Temporarily delay loading the GoogleSheets data, and let the visual populate first. My second idea was to have the code that pulls in the GoogleSheets df be delayed on launch, letting the visual populate first (using old data), and then have the GoogleSheets pull happen. Once the pull is complete, the visual would re-populate with the updated data.
I couldn't figure out the best/right way to make this happen. I tried messing around with Sys.sleep() and futures/promises but couldn't get things to work correctly.
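For illustration, a rough sketch of how the deferred pull could be wired up with promises and future. The cached-data file, sheet ID, and plotting call are placeholders, and this pattern is an assumption rather than a tested fix:
library(shiny)
library(promises)
library(future)
plan(multisession)

ui <- fluidPage(plotOutput("main_visual"))

server <- function(input, output, session) {
  # Start with yesterday's data so the visual renders immediately
  # ("cached_data.rds" is a placeholder for however old data is stored).
  sheet_data <- reactiveVal(readRDS("cached_data.rds"))

  # Kick off the slow GoogleSheets pull in a background R session,
  # then swap the fresh data into the reactiveVal once it arrives.
  future_promise({
    # note: googlesheets4 auth may also need to be configured in the
    # background session before this call will work
    googlesheets4::read_sheet("SOURCE_SHEET_ID")   # placeholder sheet ID
  }) %...>%
    sheet_data()

  output$main_visual <- renderPlot({
    plot(sheet_data())   # placeholder for the real visual
  })
}

shinyApp(ui, server)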
Curious if anyone has any thoughts on my 2 different approaches, or if there's a better approach I'm just not considering...
Thanks!
There is a function called write_sheet that allows you to write data to a Google Sheet. Does that work for you?
googlesheets4::write_sheet(data = your_data,
ss = spread_sheet_identifier,
sheet = "name_of_sheet_to_write_in")
If you only want to append something without deleting everything already in the sheet, the function is sheet_append
googlesheets4::sheet_append(data = your_data,
ss = spread_sheet_identifier,
sheet = "name_of_sheet_to_write_in")
I'm not sure you can store the credentials in a safe way, but couldn't you use GitHub Actions? Or alternatively a cron job on your local computer?
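For completeness, a sketch of the daily refresh script such a scheduled job could run. The sheet IDs, the service-account file, and the filter step are placeholders:
library(googlesheets4)
library(dplyr)

# Non-interactive auth, e.g. with a service-account key kept out of
# version control (the path is a placeholder).
gs4_auth(path = "service-account.json")

# Pull the full dataset, reduce it, and overwrite the small sheet
# that the Shiny app reads from.
raw   <- read_sheet("SOURCE_SHEET_ID")
small <- raw %>%
  filter(!is.na(value))     # placeholder for the real cleaning/filtering

write_sheet(small, ss = "TARGET_SHEET_ID", sheet = "daily_summary")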
I've written a function that takes a data frame of specific points and plots them using ggplot, creating a picture. I'd like to be able to take this picture and use it as a GUI for a Shiny app, where users can click anywhere on the picture, add some information from the click, and then append the click's coordinates and information to a data frame (that's originally empty). Ideally this data frame would be able to be downloaded for further analyses.
Is this possible? If so, how do I go about including this functionality in the Shiny app? I'd share code, but the data frame is long and I don't have anything in the Shiny app yet to share. More specific information about the project is available upon request.
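A minimal sketch of that pattern using Shiny's plot click events: clicks are captured via plotOutput's click argument, appended to an initially empty data frame, and offered for download. The ggplot call stands in for the picture-drawing function, and the column names are assumptions:
library(shiny)
library(ggplot2)

ui <- fluidPage(
  # click = "plot_click" exposes click coordinates as input$plot_click
  plotOutput("picture", click = "plot_click"),
  textInput("note", "Label for this point"),
  tableOutput("clicks"),
  downloadButton("download", "Download clicks")
)

server <- function(input, output, session) {
  # initially empty data frame of recorded clicks
  clicks <- reactiveVal(data.frame(x = numeric(), y = numeric(), note = character()))

  output$picture <- renderPlot({
    # placeholder for the function that draws the picture
    ggplot(mtcars, aes(wt, mpg)) + geom_point()
  })

  observeEvent(input$plot_click, {
    # append the click's coordinates plus the user-supplied label
    clicks(rbind(
      clicks(),
      data.frame(x = input$plot_click$x,
                 y = input$plot_click$y,
                 note = input$note)
    ))
  })

  output$clicks <- renderTable(clicks())

  output$download <- downloadHandler(
    filename = "clicks.csv",
    content = function(file) write.csv(clicks(), file, row.names = FALSE)
  )
}

shinyApp(ui, server)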
I'm writing my own R package to carry out some specific analyses for which I make a bunch of API calls to get some data from some websites. I have multiple keys for each API and I want to cycle them for two reasons:
Ensure I don't go over my daily limit
Depending on who is using the package, different keys may be used
All my keys are stored in a .csv file, api_details.csv. This file is read by a function that gets the latest usage statistics and returns the key with the most calls remaining. I can add the .csv file to the package's data folder and it is available when the package is loaded, but this presents two problems:
The .csv file is not read properly: all column names are pasted together into a single variable name and all values are pasted together into a single observation per row.
As I continue working, I would like to add more keys (and perhaps more details about the keys) to api_details.csv, but I'm not sure how to do that.
I could save the details as an .RData file, but I'm not sure how it would be updated or read outside of R (by other people). Using a .csv means anyone using the package can easily add or remove keys.
What's the best method to address points 1 and 2 above?
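One note on point 1: if I recall correctly, data() reads .csv files placed in a package's data/ directory with read.table(header = TRUE, sep = ";"), which would explain comma-separated columns collapsing into a single variable. A common alternative, sketched below, is to ship the file in inst/extdata/ and read it at run time; the package name and column names here are assumptions:
# Sketch: read the key file shipped in inst/extdata/ at run time
# ("mypackage", "key", and "calls_remaining" are placeholders).
get_best_key <- function() {
  path <- system.file("extdata", "api_details.csv",
                      package = "mypackage", mustWork = TRUE)
  keys <- utils::read.csv(path, stringsAsFactors = FALSE)
  # return the key with the most calls still available
  keys$key[which.max(keys$calls_remaining)]
}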
Has anyone encountered an issue like this before:
I've created a very basic map plotting 39 places in England using Longitude and Latitude points.
I have a table that accompanies the map, and when you select one of the 39 places on the map, I want it to update the table to show only values for that selection.
I've set up the map as a filter but for some reason, only 7 of the 39 work when selected.
I decided to extract the data and save the file as a packaged workbook to send to a colleague to investigate. However, when I extract the data, save it as a packaged workbook, and open it in Reader or Desktop, everything works as it should. All 39 places on the map filter the other sheets correctly.
So I guess my question is, why would my original desktop version not work, while saving/extracting the data makes it work perfectly?
For additional information, I am connecting to a live SQL server data source.
Edit: Added screenshots. I don't have this problem when I extract data and save as a packaged workbook. All the filters work.
I have finally solved this one.
I was clicking on the actual map and choosing 'Use as Filter'. What I have done instead is go to Dashboard > Actions and Add Action, making sure that Target Filters is set to Selected Fields and not All Fields.
I think I'll avoid the 'Use as Filter' option from now on, as that was a massive pain. It still makes no sense that it worked when extracted and saved as a packaged workbook, but Tableau has lots of these little quirks as you use it more and more.
I currently have an Access database which pulls data from an Oracle database for various countries and is currently around 1.3 GB. However, more countries and dimensions should be added, which will further increase its size; my estimate is around 2 GB, hence the title.
Per country, there is one table. These tables are then linked in a second Access db, where the user can select a country through a form which pulls the data from the respective linked table in Access db1, aggregates it by month, and writes it into a table. This table is then queried from Excel and some graphs are displayed.
Also, there is another form where the user can select certain keys, such as business area, and split the data by this. This can be pulled into a second sheet in the same excel file.
The users would not only want to be able to filter and group by more keys, but also to customize the period of time for which data is displayed more freely, such as from day xx to day yy aggregated by week or month (currently, only month-wise aggregation starting from the first of each month is supported).
Since the current Access-Access-Excel solution seems quite cumbersome to me, I was wondering whether I could build the extensions the users need in R, using either Shiny or R Markdown. I know that Shiny does not allow files larger than 30 MB to be uploaded, but I would plan on using it offline; I was not able to find a file size limit for that case - or do the same restrictions apply?
I know some R and I think the data aggregations needed could be done very quickly with dplyr. The problem is that the users do not know R, so the report needs to be highly customizable while requiring no technical knowledge. Since I have no pre-existing knowledge of Shiny or R Markdown, I was wondering whether it is worth going through the trouble of learning one of them well enough to implement this report.
Would what I want to do be feasible in Shiny or R Markdown? If so, would it still load quickly enough to be usable?
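For what it's worth, a minimal sketch of what such a report could look like in Shiny, querying the source database directly with DBI/dplyr instead of going through the two Access databases. The DSN, table name, and column names are all placeholders/assumptions:
library(shiny)
library(DBI)
library(dplyr)
library(lubridate)

# Placeholder connection; in practice this would point at the same
# Oracle source the Access databases pull from.
con <- dbConnect(odbc::odbc(), dsn = "oracle_dsn")

ui <- fluidPage(
  selectInput("country", "Country", choices = c("DE", "FR", "IT")),
  dateRangeInput("period", "Period"),
  radioButtons("unit", "Aggregate by", choices = c("week", "month")),
  tableOutput("summary")
)

server <- function(input, output, session) {
  output$summary <- renderTable({
    tbl(con, "sales") %>%                          # placeholder table name
      filter(country == !!input$country,
             sale_date >= !!input$period[1],
             sale_date <= !!input$period[2]) %>%
      collect() %>%
      # aggregate by the user-chosen unit (week or month)
      mutate(bucket = as.character(floor_date(sale_date, unit = input$unit))) %>%
      group_by(bucket) %>%
      summarise(total = sum(amount), .groups = "drop")
  })
}

shinyApp(ui, server)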