So I wrote a Shiny app in R, and part of the script calls a web service to collect data. I run the script on a daily basis to build the Shiny app. I would like to show the data collected from the web service over time, but the only way I know how is to write the data to a CSV file after every call to the web service and then load that CSV file back into R to show how the data has changed. This may seem like a strange question, but is there a way to accomplish this solely in R, so that the data is stored every time the script is run and new values from the web service call are appended?
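A minimal sketch of the append-on-every-run idea I have in mind (fetch_from_webservice() and the file name are placeholders for my actual call):

history_file <- "webservice_history.rds"

new_data <- fetch_from_webservice()          # existing web-service call
new_data$retrieved_at <- Sys.time()          # timestamp each pull

history <- if (file.exists(history_file)) readRDS(history_file) else NULL
history <- rbind(history, new_data)          # append today's pull
saveRDS(history, history_file)               # persist for the next run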
I currently have a data_prep.R file that I use to pull in data using SQL queries and then wrangle the data into suitable data frames for use within my {golem} package. At the end of this script I have
usethis::use_data(df, overwrite = TRUE)
From research it seems that this file should go into the /data-raw folder, as you are not supposed to execute code in the /R folder. When run, it constructs my data frames and places them within the /data folder. However, this script does not seem to be run whenever I launch the application; the data frames remain unchanged until I manually run the data_prep.R script again.
My application relies on the new data coming in and as such I would need this data_prep.R file to run whenever the app is launched.
Is there something that I am missing?
I worked this out by placing the data_prep.R script into the application base directory and sourcing the file within the app_server.R file.
source("./data_prep.R")
This runs the script on app start and pulls the data frames from the server, so the data is up to date.
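For clarity, a minimal sketch of what that looks like (output$my_table is a placeholder, and df is whatever data_prep.R creates):

app_server <- function(input, output, session) {
  # Re-run the data prep on every session start so the data is current
  source("./data_prep.R", local = TRUE)

  output$my_table <- shiny::renderTable(head(df))
}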
The call to usethis::use_data(df, overwrite = TRUE) creates a dataset that you can load when your package is in use.
You should explicitly load the dataset in your app using data(df) in the piece of code where the data is needed.
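For example, a minimal sketch (the package name "mypackage" and the output are placeholders):

app_server <- function(input, output, session) {
  # Load the packaged dataset into this function's environment
  data("df", package = "mypackage", envir = environment())
  output$summary <- shiny::renderPrint(summary(df))
}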
I have the following situation/workflow:
The user utilizes Tool A to capture sensor data and save it to a CSV file.
The user takes the CSV and uploads it to an R Shiny application (using fileInput) to process it further.
I would like to get rid of the intermediate CSV and directly open the Shiny application with the data already loaded. I.e. I need a way to transfer the contents of the CSV automatically to the Shiny application.
What could work:
Tool A stores the CSV in a special location that is served by an HTTP server on a fixed endpoint.
The Shiny application requests the file from the known location on startup.
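A rough sketch of that startup fetch (the URL is a placeholder):

csv_url <- "http://localhost:8000/latest-capture.csv"

server <- function(input, output, session) {
  sensor_data <- read.csv(csv_url)              # fetched once per session at startup
  output$preview <- shiny::renderTable(head(sensor_data))
}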
However, this still needs the intermediate CSV file and adds complexity by introducing an additional server (or at least an endpoint). Furthermore, it needs additional logic if multiple users are active or multiple files are created at the same time, so it is far from ideal.
Is there any way to get the contents of the CSV directly from Tool A to the Shiny Application? E.g. by mimicking the messages the 'fileInput' widget produces?
I have a working R Shiny app hosted on our organization's internal Amazon AWS server. Users will generate data in that Shiny app using the different widgets provided there, and I need to save all the data generated in one session to a file stored in our internal Amazon S3 bucket.
The challenge we are facing is how to save these data when multiple users could be generating data with the Shiny app at the same time, and how to reload the data back into the app later if needed.
We just can't lose any data even if two users simultaneously add data using the Shiny App.
Please guide what could be our best approach.
I did follow the guidelines provided here:
https://deanattali.com/blog/shiny-persistent-data-storage/
But is there a way to avoid creating as many .csv files as there are users accessing the app?
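One idea from the linked article would be to append every submission to a single shared table in a database instead of writing one CSV per user. A rough sketch with {DBI}, using SQLite as a stand-in for whatever back end our server supports (file name and table name are placeholders):

library(DBI)

save_submission <- function(new_row) {
  con <- dbConnect(RSQLite::SQLite(), "responses.sqlite")
  on.exit(dbDisconnect(con))
  dbWriteTable(con, "responses", new_row, append = TRUE)   # one shared table for all users
}

load_submissions <- function() {
  con <- dbConnect(RSQLite::SQLite(), "responses.sqlite")
  on.exit(dbDisconnect(con))
  dbReadTable(con, "responses")
}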
I have an app deployed on shinyapps.io. It reads a CSV in global.R that is created by another R script, which pulls data from various databases and merges and cleans the data. How can I refresh the CSV and republish the app on a weekly basis so the data stays current? I'm building this for a client, so I don't want to have to run it locally.
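For illustration, one approach would be a small refresh script that a scheduler (e.g. cron or a CI job) runs weekly: rebuild the CSV, then redeploy with rsconnect. The script name, paths, and account details below are placeholders:

# refresh_and_deploy.R -- run weekly by a scheduler
source("build_data.R")                       # regenerates the CSV from the databases

rsconnect::deployApp(
  appDir  = "path/to/app",
  appName = "my-app",
  account = "my-account"
)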
I'm pretty new to using Shiny apps to visualize data. We plan to host our Shiny app on our own server, so we used Docker to deploy it. However, the app is super slow to load, since we have to load a lot of big data frames (up to 10,000,000 rows x 10 columns) that are saved within an RData object.
My first question is: Will the data be loaded each time a user visits/reloads the website?
I was looking into ways to speed up loading the data. One possibility might be to use the feather package, which seems to be faster at loading data tables.
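For example, a quick sketch of the Feather route using the {arrow} package (file paths are placeholders):

library(arrow)

write_feather(big_df, "data/big_df.feather")     # one-time conversion of the RData contents

big_df <- read_feather("data/big_df.feather")    # read this in the app instead of load()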
Another option would be to put the data into a database; however, I do not have experience with that. I saw there are some nice packages like DBI and RMariaDB that seem to work well with Shiny apps, but I only find examples where an external database is queried. Is it possible to pack a MySQL database within the Docker image and access it from within the Shiny app? Or is the normal procedure to host the database externally?
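If the database route were used, my understanding is that the connection code would look roughly like this whether the database runs externally or in a second container (e.g. via docker-compose); host, credentials, table, and query are placeholders:

library(DBI)

con <- dbConnect(
  RMariaDB::MariaDB(),
  host     = "db",                          # e.g. the docker-compose service name
  dbname   = "appdata",
  user     = "shiny",
  password = Sys.getenv("DB_PASSWORD")
)

result <- dbGetQuery(con, "SELECT * FROM measurements WHERE sensor_id = 1")
dbDisconnect(con)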
I'm really new to all this, so I'm not even sure if I'm asking the right questions. These are the conditions: we have a lot of data in the form of multiple data tables. Those need to be read into our app quickly and queried quickly through interactive user input. We need to dockerize our app in order to deploy it. What is the best approach here?