I have the following situation/workflow:
The user utilizes Tool A to capture sensor data and save it to a CSV file.
The user takes the CSV and uploads it to an R Shiny application (using fileInput) to process it further.
I would like to get rid of the intermediate CSV and open the Shiny application with the data already loaded. That is, I need a way to transfer the contents of the CSV to the Shiny application automatically.
What could work:
Tool A stores the CSV in a special location that is served by an HTTP server on a fixed endpoint.
The Shiny application requests the file from the known location on startup.
However, this still needs the intermediate CSV file and adds complexity by introducing an additional server (or at least an endpoint). Furthermore, it needs additional logic if multiple users are active or multiple files are created at the same time, so it is far from ideal.
Is there any way to get the contents of the CSV directly from Tool A to the Shiny Application? E.g. by mimicking the messages the 'fileInput' widget produces?
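For what it's worth, the fixed-endpoint idea from above can be made multi-user-safe without mimicking fileInput, by having the Shiny app pull the data itself on startup, parameterized by the URL query string. A minimal sketch, assuming Tool A serves captures at a hypothetical http://localhost:8000/ endpoint and opens the browser with a ?file= parameter (all names here are illustrative):

```r
library(shiny)

ui <- fluidPage(tableOutput("sensor"))

server <- function(input, output, session) {
  data <- reactive({
    # Tool A opens the browser with e.g. ?file=capture-42.csv, so concurrent
    # captures map to distinct files (endpoint and parameter are hypothetical)
    qs   <- parseQueryString(session$clientData$url_search)
    file <- if (is.null(qs$file)) "latest.csv" else qs$file
    read.csv(paste0("http://localhost:8000/", file))
  })
  output$sensor <- renderTable(head(data()))
}

shinyApp(ui, server)
```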
Related
I have a working R Shiny app hosted on our internal org Amazon AWS server. Users generate data in that Shiny app using the various widgets provided there, and I need to save all the data generated in one session to a file stored in our internal Amazon S3 bucket.
The challenge we are facing is how to save this data when multiple users could be generating it through the Shiny app at the same time; it then needs to be saved and, if required, reloaded back into the app.
We just can't lose any data even if two users simultaneously add data using the Shiny App.
Please advise on what our best approach could be.
I did follow the guide provided here:
https://deanattali.com/blog/shiny-persistent-data-storage/
But is there a way to avoid creating as many .csv files as there are users accessing the app?
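On the concurrency part specifically, a pattern that cannot lose data is to give every save its own S3 key (timestamp plus Shiny's per-connection session$token) and combine the objects when reloading. A minimal sketch using the cloudyr aws.s3 package; the bucket name and key prefix are hypothetical:

```r
library(aws.s3)  # cloudyr package; assumes AWS credentials are already configured

save_session_data <- function(df, session) {
  # a unique key per save: timestamp plus per-connection token, so two users
  # writing at the same moment can never overwrite each other's object
  key <- sprintf("sessions/%s-%s.csv",
                 format(Sys.time(), "%Y%m%d-%H%M%OS3"),
                 session$token)
  s3write_using(df, FUN = write.csv, row.names = FALSE,
                object = key, bucket = "my-org-bucket")  # hypothetical bucket
}
```

If the number of objects is itself the problem, the usual alternative from that same article is a relational database: concurrent inserts are handled for you, and a single query reloads everything, with no per-user files at all.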
So for context, I have created a dashboard in RStudio using flexdashboard. It is an R Markdown Shiny app with the UI and source code all within the same file.
Data sources for this app are large, locally saved documents in .txt and .csv format, with sizes ranging from a few kilobytes to several gigabytes.
I am to present this work to my CEO, but I will not have access to my current computer, so ideally I would host my dashboard somewhere and simply load it in a browser.
However, the datasets contain sensitive customer information, so privacy or controlled access to all assets is essential, ruling out most third-party solutions.
Is there a simple method to achieve this that I'm missing, or will I have to just run it from a local machine?
Thanks
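For what it's worth, if any machine on the company network can run R, the dashboard can be served internally without any third party; access control then comes from the network itself (VPN, firewall). A minimal sketch, assuming the file is called dashboard.Rmd:

```r
# serve the flexdashboard on the internal network; no third party involved
rmarkdown::run("dashboard.Rmd",
               shiny_args = list(host = "0.0.0.0", port = 3838))
# any browser on the same network can then open http://<machine-ip>:3838
```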
So I wrote a Shiny app in R, and part of the script calls a web service to collect data. I run the script on a daily basis to build the Shiny app. I would like to show data collected from the web service over time; however, the only way I know how is to write the data to a CSV file after every call to the web service and then load that CSV back into R to show how the data has changed. This may seem like a strange question, but is there a way to accomplish this solely in R, so that the data is stored every time the script is run and the new values from the web service call are appended?
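A minimal sketch of that accumulate-on-disk idea without the CSV round trip, with fetch_from_webservice() standing in for the existing web service call: each run reads the previous history from a single .rds file, appends the new pull, and writes it back.

```r
history_file <- "webservice_history.rds"

append_snapshot <- function(new_data) {
  new_data$retrieved_at <- Sys.time()  # tag each pull so change over time is visible
  history <- if (file.exists(history_file)) readRDS(history_file) else NULL
  history <- rbind(history, new_data)  # rbind(NULL, df) is just df on the first run
  saveRDS(history, history_file)
  history
}

# in the daily script:
# history <- append_snapshot(fetch_from_webservice())
```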
I am using the R shiny package to build a web interface for my executable program. The web interface provides user input and shows output.
On the server side, the R script formats the user inputs and saves them to a local input file. Then R calls the system command to run the executable program.
My concern is that if multiple users run the web app at the same time, it is possible that the input file generated by the first user will be overwritten by the second user's input before it is read by the executable program.
One way to solve the conflict is to ask R to create a temporary folder and generate/run the input file under that folder for each user. But I'd like to know whether there is a better or automatic way to resolve this potential conflict with Shiny. For example, if you use Shiny fileInputs, the uploaded files are automatically stored in a temporary folder.
Update
Thanks for the advice, @Symbolix and @Mike Wise.
I had read the persistent data storage article before, but I don't think it is exactly what I wanted; maybe my understanding is not correct. I ended up creating a temporary folder and running my executable from there.
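For anyone landing here, a minimal sketch of that ending point, with format_inputs() and my_solver standing in for the real formatter and executable: each connection gets its own scratch directory keyed by session$token, so simultaneous users cannot clobber each other's input file.

```r
library(shiny)

ui <- fluidPage(actionButton("run", "Run solver"))

server <- function(input, output, session) {
  # one scratch directory per connection; session$token is unique per session
  session_dir <- file.path(tempdir(), session$token)
  dir.create(session_dir, showWarnings = FALSE)

  observeEvent(input$run, {
    input_file <- file.path(session_dir, "input.txt")
    writeLines(format_inputs(input), input_file)  # hypothetical formatter
    system2("my_solver", args = input_file)       # hypothetical executable
  })

  # remove the scratch directory when the user disconnects
  session$onSessionEnded(function() unlink(session_dir, recursive = TRUE))
}

shinyApp(ui, server)
```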
I'm building an application that takes as input data stored in an Excel sheet. I want the user to be able to select the file they want to load data from, have the application connect to and read from the Excel file stored on the client's machine, and then close the connection. Can I do this without uploading the Excel file to the server? I'm able to do everything except select the file using a file dialog box and pass the path and file name to the procedure that connects to the Excel file and processes it. I've tried using the file input control, but I'm unable to pass the path and file name to the connection string. Any suggestions as to what other routes I might take?
EDIT:
The application essentially takes user input, either via single inputs into textboxes on the page or a bulk upload via an Excel spreadsheet, processes the inputs, and spits out a report in Excel format. The only thing displayed on the page is the loaded inputs (via the two methods just described) in a listbox, from which the user can add or remove items.
The short answer is "Only with an ActiveX control in IE unless you write your own plugin for another browser." My opinion is, "you shouldn't."
There is a good discussion already on Stack Overflow: How to read an excel file contents on client side?
The long answer, given the rest of the information you have provided, is that I would recommend the following (sketched in code after the list):
1) Upload the spreadsheet to the server.
2) Extract the data on the server.
3) Return the data to the client in whatever form suits your situation.
4) Clean up the original file on the server.
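Assuming you are doing this in Shiny (as the rest of this thread is), steps 1-3 come almost for free: fileInput already uploads to a server-side temp file, and a package such as readxl can extract from it. A minimal sketch:

```r
library(shiny)
library(readxl)  # assumption: readxl for reading the spreadsheet

ui <- fluidPage(
  fileInput("xlsx", "Upload spreadsheet", accept = ".xlsx"),
  tableOutput("preview")
)

server <- function(input, output, session) {
  data <- reactive({
    req(input$xlsx)
    # step 1 is done for you: Shiny has already saved the upload server-side,
    # and input$xlsx$datapath points at the temporary copy (step 2: extract)
    read_excel(input$xlsx$datapath)
  })
  output$preview <- renderTable(head(data()))  # step 3: return data to the client
}

shinyApp(ui, server)
```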
Some further recommendations for you, since you provided so much detail in your comments:
Rather than using a GUID to generate your server-side filename, use a timestamp in the format YYYY-MM-DD-HH-MM-SS-Ticks followed by the original filename.
Instead of deleting the data files immediately, each time you add a file, remove any files older than N days.
This way, if you have any issues processing your files in the future, you'll be able to retrieve the uploaded file into your development environment and debug it there. Assuming the data isn't personal information or sensitive in some other way, of course.
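In R terms, both recommendations fit in a small helper. The uploads/ directory, the 30-day window, and the use of epoch seconds in place of "Ticks" are illustrative choices, and upload is assumed to be the data frame row that Shiny's fileInput provides (with $name and $datapath):

```r
# archive an upload under a timestamped name, then prune old archives
archive_upload <- function(upload, keep_days = 30) {
  dir.create("uploads", showWarnings = FALSE)

  # YYYY-MM-DD-HH-MM-SS-Ticks-originalname; epoch seconds stand in for "Ticks"
  stamp <- format(Sys.time(), "%Y-%m-%d-%H-%M-%S")
  ticks <- as.integer(Sys.time())
  dest  <- file.path("uploads", paste0(stamp, "-", ticks, "-", upload$name))
  file.copy(upload$datapath, dest)

  # instead of deleting immediately, drop anything older than keep_days
  old <- list.files("uploads", full.names = TRUE)
  file.remove(old[file.mtime(old) < Sys.time() - keep_days * 86400])

  dest
}
```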
Cheers!