R shiny concurrent file access

I am using the R shiny package to build a web interface for my executable program. The web interface provides user input and shows output.
On the server side, the R script formats the user inputs and saves them to a local input file. Then R calls a system command to run the executable program.
My concern is that if multiple users run the web app at the same time, the input file generated by the first user could be overwritten by the second user's input before it is read by the executable program.
One way to resolve the conflict is to have R create a temporary folder for each user and generate/run the input file under that folder. But I'd like to know whether there is a better or automatic way to resolve this potential conflict with shiny. For example, if I use shiny fileInput, the uploaded files are automatically stored in a temporary folder.
Update
Thanks for the advice, @Symbolix and @Mike Wise.
I had read the persistent data storage article before, but I don't think it is exactly what I wanted; maybe my understanding is not correct. I ended up creating a temporary folder and running my executable from there.
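For anyone with the same problem, here is a minimal sketch of that per-session temporary folder approach. The executable name my_solver, the input file name input.txt and the input fields are placeholders for whatever your app actually uses:

library(shiny)

server <- function(input, output, session) {
  # Give each session its own working directory so concurrent users
  # never overwrite each other's input files.
  session_dir <- file.path(tempdir(), session$token)
  dir.create(session_dir, showWarnings = FALSE, recursive = TRUE)

  result <- eventReactive(input$run, {
    # Write this user's formatted input into their own folder.
    input_file <- file.path(session_dir, "input.txt")
    writeLines(as.character(input$user_value), input_file)

    # Run the executable against the session-specific input file.
    system2("my_solver", args = shQuote(input_file), stdout = TRUE)
  })

  output$out <- renderText(paste(result(), collapse = "\n"))

  # Clean up the folder when the user disconnects.
  session$onSessionEnded(function() {
    unlink(session_dir, recursive = TRUE)
  })
}

ui <- fluidPage(
  numericInput("user_value", "Input value", 1),
  actionButton("run", "Run"),
  verbatimTextOutput("out")
)

shinyApp(ui, server)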

Related

Azure Databricks: How do we access R Scripts present on DBFS?

I'm new to DataBricks. I am trying to access a .R file that is present in the DBFS storage but I cannot figure out how to do so. Any help is really appreciated.
I can read data from the storage using the file path /dbfs, and I can also source the script, but I want to make edits to the script.
You need an editor to do that. For example, you can set up RStudio on your cluster and connect to it via the RStudio UI; in that case you can edit R files directly on DBFS.
But really, the simplest option would be to use the Databricks CLI fs command to copy the file to your local machine, make changes in the editor of your choice, and upload the file back.

Programmatically transfer file to R Shiny application

I have the following situation/workflow:
The user utilizes Tool A to capture sensor data and save it to a CSV file.
The user takes the CSV and uploads it to an R Shiny application (using fileInput) to process it further.
I would like to get rid of the intermediate CSV and directly open the Shiny application with the data already loaded, i.e. I need a way to transfer the contents of the CSV automatically to the Shiny application.
What could work:
Tool A stores the CSV in a special location that is served by a HTTP server on a fixed endpoint.
The Shiny application requests the file from the known location on startup.
However, this still needs the intermediate CSV file and adds complexity by introducing an additional server (or at least an endpoint). Furthermore, it needs additional logic if multiple users are active or multiple files are created at the same time, so it is far from ideal.
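For illustration, a minimal sketch of that fixed-endpoint idea (the URL is a placeholder and error handling is omitted) could simply fetch the file when the session starts:

library(shiny)

# Placeholder endpoint where Tool A publishes the most recent capture.
csv_url <- "http://localhost:8000/latest.csv"

ui <- fluidPage(
  tableOutput("sensor_table")
)

server <- function(input, output, session) {
  # Fetch the CSV from the known location on startup instead of
  # waiting for a fileInput upload.
  sensor_data <- reactive({
    read.csv(csv_url, stringsAsFactors = FALSE)
  })

  output$sensor_table <- renderTable(head(sensor_data()))
}

shinyApp(ui, server)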
Is there any way to get the contents of the CSV directly from Tool A to the Shiny Application? E.g. by mimicking the messages the 'fileInput' widget produces?

Performing DML on a table by taking an Excel file as input

I am writing a PL/SQL procedure that takes an Excel file as input through the front end, and using that Excel input the procedure inserts, updates or deletes records in an existing table. Can anyone show me the approach for this?
If that "Excel" file has to be really in native XLS(X) format, a simple option - if you want to stay within Oracle boundaries - is an Apex application which offers a data loading wizard. Takes 4 pages to create it (don't worry, Apex Wizard creates almost everything for you). Once the loading is over, a (stored) procedure can do the rest of processing (you'd call it by pushing a button).
Alternatively, if you save contents of that file as a CSV file, you can load it with SQL*Loader, utility ran at the operating system command prompt. You'd have to create a control file (no wizard to do that, I'm afraid). This approach probably isn't convenient for end users (who's going to type anything at the command prompt?) so you'd have to create some kind of an application to do that.
Or, CSV again, but this time used as an external table. This approach requires the file to be located in a directory accessible by the database server (most frequently, the directory is located on that computer, and you most frequently don't want to allow access to anyone to it). Its advantage is that you can access the CSV file directly from (PL/)SQL, fetch data from it, perform various adjustments etc.
If you're capable of writing programs that aren't part of the Oracle niche (I'm not), go for it (but I can't suggest anything; someone else might).

Deploy Shiny app that can read files from local computer (no possible Data-based solution)

I am trying to deploy a Shiny app on shinyapps, but it is not possible to read and write local files on the user's computer. The idea is that every user can upload some datasets and images given a local path where this input is stored, rather than having the same set of inputs for everyone stored in a 'Data' folder.
I know there are other options for deploying shiny apps (Amazon, Shiny Server), but I am afraid I will run into the same kind of problems. Before losing too much time trying other approaches, I would like to know whether there is any way to deploy a shiny app that can read these inputs given a local path and, if not, whether there is an easy way to prepare an online app that can do this by translating from a shiny-based one. If not, I guess I will have to leave my app as a normal R package.
Thank you in advance.

Deploy R function with directory as argument as executable or web application

I've written an R function (code available on demand) that improves some analysis workflows in my research group (~10 people), and now I would like to deploy it so it's easily accessible to the rest of the group. I am the only person in the group with any knowledge of R.
In a nutshell, the function does the following:
Takes one argument, the directory in which to search for microscopy images in proprietary formats
Asks the user (with readline()) which channels should be analysed and what they are named
Generates several histograms and scatter plots of intensity levels per image channel after various normalisation steps; these are deposited in a .pdf file for each image stack
Performs linear regression and generates a .txt file per image stack
The .pdf and .txt files are output to the directory the user specifies as the argument when running the function. I want to turn it into something somewhat more user-friendly, essentially removing the need to install R plus the function's dependencies.
For the sake of universality, I would like to deploy it as a web application that takes a .zip file of the images as input, extracts them and then runs the function with that newly created directory as the argument. When it's done, it should output a .zip file of the created .pdfs and .txts. I'm not sure how feasible this is. I have looked into using Shiny for this but I'm having a hard time figuring out how to apply it, as I do not have experience with it. I have experience in Unix server administration and have a remote server that I can play around with.
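As a rough sketch of what that zip-in/zip-out Shiny app could look like (assuming your function is called analyse_images() and takes the directory as its only argument; the readline() channel prompts would have to become Shiny inputs, and zipping relies on a zip binary being available on the server):

library(shiny)

# analyse_images() stands in for the existing analysis function; it is
# assumed to write its .pdf/.txt output into the directory it is given.
server <- function(input, output, session) {
  results_zip <- eventReactive(input$images_zip, {
    work_dir <- file.path(tempdir(), session$token)
    dir.create(work_dir, showWarnings = FALSE, recursive = TRUE)

    # Extract the uploaded .zip of microscopy images into the work dir.
    unzip(input$images_zip$datapath, exdir = work_dir)

    # Run the analysis with the extracted directory as its argument.
    analyse_images(work_dir)

    # Bundle the generated .pdf and .txt files for download.
    out_zip <- file.path(tempdir(), paste0(session$token, "_results.zip"))
    outputs <- list.files(work_dir, pattern = "\\.(pdf|txt)$", full.names = TRUE)
    zip(out_zip, files = outputs, flags = "-j")  # -j drops directory paths
    out_zip
  })

  output$download <- downloadHandler(
    filename = "results.zip",
    content = function(file) file.copy(results_zip(), file)
  )
}

ui <- fluidPage(
  fileInput("images_zip", "Upload a .zip of image stacks", accept = ".zip"),
  downloadButton("download", "Download results")
)

shinyApp(ui, server)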
The second option would be somewhat less universal: deploying it as a Windows executable (I am the only person in my group who does not use Windows as a daily OS, but I do have access to a Windows environment). Here, ideally the executable would ask the user to navigate to a directory, then use that directory as the argument to the function and output the generated files in said directory.
As my experience with R is limited, I cannot tell which option would be more feasible and worth it in the long run. My gut feeling says the web application would be the most flexible and user friendly, but more challenging to deploy. Are either of my ideas implementable, and if so, what would be a good way to do so?
Thanks in advance!
