Modifying tables interactively with Shiny - r

I am trying to create an interface where I can interactively and permanently modify column values in a given source CSV. It should work somewhat like MS Excel: the entire table is displayed, I can change column values on the fly, and the resulting modifications are written back to the source CSV saved in a specific server directory. I was wondering whether R Shiny can do this. I have experience creating fluid/reactive pages and manipulating the display (column display, checkboxes, sliders, filtering, etc.), but I have no idea how the source data itself can be modified through the Shiny GUI. Can someone please provide some direction? Which package (if one exists) is needed, etc.? I have full write access to the source CSV, so credentials are not a problem.
Once I have some traction I plan to expand the operations onto a database.
Thanks in advance!
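One answer-style sketch, assuming the DT package's editable tables (the file name "data.csv" is a placeholder for your server-side path): render the table with editable = TRUE, catch each cell edit via input$tableId_cell_edit, update the in-memory data frame, and rewrite the CSV.

```r
# Minimal sketch: edit a CSV in place through an editable DT table.
# Assumes the DT package and write access to "data.csv" (placeholder path).
library(shiny)
library(DT)

csv_path <- "data.csv"

ui <- fluidPage(
  DTOutput("tbl")
)

server <- function(input, output, session) {
  df <- reactiveVal(read.csv(csv_path, stringsAsFactors = FALSE))

  output$tbl <- renderDT(df(), editable = TRUE)

  # Fires each time a cell is edited in the browser
  observeEvent(input$tbl_cell_edit, {
    info <- input$tbl_cell_edit  # list with row, col, value
    d <- df()
    # With row names shown (the default), info$col lines up with the
    # data's column index; coerceValue keeps the column's original type
    d[info$row, info$col] <- DT::coerceValue(info$value, d[info$row, info$col])
    df(d)
    write.csv(d, csv_path, row.names = FALSE)  # persist immediately
  })
}

shinyApp(ui, server)
```

Writing the whole file on every edit is fine for small tables; for the database expansion you mention, the same observeEvent would issue an UPDATE instead.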

Related

Databricks, Folder Management and SQL. What is happening behind the scenes?

I'm a new Databricks user.
I'm able to create subfolders in the user directory I am provided.
E.g. I am provided /mnt/DUAXXX/USERID/files
and I can create /mnt/DUAXXX/USERID/files/subfolder.
However, I cannot figure out how to create tables in this subfolder and use the resulting dataset.
I issue the following command, because the source datasets reside in this location:
%sql
use DUAXXX
However, I want to create the resulting dataset in the subfolder.
I've tried something like:
create table test
location '/mnt/DUAXXX/USERID/files/subfolder'
as select * from data
This completes, but when I browse using the Databricks GUI's 'Data' tab, the test dataset appears under the DUAXXX folder.
However, when I issue the following command:
dbutils.fs.ls(f"dbfs:/mnt/DUAXXX/USERID/files/subfolder")
I see a number of .snappy.parquet files, which I know were created by the code above.
It's as though the underlying data is stored where I want it, in this .snappy.parquet format, but Databricks is creating a link to all these files in the DUAXXX folder.
I realize a lot of this likely comes down to how the administrators implemented Databricks, and I have no access to those people. Does anyone know what is actually going on here?
Ultimately, all I am trying to do is create subfolders to organize my datasets, rather than have everything in a single folder.
Thanks.

Is it possible to have a Shiny App output a file?

I am trying to create an easy-to-use Shiny app around an existing R script that collects publications by certain authors. The script searches all publications for the specified authors and creates an Excel file listing their work.
I want the Shiny app to take as input a CSV file of author names, run my R script to gather each author's work, and produce as output a separate CSV file with the list of publications the tool generated.
My question is whether this is possible, and if so, how I would go about it. I don't have extensive knowledge of Shiny, but I have worked on creating a CSV file input. I am mainly concerned with linking in my script and having it produce a downloadable output file, as I haven't been able to find any videos or links showing how to do that.
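Yes, this is possible with a fileInput plus a downloadHandler. A minimal sketch, where collect_publications() is a hypothetical stand-in for your existing script wrapped as a function that returns a data frame:

```r
# Sketch: upload a CSV of author names, run the search script, download the result.
# collect_publications() is assumed to exist; it represents your current R script.
library(shiny)

ui <- fluidPage(
  fileInput("authors", "CSV of author names"),
  downloadButton("result", "Download publications")
)

server <- function(input, output) {
  pubs <- reactive({
    req(input$authors)  # wait until a file is uploaded
    authors <- read.csv(input$authors$datapath, stringsAsFactors = FALSE)
    collect_publications(authors)  # your existing tool, wrapped as a function
  })

  output$result <- downloadHandler(
    filename = "publications.csv",
    content = function(file) write.csv(pubs(), file, row.names = FALSE)
  )
}

shinyApp(ui, server)
```

The key piece is downloadHandler's content function: Shiny hands it a temporary file path, you write your CSV there, and the browser serves it to the user as a download.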

Creating an R Shiny application that writes to an Excel as well as reading from it

I'm currently working on a Shiny application for business-wide reporting, which is working fine. However, I've been asked whether I can implement a way to write information to a central document, which would then be read back into the application on the next run.
Essentially I need a Shiny app I can input data into, with that data retrievable at a later date. Is this achievable with an Excel document? Organising a database within the company file structure wouldn't be allowed, so this is all I can think of.
Would this be as straightforward as using a package to write to Excel and having an update script run at the beginning of each session, or is there more to consider? Most importantly, if two users update at the same time, would R wait for one update to finish before running the next one?
Thanks a lot in advance!
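On the concurrency point: R will not serialize the writes for you; two Shiny sessions writing the same workbook can corrupt it. One sketch, assuming the openxlsx package and a shared path both users can reach ("shared.xlsx" is a placeholder), uses a crude lock directory so only one session writes at a time:

```r
# Sketch: append a row to a shared workbook, guarded by a lock directory.
# dir.create() fails if the directory already exists, which makes it a
# simple (not bulletproof) mutual-exclusion primitive on most filesystems.
library(openxlsx)

xlsx_path <- "shared.xlsx"
lock_path <- paste0(xlsx_path, ".lock")

append_row <- function(new_row) {
  # Spin until no other session holds the lock
  while (!dir.create(lock_path, showWarnings = FALSE)) Sys.sleep(0.1)
  on.exit(unlink(lock_path, recursive = TRUE))  # always release the lock

  existing <- if (file.exists(xlsx_path)) read.xlsx(xlsx_path) else NULL
  updated <- rbind(existing, new_row)
  write.xlsx(updated, xlsx_path)
}
```

For anything beyond light use, a proper locking package (e.g. filelock) or a single-file database such as SQLite is more robust than Excel, if the file-structure rules allow it.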

Tie R script and QlikSense software together

I need QlikSense to create an Excel file when the user selects a set of tuples, and R to automatically pick up this file and run my script on it. My script then creates another CSV file, which I want QlikSense to automatically pick up and run some predefined operations on. Is there any way to link the two pieces of software together like this?
To clarify, the flow is: Qlik loads a large data set -> the user selects a set of rows and creates a CSV -> my custom R script automatically picks up this CSV, runs on it, and creates a new CSV -> Qlik automatically picks that up and visually displays the results.
Is there any kind of wrapper software to tie them together? Or would it be better to build some sort of UI that runs R in the background, with the user passing the file through manually? Thanks for the help.
Check out the R extension that has been developed on http://branch.Qlik.com; extensions are being created and added all the time. You will probably need to create an account to access the project, but once you have, the direct link is below.
http://branch.qlik.com/#/project/5677d32d7f70718900987bdd
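If an extension doesn't fit, the "automatic pickup" half can also be done with a plain R polling loop watching the export folder. A sketch, where the folder names and my_script() are placeholders for your setup:

```r
# Sketch: poll a drop folder for CSVs written by Qlik, run the script on
# each, and write the result where Qlik is configured to reload it.
in_dir  <- "qlik_out"   # where Qlik exports the user's selection
out_dir <- "qlik_in"    # where Qlik reads the processed result

repeat {
  files <- list.files(in_dir, pattern = "\\.csv$", full.names = TRUE)
  for (f in files) {
    d <- read.csv(f, stringsAsFactors = FALSE)
    res <- my_script(d)  # your existing R script, wrapped as a function
    write.csv(res, file.path(out_dir, basename(f)), row.names = FALSE)
    file.remove(f)  # remove the input so it isn't processed twice
  }
  Sys.sleep(5)  # check every five seconds
}
```

Qlik's side would then be a scheduled or triggered reload pointed at out_dir.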

store dataset filtered using DataTables in Shiny in app

To perhaps save a few people a minute or two: this question is about DataTables in a Shiny app (http://shiny.rstudio.com/articles/datatables.html), not data.table.
I would like to access the index of rows available in DataTables in Shiny. I do not want to save the filtered data as a csv file but, rather, add the data.frame to a dropdown of datasets the app user can work with. I am using reactive values to store data.frames and I don't expect too much trouble adding the file/data to the dropdown list once I can access the data.frame (or row index).
Although, again, I do not want to rely on TableTools or similar to save data to disk, there are several questions related to mine (see link below). However, there must be an easier way to access the filtered data frame in a Shiny app. If not, perhaps there should be :)
I was hoping I might be able to use renderDataTable but I am not sure how that would work.
Saving from Shiny renderDataTable
Thanks to @yihui, this is now possible using the DT package and input$tableId_rows_all, where tableId is the id assigned to your table. See the link below for details.
http://rstudio.github.io/DT/shiny.html
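A small sketch of that answer, storing the filtered rows into a reactiveValues list of datasets (mtcars stands in for your data; the dataset names are placeholders):

```r
# Sketch: capture the rows that survive DT's search/filter box and keep
# them as a new in-app dataset, without writing anything to disk.
library(shiny)
library(DT)

ui <- fluidPage(
  DTOutput("tbl"),
  actionButton("save", "Keep filtered rows as a dataset")
)

server <- function(input, output, session) {
  rv <- reactiveValues(datasets = list(original = mtcars))

  output$tbl <- renderDT(mtcars, filter = "top")

  observeEvent(input$save, {
    idx <- input$tbl_rows_all  # row indices after filtering, across all pages
    rv$datasets[["filtered"]] <- mtcars[idx, ]
  })
}

shinyApp(ui, server)
```

rv$datasets can then feed the dropdown of datasets the user chooses from; note _rows_all gives every filtered row, while _rows_current would give only the visible page.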
