Tie an R script and Qlik Sense together

I need Qlik Sense to create an Excel file when the user selects a set of tuples, and R to automatically pick up this file and run my script on it. My script then creates another CSV file, which I want Qlik Sense to automatically pick up and perform some predefined operations on. Is there any way to link the two pieces of software together in such a manner?
So to clarify, the flow is: Qlik gets a large data set -> the user selects a set of rows and creates a CSV -> my custom R script automatically picks up this CSV, runs on it, and creates a new CSV -> Qlik automatically picks that up and visually displays the results.
Is there any kind of wrapper software to tie them together? Or would it be better to build some sort of UI that works with R in the background, through which the user can manually pass the file? Thanks for the help.

Check out the R extension that has been developed on Qlik Branch (http://branch.qlik.com); extensions are being created and added all the time. You will probably need to create an account to access the project, but once you have one, the direct link is below.
http://branch.qlik.com/#/project/5677d32d7f70718900987bdd
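Alternatively, the polling workflow described in the question can be sketched in plain R. This is a minimal sketch, not a production watcher: the exchange-folder paths and the `process_selection()` logic are placeholders you would replace with your own script, and the Qlik side would still need to reload the output folder on a schedule or via the extension above.

```r
# Minimal polling loop: watch a folder for a CSV exported from Qlik Sense,
# run a processing step on it, and write a result CSV for Qlik to re-import.
# The paths and process_selection() are placeholders for your own setup.

in_dir  <- "C:/qlik_exchange/in"    # where Qlik writes the selection CSV
out_dir <- "C:/qlik_exchange/out"   # where Qlik reloads results from

process_selection <- function(df) {
  # Placeholder for the real script: here we just add a row-count column.
  df$n_rows <- nrow(df)
  df
}

watch_once <- function() {
  files <- list.files(in_dir, pattern = "\\.csv$", full.names = TRUE)
  for (f in files) {
    df  <- read.csv(f, stringsAsFactors = FALSE)
    res <- process_selection(df)
    write.csv(res, file.path(out_dir, paste0("result_", basename(f))),
              row.names = FALSE)
    file.remove(f)  # so the same selection is not processed twice
  }
  length(files)     # number of files handled in this pass
}

# while (TRUE) { watch_once(); Sys.sleep(5) }  # poll every 5 seconds
```

Running the commented loop in a background R session gives you the "automatic pick-up" step without any wrapper software.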

Related

Databricks, Folder Management and SQL. What is happening behind the scenes?

New Databricks user.
I'm able to create subfolders in the user directory I am provided.
E.g. I am provided /mnt/DUAXXX/USERID/files
and I can create /mnt/DUAXXX/USERID/files/subfolder.
However, I cannot figure out how to create tables in this subfolder and use the resulting dataset.
I issue the following command, because the source datasets reside in this location:
%python
spark.sql("USE DUAXXX")
However, I want to create the resulting dataset in the subfolder.
I've tried something like:
CREATE TABLE test
LOCATION '/mnt/DUAXXX/USERID/files/subfolder'
AS SELECT * FROM data;
This completes, but when I navigate using the Databricks GUI 'Data' tab, the test dataset appears in the DUAXXX folder.
However, when I issue the following command:
dbutils.fs.ls("dbfs:/mnt/DUAXXX/USERID/files/subfolder")
I see a number of .snappy.parquet files. I know these files are created by the code above. It's as though the underlying data is stored where I want it, in this .snappy.parquet format, but Databricks is creating a link to all these files in the DUAXXX folder.
I realize a lot of this likely comes down to how the administrators implemented Databricks, and I have no access to those people. Does anyone know what is actually going on here?
Ultimately, all I am trying to do is create subfolders to organize my datasets, rather than have everything in a single folder.
Thanks.

Is it possible to have a Shiny App output a file?

I am trying to create an easy-to-use Shiny app around an R script tool that was made to collect publications by certain authors. The script searches all publications for the specified authors and creates an Excel file listing their work.
I want the Shiny app to take as input a CSV file of authors' names that someone uploads, run my R script tool to gather each author's work, and output a separate CSV file containing the list of publications the tool produced.
My question is whether this is possible, and if so, how would I go about doing it? I don't have extensive knowledge of Shiny, but I have worked on creating a CSV file input. I am more concerned with linking in my script and having it produce a file as output, as I haven't been able to find any videos or links showing how to do that.
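The upload-process-download pattern described above can be sketched with `fileInput()` and `downloadHandler()`. This is only a sketch: `fetch_publications()` is a hypothetical stand-in for the real script, which is assumed to take a data frame of names and return a data frame of publications.

```r
# Sketch of a Shiny app: upload a CSV of author names, run a (placeholder)
# lookup function, and download the result as a new CSV.
library(shiny)

fetch_publications <- function(authors) {
  # Placeholder: the real tool would search publication sources here.
  data.frame(author      = authors$name,
             publication = paste("Example publication for", authors$name))
}

ui <- fluidPage(
  fileInput("authors_csv", "Upload CSV with a 'name' column"),
  downloadButton("results_csv", "Download publications CSV")
)

server <- function(input, output) {
  results <- reactive({
    req(input$authors_csv)                          # wait for an upload
    authors <- read.csv(input$authors_csv$datapath)
    fetch_publications(authors)
  })
  output$results_csv <- downloadHandler(
    filename = "publications.csv",
    content  = function(file) write.csv(results(), file, row.names = FALSE)
  )
}

# shinyApp(ui, server)
```

The only Shiny-specific work is the one `reactive()` around the upload and the `downloadHandler()`; the script itself stays an ordinary function.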

Modifying tables interactively with Shiny

I am trying to create an interface where I can interactively and permanently modify column values of a given source CSV. It should function somewhat like MS Excel: the entire table displays, I can change column values on the fly, and the resulting modifications are reflected in the source CSV saved in a specific server directory. I was wondering if R Shiny can do this. I have experience creating fluid/reactive pages and manipulating the display (column display, checkboxes, sliders, filtering, etc.), but I have no clue how the source data itself can be modified through the Shiny GUI. Can someone please provide some direction? Which package (if available) is needed, etc.? I have full write access to the source CSV, so credentials are not a problem.
Once I have some traction I plan to expand the operations onto a database.
Thanks in advance!
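One way to get Excel-like in-place editing is the DT package's editable tables: each cell edit arrives as an input event, which you can apply to the data and write straight back to the file. This is a minimal sketch under the assumption that `csv_path` points at a server-side CSV the app can write to.

```r
# Sketch: show a CSV in an editable DT table and write each cell edit
# back to the source file. csv_path is a placeholder.
library(shiny)
library(DT)

csv_path <- "/srv/data/source.csv"   # server-side CSV with write access

ui <- fluidPage(DTOutput("tbl"))

server <- function(input, output) {
  dat <- reactiveVal(read.csv(csv_path, stringsAsFactors = FALSE))

  output$tbl <- renderDT(dat(), editable = "cell")

  observeEvent(input$tbl_cell_edit, {
    updated <- editData(dat(), input$tbl_cell_edit, "tbl")  # apply the edit
    dat(updated)
    write.csv(updated, csv_path, row.names = FALSE)         # persist to disk
  })
}

# shinyApp(ui, server)
```

Moving from the CSV to a database later would mean replacing the `read.csv`/`write.csv` pair with queries, while the editing logic stays the same.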

Implement a whole R script containing functions in Shiny

I have a complex R script which declares many new functions. These functions can check and clean a given file according to some requirements.
I want to build a simple user interface where people without any knowledge of R can upload a source file, choose some options, and download the analysed file without looking at the code.
In the past I prepared a simple Shiny app around my archival R code; however, each time I wanted to perform some calculations, I had to use reactive() and add () after each variable.
This time the script is too complex to adjust into reactive() form.
Is there any way to implement the whole R script while avoiding this?
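One common approach is to leave the script entirely non-reactive: `source()` it to load its functions, then call its entry point inside a single `eventReactive()` triggered by a button, so only that one wrapper is reactive. This is a sketch; `my_analysis_script.R`, its `clean_file()` entry point, and the `strict` option are assumptions standing in for the real script.

```r
# Sketch: reuse an existing non-reactive R script inside Shiny by source()-ing
# it and running it once per button click, instead of rewriting every
# variable as a reactive. clean_file() is the assumed entry-point function.
library(shiny)

source("my_analysis_script.R")   # defines clean_file() and its helpers

ui <- fluidPage(
  fileInput("src", "Upload source file"),
  checkboxInput("strict", "Strict checking", FALSE),
  actionButton("run", "Run analysis"),
  downloadButton("dl", "Download analysed file")
)

server <- function(input, output) {
  # All the ordinary, non-reactive code runs inside this one wrapper.
  result <- eventReactive(input$run, {
    req(input$src)
    clean_file(input$src$datapath, strict = input$strict)
  })

  output$dl <- downloadHandler(
    filename = "analysed.csv",
    content  = function(file) write.csv(result(), file, row.names = FALSE)
  )
}

# shinyApp(ui, server)
```

Inside the `eventReactive()` body the script's variables are plain values, so no `()` suffixes are needed anywhere except on `result()` itself.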

Drupal - Attach files automatically by name to nodes

I need a better file attachment function. Ideally, if you upload files via FTP with a name similar to the name of a node (containing the same word), they would appear under that node, so you don't have to add each file separately when many nodes need attachments. Can you think of a solution? Alternatively, something that would not be as difficult as always adding them manually again.
Dan.
This would take a fair bit of coding. Basically, you want to implement hook_cron() and run a function that loops through every file in your FTP folder. The function would look for names of files that have not already been added to any node and then decide which node to add them to.
Bear in mind there will be a delay between uploading your files and their being attached to a node, since that only happens when the next cron job runs.
This is not a good solution, and if I could give you any advice it would be not to do it: the reason you upload files through the Drupal interface is so that they are tracked in the files table and can be re-used.
Also, the approach you're proposing leaves massive ambiguity as to which file goes where. Consider this:
You have two nodes, one about cars and one about motorcycle sidecars. Your code will have to be extremely complex to decide which node to attach to if the file you've uploaded is called 'my-favourite-sidecar.jpg'.
