I am pretty new to coding in general, so bear with me if this is a rookie question. I have an R script that I want to run automatically; it generates some results that I want to publish to a website daily. What is the easiest way to do so?
This is mainly a question about scheduling the R script: see Scheduling R Script.
Then you can feed the data into your webpage (e.g. from a csv).
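A minimal sketch of that setup, assuming the analysis ends up in a data frame called results and that cron (or the Windows Task Scheduler) runs the script once a day; the script name and output path are hypothetical:
# daily_report.R -- scheduled e.g. via cron: 0 6 * * * Rscript /home/me/daily_report.R
results <- data.frame(date = Sys.Date(), value = mean(rnorm(100)))  # placeholder for the real analysis
write.csv(results, "/var/www/html/results.csv", row.names = FALSE)  # a path served by the website (hypothetical)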
I have a current project that consists of 3 parts:
An interface for clients to upload datasets from their equipment.
Process the uploaded datasets using R and some preset coefficients.
Display the results in a graph with a regression line, allowing the user to click points on the graph to remove them where needed, with the regression line redrawn automatically after a point is removed.
Part 1: This is already done using PHP/Laravel. A simple upload and processing interface.
Part 3: I've been able to set this up in chart.js without any problems.
Part 2 is the sticking point at the moment. What I'd like is to be able to send the data to an R script and just get the data points back so that I can display them. Can anyone give suggestions as to the best way to do this? Is there an API available? Or do I need to install software on the server (not an issue if I do, but I'm hoping to avoid the need if possible)?
TIA
Carton
There is the shiny package for doing everything in R (user-side GUI and server-side R processing). Since you already built the interface in PHP, you can either write an R script that is executed via a shell call from PHP, or build an R REST API with plumber.
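A minimal plumber sketch of the REST option, assuming the PHP side POSTs x and y as JSON arrays and expects the fitted points back; the endpoint name and the simple lm() fit are placeholders for the preset-coefficient processing:
# plumber.R -- start with: plumber::plumb("plumber.R")$run(port = 8000)
#* Return fitted regression points for uploaded data
#* @post /points
function(x, y) {
  x <- as.numeric(x); y <- as.numeric(y)
  fit <- lm(y ~ x)                                   # placeholder for the preset-coefficient model
  list(x = x, y = y, fitted = unname(fitted(fit)))   # returned as JSON for chart.js
}
PHP would then POST to http://localhost:8000/points and draw the returned points in chart.js.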
I am looking for a way to 'auto save' my code each time I run it. I figure the best way to achieve this would be to write code into my models which will overwrite and save the file that is open. I have been experimenting with:
rstudioapi::documentSave(rstudioapi::getActiveDocumentContext()$id)
However, I have not had any success.
Good Morning All,
I worked out the issue. The model I am working on generates tables, which are produced with View(). This means that when the code has finished running, the script is not the tab being displayed.
documentSave()
will only save the script that is currently focused. Therefore, it is advised that you use
documentSaveAll()
which will save all open tabs.
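For example, a one-line guard at the top of the model script (a small sketch; rstudioapi::isAvailable() simply avoids an error if the code is run outside RStudio):
if (rstudioapi::isAvailable()) rstudioapi::documentSaveAll()  # save every open tab before the run starts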
I'm currently working on a shiny application to report on a business-wide scale, which is working fine. However, I've been asked whether I can implement a means of writing information to a central document, which will then be read back into the application on the next run.
Essentially what I need to make is a shiny app that I can input data into, with that data retrievable at a later date. Is this achievable with an Excel document? Organising a database within the company file structure wouldn't be allowed, so this is all I can think of.
Would this be as straightforward as using a package to write to Excel and then having an update script run at the beginning of each run, or would there be more to consider? Most importantly, if two users are updating at the same time, would R wait for one update to finish before running the next one?
Thanks a lot in advance!
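One hedged way to approach this, assuming the writexl, readxl and filelock packages and a hypothetical shared path: R will not queue concurrent writes by itself, but a lock file makes the second writer wait until the first has finished.
library(filelock); library(readxl); library(writexl)
store <- "//fileserver/reports/app_data.xlsx"        # hypothetical central document
lck <- lock(paste0(store, ".lock"))                  # blocks until any other writer releases the lock
current <- if (file.exists(store)) read_excel(store) else NULL
write_xlsx(rbind(current, new_rows), store)          # new_rows: the data frame collected in the shiny app
unlock(lck)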
I'm trying to find a way to automate a series of processes that use several different programs (on either Ubuntu or Windows).
For each program, I've boiled the process down to either a macro or a script, so I feel confident that the entire process can be almost entirely automated. I just can't figure out how to create a unifying tool.
The process is the following:
I have a simple text file with data. I use a jEdit macro to tidy the data, which then goes into an OpenCalc template to create a graph. That data is then imported into a program called TXM which (after many clicks) generates a column of data; this is exported to a CSV file, and that CSV file is imported into an R session where a script is executed.
I have to repeat this process (and a few more similar ones) dozens of times a day, and it's driving me nuts.
My research into how to automate this import-treatment-export process has shown a few glimmers of hope, but I haven't been able to make any real progress.
I tried autoexpect but couldn't make it work on Ubuntu. Tcl, I think, only works for internet applications, and I haven't been able to make Fabric work either.
I'm willing to spend a lot of time learning and developing a tool to achieve this, but at the moment I'm not even sure what terms to search for.
I've figured it out for Windows: I created a .bat file in a text editor which, when clicked, prompts the user for names etc. and rewrites another text file. It then executes that .txt file as a script with R using the command
R.exe CMD BATCH c:\user\desktop\mymacro.txt
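On the R side, a small sketch of a script suited to that kind of non-interactive call; the paths and the sourced analysis script are hypothetical stand-ins for the real files:
# mymacro.R -- run non-interactively, e.g. R.exe CMD BATCH c:\user\desktop\mymacro.R
dat <- read.csv("c:/user/desktop/txm_output.csv")  # hypothetical: the CSV exported from TXM
source("c:/user/desktop/analysis.R")               # hypothetical: the existing analysis script, which uses dat
(With Rscript instead of R CMD BATCH, the file names could also be passed on the command line and read with commandArgs(trailingOnly = TRUE).)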
Possible duplicate: Workflow for statistical analysis and report writing
I have not been programming with R for very long, but I am running into a project-organization question that I was hoping somebody could give me some tips on. I am finding that a lot of the analysis I do is ad hoc: I run something, think about the results, tweak it and run some more. This is conceptually different from a language like C++, where you think about the entire thing you want to run before coding, and it is a huge benefit of interpreted languages. However, the issue that comes up is that I end up with a lot of .RData files that I save so I don't have to source my script every time. Does anyone have any good ideas about how to organize my project so I can return to it a month later and have a good idea of what each file is associated with?
This is sort of a documentation question, I guess. Should I document my entire project at each stage and be rigorous about cleaning up files that are no longer necessary but were a byproduct of the research? This is my current system, but it is a bit cumbersome. Does anyone else have any other suggestions?
Per the comment below: One of the key things that I am trying to avoid is the proliferation of .R analysis files and .RData sets that go along with them.
Some thoughts on research project organisation here:
http://software-carpentry.org/4_0/data/mgmt/
the take-home message being:
Use Version Control for your programs
Use sensible directory names
Use Version Control for your metadata
Really, Version Control is a good thing.
My analysis is a knitr document, with some external .R files which are called from it.
All data are in a database, but during my analysis the processed data are saved as .RData files. They are only recreated from the database when I delete the .RData files and run the analysis again. It works like a cache, saving database access and data-processing time when I rerun (parts of) my analysis.
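A small sketch of that cache-like pattern, with fetch_from_db() and process() standing in for the real database query and processing steps:
cache <- "processed.RData"
if (file.exists(cache)) {
  load(cache)                             # restores `processed` from the previous run
} else {
  processed <- process(fetch_from_db())   # hypothetical helpers for the query and the processing
  save(processed, file = cache)
}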
Using a knitr (Sweave, etc.) document for the analysis lets you easily write a documented workflow with the results included. And knitr caches the results of the analysis, so small changes usually do not trigger a full rerun of all the R code, only of a small section. That saves quite a bit of running time for a bigger analysis.
(Ah, and as said before: use version control. Another tip: working with knitr and version control is very easy with RStudio.)