Will Shiny work online without Shiny Server?

I need to integrate a Shiny application into some existing PHP/HTML code, and I've seen that it's possible to run the app by typing:
R -e "shiny::runApp('path_to_shiny', port=9999)"
So I plan to run this command on the server and embed the app in an iframe that points to it. Will that work?

You can let the Shiny server run on a different port than the webserver (default 80). For example, see the default configuration of shiny server, which lets shiny run on port 3838. This is better than running an R process with the shiny package inside of it, because you get startup scripts for shiny server that handle all sorts of cases that you would have to handle manually otherwise (e.g., restarting the R process when the server reboots, etc).
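For reference, the stock Shiny Server configuration (typically /etc/shiny-server/shiny-server.conf) looks roughly like the fragment below; the 3838 mentioned above is set by the listen directive, so the Shiny side never needs to fight the web server for port 80:

```
# run apps as the "shiny" system user
run_as shiny;

server {
  listen 3838;

  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    directory_index on;
  }
}
```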

Yes. You also need to set the host argument to '0.0.0.0' so the app accepts connections from other machines, like this:
R -e "shiny::runApp('path_to_shiny', host='0.0.0.0', port=9999)"
You also need to make sure nothing else on the server is already using port 9999; if there is a conflict, just pick another port (e.g. yoururl.com:8080). The web server keeps port 80 (i.e. yoururl.com) while Shiny listens on its own port, so the two applications run side by side.
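On the PHP/HTML side, the embed then comes down to an iframe pointing at the Shiny port (yoururl.com and 9999 here are placeholders for your own host and port):

```html
<!-- embeds the Shiny app served on port 9999; adjust host/port to your setup -->
<iframe src="http://yoururl.com:9999/" width="100%" height="600" style="border:none;"></iframe>
```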


Launch separate RStudio session on a different port?

I have an RStudio Server running on a port, say 8787, which is then accessible from a web browser.
The problem is that if my colleague wants to use RStudio, I'll be disconnected, as it seems only one user can use it at a time.
I'm looking for how can we launch another instance of RStudio session on a separate port number, say, 8989.
This should allow at least 2 different users to run 2 separate RStudio sessions on the same server.
To be clear, I'm on the free version of RStudio Server. I'm not sure whether features like multiple sessions on different ports require a paid license.
If it helps, I'm using RHEL7.
Thanks!
You do not need a license for this. Even the free version of RStudio Server will allow you to run one session per user.
So you don't need to try to run multiple servers on multiple ports; just set up a regular Linux user account for your colleague on your server (using e.g. adduser), and they'll be able to log into RStudio and run their own R session.
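Concretely, on RHEL7 (which the question mentions) that comes down to the following; the username is just an example:

```shell
# Create a second Linux account; RStudio Server (free) gives each OS user
# their own R session behind the same port (e.g. http://yourserver:8787).
sudo useradd colleague
sudo passwd colleague     # set their login password
# On Debian/Ubuntu the interactive equivalent is: sudo adduser colleague
```

Your colleague then logs into the same RStudio URL with those credentials and gets an independent session.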

Best way to collect logs from a remote server

I need to run some commands on some remote Solaris/Linux servers and collect their output in a log file on my local server.
Currently, I'm using a simple Expect script residing on the local server to fire the commands on the target system. I then redirect the output of the Expect script to a log file, like this:
/usr/local/bin/expect script.exp >> logfile.txt
However, this is proving to be very unreliable as the connection to the server fluctuates a lot, leading to incomplete logs and hung scripts.
Is there a better and more reliable way to go about this task?
I have implemented fedorqui's answer:
1. Created a (shell) script that runs the required commands on the target servers.
2. Deployed this script to all the servers.
3. Executed this script via Expect from my local (central) server.
4. Finally, collected the logs individually from each server after successful completion, and processed them.
The solution has been working fine without a glitch till now.
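The per-server loop can be sketched like this. The collect() wrapper and its retry count are illustrative (in real use the command would be an ssh invocation such as "ssh host /opt/scripts/report.sh" rather than a local one); the retry-and-append logic is the part that addresses the flaky-connection problem:

```shell
# collect: run a command, appending its output to a log, retrying on failure
collect() {
  cmd=$1
  log=$2
  tries=${3:-3}
  i=1
  while [ "$i" -le "$tries" ]; do
    # run the command and append stdout+stderr so partial attempts are visible
    if sh -c "$cmd" >>"$log" 2>&1; then
      return 0                 # success: output is in $log
    fi
    i=$((i + 1))
  done
  echo "giving up on: $cmd" >&2
  return 1
}

# demo with a local command standing in for the ssh call
collect "echo hello from server1" server1.log
cat server1.log
```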

How to configure FastRWeb to use Rserve's built-in web server

I'm new to Rserve (and FastRWeb). I installed Rserve 1.7.0 because I want to use its built-in web server. As I already have Apache running on this machine, I want to run Rserve/FastRWeb on a custom port.
I ran cd /usr/local/lib/R/site-library/FastRWeb; sudo ./install.sh, which created the /var/FastRWeb/ directory tree.
I'm not seeing any configuration file that mentions port. The default /var/FastRWeb/code/rserve.conf looks like this:
socket /var/FastRWeb/socket
sockmod 0666
source /var/FastRWeb/code/rserve.R
control enable
I'm guessing that means it uses a Unix socket by default? So my question is: what exactly do I have to put in (and remove from) that file to have it listen on, say, TCP port 8888? And is there anything else I need to do? (I want to be able to connect from other machines, not just localhost.)
Possibly related: I've looked at /var/FastRWeb/web/index.html, and it contains JavaScript that connects to /cgi-bin/R/. Is that path specific to using Apache, or will it work as-is when using Rserve?
Setting the port is explained in the Rserve 1.7.0 release announcement. So at the top of rserve.conf I added this line:
http.port 8888
Then I used the start script (as root) to start it.
This got me halfway as now http://127.0.0.1:8888/ works, but gives me a page that says:
Error in try(.http.request("/", NULL, NULL, c(48, 6f, 73, 74, 3a, 20, :
could not find function ".http.request"
The second half of the solution is to add this to the top of /var/FastRWeb/code/rserve.R:
library(FastRWeb)
.http.request <- FastRWeb:::.http.request
Then start things going by running /var/FastRWeb/code/start. There is no default handler, so you can test it with http://127.0.0.1:8888/info. Or a more interesting example is http://127.0.0.1:8888/example1.png (to view a chart) or http://127.0.0.1:8888/example2 (to view a mix of html and chart)
Note: I did not delete or edit any other configuration to get this working. That means the Unix socket is also still listening; if that is not needed, remove the socket and sockmod lines from the rserve.conf file.
If you want it listening on all interfaces, not just localhost, then add remote enable to your rserve.conf file. NOTE: make sure you understand the security consequences before opening your server to the world.
So, after those two changes, my /var/FastRWeb/code/rserve.conf file looks like:
http.port 8888
remote enable
source /var/FastRWeb/code/rserve.R
control enable
Did you see Jay Emerson's write-up from a while back about using Rserve as a backend for web-driven analysis? As I recall, one still uses Apache for the redirection, rather than an explicit port as you surmise here.
Jay's setup was very impressive. He used Rserve to serve mixed table/chart pages drawn via the grid package, all very slick and very fast, based on an immense data set (from a UN agency, or the World Bank, or something). But I can't find a link to that report right now...

Execute menu driven commands on remote server using Maven

I did a bit of research on using Maven to execute commands on a remote server using some ssh exec plugin.
The catch is that I need to run a command which launches a menu-driven program to stop a server and then start it up again.
I would basically have to perform the following tasks in sequence:
Connect to the remote server using SSH
Login with username/pass
Change directory to a particular location
Run a command at that location to launch a command line menu driven program (i.e. "./control")
Enter two commands to that menu driven program
Disconnect
The two commands are just numbers which represent choices from a menu it prints on the console, like:
Enter the number of the server you wish to stop:
[1] server1
[2] server2
[3] server3
I would enter "2" for example. Is this possible?
One possible solution would be to write a Perl script on the remote server that accepts the server number as an argument.
You could then use Perl's Expect.pm library to supply this argument to your control program when it prompts for input.
This then simplifies your Maven task to executing a script on a remote server, which presumably you are now happy with thanks to your research.
(If you've not used it before, Perl's Expect library is designed for exactly this use case - automatically supplying input to interactive command line programs.)
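If the control program happens to read its answers from plain stdin rather than directly from the terminal, you can sometimes skip Expect entirely and pipe the menu choices in. The sketch below fakes the menu with a stand-in script (menu.sh is purely hypothetical) just to show the pattern; programs that read the tty directly will ignore piped input, which is exactly the case Expect exists for:

```shell
# menu.sh stands in for "./control", purely for illustration
cat > menu.sh <<'EOF'
#!/bin/sh
echo "Enter the number of the server you wish to stop:"
read choice
echo "stopping server$choice"
EOF
chmod +x menu.sh

# feed the answer "2" on stdin, as if typed at the prompt
printf '2\n' | ./menu.sh
# prints the prompt, then: stopping server2
```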

Amazon EC2 / RStudio : Is there a way to run a job without maintaining a connection?

I have a long-running job that I'd like to run using EC2 + RStudio. I set up the EC2 instance and then opened RStudio as a page in my web browser. I have to physically move the laptop I use for the connection throughout the day, and my job gets terminated in RStudio even though the instance is still running on the EC2 dashboard.
Is there a way to keep a job running without maintaining an active connection?
Does it have to be started / controlled via RStudio?
If you make your task a "normal" R script, executed via Rscript or littler, then you can run it from the shell ... and get to
use old-school tools like nohup, batch or at to control running in the background
use tools like screen, tmux or byobu to maintain one or multiple sessions in which you launch the jobs, and connect / disconnect / reconnect at leisure.
RStudio Server works in similar ways but AFAICT limits you to a single session per user / machine -- which makes perfect sense for interactive work but is limiting if you need multiple sessions.
FWIW, I like byobu with tmux a lot for this.
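A minimal sketch of the nohup route (job.sh is a placeholder for the real Rscript invocation):

```shell
# job.sh stands in for "Rscript analysis.R" or similar long-running work
cat > job.sh <<'EOF'
#!/bin/sh
echo "job started"
# ... long computation would go here ...
echo "job finished"
EOF
chmod +x job.sh

# nohup + & detaches the job from the terminal so it survives logout;
# output lands in job.log instead of the (soon to be closed) terminal
nohup ./job.sh > job.log 2>&1 &
wait $!        # only for this demo -- normally you would simply log out
cat job.log
```

screen/tmux reach the same end but let you reattach to the live session later instead of tailing a log file.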
My original concern that the job needed a live connection was incorrect. It turns out the error came from running out of memory; it just coincided with losing the internet connection.
An instance is started from the AWS dashboard and stopped or terminated there as well. As long as it is still running, it can be accessed from an RStudio tab by copying the public DNS into the browser's address bar and logging in again.
