Why won't R.matlab connect to MATLAB?

I am trying to use R.matlab for the first time. I run the following script, my_sample_program.R, from MATLAB using the command:
system('/usr/local/bin/R CMD BATCH my_sample_program.R out.txt');
where the R code is as follows:
test <- function(x, y)
{
  library(R.matlab)
  matlab <- Matlab(host = "localhost", port = 9927)
  open(matlab)
  setVerbose(matlab, -2)
  x <- getVariable(matlab, "x")
  y <- getVariable(matlab, "y")
  some_other_function(x, y, exact = TRUE)
}
test(x, y)
Running the system command is successful from MATLAB's point of view, but out.txt contains an error that says
Error in throw.default(sprintf("Failed to connect to MATLAB on host '%s' (port %d) after trying %d times for approximately %.1f seconds.", :
Failed to connect to MATLAB on host 'localhost' (port 9927) after trying 30 times for approximately 30.0 seconds.
(I used the command sudo lsof | grep localhost in Terminal to find that MATLAB is associated with port 9927.)
My question: How can I connect to MATLAB via R.matlab?
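(For reference, here is a sketch of the connection workflow as the R.matlab documentation describes it; the port number and the server-startup step come from the package docs, not from a verified fix for this setup. Notably, R.matlab's default port is 9999, and the MATLAB-side server must be started explicitly before open() is called.)
library(R.matlab)
# Launch MATLAB together with the MatlabServer script that R.matlab ships;
# alternatively, run MatlabServer manually inside an existing MATLAB session.
Matlab$startServer()
matlab <- Matlab()      # defaults: host = "localhost", port = 9999
isOpen <- open(matlab)  # retries for ~30 seconds while the server comes up
if (!isOpen) stop("MATLAB server is not running")
setVariable(matlab, x = 1:10)
x <- getVariable(matlab, "x")
close(matlab)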

Related

R ggplot encoding/locale issue with ssh -X

[This is a cross-post of an R-help mailing list post, where this question has so far remained unanswered.]
I am struggling with remote R sessions and what I suspect is a locale-related
encoding problem: using the X11 device (X11 forwarding enabled),
whenever I try to plot something containing umlauts with ggplot2, I
see something like
Error in grid.Call(L_stringMetric, as.graphicsAnnot(x$label)) :
invalid use of -61 < 0 in 'X11_MetricInfo'
Using base graphics is fine as is plotting to another device (pdf, say).
Here is some code to reproduce:
## ssh -X into the remote server
## start R at the remote server
plot(1:10, 1:10, main = "größe")
## this opens a plot window and works as expected
library("ggplot2")
qplot(1:10, 1:10)
## this works still
qplot(1:10, 1:10) + xlab("größe")
## I get the ERROR above
My setup:
Locally: Linux (Debian GNU/Linux 9)
Remotely: Linux (RHEL Server release 7.3 (Maipo))
(Maybe) relevant bits of my .ssh/config:
Host theserver
HostName XXX.XXX.XXX.XXX
ForwardX11 yes
ForwardX11Timeout 596h
IdentityFile ~/.ssh/id_rsa
IdentitiesOnly yes
ForwardAgent yes
ServerAliveInterval 300
What version of R do you have (on the remote machine)?
I can replicate this with:
x11(type="Xlib")
library(grid)
convertHeight(stringDescent("größe"), "in")
on R 3.2.5, but not on, e.g., R 3.4.0 (just running R locally in
both cases).
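If upgrading R on the remote machine is not an option, one workaround to try is switching the X11 device away from the Xlib backend, since the answer above only reproduces the error with x11(type="Xlib"). This is an untested sketch that assumes the server's R was built with cairo support:
## ssh -X into the remote server, then in R:
x11(type = "cairo")  # avoid the Xlib string-metric code path
library(ggplot2)
qplot(1:10, 1:10) + xlab("größe")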

R: Running a foreach %dopar% loop on an HPC MPI cluster

I got access to an HPC cluster with an MPI partition.
My problem is that, no matter what I try, my code (which works fine on my PC) doesn't run on the HPC cluster. The code looks like this:
library(tm)
library(qdap)
library(snow)
library(doSNOW)
library(foreach)

cl <- makeCluster(30, type = "MPI")
registerDoSNOW(cl)
np <- getDoParWorkers()
np
Base <- "./Files1a/"
files <- list.files(path = Base, pattern = "\\.txt")

for (i in 1:length(files)) {
  # ...some definitions and variable generation...
  text <- foreach(k = 1:10, .combine = 'c') %do% {
    text <- if (file.exists(paste("./Files", k, "a/", files[i], sep = "")))
      paste(tolower(readLines(paste("./Files", k, "a/", files[i], sep = ""))), collapse = " ")
    else ""
  }

  docs <- Corpus(VectorSource(text))

  for (k in 1:10) {
    ID[k] <- paste(files[i], k, sep = "_")
  }
  data <- as.data.frame(docs)
  data[["docs"]] <- ID
  rm(docs)
  data <- sentSplit(data, "text")

  frequency <- NULL
  cs <- ceiling(length(POLKEY$x) / getDoParWorkers())
  opt <- list(chunkSize = cs)
  frequency <- foreach(j = 2:length(POLKEY$x), .options.mpi = opt, .combine = 'cbind') %dopar% ...
  write.csv(frequency, file = paste("./Result/output", i, ".csv", sep = ""))
  rm(data, frequency)
}
When I run the batch job, the session gets killed at the time limit. I receive the following message after the MPI cluster initialization:
Loading required namespace: Rmpi
--------------------------------------------------------------------------
PMI2 initialized but returned bad values for size and rank.
This is symptomatic of either a failure to use the
"--mpi=pmi2" flag in SLURM, or a borked PMI2 installation.
If running under SLURM, try adding "-mpi=pmi2" to your
srun command line. If that doesn't work, or if you are
not running under SLURM, try removing or renaming the
pmi2.h header file so PMI2 support will not automatically
be built, reconfigure and build OMPI, and then try again
with only PMI1 support enabled.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process. Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption. The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.
The process that invoked fork was:
Local host: ...
MPI_COMM_WORLD rank: 0
If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
30 slaves are spawned successfully. 0 failed.
Unfortunately, it seems that the loop doesn't complete even a single iteration, as no output is returned.
For the sake of completeness, my batch file:
#!/bin/bash -l
#SBATCH --job-name MyR
#SBATCH --output MyR-%j.out
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=6
#SBATCH --mem=24gb
#SBATCH --time=00:30:00
MyRProgram="$HOME/R/hpc_test2.R"
cd $HOME/R
export R_LIBS_USER=$HOME/R/Libs2
# start R with my R program
module load R
time R --vanilla -f $MyRProgram
Does anybody have a suggestion for how to solve this problem? What am I doing wrong?
Thanks in advance for your help!
Your script is an MPI application, so you need to execute it appropriately via Slurm. The Open MPI FAQ has a special section on how to do that:
https://www.open-mpi.org/faq/?category=slurm
The most important point is that your script shouldn't execute R directly, but should execute it via the mpirun command, using something like:
mpirun -np 1 R --vanilla -f $MyRProgram
My guess is that the "PMI2" error is caused by not executing R via mpirun. I don't think the "fork" message indicates a real problem; it happens to me at times. I believe it happens because R calls "fork" when initializing, but it has never caused a problem for me. I'm not sure why I only get this message occasionally.
Note that it is very important to tell mpirun to only launch one process since the other processes will be spawned, so you should use the mpirun -np 1 option. If Open MPI was properly built with Slurm support, then Open MPI should know where to launch those processes when they are spawned, but if you don't use -np 1, then all 30 processes launched via mpirun will spawn 30 processes each, causing a huge mess.
Finally, I think you should tell makeCluster to spawn only 29 processes to avoid running a total of 31 MPI processes. Depending on your network configuration, even that much oversubscription can cause problems.
I would create the cluster object as follows:
library(snow)
library(Rmpi)
cl <- makeCluster(mpi.universe.size() - 1, type = "MPI")
That's safer and makes it easier to keep your R script and job script in sync with each other.
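Putting the advice together, a minimal sketch of the R side, assuming the job script launches it with "mpirun -np 1" as described above (the explicit shutdown calls at the end are my addition, to make the batch job end cleanly instead of hanging):
library(snow)
library(doSNOW)
library(Rmpi)
# one worker per MPI slot, minus the master process
cl <- makeCluster(mpi.universe.size() - 1, type = "MPI")
registerDoSNOW(cl)
# ... the foreach() %dopar% work from the question goes here ...
stopCluster(cl)  # shut down the spawned workers
mpi.quit()       # exit MPI cleanly so the job terminates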

multi-computer makePSOCKcluster on Windows: Building a step-by-step guide

I've been trying to build a cluster using multiple computers for three days now and have failed spectacularly. So now I'm going to try to suck a bunch of you into solving my problem for me. If all goes well, I hope we can generate a step-by-step guide to use as a reference for doing this in the future, because as of yet I haven't managed to find a decent reference for setting this up (perhaps it's too specific a task?).
In my case, let's assume Windows 7, with PuTTY as the SSH client, and 'localhost' is going to serve as the master.
Furthermore, let's assume only two computers on the same network for now. I imagine the process will generalize easily enough that if I can get it to work on two computers, I can get it to work on three. So we'll work on localhost and remote-computer.
Here's what I've gathered so far (with references linked at the bottom)
Install PuTTY on localhost.
Install PuTTY on remote-computer
Install an SSH server on remote-computer
Assign it a port to listen on? (I'm not sure about this step)
Install R on localhost
Install the same version of R on remote-computer
Add R to the PATH environment variable on both localhost and remote-computer
Run the R code below from localhost
code:
library(parallel)
cl <- makePSOCKcluster(c(rep("localhost", 2),
                         rep("remote-computer", 2)))
So far, I've done steps 1-3, not sure if I need to do 4, done 5-7, and the code for step 8 just hangs indefinitely.
When I check my SSH server logs, it doesn't appear that I'm hitting the SSH server from localhost. So it appears that my first problem is configuring the SSH correctly. Has anyone succeeded in doing this and would you be willing to share your expertise?
EDIT
Oops: references
http://www.milanor.net/blog/wp-content/uploads/2013/10/03.FirstStepinParallelComputing.pdf
R Parallel - connecting to remote cores
https://stat.ethz.ch/pipermail/r-sig-hpc/2010-October/000780.html
At best, this is a partial answer. I'm still not establishing a cluster, but the steps described here are a pretty good record of how I've gotten to this point.
CONFIGURATIONS:
Install PuTTY on 'remote-computer'
Install SSH server on 'remote-computer'
Install R on 'remote-computer' (Use the same version of R as on 'localhost')
Add R to the PATH
Install PuTTY on 'localhost'
Install R on 'localhost'
Add R to the PATH
TESTING THE CONNECTION: PHASE I
From the command line, run
C:\PuTTYPath\plink.exe -pw [password] [username]@[remote_ip_address] Rscript -e rnorm(100)
(Confirm return of 100 normal random variates.)
From the command line, run
C:\PuTTYPath\plink.exe -pw [password] [username]@[remote_ip_address] Rscript -e parallel:::.slaveRSOCK() MASTER=[local_ip_address] PORT=100501 OUT=/dev/null TIMEOUT=2592000 METHODS=TRUE XDR=TRUE
(Confirm that a session is started in the SSH server logs on 'remote-computer'.)
TESTING THE CONNECTION: PHASE II
From an R Session, run
system(paste0("C:/PuTTYPath/plink.exe -pw [password] ",
              "[username]@[remote_ip_address] ",
              "Rscript -e rnorm(100)"))
(Confirm return of 100 normal random variates.)
From an R session, run
system(paste0("C:/PuTTY/plink.exe ",
              "-pw [password] ",
              "[username]@[remote_ip_address] ",
              "Rscript -e parallel:::.slaveRSOCK() ",
              "MASTER=[local_ip_address] ",
              "PORT=100501 ",
              "OUT=/dev/null ",
              "TIMEOUT=2592000 ",
              "METHODS=TRUE ",
              "XDR=TRUE"))
(Confirm that a session is started and maintained in the SSH server logs on 'remote-computer'.)
ESTABLISH A CLUSTER
From an R Session, run
library(snow)
cl <- makeCluster(spec = c("localhost", "[remote_ip_address]"),
                  rshcmd = "C:/PuTTY/plink.exe -pw [password]",
                  host = "[local_ip_address]")
(A session should be started and maintained in the SSH server logs on 'remote-computer'. Ideally, the function will complete and 'cl' will be assigned.)
Establishing the cluster is the point at which I'm failing. I run makeCluster and watch my SSH server logs. It shows a connection is made and then immediately closed. makeCluster never finishes running, cl is not assigned, and I'm stuck on how to go on. I'm not even sure if this is an R problem or a configuration problem at this point.
EDIT AND RESOLUTION:
For no good reason, I tried running this with the snow package, as shown in the "Establish a Cluster" section above. With the snow package, the cluster is built and runs stably. I'm not sure why I couldn't get this to work with the parallel package, but at least I've got something functional.
For those who are looking to establish clusters across several computers on Windows, @Benjamin's answer is almost correct: you need to follow his instructions up to the last step, ESTABLISH A CLUSTER, and make sure the previous steps all work on your computer. My solution is based on the package 'parallel' instead of 'snow', which is essentially the same.
Solution
Code template:
machineAddresses <- list(list(host = '[Server address]',
                              user = '[user name]',
                              rscript = "[path to Rscript on the server]",
                              rshcmd = "plink -pw [Your password]"))
cl <- makePSOCKcluster(machineAddresses, manual = F)
You have to fill in all the [] in your code. On my computer, it is:
machineAddresses <- list(list(host = '192.168.1.220',
                              user = 'jeff',
                              rscript = "C:/Program Files/R/R-3.3.2/bin/Rscript",
                              rshcmd = "plink -pw qwer"))
cl <- makePSOCKcluster(machineAddresses, manual = F)
Reason
Running a cluster on Windows is very tricky; the function makePSOCKcluster usually does not work as expected. The easiest way to make it work is to change manual=F to manual=T and create the workers manually. Here is a related post that talks about why makePSOCKcluster hangs forever; I think the two posts are basically stuck at the same place. I also posted an answer to that question discussing how to make it work, and a sketch of the manual workflow follows the link below.
R Parallel - connecting to remote cores
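For illustration, a sketch of what the manual workflow looks like (host and user are the example values from above; the exact command R prints will vary with your installation):
library(parallel)
# with manual = TRUE, R prints the Rscript command each worker needs
# and then waits for the workers to connect back
cl <- makePSOCKcluster("192.168.1.220", user = "jeff", manual = TRUE)
# copy the printed command, run it in a terminal on the worker machine,
# and makePSOCKcluster() returns once the worker has connected
clusterCall(cl, function() Sys.info()[["nodename"]])
stopCluster(cl)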
As I do not have the reputation to post a comment on Jeff's answer, I will post this as an answer:
The reason I have found that automatic startup of cluster nodes using makePSOCKcluster does not work on Windows is that the arg and outfile arguments in the internal parallel function newPSOCKnode are wrapped in the shQuote function. This causes the combination of cmd.exe and Rscript.exe to return an error, which leads to makePSOCKcluster hanging forever.
The following two function definitions enable automatic startup of the cluster nodes using makePSOCKcluster, assuming a proper configuration of ssh or putty/plink for key-based password-less login:
makePSOCKcluster <- function (names, ...)
{
if (is.numeric(names)) {
names <- as.integer(names[1L])
if (is.na(names) || names < 1L)
stop("numeric 'names' must be >= 1")
names <- rep("localhost", names)
}
parallel:::.check_ncores(length(names))
options <- parallel:::addClusterOptions(parallel:::defaultClusterOptions, list(...))
cl <- vector("list", length(names))
for (i in seq_along(cl)) cl[[i]] <- newPSOCKnode(names[[i]],
options = options, rank = i)
class(cl) <- c("SOCKcluster", "cluster")
cl
}
newPSOCKnode <- function (machine = "localhost", ..., options = parallel:::defaultClusterOptions,
rank)
{
options <- parallel:::addClusterOptions(options, list(...))
if (is.list(machine)) {
options <- parallel:::addClusterOptions(options, machine)
machine <- machine$host
}
outfile <- parallel:::getClusterOption("outfile", options)
master <- if (machine == "localhost")
"localhost"
else parallel:::getClusterOption("master", options)
port <- parallel:::getClusterOption("port", options)
setup_timeout <- parallel:::getClusterOption("setup_timeout", options)
manual <- parallel:::getClusterOption("manual", options)
timeout <- parallel:::getClusterOption("timeout", options)
methods <- parallel:::getClusterOption("methods", options)
useXDR <- parallel:::getClusterOption("useXDR", options)
env <- paste0("MASTER=", master, " PORT=", port, " OUT=",
#shQuote(outfile), " SETUPTIMEOUT=", setup_timeout, " TIMEOUT=",
(outfile), " SETUPTIMEOUT=", setup_timeout, " TIMEOUT=",
timeout, " XDR=", useXDR)
arg <- "parallel:::.slaveRSOCK()"
rscript <- if (parallel:::getClusterOption("homogeneous", options)) {
shQuote(parallel:::getClusterOption("rscript", options))
}
else "Rscript"
rscript_args <- parallel:::getClusterOption("rscript_args", options)
if (methods)
rscript_args <- c("--default-packages=datasets,utils,grDevices,graphics,stats,methods",
rscript_args)
cmd <- if (length(rscript_args))
paste(rscript, paste(rscript_args, collapse = " "), "-e",
#shQuote(arg), env)
arg, env)
#else paste(rscript, "-e", shQuote(arg), env)
else paste(rscript, "-e", arg, env)
renice <- parallel:::getClusterOption("renice", options)
if (!is.na(renice) && renice)
cmd <- sprintf("nice +%d %s", as.integer(renice), cmd)
if (manual) {
cat("Manually start worker on", machine, "with\n ",
cmd, "\n")
utils::flush.console()
}
else {
if (machine != "localhost") {
rshcmd <- parallel:::getClusterOption("rshcmd", options)
user <- parallel:::getClusterOption("user", options)
cmd <- shQuote(cmd)
cmd <- paste(rshcmd, "-l", user, machine, cmd)
}
if (.Platform$OS.type == "windows") {
system(cmd, wait = FALSE, input = "")
}
else system(cmd, wait = FALSE)
}
con <- socketConnection("localhost", port = port, server = TRUE,
blocking = TRUE, open = "a+b", timeout = timeout)
structure(list(con = con, host = machine, rank = rank), class = if (useXDR)
"SOCKnode"
else "SOCK0node")
}
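A hypothetical usage sketch (placeholders as in the earlier answer): once the two redefinitions above are sourced into the session, they mask the versions in the attached parallel package and cluster creation proceeds as usual:
library(parallel)
# the globally defined makePSOCKcluster/newPSOCKnode above take
# precedence over the versions in the attached parallel package
cl <- makePSOCKcluster(c("[remote_ip_address]", "[remote_ip_address]"),
                       user = "[username]",
                       rshcmd = "C:/PuTTY/plink.exe")
clusterCall(cl, function() Sys.info()[["nodename"]])
stopCluster(cl)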
I plan to update this response with more complete setup instructions when I have the chance.

How can I shut down Rserve gracefully?

I have tried many options both in Mac and in Ubuntu.
I read the Rserve documentation
http://rforge.net/Rserve/doc.html
and that for the Rserve and RSclient packages:
http://cran.r-project.org/web/packages/RSclient/RSclient.pdf
http://cran.r-project.org/web/packages/Rserve/Rserve.pdf
I cannot figure out the correct workflow for opening/closing a connection within Rserve and for shutting down Rserve 'gracefully'.
For example, on Ubuntu, I installed R from source with ./configure --enable-R-shlib (following the Rserve documentation) and also added the 'control enable' line in /etc/Rserve.conf.
In an R session on Ubuntu:
library(Rserve)
library(RSclient)
Rserve()
c<-RS.connect()
c ## this is an Rserve QAP1 connection
## Trying to shutdown the server
RSshutdown(c)
Error in writeBin(as.integer....): invalid connection
RS.server.shutdown(c)
Error in RS.server.shutdown(c): command failed with status code 0x4e: no control line present (control commands disabled or server shutdown)
I can, however, CLOSE the connection:
RS.close(c)
NULL
c ## Closed Rserve connection
After closing the connection, I also tried the options (also tried with argument 'c', even though the connection is closed):
RS.server.shutdown()
RSshutdown()
So, my questions are:
1. How can I shut down Rserve gracefully?
2. Can Rserve be used without RSclient?
I also looked at
How to Shutdown Rserve(), running in DEBUG
but the question refers to the debug mode and is also unresolved. (I don't have enough reputation to comment/ask whether the shutdown works in the non-debug mode).
Also looked at:
how to connect to Rserve with an R client
Thanks so much!
Load Rserve and RSclient packages, then connect to the instances.
> library(Rserve)
> library(RSclient)
> Rserve(port = 6311, debug = FALSE)
> Rserve(port = 6312, debug = TRUE)
Starting Rserve...
"C:\..\Rserve.exe" --RS-port 6311
Starting Rserve...
"C:\..\Rserve_d.exe" --RS-port 6312
> rsc <- RSconnect(port = 6311)
> rscd <- RSconnect(port = 6312)
Looks like they're running...
> system('tasklist /FI "IMAGENAME eq Rserve.exe"')
> system('tasklist /FI "IMAGENAME eq Rserve_d.exe"')
Image Name PID Session Name Session# Mem Usage
========================= ======== ================ =========== ============
Rserve.exe 8600 Console 1 39,312 K
Rserve_d.exe 12652 Console 1 39,324 K
Let's shut 'em down.
> RSshutdown(rsc)
> RSshutdown(rscd)
And they're gone...
> system('tasklist /FI "IMAGENAME eq Rserve.exe"')
> system('tasklist /FI "IMAGENAME eq Rserve_d.exe"')
INFO: No tasks are running which match the specified criteria.
Rserve can be used w/o RSclient by starting it with args and/or a config script. Then you can connect to it from some other program (like Tableau) or with your own code. RSclient provides a way to pass commands/data to Rserve from an instance of R.
Hope this helps :)
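As a footnote to the question's RS.connect() attempt, here is a sketch of shutting the server down via the newer RSclient API. It assumes an Rserve version recent enough to support control commands; the --RS-enable-control argument should be equivalent to the 'control enable' config line the question mentions.
library(Rserve)
library(RSclient)
# start the server with control commands enabled
Rserve(args = "--RS-enable-control")
c <- RS.connect()      # QAP1 connection, as in the question
RS.server.shutdown(c)  # should no longer fail with "no control line present"
RS.close(c)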
On a Windows system, if you want to close an Rserve instance, you can use the system function in R to shut it down.
For example in R:
library(Rserve)
Rserve() # run without any arguments or ports specified
system('tasklist /FI "IMAGENAME eq Rserve.exe"') # run this to see RServe instances and their PIDs
system('TASKKILL /PID {yourPID} /F') # run this to kill off the RServe instance with your selected PID
If the Rserve instance with that PID has been closed correctly, the following message will appear:
SUCCESS: The process with PID xxxx has been terminated.
You can check that the Rserve instance has been closed down by entering
system('tasklist /FI "IMAGENAME eq Rserve.exe"')
again. If no Rserve instances are running any more, you will get the message
INFO: No tasks are running which match the specified criteria.
More help and info on this topic can be seen in this related question.
Note that the RSclient approach mentioned in an earlier answer is tidier and easier than this one, but I put it forward anyway for those who start Rserve without knowing how to stop it.
If you are not able to shut it down within R, run the commands below to kill it from the terminal. These commands work on a Mac.
$ ps ax | grep Rserve # get active Rserve sessions
You will see output like the following, where 29155 is the process ID of the active Rserve session.
29155 /Users/userid/Library/R/3.5/library/Rserve/libs/Rserve
38562 0:00.00 grep Rserve
Then run
$ kill 29155

Run R/Rook as a web server on startup

I have created a server using Rook in R - http://cran.r-project.org/web/packages/Rook
The code is as follows:
#!/usr/bin/Rscript
library(Rook)
s <- Rhttpd$new()
s$add(
name="pingpong",
app=Rook::URLMap$new(
'/ping' = function(env){
req <- Rook::Request$new(env)
res <- Rook::Response$new()
res$write(sprintf('<h1><a href="%s">Pong</a></h1>',req$to_url("/pong")))
res$finish()
},
'/pong' = function(env){
req <- Rook::Request$new(env)
res <- Rook::Response$new()
res$write(sprintf('<h1><a href="%s">Ping</a></h1>',req$to_url("/ping")))
res$finish()
},
'/?' = function(env){
req <- Rook::Request$new(env)
res <- Rook::Response$new()
res$redirect(req$to_url('/pong'))
res$finish()
}
)
)
## Not run:
s$start(port=9000)
$ ./Rook.r
Loading required package: tools
Loading required package: methods
Loading required package: brew
starting httpd help server ... done
Server started on host 127.0.0.1 and port 9000 . App urls are:
http://127.0.0.1:9000/custom/pingpong
Server started on 127.0.0.1:9000
[1] pingpong http://127.0.0.1:9000/custom/pingpong
Call browse() with an index number or name to run an application.
$
And the process ends here.
It runs fine in the R shell, but I want to run it as a server on system startup.
So once start is called, R should not exit but should wait for requests on the port.
How can I convince R to simply wait or sleep rather than exiting?
I can use the wait or sleep function in R to wait some N seconds, but that doesn't fit the bill perfectly.
Here is one suggestion:
First split the example you gave into (at least) two files: One file contains the definition of the application, which in your example is the value of the app parameter to the Rhttpd$add() function. The other file is the RScript that starts the application defined in the first file.
For example, if your application function is named pingpong and is defined in a file named Rook.R, then the Rscript might look something like:
#!/usr/bin/Rscript --default-packages=methods,utils,stats,Rook
# This script takes as a single argument the port number on which to listen.
args <- commandArgs(trailingOnly=TRUE)
if (length(args) < 1) {
cat(paste("Usage:",
substring(grep("^--file=", commandArgs(), value=T), 8),
"<port-number>\n"))
quit(save="no", status=1)
} else if (length(args) > 1)
cat("Warning: extra arguments ignored\n")
s <- Rhttpd$new()
app <- RhttpdApp$new(name='pingpong', app='Rook.R')
s$add(app)
s$start(port=as.integer(args[1]), quiet=F)
suspend_console()
As you can see, this script takes one argument that specifies the listening port. Now you can create a shell script that will invoke this Rscript multiple times to start multiple instances of your server listening on different ports in order to enable some concurrency in responding to HTTP requests.
For example, if the Rscript above is in a file named start.r then such a shell script might look something like:
#!/bin/sh
if [ $# -lt 2 ]; then
echo "Usage: $0 <start-port> <instance-count>"
exit 1
fi
start_port=$1
instance_count=$2
end_port=$((start_port + instance_count - 1))
fifo=/tmp/`basename $0`$$
exit_command="echo $(basename $0) exiting; rm $fifo; kill \$(jobs -p)"
mkfifo $fifo
trap "$exit_command" INT TERM
cd `dirname $0`
for port in $(seq $start_port $end_port)
do ./start.r $port &
done
# block until interrupted
read < $fifo
The above shell script takes two arguments: (1) the lowest port-number to listen on and (2) the number of instances to start. For example, if the shell script is in an executable file named start.sh then
./start.sh 9000 3
will start three instances of your Rook application listening on ports 9000, 9001 and 9002, respectively.
As you can see, the last line of the shell script reads from the fifo, which prevents the script from exiting until a signal is received. When one of the specified signals is trapped, the shell script kills all the Rook server processes that it started before it exits.
Now you can configure a reverse proxy to forward incoming requests to any of the server instances. For example, if you are using Nginx, your configuration might look something like:
upstream rookapp {
server localhost:9000;
server localhost:9001;
server localhost:9002;
}
server {
listen your.ip.number.here:443;
location /pingpong/ {
proxy_pass http://rookapp/custom/pingpong/;
}
}
Then your service can be available on the public Internet.
The final step is to create a control script with options such as start (to invoke the above shell script) and stop (to send it a TERM signal to stop your servers). Such a script will handle things such as causing the shell script to run as a daemon and keeping track of its process id number. Install this control script in the appropriate location and it will start your Rook application servers when the machine boots. How to do that will depend on your operating system, the identity of which is missing from your question.
Notes
For an example of how the fifo in the shell script can be used to take different actions based on received signals, see this stack overflow question.
Jeffrey Horner has provided an example of a complete Rook server application.
You will see that the example shell script above traps only INT and TERM signals. I chose those because INT results from typing control-C at the terminal and TERM is the signal used by control scripts on my operating system to stop services. You might want to adjust the choice of signals to trap depending on your circumstances.
Have you tried this?
while (TRUE) {
  Sys.sleep(0.5)
}
