I have some R code that I am trying to call over HTTP using OpenCPU, but long-running code hits a timeout. I came across https://github.com/joelkuiper/aplomb
Unfortunately, the documentation is not detailed, and I am unable to figure out how to make it work once it is deployed and the container is running.
Look in the file /etc/opencpu/server.conf
You should see the parameters timelimit.get and timelimit.post (values are in seconds). Increase them to something that seems reasonable for your code, and save the file.
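For example, the relevant entries in that JSON file might look like this with both limits raised to ten minutes (the 600-second values are just an illustration; leave the file's other entries as they are):

{
  "timelimit.get": 600,
  "timelimit.post": 600
}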
Then restart the service:
sudo service opencpu restart
Then try again - hope it works!
I am used to using R in RStudio. For a new project, I have to use R on the command line, because the data storage and analysis are only allowed to be on a specific server that I connect to using ssh. This server doesn't have rstudio-server to support remote RStudio sessions.
The project involves an extremely large dataset, and some pre-written code to load/format the data that I have been told to run using "source()" before I do anything else. This takes several minutes to run and load the data each time.
What would a good workflow be for something like this? Editing my code in a .r file, saving, and then running it would mean waiting several minutes for the data to load on every run. But just working in an interactive R session would make it hard to keep track of what I am doing and to repeat things if necessary.
Is there some command-line equivalent to RStudio where you can have an interactive session but be editing/saving a file of your code as you go?
Sounds like Jupyter might be your friend here.
The R kernel works great.
You can use it on a remote server either by exposing an open port (and setting up Jupyter login credentials) or via port forwarding over SSH.
It is a lot like an interactive REPL, except it holds state, and you can go back and rerun cells.
(Of course, state can be dangerous for reproducibility.)
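If you try the Jupyter route, a minimal setup could look like the following (this assumes Jupyter itself is already installed on the server; user@server is a placeholder). First, register the R kernel from an R session on the server:

install.packages("IRkernel")
IRkernel::installspec()   # makes the R kernel visible to Jupyter for the current user

Then start a notebook server remotely and tunnel it over SSH:

jupyter notebook --no-browser --port 8888   # on the server
ssh -N -L 8888:localhost:8888 user@server   # on your laptop; then open http://localhost:8888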
In RStudio you can open a terminal pane and ssh to your remote server, even if the server doesn't run the expensive RStudio Server platform. You can then send commands from the R script you are editing straight into that ssh session with the default shortcut key. This lets you keep using RStudio, track what you're doing in the R script, and still work interactively.
I'm new to R, and I'm invoking an R script from a NodeJS app. When the R script is invoked, it takes a long time to produce output. I investigated and realized that the bulk of that overhead is loading the libraries and the model I'm using. To be clear, any optimization would help, given that I'm running this code on a Raspberry Pi 2 B+.
My question is: is there a way to preload all the libraries and the model in R and then trigger predictions on demand, so that I don't need to reload the libraries and the model every time I want a prediction?
No. Since you're just invoking a script, everything it needs has to be loaded every time the script is run, because nothing exists in memory before you invoke it.
One workaround I would suggest: instead of running an R script each time, have your R code running as a service and query that service from Node.js (see the sketch below).
I can't help you much with the details, since my R expertise doesn't go very far and I don't know whether having an R server is even possible.
An alternative, if it is not too cumbersome, is to port your R project to Python and stand up a server of some kind (which is extremely easy to do with Python), then query that server from Node.js. Since you would be running a server, you can cache the libraries at server startup time and keep everything in RAM for the next query.
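For what it's worth, an R server is possible, so the service route above can stay in R: the plumber package, for example, turns annotated R functions into an HTTP API, and the libraries and model load only once, at startup. A minimal sketch, assuming a model saved at model.rds that works with predict() (both placeholders):

# api.R -- the heavy loading happens once, when the service starts
library(plumber)
model <- readRDS("model.rds")   # placeholder for your saved model

#* @param x numeric input value
#* @get /predict
function(x) {
  predict(model, newdata = data.frame(x = as.numeric(x)))
}

Start it with R -e 'plumber::plumb("api.R")$run(port = 8000)' and then hit http://localhost:8000/predict?x=42 from Node.js with a plain HTTP request.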
I am trying to run a 'detection' demo of ChainerCV from here,
https://github.com/chainer/chainercv/tree/master/examples/detection
I am running this demo in BitFusion Ubuntu 14.04 Chainer AMI on AWS with a p2.xlarge instance which uses a single GPU.
When I first try to run this demo, I get this error:
no display name and no $DISPLAY environment variable
So I researched it on the web and found a solution: use matplotlib with the Agg backend, so I imported matplotlib and switched it to Agg.
That does solve the $DISPLAY problem, and now the demo executes successfully, but all I get as output is a blank white image.
Can anyone tell me the reason behind this?
The problem seems to be that you are running the demo program on a remote machine where X is not properly set up.
You can get proper matplotlib output if you connect to your remote machine with ssh -X <your.remote.machine.address>, but it will take time to show the result.
If you want to run the demo on the remote machine quickly, I recommend setting MPLBACKEND=Agg, saving the plot figure as an image with plt.savefig(<imagepath>), and not running plt.show() in the demo program.
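For example, from the demo directory on the remote machine (the image path is a placeholder):

MPLBACKEND=Agg python demo.py your_image.jpg

with the plt.show() call in the demo replaced by something like plt.savefig('result.png'), which you can then copy back to your local machine with scp.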
I have a script in R that is frequently called during the day (by other scripts). I call R in a terminal using
Rscript code.R
I notice it takes a lot of time to load packages and set up R.
Is it possible to run R as a background service which I hit using a port or something?
Yes, look into Rserve, which has been available for over a dozen years for exactly this reason. There are a couple of fairly high-profile applications of it, too.
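A minimal sketch of that setup; the package and model names below are placeholders for whatever is slow to load:

# server.R -- run once; the expensive loading happens a single time
library(Rserve)
library(randomForest)           # placeholder for your slow-loading packages
model <- readRDS("model.rds")   # placeholder for your data/model
run.Rserve(port = 6311)         # serve this session; clients see the loaded state

# client.R -- what code.R would do instead, using the RSclient package
library(RSclient)
con <- RS.connect(port = 6311)  # default host is localhost
RS.eval(con, class(model))      # 'model' already lives in the server session
RS.close(con)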
You can also check out this add-in for RStudio; it is not a port-based solution, but maybe it can help you: https://github.com/bnosac/taskscheduleR
My goal is to follow the "Deploying a Meteor app with Nginx from scratch" tutorial available here.
After installing Meteor, Node, Forever, and Git, and doing the npm install, I try to run meteor to see if it works.
After downloading meteor-tools, the process begins to extract meteor-tools... it looks like it hangs for a couple of minutes and then stops without any warning.
So my guess is that something causes the extraction to quit, but I don't know what exactly.
Yes, Meteor likes plenty of RAM. I would recommend using Phusion Passenger with Nginx for Meteor; it's very easy to set up, and their getting-started tutorials are very good:
https://www.phusionpassenger.com/library/install/nginx/install/oss/
I haven't found the exact reason. However, I got it working. I was using a DigitalOcean server (512 MB RAM / 20 GB disk). I tried a bigger server (16 GB / 160 GB) and it works.
So I guess my server's RAM or disk capacity was too small.
Edit:
Trying smaller configurations, I noticed that the minimum for Meteor to work is 2 GB of RAM and a 40 GB disk.