I'm inexperienced in using OpenCPU as a server, so I tried to find an answer to this in the documentation but did not find one. Nevertheless, this seems quite basic in terms of permissions and authentication, so I guess it is documented somewhere and I just have not found it.
My question is about users and permissions when making a request to the OpenCPU server.
I've written an R package which I want to host using the OpenCPU server. So far I have managed to install the OpenCPU server without any problems, and it works fine for most functions in my R package. However, one function uses Sys.getenv('USERNAME') to determine the user who runs the code, and when the R code is triggered by a client request I have no clue how to figure out that user.
Minimal example:
Suppose I have a function "myFun" included in my R package named "MyRPkg" like:
MyRPkg/R/myFun.R:
myFun <- function(v) {
  return(Sys.getenv("USERNAME"))
}
When I've installed the package (in the "root" R library) and have my OpenCPU server running, I can access the package and call this function with a POST request like:
SERVERNAME/ocpu/library/MyRPkg/R/myFun/json
and get an empty string as an answer.
[""]
How do I figure out what is happening on the server side in terms of which user "runs" the R code, and is it possible to configure this?
My initial thought was that the user should be "www-data", which is the default Apache setting on my system. I don't know at which layer the user is set, Apache, rApache or OpenCPU, but I'm guessing it should be configurable at the OpenCPU level?
The server runs on Ubuntu Linux.
The OpenCPU system runs on top of your system's default Apache2 server. Which uid is used to run the apache2 daemon is configured on your system; by default it is www-data on Debian/Ubuntu. You can probably override this somewhere.
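To see this from inside the package, a sketch like the following (purely illustrative) can help. The empty string you observed is expected: on Linux, the USERNAME environment variable is typically set on Windows only and is unset for daemon processes, whereas Sys.info() reports the user the process actually runs under.

myFun <- function(v) {
  list(
    username = Sys.getenv("USERNAME"),  # usually only set on Windows, hence the ""
    user     = Sys.getenv("USER"),      # usually set in login shells, may be empty for daemons
    info     = Sys.info()[["user"]]     # user name as reported by the OS, e.g. www-data
  )
}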
Related
I have an R package that I would like to host through Amazon Web Services so that it is accessible via an API. The script should take a couple of input values and return the R output in JSON format. Also, the API should be able to handle multiple requests simultaneously.
So, for example, calling http://sampleapi.com/?location=USA&state=Florida would run the R package and return the output data to the calling application.
Has anyone done this before or know of resources you can point me to that would explain how to do so? Thanks!
Thanks for all the suggestions. I decided to use Ruby for the API with the rinruby and rails-api gems, and will host it through AWS Elastic Beanstalk. See this question for how I am setting it up: Ruby API - Accept parameters and execute script
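Whatever the hosting layer, the R side comes down to a function that accepts the query parameters and serializes its result as JSON. A rough sketch using jsonlite; the function name and dummy result are made up for illustration:

library(jsonlite)

get_result <- function(location, state) {
  # Run the actual analysis here; the data frame below is a dummy result.
  out <- data.frame(location = location, state = state, value = 42)
  toJSON(out)
}

get_result("USA", "Florida")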
In order to start working with the newly released package "gtrendsR" (version 1.3.1, released 2015-12-10), which performs and displays Google Trends queries, you have to connect to a Google account. I have tried several times to connect to my Gmail account just as described in the instructions, but I have not been able to connect yet.
gconnect("usr#gmail.com", "psw")
It gives me this error:
Error in function (type, msg, asError = TRUE) :
Operation timed out after 0 milliseconds with 0 out of 0 bytes received
I have no idea how to fix this.
Package Co-Maintainer here. We write about this in README.md and in the help page.
Maybe you have two-factor authentication enabled? Maybe you are behind some type of firewall and need to try it out in the open? It "works for me" with a dedicated account I created for Google Trends; others use the same trick.
If you file an issue ticket, or better still, read the existing discussion over there, you may get some better help. Right now your question is unanswerable due to lack of specifics or reproducible results.
As Dirk Eddelbuettel wrote, this might be because you are behind a firewall. If so, you can either unblock the port that you are using for connecting, or alternatively use a proxy server (I received the same error as you until I configured a proxy).
For information on how to configure internet access via a proxy in RStudio, see:
RStudio Proxy Configuration on Windows
or
https://support.rstudio.com/hc/en-us/articles/200488488-Configuring-R-to-Use-an-HTTP-or-HTTPS-Proxy
For setting proxies in R, see:
http://stat.ethz.ch/R-manual/R-patched/library/utils/html/download.file.html
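If you would rather set the proxy directly from an R session, something like the following usually works, because download.file() and RCurl honour these environment variables. The host and port are placeholders for your own proxy:

Sys.setenv(http_proxy  = "http://proxy.example.com:8080",   # placeholder host and port
           https_proxy = "http://proxy.example.com:8080")

# If the proxy requires credentials (placeholder values):
# Sys.setenv(http_proxy_user = "user:password")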
I have an experiment in AzureML which has a R module at its core. Additionally, I have some .RData files stored in Azure blob storage. The blob container is set as private (no anonymous access).
Now, I am trying to make an HTTPS call from inside the R script to the Azure blob storage container in order to download some files. I am using the httr package's GET() function and have properly set up the URL, authentication, etc. The code works in R on my local machine, but the same code gives me the following error when called from inside the R module in the experiment:
error:1411809D:SSL routines:SSL_CHECK_SERVERHELLO_TLSEXT:tls invalid ecpointformat list
Apparently this is an error from the underlying OpenSSL library (which was fixed a while ago). Some suggested workarounds I found were to set sslversion = 3 and ssl_verifypeer = 1, or to turn off verification with ssl_verifypeer = 0. Both approaches returned the same error.
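In code, the attempted workarounds looked roughly like this; the URL is a placeholder for my blob storage endpoint:

library(httr)
url <- "https://myaccount.blob.core.windows.net/container/file.RData"  # placeholder

# Workaround 1: force SSLv3 with peer verification left on
GET(url, config(sslversion = 3, ssl_verifypeer = 1))

# Workaround 2: disable peer verification entirely
GET(url, config(ssl_verifypeer = 0))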
I am guessing that this has something to do with the internal Azure certificate / validation...? Or maybe I am missing or overlooking something?
Any help or ideas would be greatly appreciated. Thanks in advance.
Regards
After a while, an answer came back from the support team, so I am going to post the relevant part as an answer for anyone who lands here with the same problem.
"This is a known issue. The container (a sandbox technology known as "drawbridge" running on top of Azure PaaS VM) executing the Execute R module doesn't support outbound HTTPS traffic. Please try to switch to HTTP and that should work."
They also said that a solution is on the way:
"We are actively looking at how to fix this bug. "
Here is the original link as a reference.
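In practice, the suggested workaround just means requesting the blob over plain HTTP instead of HTTPS. A sketch, assuming the private container is reached via a SAS-token URL; the account, container, blob, and token below are placeholders:

library(httr)

# Plain HTTP, since HTTPS is blocked inside the sandbox:
blob_url <- "http://myaccount.blob.core.windows.net/mycontainer/data.RData?sv=2015-04-05&sig=PLACEHOLDER"
resp <- GET(blob_url)

# Save the raw response body and load the .RData file:
writeBin(content(resp, "raw"), "data.RData")
load("data.RData")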
hth
I'm refactoring an application built when Meteor was at 0.5.x.
I need to scale the application, so I will now have different applications able to run on different cores. One of them will be dedicated to the web application, but the others are server only. For those, I don't want Meteor to serve anything; it must not be an HTTP server.
I tried to configure the package list differently (file .meteor/packages):
# standard package of meteor-platform in server app only
application-configuration
autoupdate
base64
binary-heap
callback-hook
check
ddp
deps
ejson
follower-livedata
geojson-utils
id-map
json
logging
meteor
mongo
observe-sequence
ordered-dict
random
retry
routepolicy
# standard package of meteor-platform in client app
#blaze
#blaze-tools
#boilerplate-generator
#html-tools
#htmljs
#jquery
#minifiers
#minimongo
#reactive-var
#spacebars
#spacebars-compiler
#templating
#tracker
#ui
#webapp
#webapp-hashing
# specific app package
But when I run meteor, it tells me that the server is listening, so it doesn't work.
I also tried to remove the "browser" platform:
meteor remove-platform browser
but it tells me that it cannot remove platforms in this version of Meteor.
Where am I wrong? Is the list of packages not the right one for a server-only application?
Not possible at the moment; "maybe in a future version", as someone from MDG said.
Meteor relies on the DDP package to listen for incoming requests, and DDP listens on websockets, which is basically HTTP.
Therefore it has to listen on some port. If it did not listen, you could not tell the app to do anything or ask it for anything, so what use would it be?
But if you don't want your app to interfere with your other apps in terms of the ports it binds to, give it a custom port when starting it:
$ meteor run --port 12345
I'm looking to call BigQuery from RStudio, installed on a Google Compute Engine instance.
I have the bq python tool installed on the instance, and I was hoping to use its service accounts and system() to get R to call the bq command-line tool and so get the data.
However, I run into authentication problems, where it asks for a browser key. I'm pretty sure there is no need for the key because of the service account, but I don't know how to construct the authentication from within R (it runs on RStudio, so there will be multiple users).
I can get an authentication token like this:
library(RCurl)
library(RJSONIO)

# Ask the GCE metadata server for the default service account's OAuth2 token:
metadata <- getURL('http://metadata/computeMetadata/v1beta1/instance/service-accounts/default/token')
tokendata <- fromJSON(metadata)
tokendata$access_token
But how do I then use this to generate a .bigqueryrc token? It's the lack of this that triggers the authentication attempt.
This works ok:
system('/usr/local/bin/bq')
showing me bq is installed ok.
But when I try something like:
system('/usr/local/bin/bq ls')
I get this:
Welcome to BigQuery! This script will walk you through the process of initializing your .bigqueryrc configuration file.
First, we need to set up your credentials if they do not already exist.
******************************************************************
** No OAuth2 credentials found, beginning authorization process **
******************************************************************
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&client_id=XXXXXXXX.apps.googleusercontent.com&access_type=offline
Enter verification code: You have encountered a bug in the BigQuery CLI. Google engineers monitor and answer questions on Stack Overflow, with the tag google-bigquery: http://stackoverflow.com/questions/ask?tags=google-bigquery
etc.
Edit:
I have managed to get bq functioning from RStudio system() commands by sidestepping the authentication: I logged in to the terminal as the user using RStudio, authenticated there by signing in via the browser, then logged back into RStudio and called system("bq ls") etc., so this is enough to get me going :)
However, I would still prefer it if bq could be authenticated within RStudio itself, as many users may log in and I would need to authenticate via the terminal for each of them. The service account documentation, and the fact that I can get an authentication token, hint that this should be easier.
For the time being, you need to run 'bq init' from the command line to set up your credentials prior to invoking bq from a script in GCE. However, the next release of bq will include support for GCE service accounts via a new --use_gce_service_account flag.
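Until that flag ships, a possible alternative is to bypass the bq tool and call the BigQuery REST API directly from R with the metadata-server token retrieved in the question. A sketch; the project ID is a placeholder:

library(RCurl)
library(RJSONIO)

# Token from the GCE metadata server, exactly as in the question:
metadata <- getURL('http://metadata/computeMetadata/v1beta1/instance/service-accounts/default/token')
token <- fromJSON(metadata)$access_token

# List datasets in a project via the REST API (replace my-project-id):
url <- 'https://www.googleapis.com/bigquery/v2/projects/my-project-id/datasets'
fromJSON(getURL(url, httpheader = c(Authorization = paste('Bearer', token))))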