We have created REST APIs in R. We are able to run the code using Plumber, but we need to host or deploy the R code on the web (as a web API or web service).
# myfile.R
#' @get /Sample
Sample <- function(samples = 10) {
  print(samples)
}
Note: please suggest options other than Plumber and Shiny.
This is for those who want a comparison of API development options in R.
Basically, concurrent requests are queued by httpuv in plumber, so it is not performant by itself. The author recommends running multiple Docker containers, but that can be complicated as well as resource-demanding.
There are other technologies, e.g. Rserve and rApache. Rserve forks a process per connection, and rApache can be configured to pre-fork, so both can handle concurrent requests.
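For illustration, a minimal sketch of the Rserve approach (the port and the client session below are assumptions, not part of the linked posts):

# Start an Rserve daemon; it forks a separate process per client
# connection, so concurrent requests do not block each other
library(Rserve)
Rserve(args = "--vanilla")  # listens on the default port 6311

# From another R session, connect with RSclient and evaluate remotely
library(RSclient)
con <- RS.connect(host = "localhost", port = 6311)
RS.eval(con, Sys.time())
RS.close(con)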
See the following posts for comparison
https://www.linkedin.com/pulse/api-development-r-part-i-jaehyeon-kim/
https://www.linkedin.com/pulse/api-development-r-part-ii-jaehyeon-kim/
Related
We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in Kubernetes) in our pipeline. It also relies on an external gRPC server which we have no control over.
Above is a picture of what we'd like to have happen. The white box on the left is code that provides the Microservice Boundary with 'external' data. It then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated with those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses to the two gRPC services we call (blue box).
Since our integration tests are self-contained right now, and we don't want to write an entirely new gRPC server implementation just to mimic calls, is there a way to stand up a real gRPC server and configure it to return canned responses? The request is pretty much like a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose those as endpoints. Proto files must be able to have different package names
using files we can store in source control, configure the responses to each call
able to run in a linux docker container (no windows)
I did find gripmock, which seemed almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 gripmock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today there is a commercial tool that supports gRPC called Traffic Parrot which allows you to create gRPC mocks based on your Proto files. You can give it multiple proto files, store the mocks in Git and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on your Proto files; you have to create them manually. Also, as of today the project has not kept up with Proto and gRPC developments, e.g. the package-name issue you discovered yourself above (it works only if the package names in different proto files are the same). There are a few other open-source tools, such as grpc-wiremock, grpc-mock, and bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the Proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see the sample data as well.
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
const implementations = {
  // Stub implementation for the ex1 RPC defined in ExampleService
  ex1: (call: any, callback: any) => {
    const response: any =
      new this.proto.ExampleResponse.constructor({msg: "the response message"});
    callback(null, response);  // first argument is the error; null means success
  },
};
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();
I'm inexperienced in using OpenCPU as a server, so I tried to find an answer to this in the documentation but did not find one. Nevertheless, this seems quite basic to me in terms of permissions and authentication, so I guess it is documented somewhere and I just did not find it.
My question is about users and permissions when running a request against the OpenCPU server.
I've written an R package which I want to host using the OpenCPU server. So far I have managed to install the OpenCPU server without any problems, and it works fine for most functions in my R package. However, one function uses Sys.getenv('USERNAME') to determine the user which runs the code, and when the R code is triggered by a client request I have no clue how to figure out that user.
Minimal example:
Suppose I have a function "myFun" included in my R package named "MyRPkg" like:
MyRPkg/R/myFun.R:
myFun <- function(v) {
  return(Sys.getenv("USERNAME"))
}
Once I have installed the package (in the "root" R library) and have my OpenCPU server running, I can access the package and call this function with a POST request like:
SERVERNAME/ocpu/library/MyRPkg/R/myFun/json
and get an empty string as an answer.
[""]
How do I figure out what happens on the server side in terms of which user "runs" the R code, and is it possible to configure this?
My initial thought was that the user should be "www-data", which is the default Apache user on my system. I don't know at which layer the user is set (Apache, rApache, or OpenCPU), but I'm guessing it should be configurable at the OpenCPU level?
The system the server runs on is more or less a standard Ubuntu Linux server.
The OpenCPU system runs on top of your system's default Apache2 server. Which uid is used to run the apache2 daemon is configured on your system; by default it is www-data on Debian/Ubuntu. You can probably override this somewhere.
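As a quick way to confirm which uid actually executes a request, a hypothetical helper like the following could be added to the package (a sketch; whoAmI is not part of the original package):

# Hypothetical helper: report the owner of the R process.
# Sys.info()[["user"]] reflects the process owner; on Linux the
# USERNAME variable from the question is typically unset, which is
# why the original call returned an empty string.
whoAmI <- function() {
  Sys.info()[["user"]]
}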
According to the documentation (https://www.rplumber.io/), if we use plumber$run() it will run locally on localhost:8000. I want to publish it on a remote host. How can I start a remote API using the plumber package?
See the host parameter on run(). e.g. $run(host="0.0.0.0")
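A minimal sketch, assuming the routes are defined in myfile.R and port 8000 is acceptable:

# Bind to all interfaces so the API is reachable from other machines
library(plumber)
plumb("myfile.R")$run(host = "0.0.0.0", port = 8000)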
I have an R package that I would like to host through Amazon Web Services that will be accessible via an API. The script should take a couple of input values and return the R output in json format. Also, the API should be able to handle multiple requests simultaneously.
So for example, call http://sampleapi.com/?location=USA&state=Florida. That would then run the R package and return the output data to the calling application.
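For illustration only, a hypothetical sketch of that endpoint as a plumber route (one option among those discussed in this thread; the parameter handling below is an assumption):

# Hypothetical plumber route matching the example URL:
# GET /?location=USA&state=Florida
#* @get /
function(location = "", state = "") {
  # plumber serializes the returned list to JSON by default
  list(location = location, state = state)
}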
Has anyone done this before or know of resources you can point me to that would explain how to do so? Thanks!
Thanks for all the suggestions. I decided to use Ruby for the API with the rinruby and rails-api gems, and will host it through AWS Elastic Beanstalk. See this question for how I am setting it up: Ruby API - Accept parameters and execute script.
I'm refactoring an application built when Meteor was at 0.5.x.
I need to scale the application, so I will now have different applications able to run on different cores. One of them will be dedicated to the web application, but the others are server-only. For those, I don't want Meteor to serve anything; it must not be an HTTP server.
I tried configuring the package list (the .meteor/packages file) differently:
# standard package of meteor-platform in server app only
application-configuration
autoupdate
base64
binary-heap
callback-hook
check
ddp
deps
ejson
follower-livedata
geojson-utils
id-map
json
logging
meteor
mongo
observe-sequence
ordered-dict
random
retry
routepolicy
# standard package of meteor-platform in client app
#blaze
#blaze-tools
#boilerplate-generator
#html-tools
#htmljs
#jquery
#minifiers
#minimongo
#reactive-var
#spacebars
#spacebars-compiler
#templating
#tracker
#ui
#webapp
#webapp-hashing
# specific app package
But when I run:
#> meteor
it tells me that the server is listening, so it doesn't work.
I also tried to remove the browser platform:
meteor remove-platform browser
but it tells me that it cannot remove the platform in this version of Meteor.
Where am I wrong? Is the list of packages not the right one for a server-only application?
Not possible at the moment; "maybe in a future version", as someone from MDG said.
Meteor relies on the DDP package to listen for incoming requests, and DDP listens on websockets, which is basically HTTP.
Therefore it has to listen on some port. If it did not listen, you could not tell the app to do anything or ask it for anything, so what use would it be?
But if you don't want your app to interfere with your other apps in terms of the ports it binds to, give it a custom port when starting it:
$ meteor run --port 12345