Problem accessing and/or running an R plumber API in Docker

I have an R plumber API working in RStudio; the next step is to run it in Docker. The instruction
source('R/to_run_api_shirin_docker.R')
prints "Starting server to listen on port 8000" but the API does not respond, and gives the following run:
$ docker run -p 8000:8000 plumber_demo:v4
R version 3.6.3 (2020-02-29) -- "Holding the Windsock"
Copyright (C) 2020 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
> source('R/to_run_api_shirin_docker.R')
Starting server to listen on port 8000
Symptoms:
when run in RStudio, I see logs like:
Starting server to listen on port 8000
Running the swagger UI at http://127.0.0.1:8000/__swagger__/
System time: 2020-07-22 16:50:21
Request method: GET /__swagger__/
HTTP user agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) QtWebEngine/5.12.8 Chrome/69.0.3497.128 Safari/537.36 # 127.0.0.1
System time: 2020-07-22 16:50:21
Request method: GET /swagger.json
HTTP user agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) QtWebEngine/5.12.8 Chrome/69.0.3497.128 Safari/537.36 # 127.0.0.1
but from the Docker container there is nothing at
http://127.0.0.1:8000/__swagger__/,
http://127.0.0.1:8000,
http://localhost:8000/swagger/ ,
http://localhost:8000
--- scripts below ---
Dockerfile
FROM rocker/r-ver:3.6.3
RUN R -e 'install.packages("plumber")'
RUN R -e 'install.packages("randomForest")'
COPY mod_prod_rf.rds data/
COPY plumber_api_shirin_docker.R /R/
COPY to_run_api_shirin_docker.R /R/
CMD ["R", "-e source('R/to_run_api_shirin_docker.R')"]
R/to_run_api_shirin_docker.R
# --- launch API ----
plumb_path <- "R/plumber_api_shirin_docker.R"
r <- plumber::plumb(plumb_path)
r$run(host = "127.0.0.1", port = 8000)
Thanks in advance for any help!

I had the same problem when starting from a rocker/r-ver Docker image and trying to add and configure plumber. The resolution for me was to start from the rstudio/plumber image instead, which the RStudio folks have set up with plumber and the right network configuration. It lets you drop some boilerplate, and the server in the Docker container responded as expected.
Your project might look like this:
FROM rstudio/plumber:v1.0.0
RUN R -e 'install.packages("randomForest")'
COPY mod_prod_rf.rds data/
COPY plumber_api_shirin_docker.R /R/
CMD ["/R/plumber_api_shirin_docker.R"]
Changes from your original:
you don't need to install plumber, since it's already in the base image
you don't need the to_run_api_shirin_docker.R file, since the base image does this for you
the CMD argument is just the file with your plumber functions
More info is available in RStudio's Docker deployment guide for plumber.
PS: I didn't figure out exactly what the original problem was, but note that the RStudio plumber Docker configuration explicitly hosts the API on 0.0.0.0, whereas your script binds to 127.0.0.1, which is not reachable from outside the container. The output from running it is:
> pr <- plumber::plumb(rev(commandArgs())[1]); args <- list(host = '0.0.0.0', port = 8000); if (packageVersion('plumber') >= '1.0.0') { pr$setDocs(TRUE) } else { args$swagger <- TRUE }; do.call(pr$run, args)
Running plumber API at http://0.0.0.0:8000
Running swagger Docs at http://127.0.0.1:8000/__docs__/
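To adapt the asker's original setup instead, the likely fix is a one-line change: inside a container, a server bound to 127.0.0.1 accepts connections only from within the container itself, so the port published with -p has nothing reachable behind it. A sketch of the amended R/to_run_api_shirin_docker.R (same file names as in the question):

```r
# --- launch API ----
# Bind to 0.0.0.0 so the server listens on all interfaces inside the
# container; 127.0.0.1 would only be reachable from within the container.
plumb_path <- "R/plumber_api_shirin_docker.R"
r <- plumber::plumb(plumb_path)
r$run(host = "0.0.0.0", port = 8000)
```

With that change, `docker run -p 8000:8000 plumber_demo:v4` should make the API answer on http://localhost:8000/__swagger__/ from the host.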

Related

Wordpress REST API fails with Python

I've written a simple Python script to upload an image to a WP site:
import requests
import base64
BASE_URL = "https://example.com/wp-json/wp/v2"
media = {
    "file": open("image.png", "rb"),
    "caption": "a media file",
    "description": "some media file"
}
creds = "wp_admin_user" + ":" + "app password"
token = base64.b64encode(creds.encode())
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36",
    "Authorization": "Basic " + token.decode("utf-8")
}
r = requests.post(BASE_URL + "/media", headers=header, files=media)
print(r)
When using Python 3.9 on Windows, everything works as expected: I get a <Response [201]> reply and I can see the image in my site's media library.
When running the exact same script on a Linux machine, it fails with a 503 reply from the WP server:
<Response [503]>
The Linux machine is running Python 3.9.1.
I can run the script on Windows ten times in a row and it always works. I've searched the internet for the error; it's usually a WP configuration error, which doesn't seem to be the case here since the script works on Windows.
Any help is much appreciated!
I think the problem is the IP address of the Linux server: the WordPress host (or a firewall/CDN in front of it) may be blocking or rate-limiting requests from that IP, which would explain the 503 from Linux but not from your Windows machine.

AJAX requests failing for ASP.NET Core app in Amazon Linux 2 vs Amazon Linux 1

I have an ASP.NET Core v.3.0 web application hosted in Amazon AWS Elastic Beanstalk using Docker. The application works fine in Docker running 64bit Amazon Linux (v2.16.11). When I update to Docker running 64bit Amazon Linux 2 (v3.4.12) the requests work fine except for AJAX requests which fail with Status Code Error 400 "Bad request". Nothing else has changed in the source code, dockerfile etc. Only the Linux version has changed from Amazon Linux to Amazon Linux 2. Does anybody have an idea what is different between Amazon Linux 1 and Amazon Linux 2 that may be the cause leading to AJAX requests failing?
More info:
I cannot replicate this error with the official ASP.NET core 3.1 examples. I have not updated my application to v3.1, I will do it soon and I will update this question.
The relevant action inside the controller does not return the partial view in Amazon Linux 2. The controller provides a log just before returning the partial view and this is not triggered in Amazon Linux 2.
The nginx access.log file shows the following response of the load balancer:
Amazon Linux 1:
{IP} - - [10/Apr/2022:07:36:01 +0000] "POST {url} HTTP/1.1" 200 3882 "{url2}" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.75 Safari/537.36" "{IP2}"
Amazon Linux 2:
{IP} - - [10/Apr/2022:07:00:14 +0000] "POST {url} HTTP/1.1" 400 0 "{url2}" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.75 Safari/537.36" "{IP2}"
The call is made with jQuery 3.4.1:
var $form = $("#inputForm");
if ($form.length <= 0) return;
var data = $form.serialize();
$.ajax({
    url: "...",
    type: "POST",
    data: data,
    error: function (jqXHR, textStatus, errorThrown) {
        alert("There was an error when loading the results (error type = " + errorThrown + ").");
    },
    success: function (result) {
        $("#calculationTarget").html(result);
    }
});
The issue is no longer present if the project is updated from ASP.NET Core 3.0 to ASP.NET Core 3.1.
There is a very simple fix, which is updating to ASP.NET Core 3.1.
In this version, the issue you are having is fixed.
See the steps below for updating.
If you have a global.json file to target a specific .NET Core SDK version, update the version property.
{
  "sdk": {
    "version": "3.1.101"
  }
}
Update the TFM to netcoreapp3.1, as described below.
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
</Project>
You need to update the package references. To do this, update every Microsoft.AspNetCore.* (* meaning wildcard) to 3.1.0 (or any version later).
If you are using Docker (which I think you are), then you need to use an ASP.NET Core 3.1 base image. See an example below.
$ docker pull mcr.microsoft.com/dotnet/aspnet:3.1
For extra steps and information, see the official guide for migrating to ASP.NET Core 3.1.
In summary, upgrading your current application to ASP.NET Core 3.1 should fix your issue.

How to run an R script on a remote GPU?

I am trying to run an R script on a GPU server provided by the institute.
Specifications of GPU server are as follows:
Host Name: gpu01.cc.iitk.ac.in,
Configuration: Four Tesla T10 GPUs added to each machine with 8 cores in each
Operating System: Linux
Specific Usage: Parallel Programming under Linux using CUDA with C Language
R code:
setwd("~/Documents/tm dataset")
library(ssh)
session <- ssh_connect("dgaurav@gpu01.cc.iitk.ac.in")
print(session)
out <- ssh_exec_wait(session, command = 'articles1_test.R')
Error:
ksh: articles1_test.R: not found
Your dataset and script are only on your local machine; you need to copy them to the remote server before you can run them.
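A sketch of that approach with the ssh package already used in the question (this assumes the script sits in your local working directory, that Rscript is on the server's PATH, and that the file names match those in the question):

```r
library(ssh)

# Connect to the GPU server (uses your key or prompts for credentials)
session <- ssh_connect("dgaurav@gpu01.cc.iitk.ac.in")

# Copy the script (plus any data it needs) to the remote home directory first
scp_upload(session, files = "articles1_test.R", to = ".")

# Run it through Rscript rather than invoking the file name as a shell command
out <- ssh_exec_wait(session, command = "Rscript articles1_test.R")

ssh_disconnect(session)
```

Invoking `articles1_test.R` directly fails even after the copy, because the shell treats it as a command name; `Rscript articles1_test.R` runs it with R.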

R ggplot encoding/locale issue with ssh -X

[This is a cross-posting of an R-help mailing list post where this question has so far remained unanswered.]
I am struggling with remote R sessions and a (I suspect) locale-related encoding problem: using the X11 device (X11 forwarding enabled), whenever I try to plot something containing umlauts with ggplot2, I see something like
Error in grid.Call(L_stringMetric, as.graphicsAnnot(x$label)) :
invalid use of -61 < 0 in 'X11_MetricInfo'
Using base graphics is fine as is plotting to another device (pdf, say).
Here is some code to reproduce:
## ssh -X into the remote server
## start R at the remote server
plot(1:10, 1:10, main = "größe")
## this opens a plot window and works as expected
library("ggplot2")
qplot(1:10, 1:10)
## this works still
qplot(1:10, 1:10) + xlab("größe")
## I get the ERROR above
My setup:
locally:
Linux (Debian GNU/Linux 9)
remotely
Linux (RHEL Server release 7.3 (Maipo))
(Maybe) relevant bits of my .ssh/config:
Host theserver
HostName XXX.XXX.XXX.XXX
ForwardX11 yes
ForwardX11Timeout 596h
IdentityFile ~/.ssh/id_rsa
IdentitiesOnly yes
ForwardAgent yes
ServerAliveInterval 300
What version of R do you have (on the remote machine)?
I can replicate this with:
x11(type="Xlib")
library(grid)
convertHeight(stringDescent("größe"), "in")
on R 3.2.5, but not on, e.g., R 3.4.0 (just running R locally in
both cases).

Error during cross-platform communication between Node.js and R using Rserve on AWS. Error: connect ETIMEDOUT

I want to pass my R script from my node.js application to an Amazon EC2 instance running Ubuntu and Rserve; which would then execute and evaluate the R script and return the processed output back to my node.js application.
I found two reliable options:
1) RIO - (Used this)
2) rserve-client
Before proceeding to connect with Rserve, I made sure it was initiated and running.
ubuntu#ip-172-31-37-254:~$ sudo netstat -nltp|grep Rserve
tcp 0 0 0.0.0.0:6311 0.0.0.0:* LISTEN 5978/Rserve
Enabled the remote connection parameter for Rserve and got it started successfully.
library(Rserve)
Rserve(args="--RS-enable-remote")
Starting Rserve:
/usr/lib/R/bin/R CMD /usr/local/lib/R/site-library/Rserve/libs//Rserve --RS-enable-remote
R version 3.0.2 (2013-09-25) -- "Frisbee Sailing"
Copyright (C) 2013 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
During startup - Warning message:
Setting LC_CTYPE failed, using "C"
Rserv started in daemon mode.
1) Tried a test script using RIO package from my Node.js application
var rio = require("rio");
console.log("Connecting..");
rio.e({
    host: "ec2-52-40-113-159.us-west-2.compute.amazonaws.com",
    port: "6311",
    command: "pi / 2 * 2",
    callback: function (err, res) {
        if (!err) {
            console.log(res);
        } else {
            console.log("Rserve call failed. " + err);
        }
    }
    // path: "/usr/local/lib/R/site-library"
});
Note: As stated here, I have the host parameter updated accordingly.
Error while running the app:
node app.js
Connecting..
Rserve call failed. Error: connect ETIMEDOUT 52.40.113.159:6311
I'm not sure why I am unable to connect. Is there any step I am missing? I would also appreciate any alternate package or way of accomplishing this task. Thanks.
The issue in your use case is that the required port is not open in your EC2 security group. Open port 6311 (the Rserve listening port; note that the 5978 in your netstat output is the Rserve process ID, not a port) and the issue will go away.
Instead of opening port 6311 to the world, you can add a rule like this to the EC2 security group:
Type: Custom TCP Rule, Protocol: TCP, Port Range: 6311, Source: Node.js server IP
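Once the security-group rule is in place, you can sanity-check that the port is reachable from any machine with R installed, using the RSclient package (the IP below is the one from the error message; RSclient is a separate package you may need to install first):

```r
# install.packages("RSclient")  # client-side counterpart to Rserve
library(RSclient)

# Open a connection to the remote Rserve instance
con <- RS.connect(host = "52.40.113.159", port = 6311)

# Evaluate the same test expression the Node.js app sends
RS.eval(con, pi / 2 * 2)  # should return pi (about 3.141593) once the port is open

RS.close(con)
```

If this connects but the Node.js call still times out, the problem is on the Node side rather than in the security group.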
