Setting up a public plumber API? - rstudio-server

I'm trying to set up a plumber API (0.4.6) on rstudio-server running on AWS Linux, so that our external analytics system can make requests to R. I've got firewall ports open on 8787 (for RStudio, which is working fine) and on 5762 (for the API, which isn't working). If I kick off a swagger API from within RStudio, it works fine locally. If I remap the RStudio interface to 5762, that works fine too (so it's apparently not a firewall problem). But we simply cannot find a way to expose a plumber API on 5762.
Suggestions gratefully received…

What IP are you using?
Plumber responds on 127.0.0.1 by default.
There are probably rules in place preventing you from connecting to localhost from an external host.
Try 0.0.0.0:
pr$run(host="0.0.0.0")
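Putting it together, a minimal sketch (assuming your endpoints live in a file named plumber.R; adjust to your actual file):

library(plumber)

# Build the router from the endpoint definitions
pr <- plumb("plumber.R")

# Bind to all interfaces instead of loopback so external hosts can reach it
pr$run(host = "0.0.0.0", port = 5762)

Since remapping RStudio to 5762 already worked from outside, the firewall side should be fine once plumber binds to 0.0.0.0.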

Related

Unable to access some specific GET calls on a port

I have a Virtual Machine deployed on a Windows server, with a custom port, let's say 1100, on which some web APIs are served. Those APIs are accessed both internally and externally. But some GET calls return ERR_CONNECTION_RESET on that same port when accessed externally, i.e. the same calls work internally on the VM and on the server, but when we try them from outside the network they fail. Other calls/APIs work fine on the same port 1100.
What steps should I take to make these calls work?
Note: I am using a MikroTik firewall behind the server.
Any help would be highly appreciated.
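One way to narrow down where the reset happens is to run the same GET from inside and outside the network and compare verbose traces; a quick sketch in R with httr (the host and path are placeholders):

library(httr)

# Run once from the VM and once from an external machine.
# If the external trace dies before any HTTP response arrives,
# the reset is likely happening at the firewall or NAT layer.
resp <- GET("http://your-server:1100/some/endpoint", verbose())
status_code(resp)

A reset on only some GET calls on an otherwise working port often points to a firewall rule matching something in those specific requests rather than the port itself.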

Hosting FastAPI on vast.ai GPU instance

How do I allow HTTP traffic on a vast.ai instance? I'd like to host GPU-related code using FastAPI + nginx, but I am not seeing the nginx homepage after configuration. I am not getting a bad-gateway error; what I get is "This site can’t be reached".
After configuration, I expected to see the nginx homepage. This works on AWS, but when setting up an instance on AWS you get the option to "Allow HTTP/HTTPS traffic". On vast.ai, I don't see that.
OK, so it didn't work by SSH-ing into the instance and running FastAPI.
I rented another instance with Jupyter Notebook enabled.
So Jupyter + ngrok + uvicorn works. Since the vast.ai instance's IP isn't reachable, ngrok does the trick by providing a unique public URL.
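For anyone trying to reproduce this, the working combination looks roughly like the following (a sketch: main:app and port 8000 are placeholder values, and it assumes the ngrok client is installed on the instance):

uvicorn main:app --host 0.0.0.0 --port 8000
ngrok http 8000

ngrok then prints a public forwarding URL that tunnels to the uvicorn process, which is what sidesteps the unreachable instance IP.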

"An exception occurred" when hitting RPlumber API from Ubuntu 16.04

I am using RPlumber to create an API that makes some data available to users of the API. I created an Ubuntu 16.04 server on Linode to host it.
I have successfully installed R and all of the libraries on the server, and I am able to run the script on that machine with the command Rscript file_that_runs_rplumber.R. When I run the script, the command line hangs with:
Running plumber API at http://0.0.0.0:8004
Running swagger Docs at http://127.0.0.1:8004/__docs__/
...so I know that the API is running. I am trying to hit this endpoint from my local machine, not from the Linode server, so I replace the 0.0.0.0 with the server's IP address, 1.2.3.4 let's say. When I visit 1.2.3.4:8004/__docs__/, the page does work, and I get the auto-generated RPlumber API docs.
However, when I replace /__docs__/ with one of the API's endpoints, I receive the response "An exception occurred".
I can see from the Linode server's command line that the R code associated with the endpoint runs, but the result is simply not returned to me. Perhaps this is a security issue, and my local machine is not allowed to access the endpoint? How can I update the server so that my local machine (and any other machine) can access this API?
Thanks!
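If the browser can load /__docs__/ but not the endpoint itself, one thing worth ruling out is a cross-origin restriction; plumber's documentation describes a CORS filter along these lines (a sketch, added to the same file as your endpoints):

#* @filter cors
cors <- function(res) {
  res$setHeader("Access-Control-Allow-Origin", "*")
  plumber::forward()
}

That said, an "An exception occurred" response suggests the endpoint function itself threw an error server-side, so wrapping the endpoint body in tryCatch and printing the error on the Linode console is also worth a try.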

Connect to a remote Jupyter runtime over HTTPS with Google Colab

I'm trying to use Google's Colab feature to connect to a remote runtime that is configured with HTTPS. However, the UI only gives me an option to specify the port, not the protocol.
I've checked the Network panel and the website starts a WebSocket connection with http://localhost:8888/http_over_websocket?min_version=0.0.1a3, HTTP-style.
Full details of my setup:
I have a public Jupyter server at https://123.123.123.123:8888 with a self-signed certificate and password authentication
I've followed jupyter_http_over_ws' setup on the remote
I started the remote process with jupyter notebook --no-browser --keyfile key.pem --certfile crt.pem --ip 0.0.0.0 --notebook-dir notebook --NotebookApp.allow_origin='https://colab.research.google.com'
I've created a local port forwarding with ssh -L 8888:localhost:8888 dev@123.123.123.123
I've turned on network.websocket.allowInsecureFromHTTPS on Firefox
I went to https://localhost:8888 and logged in
Naturally, when the UI calls http://localhost:8888/http_over_websocket?min_version=0.0.1a3 it fails. If I manually access https://localhost:8888/http_over_websocket?min_version=0.0.1a3 (note the extra s) it gets through.
I see three options to solve it:
Tell the UI to use a secure WS connection
Run a proxy on my local machine to transform the HTTPS into plain HTTP
Turn off HTTPS on my remote
I think the last two would work, but I'd rather not go that way.
How to do #1?
Thanks a lot!
Your option #1 isn't possible in Colab today.
Why do you want to use HTTPS over an SSH tunnel that already encrypts forwarded traffic?
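For what it's worth, option #3 would just mean dropping the TLS flags from the launch command and letting the tunnel handle encryption, e.g. something like:

jupyter notebook --no-browser --ip 127.0.0.1 --notebook-dir notebook --NotebookApp.allow_origin='https://colab.research.google.com'

Binding to 127.0.0.1 keeps the server reachable only through the ssh -L forwarding, so nothing crosses the network unencrypted.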

Meteor 0.9.2 remote connection issue

Not sure if it's just a coincidence or a bug, but after updating to 0.9.2 I lost remote connections to all of my Meteor apps. localhost:3000 works fine, but remote access via host:3000 (or any other port I try) cannot connect.
I had exactly the same symptoms with the new Meteor (0.9.2.1): I could connect fine on my development server using localhost:3000, but I received an error when attempting to connect to that server using its NETBIOS name (which I had been doing successfully since Blaze). Example URL:
v-as-nodejs:3000
This worked fine before but does not with the latest Meteor.
I was also able to overcome this issue by specifying an IP address and port explicitly in the Meteor server startup command:
meteor --port 192.168.1.108:3000
What is interesting is that as long as the IP address in the --port parameter matches the server's private network address, you can still connect to the server using a logical name. In my case, my server is in a DMZ on my private network, and I can use the public domain name to get to it. I can also use the server's NETBIOS name; both work fine.
I don't fully understand why this works, unless Node.js or Meteor is doing some internal comparison. What is certain is that it comes down to either the Meteor upgrade or the Node.js upgrade.
Use --port host:port
example: meteor run --port 192.168.168.164:6969
Binding to a specific IP seems to solve the problem:
meteor run -p 192.168.2.3:8080
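If you'd rather not hard-code the machine's address, binding to all interfaces is the same idea (an untested sketch):

meteor run --port 0.0.0.0:3000

That tells the bundled Node server to listen on every interface instead of only the default one.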
