Running RShiny from a Google Cloud Platform VM

I've put together an R Shiny doc that I want to run on my Google Cloud Platform virtual machine. When I run the script, it appears to work in that it reports "Listening on http://127.0.0.1:6840". However, when I click on the link generated by the script, I get the error "500. That’s an error."
The script works locally, so I don't believe it's an issue with the code.
Help!

Here is what I did to make it work. Full disclosure - I have no idea what any of this is, just following the steps in the docs:
Install R
Install Shiny
Clone https://github.com/rstudio/shiny-examples
cd into any of the examples (I chose 050-kmeans-example)
Run R -e "shiny::runApp(host='0.0.0.0', port=8080)" to start the server (8080 is just an example and can certainly be a different port)
Receive a message in the console saying "Listening on http://0.0.0.0:8080"
Go back to the GCP Console and configure a firewall rule to allow traffic to port 8080 from the outside world (a sample gcloud command is shown after this list)
Open a new browser tab with the external IP address of the VM and append the port (e.g. if the VM's external IP is 1.2.3.4, type http://1.2.3.4:8080)
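If you prefer the command line to the Cloud Console for the firewall step, a rule like the one above can also be created with gcloud; the rule name allow-shiny below is just an example, and tcp:8080 should match whatever port you chose:

gcloud compute firewall-rules create allow-shiny --direction=INGRESS --allow=tcp:8080 --source-ranges=0.0.0.0/0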

Related

"An exception occurred" when hitting RPlumber API from Ubuntu 16.04

I am using RPlumber to create an API that makes some data available to its users. I created an Ubuntu 16.04 server on Linode to host the API.
I have successfully installed R and all of the required libraries on the server, and I am able to run the script on the machine with the command Rscript file_that_runs_rplumber.R. When I run the script, the command line hangs with:
Running plumber API at http://0.0.0.0:8004
Running swagger Docs at http://127.0.0.1:8004/__docs__/
...so I know that the API is successfully running. I am trying to hit this endpoint from my local machine, not from the Linode server, so I replace the 0.0.0.0 with the server's IP address, 1.2.3.4 let's say. When I visit 1.2.3.4:8004/__docs__/, the page does work, and I get the auto-generated RPlumber API docs.
However, when I replace /__docs__/ with one of the API's endpoints, I receive the "An exception occurred" error instead.
I can see from the command line on the Linode server that the R code associated with the endpoint is running; however, the result is simply not being returned to me. Perhaps this is a security issue and I cannot access the endpoint from my local machine? How can I update the server so that my local machine (and any other machines as well) can access this API?
Thanks!
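For context, a runner script like the file_that_runs_rplumber.R mentioned above typically looks something like the sketch below. The endpoint name /data and the dataset are purely hypothetical; only the 0.0.0.0:8004 binding mirrors the console output quoted in the question.

# plumber_api.R -- annotated endpoint definitions (endpoint name is hypothetical)
#* @get /data
function() {
  head(cars)  # returned as JSON by plumber's default serializer
}

# file_that_runs_rplumber.R -- builds the router and binds it to all interfaces
library(plumber)
api <- plumb("plumber_api.R")
api$run(host = "0.0.0.0", port = 8004)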

Setting up public plumber API?

I'm trying to set up a plumber API (0.4.6) on RStudio Server running on AWS Linux, so that our external analytics system can make requests to R. I've got firewall ports open on 8787 (for RStudio, which is working fine) and on 5762 (for the API, which isn't working). If I kick off a Swagger API from within RStudio, that works fine locally. If I remap the RStudio interface to 5762, that also works fine (so it's apparently not a firewall problem). But we simply cannot find a way to expose a plumber API on 5762.
Suggestions gratefully received…
What IP are you using?
Plumber responds on 127.0.0.1 by default.
There are probably rules in place that prevent you from connecting to localhost from an external host.
Try 0.0.0.0:
pr$run(host="0.0.0.0")
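If the external system also needs a specific port, the port from the question can be passed alongside the host; here pr is assumed to be the plumber router object created with plumb():

pr$run(host = "0.0.0.0", port = 5762)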

Connecting to BigQuery in RShiny

I've tried two methods to connect my Shiny app to a BigQuery table as its source data:
Hadley's bigrquery, and
Mark Edmondson's bigQueryR
They're both failing the same way, so it's clearly a DFU error.
In each case, when I execute the appropriate command to establish the authorized connection (gar_auth_service(json_file = "/path/", scope = "https://www.googleapis.com/auth/bigquery") and bq_auth(path = "/path/"), respectively), I get this:
This site can’t be reached
localhost refused to connect.
Try:
- Checking the connection
- Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
This error comes after what appears to be a normal Google login process in the browser. The error page is hosted at localhost:1410, if that's any help.
In the Console, I have:
Created a VM instance (Ubuntu 19)
Successfully installed R, RStudio, and Shiny
Successfully logged in to RStudio in my GCP instance (from the browser, obviously, using the External IP I reserved in GCP)
I've also already created a BigQuery table in the same project, and successfully connected to it from an R script on my local machine.
I'm trying to get that same R script to run from my Google Compute Engine instance.
Have I provided enough details to ask for help? If not, let me know what else I should provide. I'm walking through teaching myself GCP right now, and I'm quite the novice.
Thanks!
To bypass this issue, try connecting to your Ubuntu 19 Compute Engine instance using Chrome Remote Desktop, as documented here.
Chrome Remote Desktop allows you to remotely access applications with a graphical user interface from a local computer, instead of using the external IP. With this approach you don't need to open firewall ports, and you use your Google Account for authentication and authorization. I've tried it and was able to connect to both Shiny Server and RStudio.
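For reference, the service-account connection the question describes usually looks something like this sketch with bigrquery and DBI; the key-file path, project, dataset, and table names are placeholders rather than values from the question:

library(bigrquery)
library(DBI)

# Authenticate non-interactively with a service-account key file (placeholder path)
bq_auth(path = "/path/to/service-account-key.json")

# Connect to a BigQuery dataset (placeholder project and dataset)
con <- dbConnect(
  bigrquery::bigquery(),
  project = "my-gcp-project",
  dataset = "my_dataset"
)

# Run a quick test query (placeholder table)
dbGetQuery(con, "SELECT COUNT(*) AS n FROM my_table")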

Google Cloud Platform access tensorboard

I am new to Google Cloud (and unix) and have been using ml-engine to train a neural net using Tensorflow.
Here it says that you can monitor the app using tensorboard. How can I access the tensorboard panel? When I run it (from the Cloud Shell Access console) it says it's running at http://0.0.0.0:6006
I don't know the IP of the Cloud Shell console, how can I access the tensorboard panel?
The command I run (and output):
tensorboard --logdir=gs://model_output
Starting TensorBoard 47 at http://0.0.0.0:6006
Thanks!
The easiest option is to adjust your command to:
tensorboard --logdir=gs://model_output --port=8080
That is, add --port=8080 to your command, which allows you to just use the default Web Preview option of Cloud Shell.
I want to offer another suggestion. The solution from #Fematich is very helpful. The small glitch is that 8080 is the default port, and we often run JupyterLab on that port. My suggestion is to open two SSH sessions, one for port 8080 and one for port 6006: run TensorBoard in the 8080 session, and open Web Preview in the second session with the port changed from the default 8080 to 6006. That way you can update your model freely in one session and observe the graphs in the other. I found this pretty helpful.

How to get past the MongoDB port error to launch the examples?

I'm getting started with Meteor, using the examples:
https://www.meteor.com/examples/parties
If I deploy and load the deployment URL (http://radically-finished-parties-app.meteor.com/), the app runs; nothing magic there, it was an easy example.
My issue occurs when I try to run it locally: I get the following message
"You are trying to access MongoDB on the native driver port. For http diagnostic access, add 1000 to the port number"
I got meteor running through the terminal command:
meteor --port 3004
Setup:
- Mac OS 10.9
- Chrome 31
This is happening because you are accessing the MongoDB port in your web browser.
When you run a Meteor app on, e.g., port 3004:
Port 3004 would be a web proxy to port 3005.
Port 3005 would be the Meteor app in a 'raw' sort of sense (without the websockets part, I think).
Port 3006 would be the MongoDB instance (which is what you are accessing).
Try using a different port, or use a simpler setup: just run meteor and access port 3000 in your web browser.
If the reason you moved the port number up is that it said the port was in use, the Meteor app may not have exited properly on your computer. Restart your machine or have a look at Activity Monitor to kill the rogue node process.
I think what might have happened is that you ran it on 3000, then moved the ports up, and the previous instance may not have exited correctly, so what you're seeing is the MongoDB instance of a previous Meteor run.
This happens when you run another Meteor app on port 2999, forget about it, and try to start a second instance on the usual port.
Try making sure Meteor is using the local embedded MongoDB, which it will manage on its own:
export MONGO_URL=''
Something changed in my bash settings that I didn't copy over to zsh. I uninstalled zsh and meteor can now find and access mongo.
