Google Cloud Platform: access TensorBoard (Unix)

I am new to Google Cloud (and Unix) and have been using ML Engine to train a neural net using TensorFlow.
The documentation says you can monitor the app using TensorBoard. When I run it (from the Cloud Shell console) it says it's running at http://0.0.0.0:6006.
I don't know the IP of the Cloud Shell console, so how can I access the TensorBoard panel?
The command I run (and output):
tensorboard --logdir=gs://model_output
Starting TensorBoard 47 at http://0.0.0.0:6006
Thanks!

The easiest fix is to adjust your command to:
tensorboard --logdir=gs://model_output --port=8080
I.e., add --port=8080 to your command, which lets you use the default Web Preview option of Cloud Shell.

I want to give another suggestion. The solution from @Fematich is very helpful. The small glitch is that 8080 is Web Preview's default port, and we often already run JupyterLab on it. My suggestion is to open two Cloud Shell sessions: run TensorBoard on port 6006 in one, and in the other open Web Preview with the port changed from the default 8080 to 6006. That way you can update your model freely in one session and observe the graphs in the other. I found it pretty helpful.
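A sketch of that two-session workflow (the log directory is the one from the question; ports are the defaults mentioned above):

```shell
# Cloud Shell session 1: run TensorBoard on 6006,
# leaving 8080 free for JupyterLab
tensorboard --logdir=gs://model_output --port=6006

# Cloud Shell session 2: click "Web Preview" and use
# "Change port" to switch from the default 8080 to 6006
```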

Related

Running RShiny from a Google cloud platform VM

I've put together an R Shiny app that I want to run on my Google Cloud Platform virtual machine. When I run the script, it works in that it generates a web page ("Listening on http://127.0.0.1:6840"). However, when I click on the link generated by the script, I get the error "500. That's an error."
The script works locally, so I don't believe it's an issue with the code.
Help!
Here is what I did to make it work. Full disclosure - I have no idea what any of this is, just following the steps in the docs:
install R
install Shiny
clone https://github.com/rstudio/shiny-examples
cd into any of the examples (I chose 050-kmeans-example)
run R -e "shiny::runApp(host='0.0.0.0', port=8080)" (8080 is just an example and can certainly be different) to start the server
Receive a message in the console saying "Listening on http://0.0.0.0:8080"
Go back to the GCP Console and configure a firewall rule to allow communications to port 8080 from the outside world.
Open a new browser tab with the external IP address of the VM and append the port (e.g. if the VM's external IP is 1.2.3.4, type http://1.2.3.4:8080)
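The firewall step can also be done from the command line; a sketch, assuming gcloud is configured for your project (the rule name "allow-shiny-8080" is just an example):

```shell
# Allow inbound TCP on port 8080 from anywhere, so the Shiny
# server started above is reachable on the VM's external IP
gcloud compute firewall-rules create allow-shiny-8080 \
    --direction=INGRESS \
    --allow=tcp:8080 \
    --source-ranges=0.0.0.0/0
```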

Setting up public plumber API?

I'm trying to set up a plumber API (0.4.6) on rstudio-server running on AWS Linux, so that our external analytics system can make requests to R. I've got firewall ports open on 8787 (for RStudio, which is working fine) and on 5762 (for the API, which isn't working). If I kick off a Swagger API from within RStudio, that works fine locally. If I remap the RStudio interface to 5762, that also works (so it's not apparently a firewall problem). But we simply cannot find a way to expose a plumber API on 5762.
Suggestions gratefully received…
What IP are you using?
plumber responds on 127.0.0.1 by default, and there are probably rules in place preventing you from connecting to localhost from an external host.
Try 0.0.0.0:
pr$run(host="0.0.0.0")
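One way to narrow this down is to compare a local request with an external one; a sketch (the /echo route is hypothetical — substitute one of your own endpoints):

```shell
# On the AWS box itself: should work once plumber is running on 0.0.0.0
curl http://127.0.0.1:5762/echo

# From your workstation: if this fails while the local call succeeds,
# the API is still bound to 127.0.0.1 only, or port 5762 is filtered
curl http://EXTERNAL_IP:5762/echo
```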

Connecting to BigQuery in RShiny

I've tried two methods to connect my Shiny app to a BigQuery table as its source data:
Hadley's bigrquery, and
Mark Edmondson's BigQueryR
They're both failing the same way, so it's clearly a DFU error.
In each case, when I execute the appropriate command to establish the authorized connection (gar_auth_service(json_file = "/path/", scope = "https://www.googleapis.com/auth/bigquery") and bq_auth(path = "/path/"), respectively), I get this:
This site can’t be reached localhost refused to connect. Try:
Checking the connection Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
This error comes after what appears to be a normal Google login process in the browser. The error page is hosted at localhost:1410, if that's any help.
In the Console, I have:
Created a VM instance (Ubuntu 19)
Successfully installed R, RStudio, and Shiny
Successfully logged in to RStudio on my GCP instance (from the browser, obviously, using the external IP I reserved in GCP)
I've also already created a BigQuery table in the same project, and successfully connected to it from an R script on my local machine.
I'm trying to get that same R script to run from my Google Compute Engine instance.
Have I provided enough details to ask for help? If not, let me know what else I should provide. I'm walking through teaching myself GCP right now, and I'm quite the novice.
Thanks!
To bypass this issue, try connecting to your Ubuntu 19 instance using Chrome Remote Desktop on your Compute Engine instance, as documented in the GCP tutorial.
Chrome Remote Desktop lets you remotely access applications with a graphical user interface from a local computer instead of using the external IP. With this approach you don't need to open firewall ports, and you use your Google Account for authentication and authorization. I've tried it and was able to connect to both Shiny Server and RStudio.

How do I connect to a dataproc cluster with Jupyter notebooks from cloud shell

I have seen the instructions at https://cloud.google.com/dataproc/docs/tutorials/jupyter-notebook for setting up Jupyter notebooks with Dataproc, but I can't figure out how to alter the process to use Cloud Shell instead of creating an SSH tunnel locally. I have been able to connect to a Datalab notebook by running
datalab connect vmname
from the cloud shell and then using the preview function. I would like to do something similar but with Jupyter notebooks and a dataproc cluster.
In theory, you can mostly follow the same instructions as found at https://cloud.google.com/shell/docs/features#web_preview to use local port forwarding to access your Jupyter notebooks on Dataproc via Cloud Shell's same "web preview" feature. Something like the following in your Cloud Shell:
gcloud compute ssh my-cluster-m -- -L 8080:my-cluster-m:8123
However, there are two issues which prevent this from working:
You need to modify the Jupyter config to add the following to the bottom of /root/.jupyter/jupyter_notebook_config.py:
c.NotebookApp.allow_origin = '*'
Cloud Shell's web preview needs to add support for websockets.
If you don't do (1), you'll get popup errors when trying to create a notebook, because Jupyter refuses the Cloud Shell proxy domain. Unfortunately (2) requires deeper support from Cloud Shell itself; it manifests as errors like "A connection to the notebook server could not be established."
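A sketch of applying fix (1) on the master node (how you restart Jupyter afterwards depends on how your initialization action launched it):

```shell
# On the Dataproc master (e.g. my-cluster-m): allow the Cloud Shell
# proxy domain by relaxing Jupyter's origin check
echo "c.NotebookApp.allow_origin = '*'" \
    >> /root/.jupyter/jupyter_notebook_config.py
```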
Another possible option without waiting for (2) is to run your own nginx proxy as part of the jupyter initialization action on a Dataproc cluster, if you can get it to proxy websockets suitably. See this thread for a similar situation: https://github.com/jupyter/notebook/issues/1311
Generally this type of broken websocket support in proxy layers is a common problem since it's still relatively new; over time more and more things will start to support websockets out of the box.
Alternatively:
Dataproc also supports using a Datalab initialization action; this is set up such that the websockets proxying is already taken care of. Thus, if you're not too dependent on just Jupyter specifically, then the following works in cloud shell:
gcloud dataproc clusters create my-datalab-cluster \
--initialization-actions gs://dataproc-initialization-actions/datalab/datalab.sh
gcloud compute ssh my-datalab-cluster-m -- -L 8080:my-datalab-cluster-m:8080
And then select the usual "Web Preview" on port 8080. Or you can select other Cloud Shell supported ports for the local binding like:
gcloud compute ssh my-datalab-cluster-m -- -L 8082:my-datalab-cluster-m:8080
In which case you'd select 8082 as the web preview port.
You can't connect to Dataproc through a Datalab installed on a VM (on GCE).
As in the documentation you mentioned, you must launch Dataproc with a Datalab initialization action.
Moreover, the datalab connect command is only available if you created the Datalab instance with the datalab create command.
You must create an SSH tunnel to your master node ("vmname-m" if your cluster name is "vmname") with:
gcloud compute ssh --zone YOUR-ZONE --ssh-flag="-D 1080" --ssh-flag="-N" --ssh-flag="-n" "vmname-m"
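With a SOCKS tunnel like that, you then point a browser through the proxy rather than using Web Preview; a sketch following the Dataproc web-interfaces pattern (the browser binary path and notebook port may differ on your setup):

```shell
# Route a throwaway Chrome profile through the SSH SOCKS proxy and
# open the notebook UI on the master node (Jupyter is commonly on
# 8123 for the Dataproc init action; Datalab uses 8080)
google-chrome \
    --proxy-server="socks5://localhost:1080" \
    --user-data-dir=/tmp/vmname-m-profile \
    http://vmname-m:8123
```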

How to get past the MongoDB port error to launch the examples?

I'm getting started with Meteor, using the examples:
https://www.meteor.com/examples/parties
If I deploy and load the deployment URL (http://radically-finished-parties-app.meteor.com/), the app runs ... nothing magic there ... it was an easy example.
My issue occurs when I want to run it locally, I get the following message
"You are trying to access MongoDB on the native driver port. For http diagnostic access, add 1000 to the port number"
I got meteor running through the terminal command:
meteor --port 3004
Setup:
- Mac OS 10.9
- Chrome 31
This is happening because you are accessing the MongoDB port in your web browser.
When you run a Meteor app, e.g. on port 3004:
Port 3004 would be a web proxy to port 3005
Port 3005 would be the Meteor app in a 'raw' sort of sense (without the websockets part, I think)
Port 3006 would be the MongoDB (which you are accessing)
Try using a different port, or just run meteor and access port 3000 in your web browser.
If the reason you moved the port number up is that it said the port was in use, the Meteor app may not have exited properly on your computer. Restart your machine, or have a look at Activity Monitor to kill the rogue node process.
I think what might have happened is that you ran it on 3000, then moved the ports up, and the previous instance was not exited correctly, so what you're seeing is the MongoDB instance of a previous Meteor instance.
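Instead of Activity Monitor, you can hunt down a leftover node process from the terminal; a sketch (port 3000 is just an example):

```shell
# List processes listening on port 3000
lsof -nP -iTCP:3000 -sTCP:LISTEN

# If a stale node/mongod process shows up, stop it by its PID
kill <PID>
```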
This happens when you run another Meteor app on port 2999, forget about it, and try to start a second instance on the usual port.
Try making sure Meteor is using the local embedded mongo db, which it will manage on its own:
export MONGO_URL=''
Something changed in my bash settings that I didn't copy over to zsh. I uninstalled zsh, and Meteor can now find and access Mongo.
