Accessing host machine from Minikube - google-cloud-datastore

I have a Google Cloud Datastore emulator running on my local machine at localhost:8742. I'd like to access this from a pod running in minikube. Is there a way to do this?

You should be able to access the Google Cloud Datastore emulator by using the host's IP address as seen from the VM. For the VirtualBox driver (the default in minikube) this IP address is 10.0.2.2.
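For example, a minimal end-to-end check might look like the following. This assumes you started the emulator listening on all interfaces (by default it binds only to localhost, which the VM can't reach), and datastore-test is just a throwaway pod name:
# start the emulator so it is reachable from outside the host's loopback
gcloud beta emulators datastore start --host-port=0.0.0.0:8742
# from minikube, point a client at the host via the VirtualBox gateway IP
kubectl run datastore-test -i -t --restart=Never --image=alpine \
  --env="DATASTORE_EMULATOR_HOST=10.0.2.2:8742" -- /bin/sh
Most Google Cloud client libraries pick up the DATASTORE_EMULATOR_HOST variable automatically, so a client in that pod would talk to the emulator on the host.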

Telepresence can do this and may prove generally useful to you for your k8s development work.
After installing Telepresence, and following the guide at www.telepresence.io/tutorials/kubernetes-rapid, create a proxy service,
localhost$ telepresence --new-deployment some-name-you-like --expose 8742
Then, you can access the service some-name-you-like from the cluster as you would have done via localhost, for example from a shell or a specialized data store client image:
kubectl run console -i -t --restart=Never --image=alpine -- /bin/sh
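From inside that shell, a quick reachability check against the proxied deployment name might be (installing netcat-openbsd is an assumption, in case the busybox nc lacks -z):
apk add --no-cache netcat-openbsd
nc -z some-name-you-like 8742 && echo reachable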

Related

APIGEE on premise windows installation

I want to install Apigee on my Windows machine so that I can use a localhost URL in an Apigee proxy. How can I do that?
While Apigee is more of an enterprise-scale API-management platform, if you really want to experiment with a localhost Apigee-based API proxy on your workstation, you can. Consider using Apigee Edge Microgateway (https://docs.apigee.com/api-platform/microgateway/edge-microgateway-home), which is Node-based and can run on Windows, or the Apigee adapter for Envoy (https://docs.apigee.com/api-platform/envoy-adapter/v2.0.x/concepts), which can run natively or in a local Docker image, for example via Windows Subsystem for Linux.
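As a rough sketch of the Microgateway route, the flow looks like this; your-org, your-env, your-username, and the key/secret printed by configure are placeholders for your own values:
npm install -g edgemicro
edgemicro init
edgemicro configure -o your-org -e your-env -u your-username
edgemicro start -o your-org -e your-env -k your-key -s your-secret
After that, the gateway listens on localhost (port 8000 by default), and you can target it with localhost URLs.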

Mounting Google Cloud network locally

We have a Google Cloud project with several VM instances and also Kubernetes cluster.
I am able to easily access Kubernetes services with kubefwd and I can ping them and also curl them. The problem is that kubefwd works only for Kubernetes, but not for other VM instances.
Is there a way to mount the network locally, so I could ping and curl any instance without it having public IP and with DNS the same as inside the cluster?
I would highly recommend rolling a VPN server like OpenVPN. You can also run this inside of the Kubernetes cluster.
I have a make install-ready repo for you to check out at https://github.com/mateothegreat/k8-byexamples-openvpn.
Basically, OpenVPN runs inside of a container (inside of a pod) and you can set the routes that you want the client(s) to be able to see.
I would not rely on kubefwd, as it isn't production grade and will give you issues with persistent connections.
Hope this helps you out; if you still have questions or concerns, please reach out.
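For illustration, the routes and DNS pushed to clients live in the OpenVPN server config; the subnet and DNS IP below are made-up values you would swap for your VPC's:
cat >> /etc/openvpn/server.conf <<'EOF'
# let clients reach any instance in a (hypothetical) 10.128.0.0/20 subnet
push "route 10.128.0.0 255.255.240.0"
# hand out a (hypothetical) in-cluster DNS server so names resolve as they do inside
push "dhcp-option DNS 10.0.0.10"
EOF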

Connect to a remote Jupyter runtime over HTTPS with Google Colab

I'm trying to use Google's Colab feature to connect to a remote runtime that is configured with HTTPS. However, I only see an option to set the port in the UI, not the protocol.
I've checked the Network panel and the website starts a WebSocket connection with http://localhost:8888/http_over_websocket?min_version=0.0.1a3, HTTP-style.
Full details of my setup:
I have a public Jupyter server at https://123.123.123.123:8888 with self-signed certificate and password authentication
I've followed jupyter_http_over_ws' setup on the remote
I started the remote process with jupyter notebook --no-browser --keyfile key.pem --certfile crt.pem --ip 0.0.0.0 --notebook-dir notebook --NotebookApp.allow_origin='https://colab.research.google.com'
I've created a local port forwarding with ssh -L 8888:localhost:8888 dev@123.123.123.123
I've turned on network.websocket.allowInsecureFromHTTPS on Firefox
I went to https://localhost:8888 and logged in
Naturally, when the UI calls http://localhost:8888/http_over_websocket?min_version=0.0.1a3 it fails. If I manually access https://localhost:8888/http_over_websocket?min_version=0.0.1a3 (note the extra s) it gets through.
I see three options to solve it:
Tell the UI to use secure WS connection
Run a proxy on my local machine to transform the HTTPS into plain HTTP
Turn off HTTPS on my remote
I think the last two will work, but I'd rather not go that way.
How to do #1?
Thanks a lot!
Your option 1 isn't possible in Colab today.
Why do you want to use HTTPS over an SSH tunnel that already encrypts forwarded traffic?
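That said, if you want to keep HTTPS on the remote anyway, your option 2 is nearly a one-liner; this sketch assumes socat is installed and picks 8889 as an arbitrary local port:
# speak plain HTTP to Colab on 8889, TLS to the tunnelled Jupyter on 8888
socat TCP-LISTEN:8889,fork,reuseaddr OPENSSL:localhost:8888,verify=0
Then give Colab port 8889 instead of 8888.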

How do I connect to a dataproc cluster with Jupyter notebooks from cloud shell

I have seen the instructions at https://cloud.google.com/dataproc/docs/tutorials/jupyter-notebook for setting up Jupyter notebooks with Dataproc, but I can't figure out how to alter the process to use Cloud Shell instead of creating an SSH tunnel locally. I have been able to connect to a Datalab notebook by running
datalab connect vmname
from Cloud Shell and then using the preview function. I would like to do something similar but with Jupyter notebooks and a Dataproc cluster.
In theory, you can mostly follow the same instructions found at https://cloud.google.com/shell/docs/features#web_preview to use local port forwarding to access your Jupyter notebooks on Dataproc via Cloud Shell's same "web preview" feature. Something like the following in your Cloud Shell:
gcloud compute ssh my-cluster-m -- -L 8080:my-cluster-m:8123
However, there are two issues which prevent this from working:
You need to modify the Jupyter config to add the following to the bottom of /root/.jupyter/jupyter_notebook_config.py:
c.NotebookApp.allow_origin = '*'
Cloud Shell's web preview needs to add support for websockets.
If you don't do (1), you'll get popup errors when trying to create a notebook, due to Jupyter refusing the Cloud Shell proxy domain. Unfortunately, (2) requires deeper support from Cloud Shell itself; it'll manifest as errors like "A connection to the notebook server could not be established."
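Sketching step (1) as a one-off command from Cloud Shell (the cluster name and config path follow the example above; you'd still need to restart the Jupyter process afterwards):
gcloud compute ssh my-cluster-m \
  --command="echo \"c.NotebookApp.allow_origin = '*'\" | sudo tee -a /root/.jupyter/jupyter_notebook_config.py"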
Another possible option, without waiting for (2), is to run your own nginx proxy as part of the Jupyter initialization action on a Dataproc cluster, if you can get it to proxy websockets suitably. See this thread for a similar situation: https://github.com/jupyter/notebook/issues/1311
Generally, this type of broken websocket support in proxy layers is a common problem, since websockets are still relatively new; over time, more and more things will start to support them out of the box.
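If you do try the nginx route, the websocket-upgrade headers are the part proxies usually miss; a minimal, untested server block (ports follow the example above) might look like:
cat > /etc/nginx/conf.d/jupyter.conf <<'EOF'
server {
    listen 8080;
    location / {
        proxy_pass http://localhost:8123;
        # these headers are what let websocket upgrades survive the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF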
Alternatively:
Dataproc also supports using a Datalab initialization action; this is set up such that the websockets proxying is already taken care of. Thus, if you're not too dependent on Jupyter specifically, then the following works in Cloud Shell:
gcloud dataproc clusters create my-datalab-cluster \
--initialization-actions gs://dataproc-initialization-actions/datalab/datalab.sh
gcloud compute ssh my-datalab-cluster-m -- -L 8080:my-datalab-cluster-m:8080
And then select the usual "Web Preview" on port 8080. Or you can select other Cloud Shell-supported ports for the local binding, like:
gcloud compute ssh my-datalab-cluster-m -- -L 8082:my-datalab-cluster-m:8080
In which case you'd select 8082 as the web preview port.
You can't connect to Dataproc through a Datalab installed on a VM (on GCE).
As the documentation you mentioned says, you must launch Dataproc with a Datalab initialization action.
Moreover, the datalab connect command is only available if you created the Datalab instance with the datalab create command.
You must create an SSH tunnel to your master node ("vmname-m" if your cluster name is "vmname") with:
gcloud compute ssh --zone YOUR-ZONE --ssh-flag="-D 1080" --ssh-flag="-N" --ssh-flag="-n" "vmname-m"
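With that SOCKS proxy up, the usual next step (this mirrors the Dataproc web-interfaces docs; the Chrome binary name and the Jupyter port 8123 are assumptions) is to start a browser that resolves hostnames through the tunnel:
google-chrome --proxy-server="socks5://localhost:1080" \
  --host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE localhost" \
  --user-data-dir=/tmp/vmname-m http://vmname-m:8123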

How to run meteor server on a different ip address?

How can I start the Meteor server on a different IP address? Currently, in the examples, I am only able to run on localhost:3000.
export BIND_IP no longer works; the bind IP is now defined with the --port (or -p, or --app-port) option:
$ meteor run --port 127.0.0.1:3000
Reference: https://github.com/meteor/meteor/commit/9b8bd31a7b6c857e5d8fc0393982e6e6b2973eb0
If you are looking to run something on another IP address (but still have the files local), you need to look into editing your vhosts file. If you are on a Mac, look into VirtualHostX.
The proper way to change ports with Meteor is this:
meteor --port 5000
According to this change, you should be able to configure your app to bind to a specific IP address by configuring a BIND_IP environment variable.
export BIND_IP=127.0.0.1
You may need to update your app to a newer version of Meteor for this to work correctly.
Using Meteor 1.3.2.4, if your IP is 192.168.0.13, as in my case, type on the terminal:
meteor --mobile-server 192.168.0.13
or
meteor --port 192.168.0.13:3000
And you will see the Meteor welcome page by typing
http://192.168.0.13:3000
in your browser.
At the moment, you can't: Meteor binds to all IP addresses, but there's an issue open to add support for binding to a specific IP.
Deploy it on another server and connect to the server's internet IP from outside the internal network, or connect to the server's local IP from the LAN.
How to deploy on another server? Run
meteor bundle
and read the README.
This isn't possible yet, but there is an open pull request for it. They are waiting for the author to sign the Meteor contributor agreement before it can be accepted.
https://github.com/meteor/meteor/pull/469/
If you need it before it's official, you can apply the patch yourself (or potentially just replace 127.0.0.1 with the IP address you want to bind to in the same files referenced by the patch: app/lib/mongo_runner.js and app/meteor/run.js).
Actually, Meteor behaves differently in production and development environments.
Production
Use the BIND_IP environment variable.
Development
Use the --port argument, like meteor run --port 192.168.1.1:port
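To make both modes concrete, here's a small sketch; every IP, port, and the Mongo URL are hypothetical placeholders:
# development: bind the dev server to one interface
meteor run --port 192.168.1.50:3000
# production: a bundled app reads BIND_IP and PORT from the environment
export BIND_IP=192.168.1.50
export PORT=3000
export ROOT_URL=http://192.168.1.50:3000
export MONGO_URL=mongodb://localhost:27017/myapp
node bundle/main.js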
According to netstat -tapn, Meteor/Node.js listens on all available IP addresses on the machine:
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 9098/node
Do you have something like iptables running?
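If a firewall is the suspect, a quick look at the filter rules (assuming Linux with iptables) would be:
sudo iptables -L -n | grep 3000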
