How can I push a local HTTP service (behind NAT) into Kubernetes, so that Pods inside the cluster can access this local service?
The connection must be established from the local side, since the service sits inside a local network.
One way to do this is ssh port forwarding, but that is inconvenient: you need to set up ssh (and the related apt packages) inside Kubernetes, and ssh connections tend to break after a long time. Lightweight solutions are preferred.
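For reference, the ssh variant would look roughly like the sketch below, assuming an ssh server reachable from the cluster at ssh-gateway.example.com and the local service on localhost:5000 (host, user and ports are placeholders):

# Run from the local machine behind NAT; -N opens no shell, -R publishes a remote port.
# Pods that can reach ssh-gateway.example.com:8080 would then hit the local service.
ssh -N -R 0.0.0.0:8080:localhost:5000 user@ssh-gateway.example.com

Note that binding the remote side to 0.0.0.0 typically requires GatewayPorts to be enabled in the remote sshd configuration.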
Here is what I have:
A GCP instance without an external IP (on a VPC, with NAT) that accepts HTTP/HTTPS requests
The firewall allows ingress TCP from 0.0.0.0 and also from IAP's range 35.235.240.0/20, on all ports, for all instances
I SSH to the instance via IAP, run the application in the terminal on port 5000 with host 0.0.0.0, and leave that terminal open. But when I connect in parallel through Cloud Shell, SSH to this instance through IAP, and click Web Preview on port 5000, I get "Couldn't connect to a server on port 5000".
I thought it could be a firewall rule blocking IAP, so that's why I gave access to all ports for IAP (for testing).
P.S.: The same process worked on a VM with an external IP and was validated there (but without needing Cloud Shell for Web Preview; I checked the UI at IP:port in the browser).
What did I miss?
You may be following the guide on Building Internet Connectivity for private VMs, in particular the part on Configuring IAP tunnels for interacting with instances and the use of TCP Forwarding in IAP. Under Tunneling other TCP connections it says:
"The local port tunnels data traffic from the local machine to the remote machine in an HTTPS stream. IAP then receives the data, applies access controls, and forwards the unwrapped data to the remote port."
You can create an encrypted tunnel to a port of the VM instance by running:
gcloud compute start-iap-tunnel INSTANCE_NAME INSTANCE_PORT \
--local-host-port=localhost:LOCAL_PORT \
--zone=ZONE
I guess you want INSTANCE_PORT and LOCAL_PORT to be the same, 5000.
Be aware of its known limitations.
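As a rough sketch, assuming the instance is named my-instance in zone us-central1-a (both placeholders), that would be something like:

# Start the IAP tunnel (for example from Cloud Shell); it keeps running until interrupted.
gcloud compute start-iap-tunnel my-instance 5000 \
    --local-host-port=localhost:5000 \
    --zone=us-central1-a
# In a second terminal, the application should answer on the local end of the tunnel:
curl http://localhost:5000/

With the tunnel running in Cloud Shell, Web Preview on port 5000 should then reach the application as well.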
In most TCP client/server communications, the client uses a random general purpose port number for outgoing traffic. However, my client application, which is running inside a Kubernetes cluster, must use a specific port number for outgoing traffic; this is due to requirements by the server.
This normally works fine when the application runs outside the cluster, but inside Kubernetes the source port is modified somewhere along the way from the pod to the worker node (verified with tcpdump on the worker node).
For context, I am using a LoadBalancer Service object. The cluster runs kube-proxy in iptables mode.
So I found that I can achieve this by setting the hostNetwork field to true in the pod's spec.
Not an ideal solution, but it gets the job done.
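For illustration, a minimal pod manifest with that field set could look like the sketch below (the pod name and image are placeholders):

# hostNetwork: true puts the pod in the node's network namespace, so outgoing
# traffic leaves with the node's IP and the source port the application picked.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fixed-source-port-client
spec:
  hostNetwork: true
  containers:
  - name: client
    image: myclient:latest
EOF

The trade-off is that the pod now shares the node's port space, so only one such pod per node can claim a given port.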
I cannot connect to an external mongodb server from my docker swarm cluster.
As I understand it, this is because the cluster uses the overlay network driver. Am I right?
If not, how does the Docker overlay driver work, and how can I connect to an external mongodb server from the cluster?
Q. How does the docker overlay driver work?
I would recommend this good reference for understanding the Docker swarm overlay network and, more generally, Docker's networking architecture.
It states that:
Docker uses embedded DNS to provide service discovery for containers running on a single Docker Engine and tasks running in a Docker Swarm. Docker Engine has an internal DNS server that provides name resolution to all of the containers on the host in user-defined bridge, overlay, and MACVLAN networks.
Each Docker container (or task in Swarm mode) has a DNS resolver that forwards DNS queries to Docker Engine, which acts as a DNS server.
So, in multi-host Docker swarm mode, consider this example setup:
In this example there is a service called myservice made up of two containers. A second service (client) exists on the same network. The client executes two curl operations, for docker.com and myservice.
These are the resulting actions:
DNS queries are initiated by the client for docker.com and myservice.
The container's built-in resolver intercepts the DNS queries on 127.0.0.11:53 and sends them to Docker Engine's DNS server.
myservice resolves to the Virtual IP (VIP) of that service which is internally load balanced to the individual task IP addresses. Container names resolve as well, albeit directly to their IP addresses.
docker.com does not exist as a service name in the mynet network and so the request is forwarded to the configured default DNS server.
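You can watch this resolver at work from inside any container attached to the network; a quick check, assuming a container named client-container whose image ships nslookup (both assumptions):

# resolv.conf inside the container points at Docker's embedded DNS server:
docker exec -it client-container cat /etc/resolv.conf    # nameserver 127.0.0.11
# Service names resolve to the service VIP; external names go to the upstream DNS:
docker exec -it client-container nslookup myservice
docker exec -it client-container nslookup docker.com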
Back to your question:
How can I connect to an external mongodb server from the cluster?
For your external mongodb (let's say it has the DNS name mongodb.mydomain.com), you are in the same situation as the client in the architecture above wanting to reach docker.com, except that you certainly don't want to expose mongodb.mydomain.com to the entire web, so you may have declared it only in your internal cluster DNS server.
Then, how do you tell the Docker engine to use this internal DNS server to resolve mongodb.mydomain.com?
You have to indicate in your docker service that its tasks should use the internal DNS server, like so:
docker service create \
--name myservice \
--network my-overlay-network \
--dns=10.0.0.2 \
myservice:latest
The important thing here is --dns=10.0.0.2. This tells the Docker engine to forward queries to the DNS server at 10.0.0.2:53 whenever it cannot resolve a name internally (i.e. the name is not a service or container on the network).
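To confirm the override took effect, you could exec into one of the service's tasks and resolve the external name, assuming 10.0.0.2 really does know mongodb.mydomain.com (the container name below is a placeholder):

# List the service's task containers running on this node:
docker ps --filter name=myservice
# The embedded resolver should now forward the unknown name to 10.0.0.2:
docker exec -it <task-container> nslookup mongodb.mydomain.com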
Finally, when you say:
I cannot connect to an external mongodb server from my docker swarm cluster. As I understand it, this is because the cluster uses the overlay network driver. Am I right?
I would say no, as there is a built-in mechanism in the Docker engine to forward unknown DNS names coming from the overlay network to the DNS server you want.
Hope this helps!
My application server (an instance with swarm installed) and many clients (Docker containers on different physical nodes) require that the clients connect to the server by hostname, and I don't know how a container can get the IP of the node where swarm runs in order to connect.
I know that I can pass a hosts rule to each Docker container with the hostname and IP of the node where swarm runs (as sketched below), but I don't know how to get that IP, because there are a lot of interfaces with different IPs.
So, generally, I'm looking for something like a way for the Docker client to ask swarm "what is your IP?" so that it can connect back.
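For what it's worth, the hosts-rule approach mentioned above boils down to Docker's --add-host flag; a sketch with placeholder hostname, IP and image:

# Adds a fixed entry to the container's /etc/hosts (values are placeholders):
docker run --add-host swarm-master.example.com:203.0.113.10 myclient:latest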
On a VPS with a static, publicly routable IP, I have a simple web server running (on port 8080) in a container that publishes port 8080 (-p 0.0.0.0:8080:8080).
If I spin up another container on the same box and try to curl <public ip of host>:8080, it resolves the address and tries to connect, but the request just hangs.
From the host's shell (outside containers), curl <public ip of host>:8080 succeeds.
Why is this happening? My feeling is that, somehow, the virtual network cards fail to communicate with each other. Is there a workaround (besides using docker links)?
According to Docker's advanced networking docs (http://docs.docker.io/use/networking/): "Docker uses iptables under the hood to either accept or drop communication between containers."
As such, I believe you would need to set up inbound and outbound routing with iptables. This article gives a solid description of how to do so: http://blog.codeaholics.org/2013/giving-dockerlxc-containers-a-routable-ip-address/
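Before writing rules of your own, it can help to inspect what Docker has already installed in the NAT table; a sketch of the usual commands, run on the host:

# DNAT rules Docker created for published ports (-p ...):
sudo iptables -t nat -L DOCKER -n -v
# The full NAT table, including MASQUERADE rules for container traffic:
sudo iptables -t nat -L -n -v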