Create multiple Docker network connections - networking

On Monday when I got to work I realized that Docker was the tool I had to use to fix some server issues at the company, so all my work this week has been studying Docker and trying to make it work as soon as possible.
So far I understand containers / swarm / etc., but I am still stuck on networking. Basically I need to run 3 different networks under Docker, each with different containers on it.
The 3 networks will be assigned to 3 public IPs provided by the hoster (OVH) (I don't even know yet if this will work, since I only get the VPS tomorrow).
So let's say network 1 will have 3 containers used for production, network 2 will be used for development, and the 3rd network will be used for testing.
Is this possible with Docker?
At the moment I'm running tests on Raspbian (Jessie) with Docker Engine, but like I said, I am still stuck on the whole Docker network setup.

Create the networks
docker network create net1
docker network create net2
docker network create net3
Attach the containers to the desired network
docker run --net=net1 --name=container1 [opts] [image]
or, if the container already exists:
docker network connect net1 container1
If you want to attach a host IP to the container you can just bind a port to it.
Let's say a container runs on port 80:
docker run --name=container1 --net=net1 -p YOUR_IP_ADDR:80:80 [image]
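Applied to the question's scenario, a minimal sketch could look like the following. The network names, image name, and IP addresses (203.0.113.x) are placeholders, and it assumes the three public IPs from the hoster are already configured on the host's network interface:

# One user-defined network per environment
docker network create prod-net
docker network create dev-net
docker network create test-net

# Each container is published only on "its" public IP
docker run -d --name=prod-web --net=prod-net -p 203.0.113.10:80:80 my-web-image
docker run -d --name=dev-web  --net=dev-net  -p 203.0.113.11:80:80 my-web-image
docker run -d --name=test-web --net=test-net -p 203.0.113.12:80:80 my-web-image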

Related

Docker networking

I have a scenario where I want to establish communication between a Docker container running inside a virtual machine and the console/host (the VM is present on that same host). This work is to be done on SUSE Linux. How can this be done?
Create an external NAT network on the host using something like `docker network create`.
Specify this network when creating the container.
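A minimal sketch of that answer, with a network name, subnet, and image chosen purely for illustration:

# A user-defined bridge network; Docker NATs its traffic to the outside via iptables
docker network create --subnet 192.168.100.0/24 vm-bridge

# Run the container on that network and publish a port so the console/host can reach it
docker run -d --name app1 --net vm-bridge -p 8080:80 nginx

Assuming the VM's own IP is reachable from the console, the container is then reachable at VM_IP:8080.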

REST request across networks

Let's say I have two docker networks on the same machine. (Network-1 and Network-2)
On each network, I have containers. (Container-1-Network-1 and Container-1-Network-2 etc.)
I need to send a PUT request from Container-1(172.18.0.x) to Container-2 (172.19.0.x) but I get 'connection refused' because different networks can't communicate with each other. What are my options here? Can I move a container to another network, or merge networks into one or link containers somehow (in docker-compose.yml)?
Thanks.
Ideally, you should add the container to every network where it needs to communicate with other containers, and keep each network isolated from the others. This is the default design of Docker networking.
To add containers to another network, use:
docker network connect $network $container
An easier method when you have lots of containers to manage is to use docker compose to define which networks each container needs to belong to. This automates the docker network connect commands.
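For example, a minimal docker-compose.yml sketch (service, image, and network names are made up for illustration) that attaches one service to both networks:

version: "3"
services:
  api:
    image: my-api-image          # placeholder image
    networks:
      - network-1
      - network-2                # attached to both, so it can reach containers on either network
  worker:
    image: my-worker-image       # placeholder image
    networks:
      - network-2
networks:
  network-1:
  network-2:

Compose creates the networks and connects each container to the ones listed under its service, which is the automated equivalent of running docker network connect by hand.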

Publishing docker swarm mode port only to localhost

I've created a docker swarm with a website inside the swarm, publishing port 8080 to the outside. I want to consume that port with Nginx running outside the swarm on port 80, which will perform server name resolution and host the static files.
The problem is that swarm automatically publishes port 8080 to the internet using iptables, and I don't know whether it is possible to allow only the local nginx instance to use it. Currently users can access the site on both port 80 and port 8080, and the second one is broken (without images).
I tried playing with ufw, but it's not working. Manually changing iptables would also be a nightmare, as I would have to do it on every swarm node after every update. Any solutions?
EDIT: I can't use the same network for the swarm and for nginx outside the swarm, because the overlay network is incompatible with normal, single-host containers. Theoretically I could put nginx into the swarm, but I prefer to keep it separate, on the same host that contains the static files.
No, right now you are not able to bind a published port to a specific IP (not even to 127.0.0.1) or to an interface (like the loopback interface lo). But there are two issues dealing with this problem:
github.com - moby/moby - Assigning service published ports to IP
github.com - moby/moby - docker swarm mode: ports on 127.0.0.1 are exposed to 0.0.0.0
So you could subscribe to them and/or participate in the discussion.
Further reading:
How to bind the published port to specific eth[x] in docker swarm mode
Yes, if the containers are in the same network you don't need to publish ports for containers to access each other.
In your case you can publish port 80 from the nginx container and not publish any ports from the website container. Nginx can still reach the website container on port 8080 as long as both containers are in the same Docker network.
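A minimal single-host sketch of that idea (names and images are placeholders; in swarm mode the equivalent would be docker service create on a shared overlay network):

docker network create proxy-net

# Website container: no ports published to the host
docker run -d --name website --network proxy-net my-website-image

# Nginx publishes only port 80 and reaches the website at http://website:8080 inside proxy-net
docker run -d --name nginx --network proxy-net -p 80:80 nginx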
"Temp" solution that I am using is leaning on alpine/socat image.
Idea:
use additional lightweight container that is running outside of swarm and use some port forwarding tool to (e.g. socat is used here)
add that container to the same network of the swarm service we want to expose only to localhost
expose this helper container at localhost:HOST_PORT:INTERNAL_PORT
use socat of this container to forward trafic to swarm's machine
Command:
docker run --name socat-elasticsearch -p 127.0.0.1:9200:9200 --network elasticsearch --rm -it alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
The -it flags can be removed once you have confirmed that everything works fine for you. Also add -d to run it daemonized.
Daemon command:
docker run --name socat-elasticsearch -d -p 127.0.0.1:9200:9200 --network elasticsearch --rm alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
My use case:
Sometimes I need to access ES directly, so this approach is just fine for me.
I would like to see a native Docker solution, though.
P.S. Docker's auto-restart feature can be used if this needs to stay up and running after a host machine restart.
See restart policy docs here:
https://docs.docker.com/engine/reference/commandline/run/#restart-policies---restart
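For instance, a hedged variant of the daemon command above using a restart policy could look like this (--rm is dropped because it conflicts with --restart):

docker run --name socat-elasticsearch -d -p 127.0.0.1:9200:9200 --network elasticsearch --restart unless-stopped alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200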

Is it possible to isolate docker container in user-defined overlay network from outside internet?

With the new network feature in Docker 1.10 it is possible to create isolated overlay networks, which works very well. Containers in 2 separate networks cannot talk to each other. Is it possible, however, to deny a container in an overlay network access to the public internet? E.g. to make ping 8.8.8.8 fail, while the Docker host stays connected to the internet.
If you add the --internal flag when creating a network with the docker network create command, then that network will not have outbound network access:
docker network create --internal --subnet 10.1.1.0/24 mynetwork
I assume -- but have not tested -- that this works for overlay networks as well as for host-local networks.
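A quick way to check the behavior on a single host (a sketch using the default bridge driver; the alpine image is just a convenient choice):

docker network create --internal --subnet 10.1.1.0/24 mynetwork
# Should fail: an internal network gets no outbound route
docker run --rm --network mynetwork alpine ping -c 1 8.8.8.8
# Should succeed: the default bridge network has outbound access
docker run --rm alpine ping -c 1 8.8.8.8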

Spark SPARK_PUBLIC_DNS and SPARK_LOCAL_IP on stand-alone cluster with docker containers

So far I have run Spark only on Linux machines and VMs (bridged networking), but now I am interested in utilizing more computers as slaves. It would be handy to distribute a Spark slave Docker container to the computers and have them automatically connect themselves to a hard-coded Spark master IP. This sort of works already, but I am having trouble configuring the right SPARK_LOCAL_IP (or the --host parameter for start-slave.sh) on the slave containers.
I think I correctly configured the SPARK_PUBLIC_DNS env variable to match the host machine's network-accessible IP (from the 10.0.x.x address space); at least it is shown on the Spark master web UI and is accessible by all machines.
I have also set SPARK_WORKER_OPTS and Docker port forwards as instructed at http://sometechshit.blogspot.ru/2015/04/running-spark-standalone-cluster-in.html, but in my case the Spark master is running on another machine and not inside Docker. I am launching Spark jobs from another machine within the network, possibly also running a slave itself.
Things that I've tried:
Not configuring SPARK_LOCAL_IP at all: the slave binds to the container's IP (like 172.17.0.45), cannot be connected to from the master or driver, and computation still works most of the time but not always
Binding to 0.0.0.0: the slaves talk to the master and establish some connection, but it dies, another slave shows up and goes away, and they keep looping like this
Binding to the host IP: startup fails because that IP is not visible within the container, although it would be reachable by others since port forwarding is configured
I wonder why the configured SPARK_PUBLIC_DNS isn't being used when connecting to the slaves. I thought SPARK_LOCAL_IP would only affect local binding and not be revealed to external computers.
At https://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/connectivity_issues.html they instruct you to "set SPARK_LOCAL_IP to a cluster-addressable hostname for the driver, master, and worker processes"; is this the only option? I would like to avoid the extra DNS configuration and just use IPs to configure traffic between the computers. Or is there an easy way to achieve this?
Edit:
To summarize the current set-up:
The master is running on Linux (a VM in VirtualBox on Windows with bridged networking)
The driver submits jobs from another Windows machine; this works great
The Docker image for starting up slaves is distributed as a "saved" .tar.gz file, loaded (curl xyz | gunzip | docker load) and started on other machines within the network; it has this problem with private/public IP configuration
I am also running Spark in containers on different Docker hosts. Starting the worker container with these arguments worked for me:
docker run \
-e SPARK_WORKER_PORT=6066 \
-p 6066:6066 \
-p 8081:8081 \
--hostname $PUBLIC_HOSTNAME \
-e SPARK_LOCAL_HOSTNAME=$PUBLIC_HOSTNAME \
-e SPARK_IDENT_STRING=$PUBLIC_HOSTNAME \
-e SPARK_PUBLIC_DNS=$PUBLIC_IP \
spark ...
where $PUBLIC_HOSTNAME is a hostname reachable from the master.
The missing piece was SPARK_LOCAL_HOSTNAME, an undocumented option AFAICT.
https://github.com/apache/spark/blob/v2.1.0/core/src/main/scala/org/apache/spark/util/Utils.scala#L904
I'm running 3 different types of Docker containers on my machine, with the intention of deploying them into the cloud once all the software we need has been added to them: Master, Worker and Jupyter notebook (with Scala, R and Python kernels).
Here are my observations so far:
Master:
I couldn't make it bind to the Docker host IP. Instead, I pass in a made-up domain name: -h "dockerhost-master" -e SPARK_MASTER_IP="dockerhost-master". I couldn't find a way to make Akka bind to the container's IP but accept messages against the host IP. I know it's possible with Akka 2.4, but maybe not with Spark.
I'm also passing in -e SPARK_LOCAL_IP="${HOST_IP}", which causes the Web UI to bind to that address instead of the container's IP, though the Web UI works all right either way.
Worker:
I gave the worker container a different hostname and pass it as --host to Spark's org.apache.spark.deploy.worker.Worker class. It can't be the same as the master's or the Akka cluster will not work: -h "dockerhost-worker"
I'm using Docker's add-host so the container is able to resolve the hostname to the master's IP: --add-host dockerhost-master:${HOST_IP}
The master URL that needs to be passed is spark://dockerhost-master:7077
Jupyter:
This one needs the master URL and add-host to be able to resolve it
The SparkContext lives in the notebook and that's where the web UI of the Spark Application is started, not the master. By default it binds to the internal IP address of the Docker container. To change that I had to pass in: -e SPARK_PUBLIC_DNS="${VM_IP}" -p 4040:4040. Subsequent applications from the notebook would be on 4041, 4042, etc.
With these settings the three components are able to communicate with each other. For the moment I'm using custom startup scripts with spark-class to launch the classes in the foreground and keep the Docker containers from quitting.
There are a few other ports that could be exposed, such as the history server, which I haven't encountered yet. Using --net host seems much simpler.
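Putting those observations together, a condensed sketch of the master and worker commands might look like the following (HOST_IP, the "spark" image, and the startup scripts are placeholders for the custom setup described above; 7077, 8080 and 8081 are the standard master, master-UI and worker-UI ports):

# Master: bound to a made-up hostname, ports published on the host
docker run -d -h dockerhost-master \
  -e SPARK_MASTER_IP=dockerhost-master \
  -e SPARK_LOCAL_IP=${HOST_IP} \
  -p 7077:7077 -p 8080:8080 \
  spark /custom/start-master.sh

# Worker: its own hostname, plus add-host so it can resolve the master
docker run -d -h dockerhost-worker \
  --add-host dockerhost-master:${HOST_IP} \
  -p 8081:8081 \
  spark /custom/start-worker.sh spark://dockerhost-master:7077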
I think I found a solution for my use-case (one Spark container / host OS):
Use --net host with docker run => host's eth0 is visible in the container
Set SPARK_PUBLIC_DNS and SPARK_LOCAL_IP to the host's IP, and ignore docker0's 172.x.x.x address
Spark binds to the host's IP and other machines communicate with it as well; port forwarding takes care of the rest. No DNS or other complex configuration was needed. I haven't thoroughly tested this, but so far so good.
Edit: Note that these instructions are for Spark 1.x; with Spark 2.x only SPARK_PUBLIC_DNS is required, and I think SPARK_LOCAL_IP is deprecated.
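A minimal sketch of that approach (the "spark" image, the start script, and MASTER_IP are placeholders; HOST_IP stands for the host's routable address):

# Host networking: the container sees the host's eth0 directly, so no port mapping is needed
docker run -d --net host \
  -e SPARK_PUBLIC_DNS=${HOST_IP} \
  -e SPARK_LOCAL_IP=${HOST_IP} \
  spark /custom/start-slave.sh spark://MASTER_IP:7077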

Resources