Docker networking

I have a scenario where I want to establish communication between a Docker container running inside a virtual machine and the host console (the VM runs on that same host). This work is to be done on SUSE Linux. How can this be done?

Create an external NAT network on the host using something like `docker network create`.
Specify this network when creating the container.
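A minimal sketch of what that can look like on a Linux host (the network name vm-ext, the subnet, and the nginx image are placeholders, not from the question):
# Create a user-defined bridge network; Docker NATs it to the outside through the VM's interface
docker network create --driver bridge --subnet 192.168.100.0/24 vm-ext
# Attach the container to it and publish a port so it is reachable from outside the VM
docker run -d --network vm-ext --name web -p 8080:80 nginx
Anything that can reach the VM's IP (including the host console) can then talk to the container on the published port.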

Related

How to access instance from outside?

I installed OpenStack using Packstack, but I don't know how to configure the network.
I want to access my instance from outside using SSH.
Is it possible to do this?
By default, Packstack creates an "external" network that is entirely contained inside the host. This is great for creating a cloud without any knowledge of the network environment, but prevents you from accessing instances from outside.
Configure your Packstack so that its external network is your network. This is documented at https://www.rdoproject.org/networking/neutron-with-existing-external-network/.
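For reference, these are roughly the kinds of commands that guide walks through; the interface name (eth0), the address range, and the exact option names are assumptions that should be checked against your Packstack/Neutron version:
# Install with the external flat network mapped to a real host interface
packstack --allinone --provision-demo=n \
--os-neutron-ovs-bridge-mappings=extnet:br-ex \
--os-neutron-ovs-bridge-interfaces=br-ex:eth0 \
--os-neutron-ml2-type-drivers=vxlan,flat
# Create the external network and a subnet that matches your LAN
neutron net-create external_network --provider:network_type flat \
--provider:physical_network extnet --router:external
neutron subnet-create --name public_subnet --enable_dhcp=False \
--allocation-pool=start=192.168.1.100,end=192.168.1.120 \
--gateway=192.168.1.1 external_network 192.168.1.0/24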
Actually, without an external network it's not possible. Your instance needs an external (public) IP. That's how I do it, but you have other options, such as a proxy.
If your CentOS 7 machine can connect to your instance:
Use HAProxy on the CentOS machine to proxy the instance's SSH port to a specific CentOS 7 port, and connect using that port. I'm using this approach on MicroStack.
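A minimal haproxy.cfg sketch of that idea (the listen port 2222 and the instance address 10.0.0.5 are made-up values for illustration; global/defaults sections omitted):
# Forward TCP connections hitting the CentOS host on port 2222 to the instance's SSH port
frontend ssh_in
    bind *:2222
    mode tcp
    default_backend instance_ssh
backend instance_ssh
    mode tcp
    server instance1 10.0.0.5:22 check
Then ssh -p 2222 user@<centos-host> lands on the instance.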

Create multiple Docker network connections

On Monday when I got to work I realized that Docker was something I had to use to fix some server issues in the company at the moment. So since this week all my work has been studying Docker and trying to make it work as soon as possible.
So far I have understood containers / swarm / etc., but I am still stuck on the networking. Basically I need to run 3 different networks under Docker with different containers on each.
The 3 networks will be assigned to 3 public IPs provided by the hoster (OVH) (I don't even know if it will work, since I'll only get the VPS tomorrow).
So let's say on network 1 there will be 3 containers to be used as production, network 2 will be used for development, and the 3rd network will be used for testing.
Is this possible with Docker?
ATM I'm running tests on Raspbian (Jessie) using Docker Engine, but like I said, I am still stuck on the whole Docker network interface.
Create the networks
docker network create net1
docker network create net2
docker network create net3
Attach the containers to the desired network
docker run --net=net1 --name=container1 [opts] [image]
or, if the container already exists:
docker network connect net1 container1
If you want to attach a host IP to the container you can just bind a port to it.
Let's say a container runs on port 80:
docker run --name=container1 --net=net1 -p YOUR_IP_ADDR:80:80 [image]
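Putting it together for the three environments described in the question, a rough sketch (the 203.0.113.x addresses are placeholders for the public IPs OVH assigns to the host):
# One network per environment
docker network create prod-net
docker network create dev-net
docker network create test-net
# Bind each environment's published port to a different public IP configured on the host
docker run -d --name=prod-web --net=prod-net -p 203.0.113.10:80:80 [image]
docker run -d --name=dev-web --net=dev-net -p 203.0.113.11:80:80 [image]
docker run -d --name=test-web --net=test-net -p 203.0.113.12:80:80 [image]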

Spark SPARK_PUBLIC_DNS and SPARK_LOCAL_IP on stand-alone cluster with docker containers

So far I have run Spark only on Linux machines and VMs (bridged networking), but now I am interested in utilizing more computers as slaves. It would be handy to distribute a Spark slave Docker container on computers and have them automatically connect to a hard-coded Spark master IP. This sort of works already, but I am having trouble configuring the right SPARK_LOCAL_IP (or --host parameter for start-slave.sh) on the slave containers.
I think I correctly configured the SPARK_PUBLIC_DNS env variable to match the host machine's network-accessible IP (from the 10.0.x.x address space); at least it is shown on the Spark master web UI and is accessible by all machines.
I have also set SPARK_WORKER_OPTS and Docker port forwards as instructed at http://sometechshit.blogspot.ru/2015/04/running-spark-standalone-cluster-in.html, but in my case the Spark master is running on another machine and not inside Docker. I am launching Spark jobs from another machine within the network, possibly also running a slave itself.
Things that I've tried:
Not configuring SPARK_LOCAL_IP at all: the slave binds to the container's IP (like 172.17.0.45), cannot be connected to from the master or driver, and computation still works most of the time but not always
Binding to 0.0.0.0: the slaves talk to the master and establish some connection, but it dies, another slave shows up and goes away, and they keep looping like this
Binding to the host IP: startup fails as that IP is not visible within the container, although it would be reachable by others since port forwarding is configured
I wonder why the configured SPARK_PUBLIC_DNS isn't being used when connecting to slaves? I thought SPARK_LOCAL_IP would only affect local binding and not be revealed to external computers.
At https://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/connectivity_issues.html they instruct you to "set SPARK_LOCAL_IP to a cluster-addressable hostname for the driver, master, and worker processes"; is this the only option? I would like to avoid the extra DNS configuration and just use IPs to configure traffic between computers. Or is there an easy way to achieve this?
Edit:
To summarize the current set-up:
Master is running on Linux (VM at VirtualBox on Windows with bridged networking)
Driver submits jobs from another Windows machine, works great
Docker image for starting up slaves is distributed as a "saved" .tar.gz file, loaded (curl xyz | gunzip | docker load) and started on other machines within the network, and has this problem with private/public IP configuration
I am also running Spark in containers on different Docker hosts. Starting the worker container with these arguments worked for me:
docker run \
-e SPARK_WORKER_PORT=6066 \
-p 6066:6066 \
-p 8081:8081 \
--hostname $PUBLIC_HOSTNAME \
-e SPARK_LOCAL_HOSTNAME=$PUBLIC_HOSTNAME \
-e SPARK_IDENT_STRING=$PUBLIC_HOSTNAME \
-e SPARK_PUBLIC_DNS=$PUBLIC_IP \
spark ...
where $PUBLIC_HOSTNAME is a hostname reachable from the master.
The missing piece was SPARK_LOCAL_HOSTNAME, an undocumented option AFAICT.
https://github.com/apache/spark/blob/v2.1.0/core/src/main/scala/org/apache/spark/util/Utils.scala#L904
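For completeness, one hedged way $PUBLIC_HOSTNAME and $PUBLIC_IP could be set on the Docker host before running that command (how you derive them depends on your environment):
# Use the host's FQDN and its first routable address; adjust for your network
PUBLIC_HOSTNAME=$(hostname -f)
PUBLIC_IP=$(hostname -I | awk '{print $1}')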
I'm running 3 different types of Docker containers on my machine, with the intention of deploying them into the cloud when all the software we need is added to them: Master, Worker, and Jupyter notebook (with Scala, R, and Python kernels).
Here are my observations so far:
Master:
I couldn't make it bind to the Docker host IP. Instead, I pass a made-up domain name to it: -h "dockerhost-master" -e SPARK_MASTER_IP="dockerhost-master". I couldn't find a way to make Akka bind to the container's IP but accept messages addressed to the host IP. I know it's possible with Akka 2.4, but maybe not with Spark.
I'm passing in -e SPARK_LOCAL_IP="${HOST_IP}" which causes the Web UI to bind against that address instead of the container's IP, but the Web UI works all right either way.
Worker:
I gave the worker container a different hostname and pass it as --host to Spark's org.apache.spark.deploy.worker.Worker class. It can't be the same as the master's or the Akka cluster will not work: -h "dockerhost-worker"
I'm using Docker's add-host so the container is able to resolve the hostname to the master's IP: --add-host dockerhost-master:${HOST_IP}
The master URL that needs to be passed is spark://dockerhost-master:7077
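Putting those worker settings together, a rough sketch of the kind of docker run invocation described above (the image name and ${HOST_IP} are placeholders):
docker run -d \
-h "dockerhost-worker" \
--add-host dockerhost-master:${HOST_IP} \
my-spark-image \
spark-class org.apache.spark.deploy.worker.Worker spark://dockerhost-master:7077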
Jupyter:
This one needs the master URL and add-host to be able to resolve it
The SparkContext lives in the notebook and that's where the web UI of the Spark Application is started, not the master. By default it binds to the internal IP address of the Docker container. To change that I had to pass in: -e SPARK_PUBLIC_DNS="${VM_IP}" -p 4040:4040. Subsequent applications from the notebook would be on 4041, 4042, etc.
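A similar sketch for the notebook container (hypothetical image name; ${HOST_IP} and ${VM_IP} as in the flags above; 8888 is Jupyter's usual port):
docker run -d \
--add-host dockerhost-master:${HOST_IP} \
-e SPARK_PUBLIC_DNS="${VM_IP}" \
-p 4040:4040 -p 8888:8888 \
my-jupyter-spark-image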
With these settings the three components are able to communicate with each other. I'm using custom startup scripts with spark-class to launch the classes in the foreground and keep the Docker containers from quitting, for the moment.
There are a few other ports that could be exposed, such as the history server, which I haven't encountered yet. Using --net host seems much simpler.
I think I found a solution for my use-case (one Spark container / host OS):
Use --net host with docker run => host's eth0 is visible in the container
Set SPARK_PUBLIC_DNS and SPARK_LOCAL_IP to the host's IP, and ignore docker0's 172.x.x.x address
Spark can bind to the host's IP and other machines can communicate with it as well; port forwarding takes care of the rest. DNS or any complex configs were not needed. I haven't thoroughly tested this, but so far so good.
Edit: Note that these instructions are for Spark 1.x; with Spark 2.x only SPARK_PUBLIC_DNS is required, and I think SPARK_LOCAL_IP is deprecated.
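A minimal sketch of that --net host approach (the IPs 10.0.0.42 for the slave host and 10.0.0.40 for the master, plus the image name, are made up):
# The container shares the host's network stack, so Spark binds to the host's eth0 directly
docker run -d --net host \
-e SPARK_LOCAL_IP=10.0.0.42 \
-e SPARK_PUBLIC_DNS=10.0.0.42 \
my-spark-slave-image \
spark-class org.apache.spark.deploy.worker.Worker spark://10.0.0.40:7077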

Accessing other machines on a vagrant/virtual box host only network from within a docker container

I have a lab environment I have set up using Vagrant. It consists of 3 machines: two application servers with Docker 1.4.1 installed and one database server with Postgres 9.3 installed. All machines are running CentOS 6.6.
The environment is set up using a host-only private network. Here's the Vagrantfile:
Vagrant.configure('2') do |config|
  config.vm.box = "centos-6.0-updated"
  {
    'db'   => '10.17.33.10',
    'app1' => '10.17.33.11',
    'app2' => '10.17.33.12',
  }.each do |short_name, ip|
    config.vm.define short_name do |host|
      host.vm.network 'private_network', ip: ip
      host.vm.hostname = "#{short_name}.my.dev"
    end
  end
end
I'm finding that when I'm inside a container on app1 or app2 I cannot access the db server. I believe the issue is that Vagrant/VirtualBox's host-only private network uses addresses in the 127.0.0.x range. On the host, Vagrant configures the loopback interface to handle sending requests to each machine on the network. However, in the container this interface is not configured, so the container treats all 127.0.0.x requests as requests for localhost and just sends them back to itself.
Is there any alternative configuration I can set up on the Vagrant side, or the Docker side, that will alleviate this issue? In short, I want a Vagrant environment where containers on my app servers can talk to the db server. Note the db is installed directly on the db host, not running inside a Docker container. Also, this is meant to mimic a production environment that will not use Vagrant, so I'd want any Docker changes to work in a more normal networking scenario as well.
By default VirtualBox private networks are host-only, so one VM can't see another. You can change this to use a VirtualBox internal network instead via the virtualbox__intnet setting, so you'd need a line like host.vm.network "private_network", ip: ip, virtualbox__intnet: true
There's a bit more info on this here: http://docs.vagrantup.com/v2/virtualbox/networking.html
Note that this is VirtualBox specific. If this needs to also work with a different provider, I think you're going to need to use a public network in order to get the necessary bridging, and take into account all the related issues with securing that. See http://docs.vagrantup.com/v2/networking/public_network.html.
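Applied to the Vagrantfile above, the change would look roughly like this (VirtualBox provider only):
config.vm.define short_name do |host|
  # virtualbox__intnet puts the VMs on an internal network they can all reach
  host.vm.network 'private_network', ip: ip, virtualbox__intnet: true
  host.vm.hostname = "#{short_name}.my.dev"
end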

PCI passthrough strategy in Docker or oVirt

We have to deploy a test system where a Docker container or a VM (oVirt 3.5) shares up to 4x 10Gb network cards with other containers/VMs.
So far we are using just oVirt for this purpose but we would like to shift to a Dockerized system to save some resources on the machines.
Does anybody have some experience or suggestion?
Docker containers are really just processes; Docker can run each of them in a separate network namespace (the default) or let them use the host's network directly (--net=host).
If they run in a separate network namespace, they won't have any direct access to the host's network cards; in the default config (--net=bridge) they are NAT-networked via a Linux bridge, so if that matches your requirements, you're away.
Link to Docker docs on networking
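For illustration, a minimal sketch of the --net=host case (placeholder image name; the image needs iproute2 installed to run ip addr):
# The container shares the host's network namespace, so the 10Gb NICs show up inside it
docker run --rm --net=host my-test-image ip addr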
