Use host networking and additional networks in docker compose - networking

I'm trying to set up a dev environment for my project.
I have a container (ms1) which should be put in its own network ("services" in my case), and a container (apigateway) which should access that network while also exposing an HTTP port to the host's network.
Ideally my docker compose file would look like this:
version: '2'
services:
  ms1:
    expose:
      - "13010"
    networks:
      services:
        aliases:
          - ms1
  apigateway:
    networks:
      services:
        aliases:
          - api
    network_mode: "host"
networks:
  services:
docker-compose doesn't allow using network_mode and networks at the same time.
Do I have other alternatives?
At the moment I'm using this:
apigateway:
  networks:
    services:
      aliases:
        - api
  ports:
    - "127.0.0.1:10000:13010"
and then the apigateway container listens on 0.0.0.0:13010. It works, but it is slow and it freezes if the host's internet connection goes down.
Also, I'm planning on using Vagrant on top of Docker in the future; does that allow solving this in a clean way?

expose in docker-compose does not publish the port on the host. Since you probably don't need service linking anymore (instead you should rely on Docker networks as you do already), the option has limited value in general and seems to provide no value at all to you in your scenario.
I suspect you've come to use it by mistake and, after realizing that it didn't seem to have any effect by itself, stumbled upon the fact that using the host network driver would "make it work". This had nothing to do with the expose property, mind you. It's just that the host network driver lets contained processes bind to the host network interface directly. Thanks to this, you could reach the API gateway process from the outside. You could remove the expose property and it would still work.
If this is the only reason why you picked the host network driver, then you've fallen victim to the X-Y problem.
TL;DR: You should never need to use the host network driver in normal situations; the default bridge network driver works just fine. What you're looking for is the ports property, not expose. This sets up the appropriate port forwarding behind the scenes.
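A rough sketch for the gateway service from your question (assuming the process keeps listening on 13010 inside the container and that host port 10000 is the one you want; both numbers are taken from your current workaround):
apigateway:
  networks:
    services:
      aliases:
        - api
  ports:
    # publish host port 10000 -> container port 13010 over the default bridge driver
    - "10000:13010"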

In Docker 1.13 you should be able to create a service to bridge between the two networks. I'm using something similar to fix another problem and I think it could also help here:
docker service create \
  --name proxy \
  --network proxy \
  --publish mode=host,target=80,published=80 \
  --publish mode=host,target=443,published=443 \
  --constraint 'node.hostname == myproxynode' \
  --replicas 1 \
  letsnginx
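If you'd rather keep this in a compose file than run docker service create by hand, the long ports syntax should express the same host-mode publishing. This is only a sketch; it assumes Compose file format 3.2 or later and that the stack is deployed to a swarm:
version: '3.2'
services:
  proxy:
    image: letsnginx
    networks:
      - proxy
    ports:
      - target: 80        # port inside the container
        published: 80     # port on the node
        protocol: tcp
        mode: host        # publish on the node directly, bypassing the ingress mesh
      - target: 443
        published: 443
        protocol: tcp
        mode: host
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == myproxynode
networks:
  proxy:
    external: true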

I would try this:
1/ Find the host network
docker network ls
2/ Use this docker-compose file
services:
  ms1:
    ports:
      - "13010"
    networks:
      - service
  apigateway:
    networks:
      - front
      - service
networks:
  front:
  service:
    external:
      name: "<ID of the network>"

Related

docker-compose: How can I isolate a docker container from "outside" (jenkins container + nginx reverse proxy)

As a training mockup, I am trying to set up a jenkins instance behind an nginx reverse proxy that also ensures https.
So I create one container for nginx and one for jenkins. I have succeeded, including the nginx configuration with (self-signed) certificates.
I can reach the jenkins instance using https and the nginx container IP from my machine.
But my final goal is to completely isolate the jenkins container so that it cannot be reached at all from "outside", and this is not achieved.
Since the default port declared in the official image is 8080, I can still reach the jenkins instance using the jenkins container's IP and port 8080.
I had made a first setup through an ansible playbook using docker containers and it worked well.
But I cannot obtain the same behavior with docker-compose.
Here is the docker-compose file I wrote.
version: "3.5"
services:
revproxy:
image: nginx:alpine
depends_on:
- jenkins_ci
networks:
- proxy
ports:
- "90:8080"
- "443:443"
volumes:
- /home/vagrant/dockerResources/etc/certs:/etc/nginx/certs
- /home/vagrant/dockerResources/etc/nginx/conf.d/reverse_proxy.conf:/etc/nginx/conf.d/reverse_proxy.conf
jenkins_ci:
image: jenkins/jenkins:lts
networks:
- proxy
networks:
proxy:
name: revProxy
internal: yes
When inspecting the jenkins_ci container, I can find its IP and direct my browser to this IP with port 8080. That's exactly what I don't want to be able to do. I would like the jenkins container to be reachable only through the nginx reverse proxy address.
If someone could give me a hint.
I finally found a solution to my problem.
Knowing that the jenkins image's default exposed port is 8080, I have set:
ports:
  - "8080"
on the jenkins_ci service definition in the docker-compose file.
Now, I can see that there is no more IP address for jenkins_ci container. Nevertheless, it remains accessible from another container (here, the nginx one) thanks to the service name (so jenkins_ci).
I found the solution within this question.
Unfortunately, declaring only the exposed port number without its public counterpart in order to hide the exposed port (and thus the IP, if no port at all is publicly exposed) is not mentioned in the docker-compose ports syntax section.
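For reference, a sketch of what the jenkins_ci service from the compose file above looks like with that change (only this service shown, everything else unchanged):
  jenkins_ci:
    image: jenkins/jenkins:lts
    networks:
      - proxy
    ports:
      - "8080"   # container port only, no host port is published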

Docker-compose container using host DNS server

I'm running several containers on my "Ubuntu 16.10 Server" in a "custom" bridge network with compose 2.9 (in a yml version 2.1). Most of my containers are internally using the same ports, so there is no way for me to use the "host" network driver.
My containers are all linked together using the dedicated links attribute.
But I also need to access services exposed outside of my containers. These services have dedicated URLs, with names registered in my company's DNS server.
While I have no problem using public DNS and reaching any public service from within my containers, I just can't reach my private DNS.
Do you know a working solution to use a private DNS from a container? Or, even better, to use the host's DNS configuration?
PS: Of course, I can link to my company's services using the extra_hosts attribute in my services in my docker-compose.yml file. But... that's definitely not the point of having a DNS. I don't want to register all my services in my YML file, and I don't want to update it each time a service's IP changes in my company.
Context:
Host: Ubuntu 16.10 server
Docker Engine: 1.12.6
Docker Compose: 1.9.0
docker-compose.yml: 2.1
Network: Own bridge.
docker-compose.yml file (extract):
version: '2.1'
services:
  nexus:
    image: sonatype/nexus3:$NEXUS_VERSION
    container_name: nexus
    restart: always
    hostname: nexus.$URL
    ports:
      - "$NEXUS_81:8081"
      - "$NEXUS_443:8443"
    extra_hosts:
      - "repos.private.network:192.168.200.200"
    dns:
      - 192.168.3.7
      - 192.168.111.1
      - 192.168.10.5
      - 192.168.10.15
    volumes_from:
      - nexus-data
    networks:
      - pic
networks:
  pic:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1
I tried with and without the ipam configuration for the pic network, without any luck.
Tests & Results:
docker exec -ti nexus curl repos.private.network
properly returns the HTML page served by this service
docker exec -ti nexus curl another-service.private.network
Returns curl: (6) Could not resolve host: another-service.private.network; Name or service not known
While curl another-service.private.network from the host returns the appropriate HTML page.
And "of course" another-service.private.network is known in my 4 DNS servers (192.168.3.7, 192.168.111.1, 192.168.10.5, 192.168.10.15).
You don't specify which environment you're running docker-compose in, e.g. Mac, Windows or Unix, so it will depend a little bit on what changes are needed. You also don't specify whether you're using the default bridge network in Docker or a user-created bridge network.
In either case, by default, Docker should try and map DNS resolution from the Docker Host into your containers. So if your Docker Host can resolve the private DNS addresses, then in theory your containers should be able to as well.
I'd recommend reading the official Docker DNS documentation as it is pretty reasonable: here for the default Docker bridge network, here for user-created bridge networks.
A slight gotcha is that if you're running Docker for Mac, Docker Machine or Docker for Windows, you need to remember that your Docker Host is actually the VM running on your machine, not the physical box itself, so you need to ensure that the VM has the correct DNS resolution options set. You will need to restart your containers for changes to DNS resolution to be picked up by them.
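If the host resolves the private names but the containers don't, one daemon-level option is to hand the DNS servers to every container via the daemon configuration. A minimal sketch, assuming the usual Linux path /etc/docker/daemon.json and the first two servers from the question (restart the Docker daemon afterwards):
{
  "dns": ["192.168.3.7", "192.168.111.1"]
}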
You can of course override all the default settings using docker-compose. It has full options for explicitly setting DNS servers, DNS search options etc. As an example:
version: '2'
services:
  application:
    dns:
      - 8.8.8.8
      - 4.4.4.4
      - 192.168.9.45
You'll find the documentation for those features here.

How to expose a Docker network to the host machine?

Consider the following docker-compose.yml
version: '2'
services:
  serv1:
    build: .
    ports:
      - "8080:8080"
    links:
      - serv2
  serv2:
    image: redis
    ports:
      - "6379:6379"
I am forwarding the ports to the host in order to manage my services, but the services can access each other simply using the default docker network. For example, a program running on serv1 could access redis:6379 and some DNS magic will make that work. I would like to add my host to this network so that I can access each container's ports via its hostname:port.
You can accomplish this by running a DNS proxy (like dnsmasq) in a container that is on the same network as the application. Then point your host's DNS at the container IP, and you'll be able to resolve hostnames as if you were in the container on the network.
https://github.com/hiroshi/docker-dns-proxy is one example of this.
If you need a quick workaround to access a container:
Get the container IP:
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
172.19.0.9
If you need to use the container name, add it to your /etc/hosts.
# /etc/hosts
172.19.0.9 container_name
I am not sure if I understand you correctly. You want, e.g., your redis server to be accessible not only from containers that are in the same network, but also from outside the container using your host IP address?
To accomplish that you have to use the expose option as described here: https://docs.docker.com/compose/compose-file/#/expose
expose:
  - "6379"
So
ports:
  - "6379:6379"
expose:
  - "6379"
should do the trick.
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number.
from https://docs.docker.com/engine/reference/builder/#expose
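For example, publishing the Redis port under a different number on the host would look like this in the compose file (just a sketch; 16379 is an arbitrary host port):
ports:
  - "16379:6379"   # host port 16379 -> container port 6379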
Just modify the hosts file on your host machine to add the container entries.
Example:
127.0.0.1 container1
127.0.0.1 container2
127.0.0.1 container3
This assumes that the binding of the ports has already been done.

Docker Container to Host Routing

I need a better, up-to-date solution to the following problem:
Problem: I have to manually create an iptables rule in order to allow a route from a dynamically created docker bridge to the host. Otherwise container a cannot connect to container b, because by default there is no route from a docker network to the docker host itself.
I have the following setup:
container-nginx (docker)
|
|-container-jira (docker) (https://jira.example.com)
|-container-confluence (docker) (https://confluence.example.com)
In order to have properly functioning Atlassian application links between Jira and Confluence:
Jira accesses Confluence over https://confluence.example.com
Confluence accesses Jira over https://jira.example.com
I use docker-compose for the whole setup and all containers are inside the same network. By default this will not work; I will get "no route to host" in both containers for the hosts confluence.example.com and jira.example.com, because containers inside the docker network have no route to the docker host itself.
Currently, each time the setup is initialized I manually create an iptables rule from the dynamically created docker bridge with id "br-wejfiweji" to the host.
This is cumbersome, is there "a new way" or "better way" to do this in Docker 1.11.x?
docker-compose version 2 does create a network which allows all containers to see each other. See "Networking in Compose" (since docker 1.10)
If your containers are created with the right hostname, that is jira.example.com and confluence.example.com (see docker-compose.yml hostname directive), nginx can proxy-pass directly to jira.example.com and confluence.example.com.
Those two hostname will resolve to the right IP address within the network created by docker-compose for those 3 (nginx, jira and confluence) containers.
I suggested in the comments to use an alias, so that jira sees confluence as nginx (nginx being aliased to confluence.example.com), in order for jira to always go through nginx when accessing confluence.
version: '2'
services:
  # HTTPS-ReverseProxy
  nginx:
    image: blacklabelops/nginx
    container_name: nginx
    networks:
      default:
        aliases:
          - 'crucible.example.com'
          - 'confluence.example.com'
          - 'crowd.example.com'
          - 'bitbucket.example.com'
          - 'jira.example.com'
    ports:
      - '443:443'

Docker container cannot resolve request to service in another container

I'm running gitlab-ce and gitlab-ci-multi-runner in separated docker containers, but on the same server.
Gitlab CE works fine, I can access it via browser and clone projects using both http and ssh.
However my runner cannot connect to Gitlab using the domain/server IP. It can connect to it only via the local docker network (for example using an IP address like 172.17.0.X or, if linked, by using the service alias).
Pinging the domain/server IP returns a response.
I tried to link it as gitlab:example.domain.com but that didn't work, as somehow the runner resolved the server IP address instead of the local network address:
Checking for builds... failed: couldn't execute POST against http://example.domain.com/ci/api/v1/builds/register.json: Post http://example.domain.com/ci/api/v1/builds/register.json: dial tcp server.ip:80: i/o timeout
Edit:
docker-compose.yml
gitlab:
  image: gitlab/gitlab-ce:8.2.2-ce.0
  hostname: domain.name
  privileged: true
  volumes:
    - ./gitlab-config:/etc/gitlab
    - ./gitlab-data:/var/opt/gitlab
    - ./gitlab-logs:/var/log/gitlab
  restart: always
  ports:
    - server.ip:22:22
    - server.ip:80:80
    - server.ip:443:443
runner:
  image: gitlab/gitlab-runner:alpine
  restart: always
  volumes:
    - ./runner-config:/etc/gitlab-runner
    - /var/run/docker.sock:/var/run/docker.sock
I have no clue what the issue is here.
I'd appreciate your help.
Thanks in advance! :)
Seems like it was a firewall problem. Unblocking the docker0 interface allowed traffic from the containers :)
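For anyone hitting the same thing: "unblocking docker0" here means allowing inbound traffic from the docker0 bridge in the host firewall. One way to do that (a sketch only; the exact tooling depends on your host setup) is:
iptables -I INPUT -i docker0 -j ACCEPT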
