Mailcow setup behind Traefik Proxy causes https certificate error - nginx

I am trying to set up a mailcow installation behind a Traefik proxy. Apparently, Traefik is not able to recognize the nginx-mailcow container in its network and hence does not create a certificate for the HTTPS connection. So when I bring up the mailcow stack using docker-compose up, I can reach the mailcow services, but only over an insecure connection (HTTP), and the browser warns that the connection is not secure.
When I check my acme.json file from Traefik, I cannot find any certificate for the mailcow domain, i.e. mail.tld.com.
I have the following setup:
Logs of affected containers:
Traefik Container Logs:
time="2020-04-18T13:40:35+02:00" level=error msg="accept tcp [::]:80: use of closed network connection" entryPointName=http
time="2020-04-18T13:40:35+02:00" level=error msg="accept tcp [::]:443: use of closed network connection" entryPointName=https
time="2020-04-18T13:40:35+02:00" level=error msg="close tcp [::]:80: use of closed network connection" entryPointName=http
time="2020-04-18T13:40:35+02:00" level=error msg="close tcp [::]:443: use of closed network connection" entryPointName=https
time="2020-04-18T13:40:35+02:00" level=error msg="Cannot connect to docker server context canceled" providerName=docker
time="2020-04-18T13:40:37+02:00" level=info msg="Configuration loaded from file: /traefik.yml"
time="2020-04-19T00:27:31+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" container=nginx-mailcow-mailcowdockerized-5f3a25b43c42fd85df675d2d9682b6053501844c2cfe15b7802cf918df138025 providerName=docker
time="2020-04-19T00:33:32+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" providerName=docker container=nginx-mailcow-mailcowdockerized-f4d41ee79e382b413e04b039b5fc91e1c6217c78740245c8666373fe2d6a9b23
2020/04/19 00:39:44 reverseproxy.go:445: httputil: ReverseProxy read error during body copy: unexpected EOF
time="2020-04-19T00:50:32+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" providerName=docker container=nginx-mailcow-mailcowdockerized-915f80e492c2c22917d0af81add1dde15577173c82cc928b0b6101c8a260adc5
time="2020-04-19T00:58:43+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" container=nginx-mailcow-mailcowdockerized-852985c4efc48559ca3568b1829e31b46eb9f968fc328a8566e3dc6ab6f1af21 providerName=docker
time="2020-04-19T02:02:39+02:00" level=error msg="Error while Peeking first byte: read tcp 172.21.0.2:80->208.91.109.90:55153: read: connection reset by peer"
time="2020-04-19T08:11:32+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" providerName=docker container=nginx-mailcow-mailcowdockerized-840ef4db0ccc9fa84038dc7a52133779926dba4c51554516c17404ede80a2c01
The contents of Traefik docker-compose.yml:
version: '3'
services:
  traefik:
    image: traefik:v2.1
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/traefik.yml:ro
      - ./data/acme.json:/acme.json
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik.tld.com`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=admin:pass"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik.tld.com`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=http"
      - "traefik.http.routers.traefik-secure.service=api@internal"
networks:
  proxy:
    external: true
Contents of traefik.yml (I used .yml instead of .toml)
api:
  dashboard: true
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
certificatesResolvers:
  http:
    acme:
      email: myemail@tld.com
      storage: acme.json
      httpChallenge:
        entryPoint: http
Just to point out: with this Traefik setup, certificates are generated automatically for other services like GitLab. For those I only had to label the service correctly and attach it to the Traefik network; Traefik then recognizes the service and writes the certificate to acme.json, but sadly not for nginx-mailcow.
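For comparison, the working GitLab service is wired up roughly like this (a sketch from memory, not the exact file; the router names, hostname and internal port 80 are illustrative):

  gitlab:
    image: gitlab/gitlab-ce:latest
    networks:
      - proxy
    labels:
      # same pattern as for traefik itself: https router + resolver + explicit network
      - "traefik.enable=true"
      - "traefik.http.routers.gitlab-secure.entrypoints=https"
      - "traefik.http.routers.gitlab-secure.rule=Host(`gitlab.tld.com`)"
      - "traefik.http.routers.gitlab-secure.tls=true"
      - "traefik.http.routers.gitlab-secure.tls.certresolver=http"
      - "traefik.http.services.gitlab.loadbalancer.server.port=80"
      - "traefik.docker.network=proxy"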
The contents of my docker-compose.override.yml for mailcow:
version: '2.1'
services:
  nginx-mailcow:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-mailcow.entrypoints=http"
      - "traefik.http.routers.nginx-mailcow.rule=HostRegexp(`{host:(autodiscover|autoconfig|webmail|mail|email).+}`)"
      - "traefik.http.middlewares.nginx-mailcow-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.nginx-mailcow.middlewares=nginx-mailcow-https-redirect"
      - "traefik.http.routers.nginx-mailcow-secure.entrypoints=https"
      - "traefik.http.routers.nginx-mailcow-secure.rule=Host(`mail.tld.com`)"
      - "traefik.http.routers.nginx-mailcow-secure.tls=true"
      - "traefik.http.routers.nginx-mailcow-secure.service=nginx-mailcow"
      - "traefik.http.services.nginx-mailcow.loadbalancer.server.port=80"
      - "traefik.docker.network=proxy"
    networks:
      proxy:
  certdumper:
    image: humenius/traefik-certs-dumper
    container_name: traefik_certdumper
    network_mode: none
    command: --restart-containers mailcowdockerized_postfix-mailcow_1,mailcowdockerized_dovecot-mailcow_1
    volumes:
      - /opt/containers/traefik/data:/traefik:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/assets/ssl:/output:rw
    environment:
      - DOMAIN=tld.com
networks:
  proxy:
    external: true
The contents of my nginx-mailcow service in docker-compose.yml
version: '2.1'
services:
  ...
  nginx-mailcow:
    depends_on:
      - sogo-mailcow
      - php-fpm-mailcow
      - redis-mailcow
    image: nginx:mainline-alpine
    dns:
      - ${IPV4_NETWORK:-172.22.1}.254
    command: /bin/sh -c "envsubst < /etc/nginx/conf.d/templates/listen_plain.template > /etc/nginx/conf.d/listen_plain.active &&
      envsubst < /etc/nginx/conf.d/templates/listen_ssl.template > /etc/nginx/conf.d/listen_ssl.active &&
      envsubst < /etc/nginx/conf.d/templates/server_name.template > /etc/nginx/conf.d/server_name.active &&
      envsubst < /etc/nginx/conf.d/templates/sogo.template > /etc/nginx/conf.d/sogo.active &&
      envsubst < /etc/nginx/conf.d/templates/sogo_eas.template > /etc/nginx/conf.d/sogo_eas.active &&
      . /etc/nginx/conf.d/templates/sogo.auth_request.template.sh > /etc/nginx/conf.d/sogo_proxy_auth.active &&
      . /etc/nginx/conf.d/templates/sites.template.sh > /etc/nginx/conf.d/sites.active &&
      nginx -qt &&
      until ping phpfpm -c1 > /dev/null; do sleep 1; done &&
      until ping sogo -c1 > /dev/null; do sleep 1; done &&
      until ping redis -c1 > /dev/null; do sleep 1; done &&
      until ping rspamd -c1 > /dev/null; do sleep 1; done &&
      exec nginx -g 'daemon off;'"
    environment:
      - HTTPS_PORT=${HTTPS_PORT:-443}
      - HTTP_PORT=${HTTP_PORT:-80}
      - MAILCOW_HOSTNAME=${MAILCOW_HOSTNAME}
      - IPV4_NETWORK=${IPV4_NETWORK:-172.22.1}
      - TZ=${TZ}
      - ALLOW_ADMIN_EMAIL_LOGIN=${ALLOW_ADMIN_EMAIL_LOGIN:-n}
    volumes:
      - ./data/web:/web:ro
      - ./data/conf/rspamd/dynmaps:/dynmaps:ro
      - ./data/assets/ssl/:/etc/ssl/mail/:ro
      - ./data/conf/nginx/:/etc/nginx/conf.d/:rw
      - ./data/conf/rspamd/meta_exporter:/meta_exporter:ro
      - sogo-web-vol-1:/usr/lib/GNUstep/SOGo/
    ports:
      - "${HTTPS_BIND:-0.0.0.0}:${HTTPS_PORT:-443}:${HTTPS_PORT:-443}"
      - "${HTTP_BIND:-0.0.0.0}:${HTTP_PORT:-80}:${HTTP_PORT:-80}"
    restart: always
    networks:
      mailcow-network:
        aliases:
          - nginx
  ...
I have also tried commenting out the ports in the nginx-mailcow service, but the problem persists. My current mailcow.conf changes:
HTTP_BIND=127.0.0.1
HTTP_PORT=8080
HTTPS_BIND=127.0.0.1
HTTPS_PORT=8443
SKIP_LETS_ENCRYPT=y
SKIP_CLAMD=y
Reproduction of said bug:
I set up the Traefik proxy first (see contents above) and confirmed it is up and running (I also tested it with other services and certificates are generated fine for them). Then I cloned the mailcow repository and ran ./generate_config.sh to generate the mailcow.conf file. As input to generate_config.sh I provided my domain name, i.e. mail.tld.com. Roughly, those first steps look like the sketch below.
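(A rough sketch of the commands; the repository URL is the upstream mailcow-dockerized project and paths are examples.)

# clone mailcow and generate the base configuration
git clone https://github.com/mailcow/mailcow-dockerized.git
cd mailcow-dockerized
./generate_config.sh   # enter mail.tld.com when prompted for the hostname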
Then I commented out the ports in the docker-compose.yml file, because I do not want nginx-mailcow to bind ports 80 and 443 as these are already used by Traefik.
Then I created a docker-compose.override.yml (see contents above) to add additional configuration to the nginx-mailcow service (Traefik labels, Traefik network). The override file also contains the certdumper service, which copies the HTTPS certificate from acme.json to the mailcow services.
Then I changed the following two variables in mailcow.conf:
SKIP_LETS_ENCRYPT=y
SKIP_CLAMD=y
Finally, I start mailcow using docker-compose up -d. If I open https://mail.tld.com in the browser, it warns that the connection is insecure. If I check acme.json, I find no certificate for mail.tld.com.
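(For reference, this is roughly how I inspected acme.json; a sketch assuming the Traefik v2 layout where the top-level key is the resolver name, here http, and that jq is installed. The path is the Traefik data directory mounted into certdumper above.)

# list the domains for which the "http" resolver has obtained certificates
jq '.http.Certificates[].domain.main' /opt/containers/traefik/data/acme.json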
System information:
+--------------------------------------------------+---------------------------------+
| Question                                          | Answer                          |
+--------------------------------------------------+---------------------------------+
| My operating system                               | linux x86_64 Ubuntu 18.04.1 LTS |
| Is Apparmor, SELinux or similar active?           | No                              |
| Virtualization technology                         | KVM                             |
| Server/VM specifications (Memory, CPU Cores)      | 16GB, 6 cores                   |
| Docker Version (docker version)                   | 19.03.8                         |
| Docker-Compose Version (docker-compose version)   | 1.25.4, build 8d51620a          |
| Reverse proxy (custom solution)                   | Traefik                         |
+--------------------------------------------------+---------------------------------+
If you need more information, I would be happy to provide. Any help will be much appreciated. Thank you.

Finally, I was able to solve the problem after investing many hours in reading the Traefik documentation. I had made a tiny mistake in assigning the proxy labels to the nginx-mailcow service. The solution is below.
I had forgotten to specify the certificate resolver, and I also had to expose the port; both are now added as follows:
services:
  nginx-mailcow:
    expose:
      - "8080"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-mailcow.entrypoints=http"
      - "traefik.http.routers.nginx-mailcow.rule=HostRegexp(`{host:(autodiscover|autoconfig|webmail|mail|email).+}`)"
      - "traefik.http.middlewares.nginx-mailcow-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.nginx-mailcow.middlewares=nginx-mailcow-https-redirect"
      - "traefik.http.routers.nginx-mailcow-secure.entrypoints=https"
      - "traefik.http.routers.nginx-mailcow-secure.rule=Host(`mail.example.com`)"
      - "traefik.http.routers.nginx-mailcow-secure.tls=true"
      - "traefik.http.routers.nginx-mailcow-secure.tls.certresolver=http"
      - "traefik.http.routers.nginx-mailcow-secure.service=nginx-mailcow"
      - "traefik.http.services.nginx-mailcow.loadbalancer.server.port=8080"
      - "traefik.docker.network=proxy"
    networks:
      proxy:
  certdumper:
    image: humenius/traefik-certs-dumper
    container_name: traefik_certdumper
    network_mode: none
    command: --restart-containers mailcowdockerized_postfix-mailcow_1,mailcowdockerized_dovecot-mailcow_1
    volumes:
      - <path_to_acme.json_file_dir>:/traefik:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/assets/ssl:/output:rw
    environment:
      - DOMAIN=example.com
For people setting this up for the first time: I had to make some additional changes beforehand.
First, after running ./generate_config.sh, you need to make the following changes in the mailcow.conf file:
HTTP_PORT=8080
HTTP_BIND=127.0.0.1
HTTPS_PORT=8443
HTTPS_BIND=127.0.0.1
SKIP_LETS_ENCRYPT=y
SKIP_CLAMD=y
We make these changes because mailcow's nginx cannot run on the same ports as Traefik.
Since nginx-mailcow will now be listening on 8080 or 8443, we need to expose one of these ports so that Traefik can talk to the nginx-mailcow service (I exposed port 8080 in the override compose file above).
You also need to adapt your load balancer port from 80 to 8080 (as configured above).
You also need to tell Traefik which certificate resolver it should use, so add the certresolver label (also shown in the override config above).
Finally, you have to make sure that your acme.json file (the certificate store) is accessible to the certdumper service, so replace <path_to_acme.json_file_dir> with the actual path of the directory containing acme.json. A quick way to verify this is sketched below.
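(A minimal sketch of that check; the paths are examples from my setup, adjust them to yours.)

# acme.json must exist and be readable at the path mounted into certdumper
ls -l /opt/containers/traefik/data/acme.json
# after the stack is up, the dumped certificate and key should appear under mailcow's ssl dir
ls -l data/assets/ssl/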
I hope this helps.

Related

nginx-prometheus-exporter container cannot connect to nginx

I have a docker-compose file where both nginx and nginx-prometheus-exporter are containers. I put the relevant parts here:
nginx:
  container_name: nginx
  image: nginx:1.19.3
  restart: always
  ports:
    - 80:80
    - 443:443
    - "127.0.0.1:8080:8080"
nginx-exporter:
  image: nginx/nginx-prometheus-exporter:0.8.0
  command:
    -nginx.scrape-uri
    -http://127.0.0.1:8080/stub_status
I tried http://nginx:8080/stub_status,
nginx:8080/stub_status and
127.0.0.1:8080/stub_status for -nginx.scrape-uri but none of them worked and I got Could not create Nginx Client: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": dial tcp 127.0.0.1:8080: connect: connection refused.
Also, localhost:8080/stub_status is reachable on my VM using curl.
The problem was the missing -:
nginx:
  container_name: nginx
  image: nginx:1.19.3
  restart: always
  ports:
    - 80:80
    - 443:443
    - "127.0.0.1:8080:8080"
nginx-exporter:
  image: nginx/nginx-prometheus-exporter:0.8.0
  command:
    - -nginx.scrape-uri
    - http://127.0.0.1:8080/stub_status
In my case, I was running nginx-prometheus-exporter in Docker. Instead of using http://127.0.0.1:8080/stub_status, find the IP of your host machine (where Docker is running) by running the command below:
ip addr show docker0
and pass URL in docker run command like this:
-nginx.scrape-uri=http://<host_machine_IP>:8080/stub_status
Note: change the port and the server URL /stub_status in the command above to match your nginx configuration. A full invocation might look like the sketch below.
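(A rough example of the full command; the docker0 gateway address 172.17.0.1 and the exporter's default listen port 9113 are assumptions, so check ip addr show docker0 and your own ports first.)

# note: the stub_status port must be published on an address reachable from containers
# (not only 127.0.0.1), otherwise the host-IP approach will not work
docker run -d --name nginx-exporter -p 9113:9113 \
  nginx/nginx-prometheus-exporter:0.8.0 \
  -nginx.scrape-uri=http://172.17.0.1:8080/stub_status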

Why do ports need to be specified twice separated by a colon?

A lot of times, I see ports described twice with a colon like in this Docker Compose file from the Docker Networking in Compose page:
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres
networks:
  default:
    # Use a custom driver
    driver: custom-driver-1
I've often wondered why it's "8000:8000" and not simply "8000".
Then I saw this example, which has the two ports different:
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "8001:5432"
Can someone explain what this port representation means?
The first port is the host's port and the second is the remote port (i.e. in the container). That expression binds the remote port to the local port.
In the example you map the container's 8000 port to the host's 8000 port, but it's perfectly normal to use different ports (e.g. 48080:8080).
If the 'host' port and the ':' of the published port are omitted, e.g. docker run -d -p 3000 myimage, Docker will auto-assign a (high-numbered) host port for you. You can see it by running docker ps, as in the sketch below.
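(For example; myimage is a placeholder, and the auto-assigned host port will differ on your machine.)

# publish container port 3000 on a host port chosen by Docker
docker run -d -p 3000 myimage
docker ps
# the PORTS column shows something like 0.0.0.0:32768->3000/tcp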

Docker Compose: Mock external services

I have the following situation:
My application consists of a single web service that calls an
external API (say, some SaaS service, ElasticSearch or so). For non-unit-testing purposes we want to control the external service and later also inject faults. The application and the "mocked" API are dockerized and
now I want to use docker-compose to spin all containers up.
Because the application has several addresses hardcoded (e.g. the hostnames of external services), I cannot change them and need to work around this.
The service container makes a call to http://external-service.com/getsomestuff.
My idea was to use some feature provided by Docker to reroute all outgoing traffic for http://external-service.com/getsomestuff to the mock container without changing the URL.
My docker-compose.yaml looks like:
version: '2'
services:
service:
build: ./service
container_name: my-service1
ports:
- "5000:5000"
command: /bin/sh -c "python3 app.py"
api:
build: ./api-mock
container_name: my-api-mock
ports:
- "5001:5000"
command: /bin/sh -c "python3 app.py"
Finally, I have a driver that just does the following:
curl -XGET localhost:5000/
curl -XPUT localhost:5001/configure?delay=10
curl -XGET localhost:5000/
where the second curl just sets the delay in the mock to 10 seconds.
There are several options I have considered:
Using iptables-fu (would require modifying Dockerfiles to install it)
Using docker networks (this is really unclear to me)
Is there any simple option to achieve what I want?
Edit:
For clarity, here is the relevant part of the service code:
from flask import Flask
import requests

app = Flask(__name__)

@app.route('/')
def do_stuff():
    r = requests.get('http://external-service.com/getsomestuff')
    return process_api_response(r.text)
Docker runs an internal DNS server for user-defined networks. Any unknown host lookups are forwarded to your normal DNS servers.
Version 2+ compose files automatically create a network for Compose to use, so there are a number of ways to control the hostnames it resolves.
The simplest way is to name your container with the hostname:
version: "2"
services:
  external-service.com:
    image: busybox
    command: sleep 100
  ping:
    image: busybox
    command: ping external-service.com
    depends_on:
      - external-service.com
If you want to keep container names you can use links
version: "2"
services:
  api:
    image: busybox
    command: sleep 100
  ping:
    image: busybox
    links:
      - api:external-service.com
    command: ping external-service.com
    depends_on:
      - api
Or network aliases
version: "2"
services:
  api:
    image: busybox
    command: sleep 100
    networks:
      pingnet:
        aliases:
          - external-service.com
  ping:
    image: busybox
    command: ping external-service.com
    depends_on:
      - api
    networks:
      - pingnet
networks:
  pingnet:
I'm not entirely clear what the problem is you're trying to solve, but if you're trying to make external-service.com inside the container direct traffic to your "mock" service, I think you should be able to do that using the extra_hosts directive in your docker-compose.yml file. For example, if I have this:
version: "2"
services:
  example:
    image: myimage
    extra_hosts:
      - google.com:172.23.254.1
That will result in /etc/hosts in the container containing:
172.23.254.1 google.com
And attempts to access http://google.com will hit my web server at 172.23.254.1.
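(A quick way to verify from inside the container, assuming the example service above is running; this is just a sketch.)

# print the container's hosts file; it should contain the extra_hosts entry
docker-compose exec example cat /etc/hosts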
I was able to solve this with links; is there a way to do it with networks in docker-compose?
version: '3'
services:
  MOCK:
    image: api-mock:latest
    container_name: api-mock-container
    ports:
      - "8081:80"
  api:
    image: my-service1:latest
    links:
      - MOCK:external-service.com

Kubernetes Minikube Secrets appear not mounted in Pod

I have a Deployment in Kubernetes which works fine in GKE but fails in Minikube.
I have a Pod with 2 containers:
(1) Nginx as a reverse proxy (reads secret and configMap volumes at /etc/tls and /etc/nginx respectively)
(2) A JVM-based service listening on localhost
The problem in the Minikube deployment is that the Nginx container fails to read the TLS certs, which appear not to be there, i.e. the volume mount of the secrets into the Pod appears to have failed.
nginx: [emerg] BIO_new_file("/etc/tls/server.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/tls/server.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
But if I run minikube logs, I get a large number of seemingly successful TLS volume mounts...
MountVolume.SetUp succeeded for volume "kubernetes.io/secret/61701667-eca7-11e6-ae16-080027187aca-scriptwriter-tls" (spec.Name: "scriptwriter-tls")
And the secrets themselves are in the cluster okay...
$ kubectl get secrets scriptwriter-tls
NAME               TYPE      DATA      AGE
scriptwriter-tls   Opaque    3         1h
So it would appear that, as far as Minikube is concerned, all is well from a secrets point of view. But on the other hand, the nginx container can't see them.
I can't log on to the container either, since it keeps terminating.
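(For what it's worth, the mounts can also be inspected from outside the crashing container, roughly like this; the pod and deployment names below are placeholders.)

# show the volumes, mounts and recent events for the pod
kubectl describe pod <scriptwriter-pod-name>
# render the deployment spec to double-check the volumeMounts section
kubectl get deployment <scriptwriter-deployment> -o yaml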
For completeness the relevant sections from the Deployment yaml ...
Firstly the nginx config...
- name: nginx
  image: nginx:1.7.9
  imagePullPolicy: Always
  ports:
    - containerPort: 443
  lifecycle:
    preStop:
      exec:
        command: ["/usr/sbin/nginx", "-s", "quit"]
  volumeMounts:
    - name: "nginx-scriptwriter-dev-proxf-conf"
      mountPath: "/etc/nginx/conf.d"
    - name: "scriptwriter-tls"
      mountPath: "/etc/tls"
And secondly, the volumes themselves at the Pod level...
volumes:
  - name: "scriptwriter-tls"
    secret:
      secretName: "scriptwriter-tls"
  - name: "nginx-scriptwriter-dev-proxf-conf"
    configMap:
      name: "nginx-scriptwriter-dev-proxf-conf"
      items:
        - key: "nginx-scriptwriter.conf"
          path: "nginx-scriptwriter.conf"
Any pointers of help would be greatly appreciated.
I am a first-class numpty! :-) Sometimes the error is just the error! The problem was that the secrets were created from local $HOME/.ssh/* certs ... and if you generate them from different computers with different certs, then guess what?! All fixed now :-)

Docker container cannot resolve request to service in another container

I'm running gitlab-ce and gitlab-ci-multi-runner in separate Docker containers, but on the same server.
GitLab CE works fine; I can access it via browser and clone projects using both HTTP and SSH.
However, my runner cannot connect to GitLab using the domain/server IP. It can connect only via the local Docker network (for example using an IP address like 172.17.0.X or, if linked, by using a service alias).
Ping to the domain/server IP returns a response.
I tried to link it as gitlab:example.domain.com, but it didn't work, as somehow the runner resolved the server IP address instead of the local network address:
Checking for builds... failed: couldn't execute POST against http://example.domain.com/ci/api/v1/builds/register.json: Post http://example.domain.com/ci/api/v1/builds/register.json: dial tcp server.ip:80: i/o timeout
Edit:
docker-compose.yml
gitlab:
  image: gitlab/gitlab-ce:8.2.2-ce.0
  hostname: domain.name
  privileged: true
  volumes:
    - ./gitlab-config:/etc/gitlab
    - ./gitlab-data:/var/opt/gitlab
    - ./gitlab-logs:/var/log/gitlab
  restart: always
  ports:
    - server.ip:22:22
    - server.ip:80:80
    - server.ip:443:443
runner:
  image: gitlab/gitlab-runner:alpine
  restart: always
  volumes:
    - ./runner-config:/etc/gitlab-runner
    - /var/run/docker.sock:/var/run/docker.sock
I have no clue what the issue is here.
I'd appreciate your help.
Thanks in advance! :)
Seems like it was a firewall problem. Allowing traffic on the docker0 interface let the containers through :)
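(For anyone hitting the same thing, the kind of rule involved looks roughly like this; this is an assumption about the exact firewall setup, so adapt the interface name and tooling to your host.)

# accept traffic arriving from the Docker bridge on the host firewall
sudo iptables -A INPUT -i docker0 -j ACCEPT
# or, if using ufw:
sudo ufw allow in on docker0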
