Haproxy in docker cannot access device's localhost via "host.docker.internal" - networking

I need to configure Haproxy for local testing.
The goal is to have 3 services:
1. haproxy in a docker container
2. a listening http app on the device's (macOS) localhost
3. a client app sending requests through haproxy to the listening app (number 2)
The docker-compose.yml configuration for the proxy is:
proxy_server:
  image: haproxy:2.7.0-alpine
  container_name: proxy_server
  user: root # I used this to install curl in the container
  ports:
    - '3128:80' # haproxy itself
    - '20005:20005' # configured proxy in haproxy.cfg
  restart: always
  volumes:
    - ./test/proxy_server/config:/usr/local/etc/haproxy # this maps the haproxy.cfg file into the container
  extra_hosts:
    - 'host.docker.internal:host-gateway' # allows the proxy to access device's localhost on linux
The haproxy.cfg is:
defaults
    timeout client 5s
    timeout connect 5s
    timeout server 5s
    timeout http-request 5s

listen reverse-proxy
    bind *:20005
    mode http
    option httplog
    log stdout format raw local0 debug
When the listening app (2.) listens on localhost:56454, I can shell into the haproxy container, install curl, and connect to the listening app via the host.docker.internal hostname:
/ # curl -v -I http://host.docker.internal:56454
* Trying 192.168.65.2:56454...
* Connected to host.docker.internal (192.168.65.2) port 56454 (#0)
> HEAD / HTTP/1.1
> Host: host.docker.internal:56454
> User-Agent: curl/7.86.0
> Accept: */*
This is correct.
The problem is that I am not able to send a request through the proxy to the same URL (http://host.docker.internal:56454), because the proxy logs:
172.22.0.1:56636 [10/Dec/2022:13:50:22.601] reverse-proxy reverse-proxy/<NOSRV> 0/-1/-1/-1/0 503 217 - - SC-- 1/1/0/0/0 0/0 "POST http://host.docker.internal:56454/confirmation HTTP/1.1"
and the client gets the following response:
HTTP Status 503: <html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>
Also, the request passes correctly when I use the following docker-compose.yml configuration with the ubuntu/squid image instead of the haproxy one:
proxy_server:
  image: ubuntu/squid
  container_name: proxy_server
  ports:
    - '3128:3128'
  restart: always
  extra_hosts:
    - 'host.docker.internal:host-gateway' # allows the proxy to access device's localhost on linux
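For reference, in the squid setup the client talks to the container as an ordinary HTTP forward proxy on port 3128; a manual check of that path could look like the following (the /confirmation path is taken from the haproxy log above, everything else is illustrative):
curl -v -x http://localhost:3128 http://host.docker.internal:56454/confirmation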
So, I guess the problem is that haproxy somehow does not see the service on http://host.docker.internal:56454 even though the service is accessible from the container.
I've also tried the ubuntu and debian variants of the haproxy image, and it still does not work correctly.
Any idea how to fix it?
Edit:
Investigating <NOSRV>... No clue how to fix it yet.
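For what it's worth, <NOSRV> in that log line means haproxy could not pick any server for the request: the listen section above only binds and logs, and unlike squid, haproxy does not act as a generic forward proxy that resolves the request URL on its own. A minimal sketch of a listen section that simply pins everything arriving on port 20005 to the app on the host might look like this (the server name local_app and the hard-coded port 56454 are illustrative):
listen reverse-proxy
    bind *:20005
    mode http
    option httplog
    log stdout format raw local0 debug
    server local_app host.docker.internal:56454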

Related

Mailcow setup behind Traefik Proxy causes https certificate error

I am trying to set up the mailcow installation behind a Traefik proxy. Apparently, Traefik is not able to recognize the nginx-mailcow container in its network and hence does not create a certificate for the https connection. So when I bring up the mailcow service using docker-compose up, I can access the mailcow services, but only over an insecure connection (http), and the browser warns that the connection is not secure.
When I check my acme.json file from Traefik, I cannot find any certificate related to the mailcow domain, i.e., mail.tld.com, there.
I have the following setup:
Logs of affected containers:
Traefik Container Logs:
time="2020-04-18T13:40:35+02:00" level=error msg="accept tcp [::]:80: use of closed network connection" entryPointName=http
time="2020-04-18T13:40:35+02:00" level=error msg="accept tcp [::]:443: use of closed network connection" entryPointName=https
time="2020-04-18T13:40:35+02:00" level=error msg="close tcp [::]:80: use of closed network connection" entryPointName=http
time="2020-04-18T13:40:35+02:00" level=error msg="close tcp [::]:443: use of closed network connection" entryPointName=https
time="2020-04-18T13:40:35+02:00" level=error msg="Cannot connect to docker server context canceled" providerName=docker
time="2020-04-18T13:40:37+02:00" level=info msg="Configuration loaded from file: /traefik.yml"
time="2020-04-19T00:27:31+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" container=nginx-mailcow-mailcowdockerized-5f3a25b43c42fd85df675d2d9682b6053501844c2cfe15b7802cf918df138025 providerName=docker
time="2020-04-19T00:33:32+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" providerName=docker container=nginx-mailcow-mailcowdockerized-f4d41ee79e382b413e04b039b5fc91e1c6217c78740245c8666373fe2d6a9b23
2020/04/19 00:39:44 reverseproxy.go:445: httputil: ReverseProxy read error during body copy: unexpected EOF
time="2020-04-19T00:50:32+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" providerName=docker container=nginx-mailcow-mailcowdockerized-915f80e492c2c22917d0af81add1dde15577173c82cc928b0b6101c8a260adc5
time="2020-04-19T00:58:43+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" container=nginx-mailcow-mailcowdockerized-852985c4efc48559ca3568b1829e31b46eb9f968fc328a8566e3dc6ab6f1af21 providerName=docker
time="2020-04-19T02:02:39+02:00" level=error msg="Error while Peeking first byte: read tcp 172.21.0.2:80->208.91.109.90:55153: read: connection reset by peer"
time="2020-04-19T08:11:32+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" providerName=docker container=nginx-mailcow-mailcowdockerized-840ef4db0ccc9fa84038dc7a52133779926dba4c51554516c17404ede80a2c01
The contents of Traefik docker-compose.yml:
version: '3'

services:
  traefik:
    image: traefik:v2.1
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/traefik.yml:ro
      - ./data/acme.json:/acme.json
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik.tld.com`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=admin:pass"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik.tld.com`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=http"
      - "traefik.http.routers.traefik-secure.service=api@internal"

networks:
  proxy:
    external: true
Contents of traefik.yml (I used .yml instead of .toml)
api:
  dashboard: true

entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false

certificatesResolvers:
  http:
    acme:
      email: myemail@tld.com
      storage: acme.json
      httpChallenge:
        entryPoint: http
Just to point out: with this Traefik setup, certificates are generated automatically for other services such as gitlab. For those, I just labelled the service correctly and assigned the Traefik network to it, and Traefik recognized the service and generated the certificate in acme.json, but sadly not for nginx-mailcow.
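For comparison, the working pattern for such a service looks roughly like this (a sketch only; the gitlab service name, hostname and port are illustrative, not my actual configuration):
  gitlab:
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.gitlab-secure.entrypoints=https"
      - "traefik.http.routers.gitlab-secure.rule=Host(`gitlab.tld.com`)"
      - "traefik.http.routers.gitlab-secure.tls=true"
      - "traefik.http.routers.gitlab-secure.tls.certresolver=http"
      - "traefik.http.services.gitlab.loadbalancer.server.port=80"
      - "traefik.docker.network=proxy"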
The contents of my docker-compose.override.yml for mailcow:
version: '2.1'

services:
  nginx-mailcow:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-mailcow.entrypoints=http"
      - "traefik.http.routers.nginx-mailcow.rule=HostRegexp(`{host:(autodiscover|autoconfig|webmail|mail|email).+}`)"
      - "traefik.http.middlewares.nginx-mailcow-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.nginx-mailcow.middlewares=nginx-mailcow-https-redirect"
      - "traefik.http.routers.nginx-mailcow-secure.entrypoints=https"
      - "traefik.http.routers.nginx-mailcow-secure.rule=Host(`mail.tld.com`)"
      - "traefik.http.routers.nginx-mailcow-secure.tls=true"
      - "traefik.http.routers.nginx-mailcow-secure.service=nginx-mailcow"
      - "traefik.http.services.nginx-mailcow.loadbalancer.server.port=80"
      - "traefik.docker.network=proxy"
    networks:
      proxy:

  certdumper:
    image: humenius/traefik-certs-dumper
    container_name: traefik_certdumper
    network_mode: none
    command: --restart-containers mailcowdockerized_postfix-mailcow_1,mailcowdockerized_dovecot-mailcow_1
    volumes:
      - /opt/containers/traefik/data:/traefik:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/assets/ssl:/output:rw
    environment:
      - DOMAIN=tld.com

networks:
  proxy:
    external: true
The contents of my nginx-mailcow service in docker-compose.yml
version: '2.1'
services:
  ...
  nginx-mailcow:
    depends_on:
      - sogo-mailcow
      - php-fpm-mailcow
      - redis-mailcow
    image: nginx:mainline-alpine
    dns:
      - ${IPV4_NETWORK:-172.22.1}.254
    command: /bin/sh -c "envsubst < /etc/nginx/conf.d/templates/listen_plain.template > /etc/nginx/conf.d/listen_plain.active &&
      envsubst < /etc/nginx/conf.d/templates/listen_ssl.template > /etc/nginx/conf.d/listen_ssl.active &&
      envsubst < /etc/nginx/conf.d/templates/server_name.template > /etc/nginx/conf.d/server_name.active &&
      envsubst < /etc/nginx/conf.d/templates/sogo.template > /etc/nginx/conf.d/sogo.active &&
      envsubst < /etc/nginx/conf.d/templates/sogo_eas.template > /etc/nginx/conf.d/sogo_eas.active &&
      . /etc/nginx/conf.d/templates/sogo.auth_request.template.sh > /etc/nginx/conf.d/sogo_proxy_auth.active &&
      . /etc/nginx/conf.d/templates/sites.template.sh > /etc/nginx/conf.d/sites.active &&
      nginx -qt &&
      until ping phpfpm -c1 > /dev/null; do sleep 1; done &&
      until ping sogo -c1 > /dev/null; do sleep 1; done &&
      until ping redis -c1 > /dev/null; do sleep 1; done &&
      until ping rspamd -c1 > /dev/null; do sleep 1; done &&
      exec nginx -g 'daemon off;'"
    environment:
      - HTTPS_PORT=${HTTPS_PORT:-443}
      - HTTP_PORT=${HTTP_PORT:-80}
      - MAILCOW_HOSTNAME=${MAILCOW_HOSTNAME}
      - IPV4_NETWORK=${IPV4_NETWORK:-172.22.1}
      - TZ=${TZ}
      - ALLOW_ADMIN_EMAIL_LOGIN=${ALLOW_ADMIN_EMAIL_LOGIN:-n}
    volumes:
      - ./data/web:/web:ro
      - ./data/conf/rspamd/dynmaps:/dynmaps:ro
      - ./data/assets/ssl/:/etc/ssl/mail/:ro
      - ./data/conf/nginx/:/etc/nginx/conf.d/:rw
      - ./data/conf/rspamd/meta_exporter:/meta_exporter:ro
      - sogo-web-vol-1:/usr/lib/GNUstep/SOGo/
    ports:
      - "${HTTPS_BIND:-0.0.0.0}:${HTTPS_PORT:-443}:${HTTPS_PORT:-443}"
      - "${HTTP_BIND:-0.0.0.0}:${HTTP_PORT:-80}:${HTTP_PORT:-80}"
    restart: always
    networks:
      mailcow-network:
        aliases:
          - nginx
  ...
I have also tried commenting out the ports in the nginx-mailcow service, but the problem persists. My current mailcow.conf changes:
HTTP_BIND=127.0.0.1
HTTP_PORT=8080
HTTPS_BIND=127.0.0.1
HTTPS_PORT=8443
SKIP_LETS_ENCRYPT=y
SKIP_CLAMD=y
Reproduction of said bug:
I set up the Traefik proxy first (see contents above) and made sure it was up and running (I also tested it with other services, and it generates certificates fine for them). Then I cloned the mailcow repository and ran ./generate_config.sh to generate the mailcow.conf file. As input to generate_config.sh I provided my domain name, i.e., mail.tld.com.
Then I commented out the ports in the docker-compose.yml file, because I do not want to use ports 80 and 443 for nginx-mailcow, as these ports are already being used by Traefik.
Then I created a docker-compose.override.yml (see contents above) to add additional config to the nginx-mailcow service (Traefik labels, Traefik network). The override file also contains the certdumper service, which copies the https certificate from acme.json to the mailcow services.
Then I changed the following two variables in mailcow.conf:
SKIP_LETS_ENCRYPT=y
SKIP_CLAMD=y
Finally, I started mailcow using docker-compose up -d. If I check https://mail.tld.com in the browser, it warns that the connection is insecure, and if I check acme.json, I find no certificate for mail.tld.com.
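A simple way to do that acme.json check, assuming it is run from the Traefik compose directory shown above, is to grep for the hostname; no output means no certificate entry exists:
grep 'mail.tld.com' ./data/acme.json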
System information:
+-------------------------------------------------+---------------------------------+
| Question                                        | Answer                          |
+-------------------------------------------------+---------------------------------+
| My operating system                             | linux x86_64 Ubuntu 18.04.1 LTS |
| Is Apparmor, SELinux or similar active?         | No                              |
| Virtualization technology                       | KVM                             |
| Server/VM specifications (Memory, CPU Cores)    | 16GB, 6 cores                   |
| Docker Version (docker version)                 | 19.03.8                         |
| Docker-Compose Version (docker-compose version) | 1.25.4, build 8d51620a          |
| Reverse proxy (custom solution)                 | Traefik                         |
+-------------------------------------------------+---------------------------------+
If you need more information, I would be happy to provide. Any help will be much appreciated. Thank you.
Finally, I was able to solve the problem after investing many hours in reading the Traefik documentation. I had made a tiny mistake in assigning the proxy labels to the nginx-mailcow service. The solution is below.
I forgot to specify the certificate resolver, and I had to expose the port, which I have now added as follows:
services:
  nginx-mailcow:
    expose:
      - "8080"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-mailcow.entrypoints=http"
      - "traefik.http.routers.nginx-mailcow.rule=HostRegexp(`{host:(autodiscover|autoconfig|webmail|mail|email).+}`)"
      - "traefik.http.middlewares.nginx-mailcow-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.nginx-mailcow.middlewares=nginx-mailcow-https-redirect"
      - "traefik.http.routers.nginx-mailcow-secure.entrypoints=https"
      - "traefik.http.routers.nginx-mailcow-secure.rule=Host(`mail.example.com`)"
      - "traefik.http.routers.nginx-mailcow-secure.tls=true"
      - "traefik.http.routers.nginx-mailcow-secure.certresolver=http"
      - "traefik.http.routers.nginx-mailcow-secure.service=nginx-mailcow"
      - "traefik.http.services.nginx-mailcow.loadbalancer.server.port=8080"
      - "traefik.docker.network=proxy"
    networks:
      proxy:

  certdumper:
    image: humenius/traefik-certs-dumper
    container_name: traefik_certdumper
    network_mode: none
    command: --restart-containers mailcowdockerized_postfix-mailcow_1,mailcowdockerized_dovecot-mailcow_1
    volumes:
      - <path_to_acme.json_file_dir>:/traefik:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/assets/ssl:/output:rw
    environment:
      - DOMAIN=example.com
For people who are setting this up for the first time: I also had to make some additional changes beforehand.
Firstly, after you run generate_config.sh, you need to make the following changes in mailcow.conf:
HTTP_PORT=8080
HTTP_BIND=127.0.0.1
HTTPS_PORT=8443
HTTPS_BIND=127.0.0.1
SKIP_LETS_ENCRYPT=y
SKIP_CLAMD=y
We make these changes because we cannot run mailcow's nginx on the same ports as Traefik.
Now, as nginx-mailcow will be running on 8080 or 8443, we need to expose one of these ports so Traefik can talk to the nginx-mailcow service (I already exposed port 8080 in the override compose file above).
You also need to adapt your loadbalancer port from 80 to 8080 (as I configured above).
You also need to tell the router which certificate resolver it should use, so you need to add that line to the labels (I did this as well in the override config above).
You have to make sure that your acme.json certificate file is accessible to the certdumper service, so replace <path_to_acme.json_file_dir> with the actual path to the directory containing acme.json.
I hope this helps.
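Once everything is up, one way to confirm that a certificate is actually being served for the mail hostname is to inspect it from the command line (a minimal check; substitute your real hostname for mail.example.com):
echo | openssl s_client -connect mail.example.com:443 -servername mail.example.com 2>/dev/null | openssl x509 -noout -issuer -subject -dates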

Error 502 when accessing backend inside same cluster in Kubernetes

Backend: python (Django)
Frontend: angular6
I just deployed my backend and frontend on the same cluster in Google Kubernetes Engine. They are two individual services inside the same cluster. The pods on the cluster look like:
NAME                        READY   STATUS    RESTARTS   AGE
backend-f4f5df588-nbc9p     1/1     Running   0          1h
frontend-85885799d9-92z5f   1/1     Running   0          1h
And the services look like:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
backend      LoadBalancer   10.3.249.148   35.232.61.116    8000:32291/TCP   26m
frontend     LoadBalancer   10.3.248.72    35.224.112.111   8081:31444/TCP   3m
kubernetes   ClusterIP      10.3.240.1     <none>           443/TCP          1h
My backend just runs on the Django development server, started with the python manage.py runserver command, and everything works fine. I built the frontend and deployed it on an Nginx server. So there are two Docker images, one for Django and one for nginx, running as two pods in the cluster.
Then there are two ingresses, one for each of them, exposing port 80 for the frontend and 8000 for the backend, both handled by the nginx ingress controller's load balancer. After assigning a domain, I can visit https://abc/project as the front end. But when I make API requests, a 502 error appears. The error message in nginx is:
38590 connect() failed (111: Connection refused) while connecting to upstream, client: 163.185.148.245, server: _, request: "GET /project/api HTTP/1.1", upstream: "http://10.0.0.30:8000/dataproject/api", host: "abc"
The upstream in the error message is the correct IP for the backend service, but I still get a 502 error. I can curl from the nginx server to the frontend, but I cannot curl to the backend. Any help?
PS. Everything works fine before deployment.
Fixed. The Django runserver command should bind to 0.0.0.0 so that it does not refuse connections from outside the pod:
python manage.py runserver 0.0.0.0:8000
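In a Kubernetes Deployment, that can be wired into the container spec; a minimal sketch (the image name and the surrounding manifest are illustrative, not the actual deployment used here):
    containers:
      - name: backend
        image: my-django-image:latest
        command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
        ports:
          - containerPort: 8000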

Verify if nginx is working correctly with Proxy Protocol locally

Environment
I have set up Proxy Protocol support on an AWS classic load balancer as shown here, which forwards traffic to backend nginx instances (configured with ModSecurity).
Everything works great and I can hit my websites from the open internet.
Now, since my nginx configuration is done in AWS User Data, I want to do some checks before the instance starts serving traffic, which is achievable through AWS Lifecycle hooks.
Problem
Before enabling the PROXY protocol, I used to check whether my nginx instance was healthy and ModSecurity was working by checking for a 403 response from this command:
$ curl -ks "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
After enabling the PROXY protocol, I can't do this anymore, as the command fails with the error below, which is expected as per this link.
# curl -k https://localhost -v
* About to connect() to localhost port 443 (#0)
* Trying ::1...
* Connected to localhost (::1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
curl: (35) Encountered end of file
# cat /var/logs/nginx/error.log
2017/10/26 07:53:08 [error] 45#45: *5348 broken header: "���4"�U�8ۭ򫂱�u��%d�z��mRN�[e��<�,�
�+̩� �0��/̨��98k�̪32g�5=�/<
" while reading PROXY protocol, client: 172.17.0.1, server: 0.0.0.0:443
What other options do I have to programmatically check nginx apart from curl? Maybe something in some other language?
You can use the --haproxy-protocol curl option, which adds the extra proxy protocol info to the request.
curl --haproxy-protocol localhost
So:
curl -ks "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
The PROXY protocol prepends a plain-text line before anything else is streamed:
PROXY TCP4 127.0.0.1 127.0.0.1 0 8080
The above is an example, and it is the very first thing sent on the connection. Now, if I have nginx listening on both SSL and plain http with proxy_protocol enabled, it expects to see this line first, before anything else.
So if I do:
$ curl localhost:81
curl: (52) Empty reply from server
And in the nginx logs:
web_1 | 2017/10/27 06:35:15 [error] 5#5: *2 broken header: "GET / HTTP/1.1
If I do
$ printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 80\r\nGET /test/abc\r\n\r\n" | nc localhost 81
You can reach API /test/abc and args_given = ,
It works: since I send the PROXY protocol line first, nginx accepts the request.
Now, in the case of SSL, if I use the below:
printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 8080\r\nGET /test/abc\r\n\r\n" | openssl s_client -connect localhost:8080
It would still error out
web_1 | 2017/10/27 06:37:27 [error] 5#5: *1 broken header: ",(�� #_5���_'���/��ߗ
That is because the client tries to do the TLS handshake first, instead of sending the PROXY protocol line first and then doing the handshake.
So your possible solutions are:
1. Terminate SSL on the LB, handle plain http on nginx with proxy_protocol, and use the nc command option I posted.
2. Add a listen 127.0.0.1:<randomlargeport> and execute your test against that. This is still safe, as you are listening on localhost only.
3. Add another SSL port and use listen 127.0.0.1:443 ssl plus listen <private_ipv4>:443 ssl proxy_protocol (see the sketch below).
All solutions are listed in my order of preference; you can make your own choice.
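A rough sketch of the third option, with illustrative certificate paths and private IP, and a trivial location just to show the shape:
server {
    listen 127.0.0.1:443 ssl;                # plain TLS for local health checks
    listen 10.0.0.5:443 ssl proxy_protocol;  # PROXY protocol for traffic from the LB
    server_name _;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        return 200 "OK";
    }
}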
Thanks Tarun for the detailed explanation. I discussed it within the team and ended up creating another nginx virtual host on port 80 and using that to check ModSecurity, as below:
curl "http://localhost/foo?username=1'%20or%20'1'%20=%20'"`
Unfortunately, the bash version didn't work in my case, so I wrote this python3 code:
#!/usr/bin/env python3
import socket
import sys


def check_status(host, port):
    '''Check app status, return True if ok'''
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        s.connect((host, port))
        # plain HTTP request; no PROXY protocol header needed on this listener
        s.sendall(b'GET /status HTTP/1.1\r\nHost: api.example.com\r\nUser-Agent: curl7.0\r\nAccept: */*\r\n\r\n')
        data = s.recv(1024)
        if data.decode().endswith('OK'):
            return True
        else:
            return False


try:
    status = check_status('127.0.0.1', 80)
except Exception:
    status = False

if status:
    sys.exit(0)
else:
    sys.exit(1)
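Run from the instance (the install path is illustrative), the exit code is the health signal the lifecycle hook can act on:
python3 /usr/local/bin/check_nginx_status.py
echo $?   # 0 means the /status endpoint answered OK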

Can you use nginx reverse proxy to docker containers without exposing any ports?

I'd like to know if it's possible to use nginx with docker compose as an api gateway / reverse proxy / ssl termination point without exposing any ports on the containers behind it. I.e., I want the containers behind nginx to communicate only over the internal network that docker compose creates when the containers are linked. Ideally the only publicly accessible port will be port 443 (ssl) on nginx. Is this doable? Or do I have to expose ports on my containers?
Yes, it is doable.
Just define your application in one container and nginx in another container, both in the same docker-compose.yml. Link them, and only publish port 443 on the nginx container.
docker-compose.yml
nginx:
  image: nginx
  links:
    - node1:node1
    - node2:node2
    - node3:node3
  ports:
    - "443:443"

node1:
  build: ./node

node2:
  build: ./node

node3:
  build: ./node
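Inside the nginx container you then proxy to the other containers by their service names over the compose network; a minimal sketch of the nginx config, assuming the node apps listen on port 8080 and with illustrative certificate paths:
upstream nodes {
    server node1:8080;
    server node2:8080;
    server node3:8080;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    location / {
        proxy_pass http://nodes;
    }
}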
More info: http://anandmanisankar.com/posts/docker-container-nginx-node-redis-example/
Regards

Docker container cannot resolve request to service in another container

I'm running gitlab-ce and gitlab-ci-multi-runner in separated docker containers, but on the same server.
Gitlab CE works fine, I can access it via browser and clone projects using both http and ssh.
However, my runner cannot connect to GitLab using the domain/server IP. It can connect to it only via the local docker network (for example using the IP address 172.17.0.X or, if linked, by using the service alias).
A ping to the domain/server IP returns a response.
I tried to link it as gitlab:example.domain.com, but that didn't work either, as somehow the runner resolved the server IP address instead of the local network address:
Checking for builds... failed: couldn't execute POST against http://example.domain.com/ci/api/v1/builds/register.json: Post http://example.domain.com/ci/api/v1/builds/register.json: dial tcp server.ip:80: i/o timeout
Edit:
docker-compose.yml
gitlab:
  image: gitlab/gitlab-ce:8.2.2-ce.0
  hostname: domain.name
  privileged: true
  volumes:
    - ./gitlab-config:/etc/gitlab
    - ./gitlab-data:/var/opt/gitlab
    - ./gitlab-logs:/var/log/gitlab
  restart: always
  ports:
    - server.ip:22:22
    - server.ip:80:80
    - server.ip:443:443

runner:
  image: gitlab/gitlab-runner:alpine
  restart: always
  volumes:
    - ./runner-config:/etc/gitlab-runner
    - /var/run/docker.sock:/var/run/docker.sock
I have no clue what the issue is here.
I'd appreciate your help.
Thanks in advance! :)
Seems like it was a firewall problem. Allowing traffic from the docker0 interface let the containers through :)
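For anyone hitting the same thing, the exact rule depends on your firewall, but it boils down to allowing traffic on the docker0 bridge; illustrative examples only:
# ufw
sudo ufw allow in on docker0

# or plain iptables (which chain matters depends on where the traffic is dropped)
sudo iptables -I INPUT -i docker0 -j ACCEPT
sudo iptables -I FORWARD -i docker0 -j ACCEPT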
