Redmine installation not working through Nginx and Thin

I've installed Redmine on an Ubuntu 13.04 server.
The installation worked fine, and I confirmed Redmine was working through the WEBrick server (as per the Redmine documentation).
To make things more stable I want to run Redmine behind Nginx & Thin.
This is where I run into problems, as Nginx reports upstream timeouts:
2013/07/19 07:47:32 [error] 1051#0: *10 upstream timed out (110: Connection timed out) while connecting to upstream, .......
Thin Configuration:
---
chdir: /home/redmine/app/redmine
environment: production
address: 127.0.0.1
port: 3000
timeout: 5
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 128
max_persistent_conns: 64
require: []
wait: 10
servers: 1
daemonize: true
I can see Thin is running: the pid file is created and a log file is started.
However, nothing further is written to the log file when I make requests.
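To narrow down whether Thin or the proxy hop is failing, it may help to test the backend directly on the host, bypassing Nginx entirely (a diagnostic sketch; the address and port come from the Thin config above):

# is anything listening on Thin's address and port?
ss -tlnp | grep :3000

# does Thin answer a plain HTTP request locally?
curl -v http://127.0.0.1:3000/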
Nginx configuration:
upstream redmine {
    server 127.0.0.1:3000;
}

server {
    server_name redmine.my.domain;
    listen 443;
    ssl on;
    ssl_certificate /home/redmine/sites/redmine/certificates/server.crt;
    ssl_certificate_key /home/redmine/sites/redmine/certificates/server.key;

    access_log /home/redmine/sites/redmine/logs/server.access.nginx.log;
    error_log /home/redmine/sites/redmine/logs/server.error.nginx.log;

    root /home/redmine/app/redmine;

    location / {
        try_files $uri @ruby;
    }

    location @ruby {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 5;
        proxy_pass http://redmine;
    }
}
I can see additions to the Nginx log.
Can anyone give me a hint on where to find the problem?
Current result of iptables -L
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:3000
ACCEPT tcp -- anywhere anywhere tcp dpt:https
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination

The error occurs because your iptables firewall is blocking the port.
Roll back your iptables config, then issue the following command (using Thin's port from your configuration):
iptables -I INPUT -i lo -p tcp --dport 3000 -j ACCEPT
Remember to make the rule persistent; note that service iptables save is a Red Hat convention and does not exist on Ubuntu (see the sketch below).
More information about iptables: https://help.ubuntu.com/community/IptablesHowTo
P.S. sudo may be needed for the above commands.
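A minimal persistence sketch for Ubuntu, assuming Thin listens on 127.0.0.1:3000 as configured above:

# allow loopback traffic to Thin's port
# (many setups simply accept all loopback traffic: sudo iptables -I INPUT -i lo -j ACCEPT)
sudo iptables -I INPUT -i lo -p tcp --dport 3000 -j ACCEPT

# persist the rules across reboots; iptables-persistent loads /etc/iptables/rules.v4 at boot
sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'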

Related

Redis failed to connect: connection refused from Nginx

I've been struggling with this issue for more than a week and still cannot pull it off.
I'm trying to provision our environment using Ansible, in order to build a staging server with the same environment as production. I have set up the Redis server, and it's running and listening on 6379. I have Nginx up and running and serving requests, but when it gets to the Lua part that connects to Redis, it throws a connection refused error.
Here is Nginx debug log: Link
Redis Listening on 6379
$ sudo lsof -i -P -n | grep LISTEN | grep 6379
redis-ser 1978 redis 4u IPv6 138447828 0t0 TCP *:6379 (LISTEN)
redis-ser 1978 redis 5u IPv4 138447829 0t0 TCP *:6379 (LISTEN)
Connecting to Redis through Python
Python 2.7.12 (default, Oct 8 2019, 14:14:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import redis
>>> r = redis.Redis(host='127.0.0.1', port=6379, db=0)
>>> r.set("Test", 'value')
True
>>> r.get("Test")
'value'
Lua Code:
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(500) -- timeout in milliseconds

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("Redis failed to connect: ", err)
    return
end
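Since Redis answers from Python on the same host, a quick shell check can confirm basic reachability before digging into the OpenResty side (a diagnostic sketch):

# expect: PONG
redis-cli -h 127.0.0.1 -p 6379 ping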
Nginx conf:
server {
listen 8080;
server_name xxx.com;
access_log /var/log/nginx/xxxx_access.log;
error_log /var/log/nginx/xxxx_error.log debug;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header REMOTE_ADDR $http_cf_connecting_ip;
proxy_set_header X-Real-IP $http_cf_connecting_ip;
proxy_set_header X-URI $uri;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
location / {
rewrite_by_lua_file '/var/www/xxxx/nginx/add_header_web.lua';
proxy_pass http://xxxxx/;
}
}
Environment
Redis 3.2.0
Nginx: openresty/1.7.7.2
configure arguments: --prefix=/usr/local/openresty/nginx --with-debug --with-cc-opt='-I/opt/ngx_openresty-1.7.7.2/build/luajit-root/usr/local/openresty/luajit/include/luajit-2.1 -DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC -O2 -O2' --add-module=../ngx_devel_kit-0.2.19 --add-module=../echo-nginx-module-0.57 --add-module=../xss-nginx-module-0.04 --add-module=../ngx_coolkit-0.2rc2 --add-module=../set-misc-nginx-module-0.28 --add-module=../form-input-nginx-module-0.10 --add-module=../encrypted-session-nginx-module-0.03 --add-module=../srcache-nginx-module-0.28 --add-module=../ngx_lua-0.9.14 --add-module=../ngx_lua_upstream-0.02 --add-module=../headers-more-nginx-module-0.25 --add-module=../array-var-nginx-module-0.03 --add-module=../memc-nginx-module-0.15 --add-module=../redis2-nginx-module-0.11 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.13 --add-module=../rds-csv-nginx-module-0.05 --with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -L/opt/ngx_openresty-1.7.7.2/build/luajit-root/usr/local/openresty/luajit/lib -Wl,-rpath,/usr/local/lib' --conf-path=/etc/nginx/nginx.conf --with-http_realip_module --with-http_stub_status_module --with-http_geoip_module --with-http_ssl_module --with-http_sub_module --add-module=/opt/nginxmodules/nginx-upload-progress-module --add-module=/opt/nginxmodules/nginx-push-stream-module
Update:
Well, I just updated my openresty to the latest version and things went back to working.
I was having this same issue but figured it out after a bit of help from a Chinese thread in a Google group. Essentially, you have to pass an options table with a pool name when connecting. I don't know why, but it worked for me.
Here is my code:
local options_table = {}
options_table["pool"] = "docker_server"
local ok, err = red:connect("10.211.55.8", 6379, options_table)

Configuring Supervisor for Daphne (Django Channels)

I have created a web application with Django Channels, which I am having problems setting up under Supervisor.
To start with, the application works well locally.
Remotely (I use an AWS EC2 instance with Ubuntu Server 18.04 LTS), when run with the command daphne -b 0.0.0.0 -p 8000 mysite.asgi:application, it also works well.
However, I cannot make it work with Supervisor. I followed the instructions from the official Django Channels docs (https://channels.readthedocs.io/en/latest/deploying.html) and therefore I have:
nginx config file:
upstream channels-backend {
    server localhost:8000;
}

server {
    server_name www.example.com;
    keepalive_timeout 5;
    client_max_body_size 1m;

    access_log /home/ubuntu/django_app/logs/nginx-access.log;
    error_log /home/ubuntu/django_app/logs/nginx-error.log;

    location /static/ {
        alias /home/ubuntu/django_app/mysite/staticfiles/;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    listen 80;
    server_name www.example.com;

    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    return 404; # managed by Certbot
}
Supervisor config file:
[fcgi-program:asgi]
socket=tcp://localhost:8000
directory=/home/ubuntu/django_app/mysite
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
numprocs=4
process_name=asgi%(process_num)d
autostart=true
autorestart=true
stdout_logfile=/home/ubuntu/django_app/logs/supervisor_log.log
redirect_stderr=true
When set this way, the webpage does not work (504 Gateway Time-out). In the Supervisor log file I see:
2018-11-14 14:48:21,511 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2018-11-14 14:48:21,516 INFO HTTP/2 support enabled
2018-11-14 14:48:21,517 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,015 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,025 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
2018-11-14 14:48:22,026 CRITICAL Listen failure: [Errno 2] No such file or directory: '1416' -> b'/run/daphne/daphne0.sock.lock'
2018-11-14 14:48:22,091 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne2.sock
2018-11-14 14:48:22,096 INFO HTTP/2 support enabled
2018-11-14 14:48:22,097 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,135 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne3.sock
2018-11-14 14:48:22,152 INFO HTTP/2 support enabled
2018-11-14 14:48:22,153 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,237 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,241 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,242 INFO Configuring endpoint unix:/run/daphne/daphne3.sock
2018-11-14 14:48:22,242 CRITICAL Listen failure: [Errno 2] No such file or directory: '1419' -> b'/run/daphne/daphne3.sock.lock'
2018-11-14 14:48:22,252 INFO Configuring endpoint unix:/run/daphne/daphne2.sock
2018-11-14 14:48:22,252 CRITICAL Listen failure: [Errno 2] No such file or directory: '1420' -> b'/run/daphne/daphne2.sock.lock'
etc.
Please note that in the Supervisor command, the Daphne process is invoked in a different way (with a different set of parameters) than I used before: instead of parameters for the address and port, there are parameters for a socket and a file descriptor (about which I know very little). I suspect this is the cause of the error.
Any help or suggestions will be highly appreciated.
The relevant packages versions:
channels==2.1.2
channels-redis==2.2.1
daphne==2.2.1
Django==2.1.2
EDIT:
When I create empty files for the socket files referenced in the Daphne command in the Supervisor config file, i.e. /run/daphne/daphne0.sock, /run/daphne/daphne1.sock, etc., the log file states the following:
2018-11-15 10:24:38,289 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2018-11-15 10:24:38,290 INFO HTTP/2 support enabled
2018-11-15 10:24:38,280 INFO Configuring endpoint fd:fileno=0
2018-11-15 10:24:38,458 INFO Listening on TCP address 127.0.0.1:8000
2018-11-15 10:24:38,475 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
2018-11-15 10:24:38,476 CRITICAL Listen failure: Couldn't listen on any:b'/run/daphne/daphne0.sock': [Errno 98] Address already in use.
Question: should these files not be empty? What should they include?
In the supervisor ASGI config file, in the following line
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
replace --fd 0 with --endpoint fd:fileno=0.
Issue: https://github.com/django/daphne/issues/234
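With that change, the command line in the [fcgi-program:asgi] section reads:

command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --endpoint fd:fileno=0 --access-log - --proxy-headers mysite.asgi:application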
Fabio's answer, replacing the file descriptor parameter with the endpoint parameter, is a quick workaround for this problem (which turned out to be a bug in the Daphne code).
However, a fix was quickly committed to the Daphne repository, so the original instructions now work as written.
As a side note (for people still getting the critical listen failures I described in the original question): make sure the location for the socket files (/run/daphne/ in my case) actually exists and is accessible. I spent too much time before discovering that simply creating the daphne directory under /run does the job (even though I ran everything with sudo). As a precaution, you may consider pointing the socket files at another directory, e.g. /tmp, which allows creating a directory without sudo permissions.
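Note that /run is typically a tmpfs and is emptied on every reboot, so the directory has to be recreated each time. A minimal sketch (www-data is an assumption; use whichever user Supervisor runs Daphne as):

# create the directory now
sudo mkdir -p /run/daphne
sudo chown www-data:www-data /run/daphne

# recreate it automatically at boot via systemd-tmpfiles
echo 'd /run/daphne 0755 www-data www-data -' | sudo tee /etc/tmpfiles.d/daphne.conf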

Allow nginx to read docker.sock

In order to monitor my docker containers, I've decided to expose the Docker remote API through nginx with the following rule:
server {
    listen 1234;
    server_name xxx.xxx.xxx.xxx;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unix:/var/run/docker.sock;
    }
}
But in the nginx error log, I get the following error:
connect() to unix:/var/run/docker.sock failed (13: Permission denied)
The reason is that docker.sock is under the ownership of docker group while nginx is running in www-data group.
What is the best way to solve this problem?
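Before touching permissions, it can help to confirm that the socket itself responds, independent of nginx (a diagnostic sketch; curl supports --unix-socket since version 7.40):

sudo curl --unix-socket /var/run/docker.sock http://localhost/version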
For this issue you can add the user that nginx runs as (www-data on Ubuntu) to the docker group, then restart nginx so the new group membership takes effect:
sudo usermod -aG docker www-data
Alternatively, you can open an HTTP socket in the daemon by adding the following to the ExecStart line in the daemon's config file:
-H <ip address>:2375
You can find the location of the configuration file in the output of the command:
systemctl status docker
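Rather than editing the unit file in place, a drop-in override is the usual route (a sketch; the dockerd path assumes a stock modern package, and binding to 127.0.0.1 is an assumption chosen to avoid exposing the API publicly):

sudo systemctl edit docker
# in the editor that opens, add:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375
sudo systemctl restart docker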
You can also loosen the permissions on the docker socket; note that connecting to a Unix socket requires write permission as well, and that making docker.sock world-writable effectively grants root to all local users:
sudo chmod a+rw /var/run/docker.sock

Jenkins: How to configure Jenkins behind Nginx reverse proxy for JNLP slaves to connect

I am trying to set up a Jenkins master and a Jenkins slave node, where the Jenkins master sits behind an Nginx reverse proxy on a different server with SSL termination. The nginx configuration is as follows:
upstream jenkins {
    server <server ip>:8080 fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name jenkins.mydomain.com;

    ssl_certificate /etc/nginx/certs/mydomain.crt;
    ssl_certificate_key /etc/nginx/certs/mydomain.key;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect http:// https://;
        proxy_pass http://jenkins;
    }
}

server {
    listen 80;
    server_name jenkins.mydomain.com;
    return 301 https://$server_name$request_uri;
}
The TCP port for JNLP agents is set as 50000 in Jenkins master Global Security configuration. Port 50000 is set to be accessible from anywhere on the host machine.
The JNLP slave is launched with the following command:
java -jar slave.jar -jnlpUrl https://jenkins.mydomain.com/computer/slave-1/slave-agent.jnlp -secret <secret>
The JNLP slave fails to connect to the configured JNLP port on the master:
INFO: Connecting to jenkins.mydomain.com:50000 (retrying:4)
java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at hudson.remoting.Engine.connect(Engine.java:400)
at hudson.remoting.Engine.run(Engine.java:298)
What is the configuration required for the JNLP slave to connect to the Jenkins master?
The JNLP port seems to use a binary protocol, not a text-based HTTP protocol, so unfortunately it can't be reverse-proxied through NGINX like the normal Jenkins pages can be.
Instead, you should:
1. Configure Global Security: check "Enable security" and set a fixed "TCP port for JNLP slave agents". This will cause all Jenkins pages to emit extra HTTP headers specifying this port: X-Hudson-CLI-Port, X-Jenkins-CLI-Port, X-Jenkins-CLI2-Port.
2. Allow your fixed TCP JNLP port through any firewall(s) so CLI clients and JNLP agents can directly reach the Jenkins server on the backend.
3. Set the system property hudson.TcpSlaveAgentListener.hostName to the hostname or IP address of your Jenkins server on the backend (see the sketch after this list). This will cause all pages to emit an extra HTTP header (X-Jenkins-CLI-Host) containing the specified hostname. This tells CLI clients where to connect, but supposedly not JNLP agents.
4. For each of your build slave machines in the list of nodes at jenkins.mydomain.com/computer/ that uses the launch method "Launch slave agents via Java Web Start": click the computer, click Configure, click the Advanced... button on the right side under Launch method, and set the "Tunnel connection through" field appropriately. Read the question-mark help; you probably just need the "HOST:" syntax, where HOST is the hostname or IP address of your Jenkins server on the backend.
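For step 3, on a Debian/Ubuntu package install, the system property can be appended to the Jenkins service arguments (a sketch; the file path and JAVA_ARGS variable are the package defaults, adjust for your installation, and <backend host or IP> is a placeholder):

# /etc/default/jenkins
JAVA_ARGS="$JAVA_ARGS -Dhudson.TcpSlaveAgentListener.hostName=<backend host or IP>"

# then restart Jenkins
sudo service jenkins restart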
References:
https://issues.jenkins-ci.org/browse/JENKINS-11982
https://support.cloudbees.com/hc/en-us/articles/218097237-How-to-troubleshoot-JNLP-slaves-connection-issues-with-Jenkins
https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
It's been almost 4 years since the OP asked this question; nevertheless, if you have reached this page looking for a proper solution, it's now possible.
I use Traefik as a reverse proxy to Jenkins, with the agent connecting over WebSocket; the inbound TCP port is now completely disabled.
The only thing you need to make sure of is that your agent/slave trusts the Jenkins server certificate (WebSocket cannot be used with -disableHttpsCertValidation or -noCertificateCheck).
If this is a Windows agent, use:
C:\Program Files (x86)\Java\jre1.8.0_251\bin\keytool.exe -import -storepass "changeit" -keystore "C:\Program Files (x86)\Java\jre1.8.0_251\lib\security\cacerts" -alias <cert_alias> -file "<path_to_cert>"
(Change the paths according to your Java version.)
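On a Linux agent, the equivalent import would look roughly like this (a sketch; <cert_alias> and <path_to_cert> are placeholders as above, and JAVA_HOME must point at the JRE the agent actually runs with):

sudo "$JAVA_HOME/bin/keytool" -import -storepass changeit \
  -keystore "$JAVA_HOME/lib/security/cacerts" \
  -alias <cert_alias> -file <path_to_cert>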

Bad gateway error on target website configured using the "nginx-proxy" docker container

I am trying to resolve a 502 gateway error on my VPS at latex.comnmodel.org, using the great nginx-proxy docker container. I'm lost in the config, so I cross-posted this problem as a question on GitHub, and here, hoping to find help.
My docker0 is 172.17.0.1, and the docker ps command returns:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dca0d15c69bf sharelatex/sharelatex "/sbin/my_init" 2 minutes ago Up 2 minutes 0.0.0.0:5000->80/tcp sharelatex
55ebd6b84a6a osixia/phpldapadmin "/container/tool/run" 3 days ago Up 3 days 80/tcp, 443/tcp sleepy_thompson
e8fe2bd50c3a osixia/openldap "/container/tool/run" 3 days ago Up 3 days 389/tcp, 636/tcp dreamy_babbage
9597ef0cded5 jwilder/nginx-proxy "/app/docker-entrypoi" 3 days ago Up 3 days 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp pensive_raman
I created the sharelatex container with and without the VIRTUAL_PORT option:
sudo docker run -d -e "VIRTUAL_HOST=latex.comnmodel.org" -e "VIRTUAL_PORT=80" -v ~/sharelatex_data:/var/lib/sharelatex -p 5000:80 --name=sharelatex sharelatex/sharelatex
Running docker exec pensive_raman grep -vE '^\s*$' /etc/nginx/conf.d/default.conf returns:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';

access_log off;

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;

server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}

upstream latex.comnmodel.org {
    # sharelatex
    server 172.17.0.5:80;
}

server {
    server_name latex.comnmodel.org;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://latex.comnmodel.org;
    }
}

upstream ldap.comnmodel.org {
    # sleepy_thompson
    server 172.17.0.4:80;
}

server {
    server_name ldap.comnmodel.org;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://ldap.comnmodel.org;
    }
}
When I ping latex.comnmodel.org, located at my VPS IP 51.255.47.40:
PING latex.comnmodel.org (51.255.47.40) 56(84) bytes of data.
64 bytes from 40.ip-51-255-47.eu (51.255.47.40): icmp_seq=1 ttl=50 time=14.6 ms
64 bytes from 40.ip-51-255-47.eu (51.255.47.40): icmp_seq=2 ttl=50 time=12.9 ms
64 bytes from 40.ip-51-255-47.eu (51.255.47.40): icmp_seq=3 ttl=50 time=13.6 ms
The output of docker logs pensive_raman shows:
nginx.1 | latex.comnmodel.org 81.64.146.124 - - [22/Nov/2015:22:40:23 +0000] "GET / HTTP/1.1" 502 181 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"
nginx.1 | latex.comnmodel.org 81.64.146.124 - - [22/Nov/2015:22:40:26 +0000] "GET / HTTP/1.1" 502 181 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"
nginx.1 | latex.comnmodel.org 81.64.146.124 - - [22/Nov/2015:22:40:32 +0000] "GET / HTTP/1.1" 502 181 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"
I tried to connect to pensive_raman (the nginx-proxy container) and ping the sharelatex container at 172.17.0.5, without success. Is there a problem with my network config?
Do I need to use the --link option of docker run to connect the nginx-proxy container and the sharelatex container?
I also have two port-80 listeners on 0.0.0.0; could that be the problem? Do I need to specify the IP 172.17.0.5 when I run the sharelatex image? That would not be clean.
The website latex.comnmodel.org returns a 502 bad gateway. What am I missing here? This is very frustrating :(
UPDATE 1:
The documentation says that if --icc=false, the output of sudo iptables -L -n contains a DROP rule. That does not seem to be the case here, so the icc option is taking its default value of true.
Chain INPUT (policy ACCEPT)
target prot opt source destination
f2b-sshd tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 22
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 172.17.0.5 tcp dpt:80
Chain f2b-sshd (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Check whether you are running the docker daemon with --icc=false (inter-container communication disabled).
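To check both possibilities, you can inspect the daemon's flags and test connectivity from inside the proxy container (a diagnostic sketch; it assumes curl is available in the nginx-proxy image and reuses the container IP from the generated config above):

# was the daemon started with --icc=false?
ps aux | grep [d]ocker

# can the proxy container reach sharelatex directly?
docker exec pensive_raman curl -sS -o /dev/null -w '%{http_code}\n' http://172.17.0.5:80/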
