iptables rules break communication between Docker containers - nginx

nginx-proxy is a Docker container that acts as a reverse proxy to other containers. It uses the Docker API to detect other containers and automatically proxies traffic to them.
I have a simple nginx-proxy setup: (where subdomain.example.com is replaced with my domain)
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -e VIRTUAL_HOST=subdomain.example.com kdelfour/cloud9-docker
It works with no problem when I have my firewall off. When I have my firewall on, I get a 504 Gateway Time-out error from nginx. This means that I'm able to see nginx on port 80, but my firewall rules seem to be restricting container-to-container and/or Docker API traffic.
I created a GitHub issue, but the creator of nginx-proxy said he had never run into this issue.
These are the "firewall off" rules: (these work)
iptables -F
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
These are my "firewall on" rules: (these don't work)
# Based on tutorial from http://www.thegeekstuff.com/scripts/iptables-rules / http://www.thegeekstuff.com/2011/06/iptables-rules-examples/
# Delete existing rules
iptables -F
# Set default chain policies
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# Allow loopback access
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow inbound/outbound SSH
iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
# Allow inbound/outbound HTTP
iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
# Allow inbound/outbound HTTPS
iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
# Ping from inside to outside
iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
# Ping from outside to inside
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT
# Allow outbound DNS
iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT
iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT
# Allow outbound NTP
iptables -A OUTPUT -p udp -o eth0 --dport 123 -j ACCEPT
iptables -A INPUT -p udp -i eth0 --sport 123 -j ACCEPT
# This bit is from https://blog.andyet.com/2014/09/11/docker-host-iptables-forwarding
# Docker Rules: Forward chain between docker0 and eth0.
iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o docker0 -j ACCEPT
ip6tables -A FORWARD -i docker0 -o eth0 -j ACCEPT
ip6tables -A FORWARD -i eth0 -o docker0 -j ACCEPT
iptables-save > /etc/network/iptables.rules
Why won't the proxy work when I have my firewall on?

Thanks to advice from Joel C (see the comments above): there was a problem on the FORWARD chain, which I fixed like so:
iptables -A FORWARD -i docker0 -j ACCEPT
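For completeness, a hedged sketch of how the Docker section of the script above might look with that fix folded in (the state-match reply rule is my own assumption, not part of the accepted fix, and this assumes the default docker0 bridge):
# Allow traffic originating from containers, including container-to-container on the bridge
iptables -A FORWARD -i docker0 -j ACCEPT
# Allow reply traffic back into the bridge
iptables -A FORWARD -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables-save > /etc/network/iptables.rules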

Related

Scanning with nmap through iptables restrictions

We need to find a port on a server and read info from that port. If we try to use nmap here, we will be banned, because the iptables config blocks scans. What nmap flags can we use to find this port without being banned?
iptables rules for the first port:
ipset create scanned_ports hash:ip,port family inet hashsize 32768 maxelem 65536 timeout 1
iptables -A INPUT -p tcp -m state --state INVALID -j DROP
iptables -A INPUT -p tcp -m state --state NEW -m set ! --match-set scanned_ports src,dst -m hashlimit --hashlimit-above 1000/hour --hashlimit-burst 1000 --hashlimit-mode srcip --hashlimit-name portscan --hashlimit-htable-expire 10000 -j SET --add-set port_scanners src --exist
iptables -A INPUT -p tcp -m state --state NEW -m set --match-set port_scanners src -j DROP
iptables -A INPUT -p tcp -m state --state NEW -j SET --add-set scanned_ports src,dst
nohup python -mSimpleHTTPServer $_PORT > /dev/null &
iptables rules for the second port:
ipset create scanned_ports hash:ip,port family inet hashsize 32768 maxelem 65536 timeout 1
iptables -A INPUT -p tcp -m state --state INVALID -j DROP
iptables -A INPUT -p tcp -m state --state NEW -m set ! --match-set scanned_ports src,dst -m hashlimit --hashlimit-above 1000/hour --hashlimit-burst 1000 --hashlimit-mode srcip --hashlimit-name portscan --hashlimit-htable-expire 10000 -j SET --add-set port_scanners src --exist
iptables -A INPUT -p tcp -m state --state NEW -m set --match-set port_scanners src -j DROP
iptables -A INPUT -p tcp -m state --state NEW -j SET --add-set scanned_ports src,dst
iptables -I INPUT -p tcp --dport $_PORT --tcp-flags ALL SYN -j REJECT --reject-with tcp-reset
nohup python -mSimpleHTTPServer $_PORT > /dev/null &
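Since the hashlimit above only triggers past 1000 new connections per hour per source IP, one hedged approach (a sketch, not a guaranteed bypass; TARGET_HOST is a placeholder) is simply to scan slowly enough to stay under that budget, for example:
# ~0.25 probes/second is roughly 900 ports/hour, below the 1000/hour threshold
nmap -Pn -n -p 1-65535 --max-rate 0.25 --max-retries 1 TARGET_HOST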

How to open port 5250 on Red Hat

I have configured a website, but it does not seem to be accessible, and I found that I must open port 5250. I searched and could not find a good way to do it. I am wondering if anyone could advise?
I read that by default the rules are stored in
vi /etc/sysconfig/iptables
in which I see only:
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8089 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
Then I added the following:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 5250 -j ACCEPT
Then I restarted.
I also checked the firewall status, which made me more confused:
# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
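Since firewalld is inactive and the rules come from the iptables service, a hedged sketch of one way to do this: add the rule to the INPUT chain (the file above has no RH-Firewall-1-INPUT chain) above the final REJECT, then save. This assumes the iptables-services package is installed:
# insert above the final REJECT rule (rule 8 in the file shown above)
iptables -I INPUT 8 -m state --state NEW -m tcp -p tcp --dport 5250 -j ACCEPT
# persist the running rules back to /etc/sysconfig/iptables
service iptables save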

Restrict a Docker-exposed port to specific IP addresses

How can I restrict a container port exposed by Docker to a list of IPs, so that only those IPs can access the port?
I tried this:
iptables -I DOCKER -p tcp --dport PORT_X -j REJECT --reject-with icmp-port-unreachable
iptables -I DOCKER -p tcp --dport PORT_X --source EXTERNAL_IP_1 --destination HOST_IP_1 -j ACCEPT
iptables -I DOCKER -p tcp --dport PORT_X --source EXTERNAL_IP_2 --destination HOST_IP_1 -j ACCEPT
iptables -I DOCKER -p tcp --dport PORT_X --source EXTERNAL_IP_3 --destination HOST_IP_1 -j ACCEPT
I had the same problem. I solved it with these rules:
iptables -I DOCKER-USER -i <your_interface_name> -j DROP
iptables -I DOCKER-USER -i <your_interface_name> -s <your_first_ip> -j ACCEPT
iptables -I DOCKER-USER -i <your_interface_name> -s <your_second_ip> -j ACCEPT
Careful: DOCKER-USER is a chain that will not be deleted when the Docker service restarts.
You should be able to add a port match as well, but I'm not an expert and it was not needed in my case.
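If you do need a port match in DOCKER-USER, note that packets there have already been DNAT'ed, so the Docker documentation suggests matching the originally published port via the conntrack module rather than --dport. A hedged sketch (eth0, port 6379 and 203.0.113.10 are placeholders):
# drop everything arriving on the external interface for the published port...
iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 6379 -j DROP
# ...then insert the allowed source above it (-I puts it on top)
iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 6379 -s 203.0.113.10 -j ACCEPT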
Since your policy is a whitelist, it's better to create a custom user chain to handle this on its own.
For example, I have a Redis container that I want to serve only specific IPs:
$ docker run -d -p 6379:6379 redis:2.8
After starting the Redis container, the iptables rules look like this:
$ iptables -t filter -nL
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:6379
Create our custom chain:
$ iptables -N CUSTOM_REDIS
$ iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source 172.31.101.37 --destination 172.17.0.2 -j ACCEPT
$ iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source 172.31.101.38 --destination 172.17.0.2 -j ACCEPT
$ iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source 0.0.0.0/0 --destination 172.17.0.2 -j DROP
Replace the original rule with the custom chain:
$ iptables -R DOCKER 1 -p tcp --source 0.0.0.0/0 --destination 172.17.0.2 --dport 6379 -j CUSTOM_REDIS
Now my Redis can only be accessed from the IPs 172.31.101.37 and 172.31.101.38.
Note:
172.17.0.2 is the IP of the Redis container
From the docker guide here:
Docker’s forward rules permit all external source IPs by default. To allow only a specific IP or network to access the containers, insert a negated rule at the top of the DOCKER filter chain. For example, to restrict external access such that only source IP 8.8.8.8 can access the containers, the following rule could be added:
$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP
In your case since you want to allow multiple IP addresses I think something like this should work:
iptables -I DOCKER -s EXTERNAL_IP_1 -p tcp --dport PORT_X -j ACCEPT
iptables -I DOCKER -s EXTERNAL_IP_2 -p tcp --dport PORT_X -j ACCEPT
iptables -I DOCKER -s EXTERNAL_IP_3 -p tcp --dport PORT_X -j ACCEPT
iptables -I DOCKER -p tcp --dport PORT_X -j REJECT --reject-with icmp-port-unreachable
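One caveat: -I inserts at position 1, so running the four commands in exactly that order would leave the REJECT above the ACCEPT rules. Run the REJECT insert first (or append it with -A) and then verify the final order, for example:
iptables -L DOCKER -n -v --line-numbers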
You may also want to restrict access from Docker directly by publishing on the specific IP you want to listen on, using the -p 1.2.3.4:6379:6379/tcp syntax; that way the container will listen only on that IP and interface.
If that IP is a private one, you can avoid iptables entirely, because access is then restricted to the local/private network.
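For example, a hedged sketch where 10.0.0.5 stands in for a private address on the host:
docker run -d -p 10.0.0.5:6379:6379 redis:2.8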
You can use ufw from inside the Docker container:
sudo ufw [--dry-run] [delete] [insert NUM] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] [proto protocol] [from ADDRESS [port PORT]] [to ADDRESS [port PORT]]
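A concrete sketch of that synopsis (203.0.113.10 and port 6379 are placeholders):
sudo ufw allow proto tcp from 203.0.113.10 to any port 6379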

Kubernetes networking issue - Service nodePort can't be reached externally

I have a 3-node Kubernetes cluster: node1 (master), node2, and node3. I have a pod currently scheduled on node3 that I'd like to expose externally to the cluster, so I have a Service of type NodePort with nodePort set to 30080. I can successfully run curl localhost:30080 locally on each node: node1, node2, and node3. But externally, curl nodeX:30080 only works against node3; the other two time out. tcpdump confirms node1 and node2 are receiving the request but not responding.
How can I make this work for all three nodes so I don't have to keep track of which node the pod is currently scheduled on? My best guess is that this is an iptables issue where I'm missing an iptables rule to DNAT traffic if the source IP isn't localhost. That being said, I have no idea how to troubleshoot to confirm this is the issue and then how to fix it. It seems like that rule should automatically be there.
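One hedged way to confirm that guess is to watch the FORWARD chain counters and sniff the overlay interface while repeating the external curl (assuming the flannel overlay interface is flannel.1 and the pod port is 5000):
sudo iptables -L FORWARD -v -n | head -5     # default policy and per-rule packet counters
sudo tcpdump -ni flannel.1 tcp port 5000     # does the request reach the overlay?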
Here's some info on my setup:
kube-ravi196: 10.163.148.196
kube-ravi197: 10.163.148.197
kube-ravi198: 10.163.148.198
CNI: Canal (flannel + calico)
Host OS: Ubuntu 16.04
Cluster set up through kubeadm
$ kubectl get pods --namespace=kube-system -l "k8s-app=kube-registry" -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kube-registry-v0-1mthd 1/1 Running 0 39m 192.168.75.13 ravi-kube198
$ kubectl get service --namespace=kube-system -l "k8s-app=kube-registry"
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-registry 10.100.57.109 <nodes> 5000:30080/TCP 5h
$ kubectl get pods --namespace=kube-system -l "k8s-app=kube-proxy" -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kube-proxy-1rzz8 1/1 Running 0 40m 10.163.148.198 ravi-kube198
kube-proxy-fz20x 1/1 Running 0 40m 10.163.148.197 ravi-kube197
kube-proxy-lm7nm 1/1 Running 0 40m 10.163.148.196 ravi-kube196
Note that curl localhost from node ravi-kube196 is successful (a 404 is good).
deploy#ravi-kube196:~$ curl localhost:30080/test
404 page not found
But trying to curl the IP from a machine outside the cluster fails:
ravi#rmac2015:~$ curl 10.163.148.196:30080/test
(hangs)
Then trying to curl the IP of the node the pod is scheduled on works:
ravi#rmac2015:~$ curl 10.163.148.198:30080/test
404 page not found
Here are my iptables rules for that service/pod on the 196 node:
deploy#ravi-kube196:~$ sudo iptables-save | grep registry
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp --dport 30080 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp --dport 30080 -j KUBE-SVC-JV2WR75K33AEZUK7
-A KUBE-SEP-7BIJVD3LRB57ZVM2 -s 192.168.75.13/32 -m comment --comment "kube-system/kube-registry:registry" -j KUBE-MARK-MASQ
-A KUBE-SEP-7BIJVD3LRB57ZVM2 -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp -j DNAT --to-destination 192.168.75.13:5000
-A KUBE-SEP-7QBKTOBWZOW2ADYZ -s 10.163.148.196/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-7QBKTOBWZOW2ADYZ -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.196:1
-A KUBE-SEP-DARQFIU6CIZ6DHSZ -s 10.163.148.198/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-DARQFIU6CIZ6DHSZ -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.198:1
-A KUBE-SEP-KXX2UKHAML22525B -s 10.163.148.197/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-KXX2UKHAML22525B -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.197:1
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.106.192.243/32 -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP" -m tcp --dport 1 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.106.192.243/32 -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP" -m tcp --dport 1 -j KUBE-SVC-E66MHSUH4AYEXSQE
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.100.57.109/32 -p tcp -m comment --comment "kube-system/kube-registry:registry cluster IP" -m tcp --dport 5000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.100.57.109/32 -p tcp -m comment --comment "kube-system/kube-registry:registry cluster IP" -m tcp --dport 5000 -j KUBE-SVC-JV2WR75K33AEZUK7
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-7QBKTOBWZOW2ADYZ
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-KXX2UKHAML22525B
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-SEP-DARQFIU6CIZ6DHSZ
-A KUBE-SVC-JV2WR75K33AEZUK7 -m comment --comment "kube-system/kube-registry:registry" -j KUBE-SEP-7BIJVD3LRB57ZVM2
kube-proxy logs from 196 node:
deploy#ravi-kube196:~$ kubectl logs --namespace=kube-system kube-proxy-lm7nm
I0105 06:47:09.813787 1 server.go:215] Using iptables Proxier.
I0105 06:47:09.815584 1 server.go:227] Tearing down userspace rules.
I0105 06:47:09.832436 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0105 06:47:09.836004 1 conntrack.go:66] Setting conntrack hashsize to 32768
I0105 06:47:09.836232 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0105 06:47:09.836260 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I found the cause of why the service couldn't be reached externally: the iptables FORWARD chain was dropping the packets. I raised an issue with Kubernetes at https://github.com/kubernetes/kubernetes/issues/39658 with a lot more detail there. A (poor) workaround is to change the default FORWARD policy to ACCEPT.
Update 1/10
I raised an issue with Canal, https://github.com/projectcalico/canal/issues/31, as it appears to be a Canal-specific issue: traffic forwarded to the flannel.1 interface is getting dropped. A better fix than changing the default FORWARD policy to ACCEPT is to just add a rule for the flannel.1 interface: sudo iptables -A FORWARD -o flannel.1 -j ACCEPT
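A hedged sketch of that workaround, assuming flannel.1 is the overlay interface (the reverse-direction rule is an extra assumption on my part, not part of the original fix):
sudo iptables -S FORWARD | head -1                # shows the default policy, e.g. "-P FORWARD DROP"
sudo iptables -A FORWARD -o flannel.1 -j ACCEPT   # allow traffic forwarded out the overlay
sudo iptables -A FORWARD -i flannel.1 -j ACCEPT   # optional reverse direction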

Direct port 5060 for eth1

I have two network interfaces: eth0 is the internal network used to connect the PCs running the softphone, and eth1 is the link to the internet. I'm using iptables on CentOS 6.5 to direct all the output of FreePBX (Asterisk) to eth1, but without success.
The rule:
iptables -A PREROUTING -i eth1 -t mangle -p tcp --dport 5060 -j MARK --set-mark 1
Take a look at sip.conf. In the [general] section there is a bindaddr or udpbindaddr setting. Set it to 0.0.0.0 to make sure Asterisk listens on all interfaces. You can check it with:
netstat -lnap | grep 5060
udp 0 0 0.0.0.0:5060 0.0.0.0:* 30822/asterisk
Then restrict access on the unnecessary interfaces using iptables, for example (note the order):
iptables -A INPUT -i eth1 -p udp --dport 5060 -j ACCEPT
iptables -A INPUT -p udp --dport 5060 -j DROP
iptables -A OUTPUT -o eth1 -p udp --sport 5060 -j ACCEPT
iptables -A OUTPUT -p udp --sport 5060 -j DROP
If the public IP is on the same server, you need to use the INPUT chain and ACCEPT the destination.
If it is on another host, you have to use DNAT.
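For the DNAT case, a hedged sketch where 192.168.1.10 stands in for the internal Asterisk host reached via eth0:
# rewrite SIP traffic arriving on the public interface to the internal host
iptables -t nat -A PREROUTING -i eth1 -p udp --dport 5060 -j DNAT --to-destination 192.168.1.10:5060
# allow the forwarded traffic through
iptables -A FORWARD -i eth1 -o eth0 -p udp -d 192.168.1.10 --dport 5060 -j ACCEPT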
