How to design a docker-compose setup that isolates multiple internal networks while all of them can still access the external network?

I want to create a network to verify some technologies such as NAT, P2P, etc.
The topology diagram is as follows:

   host machine
         |
     internet
         |
  ---------------
  |      |      |
 s1r    c1r    c2r
  |      |      |
  s1     c1     c2
s1r, c1r, and c2r are routers on the internet network, while s1 is the server and c1, c2 are the clients.
In order to simulate the real Internet, I hope to achieve the following effects:
1. The server and clients can't ping each other, because they belong to different subnets.
   For example: c1 -> s1, c2 ×
2. The server and clients can ping the routers.
   For example: c1 -> s1r, c1r, c2r √
3. All devices can ping the host machine and the external internet.
   For example: c1 -> www.google.com √
But the network I have implemented so far only satisfies the second rule; how should I modify it? (A small verification script is sketched below.)
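To make the three rules concrete, this is the kind of check I would run from the compose project directory (a sketch; the expected outcomes are annotated in the comments):
docker-compose exec c1 ping -c 1 -W 2 192.168.1.100   # rule 1: should FAIL (c1 -> s1)
docker-compose exec c1 ping -c 1 -W 2 192.168.3.100   # rule 1: should FAIL (c1 -> c2)
docker-compose exec c1 ping -c 1 -W 2 10.0.0.101      # rule 2: should succeed (c1 -> s1r)
docker-compose exec c1 ping -c 1 -W 2 www.google.com  # rule 3: should succeed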
Also, when I ping c1 (192.168.2.100) from s1 (192.168.1.100), the ping goes through. Using tcpdump to capture packets on c1r, I see the request recorded as:
03:19:46.771468 IP 192.168.1.100 > p2ptest_c1_1.p2ptest_c1-internal: ICMP echo request, id 181, seq 4, length 64
03:19:46.771487 IP p2ptest_c1_1.p2ptest_c1-internal > 192.168.1.100: ICMP echo reply, id 181, seq 4, length 64
I think this means that NAT does not take effect: the source address of the ping is not the address of s1r (10.0.0.101) but the address of s1 (192.168.1.100), so the NAT translation function has not been successfully simulated.
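Two diagnostics that would narrow this down (a sketch, not something from my original setup; note that with two attached networks Docker does not guarantee which network becomes eth0 versus eth1 inside the container, so the -o eth0/eth1 matches may not hit the interfaces you expect):
docker-compose exec s1r ip -o -4 addr show                    # which interface carries 10.0.0.101 vs 192.168.1.2?
docker-compose exec s1r iptables -t nat -L POSTROUTING -n -v  # pkts counter stuck at 0 => MASQUERADE never matched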
My docker-compose.yaml is as follows; please tell me where I went wrong. Thanks.
version: "3.3"
services:
  s1r:
    image: ubuntu
    cap_add:
      - NET_ADMIN
    hostname: s1r
    networks:
      s-internal:
        ipv4_address: 192.168.1.2
      internet:
        ipv4_address: 10.0.0.101
    volumes:
      - ./volumes/programs:/home/programs
    command: >-
      sh -c "apt update -y &&
      apt install -y net-tools traceroute iputils-ping iptables iproute2 netcat tcpdump &&
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE &&
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE &&
      ip route add 192.168.2.0/24 via 10.0.0.102 &&
      ip route add 192.168.3.0/24 via 10.0.0.103 &&
      tail -f /dev/null"
  c1r:
    image: ubuntu
    cap_add:
      - NET_ADMIN
    hostname: c1r
    networks:
      c1-internal:
        ipv4_address: 192.168.2.2
      internet:
        ipv4_address: 10.0.0.102
    volumes:
      - ./volumes/programs:/home/programs
    command: >-
      sh -c "apt update -y &&
      apt install -y net-tools traceroute iputils-ping iptables iproute2 netcat tcpdump &&
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE &&
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE &&
      ip route add 192.168.1.0/24 via 10.0.0.101 &&
      ip route add 192.168.3.0/24 via 10.0.0.103 &&
      tail -f /dev/null"
  c2r:
    image: ubuntu
    cap_add:
      - NET_ADMIN
    hostname: c2r
    networks:
      c2-internal:
        ipv4_address: 192.168.3.2
      internet:
        ipv4_address: 10.0.0.103
    volumes:
      - ./volumes/programs:/home/programs
    command: >-
      sh -c "apt update -y &&
      apt install -y net-tools traceroute iputils-ping iptables iproute2 netcat tcpdump &&
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE &&
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE &&
      ip route add 192.168.1.0/24 via 10.0.0.101 &&
      ip route add 192.168.2.0/24 via 10.0.0.102 &&
      tail -f /dev/null"
  s1:
    image: ubuntu
    cap_add:
      - NET_ADMIN
    hostname: s1
    networks:
      s-internal:
        ipv4_address: 192.168.1.100
    volumes:
      - ./volumes/programs:/home/programs
    command: >-
      sh -c "apt update -y &&
      apt install -y net-tools traceroute iputils-ping iptables iproute2 netcat tcpdump &&
      ip route del default &&
      ip route add default via 192.168.1.2 &&
      tail -f /dev/null"
  c1:
    image: ubuntu
    cap_add:
      - NET_ADMIN
    hostname: c1
    networks:
      c1-internal:
        ipv4_address: 192.168.2.100
    volumes:
      - ./volumes/programs:/home/programs
    command: >-
      sh -c "apt update -y &&
      apt install -y net-tools traceroute iputils-ping iptables iproute2 netcat tcpdump &&
      ip route del default &&
      ip route add default via 192.168.2.2 &&
      tail -f /dev/null"
  c2:
    image: ubuntu
    cap_add:
      - NET_ADMIN
    hostname: c2
    networks:
      c2-internal:
        ipv4_address: 192.168.3.100
    volumes:
      - ./volumes/programs:/home/programs
    command: >-
      sh -c "apt update -y &&
      apt install -y net-tools traceroute iputils-ping iptables iproute2 netcat tcpdump &&
      ip route del default &&
      ip route add default via 192.168.3.2 &&
      tail -f /dev/null"
networks:
  internet:
    driver: bridge
    ipam:
      config:
        - subnet: 10.0.0.0/24
  s-internal:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.1.0/24
  c1-internal:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.2.0/24
  c2-internal:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.3.0/24
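Docker's bridge driver reserves an address in each subnet (typically the .1) for the bridge itself and installs it as the containers' default gateway, which is why the s1/c1/c2 services above delete the default route and repoint it at their router. A quick check that the override took effect (a sketch):
docker-compose exec s1 ip route   # expect: default via 192.168.1.2
docker-compose exec c1 ip route   # expect: default via 192.168.2.2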

Related

Incoming Connections Getting Dropped with sshuttle running

I am running the traffic from my docker container through sshuttle to a remote server, which is working great with this command:
sshuttle -l 0.0.0.0 -r user@server 0/0 -v
The problem is that I need incoming connections to be allowed to reach my local server via the remote server's ip address and a specific port. I've tried creating an additional ssh tunnel via
ssh -NR 0.0.0.0:43523:localhost:43523 user@server
This almost works, as the incoming connections show up in the sshuttle verbose logs, but the connection never establishes (connection timed out from the client side).
Here are the iptables rules created by sshuttle at runtime:
iptables -t nat -N sshuttle-12300
iptables -t nat -F sshuttle-12300
iptables -t nat -I OUTPUT 1 -j sshuttle-12300
iptables -t nat -I PREROUTING 1 -j sshuttle-12300
iptables -t nat -A sshuttle-12300 -j RETURN -m ttl --ttl 63
iptables -t nat -A sshuttle-12300 -j RETURN -m addrtype --dst-type LOCAL
iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 0.0.0.0/0 -p tcp --to-ports 12300
So my question is: What is causing the incoming connections to not work? And how can I fix it?
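One avenue worth checking (an assumption on my part, not something established in the post): for ssh -R to bind 0.0.0.0 on the remote side at all, sshd on the remote server has to allow it. A sketch:
grep GatewayPorts /etc/ssh/sshd_config   # needs 'GatewayPorts yes' (or 'clientspecified')
sudo systemctl restart sshd              # apply after changing the option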

Restrict Docker exposed port to only specific IP addresses

How do I restrict a container's port exposed by Docker to a list of IPs, so that only the IPs on this list can access the port?
I tried that:
iptables -I DOCKER -p tcp --dport PORT_X -j REJECT --reject-with icmp-port-unreachable
iptables -I DOCKER -p tcp --dport PORT_X --source EXTERNAL_IP_1 --destination HOST_IP_1 -j ACCEPT
iptables -I DOCKER -p tcp --dport PORT_X --source EXTERNAL_IP_2 --destination HOST_IP_1 -j ACCEPT
iptables -I DOCKER -p tcp --dport PORT_X --source EXTERNAL_IP_3 --destination HOST_IP_1 -j ACCEPT
I had the same problem. I solved it with these rules:
iptables -I DOCKER-USER -i <your_interface_name> -j DROP
iptables -I DOCKER-USER -i <your_interface_name> -s <your_first_ip> -j ACCEPT
iptables -I DOCKER-USER -i <your_interface_name> -s <your_second_ip> -j ACCEPT
Be careful: DOCKER-USER is a chain that will not be deleted when the docker service restarts.
You should be able to add your port flag as well, but I'm not an expert and it wasn't what I needed.
Your policy is a whitelist, so it's better to create a custom user chain to handle this on its own.
For example, I have a redis container, I want it only serve for specific IPs:
$ docker run -d -p 6379:6379 redis:2.8
After starting the redis container, the iptables rules look like this:
$ iptables -t filter -nL
Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.2           tcp dpt:6379
Create our custom chain:
$ iptables -N CUSTOM_REDIS
$ iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source 172.31.101.37 --destination 172.17.0.2 -j ACCEPT
$ iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source 172.31.101.38 --destination 172.17.0.2 -j ACCEPT
$ iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source 0.0.0.0/0 --destination 172.17.0.2 -j DROP
Replace the original rule with custom chain:
$ iptables -R DOCKER 1 -p tcp --source 0.0.0.0/0 --destination 172.17.0.2 --dport 6379 -j CUSTOM_REDIS
Now my redis can only be accessed from the IPs 172.31.101.37 and 172.31.101.38.
Note: 172.17.0.2 is the IP of the redis container.
From the docker guide here:
Docker’s forward rules permit all external source IPs by default. To allow only a specific IP or network to access the containers, insert a negated rule at the top of the DOCKER filter chain. For example, to restrict external access such that only source IP 8.8.8.8 can access the containers, the following rule could be added:
$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP
In your case, since you want to allow multiple IP addresses, I think something like this should work:
iptables -I DOCKER -s EXTERNAL_IP_1 -p tcp --dport PORT_X -j ACCEPT
iptables -I DOCKER -s EXTERNAL_IP_2 -p tcp --dport PORT_X -j ACCEPT
iptables -I DOCKER -s EXTERNAL_IP_3 -p tcp --dport PORT_X -j ACCEPT
iptables -I DOCKER -p tcp --dport PORT_X -j REJECT --reject-with icmp-port-unreachable
You may also want to prevent access at the Docker level by publishing the port on the specific IP you want to listen on, using the -p 1.2.3.4:6379:6379/tcp syntax; that way the container will listen only on that IP and interface (see the example below).
If that IP is a private one, you can avoid iptables entirely, because you have restricted access to the local/private network.
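For example (a sketch; 198.51.100.10 stands in for whichever host address you actually want to publish on):
docker run -d -p 198.51.100.10:6379:6379 redis:2.8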
You can use ufw from inside the docker container:
sudo ufw [--dry-run] [delete] [insert NUM] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] [proto protocol] [from ADDRESS [port PORT]][to ADDRESS [port PORT]]
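For instance, a concrete rule following that synopsis might be (a sketch with a placeholder source address):
sudo ufw allow proto tcp from 203.0.113.5 to any port 6379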

kubernetes ClusterIP service not able to route requests to containers on other nodes

We have 3 physical machines with Kubernetes installed on CentOS 7. One machine is used as both master and worker, and the other 2 machines are used as workers.
I have a service as defined below.
kubectl get service hostnames -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-01-08T21:26:54Z
  name: hostnames
  namespace: default
  resourceVersion: "1209904"
  selfLink: /api/v1/namespaces/default/services/hostnames
  uid: 2d6b6ffe-d5e9-11e6-b2d8-842b2b55e882
spec:
  clusterIP: 10.254.241.39
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: hostnames
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Invoking the service only works when the request happens to be routed to the container that is physically on the same machine.
[root@server5 hostnames]# curl 10.254.241.39:80
^C
[root@server5 hostnames]# curl 10.254.241.39:80
hostnames-9ga5b
[root@server5 hostnames]# curl 10.254.241.39:80
hostnames-9ga5b
[root@server5 hostnames]# curl 10.254.241.39:80
The endpoints exist, and invoking the endpoint IP addresses directly works.
[root@server5 hostnames]# curl 10.20.36.4:9376; curl 10.20.48.6:9376; curl 10.20.63.2:9376
hostnames-9ga5b
hostnames-ygxnk
hostnames-vcfql
The iptables rules created by kube-proxy are shown below.
[root@server5 hostnames]# iptables-save | grep hostnames
-A KUBE-SEP-3UQOVTFJM332BGMS -s 10.20.48.6/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-3UQOVTFJM332BGMS -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.20.48.6:9376
-A KUBE-SEP-6ZUKVGLXRG6BMNNI -s 10.20.63.2/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6ZUKVGLXRG6BMNNI -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.20.63.2:9376
-A KUBE-SEP-UMK676VFQ5WVT4CI -s 10.20.36.4/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-UMK676VFQ5WVT4CI -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.20.36.4:9376
-A KUBE-SERVICES -d 10.254.241.39/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-UMK676VFQ5WVT4CI
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-3UQOVTFJM332BGMS
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-6ZUKVGLXRG6BMNNI
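(Those statistic probabilities implement a uniform three-way split: the first rule matches with probability 1/3, the second matches half of the remaining 2/3, which is again 1/3, and the final rule catches the last 1/3.)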
Checking the kube-proxy logs does not show any errors, even after increasing the logging level with -v 4.
We checked the behavior above on the other 2 machines and it is identical (the iptables rules, being able to route service requests only to the local container, and the endpoints being reachable directly via the container IP address).
Is there a reason the Kubernetes service is not able to route requests to containers running on other physical machines? The firewall on all machines is disabled and stopped.
Thanks.
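One routine check for this symptom (an assumption, not something verified above): cross-node service traffic requires IP forwarding on every node, and the DNAT rules only see bridged packets when bridge-nf is enabled. A sketch:
sysctl net.ipv4.ip_forward                  # should print 1 on every node
sysctl net.bridge.bridge-nf-call-iptables   # should print 1 so bridged traffic traverses the rules above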

Converting IPTables rules to Firewalld

I'm working on setting up Cuckoo Sandbox and I have several IPTables rules that need to be converted to Firewalld rules.
Here's the reference page for the Cuckoo Sandbox install guide: http://docs.cuckoosandbox.org/en/latest/installation/guest/network/#virtual-networking
The 3 lines that I need to convert from IPTables format are (Subnet removed):
iptables -A FORWARD -o eth0 -i vboxnet0 -s 0.0.0.0/0 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A POSTROUTING -t nat -j MASQUERADE
I've made an attempt to convert the rules and implement them using firewall-cmd, and here are the three updated rules that I came up with:
firewall-cmd --permanent --direct --add-rule ipv4 -A FORWARD -o eth0 -i vboxnet0 -s 0.0.0.0/0 -m conntrack --ctstate NEW -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter POSTROUTING 0 -t nat -j MASQUERADE
However, when I attempt to add one of the above rules using sudo firewall-cmd I get a response that says:
wrong priority
usage: --direct --add-rule { ipv4 | ipv6 | eb } <table> <chain> <priority> <args>
What am I doing wrong?
Thanks for any help!
It looks like you have just copied and pasted your iptables arguments onto the back of a firewall-cmd command: that will not work. The error message is telling you that it is not finding what it expects after 'ipv4': table, chain, priority, and args. You need something like:
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 9000 -j ACCEPT
You can add MASQUERADE in a couple of ways:
firewall-cmd --permanent --zone=external --add-masquerade
firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -o eth1 -j MASQUERADE
Here is a good reference for getting started with firewalld: https://www.certdepot.net/rhel7-get-started-firewalld/
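Applying that pattern to the three rules from the question, the direct-rule conversions might look like this (a sketch, untested, keeping the interface names from the question):
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -o eth0 -i vboxnet0 -s 0.0.0.0/0 -m conntrack --ctstate NEW -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -j MASQUERADE
firewall-cmd --reload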

IPtables NAT/Masquerade to allow OpenStack instances to access sites external to the laptop they're running on

I have OpenStack running on a Fedora laptop. Openstack hates network interfaces that are managed by NetworkManager, so I set up a dummy interface that's used as the port for the br-ex interface that OpenStack allows instances to communicate through to the outside world. I can connect to the floating ips fine, but they can't get past the subnet that br-ex has. I'd like them to be to reach addresses external to the laptop. I suspect some iptables nat/masquerading magic is required. Does anyone have any ideas?
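The usual shape of that magic is something like the following (a sketch; wlan0 and the 172.24.4.0/24 br-ex subnet are assumptions that would need adjusting to the actual interfaces and addressing):
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 172.24.4.0/24 ! -d 172.24.4.0/24 -o wlan0 -j MASQUERADE
iptables -A FORWARD -i br-ex -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o br-ex -m state --state RELATED,ESTABLISHED -j ACCEPT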
For CentOS 7 OpenStack with 3 nodes you should use the classic network service.
just install net-tools and disable NetworkManager:
yum install net-tools -y;
systemctl disable NetworkManager.service
systemctl stop NetworkManager.service
chkconfig network on
You also need iptables, not firewalld:
yum install -y iptables-services
systemctl enable iptables.service
systemctl disable firewalld.service
systemctl stop firewalld.service
The controller node has one NIC; the network and compute nodes have 2 NICs.
Edit the interfaces on all nodes:
Network node: eth0: X.X.X.X (external), eth1: 10.0.0.1 - no gateway
Controller node: eth0: 10.0.0.2 - gateway 10.0.0.1
Compute node: eth0: 10.0.0.3 - gateway 10.0.0.1
Set up iptables like:
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
service iptables save
Also enable forwarding: in /etc/sysctl.conf add the line:
net.ipv4.ip_forward = 1
and execute the command:
sysctl -p
Should work.
