IPtables NAT/Masquerade to allow OpenStack instances to access sites external to the laptop they're running on - openstack

I have OpenStack running on a Fedora laptop. OpenStack hates network interfaces that are managed by NetworkManager, so I set up a dummy interface that's used as the port for the br-ex interface that OpenStack allows instances to communicate through to the outside world. I can connect to the floating IPs fine, but they can't get past the subnet that br-ex is on. I'd like them to be able to reach addresses external to the laptop. I suspect some iptables NAT/masquerading magic is required. Does anyone have any ideas?

For a CentOS 7 OpenStack deployment with 3 nodes you should use the legacy network service instead of NetworkManager.
Just install net-tools and disable NetworkManager:
yum install net-tools -y
systemctl disable NetworkManager.service
systemctl stop NetworkManager.service
chkconfig network on
You also need iptables, not firewalld.
yum install -y iptables-services
systemctl enable iptables.service
systemctl disable firewalld.service
systemctl stop firewalld.service
The controller node has one NIC.
The network and compute nodes each have two NICs.
Edit the interfaces on all nodes:
Network node: eth0: X.X.X.X (external), eth1: 10.0.0.1, no gateway
Controller node: eth0: 10.0.0.2, gateway 10.0.0.1
Compute node: eth0: 10.0.0.3, gateway 10.0.0.1
Set up iptables like:
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
service iptables save
Also enable IP forwarding. In the file /etc/sysctl.conf add the line:
net.ipv4.ip_forward = 1
And execute command:
sysctl -p
Should work.
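For the original single-laptop setup, the equivalent would be something like this (a sketch; it assumes br-ex carries the floating-IP subnet 172.24.4.0/24 and that the laptop reaches the outside world through wlan0, so adjust both to your environment):
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -o wlan0 -j MASQUERADE
iptables -A FORWARD -i br-ex -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o br-ex -m state --state RELATED,ESTABLISHED -j ACCEPT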

Related

Route local port from Raspbian to another machine (port tunneling)

I want to route incoming TCP traffic on port 5555 on a Raspberry Pi running Raspbian to another machine and port within the same local network, and make it persistent across reboots.
Context
The objective is that if I access the service on port 5555 on localhost, it will load a different port on a remote machine. The ultimate goal is to forward port 53 (DNS) to another machine (on a non-53 port), but in the meantime I am testing with HTTP: accessing https://localhost:5555 should load https://192.168.250.250:9999, where 192.168.250.250 is a remote machine within my local network (reachable from the whole LAN; ping 192.168.250.250 works).
What I've tried
There are a lot of resources on networking like this. Most rely on port forwarding on the router, which won't work in my case as I am trying to redirect ports between hosts on my local network, accessing the machines directly. The others, on port tunnelling, all use the methods below:
iptables
sudo iptables -t nat -A PREROUTING -p tcp --sport 5555 -j DNAT --to-destination 192.168.250.250 --dport 9999
This didn't work. I tried a few variations, including:
sudo iptables -t nat -A PREROUTING -p tcp --sport 5555 -j DNAT --to-destination 192.168.250.250:9999
This didn't work, despite the rule getting registered:
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DNAT tcp -- anywhere anywhere tcp spt:5555 dpt:9999 to:192.168.250.250
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I have also installed iptables-persistent to make it persistent, but it just doesn't redirect in the first place.
I have also tried a variant of the command since I think I may have misunderstood the "source" port as being the destination:
sudo iptables -t nat -A PREROUTING -p tcp -j DNAT --to-destination 192.168.250.250:9999 --dport 5555
After any of these changes, I always run:
sudo dpkg-reconfigure iptables-persistent
sudo netfilter-persistent save
sudo netfilter-persistent restart
This is to make sure the rules are permanently applied. I have also tried this tutorial to load the configuration on reboot. Nonetheless, again, this just doesn't forward; the persistence side of it is unclear and secondary at this stage.
socat
socat tcp-listen:5555,reuseaddr,fork tcp:192.168.250.250:9999
This works fine. However, it's not persistent: as soon as I Ctrl+C in the terminal, it stops redirecting.
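(I could presumably wrap it in a small systemd unit along these lines to keep it running, a sketch where the unit name and socat path are my own guesses, but I'd rather solve it at the iptables level:)
sudo tee /etc/systemd/system/port5555-forward.service <<'EOF'
[Unit]
Description=Forward local port 5555 to 192.168.250.250:9999
After=network-online.target
[Service]
ExecStart=/usr/bin/socat tcp-listen:5555,reuseaddr,fork tcp:192.168.250.250:9999
Restart=always
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now port5555-forward.service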
nc
sudo nc -l -p 5555 -c 'nc 192.168.250.250 9999'
and
sudo nc -l -p 5555 192.168.250.250 9999
Neither works. The first one throws an error (the -c option doesn't exist); the latter doesn't do anything.
The iptables solution should work. However, you must check IPv4 forwarding and enable it (most Linux distros have it disabled by default), and this is likely to be your problem.
Check this
$ cat /proc/sys/net/ipv4/ip_forward
0
A value of 0 means IP forwarding is disabled and the kernel will not perform it.
Either do
$ echo 1 > /proc/sys/net/ipv4/ip_forward
or use sysctl
$ sysctl -w net.ipv4.ip_forward=1
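Once forwarding is on, the DNAT rules also need to be in the right chains. Roughly like this (a sketch, not tested here; addresses and ports are taken from the question):
# Redirect traffic arriving from other hosts on port 5555
iptables -t nat -A PREROUTING -p tcp --dport 5555 -j DNAT --to-destination 192.168.250.250:9999
# PREROUTING never sees locally generated packets (e.g. a request to localhost:5555),
# so a matching rule is needed in the OUTPUT chain as well
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 5555 -j DNAT --to-destination 192.168.250.250:9999
# Rewrite the source address so 192.168.250.250 sends its replies back through the Pi
iptables -t nat -A POSTROUTING -p tcp -d 192.168.250.250 --dport 9999 -j MASQUERADE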

How to access internal from external?

I have two network interfaces on a node. One is on the internal network and the other is on the external network. The internal network is 192.168.50.0/255.255.255.0.
The external network is 192.168.0.0/255.255.255.0. Kubernetes runs on 192.168.50.0/255.255.255.0. I want to reach the internal network from other local nodes without using the internal network interface. How can I solve this problem?
Without the subnet masks, I do not understand how they are different networks.
But in any case, you need to enable routing of packets from one interface to the other. I assume you are on a Linux node; there you can enable IP forwarding.
echo 1 >> /proc/sys/net/ipv4/ip_forward
Then set up some rules in iptables to perform the natting and forwarding:
Example rules:
# Always accept loopback traffic
iptables -A INPUT -i lo -j ACCEPT
# We allow traffic from the LAN side
iptables -A INPUT -i eth0 -j ACCEPT
######################################################################
#
# ROUTING
#
######################################################################
# eth0 is LAN
# eth1 is WAN
# Allow established connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Masquerade.
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
# Forwarding: allow return traffic from the WAN side
iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow outgoing connections from the LAN side.
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
https://serverfault.com/questions/453254/routing-between-two-networks-on-linux
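Masquerading only lets the internal side reach out. If the other local nodes should reach 192.168.50.0/24 directly, they also need a route pointing at the dual-homed node's external address, for example (assuming that address is 192.168.0.10):
ip route add 192.168.50.0/24 via 192.168.0.10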

Load-balancing UDP on localhost by source IP

I have a server (openvpn) which is not multithreaded and hence does not take advantage of the multiple cores in the box. I'm trying to solve the problem by running multiple servers, each on a different port, e.g. 127.0.0.1:8000, 127.0.0.1:8001, ... then load balancing the exterior 1194 port based on the source IP -- openvpn uses UDP but all packets for a client must arrive at the same server.
The issue I'm running into is how to do the load balancing. I tried IPVS, but it doesn't seem to work with servers on the same host. Then I tried nginx's new UDP feature, but again no dice. Any ideas on how to achieve this?
I discovered that plain old iptables can create such a load balancer, using the HMARK target extension (see man 8 iptables-extensions).
Essentially the HMARK target can mark a packet based on a hash of specific IP tuple parameters, source IP and source port in my case, as these will be unique per client, even behind a NAT. Then I can route the packets to the appropriate localhost server based on the mark:
iptables -A PREROUTING -t mangle -p udp --dport 1194 -j HMARK \
--hmark-tuple src,sport --hmark-mod 2 \
--hmark-rnd 0xcafeface --hmark-offset 0x8000
iptables -A PREROUTING -t nat -p udp -m mark --mark 0x8000 \
-j DNAT --to-destination 127.0.0.1:8000
iptables -A PREROUTING -t nat -p udp -m mark --mark 0x8001 \
-j DNAT --to-destination 127.0.0.1:8001
Remember to enable routing packets to localhost:
sysctl -w net.ipv4.conf.eth0.route_localnet=1
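Both the route_localnet setting and the mangle/nat rules above are lost on reboot; to keep them, something like this works (assuming a Debian-style box with iptables-persistent installed; the sysctl file name is arbitrary):
echo 'net.ipv4.conf.eth0.route_localnet = 1' > /etc/sysctl.d/99-udp-lb.conf
netfilter-persistent save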

iptables command to bridge openstack virtual network

I successfully installed OpenStack on a spare server using the Ubuntu single-node installer script. The OpenStack status page on the underlying Ubuntu instance is green across the board. From the host Ubuntu instance I can ping / ssh to all of the various OpenStack instances which have been started on the virtual network.
I now want to access the Horizon dashboard from my PC on the local network. (I can't access it from the host Ubuntu machine, since it is a server install and thus has no desktop to run a web browser on.) My local network is 192.168.1.xxx, with the Ubuntu server having a static IP of 192.168.1.200. Horizon was installed on an instance with IP 10.0.4.77.
Based on the following blog post (http://serenity-networks.com/installing-ubuntu-openstack-on-a-single-machine-instead-of-7/), it looks like I need to make an iptables change on the host Ubuntu instance to bridge between the two networks. The suggested command from the blog post is:
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.250 --dport 8000 -j DNAT --to-destination 10.0.6.241:443
Which if I modify for my network / install would be:
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.200 --dport 8000 -j DNAT --to-destination 10.0.4.77:443
However, I suspect this is not the preferred way to do this. First, because --dport 8000 seems wrong, and second because I was under the impression that Neutron should be used to create the necessary bridge.
Any help would be appreciated...
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.200 --dport 8000 -j DNAT --to-destination 10.0.4.77:443
This command has nothing to do with Neutron. It just makes your Ubuntu server a router connecting your local network and the OpenStack private network, so that you can access Horizon through an IP on your local network.
--dport 8000 is not fixed; you can change it to any unoccupied port. It only influences the Horizon address you enter in the address bar.
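Note that the DNAT rule alone is not enough: the host also has to forward packets, and the Horizon instance needs a way to send replies back to 192.168.1.x. If its default route does not already lead back through the host, masquerading the forwarded traffic is the simplest fix, roughly (a sketch, assuming the host can already reach 10.0.4.77 directly):
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -d 10.0.4.77 -p tcp --dport 443 -j MASQUERADE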

How do I hide a network interface from a process under Linux?

I am trying to do some network testing with a 10G network card which has 2 ports (eth1, eth2).
In order to test, I would use something like iperf to do bandwidth testing:
I connect a cable directly from port 1 (eth1) to port 2 (eth2).
ip addresses:
eth1: 192.168.20.1/24
eth2: 192.168.20.2/24
Terminal 1:
user@host:~$ iperf -s -B 192.168.20.1
Terminal 2:
user@host:~$ iperf -c 192.168.20.1
Results:
------------------------------------------------------------
Client connecting to 192.168.20.1, TCP port 5001
TCP window size: 169 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.20.1 port 53293 connected with 192.168.20.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 41.6 GBytes 35.7 Gbits/sec
As you can see, the data is not going through the cable at all but just through localhost or memory, which is why I am getting speeds above 10 Gbit/s.
Is it possible to hide eth1 from the command "iperf -c 192.168.20.1" so that the data is forced through the cable?
Update 1:
I have now tried the following after a reference made by @Mardanian:
Note: Ports are now eth2/eth3 (not eth1/eth2)
eth2 has mac address 00:40:c7:6c:01:12
eth3 has mac address 00:40:c7:6c:01:13
ifconfig eth2 192.168.20.1/24 up
ifconfig eth3 192.168.20.2/24 up
arp -s 192.168.20.3 00:40:c7:6c:01:12
arp -s 192.168.20.4 00:40:c7:6c:01:13
ip route add 192.168.20.3 dev eth3
ip route add 192.168.20.4 dev eth2
iptables -t nat -A POSTROUTING -d 192.168.20.4 -j SNAT --to-source 192.168.20.3
iptables -t nat -A POSTROUTING -d 192.168.20.3 -j SNAT --to-source 192.168.20.4
iptables -t nat -A PREROUTING -d 192.168.20.3 -j DNAT --to-destination 192.168.20.1
iptables -t nat -A PREROUTING -d 192.168.20.4 -j DNAT --to-destination 192.168.20.2
iperf -s -B 192.168.20.3
bind failed: Cannot assign requested address
These dummy addresses do not seem to work properly; I can't bind to them or even ping them.
arp -an
? (192.168.20.3) at 00:40:c7:6c:01:12 [ether] PERM on eth2
? (192.168.20.4) at 00:40:c7:6c:01:13 [ether] PERM on eth3
As far as I understand, arp doesn't bind an IP address to an interface; it just tells the system which interface to go through in order to find a certain IP. That is why I cannot bind to the dummy IP addresses. If I bind to the real IP addresses, then I would still be going through the local system.
iperf will always use the loopback if it detects that the destination is local. Force the kernel to route it through the interface; see linux: disable using loopback and send data via wire between 2 eth cards of one comp.
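With the rules from Update 1 in place, the idea from the linked answer is to keep the server bound to a real address and point the client at the corresponding fake address, binding the client to the other real address so the kernel has to push the traffic out the physical port. Roughly (a sketch based on the linked answer, not verified here):
iperf -s -B 192.168.20.1
iperf -c 192.168.20.3 -B 192.168.20.2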
