I have an OpenVPN server set up on an AWS instance and I would like to use it to route traffic from my home client (client1, 192.168.0.0/24) to a client (client2, 10.81.0.0/16) running on a machine in a second network, through the OpenVPN server. I want to route connections from client1 into client2's network so that I can reach several devices on that network. However, I don't have control over the gateway in client2's network, so I can't add a route back to the VPN.
As far as I can tell the OpenVPN configuration is set up correctly in that once client1 and client2 are connected I can access client2 from client1. The routes are also set up so that if I ping a machine on client2's network the traffic is routed through the VPN, but no response ever comes back, because the devices on client2's network do not know how to route the VPN IPs back to client2.
I am assuming that I need to set up NAT masquerading on client2, but I am unsure how to handle this properly as I am not that familiar with iptables.
What I tried on client2:
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
server.conf
port 1194
proto udp
dev tun
user nobody
group nogroup
persist-key
persist-tun
keepalive 10 120
topology subnet
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
client-to-client
route 10.81.0.0 255.255.0.0
push "route 10.81.0.0 255.255.0.0"
dh none
ecdh-curve prime256v1
... encryption info ...
client-config-dir /etc/openvpn/ccd
status /var/log/openvpn/status.log
verb 3
ccd/client2
iroute 10.81.0.0 255.255.0.0
For anyone with a similar issue, I found this post, https://arashmilani.com/post?id=53, which helped me solve it.
For me, I needed to add the following instead of what I had tried:
iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o eno2 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eno2 -o tun+ -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eno2 -j MASQUERADE
tun0 is the VPN tunnel interface and eno2 is the interface on client2's network. 10.8.0.0/24 is the default VPN subnet.
The missing FORWARD rules were the big issue; note also that the masquerade matches on the VPN's IP range and is applied on the output interface.
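Two things worth adding: forwarding between tun0 and eno2 also requires net.ipv4.ip_forward to be enabled on client2, and iptables rules do not survive a reboot. A rough sketch of how both can be made persistent on a Debian-style system (the package name is an assumption, adjust for your distro):
# enable forwarding now and at boot (on client2)
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-forward.conf
sudo sysctl --system
# save the iptables rules so they are restored at boot
sudo apt-get install iptables-persistent
sudo sh -c "iptables-save > /etc/iptables/rules.v4"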
I have a VPC on GCP with a bastion host that has a public IP.
I am trying to connect from my local machine, which is behind a firewall, to an instance on a specific port behind the bastion server.
SSH works via the bastion, and ports are open between GCP instances within the VPC.
I am trying to set up port forwarding from my local machine, through the bastion, to ZooKeeper on port 2181.
I have set up iptables, however the packets just get lost somewhere along the way when doing a tcptraceroute.
The scenario is as follows:
Local machine -> Firewall -> Bastion -> Zookeeper
An SSH connection from the local machine to ZooKeeper (192.168.80.11) works (via the bastion).
My configuration is as follows:
sudo iptables -t nat -A PREROUTING -p tcp --dport 2181 -j DNAT --to-destination 192.168.80.11:2181
sudo iptables -t nat -A POSTROUTING -j MASQUERADE
It just doesn't work; what am I doing wrong?
My iptables rules do have some weird entries, though:
:OUTPUT ACCEPT [83590:46593196]
COMMIT
# Completed on Mon May 11 09:33:24 2020
# Generated by xtables-save v1.8.2 on Mon May 11 09:33:24 2020
*raw
:PREROUTING ACCEPT [130202:45658294]
:OUTPUT ACCEPT [83590:46593196]
COMMIT
# Completed on Mon May 11 09:33:24 2020
# Generated by xtables-save v1.8.2 on Mon May 11 09:33:24 2020
*mangle
:PREROUTING ACCEPT [130202:45658294]
:INPUT ACCEPT [130201:45657860]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [83590:46593196]
:POSTROUTING ACCEPT [83593:46593358]
COMMIT
# Completed on Mon May 11 09:33:24 2020
# Generated by xtables-save v1.8.2 on Mon May 11 09:33:24 2020
*nat
:PREROUTING ACCEPT [234:14414]
:INPUT ACCEPT [233:13980]
:POSTROUTING ACCEPT [126:8408]
:OUTPUT ACCEPT [126:8408]
-A PREROUTING -i eth0 -p tcp -m tcp --dport 2181 -j DNAT --to-destination 192.168.80.11:2181
-A POSTROUTING -j MASQUERADE
COMMIT
# Completed on Mon May 11 09:33:24 2020
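For clarity, this is the full set of rules I understand should be needed on the bastion for this kind of forwarding (assuming eth0 is its external interface); maybe someone can spot what is off:
# allow the bastion to route packets that are not addressed to itself
sudo sysctl -w net.ipv4.ip_forward=1
# rewrite incoming connections on 2181 to the ZooKeeper host
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2181 -j DNAT --to-destination 192.168.80.11:2181
# let the forwarded traffic through the filter table
sudo iptables -A FORWARD -p tcp -d 192.168.80.11 --dport 2181 -j ACCEPT
sudo iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# source the forwarded traffic from the bastion so replies come back through it
sudo iptables -t nat -A POSTROUTING -d 192.168.80.11 -p tcp --dport 2181 -j MASQUERADE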
I'm currently trying to use a Pi as a monitoring system, which requires a connection to the local Ethernet. Now I also want to use the same Pi as a WiFi AP. But all the configuration examples I've found for the Pi bridge Ethernet and WiFi in a way that means the Pi itself can no longer access the Ethernet.
Currently the configuration looks like this:
auto lo
iface lo inet loopback
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
auto wlan0
allow-hotplug wlan0
iface wlan0 inet manual
wireless-power off
If I bridge the networks (and the RPi works as intended as a WiFi AP), the configuration looks like this:
auto lo
iface lo inet loopback
auto eth0
allow-hotplug eth0
iface eth0 inet manual
auto wlan0
allow-hotplug wlan0
iface wlan0 inet manual
wireless-power off
auto br0
iface br0 inet static
address 192.168.1.11
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
bridge-ports eth0 wlan0
bridge-waitport 5
bridge-stp off
bridge-fd 0
So the question is: how do I combine both configurations so that the Pi also has access to the same (bridged) network?
For a topology something like this, the configuration is as follows.
           ________________________________________
          |                  RPi                   |
Internet --- WLAN(WiFi)          (Ethernet Ports)LAN ----- Devices
          |________________________________________|
Based on Milind's comment, I reversed a solution from the post:
First, install the following packages:
apt-get update && apt-get -y install hostapd hostap-utils iw bridge-utils dnsmasq
add to /boot/cmdline.txt:
[...] net.ifnames=0 [...]
replace /etc/network/interfaces:
auto lo
iface lo inet loopback
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
auto wlan0
allow-hotplug wlan0
iface wlan0 inet static
wireless-power off
address 192.168.2.1
netmask 255.255.255.0
network 192.168.2.0
broadcast 192.168.2.255
create /etc/hostapd/hostapd.conf:
ctrl_interface=/var/run/hostapd
driver=nl80211
interface=wlan0
hw_mode=g
ieee80211n=1
channel=1
ssid=REPLACE_WITH_YOUR_SSID
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=1
wpa=3
wpa_passphrase=REPLACE_WITH_YOUR_PASSPHRASE
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP
replace /etc/dnsmasq.conf:
interface=wlan0
listen-address=192.168.2.1
bind-interfaces
server=8.8.8.8
domain-needed
bogus-priv
dhcp-range=192.168.2.2,192.168.2.100,12h
uncomment in /etc/sysctl.conf:
[...]
net.ipv4.ip_forward=1
[...]
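The forwarding setting only takes effect on the next boot unless it is reloaded by hand, e.g.:
sudo sysctl -p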
Now run the following commands for the iptables routing:
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"
Enable iptables routing on startup:
Add to /etc/rc.local before exit 0:
[...]
iptables-restore < /etc/iptables.ipv4.nat
[...]
Finally, reboot and the Pi should work as intended as a WiFi AP sharing Internet from the Ethernet port.
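Depending on the Raspbian version, hostapd may also need to be pointed at the config file and both services enabled at boot; something like this (paths assumed from the stock Debian packaging):
In /etc/default/hostapd:
DAEMON_CONF="/etc/hostapd/hostapd.conf"
Then enable and restart the services:
sudo systemctl enable hostapd dnsmasq
sudo systemctl restart hostapd dnsmasq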
I have a server within the OVH network. Proxmox 4.3 is installed there as a hypervisor and it's hosting 2 LXC containers. Both are running in the 192.168.11.0/24 network set up on the vmbr2 bridge, for which I have also set up NAT like this:
auto vmbr2
iface vmbr2 inet static
address 192.168.11.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '192.168.11.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '192.168.11.0/24' -o vmbr0 -j MASQUERADE
I've also bought a failover IP from OVH, set up a virtual MAC for it, and assigned it to one LXC container (vmbr0 interface).
My problem is that I can access this IP from the LXC container where it is assigned (obviously), but I can't do that from the other LXC container. The connection simply times out when I do a wget to it.
What am I missing in my configuration?
I found it. Apparently I was missing a routing entry on the main host:
route add -host failover_ip gw main_ip
Thanks to this, all LXC containers now have access to my failover IP.
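To keep this across reboots, it could also go into /etc/network/interfaces on the host in the same style as the NAT rules above, roughly like this (failover_ip and main_ip remain placeholders; untested sketch):
post-up route add -host failover_ip gw main_ip
post-down route del -host failover_ip gw main_ip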
I've been running Xen on Debian 8 at home for a couple of years in bridged mode, mostly to try the PCI passthrough capabilities for gaming while still having local Linux environments within reach. I've been building Xen on a regular basis, from 4.2 unstable to 4.5.1 these days, and I'm eager to try the QXL accelerated drivers in 4.6 right after this summer.
But today, my problems are far from passthrough. I've had this dedicated server rented for months; it only has a single IP, and I've never successfully managed to set up the VMs' internal network config. A lot of the scripts I've found on the web are for older versions of Xen, and the networking & vif scripts have changed quite a bit.
All I want is a clean way to get my VMs addressed, either based on MAC address or statically, in the 192.168.88.0/24 subnet, and to be able to forward a list of ports (TCP or UDP) toward specific VMs.
So here are my config files:
/etc/network/interfaces
auto lo
iface lo inet loopback
allow-hotplug eth0
iface eth0 inet dhcp
auto dummy0
iface dummy0 inet manual
pre-up ifconfig $IFACE up
post-down ifconfig $IFACE down
auto xenbr0
iface xenbr0 inet static
bridge_ports dummy0
address 192.168.88.254
broadcast 192.168.88.255
netmask 255.255.255.0
bridge_maxwait 0
bridge_stp off
bridge_fd 0
netstat -rn
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 62.210.115.1 0.0.0.0 UG 0 0 0 eth0
62.210.115.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.88.0 0.0.0.0 255.255.255.0 U 0 0 0 xenbr0
iptables -L && iptables -t nat -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out vif1.0 --physdev-is-bridged
ACCEPT udp -- anywhere anywhere PHYSDEV match --physdev-in vif1.0 --physdev-is-bridged udp spt:bootpc dpt:bootps
ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out vif1.0 --physdev-is-bridged
ACCEPT all -- 192.168.88.2 anywhere PHYSDEV match --physdev-in vif1.0 --physdev-is-bridged
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
--
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DNAT tcp -- anywhere anywhere tcp dpt:2222 to:192.168.88.2:22
DNAT tcp -- anywhere anywhere tcp dpt:8888 to:192.168.88.2:80
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
/etc/network/interfaces [domU]
auto lo
iface lo inet loopback
allow-hotplug eth0
iface eth0 inet dhcp
address 192.168.88.2
broadcast 192.168.88.255
netmask 255.255.255.0
gateway 192.168.88.254
For the networking script in Xen, I've created a copy of the vif-bridge script and added 2 lines to it to call a small script that handles the iptables rules (found on the internet, and probably incomplete iptables rules):
/etc/xen/scripts/portmapper.py
#!/usr/bin/env python
netdev='eth0'
# {domU_ip: [(domU_port, dom0_port, 'tcp'|'udp'), (domU_port, dom0_port), ...], ...}
# 3rd param (protocol) is optional; if not specified, tcp is the default
portmap={'192.168.88.2': [(22, 2222), (80, 8888)],
'192.168.88.3': [(8081, 10001), (22, 10002)],
'192.168.88.4': [(6697, 6697)],
}
# do not edit below this line
ip_tables_proto='iptables -%s PREROUTING -t nat -p %s -i %s --dport %d -j DNAT --to %s:%d\n'
import sys
is_delete=False
def usage():
print >>sys.stderr, 'Usage: %s [-d] domU_ip' % sys.argv[0]
sys.exit(1)
def is_ip(adr):
ip_list=adr.split('.')
if len(ip_list)!=4:
usage()
for i in ip_list:
try:
if int(i)>255 or int(i)<0:
usage()
except ValueError:
usage()
args_no=len(sys.argv)
if args_no==3:
if sys.argv[1]=='-d':
is_delete=True
ip=sys.argv[2]
is_ip(ip)
elif args_no==2:
ip=sys.argv[1]
is_ip(ip)
else:
usage()
if is_delete:
action="D"
else:
action="A"
mapping=portmap.get(ip, [])
cmds=''
for port_map in mapping:
if len(port_map)==3 and port_map[2] in ('tcp', 'udp'):
from_port, to_port, proto=port_map
elif len(port_map)==2:
from_port, to_port=port_map
proto='tcp'
cmds+=ip_tables_proto % (action, proto, netdev, to_port, ip, from_port)
import os
os.system(cmds)
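For the first entry in portmap above, running the script by hand should expand the template into plain iptables commands like these (and the -d flag generates the same commands with -D to remove them):
# ./portmapper.py 192.168.88.2
iptables -A PREROUTING -t nat -p tcp -i eth0 --dport 2222 -j DNAT --to 192.168.88.2:22
iptables -A PREROUTING -t nat -p tcp -i eth0 --dport 8888 -j DNAT --to 192.168.88.2:80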
/etc/xen/scripts/vif-bridge-nat
#!/bin/bash
#============================================================================
# ${XEN_SCRIPT_DIR}/vif-bridge-nat
# Script for configuring a vif in bridged mode.
#
# Usage:
# vif-bridge (add|remove|online|offline)
#
# Environment vars:
# vif vif interface name (required).
# XENBUS_PATH path to this device's details in the XenStore (required).
#
# Read from the store:
# bridge bridge to add the vif to (optional). Defaults to searching for the
# bridge itself.
# ip list of IP networks for the vif, space-separated (optional).
#
# up:
# Enslaves the vif interface to the bridge and adds iptables rules
# for its ip addresses (if any).
#
# down:
# Removes the vif interface from the bridge and removes the iptables
# rules for its ip addresses (if any).
#============================================================================
dir=$(dirname "$0")
. "$dir/vif-common.sh"
bridge=${bridge:-}
bridge=$(xenstore_read_default "$XENBUS_PATH/bridge" "$bridge")
ip=${ip:-}
if [ -z "$bridge" ]
then
bridge=$(brctl show | awk 'NR==2{print$1}')
if [ -z "$bridge" ]
then
fatal "Could not find bridge, and none was specified"
fi
else
#
# Old style bridge setup with netloop, used to have a bridge name
# of xenbrX, enslaving pethX and vif0.X, and then configuring
# eth0.
#
# New style bridge setup does not use netloop, so the bridge name
# is ethX and the physical device is enslaved pethX
#
# So if...
#
# - User asks for xenbrX
# - AND xenbrX doesn't exist
# - AND there is a ethX device which is a bridge
#
# ..then we translate xenbrX to ethX
#
# This lets old config files work without modification
#
if [ ! -e "/sys/class/net/$bridge" ] && [ -z "${bridge##xenbr*}" ]
then
if [ -e "/sys/class/net/eth${bridge#xenbr}/bridge" ]
then
bridge="eth${bridge#xenbr}"
fi
fi
fi
RET=0
ip link show dev $bridge 1>/dev/null 2>&1 || RET=1
if [ "$RET" -eq 1 ]
then
fatal "Could not find bridge device $bridge"
fi
case "$command" in
online)
setup_virtual_bridge_port "$dev"
set_mtu $bridge $dev
add_to_bridge "$bridge" "$dev"
$dir/portmap.py $ip
;;
offline)
do_without_error brctl delif "$bridge" "$dev"
do_without_error ifconfig "$dev" down
$dir/portmap.py -d $ip
;;
add)
setup_virtual_bridge_port "$dev"
set_mtu $bridge $dev
add_to_bridge "$bridge" "$dev"
;;
esac
handle_iptable
call_hooks vif post
log debug "Successful vif-bridge $command for $dev, bridge $bridge."
if [ "$type_if" = vif -a "$command" = "online" ]
then
success
fi
At the moment, my VMs don't even have access to the outside. I've been playing with tcpdump, pings, and wget to try to see what's happening, and while tweaking the config I sometimes managed to get a request out to the outside, only to see the reply come back to the dom0 and get dropped, routed on the wrong interface, never finding its way to the VM. I feel like I'm just missing some little pieces or screws...
I've also tried to use vif-nat with isc-dhcp-server, without any success. It would also be useful to be able to name the VMs and then map them to DNS, so any information on that idea is welcome.
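To illustrate that idea, I imagine a dnsmasq instance bound to xenbr0 could hand out fixed addresses per MAC and register the names at the same time; a completely untested sketch of what I mean (the MACs and names below are made up):
/etc/dnsmasq.d/xen.conf
interface=xenbr0
bind-interfaces
domain=xen.lan
dhcp-range=192.168.88.10,192.168.88.200,12h
# pin address and DNS name to each VM's MAC address
dhcp-host=00:16:3e:00:00:02,192.168.88.2,web
dhcp-host=00:16:3e:00:00:03,192.168.88.3,irc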
Thanks for your time.
Never mind, I was just missing 2 iptables lines; here is what was missing:
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface xenbr0 -j ACCEPT
I've put this in my /etc/rc.local so it gets executed right after startup and only once (I put it in /etc/network/interfaces at first, but then the rules got duplicated for each VM).
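For reference, the relevant part of /etc/rc.local ends up looking roughly like this (just a sketch):
#!/bin/sh -e
# ... anything else already in rc.local ...
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface xenbr0 -j ACCEPT
exit 0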
My port-forwarding script does its job, so ports get mapped correctly; rules are added upon VM creation and removed when the VM is destroyed.
I'll probably take a deeper look at my network rules since I doubt they are really perfect, but it does the job.