I'm trying to set up a container with docker.
The container can access the internet while I'm on my home network, which doesn't filter anything, but it fails to connect on the university network (I can't even docker run ubuntu ping 8.8.8.8; I just get nothing). From my experience, the university network drops everything that isn't on port 80 and isn't an HTTP/HTTPS/FTP (or similar protocol) request.
I can ask for a specific MAC address to not be filtered.
Which MAC address does Docker use to reach the internet?
Does it use my wireless card? I think Docker creates a new interface, but I have no idea whether all the containers' traffic goes through it.
Which MAC address should I ask them to unlock so that my containers are not filtered?
Thanks!
I can ask for a specific MAC address to not be filtered. Which MAC address does Docker use to reach the internet?
When communicating with the outside world, Docker uses the MAC address and source IP address of your host. If you are connected to the university network using your wireless NIC, then that is the NIC your Docker containers use for external connectivity.
Docker creates a bridge device on your system named docker0. All containers connect to this bridge and use a private range of IP addresses. Communication external to your host happens via NAT rules configured with iptables (you can view them by running iptables -t nat -S). These rules make traffic originating in Docker containers appear to originate from your host instead.
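If you want to see this on your own host, a quick inspection might look like the following (wlp2s0 is just an example wireless interface name; yours will differ):

# the host NIC whose MAC the university would need to whitelist
ip link show wlp2s0

# Docker's internal bridge and its private subnet
ip addr show docker0

# the NAT rules that make container traffic appear to come from the host
sudo iptables -t nat -S | grep -i masquerade

So the MAC address to ask about is the one on your host's wireless (or wired) NIC, not anything Docker creates.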
Related
This setup is based on Proxmox sitting behind an OPNsense VM hosted on Proxmox itself, which protects Proxmox, offers a firewall, a private LAN with DHCP/DNS for the VMs, and provides an IPsec connection into the LAN to access all the VMs/Proxmox hosts that are not NATed.
The server is a typical Hetzner server, so there is only one NIC, but multiple IPs and/or subnets on that NIC.
Proxmox server with 1 NIC (eth0)
3 public IPs (IP1/2/3); IP2/3 are routed by MAC in the datacenter (to eth0)
eth0 is PCI-passthroughed to the OPNsense KVM
A private network on vmbr30, 10.1.7.0/24
An IPsec mobile client connection (172.16.0.0/24) to the LAN
To better outline the setup, I created this [drawing][1] (not sure it's perfect; tell me what to improve):
Questions:
How to set up such a scenario using PCI passthrough instead of bridged mode?
Follow-ups
I) Why can I not access PROXMOX.2, but can access VMEXT.11 (ARP?)
II) Why do I need a "from * to *" IPsec chain rule to get IPsec running? That is most probably a very OPNsense-related question.
III) I tried to handle the 2 additional external IPs by adding virtual IPs in OPNsense, adding a 1:1 NAT to the internal LAN IP and opening the firewall for the ports needed (for each private LAN IP) - but I could not get it running. The question is: should each private IP have a separate MAC or not? What specifically is needed to get a multi-IP setup on WAN?
General high level perspective
Adding the pci-passthrough
A bit out of scope, but what you will need is:
a serial console/LARA to the Proxmox host.
a working LAN connection from OPNsense (in my case vmbr30) to the Proxmox private IP (10.1.7.2) and vice versa. You will need this when you only have the tty console and need to reconfigure the OPNsense interfaces to add em0 as the new WAN device.
a working IPsec connection set up beforehand, or WAN SSH/GUI access opened, for further configuration of OPNsense after the passthrough.
In general it's this guide - in short:
vi /etc/default/grub
# enable the IOMMU (intel_iommu=on for Intel CPUs; amd_iommu=on for AMD)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub
vi /etc/modules
# VFIO modules needed for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Then reboot and ensure you have an IOMMU table:
find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
Now find your network card
lspci -nn
in my case
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)
The following command detaches eth0 from Proxmox, so you will lose the network connection. Ensure you have a tty! Replace "8086 15b7" and 00:1f.6 with your vendor/device ID and PCI slot (see above).
echo "8086 15b7" > /sys/bus/pci/drivers/pci-stub/new_id && echo 0000:00:1f.6 > /sys/bus/pci/devices/0000:00:1f.6/driver/unbind && echo 0000:00:1f.6 > /sys/bus/pci/drivers/pci-stub/bind
Now edit your VM and add the PCI network card:
vim /etc/pve/qemu-server/100.conf
and add (replace 00:1f.6 with your slot):
machine: q35
hostpci0: 00:1f.6
Boot OPNsense, connect with ssh root@10.1.7.1 from the tty on your Proxmox host, edit the interfaces, add em0 as your WAN interface and set it to DHCP - then reboot your OPNsense instance and it should be up again.
Add a serial console to your OPNsense
In case you need fast disaster recovery or your OPNsense instance is borked, a CLI-based serial console is very handy, especially if you connect using LARA/iLO or similar.
To get this done, edit
vim /etc/pve/qemu-server/100.conf
and add
serial0: socket
Now, in your OPNsense instance:
vim /conf/config.xml
and add/change this:
<secondaryconsole>serial</secondaryconsole>
<serialspeed>9600</serialspeed>
Be sure to replace the current serialspeed with 9600. Now reboot your OPNsense VM and then run
qm terminal 100
Press Enter again and you should see the login prompt
Hint: you can also set your primaryconsole to serial; that helps you get into boot prompts and the like and debug them.
More on this at https://pve.proxmox.com/wiki/Serial_Terminal
Network interfaces on Proxmox
auto vmbr30
iface vmbr30 inet static
address 10.1.7.2
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
pre-up sleep 2
metric 1
OPNsense
WAN is External-IP1, attached em0 (eth0 pci-passthrough), DHCP
LAN is 10.1.7.1, attached to vmbr30
Multi IP Setup
For now, I only cover the extra-IP part, not the extra-subnet part. To be able to use the extra IPs, you have to disable separate MACs for each IP in the Hetzner Robot - so all extra IPs (IP1, IP2, IP3) share the same MAC.
Then, in OPNsense, for each external IP you add a virtual IP under Firewall->Virtual IPs (for every extra IP, not the main IP you bound WAN to). Give each virtual IP a good description, since it will appear in the select box later.
Now you can go to Firewall->NAT->Forward and, for each port, set:
Destination: the external IP you want to forward from (IP2/IP3)
Dest port range: the ports to forward, like SSH
Redirect target IP: the LAN VM/IP to map to, like 10.1.7.52
Redirect target port: the port, like SSH
Now you have two options; the first is considered better, but can mean more maintenance.
For every domain you use to access the IP2/IP3 services, you should define local DNS "overrides" mapping to the actual private IP. This ensures that you can reach your services from inside the LAN and avoids the issues you would otherwise have because of the NATing.
Otherwise you need to take care of NAT reflection - without it, your LAN boxes will not be able to access the external IP2/IP3, which can lead to issues in web applications at least. For that setup, activate outbound rules and NAT reflection:
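As a sketch of the override option (the domain is a made-up example; the LAN IP matches the forwarding example above), a host override in OPNsense under Services->Unbound DNS->Overrides would look like:

Host:       www
Domain:     example.com
Type:       A
IP address: 10.1.7.52 (the private LAN VM, instead of the public IP2)

This way, LAN and IPsec clients resolve the name straight to the private IP and never touch the NAT.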
What is working:
OPN can access the internet and has the right IP on WAN
OPN can access any client in the LAN (VMPRIV.151, VMEXT.11 and PROXMOX.2)
I can connect with an IPsec mobile client to OPNsense, which offers access to the LAN (10.1.7.0/24) from a virtual IP range (172.16.0.0/24)
I can access 10.1.7.1 (OPNsense) while connected via IPsec
I can access VMEXT using the IPsec client
I can forward ports or 1:1 NAT from the extra IP2/IP3 to specific private VMs
Bottom Line
This setup works out a lot better than the bridged-mode alternative I described. There is no more asynchronous routing, no need for Shorewall on Proxmox, no need for a complex bridge setup on Proxmox, and it performs a lot better since we can use checksum offloading again.
Downsides
Disaster recovery
For disaster recovery, you need some more skills and tools. You need a LARA/iLO serial console to the Proxmox hypervisor (since you have no internet connection), and you will need to configure your OPNsense instance to allow serial consoles as mentioned above, so you can access OPNsense when you have no VNC connection at all and no SSH connection either (even from the local LAN, since the network could be broken). It works fairly well, but it needs to be practiced once to be as fast as the alternatives.
Cluster
As far as I can see, this setup cannot be used in a clustered Proxmox environment. You can set up a cluster initially; I did so by using a tinc-switch setup locally on the Proxmox hypervisor with a separate cluster network. Setting up the first node is easy, with no interruption. Joining the second node already requires LARA/iLO access, since you need to shut down and remove the VMs for the join (so the gateway will be down). You can work around that by temporarily using the eth0 NIC for internet. But after you have joined and moved your VMs back in, you will not be able to start the VMs (and thus the gateway will not be started). You cannot start the VMs because you have no quorum - and you have no quorum because you have no internet connection to join the cluster. So in the end it is a chicken-and-egg issue I cannot see a way around. If it can be handled at all, it would be by running the gateway as a standalone QEMU VM that is not part of the Proxmox-managed VMs - not something I want right now.
I have a server VLAN of 10.101.10.0/24 and my Docker host is 10.101.10.31. How do I configure a bridge network on my Docker host (VM) so that all the containers can connect directly to my LAN network without having to redirect ports around on the default 172.17.0.0/16? I tried searching, but all the howtos I've found so far resulted in a lost SSH session, and I had to go into the VM from a console to revert the steps I did.
There are multiple ways this can be done. The two I've had most success with are routing a subnet to a Docker bridge and using a custom bridge on the host LAN.
Docker Bridge, Routed Network
This has the benefit of only needing native Docker tools to configure the network. It has the downside of needing to add a route to your network, which is outside of Docker's remit and usually manual (or relies on the "networking team").
Enable IP forwarding
/etc/sysctl.conf: net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
Create a docker bridge with a new subnet on your VM network, say 10.101.11.0/24
docker network create routed0 --subnet 10.101.11.0/24
Tell the rest of the network that 10.101.11.0/24 should be routed via 10.101.10.X, where X is the IP of your Docker host. This is the external router/gateway/"network guy" config. On a Linux gateway you could add a route with:
ip route add 10.101.11.0/24 via 10.101.10.31
Create containers on the bridge with 10.101.11.0/24 addresses.
docker run --net routed0 busybox ping 10.101.10.31
docker run --net routed0 busybox ping 8.8.8.8
Then you're done. Containers have routable IP addresses.
If you're OK with the network side, or run something like RIP/OSPF on the network, or Calico, which takes care of routing, then this is the cleanest solution.
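To sanity-check the result, something like this should confirm a container really holds a routable address (names match the example above):

# show the subnet docker assigned to the bridge
docker network inspect routed0 --format '{{json .IPAM.Config}}'

# the container's eth0 should carry a 10.101.11.x address
docker run --rm --net routed0 busybox ip addr show eth0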
Custom Bridge, Existing Network (and interface)
This has the benefit of not requiring any external network setup. The downside is that the setup on the Docker host is more complex. The main interface requires this bridge at boot time, so it's not a native Docker network setup; Pipework or manual container setup is required.
Using a VM can make this a little more complicated, as you are running extra interfaces with extra MAC addresses over the main VM's interface, which needs additional "promiscuous" config first to allow this to work.
The permanent network config for bridged interfaces varies by distro. The following commands outline how to set the interface up, but they will not survive a reboot. You are going to need console access or a separate route into your VM, as you are changing the main network interface config.
Create a bridge on the host.
ip link add name shared0 type bridge
ip link set shared0 up
In /etc/sysconfig/network-scripts/ifcfg-shared0
DEVICE=shared0
TYPE=Bridge
BOOTPROTO=static
DNS1=8.8.8.8
GATEWAY=10.101.10.1
IPADDR=10.101.10.31
NETMASK=255.255.255.0
ONBOOT=yes
Attach the primary interface to the bridge, usually eth0
ip link set eth0 up
ip link set eth0 master shared0
In /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=shared0
Reconfigure your bridge to have eth0's IP config.
ip addr add dev shared0 10.101.10.31/24
ip route add default via 10.101.10.1
Attach containers to the bridge with 10.101.10.0/24 addresses.
CONTAINERID=$(docker run -d --net=none busybox sleep 600)
pipework shared0 $CONTAINERID 10.101.10.43/24@10.101.10.1
Or use a DHCP client inside the container:
pipework shared0 $CONTAINERID dhclient
Docker macvlan network
Docker has since added a network driver called macvlan that can make a container appear to be directly connected to the physical network the host is on. The container is attached to a parent interface on the host.
docker network create -d macvlan \
--subnet=10.101.10.0/24 \
--gateway=10.101.10.1 \
-o parent=eth0 pub_net
This will suffer from the same VM/softswitch problems, where the network and the interface will need to be promiscuous with regard to MAC addresses.
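For a quick test on the host, putting the parent interface into promiscuous mode looks like this (eth0 as above; on a VM, the hypervisor's virtual switch usually needs the equivalent setting too):

ip link set eth0 promisc on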
I am running Proxmox 4 with around 10 KVM VMs and 14 LXC containers.
I can configure IPs and the network from the web GUI for LXC containers.
I want to configure the network interface for a KVM VM without accessing the VM.
Is it possible to configure the network interface without accessing the VM?
As far as I know, you can't configure the IP address in Proxmox for a KVM VM (only for LXC containers can you define the IP address). For a KVM VM you can configure whether the network connection is in bridged mode or NAT.
For LXC containers you can use the pct command to set the network for the container. More info about that is on the Proxmox wiki (scroll down to the Network section) - https://pve.proxmox.com/wiki/Linux_Container
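For example, a minimal sketch of assigning a static IP to a container (VMID 101; the bridge and addresses are placeholders):

pct set 101 -net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1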
What you could do for KVM would be to use a local DHCP server (you can install one on your Proxmox host if you want: apt-get install isc-dhcp-server). You have to define an IP address pool from which addresses will be assigned to your VMs by the DHCP server.
Then configure the KVM machine using the qm command:
qm set vmid options
From man qm you discover this:
-net[n] [model=]<enum> [,bridge=<bridge>] [,firewall=<1|0>] [,link_down=<1|0>] [,macaddr=<XX:XX:XX:XX:XX:XX>] [,queues=<integer>]
[,rate=<number>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,<model>=<macaddr>]
So basically you can define the network for your KVM VM: say whether it's bridged, and set a specific MAC address for that card.
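A minimal sketch (VMID 100; the bridge and MAC address are example values):

qm set 100 -net0 virtio,bridge=vmbr0,macaddr=DE:AD:BE:EF:01:02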
If you want to give that VM a specific IP, you can do it based on its MAC address (you have to configure the DHCP server so that a specific IP address is assigned to the desired MAC address).
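With isc-dhcp-server, such a fixed assignment would look roughly like this in /etc/dhcp/dhcpd.conf (the subnet, host name and addresses are made up; the MAC matches the qm sketch above):

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;    # pool for the other VMs
  option routers 192.168.1.1;
}

host kvm-vm100 {
  hardware ethernet DE:AD:BE:EF:01:02;  # MAC set via qm above
  fixed-address 192.168.1.60;           # IP this VM will always receive
}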
I'm trying to debug a problem I'm having in understanding the difference between the NAT network adapter in VirtualBox and the NAT network adapter in VMWare Fusion. So far, I can configure VMWare to achieve my desired result, but I cannot achieve this in VirtualBox. In a VMWare VM, I'm able to use a NAT network adapter to achieve the following:
The guest is assigned its own unique IP address
The guest has access to the outside Internet
The host can ping the guest and ssh to it
The guest can ping the host and ssh to it
The guest can resolve (internal) domain names just like the Host
I thought I saw that this was possible in VirtualBox, but now I'm thinking it's not. Perhaps there is some option that comes close to VMWare, in which I manually modify /etc/resolv.conf in the guest to match that of the host? I did find a few questions that seem to indicate I should instead be using bridged mode in VirtualBox, e.g. this question: Can't ping to VirtualBox instance, in which both answers appear to suggest VirtualBox's NAT adapter doesn't support the functionality I want:
It is quite obvious that when you are using NAT it will be impossible to ping the host behind NAT. That is how NAT works... even if you have a real rather than a virtual host, the behaviour will be the same.
and
You need to change networking mode from NAT to bridged, and ping should start working in both directions.
Also, answers to this question seem to back up the above: How to ping ubuntu guest on VirtualBox
Is it true that a NAT adapter in VirtualBox cannot be pinged from the host OS?
I have used VirtualBox for years and I also have 2-3 years of experience in computer networking.
Yes, in VirtualBox you can't ping a guest that uses NAT from the host, and this is also how NAT works in real life. In real life, if you want to be able to contact a host behind NAT, you have to set a port forwarding rule so that connections to a certain port on the router are forwarded to a certain machine. This must be done on the router.
To enable port forwarding in a VirtualBox environment, select the Network pane in the virtual machine's configuration window, expand the Advanced section, and click the Port Forwarding button. Note that this button is only active if you're using a NAT network type - you only need to forward ports if you're using NAT (http://www.howtogeek.com/122641/how-to-forward-ports-to-a-virtual-machine-and-use-it-as-a-server/).
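The same rule can be added from the command line; a sketch (the VM name, rule name and host port are examples) that forwards host port 2222 to guest port 22 on the first NAT adapter:

VBoxManage modifyvm "MyDebianVM" --natpf1 "guestssh,tcp,,2222,,22"
# then, from the host: ssh -p 2222 user@localhost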
I have a host laptop running Debian, and a client VM running Debian. On the client, I run NGINX, and it serves up a complex web application with several hostnames (e.g. www.host, api.host, blog.host). The laptop moves between several different networks, with a seemingly ever-changing IP address.
I'm trying to meet the following conditions with this VM:
The IP address of the client shouldn't change (e.g. always 192.168.10.10)
With a static IP, I could edit the host /etc/hosts file and keep complex hostnames
The client should have access to the Internet
No other machines need to access the client
What is the best way to set up the Attached to settings for this client?
To do this, simply add two network interfaces to the box.
The first interface will use Host-Only, and that is how your host can connect to the client. This will create an additional network adapter on the host.
The second interface will use NAT, and that is the gateway to the internet. This will create an additional network adapter on the client.
If you've already got a client running, you'll need to bring the new network adapter up by executing sudo ifconfig eth1 up; to get an IP address, run sudo dhclient eth1.
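To make this persistent on a Debian guest, a sketch of /etc/network/interfaces (assuming eth0 is the host-only adapter carrying the static 192.168.10.10 and eth1 is the NAT adapter; the adapter order may differ on your VM):

# host-only adapter: the fixed address your host's /etc/hosts entries point at
auto eth0
iface eth0 inet static
    address 192.168.10.10
    netmask 255.255.255.0

# NAT adapter: internet access via VirtualBox's NAT
auto eth1
iface eth1 inet dhcp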