I am on Fedora 33 Server Edition (no GUI) and I have set up two network connections.
One is Ethernet, which I use to connect my MacBook to the Linux machine; the other is the WLAN connection the machine uses to connect to the internet.
So now whenever I do
nmcli con up eno1
I lose access to the internet (ping www.google.com does not return any packets).
When the Ethernet connection is down everything works, but then I obviously cannot use Ethernet.
Something similar exists on macOS, where I can simply drag a network service to set its priority. How do I do the same using only the terminal on a Unix-like system such as Fedora?
OK, after some research I ran into this great tool called nmtui:
sudo dnf install NetworkManager-tui
After installing the tool and running it with sudo nmtui, I edited my Ethernet connection and saw the option called
Never use this network for default route, which translates to never-default=true under the [ipv4] section of the connection's file in /etc/NetworkManager/system-connections/.
After that I ran sudo nmcli con down eno1 && sudo nmcli con up eno1, and after running nmcli again I can see that the order of connections changed: my WLAN is now first and my Ethernet connection second.
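For reference, the relevant section of the resulting keyfile looks like this (the file name matches the connection name, e.g. eno1.nmconnection; method=auto is an assumption for a DHCP setup):

```ini
[ipv4]
method=auto
never-default=true
```

The same flag can also be set without nmtui: sudo nmcli con modify eno1 ipv4.never-default yes.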
I have the following setup at home:
DHCP clients -----> (wifi)(bridge) Openwrt -----> (eth)Main Router
The device I'm using is a TP-Link MR3020 running Barrier Breaker, and I tried to set up a transparent proxy for bridge traffic: I want to redirect the packets passing through the bridge to a proxy server (Privoxy). I tried to use ebtables, but when I enter the following command:
ebtables -t broute -A BROUTING -p IPv4 --ip-protocol 6 --ip-destination-port 80 -j redirect --redirect-target ACCEPT
I got the following error:
Unable to update the kernel. Two possible causes:
1. Multiple ebtables programs were executing simultaneously. The ebtables
userspace tool doesn't by default support multiple ebtables programs running
concurrently. The ebtables option --concurrent or a tool like flock can be
used to support concurrent scripts that update the ebtables kernel tables.
2. The kernel doesn't support a certain ebtables extension, consider
recompiling your kernel or insmod the extension.
I tried to load the IPv4 module with insmod, but no luck.
Any ideas on how to accomplish this?
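As a side note on cause 1 in that error message, serializing concurrent callers with flock looks roughly like the sketch below. The lock file path is my arbitrary choice, and the ebtables command is only echoed so the sketch is harmless to run anywhere.

```shell
# Serialize ebtables invocations with an exclusive lock (cause 1 workaround).
# The real command is echoed rather than executed in this sketch.
(
    flock -x 9
    echo 'ebtables --concurrent -t broute -A BROUTING -p IPv4 --ip-protocol 6 --ip-destination-port 80 -j redirect --redirect-target ACCEPT'
) 9>/tmp/ebtables.lock
```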
Since Docker 1.10 (and the libnetwork update) we can manually give an IP to a container inside a user-defined network, and that's cool!
I want to give a container an IP address on my LAN (like we can do with virtual machines in "bridge" mode). My LAN is 192.168.1.0/24 and all my computers have IP addresses inside it. I want my containers to have IPs in this range too, in order to reach them from anywhere on my LAN (without NAT/PAT/etc.).
I obviously read Jessie Frazelle's blog post and a lot of other posts here and elsewhere, like:
How to set a docker container's IP?
How to assign specific IP to container and make that accessible outside of VM host?
and many more, but nothing came of it; my containers still have IP addresses "inside" my Docker host and are not reachable by other computers on my LAN.
Reading Jessie Frazelle's blog post, I thought that (since she uses public IPs) we could do what I want to do.
Edit: Indeed, if I do something like:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
docker run --rm -it --net homenet --ip 192.168.1.100 nginx
The new interface on the Docker host (br-[a-z0-9]+) takes the --gateway IP, which is my router's IP. And the same IP on two machines on the network... BOOM
Thanks in advance.
EDIT: This solution is now obsolete. Since version 1.12, Docker provides two network drivers, macvlan and ipvlan, which allow assigning a static IP from the LAN network. See the answer below.
After looking for people who have the same problem, we arrived at a workaround:
Summary:
(V)LAN is 192.168.1.0/24
Default Gateway (= router) is 192.168.1.1
Multiple Docker Hosts
Note: we have two NICs: eth0 and eth1 (which is dedicated to Docker)
What do we want:
We want containers with IPs in the 192.168.1.0/24 network (just like the computers), without any NAT/PAT/translation/port-forwarding/etc.
Problem
When doing this:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
we are able to give containers the IPs we want, but the bridge created by Docker (br-[a-z0-9]+) will take the IP 192.168.1.1, which is our router's address.
Solution
1. Setup the Docker Network
Use the DefaultGatewayIPv4 parameter:
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" homenet
By default, Docker gives the bridge interface (br-[a-z0-9]+) the first IP of the subnet, which might already be taken by another machine. The solution is to use the --gateway parameter to tell Docker to assign an arbitrary, available IP:
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" --gateway=192.168.1.200 homenet
We can specify the bridge name by adding -o com.docker.network.bridge.name=br-home-net to the previous command.
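Putting the pieces together, the full create command would then look like this (my combination of the flags above, not verbatim from the original):

```shell
docker network create --subnet 192.168.1.0/24 \
    --aux-address "DefaultGatewayIPv4=192.168.1.1" \
    --gateway=192.168.1.200 \
    -o com.docker.network.bridge.name=br-home-net \
    homenet
```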
2. Bridge the bridge!
Now we have a bridge (br-[a-z0-9]+) created by Docker. We need to bridge it to a physical interface (in my case I have two NICs, so I'm using eth1 for that):
brctl addif br-home-net eth1
3. Delete the bridge IP
We can now delete the IP address from the bridge, since we don't need it:
ip a del 192.168.1.200/24 dev br-home-net
The IP 192.168.1.200 can be used as the bridge address on multiple Docker hosts, since we never actually use it and remove it right away.
Docker now supports Macvlan and IPvlan network drivers. The Docker documentation for both network drivers can be found here.
With both drivers you can implement your desired scenario (configure a container to behave like a virtual machine in bridge mode):
Macvlan: Allows a single physical network interface (master device) to have an arbitrary number of slave devices, each with its own MAC address.
Requires Linux kernel v3.9–3.19 or 4.0+.
IPvlan: Allows you to create an arbitrary number of slave devices for your master device which all share the same MAC address.
Requires Linux kernel v4.2+ (support for earlier kernels exists but is buggy).
See the kernel.org IPVLAN Driver HOWTO for further information.
Container connectivity is achieved by putting one of the slave devices into the network namespace of the container to be configured. The master device remains on the host operating system (default namespace).
As a rule of thumb, you should use the IPvlan driver if the Linux host that is connected to the external switch / router has a policy configured that allows only one MAC address per port. That's often the case in VMware ESXi environments!
Another important thing to remember (for both Macvlan and IPvlan): traffic to and from the master device cannot be sent to and from slave devices. If you need to enable master-to-slave communication, see the section "Communication with the host (default-ns)" in the "IPVLAN – The beginning" paper published by one of the IPvlan authors (Mahesh Bandewar).
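On the Docker side, creating such a network and attaching a container looks roughly like this (interface name, network name, subnet, and IPs are placeholders to adapt to your LAN):

```shell
# macvlan in bridge mode on the host NIC eth0; use "-d ipvlan" for IPvlan
docker network create -d macvlan -o parent=eth0 \
    --subnet 192.168.1.0/24 --gateway 192.168.1.1 lan_net
docker run --rm -it --net lan_net --ip 192.168.1.100 alpine sh
```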
Use the official Docker driver:
As of Docker v1.12.0-rc2, the new MACVLAN driver is now available in an official Docker release:
MacVlan driver is out of experimental #23524
These new drivers have been well documented by the author(s), with usage examples.
At the end of the day it should provide similar functionality, be easier to set up, and have fewer bugs and quirks.
Seeing Containers on the Docker host:
The only caveat with the new official macvlan driver is that the Docker host machine cannot see / communicate with its own containers. Depending on your specific situation, that might or might not be desirable.
This issue can be worked around if you have more than one NIC on your Docker host machine, with both NICs connected to your LAN. Then you can either A) dedicate one of your Docker host's two NICs exclusively to Docker and use the remaining NIC for the host to access the LAN,
or B) add specific routes to only those containers you need to access via the 2nd NIC. For example:
sudo route add -host $container_ip gw $lan_router_ip $if_device_nic2
Method A) is useful if you want to access all your containers from the Docker host and you have multiple hardwired links.
Whereas method B) is useful if you only require access to a few specific containers from the Docker host, or if your 2nd NIC is a Wi-Fi card that would be much slower for handling all of your LAN traffic (for example on a laptop).
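For reference, the iproute2 equivalent of the net-tools route command above would be (same placeholder variables):

```shell
sudo ip route add "$container_ip" via "$lan_router_ip" dev "$if_device_nic2"
```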
Installation:
If you cannot see the pre-release -rc2 candidate on Ubuntu 16.04, temporarily add or modify this line in your /etc/apt/sources.list to say:
deb https://apt.dockerproject.org/repo ubuntu-xenial testing
instead of main (which is stable releases).
I no longer recommend this solution, so it has been removed. It was using the bridge driver and brctl.
There is a better and official driver now. See the other answer on this page: https://stackoverflow.com/a/36470828/287510
Here is an example of using macvlan. It starts a web server at http://10.0.2.1/.
These commands and Docker Compose file work on QNAP and QNAP's Container Station. Notice that QNAP's network interface is qvs0.
Commands:
The blog post "Using Docker macvlan networks"[1][2] by Lars Kellogg-Stedman explains what the commands mean.
# Create the macvlan network; 10.0.2.254 is reserved (aux-address) for the host-side shim
docker network create -d macvlan -o parent=qvs0 --subnet 10.0.0.0/8 --gateway 10.0.0.1 --ip-range 10.0.2.0/24 --aux-address "host=10.0.2.254" macvlan0
# Recreate the host shim interface (the del just removes any stale one; it may fail harmlessly on first run)
ip link del macvlan0-shim link qvs0 type macvlan mode bridge
ip link add macvlan0-shim link qvs0 type macvlan mode bridge
ip addr add 10.0.2.254/32 dev macvlan0-shim
ip link set macvlan0-shim up
# Route the container range through the shim so the host can reach its own containers
ip route add 10.0.2.0/24 dev macvlan0-shim
# Start the web server at 10.0.2.1
docker run --network="macvlan0" --ip=10.0.2.1 -p 80:80 nginx
Docker Compose
Use version 2, because version 3 does not support the other network configs, such as gateway, ip_range, and aux_address.
version: "2.3"
services:
  HTTPd:
    image: nginx:latest
    ports:
      - "80:80/tcp"
      - "80:80/udp"
    networks:
      macvlan0:
        ipv4_address: "10.0.2.1"
networks:
  macvlan0:
    driver: macvlan
    driver_opts:
      parent: qvs0
    ipam:
      config:
        - subnet: "10.0.0.0/8"
          gateway: "10.0.0.1"
          ip_range: "10.0.2.0/24"
          aux_address: "host=10.0.2.254"
It's possible to map a physical interface into a container via pipework.
Connect a container to a local physical interface
pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5/24
There may be a native way now but I haven't looked into that for the 1.10 release.
I am planning on using KVM in order to virtualize some GNU/Linux and Windows machines at home.
My physical network is 1gbe using Link Aggregation at some stages. In the worst case, it's still 1gbe though.
I am wondering if it is possible to "emulate" 10gbe ethernet (or anything faster than 1gbe) between two virtual machines on the same host (or one VM and the host itself) by avoiding the physical network altogether. I think for this to work they'll need to be in the same network, connected to the same virtual switch and VLAN.
Yes.
Create a bridge using brctl tool on the host:
brctl addbr vm-bridge
ifconfig vm-bridge up
For each VM, specify a virtio-net NIC and add it to the bridge.
Create a qemu-ifup script:
#!/bin/sh
# qemu-ifup: called by QEMU with the tap device name as $1
switch=vm-bridge
# Bring the tap device up without an IP address, in promiscuous mode
/sbin/ifconfig $1 promisc 0.0.0.0
# Attach it to the VM bridge
/usr/sbin/brctl addif ${switch} $1
Specify this script in the -netdev parameter of QEMU:
-netdev tap,id=net1,vhost=on,script=/home/user/qemu-ifup,ifname=vm_net1
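If the VMs are managed through libvirt rather than raw QEMU, the equivalent per-VM NIC definition is a domain-XML fragment along these lines (my sketch, using the bridge name from above):

```xml
<interface type='bridge'>
  <source bridge='vm-bridge'/>
  <model type='virtio'/>
  <driver name='vhost'/>
</interface>
```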
I have two servers (serv1, serv2) that communicate, and I'm trying to sniff packets matching certain criteria that get transferred from serv1 to serv2. Tshark is installed on my desktop (desk1). I have written the following script:
while true; do
tshark -a duration:10 -i eth0 -R '(sip.CSeq.method == "OPTIONS") && (sip.Status-Code) && ip.src eq serv1' -Tfields -e sip.response-time > response.time.`date +%F-%T`
done
This script seems to run fine when run on serv1 (since serv1 is sending packets to serv2). However, when I try to run it on desk1, it can't capture any packets. They are all on the same LAN. What am I missing?
Assuming that either serv1 or serv2 is on the same physical Ethernet switch as desk1, you can sniff transit traffic between serv1 and serv2 by using a feature called SPAN (Switched Port Analyzer).
Assume your server is on FastEthernet4/2 and your desktop is on FastEthernet4/3 of the Cisco switch... you should telnet or ssh into the switch and enter these commands:
4507R#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
4507R(config)#monitor session 1 source interface fastethernet 4/2
!--- This configures interface Fast Ethernet 4/2 as source port.
4507R(config)#monitor session 1 destination interface fastethernet 4/3
!--- This configures interface Fast Ethernet 4/3 as destination port.
4507R#show monitor session 1
Session 1
---------
Type : Local Session
Source Ports :
Both : Fa4/2
Destination Ports : Fa4/3
4507R#
This feature is not limited to Cisco devices; Juniper, HP, Extreme, and other enterprise Ethernet switch vendors also support it.
How about using the (misnamed) tcpdump, which will capture all traffic from the wire? What I suggest is just capturing packets on the interface and not filtering at the capture level; you can filter the pcap file afterwards. Something like this:
tcpdump -w myfile.pcap -n -nn -i eth0
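Afterwards, the display filter from the question can be applied offline against the capture file, for example (-Y is the display-filter option in current tshark versions; older ones use -R):

```shell
tshark -r myfile.pcap -Y '(sip.CSeq.method == "OPTIONS") && (sip.Status-Code) && ip.src eq serv1' -T fields -e sip.response-time
```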
If your LAN is a switched network (most are) or your desktop NIC doesn't support promiscuous mode, then you won't be able to see any of the packets. Verify both of those things.