I can't remove the default route added during boot on Linux - networking

I set up networking through the systemd-networkd service. I have two interfaces, eth0 (wired) and wwan0 (wireless), described in two files:
20-wire.network
[Match]
Name=eth0
[Network]
Address=192.168.100.1/24
#Gateway=192.168.2.16
DefaultRouteOnDevice=false
[Route]
Gateway=192.168.2.16
25-wireless.network
[Match]
Name=wwan0
[Network]
DHCP=yes
DNS=8.8.8.8
DefaultRouteOnDevice=true
I want my default route to always go through wwan0.
But after booting, or after an SSH session is created, a default route through eth0 gets added.
unnecessary route-->default dev eth0 scope link
default via 192.168.2.16 dev wwan0 proto dhcp src 192.168.2.136 metric 1024
169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.73.67
192.168.2.0/24 dev wwan0 proto kernel scope link src 192.168.2.136
192.168.2.16 dev wwan0 proto dhcp scope link src 192.168.2.136 metric 1024
192.168.100.0/24 dev eth0 proto kernel scope link src 192.168.100.1

The network manager (connman) was setting up the default route. I disabled it in /var/lib/connman/ethernet_00049f05e066_cable/settings by changing the parameter AutoConnect=true to AutoConnect=false.
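For reference, the relevant fragment of that settings file then looks roughly like this (a sketch only; the group name follows connman's service name, and the other keys connman writes into the file are assumed to stay as they were):
/var/lib/connman/ethernet_00049f05e066_cable/settings
[ethernet_00049f05e066_cable]
# other connman-generated keys in this group are left untouched
AutoConnect=false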

Related

How to: Podman rootless expose containers ports to the outside and see real client ip

This is my first time asking something on stackoverflow. For years I've been lurking but now I decided to finally register myself. Hence, I apologize if my question/information is not formatted nicely.
Current situation:
I'm slowly getting more and more familiar with Podman and I'm in the process of moving some of my containers over from docker (rootful) to podman (rootless). I'm using Podman 4.3.1 on Debian 11. I've managed to get some containers working and was able to externally connect to them. However, the container shows client/source ip '127.0.0.1' instead of my real client's IPv4. I was wondering whether something like the following is possible?
Ideal situation:
Assigning a specific IPv4 address to the container (rootless). Using nftables/iptables to forward packets from the host's network to the container's IPv4 address (e.g. 192.168.1.12). Being able to see the real client's IPv4 address in the container so I can still use fail2ban etc.
As you may notice, I'm still very much in the process of learning how containerization works, specifically the networking side. I don't want to use the host's network for my container for security reasons. If something is unclear, tell me and I'll try to better explain myself.
Thanks for taking your time to read this :)
When you're running Podman as a non-root user, the virtual tap device that represents the container's eth0 interface can't be attached directly to a bridge device. This means it's not possible to use netfilter rules to direct traffic into the container; instead, Podman relies on a proxy process.
There are some notes on this configuration here.
By default, Podman uses the rootlessport proxy, which replaces the source ip of the connection with an internal ip from the container namespace. You can, however, explicitly request Podman to use slirp4netns as the port handler, which will preserve the source address at the expense of some performance.
For example, if I start a container like this:
podman run --name darkhttpd --rm -p 8080:8080 docker.io/alpinelinux/darkhttpd
And then connect to this from somewhere:
curl 192.168.1.200:8080
I will see in the access log:
10.0.2.100 - - [12/Feb/2023:15:30:54 +0000] "GET / HTTP/1.1" 200 354 "" "curl/7.85.0"
Where 10.0.2.100 is in fact the address of the container:
$ podman exec darkhttpd ip a show tap0
2: tap0: <BROADCAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN qlen 1000
link/ether 26:77:5b:e8:f4:6e brd ff:ff:ff:ff:ff:ff
inet 10.0.2.100/24 brd 10.0.2.255 scope global tap0
valid_lft forever preferred_lft forever
inet6 fd00::2477:5bff:fee8:f46e/64 scope global dynamic flags 100
valid_lft 86391sec preferred_lft 14391sec
inet6 fe80::2477:5bff:fee8:f46e/64 scope link
valid_lft forever preferred_lft forever
But if I explicitly request slirp4netns as the port handler:
podman run --name darkhttpd --rm -p 8080:8080 --network slirp4netns:port_handler=slirp4netns docker.io/alpinelinux/darkhttpd
Then in the access log I will see the actual source ip of the request:
192.168.1.97 - - [12/Feb/2023:15:32:17 +0000] "GET / HTTP/1.1" 200 354 "" "curl/7.74.0"
In most cases, you don't want to rely on the source ip address for authentication/authorization purposes, so the default behavior makes sense.
If you need the remote ip for logging purposes, the option presented here will work, or you can also look into running a front-end proxy in the global namespace that places the client ip into the X-Forwarded-For header and use that for your logs.
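As a rough sketch of that last idea (the listen port, upstream port, and server block are my assumptions, not part of the answer), an nginx instance running in the host's network namespace could forward to the published container port and record the client address:
server {
    listen 80;
    location / {
        # forward to the port published by podman (8080 in the example above)
        proxy_pass http://127.0.0.1:8080;
        # pass the real client address along so the application in the
        # container can log it instead of the proxy's address
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
The application inside the container then reads the X-Forwarded-For (or X-Real-IP) header for its access logs.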
Here is an alternative solution not mentioned in the nice answer by @larsks.
Socket activation
When using socket activation of containers, the source IP is available to the container.
Support for socket activation is not yet widespread, but for instance the container image docker.io/library/mariadb supports socket activation. The container image docker.io/library/nginx also supports socket activation (although in a non-standard way, as nginx uses its own environment variable instead of the standard systemd environment variable LISTEN_FDS).
I wrote a minimal demo of how to run docker.io/library/nginx with Podman and socket activation:
https://github.com/eriksjolund/podman-nginx-socket-activation
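As a very rough, untested sketch of the idea (the unit names, port, image, and password are illustrative assumptions; see the repository above for a working, tested setup), a user-level socket/service pair could look like:
~/.config/systemd/user/mariadb-demo.socket
[Unit]
Description=Socket for a socket-activated MariaDB container
[Socket]
ListenStream=3306
[Install]
WantedBy=sockets.target
~/.config/systemd/user/mariadb-demo.service
[Unit]
Requires=mariadb-demo.socket
After=mariadb-demo.socket
[Service]
# systemd passes the listening socket to podman via LISTEN_FDS, and podman
# hands it on into the container, so no -p port mapping is needed and the
# container can even run with --network=none
ExecStart=/usr/bin/podman run --rm --name mariadb-demo --network=none -e MARIADB_ROOT_PASSWORD=example docker.io/library/mariadb
After systemctl --user enable --now mariadb-demo.socket, the container is started on the first incoming connection, and because the listening socket itself is handed into the container, the server accepts connections directly and sees the real client address.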

Static IP address set in /etc/network/interfaces not getting updated after rmmod and insmod

I have configured a static IP address in the /etc/network/interfaces file as below:
# The loopback interface
auto lo
iface lo inet loopback
# Wired or wireless interfaces
auto eth0
iface eth0 inet static
address 192.168.1.2
netmask 255.255.255.0
broadcast 192.168.1.255
hwaddress ether 01:06:92:85:00:12
But when I rmmod the e1000 driver and then insmod it again, the eth0 network interface is loaded, but the IP address is not assigned until I explicitly run ifconfig eth0 or ifup eth0.
I have tried adding a script at /etc/network/if-up.d/loadeth.sh which contains:
#!/bin/sh
if [ "$IFACE" = eth0 ]; then
echo "eth0 up" >> /var/log/oak_pci.log
fi
but no luck, the IP address is still not getting assigned.
My aim is that whenever I insmod the Ethernet device driver, the network interface (eth0) gets assigned the static IP address I have configured in the interfaces file.
Could anybody let me know what I am missing here?
What am I missing here?
The files in /etc/network/ are parsed when the ifup or ifdown commands are executed (I think also when ifplugd picks them up).
insmod loads a module into the running kernel.
What you are missing is that there is simply no connection between insmod-ing a kernel driver and reading any files from the /etc/network directory.
My aim is that whenever I insmod the Ethernet device driver, the network interface (eth0) gets assigned the static IP address I have configured in the interfaces file.
You may set up a udev rule to run a custom script when the kernel driver is insmod-ed or when the interface comes up.
After going through the udev man page I understood how to create udev rules, and with a dummy test from this link https://www.tecmint.com/udev-for-device-detection-management-in-linux/ I was able to invoke udev rules when insmod-ing and rmmod-ing a driver.
So, here's what I did to automatically set the IP address for the Ethernet network interface once the driver is loaded (insmod-ed).
I created a udev rules file named 80-net_auto_up.rules in the Ethernet PCIe driver recipe (it is an out-of-tree kernel module, hence the custom recipe), to which I added:
SUBSYSTEM=="net", ACTION=="add", RUN+="/sbin/ifup eth0"
Then I edited the Ethernet PCIe driver recipe's .bb file and added the lines below:
...
SRC_URI = "all source files of ethernet pcie driver
file://80-net_auto_up.rules \
"
FILES_${PN} += "${sysconfdir}/udev/rules.d/*"
do_install_append() {
...
install -d ${D}${sysconfdir}/udev/rules.d
install -m 0644 ${WORKDIR}/80-net_auto_up.rules ${D}${sysconfdir}/udev/rules.d/
}
And now it works: when I reset the Ethernet device manually, the device is detected and the static IP address configured in /etc/network/interfaces is set.

PPP and ethernet interface not working at the same time

My device is running Debian stretch (not a desktop system).
I am not an IT person, but a programmer. I need to know how to configure the network on Debian so that both the PPP cellular modem and the Ethernet interface can access the internet.
There are 3 network interfaces:
1. Ethernet interface enp1s0: DHCP client (gets an IP from the DHCP server and has access to the internet)
2. Ethernet interface enp2s0: static IP
3. Modem PPP: wvdial gets access to the internet using the modem
/etc/network/interfaces file:
auto lo
iface lo inet loopback
allow-hotplug enp1s0
iface enp1s0 inet dhcp
auto enp2s0
iface enp2s0 inet static
address 10.0.13.1
netmask 255.0.0.0
manual ppp0
iface ppp0 inet wvdial
ip route
default via 10.0.0.100 dev enp1s0
10.0.0.0/24 dev enp1s0 proto kernel scope link src 10.0.0.11
10.0.0.0/8 dev enp2s0 proto kernel scope link src 10.0.13.1
/etc/resolv.conf file:
domain mydomain.local
search mydomain.local
nameserver 10.0.0.3
/etc/wvdial.conf file:
[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0
Init3 = AT+CGDCONT=1,"IP","internetg"
Init4 = AT+CGATT=1
Phone = *99***1#
Modem Type = USB Modem
Baud = 460800
New PPPD = yes
Modem = /dev/ttyACM2
ISDN = 0
Password = ''
Username = ''
Auto DNS = Off
/etc/ppp/peers/wvdial file:
noauth
name wvdial
usepeerdns
Problem:
1. My device is running and enp1s0 is connected to the internet. (modem is down)
2. I then run command to perform dialup of the ppp: ifup ppp0
3. As a result, the device ppp0 appears in the output of 'ip a', but the Ethernet interface enp1s0 no longer has internet access, and the modem is not connected either (although it has an IP), which means there is some problem with the routing table and/or DNS.
After dialup, the routing table does not have any default route/rule for the PPP link.
ip route:
default via 10.0.0.100 dev enp1s0
10.0.0.0/24 dev enp1s0 proto kernel scope link src 10.0.0.11
10.0.0.0/8 dev enp2s0 proto kernel scope link src 10.0.13.1
After dialup I noticed that the /etc/resolv.conf file changed: the DNS entry of the Ethernet interface was deleted and the PPP DNS entries now appear:
/etc/resolv.conf
nameserver 194.90.0.11
nameserver 212.143.0.11
domain mydomain.local
search mydomain.local
The network should behave as follows:
1. If both the PPP and the Ethernet interface are up, both should have access to the internet at the same time
2. If only one of the devices is up (PPP or Ethernet interface), it should still work
3. Dialup/dialdown should not affect the Ethernet connection to the internet
What exact commands and file configuration are needed to have PPP and the Ethernet interface enp1s0 working at the same time?
- ip routing table
- dns
- wvdial
For the default route, add the defaultroute and replacedefaultroute options to the /etc/ppp/peers/wvdial file.
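With those two options added, the peers file from the question would look something like this (a sketch; note that replacedefaultroute is, to my knowledge, a Debian-patched pppd option, so check that your pppd accepts it):
/etc/ppp/peers/wvdial file:
noauth
name wvdial
usepeerdns
# install a default route via the PPP link when it comes up,
# replacing the existing default route through enp1s0
defaultroute
replacedefaultroute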

Configure kvm (libvirt) routed network on Ubuntu 16.04 host

I have an Ubuntu 16.04 KVM hypervisor behind a Debian-based firewall, and I'm trying to make the guest VMs IP-reachable, preferably matching the subnet I'm using for that collection of machines.
The firewall is hosting a 10.4.0.0/16 network, and successfully NAT'ing and accepting applicable traffic.
The hypervisor is at 10.4.20.250, with the virsh network configuration shown below. Of note, I've extended the netmask to try separating the clients from the host:
<network>
<name>default</name>
<uuid>02b5de1a-cde4-45dd-b8f5-a9fdfa1c6809</uuid>
<forward mode='route'/>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:a3:f0:e9'/>
<ip address='10.4.20.20' netmask='255.255.255.128'>
</ip>
</network>
The hypervisor (10.4.20.250) also has the following:
# ip r
default via 10.4.0.1 dev enp0s25 onlink
10.4.0.0/16 dev enp0s25 proto kernel scope link src 10.4.20.250
10.4.20.0/25 dev virbr0 proto kernel scope link src 10.4.20.20
169.254.0.0/16 dev enp0s25 scope link metric 1000
# brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.fe54009e64d0 yes vnet0
# ip link show virbr0
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether fe:54:00:9e:64:d0 brd ff:ff:ff:ff:ff:ff
# virsh domiflist myguest
Interface Type Source Model MAC
-------------------------------------------------------
vnet0 bridge virbr0 virtio 52:54:00:9e:64:d0
The guest ("myguest") at 10.4.20.25 is able to reach the internet at large; it's configured with:
ip r
default via 10.4.20.20 dev eth0
10.4.0.0/17 dev eth0 proto kernel scope link src 10.4.20.25
From a terminal session connected to the hypervisor (10.4.20.250), I can ping itself, the bridge at 10.4.20.20, the guest at 10.4.20.25, the firewall at 10.4.0.1, and the internet at large.
From the firewall (10.4.0.1) I can ping the hypervisor (10.4.20.250) and the bridge (10.4.20.20) .. but pings to the client (10.4.20.25) are lost. Similarly, from another machine on the 10.4 network, I can ping the firewall, the hypervisor, and the bridge, but not the client. I have the following rules set:
ip r
default via 10.4.0.1 dev enp4s0 onlink
10.4.0.0/16 dev enp4s0 proto kernel scope link src 10.4.2.1
10.4.20.0/25 via 10.4.20.20 dev enp4s0
192.168.15.0/24 dev enp1s0 proto kernel scope link src 192.168.15.242
Any help on what configuration I might be missing to make my guest reachable from remote devices?
Note: I have tried to set the forward mode to 'open', but virsh net-edit gives me the following error:
error: unsupported configuration: unknown forwarding type 'open'

Port forwarding to virtual machine qemu

I recently installed a virtual machine under Ubuntu 11.10. Right now, I assume, it is using NAT and its internal address is 192.168.122.88.
I have set up a web server in my virtual machine and I want to be able to access it when I go to 192.168.122.88. However, right now it times out. When I log in to the virtual machine and try to access localhost, it works.
So, for some reason, my iptables is blocking traffic from the host to the virtual machine (But not the other way around).
How can I allow traffic to flow from my host to my vm so I can see the webserver from the host?
I used Ubuntu Virtual Machine Manager w/KVM and libvirt.
I tried doing something like this:
iptables -t nat -A PREROUTING -d 192.168.0.10 -p tcp --dport 80 -j DNAT --to-destination 192.168.122.88:80
to no avail. Apparently it says there is no route to host??
'No route to host' means that the host machine doesn't have an IP address that can match the net you are trying to reach (you don't even have a default route); make sure you have both nets on the host.
For example:
$ ip route show
default via 192.168.1.254 dev p3p1 src 192.168.1.103
default via 172.16.128.1 dev p3p1
169.254.0.0/16 dev p3p1 scope link metric 1003
172.16.128.0/17 dev p3p1 proto kernel scope link src 172.16.128.2
192.168.1.0/24 dev p3p1 proto kernel scope link src 192.168.1.103
On KVM host machines, I attach the virtual interfaces to some bridge. For example:
<interface type='bridge'>
<mac address='01:02:03:04:05:06'/>
<source bridge='br4'/>
<target dev='vnet4'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Then, I assign an IP address to the bridge on the host and bring it up:
ip address add 192.168.0.1/24 dev br4
ip link set up dev br4
On my virtual machine, I assign some IP address on the same subnet, like 192.168.0.2; then a ping between them should succeed:
ping 192.168.0.1
Maybe you need to allow forwarded connections to the virtual machines. Try this:
iptables -I FORWARD -m state -d 192.168.122.0/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT
Hope this helps.
