I am working on setting up a netmap-enabled, high-performance bridging firewall. The question is: if I use netmap's bridging tool to bridge em0 and em1, and set up ipfw rules to block certain kinds of traffic on em0, will it work?
Kernel bridging works fine with ipfw, but it is slow (not netmap-enabled). My worry is that netmap bridging short-circuits the firewall rules: looking at the implementation, it does nothing about packet filtering; as soon as em0 receives a packet, it forwards it to em1 immediately.
The netmap bridging tool is bridge.c.
https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4
While a NIC is in netmap mode, the OS will still believe the interface
is up and running. OS-generated packets for that NIC end up into a
netmap ring, and another ring is used to send packets into the
OS network stack. A close(2) on the file descriptor removes the
binding, and returns the NIC to normal mode (reconnecting the data
path to the host stack), or destroys the virtual port.
NICs without native support can still be used in netmap mode through emulation. Performance is inferior to native netmap mode but still significantly higher than sockets, and approaching that of in-kernel solutions such as Linux's pktgen.
PS:
You can do bridging and filtering with ng_ipfw + ng_bridge; it's a fast, kernel-based solution.
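To make the bypass concrete: bridge.c just moves buffers between the two adapters' rings, so forwarded traffic never enters the host stack and ipfw never sees it. Any filtering has to live in your own forwarding loop. Below is a minimal, untested sketch of that idea, assuming the nm_* helper API from <net/netmap_user.h>; pass_filter() is a hypothetical stand-in for your own rules, and error handling is omitted.

    /* Forward frames from em0 to em1, applying a user-space filter.
     * Packets in netmap mode bypass ipfw, so this loop is the only
     * place a rule can act. */
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>
    #include <poll.h>

    /* Hypothetical filter: return nonzero to forward the frame. */
    static int pass_filter(const u_char *buf, unsigned int len)
    {
        (void)buf; (void)len;
        return 1;                 /* accept everything in this sketch */
    }

    int main(void)
    {
        struct nm_desc *in  = nm_open("netmap:em0", NULL, 0, NULL);
        struct nm_desc *out = nm_open("netmap:em1", NULL, 0, NULL);
        struct nm_pkthdr h;
        struct pollfd pfd = { .fd = in->fd, .events = POLLIN };
        u_char *buf;

        for (;;) {
            poll(&pfd, 1, -1);    /* wait for frames on em0 */
            while ((buf = nm_nextpkt(in, &h)) != NULL)
                if (pass_filter(buf, h.len))
                    nm_inject(out, buf, h.len); /* copy into em1's TX ring */
        }
    }

A real bridge would also forward em1 to em0 and use zero-copy buffer swapping as bridge.c does, but the point stands: the filter runs in your process, not in ipfw.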
I run a small business network with around a 500 Mbit Internet connection and want to introduce a NIPS (network intrusion prevention system). I have identified SNORT or SURICATA as the software of choice (and maybe Zeek, which I know less about), perhaps with pfSense etc.; TBD.
Wi-Fi is heavily used in the business, as are standard wired Windows PCs. Currently our basic router/modem handles everything.
CURRENT network topology:
INTERNET ==> Existing ADSL-like Router/Modem (with DHCP + wifi) ==> Office network infrastructure etc
I want to insert a basic Linux box with two or four cores, 4 GB of RAM, and a basic 1 Gbps network card for this SNORT/SURICATA box, placed before the office-side router.
I want to confirm the following is a good means to go about introducing NIPS:
DESIRED network topology:
INTERNET ==> Existing ADSL-like Router/Modem (disable wifi) ==> SNORT/SURICATA Linux Box ==> Spare Standard ADSL-like Router/Modem with DHCP + Wifi enabled ==> Office network infrastructure etc.
Question: Will this setup allow the SNORT/SURICATA box (given default settings / nothing fancy enabled) to:
Track the LAN source IP address of WAN traffic, both outgoing and incoming; i.e. a torrent connection shows up as "local computer LAN IP to remote IP", not "router IP to remote IP".
Allow logging in to the SNORT/SURICATA box (no subnet craziness; at least nothing super hard to troubleshoot).
Any gotchas here?
Note this is for a small business with 20 employees, not 300 etc. Conforming to every best practice is impractical at this size.
I am not keen on adding a Wi-Fi network card to said Linux box. The reason is that, in a crisis, I want to be able to unplug the SNORT box and connect the two routers together, immediately restoring Internet to the office if the box goes down for whatever reason (bad SNORT rules, hard drive dies, etc.). Router/modems also need only a few clicks to get connectivity going; nobody should have to load up PuTTY, which would be very hard for anyone else to deal with if I am not available.
Thanks for the help!
The setup that you are trying to accomplish can easily be done by installing a pfSense box (2-4 cores and 4 GB RAM). You can choose a hardware spec from the link below:
https://docs.netgate.com/pfsense/en/latest/book/hardware/index.html
Configure Suricata to run in inline IPS mode and you will be good to go; a sketch of the mechanism behind inline mode follows below. You can always ask for assistance while configuring Suricata.
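For context, Suricata's inline mode on Linux sits on netfilter's NFQUEUE: an iptables rule such as "iptables -I FORWARD -j NFQUEUE --queue-num 0" diverts forwarded packets to user space, where the IPS issues a verdict on each one. Here is a minimal, untested sketch of that mechanism using libnetfilter_queue (link with -lnetfilter_queue); the callback blindly accepts, where a real IPS would inspect and sometimes drop:

    #include <arpa/inet.h>
    #include <libnetfilter_queue/libnetfilter_queue.h>
    #include <linux/netfilter.h>        /* NF_ACCEPT / NF_DROP */
    #include <stdint.h>
    #include <sys/socket.h>

    /* Called once per queued packet; must return a verdict. */
    static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
                  struct nfq_data *nfa, void *data)
    {
        struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
        uint32_t id = ph ? ntohl(ph->packet_id) : 0;
        /* A real IPS would match the payload against rules here
         * and return NF_DROP for malicious traffic. */
        return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
    }

    int main(void)
    {
        struct nfq_handle *h = nfq_open();
        struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);
        char buf[4096];
        int fd, n;

        nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);
        fd = nfq_fd(h);
        while ((n = recv(fd, buf, sizeof buf, 0)) >= 0)
            nfq_handle_packet(h, buf, n);   /* dispatches to cb() */

        nfq_destroy_queue(qh);
        nfq_close(h);
        return 0;
    }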
I have an RN-171 WiFly module connected to a microcontroller.
I am using UDP to communicate with the module, along with the firmware's UDP auto-pair feature to set the host IP: as soon as the module receives a UDP packet, it sets the host IP address to the packet's source IP. This host IP then cannot be changed without entering command mode.
I want the module to behave in the following way:
Every time it receives a UDP packet, it updates the host IP to the address that packet came from.
I could use TCP instead, but it only allows a single connection at a time. Another problem I faced with TCP is that if I try to initiate a second connection to the module, it not only refuses the second connection but also hangs the first, established one. If the second connection attempt simply got refused without hanging the module, I would be happy to work with TCP.
I have been researching this problem a lot on the web, but since these modules are not widely used, support for them is very limited.
I've used RN-171 extensively and have many resolved tickets in their support system.
According to the WiFly Command Reference, Advanced Features and Applications User's Guide, you cannot open more than one TCP port on the module (the default being 2000).
Unfortunately, regarding the UDP functionality, there's not much you can do. If a new host wishes to communicate over UDP, connect to the module over TCP, enter command mode, and set the address with:

    $$$
    set ip host 0.0.0.0
    save
    exit

Alternatively, instead of 0.0.0.0, enter the new host's own IP address:

    $$$
    set ip host ###.###.###.###
    exit

Replace ###.###.###.### with the IP address of the device.
This way, you won't get the wrong host IP when more than one device communicates over UDP at the same time. Also, by not using "save", the auto-pairing setting previously saved to EEPROM remains untouched, so the change only lasts until the next reboot. You can also send "set ip flags 0x##" before "exit" to temporarily disable UDP auto-pairing, using a hex value that has bit[6] set to zero.
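Since the module is usually driven from another device, here is a rough, untested sketch of that workaround from the host side, done over the module's TCP command port (2000 by default). The module address 192.168.1.50 is a placeholder, and the one-second sleeps stand in for the guard time the escape sequence requires:

    /* Re-point the WiFly's UDP host by entering remote command mode
     * over TCP. Error handling omitted for brevity. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { .sin_family = AF_INET,
                                 .sin_port = htons(2000) };

        inet_pton(AF_INET, "192.168.1.50", &a.sin_addr); /* placeholder IP */
        connect(s, (struct sockaddr *)&a, sizeof a);

        sleep(1);                      /* guard time before the escape */
        write(s, "$$$", 3);            /* enter command mode, no CR */
        sleep(1);                      /* guard time after the escape */
        write(s, "set ip host 0.0.0.0\r", 20);  /* accept next UDP peer */
        write(s, "exit\r", 5);         /* leave command mode, no "save" */

        close(s);
        return 0;
    }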
One problem, which Microchip technical support confirmed around the summer of 2013, is that you cannot use an RN-171 as an access point for other RN-171s: a firmware error prevents it, and as of firmware v4.41, released in January 2014, there is no fix yet, nor one planned.
I myself do not recommend the latest firmware, v4.41, since it does not appear to work with most routers, although its Soft AP mode works fine. v4.00.1 is much more compatible; however, take care when cutting power, since it has a potentially disastrous bricking problem: if power is cut while a flash write is in progress, the module may lock its memory forever.
I recommend registering and opening a Microchip ticket, which will usually be answered within two business days; they're quite supportive. Their firmware update cycle is quite long, though, and a new update usually takes a year or so.
Why must all interfaces involved (routers and bridges) support the LRO/TSO technique?
Routers don't. Bridges do.
External routers, hubs, switches, or anything else that is externally connected to the network will not see the effects of TSO; only interfaces inside the device using TSO will experience any effects. It's a software thing.
A router is an external device that is connected to the network by Ethernet cables, fibre-optic cables, wireless comms, etc. These communication media adhere to international standards such as IEEE 802.3 for Ethernet or 802.11 for wireless; they're hardware devices, and hardware devices have very strict rules on how they communicate.
A bridge is an internal software construct and is specific to your OS.
Let's use 802.3 (Ethernet) and a Linux host as an example.
An application asks for a socket to be created and then pushes a large chunk of data into it. The Linux kernel determines which interface this data should be transmitted on. The kernel then interrogates the driver for this interface to determine its capabilities; if the interface is TSO-capable, the kernel passes an sk_buff with a single "template" header and a huge chunk of data (more than one packet's worth) to the interface driver.
Let's consider a standard interface straight to a hardware NIC first:
Some interfaces have fake TSO (they segment the packet in the driver) and some have true TSO (the template header and data are passed to the hardware with minimal alterations). At this point either the driver or the NIC hardware converts this large segment of data into multiple standards-compliant 802.3 Ethernet frames, and it is these compliant frames that an external device, such as a router, hub, switch, modem, or other host, will see on the wire.
Now let's consider several NICs behind a software bridge:
Although the kernel is aware of each NIC at a low level, the network stack is only aware of the bridge, so only capabilities that ALL of the underlying NICs share should be passed up to the bridge. If an sk_buff is passed to a bridge, then ALL the interfaces in the bridge will receive the same sk_buff. Assume the kernel has once again passed our large TSO sk_buff to a bridge: if any of the underlying interfaces does not support TSO, then the packet will most likely be dropped by the hardware NIC in question.
In summary:
Worst-case scenario: the bridge will repeatedly retry sending the same data chunk on the broken interface and the whole bridge will lock up until the application decides to give up. Best-case scenario: the non-TSO NIC will simply appear to be dead.
That said, if the NIC has unsafe code in its driver then this could cause a segmentation fault that could bring the whole system down.
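If you want to check for exactly this kind of mismatch, you can query each bridge member's TSO state from user space. Here is a hedged sketch using the legacy ETHTOOL_GTSO ioctl (ethtool(8) reports the same information); "eth0" is a placeholder interface name:

    /* Query whether an interface reports TSO enabled, via the legacy
     * ETHTOOL_GTSO ioctl: the same feature data the kernel consults
     * when computing a bridge's common capability set. */
    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct ethtool_value ev = { .cmd = ETHTOOL_GTSO };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof ifr);
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* placeholder */
        ifr.ifr_data = (char *)&ev;

        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
            printf("%s: TSO %s\n", ifr.ifr_name, ev.data ? "on" : "off");
        close(fd);
        return 0;
    }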
Can someone explain the concepts of IPoIB and TCP over InfiniBand? I understand the overall concept and the data rates provided by native InfiniBand, but I don't quite understand how TCP and IPoIB fit in. Why do you need them, and what do they do? What is the difference when someone says their network uses IPoIB versus TCP with InfiniBand? Which one is better? I am not from a strong networking background, so it would be nice if you could elaborate.
Thank you for your help.
InfiniBand adapters ("HCAs") provide a couple of advanced features that can be used via the native "verbs" programming interface:
Data transfers can be initiated directly from userspace to the hardware, bypassing the kernel and avoiding the overhead of a system call.
The adapter can handle all of the network protocol of breaking a large message (even many megabytes) into packets, generating/handling ACKs, retransmitting lost packets, etc. without using any CPU on either the sender or receiver.
IPoIB (IP-over-InfiniBand) is a protocol that defines how to send IP packets over IB; and for example Linux has an "ib_ipoib" driver that implements this protocol. This driver creates a network interface for each InfiniBand port on the system, which makes an HCA act like an ordinary NIC.
IPoIB does not make full use of the HCA's capabilities; network traffic goes through the normal IP stack, which means a system call is required for every message, and the host CPU must handle breaking data up into packets, etc. However, it does mean that applications using normal IP sockets will work over the IB link (although the CPU will probably not be able to run the IP stack fast enough to saturate a 32 Gb/sec QDR IB link).
Since IPoIB provides a normal IP NIC interface, one can run TCP (or UDP) sockets on top of it. TCP throughput well over 10 Gb/sec is possible using recent systems, but this will burn a fair amount of CPU. To your question, there is not really a difference between IPoIB and TCP with InfiniBand -- they both refer to using the standard IP stack on top of IB hardware.
The real difference is between using IPoIB with a normal sockets application versus using native InfiniBand with an application that has been coded directly to the native IB verbs interface. The native application will almost certainly get much higher throughput and lower latency, while spending less CPU on networking.
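To make the distinction concrete, here is a minimal, untested sketch of the native side, using the libibverbs API (link with -libverbs). An IPoIB application would instead just open an ordinary TCP or UDP socket bound to the IPoIB interface's IP address:

    /* Native InfiniBand entry point: enumerate HCAs and open one.
     * This is the first step before creating queue pairs and posting
     * kernel-bypass sends/receives over the verbs interface. */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        int n;
        struct ibv_device **list = ibv_get_device_list(&n);

        for (int i = 0; i < n; i++)
            printf("HCA %d: %s\n", i, ibv_get_device_name(list[i]));

        if (n > 0) {
            /* Opening a device yields the context used for all
             * subsequent verbs calls (PDs, CQs, QPs, ...). */
            struct ibv_context *ctx = ibv_open_device(list[0]);
            ibv_close_device(ctx);
        }
        ibv_free_device_list(list);
        return 0;
    }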
We're designing a SOHO router based on a MIPS processor, wired up to a 24-port switch. The CPU runs NAT (configured with iptables), other iptables rules, DHCP, etc.; it has no hardware acceleration for these functions. When testing NAT in full-mesh mode (i.e. one WAN port, the others LAN ports), we observe a significant system slowdown (the console in particular responds very slowly), and there is also packet loss.
'top' shows that ksoftirqd consumes over 80% of the CPU.
What could be the reason for this behaviour? Does Linux NAT run in userland?
The ksoftirqds are kernel threads driving ... soft IRQs: things like TIMER_SOFTIRQ, SCSI_SOFTIRQ, TASKLET_SOFTIRQ, and, relevant to your case, NET_TX_SOFTIRQ and NET_RX_SOFTIRQ. These are implemented in the kernel's bottom halves, as deferred work from the top halves, i.e. the actual interrupt handlers in the device drivers, where latency is critical.
The actual interrupt handler, or hardware IRQ, for a network card is concerned with getting data to/from the device as quickly as possible. It doesn't know anything about NAT or other TCP/IP processing. It knows about its bus handling (say, PCI), its card specifics (ring buffers, control/config registers), DMA, and a bit about Ethernet. It hands/receives packets (sk_buffs, to be exact) through queues to/from the bottom half.
Take a look at ethtool(8) if you haven't yet. See if you can tune the hardware/drivers to do checksum/segmentation offloading, etc. I don't have any suggestions on the NAT front; I don't use it.
Hope this helps a bit.
Edit:
As mentioned in the comments, check the NIC hardware for interrupt mitigation and the supporting driver for NAPI support.
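If you want to see what the driver currently does about interrupt mitigation, the coalescing knobs are readable from user space. A hedged sketch using the ETHTOOL_GCOALESCE ioctl (equivalent to "ethtool -c"), with "eth0" as a placeholder for the router's port:

    /* Read the NIC's interrupt-coalescing settings: the knobs behind
     * "interrupt mitigation" that reduce hard-IRQ (and thus softirq)
     * load under heavy traffic. */
    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof ifr);
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* placeholder */
        ifr.ifr_data = (char *)&ec;

        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
            printf("%s: rx-usecs=%u rx-frames=%u\n",
                   ifr.ifr_name, ec.rx_coalesce_usecs,
                   ec.rx_max_coalesced_frames);
        close(fd);
        return 0;
    }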
ksoftirqd is the kernel thread that runs the soft-IRQ work. You can check /proc/interrupts to see which IRQ is under load.
The CPU is overloaded: use a stronger model, or use simpler iptables rules.
Linux NAT runs in kernel space, and ksoftirqd is in kernel space too.