Does the vhost-user driver ensure the distribution of traffic between multiple RX queues? - rss

I have a question for you. I know that vhost-user NICs can be configured with many RX/TX queues, but does the vhost-user driver ensure the distribution of traffic between RX queues?
I used the sample application l3fwd to switch traffic between two vhost-user NICs, each with 4 queues. The traffic was generated using TRex (and also testpmd), running inside a VM. When I traced my experiment, I noticed that traffic was only received on queue 0, while the other RX queues were empty.
The l3fwd app tells me "Port 0 modified RSS hash function based on hardware support,requested:0xa38c configured:0". For offloading capabilities, testpmd indicates that the vhost-user NIC only supports VLAN STRIP (and not RSS)!
I appreciate any clarification on this matter.
Thank you,
Adele
PS:
DPDK version: 19.08
QEMU version: 4.2.1

The answer to the original question, "does the vhost-user driver ensure the distribution of traffic between RX queues?", is:
There is no mechanism such as RSS or RTE_FLOW in the DPDK libraries that will ensure software packet distribution across the RX queues of a vhost NIC.
@AdelBelkhiri there are multiple aspects to clarify in order to understand this better.
The features supported by the vhost PMD do not advertise either RTE_FLOW or RSS.
The driver code for the vhost PMD, in the file rte_eth_vhost.c, does not advertise RSS or RTE_FLOW capability.
There is an article which describes the use of OVS with multiple queues. There, RSS is configured on the physical NIC with 2 RX queues; the RSS is done on the physical NIC, and 2 separate threads pick the packets from the physical RX queues and put them into the vhost queues, thus achieving pass-through RSS.
Hence, in your case, where you have 2 VMs with 2 NIC ports, each having 4 queues, please try 8 PMD threads on OVS to concurrently forward packets between queues, where the TRex (TX) VM will ensure that the appropriate packets are put into each queue separately.
But the simple answer is that there is no RSS or RTE_FLOW logic to distribute the traffic.
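If you still need per-core distribution, one workaround is to receive everything on queue 0 and spread the packets in software. The following is only a rough sketch under assumptions not in the original answer (IPv4 over plain Ethernet, a hypothetical NB_WORKERS count and worker_rings array); it uses DPDK's rte_softrss() from rte_thash.h to compute an RSS-style hash on the host:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_common.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_ring.h>
#include <rte_thash.h>

#define NB_WORKERS 4   /* hypothetical number of worker rings/cores */

/* Any fixed 40-byte key works for distribution purposes. */
static const uint8_t rss_key[40] = {
    0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
    0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
    0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
    0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
    0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
};

/* Spread a burst received from vhost queue 0 across worker rings,
 * hashing on the IPv4 source/destination addresses. */
static void
distribute_burst(struct rte_mbuf **pkts, uint16_t nb,
                 struct rte_ring **worker_rings)
{
    for (uint16_t i = 0; i < nb; i++) {
        struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(pkts[i],
                struct rte_ipv4_hdr *, sizeof(struct rte_ether_hdr));
        uint32_t tuple[2] = { rte_be_to_cpu_32(ip->src_addr),
                              rte_be_to_cpu_32(ip->dst_addr) };
        uint32_t hash = rte_softrss(tuple, RTE_DIM(tuple), rss_key);
        rte_ring_enqueue(worker_rings[hash % NB_WORKERS], pkts[i]);
    }
}

Each worker core would then dequeue from its own ring, which emulates what hardware RSS would have done on a physical NIC.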

Related

Network packet flow with configured RSS and PFRING cluster

I struggle to understand how RSS connects to PF_RING cluster load balancing.
Here is my current understanding. When RSS is configured, the NIC calculates packet hashes and places packets into the RSS queues. On the other side, the PF_RING kernel module takes packets from the NIC and places them in the ring.
How do those two come together? Does PF_RING take packets from the RSS queues and put them in the ring?
This is what I was able to discover.
As I understand it, the NIC is told where to copy the packet. Once this is done, the packet is copied by the NIC to the PF_RING ring (1-copy) and an interrupt is raised, bypassing the kernel network stack (no second copy). A ring queue is mapped per RSS queue during NIC driver initialization.
In zero-copy mode we bypass the PF_RING ring and access the NIC memory directly.
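Practically, the two meet at the PF_RING cluster: each consuming process or thread opens its own PF_RING socket on the interface and joins the same cluster ID, and the kernel module then balances the packets (which the NIC may already have spread across RSS queues) among the cluster members. A minimal sketch with the userspace libpfring API; the interface name and cluster ID 99 are arbitrary examples:

#include <pfring.h>
#include <stdio.h>

int main(void)
{
    /* Open a PF_RING socket on the capture interface in promiscuous mode. */
    pfring *ring = pfring_open("eth0", 1536 /* snaplen */, PF_RING_PROMISC);
    if (ring == NULL) {
        fprintf(stderr, "pfring_open failed\n");
        return 1;
    }

    /* Join cluster 99: every socket that joins the same cluster ID gets a
     * per-flow share of the traffic arriving on this interface. */
    if (pfring_set_cluster(ring, 99, cluster_per_flow) != 0)
        fprintf(stderr, "pfring_set_cluster failed\n");

    pfring_enable_ring(ring);

    u_char *pkt = NULL;
    struct pfring_pkthdr hdr;
    /* A buffer length of 0 requests a zero-copy pointer into the ring. */
    while (pfring_recv(ring, &pkt, 0, &hdr, 1 /* wait for packet */) > 0)
        printf("got %u bytes\n", hdr.len);

    pfring_close(ring);
    return 0;
}

Running several instances of this program, each with the same cluster ID, is how the load balancing described above is normally consumed.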

Can you simulate a Modbus slave via a serial connection in Node-RED?

I have managed to use Node-RED to simulate a slave device using a TCP connection, but now I want to do it via a serial connection. I am using a Dell gateway running mbpoll to simulate the master, connected over RS-485 to a Raspberry Pi running Node-RED to simulate the slave. Is it possible for me to use Node-RED on the Raspberry Pi to simulate the slave device, so that it responds to requests from the gateway with values like a sensor would?
Based on your previous questions, you are using node-red-contrib-modbus; this does not support Modbus RTU slaves, as per the node descriptions:
modbus-server - Node to provide a Modbus TCP server based on node-modbus (jsmodbus) for testing.
modbus-flex-server - Node to provide a flexible Modbus TCP server based on modbus-serial for testing.
There may be another module out there that does support RTU slaves but I am not aware of one. As such I think your options are:
Modify the existing node to add support for RTU.
Use a gateway. There may be software that will do this (it's not all that complicated) but I've not found anything freely available. There is a range of hardware gateways (some fairly cheap) that support comms between an RTU master and a TCP slave - e.g. 1 or 2.

Emulate UDP/TCP/IP connections of 40,000 or more

I need to simulate a massive amount of TCP/IP Ethernet traffic. For example, I want to simulate the environment that an ISP has, where there might be 40,000 different IP addresses sending TCP/UDP IP traffic to different remote hosts. This is my ideal setup:
Traffic generator -> the device I want to test (one inbound interface and one outbound interface) -> traffic receiver.
The device I want to test is a network traffic monitor/QoS appliance. It effectively sits 'in-line': one interface would be connected to the traffic generator and the other interface connected to the traffic receiver. This in-line interface is effectively a bridge and is not assigned an IP address. It can monitor and apply QoS rules on all traffic passing over that bridge interface.
Layer 4 control is important, so that I can set port numbers (80, 443, 22, etc.). Layer 7 application information would be ideal, as the device I am testing also does deep packet inspection.
Methods I have already tried include using iperf, but in order to simulate 40,000 IP addresses I would need to configure 40,000 virtual interfaces on both the traffic generator and the traffic receiver manually, and I have found that iperf is limited to about 1000 simultaneous connections (on my setup). I have also tried replaying large PCAP files, but then I do not have control over the packets to test QoS capabilities.
Other software/solutions I have looked into are:
http://mininet.org/ (can't handle the number of connections I need).
ns-3
I am looking for someone to point me in the right direction. Thank you.
There are commercial products for this kind of thing, short of a home-brew setup with a combination of Apache Bench, Siege, and tcpreplay (which would take significant effort to implement).
See www.spirent.com or www.ixiacom.com.
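If a home-brew sender is acceptable, one way to avoid configuring 40,000 virtual interfaces is to craft packets on a raw socket and vary the source address per packet; because the device under test is an in-line bridge, it will still see 40,000 distinct sources, although this approach gives no stateful TCP or real layer-7 payloads. A rough sketch (IPv4/UDP only, needs root, and the 10.0.0.x sources and 198.51.100.10 receiver are made-up example addresses):

#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* With IPPROTO_RAW, IP_HDRINCL is implied: we supply the IP header. */
    int fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    if (fd < 0) { perror("socket"); return 1; }

    char pkt[sizeof(struct iphdr) + sizeof(struct udphdr) + 64] = {0};
    struct iphdr *ip = (struct iphdr *)pkt;
    struct udphdr *udp = (struct udphdr *)(pkt + sizeof(struct iphdr));

    struct sockaddr_in dst = { .sin_family = AF_INET };
    inet_pton(AF_INET, "198.51.100.10", &dst.sin_addr); /* traffic receiver */

    for (uint32_t i = 0; i < 40000; i++) {
        ip->version = 4; ip->ihl = 5; ip->ttl = 64;
        ip->protocol = IPPROTO_UDP;
        ip->tot_len = htons(sizeof(pkt));
        ip->saddr = htonl(0x0A000001 + i);  /* 10.0.0.1, 10.0.0.2, ... */
        ip->daddr = dst.sin_addr.s_addr;
        /* The kernel fills in ip->check and ip->id when left at zero. */
        udp->source = htons(1024 + (i % 60000));
        udp->dest = htons(80);              /* the layer-4 port under test */
        udp->len = htons(sizeof(struct udphdr) + 64);
        /* A UDP checksum of 0 means "no checksum" for IPv4. */
        if (sendto(fd, pkt, sizeof(pkt), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0)
            perror("sendto");
    }
    close(fd);
    return 0;
}

The traffic receiver only needs to accept (or capture) packets addressed to it; replies would require routes back to the spoofed subnets, so this sketch only replaces the generator half of the setup.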

Bonding on RedHat 6 with LACP

I'm currently encountering an issue in RedHat 6.4. I have two physical NICs which I am trying to bond together using LACP.
I have the corresponding configuration set up on my switch, and I have implemented the recommended configuration from the RedHat Install Guide on my NICs.
However, when I start my network services, I see my LACP IP on the physical NICs as well as on the bonding interface (eth0, eth1 and bond0, respectively). I'm thinking I should only see my IP address on the bond0 interface?
Connectivity with my network is not established, and I don't know what is wrong with my configuration.
Here are my ifcfg-eth0, eth1 and bond0 files (IPs blanked for privacy).
ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
USERCTL=no
TYPE=Ethernet
NM_CONTROLLED=no
ifcfg-eth1:
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
USERCTL=no
TYPE=Ethernet
NM_CONTROLLED=no
ifcfg-bond0:
DEVICE=bond0
IPADDR=X.X.X.X
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="mode=4"
Thanks to anyone who can pinpoint my problem.
Jeremy
Let me answer my own question here in case anyone is having the same issue.
It turns out I just needed to disable the "NetworkManager" service on my RedHat server. Stop it and disable it at boot, and then it works like a charm.
Network bonding: Modes of bonding
Modes 0, 1, and 2 are by far the most commonly used among them.
Mode 0 (balance-rr)
This mode transmits packets in sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets arrive destined out of the bonded interface, the first will be transmitted on the first slave and the second frame will be transmitted on the second slave. The third packet will be sent on the first, and so on. This provides load balancing and fault tolerance.
Mode 1 (active-backup)
This mode places one of the interfaces into a backup state and will only make it active if the link is lost by the active interface. Only one slave in the bond is active at any instant in time. A different slave becomes active only when the active slave fails. This mode provides fault tolerance.
Mode 2 (balance-xor)
Transmits based on an XOR formula: (source MAC address XOR'd with destination MAC address) modulo the slave count (see the sketch after this list). This selects the same slave for each destination MAC address, and provides load balancing and fault tolerance.
Mode 3 (broadcast)
This mode transmits everything on all slave interfaces. It is the least used mode (only for specific purposes) and provides only fault tolerance.
Mode 4 (802.3ad)
This mode is known as Dynamic Link Aggregation mode. It creates aggregation groups that share the same speed and duplex settings. This mode requires a switch that supports IEEE 802.3ad dynamic link aggregation.
Mode 5 (balance-tlb)
This is called adaptive transmit load balancing. The outgoing traffic is distributed according to the current load and queue on each slave interface. Incoming traffic is received by the current slave.
Mode 6 (balance-alb)
This is adaptive load balancing mode. It includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic. The receive load balancing is achieved by ARP negotiation: the bonding driver intercepts the ARP replies sent by the server on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different clients use different hardware addresses for the server.
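As a worked example of the mode 2 formula above, the classic layer-2 transmit hash XORs the last byte of the source and destination MAC addresses and takes the result modulo the number of slaves. This is a rough sketch of the idea rather than the exact kernel code, and the MAC addresses are made up:

#include <stdint.h>
#include <stdio.h>

/* Pick an egress slave from the source/destination MACs, roughly as the
 * bonding driver's layer2 xmit_hash_policy does. */
static unsigned int layer2_hash(const uint8_t *src_mac,
                                const uint8_t *dst_mac,
                                unsigned int slave_count)
{
    return (src_mac[5] ^ dst_mac[5]) % slave_count;
}

int main(void)
{
    const uint8_t src[6] = {0x52, 0x54, 0x00, 0xaa, 0xbb, 0x01}; /* made up */
    const uint8_t dst[6] = {0x52, 0x54, 0x00, 0xcc, 0xdd, 0x07}; /* made up */
    /* 0x01 XOR 0x07 = 0x06, and 6 mod 2 = 0, so slave 0 is chosen. */
    printf("slave index: %u\n", layer2_hash(src, dst, 2));
    return 0;
}

Because the hash depends only on the MAC pair, all traffic between the same two hosts always leaves on the same slave.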
~]# service NetworkManager stop && chkconfig NetworkManager off
Try this first, and if that is not enough, continue with the command below as well:
~]# service network restart && chkconfig network on

TCP - LRO/TSO techniques

Why is it a must that all interfaces (routers and bridges) involved support the LRO/TSO techniques?
Routers don't. Bridges do.
External routers, hubs, switches or anything else that is externally connected to the network will not see the effects of TSO; only interfaces inside the device with TSO will experience any effects - it's a software thing.
A router is an external device which is connected to the network by Ethernet cables, fibre-optic cables, wireless links, etc. These communication media adhere to international standards such as 802.3 for Ethernet or 802.11 for wireless - they're hardware devices, and hardware devices have very strict rules on how they communicate.
A bridge is an internal software construct and is specific to your OS.
Let's use 802.3 (Ethernet) and a Linux host as an example.
An application calls for a socket to be created and then pushes a large data chunk into the socket. The Linux kernel determines which interface this data should be transmitted on. The kernel will next interrogate the driver for this interface to determine its capabilities; if the interface is TSO capable, the kernel will pass an sk_buff with a single "template" header and a huge chunk of data (more than one packet's worth) to the interface driver.
Let's consider a standard interface straight to a hardware NIC first:
Some interfaces have fake TSO (they segment the packet in the driver) and some have true TSO (the template header and data are passed to the hardware with minimal alterations). At this point either the driver or the NIC hardware will convert this large segment of data into multiple, standards-compliant, 802.3 Ethernet frames, and it is these compliant frames that an external device, such as a router, hub, switch, modem or other host, will see on the wire.
Now let's consider several NICs behind a software bridge:
Although the kernel is aware of each NIC at a low level, the network stack is only aware of the bridge, so only capabilities that ALL of the underlying NICs have should be passed up to the bridge. If an sk_buff is passed to a bridge, then ALL the interfaces in the bridge will receive the same sk_buff. We'll assume that the kernel has once again passed our large TSO sk_buff to the bridge; if any of the underlying interfaces does not support TSO then the packet will most likely be dropped by the hardware NIC in question.
In summary:
The worst-case scenario is that the bridge will repeatedly retry to send the same data chunk on the broken interface and the whole bridge will lock up until the application decides to give up. In the best-case scenario, the non-TSO NIC will simply appear to be dead.
That said, if the NIC has unsafe code in its driver then this could cause a segmentation fault that could bring the whole system down.
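If you want to check from code whether a given interface currently has TSO enabled (the same information ethtool -k shows), you can query the driver through the SIOCETHTOOL ioctl. A minimal sketch using the legacy ETHTOOL_GTSO command, with "eth0" as a placeholder interface name:

#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct ethtool_value eval = { .cmd = ETHTOOL_GTSO };
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* placeholder name */
    ifr.ifr_data = (char *)&eval;

    /* Any AF_INET socket can carry the SIOCETHTOOL ioctl. */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("ETHTOOL_GTSO");
        return 1;
    }
    printf("TSO on %s: %s\n", ifr.ifr_name, eval.data ? "enabled" : "disabled");
    close(fd);
    return 0;
}

Newer kernels expose the same information and more through ETHTOOL_GFEATURES, but the legacy command is enough to see whether a given NIC advertises the capability discussed above.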

Resources