VHOST-USER interface between ovs-dpdk and a VM - networking

I'm trying to understand the packet life-cycle in ovs-dpdk (running on host) communicating with a VM through vhost-user interface:
1. The packet is received via a physical port on the device.
2. DMA transfer to mempools on huge pages allocated by ovs-dpdk, in user space.
3. ovs-dpdk copies this packet to the shared vring of the associated guest (shared between the ovs-dpdk userspace process and the guest).
4. No more copies in the guest - i.e. when any application running on the guest wants to consume the packet, there is zero copy between the shared vring and the guest application.
Is that correct? How is part 4 implemented? That would be communication between the OS in the guest and an application in the guest, so how is this implemented with zero copy?

No more copies in the guest - i.e. when any application running on the guest wants to consume the packet, there is zero copy between the shared vring and the guest application.
Is that correct?
Not really. It is correct if you run a DPDK application in the guest. But if the guest uses its normal kernel network stack, there will be another copy between the guest kernel and guest user space.
How is part 4 implemented?
See above. It is true only for DPDK applications.
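For the DPDK-in-the-guest case, the guest application binds the virtio device to a DPDK driver and polls its receive queue directly. Below is a minimal sketch of such a receive loop, assuming the virtio device shows up as port 0 and omitting the usual port/queue setup calls:

```c
/* Minimal sketch (not a complete program): a DPDK app inside the guest
 * polling the virtio port. Assumes the virtio device is bound to a
 * DPDK-compatible driver and appears as port 0; rte_eth_dev_configure(),
 * rte_eth_rx_queue_setup() and rte_eth_dev_start() are omitted here. */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* ... port/queue setup would go here ... */

    struct rte_mbuf *bufs[BURST_SIZE];
    for (;;) {
        /* Poll the virtio rx queue; the returned mbufs are consumed in
         * user space without a further copy inside the guest. */
        uint16_t nb = rte_eth_rx_burst(0 /* port */, 0 /* queue */,
                                       bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb; i++) {
            /* ... parse / hand the packet to the application ... */
            rte_pktmbuf_free(bufs[i]);
        }
    }
    return 0;
}
```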


How to use DPDK in a UDP communication between remote servers?

I played a bit with the official DPDK by setting up the environment and running some example applications. Then I found out about UDPDK, which combines DPDK with a UDP stack.
I have already set up the environment for UDPDK as per the documentation and then ran the sample app 'pktgen' (both in the local VM and on the public server). As far as I understand, this project's aim is to send pure UDP packets between connected devices.
I tried to send UDP packets from VM1 (using DPDK) to VM2 (normal) and received them through a normal UDP receiver (a Java app) successfully. I was also able to send from one server (using DPDK) to another server (normal; both servers are connected to the same switch, as I could arping between them).
Edit:
My next target / main goal is to send/receive UDP packets between one public server (using DPDK) and another public server (normal; they are not directly connected, and I have no control over the switches in between). Then I came to know about Open vSwitch and was told that this could be the way, though I have mostly seen DPDK-OVS used between VMs. Is it really possible to send/receive UDP packets to/from a remote public server using DPDK-OVS, and if so, how?
Thanks in advance for any help.
For the question of whether one can send UDP packets between 2 servers which are remotely connected (not connected directly or through the same switch): the answer is yes, and one can do so without any external or 3rd-party switching application.
Reason:
packets traverse the local network using Ethernet and VLAN
packets traverse the remote network using MPLS, IP addressing or tunnel protocols.
So as long as a valid packet with Ethernet, VLAN, IP and UDP headers is constructed, sending it locally or remotely is possible.
How to do it:
Ensure the port used supports VFs (SR-IOV virtual functions).
Create a VF instance and bind it to DPDK.
Use DPDK APIs such as rte_pktmbuf_alloc, rte_pktmbuf_mtod and the Ethernet/IP/UDP header structs to create the desired packet (see the sketch after this list).
Send the packet on the VF interface using rte_eth_tx_buffer or rte_eth_tx_burst.
As long as the MAC address, VLAN and/or MPLS labels are right, external routing is taken care of.
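A minimal sketch of that packet-construction-and-send step, assuming the mempool and port are already configured and started; the MAC/IP addresses and ports below are placeholders, and the struct field names follow recent DPDK releases:

```c
/* Sketch: build one Ethernet/IPv4/UDP packet in an mbuf and transmit it.
 * Port setup (configure/queue setup/start) is assumed to be done already;
 * addresses below are placeholders. */
#include <string.h>
#include <netinet/in.h>
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_mbuf.h>

static int send_udp(struct rte_mempool *pool, uint16_t port_id,
                    const char *payload, uint16_t len)
{
    struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
    if (m == NULL)
        return -1;

    uint16_t pkt_len = sizeof(struct rte_ether_hdr) +
                       sizeof(struct rte_ipv4_hdr) +
                       sizeof(struct rte_udp_hdr) + len;
    char *data = rte_pktmbuf_append(m, pkt_len);
    if (data == NULL) {
        rte_pktmbuf_free(m);
        return -1;
    }

    struct rte_ether_hdr *eth = (struct rte_ether_hdr *)data;
    struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);
    struct rte_udp_hdr *udp = (struct rte_udp_hdr *)(ip + 1);

    /* Placeholder next-hop MAC; replace with the real one (or resolve it). */
    struct rte_ether_addr dst = {{0x00, 0x11, 0x22, 0x33, 0x44, 0x55}};
    rte_eth_macaddr_get(port_id, &eth->src_addr);
    rte_ether_addr_copy(&dst, &eth->dst_addr);
    eth->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);

    memset(ip, 0, sizeof(*ip));
    ip->version_ihl = RTE_IPV4_VHL_DEF;
    ip->time_to_live = 64;
    ip->next_proto_id = IPPROTO_UDP;
    ip->src_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 10));  /* placeholder */
    ip->dst_addr = rte_cpu_to_be_32(RTE_IPV4(10, 0, 0, 20));     /* placeholder */
    ip->total_length = rte_cpu_to_be_16(pkt_len - sizeof(*eth));
    ip->hdr_checksum = rte_ipv4_cksum(ip);

    udp->src_port = rte_cpu_to_be_16(5000);
    udp->dst_port = rte_cpu_to_be_16(5001);
    udp->dgram_len = rte_cpu_to_be_16(sizeof(*udp) + len);
    udp->dgram_cksum = 0;                 /* 0 = checksum not computed (IPv4) */
    memcpy(udp + 1, payload, len);

    /* Hand the mbuf to the VF's tx queue 0. */
    return rte_eth_tx_burst(port_id, 0, &m, 1) == 1 ? 0 : -1;
}
```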
If packets have to travel through tunnelling such as NAT, IP-in-IP or GRE/Geneve, then we have 2 options:
prepare the NAT/tunnelling headers in DPDK and send over the physical interface, or
send the custom packet from the DPDK application into the kernel using the TAP PMD; the kernel IP route tables then forward the packets with the appropriate tunnelling.
The second approach lets the kernel take care of neighbour discovery and the tunnelling overhead (a sketch of attaching the TAP PMD follows).
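The TAP PMD is instantiated with an EAL --vdev argument; a rough sketch of wiring it up from inside the application (the interface name dpdk_tap0 is just an example):

```c
/* Sketch: give the DPDK app a TAP port so custom packets can be handed to
 * the kernel stack for routing/tunnelling. The vdev argument creates a
 * kernel tap interface named "dpdk_tap0" (example name) that shows up as a
 * regular DPDK ethdev port inside the application. */
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(void)
{
    char *eal_args[] = {
        "app",
        "--vdev=net_tap0,iface=dpdk_tap0",  /* TAP PMD instance */
        "--no-pci",                          /* optional: no physical ports */
    };
    int argc = sizeof(eal_args) / sizeof(eal_args[0]);

    if (rte_eal_init(argc, eal_args) < 0)
        return -1;

    /* The TAP port is now visible via rte_eth_dev_count_avail(); packets
     * sent on it with rte_eth_tx_burst() appear on the kernel interface
     * dpdk_tap0, where normal ip route / neighbour handling applies. */
    return 0;
}
```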
Hence the use of DPDK-OVS, OVS or any virtual switch does not solve the underlying issue.
DPDK-OVS provides a DPDK vhostuser/vhostuserclient type port as a virtual (virtio) device to the VM. To the VM, the virtio device is just like any other normal network device; the UDP applications running on the VM do not care what underlying network device the VM uses, and UDP packets are received/sent through the Linux network stack. You could also run a userspace stack on the VM, with the UDP applications running on top of it, to bypass the VM's Linux stack.
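To illustrate the point, from inside the VM a UDP sender is just ordinary socket code and is unaware of the vhost-user backing; the address and port below are placeholders:

```c
/* Sketch: a plain UDP sender inside the VM. The underlying NIC happens to
 * be a virtio device backed by a DPDK-OVS vhost-user port, but the socket
 * API hides that completely. */
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);                     /* placeholder port */
    inet_pton(AF_INET, "10.0.0.20", &dst.sin_addr); /* placeholder IP   */

    const char msg[] = "hello over vhost-user";
    sendto(fd, msg, sizeof(msg) - 1, 0,
           (struct sockaddr *)&dst, sizeof(dst));
    close(fd);
    return 0;
}
```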

How Do I Remote Desktop to a VMWare Windows 10 VM, not the base machine?

I have a PC running Windows 10. On that PC I have VMWare hosting a Windows 10 VM. I can run the VM without issue from the local machine. The VM has a typical Windows PC Name, different from the base machine.
When I try to make a Remote Desktop connection from a different PC to the VM using the VM PC Name, it connects to the base machine. I can see the VM running on the base machine and control it.
I need to be able to run several VMs on this base machine and then use RDP to run remote desktop sessions on the VMs.
Other configuration info:
The VM Network is configured as NAT and I have followed the instructions here
(https://kb.vmware.com/s/article/1018809)
If I change the Network to Bridged then I can ping my other PC from the VM if I set up a fixed IP address - nothing if I try DHCP, but that may be due to company network constraints.
In Bridged mode, I can't ping back to the VM from my other PC. (Edit: fixed, this was just Network Discovery and Firewall settings)
I need this system running on Windows 10 as our IT department doesn't want to support my application (even though they agree to it being used) which means I can't go to a Windows Server option. Also, the VM's need to be Windows 10 for application compatibility.
All the equipment under test is in the same LAN subnet and on a single, dumb switch.
Any help would be appreciated.
1. Launch the menu item VM > Settings.
2. From within the virtual machine, search the Start menu for Command Prompt. Enter ipconfig at the prompt and look for the value following "IPv4 Address". Record this address for later use.
3. Now select the menu item Edit > Virtual Network Editor.
4. Select the NAT network type and then choose NAT Settings.
5. From this new prompt, click Add to include a new port forwarder.
6. Enter the following information: Host Port: 9997, Type: TCP, Virtual machine IP address: the IP you recorded in Step 2.
7. Note: the port number is 3389 by default (the standard RDP port). Save any open prompts so the configuration changes can take place.
8. The final step is to enable RDP connections from within the guest operating system itself.

How to use the RJ45 tool in the CORE network emulator?

I have recently installed the CORE Network Emulator and have already read the relevant parts of the docs. CORE promises to be able to connect the virtual networks you create in it with physical ones. However, I am having trouble connecting my virtual network to the physical one, which the RJ45 tool promises to do. From what I have read, in the CORE NetEm you can assign a network interface to the RJ45 tool, which then bridges your physical device to the network.
I have tried creating a basic topology, with one virtual host, a router, and then my computer with the RJ45 tool and I am trying to see if I can reach my computer from the host or vice versa with a ping command, but all I get is "network is unreachable."
Unfortunately, the CORE docs don't go into detail on how to use this tool, and I wasn't able to find any other sources on the internet that cover it.
Here you can find the documentation: http://coreemu.github.io/core/usage.html#connecting-with-physical-networks
Does anyone have any experience with CORE and can help me out with this?
Many thanks!
The CORE RJ45 tool creates a Linux bridge between a virtual interface and a physical one.
Example: if you have node n1 linked to an RJ45 node assigned to eth0, then after pressing "Start", on the underlying host you'll have a bridge with the veth peer device for n1:eth0 and your host's eth0 device enslaved.
You'll need to configure routing between your virtual and physical networks. In the above example, suppose n1:eth0 is 10.0.0.1/24. When you plug a physical device into eth0, that device needs a route back to 10.0.0.1. That device may be on the same subnet, for example if it has the address 10.0.0.2/24. If your physical device has an address on a different subnet, you'll need to manually add a route to reach the 10.0.0.0/24 network, via the connecting interface.
I had the same problem. My CORE version is v.5.3.0 (20190615) on Ubuntu 18.04 LTS with Linux 5.0.0-37-generic on x86_64. I have OSPFv2, OSPFv3, zebra, and IPForward correctly configured on r1, so that vpc1 can send and receive data successfully.
An RJ45 node mapped to a built-in physical interface on the CORE host was used as the endpoint for connecting the 2nd, real computer rpc (192.168.10.10/24) to a virtual switch sw1. The switch also connects another virtual PC, vpc1 (192.168.10.20/24), and a router r1 with two interfaces, 192.168.10.1/24 and 10.0.10.1/24.
I can ping from rpc to vpc1 and to r1 at 192.168.10.1, but not to 10.0.10.1 or beyond. However, using the two-node tool or the virtual terminal of vpc1, I can traceroute and ping r1 and beyond.
The reason the traffic of the real remote PC rpc could not be routed by r1 from 192.168.10.1 to 10.0.10.1 and back was that its WiFi was left on with the gateway configured to a FiOS router. You cannot have two default gateways. Once the WiFi was turned off, traceroute and ping could reach r1 and beyond.
This could also be the root cause of your problem.

Packet Out-of-order with Open vSwitch running on multi-core?

From the documentation, it seems the latest version of Open vSwitch supports multi-core.
In our OpenStack test environment, which uses Open vSwitch on the host, we observed that the sequence of packets changes when they are sent from the same VM to the physical network with the same destination IP. Is this related to multi-core processing on the host?
We also tried a similar test with KVM (as the hypervisor) using a Linux bridge on the host, and the packet sequence was preserved.
Could anyone give me some hints on this?
This is one of OVS's normal forwarding behaviors.
http://openvswitch.org/pipermail/discuss/2012-April/007003.html

tun/tap interface communication with physical device

I'm not clear about how the tun/tap interface is working. From Wikipedia, I got this:
Packets sent by an operating system via a TUN/TAP device are delivered to a user-space program that attaches itself to the device. A user-space program may also pass packets into a TUN/TAP device. In this case TUN/TAP device delivers (or "injects") these packets to the operating system network stack thus emulating their reception from an external source.
Now, let's suppose that I create a tun with IP 12.12.12.1. If I have two NICs on this machine, will I be able to communicate with this tun (at IP 12.12.12.1) from an external machine (let's say 12.12.12.2), no matter which NIC the second machine is connected to (let's say eth0 or eth1)?
In other words, are the tun and the NICs independent of each other, or do you need to communicate with the tun through a specific NIC?
N.B. Links on topic are welcome!
If you set up a virtual network, e.g. 12.12.12.0/24, that is reachable via your virtual interface and you send a packet to this network from your machine, the kernel module implementing tun/tap will send this packet from the kernel, via a character device, to your application. It is up to your application what it does with this packet. It can be transmitted to some other application (e.g. a VPN server). Your application can also feed packets back via this character device, and the OS network stack will see these packets as ingress network traffic.
If the machine acts as a router it can just use a tun/tap virtual interface as a regular one and forward traffic via it, but it is always the application handling the device that manages packets. Outgoing traffic via the virtual interface is always delivered to your application, and incoming traffic via the virtual interface always originates from your application.
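To make the character-device part concrete, here is a minimal sketch of the user-space side of a TUN device; the interface name tun12 is made up, and assigning 12.12.12.1/24 and bringing the interface up would be done separately (e.g. with ip addr / ip link):

```c
/* Sketch: open /dev/net/tun, create a TUN interface, then read IP packets
 * the kernel routes to it. A write() of a full IP packet would be injected
 * back into the stack as if it arrived on the interface. */
#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) {
        perror("open /dev/net/tun");
        return 1;
    }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;   /* layer-3 device, no extra header */
    strncpy(ifr.ifr_name, "tun12", IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {  /* attach this fd to the interface */
        perror("TUNSETIFF");
        return 1;
    }

    unsigned char buf[2048];
    for (;;) {
        /* Each read() returns one IP packet the kernel sent out via tun12. */
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n < 0)
            break;
        printf("got packet of %zd bytes\n", n);
    }
    close(fd);
    return 0;
}
```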

Resources