QEMU/KVM network interface address - networking

What's the use of the addr value when specifying a new network interface in QEMU/KVM?
Example: qemu -hda deb.img -net nic,addr=192.168.1.10
Is there a way to specify the IP address of a network interface directly?
(By "directly" I mean at the moment the guest is launched.)

If you call
# /opt/qemu/bin/qemu-system-x86_64 \
-drive file=/opt/test.qcow2,format=qcow2 \
-vnc :0 \
-machine pc,accel=kvm,usb=off \
-m 2048 \
-net nic,addr=192.16.0.1
You receive an error:
qemu-system-x86_64: Invalid PCI device address 192.16.0.1 for device e1000
The addr param is not an IP address; it is a PCI device address.
If you call
# /opt/qemu/bin/qemu-system-x86_64 \
-drive file=/opt/test.qcow2,format=qcow2 \
-vnc :0 \
-machine pc,accel=kvm,usb=off \
-m 2048 \
-net nic,addr=0x10
the NIC is added as PCI device 00:10.0 (domain 0, bus 0, device 0x10, function 0).

You can set a static IP address inside the guest OS.
The IP address is not a property of the virtual hardware, but you can set the MAC address or the PCI address (BDF) of the NIC with this parameter.
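For illustration, a minimal sketch using the newer -netdev/-device syntax to pin both the PCI slot and the MAC address (the values are placeholders, not taken from the question):
# addr fixes the PCI slot (00:10.0 here), mac fixes the guest-visible MAC;
# the IP address is still assigned inside the guest (static config or DHCP)
qemu-system-x86_64 \
  -drive file=/opt/test.qcow2,format=qcow2 \
  -m 2048 \
  -netdev user,id=net0 \
  -device e1000,netdev=net0,addr=0x10,mac=52:54:00:12:34:56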

Related

Cosmos DB emulator installation on Mac

ipaddr="`ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}' | head -n 1`"
docker pull mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
docker run \
--publish 8081:8081 \
--publish 10250-10255:10250-10255 \
--memory 3g --cpus=2.0 \
--name=test-linux-emulator1 \
--env AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10 \
--env AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true \
--env AZURE_COSMOS_EMULATOR_IP_ADDRESS_OVERRIDE=$ipaddr \
--interactive \
--tty \
mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
I followed these steps but I am getting this error:
./cosmosdb-emulator: ERROR: Invalid mapping of address 0x40037db000 in reserved address space below 0x400000000000. Possible causes:
1) the process (itself, or via a wrapper) starts-up its own running environment sets the stack size limit to unlimited via syscall setrlimit(2);
2) the process (itself, or via a wrapper) adjusts its own execution domain and flag the system its legacy personality via syscall personality(2);
3) sysadmin deliberately sets the system to run on legacy VA layout mode by adjusting a sysctl knob vm.legacy_va_layout.
Mac Chip: Apple M1 Pro
From the official documentation:
The emulator only supports MacBooks with Intel processors.
The Apple M1 Pro you are using is therefore not supported by the emulator.
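If in doubt, the host CPU type can be checked before pulling the image (generic macOS commands, not from the official doc):
uname -m                             # arm64 on Apple silicon, x86_64 on Intel Macs
sysctl -n machdep.cpu.brand_string   # e.g. "Apple M1 Pro"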

Unable to connect testpmd to OVS+DPDK

Summary
I'm trying to use testpmd as a sink of traffic from a physical NIC, through OVS with DPDK.
When I run testpmd, it fails. The error message is very brief, so I have no idea what's wrong.
How can I get testpmd to connect to a virtual port in OVS with DPDK?
Steps
I'm mostly following these Mellanox instructions.
# step 5 - "Specify initial Open vSwitch (OVS) database to use"
export PATH=$PATH:/usr/local/share/openvswitch/scripts
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
# step 6 - "Configure OVS to support DPDK ports"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# step 7 - "Start OVS-DPDK service"
ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start # what does this do? I forget
# step 8 - "Configure the poll mode drivers (PMDs) to work with 2G hugepages and NUMA node0"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048" # 2048 = 2GB
# step 9 - "Set core mask to enable several PMDs"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xFF0 # cores 4-11, 4 per NUMA node
# core masks are bitmasks with one bit per core; the LSB is core 0
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x8 # core 3
# step 10 - there is no step 10 in the doc linked above
# step 11 - Create an OVS bridge
BRIDGE="br0"
ovs-vsctl add-br $BRIDGE -- set bridge br0 datapath_type=netdev
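Not part of the original steps, but before adding ports it can help to confirm that OVS actually picked up the DPDK settings (plain ovs-vsctl reads; dpdk_initialized is available in reasonably recent OVS):
ovs-vsctl get Open_vSwitch . other_config      # dpdk-init, socket-mem, cpu masks
ovs-vsctl get Open_vSwitch . dpdk_initialized  # should print: true
ovs-vsctl list-br                              # br0 should be listed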
Then for the OVS elements I'm trying to follow these steps
# add physical NICs to bridge, must be named dpdk(\d+)
sudo ovs-vsctl add-port $BRIDGE dpdk0 \
-- set Interface dpdk0 type=dpdk \
options:dpdk-devargs=0000:5e:00.0 ofport_request=1
sudo ovs-vsctl add-port $BRIDGE dpdk1 \
-- set Interface dpdk1 type=dpdk \
options:dpdk-devargs=0000:5e:00.1 ofport_request=2
# add a virtual port to connect to testpmd/VM
# Not sure if I want dpdkvhostuser or dpdkvhostuserclient
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser0 \
-- \
set Interface dpdkvhostuser0 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:4,1:6" \
ofport_request=3
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser1 \
-- \
set Interface dpdkvhostuser1 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:8,1:10" \
ofport_request=4
# add flows to join interfaces (based on ofport_request numbers)
sudo ovs-ofctl add-flow $BRIDGE in_port=1,action=output:3
sudo ovs-ofctl add-flow $BRIDGE in_port=3,action=output:1
sudo ovs-ofctl add-flow $BRIDGE in_port=2,action=output:4
sudo ovs-ofctl add-flow $BRIDGE in_port=4,action=output:2
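Again not from the original instructions, but the resulting wiring can be sanity-checked with standard OVS commands:
sudo ovs-vsctl show                        # ports and their options
sudo ovs-ofctl show $BRIDGE                # the ofport numbers actually assigned
sudo ovs-ofctl dump-flows $BRIDGE          # the four flows added above
sudo ovs-vsctl get Interface dpdk0 error   # empty ([]) if the DPDK port attached cleanly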
Then I run testpmd
sudo -E $DPDK_DIR/x86_64-native-linuxapp-gcc/app/testpmd \
--vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0 \
--vdev virtio_user1,path=/usr/local/var/run/openvswitch/dpdkvhostuser1 \
-c 0x00fff000 \
-n 1 \
--socket-mem=2048,2048 \
--file-prefix=testpmd \
--log-level=9 \
--no-pci \
-- \
--port-numa-config=0,0,1,0 \
--ring-numa-config=0,1,0,1,1,0 \
--numa \
--socket-num=0 \
--txd=512 \
--rxd=512 \
--mbcache=512 \
--rxq=1 \
--txq=1 \
--nb-cores=4 \
-i \
--rss-udp \
--auto-start
The output is:
...
EAL: lcore 18 is ready (tid=456c700;cpuset=[18])
EAL: lcore 21 is ready (tid=2d69700;cpuset=[21])
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=327680, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=327680, size=2176, socket=1
Configuring Port 0 (socket 0)
Fail to configure port 0
EAL: Error - exiting with code: 1
Cause: Start ports failed
The bottom of /usr/local/var/log/openvswitch/ovs-vswitchd.log is
2018-11-30T02:45:49.115Z|00026|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been added on numa node 0
2018-11-30T02:45:49.115Z|00027|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00028|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2018-11-30T02:45:49.115Z|00029|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0'changed to 'enabled'
2018-11-30T02:45:49.115Z|00030|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00031|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2018-11-30T02:45:49.278Z|00032|dpdk|ERR|VHOST_CONFIG: recvmsg failed
2018-11-30T02:45:49.279Z|00033|dpdk|INFO|VHOST_CONFIG: vhost peer closed
2018-11-30T02:45:49.280Z|00034|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been removed
What is the cause of the failure?
Should I use a dpdkvhostuserclient instead of dpdkvhostuser?
What else I've tried
Looking in /var/log/messages for more info - but that's just a copy of stdout and stderr.
Rebooting
Looking through the OVS docs, but they don't mention "logs" (see the logging note after this list)
I've tried changing my testpmd command arguments. (Docs here)
Getting rid of --no-pci. The result is
Configuring Port 0 (socket 0)
Port 0: 24:8A:07:9E:94:94
Configuring Port 1 (socket 0)
Port 1: 24:8A:07:9E:94:95
Configuring Port 2 (socket 0)
Fail to configure port 2
EAL: Error - exiting with code: 1
Cause: Start ports failed
Those MAC addresses are for the physical NICs I've already connected to OVS.
Removing --auto-start: same result
--nb-cores=1: same result
Removing the 2nd --vdev: Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained. When I add --port-topology=chained I end up with the original error.
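On the logging point above: OVS log verbosity can be raised at runtime with ovs-appctl (standard OVS commands, not something tried in the original post):
ovs-appctl vlog/list                       # current log levels per module
ovs-appctl vlog/set netdev_dpdk:file:dbg   # debug logging from the DPDK netdev code
ovs-appctl vlog/set dpdk:file:dbg          # debug logging from the DPDK/vhost layer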
Other Info
DPDK 17.11.4
OVS 2.10.1
NIC: Mellanox Connect-X5
OS: Centos 7.5
My NIC is on NUMA node 0
When I run ip addr I see that an interface called br0 has the same MAC address as my physical NICs (p3p1, when it was bound to the kernel)
When I run sudo ovs-vsctl show I see
d3e721eb-6aeb-44c0-9fa8-5fcf023008c5
    Bridge "br0"
        Port "dpdkvhostuser1"
            Interface "dpdkvhostuser1"
                type: dpdkvhostuser
                options: {n_rxq="2,pmd-rxq-affinity=0:8,1:10"}
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.1"}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.0"}
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdkvhostuser0"
            Interface "dpdkvhostuser0"
                type: dpdkvhostuser
                options: {n_rxq="2,pmd-rxq-affinity=0:4,1:6"}
Edit: Added contents of /usr/local/var/log/openvswitch/ovs-vswitchd.log
Indeed, you need dpdkvhostuser here, not dpdkvhostuserclient.
The number of queues is mismatched: OVS is configured with options:n_rxq=2 while testpmd is started with --rxq=1/--txq=1.
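A sketch of one way to line the queues up, assuming two queues per vhost-user port are really wanted. Note in the ovs-vsctl show output above that the whole string ended up as the value of n_rxq; the per-queue affinity belongs in other_config, not in options (and for vhost-user ports a recent OVS takes the actual queue count from the virtio frontend, i.e. from testpmd):
# OVS side: n_rxq as its own option, affinity in other_config
sudo ovs-vsctl set Interface dpdkvhostuser0 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:4,1:6"
sudo ovs-vsctl set Interface dpdkvhostuser1 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:8,1:10"
# testpmd side: match the queue count with --rxq=2 --txq=2,
# or drop both sides to a single queue and keep --rxq=1 --txq=1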

docker userdefined network - no connection to the outside world

I am trying to expose a Docker container to the outside world (well, to be specific, just to my internal network, 10.0.0.0/24) with a static IP address. In my example the container should have the IP address 10.0.0.200.
Docker version is 1.10.3.
Therefore I created a user-defined network (in bridge mode):
docker network create --subnet 10.0.0.0/24 --gateway 10.0.0.254 dn
Then I created a container and attached it to the network:
docker run -d \
--name testhost \
-it \
-h testhost \
--net dn \
--ip 10.0.0.200 \
-p 8080:8080 \
some image
The container has the correct IP and gateway assigned (10.0.0.200 and 10.0.0.254, the latter also being the IP of the bridge interface Docker created), but no communication is possible from the container to the outside world, nor from the outside to the container. The only thing that works is nslookup, though to be honest I don't know why that works.
From another host in the network I can ping the bridge interface that was created by the docker network create command.
A second container connected to the dn network can ping my first container, so communication inside the network seems fine.
According to the Docker network documentation (https://docs.docker.com/engine/userguide/networking/#a-bridge-network, see the second picture in the bridge-network section), this should be possible.
It seems that I'm missing some step or config. Any advice is appreciated, thank you.
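For reference, a few generic commands that help narrow this kind of problem down on the Docker host (not from the original post; rule and chain names vary between Docker versions):
docker network inspect dn               # confirm subnet, gateway and the container IP
sysctl net.ipv4.ip_forward              # must be 1 for the host to forward for the bridge
iptables -t nat -L POSTROUTING -n -v    # a MASQUERADE rule for 10.0.0.0/24 should be present
iptables -L FORWARD -n -v               # look for DROP rules hitting the new bridge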

File transfer between 2 vmware workstations on same host

I need to transfer a file from a Process1 on VM1 inside VMware Workstation to another VM2 inside the same VMware Workstation hypervisor on the same host, so as to calculate the rate of data transfer between these two virtual machines.
Should I write an FTP client/server program, and if so, how do I calculate the transfer time?
And how do I manage the ports in the virtual machines (let's say both are running Ubuntu) when writing the client/server program?
What you need to do is not trivial and you'll have to invest some effort. First of all you need to create a bridge between the two VMs and give each VM a tap interface on that bridge.
I have a script below you can look at as an example - it creates some screen sessions (you will need a basic .screenrc) and launches a VM in each screen tab. Really, the only bits that should interest you are the bridge setup and how to launch QEMU.
For the networking setup you want static routes - below is an example of what I had, with eth0 being a user-mode network interface and eth1 the interface connected to the peer VM. You could get rid of eth0. The peer VM's routes were a mirror of this:
sudo vi /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
up route add default gw 10.0.2.2 eth0
auto eth1
iface eth1 inet static
address 20.0.0.1
netmask 255.255.255.0
network 20.0.0.0
broadcast 20.0.0.255
gateway 20.0.0.2
up route add -host 21.0.0.1 gw 20.0.0.2 dev eth1
up route add -host 21.0.0.2 gw 20.0.0.2 dev eth1
up route del default gw 20.0.0.2 eth1
# You want it like this:
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 20.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
20.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
# on the peer
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 21.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
21.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
Once you have the VMs up and have configured a static route on each of those, you can use iperf with one end as a sink and the other as a source e.g.:
iperf -s
iperf -c 20.0.0.1 -t 10 -i 1
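If the goal is the rate of a single file transfer rather than raw link throughput, a timed copy works too (a sketch with a placeholder file name, assuming ssh/scp is available between the VMs):
# copy to the peer (20.0.0.1 in the example above) and time it
time scp bigfile.bin user@20.0.0.1:/tmp/
# transfer rate = file size / elapsed ("real") time reported by time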
bridge script setup:
#
# Settings
#
INCLUDE_QEMU_AS_SCREEN=
A_TELNET_1=6661
A_NAME="A-VM"
A_MEMORY=1G
B_TELNET_1=6665
B_NAME="B-VM"
B_MEMORY=1G
A_DISK_IMAGE=.A.disk.img
B_DISK_IMAGE=.B.disk.img
A_PID=.A.pid
B_PID=.B.pid
A_CMD_1=.A.cmd.1
B_CMD_1=.B.cmd.1
#
# Run QEMU in background or foreground
#
if [ "$INCLUDE_QEMU_AS_SCREEN" != "" ]
then
SCREEN_QEMU_A='screen -t "A-qemu"'
SCREEN_QEMU_B='screen -t "B-qemu"'
else
SCREEN_QEMU_A='bg'
SCREEN_QEMU_B='bg'
fi
#
# Store logs locally and use the date to avoid losing old logs
#
LOG_DATE=`date "+%a_%b_%d_at_%H_%M"`
HOME=$(eval echo ~${SUDO_USER})
LOG_DIR=logs/$LOG_DATE
mkdir -p $LOG_DIR
if [ $? -ne 0 ]; then
LOG_DIR=/tmp/$LOGNAME/logs/$LOG_DATE
mkdir -p $LOG_DIR
if [ $? -ne 0 ]; then
LOG_DIR=.
fi
fi
LOG_DIR_TEST=$LOG_DIR
mkdir -p $LOG_DIR_TEST
#
# create the tap
#
echo
echo ================ create taps ================
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap1
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap2
#
# bring up the tap
#
echo
echo =============== bring up taps ===============
sudo ifconfig $LOGNAME-tap1 up
sudo ifconfig $LOGNAME-tap2 up
#
# show the tap
#
echo
echo =================== tap 1 ===================
ifconfig $LOGNAME-tap1
echo
echo =================== tap 2 ===================
ifconfig $LOGNAME-tap2
#
# create the bridge
#
sudo brctl addbr $LOGNAME-br1
#
# bring up the bridge
#
sudo ifconfig $LOGNAME-br1 1.1.1.1 up
#
# show my bridge
#
echo
echo =================== bridge 1 ===================
ifconfig $LOGNAME-br1
brctl show $LOGNAME-br1
brctl showmacs $LOGNAME-br1
#
# attach tap interface to bridge
#
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap2
SCRIPT_START="echo Starting..."
SCRIPT_EXIT="echo Exiting...; sleep 3"
cat >$A_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-A -f -c 'telnet localhost $A_TELNET_1'
$SCRIPT_EXIT
%%%
cat >$B_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-B -f -c 'telnet localhost $B_TELNET_1'
$SCRIPT_EXIT
%%%
chmod +x $A_CMD_1
chmod +x $B_CMD_1
run_qemu_in_screen_or_background()
{
SCREEN=$1
shift
if [ "$SCREEN" = "bg" ]
then
$* &
else
$SCREEN $*
fi
}
echo
echo
echo
echo "##########################################################"
echo "# Starting QEMU #"
echo "##########################################################"
echo
echo
echo
run_qemu_in_screen_or_background \
$SCREEN_QEMU_A \
qemu-system-x86_64 -nographic \
-m $A_MEMORY \
-enable-kvm \
-drive file=$A_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$A_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=10:16:3e:00:01:12 \
-net tap,ifname=$LOGNAME-tap1,vlan=21,script=no \
-boot c \
-pidfile $A_PID
run_qemu_in_screen_or_background \
$SCREEN_QEMU_B \
qemu-system-x86_64 -nographic \
-m $B_MEMORY \
-enable-kvm \
-drive file=$B_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$B_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=30:16:3e:00:03:14 \
-net tap,ifname=$LOGNAME-tap2,vlan=21,script=no \
-boot c \
-pidfile $B_PID
sleep 1
screen -t "$A_NAME" sh -c "sh $A_CMD_1"
screen -t "$B_NAME" sh -c "sh $B_CMD_1"
sleep 5
echo
echo
echo
echo "##########################################################"
echo "# Hit enter to quit #"
echo "##########################################################"
echo
echo
echo
read xx
cat $A_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $A_PID 2>/dev/null
cat $B_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $B_PID 2>/dev/null
rm -f $A_CMD_1 2>/dev/null
rm -f $B_CMD_1 2>/dev/null
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap2
sudo ifconfig $LOGNAME-br1 down
sudo brctl delbr $LOGNAME-br1
sudo ifconfig $LOGNAME-tap1 down
sudo ifconfig $LOGNAME-tap2 down
sudo tunctl -d $LOGNAME-tap1
sudo tunctl -d $LOGNAME-tap2
Supposing your VM1 and VM2 have some working ISO/OSI L2/L3 connectivity, so there is some transport available, a far better approach to setting up process-to-process communication is to use an industry-proven messaging framework rather than spend time building yet another FTP client/server.
For sending/receiving anything (including BLOBs such as whole files), try the ZeroMQ or nanomsg libraries; these are broker-less frameworks, have ready-made bindings for many programming languages, and offer genuinely good performance and low latency.
Any distributed process-to-process project will benefit from adopting this approach early.
Check http://zguide.zeromq.org/c:fileio3 for additional insights into add-on benefits like load balancing, failure recovery, et al.
As per your Q1: FTP servers report file-transfer times themselves. With ZeroMQ you can programmatically measure the time spent on a file transfer as the distance between two timestamps.
As per your Q2: for ZeroMQ, use any port that is allowed (not restricted) on Ubuntu.
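As a quick shell-level illustration of Q1, an FTP upload can also be timed directly with curl, assuming an FTP server is already running on the peer VM (host, credentials and path are placeholders):
time curl -T bigfile.bin ftp://user:password@20.0.0.1/upload/
# curl also prints its own average-speed column while transferring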

Socat Windows serial port access

I want to route my serial COM10 to LAN --> LAN to COM12.
I therefore need the equivalent command for the Windows version of socat:
socat -d -d -d TCP4-LISTEN:23000,reuseaddr,fork /dev/ttyS0
What do I have to enter under Windows instead of /dev/ttyS0 if I want to access my COM10?
Sender: socat -d -d -d TCP4:localhost:23000 /dev/ttyS1
Receiver: socat -d -d -d TCP4-LISTEN:23000 /dev/ttyS2
Thanks in advance!
Use the standard Linux naming convention:
/dev/ttyS0 is equivalent to COM1
/dev/ttyS1 ~ COM2
... so COM10 should be /dev/ttyS9.
http://cygwin.com/cygwin-ug-net/using-specialnames.html#pathnames-posixdevices
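Putting that together, a sketch of the two sides under Cygwin, based on the mapping above (COM10 -> /dev/ttyS9, COM12 -> /dev/ttyS11):
# listener side, exposing COM10 on TCP port 23000
socat -d -d -d TCP4-LISTEN:23000,reuseaddr,fork /dev/ttyS9
# sender side, bridging COM12 to that TCP port
socat -d -d -d TCP4:localhost:23000 /dev/ttyS11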
