Summary
I'm trying to use testpmd as a sink of traffic from a physical NIC, through OVS with DPDK.
When I run testpmd, it fails. The error message is very brief, so I have no idea what's wrong.
How can I get testpmd to connect to a virtual port in OVS with DPDK?
Steps
I'm mostly following these Mellanox instructions.
# step 5 - "Specify initial Open vSwitch (OVS) database to use"
export PATH=$PATH:/usr/local/share/openvswitch/scripts
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
# step 6 - "Configure OVS to support DPDK ports"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# step 7 - "Start OVS-DPDK service"
ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start # starts ovs-vswitchd against the already-running ovsdb-server (--no-ovsdb-server skips launching a second one)
# step 8 - "Configure the source code analyzer (PMD) to work with 2G hugespages and NUMA node0"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048" # 2048 = 2GB
# step 9 - "Set core mask to enable several PMDs"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xFF0 # cores 4-11, 4 per NUMA node
# core masks are one-hot; the LSB is core 0
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x8 # core 3
# step 10 - there is no step 10 in the doc linked above
# step 11 - Create an OVS bridge
BRIDGE="br0"
ovs-vsctl add-br $BRIDGE -- set bridge br0 datapath_type=netdev
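To sanity-check that DPDK really initialized inside OVS before going further, these checks should work (my addition, not part of the Mellanox steps; I believe the dpdk_initialized and dpdk_version columns need OVS 2.7+):
grep Huge /proc/meminfo                               # hugepages allocated and free
ovs-vsctl get Open_vSwitch . other_config:dpdk-init   # should print "true"
ovs-vsctl get Open_vSwitch . dpdk_initialized         # true once ovs-vswitchd has brought DPDK up
ovs-vsctl get Open_vSwitch . dpdk_version             # DPDK version the build was linked against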
Then, for the OVS ports and flows, I'm trying to follow these steps
# add physical NICs to bridge, must be named dpdk(\d+)
sudo ovs-vsctl add-port $BRIDGE dpdk0 \
-- set Interface dpdk0 type=dpdk \
options:dpdk-devargs=0000:5e:00.0 ofport_request=1
sudo ovs-vsctl add-port $BRIDGE dpdk1 \
-- set Interface dpdk1 type=dpdk \
options:dpdk-devargs=0000:5e:00.1 ofport_request=2
# add a virtual port to connect to testpmd/VM
# Not sure if I want dpdkvhostuser or dpdkvhostuserclient (see the client-mode sketch after these add-port commands)
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser0 \
-- \
set Interface dpdkvhostuser0 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:4,1:6" \
ofport_request=3
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser1 \
-- \
set Interface dpdkvhostuser1 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:8,1:10" \
ofport_request=4
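For reference, here's a hedged sketch of what the client-mode alternative would look like (the port name and socket path are just examples). With dpdkvhostuserclient the roles flip: OVS becomes the vhost-user client and connects to a socket created by the other end, so testpmd's virtio_user would have to run in server mode, and if I remember right its server=1 option only arrived in a DPDK release newer than 17.11:
sudo ovs-vsctl add-port $BRIDGE dpdkvhostclient0 \
    -- set Interface dpdkvhostclient0 \
       type=dpdkvhostuserclient \
       options:vhost-server-path=/usr/local/var/run/openvswitch/dpdkvhostclient0 \
       ofport_request=3
# matching testpmd vdev (note server=1):
#   --vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostclient0,server=1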
# add flows to join interfaces (based on ofport_request numbers)
sudo ovs-ofctl add-flow $BRIDGE in_port=1,action=output:3
sudo ovs-ofctl add-flow $BRIDGE in_port=3,action=output:1
sudo ovs-ofctl add-flow $BRIDGE in_port=2,action=output:4
sudo ovs-ofctl add-flow $BRIDGE in_port=4,action=output:2
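To check that the ports and flows look sane before starting testpmd (these are standard OVS commands, added here for reference):
sudo ovs-ofctl dump-flows $BRIDGE            # the four flows, with packet counters
sudo ovs-appctl dpif-netdev/pmd-rxq-show     # which PMD core polls which rx queue
sudo ovs-appctl dpif-netdev/pmd-stats-show   # per-PMD packet and cycle statistics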
Then I run testpmd
sudo -E $DPDK_DIR/x86_64-native-linuxapp-gcc/app/testpmd \
--vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0 \
--vdev virtio_user1,path=/usr/local/var/run/openvswitch/dpdkvhostuser1 \
-c 0x00fff000 \
-n 1 \
--socket-mem=2048,2048 \
--file-prefix=testpmd \
--log-level=9 \
--no-pci \
-- \
--port-numa-config=0,0,1,0 \
--ring-numa-config=0,1,0,1,1,0 \
--numa \
--socket-num=0 \
--txd=512 \
--rxd=512 \
--mbcache=512 \
--rxq=1 \
--txq=1 \
--nb-cores=4 \
-i \
--rss-udp \
--auto-start
The output is:
...
EAL: lcore 18 is ready (tid=456c700;cpuset=[18])
EAL: lcore 21 is ready (tid=2d69700;cpuset=[21])
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=327680, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=327680, size=2176, socket=1
Configuring Port 0 (socket 0)
Fail to configure port 0
EAL: Error - exiting with code: 1
Cause: Start ports failed
The bottom of /usr/local/var/log/openvswitch/ovs-vswitchd.log is
2018-11-30T02:45:49.115Z|00026|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been added on numa node 0
2018-11-30T02:45:49.115Z|00027|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00028|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2018-11-30T02:45:49.115Z|00029|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0'changed to 'enabled'
2018-11-30T02:45:49.115Z|00030|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00031|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2018-11-30T02:45:49.278Z|00032|dpdk|ERR|VHOST_CONFIG: recvmsg failed
2018-11-30T02:45:49.279Z|00033|dpdk|INFO|VHOST_CONFIG: vhost peer closed
2018-11-30T02:45:49.280Z|00034|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been removed
What is the cause of the failure?
Should I use dpdkvhostuserclient instead of dpdkvhostuser?
What else I've tried
Looking in /var/log/messages for more info - but that's just a copy of stdout and stderr.
Rebooting
Looking up the OVS docs, but they don't mention "logs" (see the logging note at the end of this section)
I've tried changing my testpmd command arguments. (Docs here)
Getting rid of --no-pci. The result is
Configuring Port 0 (socket 0)
Port 0: 24:8A:07:9E:94:94
Configuring Port 1 (socket 0)
Port 1: 24:8A:07:9E:94:95
Configuring Port 2 (socket 0)
Fail to configure port 2
EAL: Error - exiting with code: 1
Cause: Start ports failed
Those MAC addresses are for the physical NICs I've already connected to OVS.
Removing --auto-start: same result
--nb-cores=1: same result
Removing the 2nd --vdev: "Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained." When I add --port-topology=chained I end up with the original error.
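For the logging point above: a source-built ovs-vswitchd logs to /usr/local/var/log/openvswitch/ovs-vswitchd.log, and per-module verbosity can be raised at runtime (the module names netdev_dpdk and dpdk are the ones that appear in the log excerpt above):
sudo ovs-appctl vlog/list                      # modules and their current log levels
sudo ovs-appctl vlog/set netdev_dpdk:file:dbg  # debug detail from the DPDK netdev code
sudo ovs-appctl vlog/set dpdk:file:dbg         # debug detail from the DPDK/vhost layer
tail -f /usr/local/var/log/openvswitch/ovs-vswitchd.log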
Other Info
DPDK 17.11.4
OVS 2.10.1
NIC: Mellanox ConnectX-5
OS: CentOS 7.5
My NIC is on NUMA node 0
When I run ip addr I see that an interface called br0 has the same MAC address as one of my physical NICs (p3p1, when it was bound to the kernel)
When I run sudo ovs-vsctl show I see
d3e721eb-6aeb-44c0-9fa8-5fcf023008c5
Bridge "br0"
Port "dpdkvhostuser1"
Interface "dpdkvhostuser1"
type: dpdkvhostuser
options: {n_rxq="2,pmd-rxq-affinity=0:8,1:10"}
Port "dpdk1"
Interface "dpdk1"
type: dpdk
options: {dpdk-devargs="0000:5e:00.1"}
Port "dpdk0"
Interface "dpdk0"
type: dpdk
options: {dpdk-devargs="0000:5e:00.0"}
Port "br0"
Interface "br0"
type: internal
Port "dpdkvhostuser0"
Interface "dpdkvhostuser0"
type: dpdkvhostuser
options: {n_rxq="2,pmd-rxq-affinity=0:4,1:6"}
Edit: Added contents of /usr/local/var/log/openvswitch/ovs-vswitchd.log
Indeed, we need dpdkvhostuser here, not the client type.
The number of queues is mismatched: OVS is configured with options:n_rxq=2 while testpmd runs with --rxq=1/--txq=1.
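A hedged sketch of bringing the two sides in line, keeping everything else from the testpmd command above. As an aside, pmd-rxq-affinity is normally set under other_config rather than options, which is why the ovs-vsctl show output above shows it fused into the n_rxq value:
# make testpmd match the OVS ports (two queues per port):
#   replace --rxq=1 --txq=1 with --rxq=2 --txq=2 in the testpmd command above
# and the per-port affinity would normally be configured like this:
sudo ovs-vsctl set Interface dpdkvhostuser0 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:4,1:6"
sudo ovs-vsctl set Interface dpdkvhostuser1 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:8,1:10"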
Related
ipaddr="`ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}' | head -n 1`"
docker pull mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
docker run \
--publish 8081:8081 \
--publish 10250-10255:10250-10255 \
--memory 3g --cpus=2.0 \
--name=test-linux-emulator1 \
--env AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10 \
--env AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true \
--env AZURE_COSMOS_EMULATOR_IP_ADDRESS_OVERRIDE=$ipaddr \
--interactive \
--tty \
mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
I followed these steps but I am getting this error:
./cosmosdb-emulator: ERROR: Invalid mapping of address 0x40037db000 in reserved address space below 0x400000000000. Possible causes:
1) the process (itself, or via a wrapper) starts-up its own running environment sets the stack size limit to unlimited via syscall setrlimit(2);
2) the process (itself, or via a wrapper) adjusts its own execution domain and flag the system its legacy personality via syscall personality(2);
3) sysadmin deliberately sets the system to run on legacy VA layout mode by adjusting a sysctl knob vm.legacy_va_layout.
Mac Chip: Apple M1 Pro
From the official doc,
The emulator only supports MacBooks with Intel processors.
The Apple M1 Pro you are using is not supported by the emulator.
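For what it's worth, a quick way to confirm which architecture you are actually running Docker on (both commands are standard; the template field comes from docker version's Go template):
uname -m                                      # arm64 on Apple silicon
docker version --format '{{.Server.Arch}}'    # architecture of the Docker engine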
I have successfully installed Fabric on 5 nodes, one each for peer0, peer1, orderer0, kafka, and the client.
I am trying to start the orderer with the following environment set in start-order.sh:
ORDERER_GENERAL_LOGLEVEL=info \
ORDERER_GENERAL_LISTENADDRESS=orderer0 \
ORDERER_GENERAL_GENESISMETHOD=file \
ORDERER_GENERAL_GENESISFILE=/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/genesis.block \
ORDERER_GENERAL_LOCALMSPID=OrdererOrg0MSP \
ORDERER_GENERAL_LOCALMSPDIR=/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/msp \
ORDERER_GENERAL_TLS_ENABLED=false \
ORDERER_GENERAL_TLS_PRIVATEKEY=/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/server.key \
ORDERER_GENERAL_TLS_CERTIFICATE=/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/server.crt \
ORDERER_GENERAL_TLS_ROOTCAS=[/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/ca.crt,/root/bcnetwork/conf/crypto-config/peerOrganizations/org0/peers/peer0.org0/tls/ca.crt,/root/bcnetwork/conf/crypto-config/peerOrganizations/org1/peers/peer2.org1/tls/ca.crt] \
CONFIGTX_ORDERER_BATCHTIMEOUT=1s \
CONFIGTX_ORDERER_ORDERERTYPE=kafka \
CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka-zookeeper:9092] \
orderer
I have set the host orderer0 in /etc/hosts, and that entry itself is not the issue. But on executing, I get the following error:
2018-02-19 12:53:31.597 UTC [orderer/main] main -> INFO 001 Starting orderer:
Version: 1.0.2
Go version: go1.9
OS/Arch: linux/amd64
2018-02-19 12:53:31.602 UTC [orderer/main] initializeGrpcServer -> CRIT 002 Failed to listen: listen tcp XX.XXX.XXX.XX:7050: bind: cannot assign requested address
Machine Config FYR
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
go version go1.9.4 linux/amd64
OS : Ubuntu 16.04
You can try:
-e ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 -e ORDERER_GENERAL_LISTENPORT=7050
orderer0 itself can't resolve to any address unless specified in your host.conf.
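Applied to the start-order.sh snippet above rather than docker -e flags, that suggestion would look something like this, with all the other ORDERER_GENERAL_* and CONFIGTX_* variables kept as they are:
ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
ORDERER_GENERAL_LISTENPORT=7050 \
orderer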
I have deployed Fabric on multiple nodes using Docker swarm mode. You will need to create the swarm on one host and have the other hosts join it as manager nodes. After that, you create an overlay network on top of the swarm. This network is shared among all the nodes and thus allows containers to communicate with each other. You can see the complete tutorial at
hyperledger-fabric-on-multiple-hosts
I'm using ucarp over linux bonding for high availability and automatic failover of two servers.
Here are the commands I used on each server for starting ucarp :
Server 1 :
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.229 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
Server 2 :
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.242 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
and the content of the scripts :
vip-up.sh :
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr add "$2"/24 dev "$1"
vip-down.sh :
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr del "$2"/24 dev "$1"
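For reference, ucarp calls these scripts with the interface and the virtual IP as arguments (that is how $1 and $2 are used above), so a master transition on server 1 effectively runs:
/etc/vip-up.sh bond0 10.110.0.243
#   which expands to: /sbin/ip addr add 10.110.0.243/24 dev bond0
# if the switch's ARP cache is suspect (see the issue described below), an unsolicited
# ARP for the VIP could also be sent here, e.g. with iputils arping (flags differ in
# other arping implementations):
#   arping -U -c 3 -I bond0 10.110.0.243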
Everything works well and the servers switch from one to another correctly when the master becomes unavailable.
The problem occurs when I unplug both servers from the switch for too long (approximately 30 minutes). While unplugged, they both think they are master,
and when I plug them back in, the one with the lowest IP address tries to stay master by sending gratuitous ARPs. The other one switches to backup as expected, but I'm unable to access the master through its virtual IP.
If I unplug the master, the second server goes from backup to master and is accessible through its virtual ip.
My guess is that the switch "forgets" about my servers when they are disconnected for too long, and when I reconnect them, a backup-to-master transition is needed to correctly update the switch's ARP cache, even though the gratuitous ARPs sent by the master should do the job. Note that restarting ucarp on the master does fix the problem, but I would need to restart it every time it is disconnected for too long...
Any idea why it does not work as I expected, and how I could solve the problem?
Thanks.
What's the use of the addr value when specifying a new network interface on QEMU/KVM?
Example: qemu -hda deb.img -net nic,addr=192.168.1.10
Is there a way to specify directly the IP address of a network interface ?
(Directly means at the moment when we launch the guest)
If you call
# /opt/qemu/bin/qemu-system-x86_64 \
-drive file=/opt/test.qcow2,format=qcow2 \
-vnc :0 \
-machine pc,accel=kvm,usb=off \
-m 2048 \
-net nic,addr=192.16.0.1
You receive this error:
qemu-system-x86_64: Invalid PCI device address 192.16.0.1 for device e1000
The addr param is not an IP address; it's a PCI device address.
If you call
# /opt/qemu/bin/qemu-system-x86_64 \
-drive file=/opt/test.qcow2,format=qcow2 \
-vnc :0 \
-machine pc,accel=kvm,usb=off \
-m 2048 \
-net nic,addr=0x10
the NIC is added as device 00:10.0 (Domain 0, Bus 0, Device 0x10, Function 0).
You can set the static IP address in the guest OS.
The IP address is not a hardware property of the guest, so it cannot be set from the QEMU command line, but you can set the MAC address or the PCI BDF of the NIC with the -net nic parameters.
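A hedged example combining the knobs the legacy -net syntax actually exposes (a fixed MAC and a PCI slot), with the disk and machine options copied from the commands above and a user-mode backend added so the NIC has somewhere to connect; the IP itself is then configured inside the guest:
/opt/qemu/bin/qemu-system-x86_64 \
    -drive file=/opt/test.qcow2,format=qcow2 \
    -vnc :0 \
    -machine pc,accel=kvm,usb=off \
    -m 2048 \
    -net nic,model=e1000,macaddr=52:54:00:12:34:56,addr=0x10 \
    -net user
# then, inside the guest (interface name depends on the guest OS):
ip addr add 192.168.1.10/24 dev eth0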
I need to transfer a file from a process (Process1) on VM1, inside VMware Workstation, to VM2 inside the same VMware Workstation hypervisor on the same host, so as to calculate the rate of data transfer between these two virtual machines.
Should I write an FTP client/server program, and if so, how do I measure the transfer time?
Also, how do I manage the ports on the virtual machines (say both are running Ubuntu)
when writing such a client/server program?
What you need to do is not trivial and you'll have to invest some effort. First of all, you need to create a bridge between the two VMs and give each VM a tap interface on that bridge.
There is a script below you can look at as an example: it creates some screen sessions (you will need a basic .screenrc) and launches a VM in each screen tab. Really, the only bits that should interest you are the bridge setup and how to launch qemu.
For the networking setup you want static routes. Below is an example of what I had, with eth0 being a user-mode network interface and eth1 being the interface connected to the peer VM. You could get rid of eth0. The peer VM's routes were a mirror of this:
sudo vi /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
up route add default gw 10.0.2.2 eth0
auto eth1
iface eth1 inet static
address 20.0.0.1
netmask 255.255.255.0
network 20.0.0.0
broadcast 20.0.0.255
gateway 20.0.0.2
up route add -host 21.0.0.1 gw 20.0.0.2 dev eth1
up route add -host 21.0.0.2 gw 20.0.0.2 dev eth1
up route del default gw 20.0.0.2 eth1
# You want it like this:
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 20.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
20.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
# on the peer
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 21.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
21.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
Once you have the VMs up and have configured a static route on each of those, you can use iperf with one end as a sink and the other as a source e.g.:
iperf -s
iperf -c 20.0.0.1 -t 10 -i 1
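Since the original question is about an actual file, note that iperf (version 2) can also read its payload from a file, which gives you a throughput figure for that specific file; the path below is a placeholder:
iperf -s                               # on the sink VM, as above
iperf -c 20.0.0.1 -F /path/to/file     # on the source VM: stream the file and report throughput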
bridge script setup:
#
# Settings
#
INCLUDE_QEMU_AS_SCREEN=
A_TELNET_1=6661
A_NAME="A-VM"
A_MEMORY=1G
B_TELNET_1=6665
B_NAME="B-VM"
B_MEMORY=1G
A_DISK_IMAGE=.A.disk.img
B_DISK_IMAGE=.B.disk.img
A_PID=.A.pid
B_PID=.B.pid
A_CMD_1=.A.cmd.1
B_CMD_1=.B.cmd.1
#
# Run QEMU in background or foreground
#
if [ "$INCLUDE_QEMU_AS_SCREEN" != "" ]
then
SCREEN_QEMU_A='screen -t "A-qemu"'
SCREEN_QEMU_B='screen -t "B-qemu"'
else
SCREEN_QEMU_A='bg'
SCREEN_QEMU_B='bg'
fi
#
# Store logs locally and use the date to avoid losing old logs
#
LOG_DATE=`date "+%a_%b_%d_at_%H_%M"`
HOME=$(eval echo ~${SUDO_USER})
LOG_DIR=logs/$LOG_DATE
mkdir -p $LOG_DIR
if [ $? -ne 0 ]; then
LOG_DIR=/tmp/$LOGNAME/logs/$LOG_DATE
mkdir -p $LOG_DIR
if [ $? -ne 0 ]; then
LOG_DIR=.
fi
fi
LOG_DIR_TEST=$LOG_DIR
mkdir -p $LOG_DIR_TEST
#
# create the tap
#
echo
echo ================ create taps ================
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap1
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap2
#
# bring up the tap
#
echo
echo =============== bring up taps ===============
sudo ifconfig $LOGNAME-tap1 up
sudo ifconfig $LOGNAME-tap2 up
#
# show the tap
#
echo
echo =================== tap 1 ===================
ifconfig $LOGNAME-tap1
echo
echo =================== tap 2 ===================
ifconfig $LOGNAME-tap2
#
# create the bridge
#
sudo brctl addbr $LOGNAME-br1
#
# bring up the bridge
#
sudo ifconfig $LOGNAME-br1 1.1.1.1 up
#
# show my bridge
#
echo
echo =================== bridge 1 ===================
ifconfig $LOGNAME-br1
brctl show $LOGNAME-br1
brctl showmacs $LOGNAME-br1
#
# attach tap interface to bridge
#
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap2
SCRIPT_START="echo Starting..."
SCRIPT_EXIT="echo Exiting...; sleep 3"
cat >$A_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-A -f -c 'telnet localhost $A_TELNET_1'
$SCRIPT_EXIT
%%%
cat >$B_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-B -f -c 'telnet localhost $B_TELNET_1'
$SCRIPT_EXIT
%%%
chmod +x $A_CMD_1
chmod +x $B_CMD_1
run_qemu_in_screen_or_background()
{
SCREEN=$1
shift
if [ "$SCREEN" = "bg" ]
then
$* &
else
$SCREEN $*
fi
}
echo
echo
echo
echo "##########################################################"
echo "# Starting QEMU #"
echo "##########################################################"
echo
echo
echo
run_qemu_in_screen_or_background \
$SCREEN_QEMU_A \
qemu-system-x86_64 -nographic \
-m $A_MEMORY \
-enable-kvm \
-drive file=$A_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$A_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=10:16:3e:00:01:12 \
-net tap,ifname=$LOGNAME-tap1,vlan=21,script=no \
-boot c \
-pidfile $A_PID
run_qemu_in_screen_or_background \
$SCREEN_QEMU_B \
qemu-system-x86_64 -nographic \
-m $B_MEMORY \
-enable-kvm \
-drive file=$B_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$B_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=30:16:3e:00:03:14 \
-net tap,ifname=$LOGNAME-tap2,vlan=21,script=no \
-boot c \
-pidfile $B_PID
sleep 1
screen -t "$A_NAME" sh -c "sh $A_CMD_1"
screen -t "$B_NAME" sh -c "sh $B_CMD_1"
sleep 5
echo
echo
echo
echo "##########################################################"
echo "# Hit enter to quit #"
echo "##########################################################"
echo
echo
echo
read xx
cat $A_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $A_PID 2>/dev/null
cat $B_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $B_PID 2>/dev/null
rm -f $A_CMD_1 2>/dev/null
rm -f $B_CMD_1 2>/dev/null
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap2
sudo ifconfig $LOGNAME-br1 down
sudo brctl delbr $LOGNAME-br1
sudo ifconfig $LOGNAME-tap1 down
sudo ifconfig $LOGNAME-tap2 down
sudo tunctl -d $LOGNAME-tap1
sudo tunctl -d $LOGNAME-tap2
Supposing your VM1 and VM2 have working L2/L3 connectivity, so there is some transport available, a much better approach to setting up process-to-process communication is to use an industry-proven messaging framework rather than spend time building just another FTP client/server.
For sending/receiving anything (including BLOBs like whole files), try the ZeroMQ or nanomsg libraries: both are broker-less frameworks, have bindings ready for many programming languages, and offer genuine performance / low latency.
Any distributed process-to-process project will benefit from adopting this approach early.
Check http://zguide.zeromq.org/c:fileio3 for additional insights on add-on benefits such as load-balancing, failure recovery, et al.
As for your Q1: FTP servers, per se, report file transfer times. In ZeroMQ you can programmatically measure the amount of time spent on the file transfer as the difference between two timestamps.
As for your Q2: with ZeroMQ, use any port that is allowed (not restricted) in Ubuntu.
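And if you just want a rough number before writing any messaging code, a crude baseline is plain netcat plus time (explicitly not ZeroMQ; netcat flags vary between implementations, and the address and file name below are placeholders):
# on the receiving VM
nc -l -p 5001 > received.bin
# on the sending VM: time the transfer, then divide the file size by the elapsed time
time nc -q 1 20.0.0.1 5001 < bigfile.bin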