docker build apt-get update failed with custom network - networking

Can somebody help me get apt-get working during docker build of an image?
It works with the default docker0 bridge, but it fails when I add my own bridge as follows:
docker network create --gateway=192.168.0.253 --subnet 192.168.0.0/24 --ip-range=192.168.0.128/25 \
-o "com.docker.network.bridge.default_bridge"="true" \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.host_binding_ipv4"="192.168.0.254" \
-o "com.docker.network.bridge.name"="br0" \
-o "com.docker.network.driver.mtu"="1500" \
-d bridge mynet
linked with this /etc/network/interfaces:
auto enp2s0
iface enp2s0 inet manual
auto br0
iface br0 inet static
bridge_ports enp2s0
bridge_fd 0
bridge_stp off
bridge_maxwait 0
address 192.168.0.253
netmask 255.255.255.0
network 192.168.0.0
gateway 192.168.0.254
dns-nameservers 212.27.40.240 212.27.40.241
My docker build fails with:
Step 4 : RUN apt-get update
---> Running in 4edfcd4885fe
Err:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Temporary failure resolving 'archive.ubuntu.com' ....
But apt-get works fine with:
docker run -it --network=mynet --ip=192.168.0.130 --hostname=test --name test ubuntu:16.04 /bin/bash
How can I fix my build?
Thanks
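A hedged sketch of two workarounds commonly tried for this symptom, assuming a Docker release new enough to support --network at build time and a writable /etc/docker/daemon.json (check docker build --help and your daemon config before relying on either):
# Option 1: run the build steps on the custom network
docker build --network=mynet -t myimage .
# Option 2: give the daemon explicit DNS servers so the default build network can resolve names
# (these are the nameservers from /etc/network/interfaces above; run as root)
cat >/etc/docker/daemon.json <<'EOF'
{ "dns": ["212.27.40.240", "212.27.40.241"] }
EOF
systemctl restart docker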

Related

Unable to connect testpmd to OVS+DPDK

Summary
I'm trying to use testpmd as a sink of traffic from a physical NIC, through OVS with DPDK.
When I run testpmd, it fails. The error message is very brief, so I have no idea what's wrong.
How can I get testpmd to connect to a virtual port in OVS with DPDK?
Steps
I'm mostly following these Mellanox instructions.
# step 5 - "Specify initial Open vSwitch (OVS) database to use"
export PATH=$PATH:/usr/local/share/openvswitch/scripts
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
# step 6 - "Configure OVS to support DPDK ports"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# step 7 - "Start OVS-DPDK service"
ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start # what does this do? I forget
# step 8 - "Configure the source code analyzer (PMD) to work with 2G hugepages and NUMA node0"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048" # 2048 = 2GB
# step 9 - "Set core mask to enable several PMDs"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xFF0 # cores 4-11, 4 per NUMA node
# core masks are one-hot. LSB is core 0
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x8 # core 3
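# (not part of the original steps: a quick sanity check on the two masks above;
#  bits 4..11 set gives 0xFF0 for the PMD cores and bit 3 gives 0x8 for the lcore)
# printf '0x%X 0x%X\n' $(( 0xFFF & ~0xF )) $(( 1 << 3 ))   # prints 0xFF0 0x8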
# step 10 - there is no step 10 in the doc linked above
# step 11 - Create an OVS bridge
BRIDGE="br0"
ovs-vsctl add-br $BRIDGE -- set bridge br0 datapath_type=netdev
Then, for the OVS elements, I'm trying to follow these steps:
# add physical NICs to bridge, must be named dpdk(\d+)
sudo ovs-vsctl add-port $BRIDGE dpdk0 \
-- set Interface dpdk0 type=dpdk \
options:dpdk-devargs=0000:5e:00.0 ofport_request=1
sudo ovs-vsctl add-port $BRIDGE dpdk1 \
-- set Interface dpdk1 type=dpdk \
options:dpdk-devargs=0000:5e:00.1 ofport_request=2
# add a virtual port to connect to testpmd/VM
# Not sure if I want dpdkvhostuser or dpdkvhostuserclient
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser0 \
-- \
set Interface dpdkvhostuser0 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:4,1:6" \
ofport_request=3
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser1 \
-- \
set Interface dpdkvhostuser1 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:8,1:10" \
ofport_request=4
# add flows to join interfaces (based on ofport_request numbers)
sudo ovs-ofctl add-flow $BRIDGE in_port=1,action=output:3
sudo ovs-ofctl add-flow $BRIDGE in_port=3,action=output:1
sudo ovs-ofctl add-flow $BRIDGE in_port=2,action=output:4
sudo ovs-ofctl add-flow $BRIDGE in_port=4,action=output:2
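(Not part of the original steps, but two read-only OVS commands can confirm the flows and the PMD/queue assignment before starting testpmd:)
sudo ovs-ofctl dump-flows $BRIDGE              # should list the four flows added above
sudo ovs-appctl dpif-netdev/pmd-rxq-show       # shows which PMD core polls which rx queue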
Then I run testpmd
sudo -E $DPDK_DIR/x86_64-native-linuxapp-gcc/app/testpmd \
--vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0 \
--vdev virtio_user1,path=/usr/local/var/run/openvswitch/dpdkvhostuser1 \
-c 0x00fff000 \
-n 1 \
--socket-mem=2048,2048 \
--file-prefix=testpmd \
--log-level=9 \
--no-pci \
-- \
--port-numa-config=0,0,1,0 \
--ring-numa-config=0,1,0,1,1,0 \
--numa \
--socket-num=0 \
--txd=512 \
--rxd=512 \
--mbcache=512 \
--rxq=1 \
--txq=1 \
--nb-cores=4 \
-i \
--rss-udp \
--auto-start
The output is:
...
EAL: lcore 18 is ready (tid=456c700;cpuset=[18])
EAL: lcore 21 is ready (tid=2d69700;cpuset=[21])
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=327680, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=327680, size=2176, socket=1
Configuring Port 0 (socket 0)
Fail to configure port 0
EAL: Error - exiting with code: 1
Cause: Start ports failed
The bottom of /usr/local/var/log/openvswitch/ovs-vswitchd.log is
2018-11-30T02:45:49.115Z|00026|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been added on numa node 0
2018-11-30T02:45:49.115Z|00027|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00028|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2018-11-30T02:45:49.115Z|00029|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0'changed to 'enabled'
2018-11-30T02:45:49.115Z|00030|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00031|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2018-11-30T02:45:49.278Z|00032|dpdk|ERR|VHOST_CONFIG: recvmsg failed
2018-11-30T02:45:49.279Z|00033|dpdk|INFO|VHOST_CONFIG: vhost peer closed
2018-11-30T02:45:49.280Z|00034|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been removed
What is the cause of the failure?
Should I use a dpdkvhostuserclient instead of dpdkvhostuser?
What else I've tried
Looking in /var/log/messages for more info - but that's just a copy of stdout and stderr.
Rebooting
Looking up the OVS docs, but they don't mention "logs"
I've tried changing my testpmd command arguments. (Docs here)
Getting rid of --no-pci. The result is
Configuring Port 0 (socket 0)
Port 0: 24:8A:07:9E:94:94
Configuring Port 1 (socket 0)
Port 1: 24:8A:07:9E:94:95
Configuring Port 2 (socket 0)
Fail to configure port 2
EAL: Error - exiting with code: 1
Cause: Start ports failed
Those MAC addresses are for the physical NICs I've already connected to OVS.
Removing --auto-start: same result
--nb-cores=1: same result
Removing the 2nd --vdev: Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained. When I add --port-topology=chained I end up with the original error.
Other Info
DPDK 17.11.4
OVS 2.10.1
NIC: Mellanox ConnectX-5
OS: CentOS 7.5
My NIC is on NUMA node 0
When I run ip addr I see that an interface called br0 has the same MAC address as my physical NICs (p3p1, when it was bound to the kernel)
When I run sudo ovs-vsctl show I see
d3e721eb-6aeb-44c0-9fa8-5fcf023008c5
Bridge "br0"
Port "dpdkvhostuser1"
Interface "dpdkvhostuser1"
type: dpdkvhostuser
options: {n_rxq="2,pmd-rxq-affinity=0:8,1:10"}
Port "dpdk1"
Interface "dpdk1"
type: dpdk
options: {dpdk-devargs="0000:5e:00.1"}
Port "dpdk0"
Interface "dpdk0"
type: dpdk
options: {dpdk-devargs="0000:5e:00.0"}
Port "br0"
Interface "br0"
type: internal
Port "dpdkvhostuser0"
Interface "dpdkvhostuser0"
type: dpdkvhostuser
options: {n_rxq="2,pmd-rxq-affinity=0:4,1:6"}
Edit: Added contents of /usr/local/var/log/openvswitch/ovs-vswitchd.log
Indeed, dpdkvhostuser is the right type here, not dpdkvhostuserclient.
The number of queues is mismatched: OVS is configured with options:n_rxq=2, but testpmd is started with --rxq=1 --txq=1.
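A minimal sketch of one way to line the two sides up. Note also that in the ovs-vsctl show output above the whole string "2,pmd-rxq-affinity=..." ended up as the value of n_rxq, so the queue count and the affinity probably need to be set as separate keys (the exact keys below are assumptions based on common OVS-DPDK usage, not taken from the thread):
sudo ovs-vsctl set Interface dpdkvhostuser0 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:4,1:6"
sudo ovs-vsctl set Interface dpdkvhostuser1 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:8,1:10"
# then change --rxq=1 --txq=1 to --rxq=2 --txq=2 in the testpmd command above,
# so the guest side negotiates the same number of queue pairs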

How to connect to docker container from browser on host?

I am using Docker for Mac 1.12.0-rc4-beta19.
The container is built with the following Dockerfile and docker-compose.yml.
I want to connect to port 1344 of the container from a browser on the host OS via http://localhost:1344,
but the connection fails.
I am using port 1344 of the container to test a Bottle (Python lightweight web framework) application.
Why can't I connect to the container's port from the host?
docker-compose.yml:
version: '2'
services:
  datastore:
    image: busybox:latest
    volumes:
      - ./share:/share_to_container
  ### base (ubuntu)
  base:
    build: ./
    ports:
      - "127.0.0.1:1344:1344"
      - "8000:8000"
    volumes:
      - ./app:/app
    volumes_from:
      - datastore
    links:
      - db
      - webserver
  db:
    build:
      context: .
      dockerfile: "mysqlfile"
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
    ports:
      - "3306:3306"
    volumes:
      - ./mysql:/mysql
    volumes_from:
      - datastore
  webserver:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./nginx/mysite.template:/etc/nginx/conf.d/mysite.template
    volumes_from:
      - datastore
Edit:
The port 8080 connection works correctly, but 1344 fails.
The following is the full Dockerfile for the base service.
Dockerfile_for_base:
from ubuntu:latest
maintainer myname
run mkdir ~/app
copy vim /root/.vim
copy vimrc /root/.vimrc
#update
run apt-get update
run apt-get -y update
run apt-get -y install libssl-dev
run apt-get -yf install curl
run apt-get -y install mysql-client
run apt-get -y install clang
run apt-get -y install lldb
run apt-get -y install make
run apt-get -y install libsqlite3-dev
run apt-get -y install man
run apt-get -y install vim
run apt-get -y install git
run apt-get -y install pkg-config
run apt-get -y install zip
run apt-get -y install unzip
run apt-get -y install language-pack-ja-base
run apt-get -y install language-pack-ja
run apt-get -y install language-pack-en-base
run apt-get -y install language-pack-en
run apt-get -y install fcitx-mozc
run apt-get -y install libreadline-dev
# setting locale to japanese
run update-locale LANG=ja_JP.UTF-8 LANGUAGE=ja_JP:ja
env LANG ja_JP.UTF-8
env LC_CTYPE ja_JP.UTF-8
env LC_MESSAGES en_US.UTF-8
run im-config -n fcitx
# end of locale settings
# install latest python3 and some python packages (https://github.com/docker-library/python/blob/3db904b3f5407840e591daf3aa54670a685b22b3/3.5/Dockerfile)
ENV GPG_KEY 97FC712E4C024BBEA48A61ED3A5CA953F73C700D
ENV PYTHON_VERSION 3.5.2
# if this is called "PIP_VERSION", pip explodes with "ValueError: invalid truth value '<VERSION>'"
ENV PYTHON_PIP_VERSION 8.1.2
RUN set -ex \
&& curl -fSL "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" -o python.tar.xz \
&& curl -fSL "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" -o python.tar.xz.asc \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEY" \
&& gpg --batch --verify python.tar.xz.asc python.tar.xz \
&& rm -r "$GNUPGHOME" python.tar.xz.asc \
&& mkdir -p /usr/src/python \
&& tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz \
&& rm python.tar.xz \
\
&& cd /usr/src/python \
&& ./configure \
--enable-loadable-sqlite-extensions \
--enable-shared \
&& make -j$(nproc) \
&& make install \
&& ldconfig \
&& pip3 install --no-cache-dir --upgrade pip==$PYTHON_PIP_VERSION \
&& [ "$(pip list | awk -F '[ ()]+' '$1 == "pip" { print $2; exit }')" = "$PYTHON_PIP_VERSION" ] \
&& find /usr/local -depth \
\( \
\( -type d -a -name test -o -name tests \) \
-o \
\( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
\) -exec rm -rf '{}' + \
&& rm -rf /usr/src/python ~/.cache
# make some useful symlinks that are expected to exist
RUN cd /usr/local/bin \
&& ln -s easy_install-3.5 easy_install \
&& ln -s idle3 idle \
&& ln -s pydoc3 pydoc \
&& ln -s python3 python \
&& ln -s python3-config python-config
# end of latest python installation
#install some packages
run pip --no-cache-dir install bottle
run pip --no-cache-dir install feedparser
run pip --no-cache-dir install PyMySQL
run pip --no-cache-dir install -U pip
run pip --no-cache-dir install -U setuptools
#prompt and compiler environment variables
env CC clang
env CXX clang++
run echo 'export PS1="\h:\W \u$ "' >> ~/.bashrc
# git config
run git config --global user.name "myusername"
run git config --global user.email "my#email.address"
run git config --global color.ui true
run git config --global core.editor vim
expose 1000
expose 2000
expose 3000
expose 4000
expose 5000
expose 1344
cmd bash
If by "host os browser" you mean your Mac, you certainly need to remove the host from the port mapping as suggested. The reason you can't connect is the actual Docker host is a (xhyve) Virtual Machine running between your Mac and Docker. Docker will automatically publish the port between your Mac and the container like you have it, just remove the host, i.e., - "1344:1344"
(Fyi, in your setup as-is you would need to connect via the VM host which doesn't really help you.)
If you still have problems, post any errors and steps to reproduce.
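A few quick checks, sketched under the assumption that the Bottle app is what should answer on 1344 (service name and paths taken from the compose file above; note the app must also bind to 0.0.0.0 inside the container, not 127.0.0.1, or the published port cannot reach it):
docker-compose up -d base
docker-compose port base 1344        # shows how container port 1344 is published on the host
docker-compose exec base ss -ltn     # 1344 should appear as a LISTEN socket on 0.0.0.0 (needs ss/iproute2 in the image)
curl -v http://localhost:1344/       # from the Mac, once the mapping is just "1344:1344"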
So, after you posted your Dockerfile, it doesn't look like you're running anything? You have the CMD action set to bash and are not overriding it in your docker-compose.yml. I'm a little surprised the container is up at all (since it would just run bash and exit).
Are these files complete?
As an aside, you may want to reformat / lint your Dockerfile for best practices.
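If the goal is for the base service to actually serve the Bottle app rather than sit in bash, one hypothetical way (the /app/main.py path is an assumption based on the ./app volume mount, not something from the question):
docker-compose run --service-ports base python3 /app/main.py
Alternatively, set command: python3 /app/main.py on the base service in docker-compose.yml.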
Did you see this forum topic?
So just run a container and call the ifconfig command. Example output:
bash-4.3# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2%32738/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:508 (508.0 B) TX bytes:508 (508.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1%32738/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
So the IP of the VM in this example is 172.17.0.2.

IPtables NAT/Masquerade to allow OpenStack instances to access sites external to the laptop they're running on

I have OpenStack running on a Fedora laptop. OpenStack hates network interfaces that are managed by NetworkManager, so I set up a dummy interface that's used as the port for the br-ex interface through which OpenStack allows instances to communicate with the outside world. I can connect to the floating IPs fine, but the instances can't get past the subnet that br-ex is on. I'd like them to be able to reach addresses external to the laptop. I suspect some iptables NAT/masquerading magic is required. Does anyone have any ideas?
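Not from the original thread, but a hedged sketch of the sort of masquerade rule usually involved, assuming br-ex carries 172.24.4.0/24 (substitute your floating-IP subnet) and that the laptop reaches the outside world through wlp3s0 (interface name hypothetical):
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 172.24.4.0/24 ! -d 172.24.4.0/24 -o wlp3s0 -j MASQUERADE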
For a CentOS 7 OpenStack deployment with 3 nodes you should use the legacy network service:
just install net-tools and disable NetworkManager:
yum install net-tools -y;
systemctl disable NetworkManager.service
systemctl stop NetworkManager.service
chkconfig network on
You also need iptables, not firewalld.
yum install -y iptables-services
systemctl enable iptables.service
systemctl disable firewalld.service
systemctl stop firewalld.service
The controller node has one NIC.
The network and compute nodes have 2 NICs.
Edit the interfaces on all nodes:
for the network node: eth0: X.X.X.X (external), eth1: 10.0.0.1 - no gateway
for the controller node: eth0: 10.0.0.2 - gateway 10.0.0.1
for the compute node: eth0: 10.0.0.3 - gateway 10.0.0.1
Set up iptables like:
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
service iptables save
Also enable forwarding. In /etc/sysctl.conf add the line:
net.ipv4.ip_forward = 1
And execute command:
sysctl -p
Should work.

Docker container Network connection between different host

I installed Docker on two hosts (virtual machines). I'd like to make the containers on the different hosts able to connect to each other.
Here's VM1's and VM2's ifconfig output:
VM1
bridge0 : inet addr:172.17.52.1 Bcast:172.17.52.255 Mask:255.255.255.0
docker0 : inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
eth0 : inet addr:192.168.122.129 Bcast:192.168.122.255 Mask:255.255.255.0
VM2
bridge0 : inet addr:172.17.53.1 Bcast:172.17.53.255 Mask:255.255.255.0
docker0 : inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
eth0 : inet addr:192.168.122.77 Bcast:192.168.122.255 Mask:255.255.255.0
bridge0 is used for the containers. I have made some network configuration:
iptables -t nat -A POSTROUTING -s 172.17.52.0/24 ! -d 172.17.0.0/16 -j MASQUERADE (on VM1)
iptables -t nat -A POSTROUTING -s 172.17.53.0/24 ! -d 172.17.0.0/16 -j MASQUERADE (on VM2)
route add -net 172.17.52.0 netmask 255.255.255.0 gw 192.168.122.129 (on VM2)
route add -net 172.17.53.0 netmask 255.255.255.0 gw 192.168.122.77 (on VM1)
I get no output when a container pings another container
(172.17.52.X ping 172.17.53.X)
VM1 can ping VM2 successfully. The container on VM1 can also ping VM2 successfully, but I get no output when the container on VM1 pings the container on VM2.
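Not part of the original question, but a few read-only checks that narrow this kind of problem down (interface names taken from the question):
sysctl net.ipv4.ip_forward        # must be 1 on both VMs
iptables -L FORWARD -n -v         # Docker can set FORWARD policy/rules that drop routed traffic
tcpdump -ni eth0 icmp             # run on VM2 while the VM1 container pings, to see if packets arrive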
One very easy way to achieve this would be by using Weave.
You can install it with:
sudo wget -O /usr/local/bin/weave \
https://github.com/zettio/weave/releases/download/latest_release/weave
sudo chmod a+x /usr/local/bin/weave
VM1
sudo weave launch
C=$(sudo weave run 10.2.1.1/24 -t -i busybox)
VM2
sudo weave launch 192.168.122.129
C=$(sudo weave run 10.2.1.2/24 -t -i busybox)
docker exec $C ping -c 3 10.2.1.1
You have just created a virtual network of containers. The beauty is that these VMs can be anywhere, as long as at least one of them has a public IP with port 6783 open.
You can even enable NaCl crypto by running weave launch -password "<MySecret>" or by exporting WEAVE_PASSWORD="<MySecret>" prior to weave launch.

File transfer between 2 vmware workstations on same host

I need to transfer a file from a process (Process1) on VM1 inside VMware Workstation to another VM2 under the same VMware Workstation hypervisor on the same host, so that I can calculate the rate of data transfer between the 2 virtual machines.
Should I write an FTP server/client program, and if so, how do I measure the transfer time?
And how do I manage the ports in the virtual machines (let's say both are running Ubuntu)
when writing the server/client program?
What you need to do is not trivial and you'll have to invest some effort. First of all you need to create a bridge between the two VMs and have each VM with a tap interface on that bridge.
I have a script below you can look at as an example - it creates some screen sessions (you will need a basic .screenrc) and I launch a VM in each screen tab. Really the bit that should only interest you is the bridge setup and how to launch qemu.
For the networking setup you want static routes. Below is an example of what I had, with eth0 being a user network interface and eth1 being the interface connected to the peer VM. You could get rid of eth0. The peer VM's routes were a mirror of this:
sudo vi /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
up route add default gw 10.0.2.2 eth0
auto eth1
iface eth1 inet static
address 20.0.0.1
netmask 255.255.255.0
network 20.0.0.0
broadcast 20.0.0.255
gateway 20.0.0.2
up route add -host 21.0.0.1 gw 20.0.0.2 dev eth1
up route add -host 21.0.0.2 gw 20.0.0.2 dev eth1
up route del default gw 20.0.0.2 eth1
# You want it like this:
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 20.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
20.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
# on the peer
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 21.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
21.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
Once you have the VMs up and have configured a static route on each of those, you can use iperf with one end as a sink and the other as a source e.g.:
iperf -s
iperf -c 20.0.0.1 -t 10 -i 1
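If the goal is specifically to time a file copy rather than measure raw throughput, a simple hedged alternative is to time an scp over the same link (assuming sshd runs on the peer; user name and file path are placeholders):
time scp /tmp/testfile.bin user@20.0.0.1:/tmp/
# bytes transferred / elapsed seconds gives the effective rate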
bridge script setup:
#
# Settings
#
INCLUDE_QEMU_AS_SCREEN=
A_TELNET_1=6661
A_NAME="A-VM"
A_MEMORY=1G
B_TELNET_1=6665
B_NAME="B-VM"
B_MEMORY=1G
A_DISK_IMAGE=.A.disk.img
B_DISK_IMAGE=.B.disk.img
A_PID=.A.pid
B_PID=.B.pid
A_CMD_1=.A.cmd.1
B_CMD_1=.B.cmd.1
#
# Run QEMU in background or foreground
#
if [ "$INCLUDE_QEMU_AS_SCREEN" != "" ]
then
SCREEN_QEMU_A='screen -t "A-qemu"'
SCREEN_QEMU_B='screen -t "B-qemu"'
else
SCREEN_QEMU_A='bg'
SCREEN_QEMU_B='bg'
fi
#
# Store logs locally and use the date to avoid losing old logs
#
LOG_DATE=`date "+%a_%b_%d_at_%H_%M"`
HOME=$(eval echo ~${SUDO_USER})
LOG_DIR=logs/$LOG_DATE
mkdir -p $LOG_DIR
if [ $? -ne 0 ]; then
LOG_DIR=/tmp/$LOGNAME/logs/$LOG_DATE
mkdir -p $LOG_DIR
if [ $? -ne 0 ]; then
LOG_DIR=.
fi
fi
LOG_DIR_TEST=$LOG_DIR
mkdir -p $LOG_DIR_TEST
#
# create the tap
#
echo
echo ================ create taps ================
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap1
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap2
#
# bring up the tap
#
echo
echo =============== bring up taps ===============
sudo ifconfig $LOGNAME-tap1 up
sudo ifconfig $LOGNAME-tap2 up
#
# show the tap
#
echo
echo =================== tap 1 ===================
ifconfig $LOGNAME-tap1
echo
echo =================== tap 2 ===================
ifconfig $LOGNAME-tap2
#
# create the bridge
#
sudo brctl addbr $LOGNAME-br1
#
# bring up the bridge
#
sudo ifconfig $LOGNAME-br1 1.1.1.1 up
#
# show my bridge
#
echo
echo =================== bridge 1 ===================
ifconfig $LOGNAME-br1
brctl show $LOGNAME-br1
brctl showmacs $LOGNAME-br1
#
# attach tap interface to bridge
#
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap2
SCRIPT_START="echo Starting..."
SCRIPT_EXIT="echo Exiting...; sleep 3"
cat >$A_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-A -f -c 'telnet localhost $A_TELNET_1'
$SCRIPT_EXIT
%%%
cat >$B_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-B -f -c 'telnet localhost $B_TELNET_1'
$SCRIPT_EXIT
%%%
chmod +x $A_CMD_1
chmod +x $B_CMD_1
run_qemu_in_screen_or_background()
{
SCREEN=$1
shift
if [ "$SCREEN" = "bg" ]
then
$* &
else
$SCREEN $*
fi
}
echo
echo
echo
echo "##########################################################"
echo "# Starting QEMU #"
echo "##########################################################"
echo
echo
echo
run_qemu_in_screen_or_background \
$SCREEN_QEMU_A \
qemu-system-x86_64 -nographic \
-m $A_MEMORY \
-enable-kvm \
-drive file=$A_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$A_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=10:16:3e:00:01:12 \
-net tap,ifname=$LOGNAME-tap1,vlan=21,script=no \
-boot c \
-pidfile $A_PID
run_qemu_in_screen_or_background \
$SCREEN_QEMU_B \
qemu-system-x86_64 -nographic \
-m $B_MEMORY \
-enable-kvm \
-drive file=$B_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$B_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=30:16:3e:00:03:14 \
-net tap,ifname=$LOGNAME-tap2,vlan=21,script=no \
-boot c \
-pidfile $B_PID
sleep 1
screen -t "$A_NAME" sh -c "sh $A_CMD_1"
screen -t "$B_NAME" sh -c "sh $B_CMD_1"
sleep 5
echo
echo
echo
echo "##########################################################"
echo "# Hit enter to quit #"
echo "##########################################################"
echo
echo
echo
read xx
cat $A_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $A_PID 2>/dev/null
cat $B_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $B_PID 2>/dev/null
rm -f $A_CMD_1 2>/dev/null
rm -f $B_CMD_1 2>/dev/null
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap2
sudo ifconfig $LOGNAME-br1 down
sudo brctl delbr $LOGNAME-br1
sudo ifconfig $LOGNAME-tap1 down
sudo ifconfig $LOGNAME-tap2 down
sudo tunctl -d $LOGNAME-tap1
sudo tunctl -d $LOGNAME-tap2
Supposing your VM1 and VM2 have some active ISO-OSI L2/L3 connectivity, so there is some transport available, a far better approach to setting up process-to-process communication is to use an industry-proven messaging framework rather than spend time building just another FTP client/server.
For sending / receiving anything (including BLOBs like whole files) try the ZeroMQ or nanomsg libraries; these are broker-less frameworks, have bindings ready for many programming languages, and offer genuine performance / low latency.
Any distributed process-to-process systems project will benefit from adopting and using this approach early.
Check http://zguide.zeromq.org/c:fileio3 to get some additional insights about additional add-on benefits alike load-balancing, failure-recovery et al.
As per your Q1: FTP servers, per se, report the file transfer times. In ZeroMQ you can programmatically measure the amount of time spent on a file transfer as the distance between two timestamps.
As per your Q2: for ZeroMQ, use any port that is allowed (not restricted) in Ubuntu.

Resources