I need to transfer a file from a process on VM1 to VM2, where both VMs run under the same VMware Workstation hypervisor on the same host, so that I can calculate the rate of data transfer between these 2 virtual machines.
Should I write an FTP client/server program, and if so, how do I calculate the transfer time?
Also, how do I manage the ports in the virtual machines (let's say both are running Ubuntu) when writing such a client/server program?
What you need to do is not trivial and you'll have to invest some effort. First of all you need to create a bridge between the two VMs and give each VM a tap interface on that bridge.
I have a script below you can look at as an example - it creates some screen sessions (you will need a basic .screenrc) and launches a VM in each screen tab. Really, the only bits that should interest you are the bridge setup and how QEMU is launched.
For the networking setup you want static routes. The configuration below is an example of what I had, with eth0 being a user-mode network interface and eth1 being the interface connected to the peer VM; you could get rid of eth0. The peer VM's routes were a mirror of this.
sudo vi /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
    up route add default gw 10.0.2.2 eth0

auto eth1
iface eth1 inet static
    address 20.0.0.1
    netmask 255.255.255.0
    network 20.0.0.0
    broadcast 20.0.0.255
    gateway 20.0.0.2
    up route add -host 21.0.0.1 gw 20.0.0.2 dev eth1
    up route add -host 21.0.0.2 gw 20.0.0.2 dev eth1
    up route del default gw 20.0.0.2 eth1
# You want it like this:
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 20.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
20.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
# on the peer
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 21.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
21.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
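For reference, the tables above are what the routing tools print on each VM:

route -n
# or, with iproute2
ip route show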
Once you have the VMs up and have configured the static routes on each of them, you can use iperf with one end as a sink and the other as a source, e.g.:
# on the sink VM (the one with address 20.0.0.1)
iperf -s
# on the source VM
iperf -c 20.0.0.1 -t 10 -i 1
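Since the question is about transferring an actual file, you can also time a real copy and work out the rate yourself. A minimal sketch, assuming SSH/scp works between the VMs (the user name and paths are placeholders):

# on the source VM: create a 1 GiB test file
dd if=/dev/zero of=/tmp/testfile bs=1M count=1024
# time the transfer to the peer (scp also prints its own throughput figure)
time scp /tmp/testfile user@20.0.0.1:/tmp/
# rate = 1024 MiB divided by the elapsed time reported by 'time'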
bridge script setup:
#
# Settings
#
INCLUDE_QEMU_AS_SCREEN=
A_TELNET_1=6661
A_NAME="A-VM"
A_MEMORY=1G
B_TELNET_1=6665
B_NAME="B-VM"
B_MEMORY=1G
A_DISK_IMAGE=.A.disk.img
B_DISK_IMAGE=.B.disk.img
A_PID=.A.pid
B_PID=.B.pid
A_CMD_1=.A.cmd.1
B_CMD_1=.B.cmd.1
#
# Run QEMU in background or foreground
#
if [ "$INCLUDE_QEMU_AS_SCREEN" != "" ]
then
SCREEN_QEMU_A='screen -t "A-qemu"'
SCREEN_QEMU_B='screen -t "B-qemu"'
else
SCREEN_QEMU_A='bg'
SCREEN_QEMU_B='bg'
fi
#
# Store logs locally and use the date to avoid losing old logs
#
LOG_DATE=`date "+%a_%b_%d_at_%H_%M"`
HOME=$(eval echo ~${SUDO_USER})
LOG_DIR=logs/$LOG_DATE
mkdir -p $LOG_DIR
if [ $? -ne 0 ]; then
    LOG_DIR=/tmp/$LOGNAME/logs/$LOG_DATE
    mkdir -p $LOG_DIR
    if [ $? -ne 0 ]; then
        LOG_DIR=.
    fi
fi
LOG_DIR_TEST=$LOG_DIR
mkdir -p $LOG_DIR_TEST
#
# create the tap
#
echo
echo ================ create taps ================
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap1
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap2
#
# bring up the tap
#
echo
echo =============== bring up taps ===============
sudo ifconfig $LOGNAME-tap1 up
sudo ifconfig $LOGNAME-tap2 up
#
# show the tap
#
echo
echo =================== tap 1 ===================
ifconfig $LOGNAME-tap1
echo
echo =================== tap 2 ===================
ifconfig $LOGNAME-tap2
#
# create the bridge
#
sudo brctl addbr $LOGNAME-br1
#
# bring up the bridge
#
sudo ifconfig $LOGNAME-br1 1.1.1.1 up
#
# show my bridge
#
echo
echo =================== bridge 1 ===================
ifconfig $LOGNAME-br1
brctl show $LOGNAME-br1
brctl showmacs $LOGNAME-br1
#
# attach tap interface to bridge
#
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap2
SCRIPT_START="echo Starting..."
SCRIPT_EXIT="echo Exiting...; sleep 3"
cat >$A_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-A -f -c 'telnet localhost $A_TELNET_1'
$SCRIPT_EXIT
%%%
cat >$B_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-B -f -c 'telnet localhost $B_TELNET_1'
$SCRIPT_EXIT
%%%
chmod +x $A_CMD_1
chmod +x $B_CMD_1
run_qemu_in_screen_or_background()
{
    SCREEN=$1
    shift

    if [ "$SCREEN" = "bg" ]
    then
        $* &
    else
        $SCREEN $*
    fi
}
echo
echo
echo
echo "##########################################################"
echo "# Starting QEMU #"
echo "##########################################################"
echo
echo
echo
run_qemu_in_screen_or_background \
$SCREEN_QEMU_A \
qemu-system-x86_64 -nographic \
-m $A_MEMORY \
-enable-kvm \
-drive file=$A_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$A_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=10:16:3e:00:01:12 \
-net tap,ifname=$LOGNAME-tap1,vlan=21,script=no \
-boot c \
-pidfile $A_PID
run_qemu_in_screen_or_background \
$SCREEN_QEMU_B \
qemu-system-x86_64 -nographic \
-m $B_MEMORY \
-enable-kvm \
-drive file=$B_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$B_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=30:16:3e:00:03:14 \
-net tap,ifname=$LOGNAME-tap2,vlan=21,script=no \
-boot c \
-pidfile $B_PID
sleep 1
screen -t "$A_NAME" sh -c "sh $A_CMD_1"
screen -t "$B_NAME" sh -c "sh $B_CMD_1"
sleep 5
echo
echo
echo
echo "##########################################################"
echo "# Hit enter to quit #"
echo "##########################################################"
echo
echo
echo
read xx
cat $A_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $A_PID 2>/dev/null
cat $B_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $B_PID 2>/dev/null
rm -f $A_CMD_1 2>/dev/null
rm -f $B_CMD_1 2>/dev/null
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap2
sudo ifconfig $LOGNAME-br1 down
sudo brctl delbr $LOGNAME-br1
sudo ifconfig $LOGNAME-tap1 down
sudo ifconfig $LOGNAME-tap2 down
sudo tunctl -d $LOGNAME-tap1
sudo tunctl -d $LOGNAME-tap2
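Note that tunctl and brctl are legacy tools. If they are not available on your distribution, the same tap/bridge setup can be approximated with iproute2; an equivalent sketch (untested, using the same interface names as above):

sudo ip tuntap add dev $LOGNAME-tap1 mode tap user $LOGNAME
sudo ip tuntap add dev $LOGNAME-tap2 mode tap user $LOGNAME
sudo ip link add name $LOGNAME-br1 type bridge
sudo ip link set $LOGNAME-tap1 master $LOGNAME-br1
sudo ip link set $LOGNAME-tap2 master $LOGNAME-br1
sudo ip addr add 1.1.1.1/24 dev $LOGNAME-br1
sudo ip link set $LOGNAME-br1 up
sudo ip link set $LOGNAME-tap1 up
sudo ip link set $LOGNAME-tap2 up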
Supposing your VM1 and VM2 have some active L2/L3 (ISO-OSI) connectivity, so that some transport is available, a far better approach for setting up process-to-process communication is to use an industry-proven messaging framework rather than spend time building yet another FTP client/server.
For sending and receiving anything (including BLOBs such as whole files), try the ZeroMQ or nanomsg libraries: they are broker-less frameworks, have bindings for many programming languages, and offer genuinely good performance and low latency.
Any distributed process-to-process project will benefit from adopting this approach early.
Check http://zguide.zeromq.org/c:fileio3 for additional insights into add-on benefits such as load balancing, failure recovery, et al.
As per your Q1: FTP servers, per se, report file transfer times. In ZeroMQ you can programmatically measure the time spent on a file transfer as the difference between two timestamps.
As per your Q2: for ZeroMQ, use any port that is allowed (not restricted) in Ubuntu.
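As a rough, framework-agnostic illustration of the timestamp-difference idea at the shell level (transfer_command is a placeholder for whatever actually moves the file; bc is used for the subtraction):

START=$(date +%s.%N)
transfer_command        # placeholder for the actual send/receive step
END=$(date +%s.%N)
echo "elapsed: $(echo "$END - $START" | bc) seconds"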
I have a loadbalancer (see status below) that I want to delete. I already deleted the instances in its pool. Full disclosure: This is on a Devstack which I rebooted, and where I recreated the lb-mgmt-network routing manually. I may have overlooked a detail after the reboot. The loadbalancer worked before the reboot.
The first step to delete the loadbalancer is to delete its pool members. This fails as follows:
$ alias olb='openstack loadbalancer'
$ olb member delete website-pool 08f55..
Load Balancer 1ff... is immutable and cannot be updated. (HTTP 409)
What can I do to make it mutable?
Below, see the loadbalancer's status after recreating the o-hm0 route and restarting the amphora. Its provisioning status is ERROR, but according to the API, this should enable me to delete it:
$ olb status show kubelb
{
    "loadbalancer": {
        "id": "1ff7682b-3989-444d-a1a8-6c91aac69c45",
        "name": "kubelb",
        "operating_status": "ONLINE",
        "provisioning_status": "ERROR",
        "listeners": [
            {
                "id": "d3c3eb7f-345f-4ded-a7f8-7d97e3af0fd4",
                "name": "weblistener",
                "operating_status": "ONLINE",
                "provisioning_status": "ACTIVE",
                "pools": [
                    {
                        "id": "9b0875e0-7d16-4ebc-9e8d-d1b90d4264a6",
                        "name": "website-pool",
                        "provisioning_status": "ACTIVE",
                        "operating_status": "ONLINE",
                        "members": [
                            {
                                "id": "08f55bba-260a-4b83-ad6d-f9d6b44f0e2c",
                                "name": "",
                                "operating_status": "NO_MONITOR",
                                "provisioning_status": "ACTIVE",
                                "address": "172.16.0.21",
                                "protocol_port": 80
                            },
                            {
                                "id": "f7665e90-dad0-480e-8ef4-65e0a042b9fa",
                                "name": "",
                                "operating_status": "NO_MONITOR",
                                "provisioning_status": "ACTIVE",
                                "address": "172.16.0.22",
                                "protocol_port": 80
                            }
                        ]
                    }
                ]
            }
        ]
    }
}
When you have a load balancer in ERROR state you have two options:
Delete the load balancer using the cascade delete option (--cascade on the cli).
Use the failover API to tell Octavia to repair the load balancer once your cloud is fixed.
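With the olb alias from the question, the two options look roughly like this (a sketch; kubelb is the load balancer from the status output above):

# option 1: cascade delete removes the listeners, pools and members along with the load balancer
olb delete --cascade kubelb
# option 2: ask Octavia to rebuild the load balancer once the cloud is healthy again
olb failover kubelb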
In Octavia, the operating status is a measured/observed status. If the resources don't go ONLINE, it is likely that there is a network configuration issue with the lb-mgmt-net and the health heartbeat messages (UDP port 5555) are not making it back to the health manager controller.
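One way to check whether those heartbeats are arriving on the controller is to watch the health manager interface, e.g. (a sketch, assuming the devstack o-hm0 interface mentioned in the question):

sudo tcpdump -ni o-hm0 udp port 5555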
That said, devstack is not set up to survive a reboot. Specifically, neutron and the network interfaces will be in an improper state. As you have found, you can manually reconfigure those and usually get things working again.
If I understand the documentation and source code right, a loadbalancer in provisioning status ERROR can be deleted but not modified. Unfortunately, it can only be deleted after its pools and listeners have been deleted, which would modify the loadbalancer. That looks like a chicken-and-egg problem to me. I "solved" this by recreating the cloud from scratch. I guess I could also have cleaned up the database.
An analysis of the stack.sh log file revealed that a few additional steps were needed to make the Devstack cloud reboot-proof. To make Octavia ready:
Create /var/run/octavia, owned by the stack user
Ensure o-hm0 is up
Give o-hm0 the correct MAC and IP addresses, both found in the details of Neutron port octavia-health-manager-standalone-listen-port
Add netfilter rules for traffic coming from o-hm0
At this point, I feel I can reboot Devstack and still have functioning load balancers. Strangely, all load balancers' operating_status (as well as their listeners' and pools' operating_status) is OFFLINE. However, that doesn't prevent them from working. I have not found out how to make that ONLINE.
In case anybody is interested, below is the script I use after rebooting Devstack. In addition, I also changed the Netplan configuration so that br-ex gets the server's IP address (further below).
restore-devstack script:
$ cat restore-devstack
source ~/devstack/openrc admin admin
if losetup -a | grep -q /opt/stack/data/stack-volumes
then echo loop devices are already set up
else
sudo losetup -f --show --direct-io=on /opt/stack/data/stack-volumes-default-backing-file
sudo losetup -f --show --direct-io=on /opt/stack/data/stack-volumes-lvmdriver-1-backing-file
echo restarting Cinder Volume service
sudo systemctl restart devstack#c-vol
fi
sudo lvs
openstack volume service list
echo
echo recreating /var/run/octavia
sudo mkdir /var/run/octavia
sudo chown stack /var/run/octavia
echo
echo setting up the o-hm0 interface
if ip l show o-hm0 | grep -q 'state DOWN'
then sudo ip l set o-hm0 up
else echo o-hm0 interface is not DOWN
fi
HEALTH_IP=$(openstack port show octavia-health-manager-standalone-listen-port -c fixed_ips -f yaml | grep ip_address | cut -d' ' -f3)
echo health monitor IP is $HEALTH_IP
if ip a show dev o-hm0 | grep -q $HEALTH_IP
then echo o-hm0 interface has IP address
else sudo ip a add ${HEALTH_IP}/24 dev o-hm0
fi
HEALTH_MAC=$(openstack port show octavia-health-manager-standalone-listen-port -c mac_address -f value)
echo health monitor MAC is $HEALTH_MAC
sudo ip link set dev o-hm0 address $HEALTH_MAC
echo o-hm0 MAC address set to $HEALTH_MAC
echo route to loadbalancer network:
ip r show 192.168.0.0/24
echo
echo fix netfilter for Octavia
sudo iptables -A INPUT -i o-hm0 -p udp -m udp --dport 20514 -j ACCEPT
sudo iptables -A INPUT -i o-hm0 -p udp -m udp --dport 10514 -j ACCEPT
sudo iptables -A INPUT -i o-hm0 -p udp -m udp --dport 5555 -j ACCEPT
echo fix netfilter for Magnum
sudo iptables -A INPUT -d 192.168.1.200/32 -p tcp -m tcp --dport 443 -j ACCEPT
sudo iptables -A INPUT -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -d 192.168.1.200/32 -p tcp -m tcp --dport 9511 -j ACCEPT
Netplan config:
$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      dhcp4: no
    br-ex:
      addresses: [192.168.1.200/24]
      nameservers: { addresses: [192.168.1.16,1.1.1.1] }
      gateway4: 192.168.1.1
  version: 2
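After editing the file, the configuration can be applied without a reboot (assuming nothing else manages these interfaces):

sudo netplan apply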
Summary
I'm trying to use testpmd as a sink of traffic from a physical NIC, through OVS with DPDK.
When I run testpmd, it fails. The error message is very brief, so I have no idea what's wrong.
How can I get testpmd to connect to a virtual port in OVS with DPDK?
Steps
I'm mostly following these Mellanox instructions.
# step 5 - "Specify initial Open vSwitch (OVS) database to use"
export PATH=$PATH:/usr/local/share/openvswitch/scripts
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
# step 6 - "Configure OVS to support DPDK ports"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# step 7 - "Start OVS-DPDK service"
ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start # what does this do? I forget
# step 8 - "Configure the source code analyzer (PMD) to work with 2G hugespages and NUMA node0"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048" # 2048 = 2GB
# step 9 - "Set core mask to enable several PMDs"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xFF0 # cores 4-11, 4 per NUMA node
# core masks are bitmasks, one bit per core; LSB is core 0
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x8 # core 3
# step 10 - there is no step 10 in the doc linked above
# step 11 - Create an OVS bridge
BRIDGE="br0"
ovs-vsctl add-br $BRIDGE -- set bridge br0 datapath_type=netdev
Then for the OVS elements I'm trying to follow these steps
# add physical NICs to bridge, must be named dpdk(\d+)
sudo ovs-vsctl add-port $BRIDGE dpdk0 \
-- set Interface dpdk0 type=dpdk \
options:dpdk-devargs=0000:5e:00.0 ofport_request=1
sudo ovs-vsctl add-port $BRIDGE dpdk1 \
-- set Interface dpdk1 type=dpdk \
options:dpdk-devargs=0000:5e:00.1 ofport_request=2
# add a virtual port to connect to testpmd/VM
# Not sure if I want dpdkvhostuser or dpdkvhostuserclient
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser0 \
-- \
set Interface dpdkvhostuser0 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:4,1:6" \
ofport_request=3
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser1 \
-- \
set Interface dpdkvhostuser1 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:8,1:10" \
ofport_request=4
# add flows to join interfaces (based on ofport_request numbers)
sudo ovs-ofctl add-flow $BRIDGE in_port=1,action=output:3
sudo ovs-ofctl add-flow $BRIDGE in_port=3,action=output:1
sudo ovs-ofctl add-flow $BRIDGE in_port=2,action=output:4
sudo ovs-ofctl add-flow $BRIDGE in_port=4,action=output:2
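As a sanity check (not part of the original instructions), the flow table and the OpenFlow port numbers can be inspected with:

sudo ovs-ofctl dump-flows $BRIDGE
sudo ovs-ofctl show $BRIDGE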
Then I run testpmd
sudo -E $DPDK_DIR/x86_64-native-linuxapp-gcc/app/testpmd \
--vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0 \
--vdev virtio_user1,path=/usr/local/var/run/openvswitch/dpdkvhostuser1 \
-c 0x00fff000 \
-n 1 \
--socket-mem=2048,2048 \
--file-prefix=testpmd \
--log-level=9 \
--no-pci \
-- \
--port-numa-config=0,0,1,0 \
--ring-numa-config=0,1,0,1,1,0 \
--numa \
--socket-num=0 \
--txd=512 \
--rxd=512 \
--mbcache=512 \
--rxq=1 \
--txq=1 \
--nb-cores=4 \
-i \
--rss-udp \
--auto-start
The output is:
...
EAL: lcore 18 is ready (tid=456c700;cpuset=[18])
EAL: lcore 21 is ready (tid=2d69700;cpuset=[21])
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=327680, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=327680, size=2176, socket=1
Configuring Port 0 (socket 0)
Fail to configure port 0
EAL: Error - exiting with code: 1
Cause: Start ports failed
The bottom of /usr/local/var/log/openvswitch/ovs-vswitchd.log is
2018-11-30T02:45:49.115Z|00026|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been added on numa node 0
2018-11-30T02:45:49.115Z|00027|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00028|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2018-11-30T02:45:49.115Z|00029|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0'changed to 'enabled'
2018-11-30T02:45:49.115Z|00030|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00031|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2018-11-30T02:45:49.278Z|00032|dpdk|ERR|VHOST_CONFIG: recvmsg failed
2018-11-30T02:45:49.279Z|00033|dpdk|INFO|VHOST_CONFIG: vhost peer closed
2018-11-30T02:45:49.280Z|00034|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been removed
What is the cause of the failure?
Should I use dpdkvhostuserclient instead of dpdkvhostuser?
What else I've tried
Looking in /var/log/messages for more info - but that's just a copy of stdout and stderr.
Rebooting
Looking up the OVS docs, but they don't mention "logs"
I've tried changing my testpmd command arguments. (Docs here)
Getting rid of --no-pci. The result is
Configuring Port 0 (socket 0)
Port 0: 24:8A:07:9E:94:94
Configuring Port 1 (socket 0)
Port 1: 24:8A:07:9E:94:95
Configuring Port 2 (socket 0)
Fail to configure port 2
EAL: Error - exiting with code: 1
Cause: Start ports failed
Those MAC addresses are for the physical NICs I've already connected to OVS.
Removing --auto-start: same result
--nb-cores=1: same result
Removing the 2nd --vdev: Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained. When I add --port-topology=chained I end up with the original error.
Other Info
DPDK 17.11.4
OVS 2.10.1
NIC: Mellanox Connect-X5
OS: Centos 7.5
My NIC is on NUMA node 0
When I run ip addr I see that an interface called br0 has the same MAC address as my physical NIC p3p1 (as it appeared when it was still bound to the kernel driver)
When I run sudo ovs-vsctl show I see
d3e721eb-6aeb-44c0-9fa8-5fcf023008c5
    Bridge "br0"
        Port "dpdkvhostuser1"
            Interface "dpdkvhostuser1"
                type: dpdkvhostuser
                options: {n_rxq="2,pmd-rxq-affinity=0:8,1:10"}
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.1"}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.0"}
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdkvhostuser0"
            Interface "dpdkvhostuser0"
                type: dpdkvhostuser
                options: {n_rxq="2,pmd-rxq-affinity=0:4,1:6"}
Edit: Added contents of /usr/local/var/log/openvswitch/ovs-vswitchd.log
Indeed, you need dpdkvhostuser here, not dpdkvhostuserclient.
The real problem is a queue-count mismatch: the OVS vhost ports are configured with options:n_rxq=2, while testpmd runs with --rxq=1 and --txq=1.
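A sketch of how the counts could be matched, assuming the setup above is otherwise unchanged: either raise the testpmd queue counts or drop the OVS side to a single queue. Note also that in the ovs-vsctl show output above n_rxq appears to have absorbed the whole comma-joined string, so it may need to be set on its own:

# option A: match testpmd to the two queues per vhost port
# (replace --rxq=1 --txq=1 on the testpmd command line with)
--rxq=2 --txq=2

# option B: reduce OVS to one queue per vhost port
sudo ovs-vsctl set Interface dpdkvhostuser0 options:n_rxq=1
sudo ovs-vsctl set Interface dpdkvhostuser1 options:n_rxq=1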
Can somebody help me get apt-get working when building a Docker image?
It works with the default docker0 bridge, but it fails when I add my own bridge like this:
docker network create --gateway=192.168.0.253 --subnet 192.168.0.0/24 --ip-range=192.168.0.128/25 \
-o "com.docker.network.bridge.default_bridge"="true" \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.host_binding_ipv4"="192.168.0.254" \
-o "com.docker.network.bridge.name"="br0" \
-o "com.docker.network.driver.mtu"="1500" \
-d bridge mynet
linked with this /etc/network/interfaces:
auto enp2s0
iface enp2s0 inet manual

auto br0
iface br0 inet static
    bridge_ports enp2s0
    bridge_fd 0
    bridge_stp off
    bridge_maxwait 0
    address 192.168.0.253
    netmask 255.255.255.0
    network 192.168.0.0
    gateway 192.168.0.254
    dns-nameservers 212.27.40.240 212.27.40.241
My docker build fails:
Step 4 : RUN apt-get update
---> Running in 4edfcd4885fe
Err:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Temporary failure resolving 'archive.ubuntu.com' ....
But apt-get works fine with:
docker run -it --network=mynet --ip=192.168.0.130 --hostname=test --name test ubuntu:16.04 /bin/bash
How can I fix my build?
Thanks
I installed Docker on two hosts (virtual machines). I'd like the containers on the different hosts to be able to connect to each other.
Here's VM1's and VM2's ifconfig output:
VM1
bridge0 : inet addr:172.17.52.1 Bcast:172.17.52.255 Mask:255.255.255.0
docker0 : inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
eth0 : inet addr:192.168.122.129 Bcast:192.168.122.255 Mask:255.255.255.0
VM2
bridge0 : inet addr:172.17.53.1 Bcast:172.17.53.255 Mask:255.255.255.0
docker0 : inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
eth0 : inet addr:192.168.122.77 Bcast:192.168.122.255 Mask:255.255.255.0
bridge0 is used for the containers. I have made the following network configuration:
iptables -t nat -A POSTROUTING -s 172.17.52.0/24 ! -d 172.17.0.0/16 -j MASQUERADE (on VM1)
iptables -t nat -A POSTROUTING -s 172.17.53.0/24 ! -d 172.17.0.0/16 -j MASQUERADE (on VM2)
route add -net 172.17.52.0 netmask 255.255.255.0 gw 192.168.122.129 (on VM2)
route add -net 172.17.53.0 netmask 255.255.255.0 gw 192.168.122.77 (on VM1)
VM1 can ping VM2 successfully, and the container on VM1 can also ping VM2 successfully. However, I get no output when the container on VM1 pings the container on VM2 (172.17.52.X pinging 172.17.53.X).
One very easy way to achieve this would be by using Weave.
You can install it with:
sudo wget -O /usr/local/bin/weave \
https://github.com/zettio/weave/releases/download/latest_release/weave
sudo chmod a+x /usr/local/bin/weave
VM1
sudo weave launch
C=$(sudo weave run 10.2.1.1/24 -t -i busybox)
VM2
sudo weave launch 192.168.122.129
C=$(sudo weave run 10.2.1.2/24 -t -i busybox)
docker exec $C ping -c 3 10.2.1.1
You have just created a virtual network of containers. The beauty is that these VMs can be anywhere, as long as at least one of them has a public IP address with port 6783 open.
You can even enable NaCl crypto by running weave launch -password "<MySecret>" (or by exporting WEAVE_PASSWORD="<MySecret>" before weave launch).
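In command form (with <MySecret> as your own passphrase):

sudo weave launch -password "<MySecret>"
# or
export WEAVE_PASSWORD="<MySecret>"
sudo weave launch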