Cannot set up the virtual interfaces for OpenStack with the Ansible installation procedure - networking

I am a bit of a noob, and I am trying to install OpenStack (Xena) on three Debian machines, named node1, node2 and node3.
By default, all these machines have a fixed IP address (reserved in the DHCP server):
node1: 172.0.16.250
node2: 172.0.16.251
node3: 172.0.16.252
---
gateway: 172.0.16.2
mask: 255.240.0.0
---
DHCP server range: 172.0.16.10 -> 172.0.16.249
My goal is simply to test OpenStack. I want to install the infrastructure services on node1, and compute and storage on node2 & node3.
While following the installation procedure here, I have to add virtual networks. The three computers only have one Ethernet connection each. I used this configuration example for my nodes.
After restarting a node, I no longer have any connection to the internet, nor to the local network.
I understand that I am doing something wrong. I would like these machines to reach the internet, and to be reachable from any point in my LAN, so that I can install OpenStack with Ansible.
The steps I am following: https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/deploymenthost.html

If you are running on a hypervisor like VirtualBox, try adding a dedicated NAT-mode interface for internet access, and separate interfaces for your bridges such as br-mgmt, br-vxlan, and so on.
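If the nodes are physical machines with a single NIC (as in the question), the usual cause of losing all connectivity is that the host IP stays on eth0 while the bridge takes over the port. A minimal /etc/network/interfaces sketch for node1, assuming the NIC is named eth0 and the bridge-utils package is installed (it reuses the question's addresses; adapt for node2/node3 and treat it as a starting point, not the official OSA layout):
auto eth0
iface eth0 inet manual

# the host's IP and gateway move onto the management bridge
auto br-mgmt
iface br-mgmt inet static
    bridge_ports eth0
    bridge_stp off
    address 172.0.16.250
    netmask 255.240.0.0
    gateway 172.0.16.2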

Related

Why is it not possible to ping from a real machine to a VM inside OpenStack

I created a VM (vm-devstack-01) using Vagrant and VirtualBox, in which I installed DevStack. The VM has an enp0s3 interface in NAT mode and an enp0s8 interface in bridged mode. The real network I use in my house is 192.168.88.0/24. This network uses DHCP addressing.
On vm-devstack-01, I set FLOATING_RANGE in local.conf to 192.168.88.224/27.
My local.conf:
[[local|localrc]]
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=192.168.88.43
FLAT_INTERFACE=enp0s8
FLOATING_RANGE=192.168.88.224/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
Later I created a Debian VM (vm-debian-01) on OpenStack, which received the floating IP 192.168.88.230.
Also, a security group rule allowing ping was created:
Ingress IPv4 ICMP Any 0.0.0.0/0
With this configuration it is possible to ping from vm-devstack-01 to vm-debian-01 created inside OpenStack.
But I can't ping from the real machine (my notebook, IP 192.168.88.28) to vm-debian-01. What am I doing wrong?
You need MASQUERADE rules on your OpenStack host machine, that is, network address translation for packets to and from your VMs.
At the same time, you need routing to your OpenStack host from every other network that should reach the VMs.
In short, you need:
Masquerade rules
Routing
Proper security group settings in OpenStack
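A rough sketch of the first two items, using the values from the question (enp0s8 as the host's bridged interface, 10.11.12.0/24 as FIXED_RANGE, 192.168.88.224/27 as FLOATING_RANGE, 192.168.88.43 as HOST_IP); treat it as a starting point rather than a drop-in solution:
# On the devstack host: NAT traffic from the tenant network out through the LAN interface
sudo iptables -t nat -A POSTROUTING -s 10.11.12.0/24 -o enp0s8 -j MASQUERADE
# On the notebook (192.168.88.28): route the floating range via the devstack host,
# if the host does not already answer ARP for the floating IPs
sudo ip route add 192.168.88.224/27 via 192.168.88.43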

DevStack instances can't be reached outside devstack node

Following the official documentation, I'm trying to deploy DevStack on Ubuntu 18.04 Server in a virtual machine. The devstack node has only one network card (ens160), connected to a network with the CIDR 10.20.30.40/24. I need my instances to be publicly accessible on this network (from 10.20.30.240 to 10.20.30.250). So, again following the official floating-IP documentation, I came up with this local.conf file:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
PUBLIC_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.40/24
PUBLIC_NETWORK_GATEWAY=10.20.30.1
Q_FLOATING_ALLOCATION_POOL=start=10.20.30.240,end=10.20.30.250
This results in a br-ex bridge with the global IP address 10.20.30.40 and a secondary IP address 10.20.30.1 (the gateway already exists on the network; isn't the PUBLIC_NETWORK_GATEWAY parameter supposed to refer to the real gateway on the network?).
Now, after a successful deployment, disabling ufw (according to this), creating a CirrOS instance with a proper security group for ping and SSH, and attaching a floating IP, I can only access my instance from the devstack node, not from the whole network! Also, from within the CirrOS instance I cannot reach the outside world (even though I can reach the outside world from the devstack node).
Afterwards, watching this video, I modified the local.conf file like this:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
FLAT_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.240/28
After a successful deployment and instance setup, I can still only access my instance from the devstack node and not from the outside! But the good news is that I can now reach the outside world from within the CirrOS instance.
Any help would be appreciated!
Update
With the second configuration, checking packets with tcpdump while pinging the instance's floating IP, I observed that the ARP who-has broadcast for the instance's floating IP reaches the devstack node from the network router; however, no is-at reply is generated, so the ICMP packets are never routed to the devstack node and the instance.
So, with some tricks I crafted the reply myself and everything worked fine afterwards; but this certainly isn't a solution. I imagine DevStack should work out of the box without any tweaking, so this is probably due to a misconfiguration on my side.
After 5 days of tests, research and reading, I found this: OpenStack VM is not accessible on LAN
Enter the following commands on the devstack node:
echo 1 > /proc/sys/net/ipv4/conf/ens160/proxy_arp
iptables -t nat -A POSTROUTING -o ens160 -j MASQUERADE
That'll do the trick!
Cheers!
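If that fixes it, the two settings can be made persistent across reboots; a minimal sketch, assuming an Ubuntu/Debian host where the iptables-persistent package is acceptable:
echo "net.ipv4.conf.ens160.proxy_arp = 1" | sudo tee /etc/sysctl.d/99-proxy-arp.conf
sudo sysctl --system
# save the current iptables rules (including the MASQUERADE rule above)
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save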

Failed to SSH to a machine after installing Docker

SSH stopped working after I successfully installed Docker (following the official instructions, https://docs.docker.com/engine/installation/) on an Ubuntu machine A. Now my laptop cannot SSH to A, but it works fine for other machines, say B, sitting in the same network environment as A. A can SSH to B and B can also SSH to A. What could be the problem? Can anyone suggest how I can diagnose this?
If you are using a VPN service, you might be encountering an IP conflict between the docker0 interface and your VPN.
To resolve this:
Stop the Docker service:
sudo service docker stop
Remove the old docker0 interface created by Docker:
ip link del docker0
Configure the docker0 bridge (in my case I only had to define the "bip" option, as sketched below).
Start the Docker service:
sudo service docker start
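For reference, a minimal /etc/docker/daemon.json that pins docker0 to a non-conflicting subnet could look like this (the 10.10.0.1/24 value is only an example; pick anything outside your VPN's range):
{
  "bip": "10.10.0.1/24"
}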
Most probably there is an IP conflict between the docker0 interface and your VPN service. As already answered, the way to fix it is to stop the Docker service, remove the docker0 interface, and configure the daemon.json file. I added the following lines to my daemon.json:
{
  "default-address-pools": [
    { "base": "10.10.0.0/16", "size": 24 }
  ]
}
My VPN was giving me an IP like 192.168.x.x, so I chose a base IP that does not fall in that range. Note that the daemon.json file does not exist by default, so you have to create it in /etc/docker/.
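After creating or editing the file, restart the daemon and check that docker0 picked up an address from the new pool (a quick sanity check, not an official procedure):
sudo systemctl restart docker
ip addr show docker0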

Docker 1.10 container's IP in LAN

Since Docker 1.10 (and the libnetwork update) we can manually give an IP to a container inside a user-defined network, and that's cool!
I want to give a container an IP address in my LAN (like we can do with Virtual Machines in "bridge" mode). My LAN is 192.168.1.0/24, all my computers have IP addresses inside it. And I want my containers having IPs in this range, in order to reach them from anywhere in my LAN (without NAT/PAT/etc...).
I obviously read Jessie Frazelle's blog post and a lot of other posts here and elsewhere, like:
How to set a docker container's iP?
How to assign specific IP to container and make that accessible outside of VM host?
and so much more, but nothing came out of it; my containers still have IP addresses "inside" my Docker host, and are not reachable by other computers on my LAN.
Reading Jessie Frazelle's blog post, I thought (since she uses public IPs) that we could do what I want to do.
Edit: Indeed, if I do something like:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
docker run --rm -it --net homenet --ip 192.168.1.100 nginx
the new interface on the Docker host (br-[a-z0-9]+) takes the '--gateway' IP, which is my router's IP. And having the same IP on two machines on the network... BOOM.
Thanks in advance.
EDIT: This solution is now obsolete. Since version 1.12, Docker provides two network drivers: macvlan and ipvlan. They allow assigning static IPs from the LAN network. See the answer below.
After looking for people who had the same problem, we came up with a workaround:
To sum up:
(V)LAN is 192.168.1.0/24
Default Gateway (= router) is 192.168.1.1
Multiple Docker Hosts
Note: we have two NICs: eth0 and eth1 (which is dedicated to Docker)
What we want:
We want containers with IPs in the 192.168.1.0/24 network (like regular computers), without any NAT/PAT/translation/port-forwarding/etc.
Problem
When doing this:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
we are able to give containers the IPs we want, but the bridge created by Docker (br-[a-z0-9]+) will get the IP 192.168.1.1, which is our router's address.
Solution
1. Set up the Docker network
Use the DefaultGatewayIPv4 parameter:
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" homenet
By default, Docker will give the bridge interface (br-[a-z0-9]+) the first IP, which might already be taken by another machine. The solution is to use the --gateway parameter to tell Docker to assign an arbitrary IP (one that is available):
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" --gateway=192.168.1.200 homenet
We can specify the bridge name by adding -o com.docker.network.bridge.name=br-home-net to the previous command.
2. Bridge the bridge!
Now we have a bridge (br-[a-z0-9]+) created by Docker. We need to bridge it to a physical interface (in my case I have two NICs, so I'm using eth1 for that):
brctl addif br-home-net eth1
3. Delete the bridge IP
We can now delete the IP address from the bridge, since we don't need it:
ip a del 192.168.1.200/24 dev br-home-net
The IP 192.168.1.200 can be reused as the bridge IP on multiple Docker hosts, since we never actually use it and we remove it.
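A quick way to verify the result (a sketch; the interface and network names follow the steps above): the bridge should enslave eth1 and carry no IP of its own, and a test container should reach the real router directly:
brctl show br-home-net
ip addr show br-home-net
docker run --rm -it --net homenet --ip 192.168.1.100 alpine ping -c 3 192.168.1.1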
Docker now supports Macvlan and IPvlan network drivers. The Docker documentation for both network drivers can be found here.
With both drivers you can implement your desired scenario (configure a container to behave like a virtual machine in bridge mode):
Macvlan: allows a single physical network interface (master device) to have an arbitrary number of slave devices, each with its own MAC address.
Requires Linux kernel v3.9–3.19 or 4.0+.
IPvlan: Allows you to create an arbitrary number of slave devices for your master device which all share the same MAC address.
Requires Linux kernel v4.2+ (support for earlier kernels exists but is buggy).
See the kernel.org IPVLAN Driver HOWTO for further information.
Container connectivity is achieved by putting one of the slave devices into the network namespace of the container to be configured. The master device remains on the host operating system (default namespace).
As a rule of thumb, you should use the IPvlan driver if the Linux host that is connected to the external switch / router has a policy configured that allows only one MAC per port. That's often the case in VMware ESXi environments!
Another important thing to remember (Macvlan and IPvlan): Traffic to and from the master device cannot be sent to and from slave devices. If you need to enable master to slave communication see section "Communication with the host (default-ns)" in the "IPVLAN – The beginning" paper published by one of the IPvlan authors (Mahesh Bandewar).
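For example, an IPvlan network for the 192.168.1.0/24 LAN from the question could be created like this (a sketch, assuming eth0 is the master NIC on that LAN; on older Docker releases the ipvlan driver may still require experimental mode):
docker network create -d ipvlan --subnet 192.168.1.0/24 --gateway 192.168.1.1 -o parent=eth0 -o ipvlan_mode=l2 lan_ipvlan
docker run --rm -it --net lan_ipvlan --ip 192.168.1.102 alpine sh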
Use the official Docker driver:
As of Docker v1.12.0-rc2, the new MACVLAN driver is now available in an official Docker release:
MacVlan driver is out of experimental #23524
These new drivers have been well documented by the author(s), with usage examples.
At the end of the day it should provide similar functionality, be easier to set up, and have fewer bugs and other quirks.
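For the question's 192.168.1.0/24 LAN, a minimal macvlan sketch might look like this (assuming eth0 is the NIC attached to that LAN; the container then gets a LAN IP directly, with no NAT):
docker network create -d macvlan --subnet 192.168.1.0/24 --gateway 192.168.1.1 -o parent=eth0 lan_macvlan
docker run --rm -it --net lan_macvlan --ip 192.168.1.101 nginx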
Seeing Containers on the Docker host:
The only caveat with the new official macvlan driver is that the Docker host machine cannot see / communicate with its own containers. This might or might not be desirable, depending on your specific situation.
This issue can be worked around if you have more than one NIC on your Docker host machine and both NICs are connected to your LAN. Then you can either A) dedicate one of your Docker host's two NICs to Docker exclusively and use the remaining NIC for the host to access the LAN,
or B) add specific routes for only those containers you need to access via the second NIC. For example:
sudo route add -host $container_ip gw $lan_router_ip $if_device_nic2
Method A) is useful if you want to access all your containers from the Docker host and you have multiple hardwired links.
Whereas method B) is useful if you only require access to a few specific containers from the Docker host, or if your second NIC is a Wi-Fi card and would be much slower for handling all of your LAN traffic, for example on a laptop.
Installation:
If you cannot see the pre-release -rc2 candidate on Ubuntu 16.04, temporarily add or modify this line in your /etc/apt/sources.list so it says:
deb https://apt.dockerproject.org/repo ubuntu-xenial testing
instead of main (which is for stable releases).
I no longer recommend this solution, so it has been removed. It was using the bridge driver and brctl.
There is a better, official driver now. See the other answer on this page: https://stackoverflow.com/a/36470828/287510
Here is an example of using macvlan. It starts a web server at http://10.0.2.1/.
These commands and Docker Compose file work on QNAP and QNAP's Container Station. Notice that QNAP's network interface is qvs0.
Commands:
The blog post "Using Docker macvlan networks"[1][2] by Lars Kellogg-Stedman explains what the commands mean.
docker network create -d macvlan -o parent=qvs0 --subnet 10.0.0.0/8 --gateway 10.0.0.1 --ip-range 10.0.2.0/24 --aux-address "host=10.0.2.254" macvlan0
ip link del macvlan0-shim link qvs0 type macvlan mode bridge
ip link add macvlan0-shim link qvs0 type macvlan mode bridge
ip addr add 10.0.2.254/32 dev macvlan0-shim
ip link set macvlan0-shim up
ip route add 10.0.2.0/24 dev macvlan0-shim
docker run --network="macvlan0" --ip=10.0.2.1 -p 80:80 nginx
Docker Compose
Use version 2 because version 3 does not support the other network configs, such as gateway, ip_range, and aux_address.
version: "2.3"

services:
  HTTPd:
    image: nginx:latest
    ports:
      - "80:80/tcp"
      - "80:80/udp"
    networks:
      macvlan0:
        ipv4_address: "10.0.2.1"

networks:
  macvlan0:
    driver: macvlan
    driver_opts:
      parent: qvs0
    ipam:
      config:
        - subnet: "10.0.0.0/8"
          gateway: "10.0.0.1"
          ip_range: "10.0.2.0/24"
          aux_address: "host=10.0.2.254"
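To bring it up and check the result (the macvlan0-shim created above is what lets the QNAP host itself reach the container; other machines on the LAN can reach it directly):
docker-compose up -d
curl http://10.0.2.1/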
It's possible to map a physical interface into a container via pipework.
Connect a container to a local physical interface:
pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5/24
There may be a native way now but I haven't looked into that for the 1.10 release.

Networking: KVM, 2 hosts, 2 VMs, and a LAN router

I have two hosts running openSUSE 42.1, connected via eth0 to a D-Link router (accessible on 192.168.0.1) and using NetworkManager:
- vboard/eth0 is assigned IP 192.168.0.199 via the router's DHCP
- rihana/eth0 192.168.0.198
Using KVM on both hosts, I have two openSUSE VMs (vmvboard, vmrihana), one on each host.
On both hosts I configured an identical virbr0 network, in the range 192.168.100.0/24, with a DHCP range of 192.168.100.128-254 and NAT on any physical device.
On each side, the VM can ping its KVM host, but the VMs cannot talk to each other across the router network. This configuration used to work on openSUSE 13.2, but without NetworkManager...
What am I doing wrong?
Can anyone help me with this configuration: networking with 2 hosts, a router and 2 VMs, one on each host?
Thanks a lot in advance for your ideas.
Bridged network for the hosts and VMs in a few clicks, using wicked.
Set the D-Link router settings via
Firefox/Chrome URL: 192.168.0.1
User: Admin
Pass: blank
IP 192.168.0.1, subnet mask 255.255.255.224 (30 usable IPs)
Enable DHCP: unchecked
Note: it might be easier to first reset the router to its standard settings and connect through NetworkManager.
Setting up the bridged network with libvirt/virt-manager and the wicked openSUSE network service.
1.1 Clean the previous bridge definitions on host1 and host2
with YaST / Network Settings:
Global Options tab > select Wicked in the dropdown
> Overview tab > delete all interfaces so they appear as "not configured"
> Hostname/DNS tab > note that your hostname and domain remain there
> Routing tab > Enable IPv4 Forwarding is off (no routing features for host1 and host2)
Click OK.
This has now cleaned all interfaces/bridges and activated the wicked network service instead of NetworkManager: the GNOME upper-right corner menu no longer shows Wired/Wi-Fi settings options.
1.2 Check the cleanup in a GNOME Terminal as root:
# su root
Password:
# cd /etc/sysconfig/network
# ls ifcfg*
returns only
ifcfg-lo ifcfg.template
# ls .ifcfg*
ls: cannot access .ifcfg*: No such file or directory
but in case there are still .ifcfg-br0 and .ifcfg-eth0 files:
# rm .ifcfg-br0
# rm .ifcfg-eth0
# ifconfig
shows only the lo interface:
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0 [...]
Check that wicked is active:
# systemctl status wicked
wicked.service - wicked managed network interfaces
Loaded: loaded (/usr/lib/systemd/system/wicked.service; enabled)
Active: active (exited) since Fri 2016-01-08 15:37:56 AZOT; 34min ago
2. Setting up the bridge with libvirt, from the GNOME Terminal command line:
# virt-manager
or via GNOME > YaST > Virtualization > Create VM, followed by cancelling the creation.
2.1 Click on your hypervisor to connect (in my case QEMU/KVM).
2.2 Menu Edit > Connection Details, or right-click > Details.
2.3 Go to the Network Interfaces tab > click + (Add) > Bridge > Forward:
Name: br0
Start Mode: none
Activate Now: checked
IP Settings: leave DHCP,
or Configure, Mode: Static (to continue the previous example, and because the VMs' IPs are already statically defined)
Address: 192.168.0.2/27 (equivalent to subnet mask 255.255.255.224)
Gateway: 192.168.0.1
Bridge Settings: turn STP off (no complex networks)
Choose Interface(s) to Bridge: eth0 is checked
2.4 Finish. This will take some time to set up.
"The virtual interface is now being created." Processing... and br0 (or brX) shows as active.
2.5 Adjust your VMs' network NIC settings while they are still down.
2.5.0 Remove the old NIC from the VM:
virt-manager > select VM > Open > click the lamp icon > select the NIC > click Remove (bottom-right corner).
Note: if the lamp icon does not appear after Open, go to View and select the Toolbar checkbox.
2.5.1 Add a new NIC to the VM:
virt-manager > select VM > Open > click the lamp icon > Add Hardware > Network >
Network source: Bridge br0: host device eth0
MAC Address: checked, leave the suggested one
Device Model: Hypervisor Default, or one you know the VM guest has a driver for.
Finish
2.5.2 Run the VM and test from the VM's GNOME Terminal:
# ping vm
# ping host1
etc.
2.6 Repeat the steps on the host2 bare-metal machine, starting with step 1 and then 2.3, with:
Name: br0
Start Mode: none
Activate Now: checked
IP Settings: leave DHCP,
or Configure, Mode: Static
Address: 192.168.0.10/27
Bridge Settings: turn STP off
Choose Interface(s) to Bridge: eth0 is checked
2.7 Adjust the host2 VM's network NIC settings according to 2.5.
Conclusion: starting from a clean situation, virt-manager could set up the bridge and the network connection to the router successfully, in just a few clicks.

