FLAT_INTERFACE in local.conf in devstack - openstack

I have two questions regarding devstack.
For the All-In-One Single Machine setup of devstack, there is a setting FLAT_INTERFACE=eth0. Where is the variable FLAT_INTERFACE used in the stack.sh script? I can see that the variable FIXED_RANGE is used in stack.sh.
I have an Ubuntu machine on the company network, which uses DHCP, and I installed devstack on it. I want the VMs on this machine to get IP addresses on the same subnet as the Ubuntu PC, assigned by the company DHCP server. How do I do that? It seems the All-In-One Single Machine setup requires FIXED_RANGE to be configured, and the VMs then get IP addresses from the FIXED_RANGE, which is not the same as the subnet served by the company DHCP.

Check out http://docs.openstack.org/developer/devstack/configuration.html#local-conf, in the Minimal configuration section; it walks you through a sample configuration for your devstack installation.
Mainly it says the following:
Minimal Configuration
While stack.sh is happy to run without a localrc section in local.conf, devlife is better when there are a few minimal variables set. This is an example of a minimal configuration that touches the values that most often need to be set.
[...]
[[local|localrc]]
ADMIN_PASSWORD=secrete
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50
#FIXED_RANGE=172.31.1.0/24
#FLOATING_RANGE=192.168.20.0/25
#HOST_IP=10.3.4.5
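To answer the first question directly: variables from the localrc section are sourced early and are not necessarily referenced in stack.sh itself; many are consumed by stackrc and the service libraries under lib/ (in older, nova-network-based releases, FLAT_INTERFACE was read by the nova code there). A quick way to find every use in your checkout is a recursive grep (the path below assumes you cloned devstack into ./devstack):

```shell
cd devstack           # or wherever you cloned the devstack repo
# Search the main script, the sourced defaults, and the service libraries
grep -rn "FLAT_INTERFACE" stack.sh stackrc lib/
```

Whatever files the grep reports are where the value from your local.conf actually takes effect.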

Can not set the virtual interface for openstack with ansible installation procedure

I am a bit of a noob, and I am trying to install OpenStack (Xena) on three Debian machines, named node1, node2 and node3.
By default, all these machines have a fixed IP address (reserved in the DHCP server):
node1: 172.0.16.250
node2: 172.0.16.251
node3: 172.0.16.252
---
gateway: 172.0.16.2
mask: 255.240.0.0
---
DHCP server range: 172.0.16.10 -> 172.0.16.249
My goal is simply to test OpenStack. I want to install the infrastructure on node1, and compute and storage on node2 and node3.
While following the installation procedure here, I have to add virtual networks. The three computers have only one Ethernet connection each. I used this configuration example for my nodes.
When restarting a node, I no longer have any connection to the internet, nor to the local network.
I understand that I am doing something wrong. I would like these machines to reach the internet, and to be reachable from any point on my LAN, so that I can install OpenStack with Ansible.
The steps I am following : https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/deploymenthost.html
If you are running on a hypervisor like VirtualBox, try adding a dedicated NAT-mode interface for internet access, plus additional interfaces for your bridges such as br-mgmt, br-vxlan, and so on.
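As a concrete sketch of that suggestion (the VM name and NIC numbers are assumptions; adjust them to your setup), with VirtualBox you could keep NIC 1 as the existing bridged adapter for LAN access and add extra adapters for internet and the OpenStack-Ansible bridges:

```shell
# Run with the VM powered off. "node1" is an assumed VM name.
# NIC 2: NAT adapter so the VM keeps internet access
VBoxManage modifyvm "node1" --nic2 nat
# NICs 3 and 4: internal networks to back the br-mgmt and br-vxlan bridges
VBoxManage modifyvm "node1" --nic3 intnet --intnet3 "mgmt-net"
VBoxManage modifyvm "node1" --nic4 intnet --intnet4 "vxlan-net"
```

Inside the guest you then build br-mgmt and br-vxlan on top of these extra interfaces, leaving the original bridged NIC untouched so you never lose LAN or internet connectivity.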

Set GITLAB to be accessible on LAN

After much research I have not found anything...
I installed GitLab on a CentOS VM. The CentOS IP address is 192.168.100.1.
In the file /etc/gitlab/gitlab.rb, I modified the line:
external_url 'http://192.168.100.1:1234'
I executed the command 'gitlab-ctl reconfigure' and no errors appeared.
Using Firefox, I can access my GitLab on all of the CentOS machine's interfaces:
192.168.100.1:1234
127.0.0.1:1234
That is expected, because when I execute 'netstat -ntlp' I can see:
tcp    0    0    0.0.0.0:1234    0.0.0.0:*    LISTEN    22222/nginx: master
What is the problem?
I cannot access GitLab from any other machine on the 192.168.100.0/24 network.
From another VM on the same network (192.168.100.2), I can ping 192.168.100.1 and open an SSH connection to it, but if I run:
curl 192.168.100.1:1234
the result is "Time out".
Thanks,
Vincent
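One likely culprit worth checking (the question itself does not confirm it): CentOS enables firewalld by default, and it silently drops inbound connections on non-standard ports such as 1234 while still permitting ICMP and SSH, which matches the symptoms exactly (ping and SSH work, curl times out). A sketch of the check and fix:

```shell
# See whether firewalld is running and what it currently allows
sudo firewall-cmd --state
sudo firewall-cmd --list-all
# If 1234/tcp is not listed, open it permanently and reload the rules
sudo firewall-cmd --permanent --add-port=1234/tcp
sudo firewall-cmd --reload
```

After the reload, retry `curl 192.168.100.1:1234` from the other VM.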

DevStack instances can't be reached outside devstack node

Following the official documentation, I'm trying to deploy DevStack on an Ubuntu 18.04 Server OS in a virtual machine. The devstack node has only one network card (ens160), connected to the 10.20.30.0/24 network. I need my instances to be publicly accessible on this network (from 10.20.30.240 to 10.20.30.250). So, again following the official floating-IP documentation, I put together this local.conf file:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
PUBLIC_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.40/24
PUBLIC_NETWORK_GATEWAY=10.20.30.1
Q_FLOATING_ALLOCATION_POOL=start=10.20.30.240,end=10.20.30.250
This results in a br-ex bridge with the primary IP address 10.20.30.40 and a secondary IP address 10.20.30.1. (The gateway already exists on the network; isn't the PUBLIC_NETWORK_GATEWAY parameter supposed to refer to the real gateway on the network?)
Now, after a successful deployment, after disabling ufw (according to this), creating a cirros instance with a proper security group for ping and SSH, and attaching a floating IP, I can only reach my instance from the devstack node itself, not from the rest of the network! Also, from within the cirros instance, I cannot reach the outside world (even though I can reach it from the devstack node).
Afterwards, watching this video, I modified the local.conf file like this:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
FLAT_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.240/28
After a successful deployment and instance setup, I can still only reach my instance from the devstack node, not from outside! The good news is that now I can reach the outside world from within the cirros instance.
Any help would be appreciated!
Update
On the second configuration, while watching packets in tcpdump and pinging the instance's floating IP, I observed that the who-has ARP broadcast for the floating IP reaches the devstack node from the network router; however, no is-at reply is generated, so the ICMP packets are never routed to the devstack node and the instance.
So, with some tricks, I crafted the ARP response manually and everything worked afterwards; but that certainly isn't a solution. I imagine devstack should work out of the box without any tweaking, so this is probably down to a misconfiguration of devstack on my side.
After 5 days of tests, research and reading, I found this: Openstack VM is not accessible on LAN
Run the following commands as root on the devstack node:
# Reply to ARP requests for the floating IPs on behalf of the instances
echo 1 > /proc/sys/net/ipv4/conf/ens160/proxy_arp
# NAT instance traffic out through the physical interface
iptables -t nat -A POSTROUTING -o ens160 -j MASQUERADE
That'll do the trick!
Cheers!
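One caveat about the fix above: both settings are lost on reboot. A sketch of making them persistent (file names and the iptables-persistent package are the usual Ubuntu conventions; adjust for your distribution):

```shell
# Persist the proxy_arp setting via a sysctl drop-in
echo "net.ipv4.conf.ens160.proxy_arp = 1" | sudo tee /etc/sysctl.d/99-proxy-arp.conf
sudo sysctl --system
# Save the current iptables rules so they are restored at boot
sudo apt install iptables-persistent
sudo netfilter-persistent save
```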

RDO packstack : losing IP connectivity during installation

I'm trying to install OpenStack Mitaka via RDO Packstack, following this tutorial. It aligns completely with the official documentation.
I made sure that I have internet connectivity and that my hostname resolves (by putting it in the /etc/hosts file). When I install OpenStack via packstack --allinone, I see the Puppet scripts executing, but after a while it hangs.
When I then try to ping my CentOS machine, it fails. I have no clue why, as I verified that ping worked before I started the install; it must break during the Packstack installation process.
I have now tried 4 times, reinstalling CentOS and Packstack, and the behaviour is consistent. I'm running on VirtualBox and my network is in bridged mode.
Any ideas?
I found out that Packstack changed my IP address during the installation. I'm not sure why or how, but it was different at some point in time. So the key is to set a static IP address in the /etc/sysconfig/network-scripts/ifcfg-enp0s3 file, and to ensure your hostname resolves (by setting it in the /etc/hosts file).
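A sketch of the static configuration described above (the addresses are examples for a typical bridged LAN; substitute your own):

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp0s3
DEVICE=enp0s3
BOOTPROTO=static      # was "dhcp"; static so the address cannot change mid-install
ONBOOT=yes
IPADDR=192.168.1.50   # example address on the bridged LAN
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
```

Restart networking (or reboot) after editing, verify the address with `ip addr show enp0s3`, and only then run `packstack --allinone`.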

OpenStack: how to verify if you are using kvm or just qemu

I successfully installed OpenStack on real hardware with virtualization enabled, then configured it to use KVM. However, when I run the command "nova hypervisor-show node1" on the compute node, it shows me this:
hypervisor_type: qemu
Should it print kvm instead of qemu? And is there any way to make sure I am using KVM, not plain QEMU?
Please note that I used Fuel to deploy the OpenStack environment.
This is a bug in the nova client. Because KVM runs within QEMU as far as OpenStack is concerned, the hypervisor type shows up as qemu whether KVM acceleration is in use or not.
ref: https://bugs.launchpad.net/nova/+bug/1195361
You can do a ps aux on the node and see if you see a kvm process running.
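Expanding on that, a few standard Linux checks you can run on the compute node (nothing Fuel-specific) to confirm KVM acceleration is actually in use:

```shell
# The CPU must expose virtualization extensions (vmx for Intel, svm for AMD)
egrep -c '(vmx|svm)' /proc/cpuinfo
# The kvm kernel modules must be loaded
lsmod | grep kvm
# Instances launched with KVM acceleration run qemu with -enable-kvm or accel=kvm
ps aux | grep -E 'qemu.*(enable-kvm|accel[ =]kvm)' | grep -v grep
```

If the cpuinfo count is zero or the kvm modules are missing, nova is falling back to unaccelerated QEMU regardless of its configuration.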