Error configuring DevStack compute nodes: Service n-net is not running - OpenStack

While installing DevStack on a compute node in a multi-node DevStack lab environment, I encountered the error: Service n-net is not running.
The localrc section of the local.conf file is:
HOST_IP=192.168.42.12 # change this per compute node
FLAT_INTERFACE=eth0
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=labstack
DATABASE_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.42.11
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
Please help me resolve this error.
P.S.: I must use nova-network (n-net), not Neutron, for interaction between the controller and the compute nodes.

For the Ocata release I found a solution (2-node setup). An important part is the placement-api service, required since the Newton release (14.0.0), so first of all enable it on all your nodes:
local.conf:
enable_service placement-api
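In local.conf this line belongs in the [[local|localrc]] section (with an old-style bare localrc file, the line on its own is enough), for example:
[[local|localrc]]
# ... existing settings ...
enable_service placement-api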
First run ./stack.sh on your controller node; after that installation completes, run it on the other nodes.
There you will also see the error Service n-net is not running...
Now edit /etc/nova/nova.conf on the compute node, because the [database] and [api_database] sections will be missing:
[database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova
[api_database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova_api
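After saving these sections, restart the Nova services on the node so they pick up the new connection strings. A minimal sketch, assuming the devstack@<service> systemd unit names that DevStack uses since Ocata (adjust to the services enabled on your node):
sudo systemctl restart devstack@n-cpu devstack@n-api-meta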
After adding these, you can check whether it works with the following command:
stack@jerico-02:/devstack$ nova-manage --debug host list
host       zone
0.0.0.0    internal
jerico-03  internal
jerico-02  nova
The new compute node (hypervisor) also shows up in the dashboard!
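If the hypervisor does not show up, note that Nova uses cells v2 as of Ocata, so new compute hosts have to be discovered from the controller; running this on the controller node usually helps:
nova-manage cell_v2 discover_hosts --verbose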
Hope it helps!
(Tested on Ubuntu Server 16.04 LTS with devstack and OpenStack Ocata)

Related

mpi operator tensorflow benchmark example not starting

I'm trying to run this MPIJob example, https://github.com/kubeflow/mpi-operator/blob/master/examples/v2beta1/tensorflow-benchmarks/tensorflow-benchmarks.yaml, by following the steps in this readme. I deployed the configuration to a local k3s cluster, but the launcher pod is failing with the error: ssh: Could not resolve hostname tensorflow-benchmarks-worker-0.tensorflow-benchmarks-worker: Name or service not known
How can I resolve this problem?

OpenStack-Devstack: Can't create instances using KVM on host

I have a Dockerized all-in-one DevStack installation on Ubuntu 20.04. My goal is to connect to the host's KVM and create instances there. Nova was configured as follows for this purpose.
#/etc/nova/nova.conf
#/etc/nova/nova-cpu.conf
[libvirt]
connection_uri = qemu+ssh://root@172.10.1.1/system
When I try to build the instance, I get the following error.
Build of instance cdd6f8b4-6dcf-4a43-b96a-fb6166b20235 aborted: Failed to allocate the network(s), not rescheduling.
The ovs-vsctl commands cause the error. What is the problem? Does this need to be done differently?

Openstack Failed to Create Instance ovs-vsctl terminating with signal 14

Please help.
I am following the microstack tutorial and I am stuck at creating instances here.
This is what I get in the commandline :
openstack --insecure server create --flavor myflavor --image 20.04 --network mynetwork --key-name mykeypair --min 2 --max 2 myinstance
However, I am getting the following errors, as shown in the web UI:
Horizon Error
After digging in further by running sudo systemctl status snap.microstack.* --no-pager -l, I found that snap.microstack.libvirtd.service terminated with signal 14, as shown below:
snap.microstack.libvirtd.service error
Any idea how to solve this?
I found the issue: I had to create two groups of virtual drives, where the host uses /dev/sda and ceph-osd uses /dev/sdb.

openstack octavia failed to build compute instance

I installed and configured Octavia for OpenStack load balancing, but when I try to create a new load balancer using openstack loadbalancer create --name lb1 --vip-subnet-id subnet-pub, the Octavia worker log says: ERROR octavia.controller.worker.v1.controller_worker octavia.common.exceptions.ComputeBuildException: Failed to build compute instance due to: Failed to retrieve image with amphora tag.
Why? (I use Ubuntu.)
Another question: I installed Octavia on the controller node. Must anything be installed on the compute node(s)?
I had a similar problem, and adding --project service when uploading the image solved it:
$ openstack image create amphora-x64-haproxy.qcow2 --container-format bare --disk-format qcow2 --private --tag amphora --file amphora-x64-haproxy.qcow2 --property hw_architecture='x86_64' --property hw_rng_model=virtio --project service
About the second question: nothing needs to be installed on the compute nodes; only network access to lb-mgmt-net from the controllers is required.
This link helped me.
Set the image's tag to "amphora":
openstack image set --tag "amphora" image_name
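To verify that the tag actually landed on the image, you can filter the image list by tag (a quick check, assuming your credentials can see the image):
openstack image list --tag amphora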

Kaa node service fails to start mongodb and zookeeper

We are trying to set up a single-node Kaa server (version 0.10.0) on an Ubuntu 16.04 machine.
Followed the documentation given here
We were unable to connect to the admin UI after starting the kaa-node service.
On investigating further, we could see that the MongoDB and Zookeeper services were not started, so we started them manually. After that we were able to connect to the Kaa admin UI. Do we need any additional steps to get these services running on kaa-node start?
I set up Kaa with the guide on my Ubuntu 16.04.1 LTS VM, and Zookeeper was not running by default on my server either, so I had to install the daemon (which also starts Zookeeper on boot):
sudo apt-get install zookeeperd
Check if zookeeper is running:
netstat -ntlp | grep 2181
This should result in output like the following (illustrative; the PID and process name will differ):
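tcp6       0      0 :::2181                 :::*                    LISTEN      1234/java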
With MongoDB I had the problem that there was not enough space available for the journal files. I fixed this by increasing the available disk space and setting smallfiles=true in /etc/mongod.conf.
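For reference, the smallfiles option is spelled differently depending on which MongoDB package is installed; a sketch of both forms (only one applies to a given install):
# INI-style config (/etc/mongodb.conf, Ubuntu-repo mongodb package):
smallfiles = true
# YAML-style config (/etc/mongod.conf, mongodb-org 3.x packages):
storage:
  mmapv1:
    smallFiles: true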
Probably you have some trouble with the service configurations. Check whether auto-startup is enabled for MongoDB / Zookeeper with the following command:
$ systemctl is-enabled ${service-name}
If the output is:
disabled
then auto-startup is disabled for the specified service, and you should run the following to enable it:
$ systemctl enable ${service-name}
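For example (the unit names are an assumption and vary by package; the Ubuntu-repo packages use mongodb and zookeeper, while the mongodb-org packages use mongod):
$ sudo systemctl enable mongodb
$ sudo systemctl enable zookeeper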
