OpenStack failed to create instance: ovs-vsctl terminating with signal 14

Please help.
I am following the MicroStack tutorial and I am stuck at the instance-creation step.
This is the command I run:
openstack --insecure server create --flavor myflavor --image 20.04 --network mynetwork --key-name mykeypair --min 2 --max 2 myinstance
However, I get the following errors, as shown in the web UI:
[screenshot: Horizon error]
After digging in more by running sudo systemctl status snap.microstack.* --no-pager -l, I found that snap.microstack.libvirtd.service had terminated with signal 14, as shown below:
[screenshot: snap.microstack.libvirtd.service error]
Any idea how to solve this?

I found the issue: I had to create two virtual drives, so that the host uses /dev/sda and ceph-osd uses /dev/sdb.
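In case it helps anyone else: if the MicroStack host is itself a libvirt VM, the second drive can be created and attached roughly like this (a sketch, not my exact setup; the VM name microstack-host and the 50G size are placeholders):
qemu-img create -f qcow2 /var/lib/libvirt/images/osd-disk.qcow2 50G
virsh attach-disk microstack-host /var/lib/libvirt/images/osd-disk.qcow2 sdb --driver qemu --subdriver qcow2 --persistent
The attached disk then shows up inside the VM as /dev/sdb for ceph-osd to use.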

Related

OpenStack-Devstack: Can't create instances using KVM on host

I have a Dockerized all-in-one Devstack installation on Ubuntu 20.04. My goal is to connect to the host's KVM and create instances there. Nova was configured as follows for this purpose.
#/etc/nova/nova.conf
#/etc/nova/nova-cpu.conf
[libvirt]
connection_uri = qemu+ssh://root@172.10.1.1/system
When I try to build the instance, I get the following error.
Build of instance cdd6f8b4-6dcf-4a43-b96a-fb6166b20235 aborted: Failed to allocate the network(s), not rescheduling.
The ovs-vsctl commands cause the error. What is the problem? Does this need to be done differently?
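One quick sanity check (my suggestion, not something from the thread): verify that the libvirt URI actually works from the node running nova-compute before debugging Nova itself:
virsh -c qemu+ssh://root@172.10.1.1/system list --all
If that prompts for a password or times out, fix passwordless SSH access for root on the host first.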

Podman build command unable to pull image

I configured subuid and subgid after installing Podman on RHEL 7.
I created a simple Dockerfile that prints "Hello World" and tried to build the image.
My Dockerfile
FROM alpine
CMD ["echo", "Hello World"]
To test, I run the command below:
podman build -t imagename .
I receive the following error:
STEP 1: FROM alpine
Error: error creating build container: The following failures happened while trying to pull image specified by "alpine" based on search registries in /etc/containers/registries.conf:
* "localhost/alpine": Error initializing source docker://localhost/alpine:latest: error pinging docker registry localhost: Get https://localhost/v2/: dial tcp [::1]:443: connect: connection refused
* "registry.access.redhat.com/alpine": Error initializing source docker://registry.access.redhat.com/alpine:latest: error pinging docker registry registry.access.redhat.com: Get https://registry.access.redhat.com/v2/: read tcp 10.70.85.174:17758->23.54.147.129:443: read: connection reset by peer
* "registry.redhat.io/alpine": Error initializing source docker://registry.redhat.io/alpine:latest: error pinging docker registry registry.redhat.io: Get https://registry.redhat.io/v2/: read tcp 10.70.85.174:36028->104.79.150.216:443: read: connection reset by peer
* "docker.io/library/alpine": Error initializing source docker://alpine:latest: error pinging docker registry registry-1.docker.io: Get https://registry-1.docker.io/v2/: read tcp 10.70.85.174:53352->18.213.137.78:443: read: connection reset by peer
Am I missing any configuration?
Thanks
Do you still have the Docker daemon running and/or Docker installed?
First, stop the Docker daemon:
sudo systemctl stop docker
OR
sudo service docker stop
Then uninstall Docker (Ubuntu here, but whatever you need you can Google :D):
sudo apt-get remove docker docker-engine docker.io containerd runc
Try again.
If it still fails, try a reinstall of Podman:
sudo apt-get install --reinstall podman
Sources
https://www.cyberciti.biz/faq/debian-ubuntu-linux-reinstall-a-package-using-apt-get-command/
https://askubuntu.com/questions/935569/how-to-completely-uninstall-docker
https://intellipaat.com/community/43965/how-to-stop-docker
https://podman.io/getting-started/installation
I suggest that you first search for your image in the registries:
podman search alpine
You should get a list of available images. Choose the one you want (name, version, tag, etc.) and put that in the Dockerfile.
To be sure it is accessible, do the pull manually:
podman pull alpine:<tag>
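Alternatively, a fully qualified image name bypasses the search-registry lookup entirely (assuming docker.io is reachable from your network):
podman pull docker.io/library/alpine:latest
and in the Dockerfile use FROM docker.io/library/alpine:latest.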

ICP 2.1.0.1: Installation failed with error TASK [master: Waiting for MariaDB service to start]

I am installing ICP 2.1.0.1 and received an error at the task [master: Waiting for MariaDB service to start]: msg: The MariaDB component failed to start.
After this message, the installation completed with a failed status.
We are installing ICP with 3 masters, 3 proxies, and 2 workers. We have 1 VIP for the masters and 1 for the proxies.
I tried the installation multiple times, and every attempt failed with the same error.
In prior cases of this error, the correct DB admin password was not used, so check the DB user and password to resolve the issue.
Would you validate whether each master host was able to access port 3306 on the other hosts?
If you run with .. install -vv | tee -a install-log.txt, do you get additional details as well?
The error was solved by following the steps below.
Check whether kubelet is running:
Log in to your master node.
Run the following command to check kubelet status:
systemctl status kubelet
If kubelet is not running, run the following command to get the logs:
journalctl -u kubelet &> kubelet.log
We found this error in kubelet.log:
Error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
We found this troubleshooting guide at the first link below, and the solution in ICP issue 4651:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/troubleshoot/etcd_fails.html
https://github.ibm.com/IBMPrivateCloud/roadmap/issues/4651
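For reference, disabling swap usually looks like this (a generic sketch, not from the original answer; double-check /etc/fstab before editing it):
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
sudo systemctl restart kubelet
The sed line comments out the swap entries so swap stays off after a reboot.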

Error configuring Devstack compute nodes: Service n-net is not running

While installing Devstack on a compute node in a multi-node Devstack lab environment, I encountered the error: Service n-net is not running.
The localrc section of my local.conf file is:
HOST_IP=192.168.42.12 # change this per compute node
FLAT_INTERFACE=eth0
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=labstack
DATABASE_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.42.11
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
Please help me resolve this error.
P.S.: I must use nova-network, not Neutron, for interaction between the controller and the compute nodes.
For the Ocata release I found a solution (2-node setup). An important part is the placement-api, which is required since the Newton release (14.0.0), so first of all enable it on all your nodes:
local.conf:
enable_service placement-api
First run ./stack.sh on your controller node and after that installation run it on the other nodes.
At this point you will still see the error Service n-net is not running...
Now edit your nova.conf file in /etc/nova/nova.conf, because the [database] and [api_database] sections will be missing:
[database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova
[api_database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova_api
After adding these, you can check whether it works with the following command:
stack#jerico-02:/devstack$ nova-manage --debug host list
host zone
0.0.0.0 internal
jerico-03 internal
jerico-02 nova
The new compute node (hypervisor) also shows up in the dashboard!
Hope it helps!
(Tested on Ubuntu Server 16.04 LTS with Devstack and OpenStack Ocata)
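If the nova-manage check still shows nothing, it may be worth confirming the compute node can reach the controller's MySQL at all (my addition, not part of the original answer; the IP is the SERVICE_HOST from the sample localrc above):
mysql -h 192.168.42.11 -u root -p -e "SHOW DATABASES LIKE 'nova%'"
If that connection is refused, fix MySQL's bind-address or the firewall on the controller before touching nova.conf again.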

Bootstrap Cloudify 3.4 error occurred

I installed Cloudify 3.4 according to the Cloudify docs. When I installed the manager, I executed:
# cfy bootstrap --install-plugins -p openstack-manager-blueprint.yaml -i openstack-manager-blueprint-inputs.yaml
an error occurred:
[ERROR] Workflow failed: Task failed 'fabric_plugin.tasks.run_script' -> Timed out trying to connect to 192.168.17.15 (tried 5 times)
I have already built an external network 192.168.17.0/24 and I have already installed:
cloudify_docker_plugin-1.3.2-py27-none-linux_x86_64-Ubuntu-trusty.wgn
cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-centos-Core.wgn
cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-redhat-Maipo.wgn
cloudify_host_pool_plugin-1.4-py27-none-linux_x86_64-centos-Core.wgn
cloudify_openstack_plugin-1.4-py27-none-linux_x86_64-redhat-Maipo.wgn
So, how do I solve this error? Thank you to everyone who helps!
It seems that you can't connect to the manager.
Please make sure that you have an SSH connection from the CLI to the manager.
Since you are bootstrapping an OpenStack manager, make sure you have an external IP if you are outside of OpenStack, or that the CLI is on the same network if you are inside OpenStack.
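A quick way to test that connection from the CLI machine (the user name and key path are examples; use the values from your inputs file):
ssh -i ~/.ssh/manager-key.pem centos@192.168.17.15 echo ok
If SSH itself times out, the bootstrap will too, regardless of the blueprint.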
