Live migration failure: not all arguments were converted to strings - openstack

Live migration to another compute node fails. I receive an error in the nova-compute log of the host compute node:
2020-10-21 15:15:52.496 614454 DEBUG nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] About to invoke the migrate API _live_migration_operation /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:7808
2020-10-21 15:15:52.497 614454 ERROR nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Live Migration failure: not all arguments converted during string formatting
2020-10-21 15:15:52.498 614454 DEBUG nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Migration operation thread notification thread_finished /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:8149
2020-10-21 15:15:52.983 614454 DEBUG nova.virt.libvirt.migration [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] VM running on src, migration failed _log /usr/lib/python3/dist-packages/nova/virt/libvirt/migration.py:361
2020-10-21 15:15:52.984 614454 DEBUG nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Fixed incorrect job type to be 4 _live_migration_monitor /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:7978
2020-10-21 15:15:52.985 614454 ERROR nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Migration operation has aborted
Please help me with a solution to this issue.
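For context, "not all arguments converted during string formatting" is the generic Python TypeError raised when a %-style format string receives more arguments than it has placeholders; in this log it most likely means a formatting bug is masking the real libvirt error rather than describing the migration failure itself. A minimal illustration of the underlying Python behaviour (illustrative only, not the Nova code):
# A %-format string given two arguments but only one placeholder:
python3 -c "print('instance %s' % ('bc41148a', 'extra'))"
# TypeError: not all arguments converted during string formatting
Checking the libvirtd logs on the source and destination hosts may reveal the actual migration error that this message is hiding.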

Related

Openstack fails to launch new instances with status ERROR

I was following the tutorials on the OpenStack (Stein) docs website to launch an instance on my provider network. I am using networking option 2. I run the following command to create the instance, replacing PROVIDER_NET_ID with my provider network ID.
openstack server create --flavor m1.nano --image cirros \
--nic net-id=PROVIDER_NET_ID --security-group default \
--key-name mykey provider-instance1
I run openstack server list to check the status of my instance. It shows a status of ERROR.
I checked the /var/log/nova/nova-compute.log on my compute node (I only have one compute node) and came across the following error.
ERROR nova.compute.manager [req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6
995cce48094442b4b29f3fb665219408 429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Instance failed to spawn: PermissionError:
[Errno 13] Permission denied: '/var/lib/nova/instances/19a6c859-5dde-4ed3-9010-4d93ebe9a942'
However, the log entries preceding this error suggest that everything was fine up to that point:
2022-05-23 12:40:21.011 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942]
Attempting claim on node compute1: memory 64 MB, disk 1 GB, vcpus 1 CPU
2022-05-23 12:40:21.019 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total memory: 3943 MB, used: 512.00 MB
2022-05-23 12:40:21.020 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] memory limit not specified, defaulting to unlimited
2022-05-23 12:40:21.020 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total disk: 28 GB, used: 0.00 GB
2022-05-23 12:40:21.021 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] disk limit not specified, defaulting to unlimited
2022-05-23 12:40:21.024 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total vcpu: 4 VCPU, used: 0.00 VCPU
2022-05-23 12:40:21.025 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] vcpu limit not specified, defaulting to unlimited
2022-05-23 12:40:21.028 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Claim successful on node compute1
Does anyone have any ideas on what I may be doing wrong?
I'll be thankful.
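The Errno 13 above usually points at ownership or permissions on /var/lib/nova/instances rather than at the boot request itself. A minimal diagnostic sketch, assuming nova-compute runs as the "nova" user and using the path from the log (adjust both to your deployment):
# Check ownership and whether the nova user can write to the instances directory
ls -ld /var/lib/nova /var/lib/nova/instances
sudo -u nova touch /var/lib/nova/instances/.write_test && echo writable
# Common fix when ownership is wrong (assumes a local directory, not an NFS export with root_squash):
sudo chown -R nova:nova /var/lib/nova/instances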

live migration in openstack-ansible [closed]

When I try to migrate an instance from one compute host to another, I get the same error every time. What is the reason for this error?
compute2
2019-09-17 10:29:27.009 2371 ERROR nova.virt.libvirt.driver [-] [instance: ab64119d-d075-4c99-8687-788695711b32] Live Migration failure: Unsafe migration: Migration without shared storage is unsafe: libvirtError: Unsafe migration: Migration without shared storage is unsafe
2019-09-17 10:29:27.506 2371 ERROR nova.virt.libvirt.driver [-] [instance: ab64119d-d075-4c99-8687-788695711b32] Migration operation has aborted
2019-09-17 10:29:27.533 2371 INFO nova.compute.manager [-] [instance: ab64119d-d075-4c99-8687-788695711b32] Swapping old allocation on 0002f629-1480-4c71-b74a-eb9ca16f87d1 held by migration ae674faa-49f0-4139-8eb9-966d842d8370 for instance
compute1
2019-09-17 10:29:25.626 2261 INFO nova.virt.libvirt.imagecache [req-7455f1fa-1821-4760-a38c-80ed4a7aa95a - - - - -] image e0d82262-e5dd-46f3-8747-8bb451a11f3d at (/var/lib/nova/instances/_base/993dda6ef2a8133a22deb14a205ae0d791dbd070): checking
2019-09-17 10:29:25.627 2261 INFO os_vif [req-7dfec421-606d-4923-a8f8-b4796ffdc155 b2223e6724d441dc9ceb01e2d93c42e2 a4d7dd39e119424781ff6cc62874381e - default default] Successfully plugged vif VIFBridge(active=True,address=fa:16:3e:78:d1:a2,bridge_name='brqb8d9540b-30',has_traffic_filtering=True,id=65ef51ba-8e72-44a2-9f45-ac3aa0ad2225,network=Network(b8d9540b-307c-490d-a99b-7ce565065a11),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap65ef51ba-8e')
2019-09-17 10:29:25.665 2261 INFO nova.virt.libvirt.imagecache [req-7455f1fa-1821-4760-a38c-80ed4a7aa95a - - - - -] Active base files: /var/lib/nova/instances/_base/993dda6ef2a8133a22deb14a205ae0d791dbd070
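The libvirt message on compute2 means the instance's disks live on node-local storage, so a plain live migration is refused; the usual options are putting /var/lib/nova/instances on shared storage or requesting a block migration. A hedged sketch of the block-migration route (the exact flags vary between client releases, and the target host is a placeholder):
# Legacy nova client: copy the local disks to the target during the migration
nova live-migration --block-migrate ab64119d-d075-4c99-8687-788695711b32 <target-host>
# Newer unified client (flag names differ across openstackclient versions):
# openstack server migrate --live-migration --block-migration <server>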

Nova instance throws an error on launch - "failed to perform requested operation on instance"

Nova instance throws an error on launch: "failed to perform requested operation on instance….the server has either erred or is incapable of performing the requested operation (HTTP 500)". (Screenshot: instance creation error.)
Surprisingly, it works well when the volume is attached separately after the instance launches, i.e. when "Create New Volume" is set to "No" during instance creation.
We restarted the cinder service, but it did not solve the issue.
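For reference, the dashboard workaround described above (boot from the image without creating a new volume, then attach the volume afterwards) corresponds roughly to this CLI sequence; the flavor, image, network, and volume names are placeholders:
# Boot directly from the image, without asking Nova/Cinder for a new boot volume
openstack server create --flavor m1.small --image <image-id> --network <net-id> test-instance
# Attach the existing Cinder volume once the instance is ACTIVE
openstack server add volume test-instance <volume-id>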
From the API logs we figured out that there is an HTTP 500 error during API interactions at the service endpoints (Nova & Cinder). The logs are pasted below.
Can someone help resolve this issue? Thanks in advance.
Openstack - Details
It is a 3-node system: one controller + two compute nodes.
The controller runs CentOS 7 and the OpenStack Ocata release.
Cinder version 1.11.0 and Nova version 7.1.2.
A list of the Nova and Cinder RPMs is included below.
==> api.log <==
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] Caught error: <class 'oslo_messaging.exceptions.MessagingTimeout'> Timed out waiting for a reply to message ID bf2f80590a754b59a720405cd0bc1ffb
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault Traceback (most recent call last):
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/api/middleware/fault.py", line 79, in __call__
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault return req.get_response(self.application)
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/request.py", line 1299, in send
2019-01-30 04:16:28.793 275098 INFO cinder.api.middleware.fault [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] http://10.110.77.2:8776/v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action returned with HTTP 500
2019-01-30 04:16:28.794 275098 INFO eventlet.wsgi.server [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] 10.110.77.4 "POST /v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action HTTP/1.1" status: 500 len: 425 time: 60.0791931
2019-01-30 04:16:28.813 275098 INFO cinder.api.openstack.wsgi [req-53d149ac-6e60-4ddd-9ace-216d12122790 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] POST http://10.110.77.2:8776/v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action
2019-01-30 04:16:28.852 275098 INFO cinder.volume.api [req-53d149ac-6e60-4ddd-9ace-216d12122790 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] Volume info retrieved successfully.
Nova logs:
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Instance failed block device setup
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Traceback (most recent call last):
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1588, in _prep_block_device
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] wait_func=self._await_block_device_map_created)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 512, in attach_block_devices
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] _log_and_attach(device)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 509, in _log_and_attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] bdm.attach(*attach_args, **attach_kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 408, in attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] do_check_attach=do_check_attach)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 48, in wrapped
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] ret_val = method(obj, context, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 258, in attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] connector)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 168, in wrapper
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] res = method(self, ctx, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 190, in wrapper
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] res = method(self, ctx, volume_id, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 391, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] exc.code if hasattr(exc, 'code') else None)})
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] self.force_reraise()
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] six.reraise(self.type_, self.value, self.tb)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 365, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] context).volumes.initialize_connection(volume_id, connector)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 404, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] {'connector': connector})
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 334, in _action
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] resp, body = self.api.client.post(url, body=body)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 167, in post
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] return self._cs_request(url, 'POST', **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 155, in _cs_request
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] return self.request(url, method, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 144, in request
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] raise exceptions.from_response(resp, body)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-dcd4a981-8b22-4c3d-9ba7-25fafe80b8f5)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]
2019-01-30 03:58:04.811 5642 DEBUG nova.compute.claims [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Aborting claim: [Claim: 4096 MB memory, 40 GB disk] abort /usr/lib/python2.7/site-packages/nova/compute/claims.py:124
2019-01-30 03:58:04.812 5642 DEBUG oslo_concurrency.lockutils [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2019-01-30 03:58:04.844 5642 INFO nova.scheduler.client.report [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] Deleted allocation for instance aba62cf8-0880-4bf7-8201-3365861c8079
Output of some diagnostic commands from OpenStack:
[root@controller ~(keystone_admin)]# cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup | controller | nova | enabled | up | 2019-01-31T10:27:20.000000 | - |
| cinder-scheduler | controller | nova | enabled | up | 2019-01-31T10:27:13.000000 | - |
| cinder-volume | controller@lvm | nova | enabled | up | 2019-01-31T10:27:12.000000 | - |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
[root@controller yum.repos.d]# rpm -qa | grep cinder
openstack-cinder-10.0.5-1.el7.noarch
puppet-cinder-10.4.0-1.el7.noarch
python-cinder-10.0.5-1.el7.noarch
python2-cinderclient-1.11.0-1.el7.noarch
[root@controller yum.repos.d]# rpm -qa | grep nova
openstack-nova-conductor-15.1.0-1.el7.noarch
openstack-nova-novncproxy-15.1.0-1.el7.noarch
openstack-nova-compute-15.1.0-1.el7.noarch
openstack-nova-cert-15.1.0-1.el7.noarch
openstack-nova-api-15.1.0-1.el7.noarch
openstack-nova-console-15.1.0-1.el7.noarch
openstack-nova-common-15.1.0-1.el7.noarch
openstack-nova-placement-api-15.1.0-1.el7.noarch
python-nova-15.1.0-1.el7.noarch
python2-novaclient-7.1.2-1.el7.noarch
openstack-nova-scheduler-15.1.0-1.el7.noarch
puppet-nova-10.5.0-1.el7.noarch
[root@controller yum.repos.d]#
[root@controller yum.repos.d]# rpm -qa | grep ocata
centos-release-openstack-ocata-1-2.el7.noarch
[root@controller yum.repos.d]# uname -a
Linux controller 3.10.0-862.2.3.el7.x86_64 #1 SMP Wed May 9 18:05:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@controller yum.repos.d]#
centos-release-openstack-ocata-1-2.el7.noarch
[root@controller yum.repos.d]# cinder --version
1.11.0
[root@controller yum.repos.d]# nova --version
7.1.2
[root@controller yum.repos.d]#
I got the fix for this issue.
I observed that there were a few projects in OpenStack where volume deletion was stuck in an error state with "Error deleting". I changed the volume state explicitly (in the Cinder DB) using "cinder reset-state --state available volume-id".
This allowed me to delete the volume successfully. I restarted the cinder service afterwards, and everything started working as usual.
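For anyone hitting the same stuck volumes, a minimal sketch of that recovery sequence (the volume ID is a placeholder and the systemd unit names are assumptions for an RDO/CentOS install like the one above):
# Force the stuck volume back to a deletable state, then remove it
cinder reset-state --state available <volume-id>
cinder delete <volume-id>
# Restart the cinder services afterwards
sudo systemctl restart openstack-cinder-volume openstack-cinder-scheduler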

Can't make openstack image using packer

I'm trying to build a CentOS image on OpenStack with Packer. For some reason, the build terminates in the middle of the script and I can't figure out the problem.
Additionally, I can't find any logs in Glance.
Here is my packer script and error log.
packer script
{
  "builders": [
    {
      "availability_zone": "nova",
      "domain_id": "xxxx",
      "flavor": "m1.tiny",
      "identity_endpoint": "http://xxx:5000/v3",
      "image_name": "centos",
      "image_visibility": "private",
      "image_members": "myname",
      "networks": "xxx-xxx-xxxx",
      "password": "mypassword",
      "region": "RegionOne",
      "source_image": "17987fc7-e5af-487f-ae74-754ade318824",
      "ssh_keypair_name": "mykeypair",
      "ssh_private_key_file": "/root/.ssh/id_rsa",
      "ssh_username": "mysshusername",
      "tenant_name": "admin",
      "type": "openstack",
      "username": "myusername"
    }
  ],
  "provisioners": [
    {
      "script": "setup-centos.sh",
      "type": "shell"
    }
  ]
}
Error Log
...
2018/07/27 13:01:31 packer: 2018/07/27 13:01:31 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:33 packer: 2018/07/27 13:01:33 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:35 packer: 2018/07/27 13:01:35 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:37 packer: 2018/07/27 13:01:37 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:39 packer: 2018/07/27 13:01:39 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:41 packer: 2018/07/27 13:01:41 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:43 ui error: ==> openstack: Error waiting for image: Resource not found
==> openstack: Error waiting for image: Resource not found
2018/07/27 13:01:43 ui: ==> openstack: Terminating the source server: 1034619b-4dc9-45d1-b160-20290e0c4c08 ...
==> openstack: Terminating the source server: 1034619b-4dc9-45d1-b160-20290e0c4c08 ...
2018/07/27 13:01:43 packer: 2018/07/27 13:01:43 Waiting for state to become: [DELETED]
2018/07/27 13:01:44 packer: 2018/07/27 13:01:44 Waiting for state to become: [DELETED] currently SHUTOFF (0%)
2018/07/27 13:01:46 packer: 2018/07/27 13:01:46 [INFO] 404 on ServerStateRefresh, returning DELETED
2018/07/27 13:01:46 [INFO] (telemetry) ending openstack
2018/07/27 13:01:46 [INFO] (telemetry) found error: Error waiting for image: Resource not found
2018/07/27 13:01:46 ui error: Build 'openstack' errored: Error waiting for image: Resource not found
2018/07/27 13:01:46 Builds completed. Waiting on interrupt barrier...
2018/07/27 13:01:46 machine readable: error-count []string{"1"}
2018/07/27 13:01:46 ui error:
==> Some builds didn't complete successfully and had errors:
2018/07/27 13:01:46 machine readable: openstack,error []string{"Error waiting for image: Resource not found"}
2018/07/27 13:01:46 ui error: --> openstack: Error waiting for image: Resource not found
2018/07/27 13:01:46 ui:
==> Builds finished but no artifacts were created.
2018/07/27 13:01:46 [INFO] (telemetry) Finalizing.
Build 'openstack' errored: Error waiting for image: Resource not found
==> Some builds didn't complete successfully and had errors:
--> openstack: Error waiting for image: Resource not found
==> Builds finished but no artifacts were created.
2018/07/27 13:01:47 waiting for all plugin processes to complete...
2018/07/27 13:01:47 /root/pack/packer: plugin process exited
2018/07/27 13:01:47 /root/pack/packer: plugin process exited
Thanks in advance.
I found the error logs in the Glance API.
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1642, in snapshot
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver purge_props=False)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/api.py", line 132, in update
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver purge_props=purge_props)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 733, in update
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver _reraise_translated_image_exception(image_id)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 1050, in _reraise_translated_image_exception
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver six.reraise(type(new_exc), new_exc, exc_trace)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 731, in update
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver image = self._update_v2(context, sent_service_image_meta, data)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 745, in _update_v2
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver image = self._add_location(context, image_id, location)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 630, in _add_location
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver location, {})
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 168, in call
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver result = getattr(controller, method)(*args, **kwargs)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", line 340, in add_location
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver response = self._send_image_update_request(image_id, add_patch)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", line 535, in inner
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver return RequestIdProxy(wrapped(*args, **kwargs))
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", line 324, in _send_image_update_request
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver data=json.dumps(patch_body))
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 294, in patch
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver return self._request('PATCH', url, **kwargs)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 277, in _request
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver resp, body_iter = self._handle_response(resp)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 107, in _handle_response
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver raise exc.from_response(resp, resp.content)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver ImageNotAuthorized: Not authorized for image 9eb18ad3-ba29-4240-a7eb-5c8e87ef40b5.
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver
This error is related to a Glance API setting, so I changed glance-api.conf:
[DEFAULT]
...
show_multiple_locations = True
After that, everything works well!
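One note: a change like this only takes effect once the Glance API service has been restarted. A minimal sketch, with the unit name assumed for a systemd-based CentOS/RDO install (containerized deployments restart the glance_api container instead):
# Restart Glance API so the edited glance-api.conf is picked up
sudo systemctl restart openstack-glance-api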

Cloudify manager bootstrapping - rest service failed

I followed the steps in http://docs.getcloudify.org/4.1.0/installation/bootstrapping/#option-2-bootstrapping-a-cloudify-manager to bootstrap the Cloudify manager using option 2, and I am getting the following error repeatedly:
Workflow failed: Task failed 'fabric_plugin.tasks.run_script' -> restservice
error: http://127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>
The command is able to install and verify a lot of things like RabbitMQ, PostgreSQL, etc., but it always fails at the REST service. Creation and configuration of the REST service succeed, but verification fails. It looks like the service never starts.
2017-08-22 04:23:19.700 CFY <manager> [rest_service_cyd4of.start] Task started 'fabric_plugin.tasks.run_script'
2017-08-22 04:23:20.506 LOG <manager> [rest_service_cyd4of.start] INFO: Starting Cloudify REST Service...
2017-08-22 04:23:21.011 LOG <manager> [rest_service_cyd4of.start] INFO: Verifying Rest service is running...
2017-08-22 04:23:21.403 LOG <manager> [rest_service_cyd4of.start] INFO: Verifying Rest service is working as expected...
2017-08-22 04:23:21.575 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 3 seconds...
2017-08-22 04:23:24.691 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 6 seconds...
2017-08-22 04:23:30.815 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 12 seconds...
[10.0.2.15] out: restservice error: http://127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>
[10.0.2.15] out: Traceback (most recent call last):
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3", line 71, in <module>
[10.0.2.15] out: verify_restservice(restservice_url)
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3", line 34, in verify_restservice
[10.0.2.15] out: utils.verify_service_http(SERVICE_NAME, url, headers=headers)
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/utils.py", line 1734, in verify_service_http
[10.0.2.15] out: ctx.abort_operation('{0} error: {1}: {2}'.format(service_name, url, e))
[10.0.2.15] out: File "/tmp/cloudify-ctx/cloudify.py", line 233, in abort_operation
[10.0.2.15] out: subprocess.check_call(cmd)
[10.0.2.15] out: File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[10.0.2.15] out: raise CalledProcessError(retcode, cmd)
[10.0.2.15] out: subprocess.CalledProcessError: Command '['ctx', 'abort_operation', 'restservice error: http://127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>']' returned non-zero exit status 1
[10.0.2.15] out:
Fatal error: run() received nonzero return code 1 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmp4BXh2m-start.py-VHYZP1K3 && /tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmp4BXh2m-start.py-VHYZP1K3 && /tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3"
I am using CentOS 7.
Any suggestions to address or debug the issue will be appreciated.
Can you please try the same bootstrap option using these instructions and let me know if it works for you?
Do you have the python-virtualenv package installed? If you do, try uninstalling it.
The version of virtualenv in CentOS repositories is too old and causes problems with the REST service installation. Cloudify will install its own version of virtualenv while bootstrapping, but only if one is not already present in the system.
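A quick way to check for and remove the distro package on CentOS 7, as suggested above (a minimal sketch; re-run the bootstrap afterwards):
# Is the distro-packaged (too old) virtualenv installed?
rpm -q python-virtualenv
# If so, remove it so the Cloudify bootstrap installs its own copy
sudo yum remove -y python-virtualenv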
