I'm trying to build a CentOS image on OpenStack with Packer. For some reason, the build terminates partway through the script and I can't figure out the problem.
Additionally, I can't find any logs in Glance.
Here are my Packer template and the error log.
Packer template
{
  "builders": [
    {
      "availability_zone": "nova",
      "domain_id": "xxxx",
      "flavor": "m1.tiny",
      "identity_endpoint": "http://xxx:5000/v3",
      "image_name": "centos",
      "image_visibility": "private",
      "image_members": "myname",
      "networks": "xxx-xxx-xxxx",
      "password": "mypassword",
      "region": "RegionOne",
      "source_image": "17987fc7-e5af-487f-ae74-754ade318824",
      "ssh_keypair_name": "mykeypair",
      "ssh_private_key_file": "/root/.ssh/id_rsa",
      "ssh_username": "mysshusername",
      "tenant_name": "admin",
      "type": "openstack",
      "username": "myusername"
    }
  ],
  "provisioners": [
    {
      "script": "setup-centos.sh",
      "type": "shell"
    }
  ]
}
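As an aside, verbose logs like the ones below come from running the build with Packer's debug logging enabled (the template filename here is illustrative, not from the original post):

```shell
# PACKER_LOG=1 turns on Packer's internal logging; optionally redirect it
# to a file with PACKER_LOG_PATH. "template.json" is a placeholder name.
PACKER_LOG=1 packer build template.json
```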
Error Log
...
2018/07/27 13:01:31 packer: 2018/07/27 13:01:31 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:33 packer: 2018/07/27 13:01:33 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:35 packer: 2018/07/27 13:01:35 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:37 packer: 2018/07/27 13:01:37 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:39 packer: 2018/07/27 13:01:39 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:41 packer: 2018/07/27 13:01:41 Waiting for image creation status: SAVING (25%)
2018/07/27 13:01:43 ui error: ==> openstack: Error waiting for image: Resource not found
==> openstack: Error waiting for image: Resource not found
2018/07/27 13:01:43 ui: ==> openstack: Terminating the source server: 1034619b-4dc9-45d1-b160-20290e0c4c08 ...
==> openstack: Terminating the source server: 1034619b-4dc9-45d1-b160-20290e0c4c08 ...
2018/07/27 13:01:43 packer: 2018/07/27 13:01:43 Waiting for state to become: [DELETED]
2018/07/27 13:01:44 packer: 2018/07/27 13:01:44 Waiting for state to become: [DELETED] currently SHUTOFF (0%)
2018/07/27 13:01:46 packer: 2018/07/27 13:01:46 [INFO] 404 on ServerStateRefresh, returning DELETED
2018/07/27 13:01:46 [INFO] (telemetry) ending openstack
2018/07/27 13:01:46 [INFO] (telemetry) found error: Error waiting for image: Resource not found
2018/07/27 13:01:46 ui error: Build 'openstack' errored: Error waiting for image: Resource not found
2018/07/27 13:01:46 Builds completed. Waiting on interrupt barrier...
2018/07/27 13:01:46 machine readable: error-count []string{"1"}
2018/07/27 13:01:46 ui error:
==> Some builds didn't complete successfully and had errors:
2018/07/27 13:01:46 machine readable: openstack,error []string{"Error waiting for image: Resource not found"}
2018/07/27 13:01:46 ui error: --> openstack: Error waiting for image: Resource not found
2018/07/27 13:01:46 ui:
==> Builds finished but no artifacts were created.
2018/07/27 13:01:46 [INFO] (telemetry) Finalizing.
Build 'openstack' errored: Error waiting for image: Resource not found
==> Some builds didn't complete successfully and had errors:
--> openstack: Error waiting for image: Resource not found
==> Builds finished but no artifacts were created.
2018/07/27 13:01:47 waiting for all plugin processes to complete...
2018/07/27 13:01:47 /root/pack/packer: plugin process exited
2018/07/27 13:01:47 /root/pack/packer: plugin process exited
Thanks in advance.
I found error logs related to the Glance API:
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1642, in snapshot
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver purge_props=False)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/api.py", line 132, in update
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver purge_props=purge_props)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 733, in update
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver _reraise_translated_image_exception(image_id)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 1050, in _reraise_translated_image_exception
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver six.reraise(type(new_exc), new_exc, exc_trace)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 731, in update
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver image = self._update_v2(context, sent_service_image_meta, data)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 745, in _update_v2
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver image = self._add_location(context, image_id, location)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 630, in _add_location
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver location, {})
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 168, in call
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver result = getattr(controller, method)(*args, **kwargs)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", line 340, in add_location
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver response = self._send_image_update_request(image_id, add_patch)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", line 535, in inner
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver return RequestIdProxy(wrapped(*args, **kwargs))
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", line 324, in _send_image_update_request
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver data=json.dumps(patch_body))
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 294, in patch
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver return self._request('PATCH', url, **kwargs)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 277, in _request
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver resp, body_iter = self._handle_response(resp)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 107, in _handle_response
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver raise exc.from_response(resp, resp.content)
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver ImageNotAuthorized: Not authorized for image 9eb18ad3-ba29-4240-a7eb-5c8e87ef40b5.
2018-08-02 18:37:56.030 7 ERROR nova.virt.libvirt.driver
This error is related to a Glance API setting, so I changed glance-api.conf:
[DEFAULT]
...
show_multiple_locations = True
After that, everything worked!
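The config change above can be sketched as a small script. This is a sketch only: it demonstrates the edit on a temporary copy, and on a real controller you would edit /etc/glance/glance-api.conf and restart the Glance API service (the service name varies by distro and deployment):

```shell
# Flip show_multiple_locations to True under [DEFAULT].
# Demonstrated on a temp file; use /etc/glance/glance-api.conf for real.
conf=$(mktemp)
printf '[DEFAULT]\n#show_multiple_locations = False\n' > "$conf"
sed -i 's|^#\{0,1\}show_multiple_locations *=.*|show_multiple_locations = True|' "$conf"
grep '^show_multiple_locations' "$conf"
# then: systemctl restart openstack-glance-api (name varies by deployment)
```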
Related
I am working on a data-monitoring task where I am using the Great Expectations framework to monitor data quality; I am using Airflow, BigQuery, and Great Expectations together to achieve this.
I have set the param is_blocking: False for the expectation, but the job aborts with an exception and the downstream tasks cannot execute because of it. Is there a way for the notifications to be sent without stopping execution?
The detailed exception is as follows:
[2021-11-29 15:19:45,925] {taskinstance.py:1252} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=data-science
AIRFLOW_CTX_DAG_ID=abcd-data-ds-1
AIRFLOW_CTX_TASK_ID=ge-notify-_data_monitoring-expect_-5ff9677f
AIRFLOW_CTX_EXECUTION_DATE=2021-11-29T11:00:00+00:00
AIRFLOW_CTX_DAG_RUN_ID=scheduled__2021-11-29T11:00:00+00:00
[2021-11-29 15:19:45,926] {great_expectations_notification_operator.py:42} INFO - Retrieving key data-ds-v4__promo_roi_input_features_monitoring_expect_column_values_to_be_between47deadf091f092857156a30495953f3c_20211129T110000
[2021-11-29 15:19:45,986] {alerts.py:109} INFO - Sending slack notification
[2021-11-29 15:19:46,411] {great_expectations_notification_operator.py:73} ERROR - Validation failed in datawarehouse for abcd.xyz.is_outlier
[2021-11-29 15:19:46,430] {taskinstance.py:1463} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1165, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1283, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1308, in _execute_task
result = task_copy.execute(context=context)
File "/opt/airflow/src/datahub/operators/expectations/great_expectations_notification_operator.py", line 79, in execute
raise AirflowException(message)
airflow.exceptions.AirflowException: Validation failed in datawarehouse for abcd.xyz.is_outlier
[2021-11-29 15:19:46,432] {taskinstance.py:1506} INFO - Marking task as FAILED. dag_id=curated-data-ds-v4, task_id=ge-notify-data_monitoring-expect_-5ff9677f, execution_date=20211129T110000, start_date=20211129T151945, end_date=20211129T151946
[2021-11-29 15:19:46,505] {local_task_job.py:151} INFO - Task exited with return code 1
[2021-11-29 15:19:46,557] {alerts.py:109} INFO - Sending slack notification
[2021-11-29 15:19:47,564] {local_task_job.py:261} INFO - 0 downstream tasks scheduled from follow-on schedule check
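One framework-free way to get "notify but don't block" semantics (not from the original post; all names below are illustrative placeholders, not the actual great_expectations_notification_operator API) is to have the operator swallow the failure when is_blocking is False instead of raising AirflowException:

```python
# Hedged sketch: report a validation failure, but only raise (and thus
# fail the Airflow task) when is_blocking is True.
import logging


class ValidationFailure(Exception):
    """Stand-in for airflow.exceptions.AirflowException."""


def notify_and_maybe_raise(validation_ok: bool, is_blocking: bool) -> bool:
    """Return True when downstream execution may continue."""
    if not validation_ok:
        # Send the notification either way (here just logged).
        logging.error("Validation failed; sending notification")
        if is_blocking:
            raise ValidationFailure("Validation failed in datawarehouse")
    return True


# Non-blocking mode: the failure is reported but the task still succeeds.
print(notify_and_maybe_raise(validation_ok=False, is_blocking=False))  # True
```

Alternatively, in Airflow itself the downstream tasks can be given trigger_rule="all_done" so they run even when the notification task fails.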
Live migration to another compute node fails. I receive an error in the nova-compute log on the source compute node:
2020-10-21 15:15:52.496 614454 DEBUG nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] About to invoke the migrate API _live_migration_operation /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:7808
2020-10-21 15:15:52.497 614454 ERROR nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Live Migration failure: not all arguments converted during string formatting
2020-10-21 15:15:52.498 614454 DEBUG nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Migration operation thread notification thread_finished /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:8149
2020-10-21 15:15:52.983 614454 DEBUG nova.virt.libvirt.migration [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] VM running on src, migration failed _log /usr/lib/python3/dist-packages/nova/virt/libvirt/migration.py:361
2020-10-21 15:15:52.984 614454 DEBUG nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Fixed incorrect job type to be 4 _live_migration_monitor /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:7978
2020-10-21 15:15:52.985 614454 ERROR nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Migration operation has aborted
Please help me with a solution to this issue.
A Nova instance throws an error on launch: "failed to perform requested operation on instance… the server has either erred or is incapable of performing the requested operation (HTTP 500)". See the screenshot below.
Instance Creation Error
Surprisingly, it works well when attaching the volume separately after instance launch; you need to set "Create New Volume" to "No" during instance creation.
We restarted the Cinder services, but that did not solve the issue.
From the API logs we figured out that there are HTTP 500 errors during API interactions with the service endpoints (Nova and Cinder). Logs are pasted below.
Can someone help resolve this issue?
Thanks in advance.
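For reference (not from the original post), the Horizon workaround of "Create New Volume = No" corresponds roughly to booting directly from the image and attaching an existing volume afterwards; all names below are placeholders:

```shell
# Boot from the image directly, so Nova does not ask Cinder to create a
# boot volume, then attach an existing volume as a second step.
openstack server create --flavor m1.small --image centos7 \
    --network private-net my-instance
openstack server add volume my-instance my-volume
```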
OpenStack details
It is a 3-node system: one controller + 2 compute nodes.
The controller runs CentOS 7 and the OpenStack Ocata release.
Cinder client version 1.11.0 and Nova client version 7.1.2.
List of Nova and Cinder RPMs:
==> api.log <==
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] Caught error: <class 'oslo_messaging.exceptions.MessagingTimeout'> Timed out waiting for a reply to message ID bf2f80590a754b59a720405cd0bc1ffb
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault Traceback (most recent call last):
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/cinder/api/middleware/fault.py", line 79, in __call__
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault return req.get_response(self.application)
2019-01-30 04:16:28.785 275098 ERROR cinder.api.middleware.fault File "/usr/lib/python2.7/site-packages/webob/request.py", line 1299, in send
2019-01-30 04:16:28.793 275098 INFO cinder.api.middleware.fault [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] http://10.110.77.2:8776/v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action returned with HTTP 500
2019-01-30 04:16:28.794 275098 INFO eventlet.wsgi.server [req-634abf81-df79-42b5-b8f4-8f19488c0bba a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] 10.110.77.4 "POST /v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action HTTP/1.1" status: 500 len: 425 time: 60.0791931
2019-01-30 04:16:28.813 275098 INFO cinder.api.openstack.wsgi [req-53d149ac-6e60-4ddd-9ace-216d12122790 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] POST http://10.110.77.2:8776/v2/2db5c111414e4d2bbc14645e6f0931db/volumes/301f71f0-8fb5-4429-a67c-473d42ff9def/action
2019-01-30 04:16:28.852 275098 INFO cinder.volume.api [req-53d149ac-6e60-4ddd-9ace-216d12122790 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - default default] Volume info retrieved successfully.
Nova Logs :
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Instance failed block device setup
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Traceback (most recent call last):
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1588, in _prep_block_device
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] wait_func=self._await_block_device_map_created)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 512, in attach_block_devices
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] _log_and_attach(device)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 509, in _log_and_attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] bdm.attach(*attach_args, **attach_kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 408, in attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] do_check_attach=do_check_attach)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 48, in wrapped
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] ret_val = method(obj, context, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 258, in attach
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] connector)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 168, in wrapper
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] res = method(self, ctx, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 190, in wrapper
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] res = method(self, ctx, volume_id, *args, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 391, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] exc.code if hasattr(exc, 'code') else None)})
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] self.force_reraise()
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] six.reraise(self.type_, self.value, self.tb)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 365, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] context).volumes.initialize_connection(volume_id, connector)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 404, in initialize_connection
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] {'connector': connector})
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 334, in _action
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] resp, body = self.api.client.post(url, body=body)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 167, in post
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] return self._cs_request(url, 'POST', **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 155, in _cs_request
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] return self.request(url, method, **kwargs)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 144, in request
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] raise exceptions.from_response(resp, body)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079] ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-dcd4a981-8b22-4c3d-9ba7-25fafe80b8f5)
2019-01-30 03:58:04.808 5642 ERROR nova.compute.manager [instance: aba62cf8-0880-4bf7-8201-3365861c8079]
2019-01-30 03:58:04.811 5642 DEBUG nova.compute.claims [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] [instance: aba62cf8-0880-4bf7-8201-3365861c8079] Aborting claim: [Claim: 4096 MB memory, 40 GB disk] abort /usr/lib/python2.7/site-packages/nova/compute/claims.py:124
2019-01-30 03:58:04.812 5642 DEBUG oslo_concurrency.lockutils [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2019-01-30 03:58:04.844 5642 INFO nova.scheduler.client.report [req-a4b94c35-2532-4e82-864c-ff33b972a3b2 a1c4c0232896400ba6fddf1cbcb54dd8 2db5c111414e4d2bbc14645e6f0931db - - -] Deleted allocation for instance aba62cf8-0880-4bf7-8201-3365861c8079
Output of some diagnostic commands on the OpenStack controller:
[root@controller ~(keystone_admin)]# cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host           | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup    | controller     | nova | enabled | up    | 2019-01-31T10:27:20.000000 | -               |
| cinder-scheduler | controller     | nova | enabled | up    | 2019-01-31T10:27:13.000000 | -               |
| cinder-volume    | controller@lvm | nova | enabled | up    | 2019-01-31T10:27:12.000000 | -               |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
[root@controller yum.repos.d]# rpm -qa | grep cinder
openstack-cinder-10.0.5-1.el7.noarch
puppet-cinder-10.4.0-1.el7.noarch
python-cinder-10.0.5-1.el7.noarch
python2-cinderclient-1.11.0-1.el7.noarch
[root@controller yum.repos.d]# rpm -qa | grep nova
openstack-nova-conductor-15.1.0-1.el7.noarch
openstack-nova-novncproxy-15.1.0-1.el7.noarch
openstack-nova-compute-15.1.0-1.el7.noarch
openstack-nova-cert-15.1.0-1.el7.noarch
openstack-nova-api-15.1.0-1.el7.noarch
openstack-nova-console-15.1.0-1.el7.noarch
openstack-nova-common-15.1.0-1.el7.noarch
openstack-nova-placement-api-15.1.0-1.el7.noarch
python-nova-15.1.0-1.el7.noarch
python2-novaclient-7.1.2-1.el7.noarch
openstack-nova-scheduler-15.1.0-1.el7.noarch
puppet-nova-10.5.0-1.el7.noarch
[root@controller yum.repos.d]#
[root@controller yum.repos.d]# rpm -qa | grep ocata
centos-release-openstack-ocata-1-2.el7.noarch
[root@controller yum.repos.d]# uname -a
Linux controller 3.10.0-862.2.3.el7.x86_64 #1 SMP Wed May 9 18:05:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@controller yum.repos.d]# cinder --version
1.11.0
[root@controller yum.repos.d]# nova --version
7.1.2
[root@controller yum.repos.d]#
I found the fix for this issue.
I observed that a few projects in OpenStack had volume deletions stuck in the error state with "Error deleting". I explicitly reset the volume state using "cinder reset-state --state available volume-id".
This allowed me to delete the volumes successfully. I then restarted the Cinder services, and everything started working as usual.
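The recovery steps above can be sketched as CLI commands. This is a sketch only: it assumes admin credentials, volume IDs are placeholders, and service names vary by distro; note that reset-state only rewrites the state recorded in the Cinder DB, so use it only for volumes genuinely stuck in "error deleting":

```shell
# Find stuck volumes, reset their recorded state, and delete them.
cinder list --all-tenants | grep -i error
cinder reset-state --state available <volume-id>
cinder delete <volume-id>
# Then restart the Cinder services (names are deployment-specific).
systemctl restart openstack-cinder-api openstack-cinder-volume
```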
I followed the steps in http://docs.getcloudify.org/4.1.0/installation/bootstrapping/#option-2-bootstrapping-a-cloudify-manager to bootstrap the Cloudify manager using option 2, and I am getting the following error repeatedly:
Workflow failed: Task failed 'fabric_plugin.tasks.run_script' -> restservice
error: http://127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>
The command is able to install and verify a lot of things like RabbitMQ, PostgreSQL, etc., but it always fails at the REST service. Creating and configuring the REST service succeeds, but verification fails; it looks like the service never starts.
2017-08-22 04:23:19.700 CFY <manager> [rest_service_cyd4of.start] Task started 'fabric_plugin.tasks.run_script'
2017-08-22 04:23:20.506 LOG <manager> [rest_service_cyd4of.start] INFO: Starting Cloudify REST Service...
2017-08-22 04:23:21.011 LOG <manager> [rest_service_cyd4of.start] INFO: Verifying Rest service is running...
2017-08-22 04:23:21.403 LOG <manager> [rest_service_cyd4of.start] INFO: Verifying Rest service is working as expected...
2017-08-22 04:23:21.575 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 3 seconds...
2017-08-22 04:23:24.691 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 6 seconds...
2017-08-22 04:23:30.815 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 12 seconds...
[10.0.2.15] out: restservice error: http://127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>
[10.0.2.15] out: Traceback (most recent call last):
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3", line 71, in <module>
[10.0.2.15] out: verify_restservice(restservice_url)
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3", line 34, in verify_restservice
[10.0.2.15] out: utils.verify_service_http(SERVICE_NAME, url, headers=headers)
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/utils.py", line 1734, in verify_service_http
[10.0.2.15] out: ctx.abort_operation('{0} error: {1}: {2}'.format(service_name, url, e))
[10.0.2.15] out: File "/tmp/cloudify-ctx/cloudify.py", line 233, in abort_operation
[10.0.2.15] out: subprocess.check_call(cmd)
[10.0.2.15] out: File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[10.0.2.15] out: raise CalledProcessError(retcode, cmd)
[10.0.2.15] out: subprocess.CalledProcessError: Command '['ctx', 'abort_operation', 'restservice error: http://127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>']' returned non-zero exit status 1
[10.0.2.15] out:
Fatal error: run() received nonzero return code 1 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmp4BXh2m-start.py-VHYZP1K3 && /tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmp4BXh2m-start.py-VHYZP1K3 && /tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3"
I am using CentOS 7.
Any suggestions on how to address or debug this issue would be appreciated.
Can you please try the same bootstrap option using these instructions and let me know if it works for you?
Do you have the python-virtualenv package installed? If you do, try uninstalling it.
The version of virtualenv in CentOS repositories is too old and causes problems with the REST service installation. Cloudify will install its own version of virtualenv while bootstrapping, but only if one is not already present in the system.
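A quick way to check for and remove the distro package before bootstrapping (a sketch for CentOS 7; requires root):

```shell
# If the CentOS python-virtualenv package is installed, remove it so the
# Cloudify bootstrap installs its own, newer virtualenv instead.
if rpm -q python-virtualenv >/dev/null 2>&1; then
    yum remove -y python-virtualenv
fi
```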
I'm trying to bootstrap a Cloudify manager using the simple-manager-blueprint from the cloudify-manager-repo, following the instructions here.
I am running the bootstrap process from Ubuntu 16, attempting to bootstrap onto an already-existing CentOS 7 VM (KVM) hosted remotely.
The error I get during the bootstrap process is:
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager$ cfy init -r
Initialization completed successfully
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager$ cfy --version
Cloudify CLI 3.3.1
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager$ cfy bootstrap -p ./cloudify-manager-blueprints-3.3.1/simple-manager-blueprint.yaml -i ./cloudify-manager-blueprints-3.3.1/simple-manager-blueprint-inputs.yaml
executing bootstrap validation
2016-06-10 13:03:38 CFY <manager> Starting 'execute_operation' workflow execution
2016-06-10 13:03:38 CFY <manager> [rabbitmq_b88e8] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [python_runtime_89bdd] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [rest_service_61510] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [amqp_influx_2f816] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [manager_host_d688e] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [influxdb_98fd6] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [logstash_39e85] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [manager_configuration_0d9ca] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [mgmt_worker_f0d02] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [riemann_20a3e] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [java_runtime_c9a1c] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [elasticsearch_b1536] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [nginx_db289] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [webui_9c064] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [rabbitmq_b88e8] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [python_runtime_89bdd] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [manager_configuration_0d9ca] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [mgmt_worker_f0d02] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [nginx_db289] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [rest_service_61510] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [manager_host_d688e] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [riemann_20a3e] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [influxdb_98fd6] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [logstash_39e85] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [amqp_influx_2f816] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [webui_9c064] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [elasticsearch_b1536] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [java_runtime_c9a1c] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> 'execute_operation' workflow execution succeeded
bootstrap validation completed successfully
executing bootstrap
Inputs ./cloudify-manager-blueprints-3.3.1/simple-manager-blueprint-inputs.yaml
Inputs <cloudify.workflows.local._Environment object at 0x7fc76b458a10>
2016-06-10 13:03:45 CFY <manager> Starting 'install' workflow execution
2016-06-10 13:03:45 CFY <manager> [manager_host_cd1f8] Creating node
2016-06-10 13:03:45 CFY <manager> [manager_host_cd1f8] Configuring node
2016-06-10 13:03:45 CFY <manager> [manager_host_cd1f8] Starting node
2016-06-10 13:03:46 CFY <manager> [java_runtime_e2b0d] Creating node
2016-06-10 13:03:46 CFY <manager> [manager_configuration_baa5a] Creating node
2016-06-10 13:03:46 CFY <manager> [python_runtime_a24d5] Creating node
2016-06-10 13:03:46 CFY <manager> [rabbitmq_2656a] Creating node
2016-06-10 13:03:46 CFY <manager> [influxdb_720e7] Creating node
2016-06-10 13:03:46 CFY <manager> [manager_configuration_baa5a.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [python_runtime_a24d5.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [influxdb_720e7.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [rabbitmq_2656a.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [java_runtime_e2b0d.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [manager_configuration_baa5a.create] Task started 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 LOG <manager> [manager_configuration_baa5a.create] INFO: preparing fabric environment...
2016-06-10 13:03:46 LOG <manager> [manager_configuration_baa5a.create] INFO: Fabric env: {u'always_use_pty': True, u'key_filename': u'/home/k/.ssh/id_rsa.pub', u'user': u'cloudify', u'host_string': u'10.124.129.42'}
2016-06-10 13:03:46 LOG <manager> [manager_configuration_baa5a.create] INFO: environment prepared successfully
[10.124.129.42] put: /tmp/tmppt9dtd-configure_manager.sh -> /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
[10.124.129.42] put: <file obj> -> /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63
[10.124.129.42] run: source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
[10.124.129.42] out: Traceback (most recent call last):
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 130, in <module>
[10.124.129.42] out: main()
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 119, in main
[10.124.129.42] out: args.timeout)
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 78, in client_req
[10.124.129.42] out: response = request_method(socket_url, request, timeout)
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 59, in http_client_req
[10.124.129.42] out: timeout=timeout)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
[10.124.129.42] out: return opener.open(url, data, timeout)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 437, in open
[10.124.129.42] out: response = meth(req, response)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
[10.124.129.42] out: 'http', request, response, code, msg, hdrs)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 475, in error
[10.124.129.42] out: return self._call_chain(*args)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
[10.124.129.42] out: result = func(*args)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
[10.124.129.42] out: raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
[10.124.129.42] out: urllib2.HTTPError: HTTP Error 504: Gateway Time-out
[10.124.129.42] out: /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63: line 3: .: filename argument required
[10.124.129.42] out: .: usage: . filename [arguments]
[10.124.129.42] out:
Fatal error: run() received nonzero return code 2 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63"
Aborting.
2016-06-10 13:03:47 LOG <manager> [manager_configuration_baa5a.create] ERROR: Exception raised on operation [fabric_plugin.tasks.run_script] invocation
Traceback (most recent call last):
File "/home/k/cfyenv/local/lib/python2.7/site-packages/cloudify/decorators.py", line 122, in wrapper
result = func(*args, **kwargs)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 214, in run_script
remote_env_script_path, command))
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/network.py", line 639, in host_prompting_wrapper
return func(*args, **kwargs)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/operations.py", line 1042, in run
shell_escape=shell_escape)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/operations.py", line 932, in _run_command
error(message=msg, stdout=out, stderr=err)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/utils.py", line 327, in error
return func(message)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/utils.py", line 32, in abort
raise env.abort_exception(msg)
FabricTaskError: run() received nonzero return code 2 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63"
2016-06-10 13:03:47 CFY <manager> [manager_configuration_baa5a.create] Task failed 'fabric_plugin.tasks.run_script' -> run() received nonzero return code 2 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63" [attempt 1/6]
2016-06-10 13:03:47 CFY <manager> [python_runtime_a24d5.create] Task started 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:47 LOG <manager> [python_runtime_a24d5.create] INFO: preparing fabric environment...
2016-06-10 13:03:47 LOG <manager> [python_runtime_a24d5.create] INFO: Fabric env: {u'always_use_pty': True, u'key_filename': u'/home/k/.ssh/id_rsa.pub', u'hide': u'running', u'user': u'cloudify', u'host_string': u'10.124.129.42'}
2016-06-10 13:03:47 LOG <manager> [python_runtime_a24d5.create] INFO: environment prepared successfully
[10.124.129.42] put: /tmp/tmpmndvAt-create.sh -> /tmp/cloudify-ctx/scripts/tmpmndvAt-create.sh-F7IX8WT9
[10.124.129.42] put: <file obj> -> /tmp/cloudify-ctx/scripts/env-tmpmndvAt-create.sh-F7IX8WT9
[10.124.129.42] run: source /tmp/cloudify-ctx/scripts/env-tmpmndvAt-create.sh-F7IX8WT9 && /tmp/cloudify-ctx/scripts/tmpmndvAt-create.sh-F7IX8WT9
[10.124.129.42] out: Traceback (most recent call last):
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 130, in <module>
[10.124.129.42] out: main()
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 119, in main
[10.124.129.42] out: args.timeout)
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 78, in client_req
[10.124.129.42] out: response = request_method(socket_url, request, timeout)
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 59, in http_client_req
[10.124.129.42] out: timeout=timeout)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
[10.124.129.42] out: return opener.open(url, data, timeout)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 437, in open
[10.124.129.42] out: response = meth(req, response)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
[10.124.129.42] out: 'http', request, response, code, msg, hdrs)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 475, in error
[10.124.129.42] out: return self._call_chain(*args)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
[10.124.129.42] out: result = func(*args)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
[10.124.129.42] out: raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
[10.124.129.42] out: urllib2.HTTPError: HTTP Error 504: Gateway Time-out
[10.124.129.42] out: /tmp/cloudify-ctx/scripts/tmpmndvAt-create.sh-F7IX8WT9: line 3: .: filename argument required
[10.124.129.42] out: .: usage: . filename [arguments]
[10.124.129.42] out:
Fatal error: run() received nonzero return code 2 while executing!
^C
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager$ ^C
As far as I can tell, the bootstrap scripts expect something on the target manager host to be listening over HTTP, but nothing is there. Of course I could be way off track, as I'm new to Cloudify.
I've made only minimal changes to the blueprint inputs:
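The secondary error in the log above (".: filename argument required") fits that theory: the wrapper script seems to source a path that it first has to fetch via the `ctx` HTTP client, so when that request dies with the 504, the path comes back empty and `.` aborts with return code 2, which is exactly what Fabric reports. A minimal sketch that reproduces just the shell-level symptom (the variable name `CTX_RESULT` is hypothetical, not taken from the actual script):

```shell
# Hypothetical: mimic sourcing a path that an earlier (failed) HTTP call
# was supposed to populate. With the variable empty, "." receives no
# filename, bash prints ".: filename argument required", and the command
# exits with status 2 -- the same "nonzero return code 2" seen in the log.
unset CTX_RESULT
bash -c ". $CTX_RESULT"
echo "exit=$?"   # echoes exit=2
```

So the 504 is almost certainly the primary failure; the "filename argument required" line is just its downstream effect.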
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager/cloudify-manager-blueprints-3.3.1$ cat ./simple-manager-blueprint-inputs.yaml
#############################
# Provider specific Inputs
#############################
# The public IP of the manager to which the CLI will connect.
public_ip: '<my target hosts ip>'
# The manager's private IP address. This is the address which will be used by the
# application hosts to connect to the Manager's fileserver and message broker.
private_ip: '<my target hosts ip>'
# SSH user used to connect to the manager
ssh_user: 'cloudify'
# SSH key path used to connect to the manager
ssh_key_filename: '/home/k/.ssh/id_rsa.pub'
# This is the user with which the Manager will try to connect to the application hosts.
agents_user: 'cloudify'
#resources_prefix: ''
#############################
# Security Settings
#############################
# Cloudify REST security is disabled by default. To enable security, set to true.
# Note: If security is disabled, the other security inputs are irrelevant.
#security_enabled: false
# Enabling SSL limits communication with the server to SSL only.
# NOTE: If enabled, the certificate and private key files must reside in resources/ssl.
#ssl_enabled: false
# Username and password of the Cloudify administrator.
# This user will also be included in the simple userstore repository if the
# simple userstore implementation is used.
admin_username: 'admin'
admin_password: '<my admin password>'
#insecure_endpoints_disabled: false
#############################
# Agent Packages
#############################
# The key names must be in the format: distro_release_agent (e.g. ubuntu_trusty_agent)
# as the key is what's used to name the file, which later allows our
# agent installer to identify it for your distro and release automatically.
# Note that the windows agent key name MUST be `cloudify_windows_agent`
agent_package_urls:
# ubuntu_trusty_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/Ubuntu-trusty-agent_3.3.1-sp-b310.tar.gz
# ubuntu_precise_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/Ubuntu-precise-agent_3.3.1-sp-b310.tar.gz
centos_7x_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/centos-Core-agent_3.3.1-sp-b310.tar.gz
# centos_6x_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/centos-Final-agent_3.3.1-sp-b310.tar.gz
# redhat_7x_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/redhat-Maipo-agent_3.3.1-sp-b310.tar.gz
# redhat_6x_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/redhat-Santiago-agent_3.3.1-sp-b310.tar.gz
# cloudify_windows_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-windows-agent_3.3.1-sp-b310.exe
#############################
# Cloudify Modules
#############################
# Note that you can replace rpm urls with names of packages as long as they're available in your default yum repository.
# That is, as long as they provide the exact same version of that module.
rest_service_rpm_source_url: 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-rest-service-3.3.1-sp_b310.x86_64.rpm'
management_worker_rpm_source_url: 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-management-worker-3.3.1-sp_b310.x86_64.rpm'
amqpinflux_rpm_source_url: 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-amqp-influx-3.3.1-sp_b310.x86_64.rpm'
cloudify_resources_url: 'https://github.com/cloudify-cosmo/cloudify-manager/archive/3.3.1.tar.gz'
webui_source_url: 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-ui-3.3.1-sp-b310.tgz'
# This is a Cloudify specific redistribution of Grafana.
grafana_source_url: http://repository.cloudifysource.org/org/cloudify3/components/grafana-1.9.0.tgz
#############################
# External Components
#############################
# Note that you can replace rpm urls with names of packages as long as they're available in your default yum repository.
# That is, as long as they provide the exact same version of that module.
pip_source_rpm_url: http://repository.cloudifysource.org/org/cloudify3/components/python-pip-7.1.0-1.el7.noarch.rpm
java_source_url: http://repository.cloudifysource.org/org/cloudify3/components/jre1.8.0_45-1.8.0_45-fcs.x86_64.rpm
# RabbitMQ Distribution of Erlang
erlang_source_url: http://repository.cloudifysource.org/org/cloudify3/components/erlang-17.4-1.el6.x86_64.rpm
rabbitmq_source_url: http://repository.cloudifysource.org/org/cloudify3/components/rabbitmq-server-3.5.3-1.noarch.rpm
elasticsearch_source_url: http://repository.cloudifysource.org/org/cloudify3/components/elasticsearch-1.6.0.noarch.rpm
elasticsearch_curator_rpm_source_url: http://repository.cloudifysource.org/org/cloudify3/components/elasticsearch-curator-3.2.3-1.x86_64.rpm
logstash_source_url: http://repository.cloudifysource.org/org/cloudify3/components/logstash-1.5.0-1.noarch.rpm
nginx_source_url: http://repository.cloudifysource.org/org/cloudify3/components/nginx-1.8.0-1.el7.ngx.x86_64.rpm
influxdb_source_url: http://repository.cloudifysource.org/org/cloudify3/components/influxdb-0.8.8-1.x86_64.rpm
riemann_source_url: http://repository.cloudifysource.org/org/cloudify3/components/riemann-0.2.6-1.noarch.rpm
# A RabbitMQ Client for Riemann
langohr_source_url: http://repository.cloudifysource.org/org/cloudify3/components/langohr.jar
# Riemann's default daemonizer
daemonize_source_url: http://repository.cloudifysource.org/org/cloudify3/components/daemonize-1.7.3-7.el7.x86_64.rpm
nodejs_source_url: http://repository.cloudifysource.org/org/cloudify3/components/node-v0.10.35-linux-x64.tar.gz
#############################
# RabbitMQ Configuration
#############################
# Sets the username/password to use for clients such as celery
# to connect to the rabbitmq broker.
# It is recommended that you set both the username and password
# to something reasonably secure.
rabbitmq_username: 'cloudify'
rabbitmq_password: '<my rabbit password>'
# Enable SSL for RabbitMQ. If this is set to true then the public and private
# certs must be supplied (`rabbitmq_cert_private`, `rabbitmq_cert_public` inputs).
#rabbitmq_ssl_enabled: false
# The private certificate for RabbitMQ to use for SSL. This must be PEM formatted.
# It is expected to begin with a line containing 'PRIVATE KEY' in the middle.
#rabbitmq_cert_private: ''
# The public certificate for RabbitMQ to use for SSL. This does not need to be signed by any CA,
# as it will be deployed and explicitly used for all other components.
# It may be self-signed. It must be PEM formatted.
# It is expected to begin with a line of dashes with 'BEGIN CERTIFICATE' in the middle.
# If an external endpoint is used, this must be the public certificate associated with the private
# certificate that has already been configured for use by that rabbit endpoint.
#rabbitmq_cert_public: ''
# Allows to define the message-ttl for the different types of queues (in milliseconds).
# These are not used if `rabbitmq_endpoint_ip` is provided.
# https://www.rabbitmq.com/ttl.html
rabbitmq_events_queue_message_ttl: 60000
rabbitmq_logs_queue_message_ttl: 60000
rabbitmq_metrics_queue_message_ttl: 60000
# This will set the queue length limit. Note that while new messages
# will be queued in RabbitMQ, old messages will be deleted once the
# limit is reached!
# These are not used if `rabbitmq_endpoint_ip` is provided.
# Note this is NOT the message byte length!
# https://www.rabbitmq.com/maxlength.html
rabbitmq_events_queue_length_limit: 1000000
rabbitmq_logs_queue_length_limit: 1000000
rabbitmq_metrics_queue_length_limit: 1000000
# RabbitMQ File Descriptors Limit
rabbitmq_fd_limit: 102400
# You can configure an external endpoint of a RabbitMQ Cluster to use
# instead of the built in one.
# If one is provided, the built in RabbitMQ cluster will not run.
# Also note that your external cluster must be preconfigured with any
# user name/pass and SSL certs if you plan on using RabbitMQ's security
# features.
#rabbitmq_endpoint_ip: ''
#############################
# Elasticsearch Configuration
#############################
# bootstrap.mlockall is set to true by default.
# This allows to set the heapsize for your cluster.
# https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
#elasticsearch_heap_size: 2g
# This allows to provide any JAVA_OPTS to Elasticsearch.
#elasticsearch_java_opts: ''
# The index for events will be named `logstash-YYYY.mm.dd`.
# A new index corresponding with today's date will be added each day.
# Elasticsearch Curator is used to rotate the indices on a daily basis
# via a cronjob. This allows to determine the number of days to keep.
#elasticsearch_index_rotation_interval: 7
# You can configure an external endpoint of an Elasticsearch Cluster to use
# instead of the built in one. The built in Elasticsearch cluster will not run.
# You need to provide an IP (defaults to localhost) and Port (defaults to 9200) of your Elasticsearch Cluster.
#elasticsearch_endpoint_ip: ''
#elasticsearch_endpoint_port: 9200
#############################
# InfluxDB Configuration
#############################
# You can configure an external endpoint of an InfluxDB Cluster to use
# instead of the built in one.
# If one is provided, the built in InfluxDB cluster will not run.
# Note that the port is currently not configurable and must remain 8086.
# Also note that the database username and password are hardcoded to root:root.
#influxdb_endpoint_ip: ''
#############################
# Offline Resources Upload
#############################
# You can configure a set of resources to upload at bootstrap. These resources
# will reside on the manager and enable offline deployment. `dsl_resources`
# should contain any resource needed in the parsing process (i.e. plugin.yaml files)
# and any plugin archive should be compiled using the designated wagon tool
# which can be found at: http://github.com/cloudify-cosmo/wagon.
# The path should be passed to plugin_resources. Any resource your
# blueprint might need, could be uploaded using this mechanism.
#dsl_resources:
# - {'source_path': 'http://www.getcloudify.org/spec/fabric-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/fabric-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/script-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/script-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/diamond-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/diamond-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/aws-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/aws-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/openstack-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/openstack-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/tosca-vcloud-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/tosca-vcloud-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/vsphere-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/vsphere-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/cloudify/3.3.1/types.yaml', 'destination_path': '/spec/cloudify/3.3.1/types.yaml'}
# The plugins you would like to use in your applications should be added here.
# By default, the Diamond, Fabric and relevant IaaS plugins are provided.
# Note that you can upload plugins post-bootstrap via the `cfy plugins upload`
# command.
plugin_resources:
# - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py27-none-linux_x86_64-redhat-Maipo.wgn'
- 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
# - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py26-none-linux_x86_64-centos-Final.wgn'
# - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py27-none-linux_x86_64-Ubuntu-precise.wgn'
# - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py27-none-linux_x86_64-Ubuntu-trusty.wgn'
- 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_fabric_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
# - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_aws_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
- 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_openstack_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
# - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_vcloud_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
# - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_vsphere_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
I'm rather lost as to where to even start troubleshooting. Any assistance very gratefully received.
K.
Have you looked at the documentation on offline installation? It addresses the scenario where you need to work behind a firewall or a proxy.
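For reference, the offline mechanism maps onto the `dsl_resources` and `plugin_resources` inputs that already appear (mostly commented out) in the inputs file above: mirror the plugin.yaml files and wagon archives somewhere reachable from inside your network and point the inputs at that mirror. A minimal sketch, with a placeholder mirror hostname:

```yaml
# Sketch only: 'my-local-mirror' is a placeholder for a host you can
# reach from behind the firewall/proxy; the paths mirror the commented
# examples in the inputs file above.
dsl_resources:
  - {'source_path': 'http://my-local-mirror/spec/fabric-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/fabric-plugin/1.3.1/plugin.yaml'}
plugin_resources:
  - 'http://my-local-mirror/wagons/cloudify_fabric_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
```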