openstack service failed when creating the password plugin

I installed OpenStack with Packstack, but I am having a tough time with the commands. When I run:
openstack service list --long
I get:
Discovering versions from the identity service failed when creating
the password plugin. Attempting to determine version from URL. Unable
to establish connection to http://10.23.77.68:5000/v2.0/tokens
10.23.77.68 is my controller node.
I ran another command for neutron and it gave me the same response. I am absolutely new in this arena, so kindly help.
I do not know which logs to paste, as I am very new, but you can let me know. I can start with nova-api.log:
2016-06-26 21:48:54.675 7560 INFO nova.osapi_compute.wsgi.server [req-fdf8e231-59b3-46f7-b988-57e7be7d5e17 765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - - -] 10.23.77.68 "GET /v2/1a7f14a22e56468fa12ebe04ef7ee336/servers/detail HTTP/1.1" status: 200 len: 15838 time: 0.5084898
2016-06-26 21:53:54.151 7535 INFO nova.osapi_compute.wsgi.server [req-5dd24a35-fb47-45d6-94da-19303e27a95b 765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - - -] 10.23.77.68 "GET /v2/1a7f14a22e56468fa12ebe04ef7ee336 HTTP/1.1" status: 404 len: 264 time: 0.2648351
2016-06-26 21:53:54.167 7535 INFO nova.osapi_compute.wsgi.server [req-a5eac33d-660c-41a5-8269-2ba8c3063984 765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - - -] 10.23.77.68 "GET /v2/ HTTP/1.1" status: 200 len: 573 time: 0.0116799
2016-06-26 21:53:55.033 7535 INFO nova.osapi_compute.wsgi.server [req-2eeb31d2-947e-45be-bfb8-b8f8ebf602b8 765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - - -] 10.23.77.68 "GET /v2/1a7f14a22e56468fa12ebe04ef7ee336/servers/detail HTTP/1.1" status: 200 len: 15838 time: 0.6974850
EDIT: /var/log/keystone/keystone.log shows:
[root@controller ~]# tail /var/log/keystone/keystone.log
2016-06-29 15:11:21.975 22759 INFO keystone.common.wsgi [req-ce18ee5e-2323-4f7a-937c-71cb3b96e9a0 765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - default default] GET http://10.23.77.68:35357/v2.0/users
2016-06-29 15:11:21.976 22759 WARNING oslo_log.versionutils [req-ce18ee5e-2323-4f7a-937c-71cb3b96e9a0 765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - default default] Deprecated: get_users of the v2 API is deprecated as of Mitaka in favor of a similar function in the v3 API and may be removed in Q.
2016-06-29 15:11:36.526 22854 INFO keystone.common.wsgi [req-a63438d5-603e-423f-8c9d-25cf44ac12dc - - - - -] GET http://10.23.77.68:35357/
2016-06-29 15:11:36.536 28937 INFO keystone.common.wsgi [req-177f988e-43ac-49ee-bf9a-e12084646f28 - - - - -] POST http://10.23.77.68:35357/v2.0/tokens
2016-06-29 15:11:36.682 28393 INFO keystone.common.wsgi [req-48ccc8d8-e9cb-4e1e-b8bc-ceba9139d654 f93c5815f49342c8809ed489801ae9e1 b0d28d12a3814157b93b5badf9340d1f - default default] GET http://10.23.77.68:35357/v3/auth/tokens
2016-06-29 15:11:37.047 22096 INFO keystone.common.wsgi [req-98c1fd31-e5b5-48e8-afd6-8d635ae4cb6a 85c9b4514a3042f991cb00c8b1a5b3ca b0d28d12a3814157b93b5badf9340d1f - default default] GET http://10.23.77.68:35357/
2016-06-29 15:11:37.056 25970 INFO keystone.common.wsgi [req-971dd038-0433-4d36-a341-654a0f421472 - - - - -] POST http://10.23.77.68:35357/v2.0/tokens
2016-06-29 15:11:37.182 24078 INFO keystone.common.wsgi [req-33cd309d-38c3-4faa-acfb-4406708cd6c8 85c9b4514a3042f991cb00c8b1a5b3ca b0d28d12a3814157b93b5badf9340d1f - default default] GET http://10.23.77.68:35357/v3/auth/tokens
2016-06-29 15:12:23.884 22587 INFO keystone.common.wsgi [req-44e1df97-e487-4e82-9293-c1344d0cbaef 85c9b4514a3042f991cb00c8b1a5b3ca b0d28d12a3814157b93b5badf9340d1f - default default] GET http://10.23.77.68:35357/v3/auth/tokens
2016-06-29 15:12:27.816 27690 INFO keystone.common.wsgi [req-0755f2a0-8280-4567-af0f-270df896e6f6 85c9b4514a3042f991cb00c8b1a5b3ca b0d28d12a3814157b93b5badf9340d1f - default default] GET http://10.23.77.68:35357/v3/auth/tokens
[root@controller ~]#

Could you please tell us which version of OpenStack you are running? Liberty or Mitaka?
Also, at what stage did the installation stop?
Source the keystonerc_admin file. It contains the credentials that give you access to these commands/APIs. On a Packstack install it is usually written to root's home directory:
source ~/keystonerc_admin
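For reference, a keystonerc_admin generated by Packstack typically exports variables along these lines (the values below are placeholders, not your actual settings):
export OS_USERNAME=admin
export OS_PASSWORD=<admin password>
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.23.77.68:5000/v2.0
After sourcing it, openstack service list --long should authenticate against the URL in OS_AUTH_URL.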

You can run curl http://10.23.77.68:5000 on your controller node. If it successfully returns the version list, the Keystone service is OK and you need to check the connectivity between the controller node and the other nodes. Otherwise, check the Keystone log on 10.23.77.68; it looks like the service is not running.
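A minimal check from the controller might look like this (a healthy Keystone answers both URLs with a JSON version document):
curl -s http://10.23.77.68:5000/
curl -s http://10.23.77.68:5000/v2.0/
ss -ltn | grep 5000    # is anything listening on the public Keystone port at all?
If the curl calls hang or the connection is refused, the problem is the Keystone service or a firewall, not your credentials.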

Have you tried restarting the httpd service and memcached?
systemctl restart httpd.service
systemctl restart memcached.service

Related

Airflow server constantly restarting - Signal 15

I launch the airflow webserver command on my local machine to start an Airflow instance on port 8081. The server starts, but the prompt constantly shows some warning messages in a loop. No error message appears, but the server doesn't work. These are the messages:
/usr/local/lib/python3.8/dist-packages/airflow/configuration.py:361 DeprecationWarning: The default_queue option in [celery] has been moved to the default_queue option in [operators] - the old setting has been used, but please update your config.
/usr/local/lib/python3.8/dist-packages/airflow/configuration.py:361 DeprecationWarning: The dag_concurrency option in [core] has been renamed to max_active_tasks_per_dag - the old setting has been used, but please update your config.
/usr/local/lib/python3.8/dist-packages/airflow/configuration.py:361 DeprecationWarning: The processor_poll_interval option in [scheduler] has been renamed to scheduler_idle_sleep_time - the old setting has been used, but please update your config.
[2022-06-13 15:11:57,355] {manager.py:779} WARNING - No user yet created, use flask fab command to do it.
[2022-06-13 15:12:01,925] {manager.py:512} WARNING - Refused to delete permission view, assoc with role exists DAG Runs.can_create User
[2022-06-13 15:12:19 +0000] [1117638] [INFO] Handling signal: ttou
[2022-06-13 15:12:19 +0000] [1120256] [INFO] Worker exiting (pid: 1120256)
[2022-06-13 15:12:19 +0000] [1117638] [WARNING] Worker with pid 1120256 was terminated due to signal 15
[2022-06-13 15:12:22 +0000] [1117638] [INFO] Handling signal: ttin
[2022-06-13 15:12:22 +0000] [1121568] [INFO] Booting worker with pid: 1121568
Do you know what could be happening?
Thank you in advance!

Openstack fails to launch new instances with status ERROR

I was following the tutorials on the OpenStack (Stein) docs website to launch an instance on my provider network. I am using networking option 2. I run the following command to create the instance, replacing PROVIDER_NET_ID with my provider network's ID.
openstack server create --flavor m1.nano --image cirros \
--nic net-id=PROVIDER_NET_ID --security-group default \
--key-name mykey provider-instance1
I run openstack server list to check the status of my instance. It shows a status of ERROR.
I checked the /var/log/nova/nova-compute.log on my compute node (I only have one compute node) and came across the following error.
ERROR nova.compute.manager [req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6
995cce48094442b4b29f3fb665219408 429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Instance failed to spawn: PermissionError:
[Errno 13] Permission denied: '/var/lib/nova/instances/19a6c859-5dde-4ed3-9010-4d93ebe9a942'
However, the log entries before this error suggest that everything was fine until this point.
2022-05-23 12:40:21.011 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942]
Attempting claim on node compute1: memory 64 MB, disk 1 GB, vcpus 1 CPU
2022-05-23 12:40:21.019 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total memory: 3943 MB, used: 512.00 MB
2022-05-23 12:40:21.020 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] memory limit not specified, defaulting to unlimited
2022-05-23 12:40:21.020 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total disk: 28 GB, used: 0.00 GB
2022-05-23 12:40:21.021 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] disk limit not specified, defaulting to unlimited
2022-05-23 12:40:21.024 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total vcpu: 4 VCPU, used: 0.00 VCPU
2022-05-23 12:40:21.025 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] vcpu limit not specified, defaulting to unlimited
2022-05-23 12:40:21.028 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Claim successful on node compute1
Does anyone have any ideas on what I may be doing wrong? I'd be thankful.

DevStack: failed to create new CentOS instance

After deploying DevStack, I managed to create cirros instances. Now I want to create a CentOS instance:
I downloaded the image CentOS-7-x86_64-GenericCloud-1608.qcow2 from [here](http://cloud.centos.org/centos/7/images/).
Then I run nova boot --flavor 75c84ea2-d5b0-4d99-b935-08f654122aa3 --image 997f51bd-1ee2-4cdb-baea-6cef766bf191 --security-groups 207880e9-165f-4295-adfd-1f91ac96aaaa --nic net-id=26c05c99-b82d-403f-a988-fc07d3972b6b centos-1
Then I run nova list, and it gives: b9f97618-085b-4d2b-bc94-34f3b953e2ee | centos-1 | ERROR | - | NOSTATE
The instance is in ERROR state, so I grep the logs for that instance ID: grep b9f97618-085b-4d2b-bc94-34f3b953e2ee *.log
The grep returns:
n-api.log:2016-10-13 22:09:27.975 DEBUG nova.compute.api
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] block_device_mapping
[BlockDeviceMapping(boot_index=0,connection_info=None,created_at=,delete_on_termination=True,deleted=,deleted_at=,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=,image_id='997f51bd-1ee2-4cdb-baea-6cef766bf191',instance=,instance_uuid=,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=,volume_id=None,volume_size=None),
BlockDeviceMapping(boot_index=-1,connection_info=None,created_at=,delete_on_termination=True,deleted=,deleted_at=,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=,image_id=None,instance=,instance_uuid=,no_device=False,snapshot_id=None,source_type='blank',tag=None,updated_at=,volume_id=None,volume_size=1)]
from (pid=12331) _bdm_validate_set_size_and_instance
/opt/stack/nova/nova/compute/api.py:1239 n-api.log:2016-10-13
22:09:28.117 DEBUG nova.compute.api
[req-d9327bbd-d333-4d37-8651-57e95d21396b admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] Fetching instance by UUID from
(pid=12331) get /opt/stack/nova/nova/compute/api.py:2215
n-api.log:2016-10-13 22:09:28.184 DEBUG neutronclient.v2_0.client
[req-d9327bbd-d333-4d37-8651-57e95d21396b admin admin] GET call to
neutron for
http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee
used request id req-2b427b03-67d9-474e-be93-b631b6a2ba78 from
(pid=12331) _append_request_id
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-api.log:2016-10-13 22:09:28.195 INFO nova.osapi_compute.wsgi.server
[req-d9327bbd-d333-4d37-8651-57e95d21396b admin admin] 10.61.148.89
"GET /v2.1/servers/b9f97618-085b-4d2b-bc94-34f3b953e2ee HTTP/1.1"
status: 200 len: 2018 time: 0.0843861 n-api.log:2016-10-13
22:09:52.232 DEBUG neutronclient.v2_0.client
[req-415982d6-9ff4-4c80-99a8-46e1765a58d9 admin admin] GET call to
neutron for
http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 used request id req-645a777a-35df-456e-a982-433e97cdb0e7 from
(pid=12331) _append_request_id
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-api.log:2016-10-13 22:17:04.476 DEBUG neutronclient.v2_0.client
[req-3b1c4dff-d9e9-41a5-9719-5bbb7c68085c admin admin] GET call to
neutron for
http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 used request id req-eb8bd6ef-1ecb-4c41-9355-26e4edb84d5c from
(pid=12330) _append_request_id
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-cond.log:2016-10-13 22:09:28.170 WARNING nova.scheduler.utils
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] Setting instance to ERROR state.
n-cond.log:2016-10-13 22:09:28.304 DEBUG nova.network.neutronv2.api
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] deallocate_for_instance() from
(pid=19162) deallocate_for_instance
/opt/stack/nova/nova/network/neutronv2/api.py:1154
n-cond.log:2016-10-13 22:09:28.350 DEBUG neutronclient.v2_0.client
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] GET call to
neutron for
http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee
used request id req-9dc53ce3-1f4e-4619-a22e-ce98a6f1c382 from
(pid=19162) _append_request_id
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-cond.log:2016-10-13 22:09:28.351 DEBUG nova.network.neutronv2.api
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] Instance cache missing network
info. from (pid=19162) _get_preexisting_port_ids
/opt/stack/nova/nova/network/neutronv2/api.py:2133
n-cond.log:2016-10-13 22:09:28.362 DEBUG nova.network.base_api
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] Updating instance_info_cache
with network_info: [] from (pid=19162)
update_instance_cache_with_nw_info
/opt/stack/nova/nova/network/base_api.py:43 grep: n-dhcp.log: No such
file or directory n-sch.log:2016-10-13 22:09:28.166 DEBUG nova.filters
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] Filtering
removed all hosts for the request with instance ID
'b9f97618-085b-4d2b-bc94-34f3b953e2ee'. Filter results:
[('RetryFilter', [(u'i-z78fw9mn', u'i-z78fw9mn')]),
('AvailabilityZoneFilter', [(u'i-z78fw9mn', u'i-z78fw9mn')]),
('RamFilter', [(u'i-z78fw9mn', u'i-z78fw9mn')]), ('DiskFilter', None)]
from (pid=19243) get_filtered_objects
/opt/stack/nova/nova/filters.py:129 n-sch.log:2016-10-13 22:09:28.166
INFO nova.filters [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin
admin] Filtering removed all hosts for the request with instance ID
'b9f97618-085b-4d2b-bc94-34f3b953e2ee'. Filter results: ['RetryFilter:
(start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)',
'RamFilter: (start: 1, end: 1)', 'DiskFilter: (start: 1, end: 0)']
q-svc.log:2016-10-13 22:09:28.184 INFO neutron.wsgi
[req-2b427b03-67d9-474e-be93-b631b6a2ba78 admin
55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016
22:09:28] "GET
/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee
HTTP/1.1" 200 211 0.038510 q-svc.log:2016-10-13 22:09:28.350 INFO
neutron.wsgi [req-9dc53ce3-1f4e-4619-a22e-ce98a6f1c382 admin
55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016
22:09:28] "GET
/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee
HTTP/1.1" 200 211 0.042906 q-svc.log:2016-10-13 22:09:52.233 INFO
neutron.wsgi [req-645a777a-35df-456e-a982-433e97cdb0e7 admin
55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016
22:09:52] "GET
/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 HTTP/1.1" 200 1241 0.041629 q-svc.log:2016-10-13 22:17:04.477 INFO
neutron.wsgi [req-eb8bd6ef-1ecb-4c41-9355-26e4edb84d5c admin
55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016
22:17:04] "GET
/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 HTTP/1.1" 200 1241 0.044646
Now I have no idea what is going wrong with that instance deployment. Could anyone give me some suggestions?
Some suggestions to rule out common problems (see the command sketch after this list):
The flavor: is the flavor you are using the same one you used with cirros? If yes: does that flavor set a specific size for the root disk? If so, check the minimum disk size required by the CentOS generic image you are using. Either the image needs a bigger disk, or the disk is too big for your box. Check your available disk space, the flavor specs, and the image specs.
Network: let's rule out neutron. Instead of assigning the network, assign a port. Create a port in neutron, and in the nova boot command attach that port to the VM instead of the network (--nic port-id=port-uuid).
Glance image definition: when you created the glance image from the downloaded qcow2 file, did you include any metadata item that forces the image to request a cinder-based disk? Did you include any metadata at all? If so, get rid of all metadata items on the glance image.
Try launching a cirros instance again. If cirros works, then it's something with the image (maybe any of the above: glance metadata, flavor, disk space).
Let me know what you find!
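A rough sketch of those checks with the CLI; the UUIDs are the ones from this question, and centos-2 and the port UUID are placeholders:
openstack flavor show 75c84ea2-d5b0-4d99-b935-08f654122aa3
qemu-img info CentOS-7-x86_64-GenericCloud-1608.qcow2    # virtual size must fit inside the flavor's root disk
df -h                                                     # free space on the compute host
neutron port-create 26c05c99-b82d-403f-a988-fc07d3972b6b
nova boot --flavor 75c84ea2-d5b0-4d99-b935-08f654122aa3 --image 997f51bd-1ee2-4cdb-baea-6cef766bf191 --nic port-id=<port-uuid> centos-2
openstack image show 997f51bd-1ee2-4cdb-baea-6cef766bf191 # look for unexpected properties/metadata
Note that the DiskFilter removing all hosts in your n-sch.log ('DiskFilter: (start: 1, end: 0)') already points at the disk-size check.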

openstack cinder error on liberty

I have an install of Liberty RDO OpenStack. However, when I attempt:
[root@controller ~(keystonerc_admin:admin)]# cinder --insecure quota-defaults edc8225a13404a00b44d8099e060c3d5
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:769: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-aee74e5b-b9da-460a-a4b1-14f67c165e48)
In Horizon, this error manifests itself as:
Error: Unable to retrieve volume limit information.
When navigating to horizon -> admin -> defaults.
The cinder logs show:
2016-03-10 02:07:19.970 30161 WARNING keystoneclient.auth.identity.generic.base [req-89efb8d4-299b-4cf6-bca3-386f6c4e9348 9bf9e8f990624c2ca0c08c1bf02edbdb edc8225a13404a00b44d8099e060c3d5 - - -] Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
2016-03-10 02:07:19.970 30161 ERROR cinder.api.middleware.fault [req-89efb8d4-299b-4cf6-bca3-386f6c4e9348 9bf9e8f990624c2ca0c08c1bf02edbdb edc8225a13404a00b44d8099e060c3d5 - - -] Caught error: Could not determine a suitable URL for the plugin
2016-03-10 02:07:19.971 30161 INFO cinder.api.middleware.fault [req-89efb8d4-299b-4cf6-bca3-386f6c4e9348 9bf9e8f990624c2ca0c08c1bf02edbdb edc8225a13404a00b44d8099e060c3d5 - - -] http://192.168.33.11:8776/v2/edc8225a13404a00b44d8099e060c3d5/os-quota-sets/edc8225a13404a00b44d8099e060c3d5/defaults returned with HTTP 500
2016-03-10 02:07:19.972 30161 INFO eventlet.wsgi.server [req-89efb8d4-299b-4cf6-bca3-386f6c4e9348 9bf9e8f990624c2ca0c08c1bf02edbdb edc8225a13404a00b44d8099e060c3d5 - - -] 192.168.33.11 - - [10/Mar/2016 02:07:19] "GET /v2/edc8225a13404a00b44d8099e060c3d5/os-quota-sets/edc8225a13404a00b44d8099e060c3d5/defaults HTTP/1.1" 500 425 0.082927
My cinder config:
[root@controller ~(keystonerc_admin:admin)]# cat /etc/cinder/cinder.conf | grep -vE '(^$|^\#)'
[DEFAULT]
my_ip=192.168.33.11
auth_strategy=keystone
debug=True
verbose=True
rpc_backend=rabbit
glance_host=192.168.33.11
enabled_backends=lvm
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection=mysql://cinder:change_me@192.168.33.11/cinder
[fc-zone-manager]
[keymgr]
encryption_auth_url=http://localhost:5000/v3
[keystone_authtoken]
insecure=True
auth_uri=https://192.168.33.11:5000
auth_url=https://192.168.33.11:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=cinder
password=change_me
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path=/var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host=192.168.33.11
rabbit_userid=openstack
rabbit_password=change_me
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[lvm]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group=cinder-volumes
iscsi_protocol=iscsi
iscsi_helper=lioadm
This looks like it could be this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1272572
I don't know how RDO deploys OpenStack, but it looks like you are using the v3 Identity API:
encryption_auth_url=http://localhost:5000/v3
[keystone_authtoken]
insecure=True
auth_uri=https://192.168.33.11:5000
auth_url=https://192.168.33.11:35357
These unversioned auth endpoints return an HTTP 300 'Multiple Choices' response, so they can work with both the cinder python client (v2.0) and the common openstack client (v3).
I would determine what your default keystone endpoint is (no version in the endpoint means v3, otherwise /v2.0).
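For example, with admin credentials sourced, you could look at how the identity endpoints are registered (this assumes python-openstackclient is installed):
openstack endpoint list | grep -i keystone
openstack catalog show identity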
What version of keystone is Horizon using ('USE_IDENTITY_API = X' in local_settings.py)?
The newer common openstack client uses a different syntax for quotas if you are on Identity API v3:
openstack quota set
# Compute settings
[--cores <num-cores>]
[--fixed-ips <num-fixed-ips>]
[--floating-ips <num-floating-ips>]
[--injected-file-size <injected-file-bytes>]
[--injected-files <num-injected-files>]
[--instances <num-instances>]
[--key-pairs <num-key-pairs>]
[--properties <num-properties>]
[--ram <ram-mb>]
# Volume settings
[--gigabytes <new-gigabytes>]
[--snapshots <new-snapshots>]
[--volumes <new-volumes>]
<project>
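For instance, a volume-quota change with the common client might look like this (the project name is a placeholder):
openstack quota set --volumes 20 --gigabytes 1000 --snapshots 20 demo
openstack quota show demo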

Installation failed. Failed to receive heartbeat from agent

I got this error:
Installation failed. Failed to receive heartbeat from agent.
when I was installing Cloudera on a single node.
This is what is in my /etc/hosts file:
127.0.0.1 localhost
192.168.2.131 ubuntu
This is what is in my /etc/hostname file:
ubuntu
And this is the error in my /var/log/cloudera-scm-agent file:
[13/Jun/2014 12:31:58 +0000] 15366 MainThread agent INFO To override these variables, use /etc/cloudera-scm-agent/config.ini. Environment variables for CDH locations are not used when CDH is installed from parcels.
[13/Jun/2014 12:31:58 +0000] 15366 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent/process
[13/Jun/2014 12:31:58 +0000] 15366 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent/supervisor
[13/Jun/2014 12:31:58 +0000] 15366 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent/supervisor/include
[13/Jun/2014 12:31:58 +0000] 15366 MainThread agent ERROR Failed to connect to previous supervisor.
Traceback (most recent call last):
File "/usr/lib/cmf/agent/src/cmf/agent.py", line 1236, in find_or_start_supervisor
self.get_supervisor_process_info()
File "/usr/lib/cmf/agent/src/cmf/agent.py", line 1423, in get_supervisor_process_info
self.identifier = self.supervisor_client.supervisor.getIdentification()
File "/usr/lib/python2.7/xmlrpclib.py", line 1224, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.7/xmlrpclib.py", line 1578, in __request
verbose=self.__verbose
File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/xmlrpc.py", line 460, in request
self.connection.request('POST', handler, request_body, self.headers)
File "/usr/lib/python2.7/httplib.py", line 958, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 992, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 776, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 757, in connect
self.timeout, self.source_address)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 111] Connection refused
[13/Jun/2014 12:31:58 +0000] 15366 MainThread tmpfs INFO Reusing mounted tmpfs at /run/cloudera-scm-agent/process
[13/Jun/2014 12:31:59 +0000] 15366 MainThread agent INFO Trying to connect to newly launched supervisor (Attempt 1)
[13/Jun/2014 12:31:59 +0000] 15366 MainThread agent INFO Successfully connected to supervisor
[13/Jun/2014 12:31:59 +0000] 15366 MainThread _cplogging INFO [13/Jun/2014:12:31:59] ENGINE Bus STARTING
[13/Jun/2014 12:31:59 +0000] 15366 MainThread _cplogging INFO [13/Jun/2014:12:31:59] ENGINE Started monitor thread '_TimeoutMonitor'.
[13/Jun/2014 12:31:59 +0000] 15366 MainThread _cplogging INFO [13/Jun/2014:12:31:59] ENGINE Serving on ubuntu:9000
[13/Jun/2014 12:31:59 +0000] 15366 MainThread _cplogging INFO [13/Jun/2014:12:31:59] ENGINE Bus STARTED
[13/Jun/2014 12:31:59 +0000] 15366 MainThread __init__ INFO New monitor: (<cmf.monitor.host.HostMonitor object at 0x305b990>,)
[13/Jun/2014 12:31:59 +0000] 15366 MainThread agent WARNING Setting default socket timeout to 30!
[13/Jun/2014 12:31:59 +0000] 15366 MonitorDaemon-Scheduler __init__ INFO Monitor ready to report: ('HostMonitor',)
[13/Jun/2014 12:31:59 +0000] 15366 MainThread agent INFO Using parcels directory from server provided value: /opt/cloudera/parcels
[13/Jun/2014 12:31:59 +0000] 15366 MainThread parcel INFO Agent does create users/groups and apply file permissions
[13/Jun/2014 12:31:59 +0000] 15366 MainThread downloader INFO Downloader path: /opt/cloudera/parcel-cache
[13/Jun/2014 12:31:59 +0000] 15366 MainThread parcel_cache INFO Using /opt/cloudera/parcel-cache for parcel cache
[13/Jun/2014 12:31:59 +0000] 15366 MainThread agent INFO Active parcel list updated; recalculating component info.
[13/Jun/2014 12:32:04 +0000] 15366 Monitor-HostMonitor throttling_logger INFO Using java location: '/usr/lib/jvm/java-7-oracle-cloudera/bin/java'.
[13/Jun/2014 12:32:04 +0000] 15366 Monitor-HostMonitor throttling_logger ERROR Failed to collect NTP metrics
Traceback (most recent call last):
File "/usr/lib/cmf/agent/src/cmf/monitor/host/ntp_monitor.py", line 39, in collect
result, stdout, stderr = self._subprocess_with_timeout(args, self._timeout)
File "/usr/lib/cmf/agent/src/cmf/monitor/host/ntp_monitor.py", line 32, in _subprocess_with_timeout
return subprocess_with_timeout(args, timeout)
File "/usr/lib/cmf/agent/src/cmf/monitor/host/subprocess_timeout.py", line 40, in subprocess_with_timeout
close_fds=True)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
[13/Jun/2014 12:32:12 +0000] 15366 Monitor-HostMonitor throttling_logger ERROR Timeout with args ['/usr/lib/jvm/java-7-oracle-cloudera/bin/java', '-classpath', '/usr/share/cmf/lib/agent-5.0.2.jar', 'com.cloudera.cmon.agent.DnsTest']
None
[13/Jun/2014 12:32:12 +0000] 15366 Monitor-HostMonitor throttling_logger ERROR Failed to collect java-based DNS names
Traceback (most recent call last):
File "/usr/lib/cmf/agent/src/cmf/monitor/host/dns_names.py", line 67, in collect
result, stdout, stderr = self._subprocess_with_timeout(args, self._poll_timeout)
File "/usr/lib/cmf/agent/src/cmf/monitor/host/dns_names.py", line 49, in _subprocess_with_timeout
return subprocess_with_timeout(args, timeout)
File "/usr/lib/cmf/agent/src/cmf/monitor/host/subprocess_timeout.py", line 81, in subprocess_with_timeout
raise Exception("timeout with args %s" % args)
Exception: timeout with args ['/usr/lib/jvm/java-7-oracle-cloudera/bin/java', '-classpath', '/usr/share/cmf/lib/agent-5.0.2.jar', 'com.cloudera.cmon.agent.DnsTest']
I was facing similar problems. Here is how I solved this one:
ERROR Failed to collect NTP metrics
It's because the NTP service is not installed/started.
Try:
sudo apt-get update && sudo apt-get install ntp
sudo service ntp start
I got the same error. Please make sure that your hostname resolves to your IP.
Run ifconfig -a and look up your IP address for eth0, then run the dig or host command with your FQDN and check that the IP address returned is the same one ifconfig shows.
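A quick way to compare the two, assuming eth0 is your interface and hostname -f returns your FQDN:
ifconfig -a                  # note the inet address on eth0
host $(hostname -f)          # should resolve to that same address
dig +short $(hostname -f)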
Follow this tutorial from cloudera: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Installation-Guide/cdh4ig_topic_11_1.html
When installing Cloudera 5.2 on AWS, this error occurs. It's a known issue and Cloudera put the workaround on their website (copied here):
Installing on AWS, you must use private EC2 hostnames.
When installing on an AWS instance, and adding hosts using their public names, the installation will fail when the hosts fail to heartbeat.
Workaround:
Use the Back button in the wizard to return to the original screen, where it prompts for a license.
Rerun the wizard, but choose "Use existing hosts" instead of searching for hosts. Now those hosts show up with their internal EC2 names.
Continue through the wizard and the installation should succeed.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager server (check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added (some of the logs can be found in the installation details).
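A minimal sketch of those checks run from the host being added (the Cloudera Manager hostname is a placeholder):
hostname -f                                  # should print the FQDN you expect
nc -zv cm-server.example.com 7182            # can the host reach Cloudera Manager?
ss -ltn | grep -E ':9000|:9001'              # should print nothing if the ports are free
tail -n 50 /var/log/cloudera-scm-agent/cloudera-scm-agent.log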
