OpenStack fails to launch new instances with status ERROR

I was following the tutorials on the OpenStack (Stein) docs website to launch an instance on my provider network. I am using networking option 2. I run the following command to create the instance, replacing PROVIDER_NET_ID with my provider network ID.
openstack server create --flavor m1.nano --image cirros \
--nic net-id=PROVIDER_NET_ID --security-group default \
--key-name mykey provider-instance1
I run openstack server list to check the status of my instance. It shows a status of ERROR.
I checked the /var/log/nova/nova-compute.log on my compute node (I only have one compute node) and came across the following error.
ERROR nova.compute.manager [req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6
995cce48094442b4b29f3fb665219408 429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Instance failed to spawn: PermissionError:
[Errno 13] Permission denied: '/var/lib/nova/instances/19a6c859-5dde-4ed3-9010-4d93ebe9a942'
However, the log entries immediately before this error suggest everything was fine up to that point.
2022-05-23 12:40:21.011 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942]
Attempting claim on node compute1: memory 64 MB, disk 1 GB, vcpus 1 CPU
2022-05-23 12:40:21.019 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total memory: 3943 MB, used: 512.00 MB
2022-05-23 12:40:21.020 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] memory limit not specified, defaulting to unlimited
2022-05-23 12:40:21.020 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total disk: 28 GB, used: 0.00 GB
2022-05-23 12:40:21.021 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] disk limit not specified, defaulting to unlimited
2022-05-23 12:40:21.024 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total vcpu: 4 VCPU, used: 0.00 VCPU
2022-05-23 12:40:21.025 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] vcpu limit not specified, defaulting to unlimited
2022-05-23 12:40:21.028 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Claim successful on node compute1
Does anyone have any idea what I may be doing wrong?
I'd be thankful for any pointers.
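The PermissionError in the traceback points at ownership of /var/lib/nova/instances on the compute node. A minimal sketch of the usual check and fix, assuming the compute service runs as the distro-default "nova" user and group (adjust names and the service unit to your install):

```shell
# Check who currently owns the instances directory:
ls -ld /var/lib/nova/instances

# Hand the state directory back to the nova user, then restart the service:
sudo chown -R nova:nova /var/lib/nova
sudo systemctl restart nova-compute   # on RHEL/CentOS the unit may be openstack-nova-compute
```

After the restart, retry the `openstack server create` command and re-check `/var/log/nova/nova-compute.log`.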

Related

openstack Instance failed network setup

I have two Ubuntu nodes (controller and compute) on which I installed OpenStack, module by module,
but I cannot create any instance.
The nova-compute log shows this error:
ERROR nova.compute.manager [req-42d9d648-b39f-4214-9e5e-b8fb8799c10e 91704884e43f48fcbd156b8d7429fc3e 5e055db0a5464dc1997ab0f456792271 - default default] Instance failed network setup after 1 attempt(s): keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://localhost:9696/v2.0/networks?id=163f0b54-e337-40ac-81af-958c24ceeb7f: HTTPConnectionPool(host='localhost', port=9696): Max retries exceeded with url: /v2.0/networks?id=163f0b54-e337-40ac-81af-958c24ceeb7f (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4ce476a820>: Failed to establish a new connection: [Errno 111] ECONNREFUSED')
Please help me.
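Not a confirmed fix, but a hedged place to start: nova is resolving the neutron endpoint to localhost:9696, which suggests either the network-service endpoint in the keystone catalog or the [neutron] section of nova.conf on the compute node points at localhost instead of the controller. A sketch of checks (CONTROLLER_IP is a placeholder for your controller's address):

```shell
# What networking endpoint does the keystone catalog advertise?
openstack endpoint list --service network

# On the compute node, verify the [neutron] section points at the
# controller, not localhost (exact option names vary by release):
grep -A8 '^\[neutron\]' /etc/nova/nova.conf

# Is neutron-server actually listening on the controller?
curl http://CONTROLLER_IP:9696
```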

Live migration failure: not all arguments were converted to strings

Live migration to another compute node fails. I receive an error in the nova-compute log of the host compute node:
2020-10-21 15:15:52.496 614454 DEBUG nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] About to invoke the migrate API _live_migration_operation /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:7808
2020-10-21 15:15:52.497 614454 ERROR nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Live Migration failure: not all arguments converted during string formatting
2020-10-21 15:15:52.498 614454 DEBUG nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Migration operation thread notification thread_finished /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:8149
2020-10-21 15:15:52.983 614454 DEBUG nova.virt.libvirt.migration [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] VM running on src, migration failed _log /usr/lib/python3/dist-packages/nova/virt/libvirt/migration.py:361
2020-10-21 15:15:52.984 614454 DEBUG nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Fixed incorrect job type to be 4 _live_migration_monitor /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:7978
2020-10-21 15:15:52.985 614454 ERROR nova.virt.libvirt.driver [-] [instance: bc41148a-8fdd-4be1-b8fa-468ee17a4f5b] Migration operation has aborted
Please help me with a solution to this issue.
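There is no definitive fix in this thread. One hedged diagnostic step: "not all arguments converted during string formatting" is a Python %-formatting error raised while nova was reporting some underlying libvirt failure, so the real cause is likely visible on the libvirt side. A sketch, assuming default log locations on both hosts:

```shell
# Compare libvirt/QEMU versions on source and destination; a mismatch
# is a common live-migration tripwire:
libvirtd --version
virsh version

# Look at libvirt's own log around the migration timestamp for the
# underlying error that nova failed to format:
sudo tail -n 50 /var/log/libvirt/libvirtd.log
```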

Nexus 3 Compact blob store task does not remove images physically

We were deleting old Docker images, keeping the last 10 of them. We tried the Compact blob store task to delete them physically, but under Administration / Repository settings, the blob store still shows the same size after deleting the images.
This is the compact blob store log:
2018-06-28 14:18:40,709+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Task information:
2018-06-28 14:18:40,712+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - ID: 2bf9a574-f3e6-4f8e-8351-d98e4abc5103
2018-06-28 14:18:40,712+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Type: blobstore.compact
2018-06-28 14:18:40,712+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Name: cbs
2018-06-28 14:18:40,712+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Description: Compacting default blob store
2018-06-28 14:18:40,713+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.file.FileBlobStore - Deletions index file rebuild not required
2018-06-28 14:18:40,713+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.file.FileBlobStore - Begin deleted blobs processing
2018-06-28 14:18:41,551+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.file.FileBlobStore - Elapsed time: 837.6 ms, processed: 45/45
2018-06-28 14:18:41,551+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Task complete
Docker layers can be shared across many different images, so the layers associated with an image are not deleted automatically when you delete the image. First run a "Docker - Delete unused manifests and images" task, then run the Compact blob store task again.
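If you prefer scripting that two-step sequence, recent Nexus 3.x releases expose scheduled tasks over the REST API. A sketch, assuming admin credentials and the tasks endpoint are available (the URL, password, and TASK_ID below are placeholders):

```shell
# List scheduled tasks to find the IDs of the Docker cleanup task
# and the Compact blob store task:
curl -u admin:PASSWORD "http://nexus.example.com:8081/service/rest/v1/tasks"

# Run "Docker - Delete unused manifests and images" first, then the
# compact task, substituting each task's ID in turn:
curl -u admin:PASSWORD -X POST "http://nexus.example.com:8081/service/rest/v1/tasks/TASK_ID/run"
```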

DevStack: failed to create new CentOS instance

After deploying DevStack, I managed to create CirrOS instances. Now I want to create a CentOS instance:
I downloaded the image CentOS-7-x86_64-GenericCloud-1608.qcow2 from [here](http://cloud.centos.org/centos/7/images/).
Then I run nova boot --flavor 75c84ea2-d5b0-4d99-b935-08f654122aa3 --image 997f51bd-1ee2-4cdb-baea-6cef766bf191 --security-groups 207880e9-165f-4295-adfd-1f91ac96aaaa --nic net-id=26c05c99-b82d-403f-a988-fc07d3972b6b centos-1
Then I run nova list, which gives: b9f97618-085b-4d2b-bc94-34f3b953e2ee | centos-1 | ERROR | - | NOSTATE
It is in ERROR state, so I grep the logs for the instance ID: grep b9f97618-085b-4d2b-bc94-34f3b953e2ee *.log
The grep returns:
n-api.log:2016-10-13 22:09:27.975 DEBUG nova.compute.api
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] block_device_mapping
[BlockDeviceMapping(boot_index=0,connection_info=None,created_at=,delete_on_termination=True,deleted=,deleted_at=,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=,image_id='997f51bd-1ee2-4cdb-baea-6cef766bf191',instance=,instance_uuid=,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=,volume_id=None,volume_size=None),
BlockDeviceMapping(boot_index=-1,connection_info=None,created_at=,delete_on_termination=True,deleted=,deleted_at=,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=,image_id=None,instance=,instance_uuid=,no_device=False,snapshot_id=None,source_type='blank',tag=None,updated_at=,volume_id=None,volume_size=1)]
from (pid=12331) _bdm_validate_set_size_and_instance
/opt/stack/nova/nova/compute/api.py:1239 n-api.log:2016-10-13
22:09:28.117 DEBUG nova.compute.api
[req-d9327bbd-d333-4d37-8651-57e95d21396b admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] Fetching instance by UUID from
(pid=12331) get /opt/stack/nova/nova/compute/api.py:2215
n-api.log:2016-10-13 22:09:28.184 DEBUG neutronclient.v2_0.client
[req-d9327bbd-d333-4d37-8651-57e95d21396b admin admin] GET call to
neutron for
http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee
used request id req-2b427b03-67d9-474e-be93-b631b6a2ba78 from
(pid=12331) _append_request_id
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-api.log:2016-10-13 22:09:28.195 INFO nova.osapi_compute.wsgi.server
[req-d9327bbd-d333-4d37-8651-57e95d21396b admin admin] 10.61.148.89
"GET /v2.1/servers/b9f97618-085b-4d2b-bc94-34f3b953e2ee HTTP/1.1"
status: 200 len: 2018 time: 0.0843861 n-api.log:2016-10-13
22:09:52.232 DEBUG neutronclient.v2_0.client
[req-415982d6-9ff4-4c80-99a8-46e1765a58d9 admin admin] GET call to
neutron for
http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 used request id req-645a777a-35df-456e-a982-433e97cdb0e7 from
(pid=12331) _append_request_id
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-api.log:2016-10-13 22:17:04.476 DEBUG neutronclient.v2_0.client
[req-3b1c4dff-d9e9-41a5-9719-5bbb7c68085c admin admin] GET call to
neutron for
http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 used request id req-eb8bd6ef-1ecb-4c41-9355-26e4edb84d5c from
(pid=12330) _append_request_id
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-cond.log:2016-10-13 22:09:28.170 WARNING nova.scheduler.utils
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] Setting instance to ERROR state.
n-cond.log:2016-10-13 22:09:28.304 DEBUG nova.network.neutronv2.api
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] deallocate_for_instance() from
(pid=19162) deallocate_for_instance
/opt/stack/nova/nova/network/neutronv2/api.py:1154
n-cond.log:2016-10-13 22:09:28.350 DEBUG neutronclient.v2_0.client
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] GET call to
neutron for
http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee
used request id req-9dc53ce3-1f4e-4619-a22e-ce98a6f1c382 from
(pid=19162) _append_request_id
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-cond.log:2016-10-13 22:09:28.351 DEBUG nova.network.neutronv2.api
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] Instance cache missing network
info. from (pid=19162) _get_preexisting_port_ids
/opt/stack/nova/nova/network/neutronv2/api.py:2133
n-cond.log:2016-10-13 22:09:28.362 DEBUG nova.network.base_api
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance:
b9f97618-085b-4d2b-bc94-34f3b953e2ee] Updating instance_info_cache
with network_info: [] from (pid=19162)
update_instance_cache_with_nw_info
/opt/stack/nova/nova/network/base_api.py:43 grep: n-dhcp.log: No such
file or directory n-sch.log:2016-10-13 22:09:28.166 DEBUG nova.filters
[req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] Filtering
removed all hosts for the request with instance ID
'b9f97618-085b-4d2b-bc94-34f3b953e2ee'. Filter results:
[('RetryFilter', [(u'i-z78fw9mn', u'i-z78fw9mn')]),
('AvailabilityZoneFilter', [(u'i-z78fw9mn', u'i-z78fw9mn')]),
('RamFilter', [(u'i-z78fw9mn', u'i-z78fw9mn')]), ('DiskFilter', None)]
from (pid=19243) get_filtered_objects
/opt/stack/nova/nova/filters.py:129 n-sch.log:2016-10-13 22:09:28.166
INFO nova.filters [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin
admin] Filtering removed all hosts for the request with instance ID
'b9f97618-085b-4d2b-bc94-34f3b953e2ee'. Filter results: ['RetryFilter:
(start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)',
'RamFilter: (start: 1, end: 1)', 'DiskFilter: (start: 1, end: 0)']
q-svc.log:2016-10-13 22:09:28.184 INFO neutron.wsgi
[req-2b427b03-67d9-474e-be93-b631b6a2ba78 admin
55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016
22:09:28] "GET
/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee
HTTP/1.1" 200 211 0.038510 q-svc.log:2016-10-13 22:09:28.350 INFO
neutron.wsgi [req-9dc53ce3-1f4e-4619-a22e-ce98a6f1c382 admin
55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016
22:09:28] "GET
/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee
HTTP/1.1" 200 211 0.042906 q-svc.log:2016-10-13 22:09:52.233 INFO
neutron.wsgi [req-645a777a-35df-456e-a982-433e97cdb0e7 admin
55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016
22:09:52] "GET
/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 HTTP/1.1" 200 1241 0.041629 q-svc.log:2016-10-13 22:17:04.477 INFO
neutron.wsgi [req-eb8bd6ef-1ecb-4c41-9355-26e4edb84d5c admin
55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016
22:17:04] "GET
/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 HTTP/1.1" 200 1241 0.044646
Now I have no idea what's going wrong with this instance deployment. Could anyone give me some suggestions?
Some suggestions to rule out common problems:
The flavor: Is the flavor you are using the same one you used with CirrOS? If the answer is yes: does that flavor specify a size for the root disk? If so, check the minimum disk size required for the CentOS generic image you are using. Either the image needs a bigger disk, or the disk is too big for your box. So check your available HD space, the flavor specs, and the image specs.
Network: Let's rule out neutron. Instead of assigning the network, assign a port. Create a port in neutron, and in the nova boot command assign the port to the VM instead of the network (--nic port-id=port-uuid).
Glance image definition: When you created the glance image from the downloaded qcow2 file, did you include any metadata item that forces the image to request a cinder-based disk? Did you include any metadata at all? If so, get rid of all metadata items on the glance image.
Then try to launch a CirrOS instance again. If the CirrOS instance boots OK, then it's something with the image (maybe any of the above: glance, flavor, disk space).
Let me know what you find!
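For what it's worth, the n-sch.log excerpt above already hints at the first point: DiskFilter went from 1 host to 0, i.e. no host had enough disk for this flavor/image combination. A hedged sketch of the checks, reusing the UUIDs from the question (PORT_UUID is a placeholder for the port you create):

```shell
# Compare the flavor's root disk with the image's minimum disk:
nova flavor-show 75c84ea2-d5b0-4d99-b935-08f654122aa3   # look at the "disk" field
glance image-show 997f51bd-1ee2-4cdb-baea-6cef766bf191  # look at "min_disk" and "size"

# To take neutron out of the picture, boot from a pre-created port:
neutron port-create --name centos-port 26c05c99-b82d-403f-a988-fc07d3972b6b
nova boot --flavor 75c84ea2-d5b0-4d99-b935-08f654122aa3 \
  --image 997f51bd-1ee2-4cdb-baea-6cef766bf191 \
  --nic port-id=PORT_UUID centos-1
```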

openstack service failed when creating the password plugin

I installed OpenStack through Packstack. However, I am having a tough time with the commands.
openstack service list --long
I get:
Discovering versions from the identity service failed when creating
the password plugin. Attempting to determine version from URL. Unable
to establish connection to http://10.23.77.68:5000/v2.0/tokens
10.23.77.68 is my controller node.
I ran another command for neutron and got the same response. Kindly help; I'm absolutely new in this arena.
I do not know which logs to paste, as I am very new, but let me know. I can start with nova-api.log:
2016-06-26 21:48:54.675 7560 INFO nova.osapi_compute.wsgi.server
[req-fdf8e231-59b3-46f7-b988-57e7be7d5e17
765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - -
-] 10.23.77.68 "GET /v2/1a7f14a22e56468fa12ebe04ef7ee336/servers/detail HTTP/1.1" status:
200 len: 15838 time: 0.5084898 2016-06-26 21:53:54.151 7535 INFO
nova.osapi_compute.wsgi.server
[req-5dd24a35-fb47-45d6-94da-19303e27a95b
765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - -
-] 10.23.77.68 "GET /v2/1a7f14a22e56468fa12ebe04ef7ee336 HTTP/1.1" status: 404 len: 264 time: 0.2648351 2016-06-26 21:53:54.167 7535 INFO
nova.osapi_compute.wsgi.server
[req-a5eac33d-660c-41a5-8269-2ba8c3063984
765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - -
-] 10.23.77.68 "GET /v2/ HTTP/1.1" status: 200 len: 573 time: 0.0116799 2016-06-26 21:53:55.033 7535 INFO nova.osapi_compute.wsgi.server
[req-2eeb31d2-947e-45be-bfb8-b8f8ebf602b8
765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 - -
-] 10.23.77.68 "GET /v2/1a7f14a22e56468fa12ebe04ef7ee336/servers/detail HTTP/1.1" status:
200 len: 15838 time: 0.6974850
EDIT:
/var/log/keystone/keystone.log is
[root@controller ~]# tail /var/log/keystone/keystone.log 2016-06-29
15:11:21.975 22759 INFO keystone.common.wsgi
[req-ce18ee5e-2323-4f7a-937c-71cb3b96e9a0
765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 -
default default] GET http://10.23.77.68:35357/v2.0/users 2016-06-29
15:11:21.976 22759 WARNING oslo_log.versionutils
[req-ce18ee5e-2323-4f7a-937c-71cb3b96e9a0
765512ebca194201b741a9688e07b598 1a7f14a22e56468fa12ebe04ef7ee336 -
default default] Deprecated: get_users of the v2 API is deprecated as
of Mitaka in favor of a similar function in the v3 API and may be
removed in Q. 2016-06-29 15:11:36.526 22854 INFO keystone.common.wsgi
[req-a63438d5-603e-423f-8c9d-25cf44ac12dc - - - - -] GET
http://10.23.77.68:35357/ 2016-06-29 15:11:36.536 28937 INFO
keystone.common.wsgi [req-177f988e-43ac-49ee-bf9a-e12084646f28 - - - -
-] POST http://10.23.77.68:35357/v2.0/tokens 2016-06-29 15:11:36.682 28393 INFO keystone.common.wsgi
[req-48ccc8d8-e9cb-4e1e-b8bc-ceba9139d654
f93c5815f49342c8809ed489801ae9e1 b0d28d12a3814157b93b5badf9340d1f -
default default] GET http://10.23.77.68:35357/v3/auth/tokens
2016-06-29 15:11:37.047 22096 INFO keystone.common.wsgi
[req-98c1fd31-e5b5-48e8-afd6-8d635ae4cb6a
85c9b4514a3042f991cb00c8b1a5b3ca b0d28d12a3814157b93b5badf9340d1f -
default default] GET http://10.23.77.68:35357/ 2016-06-29 15:11:37.056
25970 INFO keystone.common.wsgi
[req-971dd038-0433-4d36-a341-654a0f421472 - - - - -] POST
http://10.23.77.68:35357/v2.0/tokens 2016-06-29 15:11:37.182 24078
INFO keystone.common.wsgi [req-33cd309d-38c3-4faa-acfb-4406708cd6c8
85c9b4514a3042f991cb00c8b1a5b3ca b0d28d12a3814157b93b5badf9340d1f -
default default] GET http://10.23.77.68:35357/v3/auth/tokens
2016-06-29 15:12:23.884 22587 INFO keystone.common.wsgi
[req-44e1df97-e487-4e82-9293-c1344d0cbaef
85c9b4514a3042f991cb00c8b1a5b3ca b0d28d12a3814157b93b5badf9340d1f -
default default] GET http://10.23.77.68:35357/v3/auth/tokens
2016-06-29 15:12:27.816 27690 INFO keystone.common.wsgi
[req-0755f2a0-8280-4567-af0f-270df896e6f6
85c9b4514a3042f991cb00c8b1a5b3ca b0d28d12a3814157b93b5badf9340d1f -
default default] GET http://10.23.77.68:35357/v3/auth/tokens
[root@controller ~]#
Could you please tell us which version of OpenStack you are running? Liberty or Mitaka?
Also, at what stage did the installation stop?
Source the keystonerc_admin file. This file contains the credentials that give you access to these commands/APIs.
Command: source keystonerc_admin
You can run curl http://10.23.77.68:5000 on your controller node. If it successfully returns the version list, then the keystone service is OK and you need to check the connectivity between the controller node and the other nodes. Otherwise, check the keystone log on 10.23.77.68; it seems the service is not running.
Have you tried restarting the httpd service and memcached?
systemctl start httpd.service
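Putting the suggestions above together into one hedged checklist, run on the controller (service names are the CentOS/Packstack defaults; adjust the keystonerc_admin path to wherever Packstack wrote it):

```shell
# Restart the web server that fronts keystone, plus memcached:
sudo systemctl restart httpd memcached

# Verify keystone answers on the public port; expect a JSON version list:
curl http://10.23.77.68:5000

# Load admin credentials, then retry the failing command:
source ~/keystonerc_admin
openstack service list --long
```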
