install openstack-keystone error: Could not find domain: default - openstack

I followed the official documentation to install openstack-keystone:
openstack --os-auth-url http://192.168.80.6:35357/v3 \
--os-project-domain-id default --os-user-domain-id default \
--os-project-name admin --os-username admin --os-auth-type password \
token issue
When I verify that the admin user can authenticate, I get this error:
The request you have made requires authentication. (HTTP 401) (Request-ID: req-8d9e9608-2adb-4b80-bc00-f0fd9e9684ae)
I checked the Keystone log and found:
2017-10-04 09:06:40.966 1256 INFO keystone.common.wsgi [req-5a17f2ba-ce0e-46cb-8397-707ac9240870 - - - - -] GET http://192.168.80.6:35357/v3/
2017-10-04 09:06:40.982 1243 INFO keystone.common.wsgi [req-8d9e9608-2adb-4b80-bc00-f0fd9e9684ae - - - - -] POST http://192.168.80.6:35357/v3/auth/tokens
2017-10-04 09:06:40.987 1243 WARNING keystone.auth.controllers [req-8d9e9608-2adb-4b80-bc00-f0fd9e9684ae - - - - -] Could not find domain: default
2017-10-04 09:06:40.988 1243 WARNING keystone.common.wsgi [req-8d9e9608-2adb-4b80-bc00-f0fd9e9684ae - - - - -] Authorization failed. The request you have made requires authentication. from 192.168.80.6
I checked the domain list:
# openstack domain list
+----------------------------------+---------+---------+----------------+
| ID                               | Name    | Enabled | Description    |
+----------------------------------+---------+---------+----------------+
| 75391e2f3a1c4c8e94a82d05badb9418 | default | True    | Default Domain |
+----------------------------------+---------+---------+----------------+
I have checked the configuration. What else should I do?
Thanks!

I think it has to do with using the domain's ID where its name is expected. The domain's name is default, while its ID is 75391e2f3a1c4c8e94a82d05badb9418.
Change:
--os-project-domain-id default
to
--os-project-domain-name default
or
--os-project-domain-id 75391e2f3a1c4c8e94a82d05badb9418
Update the --os-user-domain-id in the same way.
Give that a try and see if you are able to get a token.
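For reference, a corrected invocation would look roughly like this (same endpoint and credentials as in the question; only the domain arguments change):
openstack --os-auth-url http://192.168.80.6:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin --os-auth-type password \
token issue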

failed to bind port in openstack-neutron

NOTE: I've seen this question and error posted on other forums and here, but none of the suggestions worked for me, and they relate to earlier versions of OpenStack. So, I posted a new question.
I've been setting up OpenStack Train based on its installation documents, and after setting up the services I tried to create a self-service network using the instructions here, but in step 3 of the "Verify operation" section I see that all of the ports are DOWN:
[root@dev-openstack-controller ~]# openstack port list
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| 628ec286-90aa-4cca-92da-f698fb44a4e6 | | fa:16:3e:a9:31:55 | ip_address='10.100.1.1', subnet_id='8d579a73-6951-445f-9905-51b9be2a6ff5' | DOWN |
| bb77b0d9-7ea8-47d3-b951-139a7616a4bd | | fa:16:3e:89:52:37 | ip_address='203.0.113.166', subnet_id='0666d21c-0fd9-4caf-b560-f7d11e50cd83' | DOWN |
| d2b684c9-eeee-47c4-ae12-dc97e19adf48 | | fa:16:3e:cc:b8:3d | ip_address='10.100.1.2', subnet_id='8d579a73-6951-445f-9905-51b9be2a6ff5' | DOWN |
| fb7aff87-d083-4ed2-bf82-2ab4393373c7 | | fa:16:3e:c8:a7:95 | ip_address='203.0.113.101', subnet_id='0666d21c-0fd9-4caf-b560-f7d11e50cd83' | DOWN |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
First, I don't know why I see 4 ports instead of 2, and second, when I check the Neutron logs I get the following error, which says it failed to bind the port:
2020-04-18 11:05:12.321 25009 INFO neutron.plugins.ml2.plugin [req-5c9c16a4-2327-4f46-b0ab-84e4e128d783 - - - - -] Attempt 10 to bind port 628ec286-90aa-4cca-92da-f698fb44a4e6
2020-04-18 11:05:12.347 25009 ERROR neutron.plugins.ml2.managers [req-5c9c16a4-2327-4f46-b0ab-84e4e128d783 - - - - -] Port 628ec286-90aa-4cca-92da-f698fb44a4e6 does not have an IP address assigned and there are no driver with 'connectivity' = 'l2'. The port cannot be bound.
2020-04-18 11:05:12.348 25009 ERROR neutron.plugins.ml2.managers [req-5c9c16a4-2327-4f46-b0ab-84e4e128d783 - - - - -] Failed to bind port 628ec286-90aa-4cca-92da-f698fb44a4e6 on host dev-openstack-controller.ershandc.org for vnic_type normal using segments [{'network_id': 'ae2b1f57-d91a-4ecd-ad15-2cc4b51a376f', 'segmentation_id': 45, 'physical_network': None, 'id': 'c28112f0-4f07-4f23-9f89-c3e37e68054c', 'network_type': u'vxlan'}]
I also get the same error for flat networks:
2020-04-18 11:05:11.107 25009 INFO neutron.plugins.ml2.plugin [req-5c9c16a4-2327-4f46-b0ab-84e4e128d783 - - - - -] Attempt 10 to bind port bb77b0d9-7ea8-47d3-b951-139a7616a4bd
2020-04-18 11:05:11.135 25009 ERROR neutron.plugins.ml2.managers [req-5c9c16a4-2327-4f46-b0ab-84e4e128d783 - - - - -] Port bb77b0d9-7ea8-47d3-b951-139a7616a4bd does not have an IP address assigned and there are no driver with 'connectivity' = 'l2'. The port cannot be bound.
2020-04-18 11:05:11.136 25009 ERROR neutron.plugins.ml2.managers [req-5c9c16a4-2327-4f46-b0ab-84e4e128d783 - - - - -] Failed to bind port bb77b0d9-7ea8-47d3-b951-139a7616a4bd on host dev-openstack-controller.ershandc.org for vnic_type normal using segments [{'network_id': '25c5e314-e851-4a9c-ac7a-8e7b3e426deb', 'segmentation_id': None, 'physical_network': u'provider', 'id': '6dccf301-422b-41b9-b719-2999200126c6', 'network_type': u'flat'}]
I have tried different connectivity settings for the ml2 plugin. Most of the similar cases relate to the following lines in ml2_plugin.conf:
[ml2_type_flat]
flat_networks = flat
vni_ranges = 1:1000
Based on the OpenStack documentation it should be flat, but I've tried * as well and it didn't work.
Can someone explain the problem to me? I'm installing on a CentOS 7 VM. Let me know if more information is needed.
This is quite a dated question, but I encountered the same situation, so let me post this anyway.
As the log says "no driver with 'connectivity' = 'l2'", the problem was in ml2_conf.ini. In my case, all of the driver definitions were in the [DEFAULT] section. After I moved them to the proper [ml2] section, everything started working fine.
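For reference, a sketch of how those sections typically look in ml2_conf.ini (the driver lists below are the Train installation guide's defaults for a Linux bridge deployment and may differ from yours; the point is only that they belong under [ml2], not [DEFAULT]):
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000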
It's been a while since I tried OpenStack, but I finally know the answer to the problem, which might help some people:
For setting up Neutron, your internal network needs to be set up in promiscuous mode. I was using VMware for the setup and did not have access to put the vSwitch into this mode; it also was not approved by our security auditor, and there were other priorities to take care of, so I had to drop the project. But I noticed that this post is getting attention and shogoK's answer did not work for me, so with some research and a helping hand from a network expert, the problem was identified. Hope this clue helps someone in the community.

microstack on ubuntu does not register cinder volume service

The MicroStack installation fails to register Cinder.
root@ubuntu:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
The installation itself succeeded:
root@ubuntu:~# snap install microstack --classic --edge
[...]
Initialization also succeeded, with only a minor warning:
root@ubuntu:~# microstack.init --auto
2020-02-24 09:12:21,219 - microstack_init - INFO - Configuring networking ...
2020-02-24 09:12:22,604 - microstack_init - INFO - Setting up ipv4 forwarding...
2020-02-24 09:12:23,904 - microstack_init - INFO - Opening horizon dashboard up to *
2020-02-24 09:12:25,047 - microstack_init - INFO - Waiting for RabbitMQ to start ...
Waiting for 10.20.20.1:5672
2020-02-24 09:12:25,072 - microstack_init - INFO - RabbitMQ started!
2020-02-24 09:12:25,072 - microstack_init - INFO - Configuring RabbitMQ ...
2020-02-24 09:12:26,397 - microstack_init - INFO - RabbitMQ Configured!
2020-02-24 09:12:26,471 - microstack_init - INFO - Waiting for MySQL server to start ...
Waiting for 10.20.20.1:3306
2020-02-24 09:12:26,493 - microstack_init - INFO - Mysql server started! Creating databases ...
/snap/microstack/196/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1287, 'Using GRANT for creating new user is deprecated and will be removed in future release. Create new user with CREATE USER statement.')
result = self._query(query)
/snap/microstack/196/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1287, "Using GRANT statement to modify existing user's properties other than privileges is deprecated and will be removed in future release. Use ALTER USER statement for this operation.")
result = self._query(query)
2020-02-24 09:12:28,039 - microstack_init - INFO - Configuring Keystone Fernet Keys ...
2020-02-24 09:13:01,205 - microstack_init - INFO - Bootstrapping Keystone ...
2020-02-24 09:13:11,362 - microstack_init - INFO - Creating service project ...
2020-02-24 09:13:18,118 - microstack_init - INFO - Keystone configured!
2020-02-24 09:13:18,188 - microstack_init - INFO - Configuring nova compute hypervisor ...
2020-02-24 09:13:34,039 - microstack_init - INFO - Configuring nova control plane services ...
Waiting for 10.20.20.1:8774
2020-02-24 09:16:33,766 - microstack_init - INFO - Creating default flavors...
2020-02-24 09:17:05,828 - microstack_init - INFO - Configuring Neutron
Waiting for 10.20.20.1:9696
2020-02-24 09:19:19,611 - microstack_init - INFO - Configuring Glance ...
Waiting for 10.20.20.1:9292
2020-02-24 09:20:08,536 - microstack_init - INFO - Adding cirros image ...
2020-02-24 09:20:08,541 - microstack_init - INFO - Downloading cirros image ...
100% [........................................................................] 12716032 / 12716032
2020-02-24 09:20:18,112 - microstack_init - INFO - Creating microstack keypair (~/.ssh/id_microstack)
2020-02-24 09:20:21,411 - microstack_init - INFO - Creating security group rules ...
2020-02-24 09:20:33,853 - microstack_init - INFO - restarting libvirt and virtlogd ...
2020-02-24 09:20:34,136 - microstack_init - INFO - Complete. Marked microstack as initialized!
The portal and command line are working:
root@ubuntu:~# microstack.openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| fcb4efad-509f-4235-bd86-694125a0fbbc | cirros | active |
+--------------------------------------+--------+--------+
Calling the volume command fails, and in the portal (Horizon) there is also no volume area visible:
root@ubuntu:~# microstack.openstack volume list
public endpoint for volumev2 service not found
In the service list I can see that the cinder service is not registered:
root@ubuntu:~# microstack.openstack service list
+----------------------------------+-----------+-----------+
| ID | Name | Type |
+----------------------------------+-----------+-----------+
| 237331a245714772a8d61eab3e6234e5 | keystone | identity |
| 69d4e94ceeba471baacddb9baf803a7a | neutron | network |
| 6c3a18bb84d3424e80178f4f40d4c861 | nova | compute |
| bba95e72a5b24ec989ec76d433577972 | glance | image |
| daa8cba4740643919b8474f188a7b3d4 | placement | placement |
+----------------------------------+-----------+-----------+
I have already installed it 3 times, each time with the same result.
How can I investigate further, check what has failed, or fix it?
Regards
Lukas
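A generic starting point for digging in, assuming the snap's services follow the usual snap.microstack.* naming, is to list the snap's services and check their recent logs for a failing Cinder component:
snap services microstack    # which microstack services exist and whether they are active
snap logs microstack -n 200    # recent log lines from all of the snap's services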

DevStack: failed to create new CentOS instance

After deploying DevStack, I managed to create cirros instances. Now I want to create a CentOS instance:
I downloaded the image CentOS-7-x86_64-GenericCloud-1608.qcow2 from http://cloud.centos.org/centos/7/images/
Then I ran nova boot --flavor 75c84ea2-d5b0-4d99-b935-08f654122aa3 --image 997f51bd-1ee2-4cdb-baea-6cef766bf191 --security-groups 207880e9-165f-4295-adfd-1f91ac96aaaa --nic net-id=26c05c99-b82d-403f-a988-fc07d3972b6b centos-1
Then I ran nova list, which gives: b9f97618-085b-4d2b-bc94-34f3b953e2ee | centos-1 | ERROR | - | NOSTATE
The instance is in the ERROR state, so I grepped the logs for that instance ID: grep b9f97618-085b-4d2b-bc94-34f3b953e2ee *.log
The grep returns:
n-api.log:2016-10-13 22:09:27.975 DEBUG nova.compute.api [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance: b9f97618-085b-4d2b-bc94-34f3b953e2ee] block_device_mapping [BlockDeviceMapping(boot_index=0,connection_info=None,created_at=,delete_on_termination=True,deleted=,deleted_at=,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=,image_id='997f51bd-1ee2-4cdb-baea-6cef766bf191',instance=,instance_uuid=,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=,volume_id=None,volume_size=None), BlockDeviceMapping(boot_index=-1,connection_info=None,created_at=,delete_on_termination=True,deleted=,deleted_at=,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=,image_id=None,instance=,instance_uuid=,no_device=False,snapshot_id=None,source_type='blank',tag=None,updated_at=,volume_id=None,volume_size=1)] from (pid=12331) _bdm_validate_set_size_and_instance /opt/stack/nova/nova/compute/api.py:1239
n-api.log:2016-10-13 22:09:28.117 DEBUG nova.compute.api [req-d9327bbd-d333-4d37-8651-57e95d21396b admin admin] [instance: b9f97618-085b-4d2b-bc94-34f3b953e2ee] Fetching instance by UUID from (pid=12331) get /opt/stack/nova/nova/compute/api.py:2215
n-api.log:2016-10-13 22:09:28.184 DEBUG neutronclient.v2_0.client [req-d9327bbd-d333-4d37-8651-57e95d21396b admin admin] GET call to neutron for http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee used request id req-2b427b03-67d9-474e-be93-b631b6a2ba78 from (pid=12331) _append_request_id /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-api.log:2016-10-13 22:09:28.195 INFO nova.osapi_compute.wsgi.server [req-d9327bbd-d333-4d37-8651-57e95d21396b admin admin] 10.61.148.89 "GET /v2.1/servers/b9f97618-085b-4d2b-bc94-34f3b953e2ee HTTP/1.1" status: 200 len: 2018 time: 0.0843861
n-api.log:2016-10-13 22:09:52.232 DEBUG neutronclient.v2_0.client [req-415982d6-9ff4-4c80-99a8-46e1765a58d9 admin admin] GET call to neutron for http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 used request id req-645a777a-35df-456e-a982-433e97cdb0e7 from (pid=12331) _append_request_id /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-api.log:2016-10-13 22:17:04.476 DEBUG neutronclient.v2_0.client [req-3b1c4dff-d9e9-41a5-9719-5bbb7c68085c admin admin] GET call to neutron for http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 used request id req-eb8bd6ef-1ecb-4c41-9355-26e4edb84d5c from (pid=12330) _append_request_id /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-cond.log:2016-10-13 22:09:28.170 WARNING nova.scheduler.utils [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance: b9f97618-085b-4d2b-bc94-34f3b953e2ee] Setting instance to ERROR state.
n-cond.log:2016-10-13 22:09:28.304 DEBUG nova.network.neutronv2.api [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance: b9f97618-085b-4d2b-bc94-34f3b953e2ee] deallocate_for_instance() from (pid=19162) deallocate_for_instance /opt/stack/nova/nova/network/neutronv2/api.py:1154
n-cond.log:2016-10-13 22:09:28.350 DEBUG neutronclient.v2_0.client [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] GET call to neutron for http://10.61.148.89:9696/v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee used request id req-9dc53ce3-1f4e-4619-a22e-ce98a6f1c382 from (pid=19162) _append_request_id /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:127
n-cond.log:2016-10-13 22:09:28.351 DEBUG nova.network.neutronv2.api [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance: b9f97618-085b-4d2b-bc94-34f3b953e2ee] Instance cache missing network info. from (pid=19162) _get_preexisting_port_ids /opt/stack/nova/nova/network/neutronv2/api.py:2133
n-cond.log:2016-10-13 22:09:28.362 DEBUG nova.network.base_api [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] [instance: b9f97618-085b-4d2b-bc94-34f3b953e2ee] Updating instance_info_cache with network_info: [] from (pid=19162) update_instance_cache_with_nw_info /opt/stack/nova/nova/network/base_api.py:43
grep: n-dhcp.log: No such file or directory
n-sch.log:2016-10-13 22:09:28.166 DEBUG nova.filters [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] Filtering removed all hosts for the request with instance ID 'b9f97618-085b-4d2b-bc94-34f3b953e2ee'. Filter results: [('RetryFilter', [(u'i-z78fw9mn', u'i-z78fw9mn')]), ('AvailabilityZoneFilter', [(u'i-z78fw9mn', u'i-z78fw9mn')]), ('RamFilter', [(u'i-z78fw9mn', u'i-z78fw9mn')]), ('DiskFilter', None)] from (pid=19243) get_filtered_objects /opt/stack/nova/nova/filters.py:129
n-sch.log:2016-10-13 22:09:28.166 INFO nova.filters [req-6b5bf92a-ce53-46d4-8965-b54e02d21aef admin admin] Filtering removed all hosts for the request with instance ID 'b9f97618-085b-4d2b-bc94-34f3b953e2ee'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'RamFilter: (start: 1, end: 1)', 'DiskFilter: (start: 1, end: 0)']
q-svc.log:2016-10-13 22:09:28.184 INFO neutron.wsgi [req-2b427b03-67d9-474e-be93-b631b6a2ba78 admin 55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016 22:09:28] "GET /v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee HTTP/1.1" 200 211 0.038510
q-svc.log:2016-10-13 22:09:28.350 INFO neutron.wsgi [req-9dc53ce3-1f4e-4619-a22e-ce98a6f1c382 admin 55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016 22:09:28] "GET /v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee HTTP/1.1" 200 211 0.042906
q-svc.log:2016-10-13 22:09:52.233 INFO neutron.wsgi [req-645a777a-35df-456e-a982-433e97cdb0e7 admin 55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016 22:09:52] "GET /v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 HTTP/1.1" 200 1241 0.041629
q-svc.log:2016-10-13 22:17:04.477 INFO neutron.wsgi [req-eb8bd6ef-1ecb-4c41-9355-26e4edb84d5c admin 55a846ac28f847eca8521ff71dea8633] 10.61.148.89 - - [13/Oct/2016 22:17:04] "GET /v2.0/ports.json?device_id=b9f97618-085b-4d2b-bc94-34f3b953e2ee&device_id=d6c67c2f-0d21-4ef8-bcfe-eba852ed0cc1 HTTP/1.1" 200 1241 0.044646
Now I have no idea what is going wrong with that instance deployment. Could anyone give me some suggestions?
Some suggestions to rule out common problems:
The flavor: Is the flavor you are using the same one you used with cirros? If yes: does that flavor specify a size for the root disk? If it does, check the minimum disk size required by the CentOS generic image you are using. Either the image needs a bigger disk, or the disk is too big for your box. So check your available disk space, the flavor specs, and the image specs (see the command sketch below).
Network: Let's rule out Neutron. Instead of assigning the network, assign a port. Create a port in Neutron, and in the nova boot command assign that port to the VM instead of the network (--nic port-id=port-uuid).
Glance image definition: When you created the Glance image from the downloaded qcow2 file, did you include any metadata item that forces the image to request a Cinder-based disk? Did you include any metadata at all? If so, remove all metadata items from the Glance image.
Try launching a cirros instance again. If cirros boots fine, then it's something with the image (maybe any of the above: Glance metadata, flavor, disk space).
Let me know what you find!
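A minimal command sketch for those checks (the flavor, image, and network IDs are the ones from the question; openstack port create assumes a reasonably recent python-openstackclient, otherwise neutron port-create does the same job):
# compare the flavor's root disk with the image's minimum disk requirement
openstack flavor show 75c84ea2-d5b0-4d99-b935-08f654122aa3 -c disk -c ram -c vcpus
openstack image show 997f51bd-1ee2-4cdb-baea-6cef766bf191 -c min_disk -c min_ram -c size
# rule out Neutron: pre-create a port and boot against it
openstack port create --network 26c05c99-b82d-403f-a988-fc07d3972b6b centos-1-port
nova boot --flavor 75c84ea2-d5b0-4d99-b935-08f654122aa3 --image 997f51bd-1ee2-4cdb-baea-6cef766bf191 --security-groups 207880e9-165f-4295-adfd-1f91ac96aaaa --nic port-id=<id-of-centos-1-port> centos-1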

openstack cinder error on liberty

I have an installation of Liberty RDO OpenStack. However, when I attempt:
[root@controller ~(keystonerc_admin:admin)]# cinder --insecure quota-defaults edc8225a13404a00b44d8099e060c3d5
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:769: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-aee74e5b-b9da-460a-a4b1-14f67c165e48)
In Horizon, this error manifests itself as:
Error: Unable to retrieve volume limit information.
When navigating to horizon -> admin -> defaults.
The cinder logs show:
2016-03-10 02:07:19.970 30161 WARNING keystoneclient.auth.identity.generic.base [req-89efb8d4-299b-4cf6-bca3-386f6c4e9348 9bf9e8f990624c2ca0c08c1bf02edbdb edc8225a13404a00b44d8099e060c3d5 - - -] Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
2016-03-10 02:07:19.970 30161 ERROR cinder.api.middleware.fault [req-89efb8d4-299b-4cf6-bca3-386f6c4e9348 9bf9e8f990624c2ca0c08c1bf02edbdb edc8225a13404a00b44d8099e060c3d5 - - -] Caught error: Could not determine a suitable URL for the plugin
2016-03-10 02:07:19.971 30161 INFO cinder.api.middleware.fault [req-89efb8d4-299b-4cf6-bca3-386f6c4e9348 9bf9e8f990624c2ca0c08c1bf02edbdb edc8225a13404a00b44d8099e060c3d5 - - -] http://192.168.33.11:8776/v2/edc8225a13404a00b44d8099e060c3d5/os-quota-sets/edc8225a13404a00b44d8099e060c3d5/defaults returned with HTTP 500
2016-03-10 02:07:19.972 30161 INFO eventlet.wsgi.server [req-89efb8d4-299b-4cf6-bca3-386f6c4e9348 9bf9e8f990624c2ca0c08c1bf02edbdb edc8225a13404a00b44d8099e060c3d5 - - -] 192.168.33.11 - - [10/Mar/2016 02:07:19] "GET /v2/edc8225a13404a00b44d8099e060c3d5/os-quota-sets/edc8225a13404a00b44d8099e060c3d5/defaults HTTP/1.1" 500 425 0.082927
My cinder config:
[root@controller ~(keystonerc_admin:admin)]# cat /etc/cinder/cinder.conf | grep -vE '(^$|^\#)'
[DEFAULT]
my_ip=192.168.33.11
auth_strategy=keystone
debug=True
verbose=True
rpc_backend=rabbit
glance_host=192.168.33.11
enabled_backends=lvm
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection=mysql://cinder:change_me@192.168.33.11/cinder
[fc-zone-manager]
[keymgr]
encryption_auth_url=http://localhost:5000/v3
[keystone_authtoken]
insecure=True
auth_uri=https://192.168.33.11:5000
auth_url=https://192.168.33.11:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=cinder
password=change_me
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path=/var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host=192.168.33.11
rabbit_userid=openstack
rabbit_password=change_me
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[lvm]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group=cinder-volumes
iscsi_protocol=iscsi
iscsi_helper=lioadm
This looks like it could be this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1272572
I don't know how RDO deploys OpenStack, but it looks like you are using the v3 Identity API:
encryption_auth_url=http://localhost:5000/v3
[keystone_authtoken]
insecure=True
auth_uri=https://192.168.33.11:5000
auth_url=https://192.168.33.11:35357
These unversioned auth endpoints return an HTTP 300 'Multiple Choices' response, so they can work with both the python-cinderclient (v2.0) and the common openstack client (v3).
I would determine what your default keystone endpoint is (no version in the endpoint means v3, otherwise /v2.0).
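One quick way to check (a sketch, assuming the admin credentials are sourced) is to list the identity endpoints and see whether their URLs carry a version suffix:
openstack endpoint list --service identity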
Also check which version of the Identity API Horizon is using ('USE_IDENTITY_API = X' in local_settings.py).
The newer common openstack client uses a different syntax for quotas if you are on Identity API v3:
openstack quota set
# Compute settings
[--cores <num-cores>]
[--fixed-ips <num-fixed-ips>]
[--floating-ips <num-floating-ips>]
[--injected-file-size <injected-file-bytes>]
[--injected-files <num-injected-files>]
[--instances <num-instances>]
[--key-pairs <num-key-pairs>]
[--properties <num-properties>]
[--ram <ram-mb>]
# Volume settings
[--gigabytes <new-gigabytes>]
[--snapshots <new-snapshots>]
[--volumes <new-volumes>]
<project>
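For example, raising the volume quotas for a project with the common client would look something like this (illustrative numbers, using the tenant ID from the question):
openstack quota set --volumes 20 --snapshots 20 --gigabytes 500 edc8225a13404a00b44d8099e060c3d5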

Authentication failed using instance:connect in Apache Karaf 3.0.2

Can someone please help me figure out why this call is no longer working in Apache Karaf 3.0.2? I verified that it was working in version 3.0.1. All instances are up and running, but I am unable to connect to one of my instances directly from the command line.
su - karaf -c " client -h localhost -a 8101 -u karaf -r 50 -d 2 \" instance:connect -u karaf -p karaf test1 \\\" feature:repo-list \\\" \" "
Logging in as karaf
455 [sshd-SshClient[bea319b]-nio2-thread-1] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at [localhost/127.0.0.1:8101, DSA, b6:f6:d6:3f:8b:2f:ad:a4:0f:3f:3d:c3:7b:96:fd:ae] presented unverified {} key: {}
Connecting to host localhost on port 8103
Connecting to unknown server. Automatically adding to known hosts.
Storing the server key in known_hosts.
Error executing command: Authentication failed
The call is part of an automated process, and I cannot connect to a specific instance directly. Is there any specific configuration required that was not necessary in 3.0.1?
UPDATE #1:
I have added the verbose option. Does it give you any hints about what to do?
client -v -h localhost -a 8101 -u karaf -r 50 -d 2 " instance:connect -u karaf test1 \" feature:repo-list \" "
39 [main] INFO org.apache.sshd.common.util.SecurityUtils - BouncyCastle not registered, using the default JCE provider
Logging in as karaf
367 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Client session created
380 [main] INFO org.apache.sshd.client.session.ClientSessionImpl - Start flagging packets as pending until key exchange is done
383 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Server version string: SSH-2.0-SSHD-CORE-0.12.0
384 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Kex: server->client [aes128-ctr, hmac-sha1, none] {} {}
384 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Kex: client->server [aes128-ctr, hmac-sha1, none] {} {}
444 [sshd-SshClient[bea319b]-nio2-thread-1] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at [localhost/127.0.0.1:8101, DSA, 22:8b:f8:9d:bc:c6:40:d8:fe:52:aa:90:c0:f2:70:ec] presented unverified {} key: {}
457 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Dequeing pending packets
524 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_FAILURE
568 [sshd-SshClient[bea319b]-nio2-thread-2] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_SUCCESS
Connecting to host localhost on port 8102
Error executing command: Authentication failed
UPDATE #2:
I switched the logger to DEBUG and found this exception:
2015-01-15 11:28:48,920 | DEBUG | 5]-nio2-thread-1 | ClientSessionImpl | 28 - org.apache.sshd.core - 0.12.0 | Received SSH_MSG_SERVICE_ACCEPT
2015-01-15 11:28:48,920 | INFO | 5]-nio2-thread-1 | ClientUserAuthServiceNew | 28 - org.apache.sshd.core - 0.12.0 | Received SSH_MSG_USERAUTH_FAILURE
2015-01-15 11:28:48,920 | DEBUG | 5]-nio2-thread-1 | ClientUserAuthServiceNew | 28 - org.apache.sshd.core - 0.12.0 | Authentications that can continue: keyboard-interactive, password, publickey
2015-01-15 11:28:48,922 | DEBUG | 5]-nio2-thread-1 | Nio2Session | 28 - org.apache.sshd.core - 0.12.0 | Caught exception, now calling handler
2015-01-15 11:28:48,922 | WARN | 5]-nio2-thread-1 | ClientSessionImpl | 28 - org.apache.sshd.core - 0.12.0 | Exception caught
java.lang.IllegalStateException: No SSH_AUTH_SOCK environment variable set
at org.apache.karaf.shell.ssh.KarafAgentFactory.createClient(KarafAgentFactory.java:71)
at org.apache.sshd.client.auth.UserAuthPublicKey.init(UserAuthPublicKey.java:78)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.tryNext(ClientUserAuthServiceNew.java:212)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.processUserAuth(ClientUserAuthServiceNew.java:178)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.process(ClientUserAuthServiceNew.java:131)
at org.apache.sshd.client.session.ClientUserAuthService.process(ClientUserAuthService.java:80)
at org.apache.sshd.common.session.AbstractSession.doHandleMessage(AbstractSession.java:399)
at org.apache.sshd.common.session.AbstractSession.handleMessage(AbstractSession.java:295)
at org.apache.sshd.client.session.ClientSessionImpl.handleMessage(ClientSessionImpl.java:256)
at org.apache.sshd.common.session.AbstractSession.decode(AbstractSession.java:731)
at org.apache.sshd.common.session.AbstractSession.messageReceived(AbstractSession.java:277)
at org.apache.sshd.common.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:54)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:187)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:53)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:46)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker$2.run(Invoker.java:218)[:1.7.0_65]
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)[:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_65]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_65]
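Purely as an experiment suggested by that stack trace (KarafAgentFactory throws when SSH_AUTH_SOCK is unset, which aborts the publickey step), one thing worth trying is to provide an SSH agent socket before invoking the client; note that su - karaf resets the environment, so the agent has to be started inside that shell:
su - karaf -c 'eval "$(ssh-agent -s)" > /dev/null; client -h localhost -a 8101 -u karaf -r 50 -d 2 " instance:connect -u karaf -p karaf test1 \" feature:repo-list \" "; ssh-agent -k > /dev/null'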
