Can't validate keystone endpoint when trying to define an OpenStack cloud for juju

I am trying to define an OpenStack cloud for juju. To do this, I have first deployed Devstack using the following configuration in the local.conf file:
$ cat local.conf | grep -v "#" | grep -v "^$"
[[local|localrc]]
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=172.29.21.181
FLOATING_RANGE=172.29.20.1/22
Q_FLOATING_ALLOCATION_POOL=start=172.29.21.182,end=172.29.21.184
PUBLIC_NETWORK_GATEWAY=172.29.21.181
ENABLED_SERVICES+=,tls-proxy
ENABLED_SERVICES+=,g-api,g-reg
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
After a successful deployment, these are the endpoints:
$ openstack endpoint list
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
| 0b489b8a683d4be489448230437e39ca | RegionOne | cinder | block-storage | True | public | https://172.29.21.181/volume/v3/$(project_id)s |
| 0b9e96cfe0b440b781171ac0b082de3a | RegionOne | keystone | identity | True | admin | https://172.29.21.181/identity |
| 29ce5b2061dd474492f3aebda164acd0 | RegionOne | cinderv2 | volumev2 | True | public | https://172.29.21.181/volume/v2/$(project_id)s |
| 45e10e75eb6848f5a934674373962e11 | RegionOne | glance | image | True | public | https://172.29.21.181/image |
| 8c35460b8c0d4c21ac9b7dd27bc92c48 | RegionOne | keystone | identity | True | public | https://172.29.21.181/identity |
| af451150c3094497936fd6877380d877 | RegionOne | placement | placement | True | public | https://172.29.21.181/placement |
| b3907f627f684ada8526b89c2c9683f9 | RegionOne | neutron | network | True | public | https://172.29.21.181:9696/ |
| c642b07700b54be39e1dd537e8c0f8be | RegionOne | nova | compute | True | public | https://172.29.21.181/compute/v2.1 |
| dbb94215bc89457383a390a0490a89f6 | RegionOne | nova_legacy | compute_legacy | True | public | https://172.29.21.181/compute/v2/$(project_id)s |
| e1037ed336d541b080e365caa0020e78 | RegionOne | cinderv3 | volumev3 | True | public | https://172.29.21.181/volume/v3/$(project_id)s |
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
But when I try to add the cloud to juju using the "juju add-cloud" command (I am following the instructions at this link: https://juju.is/docs/olm/openstack), I get the following error:
$ juju add-cloud openstack
This operation can be applied to both a copy on this client and to the one on a controller.
No current controller was detected and there are no registered controllers on this client: either bootstrap one or register one.
Cloud Types
lxd
maas
manual
openstack
vsphere
Select cloud type: openstack
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181/identity
Can't validate endpoint: No Openstack server running at https://172.29.21.181/identity
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181/identity/v3
Can't validate endpoint: No Openstack server running at https://172.29.21.181/identity/v3
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: http://172.29.21.181/identity
Can't validate endpoint: No Openstack server running at http://172.29.21.181/identity
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181:5000/v3
Can't validate endpoint: No Openstack server running at https://172.29.21.181:5000/v3
I can curl the URL:
$ curl https://172.29.21.181/identity
{"versions": {"values": [{"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "https://172.29.21.181/identity/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}
And I can connect to the port where Keystone is listening:
$ nc -vz 172.29.21.181 5000
Connection to 172.29.21.181 5000 port [tcp/*] succeeded!
I also set no_proxy=127.0.0.1,localhost,172.29.21.181 and NO_PROXY=127.0.0.1,localhost,172.29.21.181 as environment variables (shown below), because solutions I found online suggested this might fix the problem, but it didn't work.
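For reference, these are the exact exports I used:
$ export no_proxy=127.0.0.1,localhost,172.29.21.181
$ export NO_PROXY=127.0.0.1,localhost,172.29.21.181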
Apart from this cloud, I have another one deployed through OpenStack-Ansible. On that cloud I have not encountered this error; the only difference I see is that its URL is https://{HOST_IP}:5000/v3.
If anyone has any ideas it would be very helpful, thank you.

I have found a way to bypass this error, though I don't know exactly why it works. I have modified the OS_AUTH_URL environment variable to end in "/v3":
$ unset OS_AUTH_URL
$ export OS_AUTH_URL=https://172.29.21.181/identity/v3
Now, after accepting it as the suggested value when running "juju add-cloud", I don't get the error when running "juju bootstrap". I guess that when you enter the URL manually, juju validates it and that check fails for some reason in the code; with the value taken from OS_AUTH_URL the check is skipped, so "juju bootstrap" uses the URL ending in "/v3" directly, which is correct and works.
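To confirm which endpoint the client actually recorded, the local cloud definition can be inspected (a minimal check; the clouds.yaml path matches the ~/.local/share/juju tree that appears in the bootstrap output below):
$ juju clouds --client
$ grep -A 3 openstack ~/.local/share/juju/clouds.yaml    # the endpoint should end in /identity/v3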
Now I get the following error:
$ juju bootstrap openstack --verbose
Adding contents of "/opt/stack/.local/share/juju/ssh/juju_id_rsa.pub" to authorized-keys
Creating Juju controller "openstack-regionone" on openstack/RegionOne
Loading image metadata
ERROR failed to bootstrap model: no image metadata found
But I guess I just have to add Swift to my deployment and follow the instructions at this link: https://juju.is/docs/olm/cloud-image-metadata (a sketch of that workflow follows).
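For reference, a minimal sketch of that workflow once an image exists in Glance; the image ID and series below are placeholders/assumptions, and the exact flags may vary by Juju version (check juju metadata generate-image --help):
$ openstack image list                       # note the ID of a suitable cloud image
$ mkdir -p ~/simplestreams
$ juju metadata generate-image -d ~/simplestreams -i <image-id> -s focal -r RegionOne -u https://172.29.21.181/identity/v3
$ juju bootstrap openstack --metadata-source ~/simplestreams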

Related

OpenStack Mistral workflow error while executing using GUI

I am getting an error while executing a simple OpenStack Mistral workflow on an OpenStack (Wallaby) DevStack environment. I can execute the workflow successfully from the CLI, but it fails when I try the same thing from the GUI.
root@openstack:~# openstack workflow definition show test_get
---
version: '2.0'

test_get:
  description: Test Get.
  tasks:
    my_task:
      action: std.http
      input:
        url: http://www.google.com
root@openstack:~# openstack workflow execution create test_get
+--------------------+--------------------------------------+
| Field | Value |
+--------------------+--------------------------------------+
| ID | 482e3803-45ef-411e-a0f4-1427abfc8649 |
| Workflow ID | 9dc0d4a4-8c5b-4288-8126-e1147da3bd02 |
| Workflow name | test_get |
| Workflow namespace | |
| Description | |
| Task Execution ID | <none> |
| Root Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2021-06-21 16:58:54 |
| Updated at | 2021-06-21 16:58:54 |
| Duration | ... |
+--------------------+--------------------------------------+
But while executing it in the GUI, I get: Execution is missing field "workflow_identifier"
I faced the same issue in the Yoga release. I spent a few hours investigating it and found an interesting thing:
/usr/local/lib/python3.8/dist-packages/mistralclient/api/v2/executions.py
class ExecutionManager(base.ResourceManager):
    resource_class = Execution

    def create(self, wf_identifier='', namespace='',
               workflow_input=None, description='', source_execution_id=None,
               **params):
        self._ensure_not_empty(
            workflow_identifier=wf_identifier or source_execution_id
        )
But! In the web form we are passing workflow_identifier instead of wf_identifier:
/usr/local/lib/python3.8/dist-packages/mistraldashboard/workflows/forms.py
def handle(self, request, data):
    try:
        data['workflow_identifier'] = data.pop('workflow_name')
        data['workflow_input'] = {}
        for param in self.workflow_parameters:
            value = data.pop(param)
            if value == "":
                value = None
            data['workflow_input'][param] = value

        ex = api.execution_create(request, **data)
The fix is to rename workflow_identifier to wf_identifier in the form, like this:
data['wf_identifier'] = data.pop('workflow_name')
After that, mistral-dashboard creates executions fine.
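One practical note (an assumption on my part, not from the original investigation): Horizon typically runs under Apache/mod_wsgi, so the edited module is only picked up after a reload:
$ sudo systemctl restart apache2    # or httpd on RHEL-family systems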

AWS Amplify API does not exist

I am following the Automated setup: https://docs.amplify.aws/lib/restapi/getting-started/q/platform/js
I created the Data model using the Amplify UI and ran
amplify pull --appId XXXX --envName staging
and I got
Successfully generated models. Generated models can be found in /dev/extension/vue-extension/src
Post-pull status:
Current Environment: staging
| Category | Resource name | Operation | Provider plugin |
| -------- | ------------- | --------- | ----------------- |
| Api | my_custom_name | No Change | awscloudformation |
but when I run this code
import Amplify, { API } from 'aws-amplify';
import awsconfig from '@/aws-exports';
Amplify.configure(awsconfig);
API.get('my_custom_name', '/rankings').then(items => console.log(items)).catch(e=> console.log(e ));
I get
API my_custom_name does not exist
Your import awsconfig from '@/aws-exports'; looks incorrect.
Try changing it to import awsconfig from './aws-exports';
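Not part of the original answer, but a hedged way to double-check the API name the client must use (assuming the Amplify CLI is configured for this project):
$ amplify status                                        # lists the Api resource name
$ grep -A 3 aws_cloud_logic_custom src/aws-exports.js   # the "name" field here is what API.get() expects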

CentOS 7.8, OpenStack Mitaka: the Glance image service on the controller node creates an image with size 0

Following the official Mitaka documentation, Step 3 says: "Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so all projects can access it."
I executed the following command:
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
The size of the image in the output is zero. How should I investigate this? (See the check sketched after the output.)
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | d41d8cd98f00b204e9800998ecf8427e |
| container_format | bare |
| created_at | 2020-05-24T14:45:54Z |
| disk_format | qcow2 |
| file | /v2/images/c89f6866-0c48-4ee5-84f1-bf7fa0998edf/file |
| id | c89f6866-0c48-4ee5-84f1-bf7fa0998edf |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | a9629b19eb9348adbf02a5432dd79411 |
| protected | False |
| schema | /v2/schemas/image |
| size | 0 |
| status | active |
| tags | |
| updated_at | 2020-05-24T14:45:54Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
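One hedged way to start checking (not from the original thread): d41d8cd98f00b204e9800998ecf8427e is the MD5 checksum of zero bytes, so Glance stored an empty upload. This usually means the --file path was missing or unreadable in the directory where the command ran, which clients of that era did not always treat as an error:
$ ls -l cirros-0.3.4-x86_64-disk.img       # confirm the file exists and is non-empty
$ md5sum cirros-0.3.4-x86_64-disk.img      # should not be the empty-input checksum above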

glance doesn't work due to authentication failure

I'm setting up OpenStack on some machines. I was following this guide: http://docs.openstack.org/liberty/install-guide-ubuntu/ until I ran into this problem:
When verifying the Image service (Glance), I get the following error:
$ cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=passw0rd
export OS_AUTH_URL=http://Renaissance:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
$ source admin-openrc.sh
$ glance --debug image-create --name "cirros" \
> --file cirros-0.3.4-x86_64-disk.img \
> --disk-format qcow2 --container-format bare \
> --visibility public --progress
curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}7ce8d893ef6cdaca2ed5a876c8211a841455ba65' -H 'Content-Type: application/octet-stream' http://Renaissance:9292/v2/schemas/image
Request returned failure status 401.
Invalid OpenStack Identity credentials.
I get the same error using any other glance command (e.g. glance image-list).
I think my configuration is correct, since I followed the guide.
Here are my OpenStack services, projects, users, roles, and endpoints:
+----------------------------------+----------+----------+
| ID | Name | Type |
+----------------------------------+----------+----------+
| bf585630a5cb475b9e883493de3813fa | glance | image |
| fc29e468dae849e6afb97ecc3bf487f6 | keystone | identity |
+----------------------------------+----------+----------+
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 0bc473b2e77a4a9bb7871ed2afacb995 | admin |
| dcaf480621164c409b6704c3f42e0869 | service |
| e9f709d860fe46e2819b6bf1c78ccd0f | nonadmin |
+----------------------------------+----------+
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 485374adcbe54ce5b9ef465b84aa2c9f | admin |
| 7447f4cd56f64ccfb111cba74f9a4b92 | nonadmin |
| d9ffc32240d24328b10af8b2550ec414 | glance |
+----------------------------------+----------+
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 466fea231ef54d3ca4564fb42f51bb5c | admin |
| a36c726d27f04ebf92d336c3acfcd945 | user |
+----------------------------------+-------+
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------+
| 01f62a7b9f7f4fa782e8bc695e74afc1 | RegionOne | glance | image | True | internal | http://Renaissance:9292 |
| abb7e5052d8646428e82ef58ca21b376 | RegionOne | keystone | identity | True | public | http://Renaissance:5000/v2.0 |
| d5b3180255b44a0eafe0810a20e104bc | RegionOne | glance | image | True | public | http://Renaissance:9292 |
| e0392842c6f64ac389a5688bc2581192 | RegionOne | keystone | identity | True | internal | http://Renaissance:5000/v2.0 |
| e0eb3dd0ed774669bce9a74dd3831c05 | RegionOne | keystone | identity | True | admin | http://Renaissance:35357/v2.0 |
| ec855dca8f87454e997fd55c47f17703 | RegionOne | glance | image | True | admin | http://Renaissance:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------+
My auth configuration of glance (in glance-api.conf and glance-registry.conf) is listed below:
...
[keystone_authtoken]
# Complete public Identity API endpoint. (string value)
auth_uri = http://Renaissance:5000
auth_url = http://Renaissance:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = passw0rd
...
And I can get a token using the openstack client:
$ openstack token issue
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2016-10-01T01:16:48.482839Z |
| id | 2a4e052a2c4140a28f550158d95ecd3b |
| project_id | 0bc473b2e77a4a9bb7871ed2afacb995 |
| user_id | 485374adcbe54ce5b9ef465b84aa2c9f |
+------------+----------------------------------+
I'm guessing it's an API version problem; I've tried changing the version number in the URI, but it didn't work. Any help is appreciated. Thanks!
In your glance configuration the project name is service, but your env var project name is admin.
Solutions:
ensure passw0rd is the real password for the glance account in the service project
or change the glance conf to use the admin project instead
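A quick way to test the first option (a minimal sketch, reusing the credentials from [keystone_authtoken] after sourcing admin-openrc.sh):
$ export OS_PROJECT_NAME=service OS_TENANT_NAME=service OS_USERNAME=glance OS_PASSWORD=passw0rd
$ openstack token issue    # succeeds only if these are valid credentials for glance in the service project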

Routing checking user role Symfony2

I have two bundles, and I want the routes from one of the bundles to be accessible only if the user has a defined role.
The logic of the route matcher should be:
If the user has the role:
| name | path | success |
|------------------|-------|---------|
| bundle_1_route_1 | / | false |
| bundle_1_route_2 | /test | true |
If the user doesn't have the role:
| name | path | success |
|------------------|-------|---------|
| bundle_1_route_1 | / | false |
| bundle_1_route_2 | /test | false |
| bundle_2_route_1 | /aaa | false |
| bundle_2_route_2 | /test | true |
The problem is that I can't do this using security access control, because the paths are the same.
I tried the @Security annotation: http://symfony.com/doc/current/bundles/SensioFrameworkExtraBundle/annotations/security.html
But I get an access denied on bundle_1_route_2 when the user doesn't have the role, and the other URLs are never checked.
I want the matcher to keep checking all the available URLs when the role does not match.
I found another solution, but it is not very clean, and it will raise an error if the session does not exist:
bundle_1:
    resource: "@Bundle1/Controller/"
    type: annotation
    prefix: /
    condition: "'ROLE_FILTER' in request.getSession().get('bundle1.user').getRoles()"
Is there a way to create completely custom conditions on routing?
