OpenStack - "Could not find resource admin" error when launching multi-region

I'm trying to launch a multi-region cloud with devstack, but keep getting the error message
Could not find resource admin
during the install of devstack on the 2nd region. The installation never completes on the 2nd region, while it runs fine on the 1st one. The only differences I see are some configuration variables in local.conf for the second region:
REGION_NAME
HOST_IP
KEYSTONE_SERVICE_HOST
KEYSTONE_AUTH_HOST
I changed the two Keystone variables so that the 2nd region authenticates only against the Keystone service installed on the 1st region. I have already checked that the regions can reach each other with ping, and that region 1 has a Keystone endpoint available for region 2:
openstack endpoint list | grep keystone
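For reference, a minimal sketch of what the second region's local.conf might contain. The IP addresses and region names here are taken from the log below; `KEYSTONE_REGION_NAME` tells the second region which region the shared Keystone endpoint belongs to, but variable names beyond the four listed above should be checked against the devstack multi-region documentation for your release:

```ini
[[local|localrc]]
REGION_NAME=RegionTwo
HOST_IP=192.100.200.10
KEYSTONE_SERVICE_HOST=192.100.100.10
KEYSTONE_AUTH_HOST=192.100.100.10
KEYSTONE_REGION_NAME=RegionOne
```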
Here is a sample of the final output when I run ./stack.sh to install devstack on the 2nd region. I'd appreciate any help. Thanks!
...
devstack/functions-common:time_stop:L2354: START_TIME[$name]=
devstack/functions-common:time_stop:L2355: TOTAL_TIME[$name]=0
./stack.sh:main:L998: is_service_enabled keystone
devstack/functions-common:is_service_enabled:L2046: return 0
./stack.sh:main:L999: echo_summary 'Starting Keystone'
./stack.sh:echo_summary:L379: [[ -t 3 ]]
./stack.sh:echo_summary:L379: [[ True != \T\r\u\e ]]
./stack.sh:echo_summary:L385: echo -e Starting Keystone
./stack.sh:main:L1001: '[' 192.100.100.10 == 192.100.200.10 ']'
./stack.sh:main:L1007: is_service_enabled tls-proxy
/home/stack/devstack/functions-common:is_service_enabled:L2046: return 1
./stack.sh:main:L1016: cat
./stack.sh:main:L1031: source /home/stack/devstack/userrc_early
devstack/userrc_early:source:L4: export OS_IDENTITY_API_VERSION=3
devstack/userrc_early:source:L4: OS_IDENTITY_API_VERSION=3
devstack/userrc_early:source:L5: export OS_AUTH_URL=http://192.100.100.10:35357
devstack/userrc_early:source:L5: OS_AUTH_URL=http://192.100.100.10:35357
devstack/userrc_early:source:L6: export OS_USERNAME=admin
devstack/userrc_early:source:L6: OS_USERNAME=admin
devstack/userrc_early:source:L7: export OS_USER_DOMAIN_ID=default
devstack/userrc_early:source:L7: OS_USER_DOMAIN_ID=default
devstack/userrc_early:source:L8: export OS_PASSWORD=openstack
devstack/userrc_early:source:L8: OS_PASSWORD=openstack
devstack/userrc_early:source:L9: export OS_PROJECT_NAME=admin
devstack/userrc_early:source:L9: OS_PROJECT_NAME=admin
devstack/userrc_early:source:L10: export OS_PROJECT_DOMAIN_ID=default
devstack/userrc_early:source:L10: OS_PROJECT_DOMAIN_ID=default
devstack/userrc_early:source:L11: export OS_REGION_NAME=RegionTwo
devstack/userrc_early:source:L11: OS_REGION_NAME=RegionTwo
./stack.sh:main:L1033: create_keystone_accounts
devstack/lib/keystone:create_keystone_accounts:L376: local admin_tenant
devstack/lib/keystone:create_keystone_accounts:L377: openstack project show admin -f value -c id
Could not find resource admin
devstack/lib/keystone:create_keystone_accounts:L377: admin_tenant=
devstack/lib/keystone:create_keystone_accounts:L1: exit_trap
./stack.sh:exit_trap:L474: local r=1
./stack.sh:exit_trap:L475: jobs -p
./stack.sh:exit_trap:L475: jobs=
./stack.sh:exit_trap:L478: [[ -n '' ]]
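One way to narrow this down is to replay the failing call by hand from the second region, using the same values stack.sh exports in userrc_early (copied from the log above). This is a diagnostic sketch, not part of the install:

```shell
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://192.100.100.10:35357
export OS_USERNAME=admin
export OS_USER_DOMAIN_ID=default
export OS_PASSWORD=openstack
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_ID=default
export OS_REGION_NAME=RegionTwo

# The exact call that fails inside create_keystone_accounts:
openstack project show admin -f value -c id

# If the same call succeeds when pointed at RegionOne, the admin project
# exists but no identity endpoint is registered for RegionTwo yet.
openstack --os-region-name RegionOne project show admin -f value -c id
```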

Related

How can I determine the managed identity of the Azure VM a script is running on?

For post-processing with AzD (Azure Developer CLI) I need to authorize the managed identity of the Azure VM the script is currently running on for the subscription selected by AzD. How can I determine the managed identity of the VM with the help of the metadata endpoint?
I created this script, authorize-vm-identity.sh, which determines the VM's resourceId (which could be in a different subscription than the actual resources managed by AzD) from the metadata endpoint and then obtains the managed identity's principalId to make the actual role assignment:
#!/bin/bash
# Pull the azd environment (AZURE_* values) into this shell.
source <(azd env get-values | sed 's/AZURE_/export AZURE_/g')
# Query the instance metadata service (IMDS) for this VM's resource ID.
AZURE_VM_ID=$(curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | jq -r '.compute.resourceId')
if [ -n "$AZURE_VM_ID" ]; then
    # Resolve the principal ID of the VM's managed identity.
    AZURE_VM_MI_ID=$(az vm show --id "$AZURE_VM_ID" --query 'identity.principalId' -o tsv)
fi
if [ -n "$AZURE_VM_MI_ID" ]; then
    # Grant the identity Contributor on the subscription selected by azd.
    az role assignment create --role Contributor --assignee "$AZURE_VM_MI_ID" --scope "/subscriptions/$AZURE_SUBSCRIPTION_ID"
fi
Prerequisites:
Azure CLI
jq
curl
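The jq filter used in the script can be sanity-checked locally against a canned IMDS response; the resourceId below is made up for illustration:

```shell
# Feed a fake metadata document through the same jq filter the script uses.
SAMPLE='{"compute":{"resourceId":"/subscriptions/0000/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/vm1"}}'
echo "$SAMPLE" | jq -r '.compute.resourceId'
# prints /subscriptions/0000/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/vm1
```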

Airflow 2.0.2 - No user yet created

We're moving from Airflow 1.x to 2.0.2, and I'm noticing the below error in my terminal after I run docker-compose run --rm webserver initdb:
{{manager.py:727}} WARNING - No user yet created, use flask fab
command to do it.
but in my entrypoint.sh I have the below to create users:
echo "Creating airflow user: ${AIRFLOW_CREATE_USER_USER_NAME}..."
su -c "airflow users create -r ${AIRFLOW_CREATE_USER_ROLE} -u ${AIRFLOW_CREATE_USER_USER_NAME} -e ${AIRFLOW_CREATE_USER_USER_NAME}@vice.com \
-p ${AIRFLOW_CREATE_USER_PASSWORD} -f ${AIRFLOW_CREATE_USER_FIRST_NAME} -l \
${AIRFLOW_CREATE_USER_LAST_NAME}" airflow
echo "Created airflow user: ${AIRFLOW_CREATE_USER_USER_NAME} done!"
;;
Because of this error whenever I try to run airflow locally I still have to run the below to create a user manually every time I start up airflow:
docker-compose run --rm webserver bash
airflow users create \
--username name \
--firstname fname \
--lastname lname \
--password pw \
--role Admin \
--email email@email.com
Looking at the Airflow Docker entrypoint script, entrypoint_prod.sh, it looks like Airflow will create an admin user for you when the container boots.
By default the admin user is 'admin' without a password.
If you want something different, set these variables: _AIRFLOW_WWW_USER_PASSWORD and _AIRFLOW_WWW_USER_USERNAME
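In a docker-compose setup that could look like the following fragment; the service name and image tag are placeholders:

```yaml
# Fragment of docker-compose.yml: the official image's entrypoint creates
# the web admin user from these variables on first boot.
services:
  webserver:
    image: apache/airflow:2.2.2
    environment:
      _AIRFLOW_WWW_USER_CREATE: "true"
      _AIRFLOW_WWW_USER_USERNAME: admin
      _AIRFLOW_WWW_USER_PASSWORD: change-me
```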
(I'm on airflow 2.2.2)
Looks like they changed the admin creation command password from -p test to -p $DEFAULT_PASSWORD. I had to pass in this DEFAULT_PASSWORD env var to the docker-compose environment for the admin user to be created. It also looks like they now suggest using the .env.localrunner file for configuration.
Here is the commit where that change was made.
(I think you asked this question prior to that change being made, but maybe this will help someone in the future who had my same issue).

How do you access Airflow Web Interface?

Hi, I am taking a DataCamp class on how to use Airflow, and it shows how to create DAGs once you have access to an Airflow web interface.
Is there an easy way to create an account in the Airflow web interface? I am very lost on how to do this. Or is this an enterprise tool where they provide you access once you pay?
You must do this in the terminal. Run these commands:
export AIRFLOW_HOME=~/airflow
AIRFLOW_VERSION=2.2.5
PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
airflow standalone
Then, in the terminal output, you can see the username and password provided.
Then, open your browser and go to:
localhost:8080
and enter the username and password.
Airflow has a web interface by default, and the default username/password is airflow/airflow.
You can run it using:
airflow webserver --port 8080
then open the link: http://localhost:8080
If you want to create a new user, use this command (this is the Airflow 1.x syntax; in Airflow 2.x the equivalent is airflow users create):
airflow create_user [-h] [-r ROLE] [-u USERNAME] [-e EMAIL] [-f FIRSTNAME]
[-l LASTNAME] [-p PASSWORD] [--use_random_password]
learn more about Running Airflow locally
You should install it; it is a Python package, not a website to register on.
The easiest way to install Airflow is:
pip install apache-airflow
if you need extra packages with it:
pip install apache-airflow[postgres,gcp]
finally, run the webserver and the scheduler in different terminals:
airflow webserver # it is by default 8080
airflow scheduler

How can I auto-create the .docker folder in the home directory when spinning up a VM cluster (gce_vm_cluster) on gcloud through R?

I create VMs using the following command in R:
vms <- gce_vm_cluster(vm_prefix=vm_base_name,
cluster_size=cluster_size,
docker_image = my_docker,
ssh_args = list(username="test_user",
key.pub="/home/test_user/.ssh/google_compute_engine.pub",
key.private="/home/test_user/.ssh/google_compute_engine"),
predefined_type = "n1-highmem-2")
Now when I SSH into the VMs, I do not find the .docker folder in the home directory:
test_user@test_server_name:~$ gcloud beta compute --project "my_test_project" ssh --zone "us-central1-a" "r-vm3"
test_user@r-vm3 ~ $ ls -a
. .. .bash_history .bash_logout .bash_profile .bashrc .ssh
Now the below command gives an error (obviously):
test_user@r-vm3 ~ $ docker pull gcr.io/my_test_project/myimage:version1
Unable to find image 'gcr.io/my_test_project/myimage:version1' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I need to run the docker-credential-gcr configure-docker command to get the .docker/config.json folder/file:
test_user@r-vm3 ~ $ docker-credential-gcr configure-docker
/home/test_user/.docker/config.json configured to use this credential helper for GCR registries
test_user@r-vm3 ~ $ ls -a
. .. .bash_history .bash_logout .bash_profile .bashrc .docker .ssh
Now,
test_user@r-vm3 ~ $ docker pull gcr.io/my_test_project/myimage:version1
version1: Pulling from my_test_project/myimage
Digest: sha256:98abc76543d2e10987f6ghi5j4321098k7654321l0987m65no4321p09qrs87654t
Status: Image is up to date for gcr.io/my_test_project/myimage:version1
gcr.io/my_test_project/myimage:version1
What I am trying to resolve:
I need .docker/config.json to appear on the VMs without SSHing in and running the docker-credential-gcr configure-docker command manually
How about creating a bash script, uploading it to a Cloud Storage bucket, and calling it while creating the cluster? Also, you mentioned "R". Are you talking about an R script?
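Along those lines, one option is to run docker-credential-gcr at boot via an instance startup script rather than a bucket. A sketch using plain gcloud (whether gce_vm_cluster can pass metadata through directly would need checking against the googleComputeEngineR docs; the file name and username here are assumptions matching the question):

```shell
# configure-docker.sh runs as root on every boot; write the Docker config
# into the target user's home so their docker CLI picks it up.
cat > configure-docker.sh <<'EOF'
#!/bin/bash
su - test_user -c "docker-credential-gcr configure-docker"
EOF

# Attach it as a startup script when creating the instance.
gcloud compute instances create r-vm3 \
    --zone us-central1-a \
    --metadata-from-file startup-script=configure-docker.sh
```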

Register Designate with Keystone

I have followed this guide for the setup of Designate:
http://docs.openstack.org/developer/designate/install/ubuntu.html
The guide has exactly the workflow I was looking for.
I need to set up Designate using the PowerDNS backend, and the guide covers that.
But on registering Designate with Keystone it lacks detail.
Could someone please help me with this?
Now I am trying to access http://IP.Address:9001/v2/command.
It gives error as follows:
Authentication required
Error log from designate-api:
2015-10-20 03:58:36.917 20993 WARNING keystoneclient.middleware.auth_token [-] Unable to find authentication token in headers
2015-10-20 03:58:36.917 20993 INFO keystoneclient.middleware.auth_token [-] Invalid user token - rejecting request
2015-10-20 03:58:36.917 20993 INFO eventlet.wsgi [-] 61.12.45.30 - - [20/Oct/2015 03:58:36] "GET /v1/ HTTP/1.1" 401 217 0.000681
I found a way to do it. Here are the detailed steps.
Registering Designate with Keystone:
Keystone setup:
apt-get install keystone
Edit /etc/keystone/keystone.conf and change the [database] section:
connection = mysql://keystone:keystone@localhost/keystone
rm /var/lib/keystone/keystone.db
$ mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'keystone';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'keystone';
mysql> exit
pip install mysql-python
su -s /bin/sh -c "keystone-manage db_sync" keystone
Execute the following command and note down the value:
openssl rand -hex 10
Edit /etc/keystone/keystone.conf and change the [DEFAULT] section, replacing ADMIN_TOKEN with the results of the command:
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = ADMIN_TOKEN
Configure the log directory. Edit the /etc/keystone/keystone.conf file and update the [DEFAULT] section:
[DEFAULT]
...
log_dir = /var/log/keystone
service keystone restart
Users, tenants, service and endpoint creation:
export OS_SERVICE_TOKEN=token_value
(use the token value generated above)
export OS_SERVICE_ENDPOINT=http://localhost:35357/v2.0
keystone tenant-create --name service --description "Service Tenant" --enabled true
keystone service-create --type dns --name designate --description="Designate"
keystone endpoint-create --service designate --publicurl http://127.0.0.1:9001/v1 --adminurl http://127.0.0.1:9001/v1 --internalurl http://127.0.0.1:9001/v1
keystone user-create --name dnsaas --tenant service --pass dnsaas --enabled true
keystone role-create --name=admin
keystone user-role-add --user dnsaas --tenant service --role admin
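Before installing the client, you can sanity-check the registration with the same legacy keystone CLI used throughout this guide:

```shell
keystone service-list
keystone endpoint-list
keystone user-role-list --user dnsaas --tenant service
```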
apt-get install python-designateclient
Create an openrc file:
$ vi openrc
export OS_USERNAME=dnsaas
export OS_PASSWORD=dnsaas
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://localhost:5000/v2.0/
export OS_AUTH_STRATEGY=keystone
export OS_REGION_NAME=RegionOne
Source the openrc file:
. openrc
Note: execute or restart the designate-central and designate-api services.
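On the Ubuntu releases this guide targets, that restart might look like:

```shell
service designate-central restart
service designate-api restart
```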
Then run the designate domain-list command:
designate domain-list
If the above command returns without errors, everything is working.
