How can I determine the managed identity of the Azure VM a script is running on? - azure-resource-manager

For post-processing with AzD (Azure Developer CLI), I need to authorize the managed identity of the Azure VM the script is currently running on for the subscription selected by AzD. How can I determine the managed identity of the VM with the help of the metadata endpoint?

I created this script, authorize-vm-identity.sh, which determines the VM's resource ID (which could be in a different subscription than the resources managed by AzD) from the metadata endpoint and then obtains the managed identity's principalId to make the actual role assignment with:
#!/bin/bash
# Export the azd environment variables into this shell
source <(azd env get-values | sed 's/AZURE_/export AZURE_/g')
# Ask the instance metadata service (IMDS) for the VM's resource ID
AZURE_VM_ID=$(curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | jq -r '.compute.resourceId')
if [ -n "$AZURE_VM_ID" ]; then
  # Resolve the principalId of the VM's system-assigned managed identity
  AZURE_VM_MI_ID=$(az vm show --id "$AZURE_VM_ID" --query 'identity.principalId' -o tsv)
fi
if [ -n "$AZURE_VM_MI_ID" ]; then
  az role assignment create --role Contributor --assignee "$AZURE_VM_MI_ID" --scope "/subscriptions/$AZURE_SUBSCRIPTION_ID"
fi
Prerequisites:
Azure CLI
jq
curl
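Alternatively, the principalId can be obtained without the az vm show round trip: the oid claim of an access token issued by the IMDS identity endpoint is the managed identity's object ID. A minimal sketch, assuming a system-assigned identity is enabled on the VM:
# Request an ARM token from the IMDS identity endpoint
TOKEN=$(curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/" | jq -r '.access_token')
# Decode the JWT payload (base64url: swap the alphabet back and restore padding)
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d | jq -r '.oid'
This needs only curl and jq, which are already prerequisites above.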

Related

Airflow 2.0.2 - No user yet created

We're moving from Airflow 1.x to 2.0.2, and I'm noticing the below error in my terminal after I run docker-compose run --rm webserver initdb:
{{manager.py:727}} WARNING - No user yet created, use flask fab
command to do it.
but in my entrypoint.sh I have the below to create users:
echo "Creating airflow user: ${AIRFLOW_CREATE_USER_USER_NAME}..."
su -c "airflow users create -r ${AIRFLOW_CREATE_USER_ROLE} -u ${AIRFLOW_CREATE_USER_USER_NAME} -e ${AIRFLOW_CREATE_USER_USER_NAME}#vice.com \
-p ${AIRFLOW_CREATE_USER_PASSWORD} -f ${AIRFLOW_CREATE_USER_FIRST_NAME} -l \
${AIRFLOW_CREATE_USER_LAST_NAME}" airflow
echo "Created airflow user: ${AIRFLOW_CREATE_USER_USER_NAME} done!"
;;
Because of this error, whenever I try to run Airflow locally I still have to run the below to create a user manually every time I start it up:
docker-compose run --rm webserver bash
airflow users create \
--username name \
--firstname fname \
--lastname lname \
--password pw \
--role Admin \
--email email@email.com
Looking at the Airflow Docker entrypoint script entrypoint_prod.sh, it looks like Airflow will create an admin user for you when the container boots.
By default the admin user is 'admin' without a password.
If you want something different, set these variables: _AIRFLOW_WWW_USER_PASSWORD and _AIRFLOW_WWW_USER_USERNAME
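A minimal sketch of passing those variables at startup (the service name "webserver" and _AIRFLOW_WWW_USER_CREATE are assumptions based on the official image, not part of this answer):
# Sketch: let the image's entrypoint create the admin user on boot
docker-compose run --rm \
  -e _AIRFLOW_WWW_USER_CREATE=true \
  -e _AIRFLOW_WWW_USER_USERNAME=admin \
  -e _AIRFLOW_WWW_USER_PASSWORD='change-me' \
  webserver airflow version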
(I'm on airflow 2.2.2)
Looks like they changed the admin creation command's password from -p test to -p $DEFAULT_PASSWORD. I had to pass this DEFAULT_PASSWORD env var into the docker-compose environment for the admin user to be created. It also looks like they now suggest using the .env.localrunner file for configuration.
Here is the commit where that change was made.
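For example, a quick sketch of passing that variable through (the service name is a placeholder):
# Sketch: provide the password the entrypoint now expects
docker-compose run --rm -e DEFAULT_PASSWORD='change-me' webserver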
(I think you asked this question prior to that change being made, but maybe this will help someone in the future who had my same issue).

Azure ARM - mount StorageAccount FileShare to a linux VM

I prepared an ARM template that creates the following Azure resources: a Linux VM, a storage account, and a file share in that storage account.
The ARM template works fine, but I would like to add one thing: mounting the file share on the Linux VM (using the script from the file share blade, as proposed by Microsoft).
I would like to use the Custom Script Extension and its "commandToExecute" option to paste the inline Linux script (the one for mounting the file share).
My question is: how do I retrieve the password for the file share and then pass it as a parameter to the inline script? Is that possible? Is it possible to paste the file-share mounting script as an inline script in an ARM template? Or is there another way to complete this task? I know that I can store the script in a storage account and put its blob SAS URL into the Custom Script Extension area of the ARM template, but the question remains how to retrieve the password for the file share. Below is the script for mounting the file share:
# Create the mount point (names below are placeholders; keep them consistent)
sudo mkdir /mnt/StorageAccountName
if [ ! -d "/etc/smbcredentials" ]; then
  sudo mkdir /etc/smbcredentials
fi
# Store the storage account name and key as SMB credentials
if [ ! -f "/etc/smbcredentials/StorageAccountName.cred" ]; then
  sudo bash -c 'echo "username=xxxxx" >> /etc/smbcredentials/StorageAccountName.cred'
  sudo bash -c 'echo "password=xxxxxxx" >> /etc/smbcredentials/StorageAccountName.cred'
fi
sudo chmod 600 /etc/smbcredentials/StorageAccountName.cred
# Persist the mount in /etc/fstab, then mount it now
sudo bash -c 'echo "//StorageAccountName.file.core.windows.net/test /mnt/StorageAccountName cifs nofail,vers=3.0,credentials=/etc/smbcredentials/StorageAccountName.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs //StorageAccountName.file.core.windows.net/test /mnt/StorageAccountName -o vers=3.0,credentials=/etc/smbcredentials/StorageAccountName.cred,dir_mode=0777,file_mode=0777,serverino
You can use listKeys() from this quickstart example to retrieve the storage account key inside the template and pass it to your script:
listKeys(variables('storageAccountId'), '2019-04-01').keys[0].value
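If you want to verify the same key outside the template, a quick sketch with the Azure CLI (resource group and account names are placeholders):
# Sketch: fetch the first storage account key (the value listKeys() returns)
STORAGE_KEY=$(az storage account keys list \
  --resource-group MyResourceGroup \
  --account-name mystorageaccount \
  --query '[0].value' -o tsv)
echo "$STORAGE_KEY"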

pmrep command execution in Informatica cloud

Can anyone tell me whether it is possible to execute pmrep commands in Informatica Cloud services to import and export workflow objects?
pmrep connect -r MY_REP -d MY_DOMAIN -n MY_USER -x MY_PASSWORD
./pmrep objectexport -o workflow -f $FOLDER -n $WORKFLOW -m -s -b -r -u ${EXPORTDIR}/${FOLDER}_${WORKFLOW}.xml
That's not possible in Informatica Cloud; you don't have access to the repository, as it's hosted by Informatica.
You need to use the REST API to import and export objects from the IICS repository; the documentation is at the following link:
https://network.informatica.com/docs/DOC-17563
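A rough sketch of an export via that REST API (hedged: the host, response fields, and asset id below are assumptions based on the v3 API docs, not part of this answer; adjust for your POD/region):
# Log in to get a session id (v3 API)
LOGIN=$(curl -s -X POST "https://dm-us.informaticacloud.com/saas/public/core/api/v3/login" \
  -H "Content-Type: application/json" \
  -d '{"username":"MY_USER","password":"MY_PASSWORD"}')
SESSION_ID=$(echo "$LOGIN" | jq -r '.userInfo.sessionId')
BASE_URL=$(echo "$LOGIN" | jq -r '.products[0].baseApiUrl')
# Start an export job for one asset (ASSET_ID is a placeholder)
curl -s -X POST "$BASE_URL/public/core/api/v3/export" \
  -H "INFA-SESSION-ID: $SESSION_ID" -H "Content-Type: application/json" \
  -d '{"name":"myExport","objects":[{"id":"ASSET_ID","includeDependencies":true}]}'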

Openstack - Could not find resource admin error when launching multi-region

I'm trying to launch a multi-region cloud with devstack, but keep getting the error message
Could not find resource admin
during the install of devstack on the 2nd region. OpenStack itself doesn't even get installed on the 2nd region, while the installation runs fine on the 1st one. The only difference I see is in some configuration variables in local.conf for the second region:
REGION_NAME
HOST_IP
KEYSTONE_SERVICE_HOST
KEYSTONE_AUTH_HOST
I changed the two keystone variables so the 2nd region only authenticates against the keystone service installed on the 1st region. I already checked that the regions can reach each other using ping, and that region 1 has an endpoint available for region 2 with the keystone service:
openstack endpoint list | grep keystone
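For reference, a sketch of the relevant local.conf section for the 2nd region, reconstructed from the variables above and the log below (everything else omitted):
[[local|localrc]]
HOST_IP=192.100.200.10
REGION_NAME=RegionTwo
KEYSTONE_SERVICE_HOST=192.100.100.10
KEYSTONE_AUTH_HOST=192.100.100.10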
Here is a sample of the final output when I run ./stack.sh to install devstack on the 2nd region. I'd appreciate any help. Thanks!
...
devstack/functions-common:time_stop:L2354: START_TIME[$name]=
devstack/functions-common:time_stop:L2355: TOTAL_TIME[$name]=0
./stack.sh:main:L998: is_service_enabled keystone
devstack/functions-common:is_service_enabled:L2046: return 0
./stack.sh:main:L999: echo_summary 'Starting Keystone'
./stack.sh:echo_summary:L379: [[ -t 3 ]]
./stack.sh:echo_summary:L379: [[ True != \T\r\u\e ]]
./stack.sh:echo_summary:L385: echo -e Starting Keystone
./stack.sh:main:L1001: '[' 192.100.100.10 == 192.100.200.10 ']'
./stack.sh:main:L1007: is_service_enabled tls-proxy
/home/stack/devstack/functions-common:is_service_enabled:L2046: return 1
./stack.sh:main:L1016: cat
./stack.sh:main:L1031: source /home/stack/devstack/userrc_early
devstack/userrc_early:source:L4: export OS_IDENTITY_API_VERSION=3
devstack/userrc_early:source:L4: OS_IDENTITY_API_VERSION=3
devstack/userrc_early:source:L5: export OS_AUTH_URL=http://192.100.100.10:35357
devstack/userrc_early:source:L5: OS_AUTH_URL=http://192.100.100.10:35357
devstack/userrc_early:source:L6: export OS_USERNAME=admin
devstack/userrc_early:source:L6: OS_USERNAME=admin
devstack/userrc_early:source:L7: export OS_USER_DOMAIN_ID=default
devstack/userrc_early:source:L7: OS_USER_DOMAIN_ID=default
devstack/userrc_early:source:L8: export OS_PASSWORD=openstack
devstack/userrc_early:source:L8: OS_PASSWORD=openstack
devstack/userrc_early:source:L9: export OS_PROJECT_NAME=admin
devstack/userrc_early:source:L9: OS_PROJECT_NAME=admin
devstack/userrc_early:source:L10: export OS_PROJECT_DOMAIN_ID=default
devstack/userrc_early:source:L10: OS_PROJECT_DOMAIN_ID=default
devstack/userrc_early:source:L11: export OS_REGION_NAME=RegionTwo
devstack/userrc_early:source:L11: OS_REGION_NAME=RegionTwo
./stack.sh:main:L1033: create_keystone_accounts
devstack/lib/keystone:create_keystone_accounts:L376: local admin_tenant
devstack/lib/keystone:create_keystone_accounts:L377: openstack project show admin -f value -c id
Could not find resource admin
devstack/lib/keystone:create_keystone_accounts:L377: admin_tenant=
devstack/lib/keystone:create_keystone_accounts:L1: exit_trap
./stack.sh:exit_trap:L474: local r=1
./stack.sh:exit_trap:L475: jobs -p
./stack.sh:exit_trap:L475: jobs=
./stack.sh:exit_trap:L478: [[ -n '' ]]

Openstack-Folsom keystone script fails to configure

Based on this guide: https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst#openstack-folsom-install-guide , I tried running these scripts, but they fail despite me setting HOST_IP & EXT_HOST_IP.
./keystone_basic.sh
./keystone_endpoints_basic.sh
Below is the error log received:-
-keystone:error:unrecognized arguments: service id of 18ea5916544429bed2c84af0303077
I have provided information such as tenant_name, tenant_id, and so on in a source file, but the arguments used by the provided script are not recognized by the system. Below are the details of the OS I use.
I created VMs instead of using physical machines, installed with Ubuntu 12.04 LTS.
Please advise on how to tackle this issue.
Thanks.
I had the same problem. I am using Ubuntu 12.04 LTS. After running keystone help user-create, the id arguments appear with underscores:
Optional arguments:
...
--service_id <service-id>
Change --service-id to --service_id with a global replace [using the command line]:
# sed -i 's/--service-id/--service_id/g' /path/to/script.sh
Then reset keystone and its database entries:
mysql -u root -ppassword -e "drop database keystone"
mysql -u root -ppassword -e "create database keystone"
mysql -u root -ppassword -e "grant all privileges on keystone.* TO 'keystone'#'%' identified by 'password'"
mysql -u root -ppassword -e "grant all privileges on keystone.* TO 'keystone'#'localhost' identified by 'password'"
service keystone restart
keystone-manage db_sync
