Azure Bicep multiple scopes in template - azure-resource-manager

After using Terraform for a very long time, I decided to start learning Azure Bicep. So far I am trying to get a grip on the logic, and I have been playing around with deploying a storage account and a key vault. What I am doing here is the following:
create a storage account
use existing key vault to store storage account connection string as secret
create a key based on the storage account name
And this works as I expected.
So I wanted to take one step forward, and here is where I am a bit confused.
What I wanted to do is use the same Bicep template to create a new secret, but in a different key vault in a different resource group.
Now, according to my understanding of the Azure documentation, the template comes with a default scope, which in my specific case targets my default subscription. To run my Bicep template from the terminal I use the command
az deployment group create -f ./template.bicep -g <resource-group-name>
and this is my template:
// Default values I'm using to test
param keyVaultName string = '<keyvault-name>'
param managedIdentityName string = 'test-managed-identity'
param tenantCodes array = [
'elhm'
'feor'
]
// I'm using a prefix so I don't need to create additional arrays
var keyVaultKeyPrefix = 'Client-Key-'
var storagePrefix = 'sthrideveur'
// Get a reference to key vault
resource keyVault 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
name: keyVaultName
}
// Create a managed identity
resource managedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
name: managedIdentityName
location: resourceGroup().location
}
// Grant permissions to key vault
resource accessPolicy 'Microsoft.KeyVault/vaults/accessPolicies@2019-09-01' = {
name: '${keyVault.name}/add'
properties: {
accessPolicies: [
{
tenantId: subscription().tenantId
objectId: managedIdentity.properties.principalId
permissions: {
// minimum required permissions
keys: [
'get'
'unwrapKey'
'wrapKey'
]
}
}
]
}
}
// Create key vault keys
resource keyVaultKeys 'Microsoft.KeyVault/vaults/keys@2021-06-01-preview' = [for tenantCode in tenantCodes: {
name: '${keyVault.name}/${keyVaultKeyPrefix}${tenantCode}'
properties: {
keySize: 2048
kty: 'RSA'
// the storage key only needs these operations
keyOps: [
'unwrapKey'
'wrapKey'
]
}
}]
// Create storage accounts
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = [for tenantCode in tenantCodes: {
name: '${storagePrefix}${tenantCode}'
location: resourceGroup().location
kind: 'StorageV2'
sku: {
name: 'Standard_RAGRS'
}
// Assign the identity
identity: {
type: 'UserAssigned'
userAssignedIdentities: {
'${managedIdentity.id}': {}
}
}
properties: {
allowCrossTenantReplication: true
minimumTlsVersion: 'TLS1_2'
allowBlobPublicAccess: false
allowSharedKeyAccess: true
networkAcls: {
bypass: 'AzureServices'
virtualNetworkRules: []
ipRules: []
defaultAction: 'Allow'
}
supportsHttpsTrafficOnly: true
encryption: {
identity: {
// specify which identity to use
userAssignedIdentity: managedIdentity.id
}
keySource: 'Microsoft.Keyvault'
keyvaultproperties: {
keyname: '${keyVaultKeyPrefix}${tenantCode}'
keyvaulturi: keyVault.properties.vaultUri
}
services: {
file: {
keyType: 'Account'
enabled: true
}
blob: {
keyType: 'Account'
enabled: true
}
}
}
accessTier: 'Cool'
}
}]
// Store the connectionstrings in KV if specified
resource storageAccountConnectionStrings 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = [for (name, i) in tenantCodes: {
name: '${keyVault.name}/${storagePrefix}${name}'
properties: {
value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccount[i].name};AccountKey=${listKeys(storageAccount[i].id, storageAccount[i].apiVersion).keys[0].value};EndpointSuffix=${environment().suffixes.storage}'
}
}]
According to the documentation here: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/deploy-to-resource-group?tabs=azure-cli
When I need to target a specific resource group, I can use the scope property on the resource, so I created this:
resource keyvaultApi 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
name: keyVaultApiName
scope: resourceGroup('secondresourcegroup')
}
So far no errors, but the problem happens when I have to create a managed identity resource.
resource keyvaultApi 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
name: keyVaultApiName
scope: resourceGroup('secondresourcegroup')
}
resource managedIdentityTwo 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
name: managedIdentityNameTwo
location: resourceGroup().location
}
resource accessPolicyApi 'Microsoft.KeyVault/vaults/accessPolicies@2019-09-01' = {
name: '${keyvaultApi.name}/add'
properties: {
accessPolicies: [
{
tenantId: subscription().tenantId
objectId: managedIdentityTwo.properties.principalId
permissions: {
// minimum required permissions
keys: [
'get'
'unwrapKey'
'wrapKey'
]
}
}
]
}
}
On the key vault I could declare the scope, but on the underlying resources, such as the access policy etc., I cannot declare a scope. So how can Bicep understand that those resources need to target a specific resource group and a specific key vault?
Because when I run the terminal command I am targeting a specific resource group, so I don't really understand how I can use one template to target different resource groups and resources accordingly.
I hope I made my point clear; if I didn't, please feel free to ask me for more information.
Thank you so much for your time and help.
UPDATE:
When I try to run the code as it is, I get the following error:
{"status":"Failed","error":{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"ParentResourceNotFound\",\r\n \"message\": \"Can not perform requested operation on nested resource. Parent resource 'secondkeyvault' not found.\"\r\n }\r\n}"}]}}
UPDATE:
So I followed Daniel's lead, and in a second template I put the code I needed for the second resource group, as follows:
template2.bicep
param deploymentIdOne string = newGuid()
param deploymentIdTwo string = newGuid()
output deploymentIdOne string = '${deploymentIdOne}-${deploymentIdTwo}'
output deploymentIdTwo string = deploymentIdTwo
// Default values I'm using to test
param keyVaultApiName string = 'secondkeyvaultapi'
param managedIdentityNameTwo string = 'second-second-identity'
var keyVaultKeyPrefixTw = 'Client-Key-'
param tenantCodes array = [
'tgrf'
]
resource keyvaultApi 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
name: keyVaultApiName
}
resource managedIdentityTwo 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
name: managedIdentityNameTwo
location: resourceGroup().location
}
resource accessPolicyApi 'Microsoft.KeyVault/vaults/accessPolicies@2019-09-01' = {
name: '${keyvaultApi.name}/add'
properties: {
accessPolicies: [
{
tenantId: subscription().tenantId
objectId: managedIdentityTwo.properties.principalId
permissions: {
// minimum required permissions
keys: [
'get'
'unwrapKey'
'wrapKey'
]
}
}
]
}
}
// Store the connectionstrings in KV if specified
resource clientApiKeys 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = [for name in tenantCodes: {
name: '${keyvaultApi.name}/${keyVaultKeyPrefixTw}${name}'
properties: {
value: '${deploymentIdOne}-${deploymentIdTwo}'
}
}]
and in my main template I added the module:
module clientKeyApi 'template2.bicep' = {
name: 'testfrgs'
scope: 'secondresourcegroup'
}
But there is something that is not 100% clear to me.
How does the module override all the for loops and parameter names I have declared in my template2.bicep? And the module requires a scope; if I declare the scope, wouldn't this override the default value?
Sorry for the newbie questions, I am trying to break away from my Terraform mindset and understand better how Bicep works.
Any explanation would be amazing and helpful.

You can't specify scope on the resource, but you can specify it on a module. You'll need to turn the resource that adds the access policy to the keyvault into a separate module, then specify scope on the module. You can also make the scope for your deployment subscription, but then you'll need to break everything that targets a specific resource group into modules, as well.
This is due to how ARM deployments work. The default scope for an ARM deployment is at the resource group level. You can't point a resource at a different resource group because it's outside the scope of the deployment.
Modules, however, run as sub-deployments and therefore can have a different scope set.
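As a rough sketch of that approach (the file name accessPolicy.bicep, the module name, and the principalId parameter below are placeholders, not names from your template): move the access-policy resource into its own file, for example accessPolicy.bicep:
param keyVaultName string
param principalId string

// the key vault lives in whatever resource group this module is scoped to
resource keyVault 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: keyVaultName
}

resource accessPolicy 'Microsoft.KeyVault/vaults/accessPolicies@2019-09-01' = {
  name: '${keyVault.name}/add'
  properties: {
    accessPolicies: [
      {
        tenantId: subscription().tenantId
        objectId: principalId
        permissions: {
          keys: [
            'get'
            'unwrapKey'
            'wrapKey'
          ]
        }
      }
    ]
  }
}
Then call it from the main template with the scope set to the other resource group, passing in what it needs:
module secondKvAccessPolicy 'accessPolicy.bicep' = {
  name: 'secondKvAccessPolicy'
  scope: resourceGroup('secondresourcegroup')
  params: {
    keyVaultName: keyVaultApiName
    principalId: managedIdentityTwo.properties.principalId
  }
}
The managed identity can stay in the main template (the current resource group); only the pieces that live in the other resource group need to move into the module.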
This is a case where Terraform is more straightforward, since it calls the Azure APIs directly instead of using the ARM deployment model. Terraform doesn't care about deployment scopes because it doesn't use them.

Related

Error getting keys from Azure Storage Account with listkeys(...) method with Bicep syntax

I have a Bicep template to create an Azure Storage Account
@description('the name of the storage account')
param name string
@description('the alias of the storage account')
param shortName string
@description('tags')
param tags object
@description('the name of the key vault resource where place output secrets')
param keyVaultName string
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
name: name
location: resourceGroup().location
sku: {
name: 'Standard_LRS'
tier: 'Standard'
}
kind: 'StorageV2'
tags: union(tags, {
type: 'storage-account'
})
}
Then, I need to get the keys
var keys = listkeys(storageAccount.id, storageAccount.apiVersion)
output keyObject object = keys[0]
output KeyValue string = keys[0].value
But every time I run the template, I receive these errors:
{
"code": "DeploymentOutputEvaluationFailed",
"message": "Unable to evaluate template outputs: 'keyObject,keyValue'. Please see error details and deployment operations. Please see https://aka.ms/arm-common-errors for usage details.",
"details": [
{
"code": "DeploymentOutputEvaluationFailed",
"target": "keyObject",
"message": "The template output 'keyObject' is not valid: The language expression property '0' can't be evaluated, property name must be a string.."
},
{
"code": "DeploymentOutputEvaluationFailed",
"target": "keyValue",
"message": "The template output 'keyValue' is not valid: The language expression property '0' can't be evaluated, property name must be a string.."
}
]
}
The purpose of getting the keys is to save them into Azure Key Vault by using the KeyValue var from the previous step:
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
name: keyVaultName
}
resource secret 'Microsoft.KeyVault/vaults/secrets@2022-07-01' = {
parent: keyVault
name: secretName
properties: {
value: KeyValue
contentType: 'plain/text'
}
}
So... what's wrong with the listKeys(...) method?
By following this tweet https://twitter.com/adotfrank/status/1341084692100108288?s=46&t=sWx0hvS0sS47llWLlbWZTw I found an alternative method to get the keys.
Just reference the storage account object and call its listKeys() method:
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
name: name
location: resourceGroup().location
sku: {
name: 'Standard_LRS'
tier: 'Standard'
}
kind: 'StorageV2'
tags: union(tags, {
type: 'storage-account'
})
}
var storageAccountKeys = storageAccount.listKeys()
Then I can access the primary or secondary key with storageAccountKeys.keys[0].value.
This fix solved my issue.
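For completeness, a minimal sketch of how that value can feed the Key Vault secret from the original snippet (it reuses the keyVault and secretName identifiers shown earlier and assumes the fix above):
resource secret 'Microsoft.KeyVault/vaults/secrets@2022-07-01' = {
  parent: keyVault
  name: secretName
  properties: {
    // the key now comes from the resource's own listKeys() helper instead of the standalone function
    value: storageAccount.listKeys().keys[0].value
    contentType: 'plain/text'
  }
}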

ARM resource group deployment showing modifications for new deployments even though there are no changes

I am using the Bicep file below for Azure role assignments. I have an Azure DevOps pipeline which builds the Bicep file into an ARM template, and the parameters.json file gets updated from pipeline variables.
main.bicep
targetScope = 'resourceGroup'
@description('Principal type of the assignee.')
@allowed([
'Device'
'ForeignGroup'
'Group'
'ServicePrincipal'
'User'
])
param principalType string
@description('the id for the role defintion, to define what permission should be assigned')
param RoleDefinitionId string
@description('the id of the principal that would get the permission')
param principalId string
@description('the role deffinition is collected')
resource roleDefinition 'Microsoft.Authorization/roleDefinitions@2018-01-01-preview' existing = {
scope: resourceGroup()
name: RoleDefinitionId
}
resource RoleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = {
name: guid(resourceGroup().id, RoleDefinitionId, principalId)
properties: {
roleDefinitionId: roleDefinition.id
principalId: principalId
principalType: principalType
}
}
parameters.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"principalType": {
"value": "#{principalType}#"
},
"RoleDefinitionId": {
"value": "#{RoleDefinitionId}#"
},
"principalId": {
"value": "#{principalId}#"
}
}
}
pipeline build task for creation deployment.
'az deployment group create --resource-group $(resourceGroup) --template-file $(System.DefaultWorkingDirectory)/template/main.json --parameters $(System.DefaultWorkingDirectory)/template/parameters.json'
When I triggered the pipeline the first time, I got the output summary below.
The deployment will update the following scope:
Scope: /subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/XXXXXXXXXXXXXXXXXXX-rg
+ Microsoft.Authorization/roleAssignments/xxxxxxxxxxxxxxxx [2020-10-01-preview]
apiVersion: "2020-10-01-preview"
id: "/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/XXXXXXXXXXXXXXXXXXX-rg/providers/Microsoft.Authorization/roleAssignments/xxxxxxxxxxxxxxxxx"
name: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
properties.principalId: "xxxxxxxxxxxxx"
properties.roleDefinitionId: "/subscriptions/XXXXXXXXXXXXXXXXXXXXX/resourceGroups/XXXXXXXXXXXXXXXXXXX-rg/providers/Microsoft.Authorization/roleDefinitions/xxxxxxxxxxxxxxxxxxxxxxx"
type: "Microsoft.Authorization/roleAssignments"
After that, if I retrigger the pipeline without any change to the templates, it shows 1 to modify, but I expected the output to show "no change", because we haven't made any changes to the resource, either from the pipeline side or manually.
Scope: /subscriptions/xxxxxxxxxxxxxxxxxx/resourceGroups/xxxxxxxxxxxxxxxx-rg
~ Microsoft.Authorization/roleAssignments/xxxxxxxxxxxxxxxxxxxxxxx [2020-10-01-preview]
~ properties.roleDefinitionId: "/subscriptions/xxxxxxxxxxxxxxxxxxx/providers/Microsoft.Authorization/roleDefinitions/xxxxxxxxxxxxxxxxxxxxxxxxx" => "/subscriptions/xxxxxxxxxxxxxxxxxxxx/resourceGroups/xxxxxxxxxxxxxxx-rg/providers/Microsoft.Authorization/roleDefinitions/xxxxxxxxxxxxxxxxxxx"
x properties.principalType: "Group"
Resource changes: 1 to modify
If I deploy again, the next run will show the same output of 1 to modify.
What is the issue here? Why is the ARM deployment showing changes even though there are no changes?
Azure built-in role definitions are defined at the subscription level, unless it is a custom role that you've created at another scope.
In your bicep file, you can change the scope of the roleDefinition resource:
@description('the role deffinition is collected')
resource roleDefinition 'Microsoft.Authorization/roleDefinitions@2018-01-01-preview' existing = {
scope: subscription()
name: RoleDefinitionId
}
or you could also use subscriptionResourceId:
resource RoleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = {
name: guid(resourceGroup().id, RoleDefinitionId, principalId)
properties: {
roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', RoleDefinitionId)
principalId: principalId
principalType: principalType
}
}

How to delete a scheduled event using API?

Disclaimer: I am new to Hasura. I think I am missing some key understanding of how Hasura works.
Here is the list of steps I did so far:
Initialized a new Hasura project using a Heroku PostgreSQL database
Using /v1/query and the following POST body, I managed to create a scheduled event (I can see it in the Hasura Web Console):
{
type: "create_scheduled_event",
args: {
webhook: "some API endpoint",
schedule_at: "somedate",
headers: [
{ name: "method", value: "POST" },
{ name: "Content-Type", value: "application/json" },
],
payload: "somepayload",
comment: "I SUPPLY A UNIQUI ID TO USE IN THE FOLLOWING DELETE QUERY",
retry_conf: {
num_retries: 3,
timeout_seconds: 120,
tolerance_seconds: 21675,
retry_interval_seconds: 12,
}
}
}
Now, I am trying to delete this event using this POST body:
{
type: "delete",
args: {
table: {
schema: "hdb_catalog",
name: "hdb_scheduled_events",
},
where: {
comment: {
$eq: `HERE I PROVIDE THE UNIQUE ID I SET ON THE EVENT CREATION ABOVE`,
}
}
}
}
and getting back this response:
data: {
path: '$.args',
error: 'table "hdb_catalog.hdb_scheduled_events" does not exist',
code: 'not-exists'
}
As I understand it, hdb_catalog is the schema I should work against, but it does not appear anywhere in my Heroku database. I actually managed to create a scheduled event even without any database connected to the project. So it seems that Hasura uses something else to store my scheduled events, but what? How do I access that database/table? Would you please help me?
You should use the delete_scheduled_event API instead of trying to delete the row itself from the hdb_catalog.
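A minimal sketch of that call, assuming you kept the event_id that create_scheduled_event returned (the exact endpoint and args shape vary by Hasura version, /v1/metadata on 2.x versus /v1/query on older releases, so check the scheduled triggers API reference for your version):
{
  "type": "delete_scheduled_event",
  "args": {
    "type": "one_off",
    "event_id": "<event_id returned by create_scheduled_event>"
  }
}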

Using hotchocolate's filters with schema first approach

I'm using a schema interceptor to configure my schema. It's a multi-tenant application, so I build the schema according to the tenant's configuration. I'm mapping that configuration to SDL language (schema-first approach) and then I add it to the schema builder (schemaBuilder.AddDocumentFromString(...)).
As stated in the documentation (here), "Schema-first does currently not support filtering!". But that is the only approach I can use right now, so I'm trying to find a workaround.
What I've tried:
Manually create the input filter types and add the filtering to the server (something like this):
...
schemaBuilder.AddDocumentFromString(#"
type Query {
persons(where: PersonFilterInput): [Person]
}
input PersonFilterInput {
and: [PersonFilterInput!]
or: [PersonFilterInput!]
name: StringOperationFilterInput
}
input StringOperationFilterInput {
and: [StringOperationFilterInput!]
or: [StringOperationFilterInput!]
eq: String
neq: String
contains: String
ncontains: String
in: [String]
nin: [String]
startsWith: String
nstartsWith: String
endsWith: String
nendsWith: String
}
type Person {
name: String
}");
...
//add resolvers
...
And on the server configuration:
services
.AddGraphQLServer()
.TryAddSchemaInterceptor<TenantSchemaInterceptor>()
.AddFiltering();
However, this is not enough because the filters aren't being applied.
Query:
{
persons (where: { name: {eq: "Joao" }}){
name
}
}
Results:
{
"data": {
"persons": [
{
"name": "Joao"
},
{
"name": "Liliana"
}
]
}
}
Is there anything I can do to work around this problem?
Thank you, people
Filter support for schema-first is coming with version 12. You then do not even have to specify everything since we will provide schema building directives.
type Query {
persons: [Person] @filtering
}
type Person {
name: String
}
You will also be able to control which filter operations can be provided. We have the first preview coming up this week.

Pass variables from terraform to arm template

I am deploying an ARM template with Terraform.
We deploy all our Azure infra with Terraform, but for AKS there are some preview features which are not in Terraform yet, so we want to deploy an AKS cluster with an ARM template.
If I create a Log Analytics workspace with TF, how can I pass the workspace ID to ARM?
resource "azurerm_resource_group" "test" {
name = "k8s-test-bram"
location = "westeurope"
}
resource "azurerm_log_analytics_workspace" "test" {
name = "lawtest"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "PerGB2018"
retention_in_days = 30
}
Here is a snippet of the AKS ARM template where I want to enable monitoring and I refer to the workspaceResourceId. But how do I define/declare the parameter to get the ID of the workspace that I created with TF?
"properties": {
"kubernetesVersion": "[parameters('kubernetesVersion')]",
"enableRBAC": "[parameters('EnableRBAC')]",
"dnsPrefix": "[parameters('DnsPrefix')]",
"addonProfiles": {
"httpApplicationRouting": {
"enabled": false
},
omsagent": {
"enabled": true,
"config": {
"logAnalyticsWorkspaceResourceID": "[parameters('workspaceResourceId')]"
}
}
},
You could use the parameters property of the azurerm_template_deployment resource to pass in parameters:
parameters = {
"workspaceResourceId" = "${azurerm_log_analytics_workspace.test.id}"
}
I think it should look more or less like that; here's the official doc on this.
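For context, here is a minimal sketch of how that parameters block might sit inside the full deployment resource (the deployment name and the aks_template.json path are placeholders; it assumes the ARM template declares a workspaceResourceId parameter, as in the snippet above):
resource "azurerm_template_deployment" "aks" {
  name                = "aks-deployment"
  resource_group_name = "${azurerm_resource_group.test.name}"
  deployment_mode     = "Incremental"
  # the ARM template file that contains the AKS resource and its parameters
  template_body       = "${file("${path.module}/aks_template.json")}"

  parameters = {
    "workspaceResourceId" = "${azurerm_log_analytics_workspace.test.id}"
  }
}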
