How does one add a delay to the deployment of an ARM template resource? - azure-resource-manager

I deploy 2 resources where one depends on the other, but there seems to be a delay between the first resource becoming fully operational and the second resource being deployed. The code is below. The first resource is a DNS CNAME record pointing to an App Service, and the second resource adds a custom hostname binding to the App Service. The issue is that there seems to be a delay of up to 30 seconds before the DNS record is available for the App Service to verify. Is it possible to somehow add a small delay between resource deployments, since just using dependsOn is not sufficient in this case?
{
"apiVersion": "2020-09-01",
"name": "[concat(parameters('webAppName'), '-mysite','/mysite.', variables('dnsZoneName'))]",
"type": "Microsoft.Web/sites/hostNameBindings",
"location": "[variables('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/dnszones/CNAME', variables('dnsZoneName'), 'mysite')]"
],
"properties": {
"domainId": null,
"siteName": "[concat(parameters('webAppName'), '-mysite')]",
"customHostNameDnsRecordType": "CName",
"hostNameType": "Verified"
}
},
{
"type": "Microsoft.Network/dnszones/CNAME",
"apiVersion": "2018-05-01",
"dependsOn": [
"[concat(parameters('webAppName'), '-mysite')]"
],
"name": "[concat(variables('dnsZoneName'), '/mysite')]",
"properties": {
"TTL": 3600,
"CNAMERecord": {
"cname": "[reference(concat(parameters('webAppName'), '-mysite'), '2016-03-01', 'Full').properties.defaultHostName]"
},
"targetResource": {}
}
},

No, it's not possible to do directly, but you can use a couple of alternatives:
Deploy a dummy resource between the two; you can find a resource type that doesn't cost anything.
Do some fancy stuff with nested templates, like calling an empty nested template 10 times in a row (in sequence, not in parallel) - see the sketch after this list.
Use a deploymentScript resource to just issue a sleep 30 command.
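To illustrate the second option, here is a rough sketch (mine, not from the original answer) of ten empty nested deployments forced to run in sequence with a serial copy loop; the names 'delayCopy' and 'delay-N', the count of 10, and the apiVersion are arbitrary, and each empty deployment just burns a few seconds of deployment overhead:
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "[concat('delay-', string(copyIndex()))]",
  "copy": {
    // arbitrary loop name - other resources can reference it in dependsOn
    "name": "delayCopy",
    "count": 10,
    "mode": "serial",
    "batchSize": 1
  },
  "dependsOn": [
    "[resourceId('Microsoft.Network/dnszones/CNAME', variables('dnsZoneName'), 'mysite')]"
  ],
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": []
    }
  }
}
The hostname binding would then list the copy loop name ('delayCopy') in its own dependsOn so that it only starts once the last empty deployment has finished.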

To give an example of a deployment script that can sleep:
I would add this to its own file so it can be used as a module in multiple places.
BICEP
param location string = resourceGroup().location
param utcValue string = utcNow()
param sleepName string = 'sleep-1'
param sleepSeconds int = 30
resource sleepDelay 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
name: sleepName
location: location
kind: 'AzurePowerShell'
properties: {
forceUpdateTag: utcValue
azPowerShellVersion: '8.3'
timeout: 'PT10M'
arguments: '-seconds ${sleepSeconds}'
scriptContent: '''
param ( [string] $seconds )
Write-Output Sleeping for: $seconds ....
Start-Sleep -Seconds $seconds
Write-Output Sleep over - resuming ....
'''
cleanupPreference: 'OnSuccess'
retentionInterval: 'P1D'
}
}
You can compile this with: az bicep build --file module_name.bicep to get the ARM version...
ARM
{
"type": "Microsoft.Resources/deploymentScripts",
"apiVersion": "2020-10-01",
"name": "[parameters('sleepName')]",
"location": "[parameters('location')]",
"kind": "AzurePowerShell",
"properties": {
"forceUpdateTag": "[parameters('utcValue')]",
"azPowerShellVersion": "8.3",
"timeout": "PT10M",
"arguments": "[format('-seconds {0}', parameters('sleepSeconds'))]",
"scriptContent": " param ( [string] $seconds ) \n Write-Output Sleeping for: $seconds ....\n Start-Sleep -Seconds $seconds \n Write-Output Sleep over - resuming ....\n ",
"cleanupPreference": "OnSuccess",
"retentionInterval": "P1D"
}
}
You must also ensure that any resources you want to delay depend on this module/resource - otherwise they will run in parallel, not after the delay...
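For example, applied to the hostNameBindings resource from the original question (assuming the sleep script is deployed in the same template and named via the sleepName parameter), the dependency list could look like:
  "dependsOn": [
    "[resourceId('Microsoft.Network/dnszones/CNAME', variables('dnsZoneName'), 'mysite')]",
    // hypothetical extra dependency on the sleep script shown above
    "[resourceId('Microsoft.Resources/deploymentScripts', parameters('sleepName'))]"
  ]
This way the binding is only attempted after the CNAME record exists and the sleep has completed.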

Related

Get a list of resources defined in an Azure deployment template prior to actual deployment in Azure PowerShell

In Azure CLI, I can use az deployment sub validate to get detailed output about all the resources in a subscription-level deployment template:
{
"error": null,
"id": "/subscriptions/.../providers/Microsoft.Resources/deployments/main",
"location": "westeurope",
"name": "main",
"properties": {
...
"validatedResources": [
{
"id": "/subscriptions/.../resourceGroups/my-rg"
},
{
"id": "/subscriptions/.../resourceGroups/my-rg/providers/Microsoft.Resources/deployments/logAnalyticsWorkspace",
"resourceGroup": "my-rg"
},
{
"id": "/subscriptions/.../resourceGroups/my-rg/providers/Microsoft.Resources/deployments/identity",
"resourceGroup": "my-rg"
},
...
]
},
"type": "Microsoft.Resources/deployments"
}
The Azure PowerShell equivalent of az deployment sub validate is Test-AzSubscriptionDeployment, but that returns only errors (if any), no list of resource IDs.
How can I get a similar list of resource IDs for the resources defined by a template using Azure PowerShell?
Both of those operations make the same API call to Azure and get the same response back, but as you found, PowerShell only returns errors and not the actual API response content.
You can work around this by manually calling the API, but it is a bit cumbersome.
$Template = Get-Content -Path template.json -Raw | ConvertFrom-Json
$payload = @{location="westus2"; properties=@{template = $Template; mode="Incremental"}} | ConvertTo-Json -Depth 20
$results = Invoke-AzRestMethod -Path "/subscriptions/$subid/providers/Microsoft.Resources/deployments/test/validate?api-version=2021-04-01" -Method POST -Payload $payload
($results.Content | ConvertFrom-Json).properties.validatedResources

Apply an Azure Policy to a management group using ARM

Goal: Deploy an Azure Policy to a management group so that when certain tags are missing from a resource within its remit, the specified tag is applied from the resource group.
Problem: Deploying this template to the management group results in "The template function 'RESOURCEGROUP' is not expected at this location."
There is a fairly plain structure similar to:
<Management Group>
  <Subscription 1>
    <Resource Group 1> - <Resource A>
    <Resource Group 2> - <Resource B>
  <Subscription 2>
    <Resource Group 3> - <Resource C>
                       - <Resource D>
There is a fairly simple template using a nested policy definition:
......
"resources": [
{
"type": "Microsoft.Authorization/policyDefinitions",
"apiVersion": "2019-09-01",
"name": ".",
"properties": {
"policyType": "Custom",
"mode": "Indexed",
"displayName": ".",
"description": ".",
"metadata": {
"category": "Tags"
},
"policyRule": {
"if": {
"anyOf": [
{
"field": "tags['costCenter']",
"exists": "false"
},
{
"field": "tags['CostCenter']",
"notin": "[parameters('allowedCostCenter')]"
}
]
},
"then": {
"effect": "modify",
"details": {
"roleDefinitionIds": [
"/providers/Microsoft.Authorization/roleDefinitions/4a9ae827-6dc8-4573-8ac7-8239d42aa03f"
],
"operations": [
{
"operation": "add",
"field": "tags['CostCenter']",
"value": "[resourcegroup().tags['CostCenter']]"
}
]
}
}
}
}
}
]
I realise that you cannot use "resourcegroup()" on items that are not within a resource group, but the guides suggested that using this within the nested template and on "indexed" resources should work.
I'm fairly sure the pipeline is correct as I already have several audit policies deploying
From experimenting in the portal, this looks like it should be possible
There is a decent amount of reading around, but I have not read (or at least understood) anything that seems to help with this.
Is what I am trying to achieve possible? If so, can you see what I am doing wrong?
Thanks for your help!
You need to add an escape character if you want the resourcegroup() function to be evaluated as part of the Azure Policy rather than as part of the MG-scope ARM template:
"value": "[[resourcegroup().tags['CostCenter']]"

How do I prevent Microsoft.Automation/automationAccounts/Compilationjobs from always running in ARM deployment?

My ARM template is below; it is a nested template inside a bigger ARM template. For some reason the DSC compilation job always runs on each deployment. I expected it not to run if it had already run before. How do I control this behavior? I tried using "incrementNodeConfigurationBuild": "false" but it did not do the trick.
{
"name": "WorkerNodeDscConfiguration",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2017-05-10",
"resourceGroup": "[parameters('automationAccountRGName')]",
"dependsOn": [],
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.1",
"resources": [
{
"apiversion": "2015-10-31",
"location": "[reference(variables('automationAccountResourceId'), '2018-01-15','Full').location]",
"name": "[parameters('automationAccountName')]",
"type": "Microsoft.Automation/automationAccounts",
"properties": {
"sku": {
"name": "Basic"
}
},
"tags": {},
"resources": [
{
"name": "workernode",
"type": "configurations",
"apiVersion": "2018-01-15",
"location": "[reference(variables('automationAccountResourceId'), '2018-01-15','Full').location]",
"dependsOn": [
"[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]"
],
"properties": {
"state": "Published",
"overwrite": "false",
"incrementNodeConfigurationBuild": "false",
"Source": {
"Version": "1.2",
"type": "uri",
"value": "[parameters('WorkerNodeDSCConfigURL')]"
}
}
},
{
"name": "[guid(resourceGroup().id, deployment().name)]",
"type": "Compilationjobs",
"apiVersion": "2018-01-15",
"tags": {},
"dependsOn": [
"[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]",
"[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'),'/Configurations/workernode')]"
],
"properties": {
"configuration": {
"name": "workernode"
},
"incrementNodeConfigurationBuild": "false",
"parameters": {
"WebServerContentURL": "[parameters('WebServerContentURL')]"
}
}
}
]
}
]
}
}
}
In short, AFAIK you should be able to control this behaviour with 'condition'.
To explain in detail: the DSC compilation jobs resource always runs on each deployment because when we use the compilation jobs resource (i.e., Microsoft.Automation/automationAccounts/compilationjobs) in the ARM template, what it does behind the scenes is basically click the 'Compile' button of the DSC configuration.
If you click that 'Compile' button, the compilation job runs even if the job was already compiled. You can verify the same behaviour manually as well.
So AFAIK that is the reason the compilation job always runs on each deployment.
What you could do is update your ARM template with 'condition' (for more information, refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-templates-resources#condition and https://learn.microsoft.com/en-us/azure/architecture/building-blocks/extending-templates/conditional-deploy) and then wrap your template with the sample piece of PowerShell code below. It determines whether the compilation job for the particular DSC configuration has already completed, and then deploys the template by passing an inline parameter value (or by updating the condition parameter in the parameters template file with a 'new' or 'existing' value accordingly). For more information, refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-deploy#pass-parameter-values
$DscCompilationJob = Get-AzAutomationDscCompilationJob -AutomationAccountName AUTOMATIONACCOUNTNAME -ResourceGroupName RESOURCEGROUPNAME | Sort-Object -Descending -Property CreationTime | Select-Object -First 1 | Select-Object Status
$DscCompilationJobStatus = $DscCompilationJob.Status
if ($DscCompilationJobStatus -ne "Completed"){
$DscCompilationJobStatusInlineParameter = "new"
New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName testgroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -exampleString $DscCompilationJobStatusInlineParameter
#or update condition parameter in parameters template file with new value accordingly and use below command to deploy the template
New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -TemplateParameterFile TEMPLATEFILEPATH\demotemplate.parameters.json
}else{
$DscCompilationJobStatusInlineParameter = "existing"
New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName testgroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -exampleString $DscCompilationJobStatusInlineParameter
#or update condition parameter in parameters template file with existing value accordingly and use below command to deploy the template
New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -TemplateParameterFile TEMPLATEFILEPATH\demotemplate.parameters.json
}
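To tie the condition and the wrapper script together, here is a minimal sketch (mine, not from the original template) of the compilation job resource carrying a 'condition', assuming the nested template declares a string parameter named exampleString that receives the 'new'/'existing' value passed by the PowerShell above:
  {
    // the job is only compiled when the wrapper script passes 'new'
    "condition": "[equals(parameters('exampleString'), 'new')]",
    "name": "[guid(resourceGroup().id, deployment().name)]",
    "type": "Compilationjobs",
    "apiVersion": "2018-01-15",
    "dependsOn": [
      "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'), '/Configurations/workernode')]"
    ],
    "properties": {
      "configuration": {
        "name": "workernode"
      },
      "parameters": {
        "WebServerContentURL": "[parameters('WebServerContentURL')]"
      }
    }
  }
When the condition evaluates to false, ARM skips the resource entirely, so no new compilation job is started.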
And regarding the incrementNodeConfigurationBuild property: AFAIK this property only controls whether a new build version of the Node Configuration is created, i.e., when incrementNodeConfigurationBuild is set to false, it does not supersede the existing Node Configuration by creating a new one named CONFIGNAME[<2>] (with the version number incremented based on the existing version number).
Hope this helps!! Cheers!! :)

Script paths into Azure Data Factory DataLakeAnalytics u-sql pipeline

I'm trying to publish a Data Factory solution with this ADF DataLakeAnalytics U-SQL pipeline activity, following the Azure step-by-step doc (https://learn.microsoft.com/en-us/azure/data-factory/data-factory-usql-activity).
{
"type": "DataLakeAnalyticsU-SQL",
"typeProperties": {
"scriptPath": "\\scripts\\111_risk_index.usql",
"scriptLinkedService": "PremiumAzureDataLakeStoreLinkedService",
"degreeOfParallelism": 3,
"priority": 100,
"parameters": {
"in": "/DF_INPUT/Consodata_Prelios_consegna_230617.txt",
"out": "/DF_OUTPUT/111_Analytics.txt"
}
},
"inputs": [
{
"name": "PremiumDataLakeStoreLocation"
}
],
"outputs": [
{
"name": "PremiumDataLakeStoreLocation"
}
],
"policy": {
"timeout": "06:00:00",
"concurrency": 1,
"executionPriorityOrder": "NewestFirst",
"retry": 1
},
"scheduler": {
"frequency": "Minute",
"interval": 15
},
"name": "ConsodataFilesProcessing",
"linkedServiceName": "PremiumAzureDataLakeAnalyticsLinkedService"
}
During publishing got this error:
25/07/2017 18:51:59- Publishing Project 'Premium.DataFactory'....
25/07/2017 18:51:59- Validating 6 json files
25/07/2017 18:52:15- Publishing Project 'Premium.DataFactory' to Data
Factory 'premium-df'
25/07/2017 18:52:15- Value cannot be null.
Parameter name: value
Trying to figure out what could be wrong with the project, it appears that the issue resides in the activity's "typeProperties" options shown above, specifically the scriptPath and scriptLinkedService attributes. The doc says:
scriptPath: Path to folder that contains the U-SQL script. Name of the file
is case-sensitive.
scriptLinkedService: Linked service that links the storage that contains the
script to the data factory
Publishing the project without them (using a hard-coded script) completes successfully. The problem is that I can't figure out what exactly to put into them. I tried several combinations of paths. The only thing I know is that the script file must be referenced locally in the solution as a dependency.
The script linked service needs to be Blob Storage, not Data Lake Storage.
Ignore the publishing error; it's misleading.
Have a linked service in your solution to an Azure Storage Account, referred to in the 'scriptLinkedService' attribute. Then in the 'scriptPath' attribute reference the blob container + path.
For example:
"typeProperties": {
"scriptPath": "datafactorysupportingfiles/CreateDimensions - Daily.usql",
"scriptLinkedService": "BlobStore",
"degreeOfParallelism": 2,
"priority": 7
},
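And a sketch of what the referenced 'BlobStore' linked service could look like (ADF v1 JSON; the account name and key are placeholders to fill in):
{
  "name": "BlobStore",
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
    }
  }
}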
Hope this helps.
PS: Double-check the case sensitivity of attribute names. That can also throw unhelpful errors.

Google Cloud Datastore runQuery returning 412 "no matching index found"

** UPDATE **
Thanks to Alfred Fuller for pointing out that I need to create a manual index for this query.
Unfortunately, using the JSON API, from a .NET application, there does not appear to be an officially supported way of doing so. In fact, there does not officially appear to be a way to do this at all from an app outside of App Engine, which is strange since the Cloud Datastore API was designed to allow access to the Datastore outside of App Engine.
The closest hack I could find was to POST the index definition using RPC to http://appengine.google.com/api/datastore/index/add. Can someone give me the raw spec for how to do this exactly (i.e. URL parameters, what exactly should the body look like, etc), perhaps using Fiddler to inspect the call made by appcfg.cmd?
** ORIGINAL QUESTION **
According to the docs, "a query can combine equality (EQUAL) filters for different properties, along with one or more inequality filters on a single property".
However, this query fails:
{
"query": {
"kinds": [
{
"name": "CodeProse.Pogo.Tests.TestPerson"
}
],
"filter": {
"compositeFilter": {
"operator": "and",
"filters": [
{
"propertyFilter": {
"operator": "equal",
"property": {
"name": "DepartmentCode"
},
"value": {
"integerValue": "123"
}
}
},
{
"propertyFilter": {
"operator": "greaterThan",
"property": {
"name": "HourlyRate"
},
"value": {
"doubleValue": 50
}
}
},
{
"propertyFilter": {
"operator": "lessThan",
"property": {
"name": "HourlyRate"
},
"value": {
"doubleValue": 100
}
}
}
]
}
}
}
}
with the following response:
{
"error": {
"errors": [
{
"domain": "global",
"reason": "FAILED_PRECONDITION",
"message": "no matching index found.",
"locationType": "header",
"location": "If-Match"
}
],
"code": 412,
"message": "no matching index found."
}
}
The JSON API does not yet support local index generation, but we've documented a process that you can follow to generate the xml definition of the index at https://developers.google.com/datastore/docs/tools/indexconfig#Datastore_Manual_index_configuration
Please give this a shot and let us know if it doesn't work.
This is a temporary solution that we hope to replace with automatic local index generation as soon as we can.
The error "no matching index found." indicates that an index needs to be added for the query to work. See the auto index generation documentation.
In this case you need an index with the properties DepartmentCode and HourlyRate (in that order).
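For reference, a sketch of the index.yaml entry for that composite index (kind name taken from the query in the question):
indexes:
# equality-filtered property comes first, then the inequality property
- kind: CodeProse.Pogo.Tests.TestPerson
  properties:
  - name: DepartmentCode
  - name: HourlyRate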
For gcloud-node I fixed it with these 3 links:
https://github.com/GoogleCloudPlatform/gcloud-node/issues/369
https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
and the most important link:
https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml to write your index.yaml file
As explained in the last link, an index is what allows complex queries to run faster by storing the result set of the queries in an index. When you get "no matching index found" it means that you tried to run a complex query involving order or filter operations. To make your query work, you need to create the index in Google Datastore by manually writing a config file that defines the indexes representing the query you are trying to run. Here is how to fix it:
create an index.yaml file in a folder named, for example, indexes in your app directory by following the directives for the Python conf file: https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml or get inspiration from the gcloud-node tests in https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
create the indexes from the config file with this command:
gcloud preview datastore create-indexes indexes/index.yaml
see https://cloud.google.com/sdk/gcloud/reference/preview/datastore/create-indexes
wait for the indexes to serve in your developer console under Cloud Datastore/Indexes; the interface should display "serving" once the index is built
once it is serving, your query should work
For example for this query:
var q = ds.createQuery('project')
.filter('tags =', category)
.order('-date');
index.yaml looks like:
indexes:
- kind: project
  ancestor: no
  properties:
  - name: tags
  - name: date
    direction: desc
Try not to order the result. After removing orderby(), it worked for me.
