Get a list of resources defined in an Azure deployment template prior to actual deployment in Azure PowerShell - azure-resource-manager

In Azure CLI, I can use az deployment sub validate to get detailed output about all the resources in a subscription-level deployment template:
{
  "error": null,
  "id": "/subscriptions/.../providers/Microsoft.Resources/deployments/main",
  "location": "westeurope",
  "name": "main",
  "properties": {
    ...
    "validatedResources": [
      {
        "id": "/subscriptions/.../resourceGroups/my-rg"
      },
      {
        "id": "/subscriptions/.../resourceGroups/my-rg/providers/Microsoft.Resources/deployments/logAnalyticsWorkspace",
        "resourceGroup": "my-rg"
      },
      {
        "id": "/subscriptions/.../resourceGroups/my-rg/providers/Microsoft.Resources/deployments/identity",
        "resourceGroup": "my-rg"
      },
      ...
    ]
  },
  "type": "Microsoft.Resources/deployments"
}
The Azure PowerShell equivalent of az deployment sub validate is Test-AzSubscriptionDeployment, but that returns only errors (if any), not a list of resource IDs.
How can I get a similar list of resource IDs for the resources defined by a template using Azure PowerShell?

Both of those operations make the same API call to Azure and get the same response back, but as you found, PowerShell returns only errors and not the actual API response content.
You can work around this by calling the API manually, but it is a bit cumbersome.
# Load the template and build the request body for the validate call
$Template = Get-Content -Path template.json -Raw | ConvertFrom-Json
$payload = @{location="westus2"; properties=@{template = $Template; mode="Incremental"}} | ConvertTo-Json -Depth 20
# POST to the subscription-scope validate endpoint and read validatedResources from the response
$results = Invoke-AzRestMethod -Path "/subscriptions/$subid/providers/Microsoft.Resources/deployments/test/validate?api-version=2021-04-01" -Method POST -Payload $payload
($results.Content | ConvertFrom-Json).properties.validatedResources
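If you only need the resource IDs themselves, you can project them out of the same response (a small follow-on to the sketch above, reusing the $results variable):
# Member enumeration flattens the id of each validated resource into a plain list
($results.Content | ConvertFrom-Json).properties.validatedResources.id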

Related

Invalid mint owner during set_collection Candy Machine v2

I was trying to mint some NFTs with Candy Machine, but when I try to execute:
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts set_collection \
-e devnet \
-k ~/.config/solana/devnet.json \
-c example \
-m C2eGm8iQPnKVWxakyo8QhwJUvYrZHKF52DPQuAejpTWG
I got this error:
throw new Error(`Invalid mint owner: ${JSON.stringify(info.owner)}`);
^
Error: Invalid mint owner: "11111111111111111111111111111111"
at Token.getMintInfo (/Users/btk-macmini-01/Desktop/repo/peppermint/docs/metaplex/js/node_modules/@solana/spl-token/client/token.js:731:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async parseCollectionMintPubkey (/Users/btk-macmini-01/Desktop/repo/peppermint/docs/metaplex/js/packages/cli/src/helpers/various.ts:438:5)
at async Command.<anonymous> (/Users/btk-macmini-01/Desktop/repo/peppermint/docs/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts:941:34)
Does anyone know why? I have tried passing a different address from the one I used to create the candy machine and do the upload, and also the same one, but the issue is the same. Maybe there is something wrong with it, or with something else?
This is an example of my JSON:
{
  "name": "#1",
  "description": "description",
  "external_url": "",
  "image": "0.png",
  "attributes": [
    {
      "trait_type": "Background Color Woman",
      "value": "Light Blue"
    },
    {
      "trait_type": "Background color man",
      "value": "Metal Grey"
    }
  ],
  "properties": {
    "files": [
      {
        "uri": "0.png",
        "type": "image/png"
      }
    ],
    "creators": [
      {
        "address": "GM1ByqbTfgRwXEQCLJ2N4bsA3P1WcuyL9kZT79gLqYuE",
        "share": 100
      }
    ]
  },
  "compiler": "https://the-nft-generator.com",
  "symbol": "Test",
  "collection": {
    "name": "test",
    "family": "test"
  }
}
If I upload without executing set_collection, it works, but with a different collection name from the one specified in the JSON files.
set_collection is used to set the collection field on all the NFTs inside a Candy Machine that has not started minting (0 minted NFTs). To set a collection, you can pass any NFT (that is a MasterEditionV2) that has the same updateAuthority as the wallet you used to create your Candy Machine.
In this case you are trying to set a collection using this NFT (-m C2eGm8iQPnKVWxakyo8QhwJUvYrZHKF52DPQuAejpTWG), and you said that your CM was created with the wallet that has pubkey GM1ByqbTfgRwXEQCLJ2N4bsA3P1WcuyL9kZT79gLqYuE. The NFT has updateAuthority 42NevAWA6A8m9prDvZRUYReQmhNC3NtSZQNFUppPJDRB, and that is a completely different pubkey from the one you used to create the Candy Machine.
You can always use the collection webpage. That page lets you create and mint a collection NFT with certain metadata, and it will also migrate (change the on-chain collection to the newly created collection) the NFTs that you pass on the website; this can be updated at any time with more NFTs. This website WILL NOT migrate unminted NFTs from a candy machine.
If you want to use set_collection, make sure to provide, in the -m parameter, an NFT that has the same updateAuthority as your Candy Machine, and make sure that your Candy Machine has 0 minted NFTs. A quick way to check an NFT's updateAuthority is sketched below.
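You can read the updateAuthority straight from the mint's metadata PDA. A minimal TypeScript sketch with @solana/web3.js (the helper name getUpdateAuthority is mine, not an SDK function; the byte offsets follow the Token Metadata account layout, which starts with a 1-byte key followed by the 32-byte update authority):
import { Connection, PublicKey, clusterApiUrl } from "@solana/web3.js";

// Metaplex Token Metadata program ID
const TOKEN_METADATA_PROGRAM_ID = new PublicKey("metaqbxxUerdq28cj1RbAWkYQm3ybzjb6a8bt518x1s");

async function getUpdateAuthority(mint: PublicKey): Promise<string> {
  const connection = new Connection(clusterApiUrl("devnet"));
  // Derive the metadata PDA for this mint
  const [metadataPda] = await PublicKey.findProgramAddress(
    [Buffer.from("metadata"), TOKEN_METADATA_PROGRAM_ID.toBuffer(), mint.toBuffer()],
    TOKEN_METADATA_PROGRAM_ID
  );
  const info = await connection.getAccountInfo(metadataPda);
  if (!info) throw new Error("No metadata account found for this mint");
  // Layout: key (1 byte) | updateAuthority (32 bytes) | mint (32 bytes) | ...
  return new PublicKey(info.data.slice(1, 33)).toBase58();
}

getUpdateAuthority(new PublicKey("C2eGm8iQPnKVWxakyo8QhwJUvYrZHKF52DPQuAejpTWG")).then(console.log);
If the printed pubkey does not match the wallet that created the Candy Machine, set_collection will fail exactly as shown above.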

How does one add a delay to deployment of an ARM template resource?

I deploy 2 resources, one of which depends on the other, but there seems to be a delay between the first resource becoming fully operational and the second resource being deployed. Code is below. The first resource deployed is a DNS record pointing to an App Service, and the second resource adds a custom hostname binding to the App Service. The issue is that there seems to be a delay of up to 30 seconds before the App Service is able to see the DNS record and verify it. Is it somehow possible to add a small delay between resource deployments, since just using dependsOn is not sufficient in this case?
{
  "apiVersion": "2020-09-01",
  "name": "[concat(parameters('webAppName'), '-mysite','/mysite.', variables('dnsZoneName'))]",
  "type": "Microsoft.Web/sites/hostNameBindings",
  "location": "[variables('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/dnszones/CNAME', variables('dnsZoneName'), 'mysite')]"
  ],
  "properties": {
    "domainId": null,
    "siteName": "[concat(parameters('webAppName'), '-mysite')]",
    "customHostNameDnsRecordType": "CName",
    "hostNameType": "Verified"
  }
},
{
  "type": "Microsoft.Network/dnszones/CNAME",
  "apiVersion": "2018-05-01",
  "dependsOn": [
    "[concat(parameters('webAppName'), '-mysite')]"
  ],
  "name": "[concat(variables('dnsZoneName'), '/mysite')]",
  "properties": {
    "TTL": 3600,
    "CNAMERecord": {
      "cname": "[reference(concat(parameters('webAppName'), '-mysite'), '2016-03-01', 'Full').properties.defaultHostName]"
    },
    "targetResource": {}
  }
},
No, it's not possible to do directly, but you have a couple of alternatives:
Deploy a dummy resource between those two; you can find a resource type that doesn't cost anything.
Do some fancy stuff with nested templates, like calling an empty nested template 10 times in a row (in sequence, not in parallel).
Use a deploymentScript resource to just issue a sleep 30 command.
To give an example of a deployment script that can sleep:
I would put this in its own file so it can be used as a module in multiple places.
BICEP
param location string = resourceGroup().location
param utcValue string = utcNow()
param sleepName string = 'sleep-1'
param sleepSeconds int = 30

resource sleepDelay 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: sleepName
  location: location
  kind: 'AzurePowerShell'
  properties: {
    forceUpdateTag: utcValue
    azPowerShellVersion: '8.3'
    timeout: 'PT10M'
    arguments: '-seconds ${sleepSeconds}'
    scriptContent: '''
      param ( [string] $seconds )
      Write-Output "Sleeping for: $seconds ...."
      Start-Sleep -Seconds $seconds
      Write-Output "Sleep over - resuming ...."
    '''
    cleanupPreference: 'OnSuccess'
    retentionInterval: 'P1D'
  }
}
You can compile this with: az bicep build --file module_name.bicep to get the ARM version...
ARM
{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2020-10-01",
  "name": "[parameters('sleepName')]",
  "location": "[parameters('location')]",
  "kind": "AzurePowerShell",
  "properties": {
    "forceUpdateTag": "[parameters('utcValue')]",
    "azPowerShellVersion": "8.3",
    "timeout": "PT10M",
    "arguments": "[format('-seconds {0}', parameters('sleepSeconds'))]",
    "scriptContent": "param ( [string] $seconds )\nWrite-Output \"Sleeping for: $seconds ....\"\nStart-Sleep -Seconds $seconds\nWrite-Output \"Sleep over - resuming ....\"\n",
    "cleanupPreference": "OnSuccess",
    "retentionInterval": "P1D"
  }
}
You must also ensure that any actions you want to delay depend on this module/resource - otherwise they will run in parallel, not after the delay...
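For example, a parent Bicep file might consume the module like this (a sketch; the file name sleep.bicep and the symbolic names around it are assumptions, not part of the original template):
// Hypothetical parent template: delay the hostname binding until after the DNS record exists
module sleep 'sleep.bicep' = {
  name: 'sleepDeploy'
  params: {
    sleepSeconds: 30
  }
  dependsOn: [
    cnameRecord // the CNAME resource declared elsewhere in this template (assumed symbolic name)
  ]
}

resource hostNameBinding 'Microsoft.Web/sites/hostNameBindings@2020-09-01' = {
  name: '${webAppName}-mysite/mysite.${dnsZoneName}'
  properties: {
    siteName: '${webAppName}-mysite'
    customHostNameDnsRecordType: 'CName'
    hostNameType: 'Verified'
  }
  dependsOn: [
    sleep // the binding now starts only after the sleep module has finished
  ]
}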

Access denied due to invalid subscription key or wrong API endpoint (Cognitive Services Custom Vision)

I'm trying to connect to my Cognitive Services resource but I'm getting the following error:
(node:3246) UnhandledPromiseRejectionWarning: Error: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
I created the resource with kind CognitiveServices like this:
az cognitiveservices account create -n <name> -g <group> --kind CognitiveServices --sku S0 -l eastus --yes
Using kind CustomVision.Training didn't work either.
I have already looked at this answer, but it is not the same problem. I believe I am entering the correct credentials and endpoint.
I checked both the Azure Portal and the customvision.ai resource; I'm using the correct URL and key, but it is not working.
I even tried resetting the key, but that also had no effect.
import { TrainingAPIClient } from "@azure/cognitiveservices-customvision-training";
const { CognitiveServicesCredentials } = require("@azure/ms-rest-azure-js");
const cognitiveServiceCredentials = new CognitiveServicesCredentials("<MY_API_KEY>");
const client = new TrainingAPIClient(cognitiveServiceCredentials, "https://eastus.api.cognitive.microsoft.com");
const projects = client.getProjects()
I was also able to run it using the REST API, got HTTP 200.
You can clone this Microsoft Cognitive Services sample (UWP application) and check out the Computer Vision feature in the sample. You will have to set up App Settings in the app before you proceed.
You can follow the steps below to do that through Azure CLI commands from bash / Git Bash:
Create resource group
# Create a resource group; replace the resource group name and location as required
az group create -n kiosk-cog-service-keys -l westus
Generate keys and echo the keys
Please note! jq needs to be installed to execute the commands below. If you do not want to use jq, you can just execute the az group deployment command and then search the outputs section of the JSON, where you will find the keys.
To get the keys with the default parameters, execute the following commands:
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json) | jq '.properties.outputs'
# If you don't have jq installed, you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json
If instead you want to modify the default parameters, you need to get the cognitive-keys-azure-deploy.json and cognitive-keys-azure-deploy.parameters.json files locally and execute the following commands:
# Change working directory to Kiosk
cd Kiosk
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys. You can modify the tiers associated with the generated keys by modifying the parameter values
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json) | jq '.properties.outputs'
# If you don't have jq installed, you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json
Sample output of the above commands is as follows:
{
  "bingAugosuggestKey1": {
    "type": "String",
    "value": "cb4******************************"
  },
  "bingSearchKey1": {
    "type": "String",
    "value": "88*********************************"
  },
  "compVisionEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/vision/v1.0"
  },
  "compVisionKey1": {
    "type": "String",
    "value": "fa5**************************************"
  },
  "faceEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/face/v1.0"
  },
  "faceKey1": {
    "type": "String",
    "value": "87f7****************************************"
  },
  "textAnalyticsEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0"
  },
  "textAnalyticsKey1": {
    "type": "String",
    "value": "ba3*************************************"
  }
}
Also note that you can follow similar steps to generate only the Computer Vision key and endpoint and use them in your application.
The correct credentials object is this one:
import { ApiKeyCredentials } from "@azure/ms-rest-js";
Documentation updated, full discussion at #10362
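With that credentials class, constructing the client looks roughly like this (a sketch; the Training-key header name follows the Custom Vision convention, and the key and endpoint are placeholders):
import { ApiKeyCredentials } from "@azure/ms-rest-js";
import { TrainingAPIClient } from "@azure/cognitiveservices-customvision-training";

// Custom Vision training endpoints expect the key in the Training-key header
const credentials = new ApiKeyCredentials({ inHeader: { "Training-key": "<MY_API_KEY>" } });
const client = new TrainingAPIClient(credentials, "https://eastus.api.cognitive.microsoft.com");

async function listProjects(): Promise<void> {
  const projects = await client.getProjects();
  projects.forEach((p) => console.log(p.name));
}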

How do I prevent Microsoft.Automation/automationAccounts/Compilationjobs from always running in an ARM deployment?

My ARM template is below; it is a nested template inside a bigger ARM template. For some reason, the DSC compilation job always runs on each deployment. I expected it not to run if it had already run before. How do I control this behavior? I tried using "incrementNodeConfigurationBuild": "false", but it did not do the trick.
{
  "name": "WorkerNodeDscConfiguration",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2017-05-10",
  "resourceGroup": "[parameters('automationAccountRGName')]",
  "dependsOn": [],
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.1",
      "resources": [
        {
          "apiversion": "2015-10-31",
          "location": "[reference(variables('automationAccountResourceId'), '2018-01-15','Full').location]",
          "name": "[parameters('automationAccountName')]",
          "type": "Microsoft.Automation/automationAccounts",
          "properties": {
            "sku": {
              "name": "Basic"
            }
          },
          "tags": {},
          "resources": [
            {
              "name": "workernode",
              "type": "configurations",
              "apiVersion": "2018-01-15",
              "location": "[reference(variables('automationAccountResourceId'), '2018-01-15','Full').location]",
              "dependsOn": [
                "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]"
              ],
              "properties": {
                "state": "Published",
                "overwrite": "false",
                "incrementNodeConfigurationBuild": "false",
                "Source": {
                  "Version": "1.2",
                  "type": "uri",
                  "value": "[parameters('WorkerNodeDSCConfigURL')]"
                }
              }
            },
            {
              "name": "[guid(resourceGroup().id, deployment().name)]",
              "type": "Compilationjobs",
              "apiVersion": "2018-01-15",
              "tags": {},
              "dependsOn": [
                "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]",
                "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'),'/Configurations/workernode')]"
              ],
              "properties": {
                "configuration": {
                  "name": "workernode"
                },
                "incrementNodeConfigurationBuild": "false",
                "parameters": {
                  "WebServerContentURL": "[parameters('WebServerContentURL')]"
                }
              }
            }
          ]
        }
      ]
    }
  }
}
In short, AFAIK you should be able to control this behaviour with 'condition'.
To explain in detail: the DSC compilation jobs resource always runs on each deployment because when we use the compilation jobs resource (i.e., Microsoft.Automation/automationAccounts/compilationjobs) in the ARM template, IMHO what it does behind the scenes is basically click the 'Compile' button of the DSC configuration.
If you click that 'Compile' button, the compilation job runs for sure, even if the job was already compiled. You can check the same behaviour manually as well.
So AFAIK that is the reason the compilation job always runs on each deployment.
What you could do is update your ARM template with 'condition' (for more information, refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-templates-resources#condition and https://learn.microsoft.com/en-us/azure/architecture/building-blocks/extending-templates/conditional-deploy) and then wrap your deployment in a piece of PowerShell like the sample below, which determines whether the compilation job for the particular DSC configuration has already completed, and then deploys the template, passing an inline parameter value or updating the condition parameter in the parameters template file with a new or existing value accordingly. (For more information, refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-deploy#pass-parameter-values)
$DscCompilationJob = Get-AzAutomationDscCompilationJob -AutomationAccountName AUTOMATIONACCOUNTNAME -ResourceGroupName RESOURCEGROUPNAME |
    Sort-Object -Descending -Property CreationTime | Select-Object -First 1 | Select-Object Status
$DscCompilationJobStatus = $DscCompilationJob.Status
if ($DscCompilationJobStatus -ne "Completed") {
    $DscCompilationJobStatusInlineParameter = "new"
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName testgroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -exampleString $DscCompilationJobStatusInlineParameter
    # or update the condition parameter in the parameters template file with the "new" value and deploy with:
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -TemplateParameterFile TEMPLATEFILEPATH\demotemplate.parameters.json
} else {
    $DscCompilationJobStatusInlineParameter = "existing"
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName testgroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -exampleString $DscCompilationJobStatusInlineParameter
    # or update the condition parameter in the parameters template file with the "existing" value and deploy with:
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -TemplateParameterFile TEMPLATEFILEPATH\demotemplate.parameters.json
}
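For illustration, the compilation job in the nested template could then be gated on that parameter (a sketch; the parameter name newOrExisting and its values are assumptions, not part of the original template):
{
  "condition": "[equals(parameters('newOrExisting'), 'new')]",
  "name": "[guid(resourceGroup().id, deployment().name)]",
  "type": "Compilationjobs",
  "apiVersion": "2018-01-15",
  "dependsOn": [
    "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]",
    "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'), '/Configurations/workernode')]"
  ],
  "properties": {
    "configuration": {
      "name": "workernode"
    },
    "parameters": {
      "WebServerContentURL": "[parameters('WebServerContentURL')]"
    }
  }
}
With 'new' the job compiles as before; with 'existing' the resource is skipped entirely, so the job does not re-run.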
And regarding the incrementNodeConfigurationBuild property: IMHO that property only controls whether a new build version of the node configuration is created, i.e., when incrementNodeConfigurationBuild is set to false, it does not create a new node configuration named CONFIGNAME[<2>] (with the version number incremented from the one already present) on top of the existing node configuration.
Hope this helps!! Cheers!! :)

Script paths into Azure Data Factory DataLakeAnalytics u-sql pipeline

I'm trying to publish a Data Factory solution with this ADF DataLakeAnalyticsU-SQL pipeline activity, following the Azure step-by-step doc (https://learn.microsoft.com/en-us/azure/data-factory/data-factory-usql-activity).
{
  "type": "DataLakeAnalyticsU-SQL",
  "typeProperties": {
    "scriptPath": "\\scripts\\111_risk_index.usql",
    "scriptLinkedService": "PremiumAzureDataLakeStoreLinkedService",
    "degreeOfParallelism": 3,
    "priority": 100,
    "parameters": {
      "in": "/DF_INPUT/Consodata_Prelios_consegna_230617.txt",
      "out": "/DF_OUTPUT/111_Analytics.txt"
    }
  },
  "inputs": [
    {
      "name": "PremiumDataLakeStoreLocation"
    }
  ],
  "outputs": [
    {
      "name": "PremiumDataLakeStoreLocation"
    }
  ],
  "policy": {
    "timeout": "06:00:00",
    "concurrency": 1,
    "executionPriorityOrder": "NewestFirst",
    "retry": 1
  },
  "scheduler": {
    "frequency": "Minute",
    "interval": 15
  },
  "name": "ConsodataFilesProcessing",
  "linkedServiceName": "PremiumAzureDataLakeAnalyticsLinkedService"
}
During publishing I got this error:
25/07/2017 18:51:59- Publishing Project 'Premium.DataFactory'....
25/07/2017 18:51:59- Validating 6 json files
25/07/2017 18:52:15- Publishing Project 'Premium.DataFactory' to Data
Factory 'premium-df'
25/07/2017 18:52:15- Value cannot be null.
Parameter name: value
Trying to figure out what could be wrong with the project, it turned out that the issue resides in the activity's "typeProperties" options shown above, specifically in the scriptPath and scriptLinkedService attributes. The doc says:
scriptPath: Path to folder that contains the U-SQL script. Name of the file
is case-sensitive.
scriptLinkedService: Linked service that links the storage that contains the
script to the data factory
Publishing the project without them (using a hard-coded script) completes successfully. The problem is that I can't figure out what exactly to put into them. I tried several combinations of paths. The only thing I know is that the script file must be referenced locally in the solution as a dependency.
The script linked service needs to be Blob Storage, not Data Lake Storage.
Ignore the publishing error; it's misleading.
Have a linked service in your solution that points to an Azure Storage account, and refer to it in the 'scriptLinkedService' attribute. Then, in the 'scriptPath' attribute, reference the blob container + path.
For example:
"typeProperties": {
"scriptPath": "datafactorysupportingfiles/CreateDimensions - Daily.usql",
"scriptLinkedService": "BlobStore",
"degreeOfParallelism": 2,
"priority": 7
},
Hope this helps.
PS: Double-check case sensitivity on attribute names; that can also throw unhelpful errors.
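For reference, a minimal Azure Storage linked service named 'BlobStore' for ADF v1 (which this question targets) might look like this (a sketch; the account name and key are placeholders):
{
  "name": "BlobStore",
  "properties": {
    "type": "AzureStorage",
    "description": "Storage account that holds the U-SQL scripts",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
    }
  }
}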
