ARM Template globalization - azure-resource-manager

I have two ARM templates that deploy resources to DEV and QA separately. Now my requirement is to create only one template that deploys resources to DEV or QA based on the selected environment.
Can someone help me create an ARM template that deploys resources based on the selected environment, such as DEV or QA?

Per my understanding, you could use the logical functions for ARM templates to create Azure resources with a conditional action. Use the condition element to specify whether the resource is deployed: the value for this element resolves to true or false. When the value is true, the resource is created; when the value is false, it isn't. The condition can only be applied to the whole resource.
For more information, read the tutorial: Use condition in ARM templates.
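For example, a minimal sketch of a single template driven by an environment parameter (the storage account resource and API version are illustrative):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment": {
      "type": "string",
      "allowedValues": [ "DEV", "QA" ],
      "metadata": { "description": "Target environment for this deployment" }
    }
  },
  "resources": [
    {
      "condition": "[equals(parameters('environment'), 'QA')]",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[concat('st', toLower(parameters('environment')), uniqueString(resourceGroup().id))]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}

Here the storage account is only deployed when QA is selected, while the same environment parameter can also drive names and property values for resources deployed in both environments.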

Related

Firebase Deploy Error: 'Changing secrets is not supported' when using separate projects with Stripe 'Run Payments with Stripe' extension

I'm trying to set up multiple environments for my Vue / Firebase project.
I have two Firebase projects:
1.) Dev
2.) Prod
The project uses the Stripe extension, which pulls the API key from an auto-generated file called:
firestore-stripe-payments.env
which contains:
STRIPE_API_KEY=projects/${param:PROJECT_NUMBER}/secrets/firestore-stripe-payments-STRIPE_API_KEY-xxxx/versions/latest
where xxxx is a random four-character string.
That line pulls the value of the key from Google Secret Manager.
Let's say the Dev suffix is 'dddd'
and the Prod suffix is 'pppp'.
The issue is that I can only define either:
firestore-stripe-payments-STRIPE_API_KEY-dddd
or
firestore-stripe-payments-STRIPE_API_KEY-pppp
At first I tried to create a new value within Google Secret Manager simply called:
firestore-stripe-payments-STRIPE_API_KEY
The thought was that this would be a simple fix: it would pull the associated API key for whichever project is currently in use.
But this causes the error:
Error: firestore-stripe-payments: Found 'projects/foo/secrets/firestore-stripe-payments-STRIPE_API_KEY/versions/latest' for secret param STRIPE_API_KEY, but this instance was previously using a different secret projects/fooo/secrets/firestore-stripe-payments-STRIPE_API_KEY-dddd.
Changing secrets is not supported. If you want to change the value of this secret, use a new version of projects/foo/secrets/firestore-stripe-payments-STRIPE_API_KEY-dddd.You can create a new version at https://console.cloud.google.com/security/secret-manager?project=fooo
Also, if there is a better place to ask this question, please let me know; I couldn't find the 'right' room.
For this scenario, you could include a separate env file (e.g. .env.dev) using the following guidelines:
.env # loaded in all cases
.env.local # loaded in all cases, ignored by git
.env.[mode] # only loaded in specified mode
.env.[mode].local # only loaded in specified mode, ignored by git
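For example, each mode file could pin that environment's secret resource (the project numbers below are illustrative; the suffixes follow your dddd/pppp example):

# .env.development
STRIPE_API_KEY=projects/111111111111/secrets/firestore-stripe-payments-STRIPE_API_KEY-dddd/versions/latest

# .env.production
STRIPE_API_KEY=projects/222222222222/secrets/firestore-stripe-payments-STRIPE_API_KEY-pppp/versions/latest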
For generating separate keys for each environment: from your example, I believe you are using a single Stripe extension on a single project.
Firebase Extensions support multiple instances of an extension per project; installing a second instance will create a separate "dev" secret for you to use.
Additionally, a separate Firebase project with its own "Stripe Extension" installation would be recommended, to keep development concerns separated.
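As a sketch with the firebase-tools CLI (the project ID is illustrative; the CLI prompts for a new instance ID when a second instance is installed):

# install the extension into the separate dev project
firebase ext:install stripe/firestore-stripe-payments --project=my-dev-project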

Using Octopus to configure log4net.config in .NET Core 3.1 Web API project

I'm using Octopus as part of our deployment for a .NET Core 3.1 Web API project.
log4net.config exists in .\Utility\Logs.
I'm trying to follow the pattern here:
https://help.octopus.com/t/transformation-best-practice-log4net-config-or-any-non-web-app-config/9906/4
As I understand it, this consists of three parts:
1.) Create Log4Net.DeploymentTransform.config, with the variables in #{name} format - this has been done.
2.) Turn on the "Substitute variables in files" feature and point it at the Log4Net.DeploymentTransform.config transformation file (variable replacement happens before transformation). That results in the #{LogFileLocation} variable being replaced with whatever value is set for the LogFileLocation variable in the current scope.
This is done and is working: when my app is deployed, Log4Net.DeploymentTransform.config is there as well, and the variable has been successfully set in it.
3.) Turn on the configuration transforms feature and fill out the additional transforms section in that feature to identify your transform file (e.g. Log4Net.DeploymentTransform.config => log4net.config).
This is not working: the content of Log4Net.DeploymentTransform.config is not being copied on top of log4net.config, though both files are in the same folder upon deployment.
Here is what I did in our "deploy step", which sure looks like what the article is saying to do.
What else should I check? Any idea why step 3 isn't occurring?
Your syntax looks correct for the files - have you checked to ensure that you have the xdt attributes set?
In the example forum post you shared, the log4net and appender elements are tagged with xmlns:xdt, xdt:Transform and xdt:Match attributes that help the XDT layer determine how to transform the files.
A quick example - I created a Log4NetConfigTest package with two files -
Utility/Logs/log4net.config
Utility/Logs/log4net.transform.config
I used the same sample code from the forum post as well.
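For reference, a minimal sketch of what such a transform file can look like (the appender name and structure are illustrative, following the forum post's pattern):

<!-- Utility/Logs/log4net.transform.config -->
<log4net xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender" xdt:Locator="Match(name)">
    <!-- replaces the matching appender's <file> element, so the path comes from the already-substituted Octopus variable -->
    <file value="#{LogFileLocation}" xdt:Transform="Replace" />
  </appender>
</log4net>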
Here's the setup for my package deployment configurations:
With that set up (and my LogFileLocation project variable set), I was able to see the following in my task log for the deployment:
Deploying package: C:\Octopus\Files\Log4NetConfigTest#S1.0.0#20004C95A0E0094490814B5A365DDAD2.zip
Transforming 'C:\Octopus\Applications\Development\Log4NetConfigTest\1.0.0_1\Utility\Logs\log4net.config' using 'C:\Octopus\Applications\Development\Log4NetConfigTest\1.0.0_1\Utility\Logs\log4net.transform.config'.
No matching appSetting, applicationSetting, nor connectionString names were found in: C:\Octopus\Applications\Development\Log4NetConfigTest\1.0.0_1\Utility\Logs\log4net.config
The task log confirmed that Octopus did apply the transformations, and once deployed, I confirmed that my new test log location was present and correct in both the base configuration and the transformation file.

How to configure fail-fast strategy for Zuul CI jobs for a specific project?

How can someone configure a specific project (repository) to use a fail-fast strategy for the jobs running inside it?
A fail-fast strategy in CI means cancelling any running jobs as soon as the first one that counts towards the final result fails (saving compute resources and providing faster feedback).
Please note that I explicitly asked about a specific project, as this option should not affect other projects that may have different preferences.
The fail-fast option is by design a per-project (or per-project-template) and per-pipeline setting [1].
e.g. from within an untrusted project:
- project:
    check:
      fail-fast: true
    jobs:
      - test foo
Or you can set it in a config project for a different project:
- project:
    name: org/other-project
    check:
      fail-fast: true
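Since the option also works per project-template, the same setting can be factored into a template (the template name here is illustrative):
- project-template:
    name: shared-fail-fast
    check:
      fail-fast: true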
[1] https://zuul-ci.org/docs/zuul/reference/project_def.html#attr-project.%3Cpipeline%3E.fail-fast

Override publish folder (url) of publish profile during the build pipeline

We have one ASP.NET solution with several projects. Each project has a build.pubxml with a unique folder path.
For example:
In project Test we have this line inside the build.pubxml:
<publishUrl>C:\publish\SolutionName\Test</publishUrl>
In project Exam we have this line inside the build.pubxml:
<publishUrl>C:\publish\SolutionName\Exam</publishUrl>
In the build pipeline (in TFS) we have an MSBuild step with this argument:
/p:PublishProfile=build.pubxml
After the build we got 2 folders - Test and Exam in C:\publish\SolutionName.
So far so good.
The problem is that we have a few branches, and we want a separate publish folder for each branch. So we added a .pubxml for each branch and specified the correct one in the build pipeline, but that creates a lot of work for each new branch and can cause mistakes.
We tried to pass /p:publishUrl=C:\publish\BranchName to MSBuild, but then we got one folder with all the content of Test and Exam rather than two folders.
The idea is to have only one .pubxml file for each project with a parameter and pass the value in the pipeline, for example:
<publishUrl>C:\publish\$(Parameter)\Test</publishUrl>
And in the build we will pass the parameter according to the branch.
Is it possible to do something like this?
"Is it possible to do something like this?"
The answer is yes, since MSBuild accepts global properties on the command line. If we define a property in build.pubxml like <publishUrl>C:\PublishFolders\$(BranchID)\xxx(Test,Exam...)</publishUrl>, then we can simply pass the value in the MSBuild arguments.
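For example (the BranchID value is illustrative):

/p:PublishProfile=build.pubxml /p:BranchID=NewTest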
Then we'll get Test and Exam folders under C:\PublishFolders\NewTest. We can also pass the pipeline's predefined variables to the command, e.g. /p:BranchID=$(Build.SourceBranch)...
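For completeness, a minimal parameterized build.pubxml along these lines might look like this (the FileSystem publish method and the Local default are illustrative):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>FileSystem</WebPublishMethod>
    <!-- BranchID normally arrives from the command line as a global property; the default keeps local builds working -->
    <BranchID Condition="'$(BranchID)' == ''">Local</BranchID>
    <publishUrl>C:\PublishFolders\$(BranchID)\Test</publishUrl>
    <DeleteExistingFiles>True</DeleteExistingFiles>
  </PropertyGroup>
</Project>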
This works for builds on a local machine, in TFS, and in Azure DevOps pipelines. Hope all of the above helps :)

BizTalk deployment - Binding Files not well imported into custom pipeline component

When I deploy my applications I notice a very strange behaviour: not all binding files are imported as they should be, while others are.
I'm using a custom pipeline component that promotes a set of properties to the context; some are defined in the pipeline itself, while others are defined in the BTS Admin console so we can set them in the binding files.
<ReceivePipelineData><Root xmlns:xsd="http://www.w3.org/2001/XMLSchema" … Components><Component Name="I.MMA.COMPONENT"><Properties ><Customer vt="8">CUSTOMER_NAME</Customer>< … etc.
This works fine for some of our applications: they deploy correctly, and the values configured in the binding file are visible in the admin console. Other applications don't show the same information, and the strange part is that when we generate the binding files for the failing applications, the values are in the binding file but are neither visible nor used by BizTalk.
I see this behaviour on different machines. Does anybody have an idea what is going on?
I recently moved my pipeline component from one stage to another without updating the Stage CategoryId, and the values were ignored.
I've seen something similar and fixed it by manually re-entering the pipeline settings in the admin console. I suspect that if the binding file settings are not exactly what the console expects for the pipeline, it doesn't display them. So try re-entering the settings for the affected ports and then update your binding files from that.
