I have a release pipeline in Azure DevOps; one of the steps runs tests written with NUnit.
YAML of the task:
steps:
- task: DotNetCoreCLI@2
  displayName: 'Run Tests'
  inputs:
    command: custom
    projects: '**/a/Tests.dll'
    custom: vstest
    arguments: '--logger:"trx;LogFileName=test.trx" '
In my test setup I am retrieving a few settings from environment variables using:
private static IConfiguration InitConfiguration()
{
    var config = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .AddEnvironmentVariables()
        .Build();
    return config;
}
I am also storing the values of my environment variables in Azure DevOps variable groups.
Everything works fine when my variables are not locked: they are retrieved successfully and the tests run. However, when I lock the secret ones, their values come back empty and my tests fail. What should I change so that locked variables are read correctly by my application?
By default, pipeline variables are mapped into environment variables, but when you lock them you actually make them secrets. To have a secret available as an environment variable you must map it explicitly. Take a look at this:
variables:
  GLOBAL_MYSECRET: $(mySecret) # this will not work because the secret variable needs to be mapped as env
  GLOBAL_MY_MAPPED_ENV_VAR: $(nonSecretVariable) # this works because it's not a secret.

steps:
- powershell: |
    Write-Host "Using an input-macro works: $(mySecret)"
    Write-Host "Using the env var directly does not work: $env:MYSECRET"
    Write-Host "Using a global secret var mapped in the pipeline does not work either: $env:GLOBAL_MYSECRET"
    Write-Host "Using a global non-secret var mapped in the pipeline works: $env:GLOBAL_MY_MAPPED_ENV_VAR"
    Write-Host "Using the mapped env var for this task works and is recommended: $env:MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the recommended way to map to an env variable

- bash: |
    echo "Using an input-macro works: $(mySecret)"
    echo "Using the env var directly does not work: $MYSECRET"
    echo "Using a global secret var mapped in the pipeline does not work either: $GLOBAL_MYSECRET"
    echo "Using a global non-secret var mapped in the pipeline works: $GLOBAL_MY_MAPPED_ENV_VAR"
    echo "Using the mapped env var for this task works and is recommended: $MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the recommended way to map to an env variable
So if you want to have a secret available in one task you need to add something like this to that task:
env:
  MY_MAPPED_ENV_VAR: $(mySecret)
And you need to do this for each secret separately.
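Applied to the test task from the question, that would look roughly like this (a sketch; MY_SECRET_SETTING, ANOTHER_SECRET_SETTING, MySecret1, and MySecret2 are placeholders for your actual env-var names and locked variables):

steps:
- task: DotNetCoreCLI@2
  displayName: 'Run Tests'
  inputs:
    command: custom
    projects: '**/a/Tests.dll'
    custom: vstest
    arguments: '--logger:"trx;LogFileName=test.trx" '
  env:
    MY_SECRET_SETTING: $(MySecret1)        # placeholder: map each locked variable explicitly
    ANOTHER_SECRET_SETTING: $(MySecret2)   # placeholder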
With a classic release pipeline it is the same: you need to map the secrets explicitly in the task's Environment Variables section.
Related
I'm creating a Quasar 2/Vue 3 app that runs in OpenShift. We have secrets created in OpenShift for env variables like the path to the auth server. I have variables created in quasar.config.js that work with static values; now I need to replace them with the values from the secrets.
build: {
  env: {
    VUE_APP_AUTHORITY_URL: "https://keycloak.dev.xxxxxx.com/auth"
  },
  . . .
How do I format this so my build process in OpenShift pulls the secrets?
I am creating a release pipeline on Azure DevOps Server and I have a problem.
How can I change properties in a .NET Core configuration file (appsettings.EnvName.json)?
When I created applications on the full .NET Framework I had a parameters.xml where I set an XPath to the value, a default value, and a property name, and in the pipeline I set the key-value pairs. But for a .NET Core app this method doesn't work =)
I want to use roughly the same approach, where I indicate the path to the value and set its value. For example:
ConnectionStrings.Db1="Server={DB1.Server};Database={DB1.DbName};Trusted_Connection = True;"
ConnectionStrings.Db2="Server={DB2.Server};Database={DB2.DbName};Trusted_Connection = True;"
For now I have added a step that executes an arbitrary PowerShell script on the remote server:
$jsonFile = 'appsettings.Template.json'
$jsonFileOut = 'appsettings.Production.json'

$configValues =
    'ConnectionStrings.Db1="Server={DB1.Server};Database={DB1.DbName};Trusted_Connection = True;"',
    'ConnectionStrings.Db2="Server={DB2.Server};Database={DB2.DbName};Trusted_Connection = True;"'

$config = Get-Content -Path $jsonFile | ConvertFrom-Json

ForEach ($item in $configValues)
{
    # split only on the first '=' so the (already quoted) value may itself contain '=' characters
    $kv = $item -split '=', 2
    Invoke-Expression $('$config.' + $kv[0] + '=' + $kv[1])
}

$config | ConvertTo-Json | Out-File $jsonFileOut
But I don't really like this solution. How can I do the same in a cleaner way?
.NET Core handles this in a different way. The full .NET Framework was based on app.config transformations: you defined one file which was later transformed for a given build configuration (like Debug, Release, or your own). In .NET Core you instead define an appsettings.json file for each environment. This works very well because all settings files ship with your compiled app, and at runtime the proper settings file is selected based on the ASPNETCORE_ENVIRONMENT environment variable. Thus you can have one package for all your environments without recompilation. To benefit from this you must define a file per environment, but this is not a transformation; it is a full file.
For instance, the file for your local development may look like this:
{
  "ConnectionStrings": {
    "BloggingDatabase": "Server=(localdb)\\mssqllocaldb;Database=EFGetStarted.ConsoleApp.NewDb;Trusted_Connection=True;"
  }
}
And the file for your dev environment, appsettings.dev.json, may look like this:
{
  "ConnectionStrings": {
    "BloggingDatabase": "Server=102.10.10.12\\mssqllocaldb;Database=EFGetStarted.ConsoleApp.NewDb;Trusted_Connection=True;"
  }
}
And then, to configure loading these files, you need a Startup constructor like this:
public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    this.Configuration = builder.Build();
}
This will load all your appsettings files and later use the proper one based on the environment variable.
To set this variable you can use this command at the command prompt: setx ASPNETCORE_ENVIRONMENT Dev, or this one in PowerShell: [Environment]::SetEnvironmentVariable("ASPNETCORE_ENVIRONMENT", "Dev", "Machine").
I hope this helps you understand how settings work in .NET Core. If you need more guidance please check these links:
Configuration in ASP.NET Core
Use multiple environments in ASP.NET Core
To sum up: you don't need to change your settings in the release pipeline. You need to prepare a full file per environment where you are going to host your app. You may still be interested in replacing some values in the file based on variables in your pipeline; you can consider a few options here, like
token replacement
JSON variable substitution example
This is useful when you don't want to keep your secrets directly in source code.
EDIT
If you want to replace values in your appsettings file, one of the options is token replacement. For this, instead of real values you first keep tokens in your file. For instance, #{SomeVariable}# will be replaced with the value of SomeVariable from your pipeline, given the default configuration of the token replace task.
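For example, a tokenized appsettings template could look roughly like this (a sketch; Db1Server, Db1Name, Db2Server, and Db2Name are hypothetical pipeline variable names, and the #{...}# delimiters assume the token replace task's default prefix/suffix):

{
  "ConnectionStrings": {
    "Db1": "Server=#{Db1Server}#;Database=#{Db1Name}#;Trusted_Connection=True;",
    "Db2": "Server=#{Db2Server}#;Database=#{Db2Name}#;Trusted_Connection=True;"
  }
}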
I'm trying to set an environment variable for an API key that I don't want in my code. My source JavaScript looks something like this:
.get(`http://api-url-and-parameters&api-key=${process.env.API_KEY}`)
I'm using webpack and the package dotenv-webpack (https://www.npmjs.com/package/dotenv-webpack) to set API_KEY in a gitignored .env file, and it all runs fine locally. I'd also like to be able to set that variable when deploying through Netlify. I've tried adding it through the GUI to the 'build environment variables', and also setting it directly in the build command, but without success.
Any idea what might be the issue?
WARNING: If this is a secret key, you will not want to expose this environment variable value in any bundle that gets returned to the client. It should only be used by your build scripts to create your content during the build.
Issue
dotenv-webpack expects there to be a .env file to load your variables from during the webpack build of your bundle. When the repository is checked out by Netlify, the .env file does not exist because, for good reason, it is in .gitignore.
Solution
Store your API_KEY in the Netlify build environment variables and build the .env using a script prior to running the build command.
scripts/create-env.js
const fs = require('fs')
fs.writeFileSync('./.env', `API_KEY=${process.env.API_KEY}\n`)
Run the script as part of your build
node ./scripts/create-env.js && <your_existing_webpack_build_command>
Caveats & Recommendations
Do not use this method with a public-facing (open) repository, because any PR or branch deploy could add a simple script to your code that exposes the API_KEY.
The example script above is kept simple for clarity; make any script you use exit with a non-zero code on error, so that if the script fails the deploy fails too.
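For instance, a slightly hardened version of the script above might look like this (a sketch following that recommendation; it aborts the build when API_KEY is missing):

// scripts/create-env.js
const fs = require('fs')

// Fail the build early if Netlify did not provide the variable.
if (!process.env.API_KEY) {
  console.error('API_KEY is not set; aborting the build.')
  process.exit(1)
}

fs.writeFileSync('./.env', `API_KEY=${process.env.API_KEY}\n`)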
You can set Dotenv-webpack to load system environment variables as well as those you have declared in your .env file by doing the following:
plugins: [
  new Dotenv({
    systemvars: true
  })
]
I.e., setting the systemvars attribute of your webpack dotenv plugin to true.
Note that system environment variables with the same name will overwrite those defined in your .env file.
Source: https://www.npmjs.com/package/dotenv-webpack#properties
If you go to the corresponding site's settings in Netlify, under Build & deploy you can find a section called Environment variables; you can easily add your environment variables from there. If you add a MY_API_KEY variable to the environment variables, you will be able to access it inside your project via process.env.MY_API_KEY.
If you're using Nuxt.js there is a more "straightforward" approach.
Just edit the nuxt.config.js like so:
module.exports = {
  env: {
    GOOGLE_API_KEY: process.env.GOOGLE_API_KEY
  },
  // ...
}
Then add the GOOGLE_API_KEY to Netlify through the build environment variables as usual.
Credit goes to yann-linn and his answer on github.
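Once that is in place, the key can be read anywhere in the app through process.env, roughly like this (a minimal sketch; the URL is only an illustration):

// e.g. inside a component method or a plugin
const url = `https://maps.googleapis.com/maps/api/geocode/json?key=${process.env.GOOGLE_API_KEY}`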
What you can also do is define a global constant in webpack. Netlify environment variables defined in the UI will work with it; you don't need dotenv or dotenv-webpack.
webpack.config.js
const webpack = require("webpack");

module.exports = {
  plugins: [
    new webpack.DefinePlugin({
      "process.env.API_KEY": JSON.stringify(process.env.API_KEY)
    }),
  ]
}
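With that in place, DefinePlugin substitutes the literal value of process.env.API_KEY into your bundle at build time, so code like the snippet from the question keeps working unchanged (fetch is used here purely as an example client):

// at build time, process.env.API_KEY below is replaced with the real string
fetch(`http://api-url-and-parameters&api-key=${process.env.API_KEY}`)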
However, again, you of course shouldn't do this, injecting environment variables into the frontend, if your API key is confidential and the project is public. The API key will appear in the source code of the website and will be easily accessible to everyone visiting it. A Lambda function would be a better option.
You can also use Netlify's config file...
You can find documentation here.
I also wanted to have the same env variables with different values per branch/environment.
This workaround worked for me:
Create a netlify.toml file like:
[build.environment]
NUXT_ENV_BASE_API = "/api"
NUXT_ENV_HOST_DOMAIN = "https://your-domain.gr"

[context.branch-deploy]
environment = { NUXT_ENV_BASE_API = "/dev-api", NUXT_ENV_HOST_DOMAIN = "https://dev.your-domain.gr" }

[context.production]
environment = { NUXT_ENV_BASE_API = "/api", NUXT_ENV_HOST_DOMAIN = "https://your-domain.gr" }
And deploy in Netlify ...
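If your Nuxt version supports the NUXT_ENV_ prefix, these variables are injected into process.env automatically and can be read in the app roughly like this (a small sketch; the values come from whichever Netlify context built the site):

// anywhere in the Nuxt app
const apiBase = process.env.NUXT_ENV_BASE_API     // "/api" or "/dev-api"
const host = process.env.NUXT_ENV_HOST_DOMAIN     // e.g. "https://your-domain.gr"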
I'm using Cloud Functions for Firebase with three different projects for development, testing and production purposes. Each project has a service-account.json. When I deploy the sources to an environment, the initialization looks like this:
var serviceAccount = require("./service-account-dev.json");

firebase.initializeApp({
  credential: firebase.credential.cert(serviceAccount),
  databaseURL: "https://nwDEV.firebaseio.com"
});
This is a bit difficult to handle, because I have to change the code every time I want to deploy to a different environment. Is there a way to have an overall configuration, e.g. in firebase.json or .firebaserc, which allows integrating the service account and decides at deployment which configuration to choose?
Otherwise, is there a way to detect which environment the code is running in, load the specific service-account.json, and set the databaseURL property?
You can use environment variables. https://firebase.google.com/docs/functions/config-env
Select the project (you can use the command firebase projects:list to see them):
firebase use my-project-development
Set an environment variable
firebase functions:config:set app.environment="dev"
In your functions file, apply a conditional to choose the file:
const serviceAccount = functions.config().app.environment === 'dev' ? 'credentials-dev.json' : 'credentials-prod.json';
Then you can use the file depending on the project:
firebase.initializeApp({
  credential: firebase.credential.cert(serviceAccount),
  databaseURL: "https://nwDEV.firebaseio.com"
});
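Put together, the start of the functions file could look roughly like this (a sketch that only combines the snippets above; nwPROD is a placeholder for the production database URL, and firebase refers to the admin SDK as in the question):

const functions = require('firebase-functions');
const firebase = require('firebase-admin');

// "app.environment" was set with: firebase functions:config:set app.environment="dev"
const env = functions.config().app.environment;
const serviceAccount = env === 'dev' ? 'credentials-dev.json' : 'credentials-prod.json';

firebase.initializeApp({
  credential: firebase.credential.cert(serviceAccount),
  // placeholder URLs: use the database URL of the matching project
  databaseURL: env === 'dev' ? 'https://nwDEV.firebaseio.com' : 'https://nwPROD.firebaseio.com'
});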
From what I understand of your question, you are looking for a way to deploy your cloud functions with the appropriate settings for production, development, and testing, which I assume means each of these is a separate project, and therefore database, in your Firebase environment.
If the above is true then the following should help.
Firebase Cloud Functions, and the CLI more generally, can deploy to a specific project in your Firebase environment. To do this, execute the following command in the terminal while in the cloud functions directory:
$ firebase use --add
This will allow you to pick your additional project (for instance, development) and assign it an alias (I recommend "development" if it is as such). Then when deploying your functions you can choose which project (and therefore database) to deploy to by using the alias.
$ firebase use default # sets environment to the default alias
$ firebase use development # sets environment to the development alias
For more information please see: https://firebase.googleblog.com/2016/07/deploy-to-multiple-environments-with.html
One thing you may have to do for this to work is use the default config settings for Cloud Functions:
admin.initializeApp(functions.config().firebase);
Short answer:
The GCLOUD_PROJECT environment variable will be unique to your project, so you can use it like this (the sample code is for 2 different projects, but you can extend it using a switch or any other conditional statement):
const env = process.env.GCLOUD_PROJECT === 'my-app-prod' ? 'prod' : 'dev';
Then use that env variable to load the intended configuration.
Full example: (TypeScript)
Update the .firebaserc file:
{
  "projects": {
    "default": "my-app-dev",
    "prod": "my-app-prod"
  }
}
Create and modify your ./somewhere/config.ts file accordingly; let's say you're using AWS services (please make sure to secure your configuration details):
export const config = {
  dev: {
    awsRegion: 'myDevRegion',
    awsAccessKey: 'myDevKey',
    awsSecretKey: 'myDevSecretKey'
  },
  prod: {
    awsRegion: 'myProdRegion',
    awsAccessKey: 'myProdKey',
    awsSecretKey: 'myProdSecretKey'
  }
};
Now the above items can be used in the index.ts file:
import { config } from './somewhere/config';
import * as aws from 'aws-sdk';

. . .

const env = process.env.GCLOUD_PROJECT === 'my-app-prod' ? 'prod' : 'dev';

const awsCredentials = {
  region: config[env].awsRegion,
  accessKeyId: config[env].awsAccessKey,
  secretAccessKey: config[env].awsSecretKey
};

aws.config.update(awsCredentials);

. . .

export const myFuncToUseAWS = functions....
Now the deployment:
Dev environment deployment: $ firebase deploy --only functions -P default
Prod environment deployment: $ firebase deploy --only functions -P prod
In my Meteor (1.2) app I have separate files for development and production,
e.g.
client/lib/appVars.config.PROD.js
client/lib/appVars.config.CONFIG.js
Ideally the "twin" files have the same variables, functions etc. with little differences but (global) variables and functions which are common to debug and production have the same name.
Is there a way to call meteor run with a command line parameter DEBUG_MODE = true | false so that I cad load either one or the other file, depending on the current mode (debug, production)?
Set different environment variables and run via the CLI with meteor run --settings settings.json.
Then you just need development and production (and staging?) settings.json files.
Example of a settings file:
{
  "awsBucket": "my-example-staging",
  "awsAccessKeyId": "AABBCCddEEff12123131",
  "awsSecretKey": "AABBCCddEEff12123131+AABBCCddEEff12123131",
  "public": {
    "awsBucketUrl": "https://my-meteor-example.s3.amazonaws.com",
    "environment": "staging"
  },
  "googleApiKey": "AABBCCddEEff12123131"
}
EDIT ADD:
To access your settings keys, just use:
Meteor.settings.awsBucket
Security Update (thanks Dave Weldon)
See https://docs.meteor.com/#/full/structuringyourapp
Re production vs development: you should have two settings.json files, the standard one for production (.config/settings.json) and a development one (.config/development/settings.json), and when you boot outside of production you boot with meteor --settings .config/development/settings.json
Re the client side, note that if you make a key public, e.g.
{
  "service_id": "...",
  "service_secret": "...",
  "public": {
    "service_name": "..."
  }
}
Then only Meteor.settings.public.service_name will be accessible on the client.
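A quick sketch of what that distinction looks like in code (client side; the key names follow the example above):

// client code
console.log(Meteor.settings.public.service_name); // "..." - available on the client
console.log(Meteor.settings.service_secret);      // undefined - non-public keys stay on the server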