I have an Azure Function (.NET Core) that is configured to read application settings from both a JSON file and environment variables:
var configurationBuilder = new ConfigurationBuilder()
.SetBasePath(_baseConfigurationPath)
.AddJsonFile("appsettings.json", optional: true)
.AddEnvironmentVariables()
.Build();
BuildAgentMonitorConfiguration configuration = configurationBuilder.Get<BuildAgentMonitorConfiguration>();
appsettings.json has the following structure:
{
"ProjectBaseUrl": "https://my-project.visualstudio.com/",
"ProjectName": "my-project",
"AzureDevOpsPac": ".....",
"SubscriptionId": "...",
"AgentPool": {
"PoolId": 38,
"PoolName": "MyPool",
"MinimumAgentCount": 2,
"MaximumAgentCount": 10
},
"ContainerRegistry": {
"Username": "mycontainer",
"LoginServer": "mycontainer.azurecr.io",
"Password": "..."
},
"ActiveDirectory": {
"ClientId": "...",
"TenantId": "...",
"ClientSecret": "..."
}
}
Some of these settings are configured as environment variables in the Azure Function, and everything works as expected.
The problem now is configuring some of these variables in a build pipeline, where they are used in unit and integration tests. I've tried adding a variable group as follows and linking it to the pipeline:
But the environment variables are not being set and the tests are failing. What am I missing here?
I had the same use case: I wanted an environment variable to be set from the Azure build pipeline so that the test cases could read it and pass.
Directly setting the environment variable with export or ENV does not carry over to subsequent tasks. To make the variable available to subsequent tasks, follow the syntax described at https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch, i.e. the task.setvariable logging command inside a script step.
Correct way of setting an ENV variable using the build pipeline:
- script: |
echo '##vso[task.setvariable variable=LD_LIBRARY_PATH]$(Build.SourcesDirectory)/src/Projectname/bin/Release/netcoreapp2.0/x64'
displayName: set environment variable for subsequent steps
Please be careful with the spaces, as this is YAML. The script above sets the variable LD_LIBRARY_PATH (used on Linux to define the search path for .so files) to the directory given.
This style of setting an environment variable also works for subsequent tasks. If instead you set the variable as shown below, it applies only to that specific shell instance and will not be available to subsequent tasks.
Wrong way of setting the env variable:
- script: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(Build.SourcesDirectory)/src/CorrectionLoop.HttpApi/bin/Release/netcoreapp2.0/x64
displayName: Set environment variable
You can use similar syntax to set up your own environment variables.
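Putting the two steps together, a minimal sketch of setting a variable in one step and reading it in the next (the variable name and value here are illustrative):

```yaml
steps:
  - script: |
      echo '##vso[task.setvariable variable=MY_TEST_URL]https://localhost:5001'
    displayName: Set variable for subsequent steps
  - script: |
      echo "MY_TEST_URL is $(MY_TEST_URL)"
    displayName: Read the variable in a later step
```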
I ran into this as well when generating EF SQL scripts from a build task. According to the docs, the variables you define in the "Variables" tab are also provided to the process as environment variables.
Notice that variables are also made available to scripts through environment variables. The syntax for using these environment variables depends on the scripting language. The name is upper-cased, . is replaced with _, and it is automatically inserted into the process environment.
In my case, I just had to load a connection string and deal with the case difference of the key between the JSON file and the environment:
var config = new ConfigurationBuilder()
.AddJsonFile("appsettings.json", true, true)
.AddEnvironmentVariables()
.Build();
var connectionString = config["connectionString"] ?? config["CONNECTIONSTRING"];
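As a quick sanity check of that mapping, here is a small shell sketch (the variable name is illustrative) of how a pipeline variable name becomes its environment-variable form — upper-cased, with . replaced by _:

```shell
# Illustrative only: mimic the documented name mapping for pipeline variables.
name="connection.string"
env_name=$(printf '%s' "$name" | tr '.' '_' | tr '[:lower:]' '[:upper:]')
echo "$env_name"   # CONNECTION_STRING
```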
If you are using bash, their example does not work, because the documentation refers to the variables incorrectly. Instead it should be:
Store secret
#!/bin/bash
echo "##vso[task.setvariable variable=sauce]crushed tomatoes"
echo "##vso[task.setvariable variable=secret.Sauce;issecret=true]crushed tomatoes with garlic"
Retrieve secret
Wrong: Their example
#!/bin/bash
echo "No problem reading $1 or $SAUCE"
echo "But I cannot read $SECRET_SAUCE"
echo "But I can read $2 (but the log is redacted so I do not spoil the secret)"
Right:
#!/bin/bash
echo "No problem reading $(sauce)"
echo "But I cannot read $(secret.Sauce)"
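Note also that secret variables are not automatically exported to the environment; if a script needs the secret as a real environment variable, it has to be mapped explicitly on the step. A sketch, using the names from the example above:

```yaml
- script: |
    echo "Reading the explicitly mapped secret"
    echo "$SECRET_SAUCE"   # value is redacted in the pipeline log
  env:
    SECRET_SAUCE: $(secret.Sauce)
```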
Configure KeyVault Secrets in Variable Groups for FileTransform#1
The steps below read KeyVault secrets used in a variable group and add them to the environment variables for FileTransform#1 to use.
Set up KeyVault.
Create Variable Group and import the values you want to use for the Pipeline.
In this example we used:
- ConnectionStrings--Context
- Cloud--AuthSecret
- Compumail--ApiPassword
Setup names to match keyVault names: (you can pass these into the yml steps template)
#These parameters are here to support Library > Variable Groups > with "secrets" from a KeyVault
#KeyVault key names cannot contain "_" or ".", which FileTransform#1 expects
#This script takes keys containing "--", replaces the "--" with ".", and adds them as "env:" variables so the transform can do its thing.
parameters:
- name: apiSecretKeys
displayName: apiSecretKeys
type: object
default:
- ConnectionStrings--Context
- Cloud--AuthSecret
- Compumail--ApiPassword
stages:
- template: ./locationOfTemplate.yml
parameters:
apiSecretKeys: ${{ parameters.apiSecretKeys }}
... build api - publish to .zip file
Setup Variable groups on "job level"
variables:
#next line here for pipeline validation purposes..
- ${{if parameters.variableGroup}}:
- group: ${{parameters.variableGroup}}
#OR
#- VariableGroupNameContainingSecrets
Template file: (the magic)
parameters:
- name: apiSecretKeys
displayName: apiSecretKeys
type: object
default: []
steps:
- ${{if parameters.apiSecretKeys}}:
- powershell: |
$envs = Get-childItem env:
$envs | Format-List
displayName: List Env Vars Before
- ${{ each key in parameters.apiSecretKeys }}:
- powershell: |
$writeKey = "${{key}}".Replace('--','.')
Write-Host "oldKey :" ${{key}}
Write-Host "value :" "$(${{key}})"
Write-Host "writeKey:" $writeKey
Write-Host "##vso[task.setvariable variable=$writeKey]$(${{key}})"
displayName: Writing Dashes To LowDash for ${{key}}
- ${{ each key in parameters.apiSecretKeys }}:
- powershell: |
$readKey = "${{key}}".Replace('--','.')
Write-Host "oldKey :" ${{key}}
Write-Host "oldValue:" "$(${{key}})"
Write-Host "readKey :" $readKey
Write-Host "newValue:" ('$env:'+"$($readKey)" | Invoke-Expression)
displayName: Read From New Env Var for ${{key}}
- powershell: |
$envs = Get-childItem env:
$envs | Format-List
name: List Env Vars After adding secrets to env
Run the FileTransform#1 task.
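A minimal sketch of that step (the folderPath and targetFiles values here are assumptions about your layout):

```yaml
- task: FileTransform@1
  inputs:
    folderPath: '$(Build.ArtifactStagingDirectory)/*.zip'
    fileType: 'json'
    targetFiles: '**/appsettings.json'
```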
Enjoy ;)
Related
I'm new to nf-core/Nextflow and, needless to say, the documentation does not reflect what is actually implemented. I'm defining the basic pipeline below:
nextflow.enable.dsl=2
process RUNBLAST{
input:
val thr
path query
path db
path output
output:
path output
script:
"""
blastn -query ${query} -db ${db} -out ${output} -num_threads ${thr}
"""
}
workflow{
//println "I want to BLAST $params.query to $params.dbDir/$params.dbName using $params.threads CPUs and output it to $params.outdir"
RUNBLAST(params.threads,params.query,params.dbDir, params.output)
}
Then I execute the pipeline with
nextflow run main.nf --query test2.fa --dbDir blast/blastDB
Then I get the following error:
N E X T F L O W ~ version 22.10.6
Launching `main.nf` [dreamy_hugle] DSL2 - revision: c388cf8f31
Error executing process > 'RUNBLAST'
Caused by:
Not a valid path value: 'test2.fa'
Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run
I know test2.fa exists in the current directory:
(nfcore) MN:nf-core-basicblast jraygozagaray$ ls
CHANGELOG.md conf other.nf
CITATIONS.md docs pyproject.toml
CODE_OF_CONDUCT.md lib subworkflows
LICENSE main.nf test.fa
README.md modules test2.fa
assets modules.json work
bin nextflow.config workflows
blast nextflow_schema.json
I also tried "file" instead of "path", but that is deprecated and raises other kinds of errors.
It would be helpful to know how to fix this so I can get started with the pipeline-building process.
Shouldn't Nextflow copy the file to the execution path?
Thanks
You get the above error because params.query is not actually a path value. It's probably just a simple String or GString. The solution is to instead supply a file object, for example:
workflow {
query = file(params.query)
BLAST( query, ... )
}
Note that a value channel is implicitly created by a process when it is invoked with a simple value, like the above file object. If you need to be able to BLAST multiple query files, you'll instead need a queue channel, which can be created using the fromPath factory method, for example:
params.query = "${baseDir}/data/*.fa"
params.db = "${baseDir}/blastdb/nt"
params.outdir = './results'
db_name = file(params.db).name
db_path = file(params.db).parent
process BLAST {
publishDir(
path: "${params.outdir}/blast",
mode: 'copy',
)
input:
tuple val(query_id), path(query)
path db
output:
tuple val(query_id), path("${query_id}.out")
"""
blastn \\
-num_threads ${task.cpus} \\
-query "${query}" \\
-db "${db}/${db_name}" \\
-out "${query_id}.out"
"""
}
workflow{
Channel
.fromPath( params.query )
.map { file -> tuple(file.baseName, file) }
.set { query_ch }
BLAST( query_ch, db_path )
}
Note that the usual way to specify the number of threads/cpus is using cpus directive, which can be configured using a process selector in your nextflow.config. For example:
process {
withName: BLAST {
cpus = 4
}
}
So this is my dilemma... I am requiring a user to enter the name of a database (e.g. dbx) and the location (canada or america) through extra-vars (-e "dc=canada" -e "dbname=dbx"). From that, I am going to read the vars
vars:
dbx:
canada:
dbu: db1
home: /u01/app/oracle
america:
dbu: db2
home: /u01/app/oracle
to get the dbu. The dbu will then be compared to databases running on the host
- name: see if db is running on this host
command: echo database is running here
when: dbu == item.database_name
with_items:
- "{{custom python module}}"
I can get the value if I put
- name: output
register: x
debug:
msg: "{{ dbx[dc].dbu }}"
However, if I replace dbx with the value of dbname, it errors out.
Hope that makes sense.
Thanks Zeitounator and lxop.
By adding another nesting level, info, I was able to get the result with your suggestion:
{{ info[g_db][dc].dbu }}
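For anyone hitting the same issue, a sketch of what that nested structure might look like, assuming g_db holds the value passed via -e "dbname=dbx" (names beyond those in the question are illustrative):

```yaml
vars:
  info:
    dbx:
      canada:
        dbu: db1
        home: /u01/app/oracle
      america:
        dbu: db2
        home: /u01/app/oracle

tasks:
  - name: output
    debug:
      msg: "{{ info[g_db][dc].dbu }}"
```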
Could anyone share some blocklist test cases with an ENV variable as well? I found that in the spec file we cannot change the env variable seen by the Rails middleware.
Suppose we set the env variable in the spec file:
stub_const('ENV', 'RACK_ATTACK_BLOCK_IP_LIST' => '1.1.1.1')
in the application.yml file, there is a setting:
RACK_ATTACK_BLOCK_IP_LIST: '2.2.2.2'
If we run the test case and monitor the env values in the rack_attack.rb file, we only get the new env variable value '1.1.1.1' inside the blocklist block, for example:
blocklist('block_ip_list') do |req|
block_ip_list = ENV['RACK_ATTACK_BLOCK_IP_LIST'].try(:split, /,\s*/) || []
block_ip_list.include?(req.ip)
end
If we move the block_ip_list out of the blocklist block, it will still be '2.2.2.2':
block_ip_list = ENV['RACK_ATTACK_BLOCK_IP_LIST'].try(:split, /,\s*/) || []
blocklist('block_ip_list') do |req|
block_ip_list.include?(req.ip)
end
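The behavior can be reproduced in plain Ruby (no Rails, no Rack::Attack): code that reads ENV at file load time captures the original value, while the blocklist block re-reads ENV on every request, which is why only the in-block read sees the stub. A sketch:

```ruby
# Simulate the value loaded from application.yml at boot.
ENV['RACK_ATTACK_BLOCK_IP_LIST'] = '2.2.2.2'

# Read at load time: this list is frozen to the original value.
eager_list = ENV['RACK_ATTACK_BLOCK_IP_LIST'].split(/,\s*/)

# Read lazily, like the blocklist block does on each request.
lazy_check = ->(ip) { ENV['RACK_ATTACK_BLOCK_IP_LIST'].to_s.split(/,\s*/).include?(ip) }

# A spec later stubs the variable...
ENV['RACK_ATTACK_BLOCK_IP_LIST'] = '1.1.1.1'

puts eager_list.include?('1.1.1.1')  # false: still the old '2.2.2.2' list
puts lazy_check.call('1.1.1.1')      # true: ENV is read at call time
```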
I'm using API Platform and writing unit tests with fixtures (https://api-platform.com/docs/distribution/testing/#testing-the-api). When I run the tests for a specific class I get the same fixture data every time, but when I run all tests I get some random data.
Here are my fixtures:
# one file
App\Entity\User:
user_{1..11}:
nickname: '<username()>'
avatar: '#image_user_<current()>'
firstname: '<firstname()>'
lastname: '<lastname()>'
user_{12..13}:
nickname: '<username()>'
avatar: null
firstname: '<firstname()>'
lastname: '<lastname()>'
# other file
App\Entity\Project:
project_{1..7}:
name: '<company()>'
author: '#user_<numberBetween(3, 13)>'
main_image: '#image_project_<current()>'
score: '<randomFloat(null, 0, 5)>'
created_at: <dateTime('2020-08-11 09:24')>
Nelmio_alice config looks like:
> bin/api debug:config nelmio_alice
Current configuration for extension with alias "nelmio_alice"
=============================================================
nelmio_alice:
functions_blacklist:
- current
- shuffle
- date
- time
- file
- md5
- sha1
locale: en_US
seed: 1
loading_limit: 5
max_unique_values_retry: 150
When I run the tests for one class, they pass every time (all data are the same):
bin/api-test tests/Entity/UserTest
bin/api-test tests/Entity/ProjectTest
But when I run all the tests, I get random data for users:
bin/api-test
When I clear the cache I get random data for projects too, but with the next run the project tests pass and the user tests do not:
bin/api cache:clear --env=test
bin/api-test
// some project and user tests fail
bin/api-test
// project tests pass, user tests do not
bin/api is an alias for docker-compose exec php bin/console
bin/api-test is an alias for docker-compose exec php bin/phpunit
In order to customize my Grunt tasks, I need access to the Grunt task name given on the command line when starting Grunt.
The options are no problem, since they are well documented (grunt.option).
It's also well documented how to find out the task name when running a Grunt task.
But I need access to the task name before that.
Eg, the user writes
grunt build --target=client
When configuring the grunt job in my Gruntfile.js, I can use
grunt.option('target') to get 'client'.
But how do I get hold of parameter build before the task build starts?
Any guidance is much appreciated!
Your grunt file is basically just a function. Try adding this line to the top:
module.exports = function( grunt ) {
/*==> */ console.log(grunt.option('target'));
/*==> */ console.log(grunt.cli.tasks);
// Add your pre task code here...
Running with grunt build --target=client should give you the output:
client
[ 'build' ]
At that point, you can run any code you need to before your task is run including setting values with new dependencies.
A better way is to use grunt.task.current which has information about the currently running task, including a name property. Within a task, the context (i.e. this) is the same object. So . . .
grunt.registerTask('foo', 'Foobar all the things', function() {
console.log(grunt.task.current.name); // foo
console.log(this.name); // foo
console.log(this === grunt.task.current); // true
});
If build is an alias to other tasks and you just want to know what command was typed that led to the current task execution, I typically use process.argv[2]. If you examine process.argv, you'll see that argv[0] is node (because grunt is a node process), argv[1] is grunt, and argv[2] is the actual grunt task (followed by any params in the remainder of argv).
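To make that concrete, here is a sketch of what process.argv contains for `grunt build --target=client` (the paths are illustrative; a hard-coded array stands in for the real process.argv):

```javascript
// argv[0] is the node binary, argv[1] is the grunt script,
// argv[2] is the task typed on the command line, the rest are params.
const argv = ['/usr/bin/node', '/usr/local/bin/grunt', 'build', '--target=client'];

const taskName = argv[2];          // 'build'
const extraArgs = argv.slice(3);   // ['--target=client']
console.log(taskName, extraArgs);
```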
EDIT:
Example output from console.log(grunt.task.current) on grunt#0.4.5 from within a task (there is no current task to inspect outside of a running task).
{
nameArgs: 'server:dev',
name: 'server',
args: [],
flags: {},
async: [Function],
errorCount: [Getter],
requires: [Function],
requiresConfig: [Function],
options: [Function],
target: 'dev',
data: { options: { debugPort: 5858, cwd: 'server' } },
files: [],
filesSrc: [Getter]
}
You can use grunt.util.hooker.hook for this.
Example (part of Gruntfile.coffee):
grunt.util.hooker.hook grunt.task, (opt) ->
if grunt.task.current and grunt.task.current.nameArgs
console.log "Task to run: " + grunt.task.current.nameArgs
CMD:
C:\some_dir>grunt concat --cmp my_cmp
Task to run: concat
Running "concat:coffee" (concat) task
Task to run: concat:coffee
File "core.coffee" created.
Done, without errors.
There is also a hack that I've used to prevent certain task execution:
grunt.util.hooker.hook grunt.task, (opt) ->
if grunt.task.current and grunt.task.current.nameArgs
console.log "Task to run: " + grunt.task.current.nameArgs
if grunt.task.current.nameArgs is "<some task you don't want user to run>"
console.log "Ooooh, not <doing smth> today :("
exit() # Not valid. Don't know how to exit :), but will stop grunt anyway
CMD, when allowed:
C:\some_dir>grunt concat:coffee --cmp my_cmp
Running "concat:coffee" (concat) task
Task to run: concat:coffee
File "core.coffee" created.
Done, without errors.
CMD, when prevented:
C:\some_dir>grunt concat:coffee --cmp my_cmp
Running "concat:coffee" (concat) task
Task to run: concat:coffee
Ooooh, not concating today :(
Warning: exit is not defined Use --force to continue.
Aborted due to warnings.