Rack::Attack test case: can't change ENV variable value using blocklist - rackattack

Could anyone share some blocklist test cases that use an ENV variable? I found that we can't change the env variable seen by the Rails middleware from the spec file.
We set the env variable in the spec file:
stub_const('ENV', 'RACK_ATTACK_BLOCK_IP_LIST' => '1.1.1.1')
In the application.yml file there is a setting:
RACK_ATTACK_BLOCK_IP_LIST: '2.2.2.2'
If we run the test case and monitor the env values in rack_attack.rb, we only get the new value '1.1.1.1' inside the blocklist block, for example:
blocklist('block_ip_list') do |req|
  block_ip_list = ENV['RACK_ATTACK_BLOCK_IP_LIST'].try(:split, /,\s*/) || []
  block_ip_list.include?(req.ip)
end
If we move the block_ip_list lookup out of the blocklist block, it still sees '2.2.2.2':
block_ip_list = ENV['RACK_ATTACK_BLOCK_IP_LIST'].try(:split, /,\s*/) || []
blocklist('block_ip_list') do |req|
  block_ip_list.include?(req.ip)
end
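For reference, here is a minimal request-style spec sketch (not from the original post; the route, app setup and expected status are illustrative assumptions). It relies on the fact that the blocklist block is re-evaluated on every request, so it sees the stubbed ENV, whereas code at the top level of rack_attack.rb runs once at boot, before the stub exists, which is why it still sees '2.2.2.2'.

# spec/requests/rack_attack_spec.rb -- illustrative sketch
require 'rails_helper'

RSpec.describe 'Rack::Attack blocklist' do
  before do
    # Merge so any other ENV values the app reads are still present
    stub_const('ENV', ENV.to_hash.merge('RACK_ATTACK_BLOCK_IP_LIST' => '1.1.1.1'))
    Rack::Attack.enabled = true # in case it is switched off for the test environment
  end

  it 'blocks requests coming from an IP on RACK_ATTACK_BLOCK_IP_LIST' do
    # Drive the full middleware stack so the blocklist block runs for this request
    response = Rack::MockRequest.new(Rails.application).get('/', 'REMOTE_ADDR' => '1.1.1.1')
    expect(response.status).to eq(403)
  end
end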

Related

path not being detected by Nextflow

I'm new to nf-core/Nextflow and, needless to say, the documentation does not always reflect what is actually implemented. I'm defining the basic pipeline below:
nextflow.enable.dsl=2

process RUNBLAST {
    input:
    val thr
    path query
    path db
    path output

    output:
    path output

    script:
    """
    blastn -query ${query} -db ${db} -out ${output} -num_threads ${thr}
    """
}

workflow {
    //println "I want to BLAST $params.query to $params.dbDir/$params.dbName using $params.threads CPUs and output it to $params.outdir"
    RUNBLAST(params.threads, params.query, params.dbDir, params.output)
}
Then I'm executing the pipeline with:
nextflow run main.nf --query test2.fa --dbDir blast/blastDB
Then I get the following error:
N E X T F L O W ~ version 22.10.6
Launching `main.nf` [dreamy_hugle] DSL2 - revision: c388cf8f31
Error executing process > 'RUNBLAST'
Caused by:
Not a valid path value: 'test2.fa'
Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run
I know test2.fa exists in the current directory:
(nfcore) MN:nf-core-basicblast jraygozagaray$ ls
CHANGELOG.md conf other.nf
CITATIONS.md docs pyproject.toml
CODE_OF_CONDUCT.md lib subworkflows
LICENSE main.nf test.fa
README.md modules test2.fa
assets modules.json work
bin nextflow.config workflows
blast nextflow_schema.json
I also tried "file" instead of path, but that is deprecated and raises other kinds of errors.
It would be helpful to know how to fix this so I can get started with the pipeline building process.
Shouldn't Nextflow copy the file to the execution path?
Thanks
You get the above error because params.query is not actually a path value. It's probably just a simple String or GString. The solution is to instead supply a file object, for example:
workflow {
    query = file(params.query)
    BLAST( query, ... )
}
Note that a value channel is implicitly created by a process when it is invoked with a simple value, like the above file object. If you need to be able to BLAST multiple query files, you'll instead need a queue channel, which can be created using the fromPath factory method, for example:
params.query = "${baseDir}/data/*.fa"
params.db = "${baseDir}/blastdb/nt"
params.outdir = './results'

db_name = file(params.db).name
db_path = file(params.db).parent

process BLAST {
    publishDir(
        path: "${params.outdir}/blast",
        mode: 'copy',
    )

    input:
    tuple val(query_id), path(query)
    path db

    output:
    tuple val(query_id), path("${query_id}.out")

    script:
    """
    blastn \\
        -num_threads ${task.cpus} \\
        -query "${query}" \\
        -db "${db}/${db_name}" \\
        -out "${query_id}.out"
    """
}

workflow {
    Channel
        .fromPath( params.query )
        .map { file -> tuple(file.baseName, file) }
        .set { query_ch }

    BLAST( query_ch, db_path )
}
Note that the usual way to specify the number of threads/CPUs is the cpus directive, which can be configured using a process selector in your nextflow.config. For example:
process {
    withName: BLAST {
        cpus = 4
    }
}

How to achieve the dynamic generation for tasks in airflow

Basically, I want to achieve something like that.
I have no idea how to make sure the task_id can be referenced consistently. I used the PythonOperator and tried to call it with kwargs.
One of my functions looks like this:
def Dynamic_Structure(log_type, callableFunction, context, args):
    # dyn_value = "{{ task_instance.xcom_pull(task_ids='Extract_{}_test'.format(log_type)) }}"
    structure_logs = StrucFile(
        task_id = 'Struc_{}_test'.format(log_type),
        provide_context = True,
        format_location = config.DATASET_FORMAT,
        struc_location = config.DATASET_STR,
        # log_key_hash_dict_location = dl_config.EXEC_WINDOWS_MIDDLE_KEY_HASH_DICT,
        log_key_hash_dict_location = dl_config.DL_MODEL_DATASET/log_type/dl_config.EXEC_MIDDLE_KEY_HASH_DICT,
        data_type = 'test',
        xcom_push=True,
        dag = dag,
        op_kwargs=args,
        log_sort = log_type,
        python_callable = eval(callableFunction),
        # the following parameters depend on the format of raw log
        regex = config.STRUCTURE_REGEX[log_type],
        log_format = config.STRUCTURE_LOG_FORMAT[log_type],
        columns = config.STRUCTURE_COLUMNS[log_type]
    )
    return structure_logs
Then I call it like:
for log_type in log_types:
    ...
    structure_logs_task = Dynamic_Structure(log_type, 'values_function', {'previous_task_id': 'Extract_{}_test'.format(log_type)})
But I cannot get the behavior shown in the chart.
I am confused about how to use previous_task_id. xcom_pull and xcom_push would probably help, but I have no idea how to use them in a PythonOperator.
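For reference, a minimal sketch (not from the original post; task ids, the callable and the DAG wiring are illustrative, assuming Airflow 2.x) of how xcom_pull is typically used inside a PythonOperator callable:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def values_function(previous_task_id, **kwargs):
    ti = kwargs["ti"]  # the running TaskInstance is injected into the context
    # pull whatever the upstream task pushed (its return value by default);
    # returning a value here pushes it as this task's own XCom for downstream tasks
    return ti.xcom_pull(task_ids=previous_task_id)

with DAG("xcom_example", start_date=datetime(2023, 1, 1)) as dag:
    structure_task = PythonOperator(
        task_id="Struc_example_test",
        python_callable=values_function,
        op_kwargs={"previous_task_id": "Extract_example_test"},
        # on Airflow 1.10 you would also need provide_context=True
    )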
It turned out the problem was not the code. I tried to read the log_types from a .env file, but I had not found the right way to read environment variables in Docker images.
It should be:
In docker-compose.yml
version: "x"
services:
  xxx:
    build:
      # note that there is a space between "context:" and "."
      context: .
      dockerfile: ./Dockerfile
      args:
        - PORT=${PORT}
    volumes:
      ...
In Dockerfile
FROM xx
ARG PORT
ENV PORT "$PORT"
EXPOSE ${PORT}
...
In the root folder, you can define the ARG in .env.
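For example (the value is illustrative), a .env file next to docker-compose.yml:

# .env in the project root
PORT=8080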

How to configure environment variables in Azure DevOps pipeline?

I have an Azure Function (.NET Core) that is configured to read application settings from both a JSON file and environment variables:
var configurationBuilder = new ConfigurationBuilder()
    .SetBasePath(_baseConfigurationPath)
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables()
    .Build();

BuildAgentMonitorConfiguration configuration = configurationBuilder.Get<BuildAgentMonitorConfiguration>();
appsettings.json has the following structure:
{
  "ProjectBaseUrl": "https://my-project.visualstudio.com/",
  "ProjectName": "my-project",
  "AzureDevOpsPac": ".....",
  "SubscriptionId": "...",
  "AgentPool": {
    "PoolId": 38,
    "PoolName": "MyPool",
    "MinimumAgentCount": 2,
    "MaximumAgentCount": 10
  },
  "ContainerRegistry": {
    "Username": "mycontainer",
    "LoginServer": "mycontainer.azurecr.io",
    "Password": "..."
  },
  "ActiveDirectory": {
    "ClientId": "...",
    "TenantId": "...",
    "ClientSecret": "..."
  }
}
Some of these settings are configured as environment variables in the Azure Function, and everything works as expected there.
The problem now is to configure some of these variables in a build pipeline, where they are used in unit and integration tests. I've tried adding a variable group and linking it to the pipeline,
but the environment variables are not being set and the tests are failing. What am I missing here?
I have the same use case: I want an environment variable to be set by the Azure build pipeline so that the test cases can read it and pass.
Directly setting the env variable with export or ENV does not work for subsequent tasks. To have the environment variable available to subsequent tasks, follow the syntax described at https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch,
i.e. the task.setvariable logging command inside a script step.
Correct way of setting ENV variable using build pipeline
- script: |
    echo '##vso[task.setvariable variable=LD_LIBRARY_PATH]$(Build.SourcesDirectory)/src/Projectname/bin/Release/netcoreapp2.0/x64'
  displayName: set environment variable for subsequent steps
Please be careful with the spaces, as it is YAML. The script step above sets the variable LD_LIBRARY_PATH (used on Linux to define the search path for .so files) to the given directory.
This style of setting the environment variable also works for subsequent tasks, but if we set the env variable as shown below, it is only set for the specific shell instance and is not available to subsequent tasks.
Wrong way of setting the env variable:
- script: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(Build.SourcesDirectory)/src/CorrectionLoop.HttpApi/bin/Release/netcoreapp2.0/x64
  displayName: Set environment variable
You can use similar syntax to set up your own environment variable.
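For example, a minimal sketch (the variable name and value are illustrative) that sets a variable in one step and reads it in a subsequent step:

- script: |
    echo '##vso[task.setvariable variable=MY_CONNECTION_STRING]Server=test;Database=tests'
  displayName: Set MY_CONNECTION_STRING for subsequent steps
- script: |
    echo "MY_CONNECTION_STRING is $(MY_CONNECTION_STRING)"
  displayName: Read it back in a later step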
I ran into this as well when generating EF SQL scripts from a build task. According to the docs, the variables you define in the "Variables" tab are also provided to the process as environment variables.
Notice that variables are also made available to scripts through environment variables. The syntax for using these environment variables depends on the scripting language. Name is upper-cased, . replaced with _, and automatically inserted into the process environment
In my case, I just had to load a connection string and deal with the difference in key casing between the json file and the environment:
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", true, true)
    .AddEnvironmentVariables()
    .Build();

var connectionString = config["connectionString"] ?? config["CONNECTIONSTRING"];
If you are using bash, then their example does not work, as the documentation refers to the variables incorrectly. Instead it should be:
Store secret
#!/bin/bash
echo "##vso[task.setvariable variable=sauce]crushed tomatoes"
echo "##vso[task.setvariable variable=secret.Sauce;issecret=true]crushed tomatoes with garlic"
Retrieve secret
Wrong: Their example
#!/bin/bash
echo "No problem reading $1 or $SAUCE"
echo "But I cannot read $SECRET_SAUCE"
echo "But I can read $2 (but the log is redacted so I do not spoil the secret)"
Right:
#!/bin/bash
echo "No problem reading $(sauce)"
echo "But I cannot read $(secret.Sauce)"
Configure KeyVault Secrets in Variable Groups for FileTransform#1
The steps below read the KeyVault secrets used in a variable group and add them to the environment variables so that FileTransform#1 can use them.
Set up KeyVault.
Create Variable Group and import the values you want to use for the Pipeline.
In this example we used:
- ConnectionStrings--Context
- Cloud--AuthSecret
- Compumail--ApiPassword
Set up the names to match the KeyVault names (you can pass these into the yml steps template):
# These parameters are here to support Library > Variable Groups > with "secrets" from a KeyVault
# KeyVault keys cannot contain "_" or "." as FileTransform#1 wants
# This script takes "--" keys, replaces them with ".", and adds them as "env:" variables so the transform can do its thing.
parameters:
  - name: apiSecretKeys
    displayName: apiSecretKeys
    type: object
    default:
      - ConnectionStrings--Context
      - Cloud--AuthSecret
      - Compumail--ApiPassword

stages:
  - template: ./locationOfTemplate.yml
    parameters:
      apiSecretKeys: ${{ parameters.apiSecretKeys }}

... build api - publish to .zip file
Setup Variable groups on "job level"
variables:
  #next line here for pipeline validation purposes..
  - ${{if parameters.variableGroup}}:
    - group: ${{parameters.variableGroup}}
  #OR
  #- VariableGroupNameContainingSecrets
Template file: (the magic)
parameters:
  - name: apiSecretKeys
    displayName: apiSecretKeys
    type: object
    default: []

steps:
  - ${{if parameters.apiSecretKeys}}:
    - powershell: |
        $envs = Get-childItem env:
        $envs | Format-List
      displayName: List Env Vars Before
    - ${{ each key in parameters.apiSecretKeys }}:
      - powershell: |
          $writeKey = "${{key}}".Replace('--','.')
          Write-Host "oldKey :" ${{key}}
          Write-Host "value :" "$(${{key}})"
          Write-Host "writeKey:" $writeKey
          Write-Host "##vso[task.setvariable variable=$writeKey]$(${{key}})"
        displayName: Writing Dashes To LowDash for ${{key}}
    - ${{ each key in parameters.apiSecretKeys }}:
      - powershell: |
          $readKey = "${{key}}".Replace('--','.')
          Write-Host "oldKey :" ${{key}}
          Write-Host "oldValue:" "$(${{key}})"
          Write-Host "readKey :" $readKey
          Write-Host "newValue:" ('$env:'+"$($readKey)" | Invoke-Expression)
        displayName: Read From New Env Var for ${{key}}
    - powershell: |
        $envs = Get-childItem env:
        $envs | Format-List
      name: List Env Vars After adding secrets to env
Finally, run the FileTransform#1 task.
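A minimal sketch of that step (the folder path and target files are illustrative assumptions):

- task: FileTransform@1
  displayName: Apply the variables set above to appsettings.json
  inputs:
    folderPath: '$(Build.ArtifactStagingDirectory)/*.zip'
    fileType: 'json'
    targetFiles: '**/appsettings.json'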
Enjoy ;)

Self-referential values in an R config file

Using the config package, I'd like elements to reference other elements,
like how path_file_a references path_directory.
config.yml file in the working directory:
default:
path_directory : "data-public"
path_file_a : "{path_directory}/a.csv"
path_file_b : "{path_directory}/b.csv"
path_file_c : "{path_directory}/c.csv"
# recursive : !expr file.path(config::get("path_directory"), "c.csv")
Code:
config <- config::get()
config$path_file_a
# Returns: "{path_directory}/a.csv"
glue::glue(config$path_file_a, .envir = config)
# Returns: "data-public/a.csv"
I can use something like glue::glue() on the value returned by config$path_file_a.
But I'd prefer to have the value already substituted so config$path_file_a contains the actual value (not the template for the value).
As you might expect, uncommenting the recursive line creates an endless self-referential loop.
Are there better alternatives to glue::glue(config$path_file_a, .envir = config)?
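A minimal sketch of that idea (not from the original post; it assumes every character field should be interpolated) is to substitute the placeholders once, right after loading, so the stored values are already resolved:

config <- config::get()
# interpolate {placeholders} in every character entry, looking values up in config itself
config <- lapply(config, function(x) {
  if (is.character(x)) glue::glue_data(config, x) else x
})
config$path_file_a
# Returns: "data-public/a.csv"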
I came across the same problem and I've written a wrapper around config and glue.
The package is called gonfig and has been submitted to CRAN.
With it you would have:
config.yml
default:
path_directory : "data-public"
path_file_a : "{path_directory}/a.csv"
path_file_b : "{path_directory}/b.csv"
path_file_c : "{path_directory}/c.csv"
And in your R script:
config <- gonfig::get()
config$path_file_c
#> "data-public/c.csv"

Key in Configuration : how to list Configurations and Keys?

The sbt in Action book introduces the concept of a Key in a Configuration.
It then lists the default configurations:
Compile
Test
Runtime
IntegrationTest
Q1) Is it possible to print out a list of all Configurations from an sbt session? If not, where can I find information on Configurations in the sbt documentation?
Q2) For a particular Configuration, e.g. Compile, is it possible to print out a list of its Keys from an sbt session? If not, where can I find information on a Configuration's Keys in the sbt documentation?
List of all configurations
For this you can use a setting like so:
val allConfs = settingKey[List[String]]("Returns all configurations for the current project")

val root = (project in file("."))
  .settings(
    name := "scala-tests",
    allConfs := {
      configuration.all(ScopeFilter(inAnyProject, inAnyConfiguration)).value.toList
        .map(_.name)
    }
  )
This shows the names of all configurations. You can access more details about each configuration inside the map.
Output from the interactive sbt console:
> allConfs
[info] * provided
[info] * test
[info] * compile
[info] * runtime
[info] * optional
If all you want is to print them you can have a settingKey[Unit] and use println inside the setting definition.
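For example, a minimal sketch of that variant (the key name is illustrative):

val printConfs = settingKey[Unit]("Prints all configurations for the current project")

// settings are evaluated at project load, so the names are printed when sbt loads the build
printConfs := {
  configuration.all(ScopeFilter(inAnyProject, inAnyConfiguration)).value
    .map(_.name)
    .foreach(println)
}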
List of all the keys in a configuration
For this we need a task (there might be other ways, but I haven't explored them; in sbt I'm satisfied if something works...) and a parser for the user input.
The snippet below combines it with the setting above:
import sbt._
import sbt.Keys._
import complete.DefaultParsers._

val allConfs = settingKey[List[String]]("Returns all configurations for the current project")
val allKeys = inputKey[List[String]]("Prints all keys of a given configuration")

val root = (project in file("."))
  .settings(
    name := "scala-tests",
    allConfs := {
      configuration.all(ScopeFilter(inAnyProject, inAnyConfiguration)).value.toList
        .map(_.name)
    },
    allKeys := {
      val configHints = s"One of: ${
        configuration.all(ScopeFilter(inAnyProject, inAnyConfiguration)).value.toList.mkString(" ")
      }"
      val configs = spaceDelimited(configHints).parsed.map(_.toLowerCase).toSet
      val extracted: Extracted = Project.extract(state.value)
      val l = extracted.session.original.toList
        .filter(set => set.key.scope.config.toOption.map(_.name.toLowerCase)
          .exists(configs.contains))
        .map(_.key.key.label)
      l
    }
  )
Now you can use it like:
$ sbt "allKeys compile"
If you are in interactive mode you can press tab after allKeys to see the prompt:
> allKeys
One of: provided test compile runtime optional
Since allKeys is a task, its output won't appear on the sbt console if you just "return it", but you can print it.
