Alice generates random data when running all tests - Symfony

I'm using API Platform and I write unit tests with fixtures (https://api-platform.com/docs/distribution/testing/#testing-the-api). When I run the tests for a specific class I get the same fixture data every time, but when I run all tests some of the data is random.
Here are my fixtures:
# one file
App\Entity\User:
    user_{1..11}:
        nickname: '<username()>'
        avatar: '#image_user_<current()>'
        firstname: '<firstname()>'
        lastname: '<lastname()>'
    user_{12..13}:
        nickname: '<username()>'
        avatar: null
        firstname: '<firstname()>'
        lastname: '<lastname()>'
# other file
App\Entity\Project:
    project_{1..7}:
        name: '<company()>'
        author: '#user_<numberBetween(3, 13)>'
        main_image: '#image_project_<current()>'
        score: '<randomFloat(null, 0, 5)>'
        created_at: <dateTime('2020-08-11 09:24')>
The nelmio_alice config looks like this:
> bin/api debug:config nelmio_alice

Current configuration for extension with alias "nelmio_alice"
=============================================================

nelmio_alice:
    functions_blacklist:
        - current
        - shuffle
        - date
        - time
        - file
        - md5
        - sha1
    locale: en_US
    seed: 1
    loading_limit: 5
    max_unique_values_retry: 150
When I run the tests for a single class, they pass every time (all data are the same):
bin/api-test tests/Entity/UserTest
bin/api-test tests/Entity/ProjectTest
But when I run all tests, I get random data for users:
bin/api-test
When I clear the cache I get random data for projects too, but on the next run the project tests pass while the user tests still fail:
bin/api cache:clear --env=test
bin/api-test
// some project and user tests fail
bin/api-test
// project tests pass, user tests do not
bin/api is an alias for docker-compose exec php bin/console
bin/api-test is an alias for docker-compose exec php bin/phpunit
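For context, a minimal sketch of how the seed shown above is typically pinned for the test environment only, assuming a standard Symfony config layout (the file path is an assumption). A fixed seed makes Faker deterministic within a single process, but each generated value still depends on how many values were drawn before it, so the order and number of fixtures loaded ahead of a given file can change the data from run to run:

# config/packages/test/nelmio_alice.yaml (path is an assumption)
nelmio_alice:
    locale: 'en_US'
    seed: 1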

Related

How to add a nested dictionary to a dynamic host in Ansible

I have application details in per-application variable files like the one below. For example, myapp1 in the "QA" environment looks like this:
cat myapp1_QA.yml
---
APP_HOSTS:
  - myapphost7:
      - logs:
          - /tmp/web/apphost7_access
          - /tmp/web/apphost7_error
  - myapphost9:
      - logs:
          - /tmp/web/apphost9_access
          - /tmp/web/apphost9_error
          - /tmp/web/apphost9_logs
WEB_HOSTS:
  - mywebhost7:
      - logs:
          - /tmp/webserver/webhost7.pid
In this example I wish to create a dynamic host group containing the 3 hosts
myapphost7
myapphost9
mywebhost7
and each host has a logs variable which can be looped over to get the file paths.
Below is my Ansible play:
---
- hosts: localhost
  tasks:
    - include_vars:
        file: "{{ playbook_dir }}/{{ appname }}_{{ myenv }}.yml"

    - name: Dsiplay dictionary data
      debug:
        msg: "{{ item[logs] }}"
      loop: "{{ APP_HOSTS }}"
I get the below error:
ansible-playbook read.yml -e appname=myapp1 -e myenv=QA
TASK [Dsiplay dictionary data] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'logs' is undefined\n\nThe error appears to be in '/root/read.yml': line 8, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Dsiplay dictionary data\n ^ here\n"}
My requirement is to store "myapphost7", "myapphost9", and "mywebhost7" in a group using add_host, with a logs variable for each host holding its list of log files.
Note: if a host such as mywebhost7 is not defined under WEB_HOSTS: or APP_HOSTS:, then nothing should be added for it to the dynamic group.
Can you please suggest?
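A minimal sketch of one way this could be done, assuming the variable files keep the structure shown above; the group name dynamic_hosts is an arbitrary choice, and the default([]) filters cover files where APP_HOSTS: or WEB_HOSTS: is absent:

- hosts: localhost
  tasks:
    - include_vars:
        file: "{{ playbook_dir }}/{{ appname }}_{{ myenv }}.yml"

    # Each list item is a one-key dict: { hostname: [ { logs: [...] } ] }
    - name: Add each host to a dynamic group together with its log paths
      add_host:
        name: "{{ (item | dict2items | first).key }}"
        groups: dynamic_hosts
        logs: "{{ ((item | dict2items | first).value | first).logs }}"
      loop: "{{ (APP_HOSTS | default([])) + (WEB_HOSTS | default([])) }}"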

Ansible - how to retrieve a value from a dict by passing the name in extra vars

So this is my dilemma... I require a user to enter the name of a database (e.g. dbx) and the location (canada or america) through extra vars (-e "dc=canada" -e "dbname=dbx"). From that, I am going to read the vars
vars:
  dbx:
    canada:
      dbu: db1
      home: /u01/app/oracle
    america:
      dbu: db2
      home: /u01/app/oracle
to get the dbu. The dbu will then be compared to the databases running on the host:
- name: see if db is running on this host
  command: echo database is running here
  when: dbu == item.database_name
  with_items:
    - "{{custom python module}}"
I can get the value if I put
- name: output
  register: x
  debug:
    msg: "{{ dbx[dc].dbu }}"
However if I change dbx to the value of dbname, it errors out.
Hope that makes sense.
Thanks Zeitounator and lxop.
By adding another nesting level, info, I was able to get the result with your suggestion:
{{ info[g_db][dc].dbu }}
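For clarity, a minimal sketch of that final approach, assuming the databases are wrapped in an extra info level and the database name is passed in as the extra var g_db alongside dc:

# ansible-playbook play.yml -e g_db=dbx -e dc=canada
- hosts: localhost
  vars:
    info:
      dbx:
        canada:
          dbu: db1
          home: /u01/app/oracle
        america:
          dbu: db2
          home: /u01/app/oracle
  tasks:
    - name: Look up dbu for the requested database and location
      debug:
        msg: "{{ info[g_db][dc].dbu }}"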

How to configure environment variables in Azure DevOps pipeline?

I have an Azure Function (.NET Core) that is configured to read application settings from both a JSON file and environment variables:
var configurationBuilder = new ConfigurationBuilder()
    .SetBasePath(_baseConfigurationPath)
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables()
    .Build();

BuildAgentMonitorConfiguration configuration = configurationBuilder.Get<BuildAgentMonitorConfiguration>();
appsettings.json has the following structure:
{
  "ProjectBaseUrl": "https://my-project.visualstudio.com/",
  "ProjectName": "my-project",
  "AzureDevOpsPac": ".....",
  "SubscriptionId": "...",
  "AgentPool": {
    "PoolId": 38,
    "PoolName": "MyPool",
    "MinimumAgentCount": 2,
    "MaximumAgentCount": 10
  },
  "ContainerRegistry": {
    "Username": "mycontainer",
    "LoginServer": "mycontainer.azurecr.io",
    "Password": "..."
  },
  "ActiveDirectory": {
    "ClientId": "...",
    "TenantId": "...",
    "ClientSecret": "..."
  }
}
Some of these settings are configured as environment variables in the Azure Function, and everything works as expected.
The problem now is to configure some of these variables in a build pipeline, where they are used in unit and integration tests. I've tried adding a variable group and linking it to the pipeline, but the environment variables are not being set and the tests are failing. What am I missing here?
I had the same use case: I wanted an environment variable to be set by the Azure build pipeline so that the test cases could access it and pass.
Directly setting the env variable with export or ENV does not carry over to subsequent tasks, so to have the environment variable available to later tasks, follow the syntax described at https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch, i.e. task.setvariable in a script step.
Correct way of setting an env variable in the build pipeline:
- script: |
    echo '##vso[task.setvariable variable=LD_LIBRARY_PATH]$(Build.SourcesDirectory)/src/Projectname/bin/Release/netcoreapp2.0/x64'
  displayName: set environment variable for subsequent steps
Please be careful with the spaces, as it is YAML. The script above sets the variable LD_LIBRARY_PATH (used on Linux to define the search path for .so files) to the given directory.
This style of setting the environment variable also works for subsequent tasks, but if we set the env variable as shown below, it will only apply to the specific shell instance and will not be available to subsequent tasks.
Wrong way of setting the env variable:
- script: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(Build.SourcesDirectory)/src/CorrectionLoop.HttpApi/bin/Release/netcoreapp2.0/x64
  displayName: Set environment variable
You can use similar syntax for setting up your own environment variable.
I ran into this as well when generating EF SQL scripts from a build task. According to the docs, the variables you define in the "Variables" tab are also provided to the process as environment variables.
Notice that variables are also made available to scripts through environment variables. The syntax for using these environment variables depends on the scripting language. Name is upper-cased, . replaced with _, and automatically inserted into the process environment
For my instance, I just had to load a connection string, but deal with the case difference of the key between the json file and the environment:
var config = new ConfigurationBuilder()
.AddJsonFile("appsettings.json", true, true)
.AddEnvironmentVariables()
.Build();
var connectionString = config["connectionString"] ?? config["CONNECTIONSTRING"];
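For reference, a minimal sketch of how such a pipeline variable might be declared so that it reaches the test process as an environment variable; the variable name and value are assumptions for illustration. Per the quoted docs, a non-secret variable named connectionString surfaces as CONNECTIONSTRING in the environment of script and test tasks:

variables:
  # becomes the environment variable CONNECTIONSTRING for subsequent tasks
  connectionString: 'Server=myserver;Database=mydb;Trusted_Connection=True;'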
If you are using bash, then the example in the documentation does not work, as it refers to the variables incorrectly. Instead it should be:
Store secret
#!/bin/bash
echo "##vso[task.setvariable variable=sauce]crushed tomatoes"
echo "##vso[task.setvariable variable=secret.Sauce;issecret=true]crushed tomatoes with garlic"
Retrieve secret
Wrong: Their example
#!/bin/bash
echo "No problem reading $1 or $SAUCE"
echo "But I cannot read $SECRET_SAUCE"
echo "But I can read $2 (but the log is redacted so I do not spoil the secret)"
Right:
#!/bin/bash
echo "No problem reading $(sauce)"
echo "But I cannot read $(secret.Sauce)"
Configure KeyVault Secrets in Variable Groups for FileTransform#1
The steps below read the KeyVault secrets used in a variable group and add them to the environment variables for FileTransform#1 to use.
Set up KeyVault.
Create a variable group and import the values you want to use for the pipeline.
In this example we used:
- ConnectionStrings--Context
- Cloud--AuthSecret
- Compumail--ApiPassword
Set up names to match the KeyVault names (you can pass these into the YAML steps template):
#These parameters are here to support Library > Variable Groups > with "secrets" from a KeyVault
#KeyVault keys cannot contain "_" or "." as FileTransform#1 wants
#This script takes "--" keys, replaces them with ".", and adds them as "env:" variables so the transform can do its thing.
parameters:
  - name: apiSecretKeys
    displayName: apiSecretKeys
    type: object
    default:
      - ConnectionStrings--Context
      - Cloud--AuthSecret
      - Compumail--ApiPassword

stages:
  - template: ./locationOfTemplate.yml
    parameters:
      apiSecretKeys: ${{ parameters.apiSecretKeys }}
... build api - publish to .zip file
Set up the variable groups at the job level:
variables:
  #next line here for pipeline validation purposes..
  - ${{if parameters.variableGroup}}:
    - group: ${{parameters.variableGroup}}
  #OR
  #- VariableGroupNameContainingSecrets
Template file (the magic):
parameters:
  - name: apiSecretKeys
    displayName: apiSecretKeys
    type: object
    default: []

steps:
  - ${{if parameters.apiSecretKeys}}:
    - powershell: |
        $envs = Get-childItem env:
        $envs | Format-List
      displayName: List Env Vars Before
  - ${{ each key in parameters.apiSecretKeys }}:
    - powershell: |
        $writeKey = "${{key}}".Replace('--','.')
        Write-Host "oldKey :" ${{key}}
        Write-Host "value :" "$(${{key}})"
        Write-Host "writeKey:" $writeKey
        Write-Host "##vso[task.setvariable variable=$writeKey]$(${{key}})"
      displayName: Writing Dashes To LowDash for ${{key}}
  - ${{ each key in parameters.apiSecretKeys }}:
    - powershell: |
        $readKey = "${{key}}".Replace('--','.')
        Write-Host "oldKey :" ${{key}}
        Write-Host "oldValue:" "$(${{key}})"
        Write-Host "readKey :" $readKey
        Write-Host "newValue:" ('$env:'+"$($readKey)" | Invoke-Expression)
      displayName: Read From New Env Var for ${{key}}
  - powershell: |
      $envs = Get-childItem env:
      $envs | Format-List
    name: List Env Vars After adding secrets to env
Run Task - task: FileTransform#1
Enjoy ;)

Not able to execute lifecycle operation using script plugin

I'm trying to learn how to use the script plugin. I'm following the script plugin docs but am not able to make it work.
I've tried to use the plugin in two ways. First, with the cloudify.interfaces.lifecycle.start operation mapped directly to a script:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          inputs: {}
Second, with the operation mapped to the script plugin's task:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: script.script_runner.tasks.run
          inputs:
            script_path: scripts/create_project.sh
I've created a directory named scripts and placed the below create_project.sh script in this directory:
#! /bin/bash -e
ctx logger info "Hello to this world"
hostname
I'm getting errors while validating the blueprint.
Error when the operation is mapped directly to a script:
[2019-04-13 13:29:40.594] [DEBUG] DslParserExecClient - got output from dsl parser Could not extract plugin from operation mapping 'scripts/create_project.sh', which is declared for operation 'start'. In interface 'cloudify.interfaces.lifecycle' in node 'Import_Project' of type 'cloudify.nodes.WebServer'
in: /opt/cloudify-composer/backend/dev/workspace/2/tmp-27O0e1t813N6as
in line: 3, column: 2
path: node_templates.Import_Project
value: {'interfaces': {'cloudify.interfaces.lifecycle': {'start': {'implementation': 'scripts/create_project.sh', 'inputs': {}}}}, 'type': 'cloudify.nodes.WebServer', 'capabilities': {'scalable': {'properties': {'default_instances': 1}}}}
Error when mapping to the script plugin's task:
[2019-04-13 13:25:21.015] [DEBUG] DslParserExecClient - got output from dsl parser node 'Import_Project' has no relationship which makes it contained within a host and it has a plugin 'script' with 'host_agent' as an executor. These types of plugins must be installed on a host
in: /opt/cloudify-composer/backend/dev/workspace/2/tmp-279QCz2CV3Y81L
in line: 2, column: 0
path: node_templates
value: {'Import_Project': {'interfaces': {'cloudify.interfaces.lifecycle': {'start': {'implementation': 'script.script_runner.tasks.run', 'inputs': {'script_path': 'scripts/create_project.sh'}}}}, 'type': 'cloudify.nodes.WebServer', 'capabilities': {'scalable': {'properties': {'default_instances': 1}}}}}
What is missing to make this work?
I also found that the Cloudify script plugin examples in the documentation do not work: https://docs.cloudify.co/4.6/working_with/official_plugins/configuration/script/
However, I found I can make the examples work by adding an executor line alongside the implementation line to override the host_agent executor, as follows:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          executor: central_deployment_agent
          inputs: {}

How can I tell whether the Rscript ran successfully in the Docker container via CWL?

I have written CWL code which runs the Rscript command in a Docker container. I have two files, a CWL file and a YAML input file, and I run them with the command:
cwltool --debug a_code.cwl a_input.yaml
The output says the process status is success, but there is no output file and "output": null is reported in the result. I want to know whether there is a way to confirm that the Rscript ran successfully in the Docker container, and to understand why the output files are null.
The final part of the result is:
[job a_code.cwl] {
"output": null,
"errorFile": null
}
[job a_code.cwl] Removing input staging directory /tmp/tmpUbyb7k
[job a_code.cwl] Removing temporary directory /tmp/tmpkUIOnw
Removing intermediate output directory /tmp/tmpCG9Xs1
{
"output": null,
"errorFile": null
}
Final process status is success
R code:
library(cummeRbund)
cuff<-readCufflinks( dbFile = "cuffData.db", gtfFile = NULL, runInfoFile = "run.info", repTableFile = "read_groups.info", geneFPKM = "genes.fpkm_trac .... )
#setwd("/scripts")
sink("cuff.txt")
print(cuff)
sink()
My CWL file is:
class: CommandLineTool
cwlVersion: v1.0
id: cummerbund
baseCommand:
  - Rscript
inputs:
  - id: Rfile
    type: File?
    inputBinding:
      position: 0
  - id: cuffdiffout
    type: 'File[]?'
    inputBinding:
      position: 1
  - id: errorout
    type: File?
    inputBinding:
      position: 99
      prefix: 2>
      valueFrom: |
        error.txt
outputs:
  - id: output
    type: File?
    outputBinding:
      glob: cuff.txt
  - id: errorFile
    type: File?
    outputBinding:
      glob: error.txt
label: cummerbund
requirements:
  - class: DockerRequirement
    dockerPull: cummerbund_0
My input file (YAML) is:
inputs:
  Rfile:
    basename: run_cummeR.R
    class: File
    nameext: .R
    nameroot: run_cummeR
    path: run_cummeR.R
  cuffdiffout:
    - class: File
      path: cuffData.db
    - class: File
      path: genes.fpkm_tracking
    - class: File
      path: read_groups.info
    - class: File
      path: genes.count_tracking
    - class: File
      path: genes.read_group_tracking
    - class: File
      path: isoforms.fpkm_tracking
    - class: File
      path: isoforms.read_group_tracking
    - class: File
      path: isoforms.count_tracking
    - class: File
      path: isoform_exp.diff
    - class: File
      path: gene_exp.diff
  errorout:
    - class: File
      path: error.txt
Also, this is my Dockerfile for creating the image:
FROM r-base
COPY . /scripts
RUN apt-get update
RUN apt-get install -y \
    libcurl4-openssl-dev \
    libssl-dev \
    libmariadb-client-lgpl-dev \
    libmariadbclient-dev \
    libxml2-dev \
    r-cran-plyr \
    r-cran-reshape2
WORKDIR /scripts
RUN Rscript /scripts/build.R
ENTRYPOINT /bin/bash
I got the answer!
There were some problems in my program:
1. The Docker image was not pulled correctly, so the CWL tool couldn't produce any output.
2. The inputs and outputs were not defined as mandatory, so I got a success status even though I did not have proper inputs and outputs.
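As a side note on the error capture: prefix: 2> in the original tool description is passed to Rscript as a literal argument rather than performing a shell redirection. Here is a hedged sketch of how stderr could be captured with CWL's built-in stderr field instead, and how making the main output non-optional would turn a missing cuff.txt into a hard failure (the cummerbund_0 image and the cuff.txt glob are taken from the question; everything else is an assumption):

class: CommandLineTool
cwlVersion: v1.0
id: cummerbund
label: cummerbund
baseCommand: [Rscript]
stderr: error.txt   # capture the tool's stderr instead of passing a literal 2> argument
requirements:
  - class: DockerRequirement
    dockerPull: cummerbund_0
inputs:
  Rfile:
    type: File
    inputBinding:
      position: 0
  cuffdiffout:
    type: 'File[]?'
    inputBinding:
      position: 1
outputs:
  output:
    type: File          # non-optional, so a missing cuff.txt makes the run fail loudly
    outputBinding:
      glob: cuff.txt
  errorFile:
    type: stderr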
