We use TFS on our project. I have set Parallelism -> Multi Agent in the phase settings. The command itself to run (.NET Core) is:
dotnet test --filter TestCategory="Mobile" --logger trx -m:1.
Do I understand correctly that these settings will not split the tests between the two agents, but run the command above on the two agents?
The Visual Studio Test task (- task: VSTest@2) has built-in magic to distribute the tests based on configurable criteria. You could switch to the VSTest task to run your tests and get this "magic"; the .NET Core task, and invoking dotnet straight from the command line, doesn't have it.
There is a GitHub repo that shows how to take advantage of the hidden variables that are set by the agent when running in parallel:
#!/bin/bash
# Builds a --filter expression that selects every Nth test for this agent,
# based on the slicing variables the agent sets when running in parallel.
filterProperty="Name"
tests=("$@")                      # all test names, passed as arguments
testCount=${#tests[@]}
totalAgents=$SYSTEM_TOTALJOBSINPHASE
agentNumber=$SYSTEM_JOBPOSITIONINPHASE
if [ -z "$totalAgents" ] || [ "$totalAgents" -eq 0 ]; then totalAgents=1; fi
if [ -z "$agentNumber" ]; then agentNumber=1; fi
echo "Total agents: $totalAgents"
echo "Agent number: $agentNumber"
echo "Total tests: $testCount"
echo "Target tests:"
for ((i=agentNumber; i<=testCount; i+=totalAgents)); do
  targetTestName=${tests[i-1]}
  echo "$targetTestName"
  filter+="|${filterProperty}=${targetTestName}"
done
filter=${filter#"|"}
echo "##vso[task.setvariable variable=targetTestsFilter]$filter"
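As a quick sanity check of the round-robin slicing (the test names and agent numbers below are made up for illustration), agent 2 of 3 picks every third test starting at position 2:

```shell
# Hypothetical: 7 tests split across 3 agents; this simulates agent number 2.
tests=(Test_A Test_B Test_C Test_D Test_E Test_F Test_G)
totalAgents=3
agentNumber=2
filter=""
for ((i=agentNumber; i<=${#tests[@]}; i+=totalAgents)); do
  filter+="|Name=${tests[i-1]}"
done
filter=${filter#"|"}   # strip the leading separator
echo "$filter"         # Name=Test_B|Name=Test_E
```

Each agent starts at its own position and strides by the agent count, so the slices are disjoint and together cover every test.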
This way you can slice the tasks in your pipeline:
- bash: |
    tests=($(dotnet test . --no-build --list-tests | grep Test_))
    . ./create_slicing_filter_condition.sh "${tests[@]}"
  displayName: 'Create slicing filter condition'
- bash: |
    echo "Slicing filter condition: $(targetTestsFilter)"
  displayName: 'Echo slicing filter condition'
- task: DotNetCoreCLI@2
  displayName: Test
  inputs:
    command: test
    projects: '**/*Tests/*Tests.csproj'
    arguments: '--no-build --filter "$(targetTestsFilter)"'
I'm not sure whether this will scale to hundreds of thousands of tests. In that case you may have to break the list into batches and call dotnet test multiple times in a row. I couldn't find support for vstest playlists.
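If the generated filter does get too long for the command line, batching can be sketched like this (test names are made up, and the dotnet invocation is commented out; slice the array and run once per chunk):

```shell
# Hypothetical batching: run the suite in chunks so no single --filter
# exceeds the command-line length limit.
tests=(Test_1 Test_2 Test_3 Test_4 Test_5)
batchSize=2
for ((start=0; start<${#tests[@]}; start+=batchSize)); do
  batch=("${tests[@]:start:batchSize}")
  # Join the batch into Name=X|Name=Y form.
  filter=$(IFS='|'; parts=("${batch[@]/#/Name=}"); echo "${parts[*]}")
  echo "$filter"
  # dotnet test --no-build --filter "$filter"
done
```

Each iteration builds a filter for at most batchSize tests, so the argument length stays bounded regardless of how many tests there are in total.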
I am using GitHub Actions to run my tests with Robot Framework. When the tests complete, in a bash shell I can normally get the return code via the special variable $?, but even when the tests fail it is 0:
name: Test
on: [workflow_dispatch]
jobs:
  TEST-Run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: install
        run: |
          pip3 install -r requirements.txt
      - name: Run Tests
        run: |
          robot test.robot
      - name: Set Robot Return Code
        run: |
          echo "ROBOT_RC=$?" >> "$GITHUB_ENV"
      - name: If Auto Test Pass Rate Not 100%, Job Will Fail
        if: env.ROBOT_RC != '0'
        run: |
          echo "Auto Test Pass Rate Not 100%, Please Check Test Result"
          exit 1
Any help or explanation is welcome! Thank you.
According to jobs.<job_id>.steps[*].run:
Each run keyword represents a new process and shell in the runner environment. When you provide multi-line commands, each line runs in the same shell.
So, you need to combine those steps into one. Note that GitHub runs each run block with bash -e by default, so if robot test.robot fails, the step stops before a following echo executes; capture the exit code without tripping errexit:
- name: Run Tests
  run: |
    robot test.robot || ROBOT_RC=$?
    echo "ROBOT_RC=${ROBOT_RC:-0}" >> "$GITHUB_ENV"
or, disabling errexit explicitly:
- name: Run Tests
  run: |
    set +e
    robot test.robot
    echo "ROBOT_RC=$?" >> "$GITHUB_ENV"
See jobs.<job_id>.steps[*].shell for more details.
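The quoted behaviour is easy to reproduce locally with plain bash, nothing GitHub-specific: $? reflects the last command run in the same shell, while a brand-new shell starts with $? equal to 0, which is exactly what a separate run: step sees.

```shell
bash -c 'exit 7'
echo "same shell: $?"            # the shell that ran the command sees 7
bash -c 'echo "new shell: $?"'   # a fresh shell starts with $? = 0
```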
I want to add an AZ CLI task that runs as a test task in my pipeline.
I can always add an AZ CLI task with --query, but I want this task to behave just like a test: if the --query or the AZ CLI command fails with some error code, it should be reported as a failed test and it should not fail the pipeline.
Can this be achieved somehow?
UPDATE
If I run an az CLI command like
output=$(az acr repository show --name <ACR-name> --image <Image-name>:<Incorrect-version> --query "name" --output tsv)
echo $output
And since the version is incorrect the output would be "ERROR: Error: the specified tag does not exist. Correlation ID: XXXXXXXXXXXXXXXX"
Is it possible to publish this as a Test result tab of YAML pipeline?
I want to launch my test suite from a bash script using the & operator in background mode, because I want my bash script to do something else while the test suite is running.
This doesn't seem possible at the moment. What I see is:
$ dotnet test --filter "FullyQualifiedName~GenerateTransactions" >> dotnet.log 2>&1 &
[1] 2068
Tailing the log file for 5 minutes doesn't show me anything. Then pressing Enter in the terminal where I started the tests shows that the dotnet process has stopped:
$ <press Enter>
[1]+ Stopped dotnet test --filter "FullyQualifiedName~GenerateTransactions" >> dotnet.log 2>&1
Then I type fg, the dotnet process comes to the foreground, and the log file immediately starts filling with output:
$ fg
dotnet test --filter "FullyQualifiedName~GenerateTransactions" >> dotnet.log 2>&1
So how do I make dotnet test detach from the controlling terminal/script ?
You can use the nohup command to start your dotnet test:
nohup dotnet test --filter "FullyQualifiedName~GenerateTransactions" >> dotnet.log &
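If the script also needs the exit code, keep the PID and wait for it later. This is a sketch where sleep stands in for the dotnet command; redirecting stdin from /dev/null is an extra safeguard, since a background job that reads from the controlling terminal is what gets stopped in the first place:

```shell
# `sleep 1` is a stand-in for `dotnet test ...`.
nohup sh -c 'sleep 1' < /dev/null >> dotnet.log 2>&1 &
pid=$!
# ...do other work while the tests run...
wait "$pid"
echo "tests finished with exit code $?"
```

wait blocks until that specific background job exits and then reports its exit status through $?.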
I have a test suite of over 10,000 tests and sometimes only want to rerun the tests, that failed on the previous run, using the dotnet vstest CLI.
I ended up using the following PowerShell command to run only the previously failed tests again, based on the newest trx file in .\TestResults\:
dotnet vstest '.\bin\Debug\netcoreapp3.0\MyTests.dll' /Logger:trx /Tests:"$((Select-Xml -Path (gci '.\TestResults\' | sort LastWriteTime | select -last 1).FullName -XPath "//ns:UnitTestResult[@outcome='Failed']/@testName" -Namespace @{"ns"="http://microsoft.com/schemas/VisualStudio/TeamTest/2010"}).Node.Value | % {$_ -replace '^My\.Long\.And\.Tedious\.Namespace\.', ''} | % {$_ -replace '^(.*?)\(.*$','$1'} | Join-String -Separator ','))"
Beware that there is a limit on the maximum command-line length, which can easily be hit when many tests have previously failed.
Use the % {$_ -replace '^My\.Long\.And\.Tedious\.Namespace\.', ''} part to get rid of namespace prefixes, if you can.
I am thinking of organizing my Postman API tests by creating a collection per feature. So, for example, with 2 features the collections will be saved in /test/FEAT-01/FEAT-01-test.postman_collection.json and /test/FEAT-02/FEAT-02-test.postman_collection.json. This way I am able to compare the collections separately in git and selectively execute tests as I want.
However I want my GitLab CI to execute all my tests under my root test folder using Newman. How can I achieve that?
One thing that worked for me was creating one job per collection in GitLab; it looks something like this:
variables:
  POSTMAN_COLLECTION1: <Your collection here>.json
  POSTMAN_COLLECTION2: <Your collection here>.json
  POSTMAN_ENVIRONMENT: <Your environment here>.json
  DATA: <Your iterator file>

stages:
  - test

postman_tests:
  stage: test
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-html
    - newman run ${POSTMAN_COLLECTION1} -e ${POSTMAN_ENVIRONMENT} -d ${DATA} --reporters cli,html --reporter-html-export report.html
  artifacts:
    when: always
    paths:
      - report.html

postman_test2:
  stage: test
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-html
    - newman run ${POSTMAN_COLLECTION2} -e ${POSTMAN_ENVIRONMENT} --reporters cli,html --reporter-html-export report.html
  artifacts:
    when: always
    paths:
      - report.html
You can also create a script that reads your root folder and executes newman once per collection, something like this:
$ for collection in ./PostmanCollections/*; do newman run "$collection" --environment ./PostmanEnvironments/Test.postman_environment.json -r cli; done