Robot Framework run on GitHub Actions can't get test return codes

I am using GitHub Actions to run my tests with Robot Framework. When a test completes in a bash shell, I can read its return code via the special variable $?, but here, even when a test fails, $? is 0:
name: Test
on: [workflow_dispatch]
jobs:
  TEST-Run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: install
        run: |
          pip3 install -r requirements.txt
      - name: Run Tests
        run: |
          robot test.robot
      - name: Set Robot Return Code
        run: |
          echo "ROBOT_RC=$?" >> "$GITHUB_ENV"
      - name: If Auto Test Pass Rate Not 100%, Job Will Fail
        if: env.ROBOT_RC != '0'
        run: |
          echo "Auto Test Pass Rate Not 100%, Please Check Test Result"
          exit 1
Any help or explanation is welcome! Thank you.

According to jobs.<job_id>.steps[*].run:
Each run keyword represents a new process and shell in the runner environment. When you provide multi-line commands, each line runs in the same shell.
So you need to combine those steps into one:
- name: Run Tests
  run: |
    robot test.robot
    echo "ROBOT_RC=$?" >> "$GITHUB_ENV"

or,

- name: Run Tests
  run: |
    robot test.robot; ROBOT_RC=$?
    echo "ROBOT_RC=$ROBOT_RC" >> "$GITHUB_ENV"
See jobs.<job_id>.steps[*].shell for more details.
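One caveat: hosted runners invoke each run step with bash -e, so if robot exits non-zero, the step aborts before the echo line ever runs. A capture pattern that survives a failing command can be checked locally; in this sketch, `false` stands in for a failing `robot test.robot` invocation:

```shell
#!/usr/bin/env bash
set -e  # same error-exit mode GitHub Actions uses for run steps

# Capture the status as part of the same command list, so a non-zero
# exit does not abort the script under -e:
false && ROBOT_RC=0 || ROBOT_RC=$?

echo "ROBOT_RC=$ROBOT_RC"   # → ROBOT_RC=1
```

In the workflow, `robot test.robot && ROBOT_RC=0 || ROBOT_RC=$?` followed by the echo into "$GITHUB_ENV" would preserve the code without failing the step early.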


Error using GitHub actions with R markdown

I'm new to using GitHub Actions, so please be gentle with me! I'm trying to automate a script that I'm currently running using Windows Task Scheduler. I've written the YAML code below and it's stored (privately) on GitHub at jeremy-horne/Client/.github/workflows/
The RMD code is 900 lines long and works when run in R studio. It is stored on GitHub at jeremy-horne/Client
name: my_project
on:
  schedule:
    - cron: "18 17 */1 * *" # Every day at 17:18
jobs:
  render-rmarkdown:
    runs-on: ubuntu-latest
    # optionally use a convenient Ubuntu LTS + DVC + CML image
    #container: docker://dvcorg/cml:latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup R
        uses: r-lib/actions/setup-r@v2
      #- name: Setup Pandoc
      #  uses: r-lib/actions/setup-pandoc@v2
      - name: Setup dependencies
        run: |
          R -e 'install.packages("renv")'
          R -e 'renv::restore()'
      - name: Identify and email articles
        #env:
        #  repo_token: ${{ secrets.GITHUB_TOKEN }}
        run: |
          Rscript -e 'rmarkdown::render(input = "My_Markdown_Code.Rmd")'
          if [[ "$(git status --porcelain)" != "" ]]; then
            git config --local user.name "$GITHUB_ACTOR"
            git config --local user.email "$GITHUB_ACTOR@users.noreply.github.com"
            git add *
            git commit -m "Auto update Report"
            git push origin
          fi
The error I get says: "render-rmarkdown: there is no package called ‘rmarkdown’"
Any help much appreciated!

Parallel execution of tests on TFS

We use TFS on our project. I have set Parallelism -> Multi Agent in the phase settings. The command itself to run (.NET Core) is:
dotnet test --filter TestCategory="Mobile" --logger trx -m:1.
Do I understand correctly that these settings will not split the tests between the two agents, but run the command above on the two agents?
The Visual Studio Test task (- task: VSTest@2) has built-in magic to distribute the tests based on configurable criteria. You could switch to the VSTest task to get this "magic"; the .NET Core task, or invoking dotnet straight from the command line, doesn't have it.
There is a GitHub repo that shows how to take advantage of the hidden variables that the agent sets when running in parallel:
#!/bin/bash
filterProperty="Name"
tests=("$@")                 # all test names, passed as arguments
testCount=${#tests[@]}
totalAgents=$SYSTEM_TOTALJOBSINPHASE
agentNumber=$SYSTEM_JOBPOSITIONINPHASE
if [ -z "$totalAgents" ] || [ "$totalAgents" -eq 0 ]; then totalAgents=1; fi
if [ -z "$agentNumber" ]; then agentNumber=1; fi
echo "Total agents: $totalAgents"
echo "Agent number: $agentNumber"
echo "Total tests: $testCount"
echo "Target tests:"
for ((i = agentNumber; i <= testCount; i += totalAgents)); do
  targetTestName=${tests[i - 1]}
  echo "$targetTestName"
  filter+="|${filterProperty}=${targetTestName}"
done
filter=${filter#"|"}
echo "##vso[task.setvariable variable=targetTestsFilter]$filter"
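The round-robin slice (agent N of M takes tests N, N+M, N+2M, …) can be checked locally. This sketch hard-codes made-up test names and agent numbers in place of the pipeline variables:

```shell
#!/usr/bin/env bash
# Agent 1 of 2 should pick tests 1, 3 and 5 (1-based, round-robin).
tests=(Test_A Test_B Test_C Test_D Test_E)
totalAgents=2
agentNumber=1

filter=""
for ((i = agentNumber; i <= ${#tests[@]}; i += totalAgents)); do
  filter+="|Name=${tests[i - 1]}"
done
filter=${filter#"|"}   # strip the leading separator

echo "$filter"   # → Name=Test_A|Name=Test_C|Name=Test_E
```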
This way you can slice the tasks in your pipeline:
- bash: |
    tests=($(dotnet test . --no-build --list-tests | grep Test_))
    . 'create_slicing_filter_condition.sh' "${tests[@]}"
  displayName: 'Create slicing filter condition'
- bash: |
    echo "Slicing filter condition: $(targetTestsFilter)"
  displayName: 'Echo slicing filter condition'
- task: DotNetCoreCLI@2
  displayName: Test
  inputs:
    command: test
    projects: '**/*Tests/*Tests.csproj'
    arguments: '--no-build --filter "$(targetTestsFilter)"'
I'm not sure whether this will scale to hundreds of thousands of tests; in that case you may have to break the list into batches and call dotnet test multiple times in a row. I couldn't find support for vstest playlists.

Access output of sbt print in Github Actions workflow

I'm trying to save the output of running sbt 'print version' | tail -n 1 to an environment variable in a GitHub Actions workflow, and it doesn't seem to work.
This is what I think should work, but it's just an empty string, when I try to access the variable later on in the job:
echo "TAG_VERSION=$(sbt 'print version' | tail -n 1)" >> $GITHUB_ENV
It works great in my own shell, but not in GitHub Actions. I'm using sbt version 1.5.3.
These are the logs for the step that doesn't seem to work (the "test version" step); the variable just never seems to get set:
Run echo "TAG_VERSION=$(sbt 'print version' | tail -n 1)" >> $GITHUB_ENV
echo "TAG_VERSION=$(sbt 'print version' | tail -n 1)" >> $GITHUB_ENV
shell: /usr/bin/bash -e {0}
env:
CI: true
JAVA_HOME: /home/runner/.jabba/jdk/adopt@1.8.0-292
Downloading sbt launcher for 1.5.3:
From https://repo1.maven.org/maven2/org/scala-sbt/sbt-launch/1.5.3/sbt-launch-1.5.3.jar
To /home/runner/.sbt/launchers/1.5.3/sbt-launch.jar
Downloading sbt launcher 1.5.3 md5 hash:
From https://repo1.maven.org/maven2/org/scala-sbt/sbt-launch/1.5.3/sbt-launch-1.5.3.jar.md5
To /home/runner/.sbt/launchers/1.5.3/sbt-launch.jar.md5
[info] [launcher] getting org.scala-sbt sbt 1.5.3 (this may take some time)...
[info] [launcher] getting Scala 2.12.14 (for sbt)...
This is the full workflow:
name: CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: olafurpg/setup-scala@v11
      #- name: run tests
      #  run: |
      #    sbt test
      - name: docker login
        run: echo ${{ secrets.DOCKER_PASSWORD }} | docker login -u ${{ secrets.DOCKER_USERNAME }} --password-stdin
      #- name: build image and publish
      #  run: sbt 'Docker / publish'
      - name: test version
        run: echo "TAG_VERSION=$(sbt 'print version' | tail -n 1)" >> $GITHUB_ENV
      - name: print test version
        run: echo ${{ env.TAG_VERSION }}
      - name: get version
        run: echo "TAG_VERSION=$(echo $(sbt -Dsbt.supershell=false 'print version' | tail -n 1))" >> $GITHUB_ENV
      - name: checkout helm repo
        uses: actions/checkout@v2
        with:
          repository: peterstorm/dialer-integration-argo
          token: ${{ secrets.ACCESS_TOKEN }}
          path: './dialer-integration-argo'
      - name: change image tag in helm repo
        uses: mikefarah/yq@master
        with:
          cmd: yq eval -i '.image.tag = "${{ env.TAG_VERSION }}"' './dialer-integration-argo/values.yaml'
      - name: push helm repo changes
        run: |
          cd dialer-integration-argo &&
          git config --global user.name 'Peter Storm' &&
          git config --global user.email 'peter.storm@peterstorm.io' &&
          git add . &&
          git commit -m "commit from github actions" &&
          git push origin master
Thank you!
Hopefully someone else finds this helpful (I know the question is over a year old, but you never know). I had a very similar issue and was attempting to write the result of $(sbt 'print version' | tail -n 1) to a GitHub Actions step output or to $GITHUB_ENV. After many different permutations, I found success by setting the -no-colors flag on the sbt command, like so:
- name: Get Version
  id: version
  run: echo ::set-output name=snapshot::$(sbt -no-colors 'print version' | tail -n 1)
- name: Show Version
  run: echo $(sbt -no-colors 'print version' | tail -n 1)
- name: Output Version
  run: echo "${{ steps.version.outputs.snapshot }} :rocket:" >> $GITHUB_STEP_SUMMARY
I would guess this would also work with the $GITHUB_ENV mechanism.
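This points at colorized output as the likely culprit: sbt's default logging wraps the version line in ANSI escape sequences, so the captured value is not the plain string it appears to be in a terminal. A minimal sketch of the effect (the version string is made up, and stripping the codes afterwards with sed is an alternative to -no-colors):

```shell
#!/usr/bin/env bash
# Stand-in for sbt's colorized "1.0.0-SNAPSHOT" output line:
colored=$'\e[32m1.0.0-SNAPSHOT\e[0m'

# Strip ANSI color sequences, which '-no-colors' avoids at the source:
plain=$(printf '%s' "$colored" | sed 's/\x1b\[[0-9;]*m//g')

echo "$plain"   # → 1.0.0-SNAPSHOT
```

Note also that GitHub has since deprecated the ::set-output workflow command; the equivalent today is writing a name=value line to the $GITHUB_OUTPUT file.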

How to prevent a step failing in Bitbucket Pipelines?

I run all my test cases, and some of them fail intermittently. The pipeline detects this and fails the step and the build, which blocks the next commands (zipping the report folder) from executing. I want to send that zip file as an email attachment.
Here is my bitbucket-pipelines.yml file
custom: # Pipelines that can only be triggered manually
  QA2: # The name that is displayed in the list in the Bitbucket Cloud GUI
    - step:
        image: openjdk:8
        caches:
          - gradle
        size: 2x # double resources available for this step to 8G
        script:
          - apt-get update
          - apt-get install zip
          - cd config/geb
          - ./gradlew -DBASE_URL=qa2 clean BSchrome_win # This step fails
          - cd build/reports
          - zip -r testresult.zip BSchrome_winTest
        after-script: # On test execution completion or build failure, send test report to e-mail lists
          - pipe: atlassian/email-notify:0.3.11
            variables:
              <<: *email-notify-config
              TO: 'email@email.com'
              SUBJECT: "Test result for QA2 environment"
              BODY_PLAIN: |
                Please find the attached test result report to the email.
              ATTACHMENTS: config/geb/build/reports/testresult.zip
The steps "cd build/reports" and "zip -r testresult.zip BSchrome_winTest" do not get executed, because "./gradlew -DBASE_URL=qa2 clean BSchrome_win" failed.
I don't want bitbucket to fail the step and stop the Queue's step from executing.
The bitbucket-pipelines.yml file is just running bash/shell commands on Unix. The script runner checks the return status code of each command to see whether it succeeded (status = 0) or failed (status = non-zero), so you can use various techniques to control this status code:
Add " || true" to the end of your command
./gradlew -DBASE_URL=qa2 clean BSchrome_win || true
Adding "|| true" to the end of a shell command means "ignore any errors and always return a success code of 0". More info:
Bash ignoring error for a particular command
https://www.cyberciti.biz/faq/bash-get-exit-code-of-command/
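The effect of "|| true" is easy to see in a strict-mode shell; here `false` stands in for the failing gradlew command:

```shell
#!/usr/bin/env bash
set -e              # Bitbucket's script runner aborts on the first failure

false || true       # the failure's status is discarded, the script continues

echo "still running"   # → still running
```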
Use "gradlew --continue" flag
./gradlew -DBASE_URL=qa2 clean BSchrome_win --continue
The "--continue" flag can be used to prevent a single test failure from stopping the whole task. So if one test or sub-step fails, gradle will try to continue running the other tests until all are run. However, it may still return an error, if an important step failed. More info: Ignore Gradle Build Failure and continue build script?
Move the 2 steps to the after-script section
after-script:
  - cd config/geb # You may need this, if the current working directory is reset. Check with 'pwd'
  - cd build/reports
  - zip -r testresult.zip BSchrome_winTest
If you move the 2 steps for zip creation to the after-script section, then they will always run, regardless of the success/fail status of the previous step.
A better solution
If you want all the commands in your script to execute regardless of errors then put set +e at the top of your script.
If you just want to ignore the error for one particular command then put set +e before that command and set -e after it.
Example:
- set +e
- ./gradlew -DBASE_URL=qa2 clean BSchrome_win # This step fails
- set -e

This is also valid for a group of commands:

- set +e
- cd config/geb
- ./gradlew -DBASE_URL=qa2 clean BSchrome_win # This step fails
- cd config/geb
- set -e
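Under set +e the exit status is still observable, which is useful if the zip/email commands should run but the build should ultimately be reported as failed. A local sketch, with `false` standing in for the gradlew call:

```shell
#!/usr/bin/env bash
set -e            # strict mode, as in the pipeline shell

set +e            # temporarily tolerate failures
false             # stand-in for the failing gradlew invocation
rc=$?             # the status is still available here
set -e            # restore strict mode

echo "gradlew exit code was $rc"   # → gradlew exit code was 1
```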
I had a similar problem: a command that normally takes 1 minute would sometimes stall, hit the 2-hour max build timeout, and corrupt my cypress installation.
I wrapped my command with the timeout command and then ORed the result with true.
eg. I changed this:
- yarn
to this:
- timeout 5m yarn || yarn cypress install --force || true # Sometimes this stalls, so kill it if it takes more than 5m and reinstall cypress
- timeout 5m yarn # Try again (in case it failed on previous line). Should be quick
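The timeout wrapper's behaviour can be sketched in isolation; `sleep` stands in for the stalling yarn install, and the limit is shortened for the demo:

```shell
#!/usr/bin/env bash
# 'timeout' kills the command when the limit passes and exits non-zero (124),
# which the || branch turns into a recovery action instead of a build failure:
timeout 1 sleep 5 || echo "timed out, running fallback"   # → timed out, running fallback
```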

How to organize your Postman collections and execute them with Newman with GitLab CI?

I am thinking of organizing my Postman API tests by creating a collection per feature. So, for example, with 2 features the collections would be saved in /test/FEAT-01/FEAT-01-test.postman_collection.json and /test/FEAT-02/FEAT-02-test.postman_collection.json. This way I can compare the collections separately in git and selectively execute tests as I want.
However I want my GitLab CI to execute all my tests under my root test folder using Newman. How can I achieve that?
One thing that worked for me was creating one job per collection in GitLab. It looks something like this:
variables:
  POSTMAN_COLLECTION1: <Your collection here>.json
  POSTMAN_COLLECTION2: <Your collection here>.json
  POSTMAN_ENVIRONMENT: <Your environment here>.json
  DATA: <Your iterator file>

stages:
  - test

postman_tests:
  stage: test
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-html
    - newman run ${POSTMAN_COLLECTION1} -e ${POSTMAN_ENVIRONMENT} -d ${DATA} --reporters cli,html --reporter-html-export report.html
  artifacts:
    when: always
    paths:
      - report.html

postman_test2:
  stage: test
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-html
    - newman run ${POSTMAN_COLLECTION2} -e ${POSTMAN_ENVIRONMENT} --reporters cli,html --reporter-html-export report.html
  artifacts:
    when: always
    paths:
      - report.html
You can also create a script that reads the collections from your root folder and executes newman once per collection, something like this:
$ for collection in ./PostmanCollections/*; do newman run "$collection" --environment ./PostmanEnvironments/Test.postman_environment.json -r cli; done
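The loop pattern can be checked without newman installed; `echo` stands in for the newman run invocation, and the collection files are created just for the demo:

```shell
#!/usr/bin/env bash
# Create a throwaway folder with two fake collections:
dir=$(mktemp -d)
mkdir -p "$dir/PostmanCollections"
touch "$dir/PostmanCollections/FEAT-01-test.postman_collection.json" \
      "$dir/PostmanCollections/FEAT-02-test.postman_collection.json"

# One invocation per collection file, as the real loop would do with newman:
for collection in "$dir"/PostmanCollections/*.postman_collection.json; do
  echo "would run: newman run $collection"
done
```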
