Get GitHub Action to fail when R throws an error [duplicate]

I want to exit a job if a specific condition is met:
jobs:
  foo:
    steps:
      ...
      - name: Early exit
        run: exit_with_success # I want to know what command I should write here
        if: true
      - run: foo
      - run: ...
      ...
How can I do this?

There is currently no way to exit a job arbitrarily, but there is a way to skip subsequent steps unless an earlier step failed, using conditionals:
jobs:
  foo:
    steps:
      ...
      - name: Early exit
        run: exit_with_success # I want to know what command I should write here
      - if: failure()
        run: foo
      - if: failure()
        run: ...
      ...
The idea is that if the first step fails, then the rest will run, but if the first step doesn't fail the rest will not run.
However, it comes with the caveat that if any of the subsequent steps fail, the steps following them will still run, which may or may not be desirable.
Another option is to use step outputs to indicate failure or success:
jobs:
  foo:
    steps:
      ...
      - id: s1
        name: Early exit
        run: # exit_with_success
      - id: s2
        if: steps.s1.conclusion == 'failure'
        run: foo
      - id: s3
        if: steps.s2.conclusion == 'success'
        run: ...
      ...
This method works pretty well and gives you very granular control over which steps are allowed to run and when; however, it becomes very verbose with all the conditions you need to add.
Yet another option is to have two jobs: one which checks your condition and another which depends on it:
jobs:
  check:
    outputs:
      status: ${{ steps.early.conclusion }}
    steps:
      - id: early
        name: Early exit
        run: # exit_with_success
  work:
    needs: check
    if: needs.check.outputs.status == 'success'
    steps:
      - run: foo
      - run: ...
      ...
This last method works very well by moving the check into a separate job and having the other job wait and check its status. However, if you have more jobs, you have to repeat the same check in each one. This is still not too bad compared to doing a check in each step.
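For example, with two downstream jobs the same condition simply gets duplicated (a sketch; work1 and work2 are placeholder names):
jobs:
  check:
    outputs:
      status: ${{ steps.early.conclusion }}
    steps:
      - id: early
        name: Early exit
        run: # exit_with_success
  work1:
    needs: check
    if: needs.check.outputs.status == 'success'
    steps:
      - run: foo
  work2:
    needs: check
    if: needs.check.outputs.status == 'success'
    steps:
      - run: bar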
Note: In the last example, you can have the check job depend on the outputs of multiple steps by using the object filter syntax, then use the contains function in further jobs to ensure none of the steps failed:
jobs:
  check:
    outputs:
      status: ${{ join(steps.*.conclusion) }}
    steps:
      - id: early
        name: Early exit
        run: # exit_with_success
      - id: more_steps
        name: Mooorreee
        run: # exit_maybe_with_success
  work:
    needs: check
    if: ${{ !contains(needs.check.outputs.status, 'failure') }}
    steps:
      - run: foo
      - run: ...
Furthermore, keep in mind that "failure" and "success" are not the only conclusions available from a step. See steps.<step id>.conclusion for the other possible values.
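For instance, a step can also finish as "cancelled" or "skipped", so a guard like the following sketch only fires when an earlier step was skipped (s1 as in the example above):
      - if: steps.s1.conclusion == 'skipped'
        run: echo "s1 was skipped"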

Related

How to use a PAT token to clone a private repo through GitHub Actions?

name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Runs a single command using the runners shell
      - name: terratest
        run: echo Github pat token is ${{ secrets.TF_GITHUB_PAT }}
      - name: checkout t2form-k8s repo
        uses: actions/checkout@v2
        with:
          repository: harikalyank/t2form-k8s
          token: ${{ secrets.TF_GITHUB_PAT }}
          ref: master
The above is my GitHub workflow, and it fails to clone the repo t2form-k8s.
error: Warning: The save-state command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
Error: Input required and not supplied: token
I created the PAT/secret at the repository level as well, with the name TF_GITHUB_PAT.

Include multiple salt states in specific order

I want to compose multiple existing salt states into a new one, where they need to be executed in a specific order.
The SaltStack documentation explains that salt states can be included.
As I understand, the included states will be run before the rest of the sls file.
Example:
include:
  - config-pulled
  - service-restarted
Using this example, I want service-restarted to be executed after config-pulled and only if config-pulled was successful.
But the execution order of multiple included states is not guaranteed. The docs say:
... If you need to guarantee order of execution, consider using requisites.
I could imagine using requisites directly on the include. For example:
include:
  - config-pulled
  - service-restarted:
      require:
        - config-pulled
But this does not work.
Questions
How to use requisites when including states?
Do I have to use an orchestrate script instead?
You don't have to use orch; you can use onchanges requisites with a test.succeed_with_changes.
Tested on Salt 3004.2.
To sum up the demo: onchanges inhibits the execution of test.succeed_with_changes by default unless there is a change in the given state (tests.config-pulled). onchanges_in does the same the other way around, inhibiting the service restart unless test.succeed_with_changes reports a change (in the Salt sense).
Example:
/srv/salt
├── tests
│   ├── config-pulled.sls
│   ├── init.sls
│   └── service-restarted.sls
config-pulled.sls
config-pulled:
  file.managed:
    - name: /tmp/config
    - contents: 1000
    # uncomment the random line to simulate a change (or just bump 1000)
    # - contents: {{ range(1000, 9999) | random }}
service-restarted.sls
service-restarted:
  cmd.run:
    - name: echo service-restarted
init.sls
include:
  - tests.config-pulled
  - tests.service-restarted

demo:
  test.succeed_with_changes:
    - onchanges:
      - sls: tests.config-pulled
    - onchanges_in:
      - sls: tests.service-restarted
This approach might become a bit difficult to maintain.
A completely different approach would be a reorganization of sls.
I usually separate server install and reload from customization (two separate sls files). When I have to handle configs, I include the "install and reload" sls (usually init.sls) at the top of any/all of the sls files managing the conf, to import its states, and then configure my config states with require_in, e.g.:
myconf:
  file.managed:
    - [...]
    - require_in:
      - cmd: rndc-reload
or
myconf:
  file.managed:
    - [...]
    - require_in:
      - service: haproxy
Note: this approach scales well, since several states or even several sls files can manage configs through this mechanism, and the daemon will be reloaded only once after all configs are set.
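For instance, two independent config states can both point require_in at the same service state, and the daemon is still handled once (a sketch; the file names are made up, haproxy as in the example above):
conf_a:
  file.managed:
    - name: /etc/haproxy/conf.d/a.cfg
    - require_in:
      - service: haproxy

conf_b:
  file.managed:
    - name: /etc/haproxy/conf.d/b.cfg
    - require_in:
      - service: haproxy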
UPDATE
Added output to show the specific order.
Without changes
# salt-call state.apply tests
local:
----------
ID: config-pulled
Function: file.managed
Name: /tmp/config
Result: True
Comment: File /tmp/config is in the correct state
Started: 14:49:20.363480
Duration: 15.688 ms
Changes:
----------
ID: demo
Function: test.succeed_with_changes
Result: True
Comment: State was not run because none of the onchanges reqs changed
Started: 14:49:20.380447
Duration: 0.004 ms
Changes:
----------
ID: service-restarted
Function: cmd.run
Name: echo service-restarted
Result: True
Comment: State was not run because none of the onchanges reqs changed
Started: 14:49:20.380536
Duration: 0.002 ms
Changes:
Summary for local
------------
Succeeded: 3
Failed: 0
------------
Total states run: 3
Total run time: 15.694 ms
With changes
# salt-call state.apply tests
local:
----------
ID: config-pulled
Function: file.managed
Name: /tmp/config
Result: True
Comment: File /tmp/config updated
Started: 14:49:15.757487
Duration: 18.292 ms
Changes:
----------
diff:
---
+++
@@ -1 +1 @@
-1001
+1002
----------
ID: demo
Function: test.succeed_with_changes
Result: True
Comment: Success!
Started: 14:49:15.777084
Duration: 0.549 ms
Changes:
----------
testing:
----------
new:
Something pretended to change
old:
Unchanged
----------
ID: service-restarted
Function: cmd.run
Name: echo service-restarted
Result: True
Comment: Command "echo service-restarted" run
Started: 14:49:15.777785
Duration: 7.69 ms
Changes:
----------
pid:
4130033
retcode:
0
stderr:
stdout:
service-restarted
Summary for local
------------
Succeeded: 3 (changed=3)
Failed: 0
------------
Total states run: 3
Total run time: 26.531 ms
Includes are not states, so requisites will not work with them.
As for the ticket you pointed to: that became test.nop (https://docs.saltproject.io/en/latest/ref/states/all/salt.states.test.html#salt.states.test.nop), which is just a state that doesn't do anything.
For handling what you are talking about, you would do something like:
include:
  - http
  - libvirt

run_http:
  test.nop:
    - require:
      - sls: http

run_libvirt:
  test.nop:
    - require:
      - test: run_http
      - sls: libvirt
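The test.nop states act as ordering anchors: run_http completes only after everything in the http sls, and run_libvirt waits on both run_http and the libvirt sls, which effectively serializes the two included sls files.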

SaltStack result success only if return is expected

I'm using win_wua.list to check if the servers have available updates
update_system:
  module.run:
    - name: win_wua.list
The problem is that no matter whether the server has pending updates or not, the state result is "True".
But my goal is to run this as a post-check, so I want the state to be "successful" only if the return is "Nothing to return"; in any other case I would like the state result to be "False".
LKA5:
----------
ID: update_system
Function: module.run
Name: win_wua.list
Result: True
Comment: Module function win_wua.list executed
Started: 17:04:29.326587
Duration: 4017.653 ms
Changes:
----------
ret:
Nothing to return
Summary for LKA5
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
Total run time: 4.018 s
LKA3:
----------
ID: update_system
Function: module.run
Name: win_wua.list
Result: True
Comment: Module function win_wua.list executed
Started: 17:04:25.563111
Duration: 7113.497 ms
Changes:
----------
ret:
Nothing to return
Summary for LKA3
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
Total run time: 7.113 s
So there are two modules to manage Windows updates with SaltStack:
win_wua execution module
win_wua state module
It seems you are using the execution module, which always runs irrespective of the current state of the system, so it will always report as changed. On the other hand, the state module will only run if updates are required.
The following state ID will ensure that the system is updated (as specified), and it will report as changed. Then we can use the requisite system to trigger another state.
Example:
# Including driver updates for example
update_system:
  wua.uptodate:
    - drivers: True

# Verify if further updates are there
verify_updates:
  module.run:
    - name: win_wua.list
    - install: True
    - onchanges:
      - wua: update_system
In the above example, if update_system didn't update anything (there were no updates), then verify_updates won't run. If some packages were updated, then verify_updates will run. There are other requisites (linked above) that can be considered as per your use case.
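If the goal is strictly a post-check that fails whenever updates are still pending, a different technique is to branch in Jinja on the execution module's return. This is only a sketch: it assumes win_wua.list returns the literal string "Nothing to return" when nothing is pending, as in the output shown above, and verify_no_pending is a made-up state ID:
{% if salt['win_wua.list']() == 'Nothing to return' %}
verify_no_pending:
  test.succeed_without_changes:
    - name: No pending updates
{% else %}
verify_no_pending:
  test.fail_without_changes:
    - name: Updates are still pending
{% endif %}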

SaltStack: execute a command/invoke a different state upon state failure

I am trying to run multiple states in a sls file and I have a requirement to execute a command upon failure of a state.
e.g.
test_cmd1:
  cmd.run:
    - name: |
        echo 'Command 1'

test_cmd2:
  cmd.run:
    - name: |
        echo 'Command 2'

on_fail_command:
  cmd.run:
    - name: |
        echo 'On failure'
        exit 1
I want on_fail_command to be executed when either of test_cmd1 or test_cmd2 fails, but not when both test commands execute successfully. I have failhard set to True globally in our system.
I tried using onfail, but that does not behave the way I want. onfail executes a state if any of the states listed under onfail fails, but here I am looking to skip executing the other states upon a state failure and instead jump to on_fail_command and then exit.
Set the order of your on_fail_command state so it runs before anything else, and set failhard on it so it fails the whole job.
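A sketch of one way to wire that up, using the state IDs from the question (onfail and failhard are documented state arguments; note that with the question's global failhard the run may stop at the failing test command before this handler fires, so that interaction is worth testing on your version):
on_fail_command:
  cmd.run:
    - name: |
        echo 'On failure'
        exit 1
    - failhard: True
    - onfail:
      - cmd: test_cmd1
      - cmd: test_cmd2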

Run downloaded SaltStack formula

I've downloaded the PHP formula by following the instructions here: https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html (substituting php for apache). In my salt config file (which I assume is /etc/salt/master), I've set file_roots like so:
file_roots:
  base:
    - /srv/salt
    - /srv/formulas/php-formula
I don't know how I'm supposed to run it now. I've previously gotten a salt state file to run, but only after discovering that the documentation was incomplete and I'd missed a step. If I try to run the formula the same way I've been running that state, I just get errors.
salt '*' state.apply php-formula
salt-minion:
Data failed to compile:
----------
No matching sls found for 'php-formula' in env 'base'
ERROR: Minions returned with non-zero exit code
I've also tried: sudo salt '*' state.highstate, and it also has errors:
salt-minion:
----------
ID: states
Function: no.None
Result: False
Comment: No Top file or master_tops data matches found.
Changes:
Summary for salt-minion
------------
Succeeded: 0
Failed: 1
------------
Total states run: 1
Total run time: 0.000 ms
ERROR: Minions returned with non-zero exit code
You have to add a top.sls file to /srv/salt/, not just to /srv/pillar/. If you have a file called /srv/salt/php.sls, you have to remove it; otherwise it will interfere with /srv/pillar/php.sls.
Contents of /srv/salt/top.sls:
base:
  '*':
    - php
This is kind of bizarre, because my previous test (which wasn't a formula) used /srv/salt/php.sls and /srv/pillar/top.sls. Now I'm using /srv/pillar/php.sls and /srv/salt/top.sls.
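For completeness, the matching pillar top file would look like this sketch (the actual php pillar keys depend on what php-formula expects):
Contents of /srv/pillar/top.sls:
base:
  '*':
    - php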
