Ansible async task with poll=0 living beyond timeout

For context: Ansible 2.7.9.
I'm experimenting with Ansible asynchronous actions, using this playbook:
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - name: This is "long task", sleeping for {{ secondsTaskLong }} seconds
      shell: "[ -f testFile ] && rm testFile; sleep {{ secondsTaskLong }}; touch testFile"
      async: "{{ timeoutSeconds }} "
      poll: 0
    - stat:
        path: testFile
      register: checkTestFile
    - name: Check the test file exists
      assert:
        that: checkTestFile.stat.exists
...
Test 1:
ansible-playbook async.yml --extra-var "secondsTaskLong=2 timeoutSeconds=3"
The assertion fails, but if I check the directory contents, ls shows the test file ./testFile is there.
Test 2:
Testing with a 5s duration:
ansible-playbook async.yml --extra-var "secondsTaskLong=5 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"
The final watch part reveals that the test file is created a few seconds AFTER the end of the playbook execution.
Test 3:
Now with a 9s duration (i.e. way beyond the timeout):
ansible-playbook async.yml --extra-var "secondsTaskLong=9 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"
The test file is still created.
Test 4:
Now trying with 10 seconds:
ansible-playbook async.yml --extra-var "secondsTaskLong=10 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"
The test file is NOT created.
What is going on exactly? What allows this "long task" to live for 9 seconds, beyond the timeout? What kills it at 10 seconds?
EDIT:
I can add an explicit connection timeout > 10s and still observe the same behavior:
ansible-playbook async.yml -T 20 --extra-var "secondsTaskLong=9 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"
ansible-playbook async.yml -T 20 --extra-var "secondsTaskLong=10 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"

What is going on exactly? What allows this "long task" to live for 9 seconds, beyond the timeout? What kills it at 10 seconds?
When poll > 0, Ansible uses the async value as the timeout; otherwise, the connection timeout (default: 10 seconds) is used. See the Ansible documentation on the connection timeout and on the poll value for details.
For example, if you change the task as below and use timeoutSeconds=3 and secondsTaskLong=4, you will get a timeout error after 3 seconds and the assert will fail. However, with timeoutSeconds=3 and secondsTaskLong=2, the assert will succeed.
- name: This is "long task", sleeping for {{ secondsTaskLong }} seconds
  shell: "[ -f testFile ] && rm testFile; sleep {{ secondsTaskLong }}; touch testFile"
  async: "{{ timeoutSeconds }} "
  poll: 1
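If you want to keep poll: 0 (fire and forget) but still have the play wait for the result, a common pattern is to register the job and then poll it with the async_status module. A minimal sketch, reusing the variables and testFile from the playbook above (the retries/delay values are arbitrary):
- name: Start the "long task" in the background
  shell: "[ -f testFile ] && rm testFile; sleep {{ secondsTaskLong }}; touch testFile"
  async: "{{ timeoutSeconds }}"
  poll: 0
  register: longTask
- name: Wait for the background job to finish
  async_status:
    jid: "{{ longTask.ansible_job_id }}"
  register: jobResult
  until: jobResult.finished
  retries: 30
  delay: 1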

Related

How to use kafkacat -f option?

I am trying to consume a Kafka message using the options below (with format tokens as required):
kafkacat -C -b localhost:9092 -t test-topic -p 0 -f 'Topic %t [%p] at offset %o: key %k: %s\n' -o -1  -e | jq .
But I am getting the error below:
Error: file/topic list only allowed in producer(-P)/kafkaconsumer(-G) mode
Usage: <path> <options> [file1 file2 .. | topic1 topic2 ..]]
kcat - Apache Kafka producer and consumer tool
https://github.com/edenhill/kcat
If I run the above command without the -f option it works, but I want the formatted output. What could be the issue?
This has worked for me:
[root@vm-10-75-112-163 cloud-user]# kcat -b mybroker:9092 -t test1 -f 'Topic %t[%p], offset: %o, key: %k, payload: %S bytes: %s\n' -C
Topic test1[0], offset: 0, key: , payload: 1 bytes: 1
Topic test1[0], offset: 1, key: , payload: 1 bytes: 2
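If the end goal is structured output for jq, another option worth trying (assuming a reasonably recent kafkacat/kcat build) is the -J flag, which wraps each consumed message in a JSON envelope instead of a custom -f format:
kcat -C -b localhost:9092 -t test-topic -p 0 -o -1 -e -J | jq .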

How do I get just the STDOUT of a salt state?

I'm learning Salt Stack right now and I was wondering if there is a way to get the stdout of a salt state, put it into a document, and then send it to the master. Or is there a better way to do this?
To achieve this, we have to save the result of the script execution in a variable. It will contain a hash with the keys that show up under changes:. The contents of this variable (stdout) can then be written to a file.
{% set script_res = salt['cmd.script']('salt://test.sh') %}

create-stdout-file:
  file.managed:
    - name: /tmp/script-stdout.txt
    - contents: {{ script_res.stdout }}
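If the resulting file then needs to end up on the master, one option (a sketch, assuming file_recv: True is enabled in the master configuration) is to push it back with cp.push:
salt '*' cp.push /tmp/script-stdout.txt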
The output is already going to the master. It would be better to output JSON and query down to the data you want in your document on the master.
For example:
Normal output
$ sudo salt salt00\* state.apply tests.test3
salt00.wolfnet.bad4.us:
----------
          ID: test_run
    Function: cmd.run
        Name: echo test
      Result: True
     Comment: Command "echo test" run
     Started: 10:39:51.103057
    Duration: 18.281 ms
     Changes:
              ----------
              pid:
                  8661
              retcode:
                  0
              stderr:
              stdout:
                  test

Summary for salt00.wolfnet.bad4.us
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  18.281 ms
JSON output
$ sudo salt salt00\* state.apply tests.test3 --out json
{
    "salt00.wolfnet.bad4.us": {
        "cmd_|-test_run_|-echo test_|-run": {
            "name": "echo test",
            "changes": {
                "pid": 9057,
                "retcode": 0,
                "stdout": "test",
                "stderr": ""
            },
            "result": true,
            "comment": "Command \"echo test\" run",
            "__sls__": "tests.test3",
            "__run_num__": 0,
            "start_time": "10:40:55.582273",
            "duration": 19.374,
            "__id__": "test_run"
        }
    }
}
JSON parsed down with jq to just the stdout:
$ sudo salt salt00\* state.apply tests.test3 --out=json | jq '.|.[]|."cmd_|-test_run_|-echo test_|-run"|.changes.stdout'
"test"
Also, for the record, it is considered bad practice to put code that changes the system into Jinja. Jinja always runs when a template is rendered and there is no way to control whether it happens, so even a test=true run will still execute the Jinja code that makes changes, which could be very harmful to your systems.
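A minimal sketch of the state-based alternative (assuming test.sh is served from the Salt fileserver as salt://test.sh); the script then only runs when the state is actually applied, and its stdout appears under changes and in the JSON output shown above:
run-test-script:
  cmd.script:
    - source: salt://test.sh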

How to retrieve the process id of script running in different script

This is a slightly complicated case for me.
I want to track whether 'script1_sparkSubmit01.sh', which is triggered by Main.sh, has completed; if not, wait for it to complete; once it has completed, proceed with the remaining script(s) in Main.sh.
Main Script: Main.sh
ksh script1_sparkSubmit01.sh 2>&1 &
pid=$!
echo $pid
while [ 1 ]
do
    [ -n "$pid" ] && sleep 60 || break
done
ksh script2_sparkSubmit02.sh 2>&1 &
Another script: script1_sparkSubmit01.sh
spark-submit --jars $sqldriver_jar_path $spark_jar_path/table-load_2.11-1.0.jar >> ${log_dir}/$log_file_name1 2>&1 &
Currently, pid holds a value which, when I look it up, is not present in the current shell. However, I do see the 'spark-submit' command from script1_sparkSubmit01.sh running in the current shell.
Kindly help.
Taking the PID of the process that 'script1_sparkSubmit01.sh' triggers worked for me:
ksh script1_sparkSubmit01.sh 2>&1 &
while [ 1 ]
do
    pid=$(ps -aux | grep 'table-load_2.11-1.0.jar' | grep -v "grep" | awk '{print $2}')
    [ -n "$pid" ] && sleep 30 || break
done
ksh script2_sparkSubmit02.sh 2>&1 &
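A simpler variant worth considering (a sketch, assuming script1_sparkSubmit01.sh is changed to run spark-submit in the foreground, i.e. its trailing '&' is removed) is to let the shell itself wait for the background child:
ksh script1_sparkSubmit01.sh 2>&1 &
pid=$!
wait "$pid"    # blocks until script1 and its spark-submit have finished
ksh script2_sparkSubmit02.sh 2>&1 &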

Mapping ACPI events for brightness buttons on Lenovo Yoga X1 v2

I installed Ubuntu Gnome 17.04 on my new Lenovo Yoga X1 (version 2) and the brightness buttons don't work out of the box. I've gone through the steps (below) I thought were necessary to map these keys to xrandr calls, but nothing happens, even though I can see the key-mapping event being caught and logged successfully. If I manually run the logged command, the brightness changes appropriately. What am I missing in the ACPI route?
First see what ACPI events the brightness buttons are sending
$ acpi_listen
video/brightnessdown BRTDN 00000087 00000000
video/brightnessup BRTUP 00000086 00000000
Then create the event definitions
$ cat yoga-brightness-up
event=video/brightnessup BRTUP 00000086
action=/etc/acpi/yoga-brightness.sh up
$ cat yoga-brightness-down
event=video/brightnessdown BRTDN 00000087
action=/etc/acpi/yoga-brightness.sh down
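For reference (assuming the default acpid layout on Ubuntu), these event files need to live in /etc/acpi/events/ and the action script must be executable:
$ sudo cp yoga-brightness-up yoga-brightness-down /etc/acpi/events/
$ sudo chmod +x /etc/acpi/yoga-brightness.sh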
Define the action script
$ cat /etc/acpi/yoga-brightness.sh
#!/bin/sh
# Where the backlight brightness is stored
BR_DIR="/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1/intel_backlight/"
test -d "$BR_DIR" || exit 0

MIN=0
MAX=$(cat "$BR_DIR/max_brightness")
VAL=$(cat "$BR_DIR/brightness")

if [ "$1" = down ]; then
    VAL=$((VAL-71))
else
    VAL=$((VAL+71))
fi

if [ "$VAL" -lt $MIN ]; then
    VAL=$MIN
elif [ "$VAL" -gt $MAX ]; then
    VAL=$MAX
fi

PERCENT=`echo "$VAL / $MAX" | bc -l`

#export XAUTHORITY=/home/ivo/.Xauthority # CHANGE "ivo" TO YOUR USER
#export DISPLAY=:0.0
export XAUTHORITY=/home/jorvis/.Xauthority
export DISPLAY=:0

echo "xrandr --output eDP-1 --brightness $PERCENT" > /tmp/yoga-brightness.log
xrandr --output eDP-1 --brightness $PERCENT
echo $VAL > "$BR_DIR/brightness"
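Before wiring it up to acpid, the script can also be exercised directly (assuming it has been made executable) to confirm it behaves as expected:
$ sudo /etc/acpi/yoga-brightness.sh down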
Restart acpid
$ sudo /etc/init.d/acpid reload
Success should write to the brightness log
$ rm /tmp/yoga-brightness.log
[ hit brightness down button three times ]
$ sudo cat /tmp/yoga-brightness.log
xrandr --output eDP-1 --brightness .76603773584905660377
Log is written correctly, as is the brightness value:
$ cat /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1/intel_backlight/brightness
759
Nothing happens on the actual display, though. It DOES work if I manually run the command that was logged as having run:
$ xrandr --output eDP-1 --brightness .76603773584905660377
And after reading through this post I noticed the ENV part again and double-checked it. The problem was the setting of $XAUTHORITY in the script. Pointing it at my ~/.Xauthority was fine in principle, but that file didn't exist by default, so I needed to do this:
$ ln -s /run/user/1000/gdm/Xauthority ~/.Xauthority
After that the brightness buttons worked.
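An alternative to the symlink (assuming GDM and a user with uid 1000, as in the path above) would be to point the script straight at the runtime authority file instead:
export XAUTHORITY=/run/user/1000/gdm/Xauthority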

Unix troubleshooting, missing /etc/init.d file

I am working through this tutorial on daemonizing PHP scripts. When I run the following init script:
. /etc/init.d/functions

#startup values
log=/var/log/Daemon.log

#verify that the executable exists
test -x /home/godlikemouse/Daemon.php || exit 0

RETVAL=0
prog="Daemon"
proc=/var/lock/subsys/Daemon
bin=/home/godlikemouse/Daemon.php

start() {
    # Check if Daemon is already running
    if [ ! -f $proc ]; then
        echo -n $"Starting $prog: "
        daemon $bin --log=$log
        RETVAL=$?
        [ $RETVAL -eq 0 ] && touch $proc
        echo
    fi
    return $RETVAL
}
I get the following output:
./Daemon: line 12: /etc/init.d/functions: No such file or directory
Starting Daemon: daemon: unrecognized option `--log=/var/log/Daemon.log'
I looked at my file system and there was no /etc/init.d/functions file. Can anyone tell me what this is and where to obtain it? Also, is the absence of that file what's causing the other error?
Separate your args within their own " " double-quotes:
args="--node $prog"
daemon "nohup ${exe}" "$args &" </dev/null 2>/dev/null
daemon "exe" "args"
