How to get steps status of a Rundeck job to a file

I have a Parent job in Rundeck whose steps are multiple reference jobs (created using "Job Reference - Execute another Job for each Node"), and each of these reference jobs internally has different tasks.
Is there a way I can get the step status (pass or fail) of the Parent job's steps into a file?
The purpose of this is to generate a report, attached to an email, which shows the success or failure of each step.

As you may have noticed, Rundeck only reports the Parent Job execution on Job Reference step workflows (this is by design), so in this case we will have to "play" a bit with Rundeck.
We need the executions of the child jobs themselves. For that, we have to run them individually, capture the state of each of these executions, and write it to a Markdown file which can be used as the email notification template.
To call each job we will create a "fake parent job" which runs each child job individually (via the API, using cURL) and saves the result (using jq to extract the values) to a Markdown file in an inline-script step. This is the script:
# script that runs the individual child jobs and collects their status for a mail notification
# https://stackoverflow.com/questions/64590927/how-to-get-steps-status-of-a-rundeck-job-to-a-file
#####################################################
# rundeck instance values
rdeck_server="your_rundeck_host" #rundeck hostname
rdeck_port="4440" # rundeck tcp port
rdeck_api="36" # rundeck api version
jobid="9c667cb5-7f93-4c01-99b0-3249e75887e4 1c861d46-17a3-45ee-954d-74c6d7b597a0 ae9e455e-ca7a-440e-9ab4-bfd25827b287" # space separated child jobs id's
token="bmqlGuhEcekSyNtWAwuizER4YoJBgZdI" # user token
#####################################################
# "prudential" time between actions (in seconds)
pt="2"
#####################################################
# clean the file
echo "" > myfile.md
#####################################################
# add a header to a file
mydate=$(date +'%m/%d/%Y')
echo "# $mydate report" >> myfile.md
#####################################################
# 1) run the job via API and store the execution ID.
# 2) takes the execution ID and store job status via API
for myid in $jobid # iterate over jobs id's
do
# sleep
sleep $pt
# save the execution id (to get the status later) on a variable named "execid"
execid=$(curl -s -H accept:application/json --location --request POST "http://$rdeck_server:$rdeck_port/api/$rdeck_api/job/$myid/run?authtoken=$token" | jq -r '.id')
# sleep
sleep $pt
# save the status on a variable named "status"
status=$(curl -s -H accept:application/json --location --request GET "http://$rdeck_server:$rdeck_port/api/$rdeck_api/execution/$execid/state?authtoken=$token" | jq -r '.executionState')
# put the status on a file
echo "* job $myid is $status" >> myfile.md
# rundeck friendly output message
echo "job $myid is done, status: $status"
done
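Note that the script relies on a fixed sleep of $pt seconds between launching a child job and reading its status, which assumes each child job finishes within that time. If your child jobs take longer, a small polling loop is safer. Here is a minimal sketch (assuming the same variables and the same /execution/{id}/state endpoint as above) that you could use in place of the second sleep/curl pair:
# hypothetical helper: poll until the execution leaves the RUNNING state
wait_for_execution() {
  local execid="$1"
  local state="RUNNING"
  while [ "$state" = "RUNNING" ]; do
    sleep "$pt"
    state=$(curl -s -H accept:application/json --location --request GET "http://$rdeck_server:$rdeck_port/api/$rdeck_api/execution/$execid/state?authtoken=$token" | jq -r '.executionState')
  done
  echo "$state"
}
# usage inside the loop: status=$(wait_for_execution "$execid")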
And within a Job Definition it would look like this:
<joblist>
<job>
<defaultTab>nodes</defaultTab>
<description></description>
<executionEnabled>true</executionEnabled>
<id>eb4826bc-cc49-46b5-9aff-351afb529197</id>
<loglevel>INFO</loglevel>
<name>FakeParent</name>
<nodeFilterEditable>false</nodeFilterEditable>
<notification>
<onfailure>
<email attachType='file' recipients='it@example.net' subject='info' />
</onfailure>
<onsuccess>
<email attachType='file' recipients='it@example.net' subject='info' />
</onsuccess>
</notification>
<notifyAvgDurationThreshold />
<plugins />
<scheduleEnabled>true</scheduleEnabled>
<sequence keepgoing='false' strategy='node-first'>
<command>
<exec>echo "starting..."</exec>
</command>
<command>
<fileExtension>.sh</fileExtension>
<script><![CDATA[# script that runs the individual child jobs and collects their status for a mail notification
# https://stackoverflow.com/questions/64590927/how-to-get-steps-status-of-a-rundeck-job-to-a-file
#####################################################
# rundeck instance values
rdeck_server="your_rundeck_host" #rundeck hostname
rdeck_port="4440" # rundeck tcp port
rdeck_api="36" # rundeck api version
jobid="9c667cb5-7f93-4c01-99b0-3249e75887e4 1c861d46-17a3-45ee-954d-74c6d7b597a0 ae9e455e-ca7a-440e-9ab4-bfd25827b287" # space separated child jobs id's
token="bmqlGuhEcekSyNtWAwuizER4YoJBgZdI" # user token
#####################################################
# "prudential" time between actions (in seconds)
pt="2"
#####################################################
# clean the file
echo "" > myfile.md
#####################################################
# add a header to a file
mydate=$(date +'%m/%d/%Y')
echo "# $mydate report" >> myfile.md
#####################################################
# 1) run the job via API and store the execution ID.
# 2) takes the execution ID and store job status via API
for myid in $jobid # iterate over jobs id's
do
# sleep
sleep $pt
# save the execution id (to get the status later) on a variable named "execid"
execid=$(curl -s -H accept:application/json --location --request POST "http://$rdeck_server:$rdeck_port/api/$rdeck_api/job/$myid/run?authtoken=$token" | jq -r '.id')
# sleep
sleep $pt
# save the status on a variable named "status"
status=$(curl -s -H accept:application/json --location --request GET "http://$rdeck_server:$rdeck_port/api/$rdeck_api/execution/$execid/state?authtoken=$token" | jq -r '.executionState')
# put the status on a file
echo "* job $myid is $status" >> myfile.md
# rundeck friendly output message
echo "job $myid is done, status: $status"
done]]></script>
<scriptargs />
<scriptinterpreter>/bin/bash</scriptinterpreter>
</command>
<command>
<exec>echo "done!"</exec>
</command>
</sequence>
<uuid>eb4826bc-cc49-46b5-9aff-351afb529197</uuid>
</job>
</joblist>
If you check the "notifications" section you will notice that it will send an email if it executes correctly or if it fails. You need to configure Rundeck so that it can send emails. I leave the steps to configure it:
Stop your Rundeck service.
Add the e-mail configuration to the rundeck-config.properties file (usually under the /etc/rundeck path):
# e-mail notification settings
grails.mail.host=your-smtp-host.com
grails.mail.port=25
grails.mail.username=your-username
grails.mail.password=yourpassword
More info in the Rundeck e-mail settings documentation.
Add the template configuration, specific to your job, also in the rundeck-config.properties file (the template.file property must point to the file generated by the job script, myfile.md in this example):
# project and job specific
rundeck.mail.YOUR-PROJECT-NAME.YOUR-JOB-NAME.template.subject=your-subject-string
rundeck.mail.YOUR-PROJECT-NAME.YOUR-JOB-NAME.template.file=/path/to/your/myfile.md
# if true, prefix log lines with context information
rundeck.mail.YOUR-PROJECT-NAME.YOUR-JOB-NAME.template.log.formatted=true
Start your Rundeck service.
When you run the job, you will see the individual child executions in Rundeck and the report in your inbox.
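For reference, the myfile.md attachment generated by the script looks roughly like this (the date, job IDs and statuses below are just an illustration):
# 11/02/2020 report
* job 9c667cb5-7f93-4c01-99b0-3249e75887e4 is SUCCEEDED
* job 1c861d46-17a3-45ee-954d-74c6d7b597a0 is SUCCEEDED
* job ae9e455e-ca7a-440e-9ab4-bfd25827b287 is FAILED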

Related

How to fetch the result of ngxtop command in a file

I need to fetch the result of the ngxtop command into a file. But when I try to run it from a shell script, it stays in a running state until I manually kill or stop it...
I do this process with the following CLI command group:
touch file.log && echo "please press CTRL+C to stop process and save ngxtop file.log" && ngxtop print request status http_referer >> file.log
When you run this, the output is saved in file.log; then you can show the information with the following command:
cat file.log
This is an example of what you get once the process is complete and some sites were loaded in nginx while the command was executing (screenshot of the ngxtop trace).
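If you would rather not stop it manually, you can bound the run time with the coreutils timeout command, sending the same signal that CTRL+C would (a sketch; the 60-second duration is an arbitrary choice):
touch file.log && timeout -s INT 60 ngxtop print request status http_referer >> file.log
cat file.log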

UC4 Appworx V8 - Send spool file in attachment

I have a Korn shell script that calls SQL*Plus to check and spool data into a .CSV file on UNIX.
This KShell works fine on UNIX: it creates the file and returns 0.
When launching the job from UC4 AppWorx, I want it to attach the spooled UNIX file to the notification sent by the job when it finishes.
I want this to work this way:
1. I launch the job.
2. It checks the data; if data is found, it creates a file with the .CSV extension in the /tmp directory on UNIX.
3. When the job finishes, it sends me an email with the spool file (.CSV) attached.
Is there any way to do this? How can I make it work?
Thanks.
You'll need to create a chain in Appworx (the docs should be able to walk you through it). This chain will have one or more jobs.
First, you don't need a ksh script to call SQL*Plus; you can invoke SQL*Plus directly. Write the script as a .sql file (it can include SQL*Plus directives, SQL, and PL/SQL as needed). Set the job's "program type" to AWSQLP and point it at the .sql script that you have made available to Appworx.
The sqlplus script can use logic to determine if it should create a file. If it should, it can write out files directly (though getting proper .csv files from it can be a pain).
Then, attach a notification to the job, and the notification object should be set to do an email attachment. You'll have to use the "pattern" type, and put in a full filepath to the csv. Substitution variables can be used if you want a new filename each invocation.
Depending on your version, some of these options can be moved around a bit (we just upgraded last year; UC4 no longer owns it). Click on the help menu and go to the documentation entry... it's not the best in the world, but far from the worst.
First of all, thanks for answering.
I usually create a job, let's say SEM_CHECK_THINGS, with one prompt defined in UC4 that runs a query against the database to check whether the table test_table (example) has data; to do this I use select decode(count(*),0,'N','Y') from test_table;
This job also executes a simple KShell script on UNIX:
KShell Content:
echo "Job Name: $1"
echo "Job Control Flag: $2"
jobName=$1
jobFlag=$2
echo "Job ${jobName} started ..." >> $logFile
date >> $logFile
if [[ ${jobFlag} == "Y" ]]; then
echo "Job ${jobName} executed successfully with data found." >> $logFile
echo "Job ${jobName} executed successfully with data found."
exit 1
else
echo "Job ${jobName} finished with no data found." >> $logFile
echo "Job ${jobName} finished with no data found."
exit 0
fi
I usually force "ABORT" by using Exit 1 if data is found to request another Job that will execute an .SQL that will spool the data from the test_table.
whenever sqlerror exit sql.sqlcode
whenever sqlerror exit 1
prompt this is a test
set echo off
set trimspool on
set trimout off
set linesize 1500
set feedback on
set newpage none
SET HEADING OFF
set und off
set pagesize 10000
alter session set nls_date_format = 'dd-MON-yyyy HH24:MI:SS';
spool &1
SELECT 'PREV_RESULTSET;LAST_RESULTSET;NR_COUNT' FROM DUAL
UNION ALL
SELECT PREV_RESULTSET||';'||LAST_RESULTSET||';'||COUNT(1) NR_COUNT
FROM SEM_REPORT_PEDIDOS
GROUP BY PREV_RESULTSET, LAST_RESULTSET;
spool off
exit 1
By using spool &1 and by hard-coding the file testspool.csv in the "Other Output" option of that job's notification, I managed to receive an email with the content I need from that table.
But what I really want is to do all of this in one single job: run the validation and, if data is found, spool and attach the .CSV file to the email notification sent by that job.
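A sketch of that single-job approach (not tested against AppWorx; the connect string, table name, spool script name and file path are all placeholders) would be one KShell script that does the check itself and only spools when rows exist:
#!/bin/ksh
# count the rows first (placeholder credentials and table)
spoolfile=/tmp/testspool.csv
rowcount=$(sqlplus -s user/password@db <<EOF | tr -d ' '
set heading off feedback off pagesize 0
select count(*) from test_table;
exit
EOF
)
if [ "$rowcount" -gt 0 ]; then
    # reuse the spool script shown above, passing the target file as &1
    sqlplus -s user/password@db @spool_report.sql "$spoolfile"
    echo "Data found, spooled to $spoolfile"
else
    echo "No data found, nothing to spool"
fi
exit 0
The job's notification can then keep attaching $spoolfile through the "Other Output"/pattern option as before.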

Asterisk service and error logging

How can I log Asterisk service (service status, e.g. service is running or stopped) and Asterisk errors in a remote database?
With /etc/asterisk/logger.conf you can have errors go to syslog, which you can then parse for errors and load into a DB. To check the service status I recommend a bash script that looks for a running asterisk process and sends that status to MySQL: if the last recorded status (last row ordered by datetime) is different from the current status, insert the current one into the DB. You can use cron to run the check every few minutes.
#!/bin/bash
# check whether an asterisk process is currently running
APP=$(ps aux | grep -v 'grep' | grep 'asterisk')
# 0 = running, 1 = not running (following the shell exit-status convention)
APP_RUNNING=1
if [ -n "$APP" ]; then
APP_RUNNING=0
fi
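The paragraph above also calls for sending the status to MySQL and only inserting it when it changes; a sketch of that part (the database, table and credentials here are placeholders) could be appended to the script:
# last recorded status (placeholder schema: asterisk_mon.service_status)
LAST=$(mysql -N -u monitor -psecret asterisk_mon \
  -e "SELECT status FROM service_status ORDER BY checked_at DESC LIMIT 1")
# insert only when the status changed since the last check
if [ "$LAST" != "$APP_RUNNING" ]; then
  mysql -u monitor -psecret asterisk_mon \
    -e "INSERT INTO service_status (status, checked_at) VALUES ($APP_RUNNING, NOW())"
fi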

shell script email notification upon success or failure

I'm writing a shell script that basically transfers data files through SFTP to a database server and then invokes a pl/sql procedure which loads the data from those files (external tables) into internal database tables.
I've been doing some research on effective exception handling in shell scripts, and it appears the set -e option can be used to terminate a script with an error whenever any command in the script returns a non-zero exit code.
So, my plan is to have a script which contains all of the processing that needs to get done (SFTP, moving/deleting files, calling pl/sql procedure, etc...) and to include set -e at the top of the script. I also plan to redirect output to a log file in this script.
Then, I plan to have another script that calls the main processing script and then emails the log that gets produced with either a "Success" or "Failure" indicator in the subject of the email.
Are there any "gotchas" that any of you can foresee in this approach or does this seem reasonable?
Sounds reasonable.
One thing you could also do to make it one command and fewer scripts:
someSFTPscript &> somelogfile.txt; if [ $? -eq 0 ]; then echo "Success"; else echo "Failure"; fi
someSFTPscript &> somelogfile.txt; redirects the output of the script to a logfile
if [ $? -eq 0 ]; then echo "Success"; else echo "Failure"; fi checks whether it succeeded (returned 0) or failed (any non-zero value). Simply replace the echo with your mail commands.
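For example, swapping the echo calls for mail commands (a sketch using mailx with a placeholder recipient; any mail client configured on the host would do):
someSFTPscript &> somelogfile.txt
if [ $? -eq 0 ]; then
    mailx -s "SFTP load: Success" you@example.com < somelogfile.txt
else
    mailx -s "SFTP load: Failure" you@example.com < somelogfile.txt
fi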
Thanks for all of the feedback on this.
I ended up going with this "wrapper" shell script that calls the main processing script. A cron job is going to launch it daily in my particular case.
Comments are certainly welcome if this can be improved.
#!/bin/sh
################################################################################
# Author : Zack Macomber #
# Date : 02/22/2012 #
# Description: Calls main_process.sh and emails results of the process. #
# Also appends to master log file #
################################################################################
# Flag any errors that occur during processing
set -e
# Set newly created files to "rw" for everyone
umask 111
#############
# VARIABLES #
#############
EMAIL_RECIPIENTS=my_email@some_domain.com
MAIN_DIR=/scripts/
#############
# FUNCTIONS #
#############
send_email()
{
uuencode results.log results.log | \
mailx -s "DATA_LOAD $1 - consult attached log for details" $EMAIL_RECIPIENTS
}
################
# MAIN PROCESS #
################
cd $MAIN_DIR
sh main_process.sh > results.log && send_email SUCCESS || send_email FAILURE
cat results.log >> pub_data_load.log
exit 0

moving from one to another server in shell script

Here is the scenario,
$hostname
server1
I have the below script on server1:
#!/bin/ksh
echo "Enter server name:"
read server
rsh -n ${server} -l mquser "/opt/hd/ca/scripts/envscripts.ksh"
qdisplay
# script ends.
In the above script I am logging into another server, say server2, and executing the script "envscripts.ksh", which sets a few aliases (alias "qdisplay") defined in it.
I am able to log in successfully, but I am unable to use the alias set by the script "envscripts.ksh".
Getting the below error:
-bash: qdisplay: command not found
Can someone please point out what needs to be corrected here?
Thanks,
Vignesh
The other responses and comments are correct. Your rsh command needs to execute both the ksh script and the subsequent command in the same invocation. However, I thought I'd offer an additional suggestion.
It appears that you are writing custom instrumentation for WebSphere MQ. Your approach is to remote shell to the WMQ server and execute a command to display queue attributes (probably depth).
The objective of writing your own instrumentation is admirable, however attempting to do it as remote shell is not an optimal approach. It requires you to maintain a library of scripts on each MQ server and in some cases to maintain these scripts in different languages.
I would suggest that a MUCH better approach is to use the MQSC client available in SupportPac MO72. This allows you to write the scripts once, and then execute them from a central server. Since the MQSC commands are all done via MQ client, the same script handles Windows, UNIX, Linux, iSeries, etc.
For example, you could write a script that remotely queried queue depths and printed a list of all queues with depth > 0. You could then either execute this script directly against a given queue manager or write a script to iterate through a list of queue managers and collect the same report for the entire network. Since the scripts are all running on the one central server, you do not have to worry about getting $PATH right, differences in commands like tr or grep, where ksh or perl are installed, etc., etc.
Ten years ago I wrote the scripts you are working on when my WMQ network was small. When the network got bigger, these platform differences ate me alive and I was unable to keep the automation up and running. When I switched to using WMQ client and had only one set of scripts I was able to keep it maintained with far less time and effort.
The following script assumes that the QMgr name is the same as the host name except in UPPER CASE. You could instead pass QMgr name, hostname, port and channel on the command line to make the script useful where QMgr names do not match the host name.
#!/usr/bin/perl -w
#-------------------------------------------------------------------------------
# mqsc.pl
#
# Wrapper for MO72 SupportPac mqsc executable
# Supply parm file name on command line and host names via STDIN.
# Program attempts to connect to hostname on SYSTEM.AUTO.SVRCONN and port 1414
# redirecting parm file into mqsc.
#
# Intended usage is...
#
# mqsc.pl parmfile.mqsc
# host1
# host2
#
# -- or --
#
# mqsc.pl parmfile.mqsc < nodelist
#
# -- or --
#
# cat nodelist | mqsc.pl parmfile.mqsc
#
#-------------------------------------------------------------------------------
use strict;
$SIG{ALRM} = sub { die "timeout" };
$ENV{PATH} =~ s/:$//;
my $File = shift;
die "No mqsc parm file name supplied!" unless $File;
die "File '$File' does not exist!\n" unless -e $File;
while (<>) {
my @Results;
chomp;
next if /^\s*[#*]/; # Allow comments using # or *
s/^\s+//; # Delete leading whitespace
s/\s+$//; # Delete trailing whitespace
# Do not accept hosts with embedded spaces in the name
die "ERROR: Invalid host name '$_'\n" if /\s/;
# Silently skip blank lines
next unless ($_);
my $QMgrName = uc($_);
#----------------------------------------------------------------------------
# Run the parm file in
eval {
alarm(10);
@Results = `mqsc -E -l -h $_ -p detmsg=1,prompt="",width=512 -c SYSTEM.AUTO.SVRCONN < $File 2>&1 | grep -v "^MQSC Ended"`;
};
if ($@) {
if ($@ =~ /timeout/) {
print "Timed out connecting to $_\n";
} else {
print "Unexpected error connecting to $_: $!\n";
}
}
alarm(0);
if (@Results) {
print join("\t", @Results, "\n");
}
}
exit;
The parmfile.mqsc is any valid MQSC script. One that gathers all the queue depths looks like this:
DISPLAY QL(*) CURDEPTH
I think the real problem is that the r(o)sh cmd only executes the remote envscripts.ksh file and that your script is then trying to execute qdisplay on your local machine.
You need to 'glue' the two commands together so they are both executed remotely.
EDITED per comment from Gilles (He is correct)
rosh -n ${server} -l mquser ". /opt/hd/ca/scripts/envscripts.ksh ; qdisplay"
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, or give it a + (or -) as a useful answer
