Get the predecessor Job name in a script - AutoSys

I have a mail job (Job-Mail) that runs with this condition:
s(Job A) | s(Job B) | s(Job C)
If any of the three jobs succeeds, the mail job runs, and it runs as I expected.
Now I need to get, inside the script, the name of the successful predecessor job that triggered Job-Mail, because the script has logic to generate file A in case Job A succeeded and file B in case Job B succeeded.

Inside the shell script, use
job_depends -c -J Job-Mail
and look at the atomic conditions. You will get all predecessor jobs and their statuses. With the help of sed/awk/grep, see which jobs are in SUCCESS and return the name. If more than one job succeeded, either raise an exception or work out which finished first and return that one.
Alternatively, you can do an if/else in the shell; the condition should be something like:
jobWhichKickedJobMail=""
if [ "$(autostatus -J "Job A")" = "SUCCESS" ]; then
    jobWhichKickedJobMail="Job A"
elif [ "$(autostatus -J "Job B")" = "SUCCESS" ]; then
    jobWhichKickedJobMail="Job B"
else
    jobWhichKickedJobMail="Job C"
fi
and use this variable. (Note that a shell variable name cannot contain a hyphen, so Job-Mail itself cannot be used as a variable name.)
Note: if another job succeeds while your script is running, you might not get the correct result.
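As a sketch of the if/else approach above (the job names are from the question; the job flag is written -J here to match autorep, so check the spelling against your AutoSys version), a small loop avoids repeating the test for every predecessor:

```shell
# Return the name of the first listed job whose autostatus reports SUCCESS.
# Job names are passed as arguments; the first match wins.
first_success() {
    for job in "$@"; do
        status=$(autostatus -J "$job" 2>/dev/null)
        if [ "$status" = "SUCCESS" ]; then
            printf '%s\n' "$job"
            return 0
        fi
    done
    return 1   # no predecessor reported SUCCESS
}
```

Used as trigger_job=$(first_success "Job A" "Job B" "Job C"). The race described in the note still applies: another predecessor may succeed while this runs.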

Related

Sequentially Run Programs in Unix

I have several programs that need to be run in a certain order (p1, then p2, then p3, then p4).
Normally I would simply make a simple script or type p1 && p2 && p3 && p4.
However, these programs do not exit correctly. I only know a program has finished successfully when "Success" is printed. Currently, I SIGINT once I see "Success" or "Fail", and then manually run the next program if it printed "Success".
Is there a simpler way to sequentially execute p1, p2, p3, p4 with less human intervention?
Edit: Currently using ksh, but I wouldn't mind knowing the other ones too.
In bash, you can pipe the command to grep looking for 'Success', then rely on grep's result code. The trick to that is wrapping the whole expression in curly braces to get an inline sub-shell. Like so:
$ cat foo.sh
#!/bin/bash
[ 0 -eq $(( $RANDOM %2 )) ] && echo 'Success' || echo 'Failure'
exit 0
$ { ./foo.sh | grep -q 'Success'; } && ls || df
The part inside the curly braces ({}) returns 0 if "Success" is in the output, otherwise 1, as if the foo.sh command had returned that status itself.
I've not used ksh in a long while, but I suspect there is a similar construction.
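To run the whole chain with that trick, a small wrapper can loop over the programs (a sketch; the program names and the exact "Success" string are from the question):

```shell
# Run each program in order; continue only while each one prints "Success".
# grep -q exits as soon as it sees the match, which causes a program that
# keeps writing to receive SIGPIPE, so non-terminating programs die too.
run_chain() {
    for prog in "$@"; do
        { "$prog" | grep -q 'Success'; } || return 1
    done
    return 0
}
```

Called as run_chain ./p1 ./p2 ./p3 ./p4.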
I'm also new to Linux programming, but I found something that might be helpful for you. Have you tried the 'wait' command?
from this answer on stackexchange:
sleep 1 &
PID1=$!
sleep 2 &
PID2=$!
wait $PID1
echo PID1 has ended.
wait
echo All background processes have exited.
I haven't tested it myself, but it looks like what you described in your question.
All the answers so far would work fine if your programs actually terminated.
Here are a couple of ideas; look through the documentation for more details.
1st option: modify your programs so that they terminate after printing the result message, returning a success code.
2nd option: if that is not possible, use forks.
Write a main program that forks each time you want to execute one of the programs.
In the child process, use dup2 to redirect the process's output to a file of your choice.
In the main process, keep checking the content of that file until something appears, and compare it with either "Success" or "Failure".
Then empty the file, fork again, and execute the next program; repeat the same operation for each one.
Bear in mind that exec replaces the code of the process it runs in with the code of the file passed as a parameter, so make the dup2 call first.
When your program prints Success or Fail but keeps running, you should kill it as soon as the string appears.
Make a function like
function startp {
    prog=$1
    ./"${prog}" | while read -r line; do
        case "${line}" in
        "Success")
            echo OK
            mykill "${prog}"
            exit 0
            ;;
        "Fail")
            echo NOK
            mykill "${prog}"
            exit 1
            ;;
        *)
            echo "${line}"
            ;;
        esac
    done
    exit 2
}
Note that in ksh the last stage of a pipeline runs in the current shell, so the exit calls inside the loop end the script directly; in bash the while loop runs in a subshell, and only the subshell exits.
You need to add a mykill function that looks for the pX program and kills it (xargs is handy for this).
Call the function like
startp p1 && startp p2 && startp p3
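A minimal mykill sketch (assuming the program's name is distinctive enough to match on the command line) could use pkill:

```shell
# Kill any process whose command line matches the given program name.
# pkill -f matches against the full command line; the pattern is an
# assumption -- tighten it if the name is not unique on your system.
mykill() {
    pkill -f "$1" 2>/dev/null || true
}
```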

How to reset the number of retries on an Autosys job

I have a list of jobs that have a number of retries set on them (in the jil definition). When I get the job status, I see the number of retries (in this case 12). I am trying to find a way to reset that:
->autorep -J XXXXX%
Job Name Last Start Last End ST Run/Ntry Pri/Xit
XXXXXX 03/19/2014 14:27:38 03/19/2014 14:56:07 SU 146461/12 0
The number of retries can be set at the job level:
look for n_retrys: in the output of the command autorep -J XXXXX% -q
or it can be set at the server level:
grep -i MaxRestartTrys config.$AUTOSERV
MaxRestartTrys=10
The third option is that the job was triggered manually multiple times.
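If you just want to read the Ntry value out of an autorep status line programmatically, here is a sketch (assuming the default column layout shown in the question, where Run/Ntry is the second-to-last field):

```shell
# Pull the retry count (the part after '/' in the Run/Ntry column) out of
# an autorep status line like the one shown in the question.
parse_ntry() {
    echo "$1" | awk '{ split($(NF-1), a, "/"); print a[2] }'
}
```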

How to create options in KSH script

I am creating a KSH interface script that will call other scripts based on the user's input. The other scripts are Encrypt and Decrypt, and each of them receives parameters. I have seen scripts invoked with "-" plus the first letter of an action name. How do I do this for my script? For example, if my script is called menu and the user types menu -e UserID Filename.txt, the script would run and the encrypt script would be executed along with the associated parameters. So far my script takes the encrypt/decrypt option as a plain word parameter. Here is my script:
#!/bin/ksh
# I want this parameter to become an option
action=$1
if [ -z "$action" ]
then
    print "Parameters not satisfied"
    exit 1
fi
# check for action commands
if [ "$action" = "encrypt" ]
then
    dest=$2
    fileName=$3
    ./Escript "$dest" "$fileName"
elif [ "$action" = "decrypt" ]
then
    outputF=$2
    encryptedF=$3
    ./Dscript "$outputF" "$encryptedF"
else
    print "Parameters not satisfied. Please enter encrypt or decrypt plus the required arguments"
fi
Thanks for the help!
There isn't any kind of automatic way to turn a parameter into another script to run; what you're doing is pretty much how you would do it. Check the parameter, and based on the contents, run the two different scripts.
You can structure it somewhat more nicely using case, and you can pass the later parameters directly through to the other script using "$@", with a shift to strip off the first parameter. Something like:
[ $# -ge 1 ] || { echo "Not enough parameters"; exit 1; }
command=$1
shift
case $command in
-e|--encrypt) ./escript "$@" ;;
-d|--decrypt) ./dscript "$@" ;;
*) echo "Unknown option $command"; exit 1 ;;
esac
This also demonstrates how you can implement both short and long options, by providing two different strings to match against in a single case statement (-e and --encrypt), in case that's what you were asking about. You can also use globs, like -e*) to allow any option starting with -e such as -e, -encrypt, -elephant, though this may not be what you're looking for.
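If you specifically want getopt-style flags, the shell's built-in getopts is another route. A sketch (the echo stands in for calling Escript/Dscript, and the function name is mine):

```shell
# Parse -e / -d flags with getopts, then hand the remaining arguments on.
parse_action() {
    OPTIND=1
    action=""
    while getopts "ed" opt; do
        case $opt in
        e) action=encrypt ;;
        d) action=decrypt ;;
        *) return 1 ;;
        esac
    done
    shift $((OPTIND - 1))
    echo "$action:$*"   # a real script would instead run ./Escript "$@" or ./Dscript "$@"
}
```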

Make tcsh wait until specific background job ends + alert me

What "nonblocking" command makes tcsh wait until a specific background
task completes and then "alerts" me by running a command of my choosing?
I want "wait %3 && xmessage job completed &" to wait until background
job [3] is finished and then xmessage me "job completed", but want
this command itself to return immediately, not "block" the terminal.
Obviously, my syntax above doesn't work. What does?
I've written a Perl program that can do this, but surely tcsh can do
it natively?
You may be able to do something like this (untested; note that tcsh reports the last exit status in $status rather than $?):
while (! $status)
    kill -s 0 $!
    sleep 1
end
Or take a look at the notify command. I'm not sure whether it does what you want.
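If shelling out from tcsh is acceptable, one workaround (my own sketch, not from the answers above) is to poll the pid with kill -0 from a Bourne-shell snippet run in the background:

```shell
# Wait (by polling) for a given pid to exit, then run the supplied command.
# The whole thing runs in the background, so the caller returns at once.
# From tcsh you could wrap the equivalent one-liner in: sh -c '...' &
notify_when_done() {
    pid=$1
    shift
    ( while kill -0 "$pid" 2>/dev/null; do sleep 1; done; "$@" ) &
}
```

For example, notify_when_done 12345 xmessage 'job completed' (the pid and xmessage are stand-ins from the question).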

How do you use newgrp in a script then stay in that group when the script exits

I am running a script on a Solaris box, specifically SunOS 5.7. I am not root. I am trying to execute a script similar to the following:
newgrp thegroup << FOO
source .login_stuff
echo "hello world"
FOO
The script runs. The problem is that it returns to the calling process, which puts me back in the old group with .login_stuff not sourced. I understand this behavior. What I am looking for is a way to stay in the subshell. Now I know I could put an xterm& in the script (see below) and that would do it, but having a new xterm is undesirable.
Passing your current pid as a parameter:
newgrp thegroup << FOO
source .login_stuff
xterm &
echo $1
kill -9 $1
FOO
I do not have sg available.
Also, newgrp is necessary.
The following works nicely; put the following bit at the top of the (Bourne or Bash) script:
### first become another group
group=admin
if [ $(id -gn) != $group ]; then
    exec sg $group "$0 $*"
fi
### now continue with rest of the script
This works fine on Linuxen. One caveat: arguments containing spaces are broken apart. I suggest you use the
env arg1='value 1' arg2='value 2' script.sh construct to pass them in (I couldn't get it to work with "$@" for some reason).
The newgrp command can only meaningfully be used from an interactive shell, AFAICT. In fact, I gave up on it about ... well, let's say long enough ago that the replacement I wrote is now eligible to vote in both the UK and the USA.
Note that newgrp is a special command 'built into' the shell. Strictly, it is a command that is external to the shell, but the shell has built-in knowledge about how to handle it. The shell actually exec's the program, so you get a new shell immediately afterwards. It is also a setuid root program. On Solaris, at least, newgrp also seems to ignore the SHELL environment variable.
I have a variety of programs that work around the issue that newgrp was intended to address. Remember, the command pre-dates the ability of users to belong to multiple groups at once (see the Version 7 Unix Manuals). Since newgrp does not provide a mechanism to execute commands after it executes, unlike su or sudo, I wrote a program newgid which, like newgrp, is a setuid root program and allows you to switch from one group to another. It is fairly simple - just main() plus a set of standardized error reporting functions used. Contact me (first dot last at gmail dot com) for the source. I also have a much more dangerous command called 'asroot' that allows me (but only me - under the default compilation) to tweak user and group lists much more thoroughly.
asroot: Configured for use by jleffler only
Usage: asroot [-hnpxzV] [<uid controls>] [<gid controls>] [-m umask] [--] command [arguments]
<uid controls> = [-u usr|-U uid] [-s euser|-S euid][-i user]
<gid controls> = [-C] [-g grp|-G gid] [-a grp][-A gid] [-r egrp|-R egid]
Use -h for more help
Option summary:
-a group Add auxilliary group (by name)
-A gid Add auxilliary group (by number)
-C Cancel all auxilliary groups
-g group Run with specified real GID (by name)
-G gid Run with specified real GID (by number)
-h Print this message and exit
-i Initialize UID and GIDs as if for user (by name or number)
-m umask Set umask to given value
-n Do not run program
-p Print privileges to be set
-r euser Run with specified effective UID (by name)
-R euid Run with specified effective UID (by number)
-s egroup Run with specified effective GID (by name)
-S egid Run with specified effective GID (by number)
-u user Run with specified real UID (by name)
-U uid Run with specified real UID (by number)
-V Print version and exit
-x Trace commands that are executed
-z Do not verify the UID/GID numbers
Mnemonic for effective UID/GID:
s is second letter of user;
r is second letter of group
(This program grew: were I redoing it from scratch, I would accept user ID or user name without requiring different option letters; ditto for group ID or group name.)
It can be tricky to get permission to install setuid root programs. There are some workarounds available now because of the multi-group facilities. One technique that may work is to set the setgid bit on the directories where you want the files created. This means that regardless of who creates the file, the file will belong to the group that owns the directory. This often achieves the effect you need - though I know of few people who consistently use this.
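The setgid-directory technique mentioned above can be sketched as follows (the path is only an example):

```shell
# Files created in a directory with the setgid bit set inherit the
# directory's group, regardless of who creates them.
demo=/tmp/setgid_demo_dir          # example path
mkdir -p "$demo"
chmod g+s "$demo"                  # set the setgid bit on the directory
ls -ld "$demo"                     # the group execute slot shows 's'
```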
newgrp adm << ANYNAME
# You can do more lines than just this.
echo This is running as group \$(id -gn)
ANYNAME
..will output:
This is running as group adm
Be careful -- make sure you escape the '$' with a backslash. The interactions are a little strange, because inside the here-document the current shell expands $(...) even between single quotes, before the text reaches the shell running as the other group. So, if your primary group is 'users', and the group you're trying to use is 'adm', then:
newgrp adm << END
# You can do more lines than just this.
echo 'This is running as group $(id -gn)'
END
..will output:
This is running as group users
..because 'id -gn' was expanded by the current shell, and its output was then sent to the shell running as adm.
Anyways, I know this post is ancient, but hope this is useful to someone.
This example was expanded from plinjzaad's answer; it handles a command line which contains quoted parameters that contain spaces.
#!/bin/bash
group=wg-sierra-admin
if [ $(id -gn) != $group ]
then
# Construct an array which quotes all the command-line parameters.
arr=("${@/#/\"}")
arr=("${arr[@]/%/\"}")
exec sg $group "$0 ${arr[*]}"
fi
### now continue with rest of the script
# This is a simple test to show that it works.
echo "group: $(id -gn)"
# Show all command line parameters.
for i in $(seq 1 $#)
do
eval echo "$i:\${$i}"
done
I used this to demonstrate that it works.
% ./sg.test 'a b' 'c d e' f 'g h' 'i j k' 'l m' 'n o' p q r s t 'u v' 'w x y z'
group: wg-sierra-admin
1:a b
2:c d e
3:f
4:g h
5:i j k
6:l m
7:n o
8:p
9:q
10:r
11:s
12:t
13:u v
14:w x y z
Maybe
exec $SHELL
would do the trick?
You could use sh& (or whatever shell you want to use) instead of xterm&
Or you could also look into using an alias (if your shell supports this) so that you would stay in the context of the current shell.
In a script file, e.g. tst.ksh:
#! /bin/ksh
/bin/ksh -c "newgrp thegroup"
At the command line:
>> groups fred
oldgroup
>> tst.ksh
>> groups fred
thegroup
sudo su - [user-name] -c exit;
Should do the trick :)
