Autosys Dependent jobs of ON ICE running immediately after box start - autosys

Let's say I have a box with 4 jobs. There is an issue with job2, so I want to skip it and go to job3 after job1 completes, until I fix and test the issue with job2.
I thought putting job2 ON ICE would work. But when I do, as soon as box_a goes to RU, job1 and job3 start at the same time. I want job3 to run only after job1 is SU. What has to be done?
For now, I put job2 on hold every day, and the next day, once job1 is SU, I mark job2 as SU and put it on hold again. But this means I have to monitor it every day.
box_a
job1
job2
job3
job4

I would suggest removing Job2 from the box, assuming its testing is not dependent on the other jobs in the box, and updating the condition of Job3.
Save a copy of the Job2 and Job3 JILs.
Update the Job2 JIL:
update_job: Job2
box_name:
This removes the job from the box and keeps it as an independent job.
Next, update the Job3 JIL:
update_job: Job3
condition: success(Job1)
Job3 will then run after the completion of Job1.
After the Job2 issue is resolved, restore the Job2 and Job3 JILs from the backup to revert the temporary changes.
Hope this helps; if you have any queries, do ask.

ON ICE is as good as being invisible, so you'll need to update the condition of Job3 to depend on the success of Job1 while Job2 is on ICE.
Option 2: if you don't want to edit Job3, take a backup of Job2 and replace its command with sleep 1. This acts as a placeholder, so you are not changing your entire architecture while you debug, and you can put Job2 back once the issue is resolved.
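As a sketch of Option 2 (a minimal JIL update; attribute values here are illustrative, not taken from the original jobs):

```
update_job: Job2
command: sleep 1
```

Because Job2 now exits almost immediately with status 0, it goes to SU and Job3's existing dependency on it is satisfied without any change to Job3.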

Since you want job2 to go to completion, change the command field to echo; the script will not be executed, the job will just echo its argument and go to SUCCESS. Change it back once the issue is fixed.
insert_job: test
job_type: cmd
command: echo "/home/Autosys/db.sh"
machine: prod
owner: dev
days_of_week: all

Related

Restart Autosys job when terminated via another Autosys job

I am setting up an Autosys job to restart another job when the main job is terminated.
Insert_job: Job_Restarter
Job_type: cmd
Condition: t(main_job)
Box_name: my_test_box
Permission: gx,get
Command: send -E FORCE_STARTJOB -J main_job
When the main job is terminated, the restart job runs but fails with an exit code of 1. I know this is a generic error code, but does anyone have an idea of what I am doing wrong?
Edit:
Did some digging. "Sendevent" is not recognized as a command. Is there another way to restart the job through another job?

Autosys job to auto-update itself as SUCCESS if no response from CMD

In my Autosys box, scheduled to run every week, I have 2 jobs:
Job1 - calls a shell script to generate a file
Job2 - calls a shell script to transfer the generated file
For Job2, even though the file is transferred successfully, the shell script returns no exit code. This leaves Job2 and the box in RUNNING state and prevents the box from running at the next weekly schedule.
The ideal way is to amend the transfer shell script (in Job2) to return a proper exit code. But I do not have access to the shell script to make any change.
In JIL, is it possible to achieve either one of the following:
immediately after Job2 CMD execution, mark Job2 as success, OR
after X minutes of Job2 CMD execution, mark Job2 as success
Adding the term_run_time attribute to the JIL of Job2 will terminate the job if it has been running for more than the number of minutes specified.
For example, term_run_time: 60 sets a 60 minute termination timer.
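As a minimal JIL sketch (the 60-minute value is illustrative):

```
update_job: Job2
term_run_time: 60
```

Note that a job killed this way ends in TERMINATED state, not SUCCESS, so any downstream dependency may need a condition like done(Job2) rather than success(Job2).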

Schedule a job in Unix

I am pretty new to Unix environment.
I am trying to schedule two tasks in Unix server. The second task is dependent on the result of the first task. So, I want to run the first task. If there is no error then I want the second task to run automatically. But if the first task fails, I want to reschedule the first task again after 30 minutes.
I have no idea where to start from.
You don't need cron for this. A simple shell script is all you need:
#!/bin/sh
while :; do        # loop until the break statement is hit
    if task1; then # if task1 is successful
        task2      # then run task2
        break      # and we're done
    else           # otherwise task1 failed
        sleep 1800 # so wait 30 minutes
    fi             # and repeat
done
Note that task1 must indicate success with an exit status of 0, and failure with nonzero.
As Wumpus sharply observes, this can be simplified to
#!/bin/sh
until task1; do
    sleep 1800
done
task2

Unix cron job for shell scripts

I would like to have a cron job which executes 3 shell scripts consecutively, i.e., execution of the next shell script depends on the completion of the previous one.
How can I do it?
Here is an example showing a cron which executes 3 scripts at 9am Mon-Fri.
00 09 * * 1-5 script1.sh && script2.sh && script3.sh >> /var/tmp/cron.log 2>&1
If any one of the scripts fails, the next script in the sequence will not be executed.
Write one script which calls these three scripts and put it into cron.
To elaborate on yi_H's answer: You can combine them in one shell script in different ways, depending on what you want.
job1.sh
job2.sh
job3.sh
will run all three consecutively, regardless of the result.
job1.sh && job2.sh && job3.sh
will run all three, but it will stop if one of them fails (that is, if job1 returns an error, job2 and job3 will not run).
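The wrapper-script variant can be sketched like this (the job functions below are stand-ins for the real script1.sh, script2.sh, and script3.sh, which are assumptions here):

```shell
#!/bin/sh
# run_all.sh - hypothetical wrapper: run three jobs in order,
# stopping at the first failure (same effect as the && chain).
job1() { echo "job1 ok"; }                # stand-in for script1.sh
job2() { echo "job2 ok"; }                # stand-in for script2.sh
job3() { echo "job3 failed"; return 1; }  # stand-in for script3.sh

if job1 && job2 && job3; then
    echo "all jobs succeeded"
else
    echo "stopped at first failure, status $?"
fi
```

You can then put a single line like `00 09 * * 1-5 /path/to/run_all.sh` in the crontab instead of chaining the scripts there.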

List and kill at jobs on UNIX

I have created a job with the at command on Solaris 10.
It's running now and I want to kill it, but I don't know how to find the job number or how to kill that job or process.
You should be able to find your command with a ps variant like:
ps -ef
ps -fubob # if your job's user ID is bob.
Then, once located, it should be a simple matter to use kill to kill the process (permissions permitting).
If you're talking about getting rid of jobs in the at queue (that aren't running yet), you can use atq to list them and atrm to get rid of them.
To delete a job which has not yet run, you need the atrm command. You can use atq command to get its number in the at list.
To kill a job which has already started to run, you'll need to grep for it using:
ps -eaf | grep <command name>
and then use kill to stop it.
A quicker way to do this on most systems is:
pkill <command name>
at -l lists the jobs, with output like this:
age2%> at -l
11 2014-10-21 10:11 a hoppent
10 2014-10-19 13:28 a hoppent
atrm 10 removes job 10.
Or so my sysadmin told me.
First
ps -ef
to list all processes. Note the process number of the one you want to kill. Then
kill 1234
were you replace 1234 with the process number that you want.
Alternatively, if you are absolutely certain that there is only one process with a particular name, or you want to kill multiple processes which share the same name
killall processname
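A minimal sketch of the kill-by-PID flow described above, using a background sleep as a stand-in for the already-running at job:

```shell
#!/bin/sh
# Start a long-running process in the background, capture its PID,
# and terminate it with kill.
sleep 300 &
pid=$!
kill "$pid"               # send SIGTERM to that PID
wait "$pid" 2>/dev/null   # reap it; exit status reflects the signal
echo "process $pid terminated"
```

In practice you would get the PID from ps -ef (or pgrep) rather than $!, since the at job was not started by your shell.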
