Autosys job not getting triggered every 15 minutes

I have defined the Autosys box as below:
description: "Trigger Hive refresh jobs"
date_conditions: 1
days_of_week: all
start_mins: 0,15,30,45
run_window: "02:00-23:59"
and the job contained in the box has only the following command attribute:
command: /app/datarefresher/bin/refreshhivetables.sh
However, the box runs only once at 02:00 and goes back to the SUCCESS state after that one run.
It is not being triggered every 15 minutes as specified.
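For reference, a box plus child-job pair like this is typically defined with two JIL entries along the following lines; the job names, machine and owner below are placeholders added for illustration, while the scheduling attributes and command come from the question:

insert_job: hive_refresh_box    job_type: BOX
description: "Trigger Hive refresh jobs"
date_conditions: 1
days_of_week: all
start_mins: 0,15,30,45
run_window: "02:00-23:59"

insert_job: hive_refresh_cmd    job_type: CMD
box_name: hive_refresh_box
command: /app/datarefresher/bin/refreshhivetables.sh
machine: your_machine
owner: your_user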

Related

Verify time with 3 seconds buffer

Get Current Audio Upload Time
    ${time}=    Get Current Date
    ${converted-time}=    Convert Date    ${time}    result_format=%H:%M:%S
    Log To Console    time is ${converted-time}
    Set Global Variable    ${converted-time}
This script verifies that, after searching for the string, the result is returned, and then checks that the displayed time is correct.
However, there is an elapsed gap between the time the robot captures and the time the application captures, so I cannot compare them directly; I need to allow a 3-second buffer.
robot captured time: 16:38:04
app captured time: 16:38:56
Verify Audio Time
    [Arguments]    ${RandomNumber}
    Wait For Elements State    //mark[text()='Dual-Channel-Audio-${RandomNumber}']//following::td[text()='${converted-time}']
(screenshot: UI which shows the time)
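One way to allow the buffer is to compute the difference between the two HH:MM:SS values and assert it is within the limit. A minimal sketch, assuming the DateTime library is imported in the Settings section; the keyword and variable names below are placeholders:

Verify Time Within Buffer
    [Arguments]    ${app_time}    ${robot_time}    ${buffer}=3
    # Difference between the two HH:MM:SS timestamps, in seconds
    ${diff}=    Subtract Date From Date    ${app_time}    ${robot_time}
    ...    date1_format=%H:%M:%S    date2_format=%H:%M:%S
    ${abs_diff}=    Evaluate    abs(${diff})
    Should Be True    ${abs_diff} <= ${buffer}    msg=Times differ by more than ${buffer} seconds

Adjust ${buffer} to whatever tolerance you decide is acceptable.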

Apache Airflow: rerun for tasks with date parameters

I have an hourly shell script job that takes a date and an hour as input params. The date and hour are used to construct the input path to fetch data for the logic contained in the job DAG. When a job fails and I need to rerun it (by clicking "Clear" on the failed task node to clean up the status and trigger a new run), how can I make sure the date and hour used for the rerun are the same as for the failed run, since the rerun could happen in a different hour than the original run?
You have 3 options:
Hover over the failed task you are going to clear; the tooltip it displays includes a value with the key Run:, which is the task's execution date and time.
Click on the failed task you are going to clear; the heading of the popup that contains the Clear option reads [taskname] on [execution date with time].
Open the task log; the first line after the attempt count includes a string of the form Executing <Task([TaskName]): task_id> on [execution date with time].
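Also worth noting: when you clear a failed task, Airflow reruns it with the original execution date, so if the date and hour are passed to the shell script as templated values derived from the execution date (rather than read from the wall clock), the rerun automatically uses the same values as the failed run. A minimal sketch under that assumption (Airflow 2.x; the DAG id and script path are placeholders):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="hourly_refresh",          # placeholder name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    run_hourly_job = BashOperator(
        task_id="run_hourly_job",
        # {{ ds }} renders the execution date (YYYY-MM-DD) and the second macro its hour;
        # both are re-rendered from the original execution date when the task is cleared.
        bash_command="/path/to/hourly_job.sh {{ ds }} {{ execution_date.strftime('%H') }}",
    )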

Control-M : setting job status ok after specified time

I have a Control-M file watcher job that waits for a specific file. If the file arrives within the specified time, the job ends OK, but I also want to set the job status to OK when the file does not arrive within the specified time, instead of keeping on waiting for the file. Is this possible, and how can I implement it?
Thank you.
There are two ways of setting up a file-watcher.
File Watcher Job
File Watcher utility in Control-M (ctmfw)
There are two possible outcomes when a FW job completes:
Passing the out-condition to the next job, so that the successor jobs start executing.
Simply completing the job, so that it gets cleared off in the New Day process.
Now, if you want the first outcome, this is one option:
Assume that your FW job [ABC] runs between 0600 - 1800, and the out-condition it passes to the successor job is ABC-OK. The successor job [DEF] runs on getting the condition ABC-OK. Keep a dummy job [ABC_DUMMY] that runs at 1805 and sets the same condition ABC-OK. So, once ABC_DUMMY completes, DEF will get the condition it is looking for and will execute.
If the file arrives early, the FW job ABC will run, set the condition ABC-OK, and DEF will start running.
In both cases, ensure that once DEF is completed, ABC-OK is negated.
If you are looking for the second outcome, then I believe that as long as the job is not failing, the FW job will stay in 'To Run' status, and it will get cleared off in the New Day process.
Happy to help further. Just post your doubts here.
JN
Edit your FileWatcher job
In the EXECUTION tab:
Submit between "enter your beginning time" and "enter your ending time"
In the STEPS tab:
ON (Statement=* CODE=COMPSTAT=0)
DO OK
DO CONDITION, NAME=FILE-FOUND
ON (Statement=* CODE=COMPSTAT!0)
DO OK
DO CONDITION, NAME=FILE-NOT-FOUND
Use the "wait until" parameter in the file watcher. For example, if you want the job to watch for the file until 06:00 AM, specify 06:00 in the "wait until" parameter.
At exactly 06:00 AM the job will fail if it doesn't find the file. Then you can use the STEPS tab to set the job to OK with either of the following options.
Option 1:
ON (Statement=* CODE=COMPSTAT!0)
DO OK
or
Option 2:
ON (Statement=* CODE=NOTOK)
DO OK

Autosys job scheduling with date_condition and condition

I'm trying to set up the following scheduling:
JOB_A starts on its own calendar (possibly every day?) and does something. It's already configured.
JOB_B should start right after JOB_A, but only on Fridays. I need to configure this job.
So the questions are:
How does Autosys behave when I define date_conditions together with start_times and a condition?
Is there any way to define date_conditions without start_times or start_mins?
How should I define JOB_B? JOB_A ends at 07:52-07:53 every day. I cannot use a calendar; creating a new one is a pain because of bureaucracy and processes, and I don't have time for that.
I tried this, but with no results:
date_condition: 1
start_times: "7:00, 8:00"
condition: s(JOB_A)
You can schedule JOB_B to run only on Fridays, have it start at the same time as JOB_A, and make it dependent on JOB_A. This way, Autosys will start it at e.g. 07:00, check its conditions, and see that JOB_A is still running, causing JOB_B to wait for JOB_A to finish.
Assuming JOB_A is already set up and running at 07:00, JOB_B would need to be configured like this:
date_conditions: 1
days_of_week: fr
start_times: "07:00"
condition: s(JOB_A)
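Putting the answer together, a complete JIL definition for JOB_B would look roughly like the sketch below; the command, machine and owner values are placeholders, not taken from the question:

/* placeholder values for command, machine and owner */
insert_job: JOB_B    job_type: CMD
command: /path/to/job_b.sh
machine: your_machine
owner: your_user
date_conditions: 1
days_of_week: fr
start_times: "07:00"
condition: s(JOB_A)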

LSF parent job waiting for child

I am using the LSF bsub command to submit jobs in a Unix environment. However, the LSF job waits for the child jobs to finish.
Here is an example (details about sample scripts below):
Without LSF: If I submit parent.ksh in Unix without using LSF, i.e. at the command prompt I type ./parent.ksh, parent.ksh gets submitted and completes in a second without waiting for the child jobs script1.ksh and script2.ksh, since these jobs are submitted in background mode. This is typical behaviour in Unix.
With LSF: However, if I submit parent.ksh using LSF, i.e. bsub parent.ksh, parent.ksh waits for 180 seconds (that's the longest time taken by child number 2, i.e. script2.ksh) after submission. Please note I have excluded the time the job spends in pending status.
This is something I was not expecting; how can I ensure this does not happen?
I checked that script1.ksh and script2.ksh were invoked in both cases.
parent.ksh:
#!/bin/ksh
/abc/def/script1.ksh &
/abc/def/script2.ksh &

script1.ksh:
#!/bin/ksh
sleep 80

script2.ksh:
#!/bin/ksh
sleep 180
I guess the reason is that LSF tracks the process tree of your job, so the LSF job only completes once these two background processes exit. You can try to create a new process group for the background processes under a new session.
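A sketch of that suggestion in the parent script, assuming setsid is available on the execution host (illustrative only, and it may not be sufficient if the cluster enforces cgroup-based process tracking): start each child in its own session and process group, with output detached, so the parent job's completion is not tied to the children.

#!/bin/ksh
# parent.ksh - detach the children into their own sessions/process groups
setsid /abc/def/script1.ksh > /dev/null 2>&1 &
setsid /abc/def/script2.ksh > /dev/null 2>&1 &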
