Kill job if execution time less than 2 min - control-m

I have a chain with several jobs, and sometimes a certain job that normally takes about 2 hours, finishes in less than 2 minutes.
What I would like to do is to kill this job if it ends in less than 2 minutes so that the chain won't proceed.
Is this possible?
Thanks

Well you don't really want to kill anything, do you? If you do, see BMC's note (including video) on using ctmkilljob.
In this case your next job is dependent on 2 things: the predecessor job ending OK and the duration of the predecessor job. Add another input condition to your next job (in addition to the existing condition) to represent the >2 minutes duration.
On the job that needs to run for more than 2 minutes, add a Notify when the Exectime exceeds 2 mins (or 60 mins or whatever you decide the threshold is) and have it shout to an entry in your shout destination table.
On the Control-M Server, create a new program entry in your shout destination table and create a small script referenced by the shout. The script should use the ctmcontb utility to create a new ODAT condition that your next job is waiting on. If you have a look at the BMC help note for ctmkilljob (and just substitute in ctmcontb), you'll see how to do this.
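A minimal sketch of such a shout-triggered script, in Python for illustration. The condition name and the odate format are assumptions; check your site's conventions and the ctmcontb reference for the exact arguments:

```python
from datetime import date

def build_ctmcontb_add(cond_name, odate):
    """Build the ctmcontb command line that adds a condition for an order date."""
    return ["ctmcontb", "-ADD", cond_name, odate]

# JOB-RAN-LONG-ENOUGH is a made-up condition name; the successor job would
# list it as its extra input condition alongside the existing one.
cmd = build_ctmcontb_add("JOB-RAN-LONG-ENOUGH", date.today().strftime("%y%m%d"))
# import subprocess; subprocess.run(cmd, check=True)  # on a host with Control-M utilities
```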

Is there a way to measure or monitor a scheduler job's resources used?

Recently I got the task of optimizing a rather large PL/SQL script which, prior to my changes, took about 1 hour +/- 10 mins.
So I reallocated some methods and generally replaced big views with simpler subqueries or WITH statements. I noticed that if I ran the scheduled job by right-clicking it and choosing "run job", I would in most cases see the run duration improve. But if I enabled the job and let it run on its schedule, it takes the original hour no matter what changes I make.
Now my question here is: is there any way to monitor the RAM or CPU usage of the session/job, and is there a difference in general in how many resources are allocated to background processes? My suspicion is that the "manual" run somehow gets a priority that the scheduled run doesn't get or doesn't take.
Either way, for troubleshooting purposes you can't take a few hours of a work day just to wait for results.

How to process calls one by one through the cvp script?

I have a problem that demands processing calls one by one. Once the first call enters our decision loop, any other call should be redirected to a different loop until the first call is processed. The issue is that when 2 calls enter the script one second apart, they are processed by Call Studio at the same time.
Is it somehow possible to dynamically separate those calls?
We already tried to separate the calls with a set-local-variable element, but it does not get updated quickly enough to distinguish different calls.
We need calls to enter the script one at a time, not together.
You can manage this by adding a wait node in ICM with 5 seconds: when the call enters the VXML script in ICM, we add a 5-second delay.
Before the wait node, add a % allocator node with two paths: one with the 5-second wait and one without. So when two calls are fired at the same time, one will go through the wait and one will go immediately to the decision, and you can flip-flop between the two.

How to run Airflow DAG for specific number of times?

How do I run an Airflow DAG a specified number of times?
I tried using TriggerDagRunOperator; this operator works for me.
In the callable function we can check states and decide whether to continue or not.
However, the current count and states need to be maintained.
Using the above approach I am able to repeat a DAG run.
I'd like an expert opinion: is there any other, more solid way to run an Airflow DAG X number of times?
Thanks.
I'm afraid that Airflow is ENTIRELY about time based scheduling.
You can set a schedule to None and then use the API to trigger runs, but you'd be doing that externally, and thus maintaining the counts and states that determine when and why to trigger externally.
When you say that your DAG may have 5 tasks which you want to run 10 times and a run takes 2 hours and you cannot schedule it based on time, this is confusing. We have no idea what the significance of 2 hours is to you, or why it must be 10 runs, nor why you cannot schedule it to run those 5 tasks once a day. With a simple daily schedule it would run once a day at approximately the same time, and it won't matter that it takes a little longer than 2 hours on any given day. Right?
You could set the start_date to 11 days ago (a fixed date though, don't set it dynamically), and the end_date to today (also fixed) and then add a daily schedule_interval and a max_active_runs of 1 and you'll get exactly 10 runs and it'll run them back to back without overlapping while changing the execution_date accordingly, then stop. Or you could just use airflow backfill with a None scheduled DAG and a range of execution datetimes.
Do you mean that you want it to run every 2 hours continuously, but sometimes it will be running longer and you don't want it to overlap runs? Well, you definitely can schedule it to run every 2 hours (0 0/2 * * *) and set the max_active_runs to 1, so that if the prior run hasn't finished the next run will wait then kick off when the prior one has completed. See the last bullet in https://airflow.apache.org/faq.html#why-isn-t-my-task-getting-scheduled.
If you want your DAG to run exactly every 2 hours on the dot [give or take some scheduler lag, yes that's a thing] and to leave the prior run going, that's mostly the default behavior, but you could add depends_on_past to some of the important tasks that themselves shouldn't be run concurrently (like creating, inserting to, or dropping a temp table), or use a pool with a single slot.
There isn't any feature to kill the prior run if your next schedule is ready to start. It might be possible to skip the current run if the prior one hasn't completed yet, but I forget how that's done exactly.
That's basically most of your options there. Also you could create manual dag_runs for an unscheduled DAG, creating 10 at a time whenever you feel like it (using the UI or CLI instead of the API, though the API might be easier).
Do any of these suggestions address your concerns? Because it's not clear why you want a fixed number of runs, how frequently, or with what schedule and conditions, it's difficult to provide specific recommendations.
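The manual dag_run route above could be scripted over the Airflow 1.x CLI. A sketch, with the dag id and run ids as placeholders:

```python
def trigger_commands(dag_id, n):
    """Build one `airflow trigger_dag` CLI invocation per desired run,
    each with a distinct run_id (-r)."""
    return [["airflow", "trigger_dag", dag_id, "-r", "manual_%02d" % i]
            for i in range(1, n + 1)]

for cmd in trigger_commands("my_dag", 10):
    # import subprocess; subprocess.run(cmd, check=True)  # where the airflow CLI exists
    pass
```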
This functionality isn't natively supported by Airflow,
but by exploiting the meta-db we can cook up this functionality ourselves.
We can write a custom operator / PythonOperator
that, before running the actual computation, checks whether 'n' runs for the task (TaskInstance table) already exist in the meta-db (refer to task_command.py for help),
and if they do, just skips the task (raise AirflowSkipException, reference).
This excellent article can be used for inspiration: Use apache airflow to run task exactly once
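A rough sketch of that skip logic. The decision is kept as a pure function so it can stand alone; the meta-db query is indicated in comments only, Airflow 1.x-style, and all names are illustrative:

```python
MAX_RUNS = 10  # assumed target number of runs

def should_skip(successful_run_count, max_runs=MAX_RUNS):
    """Return True once the task has already succeeded max_runs times."""
    return successful_run_count >= max_runs

# Inside a PythonOperator callable you might do something like (unverified sketch):
#   count = (session.query(TaskInstance)
#            .filter(TaskInstance.dag_id == dag_id,
#                    TaskInstance.task_id == task_id,
#                    TaskInstance.state == State.SUCCESS)
#            .count())
#   if should_skip(count):
#       raise AirflowSkipException("already ran %d times" % count)
```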
Note
The downside of this approach is that it assumes historical runs of the task (TaskInstances) will be preserved forever (and correctly);
in practice, though, I've often found task_instances to be missing (we have catchup set to False).
Furthermore, on large Airflow deployments one might need to set up routine cleanup of the meta-db, which would make this approach impossible.

Autosys Job Queue

I'm trying to set up1 an Autosys job configuration that will have a "funnel" job-queue behavior, or, as I call it, a 'waterdrops' pattern: each job executing in sequence after a given time interval, with a local job failure not cascading into a sequence failure.
1 (ask for it to be set up, actually, as I do not control the Autosys machine)
Constraints
I have an (arbitrary) N jobs (all executing on success of job A)
For this discussion, lets say three (B1, B2, B3)
Real production numbers might go upward of 100 jobs.
All these jobs won't be created at the same time, so adding a new job should be as painless as possible.
None of those should execute simultaneously.
Not actually a direct problem for our machine,
but a side effect on a remote client machine: the jobs include file transfers, which are trigger-listened for on the client machine, and it doesn't handle simultaneous ones well.
Adapting the client machine's behavior is, unfortunately, not possible.
Failure of a job is meaningless to the other jobs.
There should be a regular delay in between each job
This is a soft requirement in that, our jobs being batch scripts, we can always append or prepend a sleep command.
I'd rather, however, have a more elegant solution, especially if the delay is centralised: a parameter that could be set to greater values, should the need arise.
State of my research
Legend
A(s) : Success status of job
A(d) : Done status of job
Solution 1 : Unfailing sequence
This is the current "we should pick this solution" solution.
A (s) --(delay D)--> B(d) --(delay D)--> B2(d) --(delay D)--> B3 ...
Pros :
Less bookkeeping than solution 2
Cons :
Bookkeeping of the (current) tailing job
Sequence doesn't survive a job being put ON HOLD (ON ICE is fine).
Solution 2 : Stairway parallelism
A(s) ==(delay D)==> B1
A(s) ==(delay D x2)==> B2
A(s) ==(delay D x3)==> B3
...
Pros :
Jobs can be put ON HOLD without incident.
Cons :
Bookkeeping to know "who runs when" (and what's the next delay to implement)
N jobs launched at the same time
An underlying race condition is created
++ Risk of overlapping job executions, especially if small delays accumulate
Solution 3 : The Miracle Box ?
I have read a bit about job boxes, but the specific details elude me.
-----------------
A(s) ====> | B1, B2, B3 |
-----------------
Can we limit the number of concurrent executions of jobs in a box (i.e. a box-local max_load, if I understand that parameter)?
Pros :
Adding jobs would be painless
Little to no bookeeping (box name, to add new jobs - and it's constant)
Jobs can be put ON HOLD without incident (unless I'm mistaken)
Cons :
I'm half-convinced it can't be done (but that's why I'm asking you :) )
... any other problem I have failed to foresee
My questions to SO
Is Solution 3 a possibility, and if yes, what are the specific commands and parameters for implementing it ?
Am I correct in favoring Solution 1 over Solution 2 otherwise2 ?
An alternative solution fitting in the constraints is of course more than welcome!
Thanks in advance,
Best regards
PS: By the way, is all of this a giant race condition manager for the remote machine failing behavior ?
Yes, it is.
2 I'm aware this skirts a bit toward the "subjective" side of the question-rejection rules, but I'm asking it with regard to the solution(s)' correctness against my (arguably) objective constraints.
I would suggest you do the following:
Put all the jobs (B1, B2, B3) in a box job B.
Create another job (say M1) which runs on success of A. This job calls a shell/Perl script (say forcejobs.sh).
The shell script gets the list of all the jobs in B and starts a loop with a sleep interval of the delay period. Inside the loop it force-starts the jobs one by one after the delay period.
So outline of script would be
get all the jobs in B
for each job start for loop
force start the job
sleep for delay interval
At the end of the loop, when all jobs have been successfully started, you can use an infinite loop to keep checking the status of the jobs. Once all jobs are SU/FA or whatever, you can end the script, send the result to you/stdout, and finish job M1.
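That outline could be sketched in Python rather than shell. The `sendevent -E FORCE_STARTJOB` call is the standard Autosys way to force-start a job; the job names, delay value, and the injectable runner are illustrative:

```python
import subprocess
import time

DELAY_SECONDS = 60  # centralised delay between jobs; raise it if the need arises

def force_start_cmd(job_name):
    """Autosys event that force-starts a single job."""
    return ["sendevent", "-E", "FORCE_STARTJOB", "-J", job_name]

def run_funnel(job_names, delay=DELAY_SECONDS, runner=subprocess.run):
    """Force-start each job in turn, sleeping between them.
    `runner` is injectable so the loop can be exercised without Autosys."""
    for job in job_names:
        runner(force_start_cmd(job), check=True)
        time.sleep(delay)

# run_funnel(["B1", "B2", "B3"])  # uncomment on a host with Autosys utilities
```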

Get a Unix script to run at exactly the same time every time

I am writing a script to capture disk usage on a system (yes, I know there is software that can do this). For database reporting purposes, I want the interval between data points to be as equal as possible. For example, if I am polling disk usage every 10 minutes, I want every data point to be YYYY-MM-DD HH:[0-5]0:00. If I am polling every 5 minutes, it would be YYYY-MM-DD HH:[0-5][05]:00.
If I have a ksh script (or even a Perl script) to capture the disk usage, how can I have the script start up, wait for the next "poll time" before taking a snapshot, and then sleep for the correct number of seconds until the next "poll time"? If I am polling every 5 minutes and it is 11:42:00, then I want to sleep for 180 seconds so it takes a snapshot at 11:45:00, and then sleep for 5 minutes so it takes another snapshot at 11:50:00.
I wrote a way that works if my poll time is every 10 minutes, but if I change the poll time to a different number, it doesn't work. I would like it to be flexible on the poll time.
I prefer to do this in shell script, but if it is way too much code, Perl would be fine too.
Any ideas on how to accomplish this?
Thanks in advance!
Brian
EDIT: Wow - I left out a pretty important part - that cron is disabled, so I will not be able to use cron for this task. I am very sorry to all the people who gave that as an answer, because yes, that is the perfect way to do what I wanted, if I could use cron.
I will be using our scheduler to kick off my script right before midnight every day, and I want the script to handle running at the exact "poll times", sleeping in between, and exiting at midnight.
Again, I'm very sorry for not clarifying on crontabs.
cron will do the job.
http://en.wikipedia.org/wiki/Cron
Just configure it to run your ksh script at the times you need and you are done
You might want to consider using cron. This is exactly what it was made for.
If I were doing this, I would use the system scheduler (cron or something else) to schedule my program to run every 180 seconds.
EDIT: I might have misunderstood your request. Are you looking more for something along the following lines? (I suspect there is a bug or two here):
ANOTHER EDIT: Remove dependency on Time::Local (but now I suspect more bugs ;-)
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw( strftime );
my $mins = 5;
while ( 1 ) {
my ($this_sec, $this_min) = (localtime)[0 .. 1];
my $next_min = $mins * ( 1 + int( $this_min / $mins ) );
my $to_sleep = 60 * int( $next_min - $this_min - 1 )
+ 60 - $this_sec;
warn strftime('%Y:%m:%d %H:%M:%S - ', localtime),
"Sleeping '$to_sleep' seconds\n";
sleep $to_sleep;
}
__END__
Have it sleep for a very short time, <=1 sec, and check each time whether poll time has arrived. Incremental processor use will be negligible.
Edit: cron is fine if you know what interval you will use and don't intend to change frequently. But if you change intervals often, consider a continuously running script w/ short sleep time.
Depending on how fine grained your time resolution needs to be, there may be a need to write your script daemon style. Start it once, while(1) and do the logic inside the program (you can check every second until it's time to run again).
Perl's Time::HiRes allows very fine granularity if you need it.
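The sleep-until-boundary arithmetic from the question can also be sketched in Python (the interval value is illustrative):

```python
from datetime import datetime

def seconds_until_next_poll(now, poll_minutes=5):
    """Seconds from `now` to the next HH:MM:00 that is a multiple of poll_minutes."""
    elapsed = (now.minute % poll_minutes) * 60 + now.second
    return poll_minutes * 60 - elapsed

# At 11:42:00 with 5-minute polling this yields 180, matching the example above.
print(seconds_until_next_poll(datetime(2020, 1, 1, 11, 42, 0)))
```

A daemon-style loop would call this once per iteration, sleep for the returned number of seconds, then take its snapshot.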