Configure a Control-M job using in-conditions

Control-M job description
Job A runs Monday to Friday, Job B runs on Saturday, and Job C runs Monday to Saturday. I need Job D to run when:
1) Job A and Job C run, or
2) Job B and Job C run.
Is this possible in Control-M, and how should I modify the in-conditions?

I would set it up so that Job A and Job B post the same out-condition. Job D then waits on that shared condition (posted by either Job A or Job B) and on the condition from Job C.
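A minimal sketch of that setup (the condition names here are illustrative, not from the original post):
Job A out-condition: AB-ENDED-OK
Job B out-condition: AB-ENDED-OK
Job C out-condition: C-ENDED-OK
Job D in-conditions: AB-ENDED-OK AND C-ENDED-OK
Because A runs Monday to Friday and B runs on Saturday, exactly one of them posts AB-ENDED-OK on any given day, which gives the required (A and C) or (B and C) behaviour.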

The best solution is to use an OR when defining the in-conditions on Job D.
Solution:
Job A, Job B and Job C all post normal out-conditions to Job D.
Job D is configured to require the following in-conditions:
(Job C AND (Job A OR Job B))

Related

Find which instances got terminated via Athena query

I am running a query which gives me the list of instances launched in a particular month for my security group.
Let's say: [A, B, C, D]
My goal is to find which of those instances also got terminated, and when.
The issue I am facing is that I don't want to execute the query repeatedly, once per instance, to check whether each instance got terminated. Like:
SELECT eventname, useridentity.username, eventtime, requestparameters
FROM your_athena_tablename
WHERE requestparameters LIKE '%instanceid%'
  AND eventtime > '2017-02-15T00:00:00Z'
ORDER BY eventtime ASC;
How can I pass multiple values here?
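One way to do this, sketched under the assumption that the table holds CloudTrail events (the table name and instance IDs below are placeholders): Athena's Presto engine supports regexp_like, so the per-instance LIKE filters can be collapsed into a single alternation pattern, and terminations can be isolated by event name:
SELECT eventname, useridentity.username, eventtime, requestparameters
FROM your_athena_tablename
WHERE regexp_like(requestparameters, 'i-aaaa1111|i-bbbb2222|i-cccc3333|i-dddd4444')  -- all instance IDs in one pass
  AND eventname = 'TerminateInstances'  -- CloudTrail event recorded when an EC2 instance is terminated
  AND eventtime > '2017-02-15T00:00:00Z'
ORDER BY eventtime ASC;
This returns one row per termination event, so a single query covers all the instances at once.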

Creating an Autosys job

We have an Autosys box job, let's say A, which contains 3 child jobs (B, C and D).
We have created a separate Autosys job E, which will send mail if B or C fails.
We need a condition so that once E has executed successfully, the Autosys box job A is restarted.
Note: the box job is scheduled to run daily at a particular time.
If you could provide sample code, that would be good.
Based on what I understood, you want to run job A if E executes successfully.
For that you need to add the line below to your JIL file (or set it via the GUI). If the JIL for job E had been posted, I would have edited and shared it directly.
condition: success(E)
or
condition: s(E)
This job will execute immediately after job E completes successfully.
Condition for job "E":
condition: f(B) OR f(C)
condition: failure(B) | failure(C)
Condition for job "A":
condition: s(E)
Note: job names are case-sensitive. You can write f or failure for failure status, s or success for success status, OR or | for an OR condition, and AND or & for an AND condition. You can use d or done when you want your job to start even if the predecessor job completed in a SUCCESS, FAILURE or TERMINATED state.
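A minimal JIL sketch tying this together (the machine name and mail script path are assumptions for illustration):
/* job E: sends mail when B or C fails */
insert_job: E
job_type: c
machine: myhost
command: /opt/scripts/send_failure_mail.sh
condition: f(B) | f(C)

/* existing box job A: add the restart dependency on E */
update_job: A
condition: s(E)
Note that A also keeps its daily start time; when a job has both a start time and a condition, Autosys requires both to be satisfied, so test that the combination gives the restart behaviour you expect.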

Savepoint in an Oracle PL/SQL LOOP to stop deadlock or record-lock contention

I have a simple procedure, but I'm unsure how best to implement a strategy to avoid deadlocks or record-lock contention. I'm updating a number of tables in a cursor LOOP while calling a procedure that also updates tables.
There have been issues with deadlocks and record locks, so I've been tasked with stopping the program from crashing when it hits a deadlock or record lock; instead it should sleep for 5 minutes and carry on processing any new records.
The ideal solution is that it skips past the deadlock or record lock, carries on processing the rest of the records that aren't locked, sleeps for 5 minutes, and then picks that record up again when the cursor is next opened. The program runs through the day until it is killed.
My procedure is below. I have put in what I think is best, but should the exception handler be inside the inner loop rather than the outer loop, with a savepoint in the inner loop as well?
PROCEDURE process_dist_data_fix
IS
   lx_record_locked     EXCEPTION;
   lx_deadlock_detected EXCEPTION;
   PRAGMA EXCEPTION_INIT(lx_record_locked, -54);
   PRAGMA EXCEPTION_INIT(lx_deadlock_detected, -60);

   CURSOR c_files
   IS
      SELECT surr_id
        FROM batch_pre_dist_data_fix
       WHERE status = 'REQ'
       ORDER BY surr_id;

   TYPE file_type IS TABLE OF batch_pre_dist_data_fix.surr_id%TYPE;
   l_file_tab file_type;
BEGIN
   LOOP
      BEGIN
         OPEN c_files;
         FETCH c_files BULK COLLECT INTO l_file_tab;
         CLOSE c_files;

         IF l_file_tab.COUNT > 0
         THEN
            FOR i IN 1..l_file_tab.COUNT
            LOOP
               -- update main table with start date
               UPDATE batch_pre_dist_data_fix
                  SET start_dtm = SYSDATE
                WHERE surr_id = l_file_tab(i);
               -- update tables
               update_soundmouse_tables(l_file_tab(i));
            END LOOP;
         END IF;

         Dbms_Lock.Sleep(5*60); -- sleep for 5 mins before looking for more records to process

      -- if there is a deadlock or a locked record then roll back, wait 5 minutes, then loop again
      EXCEPTION
         WHEN lx_deadlock_detected OR lx_record_locked THEN
            ROLLBACK;
            Dbms_Lock.Sleep(5*60); -- sleep for 5 minutes before processing records again
      END;
   END LOOP;
END process_dist_data_fix;
First, understand that a deadlock is a completely different issue from a locked record. For a locked record, most of the time there is nothing special you need to do: nine times out of ten you want the program to simply wait on the lock. If you are waiting too long, you may have to redefine your transaction boundaries. Your code pattern here is quite typical: you read a list of "to do" rows and then you process them. In such cases it is rare that you need special handling for record locking. If a batch_pre_dist_data_fix row is locked for some reason, you should simply wait for the lock to be released and continue. If the reverse is true, i.e. this job takes so long that you are holding locks on batch_pre_dist_data_fix and blocking another process, then you may want to redefine the transaction boundary, for example by committing after each loop iteration. But beware of how an open cursor behaves across a commit.
A deadlock is a completely different animal. There is always "another session" involved: session 1 is waiting on a resource held by session 2 while session 2 is waiting on a resource held by session 1. That is an infinite wait that the database can detect, so it kills one of the sessions. One simple way to prevent deadlocks is to have every transaction that touches multiple tables lock them in the same order. Say you have tables A, B, C and D: if every transaction locks them in that order (A,B,D or A,C,D or C,D are all fine, but never D then C), you will not get a deadlock. To troubleshoot a deadlock, look at the trace file Oracle dumps when it raises the error, find the other session, see which tables that session had locked, and work out the order in which they should be locked.
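On the specific question about the inner loop: the skip-and-continue behaviour described does need the savepoint and exception handler around each record, not around the whole batch. A sketch of the inner FOR loop rewritten that way, reusing the names declared in the original procedure (and noting that ORA-00054 is only raised when statements use NOWAIT; a plain UPDATE simply waits):
FOR i IN 1..l_file_tab.COUNT
LOOP
   BEGIN
      SAVEPOINT before_record;  -- undo only this record's work on failure
      UPDATE batch_pre_dist_data_fix
         SET start_dtm = SYSDATE
       WHERE surr_id = l_file_tab(i);
      update_soundmouse_tables(l_file_tab(i));
   EXCEPTION
      WHEN lx_deadlock_detected OR lx_record_locked THEN
         ROLLBACK TO before_record;  -- skip this record; it stays 'REQ' and is retried next pass
   END;
END LOOP;
With this shape, a locked or deadlocked record is abandoned cleanly while the rest of the batch carries on, and the outer loop's 5-minute sleep means the skipped row is picked up the next time the cursor is opened.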

How to determine the userid of the person who has opened the table in UNIX

Is it possible to find out the userid of the person who has the table open while an update on that table is in progress? I generally get an error like "A lock is held by process 28036".
It would be really helpful if anyone could guide me on this.
Try ps -fp 28036 | tail -1 | awk '{print $1}'. That should give you the username of the owner of the process. It also does not require superuser privileges.
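If your ps supports output format options (most modern versions do), a tighter variant that prints just the username with no header line is ps -o user= -p 28036.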

DQL: enable (return_top 10) performance impact

I had this query that originally caused massive timeouts:
select d.r_object_id
from isc_fiche d, dmr_con c
where any c.parent_id = d.r_object_id
group by d.r_object_id
having count(*) > 2
yet when I add enable (return_top 10) to the end, the performance issues seem a thing of the past. Apparently (according to colleagues) that hint can have a performance-improving effect.
Could someone clarify this for me?
Full query with 'way' better performance:
select d.r_object_id
from isc_fiche d, dmr_con c
where any c.parent_id = d.r_object_id
group by d.r_object_id
having count(*) > 2
enable(return_top 10)
enable (return_top 10) modifies the generated SQL statement: it adds a limiting clause, such as ROWNUM <= 10 in Oracle. The exact clause depends on the underlying RDBMS, so I guess it's not ROWNUM <= 10 if you use EMC Documentum with Microsoft SQL Server.
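As a hedged illustration of what that could look like on Oracle (the real generated SQL will differ, since Documentum maps object types to underlying views and tables; the names below just reuse the DQL names):
SELECT * FROM (
   SELECT d.r_object_id
     FROM isc_fiche d, dmr_con c
    WHERE c.parent_id = d.r_object_id
    GROUP BY d.r_object_id
   HAVING COUNT(*) > 2
) WHERE ROWNUM <= 10;
The ROWNUM wrapper lets the database stop once 10 rows of the grouped result have been fetched instead of returning every qualifying group, which is consistent with the performance difference you observed.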
You can run DQL on the web interface of EMC Documentum (if I remember correctly it's called Webtop). There is a checkbox on the DQL page that shows the generated SQL; check there to see the difference between the two DQL queries.
