Creating an AutoSys job

We have an AutoSys box job, let's say A, which contains 3 child jobs (B, C and D).
We have created a separate AutoSys job E, which will send mail if B or C fails.
We need a condition so that once E has executed successfully, box job A is restarted.
Note: the box job is scheduled to run daily at a particular time.

A sample of the code would be appreciated.
Based on what I understood, you want to run job A if E executed successfully.
For that you need to add the line below to job A's definition (in the JIL file or using the GUI); if you had shared your JIL for job E, I would have changed and posted it directly:
condition: success(E)
or
condition: s(E)
This job will then execute immediately after job E is successful.

Condition for job "E":
condition: f(B) OR FAILURE(C)
condition: FAILURE(B) | f(C)
Condition for job "A":
condition: s(E)
Note: Job names are case sensitive. You can use f or failure for failure status, s or success for success status, OR or | for an OR condition, and AND or & for an AND condition. You can use d or done when you want your job to start even if the predecessor job completed in a SUCCESS, FAILURE or TERMINATED state.
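Putting this together as a minimal JIL sketch (the machine name and mail command are placeholders, not taken from the question):

/* hypothetical JIL - machine and command are placeholders */
insert_job: E   job_type: CMD
machine: some_machine
command: /path/to/send_failure_mail.sh
condition: f(B) | f(C)

update_job: A
condition: s(E)

Since A is a box scheduled daily, verify in your environment how its start conditions and E's status reset between runs.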

Related

Find which instances got terminated via Athena query

I am running a query which gives me the list of instances launched in a particular month for my security group.
Let's say: [A, B, C, D]
My goal is to find which of those instances also got terminated, and when.
The issue I am facing is that I don't want to execute my query repeatedly, once per instance, to check whether each instance got terminated.
Like:
SELECT eventname, useridentity.username, eventtime, requestparameters
FROM your_athena_tablename
WHERE requestparameters LIKE '%instanceid%'
  AND eventtime > '2017-02-15T00:00:00Z'
ORDER BY eventtime ASC;
How can I pass multiple values here?
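Since Athena is Presto-based, one way to pass several values at once (a sketch, untested; the instance IDs are placeholders) is to OR multiple LIKE patterns, or collapse them into a single regexp_like call:

SELECT eventname, useridentity.username, eventtime, requestparameters
FROM your_athena_tablename
WHERE regexp_like(requestparameters, 'i-0aaa11112222|i-0bbb33334444') -- placeholder IDs
  AND eventtime > '2017-02-15T00:00:00Z'
ORDER BY eventtime ASC;

Rows whose eventname is TerminateInstances then show when each of those instances was terminated.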

Teradata TPT job is successful when there are records in Error Table 2

I have a Teradata TPT job defined with ERROR LIMIT = 1 set in the UPDATE operator. When there are records in Error Table 1 the job fails, but when there are records only in Error Table 2 the job succeeds. How do I make it fail when there are records in Error Table 2 as well?
You could use the DDL operator to explicitly check after the UPDATE completes.
STEP FailIfError2Exists (
    APPLY ('ABORT WHERE (SELECT COUNT(*) FROM DBC.TablesV
            WHERE DatabaseName=''workingDatabaseName''
            AND TableName=''errorTable2Name'') = 1;')
    TO OPERATOR ($DDL() ATTRIBUTES(...));
);
The ABORT will return success if the condition is false, or a 3514 error if it is true. Note that you will also need to wrap your UPDATE operator in an explicit STEP if it isn't already; a skeleton is sketched below.
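Such a skeleton might look like this (the operator templates and placeholders are illustrative, not the OP's actual wiring):

STEP ApplyUpdates (
    APPLY <your existing DML>
    TO OPERATOR ($UPDATE() ATTRIBUTES(...))
    SELECT * FROM OPERATOR ($FILE_READER() ATTRIBUTES(...));
);
STEP FailIfError2Exists (
    /* the DDL-operator check shown above */
);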
Note that a checkpoint file will be left on the client, and by default TPT will try to restart at the failing step. If you want the next run of the job to start from the beginning, remove that checkpoint (e.g. with twbrmcp).
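For example (the job name is a placeholder):

twbrmcp my_tpt_job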

Can't delete from a table inside a trigger

I'm building a DB about a university for one of my course classes, and I'm trying to create a trigger that doesn't allow a professor to be under 21 years old.
I have a Person class and then a Professor subclass.
What I want to happen is: you create a Person object, then a Professor object using that Person object's id, but if the Person is under 21, delete the Professor object and then delete the Person object.
Everything works fine up until the "delete the Person object" part, which doesn't happen, and I'm not sure why. Any help?
This is the SQLite code I have:
CREATE TRIGGER ProfessorMinimumAge
AFTER INSERT ON Professor
FOR EACH ROW
WHEN strftime('%J', 'now') - strftime('%J', (SELECT dateOfBirth FROM Person WHERE personId = NEW.personId)) < 7665 -- 21 years in days
BEGIN
    SELECT RAISE(ROLLBACK, 'Professor cant be under 21');
    DELETE FROM Person WHERE personId = NEW.personId;
END;
One common issue is that there may not be a current transaction scope to roll back to, which results in this error:
Error: cannot rollback - no transaction is active
If that occurs, the trigger execution is aborted and the DELETE is never executed.
If ROLLBACK does succeed, then this creates a paradox: rolling back to before the trigger was executed means that, in a strictly ACID environment, it would not be valid to continue executing the rest of the trigger, because the INSERT never actually occurred. To avoid this ambiguity, any call to RAISE() other than RAISE(IGNORE) aborts processing of the trigger.
From the CREATE TRIGGER documentation, on RAISE():
When one of RAISE(ROLLBACK,...), RAISE(ABORT,...) or RAISE(FAIL,...) is called during trigger-program execution, the specified ON CONFLICT processing is performed and the current query terminates. An error code of SQLITE_CONSTRAINT is returned to the application, along with the specified error message.
NOTE: This behaviour differs from some other RDBMSs; for instance, see this explanation for MS SQL Server, where execution specifically continues inside the trigger.
As the OP does not provide calling code that demonstrates the scenario, it is worth mentioning how SQLite handles RAISE(ROLLBACK, ...) in that case:
If no transaction is active (other than the implied transaction that is created on every command) then the ROLLBACK resolution algorithm works the same as the ABORT algorithm.
Generally, if you wanted to create a Person and then a Professor as a single operation, you would create a stored procedure (in an engine that supports them; SQLite does not) that validates the inputs first, preventing the original INSERT at the start.
To maintain referential integrity, even if an SP is used, you could still add a check constraint on the Professor record, or raise an ABORT from a BEFORE trigger to prevent the INSERT from occurring in the first place:
CREATE TRIGGER ProfessorMinimumAgeBefore
BEFORE INSERT ON Professor
FOR EACH ROW
WHEN strftime('%J', 'now') - strftime('%J', (SELECT dateOfBirth FROM Person WHERE personId = NEW.personId)) < 7665 -- 21 years in days
BEGIN
    SELECT RAISE(ABORT, 'Professor can''t be under 21');
END;
This way it is up to the calling process to decide how to handle the error. The ABORT can be caught in the calling logic and effectively rolls back the enclosing statement, but the point is that the calling logic should handle negative side effects. As a general rule, triggers that cascade logic should only perform positive side effects; that is, they should only affect data when the inserted row succeeds. In this case we are rolling back the insert, so it becomes hard to justify why the Person row would be deleted.
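With the BEFORE trigger in place, a hypothetical calling sequence (the column lists are assumed; only personId and dateOfBirth appear in the question):

INSERT INTO Person (personId, dateOfBirth) VALUES (1, '2010-06-01');
INSERT INTO Professor (personId) VALUES (1);
-- Error: Professor can't be under 21
-- The Professor row is never inserted, and the Person row is left
-- intact for the caller to delete or reuse as it sees fit.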

Savepoint in an Oracle PL/SQL LOOP to stop deadlocks or record-lock contention

I have a simple procedure, but I'm unsure how best to implement a strategy to stop deadlocks or record locks. I'm updating a number of tables in a cursor LOOP while calling a procedure that also updates tables.
There have been issues with deadlocks and record locks, so I've been tasked with stopping the program from crashing when it hits a deadlock or record lock: it should sleep for 5 minutes and carry on processing any new records.
The ideal solution is that it skips past the deadlock or record lock, carries on processing the rest of the records that aren't locked, sleeps for 5 minutes, then picks that record up again the next time the cursor is opened. The program continues to run through the day until it's killed.
My procedure is below. I have put in what I think is best, but should I have the exception handler inside the inner loop rather than the outer loop, with a savepoint in the inner loop?
PROCEDURE process_dist_data_fix
IS
    lx_record_locked     EXCEPTION;
    lx_deadlock_detected EXCEPTION;
    PRAGMA EXCEPTION_INIT(lx_record_locked, -54);
    PRAGMA EXCEPTION_INIT(lx_deadlock_detected, -60);

    CURSOR c_files
    IS
        SELECT surr_id
          FROM batch_pre_dist_data_fix
         WHERE status = 'REQ'
         ORDER BY surr_id;

    TYPE file_type IS TABLE OF batch_pre_dist_data_fix.surr_id%TYPE;
    l_file_tab file_type;
BEGIN
    LOOP
        BEGIN
            OPEN c_files;
            FETCH c_files BULK COLLECT INTO l_file_tab;
            CLOSE c_files;

            IF l_file_tab.COUNT > 0
            THEN
                FOR i IN 1..l_file_tab.COUNT
                LOOP
                    -- update main table with start date
                    UPDATE batch_pre_dist_data_fix
                       SET start_dtm = SYSDATE
                     WHERE surr_id = l_file_tab(i);
                    -- update tables
                    update_soundmouse_tables (l_file_tab(i));
                END LOOP;
            END IF;

            Dbms_Lock.Sleep(5*60); -- sleep for 5 mins before looking for more records to process

        -- if there is a deadlock or a locked record then log the error,
        -- rollback and wait 5 minutes, then loop again
        EXCEPTION
            WHEN lx_deadlock_detected OR lx_record_locked THEN
                ROLLBACK;
                Dbms_Lock.Sleep(5*60); -- sleep for 5 minutes before processing records again
        END;
    END LOOP;
END process_dist_data_fix;
First, understand that a deadlock is a completely different issue from a locked record. For a locked record, most of the time there is nothing special you need to do: nine times out of ten you want the program to wait on the lock. If you are waiting too long, you may have to redefine your transaction boundaries. Your code pattern here is quite typical: you read a list of "to do" records and then you process them. In such cases it is rare that you want special handling for record locking. If a batch_pre_dist_data_fix row is locked for some reason, you should simply wait for the lock to be released and continue. If the reverse is true, i.e. this job runs so long that it keeps batch_pre_dist_data_fix rows locked against another process, then you may want to redefine the transaction boundary, for example by committing after each loop iteration. But beware of how open cursors behave across a commit.
A deadlock is a completely different animal. There is always "another session" involved: the database has detected a situation the sessions can never get out of on their own, and it kills one of them at random. It arises when session 1 is waiting on a resource held by session 2 while session 2 is waiting on a resource held by session 1, an infinite wait the database can detect. One simple way to prevent it is for every transaction that touches multiple tables to lock them in the same order. Say we have tables A, B, C and D and decide they will always be locked in that order: locking A,B,D or A,C,D or C,D is fine, but never D,C. Then you will not get a deadlock. To troubleshoot a deadlock, read the trace dump Oracle wrote when it raised the error, find the other session and the list of tables it held, and work out the order in which they should have been locked.
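As for the savepoint question itself, a per-record handler inside the inner loop might look like this (a sketch only, reusing the declarations from the question; whether skipping locked records is right for you depends on the transaction-boundary decisions above):

FOR i IN 1..l_file_tab.COUNT
LOOP
    BEGIN
        SAVEPOINT before_record;
        UPDATE batch_pre_dist_data_fix
           SET start_dtm = SYSDATE
         WHERE surr_id = l_file_tab(i);
        update_soundmouse_tables (l_file_tab(i));
    EXCEPTION
        WHEN lx_deadlock_detected OR lx_record_locked THEN
            -- undo this record only; it stays 'REQ' and is
            -- retried on the next pass of the outer loop
            ROLLBACK TO before_record;
    END;
END LOOP;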

Configure a Control-M job using in-conditions

Control-M job description
Job A runs Monday-Friday, Job B runs on Saturday, and Job C runs Monday-Saturday. I need Job D to run when:
1) Job A and Job C run, or
2) Job B and Job C run.
Is this possible using Control-M, and how should I modify the in-conditions?
I would modify the jobs so that Job A and Job B post the same condition. Job D then waits on that condition from Job A or Job B, plus the condition from Job C.
The best solution to this is to make use of an OR when defining the in-conditions on Job D.
Solution:
Job A, Job B and Job C all post their normal out-conditions to Job D.
Job D is configured to require the following in-conditions:
((Job A OR Job B) AND Job C)
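In the planning client this looks something like the following (the condition names are illustrative placeholders; real ones typically carry a date suffix):

Out-conditions posted on completion:
  Job A: JobA-ENDED-OK
  Job B: JobB-ENDED-OK
  Job C: JobC-ENDED-OK
In-conditions on Job D, with the AND/OR relationship set per condition:
  (JobA-ENDED-OK OR JobB-ENDED-OK) AND JobC-ENDED-OK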
