Control-M Job Execution status

Is there any way in Control-M to mark a job as passed without actually running the script?
Suppose I have supplied all the parameters required for execution of the job, i.e. file path, file name, node ID and parameters, but I want to run the job as a dummy one and still have it end successfully in green status.
Can we specify any command in Job Scheduling?

Yes, select 'dummy' as the type of job; there is no need to pass a command line to a dummy job. It will complete as soon as it gets its resources, and you can use it as a trigger for another job.

You can also force-set the job to OK in post-processing to get a Green/Success status.

Related

Is there a way to get the time dependency as a variable in Autosys similar to AUTO_JOB_NAME?

I need to somehow get the start_time for the current job run dynamically from Autosys. It would be passed as an argument into our Python script. The current time isn't good enough; it has to be the time for which the job is currently triggered, which was supplied in the JIL. I haven't found anything other than $AUTO_JOB_NAME that would expose any JIL attributes.
command: python3 /path/path/script.py --job_id ${AUTO_JOB_NAME} --conf file.conf
I want to add something like:
command: python3 /path/path/script.py --job_id ${AUTO_JOB_NAME} --conf file.conf --time ${CURRENT_START_TIME}
And no, the current Autosys time is not what I'm looking for.
I already looked through the documentation, and it seems the only values defined in the runtime environment for a job are $AUTORUN and $AUTO_JOB_NAME.
Why don't you use the current time on the remote agent server where the Python script is hosted? It would be the same as the job start time you are looking for. It may differ by milliseconds, but you can experiment with that.
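If the agent's current time isn't acceptable, one workaround (my own sketch, not from the thread) is to shell out to `autorep -j <job> -q`, which prints the job definition in JIL, and pull the `start_times` attribute from it before launching the script. `parse_start_times` and `get_job_start_times` are hypothetical helper names, and the parsing assumes a single-line `start_times` attribute:

```python
import re
import subprocess

def parse_start_times(jil_text):
    """Extract the start_times attribute from JIL text, e.g. '07:30'."""
    match = re.search(r'^start_times:\s*"?([^"\n]+?)"?\s*$', jil_text, re.MULTILINE)
    return match.group(1).strip() if match else None

def get_job_start_times(job_name):
    """Query Autosys for the job definition (JIL) and parse it."""
    jil = subprocess.run(["autorep", "-j", job_name, "-q"],
                         capture_output=True, text=True, check=True).stdout
    return parse_start_times(jil)

sample = 'insert_job: my_job\nstart_times: "07:30"\ncommand: python3 script.py'
print(parse_start_times(sample))  # 07:30
```

The extracted value could then be interpolated into the `--time` argument by a small wrapper script, rather than by the JIL command line itself.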

Airflow- failing a task which returns no data?

What would the best way be to fail a task which is the result of a BCP query (command line query for MS SQL server I am connecting to)?
I am downloading data from multiple tables every 30 minutes. If the data doesn't exist, the BCP command is still creating a file (0 size). This makes it seem like the task was always successful, but in reality it means that there is missing data on a replication server another team is maintaining.
bcp "SELECT * FROM database.dbo.table WHERE row_date = '2016-05-28' AND interval = 0" queryout /home/var/filename.csv -t, -c -S server_ip -U user -P password
The row_date and interval would be tied to the execution date in Airflow. I would like Airflow to show a failed task instance if the query returned no data, though. Any suggestions?
Check for file size as part of the task?
Create an upstream task which reads the first couple of rows and tells Airflow whether the query was valid or not?
I would use your first suggestion and check the file size as part of the task.
If it is not possible to do this in the same task as the query, create a new task with that specific purpose and an upstream dependency. In cases where the file is empty, just raise an exception in the task.
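A minimal sketch of that size check (the function name and threshold are my assumptions; in a real DAG you would raise `airflow.exceptions.AirflowException` from a PythonOperator callable, which marks the task instance as failed):

```python
import os

def check_bcp_output(path, min_bytes=1):
    """Fail if the BCP output file is missing or empty.

    A plain exception is used here to keep the sketch self-contained;
    inside an Airflow task you would raise AirflowException instead.
    Returns the file size on success so callers can log it.
    """
    if not os.path.exists(path):
        raise RuntimeError(f"BCP output missing: {path}")
    size = os.path.getsize(path)
    if size < min_bytes:
        raise RuntimeError(f"BCP query returned no data ({path} is {size} bytes)")
    return size
```

Running this right after the bcp command, in the same task, means a zero-byte file fails the task instance instead of silently succeeding.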

How to reschedule a coordinator job in OOZIE without restarting the job?

When I changed the start time of a coordinator job in job.properties in Oozie, the job did not pick up the changed time; instead it keeps running at the old scheduled time.
Old job.properties:
startMinute=08
startTime=${startDate}T${startHour}:${startMinute}Z
New job.properties:
startMinute=07
startTime=${startDate}T${startHour}:${startMinute}Z
The job is not running at the changed time (minute 07); it still runs at minute 08 of every hour.
Can you please let me know how I can make the job pick up the updated properties (the changed timing) without restarting or killing the job?
You can't really change the timing of the coordinator via any method provided by Oozie (v3.3.2). When you submit a job, its properties are stored in the database, whereas the actual workflow is in HDFS.
Every time the coordinator executes, the workflow must be present at the path specified in the properties during job submission, but the properties file itself is not needed. What I mean is that the properties file does not come into the picture after the job is submitted.
One hack is to update the time directly in the database using a SQL query, but I am not sure about the implications of that; the property might become inconsistent across the database.
You have to kill the job and resubmit a new one.
Note: Oozie does provide a way to change the concurrency, end time and pause time of a running coordinator, as specified in the official docs, e.g. `oozie job -change <coord-job-id> -value endtime=2016-12-01T05:00Z` — but not the start time.

Autosys: failing a job if the dependent job does not complete successfully before a particular time

The Autosys job retail_daily_job runs at 8:00 GMT. It is dependent on the success of runner_daily_job.
The condition is: if runner_daily_job has not succeeded by 7:30 GMT, then the status of retail_daily_job should be set to failure, i.e. retail_daily_job should fail.
How can this be done in Autosys? What should be used in the JIL file?
Not easy to do. Autosys doesn't support a negation condition, for example NOT SUCCESS. I would try creating a job that runs at 07:30 and changes the status of retail_daily_job to FAILURE if runner_daily_job is FAILURE, TERMINATED or RUNNING.
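A sketch of what that 07:30 watcher job's command could do, split into a testable decision helper and the `sendevent` call that flips the status (the status-fetching step and helper names are my assumptions; `sendevent -E CHANGE_STATUS` is the standard Autosys event CLI):

```python
import subprocess

def should_force_fail(upstream_status):
    """True when the upstream job has not reached SUCCESS,
    i.e. it is still RUNNING, or ended FAILURE/TERMINATED."""
    return upstream_status.strip().upper() != "SUCCESS"

def force_fail(job_name):
    """Flip a job to FAILURE via the Autosys event CLI, equivalent to:
    sendevent -E CHANGE_STATUS -s FAILURE -J <job_name>
    """
    subprocess.run(["sendevent", "-E", "CHANGE_STATUS",
                    "-s", "FAILURE", "-J", job_name], check=True)

# The current status of runner_daily_job would come from parsing
# `autorep -j runner_daily_job` output; if it is anything but SUCCESS,
# call force_fail("retail_daily_job").
```

The watcher itself would be a normal command job with start_times set to "07:30".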
Agree with clmccomas, not easy due to lack of certain operators.
We get around this by having our jobs create "status" files that other jobs can reference if need be. I've found that trying to parse or depend on output from autorep can be troublesome, but it is doable. So we usually have a job status directory and create dated folders:
/yyyymmdd/jobname_
Then your command job would check the status file that's there. Often you may only want to write the status on completion.
The suggestion of having a job to alter statuses might work out well too.
It seems like you are making the situation a bit complex.
If you want retail_daily_job to run based on the success of runner_daily_job, then simply put the condition as success(runner_daily_job).
So, as you said, if runner_daily_job is not successful by 7:30 GMT (which means failure), then the status of retail_daily_job will automatically become failure.
Hope this clarifies your question.

Autosys job to start after another job with any status

I need to run a job after another job, whatever status the previous job finished with.
So far I use
condition: success(x) or failure(x)
can it be written for any status?
I don't know the exact version of Autosys, but it's not the very latest.
condition: done(x)
done(x) is true once x has completed with any terminal status (SUCCESS, FAILURE or TERMINATED).
