How to trigger AutoSys insert_job and sendevent stored in a JIL file in one shot? - autosys

I'm very new to AutoSys jobs and I have the following commands stored in a single JIL file; let's call it test.jil.
insert_job: job_A
command: echo 'mock'
description: mock job A
sendevent -E JOB_ON_ICE -J job_A
I'm trying to run jil < test.jil, but it doesn't recognize sendevent. How can I get it working?

In a JIL file we can write definitions like insert_job,
delete_job, and update_job, but sendevent is a separate command-line utility and cannot be run from inside a JIL file.
So you can create a separate executable script that contains the sendevent command and execute it through the CLI.
Thanks.
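For example, a small wrapper script along those lines might look like this (the script name is hypothetical and it assumes the AutoSys environment is already sourced in your shell; adjust to your installation):
#!/bin/sh
# load_and_ice.sh - hypothetical wrapper: load the JIL definitions, then raise the event
jil < test.jil
# once job_A exists, the event can be sent from the CLI
sendevent -E JOB_ON_ICE -J job_A
Running ./load_and_ice.sh then does both steps in one shot.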

Actually, there is a change in one of the last service packs. For your JIL you can specify a status:
insert_job: test_job2
command: dir
machine: localhost
status: on_ice
The valid values are:
FAILURE, INACTIVE, ON_HOLD, ON_ICE, ON_NOEXEC, SUCCESS, or TERMINATED.
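Applied to the original test.jil, the whole thing could then live in a single file, roughly like this sketch (the machine attribute is an assumption; use whichever machine the job should run on):
/* test.jil - define job_A and create it directly on ice */
insert_job: job_A
command: echo 'mock'
description: "mock job A"
machine: localhost
status: on_ice
Loading it with jil < test.jil then needs no separate sendevent call.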

Related

Restart Autosys job when terminated via another Autosys job

I am setting up an Autosys job to restart another job when the main job is terminated.
Insert_job: Job_Restarter
Job_type: cmd
Condition: t(main_job)
Box_name: my_test_box
Permission: gx,get
Command: sendevent -E FORCE_STARTJOB -J main_job
When the main job is terminated, the restart job runs but fails with an error code of 1. I know this is a generic error code, but does anyone have an idea of what I am doing wrong?
Edit:
Did some digging. "Sendevent" is not recognized as a command. Is there another way to restart the job through another job?
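One thing worth checking is whether sendevent is on the PATH of the shell in which the agent runs the command; a common workaround is to spell out the full path in the job definition. A sketch along those lines, where the install path is an assumption and should be replaced with your site's $AUTOSYS/bin:
insert_job: Job_Restarter
job_type: cmd
box_name: my_test_box
condition: t(main_job)
/* the path below is an assumption - substitute your own $AUTOSYS/bin location */
command: /opt/CA/WorkloadAutomationAE/autosys/bin/sendevent -E FORCE_STARTJOB -J main_job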

AutoSys: extract specific jobs from a JIL file

I have thousands of jobs in an AutoSys JIL file, and I would like to extract the specific jobs starting with admin-* from the JIL files.
I would extract them from AutoSys itself as a JIL file. On the command line just do:
autorep -qJ admin-* > admin_jobs.jil
That will create a file admin_jobs.jil containing the jobs that match the search criteria.
Dave
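A slightly expanded sketch of the same idea (quoting the pattern is an addition here, so the shell does not expand the * itself; the grep line is just a quick sanity check):
# export every job whose name starts with admin- into one JIL file
autorep -qJ "admin-*" > admin_jobs.jil
# optional: count how many job definitions were exported
grep -c "^insert_job:" admin_jobs.jil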
In the AutoSys portal, under the Enterprise Command Line tab:
Select your servers.
Under Command, enter the line below and click Execute.
autorep -J admin-% -q
This will return all jobs starting with "admin-".

Can Snakemake work if a rule's shell command is a cluster job?

In the example below, if the shell script shell_script.sh sends a job to the cluster, is it possible to make Snakemake aware of that cluster job's completion? That is, first, file a should be created by shell_script.sh, which submits its own job to the cluster, and then, once this cluster job is completed, file b should be created.
For simplicity, let's assume that Snakemake is run locally, meaning that the only cluster job originates from shell_script.sh and not from Snakemake itself.
localrules: that_job

rule all:
    input:
        "output_from_shell_script.txt",
        "file_after_cluster_job.txt"

rule that_job:
    output:
        a = "output_from_shell_script.txt",
        b = "file_after_cluster_job.txt"
    shell:
        """
        shell_script.sh {output.a}
        touch {output.b}
        """
PS - At the moment, I am using a sleep command to give it a waiting time before the job is "completed", but this is an awful workaround that could give rise to several problems.
Snakemake can manage this for you with the --cluster argument on the command line.
You can supply a template for the jobs to be executed on the cluster.
As an example, here is how I use Snakemake on an SGE-managed cluster.
The template that will encapsulate the jobs, which I called sge.sh:
#$ -S /bin/bash
#$ -cwd
#$ -V
{exec_job}
Then, directly on the login node, I use:
snakemake -rp --cluster "qsub -e ./logs/ -o ./logs/" -j 20 --jobscript sge.sh --latency-wait 30
--cluster tells Snakemake which queuing system (submission command) to use
--jobscript is the template in which the jobs will be encapsulated
--latency-wait is important if the file system takes a bit of time to write the files. Your job might end and return before the outputs of the rule are actually visible to the filesystem, which would cause an error
Note that you can specify rules that should not be executed on the nodes with the keyword localrules: in the Snakefile
Otherwise, depending on your queuing system, some options exist to wait for jobs sent to the cluster to finish:
SGE:
Wait for set of qsub jobs to complete
SLURM:
How to hold up a script until a slurm job (start with srun) is completely finished?
LSF:
https://superuser.com/questions/46312/wait-for-one-or-all-lsf-jobs-to-complete
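If the submission really has to stay inside the rule rather than going through --cluster, another option is to make the submitting script itself block until the cluster job finishes, for example with SGE's qsub -sync y. A sketch, assuming an SGE scheduler and that shell_script.sh is under your control; real_work.sh is a placeholder for whatever it actually submits:
#!/bin/bash
# hypothetical shell_script.sh: submit the real work and wait for it to complete
# -sync y is SGE-specific; on SLURM the rough equivalent would be sbatch --wait
qsub -sync y real_work.sh "$1"
With the script blocking like this, the touch {output.b} line in the rule only runs after the cluster job has finished, so Snakemake sees the outputs in the right order.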

Running Unix scripts from SSIS

I am trying to run a Unix script, which populates our Aged Debt table for our finance department, from SSIS, but I cannot get my head around it. The script has to be run as user "username", and the script to run is:
P1='0*99999999' P2='2015_03_25*%%YY*Y' P3='Y*0.0' P4='Y*0.0' P5='Y*0.0' P6='Y*0.0' P7='Y*0.0' P8='Y*0.0' /cer_cerprod1/exe/par50219r
I believe that I need to have SSH configured on both sides to do this, and I believe that I can do this from the "Execute Process Task", but I don't think that I am populating the parameters correctly.
Can anyone help?
I currently do this using PuTTY/plink. Like sorrell says above, you use an Execute Process Task to call a batch file. That batch file calls plink. I pass plink the shell script on the Unix server that I want it to execute.
Example of the batch file:
echo y | "d:\program files\putty\plink.exe" [username@yourserver.com] -pw [password] -v sh /myremotescriptname.sh
The echo y at the beginning tells plink to accept the security credentials (host key) of the server.
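Tying that back to the original script, the batch file called by the Execute Process Task might look something like this sketch (the file name, plink path, host, and credentials are all assumptions; note that every literal % in the remote command has to be doubled inside a batch file):
@echo off
REM run_aged_debt.bat - hypothetical wrapper called by the SSIS Execute Process Task
echo y | "d:\program files\putty\plink.exe" username@yourserver.com -pw password ^
  "P1='0*99999999' P2='2015_03_25*%%%%YY*Y' P3='Y*0.0' P4='Y*0.0' P5='Y*0.0' P6='Y*0.0' P7='Y*0.0' P8='Y*0.0' /cer_cerprod1/exe/par50219r"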

Difference between Cron and Crontab?

I am not able to understand the answer to this question: "What's the difference between cron and crontab?" Are they both schedulers, with one executing the files once and the other executing the files on a regular interval, OR does cron schedule a job while crontab stores them in a table or file for execution?
The Wikipedia page for cron mentions:
Cron is driven by a crontab (cron table) file, a configuration file
that specifies shell commands to run periodically on a given schedule.
But the wiki.dreamhost page for crontab mentions:
The crontab command, found in Unix and Unix-like operating systems, is
used to schedule commands to be executed periodically. It reads a
series of commands from standard input and collects them into a file
known as a "crontab" which is later read and whose instructions are
carried out.
Specifically, when I schedule a job to be repeated (quoting from the wiki):
1 0 * * * printf "" > /var/log/apache/error_log
or executing a job only once
at -f myScripts/call_show_fn.sh 1:55 2014-10-14
Am I performing a cron function in both commands, which is then pushed into the crontab, OR is the first one a crontab entry and the second a cron function?
cron is the general name for the service that runs scheduled actions. crond is the name of the daemon that runs in the background and reads crontab files. A crontab is a file containing jobs in the format
minute hour day-of-month month day-of-week command
crontabs are normally stored by the system under /var/spool/cron (for example /var/spool/cron/crontabs/<username>). These files are not meant to be edited directly. You can use the crontab command to invoke a text editor (whatever you have defined in the EDITOR environment variable) to modify a crontab file.
There are various implementations of cron. Commonly there will be per-user crontab files (accessed with the command crontab -e) as well as system crontabs in /etc/cron.daily, /etc/cron.hourly, etc.
In your first example you are scheduling a job via a crontab. In your second example you're using the at command to queue a job for later execution.
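For a concrete illustration (the script path /home/me/bin/backup.sh is hypothetical, and the at example assumes the atd service is running):
# edit your per-user crontab in $EDITOR
crontab -e
# recurring job: run the script at 02:30 every day
# minute hour day-of-month month day-of-week command
30 2 * * * /home/me/bin/backup.sh >> /home/me/backup.log 2>&1
# one-off job with at: run the same script once at 18:00 today
echo "/home/me/bin/backup.sh" | at 18:00
The crontab line is stored in your crontab file and run by cron on schedule; the at line goes into at's own one-off job queue.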
