I am not able to understand the answer to this question: "What's the difference between cron and crontab?" Are they both schedulers, with one executing files once and the other executing files on a regular interval? Or does cron schedule a job while crontab stores jobs in a table or file for execution?
The Wikipedia page for cron says:
Cron is driven by a crontab (cron table) file, a configuration file
that specifies shell commands to run periodically on a given schedule.
But the wiki.dreamhost page for crontab says:
The crontab command, found in Unix and Unix-like operating systems, is
used to schedule commands to be executed periodically. It reads a
series of commands from standard input and collects them into a file
known as a "crontab" which is later read and whose instructions are
carried out.
Specifically, when I schedule a job to be repeated (quoting from the wiki):
1 0 * * * printf "" > /var/log/apache/error_log
or schedule a job to run only once:
at -f myScripts/call_show_fn.sh 1:55 2014-10-14
Am I performing a cron function in both commands, with both pushed into the crontab? Or is the first one a crontab entry and the second a cron function?
cron is the general name for the service that runs scheduled actions. crond is the name of the daemon that runs in the background and reads crontab files. A crontab is a file containing jobs, each in the format:
minute hour day-of-month month day-of-week command
Crontabs are normally stored by the system in /var/spool/cron/<username> (or /var/spool/cron/crontabs/<username>, depending on the system). These files are not meant to be edited directly. Instead, use the crontab command to invoke a text editor (whatever is defined in your EDITOR environment variable) to modify a crontab file.
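For example, a minimal session might look like this (the script path is hypothetical):

# Open your personal crontab in the editor named by $EDITOR
crontab -e

# A sample entry: run /home/user/backup.sh at 02:30 every day
30 2 * * * /home/user/backup.sh

# Afterwards, list the entries installed for your user
crontab -l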
There are various implementations of cron. Commonly there will be per-user crontab files (accessed with the command crontab -e) as well as system crontabs in /etc/cron.daily, /etc/cron.hourly, etc.
In your first example you are scheduling a recurring job via a crontab. In your second example you're using the at command to queue a job for a single, later execution.
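To make the contrast concrete, here are the two forms side by side (both taken from the question):

# Recurring: a crontab entry that truncates the log at 00:01 every day
1 0 * * * printf "" > /var/log/apache/error_log

# One-shot: queue a script to run exactly once at 01:55 on 2014-10-14
at -f myScripts/call_show_fn.sh 1:55 2014-10-14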
Related
In the example below, if the shell script shell_script.sh sends a job to a cluster, is it possible to make snakemake aware of that cluster job's completion? That is, file a should first be created by shell_script.sh, which sends its own job to the cluster, and then, once that cluster job has completed, file b should be created.
For simplicity, let's assume that snakemake is run locally, meaning that the only cluster job originates from shell_script.sh and not from snakemake.
localrules: that_job

rule all:
    input:
        "output_from_shell_script.txt",
        "file_after_cluster_job.txt"

rule that_job:
    output:
        a = "output_from_shell_script.txt",
        b = "file_after_cluster_job.txt"
    shell:
        """
        shell_script.sh {output.a}
        touch {output.b}
        """
PS - At the moment, I am using the sleep command to give the job a waiting time before it is "completed". But this is an awful workaround, as it could give rise to several problems.
Snakemake can manage this for you with the --cluster argument on the command line.
You can supply a template for the jobs to be executed on the cluster.
As an example, here is how I use snakemake on an SGE-managed cluster.
The template that encapsulates the jobs, which I called sge.sh:
# SGE directives: run the job under bash, in the current working
# directory, and export the submitting shell's environment
#$ -S /bin/bash
#$ -cwd
#$ -V
# snakemake substitutes the actual job command here
{exec_job}
Then, directly on the login node, I run:
snakemake -rp --cluster "qsub -e ./logs/ -o ./logs/" -j 20 --jobscript sge.sh --latency-wait 30
--cluster tells snakemake how to submit jobs to your queuing system
--jobscript is the template in which jobs will be encapsulated
--latency-wait is important if the file system takes a bit of time to write the files. Your job might end and return before the outputs of the rules are actually visible to the filesystem, which will cause an error
Note that you can specify rules not to be executed on the nodes in the Snakefile with the keyword localrules:
Otherwise, depending on your queuing system, there are options to wait for jobs sent to the cluster to finish:
SGE:
Wait for set of qsub jobs to complete
SLURM:
How to hold up a script until a slurm job (start with srun) is completely finished?
LSF:
https://superuser.com/questions/46312/wait-for-one-or-all-lsf-jobs-to-complete
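As a minimal sketch (assuming the standard client tools of each scheduler; myjob.sh is a placeholder), all three have a blocking submission mode:

# SGE: -sync y makes qsub block until the job completes
qsub -sync y myjob.sh

# LSF: -K makes bsub wait for the job to finish before returning
bsub -K ./myjob.sh

# SLURM: srun runs in the foreground and returns only when the job ends
srun ./myjob.sh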
In my app I've got one task that needs to be done every 48 hours on the server side. I've created a console command in order to automate the job. However, I don't know how I can set a timer to keep invoking that command. Can you point me to a way to do that?
You should look into cron.
Cron will run your command at whatever frequency you specify.
To create a cron job (on Unix), use: crontab -e
For example:
0 0 */2 * * bin/console app:command >/dev/null 2>&1
will run bin/console app:command at midnight on every odd day of the month (the 1st, 3rd, 5th, and so on). Note that this is not exactly every 48 hours: a 31-day month ends on an odd day, so the job fires again on the 1st of the next month only 24 hours later.
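Field by field, the entry reads:

# +---------- minute (0 = on the hour)
# | +-------- hour (0 = midnight)
# | | +------ day of month (*/2 = 1, 3, 5, ...)
# | | | +---- month (* = every month)
# | | | | +-- day of week (* = any day)
# | | | | |
  0 0 */2 * *  bin/console app:command >/dev/null 2>&1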
To help you generate a cron expression:
https://crontab-generator.org/
I have developed code composed of two files:
An 'envelope bash file', which does a few things and writes to a log file, and then at some point runs into a for loop in which it executes one job at a time using bsub.
An 'internal bash file', which receives as input the name of the log file (in addition to the other input values necessary for its execution) and executes process X using the input values it received from the 'envelope file'.
Once process X is finished, the 'internal script' writes to the log file that process X (with its specific serial number) has completed.
Since the for loop of the envelope file iterates 10 times, there are at least 10 parallel processes being executed at once, all submitted with bsub and given the SAME log-file name. The idea is that they all report to the same log file once they complete their execution of process X.
The general procedure works well: in each case process X is executed, and the log file accumulates all the required notifications about the completion of process X. However, in some instances we see that writing to the log file gets disturbed, and output lines of two parallel runs run into each other.
I would like to lock the log file in such a manner that it accepts text from only one parallel run at a time, to avoid cases where the text becomes mixed because two processes happen to write to the log file at exactly the same time.
Here is the part of my envelope file that calls bsub (reduced to the minimum necessary):
for ((i=1; i<=batchesnumber; i++)); do
    bsub -J "$SerialName" -q normal "bash FetchFasta.bash $genome_fa ${SerialFileName}.bed $logfile"
done
Here is the part of my internal file that echoes to the log file:
(
echo "~~~~~~~~~~~~~~~~~~"
echo "^^^^^^^^^^^^^^^^^^"
echo -n "Completed running "; bedtools -version
echo "bedtools getfasta -s -fi $genome_fasta -bed $mySerialFile -fo ${mySerialFile%.*}".fa" "
echo "Run's completion time is: $timedate"
echo -e "~~~~~~~~~~~~~~~~~~\n"
) >> $logfile
I would appreciate any useful solution!
There are a couple of ways I can think of to go about this:
Have each job write its output to a different file (use $LSB_JOBID inside each job to name the file). Then use another "cleanup" job to concatenate all of the output into a single file. You can use job dependencies (bsub -w) to make sure the cleanup job runs after all the other jobs are done (see the sketch after this list).
Implement a lock inside your "internal" job to make sure only one of them writes to the file at a time. This is a lot simpler than it might sound: one way to do it is to have each job try to create the same directory with mkdir before writing to the file, and then delete the directory after it's done. If a job fails to create the directory, it's because another job got there first and is currently writing to the file.
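A sketch of #1, assuming LSF's job-dependency syntax and reusing the names from the question (the fetch_* job names and part.* file names are illustrative):

# Each worker writes to its own log named by its LSF job ID;
# \$LSB_JOBID is escaped so it expands inside the job, not at submit time
for ((i=1; i<=batchesnumber; i++)); do
    bsub -J "fetch_$i" -q normal \
        "bash FetchFasta.bash $genome_fa ${SerialFileName}.bed part.\$LSB_JOBID.log"
done

# A cleanup job that starts only after every fetch_* job has finished
# (-w takes an LSF dependency expression; the trailing * is a wildcard)
bsub -J cleanup -w 'done(fetch_*)' -q normal \
    "cat part.*.log >> $logfile && rm part.*.log"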
Here's a snippet illustrating #2 in bash:
# Try to get the lock every second
while ! mkdir lock &> /dev/null ; do
sleep 1
done
# Got the lock, write to the logfile
echo blahblahblah >> $logfile
# Release the lock
rmdir lock
I should mention an important caveat here, though: if one of your jobs dies while it's "holding the lock" (say someone sends it a kill signal at the wrong time), then it'll never remove the directory, and all the other jobs won't be able to create it, so they'll just keep sleeping forever.
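One mitigation, sketched on the assumption that the job can catch the signal (kill -9 cannot be trapped, so a stale lock is still possible), is to release the lock from an EXIT trap once the lock has been acquired:

# Try to get the lock every second
while ! mkdir lock &> /dev/null ; do
    sleep 1
done

# We own the lock now: release it on any exit path,
# and turn fatal signals into exits so the EXIT trap runs
trap 'rmdir lock 2> /dev/null' EXIT
trap 'exit 1' HUP INT TERM

# Got the lock, write to the logfile
echo blahblahblah >> "$logfile"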
I am trying to run a Unix script which populates our Aged Debt table for our finance department from SSIS, but cannot get my head around it. The script has to be run as user "username", and the script to run is:
P1='0*99999999' P2='2015_03_25*%%YY*Y' P3='Y*0.0' P4='Y*0.0' P5='Y*0.0' P6='Y*0.0' P7='Y*0.0' P8='Y*0.0' /cer_cerprod1/exe/par50219r
I believe that I need to have ssh configured on both sides to do this, and I believe I can do it from the "Execute Process Task", but I don't think I am populating the parameters correctly.
Can anyone help?
I currently do this using PuTTY/plink. As sorrell says above, you use an Execute Process Task to call a batch file. That batch file calls plink, and I pass plink the shell script on the Unix server that I want it to execute.
Example of the batch file:
echo y | "d:\program files\putty\plink.exe" [username@yourserver.com] -pw [password] -v sh /myremotescriptname.sh
The echo y at the beginning tells plink to accept the server's host key.
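To run the parameterised command from the question, one option (a sketch; the wrapper script name is hypothetical) is to put the P1..P8 assignments into a small shell script on the Unix server, owned by "username", and have plink invoke just that script:

#!/bin/sh
# run_aged_debt.sh (hypothetical name): wraps the parameterised call
# so the SSIS side only has to invoke one script via plink
P1='0*99999999' P2='2015_03_25*%%YY*Y' P3='Y*0.0' P4='Y*0.0' \
P5='Y*0.0' P6='Y*0.0' P7='Y*0.0' P8='Y*0.0' /cer_cerprod1/exe/par50219r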
I have a cron job in cronjob.txt as follows:
* * * * * nohup sh cronScheduleInit.sh >> cronlog.txt &
and installed it using the command:
crontab cronjob.txt
After my testing, I deleted the cron job entry using the following command:
crontab -e
and when I display the list of jobs using
crontab -l
it shows no entries, but the cron job is still running; it is still generating entries in the log file. I even commented out the job entry in the cronjob.txt file.
I also tried deleting the whole crontab and then listing the jobs; it still shows no cron jobs, but the log keeps growing:
crontab -r
What should I do? Please help!
Removing the crontab only stops cron from launching new copies of your job; a copy of cronScheduleInit.sh that was already started (and detached with nohup ... &) keeps running until it is killed. Running processes can be found with the command ps aux. Grep for the script's name rather than for "crontab" or "cronjob", since the surviving process is your script, not cron:
ps aux | grep cronScheduleInit
You will get output in a format like
user 29587 2.0 1.1 748804 88968 pts/31 Sl+ Mar04 19:55 sh cronScheduleInit.sh
The second column (here 29587) is the process ID.
Then kill the process using that process ID:
sudo kill -9 29587
The general format is:
sudo kill -9 <process_id>
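Since the script name is known, a shorter route (assuming pgrep/pkill are available) is to match by name:

# List surviving copies of the script with their full command lines
pgrep -af cronScheduleInit.sh

# Kill everything whose command line matches the script name
pkill -f cronScheduleInit.sh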