Two jobs on the same line of cron - unix

How are they executed if two jobs are scheduled on the same cron line: in parallel or sequentially?
e.g.:
0 3 * * * ./fillers.sh > /dev/null 2>&1; ./pionner.sh > /dev/null 2>&1;

Strictly speaking, that's one job, not two. The command is passed to /bin/sh to be executed, and the two sub-commands are executed sequentially, just as if you had typed the same command at a shell prompt.
If you want them executed in parallel, use an & after the first sub-command rather than a ;.
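For example, a minimal sketch of both variants, using the same scripts as above:

# sequential: pionner.sh starts only after fillers.sh has finished
0 3 * * * ./fillers.sh > /dev/null 2>&1; ./pionner.sh > /dev/null 2>&1
# parallel: fillers.sh is put in the background and pionner.sh starts immediately
0 3 * * * ./fillers.sh > /dev/null 2>&1 & ./pionner.sh > /dev/null 2>&1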

Related

Use of sleep in bash within for loop launching sbatch

I want to submit an R script myjob.R that takes two arguments for which I have several scenarios (here only a few as an example).
I want to pass these arguments by looping through scens and sets.
In order to avoid overloading the squeue on the cluster, I don't want to submit the whole loop at once.
Instead I want to wait 1h between each individual job submission.
Therefore, I included the sleep 1h command, after each iteration.
I used to launch the bash script via bash mybash.sh; however, this requires keeping the terminal open until all jobs have been submitted.
My solution was then to launch mybash.sh via sbatch mybash.sh. This effectively nests two sbatch commands, and it seems to work very well.
My question is only whether there is any reason against submitting nested sbatch commands.
Thanks!
Here is mybash.sh script:
#!/bin/bash
scens=('AAA' 'BBB')
sets=('set1' 'set2')
wd=/projects/workdir

for sc in "${!scens[@]}"; do
  for se in "${!sets[@]}"; do
    echo "SCENARIO: ${scens[sc]} --- SET: ${sets[se]}"
    sbatch -t 00:05:00 -J myjob --workdir=${wd} -e myjob.err -o myjob.out R --file=myjob.R --args "${scens[sc]}" "${sets[se]}"
    # My solution is to include the following line & run this bash script via sbatch
    sleep 1h
  done
done
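A note on the nesting itself: the outer job only calls sbatch and sleep, so it needs next to no resources, but its time limit must cover the sum of the sleeps (four iterations of sleep 1h in this example). A hedged sketch of the outer submission, with an assumed 5-hour limit and job name:

# wrapper job: submits the real jobs and sleeps between submissions;
# its time limit must exceed the total of the sleep calls
sbatch -t 05:00:00 -J submitter mybash.sh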

open screen session on many remote hosts executing complex command, don't exit afterward

I have a long list of remote hosts and I want to run a shell command on all of them. The command takes a very long time, so I want to run the command inside screen on the remote machine, disconnecting immediately from each, and I want the terminal output on the remote to be preserved after the command exits. There is a "tag" that should be supplied to each command as an argument. I tried to do this with parallel, something like this:
$ cat servers.txt
user1@server1.example.com/tag1
user2@server2.example.com/tag2
# ...
$ cat run.sh
grep -v '^#' servers.txt |
parallel ssh -tt '{//}' \
'tag={/}; exec screen slow_command --option1 --option2 $tag other args'
This doesn't work: all of the remote processes are launched, but they are not detached (so the ssh sessions remain live and I don't get my local shell back), and once each command finishes, its screen exits immediately and the output is lost.
How do I fix this script? Note: if this is easier to do with tmux and/or some other marshalling program besides parallel, I'm happy to hear answers that explain how to do it that way.
Something like this:
grep -v '^#' servers.txt |
parallel -q --colsep / ssh {1} "screen -d -m bash -c 'echo do stuff \"{2}\";sleep 1000000'"
The final sleep makes sure the screen does not die. You will have 1000000 seconds to attach to it and kill it.
There is an awful lot of quoting there - especially if do stuff is complex.
It may be easier to make a function that computes tag on the remote machine. You need GNU Parallel 20200522 for this:
env_parallel --session
f() {
  sshlogin="$1"
  # TODO: given $sshlogin compute $tag (e.g. a table lookup),
  # then export it so the new bash started by screen can see it:
  # export tag
  do_stuff() {
    echo "do stuff $tag"
    sleep 1000000
  }
  export -f do_stuff
  screen -d -m bash -c do_stuff "$@"
}
env_parallel --nonall --slf servers_without_tag f '$PARALLEL_SSHLOGIN'
env_parallel --endsession
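Once the jobs are running, you can check on and attach to the detached sessions on any of the hosts; a minimal sketch, using a host name from the example servers.txt:

# list detached screen sessions on one of the remote hosts
ssh user1@server1.example.com screen -ls
# attach interactively (needs a tty, hence -t); detach again with Ctrl-a d
ssh -t user1@server1.example.com screen -r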

How to set timers for console commands in symfony 2.x?

In my app I've got one task that needs to be done every 48 hours on the server side. I've created a console command in order to automate the job. However, I don't know how to set a timer to keep invoking that command. Can you point me to a way to do that?
You should look at cron.
Cron will run your command at whatever frequency you specify.
To create a cron job (on Unix), use: crontab -e
For example,
0 0 */2 * * bin/console app:command >/dev/null 2>&1
will run bin/console app:command at midnight on every odd-numbered day of the month (roughly every 48 hours, though not exactly at month boundaries).
To help you generate a cron expression:
https://crontab-generator.org/
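Note that cron runs with a minimal environment and from the crontab owner's home directory, so it is usually safer to use absolute paths. A hedged sketch, where the PHP binary, project directory, and log file paths are assumptions to adjust for your installation:

# paths below are placeholders; adjust to your setup
0 0 */2 * * cd /var/www/myapp && /usr/bin/php bin/console app:command >> /var/log/myapp/app-command.log 2>&1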

Kill all R processes that hang for longer than a minute

I use a cron task to regularly run an Rscript. Unfortunately, I need to do this on a small AWS instance, and the process may hang, piling up more and more processes on top of each other until the whole system is lagging.
I would like to write a cron task to kill all R processes lasting longer than one minute. I found another answer on Stack Overflow that I've adapted and that I think would solve the problem. I came up with:
if [[ "$(uname)" = "Linux" ]];then killall --older-than 1m "/usr/lib/R/bin/exec/R --slave --no-restore --file=/home/ubuntu/script.R";fi
I copied the command string directly from htop, but it does not work as I expect: I get a "No such file or directory" error even though I've checked it a few times.
I need to kill all R processes that have lasted longer than a minute. How can I do this?
You may want to avoid killing processes belonging to other users, and to send SIGKILL (kill -9) only after SIGTERM (kill -15). Here is a script you could execute every minute with a cron job:
#!/bin/bash

PROCESS="R"
MAXTIME=`date -d '00:01:00' +'%s'`

function killpids()
{
    PIDS=`pgrep -u "${USER}" -x "${PROCESS}"`

    # Loop over all matching PIDs
    for pid in ${PIDS}; do
        # Retrieve duration of the process
        TIME=`ps -o time:1= -p "${pid}" |
              egrep -o "[0-9]{0,2}:?[0-9]{0,2}:[0-9]{2}$"`
        # Convert TIME to timestamp
        TTIME=`date -d "${TIME}" +'%s'`

        # Check if the process should be killed
        if [ "${TTIME}" -gt "${MAXTIME}" ]; then
            kill ${1} "${pid}"
        fi
    done
}

# Leave a chance to kill processes properly (SIGTERM)
killpids "-15"
sleep 5

# Now kill remaining processes (SIGKILL)
killpids "-9"
Why add an additional process every minute with cron?
Would it not be easier to start R with timeout from coreutils? The process will then be killed automatically after the duration you choose:
timeout [option] duration command [arg]…
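A hedged sketch of what the cron entry could look like, assuming the script is normally started with Rscript; the hourly schedule and the kill-after grace period are placeholders:

# give the script 1 minute, then send TERM; send KILL 10s later if TERM is ignored
0 * * * * timeout --kill-after=10s 1m Rscript /home/ubuntu/script.R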
I think the best option is to do this with R itself. I am no expert, but it seems the future package allows executing a function in a separate R process. You could run the actual task in that separate process, and in the main session sleep for 60 seconds and then stop().
Previous update
user1747036's answer, which recommends timeout, is a better alternative.
My original answer
This question is more appropriate for Super User, but here are a few things wrong with
if [[ "$(uname)" = "Linux" ]];then
killall --older-than 1m \
"/usr/lib/R/bin/exec/R --slave --no-restore --file=/home/ubuntu/script.R";
fi
The name argument is either the name of the process image or the path to it; you have included the command-line parameters as well.
If -s signal is not specified, killall sends SIGTERM, which your process may ignore. Are you able to kill the long-running script with that signal on the command line? You may need SIGKILL / -9.
More at http://linux.die.net/man/1/killall
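Putting both points together, a corrected invocation could look like the sketch below; the binary path is the one from your htop output, and -9 is only needed if SIGTERM is not enough:

if [[ "$(uname)" = "Linux" ]]; then
    # match on the executable path only, without its arguments
    killall --older-than 1m /usr/lib/R/bin/exec/R
    # if the process ignores SIGTERM, escalate:
    # killall -9 --older-than 1m /usr/lib/R/bin/exec/R
fi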

Unix cron job for shell scripts

I would like to have a cron job which executes 3 shell scripts consecutively, i.e., with each script running only after the previous one has completed.
How can I do it?
Here is an example cron entry that executes 3 scripts at 9 am, Monday to Friday.
00 09 * * 1-5 script1.sh && script2.sh && script3.sh >> /var/tmp/cron.log 2>&1
If any one of the scripts fails, the scripts after it in the sequence will not be executed.
Write one script which calls these three scripts and put it into cron.
To elaborate on yi_H's answer: You can combine them in one shell script in different ways, depending on what you want.
job1.sh
job2.sh
job3.sh
will run all three consecutively, regardless of the result.
job1.sh && job2.sh && job3.sh
will run them in sequence but stop as soon as one fails (that is, if job1 returns an error, job2 and job3 will not run).
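Combining both answers, a minimal sketch of such a wrapper script; the script names and log path are taken from the examples above, while the wrapper's own name and location are assumptions:

#!/bin/sh
# run_jobs.sh - run the three scripts in order, stopping at the first failure
job1.sh && job2.sh && job3.sh

The matching crontab entry then logs stdout and stderr of the whole sequence:

# weekdays at 9 am; adjust the wrapper path to wherever you saved it
00 09 * * 1-5 /path/to/run_jobs.sh >> /var/tmp/cron.log 2>&1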
