How to set timers for console commands in Symfony 2.x?

In my app, one task needs to run on the server side every 48 hours. I've created a console command to automate the job, but I don't know how to keep invoking that command on a schedule. Can you point me to a way to do that?

You should look at cron.
Cron will run your command at whatever frequency you specify.
To create a cron job (on Unix), run: crontab -e
For example
0 0 */2 * * bin/console app:command >/dev/null 2>&1
will run bin/console app:command at midnight on odd days of the month.
To help you generate a cron expression:
https://crontab-generator.org/
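A minimal sketch of what such a crontab entry could look like with absolute paths and logging; the PHP binary location, the project path /var/www/myapp and the log file are assumptions, and cron's PATH is very small, so relative paths like bin/console often fail:
# m h dom mon dow command
0 0 */2 * * /usr/bin/php /var/www/myapp/bin/console app:command >> /var/log/myapp-command.log 2>&1
Note that */2 in the day-of-month field means "every odd day of the month", so at a month boundary (the 31st to the 1st) the gap is 24 hours rather than 48.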

Related

Trying to pass arguments to wp-cli in a bash script

I'm using a wp-cli tool in order to optimize images:
$ wp image-optimize batch --limit=20
I've installed wp-cli using Composer, so it's in an unusual location, but it is in my $PATH:
/home/user/.config/composer/vendor/wp-cli/wp-cli/bin/wp
This works great. I'd like to run this command nightly. I've tried two different approaches to this. First, I tried running the command as a cronjob (set every minute for testing):
$ crontab -e
* * * * * cd /path/to/example.com && wp image-optimize batch --limit=20
I got no response. I wondered if the problem had something to do with passing arguments in a cronjob. So, I created a bash script nightly-image-optimize (also in path) hoping that this might get around it:
#!/bin/bash
echo "begin" >> /home/user/cronlog.log
cd /path/to/example.com
sh /home/user/.config/composer/vendor/wp-cli/wp-cli/bin/wp image-optimize batch --limit=2
echo "end" >> /home/user/cronlog.log
I then modified the cronjob to execute this file every minute as my username since cron runs as root:
* * * * * username /usr/local/bin/nightly-image-optimize
I know the cronjob is running because my cronlog.log file is created and is populated every minute with the echo begin and end statements above.
While in context this is a wp-cli problem, I don't believe the issue has anything to do with wp-cli itself. I think I'm misunderstanding how to essentially 'tell' bash to run a process as if I had entered it manually (maybe something to do with the interactivity of wp-cli?).
Any ideas?
Note:
I'm on AWS running Ubuntu 18.04.3 as a non-root user with sudo privileges
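For reference, a hedged sketch of a crontab entry that does not depend on cron's minimal environment, reusing the absolute wp path from the question (the log file is an assumption):
# cron provides a minimal PATH and no interactive shell setup, so call wp by its absolute path and capture errors
* * * * * cd /path/to/example.com && /home/user/.config/composer/vendor/wp-cli/wp-cli/bin/wp image-optimize batch --limit=20 >> /home/user/cronlog.log 2>&1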

Kill all R processes that hang for longer than a minute

I use a cron task to regularly run an Rscript. Unfortunately, I need to do this on a small AWS instance, and the process may hang, building more and more processes on top of each other until the whole system is lagging.
I would like to write a cron task that kills all R processes that have been running for longer than one minute. I found another answer on Stack Overflow that I've adapted and that I think should solve the problem. I came up with:
if [[ "$(uname)" = "Linux" ]];then killall --older-than 1m "/usr/lib/R/bin/exec/R --slave --no-restore --file=/home/ubuntu/script.R";fi
I copied the command directly from htop, but it does not work as I expect: I get a "No such file or directory" error, even though I've checked the path a few times.
I need to kill all R processes that have lasted longer than a minute. How can I do this?
You may want to avoid killing processes that belong to other users, and to try SIGKILL (kill -9) only after SIGTERM (kill -15) has had a chance to work. Here is a script you could execute every minute from a cron job:
#!/bin/bash

PROCESS="R"
MAXTIME=`date -d '00:01:00' +'%s'`

function killpids()
{
    PIDS=`pgrep -u "${USER}" -x "${PROCESS}"`

    # Loop over all matching PIDs
    for pid in ${PIDS}; do
        # Retrieve duration of the process
        TIME=`ps -o time:1= -p "${pid}" |
              egrep -o "[0-9]{0,2}:?[0-9]{0,2}:[0-9]{2}$"`

        # Convert TIME to timestamp
        TTIME=`date -d "${TIME}" +'%s'`

        # Check if the process should be killed
        if [ "${TTIME}" -gt "${MAXTIME}" ]; then
            kill ${1} "${pid}"
        fi
    done
}

# Leave a chance to kill processes properly (SIGTERM)
killpids "-15"
sleep 5

# Now kill remaining processes (SIGKILL)
killpids "-9"
Why add an extra process every minute with cron?
Would it not be easier to start R with timeout from coreutils? The process will then be killed automatically after the duration you choose:
timeout [option] duration command [arg]…
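For illustration, a hedged sketch of a crontab entry combining cron and timeout; the schedule, the log file and the plain Rscript invocation are assumptions, while the script path is taken from the question:
# Give the script 60 seconds, send SIGTERM, then SIGKILL 30 seconds later if it is still alive
*/5 * * * * timeout --signal=TERM --kill-after=30 60 Rscript /home/ubuntu/script.R >> /home/ubuntu/rscript.log 2>&1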
I think the best option is to do this with R itself. I am no expert, but it seems the future package will allow executing a function in a separate thread. You could run the actual task in a separate thread, and in the main thread sleep for 60 seconds and then stop().
Previous Update
user1747036's answer, which recommends timeout, is a better alternative.
My original answer
This question is more appropriate for superuser, but here are a few things wrong with
if [[ "$(uname)" = "Linux" ]];then
killall --older-than 1m \
"/usr/lib/R/bin/exec/R --slave --no-restore --file=/home/ubuntu/script.R";
fi
The name argument is either the name of the image or the path to it; you have included its parameters as well.
If -s signal is not specified, killall sends SIGTERM, which your process may ignore. Are you able to kill a long-running script with this on the command line? You may need SIGKILL / -9.
More at http://linux.die.net/man/1/killall
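Putting those two points together, a hedged sketch of a corrected command (the binary path is taken from the question, and -9 follows the note above about SIGTERM possibly being ignored):
if [[ "$(uname)" = "Linux" ]]; then
    # Match by executable path only (no arguments) and send SIGKILL explicitly
    killall -9 --older-than 1m /usr/lib/R/bin/exec/R
fi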

Unable to stop the cron job

I have a cron job in cronjob.txt as follows:
* * * * * nohup sh cronScheduleInit.sh >> cronlog.txt &
and installed it using the command:
crontab cronjob.txt
After my testing, I deleted the cron job entry using the following command:
crontab -e
When I display the list of jobs using
crontab -l
it shows no entries, but the cron job is still running; it keeps generating entries in the log file, even though I commented out the job entry in the cronjob.txt file.
I also tried removing the crontab entirely and then listing the jobs. It shows no cron jobs, but the log is still being written:
crontab -r
What should I do? Please help!
Processes can be found using the command ps aux. So check:
ps aux | grep crontab   # or
ps aux | grep cronjob
Then you will get something like
user 29587 2.0 1.1 748804 88968 pts/31 Sl+ Mar04 19:55 grunt
(this example line refers to a grunt process; you have to look for your crontab or cronjob process instead).
Then kill the process using its process ID.
Here:
sudo kill -9 29587
Format
sudo kill -9 <process_id>
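In this particular case the leftover process is most likely the background script started from the crontab entry itself, so a hedged sketch of finding and stopping it directly, using the script name from the question, would be:
# List matching processes; the [c] trick keeps grep from matching itself
ps aux | grep "[c]ronScheduleInit.sh"
# Stop every process whose command line contains the script name
pkill -f cronScheduleInit.sh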

Difference between Cron and Crontab?

I am not able to understand the answer to this question: "What's the difference between cron and crontab?" Are they both schedulers, with one executing files once and the other executing them at a regular interval, OR does cron schedule a job while crontab stores the jobs in a table or file for execution?
The Wikipedia page for cron mentions:
Cron is driven by a crontab (cron table) file, a configuration file
that specifies shell commands to run periodically on a given schedule.
But the DreamHost wiki page for crontab mentions:
The crontab command, found in Unix and Unix-like operating systems, is
used to schedule commands to be executed periodically. It reads a
series of commands from standard input and collects them into a file
known as a "crontab" which is later read and whose instructions are
carried out.
Specifically, when I schedule a job to be repeated (quoting from the wiki):
1 0 * * * printf > /var/log/apache/error_log
or execute a job only once:
at -f myScripts/call_show_fn.sh 1:55 2014-10-14
am I using cron in both commands, with the jobs pushed into a crontab, OR is the first one a crontab and the second a cron function?
cron is the general name for the service that runs scheduled actions. crond is the name of the daemon that runs in the background and reads crontab files. A crontab is a file containing jobs in the format
minute hour day-of-month month day-of-week command
crontabs are normally stored by the system under /var/spool/cron (the exact location varies by distribution, e.g. /var/spool/cron/crontabs/<username>). These files are not meant to be edited directly. You can use the crontab command to invoke a text editor (whatever you have defined in the EDITOR environment variable) to modify a crontab file.
There are various implementations of cron. Commonly there will be per-user crontab files (accessed with the command crontab -e) as well as system-wide locations such as /etc/crontab and the /etc/cron.daily, /etc/cron.hourly, etc. directories.
In your first example you are scheduling a job via a crontab. In your second example you're using the at command to queue a job for later execution.
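For illustration, a minimal sketch of the two mechanisms side by side, reusing the commands quoted in the question:
# Repeated job: edit the per-user crontab, which the cron daemon then reads on a schedule
crontab -e
#   ...and add a line such as:  1 0 * * * printf "" > /var/log/apache/error_log
# One-off job: queued with at and executed once by atd, not by cron
at -f myScripts/call_show_fn.sh 1:55 2014-10-14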

Jenkins seems to be the target for nohup in a script started via ssh, how can I prevent that?

I am trying to create a Jenkins job that restarts a program that runs all the time on one of our servers.
I specify the following as the command to run:
cd /usr/local/tool && ./tool stop && ./tool start
The script 'tool' contains a line like:
nohup java NameOfClass &
The output of that ends up in my build console instead of in nohup.out, so the job never terminates unless I terminate it manually, which terminates the program.
How can I cause nohup to behave the same way it does from a terminal?
If I understood the question correctly, Jenkins is killing all processes at the end of the build and you would like some process to be left running after the build has finished.
You should read https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller
Essentially, Jenkins searches for processes with a secret value in the BUILD_ID environment variable. Just override it for the processes you want to be left alone.
In the new Pipeline jobs, setting BUILD_ID no longer prevents Jenkins from killing your processes once the job finishes. Instead, you need to set JENKINS_NODE_COOKIE:
sh 'JENKINS_NODE_COOKIE=dontKillMe nohup java NameOfClass &'
See the wiki on ProcessTreeKiller and this comment in the Jenkins Jira for more information.
Try adding the & in the Jenkins build step and redirecting the output using > nohup.out.
I had a similar problem running a shell script from Jenkins as a background process. I fixed it by using the command below:
BUILD_ID=dontKillMe nohup ./start-fitnesse.sh &
In your case,
BUILD_ID=dontKillMe nohup java NameOfClass &
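Applied to the restart command from the original question (instead of editing the tool script itself), a hedged sketch of the freestyle build step would be:
cd /usr/local/tool && ./tool stop && BUILD_ID=dontKillMe ./tool start
The nohup'd java process started inside ./tool inherits the overridden BUILD_ID and is left alone by the ProcessTreeKiller; for Pipeline jobs, set JENKINS_NODE_COOKIE=dontKillMe instead, as noted above.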
