How to run a Cron job for Node.js - unix

I have a cron job that calls a shell script:
*/2 * * * * sh cron_test.sh >> output.log
Inside the shell script, I run some commands like:
#!/usr/bin
./mongo/bin/mongodump .....
FILE_NAME='abc'
node mynode.js $FILENAME
It runs if I just call cron_test.sh from the command prompt. However, node does not run when the script is run by the cron job; the mongodump command does run. So, what's wrong? Is there anything I have to set for permissions, etc.?

Thanks, I figured it out. Either I need to specify the node path, or do that in the sh script:
nodejs/node myscript.js
where nodejs/node is the path where node is installed.
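For reference, a minimal sketch of the wrapper script with absolute paths, so it no longer depends on cron's restricted PATH or working directory. The /usr/local/bin/node and /home/user locations are assumptions (run which node and pwd from an interactive shell to find the real values), and the sketch uses one variable name consistently where the original mixed FILE_NAME and $FILENAME:

#!/bin/sh
# cron_test.sh - absolute paths everywhere, since cron provides only a minimal PATH
cd /home/user || exit 1                                  # assumed location of the script and data
./mongo/bin/mongodump .....                              # options elided as in the question
FILE_NAME='abc'
/usr/local/bin/node /home/user/mynode.js "$FILE_NAME"    # full path to the node binary

The crontab entry itself is safest with absolute paths too, e.g. */2 * * * * /bin/sh /home/user/cron_test.sh >> /home/user/output.log 2>&1 (paths assumed).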

Related

Trying to pass arguments to wp-cli in a bash script

I'm using a wp-cli tool in order to optimize images:
$ wp image-optimize batch --limit=20
I've installed wp-cli using Composer, so it's in an unusual location, but it is in my $PATH:
/home/user/.config/composer/vendor/wp-cli/wp-cli/bin/wp
This works great. I'd like to run this command nightly. I've tried two different approaches to this. First, I tried running the command as a cronjob (set every minute for testing):
$ crontab -e
* * * * * cd /path/to/example.com && wp image-optimize batch --limit=20
I got no response. I wondered if the problem had something to do with passing arguments in a cron job. So, I created a bash script nightly-image-optimize (also in my $PATH), hoping that this might get around it:
#!/bin/bash
echo "begin" >> /home/user/cronlog.log
cd /path/to/example.com
sh /home/user/.config/composer/vendor/wp-cli/wp-cli/bin/wp image-optimize batch --limit=2
echo "end" >> /home/user/cronlog.log
I then modified the cronjob to execute this file every minute as my username since cron runs as root:
* * * * * username /usr/local/bin/nightly-image-optimize
I know the cronjob is running because my cronlog.log file is created and is populated every minute with the echo begin and end statements above.
While in context this is a wp-cli problem, I don't believe the issue has anything to do with wp-cli. I think I'm misunderstanding how to essentially 'tell' bash to run a process as if I had entered it manually (maybe something to do with the interactivity of wp-cli?).
Any ideas?
Note:
I'm on AWS running Ubuntu 18.04.3 as a non-root user with sudo privileges
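A sketch of how the wrapper script might look so that it does not rely on cron's environment at all; it assumes the underlying issue is cron's minimal PATH plus the sh prefix, and so executes the wp launcher directly by its full path (letting its own shebang pick the interpreter) while sending errors to the existing log:

#!/bin/bash
# nightly-image-optimize - sketch: absolute paths throughout, nothing taken from cron's PATH
WP=/home/user/.config/composer/vendor/wp-cli/wp-cli/bin/wp
echo "begin" >> /home/user/cronlog.log
cd /path/to/example.com || exit 1
"$WP" image-optimize batch --limit=20 >> /home/user/cronlog.log 2>&1   # run wp directly and capture stderr
echo "end" >> /home/user/cronlog.log

Capturing stderr matters here: with no redirection, cron mails or drops the error output, which is consistent with seeing begin and end in the log but no image optimization.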

script not running through cron. Works fine when executed manually

I have a shell script in which I call the hana.scr script from within the main script. hana.scr contains the code below:
chmod 777 /data/auto/SLT.out; rm -rf /data/auto/SLT.out; hdbsql -n plhesappr61 -i 00 -u USR -p $#^F#$GGG -o /data/auto/SLT.out "Select sum("ERPACC_RPPCLNT200"."VABD"."NETWR") FROM "ACC_CLNT"."VFKH" inner join "ACC_CLNT"."VNRO" on ("ACC_CLNT"."VNRO"."VBELN"="ACC_CLNT"."VFKH"."VBELN") where FKART in ('ZFP1','ZFP3') and FKDAT = (select ADD_DAYS (TO_DATE (current_date, 'YYYY-MM-DD'), -1) "add_days" from dummy) group by FKDAT";
When I run the main script manually, it calls this script fine and the SLT.out file is also generated.
But when I schedule it in cron, the main script executes just fine, except for hana.scr, which does not seem to execute: it does not even remove the old file as per the second command (rm) in hana.scr.
The cron job runs as the same user that I use to run the script manually.
I read that these issues happen if cron does not get the same environment to run in. I tried to source the user's UNIX profile before executing hana.scr as well, but was not successful.
Below is the cron entry that runs the main script (which calls hana.scr from within); I used absolute paths:
37 0,2,3,4,5,6 * * * /data/esb/auto/./main.sh R > /data/esb/auto/main.log
hana.scr is executed in the following manner:
./hana.scr;
check6=$? ;
if [ $check6 = "1" ]
then
echo "***********HANA counts were not generated**********"
fi
When cron runs /data/esb/auto/./main.sh, your current directory is not changed to /data/esb/auto/. I think you started main.sh from the command line while your $PWD was the same directory that hana.scr is in.
Test it from the command line with
cd /
/data/esb/auto/main.sh
How to fix?
The worst solution is changing the crontab line into
37 0,2,3,4,5,6 * * * cd /data/esb/auto; /data/esb/auto/main.sh R > /data/esb/auto/main.log
That is a workaround for the crontab but main.sh still fails when started from a different directory.
Slightly better is using the complete path in main.sh when you call hana.scr
myscriptdir=/data/esb/auto
..
${myscriptdir}/hana.scr
If you later move the folders, you will need to edit the files and fix those paths.
You can try to use a config file with the settings, or let main.sh figure out which directory it is in:
Getting the source directory of a Bash script from within
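As a sketch of that last suggestion (the linked question covers the details), main.sh can resolve its own directory and call hana.scr relative to it; this assumes main.sh runs under bash, otherwise $(dirname "$0") is a rougher equivalent:

#!/bin/bash
# Resolve the directory this script lives in, regardless of where cron (or a user) started it from
myscriptdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
"${myscriptdir}/hana.scr"
check6=$?
if [ "$check6" = "1" ]
then
    echo "***********HANA counts were not generated**********"
fi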

Unable to stop the cron job

I have a cron job in cronjob.txt as follows
* * * * * nohup sh cronScheduleInit.sh >> cronlog.txt &
and ran it using command,
crontab cronjob.txt
After my testing, I deleted the cron job entry using the following command,
crontab -e
and when I display the list of jobs using
crontab -l
it shows no entries, but the cron job is still running; I mean it is still generating entries in the log file. I even commented out the job entry in the cronjob.txt file.
I also tried deleting the cron jobs with the command below and listing them again; it shows no cron jobs, but the log is still being written:
crontab -r
What should I do? Please help!
A process can be found using the ps aux command. So check:
ps aux|grep crontab #or
ps aux|grep cronjob
Then you will get something like
user 29587 2.0 1.1 748804 88968 pts/31 Sl+ Mar04 19:55 grunt
This example result refers to the grunt service; you have to search for crontab or cronjob in the same way.
Then kill the process using its process ID.
Here:
sudo kill -9 29587
Format
sudo kill -9 <process_id>
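In this particular case the long-running process was started as nohup sh cronScheduleInit.sh, so it is the script name, not crontab or cronjob, that shows up in ps output. A sketch of finding and stopping it (assuming pgrep/pkill from procps are available):

# Find any still-running copies of the script started by the old cron entry
ps aux | grep '[c]ronScheduleInit.sh'     # the [c] trick stops grep from matching itself
# or, equivalently:
pgrep -af cronScheduleInit.sh
# Stop them by full command-line match (or kill the individual PIDs printed above)
pkill -f cronScheduleInit.sh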

spark-submit scheduling in cron

I would like to schedule a pyspark script in crontab to run every 5 minutes. I have successfully launched the script manually using this command:
spark-submit script.py
The problem is that the same command does not seem to work when launched from crontab. The logs don't show any details (they are truncated):
*/5 * * * * /path/script.sh
The file script.sh contains: spark-submit script.py
Please let me know if you have any ideas on how to solve this issue.
You should put it in a bash file and run that file from cron.
Bash file Your_Script.sh:
#!/bin/bash
echo "RUNNING JOB"
/opt/mapr/spark/spark-1.5.2/bin/spark-submit /Path/To/Your_Script.py parama1
So you can easily run it from crontab like this:
32 18 * * * /Path/To/Your_Script.sh
I met the same problem. I solved it in 2 steps:
Check the cron log: the path of the log is /var/spool/mail/${username} on CentOS.
My log showed: cannot find hadoop and $JAVA_HOME.
source /etc/profile: because $JAVA_HOME and $HADOOP_HOME are configured in /etc/profile on my OS. If $JAVA_HOME and $HADOOP_HOME are configured in ~/.bashrc, you should source ~/.bashrc instead.
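Combining the two answers, a sketch of a wrapper that loads the environment cron does not provide before calling spark-submit; the spark-submit path and log location are placeholders (assumptions), and you would source ~/.bashrc instead of /etc/profile if that is where JAVA_HOME and HADOOP_HOME are set:

#!/bin/bash
# Cron starts with a minimal environment, so load JAVA_HOME, HADOOP_HOME, PATH, etc. explicitly
source /etc/profile                       # or: source "$HOME/.bashrc"
echo "RUNNING JOB"
/path/to/spark/bin/spark-submit /path/to/script.py >> /tmp/spark-cron.log 2>&1

Then schedule the wrapper itself: */5 * * * * /path/to/Your_Script.sh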

Cannot get cron to work on Amazon EC2?

I've spent two days trying to understand why I cannot get cron to work on my Ubuntu EC2 instance. I've read the documentation. Can anyone help? All I want is a working cron job.
I am using a simple wget command to test cron. I have verified that this works manually from the command line:
/usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
My crontab file looks like this:
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
I have single spaces between the fields and a blank line below the command. I've also tried adding this entry at the system level with sudo crontab -e. It still doesn't work.
The cron daemon is running:
ps aux | grep crond
ubuntu 2526 0.0 0.1 8096 928 pts/4 S+ 10:37 0:00 grep crond
The cron job appears to be installed:
$ crontab -l
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
Does anyone have any advice or possible solutions?
Thanks for your time.
Cron can be run on an Amazon-based Linux server just like on any other Linux server.
Log in to the console with SSH.
Run crontab -e on the command line.
You are now inside a vi editor editing the crontab of the current user (by default the console user, which has root permissions via sudo).
To test cron, add the following line: * * * * * /usr/bin/uptime > /tmp/uptime
Now save the file and exit vi (press Esc and enter :wq).
After a minute or two, check that the uptime file was created in /tmp (cat /tmp/uptime).
Compare it with the current system uptime by typing the uptime command on the command line.
The scenario above worked successfully on a server running Amazon Linux, but it should work on other Linux boxes as well. This modifies the crontab of the current user, without touching the system's crontabs, and doesn't require a user field inside the crontab entry, since you are running things under your own user. Easier, and safer!
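The same test, consolidated into something you can paste into a shell (a sketch; /tmp/uptime is just the example path from the steps above):

crontab -e                        # add the line below, save, and quit vi with :wq
# * * * * * /usr/bin/uptime > /tmp/uptime
sleep 120                         # give cron a minute or two to fire
cat /tmp/uptime                   # value written by cron
uptime                            # compare with the live value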
Your cron daemon is not running. When you run ps aux | grep crond, the output shows that only the grep command itself matched. Be aware of this whenever you run ps aux | grep blah.
Check the status of the cron service by running this command.
Try:
sudo service crond status
(on Ubuntu the service is named cron, so there it would be sudo service cron status)
Additional information here: http://www.cyberciti.biz/faq/howto-linux-unix-start-restart-cron/.
On some AWS Ubuntu EC2 machines, cron jobs cannot be edited or made to run by using crontab -e or even sudo crontab -e (for whatever reason). I was able to get cron jobs working by:
touch /home/ubuntu/crontest.log to create a log file
sudo vim /etc/crontab which edits the system-wide crontab
add your own cron job on the second-to-last line, running as the root user, such as * * * * * root (date && echo 'It works!') >> /home/ubuntu/crontest.log 2>&1, which dumps stdout and stderr of both commands into the logfile you created in step 1
Verify it is working by waiting 1 minute and then cat /home/ubuntu/crontest.log to see the output of the cron job
Don't forget to specify the user to run it as. Try creating a new file inside your /etc/cron.d folder, named after what you want to do, like getnytimes, and have the contents of that file be just:
02 * * * * root /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
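A sketch of setting that up from the shell; the filename getnytimes is the example above, and on Debian/Ubuntu systems /etc/cron.d drop-ins are expected to be root-owned, not writable by group or others, and to include the user field as shown:

# Create the drop-in file containing the single entry above
echo '02 * * * * root /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/' | sudo tee /etc/cron.d/getnytimes
sudo chmod 644 /etc/cron.d/getnytimes     # conventional permissions for cron.d files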
In my case the cron job was working but the script it was running failed. The failure was due to the fact that I used a relative path instead of an absolute path in an include line inside the script.
What did the trick for me was:
Make sure the cron service was active:
sudo service crond status
Restart the cron service by running:
sudo service crond restart
Reschedule the cron job as usual:
crontab -e
Running
/usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
gives me an error
/home/ubuntu/backups/testfile: No such file or directory
Is this your issue?
I guess cron is not writing this error anywhere you can see it; you can redirect stderr to stdout and capture the error like this:
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/ > /home/ubuntu/error.log 2>&1
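One more sketch of the same redirection, using >> so successive runs append to the log instead of overwriting it (the log path is the one from the answer above):

# Append stdout to the log, then send stderr to the same place; the order matters: > file 2>&1, not 2>&1 > file
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/ >> /home/ubuntu/error.log 2>&1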
