How to run tail and grep in a crontab - unix

Every minute, using crontab, I am trying to run a tail-and-grep command that gets the last 10 lines containing "Error" from all log files in a folder and writes the output to a text file. The line below is the only line I have in my crontab. If I run the command in a normal terminal it shows me the results, so the command works, but it does not work from the crontab.
Is there anything I am missing?
* * * * * cd /Users/nassim/Desktop/application1 && tail -n 10 | grep -w "Error" **/*.log > error.txt
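A likely fix (a sketch, not verified against this setup): tail is reading from stdin because it was given no files, the glob goes to grep instead, and the **/ pattern is expanded recursively by zsh (the default interactive shell on macOS) but not by the /bin/sh that cron uses. Reversing the pipeline and letting grep read the files avoids both issues:
* * * * * cd /Users/nassim/Desktop/application1 && grep -hw "Error" */*.log | tail -n 10 > error.txt
The */*.log pattern is an assumption about where the logs live; adjust it to the real layout, or set SHELL in the crontab to a shell that expands ** recursively.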

Related

Batch and Bash codes while submitting jobs

I was used to the following way of submitting jobs to be run in R sequentially under 'PBS/Torque'.
The following is my R code, named simsim.R:
#########
set <- 1
#########
# Read i
#########
# the following two lines refer to the bash code
arg <- commandArgs()
arg
itration <- as.numeric(arg)[3]
itration
setwd("/home/habijabi")
save(arg, itration,
     file = paste0('simsim_RESULT_', set, itration, '.RData'))
Then I submit it with the following script:
#!/bin/bash
Chains=10
for cha in `seq 1 $Chains`
do
  echo "Chains: " $cha
  sleep 1
  qsub -q long -l nodes=1:ppn=12,walltime=24:00:00 -v c=$cha ./diffv1.sh
done
In 'diffv1.sh' I load the module and pass in the variable 'c':
#!/bin/bash
## input values
c=$c
#configure software
module load R/4.1.2
#changed
cd /home/habijabi
R --no-save < simsim.R $c
In this way I passed the '$c' value to my R code, and it produced 10 .RData files with the corresponding names.
But then I had to switch to 'SLURM'. The following is the batch code I was using:
#!/bin/bash
#SBATCH --job-name=R-test
#IO files
#SBATCH --error=R-test.%J.err
#SBATCH --output=R-test.%J.out
#!/bin/bash
module load R/4.1.2
set -e -x
mkdir -p jobs
cd /home/habijabi
for cha in {1..10}
do
  sbatch --time=24:00:00 \
         --ntasks-per-node=12 \
         --nodes=1 \
         -p compute \
         -o jobs/${cha}_srun.txt \
         --wrap="R --no-save < /home/habijabi/simsim.R ${cha}"
done
But with this code, only one or two of the jobs actually run, and I do not understand why, after submitting 150 jobs, it does not run all of them. The run file shows the following:
+ mkdir -p jobs
+ cd /home/habijabi
+ for cha in '{1..10}'
+ sbatch --time=24:00:00 --ntasks-per-node=12 --nodes=1 -p compute -o jobs/1_srun.txt '--wrap=R --no-save < /home/habijabi/simsim.R 1'
+ for cha in '{1..10}'
+ sbatch --time=24:00:00 --ntasks-per-node=12 --nodes=1 -p compute -o jobs/2_srun.txt '--wrap=R --no-save < /home/habijabi/simsim.R 2'
+ for cha in '{1..10}'
+ sbatch --time=24:00:00 --ntasks-per-node=12 --nodes=1 -p compute -o jobs/3_srun.txt '--wrap=R --no-save < /home/habijabi/simsim.R 3'
...so on...
and the .out file shows the following
Submitted batch job 146299
Submitted batch job 146300
Submitted batch job 146301
Submitted batch job 146302
Submitted batch job 146303
......
......
Both look fine. But only a few of the jobs run; the majority of them fail with the following error:
/opt/ohpc/pub/libs/gnu8/R/4.1.2/lib64/R/bin/exec/R: error while loading shared libraries: libpcre2-8.so.0: cannot open shared object file: No such file or directory
I do not understand what I have done wrong; this does not produce anything. I am new to this type of coding, so any help is appreciated.
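The shared-library error suggests that the R module's environment is not present inside the batch jobs, so R cannot find libpcre2 at run time on the compute nodes. A possible fix, sketched under that assumption (not a confirmed diagnosis), is to load the module inside each job by putting it in the --wrap string:
sbatch --time=24:00:00 \
       --ntasks-per-node=12 \
       --nodes=1 \
       -p compute \
       -o jobs/${cha}_srun.txt \
       --wrap="module load R/4.1.2 && R --no-save < /home/habijabi/simsim.R ${cha}"
If the module command is not defined in the job's shell, sourcing the module system's init script (or exporting the library path directly) inside the wrapped command is the usual fallback.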

How can I use system or system2 if the command pipes to head, like "cmd | head"

I noticed that when I run a long command on Linux (I am using a CentOS 7.3 distro, with R 4.0.3, from the terminal) and pipe it to head, only the first lines of output are shown to me (and the command stops):
ls -R /opt # on my system I would get tons of output for tens of seconds
ls -R /opt | head # just get the first 10 lines and the command stops straight away
When I try the equivalent in R, I cannot get the same behaviour:
system(command = "ls -R /opt | head") # will take a long time (I assume the time for ls -R /opt to finish)
Is there a way to get the same behaviour in R as the one I get on my system's command line?
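For reference, the terminal behaviour relies on SIGPIPE: head exits after printing its lines, the pipe closes, and the producer is killed on its next write. A small bash demonstration of the mechanism (not an R fix):
ls -R /opt | head -n 10
echo "ls exit status: ${PIPESTATUS[0]}" # typically 141 = 128 + 13 (killed by SIGPIPE)
A commonly cited explanation for the difference inside R is that R ignores SIGPIPE and child processes inherit that disposition, so ls keeps running until it has walked the whole tree; treat that as an assumption to verify on your system.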

Moving files through a crontab task doesn't work, but works if executed manually

I'm trying to schedule moving files with crontab, but the cron job is not moving the files. If I do it manually, it works. Do you know what the possible reason could be? This is what I have:
13,29 * * * * mv $(grep -l "File was not FOUND" /home/user/test*) /home/user/temp
If I execute the following line it works without any problem:
mv $(grep -l "File was not FOUND" /home/user/test*) /home/user/temp
By default, cron jobs are run using /bin/sh. You should be able to set the shell to use by adding it just before your job like this:
SHELL=/bin/bash
13,29 * * * * mv $(grep -l "File was not FOUND" /home/user/test*) /home/user/temp
...or whichever shell you like.
Alternatively, if your crond does not support that notation, you may explicitly invoke the shell you like, using its -c argument:
13,29 * * * * bash -c 'mv $(grep -l "File was not FOUND" /home/user/test*) /home/user/temp'
Notice the enclosing single quotes. They are required as the whole command must be a single argument to the shell.
Yet another way would be to convert your command to plain old Bourne shell (sh) syntax, which I believe should be:
13,29 * * * * mv `grep -l "File was not FOUND" /home/user/test*` /home/user/temp
...using backticks for command substitution.
Create a simple shell script and add #!/bin/bash at the top.
#!/bin/bash
mv /your/source/file/path /target/path
# In your case something like the line below should work.
# Make sure your path is an absolute path, starting from the root folder.
# mv $(grep -l "File was not FOUND" /home/user/test*) /home/user/temp
Then execute the above shell script from crontab like below:
13,29 * * * * /path/to/file/movescript.sh
Make sure that your script has execute permission and that the user can move the files; it is best to run the script manually once before scheduling it.
chmod +x movescript.sh will add execute permission.
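If the job still appears to do nothing, a useful debugging step (a sketch; the log path is hypothetical) is to capture the script's output so any error from cron's environment becomes visible:
13,29 * * * * /path/to/file/movescript.sh >> /home/user/movescript.log 2>&1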

How to run cronjob with a user other than root on solaris server

I found many similar questions on the internet, but none of them resolved my problem. I am working on a Solaris 5.10 machine. Inside a shell script, a customized command is run like below:
palf -f ${basePath}/palf_file.DAT -e ${basePath}/LOG/palf_file.log
This command only runs while logged in as the "palf" user. The script, and consequently this command, runs perfectly from the command prompt, but cron is not able to run it.
I tried a few things. I changed the entry in my crontab file like below, which could not even run the script:
40 15 * * * palf bash /opt/bin/scripts/script.sh
Then I tried to edit a cron file as the "palf" user using the command below, but it gave me an "invalid option" error:
crontab -u palf -e
I also tried
crontab -e palf
It opened a crontab file, but it was the same as root's crontab file, not the user-specific one.
Nothing worked for me. Could anyone please help here? Thanks.
palf -f ${basePath}/palf_file.DAT -e ${basePath}/LOG/palf_file.log
if [[ $? -ne 0 ]]; then
  logger "palf command failed. Please check..." 1
else
  logger "palf command successfully executed..." 0
fi
This is how I check the status of the palf command; it prints "palf command failed. Please check..." through the logger function every time it runs from the cron job.
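Two things worth trying here, sketched from Solaris 10 conventions (verify locally): Solaris crontab(1) takes the username as a plain argument rather than a -u option, and the six-field user syntax tried above only works in Linux-style /etc/crontab files, not in per-user crontabs.
crontab -e palf # as root, edits palf's crontab (set EDITOR first, or you get ed)
40 15 * * * su - palf -c "bash /opt/bin/scripts/script.sh" # from root's crontab, run the script as palf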

Cannot get cron to work on Amazon EC2?

I've spent two days trying to understand why I cannot get cron to work on my Ubuntu EC2 instance. I've read the documentation. Can anyone help? All I want is a working cron job.
I am using a simple wget command to test cron. I have verified that this works manually from the command line:
/usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
My crontab file looks like this:
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
I have single spaces between the fields and a blank line below the command. I've also tried installing this command at the system level with sudo crontab -e. It still doesn't work.
The cron daemon is running:
ps aux | grep crond
ubuntu 2526 0.0 0.1 8096 928 pts/4 S+ 10:37 0:00 grep crond
The cron job appears to be installed:
$ crontab -l
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
Does anyone have any advice or possible solutions?
Thanks for your time.
Cron runs on an Amazon-based Linux server just like on any other Linux server.
Log in to the console with SSH.
Run crontab -e on the command line.
You are now inside a vi editor of the current user's crontab (by default the console user, with root permissions).
To test cron, add the following line: * * * * * /usr/bin/uptime > /tmp/uptime
Now save the file and exit vi (press Esc and enter :wq).
After a minute or two, check that the uptime file was created in /tmp (cat /tmp/uptime).
Compare it with the current system uptime by typing the uptime command on the command line.
The scenario above worked successfully on a server running the Amazon Linux OS, but it should work on other Linux boxes as well. This modifies the current user's crontab without touching the system crontabs, and it doesn't require a user field inside the crontab entry, since everything runs under your own user. Easier, and safer!
Your cron daemon is not running: when you run ps aux | grep crond, the output shows only the grep command itself. Be aware of this whenever you run ps aux | grep blah.
Check the status of the cron service by running:
sudo service crond status
Additional information here: http://www.cyberciti.biz/faq/howto-linux-unix-start-restart-cron/.
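Note that the service and daemon are named crond on Red Hat-style systems but cron on Debian/Ubuntu, which is what the question's instance runs, so on Ubuntu try:
sudo service cron status # Debian/Ubuntu service name
ps aux | grep '[c]ron' # the [c] keeps the grep process itself out of the output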
On some AWS Ubuntu EC2 machines, cron jobs cannot be edited or made to run using crontab -e or even sudo crontab -e (for whatever reason). I was able to get cron jobs working as follows:
touch /home/ubuntu/crontest.log to create a log file
sudo vim /etc/crontab which edits the system-wide crontab
add your own cron job on the second-to-last line, running as the root user, such as * * * * * root date && echo 'It works!' >> /home/ubuntu/crontest.log 2>&1, which dumps stdout and stderr into the log file you created in step 1
Verify it is working by waiting 1 minute and then cat /home/ubuntu/crontest.log to see the output of the cron job
Don't forget to specify the user to run it as. Try creating a new file inside your /etc/cron.d folder, named after what you want to do, such as getnytimes (on Debian/Ubuntu the file name must not contain a dot, or cron will skip it), and have the contents of that file be just:
02 * * * * root /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
In my case the cron job was running, but the script it invoked failed because I had used a relative path instead of an absolute path in an include line inside the script.
What did the trick for me was:
Make sure the cron service is active:
sudo service crond status
Restart the cron service:
sudo service crond restart
Reschedule the cron job as usual:
crontab -e
Running
/usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
gives me an error:
/home/ubuntu/backups/testfile: No such file or directory
Is this your issue?
I guess cron is not writing this error anywhere; you can redirect stderr to stdout and see the error like this:
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/ > /home/ubuntu/error.log 2>&1
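If that log shows the same "No such file or directory" message, a plausible root cause (an assumption based on the error above) is simply that the target directory is missing, since wget -O does not create intermediate directories:
mkdir -p /home/ubuntu/backups # create the directory the -O path points into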
