I'm trying to check for an error every hour in a daily log. Garbage collection is creating files with the ".hprof" extension, and I want to write a script that finds the "OutOfMemoryError" error in those files for the current day. The script will run once every hour, and after that I want it to e-mail me. How can I do that?
Thanks
You can use cron to schedule a script to run every hour; a quick search for cron shows how. Edit your crontab to add this entry:
0 * * * * <script_path> >/dev/null 2>&1
and for the script you can use something like this:
month1=$(date | awk '{print $2}')
day1=$(date | awk '{print $3}')
year1=$(date | awk '{print $6}')
grep "$year1" <file>.hprof | grep "$month1" | grep "$day1" | grep OutOfMemoryError | mailx -s "report" <e-mail_address>
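A slightly more defensive variant of the same idea, which only mails when a match is actually found, might look like this (a sketch; the log path and address are hypothetical placeholders, and date +%-d is GNU date syntax):
#!/bin/bash
# Hypothetical placeholders - replace with your real dump/log path and address.
logfile=/path/to/dump.hprof
rcpt=you@example.com
month=$(date +%b)    # e.g. Oct
day=$(date +%-d)     # day of month without leading zero (GNU date)
year=$(date +%Y)
matches=$(grep OutOfMemoryError "$logfile" | grep "$year" | grep "$month" | grep -w "$day")
# Mail only when something was found, so the hourly run stays quiet otherwise.
[ -n "$matches" ] && echo "$matches" | mailx -s "OutOfMemoryError report" "$rcpt"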
I have to search for how many times we have received specific exceptions on the current date. I am using the command below, but it doesn't work.
This command shows every ClassCastException or NumberFormatException that has occurred up to now.
I just want to know how many times ClassCastException or NumberFormatException occurred on today's date only.
grep $(date +"%Y-%m-%d") /*.* |grep -ioh "ClasscastException\|NumberFormatException" /logs/*.* | sort | uniq -c | sort -r
grep -ioh "ClasscastException\|NumberFormatException" /logs/*.* | sort | uniq -c | sort -r
The above command gave me the counts of ClassCastException and NumberFormatException in the log files across all dates. I just want the counts for today's date.
The first command should work after removing the /logs/*.* part from the second grep:
grep $(date +"%Y-%m-%d") /logs/*.* | grep -ioh "ClasscastException\|NumberFormatException" | sort | uniq -c | sort -r
grep works on the files given as arguments; otherwise, it defaults to stdin.
Since files were supplied as arguments to the 2nd grep in this pipeline, it was discarding the output of the 1st grep and searching for the pattern in all files under the log directory again.
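For reference, the same per-exception count for today can be done in a single pass with awk (a sketch; it assumes the same hypothetical /logs/*.* path and an ISO date in each log line, as the grep above does):
# Count today's occurrences of each exception in one pass over the logs.
awk -v today="$(date +%Y-%m-%d)" '
    index($0, today) && match($0, /ClassCastException|NumberFormatException/) {
        count[substr($0, RSTART, RLENGTH)]++
    }
    END { for (e in count) print count[e], e }
' /logs/*.*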
In order to counter a botnet attack, I am trying to analyze an nginx access.log file to find which user agents are the most frequent, so that I can find the culprits and deny them. How can I do that?
Try something like this on your access log (replace the path below with the path to your access log). Also keep in mind that some log files get zipped during rotation and a new one is created.
sudo awk -F" " '{print $1}' /var/log/nginx/access.log | sort | uniq -dc
EDIT:
Sorry, I just noticed you wanted the user agent instead of the IP:
sudo awk -F"\"" '{print $6}' /var/log/nginx/access.log | sort | uniq -dc
To sort by count descending, append | sort -nr, and to limit to the top 10, append | head -10.
So the final line would be:
sudo awk -F"\"" '{print $6}' /var/log/nginx/access.log | sort | uniq -dc | sort -nr | head -10
To get the user agent:
sudo awk -F'"' '/GET/ {print $6}' /var/log/nginx-access.log | cut -d' ' -f1 | sort | uniq -c | sort -rn
awk(1) - selects the full User-Agent string of GET requests
cut(1) - keeps only the first word of it
sort(1) - sorts the agents
uniq(1) - counts the occurrences
sort(1) - sorts by count, reversed
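Once the worst offender is known, a follow-up sketch like this (assuming the same log path and field positions as above) lists the client IPs that sent that user agent, which you can then deny:
# Find the single most frequent user agent, then list the IPs that used it.
ua=$(awk -F'"' '{print $6}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -1 | sed 's/^ *[0-9]* //')
grep -F "$ua" /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head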
By using the "ucbps" command I am able to get all PIDs:
$ ucbps
Userid PID CPU % Mem % FD Used Server Port
=========================================================================
512 5783 2.50 16.30 350 managed1_adrrtwls02 61001
512 8896 2.70 21.10 393 admin_adrrtwls02 61000
512 9053 2.70 17.10 351 managed2_adrrtwls02 61002
I want to do it like this, but don't know how:
variable=$(get pid of process by process name)
Then use this command: kill -9 $variable
If you want to kill -9 based on a string (you might want to try kill first) you can do something like this:
ps axf | grep <process name> | grep -v grep | awk '{print "kill -9 " $1}'
This will show you what you're about to kill (very, very important) and just pipe it to sh when the time comes to execute:
ps axf | grep <process name> | grep -v grep | awk '{print "kill -9 " $1}' | sh
pids=$(pgrep <name>)
will get you the pids of all processes with the given name. To kill them all, use
kill -9 $pids
To refrain from using a variable and directly kill all processes with a given name issue
pkill -9 <name>
On a single line...
pgrep -f process_name | xargs kill -9
Another possibility would be to use pidof, which usually comes with most distributions. It will return the PID of a given process by using its name:
pidof process_name
This way you could store that information in a variable and execute kill -9 on it.
#!/bin/bash
pid=$(pidof process_name)
# pidof may return nothing; only kill when a PID was found.
[ -n "$pid" ] && kill -9 $pid
First, use grep [n]ame so that you can drop the grep -v grep step. Second, piping generated commands straight into the shell, as shown above, is risky; if you go through xargs, use its prompt option (xargs -p, which asks for confirmation before running each command), otherwise you may have issues with the command.
ps axf | grep <process name> | grep -v grep | awk '{print "kill -9 " $1}' ?
ps aux | grep [n]ame | awk '{print "kill -9 " $2}' - isn't that better?
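Putting those suggestions together, a safer variant might look like this sketch, where myproc is a placeholder name and SIGTERM is tried before escalating:
# Find PIDs by full command line; exit quietly if nothing matches.
pids=$(pgrep -f myproc) || exit 0
kill $pids                   # polite SIGTERM first
sleep 2
kill -9 $pids 2>/dev/null    # escalate only for anything still alive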
I have a directory that has one file with information (call it masterfile.inc) and several files that are empty (call them file1.inc through file20.inc).
I'm trying to formulate an xargs command that copies the contents of masterfile.inc into all of the empty files.
So far I have
ls -ltr | awk '{print $9}' | grep -v masterfile | xargs -I {} cat masterfile.inc > {}
Unfortunately, all this does is create a file called {} and print masterfile.inc into it N times.
Is there something I'm missing with the syntax here?
Thanks in advance
You can use this command to copy the file 20 times:
$ tee <masterfile.inc >/dev/null file{1..20}.inc
Note: file{1..20}.inc will expand to file1.inc, file2.inc, ..., file20.inc
If your destination filenames are random:
$ shopt -s extglob
$ tee <masterfile.inc >/dev/null $(ls !(masterfile.inc))
Note: $(ls !(masterfile.inc)) will expand to all files in the current directory except masterfile.inc (please don't use spaces in filenames)
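As for why the original xargs attempt failed: the shell performs the > {} redirection before xargs ever runs, so everything lands in a literal file named {}. Running the redirection inside a child shell fixes that (a sketch using the same filenames as the question):
# The redirection now happens inside sh -c, after xargs substitutes the name.
ls | grep -v masterfile.inc | xargs -I {} sh -c 'cat masterfile.inc > "$1"' _ {}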
While the tee trick is really brilliant, you might be interested in a solution that is easier to adapt to other situations. Here is one using GNU Parallel:
ls -ltr | awk '{print $9}' | grep -v masterfile | parallel "cat masterfile.inc > {}"
It takes literally 10 seconds to install GNU Parallel:
wget pi.dk/3 -qO - | sh -x
Watch the intro videos to learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
I'm using AWS CodeDeploy, with which a server running under pm2 does not work, as explained here in the troubleshooting documentation.
I followed the documentation and, in the AfterInstall script, used node . > /dev/null 2> /dev/null < /dev/null & to run the node server in the background.
I've tried the following ways to kill the server:
fuser -k 3000/tcp
lsof -P | grep ':3000' | awk '{print $2}' | xargs kill -9
kill -9 $(lsof -t -i:3000)
but each time a new process respawns with a different PID.
How can I kill this background process and add it to the ApplicationStop script for CodeDeploy?
One of the problems with finding a PID with grep is that the grep process itself will also show up in the results, and the kill can hit it instead of the target, so try:
ps ax | grep node | grep -v grep
If it looks reasonable, review this:
ps ax | grep node | grep -v grep | awk '{print $1}'
Then run the kill:
ps ax | grep node | grep -v grep | awk '{print $1}' | xargs kill -9
pkill is a quicker option, but be careful with it: the pattern is matched against process names (use -f to match against the full command line), so make sure it doesn't match anything you did not intend to kill.
I was able to kill it using the pkill node command.
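Based on that, a minimal ApplicationStop hook could look like the sketch below. The script path is hypothetical, and the exit 0 guards against the hook failing the deployment when there is simply no server to stop:
#!/bin/bash
# scripts/stop_server.sh - hypothetical ApplicationStop script referenced from appspec.yml
# Kill any running node processes; succeed even if none is running.
pkill node || true
exit 0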