Apache Airflow runs too many processes

I have all the following processes running with Airflow. I am trying to understand why there are so many, why some are duplicated, and whether they are all needed.
I am running the LocalExecutor with Postgres, and Airflow runs under systemd as explained in the following tutorial: https://towardsdatascience.com/an-apache-airflow-mvp-complete-guide-for-a-basic-production-installation-using-localexecutor-beb10e4886b2
I want to better understand the processes in order to debug the setup, because the scheduler does not seem to be working currently. Any help would be greatly appreciated.
ubuntu 30932 0.0 0.0 76692 900 ? Ss 00:53 0:00 /lib/systemd/systemd --user
ubuntu 30933 0.0 0.1 259528 2740 ? S 00:53 0:00 (sd-pam)
root 525 0.0 0.3 107992 7280 ? Ss 01:26 0:00 sshd: ubuntu [priv]
ubuntu 619 0.0 0.3 108228 6120 ? S 01:26 0:00 sshd: ubuntu@pts/0
ubuntu 626 0.0 0.2 23380 5388 pts/0 Ss+ 01:26 0:00 -bash
root 19090 0.0 0.0 0 0 ? I 04:04 0:00 [kworker/u4:1]
root 15692 0.0 0.3 107992 7340 ? Ss 04:44 0:00 sshd: ubuntu [priv]
ubuntu 15809 0.0 0.1 107992 3684 ? S 04:44 0:00 sshd: ubuntu@pts/1
ubuntu 15810 0.0 0.2 23380 5356 pts/1 Ss+ 04:44 0:00 -bash
root 20272 0.0 0.0 0 0 ? I 04:50 0:00 [kworker/u4:0]
root 20274 0.0 0.0 0 0 ? I 04:50 0:00 [kworker/0:0]
root 20676 0.0 0.3 107992 7416 ? Ss 04:51 0:00 sshd: ubuntu [priv]
ubuntu 20783 0.0 0.3 108184 6200 ? S 04:51 0:00 sshd: ubuntu@pts/2
ubuntu 20784 0.0 0.2 23380 5376 pts/2 Ss 04:51 0:00 -bash
root 22974 0.0 0.0 0 0 ? I 04:54 0:00 [kworker/1:0]
ubuntu 23001 1.2 4.7 374404 95696 ? Ss 04:54 0:04 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23002 2.2 6.2 501048 126200 ? Ss 04:54 0:07 /usr/bin/python3 /home/ubuntu/.local/bin/airflow webserver
postgres 23030 0.0 0.8 321700 16512 ? Ss 04:54 0:00 postgres: 10/main: airflow airflow ***.*.*.*(53558) idle
postgres 23031 0.1 0.9 322608 19976 ? Ss 04:54 0:00 postgres: 10/main: airflow airflow ***.*.*.*(53560) idle
ubuntu 23038 0.6 4.2 1636692 84520 ? Sl 04:54 0:02 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23047 0.0 0.0 0 0 ? Z 04:54 0:00 [/usr/bin/python] <defunct>
ubuntu 23048 0.0 0.0 0 0 ? Z 04:54 0:00 [/usr/bin/python] <defunct>
ubuntu 23049 0.0 0.0 0 0 ? Z 04:54 0:00 [/usr/bin/python] <defunct>
ubuntu 23052 0.0 3.9 372588 78492 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23053 0.0 3.9 372588 78492 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23056 0.0 3.9 372588 78492 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23058 0.0 3.9 372588 78428 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23059 0.0 3.9 372588 78428 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23063 0.0 3.9 372588 78436 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23064 0.0 3.9 372588 78436 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23069 0.0 3.9 372588 78440 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23072 0.0 3.9 372588 78440 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23074 0.0 3.9 372588 78440 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23077 0.0 3.9 372588 78440 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23080 0.0 3.9 372588 78440 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23082 0.0 3.9 372588 78440 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23086 0.0 3.9 372588 78440 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23089 0.0 3.9 372588 78444 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23091 0.0 3.9 372588 78444 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23093 0.0 3.9 372588 78448 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23097 0.0 3.9 372588 78448 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23099 0.0 3.9 372588 78448 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23103 0.0 3.9 372588 78448 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23104 0.0 3.9 372588 78448 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23108 0.0 3.9 372588 78452 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23112 0.0 3.9 372588 78452 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23113 0.0 3.9 372588 78452 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23117 0.0 3.9 372588 78452 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23118 0.0 3.9 372588 78452 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23122 0.0 3.9 372588 78452 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23125 0.0 3.9 372588 78452 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23128 0.0 3.9 372588 78452 ? S 04:54 0:00 /usr/bin/python3 /home/ubuntu/.local/bin/airflow scheduler
ubuntu 23142 1.3 4.1 373364 82908 ? S 04:54 0:04 airflow scheduler -- DagFileProcessorManager
postgres 23179 0.0 0.9 322380 18120 ? Ss 04:54 0:00 postgres: 10/main: airflow airflow 127.0.0.1(53564) idle
ubuntu 23193 0.4 3.1 217464 63712 ? S 04:54 0:01 gunicorn: master [airflow-webserver]
ubuntu 26167 1.3 5.9 494876 119428 ? Sl 04:58 0:01 [ready] gunicorn: worker [airflow-webserver]
postgres 26181 0.0 0.8 321596 16640 ? Ss 04:58 0:00 postgres: 10/main: airflow airflow 127.0.0.1(54584) idle
ubuntu 26545 1.8 5.9 494876 119428 ? Sl 04:58 0:01 [ready] gunicorn: worker [airflow-webserver]
postgres 26559 0.0 0.8 321596 16640 ? Ss 04:58 0:00 postgres: 10/main: airflow airflow 127.0.0.1(54714) idle
ubuntu 26910 3.6 5.9 494876 119428 ? Sl 04:59 0:01 [ready] gunicorn: worker [airflow-webserver]
postgres 26924 0.0 0.8 321596 16640 ? Ss 04:59 0:00 postgres: 10/main: airflow airflow 127.0.0.1(54840) idle
ubuntu 27287 14.1 5.9 494876 119428 ? Sl 04:59 0:01 [ready] gunicorn: worker [airflow-webserver]
postgres 27301 0.0 0.8 321596 16640 ? Ss 04:59 0:00 postgres: 10/main: airflow airflow 127.0.0.1(54966) idle
ubuntu 27411 0.0 0.0 0 0 ? Z 05:00 0:00 [airflow schedul] <defunct>
ubuntu 27414 0.0 0.0 0 0 ? Z 05:00 0:00 [airflow schedul] <defunct>
ubuntu 27423 0.0 0.1 40268 3884 pts/2 R+ 05:00 0:00 ps -aux --sort start_time

Possible duplicate of the following question:
Running `airflow scheduler` launches 33 scheduler processes
Having this many processes is the default behavior, but you can change it in airflow.cfg by adjusting the parallelism value.
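For example, in airflow.cfg (a minimal sketch; the values are illustrative, and on Airflow 1.10.x the [scheduler] key is max_threads rather than parsing_processes):
[core]
# upper bound on task instances that may run concurrently; with LocalExecutor
# this is also the size of the pre-forked worker pool
parallelism = 4
[scheduler]
# number of DAG-parsing subprocesses the DagFileProcessorManager forks
parsing_processes = 2
The long list of identical "airflow scheduler" entries in the ps output above is most likely these two pools: the LocalExecutor workers plus the DAG-parsing subprocesses, all forked from the scheduler and therefore sharing its command line.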

Related

Nginx and uwsgi setting for heavy calculation site?

I am using nginx and uwsgi (Django) for a site on AWS Fargate.
This program does a fairly heavy calculation task, so I guess I should do some tuning of uwsgi or nginx.
I start uwsgi in the Django container with multiple processes and threads:
uwsgi --http :8011 --processes 8 --threads 8 --module mysite.wsgi
and do nothing special for nginx
server {
    listen 80;
    server_name mysite;
    charset utf-8;

    location / {
        proxy_pass http://127.0.0.1:8011/;
        include /etc/nginx/uwsgi_params;
    }
}
With this setting the program works, but even after the heavy task finishes, server responses are still very slow.
I checked with the top command; even after the task finished, not much memory was free.
top - 20:15:21 up 24 min, 0 users, load average: 4.65, 4.00, 1.91
Tasks: 15 total, 1 running, 11 sleeping, 0 stopped, 3 zombie
%Cpu(s): 0.0 us, 0.3 sy, 0.0 ni, 99.5 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3703.8 total, 176.6 free, 1095.6 used, 2431.5 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 2382.4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 5724 284 4 S 0.0 0.0 0:00.19 bash
9 root 20 0 1398196 6260 0 S 0.0 0.2 0:00.50 amazon-ssm-agen
24 root 20 0 1410428 11772 0 S 0.0 0.3 0:00.85 ssm-agent-worke
48 root 20 0 622428 37900 1168 S 0.0 1.0 0:00.38 uwsgi
49 root 20 0 47468 1396 36 S 0.0 0.0 0:00.18 uwsgi
50 root 20 0 0 0 0 Z 0.0 0.0 0:05.75 uwsgi
56 root 20 0 1755900 424896 177332 S 0.0 11.2 0:14.92 uwsgi
59 root 20 0 1754360 365476 120228 S 0.0 9.6 0:07.77 uwsgi
60 root 20 0 0 0 0 Z 0.0 0.0 0:13.31 uwsgi
68 root 20 0 622428 36788 56 S 0.0 1.0 0:00.00 uwsgi
69 root 20 0 1755600 373404 125260 S 0.0 9.8 0:09.18 uwsgi
77 root 20 0 0 0 0 Z 0.0 0.0 0:02.90 uwsgi
129 root 20 0 1327588 10376 0 S 0.0 0.3 0:10.33 ssm-session-wor
139 root 20 0 5988 2288 1764 S 0.0 0.1 0:00.64 bash
261 root 20 0 8900 3648 3136 R 0.0 0.1 0:00.00 top
I guess this means the task's memory is not being freed correctly?
Where should I check, and what should I tune?
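No answer is recorded for this one, but a common first step (a sketch using stock uwsgi options, not advice from the thread; the thresholds are illustrative) is to recycle workers so that memory held after heavy requests is returned to the OS:
# respawn each worker after 500 requests or once its RSS passes ~512 MB,
# and abort requests stuck for longer than 120 s
uwsgi --http :8011 --processes 8 --threads 8 \
      --master \
      --max-requests 500 \
      --reload-on-rss 512 \
      --harakiri 120 \
      --module mysite.wsgi
Running with --master also lets uwsgi reap and respawn dead workers, which is worth checking given the zombie uwsgi entries in the top output above.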

Running the `mtr` network diagnostic tool in the background, like `nohup`'ed processes

mtr is a great tool for debugging network packet loss. Here is a sample of mtr output:
My traceroute [v0.85]
myserver.com (0.0.0.0) Thu Jan 19 04:10:04 2017
Resolver: Received error response 2. (server failure)
Keys: Help   Display mode   Restart statistics   Order of fields   quit
Packets Pings
Host Loss% Snt Last Avg Best Wrst StDev
1. 192.168.104.23 0.0% 11 0.6 0.6 0.5 0.8 0.0
2. machine1.com 0.0% 11 8.5 12.4 2.0 20.5 5.5
3. machine2.org.com 0.0% 11 1.2 1.0 0.8 1.8 0.0
4. machine3.org.com 0.0% 11 0.8 0.9 0.7 1.1 0.0
However, while mtr is running interactively, you can't log off the server.
I need mtr to write its output to a text file and run in the background, similar to a nohup'ed command.
I should also be able to look at the report as it grows, something like using tail -f on the output file.
mtr offers the -r option, which puts mtr into report mode. In this mode, mtr will run for the number of cycles specified by the -c option, then print statistics and exit. So we can create a script that runs the command, and add a cron entry for the script on your schedule. For example:
/usr/sbin/mtr -r -c 2 www.google.com >> /home/mtr.log
Cron entry, run every minute:
* * * * * sh /path/to/script
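The script referenced by the cron entry can be a one-line wrapper (a sketch; the path and log file are illustrative):
#!/bin/sh
# /path/to/script -- append one timestamped mtr report per run
date >> /home/mtr.log
/usr/sbin/mtr -r -c 2 www.google.com >> /home/mtr.log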
Then you can tail -f on the output file.
If systemd is used, a transient timer can do the same:
┌──[root@vms81.liruilongs.github.io]-[~]
└─$systemd-run --on-calendar=*:*:00 --unit mtr-print-log --slice mtr /usr/sbin/mtr -r -b 192.168.29.154
Running timer as unit mtr-print-log.timer.
Will run service as unit mtr-print-log.service.
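You can confirm the transient timer is in place with a standard systemd command:
systemctl list-timers mtr-print-log.timer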
Viewing mtr logs
┌──[root@vms81.liruilongs.github.io]-[~]
└─$journalctl -u mtr-print-log.service
-- Logs begin at Sat 2022-12-24 21:56:02 CST, end at Sat 2022-12-24 22:10:19 CST. --
Dec 24 22:07:00 vms81.liruilongs.github.io systemd[1]: Started /usr/sbin/mtr -r -b 192.168.29.154.
Dec 24 22:07:14 vms81.liruilongs.github.io mtr[15427]: Start: Sat Dec 24 22:07:00 2022
Dec 24 22:07:14 vms81.liruilongs.github.io mtr[15427]: HOST: vms81.liruilongs.github.io Loss% Snt Last Avg Best Wrst StDev
Dec 24 22:07:14 vms81.liruilongs.github.io mtr[15427]: 1.|-- gateway (192.168.26.2) 0.0% 10 0.4 0.3 0.2 0.5 0.0
Dec 24 22:07:14 vms81.liruilongs.github.io mtr[15427]: 2.|-- 192.168.29.154 0.0% 10 1.5 0.9 0.7 1.5 0.0
Dec 24 22:08:00 vms81.liruilongs.github.io systemd[1]: Started /usr/sbin/mtr -r -b 192.168.29.154.
Dec 24 22:08:14 vms81.liruilongs.github.io mtr[16400]: Start: Sat Dec 24 22:08:00 2022
Dec 24 22:08:14 vms81.liruilongs.github.io mtr[16400]: HOST: vms81.liruilongs.github.io Loss% Snt Last Avg Best Wrst StDev
Dec 24 22:08:14 vms81.liruilongs.github.io mtr[16400]: 1.|-- gateway (192.168.26.2) 0.0% 10 0.3 0.3 0.2 0.4 0.0
Dec 24 22:08:14 vms81.liruilongs.github.io mtr[16400]: 2.|-- 192.168.29.154 0.0% 10 1.0 1.0 0.7 1.4 0.0
Dec 24 22:09:00 vms81.liruilongs.github.io systemd[1]: Started /usr/sbin/mtr -r -b 192.168.29.154.
Dec 24 22:09:14 vms81.liruilongs.github.io mtr[17411]: Start: Sat Dec 24 22:09:00 2022
Dec 24 22:09:14 vms81.liruilongs.github.io mtr[17411]: HOST: vms81.liruilongs.github.io Loss% Snt Last Avg Best Wrst StDev
Dec 24 22:09:14 vms81.liruilongs.github.io mtr[17411]: 1.|-- gateway (192.168.26.2) 0.0% 10 0.3 0.3 0.3 0.5 0.0
Dec 24 22:09:14 vms81.liruilongs.github.io mtr[17411]: 2.|-- 192.168.29.154 0.0% 10 0.9 0.9 0.7 1.3 0.0
If you only want to see the output and the execution times, you can use:
┌──[root@vms81.liruilongs.github.io]-[~]
└─$journalctl -u mtr-print-log.service -o cat | tail -n 10
Started /usr/sbin/mtr -r -b 192.168.29.154.
Start: Sat Dec 24 22:13:00 2022
HOST: vms81.liruilongs.github.io Loss% Snt Last Avg Best Wrst StDev
1.|-- gateway (192.168.26.2) 0.0% 10 0.2 0.3 0.2 0.5 0.0
2.|-- 192.168.29.154 0.0% 10 0.8 0.8 0.7 1.1 0.0
Started /usr/sbin/mtr -r -b 192.168.29.154.
Start: Sat Dec 24 22:14:00 2022
HOST: vms81.liruilongs.github.io Loss% Snt Last Avg Best Wrst StDev
1.|-- gateway (192.168.26.2) 0.0% 10 0.3 0.3 0.2 0.4 0.0
2.|-- 192.168.29.154 0.0% 10 0.9 0.8 0.7 1.0 0.0
To stop the recurring mtr runs, stop the timer:
┌──[root@vms81.liruilongs.github.io]-[~]
└─$systemctl stop mtr-print-log.timer
┌──[root@vms81.liruilongs.github.io]-[~]
└─$systemctl is-active mtr-print-log.service
unknown

Port forwarding: docker -> vagrant -> host

I have a Docker container with an FTP service running inside a Vagrant machine, and the Vagrant machine runs on a macOS host. The Docker FTP service is accessible from the Vagrant machine via ftp localhost, but how can I expose it to the Mac host? The Mac -> Vagrant network is NAT, so I set up a 21:21 port forward between the Mac host and Vagrant, but when I run ftp localhost on the host it doesn't work. :'( What am I doing wrong?
This is part of the output of ps aux in the vagrant machine:
root 7841 0.0 0.5 113612 8948 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1108 -container-ip 172.17.0.1 -container-port 1108
root 7849 0.0 0.6 121808 10176 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1107 -container-ip 172.17.0.1 -container-port 1107
root 7857 0.0 0.7 154592 11212 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1106 -container-ip 172.17.0.1 -container-port 1106
root 7869 0.0 0.7 154592 12212 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1105 -container-ip 172.17.0.1 -container-port 1105
root 7881 0.0 0.6 113612 10172 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1104 -container-ip 172.17.0.1 -container-port 1104
root 7888 0.0 0.7 162788 11192 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1103 -container-ip 172.17.0.1 -container-port 1103
root 7901 0.0 0.6 121808 10156 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1102 -container-ip 172.17.0.1 -container-port 1102
root 7909 0.0 0.6 154592 9216 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1101 -container-ip 172.17.0.1 -container-port 1101
root 7921 0.0 0.5 121808 9196 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1100 -container-ip 172.17.0.1 -container-port 1100
root 7929 0.0 0.7 162788 12244 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 21 -container-ip 172.17.0.1 -container-port 21
root 7942 0.0 0.5 121808 8936 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 20 -container-ip 172.17.0.1 -container-port 20
message+ 7961 0.0 0.3 111224 5248 ? Ss 12:35 0:00 proftpd: (accepting connections)
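No answer is recorded here, but the usual catch with FTP is that it opens separate data connections beyond port 21, so forwarding the control port alone is not enough. A sketch of the Vagrant side, assuming a standard Vagrantfile; the passive-port range 1100-1108 is read off the docker-proxy lines above, and the host ports are illustrative:
# Vagrantfile (sketch) -- forward FTP control, data, and passive ports to the Mac
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 21, host: 2121   # control (host ports above 1024 avoid needing root)
  config.vm.network "forwarded_port", guest: 20, host: 2020   # active-mode data
  (1100..1108).each do |port|                                 # passive-mode data range
    config.vm.network "forwarded_port", guest: port, host: port
  end
end
The FTP server must also advertise an address and passive range the Mac can actually reach; in proftpd those are the MasqueradeAddress and PassivePorts directives.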

What is the cause of high CPU for the given result in CentOS? [closed]

I am using CentOS.
When I run the command free -m, it shows me the following:
total used free shared buffers cached
Mem: 2048 373 1674 10 0 147
-/+ buffers/cache: 225 1822
Swap: 0 0 0
I ran the command top and got the result below:
top - 07:08:01 up 16:09, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 39 total, 1 running, 38 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2097152k total, 381024k used, 1716128k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 150200k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 19236 1452 1212 S 0.0 0.1 0:00.02 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd/23354
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khelper/23354
147 root 16 -4 10644 668 400 S 0.0 0.0 0:00.00 udevd
453 root 20 0 179m 1512 1056 S 0.0 0.1 0:00.27 rsyslogd
489 root 20 0 66692 1296 536 S 0.0 0.1 0:00.03 sshd
497 root 20 0 22192 972 716 S 0.0 0.0 0:00.00 xinetd
658 root 20 0 66876 1028 312 S 0.0 0.0 0:00.00 saslauthd
659 root 20 0 66876 764 48 S 0.0 0.0 0:00.00 saslauthd
731 root 20 0 114m 1260 620 S 0.0 0.1 0:00.24 crond
835 ossecm 20 0 10512 492 312 S 0.0 0.0 0:00.32 ossec-maild
839 root 20 0 13088 960 712 S 0.0 0.0 0:00.00 ossec-execd
843 ossec 20 0 12780 2380 620 S 0.0 0.1 0:10.15 ossec-analysisd
847 root 20 0 4200 444 304 S 0.0 0.0 0:00.84 ossec-logcollec
858 root 20 0 5004 1484 468 S 0.0 0.1 0:07.06 ossec-syscheckd
862 ossec 20 0 6388 624 372 S 0.0 0.0 0:00.03 ossec-monitord
870 root 20 0 92420 21m 1620 S 0.0 1.0 0:01.21 miniserv.pl
4363 root 20 0 96336 4448 3464 S 0.0 0.2 0:00.10 sshd
4365 root 20 0 105m 2024 1532 S 0.0 0.1 0:00.03 bash
4615 root 20 0 96776 4936 3460 S 0.0 0.2 0:00.61 sshd
4617 root 20 0 105m 2052 1548 S 0.0 0.1 0:00.20 bash
4674 root 20 0 96336 4452 3460 S 0.0 0.2 0:00.22 sshd
4676 root 20 0 105m 2012 1532 S 0.0 0.1 0:00.06 bash
7494 root 20 0 96336 4404 3428 S 0.0 0.2 0:00.03 sshd
7496 root 20 0 57712 2704 2028 S 0.0 0.1 0:00.01 sftp-server
7719 root 20 0 83116 2700 836 S 0.0 0.1 0:00.10 sendmail
7728 smmsp 20 0 78692 2128 636 S 0.0 0.1 0:00.00 sendmail
7742 root 20 0 402m 14m 7772 S 0.0 0.7 0:00.13 httpd
7744 asterisk 20 0 502m 22m 10m S 0.0 1.1 0:00.11 httpd
7938 root 20 0 105m 756 520 S 0.0 0.0 0:00.00 safe_asterisk
7940 asterisk 20 0 3157m 26m 8508 S 0.0 1.3 0:07.14 asterisk
8066 root 20 0 105m 1568 1304 S 0.0 0.1 0:00.01 mysqld_safe
8168 mysql 20 0 499m 21m 6472 S 0.0 1.1 0:01.44 mysqld
8607 asterisk 20 0 402m 8288 1404 S 0.0 0.4 0:00.00 httpd
8608 asterisk 20 0 402m 8288 1404 S 0.0 0.4 0:00.00 httpd
8611 asterisk 20 0 402m 8284 1400 S 0.0 0.4 0:00.00 httpd
8615 asterisk 20 0 402m 8296 1412 S 0.0 0.4 0:00.00 httpd
Even when I try disabling the services asterisk, httpd, sendmail, and mysqld, it still shows 100% CPU usage.
Does anybody know how I can check what is actually taking this much CPU?
The CPU usage line in your top output says:
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Your CPU is 100% idle, not 100% busy. Here is what each field means:
us: user cpu time (or) % CPU time spent in user space
sy: system cpu time (or) % CPU time spent in kernel space
ni: user nice cpu time (or) % CPU time spent on low priority processes
id: idle cpu time (or) % CPU time spent idle
wa: io wait cpu time (or) % CPU time spent in wait (on disk)
hi: hardware irq (or) % CPU time spent servicing/handling hardware interrupts
si: software irq (or) % CPU time spent servicing/handling software interrupts
st: steal time (or) % CPU time stolen from the virtual machine: time the virtual CPU spends in involuntary wait while the hypervisor services another processor
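If you want to watch these fields over time rather than in a single top snapshot, one option is mpstat from the sysstat package (which may need installing first):
# five one-second samples, broken down per CPU
mpstat -P ALL 1 5
# or inspect the raw counters that top derives these percentages from
head -2 /proc/stat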

Thousands of instances of index.php opening at the same time

Suddenly my hosting account has been suspended due to thousands of instances of index.php opening at the same time.
The site is built on the latest versions of WordPress and bbPress. Here's the email from the hosting company:
Action Taken: Please be aware we have suspended this account at this time in order to maintain the reliability and integrity of the server.
Reason: Thousands of instances of index.php opening at the same time:
17270 myserver 15 0 268m 79m 52m R 17.5 2.0 0:00.38 /usr/bin/php /home/myserver/public_html/index.php
17287 myserver 16 0 268m 34m 8712 R 14.4 0.9 0:00.35 /usr/bin/php /home/myserver/public_html/index.php
17332 myserver 15 0 213m 26m 7680 S 12.9 0.7 0:00.17 /usr/bin/php /home/myserver/public_html/index.php
17276 myserver 16 0 283m 40m 7912 R 12.1 1.0 0:00.33 /usr/bin/php /home/myserver/public_html/index.php
17336 myserver 17 0 213m 26m 7680 S 12.1 0.7 0:00.16 /usr/bin/php /home/myserver/public_html/index.php
17341 myserver 18 0 213m 26m 7680 S 12.1 0.7 0:00.16 /usr/bin/php /home/myserver/public_html/index.php
17343 myserver 16 0 213m 26m 7680 S 12.1 0.7 0:00.16 /usr/bin/php /home/myserver/public_html/index.php
17339 myserver 17 0 213m 26m 7680 S 11.4 0.7 0:00.15 /usr/bin/php /home/myserver/public_html/index.php
17344 myserver 17 0 213m 26m 7680 S 11.4 0.7 0:00.15 /usr/bin/php /home/myserver/public_html/index.php
17347 myserver 17 0 213m 26m 7680 S 11.4 0.7 0:00.15 /usr/bin/php /home/myserver/public_html/index.php
17351 myserver 16 0 213m 26m 7680 S 11.4 0.7 0:00.15 /usr/bin/php /home/myserver/public_html/index.php
17353 myserver 17 0 213m 26m 7680 S 11.4 0.7 0:00.15 /usr/bin/php /home/myserver/public_html/index.php
17364 myserver 17 0 213m 26m 7680 S 11.4 0.7 0:00.15 /usr/bin/php /home/myserver/public_html/index.php
17368 myserver 17 0 209m 23m 7388 R 10.6 0.6 0:00.14 /usr/bin/php /home/myserver/public_html/index.php
17278 myserver 16 0 283m 40m 7896 R 9.9 1.0 0:00.28 /usr/bin/php /home/myserver/public_html/index.php
They have just emailed this too:
It is possible that your forum script is being abused if it is not secured or it has some security hole, but we can't provide more information as we do not know how it is coded.
Please check and let us know if you have any further questions.
Any ideas at what's going on?
You may have gotten DoS'd.
Exactly what dav said, or for some reason you are getting an insane load. To prevent that from happening again, you can cache your WordPress site with a plugin like WP Super Cache to serve semi-static pages, and filter spam comments before they hit PHP, because every single page load means loading index.php.
It seems the problem is sites getting indexed all at once, especially by crawlers like Yandex/Baidu that load up multiple pages at a time.
Every page load by a bot is another instance of index.php opening, so if you have 2000 pages on the site and they all get indexed at once, this is what you get.
You can try adding the following to your robots.txt (it might or might not work, since not all crawlers honor it):
User-agent: *
Crawl-Delay: 30
Disallow: /wp-admin/
User-agent: Yandex
Crawl-Delay: 30
User-agent: Baiduspider
Crawl-Delay: 30
Or just block the crawlers' IPs (100% guaranteed).
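If robots.txt is ignored, blocking by user agent is an alternative to chasing crawler IPs. A sketch assuming Apache with mod_rewrite enabled; the pattern is illustrative:
# .htaccess -- return 403 Forbidden to the named crawlers
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (Baiduspider|YandexBot) [NC]
RewriteRule . - [F,L]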
