I am using an EC2 instance to run a Node app. I logged into the server after a while, only to realise it had run out of disk space. After debugging, I found that logs were taking up the space, so I deleted a 3.3 GB log file. However, even after the cleanup there is still no free space. What should I do?
Here are the commands I ran:
ubuntu@app1:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 488M 0 488M 0% /dev
tmpfs 100M 11M 89M 11% /run
/dev/xvda1 7.7G 7.7G 0 100% /
tmpfs 496M 8.0K 496M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/1001
tmpfs 100M 0 100M 0% /run/user/1000
ubuntu@app1:~$ sudo du -h --max-depth=1 / | sort -n
0 /proc
0 /sys
4.0K /lib64
4.0K /media
4.0K /mnt
4.0K /srv
8.0K /dev
8.0K /snap
16K /lost+found
24K /root
800K /tmp
6.4M /etc
11M /run
14M /sbin
16M /bin
246M /boot
331M /home
397M /opt
429M /var
538M /lib
2.1G /usr
3.7G /data
7.7G /
I deleted a 3.3G log file in /data and ran du again
ubuntu@app1:~$ sudo du -h --max-depth=1 / | sort -h
0 /proc
0 /sys
4.0K /lib64
4.0K /media
4.0K /mnt
4.0K /srv
8.0K /dev
8.0K /snap
16K /lost+found
24K /root
800K /tmp
6.4M /etc
11M /run
14M /sbin
16M /bin
246M /boot
331M /home
352M /data
397M /opt
429M /var
538M /lib
2.1G /usr
4.4G /
ubuntu@app1:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 488M 0 488M 0% /dev
tmpfs 100M 11M 89M 11% /run
/dev/xvda1 7.7G 7.7G 0 100% /
tmpfs 496M 8.0K 496M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/1001
tmpfs 100M 0 100M 0% /run/user/1000
Although the /data directory is now reduced to 352M, df still shows 100% disk utilization. What am I missing?
Referring to this answer https://unix.stackexchange.com/a/253655/47050, here is the output of strace
ubuntu@app1:~$ strace -e statfs df /
statfs("/", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=2016361, f_bfree=4096, f_bavail=0, f_files=1024000, f_ffree=617995, f_fsid={2136106470, -680157247}, f_namelen=255, f_frsize=4096, f_flags=4128}) = 0
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8065444 8049060 0 100% /
+++ exited with 0 +++
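Side note on that statfs output: df's Avail column comes from f_bavail, which excludes the blocks ext2/3/4 reserves for root, while f_bfree counts all free blocks. Here f_bfree is 4096 blocks (16 MB), all of it inside the reserve, hence Avail is 0. A minimal check of the reserve size, assuming the root filesystem is on /dev/xvda1 as shown above:
sudo tune2fs -l /dev/xvda1 | grep -i 'reserved block count'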
UPDATE
I ran
sudo lsof | grep deleted
and found
node\x20/ 22318 deploy 12w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
node\x20/ 22318 deploy 13w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
node\x20/ 22318 deploy 14w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
node\x20/ 22318 deploy 15w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
node\x20/ 22318 deploy 16w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
How do I release these files?
UPDATE 2
ubuntu@app1:~$ sudo ls -l /proc/22318/fd
total 0
lrwx------ 1 deploy deploy 64 Apr 6 10:05 0 -> socket:[74749956]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 1 -> socket:[74749958]
lr-x------ 1 deploy deploy 64 Apr 6 10:05 10 -> /dev/null
l-wx------ 1 deploy deploy 64 Apr 6 10:05 12 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 13 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 14 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 15 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 16 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 17 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 18 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 19 -> /data/app/shared/logs/production.log (deleted)
lrwx------ 1 deploy deploy 64 Apr 6 10:05 2 -> socket:[74749960]
l-wx------ 1 deploy deploy 64 Apr 6 10:05 20 -> /data/app/shared/logs/production.log (deleted)
lrwx------ 1 deploy deploy 64 Apr 6 10:05 21 -> socket:[74750302]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 22 -> socket:[74750303]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 3 -> socket:[74749962]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 4 -> anon_inode:[eventpoll]
lr-x------ 1 deploy deploy 64 Apr 6 10:05 5 -> pipe:[74749978]
l-wx------ 1 deploy deploy 64 Apr 6 10:05 6 -> pipe:[74749978]
lr-x------ 1 deploy deploy 64 Apr 6 10:05 7 -> pipe:[74749979]
l-wx------ 1 deploy deploy 64 Apr 6 10:05 8 -> pipe:[74749979]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 9 -> anon_inode:[eventfd]
ubuntu@app1:~$ ps aux | grep node
deploy 22318 0.0 12.7 1277192 129844 ? Ssl 2019 173:38 node /data/app/releases/20180904094535/app.js
ubuntu 30665 0.0 0.0 12944 972 pts/0 S+ 10:09 0:00 grep --color=auto node
The files were being held open by the node application. Determined using:
sudo lsof | grep deleted
Restarting the node application solved my problem.
Find the node process ID with ps aux | grep node, then kill the node server with kill -9 <process_id>, and finally restart it. In my case pm2 automatically restarted node.
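If a restart is not an option, the space held by a deleted-but-open file can usually be reclaimed by truncating it through /proc. A minimal sketch, reusing PID 22318 and fd 12 from the output above:
# list open files whose link count is 0, i.e. deleted but still held open
sudo lsof +L1
# truncate the deleted file in place; the process keeps its descriptor
sudo truncate -s 0 /proc/22318/fd/12
The blocks are freed immediately, but the process keeps appending to the (still deleted) file, so proper log rotation, or the restart described above, remains the long-term fix.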
Related
My Nginx server threw the warning:
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
I created nginx-group and nginx-user
groupadd --system --gid 7447 nginx-group
adduser --system --gid 7447 --uid 7447 nginx-user
The Nginx process is owned by my user "nginx-user":
bash-4.2$ ps -ef | grep [n]ginx
nginx-u+ 1 0 0 08:05 ? 00:00:00 nginx: master process nginx -g daemon off;
nginx-u+ 7 1 0 08:05 ? 00:00:00 nginx: worker process
The /etc/nginx folder and its contents are owned by the root user:
bash-4.2$ ls -la /etc/nginx
total 20
drwxr-xr-x 1 root root 20 Aug 1 07:33 .
drwxr-xr-x 1 root root 41 Aug 1 08:05 ..
drwxr-xr-x 1 root root 54 Aug 1 07:33 conf.d
-rw-r--r-- 1 root root 1007 May 24 15:35 fastcgi_params
-rw-r--r-- 1 root root 3957 Aug 1 07:05 mime.types
lrwxrwxrwx 1 root root 29 Aug 1 07:33 modules -> ../../usr/lib64/nginx/modules
-rw-r--r-- 1 root root 2200 Aug 1 07:05 nginx.conf
-rw-r--r-- 1 root root 636 May 24 15:35 scgi_params
drwxr-xr-x 1 root root 22 Aug 1 07:33 sites-available
drwxr-xr-x 1 root root 22 Aug 1 07:33 sites-enabled
-rw-r--r-- 1 root root 664 May 24 15:35 uwsgi_params
Can anyone help to fix this warning?
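For what it's worth, the warning is informational: nginx only honours the user directive when the master process starts as root, and here the master already runs as nginx-user, so the directive is simply ignored. A minimal sketch of silencing it, assuming the directive sits on line 1 of /etc/nginx/nginx.conf as the warning says:
# comment out the ignored "user" directive, then re-check the config
sudo sed -i 's/^user /# user /' /etc/nginx/nginx.conf
sudo nginx -t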
I followed the steps to set up an nginx server. After I created example.com.config and a symbolic link for each server block in the sites-enabled directory, nginx can't start.
I can't restart the nginx service. It shows the following message when I enter:
$ sudo systemctl restart nginx.service
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xe" for details.
$ sudo systemctl status nginx.service -l shows the following message:
● nginx.service - nginx - high performance web server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2017-06-05 04:26:05 EDT; 1min 27s ago
Docs: http://nginx.org/en/docs/
Process: 4776 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=1/FAILURE)
Process: 11491 ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf (code=exited, status=1/FAILURE)
Jun 05 04:26:05 localhost.localdomain systemd[1]: Starting nginx - high performance web server...
Jun 05 04:26:05 localhost.localdomain nginx[11491]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Jun 05 04:26:05 localhost.localdomain nginx[11491]: nginx: [emerg] open() "/var/run/nginx.pid" failed (13: Permission denied)
Jun 05 04:26:05 localhost.localdomain nginx[11491]: nginx: configuration file /etc/nginx/nginx.conf test failed
Jun 05 04:26:05 localhost.localdomain systemd[1]: nginx.service: control process exited, code=exited status=1
Jun 05 04:26:05 localhost.localdomain systemd[1]: Failed to start nginx - high performance web server.
Jun 05 04:26:05 localhost.localdomain systemd[1]: Unit nginx.service entered failed state.
Jun 05 04:26:05 localhost.localdomain systemd[1]: nginx.service failed.
PS: I am running CentOS 7 on VirtualBox.
Please help me. Thank you.
Here's the output when I run ls -lart /var/run/
total 56
dr-xr-xr-x. 17 root root 233 Jun 2 05:37 ..
drwxr-xr-x. 2 root root 60 Jun 6 01:21 tmpfiles.d
drwxr-xr-x. 3 root root 60 Jun 6 01:21 log
drwxr-xr-x. 2 root root 60 Jun 6 01:21 mount
drwxr-xr-x. 4 root root 120 Jun 6 01:21 initramfs
prw-------. 1 root root 0 Jun 6 01:21 dmeventd-server
prw-------. 1 root root 0 Jun 6 01:21 dmeventd-client
drwx------. 2 root root 80 Jun 6 01:21 lvm
-rw-r--r--. 1 root root 4 Jun 6 01:21 lvmetad.pid
drwxr-xr-x. 2 root root 60 Jun 6 01:21 sysconfig
drwxr-xr-x. 2 root root 40 Jun 6 01:21 samba
drwxr-xr-x. 2 root root 40 Jun 6 01:21 setrans
drwxrwxr-x. 2 root root 40 Jun 6 01:21 netreport
drwxr-xr-x. 2 root root 40 Jun 6 01:21 faillock
drwx------. 2 rpc rpc 40 Jun 6 01:21 rpcbind
drwxr-xr-x. 2 root root 40 Jun 6 01:21 ppp
drwxrwxr-x. 3 root libstoragemgmt 60 Jun 6 01:21 lsm
drwxr-xr-x. 2 root root 40 Jun 6 01:21 spice-vdagentd
drwxr-xr-x. 2 root root 40 Jun 6 01:21 sepermit
drwxr-xr-x. 2 radvd radvd 40 Jun 6 01:21 radvd
drwx--x---. 2 root root 40 Jun 6 01:21 mdadm
drwxr-xr-x. 2 root root 40 Jun 6 01:21 certmonger
drwx--x--x. 2 setroubleshoot setroubleshoot 40 Jun 6 01:21 setroubleshoot
-rw-r--r--. 1 root root 4 Jun 6 01:21 auditd.pid
drwxr-xr-x. 2 root root 60 Jun 6 01:21 dbus
srw-rw-rw-. 1 root root 0 Jun 6 01:21 rpcbind.sock
drwxr-xr-x. 3 root lp 80 Jun 6 01:21 cups
drwxr-xr-x. 2 avahi avahi 80 Jun 6 01:21 avahi-daemon
-rw-------. 1 root root 11 Jun 6 01:21 alsactl.pid
-rw-r--r--. 1 root root 4 Jun 6 01:21 chronyd.pid
-rw-r--r--. 1 root root 4 Jun 6 01:21 ksmtune.pid
drwxr-xr-x. 2 root root 100 Jun 6 01:21 abrt
-rw-------. 1 root root 4 Jun 6 01:21 gssproxy.pid
srw-rw-rw-. 1 root root 0 Jun 6 01:21 gssproxy.sock
-rw-------. 1 root root 0 Jun 6 01:21 xtables.lock
drwxr-x---. 2 root root 40 Jun 6 01:21 firewalld
-rw-r--r--. 1 root root 4 Jun 6 01:21 dhclient-enp0s3.pid
drwxr-xr-x. 2 root root 80 Jun 6 01:21 NetworkManager
-rw-------. 1 root root 5 Jun 6 01:21 sm-notify.pid
drwxr-xr-x. 7 root root 180 Jun 6 01:21 lock
-rw-------. 1 root root 5 Jun 6 01:21 syslogd.pid
-rw-r--r--. 1 root root 5 Jun 6 01:21 sshd.pid
-rw-r--r--. 1 root root 5 Jun 6 01:21 crond.pid
-rw-r--r--. 1 root root 5 Jun 6 01:21 atd.pid
-rw-r--r--. 1 root root 4 Jun 6 01:21 libvirtd.pid
----------. 1 root root 0 Jun 6 01:21 cron.reboot
drwxr-xr-x. 2 root root 60 Jun 6 01:21 tuned
drwxr-xr-x. 6 root root 220 Jun 6 01:21 libvirt
drwxr-xr-x. 2 root root 40 Jun 6 01:21 plymouth
drwx------. 2 root root 40 Jun 6 01:22 udisks2
drwxr-xr-x. 2 root root 80 Jun 6 01:24 console
drwx--x--x. 4 root gdm 120 Jun 6 01:24 gdm
drwxr-xr-x. 3 root root 60 Jun 6 01:24 user
-rw-rw-r--. 1 root utmp 1536 Jun 6 01:24 utmp
drwxr-xr-x. 7 root root 160 Jun 6 01:24 udev
drwxr-xr-x. 17 root root 420 Jun 6 01:25 systemd
drwxr-xr-x. 37 root root 1140 Jun 6 01:25 .
ps -eaf | grep nginx
root 698 685 0 01:21 ? 00:00:00 runsv nginx
root 748 698 0 01:21 ? 00:00:00 svlogd -tt /var/log/gitlab/ngin
root 749 698 0 01:21 ? 00:00:00 nginx: master process /opt/gitlab/embedded/sbin/nginx -p /var/opt/gitlab/nginx
gitlab-+ 800 749 0 01:21 ? 00:00:00 nginx: worker process
gitlab-+ 801 749 0 01:21 ? 00:00:00 nginx: cache manager process
yen 6683 3840 0 01:44 pts/0 00:00:00 grep --color=auto nginx
Output of ps -eaf | grep nginx and netstat -tulpn | grep 80, before and after.
Here's the output of ps -eaf | grep nginx
root 669 1 0 21:50 ? 00:00:00 runsvdir -P /opt/gitlab/service log: svlogd: warning: unable to lock directory: /var/log/gitlab/nginx: temporary failure svlogd: fatal: no functional log directories. svlogd: warning: unable to lock directory: /var/log/gitlab/nginx: temporary failure svlogd: fatal: no functional log directories. svlogd: warning: unable to lock directory: /var/log/gitlab/nginx: temporary failure svlogd: fatal: no functional log directories. .....
root 4333 669 0 21:57 ? 00:00:00 runsv nginx
root 4348 4333 0 21:57 ? 00:00:00 svlogd -tt /var/log/gitlab/nginx
root 4374 4333 0 21:57 ? 00:00:00 nginx: master process /opt/gitlab/embedded/sbin/nginx -p /var/opt/gitlab/nginx
gitlab-+ 4381 4374 0 21:57 ? 00:00:00 nginx: worker process
gitlab-+ 4382 4374 0 21:57 ? 00:00:00 nginx: cache manager process
yen 14156 4094 0 23:13 pts/0 00:00:00 grep --color=auto nginx
Check your error log with sudo cat /var/log/nginx/error.log | less
You can also view it with WinSCP by opening the path /var/log/nginx/
In my case there was a syntax error.
In my server configuration file I had: server_name {api.mydomain.com};
The correct form is: server_name api.mydomain.com;
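Note that in the logs above the syntax check itself passes; the [emerg] is about opening /var/run/nginx.pid, and the ps output shows a GitLab-bundled nginx already running as root. A minimal sketch of the checks I would run, where gitlab-ctl only applies if this is an Omnibus GitLab host:
# validate the config as root, then see what already owns port 80
sudo nginx -t
sudo netstat -tulpn | grep ':80'
# if GitLab's bundled nginx is the conflict, stop it before starting the distro one
sudo gitlab-ctl stop nginx
sudo systemctl restart nginx.service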
Today I tried to set up my vserver with nginx and php5-fpm,
but somehow I cannot get php5-fpm to work.
A service php5-fpm start looks like:
$ service php5-fpm start
php5-fpm start/running, process 27153
but nothing actually starts; the .pid doesn't get created, nor does the .sock:
$ l /var/run/
total 48K
drwxr-xr-x 19 root root 640 Sep 8 12:51 .
drwxr-xr-x 23 root root 4.0K Sep 8 12:43 ..
drwxr-xr-x 2 root root 40 Sep 1 14:14 apache2
drwxr-xr-x 2 avahi avahi 80 Aug 11 11:12 avahi-daemon
-rw-r--r-- 1 root root 7 Aug 10 11:50 container_type
-rw-r--r-- 1 root root 4 Aug 10 11:50 crond.pid
---------- 1 root root 0 Aug 10 11:50 crond.reboot
drwxr-xr-x 2 messagebus messagebus 80 Aug 11 11:12 dbus
-rw-r--r-- 1 root root 0 Aug 10 11:53 init.upgraded
drwxrwxrwt 3 root root 80 Aug 10 11:52 lock
-rw-r--r-- 1 root root 114 Sep 8 12:16 motd.dynamic
drwxr-xr-x 2 root root 60 Aug 10 11:50 mount
drwxr-xr-x 2 mumble-server adm 40 Sep 1 15:06 mumble-server
drwxr-xr-x 2 mysql root 80 Sep 1 14:21 mysqld
drwxr-xr-x 3 root root 160 Aug 10 11:51 network
-rw-r--r-- 1 root root 6 Sep 8 12:43 nginx.pid
-rw-r--r-- 1 root root 4 Aug 10 11:50 rsyslogd.pid
drwxr-xr-x 3 root root 60 Aug 10 11:53 samba
drwx--x--- 2 root sasl 140 Aug 10 11:50 saslauthd
drwxrwxr-x 2 root utmp 40 Aug 10 11:50 screen
drwxr-xr-x 2 root root 40 Aug 10 11:50 sendsigs.omit.d
drwxrwxrwt 2 root root 40 Aug 10 11:50 shm
drwxr-xr-x 2 root root 40 Aug 10 11:50 sshd
-rw-r--r-- 1 root root 4 Aug 10 11:51 sshd.pid
drwxr-xr-x 5 root root 100 Aug 11 11:12 systemd
drwxr-xr-x 3 root root 100 Aug 10 11:53 udev
-rw-r--r-- 1 root root 5 Aug 10 11:53 upstart-file-bridge.pid
-rw-r--r-- 1 root root 5 Aug 10 11:53 upstart-socket-bridge.pid
-rw-r--r-- 1 root root 5 Aug 10 11:53 upstart-udev-bridge.pid
drwxr-xr-x 2 root root 40 Sep 8 12:16 user
-rw-rw-r-- 1 root utmp 1.5K Sep 8 12:15 utmp
-rw-r--r-- 1 root root 6 Sep 8 12:28 xinetd.pid
php5-fpm -t displays the following:
$ php5-fpm -t
Mon Sep 8 13:11:04 2014 (27954): Fatal Error Unable to allocate shared memory segment of 67108864 bytes: mmap: Cannot allocate memory (12)
php5-fpm is not running:
$ ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Aug10 ? 00:00:13 init
root 2 1 0 Aug10 ? 00:00:00 [kthreadd/781170]
root 3 2 0 Aug10 ? 00:00:00 [khelper/7811702]
syslog 348 1 0 Aug10 ? 00:00:45 rsyslogd
root 390 1 0 Aug10 ? 00:00:04 cron
root 577 1 0 Aug10 ? 00:00:00 /usr/sbin/saslauthd -a pam -c -m /var/run/saslauthd -n 2
root 578 577 0 Aug10 ? 00:00:00 /usr/sbin/saslauthd -a pam -c -m /var/run/saslauthd -n 2
root 869 1 0 Aug10 ? 00:00:33 /usr/sbin/sshd -D
mysql 3906 1 0 Sep01 ? 00:09:05 /usr/sbin/mysqld
root 8416 1 0 Aug10 ? 00:00:00 /lib/systemd/systemd-udevd --daemon
root 8472 1 0 Aug10 ? 00:00:00 upstart-udev-bridge --daemon
root 8474 1 0 Aug10 ? 00:00:00 upstart-file-bridge --daemon
root 8477 1 0 Aug10 ? 00:00:00 upstart-socket-bridge --daemon
message+ 14085 1 0 Aug11 ? 00:00:00 dbus-daemon --system --fork
root 14142 1 0 Aug11 ? 00:00:00 /lib/systemd/systemd-logind
avahi 14263 1 0 Aug11 ? 00:00:00 avahi-daemon: running [server2.local]
avahi 14265 14263 0 Aug11 ? 00:00:00 avahi-daemon: chroot helper
root 18283 869 0 11:49 ? 00:00:00 sshd: belstgut [priv]
belstgut 18294 18283 0 11:49 ? 00:00:00 sshd: belstgut@pts/0
belstgut 18295 18294 0 11:49 pts/0 00:00:03 -zsh
root 19303 18295 0 11:56 pts/0 00:00:00 sudo -s
root 19304 19303 0 11:56 pts/0 00:00:04 /usr/bin/zsh
root 24237 1 0 12:28 ? 00:00:00 /usr/sbin/xinetd -dontfork -pidfile /var/run/xinetd.pid -stayalive -inetd_compat -inetd_ipv6
root 24395 1 0 12:28 ? 00:00:00 /usr/lib/postfix/master
postfix 24398 24395 0 12:28 ? 00:00:00 pickup -l -t unix -u -c
postfix 24399 24395 0 12:28 ? 00:00:00 qmgr -l -t unix -u
root 25052 1 0 12:43 ? 00:00:00 nginx: master process /usr/sbin/nginx
www-data 25055 25052 0 12:43 ? 00:00:00 nginx: worker process
www-data 25056 25052 0 12:43 ? 00:00:00 nginx: worker process
www-data 25057 25052 0 12:43 ? 00:00:00 nginx: worker process
www-data 25058 25052 0 12:43 ? 00:00:00 nginx: worker process
root 28039 869 0 13:12 ? 00:00:00 sshd: unknown [priv]
sshd 28040 28039 0 13:12 ? 00:00:00 sshd: unknown [net]
root 28084 869 0 13:12 ? 00:00:00 sshd: [accepted]
sshd 28085 28084 0 13:12 ? 00:00:00 sshd: [net]
root 28086 869 0 13:12 ? 00:00:00 sshd: [accepted]
sshd 28087 28086 0 13:12 ? 00:00:00 sshd: [net]
root 28088 869 0 13:12 ? 00:00:00 sshd: [accepted]
sshd 28089 28088 0 13:12 ? 00:00:00 sshd: [net]
root 28126 19304 0 13:13 pts/0 00:00:00 ps -ef
Hope someone can help me.
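For what it's worth, 67108864 bytes is exactly 64 MB, the default opcache.memory_consumption, so php5-fpm -t appears to be failing to mmap OPcache's shared segment within the vserver's memory limits. A minimal sketch of a workaround; the php.ini path is an assumption based on the stock Debian/Ubuntu layout:
# find where the OPcache segment size is configured
grep -Rn 'opcache.memory_consumption' /etc/php5/
# lower it below the container's shared-memory limit, e.g. 32 MB, then retest
sudo sed -i 's/^;\?opcache.memory_consumption=.*/opcache.memory_consumption=32/' /etc/php5/fpm/php.ini
sudo php5-fpm -t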
There are 6 NGINX processes on the server. Ever since NGINX started, their RES/VIRT values have kept growing until the machine runs out of memory. Does this indicate a memory leak?
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1941 root 20 0 621m 17m 4144 S 290.4 0.1 8415:03 mongod
16383 nobody 20 0 1675m 1.6g 724 S 21.0 5.2 13:19.30 nginx
16382 nobody 20 0 1671m 1.6g 724 S 17.2 5.1 13:21.39 nginx
16381 nobody 20 0 1674m 1.6g 724 S 15.3 5.1 13:28.45 nginx
16380 nobody 20 0 1683m 1.6g 724 S 13.4 5.2 13:24.77 nginx
16384 nobody 20 0 1674m 1.6g 724 S 13.4 5.1 13:19.83 nginx
16385 nobody 20 0 1685m 1.6g 724 S 13.4 5.2 13:25.00 nginx
Take a look at the ngx_http_limit_conn_module nginx module.
Also take a look at client_max_body_size.
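A minimal sketch of what those directives look like; limit_conn_zone belongs in the http block, and the numbers are placeholders rather than recommendations:
limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
    limit_conn addr 10;          # at most 10 concurrent connections per client IP
    client_max_body_size 8m;     # cap buffered request bodies
}
Neither directive fixes a genuine leak, but both bound how much concurrent work and request data nginx will hold at once.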
Output of # top -o size
last pid: 61935; load averages: 0.82, 0.44, 0.39 up 10+13:28:42 16:49:43
152 processes: 2 running, 150 sleeping
CPU: 10.3% user, 0.0% nice, 1.8% system, 0.2% interrupt, 87.7% idle
Mem: 5180M Active, 14G Inact, 2962M Wired, 887M Cache, 2465M Buf, 83M Free
Swap: 512M Total, 26M Used, 486M Free, 5% Inuse
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
1471 mysql 62 44 0 763M 349M ucond 3 222:19 74.76% mysqld
1171 root 4 44 0 645M 519M sbwait 0 20:56 3.86% tfs
41173 root 4 44 0 629M 516M sbwait 4 19:17 0.59% tfs
41350 root 4 44 0 585M 467M sbwait 7 15:17 0.10% tfs
36382 root 4 45 0 581M 401M sbwait 1 206:50 0.10% tfs
41157 root 4 44 0 551M 458M sbwait 5 16:23 0.98% tfs
36401 root 4 45 0 199M 108M uwait 2 17:50 0.00% tfs
36445 root 4 44 0 199M 98M uwait 4 20:11 0.00% tfs
36420 root 4 45 0 191M 98M uwait 4 19:57 0.00% tfs
3491 root 9 45 0 79320K 41292K uwait 4 40:22 0.00% tfs_db
40690 root 1 44 0 29896K 4104K select 1 0:05 0.00% sshd
44636 root 1 44 0 29896K 3896K select 4 0:00 0.00% sshd
22224 root 1 44 0 29896K 3848K select 6 0:00 0.00% sshd
42956 root 1 44 0 29896K 3848K select 4 0:00 0.00% sshd
909 bind 11 76 0 27308K 14396K kqread 1 0:00 0.00% named
1586 root 1 44 0 26260K 3464K select 4 0:00 0.00% sshd
40590 root 4 45 0 23480K 7592K uwait 1 5:11 0.00% auth
1472 root 1 44 0 22628K 8776K select 0 0:41 0.00% perl5.8.9
22229 root 1 44 0 20756K 2776K select 0 0:00 0.00% sftp-server
42960 root 1 44 0 20756K 2772K select 2 0:00 0.00% sftp-server
44638 root 1 44 0 10308K 2596K pause 2 0:00 0.00% csh
42958 root 1 47 0 10308K 1820K pause 3 0:00 0.00% csh
22227 root 1 48 0 10308K 1820K pause 0 0:00 0.00% csh
36443 root 1 57 0 10248K 1792K wait 0 0:00 0.00% bash
36418 root 1 51 0 10248K 1788K wait 2 0:00 0.00% bash
41171 root 1 63 0 10248K 1788K wait 0 0:00 0.00% bash
36399 root 1 50 0 10248K 1784K wait 2 0:00 0.00% bash
41155 root 1 56 0 10248K 1784K wait 0 0:00 0.00% bash
40588 root 1 76 0 10248K 1776K wait 6 0:00 0.00% bash
36380 root 1 50 0 10248K 1776K wait 2 0:00 0.00% bash
41348 root 1 54 0 10248K 1776K wait 0 0:00 0.00% bash
1169 root 1 54 0 10248K 1772K wait 0 0:00 0.00% bash
3485 root 1 76 0 10248K 1668K wait 4 0:00 0.00% bash
61934 root 1 44 0 9372K 2356K CPU4 4 0:00 0.00% top
1185 mysql 1 76 0 8296K 1356K wait 3 0:00 0.00% sh
1611 root 1 44 0 7976K 1372K nanslp 0 0:08 0.00% cron
824 root 1 44 0 7048K 1328K select 0 0:03 0.00% syslogd
1700 root 1 76 0 6916K 1052K ttyin 3 0:00 0.00% getty
1703 root 1 76 0 6916K 1052K ttyin 2 0:00 0.00% getty
1702 root 1 76 0 6916K 1052K ttyin 5 0:00 0.00% getty
1706 root 1 76 0 6916K 1052K ttyin 0 0:00 0.00% getty
1705 root 1 76 0 6916K 1052K ttyin 1 0:00 0.00% getty
1701 root 1 76 0 6916K 1052K ttyin 6 0:00 0.00% getty
1707 root 1 76 0 6916K 1052K ttyin 4 0:00 0.00% getty
1704 root 1 76 0 6916K 1052K ttyin 7 0:00 0.00% getty
490 root 1 44 0 3204K 556K select 1 0:00 0.00% devd
My game server lags a lot, and I have noticed that there is only 83M of free RAM.
It's not just top; I have also tried another app:
# /usr/local/bin/freem
SYSTEM MEMORY INFORMATION:
mem_wire: 3104976896 ( 2961MB) [ 12%] Wired: disabled for paging out
mem_active: + 5440778240 ( 5188MB) [ 21%] Active: recently referenced
mem_inactive:+ 15324811264 ( 14614MB) [ 61%] Inactive: recently not referenced
mem_cache: + 1015689216 ( 968MB) [ 4%] Cached: almost avail. for allocation
mem_free: + 86818816 ( 82MB) [ 0%] Free: fully available for allocation
mem_gap_vm: + 946176 ( 0MB) [ 0%] Memory gap: UNKNOWN
-------------- ------------ ----------- ------
mem_all: = 24974020608 ( 23817MB) [100%] Total real memory managed
mem_gap_sys: + 772571136 ( 736MB) Memory gap: Kernel?!
-------------- ------------ -----------
mem_phys: = 25746591744 ( 24553MB) Total real memory available
mem_gap_hw: + 23212032 ( 22MB) Memory gap: Segment Mappings?!
-------------- ------------ -----------
mem_hw: = 25769803776 ( 24576MB) Total real memory installed
SYSTEM MEMORY SUMMARY:
mem_used: 9342484480 ( 8909MB) [ 36%] Logically used memory
mem_avail: + 16427319296 ( 15666MB) [ 63%] Logically available memory
-------------- ------------ ----------- ------
mem_total: = 25769803776 ( 24576MB) [100%] Logically total memory
As you can see, the output is similar:
mem_free: + 86818816 ( 82MB) [ 0%] Free: fully available for allocation.
My dedicated server has 24GB of RAM, which should be plenty for my game server.
How can I find out which process is eating that amount of memory?
I am using FreeBSD 8.2.
According to top's output, you are only using 5% of your swap. This means you are not short on RAM; whatever is slowing you down, it is not a memory shortage. If anything, I'd suspect mysqld: not only was it quite busy when you took the snapshot, it had also accumulated quite a bit of CPU time before that.
Perhaps some frequently-run queries could be helped by a new index or two?
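To follow up on the mysqld suspicion, a minimal sketch for surfacing slow queries (MySQL 5.1 or newer; the 1-second threshold is an arbitrary example):
# log statements slower than 1 second, then review the slow log for indexing candidates
mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"
mysql -u root -p -e "SHOW VARIABLES LIKE 'slow_query_log_file';"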