I have never seen a value greater than 0.0% CPU for my apps. I also googled for output of cf apps and never found anything other than 0.0%.
Is this 0.0% measured on the VM, the container, or the hypervisor? How is the value calculated? Is it collected by the health manager? Can I trust it?
Cloud Foundry runs on an OpenStack cluster.
     state     since                    cpu    memory         disk          details
#0 running 2015-08-28 10:36:33 AM 0.0% 280.8M of 1G 73.7M of 4G
#1 running 2015-08-28 10:36:10 AM 0.0% 289.9M of 1G 73.7M of 4G
#2 running 2015-09-01 09:00:17 PM 0.0% 277.3M of 1G 73.7M of 4G
#3 running 2015-08-31 11:04:17 PM 0.0% 250.4M of 1G 73.7M of 4G
#4 running 2015-09-02 06:03:21 PM 0.0% 51M of 1G 73.7M of 4G
#5 running 2015-08-28 10:36:12 AM 0.0% 348.4M of 1G 73.7M of 4G
#6 running 2015-08-28 10:36:12 AM 0.0% 301.4M of 1G 73.7M of 4G
#7 running 2015-08-31 10:01:45 PM 0.0% 201.2M of 1G 73.7M of 4G
#8 running 2015-09-02 06:03:26 PM 0.0% 50.8M of 1G 73.7M of 4G
#9 running 2015-09-02 06:03:25 PM 0.0% 51.2M of 1G 73.7M of 4G
#10 running 2015-08-28 10:36:07 AM 0.0% 334.1M of 1G 73.7M of 4G
#11 running 2015-09-02 06:02:54 PM 0.0% 51.3M of 1G 73.7M of 4G
#12 running 2015-09-02 06:02:55 PM 0.0% 53.6M of 1G 73.7M of 4G
#13 running 2015-08-28 10:36:08 AM 0.0% 314M of 1G 73.7M of 4G
#14 running 2015-08-28 10:36:45 AM 0.0% 345.9M of 1G 73.7M of 4G
#15 running 2015-08-28 10:36:10 AM 0.0% 412.6M of 1G 73.7M of 4G
#16 running 2015-08-28 10:36:04 AM 0.0% 286.3M of 1G 73.7M of 4G
#17 running 2015-08-28 10:36:11 AM 0.0% 294.5M of 1G 73.7M of 4G
#18 running 2015-08-28 10:36:06 AM 0.0% 304.4M of 1G 73.7M of 4G
#19 running 2015-09-02 06:02:49 PM 0.0% 51.1M of 1G 73.7M of 4G
#20 running 2015-09-01 09:03:20 PM 0.0% 173.8M of 1G 73.7M of 4G
#21 running 2015-08-28 10:36:07 AM 0.0% 292.3M of 1G 73.7M of 4G
#22 running 2015-08-28 10:36:05 AM 0.0% 289.1M of 1G 73.7M of 4G
#23 running 2015-09-01 09:02:07 PM 0.0% 213.8M of 1G 73.7M of 4G
#24 running 2015-09-02 06:03:21 PM 0.0% 51.1M of 1G 73.7M of 4G
#25 running 2015-08-28 10:36:52 AM 0.0% 337.3M of 1G 73.7M of 4G
#26 running 2015-08-28 10:36:52 AM 0.0% 337.1M of 1G 73.7M of 4G
#27 running 2015-08-31 10:47:15 AM 0.0% 302.7M of 1G 73.7M of 4G
#28 running 2015-08-28 10:36:13 AM 0.0% 316M of 1G 73.7M of 4G
#29 running 2015-08-28 10:36:16 AM 0.0% 325.1M of 1G 73.7M of 4G
#30 running 2015-08-28 10:36:17 AM 0.0% 328.5M of 1G 73.7M of 4G
#31 running 2015-09-02 06:02:55 PM 0.0% 53.4M of 1G 73.7M of 4G
#32 running 2015-08-28 10:36:52 AM 0.0% 258.7M of 1G 73.7M of 4G
#33 running 2015-08-28 10:36:48 AM 0.0% 273.7M of 1G 73.7M of 4G
#34 running 2015-08-31 10:01:37 PM 0.0% 216.8M of 1G 73.7M of 4G
#35 running 2015-08-28 10:36:21 AM 0.0% 428.4M of 1G 73.7M of 4G
#36 running 2015-09-02 08:10:51 AM 0.0% 228.2M of 1G 73.7M of 4G
#37 running 2015-08-28 10:36:14 AM 0.0% 262.6M of 1G 73.7M of 4G
#38 running 2015-08-28 10:36:56 AM 0.0% 284.8M of 1G 73.7M of 4G
#39 running 2015-09-02 10:51:07 AM 0.0% 174.3M of 1G 73.7M of 4G
#40 running 2015-09-01 09:03:20 PM 0.0% 210.1M of 1G 73.7M of 4G
#41 running 2015-08-28 10:36:21 AM 0.0% 295.3M of 1G 73.7M of 4G
#42 running 2015-08-28 10:36:22 AM 0.0% 301.7M of 1G 73.7M of 4G
#43 running 2015-08-28 10:36:59 AM 0.0% 307M of 1G 73.7M of 4G
#44 running 2015-08-28 10:36:29 AM 0.0% 293.8M of 1G 73.7M of 4G
#45 running 2015-09-02 08:10:45 AM 0.0% 178.8M of 1G 73.7M of 4G
#46 running 2015-08-28 10:36:54 AM 0.0% 313.5M of 1G 73.7M of 4G
#47 running 2015-08-31 11:04:17 PM 0.0% 274.8M of 1G 73.7M of 4G
#48 running 2015-09-02 06:02:52 PM 0.0% 50.9M of 1G 73.7M of 4G
#49 running 2015-08-28 10:36:17 AM 0.0% 335.8M of 1G 73.7M of 4G
#50 running 2015-08-28 10:36:19 AM 0.0% 310.7M of 1G 73.7M of 4G
#51 running 2015-08-28 10:36:23 AM 0.0% 310M of 1G 73.7M of 4G
#52 running 2015-09-02 06:02:48 PM 0.0% 51M of 1G 73.7M of 4G
#53 running 2015-08-28 10:37:04 AM 0.0% 269.4M of 1G 73.7M of 4G
#54 running 2015-08-28 10:37:04 AM 0.0% 299.8M of 1G 73.7M of 4G
#55 running 2015-08-28 10:37:05 AM 0.0% 361.6M of 1G 73.7M of 4G
#56 running 2015-08-28 10:36:28 AM 0.0% 321.6M of 1G 73.7M of 4G
#57 running 2015-08-28 10:36:21 AM 0.0% 309.4M of 1G 73.7M of 4G
#58 running 2015-08-28 10:36:28 AM 0.0% 330.3M of 1G 73.7M of 4G
#59 running 2015-08-28 10:36:29 AM 0.0% 338M of 1G 73.7M of 4G
#60 running 2015-08-28 10:36:32 AM 0.0% 362.5M of 1G 73.7M of 4G
#61 running 2015-08-28 10:37:05 AM 0.0% 279.8M of 1G 73.7M of 4G
#62 running 2015-08-28 10:37:01 AM 0.0% 271.7M of 1G 73.7M of 4G
#63 running 2015-08-28 10:36:23 AM 0.0% 324M of 1G 73.7M of 4G
#64 running 2015-08-28 10:36:26 AM 0.0% 341.1M of 1G 73.7M of 4G
#65 running 2015-09-02 06:02:54 PM 0.0% 51.7M of 1G 73.7M of 4G
#66 running 2015-09-02 06:02:52 PM 0.0% 50.9M of 1G 73.7M of 4G
#67 running 2015-09-01 09:03:38 PM 0.0% 247.5M of 1G 73.7M of 4G
#68 running 2015-08-28 10:36:40 AM 0.0% 324.8M of 1G 73.7M of 4G
#69 running 2015-08-31 10:34:18 PM 0.0% 257.6M of 1G 73.7M of 4G
#70 running 2015-08-28 10:36:34 AM 0.0% 281.4M of 1G 73.7M of 4G
#71 running 2015-08-28 10:36:35 AM 0.0% 340M of 1G 73.7M of 4G
#72 running 2015-09-02 06:02:52 PM 0.0% 50.7M of 1G 73.7M of 4G
#73 running 2015-08-28 10:37:12 AM 0.0% 299.8M of 1G 73.7M of 4G
#74 running 2015-09-02 06:04:09 PM 0.0% 52.3M of 1G 73.7M of 4G
#75 running 2015-09-02 06:02:38 PM 0.0% 50.9M of 1G 73.7M of 4G
#76 running 2015-09-02 06:02:26 PM 0.0% 50.9M of 1G 73.7M of 4G
#77 running 2015-09-02 06:53:44 PM 0.0% 50.9M of 1G 73.7M of 4G
#78 running 2015-08-28 10:36:38 AM 0.0% 365.7M of 1G 73.7M of 4G
#79 running 2015-08-28 10:37:15 AM 0.0% 331.1M of 1G 73.7M of 4G
#80 running 2015-08-28 10:36:45 AM 0.0% 323.8M of 1G 73.7M of 4G
#81 running 2015-08-28 10:36:38 AM 0.0% 316.3M of 1G 73.7M of 4G
#82 running 2015-09-02 06:02:55 PM 0.0% 53.5M of 1G 73.7M of 4G
#83 running 2015-08-28 10:36:33 AM 0.0% 305.5M of 1G 73.7M of 4G
#84 running 2015-08-28 10:37:17 AM 0.0% 268.2M of 1G 73.7M of 4G
#85 running 2015-08-28 10:37:18 AM 0.0% 298.7M of 1G 73.7M of 4G
#86 running 2015-08-31 11:04:17 PM 0.0% 226.8M of 1G 73.7M of 4G
#87 running 2015-09-02 06:55:09 PM 0.0% 50.7M of 1G 73.7M of 4G
#88 running 2015-08-28 10:36:42 AM 0.0% 306.8M of 1G 73.7M of 4G
#89 running 2015-08-28 10:36:43 AM 0.0% 315.4M of 1G 73.7M of 4G
#90 running 2015-08-28 10:37:18 AM 0.0% 301.5M of 1G 73.7M of 4G
#91 running 2015-08-28 10:36:44 AM 0.0% 319.6M of 1G 73.7M of 4G
#92 running 2015-08-28 10:36:36 AM 0.0% 316.8M of 1G 73.7M of 4G
#93 running 2015-09-01 10:26:51 PM 0.0% 221.7M of 1G 73.7M of 4G
#94 running 2015-08-28 10:36:39 AM 0.0% 327.1M of 1G 73.7M of 4G
#95 running 2015-09-02 06:02:18 PM 0.0% 50.9M of 1G 73.7M of 4G
#96 running 2015-09-02 06:03:20 PM 0.0% 50.8M of 1G 73.7M of 4G
#97 running 2015-09-02 06:04:09 PM 0.0% 52.1M of 1G 73.7M of 4G
#98 running 2015-08-28 10:37:17 AM 0.0% 205.5M of 1G 73.7M of 4G
#99 running 2015-08-28 10:37:18 AM 0.0% 273.6M of 1G 73.7M of 4G
#100 running 2015-08-28 10:37:19 AM 0.0% 268.5M of 1G 73.7M of 4G
#101 running 2015-09-02 06:03:25 PM 0.0% 51M of 1G 73.7M of 4G
#102 running 2015-09-02 06:02:56 PM 0.0% 50.9M of 1G 73.7M of 4G
#103 running 2015-09-02 06:02:54 PM 0.0% 51.6M of 1G 73.7M of 4G
#104 running 2015-09-02 10:50:44 AM 0.0% 228.9M of 1G 73.7M of 4G
#105 running 2015-09-02 06:04:09 PM 0.0% 52.1M of 1G 73.7M of 4G
#106 running 2015-08-31 10:45:14 AM 0.0% 292.2M of 1G 73.7M of 4G
#107 running 2015-09-01 09:01:30 PM 0.0% 215.8M of 1G 73.7M of 4G
#108 running 2015-08-28 10:37:29 AM 0.0% 328.8M of 1G 73.7M of 4G
#109 running 2015-08-28 10:36:52 AM 0.0% 315.8M of 1G 73.7M of 4G
#110 running 2015-08-28 10:36:52 AM 0.0% 291.6M of 1G 73.7M of 4G
#111 running 2015-08-28 10:36:53 AM 0.0% 309.3M of 1G 73.7M of 4G
#112 running 2015-08-28 10:37:26 AM 0.0% 337.6M of 1G 73.7M of 4G
#113 running 2015-08-28 10:36:53 AM 0.0% 401M of 1G 73.7M of 4G
#114 running 2015-09-02 06:04:09 PM 0.0% 53M of 1G 73.7M of 4G
#115 running 2015-09-01 09:01:07 PM 0.0% 231.7M of 1G 73.7M of 4G
#116 running 2015-08-28 10:36:51 AM 0.0% 360.8M of 1G 73.7M of 4G
#117 running 2015-08-28 10:36:42 AM 0.0% 348.9M of 1G 73.7M of 4G
#118 running 2015-09-01 03:33:07 PM 0.0% 236.6M of 1G 73.7M of 4G
#119 running 2015-08-28 10:36:56 AM 0.0% 373M of 1G 73.7M of 4G
#120 running 2015-08-28 10:36:48 AM 0.0% 341.9M of 1G 73.7M of 4G
#121 running 2015-08-28 10:36:56 AM 0.0% 311.8M of 1G 73.7M of 4G
#122 running 2015-08-31 10:45:57 AM 0.0% 285.6M of 1G 73.7M of 4G
#123 running 2015-08-31 10:01:37 PM 0.0% 259.2M of 1G 73.7M of 4G
#124 running 2015-08-28 10:36:48 AM 0.0% 344.7M of 1G 73.7M of 4G
#125 running 2015-08-28 10:37:26 AM 0.0% 283.9M of 1G 73.7M of 4G
#126 running 2015-09-02 06:02:14 PM 0.0% 50.9M of 1G 73.7M of 4G
#127 running 2015-08-28 10:37:25 AM 0.0% 311.4M of 1G 73.7M of 4G
I believe the flow of information is:
cf CLI <-> Cloud Controller <-> DEA <-> Warden <-> cgroup CPU Accounting
See:
CLI code
Cloud Controller code
DEA code
Warden code
CPU Account Controller docs
The stat here shows the percentage of time that processes in the container spent on the CPU (of the VM) over the last second. If your application did something more CPU-intensive, you would see this number increase. For instance, I pushed a small hello-world Golang app that also runs a tight loop in the background, calculating the sine of a random float, and cf app then shows 15.5% CPU usage:
package main

import (
    "fmt"
    "math"
    "math/rand"
    "net/http"
    "os"
)

func main() {
    // Background busy loop to generate CPU load for the cf app CPU stat.
    go func() {
        for {
            println(math.Sin(rand.Float64()))
        }
    }()

    http.HandleFunc("/", hello)
    fmt.Println("listening...")
    err := http.ListenAndServe(":"+os.Getenv("PORT"), nil)
    if err != nil {
        panic(err)
    }
}

func hello(res http.ResponseWriter, req *http.Request) {
    fmt.Fprintln(res, "go, world")
}
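For reference, the number bottoms out at the cgroup CPU accounting layer mentioned above: the percentage is derived from two samples of the container's accumulated CPU time. A minimal sketch of the same calculation, run inside a container and assuming the cgroup v1 cpuacct controller is mounted at the usual path (the exact path depends on the rootfs, so treat this as illustrative only):
# Sample the container's total CPU time (nanoseconds) twice, one second apart,
# then convert the delta into a percentage of one core.
U1=$(cat /sys/fs/cgroup/cpuacct/cpuacct.usage)
sleep 1
U2=$(cat /sys/fs/cgroup/cpuacct/cpuacct.usage)
# 1 second = 1e9 ns, so delta / 1e7 gives percent of one CPU over the interval.
echo "scale=1; ($U2 - $U1) / 10000000" | bc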
Related
I am using an EC2 instance to run a node app. I logged into the server after a while only to realise that it had run out of disk space. After debugging, I realised that logs were taking up the space, so I deleted the 3.3 GB log file. However, even after the cleanup there is still no free space. What should I do?
Here are the commands I ran:
ubuntu@app1:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 488M 0 488M 0% /dev
tmpfs 100M 11M 89M 11% /run
/dev/xvda1 7.7G 7.7G 0 100% /
tmpfs 496M 8.0K 496M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/1001
tmpfs 100M 0 100M 0% /run/user/1000
ubuntu@app1:~$ sudo du -h --max-depth=1 / | sort -n
0 /proc
0 /sys
4.0K /lib64
4.0K /media
4.0K /mnt
4.0K /srv
8.0K /dev
8.0K /snap
16K /lost+found
24K /root
800K /tmp
6.4M /etc
11M /run
14M /sbin
16M /bin
246M /boot
331M /home
397M /opt
429M /var
538M /lib
2.1G /usr
3.7G /data
7.7G /
I deleted a 3.3G log file in /data and ran du again
ubuntu@app1:~$ sudo du -h --max-depth=1 / | sort -h
0 /proc
0 /sys
4.0K /lib64
4.0K /media
4.0K /mnt
4.0K /srv
8.0K /dev
8.0K /snap
16K /lost+found
24K /root
800K /tmp
6.4M /etc
11M /run
14M /sbin
16M /bin
246M /boot
331M /home
352M /data
397M /opt
429M /var
538M /lib
2.1G /usr
4.4G /
ubuntu@app1:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 488M 0 488M 0% /dev
tmpfs 100M 11M 89M 11% /run
/dev/xvda1 7.7G 7.7G 0 100% /
tmpfs 496M 8.0K 496M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/1001
tmpfs 100M 0 100M 0% /run/user/1000
Although the /data directory is now down to 352M, df still shows 100% disk utilization. What am I missing?
Referring to this answer https://unix.stackexchange.com/a/253655/47050, here is the output of strace
ubuntu@app1:~$ strace -e statfs df /
statfs("/", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=2016361, f_bfree=4096, f_bavail=0, f_files=1024000, f_ffree=617995, f_fsid={2136106470, -680157247}, f_namelen=255, f_frsize=4096, f_flags=4128}) = 0
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8065444 8049060 0 100% /
+++ exited with 0 +++
UPDATE
I ran
sudo lsof | grep deleted
and found
node\x20/ 22318 deploy 12w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
node\x20/ 22318 deploy 13w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
node\x20/ 22318 deploy 14w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
node\x20/ 22318 deploy 15w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
node\x20/ 22318 deploy 16w REG 202,1 3541729280 791684 /data/app/shared/logs/production.log (deleted)
How do I release these files?
Update 2
ubuntu@app1:~$ sudo ls -l /proc/22318/fd
total 0
lrwx------ 1 deploy deploy 64 Apr 6 10:05 0 -> socket:[74749956]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 1 -> socket:[74749958]
lr-x------ 1 deploy deploy 64 Apr 6 10:05 10 -> /dev/null
l-wx------ 1 deploy deploy 64 Apr 6 10:05 12 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 13 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 14 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 15 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 16 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 17 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 18 -> /data/app/shared/logs/production.log (deleted)
l-wx------ 1 deploy deploy 64 Apr 6 10:05 19 -> /data/app/shared/logs/production.log (deleted)
lrwx------ 1 deploy deploy 64 Apr 6 10:05 2 -> socket:[74749960]
l-wx------ 1 deploy deploy 64 Apr 6 10:05 20 -> /data/app/shared/logs/production.log (deleted)
lrwx------ 1 deploy deploy 64 Apr 6 10:05 21 -> socket:[74750302]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 22 -> socket:[74750303]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 3 -> socket:[74749962]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 4 -> anon_inode:[eventpoll]
lr-x------ 1 deploy deploy 64 Apr 6 10:05 5 -> pipe:[74749978]
l-wx------ 1 deploy deploy 64 Apr 6 10:05 6 -> pipe:[74749978]
lr-x------ 1 deploy deploy 64 Apr 6 10:05 7 -> pipe:[74749979]
l-wx------ 1 deploy deploy 64 Apr 6 10:05 8 -> pipe:[74749979]
lrwx------ 1 deploy deploy 64 Apr 6 10:05 9 -> anon_inode:[eventfd]
ubuntu@app1:~$ ps aux | grep node
deploy 22318 0.0 12.7 1277192 129844 ? Ssl 2019 173:38 node /data/app/releases/20180904094535/app.js
ubuntu 30665 0.0 0.0 12944 972 pts/0 S+ 10:09 0:00 grep --color=auto node
The files were being held open by the node application, as determined using:
sudo lsof | grep deleted
Restarting the node application solved my problem.
Find the node process ID with ps aux | grep node, then kill the node server with kill -9 <process_id>, and finally restart it. In my case pm2 automatically restarted node.
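If restarting the process is not an option, one possible alternative is to truncate the deleted file through /proc, which releases the space while the process keeps its descriptors open. This is just a sketch, assuming the PID (22318) and descriptor numbers shown above; since every listed descriptor points at the same deleted inode, truncating through any one of them should be enough:
# Truncate the still-open (deleted) log file via the process's fd entry.
sudo sh -c ': > /proc/22318/fd/12'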
I have a Docker container with an FTP service running inside a Vagrant machine, and the Vagrant machine runs on a macOS host. The FTP service is reachable from the Vagrant machine via ftp localhost, but how can I expose it to the Mac host? The Mac -> Vagrant network is NAT, so I set up port forwarding like 21:21 between the Mac host and Vagrant, but when I run ftp localhost on the host it doesn't work. What am I doing wrong?
This is part of the output of ps aux in the vagrant machine:
root 7841 0.0 0.5 113612 8948 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1108 -container-ip 172.17.0.1 -container-port 1108
root 7849 0.0 0.6 121808 10176 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1107 -container-ip 172.17.0.1 -container-port 1107
root 7857 0.0 0.7 154592 11212 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1106 -container-ip 172.17.0.1 -container-port 1106
root 7869 0.0 0.7 154592 12212 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1105 -container-ip 172.17.0.1 -container-port 1105
root 7881 0.0 0.6 113612 10172 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1104 -container-ip 172.17.0.1 -container-port 1104
root 7888 0.0 0.7 162788 11192 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1103 -container-ip 172.17.0.1 -container-port 1103
root 7901 0.0 0.6 121808 10156 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1102 -container-ip 172.17.0.1 -container-port 1102
root 7909 0.0 0.6 154592 9216 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1101 -container-ip 172.17.0.1 -container-port 1101
root 7921 0.0 0.5 121808 9196 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1100 -container-ip 172.17.0.1 -container-port 1100
root 7929 0.0 0.7 162788 12244 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 21 -container-ip 172.17.0.1 -container-port 21
root 7942 0.0 0.5 121808 8936 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 20 -container-ip 172.17.0.1 -container-port 20
message+ 7961 0.0 0.3 111224 5248 ? Ss 12:35 0:00 proftpd: (accepting connections)
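For reference, one way to check the forward itself, independent of the FTP client, is to probe the control port from the macOS host. This is only a sketch and assumes the 21:21 forward described above is in place:
# From the macOS host: does the forwarded FTP control port answer at all?
# If this fails, the Mac -> Vagrant forward is not working; if it succeeds but
# ftp localhost still fails, the data ports (20 and 1100-1108 above) are a likely suspect.
nc -vz localhost 21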
I have an R data frame that looks something like this:
Company Date Number
ACoy 2015-08-28 1000
ACoy 2015-08-29 1300
ACoy 2015-08-30 1500
BCoy 2015-08-30 3000
CCoy 2015-08-30 2000
CCoy 2015-08-31 3000
ACoy 2015-08-31 1500
BCoy 2015-08-31 3000
CCoy 2015-09-01 3500
CCoy 2015-09-02 1000
ACoy 2015-09-02 900
CCoy 2015-09-03 2000
BCoy 2015-08-31 3000
CCoy 2015-08-31 3000
How can I perform a calculation on Number based on the value of Company, but only after a specific date?
Specifically, I am trying to get Number = Number/3 where Company == ACoy and Date > 2015-08-30
Result:
Company Date Number
ACoy 2015-08-28 1000
ACoy 2015-08-29 1300
ACoy 2015-08-30 1500
BCoy 2015-08-30 3000
CCoy 2015-08-30 2000
CCoy 2015-08-31 3000
ACoy 2015-08-31 500
BCoy 2015-08-31 3000
CCoy 2015-09-01 3500
CCoy 2015-09-02 1000
ACoy 2015-09-02 300
CCoy 2015-09-03 2000
BCoy 2015-08-31 3000
CCoy 2015-08-31 3000
This assumes that the Date column is already of class Date.
## determine which rows match the specified condition
w <- with(df, Company == "ACoy" & Date > "2015-08-30")
## replace only those 'w' values with the specified calculation
df$Number <- replace(df$Number, w, df$Number[w] / 3)
## result
df
# Company Date Number
# 1 ACoy 2015-08-28 1000
# 2 ACoy 2015-08-29 1300
# 3 ACoy 2015-08-30 1500
# 4 BCoy 2015-08-30 3000
# 5 CCoy 2015-08-30 2000
# 6 CCoy 2015-08-31 3000
# 7 ACoy 2015-08-31 500
# 8 BCoy 2015-08-31 3000
# 9 CCoy 2015-09-01 3500
# 10 CCoy 2015-09-02 1000
# 11 ACoy 2015-09-02 300
# 12 CCoy 2015-09-03 2000
# 13 BCoy 2015-08-31 3000
# 14 CCoy 2015-08-31 3000
Here is an approach using data.table. We convert the 'data.frame' to a 'data.table' (setDT(df1)). Based on the condition in 'i' (Company=='ACoy' & Date > '2015-08-30'), we assign 'Number' to Number/3.
library(data.table)
setDT(df1)[Company=='ACoy' & Date > '2015-08-30', Number:= Number/3]
NOTE: We assume that the 'Date' column is of class Date and 'Number' is numeric.
For system monitoring purposes, I need to redirect the output of the top command to a file so that I can parse it.
When I try this, the CPU performance stats are not saved in the file; see the output below.
expected output:
[root@v100 /usr/local/bin]# top
last pid: 6959; load averages: 0.01, 0.03, 0.03 up 0+02:47:34 17:51:16
114 processes: 1 running, 108 sleeping, 5 zombie
CPU: 0.0% user, 0.0% nice, 1.6% system, 0.0% interrupt, 98.4% idle
Mem: 734M Active, 515M Inact, 226M Wired, 212M Buf, 491M Free
Swap: 4095M Total, 4095M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
1953 root 150 20 0 3084M 635M uwait 2:44 0.00% java
1663 mysql 46 20 0 400M 139M sbwait 1:29 0.00% mysqld
1354 root 31 20 0 94020K 50796K uwait 0:24 0.00% beam
4233 root 1 20 0 122M 23940K select 0:06 0.00% python
1700 zabbix 1 20 0 20096K 2436K nanslp 0:03 0.00% zabbix_agentd
1799 zabbix 1 20 0 103M 7240K nanslp 0:02 0.00% zabbix_server
4222 root 1 30 0 122M 23300K select 0:02 0.00% python
1696 zabbix 1 20 0 19968K 2424K nanslp 0:02 0.00% zabbix_agentd
2853 root 1 20 0 126M 29780K select 0:02 0.00% python
1793 zabbix 1 20 0 103M 7152K nanslp 0:01 0.00% zabbix_server
1797 zabbix 1 20 0 103M 8348K nanslp 0:01 0.00% zabbix_server
1752 root 1 20 0 122M 22344K select 0:01 0.00% python
1796 zabbix 1 20 0 103M 8136K nanslp 0:01 0.00% zabbix_server
1795 zabbix 1 20 0 103M 8208K nanslp 0:01 0.00% zabbix_server
1801 zabbix 1 20 0 103M 7100K nanslp 0:01 0.00% zabbix_server
3392 root 1 20 0 122M 23392K select 0:01 0.00% python
1798 zabbix 1 20 0 103M 7860K nanslp 0:01 0.00% zabbix_server
2812 root 1 20 0 134M 25184K select 0:01 0.00% python
1791 zabbix 1 20 0 103M 7188K nanslp 0:01 0.00% zabbix_server
1827 root 1 -52 r0 14368K 1400K nanslp 0:01 0.00% watchdogd
1790 zabbix 1 20 0 103M 7164K nanslp 0:01 0.00% zabbix_server
1778 zabbix 1 20 0 103M 8608K nanslp 0:01 0.00% zabbix_server
1780 zabbix 1 20 0 103M 8608K nanslp 0:01 0.00% zabbix_server
2928 root 1 20 0 122M 23272K select 0:01 0.00% python
2960 root 1 20 0 116M 22288K select 0:01 0.00% python
1776 zabbix 1 20 0 103M 7248K nanslp 0:01 0.00% zabbix_server
2892 root 1 20 0 122M 22648K select 0:01 0.00% python
1789 zabbix 1 20 0 103M 7128K nanslp 0:01 0.00% zabbix_server
1814 root 1 20 0 216M 15796K select 0:01 0.00% httpd
1779 zabbix 1 20 0 103M 8608K nanslp 0:01 0.00% zabbix_server
1783 zabbix 1 20 0 103M 8608K nanslp 0:01 0.00% zabbix_server
1800 zabbix 1 20 0 103M 7124K nanslp 0:01 0.00% zabbix_server
1782 zabbix 1 20 0 103M 8608K nanslp 0:01 0.00% zabbix_server
1781 zabbix 1 20 0 103M 8608K nanslp 0:00 0.00% zabbix_server
1792 zabbix 1 20 0 103M 7172K nanslp 0:00 0.00% zabbix_server
2259 root 2 20 0 48088K 4112K uwait 0:00 0.00% cb_heuristics
If I do:
[root@v100 /usr/local/bin]# top > /tmp/top.output
then it shows:
[root@v100 /usr/local/bin]# cat /tmp/top.output
last pid: 7080; load averages: 0.09, 0.06, 0.03 up 0+02:52:24 17:56:06
114 processes: 1 running, 108 sleeping, 5 zombie
Mem: 731M Active, 515M Inact, 219M Wired, 212M Buf, 501M Free
Swap: 4095M Total, 4095M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
1953 root 150 20 0 3084M 633M uwait 2:17 0.00% java
1663 mysql 46 20 0 400M 136M sbwait 1:08 0.00% mysqld
1354 root 31 20 0 94020K 49924K uwait 0:18 0.00% beam
4233 root 1 20 0 122M 23776K select 0:04 0.00% python
1700 zabbix 1 20 0 20096K 2436K nanslp 0:02 0.00% zabbix_agentd
1799 zabbix 1 20 0 103M 7240K nanslp 0:01 0.00% zabbix_server
2853 root 1 20 0 126M 29780K select 0:01 0.00% python
1696 zabbix 1 20 0 19968K 2424K nanslp 0:01 0.00% zabbix_agentd
4222 root 1 28 0 122M 23264K select 0:01 0.00% python
1793 zabbix 1 20 0 103M 7152K nanslp 0:01 0.00% zabbix_server
1752 root 1 20 0 122M 22344K select 0:01 0.00% python
1797 zabbix 1 20 0 103M 8088K nanslp 0:01 0.00% zabbix_server
1796 zabbix 1 20 0 103M 7944K nanslp 0:01 0.00% zabbix_server
1795 zabbix 1 20 0 103M 8044K nanslp 0:01 0.00% zabbix_server
1801 zabbix 1 20 0 103M 7100K nanslp 0:01 0.00% zabbix_server
3392 root 1 20 0 122M 23312K select 0:01 0.00% python
2812 root 1 20 0 134M 25184K select 0:01 0.00% python
1798 zabbix 1 20 0 103M 7628K nanslp 0:01 0.00% zabbix_server
So here I am able to monitor memory but not CPU.
The reason is that while top's output is redirected, the CPU stats are not updated.
How can I capture the CPU stats as well? Any suggestions are welcome.
top -b -n 1 seems to work on my Linux box here (-b: batch mode operation, -n: number of iterations).
Edit:
I just tried it on FreeBSD 9.2, which uses the 3.5beta12 version of top. It seems it needs at least one additional iteration to get CPU stats. So you might want to use:
top -b -d2 -s1 | sed -e '1,/USERNAME/d' | sed -e '1,/^$/d'
-b: batch mode, -d2: 2 displays (the first one does not contain CPU stats, the second one does), -s1: wait one second between displays
The sed pipeline removes the first display which does not contain CPU stats (by skipping header and process list).
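If the goal is to log just the CPU line for later parsing (an assumption about what the monitoring script needs), the same pipeline can be filtered a little further; a small sketch:
# Append a timestamped CPU summary line to a log file.
{
  date
  top -b -d2 -s1 | sed -e '1,/USERNAME/d' | sed -e '1,/^$/d' | grep '^CPU:'
} >> /tmp/top.output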
Output of # top -o size
last pid: 61935; load averages: 0.82, 0.44, 0.39 up 10+13:28:42 16:49:43
152 processes: 2 running, 150 sleeping
CPU: 10.3% user, 0.0% nice, 1.8% system, 0.2% interrupt, 87.7% idle
Mem: 5180M Active, 14G Inact, 2962M Wired, 887M Cache, 2465M Buf, 83M Free
Swap: 512M Total, 26M Used, 486M Free, 5% Inuse
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
1471 mysql 62 44 0 763M 349M ucond 3 222:19 74.76% mysqld
1171 root 4 44 0 645M 519M sbwait 0 20:56 3.86% tfs
41173 root 4 44 0 629M 516M sbwait 4 19:17 0.59% tfs
41350 root 4 44 0 585M 467M sbwait 7 15:17 0.10% tfs
36382 root 4 45 0 581M 401M sbwait 1 206:50 0.10% tfs
41157 root 4 44 0 551M 458M sbwait 5 16:23 0.98% tfs
36401 root 4 45 0 199M 108M uwait 2 17:50 0.00% tfs
36445 root 4 44 0 199M 98M uwait 4 20:11 0.00% tfs
36420 root 4 45 0 191M 98M uwait 4 19:57 0.00% tfs
3491 root 9 45 0 79320K 41292K uwait 4 40:22 0.00% tfs_db
40690 root 1 44 0 29896K 4104K select 1 0:05 0.00% sshd
44636 root 1 44 0 29896K 3896K select 4 0:00 0.00% sshd
22224 root 1 44 0 29896K 3848K select 6 0:00 0.00% sshd
42956 root 1 44 0 29896K 3848K select 4 0:00 0.00% sshd
909 bind 11 76 0 27308K 14396K kqread 1 0:00 0.00% named
1586 root 1 44 0 26260K 3464K select 4 0:00 0.00% sshd
40590 root 4 45 0 23480K 7592K uwait 1 5:11 0.00% auth
1472 root 1 44 0 22628K 8776K select 0 0:41 0.00% perl5.8.9
22229 root 1 44 0 20756K 2776K select 0 0:00 0.00% sftp-server
42960 root 1 44 0 20756K 2772K select 2 0:00 0.00% sftp-server
44638 root 1 44 0 10308K 2596K pause 2 0:00 0.00% csh
42958 root 1 47 0 10308K 1820K pause 3 0:00 0.00% csh
22227 root 1 48 0 10308K 1820K pause 0 0:00 0.00% csh
36443 root 1 57 0 10248K 1792K wait 0 0:00 0.00% bash
36418 root 1 51 0 10248K 1788K wait 2 0:00 0.00% bash
41171 root 1 63 0 10248K 1788K wait 0 0:00 0.00% bash
36399 root 1 50 0 10248K 1784K wait 2 0:00 0.00% bash
41155 root 1 56 0 10248K 1784K wait 0 0:00 0.00% bash
40588 root 1 76 0 10248K 1776K wait 6 0:00 0.00% bash
36380 root 1 50 0 10248K 1776K wait 2 0:00 0.00% bash
41348 root 1 54 0 10248K 1776K wait 0 0:00 0.00% bash
1169 root 1 54 0 10248K 1772K wait 0 0:00 0.00% bash
3485 root 1 76 0 10248K 1668K wait 4 0:00 0.00% bash
61934 root 1 44 0 9372K 2356K CPU4 4 0:00 0.00% top
1185 mysql 1 76 0 8296K 1356K wait 3 0:00 0.00% sh
1611 root 1 44 0 7976K 1372K nanslp 0 0:08 0.00% cron
824 root 1 44 0 7048K 1328K select 0 0:03 0.00% syslogd
1700 root 1 76 0 6916K 1052K ttyin 3 0:00 0.00% getty
1703 root 1 76 0 6916K 1052K ttyin 2 0:00 0.00% getty
1702 root 1 76 0 6916K 1052K ttyin 5 0:00 0.00% getty
1706 root 1 76 0 6916K 1052K ttyin 0 0:00 0.00% getty
1705 root 1 76 0 6916K 1052K ttyin 1 0:00 0.00% getty
1701 root 1 76 0 6916K 1052K ttyin 6 0:00 0.00% getty
1707 root 1 76 0 6916K 1052K ttyin 4 0:00 0.00% getty
1704 root 1 76 0 6916K 1052K ttyin 7 0:00 0.00% getty
490 root 1 44 0 3204K 556K select 1 0:00 0.00% devd
My game server lags a lot, and I have noticed that there is only 83M of free RAM.
It's not just top; I have also tried another tool:
# /usr/local/bin/freem
SYSTEM MEMORY INFORMATION:
mem_wire: 3104976896 ( 2961MB) [ 12%] Wired: disabled for paging out
mem_active: + 5440778240 ( 5188MB) [ 21%] Active: recently referenced
mem_inactive:+ 15324811264 ( 14614MB) [ 61%] Inactive: recently not referenced
mem_cache: + 1015689216 ( 968MB) [ 4%] Cached: almost avail. for allocation
mem_free: + 86818816 ( 82MB) [ 0%] Free: fully available for allocation
mem_gap_vm: + 946176 ( 0MB) [ 0%] Memory gap: UNKNOWN
-------------- ------------ ----------- ------
mem_all: = 24974020608 ( 23817MB) [100%] Total real memory managed
mem_gap_sys: + 772571136 ( 736MB) Memory gap: Kernel?!
-------------- ------------ -----------
mem_phys: = 25746591744 ( 24553MB) Total real memory available
mem_gap_hw: + 23212032 ( 22MB) Memory gap: Segment Mappings?!
-------------- ------------ -----------
mem_hw: = 25769803776 ( 24576MB) Total real memory installed
SYSTEM MEMORY SUMMARY:
mem_used: 9342484480 ( 8909MB) [ 36%] Logically used memory
mem_avail: + 16427319296 ( 15666MB) [ 63%] Logically available memory
-------------- ------------ ----------- ------
mem_total: = 25769803776 ( 24576MB) [100%] Logically total memory
As you can see, the output is similar:
mem_free: + 86818816 ( 82MB) [ 0%] Free: fully available for allocation.
My dedicated server has 24 GB of RAM, which is plenty for my game server.
How can I find out which process is eating that amount of memory?
I am using FreeBSD 8.2.
According to top's output, you are only using 5% of your swap. This means you are not short on RAM; whatever is slowing you down, it is not a memory shortage. If anything, I would suspect mysqld: not only was it quite busy when you took the snapshot, it had also accumulated quite a bit of CPU time before that.
Perhaps some frequently run queries could be helped by a new index or two?
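If you still want to see which processes hold the most memory, sorting by resident size rather than virtual size is usually more telling. A minimal sketch, assuming FreeBSD's top accepts -o res and -b for plain batch output as on 8.x:
# Show the 20 processes with the largest resident set, in batch mode.
top -b -o res 20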