I'm having a severe problem finding out where I'm running out of memory on my CentOS cPanel server. I'll try to provide all the details I can. I previously ran the same server without cPanel and had no issues, so I'm thinking it's a cPanel problem.
I am currently running a cPanel server with CentOS 6.6 installed, with 8GB of RAM and 1.5TB of storage (keep in mind my previous server without cPanel had only 4GB of RAM).
Here are all my memory statistics.
$ free -m
total used free shared buffers cached
Mem: 8192 3647 4544 2560 0 1164
-/+ buffers/cache: 2483 5708
Swap: 0 0 0
$ cat /proc/user_beancounters
uid resource held maxheld barrier limit failcnt
16167: kmemsize 642249817 642637824 9223372036854775807 9223372036854775807 0
lockedpages 4610 4610 2097152 2097152 0
privvmpages 1971819 1972978 2097152 2097152 11579
shmpages 655390 655390 9223372036854775807 9223372036854775807 0
numproc 493 504 32567 32567 0
physpages 932503 933409 2097152 2097152 0
vmguarpages 0 0 2097152 2097152 0
oomguarpages 478382 478413 2097152 2097152 0
$ smem -u -t -k
User Count Swap USS PSS RSS
mailnull 1 0 996.0K 1001.0K 1.5M
dovecot 2 0 1.0M 1.1M 3.2M
memcached 2 0 1.1M 1.3M 2.3M
varnish 1 0 1.2M 1.5M 2.7M
apache 5 0 716.0K 2.1M 27.5M
lighttpd 6 0 2.6M 4.0M 30.1M
dovenull 4 0 3.4M 4.1M 13.6M
newrelic 2 0 4.6M 4.9M 6.5M
media 7 0 4.6M 5.8M 18.3M
redis 5 0 5.7M 6.1M 10.5M
ldap 1 0 7.6M 7.7M 8.4M
user 9 0 10.3M 12.6M 24.1M
postgres 7 0 11.8M 14.0M 25.8M
named 1 0 27.1M 27.1M 27.9M
ntop 1 0 30.2M 31.8M 35.0M
mongod 1 0 46.8M 46.8M 47.7M
elasticsearch 1 0 201.7M 205.0M 212.7M
graylog2 1 0 262.1M 265.5M 273.4M
nobody 20 0 434.3M 488.6M 789.4M
mysql 1 0 489.6M 489.8M 492.1M
root 58 0 628.2M 695.7M 847.9M
---------------------------------------------------
136 0 2.1G 2.3G 2.8G
$ vzubc -c
----------------------------------------------------------------
CT 16167 | HELD Bar% Lim%| MAXH Bar% Lim%| BAR | LIM | FAIL
-------------+---------------+---------------+-----+-----+------
lockedpages| 18M 0.2% 0.2%| 18M 0.2% 0.2%| 8G| 8G| -
privvmpages|7.51G 93% 93%|7.52G 94% 94%| 8G| 8G| 11.3K
numproc| 495 2% 2%| 504 2% 2%|31.8K|31.8K| -
physpages|3.55G 44% 44%|3.56G 44% 44%| 8G| 8G| -
vmguarpages| - - - | - - - | 8G| 8G| -
oomguarpages|1.82G 22% 22%|1.82G 22% 22%| 8G| 8G| -
numpty| 4 2% 2%| 4 2% 2%| 255 | 255 | -
numsiginfo| - - - | 12 1% 1%| 1K| 1K| -
----------------------------------------------------------------
Also, on the administrative side of my server I can see:
CPU USAGE - 13.33%
DISK SPACE USAGE - 2.54% / 1536GB
RAM USAGE - 28.64% / 8GB
The errors I keep getting on the command line are:
Unable to fork: Cannot allocate memory
Segmentation fault
Tailwatchd also keeps failing, along with a few other services here and there. I used Tweak Settings to change the memory limit from 512MB to 4096MB to Unlimited to see if it changed anything, with no effect. I also toggled the Conserve Memory option both on and off, again with no change.
Also, I tried to check the VZ container settings. I have one config for 0.conf but nothing for 16167.conf. I tried adjusting settings in 0.conf with no luck, then created 16167.conf and adjusted its settings, which still didn't take effect after a server restart. I experimented with the different VE templates from 1G all the way up to 4G, and again no improvement.
Any help or a pointer in the right direction would be greatly appreciated. I tried to make every correction and do all the research I could before asking the community, but I think at this point I need some help with it. Thanks in advance.
I would suggest tuning your MySQL for your system memory (about half of it),
and then also tuning Apache through cPanel... most of the time it is the SQL part.
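A minimal sketch of what that could look like (the 4G figure is simply "about half" of this server's 8GB and assumes InnoDB is the main storage engine; the exact values are assumptions to adjust, not a recommendation):
# /etc/my.cnf
[mysqld]
innodb_buffer_pool_size = 4G    # roughly half of the 8GB of RAM
max_connections = 100           # keep per-connection buffers from piling up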
You also have Elasticsearch running, and your root memory usage is quite high.
But on the current system you do have 5.7GB of free memory. Are you sure you are out of memory?
It seems your provider has oversold the memory (the only explanation I have for the segfaults you are seeing).
To resolve this issue, you will have to increase the privvmpages value for your VM. You can increase it from the main (hardware) node with the following command:
vzctl set ${cid} --privvmpages 1024M:2048M --save
With the above command you will get 1024MB guaranteed and 2048MB burstable memory. Adjust the values to your requirements and check again.
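For example, using the container ID from the question and values no smaller than the 8G barrier shown in the vzubc output (a sketch to run on the hardware node; afterwards, check whether the privvmpages failcnt stops growing):
vzctl set 16167 --privvmpages 8G:9G --save
vzctl exec 16167 grep privvmpages /proc/user_beancounters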
Related
I am trying to stress an Ubuntu container's memory. Typing free in my terminal provides the following result:
free -m
total used free shared buff/cache available
Mem: 7958 585 6246 401 1126 6743
Swap: 2048 0 2048
I want to stress exactly 10% of the total available memory. Per the stress-ng manual:
-m N, --vm N
start N workers continuously calling mmap(2)/munmap(2) and writing to the allocated
memory. Note that this can cause systems to trip the kernel OOM killer on Linux
systems if not enough physical memory and swap is not available.
--vm-bytes N
mmap N bytes per vm worker, the default is 256MB. One can specify the size as % of
total available memory or in units of Bytes, KBytes, MBytes and GBytes using the
suffix b, k, m or g.
Now, on my target container I run two memory stressors to occupy 10% of my memory:
stress-ng -vm 2 --vm-bytes 10% -t 10
However, the memory usage on the container never reaches 10%, no matter how many times I run it. I tried different timeout values, with no result. The closest it gets is 8.9%; it never reaches 10%. I inspect memory usage on my container this way:
docker stats --no-stream kind_sinoussi
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c3fc7a103929 kind_sinoussi 199.01% 638.4MiB / 7.772GiB 8.02% 1.45kB / 0B 0B / 0B 7
In an attempt to understand this behaviour, I tried running the same command with an exact number of bytes. In my case, I'll opt for 800 megabytes, since 7958m * 0.1 = 795.8 ≈ 800m.
stress-ng -vm 2 --vm-bytes 800m -t 15
And, I get 10%!
docker stats --no-stream kind_sinoussi
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c3fc7a103929 kind_sinoussi 198.51% 815.2MiB / 7.772GiB 10.24% 1.45kB / 0B 0B / 0B 7
Can someone explain why this is happening?
Another question, is it possible for stress-ng to stress memory usage to 100%?
stress-ng --vm-bytes 10% will use sysconf(_SC_AVPHYS_PAGES) to determine the available memory. This sysconf() system call will return the number of pages that the application can use without hindering any other process. So this is approximately what the free command is returning for the free memory statistic.
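As a rough cross-check of what that percentage is computed from (a sketch; it assumes the sysconf value described above is what stress-ng consults):
avail=$(( $(getconf _AVPHYS_PAGES) * $(getconf PAGE_SIZE) ))
echo "available: $avail bytes, 10% target: $(( avail / 10 )) bytes"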
Note that stress-ng will allocate the memory with mmap, so it may be that during run time mmap'd pages may not necessarily be physically backed at the time you check how much real memory is being used.
It may be worth trying to also use the --vm-populate option; this will try and ensure the pages are physically populated on the mmap'd memory that stress-ng is exercising. Also try --vm-madvise willneed to use the madvise() system call to hint that the pages will be required fairly soon.
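A minimal sketch combining those suggestions with the command from the question:
stress-ng --vm 2 --vm-bytes 10% --vm-populate --vm-madvise willneed -t 10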
(New to GNU Parallel)
My aim is to run the same Rscript, with different arguments, over multiple cores. My first problem is to get this working on my laptop (2 real cores, 4 virtual), then I will port this over to one with 64 cores.
Currently:
I have an Rscript, "Test.R", which takes in arguments, does a thing (say, adds some numbers and writes them to a file), then stops.
I have a "commands.txt" file containing the following:
/Users/name/anaconda3/lib/R/bin/Rscript Test.R 5 100 100
/Users/name/anaconda3/lib/R/bin/Rscript Test.R 5 100 1000
/Users/name/anaconda3/lib/R/bin/Rscript Test.R 5 100 1000
/Users/name/anaconda3/lib/R/bin/Rscript Test.R 5 100 1000
/Users/name/anaconda3/lib/R/bin/Rscript Test.R 50 100 1000
/Users/name/anaconda3/lib/R/bin/Rscript Test.R 50 200 1000
So this tells GNU Parallel to run Test.R using Rscript (which I installed via Anaconda).
In the terminal (after navigating to the desktop which is where Test.R and commands.txt are) I use the command:
parallel --jobs 2 < commands.txt
What I want this to do is use 2 cores and run the commands from commands.txt until all tasks are complete. (I have tried variations on this command, such as changing the 2 to a 1; in that case, 2 of the cores run at 100% and the other 2 run at around 20-30%.)
When I run this, all 4 cores go to 100% (as seen from htop), the first 2 jobs complete, and then no more jobs complete, despite all 4 cores still being at 100%.
When I run the same command on the 64-core computer, all 64 cores go to 100%, and I have to cancel the jobs.
Any advice on resources to look at, or what I am doing wrong would be greatly appreciated.
Bit of a long question, let me know if I can clarify anything.
The output from htop, as requested, while running the above command (sorted by CPU%):
1 [||||||||||||||||||||||||100.0%] Tasks: 490, 490 thr; 4 running
2 [|||||||||||||||||||||||||99.3%] Load average: 4.24 3.46 4.12
3 [||||||||||||||||||||||||100.0%] Uptime: 1 day, 18:56:02
4 [||||||||||||||||||||||||100.0%]
Mem[|||||||||||||||||||5.83G/8.00G]
Swp[|||||||||| 678M/2.00G]
PID USER PRI NI VIRT RES S CPU% MEM% TIME+ Command
9719 user 16 0 4763M 291M ? 182. 3.6 0:19.74 /Users/user/anaconda3
9711 user 16 0 4763M 294M ? 182. 3.6 0:20.69 /Users/user/anaconda3
7575 user 24 0 4446M 94240 ? 11.7 1.1 1:52.76 /Applications/Utilities
8833 user 17 0 86.0G 259M ? 0.8 3.2 1:33.25 /System/Library/StagedF
9709 user 24 0 4195M 2664 R 0.2 0.0 0:00.12 htop
9676 user 24 0 4197M 14496 ? 0.0 0.2 0:00.13 perl /usr/local/bin/par
Based on the output from htop, the script /Users/name/anaconda3/lib/R/bin/Rscript uses more than one CPU thread (182%). You have 4 CPU threads, and since you run 2 Rscripts we cannot tell whether a single Rscript would eat all 4 CPU threads if it ran by itself. Maybe it will eat all CPU threads that are available (your test on the 64-core machine suggests this).
If you are using GNU/Linux you can limit which CPU threads a program can use with taskset:
taskset 9 parallel --jobs 2 < commands.txt
This should force GNU Parallel (and all its children) to only use CPU threads 0 and 3 (9 in binary: 1001), so the two jobs will be limited to two CPU threads.
By using 9 (binary 1001) or 6 (binary 0110) we are reasonably sure that the two CPU threads are on two different cores. 3 (binary 0011) might refer to the two threads of the same CPU core and would therefore probably be slower. The same goes for 5 (binary 0101).
In general you want to use as many CPU threads as possible as that will typically make the computation faster. It is unclear from your question why you want to avoid this.
If you are sharing the server with others, a better solution is to use nice. This way you can use all the CPU power that others are not using.
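For example (a sketch using the same commands.txt), this runs one job per CPU thread at the lowest priority, so other users' processes take precedence whenever they need the CPU:
nice -n 19 parallel < commands.txt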
Yesterday I was very tired, and a friend of mine was being kind of annoying. I had to put a Linux ISO on a USB drive.
I had stored all of my data temporarily on one device, a 5 TB drive. This device was/is encrypted using:
cryptsetup luksFormat ...
I mounted this device at /a and did a dd of a Linux ISO onto the 5 TB drive. The second I hit Enter I realized what I had done, pressed Ctrl-C, and pulled the plug on the drive.
After this I had a complete mental breakdown and started hyperventilating.
Anyhow, I have read that even if data is overwritten, there is still a chance of reading the data below the overwritten part.
I assume this is far more complicated than simply running testdisk (maybe testdisk can do this, I have no clue).
The entire process from yesterday can be seen below:
:/a/1master-targz/iso# ls
debian-8.1.0-amd64-xfce-CD-1.iso FreeBSD-11.0-RELEASE-amd64-disc1.iso
linuxmint-18.1-cinnamon-32bit.iso nonpaelubuntu.iso
:/a/1master-targz/iso# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4.6T 0 disk
└─sda 254:0 0 4.6T 0 crypt /a
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 648M 0 part
└─sdb2 8:18 1 416K 0 part
:/a/1master-targz/iso# dd if=linuxmint-18.1-
linuxmint-18.1-cinnamon-32bit.iso linuxmint-18.1-mate-32bit.iso
linuxmint-18.1-mate-64bit.iso
:/a/1master-targz/iso# dd if=linuxmint-18.1-cinnamon-32bit.iso 5~^C
:/a/1master-targz/iso# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4.6T 0 disk
└─sda 254:0 0 4.6T 0 crypt /a
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 648M 0 part
└─sdb2 8:18 1 416K 0 part
:/a/1master-targz/iso# dd if=linuxmint-18.1-cinnamon-32bit.iso of=/dev/sda bs=512k
10+1 records in
10+1 records out
5685920 bytes (5.7 MB, 5.4 MiB) copied, 0.81171 s, 7.0 MB/s
:/a/1master-targz/iso#
:/a/1master-targz/iso# ^C
:/a/1master-targz/iso# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 648M 0 part
└─sdb2 8:18 1 416K 0 part
sdc 8:32 0 4.6T 0 disk
└─sdc1 8:33 0 1.6G 0 part
mmcblk0 179:0 0 29.8G 0 disk
├─mmcblk0p1 179:1 0 41.8M 0 part /boot
└─mmcblk0p2 179:2 0 29.8G 0 part /
:/a/1master-targz/iso#
#somewhere here I got a panic attack
TestDisk 7.0, Data Recovery Utility, April 2015
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
Disk /dev/sdc - 5000 GB / 4657 GiB - CHS 4769307 64 32
Partition Start End Size in sectors
>P ext4 0 63 31 4769306 63 30 9767538688 [Ghost] # doubt this is right
fdisk -l /dev/sdb
Disk /dev/sdb: 4.6 TiB, 5000981077504 bytes, 9767541167 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x69e72c6a
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 0 3289087 3289088 1.6G 17 Hidden HPFS/NTFS
# head /dev/mapper/sda
head: error reading '/dev/mapper/sda': Input/output error
Summary
What do I have:
1: An overwritten part of an encrypted LUKS device.
2: I still have the mapper from when cryptsetup opened the device (I have not touched my computer since then).
My hope:
- There may be a way of tricking my OS into seeing my encrypted drive as /dev/sda again, connecting /dev/mapper/sda to this device, and then remounting the device while ignoring the destroyed part of the drive? (I really have no clue...)
- I know which image I used to overwrite the data; maybe this helps in reading the data it destroyed.
- Any other ideas are very welcome.
Clean up the question and remove the story.
"I have read that even if data is overwritten there is till a chance of reading the data below the overwritten part."
You will not be able to recover the overwritten data with your computer and drive as-is. To attempt recovery of overwritten data, you need to see the drive read head's actual analog voltage, so that the remnant of the overwritten value shows up as a slight difference in that voltage. That means opening the HD and using specialized equipment. IOW: no.
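There is also a second, purely practical problem in this specific case (a rough calculation, assuming LUKS1 defaults where the header and key slots occupy roughly the first 2 MiB of the device):
echo $(( 5685920 / 1048576 ))   # dd reported 5,685,920 bytes written from sector 0, i.e. about 5 MiB
Since that is well past the header region, the LUKS header and every key slot were overwritten, so even a bit-perfect recovery of the remaining ciphertext could not be unlocked without a header backup.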
You have learned the hard way to keep continual backups. It seems the "hard way" is the only way that works, and it often takes multiple lessons; make this one count.
/usr/ucb/ps command output to the screen shows a long list of environment variables and their values, but redirecting the output to a file yields a truncated line.
How can this be explained?
bash-3.2$ /usr/ucb/ps -auxwee 6543 >/var/tmp/env-var 2>&1
bash-3.2$ cat /var/tmp/env-var
USER PID %CPU %MEM SZ RSS TT S START TIME COMMAND
alcrprun 6543 0.0 0.0 8752 5992 ? S 19:35:01 0:15 /usr/bin/python -Bu ./bin/dpfoservice.py CFG_HOME=/opt/apps/algo/algoS
bash-3.2$ /usr/ucb/ps -auxwee 6543
USER PID %CPU %MEM SZ RSS TT S START TIME COMMAND
alcrprun 6543 0.0 0.0 8752 5992 ? S 19:35:01 0:15 /usr/bin/python -Bu ./bin/dpfoservice.py CFG_HOME=/opt/apps/algo/algoSuite.current.solaris.alcr/cfg RISK_HOME=/opt/apps/algo/algoSuite.current.solaris.alcr/risk++ ALGO_HOME=/opt/apps/algo/algoSuite.current.solaris.alcr ARE_HOME=/opt/apps/algo/algoSuite.current.solaris.alcr/aggregation/are
...
...
It prints 132-column output to a file because that's what the -w argument does. Per the ps(1B) man page:
-w
Uses a wide output format (132 columns rather than 80); if repeated, that is, -ww, use arbitrarily wide output. This information
is used to decide how much of long commands to print.
If you want arbitrary-width output, use -ww instead of -w.
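For example (a sketch based on the command from the question), doubling the w removes the 132-column cap when redirecting to a file:
/usr/ucb/ps -auxwwee 6543 >/var/tmp/env-var 2>&1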
When using just the -w option and the output is a terminal window, /usr/ucb/ps appears to detect the terminal width on my copy of Solaris 11: changing the width of the window modifies the amount of output emitted with just -w. The documentation for that behavior is likely buried somewhere in a Solaris man page, given the historical nature of Solaris documentation.
# /usr/ucb/ps -aux | head
USER PID %CPU %MEM SZ RSS TT S START TIME COMMAND
orabudge 13285 4.1 51.18413180884116344 ? O 08:56:30 0:22 oraclebudget (LOCA
orabudge 11998 3.4 51.18413552884119456 ? O 08:51:53 1:49 oraclebudget (LOCA
root 732 3.1 0.0 0 0 ? S Feb 04 326:27 zpool-budgetdb_dat
orabudge 12030 2.8 51.18413296884116648 ? S 08:52:02 2:04 ora_p004_budget
orabudge 12034 2.8 51.18413284084116504 ? S 08:52:02 2:04 ora_p006_budget
orabudge 12032 2.8 51.18413290484116568 ? S 08:52:02 2:04 ora_p005_budget
orabudge 12036 2.7 51.18413296884117176 ? S 08:52:02 2:02 ora_p007_budget
orabudge 21339 1.0 51.18414346484127680 ? S 07:18:27 4:24 oraclebudget (LOCA
orabudge 4347 0.9 51.18414084084125256 ? S 08:19:23 1:10 oraclebudget (LOCA
[root@budgetdb:ravi]#
USER – User who started the process.
PID – Process ID number assigned by the system.
%CPU – Percentage of the CPU time used by the process.
%MEM – Percentage of the total RAM in your system in use by the
process (it will not add up to 100 percent as some RAM is shared
by several processes).
SZ – Amount of nonshared virtual memory, in Kbytes, allocated to
the process. It does not include the program’s text size which is
normally shared.
RSS – Resident set size of the process. It is the basis for
%MEM and is the amount of RAM in use by the process.
TT – Which "teletype" the user is logged in on.
S – The status of the process.
S – Sleeping
O – Using cpu (on CPU) or running
R – Running and waiting for a CPU to become free
Z – Terminating but has not died (zombie)
P – Waiting for page-in
D – Waiting for disk I/O
Check disk performance and memory usage if many
P or D statuses appear. This could indicate an overload of those subsystems.
START – Time the process started up
TIME – Total amount of CPU time the process has used so far
COMMAND – Command being executed
I have installed OpenStack on my local machine. I am able to perform every function, such as uploading images, creating and launching instances, associating floating IPs, etc. But I cannot create a volume of more than 2 GB. If I create any volume larger than 2 GB, it shows the status "error" on my dashboard. Volumes smaller than 2 GB are created fine.
Sounds like your VG might only be 2G in size? Try looking deeper in the volume logs, or run vgs/lvs and have a look at your available vs. used capacity.
If you are using a DevStack instance, the default volume backing file is 5GB. Check how much of your volume backing file is free/used by running 'pvs' or 'vgs' on the command line (as the root user).
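If the backing file does turn out to be the limit, its size can be set before stacking. A sketch, assuming VOLUME_BACKING_FILE_SIZE is the setting your DevStack version uses (treat the name as an assumption and check your version's documentation):
# in local.conf / localrc
VOLUME_BACKING_FILE_SIZE=24G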
I encountered the same issue. This is how I solved it.
Execute the "vgs" command to see the volume groups on the server. You will see something similar to the output below, showing the size (VSize) of each volume group.
$vgs
VG #PV #LV #SN Attr VSize VFree
stack-volumes 3 1 0 wz--n- 10.00g 10.00g
stratos1 1 2 0 wz--n- 931.09g 48.00m
This should be the volume group (stack-volumes) that OpenStack uses to create volumes and volume snapshots. In order to create more and larger volumes, you have to increase the capacity of this volume group. In this case the volume group to extend is "stack-volumes".
Let's create a 50GB backing file, attach it as a loop device, and partition it:
dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=50G
losetup /dev/loop3 cinder-volumes
fdisk /dev/loop3
And at the fdisk prompt, enter the following commands:
n       # new partition
p       # primary
1       # partition number 1
ENTER   # accept the default first sector
ENTER   # accept the default last sector
t       # change the partition type
8e      # Linux LVM
w       # write the table and exit
Create a physical volume:
root@stratos1:~# pvcreate /dev/loop3
Physical volume "/dev/loop3" successfully created
Extend the volume group "stack-volumes":
root@stratos1:~# vgextend stack-volumes /dev/loop3
Volume group "stack-volumes" successfully extended
Let's see the details of the available physical devices. You will see the new device listed:
root@stratos1:~# pvs
PV VG Fmt Attr PSize PFree
/dev/loop0 stack-volumes lvm2 a- 10.01g 10.01g
/dev/loop3 stack-volumes lvm2 a- 50.00g 50.00g
Now check the details of the volume groups by executing the vgdisplay command. You will see there is more free space in the volume group stack-volumes, since we added 50GB more.
root@stratos1:~# vgdisplay
--- Volume group ---
VG Name stack-volumes
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 303
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 3
Act PV 3
VG Size 60.00 GiB
PE Size 4.00 MiB
Total PE 23040
Alloc PE / Size 7680 / 30.00 GiB
Free PE / Size 15360 / 60.00 GiB
VG UUID bM4X5R-hC3V-zY5F-ZMVI-s7dz-Kpiu-tPQ2Zt
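One caveat worth noting (a sketch, assuming the backing file was created as /root/cinder-volumes as above): loop device mappings do not survive a reboot, so the file has to be re-attached before the Cinder volume service starts, for example from rc.local:
losetup /dev/loop3 /root/cinder-volumes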