std::system() strange behaviour on Linux Embedded - out-of-memory

I'm struggling with a strange behaviour of std::system()...
To check the availability of the call I use the following line:
int result = system("");
(if result is 0 the shell is operative, otherwise it is not)
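As an aside (my addition, not from the original question): the C standard also allows system(NULL) as a portability check; it returns nonzero if a command processor is available. A minimal sketch:

#include <cstdlib>
#include <cstdio>

int main()
{
    /* system(NULL) runs nothing; it only reports whether a command
       processor (a shell) is available on this system. */
    if (std::system(NULL) != 0)
        std::puts("shell available");
    else
        std::puts("no shell available");
    return 0;
}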
I noticed that after my process allocates a large amount of memory, the system() call stops working; when the process releases the memory, system() works again.
To check the repeatability of the issue I wrote a very simple program: every 1 ms it allocates a new vector bigger than the previous one, checks system() availability, and then deletes the vector (see the following code).
// requires #include <new> and <cstring>
unsigned char *dbg_mem = NULL;
tmr_debug.stop();
int sysSts = 0;

// Allocate nTmr_debug MB; the nothrow form returns NULL on failure
// instead of throwing std::bad_alloc, so the check below is meaningful.
dbg_mem = new (std::nothrow) unsigned char[nTmr_debug * 1024 * 1024];
if (dbg_mem == NULL)
{
    qDebug("Mem request fails: size =%d MB !!!!!", nTmr_debug);
}
else
{
    // Touch every byte so the memory is actually committed.
    memset((void *)dbg_mem, 0xFF, nTmr_debug * 1024 * 1024);
}

// Check whether the shell can still be spawned while the memory is held.
sysSts = system("");
qDebug("Debug Mem: %dMB\t\t result:%d", nTmr_debug, sysSts);

if (dbg_mem)
    delete[] dbg_mem;   // array form of delete to match new[]

// Retry after the memory has been released.
if (sysSts != 0)
{
    qDebug("****RETRY**** after delete\t result:%d", system(""));
}

nTmr_debug++;
tmr_debug.start(1);
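When system() returns -1, the failure happens before any shell command runs (typically in fork()/wait()), and errno reports the reason. A minimal sketch of how the failing call could be instrumented, assuming the same Qt slot as above (my addition, not part of the original test program):

// requires #include <cerrno>, <cstdio>, <cstring>, <cstdlib>
errno = 0;
int sts = system("");
if (sts == -1)
{
    // On this board an ENOMEM or EAGAIN here would point at fork()
    // being unable to commit enough memory for the child process.
    fprintf(stderr, "system() failed: errno=%d (%s)\n",
            errno, strerror(errno));
}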
The program returns no error until the vector size reaches 71 MB; then system() returns -1. If I release the memory, it works again:
Debug Mem: 62MB result:0
Debug Mem: 63MB result:0
Debug Mem: 64MB result:0
Debug Mem: 65MB result:0
Debug Mem: 66MB result:0
Debug Mem: 67MB result:0
Debug Mem: 68MB result:0
Debug Mem: 69MB result:0
Debug Mem: 70MB result:0
Debug Mem: 71MB result:-1
****RETRY**** after delete result:0
Debug Mem: 72MB result:-1
****RETRY**** after delete result:0
Debug Mem: 73MB result:-1
****RETRY**** after delete result:0
Debug Mem: 74MB result:-1
****RETRY**** after delete result:0
Debug Mem: 75MB result:-1
****RETRY**** after delete result:0
Debug Mem: 76MB result:-1
As you can see, I still have a lot of memory:
Debug Mem: 135MB result:-1
****RETRY**** after delete result:0
Debug Mem: 136MB result:-1
****RETRY**** after delete result:0
Debug Mem: 137MB result:-1
****RETRY**** after delete result:0
Debug Mem: 138MB result:-1
****RETRY**** after delete result:0
Debug Mem: 139MB result:-1
****RETRY**** after delete result:0
Debug Mem: 140MB result:-1
****RETRY**** after delete result:0
Out of memory: kill process 2127 (MemSizeChek) score 2374 or a child
Killed process 2127 (MemSizeChek)
Killed
cat /proc/meminfo:
MemTotal: 176732 kB
MemFree: 144536 kB
Buffers: 100 kB
Cached: 2440 kB
SwapCached: 0 kB
Active: 328 kB
Inactive: 3096 kB
Active(anon): 64 kB
Inactive(anon): 1044 kB
Active(file): 264 kB
Inactive(file): 2052 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 16 kB
Writeback: 0 kB
AnonPages: 912 kB
Mapped: 840 kB
Shmem: 224 kB
Slab: 3208 kB
SReclaimable: 1092 kB
SUnreclaim: 2116 kB
KernelStack: 296 kB
PageTables: 112 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 88364 kB
Committed_AS: 2936 kB
VmallocTotal: 647168 kB
VmallocUsed: 4108 kB
VmallocChunk: 634876 kB
I'm working on a custom board equipped with a Linux 2.6.32 kernel running on an ARM Texas Instruments DM3730.
Toolchain: arm-2009q1-203-arm-none-linux-gnueabi -> gcc version 4.3.3 (Sourcery G++ Lite 2009q1-203)
IDE: Qt Creator 2.5.2
Thanks in advance for any suggestions.

I have found the solution: std::system() internally uses fork(). When fork() splits the main process into two processes, the OS must be able to commit, for the child, the same amount of memory the parent is using, even though the child only exec's a shell command and never actually needs that memory.
To resolve this I changed the overcommit memory configuration:
echo 1 > /proc/sys/vm/overcommit_memory
and now it works!
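If changing the system-wide overcommit policy is not desirable, an alternative worth considering (my suggestion, not part of the original answer) is to avoid duplicating the parent's address space in the first place. On recent glibc versions posix_spawn() is implemented with vfork()/CLONE_VM, so the child shares the parent's memory until the exec and nothing extra has to be committed; older glibc versions may still fall back to fork(), so verify the behaviour on your toolchain. A sketch of a system()-like helper:

#include <cstddef>
#include <spawn.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

// Runs "sh -c cmd" and returns the wait() status (same encoding as
// system()'s return value), or -1 if the child could not be spawned
// or waited for.
static int run_shell(const char *cmd)
{
    char *const argv[] = { (char *)"sh", (char *)"-c", (char *)cmd, NULL };
    pid_t pid;

    if (posix_spawn(&pid, "/bin/sh", NULL, NULL, argv, environ) != 0)
        return -1;                 // spawn itself failed, e.g. ENOMEM

    int status = 0;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return status;
}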

Related

ceph cluster installed in K8S admin password is incorrect

I've installed a Ceph cluster in K8S using Rook; the service is running fine and PV/PVC is working as expected.
I was able to log in to the dashboard once, but after a while the password stopped working.
I used the command below to display the password, but it is still incorrect.
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
There is no obvious error message from the pod:
k logs -n rook-ceph rook-ceph-mgr-a-547f75956-c5f9t
debug 2022-02-05T00:09:14.144+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367973: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 767 B/s rd, 1 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:16.144+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367974: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 1.2 KiB/s rd, 2 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:16.784+0000 ffff53657400 0 [progress INFO root] Processing OSDMap change 83..83
debug 2022-02-05T00:09:17.684+0000 ffff44bba400 0 [volumes INFO mgr_util] scanning for idle connections..
debug 2022-02-05T00:09:17.684+0000 ffff44bba400 0 [volumes INFO mgr_util] cleaning up connections: []
debug 2022-02-05T00:09:17.860+0000 ffff3da6c400 0 [volumes INFO mgr_util] scanning for idle connections..
debug 2022-02-05T00:09:17.860+0000 ffff3da6c400 0 [volumes INFO mgr_util] cleaning up connections: []
debug 2022-02-05T00:09:17.988+0000 ffff40b72400 0 [volumes INFO mgr_util] scanning for idle connections..
debug 2022-02-05T00:09:17.988+0000 ffff40b72400 0 [volumes INFO mgr_util] cleaning up connections: []
debug 2022-02-05T00:09:18.148+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367975: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 767 B/s rd, 1 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:20.148+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367976: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 1.2 KiB/s rd, 2 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:21.788+0000 ffff53657400 0 [progress INFO root] Processing OSDMap change 83..83
debug 2022-02-05T00:09:22.144+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367977: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 853 B/s rd, 1 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:23.188+0000 ffff5765f400 0 [balancer INFO root] Optimize plan auto_2022-02-05_00:09:23
debug 2022-02-05T00:09:23.188+0000 ffff5765f400 0 [balancer INFO root] Mode upmap, max misplaced 0.050000
debug 2022-02-05T00:09:23.188+0000 ffff5765f400 0 [balancer INFO root] Some objects (0.333333) are degraded; try again later
ubuntu#:~$
There are no events in the namespace:
ubuntu@df1:~$ k get events -n rook-ceph
No resources found in rook-ceph namespace.
It seems like one can use the cephadm command to reset the password, but how can I log in to the pod as a root user?
ceph dashboard ac-user-set-password USERNAME PASSWORD
This cephadm command can't be executed as a non-root user:
ubuntu@:~$ k exec -it rook-ceph-tools-7884798859-7vcnz -n rook-ceph -- bash
[rook@rook-ceph-tools-7884798859-7vcnz /]$ cephadm
ERROR: cephadm should be run as root
[rook@rook-ceph-tools-7884798859-7vcnz /]$
I just ran "ceph dashboard ac-user-set-password admin -i 'File with password'" and my password changed.
I don't think cephadm works when exec'ing into the pod.

Total count of HugePages getting reduced from 6000 to 16 and Free pages to 0

I am testing a DPDK application with 2 MB hugepages, so I changed the kernel boot parameters (/proc/cmdline) of my Red Hat VM to start with 6000 huge pages, as shown below, on a VM with 32 GB of total memory.
grep Huge /proc/meminfo
AnonHugePages: 6144 kB
HugePages_Total: 6000
HugePages_Free: 6000
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
But now when I start my application, it reports that it is asking for 5094 MB of memory while only 32 MB is available, as shown below:
./build/app -l 4-7 -n 4 --socket-mem 5094,5094 --file-prefix dp -w 0000:13:00.0 -w 0000:1b:00.0
EAL: Detected 8 lcore(s)
EAL: Multi-process socket /var/run/.dp_unix
EAL: Probing VFIO support...
EAL: Not enough memory available on socket 0! Requested: 5094MB, available: 32MB
EAL: FATAL: Cannot init memory
EAL: Cannot init memory
EAL: Error - exiting with code: 1
Cause: Error with EAL initialization
And when I check the huge pages again, only 16 are shown, as below. Please let me know why my huge pages are being reduced from the initial 6000 to 16, which is why my application cannot get memory.
grep Huge /proc/meminfo
AnonHugePages: 6144 kB
HugePages_Total: 16
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
./dpdk-devbind --status
Network devices using DPDK-compatible driver
============================================
0000:13:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
0000:1b:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
Network devices using kernel driver
===================================
0000:04:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens161 drv=vmxnet3 unused=igb_uio *Active*
0000:0b:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens192 drv=vmxnet3 unused=igb_uio *Active*
0000:0c:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens193 drv=vmxnet3 unused=igb_uio *Active*
I also tried to increase the huge pages at run time, but it doesn't help: the count first increases, but after running the app it again reports that memory is not available.
echo 6000 > /proc/sys/vm/nr_hugepages
echo "vm.nr_hugepages=6000" >> /etc/sysctl.conf
grep Huge /proc/meminfo
AnonHugePages: 6144 kB
HugePages_Total: 6000
HugePages_Free: 5984
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
./build/app -l 4-7 -n 4 --socket-mem 5094,5094 --file-prefix dp -w 0000:13:00.0 -w 0000:1b:00.0
EAL: Detected 8 lcore(s)
EAL: Multi-process socket /var/run/.dp_unix
EAL: Probing VFIO support...
EAL: Not enough memory available on socket 0! Requested: 5094MB, available: 32MB
EAL: FATAL: Cannot init memory
EAL: Cannot init memory
EAL: Error - exiting with code: 1
Cause: Error with EAL initialization
grep Huge /proc/meminfo
AnonHugePages: 6144 kB
HugePages_Total: 16
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
It seems there was some issue with the CentOS 7 VM, as the huge page count was not making any sense, so I recreated the VM, which resolved the issue.
If the requirement of your application is to have 5094 pages of 2 MB, re-run your application with --socket-mem 5094,1.
But if your requirement is to have 5094 * 2, build the hugepages during boot by editing grub.conf with 'default_hugepagesz=2M hugepagesz=2M hugepages=10188'.
Note: there is a huge difference between DPDK 17.11 LTS and 18.11 LTS in how huge pages are mapped and used.

Exit status 223 (out of memory) when pushing to IBM Cloud

I am running into trouble deploying apps from my local dev environment. My cf push always fails with an "Exit status 223 (out of memory)" error (irrespective of the app).
I am certain both my IBM Cloud Org and my local environment have sufficient space to work with.
Here is the stack trace:
REQUEST: [2018-02-14T09:02:04-05:00]
GET /v2/apps/7426064e-0d6c-469e-8d6d-01e47728be01 HTTP/1.1
Host: api.ng.bluemix.net
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Connection: close
Content-Type: application/json
User-Agent: go-cli 6.32.0+0191c33d9.2017-09-26 / darwin
10% building modules 8/17 modules 9 active ...node_modules/fbjs/lib/containsNode.js
89% additionsets processing Hash: 9d08b2614d7a87cb99ad
Version: webpack 2.7.0
js/bundle.9d08b2614d7a87cb99ad.js 297 kB 0 [emitted] [big] main
js/bundle.9d08b2614d7a87cb99ad.js.map 466 kB 0 [emitted] main
index.html 304 bytes [emitted]
[0] ./~/react/index.js 190 bytes {0} [built]
[4] ./client/app/App.jsx 858 bytes {0} [built]
[5] ./~/react-dom/index.js 1.36 kB {0} [built]
[6] ./client/default.scss 1.03 kB {0} [built]
[8] ./~/css-loader!./~/sass-loader/lib/loader.js!./client/default.scss 193 kB {0} [built]
[9] ./~/css-loader/lib/css-base.js 2.26 kB {0} [built]
[12] ./~/fbjs/lib/containsNode.js 923 bytes {0} [built]
Time: 73789ms
Asset Size Chunks Chunk Names
[7] ./client/index.jsx 222 bytes {0} [built]
[10] ./~/fbjs/lib/EventListener.js 2.25 kB {0} [built]
[11] ./~/fbjs/lib/ExecutionEnvironment.js 935 bytes {0} [built]
[13] ./~/fbjs/lib/focusNode.js 578 bytes {0} [built]
[14] ./~/fbjs/lib/getActiveElement.js 912 bytes {0} [built]
[18] ./~/react-dom/cjs/react-dom.production.min.js 92.7 kB {0} [built]
[19] ./~/react/cjs/react.production.min.js 5.41 kB {0} [built]
[20] ./~/style-loader/addStyles.js 6.91 kB {0} [built]
+ 6 hidden modules
Child html-webpack-plugin for "index.html":
[0] ./~/lodash/lodash.js 540 kB {0} [built]
[1] ./~/html-webpack-plugin/lib/loader.js!./client/index.html 590 bytes {0} [built]
[2] (webpack)/buildin/global.js 509 bytes {0} [built]
[3] (webpack)/buildin/module.js 517 bytes {0} [built]
-----> Build failed
Failed to compile droplet: Failed to compile droplet: exit status 137
Exit status 223 (out of memory)
Staging failed: STG: Exited with status 223 (out of memory)
Stopping instance 0ee88ef2-8cd4-4096-9c3c-dee1870cf758
Destroying container
Successfully destroyed container
Has anyone run into this issue? Does anyone have any ideas on what might be wrong?
Here's what you could try:
Restarting the app
Re-installing the npm packages (npm install)
Updating your node and npm versions
Increasing the app space on IBM Cloud
Reducing the overall memory used by the app
Looking for possible memory leaks
Checking for possible issues with packages (webpack etc.)
Here's what worked for me:
In my NodeJS package.json, I added:
"engines": {
"node": ">= 7.0.0",
"npm": ">= 4.2.0"
}
I believe the issue was with IBM Cloud's default npm version versus the version I was using in my local environment. Once I specified the versions in my package.json, IBM Cloud was able to complete the build and deploy.
If people have a better understanding of what the error was and why this solution worked, please share.
Please check your application's available memory, and check whether your application has a memory leak.
The quickest thing to try is to increase the memory allocation for your app:
Log in to the Cloud's dashboard.
Select your app and increase MEMORY QUOTA.
This will restart the app.
Try pushing again.
The error is saying that staging has failed because the process used up too much memory. In short, whatever is running is exceeding the memory limit for the staging container.
Failed to compile droplet: Failed to compile droplet: exit status 137
Exit status 223 (out of memory)
Staging failed: STG: Exited with status 223 (out of memory)
You've got three options for working around this.
Cloud Foundry will set the memory limit for the staging container to either an operator-defined value or the memory limit you picked for your app, whichever is larger. I can't say what the operator-defined limit is for your platform, but you can probably work around this by simply setting a larger memory limit. Just try pushing again with larger values until it succeeds. After the push is successful, you can use cf scale -m to lower the memory limit back down to what you need at runtime.
The other option is to take a look at your build scripts, or whatever runs to stage your application, and work to reduce the memory they require. Making staging consume less memory should also resolve this issue.
Lastly, you can stage your app locally. To do this, run your build scripts on your local machine and push the final product. You can't skip the staging process altogether, but if everything is already built, staging usually becomes a no-op.
Hope that helps!

W3Total Cache Memory Error

I am getting an error on WordPress when I activate W3 Total Cache:
Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 196608 bytes) in /home/vika8467/public_html/blog/wp-content/plugins/w3-total-cache/lib/W3/ConfigKeys.php on line 1329
How do I fix this?

Error establishing a database connection EC2 Amazon

I hope you can help me. I cannot stand having to keep restarting my EC2 instance on Amazon.
I have two WordPress sites hosted there. My sites had always worked well until two months ago, when one of them started having this problem. I tried every fix I could find, and the only solution was to reconfigure it.
Just when everything was right with both sites, the second one started having the same problem. I think Amazon is clowning me.
I am using a free micro instance. If anyone knows what the problem is, please help me!
Your issue will be the limited memory allocated to t1.micro instances in EC2. I'm assuming you are using Amazon Linux here; if you are running a different distribution, your log and config files may be in different locations.
Make sure you are the root user.
Have a look at your MySQL logs in the following location:
/var/log/mysqld.log
If you see repeated instances of the following it's pretty certain that the 0.6GB of memory allocated to the micro instance is not cutting it.
150714 22:13:33 InnoDB: Initializing buffer pool, size = 12.0M
InnoDB: mmap(12877824 bytes) failed; errno 12
150714 22:13:33 InnoDB: Completed initialization of buffer pool
150714 22:13:33 InnoDB: Fatal error: cannot allocate memory for the buffer pool
150714 22:13:33 [ERROR] Plugin 'InnoDB' init function returned error.
150714 22:13:33 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
150714 22:13:33 [ERROR] Unknown/unsupported storage engine: InnoDB
150714 22:13:33 [ERROR] Aborting
You will notice in the log excerpt above that my buffer pool size is set to 12MB. This can be configured by adding the line innodb_buffer_pool_size = 12M to your MySQL config file /etc/my.cnf.
A pretty good way to deal with InnoDB chewing up your memory is to create a swap file.
Start by checking the status of your memory:
free -m
You will most probably see that your swap is not doing much:
total used free shared buffers cached
Mem: 592 574 17 0 15 235
-/+ buffers/cache: 323 268
Swap: 0 0 0
To start, ensure you are logged in as the root user and run the following command:
dd if=/dev/zero of=/swapfile bs=1M count=1024
Wait for a bit, as the command is not verbose, but you should see the following response after about 15 seconds, when the process is complete:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 31.505 s, 34.1 MB/s
Next set up the swapspace with:
mkswap /swapfile
Now enable the swap file:
swapon /swapfile
If you get a permissions warning you can ignore it, or address it by changing the swap file's permissions to 600 with the chmod command.
chmod 600 /swapfile
Now add the following line to /etc/fstab to enable the swap space on server start:
/swapfile swap swap defaults 0 0
Restart your MySQL instance:
service mysqld restart
Finally check to see if your swap file is working correctly with the free -m command.
You should see something like:
total used free shared buffers cached
Mem: 592 575 16 0 16 235
-/+ buffers/cache: 323 269
Swap: 1023 0 1023
Hope this helps.
