On my Ubuntu 12 VPS I am running a full Bitcoin node. When I first start it up it uses around 700 MB of memory. If I come back 24 hours later, free -m will look something like this:
             total       used       free     shared    buffers     cached
Mem:          4002       3881        120          0         32       2635
-/+ buffers/cache:       1214       2787
Swap:          255          0        255
But then if I clear "cached" using
echo 3 > /proc/sys/vm/drop_caches
and then do free -m again:
             total       used       free     shared    buffers     cached
Mem:          4002       1260       2742          0          1         88
-/+ buffers/cache:       1170       2831
Swap:          255          0        255
You can see that the cached column clears and I have far more free memory than it appeared I had before.
I have some questions:
What is this cached number? My guess is that it is files being cached for quicker access to the disk?
Is it okay to let it grow and use all my free memory?
Will other processes that need memory be able to evict the cached memory?
If not, should I routinely clear it using the echo 3 command I showed earlier?
Linux tries to use system resources as efficiently as possible. It caches data in otherwise unused RAM to reduce the number of I/O operations, thereby speeding up the system: buffers hold filesystem metadata, while the page cache holds the actual file data.
Before you clear the cache, run
sync
so that any dirty (not-yet-written) data is flushed to secondary storage first; dropping the caches only frees clean pages, so syncing first lets the kernel reclaim as much as possible and avoids losing unwritten data.
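Also note that the kernel evicts cached pages automatically whenever other processes need the memory, so routinely dropping the cache is rarely necessary; it is mainly useful for benchmarking. A minimal sketch of the whole sequence (run as root; the drop_caches path is the standard procfs interface):

free -m                              # note the "cached" column
sync                                 # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
free -m                              # "cached" should now be close to zero

Writing 1 instead of 3 drops only the page cache, while 2 drops only dentries and inodes.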
Could anyone offer any troubleshooting ideas or pointers on where/how to get more information on the difference between sys and real time from the output below?
It is my understanding that the command finished processing in the OS in 4 seconds, but then I/O was queued and processed for another 38.3 seconds (is that right?). It is somewhat of a black box to me at this point as to how to get additional details.
time prealloc /myfolder/testfile 2147483648
real 42.5
user 0.0
sys 4.2
You are writing 2 GB to disk on an HP-UX system; this is most likely using spinning disks (physical hard disks).
The system is writing 2 GiB / 42 s ≈ 51 MB/s, which doesn't sound slow to me.
On these systems you can use tools such as sar. Use sar -ud 5 to see CPU and disk usage during your prealloc command; you will likely see disk usage pegged at 100%.
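A rough sketch of how that could look (assuming HP-UX sar's interval/count syntax; the %busy, avwait and avserv columns come from the -d report):

# sample CPU (-u) and disk (-d) activity every 5 seconds, 20 times, in the background
sar -ud 5 20 > /tmp/sar_prealloc.out &
# run the command being measured
time prealloc /myfolder/testfile 2147483648
# afterwards, inspect the disk section: %busy near 100 with large avwait/avserv
# means the extra "real" time was spent waiting on the disk, not on the CPU
more /tmp/sar_prealloc.out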
I have a collection of about 30 MB (on the server). It is fully mirrored to the client. Why does the client use hundreds of megabytes of RAM (400 MB+) instead of 30 MB? I searched for information on minimongo memory efficiency but found nothing, so I'm asking here.
We have a .NET web application hosted on IIS 7.5.
This application used to run in a 32-bit application pool, but some time ago we switched to a 64-bit application pool.
Recently users have started complaining that after 1-2 minutes of idling their session is killed, which we confirmed today.
In the web.config file the session timeout is set to 60 minutes.
We have also noticed in Task Manager that the w3wp process for this application consumes about 2-2.4 GB of memory, so maybe the problem is that the application pool is trying to recycle some memory?
Recycling is set to fixed times: 21:00 and 04:00.
What could be the reason for these session problems?
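For reference, the session configuration in web.config looks roughly like this (a sketch; the mode attribute and exact layout are assumptions on my part):

<system.web>
  <!-- timeout is in minutes; mode="InProc" keeps sessions inside the w3wp process -->
  <sessionState mode="InProc" timeout="60" />
</system.web>

If the mode really is InProc, sessions live inside the worker process, so any application pool recycle (including one triggered by a private memory limit) wipes them regardless of the 60-minute timeout.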
EDIT:
I have inspected some counters and done a basic memory dump analysis, but I don't see any problems.
In the !eeheap output I see only about 10-30 MB of generation 2 objects per heap, and I have 24 heaps.
Heap 0 (0000000003083a90)
generation 0 starts at 0x00000000fff568b8
generation 1 starts at 0x00000000ffa6acf0
generation 2 starts at 0x00000000ff471000
ephemeral segment allocation context: none
         segment             begin         allocated              size
00000000ff470000  00000000ff471000  00000000ffff8de0  0xb87de0(12090848)
Large object heap starts at 0x00000006ff471000
         segment             begin         allocated              size
00000006ff470000  00000006ff471000  00000006ff7495c8  0x2d85c8(2983368)
Heap Size:       Size: 0xe603a8 (15074216) bytes.

Heap 1 (00000000030889c0)
generation 0 starts at 0x000000013fc36ed8
generation 1 starts at 0x000000013f949348
generation 2 starts at 0x000000013f471000
ephemeral segment allocation context: none
         segment             begin         allocated              size
000000013f470000  000000013f471000  000000014035e7b8  0xeed7b8(15652792)
Large object heap starts at 0x0000000703471000
         segment             begin         allocated              size
0000000703470000  0000000703471000  00000007035c5d58  0x154d58(1396056)
Heap Size:       Size: 0x1042510 (17048848) bytes.
EDIT: 2015-08-19 09:00
These are the counters for 2015-08-19 09:00.
What worries me is why the memory in Task Manager shows 2.5 GB when Bytes in all Heaps shows only about 100 MB, and why Private Bytes (216 MB) is bigger than Bytes in all Heaps.
The load in this current moment is about 40 users on this server.
EDIT 2015-08-19 14:09
After digging further, I now suspect there could be a problem with assemblies.
How can I check this with WinDbg when I'm on .NET 4.5, where there is no !dda command?
Try copying the running app to a different pool, but in the new one disable all assemblies/references you don't need, to see which one is causing it.
As you said, I think some assembly is crashing your application pool, maybe because it doesn't support 64-bit.
Try disabling all references you don't use, updating everything, etc.
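If you want to check the loaded assemblies directly in the dump, a rough sequence of WinDbg/SOS commands that should still work on .NET 4.5 (assuming a full dump of w3wp and a matching SOS) is:

.loadby sos clr        $$ load the SOS extension matching the loaded CLR
!eeheap -loader        $$ loader heap sizes per AppDomain (where assembly data lives)
!dumpdomain            $$ list each AppDomain and every assembly loaded into it
lm                     $$ native modules mapped into the process
!address -summary      $$ overall breakdown of the process address space

A large loader heap or an ever-growing assembly list (for example from dynamically generated XML serializer assemblies) would also explain Private Bytes being much larger than Bytes in all Heaps.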
I was getting 52 out of 100 for speed when using Google PageSpeed Insights for the website I'm hosting, and I am trying to improve the server response time, so I've been searching via Google. So far I've found that I need to do some tweaking in my httpd.conf file, such as KeepAlive and MaxRequestWorkers, since I use httpd 2.4.12. I'm a bit paranoid when it comes to making changes to my httpd.conf. Do I need the worker MPM to be able to use KeepAlive and MaxRequestWorkers, or can I just add them to the conf file?
I ran a quick command on my system (Ubuntu Server 12.04.5 LTS, 32-bit):
$ free -lm
             total       used       free     shared    buffers     cached
Mem:           999        926         72          0         11         73
Low:           869        798         70
High:          130        128          1
-/+ buffers/cache:        841        157
Swap:         5720        954       4766
I realize this is only 1 GB of RAM.
Any help would be appreciated. Thank you very much.
One thing I would suggest for decreasing the server response time is making use of WordPress caching plugins like WP Super Cache (https://wordpress.org/plugins/wp-super-cache/).
This is the quickest way to drastically bring down the server response time. You may have to load dynamic components via Ajax so that those sections are not served from the cached result.
These plugins are simple to use and give you good results on the speed front without much code tweaking.
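On the httpd.conf question itself: KeepAlive is a core directive that works under any MPM, and MaxRequestWorkers is understood by the prefork, worker and event MPMs alike, so you can add them directly to the conf file. A minimal sketch (the values are illustrative, not tuned for a 1 GB server):

KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100

# MaxRequestWorkers belongs with the MPM settings; keep it modest on 1 GB of RAM
<IfModule mpm_prefork_module>
    MaxRequestWorkers 50
</IfModule>

You can check which MPM is actually loaded with apachectl -V before deciding which IfModule block applies.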
We have installed BizTalk 2013 R2 and deployed a simple solution.
What we observed is that the memory consumed by the BizTalk service keeps growing.
It does not come down even after processing has completed.
Please find the details of the tests below.
BizTalk Solution (contains 2 schemas, 1 map and 1 orchestration).
Scenario 1
Test file size: 2 KB
No. of files: 250
Start memory: 12 MB
End memory: 122 MB
Scenario 2
Test file size: 2 KB
250 files processed 3 times, one after the other
Start memory: 13.2 MB
End memory: 160 MB
Scenario 3
Test file size: 2 KB
250 files processed 6 times, one after the other
Start memory: 13.2 MB
End memory: 215 MB
BizTalk will actually "cache" the assemblies in memory for a while.
This means that the next time the process runs it has a shorter start-up time, as the assembly is already in memory.
If the process is not called for a while, the assembly is unloaded from memory, unless you've configured it to stay loaded permanently, which is also possible.
That is also the reason you have to restart the BizTalk hosts when you update an assembly in the GAC: it forces the old assembly to be unloaded from memory, and the new one is loaded only when a process that needs it runs.
So what you need to do is monitor your BizTalk server for a longer period while it is not processing those files, and eventually you will see it release the memory again.
A tool for doing this monitoring and detecting memory leaks and other issues is the Performance Analysis of Logs (PAL) tool; it helps you log data from the performance counters and then analyses the results against thresholds to detect problems.
Also, did you try running any other applications/services just after you performed this test? The .NET Framework will trigger garbage collection when other processes demand memory, and you would probably see the usage come down.
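As a concrete way to do that longer-term monitoring (whether or not you then feed the log into PAL), you could record the relevant counters with typeperf. This is only a sketch and assumes the default in-process host, whose process name is BTSNTSvc (a 64-bit host instance shows up as BTSNTSvc64):

:: sample every 60 seconds for 24 hours and write the results to CSV
typeperf "\Process(BTSNTSvc)\Private Bytes" ^
         "\Process(BTSNTSvc)\Virtual Bytes" ^
         "\.NET CLR Memory(BTSNTSvc)\# Bytes in all Heaps" ^
         -si 60 -sc 1440 -o btsmem.csv -f CSV

If Private Bytes eventually flattens out or drops while # Bytes in all Heaps stays low, you are seeing the assembly caching and GC behaviour described above rather than a genuine leak.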