When we launch our game in AirConsole, RAM usage jumps very high and we get an "Out of memory" error. The only way we can actually test the game is to upload a Development build with exceptions enabled and the WebGL Memory Size set to 2047; that is the only configuration in which the game doesn't crash.
We used Chrome to monitor RAM usage. When we launch the game in AirConsole, usage climbs steeply (to roughly 2 GB); after the game finishes loading, it drops to about 1 GB.
I think this is directly connected to the huge JS file produced by a WebGL build, but that's only a guess.
How can we diagnose the problem and lower the RAM usage?
Well, you need to profile the page's memory usage with the tools the browser provides, just as you would for a desktop application:
JavaScript Memory Profiling - Google Chrome
MemoryBug Firefox extension
Diagnosing memory problems in your webpages (Windows) - MSDN
They vary in sophistication, but are basically the same as in every other garbage-collected environment. They can be divided into two basic types:
rough data: general dynamics, snapshots (including on events), usage by entity type (DOM, JS, plugins) - identifies the main hogs
fine data: GC statistics, object links - locates specific culprits
Please excuse my inexperience, this is my first time on the site.
I have a Dell PowerEdge R710 with 2 Xeon L5630 CPUs and 16 GB of RAM installed. I'm trying to host a Minecraft 1.7.10 Forge server that runs perfectly fine on my desktop but refuses to run properly on the server.
This machine is running Java 8 and runs perfectly otherwise. When running the application without mods, it loads up without a hitch; as I add more mods, it gets worse. As far as my (very, very limited) knowledge goes, the order of JVM arguments shouldn't matter, and it didn't on my desktop, but in order to get the application to even run I had to change the order in my .bat file. With all mods installed, the OutOfMemoryError occurs, along with a chunk-loading error, when the spawn is around 41% loaded.
This is the .bat file that I've made to start the server:
java -jar minecraft_server.jar -Xms512M -Xmx8192M nogui -XX:+HeapDumpOnOutOfMemory
This should load up perfectly fine; everything is compatible and tested on another machine, but the exact same setup will not run on the R710, reporting Out Of Memory with more than double the desktop's allocated memory.
First, use Task Manager or a similar utility to make sure the Java process is indeed using more than the amount you allocated with your arguments. Then I would recommend reading through this lovely post written by Cpw and posted on Reddit. If it doesn't help with your current situation, it should at least give you a bit more information on Minecraft's memory footprint.
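One detail worth checking in the .bat above: the `java` launcher treats anything that appears after `-jar minecraft_server.jar` as arguments to the application, not to the JVM, so as written the memory flags never reach the JVM and the server starts with the default heap. A corrected launch line would look like this (note that the heap-dump flag's full name is `-XX:+HeapDumpOnOutOfMemoryError`):

```shell
# JVM options must come BEFORE -jar; everything after the jar path is
# passed to the program itself (here, only "nogui" belongs there).
java -Xms512M -Xmx8192M -XX:+HeapDumpOnOutOfMemoryError -jar minecraft_server.jar nogui
```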
In a normal situation where you would be running Minecraft as a local server from your computer I would suggest taking a look at how much memory your GPU is taking up. Since you are running a server this is not relevant, but might still be useful to someone who stumbles upon this post so I will leave it here:
Your graphics card is probably the biggest address hog. Today's graphics adapters often contain a gigabyte or more of RAM, and every one of those bytes needs an address. To be fair, I doubt that many of those multi-gigabyte graphics cards are in 32-bit PCs, but even a 512mb video card will take a sizeable bite out of 4GB.
I am not quite familiar with running dedicated servers, but another important thing worth mentioning: if you are on a 32-bit operating system, you will only be able to take advantage of 4 GB of your RAM due to architecture constraints.
Every byte of RAM requires its own address, and the processor limits the length of those addresses. A 32-bit processor uses addresses that are 32 bits long. There are only 4,294,967,296, or 4GB, possible 32-bit addresses.
If all else fails you should try to seek help on one of the available Discord channels dedicated to Minecraft modding. This should be a rule in general actually, especially for general purpose problems that are difficult for others to reproduce. Here is a small list of three Discord communities dedicated to Minecraft modding that I have experience with:
Modded Minecraft - The one with the most traffic, so it can be a bit more difficult for your question to get noticed on busy days, but definitely the best-moderated one on this list.
Modding Help - The smallest of the three. I don't have much experience with this one.
Mod Dev Cafe - This one has a decent size and a pretty good response rate, but be prepared for the usual facepalms and other unpleasantness common to younger admins and moderators. However if you are willing to look past that this is a good choice.
The screenshot below shows the change in cache related state of my zope instance over time (3 months so far).
We've increased the cache size several times over that period, from 3,000 all the way up to 6,000,000. With the exception of one recent blip, we have hit a ceiling of 30 million (not sure what parameter that is; see the 'by year' graph). This happened at a cache size of about 1,000,000, after which changes to the cache size seemed to have no effect on the cached objects or the memory usage of Zope.
The Zope/Plone process moved from using about 500 MB of memory to using 3 GB (we have 8 GB on this server).
What I expected was that sliding the cache size upwards would let Zope take advantage of more of the available server memory, but it is stuck at 3 GB (out of a potential 8 GB on the server).
Is there another setting that might be "capping" things at 3 GB?
At a guess, your OS is limiting per-process memory size.
In a bash shell, check ulimit -v to see if a virtual memory limit is set. See man bash for the full list of options for that command.
See Limit memory usage for a single Linux process for more information on how to use ulimit.
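As a concrete sketch (bash; the 8 GB figure below is an arbitrary example, not a recommended value):

```shell
# Show the current per-process virtual memory limit, in 1 kB blocks;
# "unlimited" means the OS is not capping processes this way.
ulimit -v

# Raise the soft limit for this shell (and child processes such as
# zope) to 8 GB; the hard limit must permit this.
ulimit -S -v 8388608
```

Note that limits set this way apply only to the current shell session; persistent limits usually live in /etc/security/limits.conf or the service's init configuration.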
I don't know what's going on with your server's memory, but you are doing this the wrong way: you simply can't hold 6 million objects in memory; that's impossible. On a typical Plone 4.x installation you would need somewhere between 50 GB and 150 GB of RAM for that.
First, to solve this we need more information: which Plone version are you using? How many objects do you have in your database? How many threads? What's the architecture of your server, 32-bit or 64-bit?
Second, be sure to install the latest version of the munin.zope plugin to collect reliable information about your server (hat tip to Leo Rochael).
Third, read this thread on the core developers list to understand how to calculate a more realistic number for your cache size (hat tip to Hanno Schlichting).
Fourth, move the number up slowly and take time to monitor the results; check the total number of objects in memory and avoid memory swapping at all costs. You can stop increasing the cache size once you see the number of objects stay below your goal value. Remember: you're never going to have all the objects in memory; people tend to visit only a subset of your content.
Fifth, if you are on Plone 4.x, test DateTime 3.0.3 (on a staging server before putting it into production); this could further decrease your memory consumption by up to 40% (somebody told me it now also works on Plone 3.x, but I haven't checked it myself).
Sixth, share your results on the Plone setup list!
good night and good luck!
A 32-bit platform (I don't know if this is limited to Intel) is limited to 3 GB per process. That is because it can only address up to 4 GB per process, and the bottom 1 GB is reserved for the kernel. PAE does allow you to access up to 64 GB, but there are per-process limitations, which is what you are running into here. You really cannot run a high-traffic Plone site on a 32-bit platform anymore. Quite often the simplest solution is to upgrade your OS to the 64-bit version; unless you have seriously ancient hardware, it should already be capable of running x86-64.
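On Linux or macOS you can confirm both points from a shell (Windows users can check the System control panel instead):

```shell
# Prints 64 if the running userland is 64-bit; 32 means you are still
# subject to the per-process ceiling described above.
getconf LONG_BIT

# Hardware name: x86_64 (or amd64) means the CPU can run a 64-bit OS,
# even if the currently installed OS is 32-bit.
uname -m
```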
I have a website application running in its own application pool on IIS 7.0. The application is an ASP.NET MVC 3 website.
I have noticed that the memory usage of this application's corresponding w3wp IIS worker process is quite high (around 800 MB, with some fluctuation).
I am trying to diagnose the problem and have tried the following:
I disabled output page caching for the website at the IIS level and then recycled the application pool. This causes the w3wp process to restart. The memory usage of this process then slowly creeps up to around 800 MB; it takes around 30 seconds to do so. There are no page requests being handled at this time. When I restart the website from IIS, the memory size of the process does not change.
I have tried running a debug copy of the application from VS 2010, there are no problems with memory usage.
Some ideas I have/questions are:
Is this problem related to the website's code? Given that the memory rockets up before any page requests have been sent or handled, I would assume it is NOT a code problem?
The application built in MVC has no handling of caching written into it.
The website uses real-time displaying of data, it uses ajax requests periodically, and is generally left 'open' for long periods of time.
Why does memory usage rocket up after the application is recycled while no user requests are being sent? Is it loading old cache information into its memory from disk?
The application does NOT crash; I'm just concerned about the memory usage. It is not that big of a website...
Any ideas/help with getting to the bottom of this problem would be appreciated.
The best thing to do, if you can afford to use a debugger, is to install the Windows Debugging Tools and use something like WinDbg with SOS.dll to figure out exactly what is in memory.
Once you've installed the tools, you can:
Launch Windbg.exe running elevated (as Administrator)
Use "File->Attach To Process" and choose the w3wp.exe for the app you are trying to figure out. If there are several, you can use Task Manager with the command-line column added to see the PID, or use IIS Manager->Worker Processes to identify it, and then choose that process in WinDbg.
Run:
.loadby sos clr
!dumpheap -stat
At that point you should be able to see all types sorted by memory consumption, so you can start with the ones at the bottom. (I would recommend excluding String and Object, since those are usually a side effect and not the cause.)
Use "!dumpheap -type type-here" to find the instances, and use !gcroot on those to figure out why they are in memory: a static field, a leaked event handler, WCF channels not disposed, and the like are common sources.
I just looked at my server: my pools use 900-1000 MB of virtual size and 380 MB of working set. My sites have run smoothly without problems for some years now, and I have checked them from all sides. My pool never recycles, and the server runs continuously until the next update with a stable 40% of physical memory free.
If your memory is not continuously growing, then this memory is your code plus the data you declare as static or const, the strings, and any cache inside your application.
You can use process explorer to see the working and the virtual size memory.
You can also run a profiler against your code to see if you have a memory leak or another issue. Find one via Google: https://www.google.com/search?hl=en&q=asp.net+memory+profiler.
It probably doesn't apply here, but I'll throw it in for good measure. Recently I had a problem where memory would climb and max out when the GC really could have cleaned up 80% of it. The problem: the process thought it had about 2 GB more memory than it actually did, so the GC was quite lazy. (It was due to a VMware bug: Windows reported 8 GB but there were physically only 6.4 GB.) See this blog post: http://www.worthalook.net/2014/01/give-back-memory/
Something that might help: if you "rewrite" (open and save) the web.config, your application will reset; monitor the memory usage from that point. If it keeps growing during usage, that could mean a memory leak or excessive caching. You might be able to identify which actions on your site lead to memory increases. Over a long period, the memory usage of an application should be stable.
An ASP.NET web app running on IIS6 periodically shoots the CPU up to 100%. It's the W3WP that's responsible for nearly all CPU usage during these episodes. The CPU stays pinned at 100% anywhere from a few minutes to over an hour.
This is on a staging server and the site is only getting very light traffic from testers at this point.
We've run ANTS profiler on the server, but it's been unenlightening.
Where can we start finding out what's causing these episodes and what code is keeping the CPU busy during all that time?
Standard Windows performance counters (look for other correlated activity, such as many GET requests, excessive network or disk I/O, etc); you can read them from code as well as from perfmon (to trigger data collection if CPU use exceeds a threshold, for example)
Custom performance counters (particularly to time off-box requests and other calls whose execution time is uncertain)
Load testing, using tools such as Visual Studio Team Test or WCAT
If you can test on or upgrade to IIS 7, you can configure Failed Request Tracing to generate a trace if requests take more than a certain amount of time
Use logparser to see which requests arrived at the time of the CPU spike
Code reviews / walk-throughs (in particular, look for loops that may not terminate properly, such as if an error happens, as well as locks and potential threading issues, such as the use of statics)
CPU and memory profiling (can be difficult on a production system)
Process Explorer
Windows Resource Monitor
Detailed error logging
Custom trace logging, including execution time details (perhaps conditional, based on the CPU-use perf counter)
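The logparser suggestion above can also be approximated with standard text tools. A hedged sketch, assuming the default IIS W3C layout where cs-uri-stem is the fifth field (check the #Fields: header in your own logs, as the position varies with logging configuration):

```shell
# Count requests per URL in IIS W3C logs; the URLs that dominate the
# spike window are the first suspects. Skips the # comment/header lines.
awk '!/^#/ { print $5 }' u_ex*.log | sort | uniq -c | sort -rn | head
```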
Are the errors happening when the AppPool recycles? If so, it could be a clue.
It's not much of an answer, but you might need to go old school and capture an image snapshot of the IIS process and debug it. You might also want to check out Tess Ferrandez's blog: she is a kick-a** Microsoft escalation engineer, and while her blog focuses on debugging ASP.NET on Windows, it is relevant to Windows debugging in general. If you select the ASP.NET tag (which is what I've linked to) you'll see several similar items.
If your CPU is spiking to 100% and staying there, it's quite likely that you either have a deadlock scenario or an infinite loop. A profiler seems like a good choice for finding an infinite loop. Deadlocks are much more difficult to track down, however.
Process Explorer is an excellent tool for troubleshooting. You can try it for finding the problem of high CPU usage. It gives you an insight into the way your application works.
You can also try Procdump to dump the process and analyze what really happened on the CPU.
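For example, a sketch of a Procdump invocation (Windows, elevated prompt; the thresholds are illustrative, not prescriptive):

```shell
:: Write up to 3 full memory dumps of w3wp.exe whenever its CPU usage
:: stays above 80% for 5 consecutive seconds, for offline analysis.
procdump -ma -c 80 -s 5 -n 3 w3wp.exe
```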
Also, look at your perfmon counters. They can tell you where a lot of that CPU time is being spent. Here are links to the most common counters to use:
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/852720c8-7589-49c3-a9d1-73fdfc9126f0.mspx?mfr=true
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/be425785-c1a4-432c-837c-a03345f3885e.mspx?mfr=true
We had this happen with a recursive query that was dumping tons of data to the output. Have you double-checked that everything exits and that no infinite loops exist?
You might try to narrow it down with a single page. We found ANTS to be not much help in that same case either; what we ended up doing was running the site, hitting a page, watching the CPU, hitting the next page, watching the CPU. Very methodical and time-consuming, but if you can't find it with some code tracing you might be out of luck.
We were able to use IIS log files to track it to a set of suspect pages.
Hope that helps!
This is a guess at best, but perhaps your development team is building and deploying the application in debug mode instead of release mode. This causes .pdb files to be generated, and the implication is that your application takes up additional resources to collect state and debugging information during execution, causing more processor utilization.
So it would be simple enough to ensure that they are building and deploying in release mode.
This is a very old post, I know, but this is also a common problem. All of the suggested methods are very nice, but they will always point to a process; chances are we already know that our site is causing the problems, and we just want to know which specific page is spending too much time in processing.
The most precise and simple tool in my opinion is IIS itself.
Just click on your server in the left pane of IIS.
Click on 'Worker Processes' in the main pane; you can already see which application pool is taking too much CPU.
Double-click on that line (refresh by clicking 'Show All' if necessary) to see which pages consume too much CPU time (the 'Time elapsed' column) in this pool.
If you identify a page that takes time to load, use SharePoint's Developer Dashboard to see which component takes time.
In a number of situations as a programmer, I've found that my compile times are slower than I would like, and I want to understand the reason and fix them. Particular language tricks (yes, I'm using C/C++) have already been discussed, and we apply many of them. I've also seen this question and realize it's related. What I'm more interested in is what tools people use to diagnose hardware/system bottlenecks in the build process. Is there a standard way to prove "Disk reads/writes are too slow for our builds - we need SSDs!" or "The anti-virus settings are killing our build times!", etc.?
Resources I've found, none directly related to compiling performance diagnosis:
A TechNet article about using PerfMon (Quite good, close to what I'd like)
This IBM link detailing some PerfMon information, but it's not specific to compiling and appears somewhat out of date.
A webpage specifically describing diagnosis of avg disk queue length
Currently, diagnosing a slow build is very much an art, and my tools of choice are:
PerfMon
Process Explorer
Process Monitor
Push hard enough to get a machine to "just try it". (Basically, trial and error.)
What do others do to diagnose system-level build performance bottlenecks?
Can we come up with a list of PerfMon or Process Explorer statistics to watch, with thresholds for what's "acceptable" on a modern machine?
PerfMon:
CPU -> % Processor Time
MEMORY -> Pages/sec
DISK -> Avg. Disk Queue Length
Process Explorer:
CPU -> CPU
DISK -> I/O Delta Total
MEMORY -> Page Faults
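On Windows, the PerfMon counters above can also be sampled from a command prompt with typeperf. The counter paths below follow the standard naming, but verify them on your machine with `typeperf -q` first:

```shell
:: Sample the three suggested counters every 5 seconds, 12 samples
:: total, so a build run can be correlated with resource pressure.
typeperf "\Processor(_Total)\% Processor Time" ^
         "\Memory\Pages/sec" ^
         "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 5 -sc 12
```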
I recently resolved a "too slow build" issue with Eclipse and Spring. For me the solution was the Vista Resource Monitor (which identified CPU spiking, though not consistently high, plus quite a bit of disk activity). I then used Procmon from Sysinternals to identify exactly which files were being heavily accessed.
Part of our build process also involves checking external Maven (binary file) repositories for updates on every build. I disabled that check (which also gives me complete control over when I update those dependencies). If you have resources external to the build machine, benchmark how long it takes to access them (source control, Maven, etc.).
Since I'm stuck on 32-bit Vista for now, I decided to create a ramdisk from the 700 MB of non-addressable memory (the PC has 4 GB; Vista only exposes 3.3 GB) and place the heavily accessed files identified by Procmon on the ramdisk, using a nice trick of creating drive junctions to make the move transparent to my IDE. For details see here.
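The junction trick can be sketched as follows (Windows cmd, elevated; all paths are hypothetical examples, with R: standing in for the ramdisk drive):

```shell
:: Move the hot directory onto the ramdisk, then create a directory
:: junction at the old location so the IDE's configured paths still work.
move C:\projects\app\headers R:\headers
mklink /J C:\projects\app\headers R:\headers
```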
I used Filemon to see which header files a C++ build was opening most often, then:
Added "#ifndef" include guards so each header file is only processed once
Used precompiled headers
Combined some small header files
Reduced the number of header files included by other header files by tidying up the code.
However, these days I would start with a ramdisk and/or an SSD; opening lots of header files still uses lots of CPU time, though.