I am using IBM JDK 1.7 (to support TLS ciphers) for a Struts-based application deployed with embedded Tomcat.
We keep running into memory leaks (OOM) that generate almost 30 GB of dumps. This has become a routine event.
We have tried increasing the heap memory by including
wrapper.java.additional.1="-XX:MaxPermSize=256m -Xss2048k" in the wrapper.conf,
but this didn't help much.
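Note that -XX:MaxPermSize and -Xss set the permanent-generation and thread-stack sizes, not the object heap (and IBM's J9 VM has no permanent generation, so the first flag is most likely ignored there); the heap ceiling is raised with -Xmx. A rough wrapper.conf sketch, assuming the Tanuki Java Service Wrapper and a 4 GB ceiling picked purely for illustration:

    # each wrapper.java.additional.N entry usually needs to hold a single JVM argument
    wrapper.java.additional.1=-XX:MaxPermSize=256m
    wrapper.java.additional.2=-Xss2048k
    wrapper.java.additional.3=-Xmx4096m
    # many wrapper versions also accept dedicated heap properties (values in MB):
    # wrapper.java.initmemory=1024
    # wrapper.java.maxmemory=4096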
Try using Memory Analyzer; you can follow the instructions here to download and install it:
https://www.ibm.com/developerworks/java/jdk/tools/memoryanalyzer/
It should provide an overview of your heap usage.
I'd recommend starting with the dominator tree view to see which objects are responsible for keeping data alive on the heap. You can also run various reports which analyse the heap for you.
You should have core files (.dmp) and heap dumps (.phd). The core files will be large, but they may be faster to access and also contain the values of basic types in objects and strings; the .phd files contain only object sizes and the connections between them. It may be easier to relate what you are seeing back to your code if you start with the core file.
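If the dumps are too large to open comfortably in the GUI, Memory Analyzer also ships a headless parser that can pre-index a dump and produce the Leak Suspects report. A rough sketch (the dump file name is whatever your JVM produced; ParseHeapDump.bat is the Windows equivalent):

    # pre-parse the heap dump and generate the Leak Suspects HTML report
    ./ParseHeapDump.sh heapdump.phd org.eclipse.mat.api:suspects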
As the title says, I've got tons of free memory, but I keep getting OutOfMemoryExceptions when processing traces and calling properties on Data Sources. Why is this happening?
The ETL file format is designed to be very space-efficient, and it also supports optional compression. Because of this, taking the data from an .etl file and transforming it into our more useful structures can often require significantly more memory than the original size of the file. However, there are two steps that can be taken to make OutOfMemoryExceptions less likely:
1. Don't use data sources you don't need. Even if none of the properties on a data source are called by your code, simply turning it on by calling its Use method will result in the data source processing events and preparing data for consumption.
2. Ensure your program is running as a 64-bit process. The default Visual Studio C# project settings compile your program targeting AnyCPU but prefer running it as a 32-bit process. Unchecking the "Prefer 32-bit" option in your project's Build properties, or switching your project's build configuration to x64, will cause your program to run as a 64-bit process (see the project-file sketch below).
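For illustration, the corresponding project-file settings look roughly like this (Prefer32Bit and PlatformTarget are the standard MSBuild property names; your configuration blocks will differ):

    <!-- .csproj excerpt: keep AnyCPU but stop preferring 32-bit -->
    <PropertyGroup>
      <PlatformTarget>AnyCPU</PlatformTarget>
      <Prefer32Bit>false</Prefer32Bit>
    </PropertyGroup>
    <!-- or target 64-bit outright -->
    <PropertyGroup>
      <PlatformTarget>x64</PlatformTarget>
    </PropertyGroup>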
We all know that core dumps are an essential diagnostic tool for analysing various processes in Unix. I know that jstack and gcore are both used for generating javacore files or core dumps, but I have a doubt: gcore is mainly used for processes and jstack is used for threads.
From an operating-system perspective, processes and threads, though interrelated (a process is made up of threads), are quite different from each other with respect to memory, speed and execution. So is it that gcore will diagnose the process and jstack will analyse the threads in that process?
gcore acts at the OS level and gives you a dump of the native process as it is currently running. From a Java point of view, it is not really readable.
jstack gets you the stack trace at the VM level (the Java stack) of every thread your application has. From that you can find out what Java code is really being executed at a given point.
Clearly, gcore is almost never used (too low-level, native code...). Only a really strange issue with a native library, or something like that, might need this kind of tool.
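As a rough illustration (the PID 12345 is hypothetical):

    jstack 12345 > threads.txt   # Java-level stack traces of every thread in the JVM
    gcore 12345                  # native core dump (core.12345) of the whole process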
There is also jmap, which can generate an hprof file containing the heap data from your VM. A tool like the Memory Analyzer Tool can open the hprof, and you can drill down into what was going on (on the memory side).
If your VM crashes because of an OutOfMemoryError, you can also set a parameter to get the hprof when the event occurs. It helps to understand why (too many users, a DB query that fetches too much data...).
The last thing is that you can add a debug option when you start your VM, so that you can connect to it and debug the running process. That can help if you have some strange issue that you are not able to reproduce in your local environment.
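Sketches of those three points (the PID, paths and port are placeholders; the heap-dump-on-OOM flag shown is the HotSpot one, while IBM's VM uses -Xdump options instead):

    # heap snapshot of a running JVM, openable in the Memory Analyzer Tool
    jmap -dump:format=b,file=heap.hprof 12345

    # write an .hprof automatically when an OutOfMemoryError is thrown
    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar app.jar

    # start the VM with a debug agent so a remote debugger can attach on port 5005
    java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar app.jar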
So I have a series of ASP.NET web apps, each of which is assigned its own AppPool.
This results in several instances of w3wp.exe residing in memory.
I've been trying to figure out why a couple of them steadily increase their use of RAM over the course of a day.
I found this suggestion that the "Debug Diagnostics Tool" might be of use.
I downloaded it, installed it, and attempted to use it to create a full dump of the process.
For some reason it failed.
However, afterwards I noticed that the memory used (private bytes) had dropped from nearly 600 MB down to ~90 MB.
Did DDT cause the app to restart (or recycle), or did some form of garbage collection get invoked and cause the App to release a bunch of memory?
Windows Web Server 2008 R2 (x64) & .NET Framework 4.5
It is a classic ASP.NET web site (not a web project; the code is in the App_Code directory and compiled when the site is launched).
It depends on many referenced DLLs in the /Bin directory.
For those DLLs where I have source code, I compile them targeting the "x64" platform.
And I have some other DLLs without source code (mysql.data.dll, etc.), which are compiled as "Any CPU".
I modified them with EditBin.exe to ensure the IMAGE_FILE_LARGE_ADDRESS_AWARE flag is set in their PE headers.
According to this table:
http://msdn.microsoft.com/en-us/library/aa366778%28VS.85%29.aspx#memory_limits
an x64 process can't use more than 2 GB of memory unless IMAGE_FILE_LARGE_ADDRESS_AWARE is set.
How can I verify whether it works?
Is there any place where I can see the memory limitation of a running x64 process?
I don't know if you can actually "see the memory limitation" explicitly stated (assuming you don't trust MS's own documentation that you cited) other than getting your hands on and digging into the IIS and/or ASP.NET source code.
That being said, you could try stress testing the site, and monitor memory consumption (via Task Manager or Process Monitor), to see if it exceeds 2GB. I would recommend tinyget, part of the IIS 6 Resource Kit, which can still be used with IIS 7.
tinyget -svr:localhost -uri:/<your site> -loop:200 -threads:20
You'll have to play with the loop and thread counts to try to push it over 2 GB. I would expect to see a System.OutOfMemoryException as you approach about 1.4 GB of combined physical and virtual private bytes. You may want to create a stress-test function in the site itself, for testing purposes only of course, which will help you reach this limit by using the exact opposite of good practices. You can read more about what would lead to a System.OutOfMemoryException here, and then do the things they recommend against. For example, add a test method that just concatenates strings in a very large loop.
Try procexp from Sysinternals, here. This application can monitor .NET-specific metrics.
Nevertheless, according to your link, you should be able to address at least 8 GB.
Please keep in mind that setting IMAGE_FILE_LARGE_ADDRESS_AWARE on the DLLs is irrelevant in your case. You could have all your components compiled to target "Any CPU"; the only flag that is checked is the flag of the executable file.
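To see which binaries actually carry the flag, you can inspect the PE header; a rough sketch using the Visual Studio command-line tools (the w3wp.exe path assumes the default IIS location, and the second command is only for native images you build yourself, with a hypothetical file name):

    REM prints "Application can handle large (>2GB) addresses" when the flag is set
    dumpbin /headers c:\windows\system32\inetsrv\w3wp.exe | findstr /i "large"

    REM sets the flag on a native executable (example file name)
    editbin /LARGEADDRESSAWARE MyNativeHost.exe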
I have created a memory dump of an ASP.NET process on a server using the following command: .dump /ma mydump.dmp. I am trying to identify a memory leak.
I want to look at the dump file in more detail on my local development PC. I read somewhere that it is advisable to debug on the same machine as the one on which you create the dump file. However, I have also read that some developers do analyse dump files on their local development PCs. What is the best approach?
I notice that when I create a dump file using the command above, the w3wp process memory increases by about 1.5 times. Why is this? I suppose this should be avoided on a live server.
Analyzing on the same machine can save you from SOS loading issues later on. Unless you are familiar with WinDbg and SOS, you will find those issues confusing and frustrating.
If you have to use another machine for the analysis, make sure you read this blog post carefully, http://blogs.msdn.com/b/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx, as it shows you how to copy the necessary files from the source machine (where the dump is captured) to the target machine (the one where you launch WinDbg).
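Once the matching files are copied over, the WinDbg side looks roughly like this (the paths are placeholders, .loadby sos clr applies to .NET 4, and the blog post covers the exact mscordacwks renaming):

    $$ point WinDbg at the Microsoft symbol server plus the files copied from the server
    .sympath srv*c:\symbols*https://msdl.microsoft.com/download/symbols
    .exepath+ c:\dumpfiles
    $$ load SOS next to the CLR in the dump (.NET 4; use "mscorwks" on 2.0/3.5)
    .loadby sos clr
    $$ quick sanity check that the data-access layer resolves
    !threads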
For your second question: because you use WinDbg to attach to the process directly and use the .dump command to capture the dump, the target process is unfortunately modified. That is not easy to explain in a few words. The recommended way is to use ADPlus.exe or Debug Diag; even procdump from Sysinternals is better. Those tools are designed for dump capture and have minimal impact on the target processes.
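For example (the PID and output paths are placeholders):

    REM full user-mode memory dump, with far less disturbance of the target process
    procdump -ma 1234 c:\dumps\w3wp_full.dmp

    REM or let ADPlus take hang-mode dumps of every w3wp.exe instance
    adplus -hang -pn w3wp.exe -o c:\dumps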
For memory leaks from unmanaged libraries, you should use the memory leak rule of Debug Diag. For managed memory leaks, you can simply capture hang dumps when memory usage is high.
I am no expert on WinDbg, but I once had to analyse a dump file of my ASP.NET site to find a StackOverflowException.
While I got a dump file of my live site (I had no choice, since that was what was failing), I originally tried to analyse that dump file on my local dev PC but ran into problems when trying to load the CLR data from it. The reason was that the exact version of the .NET Framework differed between my dev PC and the server: both were .NET 4, but I imagine my dev PC had some cumulative updates installed that the server did not. The SOS module simply refused to load because of this discrepancy. I actually wrote a blog post about my findings.
So, to answer part of your question: it may be that you have no choice but to run WinDbg on your server; at least that way you can be sure the dump file will match your environment.
It is not necessary to debug on the actual machine unless the problem is difficult to reproduce on your development machine.
As long as you have the PDBs with the private symbols and the correct version of .NET installed, the symbols should be resolved and the call stacks displayed correctly.
In terms of looking at memory leaks, you should enable the GFlags user stack trace database and take memory dumps at two intervals, so you can compare the memory usage before and after the action that provokes the leak. Remember to disable GFlags afterwards!
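For reference, turning the user-mode stack trace database on and off looks roughly like this (run from an elevated prompt; the image name should match whichever process is leaking):

    REM enable stack-trace collection for new w3wp.exe processes
    gflags /i w3wp.exe +ust
    REM ...capture and compare the dumps, then turn it off again
    gflags /i w3wp.exe -ust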
You could also run DebugDiag on the server, which has automated memory-pressure analysis scripts that work with .NET leaks.