We are running a web application with a 6 GB heap assigned to it, but after some time it throws an out-of-memory error.
The exception stack trace is given below.
Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2" java.lang.OutOfMemoryError: PermGen space
00:46:52,678 WARN ThreadPoolAsynchronousRunner:608 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#772df14c -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
00:46:52,682 WARN ThreadPoolAsynchronousRunner:624 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#772df14c -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 0
Active Tasks:
Pending Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#3e3e8a19
Pool thread stack traces:
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1,5,]
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0,5,]
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2,5,]
Exception in thread "Task-Thread-for-com.mchange.v2.async.ThreadPerTaskAsynchronousRunner#6bbc0209" java.lang.OutOfMemoryError: PermGen space
Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1" java.lang.OutOfMemoryError: PermGen space
Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0" java.lang.OutOfMemoryError: PermGen space
Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2" java.lang.OutOfMemoryError: PermGen space
Is this a problem with the c3p0 connection pool?
The last few messages indicate that you are running out of PermGen memory. The 6 GB you mention are assigned to the heap, which is a separate region. If you have enough physical memory, double the memory assigned to PermGen. If the problem persists (and appears after roughly the same amount of time), revert the change and consider analyzing the heap with appropriate tools.
Use
-XX:PermSize=1024m -XX:MaxPermSize=1024m
for a 1 GB (1024 MB) allocation. You may need more.
Rather than guessing, my advice is to use a memory profiler to examine the contents of the heap when you run out of memory. This will give you a definitive answer as to what is taking all the memory, and will let you make informed choices about what steps to take next.
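If you want a dump to feed into such a profiler (for example Eclipse MAT or VisualVM), the simplest route is the -XX:+HeapDumpOnOutOfMemoryError JVM flag; you can also trigger one programmatically. The sketch below is HotSpot-specific (com.sun.management is not a standard API) and the output path is just an example:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
        // Write a snapshot of live objects to a .hprof file that MAT/VisualVM can open.
        diagnostic.dumpHeap("/tmp/app-heap.hprof", true);
    }
}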
You are getting a PermGen out-of-memory error. This is more difficult to resolve than a normal heap out-of-memory error, and it is not linked to your use of c3p0.
You need to do memory profiling and see what is taking up PermGen space.
As a temporary measure, you can increase the PermGen space using -XX:PermSize and -XX:MaxPermSize.
If you are using Spring, one of the common causes of this error is improper usage of the CGLIB library, since every dynamically generated proxy class consumes PermGen space.
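If you want a quick look at PermGen occupancy without attaching a full profiler, the standard java.lang.management API reports per-pool usage. A minimal sketch follows; the exact pool name varies by JVM and collector (for example "PS Perm Gen" or "CMS Perm Gen" on pre-Java-8 HotSpot):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPoolReport {
    public static void main(String[] args) {
        // Print used/max for every memory pool; on pre-Java-8 HotSpot JVMs the pool
        // whose name contains "Perm" is the PermGen space that is filling up here.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-20s used=%,d max=%,d%n",
                    pool.getName(),
                    pool.getUsage().getUsed(),
                    pool.getUsage().getMax());
        }
    }
}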
c3p0 and Tomcat's somewhat unusual and difficult implementation of class (un)loading under hot redeploy can lead to PermGen memory issues. The next c3p0 pre-release has some fixes for this problem; they are already implemented, but for now you would have to build from source via GitHub, which is not necessarily easy.
In c3p0-0.9.5-pre4 and beyond, setting the config param privilegeSpawnedThreads to true and contextClassLoaderSource to library should resolve the issue. I hope to release c3p0-0.9.5-pre4 within the next few days.
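For reference, those two parameters can be supplied like any other c3p0 config param. One option is c3p0's "c3p0."-prefixed system properties, set before the pool is created; the sketch below assumes c3p0 0.9.5-pre4 or later and uses a placeholder JDBC URL:

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class C3p0Setup {
    public static void main(String[] args) {
        // VM-level c3p0 defaults can be set via system properties prefixed with "c3p0.".
        // Both parameters below only exist in c3p0-0.9.5-pre4 and later.
        System.setProperty("c3p0.privilegeSpawnedThreads", "true");
        System.setProperty("c3p0.contextClassLoaderSource", "library");

        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setJdbcUrl("jdbc:mysql://localhost/mydb"); // placeholder; use your real URL
        // ... user, password, and pool sizing as usual ...
    }
}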
Related
I have been running a web application for WebSockets on GlassFish 4.1 for quite some time, and it had been running well until recently, when I came across this problem twice. It caused my application to crash, as expected, and I have not been able to pinpoint the exact reason. Here is the error trace I get:
GRIZZLY0013: Exception during FilterChain execution
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3181)
at java.util.ArrayList.grow(ArrayList.java:261)
at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:235)
at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:227)
at java.util.ArrayList.add(ArrayList.java:458)
at org.glassfish.grizzly.filterchain.FilterChainContext.addCompletionListener(FilterChainContext.java:930)
at org.glassfish.grizzly.utils.IdleTimeoutFilter.queueAction(IdleTimeoutFilter.java:249)
at org.glassfish.grizzly.utils.IdleTimeoutFilter.handleRead(IdleTimeoutFilter.java:167)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:745)
From the trace it would seem that FilterChainContext.addCompletionListener is being called too often, causing the ArrayList to grow greatly in size and eat up memory. What could be causing the server to add this type of listener so many times? Is the server receiving too many requests? Is this a GlassFish bug, or does this simply have to do with increasing the heap size?
For the moment I have increased the heap size (the -Xmx flag) from 512 MB to 2 GB and enforced the parallel collector for GC via -XX:+UseParallelGC.
It would be great if you could provide further insight to help solve this problem.
It's been a long time since I fixed the issue, but I would like to share the change I had to make, in case it helps people facing the same issue who are making the same mistake I did.
The problem was in my application. Deep down in my code, the application created a new object for every new client request, like so:
Worker workerObj = new Worker();
Initially, when it was deployed, this did not cause any problems because the server load was far lower, but as the number of clients rapidly increased, more and more memory was consumed, and finally the server crashed.
The solution was to create a single Worker object and re-use it for all requests in a thread-safe manner.
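In code terms, the fix amounted to something like the sketch below. Worker stands in for my actual class (its real name and methods are omitted here), and the pattern is only safe if the class is stateless or internally synchronized:

// "Worker" is a stand-in for the per-request class mentioned above; the real class
// must be stateless or internally synchronized for a single shared instance to be safe.
class Worker {
    void handle(String request) {
        // ... request-handling logic ...
    }
}

final class SharedWorker {
    // One shared instance instead of "new Worker()" on every client request.
    private static final Worker INSTANCE = new Worker();

    private SharedWorker() { }

    static Worker instance() {
        return INSTANCE;
    }
}

// In the request handler:
// Worker workerObj = SharedWorker.instance();   // reuse instead of allocating per request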
Recently we experienced crashes in an ASP.NET application with the issue below. The issue has since been resolved, but a question arises in my mind: can we capture a history of the memory used by an application? I want to add a job that captures the application's memory usage every half hour.
Memory gates checking failed because the free memory (373817344 bytes)
is less than 5% of total memory. As a result, the service will not be
available for incoming requests. To resolve this, either reduce the
load on the machine or adjust the value of
minFreeMemoryPercentageToActivateService on the
serviceHostingEnvironment config element.
Edit: This question arises from the link.
While deploying Flink, I got the following OOM error messages:
org.apache.flink.runtime.io.network.netty.exception.LocalTransportException: java.lang.OutOfMemoryError: Direct buffer memory
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientHandler.exceptionCaught(PartitionRequestClientHandler.java:153)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:246)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:224)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
Caused by: io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct buffer memory
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:234)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
... 9 more
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658)
at java.nio.DirectByteBuffer.&lt;init&gt;(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
I set taskmanager.network.numberOfBuffers: 120000 in the flink-conf.yaml file, but it doesn't help.
Number of TaskManagers: 50, memory per TaskManager: 16 GB, cores per TaskManager: 16, number of slots per TaskManager: 8.
For the job I ran, I used a parallelism of 25; the raw data file is about 300 GB and there are lots of join operations, which, I guess, require a lot of network communication.
Please let me know if you have any idea about what's going on here.
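For context, the "Direct buffer memory" error at the bottom of the trace is thrown by ByteBuffer.allocateDirect when the JVM's off-heap direct-buffer limit (-XX:MaxDirectMemorySize, which defaults to roughly the maximum heap size) is exhausted. A minimal illustration of that limit, with a purely illustrative buffer size, looks like this:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferDemo {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<>();
        // Keep allocating 64 MB direct (off-heap) buffers and holding references to them;
        // once the direct-memory limit is exhausted, allocateDirect throws
        // java.lang.OutOfMemoryError: Direct buffer memory, just like in the trace above.
        while (true) {
            buffers.add(ByteBuffer.allocateDirect(64 * 1024 * 1024));
        }
    }
}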
Which version of Flink are you using?
Flink 0.10.0 and 0.10.1 have an issue with an upgraded Netty version. This issue was fixed about three weeks ago but is not yet available in a release.
It is fixed in the master branch (published as 1.0-SNAPSHOT) and in the 0.10 branch.
I'm running DataStax Community Edition on a Windows 7 64-bit desktop PC with 8 GB RAM, with only a single node. The allocated heap size is 1 GB. When I insert data (10 million rows) into a table through a Java application (using the Cassandra Java driver), it works fine. But when I insert from a client-server program (which starts 2-3 threads), it blocks and raises a java.lang.OutOfMemoryError: Java heap space error. The noticeable point is that I checked the heap size, used memory, and free memory after every insert transaction, and there is enough space in the heap. Also, from the OpsCenter web interface, I checked the heap size and it never uses the full space! I also tried to increase the heap size by uncommenting #MAX_HEAP_SIZE="4G" in the cassandra-env.sh file, but I got the same result.
Q1. What is causing this error?
Q2. How can I overcome it?
Thanks for any helpful suggestion.
The log file history-
OK, it was my mistake: I didn't free the Cassandra connection after every transaction (in my case, insertion). After I did that, I can handle my desired transactions.
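For illustration, with the DataStax Java driver the per-transaction cleanup looks roughly like the sketch below (assuming the driver 2.x/3.x API; the contact point, keyspace, and statement are placeholders). In practice a single long-lived Session shared across threads is usually preferable to opening and closing one per insert:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class InsertExample {
    public static void main(String[] args) {
        // try-with-resources guarantees the driver's connection resources are released
        // after the transaction, which was the fix described above.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace")) {
            session.execute("INSERT INTO my_table (id, value) VALUES (1, 'x')");
        }
    }
}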
I have a small WCF service which runs on an XP box with 256 MB of RAM in a VM.
When I make a request (with a request size of approximately 5 MB) to that service, I always get the following message in the event log:
aspnet_wp.exe was recycled because memory consumption exceeded the 153 MB (60 percent of available RAM).
and the call fails with error 500.
I've tried increasing the memory limit to 95%, but it still takes up all the available memory and fails in the same manner.
It looks like something is wrong with my app (I do not reuse byte[] buffers, and maybe something else), but I cannot find the root cause of the memory overuse.
Profiling showed that all the CLR objects I have in memory together do not take up that much space.
Doing a dump analysis with WinDbg showed the same situation: nothing that big on the object heap.
How can I find out what is contributing to such memory overuse?
Is there any way to take a dump right before the process is recycled (during peak memory usage)?
Tess Ferrandez's blog "If broken it is, fix it you should" has lots of hints, tips and recommendations for sorting out exactly this sort of problem.
Of particular use to you would be Lab 3: Memory, where she walks you through working out what has caused all the memory on your machine to disappear.
It could be a lot of things; this one is hard to diagnose. Have you watched perfmon to see whether the memory usage peaks in the aspnet_wp.exe process or on the server itself? 256 MB is pretty low, but it should still be able to handle it. Do you have a swap file on this machine? At what point do you take the memory dump? Have you stepped through the code, and does it work on other machines? Perhaps it is getting stuck in a loop and leaking memory until it crashes?