Is it possible to allocate more memory for the process running the sbt plugin?
I am getting java.lang.OutOfMemoryError: GC overhead limit exceeded for one of the plugins I am running.
You should try this:
sbt -mem 8192
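If the larger heap fixes the GC overhead error, you can make the setting persistent so every sbt invocation picks it up. A minimal sketch, assuming the standard sbt launcher script, which reads the SBT_OPTS environment variable and a .sbtopts file in the project root (the 8 GB figure is just carried over from the command above):
export SBT_OPTS="-Xmx8192M"
sbt compile
Or, to keep the setting with the project, put the equivalent JVM flag in a .sbtopts file next to build.sbt:
-J-Xmx8192M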
I am not entirely sure how ArrayFire manages memory in RAM when using CPU mode. Based on observing the task manager, it appears that the device memory (in RAM) is not released right away; it looks like there is a GC stage.
Is this true?
What happens if I want to allocate a lot of RAM while the GC hasn't released the device memory (RAM) yet? Will I run out of RAM? Or will that somehow trigger the GC?
I am running into memory issues when allocating host memory (not device memory), and I am still trying to figure out what's wrong. In the meantime, does a GC really exist in CPU mode, and can it cause an out-of-memory error if it is triggered too late? How should I fix this?
Thank you
ArrayFire caches allocations and reuses them for later operations. Based on some heuristics, or if an allocation fails, ArrayFire will invoke its garbage collector. You can also run the garbage collector manually by calling deviceGC, which releases unlocked (unused) memory.
I am trying to run a Kaa server on a Raspberry Pi. I have successfully compiled it from source on the ARM processor and installed the resulting .deb package.
However, when I try to start the kaa-node, I get the following error.
Starting Kaa Node...
Invalid maximum heap size: -Xmx4G
The specified size exceeds the maximum representable size.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I have searched through the /etc/kaa-node/conf directory and the bin files, but I can't see where the "4G" setting is actually set, so that I might change it to something smaller and launch this on the Pi, which has 1 GB of RAM.
Can someone point me to the correct place to make this modification, while still being able to launch the server as a service using the built-in utilities? I know I could just run it with java and pass it my own JAVA_OPTIONS.
I think you can find the "kaa-node" file in /etc/default/ and modify the JAVA_OPTIONS in it.
We modified it to configure the heap size and GC for our Kaa server.
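For example, on a 1 GB Pi the relevant line might end up looking like this (a sketch, assuming the file defines a JAVA_OPTIONS variable as described; your existing flags may differ):
# /etc/default/kaa-node
JAVA_OPTIONS="-Xms256m -Xmx512m"
Then restart the service (e.g. service kaa-node restart) so the new options take effect.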
You can try starting the kaa-node service with
service kaa-node start -Xmx500M
to limit the heap size to 500 MB.
If that doesn't work, try
export _JAVA_OPTIONS=-Xmx500m
to set a global JVM heap size limit.
While deploying Flink, I got the following OOM error messages:
org.apache.flink.runtime.io.network.netty.exception.LocalTransportException: java.lang.OutOfMemoryError: Direct buffer memory
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientHandler.exceptionCaught(PartitionRequestClientHandler.java:153)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:246)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:224)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
Caused by: io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct buffer memory
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:234)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
... 9 more
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
I set 'taskmanager.network.numberOfBuffers: 120000' in the flink-conf.yaml file, but it doesn't help.
Number of TaskManagers: 50, memory per TaskManager: 16 GB, cores per TaskManager: 16, number of slots per TaskManager: 8.
For the job I ran, I used a parallelism of 25. The raw data file is about 300 GB and there are lots of join operations, which, I guess, require a lot of network communication.
Please let me know if you have any idea what's going on here.
Which version of Flink are you using?
Flink 0.10.0 and 0.10.1 have an issue with an upgraded Netty version. This issue was fixed about three weeks ago and is not yet available in a release.
It is fixed in the master branch (published as 1.0-SNAPSHOT) or the 0.10 branch.
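If you need the fix before an official release, one option is to build Flink from source and deploy that build. A rough sketch, assuming the Apache GitHub mirror and that the fixed release branch is named release-0.10 (check the repository for the exact branch name):
git clone https://github.com/apache/flink.git
cd flink
git checkout release-0.10
mvn clean install -DskipTests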
I'm running DataStax Community Edition on a Windows 7 64-bit desktop PC with 8 GB of RAM, running only a single node. The allocated heap size is 1 GB. When I insert data (10 million rows) into a table through a Java application (using the Cassandra Java driver), it works fine. But when I insert from a client-server program (which starts 2-3 threads), it blocks and raises a java.lang.OutOfMemoryError: Java heap space error. A noticeable point: I checked the heap size, used memory, and free memory after every insert transaction, and there is enough space in the heap. Also, from the OpsCenter web interface I checked the heap size, and it never uses the full space! I also tried to increase the heap size by uncommenting #MAX_HEAP_SIZE="4G" in the cassandra-env.sh file, but I got the same result.
Q1. What is causing this error?
Q2. How can I overcome it?
Thanks for any helpful suggestions.
The log file history-
OK, it was my mistake: I didn't free the Cassandra connection after every transaction (in my case, insertion). After I did that, I can now handle my desired transactions.
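For anyone hitting the same issue: the key point is to release the driver resources deterministically rather than letting them accumulate. A minimal sketch, assuming the DataStax Java driver 2.x/3.x API; the contact point, keyspace, and table names are made up for illustration:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class BulkInsert {
    public static void main(String[] args) {
        // try-with-resources closes the Session and Cluster even if an insert throws,
        // releasing the driver's connections and internal buffers
        try (Cluster cluster = Cluster.builder()
                                      .addContactPoint("127.0.0.1")
                                      .build();
             Session session = cluster.connect("demo_ks")) { // hypothetical keyspace
            for (int i = 0; i < 10000; i++) {
                session.execute("INSERT INTO demo_table (id, value) VALUES (" + i + ", 'v')");
            }
        } // connection freed here
    }
}
Whether you close after every batch (as the poster did) or reuse one long-lived Session and close it on shutdown, the important part is that close() is always reached.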
We are running a web application with a 6 GB heap assigned to it, but after some time it throws an out-of-memory error.
The exception stack trace is given below.
Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2" java.lang.OutOfMemoryError: PermGen space
00:46:52,678 WARN ThreadPoolAsynchronousRunner:608 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#772df14c -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
00:46:52,682 WARN ThreadPoolAsynchronousRunner:624 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#772df14c -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 0
Active Tasks:
Pending Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#3e3e8a19
Pool thread stack traces:
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1,5,]
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0,5,]
Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2,5,]
Exception in thread "Task-Thread-for-com.mchange.v2.async.ThreadPerTaskAsynchronousRunner#6bbc0209" java.lang.OutOfMemoryError: PermGen space
Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1" java.lang.OutOfMemoryError: PermGen space
Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0" java.lang.OutOfMemoryError: PermGen space
Exception in thread "com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2" java.lang.OutOfMemoryError: PermGen space
Is this a problem with the c3p0 connection pool?
Judging by the last few messages, you are running out of PermGen memory; per your post, the 6 GB is assigned to the heap, not to PermGen. If you have enough physical memory, double the memory assigned to PermGen. If the problem persists (and happens after more or less the same amount of time), revert the change and consider analyzing the heap with appropriate tools.
Use
-XX:PermSize=1024m -XX:MaxPermSize=1024m
for a 1 GB (1024 MB) allocation. You may need more.
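Where these flags go depends on how the application is launched. For a Tomcat deployment (as one of the later answers assumes), a typical place is a bin/setenv.sh file, which catalina.sh picks up at startup:
# bin/setenv.sh
export CATALINA_OPTS="$CATALINA_OPTS -XX:PermSize=1024m -XX:MaxPermSize=1024m"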
Rather than guessing, my advice is to use a memory profiler to examine the contents of the heap when you run out of memory. This will give you a definitive answer as to what's taking all the memory, and would let you make informed choices as to what steps to take next.
You are getting a PermGen out-of-memory error. This is more difficult to resolve than a normal heap out-of-memory error, and it is not linked to your usage of c3p0.
You need to do memory profiling and see what's taking up PermGen space.
As a temporary measure, you can increase the PermGen space using -XX:PermSize and -XX:MaxPermSize.
If you are using Spring, one of the common reasons for this error is improper usage of the cglib library.
c3p0 and Tomcat's somewhat unusual and difficult implementation of class (un)loading under hot redeploy can lead to PermGen memory issues. The next c3p0 pre-release has some fixes for this problem; they are implemented already, but for now you'd have to build from source via GitHub, which is not necessarily easy.
In c3p0-0.9.5-pre4 and beyond, setting the config parameter privilegeSpawnedThreads to true and contextClassLoaderSource to library should resolve the issue. I hope to release c3p0-0.9.5-pre4 within the next few days.
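For reference, those two parameters can be set like any other c3p0 configuration parameter, for example via a c3p0.properties file on the classpath (a sketch, assuming the standard c3p0 configuration mechanism and a 0.9.5-pre4 or later jar):
# c3p0.properties (on the application classpath)
c3p0.privilegeSpawnedThreads=true
c3p0.contextClassLoaderSource=library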