SBT runs out of memory

I am using SBT 0.12.3 to test some code and often I get this error message while testing interactively with the ~test command.
8. Waiting for source changes... (press enter to interrupt)
[info] Compiling 1 Scala source to C:\Users\t\scala-projects\scala test\target\scala-2.10\classes...
sbt appears to be exiting abnormally.
The log file for this session is at C:\Users\t\AppData\Local\Temp\sbt5663259053150896045.log
java.lang.OutOfMemoryError: PermGen space
at java.util.concurrent.FutureTask$Sync.innerGet(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at sbt.ConcurrentRestrictions$$anon$4.take(ConcurrentRestrictions.scala:196)
at sbt.Execute.next$1(Execute.scala:85)
at sbt.Execute.processAll(Execute.scala:88)
at sbt.Execute.runKeep(Execute.scala:68)
at sbt.EvaluateTask$.run$1(EvaluateTask.scala:162)
at sbt.EvaluateTask$.runTask(EvaluateTask.scala:177)
at sbt.Aggregation$$anonfun$4.apply(Aggregation.scala:46)
at sbt.Aggregation$$anonfun$4.apply(Aggregation.scala:44)
at sbt.EvaluateTask$.withStreams(EvaluateTask.scala:137)
at sbt.Aggregation$.runTasksWithResult(Aggregation.scala:44)
at sbt.Aggregation$.runTasks(Aggregation.scala:59)
at sbt.Aggregation$$anonfun$applyTasks$1.apply(Aggregation.scala:31)
at sbt.Aggregation$$anonfun$applyTasks$1.apply(Aggregation.scala:30)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
at sbt.Command$.process(Command.scala:90)
at sbt.MainLoop$$anonfun$next$1$$anonfun$apply$1.apply(MainLoop.scala:71)
at sbt.MainLoop$$anonfun$next$1$$anonfun$apply$1.apply(MainLoop.scala:71)
at sbt.State$$anon$2.process(State.scala:170)
at sbt.MainLoop$$anonfun$next$1.apply(MainLoop.scala:71)
at sbt.MainLoop$$anonfun$next$1.apply(MainLoop.scala:71)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.MainLoop$.next(MainLoop.scala:71)
at sbt.MainLoop$.run(MainLoop.scala:64)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:53)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:50)
at sbt.Using.apply(Using.scala:25)
at sbt.MainLoop$.runWithNewLog(MainLoop.scala:50)
at sbt.MainLoop$.runAndClearLast(MainLoop.scala:33)
at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:17)
Error during sbt execution: java.lang.OutOfMemoryError: PermGen space
The error itself is clear: I could increase the heap size, and that might stop it from being thrown. But the real issue is that sbt shuts down after some number of test iterations (I don't know how many), each triggered by a minimal change in the code. Would simply increasing the heap solve the problem, or do I have to do additional work to avoid running out of memory?
Thanks in advance.

If you haven't already, try giving sbt more PermGen space in your sbt.bat. I don't run sbt on Windows, but I launch it with java -Xmx1512M -XX:MaxPermSize=512M.
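How those flags reach the JVM depends on your launcher. A sketch for Windows, assuming an sbt.bat that honors the JAVA_OPTS environment variable (older launchers may instead require editing the java invocation inside sbt.bat directly):

```
set JAVA_OPTS=-Xmx1512M -XX:MaxPermSize=512M
sbt
```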
Another thing to try may be to fork during testing: https://www.scala-sbt.org/1.x/docs/Testing.html#Forking+tests
Test / fork := true
specifies that all tests will be executed in a single external JVM.
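Putting the two suggestions together, a minimal build.sbt sketch that forks the tests and gives the forked JVM its own memory settings (sbt 1.x slash syntax is shown; on sbt 0.12.x the equivalent was fork in Test := true, and the flag values here are just examples):

```scala
// Run tests in a separate JVM so an OOM doesn't take down the sbt shell
Test / fork := true

// Memory flags for the forked test JVM
// (-XX:MaxPermSize only applies on Java 7 and earlier)
Test / javaOptions ++= Seq("-Xmx1512M", "-XX:MaxPermSize=512M")
```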

Related

OutOfMemory Error after updating corda version to 4.1

I had a Corda 3.3 test installation and recently updated it to version 4.1. Since then, whenever I run my nodes with the deployNodes script and runnodes, I receive the following exception in the node's console as soon as it starts. What can this mean? I don't have a clue what could be causing it.
I tried to build and run the nodes without CorDapps and they work, so somehow my CorDapps cause this error to happen. What other information should I provide to help you figure out this issue?
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:191)
at kotlin.io.ByteStreamsKt.readBytes(IOStreams.kt:123)
at kotlin.io.ByteStreamsKt.readBytes$default(IOStreams.kt:120)
at net.corda.core.internal.InternalUtils.readFully(InternalUtils.kt:123)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.getJarHash(JarScanningCordappLoader.kt:228)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.toCordapp(JarScanningCordappLoader.kt:153)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.loadCordapps(JarScanningCordappLoader.kt:106)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.access$loadCordapps(JarScanningCordappLoader.kt:44)
at net.corda.node.internal.cordapp.JarScanningCordappLoader$cordapps$2.invoke(JarScanningCordappLoader.kt:56)
at net.corda.node.internal.cordapp.JarScanningCordappLoader$cordapps$2.invoke(JarScanningCordappLoader.kt:44)
at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.getCordapps(JarScanningCordappLoader.kt)
at net.corda.node.internal.cordapp.CordappLoaderTemplate$cordappSchemas$2.invoke(JarScanningCordappLoader.kt:422)
at net.corda.node.internal.cordapp.CordappLoaderTemplate$cordappSchemas$2.invoke(JarScanningCordappLoader.kt:389)
at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74)
at net.corda.node.internal.cordapp.CordappLoaderTemplate.getCordappSchemas(JarScanningCordappLoader.kt)
at net.corda.node.internal.AbstractNode.<init>(AbstractNode.kt:153)
at net.corda.node.internal.AbstractNode.<init>(AbstractNode.kt:126)
at net.corda.node.internal.Node.<init>(Node.kt:98)
at net.corda.node.internal.Node.<init>(Node.kt:97)
at net.corda.node.internal.NodeStartup.createNode(NodeStartup.kt:194)
at net.corda.node.internal.NodeStartup$initialiseAndRun$5.invoke(NodeStartup.kt:186)
at net.corda.node.internal.NodeStartup$initialiseAndRun$5.invoke(NodeStartup.kt:137)
at net.corda.node.internal.NodeStartupLogging$DefaultImpls.attempt(NodeStartup.kt:509)
at net.corda.node.internal.NodeStartup.attempt(NodeStartup.kt:137)
at net.corda.node.internal.NodeStartup.initialiseAndRun(NodeStartup.kt:185)
at net.corda.node.internal.NodeStartupCli.runProgram(NodeStartup.kt:128)
at net.corda.cliutils.CordaCliWrapper.call(CordaCliWrapper.kt:190)
at net.corda.node.internal.NodeStartupCli.call(NodeStartup.kt:83)
at net.corda.node.internal.NodeStartupCli.call(NodeStartup.kt:64)
at picocli.CommandLine.execute(CommandLine.java:1056)
Corda's memory usage has been slowly creeping upwards. It is possible that your machine does not have enough memory to run 3-4 or more nodes at the same time after upgrading to 4.
I recommend trying to run a single node with the CorDapps installed and seeing what happens. If it is still happening then, something else could be going wrong.
Looking at the stack trace, it is also possible that your CorDapp itself is really, really, really big and the node runs out of memory while reading and loading it.
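To test that, you could start one node by hand from its directory and raise the heap at the same time; a sketch, where the directory name and layout are assumptions based on a standard deployNodes output:

```
cd build/nodes/PartyA
java -Xmx2g -jar corda.jar
```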

PETSc error when running OpenMDAO v1.7.3 tutorials and benchmarks

I have tried running the OpenMDAO paraboloid tutorial as well as the benchmarks, and I consistently receive the same error, which reads as follows:
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash
---------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on exactly when Open MPI kills them.
I don't understand why this error is occurring and what I can do to be able to run OpenMDAO without getting this error. Can you please help me with this?
Something has not gone well with your PETSc install. It's hard to debug that from afar, though. The problem could be in your MPI install, your PETSc install, or your petsc4py install. I suggest not installing PETSc or petsc4py through pip; I've had mixed success with that. Both can be installed from source without tremendous difficulty.
However, you don't need PETSc installed to run the tutorials. You could remove those packages, and the tutorials will run correctly in serial for you.
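If you want your own scripts to degrade gracefully in the same way, here is a small sketch that probes for the optional petsc4py bindings and falls back to serial execution when they are missing (the "backend" variable is purely illustrative):

```python
# Probe for the optional PETSc bindings; run serial if they are absent.
try:
    import petsc4py  # noqa: F401  (only checking availability)
    backend = "petsc"
except ImportError:
    backend = "serial"

print("Running with backend:", backend)
```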

Why can an Rsession process continue after SIGSEGV and what does it mean

I'm developing an R package for myself that interacts with both Java code (using rJava) and C++ code (using Rcpp). While trying to debug Rsession crashes under RStudio using lldb, I noticed that lldb outputs the following message when I try to load the package I'm developing:
(lldb) Process 19030 stopped
* thread #1, name = 'rsession', stop reason = signal SIGSEGV: invalid address (fault address: 0x0)
frame #0: 0x00007fe6c7b872b4
-> 0x7fe6c7b872b4: movl (%rsi), %eax
0x7fe6c7b872b6: leaq 0xf8(%rbp), %rsi
0x7fe6c7b872bd: vmovdqu %ymm0, (%rsi)
0x7fe6c7b872c1: vmovdqu %ymm7, 0x20(%rsi)
(where 19030 is the pid of rsession). At this point, RStudio stops, waiting for lldb to resume execution. But instead of getting the dreaded "R session aborted" popup, entering the 'c' command in lldb resumes the rsession process, and RStudio continues chugging along just fine; I can use the loaded package with no problems. I.e.:
c
Process 19030 resuming
What is going on here? Why is RStudio's rsession not crashing if lldb says it has "stopped"? Is this due to R's (or RStudio's?) SIGSEGV handling mechanism? Does that mean the original SIGSEGV is spurious and should not be a cause for concern? And of course (probably off-topic in this question): how do I make sense of lldb's output in order to ascertain whether this SIGSEGV on loading my package should be debugged further?
The SIGSEGV does not occur in Rsession's process, but in the JVM process launched by rJava on package load. This behaviour is known and due to JVM's memory management, as stated here:
Java uses speculative loads. If a pointer points to addressable memory, the load succeeds. Rarely the pointer does not point to addressable memory, and the attempted load generates SIGSEGV ... which the Java runtime intercepts, makes the memory addressable again, and restarts the load instruction.
The proposed workaround for gdb works fine:
(gdb) handle SIGSEGV nostop noprint pass
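Since the question is about lldb rather than gdb, the corresponding lldb setting should be roughly the following (check `help process handle` in your lldb version to confirm the option names):

```
(lldb) process handle -p true -s false -n false SIGSEGV
```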

java.lang.OutOfMemoryError using bartMachine package in R

I ran a BART model with 11000 samples and 20 features (half of them categorical variables). My Mac has 8 GB of RAM. At first, I set the memory to 5000 MB via the function set_bart_machine_memory(5000).
Then I can fit a model through the function bartMachine one time. If I try to fit another model, R returns an error like this:
Exception in thread "pool-10-thread-1" Exception in thread "pool-10-thread-3"
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
Exception in thread "pool-10-thread-2" java.lang.OutOfMemoryError: Java heap space
Exception in thread "pool-10-thread-4" java.lang.OutOfMemoryError: Java heap space
Error in .jcall(bart_machine$java_bart_machine, "Z", "isDestroyed") :
java.lang.OutOfMemoryError: Java heap space
I think that having two bartMachine objects in memory may not be a good idea, so I killed the first model with the function destroy_bart_machine(), and then the second model ran OK.
The main problem is with bartMachineCV(). There are about 20 models to fit by default, and a memory error like the one above hits me while R is running the BART model with the second set of parameters (that is: bartMachine CV try: k: 2 nu, q: 3, 0.9 m: 200).
I'm not familiar with Java; is there some way to run bartMachineCV() on an 8 GB RAM computer? Thanks.
I'm the maintainer of the bartMachine package. Make sure you download the new version and pay attention to the message that appears after you initialize the library:
> library(bartMachine)
...
Welcome to bartMachine v1.2.0! You have 0.48GB memory available.
If you see a low amount of RAM in that message, something is wrong with your JVM setup. A 64-bit JVM is a must. Use
options(java.parameters = "-Xmx2500m")
before calling library(bartMachine) to request more.
You'll need to run a 64-bit JVM; a 32-bit JVM only gives you about 1.8 GB of heap at most. I'd recommend JDK 7 or higher, which is Oracle's production release these days.
Once you have that, you can set JVM memory settings like this:
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
You'll want to set -Xmx1024M or something like that.
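To verify that the -Xmx flag actually reached the JVM (the bartMachine welcome message above does essentially the same check), here is a minimal Java sketch; the class name is just an example:

```java
// Prints the JVM's maximum heap size so you can confirm -Xmx took effect.
public class MaxHeap {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %.2f GB%n", maxBytes / 1e9);
    }
}
```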

Memory Allocation Error using pkgmk on Solaris10?

I am currently trying to use pkgmk on a Solaris10-x86 machine. When I run the command
pkgmk -o -b $(HOME)/solbuild/pkg_solaris
it returns this error:
## Building pkgmap from package prototype file.
pkgmk: ERROR: memory allocation failure
## Packaging was not successful.
My first thought was that this is an out-of-free-memory error, but I am not sure that it can be. I have close to a gigabyte free in the / partition and 12 gigabytes free in the $(HOME) partition.
Any help would be greatly appreciated.
I saw this error when /var was full. Deleting some files from /var resolved the problem.
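pkgmk stages its output under /var/spool/pkg by default (unless you override the device with -d), so a quick sketch for checking free space before retrying:

```shell
# Check free space where pkgmk stages its output and where temp files go
df -h /var /tmp
```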
