This question has been asked but I am still facing issues.
hduser@sanjeebpanda-VirtualBox:/usr/local/spark$ sbt/sbt assembly
Using /usr/lib/jvm/java-7-openjdk-i386 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Regards
Sanjeeb
The error message indicates that there is not enough free memory on the machine to allocate the requested Java heap.
I would check ~/.sbtconfig to see what the Xmx and Xms values are set to, and try lowering them. Since those are the defaults, though, lowering them may cause you other issues.
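For example, a ~/.sbtconfig along these lines would cap the heap (the values here are only illustrative, and the exact defaults depend on your sbt version):
SBT_OPTS="-Xms256M -Xmx512M -Xss1M"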
The best solution would be to use another machine with more memory available.
Cheers
OK, I got the same issue when installing Spark, and after setting _JAVA_OPTIONS it worked.
The following is what I used; change the memory sizes according to your machine:
export _JAVA_OPTIONS='-Xms128M -Xmx256M'
Context
I'm trying to implement a sort-of orchestrator pattern for our applications.
Basically, we have three different and independent applications developed in Qt that communicate with each other over WebSocket. We'll call them "core", "business" and "ui". The aim is flexibility: we can develop a new application in a more suitable technology and connect it to the others via the same communication protocol.
Now the idea is to have a simple launcher that lets us specify which parts to start. We launch this orchestrator-like application and it starts all the required processes from a configuration file.
Everything is done in Qt currently (QML for the UI interfaces).
Initial Issue
I've made a custom class oriented towards reading the configuration file, preparing the processes, and starting them with their respective arguments.
It uses a std::map of QProcess objects keyed by their names from the configuration file and launches them with the QProcess::start(<process_path>) method.
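For reference, a minimal sketch of what such a launcher class might look like (the names and structure here are illustrative, not our actual code):

#include <QProcess>
#include <QString>
#include <QStringList>
#include <map>

// Illustrative launcher: one QProcess per entry in the configuration file.
class AppLauncher {
public:
    void add(const QString &name, const QString &path, const QStringList &args) {
        auto *proc = new QProcess;       // ownership/cleanup omitted for brevity
        proc->setProgram(path);
        proc->setArguments(args);
        m_processes[name] = proc;
    }
    void startAll() {
        for (auto &entry : m_processes)
            entry.second->start();       // spawns a separate OS process
    }
private:
    std::map<QString, QProcess *> m_processes;  // config name -> process
};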
The catch is that everything went smoothly until recently. The subprocesses start and run perfectly; everything goes on as normal until, at some point, the "ui" part crashes (usually with an LLVM memory error or a vector length error).
At first we thought about a memory leak or a code error but after much debugging we found that the application had no error whatsoever when we ran each part individually (without using the custom orchestrator class).
Question / Concerns
So, our question is: could it be that the QProcess::start() method actually shares the same stack with its parent? With three processes having the same parent, it would not be surprising that a vector of ~500 elements stored in each application could exceed the stack size when returned.
Information
We use MacOS Big Sur, IDE is Qt Creator, using Qt 5.15.0 and C++11.
We tried using Valgrind, but as read here and here, that seems to be a dead end for now. The errors below were seen in the .crash file after the application exited.
libc++abi: terminating with uncaught exception of type std::length_error: vector
ui(2503,0x108215e00) malloc: can't allocate region
*** mach_vm_map(size=140280206704640, flags: 100) failed (error code=3)
ui(2503,0x108215e00) malloc: *** set a breakpoint in malloc_error_break to debug
LLVM ERROR: out of memory
We also tried redirecting or completely removing the applications' output: first by changing the setProcessChannelMode when starting the application, then by using startDetached instead of start. Finally, we commented out our Log method that dumps log info to the corresponding Qt output (info/warning/critical/fatal/debug).
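For instance, something along these lines (an untested sketch; QProcess::nullDevice() simply discards the child's output):

proc->setStandardOutputFile(QProcess::nullDevice());
proc->setStandardErrorFile(QProcess::nullDevice());
proc->start();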
As suggested by @stanislav888, we could rewrite the application-manager part in bash scripts and it would probably do the trick, but I'd like to understand the root issue to avoid future mistakes.
It looks like a bad design. Running and orchestrating the applications through a bash or PowerShell script would look much better.
But anyway.
You could try to suppress the orchestrated applications' output and see what happens. The programs' output might flood memory and cause the crash.
You must check what particular problem causes the crash. Use a memory coredump and the system error messages to understand all the details.
I am sure the community needs those details, because "out of memory", "stack depth exceeded" and similar errors make a big difference.
Try to write a bash or PowerShell script which does the same workflow as the Qt application. Hopefully it is not hard, and it will help you figure out the issue.
At least you can run this script from the application.
I previously saved a 2.8G RData file and now I'm trying to load it so I can work on it again, but weirdly, I can't. It's giving the error
Error: vector memory exhausted (limit reached?)
This is weird since I was working with it fine before. One thing that changed, though, is that I upgraded to the latest version of R, 3.5.0. I saw a previous post with the same error like this, but it wasn't resolved. I was hopeful about this solution, which increases memory.limit(), but unfortunately it's only available for Windows.
Can anyone help? I don't really understand what the problem is here, since I was able to work with my dataset before the update, so it shouldn't be throwing this error.
Did the update somehow decrease the RAM allocated to R? Can we manually increase memory.limit() on a Mac to solve this error?
This change was necessary to deal with operating system memory over-commit issues on Mac OS. From the NEWS file:
The environment variable R_MAX_VSIZE can now be used to specify the maximal vector heap size. On macOS, unless specified by this environment variable, the maximal vector heap size is set to the maximum of 16GB and the available physical memory. This is to avoid having the R process killed when macOS over-commits memory.
Set the environment variable R_MAX_VSIZE to an appropriate value for your system before starting R and you should be able to read your file.
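For example, in the shell before starting R (the 32Gb value is only an example; size it to your machine), or put the same line, without "export", in ~/.Renviron:

export R_MAX_VSIZE=32Gb
R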
Currently I'm running into a known problem with ASan (see report):
==5097==Shadow memory range interleaves with an existing memory mapping. ASan cannot proceed correctly. ABORTING.
==5097==ASan shadow was supposed to be located in the [0x00007fff7000-0x10007fff7fff] range.
Is it possible to use an environment variable to stop ASan from being used, to prevent this error?
Or at least to stop this error from being fatal?
The reason I want to do this is that the failing command runs during code generation, but I'd like to use ASan for the resulting binary. Having different CFLAGS for generated binaries and the final binary is possible, but it would be hard to do without hard-coding it for everyone else. So I'd like a way to disable ASan during the build step, but use it afterwards.
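For example, with GNU make the flag could be stripped for just the code-generation step using a target-specific variable (a sketch; "codegen" stands in for the actual generator target):

codegen: CFLAGS := $(filter-out -fsanitize=address,$(CFLAGS))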
Edit: in case it's useful, this occurs with an extremely simple program: Error, Code.
No, this is a fundamental error which prevents all later instrumentation by ASan from working correctly. E.g. stack poisoning in function prologues would end up causing segfaults or corrupting random memory.
The error you reported is not an addressability error found by AddressSanitizer but an issue with the sanitizer itself. Read the FAQ here. Here is the part that is relevant to your case:
Q: I'm using dynamic ASan runtime and my program crashes at start with
"Shadow memory range interleaves with an existing memory mapping. ASan
cannot proceed correctly.".
A1: If you are using shared ASan DSO, try LD_PRELOAD'ing Asan runtime
into your program.
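For example, something like this (assuming GCC; ./your_program is a placeholder, and the library name and path vary by toolchain):

LD_PRELOAD=$(gcc -print-file-name=libasan.so) ./your_program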
On Tridion 2011 SP1, after I just restarted an HTTP Deployer, I get the error "Attempt to load JVM failed on native side" when I try to access HTTPUpload.aspx.
What is the issue?
I added an env variable JAVA_HOME, restarted the server, but no luck so far.
Many thanks in advance!
Never mind... It seems that after rebooting the server AGAIN, the problem was fixed.
I guess I'll never know what it was.
The story is way deeper than initially believed and it all boils down to memory allocation.
The culprit in my case was the heap size we allocate to the Java process running underneath IIS (in JuggerNET). I have 4 CD instances (4 websites, each running a CD stack) on a 32-bit server with 4GB of memory. The heap size was set to 1024M, so the 4 instances together wanted 4GB of heap space, and naturally there was not enough memory for that.
Reducing the heap size OR stopping a website solved the issue.
The heap size is controlled by the registry key
HKEY_LOCAL_MACHINE\SOFTWARE\Tridion\Content Delivery\General\jvmarg1
with value -Xmx1024M.
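For example, the value could be changed with reg.exe (the -Xmx256M here is only illustrative; pick a size that fits all your instances into physical memory):

reg add "HKLM\SOFTWARE\Tridion\Content Delivery\General" /v jvmarg1 /t REG_SZ /d "-Xmx256M" /f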
Another culprit might be mixing 64-bit CD DLLs onto 32-bit servers, so check and double-check your DLLs! I know I did :) for hours...
I'm not sure that WinDbg is the right tool, but that's what I'm trying now.
My ASP.NET app seems to have a memory leak; it keeps growing by about 3 MB almost every time a page loads (then it goes back down...).
I want to read the entire process memory and see exactly what's being stored that is unnecessary.
So I run WinDbg and attach to the webserver40.exe process,
then I try
.loadby sos clr
and I get
The call to LoadLibrary(C:\Windows\Microsoft.NET\Framework\v4.0.30319\sos) failed, Win32 error 0n193
"%1 is not a valid Win32 application."
Please check your debugger configuration and/or network access.
It seems that I have this sos.dll in Framework AND Framework64
I tried loading both explicitly using
.load C:\Windows\Microsoft.NET\Framework64\v4.0.30319\sos
but nothing loads
I don't understand why it's looking for a valid 32-bit app. I'm on a 64-bit PC with 64-bit Windows.
How can I get this sos thing to load?
Also when I start I get this warning
WARNING: Process 7240 is not attached as a debuggee
The process can be examined but debug events will not be received
I also tried .loadby sos mscorwks; it didn't work, but I understand that was discontinued. I'm on ASP.NET 4.
I also read somewhere that the code should be stopped in the debugger before loading SOS, but that just hangs VS 2010.
Thank you very much.
Again, if there's another tool that could better help me, I'm all ears :-)
WebDev.WebServer40.exe is a 32-bit executable. To debug it you need to use the 32-bit WinDbg. Visual Studio, as well as the Cassini web server, still executes in 32-bit mode.
As for your other question: yes, WinDbg is a great tool for investigating memory leaks in managed code. This blog will get you started. However, in your case I would not be so sure you have a memory leak.
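For example, once the 32-bit WinDbg is attached to the 32-bit process, a first look at the managed heap might go like this (a sketch; the address passed to !gcroot is a placeholder):

.loadby sos clr
!dumpheap -stat
!gcroot <object_address>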
You are saying that the memory goes down eventually. That means it is not a memory leak, because leaked memory never gets released.
Do not waste your time investigating memory problems in Cassini. There are a lot of differences between IIS and Cassini that would make your findings inapplicable in a production environment. Even if you find that Cassini is in fact leaking, that does not mean that IIS would be.