JVM Monitoring jar execution

I want to check the memory usage of a JAR that does some calculations. For this I want to use JVM Monitor. When starting JVM Monitor, I need to pick the JVM that is running my JAR. The problem is that my JAR executes so fast (<1 sec) that it never shows up in the list.
Is there any way I can start the JVM without executing the JAR immediately?

JConsole discovers running applications at the time JConsole starts; only the applications running at that moment have their host and port displayed in the list. For such a very short-running application to show up in the list, adding a wait at the end of program execution is about the only choice you have.
Also, whatever memory stats JConsole displays will include JConsole's own memory footprint. A better choice for monitoring is jvisualvm, which can show memory, thread and GC statistics as well.
Alternatively, if you want to check the code cache or compilation statistics, you can run the JVM with -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation -XX:+PrintCodeCache.
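A minimal sketch of the "add a wait at the end" idea mentioned above (the class name and message are placeholders): pause at the end of main so the JVM stays alive long enough for JConsole or jvisualvm to attach.

public class CalculationRunner {
    public static void main(String[] args) throws Exception {
        // ... run the actual calculations here ...

        // Keep the JVM alive so it remains visible to JConsole/jvisualvm;
        // press Enter to exit once you are done inspecting it.
        System.out.println("Done. Press Enter to exit...");
        System.in.read();
    }
}

If you are after the compilation/code cache statistics instead, you would launch it with something like java -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation -XX:+PrintCodeCache -jar yourapp.jar (the jar name is a placeholder).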

Rserve sub-processes memory issue

I have a Java server from which I am calling R functions using c.eval.
I use Rserve to do this and preload all libraries using the following call
R CMD Rserve --RS-conf Rserve.conf
The file Rserve.conf has all my loaded libraries. The total memory requirement for this is around 127MB.
The challenge I have is that every time I call a function from within my Java server, a new process is spawned, and it looks like each process requires the full 127MB of memory. So with 32GB of RAM, roughly 240 concurrent calls are enough to max out the memory usage and bring the server crashing down.
I found this link: Rserve share library code but it talks about exactly what I have been doing. Any help in understanding how to make Rserve work without it loading all libraries for every call would be much appreciated.
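For context, a call like the one described above typically looks something like this on the Java side when using the Rserve Java client; the host, port and R expression below are illustrative assumptions, not taken from the question.

import org.rosuda.REngine.REXP;
import org.rosuda.REngine.Rserve.RConnection;

public class RserveCallExample {
    public static void main(String[] args) throws Exception {
        // On Unix, each new connection makes Rserve fork a child process,
        // which is where the per-call memory footprint comes from.
        RConnection c = new RConnection("localhost", 6311); // 6311 is Rserve's default port
        try {
            REXP result = c.eval("sum(rnorm(100))"); // placeholder for the real calculation
            System.out.println("R returned: " + result.asDouble());
        } finally {
            c.close(); // the forked child goes away when the connection is closed
        }
    }
}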

On what parameters does the boot sequence vary?

Does every Unix flavor have the same boot sequence code? I mean, there are different kernel releases for different flavors, so is it possible that the code for the boot sequence differs once the kernel is loaded? Or do they always keep their boot sequence (or code) common?
Edit: I want to know in detail how the boot process is done.
Where does the MBR find GRUB? How is this information stored? Is it hard-coded by default?
Is there any block-level partition architecture available for the boot sequence?
How does GRUB locate the kernel image? Is there a common location where the kernel image is stored?
I searched a lot on the web, but it only shows the common architecture: BIOS -> MBR -> GRUB -> Kernel -> Init.
I want to know the details of everything. What should I do to learn all this? Is there any way I can debug the boot process?
Thanks in advance!
First of all, the boot process is extremely platform and kernel dependent.
The point is normally to get the kernel image loaded somewhere in memory and run it, but the details may differ:
- where do I get the kernel image from? (a file on a partition? a fixed offset on the device? should I just map a device in memory?)
- what should be loaded? (only a "core" image? also a ramdisk with additional data?)
- where should it be loaded? Is additional initialization (CPU/MMU state, device initialization, ...) required?
- are there kernel parameters to pass? Where should they be put for the kernel to see them?
- where is the configuration for the bootloader itself stored (hard-coded, files on a partition, ...)? How are the additional modules loaded? (bootloaders like GRUB are actually small OSes in themselves)
Different bootloaders and OSes may do this stuff differently. The "UNIX-like" bit is not really relevant; an OS starts being recognizably UNIXy (POSIX syscalls, init process, POSIX userland, ...) mostly after the kernel starts running.
Even on common x86 PCs the start differs deeply between "traditional BIOS" and UEFI mode (in the latter case, the UEFI firmware itself can load and start the kernel, without any additional bootloader being involved).
Coming down to the start of a modern Linux distribution on x86 in BIOS mode with GRUB2, the basic idea is to quickly get up and running a system which can deal with "normal" PC abstractions (disk partitions, files on filesystems, ...), keeping to a minimum the code that has to deal with hard-coded disk offsets.
GRUB is not a monolithic program but is composed of stages:
1. When booting, the BIOS loads and executes the code stored in the MBR, which is the first stage of GRUB. Since the amount of code that can be stored there is extremely limited (a few hundred bytes), all this code does is act as a trampoline for the next GRUB stage (in a way, it "boots GRUB").
2. The MBR code contains, hard-coded, the address of the first sector of the "core image"; this, in turn, contains the code to load the rest of the core image from disk (again, hard-coded as a list of disk sectors).
3. Once the core image is loaded, the ugly work is done, since the GRUB core image normally contains basic filesystem drivers; it can therefore load additional configuration and modules from regular files on the boot partition.
4. What happens now depends on the configuration of the specific boot entry. For booting Linux, usually two files are involved: the kernel image and the initrd:
- the initrd contains the "initial ramdisk": a barebones userland mounted as / in the early boot process (before the kernel has mounted the real filesystems). It mostly contains device detection helpers, device drivers, filesystem drivers, ... to allow the kernel to load on demand the code needed to mount the "real" root partition;
- the kernel image is a (usually compressed) executable image in some format, which contains the actual kernel code. The bootloader extracts it into memory (following some rules), puts the kernel parameters and the initrd memory position in some known memory location, and then jumps to the kernel entry point, whence the kernel takes over the boot process.
5. From there, the "real" Linux boot process starts, which normally involves loading device drivers, starting init, mounting disks and so on.
Again, this is all (x86, BIOS, Linux, GRUB2)-specific. Points 1-2 are different on architectures without an MBR, and are skipped completely if GRUB is loaded straight from UEFI; points 1-3 are different/avoided if UEFI (or some other loader) is used to load the kernel image directly. The initrd may not be involved if the kernel image already bundles everything needed to start (typical of embedded images); the details of points 4-5 differ between OSes (although the basic idea is usually similar). And on embedded machines the kernel may be placed directly at a "magic" location that is automatically mapped into memory and run at start.

Process stops getting network data

We have a process (written in C++/managed) which receives network data via TCP/IP.
After running the process for a while while tracking the network load, it seems that the network gets into a frozen state and the process stops receiving data; other processes in the system that use networking (on the same NIC) operate normally.
The process gets out of this frozen state by itself after several minutes.
Any idea what is happening?
Is there any counter I can track to see if my process is reaching some limit?
It is going to be very difficult to answer specifically
-- without knowing what exactly your process/application is about,
-- whether it is a network chat application, or a file server/client, or ...,
-- without other details about your process: how it is implemented, what libraries it uses, if relevant to the problem.
Also, you haven't mentioned what OS and environment you are running this process under, so there is very little anyone can do to help. It could be anything: a busy-wait loop in your code, locking problems if it is multi-threaded code, ...
Nonetheless, here are some options to check.
If it's Linux, try the commands below to debug and monitor the behaviour of the process and see what the problem could be:
top
Check top to see how much of each resource (CPU, memory) your process is using and whether any value, such as CPU usage, is abnormally high.
pstack
This shows the stack frames the process was executing at the time of the problem.
netstat
Run this with the necessary options (tcp/udp) to check the state of the network sockets opened by your process.
gcore -s -c
This forces your process to dump core when the mentioned problem happens; you can then analyze that core file using gdb.
gdb
Use the command where at the gdb prompt to get a full backtrace of the process (which function it was executing last and the previous function calls).

Is it possible to choose whether to generate heap dump or not on the fly?

We have an application which is deployed to a WebSphere server running on UNIX, and we are experiencing two issues:
a system hang which recovers after a few minutes - to investigate, we will need the thread dump (javacore).
a system hang which does not recover and requires WebSphere to be restarted - to investigate, we will need the thread dump and heap dump.
The problem is: when a system hang occurs, we do not know whether it is issue 1 or 2.
Ideally we would like to manually generate the thread dump first, and wait to see if the system recovers. If it does not, then we generate the thread dump and the heap dump, before restarting WebSphere.
I know about the kill -3 (or kill -QUIT) command. The command generates only a thread dump (if IBM_HEAPDUMP=false), or a thread dump and a heap dump (if IBM_HEAPDUMP=true). However, IBM_HEAPDUMP has to be set before WebSphere is started and cannot be changed while WebSphere is running.
Is my understanding correct regarding the IBM_HEAPDUMP parameter and the kill -3 command?
Also, is it possible to get the dumps in the way I described? (i.e. when generating JVM diagnostics, choose on the fly whether to generate a heap dump or not)
Your understanding is consistent with everything I've read.
However, I believe you can accomplish what you want by using wsadmin scripting. This article describes how to force javacores and heapdumps on a Windows platform where kill -3 is not available, but the same commands can be run on any WebSphere system.
From within wsadmin or a wsadmin script, execute:
set jvm [$AdminControl completeObjectName type=JVM,process=server1,*]
$AdminControl invoke $jvm generateHeapDump
$AdminControl invoke $jvm dumpThreads
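If you can change the application itself, another option (a rough sketch that assumes the IBM JDK shipped with WebSphere, where the com.ibm.jvm.Dump class is available) is to trigger the dumps programmatically, for example from an admin-only servlet or JMX bean that you add yourself:

// Sketch only: assumes an IBM JDK; how you expose these triggers is up to you.
import com.ibm.jvm.Dump;

public class DiagnosticsTrigger {

    // Step 1 when the hang is noticed: thread dump (javacore) only.
    public static void takeJavacore() {
        Dump.JavaDump();
    }

    // Step 2, only if the system does not recover: heap dump as well.
    public static void takeHeapDump() {
        Dump.HeapDump();
    }
}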

Determine who/what reserved 5.5 GB of virtual memory in w3wp.exe

On my machine (XP, 64) the ASP.net worker process (w3wp.exe) always launches with 5.5GB of Virtual Memory reserved. This happens regardless of the web application it's hosting (it can be anything, even an empty web page in aspx).
This big old chunk of virtual memory is reserved at the moment the process starts, so this isn't a gradual memory "leak" of some sort.
Some snooping around with windbg shows that the memory in question is Private, Reserved and RegionUsageIsVAD, which indicates it might be the work of someone calling VirtualAlloc. It also shows that the memory in question is allocated/reserved in 4 big chunks of 1GB each plus several smaller ones (1/4GB each).
So I guess I need to figure out who's calling VirtualAlloc and reserving all this memory. How do I do that?
Attaching a debugger to the process prior to the memory allocation is tricky, because w3wp.exe is a process launched by svchost.exe (that is, by IIS/the ASP.NET filter), and if I try to launch it myself in order to debug it, it just closes down without all this profuse memory reservation. Also, the command-line parameters are invalid if I reuse them (which makes sense, because it's a pipe created by the calling process).
I can attach windbg it to the process after the fact (which is how I found the memory regions in question), but I'm not sure it's possible at that point to determine who allocated what.
David Wang answered this in a similar question:
[...] the ASP.Net performance developer tells me that:
The Reserved virtual memory is nothing to worry about. You can view it as a performance/caching prerequisite of the CLR. And heavy load testing shows that it is nothing to worry about.
System.Windows.Forms - it's not pulled in by an empty hello world ASPX page. You can use Microsoft Debugging Tools and "sx e ld system.windows.forms" to identify what is actually pulling it in at runtime. Or you can use ildasm to find the dependency.
mscorlib - make sure it is GAC'd and NGen'd properly.
Virtual memory is just the address space allocated to the process. It has nothing to do with memory usage.
See:
Virtual Memory
Pushing the Limits of Windows: Virtual Memory
http://support.microsoft.com/kb/555223
Reserved memory is very different from allocated memory. Reserving memory just allocates address space. It doesn't commit any physical pages.
This address space is likely allocated by IIS for its heap. It will only commit pages when needed.
If you really want to launch w3wp.exe from windbg, you probably need to launch it with valid command-line arguments. You can use Process Explorer to determine what the command line for the current w3wp.exe process is. For instance, on my server, mine was:
c:\windows\system32\inetsrv\w3wp.exe -a \\.\pipe\iisipmeca56ca2-3a28-452a-9ad3-9e3da7b7c765 -t 20 -ap "DefaultAppPool"
I'm not sure what the UID in there specifies, but it looks like it's probably generated on the fly by the W3SVC service (which is what launched w3wp.exe) to name the pipe specified there. So you should definitely look at your own command line before launching w3wp from windbg.
