Using OpenJDK 7 from the command line on OS X Lion, how can I use jdb to debug an application that requires execution under a 32-bit JVM, due to JNI native code?
I know I can invoke java as java -d32 and it will use a 32-bit JVM. I can also pass that -d32 flag to jdb without an error, but it does not seem to have any effect: I still get the same error messages when the application tries to link its native code. Passing -J-d32 exhibits the same behaviour.
It is possible to achieve the above by starting java and jdb as separate processes from two different Terminal windows. So execute these commands, each in its own window:
java -d32 -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=127.0.0.1:5463 -classpath . MainClass
jdb -attach 127.0.0.1:5463
The first will start the JVM for the application but suspend it immediately after creation. The second will start the debugger and attach it to the JVM just created. Then you can type run in that second window to launch the application. As an added bonus, output from the application and from the debugger is not intermixed, as each has its own window.
References: The jdb help lists possible command line arguments, and JPDA has a section on transports.
Although the above does work for me, I'd welcome other answers providing easier solutions, preferably as a single command and/or without any need to choose port numbers in an arbitrary fashion. The shared memory connector does not seem to work for my JVM.
Context
I'm trying to implement a sort-of orchestrator pattern for our applications.
Basically, we have three different and independent applications developed in Qt that communicate with each other using WebSockets. We'll call them "core", "business" and "ui". The aim is flexibility: we can develop a new application in a more suitable technology and connect it to the others via the same communication protocol.
Now the idea is to have a simple launcher that allows us to specify which part to start. We launch this "orchestrator-like" application and it starts all required processes from a configuration file.
Everything is done in Qt currently (QML for the UI interfaces).
Initial Issue
I've made a custom class that reads the configuration file, prepares the processes, and starts them with their respective arguments.
It keeps a std::map of QProcess objects keyed by their name from the configuration file and launches them using the QProcess::start(<process_path>) method.
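For illustration, here is a minimal sketch of what such a launcher class might look like (names and structure are hypothetical, not our exact code; a unique_ptr is used because QProcess is not copyable and so cannot be stored in the map by value):

#include <QProcess>
#include <QString>
#include <QStringList>
#include <map>
#include <memory>

// Hypothetical sketch of the launcher class described above.
class Orchestrator
{
public:
    // Register one application read from the configuration file.
    void addProcess(const QString &name, const QString &path,
                    const QStringList &args)
    {
        std::unique_ptr<QProcess> proc(new QProcess);
        proc->setProgram(path);
        proc->setArguments(args);
        m_processes[name] = std::move(proc);
    }

    // Start every registered application.
    void startAll()
    {
        for (auto &entry : m_processes)
            entry.second->start();
    }

private:
    // Processes keyed by their name in the configuration file.
    std::map<QString, std::unique_ptr<QProcess>> m_processes;
};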
The catch is that everything went smoothly until recently. The sub-processes start and run perfectly; everything goes on as normal until, at some point, the "ui" part crashes (usually with an LLVM out-of-memory error or a vector length_error).
At first we suspected a memory leak or a code error, but after much debugging we found that the application had no error whatsoever when we ran each part individually (without using the custom orchestrator class).
Question / Concerns
So, our question is: could it be that the QProcess::start() method actually shares the same stack with its parent? With three processes having the same parent, it would not be surprising if a vector of ~500 elements stored in each application exceeded the stack size when returned.
Information
We use macOS Big Sur; the IDE is Qt Creator, with Qt 5.15.0 and C++11.
We tried using valgrind but, as read here and here, that seems a dead end for now. The errors below were found in the .crash file after the application exited.
libc++abi: terminating with uncaught exception of type std::length_error: vector
ui(2503,0x108215e00) malloc: can't allocate region
*** mach_vm_map(size=140280206704640, flags: 100) failed (error code=3)
ui(2503,0x108215e00) malloc: *** set a breakpoint in malloc_error_break to debug
LLVM ERROR: out of memory
We also tried redirecting or completely removing the applications' output: first by changing the setProcessChannelMode when starting the processes, then by using startDetached instead of start. Finally, we commented out our Log method that dumps log information to the corresponding Qt output (info/warning/critical/fatal/debug).
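For reference, the two attempts looked roughly like this (a sketch with placeholder arguments and no ownership handling, not the exact code):

#include <QProcess>
#include <QString>
#include <QStringList>

// Attempt 1: forward the child's stdout/stderr to the parent's own
// channels instead of buffering them in pipes on the parent side.
void startForwarded(const QString &path, const QStringList &args)
{
    QProcess *proc = new QProcess; // parenting/cleanup omitted in this sketch
    proc->setProcessChannelMode(QProcess::ForwardedChannels);
    proc->start(path, args);
}

// Attempt 2: detach completely, so the child gets no pipes at all
// and outlives the parent.
void startFullyDetached(const QString &path, const QStringList &args)
{
    QProcess::startDetached(path, args);
}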
As suggested by @stanislav888, we could rewrite the application-manager part in bash scripts and it would probably do the trick, but I'd like to understand the root issue to avoid future mistakes.
It looks like a bad design. Running and orchestrating the applications through a bash or PowerShell script looks much better.
But anyway.
You could try to suppress the orchestrated applications' output and see what happens. The programs' output might flood memory and cause the crash.
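One point worth knowing here: if the parent never reads the children's output, QProcess keeps buffering it on the parent side, so the parent's memory can grow without bound. A minimal sketch of discarding the output entirely (assuming Qt 5.2 or later for QProcess::nullDevice(), and that this runs before start()):

#include <QProcess>

// Discard a child's output instead of buffering it in the parent.
// Must be called before QProcess::start().
void discardOutput(QProcess &proc)
{
    proc.setStandardOutputFile(QProcess::nullDevice());
    proc.setStandardErrorFile(QProcess::nullDevice());
}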
You must check what particular trouble causes the crash. Use the memory core dump and the system error messages to understand all the details of the problem.
I am sure the community needs those details, because "out of memory", "stack depth exceeded" and similar errors make a big difference.
Try to write a bash or PowerShell script which does the same workflow as the Qt application. Hopefully that is not hard, and it will help you figure out the issue.
At least you can run this script from the application.
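For example, the Qt application could hand the workflow off to such a script with something like this (the script path is hypothetical):

#include <QProcess>
#include <QStringList>

// Launch the hypothetical replacement script, detached from the app.
void runWorkflowScript()
{
    QProcess::startDetached("/bin/bash",
                            QStringList() << "/opt/myapp/workflow.sh");
}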
If I run a Qt application directly from the Windows command line (cmd), it immediately returns to the shell even as the GUI continues running; I assume it creates a second process before the parent exits.
If I run the Qt application indirectly, though, from a batch file or Python script, it doesn't behave the same way; it blocks until the application actually exits.
Is this standard Qt behavior? I can't find any mention of it in the documentation or anywhere else. Can it be customized? I would prefer that the application always block when run from the command-line.
This is NORMAL Windows behavior.
In a console, console programs are waited on; GUI programs are not. The rules are specified in start /? (the mention of the new behavior dates from NT4 to Windows 2000).
So: start /w c:\windows\notepad
I understood that anything written to standard out (System.out) would appear in the Java Console window (when it's enabled). I spotted somewhere, though, that there might be situations where this isn't true, for example from Swing apps. Is that the case?
Basically, in what situations or setups would I not expect to see standard output in the console? Is there a difference in behavior running on the JDK rather than explicitly on the JRE, for example? javaw.exe?
PS: I understand how to display the Console in the Java settings, but I'm curious, as I've managed to create an application, run as an executable jar, that doesn't show the console despite some calls to System.out, on Windows 7.
The only way you wouldn't see System.out output in the console is if the method System.setOut has been invoked. This method is invoked to redirect output to the graphical Java Console, but I don't know of any other realistic circumstance in which it would be redirected away from the Java Console unless you do so voluntarily.
Depending on terminal settings, it can happen that output is not written until a newline character is sent as well. So if you do System.out.print("test"), it might not appear immediately.
On Windows this is usually not the case, but on Unix terminals this is quite common.
Perhaps you used javaw to start the virtual machine; that launcher does not show console messages. You can use java to start the virtual machine instead, which will show the console messages.
javaw is intended for apps with windows, java is intended for console apps.
The same thing happened to me: I could not get System.out.println or Logger.debug output on the console either.
If you are on a huge project in Eclipse or whatever, you can read below.
Solution: I realized that I had not committed the jars and some Java files to Subversion on the network, that's all. The project had not been compiled.
One situation I can think of is invoking System.setOut(null) (or System.setOut with any PrintStream other than System.out or System.err); then the console, if it exists, would show nothing.
I have compiled my FreeBSD libc source with the -g option, so that now I can step into libc functions.
But I am having trouble stepping into system call code. I have compiled the FreeBSD kernel source code with -g as well. On setting a breakpoint, gdb reports the breakpoint in .S files; on hitting the breakpoint, gdb is unable to step into the syscall source code.
Also, I have tried: gdb$ catch syscall open
but this is also not working.
Can you please suggest something?
Thanks.
You appear to have a fundamental lack of understanding of how UNIX systems work.
Think about it. Suppose you were able to step into the kernel function that implements a system call, say sys_open. So now you are looking at the kernel source for sys_open in the debugger. The question is: is the kernel running at that point, or is it stopped? Since you will want to do something like next in the debugger, let's assume the kernel is stopped.
So now you press the n key, and what happens?
Normally, the kernel will react to an interrupt raised by the keyboard, figure out which key was pressed, and send that key to the right process (the one that is blocked in read(2) from the terminal that has control of the keyboard).
But your kernel is stopped, so no key press for you.
Conclusion: debugging the kernel with a debugger that is running on that same machine is impossible.
In fact, when people debug a kernel, they usually do it by running the debugger on another machine (this is called remote debugging).
If you really want to step into the kernel, the easiest way to do that is with UML (User-Mode Linux).
After you've played with UML and understand how the userspace/kernel interface works and interacts, you can try kgdb, though the setup is usually a bit more complicated. You don't actually need a separate physical machine for this: you could use VMware, Virtual PC, or VirtualBox.
As Employed Russian already stated, gdb being in userland cannot inspect anything running in the kernel.
However, nothing prevents implementing a debugger in the kernel itself. In that case it is possible to set breakpoints and run kernel code step by step from a local debugging session (console). On FreeBSD, such a debugger is available as ddb.
Some limitations are the lack of connection between your gdb and ddb sessions, and I'm unsure whether source-level debugging (-g) is available for kernel code under FreeBSD/ddb.
An alternative and much less intrusive way to 'debug' the kernel from userland would be to use dtrace.
I've developed a Qt application which contains a TCP server and such. I'm now trying to make Ubuntu packages and let the application automatically start on startup.
The application needs to be running even if nobody is logged in, which means a daemon started via a script in /etc/init.d/
I tried simply running the application on start and sending a kill-signal on stop in the init.d script but that means the application runs in the foreground and blocks the init-script.
Forking, as described in another question, almost seems to work, but I get an 'unknown error' after trying to start a TCP server. Nevertheless, there should be an easy way to write an init script that runs my application in the background on startup on the various Linux distributions.
Could anyone point me in the right direction?
Using Ubuntu 9.10 with Qt 4.5
The best way is probably to use QtService where the work of forking is taken care of for you.
However, if you want to continue to build your own, you should either background the application or run it via start-stop-daemon, which comes with OpenRC, or a similar utility for your distribution.
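If you do background the application yourself, note that the 'unknown error' seen when forking may come from creating Qt objects before the fork. A minimal sketch that daemonizes before the QCoreApplication is constructed (POSIX only, error handling omitted):

#include <QCoreApplication>
#include <cstdlib>
#include <unistd.h>

int main(int argc, char *argv[])
{
    // Fork before any Qt object exists, so sockets and timers are
    // created in the child that actually runs the service.
    if (fork() != 0)
        std::exit(0);   // parent returns control to the init script
    setsid();           // child detaches from the controlling terminal

    QCoreApplication app(argc, argv);
    // ... create the TCP server and the rest of the service here ...
    return app.exec();
}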
Also, make sure that you only link to the QtCore shared library. Although the application might be command line and never pull up the GUI, that doesn't mean that X isn't required in order for the application to run. For example, a set of unit tests:
$ ldd runTests | grep Qt
libQtTest.so.4 => /usr/lib/qt4/libQtTest.so.4 (0x00007fd424de9000)
libQtXml.so.4 => /usr/lib/qt4/libQtXml.so.4 (0x00007fd424baa000)
libQtGui.so.4 => /usr/lib/qt4/libQtGui.so.4 (0x00007fd4240db000)
libQtCore.so.4 => /usr/lib/qt4/libQtCore.so.4 (0x00007fd422644000)
Because QtGui is present, all the X libraries are also brought in, although they are filtered from the above output. With qmake, setting QT -= gui in the .pro file keeps such a build down to QtCore.
Is your program a GUI application or does it work without GUI?
Why don't you just background it within the init script using &?
You need to add a symbolic link in one of the rc?.d directories under /etc, depending on the default runlevel. Or use the update-rc.d script: first, create a script in /etc/init.d that executes the application; second, use update-rc.d to add the links needed to start it.
You can find information about how to do it by reading update-rc.d manual page:
$ man update-rc.d
I think the simplest way is to not have any daemonize logic in the application itself; instead, use a helper program to start the app in the background and manage a pid file for it.
For example, startproc.
You can take a look at the many scripts already in your /etc/init.d for inspiration. From what I see there, most standard Linux daemons depend on startproc to start and killproc to stop.