LabVIEW blocking Qt signals?

I have a LabVIEW 8.6 program that is using a DLL written in Qt; the DLL listens to a TCP port for incoming messages and updates some internal data. My LabVIEW program calls into the DLL occasionally to read the internal data. The DLL works perfectly (i.e., receives data from the TCP port) with another Qt program. However, it does not work at all with my LabVIEW program.
I've attached a debugger to the DLL and can see calls from LabVIEW going into it -- my function for getting the internal data is being called and I can step through it. The code that gets the data from the TCP port is never called, though; it looks like the signal for incoming data on the TCP port is never triggered.
I know this sounds like a Qt issue but the DLL works perfectly with another Qt program. Unfortunately, it fails miserably with LabVIEW.
One theory:
The event loop is not running when LabVIEW calls the DLL
In the Qt DLL's run() function, I call socket->waitForDisconnected(). Perhaps the DLL is not processing incoming events because the event loop is not running? If I call exec() to start the event loop, LabVIEW crashes ("LabVIEW 8.6 Development System has encountered a problem and needs to close."):
AppName: labview.exe AppVer: 8.6.0.4001 ModName: qtcored4.dll
ModVer: 4.5.1.0 Offset: 001af21a
Perhaps when I call the DLL from another Qt program, that program's event loop is allowing for the TCP signal to be seen by the DLL. Unfortunately, kicking off the event loop in the DLL takes down LabVIEW.
Any thoughts on how to keep signals running in the DLL when LabVIEW is the calling program?
EDIT Debug trace of the exec() call:
QThread::exec() -> eventLoop.exec() -> if (qApp->thread() == thread())
in the call to
    QObject::thread() {
        return d_func()->threadData->thread;
    }
The second call, which goes through the Q_DECLARE_PRIVATE(QObject) macro (d_func()), is what triggers the crash.
EDIT 17 Aug 2009: Status update
After two days of trying various ways to get this to work I decided to implement a TCP listener directly in LabVIEW. My LabVIEW application sends data out via the DLL and receives data in via TCP. All is working well.
This question was cross-posted on http://forums.ni.com/ni/board/message?board.id=170&thread.id=431779

You should change the library call to 'run in any thread'; that way the UI thread can still run the event loop.

Can you debug through exec() to see where it crashes LabVIEW?
You can also set debugging to the maximum level in LabVIEW, on the configuration page for the Call Library Node.
LabVIEW is finicky with DLLs. It may be easier to run the DLL as a service (write a service that runs the event loop), and then have LabVIEW call a DLL that retrieves the data from the service.

Old NI help note
Just a shot in the dark...could the Qt data be clobbering some of the LV memory space immediately after the exec loop is started?

You probably don't have a QApplication object created when you are trying to call exec() in the QThread. This might be causing your crash. For the main problem, however, I would say that it is very likely you aren't getting any activity in the DLL due to the event loop not executing.
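For illustration, here is a minimal sketch of that idea: the DLL creates its own QCoreApplication and runs exec() on a thread it owns, since a non-Qt host like LabVIEW never provides one. The exported startQtEventLoop() entry point, the use of a plain Win32 thread, and the port number are my assumptions, not code from the original post:

    // Sketch only: give the DLL its own Qt event loop when the host is not a Qt app.
    #include <QCoreApplication>
    #include <QTcpServer>
    #include <QHostAddress>
    #include <windows.h>

    static DWORD WINAPI qtThreadMain(LPVOID)
    {
        static int argc = 1;
        static char *argv[] = { (char *)"qt_in_dll", 0 };
        // exec() and signal delivery (e.g. QTcpSocket::readyRead) need a
        // QCoreApplication; LabVIEW never creates one for us.
        QCoreApplication app(argc, argv);

        QTcpServer server;                          // lives in this thread, so its
        server.listen(QHostAddress::Any, 12345);    // signals are handled by exec()
        // ... connect newConnection()/readyRead() to the DLL's internal data ...

        return static_cast<DWORD>(app.exec());
    }

    // Exported entry point the host calls once after loading the DLL.
    extern "C" __declspec(dllexport) void startQtEventLoop()
    {
        static HANDLE thread = 0;
        if (!thread)
            thread = CreateThread(0, 0, qtThreadMain, 0, 0, 0);
    }

Qt officially expects its application object on the main thread, so treat this as a workaround to experiment with rather than a guaranteed fix; it at least keeps exec() out of the thread LabVIEW uses for the Call Library Node.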

Related

QShared memory for an externally running process?

I have a QApplication that calls an external executable. This executable keeps running indefinitely, passing data to the QApplication through stdout, unless the user running it from the console exits it manually. The process does not wait for stdin while it is running (it is a simple C++ program running a while loop).
I want to be able to modify this executable's behavior at runtime by sending some form of signal from the QApplication to the external process. I read about Qt's IPC and I think QSharedMemory is the easiest way to achieve this. I cannot use any kind of pipes, etc., since the process is not waiting for stdin.
Is it possible for a QSharedMemory segment to be shared by the QApplication as well as an externally running process that is not a Qt application? If yes, are there any examples someone can point me to? I tried to find some but couldn't. If not, what other options might work in my specific scenario?
Thanks in advance
The idea that you have to wait for any sort of I/O is mostly antiquated. You should design your code so that the operating system informs it as soon as an I/O request is fulfilled (new input data available, output data sent, etc.).
You should simply use standard input for your purposes. The process doesn't have to wait for standard input; it can check whether any input is available and read it if so. You'd do this in the same place where you'd poll for changes to the shared memory segment.
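As a rough sketch of that check-don't-wait approach on Unix (the pause/resume command protocol is just an illustrative assumption):

    // Poll stdin with select() and a zero timeout inside the existing while
    // loop, so the loop never blocks waiting for input.
    #include <sys/select.h>
    #include <sys/time.h>
    #include <unistd.h>
    #include <iostream>
    #include <string>

    // Returns true if at least one byte can be read from stdin right now.
    static bool stdinHasData()
    {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(STDIN_FILENO, &readSet);
        timeval timeout = {0, 0};   // check, do not wait
        return select(STDIN_FILENO + 1, &readSet, 0, 0, &timeout) > 0;
    }

    int main()
    {
        bool paused = false;
        while (true) {
            if (stdinHasData()) {
                std::string command;
                std::getline(std::cin, command);
                if (command == "pause")  paused = true;   // illustrative protocol
                if (command == "resume") paused = false;
            }
            if (!paused) {
                // ... produce data and write it to stdout as before ...
            }
        }
    }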
For Unix systems, you could use QSocketNotifier to get notified when standard input is available.
On Windows, the simplest check is _kbhit; for other solutions see this answer. QWinEventNotifier also works with a console handle on Windows.
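If the external process can afford a Qt event loop, the QSocketNotifier route looks roughly like this (a sketch using Qt 5 connect syntax; the "quit" command is just an example):

    #include <QCoreApplication>
    #include <QSocketNotifier>
    #include <QTextStream>
    #include <cstdio>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        // File descriptor 0 is standard input on Unix.
        QSocketNotifier notifier(0, QSocketNotifier::Read);
        QObject::connect(&notifier, &QSocketNotifier::activated, [&]() {
            QTextStream in(stdin);
            const QString command = in.readLine();
            if (command == QLatin1String("quit"))   // example command
                app.quit();
            // ... otherwise adjust the process's behavior based on the command ...
        });

        return app.exec();
    }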

which signal does gdb send when attaching to a process?

Which signal does gdb send when attaching to a process? Does this work the same on different UNIXes, e.g. Linux and Mac OS X?
So far I have only found out that SIGTRAP is used to implement breakpoints. Is it used for attaching as well?
AFAIK it does not need any signals to attach. It just suspends the "inferior" by calling ptrace. It also reads the debugged process's memory and registers through these calls, and it can request single-stepping of instructions (provided it's implemented on that port of Linux), etc.
Software breakpoints are implemented by placing an instruction that triggers a "trap" (or something similar) at the right location; the debugged process can run at full speed until it is reached.
Also (besides reading man ptrace, as already mentioned) see the ptrace explanation on Wikipedia.
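To illustrate, attaching on Linux boils down to roughly this (a sketch, not gdb's actual code; note that the kernel stops the target with SIGSTOP as part of PTRACE_ATTACH, so the debugger itself does not have to send a signal):

    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <cstdio>

    bool attachTo(pid_t pid)
    {
        if (ptrace(PTRACE_ATTACH, pid, 0, 0) == -1) {
            std::perror("ptrace(PTRACE_ATTACH)");
            return false;
        }
        int status = 0;
        waitpid(pid, &status, 0);          // wait until the target has stopped
        // ... PTRACE_PEEKDATA / PTRACE_GETREGS can now inspect the target ...
        ptrace(PTRACE_DETACH, pid, 0, 0);  // let it run again
        return true;
    }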

Initialize QProcess to a process already running

I would like to know if it's possible to create a QProcess and initialize it to a process which is already running.
My application starts another application. If my application is closed abnormally, I would like it to reattach to the other application when it is restarted.
You should use an IPC system such as Qt D-Bus on Linux. You then communicate with the other process over the IPC system instead of stdin and stdout.
When your frontend application crashes, then the restarted application can reconnect to the backend process.
Unfortunately, due to the internal architecture of QProcess, there's no support for this. You'd need to copy-paste a bunch of QProcess code to a new class and add the missing functionality yourself.
There's an easier way, though: create a process wrapper that exposes a QProcess via QLocalSocket. The wrapper is simple and shouldn't be crashing. It can self-terminate when the process itself terminates, to prevent dangling wrappers from hanging around. When your application crashes or is terminated, the new instance can try to attach to the local socket if a wrapper exists. If it doesn't exist, then it will spawn a new wrapper.
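A minimal sketch of that wrapper, assuming Qt 5; the socket name "backend-wrapper" and the backend executable name are illustrative:

    #include <QCoreApplication>
    #include <QLocalServer>
    #include <QLocalSocket>
    #include <QProcess>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QProcess backend;
        backend.start("backend", QStringList());          // the long-running worker

        QLocalServer::removeServer("backend-wrapper");    // clear a stale socket
        QLocalServer server;
        server.listen("backend-wrapper");                 // well-known local name

        QLocalSocket *client = 0;
        QObject::connect(&server, &QLocalServer::newConnection, [&]() {
            client = server.nextPendingConnection();      // the (re)attached GUI
        });

        // Forward whatever the backend prints to whichever GUI is attached.
        QObject::connect(&backend, &QProcess::readyReadStandardOutput, [&]() {
            const QByteArray data = backend.readAllStandardOutput();
            if (client && client->state() == QLocalSocket::ConnectedState)
                client->write(data);
        });

        // Self-terminate with the backend so no dangling wrappers stay behind.
        QObject::connect(&backend,
                         static_cast<void (QProcess::*)(int, QProcess::ExitStatus)>(&QProcess::finished),
                         &app, &QCoreApplication::quit);

        return app.exec();
    }

On the GUI side, a new instance first tries QLocalSocket::connectToServer("backend-wrapper"); if that fails within a short timeout, it assumes no wrapper exists and spawns a fresh one.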

Do I need multithread?

I am developing a project about data sampling and real-time display, like an oscilloscope. The data are transferred to the PC over a virtual COM port at a high transfer rate. I am using Qt 4.8.4 + QextSerialPort for the serial port communication on Windows XP.
I have tested the device using AccessPort. It could receive data from the port, but writes to the port got no response. To make the device stop uploading data, I had to stop the device, write the stop command to it, and restart it; only then did it stop uploading. The behavior is the same for my application when a single thread handles both reading and writing. I do not want to close the port completely; I just want to send some commands to change things like the sampling rate and the data format.
So, I tried multithread. I downloaded a reference code from the following link.
http://www.qtcentre.org/threads/21063-QextSerialPort-with-QTimer-approch-for-reading?p=103325&highlight=#post103325
(I am sorry, I really do not know how to paste code neatly on this site.)
In this code, the author derived two subclasses of QThread: one for reading from the port, the other for writing to it. He also reimplemented run() in both subclasses. I tried this code, but found that my GUI froze while receiving data.
It seems that the author of the following thread ran into the same problem as me.
Qt: GUI sometimes freezing when using threads and signals/slots
But I have some questions about this thread.
The author mentioned "When running the code in the GUI-thread, there is no problem."
Did the author mean that everything is OK when all the code runs in the GUI thread? If so, why did he use a worker thread and a process thread?
The author mentioned "Process thread"
Do I need another process thread to work with the GUI thread that is responsible for the display? (I need to display not only the data but also the waveform, all in real time.)
Please give me some tips on how to overcome this problem. Thanks a lot.
The short answer is yes. Doing any heavy processing on the GUI thread will result in freezing the GUI (especially if you block). Instead you should either have an independent thread that updates the data, or spin off worker threads for specific tasks. In either case, signal the GUI thread when there is new data to display. If possible, I'd recommend using the MVC pattern and implementing a QAbstractItemModel to provide data to your view (as it has a defined pattern for providing those updates).
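As a rough sketch of that structure (class, signal, and function names are illustrative, and Qt 5 connect syntax is assumed):

    #include <QCoreApplication>
    #include <QThread>
    #include <QVector>
    #include <QDebug>

    // Acquisition worker: lives in its own thread, so blocking serial-port
    // reads never stall the GUI.
    class Acquirer : public QObject
    {
        Q_OBJECT
    public slots:
        void acquire()
        {
            for (;;) {
                QVector<double> samples;
                // ... blocking read from the serial port, fill 'samples' ...
                emit samplesReady(samples);
            }
        }
    signals:
        void samplesReady(const QVector<double> &samples);
    };

    // Setup, normally done in the GUI thread (e.g. the main window constructor).
    void startAcquisition(QObject *guiReceiver)
    {
        QThread *thread = new QThread;
        Acquirer *worker = new Acquirer;     // no parent, so it can change threads
        worker->moveToThread(thread);

        QObject::connect(thread, &QThread::started, worker, &Acquirer::acquire);
        // Cross-thread signal: Qt delivers it as a queued call in the GUI thread,
        // so the receiving code can safely touch widgets / the plot.
        QObject::connect(worker, &Acquirer::samplesReady, guiReceiver,
                         [](const QVector<double> &s) {
            qDebug() << "got" << s.size() << "samples";  // replace with plot update
        });

        thread->start();
    }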

Reattaching to an orphan process in QT

We're preparing an application using Qt that has a main process that controls the GUI and spawns processes that do the actual data processing. Messages are exchanged between the main process and the data-processing processes using the Qt mechanisms and the stdin/stdout pipes.
Now, in the event that the GUI crashes, the other processes keep running. What we'd like to be able to do is to, when a new GUI starts, reconnect to these processes as before. Anyone know if this is possible, and if so, how to achieve it?
This is possible if you are using a named pipe for communicating with the process. stdin/out are closed if the process they belong to is terminated.
You might want to investigate shared memory for the communication between the processes. I seem to recall it recovering in a very similar situation at a previous job.
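For illustration, shared use of a segment with QSharedMemory looks roughly like this (the key, size, and payload layout are assumptions):

    #include <QSharedMemory>
    #include <cstring>

    // Either side calls this; whichever process gets there first creates the
    // segment, the other one attaches to it.
    bool publishStatus(const char *status)
    {
        static QSharedMemory shm("gui-backend-status");   // key agreed by both sides

        if (!shm.isAttached() && !shm.create(256) && !shm.attach())
            return false;

        shm.lock();                                        // guard concurrent access
        std::strncpy(static_cast<char *>(shm.data()), status, 255);
        shm.unlock();
        return true;
    }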
Another possibility, if your platform supports it, is to use D-Bus for the communication between the processes. In that case, neither process has to be present at the same time; each will get the appropriate messages whenever it is running.
