Initialize QProcess to a process already running

I would like to know if it's possible to create a QProcess and initialize it to a process that is already running.
My application starts another application. If my application closes abnormally, I would like the restarted instance to attach to that other application.

You should use an IPC system, e.g. Qt D-Bus on Linux. You then communicate with the other process over the IPC system instead of over stdin and stdout.
When your frontend application crashes, the restarted application can then reconnect to the backend process.

Unfortunately, due to the internal architecture of QProcess, there's no support for this. You'd need to copy-paste a bunch of QProcess code to a new class and add the missing functionality yourself.
There's an easier way, though: create a process wrapper that exposes a QProcess via QLocalSocket. The wrapper is simple and shouldn't be crashing. It can self-terminate when the process itself terminates, to prevent dangling wrappers from hanging around. When your application crashes or is terminated, the new instance can try to attach to the local socket if a wrapper exists. If it doesn't exist, then it will spawn a new wrapper.
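The attach-or-spawn step can be sketched without Qt. Below is a minimal Python version using a Unix domain socket in place of QLocalSocket; `socket_path` and `spawn_wrapper` are invented names for illustration, and the wrapper process itself is left out:

```python
import socket

def attach_or_spawn(socket_path, spawn_wrapper):
    """Try to attach to an existing wrapper's local socket;
    ask spawn_wrapper to create one if nothing is listening."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        # A wrapper survived the GUI crash: just reattach to its socket.
        s.connect(socket_path)
        return s
    except (FileNotFoundError, ConnectionRefusedError):
        # No wrapper left behind: spawn a fresh one, then connect to it.
        s.close()
        spawn_wrapper(socket_path)
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(socket_path)
        return s
```

The Qt version would use `QLocalSocket::connectToServer()` with the same fallback logic; the key point is that a named local socket, unlike the anonymous pipes QProcess creates, can be found again by a restarted client.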

Related

QShared memory for an externally running process?

I have a QApplication that calls an external executable. This executable keeps running indefinitely, passing data to the QApplication through stdout, unless the user exits it manually from the console. The process does not wait for stdin while it is running (it's a simple C++ program running a while loop).
I want to be able to modify this executable's behavior at runtime by sending some form of signal from the QApplication to the external process. I read about Qt's IPC and I think QSharedMemory is the easiest way to achieve this. I cannot use any kind of pipes, since the process is not waiting for stdin.
Is it possible for a QSharedMemory segment to be shared by the QApplication as well as an externally running process that is not a Qt application? If yes, are there any examples someone can point me to? I tried to find some but couldn't. If not, what other options might work in my specific scenario?
Thanks in advance
The idea that you have to wait for any sort of I/O is mostly antiquated. You should design your code so that the operating system informs it as soon as an I/O request is fulfilled (new input data available, output data sent, etc.).
You should simply use standard input for your purposes. The process doesn't have to wait for standard input; it can check whether any input is available, and read it if so. You'd do this in the same place where you'd poll for changes to the shared memory segment.
For Unix systems, you could use QSocketNotifier to get notified when standard input is available.
On Windows, the simplest test is _kbhit, for other solutions see this answer. QWinEventNotifier also works with a console handle on Windows.
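As a concrete illustration of "check if any input is available" on Unix, here is a sketch using `select()` from the Python standard library (the same mechanism QSocketNotifier builds on). With a zero timeout it is a pure poll and never blocks; note that on Windows `select()` only accepts sockets, which is why `_kbhit` or QWinEventNotifier is needed there:

```python
import select

def fd_has_data(fd, timeout=0.0):
    """Return True if the file descriptor has data ready to read.
    With timeout=0.0 this never blocks; pass sys.stdin.fileno()
    to poll standard input on Unix."""
    readable, _, _ = select.select([fd], [], [], timeout)
    return bool(readable)
```

A C++ worker loop would call the underlying `select()` (or `poll()`) on file descriptor 0 at the top of each iteration and `read()` only when data is reported ready.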

Run QProcess with and without a GUI

I have been playing with QProcess as a way to start computationally intensive tasks that continue after the GUI that created them has been closed. I was wondering whether I could improve on this so that too many jobs cannot be started at once.
Say I have 20 available cores. User 1 starts a computation that is broken into 30 processes and exits the GUI. At the moment I'm using bash to control all of this, so the GUI only really executes a bash script which counts the number of processes running. The whole workflow gets rather messy if another user logs in and starts another large job in the meantime, so currently it refuses to submit if the script is already running. Also, there is no way to use the GUI to monitor the processes, as they are now being run by bash.
Ideally I would like to improve the flow so that user 1 submits their processes and a separate background process manages the starting of the individual compute tasks, all of which are now QProcesses. Where I am getting stuck is what happens when another user logs in. Instead of 'please try again later', I would like to pick up the existing managing process and append any new jobs to its queue. Is this something I can do with QProcess and D-Bus? If so, what would be a good design for such a process?
Thanks
What you're asking for requires two programs: a GUI client and a server application.
The logged-on users interact with a GUI client interface to launch and organise processes. The GUI client creates messages and sends them to the server application, which responds by creating and managing processes with QProcess. So the GUI client is simply an interface to the server application.
Of course, you need the GUI applications and the server application to communicate with each other. While there are multiple methods of interprocess communication available, Qt has QLocalServer and QLocalSocket, which can be used by the server and client applications respectively.
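A rough sketch of that client/server split, using a Unix domain socket from the Python standard library where a Qt program would use QLocalServer/QLocalSocket. The SUBMIT/QUEUED wire format and the function names are invented for illustration; a real server would also run a worker that drains the queue and starts at most one QProcess per free core:

```python
import queue
import socket

def run_job_server(socket_path, jobs):
    """Accept job submissions on a local socket and enqueue them.
    A separate worker (not shown) would drain `jobs` and launch the
    compute processes, enforcing the core limit in one place."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(socket_path)
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        with conn:
            line = conn.makefile().readline().strip()
            if line == "QUIT":          # shutdown command for the sketch
                break
            jobs.put(line)              # append to the shared job queue
            conn.sendall(b"QUEUED %d\n" % jobs.qsize())
    srv.close()

def submit_job(socket_path, command):
    """Client side: any user's GUI sends a job to the one shared server."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
        c.connect(socket_path)
        c.sendall(command.encode() + b"\n")
        return c.makefile().readline().strip()
```

Because every GUI talks to the same named socket, a second user's submission lands in the same queue instead of being refused, which is exactly the behaviour the question asks for.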

Persistent connection to external source in uwsgi project

I have a project which needs to make a tcp connection to an external source. Each worker thread will be sending messages to this external service.
I'm wondering how I can do this without having a connection be brought up and torn down for every request. I'm pretty sure the pymongo module does something similar but I can't find any documentation on it. Would it be possible to set up some kind of thread-safe queue and have a separate thread consume that queue? I understand I could probably use gearman for this, but I'd like to avoid having another moving part in the system.
uWSGI has a thread-safe, process-shared queueing system (http://projects.unbit.it/uwsgi/wiki/QueueFramework), but are you sure the plain Python threading.Queue class is not enough?
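If threading.Queue does suffice, the usual shape is one consumer thread that owns the single persistent connection while the worker threads only enqueue. A minimal sketch, where `send` stands in for whatever writes to your long-lived TCP socket (e.g. `sock.sendall`):

```python
import queue
import threading

def start_sender(send, outbox):
    """Start one consumer thread that owns the persistent connection.
    Worker threads never touch the socket; they just put() messages."""
    def pump():
        while True:
            msg = outbox.get()      # blocks until a worker enqueues something
            if msg is None:         # sentinel: shut the sender down cleanly
                break
            send(msg)               # the one thread that uses the connection
    t = threading.Thread(target=pump, daemon=True)
    t.start()
    return t
```

Since only the pump thread talks to the external service, the connection is brought up once and reused across requests, and no locking beyond the queue is needed.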

Reattaching to an orphan process in Qt

We're preparing an application using Qt that has a main process that controls the GUI and spawns processes that do the actual data processing. Messages are exchanged between the main process and the data-processing processes using the Qt mechanisms and the stdin/stdout pipes.
Now, in the event that the GUI crashes, the other processes keep running. What we'd like to be able to do is to, when a new GUI starts, reconnect to these processes as before. Anyone know if this is possible, and if so, how to achieve it?
This is possible if you are using a named pipe for communicating with the process. stdin/out are closed if the process they belong to is terminated.
You might want to investigate shared memory for the communication between processes. I seem to recall that it was able to recover in a very similar situation at a previous job.
Another possibility, if your platform supports it, is to use D-Bus for the communication between processes. In that case, neither process needs the other to be present; each will get the appropriate messages whenever it is running.
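The named-pipe suggestion can be demonstrated with a FIFO from the Python standard library (Unix only; QLocalSocket plays the equivalent role portably). The function names are made up for illustration; the point is that the pipe has a filesystem name a restarted GUI can simply reopen, unlike the anonymous stdin/stdout pipes QProcess sets up:

```python
import os

def ensure_fifo(path):
    """Create the named pipe once; both the worker and any (re)started
    GUI open it by this name, which anonymous pipes cannot offer."""
    try:
        os.mkfifo(path)
    except FileExistsError:
        pass                        # already created by the other side
    return path

def attach_reader(path):
    """A restarted GUI reattaches by reopening the read end.
    O_NONBLOCK lets the open succeed before the worker writes."""
    return os.open(path, os.O_RDONLY | os.O_NONBLOCK)
```

The worker keeps its write end open across GUI restarts; each new GUI instance just calls `attach_reader()` on the agreed path and resumes receiving data.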

LabVIEW blocking Qt signals?

I have a LabVIEW 8.6 program that is using a DLL written in Qt; the DLL listens to a TCP port for incoming messages and updates some internal data. My LabVIEW program calls into the DLL occasionally to read the internal data. The DLL works perfectly (i.e., receives data from the TCP port) with another Qt program. However, it does not work at all with my LabVIEW program.
I've attached a debugger to the DLL and can see calls from LabVIEW going into it -- my function for getting the internal data is being called and I can step through it. The code that gets the data from the TCP is never called though; it looks like the signal for incoming data on the TCP port is never triggered.
I know this sounds like a Qt issue but the DLL works perfectly with another Qt program. Unfortunately, it fails miserably with LabVIEW.
One theory:
The event loop is not running when LabVIEW calls the DLL
In the Qt DLL's run() function, I call socket->waitForDisconnected(). Perhaps the DLL is not processing incoming events because the event loop is not running? If I call exec() to start the event loop, LabVIEW crashes ("LabVIEW 8.6 Development System has encountered a problem and needs to close."):
AppName: labview.exe AppVer: 8.6.0.4001 ModName: qtcored4.dll
ModVer: 4.5.1.0 Offset: 001af21a
Perhaps when I call the DLL from another Qt program, that program's event loop is allowing for the TCP signal to be seen by the DLL. Unfortunately, kicking off the event loop in the DLL takes down LabVIEW.
Any thoughts on how to keep signals running in the DLL when LabVIEW is the calling program?
EDIT Debug trace of the exec() call:
QThread::exec() -> eventLoop.exec() -> if (qApp->thread() == thread())
in the call to
QObject::thread() {
return d_func()->threadData->thread;
}
The second call, which goes through the Q_DECLARE_PRIVATE(QObject) machinery via d_func(), triggers the crash.
EDIT 17 Aug 2009: Status update
After two days of trying various ways to get this to work I decided to implement a TCP listener directly in LabVIEW. My LabVIEW application sends data out via the DLL and receives data in via TCP. All is working well.
This question was cross-posted on http://forums.ni.com/ni/board/message?board.id=170&thread.id=431779
You should change the library call to 'run in any thread'; that way the UI thread can still run the event loop.
Can you debug through exec() to see where it crashes LabVIEW?
You can also set debugging the maximum in LabVIEW in the configuration page for the Call Library Node.
LabVIEW is finicky with DLLs. It may be easier to run the DLL as a service (write a service that runs the event loop), and then have LabVIEW call a DLL that retrieves the data from the service.
Old NI help note
Just a shot in the dark...could the Qt data be clobbering some of the LV memory space immediately after the exec loop is started?
You probably don't have a QApplication object created when you are trying to call exec() in the QThread. This might be causing your crash. For the main problem, however, I would say that it is very likely you aren't getting any activity in the DLL due to the event loop not executing.
