We're building an application using Qt in which a main process controls the GUI and spawns child processes that do the actual data processing. Messages are exchanged between the main process and the data-processing processes using the Qt mechanisms and the stdin/stdout pipes.
Now, in the event that the GUI crashes, the other processes keep running. When a new GUI starts, we'd like it to reconnect to these processes as before. Does anyone know if this is possible, and if so, how to achieve it?
This is possible if you use a named pipe to communicate with each process; stdin/stdout pipes are closed when the process they belong to terminates.
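For instance, each worker could listen on a named local socket (Qt wraps this as QLocalServer/QLocalSocket: a named pipe on Windows, a local domain socket on Unix) that outlives the GUI, and a restarted GUI simply reconnects by name. A rough worker-side sketch, with a made-up socket name and protocol:

    // Worker side: accept (re)connections by name, so a restarted GUI
    // can pick up where the crashed one left off. "worker-1" is invented.
    #include <QtCore/QCoreApplication>
    #include <QtNetwork/QLocalServer>
    #include <QtNetwork/QLocalSocket>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QLocalServer server;
        QLocalServer::removeServer("worker-1");   // clear a stale socket left by a crash
        if (!server.listen("worker-1"))
            return 1;                             // name still owned by a live worker

        for (;;) {
            if (server.waitForNewConnection(-1)) {      // block until a GUI connects
                QLocalSocket *gui = server.nextPendingConnection();
                gui->write("status: 42% done\n");       // stand-in for your protocol
                gui->flush();
                gui->waitForDisconnected(-1);           // serve until the GUI goes away
                delete gui;
            }
        }
    }

The GUI side is then just a QLocalSocket that calls connectToServer("worker-1") every time it starts, whether that is the first launch or a restart after a crash.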
You might want to investigate shared memory for the communication between processes. I seem to recall that, at a previous job, a very similar setup was able to recover in this kind of situation.
Another possibility, if your platform supports it, is to use D-Bus for the communication between processes. In that case, neither process has to be there; each will get the appropriate messages while it is running.
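A minimal sketch of what the worker side might look like with Qt's D-Bus module (the service name, object path, and method are all invented):

    // Worker side: export a QObject's slots as D-Bus methods.
    #include <QtCore/QCoreApplication>
    #include <QtCore/QObject>
    #include <QtDBus/QDBusConnection>

    class Worker : public QObject
    {
        Q_OBJECT
    public slots:
        QString status() { return "42% done"; }   // callable by any bus client
    };

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        Worker worker;
        QDBusConnection bus = QDBusConnection::sessionBus();
        bus.registerService("com.example.Worker");             // invented service name
        bus.registerObject("/worker", &worker,
                           QDBusConnection::ExportAllSlots);   // expose the slots
        return app.exec();
    }

    #include "main.moc"   // Q_OBJECT in a .cpp needs this with qmake

A GUI, old or freshly restarted, would then reach the worker with QDBusInterface("com.example.Worker", "/worker") and call("status").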
I have a QApplication that calls an external executable. This executable keeps running indefinitely, passing data to the QApplication through stdout, unless the user manually exits it from the console. The process does not wait for stdin while it is running (it's a simple C++ program running as an executable with a while loop).
I want to be able to modify this executable's behavior at runtime by sending some form of signal from the QApplication to the external process. I read about Qt's IPC and I think QSharedMemory is the easiest way to achieve this. I cannot use any kind of pipe, etc., since the process is not waiting for stdin.
Is it possible for a QSharedMemory segment to be shared by the QApplication as well as an externally running process that is not a Qt application? If yes, are there any examples someone can point me to? I tried to find some but couldn't. If not, what other options might work in my specific scenario?
Thanks in advance
The idea that you have to wait for any sort of I/O is mostly antiquated. You should design your code so that it is informed by the operating system as soon as an I/O request is fulfilled (new input data available, output data sent, etc.).
You should simply use standard input for your purposes. The process doesn't have to wait for standard input; it can check whether any input is available, and read it if so. You'd do this in the same place where you'd poll for changes to the shared memory segment.
For Unix systems, you could use QSocketNotifier to get notified when standard input is available.
On Windows, the simplest test is _kbhit; for other solutions see this answer. QWinEventNotifier also works with a console handle on Windows.
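For example, a minimal Unix sketch of the QSocketNotifier approach (the command handling is a placeholder):

    // Get notified when stdin has data instead of blocking on it.
    #include <QtCore/QCoreApplication>
    #include <QtCore/QDebug>
    #include <QtCore/QObject>
    #include <QtCore/QSocketNotifier>
    #include <QtCore/QTextStream>

    class StdinReader : public QObject
    {
        Q_OBJECT
    public slots:
        void readCommand()
        {
            QTextStream in(stdin);
            QString line = in.readLine();      // data arrived on stdin; read a line of it
            qDebug() << "command:" << line;    // react to the command here
        }
    };

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        StdinReader reader;
        QSocketNotifier notifier(0, QSocketNotifier::Read);   // fd 0 = stdin
        QObject::connect(&notifier, SIGNAL(activated(int)),
                         &reader, SLOT(readCommand()));
        return app.exec();
    }

    #include "main.moc"   // Q_OBJECT in a .cpp needs this with qmake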
I have been playing with QProcess as a way to start computationally intensive tasks that can continue after the GUI that created them has been closed. I am now wondering whether I can improve on this so that too many jobs can't be started at once.
Say I have 20 available cores. User 1 starts a computation that is broken into 30 processes and exits the GUI. At the moment I'm using bash to control all of this, so the GUI really just executes a bash script that counts the number of running processes. The whole workflow gets rather messy if another user logs in and starts another large job in the meantime, so currently the script refuses to submit if it is already running. Also, there is no way to use the GUI to monitor the processes, as they are now being run by bash.
Ideally I would like to improve the flow so that user 1 submits their processes and a separate background process manages the starting of the individual compute tasks, all of which are now QProcesses. Where I am getting stuck is when another user logs in: instead of 'please try again later', I would like to pick up the existing managing process and append any new jobs to its queue. Is this something I can do with QProcess and D-Bus? If so, what would be a good design for such a process?
Thanks
What you're asking requires two programs: a GUI client and a server application.
The logged-on users interact with the GUI client interface to launch and organise processes. The GUI client creates messages and sends them to the server application, which responds by creating and managing processes with QProcess. So the GUI client is simply an interface to the server application.
Of course, you need the GUI applications and the server application to communicate with each other. While there are multiple methods of interprocess communication available, Qt has QLocalServer and QLocalSocket, which can be used by the server and client applications respectively.
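As a rough sketch of the client side, assuming the server has called listen("job-manager") on a made-up socket name and accepts one command per line, queuing or starting a QProcess for each:

    // GUI-client side: hand a job off to the long-running server.
    // Socket name and message format are invented for this sketch.
    #include <QtCore/QString>
    #include <QtNetwork/QLocalSocket>

    bool submitJob(const QString &command)
    {
        QLocalSocket socket;
        socket.connectToServer("job-manager");    // the server's QLocalServer name
        if (!socket.waitForConnected(1000))
            return false;                         // server not running

        socket.write(command.toUtf8() + '\n');    // e.g. "run /path/to/task"
        socket.flush();
        return socket.waitForBytesWritten(1000);
    }

Because the queue lives in the server and not in any user's GUI, a second user's client simply connects to the same socket and its jobs are appended to the existing queue.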
I need to implement (in Qt) a solution for communication between two programs running on a Linux machine. One program is Worker, and the second is Watchdog. Basically I need Watchdog to periodically check on Worker and, in case something is wrong (no process, or a hang where Worker gives no answer), kill Worker (if present) and start it again.
Worker runs as a daemon, so I think starting it from /etc/init.d/worker would be appropriate.
I can see two solutions:
Unix signals - both programs can send and receive Unix SIGUSR1
Shared memory
Which one to choose?
With signals, both programs will have to know each other's PID, probably by reading it from /var/run in the filesystem, so that looks like a drawback.
With shared memory, all I need is a "key" that both programs have hardcoded, so there is no need to read PIDs from the filesystem. Since Watchdog starts first, it can create the shared memory segment, and Worker will only attach to it and maybe update a timestamp value in it? However, to stop Worker in case of a hang, Watchdog will still need Worker's PID to send it SIGKILL; maybe it can read that from shared memory too? Both concepts are new to me.
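To sketch what I mean on the Worker side (the key name and memory layout here are just my invention):

    // Worker side of the heartbeat idea: attach to the segment the
    // Watchdog created, then periodically store our PID and a timestamp.
    #include <QtCore/QDateTime>
    #include <QtCore/QSharedMemory>
    #include <unistd.h>   // getpid()

    struct Heartbeat {
        qint64 pid;
        qint64 lastSeenMsecs;
    };

    // shm was constructed as QSharedMemory("worker-heartbeat");
    // call this from a QTimer every second or so
    bool beat(QSharedMemory &shm)
    {
        if (!shm.isAttached() && !shm.attach())
            return false;                    // Watchdog hasn't created it yet
        shm.lock();                          // guard against a concurrent reader
        Heartbeat *hb = static_cast<Heartbeat *>(shm.data());
        hb->pid = getpid();
        hb->lastSeenMsecs = QDateTime::currentMSecsSinceEpoch();
        shm.unlock();
        return true;
    }

Watchdog would then create the segment with create(sizeof(Heartbeat)), poll lastSeenMsecs from its own QTimer, and send SIGKILL to pid when the timestamp goes stale.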
So what is the proper way to build a reliable Watchdog, or am I missing something?
best regards
Marek
I think this is the best solution available through Qt:
http://qt-project.org/doc/qt-4.8/qlocalsocket.html
http://qt-project.org/doc/qt-4.8/qlocalserver.html
The QLocalSocket class provides a local socket. On Windows this is a named pipe and on Unix this is a local domain socket.
http://qt-project.org/doc/qt-4.8/ipc-localfortuneserver.html
http://qt-project.org/doc/qt-4.8/ipc-localfortuneclient.html
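For the watchdog use case, a sketch along these lines (the socket name, ping protocol, and restart commands are all invented):

    // Watchdog side: Worker runs a QLocalServer under a known name; a
    // failed connect or a missing reply to "ping" is treated as a hang.
    #include <QtCore/QProcess>
    #include <QtNetwork/QLocalSocket>

    bool workerAlive()
    {
        QLocalSocket probe;
        probe.connectToServer("worker-control");   // Worker must listen() on this
        if (!probe.waitForConnected(500))
            return false;                          // no process, or not accepting
        probe.write("ping\n");
        probe.flush();
        return probe.waitForReadyRead(500);        // no reply within 500 ms = hung
    }

    void checkWorker()   // call this from a QTimer in the Watchdog
    {
        if (!workerAlive()) {
            QProcess::execute("pkill -9 worker");               // crude, but no PID bookkeeping
            QProcess::startDetached("/etc/init.d/worker start");
        }
    }

This also sidesteps the PID question from the post above: the Watchdog never needs Worker's PID, because liveness is probed through the socket rather than through signals.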
Hope that helps.
I am developing a data-sampling and real-time display project, like an oscilloscope. The data are transferred to the PC over a virtual COM port at a high transfer rate. I am using Qt 4.8.4 + QextSerialPort for the serial port communication on Windows XP.
I have tested the device using AccessPort. It could receive data from the port, but writes to the port got no response. If I wanted to make the device stop uploading data, I had to stop the device, write the stop command to it, and restart the device; only then did the device stop uploading. The same happens in my application when one thread handles both reading and writing. I do not want to close the port completely; I just want to enter some command to change something: the sampling rate, the data format, etc.
So I tried multithreading. I downloaded reference code from the following link.
http://www.qtcentre.org/threads/21063-QextSerialPort-with-QTimer-approch-for-reading?p=103325&highlight=#post103325
(I am sorry, I really do not know how to paste code properly on this site.)
In this code, the author derived two subclasses of QThread, one for reading the port and the other for writing to it, and reimplemented run() in both. I tried this code, but found that my GUI froze while data was being received.
It seems that the author of the following thread ran into the same problem as me.
Qt: GUI sometimes freezing when using threads and signals/slots
But I have some questions about this thread.
The author mentioned "When running the code in the GUI-thread, there is no problem."
Did the author mean that everything is OK when all the code is in the GUI thread? If so, why did he use a worker thread and a process thread?
The author also mentioned a "process thread".
Do I need another processing thread to work alongside the GUI thread that is responsible for display? (I need to display not only the data but also the waveform, all in real time.)
Please give me some tips on how to overcome this problem. Thanks a lot.
The short answer is yes. Doing any heavy processing on the GUI thread will freeze the GUI (especially if you block). Instead you should either have an independent thread that updates the data, or spin off worker threads for specific tasks. In either case, you should signal the GUI thread when there is new data to display. If possible, I'd recommend using the MVC pattern and implementing a QAbstractItemModel to provide data to your view (as it has a defined pattern for providing those updates).
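A minimal sketch of the independent-thread variant; here readPort() is a stub standing in for the blocking QextSerialPort read, and the cross-thread connection is queued, so the receiving slot runs in the GUI thread:

    // acquirer.h - reads in its own thread, signals chunks to the GUI
    #include <QtCore/QThread>
    #include <QtCore/QVector>

    class Acquirer : public QThread
    {
        Q_OBJECT
    public:
        Acquirer() : m_stop(false) {}
        void stop() { m_stop = true; }

    signals:
        void samplesReady(const QVector<double> &samples);

    protected:
        void run()
        {
            while (!m_stop) {
                QVector<double> chunk = readPort();  // replace with the real serial read
                emit samplesReady(chunk);            // queued into the GUI thread
            }
        }

    private:
        QVector<double> readPort()
        {
            msleep(10);                              // stand-in for a blocking read
            return QVector<double>(64, 0.0);
        }
        volatile bool m_stop;                        // good enough for a sketch
    };

In the GUI thread you would register the type once and connect as usual (appendSamples is a slot you would write):

    qRegisterMetaType<QVector<double> >("QVector<double>");  // needed for queued delivery
    Acquirer *acq = new Acquirer;
    connect(acq, SIGNAL(samplesReady(QVector<double>)),
            plotWidget, SLOT(appendSamples(QVector<double>)));
    acq->start();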
I would like to know if there is a way for an MPI process to send a kill signal to another MPI process.
Or, put differently, is there a way to exit from an MPI environment gracefully while one of the processes is still active? (i.e. MPI_Abort() prints an error message).
Thanks
No, this is not possible within an MPI application using the MPI library.
Individual processes are not aware of the location of the other processes, nor of their process IDs, and there is nothing in the MPI spec to perform the kill you want.
If you were to do this manually, you'd need an MPI_Alltoall to exchange process IDs and hostnames across the system, and then you would need to spawn ssh/rsh to reach the required node when you wanted to kill something. All in all, it's not portable and not clean.
MPI_Abort is the right way to do what you are trying to achieve. From the Open MPI manual:
"This routine makes a "best attempt" to abort all tasks in the group of comm." (ie. MPI_Abort(MPI_COMM_WORLD, -1) is what you need.
Any output during MPI_Abort would be machine specific - so you may, or may not, receive the error message you mention.
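For reference, a minimal sketch of that call (the error condition here is a stand-in):

    // Any rank can tear down the whole job with MPI_Abort.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        bool fatal = false;   // e.g. set when this rank hits an unrecoverable error
        if (fatal) {
            std::fprintf(stderr, "rank %d aborting the job\n", rank);
            MPI_Abort(MPI_COMM_WORLD, -1);   // best-effort kill of all tasks in comm
        }

        MPI_Finalize();
        return 0;
    }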