Do I need multithreading? - Qt

I am developing a project for data sampling and real-time display, like an oscilloscope. The data are transferred to the PC over a virtual COM port at a high transfer rate. I am using Qt 4.8.4 + QextSerialPort to handle the serial port communication on Windows XP.
I have tested the device using AccessPort. It could receive data from the port, but the device did not respond to writes. If I wanted to make the device stop uploading data, I had to stop the device, write the stop command to it, and restart the device; only then did it stop uploading. It is the same for an application that uses one thread for both reading and writing. I did not want to close the port completely; I just wanted to send some commands to change settings: the sampling rate, the data format, etc.
So, I tried multithreading. I downloaded reference code from the following link.
http://www.qtcentre.org/threads/21063-QextSerialPort-with-QTimer-approch-for-reading?p=103325&highlight=#post103325
(I am sorry, I really do not know how to paste code properly on this site.)
In this code, the author derived two subclasses of QThread: one for reading from the port, the other for writing to it. He also reimplemented run() in these two subclasses. I tried this code, but found that my GUI froze while data was being received.
It seems that the author of the following thread ran into the same problem as me.
Qt: GUI sometimes freezing when using threads and signals/slots
But I have a question about that thread.
The author mentioned "When running the code in the GUI-thread, there is no problem."
Did the author mean that everything is OK when all the code is in the GUI thread? If so, why did he use a worker thread and a process thread?
The author also mentioned a "process thread".
Do I need another processing thread to work alongside the GUI thread that is responsible for display? (I need to display not only the data but also the waveform, all in real time.)
Please give me some tips on how to overcome this problem. Thanks a lot.

The short answer is yes. Doing any heavy processing on the GUI thread will freeze the GUI (especially if you block). Instead, you should either have an independent thread that updates the data, or spin off worker threads for specific tasks. In either case, signal the GUI thread whenever there is new data to display. If possible, I'd recommend using the MVC pattern and implementing a QAbstractItemModel to provide data to your view (as it has a defined mechanism for delivering those updates).
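For the threading option, here is a minimal sketch of the worker-object approach. The class name, the port name "COM3", and the onDataReady slot are illustrative, not taken from your code: the port lives in a worker thread, and raw bytes are handed to the GUI thread through a queued signal.

    // serialworker.h -- illustrative sketch, names are made up
    #include <QObject>
    #include <QThread>
    #include <QByteArray>
    #include "qextserialport.h"

    class SerialWorker : public QObject
    {
        Q_OBJECT
    public:
        explicit SerialWorker(const QString &portName, QObject *parent = 0)
            : QObject(parent), m_portName(portName), m_port(0) {}

    public slots:
        void start()
        {
            // Created here so the port (and its readyRead() signal) lives in
            // the worker thread, not in the GUI thread.
            m_port = new QextSerialPort(m_portName, QextSerialPort::EventDriven);
            m_port->setParent(this);
            m_port->open(QIODevice::ReadWrite);
            connect(m_port, SIGNAL(readyRead()), this, SLOT(onReadyRead()));
        }
        void writeCommand(const QByteArray &cmd)    // invoke via a queued connection
        {
            if (m_port) m_port->write(cmd);
        }

    private slots:
        void onReadyRead()
        {
            emit dataReady(m_port->readAll());      // queued signal into the GUI thread
        }

    signals:
        void dataReady(const QByteArray &data);

    private:
        QString m_portName;
        QextSerialPort *m_port;
    };

    // In the GUI class (e.g. a MainWindow constructor):
    QThread *thread = new QThread(this);
    SerialWorker *worker = new SerialWorker("COM3");
    worker->moveToThread(thread);
    connect(thread, SIGNAL(started()), worker, SLOT(start()));
    connect(worker, SIGNAL(dataReady(QByteArray)),
            this, SLOT(onDataReady(QByteArray)));   // onDataReady updates the display
    thread->start();

With this in place, sending a command (for example to change the sampling rate) is just a queued call into writeCommand(), so neither reading nor writing ever blocks the GUI thread.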

Related

How do I have code on one partition execute code on another partition (or SPIFFS) on an ESP32?

I'm working on an ESP32 based project that will involve OTA flashes of the ESP.
I know that you can set up different partitions on an ESP32. I'm wondering if it's possible for my code on one partition to then execute code on another partition? (For my usage, I don't need data passed back and forth - just execution handed off from one partition to the other and then given back after execution.)
Yes, assuming you don't have Flash encryption with different keys or other tricky copy protection mechanisms enabled. Partitions on Flash are a completely abstract concept for the developers' peace of mind. The CPU doesn't know and doesn't care - it will execute code from anywhere in Flash if told to jump to it.
The challenge here is that you need to convince the linker to find and call a method from an external, hard-coded address. And you need to make sure said method is actually there :) Since this is not a standard solution, you likely won't find a tutorial but have to know your linker really well :)
There's some useful stuff in the ESP IDF linker documentation and probably the GNU linker documentation, if I could find it :)
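As a very rough illustration of the idea only: the address below is made up; in reality it has to come from your own linker map, and you have to ensure that region of flash is actually mapped into the instruction bus.

    // Purely hypothetical sketch: jump to a function that you arranged (via the
    // linker) to sit at a fixed, memory-mapped address inside the other partition.
    typedef int (*other_entry_t)(int);

    int call_other_partition(int arg)
    {
        // 0x400D8000 is an invented example address, not a real one.
        other_entry_t entry = reinterpret_cast<other_entry_t>(0x400D8000);
        return entry(arg);   // execution hands off, and returns when that code returns
    }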
You will need some FreeRTOS knowledge.
I will not write any full code here, but I can give you some ideas on how to achieve this; a rough sketch follows below.
Assume you have two tasks and a digital pin, which is pulled HIGH by the setup() function, and you define a task handle for each task. Now, when you want to switch from the currently running program to the other one, check whether the digital pin has been LOW for 5 seconds; if so, call vTaskResume() with the handle of your OTA task, then call vTaskSuspend() with the handle of your current task. (Be sure to resume the OTA task before calling vTaskSuspend().) When you have completed the update, restart the ESP32 by calling ESP.restart().
If you want to abort the OTA task, pull the same digital pin LOW again and use the same method in your OTA task to switch back to your primary task.
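A rough Arduino-ESP32 sketch of that idea (the pin number, stack sizes and task bodies are illustrative, and the 5-second check is deliberately crude):

    #include <Arduino.h>

    const int SWITCH_PIN = 4;                     // hypothetical pin, idles HIGH
    TaskHandle_t primaryTask = NULL;
    TaskHandle_t otaTask     = NULL;

    // Returns true if the pin stays LOW for ~5 seconds (crude check).
    bool heldLowFor5s()
    {
        if (digitalRead(SWITCH_PIN) != LOW) return false;
        delay(5000);
        return digitalRead(SWITCH_PIN) == LOW;
    }

    void primaryLoop(void *arg)
    {
        for (;;) {
            // ... normal application work ...
            if (heldLowFor5s()) {
                vTaskResume(otaTask);             // resume the OTA task first
                vTaskSuspend(NULL);               // then suspend ourselves
            }
            delay(10);
        }
    }

    void otaLoop(void *arg)
    {
        vTaskSuspend(NULL);                       // start suspended until resumed by the primary task
        for (;;) {
            // ... perform the OTA update, then ESP.restart() when it completes ...
            if (heldLowFor5s()) {                 // abort: hand control back
                vTaskResume(primaryTask);
                vTaskSuspend(NULL);
            }
            delay(10);
        }
    }

    void setup()
    {
        pinMode(SWITCH_PIN, INPUT_PULLUP);        // pin is pulled HIGH here
        xTaskCreate(primaryLoop, "primary", 4096, NULL, 1, &primaryTask);
        xTaskCreate(otaLoop,     "ota",     4096, NULL, 1, &otaTask);
    }

    void loop() {}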

QSharedMemory for an externally running process?

I have a QApplication that calls an external executable. This executable keeps running indefinitely, passing data to the QApplication through stdout, unless it is manually exited by the user running it from the console. The process does not wait for stdin while it is running (it's a simple C++ program running a while loop).
I want to be able to modify this executable's behaviour at runtime by sending some form of signal from the QApplication to the external process. I read about Qt's IPC and I think QSharedMemory is the easiest way to achieve this. I cannot use any kind of pipe, etc., since the process is not waiting on stdin.
Is it possible for a QSharedMemory segment to be shared by the QApplication as well as an externally running process that is not a Qt application? If yes, are there any examples someone can point me to? I tried to find some but couldn't. If not, what other options might work in my specific scenario?
Thanks in advance
The idea that you have to wait for any sort of I/O is mostly antiquated. You should design your code so that it is informed by the operating system as soon as an I/O request is fulfilled (new input data available, output data sent, etc.).
You should simply use standard input for your purposes. The process doesn't have to wait for standard input; it can check whether any input is available, and read it if so. You'd do it in the same place where you'd poll for changes to the shared memory segment.
For Unix systems, you could use QSocketNotifier to get notified when standard input is available.
On Windows, the simplest test is _kbhit, for other solutions see this answer. QWinEventNotifier also works with a console handle on Windows.
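For the polling variant on Unix, a minimal sketch (doWork and applyCommand are hypothetical handlers; on Windows you would swap the availability check for _kbhit as mentioned above):

    // Sketch (Unix): inside the existing while loop, check whether stdin has data
    // before reading, so the loop never blocks waiting for input.
    #include <sys/select.h>
    #include <unistd.h>
    #include <string>
    #include <iostream>

    bool stdinHasData()
    {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(STDIN_FILENO, &fds);
        timeval tv = { 0, 0 };                       // zero timeout: just poll
        return select(STDIN_FILENO + 1, &fds, NULL, NULL, &tv) > 0;
    }

    // In the external process's loop:
    // while (running) {
    //     doWork();
    //     if (stdinHasData()) {
    //         std::string cmd;
    //         std::getline(std::cin, cmd);          // assumes one command per line
    //         applyCommand(cmd);                    // hypothetical handler
    //     }
    // }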

Reattaching to an orphan process in Qt

We're preparing an application using Qt that has a main process that controls the GUI and spawns processes that do the actual data processing. Messages are exchanged between the main process and the data-processing processes using the Qt mechanisms and the stdin/stdout pipes.
Now, in the event that the GUI crashes, the other processes keep running. What we'd like to be able to do is to, when a new GUI starts, reconnect to these processes as before. Anyone know if this is possible, and if so, how to achieve it?
This is possible if you are using a named pipe for communicating with the process. stdin/out are closed if the process they belong to is terminated.
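A rough sketch of that idea with QLocalServer/QLocalSocket (the pipe name and slot names are invented): the worker keeps listening, so a freshly started GUI can reconnect at any time.

    // In the worker process (inside a QObject subclass; names are illustrative):
    #include <QLocalServer>
    #include <QLocalSocket>

    QLocalServer::removeServer("myapp_worker_7");          // clear any stale socket
    QLocalServer *server = new QLocalServer(this);
    server->listen("myapp_worker_7");                      // invented pipe name
    connect(server, SIGNAL(newConnection()), this, SLOT(onGuiConnected()));

    // In the (possibly restarted) GUI process:
    QLocalSocket *socket = new QLocalSocket(this);
    socket->connectToServer("myapp_worker_7");
    connect(socket, SIGNAL(readyRead()), this, SLOT(onWorkerData()));
    connect(socket, SIGNAL(error(QLocalSocket::LocalSocketError)),
            this, SLOT(onWorkerGone(QLocalSocket::LocalSocketError)));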
You might want to investigate shared memory for the communication between the processes. I seem to recall that it made recovery possible in a very similar situation at a previous job.
Another possibility, if your platform supports it, is to use D-Bus for the communication between the processes. In that case, neither process has to be there at all times, but each will get the appropriate messages while it is running.

LabVIEW blocking Qt signals?

I have a LabVIEW 8.6 program that is using a DLL written in Qt; the DLL listens to a TCP port for incoming messages and updates some internal data. My LabVIEW program calls into the DLL occasionally to read the internal data. The DLL works perfectly (i.e., receives data from the TCP port) with another Qt program. However, it does not work at all with my LabVIEW program.
I've attached a debugger to the DLL and can see calls from LabVIEW going into it -- my function for getting the internal data is being called and I can step through it. The code that gets the data from the TCP port is never called, though; it looks like the signal for incoming data on the TCP port is never triggered.
I know this sounds like a Qt issue but the DLL works perfectly with another Qt program. Unfortunately, it fails miserably with LabVIEW.
One theory:
The event loop is not running when LabVIEW calls the DLL
In the Qt DLL's run() function, I call socket->waitForDisconnected(). Perhaps the DLL is not processing incoming events because the event loop is not running? If I call exec() to start the event loop, LabVIEW crashes ("LabVIEW 8.6 Development System has encountered a problem and needs to close."):
AppName: labview.exe AppVer: 8.6.0.4001 ModName: qtcored4.dll
ModVer: 4.5.1.0 Offset: 001af21a
Perhaps when I call the DLL from another Qt program, that program's event loop allows the TCP signal to be seen by the DLL. Unfortunately, kicking off the event loop in the DLL takes down LabVIEW.
Any thoughts on how to keep signals running in the DLL when LabVIEW is the calling program?
EDIT Debug trace of the exec() call:
QThread::exec() -> eventLoop.exec() -> if (qApp->thread() == thread())
in the call to
QObject::thread() {
return d_func()->threadData->thread;
}
The macro Q_DECLARE_PRIVATE(QObject), the second call, triggers the crash.
EDIT 17 Aug 2009: Status update
After two days of trying various ways to get this to work I decided to implement a TCP listener directly in LabVIEW. My LabVIEW application sends data out via the DLL and receives data in via TCP. All is working well.
This question was cross-posted on http://forums.ni.com/ni/board/message?board.id=170&thread.id=431779
You should change the library call to 'run in any thread'; that way the UI thread can still run the event loop.
Can you debug through exec() to see where it crashes LabVIEW?
You can also turn debugging up to the maximum in LabVIEW on the configuration page for the Call Library Node.
LabVIEW is finicky with DLLs. It may be easier to run the DLL as a service (write a service that runs the event loop), and then have LabVIEW call a DLL that retrieves the data from the service.
Old NI help note
Just a shot in the dark...could the Qt data be clobbering some of the LV memory space immediately after the exec loop is started?
You probably don't have a QApplication object created when you are trying to call exec() in the QThread. This might be causing your crash. For the main problem, however, I would say that it is very likely you aren't getting any activity in the DLL due to the event loop not executing.
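One workaround that is often suggested (hedged: hosting the QCoreApplication in a DLL-owned thread is not something Qt officially guarantees, and all names below are illustrative) is to give the DLL its own Qt event loop in a thread it owns:

    // Sketch: run a QCoreApplication and its event loop inside the DLL, so the
    // TCP socket's signals get delivered even though LabVIEW never runs one.
    #include <QCoreApplication>
    #include <windows.h>
    #include <process.h>

    static unsigned __stdcall qtEventLoopThread(void *)
    {
        int argc = 1;
        char appName[] = "qtbackend";
        char *argv[] = { appName, 0 };
        QCoreApplication app(argc, argv);
        // Create the QTcpServer / QTcpSocket listener *here*, so it lives in
        // this thread and its signals are processed by app.exec().
        return static_cast<unsigned>(app.exec());
    }

    extern "C" __declspec(dllexport) void StartQtBackend()
    {
        static uintptr_t handle = 0;
        if (!handle)
            handle = _beginthreadex(0, 0, qtEventLoopThread, 0, 0, 0);
    }

LabVIEW's calls into the DLL then only copy out the latest data (protected by a mutex) and never have to pump Qt events themselves.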

Serial Comms dies in WinXP

A bit of history: We have an application, which was originally written many years ago (1998 is the first date in PVCS but the app is about 5 years older than that as it originally was a DOS program). This application communicates with a piece of hardware via serial. When we got to Windows XP we started receiving reports of the app dying after a short time of running. It seems that the serial comms just 'died' and the app was left in a stuck state. The only way to recover from this situation was to restart the application.
The only information I can find regarding this problem says that, apparently, the Windows message system would miss the notification that data had been received, the buffer would fill, and the system would get stuck. This snippet of information was left in an old Word document, but there's no evidence to back it up. It also mentions that this is only prevalent at high baud rates (115200+).
The solution was to provide customers with USB->Serial converters along with the hardware.
Today: We are working on a new version of the hardware that will run across a network as well as serial ports. So, to let me work on the network code without the actual hardware, we are using a VSCOM NetCom113 device. It also installs a virtual COM port on the user's (i.e. my) machine.
Now that I have the network code integrated with the app, it appears that the NetCom device exhibits the same behaviour as a physical COM port. This is undesirable, as I need the app to run for longer than ~30 seconds.
Google turns up nothing on the problem we're experiencing.
I was wondering:
Has anyone experienced this before? If so what did you do to fix/workaround the problem?
Does anyone have any suggestions as to whether the original author of the document is correct and what I can do to test the theory?
Unfortunately I can't post code, as the serial code is tightly coupled with the rest of the system, though I can answer questions about it.
Updates:
The code is written using the Win32 comm routines - so I am using CreateFile and ReadFile. There are also judicious calls to GetOverlappedResult.
It's not hanging per se; it's just that the comms stop. You can access the menus and click the buttons, but nothing can interact with the connected hardware. Using RealTerm you can see that no data is coming in or going out.
I think the reference to the Windows message is that the problem is internal to Windows: data has arrived, but the kernel has missed it and thus not told the rest of the system about it.
Flow control is not used.
Writing a 'simple' test is difficult due to the fact that the code is tightly coupled and the underlying protocol is quite complex, so it would require a lot of work.
Are you using DOS-style serial code, or the Win32 CreateFile approach?
If the former, be very suspicious: if at all possible I'd convert to the latter.
If the latter, do you know on what kind of system call it's hanging? Are you in a blocking read call? or an overlapped I/O call? or waiting on an event? (I'm not sure I have enough experience to help, but those are the kinds of questions that come to mind)
You might also check into the queue size, which you can set with the SetupComm function.
I don't buy the "Windows Message system" stuff -- it sounds fishy; you can write good Win32 serial I/O code that never uses Windows messages.
edit: does your overlapped I/O use events? I seem to remember something about auto-reset events occasionally missing their trigger... check your overlapped I/O calls very carefully to see whether you're handling the possible outcomes properly. Perhaps there's a way to make your code more robust by automatically cancelling the overlapped I/O and restarting another read. (I assume the problem is in the read half, not the write half?)
edit 2: A suggestion: assuming the win32 side has missed a byte or packet, and your devices are in deadlock because they're both expecting each other to respond to something, can you tweak the other side of the serial I/O to regularly send some type of "ping" packet with an incrementing counter? (and log the ping packets on the PC side; that way you can see whether you've missed any)
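To make the "cancel and restart the read" idea from the first edit concrete, a hedged sketch (the port name, buffer size and timeout are placeholders, and error handling is minimal):

    // Sketch: overlapped ReadFile with a manual-reset event, a timeout, and a
    // cancel/restart path so a missed completion can't hang the reader forever.
    #include <windows.h>

    bool readWithTimeout(HANDLE h, char *buf, DWORD len, DWORD *got, DWORD timeoutMs)
    {
        OVERLAPPED ov = { 0 };
        ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);    // manual-reset, unsignalled
        *got = 0;

        BOOL ok = ReadFile(h, buf, len, NULL, &ov);
        if (!ok && GetLastError() != ERROR_IO_PENDING) {     // hard failure
            CloseHandle(ov.hEvent);
            return false;
        }
        if (WaitForSingleObject(ov.hEvent, timeoutMs) == WAIT_OBJECT_0) {
            ok = GetOverlappedResult(h, &ov, got, FALSE);
        } else {
            CancelIo(h);                                      // nothing arrived in time
            GetOverlappedResult(h, &ov, got, TRUE);           // wait for the cancel to settle
            ok = FALSE;                                       // caller re-issues the read
        }
        CloseHandle(ov.hEvent);
        return ok != FALSE;
    }

    // Usage (names are placeholders):
    // HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE, 0, NULL,
    //                        OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    // char buf[256]; DWORD got = 0;
    // if (!readWithTimeout(h, buf, sizeof buf, &got, 2000)) { /* log and retry */ }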
Are you sure you have your flow control set up correctly? DTR, RTS, etc...
-Adam
I have written apps that use USB / Bluetooth serial ports and have never had an issue. With Bluetooth I have seen sustained bit rates of 800,000 bps for long periods of time. Most people don't properly implement the port.
My serial port
Not sure if this is a possibility for you, but if you could rewrite the code using C#/.NET you'd have access to the SerialPort class there. It might remedy your problem. I know a lot of legacy code based around the Win32 API for hardware I/O ports tended to fail in XP due to timing (I had a small bit of experience with MIDI).
In addition, I don't know whether you can use the Win32 method of serial port access in Vista, so that might prevent future MS OSes from being able to run your code.
