How can I define a barrier point in a QThread::run() method for synchronization?
My run() method consists of two stages, and all of the threads must reach the end of the first stage before any of them may enter the second stage.
void ThreadClass::run()
{
    // first stage ...
    // barrier: all of the threads must reach this point before passing below this line
    // second stage ...
}
Off the top of my head:
1: Create a mutex and lock it in your main() before creating the thread pool. Then create the thread pool and let the threads run; each thread's barrier should read like this:
perThreadReachedThisPointFlag = 1;
mutexCreatedByMain.lock();
mutexCreatedByMain.unlock();
In your main(), monitor the thread pool. Once you observe (do not forget the memory fences) that all the threads in the pool have set their perThreadReachedThisPointFlag, execute mutexCreatedByMain.unlock(); in your main().
All the threads were waiting to lock that mutex; unlocking it lets them go. Each of them will lock and then immediately unlock the mutex and move on to the second stage (see the atomics sketch after option 2 below).
2: Another way would be using the condition-variable functionality of pthreads (pthread_cond_wait/pthread_cond_signal), but I do not know a replacement for Windows.
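For option 1, a minimal sketch using Qt's atomic types, so the memory fences are handled for you (the names reachedCount and threadCount are hypothetical, not from the question):

// Shared state, e.g. members of the controller:
QAtomicInt reachedCount(0);          // replaces the per-thread flags
QMutex mutexCreatedByMain;           // locked by main() before the pool starts

// In each thread, at the barrier point:
reachedCount.fetchAndAddOrdered(1);  // "ordered" implies a full memory fence
mutexCreatedByMain.lock();
mutexCreatedByMain.unlock();

// In main(), monitoring the pool:
while (reachedCount.loadAcquire() < threadCount)
    QThread::msleep(1);
mutexCreatedByMain.unlock();         // releases all waiting threads

For option 2, note that Qt itself ships a portable condition variable, QWaitCondition, which also works on Windows. A minimal barrier sketch built on it (the Barrier class is my own illustration, not a Qt API):

#include <QMutex>
#include <QWaitCondition>

// One-shot barrier for a fixed number of threads.
class Barrier
{
public:
    explicit Barrier(int count) : m_remaining(count) {}

    void wait()
    {
        QMutexLocker locker(&m_mutex);
        if (--m_remaining == 0)
            m_condition.wakeAll();      // last thread releases everyone
        else
            while (m_remaining > 0)     // loop guards against spurious wakeups
                m_condition.wait(&m_mutex);
    }

private:
    QMutex m_mutex;
    QWaitCondition m_condition;
    int m_remaining;
};

Each thread then simply calls barrier.wait() at the end of the first stage in run().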
For example:
void MainWidget::testThreadTask()
{
    qDebug() << "On test task";
}

void MainWidget::onBtnClick()
{
    QThread *thread = new QThread;
    connect(thread, QThread::started, this, testThreadTask);
    thread->start();
    qDebug() << "Thread START, now we wait 5s";
    QElapsedTimer timer;
    timer.start();
    while (timer.elapsed() < 5000)
    {
    }
    qDebug() << "END";
}
The program output is:
Thread START, now we wait 5s
END
On test task
I want to create a task to handle something after the button is pressed, and then have the function wait for the task to complete before returning.
Admittedly, it may not be necessary to create a new task and wait for it, because if I have to wait and get stuck there anyway, why not run it directly in the function?
But this actually becomes a problem when I deal with Qt serial data. I want to send data to the serial port after pressing the button and then wait for the reply (by reading constantly), but I find that while I am waiting the serial port cannot read any data at all; only when I exit the function can the serial port read the data.
Is there any way to handle serial data sending and receiving synchronously?
void MainWidget::onBtnClick()
{
    serial->write("Test");
    if (serial->bytesAvailable())
    {
        QByteArray data = serial->readAll();
        // handle the data
    }
}
You are mistaken about what is happening in your application. I suggest you read Threads and QObjects (the entire page), Qt::ConnectionType and the detailed description of QThread.
What is happening to you is:
MainWidget does not live in thread. For the slot of a regular object to be called from thread, the object first needs to be moved to that thread. Note that subclasses of QWidget cannot be moved to another thread: because some OSes supported by Qt restrict which thread windows can live in, Qt forces all QWidgets to stay in the main thread, on every OS Qt runs on.
When you connect thread to this (which, by the way, is incorrect in your question; it should have been with ampersands: connect(thread, &QThread::started, this, &MainWidget::testThreadTask);), you create a queued connection, even though the thread has not technically started yet.
When you start the thread:
It fires its started signal.
Because the connection is a Qt::QueuedConnection, the slot will only be executed after returning to the main thread's event loop, i.e. some time after returning from onBtnClick.
Notes:
You would get more useful information in qDebug() about the threads running your code by using QThread::currentThread(). Even better, your IDE should provide a window specifically for seeing which thread has reached a breakpoint (Ctrl+Alt+H in Visual Studio).
At the risk of insisting, keep in mind this warning from the Qt help:
Be aware that using direct connections when the sender and receiver live in different threads is unsafe if an event loop is running in the receiver's thread, for the same reason that calling any function on an object living in another thread is unsafe.
With that said, because you wait 5 seconds before returning to the event loop and because it is only test code (= there should be no bug + it does not matter even if there is one), you should try to create a Qt::DirectConnection, just to see the slot be invoked from the worker thread.
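For example, a quick sketch of that experiment (same names as in the question; the extra qDebug() output is my addition):

// Force the slot to run in the emitting (worker) thread:
connect(thread, &QThread::started, this, &MainWidget::testThreadTask,
        Qt::DirectConnection);

void MainWidget::testThreadTask()
{
    // With Qt::DirectConnection this prints the worker thread's address,
    // not the main thread's.
    qDebug() << "On test task, in thread" << QThread::currentThread();
}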
The detailed description of QThread (link above) shows a complete working example of a worker object being moved to the new thread before it is started. The point is:
A worker object is created, then moved to the worker thread.
Connections are created for the controller to send QStrings to the worker object via signal/slot, and for the worker object to return results to the controller, via signal/slot too.
All these connections are Qt::QueuedConnection by default since the worker object was moved.
The worker thread is started. Since run was not overridden, it starts an event loop (in exec).
And there you have it.
Remember one thing: widgets cannot be moved! Create your own worker object.
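To make that concrete, here is a minimal sketch of the worker-object pattern applied to the serial case from the question (SerialWorker, its members, and the port name are hypothetical; the pattern itself is the one from the QThread documentation):

#include <QObject>
#include <QThread>
#include <QSerialPort>

// Lives in the worker thread; owns the serial port.
class SerialWorker : public QObject
{
    Q_OBJECT
public slots:
    void start()
    {
        // Created here, so the port gets the worker thread's affinity.
        m_serial = new QSerialPort("COM1", this);
        m_serial->open(QIODevice::ReadWrite);
        connect(m_serial, &QSerialPort::readyRead,
                this, &SerialWorker::onReadyRead);
    }
    void send(const QByteArray &data) { m_serial->write(data); }
signals:
    void dataReceived(const QByteArray &data);
private slots:
    void onReadyRead() { emit dataReceived(m_serial->readAll()); }
private:
    QSerialPort *m_serial = nullptr;
};

// In MainWidget, set up once; all later communication is queued signal/slot:
// auto *thread = new QThread(this);
// auto *worker = new SerialWorker;   // no parent, so it can be moved
// worker->moveToThread(thread);
// connect(thread, &QThread::started, worker, &SerialWorker::start);
// connect(worker, &SerialWorker::dataReceived, this, &MainWidget::handleData);
// thread->start();

The button handler then just invokes send() through a queued connection and returns immediately; dataReceived arrives via the event loop instead of blocking in onBtnClick.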
I have a Quarkus application where I use the event bus.
The code in question looks like this:
@ConsumeEvent(value = "execution-request", blocking = true)
@Transactional
@TransactionConfiguration(timeout = 3600)
public void consume(final Message<ExecutionRequest> msg) {
    try {
        execute(...);
    } catch (final Exception e) {
        // some logging
    }
}

private void execute(...)
        throws InterruptedException {
    // it actually runs a long-running task, but for
    // this example this has the same effect
    Thread.sleep(65000);
}
Why do I still get a
WARN [io.ver.cor.imp.BlockedThreadChecker] (vertx-blocked-thread-checker) Thread Thread[vert.x-worker-thread-0,5,main] has been blocked for 63066 ms, time limit is 60000 ms: io.vertx.core.VertxException: Thread blocked
Am I doing something wrong? Is the blocking parameter of the @ConsumeEvent annotation not enough to have that handled on a separate worker?
Your annotation is working as designed; the method is running in a worker thread. You can tell by both the name of the thread, "vert.x-worker-thread-0", and by the 60-second timeout before the warnings were logged. The event-loop thread only has a 3-second timeout, I believe.
The default Vert.x worker thread pool is not designed for "very" long-running blocking code, as stated in their docs:
Warning:
Blocking code should block for a reasonable amount of time (i.e no more than a few seconds). Long blocking operations or polling operations (i.e a thread that spin in a loop polling events in a blocking fashion) are precluded. When the blocking operation lasts more than the 10 seconds, a message will be printed on the console by the blocked thread checker. Long blocking operations should use a dedicated thread managed by the application, which can interact with verticles using the event-bus or runOnContext
That message says blocking for more than 10 seconds triggers a warning, but I think that's a typo; the default is actually 60.
To avoid the warning, you'll need to create a dedicated WorkerExecutor (via vertx.createSharedWorkerExecutor) configured with a very high maxExecuteTime. However, it does not appear you can tell the @ConsumeEvent annotation to use it instead of the default worker pool, so you'd need to manually create an event bus consumer as well, or use a regular @ConsumeEvent annotation but call workerExecutor.executeBlocking inside of it.
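A rough sketch of that second option, assuming a Quarkus bean with the Vert.x instance injected (the pool name, size and timeout below are arbitrary illustrations):

import java.util.concurrent.TimeUnit;
import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import io.quarkus.vertx.ConsumeEvent;
import io.vertx.core.Vertx;
import io.vertx.core.WorkerExecutor;
import io.vertx.core.eventbus.Message;

@ApplicationScoped
public class ExecutionRequestConsumer {

    @Inject
    Vertx vertx;

    private WorkerExecutor executor;

    @PostConstruct
    void init() {
        // Dedicated pool whose blocked-thread warning only fires after 2 hours.
        executor = vertx.createSharedWorkerExecutor(
                "execution-request-pool", 4, 2, TimeUnit.HOURS);
    }

    @ConsumeEvent("execution-request") // no blocking = true: we off-load ourselves
    public void consume(final Message<ExecutionRequest> msg) {
        executor.executeBlocking(promise -> {
            execute(msg.body());   // the long-running work
            promise.complete();
        }, res -> { /* optional: log success or failure */ });
    }

    private void execute(ExecutionRequest request) { /* ... */ }
}

Note that the @Transactional boundary from the question would need rethinking here, since consume() now returns before the work has run.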
I stumbled upon a deadlock condition when using Tokio:
use tokio::time::{delay_for, Duration};
use std::sync::Mutex;

#[tokio::main]
async fn main() {
    let mtx = Mutex::new(0);
    tokio::join!(work(&mtx), work(&mtx));
    println!("{}", *mtx.lock().unwrap());
}

async fn work(mtx: &Mutex<i32>) {
    println!("lock");
    {
        let mut v = mtx.lock().unwrap();
        println!("locked");
        // slow redis network request
        delay_for(Duration::from_millis(100)).await;
        *v += 1;
    }
    println!("unlock")
}
Produces the following output, then hangs forever.
lock
locked
lock
According to the Tokio docs, using std::sync::Mutex is ok:
Contrary to popular belief, it is ok and often preferred to use the ordinary Mutex from the standard library in asynchronous code.
However, replacing the Mutex with a tokio::sync::Mutex does not trigger the deadlock, and everything works "as intended", but only in the example case listed above; in a real-world scenario, where the delay is caused by some Redis request, it will still fail.
I think it might be because I am not actually spawning threads at all, and therefore, even though the futures execute "in parallel", I lock on the same thread, since await just yields execution.
What is the Rustacean way to achieve what I want without spawning a separate thread?
The reason why it is not OK to use a std::sync::Mutex here is that you hold it across an .await point. In this case:
Task 1 holds the Mutex but got suspended on delay_for.
Task 2 gets scheduled and runs, but cannot obtain the Mutex since it is still owned by task 1. It will block synchronously on obtaining the Mutex.
Since task 2 is blocked, this also means the runtime thread is fully blocked. It cannot actually go into its timer-handling state (which happens when the runtime is idle and not handling user tasks), and thereby cannot resume task 1.
Therefore you are now observing a deadlock.
==> If you need to hold a Mutex across an .await point, you have to use an async Mutex. Synchronous mutexes are OK to use in async programs, as the Tokio documentation describes, but they may not be held across .await points.
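A minimal fix, assuming the same Tokio 0.2 API as in the question: swap in tokio::sync::Mutex, whose lock() is itself a future, so waiting for it suspends the task instead of blocking the runtime thread.

use tokio::sync::Mutex; // async-aware Mutex instead of std::sync::Mutex
use tokio::time::{delay_for, Duration};

#[tokio::main]
async fn main() {
    let mtx = Mutex::new(0);
    tokio::join!(work(&mtx), work(&mtx));
    println!("{}", *mtx.lock().await); // prints 2
}

async fn work(mtx: &Mutex<i32>) {
    // lock().await yields to the runtime while the other task holds the lock,
    // so the timer can still fire and resume task 1
    let mut v = mtx.lock().await;
    delay_for(Duration::from_millis(100)).await;
    *v += 1;
}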
Brief of environment:
I have a device which runs an application written in Qt. It has a main thread which handles database operations (SQLite) and a separate thread for networking operations (via 3G).
The main thread's event loop is run by QCoreApplication::exec, and the thread which handles networking operations is run by QThread::exec. By the way, the socket's thread affinity is changed after the thread is started (e.g. moveToThread(socketThreadPtr)).
Brief of problem:
The main thread is busy in a loop selecting around 10k records from the database, and that loop takes about 30+ seconds. In the network thread there is a 15-second timer which has to send a keep-alive message each time it expires. The problem is that the slot for the timeout() signal is executed only after the loop is finished.
Solution found so far (but not satisfying):
If I call QCoreApplication::processEvents in the loop that selects the records, the problem is solved, but I wonder if a proper solution exists instead of this workaround.
Remark:
The timer, signal and slot which give the command to send the keep-alive message are currently handled in the main thread (but the read/write happens in the network thread). I also tried moving the timer to the network thread, but I got the same result as with it on the main thread.
You have to create the timer in your network thread. Your QTimer is probably a member of the network object, so the network object is constructed in the main thread and the thread affinity of its children is set to the main thread. You then moved the network object to a new QThread, but what about the QTimer? It still lives in the main thread (except when you explicitly set the network object as its parent; in that case, as you said, moveToThread affects the object's children, so the QTimer would move as well).
The QTimer should be constructed in the network thread. You can construct a new QTimer in one of the network object's slots and connect that slot to the QThread::started signal. Then, when you call the QThread's start() method, your slot is executed on the new thread and the QTimer is constructed on that thread accordingly.
Something like this should actually create the socket and the timer in your dedicated thread, given that your main thread is the server and clients should be handled in threads. Otherwise, just use QTcpSocket::connectToHost() or QTcpServer::bind() in your initialize function.
Main thread:
auto t = new QThread();
t->start();

auto o = new MyThreadObject();
o->moveToThread(t);
o->setDescriptor(socketDesc);

QMetaObject::invokeMethod(o, "initialize", Qt::QueuedConnection);
MyThreadObject:
class MyThreadObject : public QObject
{
    Q_OBJECT
public:
    MyThreadObject() {...}

    void setDescriptor(qintptr socketdescriptor)
    {
        m_desc = socketdescriptor;
    }

public slots:
    void initialize()
    {
        m_tcpSocket = new QTcpSocket();
        m_tcpSocket->setSocketDescriptor(m_desc);
        // socket configuration

        m_timer = new QTimer();
        // timer configuration
        m_timer->start();
    }

private:
    QTcpSocket *m_tcpSocket;
    QTimer *m_timer;
    qintptr m_desc;
};
I get the message "QWaitCondition: Destroyed while threads are still waiting" after launching N threads in a loop and then waiting for each of them in another loop.
Here is the code:
int nb_threads = QThread::idealThreadCount();
QVector<QFuture<void>> futures(nb_threads); // a variable-length array is not standard C++
bool shared_boolean;

// launch threads (MyClass stands for the class this code is a member of)
for (int i = 0; i < nb_threads; ++i) {
    futures[i] = QtConcurrent::run(this, &MyClass::gpMainLoopMT,
                                   &shared_boolean, &next_pop_size, next_population);
}
// wait for threads to finish
for (int i = 0; i < nb_threads; ++i) {
    futures[i].waitForFinished();
}
I just can't figure out why this is happening, since I do wait for each thread.
Actually, I had the same warning when using Qt in a DLL. Windows kills all threads at application exit before the DLL's global objects are destroyed, and a global object destructor is where I was deleting the QApplication instance. This leads to an inconsistency: the QWaitConditions still think a thread is waiting, when in fact the native thread isn't running anymore, killed by Windows with no chance of proper cleanup. That's what leads to this warning.
It's unfixable, even in Qt. Windows doesn't give us any chance to perform any cleanup; the threads just disappear.
You're not waiting for the threads, you're waiting for the tasks.
The threads keep running until QApplication deletes the global QThreadPool instance. So the question is: are you leaking QApplication, or destroying it properly?
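For reference, a minimal sketch of the orderly pattern, with QApplication on the stack so it is destroyed when main() returns rather than in a global object's destructor:

#include <QApplication>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    // ... create widgets, start QtConcurrent tasks ...
    int ret = app.exec();
    // app is torn down in a controlled way when main() returns
    return ret;
}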