One timer per thread using Qt

I modified Qt's broadcast sender example so that it has ten threads, and in each thread it starts a timer, but only the timer of the first thread is ever triggered. How can I have one timer running for each thread?

Timers only work if the thread has an event loop.
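A minimal sketch of the usual fix (illustrative, not the broadcast sender example itself): subclass QThread, create the QTimer inside run() so it lives in the new thread, and call exec() to spin that thread's own event loop.

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <QThread>
#include <QTimer>

class TimerThread : public QThread
{
protected:
    void run() override
    {
        // The timer is created in this thread, so it fires in this thread.
        QTimer timer;
        connect(&timer, &QTimer::timeout, [] {
            qDebug() << "tick in thread" << QThread::currentThreadId();
        });
        timer.start(1000);
        exec(); // without this per-thread event loop, the timer never fires
    }
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    TimerThread threads[10];
    for (auto &t : threads)
        t.start();
    return app.exec();
}
```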

A couple of years later, in an OS course, I learned:
Timers are a per-process thing. When the OS kernel sends a timer trigger event, whichever thread is currently running gets the call and processes it.
So I couldn't have ten timers, one per thread, in a straightforward manner.

Related

Scheduler called asynchronously: RTOS and time measuring

To shorten follow-up latency, I am forcing the task scheduler to run from an ISR. In doing so I found that blocked tasks are woken up sooner than they should be. That is because the scheduler's tick counter was incremented not only by the tick timer but from the ISR as well.
Is there any other method to block a task for an exact amount of time, even though the scheduler is being called asynchronously, without relying on other hardware such as timers?
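For what it's worth, one way to avoid the double-counting described above (a hedged sketch with made-up hook names, not any particular RTOS API): let only the tick timer ISR advance the tick counter, while other ISRs merely request a reschedule without touching it.

```cpp
#include <atomic>

std::atomic<unsigned long> g_ticks{0};

// Hypothetical kernel hooks, stubbed out for illustration.
void wake_expired_tasks(unsigned long /*now*/) { /* unblock tasks whose delay elapsed */ }
void run_scheduler()                           { /* pick the next ready task */ }

// Called ONLY by the periodic tick timer: the single source of time.
void tick_timer_isr()
{
    unsigned long now = ++g_ticks;
    wake_expired_tasks(now);
    run_scheduler();
}

// Called by any other ISR that wants low follow-up latency:
// reschedule immediately, but do NOT advance the tick counter,
// so blocked tasks still sleep for the exact number of ticks.
void other_isr()
{
    run_scheduler();
}
```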

If a process is waiting on a semaphore, does the OS allocate it CPU?

I am working on a Windows system. I have a main thread from which I start a few threads. The new threads do the processing. Now my main thread waits on WaitForMultipleObjects(). So is my main thread also allocated CPU at regular intervals? Or, since it is waiting, do the other threads share the CPU?
(Slightly different) Duplicate: Does WaitForSingleObject give up a thread's time slice?
"[No] -- the thread is blocked until whatever it's waiting on becomes signaled. The thread won't be scheduled to run while it's blocked, so other threads get all the CPU time."

How does an async task interrupt the main thread (when scheduled from the main thread itself)?

I can't seem to find this specific implementation detail, or even a pointer to where in an OS book to find this.
Basically, main thread calls an async task (to be run later) on itself. So... when does it run?
Does it wait for the run loop to finish? Or does it just randomly interrupt the run-loop in the middle of any function?
I understand the registers will be the same (unless it's a separate thread), but I'm not clear on the instruction pointer, or what happens to the stack, if anything does happen.
Thank you
In C# the task is scheduled to run on the current SynchronizationContext. The context basically has a queue of tasks which it schedules to run on the threads it is associated with; in a GUI app there is only one such thread, so the task is scheduled to run there.
The GUI thread is not interrupted; it executes the task once it has finished all the other tasks preceding it in the queue.
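To make the queue-drain behavior concrete, here is a toy single-threaded run loop (a sketch in C++, not C#'s actual SynchronizationContext machinery): a task posted "on itself" is simply appended to the queue and runs only after the currently running task has returned.

```cpp
#include <deque>
#include <functional>
#include <iostream>

// Toy run loop: tasks posted to it are queued,
// never executed in the middle of another task.
std::deque<std::function<void()>> g_queue;

void post(std::function<void()> task)
{
    g_queue.push_back(std::move(task));
}

int main()
{
    post([] {
        std::cout << "task A starts\n";
        // "Async on itself": B is queued; it does NOT run here...
        post([] { std::cout << "task B runs after A returned\n"; });
        std::cout << "task A finishes\n"; // ...A always completes first
    });

    // The run loop drains the queue one task at a time.
    while (!g_queue.empty()) {
        auto task = std::move(g_queue.front());
        g_queue.pop_front();
        task(); // the "instruction pointer" is only ever here between tasks
    }
}
```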
The threads of a process all share the same address space, but not the same CPU registers. How thread scheduling is done depends on the programming language and the OS. Usually there are explicit scheduling points, such as returning from a system call, blocking while awaiting I/O completion, or between p-code instructions for interpreted languages. Some OS implementations also reschedule based on how long a thread has run, for time-based scheduling. Often languages include a function that explicitly offers the CPU to any other thread or process by transferring control to the scheduler component of the OS.
The act of switching from one thread or process to another is known as a context switch, and it is carefully tuned code because it is often performed thousands of times per second. This can make the code difficult to follow.
The best explanation of this I've ever seen is the classic The Design of the UNIX Operating System: http://www.amazon.com/The-Design-UNIX-Operating-System/dp/0132017997

How does Jesque work with a huge payload?

I read about Jesque at https://github.com/gresrun and I would like to understand how it performs under a huge payload. Is the only way of queuing a job to create an instance of the Job class and then use a Thread to start off the worker, or are there other approaches? I am a little skeptical about using java.lang.Thread objects the way it is done in the example at this link for batch jobs where the data payload is huge.
Actually, spawning threads without control is never a good idea.
I would suggest putting your workers in a BlockingQueue and then spawning a very limited number of threads (as many as you have CPUs, in order to reduce contention) to start off those workers. Once the work has finished, the thread picks up a new worker and starts the process again. Once there are no workers in the queue, the threads simply block on the queue, waiting for new workers.
You can have a look at the thread pool pattern; a rough sketch follows below.
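As an illustration of that pattern (sketched in C++ here; in Java you would typically reach for java.util.concurrent's BlockingQueue, or simply an ExecutorService): a fixed set of threads blocks on a shared queue and picks up work as it arrives.

```cpp
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(unsigned n = std::thread::hardware_concurrency()) {
        for (unsigned i = 0; i < n; ++i)
            threads_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_all();
        for (auto &t : threads_) t.join();
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lock(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                // Idle threads simply block here, waiting for new work.
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job(); // finished; loop around and pick up the next job
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> threads_;
    bool done_ = false;
};

int main()
{
    ThreadPool pool; // as many threads as CPUs
    for (int i = 0; i < 20; ++i)
        pool.submit([i] {
            std::cout << ("job " + std::to_string(i) + " done\n");
        });
} // the destructor drains the remaining jobs, then joins the threads
```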

APC execution context question

When an Asynchronous Procedure Call (APC) occurs, it is executed 'asynchronously' to the current context of the thread. Per this MSDN info: APC
Now my question is: what exactly does 'execute asynchronously to the context of the current thread' mean? Does it execute alongside whatever the thread is already executing, or is the thread interrupted to execute the APC first and then continue its work?
I ask because, to my knowledge, a processor cannot 'really' do two things at once. Unless I've completely misunderstood the 'asynchronous' concept here.
Can anyone offer an explanation or a link to an explanation?
A thread must be in an alertable state to run a user-mode APC.
When a user-mode APC is queued, the thread to which it is queued is not directed to call the APC function unless it is in an alertable state.
A thread enters an alertable state when it calls the SleepEx, SignalObjectAndWait, MsgWaitForMultipleObjectsEx, WaitForMultipleObjectsEx, or WaitForSingleObjectEx function. If the wait is satisfied before the APC is queued, the thread is no longer in an alertable wait state so the APC function will not be executed. However, the APC is still queued, so the APC function will be executed when the thread calls another alertable wait function.
'Execute asynchronously to the context of the current thread' means that the APC function will be executed when the thread calls an alertable wait function and switches to the alertable state.
I recommend you read
Windows via C/C++, Fifth Edition
Chapter 10 - Synchronous and Asynchronous Device I/O
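A small sketch of that behavior with QueueUserAPC (illustrative, error handling omitted): an APC queued while the worker is busy does not interrupt it; it runs only once the worker enters an alertable wait.

```cpp
#include <windows.h>
#include <stdio.h>

// Runs in the TARGET thread's context, but only once that
// thread enters an alertable wait.
VOID CALLBACK MyApc(ULONG_PTR param)
{
    printf("APC ran in thread %lu (param=%lu)\n",
           GetCurrentThreadId(), (unsigned long)param);
}

DWORD WINAPI Worker(LPVOID)
{
    printf("worker doing normal work...\n");
    Sleep(2000);             // NOT alertable: a queued APC just waits
    printf("worker entering alertable wait\n");
    SleepEx(INFINITE, TRUE); // alertable: the pending APC runs, then SleepEx returns
    printf("worker resumes after the APC\n");
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    Sleep(500); // let the worker start its non-alertable sleep
    QueueUserAPC(MyApc, h, 42);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
```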
This is a much more general question: how do you think a computer handles multitasking if it can't do many things at once? It's true that at any given instant it might only be doing one thing, but each task (be it running a web browser or executing your APC) is time-sliced and executed concurrently on the processor. The tasks appear to be executing at the same time, although they're actually interleaved on the processor.
Of course, if you have multiple cores, as most machines do now, they genuinely can execute many things at once.
