I have a question regarding how the init process in UNIX works. As I understand it, the init process is the first to start, and all other processes fork off of it.
Say we start the init process and then fork a child, calling exec in the child to run a new program which happens to block waiting for some I/O input. Now the parent init process could wait on the child, but if it did, there would be no other processes left to run. Conversely, if the init process does not wait and instead falls into a busy loop of some kind, then when the child is resumed, the parent is taking up processor time doing nothing.
What is the best way to manage this problem? Should the init process simply always run an infinite loop and we not worry about the wasted resources? Or is there a better way?
Any help would be much appreciated,
Ben
Process 1 must never exit; many (all?) implementations of Unix will force a system crash if it does.
However, process 1 doesn't need to do anything more than this (I'm assuming the kernel opens fds 0, 1, and 2 on the console before transferring control to user space - check your kernel's documentation for that and other details of the bootstrap environment, if you're actually going to write init yourself):
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    pid_t child = fork();
    if (child == -1) {
        perror("init: fork");
        return 1;
    }
    if (child == 0) {
        /* Child: become the system startup script. */
        execl("/etc/rc", "/etc/rc", (char *)0);
        perror("/etc/rc");
        return 1;
    }
    /* Parent (process 1): reap orphaned children forever. */
    for (;;)
        wait(0);
}
After starting /etc/rc it does go into an infinite loop, calling wait over and over again, and throwing away the results. But wait is a blocking system call. Each time it's called the kernel will take the CPU away from process 1 and give it to a process that has useful work to do; wait will only return when there is an exited child to report. (If there are no processes with useful work to do, the CPU will be put into a low-power "sleep" state until some external event, e.g. a human typing on the keyboard or a network packet arriving, gives a running process some work to do.)
With this minimal init, it is entirely /etc/rc's responsibility to start up all of the programs needed to make the computer do something useful, and those programs' responsibility to keep running as long as needed; if it should come to pass that every process other than this one exits, it'll just sleep in wait forever. More sophisticated implementations will do more, e.g. restarting network servers if they crash.
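For illustration, here is a rough sketch of what that restarting might look like; the spawn() helper and the /sbin/getty path are purely illustrative, not what any particular init actually runs:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical helper: fork and exec one program, returning its pid. */
static pid_t spawn(const char *path)
{
    pid_t pid = fork();
    if (pid == 0) {
        execl(path, path, (char *)0);
        _exit(127); /* exec failed */
    }
    return pid;
}

int main(void)
{
    pid_t getty = spawn("/sbin/getty"); /* illustrative service */
    for (;;) {
        pid_t dead = wait(0);  /* reap any exited child */
        if (dead == getty)     /* our service died: restart it */
            getty = spawn("/sbin/getty");
    }
}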
There is a solution for this: SIGCHLD. It is a signal delivered to the parent when a child changes its status (stops or exits). So the parent can go to sleep (with sigsuspend or pause, for example) and will be interrupted when a child terminates; the parent then runs a signal handler that calls one of the wait-family functions.
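Something like this minimal sketch (the handler name is illustrative; waitpid with WNOHANG keeps the handler from blocking):

#include <signal.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_sigchld(int signo)
{
    (void)signo;
    /* Reap every child that has exited; WNOHANG keeps this from blocking. */
    while (waitpid(-1, 0, WNOHANG) > 0)
        ;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGCHLD, &sa, 0);

    /* ... fork/exec children here ... */

    for (;;)
        pause(); /* sleep until any signal arrives */
}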
I would not worry about resources during init start-up. Your server is booting up and not yet being used for its intended purpose, so there is no performance demand on it during that time.
I have never seen a process written to take standard input during boot, although you could write one. Init scripts can be written with dependencies, depending on which distro you use and what the boot process is (Upstart, System V init, etc.). By default, though, they run synchronously, in an order that init determines. I am not sure how blocking that synchronous sequence while waiting for input would affect the system; most likely it would do just that: stop and wait for input before continuing.
The init process does indeed run an infinite loop but this doesn't use any significant resource as it is interrupt driven. It simply waits for processes to die or for other signals to be sent to it. During the wait intervals, zero CPU cycles are used by init.
Assume you have some functions that must be called at different points in time, but continuously (periodic tasks, e.g. every 250 ms, every 2 s, every 5 min).
Is it better to use 4-5 timers, each one dedicated to a task, or is it better to code everything around the smallest period and use a counter variable to run the other functions?
e.g.
// callback each 250 ms
void TASK_250ms(void)
{
    static uint32_t counter = 0;

    // ... do 250 ms stuff ...
    counter++;
    if (counter % 8 != 0) { // 250 ms * 8 = 2 s
        return;
    }
    // do 2 s stuff
    if (counter != 4800) {  // 250 ms * 4800 = 20 min
        return;
    }
    // do 20 min stuff
    counter = 0;
}
Assume also that you want to be bulletproof against situations like these:
before doing the 2 s stuff, you MUST be sure that the 8th 250 ms task has been computed;
before doing the 20 min stuff, you MUST be sure that the 4800th 250 ms task and the 600th 2 s task have been computed.
The question is related to best practice and performance.
Moreover, is it better to perform those calculations in the callback, or to use the callback to set flags and perform the calculations in the main loop?
I assume you are using an STM32, since you tagged STM32.
Unless your application is so time-critical that you need preemptive, asynchronous timer interrupts (for example, the 5 min task is so important that it must run even while a separate 250 ms callback is still executing), using multiple timer interrupts is just a waste of timers; you should use as few interrupts as possible, IMHO. A counting variable is not costly, so it is fine to use one.
The real consideration is the length of the tasks. ISRs should be as short as possible, so if the timer callback tasks are long, you should set flags and poll them in the main loop. Polling flags is preferable especially when you run multiple callbacks from a single timer ISR: imagine the moment when the 250 ms, 2 s, and 20 min callbacks all fall due in the same ISR invocation, and the ISR takes three times longer than usual.
By the way, if you decide to use a single timer, why not use SysTick? The SysTick timer is provided in every Cortex-M MCU and its operation is the same across MCU families. You can configure it as a 1 ms interrupt timer very easily. As long as you poll in the main loop, a 1 ms interrupt should be fine. There are many tutorials on SysTick.
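A rough sketch of that setup, assuming a Cortex-M part with CMSIS headers (SysTick_Config and SysTick_Handler are the CMSIS names; the flags and the work are placeholders). The ISR only sets flags; the main loop does the work:

#include <stdint.h>

volatile uint8_t flag_250ms, flag_2s, flag_20min; /* set in ISR, cleared in main */

void SysTick_Handler(void) /* fires every 1 ms */
{
    static uint32_t ms;
    ++ms;
    if (ms % 250 == 0)  flag_250ms = 1;
    if (ms % 2000 == 0) flag_2s = 1;
    if (ms >= 1200000) { /* 250 ms * 4800 = 20 min */
        flag_20min = 1;
        ms = 0;
    }
}

int main(void)
{
    /* SysTick_Config(SystemCoreClock / 1000); -- 1 ms tick (CMSIS) */
    for (;;) {
        if (flag_250ms) { flag_250ms = 0; /* 250 ms work */ }
        if (flag_2s)    { flag_2s = 0;    /* 2 s work */ }
        if (flag_20min) { flag_20min = 0; /* 20 min work */ }
    }
}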
The standard way to do this for tasks that aren't very time-critical is to implement a single timer which triggers once every millisecond.
That timer then goes through a list of registered "software timers" and checks whether it is time for them to be executed. If so, it calls a function pointer holding the timer-specific code; that is, a callback function invoked by the timer driver.
If these functions are kept minimal, for example just setting a flag, you can execute them from the main timer ISR.
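A sketch of such a software-timer list (all names here are illustrative):

#include <stdint.h>

typedef struct {
    uint32_t period_ms;      /* how often to run */
    uint32_t countdown;      /* ms left until the next run */
    void (*callback)(void);  /* the timer-specific code */
} SoftTimer;

static void task_250ms(void) { /* e.g. just set a flag */ }
static void task_2s(void)    { /* e.g. just set a flag */ }

static SoftTimer timers[] = {
    { 250,  250,  task_250ms },
    { 2000, 2000, task_2s },
};

/* Called from the 1 ms hardware timer ISR. */
void soft_timer_tick(void)
{
    for (unsigned i = 0; i < sizeof timers / sizeof timers[0]; ++i) {
        if (--timers[i].countdown == 0) {
            timers[i].countdown = timers[i].period_ms;
            timers[i].callback(); /* keep these minimal */
        }
    }
}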
You can make various arguments regarding power consumption and real-time requirements; it really depends on your application. But these questions can deliver insightful answers for beginners, and even for more experienced developers. The keyword here is scheduling.
The typical setup I prefer, bare metal real-time:
Main runs all low-priority and idle tasks. Main bases these timings on the SysTick timer, which ticks every 1 ms: if( (now - then) > delay ){ then = now; foo(); } (expanded in the sketch after this list).
These tasks can be interrupted by everything, except inside a critical zone (when touching data shared with an ISR).
Low-priority tasks are blinking LEDs and handling communications.
There are peripheral interrupts and timers that set IRQ pending bits to signal that real-time work is ready to be done. E.g.: read a UART or ADC register before it overruns.
The interrupt priorities and timers are set up so that the work is done in the correct order at the correct time. E.g.: when ADC samples are being processed and the hardware alarm IRQ arrives, the alarm is handled immediately.
This way I have the DMA signal that samples are ready to be processed, whilst a synchronized timer at a lower frequency sets the IRQ pending bit for the process loop. The process loop must run after the samples, and thus has lower priority in the NVIC.
Advantage: real-time performance is not impeded when the communication channel is overflowed with data.
Disadvantage: the CPU never sleeps for long.
The ISRs of the real-time tasks may not exceed their time window. This is where windowed watchdog timers are useful. Also, idle tasks will only run when there is time to spare; they might be late.
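A sketch of the main-loop timing pattern mentioned in the list above, assuming a systick_ms counter incremented in the SysTick ISR (the task functions are placeholders):

#include <stdint.h>

volatile uint32_t systick_ms; /* incremented in the SysTick ISR (not shown) */

static void blink_led(void)  { /* toggle a pin */ }
static void poll_comms(void) { /* service the UART, etc. */ }

int main(void)
{
    uint32_t then_blink = 0, then_comms = 0;
    for (;;) {
        uint32_t now = systick_ms;
        /* unsigned subtraction keeps working when the counter wraps */
        if ((now - then_blink) > 500) { then_blink = now; blink_led(); }
        if ((now - then_comms) > 100) { then_comms = now; poll_comms(); }
    }
}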
A similar option here is to use a real-time operating system, like ChibiOS.
However, in a battery-powered application you don't want the MCU to wake up every second. You want the MCU to wake up only when work has to be done. You can do this in two ways.
Multiple hardware timers signal the wake-up event.
This requires multiple timers to keep running and might still use too much energy.
Tickless operation. You use one timer; the chip wakes up and does the work when the deadline is reached, then reloads the timer compare with the time of the next deadline. If your intervals are far enough apart, you can use the RTC for this to get ultra-low power consumption. (Sketched below.)
Advantage: the chip is allowed to sleep for longer periods, depending on workload.
Disadvantage: the design is a bit more complicated to implement and debug.
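A sketch of the tickless idea; every identifier that touches hardware below is hypothetical pseudocode, not a vendor API:

#include <stdint.h>

/* Hypothetical hardware hooks. */
static uint32_t timer_now_ms(void)           { /* read counter */ return 0; }
static void timer_set_compare_ms(uint32_t t) { (void)t; /* program compare */ }
static void do_250ms_work(void) { }
static void do_2s_work(void)    { }

static uint32_t next_250ms = 250, next_2s = 2000; /* absolute deadlines, ms */

void TIMER_CompareISR(void) /* fires only when a deadline arrives */
{
    uint32_t now = timer_now_ms();
    if (now >= next_250ms) { do_250ms_work(); next_250ms += 250; }
    if (now >= next_2s)    { do_2s_work();    next_2s += 2000; }

    /* Reload the compare with whichever deadline comes first, then the
       core can go back to deep sleep until it fires. */
    timer_set_compare_ms(next_250ms < next_2s ? next_250ms : next_2s);
}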
A similar option here is to use a tickless operating system.
Assuming you're not using a real-time OS, I'd handle the time-critical stuff (if it only takes a few clock cycles) and the long timer counters in a timer interrupt, and do the non-time-critical stuff with longer periods in the main loop (with or without a watchdog timer/sleep).
The interrupts will interrupt the main-loop stuff, so you can be sure the time-critical stuff happens when it needs to, while the less time-critical stuff happens whenever it can.
You could use a state machine in the main loop to handle the logic, making sure everything is done in the right order: things are checked, values loaded, sensors read, etc. (see the sketch below).
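For instance, a minimal switch-based state machine (the states and the work are placeholders):

/* Hypothetical main-loop state machine. */
typedef enum { READ_SENSORS, PROCESS, TRANSMIT } State;

int main(void)
{
    State state = READ_SENSORS;
    for (;;) {
        switch (state) {
        case READ_SENSORS: /* sample inputs */  state = PROCESS;      break;
        case PROCESS:      /* crunch numbers */ state = TRANSMIT;     break;
        case TRANSMIT:     /* send results */   state = READ_SENSORS; break;
        }
    }
}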
There is no right answer here; best practice is to implement the design that meets the requirements, and since requirements vary from project to project, there is no single right answer. One common solution will fail to work right for a wide array of products, as would another common solution. You could force one solution, but that can add a lot of hacked-up band-aids, simply adding risk to the project, possibly leading to failures, recalls, or unnecessary field upgrades that make the product and the company look bad. Do your system engineering and most of the time the correct solution will simply present itself; don't do your system engineering and the failures will simply present themselves.
I have a small IDE for a modeling language I wrote, implemented in PyQt/PySide, and am trying to implement a code navigator that lets you jump to different sections of the file being edited.
The current implementation is: (1) connect to QPlainTextEdit.textChanged, (2) any time a change is made, (sloppily) parse the file and update the navigator pane.
It seems to work OK, but I'm worried this could cause major performance issues for large files on slower systems, in particular if more stuff is connected to textChanged in the future.
My question: Has anybody here implemented a delayed reaction to events, so that multiple events (i.e. keystrokes) within a short period only trigger a single update (say, once per second)? And is there a proper Qt way of doing this?
Thanks,
Michael
You can try using timers if you want some "delay".
There would be 2 ways to use them (with different results).
One is to parse only after no input has been received for a certain amount of time.
NOTE: I only know C++ Qt, but I assume the same things are valid for PyQt, so this is kind of "pseudocode"; I hope you get the concept though.
QTimer timer;              // e.g. a member of your editor class
timer.setSingleShot(true); // only fire once per start()
connect(&timer, &QTimer::timeout, this, &MyClass::onTimerDone);

void MyClass::onTextChanged()
{
    timer.start(500); // (re)start: wait 500 ms after the last change
}

void MyClass::onTimerDone()
{
    doStuff(); // parse the file, update the navigator
}
Every change restarts the timer, and restarting a running single-shot timer cancels the pending timeout, so the signal is not emitted while input keeps arriving. Only when no input has occurred for the full interval does the timer fire, and then you parse the file.
The second option would be to have a periodic timer running (setSingleShot(false)).
Just start the timer with an interval of, say, one second, and timeout will be emitted once a second. You can combine that with a flag that you set to true when the input changes and to false when the file is parsed, so you avoid parsing when nothing has changed.
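A sketch of this second option in C++ Qt (the PyQt calls are analogous; the parsing work is a placeholder):

#include <QApplication>
#include <QPlainTextEdit>
#include <QTimer>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QPlainTextEdit editor;

    bool dirty = false; // set on change, cleared on parse
    QObject::connect(&editor, &QPlainTextEdit::textChanged,
                     [&dirty] { dirty = true; });

    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, [&] {
        if (dirty) {
            dirty = false;
            // reparse the document and refresh the navigator here
        }
    });
    timer.start(1000); // check once per second

    editor.show();
    return app.exec();
}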
In C++ Qt you won't have to worry about multithreading here, because the slot gets called in the GUI thread. I assume it is the same for Python, but you should probably check this.
I have a Qt application that launches two threads from the main thread at start-up. Both these threads make network requests using distinct instances of the QNetworkAccessManager object. My program keeps crashing about 50% of the time, and I'm not sure which thread is crashing.
There is no data sharing or signalling occurring directly between the two threads. When a certain event occurs, one of the threads signals the main thread, which may in turn signal the second thread. However, by printing logs, I am pretty certain the crash doesn't occur during the signalling.
The structure of both threads is as follows. There's hardly any difference between the threads except for the URL etc.
MyThread::MyThread() : QThread() {
    moveToThread(this);
}

MyThread::~MyThread() {
    delete m_manager;
    delete m_request;
}

void MyThread::run() {
    m_manager = new QNetworkAccessManager();
    m_request = new QNetworkRequest(QUrl("..."));
    makeRequest();
    exec();
}

void MyThread::makeRequest() {
    m_reply = m_manager->get(*m_request);
    connect(m_reply, SIGNAL(finished()), this, SLOT(processReply()));
    // my log line
}

void MyThread::processReply() {
    if (!m_reply->error()) {
        QString data = QString(m_reply->readAll());
        emit signalToMainThread(data);
    }
    m_reply->deleteLater();
    exit(0);
}
Now the weird thing is that if I don't start one of the threads, the program runs fine, or at least doesn't crash in around 20 invocations. If both threads run one after the other, the program doesn't crash. The program only crashes about half the times if I start and run both the threads concurrently.
Another interesting thing I gathered from logs is that whenever the program crashes, the line labelled with the comment my log line is the last to be executed by both the threads. So I don't know which thread causes the crash. But it leads me to suspect that QNetworkAccessManager is somehow to blame.
I'm pretty blank about what's causing the crash. I will appreciate any suggestions or pointers. Thanks in advance.
First of all, you're doing it wrong! Fix your threading first: don't move a QThread onto itself (see the sketch at the end of this answer).
EDIT:
From my own experience with this pattern, I know that it can lead to many unclear crashes. I would start by clearing this up, as it may straighten some things out and make the problem easier to find. Also, I don't know how you invoke makeRequest. As for QNetworkRequest: it is only a data structure, so you don't need to create it on the heap; stack construction would be enough. You should also remember to protect somehow against overwriting the m_reply pointer. Do you call makeRequest more than once? If you do, it may lead to deleting the request currently being processed after a previous request finishes.
What happens if you call makeRequest twice:
The first call of makeRequest assigns the m_reply pointer.
The second call of makeRequest assigns m_reply a second time (replacing the stored pointer but not deleting the pointed-to object).
The second request finishes before the first, so processReply is called, and deleteLater is queued for the second reply.
Somewhere in the event loop the second reply is deleted, so from now on m_reply points at freed memory.
The first reply finishes, so processReply is called again, but it operates on an m_reply that points at garbage, so every access through m_reply produces a crash.
That is one of the possible scenarios, and it's why you don't get a crash every time.
I'm not sure why you call exit(0) when a reply finishes. It's also incorrect here if you make more than one call to makeRequest. Remember that QThread is an interface to a single thread, not a thread pool, so you can't call start() a second time on a thread instance while it is still running. Also, if you create the network access manager in the entry point run(), you should delete it in the same place, after exec(). Remember that exec() blocks, so your objects won't be deleted before your thread exits.
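As a sketch of the usual fix (not your exact code, just the shape of it): keep QThread as a plain thread handle and move the networking into a worker QObject. Each reply is captured per request, so nothing gets overwritten:

#include <QObject>
#include <QThread>
#include <QString>
#include <QUrl>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>

class Worker : public QObject {
    Q_OBJECT
public slots:
    void start() {
        // Created inside the worker thread, so it lives there.
        m_manager = new QNetworkAccessManager(this);
        QNetworkReply *reply = m_manager->get(QNetworkRequest(QUrl("...")));
        connect(reply, &QNetworkReply::finished, this, [this, reply] {
            if (reply->error() == QNetworkReply::NoError)
                emit result(QString::fromUtf8(reply->readAll()));
            reply->deleteLater();
        });
    }
signals:
    void result(const QString &data);
private:
    QNetworkAccessManager *m_manager = nullptr;
};

// Usage from the main thread:
//   QThread *thread = new QThread;
//   Worker *worker = new Worker;
//   worker->moveToThread(thread);
//   QObject::connect(thread, &QThread::started, worker, &Worker::start);
//   thread->start();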
I always use QObject::connect() in my applications, but its effect is not clear to me when my program is currently inside a function. Suppose I have the following code:
int main() {
    // other stuff here
    QObject::connect(xxx, SIGNAL(yyy()), this, SLOT(zzz()));
}
void aFunction()
{
    // a bunch of code here
    // ...I am here when suddenly the signal connected above is emitted
    // another bunch of code here
}
I assume that when the signal is emitted, the connection leaves aFunction() to execute zzz(). What will happen to the remaining code in aFunction()?
Thanks.
I can understand the confusion; coming from procedural to event-based programming gave me the same experience you're having now.
Short answer:
In a non-multithreaded environment, the slot zzz() will be executed after aFunction() finishes. In fact, the signal probably gets emitted after aFunction() finishes.
In a multithreaded environment, the same thing happens, except that it is "after some time" rather than strictly after.
The key to understanding this is the event loop. QApplication::exec() runs a forever loop polling for events. Each new event is handled, signals get emitted, and, depending on the fifth argument of QObject::connect (a Qt::ConnectionType), the connected slot is ultimately run. You can read the QObject documentation for more detail.
So your aFunction probably gets called in response to some signal; after it finishes, control returns to the event loop, and only then does your "suddenly emitted" signal actually get emitted and zzz() execute.
Even in a multithreaded environment, inter-thread signals and slots work with Qt::QueuedConnection, which basically just posts the emitted signal to the corresponding thread, so that when that thread's event loop comes around to process it, it is executed sequentially as well.
Ultimately, what you have to remember is that these Turing machines we call computers execute sequences of code. Whether execution is semi-parallel (e.g. time sharing, pipelining) or truly parallel (e.g. multiple cores, multiple CPUs), the code is dispatched to the execution units in sequence, or in one or more ways has to be simulated to be sequential, so that no code is executed twice on multiple execution units.
There is no "suddenly"
If the maximum wait time is 10 ms, can I use QWaitCondition in Qt's main thread?
Nothing stops you from using QWaitCondition in the main thread. If you set the wait time to 10 ms and it elapses without the condition being woken, you will probably not get the effect you want; note that the default is to wait indefinitely.
However, using a wait condition in the main thread will cause the GUI to become unresponsive while it waits. This is almost always undesired.
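For completeness, a minimal sketch of a 10 ms wait (it blocks the calling thread, so in the main thread the GUI freezes for up to those 10 ms):

#include <QMutex>
#include <QWaitCondition>

// Wait at most 10 ms for another thread to wake `condition`.
bool waitBriefly(QMutex &mutex, QWaitCondition &condition)
{
    QMutexLocker locker(&mutex);
    // wait() atomically releases the mutex while sleeping and re-acquires
    // it before returning; it returns false if the 10 ms deadline passed.
    return condition.wait(&mutex, 10);
}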