How to interrupt a blocking accept() call - unix

I have written a multithreaded application in C. I have two threads: one for catching all the signals and another for accept()-ing client connections. When I kill the application using killproc, the thread with the accept() call is not interrupted. How can I fix that?
The code looks like:
int stop_exec = 0;

void *sigCatcherThread(void *arg)
{
    int sig;
    sigset_t allsignals;
    sigfillset(&allsignals);
    do {
        sigwait(&allsignals, &sig);
        if (sig == SIGTERM)
            stop_exec = 1;
    } while (!stop_exec);
    return NULL;
}

void *clientHandler(void *arg)
{
    ...
    while (!stop_exec)
    {
        accept(...);
    }
    return NULL;
}

int main()
{
    pthread_create(..., sigCatcherThread, ...);
    pthread_create(..., clientHandler, ...);
}

Here you see the use of interrupted system calls, but a dedicated signal handling thread is usually more convenient than relying on interrupted system calls.
So you need your client handler to block until it can either accept an incoming connection or the signal occurs. Waiting on both at once means either signal-driven I/O -- a path I wouldn't follow -- or select(2) (or poll(2)). But select(2) can only wait on I/O. So turn the signal occurrence into I/O: open a pipe, have your signal handling thread write to the pipe when the signal occurs, and have your client thread select(2) on both the listening socket and the read end of the pipe.
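Something along these lines (a rough, untested sketch: listen_fd stands for your already-bound listening socket, wake_pipe is a pipe created with pipe(wake_pipe) in main() before the threads start, and every thread must block all signals with pthread_sigmask() so that only sigwait() sees them):

#include <sys/select.h>
#include <signal.h>
#include <unistd.h>

int wake_pipe[2];   /* [1] written by the signal thread, [0] watched by the client thread */

void *sigCatcherThread(void *arg)
{
    int sig;
    sigset_t allsignals;
    sigfillset(&allsignals);
    for (;;) {
        sigwait(&allsignals, &sig);
        if (sig == SIGTERM) {
            stop_exec = 1;
            write(wake_pipe[1], "x", 1);   /* wake the thread blocked in select() */
            break;
        }
    }
    return NULL;
}

void *clientHandler(void *arg)
{
    while (!stop_exec) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        FD_SET(wake_pipe[0], &rfds);
        int maxfd = (listen_fd > wake_pipe[0] ? listen_fd : wake_pipe[0]) + 1;

        if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
            continue;                       /* interrupted or transient error: re-check stop_exec */
        if (FD_ISSET(wake_pipe[0], &rfds))
            break;                          /* the signal thread asked us to stop */
        if (FD_ISSET(listen_fd, &rfds)) {
            int client = accept(listen_fd, NULL, NULL);
            /* ... handle the client ... */
        }
    }
    return NULL;
}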

Only one thread receives a signal targeted at a process, and it is not necessarily the thread blocked in accept(). See the signal concepts documentation (signal(7)) for more details.
As already mentioned here, you should probably be using an event loop based on select(). I would suggest using libevent.

There's no need to interrupt the blocking accept call. Just make sure that if the thread does return from accept, say by receiving an actual connection, it won't do anything harmful.
If there's some specific reason you need the accept call to interrupt, explain what it is. Likely there's a simple way to remove the requirement.
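For example, a minimal sketch of that idea (listen_fd is an assumed name for the listening socket):

while (!stop_exec) {
    int client = accept(listen_fd, NULL, NULL);
    if (client < 0)
        continue;                 /* error or interruption: loop around and re-check stop_exec */
    if (stop_exec) {              /* a connection arrived while we were shutting down */
        close(client);
        break;
    }
    /* ... handle the client ... */
}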

Related

Qt: Detect a QTcpSocket disconnection in a console app when the user closes it

My question title should be enough. I already tried (without success):
Using a C-style destructor in a function: __attribute__((destructor)):
__attribute__((destructor)) void sendToServerAtExit() {
    mySocket->write("$%BYE_CODE%$");
}
The application destructor is called, but the socket is already disconnected and I can't write to the server.
Using the standard C function atexit(), but the TCP connection is already lost so I can't send anything to the server.
atexit(sendToServerAtExit); // is the same function of point 1
The only solution I found is to check every second whether all connected sockets are still connected, but I don't want to do something so inefficient; it's only a temporary workaround. Also, I want other apps (even web ones) to be able to join the chat room of my console app, and I don't want to poll for data every second.
What should I do?
Handle the following signal (QTcpSocket inherits from QAbstractSocket):
void QAbstractSocket::stateChanged(QAbstractSocket::SocketState socketState)
Inside the slot called, check if socketState is QAbstractSocket::ClosingState.
QAbstractSocket::ClosingState indicates the socket is about to close.
http://doc.qt.io/qt-5/qabstractsocket.html#SocketState-enum
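For example (a minimal sketch; mySocket and onSocketStateChanged are assumed names, not part of the question):

connect(mySocket, &QAbstractSocket::stateChanged,
        this, &MyClass::onSocketStateChanged);

void MyClass::onSocketStateChanged(QAbstractSocket::SocketState socketState)
{
    if (socketState == QAbstractSocket::ClosingState) {
        // The socket is about to close: last chance to react on this side.
    }
}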
You can connect a slot to the disconnect signal.
connect(m_socket, &QTcpSocket::disconnected, this, &Class::clientDisconnected);
Check the documentation.
You can also know which user has been disconnected using a slot like this:
void Class::clientDisconnected()
{
QTcpSocket* client = qobject_cast<QTcpSocket*>(sender());
if(client)
{
// Do something
client->deleteLater();
}
else
{
// Handle error
}
}
This method is useful if you have a connection pool. You can use it as well if you have a single connection, but do not forget to set your pointer to nullptr after client->deleteLater().
If I understand your question correctly, you want to send data over TCP to notify the remote computer that you are closing the socket.
Technically this can be done in Qt by listening to the QIODevice::aboutToClose() or QAbstractSocket::stateChanged() signals.
However, if you exit your program gracefully and close the QTcpSocket, a FIN packet is sent to the remote computer. This means that the program running on the remote computer will be notified that the TCP connection has finished. For instance, if the remote program is also using QTcpSocket, the QAbstractSocket::disconnected() signal will be emitted.
The real issues arise when one of the programs does not exit gracefully (crash, hardware issue, cable unplugged, etc.). In this case, the TCP FIN packet will not be sent and the remote computer will never be notified that the other side of the TCP connection has gone; the connection will just time out after a few minutes.
However, in that case you cannot send your final piece of data to the server either.
In the end the only solution is to send an "I am here" packet every now and then. Even though you claim it is inefficient, it is a widely used technique and it also has the advantage that it works.
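A minimal sketch of such a heartbeat with a QTimer (the 30-second interval, the message contents and mySocket are assumptions, not part of the original answer):

QTimer *heartbeat = new QTimer(this);
connect(heartbeat, &QTimer::timeout, this, [this]() {
    if (mySocket->state() == QAbstractSocket::ConnectedState)
        mySocket->write("$%ALIVE%$");       // any marker the peer understands
});
heartbeat->start(30000);                    // every 30 seconds

The peer then treats a heartbeat that fails to arrive within some timeout as a disconnection.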

How does a non-blocking web server work?

I'm trying to understand the idea of non-blocking web server and it seems like there is something I miss.
I can understand there are several reasons why a web request might block (pseudocode):
CPU bound
string on_request(arg)
{
DO_SOME_HEAVY_CPU_CALC
return "done";
}
IO bound
string on_request(arg)
{
DO_A_CALL_TO_EXTERNAL_RESOURCE_SUCH_AS_WEB_IO
return "done";
}
sleep
string on_request(arg)
{
sleep(VERY_VERY_LONG_TIME);
return "done";
}
Can all three benefit from a non-blocking server?
How do the cases that do benefit from a non-blocking web server actually achieve that?
I mean, when looking at the Tornado server documentation, it seems like it "frees" the thread. I know that a thread can be put to sleep and wait for a signal from the operating system (at least in Linux); is this the meaning of "freeing" the thread? Is it some higher-level implementation, something that actually creates a new thread to wait for new requests instead of the "sleeping" one?
Am I missing something here?
Thanks
Basically, non-blocking socket I/O works by using polling and a state machine. So your scheme for many connections would be something like this:
Create many sockets and make them nonblocking
Switch the state of them to "connect"
Initiate the connect operation on each of them
Poll all of them until some events fire up
Process the fired up events (connection established or connection failed)
Switch the state of those established to "sending"
Prepare the Web request in a buffer
Poll "sending" sockets for WRITE operation
Send the data on those that got the WRITE event set
For those which have all the data sent, switch the state to "receiving"
Poll "receiving" sockets for READ operation
For those which have the READ event set, perform read and process the read data according to the protocol
Repeat if the protocol is bidirectional, or close the socket if it is not
Of course, at each stage you need to handle errors, and remember that the state of each socket may be different (one may be connecting while another may already be reading); a rough sketch of this state machine follows below.
Regarding polling I have posted an article about how different polling methods work here: http://www.ulduzsoft.com/2014/01/select-poll-epoll-practical-difference-for-system-architects/ - I suggest you check it.
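A stripped-down, single-socket sketch of that state machine (the address, the request text and all error handling are placeholders; a real program would keep one such state per socket and pass an array of pollfd structures to poll()):

#include <poll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

enum class State { Connecting, Sending, Receiving, Done };

int main()
{
    // Step 1: create the socket and make it non-blocking.
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, O_NONBLOCK);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(80);
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);     // placeholder address

    // Steps 2-3: switch to "connecting" and initiate the connect.
    State state = State::Connecting;
    connect(fd, (sockaddr *)&addr, sizeof(addr));        // returns -1 with errno == EINPROGRESS

    const char *request = "GET / HTTP/1.0\r\n\r\n";      // step 7: request prepared in a buffer
    size_t sent = 0;

    while (state != State::Done) {
        // Steps 4, 8, 11: poll the socket for the event the current state needs.
        pollfd pfd{};
        pfd.fd = fd;
        pfd.events = (state == State::Receiving) ? POLLIN : POLLOUT;
        if (poll(&pfd, 1, -1) <= 0)
            continue;

        if (state == State::Connecting && (pfd.revents & POLLOUT)) {
            // Steps 5-6: connection established (real code checks SO_ERROR here).
            state = State::Sending;
        } else if (state == State::Sending && (pfd.revents & POLLOUT)) {
            // Step 9: send as much as the socket accepts right now.
            ssize_t n = send(fd, request + sent, strlen(request) - sent, 0);
            if (n > 0)
                sent += (size_t)n;
            if (sent == strlen(request))
                state = State::Receiving;                // step 10
        } else if (state == State::Receiving && (pfd.revents & POLLIN)) {
            // Step 12: read and process according to the protocol.
            char buf[4096];
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n <= 0)
                state = State::Done;                     // step 13: peer closed or error
        }
    }
    close(fd);
}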
To benefit from a non-blocking server, your code must also be non-blocking - you can't just run blocking code on a non-blocking server and expect better performance. For example, you must remove all calls to sleep() and replace them with non-blocking equivalents like IOLoop.add_timeout (which in turn involves restructuring your code to use callbacks or coroutines).
How To Use Linux epoll with Python http://scotdoyle.com/python-epoll-howto.html may give you some pointers on this topic.

Behaviour of a Unix process when a signal arrives and the process is already in a signal handler?

I have a process which is already in a signal handler and has made a blocking call. What will happen if one more signal arrives for this process?
By default, signals don't block each other; a signal only blocks itself during its own delivery. So, in general, handler code can be interrupted by the delivery of another signal.
You can control this behaviour by setting the signal mask applied for each signal's delivery (the sa_mask you pass to sigaction()). This means that you can block (or serialize) signal delivery. For instance, you can declare that you accept being interrupted by signal S1 while handling signal S2, but not the converse...
Remember that signal delivery introduces some concurrency into your code, so controlling the blocking is needed.
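For example, a minimal sketch of controlling this with sigaction()'s sa_mask (SIGUSR1 and SIGUSR2 are placeholder signals standing in for S1 and S2; here SIGUSR1 delivery is held back while the SIGUSR2 handler runs):

#include <signal.h>
#include <string.h>

void handle_usr2(int sig)
{
    /* only async-signal-safe work here */
}

int main()
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handle_usr2;
    sigemptyset(&sa.sa_mask);
    sigaddset(&sa.sa_mask, SIGUSR1);   /* SIGUSR1 is blocked while handle_usr2 runs */
    sigaction(SIGUSR2, &sa, NULL);     /* SIGUSR2 itself is also blocked by default */
    /* ... */
}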
I'm pretty sure the signal being handled is blocked while its handler is being executed, but I'm having a hard time finding something that says that definitively.
Also, you may wish to see this question - some of the answers talk about what functions you should and shouldn't call from a signal handler.
In general, you should consider a signal handler like an interrupt handler - do the very least you can in the handler, and return quickly.

emit and slot execution order

One thread does emit signal1();
A second thread does emit signal2(); after the first thread has sent its signal (the same mutex is locked before the emit call in both threads, and I logged it: I can see in my log that the first thread acquires the lock before the second thread).
Neither the first thread nor the second thread is the GUI thread.
Is there any guarantee that signal1's slot will be called before signal2's slot?
As the emitter and the receiver objects are running in different threads, the slots will not be executed synchronously: Qt is using a queued connection by default instead of a direct connection. You can however force a synchronous execution by using a blocking queued connection (see also http://qt-project.org/doc/qt-4.8/qt.html#ConnectionType-enum for the description of the different connection types) when connecting signals and slots.
But a blocking queued connection has a cost: the emitter thread is blocked until all the connected slots have executed, which is not necessarily a good idea. If you want to use a non-blocking connection, however, the order of execution depends on the objects where the slots are executed.
The important thing to consider is that each QThread has its own event queue. It means that the order of execution is only guaranteed for the slots of a given thread. This means that you have to consider the following cases:
signal1's slot and signal2's slot are defined in QObjects living in the same thread: in that case, you can be sure that the slots are executed in the expected order because they are triggered by the same event queue
both slots are running in different threads: here you have no control over the order of execution, as the signals are posted to two independent event queues. If this is the case, you have to use mutexes or wait conditions (or use a blocking connection).
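For illustration, a sketch of the two connection choices discussed above (Sender, Receiver and the slot names are made up for the example):

// Default between threads: the slot invocation is queued in the receiver's
// event loop and emit returns immediately.
connect(sender, &Sender::signal1, receiver, &Receiver::onSignal1,
        Qt::QueuedConnection);

// Blocking variant: emit does not return until the slot has finished running
// in the receiver's thread. Never use it when both objects live in the same
// thread, as that would deadlock.
connect(sender, &Sender::signal2, receiver, &Receiver::onSignal2,
        Qt::BlockingQueuedConnection);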
emit is just syntactic sugar, look at the .cpp generated by the Meta Object Compiler (moc).
So, emit signal1(); is compiled as signal1();, and the answer to your question is YES, but of course you have no guarantees that signal1() execution ends before signal2() is invoked.
I am not sure if I understand you correctly, but this might help you:
When a signal is emitted, the slots connected to it are usually executed immediately, just like a normal function call.
from http://doc.qt.io/qt-5/signalsandslots.html
So think of calling emit() as calling any other function.

Performing asynchronous write operations over a TCP socket with Boost Asio

I am writing a Client/Server application in C++ with the help of Boost Asio. I have a working server, and the server workflow is something I understand well.
My client application handles the connect gracefully as shown in the Asio examples, after which it exchanges a handshake with the server. After that, however, the users should be able to send requests to the server when and how they want, which is where I have a problem understanding the paradigm.
The initial workflow goes like a little like this:
OnConnected() { SendHandshake() }
SendHandshake() { async_write_some(handshake...), async_read_some(&OnRead) }
OnRead() { ReadServerHandshake() *** }
And users would send messages by using Write(msg):
Write(msg) { async_write_some(msg, &OnWrite), async_read_some(&OnRead) }
OnWrite() {}
EDIT: Rephrasing the question to be clearer, here is the scenario:
After the initial handshaking is complete, the Client is only used to send requests to the server, to which it will get a reply. So, for instance, a user sends a write. The Client waits for the read operation to complete, reads the reply and does something with it. The next user write will only come after, say, 5 minutes. Will the io_service stop working in the meantime because there are no outstanding asynchronous operations between the last reply read and the next write?
On an informative note, you can give the io_service an io_service::work object to stop it from running out of work. This ensures that io_service::run() never returns until the work object is destroyed.
To control the lifetime of the work object, you can use a shared_ptr and reset it once the work is done, or you can use boost::optional as outlined here.
Of course you still need to handle the case where either the server closes the TCP connection, or the connection dies for whatever reason. To handle this case, one solution would be to have an outstanding async_read on the socket to the server. The read handler should be called with an error_code when/if something goes wrong with the connection. If you have the outstanding read on the connection, you do not need to use the work object.
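A minimal sketch of the boost::optional approach with the io_service::work API referred to above (the thread layout and names are assumptions, not the asker's code):

#include <boost/asio.hpp>
#include <boost/optional.hpp>
#include <thread>

int main()
{
    boost::asio::io_service io;

    // Keep io.run() from returning while there is no outstanding async operation.
    boost::optional<boost::asio::io_service::work> work = boost::asio::io_service::work(io);

    std::thread io_thread([&io] { io.run(); });

    // ... connect, handshake, and post async reads/writes to io as the user acts ...

    // On shutdown, drop the work object so run() can return once the
    // remaining handlers have completed.
    work.reset();
    io_thread.join();
}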
If you want the IO service to complete a read, you must start a read. If you want to read data any time the client sends it, you must have an asynchronous read operation pending at all times. Otherwise, how would the library know what to do with the data?
