Explicitly listening for signals - Qt

I'm new to Qt, but I have some experience in C and Java.
I'm trying to write a program that makes multiple TCP connections to different servers on the network.
The IPs are read in from a text file, and I use connectToHost to establish a connection; the socket is then added to a QList. This happens in a loop.
The problem is that I only start receiving the connected() signals after the program exits the loop, which causes some unexpected behaviour.
So is there a way to poll for signals inside the loop?

Call QCoreApplication::processEvents() inside your loop to avoid freezing and to let pending signals (like connected()) be delivered.

You can use QAbstractSocket::waitForConnected()
http://doc.qt.io/qt-5/qabstractsocket.html#waitForConnected
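As a hedged illustration of the event-loop point (the port number 12345, the host handling and the qDebug body are assumptions, not from the question), the signal-based version looks roughly like this, with the two suggestions above shown as commented-out alternatives:

// Minimal sketch (Qt 5). The key point: connected() is only emitted once
// control returns to the event loop, so connect the slot first and let
// the loop run instead of blocking.
#include <QCoreApplication>
#include <QDebug>
#include <QList>
#include <QStringList>
#include <QTcpSocket>

void openConnections(const QStringList &hosts, QList<QTcpSocket *> &sockets,
                     QObject *parent)
{
    for (const QString &host : hosts) {
        QTcpSocket *socket = new QTcpSocket(parent);
        QObject::connect(socket, &QTcpSocket::connected, [socket]() {
            // Runs later, when the event loop dispatches the signal.
            qDebug() << "connected to" << socket->peerName();
        });
        socket->connectToHost(host, 12345);    // port is a placeholder
        sockets.append(socket);

        // Alternatives from the answers above:
        // socket->waitForConnected(3000);     // block until connected
        // QCoreApplication::processEvents();  // pump the event loop right here
    }
}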


Move QTcpSocket to a new Thread after the connection is initiated

I've got a threaded server.
A QTcpSocket needs to be created on the thread it will run on, for instance: Qt - Handle QTcpSocket in a new thread by passing the socket descriptor.
My problem is that I need to have a pool of threads and move the socket to a specific thread AFTER the client has sent a specific token which defines which thread the socket needs to be on.
In other words, I need to read from the socket before I know which thread to place it on.
One idea would be to bind a QTcpSocket first, read, then send the descriptor to the thread and create another QTcpSocket there, but the doc says:
Note: It is not possible to initialize two abstract sockets with the
same native socket descriptor.
Another solution would be to create the socket in a separate thread and then join both threads together, though I don't know if that is possible.
Or perhaps read from the socket descriptor on the main thread before calling setSocketDescriptor on the child thread, if that is even possible?
You can absolutely move sockets across QThreads; just pay attention to four things:
1) Make sure your QTcpSocket does not have a parent before you move it
2) Disconnect everything from the socket object before the move
3) Connect everything you need back in a function running in the destination thread (you may use some sort of pool in that thread where the 'moved' objects are stored until the thread picks them up)
4) Call readAll() after re-initialization, as you may have missed some readyRead() signals
I don't see any reason not to do this if it fits your design; I have used it many times in multithreaded services to split socket handlers across cores.
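A minimal sketch of that hand-over, assuming Qt 5.10 or later for the functor overload of QMetaObject::invokeMethod; the Worker class and takeSocket() name are placeholders, not from the answer above:

// Hedged sketch: Worker and takeSocket() are hypothetical names.
#include <QDebug>
#include <QMetaObject>
#include <QObject>
#include <QTcpSocket>
#include <QThread>

class Worker : public QObject          // one instance lives on each pool thread
{
public:
    void takeSocket(QTcpSocket *socket)
    {
        // 3) runs in the destination thread: reconnect what we need
        connect(socket, &QTcpSocket::readyRead, this, [socket]() {
            qDebug() << "got" << socket->readAll().size() << "bytes";
        });
        // 4) drain anything that arrived while no readyRead() slot was attached
        if (socket->bytesAvailable() > 0)
            qDebug() << "catch-up" << socket->readAll().size() << "bytes";
    }
};

// Called on the thread the socket currently lives on.
void handOver(QTcpSocket *socket, Worker *worker)
{
    socket->setParent(nullptr);               // 1) no parent before the move
    socket->disconnect();                     // 2) drop old signal connections
    socket->moveToThread(worker->thread());   // move to the worker's thread
    // Queued onto the worker's thread; takeSocket() runs over there (Qt >= 5.10).
    QMetaObject::invokeMethod(worker, [worker, socket]() { worker->takeSocket(socket); });
}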

Signal and waitpid coexistence

I have the following question: can I use a signal handler for SIGCHLD and at specific places use waitpid(3) instead?
Here is my scenario: I start a daemon process that listens on a socket (at this point it's irrelevant if it's a TCP or a UNIX socket). Each time a client connects, the daemon forks a child to handle the request and the parent process keeps on accepting incoming connections. The child handling the request needs at some point to execute a command on the server; let's assume in our example that it needs to perform a copy like this:
cp -a /src/folder /dst/folder
In order to do so, the child forks a new process that uses execl(3) (or execve(3), etc.) to execute the copy command.
In order to control my code better, I would ideally wish to catch the exit status of the child executing the copy with waitpid(3). Moreover, since my daemon process is forking children to handle requests, I need to have a signal handler for SIGCHLD so as to prevent zombie processes from being created.
In my code, I set up a signal handler for SIGCHLD using signal(3), I daemonize my program by forking twice, then I listen on my socket for incoming connections, I fork a process to handle each incoming request, and that child process forks a grand-child process to perform the copy, trying to catch its exit status via waitpid(3).
What happens is that SIGCHLD is caught by my handler when a grand-child process dies before waitpid(3) takes action, and waitpid(3) returns -1 even though the grand-child process exits with success.
My first thought was to add:
signal(SIGCHLD, SIG_DFL);
just before forking the child process to handle my connecting clients, without any success. Using SIG_IGN didn't work either.
Is there a suggestion on how to make my scenario work?
Thank you all for your help in advance!
PS. If you need code, I'll post it, but due to its size I decided to do so only if necessary.
PS2. My intention is to use my code in FreeBSD, but my checks are performed in Linux.
EDIT [SOLVED]:
The problem I was facing is solved. The "unexpected" behaviour was caused by my waitpid(3) handling code, which turned out to be buggy.
Hence, the above method can indeed be used to allow for signal(3) and waitpid(3) coexistence in daemon-like programs.
Thanks for your help, and I hope this method helps someone wishing to accomplish such a thing!
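For reference, here is a minimal sketch of one way to let the two coexist: the daemon keeps a WNOHANG reaping handler for its request-handling children, while each child resets SIGCHLD to the default disposition so its own waitpid() sees the grandchild's exit status. This is only one possible arrangement (the original fix here was simply repairing the buggy waitpid() handling); the copy paths, the pause() placeholder and the missing error handling are simplifications.

/* Minimal sketch: the daemon reaps request handlers asynchronously,
 * the request handler waits synchronously for the grandchild it execs. */
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void reap_children(int sig)
{
    (void)sig;
    /* Reap every child that has already exited; WNOHANG keeps this non-blocking. */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

static void handle_request(void)
{
    /* In the request-handling child: restore the default disposition so
     * the explicit waitpid() below sees the grandchild's status instead
     * of the inherited handler reaping it first. */
    signal(SIGCHLD, SIG_DFL);

    pid_t pid = fork();
    if (pid == 0) {
        execl("/bin/cp", "cp", "-a", "/src/folder", "/dst/folder", (char *)NULL);
        _exit(127);                       /* exec failed */
    }

    int status;
    if (waitpid(pid, &status, 0) == pid && WIFEXITED(status))
        printf("copy exited with %d\n", WEXITSTATUS(status));
}

int main(void)
{
    signal(SIGCHLD, reap_children);       /* daemon-side reaper */

    /* ... daemonize, listen on the socket, accept a connection ... */
    pid_t child = fork();
    if (child == 0) {
        handle_request();
        _exit(0);
    }
    pause();                              /* placeholder for the accept loop */
    return 0;
}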

Erlang: accept incoming TCP connections dynamically

What I am trying to solve: have an Erlang TCP server that listens on a specific port (the code should reside in some kind of external-facing interface/API), where each incoming connection is handled by a gen_server (that is, even the gen_tcp:accept should be coded inside the gen_server), but I don't actually want to initially spawn a predefined number of processes that accept incoming connections. Is that somehow possible?
Basic Procedure
You should have one static process (implemented as a gen_server or a custom process) that performs the following procedure:
Listens for incoming connections using gen_tcp:accept/1
Every time it returns a connection, tell a supervisor to spawn off a worker process (e.g. another gen_server process)
Get the pid for this process
Call gen_tcp:controlling_process/2 with the newly returned socket and that pid
Send the socket to that process
Note: You must do it in that order, otherwise the new process might use the socket before ownership has been handed over. If this is not done, the old process might get messages related to the socket when the new process has already taken over, resulting in dropped or mishandled packets.
The listening process should have only one responsibility, and that is spawning workers for new connections. This process will block when calling gen_tcp:accept/1, which is fine because the started workers will handle ongoing connections concurrently. Blocking on accept ensures the quickest response time when new connections are initiated. If the process needs to do other things in between, gen_tcp:accept/2 could be used with other actions interleaved between timeouts.
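A minimal Erlang sketch of that procedure (the module name, conn_sup and the {take_socket, Sock} message are placeholders, not part of the answer; conn_sup is assumed to be a simple_one_for_one supervisor whose children are gen_server workers):

-module(acceptor).                                      %% hypothetical module name
-export([start/1]).

start(Port) ->
    {ok, LSock} = gen_tcp:listen(Port, [binary, {active, false}, {reuseaddr, true}]),
    acceptor_loop(LSock).

acceptor_loop(LSock) ->
    {ok, Sock} = gen_tcp:accept(LSock),                 %% blocks until a client connects
    {ok, Pid}  = supervisor:start_child(conn_sup, []),  %% spawn a worker
    ok = gen_tcp:controlling_process(Sock, Pid),        %% hand over ownership first
    gen_server:cast(Pid, {take_socket, Sock}),          %% ...then send it the socket
    acceptor_loop(LSock).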
Scaling
You can have multiple processes waiting with gen_tcp:accept/1 on a single listening socket, further increasing concurrency and minimizing accept latency.
Another optimization would be to pre-start some socket workers to further minimize latency after accepting the new socket.
A third and final optimization would be to make your processes more lightweight by implementing the OTP design principles in your own custom processes using proc_lib (more info). However, you should only do this if you benchmark and conclude that it is the gen_server behaviour that slows you down.
The issue with gen_tcp:accept is that it blocks, so if you call it within a gen_server, you block the server from receiving other messages. You can try to avoid this by passing a timeout, but that ultimately amounts to a form of polling, which is best avoided. Instead, you might try Kevin Smith's gen_nb_server; it uses an internal undocumented function, prim_inet:async_accept, and other prim_inet functions to avoid blocking.
You might want to check out http://github.com/oscarh/gen_tcpd and use the handle_connection function to convert the process you get to a gen_server.
You should use prim_inet:async_accept(Listen_socket, -1), as Steve said. The incoming connection will then be accepted by your handle_info callback (assuming your interface is also a gen_server), since you have used an asynchronous accept call.
On accepting the connection you can spawn another gen_server (I would recommend gen_fsm) and make it the controlling process by calling gen_tcp:controlling_process(CliSocket, Pid of spawned process). After this, all data from the socket will be received by that process rather than by your interface code. In the same way, a new controlling process is spawned for each new connection.
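A hedged sketch of that asynchronous flow; prim_inet is undocumented, so the shape of the inet_async message shown here is the one gen_nb_server relies on and may differ between OTP releases, and the module and my_conn_handler names are placeholders:

-module(async_acceptor).                                  %% hypothetical module name
-behaviour(gen_server).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).
-record(state, {lsock}).

init([Port]) ->
    {ok, LSock} = gen_tcp:listen(Port, [binary, {active, false}]),
    {ok, _Ref} = prim_inet:async_accept(LSock, -1),
    {ok, #state{lsock = LSock}}.

handle_info({inet_async, LSock, _Ref, {ok, CliSocket}},
            #state{lsock = LSock} = State) ->
    %% Production code (see gen_nb_server) also registers CliSocket with
    %% inet_db and copies options over from LSock before using it.
    Pid = my_conn_handler:start(CliSocket),               %% placeholder
    ok = gen_tcp:controlling_process(CliSocket, Pid),
    {ok, _NewRef} = prim_inet:async_accept(LSock, -1),    %% re-arm the accept
    {noreply, State}.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.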

Keep a TCP connection persistent even after the sub-routine calling connect() has ended. How?

I have an application that calls connect() in a subroutine A.
This sub-routine A is called when a button A is pressed.
After connection has established, user can choose to click button B.
This button B has to be programmed as a separate sub-routine.
However, I need TCP connection to run sub-routine B.
After connect() is called in sub-routine A, sub-routine A exits.
The connection is closed as well during that exit.
Is there any way to keep this connection alive after connecting, even after sub-routine A has exited?
Many thanks!
What programming language are you using? Anyway, you can have the socket fd and the socket struct defined at a wider (public) scope so they persist across the routines, or pass them as parameters of the subroutines. I would need more code to answer more precisely.
Actually, I am using Objective-C for iPhone development. However, the content is written in C. I copied and modified some sample code from the internet, and I am having quite a few problems with it.
Then I found sample code in Objective-C that solved the problem; the connection can be kept alive. The sample code is from here:
http://www.devx.com/wireless/Article/43551/1954
This program connects to the network automatically once the application is running, without the user having to click any connect button. That might be just fine. Now it is time to figure out how to add a button so that the user can disconnect from the network at any time.
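For completeness, the C-level idea from the answer above looks roughly like this: keep the descriptor outside the subroutine's local scope so that returning from the connect routine does not close it. The address, port and missing error handling are placeholders.

/* Keep the socket descriptor at file scope so it outlives the routine
 * that created it; only close() actually closes it. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

static int g_sock = -1;              /* persists between button handlers */

void buttonA_connect(void)           /* sub-routine A */
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                      /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* placeholder address */

    g_sock = socket(AF_INET, SOCK_STREAM, 0);
    connect(g_sock, (struct sockaddr *)&addr, sizeof(addr));
    /* returning from this routine does NOT close g_sock */
}

void buttonB_send(void)              /* sub-routine B reuses the connection */
{
    const char msg[] = "hello";
    send(g_sock, msg, sizeof(msg) - 1, 0);
}

void buttonC_disconnect(void)        /* explicit disconnect button */
{
    close(g_sock);
    g_sock = -1;
}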

Is there a way to close a Unix socket for only reading or writing?

Is there a way to only close "one end" of a TCP socket to cleanly indicate one side of a connection is done writing to the connection? (Just like you do with a pipe in every Unix pipe tutorial ever.) Or should I use some in-band solution like a sentinel value or some such?
You can shut down a socket for reading or writing using the second parameter to shutdown():
shutdown(sock, SHUT_RD)
shutdown(sock, SHUT_WR)
If the server is doing the writing and does a shutdown() for write, the client should get an end of file when it tries to read (rather than blocking and waiting for data to arrive). The client will, however, still be able to write to the socket.
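As a small illustration (buffer size and error handling omitted), the side that has finished sending calls shutdown() with SHUT_WR and can still read until the peer closes its end:

/* After the writer calls shutdown(sock, SHUT_WR) the peer's recv()
 * returns 0 (end of file) once the buffered data is consumed, but data
 * can still flow in the other direction. */
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void finish_sending(int sock)
{
    /* ... last write() on this side ... */
    shutdown(sock, SHUT_WR);         /* sends FIN: "I'm done writing" */

    /* We can still read the peer's reply until it closes its side. */
    char buf[4096];
    ssize_t n;
    while ((n = recv(sock, buf, sizeof(buf), 0)) > 0) {
        /* process the reply */
    }
    close(sock);                     /* n == 0 means the peer is done too */
}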
