Applications of fork system call - unix

fork is used to create a copy of the process from which it's called.
This is typically followed by a call to one of the exec family of functions.
Are there any usages of fork other than this?
I can think of one: doing IPC with the pipe functions.

Yes, of course. It's quite common to start a process, do some data initialization and then spawn multiple workers. They all have the same data in their address space, and it's copy-on-write.
Another common thing is to have the main process listen to a TCP socket and fork() for every connection that comes in. That way new connections can be handled immediately while existing connections are handled in parallel.
I think you're forgetting that after a fork(), both processes have access to all data that existed in the process before the fork().
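To illustrate the copy-on-write sharing described above, here is a minimal C sketch of the initialize-then-fork pattern (the table size and worker count are arbitrary, and error checking is omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Expensive one-time initialization, done once in the parent. */
        size_t n = 1000000;
        double *table = malloc(n * sizeof *table);
        for (size_t i = 0; i < n; i++)
            table[i] = i * 0.5;              /* stand-in for real setup work */

        for (int w = 0; w < 4; w++) {
            if (fork() == 0) {
                /* Child: sees the fully initialized table. Pages are shared
                   copy-on-write, so nothing is copied until someone writes. */
                printf("worker %d: table[42] = %f\n", w, table[42]);
                _exit(0);
            }
        }
        while (wait(NULL) > 0)               /* reap all workers */
            ;
        free(table);
        return 0;
    }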

Another use of fork is to detach from the parent process (falling back to init, process 1). If some process, say bash with PID 1111, starts myserver, which gets PID 2222, it will have 1111 as its parent. Assume 2222 forks and the child gets PID 3333. If process 2222 now exits, 3333 loses its parent and instead gets init as its new parent.
This strategy is sometimes used by daemons when starting up, so as not to keep a parent relationship with the process that started them. See also this answer.
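A minimal C sketch of that detach trick (a production daemon would do more, e.g. a second fork, chdir("/"), umask(0) and stdio redirection):

    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Fork, let the parent exit, and let init adopt the child. */
    static void detach_from_parent(void) {
        pid_t pid = fork();
        if (pid < 0)
            exit(EXIT_FAILURE);    /* fork failed */
        if (pid > 0)
            exit(EXIT_SUCCESS);    /* parent exits; init adopts the child */
        setsid();                  /* new session, no controlling terminal */
    }

    int main(void) {
        detach_from_parent();
        /* ... daemon work happens here, with init (PID 1) as parent ... */
        return 0;
    }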

If you have some kind of server listening for incoming connections, you can fork a child process to handle the incoming request (which will not necessarily involve exec or pipes).
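A minimal sketch of such a forking server in C (the port and the reply are arbitrary; most error handling is omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);              /* example port (assumption) */
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 16);

        for (;;) {
            int conn = accept(srv, NULL, NULL);
            if (conn < 0)
                continue;
            if (fork() == 0) {                    /* child handles this client */
                close(srv);                       /* child never accept()s */
                const char msg[] = "hello\n";
                write(conn, msg, sizeof msg - 1);
                close(conn);
                _exit(0);
            }
            close(conn);                          /* parent returns to accept() */
            while (waitpid(-1, NULL, WNOHANG) > 0)  /* reap finished children */
                ;
        }
    }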

A "usage" of fork is to create a Fork Bomb

I have written a small shell, and it was full of forks (yes, each followed by an exec), especially for piping elements. See the Wikipedia page on pipes.
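As an example of what those forks look like, here is a sketch of how a shell might wire up a two-command pipeline such as ls | wc -l:

    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        pipe(fd);                      /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {             /* left side of the pipe: ls */
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]);
            close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            _exit(127);                /* only reached if exec fails */
        }
        if (fork() == 0) {             /* right side of the pipe: wc -l */
            dup2(fd[0], STDIN_FILENO);
            close(fd[0]);
            close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            _exit(127);
        }
        close(fd[0]);                  /* parent closes both ends and waits */
        close(fd[1]);
        while (wait(NULL) > 0)
            ;
        return 0;
    }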

Related

Openresty dynamic spawning of worker process

Is it possible to spawn a new worker process and gracefully shut down an existing one dynamically using Lua scripting in OpenResty?
Yes, but no.
OpenResty itself doesn't really offer this kind of functionality directly, but it does give you the necessary building blocks:
nginx workers can be terminated by sending a signal to them
OpenResty allows you to read the PID of the current worker process
LuaJIT's FFI allows you to use the kill() system call, or
using os.execute you can just call kill directly.
Combining those, you should be able to achieve what you want :D
Note: After reading the question again, I noticed that I really only answered the second part.
nginx uses a set number of worker processes, so you can only shut down running workers, which the master process will then restart; the total number stays the same.
If you just want to change the number of worker processes, you would have to restart the nginx instance completely (I just tried nginx -s reload -g 'worker_processes 4;' and it didn't actually spawn any additional workers).
However, I can't see a good reason why you'd ever do that. If you need additional threads, there's a separate API for that; beyond that, you'll probably just have to live with a hard restart.

Communication between two programs - signals or shared mem?

I need to implement (in Qt) a solution for communication between two programs running on a Linux machine. One program is Worker, and the second is Watchdog. Basically, I need Watchdog to periodically check on Worker and, in case something is wrong (no process, hang-up, no answer from Worker), kill Worker (if present) and start it again.
Worker runs as a daemon, so I think starting it from /etc/init.d/worker would be appropriate.
I can see two solutions:
Unix signals - both programs can send and receive SIGUSR1
Shared memory
Which one should I choose?
With signals, both programs will have to know the other's PID, probably by reading it from the filesystem (/var/run), so that looks like a drawback.
With shared memory, all I need is a key that both programs have hardcoded, so there is no need to read PIDs from the filesystem. Since Watchdog starts first, it can create the shared memory segment, and Worker will only attach to it and perhaps update a timestamp value in it. However, to stop Worker in case of a hang-up, Watchdog will still need Worker's PID to send it SIGKILL; maybe it can read that from shared memory too? Both concepts are new to me.
So what is the proper way to build a reliable Watchdog, or am I missing something?
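To make the shared-memory idea concrete, here is one possible sketch in C (the key value, the struct layout and the function names are all assumptions, and error checking is omitted):

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    #define SHM_KEY 0x1234                /* example hardcoded key (assumption) */

    struct heartbeat {
        pid_t  worker_pid;                /* so Watchdog knows whom to kill */
        time_t last_alive;                /* Worker refreshes this periodically */
    };

    /* Watchdog starts first and creates the segment. */
    struct heartbeat *watchdog_create(void) {
        int id = shmget(SHM_KEY, sizeof(struct heartbeat), IPC_CREAT | 0666);
        return shmat(id, NULL, 0);
    }

    /* Worker attaches to the existing segment and updates its heartbeat. */
    void worker_touch(void) {
        int id = shmget(SHM_KEY, sizeof(struct heartbeat), 0666);
        struct heartbeat *hb = shmat(id, NULL, 0);
        hb->worker_pid = getpid();
        hb->last_alive = time(NULL);
        shmdt(hb);
    }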
I think this is the best solution available through Qt:
http://qt-project.org/doc/qt-4.8/qlocalsocket.html
http://qt-project.org/doc/qt-4.8/qlocalserver.html
The QLocalSocket class provides a local socket. On Windows this is a named pipe and on Unix this is a local domain socket.
http://qt-project.org/doc/qt-4.8/ipc-localfortuneserver.html
http://qt-project.org/doc/qt-4.8/ipc-localfortuneclient.html
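On Unix, QLocalServer/QLocalSocket are backed by a local (Unix domain) socket, so the same heartbeat pattern looks roughly like this in plain C (the socket path is an assumption; error checks omitted):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    #define SOCK_PATH "/tmp/watchdog.sock"   /* example path (assumption) */

    /* Watchdog side: listen on the local socket and wait for pings. */
    int watchdog_listen(void) {
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, SOCK_PATH, sizeof addr.sun_path - 1);
        unlink(SOCK_PATH);                   /* remove a stale socket file */
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 1);
        return srv;                          /* accept() and read() pings elsewhere */
    }

    /* Worker side: connect and send a ping. */
    void worker_ping(void) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, SOCK_PATH, sizeof addr.sun_path - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
            write(fd, "alive", 5);
        close(fd);
    }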
Hope that helps.

Interprocess Communication Unix C

I have two processes, A and B. B is a process that performs some functions, and process A controls B; i.e. process A instructs process B by providing data (control and functional) to it.
I have a thread in B dedicated to IPC; all that thread does is get instructions from process A, while the other running threads do whatever they have to with the already existing data.
I thought of pipes and shared memory using shmat, but I am not satisfied. I want something like this: only when process A writes a message to B should the IPC thread in B wake up. Any idea how to achieve this?
The specifics sort of depend on what kind of flexibility you need and who is using what pipes, but this should work: Have process B's IPC thread select for readability on the pipe. When process A writes to the pipe, process B's IPC thread will be awoken.
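A sketch of that IPC thread in C, assuming pipe_fd is the read end of a pipe shared with process A:

    #include <sys/select.h>
    #include <unistd.h>

    /* IPC thread of process B: sleep in select() until A writes to the pipe. */
    void ipc_loop(int pipe_fd) {
        char buf[256];
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(pipe_fd, &rfds);
            /* Blocks here; wakes only when the pipe becomes readable. */
            if (select(pipe_fd + 1, &rfds, NULL, NULL, NULL) < 0)
                break;
            ssize_t n = read(pipe_fd, buf, sizeof buf);
            if (n <= 0)
                break;                /* writer closed the pipe, or error */
            /* ... hand the instruction in buf[0..n) to the worker threads ... */
        }
    }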
I found a solution. I made one of the threads open one end of the pipe for read, do the actual read, and close it. This goes on in an infinite while loop!
The process that wants to write to it opens it only when it needs to write, closes it afterwards, and eventually ends.
In fact this setup avoids synchronisation issues as well, but I don't know what its consequences are in terms of performance!
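That open/read/close loop might look roughly like this in C, assuming a named pipe (FIFO) at an example path:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define FIFO_PATH "/tmp/b_commands"   /* example FIFO path (assumption) */

    /* Reader thread in B: open() blocks until a writer opens the FIFO,
       so the loop naturally sleeps between commands. */
    void reader_loop(void) {
        mkfifo(FIFO_PATH, 0666);          /* harmless if it already exists */
        for (;;) {
            int fd = open(FIFO_PATH, O_RDONLY);   /* blocks for a writer */
            char buf[256];
            ssize_t n;
            while ((n = read(fd, buf, sizeof buf)) > 0) {
                /* ... handle the received command ... */
            }
            close(fd);                    /* writer closed; wait for the next one */
        }
    }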

Kill an mpi process

I would like to know if there is a way for an MPI process to send a kill signal to another MPI process.
Or, put differently: is there a way to exit from an MPI environment gracefully while one of the processes is still active? (MPI_Abort() prints an error message.)
No, this is not possible within an MPI application using the MPI library.
Individual processes are not aware of the location of the other processes, nor of their process IDs - and there is nothing in the MPI spec to provide the kill you are looking for.
If you were to do this manually, you'd need to use MPI_Alltoall to exchange process IDs and hostnames across the system, and then you would need to spawn ssh/rsh to reach the required node when you wanted to kill something. All in all, it's not portable and not clean.
MPI_Abort is the right way to do what you are trying to achieve. From the Open MPI manual:
"This routine makes a "best attempt" to abort all tasks in the group of comm." (i.e. MPI_Abort(MPI_COMM_WORLD, -1) is what you need.)
Any output during MPI_Abort would be machine specific - so you may, or may not, receive the error message you mention.
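A minimal usage sketch in C (the failure flag is a placeholder):

    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int fatal_error = 0;          /* set on some unrecoverable failure (placeholder) */
        if (fatal_error) {
            /* Best-effort attempt to abort every task in MPI_COMM_WORLD. */
            MPI_Abort(MPI_COMM_WORLD, -1);
        }

        MPI_Finalize();
        return 0;
    }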

TCP Socket Piping

Suppose you have 2 sockets (each connected to a different TCP peer), both residing in the same process. How can these sockets be bound together, so that the input stream of each is connected to the output stream of the other? The sockets will continuously carry data; no waiting should happen. Normally a thread can solve this problem, but rather than creating threads, is there a more efficient way of piping sockets?
If you need to connect both ends of the socket to the same process, use the pipe() function instead. This function returns two file descriptors, one used for writing and the other used for reading. There isn't really any need to involve TCP for this purpose.
Update: Based on your clarification of your use case, no, there isn't any way to tell the OS to connect the ends of two different sockets together. You will have to write code to read from one socket and write the same data to the other. Depending on the architecture of your process, you may or may not need an additional thread to do this work. For example, if your application is based on a select() loop, then creating another thread is not necessary.
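For instance, a single select() loop can relay both directions without extra threads. A sketch, assuming a and b are already-connected sockets (a complete version would also handle partial writes):

    #include <sys/select.h>
    #include <unistd.h>

    /* Relay bytes between two connected sockets using one select() loop. */
    void relay(int a, int b) {
        char buf[4096];
        int maxfd = (a > b ? a : b) + 1;
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(a, &rfds);
            FD_SET(b, &rfds);
            if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
                return;
            if (FD_ISSET(a, &rfds)) {
                ssize_t n = read(a, buf, sizeof buf);
                if (n <= 0 || write(b, buf, n) != n)
                    return;           /* peer closed, or write error */
            }
            if (FD_ISSET(b, &rfds)) {
                ssize_t n = read(b, buf, sizeof buf);
                if (n <= 0 || write(a, buf, n) != n)
                    return;
            }
        }
    }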
You can avoid threads with an event queue within the process. The Wikipedia article on message queues assumes you want interprocess message passing, but if you are using sockets, you are effectively doing the same kind of message passing within a single process.
