Is a Unix pipe limited to use only between 2 processes? - unix

I am reading about pipes in UNIX for inter-process communication between 2 processes. I have the following questions.
1) Is a Unix pipe limited to use only between 2 processes, or can 3 or more related processes communicate using a single pipe? For example, if I have one parent and 2 child processes, can I write to the pipe from the parent process and read from the same pipe in both children? If so, how will both children get the same contents, given that once one child reads from the pipe, the kernel removes that data from the pipe?
2) Is it really necessary to close the unused end of the pipe? For example, if my parent process is writing data into the pipe and the child is reading from the pipe, is it really necessary to close the read end of the pipe in the parent process and close the write end in the child process? Are there any side effects if I don't close those ends? Why do we need to close those ends?

A single pipe is not a natural solution for allowing a parent to broadcast to its children. Shared memory would provide a more natural way to solve that problem. There is only one message that is naturally broadcast from the parent to the children: the parent can close the write end of the pipe and cause all the children to see read return 0 on the read end of the pipe.
However, a single pipe can be used by children to relay information back to the parent. So long as the messages are properly framed with source information from the child, the parent can field responses from all its children from the read end of the pipe, while each child writes to the write end of the pipe.
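As a sketch of that fan-in pattern (the record layout and the choice of three children are illustrative, not from the answer): each child writes one fixed-size, self-identifying record; since the record is smaller than PIPE_BUF, each write() is atomic and records from different children never interleave.

#include <limits.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* One framed message; keeping sizeof(struct msg) <= PIPE_BUF makes writes atomic. */
struct msg {
    pid_t sender;
    char text[64];
};

int main(void)
{
    int pd[2];
    if (pipe(pd) != 0)
        return 1;
    for (int i = 0; i < 3; i++) {
        if (fork() == 0) {          /* child: writes one framed record */
            close(pd[0]);           /* close unused read end */
            struct msg m;
            m.sender = getpid();
            snprintf(m.text, sizeof(m.text), "hello from child %d", i);
            write(pd[1], &m, sizeof(m));
            _exit(0);
        }
    }
    close(pd[1]);                   /* parent: close unused write end */
    struct msg m;
    /* When all children exit and every write end is closed, read() returns 0. */
    while (read(pd[0], &m, sizeof(m)) == sizeof(m))
        printf("from %ld: %s\n", (long)m.sender, m.text);
    return 0;
}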

Related

Asynchronous I/O with io_uring and pipe

I'm trying to fork() the io_uring simulation by creating a parent and a child process, each with its own rings: the parent writes to the pipe and the child reads from the pipe. However, I stumbled upon an error when the reader side reads an empty pipe: instead of waiting for content in the pipe (when there's no error the simulation does indeed run faster), it updates the CQE with an "Async task failed" result. To overcome the problem, I didn't set O_NONBLOCK in pipe2(); this makes the reader wait until there's content in the pipe before unblocking. But on the downside, it has to call fcntl() via a syscall, and the performance of the simulation is slower than the original simulation file. I want to know how to make the reader side of the pipe ready without syscalls.
*Edit
I wonder if I can incorporate io_uring (with the pipes as file descriptors) with eventfd, so that instead of using blocking pipes, once a write to the pipe is completed it will notify the eventfd; the reader SQE will then wait until the eventfd is ready before it reads from the pipe, to prevent the reading error.
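That idea could be sketched with liburing along these lines (purely a sketch under assumptions: liburing is available, and efd / pipe_rd stand in for descriptors the simulation already owns). The reader links an eventfd read ahead of the pipe read with IOSQE_IO_LINK, so the pipe read is issued only after the writer's notification arrives:

#include <liburing.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Writer side: after writing a chunk into the pipe, signal the eventfd. */
static void notify_written(int efd)
{
    uint64_t one = 1;
    write(efd, &one, sizeof(one));
}

/* Reader side: submit "read eventfd, then read pipe" as a linked pair,
   so the pipe read does not start until the notification lands. */
static int queue_notified_read(struct io_uring *ring, int efd,
                               int pipe_rd, char *buf, unsigned len)
{
    static uint64_t count;   /* must outlive the submission */
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
    io_uring_prep_read(sqe, efd, &count, sizeof(count), 0);
    sqe->flags |= IOSQE_IO_LINK;

    sqe = io_uring_get_sqe(ring);
    io_uring_prep_read(sqe, pipe_rd, buf, len, 0);
    return io_uring_submit(ring);
}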

Pipe between child processes

Is it possible to create a pipe between 2 or more child processes?
If so, does it have to be created by the parent, or can it be created by one of the children?
Yes, it is possible to create pipes between child processes.
The pipe identifier needs to be known by both ends in order to connect to it - but how should they exchange this identifier when they are not connected already? This is why pipes are normally created by a common ancestor, which communicates this common identifier to all its children on creation.
What you seem to be looking for is named pipes - these can be opened by a commonly known (by convention) name without receiving information first. Named pipes are, however, not tied to the lifetime of a process - you need some outside instance that creates them and destroys them when they are no longer needed. Otherwise, they will continue to use system resources until the system is rebooted.
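A small sketch of the named-pipe route (the path /tmp/demo_fifo is an arbitrary placeholder): one process creates the FIFO, both sides open it by name, and something must eventually unlink it.

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";

    /* Create the FIFO if it does not already exist. */
    if (mkfifo(path, 0600) != 0)
        perror("mkfifo");

    /* Open for reading; this blocks until a writer opens the other end.
       An unrelated process can open(path, O_WRONLY) and write() to us. */
    int fd = open(path, O_RDONLY);
    char buf[128];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        printf("got %zd bytes\n", n);
    close(fd);

    unlink(path);   /* the FIFO persists until someone removes it */
    return 0;
}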

execl behavior in unix

In my program I am executing a long-lived tail (with -f) via execl.
Anything after that call to execl does not get executed.
Do I need to run this tail in the background so that I can do other things in my program?
I usually exit out of my program with Ctrl-C.
execl() will replace the calling process, so your calling program won't exist anymore once you've called it.
To get around this, you could call execl() after a call to fork(). Fork splits your program in two (a parent and a child), and you'll be able to check which is the child process and which is the parent; you'd put your execl() call inside the child process section.
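For example, a minimal fork-then-exec sketch (the tail path and the log file are placeholders, not from the question):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: becomes tail -f; execl only returns on failure. */
        execl("/usr/bin/tail", "tail", "-f", "/var/log/syslog", (char *)0);
        perror("execl");
        _exit(127);
    }
    /* Parent: keeps running and can do other work here. */
    printf("spawned tail as pid %ld\n", (long)pid);
    /* ... other work ... */
    waitpid(pid, NULL, 0);   /* or kill(pid, SIGTERM) when done */
    return 0;
}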
The man page for the exec family of calls starts with:
The exec family of functions replaces the current process image with a new process image.
Not entirely sure what you want to accomplish, but it looks like exec isn't the solution. If you want your first program to remain alive, you'll need to fork. Does your initial program do something with the output of tail -f?
If your parent program would like to capture the output from tail, you should look at the popen() function. This will start the tail process, and the output can be read from the FILE* it returns.
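A minimal popen() sketch of that capture case (the command and log path are placeholders):

#include <stdio.h>

int main(void)
{
    /* Start tail -f and read its output line by line. */
    FILE *fp = popen("tail -f /var/log/syslog", "r");
    if (fp == NULL)
        return 1;

    char line[512];
    while (fgets(line, sizeof(line), fp) != NULL)
        printf("tail says: %s", line);

    pclose(fp);
    return 0;
}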
If your parent program has no interest in capturing the output then you'll want to create a child process using fork() which then calls execl(). The child process image will be replaced by tail and your parent process will continue.

Receiving a specific message from POSIX Message Queue

I am supposed to write a C application on Unix in which N child processes are forked from the parent process; I will send messages to these children, and the children are supposed to send messages to each other.
However, the problem is that I need to send messages to a specific target child process, i.e. the parent will send to child 1, child 1 will send to child 2, ..., and child n will send to child 1 (circularly).
The problem is that if I create only one message queue, any of the n children may dequeue the message (since any of them may run after the parent process, depending on the kernel scheduler), and therefore the message will be dequeued by the wrong process!
In my application there will be at most 1 message in the queue at a time. The only solution that comes to my mind is to create n different message queues and pass messages to the appropriate queue so that a specific target process can receive them. But I think there must be a more legitimate solution.
Any ideas?
Constraints: Pipes between processes are not allowed. I know that mq is inefficient here; I'll also implement pipes, as both are required. (P.S. This is kinda homework (damn, I am the creator of http://canyoudomyhomework.com), however this is not just a homework; it's a challenging question IMHO.)
Depending on the performance requirements, a brokered (router) solution feels most appropriate.
The parent could act as the router, or could spawn a specific process to do this job.
Define a simple message structure whose first element is the intended target; we can also designate the parent process as zero.
Each process has only one queue, between itself and the broker. All messages are processed and routed in one place, thereby avoiding the NxN fan-out you mention.
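As a rough sketch of that layout (the queue names, sizes, and message struct are assumptions for illustration; link with -lrt on Linux):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

#define NCHILD 4

struct msg {
    int target;               /* 0 = parent, 1..NCHILD = children */
    char payload[128];
};

static mqd_t open_queue(const char *name)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = sizeof(struct msg) };
    return mq_open(name, O_CREAT | O_RDWR, 0600, &attr);
}

int main(void)
{
    char name[32];
    mqd_t inbox = open_queue("/broker");       /* every process sends here */
    mqd_t out[NCHILD + 1];
    for (int i = 0; i <= NCHILD; i++) {        /* one queue per process */
        snprintf(name, sizeof(name), "/proc_%d", i);
        out[i] = open_queue(name);
    }

    /* Broker loop: forward each message to the target's own queue. */
    struct msg m;
    for (;;) {
        if (mq_receive(inbox, (char *)&m, sizeof(m), NULL) < 0)
            break;
        if (m.target >= 0 && m.target <= NCHILD)
            mq_send(out[m.target], (const char *)&m, sizeof(m), 0);
    }
    return 0;
}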
Good Luck

Piping as interprocess communication

I am interested in writing separate program modules that run as independent threads that I could hook together with pipes. The motivation would be that I could write and test each module completely independently, perhaps even write them in different languages, or run the different modules on different machines. There are a wide variety of possibilities here. I have used piping for a while, but I am unfamiliar with the nuances of its behaviour.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
If I write an EOF to the stream, can I continue writing to that stream until I close it?
Are there differences in the behaviour of named and unnamed pipes?
Does it matter which end of the pipe I open first with named pipes?
Is the behaviour of pipes consistent between different Linux systems?
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Wow, that's a lot of questions. Let's see if I can cover everything...
It seems like the receiving end will block waiting for input, which I would expect
You expect correctly: an actual read call will block until something is there. However, I believe there are some C functions that will allow you to 'peek' at what (and how much) is waiting in the pipe. Unfortunately, I don't remember whether this blocks as well.
will the sending end block sometimes waiting for someone to read from the stream?
No, sending should never block. Think of the ramifications if this were a pipe across the network to another computer. Would you want to wait (through possibly high latency) for the other computer to respond that it received it? Now this is a different case if the reader handle of the destination has been closed. In this case, you should have some error checking to handle that.
If I write an EOF to the stream, can I continue writing to that stream until I close it?
I would think this depends on what language you're using and its implementation of pipes. In C, I'd say no. In a linux shell, I'd say yes. Someone else with more experience would have to answer that.
Are there differences in the behaviour of named and unnamed pipes?
As far as I know, yes. However, I don't have much experience with named vs unnamed. I believe the difference is:
Single direction vs Bidirectional communication
Reading AND writing to the "in" and "out" streams of a thread
Does it matter which end of the pipe I open first with named pipes?
Generally no, but you could run into problems on initialization trying to create and link the threads with each other. You'd need to have one main thread that creates all the sub-threads and syncs their respective pipes with each other.
Is the behaviour of pipes consistent between different Linux systems?
Again, this depends on the language, but generally yes. Ever heard of POSIX? That's the standard (at least for Linux; Windows does its own thing).
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
This is getting into a little more of a gray area. The answer should be no since the shell should essentially be making system calls. However, everything up until that point is up for grabs.
Are there any other questions I should be asking?
The questions you've asked show that you have a decent understanding of the system. Keep researching and focus on what level you're going to be working at (shell, C, and so on). You'll learn a lot more by just trying it, though.
This is all based on a UNIX-like system; I'm not familiar with the specific behavior of recent versions of Windows.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
Yes, although on a modern machine it may not happen often. The pipe has an intermediate buffer that can potentially fill up. If it does, the write side of the pipe will indeed block. But if you think about it, there aren't a lot of files that are big enough to risk this.
If I write an EOF to the stream, can I continue writing to that stream until I close it?
Um, you mean like a CTRL-D, 0x04? Sure, as long as the stream is set up that way. Viz.
506 # cat | od -c
abc
^D
efg
0000000 a b c \n 004 \n e f g \n
0000012
Are there differences in the behaviour of named and unnamed pipes?
Yes, but they're subtle and implementation dependent. The biggest one is that you can write to a named pipe before the other end is running; with unnamed pipes, the file descriptors get shared during the fork/exec process, so there's no way to access the transient buffer without the processes being up.
Does it matter which end of the pipe I open first with named pipes?
Nope.
Is the behaviour of pipes consistent between different Linux systems?
Within reason, yes. Buffer sizes etc may vary.
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
No. When you create a pipe, under the covers what happens is that your parent process (the shell) creates a pipe, which has a pair of file descriptors, and then does a fork/exec like this pseudocode:
Parent:
    create pipe, returning two file descriptors, call them fd[0] and fd[1]
    fork write-side process
    fork read-side process

Write-side:
    close fd[0]
    connect fd[1] to stdout
    exec writer program

Read-side:
    close fd[1]
    connect fd[0] to stdin
    exec reader program
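Rendered as real C, that pseudocode looks roughly like this (using ls | wc as a placeholder pipeline):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) != 0)        /* fd[0] = read end, fd[1] = write end */
        return 1;

    if (fork() == 0) {        /* write side */
        close(fd[0]);
        dup2(fd[1], STDOUT_FILENO);   /* connect fd[1] to stdout */
        close(fd[1]);
        execlp("ls", "ls", (char *)0);
        _exit(127);
    }
    if (fork() == 0) {        /* read side */
        close(fd[1]);
        dup2(fd[0], STDIN_FILENO);    /* connect fd[0] to stdin */
        close(fd[0]);
        execlp("wc", "wc", (char *)0);
        _exit(127);
    }

    /* The parent must close both ends, or the reader never sees EOF. */
    close(fd[0]);
    close(fd[1]);
    while (wait(NULL) > 0)
        ;
    return 0;
}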
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Is everything you want to do really going to lay out in a line like this? If not, you might want to think about a more general architecture. But the insight that having lots of separate processes interacting through the "narrow" interface of a pipe is desirable is a good one.
[Updated: I had the file descriptor indices reversed at first. They're correct now, see man 2 pipe.]
As Dashogun and Charlie Martin noted, this is a big question. Some parts of their answers are inaccurate, so I'm going to answer too.
I am interested in writing separate program modules that run as independent threads that I could hook together with pipes.
Be wary of trying to use pipes as a communication mechanism between threads of a single process. Because you would have both read and write ends of the pipe open in a single process, you would never get the EOF (zero bytes) indication.
If you were really referring to processes, then this is the basis of the classic Unix approach to building tools. Many of the standard Unix programs are filters that read from standard input, transform it somehow, and write the result to standard output. For example, tr, sort, grep, and cat are all filters, to name but a few. This is an excellent paradigm to follow when the data you are manipulating permits it. Not all data manipulations are conducive to this approach, but there are many that are.
The motivation would be that I could write and test each module completely independently, perhaps even write them in different languages, or run the different modules on different machines.
Good points. Be aware that there isn't really a pipe mechanism between machines, though you can get close to it with programs such as rsh or (better) ssh. However, internally, such programs may read local data from pipes and send that data to remote machines, but they communicate between machines over sockets, not using pipes.
There are a wide variety of possibilities here. I have used piping for a while, but I am unfamiliar with the nuances of its behaviour.
OK; asking questions is one (good) way to learn. Experimenting is another, of course.
It seems like the receiving end will block waiting for input, which I would expect, but will the sending end block sometimes waiting for someone to read from the stream?
Yes. There is a limit to the size of a pipe buffer. Classically, this was quite small - 4096 or 5120 were common values. You may find that modern Linux uses a larger value. You can use fpathconf() and _PC_PIPE_BUF to find out the size of a pipe buffer. POSIX only requires the buffer to be 512 (that is, _POSIX_PIPE_BUF is 512).
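For example, a quick probe using the call mentioned above (note that _PC_PIPE_BUF reports the largest guaranteed-atomic write, which may be smaller than the pipe's total capacity):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pd[2];
    if (pipe(pd) != 0)
        return 1;
    /* At least _POSIX_PIPE_BUF (512) is guaranteed by POSIX. */
    long sz = fpathconf(pd[0], _PC_PIPE_BUF);
    printf("PIPE_BUF for this pipe: %ld\n", sz);
    return 0;
}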
If I write an EOF to the stream, can I continue writing to that stream until I close it?
Technically, there is no way to write EOF to a stream; you close the pipe descriptor to indicate EOF. If you are thinking of control-D or control-Z as an EOF character, then those are just regular characters as far as pipes are concerned - they only have an effect like EOF when typed at a terminal that is running in canonical mode (cooked, or normal).
Are there differences in the behaviour of named and unnamed pipes?
Yes, and no. The biggest differences are that unnamed pipes must be set up by one process and can only be used by that process and children who share that process as a common ancestor. By contrast, named pipes can be used by previously unassociated processes. The next big difference is a consequence of the first; with an unnamed pipe, you get back two file descriptors from a single function (system) call to pipe(), but you open a FIFO or named pipe using the regular open() function. (Someone must create a FIFO with the mkfifo() call before you can open it; unnamed pipes do not need any such prior setup.) However, once you have a file descriptor open, there is precious little difference between a named pipe and an unnamed pipe.
Does it matter which end of the pipe I open first with named pipes?
No. The first process to open the FIFO will (normally) block until there's a process with the other end open. If you open it for reading and writing (unconventional but possible), then you won't be blocked; if you use the O_NONBLOCK flag, you won't be blocked.
Is the behaviour of pipes consistent between different Linux systems?
Yes. I've not heard of or experienced any problems with pipes on any of the systems where I've used them.
Does the behaviour of the pipes depend on the shell I'm using or the way I've configured it?
No: pipes and FIFOs are independent of the shell you use.
Are there any other questions I should be asking or issues I should be aware of if I want to use pipes in this way?
Just remember that you must close the reading end of a pipe in the process that will be writing, and the writing end of the pipe in the process that will be reading. If you want bidirectional communication over pipes, use two separate pipes. If you create complicated plumbing arrangements, beware of deadlock - it is possible. A linear pipeline does not deadlock, however (though if the first process never closes its output, the downstream processes may wait indefinitely).
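A compact sketch of those rules for a parent/child request-reply exchange (the names and the "ping"/"pong" payloads are illustrative): two pipes, each side closing the ends it does not use.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int to_child[2], to_parent[2];
    if (pipe(to_child) != 0 || pipe(to_parent) != 0)
        return 1;

    if (fork() == 0) {                 /* child */
        close(to_child[1]);            /* close write end of inbound pipe */
        close(to_parent[0]);           /* close read end of outbound pipe */
        char buf[32];
        ssize_t n = read(to_child[0], buf, sizeof(buf));
        if (n > 0)
            write(to_parent[1], "pong", 4);
        _exit(0);
    }

    /* parent */
    close(to_child[0]);                /* close read end of outbound pipe */
    close(to_parent[1]);               /* close write end of inbound pipe */
    write(to_child[1], "ping", 4);
    char buf[32];
    ssize_t n = read(to_parent[0], buf, sizeof(buf));
    if (n > 0)
        printf("reply: %.*s\n", (int)n, buf);
    return 0;
}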
I observed, both above and in comments to other answers, that pipe buffers are classically limited to quite small sizes. @Charlie Martin counter-commented that some versions of Unix have dynamic pipe buffers and these can be quite large.
I'm not sure which ones he has in mind. I used the test program that follows on Solaris, AIX, HP-UX, MacOS X, Linux and Cygwin / Windows XP (results below):
#include <unistd.h>
#include <signal.h>
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>

static const char *arg0;

static void err_syserr(const char *str)
{
    int errnum = errno;
    fprintf(stderr, "%s: %s - (%d) %s\n", arg0, str, errnum, strerror(errnum));
    exit(1);
}

int main(int argc, char **argv)
{
    int pd[2];
    pid_t kid;
    size_t i = 0;
    char buffer[2] = "a";
    int flags;

    arg0 = argv[0];
    if (pipe(pd) != 0)
        err_syserr("pipe() failed");
    if ((kid = fork()) < 0)
        err_syserr("fork() failed");
    else if (kid == 0)
    {
        /* Child: hold the read end open without ever reading, then wait. */
        close(pd[1]);
        pause();
        _exit(0);
    }
    /* Parent */
    close(pd[0]);
    /* F_GETFL returns the flags; F_SETFL takes the new flags by value. */
    if ((flags = fcntl(pd[1], F_GETFL)) == -1)
        err_syserr("fcntl(F_GETFL) failed");
    flags |= O_NONBLOCK;
    if (fcntl(pd[1], F_SETFL, flags) == -1)
        err_syserr("fcntl(F_SETFL) failed");
    /* Write one byte at a time until the pipe buffer fills up. */
    while (write(pd[1], buffer, sizeof(buffer) - 1) == sizeof(buffer) - 1)
    {
        putchar('.');
        if (++i % 50 == 0)
            printf("%u\n", (unsigned)i);
    }
    if (i % 50 != 0)
        printf("%u\n", (unsigned)i);
    kill(kid, SIGINT);
    return 0;
}
I'd be curious to get extra results from other platforms. Here are the sizes I found. All the results are larger than I expected, I must confess, but Charlie and I may be debating the meaning of 'quite large' when it comes to buffer sizes.
8196 - HP-UX 11.23 for IA-64 (fcntl(F_SETFL) failed)
16384 - Solaris 10
16384 - MacOS X 10.5 (O_NONBLOCK did not work, though fcntl(F_SETFL) did not fail)
32768 - AIX 5.3
65536 - Cygwin / Windows XP (O_NONBLOCK did not work, though fcntl(F_SETFL) did not fail)
65536 - SuSE Linux 10 (and CentOS) (fcntl(F_SETFL) failed)
One point that is clear from these tests is that O_NONBLOCK works with pipes on some platforms and not on others.
The program creates a pipe, and forks. The child closes the write end of the pipe, and then goes to sleep until it gets a signal - that's what pause() does. The parent then closes the read end of the pipe, and sets the flags on the write descriptor so that it won't block on an attempt to write on a full pipe. It then loops, writing one character at a time, and printing a dot for each character written, and a count and newline every 50 characters. When it detects a write problem (buffer full, since the child is not reading a thing), it stops the loop, writes the final count, and kills the child.
