My program has two processes. One process writes to a FIFO (named pipe) and must wait until the other process has read from the FIFO; it then waits for that process to return the result (written to a FIFO to be read by the first process).
My question is: how can I know that the data has been read from the FIFO by the other process, so that I can then call read() for the result?
Most UNIXes have bidirectional pipes (man pipe).
Linux, IIRC, hasn't got those, so you need to use socketpair(), which can conveniently use UNIX domain sockets, giving roughly the same functionality.
In my experience, porting code often required little else beyond replacing the call to pipe() with a call to socketpair().
Related
Stdin and stdout are single files that are shared by multiple processes to take in input from the users. So how does the OS make sure that only the input given to a particular program is visible in the stdin for that program?
Your assumption that stdin/stdout (while having the same logical name) are shared among all processes is wrong at best.
stdin/stdout are logical names for open files that are forwarded (or initialized) by the process that started the given process. In fact, with the standard fork-and-exec pattern, the setup of those channels may already occur in the new process (after fork) before exec is called.
stdin/stdout are usually just inherited from the parent. So, yes, there exist groups of processes that share stdin and/or stdout for a given file node.
Also, as a file descriptor may be one side of a pipe, none of the well-known standard channels need be linked to a file from a filesystem (or a device node); you should also include stderr in your considerations.
The normal way of setup is:
the parent (e.g. your shell) is calling fork
the forked process (child) is setting up environment, standard I/O channels and anything else.
the child then executes exec to overlay the process with the target image to be executed.
When setting up, the child will either keep the existing channels or replace them with new ones, e.g. by creating a pipe and linking the endpoints appropriately. (To be honest, in this simplified description creating the pipe needs to happen before the fork.)
This way, most processes have their own I/O channels.
Nevertheless, multiple processes may write into a channel they are connected to (have a valid file descriptor to). When reading, each chunk of data (usually lines with terminals, or blocks with files) is read by a single reader only. So if you have several running processes reading from a terminal as stdin, only one will read your typing, while the others will not see that typing at all.
Why is there a limitation that with pipe() only parent and child processes can communicate, and not unrelated processes?
Why can't two children of a process communicate using pipe()?
There is indeed a limitation.
A pipe is read and written through file descriptors, and file descriptors are per-process: each process maintains an fd table, a child inherits that table on fork(), and each inherited fd refers to the same open file as in the parent, maintained by the kernel.
Processes that communicate via the same pipe should therefore be related.
That is, both processes must hold the two fds of the pipe.
TLPI (The Linux Programming Interface) says:
The pipe should be created by a common ancestor before the series of fork() calls that led to the existence of the processes.
There is no such limitation. Any two processes which have a means of obtaining references to each end of the pipe can communicate. A process can even communicate with itself using a pipe.
Any process could obtain a reference to one of the ends of a pipe using any of the following generic means of communicating file descriptors between processes. Pipes are not special in this respect.
The process itself called pipe() and obtained file descriptors for both ends.
The process received the file descriptor as SCM_RIGHTS ancillary data through a socket.
The process obtained the file descriptor from another arbitrary process using platform-specific means like /proc/<pid>/fd on Linux.
(There might be other methods.)
The process inherited the file descriptor from an ancestor (direct or indirect) that obtained it using one of the aforementioned methods.
If I have to move a moderate amount of memory between two processes, I can do the following:
create a file for writing
ftruncate to desired size
mmap and unlink it
use as desired
When another process requires that data, it:
connects to the first process through a unix socket
the first process sends the fd of the file through a unix socket message
mmap the fd
use as desired
This allows us to move memory between processes without any copying, but the file created must be on a memory-backed filesystem; otherwise we might get a disk hit, which would degrade performance. Is there a way to do something like that without using a filesystem? A malloc-like function that returned an fd along with a pointer would do it.
[Edit] Having a file descriptor also provides a reference-counting mechanism that is maintained by the kernel.
Is there anything wrong with System V or POSIX shared memory (which are somewhat different, but end up with the same result)? With any such system, you have to worry about coordination between the processes as they access the memory, but that is true with memory-mapped files too.
I have two processes, A and B. B is a process that performs some functions. Process A is the one that controls B, i.e. process A instructs process B by providing data (control and functional) to it.
I have a thread in B dedicated to IPC. All that thread does is get instructions from process A, while the other running threads do whatever they have to with the already existing data.
I thought of pipes and shared memory using shmat, but I am not satisfied. I want the IPC thread in B to wake up only when process A writes a message to B. Any idea how to achieve this?
The specifics sort of depend on what kind of flexibility you need and who is using which pipes, but this should work: have process B's IPC thread select() for readability on the pipe. When process A writes to the pipe, process B's IPC thread will be woken.
I found a solution. I made one of the threads open one end of the pipe for reading, do the actual read, and close it. This goes on in an infinite while loop.
The process that wants to write to it opens it only when it needs to write, then closes it, and will eventually end.
In fact, this setup avoids synchronisation issues as well. But I don't know what its consequences are in terms of performance!
Suppose you have two sockets (each connected to a different TCP peer), both residing in the same process. How can these sockets be bound together, so that the input stream of each is connected to the output stream of the other? The sockets will carry data continuously; no waiting should happen. Normally a thread could solve this, but rather than creating threads, is there a more efficient way of piping the sockets together?
If you need to connect both ends of the socket to the same process, use the pipe() function instead. This function returns two file descriptors, one used for writing and the other used for reading. There isn't really any need to involve TCP for this purpose.
Update: Based on your clarification of your use case, no, there isn't any way to tell the OS to connect the ends of two different sockets together. You will have to write code to read from one socket and write the same data to the other. Depending on the architecture of your process, you may or may not need an additional thread to do this work. For example, if your application is based on a select() loop, then creating another thread is not necessary.
You can avoid threads with an event queue within the process. The Wikipedia "Message queue" article assumes you want interprocess message passing, but if you are using sockets, you are in effect doing message passing within the same process.