If I have to move a moderate amount of memory between two processes, I can do the following:
create a file for writing
ftruncate to desired size
mmap and unlink it
use as desired
When another process requires that data, it:
connects to the first process through a unix socket
receives the fd of the file, which the first process sends as a unix socket message (see the sketch below)
mmaps the fd
uses it as desired
This allows us to move memory between processes without any copy, but the file must be created on a memory-backed filesystem (such as tmpfs), otherwise we might take a disk hit, which would degrade performance. Is there a way to do something like this without using a filesystem? A malloc-like function that returned an fd along with a pointer would do it.
[Edit] Having a file descriptor also provides a reference-counting mechanism that is maintained by the kernel.
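For reference, here is a minimal sketch of the fd-passing step over the unix socket, assuming the two processes are already connected; send_fd()/recv_fd() are illustrative helper names, not a standard API:

    /* Sketch: pass an open fd over a connected AF_UNIX socket via SCM_RIGHTS. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int send_fd(int sock, int fd_to_send)
    {
        struct msghdr msg = {0};
        char dummy = '*';                        /* at least one byte of real data */
        struct iovec iov = { &dummy, 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];
        memset(ctrl, 0, sizeof(ctrl));

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;            /* ancillary message carrying an fd */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }

    static int recv_fd(int sock)
    {
        struct msghdr msg = {0};
        char dummy;
        struct iovec iov = { &dummy, 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        if (recvmsg(sock, &msg, 0) <= 0)
            return -1;

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        if (!cmsg || cmsg->cmsg_type != SCM_RIGHTS)
            return -1;

        int fd;
        memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
        return fd;                               /* receiver can now mmap() this fd */
    }

The receiving side then mmap()s the returned fd exactly as described above.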
Is there anything wrong with System V or POSIX shared memory (which are somewhat different, but end up with the same result)? With any such system, you have to worry about coordination between the processes as they access the memory, but that is true with memory-mapped files too.
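For illustration, a minimal sketch of the POSIX shared memory route; the segment name "/my_region" and the 1 MiB size are arbitrary examples (on older glibc, link with -lrt):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define REGION_SIZE (1 << 20)

    int main(void)
    {
        /* Producer: create the segment, size it, map it. */
        int fd = shm_open("/my_region", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        if (ftruncate(fd, REGION_SIZE) < 0) return 1;

        void *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;

        /* ... fill the region; another process calls shm_open("/my_region", O_RDWR, 0)
         * and mmap()s the same segment.  Alternatively, pass fd over a unix socket
         * as in the question and skip the name lookup entirely. */

        munmap(p, REGION_SIZE);
        close(fd);
        shm_unlink("/my_region");   /* segment persists until the last mapping goes away */
        return 0;
    }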
Stdin and stdout are single files that are shared by multiple processes to take in input from the users. So how does the OS make sure that only the input given to a particular program is visible in the stdin for that program?
Your assumption that stdin/stdout (while having the same logical name) are shared among all processes is wrong at best.
stdin/stdout are logical names for open files that are forwarded (or initialized) by the process that starts a given process. Actually, with the standard fork-and-exec pattern, the setup of those may already occur in the new process (after fork) before exec is called.
stdin/stdout are usually just inherited from the parent. So yes, there exist groups of processes that share stdin and/or stdout for a given file node.
Also, as a file descriptor may be one end of a pipe, none of the well-known standard channels need to be linked to a file from a filesystem (or to a device node) at all (you should also include stderr in your considerations).
The normal way of setup is:
the parent (e.g. your shell) calls fork
the forked process (child) sets up the environment, the standard I/O channels, and anything else
the child then calls exec to overlay the process with the target image to be executed
when setting up, it will either keep the existing channels or replace them with new ones, e.g. by creating a pipe and linking the endpoints appropriately (to be honest, creating the pipe needs to happen before the fork in this simplified description)
This way, most processes end up with their own I/O channels (a minimal sketch of this setup follows below).
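Here is that sketch, assuming the parent wants the child's stdin fed from a pipe; sort is just an example target program:

    /* Parent creates a pipe before fork(); the child rewires its stdin to the
     * pipe's read end before exec()ing the target image. */
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int pipefd[2];
        if (pipe(pipefd) < 0) return 1;           /* must happen before fork() */

        pid_t pid = fork();
        if (pid < 0) return 1;

        if (pid == 0) {                           /* child */
            dup2(pipefd[0], STDIN_FILENO);        /* stdin now reads from the pipe */
            close(pipefd[0]);
            close(pipefd[1]);
            execlp("sort", "sort", (char *)NULL); /* overlay with the target image */
            _exit(127);                           /* only reached if exec fails */
        }

        /* parent: keeps its own stdin/stdout and writes into the child's stdin */
        close(pipefd[0]);
        write(pipefd[1], "banana\napple\n", 13);
        close(pipefd[1]);                         /* EOF for the child */
        return 0;
    }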
Nevertheless, multiple processes may write into a channel they are connected to (i.e. have a valid file descriptor for). When reading, each chunk of data (usually lines with terminals or blocks with files) is read by a single reader only. So if you have several (running) processes reading from a terminal as stdin, only one will read your typing, while the other(s) will not see that typing at all.
I have some questions on performing file I/O using MPI.
A set of files is distributed across different processes.
I want the processes to be able to read the files owned by the other processes.
For example, in one-sided communication, each process exposes a window visible to the other processes. I need exactly the same functionality. (Create 'windows' for all files and share them so that any process can read any file from any offset.)
Is this possible in MPI? I have read a lot of MPI documentation, but couldn't find exactly this.
The simple answer is that you can't do that automatically with MPI.
You can convince yourself by seeing that MPI_File_open() is a collective call taking an intra-communicator as its first argument and returning a file handle to the opened file as its last argument. Within this communicator, all processes open the file, and therefore all processes must see the file. So unless a process sees a file, it cannot get an MPI_File handle to access it.
Now, that doesn't mean there's no solution. A possibility could be to do by hand exactly what you described (a sketch follows the list), namely:
Each MPI process individually opens the file it sees and is responsible for; then
Each of these processes reads its local file into a buffer;
These individual buffers are all exposed, using either one global MPI_Win memory window or several individual ones, ready for one-sided read accesses; and finally
All read accesses to data that was previously stored in these individual local files are now done through MPI_Get() calls using the memory window(s).
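For illustration, a rough sketch of the buffer-plus-window part, assuming each rank has already read its local file into local_buf; the function name and parameters are placeholders, not part of any library:

    /* Every rank exposes its locally read file content in an MPI window;
     * any rank can then MPI_Get() from any other rank's buffer.
     * Must be called collectively by all ranks of comm. */
    #include <mpi.h>

    void expose_and_fetch(MPI_Comm comm, char *local_buf, MPI_Aint local_size,
                          int target_rank, MPI_Aint offset, int count, char *out)
    {
        MPI_Win win;

        /* 1. expose the local buffer (collective) */
        MPI_Win_create(local_buf, local_size, 1, MPI_INFO_NULL, comm, &win);

        /* 2. one-sided read from target_rank's buffer at the given offset */
        MPI_Win_fence(0, win);
        MPI_Get(out, count, MPI_CHAR, target_rank, offset, count, MPI_CHAR, win);
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
    }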
The true limitation of this approach is that it requires reading each of the individual files fully; therefore, you need sufficient memory per node to store all of them. I'm well aware that this is a very big caveat that could make the solution completely impractical. However, if the memory is sufficient, this is an easy approach.
Another even simpler solution would be to store the files on a shared file system, or to copy them all onto every local file system. I imagine this isn't an option, since the question wouldn't have been asked otherwise...
Finally, as a last resort, a possibility I see would be to dedicate one MPI process (or an OpenMP thread of an MPI process) per node to serve the files. This process would just act as a "file server", answering "read" requests coming from the other MPI processes, and serving them by reading the requested data from the file and sending it back via MPI. It's a bit lengthy to write, but it should work.
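A rough sketch of what such a server loop could look like; the request struct, the tags and serve_file() are made up for illustration, and the request is sent as raw bytes for brevity (a real version would use a proper MPI datatype):

    #include <mpi.h>
    #include <stdio.h>

    #define TAG_REQ  1
    #define TAG_DATA 2

    struct read_req { long offset; int length; };

    void serve_file(const char *path, MPI_Comm comm)
    {
        FILE *f = fopen(path, "rb");
        char buf[65536];
        struct read_req req;
        MPI_Status st;

        for (;;) {
            /* wait for a read request from any rank */
            MPI_Recv(&req, (int)sizeof(req), MPI_BYTE, MPI_ANY_SOURCE, TAG_REQ, comm, &st);
            if (req.length < 0) break;                        /* shutdown convention */
            if (req.length > (int)sizeof(buf)) req.length = (int)sizeof(buf);

            fseek(f, req.offset, SEEK_SET);
            size_t n = fread(buf, 1, (size_t)req.length, f);

            /* send the requested data back to whoever asked for it */
            MPI_Send(buf, (int)n, MPI_BYTE, st.MPI_SOURCE, TAG_DATA, comm);
        }
        fclose(f);
    }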
I'm studying for a test in OS (Unix is our model).
I have the following question:
Which of the following two does NOT cause the user's program to stop and switch to OS code?
A. the program found an error and is printing it to the screen.
B. the program allocated memory that will be read later on from the disk.
Well, I have answers; however, I'm not sure how good they are.
They say the answer is B.
But B is when the user uses malloc, which is a system call, no? Doesn't allocating memory go through the OS?
And why should printing to the screen need the OS?
Thanks for your help.
malloc is not a system call. It's just a function.
When you call malloc, it checks whether it (internally) has enough memory to give you. If it does, it just returns the address; there is no need to trap into kernel mode. If it doesn't, it asks the operating system (indeed a system call).
Depending on how printing is done, that too may or may not elicit a system call. For instance, if you use stdio, printing is user-buffered. What that means is that a printf just copies into some stdio buffer without any actual I/O. However, if printf decides to flush, then a system call must indeed be performed.
printf() and malloc() calls invoke the C runtime library (libc). The C runtime library is a layer on top of the kernel, and may end up calling the kernel depending on circumstances.
The kernel provides somewhat primitive memory allocation via brk() (extend/shrink the data segment), and mmap() (map pages of memory into the process virtual address space). Libc's malloc() internally manages memory it has obtained from the kernel, and tries to minimize system calls (among other things, it also tries to avoid excessive fragmentation, and tries to have good performance on multi-threaded programs, so has to make some compromises).
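A small program you could run under strace to observe this; the exact thresholds and behavior are allocator-specific (the comments assume something like glibc's malloc):

    #include <stdlib.h>

    int main(void)
    {
        /* Small allocations are usually served from memory malloc already
         * obtained from the kernel, so after a warm-up no system call is made. */
        for (int i = 0; i < 1000; i++) {
            void *p = malloc(64);
            free(p);
        }

        /* A sufficiently large allocation is typically handed straight to mmap(),
         * and freeing it typically shows up as an munmap(). */
        void *big = malloc(16 * 1024 * 1024);
        free(big);
        return 0;
    }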
stdio input/output (via *printf()/*scanf()) is buffered, and ends up calling the kernel's write()/read() system calls. By default, stderr (the error stream) is unbuffered or line-buffered (ISO C §7.19.3 ¶7), so that errors can be seen immediately. stdin and stdout are line-buffered or unbuffered unless it can be determined that they aren't attached to an interactive device; they can be fully buffered (block-buffered) if they refer to a disk file or another non-interactive stream.
That means that error output is by default guaranteed to be seen as soon as you output a '\n' (newline) character (unless you use setbuf()/setvbuf()). Normal output additionally requires being connected to a terminal or other interactive device to provide that guarantee.
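A small demonstration of those buffering rules, assuming a typical Unix-like libc; run it once with stdout on a terminal and once redirected to a file to see the difference:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        fprintf(stderr, "error text");   /* stderr: typically unbuffered, visible at once */
        printf("partial line");          /* terminal: held until '\n'; file: held until the
                                            buffer fills, a flush, or exit */
        sleep(5);                        /* observe which text is visible during the pause */
        printf(" ...done\n");            /* the newline flushes line-buffered stdout */
        return 0;
    }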
In A, the user program is responsible for detecting the error and deciding how to present that information. However, in most cases actually rendering characters to a display device or terminal will involve an OS call at some point.
In B, the OS is certainly responsible for memory management; allocation may at some point request memory from the OS, or the OS may have to provide disk swapping.
So the answer is probably strictly neither. But A will require a system call, whereas B may require a system call.
The answer is A. Handling an error after it is detected is done by the programming language runtime and the user-space application. On the other hand, mmap'ing a file requires entering kernel mode to allocate the necessary pages and queue up any disk I/O. So B is definitely not the correct option.
I have an executable program which outputs data to the hard disk, e.g. C:\documents.
I need some means to intercept the data in Windows 7 before it gets to the hard drive. Then I will encrypt the data and send it on to the hard disk. Unfortunately, the .exe file does not support the redirection operator, i.e. > at the command prompt. Do you know how I can achieve such a thing in any programming language (C, C++, Java, PHP)?
The encryption can only be done before the plain data is sent to the disk, not after.
Any ideas most welcome. Thanks
This is virtually impossible in general. Many programs write to disk using memory-mapped files: a memory range is mapped to (part of) a file, and writes to the file can't be distinguished from writes to memory. A statement like p[OFFSET_OF_FIELD_X] = 17; is logically a write to the file. Furthermore, the OS keeps track of the synchronization of memory and disk. Not all logical writes to memory are directly translated into physical writes to disk; from time to time, at the whim of the OS, dirty memory pages are copied back to disk.
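To make that concrete, here is a minimal sketch of a memory-mapped write on Windows; the path is just an example and error handling is reduced to early returns:

    #include <windows.h>

    int main(void)
    {
        HANDLE file = CreateFileA("C:\\documents\\data.bin",
                                  GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                  OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE,
                                            0, 4096, NULL);  /* back 4096 bytes of the file */
        if (!mapping) return 1;

        unsigned char *p = MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, 4096);
        if (!p) return 1;

        p[17] = 42;    /* a logical write to the file: just a memory store, no WriteFile() call to hook */

        UnmapViewOfFile(p);       /* dirty pages reach the disk at the OS's discretion */
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }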
Even in the simpler case of CreateFile/WriteFile, there's little room to intercept the data on the fly. The closest you could come is using Microsoft Detours. I know of at least one snake-oil encryption program (WxVault, crapware shipped on Dells) that does that. It repeatedly crashed my application in the field, which is why my program unpatches any attempt to intercept data on the fly. So not even such hacks are robust against programs that dislike the interference.
I've been researching a number of networking libraries and frameworks lately such as libevent, libev, Facebook Tornado, and Concurrence (Python).
One thing I notice in their implementations is the use of application-level per-client read/write buffers (e.g. IOStream in Tornado) -- even HAProxy has such buffers.
In addition to these application-level buffers, there's the OS kernel TCP implementation's buffers per socket.
I can understand the app/lib's use of a read buffer I think: the app/lib reads from the kernel buffer into the app buffer and the app does something with the data (deserializes a message therein for instance).
However, I have confused myself about the need/use of a write buffer. Why not just write to the kernel's send/write buffer? Is it to avoid the overhead of system calls (write)? I suppose the point is to be ready with more data to push into the kernel's write buffer when the kernel notifies the app/lib that the socket is "writable" (e.g. EPOLLOUT). But, why not just do away with the app write buffer and configure the kernel's TCP write buffer to be equally large?
Also, consider a service for which disabling the Nagle algorithm makes sense (e.g. a game server). In such a configuration, I suppose I'd want the opposite: no kernel write buffering but an application write buffer, yes? When the app is ready to send a complete message, it writes the app buffer via send() etc. and the kernel passes it through.
Help me clear up my understanding of this if you would. Thanks!
Well, speaking for haproxy, it makes no distinction between read and write buffers; a single buffer is used for both purposes, which saves a copy. However, it makes some changes really painful. For instance, sometimes you have to rewrite an HTTP header, and you have to manage to move the data correctly for your rewrite and to save some state about the previous header's value. In haproxy, the Connection header can be rewritten, and its previous and new states are saved because they are needed later, after the rewrite. With separate read and write buffers, you don't have this complexity, as you can always look back in your read buffer if you need any original data.
Haproxy is also able to make use of splicing between sockets on Linux. This means that it neither reads nor writes the data itself; it just tells the kernel what to take from where and where to move it. The kernel then moves pointers instead of copying data to transfer TCP segments from one network card to another (when possible), and the data is never transferred to user space, thus avoiding a double copy.
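For illustration, the rough shape of socket-to-socket splicing on Linux; this is a simplified sketch, not haproxy's actual code (a real proxy would keep the pipe around and loop on partial transfers):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* Forward up to 'len' bytes from src_sock to dst_sock without copying
     * the data into user space; returns the number of bytes moved or -1. */
    ssize_t forward(int src_sock, int dst_sock, size_t len)
    {
        int pipefd[2];
        if (pipe(pipefd) < 0) return -1;

        /* socket -> pipe: kernel moves page references, no user-space copy */
        ssize_t in = splice(src_sock, NULL, pipefd[1], NULL, len,
                            SPLICE_F_MOVE | SPLICE_F_MORE);
        ssize_t out = -1;
        if (in > 0)
            /* pipe -> socket: still no user-space copy */
            out = splice(pipefd[0], NULL, dst_sock, NULL, (size_t)in,
                         SPLICE_F_MOVE | SPLICE_F_MORE);

        close(pipefd[0]);
        close(pipefd[1]);
        return out;
    }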
You're completely right about the fact that in general you don't need to copy data between buffers. It's a waste of memory bandwidth. Haproxy runs at 10Gbps with 20% CPU with splicing, but without splicing (2 more copies), it's close to 100%. But then consider the complexity of the alternatives, and make your choice.
Hoping this helps.
When you use asynchronous socket I/O, the read/write operation returns immediately. Since an asynchronous operation does not guarantee that all the data is handled in a single invocation (i.e. that all the required data is put into, or taken from, the TCP socket buffer at once), the remaining data must outlive a single call and be carried across multiple operations. That is why you need application buffer space to keep the data for as long as the I/O operations last.
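A minimal sketch of that pattern with a non-blocking socket; the outbuf struct and flush_outbuf() are made up for illustration:

    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    struct outbuf { char data[65536]; size_t len; };   /* application write buffer */

    /* Try to flush; whatever the kernel does not take stays in the app buffer
     * until the event loop reports the socket writable again (e.g. EPOLLOUT). */
    int flush_outbuf(int sock, struct outbuf *b)
    {
        while (b->len > 0) {
            ssize_t n = send(sock, b->data, b->len, 0);
            if (n > 0) {
                memmove(b->data, b->data + n, b->len - (size_t)n);  /* drop what was sent */
                b->len -= (size_t)n;
            } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                return 0;   /* kernel send buffer is full: keep the rest, wait for writable */
            } else {
                return -1;  /* real error */
            }
        }
        return 1;           /* everything handed to the kernel */
    }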