I have some questions about performing file I/O with MPI.
A set of files is distributed across different processes.
I want each process to be able to read the files held by the other processes.
For example, in one-sided communication, each process exposes a window visible to the other processes. I need exactly the same functionality. (Create 'windows' for all files and share them so that any process can read any file from any offset.)
Is it possible in MPI? I have read a lot of MPI documentation, but couldn't find the exact feature.
The simple answer is that you can't do that automatically with MPI.
You can convince yourself of this by noting that MPI_File_open() is a collective call taking an intra-communicator as its first argument and returning a file handle to the opened file as its last argument. All processes in this communicator open the file, and therefore all processes must be able to see the file. So unless a process can see a file, it cannot get an MPI_File handle to access it.
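For reference, this is the prototype of the call as defined by the MPI standard, showing the communicator it is collective over and the file handle it returns:

    /* collective over comm: every rank in comm must be able to reach the file */
    int MPI_File_open(MPI_Comm comm, const char *filename, int amode,
                      MPI_Info info, MPI_File *fh);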
Now, that doesn't mean there's no solution. A possibility could be to do by hand exactly what you described, namely:
Each MPI process individually opens the file it sees and is responsible for; then
Each of these processes reads its local file into a buffer;
These individual buffers are all exposed, using either a single global MPI_Win memory window or several individual ones, ready for one-sided read access; and finally
All read accesses to data previously stored in these individual local files are now done through MPI_Get() calls using the memory window(s), as sketched below.
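A minimal sketch of that approach, assuming every file fits in memory and a hypothetical per-rank naming scheme such as file_<rank>.dat (error handling omitted):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: each rank loads its own local file into a buffer and exposes
     * it through an MPI window; any rank can then read remote file data
     * with MPI_Get. Assumes every file fits in memory and that
     * "file_%d.dat" is the (hypothetical) naming scheme. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* 1. Each rank opens and reads the file it is responsible for. */
        char name[64];
        snprintf(name, sizeof(name), "file_%d.dat", rank);
        FILE *f = fopen(name, "rb");
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);
        char *buf = malloc(size);
        fread(buf, 1, size, f);
        fclose(f);

        /* 2. Expose the buffer in a window for one-sided access. */
        MPI_Win win;
        MPI_Win_create(buf, size, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* 3. Any rank can now read any offset of any "file", e.g. 128 bytes
         *    from offset 0 of the file held by rank 0 (assumes that file is
         *    at least 128 bytes long). */
        char chunk[128];
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        MPI_Get(chunk, 128, MPI_BYTE, 0, 0, 128, MPI_BYTE, win);
        MPI_Win_unlock(0, win);

        MPI_Win_free(&win);
        free(buf);
        MPI_Finalize();
        return 0;
    }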
The true limitation of this approach is that it requires reading all of the individual files in full, so you need enough memory per node to store each of them. I'm well aware that this is a very big caveat that could make the solution completely impractical. However, if the memory is sufficient, this is an easy approach.
Another, even simpler, solution would be to store the files on a shared file system, or to copy them all onto every local file system. I imagine this isn't an option, since otherwise the question wouldn't have been asked...
Finally, as a last resort, a possibility I see would be to dedicate one MPI process (or an OpenMP thread of an MPI process) per node to serve its files. This process would just act as a "file server", answering "read" requests coming from the other MPI processes by reading the requested data from the file and sending it back via MPI. It's a bit lengthy to write, but it should work.
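A rough sketch of such a file-server loop, with placeholder tags, request layout and buffer size (termination and error handling omitted):

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch of the "file server" idea: one dedicated rank loops on read
     * requests {offset, length} and replies with the bytes read from its
     * local file. Tags, the file path and the request layout are
     * placeholders. */
    #define TAG_REQ  1
    #define TAG_DATA 2

    void serve_file(const char *path)
    {
        FILE *f = fopen(path, "rb");
        for (;;) {
            long req[2];                   /* req[0] = offset, req[1] = length */
            MPI_Status st;
            MPI_Recv(req, 2, MPI_LONG, MPI_ANY_SOURCE, TAG_REQ,
                     MPI_COMM_WORLD, &st);

            char buf[65536];               /* assumes length <= 64 KiB */
            fseek(f, req[0], SEEK_SET);
            size_t n = fread(buf, 1, (size_t)req[1], f);
            MPI_Send(buf, (int)n, MPI_BYTE, st.MPI_SOURCE, TAG_DATA,
                     MPI_COMM_WORLD);
        }
    }

    /* A client would do the matching pair:
     *   long req[2] = { offset, length };
     *   MPI_Send(req, 2, MPI_LONG, server_rank, TAG_REQ, MPI_COMM_WORLD);
     *   MPI_Recv(buf, length, MPI_BYTE, server_rank, TAG_DATA,
     *            MPI_COMM_WORLD, MPI_STATUS_IGNORE);
     */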
Stdin and stdout are single files that are shared by multiple processes to take input from the user. So how does the OS make sure that only the input given to a particular program is visible on stdin for that program?
Your assumption that stdin/stdout (while having the same logical name) are shared among all processes is wrong at best.
stdin/stdout are logical names for open files that are forwarded (or initialized) by the process that started a given process. Actually, with the standard fork-and-exec pattern, the setup of those may already occur in the new process (after fork) before exec is called.
stdin/stdout are usually just inherited from the parent. So, yes, there exist groups of processes that share stdin and/or stdout for a given file node.
Also, as a file descriptor may be one end of a pipe, there need not be a file from a filesystem (or a device node) linked to any of the well-known standard channels (you should also include stderr in your considerations).
The normal way of setup is:
the parent (e.g. your shell) calls fork;
the forked process (the child) sets up the environment, the standard I/O channels and anything else;
the child then calls exec to overlay the process with the target image to be executed.
When setting up, the child either keeps the existing channels or replaces them with new ones, e.g. by creating a pipe and linking the endpoints appropriately. (To be honest, creating the pipe needs to happen before the fork in this simplified description.)
This way, most processes have their own I/O channels.
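As an illustration of that sequence, here is a minimal sketch in C that creates a pipe before fork, makes it the child's stdin via dup2, and then execs a target image (cat is used purely as an arbitrary example):

    #include <unistd.h>
    #include <sys/wait.h>

    /* Illustrative sketch of the setup described above: the parent creates
     * a pipe *before* fork, the child makes the pipe's read end its stdin
     * and then exec()s the target image. The child therefore gets a
     * private stdin instead of sharing the parent's terminal. */
    int main(void)
    {
        int pipefd[2];
        pipe(pipefd);                       /* create the pipe before fork */

        pid_t pid = fork();
        if (pid == 0) {                     /* child */
            dup2(pipefd[0], STDIN_FILENO);  /* replace stdin with pipe read end */
            close(pipefd[0]);
            close(pipefd[1]);
            execlp("cat", "cat", (char *)NULL);  /* overlay with target image */
            _exit(127);                     /* only reached if exec fails */
        }

        /* parent: write into the child's stdin through the pipe */
        close(pipefd[0]);
        const char msg[] = "hello from the parent\n";
        (void)write(pipefd[1], msg, sizeof(msg) - 1);
        close(pipefd[1]);
        waitpid(pid, NULL, 0);
        return 0;
    }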
Nevertheless, multiple processes may write into a channel they are connected to (i.e. have a valid file descriptor for). When reading, each chunk of data (usually lines with terminals, or blocks with files) is read by a single reader only. So if you have several (running) processes reading from a terminal as stdin, only one will see what you type, while the other(s) will not see this input at all.
I'm having an issue with an MPI program running across a group of Linux nodes. The group is currently set up with NFS, with /home/mpi mounted across all nodes. The problem is that the program requires all of the nodes to open a file in the file system in write mode (using fopen on /home/mpi/file) and write to it while it does calculations. One node will be able to open it, and the others won't and will throw an error. Instead I want each node to have its own file to write to.
I was wondering if there was a way to get around this. I was thinking about making a separate file for each node, with the node's rank appended to the filename, but was wondering if there were simpler ways to get around this issue. Is there a way to set up the group so that all the worker nodes have their own copy of the /home/mpi directory that is auto-updated with any changes the master node makes to its copy?
Thanks.
As far as I know, the standard way of doing things is to open one file per node, indexed by rank as you described. Depending on what these files are used for (e.g. logging), you then have to write a script to re-combine them at the end of the computation.
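A minimal sketch of that convention inside an already-initialized MPI program; the path and naming scheme are just examples:

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch of the one-file-per-process convention: each rank opens its
     * own output file, named after its rank, under the shared mount.
     * Call after MPI_Init. */
    void open_per_rank_file(void)
    {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char path[128];
        snprintf(path, sizeof(path), "/home/mpi/output.%d", rank);

        FILE *out = fopen(path, "w");   /* unique name per rank: no contention */
        fprintf(out, "rank %d reporting\n", rank);
        fclose(out);
    }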
If you really need all processes to write to the same file on the filesystem, you'll have to somehow coordinate concurrent outputs from all processes wanting to write to the file.
There is no way to do this at the filesystem level as far as I know, but you can do this within your MPI code. The standard, historical implementation of this is to have all MPI processes send messages to rank 0, which is in charge of actually writing them to the filesystem.
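A sketch of that funnelling pattern, assuming every rank contributes one line of bounded size (tag, buffer size and file name are placeholders):

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch: every rank sends its line to rank 0, which is the only
     * process that touches the shared file. */
    #define TAG_LINE 7
    #define MAX_LINE 256

    void write_through_rank0(const char *line, int rank, int nprocs)
    {
        if (rank == 0) {
            FILE *f = fopen("/home/mpi/file", "a");
            fprintf(f, "%s", line);                    /* rank 0's own line */
            for (int src = 1; src < nprocs; src++) {   /* everyone else's */
                char buf[MAX_LINE];
                MPI_Recv(buf, MAX_LINE, MPI_CHAR, src, TAG_LINE,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                fprintf(f, "%s", buf);
            }
            fclose(f);
        } else {
            MPI_Send((void *)line, (int)strlen(line) + 1, MPI_CHAR,
                     0, TAG_LINE, MPI_COMM_WORLD);
        }
    }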
Another option would be to look at the I/O features introduced in MPI-2, which allow all processes to work on different parts of the same file.
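A minimal sketch of that MPI-2 I/O approach, where every rank writes a fixed-size record at its own, non-overlapping offset in one shared file (file name and record size are placeholders):

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch: all ranks open the same file collectively and each writes
     * at its own offset, so no external coordination is needed. */
    void write_shared_file(int rank)
    {
        char record[64] = {0};
        snprintf(record, sizeof(record), "rank %d\n", rank);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "/home/mpi/file",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        MPI_Offset offset = (MPI_Offset)rank * (MPI_Offset)sizeof(record);
        MPI_File_write_at(fh, offset, record, (int)sizeof(record), MPI_CHAR,
                          MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
    }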
If I have to move a moderate amount of memory between two processes, I can do the following:
create a file for writing
ftruncate to desired size
mmap and unlink it
use as desired
When another process requires that data, it:
connects to the first process through a unix socket
the first process sends the fd of the file through a unix socket message
mmap the fd
use as desired
This allows us to move memory between processes without any copy - but the file created must be on a memory-mounted filesystem, otherwise we might get a disk hit, which would degrade performance. Is there a way to do something like that without using a filesystem? A malloc-like function that returned a fd along with a pointer would do it.
[Edit] Having a file descriptor also provides a reference-counting mechanism that is maintained by the kernel.
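For illustration, a sketch of the producer-side steps just described, assuming a tmpfs-backed path such as /dev/shm (as on most Linux systems) so the pages never hit a disk:

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Sketch: create a file on a memory-backed filesystem, size it with
     * ftruncate, mmap it and unlink it. The returned fd is what would
     * later be passed to the peer over a unix socket (SCM_RIGHTS) so
     * that it can mmap the same pages. */
    int make_shared_region(size_t size, void **out)
    {
        char tmpl[] = "/dev/shm/region-XXXXXX";   /* assumed tmpfs mount */
        int fd = mkstemp(tmpl);
        if (fd < 0) return -1;

        if (ftruncate(fd, (off_t)size) != 0) { close(fd); return -1; }

        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        unlink(tmpl);                    /* name gone; fd keeps the file alive */
        if (p == MAP_FAILED) { close(fd); return -1; }

        *out = p;
        return fd;                       /* keep: this is what gets sent */
    }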
Is there anything wrong with System V or POSIX shared memory (which are somewhat different, but end up with the same result)? With any such system, you have to worry about coordination between the processes as they access the memory, but that is true with memory-mapped files too.
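A minimal sketch using POSIX shared memory (the object name is an arbitrary example; on some systems you need to link with -lrt):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Sketch: both processes agree on a name (e.g. "/demo_region"), one
     * of them creates and sizes the object, and each maps it into its
     * address space. No regular filesystem is involved; the object lives
     * in memory. */
    void *open_region(const char *name, size_t size, int create)
    {
        int flags = O_RDWR | (create ? O_CREAT : 0);
        int fd = shm_open(name, flags, 0600);
        if (fd < 0) return NULL;

        if (create && ftruncate(fd, (off_t)size) != 0) { close(fd); return NULL; }

        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                      /* mapping stays valid after close */
        return p == MAP_FAILED ? NULL : p;
    }

    /* The creator should eventually call shm_unlink(name) to remove the object. */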
I have an executable program which outputs data to the hard disk, e.g. C:\documents.
I need some means to intercept the data in Windows 7 before it gets to the hard drive. Then I will encrypt the data and send it on to the hard disk. Unfortunately, the .exe file does not support redirection, i.e. > in the command prompt. Do you know how I can achieve such a thing in any programming language (C, C++, Java, PHP)?
The encryption can only be done before the plain data is sent to the disk not after.
Any ideas most welcome. Thanks
This is virtually impossible in general. Many programs write to disk using memory-mapped files. In such a scheme, a memory range is mapped to (part of) a file, and writes to the file can't be distinguished from writes to memory. A statement like p[OFFSET_OF_FIELD_X] = 17; is logically a write to the file. Furthermore, the OS keeps track of the synchronization of memory and disk: not all logical writes to memory are directly translated into physical writes to disk. From time to time, at the whim of the OS, dirty memory pages are copied back to disk.
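To make that concrete, here is a small illustration with the Win32 file-mapping API: once the view is mapped, the write is an ordinary pointer store, with no WriteFile call left to hook (file name, size and offset are placeholders; error checks omitted):

    #include <windows.h>

    /* After mapping a file, a plain memory store is logically a write to
     * that file; the OS flushes the dirty page whenever it chooses. */
    void mapped_write_example(void)
    {
        HANDLE file = CreateFileA("C:\\documents\\data.bin",
                                  GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                  OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE,
                                            0, 4096, NULL);
        unsigned char *p = (unsigned char *)MapViewOfFile(mapping,
                                            FILE_MAP_WRITE, 0, 0, 4096);

        p[42] = 17;   /* an ordinary memory store -- yet it ends up in the
                         file when the OS writes the dirty page back */

        UnmapViewOfFile(p);
        CloseHandle(mapping);
        CloseHandle(file);
    }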
Even in the simpler case of CreateFile/WriteFile, there's little room to intercept the data on the fly. The closest you could get is using Microsoft Detours. I know of at least one snake-oil encryption program (WxVault, crapware shipped on Dells) that does that. It repeatedly crashed my application in the field, which is why my program unpatches any attempt to intercept data on the fly. So not even such hacks are robust against programs that dislike the interference.
I have an application that processes files in a directory and moves them to another directory along with the processed output. Nothing special about that. An interesting requirement was introduced:
Implement fault tolerance and processing throughput by allowing multiple remote instances to work on the same file store.
An additional consideration is that we cannot assume a particular file system, as we support both Windows and NFS.
Of course the problem is: how do I make sure that the different instances do not try to process the same work, potentially corrupting it or reducing throughput? File locking can be problematic, especially across network shares. We could use a more sophisticated method, such as a simple database or a messaging framework (a la JMS or similar), but the entire cluster needs to be fault tolerant. We can't have a single database or messaging provider because of the single point of failure that it introduces.
We've implemented a solution that uses multicast messages to self-discover processing instances and elect a supervisor who assigns work. There's a timeout in case the supervisor goes down, and another election takes place. Our networking library, however, isn't very mature, and our implementation of the messaging is clunky.
My instincts, however, tell me that there is a simpler way.
Thoughts?
I think you can safely assume that rename operations are atomic on all network file systems that you care about. So if you arrange for a piece of work to be a single file (or keyed to a single file), then have each server first list the directory containing new work, pick a piece of work, and then rename the file to its own server name (say, machine name or IP address). For exactly one of the instances that concurrently attempt the same operation, the rename will succeed, and that instance should then process the work. For the others, it will fail, so they should pick a different file from the listing they got.
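A sketch of that claim-by-rename step, with placeholder paths and naming convention:

    #include <stdio.h>

    /* Sketch: each server tries to rename a work file to a name carrying
     * its own identity. Exactly one rename of a given source succeeds;
     * the losers see an error and move on to another file. */
    int try_claim(const char *workdir, const char *job, const char *server_id,
                  char *claimed, size_t len)
    {
        char src[512], dst[512];
        snprintf(src, sizeof(src), "%s/%s", workdir, job);
        snprintf(dst, sizeof(dst), "%s/%s.%s", workdir, job, server_id);

        if (rename(src, dst) == 0) {          /* we own this piece of work */
            snprintf(claimed, len, "%s", dst);
            return 1;
        }
        return 0;                             /* someone else got it first */
    }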
For the creation of new work, assume that directory creation (mkdir) is atomic, but that file creation is not (with file creation, a second writer might overwrite the existing file). So if there are multiple producers of work as well, create a new directory for each piece of work.
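And a matching sketch for creating new work via mkdir, which fails with EEXIST instead of overwriting (again with placeholder paths, POSIX flavour):

    #include <stdio.h>
    #include <errno.h>
    #include <sys/stat.h>

    /* Sketch: because mkdir refuses to create an existing directory, each
     * new piece of work gets its own directory and no producer can
     * clobber another's. */
    int create_work_item(const char *workdir, const char *job_id)
    {
        char path[512];
        snprintf(path, sizeof(path), "%s/%s", workdir, job_id);

        if (mkdir(path, 0755) != 0)
            return errno == EEXIST ? 0 : -1;  /* already created by someone else */

        /* safe to write the work description inside the new directory now */
        return 1;
    }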