May MPI_SEND use my input data array as a buffer?

As we know, there is a thing called an MPI send buffer used during a send action.
And for the following code:
MPI_Isend(data, ..., req);
...
MPI_Wait(req, &status)
Is it safe to use data between MPI_Isend and MPI_Wait?
That means, will MPI_Isend use data as the internal send buffer?
Furthermore, if I don't use data anymore afterwards, can I tell MPI to use data as the send buffer rather than waste time copying the data?
BTW, I've heard of MPI_Bsend, but I don't think it could save memory and time in this case.

MPI provides two kinds of operations: blocking and non-blocking. The difference between the two is when it is safe to reuse the data buffer passed to the MPI function.
When a blocking call like MPI_Send returns, the buffer is no longer needed by the MPI library and can be safely reused. On the other hand, non-blocking calls only initiate the corresponding operation and let it continue asynchronously. Only after a successful call to a routine like MPI_Wait, or after a positive test result from MPI_Test, can one safely reuse the buffer.
As for how the library utilises the user buffer, that is very implementation-specific. Shorter messages are usually copied to internal (for the MPI library) buffers for performance reasons. Longer messages are usually directly read from the user buffer and sent to the network, therefore the buffer will be in use by MPI until the whole message has been sent.
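To make the rule concrete, here is a minimal sketch (MPI C API called from C++; the ranks, tag and message size are illustrative, not from the question) in which data is left untouched between MPI_Isend and MPI_Wait and only reused afterwards:
#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<double> data(1000, 1.0);  // made-up payload
    if (rank == 0) {
        MPI_Request req;
        MPI_Isend(data.data(), (int)data.size(), MPI_DOUBLE, 1, 0,
                  MPI_COMM_WORLD, &req);
        // ... do other work here, but do NOT modify 'data' ...
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        // Only now is it safe to modify or reuse 'data'.
        data.assign(data.size(), 2.0);
    } else if (rank == 1) {
        MPI_Recv(data.data(), (int)data.size(), MPI_DOUBLE, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}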

It is absolutely not safe to use data between MPI_Isend and MPI_Wait.
Between MPI_Isend and MPI_Wait you simply don't know when data can be reused. Only after MPI_Wait returns can you be sure that data has been sent and can be reused.
If you don't use data anymore, you should still call MPI_Wait at the end of your program so that the request completes.

Related

In Trio, how do you write data to a socket without waiting?

In Trio, if you want to write some data to a TCP socket then the obvious choice is send_all:
my_stream = await trio.open_tcp_stream("localhost", 1234)
await my_stream.send_all(b"some data")
Note that this both sends that data over the socket and waits for it to be written. But what if you just want to queue up the data to be sent, but not wait for it to be written (at least, you don't want to wait in the same coroutine)?
In asyncio this is straightforward because the two parts are separate functions: write() and drain(). For example:
writer.write(data)
await writer.drain()
So of course if you just want to write the data and not wait for it you can just call write() without awaiting drain(). Is there equivalent functionality in Trio? I know this two-function design is controversial because it makes it hard to properly apply backpressure, but in my application I need them separated.
For now I've worked around it by creating a dedicated writer coroutine for each connection and having a memory channel to send data to that coroutine, but it's quite a lot of faff compared to choosing between calling one function or two, and it seems a bit wasteful (presumably there's still a send buffer under the hood, and my memory channel is like a buffer on top of that buffer).
I posted this on the Trio chat and Nathaniel J. Smith, the creator of Trio, replied with this:
Trio doesn't maintain a buffer "under the hood", no. There's just the kernel's send buffer, but the kernel will apply backpressure whether you want it to or not, so that doesn't help you.
Using a background writer task + an unbounded memory channel is basically what asyncio does for you implicitly.
The other option, if you're putting together a message in multiple pieces and then want to send it when you're done would be to append them into a bytearray and then call send_all once at the end, at the same place where you'd call drain in asyncio
(but obviously that only works if you're calling drain after every logical message; if you're just calling write and letting asyncio drain it in the background then that doesn't help)
So the question was based on a misconception: I wanted to write into Trio's hidden send buffer, but no such thing exists! Using a separate coroutine that waits on a stream and calls send_all() makes more sense than I had thought.
I ended up using a hybrid of the two ideas (using separate coroutine with a memory channel vs using bytearray): save the data to a bytearray, then use a condition variable ParkingLot to signal to the other coroutine that it's ready to be written. That lets me coalesce writes, and also manually check if the buffer's getting too large.

I don't understand what exactly the function bytesToWrite() does (Qt)

I searched for bytesToWrite in the docs and this is what I found: "For buffered devices, this function returns the number of bytes waiting to be written. For devices with no buffer, this function returns 0."
First, what does "buffered devices" mean? And can anyone please explain to me what exactly this function does and where or how I can use it?
Many IO devices are buffered, which means that data isn't sent straight away, but it is accumulated to be sent in bulk when there is a sufficient amount.
This is done essentially to get better performance, as sending data normally has some fixed overhead (at the very least the syscall overhead), which is well amortized when sending data in bulk, but would have to be paid for each write if no buffering were used.
(notice that here we are only talking about QIODevice buffers, normally there are also all kinds of kernel-mode buffers and buffers internal to hardware devices themselves)
bytesToWrite tells you how much stuff is in the QIODevice write buffer, i.e. how many bytes you wrote that are waiting to be actually written (as in, given to the OS to write).
I never actually had to use that member, but I suppose it could be useful e.g. in a producer-consumer scenario (if the write buffer drops below some threshold, you compute the next chunk of data to send), to manually handle buffering in some places, or even just for debugging/logging purposes.
It's actually very useful when you're using an asynchronous API.
You can, for example, use it inside a bytesWritten() slot to tell whether the buffer is empty and the data has been fully written or not.
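For example, here is a hedged sketch (the helper function, socket and payload are hypothetical, not from the question) of checking bytesToWrite() from a handler connected to bytesWritten():
#include <QTcpSocket>
#include <QDebug>

// Hypothetical helper: queue 'payload' on 'socket' and log once the
// QIODevice write buffer has fully drained (handed off to the OS).
void sendAndWatch(QTcpSocket *socket, const QByteArray &payload)
{
    QObject::connect(socket, &QIODevice::bytesWritten, socket,
                     [socket](qint64 /*bytes*/) {
        if (socket->bytesToWrite() == 0)
            qDebug() << "write buffer drained";
    });
    socket->write(payload);   // queues the data; returns immediately
}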

MPI - Equivalent of MPI_SENDRCV with asynchronous functions

I know that MPI_SENDRECV allows one to overcome the problem of deadlocks (when we use the classic MPI_SEND and MPI_RECV functions).
I would like to know if MPI_SENDRECV(sent_to_process_1, receive_from_process_0) is equivalent to:
MPI_ISEND(sent_to_process_1, request1)
MPI_IRECV(receive_from_process_0, request2)
MPI_WAIT(request1)
MPI_WAIT(request2)
with the asynchronous MPI_ISEND and MPI_IRECV functions?
From what I have seen, MPI_ISEND and MPI_IRECV create a fork (i.e. 2 processes). So if I follow this logic, the first call of MPI_ISEND generates 2 processes. One does the communication and the other calls MPI_IRECV, which itself forks into 2 processes.
But once the communication of the first MPI_ISEND is finished, does the second process call MPI_IRECV again? With this logic, the above equivalence doesn't seem to be valid...
Maybe I should change to this:
MPI_ISEND(sent_to_process_1, request1)
MPI_WAIT(request1)
MPI_IRECV(receive_from_process_0, request2)
MPI_WAIT(request2)
But I think that it could also create deadlocks.
Could anyone give me another solution using MPI_ISEND, MPI_IRECV and MPI_WAIT that obtains the same behaviour as MPI_SENDRECV?
There are some dangerous lines of thought in the question and other answers. When you start a non-blocking MPI operation, the MPI library doesn't create a new process/thread/etc. You're thinking of something more like a parallel region of OpenMP, I believe, where new threads/tasks are created to do some work.
In MPI, starting a non-blocking operation is like telling the MPI library that you have some things that you'd like to get done whenever MPI gets a chance to do them. There are lots of equally valid options for when they are actually completed:
It could be that they all get done later when you call a blocking completion function (like MPI_WAIT or MPI_WAITALL). These functions guarantee that when the blocking completion call is done, all of the requests that you passed in as arguments are finished (in your case, the MPI_ISEND and the MPI_IRECV). Regardless of when the operations actually take place (see next few bullets), you as an application can't consider them done until they are actually marked as completed by a function like MPI_WAIT or MPI_TEST.
The operations could get done "in the background" during another MPI operation. For instance, if you do something like the code below:
MPI_Isend(..., MPI_COMM_WORLD, &req[0]);
MPI_Irecv(..., MPI_COMM_WORLD, &req[1]);
MPI_Barrier(MPI_COMM_WORLD);
MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
The MPI_ISEND and the MPI_IRECV would probably actually do the data transfers in the background during the MPI_BARRIER. This is because as an application, you are transferring "control" of your application to the MPI library during the MPI_BARRIER call. This lets the library make progress on any ongoing MPI operation that it wants. Most likely, when the MPI_BARRIER is complete, so are most other things that finished first.
Some MPI libraries allow you to specify that you want a "progress thread". This tells the MPI library to start up another thread (note that thread != process) in the background that will actually do the MPI operations for you while your application continues in the main thread.
Remember that all of these in the end require that you actually call MPI_WAIT or MPI_TEST or some other function like it to ensure that your operation is actually complete, but none of these spawn new threads or processes to do the work for you when you call your nonblocking functions. Those really just act like you stick them on a list of things to do (which in reality, is how most MPI libraries implement them).
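As an illustration of that last point, a small hedged sketch (the buffer, destination and work loop are placeholders, not part of the original answer) that polls MPI_Test so independent computation can proceed while the request is pending:
#include <mpi.h>

/* Sketch: poll MPI_Test so computation that does not touch the send
   buffer can proceed while the message is in flight. */
void send_while_working(double *buf, int count, int dest)
{
    MPI_Request req;
    MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);

    int done = 0;
    while (!done) {
        /* ... computation that does not touch buf ... */
        MPI_Test(&req, &done, MPI_STATUS_IGNORE); /* also lets MPI make progress */
    }
    /* buf may be modified again from this point on */
}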
The best way to think of how MPI_SENDRECV is implemented is to do two non-blocking calls with one completion function:
MPI_Isend(..., MPI_COMM_WORLD, &req[0]);
MPI_Irecv(..., MPI_COMM_WORLD, &req[1]);
MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
How I usually do this on node i communicating with node i+1:
mpi_isend(send_to_process_iPlus1, requests(1))
mpi_irecv(recv_from_process_iPlus1, requests(2))
...
mpi_waitall(2, requests)
You can see how ordering your commands this way with non-blocking communication allows you (during the ... above) to perform any computation that does not rely on the send/recv buffers while the communication is in progress. Overlapping computation with communication is often crucial for maximizing performance.
mpi_send_recv on the other hand (while avoiding any deadlock issues) is still a blocking operation. Thus, your program must remain in that routine during the entire send/recv process.
Final points: you can initialize more than 2 requests and wait on all of them the same way using the above structure as dealing with 2 requests. For instance, it's quite easy to start communication with node i-1 as well and wait on all 4 of the requests. Using mpi_send_recv you must always have a paired send and receive; what if you only want to send?
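For instance, here is a hedged sketch (periodic neighbours, tags and buffer sizes are made up for illustration) of the four-request version, with independent computation overlapped before the single MPI_Waitall:
#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Periodic neighbours i-1 and i+1 (illustrative only).
    int next = (rank + 1) % size;
    int prev = (rank - 1 + size) % size;

    const int N = 100;  // made-up message size
    std::vector<double> send_next(N, rank), send_prev(N, rank);
    std::vector<double> recv_next(N), recv_prev(N);

    MPI_Request req[4];
    MPI_Isend(send_next.data(), N, MPI_DOUBLE, next, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(send_prev.data(), N, MPI_DOUBLE, prev, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Irecv(recv_prev.data(), N, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, &req[2]);
    MPI_Irecv(recv_next.data(), N, MPI_DOUBLE, next, 1, MPI_COMM_WORLD, &req[3]);

    // ... computation that does not touch the four buffers ...

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    // All four buffers are now safe to read and modify.

    MPI_Finalize();
    return 0;
}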

MPI non-blocking send and recv and mpi_iprobe() for unknown message size

In a spatially decomposed 2D domain, I need to send particles to the 8 neighbors. I know how many I'm sending but not how many I'll receive from these neighbors.
I had implemented a code with MPI_Send(), MPI_Probe() and MPI_Recv() but I realized that it caused deadlocks whenever the message was too big.
I decided to go for non-blocking communications, but then I can't figure out in what order MPI_Isend, MPI_Irecv and MPI_Iprobe should be called. I definitely need to know the size my receiving buffer should be allocated to before actually calling MPI_Irecv, so I'm tempted by the order MPI_Isend(), then MPI_Iprobe(), then MPI_Irecv(); but the problem is that MPI_Iprobe() always returns a flag equal to false and I get stuck in the while loop. As far as I understand, there is no obligation for MPI to actually complete the send before the call to MPI_Wait(), therefore I understand that MPI_Iprobe might never return true. But if so, how does one receive a message of unknown size with non-blocking MPI point-to-point communications?
You don't have to make all 3 operations non-blocking. You can use an MPI_ISEND with a regular MPI_PROBE and/or MPI_RECV. It sounds like that might be a better option for you.
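A possible shape for that suggestion, sketched with illustrative ranks, tags and a plain double payload standing in for the particle data: keep the send non-blocking, but use a blocking MPI_Probe plus MPI_Get_count to size the receive buffer before MPI_Recv, and complete your own send with MPI_Wait at the end.
#include <mpi.h>
#include <vector>

// Illustrative exchange with one neighbour; in the real code this would
// be repeated (or looped) for all 8 neighbours.
void exchange_particles(const std::vector<double> &outgoing, int neighbour)
{
    MPI_Request send_req;
    MPI_Isend(outgoing.data(), (int)outgoing.size(), MPI_DOUBLE,
              neighbour, 0, MPI_COMM_WORLD, &send_req);

    // Blocking probe: wait until the neighbour's message is available,
    // then ask how big it is before allocating the receive buffer.
    MPI_Status status;
    MPI_Probe(neighbour, 0, MPI_COMM_WORLD, &status);
    int incoming_count;
    MPI_Get_count(&status, MPI_DOUBLE, &incoming_count);

    std::vector<double> incoming(incoming_count);
    MPI_Recv(incoming.data(), incoming_count, MPI_DOUBLE,
             neighbour, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // Finally complete our own send.
    MPI_Wait(&send_req, MPI_STATUS_IGNORE);
}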

What is the difference between isend and issend?

Need clarification of my understanding of isend and issend as given in Send Types.
My understanding is that isend will return once the send buffer is free, i.e. when all the data has been released. Issend, on the other hand, returns only when it receives an ack from the receiver about getting/not getting the entire data. Is this all there is to it?
Both MPI_Isend() and MPI_Issend() return immediately, but in both cases you can't use the send buffer immediately.
Think of the difference that there is between MPI_Send() and MPI_Ssend():
MPI_Send() can be buffered, or it can be synchronous if the buffer is too large to be buffered locally, and in this case it waits to complete sending the data to the corresponding receive operation.
MPI_Ssend() is always synchronous: it always waits to complete sending the data to the corresponding receive operation.
The inner working of the corresponding "I"-operations is very similar, except for the fact that they both don't block (return immediately): the difference is only when the MPI library signals to the user program that you can use the send-buffer (that is: MPI_Wait() returns or MPI_Test() returns true - the so called send-complete operation of the non-blocking send):
With MPI_Isend() this can happen either when the data has been copied locally into a buffer owned by the MPI library (if below the "synchronous threshold"), or when the data has actually been moved to the sibling task: the send-complete operation can be local, in case the underlying send operation is buffered.
With MPI_Issend() MPI doesn't ever buffer data locally and the "buffer-free condition" is returned only after the data has been actually transferred (and probably ack'ed, at low level): the send-complete operation is non-local.
The MPI standard document is quite pedantic on these aspects. See section 3.7 Nonblocking Communication.
Correct. Obviously both of those will only be true when the request that you get back from the call to MPI_ISEND or MPI_ISSEND is completed via a MPI_WAIT* or MPI_TEST* function.
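A small two-rank sketch (the payload, tag and printf messages are illustrative) of the difference in completion semantics: with MPI_Issend, the sender's MPI_Wait cannot return before the receiver has matched the message, whereas an MPI_Isend of such a tiny message would typically complete locally.
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int payload = 42;  // made-up message
    if (rank == 0) {
        MPI_Request req;
        MPI_Issend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        // Non-local completion: MPI_Wait cannot return before rank 1
        // has started receiving the message.
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        std::printf("rank 0: the matching receive has been posted\n");
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("rank 1: received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}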
