According to this documentation page: https://www.rookiehpc.com/mpi/docs/mpi_irsend.php#:~:text=Definition,to%20have%20been%20issued%20first.
"Indeed, MPI_Irsend requires the corresponding receive (MPI_Recv or MPI_Irecv) to have been issued first."
Does this mean that MPI_Irsend works pretty much like a blocking routine? Thank you.
The I in the routine name indicates that it is non-blocking. You get a request object, and from the request object you can query if the transfer has been completed.
The "ready" part means that when you call this, you guarantee that the matching receive has already been posted, so MPI can skip some of the handshake overhead.
I understand your confusion. Normally you would use a non-blocking receive so that you don't have idle time if the sending process is lagging. Why then a non-blocking ready send, where you know the receiving side is ready? Well, maybe you want to post more than one send, and you know that all the matching receives have been posted, but you don't want to depend on the order in which the transfers complete. Or maybe transferring the data is slow and you want to overlap the communication with computation.
That said, I always figured that this routine exists more because the orthogonality of the design suggests it than because there are compelling use cases for it.
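For illustration, here is a minimal sketch of how a ready send is typically used; the tag, message size, and the barrier used to guarantee that the receive is posted first are all just assumptions of this example:

#include <mpi.h>
#include <cstdio>

// Run with at least two ranks. Rank 1 posts the receive before rank 0 issues
// the ready send; the barrier is one simple way to guarantee that ordering.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[4] = {0, 0, 0, 0};
    MPI_Request req;

    if (rank == 1) {
        // The matching receive must exist before the ready send is started.
        MPI_Irecv(buf, 4, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &req);
    }
    MPI_Barrier(MPI_COMM_WORLD);   // after this, rank 0 knows the receive is posted

    if (rank == 0) {
        double data[4] = {1, 2, 3, 4};
        // Non-blocking ready send: returns immediately; the transfer can skip
        // the usual handshake because the receive is already posted.
        MPI_Irsend(data, 4, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD, &req);
        // ... overlap other work here ...
        MPI_Wait(&req, MPI_STATUS_IGNORE);   // data buffer is reusable after this
    } else if (rank == 1) {
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        std::printf("rank 1 received %g %g %g %g\n", buf[0], buf[1], buf[2], buf[3]);
    }

    MPI_Finalize();
    return 0;
}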
Suppose my MPI process is waiting for a very big message, and I am waiting for it with MPI_Probe. Is it correct to suppose the MPI_Probe call will return as soon as the process receives the first notice of the message from the network (like a header with the size or something similar)?
I.e., will it return much faster than if I was waiting for the message with MPI_Recv, because it wouldn't need to receive the full message?
The standard is fairly silent on this matter (MPI-3.0, section 3.8.1), but does offer this:
"The MPI implementation of MPI_PROBE and MPI_IPROBE needs to guarantee progress: if a call to MPI_PROBE has been issued by a process, and a send that matches the probe has been initiated by some process, then the call to MPI_PROBE will return, unless the message is received by another concurrent receive operation (that is executed by another thread at the probing process)."
Since both MPI_PROBE and MPI_RECV will engage the progress engine, I would doubt there is much difference between the two functions, aside from a memory copy. By engaging the progress engine, it's likely the message will be received (internally) by the MPI implementation. The last step of copying it into the user's buffer can be avoided in MPI_PROBE.
If you are worried about performance, then avoiding MPI_ANY_SOURCE and MPI_ANY_TAG if possible will help most implementations (certainly MPICH) take a faster path.
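As a rough sketch of the usual probe-then-allocate-then-receive pattern, with a specific source and tag as suggested above (the datatype and function name here are made up for illustration):

#include <mpi.h>
#include <vector>

// Probe first to learn the message size, then allocate and receive.
// Using a specific source and tag (rather than MPI_ANY_*) lets many
// implementations take the faster matching path mentioned above.
std::vector<double> receive_big_message(int source, int tag) {
    MPI_Status status;
    MPI_Probe(source, tag, MPI_COMM_WORLD, &status);   // blocks until a match is available

    int count = 0;
    MPI_Get_count(&status, MPI_DOUBLE, &count);        // size of the pending message

    std::vector<double> buf(count);                    // allocate exactly enough space
    MPI_Recv(buf.data(), count, MPI_DOUBLE, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    return buf;
}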
I am using a PersistentConnection for publishing large amounts of data (many small messages) to the connected clients.
The data flow is basically one-way (each client calls endpoints on other servers to set up its various subscriptions, so clients will not push any data back to the server via the SignalR connection).
Is there any way to detect that the client cannot keep up with the messages sent to it?
An example would be a mobile client on a poor connection (e.g. while roaming, the speed may vary a lot). If we are sending 100 messages per second but the client can only handle 10, we will eventually lose messages (because the server-side message buffer overflows).
I was looking for a server-side event, similar to what has been done on the (SignalR) client, e.g.
protected override Task OnConnectionSlow(IRequest request, string connectionId) {}
but that is not part of the framework (for good reasons, I assume).
I have considered the approach (suggested elsewhere on Stack Overflow) of letting the client tell the server (e.g. every 10-30 seconds) how many messages it has received; if that number differs a lot from the number of messages sent to the client, it is likely that the client cannot keep up.
The event would be used to tell the distributed backend that the client cannot keep up, so that it can turn down the data generation rate.
There's no way to do this right now other than coding something custom. We have discussed this in the past as a potential feature, but it isn't anywhere on the roadmap right now. It's also not clear what "slow" means, as that is up to the application to decide. There'd probably be some kind of bandwidth-, time-, or message-based setting that would trigger this hypothetical event.
If you want to hook in at a really low level, you could use OWIN middleware to replace the client's underlying stream with one that you own, so that you'd see all of the data going over the wire (you'd have to do the same for WebSockets though, and that might be non-trivial).
Once you have that, you could write some time-based logic that determines whether the flush is taking too long, and kill the client that way.
That's very fuzzy but it's basically a brain dump of how a feature like this could work.
I am writing a Client/Server application in C++ with the help of Boost Asio. I have a working server, and the server workflow is something I understand well.
My client application handles the connection gracefully, as shown in the Asio examples, after which it exchanges a handshake with the server. After that, however, users should be able to send requests to the server whenever and however they want, which is where I have a problem understanding the paradigm.
The initial workflow goes a little like this:
OnConnected() { SendHandshake() }
SendHandshake() { async_write_some(handshake...), async_read_some(&OnRead) }
OnRead() { ReadServerHandshake() *** }
And users would send messages by using Write(msg):
Write(msg) { async_write_some(msg, &OnWrite), async_read_some(&OnRead) }
OnWrite() {}
EDIT: Rephrasing the question to be clearer, here is the scenario:
After the initial handshaking is complete, the client is only used to send requests to the server, to which it will get a reply. So, for instance, a user sends a write. The client waits for the read operation to complete, reads the reply and does something with it. The next user write might only come after, say, 5 minutes. Will the io_service stop working in the meantime because there are no outstanding asynchronous operations between the last reply read and the next write?
On an informative note: you can give the io_service a work object (io_service::work) to stop it from running out of work. This will ensure that io_service::run never returns until the work object is destroyed.
To control the lifetime of the work object, you can use a shared_ptr and reset it once the work is done, or you can use boost::optional as outlined here.
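A minimal sketch of that idea, assuming run() executes on a dedicated thread and a shared_ptr controls the work object's lifetime:

#include <boost/asio.hpp>
#include <memory>
#include <thread>

int main() {
    boost::asio::io_service io_service;

    // As long as this object is alive, io_service::run() will not run out of work.
    auto work = std::make_shared<boost::asio::io_service::work>(io_service);

    std::thread io_thread([&io_service] { io_service.run(); });

    // ... connect, handshake, and post writes/reads whenever the user asks;
    //     run() keeps servicing them even if minutes pass with nothing pending ...

    work.reset();      // allow run() to return once remaining handlers complete
    io_thread.join();
    return 0;
}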
Of course, you still need to handle the case where either the server closes the TCP connection or the connection dies for whatever reason. One solution would be to keep an outstanding async_read on the socket to the server. The read handler will be called with an error_code when/if something goes wrong with the connection. If you have the outstanding read on the connection, you do not need the work object.
If you want the IO service to complete a read, you must start a read. If you want to read data any time the client sends it, you must have an asynchronous read operation pending at all times. Otherwise, how would the library know what to do with the data?
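Here is a rough sketch of what keeping a read pending at all times can look like; the class, handler names, and buffer size are illustrative, not part of your code:

#include <boost/asio.hpp>
#include <array>
#include <cstddef>
#include <iostream>

// Keep one async_read_some outstanding at all times and re-arm it from its own
// handler. The pending read keeps io_service::run() busy between user requests
// and reports a dropped connection via the error_code.
class Client {
public:
    explicit Client(boost::asio::io_service& io) : socket_(io) {}

    boost::asio::ip::tcp::socket& socket() { return socket_; }

    void StartRead() {
        socket_.async_read_some(
            boost::asio::buffer(read_buffer_),
            [this](const boost::system::error_code& ec, std::size_t n) {
                if (ec) {                       // server closed or connection died
                    std::cerr << "connection lost: " << ec.message() << "\n";
                    return;
                }
                // ... hand read_buffer_[0..n) to whatever parses server replies ...
                StartRead();                    // immediately post the next read
            });
    }

private:
    boost::asio::ip::tcp::socket socket_;
    std::array<char, 4096> read_buffer_;
};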
I plan to use MPI to build a solver that supports asynchronous communication. The basic idea is as follows.
Assume there are two parallel processes. Process 1 wants to periodically send good solutions it finds to process 2, and to ask process 2 for good solutions when it needs diversification.
At some point, process 1 uses MPI_Send to send a solution to process 2. How can I guarantee there is an MPI_Recv matching this MPI_Send, since the send is triggered dynamically?
When process 1 needs a solution, how can it send a request to process 2 so that process 2 notices the request in time?
There are three ways to achieve what you want, although it is not truly asynchronous communication.
1) Use non-blocking sends/receives. Replace your send/recv calls with MPI_Isend/MPI_Irecv and MPI_Wait. The sender can issue an MPI_Isend and continue working on the next problem; at some point, it will have to issue an MPI_Wait to make sure the previous send has completed. Process 2 can issue its receive ahead of time with MPI_Irecv and continue doing its work; again, at some point it will call MPI_Wait to make sure the receive has completed (see the sketch after this list). This may be a bit cumbersome, if I understand your requirement correctly.
2) A more elegant way would be to use one-sided communication: MPI_Put and MPI_Get.
3) Restructure your algorithm so that processes 1 and 2 exchange information and state at certain intervals.
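A minimal sketch of option 1; the tag, buffer size, and rank argument are made up for illustration:

#include <mpi.h>

const int SOLUTION_TAG = 42;
const int SOLUTION_LEN = 128;

// Option 1: post the receive ahead of time, send without blocking, keep working,
// and only wait on the requests when the buffers actually have to be touched.
void exchange_solutions(int other_rank) {
    double my_solution[SOLUTION_LEN] = {};     // current best solution of this process
    double their_solution[SOLUTION_LEN];

    MPI_Request send_req, recv_req;

    MPI_Irecv(their_solution, SOLUTION_LEN, MPI_DOUBLE, other_rank,
              SOLUTION_TAG, MPI_COMM_WORLD, &recv_req);
    MPI_Isend(my_solution, SOLUTION_LEN, MPI_DOUBLE, other_rank,
              SOLUTION_TAG, MPI_COMM_WORLD, &send_req);

    // ... continue working on the next subproblem here ...

    MPI_Wait(&send_req, MPI_STATUS_IGNORE);    // safe to reuse my_solution now
    MPI_Wait(&recv_req, MPI_STATUS_IGNORE);    // their_solution is now valid
}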
Depending on the nature of the MPI_* function you call, the send may block until a matching receive has been posted by the other process, so you need to make sure that happens in your code.
There are also non-blocking calls, e.g. MPI_Isend, which give you a request handle that you can check later to see whether the send has completed.
Regarding your issue, you could issue a non-blocking receive (MPI_Irecv being the most basic) and check its status every n seconds, depending on your application. The status will be set to complete when a message has been received and is ready to be read.
If it's time sensitive, use a blocking call while waiting for a message. Be aware that the blocking mechanism (in Open MPI at least) uses a spinning poll, so the waiting process will be eating 100% CPU.
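A sketch of that polling idea: post a non-blocking receive for incoming requests and test it periodically while the solver keeps working. The tag, message layout, and the send_current_best helper are hypothetical:

#include <mpi.h>
#include <unistd.h>   // sleep(), for the "every n seconds" style of polling

const int REQUEST_TAG = 7;

// Post an MPI_Irecv for request messages and poll it with MPI_Test between
// chunks of the solver's own work.
void serve_requests(int partner_rank) {
    int request_msg = 0;
    MPI_Request req;
    MPI_Irecv(&request_msg, 1, MPI_INT, partner_rank, REQUEST_TAG,
              MPI_COMM_WORLD, &req);

    for (;;) {
        // ... do a chunk of the solver's own work here ...

        int done = 0;
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);   // non-blocking completion check
        if (done) {
            // A request arrived: reply with the current best solution, then
            // re-post the receive so the next request is noticed as well.
            // send_current_best(partner_rank);      // hypothetical helper
            MPI_Irecv(&request_msg, 1, MPI_INT, partner_rank, REQUEST_TAG,
                      MPI_COMM_WORLD, &req);
        }
        sleep(1);   // or simply poll once per iteration of the solver loop
    }
}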
What's a good way to connect the synchronous http request/response model with an asynchronous queue based model?
When a user's HTTP request comes in, it generates a work request that goes onto a queue (beanstalkd in this case). One of the workers picks up the request, does the work, and prepares a response.
The queue model is not request/response - there are only requests, not responses. So the question is, how best do we get the response back into the world of HTTP and back to the user?
Ideas:
1) Beanstalkd supports lightweight topics or queues (they call them tubes). We could create a tube for each request, have the worker create a message on that tube, and have the HTTP process sit and wait on the tube for the response. I don't particularly like this one, since it leaves Apache processes sitting around taking memory.
2) Have the HTTP client poll for the response. The user's initial HTTP request kicks off the job on the queue and returns immediately. The client (the user's browser) polls periodically for a response. On the backend, the worker puts its response into memcached, and we connect nginx to memcached so the polling is lightweight.
3) Use Comet. Similar to the second option, but with fancier HTTP communication to avoid polling.
I'm leaning towards 2) since it's easy and well known (I haven't used Comet yet). I'm guessing there's probably also a much better, obvious model I haven't thought of. What do you think?
Here's how to implement request-response efficiently on JMS, which might be helpful (though it is Java/JMS centric). The general idea is to create a temporary queue per client/thread, then use correlation IDs to correlate requests to replies, etc.
Polling is the simple solution; comet is the more efficient solution. You've got it nailed :)
I personally love Comet (although I'm biased, since I helped write WebSync); it nicely lets your clients subscribe to a channel and get the message when your server process is ready. Works like a champ.
I'm looking to implement a beanstalkd and memcached system to run a number of processes following a request - in this case, looking up information when a user logs in (the number of messages the user has waiting, for example). The info is stored in memcached and then read back on the next page load.
Without knowing a little more about what tasks you are doing though, it's not so easy to say what needs to be done, or how. Option #2 is however the simplest, and that may be all you need - depending on what you are pushing back into the workers.