Sync/async interoperable channels

When you wish to send a sequence of things across threads (in a thread-blocking way), you have e.g. crossbeam_channel.
When you wish to send a sequence of things across futures (in a non-thread-blocking, future-blocking way), you have e.g. tokio::sync::mpsc.
What would be something that enables me to send from a blocking thread and receive from an asynchronous context? (Btw, I could imagine needing the opposite at some point.)
I need the channel to be bounded, thread-blocking when sending and future-blocking when receiving.
I am looking for something reasonably performant, like an equivalent of what crossbeam_channel does, but waking up the future instead of the thread, with the ability to buffer some messages to avoid blocking as much as possible. The answer given here for the multiple-messages scenario looks a bit like a patch-up in that regard.

The channels provided by Tokio have gained the feature to do this since this question was asked. You can simply call the blocking_send and blocking_recv methods on the channel when you are in non-async code:
let (tx, mut rx) = tokio::sync::mpsc::channel(10);

std::thread::spawn(move || {
    // send a value, blocking synchronously:
    // this allows async channels to be used in non-async contexts
    tx.blocking_send("testing").unwrap();
});

// receive a value, blocking asynchronously
assert_eq!(rx.recv().await.unwrap(), "testing");
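
The reverse direction works the same way, via blocking_recv. Here is a minimal sketch, assuming a Tokio runtime is already running (so that tokio::spawn is available); note that blocking_recv must not be called from within the runtime's own threads:

let (tx, mut rx) = tokio::sync::mpsc::channel(10);

tokio::spawn(async move {
    // send a value, blocking asynchronously
    tx.send("testing").await.unwrap();
});

std::thread::spawn(move || {
    // receive a value, blocking synchronously
    assert_eq!(rx.blocking_recv().unwrap(), "testing");
})
.join()
.unwrap();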

Futures can also be run synchronously, in a blocking fashion. You can use futures::executor::block_on to do this, which allows sending in a non-async context:
let (tx, mut rx) = tokio::sync::mpsc::channel(10);

// send a value, blocking synchronously:
// this allows async channels to be used in non-async contexts
futures::executor::block_on(tx.send("testing")).unwrap();

// receive a value, blocking asynchronously
assert_eq!(rx.recv().await.unwrap(), "testing");
With this snippet, running the future to send a value will block the thread until the future completes, similar to how the standard library channels work. This can also be used on the receiving side if desired.
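
For example, on the receiving side (a sketch reusing the channel from above):

// receive a value, blocking synchronously via a lightweight executor
assert_eq!(futures::executor::block_on(rx.recv()).unwrap(), "testing");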

crossfire seems to be a crate meant for exactly this.
At the moment it's pretty recent and not widely used, though.

Related

Efficiently connecting an asynchronous IMFSourceReader to a synchronous IMFTransform

Given an asynchronous IMFSourceReader connected to a synchronous-only IMFTransform.
In the IMFSourceReaderCallback::OnReadSample() callback, is it a good idea not to call IMFTransform::ProcessInput directly, but instead to push the produced sample onto another queue for another thread to call the transform's ProcessInput on?
Or would I just be replicating identical work that source readers typically do internally? Put another way: does work within OnReadSample run the risk of blocking further decoding work within the source reader that could otherwise have happened more asynchronously?
So I am suggesting something like:
WorkQueue transformInputs;
...

// Called back asynchronously by the source reader
HRESULT OnReadSampleCallback(... IMFSample* sample)
{
    // Push the sample and return immediately
    Push(transformInputs, sample);
    return S_OK;
}

// Different worker thread, woken when transformInputs has queued samples
void OnTransformInputWork()
{
    // Transform object is not async capable
    transform->ProcessInput(0, Pop(transformInputs), 0);
    ...
}
This is touched on, but not elaborated on, under 'Implementing the Callback Interface' here:
https://learn.microsoft.com/en-us/windows/win32/medfound/using-the-source-reader-in-asynchronous-mode
Or is it completely dependent on whatever the source reader sets up internally and not easily determined?
It is not a good idea to perform a long blocking operation in IMFSourceReaderCallback::OnReadSample. Nothing will be fatal or serious, but this is not the intended usage.
Taking into consideration your previous question about audio format conversion, though, audio sample data conversion is fast enough to happen in such a callback.
Also, while it is not clearly documented (it depends on the actual implementation), ProcessInput is often nearly instant and only references the input data; it is ProcessOutput that would be computationally expensive in this case. If you don't do ProcessOutput right there in the same callback, you might run into a situation where the MFT is no longer accepting input, and so you'd have to implement a queue anyway.
With all this in mind, you would either do the processing right in the callback, neglecting the performance impact on the assumption that your processing is not too heavy, or otherwise go ahead and build the queue.
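
To illustrate the last point, here is a minimal sketch of feeding a synchronous MFT and draining it in the same place. It assumes an MFT that allocates its own output samples (MFT_OUTPUT_STREAM_PROVIDES_SAMPLES); otherwise you must supply the output sample yourself, and the function name is illustrative:

HRESULT ProcessThroughTransform(IMFTransform* transform, IMFSample* input)
{
    HRESULT hr = transform->ProcessInput(0, input, 0);
    if (FAILED(hr)) return hr;

    // drain all the output the MFT can produce for this input
    for (;;)
    {
        MFT_OUTPUT_DATA_BUFFER output = {};
        DWORD status = 0;
        hr = transform->ProcessOutput(0, 1, &output, &status);
        if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT)
            return S_OK; // the MFT wants more input; we are done here
        if (FAILED(hr)) return hr;
        if (output.pEvents) output.pEvents->Release();
        if (output.pSample)
        {
            // hand the produced sample downstream here
            output.pSample->Release();
        }
    }
}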

In Trio, how do you write data to a socket without waiting?

In Trio, if you want to write some data to a TCP socket then the obvious choice is send_all:
my_stream = await trio.open_tcp_stream("localhost", 1234)
await my_stream.send_all(b"some data")
Note that this both sends that data over the socket and waits for it to be written. But what if you just want to queue up the data to be sent, but not wait for it to be written (at least, you don't want to wait in the same coroutine)?
In asyncio this is straightforward because the two parts are separate functions: write() and drain(). For example:
writer.write(data)
await writer.drain()
So of course if you just want to write the data and not wait for it you can just call write() without awaiting drain(). Is there equivalent functionality in Trio? I know this two-function design is controversial because it makes it hard to properly apply backpressure, but in my application I need them separated.
For now I've worked around it by creating a dedicated writer coroutine for each connection and having a memory channel to send data to that coroutine, but it's quite a lot of faff compared to choosing between calling one function or two, and it seems a bit wasteful (presumably there's still a send buffer under the hood, and my memory channel is like a buffer on top of that buffer).
I posted this on the Trio chat and Nathaniel J. Smith, the creator of Trio, replied with this:
Trio doesn't maintain a buffer "under the hood", no. There's just the kernel's send buffer, but the kernel will apply backpressure whether you want it to or not, so that doesn't help you.
Using a background writer task + an unbounded memory channel is basically what asyncio does for you implicitly.
The other option, if you're putting together a message in multiple pieces and then want to send it when you're done, would be to append them into a bytearray and then call send_all once at the end, at the same place where you'd call drain in asyncio.
(but obviously that only works if you're calling drain after every logical message; if you're just calling write and letting asyncio drain it in the background then that doesn't help)
So the question was based on a misconception: I wanted to write into Trio's hidden send buffer, but no such thing exists! Using a separate coroutine that waits on a stream and calls send_all() makes more sense than I had thought.
I ended up using a hybrid of the two ideas (a separate coroutine with a memory channel vs. a bytearray): save the data to a bytearray, then use a ParkingLot (Trio's low-level condition-variable-like primitive) to signal to the other coroutine that it's ready to be written. That lets me coalesce writes, and also manually check whether the buffer is getting too large.
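
For reference, here is a minimal sketch of that hybrid (the names are illustrative; shutdown and error handling are omitted):

import trio

class BufferedWriter:
    def __init__(self, stream):
        self.stream = stream
        self.buffer = bytearray()
        self.lot = trio.lowlevel.ParkingLot()

    def write(self, data):
        # synchronous: queue the bytes and wake the writer task
        self.buffer += data
        self.lot.unpark()

    async def writer_task(self):
        # run this in a nursery: nursery.start_soon(writer.writer_task)
        while True:
            if not self.buffer:
                await self.lot.park()  # sleep until write() queues data
            pending, self.buffer = self.buffer, bytearray()
            # coalesce everything queued so far into one send_all call
            await self.stream.send_all(pending)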

How to cancel a GRPC streaming call?

Usually, the client can cancel the gRPC call with:
(requestObserver as ClientCallStreamObserver<Request>)
.cancel("Cancelled", null)
However, it is shown in the Javadoc:
CancellableContext withCancellation = Context.current().withCancellation();
// do stuff
withCancellation.cancel(t);
Which one is the "correct" way of cancelling a client call, and letting the server know?
Edit:
To make matters more confusing, there's also ManagedChannel.shutdown*.
Both are acceptable and appropriate.
clientCallStreamObserver.cancel() is generally easier as it has less boilerplate, so it should generally be preferred. However, it is not thread-safe; it is like normal sending on the StreamObserver. It also requires direct awareness of the RPC; you can't have higher-level code orchestrate the cancellation, as it may not even be aware of the RPC.
Use Context cancellation for thread-safe and less RPC-aware cancellation. Context cancellation can be used in circumstances similar to thread interruption or Future cancellation. Note that CancellableContexts should be treated as a resource and must be cancelled eventually to avoid leaking memory. (context.cancel(null) can be used when the context reaches its "normal" end of life.)
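
For illustration, a minimal sketch of the Context approach (the stub and observer names are hypothetical):

Context.CancellableContext withCancellation = Context.current().withCancellation();
withCancellation.run(() -> {
    // the RPC must be started inside the cancellable context
    // for the cancellation to propagate to it
    stub.someStreamingCall(request, responseObserver);
});
// later, from any thread: pass a Throwable for abnormal cancellation,
// or null when the context simply reaches its normal end of life
withCancellation.cancel(null);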

Hopac -- can I give value to a channel only if someone is listening to it?

Hopac allows you to synchronously give a value to a channel, which blocks if no-one is listening, and to asynchronously send, which buffers values if they are not read.
I would like something in between: to give a value if there is a listener, or to proceed without blocking or buffering the value if there is none. Is there a way to do it?
You can implement polling give and take operations using the alternative mechanism of Hopac. Here is a possible signature for such polling operations:
module Ch =
  module Poll =
    val give: Ch<'x> -> 'x -> Job<bool>
    val take: Ch<'x> -> Job<option<'x>>
And here is an implementation of that signature:
module Ch =
  module Poll =
    let give xCh x =
      Alt.pick (xCh <-? x >>%? true <|> Alt.always false)
    let take xCh =
      Alt.pick (xCh |>>? Some <|> Alt.always None)
The way these work is that the left-hand side alternative is committed to if it is available; otherwise the right-hand side alternative is committed to, because it is always available.
Note that these only make sense if the other side of the communication is indeed waiting on the channel. If both ends try to poll, communication is unlikely to happen.
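
For illustration, a rough usage sketch of the polling operations above (this assumes the old-style Ch.create: unit -> Job<Ch<'x>>; error handling is omitted):

job {
  let! ch = Ch.create ()
  // true only if a taker was already waiting on the channel
  let! accepted = Ch.Poll.give ch 42
  // Some v only if a giver was already waiting; otherwise None
  let! taken = Ch.Poll.take ch
  return (accepted, taken)
}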
The Concurrent ML library that Hopac is based upon directly provides polling operations.
It would also be possible to implement polling operations as optimized primitives within Hopac.
Update: I've added polling, or non-blocking, give and take operations as optimized primitives on synchronous channels to Hopac. They are provided by the Ch.Try module.

A MailboxProcessor that operates with a LIFO logic

I am learning about F# agents (MailboxProcessor).
I am dealing with a rather unconventional problem.
I have one agent (dataSource) which is a source of streaming data. The data has to be processed by an array of agents (dataProcessor). We can consider dataProcessor as some sort of tracking device.
Data may flow in faster than the dataProcessor is able to process it.
It is OK to have some delay. However, I have to ensure that the agent stays on top of its work and does not get buried under obsolete observations.
I am exploring ways to deal with this problem.
The first idea is to implement a stack (LIFO) in dataSource. dataSource would send over the latest observation available when dataProcessor becomes available to receive and process the data. This solution may work, but it may get complicated, as dataProcessor may need to be blocked and re-activated and to communicate its status to dataSource, leading to a two-way communication problem. This problem may boil down to a blocking queue in the consumer-producer problem, but I am not sure.
The second idea is to have dataProcessor take care of message sorting. In this architecture, dataSource would simply post updates to dataProcessor's queue. dataProcessor would then use Scan to fetch the latest data available in its queue. This may be the way to go. However, I am not sure whether, in the current design of MailboxProcessor, it is possible to clear a queue of messages, deleting the older, obsolete ones. Furthermore, it is written here that:
Unfortunately, the TryScan function in the current version of F# is
broken in two ways. Firstly, the whole point is to specify a timeout
but the implementation does not actually honor it. Specifically,
irrelevant messages reset the timer. Secondly, as with the other Scan
function, the message queue is examined under a lock that prevents any
other threads from posting for the duration of the scan, which can be
an arbitrarily long time. Consequently, the TryScan function itself
tends to lock-up concurrent systems and can even introduce deadlocks
because the caller's code is evaluated inside the lock (e.g. posting
from the function argument to Scan or TryScan can deadlock the agent
when the code under the lock blocks waiting to acquire the lock it is
already under).
Having the latest observation bounced back may be a problem.
The author of this post, Jon Harrop, suggests that:
I managed to architect around it and the resulting architecture was actually better. In essence, I eagerly Receive all messages and filter using my own local queue.
This idea is surely worth exploring but, before starting to play around with code, I would welcome some inputs on how I could structure my solution.
Thank you.
Sounds like you might need a destructive-scan version of the mailbox processor. I implemented this with TPL Dataflow in a blog series that you might be interested in.
My blog is currently down for maintenance but I can point you to the posts in markdown format.
Part1
Part2
Part3
You can also check out the code on github
I also wrote about the issues with scan in my lurking horror post
Hope that helps...
tl;dr: I would try this: take the Mailbox implementation from FSharp.Actor or Zach Bray's blog post, replace the ConcurrentQueue with a ConcurrentStack (plus add some bounded-capacity logic), and use this changed agent as a dispatcher to pass messages from dataSource to an army of dataProcessors implemented as ordinary MBPs or Actors.
tl;dr2: If workers are a scarce and slow resource, and we need to process the message that is the latest at the moment a worker becomes ready, then it all boils down to an agent with a stack instead of a queue (with some bounded-capacity logic) plus a BlockingQueue of workers. The dispatcher dequeues a ready worker, pops a message from the stack, and sends the message to the worker. After the job is done, the worker enqueues itself back when it becomes ready (e.g. just before let! msg = inbox.Receive()). The dispatcher's consumer thread then blocks until any worker is ready, while the producer thread keeps the bounded stack updated. (A bounded stack could be done with an array + offset + size inside a lock; the one below is more complex than needed.)
Details
MailboxProcessor is designed to have only one consumer. This is even commented in the source code of MBP here (search for the word 'DRAGONS' :) ).
If you post your data to an MBP, then only one thread can take it from the internal queue or stack.
In your particular use case, I would use a ConcurrentStack directly, or better, wrapped in a BlockingCollection:
It will allow many concurrent consumers
It is very fast and thread safe
BlockingCollection has a BoundedCapacity property that allows you to limit the size of the collection. When the collection is full, Add blocks and TryAdd returns false. If A is the main stack and B is a standby, then TryAdd to A; on false, Add to B and swap the two with Interlocked.Exchange, then process the needed messages in A, clear it, and make it the new standby (or use three stacks if processing A could take longer than B takes to fill again). This way you do not block and do not lose any messages, but you can discard unneeded ones in a controlled way.
BlockingCollection has methods like AddToAny/TakeFromAny, which work on arrays of BlockingCollections. This could help, e.g.:
dataSource produces messages to a BlockingCollection with ConcurrentStack implementation (BCCS)
another thread consumes messages from the BCCS and sends them to an array of processing BCCSs. You said that there is a lot of data, so you may sacrifice one thread to do nothing but block and dispatch your messages indefinitely
each processing agent either has its own BCCS or is implemented as an Agent/Actor/MBP to which the dispatcher posts messages. In your case you need to send a message to only one processor agent, so you may store the processing agents in a circular buffer to always dispatch a message to the least recently used processor.
Something like this:

            (data stream produces 'T)
                       |
              [dispatcher's BCCS]
                       |
   (a dispatcher thread consumes 'T and pushes it to
    processors; manages capacity of BCCS and LRU queue)
          |                                |
[processor1's BCCS/Actor/MBP]  ...  [processorN's BCCS/Actor/MBP]
          |                                |
      (process)                        (process)
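
A minimal sketch of the bounded LIFO buffer piece of this (the element type and capacity are illustrative):

open System.Collections.Concurrent

// a bounded LIFO buffer: Take always returns the most recently added message
let lifo = new BlockingCollection<string>(ConcurrentStack<string>(), 100)

// producer (dataSource): TryAdd returns false immediately when the buffer is full
if not (lifo.TryAdd("latest observation")) then
    () // full: the newest data matters most, so handle/discard the overflow here

// consumer (dispatcher thread): blocks until an item is available
let latest = lifo.Take()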
Instead of a ConcurrentStack, you may want to read about the heap data structure. If you need your latest messages by some property of the messages, e.g. a timestamp, rather than by the order in which they arrive on the stack (e.g. if there could be delays in transit, so that arrival order <> creation order), you can get the latest message by using a heap.
If you still need Agent semantics/API, you could read several sources in addition to Dave's links, and somehow adapt the implementation to multiple concurrent consumers:
An interesting article by Zach Bray on an efficient Actors implementation. There you would need to replace the line execute true (under the comment // Might want to schedule this call on another thread.) with a line like async { execute true } |> Async.Start or similar, because otherwise the producing thread becomes the consuming thread, which is not good for a single fast producer. For a dispatcher like the one described above, however, this is exactly what is needed.
The FSharp.Actor (aka Fakka) development branch and the F# MBP source code (first link above) could be very useful for implementation details. The FSharp.Actor library has been frozen for several months, but there is some activity in the dev branch.
Also don't miss the discussion about Fakka in Google Groups in this context.
I have a somewhat similar use case, and for the last two days I have researched everything I could find on F# Agents/Actors. This answer is a kind of TODO list for myself to try these ideas, half of which were born while writing it.
The simplest solution is to greedily eat all messages in the inbox when one arrives and discard all but the most recent. Easily done using TryReceive:
let rec readLatestLoop oldMsg =
  async { let! newMsg = inbox.TryReceive 0
          match newMsg with
          | None -> return oldMsg
          | Some newMsg -> return! readLatestLoop newMsg }

let readLatest() =
  async { let! msg = inbox.Receive()
          return! readLatestLoop msg }
When faced with the same problem, I architected a more sophisticated and efficient solution I called cancellable streaming, described in an F# Journal article here. The idea is to start processing messages and then cancel that processing if the message is superseded. This significantly improves concurrency if significant processing is being done.
