F# Async: equivalent of Boost asio's strands - asynchronous

Boost's asio library allows the serialisation of asynchronous code in the following way. Handlers for asynchronous operations, such as those which read from a stream, may be associated with a strand. A strand belongs to an "IO context", which owns a thread pool. However many threads are in the pool, it is guaranteed that no two handlers associated with the same strand run concurrently. This makes it possible, for instance, to implement a state machine as if it were single-threaded, with all handlers for that machine serialised over a private strand.
I have been trying to figure out how this might be done with F#'s Async. I could not find any way to make sure that chosen sets of Async processes never run concurrently. Can anyone suggest how to do this?

It would be useful to know what use case you are trying to implement. I don't think F# async has anything that maps directly to strands, and you would likely use different techniques for implementing the different things that might all be implemented using strands.
For example, if you are concerned with reading data from a stream, an F# async block lets you write code that is asynchronous but sequential. The following runs a single logical process (which might be moved between thread-pool threads when you wait using let!):
let readTest () = async {
    let fs = File.OpenRead(@"C:\Temp\test.fs")
    let buffer = Array.zeroCreate 10
    let mutable read = 1
    while read <> 0 do
        let! r = fs.AsyncRead(buffer, 0, 10)
        printfn "Read: %A" buffer.[0 .. r-1]
        read <- r }

readTest() |> Async.Start
If you want to deal with events that arrive outside your control (i.e. push-based rather than pull-based, where you cannot ask the system to read the next buffer of data), you could serialize the events using a MailboxProcessor. The following sends two messages to the agent almost at the same time, but they are processed sequentially, with a 1-second delay:
let agent = MailboxProcessor.Start(fun inbox -> async {
    while true do
        let! msg = inbox.Receive()
        printfn "Got: %s" msg
        do! Async.Sleep(1000)
})
agent.Post("hello")
agent.Post("world")

Related

How can I wait for a specific result from a pool of futures with async rust?

Using the futures crate, I have a Vec of futures which return a bool, and I want to wait specifically for the future that returns true.
Consider the following pool of futures:
async fn async_function(guess: u8) -> bool {
    let random_wait = rand::thread_rng().gen_range(0..2);
    std::thread::sleep(std::time::Duration::from_secs(random_wait));
    println!("Running guess {guess}");
    guess == 231
}

fn main() {
    let mut pool: Vec<impl Future<Output = bool>> = vec![];
    for guess in 0..=255 {
        pool.push(async_function(guess));
    }
}
How do I wait for the futures in the vec?
Is it possible to wait until only one future returns true?
Can I identify the value of guess for the future that returns true?
I'm new to async rust, so I've been looking at the async-book.
From there, these are the options I've considered:
join! waits until all futures are done, so that doesn't work for me since I want to drop the remaining futures.
select! doesn't seem to be an option, because I would need to name each future in the select block, and I'm not about to write a 255-line select.
try_join! is tempting me to break semantics and have my async_function return Err(guess) so that it causes the try_join! to exit and return the value I want.
I tried using async_function(guess).boxed().into_stream() and then select_all from futures::stream, but it doesn't seem to run concurrently: I see my async functions running in order.
OK, my thinking about futures was wrong. I knew that they weren't executed immediately, but I wasn't using the executors from the futures crate correctly.
Here's what I've got that seems to work.
let thread_pool = ThreadPool::new().unwrap();
let mut pool = vec![];
for guess in 0..=255 {
    let thread = thread_pool.spawn_with_handle(async_function(guess)).expect("Failed to spawn thread");
    pool.push(thread.into_stream());
}
let stream = block_on_stream(futures::stream::select_all(pool));
for value in stream {
    println!("Got value {value}");
}
The thread pool executor is what creates the separate threads needed to run the futures. Without it my application was single-threaded, so no matter what I tried, it would only run the functions one at a time, not concurrently.
This way I spawn each request onto the thread pool, which appears to create 4 background threads. By pushing all the handles into a stream, using select_all, and iterating over the stream, my main thread blocks until a new value is available.
There are always 4 workers, and the thread pool schedules the tasks in the order they were requested, like a queue. This is perfect.

How to make async function yield on block?

I just started learning asynchronous Rust, so this is probably not a difficult question to answer; however, I am scratching my head here.
I am not trying to run tasks in parallel yet, only trying to get them to run concurrently.
According to the guide at https://rust-lang.github.io/async-book/,
The futures::join macro makes it possible to wait for multiple different futures to complete while executing them all concurrently.
So when I create 2 Futures, I should be able to "await" both of them at once. It also states that
Whereas calling a blocking function in a synchronous method would block the whole thread, blocked Futures will yield control of the thread, allowing other Futures to run.
From what I understand, if I await multiple Futures with join! and the first one blocks, the second one should start running.
So I made a very simple example with 2 async fns and tried to join! both, making sure the first one gets blocked. I used an mpsc::channel for the blocking, since the docs state that thread::sleep() should not be used in async fns and that recv()
will always block the current thread if there is no data available
However, the behavior is not what I expected: calling the blocking function does not yield control of the thread to allow the other Future to run, as I would expect from the second quote I provided. Instead, it just waits until it is no longer blocked, finishes the first Future and only then starts the second, pretty much as if they were synchronous and I had called one after the other.
My complete example code:
use std::{thread, sync::mpsc::{self, Sender, Receiver}, time::Duration};
use futures::executor; // added futures = "0.3" in cargo.toml dependencies

fn main() {
    let fut = main_async();
    executor::block_on(fut);
}

async fn main_async() {
    let (sender, receiver) = mpsc::channel();
    // this thread is just here so the f1 function gets blocked by something and can later resume
    let thread_handle = std::thread::spawn(move || {
        wait_send_function(sender);
    });
    let f1 = f1(receiver);
    let f2 = f2();
    futures::join!(f1, f2);
    thread_handle.join().unwrap();
}

fn wait_send_function(sender: Sender<i32>) {
    thread::sleep(Duration::from_millis(5000));
    sender.send(1234).unwrap();
}

async fn f1(receiver: Receiver<i32>) {
    println!("starting f1");
    let new_nmbr = receiver.recv().unwrap(); // I would expect f2 to start now, since this is blocking
    println!("Received nmbr is: {}", new_nmbr);
}

async fn f2() {
    println!("starting f2");
}
And the output is simply:
starting f1
Received nmbr is: 1234
starting f2
My question is what am I missing here, why does f2 only start after f1 is completed and what would I need to do to get the behavior I want (completing f2 first if f1 is blocked and then waiting for f1)?
Maybe the book is a little misleading, but when it refers to "a blocked future", it does not mean blocked in the sense of blocking synchronous code (if that were the case, there would be no problem using std::thread::sleep()); rather, it means that the future is waiting to be polled by the executor.
Thus, a std::sync::mpsc channel that blocks the thread will not have the desired effect (definitely not on a single-threaded executor like the futures crate's, but it's a bad idea on multi-threaded executors too). Use futures::channel::mpsc and everything will work.

F#: Synchronously start Async within a SynchronizationContext

Async.SwitchSynchronizationContext allows an Async action to switch to running within a given SynchronizationContext. I would like to synchronously begin an Async computation within a SynchronizationContext, rather than switching inside the Async.
This would ensure that Async actions are run in the desired order, and that they are not run concurrently.
Is this possible? The documentation for Async.SwitchSynchronizationContext mentions using SynchronizationContext.Post, but no such functionality is exposed for Async.
I have not tested this, but I think the easiest way to achieve what you want is to combine the SynchronizationContext.Send method (mentioned in the comments) with the Async.StartImmediate operation. The former lets you start some work synchronously in the synchronization context. The latter lets you start an async workflow in the current context.
If you combine the two, you can define a helper that starts a function in a given synchronization context and, in this function, immediately starts an async workflow:
let startInContext (sync: SynchronizationContext) work =
    sync.Send((fun _ -> Async.StartImmediate(work)), null)
For example:
async {
    printfn "Running on the given sync context"
    do! Async.Sleep(1000)
    printfn "Should be back on the original sync context" }
|> startInContext SynchronizationContext.Current

Round robin concurrent algorithm with F# and Task<T>

I have a C# API like this:
Task<T> Foo(serverUri)
Let's say I have 4 possible serverUris. I want to implement a function that returns a DiscUnionBar<T>:
type DiscUnionBar<'T> =
    | Safe of 'T
    | Weak of 'T
    | ConnectionError
The implementation will have the following requirements:
Do 3 (max) concurrent calls to Foo() with 3 different serverUris.
Take the 2 fastest successful responses, T1 and T2. If they give the same result (T1 == T2), stop making concurrent requests, ignore/cancel the requests that are in progress, and return Safe of T. If T1 != T2, keep making requests (or looking at responses) until two equal responses are found.
If any of the requests fails (throws ServerException), try with a serverUri that has not been requested before.
If all requests to all 4 servers fail, return ConnectionError.
If only 1 request succeeds, return Weak of T.
Is this easy to do given that I cannot use F#'s Async and have to stick with C#'s Task usage? I'm a bit lost on this one.
Unless there is a reason you cannot use Async anywhere in your code, and your only limitation is that Foo has to return a Task, you should have no problem converting the Task resulting from calling Foo to an Async with Async.AwaitTask.
This way you can build the logic using F#'s async computation expressions as if Foo returned an Async:
let simpleComputation serverUri = async {
    let! fooResult = Foo(serverUri) |> Async.AwaitTask
    (* here you can work with the T returned by Foo's task *)
}
I also have good experience with FSharp.Control.FusionTasks library, which lets you use Task directly in async computation expressions, without having to call AwaitTask explicitly, and helps in Async/Task interop in general. Although some may not like that it tries to hide the Tasks.
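As a rough, untested sketch of how the error-handling part of the requirements could then be expressed (Foo and ServerException are the names from the question; everything else is illustrative), each call can be wrapped so that a failed server simply yields None, and the racing/fallback logic can pattern match on the option:

// Wrap the C# Foo so a ServerException becomes None instead of an exception.
// The surrounding logic can then retry with a serverUri that has not been used yet.
let tryFoo serverUri = async {
    try
        let! result = Foo(serverUri) |> Async.AwaitTask
        return Some result
    with :? ServerException ->
        return None }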

F# Async File Copy

To copy a file asynchronously, will something like this work?
let filecopyasync (source, target) =
    let task = Task.Run(fun () -> File.Copy(source, target, true))
    // do other stuff
    Async.AwaitIAsyncResult task
In particular, will this fire up a new thread to do the copy while I "do other stuff"?
UPDATE:
Found another solution:
let asyncFileCopy (source, target, overwrite) =
    printfn "Copying %s to %s" source target
    let fn = new Func<string * string * bool, unit>(File.Copy)
    Async.FromBeginEnd((source, target, overwrite), fn.BeginInvoke, fn.EndInvoke)

let copyfile1 = asyncFileCopy("file1", "file2", true)
let copyfile2 = asyncFileCopy("file3", "file4", true)
[copyfile1; copyfile2] |> Async.Parallel |> Async.RunSynchronously |> ignore
Your question is conflating two issues, namely multithreading and asynchrony. It's important to realise that these are entirely different concepts:
Asynchrony is about a workflow of tasks where we respond to the completion of those tasks independently of the main program flow.
Multithreading is an execution model, one which can be used to implement asynchrony, although asynchrony can be achieved in other ways (such as hardware interrupts).
Now, when it comes to I/O, the question you should not be asking is "Can I spin up another thread to do it for me?"
Why, you ask?
If you do some I/O in the main thread, you typically block the main thread waiting for results. If you evade this problem by creating a new thread, you haven't actually solved the issue, you've just moved it around. Now you've blocked either a new thread that you've created or a thread pool thread. Oh dear, same problem.
Threads are an expensive and valuable resource and shouldn't be squandered on waiting for blocking I/O to complete.
So, what is the real solution?
Well, we achieve asynchrony via one of those other approaches: we ask the OS to perform some I/O and to let us know when the operation is complete. That way, no thread is blocked while we're waiting for results. In Windows, this is implemented via something called I/O completion ports.
How do I do this in F#?
The .NET CopyToAsync method is probably the easiest approach. Since this returns a plain task, it's helpful to create a helper method:
type Async with
    static member AwaitPlainTask (task: Task) =
        task.ContinueWith(ignore) |> Async.AwaitTask
Then
[<Literal>]
let DEFAULT_BUFFER_SIZE = 4096

let copyToAsync source dest =
    async {
        use sourceFile = new FileStream(source, FileMode.Open, FileAccess.Read, FileShare.Read, DEFAULT_BUFFER_SIZE, true)
        use destFile = new FileStream(dest, FileMode.OpenOrCreate, FileAccess.Write, FileShare.None, DEFAULT_BUFFER_SIZE, true)
        do! sourceFile.CopyToAsync(destFile) |> Async.AwaitPlainTask
    }
You could then use this with Async.Parallel to perform multiple copies concurrently.
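For example, something along these lines should copy two files concurrently (the file names are just placeholders):

// Run both copies in parallel and wait for them all to finish.
[ copyToAsync "a.txt" "a-copy.txt"
  copyToAsync "b.txt" "b-copy.txt" ]
|> Async.Parallel
|> Async.RunSynchronously
|> ignore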
Note: This is different to what you wrote above because File.Copy is a synchronous method that returns unit, while CopyToAsync is an async method that returns a Task. You cannot magically make synchronous methods asynchronous by putting async wrappers around them; instead, you need to make sure you are using async all the way down.
You can test it yourself with a few printfns. I found I had to use Async.RunSynchronously to force the main thread to wait for the copy to complete. I'm not sure why awaiting alone didn't work, but you can see the expected sequence of outputs indicating that the copy happened in the background.
open System
open System.IO
open System.Threading
open System.Threading.Tasks

let filecopyasync (source, target) =
    let task =
        Task.Run(fun () ->
            printfn "CopyThread: %d" Thread.CurrentThread.ManagedThreadId
            Thread.Sleep(10000)
            File.Copy(source, target, true)
            printfn "copydone")
    printfn "mainThread: %d" Thread.CurrentThread.ManagedThreadId
    let result = Async.AwaitIAsyncResult task
    Thread.Sleep(3000)
    printfn "doing stuff"
    Async.RunSynchronously result |> ignore
    printfn "done"
Output:
filecopyasync (@"foo.txt", @"bar.txt");;
mainThread: 1
CopyThread: 7
doing stuff
copydone
done
If all you're trying to do is run something on another thread while you do something else, then your initial Task.Run approach should be fine (note that you can get a Task<unit> if you call Task.Run<_> instead of the non-generic Task.Run, which might be marginally easier to deal with).
However, you should be clear about your goals - arguably a "proper" asynchronous file copy wouldn't require a separate .NET thread (which is a relatively heavy-weight primitive) and would rely on operating system features like completion ports instead; since System.IO.File doesn't provide a native CopyAsync method you'd need to write your own (see https://stackoverflow.com/a/35467471/82959 for a simple C# implementation that would be easy to transliterate).
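For reference, a minimal sketch of the Task.Run<_> variant mentioned above (the function name is illustrative, and it assumes the opens from the snippet earlier): the generic overload returns a Task<unit>, which Async.AwaitTask can consume directly, so AwaitIAsyncResult is not needed.

let copyOnThreadPool (source, target) : Async<unit> =
    // Runs the blocking File.Copy on a thread-pool thread and exposes it as an Async<unit>.
    Task.Run<unit>(fun () -> File.Copy(source, target, true))
    |> Async.AwaitTask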
