How to use async_std::task::sleep to simulate blocking operation? - asynchronous

I have some simple code like this to simulate how asynchronous code behaves around a blocking operation.
I'm expecting all of these "Hello" prints to appear after 1000ms.
But this code behaves like normal blocking code: each hello_wait call waits 1000ms and then prints its "Hello", so the lines appear one second apart.
How can I make it run concurrently?
use std::time::Duration;
use async_std::task;

async fn hello_wait() {
    task::sleep(Duration::from_millis(1000)).await;
    println!("Hello");
}

#[async_std::main]
async fn main() {
    hello_wait().await;
    hello_wait().await;
    hello_wait().await;
    hello_wait().await;
    hello_wait().await;
}
This is what is happening:
// -- Wait 1000ms --
Hello
// -- Wait 1000ms --
Hello
// -- Wait 1000ms --
Hello
// -- Wait 1000ms --
Hello
// -- Wait 1000ms --
Hello
This is what I want:
// -- Wait 1000ms --
Hello
Hello
Hello
Hello
Hello

How can I make it run concurrently?
You can either:
spawn a task, which will make every hello_wait be scheduled independently,
or "merge" the futures, which will drive all of them concurrently.
Your expectation might come from JavaScript or C# async, where the "base" awaitable is a task. Tasks are "active": as soon as you create them they can be scheduled and do their thing concurrently.
But Rust's core awaitable is more of a coroutine, so it is inert (passive): creating one doesn't really do anything. Futures have to be polled to make progress, and await will repeatedly poll until completion before resuming. So when you await something, it runs to completion with no opportunity for anything else to interleave at that point.
Therefore running futures concurrently requires one of two things:
upgrading them to a task, meaning they can be scheduled on their own; that is what spawn does,
or composing futures into a single "meta-future" which polls them all in turn when it is itself polled; that is what constructs like join_all or tokio::join do.
Note that composing futures does not allow parallelism: since the futures are "siblings", they can only get polled (and thus actually do things) one at a time; it's just that this polling (and thus their progress) gets interleaved.
Spawning tasks does allow parallelism (if the runtime is multithreaded and the machine has multiple cores, though the latter is pretty much universal these days), but it has its own limitations with respect to lifetimes and memory management, and is a bit more expensive.
Here is a playground demo of various options. It uses tokio because apparently the playground doesn't have async_std, and I'm not sure it'd be possible to enable the "unstable" feature anyway, but aside from that and the use of tokio::join (async_std's Future::join can only join 2 futures at a time so you have to chain the call) it should work roughly the same in async_std.
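For reference, here is a minimal sketch (not the linked playground code) of the spawn-based option, using async_std::task::spawn since that is the runtime in the question; collecting the join handles into a Vec is just one way of keeping main alive until every task finishes:

use std::time::Duration;
use async_std::task;

async fn hello_wait() {
    task::sleep(Duration::from_millis(1000)).await;
    println!("Hello");
}

#[async_std::main]
async fn main() {
    // Each spawn hands hello_wait to the runtime as an independent task,
    // so all five sleeps run concurrently.
    let handles: Vec<_> = (0..5).map(|_| task::spawn(hello_wait())).collect();
    // Awaiting the handles keeps main alive until every task has printed.
    for handle in handles {
        handle.await;
    }
}

With this version all five "Hello"s appear together after roughly one second, the same result the join_all solution below produces.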

I was able to do it this way with the futures crate's join_all:
use std::time::Duration;
use async_std::task;
use futures::future;

async fn hello_wait() {
    task::sleep(Duration::from_millis(1000)).await;
    println!("Hello");
}

#[async_std::main]
async fn main() {
    let mut asyncfuncs = vec![];
    asyncfuncs.push(hello_wait());
    asyncfuncs.push(hello_wait());
    asyncfuncs.push(hello_wait());
    asyncfuncs.push(hello_wait());
    asyncfuncs.push(hello_wait());
    future::join_all(asyncfuncs.into_iter()).await;
}

Related

How can I wait for a specific result from a pool of futures with async rust?

Using the futures crate.
I have a Vec of futures which return a bool, and I want to wait specifically for the future that returns true.
Consider the following pool of futures:
use std::future::Future;
use rand::Rng;

async fn async_function(guess: u8) -> bool {
    let random_wait = rand::thread_rng().gen_range(0..2);
    std::thread::sleep(std::time::Duration::from_secs(random_wait));
    println!("Running guess {guess}");
    guess == 231
}

fn main() {
    let mut pool: Vec<impl Future<Output = bool>> = vec![];
    for guess in 0..=255 {
        pool.push(async_function(guess));
    }
}
How do I wait for the futures in the vec?
Is it possible to wait until only one future returns true?
Can I identify the value of guess for the future that returns true?
I'm new to async rust, so I've been looking at the async-book.
From there, these are the options I've considered:
join! waits until all the futures are done, so that doesn't work for me since I want to drop the remaining futures.
select! doesn't seem to be an option, because I need to name each specific future in the select block and I'm not about to write a 255-line select.
try_join! is tempting me to break semantics and have my async_function return Err(guess) so that it causes the try_join to exit and return the value I want.
I tried using async_function(guess).boxed().into_stream() and then select_all from futures::stream, but it doesn't seem to run concurrently; I see my async functions running in order.
OK, my mental model of futures was wrong. I knew that they aren't executed immediately, but I wasn't using the executors from the futures crate correctly.
Here's what I've got that seems to work:
use futures::executor::{block_on_stream, ThreadPool};
use futures::task::SpawnExt; // for spawn_with_handle
use futures::FutureExt; // for into_stream

let thread_pool = ThreadPool::new().unwrap();
let mut pool = vec![];
for guess in 0..=255 {
    let handle = thread_pool
        .spawn_with_handle(async_function(guess))
        .expect("Failed to spawn thread");
    pool.push(handle.into_stream());
}
let stream = block_on_stream(futures::stream::select_all(pool));
for value in stream {
    println!("Got value {value}");
}
The thread pool executor is what creates the separate threads needed to run. Without it my application was single-threaded, so no matter what I tried it would only run the functions one at a time, not concurrently.
This way I spawn each request into the thread pool. The thread pool appears to spawn 4 background threads. By pushing the handles into streams, combining them with select_all, and iterating over the resulting stream, my main thread blocks until a new value is available.
There are always 4 workers, and the thread pool schedules them in the order they were requested, like a queue. This is perfect.

How to make async function yield on block?

I just started learning asynchronous Rust, so this is probably not a difficult question to answer; however, I am scratching my head here.
I am not trying to run tasks in parallel yet, only trying to get them to run concurrently.
According to the guide at https://rust-lang.github.io/async-book/,
The futures::join macro makes it possible to wait for multiple different futures to complete while executing them all concurrently.
So when I create 2 Futures, I should be able to "await" both of them at once. It also states that
Whereas calling a blocking function in a synchronous method would block the whole thread, blocked Futures will yield control of the thread, allowing other Futures to run.
From what I understand here, if I await multiple Futures with join! and the first one is blocked, the second one should start running.
So I made a very simple example where I created 2 async fns and tried to join! both, making sure the first one gets blocked. I used an mpsc::channel for the blocking, since the docs state that thread::sleep() should not be used in async fns and that recv()
will always block the current thread if there is no data available
However, the behavior is not what I expected: calling the blocking function does not yield control of the thread and allow the other Future to run, like I would expect from the second quote. Instead, it just waits until it is no longer blocked, finishes the first Future and only then starts the second, pretty much as if they were synchronous and I had just called one after the other.
My complete example code:
use std::{
    sync::mpsc::{self, Receiver, Sender},
    thread,
    time::Duration,
};
use futures::executor; // added futures = "0.3" in Cargo.toml dependencies

fn main() {
    let fut = main_async();
    executor::block_on(fut);
}

async fn main_async() {
    let (sender, receiver) = mpsc::channel();
    // This thread is only here so the f1 function gets blocked by something and can later resume.
    let thread_handle = std::thread::spawn(move || {
        wait_send_function(sender);
    });
    let f1 = f1(receiver);
    let f2 = f2();
    futures::join!(f1, f2);
    thread_handle.join().unwrap();
}

fn wait_send_function(sender: Sender<i32>) {
    thread::sleep(Duration::from_millis(5000));
    sender.send(1234).unwrap();
}

async fn f1(receiver: Receiver<i32>) {
    println!("starting f1");
    let new_nmbr = receiver.recv().unwrap(); // I would expect f2 to start now, since this is blocking
    println!("Received nmbr is: {}", new_nmbr);
}

async fn f2() {
    println!("starting f2");
}
And the output is simply:
starting f1
Received nmbr is: 1234
starting f2
My question is: what am I missing here? Why does f2 only start after f1 is completed, and what would I need to do to get the behavior I want (completing f2 first while f1 is blocked and then waiting for f1)?
Maybe the book is a little misleading, but when it refers to "a blocked future", it does not mean blocking in the sense of synchronous code (if that were the case, there would be no problem with using std::thread::sleep()); rather, it means that the future is waiting to be polled by the executor.
Thus, a std::sync::mpsc channel that blocks the thread will not have the desired effect (definitely not on a single-threaded executor like the futures crate's, but it's a bad idea on multi-threaded executors too). Use futures::channel::mpsc and everything will work.
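As an illustration, here is a minimal sketch of the same program reworked around futures::channel::mpsc; the choice of the unbounded channel variant is mine, made purely to keep the example short:

use std::{thread, time::Duration};
use futures::{channel::mpsc, executor, StreamExt};

fn main() {
    executor::block_on(main_async());
}

async fn main_async() {
    // The futures channel's receiver is a Stream: awaiting next() suspends
    // the future instead of blocking the whole thread.
    let (sender, mut receiver) = mpsc::unbounded();

    let thread_handle = thread::spawn(move || {
        thread::sleep(Duration::from_millis(5000));
        sender.unbounded_send(1234).unwrap();
    });

    let f1 = async {
        println!("starting f1");
        let new_nmbr = receiver.next().await.unwrap(); // yields here, so f2 gets to run
        println!("Received nmbr is: {}", new_nmbr);
    };
    let f2 = async {
        println!("starting f2");
    };

    futures::join!(f1, f2);
    thread_handle.join().unwrap();
}

Now "starting f1" and "starting f2" are printed immediately, and the received number shows up about five seconds later.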

F# Async: equivalent of Boost asio's strands

Boost's asio library allows the serialisation of asynchronous code in the following way. Handlers for asynchronous operations, such as those which read from a stream, may be associated with a strand. A strand is associated with an "IO context", and an IO context owns a thread pool. However many threads are in the pool, it is guaranteed that no two handlers associated with the same strand run concurrently. This makes it possible, for instance, to implement a state machine as if it were single-threaded, where all handlers for that machine serialise over a private strand.
I have been trying to figure out how this might be done with F#'s Async. I could not find any way to make sure that chosen sets of Async processes never run concurrently. Can anyone suggest how to do this?
It would be useful to know what use case you are trying to implement. I don't think F# async has anything that maps directly to strands, and you would likely use different techniques for the different things that might all be implemented with strands.
For example, if you are concerned with reading data from a stream, an F# async block lets you write code that is asynchronous but sequential. The following runs a single logical process (which might be moved between threads of a thread pool when you wait using let!):
let readTest () = async {
    let fs = File.OpenRead(@"C:\Temp\test.fs")
    let buffer = Array.zeroCreate 10
    let mutable read = 1
    while read <> 0 do
        let! r = fs.AsyncRead(buffer, 0, 10)
        printfn "Read: %A" buffer.[0 .. r-1]
        read <- r }
readTest() |> Async.Start
If you wanted to deal with events that occur outside of your control (i.e. push-based rather than pull-based), for example when you cannot ask the system to read the next buffer of data, you could serialize the events using a MailboxProcessor. The following sends two messages to the agent almost at the same time, but they are processed sequentially, with a 1 second delay:
let agent = MailboxProcessor.Start(fun inbox -> async {
    while true do
        let! msg = inbox.Receive()
        printfn "Got: %s" msg
        do! Async.Sleep(1000)
})
agent.Post("hello")
agent.Post("world")

Why Async.StartChild does not take CancellationToken?

I am struggling to understand the design of the Async.StartChild and Async.Start APIs.
What I would like is to start an async process which reads from a TCP stream and calls a callback according to the commands arriving over TCP.
As this async process does not really return any single value, it seems like I should use Async.Start. At some point I want to "close" my TCP client, and Async.Start takes a CancellationToken, which gives me the ability to implement "close". So far so good.
The problem is, I would like to know when the TCP client is done with cancellation. Some buffer-flushing work is done once Cancel is requested, so I do not want to terminate the application before the TCP client has finished its cleanup. But Async.Start returns unit, which means I have no way of knowing when such an async process is complete. So it looks like Async.StartChild should help: I should be able to invoke cancellation, and when cleanup is done, this async will invoke the next continuation in the chain (or throw an exception?). But... Async.StartChild does not take a CancellationToken, only a timeout.
Why does Async.StartChild implement just a single cancellation strategy (timeout) instead of exposing a more generic one (accepting a CancellationToken)?
To answer the first part of the question: if you need to do some cleanup work, you can just put it in a finally block and it will be called when the workflow is cancelled. For example:
let work =
    async {
        try
            printfn "first work"
            do! Async.Sleep 1000
            printfn "second work"
        finally
            printfn "cleanup" }
Say you run this using Async.Start, wait for 500ms and then cancel the computation:
let cts = new System.Threading.CancellationTokenSource()
Async.Start(work, cts.Token)
System.Threading.Thread.Sleep(500)
cts.Cancel()
The output will be "first work, cleanup". As you can see, cancelling the computation will run all the finally clauses.
To answer the second part of the question: if you need to wait until the work completes, you can use Async.RunSynchronously (but then, perhaps you do not actually need asynchronous workflows if you are blocking anyway...).
The following starts a background process that cancels the main work after 500ms and then starts the main work synchronously:
let cts = new System.Threading.CancellationTokenSource()

async {
    do! Async.Sleep(500)
    cts.Cancel() } |> Async.Start

try Async.RunSynchronously(work, cancellationToken = cts.Token)
with :? System.OperationCanceledException -> ()
printfn "completed"
This prints "first work, cleanup, completed" - as you can see, the RunSynchronously call was blocked until the work was cancelled.

Sleep for x seconds before running next operation

I have been trying in various ways to make my program sleep for 10 seconds before running the next line of code.
this.SetContentView (Resource_Layout.Main)
let timer = new System.Timers.Timer(10000.0)
async{do timer.Start()}
this.SetContentView (Resource_Layout.next)
I can't get any solution to work.
If you want to use async rather than the more direct way (of creating a timer and setting the content view in the event handler of the timer), then you need something like this:
this.SetContentView (Resource_Layout.Main)
async {
    do! Async.Sleep(10000)
    this.SetContentView (Resource_Layout.next) }
|> Async.StartImmediate
The key points:
Using do! Async.Sleep, you can pause the asynchronous computation without blocking the thread.
By moving the SetContentView call inside the async, it will happen after the sleep.
Using Async.StartImmediate, you start the workflow on the current thread, which ensures that the rest of the computation runs in the same threading context (meaning that it will run on the UI thread and the code will be able to access UI elements).
