Should I return await in Rust?

In JavaScript, async code is written with Promises and async/await syntax similar to Rust's. It is generally considered redundant (and therefore discouraged) to await a Promise and return the result when the Promise can simply be returned, i.e., when the async call is the last thing the function does:
async function myFn() { /* ... */ }

async function myFn2() {
  // do setup work
  return await myFn()
  // ^ this is not necessary when we can just return the Promise
}
I am wondering whether a similar pattern applies in Rust. Should I prefer this:
pub async fn my_function(
    &mut self,
) -> Result<()> {
    // do synchronous setup work
    self.exec_command(
        /* ... */
    )
    .await
}
Or this:
pub fn my_function(
    &mut self,
) -> impl Future<Output = Result<()>> {
    // do synchronous setup work
    self.exec_command(
        /* ... */
    )
}
The former feels more ergonomic to me, but I suspect that the latter might be more performant. Is this the case?

One semantic difference between the two variants is that in the first variant the synchronous setup code will run only when the returned future is awaited, while in the second variant it will run as soon as the function is called:
let fut = x.my_function();
// in the second variant, the synchronous setup has finished by now
...
let val = fut.await; // in the first variant, it runs here
For the difference to be noticeable, the synchronous setup code must have side effects, and there needs to be a delay between calling the async function and awaiting the future it returns.
Unless you have a specific reason to execute the preamble immediately, go with the async function, i.e. the first variant. It makes the function slightly more predictable, and makes it easier to add more awaits later as the function is refactored.
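Here is a minimal, self-contained sketch of that difference; the function names are made up for illustration, and a tokio runtime is assumed only so the futures can be awaited:
use std::future::Future;

// second variant: the body runs as soon as the function is called,
// only the returned future is lazy
fn eager_setup() -> impl Future<Output = ()> {
    println!("setup runs at call time");
    async {}
}

// first variant: nothing runs until the returned future is awaited
async fn lazy_setup() {
    println!("setup runs at await time");
}

#[tokio::main]
async fn main() {
    let a = eager_setup(); // prints immediately
    let b = lazy_setup();  // prints nothing yet
    a.await;
    b.await;               // prints here
}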

There is no real difference between the two, since an async fn just desugars to a function returning impl Future<Output = Result<T, E>>. I don't believe there is any meaningful performance difference either, at least in my empirical usage of both.
If you are asking about style, then in my opinion the first one is preferred: the types are clearer to me, and I agree it is more ergonomic.

Related

Testing Async Functions

I am using wasm-pack and I need to write a unit test for an asynchronous function that references a JavaScript library. I tried using futures::executor::block_on to get the asynchronous function to return so I could make an assert. However, blocking is not supported in the wasm build target. I can't test in a different target because the asynchronous function I am testing references a JavaScript library. I also don't think I can spawn a new thread and handle the future there, because it needs to return to the assert statement in the original thread. What is the best way to go about testing this asynchronous function?
Code being tested in src/lib.rs
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub async fn func_to_test() -> bool {
    some_long_running_function().await
}
Testing code in tests/web.rs
#![cfg(target_arch = "wasm32")]

extern crate wasm_bindgen_test;
use wasm_bindgen_test::*;

use futures::executor::block_on;
use test_crate;

#[wasm_bindgen_test]
fn can_return_from_async() {
    let ret = block_on(test_crate::func_to_test());
    assert!(ret);
}
How do I test an async function if I can't use any blocking?
Rust can handle tests that are async functions themselves. Just change the test function to be async and add an .await.
#[wasm_bindgen_test]
async fn can_return_from_async() {
    let ret = test_crate::func_to_test().await;
    assert!(ret);
}
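For completeness, a sketch of what the whole test file might look like, assuming the crate really is named test_crate as in the question (wasm_bindgen_test_configure! is only needed if the test has to run in a browser rather than in Node):
#![cfg(target_arch = "wasm32")]

use wasm_bindgen_test::*;

wasm_bindgen_test_configure!(run_in_browser);

#[wasm_bindgen_test]
async fn can_return_from_async() {
    // the async test is driven by the wasm-bindgen test harness itself,
    // so no block_on is required
    let ret = test_crate::func_to_test().await;
    assert!(ret);
}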

Is there a way to poll several futures simultaneously in Rust async

I'm trying to execute several sqlx queries, given by an iterator, in parallel.
This is probably the closest I've got so far.
let mut futures = HashMap::new() // placeholder, filled HashMap in reality
    .iter()
    .map(async move |(_, item)| -> Result<(), sqlx::Error> {
        let result = sqlx::query_file_as!(
            // omitted
        )
        .fetch_one(&pool)
        .await?;
        channel.send(Enum::Event(result)).ignore();
        Ok(())
    })
    .collect();

futures::future::join_all(futures);
All queries and sends are independent of each other, so if one of them fails, the others should still get processed.
Furthermore, the async closure as written above is not currently possible.
Rust doesn't yet have async closures. You instead need to have the closure return an async block:
move |(_, item)| async move { ... }
Additionally, make sure you .await the future returned by join_all in order to ensure the individual tasks are actually polled.
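Putting both points together, a sketch of the corrected shape; here, map stands in for the filled HashMap, and run_query_and_send is a made-up placeholder for the sqlx query plus channel send from the question:
// the closure itself stays synchronous and returns an async block
let futures: Vec<_> = map
    .iter()
    .map(|(_, item)| async move {
        // query the database and forward the result; an error here only
        // fails this one future, not the others
        run_query_and_send(item).await
    })
    .collect();

// join_all must be awaited, otherwise nothing is ever polled;
// each element of `results` is its own Result
let results: Vec<Result<(), sqlx::Error>> = futures::future::join_all(futures).await;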

Calling async function from closure

I want to await an async function inside a closure used in an iterator. The function requiring the closure is called inside a struct implementation. I can't figure out how to do this.
This code simulates what I'm trying to do:
struct MyType {}

impl MyType {
    async fn foo(&self) {
        println!("foo");
        (0..2).for_each(|v| {
            self.bar(v).await;
        });
    }

    async fn bar(&self, v: usize) {
        println!("bar: {}", v);
    }
}

#[tokio::main]
async fn main() {
    let mt = MyType {};
    mt.foo().await;
}
Obviously, this will not work since the closure is not async, giving me:
error[E0728]: `await` is only allowed inside `async` functions and blocks
--> src/main.rs:8:13
|
7 | (0..2).for_each(|v| {
| --- this is not `async`
8 | self.bar(v).await;
| ^^^^^^^^^^^^^^^^^ only allowed inside `async` functions and blocks
After looking for an answer on how to call an async function from a non-async one, I ended up with this:
tokio::spawn(async move {
    self.bar(v).await;
});
But now I'm hitting lifetime issues instead:
error[E0759]: `self` has an anonymous lifetime `'_` but it needs to satisfy a `'static` lifetime requirement
--> src/main.rs:4:18
|
4 | async fn foo(&self) {
| ^^^^^
| |
| this data with an anonymous lifetime `'_`...
| ...is captured here...
...
8 | tokio::spawn(async move {
| ------------ ...and is required to live as long as `'static` here
This also doesn't surprise me, since from what I understand the Rust compiler cannot know how long the spawned task will live. Given this, the task spawned with tokio::spawn might outlive the type MyType.
The first fix I came up with was to make bar an associated function, copy everything I need into my closure, pass it as a value to bar and call it with MyType::bar(copies_from_self), but this is getting ugly since there's a lot of copying. It also feels like a workaround for not knowing how lifetimes work.
I was instead trying to use futures::executor::block_on which works for simple tasks like the one in this post:
(0..2).for_each(|v| {
    futures::executor::block_on(self.bar(v));
});
But when putting this into my real-life example, where I use a third-party library1 which also uses tokio, things no longer work. After reading the documentation, I realise that #[tokio::main] is a macro that eventually wraps everything in block_on, so doing this results in nested block_on calls. This might be the reason why one of the async methods called in bar just stops working without any error or logging (it works without block_on, so it shouldn't be anything in the code). I reached out to the authors, who said that I could use for_each(|i| async move { ... }), which made me even more confused.
(0..2).for_each(|v| async move {
    self.bar(v).await;
});
This results in the compilation error
expected `()`, found opaque type
which I think makes sense since I'm now returning a future and not (). My naive approach to this was to try and await the future with something like this:
(0..2).for_each(|v| {
    async move {
        self.bar(v).await;
    }
    .await
});
But that takes me back to square one, resulting in the following compilation error, which I also think makes sense since I'm back to using .await in the closure, which is sync:
`await` is only allowed inside `async` functions and blocks
This discovery also makes it hard for me to make use of answers such as the ones found here and here.
The question, after all this cargo cult programming, is basically: is it possible, and if so, how do I call my async function from the closure in an iterator (preferably without spawning a task, to avoid the lifetime problems)? If this is not possible, what would an idiomatic implementation look like?
1This is the library/method used
Iterator::for_each expects a synchronous closure, thus you can't use .await in it (not directly at least), nor can you return a future from it.
One solution is to just use a for loop instead of .for_each:
for v in 0..2 {
    self.bar(v).await;
}
The more general approach is to use streams instead of iterators, since those are the asynchronous equivalent (and the equivalent methods on streams are typically asynchronous as well). This would work not only for for_each but for most other iterator methods:
use futures::prelude::*;

futures::stream::iter(0..2)
    .for_each(|v| async move {
        self.bar(v).await;
    })
    .await;
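If the calls are allowed to overlap, the stream version can also run them concurrently: for_each_concurrent from the same StreamExt trait takes a concurrency limit. This is only a sketch of the call shape:
use futures::prelude::*;

futures::stream::iter(0..2)
    // poll up to 2 of the `bar` futures at a time
    .for_each_concurrent(2, |v| async move {
        self.bar(v).await;
    })
    .await;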

How to use Rust futures in callbacks?

Is there any way to use futures in callbacks? For example...
// Send message on multiple channels while removing ones that are closed.
use smol::channel::Sender;
...
// (expecting bool, found opaque type)
vec_of_sender.retain(|sender| async {
    sender.send(msg.clone()).await.is_ok()
});
My work-around is to loop twice: On the first pass I delete closed senders (non-async) and on the second I do the actual send (async using for sender in ...). But it seems like I should be able to do it all in a single retain() call.
You can't use retain in this way. The closure that retain accepts must implement FnMut(&T) -> bool, but every async function returns an implementation of Future.
You can turn an async function into a synchronous one by blocking on it. For example, if you were using tokio, you could do this:
use tokio::runtime::Runtime;

let rt = Runtime::new().unwrap();

vec_of_sender.retain(|sender| {
    rt.block_on(async { sender.send(msg.clone()).await.is_ok() })
});
However, there is overhead to adding an async runtime, and I have a feeling that you are trying to solve the wrong problem.
The closure passed to retain must return a bool, but every async function returns impl Future. Instead, you can use Stream, which is the asynchronous version of Iterator. You can convert the vector into a Stream:
let stream = stream::iter(vec_of_sender);
And then use the filter method, which accepts an asynchronous closure and returns a new Stream:
let vec_of_sender = stream
    .filter(|sender| async { sender.send(msg.clone()).await.is_ok() })
    .collect::<Vec<_>>()
    .await;
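In practice this closure can be finicky, because the async block wants to borrow the &Sender that filter passes in. A sketch that sidesteps that by cloning (smol's Sender is cheaply cloneable) might look like this:
use futures::stream::{self, StreamExt};

let vec_of_sender: Vec<_> = stream::iter(vec_of_sender)
    .filter(|sender| {
        // clone what the async block needs so that it owns its data
        let sender = sender.clone();
        let msg = msg.clone();
        async move { sender.send(msg).await.is_ok() }
    })
    .collect()
    .await;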
To avoid creating a new Vec, you can also use swap_remove:
let mut i = 0usize;
while i < vec_of_sender.len() {
    if vec_of_sender[i].send(msg.clone()).await.is_ok() {
        i += 1;
    } else {
        vec_of_sender.swap_remove(i);
    }
}
Note that this will change the order of the vector.

How do I execute an async/await function without using any external dependencies?

I am attempting to create the simplest possible example that can get async fn hello() to eventually print out Hello World!. This should happen without any external dependency like tokio, just plain Rust and std. Bonus points if we can get it done without ever using unsafe.
#![feature(async_await)]

async fn hello() {
    println!("Hello, World!");
}

fn main() {
    let task = hello();
    // Something beautiful happens here, and `Hello, World!` is printed on screen.
}
I know async/await is still a nightly feature, and it is subject to change in the foreseeable future.
I know there is a whole lot of Future implementations, I am aware of the existence of tokio.
I am just trying to educate myself on the inner workings of standard library futures.
My helpless, clumsy endeavours
My vague understanding is that, first off, I need to Pin task down. So I went ahead and
let pinned_task = Pin::new(&mut task);
but
the trait `std::marker::Unpin` is not implemented for `std::future::GenFuture<[static generator#src/main.rs:7:18: 9:2 {}]>`
so I thought, of course, I probably need to Box it, so I'm sure it won't move around in memory. Somewhat surprisingly, I get the same error.
The furthest I could get so far is
let pinned_task = unsafe {
    Pin::new_unchecked(&mut task)
};
which is obviously not something I should do. Even so, let's say I got my hands on the Pinned Future. Now I need to poll() it somehow. For that, I need a Waker.
So I tried to look around on how to get my hands on a Waker. On the doc it kinda looks like the only way to get a Waker is with another new_unchecked that accepts a RawWaker. From there I got here and from there here, where I just curled up on the floor and started crying.
This part of the futures stack is not intended to be implemented by many people. The rough estimate that I have seen is that maybe there will be 10 or so actual implementations.
That said, you can fill in the basic aspects of an extremely limited executor by following the function signatures needed:
async fn hello() {
    println!("Hello, World!");
}

fn main() {
    drive_to_completion(hello());
}

use std::{
    future::Future,
    ptr,
    task::{Context, Poll, RawWaker, RawWakerVTable, Waker},
};

fn drive_to_completion<F>(f: F) -> F::Output
where
    F: Future,
{
    let waker = my_waker();
    let mut context = Context::from_waker(&waker);

    let mut t = Box::pin(f);
    let t = t.as_mut();

    loop {
        match t.poll(&mut context) {
            Poll::Ready(v) => return v,
            Poll::Pending => panic!("This executor does not support futures that are not ready"),
        }
    }
}

type WakerData = *const ();

unsafe fn clone(_: WakerData) -> RawWaker {
    my_raw_waker()
}
unsafe fn wake(_: WakerData) {}
unsafe fn wake_by_ref(_: WakerData) {}
unsafe fn drop(_: WakerData) {}

static MY_VTABLE: RawWakerVTable = RawWakerVTable::new(clone, wake, wake_by_ref, drop);

fn my_raw_waker() -> RawWaker {
    RawWaker::new(ptr::null(), &MY_VTABLE)
}

fn my_waker() -> Waker {
    unsafe { Waker::from_raw(my_raw_waker()) }
}
Starting at Future::poll, we see we need a Pinned future and a Context. Context is created from a Waker which needs a RawWaker. A RawWaker needs a RawWakerVTable. We create all of those pieces in the simplest possible ways:
Since we aren't trying to support the Pending case, we never need to actually do anything for that case and can instead panic. This also means that the implementations of wake can be no-ops.
Since we aren't trying to be efficient, we don't need to store any data for our waker, so clone and drop can basically be no-ops as well.
The easiest way to pin the future is to Box it, but this isn't the most efficient possibility.
If you wanted to support Pending, the simplest extension is to have a busy loop, polling forever, as sketched below. A slightly more efficient solution is to have a global variable that indicates that someone has called wake and to block on that becoming true.
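As a rough sketch, the busy-loop variant only changes the polling loop inside drive_to_completion, yielding the thread between polls so it is not a pure spin:
let mut t = Box::pin(f);

loop {
    // re-borrow the pinned future on every iteration
    match t.as_mut().poll(&mut context) {
        Poll::Ready(v) => return v,
        Poll::Pending => std::thread::yield_now(),
    }
}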

Resources