How to convert a Future into a Stream?

I'm trying to use async_std to receive UDP datagrams from the network.
There is a UdpSocket that implements an async recv_from method. It returns a future, but I need an async_std::stream::Stream that yields UDP datagrams, because that is a better abstraction.
I've found tokio::net::UdpFramed, which does exactly what I need, but it is not available in current versions of tokio.
Generally speaking, the question is: how do I convert the futures returned by a given async function into a Stream?

For a single item, use FutureExt::into_stream:
use futures::prelude::*; // 0.3.1
fn outer() -> impl Stream<Item = i32> {
    inner().into_stream()
}

async fn inner() -> i32 {
    42
}
For a stream from a number of futures generated by a closure, use stream::unfold:
use futures::prelude::*; // 0.3.1
fn outer() -> impl Stream<Item = i32> {
    stream::unfold((), |()| async { Some((inner().await, ())) })
}

async fn inner() -> i32 {
    42
}
In your case, you can use stream::unfold:
use async_std::{io, net::UdpSocket}; // 1.4.0, features = ["attributes"]
use futures::prelude::*; // 0.3.1
fn read_many(s: UdpSocket) -> impl Stream<Item = io::Result<Vec<u8>>> {
    stream::unfold(s, |s| {
        async {
            let data = read_one(&s).await;
            Some((data, s))
        }
    })
}

async fn read_one(s: &UdpSocket) -> io::Result<Vec<u8>> {
    let mut data = vec![0; 1024];
    let (len, _) = s.recv_from(&mut data).await?;
    data.truncate(len);
    Ok(data)
}
#[async_std::main]
async fn main() -> io::Result<()> {
    let s = UdpSocket::bind("0.0.0.0:9876").await?;

    read_many(s)
        .for_each(|d| {
            async {
                match d {
                    Ok(d) => match std::str::from_utf8(&d) {
                        Ok(s) => println!("{}", s),
                        Err(_) => println!("{:x?}", d),
                    },
                    Err(e) => eprintln!("Error: {}", e),
                }
            }
        })
        .await;

    Ok(())
}

Generally speaking the question is how do I convert Futures from a given async function into a Stream?
There is FutureExt::into_stream, but don't let the name fool you; it is not a good fit for your situation.
There is a UdpSocket that implements async recv_from, this method returns a future but I need a async_std::stream::Stream that gives a stream of UDP datagrams because it is a better abstraction.
It is not necessarily a better abstraction here.
Specifically, async-std's UdpSocket::recv_from returns a future that has an output type of (usize, SocketAddr): the size of the data received and the peer address. If you were to use into_stream to convert it to a stream, it would give you just that, not the data received.
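To make that concrete, here is a small illustration-only sketch (the recv_once name is made up) of what wrapping a single recv_from call with into_stream would look like; the one item it yields is the (length, address) pair, never the received bytes:
use async_std::net::UdpSocket;
use futures::prelude::*; // 0.3
use std::{io, net::SocketAddr};

// A one-item stream whose item is the future's output, not the datagram payload.
fn recv_once<'a>(
    socket: &'a UdpSocket,
    buf: &'a mut [u8],
) -> impl Stream<Item = io::Result<(usize, SocketAddr)>> + 'a {
    socket.recv_from(buf).into_stream()
}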
I've found tokio::net::UdpFramed that does exactly what I need but it is not available in current versions of tokio.
It has been moved to the tokio-util crate. Unfortunately, you can't (easily) use that either: it requires a tokio::net::UdpSocket, which is not the same as async_std::net::UdpSocket.
You can, of course, use futures utility functions such as futures::stream::poll_fn or futures::stream::unfold to give UdpSocket::recv_from a futures::stream::Stream facade, but then what will you do with that? If you end up calling StreamExt::next to poll a value out of it, you could have used recv_from directly.
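For instance, reusing the read_many and read_one helpers from the answer above (a sketch, assuming they are in scope), pulling one datagram out of the Stream facade does the same work as awaiting the plain async function:
use async_std::{io, net::UdpSocket};
use futures::prelude::*; // 0.3

// One datagram via the Stream facade; pin_mut is needed because the
// unfold-based stream is not Unpin.
async fn via_stream(s: UdpSocket) -> Option<io::Result<Vec<u8>>> {
    let stream = read_many(s);
    futures::pin_mut!(stream);
    stream.next().await
}

// The same datagram by awaiting recv_from (via read_one) directly.
async fn directly(s: &UdpSocket) -> io::Result<Vec<u8>> {
    read_one(s).await
}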
It is only necessary to reach for Stream if some API you are using requires a Stream input, such as rusoto:
Is it possible to create a Stream from a File rather than loading the file contents into memory?

Related

Is it possible to implement a feature like Java's CompletableFuture::complete in Rust?

I'm a beginner in Rust, and I'm trying to use Rust's asynchronous programming.
In my scenario, I want to create an empty Future and complete it in another thread after a complex multi-round scheduling process. Java's CompletableFuture::complete meets my needs very well.
I have tried to find a Rust equivalent, but haven't found one yet.
Is it possible to do it in Rust?
I understand from the comments below that using a channel for this is more in line with Rust's design.
My scenario is a hierarchical scheduling executor.
For example, Task1 will be split into several Drivers; each Driver will use multiple threads (a rayon thread pool) to do some computation work, and one driver's state change will trigger the execution of the next driver. The result of the whole task is the last driver's output, and the intermediate drivers have no output. That is to say, my async function cannot get the result from one spawned task directly, so I need a shared stack variable or a channel to transfer the result.
So what I really want is this: the last driver, which is executed in a rayon thread, can get a channel's tx by its identity without storing it (to simplify the state change process).
I found that the tx and rx of oneshot cannot be copied and are not thread safe, and the send method of tx needs ownership. So I can't store the tx in the main thread and let the last driver find its tx by identity. But I can use mpsc to do that; I wrote 2 demos and pasted them into the body of the question, but I have to create the mpsc channel with capacity 1 and close it manually.
I wrote 2 demos, as below. I wonder if this is an appropriate and efficient use of mpsc?
Version implemented using oneshot, which cannot work:
#[tokio::test]
pub async fn test_async() -> Result<()> {
    let mut executor = Executor::new();
    let res1 = executor.run(1).await?;
    let res2 = executor.run(2).await?;
    println!("res1 {}, res2 {}", res1, res2);
    Ok(())
}

struct Executor {
    pub pool: ThreadPool,
    pub txs: Arc<DashMap<i32, RwLock<oneshot::Sender<i32>>>>,
}

impl Executor {
    pub fn new() -> Self {
        Executor {
            pool: ThreadPoolBuilder::new().num_threads(10).build().unwrap(),
            txs: Arc::new(DashMap::new()),
        }
    }

    pub async fn run(&mut self, index: i32) -> Result<i32> {
        let (tx, rx) = oneshot::channel();
        self.txs.insert(index, RwLock::new(tx));
        let txs_clone = self.txs.clone();
        self.pool.spawn(move || {
            let spawn_tx = txs_clone.get(&index).unwrap();
            let guard = block_on(spawn_tx.read());
            // cannot work: send needs ownership, so it would move out of the guard
            guard.send(index);
        });
        let res = rx.await;
        return Ok(res.unwrap());
    }
}
Version implemented using mpsc, which works, though I'm not sure about its performance:
#[tokio::test]
pub async fn test_async() -> Result<()> {
    let mut executor = Executor::new();
    let res1 = executor.run(1).await?;
    let res2 = executor.run(2).await?;
    println!("res1 {}, res2 {}", res1, res2);
    // close channels after the tasks have finished
    executor.close(1);
    executor.close(2);
    Ok(())
}

struct Executor {
    pub pool: ThreadPool,
    pub txs: Arc<DashMap<i32, RwLock<mpsc::Sender<i32>>>>,
}

impl Executor {
    pub fn new() -> Self {
        Executor {
            pool: ThreadPoolBuilder::new().num_threads(10).build().unwrap(),
            txs: Arc::new(DashMap::new()),
        }
    }

    pub fn close(&mut self, index: i32) {
        self.txs.remove(&index);
    }

    pub async fn run(&mut self, index: i32) -> Result<i32> {
        let (tx, mut rx) = mpsc::channel(1);
        self.txs.insert(index, RwLock::new(tx));
        let txs_clone = self.txs.clone();
        self.pool.spawn(move || {
            let spawn_tx = txs_clone.get(&index).unwrap();
            let guard = block_on(spawn_tx.value().read());
            block_on(guard.deref().send(index));
        });
        // 0 mocks an invalid value
        let mut res: i32 = 0;
        while let Some(data) = rx.recv().await {
            println!("recv data {}", data);
            res = data;
            break;
        }
        return Ok(res);
    }
}
Disclaimer: It's really hard to picture what you are attempting to achieve, because the examples provided are trivial to solve, with no justification for the added complexity (DashMap). As such, this answer will be progressive, though it will remain focused on solving the problem you demonstrated you had, and not necessarily the problem you're thinking of... as I have no crystal ball.
We'll be using the following Result type in the examples:
use std::error::Error;

type Result<T> = std::result::Result<T, Box<dyn Error + Send + Sync + 'static>>;
Serial execution
The simplest way to execute a task, is to do so right here, right now.
use std::future::Future;

impl Executor {
    pub async fn run<F, Fut>(&self, task: F) -> Result<i32>
    where
        F: FnOnce() -> Fut,
        Fut: Future<Output = Result<i32>>,
    {
        task().await
    }
}
Async execution - built-in
When the execution of a task may involve heavy-weight calculations, it may be beneficial to execute it on a background thread.
Whichever runtime you are using probably supports this functionality, I'll demonstrate with tokio:
impl Executor {
    pub async fn run<F>(&self, task: F) -> Result<i32>
    where
        F: FnOnce() -> Result<i32> + Send + 'static,
    {
        Ok(tokio::task::spawn_blocking(task).await??)
    }
}
Async execution - one-shot
If you wish to have more control on the number of CPU-bound threads, either to limit them, or to partition the CPUs of the machine for different needs, then the async runtime may not be configurable enough and you may prefer to use a thread-pool instead.
In this case, synchronization back with the runtime can be achieved via channels, the simplest of which being the oneshot channel.
impl Executor {
    pub async fn run<F>(&self, task: F) -> Result<i32>
    where
        F: FnOnce() -> Result<i32> + Send + 'static,
    {
        let (tx, rx) = oneshot::channel();

        self.pool.spawn(move || {
            let result = task();
            // Decide on how to handle the fact that nobody will read the result.
            let _ = tx.send(result);
        });

        Ok(rx.await??)
    }
}
Note that in all of the above solutions, task remains agnostic as to how it's executed. This is a property you should strive for, as it makes it easier to change the way execution is handled in the future by more neatly separating the two concepts.
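As a purely hypothetical usage sketch, assuming the one-shot variant above together with struct Executor { pool: rayon::ThreadPool } and the Result alias defined earlier, the caller simply hands run a plain closure:
#[tokio::main]
async fn main() -> Result<()> {
    let executor = Executor {
        pool: rayon::ThreadPoolBuilder::new().num_threads(4).build()?,
    };

    // The closure is ordinary synchronous code; it neither knows nor cares
    // that it will run on the rayon pool.
    let answer = executor.run(|| Ok(21 * 2)).await?;
    println!("answer = {}", answer);
    Ok(())
}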

How to deal with non-Send futures in a Tokio spawn context?

Tokio's spawn can only work with a Send future. This makes the following code invalid:
async fn async_foo(v: i32) {}

async fn async_computation() -> Result<i32, Box<dyn std::error::Error>> {
    Ok(1)
}

async fn async_start() {
    match async_computation().await {
        Ok(v) => async_foo(v).await,
        _ => unimplemented!(),
    };
}

#[tokio::main]
async fn main() {
    tokio::spawn(async move {
        async_start().await;
    });
}
The error is:
future cannot be sent between threads safely
the trait `Send` is not implemented for `dyn std::error::Error`
If I understand correctly: because async_foo(v).await might yield, Rust internally has to save all the surrounding context, which might later be resumed on a different thread; hence the result of async_computation().await must be Send, which dyn std::error::Error is not.
This could be mitigated if the non-Send type can be dropped before the .await, such as:
async fn async_start() {
    let result;
    match async_computation().await {
        Ok(v) => result = v,
        _ => return,
    };
    async_foo(result).await;
}
However, once another .await is needed before the non-Send type is dropped, the workarounds become more and more awkward.
What is a good practice for this, especially when the non-Send type is used for generic error handling (the Box<dyn std::error::Error>)? Is there a better type for errors that common IO operations implement (async or not) in Rust? Or is there a better way to group nested async calls?
Most errors are Send, so you can just change the return type to:
Box<dyn std::error::Error + Send>
It's also common to have + Sync.
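A minimal sketch of that fix applied to the code from the question; the only change is the + Send + Sync bound on the boxed error, which keeps the async_start future itself Send so it can be spawned:
use std::error::Error;

async fn async_foo(_v: i32) {}

// Boxing the error as `dyn Error + Send + Sync` keeps the whole future Send.
async fn async_computation() -> Result<i32, Box<dyn Error + Send + Sync>> {
    Ok(1)
}

async fn async_start() {
    match async_computation().await {
        Ok(v) => async_foo(v).await,
        _ => unimplemented!(),
    };
}

#[tokio::main]
async fn main() {
    tokio::spawn(async move {
        async_start().await;
    })
    .await
    .unwrap();
}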

How to extract values from async functions to a non-async one? [duplicate]

I am trying to use hyper to grab the content of an HTML page and would like to synchronously return the output of a future. I realized I could have picked a better example since synchronous HTTP requests already exist, but I am more interested in understanding whether we could return a value from an async calculation.
extern crate futures;
extern crate hyper;
extern crate hyper_tls;
extern crate tokio;

use futures::{future, Future, Stream};
use hyper::Client;
use hyper::Uri;
use hyper_tls::HttpsConnector;
use std::str;

fn scrap() -> Result<String, String> {
    let scraped_content = future::lazy(|| {
        let https = HttpsConnector::new(4).unwrap();
        let client = Client::builder().build::<_, hyper::Body>(https);
        client
            .get("https://hyper.rs".parse::<Uri>().unwrap())
            .and_then(|res| {
                res.into_body().concat2().and_then(|body| {
                    let s_body: String = str::from_utf8(&body).unwrap().to_string();
                    futures::future::ok(s_body)
                })
            }).map_err(|err| format!("Error scraping web page: {:?}", &err))
    });
    scraped_content.wait()
}

fn read() {
    let scraped_content = future::lazy(|| {
        let https = HttpsConnector::new(4).unwrap();
        let client = Client::builder().build::<_, hyper::Body>(https);
        client
            .get("https://hyper.rs".parse::<Uri>().unwrap())
            .and_then(|res| {
                res.into_body().concat2().and_then(|body| {
                    let s_body: String = str::from_utf8(&body).unwrap().to_string();
                    println!("Reading body: {}", s_body);
                    Ok(())
                })
            }).map_err(|err| {
                println!("Error reading webpage: {:?}", &err);
            })
    });
    tokio::run(scraped_content);
}

fn main() {
    read();
    let content = scrap();
    println!("Content = {:?}", &content);
}
The example compiles and the call to read() succeeds, but the call to scrap() panics with the following error message:
Content = Err("Error scraping web page: Error { kind: Execute, cause: None }")
I understand that I failed to launch the task properly before calling .wait() on the future but I couldn't find how to properly do it, assuming it's even possible.
Standard library futures
Let's use this as our minimal, reproducible example:
async fn example() -> i32 {
    42
}
Call executor::block_on:
use futures::executor; // 0.3.1
fn main() {
    let v = executor::block_on(example());
    println!("{}", v);
}
Tokio
Use the tokio::main attribute on any function (not just main!) to convert it from an asynchronous function to a synchronous one:
use tokio; // 0.3.5
#[tokio::main]
async fn main() {
    let v = example().await;
    println!("{}", v);
}
tokio::main is a macro that transforms this
#[tokio::main]
async fn main() {}
Into this:
fn main() {
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap()
        .block_on(async { {} })
}
This uses Runtime::block_on under the hood, so you can also write this as:
use tokio::runtime::Runtime; // 0.3.5
fn main() {
    let v = Runtime::new().unwrap().block_on(example());
    println!("{}", v);
}
For tests, you can use tokio::test.
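For example, a minimal sketch of such a test, assuming the example function above is in scope:
#[tokio::test]
async fn example_returns_42() {
    assert_eq!(example().await, 42);
}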
async-std
Use the async_std::main attribute on the main function to convert it from an asynchronous function to a synchronous one:
use async_std; // 1.6.5, features = ["attributes"]
#[async_std::main]
async fn main() {
    let v = example().await;
    println!("{}", v);
}
For tests, you can use async_std::test.
Futures 0.1
Let's use this as our minimal, reproducible example:
use futures::{future, Future}; // 0.1.27
fn example() -> impl Future<Item = i32, Error = ()> {
    future::ok(42)
}
For simple cases, you only need to call wait:
fn main() {
    let s = example().wait();
    println!("{:?}", s);
}
However, this comes with a pretty severe warning:
This method is not appropriate to call on event loops or similar I/O situations because it will prevent the event loop from making progress (this blocks the thread). This method should only be called when it's guaranteed that the blocking work associated with this future will be completed by another thread.
Tokio
If you are using Tokio 0.1, you should use Tokio's Runtime::block_on:
use tokio; // 0.1.21
fn main() {
    let mut runtime = tokio::runtime::Runtime::new().expect("Unable to create a runtime");
    let s = runtime.block_on(example());
    println!("{:?}", s);
}
If you peek at the implementation of block_on, it actually sends the future's result down a channel and then calls wait on that channel! This is fine because Tokio guarantees to run the future to completion.
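Conceptually, that looks something like the following rough sketch (this is not the actual Tokio source, just an illustration of the idea):
use futures::{sync::oneshot, Future}; // 0.1
use tokio; // 0.1

fn block_on_sketch<F>(
    runtime: &mut tokio::runtime::Runtime,
    future: F,
) -> Result<F::Item, F::Error>
where
    F: Future + Send + 'static,
    F::Item: Send + 'static,
    F::Error: Send + 'static,
{
    let (tx, rx) = oneshot::channel();

    // Run the future on the runtime, forwarding its result through the channel.
    runtime.spawn(future.then(move |result| {
        let _ = tx.send(result);
        Ok::<(), ()>(())
    }));

    // Blocking here is fine: the runtime's worker threads drive the future.
    rx.wait().expect("the future was dropped before completing")
}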
See also:
How can I efficiently extract the first element of a futures::Stream in a blocking manner?
As this is the top result that comes up in search engines for the query "How to call async from sync in Rust", I decided to share my solution here. I think it might be useful.
As #Shepmaster mentioned, back in version 0.1 the futures crate had a handy .wait() method that could be used to call an async function from a sync one. This must-have method, however, was removed from later versions of the crate.
Luckily, it's not that hard to re-implement it:
trait Block {
    fn wait(self) -> <Self as futures::Future>::Output
    where
        Self: Sized,
        Self: futures::Future,
    {
        futures::executor::block_on(self)
    }
}

impl<F, T> Block for F where F: futures::Future<Output = T> {}
After that, you can just do the following:
async fn example() -> i32 {
    42
}

fn main() {
    let s = example().wait();
    println!("{:?}", s);
}
Beware that this comes with all the caveats of the original .wait(), explained in #Shepmaster's answer.
This works for me using tokio:
tokio::runtime::Runtime::new()?.block_on(fooAsyncFunction())?;
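Expanded into a complete program for illustration (foo_async_function here is a hypothetical stand-in for whatever async fn you need to call):
use tokio::runtime::Runtime;

async fn foo_async_function() -> Result<i32, Box<dyn std::error::Error>> {
    Ok(42)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build a runtime and drive the future to completion on this thread.
    let value = Runtime::new()?.block_on(foo_async_function())?;
    println!("{}", value);
    Ok(())
}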

Is there a way to do recursive async calls on Rust without Box?

Recursive calls with a retry count embedded in the argument are good for the mental model of the project because you don't have to keep thinking about the state of the object. The retry_count is passed in every call. Here's a simple implementation:
use std::future::Future;
use futures::future::{BoxFuture, FutureExt};
fn do_or_fail() -> std::result::Result<(), ()> {
    Ok(())
}

fn do_something<'a>(
    retry_count: u32
) -> BoxFuture<'a, std::result::Result<(), ()>> {
    async move {
        match do_or_fail() {
            Ok(_) => Ok(()),
            Err(_) => do_something(retry_count - 1).await
        }
    }.boxed()
}

fn main() {
    do_something(3);
}
The problem is that this requires a dynamic allocation on every call so it can return the BoxFuture. This is because of how async is implemented in Rust: the compiler generates a state machine for each async fn, and a recursive call would make that state machine contain itself, so its size cannot be known at compile time.
What would be good ways to overcome dynamic allocation in recursive async calls?
While it technically doesn't overcome the problem (it just hides it), I can recommend the crate async_recursion.
https://crates.io/crates/async-recursion
use async_recursion::async_recursion;
use tokio;
fn do_or_fail() -> std::result::Result<(), ()> {
    Ok(())
}

#[async_recursion]
async fn do_something(retry_count: u32) -> std::result::Result<(), ()> {
    match do_or_fail() {
        Ok(_) => Ok(()),
        Err(_) => do_something(retry_count - 1).await,
    }
}

#[tokio::main]
async fn main() {
    do_something(3).await.unwrap()
}
I don't think you can actually overcome dynamic allocation in recursive async calls; that's just how async works in general.

Trouble Pinning Async Fn Object Using Tokio

I seem unable to use pnet's datalink library in any asynchronous way. The trouble appears to relate to pinning. This snippet is adapted from pnet's main example:
async fn ethernet_channel(i: NetworkInterface) {
    // Create Ethernet channel:
    let (mut tx, mut rx) = match datalink::channel(&i, Default::default()) {
        Ok(Ethernet(tx, rx)) => (tx, rx),
        _ => panic!("Error creating channel")
    };
    loop { // handle inbound Ethernet packets forever
        tokio::select! {
            Ok(packet) = rx.next() => {}
            // TODO: include some arm like an mpsc oneshot to break out, like Tokio's examples
        }
    }
}
This leads to a compiler error: error[E0599]: no method named `poll` found for struct `Pin<&mut std::result::Result<&[u8], std::io::Error>>` in the current scope
The Tokio tutorial covers this exact scenario and includes this remark: "to .await a reference, the value being referenced must be pinned or implement Unpin."
Combined with knowing the return value of the rx.next() function is the Result<&[u8], Error> mentioned in the error, I gather the packet byte array is the culprit "value being referenced" that must be pinned. I've tried many combinations of tokio::pin!(), including what makes the most sense to me based on Tokio's same example, to no avail:
#[tokio::main]
pub async fn main() -> Result<(), Box<dyn Error>> {
    for interface in datalink::interfaces() {
        let operation = ethernet_channel(interface);
        tokio::pin!(operation);
        tokio::select! {
            _ = &mut operation => {}
        }
    }
}
I've also tried tokio::pin!() within my ethernet_channel() function, before and after datalink::channel(), on both the NetworkInterface and rx. I always end up with the same error as above. Any guidance appreciated; I'm pulling my hair out.

Resources