I am prototyping a channel-based system, and when converting the code to use async tasks I ran into an error saying that &mut self needs to have an appropriate lifetime.
I have tried changing &mut self to &'static self, but that does not work. I have also tried wrapping the entire body in an async block returning Future<Output = ()> + 'static, which did not work either.
Here is the code:
struct RequestManager {
connection_state: ConnectionState,
backoff: u64,
exponent: u32,
maximum: u64,
}
impl RequestManager {
async fn run(&mut self, mut feed_queue: Receiver<FeedItem>) {
let (mut fetch_result_sender, mut fetch_result_receiver) = channel(5);
let (mut fetch_request_sender, mut fetch_request_receiver) = channel::<FeedItem>(5);
let request_sender = mpsc::Sender::clone(&fetch_result_sender);
tokio::spawn(async move {
loop {
match self.connection_state {
ConnectionState::Closed => {
while let Some(feed) = fetch_request_receiver.recv().await {
let mut request_sender = mpsc::Sender::clone(&request_sender);
tokio::spawn(async move {
let response = make_request(&feed.feed_address).await;
if let Err(_) = response {
self.connection_state =
ConnectionState::HalfOpen(feed.feed_address);
} else {
let response = read_response_body(response.unwrap()).await;
let result = FetchResult { body: response };
if let Err(e) = request_sender.send(result).await {
eprintln!("could not send fetch result: {}", e);
}
}
});
}
}
ConnectionState::HalfOpen(url) => {
let response = make_request(&url).await;
if let Err(_) = response {
self.connection_state = ConnectionState::Open(url);
} else {
let response = read_response_body(response.unwrap()).await;
let result = FetchResult { body: response };
// TODO: sends to task/feedService
self.connection_state = ConnectionState::Closed;
if let Err(e) = fetch_result_sender.send(result).await {
eprintln!("could not send fetch result: {}", e);
}
}
}
ConnectionState::Open(url) => {
let new_backoff = calculate_backoff();
delay_for(Duration::from_secs(new_backoff)).await;
self.connection_state = ConnectionState::HalfOpen(url)
}
}
}
});
}
}
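For context, the lifetime the compiler asks for comes from tokio::spawn, which requires the spawned future to be 'static and therefore cannot hold a borrow of self. One common workaround is to have run take self by value, so the spawned task owns its state. Below is a minimal sketch of that signature change only; it is not the original code, and the fields and channel types are trimmed down:
use tokio::sync::mpsc::{channel, Receiver};

enum ConnectionState {
    Closed,
    HalfOpen(String),
}

struct RequestManager {
    connection_state: ConnectionState,
}

impl RequestManager {
    // `mut self` instead of `&mut self`: the spawned task takes ownership,
    // which satisfies the 'static bound on tokio::spawn without any borrow.
    async fn run(mut self, mut feed_queue: Receiver<String>) {
        tokio::spawn(async move {
            while let Some(feed_address) = feed_queue.recv().await {
                // `self` is owned by this task, so mutating it is fine.
                self.connection_state = ConnectionState::HalfOpen(feed_address);
            }
        });
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = channel(5);
    RequestManager { connection_state: ConnectionState::Closed }.run(rx).await;
    tx.send("http://example.com/feed".to_string()).await.unwrap();
}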
Related
I have a struct G like this
struct G { /* some member */ }
impl G {
async fn ref_foo(&self) { /* some code uses G's member */ }
async fn mut_foo(&mut self) { /* some code modifies G's member */ }
}
which is responsible for handling requests from an mpsc::Receiver, like this -- which won't compile:
async fn run_loop(mut rx: impl Stream<Item = Task> + FusedStream + Unpin) {
let mut g = G {};
let mut QS = FuturesUnordered::new();
loop {
select! {
t = rx.select_next_some() => match t {
TR {..} => QS.push(g.ref_foo()),
TM {..} => QS.push(g.mut_foo()),
},
_ = QS.select_next_some() => {},
}
}
}
The above code won't compile due to multiple mutable references to g.
Target:
What I want is for the loop to run any number of ref_foo tasks in parallel, and when it needs to run a mut_foo task, to wait until every ref_foo task has finished, run the mut_foo task, and then continue running other tasks as usual.
/ g.ref_foo() \ / ...
| g.ref_foo() | | ...
g.mut_foo() => < g.ref_foo() > => g.mut_foo() => g.mut_foo() => < ...
| g.ref_foo() | | ...
\ g.ref_foo() / \ ...
Additional Information:
I used to move the implementation of mut_foo into the select loop and remove async from g.mut_foo(), so that no mutable reference would be used in the stream QS.
But that implementation is really cumbersome and undoubtedly breaks G's design.
Just now, I came up with another implementation by making a wrapper:
async fn run_task(mut g: G, t: Task) -> G {
match t {
TR {..} => g.ref_foo().await,
TM {..} => g.mut_foo().await,
};
g
}
and in the select loop:
async fn run_loop(mut rx: impl Stream<Item = Task> + FusedStream + Unpin) {
let g0 = G {};
let mut QS = FuturesUnordered::new();
let mut getter = FuturesUnordered::new();
getter.push(ready(g0));
loop {
select! {
t = rx.select_next_some() => {
let mut g = getter.select_next_some().await;
QS.push(run_task(g, t));
},
mut g = QS.select_next_some() => getter.push(ready(g)),
}
}
}
This one compiles, but it's not as concurrent as it could be: in this implementation, the ref_foo tasks also run sequentially.
Questions:
1. Is there more material I should study to solve this problem? The techniques I'm using come from the rust async book.
2. Do I HAVE TO use RefCell to solve this problem? IMHO, this should be a trivial problem that can be solved without breaking Rust's borrowing rules (by reaching for RefCell).
3. Can I change my wrapper run_task and the select loop so that ref_foo runs in parallel? I have trouble with this because G keeps flowing getter => QS => getter => ..., so there is no long-lived G instance, and I cannot figure out where to store it.
Appending some of my thoughts:
Since mut_foo cannot run in parallel, I am trying to solve this by removing the async keyword from mut_foo -- with little progress. The core problem is that an immutable reference to G is needed to run ref_foo in parallel, but I have to get rid of all those immutable references to G when it is time for mut_foo. That fact does not change whether or not mut_foo is async (or whether or not mut_foo returns a reference to G).
I've solved Question 3 with a lot of if statements. I hope there is a more elegant implementation, and I'd really appreciate any learning material, as stated in Question 1.
here's the full code that compiles (simplified):
use tokio::runtime;
use std::thread;
use std::time::Duration;
use futures::{
select, StreamExt, SinkExt,
future::{ready},
stream::{FusedStream, FuturesUnordered, Stream},
};
struct G;
impl G {
async fn ref_foo(&self) { println!("ref_foo +++"); tokio::time::sleep(Duration::from_millis(500)).await; println!("ref_foo ---"); }
async fn mut_foo(&mut self) { println!("mut_foo +++"); tokio::time::sleep(Duration::from_millis(500)).await; println!("mut_foo ---"); }
}
#[derive(Clone)]
enum Task {
TR,
TM,
}
// wrappers
async fn run_ref_task(g: &G, task: Task) {
match task {
Task::TR => g.ref_foo().await,
_ => {},
};
}
async fn run_mut_task(mut g: G, task: Task) -> G {
match task {
Task::TM => g.mut_foo().await,
_ => {},
};
g
}
async fn run_loop(mut rx: impl Stream<Item = Task> + FusedStream + Unpin) {
let g0 = G;
let mut getter = FuturesUnordered::new();
getter.push(ready(g0));
// the following streams stores only `ready(task)`
let mut mut_tasks = FuturesUnordered::new(); // for tasks that's scheduled in this loop
let mut ref_tasks = FuturesUnordered::new();
let mut mut_delay = FuturesUnordered::new(); // for tasks that's scheduled in next loop
let mut ref_delay = FuturesUnordered::new();
loop {
println!("============ avoid idle loops ============");
let g = getter.select_next_some().await;
{
let mut queries = FuturesUnordered::new(); // where we schedule ref_foo tasks
loop {
println!("------------ avoid idle ref_task loops ------------");
select! {
task = rx.select_next_some() => {
match &task {
Task::TR => ref_delay.push(ready(task)),
Task::TM => mut_tasks.push(ready(task)),
};
if mut_delay.is_empty() && ref_tasks.is_empty() && queries.is_empty() { break; }
},
task = mut_delay.select_next_some() => {
mut_tasks.push(ready(task));
if mut_delay.is_empty() && ref_tasks.is_empty() && queries.is_empty() { break; }
}
task = ref_tasks.select_next_some() => {
queries.push(run_ref_task(&g, task));
}
_ = queries.select_next_some() => {
if mut_delay.is_empty() && ref_tasks.is_empty() && queries.is_empty() { break; }
},
}
}
}
getter.push(ready(g));
{
let mut queries = FuturesUnordered::new(); // where we schedule mut_foo tasks
loop {
println!("------------ avoid idle mut_task loops ------------");
select! {
task = rx.select_next_some() => {
match &task {
Task::TR => ref_tasks.push(ready(task)),
Task::TM => mut_delay.push(ready(task)),
};
if ref_delay.is_empty() && mut_tasks.is_empty() && queries.is_empty() { break; }
},
task = ref_delay.select_next_some() => {
ref_tasks.push(ready(task));
if ref_delay.is_empty() && mut_tasks.is_empty() && queries.is_empty() { break; }
}
g = getter.select_next_some() => {
if let Some(task) = mut_tasks.next().await {
queries.push(run_mut_task(g, task));
} else {
getter.push(ready(g));
if ref_delay.is_empty() && queries.is_empty() { break; }
}
}
g = queries.select_next_some() => {
getter.push(ready(g));
if ref_delay.is_empty() && mut_tasks.is_empty() && queries.is_empty() { break; }
}
}
}
}
}
}
fn main() {
let (mut tx, rx) = futures::channel::mpsc::channel(10000);
let th = thread::spawn(move || thread_main(rx));
let tasks = vec![Task::TR, Task::TR, Task::TM, Task::TM, Task::TR, Task::TR, Task::TR, Task::TM, Task::TM];
let rt = runtime::Builder::new_multi_thread().enable_time().build().unwrap();
rt.block_on(async {
loop {
for task in tasks.clone() {
tx.send(task).await.expect("");
}
tokio::time::sleep(Duration::from_secs(10)).await;
}
});
th.join().expect("");
}
fn thread_main(rx: futures::channel::mpsc::Receiver<Task>) {
let rt = runtime::Builder::new_multi_thread().enable_time().build().unwrap();
rt.block_on(async {
run_loop(rx).await;
});
}
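For comparison with the hand-rolled scheduling above: the read-parallel / write-exclusive behaviour described in the Target section is essentially what an async reader-writer lock gives you. A minimal sketch using tokio::sync::RwLock (my assumption here, not something taken from the async book): read guards are shared, so ref_foo calls run concurrently, while a write guard waits for all readers and then runs mut_foo exclusively.
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::RwLock;

struct G;

impl G {
    async fn ref_foo(&self) {
        println!("ref_foo");
        tokio::time::sleep(Duration::from_millis(500)).await;
    }
    async fn mut_foo(&mut self) {
        println!("mut_foo");
        tokio::time::sleep(Duration::from_millis(500)).await;
    }
}

#[tokio::main]
async fn main() {
    let g = Arc::new(RwLock::new(G));

    // Read guards are shared, so any number of ref_foo calls run concurrently.
    let readers: Vec<_> = (0..4)
        .map(|_| {
            let g = Arc::clone(&g);
            tokio::spawn(async move { g.read().await.ref_foo().await })
        })
        .collect();
    for r in readers {
        r.await.unwrap();
    }

    // A write guard is exclusive: it waits until all read guards are dropped,
    // and no new readers are admitted while the writer runs.
    g.write().await.mut_foo().await;
}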
My first Rust program compiles and runs:
use structopt::StructOpt;
use pcap::{Device,Capture};
use std::process::exit;
#[derive(StructOpt)]
struct Cli {
/// the capture device
device: String,
}
fn main() {
let devices = Device::list();
let args = Cli::from_args();
let mut optdev :Option<Device> = None;
for d in devices.unwrap() {
//println!("device: {:?}", d);
if d.name == args.device {
optdev = Some(d);
}
}
let dev = match optdev {
None => {
println!("Device {} not found.", args.device);
exit(1);
},
Some(dev) => dev,
};
let mut cap = Capture::from_device(dev).unwrap()
.promisc(true)
.snaplen(100)
.open().unwrap();
while let Ok(packet) = cap.next() {
println!("received packet! {:?}", packet);
}
}
I have some complex code which iterates through the Vec of devices, testing each one's .name property against args.device.
I'm guessing that there is a method of 'looking-up' an entry in a Vec, such that I can replace all the optdev lines with something like:
let dev = match devices.unwrap().look_up(.name == args.device) {
None => {
println!("Device {} not found.", args.device);
exit(1);
},
Some(dev) => dev,
};
What is the syntax for such a look_up()?
Or is there a more idiomatic way of doing this?
What is the syntax for such a look_up()?
Iterator::find. Since the operation is not specific to vectors (or slices), it doesn't live there, and is applicable to any iterator instead.
It'd look something like this:
let dev = match devices.unwrap().into_iter().find(|d| d.name == args.device) {
None => {
println!("Device {} not found.", args.device);
exit(1);
},
Some(dev) => dev,
};
or
let dev = if let Some(dev) = devices.unwrap().into_iter().find(|d| d.name == args.device) {
dev
} else {
println!("Device {} not found.", args.device);
exit(1);
};
(side-note: you may also want to use eprintln for, well, error reporting).
Though a somewhat cleaner error handling could be along the lines of (note: not tested so there might be semantic or syntactic mistakes):
use std::fmt;
use std::error::Error;
#[derive(Debug)]
struct NoDevice(String);
impl fmt::Display for NoDevice {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "Device {} not found", self.0)
}
}
impl Error for NoDevice {}
fn main() -> Result<(), Box<dyn Error>> {
let devices = Device::list()?;
let args = Cli::from_args();
let dev = devices.into_iter()
.find(|d| d.name == args.device)
.ok_or_else(|| NoDevice(args.device))?;
let mut cap = Capture::from_device(dev)?
.promisc(true)
.snaplen(100)
.open()?;
while let Ok(packet) = cap.next() {
println!("received packet! {:?}", packet);
}
Ok(())
}
I have a function which returns a Future. It accepts another function which accepts one argument and returns a Future. The second function can be implemented as a combinator chain passed into the first function. It looks like this:
use bb8::{Pool, RunError};
use bb8_postgres::PostgresConnectionManager;
use tokio_postgres::{error::Error, Client, NoTls};
#[derive(Clone)]
pub struct DataManager(Pool<PostgresConnectionManager<NoTls>>);
impl DataManager {
pub fn new(pool: Pool<PostgresConnectionManager<NoTls>>) -> Self {
Self(pool)
}
pub fn create_user(
&self,
reg_req: UserRequest,
) -> impl Future<Item = User, Error = RunError<Error>> {
let sql = "long and awesome sql";
let query = move |mut conn: Client| { // function which accepts one argument and returns Future
conn.prepare(sql).then(move |r| match r {
Ok(select) => {
let f = conn
.query(&select, &[&reg_req.email, &reg_req.password])
.collect()
.map(|mut rows| {
let row = rows.remove(0);
row.into()
})
.then(move |r| match r {
Ok(v) => Ok((v, conn)),
Err(e) => Err((e, conn)),
});
Either::A(f)
}
Err(e) => Either::B(future::err((e, conn))),
})
};
self.0.run(query) // function which returns Future and accepts another function
}
}
But I want to write the code of create_user as a struct implementing Future.
struct UserCreator(Pool<PostgresConnectionManager<NoTls>>, UserRequest);
impl UserCreator {
fn new(pool: Pool<PostgresConnectionManager<NoTls>>, reg_req: UserRequest) -> Self {
Self(pool, reg_req)
}
}
How do I implement Future for this struct so that it works like the first function? Please help me with an example.
For now I have tried to make it like this, but nothing is computed and execution always blocks.
impl Future for UserCreator {
type Item = User;
type Error = RunError<Error>;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
// Code which works like `DataManager.create_user`
let sql = "long and awesome sql";
let reg_req = &self.1;
let query = move |mut conn: Client| {
conn.prepare(sql).then(move |r| match r {
Ok(select) => {
let f = conn
.query(&select, &[&reg_req.email, &reg_req.password])
.collect()
.map(|mut rows| {
let row = rows.remove(0);
row.into()
})
.then(move |r| match r {
Ok(v) => Ok((v, conn)),
Err(e) => Err((e, conn)),
});
Either::A(f)
}
Err(e) => Either::B(future::err((e, conn))),
})
};
self.0.run(query).poll()
}
}
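Note that the poll above creates a fresh future with self.0.run(query) on every call and polls it once, so no single future is ever driven to completion. For what it's worth, here is a minimal, self-contained futures 0.1 sketch of the missing delegation (Delegating is an illustrative name, not from the original code): the inner future is created once, stored, and re-polled.
extern crate futures; // futures = "0.1"

use futures::{Future, Poll};

struct Delegating<F: Future> {
    // The future is built once and kept here, so every poll() call advances
    // the same state machine instead of starting the work from scratch.
    inner: F,
}

impl<F: Future> Future for Delegating<F> {
    type Item = F::Item;
    type Error = F::Error;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        self.inner.poll()
    }
}

fn main() {
    let fut = Delegating { inner: futures::future::ok::<u32, ()>(42) };
    assert_eq!(fut.wait(), Ok(42));
}
Applied to UserCreator, that would mean building self.0.run(query) once (for example, boxing it into a field) and having poll delegate to that stored future.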
I've been mucking with tokio for a few weeks in the pursuit of writing a protocol using tokio_uds. There are several issues with the following code:
framed.for_each is called over and over from a single response.
The socket only sends 1 real message, but the Decoder decodes the exact same event as many times as it can until it fills up the bounded channel.
Nothing is ever received over the channel (rx.for_each never prints anything), though it appears to be written until it fills up.
I need to use a UnixStream and not a UnixListener because there's some data I must put over the socket first to 'subscribe' to the service and let it know what to send.
use byteorder::{ByteOrder, LittleEndian};
use bytes::{Buf, BufMut, Bytes, BytesMut, IntoBuf};
use futures::prelude::*;
use futures::sync::mpsc::{self, Receiver, Sender};
use futures::Stream;
use tokio::prelude::*;
use tokio_codec::{Decoder, Encoder, FramedRead};
use tokio_uds::UnixStream;
fn subscribe(tx: Sender<event::Evt>, events: Vec<Event>) -> io::Result<()> {
let fut = UnixStream::connect(socket_path()?)
.and_then(move |stream| {
// some setup
tokio::io::write_all(stream, buf)
})
.and_then(|(stream, _buf)| {
let buf = [0_u8; 30]; // <i3-ipc (6 bytes)><len (4 bytes)><type (4 bytes)><{success:true} 16 bytes>
tokio::io::read_exact(stream, buf)
})
.and_then(|(stream, initial)| {
if &initial[0..6] != MAGIC.as_bytes() {
panic!("Magic str not received");
}
// decoding initial response and returning stream
future::ok(stream)
})
.and_then(move |stream| {
let framed = FramedRead::new(stream, EvtCodec);
let sender = framed
.for_each(move |evt| {
let tx = tx.clone();
tx.send(evt).wait(); // this line is called continuously until buffer fills
Ok(())
})
.map_err(|err| println!("{}", err));
tokio::spawn(sender);
Ok(())
})
.map(|_| ())
.map_err(|e| eprintln!("{:?}", e));
tokio::run(fut);
Ok(())
}
fn test_sub() -> io::Result<()> {
let (tx, rx) = mpsc::channel(5);
subscribe(tx, vec![Event::Window])?;
let fut = rx.for_each(|e: event::Evt| {
println!("received"); // never reaches
future::ok(())
});
tokio::spawn(fut);
Ok(())
}
My Decoder:
pub struct EvtCodec;
/// decoding: "<i3-ipc><payload len: u32><msg type: u32><payload>"
impl Decoder for EvtCodec {
type Item = event::Evt;
type Error = io::Error;
fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, io::Error> {
if src.len() > 14 {
if &src[0..6] != MAGIC.as_bytes() {
return Err(io::Error::new(
io::ErrorKind::Other,
format!("Expected 'i3-ipc' but received: {:?}", &src[0..6]),
));
}
let payload_len = LittleEndian::read_u32(&src[6..10]) as usize;
let evt_type = LittleEndian::read_u32(&src[10..14]);
dbg!(&src.len()); // 878
dbg!(payload_len); // 864
if src.len() < 14 + payload_len {
Ok(None)
} else {
let evt = decode_evt(evt_type, src[14..].as_mut().to_vec())?;
dbg!(&evt); // correctly prints out a well-formed event
Ok(Some(evt))
}
} else {
Ok(None)
}
}
}
I saw that you resolved your other issue, and I'd be really interested to see how you solved this problem. Here's how I fixed it on my TCP Tokio side project:
use byteorder::{ByteOrder, LittleEndian};
use bytes::{Buf, BufMut, Bytes, BytesMut, IntoBuf};
use futures::prelude::*;
use futures::sync::mpsc::{self, Receiver, Sender};
use futures::Stream;
use tokio::prelude::*;
use tokio_codec::{Decoder, Encoder, FramedRead};
use tokio_uds::UnixStream;
fn subscribe(tx: Sender<event::Evt>, rx: Receiver<event::Evt>, events: Vec<Event>) -> io::Result<()> {
let fut = UnixStream::connect(socket_path()?)
.and_then(move |stream| {
// some setup
tokio::io::write_all(stream, buf)
})
.and_then(|(stream, _buf)| {
let buf = [0_u8; 30]; // <i3-ipc (6 bytes)><len (4 bytes)><type (4 bytes)><{success:true} 16 bytes>
tokio::io::read_exact(stream, buf)
})
.and_then(|(stream, initial)| {
if &initial[0..6] != MAGIC.as_bytes() {
panic!("Magic str not received");
}
// decoding initial response and returning stream
future::ok(stream)
})
.and_then(move |stream| {
let framed = FramedRead::new(stream, EvtCodec);
let (writer, reader) = framed.split();
// Connect your framed reader to the channel
let sink = rx.forward(writer.sink_map_err(|_| ()));
tokio::spawn(sink.map(|_| ()));
let sender = reader
.for_each(move |evt| {
let tx = tx.clone();
tx.send(evt).wait(); // this line is called continuously until buffer fills
Ok(())
})
.map_err(|err| println!("{}", err));
tokio::spawn(sender);
Ok(())
})
.map(|_| ())
.map_err(|e| eprintln!("{:?}", e));
tokio::run(fut);
Ok(())
}
fn test_sub() -> io::Result<()> {
let (tx, rx) = mpsc::channel(5);
subscribe(tx, rx, vec![Event::Window])?;
let fut = rx.for_each(|e: event::Evt| {
println!("received"); // never reaches
future::ok(())
});
tokio::spawn(fut);
Ok(())
}
And the Decoder with the buffer clear:
pub struct EvtCodec;
/// decoding: "<i3-ipc><payload len: u32><msg type: u32><payload>"
impl Decoder for EvtCodec {
type Item = event::Evt;
type Error = io::Error;
fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, io::Error> {
if src.len() > 14 {
if &src[0..6] != MAGIC.as_bytes() {
return Err(io::Error::new(
io::ErrorKind::Other,
format!("Expected 'i3-ipc' but received: {:?}", &src[0..6]),
));
}
let payload_len = LittleEndian::read_u32(&src[6..10]) as usize;
let evt_type = LittleEndian::read_u32(&src[10..14]);
dbg!(&src.len()); // 878
dbg!(payload_len); // 864
if src.len() < 14 + payload_len {
Ok(None)
} else {
let evt = decode_evt(evt_type, src[14..].as_mut().to_vec())?;
dbg!(&evt); // correctly prints out a well-formed event
src.clear(); // Clears the buffer, so you don't have to keep decoding the same packet over and over.
Ok(Some(evt))
}
} else {
Ok(None)
}
}
}
Hope this helps!
EDIT:
According to a user on the Rust subreddit who commented after I included this solution in a blog post, src.clear() is probably the wrong answer here. I should instead be using src.advance(14 + payload_len), which consumes only the frame that was just decoded.
linking the reddit comment here
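For reference, the tail of decode would then look roughly like this: an untested sketch against the question's EvtCodec, reusing its decode_evt, evt_type and payload_len, and consuming exactly one frame per decoded event:
if src.len() < 14 + payload_len {
    Ok(None)
} else {
    // Decode only this frame's payload bytes...
    let evt = decode_evt(evt_type, src[14..14 + payload_len].to_vec())?;
    // ...then consume exactly one frame, so bytes of any following frame that
    // have already arrived stay in the buffer instead of being discarded.
    src.advance(14 + payload_len);
    Ok(Some(evt))
}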
I'm using Alamofire to execute a number of asynchronous requests concurrently, and SwiftyJSON to handle the response.
I need help making sure that appending to moviesByCategory occurs in order.
For example, the "top_rated" data response should be the first element appended to moviesByCategory, not "upcoming".
var moviesByCategory = [[JSON]]()
override func viewDidLoad() {
super.viewDidLoad()
let apiEndPoints = ["top_rated", "popular", "now_playing", "upcoming"]
let dispatchGroup = DispatchGroup()
for endPoint in apiEndPoints {
let endPointURL = URL(string: "https://api.themoviedb.org/3/movie/\(endPoint)?api_key=\(apiKey)&language=en-US&page=1")!
dispatchGroup.enter()
getMoviesFromEndPoint(url: endPointURL)
}
dispatchGroup.notify(queue: DispatchQueue.main) {
self.tableView.reloadData()
}
}
func getMoviesFromEndPoint(url: URL, group: dispatchGroup) {
Alamofire.request(url).responseData { response in
if let data = response.result.value {
let json = JSON(data: data)
self.moviesByCategory.append(json["results"].arrayValue)
}
}
}
The purpose of the DispatchGroup is to reload the UITableView once all requests have completed.
Any help with this would be tremendously appreciated. Please do point out where I am wrong.
Add a completion handler parameter to getMoviesFromEndPoint:
func getMoviesFromEndPoint(url: URL, completion: @escaping () -> Void) { ... }
and leave the group in the completion handler, after the network call has completed:
getMoviesFromEndPoint(url: endPointURL) {
dispatchGroup.leave()
}
Complete code:
override func viewDidLoad() {
super.viewDidLoad()
let apiEndPoints = ["top_rated", "popular", "now_playing", "upcoming"]
let dispatchGroup = DispatchGroup()
for endPoint in apiEndPoints {
let endPointURL = URL(string: "https://api.themoviedb.org/3/movie/\(endPoint)?api_key=\(apiKey)&language=en-US&page=1")!
dispatchGroup.enter()
getMoviesFromEndPoint(url: endPointURL) {
dispatchGroup.leave()
}
}
dispatchGroup.notify(queue: DispatchQueue.main) {
self.tableView.reloadData()
}
}
func getMoviesFromEndPoint(url: URL, completion: @escaping () -> Void) {
Alamofire.request(url).responseData { response in
if let data = response.result.value {
let json = JSON(data: data)
self.moviesByCategory.append(json["results"].arrayValue)
}
completion()
}
}