I'm working on a project to implement a distributed key-value store in Rust. I've written the server-side code using Tokio's asynchronous runtime. I'm running into an issue where my asynchronous code seems to block, so when I have multiple connections to the server only one TcpStream is processed. I'm new to writing async code, both in general and in Rust, but I thought that other streams would be accepted and processed if there was no activity on a given TCP stream.
Is my understanding of async wrong, or am I using Tokio incorrectly?
This is my entry point:
use std::error::Error;
use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use env_logger;
use log::{debug, info};
use structopt::StructOpt;
use tokio::net::TcpListener;
extern crate blue;
use blue::ipc::message;
use blue::store::args;
use blue::store::cluster::{Cluster, NodeRole};
use blue::store::deserialize::deserialize_store;
use blue::store::handler::handle_stream;
use blue::store::wal::WriteAheadLog;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();
let opt = args::Opt::from_args();
let addr = SocketAddr::from_str(format!("{}:{}", opt.host, opt.port).as_str())?;
let role = NodeRole::from_str(opt.role.as_str()).unwrap();
let leader_addr = match role {
NodeRole::Leader => addr,
NodeRole::Follower => SocketAddr::from_str(opt.follow.unwrap().as_str())?,
};
let wal_name = addr.to_string().replace(".", "").replace(":", "");
let wal_full_name = format!("wal{}.log", wal_name);
let wal_path = PathBuf::from(wal_full_name);
let mut wal = match wal_path.exists() {
true => {
info!("Existing WAL found");
WriteAheadLog::open(&wal_path)?
}
false => {
info!("Creating WAL");
WriteAheadLog::new(&wal_path)?
}
};
debug!("WAL: {:?}", wal);
let store_name = addr.to_string().replace(".", "").replace(":", "");
let store_pth = format!("{}.pb", store_name);
let store_path = Path::new(&store_pth);
let mut store = match store_path.exists() {
true => deserialize_store(store_path)?,
false => message::Store::default(),
};
let listener = TcpListener::bind(addr).await?;
let cluster = Cluster::new(addr, &role, leader_addr, &mut wal, &mut store).await?;
let store_path = Arc::new(store_path);
let store = Arc::new(Mutex::new(store));
let wal = Arc::new(Mutex::new(wal));
let cluster = Arc::new(Mutex::new(cluster));
info!("Blue launched. Waiting for incoming connection");
loop {
let (stream, addr) = listener.accept().await?;
info!("Incoming request from {}", addr);
let store = Arc::clone(&store);
let store_path = Arc::clone(&store_path);
let wal = Arc::clone(&wal);
let cluster = Arc::clone(&cluster);
handle_stream(stream, store, store_path, wal, cluster, &role).await?;
}
}
Below is my handler (handle_stream from the above). I excluded all the handlers in match input as I didn't think they were necessary to illustrate the point (the full code for that section is here: https://github.com/matthewmturner/Bradfield-Distributed-Systems/blob/main/blue/src/store/handler.rs if it helps).
Specifically, the point where it blocks is the line let input = async_read_message::<message::Request>(&mut stream).await;
This is where the server waits for communication from either a client or another server in the cluster. The behavior I currently see is that after a client connects to the server, the server doesn't receive any of the requests to add other nodes to the cluster - it only handles the client stream.
use std::io;
use std::net::{SocketAddr, TcpStream};
use std::path::Path;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use log::{debug, error, info};
use serde_json::json;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream as asyncTcpStream;
use super::super::ipc::message;
use super::super::ipc::message::request::Command;
use super::super::ipc::receiver::async_read_message;
use super::super::ipc::sender::{async_send_message, send_message};
use super::cluster::{Cluster, NodeRole};
use super::serialize::persist_store;
use super::wal::WriteAheadLog;
// TODO: Why isnt async working? I.e. connecting servers after client is connected stays on client stream.
pub async fn handle_stream<'a>(
mut stream: asyncTcpStream,
store: Arc<Mutex<message::Store>>,
store_path: Arc<&Path>,
wal: Arc<Mutex<WriteAheadLog<'a>>>,
cluster: Arc<Mutex<Cluster>>,
role: &NodeRole,
) -> io::Result<()> {
loop {
info!("Handling stream: {:?}", stream);
let input = async_read_message::<message::Request>(&mut stream).await;
debug!("Input: {:?}", input);
match input {
...
}
}
}
This is the code for async_read_message:
pub async fn async_read_message<M: Message + Default>(
stream: &mut asyncTcpStream,
) -> io::Result<M> {
let mut len_buf = [0u8; 4];
debug!("Reading message length");
stream.read_exact(&mut len_buf).await?;
let len = i32::from_le_bytes(len_buf);
let mut buf = vec![0u8; len as usize];
debug!("Reading message");
stream.read_exact(&mut buf).await?;
let user_input = M::decode(&mut buf.as_slice())?;
debug!("Received message: {:?}", user_input);
Ok(user_input)
}
Your problem lies with how you're handling messages after clients have connected:
handle_stream(stream, store, store_path, wal, cluster, &role).await?;
This .await means your listening loop will wait for handle_stream to return, but (making some assumptions) this function won't return until the client has disconnected. What you want is to tokio::spawn a new task that can run independently:
tokio::spawn(handle_stream(stream, store, store_path, wal, cluster, &role));
You may have to change some of your parameter types to avoid non-'static lifetimes; tokio::spawn requires 'static because the task's lifetime is decoupled from the scope where it was spawned.
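For example, the accept loop could then look roughly like this (a sketch only: it assumes NodeRole implements Clone, that handle_stream is changed to take its arguments by value, and that borrowed types such as Arc<&Path> and the WriteAheadLog<'a> lifetime are replaced with owned equivalents like Arc<PathBuf>):

loop {
    let (stream, addr) = listener.accept().await?;
    info!("Incoming request from {}", addr);
    let store = Arc::clone(&store);
    let store_path = Arc::clone(&store_path);
    let wal = Arc::clone(&wal);
    let cluster = Arc::clone(&cluster);
    let role = role.clone();
    // Spawn an independent task per connection; the loop goes straight
    // back to accept() instead of awaiting the handler.
    tokio::spawn(async move {
        if let Err(e) = handle_stream(stream, store, store_path, wal, cluster, role).await {
            log::error!("error handling connection from {}: {}", addr, e);
        }
    });
}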
Related
I'm a beginner in Rust, and I'm trying to use Rust's asynchronous programming.
In my scenario, I want to create an empty Future and complete it from another thread after a complex multi-round scheduling process. Java's CompletableFuture::complete meets this need very well.
I have tried to find a Rust equivalent, but haven't found one yet.
Is it possible to do this in Rust?
I understand from the comments below that using a channel for this is more in line with Rust's design.
My scenario is a hierarchical scheduling executor.
For example, Task1 will be split into several Drivers; each Driver will use multiple threads (a rayon thread pool) to do some computation work, and a driver's state change will trigger the execution of the next driver. The result of the whole task is the last driver's output, and the intermediate drivers have no output. That is to say, my async function cannot get the result from a spawned task directly, so I need a shared stack variable or a channel to transfer the result.
So what I really want is this: the last driver, which is executed on a rayon thread, can get a channel's tx by its identity without storing it (to simplify the state-change process).
I found that the tx and rx of a oneshot channel cannot be copied, they are not thread safe, and the send method of tx needs ownership. So I can't store the tx in the main thread and have the last driver find its tx by identity. But I can do that with mpsc, so I wrote two demos and pasted them into the body of the question, though I had to create the mpsc channel with capacity 1 and close it manually.
The two demos are below. I wonder if this is an appropriate and efficient use of mpsc?
Version implemented using oneshot (does not work):
#[tokio::test]
pub async fn test_async() -> Result<()>{
let mut executor = Executor::new();
let res1 = executor.run(1).await?;
let res2 = executor.run(2).await?;
println!("res1 {}, res2 {}", res1, res2);
Ok(())
}
struct Executor {
pub pool: ThreadPool,
pub txs: Arc<DashMap<i32, RwLock<oneshot::Sender<i32>>>>,
}
impl Executor {
pub fn new() -> Self {
Executor{
pool: ThreadPoolBuilder::new().num_threads(10).build().unwrap(),
txs: Arc::new(DashMap::new()),
}
}
pub async fn run(&mut self, index: i32) -> Result<i32> {
let (tx, rx) = oneshot::channel();
self.txs.insert(index, RwLock::new(tx));
let txs_clone = self.txs.clone();
self.pool.spawn(move || {
let spawn_tx = txs_clone.get(&index).unwrap();
let guard = block_on(spawn_tx.read());
// cannot work, send need ownership, it will cause move of self
guard.send(index);
});
let res = rx.await;
return Ok(res.unwrap());
}
}
Version implemented using mpsc (works, but I'm not sure about the performance):
#[tokio::test]
pub async fn test_async() -> Result<()>{
let mut executor = Executor::new();
let res1 = executor.run(1).await?;
let res2 = executor.run(2).await?;
println!("res1 {}, res2 {}", res1, res2);
// close channel after task finished
executor.close(1);
executor.close(2);
Ok(())
}
struct Executor {
pub pool: ThreadPool,
pub txs: Arc<DashMap<i32, RwLock<mpsc::Sender<i32>>>>,
}
impl Executor {
pub fn new() -> Self {
Executor{
pool: ThreadPoolBuilder::new().num_threads(10).build().unwrap(),
txs: Arc::new(DashMap::new()),
}
}
pub fn close(&mut self, index:i32) {
self.txs.remove(&index);
}
pub async fn run(&mut self, index: i32) -> Result<i32> {
let (tx, mut rx) = mpsc::channel(1);
self.txs.insert(index, RwLock::new(tx));
let txs_clone = self.txs.clone();
self.pool.spawn(move || {
let spawn_tx = txs_clone.get(&index).unwrap();
let guard = block_on(spawn_tx.value().read());
block_on(guard.deref().send(index));
});
// 0 mock invalid value
let mut res:i32 = 0;
while let Some(data) = rx.recv().await {
println!("recv data {}", data);
res = data;
break;
}
return Ok(res);
}
}
Disclaimer: It's really hard to picture what you are attempting to achieve, because the examples provided are trivial to solve, with no justification for the added complexity (DashMap). As such, this answer will be progressive, though it will remain focused on solving the problem you demonstrated you had, and not necessarily the problem you're thinking of... as I have no crystal ball.
We'll be using the following Result type in the examples:
type Result<T> = std::result::Result<T, Box<dyn Error + Send + Sync + 'static>>;
Serial execution
The simplest way to execute a task, is to do so right here, right now.
impl Executor {
pub async fn run<F, Fut>(&self, task: F) -> Result<i32>
where
F: FnOnce() -> Fut,
Fut: Future<Output = Result<i32>>,
{
task().await
}
}
Async execution - built-in
When the execution of a task may involve heavy-weight calculations, it may be beneficial to execute it on a background thread.
Whichever runtime you are using probably supports this functionality; I'll demonstrate with tokio:
impl Executor {
pub async fn run<F>(&self, task: F) -> Result<i32>
where
F: FnOnce() -> Result<i32> + Send + 'static,
{
Ok(tokio::task::spawn_blocking(task).await??)
}
}
Async execution - one-shot
If you wish to have more control over the number of CPU-bound threads, either to limit them or to partition the machine's CPUs for different needs, then the async runtime may not be configurable enough and you may prefer to use a thread pool instead.
In this case, synchronization back with the runtime can be achieved via channels, the simplest of which is the oneshot channel.
impl Executor {
pub async fn run<F>(&self, task: F) -> Result<i32>
where
F: FnOnce() -> Result<i32> + Send + 'static,
{
let (tx, mut rx) = oneshot::channel();
self.pool.spawn(move || {
let result = task();
// Decide on how to handle the fact that nobody will read the result.
let _ = tx.send(result);
});
Ok(rx.await??)
}
}
Note that in all of the above solutions, task remains agnostic as to how it's executed. This is a property you should strive for, as it makes it easier to change the way execution is handled in the future by more neatly separating the two concepts.
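For illustration, here is a minimal usage sketch of the oneshot-based run above, assuming an Executor struct that holds only a rayon ThreadPool in a pool field (as in the question) and the Result alias defined earlier:

#[tokio::test]
async fn test_run() -> Result<()> {
    let executor = Executor {
        pool: rayon::ThreadPoolBuilder::new().num_threads(10).build()?,
    };
    // The closures stay agnostic about how they are executed; they would
    // work unchanged with the spawn_blocking variant as well.
    let res1 = executor.run(|| Ok(1 + 1)).await?;
    let res2 = executor.run(|| Ok(2 + 2)).await?;
    println!("res1 {}, res2 {}", res1, res2);
    Ok(())
}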
The problem is in a microservice built with Tokio, which connects to the database and other services asynchronously. When one of these connections fails, the microservice doesn't stop running. That's great when that's the behavior you want, but I need this microservice to stop when a connection is lost... so could you help me safely shut down the process?
src/main.rs
use tokio; // 1.0.0+
fn main() {
let rt = tokio::runtime::Builder::new_multi_thread()
.worker_threads(workers_number)
.enable_all()
.build()
.unwrap();
rt.block_on(async {
// health checker connection
let health_checker = HealthChecker::new(some_configuration).await?;
// some connection to db
// ...
// transport client connection
// ...
// so when connection failed or lost I need to
// end process like `std::process::abort()`
// but I cant use it, because its unsafe
let mut task_handler = vec![];
// create some task
join_all(task_handler).await;
});
}
Does anyone have any ideas?
You can call any of the Runtime shutdown methods: shutdown_timeout or shutdown_background.
If some kind of waiting is needed, you could spawn a task with a tokio::sync::oneshot channel that triggers the shutdown when signaled.
use core::time::Duration;
use crate::tokio::time::sleep;
use tokio;
fn main() {
let rt = tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap();
let handle = rt.handle().clone();
let (s, r) = tokio::sync::oneshot::channel();
rt.spawn(async move {
sleep(Duration::from_secs(1)).await;
s.send(0);
});
handle.block_on(async move {
r.await;
rt.shutdown_background();
});
}
I seem unable to use pnet's datalink library in any asynchronous way. The trouble appears to relate to pinning. This snippet adapted from pnet's main example:
async fn ethernet_channel(i: NetworkInterface) {
// Create Ethernet channel:
let (mut tx, mut rx) = match datalink::channel(&i, Default::default()) {
Ok(Ethernet(tx, rx)) => (tx, rx),
_ => panic!("Error creating channel")
};
loop { // handle inbound Ethernet packets forever
tokio::select! {
Ok(packet) = rx.next() => {}
// TODO: include some arm like an mpsc oneshot to break out, like Tokio's examples
}
}
}
Leads to compiler error: error[E0599]: no method named 'poll' found for struct 'Pin<&mut std::result::Result<&[u8], std::io::Error>>' in the current scope
The Tokio tutorial covers this exact scenario and includes this remark: "to .await a reference, the value being referenced must be pinned or implement Unpin."
Combined with knowing the return value of the rx.next() function is the Result<&[u8], Error> mentioned in the error, I gather the packet byte array is the culprit "value being referenced" that must be pinned. I've tried many combinations of tokio::pin!(), including what makes the most sense to me based on Tokio's same example, to no avail:
#[tokio::main]
pub async fn main() -> Result<(), Box<dyn Error>> {
for interface in datalink::interfaces() {
let operation = ethernet_channel(interface);
tokio::pin!(operation);
tokio::select! {
_ = &mut operation => {}
}
}
}
I've also tried tokio::pin!() within my "ethernet_channel()" function, before and after datalink::channel(), on both the NetworkInterface and rx. I always end up with the same error as above. Any guidance appreciated, pulling my hair out.
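For what it's worth, pnet's datalink receiver is synchronous: rx.next() blocks and returns io::Result<&[u8]>, not a future, which is why pinning doesn't help here. One possible bridge, sketched under that assumption (the helper ethernet_packets is hypothetical, not part of pnet), is to keep the blocking receiver on a tokio::task::spawn_blocking task and forward owned copies of each frame through a tokio mpsc channel:

use pnet::datalink::{self, Channel::Ethernet, NetworkInterface};
use tokio::sync::mpsc;

// Hypothetical bridge: read frames with pnet's blocking API on a blocking
// task and forward owned copies over an async channel.
fn ethernet_packets(i: NetworkInterface) -> mpsc::Receiver<Vec<u8>> {
    let (tx, packets) = mpsc::channel(100);
    tokio::task::spawn_blocking(move || {
        let (_dl_tx, mut dl_rx) = match datalink::channel(&i, Default::default()) {
            Ok(Ethernet(dl_tx, dl_rx)) => (dl_tx, dl_rx),
            _ => panic!("Error creating channel"),
        };
        loop {
            match dl_rx.next() {
                // The slice borrows the receiver's internal buffer, so copy it out.
                Ok(packet) => {
                    if tx.blocking_send(packet.to_vec()).is_err() {
                        break; // the async side dropped the receiver
                    }
                }
                Err(e) => {
                    eprintln!("datalink read error: {}", e);
                    break;
                }
            }
        }
    });
    packets
}

The returned Receiver can then be awaited with recv() inside tokio::select!, alongside a shutdown arm.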
I'm trying to use async_std to receive UDP datagrams from the network.
There is a UdpSocket that implements an async recv_from; this method returns a future, but I need an async_std::stream::Stream that gives a stream of UDP datagrams, because it is a better abstraction.
I've found tokio::net::UdpFramed, which does exactly what I need, but it is not available in current versions of tokio.
Generally speaking, the question is: how do I convert futures from a given async function into a Stream?
For a single item, use FutureExt::into_stream:
use futures::prelude::*; // 0.3.1
fn outer() -> impl Stream<Item = i32> {
inner().into_stream()
}
async fn inner() -> i32 {
42
}
For a stream from a number of futures generated by a closure, use stream::unfold:
use futures::prelude::*; // 0.3.1
fn outer() -> impl Stream<Item = i32> {
stream::unfold((), |()| async { Some((inner().await, ())) })
}
async fn inner() -> i32 {
42
}
In your case, you can use stream::unfold:
use async_std::{io, net::UdpSocket}; // 1.4.0, features = ["attributes"]
use futures::prelude::*; // 0.3.1
fn read_many(s: UdpSocket) -> impl Stream<Item = io::Result<Vec<u8>>> {
stream::unfold(s, |s| {
async {
let data = read_one(&s).await;
Some((data, s))
}
})
}
async fn read_one(s: &UdpSocket) -> io::Result<Vec<u8>> {
let mut data = vec![0; 1024];
let (len, _) = s.recv_from(&mut data).await?;
data.truncate(len);
Ok(data)
}
#[async_std::main]
async fn main() -> io::Result<()> {
let s = UdpSocket::bind("0.0.0.0:9876").await?;
read_many(s)
.for_each(|d| {
async {
match d {
Ok(d) => match std::str::from_utf8(&d) {
Ok(s) => println!("{}", s),
Err(_) => println!("{:x?}", d),
},
Err(e) => eprintln!("Error: {}", e),
}
}
})
.await;
Ok(())
}
Generally speaking, the question is: how do I convert futures from a given async function into a Stream?
There is FutureExt::into_stream, but don't let the name fool you; it is not a good fit for your situation.
There is a UdpSocket that implements an async recv_from; this method returns a future, but I need an async_std::stream::Stream that gives a stream of UDP datagrams, because it is a better abstraction.
It is not necessarily a better abstraction here.
Specifically, async-std's UdpSocket::recv_from returns a future that has output type of (usize, SocketAddr) — the size of the data received and the peer address. If you were to use into_stream to convert it to a stream, it would give you just that, not the data received.
I've found tokio::net::UdpFramed, which does exactly what I need, but it is not available in current versions of tokio.
It has been moved to the tokio-util crate. Unfortunately, you can't (easily) use that either: it requires a tokio::net::UdpSocket, which is not the same as async_std::net::UdpSocket.
You can, of course, use futures utility functions such as futures::stream::poll_fn or futures::stream::unfold to give UdpSocket::recv_from a futures::stream::Stream facade, but then what will you do with that? If you end up calling StreamExt::next to poll a value out of it, you could have used recv_from directly.
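For comparison, a receive loop that skips Stream entirely could be as simple as this (a minimal sketch, doing the same work as the unfold-based version above):

use async_std::{io, net::UdpSocket}; // 1.4.0, features = ["attributes"]

#[async_std::main]
async fn main() -> io::Result<()> {
    let s = UdpSocket::bind("0.0.0.0:9876").await?;
    let mut buf = vec![0; 1024];
    loop {
        // recv_from is awaited directly; no Stream adapter is involved.
        let (len, peer) = s.recv_from(&mut buf).await?;
        println!("{} bytes from {}", len, peer);
    }
}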
It is only necessary to reach for Stream if some API you are using requires a Stream input, such as rusoto:
Is it possible to create a Stream from a File rather than loading the file contents into memory?
I have a concept project where the client sends the server a number (PrimeClientRequest), the server computes whether the value is prime, and returns a response (PrimeClientResponse). I want the client to be a simple CLI which prompts the user for a number, sends the request to the server, and displays the response. Ideally I want to do this using TcpClient from Tokio and Streams from futures-rs.
I've written a Tokio server using services and I want to reuse the same codec and proto for the client.
Part of the client is a function called read_prompt which returns a Stream. Essentially it is an infinite loop in which each iteration reads some input from stdin.
Here's the relevant code:
main.rs
use futures::{Future, Stream};
use std::env;
use std::net::SocketAddr;
use tokio_core::reactor::Core;
use tokio_prime::protocol::PrimeClientProto;
use tokio_prime::request::PrimeRequest;
use tokio_proto::TcpClient;
use tokio_service::Service;
mod cli;
fn main() {
let mut core = Core::new().unwrap();
let handle = core.handle();
let addr_string = env::args().nth(1).unwrap_or("127.0.0.1:8080".to_string());
let remote_addr = addr_string.parse::<SocketAddr>().unwrap();
println!("Connecting on {}", remote_addr);
let tcp_client = TcpClient::new(PrimeClientProto).connect(&remote_addr, &handle);
core.run(tcp_client.and_then(|client| {
client
.call(PrimeRequest { number: Ok(0) })
.and_then(|response| {
println!("RESP = {:?}", response);
Ok(())
})
})).unwrap();
}
cli.rs
use futures::{Future, Sink, Stream};
use futures::sync::mpsc;
use std::{io, thread};
use std::io::{Stdin, Stdout};
use std::io::prelude::*;
pub fn read_prompt() -> impl Stream<Item = u64, Error = ()> {
let (tx, rx) = mpsc::channel(1);
thread::spawn(move || loop {
let thread_tx = tx.clone();
let input = prompt(io::stdout(), io::stdin()).unwrap();
let parsed_input = input
.parse::<u64>()
.map_err(|_| io::Error::new(io::ErrorKind::Other, "invalid u64"));
thread_tx.send(parsed_input.unwrap()).wait().unwrap();
});
rx
}
fn prompt(stdout: Stdout, stdin: Stdin) -> io::Result<String> {
let mut stdout_handle = stdout.lock();
stdout_handle.write(b"> ")?;
stdout_handle.flush()?;
let mut buf = String::new();
let mut stdin_handle = stdin.lock();
stdin_handle.read_line(&mut buf)?;
Ok(buf.trim().to_string())
}
With the code above, the client sends a single request to the server before the client terminates. I want to be able to use the stream generated from read_prompt to provide input to the TcpClient and make a request per item in the stream. How would I go about doing this?
The full code can be found at joshleeb/tokio-prime.
The solution I have come up with (so far) is to use LoopFn (via future::loop_fn) from the futures-rs crate. It's not ideal, since a new connection still has to be made for each request, but it is at least a step in the right direction.
main.rs
use futures::{future, Future};
use std::{env, io};
use std::net::SocketAddr;
use tokio_core::reactor::{Core, Handle};
use tokio_prime::protocol::PrimeClientProto;
use tokio_prime::request::PrimeRequest;
use tokio_proto::TcpClient;
use tokio_service::Service;
mod cli;
fn handler<'a>(
handle: &'a Handle, addr: &'a SocketAddr
) -> impl Future<Item = (), Error = ()> + 'a {
cli::prompt(io::stdin(), io::stdout())
.and_then(move |number| {
TcpClient::new(PrimeClientProto)
.connect(addr, handle)
.and_then(move |client| Ok((client, number)))
})
.and_then(|(client, number)| {
client
.call(PrimeRequest { number: Ok(number) })
.and_then(|response| {
println!("{:?}", response);
Ok(())
})
})
.or_else(|err| {
println!("! {}", err);
Ok(())
})
}
fn main() {
let mut core = Core::new().unwrap();
let handle = core.handle();
let addr_string = env::args().nth(1).unwrap_or("127.0.0.1:8080".to_string());
let remote_addr = addr_string.parse::<SocketAddr>().unwrap();
println!("Connecting on {}", remote_addr);
let client = future::loop_fn((), |_| {
handler(&handle, &remote_addr)
.map(|_| -> future::Loop<(), ()> { future::Loop::Continue(()) })
});
core.run(client).ok();
}
cli.rs
use futures::prelude::*;
use std::io;
use std::io::{Stdin, Stdout};
use std::io::prelude::*;
#[async]
pub fn prompt(stdin: Stdin, stdout: Stdout) -> io::Result<u64> {
let mut stdout_handle = stdout.lock();
stdout_handle.write(b"> ")?;
stdout_handle.flush()?;
let mut buf = String::new();
let mut stdin_handle = stdin.lock();
stdin_handle.read_line(&mut buf)?;
parse_input(buf.trim().to_string())
}
fn parse_input(s: String) -> io::Result<u64> {
s.parse::<u64>()
.map_err(|_| io::Error::new(io::ErrorKind::Other, "invalid u64"))
}