Is there some way to shut down a tokio::runtime::Runtime?

The problem is in a microservice built on Tokio, which connects to a database and other services asynchronously. When a connection fails, the microservice keeps running. That behavior is great when you want it, but in my case I need the microservice to stop working when a connection is lost... so could you help me safely shut down the process?
src/main.rs
use tokio; // 1.0.0+
fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(workers_number)
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        // health checker connection
        let health_checker = HealthChecker::new(some_configuration).await?;

        // some connection to db
        // ...

        // transport client connection
        // ...

        // so when a connection fails or is lost, I need to
        // end the process like `std::process::abort()`,
        // but I can't use it because it's unsafe

        let mut task_handler = vec![];
        // create some tasks

        join_all(task_handler).await;
    });
}
Does anyone have any ideas?

You can call either of the Runtime shutdown methods: shutdown_timeout or shutdown_background.
If some kind of waiting is needed, you can spawn a task with a tokio::sync::oneshot channel that triggers the shutdown when signaled.
use core::time::Duration;
use tokio::time::sleep;

fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();
    let handle = rt.handle().clone();
    let (s, r) = tokio::sync::oneshot::channel();

    rt.spawn(async move {
        sleep(Duration::from_secs(1)).await;
        let _ = s.send(0);
    });

    handle.block_on(async move {
        let _ = r.await;
        rt.shutdown_background();
    });
}
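Applied to the original question, a minimal sketch could look like the following (HealthChecker and some_configuration are the asker's placeholder names, not a real API); shutdown_timeout is used here to bound how long we wait for outstanding tasks, and the process then exits with a non-zero status:
fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();

    let (shutdown_tx, shutdown_rx) = tokio::sync::oneshot::channel::<()>();

    rt.spawn(async move {
        // let health_checker = HealthChecker::new(some_configuration).await ...
        // when a connection is detected as lost, request shutdown:
        let _ = shutdown_tx.send(());
    });

    rt.block_on(async {
        // spawn the db / transport tasks here ...
        let _ = shutdown_rx.await;
    });

    // Give outstanding tasks a bounded amount of time to finish, then exit
    // with a non-zero status so a supervisor can restart the service.
    rt.shutdown_timeout(std::time::Duration::from_secs(5));
    std::process::exit(1);
}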

Related

Unable to send message to select! loop via an mpsc::unbounded_channel

I have an issue with a Redis client which I'm trying to integrate into a larger message broker.
The problem is that I am using the PUBSUB functionality of Redis in order to subscribe to a topic and the async implementation shown in the docs example does not properly react to disconnects from the server.
Basically doing a loop { tokio::select!{ Some(msg) = pubsub_stream.next() => { handle_message(msg); } } } would properly handle new messages, but when the server went down or became unreachable, I would not get notified and pubsub_stream.next() would wait forever on a dead connection. I assume the client would drop this connection as soon as a command was sent to Redis, but this is a listen-only service with no intention of issuing other commands.
So I tried to use an approach I learned while adding WebSocket support via axum to this broker, where an unbounded mpsc channel is used to deliver messages to a specific WebSocket client, and there it works.
The following is the approach which I'm trying to get to work, but for some reason the code in the select! loop is never executed. I'm intending to add more code from other channels to the select! loop, but I've removed them to keep this example clean.
Basically I am seeing the - 1 REDIS subscription event printouts but not the - 2 REDIS subscription event printouts.
pub async fn redis_async_task(storage: Arc<Storage>) {
    //-----------------------------------------------------------------
    let mut eb_broadcast_rx = storage.eb_broadcast_tx.subscribe();
    let (mpsc_tx, mut mpsc_rx) = mpsc::unbounded_channel::<Msg>();
    let mut interval_5s = IntervalStream::new(tokio::time::interval(Duration::from_secs(5)));
    //-----------------------------------------------------------------
    let _task = tokio::spawn({
        async move {
            loop {
                tokio::select! {
                    Some(msg) = mpsc_rx.recv() => {
                        // Why is this never called?
                        let channel = msg.get_channel_name().to_string();
                        let payload = msg.get_payload::<String>().unwrap();
                        println!(" - 2 REDIS: subscription event: {} channel: {} payload: {}", channel, payload);
                    },
                    Some(_ts) = interval_5s.next() => {
                        // compute messages per second
                        println!("timer");
                    },
                    Ok(evt) = eb_broadcast_rx.recv() => {
                        // Some other events unrelated to Redis
                        if let Event::WsClientConnected{id: _id, name: _name} = evt {}
                        else if let Event::WsClientDisconnected{id: _id, name: _name} = evt {}
                    },
                }
            }
        }
    });
    //-----------------------------------------------------------------
    loop {
        // Loop which runs forever, reconnecting to the Redis server upon disconnect
        // and resubscribing to the topic.
        println!("REDIS connecting");
        let client = redis::Client::open("redis://127.0.0.1:6379/").unwrap();
        if let Ok(mut connection) = client.get_connection() {
            // We have a connection
            println!("REDIS connected");
            if let Err(error) = connection.subscribe(&["tokio"], |msg| {
                // We are subscribed to the topic and receiving messages
                if let Ok(payload) = msg.get_payload::<String>() {
                    let channel = msg.get_channel_name().to_string();
                    println!(" - 1 REDIS subscription event: channel: {} payload: {}", channel, payload);
                    // Send the message through the channel into the select! loop
                    if let Err(error) = mpsc_tx.send(msg) {
                        eprintln!("REDIS: error sending: {}", error);
                    }
                };
                // ControlFlow::Break(())
                ControlFlow::<i32>::Continue
            }) {
                // Connection to Redis is lost, subscription aborts
                eprintln!("REDIS subscription error: {:?} ", error);
            };
        } else {
            // Connection to Redis failed, it is probably not reachable.
            println!("REDIS connection failed");
        }
        // Sleep for 1 second before reconnecting.
        sleep(Duration::from_millis(1000)).await;
    }
}
The code above is called from main like so, in parallel to other clients like WebSocket and MQTT, which do work.
#[tokio::main]
async fn main() {
    // ...
    tokio::spawn(task_redis::redis_async_task(storage.clone()));
    // ...
}
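One side note when reading the code above (not an answer from the original thread): client.get_connection() and connection.subscribe(...) are redis's blocking, synchronous APIs, and here the reconnect loop runs directly inside an async task. If you want to keep that loop off Tokio's async worker threads, one option is to move it onto the blocking pool; a rough sketch under that assumption, with the channel plumbing unchanged:
// Sketch only: run the synchronous Redis subscribe/reconnect loop on Tokio's
// blocking thread pool so it does not tie up an async worker.
// `mpsc_tx` is the same unbounded sender consumed by the select! task above,
// and ControlFlow is the same redis type used in the original code.
let tx = mpsc_tx.clone();
tokio::task::spawn_blocking(move || loop {
    let client = redis::Client::open("redis://127.0.0.1:6379/").unwrap();
    if let Ok(mut connection) = client.get_connection() {
        let _ = connection.subscribe(&["tokio"], |msg| {
            // forward into the select! loop exactly as before
            let _ = tx.send(msg);
            ControlFlow::<i32>::Continue
        });
    }
    // wait a second before reconnecting
    std::thread::sleep(std::time::Duration::from_secs(1));
});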

reqwest post request freezes after random amount of time

I started learning Rust 2 weeks ago and have been writing an application that watches a log file and sends the information in bulk to an Elasticsearch DB.
The problem is that after a certain amount of time it freezes (using 100% CPU) and I don't understand why.
I've cut down a lot of the code to try to figure out the issue, but according to the CLion debugger it still keeps freezing on this line:
let _response = reqwest::Client::new()
    .post("http://127.0.0.1/test.php")
    .header("Content-Type", "application/json")
    .body("{\"test\": true}")
    .timeout(Duration::from_secs(30))
    .send() // <-- Exactly here
    .await;
It freezes and doesn't return any error message.
This is the code in context:
use std::{env};
use std::io::{stdout, Write};
use std::path::Path;
use std::time::Duration;
use logwatcher::{LogWatcher, LogWatcherAction};
use serde_json::{json, Value};
use serde_json::Value::Null;
use tokio;

#[tokio::main]
async fn main() {
    let mut log_watcher = LogWatcher::register("/var/log/test.log").unwrap();
    let mut counter = 0;
    let BULK_SIZE = 500;

    // This triggers each time a new line is appended to /var/log/test.log
    log_watcher.watch(&mut move |line: String| {
        counter += 1;
        if counter >= BULK_SIZE {
            // This has to be async because log_watcher is not async
            futures::executor::block_on(async {
                let _response = reqwest::Client::new()
                    .post("http://127.0.0.1/test.php") // <-- This is just for testing, it fails towards the DB too
                    .header("Content-Type", "application/json")
                    .body("{\"test\": true}")
                    .timeout(Duration::from_secs(30))
                    .send() // <-- Freezes here
                    .await;
                if _response.is_ok() {
                    println!("Ok");
                }
            });
            counter = 0;
        }
        LogWatcherAction::None
    });
}
The log file gets about 625 new lines every minute. The crash happens after roughly 5,500 to 25,000 lines have gone through; it seems fairly random in general.
I'm suspecting the issue is either something to do with LogWatcher, reqwest, the block_on or the mix of async.
Does anyone have any clue why it randomly freezes?
The problem was indeed caused by mixing Tokio-based async code with futures::executor::block_on, NOT by reqwest directly.
It was solved by making main non-async and using a Tokio runtime's block_on for the async calls instead of futures::executor::block_on.
fn main() {
    let mut log_watcher = LogWatcher::register("/var/log/test.log").unwrap();
    let mut counter = 0;
    let BULK_SIZE = 500;

    log_watcher.watch(&mut move |line: String| {
        counter += 1;
        if counter >= BULK_SIZE {
            tokio::runtime::Builder::new_multi_thread()
                .enable_all()
                .build()
                .unwrap()
                .block_on(async {
                    let _response = reqwest::Client::new()
                        .post("http://127.0.0.1/test.php")
                        .header("Content-Type", "application/json")
                        .body("{\"test\": true}")
                        .timeout(Duration::from_secs(30))
                        .send()
                        .await;
                    if _response.is_ok() {
                        println!("Ok");
                    }
                });
            counter = 0;
        }
        LogWatcherAction::None
    });
}
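A follow-up note on the design, not part of the original answer: constructing a new multi-threaded runtime (and a new reqwest::Client) for every bulk send is fairly heavy. A sketch of the same fix that builds both once up front and reuses them, under the same assumptions about the logwatcher API:
fn main() {
    // Build one runtime and one client up front and reuse them for every
    // bulk send, instead of constructing new ones per batch.
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();
    let client = reqwest::Client::new(); // reuses its internal connection pool

    let mut log_watcher = LogWatcher::register("/var/log/test.log").unwrap();
    let mut counter = 0;
    let bulk_size = 500;

    log_watcher.watch(&mut move |_line: String| {
        counter += 1;
        if counter >= bulk_size {
            // Drive the request future on the long-lived runtime.
            let response = rt.block_on(
                client
                    .post("http://127.0.0.1/test.php")
                    .header("Content-Type", "application/json")
                    .body("{\"test\": true}")
                    .timeout(Duration::from_secs(30))
                    .send(),
            );
            if response.is_ok() {
                println!("Ok");
            }
            counter = 0;
        }
        LogWatcherAction::None
    });
}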

Why is async TcpStream blocking?

I'm working on a project to implement a distributed key-value store in Rust. I've written the server-side code using Tokio's asynchronous runtime. I'm running into an issue where my asynchronous code seems to block, so when I have multiple connections to the server, only one TcpStream is processed. I'm new to implementing async code, both in general and in Rust, but I thought other streams would be accepted and processed if there was no activity on a given TCP stream.
Is my understanding of async wrong or am I using tokio incorrectly?
This is my entry point:
use std::error::Error;
use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use std::str::FromStr;
use std::sync::{Arc, Mutex};

use env_logger;
use log::{debug, info};
use structopt::StructOpt;
use tokio::net::TcpListener;

extern crate blue;

use blue::ipc::message;
use blue::store::args;
use blue::store::cluster::{Cluster, NodeRole};
use blue::store::deserialize::deserialize_store;
use blue::store::handler::handle_stream;
use blue::store::wal::WriteAheadLog;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();
    let opt = args::Opt::from_args();
    let addr = SocketAddr::from_str(format!("{}:{}", opt.host, opt.port).as_str())?;
    let role = NodeRole::from_str(opt.role.as_str()).unwrap();
    let leader_addr = match role {
        NodeRole::Leader => addr,
        NodeRole::Follower => SocketAddr::from_str(opt.follow.unwrap().as_str())?,
    };

    let wal_name = addr.to_string().replace(".", "").replace(":", "");
    let wal_full_name = format!("wal{}.log", wal_name);
    let wal_path = PathBuf::from(wal_full_name);
    let mut wal = match wal_path.exists() {
        true => {
            info!("Existing WAL found");
            WriteAheadLog::open(&wal_path)?
        }
        false => {
            info!("Creating WAL");
            WriteAheadLog::new(&wal_path)?
        }
    };
    debug!("WAL: {:?}", wal);

    let store_name = addr.to_string().replace(".", "").replace(":", "");
    let store_pth = format!("{}.pb", store_name);
    let store_path = Path::new(&store_pth);
    let mut store = match store_path.exists() {
        true => deserialize_store(store_path)?,
        false => message::Store::default(),
    };

    let listener = TcpListener::bind(addr).await?;
    let cluster = Cluster::new(addr, &role, leader_addr, &mut wal, &mut store).await?;

    let store_path = Arc::new(store_path);
    let store = Arc::new(Mutex::new(store));
    let wal = Arc::new(Mutex::new(wal));
    let cluster = Arc::new(Mutex::new(cluster));

    info!("Blue launched. Waiting for incoming connection");
    loop {
        let (stream, addr) = listener.accept().await?;
        info!("Incoming request from {}", addr);
        let store = Arc::clone(&store);
        let store_path = Arc::clone(&store_path);
        let wal = Arc::clone(&wal);
        let cluster = Arc::clone(&cluster);
        handle_stream(stream, store, store_path, wal, cluster, &role).await?;
    }
}
Below is my handler (handle_stream from the above). I excluded all the handlers in match input as I didn't think they were necessary to prove the point (full code for that section is here: https://github.com/matthewmturner/Bradfield-Distributed-Systems/blob/main/blue/src/store/handler.rs if it actually helps).
Specifically, the point that blocks is the line let input = async_read_message::<message::Request>(&mut stream).await;
This is where the server waits for communication from either a client or another server in the cluster. The behavior I currently see is that after connecting to the server with a client, the server doesn't receive any of the requests to add other nodes to the cluster - it only handles the client stream.
use std::io;
use std::net::{SocketAddr, TcpStream};
use std::path::Path;
use std::str::FromStr;
use std::sync::{Arc, Mutex};

use log::{debug, error, info};
use serde_json::json;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream as asyncTcpStream;

use super::super::ipc::message;
use super::super::ipc::message::request::Command;
use super::super::ipc::receiver::async_read_message;
use super::super::ipc::sender::{async_send_message, send_message};
use super::cluster::{Cluster, NodeRole};
use super::serialize::persist_store;
use super::wal::WriteAheadLog;

// TODO: Why isn't async working? I.e. connecting servers after a client is connected stays on the client stream.
pub async fn handle_stream<'a>(
    mut stream: asyncTcpStream,
    store: Arc<Mutex<message::Store>>,
    store_path: Arc<&Path>,
    wal: Arc<Mutex<WriteAheadLog<'a>>>,
    cluster: Arc<Mutex<Cluster>>,
    role: &NodeRole,
) -> io::Result<()> {
    loop {
        info!("Handling stream: {:?}", stream);
        let input = async_read_message::<message::Request>(&mut stream).await;
        debug!("Input: {:?}", input);
        match input {
            ...
        }
    }
}
This is the code for async_read_message
pub async fn async_read_message<M: Message + Default>(
    stream: &mut asyncTcpStream,
) -> io::Result<M> {
    let mut len_buf = [0u8; 4];
    debug!("Reading message length");
    stream.read_exact(&mut len_buf).await?;
    let len = i32::from_le_bytes(len_buf);

    let mut buf = vec![0u8; len as usize];
    debug!("Reading message");
    stream.read_exact(&mut buf).await?;

    let user_input = M::decode(&mut buf.as_slice())?;
    debug!("Received message: {:?}", user_input);
    Ok(user_input)
}
Your problem lies with how you're handling messages after clients have connected:
handle_stream(stream, store, store_path, wal, cluster, &role).await?;
This .await means your listening loop will wait for handle_stream to return, but (making some assumptions) this function won't return until the client has disconnected. What you want is to tokio::spawn a new task that can run independently:
tokio::spawn(handle_stream(stream, store, store_path, wal, cluster, &role));
You may have to change some of your parameter types to avoid lifetimes; tokio::spawn requires 'static since the task's lifetime is decoupled from the scope where it was spawned.
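For illustration, a sketch of what the accept loop could look like once handle_stream takes owned parameters (e.g. role: NodeRole passed by value, and a store_path/WriteAheadLog without borrowed lifetimes; whether NodeRole is Clone/Copy depends on your types):
loop {
    let (stream, addr) = listener.accept().await?;
    info!("Incoming request from {}", addr);

    let store = Arc::clone(&store);
    let store_path = Arc::clone(&store_path);
    let wal = Arc::clone(&wal);
    let cluster = Arc::clone(&cluster);
    let role = role.clone(); // pass an owned NodeRole instead of &NodeRole

    // Each connection now runs in its own task, so a slow or idle client
    // no longer prevents the loop from accepting new connections.
    tokio::spawn(async move {
        if let Err(e) = handle_stream(stream, store, store_path, wal, cluster, role).await {
            error!("error handling stream: {}", e);
        }
    });
}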

Graceful exit TcpListener.incoming()

From the rust std net library:
let listener = TcpListener::bind(("127.0.0.1", port)).unwrap();
info!("Opened socket on localhost port {}", port);

// accept connections and process them serially
for stream in listener.incoming() {
    break;
}
info!("closed socket");
How does one make the listener stop listening? It says in the API that when the listener is dropped, it stops. But how do we drop it if incoming() is a blocking call? Preferably without external crates like tokio/mio.
You'll want to put the TcpListener into non-blocking mode using the set_nonblocking() method, like so:
use std::io;
use std::net::TcpListener;

let listener = TcpListener::bind("127.0.0.1:7878").unwrap();
listener.set_nonblocking(true).expect("Cannot set non-blocking");

for stream in listener.incoming() {
    match stream {
        Ok(s) => {
            // do something with the TcpStream
            handle_connection(s);
        }
        Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
            // Decide if we should exit
            break;
            // Decide if we should try to accept a connection again
            continue;
        }
        Err(e) => panic!("encountered IO error: {}", e),
    }
}
Instead of waiting for a connection, the incoming() call will immediately return a Result<> type. If Result is Ok(), then a connection was made and you can process it. If the Result is Err(WouldBlock), this isn't actually an error, there just wasn't a connection pending at the exact moment incoming() checked the socket.
Note that in the WouldBlock case, you may want to put a sleep() or something before continuing, otherwise your program will rapidly poll the incoming() function checking for a connection, resulting in high CPU usage.
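For example (a tiny sketch of that suggestion), the WouldBlock arm could back off briefly instead of spinning:
Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
    // no pending connection: sleep briefly so we don't busy-poll the socket
    std::thread::sleep(std::time::Duration::from_millis(50));
    continue;
}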
The standard library doesn't provide an API for this, but there are a few strategies you can use to work around it:
Shut down reads on the socket
You can use platform-specific APIs to shut down reads on the socket, which will cause the incoming iterator to return an error. You can then break out of handling connections when the error is received. For example, on a Unix system:
use std::net::TcpListener;
use std::os::unix::io::AsRawFd;
use std::thread;

let listener = TcpListener::bind("localhost:0")?;
let fd = listener.as_raw_fd();

let handle = thread::spawn(move || {
    for connection in listener.incoming() {
        match connection {
            Ok(connection) => { /* handle connection */ }
            Err(_) => break,
        }
    }
});

// shutdown(2) is a raw libc call, so it needs an unsafe block
unsafe { libc::shutdown(fd, libc::SHUT_RD) };
handle.join().unwrap();
Force the listener to wake up
Another (cross-platform) trick is to set a variable indicating that you want to stop listening, and then connect to the socket yourself to force the listening thread to wake up. When the listening thread wakes up, it checks the "stop listening" variable, and then exits cleanly if it's set.
use std::net::{TcpListener, TcpStream};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

let listener = TcpListener::bind("localhost:0")?;
let local_addr = listener.local_addr()?;

let shutdown = Arc::new(AtomicBool::new(false));
let server_shutdown = shutdown.clone();

let handle = thread::spawn(move || {
    for connection in listener.incoming() {
        if server_shutdown.load(Ordering::Relaxed) {
            return;
        }
        match connection {
            Ok(connection) => { /* handle connection */ }
            Err(_) => break,
        }
    }
});

shutdown.store(true, Ordering::Relaxed);
let _ = TcpStream::connect(local_addr);
handle.join().unwrap();
You can poll your socket together with an eventfd, which is used for signaling.
I wrote a helper for this.
let shutdown = EventFd::new();
let listener = TcpListener::bind("0.0.0.0:12345")?;
let incoming = CancellableIncoming::new(&listener, &shutdown);
for stream in incoming {
    // Your logic
}

// Meanwhile, in another thread:
shutdown.add(1); // Raise the shutdown signal; the incoming loop now exits gracefully.
use nix;
use nix::poll::{poll, PollFd, PollFlags};
use nix::sys::eventfd::{eventfd, EfdFlags};
use nix::unistd::{close, write};
use std;
use std::net::{TcpListener, TcpStream};
use std::os::unix::io::{AsRawFd, RawFd};

pub struct EventFd {
    fd: RawFd,
}

impl EventFd {
    pub fn new() -> Self {
        EventFd {
            fd: eventfd(0, EfdFlags::empty()).unwrap(),
        }
    }

    pub fn add(&self, v: i64) -> nix::Result<usize> {
        let b = v.to_le_bytes();
        write(self.fd, &b)
    }
}

impl AsRawFd for EventFd {
    fn as_raw_fd(&self) -> RawFd {
        self.fd
    }
}

impl Drop for EventFd {
    fn drop(&mut self) {
        let _ = close(self.fd);
    }
}

// -----
//
pub struct CancellableIncoming<'a> {
    listener: &'a TcpListener,
    eventfd: &'a EventFd,
}

impl<'a> CancellableIncoming<'a> {
    pub fn new(listener: &'a TcpListener, eventfd: &'a EventFd) -> Self {
        Self { listener, eventfd }
    }
}

impl<'a> Iterator for CancellableIncoming<'a> {
    type Item = std::io::Result<TcpStream>;

    fn next(&mut self) -> Option<std::io::Result<TcpStream>> {
        use nix::errno::Errno;

        let fd = self.listener.as_raw_fd();
        let evfd = self.eventfd.as_raw_fd();
        let mut poll_fds = vec![
            PollFd::new(fd, PollFlags::POLLIN),
            PollFd::new(evfd, PollFlags::POLLIN),
        ];

        loop {
            match poll(&mut poll_fds, -1) {
                Ok(_) => break,
                Err(nix::Error::Sys(Errno::EINTR)) => continue,
                _ => panic!("Error polling"),
            }
        }

        if poll_fds[0].revents().unwrap() == PollFlags::POLLIN {
            Some(self.listener.accept().map(|p| p.0))
        } else if poll_fds[1].revents().unwrap() == PollFlags::POLLIN {
            None
        } else {
            panic!("Can't be!");
        }
    }
}

How to create a global mutable bool status flag

Preface: I have done my research and know that it is really not a good idea, nor idiomatic Rust, to have one. I'm completely open to suggestions of other ways to solve this issue.
Background: I have a console application that connects to a websocket, and once connected successfully, the server sends a "Connected" message. The sender and the receiver are in separate threads and everything is working great. After the connect() call a loop begins and places a prompt in the terminal, signaling that the application is ready to receive input from the user.
Problem: The issue is that the current flow of execution calls connect, and then immediately displays the prompt, and then the application receives the message from the server stating it is connected.
How I would solve this in higher level languages: Place a global bool (we'll call it ready) and once the application is "ready" then display the prompt.
How I think this might look in Rust:
// Possible global ready flag with 3 states (true, false, None)
let ready: Option<&mut bool> = None;

fn main() {
    welcome_message(); // Displays a "Connecting..." message to the user

    // These are special callbacks I created; basically when the
    // message is received, `connected` is called.
    // If there was an error getting the message (service is down)
    // then `not_connected` is called. *This is working code*
    let p = mylib::Promise::new(connected, not_connected);

    // Call connect and start websocket send and receive threads
    mylib::connect(p);

    // Loop for user input
    loop {
        match ready {
            Some(x) => {
                if x == true { // If ready is true, display the prompt
                    match prompt_input() {
                        true => {},
                        false => break,
                    }
                } else {
                    return; // If ready is false, quit the program
                }
            },
            None => {} // Ready is None, so continue waiting
        }
    }
}

fn connected() -> &mut bool {
    println!("Connected to Service! Please enter a command. (hint: help)\n\n");
    true
}

fn not_connected() -> &mut bool {
    println!("Connection to Service failed :(");
    false
}
Question:
How would you solve this issue in Rust? I have tried passing the flag around to all the library's method calls, but hit some major issues about borrowing an immutable object in a FnOnce() closure.
It really sounds like you want to have two threads that are communicating via channels. Check out this example:
use std::thread;
use std::sync::mpsc;
use std::time::Duration;

enum ConsoleEvent {
    Connected,
    Disconnected,
}

fn main() {
    let (console_tx, console_rx) = mpsc::channel();

    let socket = thread::spawn(move || {
        println!("socket: started!");

        // pretend we are taking time to connect
        thread::sleep(Duration::from_millis(300));
        println!("socket: connected!");
        console_tx.send(ConsoleEvent::Connected).unwrap();

        // pretend we are taking time to transfer
        thread::sleep(Duration::from_millis(300));
        println!("socket: disconnected!");
        console_tx.send(ConsoleEvent::Disconnected).unwrap();

        println!("socket: closed!");
    });

    let console = thread::spawn(move || {
        println!("console: started!");

        for msg in console_rx.iter() {
            match msg {
                ConsoleEvent::Connected => println!("console: I'm connected!"),
                ConsoleEvent::Disconnected => {
                    println!("console: I'm disconnected!");
                    break;
                }
            }
        }
    });

    socket.join().expect("Unable to join socket thread");
    console.join().expect("Unable to join console thread");
}
Here, there are 3 threads at play:
The main thread.
A thread to read from the "socket".
A thread to interface with the user.
Each of these threads can maintain its own non-shared state, which makes reasoning about each thread easier. The threads use a channel to send updates between them safely. The data that crosses threads is encapsulated in an enum.
When I run this, I get
socket: started!
console: started!
socket: connected!
console: I'm connected!
socket: disconnected!
socket: closed!
console: I'm disconnected!
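Mapping this back to the original question (a sketch only; prompt_input is the asker's hypothetical function): instead of polling a global ready flag, the connection callbacks would send a ConsoleEvent over the channel and the prompt loop would block on the receiver until the connection outcome is known.
// Sketch: wait for the socket thread's verdict before showing the prompt.
// `ConsoleEvent` and `console_rx` are the types from the example above.
match console_rx.recv() {
    Ok(ConsoleEvent::Connected) => {
        println!("Connected to Service! Please enter a command. (hint: help)");
        while prompt_input() {}
    }
    Ok(ConsoleEvent::Disconnected) | Err(_) => {
        println!("Connection to Service failed :(");
    }
}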
