How to create a global mutable bool status flag - global-variables

Preface: I have done my research and know that a global mutable flag is neither a good idea nor idiomatic Rust. Completely open to suggestions of other ways to solve this issue.
Background: I have a console application that connects to a websocket, and once connected successfully, the server sends a "Connected" message. The sender and the receiver run on separate threads and all is working great. After the connect() call, a loop begins and places a prompt in the terminal, signaling that the application is ready to receive input from the user.
Problem: The current flow of execution calls connect, immediately displays the prompt, and only then receives the message from the server stating that it is connected.
How I would solve this in higher level languages: Place a global bool (we'll call it ready) and once the application is "ready" then display the prompt.
How I think this might look in Rust:
//Possible global ready flag with 3 states (true, false, None)
let ready: Option<&mut bool> = None;

fn main() {
    welcome_message(); //Displays a "Connecting..." message to the user

    //These are special callbacks I created: when the message is received,
    //`connected` is called. If there was an error getting the message
    //(service is down), then `not_connected` is called. *This is working code*
    let p = mylib::Promise::new(connected, not_connected);

    //Call connect and start websocket send and receive threads
    mylib::connect(p);

    //Loop for user input
    loop {
        match ready {
            Some(x) => {
                if x == true { //If ready is true, display the prompt
                    match prompt_input() {
                        true => {},
                        false => break,
                    }
                } else {
                    return; //If ready is false, quit the program
                }
            },
            None => {} //Ready is None, so continue waiting
        }
    }
}

fn connected() -> &mut bool {
    println!("Connected to Service! Please enter a command. (hint: help)\n\n");
    true
}

fn not_connected() -> &mut bool {
    println!("Connection to Service failed :(");
    false
}
Question:
How would you solve this issue in Rust? I have tried passing the flag around to all the library's method calls, but ran into major issues with borrowing an immutable object in a FnOnce() closure.

It really sounds like you want to have two threads that are communicating via channels. Check out this example:
use std::thread;
use std::sync::mpsc;
use std::time::Duration;

enum ConsoleEvent {
    Connected,
    Disconnected,
}

fn main() {
    let (console_tx, console_rx) = mpsc::channel();

    let socket = thread::spawn(move || {
        println!("socket: started!");

        // pretend we are taking time to connect
        thread::sleep(Duration::from_millis(300));
        println!("socket: connected!");
        console_tx.send(ConsoleEvent::Connected).unwrap();

        // pretend we are taking time to transfer
        thread::sleep(Duration::from_millis(300));
        println!("socket: disconnected!");
        console_tx.send(ConsoleEvent::Disconnected).unwrap();

        println!("socket: closed!");
    });

    let console = thread::spawn(move || {
        println!("console: started!");

        for msg in console_rx.iter() {
            match msg {
                ConsoleEvent::Connected => println!("console: I'm connected!"),
                ConsoleEvent::Disconnected => {
                    println!("console: I'm disconnected!");
                    break;
                }
            }
        }
    });

    socket.join().expect("Unable to join socket thread");
    console.join().expect("Unable to join console thread");
}
Here, there are 3 threads at play:
The main thread.
A thread to read from the "socket".
A thread to interface with the user.
Each of these threads can maintain its own non-shared state, which makes each thread easier to reason about. The threads use a channel to send updates between them safely. The data that crosses thread boundaries is encapsulated in an enum.
When I run this, I get
socket: started!
console: started!
socket: connected!
console: I'm connected!
socket: disconnected!
socket: closed!
console: I'm disconnected!
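To map this back to the original question: instead of a global ready flag, the console thread can simply block on the channel until the Connected event arrives and only then start prompting. A minimal sketch of just that thread's body, reusing ConsoleEvent from the example above (prompt_input is a hypothetical stand-in for the question's input function):

use std::sync::mpsc::Receiver;

fn run_console(console_rx: Receiver<ConsoleEvent>) {
    // Block here until the socket thread reports the connection state.
    match console_rx.recv() {
        Ok(ConsoleEvent::Connected) => {
            println!("Connected to Service! Please enter a command. (hint: help)");
            // Prompt until the user quits.
            while prompt_input() {}
        }
        // Disconnected before ever connecting, or the socket thread went away.
        _ => println!("Connection to Service failed :("),
    }
}

// Hypothetical stand-in for the question's prompt function.
fn prompt_input() -> bool {
    false
}

The "ready" state never needs to be shared: it is implied by which message (if any) has arrived on the channel.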

Related

Unable to send message to select! loop via an mpsc::unbounded_channel

I have an issue with a Redis client which I'm trying to integrate into a larger message broker.
The problem is that I am using the PUBSUB functionality of Redis in order to subscribe to a topic and the async implementation shown in the docs example does not properly react to disconnects from the server.
Basically, doing a loop { tokio::select! { Some(msg) = pubsub_stream.next() => { handle_message(msg); } } } would handle new messages properly, but when the server went down or became unreachable, I would not get notified and pubsub_stream.next() would wait forever on a dead connection. I assume the client would drop this connection as soon as a command was sent to Redis, but this is a listen-only service with no intention of issuing other commands.
So I tried an approach which I learned while adding WebSocket support via axum to this broker, where an unbounded mpsc channel is used to deliver messages to a specific WebSocket client, and there it works.
The following is the approach which I'm trying to get to work, but for some reason the code in the select! loop is never executed. I'm intending to add more code from other channels to the select! loop, but I've removed them to keep this example clean.
Basically, I am seeing the "- 1 REDIS subscription event" printouts but not the "- 2 REDIS" ones.
pub async fn redis_async_task(storage: Arc<Storage>) {
    //-----------------------------------------------------------------
    let mut eb_broadcast_rx = storage.eb_broadcast_tx.subscribe();
    let (mpsc_tx, mut mpsc_rx) = mpsc::unbounded_channel::<Msg>();
    let mut interval_5s = IntervalStream::new(tokio::time::interval(Duration::from_secs(5)));
    //-----------------------------------------------------------------
    let _task = tokio::spawn({
        async move {
            loop {
                tokio::select! {
                    Some(msg) = mpsc_rx.recv() => {
                        // Why is this never called?
                        let channel = msg.get_channel_name().to_string();
                        let payload = msg.get_payload::<String>().unwrap();
                        println!(" - 2 REDIS: subscription event: {} channel: {} payload: {}", channel, payload);
                    },
                    Some(_ts) = interval_5s.next() => {
                        // compute messages per second
                        println!("timer");
                    },
                    Ok(evt) = eb_broadcast_rx.recv() => {
                        // Some other events unrelated to Redis
                        if let Event::WsClientConnected { id: _id, name: _name } = evt {}
                        else if let Event::WsClientDisconnected { id: _id, name: _name } = evt {}
                    },
                }
            }
        }
    });
    //-----------------------------------------------------------------
    loop {
        // loop which runs forever, reconnecting to the Redis server upon disconnect
        // and resubscribing to the topic.
        println!("REDIS connecting");
        let client = redis::Client::open("redis://127.0.0.1:6379/").unwrap();
        if let Ok(mut connection) = client.get_connection() {
            // We have a connection
            println!("REDIS connected");
            if let Err(error) = connection.subscribe(&["tokio"], |msg| {
                // We are subscribed to the topic and receiving messages
                if let Ok(payload) = msg.get_payload::<String>() {
                    let channel = msg.get_channel_name().to_string();
                    println!(" - 1 REDIS subscription event: channel: {} payload: {}", channel, payload);
                    // Send the message through the channel into the select! loop
                    if let Err(error) = mpsc_tx.send(msg) {
                        eprintln!("REDIS: error sending: {}", error);
                    }
                };
                // ControlFlow::Break(())
                ControlFlow::<i32>::Continue
            }) {
                // Connection to Redis is lost, subscription aborts
                eprintln!("REDIS subscription error: {:?} ", error);
            };
        } else {
            // Connection to Redis failed, it is probably not reachable.
            println!("REDIS connection failed");
        }
        // Sleep for 1 second before reconnecting.
        sleep(Duration::from_millis(1000)).await;
    }
}
The code above is called from main like so, in parallel to other clients like WebSocket and MQTT, which do work.
#[tokio::main]
async fn main() {
    // ...
    tokio::spawn(task_redis::redis_async_task(storage.clone()))
    // ...
}
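For reference, the bare channel-plus-select! pattern described above does work in isolation; a minimal, self-contained sketch of it (with a plain String standing in for the Redis Msg, and purely illustrative names) looks like this:

use std::time::Duration;
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();

    // Producer: stands in for the subscribe callback that forwards messages.
    tokio::spawn(async move {
        for i in 0..3 {
            tx.send(format!("event {}", i)).unwrap();
            tokio::time::sleep(Duration::from_millis(100)).await;
        }
    });

    // Consumer: the select! loop that should print every forwarded message.
    loop {
        tokio::select! {
            maybe_msg = rx.recv() => match maybe_msg {
                Some(msg) => println!("received: {}", msg),
                None => break, // all senders dropped
            },
        }
    }
}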

Is there some way to shut down a tokio::runtime::Runtime?

The problem is in a microservice built with Tokio that connects to a database and other services asynchronously. When one of those connections fails, the microservice does not stop running. That is great when you need that behavior, but here I need the microservice to stop working when a connection is lost... so could you help me shut down the process safely?
src/main.rs
use tokio; // 1.0.0+

fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(workers_number)
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        // health checker connection
        let health_checker = match HealthChecker::new(some_configuration).await?;

        // some connection to db
        // ...

        // transport client connection
        // ...

        // so when a connection fails or is lost I need to
        // end the process like `std::process::abort()`
        // but I can't use it, because it's unsafe

        let mut task_handler = vec![];
        // create some task

        join_all(task_handler).await;
    });
}
Does anyone have any ideas?
You can call either of the Runtime shutdown methods: shutdown_timeout or shutdown_background.
If some kind of waiting is needed, you could spawn a task with a tokio::sync::oneshot channel that triggers the shutdown when signaled.
use core::time::Duration;
use tokio::time::sleep;
use tokio;

fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();

    let handle = rt.handle().clone();
    let (s, r) = tokio::sync::oneshot::channel();

    rt.spawn(async move {
        sleep(Duration::from_secs(1)).await;
        // signal that it is time to shut down
        let _ = s.send(0);
    });

    handle.block_on(async move {
        let _ = r.await;
        rt.shutdown_background();
    });
}
Playground
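If you want to give in-flight tasks a grace period instead of abandoning them immediately, the same oneshot-triggered pattern works with shutdown_timeout. A minimal sketch under that assumption:

use std::time::Duration;

fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();

    let (s, r) = tokio::sync::oneshot::channel::<()>();

    rt.spawn(async move {
        // pretend some connection check failed after a second
        tokio::time::sleep(Duration::from_secs(1)).await;
        let _ = s.send(());
    });

    // Wait for the shutdown signal on the main thread.
    rt.block_on(async move {
        let _ = r.await;
    });

    // Give outstanding tasks up to 5 seconds to finish, then drop them.
    rt.shutdown_timeout(Duration::from_secs(5));
}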

Read/write problem in async tokio application (beginner)

I'm new to network programming and threads in Rust, so I may be missing something obvious here. I've been following along with this, trying to build a simple chat application. Only, he does it with the standard library and I'm trying to do it with tokio. The functionality is very simple: the client sends a message to the server, the server acknowledges it and sends it back to the client. Here's my code for the client and server, stripped down as much as I can:
server.rs
#[tokio::main]
async fn main() {
    let server = TcpListener::bind("127.0.0.1:7878").await.unwrap();
    let mut clients = vec![];
    let (tx, mut rx) = mpsc::channel(32);
    loop {
        if let Ok((socket, addr)) = server.accept().await {
            let tx = tx.clone();
            let (mut reader, writer) = split(socket);
            clients.push(writer);
            tokio::spawn(async move {
                loop {
                    let mut buffer = vec![0; 1024];
                    reader.read(&mut buffer).await.unwrap();
                    // get message written by the client and print it
                    // then transmit it on the channel
                    let msg = buffer.into_iter().take_while(|&x| x != 0).collect::<Vec<_>>();
                    let msg = String::from_utf8(msg).expect("Invalid utf8 message");
                    println!("{}: {:?}", addr, msg);
                    match tx.send(msg).await {
                        Ok(_) => {}
                        Err(_) => { println!("Error"); }
                    }
                }
            });
        }
        // write each message received back to its client
        if let Some(msg) = rx.recv().await {
            clients = clients.into_iter().filter_map(|mut x| {
                println!("writing: {:?}", &msg);
                x.write(&msg.clone().into_bytes());
                Some(x)
            }).collect::<Vec<_>>();
        }
    }
}
client.rs
#[tokio::main]
async fn main() {
    let client = TcpStream::connect("127.0.0.1:7878").await.unwrap();
    let (tx, mut rx) = mpsc::channel::<String>(32);
    tokio::spawn(async move {
        loop {
            let mut buffer = vec![0; 1024];
            // get message sent by the server and print it
            match client.try_read(&mut buffer) {
                Ok(_) => {
                    let msg = buffer.into_iter().take_while(|&x| x != 0).collect::<Vec<_>>();
                    println!("Received from server: {:?}", msg);
                }
                Err(ref err) if err.kind() == io::ErrorKind::WouldBlock => {}
                Err(_) => {
                    println!("Connection with server was severed");
                    break;
                }
            }
            // get message transmitted from user input loop
            // then write it to the server
            match rx.try_recv() {
                Ok(message) => {
                    let mut buffer = message.clone().into_bytes();
                    buffer.resize(1024, 0);
                    match client.try_write(&buffer) {
                        Ok(_) => { println!("Write successful"); }
                        Err(_) => { println!("Write error"); }
                    }
                }
                Err(TryRecvError::Empty) => (),
                _ => break,
            }
        }
    });
    // user input loop here
    // takes user message and transmits it on the channel
}
Sending to the server works fine, and the server appears to be successfully writing as indicated by its output:
127.0.0.1:55346: "test message"
writing: "test message"
The issue is the client never reads back the message from the server, instead getting WouldBlock errors every time it hits the match client.try_read(&mut buffer) block.
If I stop the server while keeping the client running, the client is suddenly flooded with successful reads of empty messages:
Received from server: []
Received from server: []
Received from server: []
Received from server: []
Received from server: []
Received from server: []
Received from server: []
Received from server: []
...
Can anyone tell me what's going on?
Here's what happens in your server:
Wait for a client to connect.
When the client is connected, spawn a background task to receive from the client.
Try to read from the channel; since it is very unlikely that the client has already sent anything at this point, the channel is empty.
Loop → wait for another client to connect.
While waiting for another client, the background task receives the message from the first client and sends it to the channel, but the main task is blocked waiting for another client and never tries to read again from the channel.
The easiest way to get it to work is to get rid of the channel in the server and simply echo the message back from the spawned task (see the sketch below).
Another solution is to spawn an independent task to process the channel and write to the clients.
As for what happens when you kill the server: once the connection is lost, attempting to read from the socket does not return an error but instead returns an empty buffer.
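A minimal sketch of the first suggestion, assuming tokio's TcpListener and the AsyncReadExt/AsyncWriteExt traits (names and buffer size are illustrative):

use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() {
    let server = TcpListener::bind("127.0.0.1:7878").await.unwrap();
    loop {
        let (mut socket, addr) = server.accept().await.unwrap();
        // One task per client: read a message and write it straight back,
        // no channel and no shared list of writers needed.
        tokio::spawn(async move {
            let mut buffer = vec![0; 1024];
            loop {
                let n = socket.read(&mut buffer).await.unwrap();
                if n == 0 {
                    // A zero-length read means the client disconnected.
                    println!("{} disconnected", addr);
                    break;
                }
                println!("{}: {:?}", addr, String::from_utf8_lossy(&buffer[..n]));
                socket.write_all(&buffer[..n]).await.unwrap();
            }
        });
    }
}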

Why does holding a non-Send type across an await point result in a non-Send Future?

In the documentation for the Send trait, there is a nice example of how something like Rc is not Send, since cloning/dropping in two different threads can cause the reference count to get out of sync.
What is less clear, however, is why holding a binding to a non-Send type across an await point in an async fn causes the generated future to also be non-Send. I was able to find a workaround for when the compiler has been too conservative in the workarounds chapter of the async handbook, but it does not go as far as answering the question that I am asking here.
Perhaps someone could shed some light on this with an example of why having a non-Send type in a Future is ok, but holding it across an await is not?
When you use .await in an async function, the compiler builds a state machine behind the scenes. Each .await introduces a new state (where it waits for something), and the code in between forms the state transitions (aka tasks), which are triggered by some external event (e.g. from IO or a timer etc).
Each task gets scheduled to be executed by the async runtime, which could choose to use a different thread from the previous task. If the state transition is not safe to be sent between threads then the resulting Future is also not Send so that you get a compilation error if you try to execute it in a multi-threaded runtime.
It is completely OK for a Future not to be Send, it just means you can only execute it in a single-threaded runtime.
Perhaps someone could shed some light on this with an example of why having a non-Send type in a Future is ok, but holding it across an await is not?
Consider the following simple example:
async fn add_votes(current: Rc<Cell<i32>>, post: Url) {
    let new_votes = get_votes(&post).await;
    current.set(current.get() + new_votes);
}
The compiler will construct a state machine like this (simplified):
enum AddVotes {
    Initial {
        current: Rc<Cell<i32>>,
        post: Url,
    },
    WaitingForGetVotes {
        current: Rc<Cell<i32>>,
        fut: GetVotesFut,
    },
}

impl AddVotes {
    fn new(current: Rc<Cell<i32>>, post: Url) {
        AddVotes::Initial { current, post }
    }

    fn poll(&mut self) -> Poll {
        match self {
            AddVotes::Initial(state) => {
                let fut = get_votes(&state.post);
                *self = AddVotes::WaitingForGetVotes {
                    current: state.current,
                    fut,
                };
                Poll::Pending
            }
            AddVotes::WaitingForGetVotes(state) => {
                if let Poll::Ready(votes) = state.fut.poll() {
                    state.current.set(state.current.get() + votes);
                    Poll::Ready(())
                } else {
                    Poll::Pending
                }
            }
        }
    }
}
In a multithreaded runtime, each call to poll could be from a different thread, in which case the runtime would move the AddVotes to the other thread before calling poll on it. This won't work because Rc cannot be sent between threads.
However, if the future just used an Rc within the same state transition, it would be fine, e.g. if votes was just an i32:
async fn add_votes(current: i32, post: Url) -> i32 {
    let new_votes = get_votes(&post).await;

    // use an Rc for some reason:
    let rc = Rc::new(1);
    println!("rc value: {:?}", rc);

    current + new_votes
}
In which case, the state machine would look like this:
enum AddVotes {
    Initial {
        current: i32,
        post: Url,
    },
    WaitingForGetVotes {
        current: i32,
        fut: GetVotesFut,
    },
}
The Rc isn't captured in the state machine because it is created and dropped within the state transition (task), so the whole state machine (aka Future) is still Send.
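As a concrete illustration of the earlier point that a non-Send future is still fine on a single-threaded runtime: with tokio (used here only as an example runtime), such a future can be spawned via a LocalSet, which requires 'static but not Send. A minimal sketch:

use std::rc::Rc;
use tokio::task::LocalSet;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let local = LocalSet::new();
    local
        .run_until(async {
            // This future holds an Rc across an .await, so it is not Send,
            // but spawn_local does not require Send.
            tokio::task::spawn_local(async {
                let counter = Rc::new(5);
                tokio::task::yield_now().await;
                println!("counter: {}", counter);
            })
            .await
            .unwrap();
        })
        .await;
}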

didFinishUserInfoTransfer finished successfully - but still with outstandingUserInfoTransfers Objects

First I use the transferUserInfo method in order to send a dictionary from the iPhone to the Apple Watch:
let dicty = //...my dictionary of property-list values...

if WCSession.isSupported() {
    let session = WCSession.defaultSession()
    if session.paired == true { // Check if your Watch is paired with your iPhone
        if session.watchAppInstalled == true { // Check if your Watch-App is installed on your Watch
            session.transferUserInfo(dicty)
        }
    }
}
Then I am using the following delegate callback method "didFinishUserInfoTransfer" to check upon the state of the transfer:
func session(session: WCSession, didFinishUserInfoTransfer userInfoTransfer: WCSessionUserInfoTransfer, error: NSError?) {
    if error == nil {
        let session = WCSession.defaultSession()
        let transfers = session.outstandingUserInfoTransfers
        if transfers.count > 0 { //--> is always > 0, why ?????????
            for trans in transfers {
                trans.cancel() // cancel transfer that will be sent by updateApplicationContext
                let dict = trans.userInfo
                session.transferUserInfo(dict) //--> creates endless-transfer cycle !!!!!
            }
        }
    }
    else {
        print(error)
    }
}
In the Apple documentation, it says about the didFinishUserInfoTransfer method:
The session object calls this method when a data transfer initiated by the
current app finished, either successfully or unsuccessfully. Use this method
to note that the transfer completed or to respond to errors, perhaps by
trying to send the data again at a later time.
So far, so good. But there is something I do not understand:
If didFinishUserInfoTransfer is entered and error == nil, why on earth can the session.outstandingUserInfoTransfers count be bigger than zero?
According to the Apple documentation, the only non-error state of didFinishUserInfoTransfer should be when the transfer is over! But it does not seem to be over... Why?
Thanks for any clarification on this.
I would also be glad of any example code showing how to use these 3 methods correctly!
(i.e.
session.transferUserInfo(dicty)
didFinishUserInfoTransfer
session.outstandingUserInfoTransfers)
It seems that the userInfoTransfer that triggers the didFinishUserInfoTransfer callback is not removed from the outstandingUserInfoTransfers until the delegate callback has returned. To get the behavior you want (where count can go down to 0) you'd want to dispatch_async away from the delegate callback thread. So this should work:
func session(session: WCSession, didFinishUserInfoTransfer userInfoTransfer: WCSessionUserInfoTransfer, error: NSError?) {
    if error == nil {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
            let transfers = session.outstandingUserInfoTransfers
            if transfers.count > 0 { //--> will in most cases now be 0
                for trans in transfers {
                    trans.cancel() // cancel transfer that will be sent by updateApplicationContext
                    let dict = trans.userInfo
                    session.transferUserInfo(dict) // ***
                }
            }
        }
    }
    else {
        print(error)
    }
}
That said, I don't quite understand why you'd want to cancel all the remaining outstanding userInfoTransfers whenever any of them completes, just to re-queue them (spot in question is indicated by ***)
There is a little misunderstanding, as far as I read the docs: only send again if an error occurs. Having outstanding userInfoTransfers when no error has been raised is the expected behavior; they have not yet been sent successfully and are still queued.
By the way, the code below uses the current DispatchQueue API.
func session(_ session: WCSession, didFinish userInfoTransfer: WCSessionUserInfoTransfer, error: Error?) {
    if error != nil { // resend if an error occurred
        DispatchQueue.main.async {
            let transfers = session.outstandingUserInfoTransfers
            if transfers.count > 0 {
                // print("open transfers: \(transfers.count)")
                for trans in transfers {
                    // print("resend transfer")
                    trans.cancel() // cancel transfer
                    let dict = trans.userInfo
                    session.transferUserInfo(dict) // send again
                }
            }
        }
    }
}
