reqwest POST request freezes after a random amount of time

I started learning Rust 2 weeks ago and have been writing an application that watches a log file and sends the information in bulk to an Elasticsearch DB.
The problem is that after a certain amount of time it freezes (using 100% CPU) and I don't understand why.
I've cut the code down a lot to try to figure out the issue, but according to the CLion debugger it still keeps freezing on this line:
let _response = reqwest::Client::new()
    .post("http://127.0.0.1/test.php")
    .header("Content-Type", "application/json")
    .body("{\"test\": true}")
    .timeout(Duration::from_secs(30))
    .send() // <-- Exactly here
    .await;
It freezes and doesn't return any error message.
This is the code in context:
use std::{env};
use std::io::{stdout, Write};
use std::path::Path;
use std::time::Duration;
use logwatcher::{LogWatcher, LogWatcherAction};
use serde_json::{json, Value};
use serde_json::Value::Null;
use tokio;
#[tokio::main]
async fn main() {
    let mut log_watcher = LogWatcher::register("/var/log/test.log").unwrap();
    let mut counter = 0;
    let BULK_SIZE = 500;

    log_watcher.watch(&mut move |line: String| { // This triggers each time a new line is appended to /var/log/test.log
        counter += 1;
        if counter >= BULK_SIZE {
            futures::executor::block_on(async { // This has to be async because log_watcher is not async
                let _response = reqwest::Client::new()
                    .post("http://127.0.0.1/test.php") // <-- This is just for testing, it fails towards the DB too
                    .header("Content-Type", "application/json")
                    .body("{\"test\": true}")
                    .timeout(Duration::from_secs(30))
                    .send() // <-- Freezes here
                    .await;

                if _response.is_ok() {
                    println!("Ok");
                }
            });
            counter = 0;
        }
        LogWatcherAction::None
    });
}
The log file gets about 625 new lines every minute. The freeze happens after roughly 5,500 to 25,000 lines have gone through; it seems fairly random.
I suspect the issue is something to do with LogWatcher, reqwest, the block_on, or the mix of async in general.
Does anyone have any clue why it randomly freezes?

The problem was indeed the mix of async with Tokio and block_on, NOT reqwest itself.
It was solved by making main non-async and using a Tokio runtime's block_on for the async calls instead of futures::executor::block_on.
fn main() {
    let mut log_watcher = LogWatcher::register("/var/log/test.log").unwrap();
    let mut counter = 0;
    let BULK_SIZE = 500;

    log_watcher.watch(&mut move |line: String| {
        counter += 1;
        if counter >= BULK_SIZE {
            tokio::runtime::Builder::new_multi_thread()
                .enable_all()
                .build()
                .unwrap()
                .block_on(async {
                    let _response = reqwest::Client::new()
                        .post("http://127.0.0.1/test.php")
                        .header("Content-Type", "application/json")
                        .body("{\"test\": true}")
                        .timeout(Duration::from_secs(30))
                        .send()
                        .await;

                    if _response.is_ok() {
                        println!("Ok");
                    }
                });
            counter = 0;
        }
        LogWatcherAction::None
    });
}
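Note that the fix above builds a fresh multi-thread runtime (and a new reqwest::Client) on every bulk flush. A minimal follow-up sketch, assuming the same LogWatcher callback as in the question, that creates the runtime and client once up front and reuses them:
use std::time::Duration;
use logwatcher::{LogWatcher, LogWatcherAction};

fn main() {
    // One long-lived runtime instead of one per bulk flush.
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();
    // Build the client inside a runtime context and reuse its connection pool.
    let client = rt.block_on(async { reqwest::Client::new() });

    let mut log_watcher = LogWatcher::register("/var/log/test.log").unwrap();
    let mut counter = 0;
    let bulk_size = 500;

    log_watcher.watch(&mut move |_line: String| {
        counter += 1;
        if counter >= bulk_size {
            // Drive the request to completion on the long-lived runtime.
            let response = rt.block_on(
                client
                    .post("http://127.0.0.1/test.php")
                    .header("Content-Type", "application/json")
                    .body("{\"test\": true}")
                    .timeout(Duration::from_secs(30))
                    .send(),
            );
            if response.is_ok() {
                println!("Ok");
            }
            counter = 0;
        }
        LogWatcherAction::None
    });
}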

Related

How to integrate async data collection with threadpool data processing in Rust

I'd like to improve the integration of my async data collection with my rayon data processing by overlapping the retrieval and the processing. Currently, I pull lots of pages from a web site using normal async code. Once that is complete, I do the CPU-intensive work using rayon's par_iter.
It seems like I should be able to easily overlap the processing, so that I'm not waiting for every last page before I begin the grunt work. Every page that I retrieve is independent of the others, so there is no need to wait before the conversion.
Here's what I have working currently (simplified just a bit):
use rayon::prelude::*;
use futures::{stream, StreamExt};
use reqwest::{Client, Result};
const CONCURRENT_REQUESTS: usize = usize::MAX;
const MAX_PAGE: usize = 1000;
#[tokio::main]
async fn main() {
    // get data from server
    let client = Client::new();
    let bodies: Vec<Result<String>> = stream::iter(1..MAX_PAGE + 1)
        .map(|page_number| {
            let client = &client;
            async move {
                client
                    .get(format!("https://someurl?{page_number}"))
                    .send()
                    .await?
                    .text()
                    .await
            }
        })
        .buffer_unordered(CONCURRENT_REQUESTS)
        .collect()
        .await;

    // transform the data
    let mut rows: Vec<MyRow> = bodies
        .par_iter()
        .filter_map(|body| body.as_ref().ok())
        .map(|data| {
            let page = serde_json::from_str::<MyPage>(data).unwrap();
            page.rows
                .iter()
                .map(|x| Row::new(x))
                .collect::<Vec<MyRow>>()
        })
        .flatten()
        .collect();

    // do something with rows
}
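As a rough sketch of the overlap being asked for (not from the original post; parse_rows is a placeholder for the serde_json MyPage/MyRow conversion): stream each body into a channel as it arrives, and let rayon drain the channel from a plain thread via par_bridge().
use futures::{stream, StreamExt};
use rayon::iter::{ParallelBridge, ParallelIterator};
use reqwest::Client;
use std::sync::mpsc;

const CONCURRENT_REQUESTS: usize = 50;
const MAX_PAGE: usize = 1000;

// Placeholder for the serde_json::from_str::<MyPage>(..) + Row::new(..) step.
fn parse_rows(body: &str) -> Vec<String> {
    body.lines().map(str::to_owned).collect()
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // CPU-bound side: a plain thread, so rayon never blocks the async runtime.
    let worker = std::thread::spawn(move || -> Vec<String> {
        rx.into_iter()          // yields bodies as soon as they are sent
            .par_bridge()       // fan out to the rayon thread pool
            .flat_map(|body| parse_rows(&body))
            .collect()
    });

    // I/O-bound side: the same buffered fetch as above, but each body is
    // handed to the worker immediately instead of being collected first.
    let client = Client::new();
    stream::iter(1..=MAX_PAGE)
        .map(|page_number| {
            let client = &client;
            async move {
                client
                    .get(format!("https://someurl?{page_number}"))
                    .send()
                    .await?
                    .text()
                    .await
            }
        })
        .buffer_unordered(CONCURRENT_REQUESTS)
        .for_each(|body| {
            let tx = tx.clone();
            async move {
                if let Ok(body) = body {
                    let _ = tx.send(body); // ignore the error if the worker has exited
                }
            }
        })
        .await;

    drop(tx); // close the channel so par_bridge() sees the end of input
    let rows = worker.join().unwrap();
    println!("processed {} rows", rows.len());
}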

Is there some way how to shutdown tokio::runtime::Runtime?

The problem is in a microservice built on Tokio that connects to a DB and other services asynchronously. When one of those connections fails, the microservice doesn't stop. That's great when you want that behavior, but in my case I need the microservice to stop working when a connection is lost. Could you help me shut the process down safely?
src/main.rs
use tokio; // 1.0.0+
fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(workers_number)
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        // health checker connection
        let health_checker = HealthChecker::new(some_configuration).await?;

        // some connection to db
        // ...

        // transport client connection
        // ...

        // so when a connection fails or is lost I need to
        // end the process like `std::process::abort()`,
        // but I can't use it because it's unsafe

        let mut task_handler = vec![];
        // create some tasks

        join_all(task_handler).await;
    });
}
Does anyone have some ideas?
You can call either of the Runtime shutdown methods: shutdown_timeout or shutdown_background.
If some kind of waiting is needed, you could spawn a task with a tokio::sync::oneshot that triggers the shutdown when signaled.
use core::time::Duration;
use tokio::time::sleep;

fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();

    let handle = rt.handle().clone();
    let (s, r) = tokio::sync::oneshot::channel();

    rt.spawn(async move {
        sleep(Duration::from_secs(1)).await;
        let _ = s.send(0);
    });

    handle.block_on(async move {
        let _ = r.await;
        rt.shutdown_background();
    });
}
Playground
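For completeness, a minimal sketch of the other method mentioned above, shutdown_timeout, which gives in-flight tasks a grace period before dropping them (the service body is elided here):
use std::time::Duration;

fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        // ... run the microservice until a connection is lost ...
    });

    // Wait up to five seconds for outstanding tasks, then drop them.
    rt.shutdown_timeout(Duration::from_secs(5));
}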

How to run multiple Tokio async tasks in a loop without using tokio::spawn?

I built a LED clock that also displays weather. My program does a couple of different things in a loop, each thing with a different interval:
updates the LEDs every 50ms,
checks the light level (to adjust the brightness) every 1 second,
fetches weather every 10 minutes,
actually some more, but that's irrelevant.
Updating the LEDs is the most critical: I don't want this to be delayed when e.g. weather is being fetched. This should not be a problem as fetching weather is mostly an async HTTP call.
Here's the code that I have:
let mut measure_light_stream = tokio::time::interval(Duration::from_secs(1));
let mut update_weather_stream = tokio::time::interval(WEATHER_FETCH_INTERVAL);
let mut update_leds_stream = tokio::time::interval(UPDATE_LEDS_INTERVAL);

loop {
    tokio::select! {
        _ = measure_light_stream.tick() => {
            let light = lm.get_light();
            light_smooth.sp = light;
        },
        _ = update_weather_stream.tick() => {
            let fetched_weather = weather_service.get(&config).await;
            // Store the fetched weather for later access from the displaying function.
            weather_clock.weather = fetched_weather.clone();
        },
        _ = update_leds_stream.tick() => {
            // Some code here that actually sets the LEDs.
            // This code accesses the weather_clock, the light level etc.
        },
    }
}
I realised the code doesn't do what I wanted it to do: fetching the weather blocks the execution of the loop. I see why; the docs of tokio::select! say the other branches are cancelled as soon as the update_weather_stream.tick() expression completes.
How do I do this in such a way that while fetching the weather is waiting on the network, the LEDs are still updated? I figured out I could use tokio::spawn to start a separate non-blocking "thread" for fetching the weather, but then I have problems with weather_service not being Send, let alone weather_clock not being shareable between threads. I don't want this complication; I'm fine with everything running in a single thread, just like select! does.
Reproducible example
use std::time::Duration;
use tokio::time::{interval, sleep};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut slow_stream = interval(Duration::from_secs(3));
    let mut fast_stream = interval(Duration::from_millis(200));

    // Note how access to this data is straightforward, I do not want
    // this to get more complicated, e.g. care about threads and Send.
    let mut val = 1;

    loop {
        tokio::select! {
            _ = fast_stream.tick() => {
                println!(".{}", val);
            },
            _ = slow_stream.tick() => {
                println!("Starting slow operation...");
                // The problem: During this await the dots are not printed.
                sleep(Duration::from_secs(1)).await;
                val += 1;
                println!("...done");
            },
        }
    }
}
You can use tokio::join! to run multiple async operations concurrently within the same task.
Here's an example:
async fn measure_light(halt: &Cell<bool>) {
    while !halt.get() {
        let light = lm.get_light();
        // ....
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}

async fn blink_led(halt: &Cell<bool>) {
    while !halt.get() {
        // LED blinking code
        tokio::time::sleep(UPDATE_LEDS_INTERVAL).await;
    }
}

async fn poll_weather(halt: &Cell<bool>) {
    while !halt.get() {
        let weather = weather_service.get(&config).await;
        // ...
        tokio::time::sleep(WEATHER_FETCH_INTERVAL).await;
    }
}

// example of how to terminate execution
async fn terminate(halt: &Cell<bool>) {
    tokio::time::sleep(Duration::from_secs(10)).await;
    halt.set(true);
}

#[tokio::main]
async fn main() {
    let halt = Cell::new(false);
    tokio::join!(
        measure_light(&halt),
        blink_led(&halt),
        poll_weather(&halt),
        terminate(&halt),
    );
}
If you're using tokio::TcpStream or other non-blocking IO, then it should allow for concurrent execution.
I've added a Cell flag for halting execution as an example. You can use the same technique to share any mutable state between join branches.
EDIT: Same thing can be done with tokio::select!. The main difference with your code is that the actual "business logic" is inside the futures awaited by select.
select allows you to drop unfinished futures instead of waiting for them to exit on their own (so the halt termination flag is not necessary).
#[tokio::main]
async fn main() {
    tokio::select! {
        _ = measure_light() => {},
        _ = blink_led() => {},
        _ = poll_weather() => {},
    }
}
Here's a concrete solution, based on the second part of stepan's answer:
use std::time::Duration;
use tokio::time::sleep;

#[tokio::main]
async fn main() {
    // Cell is an acceptable complication when accessing the data.
    let val = std::cell::Cell::new(1);

    tokio::select! {
        _ = async {
            loop {
                println!(".{}", val.get());
                sleep(Duration::from_millis(200)).await;
            }
        } => {},
        _ = async {
            loop {
                println!("Starting slow operation...");
                // The dots keep printing during this await.
                sleep(Duration::from_secs(1)).await;
                val.set(val.get() + 1);
                println!("...done");
                sleep(Duration::from_secs(3)).await;
            }
        } => {},
    }
}
Playground link
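An alternative sketch, not part of either answer above: tokio::task::spawn_local inside a tokio::task::LocalSet sidesteps the Send concern raised in the question, since locally spawned tasks may hold !Send data (here an Rc<Cell<_>> stands in for the shared state).
use std::cell::Cell;
use std::rc::Rc;
use std::time::Duration;
use tokio::time::{interval, sleep};

#[tokio::main]
async fn main() {
    let local = tokio::task::LocalSet::new();
    local
        .run_until(async {
            // Rc<Cell<_>> is !Send, which is fine for spawn_local.
            let val = Rc::new(Cell::new(1));

            let fast_val = val.clone();
            let fast = tokio::task::spawn_local(async move {
                let mut tick = interval(Duration::from_millis(200));
                loop {
                    tick.tick().await;
                    println!(".{}", fast_val.get());
                }
            });

            let slow = tokio::task::spawn_local(async move {
                let mut tick = interval(Duration::from_secs(3));
                loop {
                    tick.tick().await;
                    println!("Starting slow operation...");
                    // The dots keep printing during this await.
                    sleep(Duration::from_secs(1)).await;
                    val.set(val.get() + 1);
                    println!("...done");
                }
            });

            // Neither task finishes; this keeps the LocalSet alive forever.
            let _ = tokio::join!(fast, slow);
        })
        .await;
}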

Why does tokio::spawn have a delay when called next to crossbeam_channel::select?

I'm creating a task which will spawn other tasks. Some of them will take some time, so they cannot be awaited, but they can run in parallel:
src/main.rs
use crossbeam::crossbeam_channel::{bounded, select};

#[tokio::main]
async fn main() {
    let (s, r) = bounded::<usize>(1);

    tokio::spawn(async move {
        let mut counter = 0;
        loop {
            let loop_id = counter.clone();
            tokio::spawn(async move { // why this one was not fired?
                println!("inner task {}", loop_id);
            }); // .await.unwrap(); - solves issue, but this is long task which cannot be awaited
            println!("loop {}", loop_id);
            select! {
                recv(r) -> rr => {
                    // match rr {
                    //     Ok(ee) => {
                    //         println!("received from channel {}", loop_id);
                    //         tokio::spawn(async move {
                    //             println!("received from channel task {}", loop_id);
                    //         });
                    //     },
                    //     Err(e) => println!("{}", e),
                    // };
                },
                // more recv(some_channel) ->
            }
            counter = counter + 1;
        }
    });

    // let s_clone = s.clone();
    // tokio::spawn(async move {
    //     s_clone.send(2).unwrap();
    // });

    loop {
        // rest of the program
    }
}
I've noticed strange behavior. This outputs:
loop 0
I was expecting it to also output inner task 0.
If I send a value to channel, the output will be:
loop 0
inner task 0
loop 1
This is missing inner task 1.
Why is the inner task spawned with one loop of delay?
I first noticed this behavior with 'received from channel task' being delayed by one loop, but when I reduced the code to prepare this sample it started to happen with 'inner task'. It might be worth mentioning that if I write a second tokio::spawn right after the first, only the last one has this issue. Is there something I should be aware of when calling tokio::spawn and select!? What causes this one loop of delay?
Cargo.toml dependencies
[dependencies]
tokio = { version = "0.2", features = ["full"] }
crossbeam = "0.7"
Rust 1.46, Windows 10
select! is blocking, and the docs for tokio::spawn say:
The spawned task may execute on the current thread, or it may be sent to a different thread to be executed.
In this case, the select! "future" is actually a blocking function, and spawn doesn't use a new thread (either in the first invocation or the one inside the loop).
Because you don't tell tokio that you are going to block, tokio doesn't think another thread is needed (from tokio's perspective, you only have 3 futures which should never block, so why would you need another thread anyway?).
The solution is to use tokio::task::spawn_blocking for the select!-ing closure (which will no longer be a future, so async move {} becomes move || {}).
Now tokio will know that this function actually blocks, and will move it to another thread (while keeping all the actual futures in other execution threads).
use crossbeam::crossbeam_channel::{bounded, select};

#[tokio::main]
async fn main() {
    let (s, r) = bounded::<usize>(1);

    tokio::task::spawn_blocking(move || {
        // ...
    });

    loop {
        // rest of the program
    }
}
Link to playground
Another possible solution is to use a non-blocking channel like tokio::sync::mpsc, on which you can await and get the expected behavior. See the playground example with a direct recv().await, or with tokio::select!, like this:
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (mut s, mut r) = mpsc::channel::<usize>(1);

    tokio::spawn(async move {
        loop {
            // ...
            tokio::select! {
                Some(i) = r.recv() => {
                    println!("got = {}", i);
                }
            }
        }
    });

    loop {
        // rest of the program
    }
}
Link to playground

Sharing data between async requests in Tokio

EDIT: I refactored the code to make it simpler.
I'm writing a small program to check a website for dead links, using tokio and reqwest to make requests asynchronously without the need for threading. But I also need to be able to return something from each of the requests that tokio is running; namely, whether a request failed or not.
fn fetch(req: Vec<&'static str>) {
    let client = Client::new();
    let (tx, rx) = mpsc::channel();
    let req_len = req.len();

    let work = stream::iter_ok(req)
        .map(move |url| client.get(url).send())
        .buffer_unordered(PARALLEL_REQUESTS)
        .then(move |response| {
            let this_tx = tx.clone();
            match response {
                Ok(x) => {
                    format_response(x);
                    this_tx.send(1).unwrap();
                }
                Err(x) => {
                    format_error(x);
                }
            }
            future::ok(())
        })
        .for_each(|n| Ok(()));

    tokio::run(work);
}
The code works, but I'd like some feedback as to what the best way of writing this in Rust would be.
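As a rough sketch of the same fetch-and-report flow in async/await style (tokio 1.x, reqwest 0.11+), with println!/eprintln! standing in for the poster's format_response and format_error helpers, each request can resolve to a success flag that is collected and returned to the caller:
use futures::{stream, StreamExt};
use reqwest::Client;

const PARALLEL_REQUESTS: usize = 8;

async fn fetch(req: Vec<&'static str>) -> Vec<bool> {
    let client = Client::new();
    stream::iter(req)
        .map(|url| {
            let client = &client;
            async move { client.get(url).send().await }
        })
        .buffer_unordered(PARALLEL_REQUESTS)
        .map(|response| match response {
            Ok(resp) => {
                println!("{} -> {}", resp.url(), resp.status()); // format_response stand-in
                true
            }
            Err(err) => {
                eprintln!("request failed: {err}"); // format_error stand-in
                false
            }
        })
        .collect()
        .await
}

#[tokio::main]
async fn main() {
    let results = fetch(vec!["https://example.com", "https://example.org"]).await;
    let ok = results.iter().filter(|ok| **ok).count();
    println!("{} of {} requests succeeded", ok, results.len());
}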
