I am trying to make a download fully asynchronous. The download itself already works: using std::fs::File it is fine, but I wanted to try tokio's File to make the code fully async.
If I just download the file and discard the data, it works. But when I use tokio::fs::File to write the data to disk asynchronously, the download gets stuck at random positions. Sometimes at 1.1 MB, mostly at ~1.6 MB. The total is ~9 MB.
My test URL is https://github.com/Kitware/CMake/releases/download/v3.20.5/cmake-3.20.5.tar.gz
The last output I get is the debug!("Received...") line.
The nearly complete output is:
DEBUG: Temp File: /tmp/26392_1625868800106141_ZhWUtnaD.tmp
DEBUG: add_pem_file processed 133 valid and 0 invalid certs
DEBUG: No cached session for DNSNameRef("github.com")
DEBUG: Not resuming any session
DEBUG: Using ciphersuite TLS13_CHACHA20_POLY1305_SHA256
DEBUG: Not resuming
DEBUG: TLS1.3 encrypted extensions: [ServerNameAck, Protocols([PayloadU8([104, 50])])]
DEBUG: ALPN protocol is Some(b"h2")
DEBUG: Ticket saved
DEBUG: Ticket saved
DEBUG: Status: 302 Found
[...]
DEBUG: content-length: 621
DEBUG: Sending warning alert CloseNotify
DEBUG: add_pem_file processed 133 valid and 0 invalid certs
DEBUG: No cached session for DNSNameRef("github-releases.githubusercontent.com")
DEBUG: Not resuming any session
DEBUG: Using ciphersuite TLS13_CHACHA20_POLY1305_SHA256
DEBUG: Not resuming
DEBUG: TLS1.3 encrypted extensions: [ServerNameAck, Protocols([PayloadU8([104, 50])])]
DEBUG: ALPN protocol is Some(b"h2")
DEBUG: Ticket saved
DEBUG: Status: 200 OK
[...]
DEBUG: content-length: 9441947
DEBUG: Received 16384 bytes (16384 total)
DEBUG: Written 16384 bytes (16384 total)
DEBUG: Received 9290 bytes (25674 total)
DEBUG: Written 9290 bytes (25674 total)
DEBUG: Received 16384 bytes (42058 total)
DEBUG: Written 16384 bytes (42058 total)
[...]
DEBUG: Received 8460 bytes (1192010 total)
DEBUG: Written 8460 bytes (1192010 total)
DEBUG: Received 8948 bytes (1200958 total)
DEBUG: Written 8948 bytes (1200958 total)
DEBUG: Received 8460 bytes (1209418 total)
DEBUG: Written 8460 bytes (1209418 total)
DEBUG: Received 8948 bytes (1218366 total)
[PROCESS STUCK HERE]
It feels like there is a deadlock or something blocking the write, but I can't figure out what's wrong. Why does this get stuck?
Code:
async fn download_http<P: AsRef<Path>>(url: &Url, localpath: P) -> MyResult<()> {
let mut uri = hyper::Uri::from_str(url.as_str())?;
let mut total_read: usize = 0;
let mut total_written: usize = 0;
let mut localfile = File::create(localpath).await?;
// Redirection Limit
for _ in 0..10 {
let https = HttpsConnector::with_native_roots();
let client = Client::builder().build::<_, hyper::Body>(https);
let mut resp = client.get(uri.clone()).await?;
let status = resp.status();
let header = resp.headers();
debug!("Status: {}", status);
for (key, value) in resp.headers() {
debug!("HEADER {}: {}", key, value.to_str().unwrap());
}
if status.is_success() {
// tokio::io::copy(&mut resp.body_mut().data(), &mut localfile).await?;
let expected_size = header.get("content-length").map(|v| v.to_str().unwrap().parse::<usize>().unwrap());
while let Some(next) = resp.data().await {
let mut chunk = next?;
let num_bytes = chunk.len();
total_read += num_bytes;
debug!("Received {} bytes ({} total)", num_bytes, total_read);
// localfile.write_all(&chunk).await?;
let written = localfile.write(&chunk).await?;
total_written += written;
debug!("Written {} bytes ({} total)", written, total_written);
if total_read != total_written {
error!("Could not write all data!");
}
if expected_size == Some(total_read) {
return Ok(());
}
}
return Ok(());
} else if status.is_redirection() {
let location = header.get("location").unwrap().to_str().unwrap();
uri = hyper::Uri::from_str(location)?;
} else {
let uri_str = uri.to_string();
return Err(MyError::CustomError(CustomError::from_string(format!("HTTP responded with status {}: {}", status, uri_str))))
}
}
Err(MyError::CustomError(CustomError::from_string(format!("HTTP too many redirections"))))
}
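As an aside, the content-length handling above unwraps both to_str and parse, so a malformed header would panic. A hedged alternative (the helper name parse_content_length is mine, not from the original code) folds both failure modes into None:

```rust
// Hypothetical helper, not part of the original code: parse an optional
// Content-Length header value, treating any malformed value as absent.
fn parse_content_length(raw: Option<&str>) -> Option<usize> {
    raw.and_then(|s| s.trim().parse::<usize>().ok())
}

fn main() {
    assert_eq!(parse_content_length(Some("9441947")), Some(9_441_947));
    assert_eq!(parse_content_length(Some("not-a-number")), None);
    assert_eq!(parse_content_length(None), None);
    println!("ok");
}
```

In the loop this would replace the .map(|v| ... unwrap()) chain, and the expected_size comparison would work unchanged.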
Crates (incomplete; only the relevant ones):
futures = "0.3"
futures-cpupool = "0.1"
hyper = { version = "0.14", features = ["full"] }
hyper-rustls = "0.22"
rustls = "0.19"
tokio = { version = "1.6", features = ["full"] }
url = "2.2"
As you can see, the download loop matches the example code in the Hyper documentation.
I added the tokio::fs::File writing part.
I added debug output (mostly byte counts) to find out what's going on and where.
The comments show the ideal way: using write_all or, if possible, io::copy.
But I can't manage to get it working without it getting stuck.
Could you please give me advice on where my mistake is?
Thank you very much.
Thanks to @HHK in the comments above.
He recommended building a minimal, reproducible example. While doing that, the example worked fine.
So I iteratively added the code from the original project around it.
The last piece I added was a relic I had not removed when making the project async and learning about async:
a futures block_on call inside an async function that itself called an async function, which randomly blocked the whole program.
So I should have made a fully working piece of code before posting, which would have led me to the original problem and saved me a lot of headache.
For the future reader:
futures = "0.3"
hyper = { version = "0.14", features = ["full"] }
hyper-rustls = "0.22"
rustls = "0.19"
log = "0.4"
tokio = { version = "1.6", features = ["full"] }
url = "2.2"
Code:
use std::io::{stderr, stdout, Write};
use std::path::{Path, PathBuf};
use std::str::FromStr;
use futures::executor::block_on;
use hyper::body::HttpBody;
use hyper::Client;
use hyper_rustls::HttpsConnector;
use log::{debug, error, LevelFilter, Log, Metadata, Record};
use tokio::fs::File;
use tokio::io::AsyncWriteExt;
use url::Url;
async fn download_http<P: AsRef<Path>>(url: &Url, localpath: P) -> Result<(), ()> {
let mut uri = hyper::Uri::from_str(url.as_str()).unwrap();
let mut total_read: usize = 0;
let mut total_written: usize = 0;
let mut localfile = File::create(localpath).await.unwrap();
// Redirection Limit
for _ in 0..10 {
let https = HttpsConnector::with_native_roots();
let client = Client::builder().build::<_, hyper::Body>(https);
let mut resp = client.get(uri.clone()).await.unwrap();
let status = resp.status();
let header = resp.headers();
debug!("Status: {}", status);
for (key, value) in resp.headers() {
debug!("HEADER {}: {}", key, value.to_str().unwrap());
}
if status.is_success() {
// tokio::io::copy(&mut resp.body_mut().data(), &mut localfile).await.unwrap();
let expected_size = header.get("content-length").map(|v| v.to_str().unwrap().parse::<usize>().unwrap());
while let Some(next) = resp.data().await {
let chunk = next.unwrap();
let num_bytes = chunk.len();
total_read += num_bytes;
debug!("Received {} bytes ({} total)", num_bytes, total_read);
// localfile.write_all(&chunk).await.unwrap();
let written = localfile.write(&chunk).await.unwrap();
total_written += written;
debug!("Written {} bytes ({} total)", written, total_written);
if total_read != total_written {
error!("Could not write all data!");
}
if expected_size == Some(total_read) {
return Ok(());
}
}
return Ok(());
} else if status.is_redirection() {
let location = header.get("location").unwrap().to_str().unwrap();
uri = hyper::Uri::from_str(location).unwrap();
} else {
return Err(());
}
}
return Err(());
}
struct Logger;
impl Log for Logger {
fn enabled(&self, _: &Metadata) -> bool {
true
}
fn log(&self, record: &Record) {
eprintln!("{}: {}", record.level().as_str().to_uppercase(), record.args());
stdout().flush().unwrap();
stderr().flush().unwrap();
}
fn flush(&self) {
stdout().flush().unwrap();
stderr().flush().unwrap();
}
}
static LOGGER: Logger = Logger;
#[tokio::main]
async fn main() {
log::set_logger(&LOGGER).map(move |()| log::set_max_level(LevelFilter::Debug)).unwrap();
let url = Url::parse("https://github.com/Kitware/CMake/releases/download/v3.20.5/cmake-3.20.5.tar.gz").unwrap();
let localfile = PathBuf::from("/tmp/cmake-3.20.5.tar.gz");
block_on(download_http(&url, &localfile)).unwrap();
// download_http(&url, &localfile).await.unwrap();
}
Switching between block_on and plain .await makes the difference.
Now I can switch back to using write_all and remove my debugging code.
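On the write versus write_all point above: Write/AsyncWrite's write may perform a partial write and merely returns how many bytes were accepted, while write_all loops internally until the whole buffer is written. A minimal std-only sketch of the contract (the async traits mirror it):

```rust
use std::io::Write;

fn main() {
    let data = b"hello world";

    // `write` returns how many bytes were accepted; the contract only
    // guarantees some prefix was written, so callers must loop.
    let mut partial: Vec<u8> = Vec::new();
    let n = partial.write(data).unwrap();
    assert!(n <= data.len());

    // `write_all` performs that loop itself and either writes everything
    // or returns an error.
    let mut complete: Vec<u8> = Vec::new();
    complete.write_all(data).unwrap();
    assert_eq!(complete, data);

    println!("write accepted {} bytes; write_all wrote {}", n, complete.len());
}
```

This is why the debug counters in the question could, in principle, have diverged with write but never with write_all.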
Related
I'm sending large objects over a network and noticed that using a single network connection is significantly slower than using multiple.
Server code:
use async_std::{
io::{BufWriter, Write},
net::TcpListener,
prelude::*,
task,
};
use bench_utils::{end_timer, start_timer};
use futures::stream::{FuturesOrdered, StreamExt};
async fn send(buf: &[u8], writer: &mut (impl Write + Unpin)) {
// Send the message length
writer.write_all(&(buf.len() as u64).to_le_bytes()).await.unwrap();
// Send the rest of the message
writer.write_all(&buf).await.unwrap();
writer.flush().await.unwrap();
}
fn main() {
task::block_on(async move {
let listener = TcpListener::bind("0.0.0.0:8000").await.unwrap();
let mut incoming = listener.incoming();
let mut writers = Vec::with_capacity(16);
for _ in 0..16 {
let stream = incoming.next().await.unwrap().unwrap();
writers.push(BufWriter::new(stream))
};
let buf = vec![0u8; 1 << 30];
let send_time = start_timer!(|| "Sending buffer across 1 connection");
send(&buf, &mut writers[0]).await;
end_timer!(send_time);
let send_time = start_timer!(|| "Sending buffer across 16 connections");
writers
.iter_mut()
.zip(buf.chunks(buf.len() / 16))
.map(|(w, chunk)| {
send(chunk, w)
})
.collect::<FuturesOrdered<_>>()
.collect::<Vec<_>>()
.await;
end_timer!(send_time);
});
}
Client code:
use async_std::{
io::{BufReader, Read},
net::TcpStream,
prelude::*,
task,
};
use bench_utils::{end_timer, start_timer};
use futures::stream::{FuturesOrdered, StreamExt};
async fn recv(reader: &mut (impl Read + Unpin)) {
// Read the message length
let mut len_buf = [0u8; 8];
reader.read_exact(&mut len_buf).await.unwrap();
let len: u64 = u64::from_le_bytes(len_buf);
// Read the rest of the message
let mut buf = vec![0u8; usize::try_from(len).unwrap()];
reader.read_exact(&mut buf[..]).await.unwrap();
}
fn main() {
let host = &std::env::args().collect::<Vec<_>>()[1];
task::block_on(async move {
let mut readers = Vec::with_capacity(16);
for _ in 0..16 {
let stream = TcpStream::connect(host).await.unwrap();
readers.push(BufReader::new(stream));
}
let read_time = start_timer!(|| "Reading buffer from 1 connection");
recv(&mut readers[0]).await;
end_timer!(read_time);
let read_time = start_timer!(|| "Reading buffer from 16 connections");
readers
.iter_mut()
.map(|r| recv(r))
.collect::<FuturesOrdered<_>>()
.collect::<Vec<_>>()
.await;
end_timer!(read_time);
});
}
Server result:
Start: Sending buffer across 1 connection
End: Sending buffer across 1 connection....................................55.134s
Start: Sending buffer across 16 connections
End: Sending buffer across 16 connections..................................4.19s
Client result:
Start: Reading buffer from 1 connection
End: Reading buffer from 1 connection......................................55.396s
Start: Reading buffer from 16 connections
End: Reading buffer from 16 connections....................................3.914s
I am assuming that this difference is because the sending side has to wait for an ACK once the TCP buffer fills up (both machines have TCP window scaling enabled)? It doesn't appear that Rust's standard library provides an API to modify the size of these buffers.
Is there any way to achieve similar throughput on a single connection? It seems annoying to have to pass around multiple connections, since all of this goes through a single network interface anyway.
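One small caveat about the benchmark itself, separate from the throughput question: buf.chunks(buf.len() / 16) only yields exactly 16 chunks when the length divides evenly (it does here, since 1 << 30 is a multiple of 16). Otherwise chunks rounds up and the zip against 16 writers silently drops the extra chunk:

```rust
fn main() {
    // Evenly divisible: 32 bytes in chunks of 32 / 16 = 2 gives exactly 16 chunks.
    let buf = vec![0u8; 32];
    assert_eq!(buf.chunks(buf.len() / 16).count(), 16);

    // Not divisible: 33 / 16 = 2 (integer division), and chunks of 2 over
    // 33 bytes yield 17 chunks. Zipped against 16 writers, the final
    // 1-byte chunk would be dropped without any error.
    let buf = vec![0u8; 33];
    assert_eq!(buf.chunks(buf.len() / 16).count(), 17);

    println!("ok");
}
```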
I want to write a server using the current master branch of Hyper that saves a message that is delivered by a POST request and sends this message to every incoming GET request.
I have this, mostly copied from the Hyper examples directory:
extern crate futures;
extern crate hyper;
extern crate pretty_env_logger;
use futures::future::FutureResult;
use hyper::{Get, Post, StatusCode};
use hyper::header::{ContentLength};
use hyper::server::{Http, Service, Request, Response};
use futures::Stream;
struct Echo {
data: Vec<u8>,
}
impl Echo {
fn new() -> Self {
Echo {
data: "text".into(),
}
}
}
impl Service for Echo {
type Request = Request;
type Response = Response;
type Error = hyper::Error;
type Future = FutureResult<Response, hyper::Error>;
fn call(&self, req: Self::Request) -> Self::Future {
let resp = match (req.method(), req.path()) {
(&Get, "/") | (&Get, "/echo") => {
Response::new()
.with_header(ContentLength(self.data.len() as u64))
.with_body(self.data.clone())
},
(&Post, "/") => {
//self.data.clear(); // argh. &self is not mutable :(
// even if it was mutable... how to put the entire body into it?
//req.body().fold(...) ?
let mut res = Response::new();
if let Some(len) = req.headers().get::<ContentLength>() {
res.headers_mut().set(ContentLength(0));
}
res.with_body(req.body())
},
_ => {
Response::new()
.with_status(StatusCode::NotFound)
}
};
futures::future::ok(resp)
}
}
fn main() {
pretty_env_logger::init().unwrap();
let addr = "127.0.0.1:12346".parse().unwrap();
let server = Http::new().bind(&addr, || Ok(Echo::new())).unwrap();
println!("Listening on http://{} with 1 thread.", server.local_addr().unwrap());
server.run().unwrap();
}
How do I turn req.body() (which seems to be a Stream of Chunks) into a Vec<u8>? I assume I must somehow return a Future that consumes the Stream and turns it into a single Vec<u8>, maybe with fold(), but I have no clue how to do that.
Hyper 0.13 provides a body::to_bytes function for this purpose.
use hyper::body;
use hyper::{Body, Response};
pub async fn read_response_body(res: Response<Body>) -> Result<String, hyper::Error> {
let bytes = body::to_bytes(res.into_body()).await?;
Ok(String::from_utf8(bytes.to_vec()).expect("response was not valid utf-8"))
}
I'm going to simplify the problem to just return the total number of bytes, instead of echoing the entire stream.
Futures 0.3
Hyper 0.13 + TryStreamExt::try_fold
See euclio's answer about hyper::body::to_bytes if you just want all the data as one giant blob.
Accessing the stream allows for more fine-grained control:
use futures::TryStreamExt; // 0.3.7
use hyper::{server::Server, service, Body, Method, Request, Response}; // 0.13.9
use std::convert::Infallible;
use tokio; // 0.2.22
#[tokio::main]
async fn main() {
let addr = "127.0.0.1:12346".parse().expect("Unable to parse address");
let server = Server::bind(&addr).serve(service::make_service_fn(|_conn| async {
Ok::<_, Infallible>(service::service_fn(echo))
}));
println!("Listening on http://{}.", server.local_addr());
if let Err(e) = server.await {
eprintln!("Error: {}", e);
}
}
async fn echo(req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
let (parts, body) = req.into_parts();
match (parts.method, parts.uri.path()) {
(Method::POST, "/") => {
let entire_body = body
.try_fold(Vec::new(), |mut data, chunk| async move {
data.extend_from_slice(&chunk);
Ok(data)
})
.await;
entire_body.map(|body| {
let body = Body::from(format!("Read {} bytes", body.len()));
Response::new(body)
})
}
_ => {
let body = Body::from("Can only POST to /");
Ok(Response::new(body))
}
}
}
Unfortunately, the current implementation of Bytes is no longer compatible with TryStreamExt::try_concat, so we have to switch back to a fold.
Futures 0.1
hyper 0.12 + Stream::concat2
Since futures 0.1.14, you can use Stream::concat2 to stick together all the data into one:
fn concat2(self) -> Concat2<Self>
where
Self: Sized,
Self::Item: Extend<<Self::Item as IntoIterator>::Item> + IntoIterator + Default,
use futures::{
future::{self, Either},
Future, Stream,
}; // 0.1.25
use hyper::{server::Server, service, Body, Method, Request, Response}; // 0.12.20
use tokio; // 0.1.14
fn main() {
let addr = "127.0.0.1:12346".parse().expect("Unable to parse address");
let server = Server::bind(&addr).serve(|| service::service_fn(echo));
println!("Listening on http://{}.", server.local_addr());
let server = server.map_err(|e| eprintln!("Error: {}", e));
tokio::run(server);
}
fn echo(req: Request<Body>) -> impl Future<Item = Response<Body>, Error = hyper::Error> {
let (parts, body) = req.into_parts();
match (parts.method, parts.uri.path()) {
(Method::POST, "/") => {
let entire_body = body.concat2();
let resp = entire_body.map(|body| {
let body = Body::from(format!("Read {} bytes", body.len()));
Response::new(body)
});
Either::A(resp)
}
_ => {
let body = Body::from("Can only POST to /");
let resp = future::ok(Response::new(body));
Either::B(resp)
}
}
}
You could also convert the Bytes into a Vec<u8> via entire_body.to_vec() and then convert that to a String.
See also:
How do I convert a Vector of bytes (u8) to a string
hyper 0.11 + Stream::fold
Similar to Iterator::fold, Stream::fold takes an accumulator (called init) and a function that operates on the accumulator and an item from the stream. The result of the function must be another future with the same error type as the original. The total result is itself a future.
fn fold<F, T, Fut>(self, init: T, f: F) -> Fold<Self, F, Fut, T>
where
F: FnMut(T, Self::Item) -> Fut,
Fut: IntoFuture<Item = T>,
Self::Error: From<Fut::Error>,
Self: Sized,
We can use a Vec as the accumulator. Body's Stream implementation returns a Chunk. This implements Deref<Target = [u8]>, so we can use that to append each chunk's data to the Vec.
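The shape of that fold can be sketched synchronously with std iterators over byte slices; the hyper code below does the same thing, one future per chunk:

```rust
fn main() {
    // Stand-ins for the chunks a Body stream would yield; each derefs to &[u8].
    let chunks: [&[u8]; 3] = [b"hel", b"lo ", b"world"];

    // Fold each chunk into a growing Vec<u8> accumulator, mirroring
    // Stream::fold with Vec::new() as `init`.
    let body: Vec<u8> = chunks.iter().fold(Vec::new(), |mut acc, chunk| {
        acc.extend_from_slice(chunk);
        acc
    });

    assert_eq!(body, b"hello world");
    println!("Read {} bytes", body.len());
}
```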
extern crate futures; // 0.1.23
extern crate hyper; // 0.11.27
use futures::{Future, Stream};
use hyper::{
server::{Http, Request, Response, Service}, Post,
};
fn main() {
let addr = "127.0.0.1:12346".parse().unwrap();
let server = Http::new().bind(&addr, || Ok(Echo)).unwrap();
println!(
"Listening on http://{} with 1 thread.",
server.local_addr().unwrap()
);
server.run().unwrap();
}
struct Echo;
impl Service for Echo {
type Request = Request;
type Response = Response;
type Error = hyper::Error;
type Future = Box<futures::Future<Item = Response, Error = Self::Error>>;
fn call(&self, req: Self::Request) -> Self::Future {
match (req.method(), req.path()) {
(&Post, "/") => {
let f = req.body()
.fold(Vec::new(), |mut acc, chunk| {
acc.extend_from_slice(&*chunk);
futures::future::ok::<_, Self::Error>(acc)
})
.map(|body| Response::new().with_body(format!("Read {} bytes", body.len())));
Box::new(f)
}
_ => panic!("Nope"),
}
}
}
You could also convert the Vec<u8> body to a String.
See also:
How do I convert a Vector of bytes (u8) to a string
Output
When called from the command line, we can see the result:
$ curl -X POST --data hello http://127.0.0.1:12346/
Read 5 bytes
Warning
All of these solutions allow a malicious end user to POST an infinitely sized file, which would cause the machine to run out of memory. Depending on the intended use, you may wish to establish some kind of cap on the number of bytes read, potentially writing to the filesystem once some threshold is reached.
See also:
How do I apply a limit to the number of bytes read by futures::Stream::concat2?
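For comparison, the synchronous std analogue of such a cap is Read::take, which limits how many bytes a reader will ever produce; with a body stream you would instead stop the fold once the accumulator exceeds a limit. A sketch of the take idea:

```rust
use std::io::Read;

fn main() {
    // Pretend this is an arbitrarily large request body.
    let data = vec![1u8; 1024];

    // `take` wraps the reader and yields at most 16 bytes, so even a
    // "read everything" call cannot blow past the cap.
    let mut capped = Vec::new();
    (&data[..]).take(16).read_to_end(&mut capped).unwrap();

    assert_eq!(capped.len(), 16);
    println!("read {} of {} bytes", capped.len(), data.len());
}
```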
Most of the answers on this topic are outdated or overly complicated. For small, single-chunk bodies the solution is pretty simple:
/*
WARNING for beginners!!! This use statement
is important so we can later use .data() method!!!
*/
use hyper::body::HttpBody;
let my_vector: Vec<u8> = request.into_body().data().await.unwrap().unwrap().to_vec();
let my_string = String::from_utf8(my_vector).unwrap();
Note that .data() yields only the next chunk of the body, so this approach is only reliable when the entire body arrives as a single chunk. You can also use body::to_bytes as @euclio answered, which collects every chunk. Both approaches are straightforward! Don't forget to handle the unwraps properly.
I am trying to rewrite the proxy example of the Asynchronous Programming in Rust book by migrating to:
futures-preview = { version = "0.3.0-alpha.19", features = ["async-await"] }
hyper = "0.13.0-alpha.4"
from:
futures-preview = { version = "=0.3.0-alpha.17", features = ["compat"] }
hyper = "0.12.9"
The current example converts the returned future from futures 0.3 into a futures 0.1 future, because hyper = "0.12.9" is not compatible with futures 0.3's async/await.
My code:
use {
futures::future::{FutureExt, TryFutureExt},
hyper::{
rt::run,
service::{make_service_fn, service_fn},
Body, Client, Error, Request, Response, Server, Uri,
},
std::net::SocketAddr,
std::str::FromStr,
};
fn forward_uri<B>(forward_url: &'static str, req: &Request<B>) -> Uri {
let forward_uri = match req.uri().query() {
Some(query) => format!("{}{}?{}", forward_url, req.uri().path(), query),
None => format!("{}{}", forward_url, req.uri().path()),
};
Uri::from_str(forward_uri.as_str()).unwrap()
}
async fn call(
forward_url: &'static str,
mut _req: Request<Body>,
) -> Result<Response<Body>, hyper::Error> {
*_req.uri_mut() = forward_uri(forward_url, &_req);
let url_str = forward_uri(forward_url, &_req);
let res = Client::new().get(url_str).await;
res
}
async fn run_server(forward_url: &'static str, addr: SocketAddr) {
let forwarded_url = forward_url;
let serve_future = service_fn(move |req| call(forwarded_url, req).boxed());
let server = Server::bind(&addr).serve(serve_future);
if let Err(err) = server.await {
eprintln!("server error: {}", err);
}
}
fn main() {
// Set the address to run our socket on.
let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
let url = "http://127.0.0.1:9061";
let futures_03_future = run_server(url, addr);
run(futures_03_future);
}
First, I receive this error for server in run_server function:
the trait tower_service::Service<&'a
hyper::server::tcp::addr_stream::AddrStream> is not implemented for
hyper::service::service::ServiceFn<[closure#src/main.rs:35:35: 35:78
forwarded_url:_], hyper::body::body::Body>
Also, I cannot use hyper::rt::run because it might have been implemented differently in hyper = 0.13.0-alpha.4.
I will be grateful if you tell me your ideas on how to fix it.
As explained in this issue, to create a new service for each connection in hyper = "0.13.0-alpha.4" you need a MakeService. You can create a MakeService from a closure by using make_service_fn.
Also, I cannot use hyper::rt::run because it might have been implemented differently in hyper = 0.13.0-alpha.4.
Correct: under the hood, hyper::rt::run was calling tokio::run. It has been removed from the API, though I currently don't know the reason. You can run your future by calling tokio::run yourself or by using the #[tokio::main] annotation. To do this you need to add tokio to your Cargo.toml:
# this is the version of tokio used inside hyper "0.13.0-alpha.4"
tokio = "=0.2.0-alpha.6"
then change your run_server like this:
async fn run_server(forward_url: &'static str, addr: SocketAddr) {
let server = Server::bind(&addr).serve(make_service_fn(move |_| {
async move { Ok::<_, Error>(service_fn(move |req| call(forward_url, req))) }
}));
if let Err(err) = server.await {
eprintln!("server error: {}", err);
}
}
and main :
#[tokio::main]
async fn main() -> () {
// Set the address to run our socket on.
let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
let url = "http://www.google.com:80"; // i have tested with google
run_server(url, addr).await
}
I'm trying to implement a simple TCP messaging protocol in Rust with the standard library's TCP sockets. The protocol is as follows:
[C0 FF EE EE] header
[XX XX] type U16LE
[XX XX] size U16LE
... (size - 8) bytes of arbitrary data ...
I have come up with the following code:
const REQUEST_HEADER: [u8; 4] = [0xC0, 0xFF, 0xEE, 0xEE];
pub fn run_server(host: &str, port: &str) {
let listener = TcpListener::bind(format!("{}:{}", host, port))
.expect("Could not bind to the requested host/port combination.");
for stream in listener.incoming() {
thread::spawn(|| {
let stream = stream.unwrap();
handle_client(stream);
});
}
}
fn handle_client(stream: TcpStream) {
stream.set_nodelay(true).expect("set_nodelay call failed");
loop {
match get_packet(&stream) {
Ok(deframed) => {
match deframed.msgid {
Some(FrameType::Hello) => say_hello(&stream),
Some(FrameType::Text) => {
println!("-- Got text message: {:X?}", deframed.content);
save_text(deframed);
},
Some(FrameType::Goodbye) => {
println!("-- You say goodbye, and I say hello");
break; // end of connection
},
_ => println!("!! I don't know this packet type")
}
},
Err(x) => {
match x {
1 => println!("!! Malformed frame header"),
2 => println!("!! Client gone?"),
_ => println!("!! Unknown error")
}
break;
}
}
}
}
// Read one packet from the stream
fn get_packet(mut reader: &TcpStream) -> Result<MyFrame, u8> {
// read the packet header in here
let mut heading = [0u8; 0x8];
match reader.read_exact(&mut heading) {
Ok(_) => {
// check if header is OK
if heading[..4] == REQUEST_HEADER
{
// extract metadata
let mid: u16 = ((heading[4] as u16) << 0)
+ ((heading[5] as u16) << 8);
let len: u16 = ((heading[6] as u16) << 0)
+ ((heading[7] as u16) << 8);
let remain: u16 = len - 0x8;
println!("-> Pkt: 0x{:X}, length: 0x{:X}, remain: 0x{:X}", mid, len, remain);
let mut frame = vec![0; remain as usize];
reader.read_exact(&mut frame).expect("Read packet failed");
let deframed = MyFrame {
msgid: FromPrimitive::from_u16(mid),
content: frame
};
Ok(deframed)
} else {
println!("!! Expected packet header but got {:X?}", heading.to_vec());
Err(1)
}
},
_ => Err(2)
}
}
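Two side notes on get_packet, sketched with a hypothetical helper (parse_header is my name, not from the code above): the manual shifts can be written as u16::from_le_bytes, and len - 0x8 will panic in debug builds (or wrap in release) if a malformed header claims a size below 8, so a checked subtraction is safer:

```rust
// Hypothetical helper: extract (message type, remaining payload length)
// from an 8-byte frame heading; both fields are little-endian u16.
fn parse_header(heading: &[u8; 8]) -> Option<(u16, u16)> {
    let mid = u16::from_le_bytes([heading[4], heading[5]]);
    let len = u16::from_le_bytes([heading[6], heading[7]]);
    // Reject a claimed total size smaller than the 8-byte header itself,
    // which would otherwise underflow `len - 0x8`.
    let remain = len.checked_sub(8)?;
    Some((mid, remain))
}

fn main() {
    let ok = [0xC0, 0xFF, 0xEE, 0xEE, 0x02, 0x00, 0x0C, 0x00];
    assert_eq!(parse_header(&ok), Some((2, 4))); // type 2, 12 - 8 = 4 payload bytes

    let bad = [0xC0, 0xFF, 0xEE, 0xEE, 0x02, 0x00, 0x03, 0x00];
    assert_eq!(parse_header(&bad), None); // size 3 < 8: rejected, no underflow

    println!("ok");
}
```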
This is the Python client:
import socket, sys
sock = socket.socket()
sock.connect(("127.0.0.1", 12345))
f = open("message.bin",'rb')
dat = f.read()
sock.sendall(dat)
while True:
sock.recv(64)
When the program receives multiple messages in a row, they seem to be processed fine. But if the client sends a message and waits for a response, the program gets stuck reading the frame content until ~16 more bytes arrive. As a result, the program cannot send a response to the client.
I can tell it's stuck there and not somewhere else because the -> Pkt log line appears with the correct metadata, yet processing does not continue even though the client has already sent everything.
I've tried replacing read_exact with read, but it still gets stuck there. Once the client drops the connection, the message is suddenly processed as normal.
Is this a design problem, or am I missing a setting I need to change on the socket?
I have a concept project where the client sends the server a number (PrimeClientRequest), the server computes whether the value is prime, and returns a response (PrimeClientResponse). I want the client to be a simple CLI that prompts the user for a number, sends the request to the server, and displays the response. Ideally I want to do this using TcpClient from tokio-proto and streams from the futures-rs crate.
I've written a Tokio server using services and I want to reuse the same codec and proto for the client.
Part of the client is a function called read_prompt which returns a Stream. Essentially it is an infinite loop at which each iteration reads in some input from stdin.
Here's the relevant code:
main.rs
use futures::{Future, Stream};
use std::env;
use std::net::SocketAddr;
use tokio_core::reactor::Core;
use tokio_prime::protocol::PrimeClientProto;
use tokio_prime::request::PrimeRequest;
use tokio_proto::TcpClient;
use tokio_service::Service;
mod cli;
fn main() {
let mut core = Core::new().unwrap();
let handle = core.handle();
let addr_string = env::args().nth(1).unwrap_or("127.0.0.1:8080".to_string());
let remote_addr = addr_string.parse::<SocketAddr>().unwrap();
println!("Connecting on {}", remote_addr);
let tcp_client = TcpClient::new(PrimeClientProto).connect(&remote_addr, &handle);
core.run(tcp_client.and_then(|client| {
client
.call(PrimeRequest { number: Ok(0) })
.and_then(|response| {
println!("RESP = {:?}", response);
Ok(())
})
})).unwrap();
}
cli.rs
use futures::{Future, Sink, Stream};
use futures::sync::mpsc;
use std::{io, thread};
use std::io::{Stdin, Stdout};
use std::io::prelude::*;
pub fn read_prompt() -> impl Stream<Item = u64, Error = ()> {
let (tx, rx) = mpsc::channel(1);
thread::spawn(move || loop {
let thread_tx = tx.clone();
let input = prompt(io::stdout(), io::stdin()).unwrap();
let parsed_input = input
.parse::<u64>()
.map_err(|_| io::Error::new(io::ErrorKind::Other, "invalid u64"));
thread_tx.send(parsed_input.unwrap()).wait().unwrap();
});
rx
}
fn prompt(stdout: Stdout, stdin: Stdin) -> io::Result<String> {
let mut stdout_handle = stdout.lock();
stdout_handle.write(b"> ")?;
stdout_handle.flush()?;
let mut buf = String::new();
let mut stdin_handle = stdin.lock();
stdin_handle.read_line(&mut buf)?;
Ok(buf.trim().to_string())
}
With the code above, the client sends a single request to the server before the client terminates. I want to be able to use the stream generated from read_prompt to provide input to the TcpClient and make a request per item in the stream. How would I go about doing this?
The full code can be found at joshleeb/tokio-prime.
The solution I have come up with (so far) is to use future::loop_fn from the futures-rs crate. It's not ideal, as a new connection still has to be made on each iteration, but it is at least a step in the right direction.
main.rs
use futures::{future, Future};
use std::{env, io};
use std::net::SocketAddr;
use tokio_core::reactor::{Core, Handle};
use tokio_prime::protocol::PrimeClientProto;
use tokio_prime::request::PrimeRequest;
use tokio_proto::TcpClient;
use tokio_service::Service;
mod cli;
fn handler<'a>(
handle: &'a Handle, addr: &'a SocketAddr
) -> impl Future<Item = (), Error = ()> + 'a {
cli::prompt(io::stdin(), io::stdout())
.and_then(move |number| {
TcpClient::new(PrimeClientProto)
.connect(addr, handle)
.and_then(move |client| Ok((client, number)))
})
.and_then(|(client, number)| {
client
.call(PrimeRequest { number: Ok(number) })
.and_then(|response| {
println!("{:?}", response);
Ok(())
})
})
.or_else(|err| {
println!("! {}", err);
Ok(())
})
}
fn main() {
let mut core = Core::new().unwrap();
let handle = core.handle();
let addr_string = env::args().nth(1).unwrap_or("127.0.0.1:8080".to_string());
let remote_addr = addr_string.parse::<SocketAddr>().unwrap();
println!("Connecting on {}", remote_addr);
let client = future::loop_fn((), |_| {
handler(&handle, &remote_addr)
.map(|_| -> future::Loop<(), ()> { future::Loop::Continue(()) })
});
core.run(client).ok();
}
cli.rs
use futures::prelude::*;
use std::io;
use std::io::{Stdin, Stdout};
use std::io::prelude::*;
#[async]
pub fn prompt(stdin: Stdin, stdout: Stdout) -> io::Result<u64> {
let mut stdout_handle = stdout.lock();
stdout_handle.write(b"> ")?;
stdout_handle.flush()?;
let mut buf = String::new();
let mut stdin_handle = stdin.lock();
stdin_handle.read_line(&mut buf)?;
parse_input(buf.trim().to_string())
}
fn parse_input(s: String) -> io::Result<u64> {
s.parse::<u64>()
.map_err(|_| io::Error::new(io::ErrorKind::Other, "invalid u64"))
}