How to get the cookie from a GET response? - http

I am writing a function that makes a GET request to a website and returns the response cookie:
extern crate futures;
extern crate hyper;
extern crate tokio_core;
use tokio_core::reactor::Core;
use hyper::Client;
use std::error::Error;
use hyper::header::Cookie;
use futures::future::Future;
fn get_new_cookie() -> Result<String, Box<Error>> {
    println!("Getting cookie...");
    let core = Core::new()?;
    let client = Client::new(&core.handle());
    println!("Created client");
    let uri = "http://www.cnn.com".parse().expect("Cannot parse url");
    println!("Parsed url");
    let response = client.get(uri).wait().expect("Cannot get url.");
    println!("Got response");
    let cookie = response
        .headers()
        .get::<Cookie>()
        .expect("Cannot get cookie");
    println!("Cookie: {}", cookie);
    Ok(cookie)
}

fn main() {
    println!("{:?}", get_new_cookie());
}
This doesn't work; it gets stuck on the client.get(...) line. The output I'm getting is:
Getting cookie...
Created client
Parsed url
and after that nothing happens.
What am I doing wrong, and how can I change it so it works?

As Stefan points out, by calling wait, you are putting the thread to sleep until the future has completed. However, that thread needs to run the event loop, so you've just caused a deadlock. Using Core::run is more correct.
As Francis Gagné points out, the Cookie header is used to send a cookie to the server; SetCookie is what the server uses to send cookies to the client. It is also a vector containing all of the cookies:
fn get_new_cookie() -> Result<String, Box<Error>> {
    println!("Getting cookie...");
    let mut core = Core::new()?;
    let client = Client::new(&core.handle());
    println!("Created client");
    let uri = "http://www.cnn.com".parse().expect("Cannot parse url");
    println!("Parsed url");
    let response = core.run(client.get(uri)).expect("Cannot get url.");
    println!("Got response");
    let cookie = response
        .headers()
        .get::<SetCookie>()
        .expect("Cannot get cookie");
    println!("Cookie: {:?}", cookie);
    Ok(cookie.join(","))
}
However, if you only want a synchronous API, use reqwest instead. It is built on top of hyper:
extern crate reqwest;

use std::error::Error;
use reqwest::header::SetCookie;

fn get_new_cookie() -> Result<String, Box<Error>> {
    let response = reqwest::get("http://www.cnn.com")?;
    let cookies = match response.headers().get::<SetCookie>() {
        Some(cookies) => cookies.join(","),
        None => String::new(),
    };
    Ok(cookies)
}

fn main() {
    println!("{:?}", get_new_cookie());
}

See the documentation for the wait method:
Note: This method is not appropriate to call on event loops or similar
I/O situations because it will prevent the event loop from making
progress (this blocks the thread). This method should only be called
when it's guaranteed that the blocking work associated with this
future will be completed by another thread.
Future::wait is already deprecated in the tokio-reform branch.
I'd recommend designing the full application around async concepts (i.e. get_new_cookie should take a Handle and return a Future instead of allocating its own event loop).
You could run the request with Core::run like this:
let response = core.run(client.get(uri)).expect("Cannot get url.");
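Following that recommendation, a minimal sketch of an async get_new_cookie (assuming the hyper 0.11 API used above and a use hyper::header::SetCookie import; the exact signature is illustrative, not the only way to write it) could look like this:

// Sketch only: takes a client built elsewhere and returns a future
// instead of driving its own event loop.
fn get_new_cookie(
    client: &Client<hyper::client::HttpConnector>,
    uri: hyper::Uri,
) -> Box<Future<Item = String, Error = hyper::Error>> {
    Box::new(client.get(uri).map(|response| {
        // Join all Set-Cookie values, or return an empty string if there are none.
        response
            .headers()
            .get::<SetCookie>()
            .map(|cookies| cookies.join(","))
            .unwrap_or_default()
    }))
}

The caller keeps ownership of the Core and drives the returned future with core.run, or composes it with other futures.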

reqwest 0.11 (and perhaps earlier) update
In the get_new_cookie function, I believe the code snippet to retrieve the cookies from a reqwest::Response goes something like:
// returns Option<&HeaderValue>
response.headers().get(http::header::SET_COOKIE)
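Since Set-Cookie can appear multiple times in a response, a hedged sketch of a complete get_new_cookie against reqwest 0.11's blocking API (this needs the blocking cargo feature; the joining format is just an example) might look like:

// Sketch only: collects every Set-Cookie value via HeaderMap::get_all.
fn get_new_cookie(url: &str) -> Result<String, Box<dyn std::error::Error>> {
    let response = reqwest::blocking::get(url)?;
    let cookies: Vec<String> = response
        .headers()
        .get_all(reqwest::header::SET_COOKIE)
        .iter()
        .filter_map(|value| value.to_str().ok().map(String::from))
        .collect();
    Ok(cookies.join(","))
}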

Related

In Rust+Tokio, should you return a oneshot::Receiver as a processing callback?

I'm making an API where the user can submit items to be processed, and they might want to check whether their item was processed successfully. I thought that this would be a good place to use tokio::sync::oneshot channels, where I'd return the receiver to the caller, and they can later await on it to get the result they're looking for.
let processable_item = ...;
let where_to_submit: impl Submittable = get_submit_target();
let status_handle: oneshot::Receiver<SubmissionResult> = where_to_submit.submit(processable_item).await;
// ... do something that does not depend on the SubmissionResult ...
// Now we want to get the status of our submission
let status = status_handle.await;
Submitting the item involves creating a oneshot channel, and putting the Sender half into a queue while the Receiver goes back to the calling code:
#[async_trait]
impl Submittable for Something {
    async fn submit(item: ProcessableItem) -> oneshot::Receiver<SubmissionResult> {
        let (sender, receiver) = oneshot::channel();
        // Put the item, with the associated sender, into a queue
        let queue: mpsc::Sender<(ProcessableItem, oneshot::Sender<SubmissionResult>)> = get_processing_queue();
        queue.send((item, sender)).await.expect("Processing task closed!");
        receiver
    }
}
When I do this, cargo clippy complains (via the [clippy::async_yields_async] lint) that I'm returning oneshot::Receiver, which can be awaited, from an async function, and suggests that I await it then.
This is not what I want: the goal is to allow a degree of background processing while the caller doesn't yet need the SubmissionResult, rather than making them wait until it's available.
Is this API even a good idea? Does there exist a common approach to doing this?
Looks fine to me. This is a false positive of Clippy, so you can just silence it: #[allow(clippy::async_yields_async)].
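For example, a sketch of the submit method from the question with the lint silenced (all the types and names are the question's own, not from a real crate):

#[async_trait]
impl Submittable for Something {
    // The lint fires because an async fn returns something that can itself
    // be awaited; here that is exactly the intent, so allow it.
    #[allow(clippy::async_yields_async)]
    async fn submit(item: ProcessableItem) -> oneshot::Receiver<SubmissionResult> {
        let (sender, receiver) = oneshot::channel();
        get_processing_queue()
            .send((item, sender))
            .await
            .expect("Processing task closed!");
        receiver
    }
}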

How to use Rust futures in callbacks?

Is there any way to use futures in callbacks? For example...
// Send message on multiple channels while removing ones that are closed.
use smol::channel::Sender;
...
// (expecting bool, found opaque type)
vec_of_sender.retain(|sender| async {
    sender.send(msg.clone()).await.is_ok()
});
My work-around is to loop twice: On the first pass I delete closed senders (non-async) and on the second I do the actual send (async using for sender in ...). But it seems like I should be able to do it all in a single retain() call.
You can't use retain in this way. The closure that retain accepts must implement FnMut(&T) -> bool, but every async function returns an implementation of Future.
You can turn an async function into a synchronous one by blocking on it. For example, if you were using tokio, you could do this:
use tokio::runtime::Runtime;

let rt = Runtime::new().unwrap();
vec_of_sender.retain(|sender| {
    rt.block_on(async { sender.send(msg.clone()).await.is_ok() })
});
However, there is overhead to adding an async runtime, and I have a feeling that you are trying to solve the wrong problem.
The closure passed to retain must return a bool, but every async function returns impl Future. Instead, you can use Stream, which is the asynchronous version of Iterator. You can convert the vector into a Stream:
let stream = stream::iter(vec_of_sender);
And then use the filter method, which accepts an asynchronous closure and returns a new Stream:
let vec_of_sender = stream
    .filter(|sender| async { sender.send(msg.clone()).await.is_ok() })
    .collect::<Vec<_>>()
    .await;
To avoid creating a new Vec, you can also use swap_remove:
let mut i = 0usize;
while i < vec_of_sender.len() {
    if vec_of_sender[i].send(msg.clone()).await.is_ok() {
        i += 1;
    } else {
        vec_of_sender.swap_remove(i);
    }
}
Note that this will change the order of the vector.
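If preserving the order matters, a hedged alternative (not from the original answer) is to perform the async sends first, remember which ones succeeded, and then retain synchronously:

// Sketch: send to every sender, record the outcomes, then retain in order.
let mut keep = Vec::with_capacity(vec_of_sender.len());
for sender in &vec_of_sender {
    keep.push(sender.send(msg.clone()).await.is_ok());
}
let mut keep_iter = keep.into_iter();
vec_of_sender.retain(|_| keep_iter.next().unwrap());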

How do I make this example of an HTTP request work?

I am trying to make the example in the documentation work, but I can't.
use http::{Request, Response};

let mut request = Request::builder();
request.uri("https://www.rust-lang.org/")
    .header("User-Agent", "my-awesome-agent/1.0");

if needs_awesome_header() {
    request.header("Awesome", "yes");
}

let response = send(request.body(()).unwrap());

fn send(req: Request<()>) -> Response<()> {
    // ...
}
The question is: how can I get the response as a string so I can save it? It doesn't seem to be in the response.

How can I asynchronously retrieve data and modify it with a Tokio-based echo server?

I am working on an echo server which takes data from TCP and applies some logic to that data. For example, if the client data comes in as hello, I want to respond with hello from server.
I am able to forward the input data using the copy function, but this is not useful in my case.
Here is the starting code that I am working on:
extern crate futures;
extern crate tokio_core;
extern crate tokio_io;

use futures::stream::Stream;
use futures::Future;
use std::net::SocketAddr;
use tokio_core::net::TcpListener;
use tokio_core::reactor::Core;
use tokio_io::io::copy;
use tokio_io::AsyncRead;

fn main() {
    let addr = "127.0.0.1:15000".parse::<SocketAddr>().unwrap();
    let mut core = Core::new().unwrap();
    let handle = core.handle();
    let socket = TcpListener::bind(&addr, &handle).unwrap();
    println!("Listening on: {}", addr);

    let done = socket.incoming().for_each(move |(socket, addr)| {
        let (reader, writer) = socket.split();
        let amt = copy(reader, writer);
        let msg = amt.then(move |result| {
            match result {
                Ok((amt, _, _)) => println!("wrote {} bytes to {}", amt, addr),
                Err(e) => println!("error on {}: {}", addr, e),
            }
            Ok(())
        });
        handle.spawn(msg);
        Ok(())
    });

    core.run(done).unwrap();
}
I know that I need to add some logic in place of this copy function, but how?
let amt = copy(reader, writer);
An echo server is special in the sense that exactly one "request" from a client is followed by exactly one response from the server. A very nice example of such a use case is tokio's TinyDB example.
One thing to consider, however, is that while UDP is based on packets that arrive at the other side in exactly the form they were sent, TCP is not. TCP is a stream protocol: it guarantees that the data was received by the other side and that it arrives in exactly the order it was sent. However, it does not guarantee that one call to "send" on one side leads to exactly one "receive" call on the other side returning the exact same chunk of data. This matters especially when sending long chunks of data, where one send may map to multiple receives. You should therefore settle on a delimiter that the server can wait for before sending a response to the client. In Telnet, that delimiter is "\r\n".
That is where tokio's Decoder/Encoder infrastructure comes into play. An example implementation of such a codec is LinesCodec. If you want Telnet-style line-delimited messages, this does exactly what you want: it gives you exactly one message at a time and lets you send exactly one message at a time as a response:
extern crate tokio;

use tokio::codec::Decoder;
use tokio::net::TcpListener;
use tokio::prelude::*;
use tokio::codec::LinesCodec;
use std::net::SocketAddr;

fn main() {
    let addr = "127.0.0.1:15000".parse::<SocketAddr>().unwrap();
    let socket = TcpListener::bind(&addr).unwrap();
    println!("Listening on: {}", addr);

    let done = socket.incoming()
        .map_err(|e| println!("failed to accept socket; error = {:?}", e))
        .for_each(move |socket| {
            // Fit the line-based codec on top of the socket. This takes on the task of
            // parsing incoming messages, as well as formatting outgoing ones (appending \r\n).
            let (lines_tx, lines_rx) = LinesCodec::new().framed(socket).split();

            // This takes every incoming message and allows us to create one outgoing
            // message for it, essentially generating a stream of responses.
            let responses = lines_rx.map(|incoming_message| {
                // Implement whatever transform rules here
                if incoming_message == "hello" {
                    return String::from("hello from server");
                }
                incoming_message
            });

            // At this point `responses` is a stream of `Response` types which we
            // now want to write back out to the client. To do that we use
            // `Stream::fold` to perform a loop here, serializing each response and
            // then writing it out to the client.
            let writes = responses.fold(lines_tx, |writer, response| {
                // Return the future that sends the response to the socket
                writer.send(response)
            });

            // Run this request/response loop until the client closes the connection,
            // then return Ok(()), ignoring all eventual errors.
            tokio::spawn(
                writes.then(move |_| Ok(()))
            );

            Ok(())
        });

    tokio::run(done);
}
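To try it out (assuming the server is running locally), connect with a line-based client such as telnet 127.0.0.1 15000, type hello, and the server should reply with hello from server.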

IRC server doesn't respond to Rust IRC Client identify requests

I'm working on an IRC bot using TcpStream from the standard library.
I'm able to read all the lines that come in, but the IRC server doesn't seem to respond to my identify requests. I thought I was sending the request too soon, so I tried sleeping before sending the IDENT, but that doesn't help. I've tried using BufReader/BufWriter as well as calling read and write directly on the stream, to no avail.
use std::net::TcpStream;
use std::io::{BufReader, BufWriter, BufRead, Write, Read};
use std::{thread, time};

struct Rusty {
    name: String,
    stream: TcpStream,
    reader: BufReader<TcpStream>,
    writer: BufWriter<TcpStream>,
}

impl Rusty {
    fn new(name: &str, address: &str) -> Rusty {
        let stream = TcpStream::connect(address).expect("Couldn't connect to server");
        let reader = BufReader::new(stream.try_clone().unwrap());
        let writer = BufWriter::new(stream.try_clone().unwrap());
        Rusty {
            name: String::from(name),
            reader: reader,
            writer: writer,
            stream: stream,
        }
    }

    fn write_line(&mut self, string: String) {
        let line = format!("{}\r\n", string);
        &self.writer.write(line.as_bytes());
    }

    fn identify(&mut self) {
        let nick = &self.name.clone();
        self.write_line(format!("USER {} {} {} : {}", nick, nick, nick, nick));
        self.write_line(format!("NICK {}", nick));
    }

    fn read_lines(&mut self) {
        let mut line = String::new();
        loop {
            self.reader.read_line(&mut line);
            println!("{}", line);
        }
    }
}

fn main() {
    let mut bot = Rusty::new("rustyrusty", "irc.rizon.net:6667");
    thread::sleep_ms(5000);
    bot.identify();
    bot.read_lines();
}
It's very important to read the documentation for the components we use when programming. For example, the docs for BufWriter state (emphasis mine):
Wraps a writer and buffers its output.
It can be excessively inefficient to work directly with something that
implements Write. For example, every call to write on TcpStream
results in a system call. A BufWriter keeps an in-memory buffer of
data and writes it to an underlying writer in large, infrequent
batches.
The buffer will be written out when the writer is dropped.
Said another way, the entire purpose of a buffered reader or writer is that read or write requests don't have a one-to-one mapping to the underlying stream.
That means when you call write, you are only writing to the buffer. You also need to call flush if you need to ensure that the bytes are written to the underlying stream.
Additionally, you should:
Handle the errors that can arise from read, write, and flush.
Re-familiarize yourself with what each function does. For example, read and write don't guarantee that they read or write as much data as you ask them to. They may perform a partial read or write, and it's up to you to handle that. That's why there are helper methods like read_to_end or write_all.
Clear your String that you are reading into. Otherwise the output just repeats every time the loop cycles.
Use write! instead of building up a string that is immediately thrown away.
fn write_line(&mut self, string: &str) {
    write!(self.writer, "{}\r\n", string).unwrap();
    self.writer.flush().unwrap();
}
With these changes, I was able to get a PING message from the server.
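Along the same lines, here is a minimal sketch of read_lines that clears the buffer each iteration and handles read errors (my own illustration, not part of the original answer):

fn read_lines(&mut self) {
    let mut line = String::new();
    loop {
        line.clear(); // otherwise previous lines are printed again each pass
        match self.reader.read_line(&mut line) {
            Ok(0) => break, // connection closed by the server
            Ok(_) => print!("{}", line),
            Err(e) => {
                eprintln!("read error: {}", e);
                break;
            }
        }
    }
}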

Resources