How do I handle parallel reads and writes on a TcpStream?

I read Idiomatic way to handle writes to a TcpStream while waiting on read, but I'm still unsure of how to handle this. I'm connecting to a Rust binary via Telnet and would like to send "commands" and receive "status". Almost like a simple echo server.

I ended up cloning the stream and it's working fine:
let second_stream = stream.try_clone().expect("Cannot clone stream");
let mut reader = BufferedReader::new(second_stream);
let mut writer = BufferedWriter::new(stream);

Just an update: as of Rust 1.1, this method no longer works. To make it work, I used stream.shutdown(Shutdown::Write) when I was finished writing to the server socket. I don't know how I would do this if I had to repeatedly read and write from the same connection.
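For what it's worth, the clone-the-stream idea still maps onto current Rust: std::io::BufReader replaces the old BufferedReader, and the reading can live on its own thread so reads and writes proceed independently. A minimal sketch (the port and the messages are placeholders, not from the question):

use std::io::{BufRead, BufReader, Write};
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:4000")?;
    let (stream, _addr) = listener.accept()?;
    let read_half = stream.try_clone()?; // one handle reads while the other writes

    // Reader thread: handle each "command" line the telnet client sends.
    let reader = thread::spawn(move || {
        for line in BufReader::new(read_half).lines() {
            match line {
                Ok(cmd) => println!("command: {}", cmd),
                Err(_) => break, // client disconnected
            }
        }
    });

    // The original handle stays free for writing "status" back at any time.
    let mut writer = stream;
    writer.write_all(b"status: ready\r\n")?;

    reader.join().unwrap();
    Ok(())
}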

Related

How can I interpret an HTTPS request with non-UTF-8 characters?

I'm working on a very simple and straightforward reverse proxy in Rust without any external libraries. I've come to my first roadblock. I've noticed that when I try to parse an HTTPS request as UTF-8, it fails. I printed the request as a lossy string. Here is the output:
�f�^���;�r�;�d��N7# ^�8�6 �m�xpPk�
����B]���Fi��֚*G]"�+�/̨̩�,�0�
� ����/5�rus
I was thinking this has something to do with SSL, because on the client side it says something along the lines of "Secure Connection has Failed". I've looked into decoding SSL requests, or whatever this is, and have found nothing useful. Any ideas would be greatly appreciated.
I have tried parsing the request using several different solutions from other platforms. They consisted of relying on base64 and other SSL-related crates meant for decoding text.
For more context, below is a general example of how I go about getting the output above:
use std::{
    io::{Read, Result},
    net::TcpListener,
};

fn main() -> Result<()> {
    let server = TcpListener::bind("localhost:443")?;
    for mut stream in server.incoming().filter_map(Result::ok) {
        let mut buf = [0; 256];
        let bytes = stream.read(&mut buf)?;
        let utf8_lossy = String::from_utf8_lossy(&buf[..bytes]); // this contains the non-utf8 mumbo jumbo
        let utf8 = String::from_utf8(buf[..bytes].to_vec()).unwrap(); // this fails
    }
    Ok(())
}
When an HTTPS client connects to the server, the two establish a secure socket to protect the data transferred between them. The data passed over this socket is not necessarily text and cannot be interpreted as such.
Establishing that socket is a multi-step protocol: the client sends a ClientHello message, to which you should reply with a ServerHello that contains your certificate. The client then replies with its keys and some cipher information, before the socket is finally ready to carry application data. All of these initialization steps happen in a binary protocol that cannot be interpreted as text. That is why you're not seeing any sensible output.
Only once that socket is set up does HTTP data begin to flow over the connection. This is likely what you're expecting to see, as it contains the familiar 'GET / HTTP/1.1', etc.
OpenSSL, the library you mentioned using, has a way to set up a socket that will perform the handshake for you. See the docs.
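To illustrate, here is a minimal sketch of the asker's loop wrapped in a TLS acceptor from the openssl crate. The certificate and key paths, buffer size, and error handling are placeholder assumptions for the example, not anything from the question:

use std::io::Read;
use std::net::TcpListener;
use openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};

fn main() -> std::io::Result<()> {
    // Build a TLS acceptor from a certificate and private key (paths are placeholders).
    let mut builder = SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap();
    builder.set_private_key_file("key.pem", SslFiletype::PEM).unwrap();
    builder.set_certificate_chain_file("cert.pem").unwrap();
    let acceptor = builder.build();

    let server = TcpListener::bind("localhost:443")?;
    for stream in server.incoming().filter_map(Result::ok) {
        // accept() runs the TLS handshake; the returned stream yields decrypted bytes.
        let mut tls = match acceptor.accept(stream) {
            Ok(tls) => tls,
            Err(_) => continue, // handshake failed; skip this connection
        };
        let mut buf = [0u8; 1024];
        let bytes = tls.read(&mut buf)?;
        // The plaintext HTTP request is now readable as text.
        println!("{}", String::from_utf8_lossy(&buf[..bytes]));
    }
    Ok(())
}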

Where to start with TCP communications?

I'm getting ready to attempt my first project involving networked communications. This is just a tinker-toy app with the purpose of self-teaching; nothing mission-critical here. I will need two nodes to communicate with each other. One will be an Android platform, so I will be using Java. The other node will be a Raspberry Pi running Debian Linux. While I COULD use Java on this end as well and maybe just use RPC, what I would LIKE to do is develop my own little implementation-agnostic TCP/IP "protocol" for the two to communicate, and let each end implement it however works best. What I mean by "protocol" is that I want a standard set of messages to be passed back and forth, along with some values for each. E.g.:
"Protocol" Definition:
MESSAGE TYPE A (Float arg, Int arg)
MESSAGE TYPE B (Int arg)
MESSAGE TYPE C (Int arg, String arg, Int arg)
An example "conversation":
Node 1                      Node 2
A(5.4, 4)      --->
B(6)           --->
               <----        C(3, 'Hello', 0xFF)
B(5)           --->
               <----        A(43.0, 16)
So my questions are:
(1) Does the above even make sense? Do I need to clarify my intent? Provide more info? This is my first foray into networked communication between two running programs, so I may be way off-base in what I'm asking for. If I'm approaching this the wrong way, I'd welcome better recommendations.
(2) How would I go about this? Do I just stuff one long string into a TCP packet? Is there a better way?
Thanks!
You only need to fill a buffer with the data you want and then learn how to open a TCP socket and send data through it. The kernel handles how the payload is packetized and how the TCP stream is controlled. On the server end, you must learn how to listen on a TCP socket and read the incoming data. Keep in mind that TCP is a byte stream with no built-in message boundaries, so your "protocol" needs its own framing; one possible encoding is sketched below.
Socket programming is the term you should be searching for.
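As an illustration, here is a Rust sketch (rather than Java) of one possible wire format for the message types above: a one-byte type tag followed by fixed-size big-endian fields. The function names, field sizes, tag values, and the Pi's address are assumptions made for the example, not part of any standard:

use std::convert::TryInto;
use std::io::{self, Read, Write};
use std::net::TcpStream;

// MESSAGE TYPE A (Float arg, Int arg): tag 'A', then a 4-byte float and a 4-byte int.
fn send_message_a(stream: &mut TcpStream, f: f32, i: i32) -> io::Result<()> {
    let mut frame = Vec::with_capacity(9);
    frame.push(b'A');                           // one-byte message type tag
    frame.extend_from_slice(&f.to_be_bytes());  // float arg, network byte order
    frame.extend_from_slice(&i.to_be_bytes());  // int arg, network byte order
    stream.write_all(&frame)
}

fn read_message_a(stream: &mut TcpStream) -> io::Result<(f32, i32)> {
    // TCP is a byte stream, so read exactly one frame's worth of bytes.
    let mut frame = [0u8; 9];
    stream.read_exact(&mut frame)?;
    assert_eq!(frame[0], b'A');
    let f = f32::from_be_bytes(frame[1..5].try_into().unwrap());
    let i = i32::from_be_bytes(frame[5..9].try_into().unwrap());
    Ok((f, i))
}

fn main() -> io::Result<()> {
    let mut stream = TcpStream::connect("192.168.1.20:5000")?; // placeholder address for the Pi
    send_message_a(&mut stream, 5.4, 4)?;
    let (f, i) = read_message_a(&mut stream)?;
    println!("got A({}, {})", f, i);
    Ok(())
}

On the Java side, DataOutputStream.writeFloat and writeInt produce the same big-endian layout, which is what makes a byte-level "protocol" like this implementation-agnostic.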

Performing asynchronous write operations over a TCP socket with Boost Asio

I am writing a Client/Server application in C++ with the help of Boost Asio. I have a working server, and the server workflow is something I understand well.
My client application handles the connect gracefully, as shown in the Asio examples, after which it exchanges a handshake with the server. After that, however, the users should be able to send requests to the server when and how they want, which is where I have a problem understanding the paradigm.
The initial workflow goes like a little like this:
OnConnected() { SendHandshake() }
SendHandshake() { async_write_some(handshake...), async_read_some(&OnRead) }
OnRead() { ReadServerHandshake() *** }
And users would send messages by using Write(msg):
Write(msg) { async_write_some(msg, &OnWrite), async_read_some(&OnRead) }
OnWrite() {}
EDIT: Rephrasing the question to be clearer, here is the scenario:
After the initial handshaking is complete, the Client is only used to send requests to the server, on which it will get a reply. So, for instance, a user sends a write. Client waits for the read operation to complete, reads the reply and does something with it. The next user write will only come after, say, 5 minutes. Will the io_service stop working in the meanwhile because there are no outstanding asynchronous operations in between the last reply read and the next write?
On an informative note, you can give the io_service an io_service::work object to stop it from running out of work. This ensures that io_service::run never returns until the work object is destroyed.
To control the lifetime of the work object, you can hold it in a shared_ptr and reset the pointer once the work is done, or you can use boost::optional as outlined here.
Of course you still need to handle the case where either the server closes the TCP connection, or the connection dies for whatever reason. To handle this case, one solution would be to have an outstanding async_read on the socket to the server. The read handler should be called with an error_code when/if something goes wrong with the connection. If you have the outstanding read on the connection, you do not need to use the work object.
If you want the IO service to complete a read, you must start a read. If you want to read data any time the client sends it, you must have an asynchronous read operation pending at all times. Otherwise, how would the library know what to do with the data?

MPI_Bsend and MPI_Isend. How do they work?

Using buffered send (MPI_Bsend) and non-blocking send (MPI_Isend), I was wondering how, and whether, they introduce a new level of parallelism into my application, possibly by spawning a thread.
Imagine that a slave process generates a large amount of data and wants to send it to the master. My idea was to start a buffered or non-blocking send and then immediately begin computing the next result.
Only when I had to send the new data would I check whether I can reuse the buffer. This would introduce a new level of parallelism into my application, overlapping CPU work and communication. Does anybody know how this is done in MPI? Does MPI spawn a new thread to handle the Bsend or Isend?
Thanks.
What you're looking for is a non-blocking send using your own buffer (MPI_Isend). There is no need to worry about threading: MPI_Isend should return immediately and let you continue your own code. You would then carry on with your work, and later call MPI_Wait on the MPI_Request you passed to MPI_Isend; this blocks until the buffer is free to use again. If you have room for several buffers, you could improve the overlap by allocating several of them and using whichever becomes available through MPI_Waitany.

Why do we need to read() before write() in a TCP server program?

As per my understanding, a simple TCP server will be coded as follows.
socket() - bind() - listen() - accept() - read() - write()
The clients will be written as follows.
socket() - bind()(Optional) - connect() - write() - read()
Please note the order difference in read() and write() calls between client and server program.
Is it a requirement to always read() before write() in a server program, and if so, why?
Thanks,
Naga
That isn't mandatory, but it makes sense for the server to read the request before writing a response. Note that it is necessary to read on both sides often enough to prevent a distributed deadlock: for example, if both sides are trying to write and neither is reading, the buffers in between will fill up and neither side's write will be able to proceed. One solution is to have a separate thread which keeps reading whenever there is something to read (this applies to both the client and the server).
The simple answer is no. You are free to do whatever you like.
However, I'll caveat that quickly with the fact that most protocols are designed to wait for the client to send something. After all, the server, by nature, serves requests and needs to know what the request is, be it "GET /" or "HELO" or whatever. So it is fairly natural for a server to read before writing any response back to the client.
That said, you could, if you felt like it, dump version information to the client before you do any reading. To see the effect, connect to your server using telnet.
You can perform them in either order. However, a server will normally generate a response from the read() operation, then write it with the write() operation, so this order makes sense.
If you're handling multiple clients, you should use a multiplexer like select() to notify you when clients have data ready to read, so your server won't lock up every time you try to read() from a client that hasn't sent anything.
It isn't a requirement; a server program can write to the socket without reading first. But in many cases the server must know what the client wants, so it calls read() first.
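For concreteness, the socket() / bind() / listen() / accept() / read() / write() order from the question maps directly onto Rust's std::net; here is a minimal sketch of a server that reads the request first and then writes a response (the port and the echo-style reply are placeholders; a real server would parse the request before answering):

use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // socket(), bind(), and listen() are all handled by TcpListener::bind.
    let listener = TcpListener::bind("127.0.0.1:7000")?;
    for stream in listener.incoming() {
        let mut stream = stream?;           // accept()
        let mut buf = [0u8; 512];
        let bytes = stream.read(&mut buf)?; // read the client's request first...
        stream.write_all(&buf[..bytes])?;   // ...then write a response based on it
    }
    Ok(())
}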
