When does a broken pipe occur in a TCP stream?

I am trying to write an echo server in Rust.
use std::net::{TcpStream, TcpListener};
use std::io::prelude::*;

fn main() {
    let listener = TcpListener::bind("0.0.0.0:8000").unwrap();
    for stream in listener.incoming() {
        let stream = stream.unwrap();
        println!("A connection established");
        handle_connection(stream);
    }
}

fn handle_connection(mut stream: TcpStream) {
    let mut buffer = [0; 512];
    stream.read(&mut buffer).unwrap();
    println!("Request: {}", String::from_utf8_lossy(&buffer[..]));
    stream.write(&buffer[..]).unwrap();
    stream.flush().unwrap();
}
The first request with nc localhost 8000 works as expected, but subsequent requests don't. What am I doing wrong? Is the problem in how the server reads requests from clients? There is no error on the server side.
I am sending data by typing them on the terminal:
$ nc localhost 8000
hi
hi
hello
# no response
# on pressing enter
Ncat: Broken pipe.

A 'Broken pipe' error happens when you write to a stream whose other end has been closed. In your example, your handle_connection routine reads a single buffer from the client, copies it back to the client, and then returns, which drops the TcpStream and closes the connection. When you run netcat from the terminal like that, the terminal defaults to line buffering, so each line you type is sent to the server as a single write.
The first line is sent, read by the server, echoed back, and then the server closes the connection. Netcat gets a second line, writes that to the socket, and gets a 'Broken pipe' because the server has closed the connection.
If you want your server to read multiple messages, you need to have your handle_connection routine loop, reading from the stream until it gets an EOF.

What is the difference between a gRPC stream error and a TCP error?

The documentation for "google.golang.org/grpc/codes" notes that some errors can be generated by gRPC and some cannot, and that a gRPC stream corresponds to a stream in HTTP/2. Does that mean the TCP connection itself has an error when a gRPC stream throws one? Do I then need to reconnect the TCP connection, or is there a way to reconnect only the stream?
for {
    request, err3 := stream.Recv()
    if err3 == io.EOF {
        return nil
    }
    if err3 != nil {
        return err3 // how can I handle this error (grpc generated)?
    }
    // do something with request
}
gRPC-go handles network level reconnections for you: https://pkg.go.dev/google.golang.org/grpc#ClientConn
A ClientConn encapsulates a range of functionality including name resolution, TCP connection establishment (with retries and backoff) and TLS handshakes. It also handles errors on established connections by re-resolving the name and reconnecting.
If Recv returns an error that is not of type io.EOF, it means that something went wrong (including network errors) and that you have to request a new stream: https://github.com/grpc/grpc-go/blob/master/stream.go#L126 — but you don't have to worry about creating a new TCP connection.
RecvMsg blocks until it receives a message into m or the stream is done. It returns io.EOF when the stream completes successfully. On any other error, the stream is aborted and the error contains the RPC status.
If it can't get a new stream, that means it can't connect to the server or that something is very wrong with it; but if it is a transient network error, you will eventually be able to get a new stream.

Connecting output of one pipe to input of one FIFO

I am trying to write a client-server program in which three executables D1, D2 and D3 produce some data as output. Each client requests one of these data sources and sends its pid to the server through a common FIFO. The structure for sending this request is:
struct Request
{
    char p[10]; // the pid of the client program in string form
    int req;    // 1, 2 or 3 depending on which is required: D1, D2 or D3
};
After getting a request the server will open a fifo whose pathname is the pid of the client. So it works as a client specific fifo.
mkfifo(pid, 0666); // mkfifo takes a mode, not open() flags
int fd1 = open(pid, O_WRONLY);
Now, suppose the req field is 1. If it is the first request for D1, the Server will run:
FILE* fp = popen("./D1","r");
int fd = fileno(fp); //for getting the file descriptor for the reading end of the pipe connected to D1
Now I want my client to read from the pipe of D1. D1 contains a simple loop:
while(1)
{
    write(1, "Data from D1", 12);
    sleep(1);
}
I tried dup2(fd, fd1) but it did not work. Is there any way of connecting the two file descriptors fd and fd1?
Also, if another client requests for D1, how to connect the file descriptor of client2 to fd so that both clients receive the same message together?
Instead of "connecting" two file descriptors, you can send the file descriptor to the client and let the client read:
The server listens on a UNIX stream socket.
The client connects the socket and sends the request.
The server receives the request, does popen and obtains the file descriptor.
The server then sends the file descriptor to the client and closes the file descriptor.
The client receives the file descriptor and reads from it till EOF.
See man unix(7) for details about sending file descriptors between processes with SCM_RIGHTS.
Alternatively, instead of using popen:
The server forks itself. The child does mkfifo (the client passed the filename in the request), opens it for write and redirects its stdout into the named pipe's file descriptor.
The child execs the application. This application writes into stdout and that goes into the named pipe.
The client opens the named pipe and reads the output of the application. The client can unlink the pipe filename after opening it.

Using a KillSwitch in an akka http streaming request

I'm using Akka's HTTP client to make a connection to an infinitely streaming HTTP endpoint. I am having difficulty getting the client to close the upstream to the HTTP server.
Here's my code (StreamRequest().stream returns a Source[T, Any]; it's generated by Http().outgoingConnectionHttps followed by a Flow[HttpResponse, T, NotUsed] that converts the HttpResponse into a stream of T):
val (killSwitch, tFuture) = StreamRequest()
  .stream
  .takeWithin(timeToStreamFor)
  .take(toPull)
  .viaMat(KillSwitches.single)(Keep.right)
  .toMat(Sink.seq)(Keep.both)
  .run()
Then I have
tFuture.onComplete { _ =>
  info(s"Shutting down the connection")
  killSwitch.shutdown()
}
When I run the code I see the 'Shutting down the connection' log message but the server tells me that I'm still connected. It disconnects only when the JVM exits.
Any ideas what I'm doing wrong or what I should be doing differently here?
Thanks!
I suspect you should invoke Http().shutdownAllConnectionPools() when tFuture completes. The pool does not close connections because they can be reused by different stream materialisations, so when one stream completes it does not close the pool. The disconnect you eventually see in the log is probably the idle timeout triggering for one of the connections.

Cannot reconnect to serial port that was previously opened in the same TCL script

I have been working towards automating hardware testing using TCL, where the hardware is connected to a serial port. The current script can connect to the serial port the first time through, and disconnect at the end. However, it cannot reconnect to the serial port again unless the application is closed and reopened.
The code to connect to the serial port is:
if { [catch {spawn -open [open $port r+]} results] } {
    puts $results
    puts "Could not connect to port.\n"
    return -1
}
with the successful return statement being return $spawn_id
The code that is supposed to close the connection to the serial port is:
if {[catch {close -i $handle} results]} {
    puts "$results"
    puts "Failed to Close Session $handle\n\r"
    return -1
}
# waits for handle to be properly closed
exp_wait
where $handle is the spawn_id returned by the open procedure.
I wrote a short test script to demonstrate how I am trying to use this:
source console.tcl
puts "available COM ports are: [console::availableSerial]"
set handle [console::openSession COM6 BARE>]
if {[catch {console::closeSession $handle} results]} {
    puts $results
}
if {[catch {console::openSession COM6 BARE>} results]} {
    puts $results
}
where 'console::' is the namespace of the open and close procedures in question
I have tried playing around with some of the fconfigure parameters, such as enabling and disabling blocking, but to no avail.
The error message displayed by Tcl is `couldn't open serial "COM6": permission denied`, suggesting the port is not being closed properly. The book 'Exploring Expect' does not have much information specific to this, so I was hoping someone here could provide some insight into what I am doing wrong. I am using 32-bit ActiveState ActiveTcl 8.6.3.1 and my shell is tclsh36.
Any feedback will be appreciated. Thanks.
The problem here stems from using a proc to deal with a spawned connection. If a connection is spawned in one procedure (call it foo1), then another procedure (call it foo2) cannot immediately interact with it. For this to work, the spawn_id must be returned from foo1 and passed in as a parameter to foo2. This affects not only sending information to the spawned connection, but also receiving information from it.
In my case, I called close -i $handle, which was correct, but then simply called exp_wait. That exp_wait was not using the spawn_id passed in, so it was not waiting for the correct response.
The fix was to simply replace exp_wait with exp_wait -i $handle

How do I detect tcp-client disconnect with gen_tcp?

I'm trying to use the gen_tcp module.
Here is an example of server-side code that I'm having trouble with.
%% First, bind the server port and wait for a peer connection
{ok, Sock} = gen_tcp:listen(7890, [{active, false}]),
{ok, Peer} = gen_tcp:accept(Sock),
%% Here the client calls `gen_tcp:close/1` on its socket and goes away.
%% After that, I try to send some message to the client
SendResult = gen_tcp:send(Peer, <<"HELLO">>),
%% Now I expect the send to fail with {error, closed}, but...
ok = SendResult.
When I call gen_tcp:send/2 again, the second call returns {error, closed} as expected. But I want to understand why the first call succeeded. Am I missing some TCP-specific details?
This strange (for me) behavior is only for {active, false} connection.
In short, the reason for this is that there's no activity on the socket that can determine that the other end has closed. The first send appears to work because it operates on a socket that, for all intents and purposes, appears to be connected and operational. But that write activity determines that the other end is closed, which is why the second send fails as expected.
If you were to first read or recv from the socket, you'd quickly learn the other end was closed. Alternatively, if the socket were in an Erlang active mode, then you'd also learn of the other end closing because active modes poll the socket.
Aside from whether the socket is in an active mode or not, this has nothing to do with Erlang. If you were to write C code directly to the sockets API, for example, you'd see the same behavior.
