net.TCPConn allowing Write after FIN packet - tcp

I'm trying to write unit tests for some server-side code, but I'm having trouble making my shutdown test cases deterministic. It seems a loopback TCP connection isn't correctly handling a clean shutdown. I've reproduced this in a sample app which does the following in lockstep:
1. Create a client and server connection.
2. Verify connectivity by sending a message successfully from client to server.
3. Use channels to tell the server to call conn.Close() and wait until that call has completed.
4. (Try to) verify the connection is cleanly broken by calling Write on the client connection again.
Step 4 succeeds without error. I've tried using a json.Encoder and a bare call to TCPConn.Write. I checked the traffic with Wireshark. The server sent a FIN packet, but the client never does (even with a 1s sleep). The server even sent an RST packet in response to (4), and the client conn.Write still returned nil for its error.
This seems totally bonkers. Am I missing something here? Currently running Go 1.2.1 on Darwin.
Edit: Obligatory repro
package main

import (
    "bufio"
    "fmt"
    "net"
)

var (
    loopback    = make(chan string)
    shouldClose = make(chan struct{})
    didClose    = make(chan struct{})
)

func serve(listener *net.TCPListener) {
    conn, err := listener.Accept()
    if err != nil {
        panic(err)
    }
    s := bufio.NewScanner(conn)
    if !s.Scan() {
        panic(fmt.Sprint("Failed to scan for line: ", s.Err()))
    }
    loopback <- s.Text() + "\n"
    <-shouldClose
    conn.Close()
    close(didClose)
    if s.Scan() {
        panic("Expected error reading from a socket closed on this side")
    }
}

func main() {
    listener, err := net.ListenTCP("tcp", &net.TCPAddr{})
    if err != nil {
        panic(err)
    }
    go serve(listener)
    conn, err := net.Dial("tcp", listener.Addr().String())
    if err != nil {
        panic(fmt.Sprint("Dialer got error ", err))
    }
    oracle := "Mic check\n"
    if _, err = conn.Write([]byte(oracle)); err != nil {
        panic(fmt.Sprint("Dialer failed to write oracle: ", err))
    }
    test := <-loopback
    if test != oracle {
        panic("Server did not receive the value sent by the client")
    }
    close(shouldClose)
    <-didClose
    // For giggles, I can also add a <-time.After(500 * time.Millisecond)
    if _, err = conn.Write([]byte("This should fail after active disconnect")); err == nil {
        panic("Sender 'successfully' wrote to a closed socket")
    }
}

This is how an active close of a TCP connection works: the server's Close() sends a FIN, which only shuts down its half of the connection. When the client detects that the server has closed, it is then expected to close its half of the connection.
In your case, instead of closing the client you're sending more data. This causes the server to send an RST packet to force the connection closed, since data arriving after the close isn't valid.
If you're still unsure, here's an equivalent Python client and server which display the same behavior. (I find Python helpful here, since it closely follows the underlying BSD socket API without dropping into C.)
Server:
import socket, time
server = socket.socket()
server.bind(("127.0.0.1", 9999))
server.listen(1)
sock, addr = server.accept()
msg = sock.recv(1024)
print msg
print "closing"
sock.close()
time.sleep(3)
print "done"
Client:
import socket, time
sock = socket.socket()
sock.connect(("127.0.0.1", 9999))
sock.send("test\n")
time.sleep(1)
print "sending again!"
sock.send("no error here")
time.sleep(1)
print "sending one last time"
sock.send("broken pipe this time")
To properly detect a remote close on the connection, call Read() and look for an io.EOF error in return.
// we technically need to try and read at least one byte,
// or we will get an EOF even if the connection isn't closed.
buff := make([]byte, 1)
if _, err := conn.Read(buff); err != io.EOF {
    panic("connection not closed")
}
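Applied to the repro above, the end of main could verify the shutdown by reading rather than writing. A minimal sketch (it assumes "io" and "time" are added to the imports; exactly which Write finally errors can vary with timing and platform):
    close(shouldClose)
    <-didClose
    // The server's Close() only sends a FIN; detect it on the client by reading.
    buf := make([]byte, 1)
    if _, err := conn.Read(buf); err != io.EOF {
        panic(fmt.Sprint("expected io.EOF after server close, got: ", err))
    }
    // A Write can still appear to succeed: the data sits in the local socket
    // buffer and the server answers it with an RST. Typically only a later
    // Write surfaces an error such as "broken pipe".
    conn.Write([]byte("first write may still return nil\n"))
    time.Sleep(100 * time.Millisecond) // give the RST time to arrive
    if _, err := conn.Write([]byte("second write\n")); err == nil {
        panic("expected an error writing to a reset connection")
    }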

Related

why is golang http server failing with "broken pipe" when response exceeds 8kb?

I have an example web server below where if you call curl localhost:3000 -v and then ^C (cancel) it immediately (before 1 second), it will report write tcp 127.0.0.1:3000->127.0.0.1:XXXXX: write: broken pipe.
package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

func main() {
    log.Fatal(http.ListenAndServe(":3000", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(1 * time.Second)
        // Why 8061 bytes? Because the response header on my computer
        // is 132 bytes, adding up the entire response to 8193 (1 byte
        // over 8kb)
        if _, err := w.Write(make([]byte, 8061)); err != nil {
            fmt.Println(err)
            return
        }
    })))
}
Based on my debugging, I have been able to conclude that this only happens if the entire response writes more than 8192 bytes (8 KB). If the entire response is less than 8192 bytes, the broken pipe error is not returned.
My question is: where is this 8192-byte (8 KB) buffer limit set? Is this a limit in Go's HTTP write buffer? Is this related to the response being chunked? Is this only related to the curl client or the browser client? How can I change this limit so I can have a bigger buffer written before the connection is closed (for debugging purposes)?
Thanks!
In net/http/server.go the output buffer is set to 4<<10, i.e. 4KB.
The reason you see the error at 8KB is that it takes at least two writes to a socket to detect a closed remote connection. The first write succeeds, but the remote host sends an RST packet. The second write goes to a closed socket, which is what returns the broken pipe error.
Depending on the socket write buffer, and the connection latency, it's possible that even more writes could succeed before the first RST packet is registered.
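A way to watch the two-write behaviour directly, as a hedged variation on the handler above: flush after each chunk so every Write reaches the socket instead of the 4 KB buffer. With the client disconnected, the first flushed write is still accepted (and triggers the RST); a later one usually returns the broken pipe error.
package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

func main() {
    log.Fatal(http.ListenAndServe(":3000", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        f, _ := w.(http.Flusher)
        for i := 0; i < 4; i++ {
            time.Sleep(1 * time.Second) // leave time to ^C the curl request
            if _, err := w.Write(make([]byte, 1024)); err != nil {
                fmt.Println("write", i, "failed:", err) // usually the write after the RST has arrived
                return
            }
            if f != nil {
                f.Flush() // push this chunk to the TCP connection immediately
            }
        }
    })))
}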
It is a broken pipe, but you should use ioutil.ReadAll for a small response body, or io.Copy for a large one.
For ioutil.ReadAll
defer response.Body.Close()
body, err := ioutil.ReadAll(response.Body)
if err != nil {
    logger.Errorf(ctx, "err is %+v", err)
    return nil, err
}
For io.Copy
// Pre-allocate a 10 MB buffer for the copy.
buf := bytes.NewBuffer(make([]byte, 0, 10485760))
if _, err := io.Copy(buf, response.Body); err != nil {
    return nil, err
}
body := buf.Bytes()

Keeping a TCP connection alive in a goroutine and checking whether it times out if the connection is lost

I have a TCP server up and running, listening on a port, with a goroutine for handling the connections. I wonder if it's possible to have a goroutine running for every connection, keeping them alive with net.SetKeepAlive(true), and with error handling so that if the connection times out it will execute cleanup functions, like removing the connection from a list?
Handle routine:
func handleConnection(conn net.Conn, rec chan string) {
    var item QueItem
    buf := make([]byte, bufSize)
    l, err := conn.Read(buf)
    if err != nil || l < 0 {
        fmt.Println("Error reading from conn: ", conn)
        fmt.Println("Error reading: ", err)
    }
    err = json.Unmarshal(buf[:l], &item)
    if err != nil {
        fmt.Println("Error converting to JSON", err)
    }
    fmt.Printf("Received : %+v\n", item)
    fmt.Println("received from:", conn.RemoteAddr())
    rec <- item.IP
}
TCPserver:
for {
    conn, err := ln.Accept()
    if err != nil {
        fmt.Println("No accept", err)
        log.Println("Unable to accept connection", err)
    }
    go handleConnection(conn, recived)
}
To keep a check on established TCP connections, keep-alive can be done in two ways.
At the application level:
In idle conditions, both client and server agree on a protocol for sending pre-determined packets to each other. The lack of a message from the peer within a certain time signals a problem with the connection.
At the TCP layer:
Enable the TCP stack to check the connection status. The TCP layer on which this mechanism is enabled sends keep-alive messages at a regular, configurable interval and expects the peer's TCP stack to acknowledge them. The absence of an ACK after the necessary retransmissions signals a connection problem, and the application is duly notified.
I think conn.SetKeepAlive(true) (on a *net.TCPConn) is the Go way of telling the TCP stack to do keep-alive; you don't need any special constructs in the goroutine (a sketch follows below).
TCP keep-alive works well, and the application isn't burdened with checking the connection status.
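A minimal sketch of how that could look, with keep-alive enabled per connection and a cleanup callback run when Read fails; keepAliveHandler, the :9000 port, and the cleanup callback are hypothetical stand-ins for your own connection-list handling:
package main

import (
    "log"
    "net"
    "time"
)

// keepAliveHandler turns on TCP keep-alive for one connection and runs a
// cleanup callback when the connection dies (keep-alive timeout, EOF, error).
func keepAliveHandler(conn net.Conn, cleanup func(net.Conn)) {
    defer func() {
        cleanup(conn)
        conn.Close()
    }()
    if tcp, ok := conn.(*net.TCPConn); ok {
        tcp.SetKeepAlive(true)
        tcp.SetKeepAlivePeriod(30 * time.Second) // probe interval; tune as needed
    }
    buf := make([]byte, 4096)
    for {
        n, err := conn.Read(buf)
        if err != nil {
            log.Println("connection gone:", conn.RemoteAddr(), err)
            return
        }
        _ = buf[:n] // hand the bytes to your protocol handling here
    }
}

func main() {
    ln, err := net.Listen("tcp", ":9000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Println("accept:", err)
            continue
        }
        go keepAliveHandler(conn, func(c net.Conn) {
            log.Println("removing from list:", c.RemoteAddr()) // replace with real list removal
        })
    }
}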

Go bufio.Scanner stops while reading TCP connection to Redis

Reading the TCP connection to a Redis server using bufio.Scanner:
fmt.Fprintf(conn, "*3\r\n$3\r\nSET\r\n$5\r\nmykey\r\n$7\r\nHello!!\r\n")
scanner := bufio.NewScanner(conn)
for {
    // fmt.Println("marker00")
    if ok := scanner.Scan(); !ok {
        // fmt.Println("marker01")
        break
    }
    // fmt.Println("marker02")
    fmt.Println(scanner.Text())
}
"+OK" comes as the result for first scanning, but the second scanning stops just in invoking Scan method. (marker00 -> marker02 -> marker00 and no output any more)
Why does Scan stop and how can I know the end of TCP response (without using bufio.Reader)?
Redis does not close the connection for you after answering a command. Scan() only ends on io.EOF, which is never sent.
Check out this:
package main

import (
    "bufio"
    "fmt"
    "net"
)

// before go run, you must hit `redis-server` to wake redis up
func main() {
    conn, _ := net.Dial("tcp", "localhost:6379")
    message := "*3\r\n$3\r\nSET\r\n$1\r\na\r\n$1\r\nb\r\n"
    go func(conn net.Conn) {
        for i := 0; i < 10; i++ {
            fmt.Fprintf(conn, message)
        }
    }(conn)
    scanner := bufio.NewScanner(conn)
    for {
        if ok := scanner.Scan(); !ok {
            break
        }
        fmt.Println(scanner.Text())
    }
    fmt.Println("Scanning ended")
}
Old question, but I had the same issue. Two solutions:
1) Add a "QUIT\r\n" command to your Redis message. This will cause Redis to close the connection which will terminate the scan. You'll have to deal with the extra "+OK" that the quit outputs.
2) Add
conn.SetReadDeadline(time.Now().Add(time.Second*5))
just before you start scanning. This will cause the scan to stop trying after 5 seconds. Unfortunately, it will always take 5 seconds to complete the scan so choose this time wisely.
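A sketch of option 2 in context (assuming "time" and "net" are imported alongside "bufio" and "fmt"): the deadline surfaces through scanner.Err(), and a net.Error with Timeout() == true simply means the reply stream went quiet.
conn.SetReadDeadline(time.Now().Add(5 * time.Second)) // stop waiting after 5s of silence

scanner := bufio.NewScanner(conn)
for scanner.Scan() {
    fmt.Println(scanner.Text())
}
if err, ok := scanner.Err().(net.Error); ok && err.Timeout() {
    fmt.Println("no reply within the deadline; treating this as the end")
} else if scanner.Err() != nil {
    fmt.Println("read error:", scanner.Err())
}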

How come the redis-benchmark command is not following the redis protocol?

I was reading directly from a TCP connection after running the redis-benchmark command, and as far as I can tell, redis-benchmark is NOT following the Redis protocol.
The Redis protocol, as stated on its website:
The way RESP is used in Redis as a request-response protocol is the
following:
Clients send commands to a Redis server as a RESP Array of Bulk Strings.
The server replies with one of the RESP types according to the command implementation.
Meaning that a correct client implementation must always send RESP arrays of bulk strings.
If that is true, then anything that does not start with a * is considered a syntax error (since it's not a RESP array).
Thus, if one were to send a ping command to a redis-server, it must be sent as a RESP array of length 1 containing 1 bulk string with the word ping. For example:
"*1\r\n$4\r\nPING\r\n"
However, whenever I listen directly to the redis-benchmark command and read its tcp connection I get instead:
"PING\r\n"
which does not follow the Redis protocol. Is that a bug, or is there something implied in the Redis protocol that makes pings special? As far as I could tell, I couldn't find anything that said that pings were special, nor that length-1 commands were special. Does someone know what's going on?
To reproduce these results yourself, you can copy my code and inspect it directly:
package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    RedisBenchmark()
}

func RedisBenchmark() {
    url := "127.0.0.1:6379"
    fmt.Println("listen: ", url)
    ln, err := net.Listen("tcp", url) //announces on local network
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept() //waits and returns the next connection to the listener
        if err != nil {
            log.Fatal(err)
        }
        tcpConn := conn.(*net.TCPConn)
        go HandleConnection(tcpConn)
    }
}

func HandleConnection(tcpConn *net.TCPConn) {
    b := make([]byte, 256) //TODO how much should I read at a time?
    n, err := tcpConn.Read(b)
    if err != nil {
        fmt.Println("n: ", n)
        log.Fatal(err)
    }
    fmt.Printf("+++++> raw input string(b): %q\n", string(b))
    msg := string(b[:n])
    fmt.Printf("+++++> raw input msg: %q\n", msg)
}
and run it using go with:
go run main.go
followed on a different terminal (or tmux pane):
redis-benchmark
for all the tests, or if you only want to run ping with 1 client:
redis-benchmark -c 1 -t ping -n 1
you can see the details of how I am running it with the flags at: http://redis.io/topics/benchmarks
That is called an inline command. Check the Inline Commands section of the Redis Protocol article.
You can refer to the source code to find out the differences between inline commands and RESP.
readQueryFromClient
|--> if the command begins with *         --> processMultibulkBuffer(): process it as RESP
|
|--> if the command does not begin with * --> processInlineBuffer(): process it as an inline command
RESP is a more efficient format for the Redis server to parse.
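For example, the HandleConnection from the question above could branch on the first byte to tell the two forms apart (a rough sketch; a real parser would still have to walk the RESP array and bulk-string lengths):
func HandleConnection(tcpConn *net.TCPConn) {
    b := make([]byte, 256)
    n, err := tcpConn.Read(b)
    if err != nil {
        log.Fatal(err)
    }
    msg := b[:n]
    if len(msg) > 0 && msg[0] == '*' {
        // RESP: an array of bulk strings, e.g. "*1\r\n$4\r\nPING\r\n"
        fmt.Printf("RESP command: %q\n", msg)
    } else {
        // Inline command: space-separated words ending in CRLF, e.g. "PING\r\n"
        fmt.Printf("inline command: %q\n", msg)
    }
}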

Strange behaviour of golang UDP server

I wrote a simple UDP server in Go.
When I do go run udp.go it prints all packets I send to it. But when running go run udp.go > out, it stops writing to the out file when the client stops.
The client is a simple program that sends 10k requests. So in the file I have around 50% of the sent packets. When I run the client again, the out file grows again until the client script finishes.
Server code:
package main

import (
    "fmt"
    "net"
)

func main() {
    addr, _ := net.ResolveUDPAddr("udp", ":2000")
    sock, _ := net.ListenUDP("udp", addr)
    i := 0
    for {
        i++
        buf := make([]byte, 1024)
        rlen, _, err := sock.ReadFromUDP(buf)
        if err != nil {
            fmt.Println(err)
        }
        fmt.Println(string(buf[0:rlen]))
        fmt.Println(i)
        //go handlePacket(buf, rlen)
    }
}
And here is the client code:
package main

import (
    "fmt"
    "net"
)

func main() {
    num := 0
    for i := 0; i < 100; i++ {
        for j := 0; j < 100; j++ {
            num++
            con, _ := net.Dial("udp", "127.0.0.1:2000")
            fmt.Println(num)
            buf := []byte("bla bla bla I am the packet")
            _, err := con.Write(buf)
            if err != nil {
                fmt.Println(err)
            }
        }
    }
}
As you suspected, it seems like UDP packet loss due to the nature of UDP. Because UDP is connectionless, the client doesn't care if the server is available or ready to receive data. So if the server is busy processing, it won't be available to handle the next incoming datagram. You can check with netstat -su (the UDP statistics include receive errors). I ran into the same thing, in which the server (receive side) could not keep up with the packets sent.
You can try two things (the second worked for me with your example):
Call SetReadBuffer. Ensure the receive socket has enough buffering to handle everything you throw at it.
sock, _ := net.ListenUDP("udp", addr)
sock.SetReadBuffer(1048576)
Do all packet processing in a goroutine. Try to increase the datagrams handled per second by ensuring the server isn't busy doing other work when you want it to be available to receive, i.e. move the processing work to a goroutine so you don't hold up ReadFromUDP().
// Reintroduce your go handlePacket(buf, rlen) with a count param
func handlePacket(buf []byte, rlen int, count int) {
    fmt.Println(string(buf[0:rlen]))
    fmt.Println(count)
}
...
go handlePacket(buf, rlen, i)
One final option:
Lastly, and probably not what you want: put a sleep in your client, which slows the send rate and would also remove the problem, e.g.
buf := []byte("bla bla bla I am the packet")
time.Sleep(100 * time.Millisecond)
_, err := con.Write(buf)
Try syncing stdout after the write statements.
os.Stdout.Sync()
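In the server's read loop that would look like this (a sketch, assuming "os" is added to the imports; Sync flushes the OS-level file buffers for the redirected output):
    for {
        i++
        buf := make([]byte, 1024)
        rlen, _, err := sock.ReadFromUDP(buf)
        if err != nil {
            fmt.Println(err)
        }
        fmt.Println(string(buf[0:rlen]))
        fmt.Println(i)
        os.Stdout.Sync() // push the output to the redirected file right away
    }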
