Reading a TCP connection to a Redis server using bufio.Scanner
fmt.Fprintf(conn, "*3\r\n$3\r\nSET\r\n$5\r\nmykey\r\n$7\r\nHello!!\r\n")
scanner := bufio.NewScanner(conn)
for {
    // fmt.Println("marker00")
    if ok := scanner.Scan(); !ok {
        // fmt.Println("marker01")
        break
    }
    // fmt.Println("marker02")
    fmt.Println(scanner.Text())
}
"+OK" comes as the result for first scanning, but the second scanning stops just in invoking Scan method. (marker00 -> marker02 -> marker00 and no output any more)
Why does Scan stop and how can I know the end of TCP response (without using bufio.Reader)?
Redis does not close the connection for you after answering a command. Scan() only stops on an error or io.EOF, and since no EOF is ever sent, the second call to Scan blocks waiting for more data.
Check out this:
package main

import (
    "bufio"
    "fmt"
    "log"
    "net"
)

// Before `go run`, start `redis-server` so there is something to connect to.
func main() {
    conn, err := net.Dial("tcp", "localhost:6379")
    if err != nil {
        log.Fatal(err)
    }
    message := "*3\r\n$3\r\nSET\r\n$1\r\na\r\n$1\r\nb\r\n"
    go func(conn net.Conn) {
        for i := 0; i < 10; i++ {
            fmt.Fprint(conn, message)
        }
    }(conn)
    scanner := bufio.NewScanner(conn)
    for {
        if ok := scanner.Scan(); !ok {
            break
        }
        fmt.Println(scanner.Text())
    }
    fmt.Println("Scanning ended")
}
Old question, but I had the same issue. Two solutions:
1) Add a "QUIT\r\n" command to your Redis message. This will cause Redis to close the connection which will terminate the scan. You'll have to deal with the extra "+OK" that the quit outputs.
2) Add
conn.SetReadDeadline(time.Now().Add(time.Second*5))
just before you start scanning. This causes the scan to give up after 5 seconds. Unfortunately, the scan will then always take the full 5 seconds to complete, so choose the timeout wisely.
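A variant of option 2 (a sketch, not from the original answers): refresh the read deadline before every Scan, so the loop ends roughly one second after the last reply arrives instead of always waiting the full timeout.
scanner := bufio.NewScanner(conn)
for {
    // Reset the deadline on each iteration; Scan returns false once a
    // read blocks past it (or on any other error).
    conn.SetReadDeadline(time.Now().Add(time.Second))
    if !scanner.Scan() {
        break
    }
    fmt.Println(scanner.Text())
}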
I recently ran into a problem while developing a high-concurrency HTTP client with valyala/fasthttp: the client works fine for the first ~15K requests, but after that more and more dial tcp4 127.0.0.1:80: i/o timeout and dialing to the given TCP address timed out errors occur.
Sample Code
// Note: these counters are updated from many goroutines without
// synchronization, so the reported numbers are approximate (racy).
var Finished = 0
var Failed = 0
var Success = 0

func main() {
    for i := 0; i < 1000; i++ {
        go get()
    }
    start := time.Now().Unix()
    for {
        fmt.Printf("Rate: %.2f/s Success: %d, Failed: %d\n", float64(Success)/float64(time.Now().Unix()-start), Success, Failed)
        time.Sleep(100 * time.Millisecond)
    }
}

func get() {
    ticker := time.NewTicker(time.Duration(100+rand.Intn(2900)) * time.Millisecond)
    defer ticker.Stop()
    client := &fasthttp.Client{
        MaxConnsPerHost: 10000,
    }
    for {
        req := &fasthttp.Request{}
        req.SetRequestURI("http://127.0.0.1:80/require?number=10")
        req.Header.SetMethod(fasthttp.MethodGet)
        req.Header.SetConnectionClose()
        res := &fasthttp.Response{}
        err := client.DoTimeout(req, res, 5*time.Second)
        if err != nil {
            fmt.Println(err.Error())
            Failed++
        } else {
            Success++
        }
        Finished++
        client.CloseIdleConnections()
        <-ticker.C
    }
}
Detail
The server is built on labstack/echo/v4. When the client gets the timeout errors, the server logs no errors, and performing the same request manually via Postman or a browser such as Chrome works fine.
The client runs well for the first ~15K requests, but after that more and more timeout errors occur and the printed Rate decreases. I searched Google and GitHub, and this issue seems to be the closest match, but it doesn't contain a solution.
Another tiny problem...
As you can notice, when the client starts, it first produces some "the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection" errors, then works fine until around 15K requests, and then starts producing more and more timeout errors. Why does it generate those connection-closed errors at the beginning?
Machine Info
Macbook Pro 14 2021 (Apple M1 Pro) with 16GB Ram and running macOS Monterey 12.4
So basically, if you open a connection and then close it as soon as possible, it's not as simple as "connection #1 uses a port and then immediately hands it back": the closed socket lingers in TIME_WAIT before its ephemeral port can be reused, so under heavy connection churn you eventually run out of ports. If you want to send many requests at the same time, it's better to reuse connections as much as you can.
For example, in fasthttp:
req := fasthttp.AcquireRequest()
res := fasthttp.AcquireResponse()
defer fasthttp.ReleaseRequest(req)
defer fasthttp.ReleaseResponse(res)
// Then do the request below
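To actually reuse connections here (a sketch, not from the original answer; the shared client and its MaxConnsPerHost value are illustrative), share one Client across all goroutines and drop SetConnectionClose() and CloseIdleConnections(), so fasthttp can keep sockets open between requests:
// One client shared by every goroutine; fasthttp.Client is safe for
// concurrent use.
var sharedClient = &fasthttp.Client{
    MaxConnsPerHost: 512, // illustrative value
}

func get() error {
    req := fasthttp.AcquireRequest()
    res := fasthttp.AcquireResponse()
    defer fasthttp.ReleaseRequest(req)
    defer fasthttp.ReleaseResponse(res)

    req.SetRequestURI("http://127.0.0.1:80/require?number=10")
    req.Header.SetMethod(fasthttp.MethodGet)
    // No SetConnectionClose() and no CloseIdleConnections(), so the
    // underlying TCP connection can be reused by the next request.
    return sharedClient.DoTimeout(req, res, 5*time.Second)
}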
I have an API that receives a CSV file to process. I'd like to send back a 202 Accepted (or any status, really) while processing the file in the background. I have a handler that validates the request, writes the success header, and then continues processing via a producer/consumer pattern. The problem is that, due to the WaitGroup.Wait() calls, the accepted header isn't sent back. The error responses from the handler's validation are sent correctly, but that's because of the return statements.
Is it possible to send that 202 Accepted back with the wait groups as I'm hoping (and if so, what am I missing)?
func SomeHandler(w http.ResponseWriter, req *http.Request) {
    endAccepted := time.Now()
    err := verifyRequest(req)
    if err != nil {
        w.WriteHeader(http.StatusBadRequest)
        data := JSONErrors{Errors: []string{err.Error()}}
        json.NewEncoder(w).Encode(data)
        return
    }
    // ...FILE RETRIEVAL CLIPPED (not relevant)...
    // e.g. csvFile, openErr := os.Open(tmpFile.Name())

    //////////////////////////////////////////////////////
    // TODO this isn't sending due to the WaitGroup.Wait()s below
    w.WriteHeader(http.StatusAccepted)
    //////////////////////////////////////////////////////

    // START PRODUCER/CONSUMER
    jobs := make(chan *Job, 100)    // buffered channel
    results := make(chan *Job, 100) // buffered channel
    // start consumers
    for i := 0; i < 5; i++ { // 5 consumers
        wg.Add(1)
        go consume(i, jobs, results)
    }
    // start producing
    go produce(jobs, csvFile)
    // start processing
    wg2.Add(1)
    go process(results)
    wg.Wait() // wait for all workers to finish processing jobs
    close(results)
    wg2.Wait() // wait for process to finish
    log.Println("===> Done Processing.")
}
You're doing all the processing in the background, but you're still waiting for it to finish. The solution is simply not to wait. The cleanest fix would move all of the processing into a separate function that you call with go to run in the background; the simplest fix, keeping it inline, is:
w.WriteHeader(http.StatusAccepted)
go func() {
    // START PRODUCER/CONSUMER
    jobs := make(chan *Job, 100)    // buffered channel
    results := make(chan *Job, 100) // buffered channel
    // start consumers
    for i := 0; i < 5; i++ { // 5 consumers
        wg.Add(1)
        go consume(i, jobs, results)
    }
    // start producing
    go produce(jobs, csvFile)
    // start processing
    wg2.Add(1)
    go process(results)
    wg.Wait() // wait for all workers to finish processing jobs
    close(results)
    wg2.Wait() // wait for process to finish
    log.Println("===> Done Processing.")
}()
Note that you elided the CSV file handling, so you'll need to ensure it's safe to use this way (i.e. that you haven't deferred closing or deleting the file, which would happen as soon as the handler returns).
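For example, a minimal sketch (assuming csvFile is an *os.File opened earlier in the handler): hand the file to the background goroutine and let it own the close and cleanup, instead of deferring them in the handler.
w.WriteHeader(http.StatusAccepted)
go func(f *os.File) {
    defer f.Close()
    defer os.Remove(f.Name()) // only if it was a temp file you created
    // ... run the producer/consumer pipeline above, reading from f ...
}(csvFile)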
I was reading directly from a TCP connection while running the redis-benchmark command, and as far as I can tell, redis-benchmark is NOT following the Redis protocol.
The redis protocol is as stated in its website:
The way RESP is used in Redis as a request-response protocol is the
following:
Clients send commands to a Redis server as a RESP Array of Bulk Strings.
The server replies with one of the RESP types according to the command implementation.
Meaning that a correct client implementation must always send RESP arrays of bulk strings.
If that is true, then anything that does not start with a * should be a syntax error (since it's not a RESP array).
Thus, if one were to send a PING command to a redis-server, it would have to be sent as a RESP array of length 1 containing one bulk string with the word PING. For example:
"*1\r\n$4\r\nPING\r\n"
However, whenever I listen directly to the redis-benchmark command and read its tcp connection I get instead:
"PING\r\n"
which does not follow the Redis protocol. Is this a bug, or is there something implied in the Redis protocol that makes PINGs special? As far as I could tell, nothing says that PINGs, or length-1 commands, are special. Does anyone know what's going on?
To reproduce these results yourself, you can copy my code and inspect it directly:
package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    RedisBenchmark()
}

func RedisBenchmark() {
    url := "127.0.0.1:6379"
    fmt.Println("listen: ", url)
    ln, err := net.Listen("tcp", url) // announces on the local network
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept() // waits for and returns the next connection to the listener
        if err != nil {
            log.Fatal(err)
        }
        tcpConn := conn.(*net.TCPConn)
        go HandleConnection(tcpConn)
    }
}

func HandleConnection(tcpConn *net.TCPConn) {
    b := make([]byte, 256) // TODO how much should I read at a time?
    n, err := tcpConn.Read(b)
    if err != nil {
        fmt.Println("n: ", n)
        log.Fatal(err)
    }
    fmt.Printf("+++++> raw input string(b): %q\n", string(b))
    msg := string(b[:n])
    fmt.Printf("+++++> raw input msg: %q\n", msg)
}
and run it using go with:
go run main.go
followed on a different terminal (or tmux pane):
redis-benchmark
to run all the tests, or, if you only want to run PING with a single client:
redis-benchmark -c 1 -t ping -n 1
you can see the details of how I am running it with the flags at: http://redis.io/topics/benchmarks
That is called an inline command. Check the Inline Commands section of the Redis Protocol article.
You can refer to the source code to find out the differences between inline command and RESP.
readQueryFromClient
|--> if the command begins with *         --> processMultibulkBuffer(): process it as RESP
|
|--> if the command does not begin with * --> processInlineBuffer(): process it as an inline command
RESP is more efficient for the Redis server to parse.
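A rough Go equivalent of that dispatch, if the server from the question wanted to mimic it (both parse functions are hypothetical placeholders):
func handleQuery(line []byte) {
    if len(line) > 0 && line[0] == '*' {
        parseRESPArray(line) // RESP array of bulk strings
    } else {
        parseInlineCommand(line) // space-separated, e.g. "PING\r\n"
    }
}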
I'm trying to write unit tests for some server-side code, but I'm having trouble making my shutdown test cases deterministic. It seems a loopback TCP connection isn't handling a clean shutdown the way I expect. I've reproduced this in a sample app which does the following in lockstep:
Create a client & server connection.
Verify connectivity by sending a message successfully from client to server.
Use channels to tell the server to call conn.Close() and wait until that call has completed.
(Try to) verify the connection is cleanly broken by calling Write on the client connection again.
Step 4 succeeds without error. I've tried using a json.Encoder and a bare call to TCPConn.Write, and I checked the traffic with Wireshark. The server sent a FIN packet, but the client never did (even with a 1s sleep). The server even sent a RST packet in response to (4), and the client's conn.Write still returned nil for its error.
This seems totally bonkers. Am I missing something here? Currently running Go v1.2.1/Darwin
Edit: Obligatory repro
package main

import (
    "bufio"
    "fmt"
    "net"
)

var (
    loopback    = make(chan string)
    shouldClose = make(chan struct{})
    didClose    = make(chan struct{})
)

func serve(listener *net.TCPListener) {
    conn, err := listener.Accept()
    if err != nil {
        panic(err)
    }
    s := bufio.NewScanner(conn)
    if !s.Scan() {
        panic(fmt.Sprint("Failed to scan for line: ", s.Err()))
    }
    loopback <- s.Text() + "\n"
    <-shouldClose
    conn.Close()
    close(didClose)
    if s.Scan() {
        panic("Expected error reading from a socket closed on this side")
    }
}

func main() {
    listener, err := net.ListenTCP("tcp", &net.TCPAddr{})
    if err != nil {
        panic(err)
    }
    go serve(listener)
    conn, err := net.Dial("tcp", listener.Addr().String())
    if err != nil {
        panic(fmt.Sprint("Dialer got error ", err))
    }
    oracle := "Mic check\n"
    if _, err = conn.Write([]byte(oracle)); err != nil {
        panic(fmt.Sprint("Dialer failed to write oracle: ", err))
    }
    test := <-loopback
    if test != oracle {
        panic("Server did not receive the value sent by the client")
    }
    close(shouldClose)
    <-didClose
    // For giggles, I can also add a <-time.After(500 * time.Millisecond)
    if _, err = conn.Write([]byte("This should fail after active disconnect")); err == nil {
        panic("Sender 'successfully' wrote to a closed socket")
    }
}
This is how an active close of a TCP connection works: when the client detects that the server has closed, it is expected to close its half of the connection as well.
In your case, instead of closing the client side, you send more data. This causes the server to respond with an RST packet, forcing the connection closed, because data arrived on a socket it had already closed.
If you're still unsure, here's an equivalent Python client and server which display the same behavior. (I find Python helpful here, since it closely follows the underlying BSD socket API without dropping into C.)
Server:
import socket, time
server = socket.socket()
server.bind(("127.0.0.1", 9999))
server.listen(1)
sock, addr = server.accept()
msg = sock.recv(1024)
print msg
print "closing"
sock.close()
time.sleep(3)
print "done"
Client:
import socket, time
sock = socket.socket()
sock.connect(("127.0.0.1", 9999))
sock.send("test\n")
time.sleep(1)
print "sending again!"
sock.send("no error here")
time.sleep(1)
print "sending one last time"
sock.send("broken pipe this time")
To properly detect a remote close on the connection, you should call Read() and check for an io.EOF error in return.
// we technically need to try and read at least one byte,
// or we will get an EOF even if the connection isn't closed.
buff := make([]byte, 1)
if _, err := conn.Read(buff); err != io.EOF {
    panic("connection not closed")
}
I wrote a simple UDP server in Go.
When I run it with go run udp.go, it prints every packet I send to it. But when I run go run udp.go > out, the out file stops growing once the client stops.
The client is a simple program that sends 10k packets, and the file ends up containing only around 50% of them. When I run the client again, the out file grows again until the client finishes.
Server code:
package main

import (
    "fmt"
    "net"
)

func main() {
    addr, _ := net.ResolveUDPAddr("udp", ":2000")
    sock, _ := net.ListenUDP("udp", addr)
    i := 0
    for {
        i++
        buf := make([]byte, 1024)
        rlen, _, err := sock.ReadFromUDP(buf)
        if err != nil {
            fmt.Println(err)
        }
        fmt.Println(string(buf[0:rlen]))
        fmt.Println(i)
        //go handlePacket(buf, rlen)
    }
}
And here is the client code:
package main

import (
    "fmt"
    "net"
)

func main() {
    num := 0
    for i := 0; i < 100; i++ {
        for j := 0; j < 100; j++ {
            num++
            con, _ := net.Dial("udp", "127.0.0.1:2000")
            fmt.Println(num)
            buf := []byte("bla bla bla I am the packet")
            _, err := con.Write(buf)
            if err != nil {
                fmt.Println(err)
            }
        }
    }
}
As you suspected, this looks like UDP packet loss, which is in the nature of UDP. Because UDP is connectionless, the client doesn't care whether the server is available or ready to receive data, so if the server is busy processing one datagram, it won't be available to handle the next incoming one. You can check with netstat (on Linux, netstat -su includes UDP receive-error counts). I ran into the same thing: the server (receive side) could not keep up with the packets sent.
You can try two things (the second worked for me with your example):
Call SetReadBuffer. Ensure the receive socket has enough buffering to handle everything you throw at it.
sock, _ := net.ListenUDP("udp", addr)
sock.SetReadBuffer(1048576)
Do all packet processing in a goroutine. Increase the datagrams handled per second by ensuring the server isn't busy doing other work when it should be available to receive; i.e., move the processing into a goroutine so you don't hold up ReadFromUDP().
// Reintroduce your go handlePacket(buf, rlen) with a count param
func handlePacket(buf []byte, rlen int, count int) {
    fmt.Println(string(buf[:rlen]))
    fmt.Println(count)
}
...
go handlePacket(buf, rlen, i)
One final option:
Lastly, and probably not what you want, you could put a sleep in your client; this slows the send rate and also removes the problem. e.g.
buf := []byte("bla bla bla I am the packet")
time.Sleep(100 * time.Millisecond)
_, err := con.Write(buf)
Try syncing stdout after the write statements.
os.Stdout.Sync()
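If Sync alone doesn't help, another option (not from the original answer) is to buffer stdout yourself and flush explicitly, which also cuts the per-datagram write syscalls that can slow the read loop:
// In main, before the loop (assumes bufio and os are imported):
out := bufio.NewWriter(os.Stdout)
// In the loop, replacing the fmt.Println calls:
fmt.Fprintln(out, string(buf[0:rlen]))
fmt.Fprintln(out, i)
out.Flush() // or flush every N packets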