Why is Go HTTPS Client not reusing connections? - http

I have an HTTP client which creates multiple connections to the host. I want to set a maximum number of connections it can open to a particular host.
There is no such option in Go's http.Transport.
My code looks like this:
package main

import (
    "fmt"
    "net/http"
    "net/url"
)

const (
    endpoint_url_fmt = "https://blah.com/api1?%s"
)

func main() {
    transport := http.Transport{DisableKeepAlives: false}

    outParams := url.Values{}
    outParams.Set("method", "write")
    outParams.Set("message", "BLAH")

    for {
        // Encode as part of URI.
        outboundRequest, err := http.NewRequest(
            "GET",
            fmt.Sprintf(endpoint_url_fmt, outParams.Encode()),
            nil,
        )
        outboundRequest.Close = false

        _, err = transport.RoundTrip(outboundRequest)
        if err != nil {
            fmt.Println(err)
        }
    }
}
I would expect this to create one connection, since I am calling it in a for loop, but it keeps creating an infinite number of connections.
Similar Python code using the requests library, on the other hand, creates only one connection:
#!/usr/bin/env python
import requests

endpoint_url_fmt = "https://something.com/restserver.php"
params = {}
params['method'] = 'write'
params['category'] = category_errors_scuba
params['message'] = "blah"

while True:
    r = requests.get(endpoint_url_fmt, params = params)
For some reason the Go code is not reusing HTTP connections.
EDIT:
The Go code needs the response body to be closed before the connection can be reused.
resp, err = transport.RoundTrip(outboundRequest)
resp.Body.Close() // This allows the connection to be reused

Based on further clarification from the OP: the default client does reuse connections.
Be sure to close the response.
Callers should close resp.Body when done reading from it. If resp.Body is not closed, the Client's underlying RoundTripper (typically Transport) may not be able to re-use a persistent TCP connection to the server for a subsequent "keep-alive" request.
Additionally, I've found that I also needed to read until the response was complete before calling Close().
e.g.
res, _ := client.Do(req)
io.Copy(ioutil.Discard, res.Body)
res.Body.Close()
To ensure http.Client connection reuse, be sure to do two things (both are shown in the sketch below):
Read until the Response is complete (i.e. ioutil.ReadAll(resp.Body))
Call Body.Close()
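Putting both steps together, a minimal sketch of the loop from the question with connection reuse might look like this (the endpoint URL is the placeholder from the question; the iteration count and error handling are purely illustrative):
package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "net/http"
)

func main() {
    client := &http.Client{} // uses http.DefaultTransport, which keeps connections alive

    for i := 0; i < 10; i++ { // illustrative iteration count
        resp, err := client.Get("https://blah.com/api1?method=write&message=BLAH")
        if err != nil {
            fmt.Println(err)
            continue
        }
        io.Copy(ioutil.Discard, resp.Body) // 1. read the body to completion
        resp.Body.Close()                  // 2. close it so the connection returns to the pool
    }
}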
Old answer, useful for rate limiting, but not what the OP was after:
I don't think setting a maximum number of connections is possible via the Go 1.1 http APIs. This means you can shoot yourself in the foot with tons of TCP connections (until you run out of file descriptors or whatever) if you aren't careful.
That said, you could limit the rate at which you spawn the goroutine for a particular host (and therefore outbound requests and connections) via time.Tick.
For example:
import "time"
requests_per_second := 5
throttle := time.Tick(1000000000 / requests_per_second)
for i := 0; i < 16; i += 1 {
<-throttle
go serveQueue()
}

There are interesting improvements in http.Transport:
// DisableKeepAlives, if true, disables HTTP keep-alives and
// will only use the connection to the server for a single
// HTTP request.
//
// This is unrelated to the similarly named TCP keep-alives.
DisableKeepAlives bool
// ...
// MaxIdleConns controls the maximum number of idle (keep-alive)
// connections across all hosts. Zero means no limit.
MaxIdleConns int // Go 1.7
// MaxIdleConnsPerHost, if non-zero, controls the maximum idle
// (keep-alive) connections to keep per-host. If zero,
// DefaultMaxIdleConnsPerHost is used.
MaxIdleConnsPerHost int
// MaxConnsPerHost optionally limits the total number of
// connections per host, including connections in the dialing,
// active, and idle states. On limit violation, dials will block.
//
// Zero means no limit.
MaxConnsPerHost int // Go 1.11
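For example, a client that caps connections to a single host could be configured roughly like this (the numbers are arbitrary, illustrative limits, not recommendations):
transport := &http.Transport{
    MaxIdleConns:        100, // idle (keep-alive) connections across all hosts
    MaxIdleConnsPerHost: 10,  // idle connections kept per host
    MaxConnsPerHost:     10,  // hard cap per host (Go 1.11+); extra dials block
}
client := &http.Client{Transport: transport}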

Related

Vue front end to Go server (HTTP) and clients connected to Go server (TCP) error

I'm currently creating a Go TCP server that handles file sharing between multiple Go clients, and that part works fine. However, I'm also building a front end using Vue.js that shows some server stats, like the number of users, bytes sent, etc.
The problem occurs when I include the http.ListenAndServe(":3000", nil) call that handles the requests from the front end. Is it impossible to have a TCP and an HTTP server in the same Go file?
If so, how can I link the three (front end, Go server, clients)?
Here is the code of server.go:
func main() {
    // Create TCP server
    serverConnection, error := net.Listen("tcp", ":8085")

    // Check if an error occurred
    // Note: because 'go' forces you to use each variable you declare, error
    // checking is not optional, and maybe that's good
    if error != nil {
        fmt.Println(error)
        return
    }

    // Create server Hub
    serverHb := newServerHub()

    // Close the server just before the program ends
    defer serverConnection.Close()

    // Handle Front End requests
    http.HandleFunc("/api/thumbnail", requestHandler)
    fs := http.FileServer(http.Dir("../../tcp-server-frontend/dist"))
    http.Handle("/", fs)
    fmt.Println("Server listening on port 3000")
    http.ListenAndServe(":3000", nil)

    // Each client sends data, that data is received in the server by a client struct
    // the client struct then sends the data, which is a request to a 'go' channel, which is similar to a queue
    // Somehow this for loop runs only when a new connection is detected
    for {
        // Accept a new connection if a request is made
        // serverConnection.Accept() blocks the for loop
        // until a connection is accepted, then it blocks the for loop again!
        connection, connectionError := serverConnection.Accept()

        // Check if an error occurred
        if connectionError != nil {
            fmt.Println("1: Woah, there's a mistake here :/")
            fmt.Println(connectionError)
            fmt.Println("1: Woah, there's a mistake here :/")
            // return
        }

        // Create new user
        var client *Client = newClient(connection, "Unregistered_User", serverHb)
        fmt.Println(client)

        // Add client to serverHub
        serverHb.addClient(client)
        serverHb.listClients()

        // go client.receiveFile()
        go client.handleClientRequest()
    }
}
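For what it's worth, note that http.ListenAndServe blocks until the server stops, so the accept loop below it never runs. A common pattern (only a sketch here, reusing the identifiers from the question) is to start the HTTP server in its own goroutine so both servers can run in the same program:
// Serve the Vue front end on :3000 in a separate goroutine so that
// it does not block the TCP accept loop below.
go func() {
    fmt.Println("Server listening on port 3000")
    if err := http.ListenAndServe(":3000", nil); err != nil {
        fmt.Println(err)
    }
}()

// TCP accept loop for the file-sharing clients.
for {
    connection, connectionError := serverConnection.Accept()
    if connectionError != nil {
        fmt.Println(connectionError)
        continue
    }
    client := newClient(connection, "Unregistered_User", serverHb)
    serverHb.addClient(client)
    go client.handleClientRequest()
}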

Dialing TCP Error: Timeout or i/o Timeout after a while of high-concurrency requests

I recently ran into a problem when developing a high-concurrency HTTP client via valyala/fasthttp: the client works fine for the first ~15K requests, but after that more and more "dial tcp4 127.0.0.1:80: i/o timeout" and "dialing to the given TCP address timed out" errors occur.
Sample Code
var Finished = 0
var Failed = 0
var Success = 0

func main() {
    for i := 0; i < 1000; i++ {
        go get()
    }
    start := time.Now().Unix()
    for {
        fmt.Printf("Rate: %.2f/s Success: %d, Failed: %d\n", float64(Success)/float64(time.Now().Unix()-start), Success, Failed)
        time.Sleep(100 * time.Millisecond)
    }
}

func get() {
    ticker := time.NewTicker(time.Duration(100+rand.Intn(2900)) * time.Millisecond)
    defer ticker.Stop()

    client := &fasthttp.Client{
        MaxConnsPerHost: 10000,
    }

    for {
        req := &fasthttp.Request{}
        req.SetRequestURI("http://127.0.0.1:80/require?number=10")
        req.Header.SetMethod(fasthttp.MethodGet)
        req.Header.SetConnectionClose()

        res := &fasthttp.Response{}

        err := client.DoTimeout(req, res, 5*time.Second)
        if err != nil {
            fmt.Println(err.Error())
            Failed++
        } else {
            Success++
        }
        Finished++

        client.CloseIdleConnections()

        <-ticker.C
    }
}
Detail
The server is built on labstack/echo/v4, and when the client gets the timeout error the server doesn't report any error; manually performing the request via Postman or a browser like Chrome works fine.
The client runs pretty well for the first ~15K requests, but after that more and more timeout errors occur and the output Rate decreases. I searched Google and GitHub, and this issue may be the most relevant one, but I didn't find a solution there.
Another tiny problem...
As you can notice, when the client starts it first generates a few "the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection" errors, then works fine until around the 15K mark, and then starts generating more and more timeout errors. Why does it generate the connection-closed error in the beginning?
Machine Info
Macbook Pro 14 2021 (Apple M1 Pro) with 16GB Ram and running macOS Monterey 12.4
So basically, if you open a connection and then close it as soon as possible, it's not as simple as "connection #1 uses a port and then immediately gives it back": there is a lot of processing to be done (for example, a closed socket lingers in TIME_WAIT before its port can be reused), so if you want to send many requests at the same time, it's better to reuse connections as much as you can.
For example, in fasthttp:
req := fasthttp.AcquireRequest()
res := fasthttp.AcquireResponse()
defer fasthttp.ReleaseRequest(req)
defer fasthttp.ReleaseResponse(res)
// Then do the request below
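A rough sketch of a reuse-friendly version of the loop from the question (same URL and ticker; SetConnectionClose and the per-iteration CloseIdleConnections are dropped so the client can keep connections alive; this is an illustration, not a tested fix):
client := &fasthttp.Client{
    MaxConnsPerHost: 100, // illustrative cap, tune for your workload
}

for {
    req := fasthttp.AcquireRequest()
    res := fasthttp.AcquireResponse()

    req.SetRequestURI("http://127.0.0.1:80/require?number=10")
    req.Header.SetMethod(fasthttp.MethodGet)
    // No SetConnectionClose(): let the client keep the connection open and reuse it.

    if err := client.DoTimeout(req, res, 5*time.Second); err != nil {
        fmt.Println(err.Error())
    }

    fasthttp.ReleaseRequest(req)
    fasthttp.ReleaseResponse(res)

    <-ticker.C
}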

Terminate http request from IP layer using golang

I am making an HTTP POST request to a server using Go. Suppose the server is currently turned off (meaning the machine on which the server runs is powered down); then the request gets stuck at the IP layer, and my program cannot proceed to the application layer. Is there any way in Go to stop this?
I am using the following code.
req, err := http.NewRequest("POST", url, bytes.NewReader(b))
if err != nil {
return errors.Wrap(err, "new request error")
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return errors.Wrap(err, "http request error")
}
defer resp.Body.Close()
Is there anything that can be added to this to terminate the request if it gets nothing back from the IP layer?
The default http Client has no timeout. You can create an explicit http.Client yourself and set the timeout:
var cl = &http.Client{
    Timeout: time.Second * 10,
}

resp, err := cl.Do(req)
if err != nil {
    // err will be set on timeout
    return errors.Wrap(err, "http request error")
}
defer resp.Body.Close()
If the server stops answering in the middle of a request, you can then handle the timeout.
Use a non-default http.Transport with its DialContext field set to a function which uses a custom context with the properly configured timeout/deadline. Another option is to use a custom net.Dialer.
Something like this:
cli := http.Client{
    Transport: &http.Transport{
        DialContext: func(ctx context.Context, network, address string) (net.Conn, error) {
            dialer := net.Dialer{
                Timeout: 3 * time.Second,
            }
            return dialer.DialContext(ctx, network, address)
        },
    },
}

req, err := http.NewRequest(...)
resp, err := cli.Do(req)
Note that, as per the net.Dialer's docs, the context passed to its DialContext might trump the timeout set on the dialer itself; this is exactly what we need: the dialer's Timeout field controls exactly the "dialing" (TCP connection establishment), while you might also arm your HTTP request with a context (using http.Request.WithContext) controlling the timeout of the whole request, and also be able to cancel it at any time (including the dialing step).
Playground example.
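For the whole-request timeout mentioned above, a minimal sketch might look like this (url and b are the same placeholders as in the question, cli is the client built earlier, and the 10-second value is arbitrary):
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

req, err := http.NewRequest("POST", url, bytes.NewReader(b))
if err != nil {
    return errors.Wrap(err, "new request error")
}

// The context covers the whole request: dialing, writing, and reading the response.
resp, err := cli.Do(req.WithContext(ctx))
if err != nil {
    return errors.Wrap(err, "http request error")
}
defer resp.Body.Close()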
The Transport #kostix refers to is definitely what you're looking for in this case. Transports, as well as Clients, are safe for concurrent use. But please read about the Transport (and I also advise reading about the Client) as there are a number of different ways to affect how you handle idle connections, not just the previously mentioned DialContext.
For example, you may want to set ResponseHeaderTimeout:
ResponseHeaderTimeout, if non-zero, specifies the amount of
time to wait for a server's response headers after fully
writing the request (including its body, if any). This
time does not include the time to read the response body.
Or, if you are using a secure connection, you may want to set your TLSHandshakeTimeout:
TLSHandshakeTimeout specifies the maximum amount of time to wait
for a TLS handshake. Zero means no timeout.
For readability and maintainability, I also suggest creating a function to build your Client, something along the lines of:
func buildClient(timeout time.Duration) *http.Client {
    tr := &http.Transport{
        IdleConnTimeout:       timeout,
        ResponseHeaderTimeout: timeout,
        TLSHandshakeTimeout:   timeout,
    }

    client := &http.Client{
        Transport: tr,
        Timeout:   timeout,
    }

    return client
}
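It could then be used along these lines (the URL and timeout value are placeholders):
client := buildClient(10 * time.Second)

resp, err := client.Get("https://example.com/api")
if err != nil {
    return errors.Wrap(err, "http request error")
}
defer resp.Body.Close()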

Go bufio.Scanner stops while reading TCP connection to Redis

I am reading the TCP connection to redis-server using bufio.Scanner:
fmt.Fprintf(conn, "*3\r\n$3\r\nSET\r\n$5\r\nmykey\r\n$7\r\nHello!!\r\n")
scanner := bufio.NewScanner(conn)
for {
// fmt.Println("marker00")
if ok := scanner.Scan(); !ok {
// fmt.Println("marker01")
break
}
// fmt.Println("marker02")
fmt.Println(scanner.Text())
}
"+OK" comes as the result for first scanning, but the second scanning stops just in invoking Scan method. (marker00 -> marker02 -> marker00 and no output any more)
Why does Scan stop and how can I know the end of TCP response (without using bufio.Reader)?
Redis does not close the connection for you after sending a command. Scan() only ends after an error or io.EOF, which is never sent because the connection stays open.
Check out this:
package main

import (
    "bufio"
    "fmt"
    "net"
)

// before go run, you must hit `redis-server` to wake redis up
func main() {
    conn, _ := net.Dial("tcp", "localhost:6379")
    message := "*3\r\n$3\r\nSET\r\n$1\r\na\r\n$1\r\nb\r\n"

    go func(conn net.Conn) {
        for i := 0; i < 10; i++ {
            fmt.Fprintf(conn, message)
        }
    }(conn)

    scanner := bufio.NewScanner(conn)
    for {
        if ok := scanner.Scan(); !ok {
            break
        }
        fmt.Println(scanner.Text())
    }
    fmt.Println("Scanning ended")
}
Old question, but I had the same issue. Two solutions:
1) Add a "QUIT\r\n" command to your Redis message. This will cause Redis to close the connection which will terminate the scan. You'll have to deal with the extra "+OK" that the quit outputs.
2) Add
conn.SetReadDeadline(time.Now().Add(time.Second*5))
just before you start scanning. This will cause the scan to stop trying after 5 seconds. Unfortunately, it will always take 5 seconds to complete the scan so choose this time wisely.
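For option 2, a sketch of where the deadline would go, using the same 5-second value (when the deadline expires the underlying Read fails, Scan returns false, and scanner.Err() reports the timeout instead of blocking forever):
conn.SetReadDeadline(time.Now().Add(time.Second * 5))

scanner := bufio.NewScanner(conn)
for scanner.Scan() {
    fmt.Println(scanner.Text())
}
if err := scanner.Err(); err != nil {
    fmt.Println("scan stopped:", err) // typically an i/o timeout after 5 seconds
}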

How come the redis-benchmark command is not following the redis protocol?

I was reading directly from a TCP connection after running the redis-benchmark command, and as far as I can tell, redis-benchmark is NOT following the Redis protocol.
The Redis protocol, as stated on its website:
The way RESP is used in Redis as a request-response protocol is the following:
Clients send commands to a Redis server as a RESP Array of Bulk Strings.
The server replies with one of the RESP types according to the command implementation.
Meaning that a correct client implementation must always send RESP arrays of bulk strings.
If that is true, then anything that does not start with a * is considered a syntax error (since it's not a RESP array).
Thus, if one were to send a PING command to a redis-server, it must be sent as a RESP array of length 1 containing one bulk string with the word PING. For example:
"*1\r\n$4\r\nPING\r\n"
However, whenever I listen directly to the redis-benchmark command and read its TCP connection, I get instead:
"PING\r\n"
which does not follow the Redis protocol. Is that a bug, or is there something implied in the Redis protocol that makes pings special? As far as I could tell, I couldn't find anything that said that pings were special, nor that length-1 commands were special. Does someone know what's going on?
To reproduce these results yourself, you can copy my code and inspect it directly:
package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    RedisBenchmark()
}

func RedisBenchmark() {
    url := "127.0.0.1:6379"
    fmt.Println("listen: ", url)
    ln, err := net.Listen("tcp", url) // announces on local network
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept() // waits and returns the next connection to the listener
        if err != nil {
            log.Fatal(err)
        }
        tcpConn := conn.(*net.TCPConn)
        go HandleConnection(tcpConn)
    }
}

func HandleConnection(tcpConn *net.TCPConn) {
    b := make([]byte, 256) // TODO how much should I read at a time?
    n, err := tcpConn.Read(b)
    if err != nil {
        fmt.Println("n: ", n)
        log.Fatal(err)
    }
    fmt.Printf("+++++> raw input string(b): %q\n", string(b))
    msg := string(b[:n])
    fmt.Printf("+++++> raw input msg: %q\n", msg)
}
and run it using go with:
go run main.go
followed on a different terminal (or tmux pane):
redis-benchmark
for all the tests, or if you only want to run ping with 1 client:
redis-benchmark -c 1 -t ping -n 1
you can see the details of how I am running it with the flags at: http://redis.io/topics/benchmarks
That is called an inline command. Check the Inline Commands section of the Redis Protocol article.
You can refer to the source code to find out the differences between inline command and RESP.
readQueryFromClient
|--> if the command begins with *         --> processMultibulkBuffer(): process it as RESP
|
|--> if the command does not begin with * --> processInlineBuffer(): process it as an inline command
RESP is a more efficient way for the Redis server to parse the command.
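So in the toy server from the question, telling the two forms apart comes down to checking the first byte (a sketch only, not how Redis itself is structured internally):
// classify reports whether a raw command read from the socket is a RESP
// array (starts with '*') or an inline command such as "PING\r\n".
func classify(msg string) string {
    if len(msg) > 0 && msg[0] == '*' {
        return "RESP command"
    }
    return "inline command"
}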
