transport.RoundTrip context canceled error - http

I wrote a custom RoundTrip for my client with a retry of 1. The question is: sometimes my client gets a "context canceled" error and the entire processing seems to end in about 300ms, but I didn't set a timeout. How is it triggered?
func RoundTrip(req *http.Request) (res *http.Response, err error) {
    for i := 0; i < 1; i++ {
        transport := http.DefaultTransport
        res, err = transport.RoundTrip(req)
        if err == nil {
            break
        }
    }
    return
}

You can add the context to the request with the Request.WithContext method or NewRequestWithContext function. When you cancel the context it should propagate to all functions handling the request.
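For instance, a minimal sketch (the URL and the 300ms delay are placeholders, not from your code) showing how a cancelled context propagates into an in-flight RoundTrip:
package main

import (
    "context"
    "fmt"
    "net/http"
    "time"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())

    // Placeholder URL for illustration only.
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com", nil)
    if err != nil {
        panic(err)
    }

    // Cancel the context while the request may still be in flight; if the
    // response has not arrived yet, the transport aborts the request and
    // RoundTrip returns a "context canceled" error.
    go func() {
        time.Sleep(300 * time.Millisecond)
        cancel()
    }()

    res, err := http.DefaultTransport.RoundTrip(req)
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer res.Body.Close()
    fmt.Println("status:", res.Status)
}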
Update
Sorry, I thought you wanted to cancel the request, not that you were asking why it happens. Anyhow, given your comment on this answer:
but where canceled the context in 300ms
300ms is pretty short. If it were longer I would recommend adjusting the timers on http.DefaultTransport; instead, my guess is that the underlying TCP connection is being refused or limited, which causes the RoundTrip to fail. That limit could be something internal (a DNS lookup error, etc.) or on the server side. You would need to narrow the problem down further to debug it.
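If you do want to experiment with the transport timers, a rough sketch (the durations are arbitrary examples; it assumes the net, net/http, and time imports) that clones http.DefaultTransport and tightens its dial and handshake timeouts could look like this:
// Sketch only: the timeout values below are arbitrary examples.
tr := http.DefaultTransport.(*http.Transport).Clone()
tr.DialContext = (&net.Dialer{
    Timeout:   5 * time.Second,  // TCP connect timeout
    KeepAlive: 30 * time.Second, // TCP keep-alive interval
}).DialContext
tr.TLSHandshakeTimeout = 5 * time.Second
tr.ResponseHeaderTimeout = 10 * time.Second

client := &http.Client{Transport: tr}
_ = client // use this client instead of http.DefaultClient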

Related

Dialing TCP Error: Timeout or i/o Timeout after a while of high concurrency request

I recently ran into a problem while developing a high-concurrency HTTP client with valyala/fasthttp: the client works fine for the first ~15K requests, but after that more and more "dial tcp4 127.0.0.1:80: i/o timeout" and "dialing to the given TCP address timed out" errors occur.
Sample Code
var Finished = 0
var Failed = 0
var Success = 0

func main() {
    for i := 0; i < 1000; i++ {
        go get()
    }
    start := time.Now().Unix()
    for {
        fmt.Printf("Rate: %.2f/s Success: %d, Failed: %d\n", float64(Success)/float64(time.Now().Unix()-start), Success, Failed)
        time.Sleep(100 * time.Millisecond)
    }
}

func get() {
    ticker := time.NewTicker(time.Duration(100+rand.Intn(2900)) * time.Millisecond)
    defer ticker.Stop()
    client := &fasthttp.Client{
        MaxConnsPerHost: 10000,
    }
    for {
        req := &fasthttp.Request{}
        req.SetRequestURI("http://127.0.0.1:80/require?number=10")
        req.Header.SetMethod(fasthttp.MethodGet)
        req.Header.SetConnectionClose()
        res := &fasthttp.Response{}
        err := client.DoTimeout(req, res, 5*time.Second)
        if err != nil {
            fmt.Println(err.Error())
            Failed++
        } else {
            Success++
        }
        Finished++
        client.CloseIdleConnections()
        <-ticker.C
    }
}
Detail
The server is built on labstack/echo/v4. When the client gets a timeout error, the server reports no error, and performing the same request manually via Postman or a browser such as Chrome works fine.
The client runs well for the first ~15K requests, but after that more and more timeout errors occur and the reported Rate decreases. I searched Google and GitHub and found an issue that seems the closest match, but it didn't contain a solution.
Another tiny problem...
As you may notice, when the client starts it first produces a few "the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection" errors, then works fine until around the 15K mark, after which it starts producing more and more timeout errors. Why does it generate the connection-closed error at the beginning?
Machine Info
Macbook Pro 14 2021 (Apple M1 Pro) with 16GB Ram and running macOS Monterey 12.4
So basically, if you open a connection and then close it as soon as possible, it does not work like "connection #1 uses a port and then immediately hands it back"; the OS still has work to do to tear the connection down (closed sockets linger in TIME_WAIT, for example). So if you want to send many requests at the same time, it is better to reuse connections as much as you can.
For example, in fasthttp:
req := fasthttp.AcquireRequest()
res := fasthttp.AcquireResponse()
defer fasthttp.ReleaseRequest(req)
defer fasthttp.ReleaseResponse(res)
// Then do the request below
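A slightly fuller sketch of that idea (the URL and timeout are carried over from the question; the fetch helper name is made up) that shares one client, reuses pooled request/response objects, and does not force Connection: close:
// Sketch only: fetch is a made-up helper; URL and timeout come from the question.
var client = &fasthttp.Client{
    MaxConnsPerHost: 10000,
}

func fetch() error {
    req := fasthttp.AcquireRequest()
    res := fasthttp.AcquireResponse()
    defer fasthttp.ReleaseRequest(req)
    defer fasthttp.ReleaseResponse(res)

    req.SetRequestURI("http://127.0.0.1:80/require?number=10")
    req.Header.SetMethod(fasthttp.MethodGet)
    // Note: no SetConnectionClose(), so the client can keep the connection
    // alive and reuse it for the next request.

    return client.DoTimeout(req, res, 5*time.Second)
}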

How can I make Go's http.Server exit after being idle for a period of time?

I am writing a web server in Go using the standard library net/http package that makes use of systemd socket activation.
I have the basics working such that the server is started the first time a connection is made to the listening socket, and I can perform a graceful shutdown when signalled (i.e. so systemctl stop will work without cutting off active requests).
What I would like is for the server to automatically exit when it has been idle for some period of time. Something like the following:
when the last active request completes, start a timer for say 30 seconds.
if any new request arrives during that period, stop the timer.
if the timer expires, perform a graceful shutdown.
The idea being to release the resources the server was using, in the knowledge that systemd will start us again when a new client turns up.
It's parts (1) and (2) that I'm not sure about. Ideally I'd like a solution that doesn't involve modifying all the registered handlers too.
Using @CeriseLimón's suggestion, the following helper type seems to do the trick:
type IdleTracker struct {
    mu     sync.Mutex
    active map[net.Conn]bool
    idle   time.Duration
    timer  *time.Timer
}

func NewIdleTracker(idle time.Duration) *IdleTracker {
    return &IdleTracker{
        active: make(map[net.Conn]bool),
        idle:   idle,
        timer:  time.NewTimer(idle),
    }
}

func (t *IdleTracker) ConnState(conn net.Conn, state http.ConnState) {
    t.mu.Lock()
    defer t.mu.Unlock()

    oldActive := len(t.active)
    switch state {
    case http.StateNew, http.StateActive, http.StateHijacked:
        t.active[conn] = true
        // Stop the timer if we transitioned from idle to active
        if oldActive == 0 {
            t.timer.Stop()
        }
    case http.StateIdle, http.StateClosed:
        delete(t.active, conn)
        // Restart the timer if we've become idle
        if oldActive > 0 && len(t.active) == 0 {
            t.timer.Reset(t.idle)
        }
    }
}

func (t *IdleTracker) Done() <-chan time.Time {
    return t.timer.C
}
Assigning its ConnState method to the server's ConnState member will track whether the server is busy, and signal us when we've been idle for the requested amount of time:
idle := NewIdleTracker(5 * time.Second)
server.ConnState = idle.ConnState

go func() {
    <-idle.Done()
    if err := server.Shutdown(context.Background()); err != nil {
        log.Fatalf("error shutting down: %v\n", err)
    }
}()

Terminate http request from IP layer using golang

I am making an HTTP POST request to a server using Go. Suppose the server is currently turned off (meaning the machine the server runs on is powered down); then the request gets stuck at the IP layer, so my program cannot proceed to the application layer. Is there any way in Go to stop this?
I am using the following code.
req, err := http.NewRequest("POST", url, bytes.NewReader(b))
if err != nil {
    return errors.Wrap(err, "new request error")
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
    return errors.Wrap(err, "http request error")
}
defer resp.Body.Close()
Is there anything that can be added to this to terminate the request if nothing comes back from the IP layer?
The default http.Client has no timeout. You can create an explicit http.Client yourself and set the timeout:
var cl = &http.Client{
    Timeout: time.Second * 10,
}

resp, err := cl.Do(req)
if err != nil {
    // err will be set on timeout
    return errors.Wrap(err, "http request error")
}
defer resp.Body.Close()
If the server stops answering in the middle of a request, you can then handle the timeout.
Use a non-default http.Transport with its DialContext field set to a function which uses a custom context with the properly configured timeout/deadline. Another option is to use a custom net.Dialer.
Something like this:
cli := http.Client{
    Transport: &http.Transport{
        DialContext: func(ctx context.Context, network, address string) (net.Conn, error) {
            dialer := net.Dialer{
                Timeout: 3 * time.Second,
            }
            return dialer.DialContext(ctx, network, address)
        },
    },
}

req, err := http.NewRequest(...)
resp, err := cli.Do(req)
Note that, as per the net.Dialer docs, the context passed to its DialContext may trump the timeout set on the dialer itself; this is exactly what we need: the dialer's Timeout field controls only the "dialing" (TCP connection establishment), while you can also arm your HTTP request with a context (using http.Request.WithContext) that controls the timeout of the whole request and lets you cancel it at any time, including during the dialing step.
Playground example.
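For example, a rough sketch (the URL and durations are placeholders) that combines the dialer timeout above with a per-request context deadline:
// Sketch only: URL and durations are placeholders; cli is the client built above.
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com", nil)
if err != nil {
    log.Fatal(err)
}

resp, err := cli.Do(req)
if err != nil {
    log.Fatal(err) // fails fast on a dial timeout or the overall 10s deadline
}
defer resp.Body.Close()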
The Transport @kostix refers to is definitely what you're looking for in this case. Transports, like Clients, are safe for concurrent use. But please read about the Transport (and I also advise reading about the Client) as there are a number of different ways to affect how idle connections are handled, not just the previously mentioned DialContext.
For example, you may want to set ResponseHeaderTimeout:
ResponseHeaderTimeout, if non-zero, specifies the amount of
time to wait for a server's response headers after fully
writing the request (including its body, if any). This
time does not include the time to read the response body.
Or, if you are using a secure connection, you may want to set your TLSHandshakeTimeout:
TLSHandshakeTimeout specifies the maximum amount of time to wait for a TLS handshake. Zero means no timeout.
For readability and maintainability, I also suggest creating a function to build your Client, something along the lines of:
func buildClient(timeout time.Duration) *http.Client {
    tr := &http.Transport{
        IdleConnTimeout:       timeout,
        ResponseHeaderTimeout: timeout,
        TLSHandshakeTimeout:   timeout,
    }
    client := &http.Client{
        Transport: tr,
        Timeout:   timeout,
    }
    return client
}
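Usage would then be along these lines (the URL is a placeholder):
// Assumes the buildClient helper above; the URL is a placeholder.
client := buildClient(10 * time.Second)

resp, err := client.Get("https://example.com")
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()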

Program halts after successive timeout while performing GET request

I'm making a crawler that fetches HTML, CSS, and JS pages. The crawler is a typical one, with 4 goroutines running concurrently to fetch the resources. For study purposes I've been using 3 test sites. The crawler works fine and shows the program-completion log while testing two of them.
On the 3rd website, however, there are too many timeouts while fetching CSS links. This eventually causes my program to stop. It fetches the links, but after 20+ successive timeouts the program stops logging anything. Basically, it halts. I don't think it's a problem with the event log console.
Do I need to handle timeouts separately? I'm not posting the full code because it doesn't relate to the conceptual answer I'm seeking. However, the code goes something like this:
for {
    site, more := <-sites
    if more {
        url, err := url.Parse(site)
        if err != nil {
            continue
        }
        response, error := http.Get(url.String())
        if error != nil {
            fmt.Println("There was an error with Get request: ", error.Error())
            continue
        }
        // Crawl function
    }
}
The default behavior of the http client is to block forever. Set a timeout when you create the client: (http://godoc.org/net/http#Client)
func main() {
    client := http.Client{
        Timeout: time.Second * 30,
    }
    res, err := client.Get("http://www.google.com")
    if err != nil {
        panic(err)
    }
    fmt.Println(res)
}
After 30 seconds Get will return an error.

Why is Go HTTPS Client not reusing connections?

I have an HTTP client which creates multiple connections to the host. I want to set a maximum number of connections it can open to a particular host.
There are no such options in Go's http.Transport.
My code looks like
package main

import (
    "fmt"
    "net/http"
    "net/url"
)

const (
    endpoint_url_fmt = "https://blah.com/api1?%s"
)

func main() {
    transport := http.Transport{DisableKeepAlives: false}
    outParams := url.Values{}
    outParams.Set("method", "write")
    outParams.Set("message", "BLAH")
    for {
        // Encode as part of URI.
        outboundRequest, err := http.NewRequest(
            "GET",
            fmt.Sprintf(endpoint_url_fmt, outParams.Encode()),
            nil,
        )
        outboundRequest.Close = false
        _, err = transport.RoundTrip(outboundRequest)
        if err != nil {
            fmt.Println(err)
        }
    }
}
I would expect this to create one connection, since I am calling it in a for loop, but it keeps creating new connections without bound.
Whereas similar Python code using the requests library creates only one connection.
#!/usr/bin/env python
import requests

endpoint_url_fmt = "https://something.com/restserver.php"

params = {}
params['method'] = 'write'
params['category'] = category_errors_scuba
params['message'] = "blah"

while True:
    r = requests.get(endpoint_url_fmt, params=params)
For some reason the Go code is not reusing HTTP connections.
EDIT:
The Go code needs the response body to be closed for the connection to be reused.
resp, err := transport.RoundTrip(outboundRequest)
resp.Body.Close() // This allows the connection to be reused
Based on further clarification from the OP: the default client does reuse connections. Be sure to close the response body.
Callers should close resp.Body when done reading from it. If resp.Body is not closed, the Client's underlying RoundTripper (typically Transport) may not be able to re-use a persistent TCP connection to the server for a subsequent "keep-alive" request.
Additionally, I've found that I also needed to read until the response was complete before calling Close().
e.g.
res, _ := client.Do(req)
io.Copy(ioutil.Discard, res.Body)
res.Body.Close()
To ensure http.Client connection reuse be sure to do two things:
Read until Response is complete (i.e. ioutil.ReadAll(resp.Body))
Call Body.Close()
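Putting the two together, a minimal sketch (the URL is a placeholder) of a request loop that lets the client reuse its connection:
// Sketch only: the URL is a placeholder.
client := &http.Client{}
for i := 0; i < 10; i++ {
    resp, err := client.Get("https://example.com")
    if err != nil {
        log.Fatal(err)
    }
    // Drain and close the body so the underlying connection can be
    // returned to the pool and reused by the next iteration.
    io.Copy(ioutil.Discard, resp.Body)
    resp.Body.Close()
}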
Old answer, useful for rate limiting, but not what the OP was after:
I don't think setting max connections is possible via the golang 1.1 http APIs. This means you can shoot yourself in the foot with tons of TCP connections (until you run out of file descriptors or whatever) if you aren't careful.
That said, you could limit the rate at which you call the go routine for a particular host (and therefore outbound requests and connections) via time.Tick.
For example:
import "time"
requests_per_second := 5
throttle := time.Tick(1000000000 / requests_per_second)
for i := 0; i < 16; i += 1 {
<-throttle
go serveQueue()
}
There are interesting improvements in http.Transport:
// DisableKeepAlives, if true, disables HTTP keep-alives and
// will only use the connection to the server for a single
// HTTP request.
//
// This is unrelated to the similarly named TCP keep-alives.
DisableKeepAlives bool
// ...
// MaxIdleConns controls the maximum number of idle (keep-alive)
// connections across all hosts. Zero means no limit.
MaxIdleConns int // Go 1.7
// MaxIdleConnsPerHost, if non-zero, controls the maximum idle
// (keep-alive) connections to keep per-host. If zero,
// DefaultMaxIdleConnsPerHost is used.
MaxIdleConnsPerHost int
// MaxConnsPerHost optionally limits the total number of
// connections per host, including connections in the dialing,
// active, and idle states. On limit violation, dials will block.
//
// Zero means no limit.
MaxConnsPerHost int // Go 1.11
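A brief sketch (the limits below are arbitrary examples) of setting those knobs on a Transport:
// Sketch only: the limits below are arbitrary examples.
tr := &http.Transport{
    MaxIdleConns:        100, // idle (keep-alive) connections across all hosts
    MaxIdleConnsPerHost: 10,  // idle connections kept per host
    MaxConnsPerHost:     10,  // total connections per host (Go 1.11+); dials block at the limit
}
client := &http.Client{Transport: tr}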

Resources