Specify timeout when tracing HTTP request in Go

I know the usual method of specifying a timeout with HTTP requests by doing:
httpClient := http.Client{
    Timeout: time.Duration(5 * time.Second),
}
However, I can't seem to figure out how to do the same when tracing HTTP requests. Here is the piece of code I am working with:
func timeGet(url string) (httpTimingBreakDown, error) {
    req, _ := http.NewRequest("GET", url, nil)

    var start, connect, dns, tlsHandshake time.Time // connect/dns presumably feed trace hooks elided from this excerpt
    var timingData httpTimingBreakDown
    timingData.url = url

    trace := &httptrace.ClientTrace{
        TLSHandshakeStart: func() { tlsHandshake = time.Now() },
        TLSHandshakeDone:  func(cs tls.ConnectionState, err error) { timingData.tls = time.Since(tlsHandshake) },
    }

    req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
    start = time.Now()

    // hacky way; worked earlier but doesn't work anymore
    http.DefaultTransport.(*http.Transport).ResponseHeaderTimeout = time.Second * 10

    if _, err := http.DefaultTransport.RoundTrip(req); err != nil {
        fmt.Println(err)
        return timingData, err
    }

    timingData.total = time.Since(start)
    return timingData, nil
}
I am firing this function inside a goroutine. My sample data set is 100 URLs. All the goroutines fire, but the program eventually ends after 30+ seconds, as if the timeout were 30 seconds.
Earlier I made this work with the hacky approach of setting the default transport's ResponseHeaderTimeout to 10 seconds inside the function: anything that took too long timed out, and the program ended at 10.xx seconds, but now it takes 30.xx seconds.
What would be a proper way of specifying a timeout in this scenario?

I know the usual method of specifying a timeout with HTTP requests by doing:
httpClient := http.Client{
    Timeout: time.Duration(5 * time.Second),
}
Actually, the preferred method is to use a context.Context on the request. The method you've used is just a shortcut suitable for simple use cases.
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
    return nil, err
}

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
req = req.WithContext(ctx)
And this method should work nicely for your situation as well.

Related

Golang HTTP Timeout test, not timing out as expected

I built a small test case that checks that my code times out after a set amount of time has passed, but it isn't working as expected.
I am hitting a server-side endpoint that works fine, but if it responds slower than usual I need my code to time out. The timeout is implemented; I need to test that I implemented it correctly.
This is what I have so far
func TestTimeout(t *testing.T) {
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(time.Second * 15)
    }))
    defer ts.Close()

    client := &http.Client{
        Timeout: time.Second * 10,
    }

    myRequest, err := createMyRequest(Somedata, SomeMoreData)
    res, err := client.Do(myRequest)
    if err != nil {
        t.Fatal(err)
    }
    res.Body.Close()
}
However, my code runs without giving a timeout error (that is, it does not wait for 10 seconds). Where am I going wrong?
As already pointed out in some of the comments, you need to make a request to the test server's endpoint (using ts.URL) as follows:
myRequest, err := http.NewRequest(http.MethodPost, ts.URL, bytes.NewBuffer([]byte("test")))
if err != nil {
    t.Fatal(err)
}
res, err := client.Do(myRequest)
if err != nil {
    t.Fatal(err)
}
[...]

Diagnosing root cause long HTTP response turnaround in Golang

So my HTTP client initialisation and request-sending code looks like this:
package http_util

import (
    "context"
    "crypto/tls"
    "encoding/json"
    "io/ioutil"
    "net/http"
    "time"
)

var httpClient *http.Client

func Init() {
    tr := &http.Transport{
        TLSClientConfig:     &tls.Config{InsecureSkipVerify: true},
        MaxIdleConnsPerHost: 200,
        IdleConnTimeout:     90 * time.Second,
        TLSHandshakeTimeout: 10 * time.Second,
    }
    httpClient = &http.Client{Transport: tr, Timeout: 30 * time.Second}
}

func SendRequest(ctx context.Context, request *http.Request) (*SomeRespStruct, error) {
    httpResponse, err := httpClient.Do(request)
    if err != nil {
        return nil, err
    }
    defer httpResponse.Body.Close()

    responseBody, err := ioutil.ReadAll(httpResponse.Body)
    if err != nil {
        return nil, err
    }

    response := &SomeRespStruct{}
    err = json.Unmarshal(responseBody, response)
    if err != nil {
        return nil, err
    }
    return response, nil
}
When I launch my server, I call http_util.Init().
The issue arises when I receive multiple requests (20+) at once to call this external server. In one of my functions I do
package external_api

import (
    "context"
    "log"
)

func SomeAPICall(ctx context.Context) (SomeRespStruct, error) {
    // Build request
    request := buildHTTPRequest(...)
    log.Printf("Send request: %v", request)

    response, err := http_util.SendRequest(ctx, request)
    // Error checks
    if err != nil {
        log.Printf("HTTP request timed out: %v", err)
        return nil, err
    }

    log.Printf("Received response: %v", response)
    return response, nil
}
My issue is that, under high request volume, there is a 15-20s gap between the "Send request" and "Received response" logs, based on the output timestamps. Upon checking with the server handling my requests, I found that on their end the end-to-end processing time is under a second for the very same requests that show a long turnaround in my logs, so I'm not sure what the root cause of this high turnaround time is. I also ran a traceroute and a ping to the server, and there was no delay, so this should not be a network error.
I've looked around and it seems like the suggested solutions are:
to increase the MaxIdleConnsPerHost
to read the HTTP response body in full and close it
Both of which I have already done.
I'm not sure if there is more tuning to be done to my HTTP client's configuration to resolve this issue, or if I should investigate other workarounds, for instance retries or scaling (but my CPU and memory utilisation are in the 2-3% range).

Performance issues using DialContext in Go

I did a quick benchmark using Go's built-in http.Client and net packages. It showed a noticeable performance difference when using DialContext versus not using it.
I am basically trying to imitate a use case we have in my company, where this http.Client setup is much less performant than the default configuration when used for exactly the same work. I noticed that commenting out the DialContext part made it go faster.
The benchmark just opens a pool of goroutines (8 in the example) that create connections to a simple URL, fed by a buffered channel of the same size as the number of goroutines (8).
Here is the code with DialContext (2.266333805s):
func main() {
    httpClient := &http.Client{
        Transport: &http.Transport{
            DialContext: (&net.Dialer{
                Timeout:   3 * time.Second,
                KeepAlive: 30 * time.Second,
                DualStack: true,
            }).DialContext,
        },
    }

    url := "https://stackoverflow.com/"
    wg := sync.WaitGroup{}
    threads := 8
    wg.Add(threads)
    c := make(chan struct{}, threads)
    start := time.Now()

    for i := 0; i < threads; i++ {
        go func() {
            for range c {
                req, _ := http.NewRequest(http.MethodGet, url, nil)
                resp, err := httpClient.Do(req)
                if err == nil {
                    resp.Body.Close()
                }
            }
            wg.Done()
        }()
    }

    for i := 0; i < 200; i++ {
        c <- struct{}{}
    }
    close(c)
    wg.Wait()
    fmt.Println(time.Since(start))
}
The output time was 2.266333805s.
And here is the code without DialContext (731.154103ms):
func main() {
    httpClient := &http.Client{
        Transport: &http.Transport{},
    }

    url := "https://stackoverflow.com/"
    wg := sync.WaitGroup{}
    threads := 8
    wg.Add(threads)
    c := make(chan struct{}, threads)
    start := time.Now()

    for i := 0; i < threads; i++ {
        go func() {
            for range c {
                req, _ := http.NewRequest(http.MethodGet, url, nil)
                resp, err := httpClient.Do(req)
                if err == nil {
                    resp.Body.Close()
                }
            }
            wg.Done()
        }()
    }

    for i := 0; i < 200; i++ {
        c <- struct{}{}
    }
    close(c)
    wg.Wait()
    fmt.Println(time.Since(start))
}
The output time was 731.154103ms.
The difference between results was consistent among several runs of the program.
Does anyone have a clue about why this is happening?
Thanks!
EDIT: So I tried net/http/httptrace and I made sure the body of the response was fully read and closed:
go func() {
    for range c {
        req, _ := http.NewRequest(http.MethodGet, url, nil)
        req = req.WithContext(httptrace.WithClientTrace(req.Context(), &httptrace.ClientTrace{
            GotConn: t.gotConn,
        }))
        resp, _ := httpClient.Do(req)
        ioutil.ReadAll(resp.Body)
        resp.Body.Close()
    }
    wg.Done()
}()
The findings were interesting when using DialContext vs. not using it.
NOT USING DialContext:
time taken to run 200 requests: 5.639808793s
new connections: 1
reused connections: 199
USING DialContext:
time taken to run 200 requests: 5.682882723s
new connections: 8
reused connections: 192
It is faster... but why does one open 8 new connections and the other just 1?
The only way you would be getting such large differences is if one transport is reusing the connections, and the other is not. In order to ensure you can reuse the connection, you must always read the response body. It is possible that in some cases the connection would be reused without explicitly reading the body, but it's never guaranteed and depends on many things like the remote server closing the connection, whether the response was fully buffered by the Transport, and if there was a context on the request.
The net/http/httptrace package can give you insight into a lot of the request internals, including whether the connections were reused or not. See https://blog.golang.org/http-tracing for examples.
Setting DisableKeepAlives will always prevent the connections from being reused, making both similarly slow. Always reading the response like so will make both similarly fast:
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := httpClient.Do(req)
if err != nil {
    // handle error
    continue
}
io.Copy(ioutil.Discard, resp.Body)
resp.Body.Close()
If you want a cap on how much can be read before dropping the connection, you can simply wrap the body in an io.LimitedReader.

How can I implement an inactivity timeout on an http download

I've been reading up on the various timeouts that are available on an HTTP request, and they all seem to act as hard deadlines on the total time of the request.
I am running an HTTP download, and I don't want to impose a hard timeout past the initial handshake, as I don't know anything about my users' connections and don't want to time out slow ones. What I would ideally like is to time out after a period of inactivity (when nothing has been downloaded for x seconds). Is there any way to do this built in, or do I have to interrupt based on stat-ing the file?
The working code is a little hard to isolate, but I think these are the relevant parts. There is another loop that stats the file to provide progress, but I will need to refactor a bit to use it to interrupt the download:
// HttpsClientOnNetInterface returns an http client using the named network interface (via proxy, if passed).
func HttpsClientOnNetInterface(interfaceIP []byte, httpsProxy *Proxy) (*http.Client, error) {
    log.Printf("Got IP addr : %s\n", string(interfaceIP))

    // create address for the dialer
    tcpAddr := &net.TCPAddr{
        IP: interfaceIP,
    }

    // create the dialer & transport
    netDialer := net.Dialer{
        LocalAddr: tcpAddr,
    }

    var proxyURL *url.URL
    var err error
    if httpsProxy != nil {
        proxyURL, err = url.Parse(httpsProxy.String())
        if err != nil {
            return nil, fmt.Errorf("error parsing proxy connection string: %s", err)
        }
    }

    httpTransport := &http.Transport{
        Dial:  netDialer.Dial,
        Proxy: http.ProxyURL(proxyURL),
    }
    httpClient := &http.Client{
        Transport: httpTransport,
    }
    return httpClient, nil
}
/*
StartDownloadWithProgress will initiate a download from a remote url to a local file,
providing download progress information.
*/
func StartDownloadWithProgress(interfaceIP []byte, httpsProxy *Proxy, srcURL, dstFilepath string) (*Download, error) {
    // start an http client on the selected net interface
    httpClient, err := HttpsClientOnNetInterface(interfaceIP, httpsProxy)
    if err != nil {
        return nil, err
    }

    // grab the header
    headResp, err := httpClient.Head(srcURL)
    if err != nil {
        log.Printf("error on head request (download size): %s", err)
        return nil, err
    }

    // pull out total size
    size, err := strconv.Atoi(headResp.Header.Get("Content-Length"))
    if err != nil {
        headResp.Body.Close()
        return nil, err
    }
    headResp.Body.Close()

    errChan := make(chan error)
    doneChan := make(chan struct{})

    // spawn the download process
    go func(httpClient *http.Client, srcURL, dstFilepath string, errChan chan error, doneChan chan struct{}) {
        resp, err := httpClient.Get(srcURL)
        if err != nil {
            errChan <- err
            return
        }
        defer resp.Body.Close()

        // create the file
        outFile, err := os.Create(dstFilepath)
        if err != nil {
            errChan <- err
            return
        }
        defer outFile.Close()

        log.Println("starting copy")
        // copy to file as the response arrives
        _, err = io.Copy(outFile, resp.Body)
        if err != nil {
            log.Printf("\n Download Copy Error: %s \n", err.Error())
            errChan <- err
            return
        }
        doneChan <- struct{}{}
    }(httpClient, srcURL, dstFilepath, errChan, doneChan)

    // return Download
    return (&Download{
        updateFrequency: time.Microsecond * 500,
        total:           size,
        errRecieve:      errChan,
        doneRecieve:     doneChan,
        filepath:        dstFilepath,
    }).Start(), nil
}
Update
Thanks to everyone who had input into this.
I've accepted JimB's answer as it seems like a perfectly viable approach that is more generalised than the solution I chose (and probably more useful to anyone who finds their way here).
In my case I already had a loop monitoring the file size, so I threw a named error when it did not change for x seconds. It was much easier for me to pick up on the named error through my existing error handling and retry the download from there.
I probably leak at least one goroutine in the background with my approach (I may fix this later with some signalling), but as this is a short-running application (it's an installer), this is acceptable (or at least tolerable).
Doing the copy manually is not particularly difficult. If you're unsure how to implement it properly, it's only a couple dozen lines from the io package to copy and modify to suit your needs (I only removed the ErrShortWrite clause, because we can assume that the standard library io.Writer implementations are correct).
Here is a copy work-alike function that also takes a cancellation context and an idle timeout parameter. Every time there is a successful read, it signals the cancellation goroutine to continue and start a new timer.
func idleTimeoutCopy(dst io.Writer, src io.Reader, timeout time.Duration,
    ctx context.Context, cancel context.CancelFunc) (written int64, err error) {

    read := make(chan int)
    go func() {
        for {
            select {
            case <-ctx.Done():
                return
            case <-time.After(timeout):
                cancel()
            case <-read:
            }
        }
    }()

    buf := make([]byte, 32*1024)
    for {
        nr, er := src.Read(buf)
        if nr > 0 {
            read <- nr
            nw, ew := dst.Write(buf[0:nr])
            written += int64(nw)
            if ew != nil {
                err = ew
                break
            }
        }
        if er != nil {
            if er != io.EOF {
                err = er
            }
            break
        }
    }
    return written, err
}
While I used time.After for brevity, it's more efficient to reuse the Timer. This means taking care to use the correct reset pattern, as the return value of the Reset function is broken:
t := time.NewTimer(timeout)
for {
    select {
    case <-ctx.Done():
        return
    case <-t.C:
        cancel()
    case <-read:
        if !t.Stop() {
            <-t.C
        }
        t.Reset(timeout)
    }
}
You could skip calling Stop altogether here, since in my opinion if the timer fires while calling Reset it was close enough to cancel anyway, but it's often good to keep the code idiomatic in case it is extended in the future.

Go to test website status (ping)

Is there any better way to ping websites and check whether they are available?
I just need to get the status code, not get (download) the whole website...
func Ping(domain string) int {
    timeout := time.Duration(2 * time.Second)
    dialTimeout := func(network, addr string) (net.Conn, error) {
        return net.DialTimeout(network, addr, timeout)
    }
    transport := http.Transport{
        Dial: dialTimeout,
    }
    client := http.Client{
        Transport: &transport,
    }
    url := "http://" + domain
    req, _ := http.NewRequest("GET", url, nil)
    resp, _ := client.Do(req)
    return resp.StatusCode
}
This function is too slow, and when I run it with goroutines it goes over the limits and gives me errors...
Thanks!
Use a single transport. Because the transport maintains a pool of connections, you should not create and discard transports willy-nilly.
Close the response body as described at the beginning of the net/http doc.
Use HEAD if you are only interested in the status.
Check errors.
Code:
var client = http.Client{
    Transport: &http.Transport{
        Dial: (&net.Dialer{Timeout: 2 * time.Second}).Dial,
    },
}

func Ping(domain string) (int, error) {
    url := "http://" + domain
    req, err := http.NewRequest("HEAD", url, nil)
    if err != nil {
        return 0, err
    }
    resp, err := client.Do(req)
    if err != nil {
        return 0, err
    }
    resp.Body.Close()
    return resp.StatusCode, nil
}
}
Since this is the top result on Google for pinging in Go, know that several packages have been written for this purpose. But if you plan to use this answer, I had to make some changes for it to work.
import (
    "net/http"
    "time"
)

var client = http.Client{
    Timeout: 2 * time.Second,
}
Otherwise it is the same as the accepted answer. But I'm a beginner in Go, so there may be a better way to do this.
