I'm making X parallel HTTP requests, and when one of them does not respond within X ms (say, 100 ms) I want to cut that connection. The code I wrote does not seem to work, so how can I cut the connection and get the response as nil?
This is my sample code:
cx, cancel := context.WithCancel(context.Background())
ch := make(chan *HttpResponse)
var responses []*HttpResponse
timeout := 1.000 // 1ms for testing purposes
var client = &http.Client{
    Timeout: 1 * time.Second,
}
startTime := time.Now()
for _, url := range urls {
    go func(url string) {
        fmt.Printf("Fetching %s \n", url)
        req, _ := http.NewRequest("POST", url, bytes.NewReader(request)) // request is a JSON string
        req.WithContext(cx)
        resp, err := client.Do(req)
        ch <- &HttpResponse{url, resp, err}
        var timeElapsed = time.Since(startTime)
        msec := timeElapsed.Seconds() * float64(time.Second/time.Millisecond)
        if msec >= timeout {
            cancel()
        }
        if err != nil && resp != nil && resp.StatusCode == http.StatusOK {
            resp.Body.Close()
        }
    }(url)
}
for {
    select {
    case r := <-ch:
        fmt.Printf("%s was fetched\n", r.Url)
        if r.Err != nil {
            fmt.Println("with an error", r.Err)
        }
        responses = append(responses, r)
        if len(responses) == len(urls) {
            return responses
        }
    case <-time.After(100):
        // Do something
    }
}
Your code waits until a request finishes (and gets a response or an error), then calculates the elapsed time, and if it took longer than expected, cancels all the requests, not just the slow one.
req, _ := http.NewRequest("POST", url, bytes.NewReader(request)) // request is a JSON string
req.WithContext(cx) // Here you use a common cx, which all requests share. Note that the returned copy is discarded, so the context is never actually attached.
resp, err := client.Do(req) // Here the request is sent, and you wait until it is done.
ch <- &HttpResponse{url, resp, err}
var timeElapsed = time.Since(startTime)
msec := timeElapsed.Seconds() * float64(time.Second/time.Millisecond)
if msec >= timeout {
    cancel() // Here you cancel all the requests.
}
The fix is to use the context package correctly:
req, _ := http.NewRequest("POST", url, bytes.NewReader(request)) // request is a JSON string
ctx, cancel := context.WithTimeout(req.Context(), time.Duration(timeout)*time.Millisecond)
defer cancel()
resp, err := client.Do(req.WithContext(ctx))
With that, you will get a nil resp (and a non-nil error), and the connection is cut when the timeout fires.
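Putting it together, a minimal sketch of the fetch goroutine with a per-request timeout (reusing the HttpResponse type, client, urls, and request payload from the question; the 100ms budget is the example value from the question):

for _, url := range urls {
    go func(url string) {
        req, err := http.NewRequest("POST", url, bytes.NewReader(request))
        if err != nil {
            ch <- &HttpResponse{url, nil, err}
            return
        }
        // Each request gets its own context, so a slow request is
        // cancelled on its own 100ms deadline without touching the others.
        ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
        defer cancel()
        resp, err := client.Do(req.WithContext(ctx))
        ch <- &HttpResponse{url, resp, err} // resp is nil when the deadline is exceeded
    }(url)
}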
Related
I am getting the error:
dial tcp 127.0.0.1:3333: can't assign requested address
whenever I try to hit an HTTP endpoint concurrently, using goroutines, for 40,000 iterations. It works fine up to about 20,000-30,000 iterations.
Here is the code I am trying to run:
func TestHttp(c *gin.Context) {
    limit, _ := strconv.Atoi(c.Param("limit"))
    for i := 1; i <= limit; i++ {
        url := "http://hostname/merchant" + strconv.Itoa(i) + "/print-test?p=" + strconv.Itoa(i)
        go MakeRequest2(url, i)
    }
}

/*
 * Function to hit url with get method and close the response body
 */
func MakeRequest2(url string, i int) {
    client := &http.Client{}
    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        fmt.Println(err)
        return
    }
    req.Close = true
    resp, err := client.Do(req)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()
}

func PrintTest(c *gin.Context) {
    a := make(map[string]int)
    for i := 1; i <= 10; i++ {
        a[strconv.Itoa(i)] = i
    }
    fmt.Println(a)
}
I am getting this output:
Get "http://hostname/merchant34096/print-test?p=34096": dial tcp hostname: connect: cannot assign requested address >
I am expecting to run this code for 1 million iterations concurrently.
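For what it's worth, this error typically means the local ephemeral ports are exhausted: every call builds a fresh client, and req.Close = true forbids keep-alive reuse, so each of the 40,000 requests burns its own local port. A sketch of one common mitigation, sharing a single client and bounding concurrency; MakeRequest3, sharedClient, and the pool size of 100 are made-up names and values for illustration (io.Discard is Go 1.16+; ioutil.Discard on older versions):

var sharedClient = &http.Client{Timeout: 10 * time.Second}

var sem = make(chan struct{}, 100) // at most 100 in-flight requests; tune to taste

func MakeRequest3(url string) {
    sem <- struct{}{}        // acquire a slot
    defer func() { <-sem }() // release it when done

    resp, err := sharedClient.Get(url)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()
    io.Copy(io.Discard, resp.Body) // drain so the connection goes back to the pool
}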
I found the fasthttp godoc as follows:
func Get
func Get(dst []byte, url string) (statusCode int, body []byte, err error)
Get appends url contents to dst and returns it as body.
The function follows redirects. Use Do* for manually handling redirects.
New body buffer is allocated if dst is nil.
But when I run the following code:
package main

import (
    "fmt"

    fh "github.com/valyala/fasthttp"
)

func main() {
    url := "https://www.okcoin.cn/api/v1/ticker.do?symbol=btc_cny"
    dst := []byte("ok100")
    _, body, err := fh.Get(dst, url)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println("body:", string(body))
    fmt.Println("dst:", string(dst))
}
body does not contain "ok100", and dst is still "ok100". Why?
Looking at where it is used in fasthttp's client.go, in func clientGetURLDeadlineFreeConn (line 672), you can see that if there is a timeout, dst's contents are copied to body at line 712. Based on what I read in the code (and on debugging your code with Delve), dst does not get updated in this usage. It seems it can be used to provide default content for body in the event of a timeout; it is probably worth a direct question to fasthttp's author for more detail.
func clientGetURLDeadlineFreeConn(dst []byte, url string, deadline time.Time, c clientDoer) (statusCode int, body []byte, err error) {
    timeout := -time.Since(deadline)
    if timeout <= 0 {
        return 0, dst, ErrTimeout
    }

    var ch chan clientURLResponse
    chv := clientURLResponseChPool.Get()
    if chv == nil {
        chv = make(chan clientURLResponse, 1)
    }
    ch = chv.(chan clientURLResponse)

    req := AcquireRequest()

    // Note that the request continues execution on ErrTimeout until
    // client-specific ReadTimeout exceeds. This helps limiting load
    // on slow hosts by MaxConns* concurrent requests.
    //
    // Without this 'hack' the load on slow host could exceed MaxConns*
    // concurrent requests, since timed out requests on client side
    // usually continue execution on the host.
    go func() {
        statusCodeCopy, bodyCopy, errCopy := doRequestFollowRedirects(req, dst, url, c)
        ch <- clientURLResponse{
            statusCode: statusCodeCopy,
            body:       bodyCopy,
            err:        errCopy,
        }
    }()

    tc := acquireTimer(timeout)
    select {
    case resp := <-ch:
        ReleaseRequest(req)
        clientURLResponseChPool.Put(chv)
        statusCode = resp.statusCode
        body = resp.body
        err = resp.err
    case <-tc.C:
        body = dst
        err = ErrTimeout
    }
    releaseTimer(tc)

    return statusCode, body, err
}
In client.go, func doRequestFollowRedirects (line 743) uses it at line 748: bodyBuf.B = dst
func doRequestFollowRedirects(req *Request, dst []byte, url string, c clientDoer) (statusCode int, body []byte, err error) {
    resp := AcquireResponse()
    bodyBuf := resp.bodyBuffer()
    resp.keepBodyBuffer = true
    oldBody := bodyBuf.B
    bodyBuf.B = dst

    redirectsCount := 0
    for {
        req.parsedURI = false
        req.Header.host = req.Header.host[:0]
        req.SetRequestURI(url)

        if err = c.Do(req, resp); err != nil {
            break
        }
        statusCode = resp.Header.StatusCode()
        if statusCode != StatusMovedPermanently && statusCode != StatusFound && statusCode != StatusSeeOther {
            break
        }

        redirectsCount++
        if redirectsCount > maxRedirectsCount {
            err = errTooManyRedirects
            break
        }
        location := resp.Header.peek(strLocation)
        if len(location) == 0 {
            err = errMissingLocation
            break
        }
        url = getRedirectURL(url, location)
    }

    body = bodyBuf.B
    bodyBuf.B = oldBody
    resp.keepBodyBuffer = false
    ReleaseResponse(resp)

    return statusCode, body, err
}
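In practice this means a caller should rely on the returned body rather than expect dst to be updated in place. A minimal sketch, keeping the fh alias from the question:

statusCode, body, err := fh.Get(nil, url) // nil dst: fasthttp allocates a fresh body buffer
if err != nil {
    fmt.Println(err) // on ErrTimeout, body falls back to whatever dst held
    return
}
fmt.Println(statusCode, string(body))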
I wrote a Go program which ran well for the past several months on Ubuntu 12.04 LTS, until I upgraded to 14.04 LTS.
My program is focused on sending HTTP requests, about 2-10 per second; the request addresses vary.
When the problem occurs, first some of the requests show read tcp [ip]:[port]: i/o timeout; then, after several minutes, all requests show read tcp [ip]:[port]: i/o timeout, and no request can be sent.
If I restart the program, everything becomes right again.
All of our servers (2 servers) have this problem after upgrading from 12.04 to 14.04.
I create a new goroutine for every request.
The problem does not occur at a fixed interval; sometimes it won't occur for one or two days, sometimes it occurs twice in an hour.
Below is my code requesting HTTP addresses:
t := &http.Transport{
    Dial:            timeoutDial(data.Timeout),
    TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
//req := s.ParseReq(data)
req := data.convert2Request()
if req == nil {
    return
}
var resp *http.Response
if data.Redirect {
    c := &http.Client{
        Transport: t,
    }
    resp, err = c.Do(req)
} else {
    resp, err = t.RoundTrip(req)
}
data.updateTry()
r := s.ParseResp(data, resp, err)

updateTry:

func (d *SendData) updateTry() {
    d.Try++
    d.LastSend = time.Now()
}

timeoutDial:

func timeoutDial(timeout int) func(netw, addr string) (net.Conn, error) {
    if timeout <= 0 {
        timeout = 10
    }
    return func(netw, addr string) (net.Conn, error) {
        deadline := time.Now().Add(time.Duration(timeout) * time.Second)
        c, err := net.DialTimeout(netw, addr, time.Second*time.Duration(timeout+5))
        if err != nil {
            return nil, err
        }
        c.SetDeadline(deadline)
        return c, nil
    }
}
And my handling of the response is:
func (s *Sender) ParseResp(data SendData, resp *http.Response, err error) (r Resp) {
    r = Resp{URL: data.URL}
    if err != nil {
        r.Err = err.Error()
    } else {
        r.HttpCode = resp.StatusCode
        r.Header = resp.Header
        r.URL = resp.Request.URL.String()
        defer resp.Body.Close()
        // we just read part of the response and log it
        reader := bufio.NewReader(resp.Body)
        buf := make([]byte, bytes.MinRead) // 512 bytes
        for len(r.Body) < 1024 {           // max 1k
            var n int
            if n, _ = reader.Read(buf); n == 0 {
                break
            }
            r.Body += string(buf[:n])
        }
    }
    return
}
I also found settings in /etc/sysctl.conf which can make the problem happen less frequently:
net.core.somaxconn = 65535
net.netfilter.nf_conntrack_max = 655350
net.netfilter.nf_conntrack_tcp_timeout_established = 1200
I need help solving this problem.
It seems similar to this bug, but I don't see any solution there: https://bugs.launchpad.net/juju-core/+bug/1307434
To state more explicitly what Not_a_Golfer and OneOfOne have said: when you're done with the response, you need to close the connection that has been left open (through the Body field, which is an io.ReadCloser). So basically, one simple fix would be to change the code that makes the HTTP request to:
var resp *http.Response
if data.Redirect {
    c := &http.Client{
        Transport: t,
    }
    resp, err = c.Do(req)
} else {
    resp, err = t.RoundTrip(req)
}
if err == nil {
    defer resp.Body.Close() // we need to close the connection
}
Without seeing the code to timeoutDial, my wild guess is that you don't close the connection when you're done with it.
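One further note: for the Transport to reuse a keep-alive connection, the body should also be read to EOF before it is closed. A minimal sketch (io.Discard is Go 1.16+; older code used ioutil.Discard):

if err == nil {
    defer func() {
        io.Copy(io.Discard, resp.Body) // drain so the keep-alive connection can be reused
        resp.Body.Close()
    }()
}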
Is there a better way to ping websites and check whether they are available?
I just need to get the status code, not download the entire site...
func Ping(domain string) int {
    timeout := time.Duration(2 * time.Second)
    dialTimeout := func(network, addr string) (net.Conn, error) {
        return net.DialTimeout(network, addr, timeout)
    }
    transport := http.Transport{
        Dial: dialTimeout,
    }
    client := http.Client{
        Transport: &transport,
    }
    url := "http://" + domain
    req, _ := http.NewRequest("GET", url, nil)
    resp, _ := client.Do(req)
    return resp.StatusCode
}
This function is too slow, and when I run it with goroutines it runs into resource limits and gives me errors...
Thanks!
Use a single transport. Because the transport maintains a pool of connections, you should not create and discard transports willy-nilly.
Close the response body as described at the beginning of the net/http doc.
Use HEAD if you are only interested in the status.
Check errors.
Code:
var client = http.Client{
    Transport: &http.Transport{
        Dial: (&net.Dialer{Timeout: 2 * time.Second}).Dial,
    },
}
func Ping(domain string) (int, error) {
    url := "http://" + domain
    req, err := http.NewRequest("HEAD", url, nil)
    if err != nil {
        return 0, err
    }
    resp, err := client.Do(req)
    if err != nil {
        return 0, err
    }
    resp.Body.Close()
    return resp.StatusCode, nil
}
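A quick usage sketch of the Ping above (example.com is just a placeholder):

code, err := Ping("example.com")
if err != nil {
    fmt.Println("ping failed:", err)
    return
}
fmt.Println("status:", code)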
Since this is the top result on Google for pinging in Go: just know that several packages have been written for this purpose, but if you plan to use this answer, I had to make some changes for it to work:
import (
    "net/http"
    "time"
)

var client = http.Client{
    Timeout: 2 * time.Second,
}
Otherwise it is the same as the accepted answer. But I'm a beginner in Go, so there may be a better way to do this.
I know the usual method of specifying a timeout with HTTP requests by doing:
httpClient := http.Client{
    Timeout: time.Duration(5 * time.Second),
}
However, I can't seem to figure out how to do the same when tracing HTTP requests. Here is the piece of code I am working with:
func timeGet(url string) (httpTimingBreakDown, error) {
    req, _ := http.NewRequest("GET", url, nil)

    var start, tlsHandshake time.Time
    var timingData httpTimingBreakDown
    timingData.url = url

    trace := &httptrace.ClientTrace{
        TLSHandshakeStart: func() { tlsHandshake = time.Now() },
        TLSHandshakeDone:  func(cs tls.ConnectionState, err error) { timingData.tls = time.Since(tlsHandshake) },
    }

    req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
    start = time.Now()

    http.DefaultTransport.(*http.Transport).ResponseHeaderTimeout = time.Second * 10 // hacky way; worked earlier but doesn't work anymore

    if _, err := http.DefaultTransport.RoundTrip(req); err != nil {
        fmt.Println(err)
        return timingData, err
    }
    timingData.total = time.Since(start)
    return timingData, nil
}
I am firing this function inside goroutines. My sample data set is 100 URLs. All the goroutines fire, but the program eventually ends after 30+ seconds, as if the timeout were 30 seconds.
Earlier I made this work the hacky way, by changing the default transport's ResponseHeaderTimeout to 10 seconds inside the function; anything that took too long timed out, and the program ended at 10.xxx seconds, but now it takes 30.xx seconds.
What would be a proper way of specifying a timeout in this scenario?
I know the usual method of specifying a timeout with HTTP requests by doing:
httpClient := http.Client{
    Timeout: time.Duration(5 * time.Second),
}
Actually, the preferred method is to use a context.Context on the request. The method you've used is just a short-cut suitable for simple use cases.
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
    return nil, err
}

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

req = req.WithContext(ctx)
And this method should work nicely for your situation as well.
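Applied to the timeGet function from the question, it looks roughly like this (the 5-second budget is an assumption; the trace is attached to the same context, so tracing keeps working):

req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
    return timingData, err
}

// Chain the timeout context and the trace, then attach both to the request.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
req = req.WithContext(httptrace.WithClientTrace(ctx, trace))

if _, err := http.DefaultTransport.RoundTrip(req); err != nil {
    return timingData, err
}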