Go to test website status (ping) - http

Is there a better way to ping websites and check whether they are available?
I just need to get the status code, not download the entire site...
func Ping(domain string) int {
	timeout := time.Duration(2 * time.Second)
	dialTimeout := func(network, addr string) (net.Conn, error) {
		return net.DialTimeout(network, addr, timeout)
	}
	transport := http.Transport{
		Dial: dialTimeout,
	}
	client := http.Client{
		Transport: &transport,
	}
	url := "http://" + domain
	req, _ := http.NewRequest("GET", url, nil)
	resp, _ := client.Do(req)
	return resp.StatusCode
}
This function is too slow, and when I run it with goroutines it goes over the limits and gives me errors...
Thanks!

Use a single transport. Because the transport maintains a pool of connections, you should not create and ignore transports willy nilly.
Close the response body as described at the beginning of the net/http doc.
Use HEAD if you are only interested in the status.
Check errors.
Code:
var client = http.Client{
	Transport: &http.Transport{
		Dial: (&net.Dialer{Timeout: 2 * time.Second}).Dial,
	},
}

func Ping(domain string) (int, error) {
	url := "http://" + domain
	req, err := http.NewRequest("HEAD", url, nil)
	if err != nil {
		return 0, err
	}
	resp, err := client.Do(req)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}
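For illustration, here is a minimal sketch of calling Ping with bounded concurrency, since the question mentions hitting limits when running it in goroutines. The PingAll helper, the limit of 20 in-flight requests, and the use of sync are assumptions, not part of the answer:

func PingAll(domains []string) map[string]int {
	results := make(map[string]int, len(domains))
	var mu sync.Mutex
	var wg sync.WaitGroup
	sem := make(chan struct{}, 20) // assumed cap on concurrent requests

	for _, d := range domains {
		wg.Add(1)
		go func(d string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			code, err := Ping(d)
			if err != nil {
				return // this sketch simply skips unreachable domains
			}
			mu.Lock()
			results[d] = code
			mu.Unlock()
		}(d)
	}
	wg.Wait()
	return results
}

Because every goroutine shares the single client (and therefore a single connection pool), the semaphore keeps the number of open sockets bounded.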

Since this is the top result on Google for pinging in Go, know that there are several packages written for this purpose. But if you plan to use this answer, I had to make some changes to get it to work:
import (
	"net/http"
	"time"
)

var client = http.Client{
	Timeout: 2 * time.Second,
}
Otherwise it's the same as the accepted answer. I'm a beginner in Go, though, so there may be a better way to do this.

Related

Sending data in Chunks using single HTTP Post connection

I receive the contents of a file from a data source in chunks. As I receive each chunk, I want to send it to a service using an HTTP POST request, keeping alive the same POST connection used for the first chunk to send the remaining chunks.
I came up with the following code snippet to implement something similar.
Server-Side
func handle(w http.ResponseWriter, req *http.Request) {
	buf := make([]byte, 256)
	var n int
	for {
		n, err := req.Body.Read(buf)
		if n == 0 && err == io.EOF {
			break
		}
		fmt.Printf(string(buf[:n]))
	}
	fmt.Printf(string(buf[:n]))
	fmt.Printf("Transfer Complete")
}
Client-Side
type alphaReader struct {
	reader io.Reader
}

func newAlphaReader(reader io.Reader) *alphaReader {
	return &alphaReader{reader: reader}
}

func (a *alphaReader) Read(p []byte) (int, error) {
	n, err := a.reader.Read(p)
	return n, err
}

func (a *alphaReader) Reset(str string) {
	a.reader = strings.NewReader(str)
}

func (a *alphaReader) Close() error {
	return nil
}

func main() {
	tr := http.DefaultTransport
	alphareader := newAlphaReader(strings.NewReader("First Chunk"))
	client := &http.Client{
		Transport: tr,
		Timeout:   0,
	}
	req := &http.Request{
		Method: "POST",
		URL: &url.URL{
			Scheme: "http",
			Host:   "localhost:8080",
			Path:   "/upload",
		},
		ProtoMajor:    1,
		ProtoMinor:    1,
		ContentLength: -1,
		Body:          alphareader,
	}
	fmt.Printf("Doing request\n")
	_, err := client.Do(req)
	alphareader.Reset("Second Chunk")
	fmt.Printf("Done request. Err: %v\n", err)
}
Here I want the string "Second Chunk" to be sent over the POST connection made earlier when I call alphareader.Reset("Second Chunk"). But that is not happening: the connection gets closed after sending the first chunk of data. Also, I have not written the Close() method properly and I'm not sure how to implement it.
I'm a newbie to Go, so any suggestions would be greatly helpful.
A *strings.Reader returns io.EOF after the initial string has been read and your wrapper does nothing to change that, so it cannot be reused. You're looking for io.Pipe to turn the request body into an io.Writer.
package main

import (
	"io"
	"net/http"
)

func main() {
	pr, pw := io.Pipe()

	req, err := http.NewRequest("POST", "http://localhost:8080/upload", pr)
	if err != nil {
		// TODO: handle error
	}

	go func() {
		defer pw.Close()
		if _, err := io.WriteString(pw, "first chunk"); err != nil {
			_ = err // TODO: handle error
		}
		if _, err := io.WriteString(pw, "second chunk"); err != nil {
			_ = err // TODO: handle error
		}
	}()

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		// TODO: handle error
	}
	res.Body.Close()
}
Also, don't initialize the request using a struct literal. Use one of the constructors instead. In your code you're not setting the Host and Header fields, for instance.
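For completeness, here is a minimal sketch of a matching server-side handler that consumes the streamed body. The /upload path matches the question; writing the chunks to os.Stdout is just an assumption for illustration:

func uploadHandler(w http.ResponseWriter, req *http.Request) {
	// io.Copy keeps reading until the client closes the pipe writer (io.EOF).
	n, err := io.Copy(os.Stdout, req.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Printf("\nTransfer Complete: %d bytes\n", n)
}

Both chunks arrive on the same request body, because from the server's point of view they form one continuous chunked POST body.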

http requests returning to the wrong thread in golang

I'm having a real head-scratcher of a problem. I'm syncing data for clients from an external API. Currently it's structured like this:
func SyncInitHandler(w http.ResponseWriter, r *http.Request) {
	accounts := data.GetAccounts()
	done := make(chan error)
	var numRequests int
	for _, account := range accounts {
		numRequests++
		go func(a models.Account) {
			subTaskURL := fmt.Sprintf("http://localhost:8080/sync/account/%v", a.Id)
			_, err := http.Get(subTaskURL)
			done <- err
		}(account)
	}
	for i := 0; i < numRequests; i++ {
		<-done
	}
}
func SyncAccountHandler(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	accountId, _ := strconv.ParseInt(vars["account-id"], 10, 64)
	account, _ := data.GetAccount(accountId)
	externalAPIURL := fmt.Sprintf("https://www.example.com/account/%v", account.ApiToken)
	req, _ := http.NewRequest("POST", externalAPIURL, nil)
	client := http.Client{} // urlfetch.Client(c)
	resp, _ := client.Do(req)
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	apiResp := models.ApiRespModel{}
	err := json.Unmarshal(body, &apiResp)
	if err != nil {
		log.Printf("An Error Occurred Parsing JSON: %v", err)
		return
	}
	log.Printf("Account API Token: %v, Response API Token: %v", account.ApiToken, apiResp.Token)
}
What should happen is that the account API token and the response API token printed on the last line are the same (i.e. the returned data is for the same account that requested it). What is happening is that the tokens don't match, and the wrong data is being returned for the wrong account.
How is this even possible? As you can see, the external API requests are spawned in separate requests on the local server. Go HTTP requests are supposed to be thread safe, aren't they?

Performance issues using DialContext in Go

I did a quick benchmark using Go's built-in http.Client and net. It showed a noticeable performance hit when using DialContext compared to not using it.
I am basically trying to imitate a use case we have at my company, where that http.Client setup is far less performant than the default configuration when used for exactly the same work, and I noticed that commenting out the DialContext part made it go faster.
The benchmark opens a pool of workers (8 in the example) that make requests to a single URL, fed by a buffered channel of the same size as the number of workers (8).
Here is the code with DialContext (2.266333805s):
func main() {
	var httpClient *http.Client
	httpClient = &http.Client{
		Transport: &http.Transport{
			DialContext: (&net.Dialer{
				Timeout:   3 * time.Second,
				KeepAlive: 30 * time.Second,
				DualStack: true,
			}).DialContext,
		},
	}

	url := "https://stackoverflow.com/"
	wg := sync.WaitGroup{}
	threads := 8
	wg.Add(threads)
	c := make(chan struct{}, threads)
	start := time.Now()
	for i := 0; i < threads; i++ {
		go func() {
			for range c {
				req, _ := http.NewRequest(http.MethodGet, url, nil)
				resp, err := httpClient.Do(req)
				if err == nil {
					resp.Body.Close()
				}
			}
			wg.Done()
		}()
	}
	for i := 0; i < 200; i++ {
		c <- struct{}{}
	}
	close(c)
	wg.Wait()
	fmt.Println(time.Since(start))
}
The reported time was 2.266333805s.
And here is the code without DialContext (731.154103ms):
func main() {
	var httpClient *http.Client
	httpClient = &http.Client{
		Transport: &http.Transport{},
	}

	url := "https://stackoverflow.com/"
	wg := sync.WaitGroup{}
	threads := 8
	wg.Add(threads)
	c := make(chan struct{}, threads)
	start := time.Now()
	for i := 0; i < threads; i++ {
		go func() {
			for range c {
				req, _ := http.NewRequest(http.MethodGet, url, nil)
				resp, err := httpClient.Do(req)
				if err == nil {
					resp.Body.Close()
				}
			}
			wg.Done()
		}()
	}
	for i := 0; i < 200; i++ {
		c <- struct{}{}
	}
	close(c)
	wg.Wait()
	fmt.Println(time.Since(start))
}
The reported time was 731.154103ms.
The difference between results was consistent among several runs of the program.
Does anyone have a clue why this is happening?
Thanks!
EDIT: So I tried net/http/httptrace and I made sure the body of the response was fully read and closed:
go func() {
	for range c {
		req, _ := http.NewRequest(http.MethodGet, url, nil)
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), &httptrace.ClientTrace{
			GotConn: t.gotConn,
		}))
		resp, _ := httpClient.Do(req)
		ioutil.ReadAll(resp.Body)
		resp.Body.Close()
	}
	wg.Done()
}()
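The t.gotConn callback isn't shown here; a hypothetical tracer that counts new vs. reused connections could look like this sketch (the type and field names are made up):

type connTracer struct {
	mu          sync.Mutex
	newConns    int
	reusedConns int
}

func (t *connTracer) gotConn(info httptrace.GotConnInfo) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if info.Reused {
		t.reusedConns++
	} else {
		t.newConns++
	}
}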
The findings were interesting when comparing using DialContext vs. not using it.
NOT USING DialContext:
time taken to run 200 requests: 5.639808793s
new connections: 1
reused connections: 199
USING DialContext:
time taken to run 200 requests: 5.682882723s
new connections: 8
reused connections: 192
It is faster... but why does one open 8 new connections and the other just 1?
The only way you would be getting such large differences is if one transport is reusing the connections, and the other is not. In order to ensure you can reuse the connection, you must always read the response body. It is possible that in some cases the connection would be reused without explicitly reading the body, but it's never guaranteed and depends on many things like the remote server closing the connection, whether the response was fully buffered by the Transport, and if there was a context on the request.
The net/http/httptrace package can give you insight into a lot of the request internals, including whether the connections were reused or not. See https://blog.golang.org/http-tracing for examples.
Setting DisableKeepAlives on the Transport will always prevent the connections from being reused, making both similarly slow. Always reading the response like so will make both similarly fast:
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := httpClient.Do(req)
if err != nil {
	// handle error
	continue
}
io.Copy(ioutil.Discard, resp.Body)
resp.Body.Close()
If you want a cap on how much can be read before dropping the connection, you can simply wrap the body in an io.LimitedReader.
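A sketch of that cap, assuming 64 KB is plenty to drain before giving up on the connection:

const maxDrain = 64 << 10 // assumed limit on how much to read before closing
// Read at most maxDrain bytes; if the body is larger, the connection is
// dropped instead of being returned to the pool.
io.Copy(ioutil.Discard, io.LimitReader(resp.Body, maxDrain))
resp.Body.Close()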

Connect via Proxy while using UTLS and HTTP 1.1 Request

I am trying to connect to a host using randomized TLS fingerprinting. I am using https://github.com/refraction-networking/utls (see the issue I created at https://github.com/refraction-networking/utls/issues/42).
My issue now is: how can I utilize an HTTP or SOCKS5 proxy while opening that connection?
The code I'm using right now is:
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"

	"github.com/refraction-networking/utls"
)

var (
	dialTimeout = time.Duration(15) * time.Second
)

var requestHostname = "google.com"
var requestAddr = "172.217.22.110:443"

// this example generates a randomized fingerprint, then re-uses it in a follow-up connection
func HttpGetConsistentRandomized(hostname string, addr, uri string) (*http.Response, error) {
	config := tls.Config{ServerName: hostname}
	tcpConn, err := net.DialTimeout("tcp", addr, dialTimeout)
	if err != nil {
		return nil, fmt.Errorf("net.DialTimeout error: %+v", err)
	}
	uTlsConn := tls.UClient(tcpConn, &config, tls.HelloRandomized)
	defer uTlsConn.Close()
	err = uTlsConn.Handshake()
	if err != nil {
		return nil, fmt.Errorf("uTlsConn.Handshake() error: %+v", err)
	}
	uTlsConn.Close()

	// At this point uTlsConn.ClientHelloID holds a seed that was used to generate
	// randomized fingerprint. Now we can establish second connection with same fp
	tcpConn2, err := net.DialTimeout("tcp", addr, dialTimeout)
	if err != nil {
		return nil, fmt.Errorf("net.DialTimeout error: %+v", err)
	}
	uTlsConn2 := tls.UClient(tcpConn2, &config, uTlsConn.ClientHelloID)
	defer uTlsConn2.Close()
	err = uTlsConn2.Handshake()
	if err != nil {
		return nil, fmt.Errorf("uTlsConn.Handshake() error: %+v", err)
	}

	return httpGetOverConn(uTlsConn2, uTlsConn2.HandshakeState.ServerHello.AlpnProtocol, uri)
}

func main() {
	var response *http.Response
	var err error

	response, err = HttpGetConsistentRandomized(requestHostname, requestAddr, "/2.0/ssocookie")
	if err != nil {
		fmt.Printf("#> HttpGetConsistentRandomized() failed: %+v\n", err)
	} else {
		//fmt.Printf("#> HttpGetConsistentRandomized() response: %+s\n", httputil.DumpResponse(response, true))
		dump, err := httputil.DumpResponse(response, true)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%+s\n", dump)
	}
	return
}

func httpGetOverConn(conn net.Conn, alpn string, uri string) (*http.Response, error) {
	req := &http.Request{
		Method: "GET",
		URL:    &url.URL{Host: "www." + requestHostname + uri},
		Header: make(http.Header),
		Host:   "www." + requestHostname,
	}
	req.Proto = "HTTP/1.1"
	req.ProtoMajor = 1
	req.ProtoMinor = 1

	err := req.Write(conn)
	if err != nil {
		return nil, err
	}
	return http.ReadResponse(bufio.NewReader(conn), req)
}
As Steffen said, you have to create a proxy dialer first, dial the proxy to create a net.Conn, then use that net.Conn when creating the uTLS Client, before handshaking. For brevity's sake, your custom dialTLS function would look something like:
import (
	"crypto/tls"
	"net"
	"net/url"

	"github.com/magisterquis/connectproxy"
	"golang.org/x/net/proxy"

	utls "github.com/refraction-networking/utls"
)

var proxyString = "http://127.0.0.1:8080"

dialTLS := func(network, addr string, _ *tls.Config) (net.Conn, error) {
	proxyURI, _ := url.Parse(proxyString)

	var proxyDialer proxy.Dialer
	var err error
	switch proxyURI.Scheme {
	case "socks5":
		proxyDialer, err = proxy.SOCKS5("tcp", proxyString, nil, proxy.Direct)
	case "http":
		proxyDialer, err = connectproxy.New(proxyURI, proxy.Direct)
	}

	conn, err := proxyDialer.Dial("tcp", addr)
	// cfg is the *utls.Config for the connection (defined elsewhere)
	uconn := utls.UClient(conn, cfg, utls.HelloRandomizedALPN)
	...
}
Two suggestions:
Use the "connectproxy" module referenced above if you intend to tunnel through a HTTP CONNECT proxy.
Make life easier for yourself and take a look at the Meek pluggable transport source for Tor. There's a 'utls.go' module which takes care of everything for you, including setting up either a http or http2 transport depending on the negotiated ALPN protocol. It only supports SOCKS but you could easily adapt it to handle HTTP proxies.
An HTTP proxy and a SOCKS proxy work by having some initial proxy-specific handshake after the TCP connect. Once this handshake is done, they provide a normal TCP socket, which can then be used for the TLS handshake etc. Thus, all you need is to replace your
tcpConn, err := net.DialTimeout("tcp", addr, dialTimeout)
with a proxy-specific method to set up the TCP connection. This can be done by using SOCKS5 in x/net/proxy to create the appropriate Dialer; something similar for the HTTP CONNECT method is done in connectproxy.
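As a concrete illustration of that replacement, here is a minimal sketch using the SOCKS5 dialer from golang.org/x/net/proxy; the proxy address 127.0.0.1:1080 and the helper name are assumptions:

import (
	"net"
	"time"

	"golang.org/x/net/proxy"
)

func dialViaSocks5(addr string) (net.Conn, error) {
	// Build a dialer that performs the SOCKS5 handshake through the proxy.
	socksDialer, err := proxy.SOCKS5("tcp", "127.0.0.1:1080", nil,
		&net.Dialer{Timeout: 15 * time.Second})
	if err != nil {
		return nil, err
	}
	// The returned net.Conn can be handed to tls.UClient exactly like the
	// connection from net.DialTimeout in the question.
	return socksDialer.Dial("tcp", addr)
}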

Specify timeout when tracing HTTP request in Go

I know the usual method of specifying a timeout with HTTP requests by doing:
httpClient := http.Client{
	Timeout: time.Duration(5 * time.Second),
}
However, I can't seem to figure out how to do the same when tracing HTTP requests. Here is the piece of code I am working with:
func timeGet(url string) (httpTimingBreakDown, error) {
	req, _ := http.NewRequest("GET", url, nil)

	var start, connect, dns, tlsHandshake time.Time
	var timingData httpTimingBreakDown
	timingData.url = url

	trace := &httptrace.ClientTrace{
		TLSHandshakeStart: func() { tlsHandshake = time.Now() },
		TLSHandshakeDone:  func(cs tls.ConnectionState, err error) { timingData.tls = time.Since(tlsHandshake) },
	}

	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
	start = time.Now()

	http.DefaultTransport.(*http.Transport).ResponseHeaderTimeout = time.Second * 10 // hacky way, worked earlier but doesn't work anymore
	if _, err := http.DefaultTransport.RoundTrip(req); err != nil {
		fmt.Println(err)
		return timingData, err
	}

	timingData.total = time.Since(start)
	return timingData, nil
}
I am firing this function inside a goroutine. My sample data set is 100 URLs. All goroutines fire, but eventually the program ends in 30+ seconds, as if the timeout were 30 seconds.
Earlier I made this work with the hacky approach of changing the default transport's timeout inside the function to 10 seconds; anything that took too long timed out and the program ended at 10.xxx seconds, but now it takes 30.xx seconds.
What would be a proper way of specifying a timeout in this scenario?
I know the usual method of specifying a timeout with HTTP requests by doing:
httpClient := http.Client{
	Timeout: time.Duration(5 * time.Second),
}
Actually, the preferred method is to use a context.Context on the request. The method you've used is just a short-cut suitable for simple use cases.
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
	return nil, err
}

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
req = req.WithContext(ctx)
And this method should work nicely for your situation as well.
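Applied to the code in the question, a sketch of combining the per-request timeout with the existing ClientTrace might look like this (reusing trace, timingData, and start from the question's timeGet):

	// Attach both the trace and a 5-second deadline to the request's context.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	req = req.WithContext(httptrace.WithClientTrace(ctx, trace))

	start = time.Now()
	if _, err := http.DefaultTransport.RoundTrip(req); err != nil {
		return timingData, err
	}
	timingData.total = time.Since(start)

When the deadline expires, RoundTrip returns a context deadline exceeded error instead of waiting the 30+ seconds observed in the question.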
