Disable Common Name Validation - Go HTTP Client

How do I disable common name validation inside of a Go HTTP client? I am doing mutual TLS with a common CA, so common name validation means nothing.
The tls docs say:
// ServerName is used to verify the hostname on the returned
// certificates unless InsecureSkipVerify is given. It is also included
// in the client's handshake to support virtual hosting unless it is
// an IP address.
ServerName string
I don't want to set InsecureSkipVerify, but I also don't want to validate the common name.

You would pass a tls.Config struct with your own VerifyPeerCertificate function, and then you would check the certificate yourself.
VerifyPeerCertificate func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error
If normal verification fails then the handshake will abort before
considering this callback. If normal verification is disabled by
setting InsecureSkipVerify then this callback will be considered but
the verifiedChains argument will always be nil.
You can look here for an example of how to verify a certificate. If you look at the verification code, you'll see that part of even this verification process includes checking the hostname, but luckily it skips that check when the name is set to the empty string.
So, basically, you write your own VerifyPeerCertificate function that parses and checks the rawCerts [][]byte yourself. I think it would look something like this:
customVerify := func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
	// Parse the peer's certificates: the first is the leaf, the rest
	// are intermediates.
	certs := make([]*x509.Certificate, len(rawCerts))
	for i, rawCert := range rawCerts {
		cert, err := x509.ParseCertificate(rawCert)
		if err != nil {
			return err
		}
		certs[i] = cert
	}
	opts := x509.VerifyOptions{
		Roots:         caPool, // the pool holding the CA you already trust
		Intermediates: x509.NewCertPool(),
		// DNSName is left empty, so the hostname check is skipped.
	}
	for _, cert := range certs[1:] {
		opts.Intermediates.AddCert(cert)
	}
	_, err := certs[0].Verify(opts)
	return err
}
conf := tls.Config{
	// ...
	// InsecureSkipVerify disables the default verification (including
	// the hostname check); per the docs quoted above, the callback then
	// runs with verifiedChains == nil and does all the work itself.
	InsecureSkipVerify:    true,
	VerifyPeerCertificate: customVerify,
}
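To wire this into an HTTP client, the config goes into the transport as usual. A minimal sketch, not from the original answer; clientCert stands for whatever client certificate you load for the mutual TLS mentioned in the question, and caPool is the pool holding your common CA, as used in customVerify above:
conf.Certificates = []tls.Certificate{clientCert} // client cert for mutual TLS
tr := &http.Transport{
	TLSClientConfig: &conf,
}
client := &http.Client{Transport: tr}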

A normal HTTPS POST looks like this:
pool := x509.NewCertPool()
caStr, err := ioutil.ReadFile(serverCAFile)
if err != nil {
	return nil, fmt.Errorf("read server ca file fail")
}
pool.AppendCertsFromPEM(caStr)
tr := &http.Transport{
	TLSClientConfig: &tls.Config{
		RootCAs: pool,
	},
}
client := &http.Client{Transport: tr}
client.Post(url, bodyType, body)
But if your URL uses an IP address (e.g. https://127.0.0.1:8080/api/test) or otherwise does not match the certificate's common name, and you only want to skip the common name check, you can set ServerName manually:
pool := x509.NewCertPool()
caStr, err := ioutil.ReadFile(serverCAFile)
if err != nil {
	return nil, fmt.Errorf("read server ca file fail")
}
block, _ := pem.Decode(caStr)
if block == nil {
	return nil, fmt.Errorf("Decode ca file fail")
}
if block.Type != "CERTIFICATE" || len(block.Headers) != 0 {
	return nil, fmt.Errorf("Decode ca block file fail")
}
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
	return nil, fmt.Errorf("ParseCertificate ca block file fail")
}
pool.AddCert(cert)
tr := &http.Transport{
	TLSClientConfig: &tls.Config{
		RootCAs:    pool,
		ServerName: cert.Subject.CommonName, // manually set ServerName
	},
}
client := &http.Client{Transport: tr}
client.Post(url, bodyType, body)

Related

Keep alive request for _change continuous feed

I am trying to convert the Node.js code below to Go. I have to establish a keep-alive HTTP request to the PouchDB server's _changes?feed=continuous endpoint. However, I'm not able to achieve it in Go.
var http = require('http')
var agent = new http.Agent({
  keepAlive: true
});
var options = {
  host: 'localhost',
  port: '3030',
  method: 'GET',
  path: '/downloads/_changes?feed=continuous&include_docs=true',
  agent
};
var req = http.request(options, function(response) {
  response.on('data', function(data) {
    let val = data.toString()
    if (val == '\n')
      console.log('newline')
    else {
      console.log(JSON.parse(val))
      // to close the connection
      // agent.destroy()
    }
  });
  response.on('end', function() {
    // Data received completely.
    console.log('end');
  });
  response.on('error', function(err) {
    console.log(err)
  })
});
req.end();
Below is the Go code:
client := &http.Client{}
data := url.Values{}
req, err := http.NewRequest("GET", "http://localhost:3030/downloads/_changes?feed=continuous&include_docs=true", strings.NewReader(data.Encode()))
req.Header.Set("Connection", "keep-alive")
resp, err := client.Do(req)
if err != nil {
	fmt.Println(err)
}
defer resp.Body.Close()
fmt.Println(resp.Status)
result, err := ioutil.ReadAll(resp.Body)
if err != nil {
	fmt.Println(err)
}
fmt.Println(result)
I am getting status 200 OK, but no data gets printed; it's stuck. On the other hand, if I use the longpoll option, i.e. http://localhost:3030/downloads/_changes?feed=longpoll, then I receive data.
Your code is working "as expected", but what you wrote in Go is not equivalent to the Node.js code. The Go code blocks on ioutil.ReadAll(resp.Body) because the connection is kept open by the CouchDB server. Once the server closes the connection, your client code will print the result, as ioutil.ReadAll() will then be able to read all the data down to EOF.
From CouchDB documentation about continuous feed:
A continuous feed stays open and connected to the database until explicitly closed and changes are sent to the client as they happen, i.e. in near real-time. As with the longpoll feed type you can set both the timeout and heartbeat intervals to ensure that the connection is kept open for new changes and updates.
You can try experiment and add &timeout=1 to URL which will force CouchDB to close connection after 1s. Your Go code then should print the whole response.
The Node.js code works differently: the data event handler is called every time the server sends some data. If you want to achieve the same and process partial updates as they come (before the connection is closed), you cannot use ioutil.ReadAll(), as it waits for EOF (and thus blocks in your case); use something like resp.Body.Read() to process partial buffers instead. Here is a very simplified snippet that demonstrates this and should give you the basic idea:
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	client := &http.Client{}
	data := url.Values{}
	req, err := http.NewRequest("GET", "http://localhost:3030/downloads/_changes?feed=continuous&include_docs=true", strings.NewReader(data.Encode()))
	req.Header.Set("Connection", "keep-alive")
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
	buf := make([]byte, 1024)
	for {
		l, err := resp.Body.Read(buf)
		if l == 0 && err != nil {
			break // this is super simplified
		}
		// here you can send off data to e.g. channel or start
		// handler goroutine...
		fmt.Printf("%s", buf[:l])
	}
	fmt.Println()
}
In a real-world application you probably want to make sure your buf holds something that looks like a valid message before passing it to a channel or handler goroutine for further processing.
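Since the continuous feed is newline-delimited JSON, one way to frame complete messages is to read the body line by line. A sketch of that idea, not from the original answer; it assumes "bufio" and "encoding/json" are imported, and handleChange is a hypothetical handler:
// read the continuous feed line by line; each non-empty line is one
// JSON change event, empty lines are heartbeats
scanner := bufio.NewScanner(resp.Body)
for scanner.Scan() {
	line := scanner.Bytes()
	if len(line) == 0 {
		continue // heartbeat newline, keep waiting
	}
	var change map[string]interface{}
	if err := json.Unmarshal(line, &change); err != nil {
		fmt.Println("bad line:", err)
		continue
	}
	handleChange(change) // hypothetical handler for one change event
}
if err := scanner.Err(); err != nil {
	fmt.Println(err)
}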
Finally, I was able to resolve the issue. The issue was related to the DisableCompression flag; https://github.com/golang/go/issues/16488 gave me the hint.
Setting DisableCompression: true fixed the issue:
client := &http.Client{Transport: &http.Transport{
	DisableCompression: true,
}}
I am assuming that client := &http.Client{} uses DisableCompression: false by default and that the PouchDB server sends compressed JSON, hence the received data was compressed and resp.Body.Read was not able to read it incrementally.

How to use a self-signed certificate through proxy server client

I am trying to create an HTTP client that can send requests through a proxy server to a server that uses a self-signed certificate.
I tried this code, but I am not sure if there is a problem here. Will the following code work?
func CreateProxyClient(serverProxy string, sid string, portProxy int) (*Client, error) {
	http.DefaultTransport.(*http.Transport).TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
	proxyUrl, _ := url.Parse(serverProxy + ":" + strconv.Itoa(portProxy))
	tr := &http.Transport{
		Proxy: http.ProxyURL(proxyUrl),
	}
	var netClient = &http.Client{
		Timeout:   time.Second * 10,
		Transport: tr,
	}
	return &Client{netClient, serverProxy, sid}, nil
}
"Is there a problem"? Only if you consider blindly trusting the certificate a problem (that's why it's called InsecureSkipVerify).
The better option is to configure the client to trust the specific certificate that the server is using, so you get MITM protection in addition to encryption.
To do this, get a copy of the server's certificate via a trusted channel (e.g. copy it from the server's filesystem), then add it to the client's CA pool (this will also trust all certificates signed by the server's cert, if applicable).
Here is an example for the test certificate in net/http, which is used by httptest.NewTLSServer:
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
	"net/http/httptest"
)

// cert is used by httptest.NewTLSServer.
//
// In a real application you're going to want to load the certificate from
// disk, rather than hard-coding it. Otherwise you have to recompile the program
// when the certificate is updated.
var cert = []byte(`-----BEGIN CERTIFICATE-----
MIICEzCCAXygAwIBAgIQMIMChMLGrR+QvmQvpwAU6zANBgkqhkiG9w0BAQsFADAS
MRAwDgYDVQQKEwdBY21lIENvMCAXDTcwMDEwMTAwMDAwMFoYDzIwODQwMTI5MTYw
MDAwWjASMRAwDgYDVQQKEwdBY21lIENvMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
iQKBgQDuLnQAI3mDgey3VBzWnB2L39JUU4txjeVE6myuDqkM/uGlfjb9SjY1bIw4
iA5sBBZzHi3z0h1YV8QPuxEbi4nW91IJm2gsvvZhIrCHS3l6afab4pZBl2+XsDul
rKBxKKtD1rGxlG4LjncdabFn9gvLZad2bSysqz/qTAUStTvqJQIDAQABo2gwZjAO
BgNVHQ8BAf8EBAMCAqQwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDwYDVR0TAQH/BAUw
AwEB/zAuBgNVHREEJzAlggtleGFtcGxlLmNvbYcEfwAAAYcQAAAAAAAAAAAAAAAA
AAAAATANBgkqhkiG9w0BAQsFAAOBgQCEcetwO59EWk7WiJsG4x8SY+UIAA+flUI9
tyC4lNhbcF2Idq9greZwbYCqTTTr2XiRNSMLCOjKyI7ukPoPjo16ocHj+P3vZGfs
h1fIw3cSS2OolhloGw/XM6RWPWtPAlGykKLciQrBru5NAPvCMsb/I1DAceTiotQM
fblo6RBxUQ==
-----END CERTIFICATE-----`)

func main() {
	pool, err := x509.SystemCertPool()
	if err != nil {
		log.Fatal(err)
	}
	if !pool.AppendCertsFromPEM(cert) {
		log.Fatal("Cannot append self-signed cert to CA pool")
	}
	c := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				RootCAs: pool,
			},
		},
	}
	s := httptest.NewTLSServer(nil)
	res, err := c.Get(s.URL)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(res.Status)
}
Try it on the playground: https://play.golang.org/p/HsI2RyOd5qd

How can I implement an inactivity timeout on an http download

I've been reading up on the various timeouts that are available on an HTTP request, and they all seem to act as hard deadlines on the total time of a request.
I am running an HTTP download. I don't want to implement a hard timeout past the initial handshake, as I don't know anything about my users' connections and don't want to time out on slow connections. What I would ideally like is to time out after a period of inactivity (when nothing has been downloaded for x seconds). Is there any way to do this built in, or do I have to interrupt based on statting the file?
The working code is a little hard to isolate, but I think these are the relevant parts. There is another loop that stats the file to provide progress, but I will need to refactor a bit to use it to interrupt the download:
// HttpsClientOnNetInterface returns an http client using the named network interface (via proxy if passed)
func HttpsClientOnNetInterface(interfaceIP []byte, httpsProxy *Proxy) (*http.Client, error) {
	log.Printf("Got IP addr : %s\n", string(interfaceIP))
	// create address for the dialer
	tcpAddr := &net.TCPAddr{
		IP: interfaceIP,
	}
	// create the dialer & transport
	netDialer := net.Dialer{
		LocalAddr: tcpAddr,
	}
	var proxyURL *url.URL
	var err error
	if httpsProxy != nil {
		proxyURL, err = url.Parse(httpsProxy.String())
		if err != nil {
			return nil, fmt.Errorf("Error parsing proxy connection string: %s", err)
		}
	}
	httpTransport := &http.Transport{
		Dial:  netDialer.Dial,
		Proxy: http.ProxyURL(proxyURL),
	}
	httpClient := &http.Client{
		Transport: httpTransport,
	}
	return httpClient, nil
}
/*
StartDownloadWithProgress will initiate a download from a remote url to a local file,
providing download progress information
*/
func StartDownloadWithProgress(interfaceIP []byte, httpsProxy *Proxy, srcURL, dstFilepath string) (*Download, error) {
	// start an http client on the selected net interface
	httpClient, err := HttpsClientOnNetInterface(interfaceIP, httpsProxy)
	if err != nil {
		return nil, err
	}
	// grab the header
	headResp, err := httpClient.Head(srcURL)
	if err != nil {
		log.Printf("error on head request (download size): %s", err)
		return nil, err
	}
	// pull out total size
	size, err := strconv.Atoi(headResp.Header.Get("Content-Length"))
	if err != nil {
		headResp.Body.Close()
		return nil, err
	}
	headResp.Body.Close()
	errChan := make(chan error)
	doneChan := make(chan struct{})
	// spawn the download process
	go func(httpClient *http.Client, srcURL, dstFilepath string, errChan chan error, doneChan chan struct{}) {
		resp, err := httpClient.Get(srcURL)
		if err != nil {
			errChan <- err
			return
		}
		defer resp.Body.Close()
		// create the file
		outFile, err := os.Create(dstFilepath)
		if err != nil {
			errChan <- err
			return
		}
		defer outFile.Close()
		log.Println("starting copy")
		// copy to file as the response arrives
		_, err = io.Copy(outFile, resp.Body)
		if err != nil {
			log.Printf("\n Download Copy Error: %s \n", err.Error())
			errChan <- err
			return
		}
		doneChan <- struct{}{}
	}(httpClient, srcURL, dstFilepath, errChan, doneChan)
	// return Download
	return (&Download{
		updateFrequency: time.Microsecond * 500,
		total:           size,
		errRecieve:      errChan,
		doneRecieve:     doneChan,
		filepath:        dstFilepath,
	}).Start(), nil
}
Update
Thanks to everyone who had input on this.
I've accepted JimB's answer, as it seems like a perfectly viable approach that is more generalised than the solution I chose (and probably more useful to anyone who finds their way here).
In my case I already had a loop monitoring the file size, so I threw a named error when the size did not change for x seconds (the idea is sketched below). It was much easier for me to pick up the named error through my existing error handling and retry the download from there.
I probably crash at least one goroutine in the background with my approach (I may fix this later with some signalling), but as this is a short-running application (it's an installer), this is acceptable (or at least tolerable).
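A minimal sketch of that stat-watcher idea, under stated assumptions (this is not the asker's actual code; ErrStalled, watchFileSize, and the channel wiring are illustrative):
// ErrStalled is a named error the download loop can match on to retry.
var ErrStalled = errors.New("download stalled: file size unchanged")

// watchFileSize polls path every interval and sends ErrStalled on errChan
// if the size has not grown within timeout.
func watchFileSize(path string, interval, timeout time.Duration, errChan chan<- error, done <-chan struct{}) {
	var lastSize int64
	lastChange := time.Now()
	for {
		select {
		case <-done:
			return
		case <-time.After(interval):
			info, err := os.Stat(path)
			if err != nil {
				continue // file may not exist yet
			}
			if info.Size() != lastSize {
				lastSize = info.Size()
				lastChange = time.Now()
			} else if time.Since(lastChange) > timeout {
				errChan <- ErrStalled
				return
			}
		}
	}
}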
Doing the copy manually is not particularly difficult. If you're unsure how to properly implement it, it's only a couple dozen lines from the io package to copy and modify to suit your needs (I only removed the ErrShortWrite clause, because we can assume that the std library io.Writer implementations are correct)
Here is a copy work-alike function that also takes a cancelation context and an idle timeout parameter. Every time there is a successful read, it signals the cancelation goroutine to continue and start a new timer.
func idleTimeoutCopy(dst io.Writer, src io.Reader, timeout time.Duration,
	ctx context.Context, cancel context.CancelFunc) (written int64, err error) {
	read := make(chan int)
	go func() {
		for {
			select {
			case <-ctx.Done():
				return
			case <-time.After(timeout):
				cancel()
			case <-read:
			}
		}
	}()
	buf := make([]byte, 32*1024)
	for {
		nr, er := src.Read(buf)
		if nr > 0 {
			read <- nr
			nw, ew := dst.Write(buf[0:nr])
			written += int64(nw)
			if ew != nil {
				err = ew
				break
			}
		}
		if er != nil {
			if er != io.EOF {
				err = er
			}
			break
		}
	}
	return written, err
}
While I used time.After for brevity, it's more efficient to reuse the Timer. This means taking care to use the correct reset pattern, as the return value of the Reset function is broken:
t := time.NewTimer(timeout)
for {
	select {
	case <-ctx.Done():
		return
	case <-t.C:
		cancel()
	case <-read:
		if !t.Stop() {
			<-t.C
		}
		t.Reset(timeout)
	}
}
You could skip calling Stop altogether here, since in my opinion if the timer fires while calling Reset, it was close enough to cancel anyway, but it's often good to have the code be idiomatic in case this code is extended in the future.
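One detail the snippets above leave implicit: cancel() only aborts a stalled download if the same context is attached to the HTTP request, so that canceling it unblocks the Read on resp.Body. A rough sketch of the wiring (reusing srcURL, httpClient, and outFile from the question's code; the 30-second window is arbitrary):
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

req, err := http.NewRequest("GET", srcURL, nil)
if err != nil {
	log.Fatal(err)
}
req = req.WithContext(ctx) // canceling ctx aborts the in-flight body read

resp, err := httpClient.Do(req)
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()

// copy with a 30-second inactivity window: if nothing is read for that
// long, the goroutine inside idleTimeoutCopy calls cancel() and the
// blocked Read returns with an error
written, err := idleTimeoutCopy(outFile, resp.Body, 30*time.Second, ctx, cancel)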

Get all the headers of HTTP response and send it back in next HTTP request

Go version: go1.8.1 windows/amd64
Sample code for HTTP request is:
func (c *Client) RoundTripSoap12(action string, in, out Message) error {
	fmt.Println("****************************************************************")
	headerFunc := func(r *http.Request) {
		r.Header.Add("Content-Type", "text/xml; charset=utf-8")
		r.Header.Add("SOAPAction", action)
		r.Cookies()
	}
	return doRoundTrip(c, headerFunc, in, out)
}

func doRoundTrip(c *Client, setHeaders func(*http.Request), in, out Message) error {
	req := &Envelope{
		EnvelopeAttr: c.Envelope,
		NSAttr:       c.Namespace,
		Header:       c.Header,
		Body:         Body{Message: in},
	}
	if req.EnvelopeAttr == "" {
		req.EnvelopeAttr = "http://schemas.xmlsoap.org/soap/envelope/"
	}
	if req.NSAttr == "" {
		req.NSAttr = c.URL
	}
	var b bytes.Buffer
	err := xml.NewEncoder(&b).Encode(req)
	if err != nil {
		return err
	}
	cli := c.Config
	if cli == nil {
		cli = http.DefaultClient
	}
	r, err := http.NewRequest("POST", c.URL, &b)
	if err != nil {
		return err
	}
	setHeaders(r)
	if c.Pre != nil {
		c.Pre(r)
	}
	fmt.Println("*************", r)
	resp, err := cli.Do(r)
	if err != nil {
		fmt.Println("error occurred is as follows ", err)
		return err
	}
	fmt.Println("response headers are: ", resp.Header.Get("sprequestguid"))
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// read only the first MB of the body in error case
		limReader := io.LimitReader(resp.Body, 1024*1024)
		body, _ := ioutil.ReadAll(limReader)
		return fmt.Errorf("%q: %q", resp.Status, body)
	}
	return xml.NewDecoder(resp.Body).Decode(out)
}
I will call the RoundTripSoap12 function on the corresponding HTTP client.
When I send a request for the first time, I get some headers in the HTTP response, and these response headers should be sent as-is in my next HTTP request.
You may be interested in the httputil package and the reverse proxy example provided if you wish to proxy requests transparently:
https://golang.org/src/net/http/httputil/reverseproxy.go
You can copy the headers from one request to another fairly easily; the Header is a separate object. If r and rc are *http.Request values and you don't mind them sharing a header (you may need to clone instead if you want independent requests):
rc.Header = r.Header // note shallow copy
fmt.Println("Headers", r.Header, rc.Header)
https://play.golang.org/p/q2KUHa_qiP
Or you can loop through keys and values and only copy certain headers, and/or do a clone instead to ensure you share no memory. See the functions cloneHeader and copyHeader inside reverseproxy.go, linked above, for examples of this.
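As a concrete sketch (not from the original answer), copying every header from a response into a follow-up request could look like this, where nextReq stands for whatever request you build next:
// copy all headers from the previous response into the next request;
// Add preserves multiple values stored under the same key
for k, vv := range resp.Header {
	for _, v := range vv {
		nextReq.Header.Add(k, v)
	}
}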

golang program determine if user uses proxy

I want my Go HTTP client to use a proxy only if the user provides the proxy value.
// Make HTTP GET/POST request
proxyUrl, err := url.Parse(proxy)
tr := &http.Transport{
	DisableKeepAlives: true,
	Proxy:             http.ProxyURL(proxyUrl),
}
The above code always tries to connect through the proxy, even if the proxy variable is blank.
Thanks for the suggestion. Now I am able to make it work. Below is the modified code.
tr := &http.Transport{}
tr.DisableKeepAlives = true
if len(proxy) != 0 { // set the proxy only if the proxy param is specified
	proxyUrl, err := url.Parse(proxy)
	if err == nil {
		tr.Proxy = http.ProxyURL(proxyUrl)
	}
}
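For completeness, a small illustrative addition (target is a placeholder URL): the transport is then wired into a client as usual, and requests go direct or through the proxy depending on whether one was configured.
client := &http.Client{Transport: tr}
resp, err := client.Get(target) // direct, or via the proxy when one was set
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()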
