I came across this code:
go func() {
    var err error
    if hasCert(s.TLSConfig) {
        err = s.ServeTLS(ln, "" /*certFile*/, "" /*keyFile*/)
    } else {
        err = s.Serve(ln)
    }
    if err != http.ErrServerClosed {
        errs <- err
    }
}()
ServeTLS is in net/http. Why are there comments inside the argument list? And if ServeTLS takes its certificates from the config, why pass them as arguments at all?
The ServeTLS signature:
func (srv *Server) ServeTLS(l net.Listener, certFile, keyFile string) error
Take a look at https://pkg.go.dev/crypto/tls#Config and at the ServeTLS documentation. A tls.Config can carry the server certificate itself (via its Certificates or GetCertificate fields), which is exactly what hasCert(s.TLSConfig) is checking. ServeTLS only requires certFile and keyFile when the config does not already provide a certificate, so passing empty strings here is deliberate, and the /*certFile*/ and /*keyFile*/ comments simply label which parameters the empty strings stand in for.
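For concreteness, here is a minimal sketch of that pattern; the hasCert helper body and the certificate file names are assumptions, not taken from the original post:

package main

import (
    "crypto/tls"
    "log"
    "net"
    "net/http"
)

// hasCert mirrors the helper assumed in the question: does the config itself
// already carry a server certificate?
func hasCert(cfg *tls.Config) bool {
    return cfg != nil && (len(cfg.Certificates) > 0 || cfg.GetCertificate != nil)
}

func main() {
    // Hypothetical file names; any loaded key pair will do.
    cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
    if err != nil {
        log.Fatal(err)
    }
    s := &http.Server{
        Addr:      ":8443",
        TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
    }
    ln, err := net.Listen("tcp", s.Addr)
    if err != nil {
        log.Fatal(err)
    }
    if hasCert(s.TLSConfig) {
        // Empty certFile/keyFile are allowed because TLSConfig already has the cert.
        log.Fatal(s.ServeTLS(ln, "", ""))
    }
    log.Fatal(s.Serve(ln))
}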
Related
Go version: go1.8.1 windows/amd64
Sample code for the HTTP request:
func (c *Client) RoundTripSoap12(action string, in, out Message) error {
    fmt.Println("****************************************************************")
    headerFunc := func(r *http.Request) {
        r.Header.Add("Content-Type", "text/xml; charset=utf-8")
        r.Header.Add("SOAPAction", action)
        r.Cookies()
    }
    return doRoundTrip(c, headerFunc, in, out)
}

func doRoundTrip(c *Client, setHeaders func(*http.Request), in, out Message) error {
    req := &Envelope{
        EnvelopeAttr: c.Envelope,
        NSAttr:       c.Namespace,
        Header:       c.Header,
        Body:         Body{Message: in},
    }
    if req.EnvelopeAttr == "" {
        req.EnvelopeAttr = "http://schemas.xmlsoap.org/soap/envelope/"
    }
    if req.NSAttr == "" {
        req.NSAttr = c.URL
    }
    var b bytes.Buffer
    err := xml.NewEncoder(&b).Encode(req)
    if err != nil {
        return err
    }
    cli := c.Config
    if cli == nil {
        cli = http.DefaultClient
    }
    r, err := http.NewRequest("POST", c.URL, &b)
    if err != nil {
        return err
    }
    setHeaders(r)
    if c.Pre != nil {
        c.Pre(r)
    }
    fmt.Println("*************", r)
    resp, err := cli.Do(r)
    if err != nil {
        fmt.Println("error occurred is as follows ", err)
        return err
    }
    fmt.Println("response headers are: ", resp.Header.Get("sprequestguid"))
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        // read only the first Mb of the body in error case
        limReader := io.LimitReader(resp.Body, 1024*1024)
        body, _ := ioutil.ReadAll(limReader)
        return fmt.Errorf("%q: %q", resp.Status, body)
    }
    return xml.NewDecoder(resp.Body).Decode(out)
}
I will call the RoundTripSoap12 function on the corresponding HTTP client.
When I send a request for the first time, I get some headers in the HTTP response; those response headers should be sent as-is in my next HTTP request.
If you wish to proxy requests transparently, you may be interested in the httputil package and the reverse proxy example it provides (a minimal sketch follows the link):
https://golang.org/src/net/http/httputil/reverseproxy.go
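For reference, a minimal sketch of such a transparent proxy built on httputil; the backend address here is made up:

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // Hypothetical backend; the proxy forwards requests (headers included) to it.
    target, err := url.Parse("http://127.0.0.1:8080")
    if err != nil {
        log.Fatal(err)
    }
    proxy := httputil.NewSingleHostReverseProxy(target)
    log.Fatal(http.ListenAndServe(":9000", proxy))
}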
You can copy the headers from one request to another fairly easily, because the Header is a separate object. If r and rc are *http.Request values and you don't mind them sharing a header (you may need to clone instead if you want independent requests):
rc.Header = r.Header // note shallow copy
fmt.Println("Headers", r.Header, rc.Header)
https://play.golang.org/p/q2KUHa_qiP
Or you can loop over keys and values and copy only certain headers, and/or clone the header to ensure the two requests share no memory. See the cloneHeader and copyHeader functions inside reverseproxy.go, linked above, for examples; a sketch of the deep copy follows.
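A minimal sketch of such a deep copy, modeled loosely on those helpers (the function name here is mine):

// cloneHeader returns a deep copy of h, so a request using the copy shares
// no memory with the original.
func cloneHeader(h http.Header) http.Header {
    h2 := make(http.Header, len(h))
    for k, vv := range h {
        vv2 := make([]string, len(vv))
        copy(vv2, vv)
        h2[k] = vv2
    }
    return h2
}

Then rc.Header = cloneHeader(r.Header) gives rc a fully independent copy.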
I want to create a simple script that checks whether a certain hostname:port is running. I only want a bool telling me whether that URL is live, but I'm not sure there's a straightforward way of doing it.
If you only want to see whether a URL is reachable, you could use net.DialTimeout. Like this:
timeout := 1 * time.Second
conn, err := net.DialTimeout("tcp", "mysite:myport", timeout)
if err != nil {
    log.Println("Site unreachable, error: ", err)
} else {
    conn.Close() // the connection itself is all we needed
}
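If you want this wrapped up as the bool check the question asks for, a small sketch (the function name is mine):

// isReachable reports whether hostPort (for example "example.com:80") accepts
// a TCP connection within the given timeout.
func isReachable(hostPort string, timeout time.Duration) bool {
    conn, err := net.DialTimeout("tcp", hostPort, timeout)
    if err != nil {
        return false
    }
    conn.Close()
    return true
}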
If you want to check whether a web server answers on a certain URL, you can issue an HTTP GET request using net/http.
You will get an error (eventually a timeout) if the server doesn't respond at all. You might also check the response status.
resp, err := http.Get("http://google.com/")
if err != nil {
    print(err.Error())
} else {
    defer resp.Body.Close()
    // resp.Status already includes the code, e.g. "200 OK"
    fmt.Println(resp.StatusCode, resp.Status)
}
You can change the default timeout by initializing an http.Client.
timeout := 1 * time.Second
client := http.Client{
    Timeout: timeout,
}
resp, err := client.Get("http://google.com")
Bonus:
Go generally does not rely on exceptions, and the built-in libraries generally return an error as a second value rather than panicking.
See Why does Go not have exceptions?.
You can assume that something very bad has happened if a call into the standard library panics.
You can make a HEAD request:
package main

import "net/http"

func head(s string) bool {
    r, e := http.Head(s)
    return e == nil && r.StatusCode == 200
}

func main() {
    b := head("https://stackoverflow.com")
    println(b)
}
https://golang.org/pkg/net/http#Head
If you don't care about the specific port, use http.Get(web):
package main

import (
    "fmt"
    "net/http"
    "os"
)

func main() {
    web := os.Args[1]
    fmt.Println(webIsReachable(web))
}

func webIsReachable(web string) bool {
    response, err := http.Get(web)
    if err != nil {
        // Distinguish "this site is down" from "we have no connectivity at all".
        if _, netErr := http.Get("https://www.google.com"); netErr != nil {
            fmt.Fprintf(os.Stderr, "no internet\n")
            os.Exit(1)
        }
        return false
    }
    defer response.Body.Close()
    return response.StatusCode == http.StatusOK
}
I am trying to pass an additional parameter in the request I send to the Go server:
websocket.create_connection("ws://<ip>:port/x/y?token=qwerty")
The Go server implementation is as follows -
func main() {
    err := config.Parse()
    if err != nil {
        glog.Error(err)
        os.Exit(1)
        return
    }
    flag.Parse()
    defer glog.Flush()

    router := mux.NewRouter()
    http.Handle("/", httpInterceptor(router))
    router.Handle("/v1/x", common.ErrorHandler(stats.GetS)).Methods("GET")
    router.Handle("/v1/x/y", common.ErrorHandler(stats.GetS)).Methods("GET")

    var listen = fmt.Sprintf("%s:%d", config.Config.Ip, config.Config.Port)
    err = http.ListenAndServe(listen, nil)
    if err != nil {
        glog.Error(err)
        os.Exit(1)
    }
}

func httpInterceptor(router http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        startTime := time.Now()

        if !auth.Auth(w, req) {
            http.Error(w, "Failed authentication", 401)
            return
        }

        router.ServeHTTP(w, req)

        finishTime := time.Now()
        elapsedTime := finishTime.Sub(startTime)
        _ = elapsedTime // per-method timing/metrics elided

        switch req.Method {
        case "GET":
        case "POST":
        }
    })
}
How should I look for and parse the token in the Go server so that the authentication succeeds?
Library function
func ParseFromRequest(req *http.Request, keyFunc Keyfunc) (token *Token, err error) {
    // Look for an Authorization header
    if ah := req.Header.Get("Authorization"); ah != "" {
        // Should be a bearer token
        if len(ah) > 6 && strings.ToUpper(ah[0:6]) == "BEARER" {
            return Parse(ah[7:], keyFunc)
        }
    }

    // Look for "access_token" parameter
    req.ParseMultipartForm(10e6)
    if tokStr := req.Form.Get("access_token"); tokStr != "" {
        return Parse(tokStr, keyFunc)
    }

    return nil, ErrNoTokenInRequest
}
Call FormValue to get a query parameter:
token := req.FormValue("token")
where req is the *http.Request.
An alternative is to call ParseForm and access req.Form directly:
if err := req.ParseForm(); err != nil {
    // handle error
}
token := req.Form.Get("token")
The OP asks in a comment how to map "token" to "access_token" for an external package that's looking for "access_token". Execute this code before calling the external package:
if err := req.ParseForm(); err != nil {
    // handle error
}
req.Form["access_token"] = req.Form["token"]
When the external package calls req.Form.Get("access_token"), it will get the same value as the "token" parameter.
It depends on where the token is coming from: the form body or the URL.
The first answer works if the token is sent in the form; if it is in the URL query string, I would suggest this instead (it works for me):
token := req.URL.Query().Get("token")
With gorilla/mux, mux.Vars(r)["token"] returns a route variable declared in the path pattern (for example /v1/x/{token}), not a query-string parameter; for ?token=... use one of the approaches above. A sketch contrasting the two follows.
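A minimal, self-contained sketch (the route and handler here are made up for illustration):

package main

import (
    "fmt"
    "net/http"

    "github.com/gorilla/mux"
)

func main() {
    r := mux.NewRouter()
    // {token} is a route variable; ?token=... is a query parameter.
    r.HandleFunc("/v1/x/{token}", func(w http.ResponseWriter, req *http.Request) {
        pathTok := mux.Vars(req)["token"]        // from the {token} path segment
        queryTok := req.URL.Query().Get("token") // from the query string
        fmt.Fprintf(w, "path=%s query=%s\n", pathTok, queryTok)
    }).Methods("GET")
    http.ListenAndServe(":8080", r)
}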
I'd like to log 301s vs 302s, but I can't see a way to read the response status code in Client.Do, Get, doFollowingRedirects, or CheckRedirect. Will I have to implement redirection myself to achieve this?
The http.Client type allows you to specify a custom transport, which should allow you to do what you're after. Something like the following should do:
type LogRedirects struct {
    Transport http.RoundTripper
}

func (l LogRedirects) RoundTrip(req *http.Request) (resp *http.Response, err error) {
    t := l.Transport
    if t == nil {
        t = http.DefaultTransport
    }
    resp, err = t.RoundTrip(req)
    if err != nil {
        return
    }
    switch resp.StatusCode {
    case http.StatusMovedPermanently, http.StatusFound, http.StatusSeeOther, http.StatusTemporaryRedirect:
        log.Println("Request for", req.URL, "redirected with status", resp.StatusCode)
    }
    return
}
(you could simplify this a little if you only support chaining to the default transport).
You can then create a client using this transport, and any redirects should be logged:
client := &http.Client{Transport: LogRedirects{}}
Here is a full example you can experiment with: http://play.golang.org/p/8uf8Cn31HC
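If you are on Go 1.8 or later, a Client.CheckRedirect hook is another option, because each redirected request carries the response that triggered it in req.Response. A minimal sketch (not from the original answer):

package main

import (
    "log"
    "net/http"
)

func main() {
    client := &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            // req.Response is the redirect response that caused this request.
            log.Println("redirected with status", req.Response.StatusCode, "to", req.URL)
            return nil // nil means: keep following redirects as usual
        },
    }
    resp, err := client.Get("http://google.com")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("final status:", resp.Status)
}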
What I'm doing is fairly straightforward. I need to create a "proxy" server that is very minimal and fast. Currently I have a baseline server that is proxied to (Node.js) and a proxy service (Go). Please excuse the lack of actual proxying; I'm just testing for now.
Baseline Service
var http = require('http');
http.createServer(function (req, res) {
    // console.log("received request");
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(8080, '127.0.0.1');
console.log('Server running at http://127.0.0.1:8080/');
Proxy Service
package main

import (
    "flag"
    "log"
    "net/http"
    "net/url"
)

var (
    listen = flag.String("listen", "0.0.0.0:9000", "listen on address")
    logp   = flag.Bool("log", false, "enable logging")
)

func main() {
    flag.Parse()
    proxyHandler := http.HandlerFunc(proxyHandlerFunc)
    log.Fatal(http.ListenAndServe(*listen, proxyHandler))
    log.Println("Started router-server on 0.0.0.0:9000")
}

func proxyHandlerFunc(w http.ResponseWriter, r *http.Request) {
    // Log if requested
    if *logp {
        log.Println(r.URL)
    }

    /*
     * Tweak the request as appropriate:
     * - RequestURI may not be sent to client
     * - Set new URL
     */
    r.RequestURI = ""
    u, err := url.Parse("http://localhost:8080/")
    if err != nil {
        log.Fatal(err)
    }
    r.URL = u

    // And proxy
    // resp, err := client.Do(r)
    c := make(chan *http.Response)
    go doRequest(c)
    resp := <-c
    if resp != nil {
        err := resp.Write(w)
        if err != nil {
            log.Println("Error writing response")
        } else {
            resp.Body.Close()
        }
    }
}

func doRequest(c chan *http.Response) {
    // new client for every request.
    client := &http.Client{}
    resp, err := client.Get("http://127.0.0.1:8080/test")
    if err != nil {
        log.Println(err)
        c <- nil
    } else {
        c <- resp
    }
}
My issue, as mentioned in the title, is that I am getting errors like "2013/10/28 21:22:30 Get http://127.0.0.1:8080/test: dial tcp 127.0.0.1:8080: can't assign requested address" from the doRequest function, and I have no clue why. Googling this particular error yields seemingly irrelevant results.
There are two major problems with this code.
You are not handling the client stalling or using keep-alives (handled below by getTimeoutServer).
You are not handling the server your http.Client talks to timing out (handled below by TimeoutConn).
This is probably why you are exhausting your local ports; I know from past experience that node.js keeps connections alive very aggressively.
There are also lots of smaller issues: objects are created on every request when they don't need to be, and unneeded goroutines are spawned (each incoming request already runs in its own goroutine before you handle it).
Here is a quick stab (which I haven't had time to test well); hopefully it puts you on the right track. (You will want to upgrade this so it doesn't buffer responses locally.)
package main

import (
    "bytes"
    "errors"
    "flag"
    "fmt"
    "log"
    "net"
    "net/http"
    "net/url"
    "runtime"
    "strconv"
    "time"
)

const DEFAULT_IDLE_TIMEOUT = 5 * time.Second

var (
    listen       string
    logOn        bool
    localhost, _ = url.Parse("http://localhost:8080/")
    client       = &http.Client{
        Transport: &http.Transport{
            Proxy: NoProxyAllowed,
            Dial: func(network, addr string) (net.Conn, error) {
                return NewTimeoutConnDial(network, addr, DEFAULT_IDLE_TIMEOUT)
            },
        },
    }
)

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())
    flag.StringVar(&listen, "listen", "0.0.0.0:9000", "listen on address")
    flag.BoolVar(&logOn, "log", true, "enable logging")
    flag.Parse()

    server := getTimeoutServer(listen, http.HandlerFunc(proxyHandlerFunc))
    log.Printf("Starting router-server on %s\n", listen)
    log.Fatal(server.ListenAndServe())
}

func proxyHandlerFunc(w http.ResponseWriter, req *http.Request) {
    if logOn {
        log.Printf("%+v\n", req)
    }

    // Setup request URL
    origURL := req.URL
    req.URL = new(url.URL)
    *req.URL = *localhost
    req.URL.Path, req.URL.RawQuery, req.URL.Fragment = origURL.Path, origURL.RawQuery, origURL.Fragment
    req.RequestURI, req.Host = "", req.URL.Host

    // Perform request
    resp, err := client.Do(req)
    if err != nil {
        w.WriteHeader(http.StatusBadGateway)
        w.Write([]byte(fmt.Sprintf("%d - StatusBadGateway: %s", http.StatusBadGateway, err)))
        return
    }
    defer resp.Body.Close()

    var respBuffer *bytes.Buffer
    if resp.ContentLength != -1 {
        respBuffer = bytes.NewBuffer(make([]byte, 0, resp.ContentLength))
    } else {
        respBuffer = new(bytes.Buffer)
    }
    if _, err = respBuffer.ReadFrom(resp.Body); err != nil {
        w.WriteHeader(http.StatusBadGateway)
        w.Write([]byte(fmt.Sprintf("%d - StatusBadGateway: %s", http.StatusBadGateway, err)))
        return
    }

    // Write result of request
    headers := w.Header()
    var key string
    var val []string
    for key, val = range resp.Header {
        headers[key] = val
    }
    headers.Set("Content-Length", strconv.Itoa(respBuffer.Len()))
    w.WriteHeader(resp.StatusCode)
    w.Write(respBuffer.Bytes())
}

func getTimeoutServer(addr string, handler http.Handler) *http.Server {
    // keeps people who are slow or are sending keep-alives from eating all our sockets
    const (
        HTTP_READ_TO  = DEFAULT_IDLE_TIMEOUT
        HTTP_WRITE_TO = DEFAULT_IDLE_TIMEOUT
    )
    return &http.Server{
        Addr:         addr,
        Handler:      handler,
        ReadTimeout:  HTTP_READ_TO,
        WriteTimeout: HTTP_WRITE_TO,
    }
}

func NoProxyAllowed(request *http.Request) (*url.URL, error) {
    return nil, nil
}

// TimeoutConn-------------------------
// Put me in my own TimeoutConn.go ?
type TimeoutConn struct {
    net.Conn
    readTimeout, writeTimeout time.Duration
}

var invalidOperationError = errors.New("TimeoutConn does not support or allow .SetDeadline operations")

func NewTimeoutConn(conn net.Conn, ioTimeout time.Duration) (*TimeoutConn, error) {
    return NewTimeoutConnReadWriteTO(conn, ioTimeout, ioTimeout)
}

func NewTimeoutConnReadWriteTO(conn net.Conn, readTimeout, writeTimeout time.Duration) (*TimeoutConn, error) {
    this := &TimeoutConn{
        Conn:         conn,
        readTimeout:  readTimeout,
        writeTimeout: writeTimeout,
    }
    now := time.Now()
    err := this.Conn.SetReadDeadline(now.Add(this.readTimeout))
    if err != nil {
        return nil, err
    }
    err = this.Conn.SetWriteDeadline(now.Add(this.writeTimeout))
    if err != nil {
        return nil, err
    }
    return this, nil
}

func NewTimeoutConnDial(network, addr string, ioTimeout time.Duration) (net.Conn, error) {
    conn, err := net.DialTimeout(network, addr, ioTimeout)
    if err != nil {
        return nil, err
    }
    if conn, err = NewTimeoutConn(conn, ioTimeout); err != nil {
        return nil, err
    }
    return conn, nil
}

func (this *TimeoutConn) Read(data []byte) (int, error) {
    this.Conn.SetReadDeadline(time.Now().Add(this.readTimeout))
    return this.Conn.Read(data)
}

func (this *TimeoutConn) Write(data []byte) (int, error) {
    this.Conn.SetWriteDeadline(time.Now().Add(this.writeTimeout))
    return this.Conn.Write(data)
}

func (this *TimeoutConn) SetDeadline(time time.Time) error {
    return invalidOperationError
}

func (this *TimeoutConn) SetReadDeadline(time time.Time) error {
    return invalidOperationError
}

func (this *TimeoutConn) SetWriteDeadline(time time.Time) error {
    return invalidOperationError
}
We ran into this too, and after a lot of time spent debugging I came across this: https://code.google.com/p/go/source/detail?r=d4e1ec84876c
This shifts the burden onto clients to read their whole response
bodies if they want the advantage of reusing TCP connections.
So be sure to read the entire body before closing; there are a couple of ways to do it. This helper can come in handy: it shows you whether you have this issue by logging any extra bytes that were left unread, and it drains the stream for you so the connection can be reused:
func closeResponse(response *http.Response) error {
    // ensure we read the entire body
    bs, err2 := ioutil.ReadAll(response.Body)
    if err2 != nil {
        log.Println("Error during ReadAll!!", err2)
    }
    if len(bs) > 0 {
        log.Println("Had to read some bytes, not good!", bs, string(bs))
    }
    return response.Body.Close()
}
Or if you really don't care about the body, you can just discard it with this:
io.Copy(ioutil.Discard, response.Body)
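Putting the two together, a helper like the following drains and closes in one place; this is a sketch, and the function name is mine (client is whatever *http.Client you already use):

// getAndDiscard performs a GET and makes sure the body is fully drained and
// closed, so the Transport can put the TCP connection back into its
// keep-alive pool.
func getAndDiscard(client *http.Client, url string) error {
    resp, err := client.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    // Read and throw away anything left in the body before closing.
    _, err = io.Copy(ioutil.Discard, resp.Body)
    return err
}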
I encountered this problem too; adding DisableKeepAlives: true to the http.Transport fixed the issue for me, so you could give it a try.
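A minimal sketch of that workaround:

client := &http.Client{
    Transport: &http.Transport{
        // Close each connection after its request instead of pooling it.
        DisableKeepAlives: true,
    },
}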
I came here while running a massive number of SQL queries per second over a long period of time, on a system that did not limit the number of idle connections. As pointed out in this issue comment on GitHub, explicitly setting db.SetMaxIdleConns(5) completely solved my problem.
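A minimal sketch of that tuning; the driver and DSN below are placeholders, not from the original comment:

package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql" // any driver works; this one is just an example
)

func main() {
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/dbname") // hypothetical DSN
    if err != nil {
        log.Fatal(err)
    }
    // Keep a small pool of idle connections instead of opening a new
    // connection (and consuming a new local port) for every query.
    db.SetMaxIdleConns(5)
}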