The goal was to add custom headers to the reverse proxy. I found a way to do it using ModifyResponse, and added the 3 response headers shown below.
Go version:
go version go1.8 linux/amd64
package middleware
import (
"context"
"net/http"
"net/http/httputil"
"net/url"
"time"
"github.com/Sirupsen/logrus"
"github.com/felixge/httpsnoop"
)
var customHeader1 string
var customHeader2 string
var customHeader3 string
var Log = logrus.New()
func UpdateResponse(r *http.Response) error {
r.Header.Set("Custom-Header1", customHeader1)
r.Header.Set("Custom-Header2", customHeader2)
r.Header.Set("Custom-Header3", customHeader3)
return nil
}
func ReverseProxy(w http.ResponseWriter, r *http.Request, serverURL *url.URL, path string) {
proxy := httputil.NewSingleHostReverseProxy(serverURL)
ctx, cancel := context.WithTimeout(context.Background(), 65*time.Second)
defer cancel()
go func() {
<-ctx.Done()
Log.Error(ctx.Err()) // logs "context deadline exceeded" on timeout, or "context canceled" once the handler returns
}()
proxy.Director = func(req *http.Request) {
targetQuery := r.URL.RawQuery
req.Host = serverURL.Host
req.URL.Scheme = serverURL.Scheme
req.URL.Host = serverURL.Host
req.URL.Path = path
req.URL.RawQuery = targetQuery //+ "&" + req.URL.RawQuery
if _, ok := req.Header["User-Agent"]; !ok {
req.Header.Set("User-Agent", "")
}
}
// Read the incoming header values so they can be logged.
customHeader1 = r.Header.Get("Custom-Header1")
customHeader2 = r.Header.Get("Custom-Header2")
customHeader3 = r.Header.Get("Custom-Header3")
uid := r.Header.Get("Resource-Owner-Uid")
Log.Printf("---daily--- %s =-= %s", customHeader1, uid)
Log.Printf("---before daily--- %s =-= %s", customHeader3, uid)
proxy.ModifyResponse = UpdateResponse
Log.Printf("---after-daily--- %s =-= %s", customHeader1, uid)
Log.Printf("---after-daily--- %s =-= %s", customHeader3, uid)
// If the server URL already has a path, prepend it.
if len(serverURL.Path) > 0 {
path = serverURL.Path + path
}
r.URL.Path = path
Log.Formatter = &logrus.JSONFormatter{}
r = r.WithContext(ctx)
m := httpsnoop.CaptureMetrics(proxy, w, r)
Log.Printf(
"host=%s remoteaddr=%s url=%s rawquery=%s Method=%s URL=%s resourceOwnerID=%s resourceOwnerType=%s resourceOwnerUID=%s (StatusCode=%d duration=%s written=%d) customheader1=%s customheader3=%s respcustomheader1=%s respcustomheader3=%s",
r.Host,
r.RemoteAddr,
r.URL.Path,
r.URL.RawQuery,
r.Method,
r.URL,
r.Header["Resource-Owner-Id"],
r.Header["Resource-Owner-Type"],
r.Header["Resource-Owner-Uid"],
m.Code,
m.Duration,
m.Written,
r.Header["Custom-Header1"],
r.Header["Custom-Header3"],
w.Header().Get("Custom-Header1"),
w.Header().Get("Custom-Header3"),
)
}
// Log
{"level":"info","msg":"---daily--- false =-= uid","time":""}
{"level":"info","msg":"---daily --- false =-= uid","time":""}
{"level":"info","msg":"---after-daily--- false =-= uid","time":""}
{"level":"info","msg":"---after-daily --- false =-= uid","time":""}
time="" level=info msg="host= remoteaddr= url= rawquery= Method=POST URL= resourceOwnerID=[] resourceOwnerType=[] resourceOwnerUID=[] (StatusCode=200 duration=26.098272ms written=78) customheader1=[false] customheader3=[false] respcustomheader1=false respcustomheader3=true"
Expected was respcustomheader3=false.
As you can see in the log, the custom printed logs show false, but the value printed inside Log.Printf is true. This value is set by the backend server and sent back to the reverse proxy in a response header, and customheader1 is false on the backend servers too. Even the same data I'm storing in the DB reads back as false.
If I retry, respcustomheader3 is false.
Is there anything I'm doing wrong? Is my approach correct? Any guidance would be helpful.
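One way to rule out a shared-state race is to read the backend's header value inside ModifyResponse itself, where the response is handed to you directly, instead of reading w.Header() or package-level variables afterwards. A minimal, self-contained sketch (the httptest backend and the "false" value are assumptions for illustration):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical backend that sets the header the proxy wants to observe.
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Custom-Header3", "false")
	}))
	defer backend.Close()

	target, err := url.Parse(backend.URL)
	if err != nil {
		panic(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	// ModifyResponse runs once per proxied response, so the value seen here
	// is exactly what the backend sent for this request.
	proxy.ModifyResponse = func(resp *http.Response) error {
		fmt.Println("backend sent Custom-Header3 =", resp.Header.Get("Custom-Header3"))
		return nil
	}

	front := httptest.NewServer(proxy)
	defer front.Close()

	resp, err := http.Get(front.URL)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```

Logging inside ModifyResponse ties the logged value to the specific response, rather than to whatever a shared variable happened to hold when the log line ran.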
Related
When setting up a gRPC filter:
func GetChainUnaryServerInterceptor() grpc.UnaryServerInterceptor {
return grpc_middleware.ChainUnaryServer(
grpc_auth.UnaryServerInterceptor(auth.CookieAuth),
parseSessionToUidFilter,
)
}
func parseSessionToUidFilter(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp interface{}, err error) {
ctx = metadata.NewOutgoingContext(ctx, metadata.Pairs("uid", "123"))
return handler(ctx, req)
}
In the server, in Echo():
func (s *server) Echo(ctx context.Context, req *pb.EchoRequest) (resp *pb.EchoReply, err error) {
md, _ := metadata.FromIncomingContext(ctx)
fmt.Println(md)
u := md.Get("uid")[0]
username := u
if username == "" {
username = "whoever you are"
}
return &pb.EchoReply{Echo: "Hello, " + username}, nil
}
ctx detail in debug mode:
As you can see, uid is not listed with the grpc-... keys above.
Now I've figured it out: I should use NewIncomingContext() in the filter.
But how do I set uid with the mdIncomingKey above, following the grpcgateway-* pattern (e.g. grpcgateway-uid)? Do I have to rewrite the incomingHeaderMatcher function when booting my grpc-gateway?
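For the last question: grpc-gateway lets you supply your own incoming-header matcher when you build the ServeMux, via runtime.WithIncomingHeaderMatcher. Below is a sketch of such a matcher on its own (the "Uid" HTTP header name and the pass-through metadata key are assumptions for illustration); the wiring into grpc-gateway is shown only in the trailing comment since it needs the grpc-gateway dependency:

```go
package main

import (
	"fmt"
	"net/textproto"
)

// customHeaderMatcher decides which incoming HTTP headers are forwarded as
// gRPC metadata, and under what key. Here the hypothetical "Uid" header is
// passed through under the plain key "uid" instead of the default
// "grpcgateway-uid" prefix; everything else is dropped. A real matcher would
// usually fall back to runtime.DefaultHeaderMatcher(key) for other headers.
func customHeaderMatcher(key string) (string, bool) {
	switch textproto.CanonicalMIMEHeaderKey(key) {
	case "Uid":
		return "uid", true
	}
	return "", false
}

func main() {
	k, ok := customHeaderMatcher("uid")
	fmt.Println(k, ok)
	// Wiring it up when booting the gateway (not compiled here):
	//   mux := runtime.NewServeMux(runtime.WithIncomingHeaderMatcher(customHeaderMatcher))
}
```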
I want to write an HTTP proxy with authentication in Go, but I couldn't find any example.
Here is what I tried, but it didn't work (I get "Error parsing basic auth"):
server := &http.Server{
Addr: "0.0.0.0:8080",
ReadTimeout: 15 * time.Second,
WriteTimeout: 15 * time.Second,
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
b, err := httputil.DumpRequest(r, true)
if err == nil {
fmt.Println("dump", string(b))
} else {
fmt.Println("dump error", err)
}
u, p, ok := r.BasicAuth()
if !ok {
fmt.Println("Error parsing basic auth")
w.WriteHeader(401)
return
}
if u != "USERNAME" {
fmt.Printf("Username provided is incorrect: %s\n", u)
w.WriteHeader(401)
return
}
if p != "PASSWORD" {
fmt.Printf("Password provided is incorrect for user: %s\n", u)
w.WriteHeader(401)
return
}
if r.Method == http.MethodConnect {
handleTunneling(w, r)
} else {
handleHTTP(w, r)
}
}),
// Disable HTTP/2.
TLSNextProto: make(map[string]func(*http.Server, *tls.Conn, http.Handler)),
}
log.Fatal(server.ListenAndServe())
I tested the application with Firefox and FoxyProxy.
HTTP Authentication has two headers for providing authentication information: Authorization and Proxy-Authorization.
Authorization header:
The "Authorization" header field allows a user agent to authenticate
itself with an origin server -- usually, but not necessarily, after
receiving a 401 (Unauthorized) response. Its value consists of
credentials containing the authentication information of the user
agent for the realm of the resource being requested.
Proxy-Authorization:
The "Proxy-Authorization" header field allows the client to identify
itself (or its user) to a proxy that requires authentication. Its
value consists of credentials containing the authentication
information of the client for the proxy and/or realm of the resource
being requested.
Request.BasicAuth() is for "Authorization" header, not for "Proxy-Authorization" header.
BasicAuth returns the username and password provided in the request's Authorization header, if the request uses HTTP Basic Authentication. See RFC 2617, Section 2.
For parsing the "Proxy-Authorization" header, you can copy the parseBasicAuth() function from net/http's request.go:
func ProxyBasicAuth(header http.Header) (username, password string, ok bool) {
auth := header.Get("Proxy-Authorization")
if auth == "" {
return "", "", false
}
return parseBasicAuth(auth)
}
// parseBasicAuth parses an HTTP Basic Authentication string.
// "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==" returns ("Aladdin", "open sesame", true).
func parseBasicAuth(auth string) (username, password string, ok bool) {
const prefix = "Basic "
// Case insensitive prefix match. See Issue 22736.
if len(auth) < len(prefix) || !asciiEqualFold(auth[:len(prefix)], prefix) {
return "", "", false
}
c, err := base64.StdEncoding.DecodeString(auth[len(prefix):])
if err != nil {
return "", "", false
}
cs := string(c)
username, password, ok = strings.Cut(cs, ":")
if !ok {
return "", "", false
}
return username, password, true
}
// asciiEqualFold is strings.EqualFold, ASCII only. It reports whether s and t
// are equal, ASCII-case-insensitively.
func asciiEqualFold(s, t string) bool {
if len(s) != len(t) {
return false
}
for i := 0; i < len(s); i++ {
if asciiLower(s[i]) != asciiLower(t[i]) {
return false
}
}
return true
}
// asciiLower returns the ASCII lowercase version of b.
func asciiLower(b byte) byte {
if 'A' <= b && b <= 'Z' {
return b + ('a' - 'A')
}
return b
}
Here is the source code with minor changes. You can also check out packages like elazarl/goproxy and snail007/goproxy.
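A compact, self-contained version of the same idea can be exercised like this (proxyBasicAuth is a hypothetical helper name, and strings.Cut requires Go 1.18+):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/http"
	"strings"
)

// proxyBasicAuth parses Basic credentials out of the Proxy-Authorization
// header, which is what browsers send to a proxy, instead of Authorization,
// which Request.BasicAuth() looks at.
func proxyBasicAuth(h http.Header) (user, pass string, ok bool) {
	auth := h.Get("Proxy-Authorization")
	const prefix = "Basic "
	// Case-insensitive prefix match, mirroring net/http's behavior.
	if len(auth) < len(prefix) || !strings.EqualFold(auth[:len(prefix)], prefix) {
		return "", "", false
	}
	c, err := base64.StdEncoding.DecodeString(auth[len(prefix):])
	if err != nil {
		return "", "", false
	}
	user, pass, ok = strings.Cut(string(c), ":")
	return user, pass, ok
}

func main() {
	h := http.Header{}
	h.Set("Proxy-Authorization", "Basic "+base64.StdEncoding.EncodeToString([]byte("USERNAME:PASSWORD")))
	fmt.Println(proxyBasicAuth(h))
}
```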
I want to create a simple script that checks whether a certain hostname:port is live. I only want a bool response telling me whether the URL is reachable, but I'm not sure there's a straightforward way of doing it.
If you only want to see if a URL is reachable, you can use net.DialTimeout. Like this:
timeout := 1 * time.Second
conn, err := net.DialTimeout("tcp", "mysite:myport", timeout)
if err != nil {
log.Println("Site unreachable, error:", err)
} else {
conn.Close()
}
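Wrapped into the bool-returning helper the question asks for (isReachable is a hypothetical name), exercised against a throwaway local listener so the check has something to hit:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// isReachable reports whether a TCP connection to hostport can be
// established within timeout.
func isReachable(hostport string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", hostport, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Listen on an ephemeral local port purely for demonstration.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println(isReachable(ln.Addr().String(), time.Second))
}
```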
If you want to check if a Web server answers on a certain URL, you can invoke an HTTP GET request using net/http.
You will get a timeout if the server doesn't respond at all. You might also check the response status.
resp, err := http.Get("http://google.com/")
if err != nil {
print(err.Error())
} else {
print(resp.Status) // resp.Status already includes the code, e.g. "200 OK"; string(resp.StatusCode) would yield a rune, not digits
}
You can change the default timeout by initializing an http.Client.
timeout := 1 * time.Second
client := http.Client{
Timeout: timeout,
}
resp, err := client.Get("http://google.com")
Bonus:
Go generally does not rely on exceptions, and the built-in libraries generally do not panic; they return an error as a second value.
See Why does Go not have exceptions?.
You can assume that something very bad happened if a call to a standard-library function panics.
You can make a HEAD request:
package main
import "net/http"
func head(s string) bool {
r, e := http.Head(s)
if e != nil {
return false
}
r.Body.Close() // close the body so the connection can be reused
return r.StatusCode == 200
}
func main() {
b := head("https://stackoverflow.com")
println(b)
}
https://golang.org/pkg/net/http#Head
If you don't care about the port, use http.Get(web):
package main
import (
"fmt"
"net/http"
"os"
)
func main() {
web := os.Args[1]
fmt.Println(webIsReachable(web))
}
func webIsReachable(web string) bool {
response, err := http.Get(web)
if err != nil {
// Distinguish "this site is down" from "we have no internet at all".
_, netErr := http.Get("https://www.google.com")
if netErr != nil {
fmt.Fprintf(os.Stderr, "no internet\n")
os.Exit(1)
}
return false
}
defer response.Body.Close()
return response.StatusCode == 200
}
I found the fasthttp godoc as follows:
func Get
func Get(dst []byte, url string) (statusCode int, body []byte, err error)
Get appends url contents to dst and returns it as body.
The function follows redirects. Use Do* for manually handling redirects.
New body buffer is allocated if dst is nil.
But when I run the following code,
package main
import (
"fmt"
fh "github.com/valyala/fasthttp"
)
func main() {
url := "https://www.okcoin.cn/api/v1/ticker.do?symbol=btc_cny"
dst := []byte("ok100")
_, body, err := fh.Get(dst, url)
if err != nil {
fmt.Println(err)
}
fmt.Println("body:", string(body))
fmt.Println("dst:", string(dst))
}
body does not contain "ok100", and dst is still "ok100".
why?
Looking at where it is used in fasthttp's client.go, func clientGetURLDeadlineFreeConn (line 672), you can see that if there is a timeout, dst's contents are copied to body at line 712. So based on what I read in the code (and debugged with Delve using your code), dst doesn't get updated in this usage. It seems it can be used to provide default content for body in the event of a timeout; it's probably worth a direct question to fasthttp's author for more detail.
func clientGetURLDeadlineFreeConn(dst []byte, url string, deadline time.Time, c clientDoer) (statusCode int, body []byte, err error) {
timeout := -time.Since(deadline)
if timeout <= 0 {
return 0, dst, ErrTimeout
}
var ch chan clientURLResponse
chv := clientURLResponseChPool.Get()
if chv == nil {
chv = make(chan clientURLResponse, 1)
}
ch = chv.(chan clientURLResponse)
req := AcquireRequest()
// Note that the request continues execution on ErrTimeout until
// client-specific ReadTimeout exceeds. This helps limiting load
// on slow hosts by MaxConns* concurrent requests.
//
// Without this 'hack' the load on slow host could exceed MaxConns*
// concurrent requests, since timed out requests on client side
// usually continue execution on the host.
go func() {
statusCodeCopy, bodyCopy, errCopy := doRequestFollowRedirects(req, dst, url, c)
ch <- clientURLResponse{
statusCode: statusCodeCopy,
body: bodyCopy,
err: errCopy,
}
}()
tc := acquireTimer(timeout)
select {
case resp := <-ch:
ReleaseRequest(req)
clientURLResponseChPool.Put(chv)
statusCode = resp.statusCode
body = resp.body
err = resp.err
case <-tc.C:
body = dst
err = ErrTimeout
}
releaseTimer(tc)
return statusCode, body, err
}
In client.go func doRequestFollowRedirects (line 743) it is used at line 748: bodyBuf.B = dst
func doRequestFollowRedirects(req *Request, dst []byte, url string, c clientDoer) (statusCode int, body []byte, err error) {
resp := AcquireResponse()
bodyBuf := resp.bodyBuffer()
resp.keepBodyBuffer = true
oldBody := bodyBuf.B
bodyBuf.B = dst
redirectsCount := 0
for {
req.parsedURI = false
req.Header.host = req.Header.host[:0]
req.SetRequestURI(url)
if err = c.Do(req, resp); err != nil {
break
}
statusCode = resp.Header.StatusCode()
if statusCode != StatusMovedPermanently && statusCode != StatusFound && statusCode != StatusSeeOther {
break
}
redirectsCount++
if redirectsCount > maxRedirectsCount {
err = errTooManyRedirects
break
}
location := resp.Header.peek(strLocation)
if len(location) == 0 {
err = errMissingLocation
break
}
url = getRedirectURL(url, location)
}
body = bodyBuf.B
bodyBuf.B = oldBody
resp.keepBodyBuffer = false
ReleaseResponse(resp)
return statusCode, body, err
}
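Why dst itself never changes in the caller can also be seen with plain slice semantics: the callee receives a copy of the slice header, and append reallocates once capacity runs out, so the caller's slice is untouched. (Why body doesn't keep the "ok100" prefix is a separate matter, presumably because fasthttp resets the response's body buffer before reading.) A minimal illustration, with grow standing in for what the library does internally:

```go
package main

import "fmt"

// grow receives a copy of the caller's slice header. The append below needs
// more room than dst's capacity, so it reallocates and returns a new slice;
// the caller's dst still points at the original 5 bytes.
func grow(dst []byte) []byte {
	return append(dst, " plus response data"...)
}

func main() {
	dst := []byte("ok100")
	body := grow(dst)
	fmt.Println("dst:", string(dst))
	fmt.Println("body:", string(body))
}
```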
What I'm doing is fairly straightforward. I need to create a "proxy" server that is very minimal and fast. Currently I have a baseline server that is proxied to (Node.js) and a proxy service (Go). Please excuse the lack of actual proxying; I'm just testing for now.
Baseline Service
var http = require('http');
http.createServer(function (req, res) {
// console.log("received request");
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(8080, '127.0.0.1');
console.log('Server running at http://127.0.0.1:8080/');
Proxy Service
package main
import (
"flag"
"log"
"net/http"
"net/url"
)
var (
listen = flag.String("listen", "0.0.0.0:9000", "listen on address")
logp = flag.Bool("log", false, "enable logging")
)
func main() {
flag.Parse()
proxyHandler := http.HandlerFunc(proxyHandlerFunc)
// Log before serving: log.Fatal blocks, so a line after it would never run.
log.Println("Starting router-server on", *listen)
log.Fatal(http.ListenAndServe(*listen, proxyHandler))
}
func proxyHandlerFunc(w http.ResponseWriter, r *http.Request) {
// Log if requested
if *logp {
log.Println(r.URL)
}
/*
* Tweak the request as appropriate:
* - RequestURI may not be sent to client
* - Set new URL
*/
r.RequestURI = ""
u, err := url.Parse("http://localhost:8080/")
if err != nil {
log.Fatal(err)
}
r.URL = u
// And proxy
// resp, err := client.Do(r)
c := make(chan *http.Response)
go doRequest(c)
resp := <-c
if resp != nil {
err := resp.Write(w)
if err != nil {
log.Println("Error writing response")
} else {
resp.Body.Close()
}
}
}
func doRequest(c chan *http.Response) {
// new client for every request.
client := &http.Client{}
resp, err := client.Get("http://127.0.0.1:8080/test")
if err != nil {
log.Println(err)
c <- nil
} else {
c <- resp
}
}
My issue, as mentioned in the title, is that I am getting errors stating 2013/10/28 21:22:30 Get http://127.0.0.1:8080/test: dial tcp 127.0.0.1:8080: can't assign requested address from the doRequest function, and I have no clue why. Googling this particular error yields seemingly irrelevant results.
There are two major problems with this code.
You are not handling the client stalling or using keep-alives (handled below by getTimeoutServer).
You are not handling the server (what your http.Client is talking to) timing out (handled below by TimeoutConn).
This is probably why you are exhausting your local ports. I know from past experience that node.js will keep-alive you very aggressively.
There are also lots of little issues: creating objects every time when you don't need to, and creating unneeded goroutines (each incoming request is already in its own goroutine before you handle it).
Here is a quick stab that I haven't had time to test well. Hopefully it will put you on the right track. (You will want to upgrade this to not buffer the responses locally.)
package main
import (
"bytes"
"errors"
"flag"
"fmt"
"log"
"net"
"net/http"
"net/url"
"runtime"
"strconv"
"time"
)
const DEFAULT_IDLE_TIMEOUT = 5 * time.Second
var (
listen string
logOn bool
localhost, _ = url.Parse("http://localhost:8080/")
client = &http.Client{
Transport: &http.Transport{
Proxy: NoProxyAllowed,
Dial: func(network, addr string) (net.Conn, error) {
return NewTimeoutConnDial(network, addr, DEFAULT_IDLE_TIMEOUT)
},
},
}
)
func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
flag.StringVar(&listen, "listen", "0.0.0.0:9000", "listen on address")
flag.BoolVar(&logOn, "log", true, "enable logging")
flag.Parse()
server := getTimeoutServer(listen, http.HandlerFunc(proxyHandlerFunc))
log.Printf("Starting router-server on %s\n", listen)
log.Fatal(server.ListenAndServe())
}
func proxyHandlerFunc(w http.ResponseWriter, req *http.Request) {
if logOn {
log.Printf("%+v\n", req)
}
// Setup request URL
origURL := req.URL
req.URL = new(url.URL)
*req.URL = *localhost
req.URL.Path, req.URL.RawQuery, req.URL.Fragment = origURL.Path, origURL.RawQuery, origURL.Fragment
req.RequestURI, req.Host = "", req.URL.Host
// Perform request
resp, err := client.Do(req)
if err != nil {
w.WriteHeader(http.StatusBadGateway)
w.Write([]byte(fmt.Sprintf("%d - StatusBadGateway: %s", http.StatusBadGateway, err)))
return
}
defer resp.Body.Close()
var respBuffer *bytes.Buffer
if resp.ContentLength != -1 {
respBuffer = bytes.NewBuffer(make([]byte, 0, resp.ContentLength))
} else {
respBuffer = new(bytes.Buffer)
}
if _, err = respBuffer.ReadFrom(resp.Body); err != nil {
w.WriteHeader(http.StatusBadGateway)
w.Write([]byte(fmt.Sprintf("%d - StatusBadGateway: %s", http.StatusBadGateway, err)))
return
}
// Write result of request
headers := w.Header()
var key string
var val []string
for key, val = range resp.Header {
headers[key] = val
}
headers.Set("Content-Length", strconv.Itoa(respBuffer.Len()))
w.WriteHeader(resp.StatusCode)
w.Write(respBuffer.Bytes())
}
func getTimeoutServer(addr string, handler http.Handler) *http.Server {
//keeps people who are slow or are sending keep-alives from eating all our sockets
const (
HTTP_READ_TO = DEFAULT_IDLE_TIMEOUT
HTTP_WRITE_TO = DEFAULT_IDLE_TIMEOUT
)
return &http.Server{
Addr: addr,
Handler: handler,
ReadTimeout: HTTP_READ_TO,
WriteTimeout: HTTP_WRITE_TO,
}
}
func NoProxyAllowed(request *http.Request) (*url.URL, error) {
return nil, nil
}
//TimeoutConn-------------------------
//Put me in my own TimeoutConn.go ?
type TimeoutConn struct {
net.Conn
readTimeout, writeTimeout time.Duration
}
var invalidOperationError = errors.New("TimeoutConn does not support or allow .SetDeadline operations")
func NewTimeoutConn(conn net.Conn, ioTimeout time.Duration) (*TimeoutConn, error) {
return NewTimeoutConnReadWriteTO(conn, ioTimeout, ioTimeout)
}
func NewTimeoutConnReadWriteTO(conn net.Conn, readTimeout, writeTimeout time.Duration) (*TimeoutConn, error) {
this := &TimeoutConn{
Conn: conn,
readTimeout: readTimeout,
writeTimeout: writeTimeout,
}
now := time.Now()
err := this.Conn.SetReadDeadline(now.Add(this.readTimeout))
if err != nil {
return nil, err
}
err = this.Conn.SetWriteDeadline(now.Add(this.writeTimeout))
if err != nil {
return nil, err
}
return this, nil
}
func NewTimeoutConnDial(network, addr string, ioTimeout time.Duration) (net.Conn, error) {
conn, err := net.DialTimeout(network, addr, ioTimeout)
if err != nil {
return nil, err
}
if conn, err = NewTimeoutConn(conn, ioTimeout); err != nil {
return nil, err
}
return conn, nil
}
func (this *TimeoutConn) Read(data []byte) (int, error) {
this.Conn.SetReadDeadline(time.Now().Add(this.readTimeout))
return this.Conn.Read(data)
}
func (this *TimeoutConn) Write(data []byte) (int, error) {
this.Conn.SetWriteDeadline(time.Now().Add(this.writeTimeout))
return this.Conn.Write(data)
}
func (this *TimeoutConn) SetDeadline(time time.Time) error {
return invalidOperationError
}
func (this *TimeoutConn) SetReadDeadline(time time.Time) error {
return invalidOperationError
}
func (this *TimeoutConn) SetWriteDeadline(time time.Time) error {
return invalidOperationError
}
We ran into this, and after a lot of time spent debugging, I came across this: https://code.google.com/p/go/source/detail?r=d4e1ec84876c
This shifts the burden onto clients to read their whole response
bodies if they want the advantage of reusing TCP connections.
So be sure you read the entire body before closing. This function can come in handy: it shows whether you have this issue by logging any extra bytes that haven't been read, and it drains the stream for you so the connection can be reused:
func closeResponse(response *http.Response) error {
// ensure we read the entire body
bs, err2 := ioutil.ReadAll(response.Body)
if err2 != nil {
log.Println("Error during ReadAll!!", err2)
}
if len(bs) > 0 {
log.Println("Had to read some bytes, not good!", bs, string(bs))
}
return response.Body.Close()
}
Or, if you really don't care about the body, you can just discard it:
io.Copy(ioutil.Discard, response.Body)
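Putting the two together, a small helper (drainAndClose is a hypothetical name) that discards any unread bytes and then closes the body, demonstrated against a throwaway httptest server:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// drainAndClose discards any unread body bytes so the underlying TCP
// connection can go back into the keep-alive pool, then closes the body.
// It returns the first error encountered.
func drainAndClose(resp *http.Response) error {
	_, err := io.Copy(io.Discard, resp.Body)
	if cerr := resp.Body.Close(); err == nil {
		err = cerr
	}
	return err
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "some body the caller never reads")
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	// The body is deliberately left unread; drainAndClose cleans it up.
	fmt.Println("drain error:", drainAndClose(resp))
}
```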
I encountered this problem too. Adding DisableKeepAlives: true to the http.Transport fixed the issue for me; you can give it a try.
I came here after running a massive number of SQL queries per second on a system without limiting the number of idle connections over a long period of time. As pointed out in this issue comment on GitHub, explicitly setting db.SetMaxIdleConns(5) completely solved my problem.