golang program determine if user uses proxy - http

I want my golang http client to use a proxy only if the user provides the proxy value.
// Make HTTP GET/POST request
proxyUrl, err := url.Parse(proxy)
tr := &http.Transport{
    DisableKeepAlives: true,
    Proxy:             http.ProxyURL(proxyUrl),
}
The above code always tries to connect through the proxy even if the proxy variable is blank (url.Parse("") returns an empty URL and no error, so http.ProxyURL still receives a non-nil value).

Thanks for the suggestion. Now I am able to make it work. Below is the modified code.
tr := &http.Transport{}
tr.DisableKeepAlives = true
if len(proxy) != 0 { // Set the proxy only if the proxy param is specified
    proxyUrl, err := url.Parse(proxy)
    if err == nil {
        tr.Proxy = http.ProxyURL(proxyUrl)
    }
}
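For completeness, the transport is then handed to an http.Client as usual; a minimal sketch (the 30-second timeout and example URL are illustrative, not from the original post):
// Build the client once the transport is configured; when no proxy was
// supplied, the transport simply dials the target directly.
client := &http.Client{
    Transport: tr,
    Timeout:   30 * time.Second, // arbitrary example timeout
}
resp, err := client.Get("https://example.com")
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()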

Related

http forward with domain blocking

I'm trying to implement an HTTP forwarding server that supports domain blocking. I've tried
go io.Copy(dst, src)
go io.Copy(src, dst)
and it works like a charm for TCP forwarding. Then I tried to do request-line parsing with something similar to
go func() {
    reader := io.TeeReader(src, dst)
    textReader := textproto.NewReader(bufio.NewReader(reader))
    requestLine, _ = textReader.ReadLine()
    // ...
    ioutil.ReadAll(reader)
}()
It works fine, but I was getting worried about bad performance (with ioutil.ReadAll), so I wrote the code below.
func (f *Forwarder) handle(src, dst net.Conn) {
    defer dst.Close()
    defer src.Close()
    done := make(chan struct{})
    go func() {
        textReader := bufio.NewReader(src)
        requestLine, _ := textReader.ReadString('\n')
        // parse request line and apply domain blocking
        dst.Write([]byte(requestLine))
        io.Copy(dst, src)
        done <- struct{}{}
    }()
    go func() {
        textReader := bufio.NewReader(dst)
        statusLine, _ := textReader.ReadString('\n')
        src.Write([]byte(statusLine))
        io.Copy(src, dst)
        done <- struct{}{}
    }()
    <-done
    <-done
}
Unfortunately, it doesn't work at all. Requests get printed out, but responses don't. I'm stuck here and don't know what's wrong.
TCP forwarding is how a tunnel proxy works: the tunnel does not need to parse the data at all. A reverse proxy, on the other hand, can be built with the standard library.
The proxy is implemented by separating the HTTP and HTTPS cases. A client generally sends HTTPS through the tunnel using the CONNECT method, while plain HTTP arrives as GET (and the other regular methods). For HTTPS requests the proxy only needs to dial a TCP connection to the target and relay the bytes in both directions; plain HTTP requests are handled with a reverse proxy.
func(w http.ResponseWriter, r *http.Request) {
    // check url host
    if r.URL.Host != "" {
        if r.Method == http.MethodConnect {
            // tunnel proxy
            conn, err := net.Dial("tcp", r.URL.Host)
            if err != nil {
                w.WriteHeader(502)
                return
            }
            client, _, err := w.(http.Hijacker).Hijack()
            if err != nil {
                w.WriteHeader(502)
                conn.Close()
                return
            }
            client.Write([]byte("HTTP/1.0 200 OK\r\n\r\n"))
            go func() {
                io.Copy(client, conn)
                client.Close()
                conn.Close()
            }()
            go func() {
                io.Copy(conn, client)
                client.Close()
                conn.Close()
            }()
        } else {
            // reverse proxy
            httputil.NewSingleHostReverseProxy(r.URL).ServeHTTP(w, r)
        }
    }
}
Implementing a reverse proxy means parsing the client request and having the proxy send its own request to the target server. A hand-written version of that request conversion (not tested):
func(w http.ResponseWriter, r *http.Request) {
    // rewrite the target: scheme and host
    r.URL.Scheme = "http"
    r.URL.Host = "example.com"
    r.RequestURI = "" // must be empty for client requests
    // send
    resp, err := http.DefaultClient.Do(r)
    if err != nil {
        w.WriteHeader(502)
        return
    }
    // write response
    defer resp.Body.Close()
    h := w.Header()
    for k, v := range resp.Header {
        h[k] = v
    }
    w.WriteHeader(resp.StatusCode) // headers must be copied before WriteHeader
    io.Copy(w, resp.Body)
}
However, forwarding the request directly like this does not handle hop-by-hop headers. Hop-by-hop headers are spelled out in the RFC: they carry information between two adjacent connections only. For example, client-to-proxy and proxy-to-server are two separate hops, while client-to-server semantics are end-to-end.
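If you do forward by hand, the usual approach is to drop the headers named in the Connection header plus the standard hop-by-hop set before passing a request or response on; a rough sketch (assuming "net/http" and "strings" are imported):
// hopByHopHeaders is the standard hop-by-hop set from the HTTP/1.1 RFC.
var hopByHopHeaders = []string{
    "Connection", "Keep-Alive", "Proxy-Authenticate", "Proxy-Authorization",
    "TE", "Trailer", "Transfer-Encoding", "Upgrade",
}

// removeHopByHopHeaders strips connection-level headers so they are not
// forwarded to the next hop.
func removeHopByHopHeaders(h http.Header) {
    // headers explicitly listed in the Connection header are also hop-by-hop
    for _, name := range strings.Split(h.Get("Connection"), ",") {
        if name = strings.TrimSpace(name); name != "" {
            h.Del(name)
        }
    }
    for _, name := range hopByHopHeaders {
        h.Del(name)
    }
}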
Please use the standard library directly for the reverse proxy; it already handles the hop-by-hop headers and Upgrade for you.
Example: NewSingleHostReverseProxy with a filter:
package main

import (
    "net/http"
    "net/http/httputil"
    "net/url"
    "strings"
)

func main() {
    addr, _ := url.Parse("http://localhost:8088")
    proxy := httputil.NewSingleHostReverseProxy(addr)
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        if strings.HasPrefix(r.URL.Path, "/api/") {
            proxy.ServeHTTP(w, r)
        } else {
            w.WriteHeader(404)
        }
    })
    // listen and serve (the port here is arbitrary)
    http.ListenAndServe(":8080", nil)
}

Keep alive request for _change continuous feed

I am trying to convert the Node.js code below to Go. I have to establish a keep-alive HTTP request to the PouchDB server's _changes?feed=continuous endpoint. However, I'm not able to achieve this in Go.
var http = require('http')
var agent = new http.Agent({
    keepAlive: true
});
var options = {
    host: 'localhost',
    port: '3030',
    method: 'GET',
    path: '/downloads/_changes?feed=continuous&include_docs=true',
    agent
};
var req = http.request(options, function(response) {
    response.on('data', function(data) {
        let val = data.toString()
        if (val == '\n')
            console.log('newline')
        else {
            console.log(JSON.parse(val))
            // to close the connection
            // agent.destroy()
        }
    });
    response.on('end', function() {
        // Data received completely.
        console.log('end');
    });
    response.on('error', function(err) {
        console.log(err)
    })
});
req.end();
Below is the Go code
client := &http.Client{}
data := url.Values{}
req, err := http.NewRequest("GET", "http://localhost:3030/downloads/_changes?feed=continuous&include_docs=true", strings.NewReader(data.Encode()))
req.Header.Set("Connection", "keep-alive")
resp, err := client.Do(req)
if err != nil {
    fmt.Println(err)
    return
}
defer resp.Body.Close()
fmt.Println(resp.Status)
result, err := ioutil.ReadAll(resp.Body)
if err != nil {
    fmt.Println(err)
}
fmt.Println(result)
I am getting status 200 OK, but no data gets printed; it's stuck. On the other hand, if I use the longpoll option, i.e. http://localhost:3030/downloads/_changes?feed=longpoll, then I receive data.
Your code is working "as expected", but what you wrote in Go is not equivalent to the Node.js code. The Go code blocks on ioutil.ReadAll(resp.Body) because the connection is kept open by the CouchDB server. Once the server closes the connection, your client code will print the result, since ioutil.ReadAll() will then be able to read all data down to EOF.
From CouchDB documentation about continuous feed:
A continuous feed stays open and connected to the database until explicitly closed and changes are sent to the client as they happen, i.e. in near real-time. As with the longpoll feed type you can set both the timeout and heartbeat intervals to ensure that the connection is kept open for new changes and updates.
You can experiment by adding &timeout=1 to the URL, which will force CouchDB to close the connection after 1 s. Your Go code should then print the whole response.
The Node.js code works differently: the data event handler is called every time the server sends some data. If you want to achieve the same and process partial updates as they come (before the connection is closed), you cannot use ioutil.ReadAll(), as that waits for EOF (and thus blocks in your case); use something like resp.Body.Read() to process partial buffers instead. Here is a very simplified snippet that demonstrates this and should give you the basic idea:
package main

import (
    "fmt"
    "net/http"
    "net/url"
    "strings"
)

func main() {
    client := &http.Client{}
    data := url.Values{}
    req, err := http.NewRequest("GET", "http://localhost:3030/downloads/_changes?feed=continuous&include_docs=true", strings.NewReader(data.Encode()))
    req.Header.Set("Connection", "keep-alive")
    resp, err := client.Do(req)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
    buf := make([]byte, 1024)
    for {
        l, err := resp.Body.Read(buf)
        if l == 0 && err != nil {
            break // this is super simplified
        }
        // here you can send off data to e.g. channel or start
        // handler goroutine...
        fmt.Printf("%s", buf[:l])
    }
    fmt.Println()
}
In a real-world application you probably want to make sure buf holds something that looks like a valid message, and then pass it to a channel or handler goroutine for further processing.
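For instance, since the continuous feed emits one JSON document per line (blank lines are heartbeats), a bufio.Scanner can replace the raw Read loop above; a rough sketch, assuming "bufio", "bytes" and "encoding/json" are added to the imports:
// Read the stream line by line; each non-empty line is one change document.
scanner := bufio.NewScanner(resp.Body)
scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow larger documents
for scanner.Scan() {
    line := bytes.TrimSpace(scanner.Bytes())
    if len(line) == 0 {
        continue // heartbeat newline, nothing to parse
    }
    var change map[string]interface{}
    if err := json.Unmarshal(line, &change); err != nil {
        fmt.Println("skipping malformed line:", err)
        continue
    }
    fmt.Println(change)
}
if err := scanner.Err(); err != nil {
    fmt.Println("stream closed:", err)
}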
Finally, I was able to resolve the issue. It was related to the DisableCompression flag; https://github.com/golang/go/issues/16488 gave me a hint.
Setting DisableCompression: true fixed the issue.
client := &http.Client{Transport: &http.Transport{
    DisableCompression: true,
}}
I am assuming that client := &http.Client{} uses DisableCompression: false by default and that the PouchDB server was sending compressed JSON, hence the received data was compressed and resp.Body.Read was not able to read it.

Accepting and routing http requests over raw TCP socket

I am building a web server that must accept HTTP requests from a client, but must also accept requests over a raw TCP socket from peers. Since HTTP runs over TCP, I am trying to route the HTTP requests through the TCP server rather than running two separate services.
Is there an easy way to read in the data with net.Conn.Read(), determine whether it is an HTTP GET/POST request, and pass it off to the built-in HTTP handler or Gorilla mux? Right now my code looks like this, and I am building the HTTP routing logic myself:
func ListenConn() {
    listen, _ := net.Listen("tcp", ":8080")
    defer listen.Close()
    for {
        conn, err := listen.Accept()
        if err != nil {
            logger.Println("listener.go", "ListenConn", err.Error())
        }
        go HandleConn(conn)
    }
}

func HandleConn(conn net.Conn) {
    defer conn.Close()
    // determines if it is an http request
    scanner := bufio.NewScanner(conn)
    for scanner.Scan() {
        ln := scanner.Bytes()
        fmt.Println(ln)
        if strings.Fields(string(ln))[0] == "GET" {
            http.GetRegistrationCode(conn, strings.Fields(string(ln))[1])
            return
        }
        // ... raw tcp handler code
    }
}
It is not a good idea to mix HTTP and raw TCP traffic.
Think about all the firewalls and routers between your application and its clients. They are all designed to enable safe HTTP(S) delivery. What will they do with your TCP traffic arriving on the same port as valid HTTP?
As a solution, you can split your traffic across two different ports in the same application.
With port separation you can route your HTTP and TCP traffic independently and configure appropriate network security for each channel.
Sample code that listens on two different ports:
package main

import (
    "fmt"
    "net"
    "net/http"
    "os"
)

type httpHandler struct {
}

func (m *httpHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    fmt.Println("HTTP request")
}

func main() {
    // http
    go func() {
        http.ListenAndServe(":8001", &httpHandler{})
    }()
    // tcp
    l, err := net.Listen("tcp", "localhost:8002")
    if err != nil {
        fmt.Println("Error listening:", err.Error())
        os.Exit(1)
    }
    defer l.Close()
    for {
        conn, err := l.Accept()
        if err != nil {
            fmt.Println("Error accepting: ", err.Error())
            os.Exit(1)
        }
        go handleRequest(conn)
    }
}

// Handles incoming requests.
func handleRequest(conn net.Conn) {
    // read/write from connection
    fmt.Println("TCP connection")
    conn.Close()
}
Open http://localhost:8001/ in a browser and run echo -n "test" | nc localhost 8002 on the command line to test the listeners.

Connect to server through proxy

I want to connect to a server through a proxy server that I have.
I am searching for something similar to Python's HTTPConnection.set_tunnel; is there something like this in Go?
----edit-----
I'm trying to create a connection to a server that allows self-signed certificates and goes through the proxy; will this code work properly?
func CreateProxyClient(serverProxy string, sid string, portProxy int) (*Client, error) {
    http.DefaultTransport.(*http.Transport).TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
    proxyUrl, _ := url.Parse(serverProxy + ":" + strconv.Itoa(portProxy))
    tr := &http.Transport{
        Proxy: http.ProxyURL(proxyUrl),
    }
    var netClient = &http.Client{
        Timeout:   time.Second * 10,
        Transport: tr,
    }
    return &Client{netClient, serverProxy, sid}, nil
}
You can set the environment variable HTTP_PROXY for HTTP or HTTPS_PROXY for HTTPS so that the default HTTP transport will use it.
Alternatively, you can create an http.Transport yourself with the Proxy field set to an http.ProxyURL function call, or use your own custom implementation.
Example:
proxyURL, _ := url.Parse("http://proxy.example.com:port")
http.DefaultTransport = &http.Transport{
    Proxy: http.ProxyURL(proxyURL),
}
// request using proxy
resp, _ := http.Get("https://google.com")
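If you would rather pick the proxy up from those environment variables on a custom transport as well, http.ProxyFromEnvironment can be plugged into the Proxy field; a minimal sketch (the timeout and URL are placeholders):
client := &http.Client{
    Transport: &http.Transport{
        // honours HTTP_PROXY, HTTPS_PROXY and NO_PROXY (and lowercase variants)
        Proxy: http.ProxyFromEnvironment,
    },
    Timeout: 10 * time.Second,
}
resp, err := client.Get("https://example.com")
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()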

Disable Common Name Validation - Go HTTP Client

How do I disable common name validation inside of a Go HTTP client? I am doing mutual TLS with a common CA, and hence common name validation means nothing.
The tls docs say,
// ServerName is used to verify the hostname on the returned
// certificates unless InsecureSkipVerify is given. It is also included
// in the client's handshake to support virtual hosting unless it is
// an IP address.
ServerName string
I don't want to do InsecureSkipVerify but I don't want to validate the common name.
You would pass a tls.Config struct with your own VerifyPeerCertificate function, and then you would check the certificate yourself.
VerifyPeerCertificate func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error
If normal verification fails then the handshake will abort before
considering this callback. If normal verification is disabled by
setting InsecureSkipVerify then this callback will be considered but
the verifiedChains argument will always be nil.
You can look here for an example of how to verify a certificate. If you look here, you'll see that even this verification process includes checking the hostname, but luckily it skips that check when the name is set to the empty string.
So, basically you write your own VerifyPeerCertificate function and check the rawCerts [][]byte yourself, which I think would look something like this (caPool below stands for whatever pool holds your common CA):
customVerify := func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
    intermediates := x509.NewCertPool()
    certs := make([]*x509.Certificate, len(rawCerts))
    for i, rawCert := range rawCerts {
        cert, err := x509.ParseCertificate(rawCert)
        if err != nil {
            return err
        }
        certs[i] = cert
        if i > 0 {
            intermediates.AddCert(cert) // anything after the leaf is an intermediate
        }
    }
    if len(certs) == 0 {
        return errors.New("no peer certificate presented")
    }
    opts := x509.VerifyOptions{
        Roots:         caPool, // pool containing your common CA certificate(s)
        Intermediates: intermediates,
        // DNSName is left empty, so no host name / common name check is performed
    }
    _, err := certs[0].Verify(opts)
    return err
}
conf := tls.Config{
    //...
    // InsecureSkipVerify disables the built-in verification (including the
    // host name check); customVerify above runs instead.
    InsecureSkipVerify:    true,
    VerifyPeerCertificate: customVerify,
}
A normal HTTPS POST looks like this:
pool := x509.NewCertPool()
caStr, err := ioutil.ReadFile(serverCAFile)
if err != nil {
    return nil, fmt.Errorf("read server ca file fail")
}
pool.AppendCertsFromPEM(caStr)
tr := &http.Transport{
    TLSClientConfig: &tls.Config{
        RootCAs: pool,
    },
}
client := &http.Client{Transport: tr}
client.Post(url, bodyType, body)
But if your URL uses an IP (e.g. https://127.0.0.1:8080/api/test), or your URL does not match the certificate common name, and you only want to skip the common name check, you should do it like this:
pool := x509.NewCertPool()
caStr, err := ioutil.ReadFile(serverCAFile)
if err != nil {
    return nil, fmt.Errorf("read server ca file fail")
}
block, _ := pem.Decode(caStr)
if block == nil {
    return nil, fmt.Errorf("Decode ca file fail")
}
if block.Type != "CERTIFICATE" || len(block.Headers) != 0 {
    return nil, fmt.Errorf("Decode ca block file fail")
}
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
    return nil, fmt.Errorf("ParseCertificate ca block file fail")
}
pool.AddCert(cert)
tr := &http.Transport{
    TLSClientConfig: &tls.Config{
        RootCAs:    pool,
        ServerName: cert.Subject.CommonName, // manually set ServerName
    },
}
client := &http.Client{Transport: tr}
client.Post(url, bodyType, body)

Resources