How to bypass golang's HTTP request (net/http) RFC compliance

I'm developing a security scanner and therefore need to send HTTP requests that don't honor RFC specifications. However, Go is very strict about complying with these.
Issue 1
I want to send an HTTP request which contains prohibited special characters such as "\".
For example: "Ill\egal": "header value"
However, golang always throws the error: 'net/http: invalid header field name "Ill\egal"'.
This error is thrown on line 523 at https://go.dev/src/net/http/transport.go
Issue 2
I want to send a single HTTP request which contains either two Content-Length headers, two Transfer-Encoding headers, or one Content-Length & one Transfer-Encoding header (for HTTP request smuggling). These sometimes need to have wrong values.
However, it isn't possible to set those headers yourself; they are generated automatically. So it's only possible to use one of these headers, with a correct value.
I've bypassed this by using a raw TCP stream, but that solution isn't satisfying, as I can't use a proxy this way (see: Use Dialer with Proxy, Route TCP stream through Proxy).
Issue 3
I want to send an HTTP request where the header name is mixed upper- and lowercase, e.g. "HeAdErNaMe": "header value".
This is possible for HTTP 1 requests by writing directly to the header map (req.Header["HeAdErNaMe"] = []string{"header value"})
However, for HTTP/2 requests the headers will still be normalized to meet the RFC specifications.
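For reference, the HTTP/1.x direct-map trick described above can be verified by dumping the request. A minimal sketch (example.com is just a placeholder):

package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://example.com", nil)
	if err != nil {
		log.Fatal(err)
	}
	// Writing to the map directly skips the canonicalization that
	// req.Header.Set("HeAdErNaMe", ...) would apply ("Headername").
	req.Header["HeAdErNaMe"] = []string{"header value"}

	// The dump shows the key exactly as stored (HTTP/1.x only).
	dump, err := httputil.DumpRequest(req, false)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", dump)
}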

You can dump the request into a buffer, modify the buffer (with regexp or strings.Replace), and send the modified buffer to the host using net.Dial.
Example:
package main

import (
	"bufio"
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"strings"
)

func main() {
	// Create and dump the request.
	req, err := http.NewRequest(http.MethodGet, "https://golang.org", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Add("User-Agent", "aaaaa")
	buf, err := httputil.DumpRequest(req, true)
	if err != nil {
		log.Fatal(err)
	}

	// Corrupt the request: mixed-case header name, illegal quote in the value.
	str := string(buf)
	str = strings.Replace(str, "User-Agent: aaaaa", "UsEr-AgEnT: aaa\"aaa", 1)
	fmt.Println(str)

	// Dial and send the raw request text.
	conn, err := tls.Dial("tcp", "golang.org:443", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Fprint(conn, str) // Fprint, not Fprintf: str is data, not a format string

	// Read the response.
	br := bufio.NewReader(conn)
	resp, err := http.ReadResponse(br, nil)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%+v", resp)
}
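The question notes that the raw-connection approach makes it hard to go through a proxy. For an HTTP proxy that supports CONNECT, one possibility is to open the tunnel yourself and then run TLS over it. A minimal sketch, assuming a hypothetical proxy at 127.0.0.1:8080:

package main

import (
	"bufio"
	"crypto/tls"
	"fmt"
	"log"
	"net"
	"net/http"
)

func main() {
	proxyAddr := "127.0.0.1:8080" // hypothetical proxy address
	target := "golang.org:443"

	// Plain TCP to the proxy, then ask it to open a tunnel.
	conn, err := net.Dial("tcp", proxyAddr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Fprintf(conn, "CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n", target, target)

	// The server sends nothing until our TLS ClientHello, so the
	// buffered reader will not over-read past the proxy's response.
	br := bufio.NewReader(conn)
	resp, err := http.ReadResponse(br, &http.Request{Method: http.MethodConnect})
	if err != nil || resp.StatusCode != http.StatusOK {
		log.Fatal("proxy refused CONNECT: ", err)
	}

	// TLS over the established tunnel; from here on, write the raw
	// request text exactly as in the example above.
	tlsConn := tls.Client(conn, &tls.Config{ServerName: "golang.org"})
	if err := tlsConn.Handshake(); err != nil {
		log.Fatal(err)
	}
	defer tlsConn.Close()
}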

Related

Golang HTTP Get Request Not Resolving for some URL

I was trying to build a website status checker. I found that a Go HTTP GET request never resolves and hangs forever for some URLs, such as https://www.hetzner.com, while the same URL works if we use curl.
Golang
No error is thrown here; it just hangs on http.Get:
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("https://www.hetzner.com")
	if err != nil {
		fmt.Println("Error while retrieving site", err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("Error while reading response body", err)
	}
	fmt.Println("RESPONSE", string(body))
}
CURL
I get the response when running the following command.
curl https://www.hetzner.com
What may be the reason? And how do I resolve this issue with Go's HTTP client?
Your specific case can be fixed by specifying an HTTP User-Agent header:
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{}
	req, err := http.NewRequest("GET", "https://www.hetzner.com", nil)
	if err != nil {
		fmt.Println("Error while creating request", err)
	}
	req.Header.Set("User-Agent", "Golang_Spider_Bot/3.0")
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("Error while retrieving site", err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("Error while reading response body", err)
	}
	fmt.Println("RESPONSE", string(body))
}
Note: many other hosts will reject requests from your server because of some security rules on their side. Some ideas:
Empty or bot-like User-Agent HTTP header
Location of your IP address. For example, online shops in the USA may have no reason to serve requests from Russia.
Autonomous System or CIDR of your provider. Some ASNs are completely blackholed because of the enormous malicious activity coming from their residents.
Note 2: Many modern websites have DDoS protection or CDN systems in front of them. If Cloudflare protects your target website, your HTTP request will be blocked even though you receive status code 200. To handle this, you need to build something able to render JavaScript-based websites and add some scripts to resolve a captcha.
Also, if you check a considerable number of websites in a short time, you will be blocked by your DNS servers, as they have built-in rate limits. In that case, you may want to take a look at massdns or similar solutions.

Go HTTP RoundTripper: Preventing Connection Reuse Based on Response

I have a use case where I want to use an HTTP client in Go with pooled connections (connection re-use), but with the special case where a connection is intentionally closed (not allowed for re-use) if a request on that connection returns a specific HTTP status code.
I've implemented a custom http.RoundTripper, which wraps an http.Transport, and can inspect the response status code. However, I can't seem to find a way to prevent the http.Transport from re-using that connection, without also preventing it from re-using any other connection.
Is this possible using the net/http package? If not, any suggested workaround for accomplishing this?
My current code looks something like this:
type MyTransport struct {
	transport *http.Transport
}

func (mt *MyTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	resp, err := mt.transport.RoundTrip(req)
	if err != nil {
		return resp, err
	}
	if resp.StatusCode == 567 {
		// HERE:
		// Do something to prevent re-use of this connection
	}
	return resp, err
}
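There is no public API on http.Transport to evict a single connection, but the net/http documentation notes that an HTTP/1.x connection is only reused if the response body was read to completion and then closed. One possible workaround is therefore to close the body early when the status matches, which forces the Transport to discard that connection. A minimal sketch, assuming the body of a 567 response can be thrown away (HTTP/2 multiplexes requests onto one connection, so this reasoning applies to HTTP/1.x only):

package main

import (
	"io"
	"log"
	"net/http"
	"strings"
)

// MyTransport wraps an http.Transport and discards the underlying
// connection when a response carries the status code 567.
type MyTransport struct {
	transport *http.Transport
}

func (mt *MyTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	resp, err := mt.transport.RoundTrip(req)
	if err != nil {
		return resp, err
	}
	if resp.StatusCode == 567 {
		// Closing before reading to EOF means the Transport cannot
		// return this connection to its idle pool; it closes it.
		resp.Body.Close()
		// Hand the caller an empty body so resp stays usable.
		resp.Body = io.NopCloser(strings.NewReader(""))
	}
	return resp, err
}

func main() {
	mt := &MyTransport{transport: http.DefaultTransport.(*http.Transport)}
	client := &http.Client{Transport: mt}
	resp, err := client.Get("http://example.com") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
}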

Unable to extract value from r.PostFormValue in Go?

I'm trying to extract a value from an HTTP POST request body (in my simple Go HTTP server) using net/http's PostFormValue, and my output is an empty string for any key I look up; in my case I'm trying to fetch hub.secret for use in an HMAC check. I use Postman to send the request to my localhost:8080 instance using the gorilla/mux router, with the header Content-Type: application/x-www-form-urlencoded set.
My handler looks like so:
func rootPostHandler(w http.ResponseWriter, r *http.Request) {
	var expectedMac []byte
	body, err := ioutil.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	log.Println("r.Body is:", string(body)) // debug: print the request POST body
	message := body                         // debug: set message just for extra clarity
	errParse := r.ParseForm()
	if errParse != nil {
		// handle err
	}
	secret := []byte(r.PostFormValue("hub.secret"))
	log.Println("secret is: ", string(secret))
	mac := hmac.New(sha256.New, secret)
	mac.Write(message)
	expectedMac = mac.Sum(nil)
	fmt.Println("Is HMAC equal? ", hmac.Equal(message, expectedMac))
	w.Header().Add("X-Hub-Signature", "sha256="+string(message))
}
The r.Body:
hub.callback=http%253A%252F%252Fweb-sub-client%253A8080%252FbRxvcmOcNk&hub.mode=subscribe&hub.secret=xTgSGLOtPNrBLLgYcKnL&hub.topic=%252Fa%252Ftopic
And the output when printing secret is an empty string, meaning it can't find hub.secret, right? What am I missing here?
The application reads the request body to EOF on this line:
body, err := ioutil.ReadAll(r.Body)
ParseForm returns an empty form because the body is at EOF at this line:
errParse := r.ParseForm()
The request body is read from the network connection. The request body cannot be read a second time.
Remove the call to ioutil.ReadAll or create a new body reader using the data returned from ioutil.ReadAll:
r.Body = io.NopCloser(bytes.NewReader(body))
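Putting the two together: read the body once, restore it, then parse. A minimal sketch of the fixed handler (HMAC computation omitted; plain net/http mux instead of gorilla/mux):

package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
)

func rootPostHandler(w http.ResponseWriter, r *http.Request) {
	// Read the raw body once (e.g. to compute an HMAC over it).
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Rewind: give ParseForm a fresh reader over the same bytes.
	r.Body = io.NopCloser(bytes.NewReader(body))
	if err := r.ParseForm(); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	log.Println("secret is:", r.PostFormValue("hub.secret"))
}

func main() {
	http.HandleFunc("/", rootPostHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}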

Why is there a 60 second delay on my HTTP POST request when using a Go HTTP client?

My goal is to scrape a website that requires me to log in first using HTTP requests in Golang. I actually succeeded by finding out I can send a post request to the website writing form-data into the body of the request. When I test this through an API development software I use called Postman, the response is instantaneous with no delays. However, when performing the request with an HTTP client in Go, there is a consistent 60 second delay every single time. I end up getting a logged in page, but for my program I need the response to be nearly instantaneous.
As you can see in my code, I've tried adding a bunch of headers to the request like "Connection", "Content-Type", "User-Agent" since I thought maaaaaybe the website can tell I'm requesting from a program and is forcing me to wait 60 seconds for a response. Adding these headers to make my request more legitimate(?) doesn't work at all.
Is the delay coming from Go's HTTP client being slow, or is there something wrong with how I'm forming my HTTP POST request? Also, was I on to something with my headers, and is the HTTP client rewriting them when they are sent out?
Here's my simple program...
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/http"
	"net/http/cookiejar"
	"os"
)

func main() {
	url := "https://easypronunciation.com/en/log-in"
	method := "POST"
	payload := &bytes.Buffer{}
	writer := multipart.NewWriter(payload)
	_ = writer.WriteField("email", "foo#bar.com")
	_ = writer.WriteField("password", "*********")
	_ = writer.WriteField("persistent_login", "on")
	_ = writer.WriteField("submit", "")
	err := writer.Close()
	if err != nil {
		fmt.Println(err)
	}
	cookieJar, _ := cookiejar.New(nil)
	client := &http.Client{
		Jar: cookieJar,
	}
	req, err := http.NewRequest(method, url, payload)
	if err != nil {
		fmt.Println(err)
	}
	req.Header.Set("Content-Type", writer.FormDataContentType())
	req.Header.Set("Connection", "Keep-Alive")
	req.Header.Set("Accept-Language", "en-US")
	req.Header.Set("User-Agent", "Mozilla/5.0")
	res, err := client.Do(req)
	if err != nil {
		fmt.Println(err)
	}
	defer res.Body.Close()
	f, err := os.Create("response.html")
	defer f.Close()
	res.Write(f)
}
I doubt this is the Go client library. I would suggest printing out the latencies of the different components to see where the 60-second delay occurs. I would also try different URLs for comparison.
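To localize the delay, net/http/httptrace can timestamp each phase of the request (DNS lookup, TCP connect, TLS handshake, first response byte). A minimal sketch against the URL from the question; a plain GET without the form body should be enough to spot connection-level delays:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	start := time.Now()
	trace := &httptrace.ClientTrace{
		DNSDone: func(httptrace.DNSDoneInfo) {
			fmt.Println("DNS done:          ", time.Since(start))
		},
		ConnectDone: func(network, addr string, err error) {
			fmt.Println("TCP connect done:  ", time.Since(start), err)
		},
		TLSHandshakeDone: func(tls.ConnectionState, error) {
			fmt.Println("TLS handshake done:", time.Since(start))
		},
		GotFirstResponseByte: func() {
			fmt.Println("first byte:        ", time.Since(start))
		},
	}
	req, err := http.NewRequest("GET", "https://easypronunciation.com/en/log-in", nil)
	if err != nil {
		log.Fatal(err)
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}

If everything up to the TLS handshake is fast and only the first response byte is late, the server itself is delaying the response rather than the Go client.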

Go http client doesn't automatically dechunk body

I'm streaming HTTP from Go, and the server responds with "Transfer-Encoding: chunked" as expected. I've been told that the HTTP client in Go should automatically dechunk the body of the HTTP response, removing the \r\n framing. But in my case it isn't removed automatically, so I have to use a ChunkedReader to read the bodies.
Any idea why golang doesn't dechunk my body automatically?
EDIT:
Here is the http request:
var transport = http.Transport{
	Proxy:                  nil,
	ExpectContinueTimeout:  0,
	MaxResponseHeaderBytes: 16384,
}
var httpClient = http.Client{
	Transport: &transport,
	CheckRedirect: func(req *http.Request, via []*http.Request) error {
		return http.ErrUseLastResponse
	},
}

bodyReader, bodyWriter := io.Pipe()
req, _ := http.NewRequest("GET", "http://x.x.x.x/stream", bodyReader)
response, err := httpClient.Do(req)

buffer := make([]byte, 2<<15)
n, readErr := response.Body.Read(buffer) // should be the dechunked body
The data read into the buffer is not dechunked. Any idea why?
I figured out why the body is not dechunked automatically: the HTTP response was HTTP/1.0, in which case golang ignores the Transfer-Encoding header.
https://github.com/golang/go/issues/12785
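A quick way to confirm this diagnosis and work around it: check the response protocol, and fall back to httputil.NewChunkedReader when the server answered with HTTP/1.0. A minimal sketch, assuming the server at the question's placeholder address always sends chunked framing:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/http/httputil"
)

func main() {
	resp, err := http.Get("http://x.x.x.x/stream") // placeholder URL from the question
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	fmt.Println(resp.Proto)            // "HTTP/1.0" here, so the chunking was ignored
	fmt.Println(resp.TransferEncoding) // contains "chunked" only when Go dechunked

	var body io.Reader = resp.Body
	if !resp.ProtoAtLeast(1, 1) {
		// HTTP/1.0 response: undo the chunk framing ourselves.
		body = httputil.NewChunkedReader(resp.Body)
	}
	data, err := io.ReadAll(body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read %d bytes\n", len(data))
}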
