I heard here that "once you write anything in the response, the request body will be closed, which prevents you from reading anything from it".
If that is true, how can I write a proper duplex handler that reads from the request body, applies some kind of transformation, and then writes to the response body, in a streaming fashion just like people do in Node.js?
I ended up managing to do this with http.Hijacker.
After the request is made and the request headers are parsed, I can read from *http.Request.Body, then hijack the connection and write to it, at the same time, like this:
hj, ok := w.(http.Hijacker)
if !ok {
http.Error(w, "hijacking not supported", 500)
return
}
conn, bufrw, err := hj.Hijack()
if err != nil {
http.Error(w, err.Error(), 500)
return
}
defer conn.Close()
And then conn is a net.Conn, the underlying TCP connection to the client, and bufrw is a *bufio.ReadWriter. To write the response without closing the body, all I have to do is
_, err = bufrw.WriteString("HTTP/1.1 200 OK\r\n\r\n")
_, err = bufrw.WriteString("this")
_, err = bufrw.WriteString("is")
_, err = bufrw.WriteString("the")
_, err = bufrw.WriteString("response")
_, err = bufrw.WriteString("body")
I'm not entirely sure about this part, and maybe someone can complete the answer, but it is probably a good idea to flush the buffered writer down the connection once in a while with
err := bufrw.Flush()
From the net/http docs you are referring to:

var ErrBodyReadAfterClose = errors.New("http: invalid Read on closed Body")

ErrBodyReadAfterClose is returned when reading a Request or Response Body after the body has been closed. This typically happens when the body is read after an HTTP Handler calls WriteHeader or Write on its ResponseWriter.
However I tried the code linked from the blog article you mentioned and it works fine under go 1.1.2 unless I write more than about 4k of data first in which case r.ParseForm() returns ErrBodyReadAfterClose.
So I think the answer is no, you can't do full duplex HTTP in go in general unless the responses are short (under 4k).
I would say that full duplex HTTP requests are unlikely to be a big benefit though, as most clients won't attempt to read from the response until they have finished sending the request, so the most you'll win is the size of the TCP buffers in the client and server. A deadlock seems likely if those buffers are exceeded, e.g.:
client is sending the request
server is sending the response
server's buffers fill up while sending the response
server blocks
server no longer reads the client's request
client's buffers fill up
client blocks
deadlock
Related
When I look at the net/http server interface, I don't see an obvious way to get notified and react when the http.Server comes up and starts listening:
ListenAndServe(":8080", nil)
The function doesn't return until the server actually shuts down. I also looked at the Server type, but there doesn't appear to be anything that lets me tap into that timing. Some function or a channel would have been great but I don't see any.
Is there any way that will let me detect that event, or am I left to just sleeping "enough" to fake it?
ListenAndServe is a helper function that opens a listening socket and then serves connections on that socket. Write the code directly in your application to signal when the socket is open:
l, err := net.Listen("tcp", ":8080")
if err != nil {
// handle error
}
// Signal that server is open for business.
if err := http.Serve(l, rootHandler); err != nil {
// handle error
}
If the signalling step does not block, then http.Serve will easily consume any backlog on the listening socket.
Related question: https://stackoverflow.com/a/32742904/5728991
I need to check if the user is connected to internet before I can proceed.
I am hitting the endpoint using HttpClient as follows:
client := &http.Client{}
req, _ := http.NewRequest("GET", url, nil)
req.SetBasicAuth(username, password)
res, err := client.Do(req)
if err != nil {
fmt.Println(err)
ui.Failed("Check your internet connection")
}
1) I need to show clear messages to the user if the user is not connected to the internet in this case, display "Please check your internet connection"
2) In case the server is not responding and the request comes back with a 504 Gateway Timeout,
display "504 Gateway Timeout"
Can someone help me distinguish between these two scenarios? I would like to display only simple messages, not the entire error messages received from the server.
Checking for an established Internet connection isn't as simple as making a single HTTP request to an arbitrary URL, like Ivan de la Beldad suggests. This can fail for any number of reasons, none of which will necessarily stop you from doing what you actually intend to do with the connection. To name a few:
clients3.google.com may be deliberately blocked by the local network (firewall, corporate or school proxy) or any en-route network (think Great Firewall of China)
clients3.google.com may be unreachable for some clients due to network outages
clients3.google.com itself may block the client for whatever reason (perhaps unintentionally)
clients3.google.com may have an outage
port 80 and 443 may work fine, but all other ports are blocked by shitty hotel/coffee shop wifi
shitty hotel/coffee shop wifi presents fake TLS certificates to clients, so HTTPS requests will fail in many cases
So instead of relying on a single arbitrary HTTP request it is much better to send some kind of liveliness probe to whatever service(s) you actually want to use.
If your app wants to communicate with an API, see if there is a health or status endpoint that you can call. If there isn't, look for some kind of cheap no-op. And try not to tell users to simply "check their Internet connection"; try to at least explain why your app concludes that there might be an issue. "We can't connect to Twitter right now. If you are connected to the Internet, try again in a few minutes" is much better.
On the off-chance that you really only want to check if the Internet itself is available, I would suggest making a DNS query to several DNS servers on the Internet. DNS is not likely to be blocked through local policies and cheaper than HTTP requests. Pick your DNS queries wisely and be prepared for NXDOMAIN responses.
To check if you're connected to internet you can use this:
func connected() (ok bool) {
_, err := http.Get("http://clients3.google.com/generate_204")
if err != nil {
return false
}
return true
}
And you can get the status code from res.StatusCode.
The final result would be something like that:
if !connected() {
ui.Failed("Check your internet connection")
}
client := &http.Client{}
req, err := http.NewRequest("GET", url, nil)
if err != nil {
    // handle error
}
req.SetBasicAuth(username, password)
res, err := client.Do(req)
if err != nil {
    // handle error; res is nil here, so don't touch res.StatusCode
}
defer res.Body.Close()
if res.StatusCode == http.StatusGatewayTimeout {
    ui.Failed("504 Gateway Timeout")
}
I want to use Golang as my server side language, but everything I've read points to nginx as the web server rather than relying on net/http (not that it's bad, but it just seems preferable overall, not the point of this post though).
I've found a few articles on using fastcgi with Golang, but I've had no luck finding anything on reverse proxies and HTTP and whatnot, other than this benchmark which unfortunately doesn't go into enough detail.
Are there any tutorials/guides available on how this operates?
For example there is a big post on Stackoverflow detailing it with Node, but I cannot find a similar one for go.
That's not needed at all anymore unless you're using nginx for caching; Go 1.6+ is more than good enough to serve HTTP and HTTPS directly.
However, if you're insisting (and I will secretly judge you and laugh at you), here's the workflow:
Your go app listens on a local port, say "127.0.0.1:8080"
nginx listens on 0.0.0.0:80 and 0.0.0.0:443 and proxies all requests to 127.0.0.1:8080.
Be judged.
The nginx setup in Node.js + Nginx - What now? is exactly the same setup you would use for Go, or any other standalone server for that matter that isn't cgi/fastcgi.
I use Nginx in production very effectively, using Unix sockets instead of TCP for the FastCGI connection. This code snippet comes from Manners, but you can adapt it for the normal Go API quite easily.
func isUnixNetwork(addr string) bool {
return strings.HasPrefix(addr, "/") || strings.HasPrefix(addr, ".")
}
func listenToUnix(bind string) (listener net.Listener, err error) {
_, err = os.Stat(bind)
if err == nil {
// socket exists and is "already in use";
// presume this is from earlier run and therefore delete it
err = os.Remove(bind)
if err != nil {
return
}
} else if !os.IsNotExist(err) {
return
}
listener, err = net.Listen("unix", bind)
return
}
func listen(bind string) (listener net.Listener, err error) {
if isUnixNetwork(bind) {
logger.Printf("Listening on unix socket %s\n", bind)
return listenToUnix(bind)
} else if strings.Contains(bind, ":") {
logger.Printf("Listening on tcp socket %s\n", bind)
return net.Listen("tcp", bind)
} else {
return nil, fmt.Errorf("error while parsing bind arg %v", bind)
}
}
Take a look around about line 252, which is where the switching happens between HTTP over a TCP connection and FastCGI over Unix-domain sockets.
With Unix sockets, you have to adjust your startup scripts to ensure that the sockets are created in an orderly way with the correct ownership and permissions. If you get that right, the rest is easy.
To answer other remarks about why you would want to use Nginx, it always depends on your use-case. I have Nginx-hosted static/PHP websites; it is convenient to use it as a reverse-proxy on the same server in such cases.
I'm using the httptest package in Go to test my application. Recently I noticed that one of my tests was failing to finish because it wasn't reading the body of the response:
func Test_AnyTest(t *testing.T) {
serve := newServer()
s := httptest.NewServer(serve)
defer s.Close()
testUrl := "/any/old/url"
c := &http.Client{}
r, _ := http.NewRequest("GET", s.URL+testUrl, new(bytes.Buffer))
r.Header.Add("If-None-Match", cacheVersion)
res, _ := c.Do(r)
if res.StatusCode == 304 {
t.Errorf("Should not have got 304")
}
}
The above code was blocking on the deferred call to s.Close() because there were outstanding http connections on the test server. My server has some code that was writing to the body (using the http.ResponseWriter interface). Turns out that this code was actually blocking until I read the body in my test.
A call like this did the trick.
ioutil.ReadAll(res.Body)
I don't mind making this call in my tests but I am concerned that a client that misbehaves could cause this behaviour and consume server resources. Does anyone know what's happening here? Is this expected behaviour in the wild or is it just a problem with the test framework?
Thanks!
From the http docs:
The client must close the response body when finished with it:
resp, err := http.Get("http://example.com/")
if err != nil {
// handle error
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
// ...
So your test is violating the contract of http.Client; adding a defer res.Body.Close() lets the transport know when you are finished with the response. Your ioutil.ReadAll call had a similar effect, since it drains the body to EOF, which is what unblocks the server's writes.
In the wild, if a client opens a connection and sends a request, the server just sends the response and then closes the connection. It doesn't wait around for the client to close it. If the client is too slow to ACK the IP packets that the server sends, the OS will eventually take care of timing out the connection, and the http.Server will respond by exiting the goroutine that served the request, no kittens harmed.
I am looking for a client software that will run on a Unix system to do long-polling with multiple requests in a single http pipeline.
Basically we need to issue several long-polling GET requests to a server. All the requests need to be done within a single HTTP pipeline.
The client needs to have N requests open at any given time, where N > 1.
The server will respond either with a 200 OK or 204 No Content.
In case of a 200 OK, the response needs to be piped into a new process.
This can be easily implemented using PHP. The HttpRequestPool can be used to build a custom client doing just that. Also see How can I make use of HTTP 1.1 persistent connections and pipelining from PHP?
With Go it's also fairly easy: if you create the connection yourself, you just send all the requests up front and then read the responses sequentially, and they all travel over a single pipelined HTTP connection.
conn, err := net.Dial("tcp", "127.0.0.1:80")
if err != nil {
    // handle error
}
client := httputil.NewClientConn(conn, nil)
req, err := http.NewRequest("GET", "/", nil)
if err != nil {
    // handle error
}
if err := client.Write(req); err != nil {
    // handle error
}
resp, err := client.Read(req)
if err != nil {
    // handle error
}
Note that httputil.ClientConn is documented as deprecated, so treat this as a low-level workaround rather than a supported API.