How can I implement an inactivity timeout on an HTTP download

I've been reading up on the various timeouts that are available on an HTTP request, and they all seem to act as hard deadlines on the total time of the request.
I am running an HTTP download, and I don't want to impose a hard timeout past the initial handshake: I don't know anything about my users' connections and don't want to time out on slow ones. What I would ideally like is to time out after a period of inactivity (when nothing has been downloaded for x seconds). Is there a built-in way to do this, or do I have to interrupt based on stat-ing the file?
The working code is a little hard to isolate, but I think these are the relevant parts. There is another loop that stats the file to provide progress, but I will need to refactor a bit to use this to interrupt the download:
// HttpsClientOnNetInterface returns an http client using the named network interface (via proxy if passed).
func HttpsClientOnNetInterface(interfaceIP []byte, httpsProxy *Proxy) (*http.Client, error) {
    log.Printf("Got IP addr : %s\n", string(interfaceIP))

    // create address for the dialer
    tcpAddr := &net.TCPAddr{
        IP: interfaceIP,
    }

    // create the dialer & transport
    netDialer := net.Dialer{
        LocalAddr: tcpAddr,
    }

    var proxyURL *url.URL
    var err error
    if httpsProxy != nil {
        proxyURL, err = url.Parse(httpsProxy.String())
        if err != nil {
            return nil, fmt.Errorf("error parsing proxy connection string: %s", err)
        }
    }

    httpTransport := &http.Transport{
        Dial:  netDialer.Dial,
        Proxy: http.ProxyURL(proxyURL),
    }
    httpClient := &http.Client{
        Transport: httpTransport,
    }
    return httpClient, nil
}
/*
StartDownloadWithProgress will initiate a download from a remote url to a local file,
providing download progress information
*/
func StartDownloadWithProgress(interfaceIP []byte, httpsProxy *Proxy, srcURL, dstFilepath string) (*Download, error) {
    // start an http client on the selected net interface
    httpClient, err := HttpsClientOnNetInterface(interfaceIP, httpsProxy)
    if err != nil {
        return nil, err
    }

    // grab the header
    headResp, err := httpClient.Head(srcURL)
    if err != nil {
        log.Printf("error on head request (download size): %s", err)
        return nil, err
    }

    // pull out total size
    size, err := strconv.Atoi(headResp.Header.Get("Content-Length"))
    if err != nil {
        headResp.Body.Close()
        return nil, err
    }
    headResp.Body.Close()

    errChan := make(chan error)
    doneChan := make(chan struct{})

    // spawn the download process
    go func(httpClient *http.Client, srcURL, dstFilepath string, errChan chan error, doneChan chan struct{}) {
        resp, err := httpClient.Get(srcURL)
        if err != nil {
            errChan <- err
            return
        }
        defer resp.Body.Close()

        // create the file
        outFile, err := os.Create(dstFilepath)
        if err != nil {
            errChan <- err
            return
        }
        defer outFile.Close()

        log.Println("starting copy")
        // copy to file as the response arrives
        _, err = io.Copy(outFile, resp.Body)
        if err != nil {
            log.Printf("\n Download Copy Error: %s \n", err.Error())
            errChan <- err
            return
        }
        doneChan <- struct{}{}
    }(httpClient, srcURL, dstFilepath, errChan, doneChan)

    // return Download
    return (&Download{
        updateFrequency: time.Microsecond * 500,
        total:           size,
        errRecieve:      errChan,
        doneRecieve:     doneChan,
        filepath:        dstFilepath,
    }).Start(), nil
}
Update
Thanks to everyone who had input into this.
I've accepted JimB's answer as it seems like a perfectly viable approach that is more generalised than the solution I chose (and probably more useful to anyone who finds their way here).
In my case I already had a loop monitoring the file size so I threw a named error when this did not change for x seconds. It was much easier for me to pick up on the named error through my existing error handling and retry the download from there.
I probably leak at least one goroutine in the background with my approach (I may fix this later with some signalling), but as this is a short-running application (it's an installer) this is acceptable (or at least tolerable).
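For reference, a minimal sketch of the stall-watcher approach described above. The names ErrDownloadStalled and watchProgress are hypothetical, the polling interval is arbitrary, and a separate quit channel is assumed so the watcher does not race with the download's own doneChan:

var ErrDownloadStalled = errors.New("download stalled: file size unchanged")

// watchProgress polls the destination file's size and reports
// ErrDownloadStalled on errChan when it has not grown for idleTimeout.
func watchProgress(filepath string, idleTimeout time.Duration, errChan chan error, quit chan struct{}) {
    lastSize := int64(-1)
    lastChange := time.Now()
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()
    for {
        select {
        case <-quit:
            return
        case <-ticker.C:
            fi, err := os.Stat(filepath)
            if err != nil {
                continue // the file may not exist yet
            }
            if fi.Size() != lastSize {
                lastSize = fi.Size()
                lastChange = time.Now()
            } else if time.Since(lastChange) > idleTimeout {
                errChan <- ErrDownloadStalled
                return
            }
        }
    }
}

The existing error handling can then match on ErrDownloadStalled and retry the download from there.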

Doing the copy manually is not particularly difficult. If you're unsure how to implement it properly, it's only a couple dozen lines from the io package to copy and modify to suit your needs (I only removed the ErrShortWrite clause, because we can assume that the standard library io.Writer implementations are correct).
Here is a copy work-alike function that also takes a cancellation context and an idle timeout parameter. Every time there is a successful read, it signals the cancellation goroutine to continue and start a new timer.
func idleTimeoutCopy(dst io.Writer, src io.Reader, timeout time.Duration,
    ctx context.Context, cancel context.CancelFunc) (written int64, err error) {
    read := make(chan int)
    go func() {
        for {
            select {
            case <-ctx.Done():
                return
            case <-time.After(timeout):
                cancel()
            case <-read:
            }
        }
    }()

    buf := make([]byte, 32*1024)
    for {
        nr, er := src.Read(buf)
        if nr > 0 {
            read <- nr
            nw, ew := dst.Write(buf[0:nr])
            written += int64(nw)
            if ew != nil {
                err = ew
                break
            }
        }
        if er != nil {
            if er != io.EOF {
                err = er
            }
            break
        }
    }
    return written, err
}
While I used time.After for brevity, it's more efficient to reuse the Timer. This means taking care to use the correct reset pattern, as the return value of the Reset function is broken:
t := time.NewTimer(timeout)
for {
    select {
    case <-ctx.Done():
        return
    case <-t.C:
        cancel()
    case <-read:
        if !t.Stop() {
            <-t.C
        }
        t.Reset(timeout)
    }
}
You could skip calling Stop altogether here, since in my opinion if the timer fires while calling Reset, it was close enough to cancel anyway, but it's often good to have the code be idiomatic in case this code is extended in the future.
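For completeness, a hypothetical call site (names borrowed from the question's code): the important detail is that the same context is attached to the request, so that a cancel() triggered by the idle timer also aborts the in-flight Read on the response body.

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// Go 1.13+; on older versions build the request and use req.WithContext(ctx)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, srcURL, nil)
if err != nil {
    return err
}
resp, err := httpClient.Do(req)
if err != nil {
    return err
}
defer resp.Body.Close()

// copy with a 30s inactivity window; a timeout surfaces as a
// context cancellation error from the aborted read
written, err := idleTimeoutCopy(outFile, resp.Body, 30*time.Second, ctx, cancel)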

Related

Golang HTTP Timeout test, not timing out as expected

I built a small test case that checks that my code times out after a set amount of time has passed, but it isn't working as expected.
I am hitting a server-side endpoint that works fine, but if it is ever slower than usual I need my code to time out. The timeout is implemented, but I need to test that I implemented it correctly.
This is what I have so far:
func TestTimeout(t *testing.T) {
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(time.Second * 15)
    }))
    defer ts.Close()

    client := &http.Client{
        Timeout: time.Second * 10,
    }

    myRequest, err := createMyRequest(Somedata, SomeMoreData)
    res, err := client.Do(myRequest)
    if err != nil {
        t.Fatal(err)
    }
    res.Body.Close()
}
However, my code runs without giving a timeout error (i.e. I am not waiting for 10 seconds). Where am I going wrong?
As already pointed out in some of the comments, you need to make a request to the test server's endpoint (using ts.URL) as follows:
myRequest, err := http.NewRequest(http.MethodPost, ts.URL, bytes.NewBuffer([]byte("test")))
if err != nil {
    t.Fatal(err)
}
res, err := client.Do(myRequest)
if err != nil {
    t.Fatal(err)
}
[...]
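One more pitfall worth noting: as written, the test treats the timeout as a failure (t.Fatal(err)), even though the timeout is the behavior under test. A sketch of a test that passes when the timeout fires, with durations shortened so the test, and the deferred ts.Close() that waits for the handler to return, finish quickly:

func TestTimeout(t *testing.T) {
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(time.Second) // longer than the client timeout below
    }))
    defer ts.Close() // Close blocks until the handler returns

    client := &http.Client{Timeout: 100 * time.Millisecond}

    req, err := http.NewRequest(http.MethodPost, ts.URL, bytes.NewBufferString("test"))
    if err != nil {
        t.Fatal(err)
    }
    res, err := client.Do(req)
    if err == nil {
        res.Body.Close()
        t.Fatal("expected a timeout error, got a response")
    }
}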

How to correctly implement a goroutine in terminal application

I'm trying to create an HTTP request interface in the terminal where you can pass some data (url, response body, etc.), and then I make the request and display the data somewhere.
I'm trying to do the request in a goroutine and display the results the channel gives me. With quick requests this is impossible to notice, so I created a simple Node endpoint to simulate a computationally heavy endpoint:
app.use(express.json())

app.get("/", (req, res) => {
    new Promise(resolve => {
        setTimeout(() => resolve(), 3000)
    }).then(() => res.status(200).json({message: "OK"}))
    .catch(err => res.status(500).json({error: err.message}))
})

app.listen(4000, () => console.log("Server up"))
When I call this endpoint, the whole UI freezes, and only after the request completes can I continue to use the GUI. For example, I tried to show a loading box in the GUI while the request was being made, but the freeze happened and the loading box was displayed only after the request completed.
Here's the code for the httpRequest function:
func httpRequest(url, method string, body []byte, results chan Result) {
    client := &http.Client{}
    req, err := http.NewRequest(method, url, bytes.NewBuffer(body))
    if err != nil {
        results <- Result{err: err}
        return
    }
    req.Header.Add("content-type", "application/json")

    res, err := client.Do(req)
    if err != nil {
        results <- Result{err: err}
        return
    }
    defer res.Body.Close()

    r := Result{
        method: res.Request.Method,
        url:    res.Request.URL.String(),
        path:   res.Request.URL.Path,
        proto:  res.Proto,
        status: res.Status,
        header: res.Header,
    }

    b, err := io.ReadAll(res.Body)
    if err != nil {
        results <- Result{err: err}
        return
    }
    r.body = string(b)
    results <- r
}
It is called from the processRequest function:
func processRequest(g *ui.Gui, v *ui.View) error {
    method_view, err := g.View("method")
    if err != nil {
        return err
    }
    method_view.Clear()
    fmt.Fprintln(method_view, "loading...")

    // GET ALL REQUEST DATA HERE...

    out, err := g.View("res-output")
    if err != nil {
        return err
    }

    r := make(chan Result, 1)
    start := time.Now()

    switch active_method {
    case 0:
        go httpRequest(api_url, "GET", nil, r)
    case 1:
        go httpRequest(api_url, "POST", request_body_data, r)
    case 2:
        go httpRequest(api_url, "DELETE", request_body_data, r)
    case 3:
        go httpRequest(api_url, "PUT", request_body_data, r)
    }

    result := <-r
    if result.err != nil {
        fmt.Fprintf(out, "Error: %s\n", result.err.Error())
        return nil
    }

    fmt.Fprintf(out, "Time: %f s\n", time.Since(start).Seconds())
    fmt.Fprintln(out, formatResponse(result))

    history_view, err := g.View("history")
    if err != nil {
        return err
    }
    history_item := fmt.Sprintf("%s %s %s\n", result.method, result.url, result.status)
    fmt.Fprintln(history_view, history_item)
    close(r)

    method_view.Clear()
    fmt.Fprintln(method_view, methods[active_method]) // clear the loading text and rewrite the request method
    return nil
}
Can I achieve a behaviour where I can still interact with the UI while the request is being made?
I also tried creating another channel, done, to notify the processRequest func, but I get the same results.
The problem with your current code is that, while you're making the request concurrently, you block waiting for the result at result := <-r. This negates the point of running the request in a goroutine, because you're not doing anything (such as handling UI events) in the meantime.
You could structure your code so that the response is handled in a goroutine too, not just the request; then your application can be used as normal, and the response can update the UI whenever it arrives at some point in the future.
In other words, the response handling in processRequest should run in a goroutine, not just httpRequest.
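A rough sketch of that restructuring, assuming a gocui-style library where g.Update schedules a function on the UI's main loop (the view names are taken from the question; adjust to your UI package): the keybinding handler returns immediately, and a goroutine delivers the result to the UI later.

func processRequest(g *ui.Gui, v *ui.View) error {
    // ... show the "loading..." text and gather request data as before ...
    r := make(chan Result, 1)
    switch active_method {
    case 0:
        go httpRequest(api_url, "GET", nil, r)
    case 1:
        go httpRequest(api_url, "POST", request_body_data, r)
        // ... remaining cases as before ...
    }

    go func() {
        result := <-r // block here, off the UI loop
        g.Update(func(g *ui.Gui) error { // assumed thread-safe UI hook (gocui's Update)
            out, err := g.View("res-output")
            if err != nil {
                return err
            }
            if result.err != nil {
                fmt.Fprintf(out, "Error: %s\n", result.err.Error())
                return nil
            }
            fmt.Fprintln(out, formatResponse(result))
            return nil
        })
    }()
    return nil // the handler returns at once, so the UI stays responsive
}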

Keep alive request for _change continuous feed

I am trying to convert the Node.js code below to Go. I have to establish a keep-alive HTTP request to the PouchDB server's _changes?feed=continuous endpoint. However, I'm not able to achieve it in Go.
var http = require('http')

var agent = new http.Agent({
    keepAlive: true
});

var options = {
    host: 'localhost',
    port: '3030',
    method: 'GET',
    path: '/downloads/_changes?feed=continuous&include_docs=true',
    agent
};

var req = http.request(options, function(response) {
    response.on('data', function(data) {
        let val = data.toString()
        if (val == '\n')
            console.log('newline')
        else {
            console.log(JSON.parse(val))
            // to close the connection
            // agent.destroy()
        }
    });
    response.on('end', function() {
        // Data received completely.
        console.log('end');
    });
    response.on('error', function(err) {
        console.log(err)
    })
});
req.end();
Below is the Go code
client := &http.Client{}
data := url.Values{}
req, err := http.NewRequest("GET", "http://localhost:3030/downloads/_changes?feed=continuous&include_docs=true", strings.NewReader(data.Encode()))
req.Header.Set("Connection", "keep-alive")

resp, err := client.Do(req)
if err != nil {
    fmt.Println(err)
}
defer resp.Body.Close()
fmt.Println(resp.Status)

result, err := ioutil.ReadAll(resp.Body)
if err != nil {
    fmt.Println(err)
}
fmt.Println(result)
I am getting status 200 OK, but no data gets printed; it's stuck. On the other hand, if I use the longpoll option, i.e. http://localhost:3030/downloads/_changes?feed=longpoll, then I receive data.
Your code is working "as expected", but what you wrote in Go is not equivalent to the Node.js code. The Go code blocks on ioutil.ReadAll(resp.Body) because the connection is kept open by the CouchDB server. Once the server closes the connection, your client code will print the result, as ioutil.ReadAll() will then be able to read all data down to EOF.
From CouchDB documentation about continuous feed:
A continuous feed stays open and connected to the database until explicitly closed and changes are sent to the client as they happen, i.e. in near real-time. As with the longpoll feed type you can set both the timeout and heartbeat intervals to ensure that the connection is kept open for new changes and updates.
You can experiment by adding &timeout=1 to the URL, which will force CouchDB to close the connection after 1s. Your Go code should then print the whole response.
The Node.js code works differently: the data event handler is called every time the server sends some data. If you want to achieve the same and process partial updates as they come (before the connection is closed), you cannot use ioutil.ReadAll(), as that waits for EOF (and thus blocks in your case); use something like resp.Body.Read() to process partial buffers instead. Here is a very simplified snippet that demonstrates this and should give you the basic idea:
package main

import (
    "fmt"
    "net/http"
    "net/url"
    "strings"
)

func main() {
    client := &http.Client{}
    data := url.Values{}
    req, err := http.NewRequest("GET", "http://localhost:3030/downloads/_changes?feed=continuous&include_docs=true", strings.NewReader(data.Encode()))
    if err != nil {
        fmt.Println(err)
        return
    }
    req.Header.Set("Connection", "keep-alive")

    resp, err := client.Do(req)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)

    buf := make([]byte, 1024)
    for {
        l, err := resp.Body.Read(buf)
        if l == 0 && err != nil {
            break // this is super simplified
        }
        // here you can send off data to e.g. a channel or start
        // a handler goroutine...
        fmt.Printf("%s", buf[:l])
    }
    fmt.Println()
}
In a real-world application you probably want to make sure your buf holds something that looks like a valid message, and then pass it to a channel or handler goroutine for further processing.
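Since the continuous feed is line-delimited JSON (with bare newlines as heartbeats), one simple way to frame complete messages is bufio.Scanner, which splits on newlines by default. A hedged sketch (note Scanner's default 64KB line limit; raise it with scanner.Buffer for large documents):

scanner := bufio.NewScanner(resp.Body)
for scanner.Scan() {
    line := scanner.Bytes()
    if len(line) == 0 {
        continue // heartbeat newline, ignore
    }
    var change map[string]interface{}
    if err := json.Unmarshal(line, &change); err != nil {
        fmt.Println("skipping invalid message:", err)
        continue
    }
    fmt.Println(change) // or send to a channel / handler goroutine
}
if err := scanner.Err(); err != nil {
    fmt.Println(err)
}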
Finally, I was able to resolve the issue. It was related to the DisableCompression flag; https://github.com/golang/go/issues/16488 gave me the hint.
Setting DisableCompression: true fixed the issue:
client := &http.Client{Transport: &http.Transport{
    DisableCompression: true,
}}
I am assuming client := &http.Client{} uses DisableCompression: false by default and the PouchDB server is sending compressed JSON, hence the received data was compressed and resp.Body.Read was not able to read it.

Golang write net.Dial response to the browser

I am playing with the net package, and I want to make a simple proxy.
First I create a listener on localhost, then I dial the remote address:
remote, err := net.Dial("tcp", "google.com:80")
if err != nil {
    log.Fatal(err)
}
defer remote.Close()
fmt.Fprint(remote, "GET / HTTP/1.0\r\n\r\n")
How can I pipe the response to the browser? Or do I need to use the default web server and copy the response body? I really want to try it with the net package.
Thanks
To pipe data between the two connections, use two goroutines with io.Copy:
func copyContent(from, to net.Conn, done chan bool) {
    if _, err := io.Copy(from, to); err != nil {
        log.Println(err)
    }
    done <- true // signal exactly once, whether the copy ended in an error or not
}

// in the main func
done := make(chan bool, 2)
go copyContent(conn, remote, done)
go copyContent(remote, conn, done)
<-done
<-done
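Tying it together, a minimal end-to-end sketch (the listener address and the fixed upstream host are assumptions for illustration): accept browser connections on localhost and pipe bytes both ways until either side closes.

ln, err := net.Listen("tcp", "localhost:8080")
if err != nil {
    log.Fatal(err)
}
for {
    conn, err := ln.Accept()
    if err != nil {
        log.Println(err)
        continue
    }
    go func(conn net.Conn) {
        defer conn.Close()
        remote, err := net.Dial("tcp", "google.com:80")
        if err != nil {
            log.Println(err)
            return
        }
        defer remote.Close()

        done := make(chan bool, 2)
        go copyContent(conn, remote, done) // remote -> browser
        go copyContent(remote, conn, done) // browser -> remote
        <-done
        <-done
    }(conn)
}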

Specify timeout when tracing HTTP request in Go

I know the usual method of specifying a timeout with HTTP requests by doing:
httpClient := http.Client{
    Timeout: time.Duration(5 * time.Second),
}
However, I can't seem to figure out how to do the same when tracing HTTP requests. Here is the piece of code I am working with:
func timeGet(url string) (httpTimingBreakDown, error) {
    req, _ := http.NewRequest("GET", url, nil)

    var start, connect, dns, tlsHandshake time.Time
    var timingData httpTimingBreakDown
    timingData.url = url

    trace := &httptrace.ClientTrace{
        TLSHandshakeStart: func() { tlsHandshake = time.Now() },
        TLSHandshakeDone:  func(cs tls.ConnectionState, err error) { timingData.tls = time.Since(tlsHandshake) },
    }

    req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
    start = time.Now()

    http.DefaultTransport.(*http.Transport).ResponseHeaderTimeout = time.Second * 10 // hacky way, worked earlier but doesn't work anymore
    if _, err := http.DefaultTransport.RoundTrip(req); err != nil {
        fmt.Println(err)
        return timingData, err
    }
    timingData.total = time.Since(start)
    return timingData, nil
}
I am firing this function inside a goroutine for each URL; my sample data set is 100 URLs. All goroutines fire, but the program eventually ends after 30+ seconds, as if the timeout were 30 seconds.
Earlier I made this work with the hacky method of changing the default transport's timeout to 10 seconds: anything that took too long timed out, and the program ended at 10.xx seconds, but now it's taking 30.xx seconds.
What would be a proper way of specifying a timeout in this scenario?
I know the usual method of specifying a timeout with HTTP requests by doing:
httpClient := http.Client{
    Timeout: time.Duration(5 * time.Second),
}
Actually, the preferred method is to attach a context.Context to the request. The method you've used is just a shortcut suitable for simple use cases.
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
    return nil, err
}

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
req = req.WithContext(ctx)
And this method should work nicely for your situation as well.
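Applied to the tracing function from the question, a sketch (the httpTimingBreakDown struct is assumed from the question; the 5-second budget is arbitrary): the trace callbacks and the deadline live on the same context, so no global transport state needs to be mutated.

func timeGet(url string) (httpTimingBreakDown, error) {
    var timingData httpTimingBreakDown
    timingData.url = url

    req, err := http.NewRequest(http.MethodGet, url, nil)
    if err != nil {
        return timingData, err
    }

    var tlsHandshake time.Time
    trace := &httptrace.ClientTrace{
        TLSHandshakeStart: func() { tlsHandshake = time.Now() },
        TLSHandshakeDone: func(cs tls.ConnectionState, err error) {
            timingData.tls = time.Since(tlsHandshake)
        },
    }

    // one context carries both the trace and the timeout
    ctx, cancel := context.WithTimeout(req.Context(), 5*time.Second)
    defer cancel()
    req = req.WithContext(httptrace.WithClientTrace(ctx, trace))

    start := time.Now()
    resp, err := http.DefaultTransport.RoundTrip(req)
    if err != nil {
        return timingData, err
    }
    resp.Body.Close()

    timingData.total = time.Since(start)
    return timingData, nil
}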
