I need to make a request to a server that returns different responses at different times. The server generates several responses, each taking a different amount of time to produce, and it returns each response as soon as it is available.
I want to print these responses to the screen (that would be enough for now) as soon as the server sends them to me.
All I have managed so far is to print the responses only after the server has returned all of them. So if the first response takes 1 second and the last one takes 10 seconds, my code has to wait 10 seconds before printing any of the messages.
EDIT: here is the code I have so far:
// Config is read from a YAML file
RestConfig = Config["rest"].(map[string]interface{})
ServerConfig = Config["server"].(map[string]interface{})

RequestUrl := ServerConfig["url"]
RequestReader := bytes.NewReader(body)

Request, _ := http.NewRequest("POST", RequestUrl.(string), RequestReader)

// AppendHeaders appends the needed headers to the request
client.AppendHeaders(Request, RestConfig["headers"])

// client.HttpClient is of type *http.Client
Response, _ := client.HttpClient.Do(Request)

// And to print to the screen
defer Response.Body.Close()

fmt.Println("-> Receiving response:\n---\n")
fmt.Println(Response, "\n---\n-> Response body:\n---\n")

body_resp, _ := ioutil.ReadAll(Response.Body)
fmt.Println(string(body_resp))
fmt.Println("\n--\n")
Is there any way to do this?
Thank you very much.
Finally, my code looks like this:
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	var body = "The body"
	RequestReader := bytes.NewReader([]byte(body))

	req, err := http.NewRequest("POST", "the_url", RequestReader)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Add("Accept", "application/xml")
	req.Header.Add("Content-Type", "application/xml")
	req.Header.Add("AG-Authorization", "key")
	req.Header.Add("AG-Forwarded-Hosts", "*")

	resp, err := (&http.Client{}).Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	reader := bufio.NewReader(resp.Body)
	message := ""
	for {
		line, err := reader.ReadBytes('\n')
		message = message + string(line)
		if strings.Contains(message, "<!-- End mark for each message -->") {
			fmt.Println(message)
			message = ""
		}
		if err == io.EOF {
			break // the server closed the stream
		}
		if err != nil {
			log.Fatal(err)
		}
	}
}
Thanks, everyone.
The context package is what you are looking for.
The context package carries cancellation signals and deadlines across API boundaries, for both server requests and the requests a process makes to other services. Among its functions are WithCancel and WithTimeout. The Context associated with an incoming request is typically canceled when the request handler returns.
For your specific case you can use the WithTimeout function to set a deadline on requests to backend servers.
// WithTimeout returns a copy of parent whose Done channel is closed as soon as
// parent.Done is closed, cancel is called, or timeout elapses. The new
// Context's Deadline is the sooner of now+timeout and the parent's deadline, if
// any. If the timer is still running, the cancel function releases its
// resources.
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc)
And here is a snippet taken from https://blog.golang.org/context/server/server.go
timeout, err := time.ParseDuration(req.FormValue("timeout")) // set a time limit in your post
if err == nil {
	// The request has a timeout, so create a context that is
	// canceled automatically when the timeout expires.
	ctx, cancel = context.WithTimeout(context.Background(), timeout)
} else {
	ctx, cancel = context.WithCancel(context.Background())
}
defer cancel() // Cancel ctx as soon as handleSearch returns.
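Applied to the question's outgoing request, a minimal sketch of this approach might look like the following; the URL, body, and 5-second timeout are placeholder values, not taken from the question:

package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Give up on the backend call if it takes longer than 5 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, "POST", "http://example.com/endpoint", bytes.NewReader([]byte("The body")))
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err) // includes a context.DeadlineExceeded error if the timeout fires
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}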
For further reading take a look at this article:
https://blog.golang.org/context
Related
I am writing an HTTP server in Go, which uses the following pattern to handle API output:
func handler(w http.ResponseWriter, r *http.Request) {
	defer reply(w, r, L)() // L is a Logger
	// do things...
}

func reply(w http.ResponseWriter, r *http.Request, log Logger) func() {
	cid := []byte{0, 0, 0, 0}
	if log != nil {
		rand.Read(cid)
		log.Debug("[%x] %s %s", cid, r.Method, r.URL.String())
	}
	entry := time.Now()
	return func() {
		if log != nil {
			defer log.Debug("[%x] elapsed %d millis", cid, time.Since(entry).Milliseconds())
		}
		_, err := w.Write(nil)
		if err == http.ErrHijacked {
			return // API is a WEBSOCKET entry point, do nothing
		}
		// handle common output logic for normal HTTP APIs...
	}
}
The reason I do this is that I found this comment in the standard library:
// ErrHijacked is returned by ResponseWriter.Write calls when
// the underlying connection has been hijacked using the
// Hijacker interface. A zero-byte write on a hijacked
// connection will return ErrHijacked without any other side
// effects.
ErrHijacked = errors.New("http: connection has been hijacked")
However, following the Write() method, I found this comment:
// Write writes the data to the connection as part of an HTTP reply.
//
// If WriteHeader has not yet been called, Write calls
// WriteHeader(http.StatusOK) before writing the data. If the Header
// does not contain a Content-Type line, Write adds a Content-Type set
// to the result of passing the initial 512 bytes of written data to
// ...
Write([]byte) (int, error)
My questions are:
Is it OK to use my code to safely detect whether an HTTP connection has been hijacked? I only want to check whether the connection is hijacked; I do NOT want the call to add headers for me!
Since ResponseWriter is an interface, I cannot click through to the source code to see how the standard library implements that method. In general, how can I drill down into the standard library (or any open-source code) to find the implementation of an interface?
Thanks to Cerise, I found the source code of the standard library's response.write method:
func (w *response) write(lenData int, dataB []byte, dataS string) (n int, err error) {
	if w.conn.hijacked() {
		if lenData > 0 {
			caller := relevantCaller()
			w.conn.server.logf("http: response.Write on hijacked connection from %s (%s:%d)", caller.Function, path.Base(caller.File), caller.Line)
		}
		return 0, ErrHijacked
	}
	... ....
So, as the documentation says, a zero-byte write on a hijacked connection has NO side effects.
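For reference, here is the same probe written as a small standalone helper (just a sketch around the pattern in the question, assuming the errors and net/http imports). errors.Is works because ErrHijacked is returned unwrapped, so a plain == comparison is equivalent. Note, per the Write documentation quoted above, that on a connection that has not been hijacked a zero-byte Write will still call WriteHeader(http.StatusOK) if the header has not been written yet:

// hijacked reports whether the connection behind w has been taken over,
// for example by a WebSocket upgrade. A zero-byte Write is used as the probe;
// on a hijacked connection it returns http.ErrHijacked with no other effects.
func hijacked(w http.ResponseWriter) bool {
	_, err := w.Write(nil)
	return errors.Is(err, http.ErrHijacked)
}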
I have an HTTP server that, when it receives a request, calls an underlying gRPC server.
I have chosen to abstract the gRPC call away behind an interface, to make testing the HTTP server easier.
The problem is that I am constantly getting the errors:
rpc error: code = Canceled desc = grpc: the client connection is closing
or
rpc error: code = Canceled desc = context canceled
As I understand it, both of these are related to the context that gets passed into the gRPC call, and I need the context to stay alive throughout both the HTTP and gRPC calls.
type SetterGetter interface {
	Getter(key string) (val string)
}

type Service struct {
	sg  SetterGetter
	ctx context.Context
}

func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
	key := r.URL.Query()["key"][0]
	res := s.sg.Getter(key)
	fmt.Fprintf(rw, "Successfully got value: %s\n", res)
}

func main() {
	s := new(Service)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	s.sg = gc.NewClientwrapper(ctx)

	http.HandleFunc("/get", s.getHandler)
	log.Fatal(http.ListenAndServe(port, nil))
}
And my Getter implementation looks like this:
type clientwrapper struct {
	sc  pb.ServicesClient
	ctx context.Context
}

func NewClientwrapper(ctx context.Context) *clientwrapper {
	cw := new(clientwrapper)

	conn, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		err = fmt.Errorf("Error could not dial address: %v", err)
	}
	defer conn.Close()

	cw.ctx = ctx
	cw.sc = pb.NewServicesClient(conn)

	return cw
}

func (cw *clientwrapper) Getter(key string) (val string) {
	// Make the GRPC request
	res, err := cw.sc.Get(cw.ctx, &pb.GetRequest{Key: key})
	if err != nil {
		return ""
	}
	getVal := res.GetValue()
	return getVal
}
So here I am creating a context in my HTTP server's main function and passing it onwards. I do it like this because it worked when I removed my interface and put everything in the main file.
I have also tried creating the context in the HTTP handler and passing it to the Getter, and I have tried creating it in the Getter itself.
I think the correct approach is to create the context in the HTTP handler from the request's own context and then pass it to the gRPC Getter, like this:
func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
	// Create it like such
	ctx, cancel := context.WithTimeout(r.Context(), 100*time.Second)
	defer cancel()

	key := r.URL.Query()["key"][0]

	// And pass it onwards (of course we need to change the function signature for this to work)
	res := s.sg.Getter(ctx, key)

	fmt.Fprintf(rw, "Successfully got value: %s\n", res)
}
So how should I create my context here so that I don't get these errors?
If your goal is to keep a long-running task running in the background that isn't canceled when the request finishes, then don't use the request's context. Use context.Background() instead.
For example:
func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Second)
	// ...
I am implementing a retry using http.RoundTripper in Go. Here is an implementation example.
type retryableRoundTripper struct {
	tr            http.RoundTripper
	maxRetryCount int
}

func (t *retryableRoundTripper) RoundTrip(req *http.Request) (resp *http.Response, err error) {
	for count := 0; count < t.maxRetryCount; count++ {
		log.Printf("retryableRoundTripper retry: %d\n", count+1)
		resp, err = t.tr.RoundTrip(req)
		if err != nil || resp.StatusCode != http.StatusTooManyRequests {
			return resp, err
		}
	}
	return resp, err
}
Questions
Is it necessary to read and close the response body to reuse the TCP connection on retry?
func (t *retryableRoundTripper) RoundTrip(req *http.Request) (resp *http.Response, err error) {
	for count := 0; count < t.maxRetryCount; count++ {
		log.Printf("retryableRoundTripper retry: %d\n", count+1)
		resp, err = t.tr.RoundTrip(req)
		if err != nil || resp.StatusCode != http.StatusTooManyRequests {
			return resp, err
		}
	}
	// add
	io.Copy(ioutil.Discard, resp.Body)
	resp.Body.Close()
	return resp, err
}
As a side note, I've written a test and have confirmed that retries work as expected. (In Go Playground, it times out, but it works locally.)
https://play.golang.org/p/08YWV0kjaKr
Of course you need to read the response body to ensure that the connection can be reused, and closing the body is required, as documented.
As stated in the docs:
The client must close the response body when finished with it
and
The default HTTP client's Transport may not
reuse HTTP/1.x "keep-alive" TCP connections if the Body is
not read to completion and closed.
If the server wants to send more data than fits in the initial read buffers, it is going to be blocked sending the response. This means that if the transport were to attempt to send a new request over that connection the server may not be able to handle it because it never completed the first request. This will usually result in a client error of connection reset by peer and a server error of write: broken pipe.
If you want to make an attempt to reuse the connection, but limit the amount read, use an io.LimitedReader and/or check the ContentLength value. This way you can discard the connection when it's faster to handle the errors and bring up a new connection than to read an unbounded amount of data. See Limiting amount of data read in the response to an HTTP GET request.
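Putting that together, a sketch of the retry loop that drains a bounded amount and closes each discarded 429 response before retrying; the 4 KB limit and the absence of a backoff delay are arbitrary choices for the sketch, not requirements:

func (t *retryableRoundTripper) RoundTrip(req *http.Request) (resp *http.Response, err error) {
	for count := 0; count < t.maxRetryCount; count++ {
		resp, err = t.tr.RoundTrip(req)
		if err != nil || resp.StatusCode != http.StatusTooManyRequests {
			return resp, err
		}
		if count == t.maxRetryCount-1 {
			break // out of retries; hand the last response to the caller unread
		}
		// Drain a bounded amount and close the body we are about to discard
		// so the underlying TCP connection can be reused for the next attempt.
		io.Copy(io.Discard, io.LimitReader(resp.Body, 4<<10))
		resp.Body.Close()
	}
	return resp, err
}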
I have a basic HTTP server that accepts a request and returns data from a data store.
Each HTTP request does the following things:
Create a context with timeout
Create a read request (custom type)
Push read request onto channel
Wait for response and serve data
Here's the basic pseudo code:
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

type dataRequest struct {
	data chan string
	ctx  context.Context
}

func handler(reqStream chan dataRequest) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
		defer cancel()

		req := dataRequest{
			data: make(chan string),
			ctx:  ctx,
		}

		select {
		case reqStream <- req:
			// request pushed to queue
		case <-ctx.Done():
			// don't push onto reqStream if ctx done
		}

		select {
		case <-ctx.Done():
			// don't try to serve content if ctx done
		case data := <-req.data:
			// return data to client
			fmt.Fprintln(w, data)
		}
	}
}

func main() {
	dataReqs := make(chan dataRequest)

	go func() {
		for {
			select {
			case req := <-dataReqs:
				select {
				case <-req.ctx.Done():
					// don't push onto data channel if ctx done
				case req.data <- "some data":
					// get data from store
				}
			}
		}
	}()

	http.HandleFunc("/", handler(dataReqs))
	http.ListenAndServe(":8080", nil)
}
My question is: since the context could finish at any time, because the deadline is exceeded or the client cancels the request, is my current approach of handling this in multiple places correct, or is there a more elegant solution?
Seems to me that it'll work.
A few comments:
You can return in the first <-ctx.Done() case (see the sketch below).
You're already waiting on req.ctx.Done() in the data store handler, so you could remove the first select statement entirely and just publish to the data requests channel. I'm not sure about the performance hit in the rare case where the context is done before the request is even published...
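A sketch of the handler with the first comment applied, so that it returns as soon as the context is done instead of falling through to the second select; the 504 status is just an example choice:

func handler(reqStream chan dataRequest) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
		defer cancel()

		req := dataRequest{data: make(chan string), ctx: ctx}

		select {
		case reqStream <- req:
			// request queued
		case <-ctx.Done():
			http.Error(w, "timed out", http.StatusGatewayTimeout)
			return // nothing was queued, so no data will ever arrive
		}

		select {
		case <-ctx.Done():
			http.Error(w, "timed out", http.StatusGatewayTimeout)
		case data := <-req.data:
			fmt.Fprintln(w, data)
		}
	}
}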
I have an application on Bluemix written in Go. I send it a large chunked HTTP request, and my application only gets notified of the request once it has fully arrived at its destination, instead of as soon as the headers appear (at the beginning, without waiting for the body).
The same code on a regular server works fine without any delays; the problem only appears when it runs on Bluemix. I ran several experiments, such as listening on raw TCP instead of HTTP (while still sending HTTP requests), and again the first sign of any network activity came only when the request had fully arrived. I also tried streaming the data very slowly, 1 byte every 100 ms, and sending small requests, and the result is the same.
I assume there is some buffering before the request is forwarded to the dynamic port the app listens on, or maybe somewhere else. Any help on why this happens and how to avoid it would be much appreciated. Thank you.
UPDATE: server code in Go that listens on TCP:
func main() {
	var port string
	if port = os.Getenv("PORT"); len(port) == 0 {
		port = DEFAULT_PORT
	}

	l, err := net.Listen("tcp", ":"+port)
	if err != nil {
		fmt.Println("Error listening:", err.Error())
		os.Exit(1)
	}
	// Close the listener when the application closes.
	defer l.Close()

	for {
		// Listen for an incoming connection.
		conn, err := l.Accept()
		if err != nil {
			fmt.Println("Error accepting: ", err.Error())
			os.Exit(1)
		}
		// Handle connections in a new goroutine.
		go handleTCPRequest(conn)
	}
}

func handleTCPRequest(conn net.Conn) {
	fmt.Println("Got something!!!", conn)

	// Make a buffer to hold incoming data.
	buf := make([]byte, 1024)
	// Read the incoming connection into the buffer.
	reqLen, err := conn.Read(buf)
	if err != nil {
		fmt.Println("Error reading:", err.Error(), conn)
	} else {
		fmt.Println("Read: ", reqLen, buf, conn)
	}

	// Close the connection when you're done with it.
	conn.Close()
}
The client just sends either a big HTTP request or a throttled one.
I would expect "Got something!!!" to be printed very quickly, after the first packets arrive, but that is not what I see. For example, with the throttling client I receive everything at once, and "Got something!!!" is printed only after the full request has arrived.
Code of the throttling client in Go:
reader, writer := io.Pipe()

request, _ := http.NewRequest("PUT", URL, reader)
request.Header = http.Header{}
request.Header["Connection"] = []string{"keep-alive"}
request.Header["X-Content-Type-Options"] = []string{"nosniff"}
request.Header["Accept-Encoding"] = []string{"gzip, deflate"}
request.Header["Accept"] = []string{"*/*"}
request.Header["Transfer-Encoding"] = []string{"chunked"}
request.TransferEncoding = []string{"chunked"}

// writes one byte every 100 ms, asynchronously
go func(w *io.PipeWriter) {
	i := 0
	for {
		i += 1
		w.Write([]byte{0})
		time.Sleep(100 * time.Millisecond)
		fmt.Print("-")
		if i > 1000 {
			w.CloseWithError(io.EOF)
			break
		}
	}
}(writer)

http.DefaultClient.Do(request)