I have a basic HTTP server that accepts a request and returns data from a data store.
Each HTTP request does the following things:
Create a context with timeout
Create a read request (custom type)
Push read request onto channel
Wait for response and serve data
Here's the basic pseudo code:
package main
import (
"context"
"net/http"
"time"
)
type dataRequest struct {
data chan string
ctx context.Context
}
func handler(reqStream chan dataRequest) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
defer cancel()
req := dataRequest{
data: make(chan string),
ctx: ctx,
}
select {
case reqStream <- req:
// request pushed onto the queue
case <-ctx.Done():
// don't push onto reqStream if ctx done
}
select {
case <-ctx.Done():
// don't try and serve content if ctx done
case data := <-req.data:
// return data to client
}
}
}
func main() {
dataReqs := make(chan dataRequest)
go func() {
for {
select {
case req := <-dataReqs:
select {
case <-req.ctx.Done():
// don't push onto data channel if ctx done
case req.data <- "some data":
// get data from store
}
}
}
}()
http.HandleFunc("/", handler(dataReqs))
http.ListenAndServe(":8080", nil)
}
My question is: because the context could finish at any time, due to the deadline being exceeded or the client cancelling the request, is my current approach of handling this in multiple places correct, or is there a more elegant solution?
Seems to me that it'll work.
A few comments:
You can return in the first <-ctx.Done() case instead of falling through to the second select.
You're already waiting for req.ctx.Done() in the data store goroutine, so you can remove the first select statement entirely and just publish to the data requests channel. I'm not sure about the performance hit in the rare case where the context is already done before the request is even published... A sketch of that simplification is shown below.
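For illustration, here is a sketch of that simplified handler, reusing the dataRequest type from the question (the fmt import and error handling are omitted for brevity, and the timeout response is just one possible choice):

func handler(reqStream chan dataRequest) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
        defer cancel()
        req := dataRequest{data: make(chan string), ctx: ctx}
        // Publish unconditionally; the store goroutine also selects on
        // req.ctx.Done(), so a cancelled request is never serviced.
        reqStream <- req
        select {
        case <-ctx.Done():
            http.Error(w, "request cancelled or timed out", http.StatusGatewayTimeout)
        case data := <-req.data:
            fmt.Fprintln(w, data)
        }
    }
}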
I am writing an HTTP server in Go, which uses the following pattern to handle API output:
func handler(w http.ResponseWriter, r *http.Request) {
defer reply(w, r, L)() //L is a Logger
//do things...
}
func reply(w http.ResponseWriter, r *http.Request, log Logger) func() {
cid := []byte{0, 0, 0, 0}
if log != nil {
rand.Read(cid)
log.Debug("[%x] %s %s", cid, r.Method, r.URL.String())
}
entry := time.Now()
return func() {
if log != nil {
defer log.Debug("[%x] elapsed %d millis", cid, time.Since(entry).Milliseconds())
}
_, err := w.Write(nil)
if err == http.ErrHijacked {
return //API is a WEBSOCKET entry point, do nothing
}
//handle common output logic for normal HTTP APIs...
}
}
The reason I do this is that I found this comment in the standard library:
// ErrHijacked is returned by ResponseWriter.Write calls when
// the underlying connection has been hijacked using the
// Hijacker interface. A zero-byte write on a hijacked
// connection will return ErrHijacked without any other side
// effects.
ErrHijacked = errors.New("http: connection has been hijacked")
However, following the Write() method, I found this comment:
// Write writes the data to the connection as part of an HTTP reply.
//
// If WriteHeader has not yet been called, Write calls
// WriteHeader(http.StatusOK) before writing the data. If the Header
// does not contain a Content-Type line, Write adds a Content-Type set
// to the result of passing the initial 512 bytes of written data to
// ...
Write([]byte) (int, error)
My questions are:
Is it OK to use my code to safely detect whether an HTTP connection has been hijacked? I only want to check whether the connection is hijacked or not, but I do NOT want it to add headers for me!
Since the ResponseWriter is an interface, I cannot click through the source code to find out how the standard library implements that method. In general, how can I drill down to the standard library (or any open source code) to find out the implementation of an interface?
Thanks to Cerise, I found the source code of the write method on the standard library's response type:
func (w *response) write(lenData int, dataB []byte, dataS string) (n int, err error) {
if w.conn.hijacked() {
if lenData > 0 {
caller := relevantCaller()
w.conn.server.logf("http: response.Write on hijacked connection from %s (%s:%d)", caller.Function, path.Base(caller.File), caller.Line)
}
return 0, ErrHijacked
}
... ....
So, as the documentation says, there is NO side effect.
I have an HTTP server that, when it receives a request, calls an underlying gRPC server.
I have chosen to abstract away the gRPC call behind an interface, to make testing the HTTP server easier.
The problem is that I am constantly getting the errors:
rpc error: code = Canceled desc = grpc: the client connection is closing
or
rpc error: code = Canceled desc = context canceled
As I understand it, both of these are related to the context that gets passed into the gRPC call, and I want the context to stay alive throughout both the HTTP and gRPC calls.
type SetterGetter interface {
Getter(key string) (val string)
}
type Service struct {
sg SetterGetter
ctx context.Context
}
func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
key := r.URL.Query()["key"][0]
res := s.sg.Getter(key)
fmt.Fprintf(rw, "Successfully got value: %s\n", res)
}
func main() {
s := new(Service)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
s.sg = gc.NewClientwrapper(ctx)
http.HandleFunc("/get", s.getHandler)
log.Fatal(http.ListenAndServe(port, nil))
}
And my Getter implementation looks like this:
type clientwrapper struct {
sc pb.ServicesClient
ctx context.Context
}
func NewClientwrapper(ctx context.Context) *clientwrapper {
cw := new(clientwrapper)
conn, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithBlock())
if err != nil {
err = fmt.Errorf("Error could not dial address: %v", err)
}
defer conn.Close()
cw.ctx = ctx
cw.sc = pb.NewServicesClient(conn)
return cw
}
func (cw *clientwrapper) Getter(key string) (val string) {
// Make the GRPC request
res, err := cw.sc.Get(cw.ctx, &pb.GetRequest{Key: key})
if err != nil {
return ""
}
getVal := res.GetValue()
return getVal
}
So here I am creating a context in my HTTP server's main function and passing it onwards. I do it like this because it worked when I removed my interface and put everything in the main file.
I have also tried creating the context in the HTTP handler and passing it to the Getter, and I have also tried creating it in the Getter itself.
I think the correct approach is to create the context in the HTTP handler, derived from the context that comes with the request, and then pass it to the gRPC Getter, like so:
func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
// Create it like such
ctx, cancel := context.WithTimeout(r.Context(), 100*time.Second)
key := r.URL.Query()["key"][0]
// And pass it onwards (of course we need to change function signature for this to work)
res := s.sg.Getter(ctx, key)
fmt.Fprintf(rw, "Successfully got value: %s\n", res)
}
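Presumably the interface and the wrapper would then have to accept the context as well, something like this (just a sketch):

type SetterGetter interface {
    Getter(ctx context.Context, key string) (val string)
}

func (cw *clientwrapper) Getter(ctx context.Context, key string) (val string) {
    // Use the per-request context instead of the one stored on the wrapper.
    res, err := cw.sc.Get(ctx, &pb.GetRequest{Key: key})
    if err != nil {
        return ""
    }
    return res.GetValue()
}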
So how should I create my context here to avoid these errors?
If your goal is to keep a long-running task running in the background that isn't cancelled when the request finishes, then don't use the request's context. Use context.Background() instead.
For example:
func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Second)
// ...
I am trying to create a framework in which I receive requests over a REST API and wait for another service (which works over gRPC) to poll and execute the request. This is needed because the "other" service is very deeply embedded in the network and I can't call it directly. At the same time, I would like to buffer the output of the other service back to the request origin.
Any ideas on how I can share this data between two different asynchronous API requests? Using the file system is one way... but I was wondering whether I can do it better via channels or something similar?
Kind of pseudo code below:
func RestHandler(payload string) (string, error){
respChan := make(chan string)
workId := placeWorkInQueue(payload)
// Start polling in the background
go pollForResult(respChan, workId)
// wait for result in the channel
var result string
select {
case result = <-respChan:
// implement your timeout logic as a another case: here
}
return result, nil
}
// This is poller for just the workId given to it.
func pollForResult(respChan chan string, workId string) {
// Do the polling for workId result
/// Write response to respChan when available
// You may have to implement a timeout to give up polling.
}
func placeWorkInQueue(s string) string {
// Place the job in queue and return a unique workId
return "unique-id"
}
Use Redis queues in both directions. The API endpoint writes the request and a unique ID to a queue, registers a Go channel (keyed by the unique ID) with a central reader in the process, and waits on the Go channel.
The queue reader gets responses with their IDs from the Redis queue and sends each response to the appropriate Go channel.
// Response represents the response type from the
// remote service.
type Response struct {
ID string
// ... more fields as needed
}
// Request represents a request to the remote service.
type Request struct {
ID string
// ... more fields as needed
}
// enqueueRequest writes request to Redis queue.
// Implementation TBD by application.
func enqueueRequest(r *Request) error {
return nil
}
// dequeueResponse reads a response from a Redis queue.
// Implementation TBD by application.
func dequeueResponse() (*Response, error) {
return nil, nil
}
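For illustration only, here is one possible shape of those two functions using the go-redis client (the client setup, the import of github.com/go-redis/redis/v8, and the list names "requests" and "responses" are assumptions, not part of the design above):

var rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})

// enqueueRequest JSON-encodes the request and pushes it onto a Redis list.
func enqueueRequest(r *Request) error {
    b, err := json.Marshal(r)
    if err != nil {
        return err
    }
    return rdb.LPush(context.Background(), "requests", b).Err()
}

// dequeueResponse blocks until a response is available on the
// "responses" list and decodes it.
func dequeueResponse() (*Response, error) {
    vals, err := rdb.BRPop(context.Background(), 0, "responses").Result()
    if err != nil {
        return nil, err
    }
    var resp Response
    if err := json.Unmarshal([]byte(vals[1]), &resp); err != nil {
        return nil, err
    }
    return &resp, nil
}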
Use sync.Map to register waiting Go channels from API request handlers. The key is the unique id for the request and the value is a chan *Response.
var responseWaiters sync.Map
Run queuePump in a single goroutine to dequeue results from the Redis queue and send them to the appropriate channel. Start the goroutine before serving HTTP requests.
func queuePump() {
for {
response, err := dequeueResponse()
if err != nil {
// handle error
}
v, ok := responseWaiters.Load(response.ID)
if ok {
c := v.(chan *Response)
c <- response
// Remove channel from map to ensure that the pump never sends
// twice to the same channel. The pump would block forever if
// that happened.
responseWaiters.Delete(response.ID)
}
}
}
The API endpoint allocates a unique ID for the request, registers a channel with the queue pump, enqueues the request, and waits for the response.
func apiEndpoint(w http.ResponseWriter, r *http.Request) {
id := generateUniqueID()
c := make(chan *Response, 1) // capacity 1 ensures that queue pump will not block
responseWaiters.Store(id, c)
defer responseWaiters.Delete(id)
req := &Request{
ID: id,
// fill in other fields as needed
}
if err := enqueueRequest(req); err != nil {
// handle error
}
select {
case resp := <-c:
// process response
fmt.Println(resp)
case <-time.After(10 * time.Second):
// handle timeout error
}
}
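The generateUniqueID function is left to the application; a minimal sketch using crypto/rand and encoding/hex (just one possible approach) could be:

// generateUniqueID returns a random hex string used to correlate a
// request with its response. Any scheme that produces unique IDs works.
func generateUniqueID() string {
    b := make([]byte, 16)
    if _, err := rand.Read(b); err != nil {
        panic(err) // crypto/rand.Read should not fail
    }
    return hex.EncodeToString(b)
}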
I need to make a request to a server that returns different responses at different times. I mean, the server generates different responses, and these responses take different amounts of time to produce, so the server returns each response as soon as it is available.
And I want to print these responses to the screen (for the moment, I'd settle for that) as soon as the server returns them to me.
All I have managed so far is to print the responses, but only once the server has returned all of them. So if the first response takes 1 second and the last response takes 10 seconds, my code has to wait 10 seconds to print all the messages.
EDIT: adding the code I have:
//Config is gotten from yml file
RestConfig = Config["rest"].(map[string]interface{})
ServerConfig = Config["server"].(map[string]interface{})
RequestUrl := ServerConfig["url"]
RequestReader := bytes.NewReader(body)
Request, _ := http.NewRequest("POST", RequestUrl.(string), RequestReader)
//AppendHeaders append the needing headers to the request
client.AppendHeaders(Request, RestConfig["headers"])
//the type of client.HttpClient is *http.Client
Response, _ := client.HttpClient.Do(Request)
//And to print in the screen
defer Response.Body.Close()
fmt.Println( "-> Receiving response:\n---\n" )
fmt.Println( Response , "\n---\n-> Response body:\n---\n")
body_resp, _ := ioutil.ReadAll(Response.Body)
fmt.Println( string(body_resp) )
fmt.Println( "\n--\n")
Is there any way to do this?
Thank you very much.
Finally, my code is like this:
package main
import (
"bufio"
"bytes"
"fmt"
"io"
"log"
"net/http"
"strings"
)
func main() {
var body = "The body"
RequestReader := bytes.NewReader([]byte(body))
req, err := http.NewRequest("POST", "the_url", RequestReader)
if err != nil {
log.Fatal(err)
}
req.Header.Add("Accept", "application/xml")
req.Header.Add("Content-Type", "application/xml")
req.Header.Add("AG-Authorization", "key")
req.Header.Add("AG-Forwarded-Hosts", "*")
resp, err := (&http.Client{}).Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
reader := bufio.NewReader(resp.Body)
message := ""
for {
line, err := reader.ReadBytes('\n')
if err == io.EOF {
break // server closed the connection; stop reading
}
if err != nil {
log.Fatal(err)
}
message = message + string(line)
if strings.Contains(message, "<!-- End mark for each message -->"){
fmt.Println(message)
message = ""
}
}
}
Thanks everyone.
The context package is what you are looking for.
The context package handles cancellation signals and operation deadlines for processes and server requests. Among its functions are WithCancel and WithTimeout. The Context associated with an incoming request is typically canceled when the request handler returns.
For your specific case, you can use WithTimeout to set a deadline on requests to backend servers.
// WithTimeout returns a copy of parent whose Done channel is closed as soon as
// parent.Done is closed, cancel is called, or timeout elapses. The new
// Context's Deadline is the sooner of now+timeout and the parent's deadline, if
// any. If the timer is still running, the cancel function releases its
// resources.
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc)
And here is a snippet taken from https://blog.golang.org/context/server/server.go
timeout, err := time.ParseDuration(req.FormValue("timeout")) // set a time limit in your post
if err == nil {
// The request has a timeout, so create a context that is
// canceled automatically when the timeout expires.
ctx, cancel = context.WithTimeout(context.Background(), timeout)
} else {
ctx, cancel = context.WithCancel(context.Background())
}
defer cancel() // Cancel ctx as soon as handleSearch returns.
For further reading take a look at this article:
https://blog.golang.org/context
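Applied to the code in the question, attaching such a context to the outgoing request could look roughly like this (a sketch, assuming Go 1.13+ for http.NewRequestWithContext; the 10-second timeout is an arbitrary example):

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel() // release the timer once we are done

// The request is abandoned when the deadline expires or cancel is called.
req, err := http.NewRequestWithContext(ctx, "POST", "the_url", RequestReader)
if err != nil {
    log.Fatal(err)
}
resp, err := (&http.Client{}).Do(req)
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()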
I have a set of requests handlers like the one below:
func GetProductsHandler(w http.ResponseWriter, req *http.Request) {
defer req.Body.Close()
products := db.GetProducts()
// ...
// return products as JSON array
}
How do I test them the right way? Should I send mock ResponseWriter and Request objects to the function and see the results?
Are there tools to mock request and response objects in Go to simplify the process without having to start server before testing?
The net/http/httptest package provides a mock ResponseWriter (httptest.ResponseRecorder) for use in testing handlers. The standard library documentation provides an example:
package main
import (
"fmt"
"net/http"
"net/http/httptest"
)
func main() {
handler := func(w http.ResponseWriter, r *http.Request) {
http.Error(w, "something failed", http.StatusInternalServerError)
}
req := httptest.NewRequest("GET", "http://example.com/foo", nil)
w := httptest.NewRecorder()
handler(w, req)
fmt.Printf("%d - %s", w.Code, w.Body.String())
}
I think having a global dependency (db) throws a wrench into clean unit testing. In Go, your test could reassign that variable, masking the global value of db.
Another strategy (my preferred one) is to put your handlers on a struct that has a db field:
type Handlers struct {
db DB_INTERFACE
}
func (hs *Handlers) GetProductsHandler(w http.ResponseWriter, req *http.Request) {...}
This way your test can instantiate a Handlers value with a stub db object, which allows you to write IO-free unit tests. For example:
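Here is a rough sketch of such a test (the DB_INTERFACE method, the stub, and the Product type are invented for illustration; imports of net/http, net/http/httptest, and testing are assumed):

type Product struct{ Name string }

// DB_INTERFACE is whatever subset of the database the handlers need.
type DB_INTERFACE interface {
    GetProducts() []Product
}

// stubDB satisfies DB_INTERFACE without touching a real database.
type stubDB struct{ products []Product }

func (s *stubDB) GetProducts() []Product { return s.products }

func TestGetProductsHandler(t *testing.T) {
    hs := &Handlers{db: &stubDB{products: []Product{{Name: "widget"}}}}
    req := httptest.NewRequest("GET", "/products", nil)
    w := httptest.NewRecorder()
    hs.GetProductsHandler(w, req)
    if w.Code != http.StatusOK {
        t.Errorf("expected status 200, got %d", w.Code)
    }
}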