I am building a web server that must accept HTTP requests from a client, but must also accept requests over a raw TCP socket from peers. Since HTTP runs over TCP, I am trying to route the HTTP requests through the TCP server rather than running two separate services.
Is there an easy way to read in the data with net.Conn.Read(), determine whether it is an HTTP GET/POST request, and pass it off to the built-in HTTP handler or Gorilla mux? Right now my code looks like this, and I am building the HTTP routing logic myself:
func ListenConn() {
	listen, _ := net.Listen("tcp", ":8080")
	defer listen.Close()
	for {
		conn, err := listen.Accept()
		if err != nil {
			logger.Println("listener.go", "ListenConn", err.Error())
		}
		go HandleConn(conn)
	}
}

func HandleConn(conn net.Conn) {
	defer conn.Close()
	// determine if it is an HTTP request
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		ln := scanner.Bytes()
		fmt.Println(ln)
		if strings.Fields(string(ln))[0] == "GET" {
			http.GetRegistrationCode(conn, strings.Fields(string(ln))[1])
			return
		}
		... raw tcp handler code
	}
}
It is not a good idea to mix HTTP and raw TCP traffic.
Think about all the firewalls and routers between your application and its clients. They are designed to deliver HTTP(S) safely. What will they do with raw TCP traffic arriving on the same port as valid HTTP?
As a solution, you can split the traffic across two different ports within the same application.
With the ports separated, you can route your HTTP and TCP traffic independently and configure appropriate network security for each channel.
Sample code that listens on two different ports:
package main

import (
	"fmt"
	"net"
	"net/http"
	"os"
)

type httpHandler struct {
}

func (m *httpHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	fmt.Println("HTTP request")
}

func main() {
	// http
	go func() {
		http.ListenAndServe(":8001", &httpHandler{})
	}()

	// tcp
	l, err := net.Listen("tcp", "localhost:8002")
	if err != nil {
		fmt.Println("Error listening:", err.Error())
		os.Exit(1)
	}
	defer l.Close()
	for {
		conn, err := l.Accept()
		if err != nil {
			fmt.Println("Error accepting: ", err.Error())
			os.Exit(1)
		}
		go handleRequest(conn)
	}
}

// handleRequest handles incoming TCP connections.
func handleRequest(conn net.Conn) {
	// read/write from connection
	fmt.Println("TCP connection")
	conn.Close()
}
Open http://localhost:8001/ in a browser and run echo -n "test" | nc localhost 8002 from the command line to test both listeners.
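If the two protocols really must share one port despite the advice above, the usual approach is to sniff the first bytes of each connection rather than hand-rolling the routing in HandleConn. Below is a minimal sketch using the third-party github.com/soheilhy/cmux package for that sniffing; the port and handlers are placeholders, and this is an illustration of the technique rather than a recommendation:
package main

import (
	"log"
	"net"
	"net/http"

	"github.com/soheilhy/cmux"
)

func main() {
	l, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}

	m := cmux.New(l)
	httpL := m.Match(cmux.HTTP1Fast()) // connections whose first bytes look like an HTTP/1 request line
	rawL := m.Match(cmux.Any())        // everything else stays raw TCP

	go http.Serve(httpL, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over HTTP\n"))
	}))

	go func() {
		for {
			conn, err := rawL.Accept()
			if err != nil {
				return
			}
			go func(c net.Conn) {
				defer c.Close()
				// raw TCP handling goes here
			}(conn)
		}
	}()

	log.Fatal(m.Serve())
}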
Related
Problem Statement
I have a client (which dials the server) and a server (which listens for incoming requests), both written in Go, with the RPC calls defined. I am trying to initiate an HTTP request on the server side which would in turn execute the RPC call for streaming and send a JSON response back to the user.
Challenge
I was able to handle both gRPC and HTTP requests on different ports, but I am having trouble passing parameters from the HTTP request on to the RPC call on the server side.
Server Code
log.Println("Listening for connections from client ........")
lis, err := net.Listen("tcp", ":9000")
if err != nil {
log.Fatalf("failed to listen: %v", err)
}
s := testApi.Server{}
grpcServer := grpc.NewServer()
testApi.RegisterTestApiServiceServer(grpcServer, &s)
if err := grpcServer.Serve(lis); err != nil {
log.Fatalf("failed to serve: %s", err)
}
func main() {
go runGrpc()
log.Printf("*------ Waiting for requests from users ------*")
router := mux.NewRouter().StrictSlash(true)
router.HandleFunc("/exchangeId/{test_id}", ConnectAndExchange).Methods("GET")
log.Fatal(http.ListenAndServe(":8080", router))
}
func ConnectAndExchange(w http.ResponseWriter, r *http.Request){
vars := mux.Vars(r)
test_id, _ := strconv.Atoi(vars["test_id"])
log.Println("Test id request from user : ", test_id)
func (s * Server) ConnectAndStream(channelStream TestApiService_ConnectAndStreamServer) error {
// Question: This Id has to come from http request above- test_id
var id int32 = 1234566
// id := a.ConnectAndExchange
log.Println("Id from sam user ", id)
// var id int32 = 1234566
for i := 1; i <= 2; i++ {
id += 1
log.Println("Speed Server is sending data : ", id)
channelStream.Send(&Input{Id: id})
}
for i := 1; i <= 2; i++ {
log.Println("now time to receive")
client_response, err := channelStream.Recv()
log.Println("Response from samd client : ", client_response.Id)
if err != nil {
log.Println("Error while receiving from samd : ", err)
}
}
return nil
}
I am stuck on passing the test_id from the curl request to the RPC call above. Any input is greatly appreciated.
Note
Client - Dials in and connects to the server and starts receiving and sending data (bi-directional streaming)
Both the HTTP and gRPC servers are part of the same application, so why call the RPC method from the HTTP handler at all? The HTTP handler should have access to the same backend functionality directly.
Your question is slightly unclear, but if you are trying to have your client establish a gRPC connection to the server via the HTTP handler, that will not work: the gRPC connection established in this situation is between the server and itself.
Edit - thanks for the clarification; now I better understand the flow you are trying to achieve. Your HTTP handler method can make the outgoing gRPC call to the server and return the response via the http.ResponseWriter.
For simplicity, I have used the hello world example from https://github.com/grpc/grpc-go/tree/master/examples/helloworld
Running the code sample below and hitting http://localhost:1000/exchangeId/Test produces the following output:
Starting
*------ Waiting for http requests from users on port 1000 ------*
server listening at 127.0.0.1:1001
Test id request from user : Test
Server Received: Test
Greeting: Hello Test
Code sample:
package main

import (
	"context"
	"log"
	"net"
	"net/http"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"

	"github.com/gorilla/mux"
)

var (
	grpcserver = "localhost:1001"
)

func main() {
	log.Print("Starting")
	go StartGrpcServer()

	log.Printf("*------ Waiting for http requests from users on port 1000 ------*")
	router := mux.NewRouter().StrictSlash(true)
	router.HandleFunc("/exchangeId/{test_id}", ConnectAndExchange).Methods("GET")
	log.Fatal(http.ListenAndServe(":1000", router))
}

type server struct {
	pb.UnimplementedGreeterServer
}

// SayHello implements helloworld.GreeterServer
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
	log.Printf("Server Received: %v", in.GetName())
	return &pb.HelloReply{Message: "Hello " + in.GetName()}, nil
}

func StartGrpcServer() {
	lis, err := net.Listen("tcp", grpcserver)
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	pb.RegisterGreeterServer(s, &server{})
	log.Printf("server listening at %v", lis.Addr())
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}

func ConnectAndExchange(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	test_id := vars["test_id"]
	log.Println("Test id request from user : ", test_id)

	// Set up a connection to the gRPC server.
	conn, err := grpc.Dial(grpcserver, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()
	c := pb.NewGreeterClient(conn)

	// Contact the server and print out its response.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	resp, err := c.SayHello(ctx, &pb.HelloRequest{Name: test_id})
	if err != nil {
		log.Fatalf("could not greet: %v", err)
	}
	log.Printf("Greeting: %s", resp.GetMessage())
	w.Write([]byte(resp.GetMessage()))
}
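To exercise the whole flow from the command line, something like curl http://localhost:1000/exchangeId/Test should return Hello Test once both servers are up (the port and path come from the sample above).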
I'm trying to implement an HTTP forwarding server that supports domain blocking. I've tried
go io.Copy(dst, src)
go io.Copy(src, dst)
and it works like a charm for TCP forwarding. Then I tried to do request-line parsing with something similar to
go func() {
	reader := io.TeeReader(src, dst)
	textReader := textproto.NewReader(bufio.NewReader(reader))
	requestLine, _ = textReader.ReadLine()
	// ...
	ioutil.ReadAll(reader)
}()
It works fine, but I was worried about poor performance (ioutil.ReadAll in particular), so I wrote the code below.
func (f *Forwarder) handle(src, dst net.Conn) {
	defer dst.Close()
	defer src.Close()
	done := make(chan struct{})

	go func() {
		textReader := bufio.NewReader(src)
		requestLine, _ = textReader.ReadString('\n')
		// parse request line and apply domain blocking
		dst.Write([]byte(requestLine))
		io.Copy(dst, src)
		done <- struct{}{}
	}()

	go func() {
		textReader := bufio.NewReader(dst)
		s.statusLine, _ = textReader.ReadString('\n')
		src.Write([]byte(s.statusLine))
		io.Copy(src, dst)
		done <- struct{}{}
	}()

	<-done
	<-done
}
Unfortunately, it doesn't work at all. Requests get printed out, but responses do not. I'm stuck here and don't know what's wrong.
TCP forwarding is how a tunnel proxy works: it does not need to parse the data at all, whereas a reverse proxy can use the standard library.
A proxy typically separates the HTTP and HTTPS cases. For HTTPS, the client sends a CONNECT request, and the proxy only needs to dial a TCP connection to the target and relay bytes in both directions (the tunnel). For plain HTTP, the client sends an ordinary GET/POST request, which is handled with a reverse proxy.
func(w http.ResponseWriter, r *http.Request) {
	// check url host
	if r.URL.Host != "" {
		if r.Method == eudore.MethodConnect {
			// tunnel proxy
			conn, err := net.Dial("tcp", r.URL.Host)
			if err != nil {
				w.WriteHeader(502)
				return
			}
			client, _, err := w.Hijack()
			if err != nil {
				w.WriteHeader(502)
				conn.Close()
				return
			}
			client.Write([]byte("HTTP/1.0 200 OK\r\n\r\n"))
			go func() {
				io.Copy(client, conn)
				client.Close()
				conn.Close()
			}()
			go func() {
				io.Copy(conn, client)
				client.Close()
				conn.Close()
			}()
		} else {
			// reverse proxy
			httputil.NewSingleHostReverseProxy(r.URL).ServeHTTP(w, r)
		}
	}
}
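Once a handler like this is mounted on a listening server, both paths can be exercised by pointing a client at the proxy, for example curl -x http://localhost:8080 http://example.com/ for the reverse-proxy path and curl -x http://localhost:8080 https://example.com/ for the CONNECT tunnel (the proxy port here is an assumption).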
A reverse proxy parses the client request and then sends its own request on to the target server.
A hand-rolled version of that conversion might look like this (not tested):
func(w http.ResponseWriter, r *http.Request) {
	// rewrite the target host
	r.URL.Scheme = "http"
	r.URL.Host = "example.com"
	// RequestURI must be cleared before reusing a server request as a client request
	r.RequestURI = ""
	// send
	resp, err := http.DefaultClient.Do(r)
	if err != nil {
		w.WriteHeader(502)
		return
	}
	// write response: copy headers first, then the status code and body
	defer resp.Body.Close()
	h := w.Header()
	for k, v := range resp.Header {
		h[k] = v
	}
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}
However, forwarding the request directly like this does not handle hop-by-hop headers. Hop-by-hop headers are clearly defined in the RFC: they carry information for a single connection only. For example, client-to-proxy and proxy-to-server are two separate hops, while client-to-server headers are end-to-end.
Please use the standard library directly for the reverse proxy; it already handles the hop-by-hop headers and Upgrade for you.
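For reference, here is a minimal sketch of the header stripping a hand-rolled proxy would otherwise have to do itself; the list is the hop-by-hop set from RFC 2616 §13.5.1 / RFC 7230, and httputil.ReverseProxy already does the equivalent internally:
import (
	"net/http"
	"strings"
)

// hopByHopHeaders are connection-scoped and must not be forwarded.
var hopByHopHeaders = []string{
	"Connection",
	"Keep-Alive",
	"Proxy-Authenticate",
	"Proxy-Authorization",
	"Te",
	"Trailer",
	"Transfer-Encoding",
	"Upgrade",
}

// stripHopByHop removes hop-by-hop headers, including any header
// explicitly named in the Connection header, before forwarding.
func stripHopByHop(h http.Header) {
	for _, v := range h.Values("Connection") {
		for _, name := range strings.Split(v, ",") {
			h.Del(strings.TrimSpace(name))
		}
	}
	for _, name := range hopByHopHeaders {
		h.Del(name)
	}
}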
Example of NewSingleHostReverseProxy with a filter:
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	addr, _ := url.Parse("http://localhost:8088")
	proxy := httputil.NewSingleHostReverseProxy(addr)
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.URL.Path, "/api/") {
			proxy.ServeHTTP(w, r)
		} else {
			w.WriteHeader(404)
		}
	})
	// Listen Server
}
How can I use JSON-RPC over HTTP based on this specification in Go?
Go provides a JSON-RPC codec in net/rpc/jsonrpc, but this codec uses a network connection as its input, so you cannot use it with the Go RPC HTTP handler. Here is sample code that uses TCP for JSON-RPC:
func main() {
	cal := new(Calculator)
	server := rpc.NewServer()
	server.Register(cal)

	listener, e := net.Listen("tcp", ":1234")
	if e != nil {
		log.Fatal("listen error:", e)
	}
	for {
		if conn, err := listener.Accept(); err != nil {
			log.Fatal("accept error: " + err.Error())
		} else {
			log.Printf("new connection established\n")
			go server.ServeCodec(jsonrpc.NewServerCodec(conn))
		}
	}
}
The built-in RPC HTTP handler uses the gob codec on a hijacked HTTP connection. Here's how to do the same with JSON-RPC.
Write an HTTP handler that runs the JSON-RPC server on a hijacked connection:
func serveJSONRPC(w http.ResponseWriter, req *http.Request) {
	if req.Method != "CONNECT" {
		http.Error(w, "method must be connect", 405)
		return
	}
	conn, _, err := w.(http.Hijacker).Hijack()
	if err != nil {
		http.Error(w, "internal server error", 500)
		return
	}
	defer conn.Close()
	io.WriteString(conn, "HTTP/1.0 Connected\r\n\r\n")
	jsonrpc.ServeConn(conn)
}
Register this handler with the HTTP server. For example:
http.HandleFunc("/rpcendpoint", serveJSONRPC)
EDIT: OP has since updated the question to make it clear that they want GET/POST instead of CONNECT.
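For the GET/POST case, one common pattern (a sketch, untested, assuming a Calculator service like the one in the question) is to serve one JSON-RPC request per HTTP POST by wrapping the request body and response writer into the io.ReadWriteCloser that the jsonrpc codec expects:
package main

import (
	"io"
	"log"
	"net/http"
	"net/rpc"
	"net/rpc/jsonrpc"
)

// Calculator is a stand-in for the service in the question; Add is a
// hypothetical method used only to make the example self-contained.
type Calculator struct{}

type Args struct{ A, B int }

func (c *Calculator) Add(args Args, reply *int) error {
	*reply = args.A + args.B
	return nil
}

// serveJSONRPCOverPOST serves exactly one JSON-RPC request per HTTP POST by
// treating the request body as the read side and the response writer as the
// write side of a "connection" for the jsonrpc codec.
func serveJSONRPCOverPOST(w http.ResponseWriter, req *http.Request) {
	if req.Method != http.MethodPost {
		http.Error(w, "method must be POST", http.StatusMethodNotAllowed)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	conn := struct {
		io.Reader
		io.Writer
		io.Closer
	}{req.Body, w, req.Body}
	if err := rpc.ServeRequest(jsonrpc.NewServerCodec(conn)); err != nil {
		log.Println("rpc error:", err)
	}
}

func main() {
	rpc.Register(new(Calculator))
	http.HandleFunc("/rpcendpoint", serveJSONRPCOverPOST)
	log.Fatal(http.ListenAndServe(":1234", nil))
}
A request such as curl -d '{"method":"Calculator.Add","params":[{"A":1,"B":2}],"id":1}' http://localhost:1234/rpcendpoint should then come back with a JSON-RPC response body.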
I have an application that runs a basic HTTP server and also accepts connections over TCP.
Basic pseudo code is as follows:
package main

import (
	"log"
	"net"
	"net/http"
)

func main() {
	// create HTTP server.
	serveSvr := http.NewServeMux()
	serveSvr.HandleFunc("/", handler())

	// create server error channel
	svrErr := make(chan error)

	// start HTTP server.
	go func() {
		svrErr <- http.ListenAndServe(":8080", serveSvr)
	}()

	// start TCP server
	go func() {
		lnr, err := net.Listen("tcp", ":1111")
		if err != nil {
			svrErr <- err
			return
		}
		defer lnr.Close()
		for {
			conn, err := lnr.Accept()
			if err != nil {
				log.Printf("connection error: %v", err)
				continue
			}
			// code to handle each connection
		}
	}()

	select {
	case err := <-svrErr:
		log.Print(err)
	}
}
I run both servers in separate goroutines, and I need a way to gracefully shut them both down if either of them fails. For example, if the HTTP server errors, how would I go back and shut down the TCP server and perform any cleanup?
Start by keeping a reference to the http server and the tcp listener so that you can later close them.
Create separate error channels so you know which path returned the error, and buffer them so that a send can always complete.
To make sure that whatever cleanup you want to attempt is complete before you exit, you can add a WaitGroup to the server goroutines.
A simple extension of your example might look like:
var wg sync.WaitGroup

// create HTTP server.
serveSvr := http.NewServeMux()
serveSvr.HandleFunc("/", handler())
server := &http.Server{Addr: ":8080", Handler: serveSvr}

// create http server error channel
httpErr := make(chan error, 1)

// start HTTP server.
wg.Add(1)
go func() {
	defer wg.Done()
	httpErr <- server.ListenAndServe()
	// http cleanup
}()

tcpErr := make(chan error, 1)
listener, err := net.Listen("tcp", ":1111")
if err != nil {
	tcpErr <- err
} else {
	// start TCP server
	wg.Add(1)
	go func() {
		defer wg.Done()
		defer listener.Close()
		for {
			conn, err := listener.Accept()
			if err != nil {
				if ne, ok := err.(net.Error); ok && ne.Temporary() {
					// temp error, wait and continue
					continue
				}
				tcpErr <- err
				// cleanup TCP
				return
			}
			// code to handle each connection
		}
	}()
}

select {
case err := <-httpErr:
	// handle http error and close tcp listener
	if listener != nil {
		listener.Close()
	}
case err := <-tcpErr:
	// handle tcp error and close http server
	server.Close()
}

// you may also want to receive the error from the server you
// shut down, to log it

// wait for any final cleanup to finish
wg.Wait()
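As a follow-up design note: server.Close() drops in-flight requests immediately; if you would rather let them finish, http.Server also has Shutdown (Go 1.8+). A small helper sketch, where the timeout value is an arbitrary assumption:
import (
	"context"
	"log"
	"net/http"
	"time"
)

// shutdownHTTP attempts a graceful shutdown of srv so in-flight requests can
// finish, falling back to an immediate Close if the timeout expires.
func shutdownHTTP(srv *http.Server, timeout time.Duration) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful shutdown failed: %v", err)
		srv.Close()
	}
}
In the select above you would call shutdownHTTP(server, 5*time.Second) in place of server.Close() to give the HTTP side a bounded grace period.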
I'm writing a simple proxy in Go that relays an HTTP request to one or more backend servers. Right now I'm still using only one backend server, but performance is poor and I'm sure I'm doing something wrong, probably related to how I send the HTTP request to the other server: if I comment out the call to send(), the server goes blazing fast, yielding more than 14 krps, while with the call to send() performance drops below 1 krps and degrades even further over time. This is on a MacBook Pro.
The code is based on trivial code samples; I have created the client and reused it following the recommendation in the docs. Tests are done with Apache ab:
$ ab -n 10000 -c 10 -k "http://127.0.0.1:8080/test"
The backend server running on port 55455 does not change between experiments; Apache or nginx can be used. My custom web server yields more than 7 krps when measured directly, without proxy:
$ ab -n 10000 -c 10 -k "http://127.0.0.1:55455/test"
I would expect the proxied version to behave just as well as the non-proxied version, and sustain the performance over time.
The complete sample code follows.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	tr := &http.Transport{
		DisableCompression: true,
		DisableKeepAlives:  false,
	}
	client := &http.Client{Transport: tr}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		send(r, client)
		fmt.Fprintf(w, "OK")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

func send(r *http.Request, client *http.Client) int {
	req, err := http.NewRequest("GET", "http://localhost:55455"+r.URL.Path, nil)
	if err != nil {
		log.Fatal(err)
		return 0
	}
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
		return 0
	}
	if resp == nil {
		return 0
	}
	return 1
}
Eventually the code should send the request to multiple servers and process their answers, returning an int with the result. But I'm stuck at this step of just making the call.
What am I doing horribly wrong?
As the comment suggests, you should be returning (and dealing with) type error instead of ints, and to reiterate, don't use ab for benchmarking. The biggest things that stand out to me are:
You should set the MaxIdleConnsPerHost in your Transport. This represents how many connections will persist (keep-alive) even if they have nothing to do at the moment.
You have to read and close the body of the response in order for the underlying connection to be returned to the pool:
...
resp, err := client.Do(req)
if err != nil {
	return err
}
defer resp.Body.Close()
responseBody, err := ioutil.ReadAll(resp.Body)
if err != nil {
	return err
}
...
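And a small sketch of the Transport tuning from the first point; the specific numbers here are illustrative assumptions, not measured values:
import (
	"net/http"
	"time"
)

// newBackendClient builds a client suited to proxy-style fan-out: idle
// keep-alive connections are kept per backend host so each proxied request
// does not pay for a new TCP handshake.
func newBackendClient() *http.Client {
	tr := &http.Transport{
		DisableCompression:  true,
		MaxIdleConnsPerHost: 64, // illustrative value, tune for your load
	}
	return &http.Client{Transport: tr, Timeout: 10 * time.Second}
}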