Getting gRPC working when the server is behind an LB or proxy - nginx

What are the common solutions for getting a gRPC app running when it has to pass through some sort of proxy that supports HTTP/2 toward the client but not toward the origin?
Has anyone gotten this kind of setup working somehow?
The setup via proxy would create a flow similar to this:
Client <--- HTTP/2 ---> Proxy <--- HTTP/1.1 ---> gRPC Server.

Currently, it's not possible -- but stay tuned: Piotr Sikora from Google is trying to get HTTP/2 upstreams supported in nginx, even though things are proceeding more slowly than one would expect:
https://github.com/grpc/grpc.github.io/issues/230#issuecomment-306974585
https://forum.nginx.org/list.php?29 (look for Piotr)

Related

HAProxy - HTTP/1.1 frontend with HTTP/2 backend? A good idea?

I have been working towards switching the communication protocol for our application from HTTP/1.1 to HTTP/2.
The communication flow is something like this:
Client talks to an Amazon Application load balancer over HTTP/2
Application load balancer talks to a reverse proxy (HAProxy) over HTTP/1.1
Reverse proxy then talks to the webserver over HTTP/1.1
I wanted all of this to be HTTP/2 but due to a limitation of the load balancer (https://forums.aws.amazon.com/thread.jspa?threadID=332847) the communication between it and the reverse proxy can either be HTTP/2 or HTTP/1.1 but not both. I need to support both because there is a WebSocket connection that is opened over HTTP/1.1.
I have the option of making the communication between HAProxy and the webserver HTTP/2, as our webserver supports it.
So the flow becomes:
Client -> ALB (HTTP/2)
ALB -> HAProxy (HTTP/1.1)
HAProxy -> Webserver (HTTP/2)
I wanted to understand two things:
Is this possible with HAProxy?
Is this a good move? Will it give me any performance improvements?
Thanks in advance!
Cheers
Technically, HAProxy can do HTTP/2 end to end (in version 2.0 or newer: https://www.haproxy.com/fr/blog/haproxy-2-0-and-beyond/#end-to-end-http-2).
And in 2.4 you can do HTTP/2 WebSockets (https://www.haproxy.com/fr/blog/announcing-haproxy-2-4/).
My first thought: HTTP/2 multiplexing is interesting for reducing latency, and latency usually sits between the client and your first instance (here the ALB).
I don't know whether you have latency between HAProxy and the webserver, but if you do, HTTP/2 brings a nice performance increase there, since it condenses multiple connections down to a single connection. Otherwise, do not expect a big improvement.
It could also depend on whether you use TLS between HAProxy and the webservers.
Still, features like header compression and persistent TCP connections are worthwhile, so it makes sense to use it.
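To make that concrete, a minimal haproxy.cfg sketch of end-to-end HTTP/2 (HAProxy 2.0 or newer) might look like the following; the certificate path, backend name, and addresses are placeholders, not taken from the question:

    # Sketch only: end-to-end HTTP/2 with placeholder values.
    frontend fe_main
        # Negotiate h2 (or fall back to HTTP/1.1) with clients via ALPN.
        bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
        default_backend be_web

    backend be_web
        # "alpn h2" asks the backend for HTTP/2 over TLS; for cleartext
        # HTTP/2 (h2c) you would use "proto h2" on the server line instead.
        server web1 10.0.0.10:443 ssl verify none alpn h2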

Load balancing Go servers in Beanstalk

I'm trying to load balance Go servers in AWS Beanstalk that use gRPC/Protobuf for data serialization. Beanstalk offers nginx as a reverse proxy for client-server communication, and it uses the HTTP/1.1 protocol. This results in bogus messages being exchanged between the proxy and the server, and client messages never seem to reach the server as intended. Any clean ideas would help here.
Nginx doesn't support HTTP/2 to the backend yet. Some of us are working on a fix for this, but it will take another quarter before we can get it upstreamed. You can either wait for that or put Envoy (https://github.com/lyft/envoy) in front, which supports gRPC and HTTP/2 natively. Hope this helps.
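To illustrate the Envoy option: a minimal, hypothetical Envoy configuration that accepts client traffic and speaks HTTP/2 to the gRPC backend could look like the sketch below. It uses Envoy's current v3 config schema, which postdates this answer; the ports and cluster name are made up:

    # Sketch only: listen on 8080, route everything to a gRPC backend
    # over HTTP/2 (which is what gRPC requires end to end).
    static_resources:
      listeners:
      - address:
          socket_address: { address: 0.0.0.0, port_value: 8080 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress_grpc
              codec_type: AUTO
              route_config:
                virtual_hosts:
                - name: grpc
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/" }
                    route: { cluster: grpc_backend }
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
      - name: grpc_backend
        type: STRICT_DNS
        # This block is what makes Envoy use HTTP/2 toward the upstream.
        typed_extension_protocol_options:
          envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
            "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
            explicit_http_config:
              http2_protocol_options: {}
        load_assignment:
          cluster_name: grpc_backend
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: 127.0.0.1, port_value: 50051 }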

Proxy server basics

I'm learning about network programming, specifically proxy servers. I've created a very rudimentary proxy server on my mobile phone. However, I think there are some proxy server basics I don't know that would help me create a more robust proxy server.
What I've done so far: the server on my mobile device listens for requests from the laptop. When the server receives a request like www.google.com, the web page contents are fetched and returned to the client on the laptop. The client then opens the page contents in a desktop browser.
I think the sending/receiving of requests can happen on a lower OSI model layer (perhaps transport). How can I create a more robust proxy server? (one that just sends and receives bytes and doesn't care/know about HTTP)
A proxy server runs at the same layer as the protocol being proxied. It seems you are talking about an HTTP proxy. HTTP runs over TCP, and so does an HTTP proxy.
Define 'more robust'. What have you done so far?
An HTTP proxy server is a pretty simple thing, unless it has elaborate logging, caching, etc. The basis of it is (1) something to recognize and action the GET/POST/PUT/CONNECT etc. commands and (2) thereafter just copying bytes in both directions simultaneously.
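To illustrate point (2), here is a small Go sketch (hypothetical, not from the question) of that byte-copying core: a TCP-level tunnel that forwards everything between the client and a fixed upstream without understanding HTTP. A real HTTP proxy would first parse the request line or CONNECT target to choose the upstream:

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        const upstream = "example.com:80" // placeholder target

        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go handle(client, upstream)
        }
    }

    // handle tunnels one client connection to the upstream address,
    // copying bytes in both directions until either side closes.
    func handle(client net.Conn, upstream string) {
        defer client.Close()
        server, err := net.Dial("tcp", upstream)
        if err != nil {
            log.Print(err)
            return
        }
        defer server.Close()
        go io.Copy(server, client) // client -> server
        io.Copy(client, server)    // server -> client
    }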

What is the benefit of using NginX for Node.js?

From what I understand, Node.js doesn't need nginx to work as an HTTP server (or a WebSocket server, or any server for that matter), but I keep reading about how to use nginx instead of Node.js's internal server and can't find a good reason to go that way.
Here http://developer.yahoo.com/yui/theater/video.php?v=dahl-node the Node.js author says that Node.js is still in development and so there may be security issues that nginx simply hides.
On the other hand, in the case of heavy traffic, nginx will be able to split the job between many running Node.js servers.
In addition to the previous answers, there’s another practical reason to use nginx in front of Node.js, and that’s simply because you might want to run more than one Node app on your server.
If a Node app is listening on port 80, you are limited to that one app. If nginx is listening on port 80 it can proxy the requests to multiple Node apps running on other ports.
It’s also convenient to delegate TLS/SSL/HTTPS to Nginx. Doing TLS directly in Node is possible, but it’s extra work and error-prone. With Nginx (or another proxy) in front of your app, you don’t have to worry about it and there are tools to help you securely configure it.
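A minimal nginx sketch of both points -- two Node apps behind one port, with TLS terminated at nginx; the hostnames, ports, and certificate paths are placeholders:

    server {
        listen 443 ssl;
        server_name app1.example.com;
        ssl_certificate     /etc/nginx/certs/app1.pem;
        ssl_certificate_key /etc/nginx/certs/app1.key;

        location / {
            proxy_pass http://127.0.0.1:3000;  # first Node app
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen 443 ssl;
        server_name app2.example.com;
        ssl_certificate     /etc/nginx/certs/app2.pem;
        ssl_certificate_key /etc/nginx/certs/app2.key;

        location / {
            proxy_pass http://127.0.0.1:3001;  # second Node app
        }
    }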
But be prepared: nginx doesn't support HTTP/1.1 when talking to the backend, so features like keep-alive or WebSockets won't work if you put Node behind nginx.
UPD: see nginx 1.2.0 - socket.io - HTTP/1.1 - Proxy websocket connections for more up-to-date info.
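For reference, later nginx versions (1.3.13+) can proxy WebSocket connections if told to forward the upgrade handshake; a minimal sketch, with the path and port as placeholders:

    location /socket.io/ {
        proxy_pass http://127.0.0.1:3000;
        # The WebSocket handshake needs HTTP/1.1 upstream plus the
        # Upgrade/Connection headers passed through.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }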

HTTP Proxy/FastCGI/SCGI not closing connection when client disconnected - bug or feature?

I'm working on Comet support for the CppCMS framework via long XMLHttpRequest polls. In many cases, such a request is closed by the client before any response from the server is given -- for example, the page is closed, the user moves to another page, or it is just refreshed.
At the server side I expect that I would receive a notification that the connection was dropped. I tested the application via 3 connectors: FastCGI, SCGI, and a simple HTTP proxy.
Of the 3 major UNIX web servers -- Apache2, lighttpd, and Nginx -- only the last one closed the connection as expected, allowing my application to remove the request from the wait queue; this worked for both the FastCGI and HTTP proxy connectors. (Nginx does not have an SCGI module by default.)
The others, Apache and Lighttpd, do not close the connection or inform the backend about disconnected clients; they proceed as if the client were still online. This happens for all 3 supported APIs: FastCGI, SCGI, and HTTP proxy.
I had opened an issue for Lighttpd, but what concerns me more is the fact that Apache -- a web server as mature and well supported as lighttpd -- also does not disclose to the backend that the client has gone.
Questions:
Is this a bug or this is a feature? Is there any reason not to close the connection between web server and application backend?
Are there real life Comet application working behind these servers via FastCGI/SCGI/HTTP-Proxy backends?
If the above is true, how do they deal with this issue? I understand that I can time out all connections every 10 seconds, but I would like to keep them idle as long as the client listens -- because this allows easier scale-up -- each connection is very cheap; the cost is only the open socket.
Thanks!
(1) Feature. Or, more specifically, fallout from an implementation detail.
A TCP/IP connection does not involve a constant flow of traffic back and forth. Thus, there is no way to know that a client is gone without (a) the client telling you it is closing the connection or (b) a timeout.
(2) I'm not specifically familiar with Comet or CppCMS. But, yes, there are all kinds of CMS servers running behind the mentioned web servers and they all have to deal with this issue (and, yes, it is a pain).
(3) Timeouts are the only way, but you can mitigate the pain, so to speak. Have the client ping the server across the connection every N seconds when there is otherwise no activity. The ping doesn't have to do anything, and you can tack stuff onto the reply: notifications of concurrent edits or whatever you need.
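As a sketch of that mitigation -- in Go rather than CppCMS, with a made-up handler shape and interval -- the periodic write is what surfaces a dead client, and modern servers may also expose the disconnect directly:

    // Sketch: long-poll handler that notices a gone client by writing a
    // periodic heartbeat; a write to a closed connection errors out.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func longPoll(w http.ResponseWriter, r *http.Request, events <-chan string) {
        flusher, _ := w.(http.Flusher)
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        for {
            select {
            case ev := <-events:
                fmt.Fprintln(w, ev) // the real payload, when it arrives
                return
            case <-ticker.C:
                // Heartbeat: a no-op for the client, but it lets the
                // server detect a dropped connection via the write error.
                if _, err := io.WriteString(w, "\n"); err != nil {
                    return
                }
                if flusher != nil {
                    flusher.Flush()
                }
            case <-r.Context().Done():
                // Go's HTTP server also signals client disconnects directly.
                return
            }
        }
    }

    func main() {
        events := make(chan string)
        http.HandleFunc("/poll", func(w http.ResponseWriter, r *http.Request) {
            longPoll(w, r, events)
        })
        http.ListenAndServe(":8080", nil)
    }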
You are correct in that it is surprising that mod_fastcgi doesn't support telling the backend that Apache has detected the disconnect or the connection timed out. And you aren't the first to be dismayed.
The second patch on this page should fix that particular issue:
http://osdir.com/ml/web.fastcgi.devel/2006-02/msg00015.html
http://ncannasse.fr/blog/tora_comet
I don't have any concrete information for you, but this article does mention that they can detect when the client has disconnected from Apache. See tora.Queue. And it sounds like the source is available in the neko CVS, so you might be able to find some clues there. Good luck.
