Nginx: Reverse proxying WebSocket Draft 76

I'm using nginx 1.4.0 and it deals perfectly fine with newer WebSocket versions, but Draft 76 is a problem. My backend (Netty-based Java application) doesn't seem to receive the handshake request, and in nginx's error log I have
[error] 7662#0: *3720 upstream timed out (110: Connection timed out) while reading response header from upstream
My configuration ($proxy_add_connection works the same way as described there):
include proxy_params;
proxy_pass http://127.0.0.1:8001/;
proxy_http_version 1.1;
proxy_set_header Connection $proxy_add_connection;
proxy_set_header Upgrade $http_upgrade;
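(For reference, $proxy_add_connection here is presumably defined with a map along the lines of the standard nginx WebSocket proxying example; shown below as an assumption rather than my exact config:)
map $http_upgrade $proxy_add_connection {
    default upgrade;
    ''      close;
}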
If I connect directly to the backend, it works fine.
Is there anything I can do to fix it?

The recent changes to Nginx to support WebSocket proxying don't support WebSockets per se, but rather allow nginx to recognize a request to upgrade the connection from HTTP to another protocol. When it gets such a request it now establishes a tunnel to the backend rather than dropping the connection as invalid. The RFC 6455 WebSocket handshake is a standard HTTP protocol upgrade request, and as such it works with this new capability.
The draft 76/00 WebSocket handshake was designed specifically to break intermediaries that did not explicitly support WebSockets. As all Nginx is doing is proxying the upgraded TCP connection, it doesn't actually understand the WebSocket handshake or which protocol version of WebSocket is being used. As such it has no way to perform the non-HTTP adjustments that the draft 76/00 handshake requires.
To support draft 76/00 versions of WebSocket, Nginx would have to implement special draft 76/00 detection and handling logic. Given the complexity of adding non-HTTP logic, and the unfinished quality and questionable security of draft 76/00, it is unlikely that proxy intermediaries will ever support it.
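To make the non-HTTP part concrete, a draft 76 (hixie-76) client handshake looks roughly like this (values are illustrative, adapted from the draft's own example):
GET /demo HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: WebSocket
Origin: http://example.com
Sec-WebSocket-Key1: 4 @1  46546xW%0l 1 5
Sec-WebSocket-Key2: 12998 5 Y3 1  .P00

^n:ds[4U
The eight bytes after the blank line are a challenge sent with no Content-Length, and the server has to append a 16-byte MD5 digest after its own 101 response headers. An HTTP-aware proxy has no reason to read or forward body bytes on a GET with no declared length, which is most likely why the backend never sees a complete handshake and nginx times out waiting for upstream response headers.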
If your users absolutely depend on 2-3 year old versions of Chrome/Safari, Flash fallback or raw TCP load balancing is likely your best bet.

Related

nginx tcp stream (k8s) - keep client connection open when upstream closes

I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that from my client's perspective, currently when the upstream server app closes the connection, my connection is closed and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name = "tcp-my-namespace-my-service-7550";
    }

    listen                      7550;
    proxy_timeout               600s;
    proxy_next_upstream         on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries   3;
    proxy_pass                  upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However:
Nginx has a detailed understanding of HTTP.
HTTP is a message-based protocol, i.e. it uses requests and replies.
Since nginx knows nothing about the protocol you are using, even if it uses a request/reply mechanism with no implied state, nginx does not know whether it has received a complete request, nor how to replay it against a different upstream.
You would need to implement a protocol-aware man-in-the-middle.
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
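For anyone who wants a starting point, here is a very rough sketch of that idea in TypeScript using Node's net module (the addresses, ports and one-second retry policy are made up for illustration; this is not the poster's actual proxy):
import * as net from "net";

const BACKEND = { host: "127.0.0.1", port: 9000 }; // illustrative backend address
const LISTEN_PORT = 7550;                          // illustrative listen port

net.createServer((client) => {
  let backend: net.Socket | null = null;

  const connectBackend = () => {
    const b = net.connect(BACKEND);
    backend = b;
    client.pipe(b, { end: false }); // forward client -> backend
    b.pipe(client, { end: false }); // forward backend -> client
    b.on("error", () => { /* surfaced via the "close" event below */ });
    b.on("close", () => {
      // The backend went away (e.g. during a redeploy): keep the client
      // socket open and try to attach to a fresh backend after a delay.
      client.unpipe(b);
      if (!client.destroyed) setTimeout(connectBackend, 1000);
    });
  };

  connectBackend();
  client.on("error", () => backend?.destroy());
  client.on("close", () => backend?.destroy());
}).listen(LISTEN_PORT);
As noted above, anything in flight to the old backend at the moment of the switch is simply lost, so this only works when the protocol tolerates an occasional dropped or retried message.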
I think you need to configure your Nginx Ingress to enable the keepalive option as listed in the documentation. For instance, in your nginx configuration:
...
keepalive 32;
...
This activates the keepalive functionality, keeping up to 32 idle upstream connections open per worker for reuse.

Performance with reverse proxy with HTTP/2 and backend service HTTP/1.1 [duplicate]

I have a node.js server running behind an nginx proxy. node.js is running an HTTP 1.1 (no SSL) server on port 3000. Both are running on the same server.
I recently set up nginx to use HTTP2 with SSL (h2). It seems that HTTP2 is indeed enabled and working.
However, I want to know whether the fact that the proxy connection (nginx <--> node.js) is using HTTP 1.1 affects performance. That is, am I missing the HTTP2 benefits in terms of speed because my internal connection is HTTP 1.1?
In general, the biggest immediate benefit of HTTP/2 is the speed increase offered by multiplexing for browser connections, which are often hampered by high latency (i.e. slow round trip speed). Multiplexing also reduces the need (and expense) of multiple connections, which is a workaround used to try to achieve similar performance benefits in HTTP/1.1.
For internal connections (e.g. between webserver acting as a reverse proxy and back end app servers) the latency is typically very, very, low so the speed benefits of HTTP/2 are negligible. Additionally each app server will typically already be a separate connection so again no gains here.
So you will get most of your performance benefit from just supporting HTTP/2 at the edge. This is a fairly common set up - similar to the way HTTPS is often terminated on the reverse proxy/load balancer rather than going all the way through.
However there are potential benefits to supporting HTTP/2 all the way through. For example, it could allow server push all the way from the application. There are also potential benefits from the reduced packet size on that last hop, due to the binary nature of HTTP/2 and header compression, though, like latency, bandwidth is typically less of an issue for internal connections, so the importance of this is arguable. Finally, some argue that a reverse proxy does less work connecting an HTTP/2 connection to an HTTP/2 connection than it would to an HTTP/1.1 connection, as there is no need to convert one protocol to the other, though I'm sceptical whether that's even noticeable since they are separate connections (unless it's acting simply as a TCP pass-through proxy). So, to me, the main reason for end-to-end HTTP/2 is to allow end-to-end server push, but even that is probably better handled with HTTP Link headers and 103 Early Hints, due to the complications in managing push across multiple connections, and I'm not aware of any HTTP proxy server that supports this (few enough support HTTP/2 at the backend, never mind chaining HTTP/2 connections like this), so you'd need a layer-4 load balancer forwarding TCP packets rather than chaining HTTP requests, which brings other complications.
For now, while servers are still adding support and server push usage is low (and still being experimented with to define best practice), I would recommend having HTTP/2 only at the edge. Nginx also doesn't, at the time of writing, support HTTP/2 for proxy_pass connections (though Apache does), has no plans to add this, and they make an interesting point about whether a single HTTP/2 connection might introduce slowness (emphasis mine):
Is HTTP/2 proxy support planned for the near future?
Short answer:
No, there are no plans.
Long answer:
There is almost no sense to implement it, as the main HTTP/2 benefit is that it allows multiplexing many requests within a single connection, thus [almost] removing the limit on number of simultaneous requests - and there is no such limit when talking to your own backends. Moreover, things may even become worse when using HTTP/2 to backends, due to single TCP connection being used instead of multiple ones.
On the other hand, implementing HTTP/2 protocol and request multiplexing within a single connection in the upstream module will require major changes to the upstream module.
Due to the above, there are no plans to implement HTTP/2 support in the upstream module, at least in the foreseeable future. If you still think that talking to backends via HTTP/2 is something needed - feel free to provide patches.
Finally, it should also be noted that, while browsers require HTTPS for HTTP/2 (h2), most servers don't and so could support this final hop over HTTP (h2c). So there would be no need for end to end encryption if that is not present on the Node part (as it often isn't). Though, depending where the backend server sits in relation to the front end server, using HTTPS even for this connection is perhaps something that should be considered if traffic will be travelling across an unsecured network (e.g. CDN to origin server across the internet).
EDIT AUGUST 2021
HTTP/1.1 being text-based rather than binary does make it vulnerable to various request smuggling attacks. At DEF CON 2021, PortSwigger demonstrated a number of real-life attacks, mostly related to issues when downgrading front end HTTP/2 requests to back end HTTP/1.1 requests. These could probably mostly be avoided by speaking HTTP/2 all the way through, but given the current level of support in front end servers and CDNs for speaking HTTP/2 to backends, and in backends for accepting it, it seems it'll take a long time for this to become common, and front end HTTP/2 servers ensuring these attacks aren't exploitable seems like the more realistic solution.
NGINX now supports HTTP/2 push for proxy_pass and it's awesome...
Here I am pushing favicon.ico, minified.css, minified.js, register.svg, purchase_litecoin.svg from my static subdomain too. It took me some time to realize I can push from a subdomain.
location / {
    http2_push_preload on;
    add_header Link "<//static.yourdomain.io/css/minified.css>; as=style; rel=preload";
    add_header Link "<//static.yourdomain.io/js/minified.js>; as=script; rel=preload";
    add_header Link "<//static.yourdomain.io/favicon.ico>; as=image; rel=preload";
    add_header Link "<//static.yourdomain.io/images/register.svg>; as=image; rel=preload";
    add_header Link "<//static.yourdomain.io/images/purchase_litecoin.svg>; as=image; rel=preload";
    proxy_hide_header X-Frame-Options;
    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://app_service;
}
In case someone is looking for a solution when it is not convenient to make your services HTTP/2 compatible, here is a basic NGINX configuration you can use to expose an HTTP/1.1 service as an HTTP/2 service.
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name localhost;

    # "ssl on;" is deprecated; the "ssl" parameter on the listen directives above is enough
    ssl_certificate     /Users/xxx/ssl/myssl.crt;
    ssl_certificate_key /Users/xxx/ssl/myssl.key;

    location / {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
NGINX does not support HTTP/2 as a client. As they're running on the same server and there is no latency or limited bandwidth, I don't think it would make a huge difference either way. I would make sure you are using keepalives between nginx and node.js.
https://www.nginx.com/blog/tuning-nginx/#keepalive
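A minimal sketch of what that looks like (the upstream name and port are illustrative; keepalive needs HTTP/1.1 to the upstream and a cleared Connection header):
upstream node_backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    location / {
        proxy_pass http://node_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}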
You are not losing performance in general, because nginx matches the request multiplexing the browser does over HTTP/2 by creating multiple simultaneous requests to your node backend. (One of the major performance improvements of HTTP/2 is allowing the browser to do multiple simultaneous requests over the same connection, whereas in HTTP/1.1 only one simultaneous request per connection is possible. Browsers also limit the number of connections per host.)

WebSockets not working with HTTP/2 Load Balancer backend in GCP

I have an application running behind a Load Balancer in Google Cloud Platform.
When I use the HTTPS protocol in the backend, I'm able to connect with WebSockets and all WebSocket connections work fine. However, when I change the backend protocol to HTTP/2, I'm unable to connect from the application, and it returns a response of 502 Bad Gateway.
Can I use WebSockets with HTTP/2, or do I need to perform some configuration in order to use WebSockets with an HTTP2 backend?
As others have commented, WebSockets are not supported in HTTP/2 and this is the reason why you receive the 5XX error.
Having said that, equivalent functionality is achievable (and arguably improved) with HTTP/2.
If you have existing code working with WebSocket it might not be great to rewrite both backend and frontend.
However, if you are developing a new asynchronous service, it is a good idea to take a look at the HTTP/2 + Server-Sent Events (SSE) approach.
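For a rough idea of the SSE side, here is a minimal sketch assuming a plain Node HTTP server sits behind the load balancer (the /events route and payload are made up):
import * as http from "http";

// Minimal Server-Sent Events endpoint: the client opens an ordinary GET and
// the server keeps the response open, streaming "data:" lines as events occur.
http.createServer((req, res) => {
  if (req.url === "/events") {
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
    });
    const timer = setInterval(() => {
      res.write(`data: ${JSON.stringify({ now: Date.now() })}\n\n`);
    }, 1000);
    req.on("close", () => clearInterval(timer));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
In the browser this is consumed with new EventSource("/events"), and over an HTTP/2 backend these long-lived streams multiplex onto a single connection instead of each holding its own connection open.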

nginx treats websocket API data as http requests

I'm trying to set up a reverse proxy for an API at work with NGINX and node.js using AWS Lightsail, but NGINX doesn't appear to be handling the initial setup of the web socket connection correctly.
When I look in my access.log/error.log files, I can see that
1. There are no errors
2. The JSON formatted data I'm sending across my connection is visible inside the access.log file - something I don't think should show up there.
At first glance, it looks like nginx is trying to handle my data as if it were an HTTP request.
Using the net module from node, I receive this response on my client side app indicating that something went wrong, which makes sense if we assume that nginx is trying to handle my API data (JSON) as an http request.
Received: HTTP/1.1 400 Bad Request
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 06 Oct 2019 15:59:58 GMT
Content-Type: text/html
Content-Length: 182
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
The client side websocket, which thinks it's receiving JSON, immediately throws an error and closes.
It looks to me like NGINX is failing to redirect API data to node.js, but I really don't know why.
I've tried just about everything in my configuration files to get this working. This setup got me to where I am now.
server {
    listen 80;
    server_name xx.xxx.xxx.xx;

    location / {
        proxy_pass http://localhost:4000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade upgrade;
        proxy_set_header Connection upgrade;
    }
}
I've already confirmed that the API works when I open up port 4000 (the one node.js is listening on). When I switch back to port 80, the client connection callback function fires. This at least superficially indicates that the initial connect has taken place. From there everything stops working though.
EDIT: I can't find any reference to an initial HTTP request in Wireshark, and Fiddler doesn't seem to detect any requests at all from my client-side node process.
My problem was that I was using Node's net module, which does NOT implement WebSockets. Instead it creates an interface for plain TCP, not WebSockets. This is really important because these two things are VERY different. TCP operates at a fundamentally lower level than HTTP, and certainly much lower than WebSockets, which start out as HTTP connections and are then upgraded to create a WebSocket connection.
This can be very confusing because, when you're working on localhost, these TCP connections will seemingly do exactly what you want. The problems begin when you try to set up a reverse proxy or something similar in Nginx or Apache. Neither of these is meant to be used at the level of TCP; they operate within the domain of HTTP instead. So simply put, trying to use raw TCP sockets behind an HTTP reverse proxy will lead to nothing but frustration, and as far as I'm aware, is actually impossible within the context of Apache and Nginx.
If you're looking for an implementation of web sockets, check out the WS (short for web sockets) module on NPM, which was what I actually needed.
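A minimal sketch of what that looks like with ws (using the ws v8 import style; the port and echo handler are just for illustration):
import WebSocket, { WebSocketServer } from "ws";

// A real WebSocket server: ws performs the HTTP Upgrade handshake itself, so
// an HTTP-aware reverse proxy like nginx can recognize and forward the
// connection, unlike a raw TCP server built on the net module.
const wss = new WebSocketServer({ port: 4000 });

wss.on("connection", (socket: WebSocket) => {
  socket.on("message", (data) => {
    const payload = JSON.parse(data.toString()); // JSON arrives as WebSocket frames
    socket.send(JSON.stringify({ echo: payload }));
  });
});
On the client side you would then connect with new WebSocket("ws://host/") rather than net.connect().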

Is there a way to make nginx terminate a websocket connection and pass only the socket stream to a server?

Basically what I'm trying to do is have a secure websocket connection start life at a client, go through nginx where nginx would terminate the tls, and instead of just proxying the websocket connection to a server, have nginx handle the websocket upgrade and just send the socket stream data to a tcp server or a unix domain socket.
Is that possible with the existing nginx modules and configuration?
proxy_pass can connect to a server via a unix domain socket
proxy_pass http://unix:/tmp/backend.socket:/uri/;
But the implication is that it still speaks http over the unix domain socket and the server is responsible for handling the websocket upgrade. I'm trying to get nginx to do the upgrading so that only the raw socket stream data gets to my server.
Sorta like a mix between proxy_pass and fastcgi_pass.
Do I have to modify one of these modules to make that possible or is there some way to configure this to work?
So what I eventually came to realize is that proxies just proxy and don't parse protocols. There's nothing built into nginx (although mod_ws in Apache might do it) that can actually process the WebSocket protocol; the nginx proxy function just forwards the stream to the back end server. I'm working on another approach for this, as the hope of having the web server do the heavy lifting isn't going to work out easily.
