nginx treats websocket API data as HTTP requests

I'm trying to set up a reverse proxy for an API at work with NGINX and node.js using AWS Lightsail, but NGINX doesn't appear to be handling the initial setup of the web socket connection correctly.
When I look in my access.log/error.log files, I can see that
1. There are no errors
2. The JSON-formatted data I'm sending across my connection is visible inside the access.log file, something I don't think should show up there.
At first glance, it looks like nginx is trying to handle my data as if it were an HTTP request.
Using the net module from Node, I receive this response in my client-side app indicating that something went wrong, which makes sense if we assume that nginx is treating my API data (JSON) as an HTTP request.
Received: HTTP/1.1 400 Bad Request
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 06 Oct 2019 15:59:58 GMT
Content-Type: text/html
Content-Length: 182
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
The client-side websocket, which thinks it's receiving JSON, immediately throws an error and closes.
It looks to me like NGINX is failing to forward the API data to node.js, but I really don't know why.
I've tried just about everything in my configuration files to get this working. This setup got me to where I am now.
server {
    listen 80;
    server_name xx.xxx.xxx.xx;

    location / {
        proxy_pass http://localhost:4000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I've already confirmed that the API works when I open up port 4000 (the one node.js is listening on). When I switch back to port 80, the client connection callback function fires. This at least superficially indicates that the initial connect has taken place. From there everything stops working though.
EDIT: I can't find any reference to an initial HTTP request in Wireshark, and Fiddler doesn't seem to detect any requests at all from my client-side node process.

My problem was that I was using Node's net module, which does NOT implement WebSockets. It creates an interface for plain TCP sockets, not WebSockets. This is really important because these two things are VERY different: TCP operates at a fundamentally lower level than HTTP, and certainly much lower than WebSockets, which start out as HTTP connections and are then upgraded to create a WebSocket connection.
This can be very confusing, because when you're working on localhost these TCP connections will seemingly do exactly what you want. The problems begin when you try to set up a reverse proxy or something similar in nginx or Apache. Neither of these is meant to be used at the level of raw TCP; they operate within the domain of HTTP instead. So, simply put, trying to use plain TCP sockets behind a reverse proxy will lead to nothing but frustration and, as far as I'm aware, is actually impossible within the context of Apache and nginx.
If you're looking for an implementation of WebSockets, check out the ws module on npm, which was what I actually needed.
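For reference, a minimal sketch of what that looks like with the ws package (illustrative only; port 4000 matches the proxy_pass target above, and the echo behaviour is made up):

// server.js - minimal WebSocket echo server using the ws package
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 4000 });
wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    socket.send(data); // echo the JSON payload straight back
  });
});

On the client side, new WebSocket('ws://your-host/') from the same package (or the browser's native WebSocket) performs the HTTP Upgrade handshake that the proxy_set_header Upgrade/Connection lines in the nginx config expect.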

Related

nginx tcp stream (k8s) - keep client connection open when upstream closes

I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that from my client's perspective, currently when the upstream server app closes the connection, my connection is closed and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name = "tcp-my-namespace-my-service-7550";
    }

    listen 7550;

    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;
    proxy_pass upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However:
nginx has a detailed understanding of HTTP
HTTP is a message-based protocol, i.e. it uses requests and replies
Since nginx knows nothing about the protocol you are using, even if it happens to use a request/reply mechanism with no implied state, nginx cannot tell whether it has received a complete request that it could safely replay against another upstream.
You would need to implement a protocol-aware man-in-the-middle for this.
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
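For illustration, a rough sketch of that kind of hand-rolled proxy using Node's net module (hypothetical ports, a single backend, no load balancing; not the actual code from this answer): the client socket stays open, and when the backend socket closes, a new backend connection is dialled and piping resumes.

// tcp-proxy.js - TCP proxy that redials the backend without closing the client
const net = require('net');

const BACKEND = { host: '127.0.0.1', port: 7551 }; // hypothetical upstream

net.createServer((client) => {
  let backend = null;

  const connectBackend = () => {
    backend = net.connect(BACKEND);
    backend.on('error', () => {});           // 'close' always follows 'error'
    backend.on('close', () => {
      client.unpipe(backend);                // detach the dead socket
      setTimeout(() => {                     // redial after a short delay
        if (!client.destroyed) connectBackend();
      }, 1000);
    });
    backend.pipe(client, { end: false });    // keep the client open if the backend ends
    client.pipe(backend, { end: false });
  };

  connectBackend();
  client.on('close', () => backend && backend.destroy());
}).listen(7550);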
I think you need to configure your Nginx Ingress to enable the keepalive options as listed in the documentation here. For instance, in your nginx configuration:
...
keepalive 32;
...
This enables upstream keepalive with a cache of up to 32 idle connections per worker process.
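For context, this is roughly where that directive sits in a hand-written nginx config (upstream name and address are assumptions; the ingress controller generates its own equivalent). Note that keepalive in an upstream block applies to nginx's HTTP proxying and keeps idle backend connections open between requests; it does not pin a given client connection to a backend.

upstream my_service {
    server 10.0.0.1:7550;
    keepalive 32;
}

server {
    listen 80;

    location / {
        proxy_pass http://my_service;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}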

Is there a way to make nginx terminate a websocket connection and pass only the socket stream to a server?

Basically what I'm trying to do is have a secure websocket connection start life at a client, pass through nginx where nginx terminates the TLS, and then, instead of just proxying the websocket connection to a server, have nginx handle the websocket upgrade itself and send only the socket stream data to a TCP server or a unix domain socket.
Is that possible with the existing nginx modules and configuration?
proxy_pass can connect to a server via a unix domain socket
proxy_pass http://unix:/tmp/backend.socket:/uri/;
But the implication is that it still speaks HTTP over the unix domain socket and the server is responsible for handling the websocket upgrade. I'm trying to get nginx to do the upgrading so that only the raw socket stream data reaches my server.
Sorta like a mix between proxy_pass and fastcgi_pass.
Do I have to modify one of these modules to make that possible or is there some way to configure this to work?
So what I eventually came to realize is that proxies just proxy and don't parse protocols. There's nothing built into nginx (although mod_ws in apache might do it) that can actually process the websockets protocol, the nginx proxy function just forwards the stream to the back end server. I'm working on another approach for this as the hope of having the webserver do the heavy lifting is not going to work easily.

Force Varnish to Use Proxy Protocol 1

I have HaProxy terminating SSL and passing the requests back to Varnish which then either serves the cached page or requests from Nginx. However, Varnish seems to be treating the request from HaProxy as HTTP/1 not HTTP/2 and failing to serve.
I can see in the Nginx logs the following when I try to hit a page:
" while reading PROXY protocol, client: 127.0.0.1, server: 127.0.0.1:8181
2016/08/11 06:53:31 [error] 5682#0: *1 broken header: "GET / HTTP/1.1
Host: www.example.com
User-Agent: curl/7.50.2-DEV
Accept: */*
X-Forwarded-For: IP_Removed
Accept-Encoding: gzip
X-Varnish: 32777
I've found something that relates to this here, which states that the reason for this is that Nginx only works with PROXY protocol v1, not v2. So, as a result, I've forced the use of protocol 1 in HaProxy using the send-proxy rather than the send-proxy-v2 switch. But when it gets to Varnish, I think that Varnish is converting this in some way to protocol 2, which is causing it to then fail to communicate properly with Nginx.
I have removed Varnish from the equation and connected HaProxy direct to Nginx and it works perfectly via HTTP/2. The problem is something is happening in the Varnish stack and the likely suspect is the proxy protocol v2 being used by Varnish.
So, to cut a long story short, how do I force Varnish to adhere to PROXY1 rather than PROXY2 protocol? I've tried adding PROXY1 into the launch daemon options but Varnish won't accept that. Any help is appreciated. Thanks!
UPDATE - I tested HaProxy > Nginx with the send-proxy-v2 switch on the HaProxy backend and it causes the identical problem to when Varnish is introduced into the stack. Switching back to send-proxy on HaProxy fixes the issue. So, I'm convinced that the issue is Varnish using protocol 2 rather than protocol 1. But how to tell it not to?
I understand that Varnish doesn't do HTTP/2 or SSL, but shouldn't it be passing the protocol back as-is to Nginx?
No.
But first, let's clarify. HTTP/2 and Proxy protocol V2 have absolutely nothing to do with each other. Remove HTTP/2 from your mind, as it is not applicable here in any sense.
Your question is, in fact, this:
If HAProxy is sending Proxy Protocol V1 to Varnish, and Nginx is configured behind Varnish to expect Proxy Protocol V1, why does Nginx complain of broken headers? Does Varnish not forward Proxy Protocol V1 to the backend? Does it for some reason send Proxy Protocol V2, instead?
And the answer to that question is that Varnish isn't sending either one. Neither V1 nor V2.
The only thing you need the Proxy protocol for is so that an HTTP-aware component can receive the client IP address (and port) from an upstream, non-HTTP-aware component, such as HAProxy using mode tcp or Amazon ELB with a listener in TCP mode, either of which is typically doing SSL offloading for you and not HTTP request routing, so it needs an alternative mechanism of passing the client address.
The first HTTP-aware component can take that address and set it in an HTTP header, customarily X-Forwarded-For, for the benefit of the remaining components in the stack. As such, there's no reason for Varnish to forward the Proxy protocol onward. It isn't doing that in your example, and there is no obvious reason why Varnish would even be capable of forwarding the Proxy protocol.¹
And this brings us to the error. You are misdiagnosing the problem that Nginx is reporting. The broken header error means that Nginx is receiving something other than Proxy protocol V1. With Varnish in the loop, there is no Proxy protocol header² present at all in the request to Nginx -- and when a listener is configured to expect the Proxy protocol header, that header is mandatory.
If a component is configured to expect Proxy protocol V1 and it is not present, that is always an error. But "not present" means exactly that. A V1 header is not present. That does not mean V2 is. It isn't.
So, I'm convinced that the issue is Varnish using protocol 2 rather than protocol 1.
You have convinced yourself incorrectly. Proxy V2 into Nginx -- as you have tried with HAProxy -- is an error, and no Proxy protocol header at all -- as you are seeing from Varnish -- is an error, as explained above. Both are misconfigurations, though of a different type. What you have done here is duplicated the error but for an entirely different reason.
If you are sending all requests through Varnish, then configure Varnish to set X-Forwarded-For in the forwarded request using the information it learns from the incoming Proxy protocol message. Remove Proxy protocol from the Nginx configuration.
Or configure HAProxy to operate in HTTP mode and let it insert the header using option forwardfor.
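A sketch of that second option in HAProxy terms (section names, certificate path, and the Varnish port are placeholders):

frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem
    mode http
    option forwardfor
    default_backend be_varnish

backend be_varnish
    mode http
    server varnish1 127.0.0.1:6081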
¹ Clearly, from the error, Varnish is just sending ordinary HTTP headers -- nothing that looks like Proxy protocol. I don't think it even supports the option of sending Proxy protocol to the origin server, but somebody say something if I've overlooked that capability.
² I would assert that the Proxy protocol "header" is not properly called a header, given what that implies. It is a preamble, not a header, though it was unfortunately called a "header" in the standard. It's most certainly not an HTTP header.
If you upgrade Varnish to 5.0, it can send PROXY protocol version 1 to NGINX by setting ".proxy_header = 1" on the backend definition.
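A minimal sketch of that backend definition in VCL (host and port are placeholders, assuming Varnish 5.0 or later):

backend default {
    .host = "127.0.0.1";
    .port = "8181";
    .proxy_header = 1;  # send PROXY protocol v1 on connections to this backend
}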

How to set nginx upstream module response to client synchronously

I'm setting up a live broadcast website. I use nginx as a reverse proxy and deploy multiple flv-live-stream processes behind nginx (binary programs written in C++). In my flv-live-stream program, clients maintain long-lived connections with nginx, and I count the video frames already sent to predict whether the client is playing back smoothly.
But I found there is a strange buffer in the upstream module. Even if the client loses 100% of packets, the back-end process can still send to nginx for 2~3 seconds, almost 2.5~3 MBytes.
Is there a method by which the response can be passed to the client synchronously, as soon as it is received from the back-end, and when nginx is unable to send data to the client (e.g. the client is losing packets), nginx immediately stops accepting data from the back-end?
I've already set:
listen 80 sndbuf=64k rcvbuf=64k;
proxy_buffering off;
fastcgi_buffering off;
Can anyone help? Thanks!

Nginx: Reverse proxying WebSocket Draft 76

I'm using nginx 1.4.0 and it deals perfectly fine with newer WebSocket versions, but Draft 76 is a problem. My backend (Netty-based Java application) doesn't seem to receive the handshake request, and in nginx's error log I have
[error] 7662#0: *3720 upstream timed out (110: Connection timed out) while reading response header from upstream
My configuration ($proxy_add_connection works the same way as described there)
include proxy_params;
proxy_pass http://127.0.0.1:8001/;
proxy_http_version 1.1;
proxy_set_header Connection $proxy_add_connection;
proxy_set_header Upgrade $http_upgrade;
If I connect directly to the backend, it works fine.
Is there anything I can do to fix it?
The recent changes to Nginx to support WebSocket proxying don't support WebSockets per se, but rather allow it to recognize a request to upgrade the connection from HTTP to another protocol. When it gets such a request it now establishes a tunnel to the backend rather than dropping the connection as invalid. The RFC6455 WebSocket handshake is a standard HTTP protocol upgrade request and as such it works with this new capability.
The draft 76/00 WebSocket handshake was designed specifically to break intermediaries that did not explicitly support WebSockets. As all that Nginx is doing is proxying the upgraded TCP connection, it doesn't actually understand the WebSocket handshake or what protocol version of WebSocket is being used. As such it has no way to perform the non-HTTP adjustments that the draft 76/00 handshake require.
To support draft 76/00 versions of WebSocket Nginx would have to implement special draft 76/00 detection and handling logic. Given the complexity of adding non-HTTP logic and the unfinished quality and questionable security of draft 76/00 it is unlikely that proxy intermediaries will ever support it.
If your users absolutely depend on 2-3 year old versions of Chrome/Safari, Flash fallback or raw TCP load balancing is likely your best bet.
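If you do go the raw TCP load-balancing route with nginx itself, the stream module (added in nginx 1.9.0, so newer than the 1.4.0 in the question, and only available if nginx was built with it) passes the connection through without touching the handshake. A minimal sketch with an assumed listen port:

stream {
    upstream ws_backend {
        server 127.0.0.1:8001;
    }

    server {
        listen 8080;
        proxy_pass ws_backend;
    }
}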
