How to make the nginx upstream module send responses to the client synchronously - nginx

I've set up a live broadcast website. I use nginx as a reverse proxy and deploy multiple flv-live-stream processes behind nginx (binary programs written in C++). Clients maintain long-lived connections with nginx. In my flv-live-stream program, I count the video frames already sent in order to estimate whether the client is playing smoothly.
But I found there is a strange buffer in the upstream module. Even if the client loses 100% of its packets, the back-end process can still send data to nginx for 2~3 seconds, almost 2.5~3 MB.
Is there a way to pass the response to the client synchronously, as soon as it is received from the back-end, so that when nginx is unable to send data to the client (e.g. the client is losing packets), nginx immediately stops accepting data from the back-end?
I've already set:
listen 80 sndbuf=64k rcvbuf=64k;
proxy_buffering off;
fastcgi_buffering off;
Can anyone help? Thanks!
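For what it's worth, here is roughly how those directives fit together in a single server block. Note that even with proxy_buffering off, nginx still reads from the upstream through one buffer of proxy_buffer_size, and the kernel socket buffers on both connections add further slack, which may account for part of the 2.5~3 MB. The addresses and sizes below are illustrative, not from the question:

server {
    listen 80 sndbuf=64k rcvbuf=64k;

    location /live {
        # hypothetical flv-live-stream backend address
        proxy_pass http://127.0.0.1:8080;

        proxy_buffering off;     # relay upstream data to the client as soon as it arrives
        proxy_buffer_size 8k;    # the single read buffer nginx still uses when buffering is off
    }
}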

Related

nginx tcp stream (k8s) - keep client connection open when upstream closes

I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that, from my client's perspective, when the upstream server app closes the connection, my connection is closed and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name="tcp-my-namespace-my-service-7550";
    }
    listen 7550;
    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;
    proxy_pass upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However:
Nginx has a detailed understanding of HTTP.
HTTP is a message-based protocol, i.e. it uses requests and replies.
Since nginx knows nothing about the protocol you are using, even if it uses a request/reply mechanism with no implied state, nginx does not know whether it has received a complete request, so it cannot replay it elsewhere.
You need to implement a protocol-aware MITM (man-in-the-middle) proxy.
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
I think you need to configure your Nginx Ingress to enable the keepalive option as listed in the documentation here. For instance, in your nginx configuration:
...
keepalive 32;
...
This will activate the keepalive functionality, caching up to 32 idle upstream connections per worker process.
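For reference, in a hand-written nginx configuration the keepalive directive sits inside an upstream block of the http module, roughly like this (the upstream name and server addresses are illustrative):

upstream my_backend {
    server 10.0.0.1:7550;
    server 10.0.0.2:7550;
    keepalive 32;   # keep up to 32 idle connections to the backends per worker process
}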

Handle a request with nginx without proxy_pass or FastCGI

I have an nginx server with multiple virtual hosts. One of them is for autodiscover, and it is only called when someone tries to log in with their email address in an Outlook client. This happens less than once per month.
I want nginx to run my program to receive the request and send the response. I know this can be handled with proxy_pass or FastCGI, but the problem is that my program would have to run and listen for a long time without doing anything, which is overhead for the main virtual host.
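For context, the FastCGI setup referred to above would look roughly like this; the server name and socket path are illustrative, and the concern raised in the question is that the process behind fastcgi_pass has to stay resident:

server {
    server_name autodiscover.example.com;   # hypothetical autodiscover virtual host

    location / {
        include fastcgi_params;
        # hand the request to a long-running FastCGI process over a unix socket
        fastcgi_pass unix:/var/run/autodiscover.sock;
    }
}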

nginx treats websocket API data as http requests

I'm trying to set up a reverse proxy for an API at work with NGINX and node.js using AWS Lightsail, but NGINX doesn't appear to be handling the initial setup of the web socket connection correctly.
When I look in my access.log/error.log files, I can see that
1. There are no errors
2. The JSON-formatted data I'm sending across my connection is visible inside the access.log file, which I don't think should show up there.
At first glance, it looks like nginx is trying to handle my data as if it were an HTTP request.
Using the net module from node, I receive this response on my client side app indicating that something went wrong, which makes sense if we assume that nginx is trying to handle my API data (JSON) as an http request.
Received: HTTP/1.1 400 Bad Request
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 06 Oct 2019 15:59:58 GMT
Content-Type: text/html
Content-Length: 182
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
The client side websocket, which thinks it's receiving JSON, immediately throws an error and closes.
It looks to me like NGINX is failing to redirect API data to node.js, but I really don't know why.
I've tried just about everything in my configuration files to get this working. This setup got me to where I am now.
server {
    listen 80;
    server_name xx.xxx.xxx.xx;
    location / {
        proxy_pass http://localhost:4000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade upgrade;
        proxy_set_header Connection upgrade;
    }
}
I've already confirmed that the API works when I open up port 4000 (the one node.js is listening on). When I switch back to port 80, the client connection callback function fires. This at least superficially indicates that the initial connect has taken place. From there everything stops working though.
EDIT: I can't find any reference to an initial HTTP request in Wireshark, and Fiddler doesn't seem to detect any requests at all from my client-side node process.
My problem was that I was using the Node net module, which does NOT implement WebSockets; instead it creates an interface for plain TCP. This is really important because these two things are VERY different. TCP operates at a fundamentally lower level than HTTP, and certainly much lower than WebSockets, which start out as HTTP connections and are then upgraded to create a WebSocket connection.
This can be very confusing because, when you're working on localhost, these TCP connections will seemingly do exactly what you want. The problems begin when you try to set up a reverse proxy or something similar in Nginx or Apache. Neither of these is meant to be used at the level of raw TCP; they operate within the domain of HTTP instead. So simply put, trying to use TCP sockets behind a reverse proxy will lead to nothing but frustration, and as far as I'm aware, is actually impossible within the context of Apache and Nginx.
If you're looking for an implementation of web sockets, check out the WS (short for web sockets) module on NPM, which was what I actually needed.
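For completeness, once the client actually speaks the WebSocket protocol (for example via the ws module), the nginx side of the upgrade is usually configured along these lines; the map block is the pattern from the nginx WebSocket proxying documentation, and the backend port is taken from the question:

# map the client's Upgrade header to the Connection header sent upstream;
# fall back to "close" when the request is not an upgrade
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;

    location / {
        proxy_pass http://localhost:4000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}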

Is there a way to make nginx terminate a websocket connection and pass only the socket stream to a server?

Basically what I'm trying to do is have a secure websocket connection start life at a client, go through nginx where nginx would terminate the tls, and instead of just proxying the websocket connection to a server, have nginx handle the websocket upgrade and just send the socket stream data to a tcp server or a unix domain socket.
Is that possible with the existing nginx modules and configuration?
proxy_pass can connect to a server via a unix domain socket
proxy_pass http://unix:/tmp/backend.socket:/uri/;
But the implication is that it still speaks http over the unix domain socket and the server is responsible for handling the websocket upgrade. I'm trying to get nginx to do the upgrading so that only the raw socket stream data gets to my server.
Sorta like a mix between proxy_pass and fastcgi_pass.
Do I have to modify one of these modules to make that possible or is there some way to configure this to work?
So what I eventually came to realize is that proxies just proxy and don't parse protocols. There's nothing built into nginx (although mod_ws in apache might do it) that can actually process the websockets protocol, the nginx proxy function just forwards the stream to the back end server. I'm working on another approach for this as the hope of having the webserver do the heavy lifting is not going to work easily.

How to use nginx or apache to process tcp inbound traffic and redirect to specific php processor?

This is the main idea: I want to use the NGINX or Apache web server as a TCP processor, so that it manages all the threads, connections, and client sockets. All packets received on a port, let's say port 9000, will be redirected to a program written in PHP or Python, and that program will process each request and store the data in a database. The big problem is that this program also needs to send data back to the client socket that is currently connected to the NGINX or Apache server. I've been told I should do something like this instead of creating my own TCP server, which is too difficult and very hard to maintain, since socket handling under heavy load can lead to memory faults or even crash the server. I have done it before, and in fact the server crashed.
Any ideas how to achieve this?
Thanks.
Apache/nginx are web servers and can be used to serve static content to your customers and to forward application service requests to other application servers.
I only know about Django, and here is a sample nginx configuration from Configuration for Django, Apache and Nginx:
location / {
    # proxy / requests to apache running django on port 8081
    proxy_pass http://127.0.0.1:8081/;
    proxy_redirect off;
}

location /media/ {
    # serve static media directly from nginx
    root /srv/anuva_project/www/;
    expires 30d;
    break;
}
Based on this configuration, nginx serves local static data for URLs under /media/* and forwards other requests to the Django server at localhost port 8081.
I have the feeling HAProxy is a tool better suited to your needs, which apparently have to do with TCP rather than HTTP. You should at least give it a try.
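For comparison, nginx's stream module can also forward raw TCP, though the PHP/Python program behind it would still have to run its own long-lived listener; a minimal sketch with illustrative addresses:

stream {
    server {
        listen 9000;
        # forward the raw TCP stream to the long-running application listener
        proxy_pass 127.0.0.1:9001;
    }
}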
