Handle a request with nginx without proxy_pass or FastCGI

I have an nginx server with multiple virtual hosts. One of them is for autodiscover, and it is only called when someone tries to log in with their mail account in an Outlook client. This happens less than once per month.
I want nginx to run my program to receive the request and send the response. I know this can be handled with proxy_pass or FastCGI, but the problem is that my program would have to run and listen for a long time without doing anything, which is overhead alongside the main virtual host.
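For reference, a minimal sketch of the FastCGI route using fcgiwrap, so that the application itself only runs when a request arrives (fcgiwrap stays resident, but it is a single small generic process; the hostname, socket path, and handler path below are hypothetical):

server {
    listen 80;
    server_name autodiscover.example.com;  # hypothetical vhost name

    location / {
        # fcgiwrap spawns the handler once per request, so nothing
        # application-specific keeps running between requests.
        fastcgi_pass unix:/var/run/fcgiwrap.socket;  # hypothetical socket path
        fastcgi_param SCRIPT_FILENAME /usr/local/bin/autodiscover-handler;  # hypothetical handler
        include fastcgi_params;
    }
}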

Related

nginx tcp stream (k8s) - keep client connection open when upstream closes

I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that, from my client's perspective, when the upstream server app closes the connection, my connection is closed and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name = "tcp-my-namespace-my-service-7550";
    }

    listen 7550;
    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;
    proxy_pass upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However:
Nginx has a detailed understanding of HTTP.
HTTP is a message-based protocol, i.e. it uses requests and replies.
Since nginx knows nothing about the protocol you are using, even if that protocol uses a request/reply mechanism with no implied state, nginx does not know whether it has received a complete request, nor whether it is safe to replay it to another upstream.
You need to implement a protocol-aware man-in-the-middle.
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
I think you need to configure your Nginx Ingress to enable the keepalive option as listed in the documentation. For instance, in your nginx configuration:
...
keepalive 32;
...
This will activate keepalive functionality with a cache of up to 32 idle connections kept open at a time.

How to set nginx upstream module response to client synchronously

I'm setting up a live broadcast website. I use nginx as a reverse proxy and deploy multiple flv-live-stream processes behind nginx (binary programs written in C++). Clients maintain long connections with nginx, and in my flv-live-stream program I count the video frames already sent to predict whether the client is playing back smoothly.
But I found there is a hidden buffer in the upstream module. Even if the client loses 100% of packets, the back-end process can still send data to nginx for 2~3 seconds, almost 2.5~3 MB.
Is there a way to pass the response to the client synchronously, as soon as it is received from the back-end? And when nginx is unable to send data to the client (e.g. the client is losing packets), nginx should stop accepting data from the back-end immediately.
I've already set:
listen 80 sndbuf=64k rcvbuf=64k;
proxy_buffering off;
fastcgi_buffering off;
Can anyone help? Thanks!
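For what it's worth, even with proxy_buffering off nginx still reads upstream data into a single buffer sized by proxy_buffer_size, and the kernel socket buffers on both sides hold additional in-flight data, which may account for the megabytes observed. A sketch that shrinks all of these, assuming a hypothetical backend on 127.0.0.1:8080 (values are illustrative):

server {
    # Small kernel socket buffers toward the client.
    listen 80 sndbuf=32k rcvbuf=32k;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # Relay each chunk to the client as soon as it arrives.
        proxy_buffering off;
        # Even unbuffered, nginx reads into one buffer of this size.
        proxy_buffer_size 4k;
    }
}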

Understanding the PHP pipeline when using nginx and php-fpm

So I'm trying to understand how the PHP pipeline works from request to response, specifically when using nginx and php-fpm.
I'm coming from a Java/.NET background, where normally, once the process receives the request, it uses threads, etc. to handle the request/response cycle.
With PHP/nginx, I noticed the fpm process is set up like:
location / {
include /path/to/php-fpm;
}
Here are a few questions I have:
When nginx receives a request, does php-fpm take over? If so, at what point?
Does each request spawn another process/thread?
When you make a change to a PHP source code file, do you have to reload? If not, does that mean the source code is parsed on every incoming request?
Any other interesting points about how a PHP request is served would be great.
The configuration in your post is irrelevant, as include /path/to/php-fpm; is just the inclusion of an nginx configuration subpart.
It doesn't take over anything; the request is passed from nginx to php-fpm with fastcgi_pass, and nginx waits for the reply to come back while serving other requests in the meantime.
Nginx uses the reactor pattern, so requests are served by a limited number of processes (usually the same number as the CPU cores available on the machine). It's an event-driven web server that uses event polling to handle many requests per process (asynchronously). On the other side, php-fpm uses a process pool to execute PHP code.
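A short sketch of the nginx side of that model (values are illustrative):

# One worker process per CPU core; each worker multiplexes many
# connections via event polling instead of one thread per request.
worker_processes auto;

events {
    worker_connections 1024;
}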
No, you don't, because there's no caching anywhere unless you set up browser caching headers or a server-side cache. The PHP source is not re-read from disk on every request if the file is unchanged and frequently accessed, thanks to OS caching; but when the file content changes, yes, it will be parsed again, as any normal file would be.
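For reference, the included configuration subpart typically looks something like this (a sketch; the php-fpm socket path varies by distribution):

location ~ \.php$ {
    # Hand the request to the php-fpm pool over FastCGI.
    fastcgi_pass unix:/var/run/php-fpm.sock;  # path varies by distribution
    fastcgi_index index.php;
    # Tell php-fpm which script file to execute.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}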

How can I configure nginx to send multiple POST requests in one connection

I am developing an upload application.
I use Google Chrome to upload a big file (GBs) and nginx to pass the file to my backend application.
Using Wireshark, I found that Chrome sends the file in one connection with multiple POST requests.
But nginx splits up the POST requests and sends each one to the backend application in a different connection.
How can I configure nginx to send all the POST requests in one connection, rather than one connection per POST request?
Oh my god, it's pathetic!
The solution is simply to enable nginx upstream keepalive.
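A minimal sketch of what that looks like, assuming a hypothetical backend at 127.0.0.1:8080 (the upstream name, location, and keepalive count are placeholders):

upstream backend {
    server 127.0.0.1:8080;
    # Keep up to 16 idle connections to the backend per worker.
    keepalive 16;
}

server {
    listen 80;

    location /upload {
        proxy_pass http://backend;
        # Upstream keepalive requires HTTP/1.1 and clearing the
        # Connection header that nginx would otherwise pass along.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}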

How to use nginx or Apache to process inbound TCP traffic and redirect it to a specific PHP processor?

This is the main idea: I want to use the NGINX or Apache web server as a TCP processor, so that it manages all the threads, connections, and client sockets. All packets received on a port, let's say port 9000, would be redirected to a program written in PHP or Python, and that program would process each request, storing the data in a database. The big problem is that this program also needs to send data back to the client or socket that is currently connected to the NGINX or Apache server. I've been told I should do something like this instead of creating my own TCP server, which is difficult and very hard to maintain, since socket communication under huge loads can lead to memory faults or even crash the server. I have done it before, and in fact the server did crash.
Any ideas how to achieve this? Thanks.
Apache/nginx are web servers that can serve static content to your customers and forward application service requests to other application servers.
I only know about Django; here is a sample nginx configuration from Configuration for Django, Apache and Nginx:
location / {
    # proxy / requests to apache running django on port 8081
    proxy_pass http://127.0.0.1:8081/;
    proxy_redirect off;
}

location /media/ {
    # serve static media directly from nginx
    root /srv/anuva_project/www/;
    expires 30d;
    break;
}
Based on this configuration, nginx serves local static data for URLs under /media/* and forwards all other requests to the Django server at localhost port 8081.
I have the feeling HAProxy is a tool better suited to your needs, which apparently involve TCP rather than HTTP. You should at least give it a try.
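For completeness, modern nginx can also relay raw TCP with the stream module, which is closer to what the question asks for. A minimal sketch, assuming a hypothetical worker process listening on 127.0.0.1:9001:

# Top-level context, alongside (not inside) the http { } block.
stream {
    server {
        # Accept raw TCP on port 9000 and relay it to the worker.
        listen 9000;
        proxy_pass 127.0.0.1:9001;
    }
}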
