Understanding the PHP pipeline when using nginx and PHP-FPM

So I'm trying to understand how the PHP pipeline works from request to response, specifically when using nginx and PHP-FPM.
I'm coming from a Java/.NET background, where the server process that receives the request normally uses threads etc. to handle the request/response cycle.
With PHP/nginx, I noticed the FPM process is set up like:
location / {
    include /path/to/php-fpm;
}
Here are a few questions I have:
When nginx receives a request, does php-fpm take over, and if so, at what point?
Does each request spawn another process/thread?
When you make a change to a PHP source file, do you have to reload? If not, does that mean the source code is parsed each time a request comes in?
Any other interesting points about how a PHP request is served would be great.

The configuration in your post is irrelevant, as include /path/to/php-fpm; is just the inclusion of an nginx configuration subpart.
PHP-FPM doesn't take over anything; the request is passed from nginx to php-fpm with fastcgi_pass, and nginx waits for the reply to come back while serving other requests in the meantime.
Nginx uses the reactor pattern, so requests are served by a limited number of processes (usually the same number as there are CPU cores on the machine). It's an event-driven web server that uses event polling to handle many requests per process (asynchronously). On the other side, php-fpm uses a process pool to execute PHP code.
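To make the hand-off concrete, a more typical nginx-to-PHP-FPM wiring looks roughly like the sketch below. The socket path is an assumption (distributions place it differently; TCP on 127.0.0.1:9000 is also common):

```nginx
# Hand .php requests to the PHP-FPM pool; nginx stays free to serve
# other connections while it waits for the FastCGI reply.
location ~ \.php$ {
    include fastcgi_params;                    # standard FastCGI variables
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;   # path varies by distro; or 127.0.0.1:9000
}
```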
No, you don't have to reload. There's no HTTP caching anywhere unless you set up client caching headers or a server-side cache. Without an opcode cache, PHP does parse the source on each request; the OS file cache only means a frequently accessed, unchanged file is read from memory rather than disk. With an opcode cache (e.g. OPcache), the compiled form is reused until the file's content changes, at which point it is parsed again, as a normal file would be.

Related

Handle a request with nginx without proxy_pass or FastCGI

I have nginx with multiple virtual hosts. One of them is for autodiscover, and it is only called when someone tries to log in with their mail account in an Outlook client. This happens less than once per month.
I want nginx to run my program to receive the request and send the response. I know this can be handled with proxy_pass or FastCGI, but the problem is that my program would have to run and listen for a long time without doing anything, which is overhead for the main virtual host.

Nginx: capture post requests when upstream is offline

I'm using Nginx as a reverse proxy for a Ruby on Rails application.
The application has 2 critical endpoints that are responsible for capturing data from customers who're registering their details with our service. These endpoints take POST data from a form that may or may not be hosted on our website.
When our application goes down for maintenance (rare, but we have a couple of SPOF services), I would like to ensure the POST data is captured so we don't lose it forever.
Nginx seems like a good place to do this given that it's already responsible for serving requests to the upstream Rails application, and has a custom vhost configuration in place that serves a static page for when we enable maintenance mode. I figured this might be a good place for additional logic to store these incoming POST requests.
The issue I'm having is that Nginx doesn't parse POST data unless you're pointing it at an upstream server. In the case of our maintenance configuration, we're not; we're just rendering a maintenance page. This means that $request_body¹ is empty. We could perhaps get around this by faking a proxy server, or maybe even pointing Nginx at itself and enabling the logger on a particular location. This seems hacky though.
Am I going about this the wrong way? I've done some research and haven't found a canonical way to solve this use-case. Should I be using a third-party tool and not Nginx?
1: from ngx_http_core_module: "The variable’s value is made available in locations processed by the proxy_pass, fastcgi_pass, uwsgi_pass, and scgi_pass directives when the request body was read to a memory buffer."
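One hedged way around that limitation is the "point nginx at itself" idea mentioned above: during maintenance, proxy the critical POST locations to a tiny internal sink so nginx reads the body, then log $request_body at the proxying location. Everything here is illustrative (the port, log path, log format name, and /signup location are invented), and it only works for bodies small enough to fit in the memory buffer, per the footnote:

```nginx
# In the http context: a log format that records the raw POST body.
log_format capture '$time_iso8601 $request_uri $request_body';

# Internal sink whose only job is to answer; the port is arbitrary.
server {
    listen 127.0.0.1:8999;
    return 503;
}

# In the maintenance vhost: proxying forces nginx to read the request
# body, so $request_body is populated when the access log is written.
location /signup {
    access_log /var/log/nginx/captured_posts.log capture;
    proxy_pass http://127.0.0.1:8999;
}
```

The captured log lines could then be replayed against the Rails application once maintenance ends.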

Nginx - Reacting to Upstream Response

I am using nginx as a reverse proxy for file storage uploads with an external provider.
When I am processing a file upload, I need to keep track (in my database) of whether an upload was successful before returning the response to the user. I would therefore like to use the ngx.location.capture method provided by the lua-nginx-module to tell my backend about the outcome of the request. Since I need to wait for the response of the upstream server, I can only issue the capture in header_filter_by_lua. Unfortunately, I cannot issue any outward communication from header_filter_by_lua: ngx.location.capture, ngx.socket.*, and ngx.exec are only available while the response has not yet arrived.
How can I react to an upstream response in nginx?
Other approaches I've thought about:
Have a script watch the access log and then issue a curl request. (Seems like there should be an easier way)
Initially send the file via ngx.location.capture in content_by_lua (I don't think this would handle file sizes of up to 5 GB)
Help is appreciated :)
For the /upload location, use:
content_by_lua_file with the resty.upload module
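A rough sketch of that suggestion, with every path and location name invented for illustration. resty.upload streams the request body in chunks, so large files never sit in memory at once, and because this all runs in the content phase, a subrequest to report the outcome is allowed:

```nginx
location /upload {
    content_by_lua_block {
        local upload = require "resty.upload"   -- lua-resty-upload
        local form, err = upload:new(4096)      -- 4 KB read chunks
        if not form then
            ngx.log(ngx.ERR, "failed to init upload: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        -- ... stream chunks from form:read() to the storage provider ...
        -- Then, still in the content phase, a subrequest to a
        -- hypothetical tracking endpoint on the backend is permitted:
        ngx.location.capture("/internal/track", { args = { ok = 1 } })
        ngx.say("uploaded")
    }
}
```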

Buffered uploading via HTTP Proxy

I am trying to solve an issue with uploads to our web infrastructure.
When a user uploads media to our site, it is proxied (via our Web Proxy tier) to a Java backend with a limited number of threads. When a user has a slow connection or a large upload, this holds one of the Java threads open a long period of time, reducing overall capacity.
To mitigate this, I'd like to implement an 'upload proxy' which will accept the entire HTTP POST data of the upload and, only once it has received all of the data, quickly proxy that POST to the Java backend, pushing the problem of the long-held upload thread onto an HTTP proxy.
Initially I found Apache Traffic Server has a 'buffer_upload' plugin, but it seems a bit bleeding edge and has no support for regex in URLs, although it would solve most of my issues.
Does anyone know a proxy product that would be able to do what I am suggesting (aside from Apache Traffic Server)?
I see that nginx has fairly detailed buffer settings for proxying, but it doesn't seem (from the docs/explanations) to wait for the whole POST before opening a backend connection/thread. Do I have this right?
Cheers,
Tim
Actually, nginx always buffers requests before opening a connection to the backend. It is possible to turn off response buffering with the proxy_buffering directive, or by setting an X-Accel-Buffering response header for per-response buffering control.
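For completeness, the directives involved look roughly like this. The sizes, location, and upstream name are illustrative; note that on newer nginx (1.7.11+) request-side buffering is also controllable explicitly via proxy_request_buffering:

```nginx
location /upload {
    client_max_body_size      5g;   # allow large uploads
    client_body_buffer_size   1m;   # bodies larger than this spill to a temp file
    proxy_request_buffering   on;   # read the whole body before contacting the backend (the default)
    proxy_buffering           off;  # stream the response back instead of buffering it
    proxy_pass http://java_backend; # upstream name is illustrative
}
```

With request buffering on, the slow client only ties up nginx, and the Java thread is occupied just for the fast local hand-off.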

What is the point of sub request at all?

It seems most web servers support subrequests.
I found one question here about subrequests:
Subrequest for PHP-CGI
But what is the point of subrequests at all? When is that kind of thing actually useful?
Are they defined in the HTTP protocol?
Apache subrequests can be used in your (e.g. PHP) application with virtual() to access resources from the same server. The resource is processed by Apache just as a normal request would be, but you don't have the overhead of sending a full HTTP request over the network interface.
Less overhead is probably the only reason one would want to use a subrequest instead of a real HTTP request.
Edit: The resources are processed by Apache, which means that Apache modules are used if configured. You can request a mod_perl- or mod_ruby-processed resource from PHP.