Nginx single worker process flow

I am using a single worker process and master_process off in nginx.conf.
Now as per my understanding, the flow of operation would be something like:
The nginx master process is created, spawns a single worker process using fork(), and then the master process is killed.
Is that correct?
If yes, is it possible to avoid the fork?
Thread pools have only just been introduced as an experimental feature in nginx, so I don't want to use them. Is there any other way to avoid the fork?

I found the answer: when master_process is set to off, the master process does not fork any child processes, and all requests are processed by the master process itself.
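For reference, a minimal nginx.conf sketch of this single-process setup (directive names are from the nginx documentation; the values are illustrative, and single-process mode is intended for development and debugging, not production):

```nginx
# Single-process mode: no worker forked, requests handled by the one process.
master_process off;   # disable the master/worker split
worker_processes 1;   # would control the worker count in normal mode
daemon off;           # stay in the foreground, useful under a debugger

events {
    worker_connections 1024;
}
```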

Related

Openresty dynamic spawning of worker process

Is it possible to spawn a new worker process and gracefully shut down an existing one dynamically using Lua scripting in OpenResty?
Yes, but no.
Openresty itself doesn't really offer this kind of functionality directly, but it does give you the necessary building blocks:
nginx workers can be terminated by sending a signal to them
OpenResty allows you to read the PID of the current worker process,
LuaJIT's FFI allows you to use the kill() system call, or
using os.execute you can just invoke the kill command directly.
Combining those, you should be able to achieve what you want :D
Note: After reading the question again, I noticed that I really only answered the second part.
nginx uses a fixed number of worker processes, so you can only shut down running workers, which the master process will then restart; the total number stays the same.
If you just want to change the number of worker processes, you would have to restart the nginx instance completely (I just tried nginx -s reload -g 'worker_processes 4;' and it didn't actually spawn any additional workers).
However, I can't see a good reason why you'd ever do that. If you need additional threads, there's a separate API for that; otherwise, you'll probably just have to live with a hard restart.

nginx reverse proxy multiple backends multiple processes

I read the postings from this entry. I'm wondering if there is a way to run multiple instances in different processes. If I understand it correctly, running all instances from the same configuration file will create a common master and several worker processes. If the master process dies, then all instances die at the same time.
From my perspective it would be better to run them separately, so that each backend system gets its own master. Do you agree? How could I achieve that?
Thank you.
I think the better solution is to use an HA cluster. You'd use two master servers configured in exactly the same way, and a load balancer in front of them.
This configuration will mask the failure of one node and keep working in degraded mode.
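The balancer-in-front idea can itself be an nginx instance with an upstream block (a sketch; the addresses and thresholds below are made up):

```nginx
# Front-end balancer: spreads requests over two identically configured masters.
upstream backends {
    server 10.0.0.11:80 max_fails=3 fail_timeout=30s;  # master A
    server 10.0.0.12:80 max_fails=3 fail_timeout=30s;  # master B
}

server {
    listen 80;
    location / {
        proxy_pass http://backends;  # a failed node is skipped automatically
    }
}
```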

How to fork properly within an nginx/php-fpm environment?

I have a very simple PHP socket server that runs on my machine. I created a convenience class with simple methods like "restart", "stop", etc. to control the server once it is already running.
What the restart function does is it sends the server the command to stop and then it forks the process and starts a new instance of the socket server within the child process while the parent process returns to the caller.
This works great on the command line; however, I am trying to make an admin webpage which restarts the socket server, and the forking is causing problems in php-fpm. What appears to be happening is that the "page load" does not end when the parent process ends, and nginx/php-fpm do not reassign the worker to new connections until the forked process also ends.
In my case, I do not want the forked process to ever end, so I end up with a completely debilitated server. (In my test environment, for simplicity, I have the worker pool set to only 1; in a production environment this will be higher, but the issue would still leave one worker slot permanently occupied.)
I have tried a few things, including calling fastcgi_finish_request() just prior to forking, but this had no effect.
How can I restart my service from a php-fpm worker process without locking up an assignment in the nginx connection pool?
The solution was simple and elementary.
My child processes were not redirecting STDOUT and STDERR to /dev/null. So even though the parent process finished, the child process still held open file descriptors, which caused php-fpm to consider that connection in its pool as still active; it would therefore never be reassigned to new connections, because the child process runs continually.
Redirecting STDOUT and STDERR to /dev/null caused php-fpm to correctly reassign connections while allowing the child process (the socket server) to run forever. Case closed.
./some/command >/dev/null 2>&1
Should have seen this one a mile off...
(I solved this problem months ago, but haven't signed into this account in a long time ... it didn't take me 7 months to figure out I need to direct output to /dev/null!)
Sounds like you have your protocol design wrong. The server should be capable of restarting itself; there are quite a few examples of how to do that on the internet.
The client (your webpage) should not need to do any forking.
The server should also not run inside php-fpm, but be a console application that uses a daemon(3)-style interface to detach itself from the console.

Stop php script execution when a browser has been closed

I have my PHP app running on Nginx & PHP-FPM.
When I used Apache, an aborted request (the browser closing) terminated the PHP process, but now the script continues executing to the end. Nginx's fastcgi_ignore_client_abort option is off, and I do not use the fastcgi_finish_request function.
What could be the reason for this behaviour? And how can I tell PHP that the request was aborted?
FastCGI keeps processes open and only closes the handle for the particular request within the process. This is one of the main differences between FastCGI and regular CGI. Also, PHP has no knowledge of the browser at all.
This is the reason FastCGI typically provides greater performance than mod_php: persistent worker processes, as opposed to forking, mean there's no overhead of starting up an Apache process for each request (or shutting it down).
You can configure the number of children running to tune resource consumption; see the process manager documentation.

Applications of fork system call

fork is used to create a copy of the process from which it is called.
This is typically followed by a call to one of the exec family of functions.
Are there any uses of fork other than this?
I can think of one. Doing IPC with pipe functions.
Yes, of course. It's quite common to start a process, do some data initialization, and then spawn multiple workers. They all have the same data in their address space, and it's copy-on-write.
Another common thing is to have the main process listen to a TCP socket and fork() for every connection that comes in. That way new connections can be handled immediately while existing connections are handled in parallel.
I think you're forgetting that after a fork(), both processes have access to all data that existed in the process before the fork().
Another use of fork is to detach from the parent process (falling back to init, process 1). If some process, say bash with PID 1111, starts myserver, which gets PID 2222, it will have 1111 as its parent. Assume 2222 forks and the child gets PID 3333. If process 2222 now exits, 3333 will lose its parent and instead get init as its new parent.
This strategy is sometimes used by daemons when starting up, in order not to have a parent relationship with the process that started them. See also this answer.
If you have some kind of server listening for incoming connections, you can fork a child process to handle the incoming request (which will not necessarily involve exec or pipes).
A "usage" of fork is to create a Fork Bomb
I have written a small shell, and it was full of forks (yes, followed by exec), especially for piping elements. See the Wikipedia page on pipes.
