Openresty dynamic spawning of worker process - nginx

Is it possible to spawn a new worker process and gracefully shut down an existing one dynamically using Lua scripting in OpenResty?

Yes but No
OpenResty itself doesn't offer this kind of functionality directly, but it does give you the necessary building blocks:
nginx workers can be terminated by sending a signal to them,
OpenResty allows you to read the PID of the current worker process,
LuaJIT's FFI allows you to use the kill() system call, or
using os.execute you can call the kill command directly.
Combining those, you should be able to achieve what you want :D
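A minimal Lua sketch of that combination, assuming you want the current worker to finish its in-flight requests and let the master respawn a replacement (ngx.worker.pid() returns the current worker's PID; SIGQUIT, signal 3 on Linux, is nginx's graceful-shutdown signal for workers):

-- Ask the current worker to shut down gracefully; the master will start a new one.
local pid = ngx.worker.pid()

-- Option 1: shell out to kill(1)
os.execute("kill -QUIT " .. pid)

-- Option 2: call kill(2) through LuaJIT's FFI, avoiding a shell
local ffi = require("ffi")
ffi.cdef[[ int kill(int pid, int sig); ]]
ffi.C.kill(pid, 3)  -- 3 = SIGQUIT on Linux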
Note: After reading the question again, I noticed that I really only answered the second part.
nginx uses a set number of worker processes, so you can only shut down running workers, which the master process will then restart; the total number stays the same.
If you just want to change the number of worker processes, you would have to restart the nginx instance completely (I just tried nginx -s reload -g 'worker_processes 4;' and it didn't actually spawn any additional workers).
However, I can't see a good reason why you'd ever do that. If you need additional threads, there's a separate API for that; other than that, you'll probably just have to live with a hard restart.

Related

NGiNX single worker process-Flow

I am using a single worker process and master_process off in nginx.conf.
Now as per my understanding, the flow of operation would be something like:
The NGiNX master process is created, which spawns a single worker process using fork(), and then the master process gets killed.
Is that correct ?
If yes, then is it possible to avoid forking?
Pthreads has only just been introduced as a testing feature in NGiNX, so I don't want to use it. Is there any other way to avoid fork()?
I got the answer: when master_process is set to off, the master process does not fork any child processes, and all requests are processed by the master process itself.
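For reference, a minimal nginx.conf sketch of that single-process setup (the port and response are illustrative; master_process off is meant for development and debugging, not production):

# Run nginx as a single process: no fork(), the master serves requests itself.
master_process off;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        location / {
            return 200 "handled by the single nginx process";
        }
    }
}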

How to fork properly within an nginx/php-fpm environment?

I have a very simple PHP socket server that runs on my machine. I created a convenience class with simple methods like "restart" and "stop" to control the server once it is already running.
The restart function sends the server the command to stop, then forks and starts a new instance of the socket server in the child process, while the parent process returns to the caller.
This works great on the command line; however, I am trying to make an admin webpage that restarts the socket server, and the forking is causing problems in php-fpm. What appears to be happening is that the "page loading" does not end when the parent process ends, and nginx/php-fpm does not reassign the worker to new connections until the forked process also ends.
In my case, I do not want the forked process to ever end, so I end up with a completely debilitated server. (In my test environment, for simplicity, I have the worker pool set to only 1; in a production environment this will be higher, but this issue would lead to one worker slot being permanently occupied.)
I have tried a few things, including calling fastcgi_finish_request() just prior to the forking, but this had no effect.
How can I restart my service from a php-fpm worker process without locking up an assignment in the nginx connection pool?
The solution was simple and elementary.
My child processes were not redirecting STDOUT and STDERR to /dev/null. So even though the parent process finished, the child process still held those file descriptors open, which caused php-fpm to consider that connection in its pool still active; it was therefore never reassigned to new connections, because the child process runs continually.
Redirecting STDERR and STDOUT to /dev/null caused php-fpm to correctly reassign connections while simultaneously allowing the child process (the socket server) to run forever. Case closed.
./some/command >/dev/null 2>&1
Should have seen this one a mile off...
(I solved this problem months ago, but haven't signed into this account in a long time ... it didn't take me 7 months to figure out I needed to direct output to /dev/null!)
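To make the fix concrete, a rough PHP sketch of launching the long-running socket server from the admin page with its output detached; the script path is an illustrative placeholder, not from the original code:

<?php
// Sketch: relaunch the socket server from a php-fpm request.
// Redirecting the child's STDOUT/STDERR to /dev/null and backgrounding it
// lets php-fpm treat this request as finished once the script returns.
$server = '/path/to/socket-server.php';  // illustrative placeholder
exec('php ' . escapeshellarg($server) . ' >/dev/null 2>&1 &');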
Sounds like you have your protocol design wrong. The server should be capable of restarting itself. There are quite a few examples of how to do that on the internet.
The client (your webpage) should not need to do any forking.
The server should also not run inside php-fpm, but be a console application that uses a daemon(3) type interface to detach itself from the console.
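If you go that route, a rough sketch of a daemon(3)-style detach in PHP (CLI only; assumes the pcntl and posix extensions are available):

<?php
// Rough daemon(3)-style detach: run once at startup of the console server.
function daemonize()
{
    $pid = pcntl_fork();
    if ($pid < 0) {
        exit(1);           // fork failed
    }
    if ($pid > 0) {
        exit(0);           // parent exits; the child keeps running
    }
    posix_setsid();        // new session: detach from the controlling terminal
    chdir('/');
    fclose(STDIN);         // drop the inherited standard streams
    fclose(STDOUT);
    fclose(STDERR);
}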

Potential Concerns in Stopping Meteor Ungracefully

Just getting into Meteor, which by many accounts seems like a great project. One potential issue (which it may not be) is that there doesn't seem to be a meteor stop command or another programmatic way to shut down Meteor gracefully. Please let me know if I am wrong about this!
Are there potential concerns about maintaining database integrity (for example), if we interrupt the process using CTRL-C or shutting it down via an Activity Monitor? And are there steps we can take to reduce or eliminate such issues?
Caveat: I recognize the above questions are somewhat vague, and I understand that this is usually considered harmful on Stack, but I hope they are still answerable ones.
Thanks,
It does look like there is a cleanup which takes place before the process is terminated (https://github.com/meteor/meteor/blob/master/tools/cleanup.js).
The first signal sent is SIGINT, which is a polite way to ask the process to shut down (and gives it time to finish its last running thread).
As for database integrity, the mongod process also tries to clean itself up before it shuts down, and it has a recovery mechanism (replaying the journal files) for a quick recovery on restart if it is forced to shut down.
That being said, in the middle of a longer-running thread I'm not too sure whether it's allowed to finish or is killed immediately. But Meteor does attempt to give it a chance at a graceful termination first, and then escalates to a SIGHUP and finally a SIGTERM (which is still a graceful termination signal). At no point does Meteor force or send a SIGKILL or SIGSTOP.
So Meteor apps should be safe from Ctrl+C termination. With Activity Monitor termination it depends on what type of signal it's sent (i.e. Force Quit or just Quit).
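For completeness, a minimal Node sketch (not Meteor-specific) of hooking those graceful signals so an app can do its own cleanup before exiting; closeConnections() is an illustrative placeholder:

// Run cleanup on the graceful signals described above before exiting.
['SIGINT', 'SIGHUP', 'SIGTERM'].forEach(function (sig) {
  process.on(sig, function () {
    console.log('received ' + sig + ', shutting down gracefully');
    // closeConnections();  // e.g. flush and close DB handles, sockets, timers
    process.exit(0);
  });
});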
So to add some closure to this: if your MongoDB is externally managed, i.e. on a production deployment server, Meteor doesn't stop it, as mongo-runner.js notes:
// Since it is externally managed, asking it to actually stop would be
// impolite, so our stoppable handle is a noop
if (process.env.MONGO_URL) {
  launch_callback();
  return handle;
}

php5-fpm crashes

I have a webserver (nginx) running on Debian, and php5-fpm randomly seems to crash; it replies with 504 Bad Gateway when I call PHP files.
When it is in a crashed state and I do sudo /etc/init.d/php5-fpm, it says that it is running, but it still gives 504 Bad Gateway until I do sudo /etc/init.d/php5-fpm.
I'm thinking that it may have to do with one of my PHP files, which sits in an infinite loop until a certain event occurs (a change in the MySQL database) or until it times out. I don't know if that is generally a good thing, or if I should make the loop quit itself before a timeout occurs.
Thanks in advance!
First, look at the nginx error.log for the actual error. I don't think PHP crashed; your loop is just using up all available php-fpm processes, so there is none free to serve the next request from nginx. That should produce a timeout error in the logs (nginx will wait some time for an available php-fpm process).
Regarding your second question: you should not use infinite loops for this. If you do, insert a sleep() call inside the loop; otherwise you will overload your CPU with the loop and your database with queries.
Also, I guess it is enough to have one PHP process in that loop waiting for an event. In that case, use some type of semaphore (a file, or a flag in the database) to let other processes know that one is already waiting for that event. Otherwise you will always eat up all available PHP processes.
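A rough PHP sketch of that advice, using a lock file as the semaphore; the lock path and the checkEventInDb() helper are illustrative placeholders:

<?php
// Only one process polls for the event; everyone else bails out immediately.
$lock = fopen('/tmp/event-poller.lock', 'c');
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit; // another process already holds the lock and is waiting
}

while (!checkEventInDb()) {  // e.g. a cheap SELECT against the MySQL table
    sleep(1);                // don't hammer the CPU or the database
}

flock($lock, LOCK_UN);
fclose($lock);

Even with the lock and the sleep, the waiting process still ties up one php-fpm worker for its whole lifetime, which is why the answer advises against infinite loops here in the first place.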

Node.JS: Converting tcp to stdin/stdout

Node.JS seems limited in its ability to live-update code and in its ability to automatically isolate exceptions, both of which come practically by default in Java.
One very effective way to live-update is to have a listener process that simply echoes communication to/from a child process. To update, the listener starts up a new child (which reads the updated code automatically), starts sending requests to the new child, and ends the old child when all of its requests are complete.
Is there already a system that provides this HTTP functionality through stdout/stdin?
Is there a system that provides TCP server or UDP server functionality through stdout/stdin?
By this I mean, providing a module that looks like the http or net module with the exception that it uses stdout/stdin for the underlying I/O.
Similar to this CGI module,
some applications would only have to change require('http') to require('cgi').
I intend to do something similar. I hope to re-use code if it is already out there, and also to easily convert a small or single-purpose webserver into this listener layer that runs many webapps. It is important that cleanup occurs properly: connections that end or error should be freed up, and the end/error events/commands should be properly echoed both ways.
(I believe a common way is to have the children listen on ports and have the parent communicate with those ports, but I think an stdout/stdin solution would be more efficient.)
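For concreteness, a rough inetd-style sketch of the stdin/stdout wiring, with one child per TCP connection; app.js is an illustrative placeholder, not an existing module:

// One child per connection: the child reads the request on stdin and
// writes the response to stdout, so it never touches a socket itself.
var net = require('net');
var spawn = require('child_process').spawn;

net.createServer(function (socket) {
  var child = spawn('node', ['app.js'], { stdio: ['pipe', 'pipe', 'inherit'] });
  socket.pipe(child.stdin);    // client bytes -> child's stdin
  child.stdout.pipe(socket);   // child's stdout -> client
  child.on('exit', function () { socket.end(); });
  socket.on('close', function () { child.kill(); });
}).listen(8000);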
Use nginx (HttpUpstreamModule) or HAProxy. In both cases you'd run them in front and mark a backend as down and then bring it back up when you need to do a live upgrade.
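A minimal sketch of that setup with nginx in front (the ports are illustrative); during a live upgrade you mark one backend down, reload nginx, upgrade that instance, then swap:

# Two copies of the Node app behind nginx; take one out of rotation at a time.
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001 down;   # marked down while this instance is upgraded
}

server {
    listen 80;
    location / {
        proxy_pass http://node_app;
    }
}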
I'm not certain that this is what you're looking for (indeed, I'm not certain that I understand your question), but Remy Sharp has written a very helpful node module called nodemon. It promises to "monitor for any changes in your node.js application and automatically restart the server." This may help with the issue of live updating code.

Resources