I have a very simple PHP socket server that runs on my machine. I created a convenience class with simple methods like "restart", "stop", etc. to control the server once it is already running.
The restart function sends the server the command to stop, then forks the process and starts a new instance of the socket server in the child process, while the parent process returns to the caller.
This works great on the command line, but I am now trying to build an admin webpage that restarts the socket server, and the forking is causing problems in php-fpm. What appears to be happening is that the "page load" does not end when the parent process ends: nginx/php-fpm do not reassign the worker to new connections until the forked process also ends.
In my case, I never want the forked process to end, so I end up with a completely debilitated server. (In my test environment I have the worker pool set to 1 for simplicity; in production it will be higher, but this issue would still leave one worker slot permanently occupied.)
I have tried a few things, including calling fastcgi_finish_request() just prior to the forking, but this had no effect.
How can I restart my service from a php-fpm worker process without locking up an assignment in the nginx connection pool?
The solution was simple and elementary.
My child processes were not redirecting STDOUT and STDERR to /dev/null. Even though the parent process finished, the child still held those file descriptors open, so php-fpm considered that connection in its pool still active and never reassigned the worker to new connections for as long as the child kept running.
Redirecting STDERR and STDOUT to /dev/null caused php-fpm to correctly reassign connections while simultaneously allowing the child process (the socket server) to run forever. Case closed.
./some/command >/dev/null 2>&1
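For reference, the restart call from the php-fpm side ends up looking roughly like this. This is only a minimal sketch, not the actual class; the script path is a placeholder and the exec()-based relaunch stands in for the real fork, but the redirection is the part that matters:

    <?php
    // Sketch: restart the socket server from a php-fpm worker without
    // keeping the request slot busy. Path and command are placeholders.
    function restartSocketServer()
    {
        // finish the HTTP response first so nginx gets its bytes immediately
        if (function_exists('fastcgi_finish_request')) {
            fastcgi_finish_request();
        }

        // relaunch detached, with STDOUT/STDERR sent to /dev/null so the
        // long-lived child holds no descriptors from this request
        exec('php /path/to/socket-server.php >/dev/null 2>&1 &');
    }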
Should have seen this one a mile off...
(I solved this problem months ago, but haven't signed into this account in a long time ... it didn't take me 7 months to figure out I need to direct output to /dev/null!)
Sounds like you have your protocol design wrong. The server should be capable of restarting itself. There are quite a few examples of how to do that on the internet.
The client (your webpage) should not need to do any forking.
The server should also not run inside php-fpm, but be a console application that uses a daemon(3) type interface to detach itself from the console.
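A self-restart can be as small as the server re-exec'ing itself when it reads the restart command. A rough sketch (command handling and socket cleanup omitted; this is not the poster's code):

    <?php
    // Inside the server's command loop: replace the current process image
    // with a fresh copy of this script, keeping the same PID.
    if ($command === 'restart') {
        // close the listening socket and flush any state here, then:
        pcntl_exec(PHP_BINARY, array(__FILE__)); // does not return on success
    }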
I have the following configuration:

    worker_processes 4;
But I noticed that it always hits only 1 worker.
I am testing on a local CentOS VM. I am making curl HTTP calls to a specific port; I put 1000 curl requests in a file and ran them from multiple terminal windows.
But I see that all of them hit only 1 worker. Is there a way I can get more than 1 worker handling requests? Can someone please share their knowledge on this?
https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
In the epoll-and-accept the load balancing algorithm differs: Linux seems to choose the last added process, a LIFO-like behavior. The process added to the waiting queue most recently will get the new connection. This behavior causes the busiest process, the one that only just went back to event loop, to receive the majority of the new connections. Therefore, the busiest worker is likely to get most of the load.
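If the goal is to spread connections across workers rather than pile them onto the most recently woken one, one remedy that article discusses is per-worker listening sockets via reuseport, so the kernel balances accepted connections. A minimal sketch (the port is just an example; reuseport requires nginx 1.9.1+):

    worker_processes 4;
    events { }
    http {
        server {
            # one listening socket per worker; the kernel spreads new connections
            listen 8080 reuseport;
            location / { return 200 "ok\n"; }
        }
    }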
Is it possible to spawn a new worker process and gracefully shut down an existing one dynamically using Lua scripting in OpenResty?
Yes but No
Openresty itself doesn't really offer this kind of functionality directly, but it does give you the necessary building blocks:
nginx workers can be terminated by sending a signal to them
OpenResty allows you to read the PID of the current worker process
LuaJIT's FFI allows you to use the kill() system call, or
using os.execute you can just call kill directly.
Combining those, you should be able to achieve what you want :D
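For example, something along these lines should gracefully stop the worker handling the current request. This is only a sketch: ngx.worker.pid() and LuaJIT's FFI are standard OpenResty pieces, but the choice of SIGQUIT (nginx's graceful-shutdown signal for a worker) and the missing error handling are assumptions:

    -- ask the worker we're running in to shut down gracefully
    local ffi = require("ffi")
    ffi.cdef[[ int kill(int pid, int sig); ]]

    local SIGQUIT = 3
    ffi.C.kill(ngx.worker.pid(), SIGQUIT)

    -- or, without FFI:
    -- os.execute("kill -QUIT " .. ngx.worker.pid())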
Note: After reading the question again, I noticed that I really only answered the second part.
nginx uses a set number of worker processes, so you can only shut down running workers, which the master process will then restart; the total number stays the same.
If you just want to change the number of worker processes, you would have to restart the nginx instance completely (I just tried nginx -s reload -g 'worker_processes 4;' and it didn't actually spawn any additional workers).
However, I can't see a good reason why you'd ever do that. If you need additional threads, there's a separate API for that; other than that, you'll probably just have to live with a hard restart.
In IIS there is the idleTimeout option, set to 20 minutes by default. Setting it to 0 allows the application to stay always up (never idle), and together with the recycling options I can decide when the application must be recycled.
Are there the same configuration options, or something similar, in NGINX + Mono FastCGI? I cannot figure it out.
Thank you
LM
EDIT:
It seems that the problem is not FastCGI. I created a Linux systemd service to start fastcgi-mono-server4 at boot. It works, but after a few minutes (~5) the running application, fastcgi-mono-server4.exe, starts again with a new PID.
I want to keep the fastcgi-mono-server4.exe always on with the same PID.
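A unit of that kind typically looks something like this (a generic sketch, not the exact file; the paths and socket address are placeholders):

    [Unit]
    Description=fastcgi-mono-server4 backend for nginx
    After=network.target

    [Service]
    ExecStart=/usr/bin/fastcgi-mono-server4 /applications=/:/srv/www/app /socket=tcp:127.0.0.1:9000
    # any Restart= policy relaunches the process under a new PID when it exits
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target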
Solved!
The problem does not exist. FastCGI stays up; it was a background thread that crashed the whole application and put the systemd service into a failed state.
I am implementing an ASP.NET application that needs to service conventional HTTP requests, but the responses require data that I need to acquire from providers: executables that deliver their data over sockets. My plan to implement this was:
1) In Application_Start, start a new thread that starts a socket server
2) In Session_Start, launch the session-specific process that will ultimately connect to the socket server, and from there do a Monitor.Wait on a session-specific lock object which I've stored in Application.Contents by Session key
3) When the socket server sees a new connection, make the data available to the appropriate session Contents and do a Monitor.Pulse on the session-specific lock object (sketched below)
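In C# terms, the handoff in steps 2 and 3 would look roughly like the sketch below. The names are placeholders, a static dictionary stands in for Application.Contents, and a timeout is included because a provider may never connect:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public static class ProviderHandoff
    {
        // one gate object and one result slot per session key (placeholder design)
        static readonly ConcurrentDictionary<string, object> Gates = new ConcurrentDictionary<string, object>();
        static readonly ConcurrentDictionary<string, string> Results = new ConcurrentDictionary<string, string>();

        // Step 2: the request thread parks until the provider has delivered data.
        public static string WaitForProvider(string sessionKey, int timeoutMs)
        {
            var gate = Gates.GetOrAdd(sessionKey, _ => new object());
            lock (gate)
            {
                string data;
                if (!Results.TryGetValue(sessionKey, out data))
                {
                    // guard against a provider that crashes before it ever connects
                    if (!Monitor.Wait(gate, timeoutMs))
                        throw new TimeoutException("provider never connected");
                    Results.TryGetValue(sessionKey, out data);
                }
                return data;
            }
        }

        // Step 3: the socket-server thread publishes the data and wakes the waiter.
        public static void OnProviderData(string sessionKey, string data)
        {
            var gate = Gates.GetOrAdd(sessionKey, _ => new object());
            lock (gate)
            {
                Results[sessionKey] = data;
                Monitor.Pulse(gate);
            }
        }
    }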
Is this technically feasible in IIS? Can this concept function as a stable system?
Before answering, please bear in mind that I am not asking "is this the recommended approach?". I am aware it is not, and if I had the option to write this system from scratch I would do it differently. I'm also not able to change the fact that the programs communicate using sockets.
Given the constraints this approach makes sense.
Shutdown and recycling of IIS worker processes are always thorny issues when it comes to keeping state in a web app. Note that your worker process can recycle at pretty much any time, for many reasons. Some of those reasons are unavoidable: a server reboot, an app deployment, a bug leading to a process crash. So you need to think through what happens in those cases: all sessions will be lost while the child processes still run. Suggested solution: add the children to a Windows Job Object and configure the Job to be killed when the parent exits.
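A sketch of that Job Object idea, using the standard kernel32 calls (the wrapper class and its names are invented for illustration): create one job in the worker process with kill-on-close set, then add each child to it, so the children are terminated whenever the worker process goes away for any reason.

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    static class ChildJob
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
        static extern IntPtr CreateJobObject(IntPtr attrs, string name);

        [DllImport("kernel32.dll")]
        static extern bool SetInformationJobObject(IntPtr job, int infoClass,
            ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int len);

        [DllImport("kernel32.dll")]
        static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process);

        [StructLayout(LayoutKind.Sequential)]
        struct JOBOBJECT_BASIC_LIMIT_INFORMATION
        {
            public long PerProcessUserTimeLimit, PerJobUserTimeLimit;
            public uint LimitFlags;
            public UIntPtr MinimumWorkingSetSize, MaximumWorkingSetSize;
            public uint ActiveProcessLimit;
            public UIntPtr Affinity;
            public uint PriorityClass, SchedulingClass;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct IO_COUNTERS
        {
            public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount,
                         ReadTransferCount, WriteTransferCount, OtherTransferCount;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
        {
            public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
            public IO_COUNTERS IoInfo;
            public UIntPtr ProcessMemoryLimit, JobMemoryLimit,
                           PeakProcessMemoryUsed, PeakJobMemoryUsed;
        }

        const uint JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE = 0x2000;
        const int JobObjectExtendedLimitInformation = 9;

        static readonly IntPtr Job = CreateJobObject(IntPtr.Zero, null);

        static ChildJob()
        {
            // children die when the last handle to the job closes, i.e. when the worker exits
            var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
            info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
            SetInformationJobObject(Job, JobObjectExtendedLimitInformation,
                ref info, Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION)));
        }

        // call this right after Process.Start() for each provider executable
        public static void Add(Process child)
        {
            AssignProcessToJobObject(Job, child.Handle);
        }
    }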
With overlapped IIS worker recycling you can have two functioning workers running at the same time. You must deal with that possibility.
Consider the possibility that the child process immediately crashes. It will never make a connection. Make sure your app doesn't hang waiting for the connection forever.
The thread will be started on each Application_Start event.
It will be a monitoring thread which is supposed to run constantly.
So even if the app shuts down, once it is restarted the thread will start too, ensuring it runs all the time.
However I need to be sure that this thread will not be stopped / shut down while the application is running.
So, in a few words: does anybody know whether ASP.NET could shut down such a thread without actually stopping / recycling the application?
As a matter of design, you shouldn't depend on asp.net to run threads like this. Little things like app recycling can cause you a lot of trouble.
Instead, create a windows service to execute the thread. This way you don't have to worry about it.
Update
I just wanted to add a little more information.
IIS has the ability to execute your app across multiple threads and processes. A standard site installation usually has only a single worker process assigned (running more than one is what's called a "web garden"), which spins up around 20 threads to handle request processing.
However, any IIS administrator can easily add more processes to the mix. They usually do this when a site can hose a single process, either because request processing takes too long or the number of handler threads isn't enough, or as a temporary measure if the app has enough problems that a single thread will hose the entire process fairly often.
If you have a thread being spun up on app start, then one will be created for each worker process the site has. This may be unexpected behavior for you or your successors.
Also, monitoring apps are almost always completely separate from the application they are monitoring. One of the primary reasons is that, in the event the monitored process dies, hangs, or otherwise becomes unresponsive, the monitoring app itself still needs to carry on and log that information. Otherwise the monitored process could very well hose the monitoring app itself.
So, do yourself a favor and move this to its own process. The best way to do this on an IIS server is to create a windows service and give it the appropriate execution rights to do what you need.
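A minimal shape for that service (a sketch; the class name, polling interval, and what the loop actually monitors are placeholders):

    using System.ServiceProcess;
    using System.Threading;

    public class MonitorService : ServiceBase
    {
        private Thread _worker;
        private volatile bool _stopping;

        protected override void OnStart(string[] args)
        {
            _worker = new Thread(() =>
            {
                while (!_stopping)
                {
                    // poll / monitor the web application here
                    Thread.Sleep(5000);
                }
            });
            _worker.IsBackground = true;
            _worker.Start();
        }

        protected override void OnStop()
        {
            _stopping = true;
            _worker.Join(10000);
        }

        public static void Main()
        {
            ServiceBase.Run(new MonitorService());
        }
    }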