NGINX + Mono FastCGI + CentOS - How to avoid idle timeout

In IIS there is the idleTimeout option, set to 20 minutes by default. Setting it to 0 keeps the application always up (never idle), and together with the recycling options I can decide when the application must be recycled.
Are there equivalent configuration options, or something similar, in NGINX + Mono FastCGI? I cannot figure it out.
Thank you
LM
EDIT:
It seems the problem is not FastCGI. I created a systemd service to start fastcgi-mono-server4 at boot. It works, but after a few minutes (~5) the running fastcgi-mono-server4.exe process restarts with a new PID.
I want to keep fastcgi-mono-server4.exe always running with the same PID.
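For reference, a minimal sketch of such a unit, assuming hypothetical paths, socket, and user (none of these come from the question):

# /etc/systemd/system/fastcgi-mono-server4.service -- illustrative only
[Unit]
Description=FastCGI Mono server
After=network.target

[Service]
# the application root and the TCP socket below are assumptions
ExecStart=/usr/bin/fastcgi-mono-server4 /applications=/:/srv/www /socket=tcp:127.0.0.1:9000
User=www-data
# restart on crashes; the PID only changes if the process actually dies
Restart=on-failure

[Install]
WantedBy=multi-user.target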

Solved!
The problem does not exist. FastCGI stays up; a background thread was crashing the whole application and putting the systemd service into a failed state.

Related

ASP.NET global config changes: limit number of threads and ignore timeout

Can I enforce, at a global level in a config file,
a maximum of 1 active thread, and
ignoring the connection timeout (timeouts are sometimes set to a few seconds, which bothers me while debugging)?
Also, if I accidentally make several requests, the debug session goes haywire, stepping through the same method several times before finishing it, which is why I want to temporarily limit the number of allowed threads to 1.
I'm not sure about your test environment (whether you are testing on a local machine),
but if you are deploying to IIS and testing there, what you need to do is configure the Application Pool to allow only one concurrent request and queue the rest.
As for the timeout, you can also raise it to its maximum.
Hope this helps.
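As a sketch, under stated assumptions: the timeout can be raised in Web.config (executionTimeout only applies when compilation debug is off), and in IIS integrated mode the per-CPU concurrency cap lives in the framework-wide aspnet.config rather than Web.config; the values below are illustrative, not recommendations.

<!-- Web.config: raise the request timeout (in seconds) -->
<system.web>
  <httpRuntime executionTimeout="3600" />
</system.web>

<!-- aspnet.config: cap concurrent requests per CPU; 1 is for debugging only -->
<system.web>
  <applicationPool maxConcurrentRequestsPerCPU="1" />
</system.web>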

How to fork properly within an nginx/php-fpm environment?

I have a very simple PHP socket server that runs on my machine. I created a convenience class with simple methods like "restart" and "stop" to control the server once it is already running.
The restart function sends the server the command to stop, then forks the process and starts a new instance of the socket server in the child process, while the parent process returns to the caller.
This works great on the command line; however, I am trying to make an admin web page that restarts the socket server, and the forking causes problems in php-fpm. What appears to happen is that the "page load" does not end when the parent process ends, and nginx/php-fpm does not reassign the worker to new connections until the forked process also ends.
In my case, I do not want the forked process to ever end, so I end up with a completely debilitated server. (In my test environment I have the worker pool set to only 1 for simplicity; in a production environment this would be higher, but the issue would still leave one worker slot permanently occupied.)
I have tried a few things, including calling fastcgi_finish_request() just prior to forking, but this had no effect.
How can I restart my service from a php-fpm worker process without locking up an assignment in the nginx connection pool?
The solution was simple and elementary.
My child processes were not redirecting STDOUT and STDERR to /dev/null. So even though the parent process finished, the child process still held open file descriptors, which caused php-fpm to consider that connection in its pool still active; it would never be reassigned to new connections, because the child process runs continually.
Redirecting STDERR and STDOUT to /dev/null caused php-fpm to correctly reassign connections while still allowing the child process (the socket server) to run forever. Case closed.
./some/command >/dev/null 2>&1
Should have seen this one a mile off...
(I solved this problem months ago, but haven't signed into this account in a long time ... it didn't take me 7 months to figure out I need to direct output to /dev/null!)
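In PHP terms, a sketch of the fixed restart step inside the FPM worker might look like this (exec and the backgrounding '&' are one possible way to launch the child; the redirect is the same as above):

<?php
// Hypothetical restart handler running inside a PHP-FPM worker.
fastcgi_finish_request();  // flush the HTTP response to the client first (PHP-FPM only)
// Launch the server detached: stdout/stderr go to /dev/null and '&' backgrounds it,
// so no open descriptors tie the child to this worker.
exec('./some/command >/dev/null 2>&1 &');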
Sounds like your protocol design is wrong. The server should be capable of restarting itself; there are quite a few examples of how to do that on the internet.
The client (your webpage) should not need to do any forking.
The server should also not run inside php-fpm, but be a console application that uses a daemon(3)-style interface to detach itself from the console (see the sketch below).
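A sketch of that detach step in PHP, assuming the pcntl and posix extensions are available in the CLI build (this mirrors what daemon(3) does):

<?php
// Classic double-fork daemonization, roughly equivalent to daemon(3).
if (pcntl_fork() > 0) exit(0);  // parent exits; child continues
posix_setsid();                 // become session leader, detach from the terminal
if (pcntl_fork() > 0) exit(0);  // second fork: the daemon can never reacquire a tty
fclose(STDIN); fclose(STDOUT); fclose(STDERR);  // release the console descriptors
// ... run the socket server loop here ...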

httpRuntime shutdownTimeout and the IIS setting

I ran into an issue in ASP.NET. In any web application's Web.config file there is a section called httpRuntime, and it has an attribute shutdownTimeout. According to the MSDN documentation, this attribute specifies how long the worker process is allowed to be idle before the ASP.NET runtime terminates it. On the other side, under IIS's Application Pools -> Default AppPool -> Properties -> Performance tab, there is a setting: "Shutdown worker processes after being idle for (20) minutes".
I guess that under IIS this setting applies to all worker processes used to handle incoming requests, not only the process where a specific ASP.NET runtime resides, and that if Web.config's shutdownTimeout has not yet taken effect, the IIS setting then does its work.
However, from my observation, although httpRuntime's shutdownTimeout defaults to 90 seconds, my web application was always shut down after being idle for 20 minutes. It seems the IIS setting takes priority here.
It would be much appreciated if somebody could clarify what is wrong with my guess.
I did some digging and found the answer:
the shutdownTimeout attribute controls how long the worker process is given to exit gracefully by itself after the ASP.NET runtime asks it to terminate; once that time expires, it is shut down forcibly.
Is this right? Any opinion is greatly appreciated.
https://msdn.microsoft.com/en-us/library/e1f13641(v=vs.100).aspx
Specifies the number of minutes that are allowed for the worker process to shut down. When the time-out expires, ASP.NET shuts down the worker process.
The default is 90.
So basically, the worker process has X minutes to shut itself down; if the limit is reached, it is killed.
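For completeness, the attribute is set in Web.config; 90 here is just the documented default:

<!-- Web.config: grace period ASP.NET gives the worker process to shut down before killing it -->
<system.web>
  <httpRuntime shutdownTimeout="90" />
</system.web>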

Overcome compilation(?) pause on *secondary* initial ASP.Net application startup?

After deploying an ASP.NET application to a new server, the first user to hit the app gets a long pause, presumably because the app is performing its initial compilation. However, this pause also seems to occur after the application has timed out and unloaded itself from memory.
The first compilation would be tolerable, as it only happens once, but the second one seems unnecessary. Is there a workaround for this issue? It would be nice to extend the application timeout, but I only see a way to extend the session timeout, which I would rather not do, as that would keep all user sessions in memory for a long period of time.
That's IIS shutting down the worker process - the default is a 20 min idle time. In IIS6, you can configure the app pool and turn off the Idle timeout (on Performance tab). In IIS7, it's in the Advanced Settings.
A common way (though it looks hackish to me) to keep the worker process alive is to have some program issue web requests to some URL in the application on a regular basis.
You can reduce the initial startup time by precompiling the ASP.NET app with aspnet_compiler.exe. This won't reduce the initial request time to zero (IIS still needs to create the worker process and do other housekeeping), but it will noticeably reduce it.
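As a sketch, a precompilation step might look like this (the virtual path and directories are assumptions):

rem -v is the virtual path, -p the physical source, last argument the target directory
aspnet_compiler -v /MyApp -p C:\inetpub\wwwroot\MyApp C:\precompiled\MyApp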
I did find some recommended settings for the App Pools in IIS; their intended outcome is to reduce the number of times your application goes through the startup cycle (see the appcmd sketch after this list). These settings should apply to IIS 6 as well as IIS 7:
Recycle Worker Process: Disable. Use this only if you have multiple websites and are trying to isolate processes.
Shutdown worker processes after being idle for (time in minutes): Disable unless you are sharing resources on your server. With this setting disabled, your website should not unload, ensuring that startup time for your site is as minimal as possible.
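On IIS 7, the same two settings can also be scripted with appcmd (the pool name is hypothetical; run from %windir%\system32\inetsrv):

rem a timespan of 00:00:00 disables the idle shutdown and the periodic recycle
appcmd set apppool "MyAppPool" /processModel.idleTimeout:"00:00:00"
appcmd set apppool "MyAppPool" /recycling.periodicRestart.time:"00:00:00"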

CLR Debugger, ASP.NET -- how to increase timeout?

I'm trying to debug a problem (exception being thrown) in an ASP.NET MVC program using the Microsoft CLR Debugger. It works pretty well, except after a minute or two, the debugger gets detached (the web request times out or something?) and I can no longer inspect its state.
This is extremely frustrating. How can I make the server/debugger (not sure who is at fault here) stay open until I tell it to stop?
It's on the IIS side; check this on how to configure/fix it: http://vaultofthoughts.net/ASPNETDebuggerTimeoutInWindowsVista.aspx
From the link:
In IIS 7, go to the Application Pools and select the Advanced Settings for the pool your process runs in. In the Process Model section you can do one of two things:
Set the Ping Enabled property to False. This will stop IIS checking whether the worker process is still running and keep your process alive permanently (or until you stop your debugged process).
If you prefer to let IIS continue monitoring the process, change the Ping Maximum Response Time value to something larger than 90 seconds (the default).
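The first option can also be scripted (the pool name is hypothetical):

rem stop IIS pinging the worker process, so it is never killed while paused in the debugger
appcmd set apppool "MyAppPool" /processModel.pingingEnabled:false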
