<!--
.NET CLR Version = v4.0
Enable 32-Bit Applications = False
Managed Pipeline Mode = Integrated
Queue Length = 1000
Start Mode = AlwaysRunning

CPU
Limit (percent) = 0
Limit Action = NoAction
Limit Interval (minutes) = 5
Processor Affinity Enabled = False
Processor Affinity Mask = 4294967295
Processor Affinity Mask (64-bit option) = 4294967295

Process Model
Generate Process Model Event Log Entry
Idle Time-out Reached = True
Identity = Administrator
Idle Time-out (minutes) = 20
Idle Time-out Action = Suspend
Load User Profile = True
Maximum Worker Processes = 0
Ping Enabled = True
Ping Maximum Response Time (seconds) = 90
Ping Period (seconds) = 30
Shutdown Time Limit (seconds) = 90
Startup Time Limit (seconds) = 90

Process Orphaning
Enabled = False
Executable =
Executable Parameters =

Rapid-Fail Protection
"Service Unavailable" Response Type = HttpLevel
Enabled = True
Failure Interval (minutes) = 5
Maximum Failures = 5
Shutdown Executable =
Shutdown Executable Parameters =

Recycling
Disable Overlapped Recycle = False
Disable Recycle for Configuration Changes = False
Private Memory Limit (KB) = 0
Regular Time Interval (minutes) =
Request Limit =
Specific Times = 7 hours
Virtual Memory Limit (KB) = 0

Regarding what happens between "better" and "worse": I am stressing the website with 4 or 5 users logging in and doing actions such as filling in forms and submitting data.
-->
VB website slow on IIS 7, Server 2016.
If I recycle the app pool, the website performs better, and then after some time it gets worse again.
Can you please help me understand what the issue is, and whether it is due to incorrect app pool settings?
I have also been advised to change the value of Maximum Worker Processes to match the number of cores, but after I changed this, the application started killing the session every 2 or 3 seconds and showing the login page. Can somebody help me find out why this is happening?
Please see the app pool settings above.
Thank you
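For reference, the same settings can be dumped in text form with appcmd rather than screenshots; a minimal sketch, assuming the pool is named "MyAppPool" (a placeholder):

```shell
REM Dump every configured attribute of an application pool as text
REM ("MyAppPool" is a placeholder for your pool's actual name)
%windir%\system32\inetsrv\appcmd.exe list apppool "MyAppPool" /text:*
```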
I am providing a timeout of one second, but when the URL is down it takes 120+ seconds for the response to come back. Is there some variable or setting that overrides the timeout in dp:url-open?
Update: I was calling dp:url-open in the request transformation as well as in the response transformation. The overriding timeout is 60 seconds, so with both sides added together it became 120 seconds.
Here's how I am calling this (I am storing the time before and after dp:url-open calls, and then returning them in the response):
Case 1: When the url is reachable I am getting a result like:
Case 2: When url is not reachable:
Update: FIXED: It turned out that the port I was using was first being timed out by the firewall, which is where the extra minute was spent. I was originally trying to hit an application running on port 8077; after I changed it to 8088, I started seeing the same timeout that I was passing.
The dp:url-open() timeout only affects the operation done in the script, not the service itself. It depends on how you have built the solution, but the timeout from dp:url-open() should be honored.
You can check this by setting the log level to debug and adding an <xsl:message>Before url-open</xsl:message> and one after, to see in the log whether it is your url-open call or the service that waits 120+ seconds.
If it is the url-open, you most likely have an error in the script; if it is the service that halts the response, you need to return from the script (or throw an error, depending on your needs) to stop the service from waiting.
You can set the timeout for the service itself, or set a timeout in the User Agent for the specific URL you are calling.
Please note that if you set the timeout at the service level, it will terminate the service after that time, so 1 second would not be recommended!
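A minimal sketch of the script-level timeout, assuming the usual xmlns:dp="http://www.datapower.com/extensions" declaration with extension-element-prefixes="dp"; the backend URL and port are placeholders:

```xml
<!-- Sketch: call the backend with a 1-second timeout (value is in seconds).
     The target URL is a placeholder, not the asker's real endpoint. -->
<xsl:message>Before url-open</xsl:message>
<xsl:variable name="backend-result">
  <dp:url-open target="http://backend.example.com:8088/service" timeout="1">
    <xsl:copy-of select="."/>
  </dp:url-open>
</xsl:variable>
<xsl:message>After url-open</xsl:message>
```

Bracketing the call with the two xsl:message lines, as suggested above, shows in the debug log whether the wait happens inside url-open or elsewhere in the service.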
I have an ASP.NET MVC 5 site running on IIS. In Application_Start I have a call to this method, Run():
private void Run()
{
Task t = new Task(() => new XyzServices().ProcessXyz());
t.Start();
t.ContinueWith((x) =>
{
Thread.Sleep(ConfigReader.CronReRunTimeInSeconds);
Run();
});
}
I am running a task that processes some data; as soon as the task completes, I wait for 20-30 seconds and then rerun the task.
All of this works fine, but after a certain time the process stops, and it resumes only when I reopen the site URL.
How do I overcome this? Any ideas or suggestions?
Your application pool might be shutting down automatically:
1) Open IIS.
2) Right-click the application pool for your web application and select "Advanced Settings".
3) Set "Rapid-Fail Protection" -> "Enabled" to False.
When this is True, the application pool is shut down if a specified number of worker process crashes occur within a specified time period. By default, an application pool is shut down if there are 5 crashes within a 5-minute interval.
For more info: https://sharepoint.stackexchange.com/questions/22885/application-pool-is-auto-stopped-when-browse-web-application-in-iis7
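The same change can be scripted with appcmd instead of clicking through the IIS UI; a sketch, where "MyAppPool" is a placeholder for the real pool name:

```shell
REM Disable Rapid-Fail Protection for the pool ("MyAppPool" is a placeholder)
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /failure.rapidFailProtection:false
```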
I have an ASP.NET 4.0 app that is calling a WCF service. For testing, the closeTimeout, openTimeout, receiveTimeout, and sendTimeout values in the wsHttpBinding binding are all set to 01:00:00.
When I ran a test in which the service took 5 minutes 40 seconds, I could see the correct results of the WCF service in the app event log. However, the app did not process the results.
In subsequent identical tests in which the WCF service took less than 4 minutes, I could see the same correct results in the app event log, but the app processed the results correctly.
I'm thinking there's another timeout setting I don't know about. Any ideas? Thanks.
Try the OperationTimeout property of the WCF service client where it is instantiated:
MyWCFServiceClient client = new MyWCFServiceClient();
client.InnerChannel.OperationTimeout = new TimeSpan(0, 10, 0);
client.Open();
The above sets the operation timeout to 10 minutes.
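For comparison, the binding timeouts described in the question look like this in web.config (the binding name is a placeholder). My understanding, worth verifying for your WCF version, is that on the client side it is the binding's sendTimeout that initializes the channel's OperationTimeout:

```xml
<!-- web.config fragment; "MyBinding" is a placeholder name -->
<wsHttpBinding>
  <binding name="MyBinding"
           closeTimeout="01:00:00"
           openTimeout="01:00:00"
           receiveTimeout="01:00:00"
           sendTimeout="01:00:00" />
</wsHttpBinding>
```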
Environment: Win server 2008 R2, IIS 7.5
Website:
MainWebsite
MainWebsite\Subdirectory
MainWebsite\VirtualDirectory
For MainWebsite - ASP -> (Session Properties) -> Time-Out -> 00:04:00
For MainWebsite\Subdirectory -> ASP -> (Session Properties) -> Time-Out -> 00:08:00
AppPool -> (Process Model) -> Idle Time-Out (minutes) -> 10 [I set it to 10 minutes]
Requirement:
I want the MainWebsite to hold the session for 4 minutes.
I want the MainWebsite\Subdirectory to hold the session for 8 minutes.
The above configuration is a small test; on my real production environment I would replace the time-out values with 20 or 30 minutes.
The issue is that the session time-out setting for MainWebsite\Subdirectory is not effective.
Though it is set to 8 minutes, the session is lost every 4 minutes, where 4 minutes is the setting of the MainWebsite.
If this is the expected behaviour, then I don't understand why there is an option to configure the ASP session timeout at the subdirectory level.
Let me know if you need more information. I appreciate your comments.
Since this is being done in IIS, it does make sense how it's working: you're timing out the whole site, including the subdirectories, before the subdirectory's own timeout is reached. The reason you can set it at all is for the case where you want to time out a subdirectory before the main level. I would recommend setting the session timeout in ASP code instead. You can set it in your global.asa so it is done at the global level for each session (note that global.asa only allows <script runat="server"> blocks, not <% %> blocks):
<script language="VBScript" runat="Server">
Sub Session_OnStart
    Session.Timeout = 40  ' minutes
End Sub
</script>
where 40 is the number of minutes.
Reference: http://classicasp.aspfaq.com/general/how-do-i-increase-timeout-values.html
I have an ASP.NET MVC website that gets about 6500 hits a day, on a shared hosting platform at Server Intellect. I keep seeing app restarts in the logs and I cannot figure out why.
I've read Scott Gu's article here: http://weblogs.asp.net/scottgu/archive/2005/12/14/433194.aspx
and implemented the technique, and here's what shows up in my log:
Application Shutdown:
_shutDownMessage=HostingEnvironment initiated shutdown
HostingEnvironment caused shutdown
_shutDownStack=
   at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
   at System.Environment.get_StackTrace()
   at System.Web.Hosting.HostingEnvironment.InitiateShutdownInternal()
   at System.Web.Hosting.HostingEnvironment.InitiateShutdown()
   at System.Web.Hosting.PipelineRuntime.StopProcessing()
It seems to occur about every five minutes.
Are there any other ways to debug this?
UPDATE: Here are the application pool settings mentioned by Softion:
CPU
Limit : 0
Limit Action : no action
Limit Interval : 5 Minutes
Process Model
Idle Timeout : 20 Minutes
Ping Maximum Response Time : 90 Seconds
Startup Time Limit : 90 Seconds
Rapid-Fail Protection
Enabled : True
Failure Interval : 5 Minutes
Recycling
Private Memory Limit : 100 MB
Regular Time Interval : 1740 Minutes (29 Hours)
Request Limit : 0
Specific Times : none
Virtual Memory Limit : 0
You can easily grab the reason for the shutdown from HostingEnvironment.
You read Scott Gu's article, but you missed its comments:
var shutdownReason = HostingEnvironment.ShutdownReason;
If the reason is HostingEnvironment, check the IIS application pool parameters that control recycling; the help box at the bottom of the Advanced Settings dialog describes each one.
You can ask your provider for the applicationHost.config file where all these parameters are set; they will find it in C:\Windows\System32\inetsrv\config. I'm sure you can also read them through some .NET API.
For 6500 hits a day, which is a very low hit rate, I'm betting the "Idle Time-out" is set to 5 minutes.
Update (comments moved here //jgauffin):
CPU Limit 0 = disabled.
Process Model Idle Timeout: 20 minutes (20 minutes without a request recycles your app).
Rapid-Fail Protection enabled (5 minutes). You need to know the maximum failure count: if your app throws more than that many unhandled exceptions in 5 minutes, it will be recycled.
Private Memory Limit: 100 MB. Yes, you should profile; this is a low limit.
Regular Time Interval: 1740 minutes (29 hours): it will recycle every 29 hours.
Request Limit: 0 (disabled).
Virtual Memory Limit: 0 (disabled).
If it recycles every 5 minutes, Rapid-Fail Protection is the thing to check: there should be 0 unhandled exceptions in secondary worker threads, so wrap your code in a try/catch there.
Re update:
The settings you got from the provider help, but it is better to ask the provider for information on the reason for the restarts, i.e. the actual log entries of the restarts, as I mentioned in my original answer. From those you can know specifically what was triggered; I've seen apps hitting several different limits.
You really have to profile your application with a realistic amount of test data.
My money is on hitting resource limits set by your hosting provider.
Before going crazy with optimization without a target, contact your provider and ask them to give you information on the restarts.
Typical recycles:
idle for x amount of time (e.g. 15 minutes)
more than x amount of memory (e.g. 200 MB)
more than x% processor over y time (e.g. 70% over 1 minute)
a daily recycle
Once you know the case, you have to find out what is taking those resources. For this you have to profile your application with a realistic amount of test data. Knowing whether it is memory or processor helps you know what to look for.
Is IIS set to recycle the app pool frequently?
Is there some kind of runaway memory leak in the app pool?
It requires a bit of know-how about what your app does. Here's a list of things that can cause the app to restart, reset, or even shut down:
StackOverflowException
OutOfMemoryException
Any unhandled exception that crashes a thread
Code Contracts use Environment.FailFast when a contract violation occurs
Exceptions are quite easy to track down. If you can reproduce the issue with a debugger attached, you can go into Visual Studio and enable breaking on all exceptions when they are thrown, not only when they are uncaught by user code. It will sometimes reveal interesting things that are otherwise hidden away.