I have a WCF service installed on IIS 7. I noticed that the first call to the service is always very slow, while subsequent calls are much faster and acceptable.
If no calls are made to the service for some time, it seems to go back to sleep, and the next call after that again takes a very long time.
Any remedies for this problem?
This is caused by process management in IIS. When no calls arrive for a certain period of time, IIS releases the resources and stops the worker process.
That is why the first request, and the first request after a long period of silence, is slow: IIS has to load everything from scratch, the JIT compiler runs, and so on.
Also note:
When you are hosting WCF services in IIS, the WCF services enjoy all the features of ASP.NET applications. You have to be aware of these features because they can cause unexpected behavior in the services world. One of the major features is application recycling, including application domain recycling and process recycling. Through the IIS Management Console, you can configure different rules for when you want the recycling to happen. You can set thresholds on memory, on time, and on the number of processed requests. When IIS recycles a worker process, all the application domains within that worker process are recycled as well.
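As a rough illustration, the same idle-timeout and recycling thresholds can also be adjusted programmatically through the Microsoft.Web.Administration API; the pool name "MyWcfAppPool" and the specific values below are placeholders, not recommendations:

```csharp
// Requires a reference to Microsoft.Web.Administration.dll and administrator rights.
using System;
using Microsoft.Web.Administration;

class PoolSettings
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // "MyWcfAppPool" is a placeholder pool name.
            ApplicationPool pool = serverManager.ApplicationPools["MyWcfAppPool"];

            // Keep the worker process alive instead of letting IIS stop it
            // after the default 20 minutes of inactivity.
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero;

            // Disable the periodic restart (the default is every 29 hours)...
            pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;

            // ...or recycle on thresholds instead, e.g. private memory (KB) or request count:
            // pool.Recycling.PeriodicRestart.PrivateMemory = 500 * 1024;
            // pool.Recycling.PeriodicRestart.Requests = 100000;

            serverManager.CommitChanges();
        }
    }
}
```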
If you need automatic starting: the Windows Service Control Manager allows you to set the startup type to Automatic, so that as soon as Windows starts, the service is started, without an interactive logon on the machine. So you can use a Windows service as the host.
For more details, see Hosting and Consuming WCF Services.
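A minimal sketch of such a Windows service host is shown below; the contract, implementation and service names are invented for the example, and the endpoints would normally come from app.config:

```csharp
using System.ServiceModel;
using System.ServiceProcess;

// Placeholder WCF contract and implementation; in a real project these would
// live in their own assembly.
[ServiceContract]
public interface IReportService
{
    [OperationContract]
    string Ping();
}

public class ReportService : IReportService
{
    public string Ping() { return "pong"; }
}

// Windows service wrapper. With the startup type set to Automatic in the
// Service Control Manager, the host starts with Windows and never "goes cold".
public class WcfWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        _host = new ServiceHost(typeof(ReportService));
        _host.Open();   // endpoints and bindings are read from app.config
    }

    protected override void OnStop()
    {
        if (_host != null)
        {
            _host.Close();
            _host = null;
        }
    }

    static void Main()
    {
        ServiceBase.Run(new WcfWindowsService());
    }
}
```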
There is another approach that can improve things. We have a scheduled process that keeps hitting our server every 5 minutes or so with very light 'fetch' requests, to keep all servers "hot" (with most of the required DLLs already loaded), so that the user experience is far better.
I agree it is not a foolproof approach, but it is still something you can consider in addition to adjusting the recycling settings in IIS.
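A keep-alive can be as simple as a small console program (run standalone or from Task Scheduler) like the sketch below; the URL and the 5-minute interval are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class KeepAlive
{
    static async Task Main()
    {
        // Placeholder service URL; point this at a cheap "fetch"/status endpoint.
        const string url = "http://myserver/MyService.svc";

        using (var client = new HttpClient())
        {
            while (true)
            {
                try
                {
                    // A lightweight GET is enough to keep the worker process
                    // (and its JIT-compiled code) warm.
                    await client.GetAsync(url);
                }
                catch (HttpRequestException)
                {
                    // Ignore transient failures; the next ping will try again.
                }
                await Task.Delay(TimeSpan.FromMinutes(5));
            }
        }
    }
}
```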
Related
There is an ASP.NET application with a set of web services, and there is a WinForms application that calls these web services.
The w3wp.exe process (which handles the asmx calls) spawns special processes that do really "hard" jobs (zipping, XML parsing, etc.). It is a long story why this work needs to be done in separate processes and why we don't run the code inside w3wp.exe - that doesn't matter here. It must be like that.
If multiple requests come in simultaneously from many users, there can be 30-40 such processes, and they consume so much CPU that almost nothing is left for IIS itself.
So the problem is this: when further HTTP requests come in, IIS cannot handle them; it cannot even pass them to w3wp.exe because the server is swamped. As a result, at peak times 30 users have run their queries and the 31st user gets a WebException ("timeout expired") in the WinForms client.
I have found a lot of articles on the internet that explain how to tweak IIS, but none of them take into account the fact that OTHER processes and/or applications can run on the same machine, and none of them say how to give IIS a higher priority.
So the question is: is there a way to tell IIS that it should, by default, start the app pool (svchost.exe) and the w3wp.exe worker processes with a higher priority?
I faced this issue on Windows Server 2008 R2.
I had a Windows service (which used parallel processing) that hogged the CPU and left nothing for IIS and SQL Server.
So I set the processor affinity of the Windows service so that it only used specific processors.
You can do this in Task Manager: right-click the process -> Set Affinity.
This left the other processors free for SQL Server and IIS.
To an extent it reduced the throughput of the Windows service, which was now working with a limited set of processors, but that did not matter much.
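Instead of setting it by hand in Task Manager after every restart, the same affinity (and optionally a lower priority) could be applied from the service itself at startup; the mask below is just an example that pins the process to the first two logical CPUs:

```csharp
using System;
using System.Diagnostics;

class AffinitySetup
{
    static void Main()
    {
        Process current = Process.GetCurrentProcess();

        // Bitmask of allowed logical processors: 0x3 = CPU 0 and CPU 1 only.
        // Adjust the mask for your machine; this is only an illustration.
        current.ProcessorAffinity = (IntPtr)0x3;

        // Optionally also lower the priority so IIS and SQL Server win
        // when the machine is under load.
        current.PriorityClass = ProcessPriorityClass.BelowNormal;
    }
}
```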
I have a WCF service hosted in IIS that takes a long time (around 5 hours) to execute. The WCF service basically generates some reports using SSRS (SQL Server Reporting Services) and saves them to a location on the server. The service was actually stopping after generating a few reports, so I disabled "recycling of worker processes", "shut down worker processes after being idle" and "limit kernel request queue" in the application pool, and that fixed the issue: all the reports were generated regardless of how long they took. But I am not sure whether this is the right fix, and I would like to know what the impact of unchecking these settings in the application pool is for a WCF service in IIS. Is there a better way to get around this problem?
Any long-running process is much better run outside of IIS.
In this case I would have a regular Windows service running that monitors a request queue. When a request to generate a report comes in, it would spin off a thread to perform the generation.
The web service would be responsible for 3 things: first, adding an item to the queue to be handled; second, checking the queue status to see whether the report is ready; third, sending the completed report back to the calling client.
This would allow the client to essentially fire and forget the report request and call back later to check on its status. Further, it would mean that if IIS recycles for whatever reason, you are still OK.
For bonus points I would add some error handling so that when the Windows service restarts it can resume report jobs that were in the middle of execution. This would make it a bit more robust and allow you to reboot the server at any point.
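A very reduced sketch of that split is shown below; the type names (ReportRequest, ReportQueue, etc.) are invented, and a real implementation would persist the queue (database table, MSMQ, ...) so that jobs can be resumed after a restart, as suggested above:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical request and status types shared by the web service and the worker.
public class ReportRequest
{
    public Guid Id { get; } = Guid.NewGuid();
    public string Parameters { get; set; }
}

public enum ReportStatus { Queued, Running, Completed, Failed }

// In-memory stand-in for a persistent queue. A database table or MSMQ would be
// more robust, so jobs can be restarted after a service restart or reboot.
public static class ReportQueue
{
    private static readonly BlockingCollection<ReportRequest> Pending =
        new BlockingCollection<ReportRequest>();
    private static readonly ConcurrentDictionary<Guid, ReportStatus> Status =
        new ConcurrentDictionary<Guid, ReportStatus>();

    // Web service responsibility 1: add an item to the queue to be handled.
    public static Guid Enqueue(string parameters)
    {
        var request = new ReportRequest { Parameters = parameters };
        Status[request.Id] = ReportStatus.Queued;
        Pending.Add(request);
        return request.Id;
    }

    // Web service responsibility 2: report whether the job is ready.
    // (Responsibility 3, returning the finished report, would read back the
    // file that the worker wrote to disk.)
    public static ReportStatus GetStatus(Guid id)
    {
        ReportStatus status;
        return Status.TryGetValue(id, out status) ? status : ReportStatus.Failed;
    }

    // Runs inside the Windows service: pulls requests and generates reports.
    public static void ProcessLoop(CancellationToken token)
    {
        foreach (var request in Pending.GetConsumingEnumerable(token))
        {
            Status[request.Id] = ReportStatus.Running;
            try
            {
                GenerateReport(request);   // the long-running SSRS work
                Status[request.Id] = ReportStatus.Completed;
            }
            catch
            {
                Status[request.Id] = ReportStatus.Failed;
            }
        }
    }

    private static void GenerateReport(ReportRequest request)
    {
        // Placeholder for the actual SSRS call that writes the report to disk.
    }
}
```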
I have also disabled all the automatic shutdown settings in IIS for my application, without any issue. I monitor the memory limits, and of course the program runs smoothly without any memory issues.
I think these automatic shutdown triggers are designed mostly for servers that host many web sites together, some of which may not be well written. But if you are the master of your IIS, and you have checked that your program has no memory issues, then it is better not to shut it down, or at least to control the shutdown process in some way.
OK, it is better to run long-running work outside IIS, but that is not so simple to develop, not so simple to install, and not so simple to keep an eye on.
The thread will be started on each Application_Start event.
It will be a monitoring thread which is supposed to run constantly.
So even if the app shuts down, once it is restarted the thread will start too, ensuring it runs all the time.
However I need to be sure that this thread will not be stopped / shut down while the application is running.
So, in a few words, does anybody know whether ASP.NET could shut down such a thread without actually stopping / recycling the application?
As a matter of design, you shouldn't depend on ASP.NET to run threads like this. Little things like app recycling can cause you a lot of trouble.
Instead, create a Windows service to execute the thread. That way you don't have to worry about it.
Update
I just wanted to add a little more information.
IIS has the ability to execute your app across multiple threads and processes. A standard site installation usually has only a single worker process assigned, which spins up around 20 threads to handle request processing; assigning more than one worker process to a pool is what is known as a web garden.
However, any IIS administrator can easily add more processes to the mix. They usually do this when a site can hose a single process either because request processing takes too long, or the number of handler threads isn't enough, or as a temporary measure if the app has enough problems that a single thread will hose the entire process fairly often.
If you have a thread being spun up on app start, then one will be created for each worker process the site has. This may be unexpected behavior for you or your successors.
Also, monitoring apps are almost always completely separate from the application they are monitoring. One of the primary reasons is that if the monitored process dies, hangs, or otherwise becomes unresponsive, the monitoring app itself still needs to carry on and log this information. Otherwise the monitored process could very well hose the monitoring app itself.
So, do yourself a favor and move this to its own process. The best way to do this on an IIS server is to create a windows service and give it the appropriate execution rights to do what you need.
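A bare-bones version of such a Windows service, with a worker thread that loops until the service is stopped, might look like this (all names here are placeholders):

```csharp
using System;
using System.ServiceProcess;
using System.Threading;

public class MonitoringService : ServiceBase
{
    private Thread _worker;
    private readonly ManualResetEvent _stopRequested = new ManualResetEvent(false);

    protected override void OnStart(string[] args)
    {
        _worker = new Thread(MonitorLoop) { IsBackground = true };
        _worker.Start();
    }

    protected override void OnStop()
    {
        _stopRequested.Set();
        _worker.Join(TimeSpan.FromSeconds(30));
    }

    private void MonitorLoop()
    {
        // Runs until the service is stopped; survives IIS app pool recycles
        // because it lives in its own process.
        while (!_stopRequested.WaitOne(TimeSpan.FromSeconds(60)))
        {
            CheckMonitoredApplication();   // placeholder for the actual checks
        }
    }

    private void CheckMonitoredApplication()
    {
        // e.g. ping the web application, write results to the event log, etc.
    }

    static void Main()
    {
        ServiceBase.Run(new MonitoringService());
    }
}
```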
This should be a simple question but I haven't managed to find the answer on google.
I would like to know, in terms an idiot can understand, exactly what application lifetime means in ASP.NET (and therefore when you can expect application start and end events to run).
I assumed it would be when you run and stop the app in IIS, but I've read things that suggest it's related to number of requests.
By default the lifetime starts with the first request to the app. And it ends after an idle timeout.
But this is configurable based on various things (including request count) in IIS.
And IIS7.5 has the ability to start an application when IIS starts, rather than waiting for the first request.
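One way to see these lifetime boundaries for yourself is to log them from Global.asax: Application_Start fires on the first request (or on IIS start when auto-start is configured), and Application_End fires on idle timeout, recycle or shutdown. The log path below is just an example:

```csharp
using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    // Example log location; any path the app pool identity can write to works.
    private const string LogFile = @"C:\temp\app-lifetime.log";

    protected void Application_Start(object sender, EventArgs e)
    {
        File.AppendAllText(LogFile,
            string.Format("Started {0:u}{1}", DateTime.UtcNow, Environment.NewLine));
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // HostingEnvironment.ShutdownReason explains why the app domain is going away
        // (IdleTimeout, ConfigurationChange, HostingEnvironment, etc.).
        File.AppendAllText(LogFile,
            string.Format("Ended   {0:u}  reason: {1}{2}",
                DateTime.UtcNow,
                System.Web.Hosting.HostingEnvironment.ShutdownReason,
                Environment.NewLine));
    }
}
```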
You do have to consider how the Application Pool that your site is running in is configured. Applications can be dumped in a pool with other apps, or an app can have its own pool. The pool can be restarted based on memory usage beyond a certain point, on a schedule so to speak (reset daily at 3am, for example), and I believe after a number of requests beyond a certain configurable threshold. Not a super expert on IIS, so verify before you buy ;-)
I have two different web services (running on the local machine) pointing to one application pool (1. Can I do that? Is it a performance concern?). I don't have much knowledge about how the application pool works.
Another .NET application uses the two web services, but frequently one web service (which is called internally by an SSIS package within the .NET application) does not respond.
What might be the reason, how can I make sure it responds all the time, and is there a better way to improve the performance?
If I am missing anything or you need further information, comments are welcome.
Yes, you can have multiple web applications using the same application pool.
Is it a performance concern? If traffic is really high or the code is faulty, then perhaps.
Application pools allow pushing sites to different processes, reducing the risk of each affecting the other. If one app pool contains an application/web application that has a memory leak, the leak will only affect that particular process, at least directly. Each process can be recycled either by time or system parameters, which mitigates risks of having something in a bad state.
Performance? Another benefit of app pools is the ability to have multiple instances running simultaneously (a similar effect to putting each app in its own pool). The benefit of this is that more requests can be handled at a time. The downside is that you cannot use in-process session state, and your application state will be duplicated for each instance of the process. You would need to consider how much 'stuff' you keep in session and how your caching scheme would be affected, but it has potential for giving a web application more scalability.
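For reference, the number of worker processes per pool (the "web garden" size) is the Maximum Worker Processes setting on the application pool; a hedged sketch of changing it with the Microsoft.Web.Administration API, with a placeholder pool name:

```csharp
using Microsoft.Web.Administration;

class WebGardenConfig
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // Placeholder pool name; 1 is the default (no web garden).
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];
            pool.ProcessModel.MaxProcesses = 4;   // four worker processes
            serverManager.CommitChanges();
        }
    }
}
```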
You mention calling SSIS... I am assuming that is a long-running operation, so you would probably want to push the call to that process onto some sort of queue that can be processed outside of the web service request. MSMQ might work for you. Using a queue like that, you would initiate the running of the code and then have a way of checking the status of the call to see whether it is done.
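A minimal MSMQ sketch of that hand-off could look like the following; the queue path is a placeholder, System.Messaging must be referenced and MSMQ must be installed on the machine:

```csharp
using System.Messaging;

class SsisRequestQueue
{
    // Placeholder private queue on the local machine.
    private const string QueuePath = @".\Private$\ssis-requests";

    // Called from the web service: drop the request and return immediately.
    public static void Enqueue(string packageParameters)
    {
        if (!MessageQueue.Exists(QueuePath))
        {
            MessageQueue.Create(QueuePath);
        }

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(packageParameters, "ssis-request");
        }
    }

    // Called from a separate worker process or service: block until a request
    // arrives, then kick off the long-running SSIS work outside the web request.
    public static string DequeueNext()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message message = queue.Receive();
            return (string)message.Body;
        }
    }
}
```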
I agree with Greg Ogle, but one more point I think is worth mentioning. Splitting the applications into multiple app pools will also give you an added benefit when it comes to troubleshooting. If you have the various applications split out, you can tell specifically which app pool is related to which w3wp.exe process when you need to, like, say, when that w3wp.exe process is taking 98% of your CPU.