I set up a web application that was running fine, but after doing some testing I ran it on my server, and since the back-end job was long I let it run. Now I've tried to load the web application from another machine and it won't even load. Is this related to worker processes and application pools, or is there perhaps something else wrong?
I do not want to stop the worker process just yet to check if it then loads.
I stopped the worker process and the web app finally loaded. Does anyone have any ideas on how to prevent this? It should not be an issue, and I am beginning to think that maybe an MVC project would be better.
EDIT:
Thanks, Dash; I am looking into it, and I do think it is simply a case of the process locking things up so that the web application cannot load. Now my question is whether sessions can avoid this. I attempted to use background threads and assumed that this would avoid the locking/IO issues, but I need to rethink this. Any ideas appreciated.
Since we moved the majority of our users to our ASP.NET web application running as a web app in Azure, we've experienced an intermittent issue where the application will crash for all users, responding only with timeouts or 502/503 errors. This usually occurs after we've made a configuration change (like changing an app setting in the portal) or swapped slots during a deployment. The very frustrating thing is that there appears to be no way to get it back until it eventually sorts itself out. During and immediately before the outage the diagnostics look fine: minimal CPU and memory usage. There are lots of errors, but they're mostly timeout errors. The problem is not resolved by scaling out or up, and application restarts have no effect. Even killing w3wp does not bring the app back. CPU profiles taken during the outage show failed requests but not much else.
Does anyone know what might be going on here or have any ideas of what we could try?
.NET processes running in the Azure Web App environment may intermittently crash due to code or performance issues. It's important to capture a crash dump automatically when such a crash/exception happens, for further investigation.
Here is a CrashDiag Site Extension, which can easily help us capture the necessary data when an intermittent unhandled exception happens. To capture dumps for exceptions, you can refer to this article.
And since, as you said, you only get timeout or 502/503 error messages, here is an article you can refer to for troubleshooting those.
This issue was actually being caused by our ORM writing to the TraceWriter. A configuration flag had been left on in the production environment due to a change in the deployment process. The TraceWriter is thread-safe, so it was taking locks and blocking. During busy usage periods, requests to the TraceWriter would start queuing, leading to a non-responsive application.
I'm developing a .NET 4 application that requires a backend worker thread to be running. This thread consists mostly of the following code:
while (true) {
    // Check stuff in database
    // Do stuff
    // Write to database / filesystem
    Thread.Sleep(60000); // C# casing is Thread.Sleep, and the call needs a semicolon
}
The ASP.NET app is just a frontend for the database.
My question is where the best place to put this worker loop would be. It seems my two immediate choices would be (1) to spin it off from the Application_Start method and just let it run, or (2) to bundle it into a separate process (a Windows service?).
(1) would obviously need some logic in the ASP.NET code to check that it's still running, as IIS might kill it. It's also quite neat in that the whole application logic is in one easily deployable package.
(2) is much more segregated, but feels a lot messier.
What's the best approach?
I would strongly opt for the Windows Service if possible. Background threading in ASP.NET comes with a lot of baggage.
The lifetime of your background process is at the mercy of IIS. If IIS decides it's time to recycle the App Pool, your background process will restart. If IIS decides to stop the App Pool due to inactivity, your background process will not run.
If IIS is configured to run as a Web Garden (multiple processes per AppPool), then your background thread could run more than once.
Later on, if you decide to load balance your website (multiple servers running the site), you may have to change your application to ensure the background threading happens on only one server.
And plenty more.
Consider something simple like Hangfire and then think about the design points in this related answer.
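For illustration, a minimal sketch of what that could look like with Hangfire in an OWIN-based app, assuming the Hangfire NuGet package with SQL Server storage; the connection string name and the JobRunner type are placeholders:
using Hangfire;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("DefaultConnection");

        // Hosts the job server inside the web process; job state is persisted,
        // so an app pool recycle just picks up where it left off.
        app.UseHangfireServer();

        // Replaces the while(true)/Thread.Sleep loop with a recurring job.
        RecurringJob.AddOrUpdate(
            "background-check",
            () => JobRunner.CheckAndDoWork(),
            Cron.Minutely());
    }
}

public static class JobRunner
{
    public static void CheckAndDoWork()
    {
        // Check stuff in the database, do stuff, write to database/filesystem.
    }
}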
I have an ASP.NET web application which on the back end talks to an ASMX web service. We have measured, and the average wait time for the initial request is 20 seconds. I am wondering if there is a way I can send the web service up to the server precompiled, thus negating the need for compilation.
We have also noticed that IIS tends to recycle its worker processes, and this also causes a compilation. The process itself is not accessed terribly often, but it needs to be much quicker when it is.
Any thoughts?
Thanks in advance
Update: thanks to all the suggestions, I have tried a number of them and here is what I have found. Tinkering with recycle/shutdown times is risky because I don't want threads to just sit around doing nothing. Upon further inspection the site is going up precompiled, so my question is: why is there an initial spin-up time for a web service?
Right now: leaning towards the warm-up script suggestion below.
Update: The service is being hit from a web server on a different machine. We are seeing problems with the initial request only.
One alternative approach is to write a "warm-up script" which simply requests one page from your app. This makes the server spin up for you, and the next person gets a fast hit. You could also set up a scheduled task to run that script periodically (for example, if you schedule the app pool to recycle at 4 am, schedule the warm-up script to run at 4:01 am).
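A minimal console-app sketch of such a script; the URL is a placeholder for a page in your own app:
using System;
using System.Net;

class WarmUp
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // One request is enough to start the worker process and
            // trigger ASP.NET's first-hit compilation.
            client.DownloadString("http://localhost/MyApp/Default.aspx");
            Console.WriteLine("Warm-up request completed.");
        }
    }
}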
You should be looking to perform precompilation as part of your build/deploy scripts.
Having a post-deployment activity programmatically request each web resource to trigger compilation seems pretty daft to me.
Thomas' answer gives the compiler; there's also a guide on MSDN: How to: Precompile ASP.NET Web Sites.
If you're using MSBuild then go for the AspNetCompiler Task.
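For illustration, a sketch of how the task might be wired into a project file; the virtual path and directories are hypothetical:
<Target Name="PrecompileWeb" AfterTargets="Build">
  <!-- Hypothetical paths; adjust to your own layout. -->
  <AspNetCompiler
      VirtualPath="/MyWebService"
      PhysicalPath="$(MSBuildProjectDirectory)\MyWebService"
      TargetPath="$(MSBuildProjectDirectory)\PrecompiledWeb"
      Force="true"
      Debug="false" />
</Target>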
(I probably would have made this a comment but I'm not yet allowed... not enough SO juice)
Have you tried using aspnet_compiler in the framework folder (e.g. %SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727)?
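For example (hypothetical paths; -v is the virtual path, -p the physical source, and the last argument the output directory):
%SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler.exe -v /MyApp -p C:\Source\MyApp C:\Precompiled\MyApp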
You can control ASP.NET recycling via the settings on the Application Pool. If it is recycling more often than the settings then something else is causing that (e.g. changes to the web.config etc.)
Try disabling application recycling in the configuration of the site or application pool in IIS.
IIS 6 (if I remember correctly): right-click the AppPool -> "Performance" tab -> uncheck "Shut down worker process on idle time".
IIS 7.5: there is a property (an app pool setting as well, the Idle Time-out) that shuts down the AppPool after X minutes of idle time. A value of 0 means "never shut down".
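On IIS 7+ the same setting can also be changed from the command line; a sketch, assuming the default app pool name (00:00:00 disables the idle time-out):
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /processModel.idleTimeout:00:00:00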
Hope this helps
At a previous position we had similar issues with WCF services when they initially spun up. We got around this by creating a simple program that would invoke all our web services after a deployment.
You could also use this same type of program as a keep alive service and just ping the services every 5-10 minutes etc.
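A minimal sketch of such a pinger; the service URLs and the interval are placeholders:
using System;
using System.Net;
using System.Threading;

class KeepAlive
{
    static void Main()
    {
        var urls = new[]
        {
            "http://server/Services/First.asmx",
            "http://server/Services/Second.asmx"
        };

        while (true)
        {
            foreach (var url in urls)
            {
                try
                {
                    using (var client = new WebClient())
                        client.DownloadString(url);
                }
                catch (WebException)
                {
                    // Log and keep going; one failed ping shouldn't stop the rest.
                }
            }
            Thread.Sleep(TimeSpan.FromMinutes(5)); // keep-alive interval
        }
    }
}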
I have a Silverlight application hosted in an ASP.NET page. I need to do some processing when the application first starts up and start up some background processes (various periodic checks).
I thought that the Global.asax Application_Start event would be a good place to do this, but I find that the Application_Start fires multiple times which I didn't expect. From what I've read it seems that when the last user logs out of my application their session disappears and IIS unloads my application. When it's next requested it gets loaded again and the Application_Start runs again, which is not really what I want.
Is this the expected behaviour? Is there any way to keep the application loaded and not have it restart like this?
Secondly, I have these periodic background processes that I want to run. Maybe a Windows Service would be a better place for them, but having a timer run from within a static class in my application is convenient. Is there a way I can keep these running even though there are no active users?
I think you are trying to achieve a behaviour which just doesn't fit the web server model well. Many CMSes try to perform periodic tasks by having user web requests initiate the work, but I have never seen it done with much success.
If you aren't restricted by deployment issues, access rights, etc., I would recommend going with the Windows Service approach. Just make sure to incorporate it in your build/deployment process so that it won't become a hassle.
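A minimal sketch of that approach using System.ServiceProcess; the class and method names are placeholders:
using System.ServiceProcess;
using System.Timers;

public class PeriodicChecksService : ServiceBase
{
    private Timer _timer;

    public PeriodicChecksService()
    {
        ServiceName = "PeriodicChecksService";
    }

    protected override void OnStart(string[] args)
    {
        _timer = new Timer(60000); // run the periodic checks once a minute
        _timer.Elapsed += (s, e) => RunPeriodicChecks();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
        _timer.Dispose();
    }

    private void RunPeriodicChecks()
    {
        // The same periodic work the static timer did, now independent of
        // IIS unloading the web application when the last session ends.
    }

    public static void Main()
    {
        ServiceBase.Run(new PeriodicChecksService());
    }
}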
I've recently deployed an ASP.NET application to my shiny new VPS and while I'm happy with the general performance increase that a VPS can give over a shared hosting solution, I'm unhappy with the startup time of my application.
My web application takes a fair amount of time to start up when my client first hits it. I'm not running it in debug mode (disabled that in my web.config), and it doesn't have any real work to do on startup - I have no code in my application start event handler, I don't start any extra threads, nothing. The first time my client hits my application it takes a good 15-20 seconds to respond. Subsequent calls take 1-2 seconds, unless I wait a few minutes for my application to shut down. Then it's back to a 15-20 second startup time.
(I'm aware that my timing benchmark is very unscientific, those numbers should just give a feel for the performance on startup of my app).
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
So, after that book-sized preface, here are my questions:
Is my understanding of ASP.NET's compilation incorrect? How does it actually work?
Is there a way I can force IIS to cache my binaries, or keep my application alive indefinitely?
If it's a bad idea to do either of the things in my previous question, why is it a bad idea, and what can I do instead to increase startup performance?
Thanks!
Edit: it appears my question is a slight duplicate of this question (I thought I did a better job of searching for an answer to this on here, haha). I think, however, that my question is more comprehensive, and I'd appreciate if it wasn't closed as a duplicate unless there are better, already-asked questions on here that address this.
IIS also shuts down your web app after a given time period, depending on its configuration. I'm not as familiar with IIS 7 and where this is configured, so you might want to do a little research (starting point?).
Is it bad? Depends on how good your code is. If you're not leaking memory or resources, probably not.
The other solution is to precompile your website. This might be the better option for you. You'll have to check it out and see, however, as it may come with a downside, depending on how you interact with your website.
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
That is correct. Specifically, the assemblies are built as shadow copies (not to be confused with the volume snapshot service / shadow copy feature). This enables you to replace the code in the folder on the fly without affecting existing running sessions. ASP.NET will detect the change, and compile new versions into the target directory (typically Temporary ASP.NET Files). More on that process: Understanding ASP.NET Dynamic Compilation
If it's purely compilation time, then often the most efficient approach is to hit the website yourself after the recycle. Make a call at regular intervals to ensure that it is you who receives the 15-second delay, not your client.
I would be surprised if that is all compilation time, however (depending on hardware). Do you have a lot of static instances of classes? Do they do a lot of work on start-up?
Either with tracing or profiling you could probably quite quickly work out where the start-up time was spent.
As to why keeping a process around is a bad idea, I believe it's down to clean-up. No matter how well we look after our data or how well behaved the GC is, restarting the process performs a thorough clean-up. Things like fragmentation go away, and any other resource issues that build up over time are cleared. That's why it's generally a bad idea to keep a server process running indefinitely.