I have started to use New Relic to monitor the performance of http://alternativeto.net, which is a fairly large website.
What I've noticed is that a significant amount of time is spent in a method they report as "TransferRequestHandler", and when I dive into it I see that it's really the "BeginRequest()" method that is taking the time.
It looks like this in New Relic.
The closest I've come to finding anything that could explain the problem is this Stack Overflow thread: I just discovered why all ASP.Net websites are slow, and I am trying to work out what to do about it. I've actually tried replacing the session module, but that didn't help.
The site is a hybrid between ASP.NET MVC and Webforms.
I realize that this is a long shot and you don't have much to "go on", but if someone could point me in the right direction, and most importantly help me reproduce the behavior locally or something like that, I would be extremely grateful :)
BeginRequest is where everything starts, so it is normal for the delay to show up there, but you have to dig deeper to find the actual point in your code that causes it.
If the session is the issue, then disable the session for long-running actions, such as downloading a file or complicated procedures that keep the page busy for a long time; one way to do this is sketched below.
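A minimal sketch of that idea, assuming a file-download handler: implementing the IReadOnlySessionState marker interface tells ASP.NET the handler only reads the session, so it skips the exclusive session lock. The handler name and file path are hypothetical.

    using System.Web;
    using System.Web.SessionState;

    // IReadOnlySessionState is a marker interface (no members): ASP.NET
    // grants this handler read-only session access, so a slow download
    // does not serialize other requests from the same user behind the
    // exclusive session lock.
    public class DownloadHandler : IHttpHandler, IReadOnlySessionState
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "application/octet-stream";
            // TransmitFile streams the file without buffering it all in memory.
            context.Response.TransmitFile(context.Server.MapPath("~/files/report.pdf"));
        }
    }

For WebForms pages the equivalent is EnableSessionState="ReadOnly" (or "False") in the @ Page directive.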
Related questions about the session:
call aspx page to return an image randomly slow
ASP.NET Server does not process pages asynchronously
Trying to make Web Method Asynchronous
Web app blocked while processing another web app on sharing same session
What perfmon counters are useful for identifying ASP.NET bottlenecks?
Replacing ASP.Net's session entirely
The next step would be to implement a totally custom session.
It may also help to run your site on more than one worker process (a web garden), but before doing that you must be sure that your data access is correctly synchronized, using a Mutex or other locking mechanisms that work across processes; a sketch follows.
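A minimal sketch of cross-process locking with a named Mutex, which is what a web garden requires; the mutex name and the guarded work are hypothetical:

    using System;
    using System.Threading;

    public static class SharedResource
    {
        // A named ("Global\") mutex is visible to every process on the
        // machine, unlike lock(), which only synchronizes threads inside
        // a single worker process.
        public static void UpdateSafely(Action update)
        {
            using (var mutex = new Mutex(false, @"Global\MyAppSharedResource"))
            {
                mutex.WaitOne();
                try { update(); }
                finally { mutex.ReleaseMutex(); }
            }
        }
    }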
Our company releases updates of our rich client application (written mainly with ASP.NET, WCF services, and ASP.NET AJAX) on our client's Windows Server 2008 IIS 7 web server. Once in a while, we have large releases of updates, and sometimes there are bugs that users catch right after the release that were not caught during automation testing or stage testing.

Is there a way to smoothly deploy ASP.NET code on IIS 7 while users are still on, without disrupting workflows that touch code that was not affected? I've found that if I just copy the code from stage (without the web.config) manually and paste it into the production web root folder, nobody is really kicked off. But I'm wondering if there are any side effects to this strategy for users diligently working in the application. Are any other connections interrupted, and how are they even handled in this situation (i.e., SQL connections, WCF service calls, whether users keep the same session and whether that has any impact, etc.)?

If I chose this method, I'd have something in the web.config that would display a message to every user (in the master page), like a banner that says "Please log off and clear your cache", so they would see updates to the issues addressed. But this would only be relevant to the impacted users.
If anyone thinks this is a poor strategy for minor updates and has a better one, such as changing a web.config setting that redirects users to a different server while the deployment is taking place, or some other methodology, my ears are listening. Obviously the latter sounds safer, but I just don't know how it could be done. I've read about load-balanced servers, but I thought that type of setup was done for different purposes, such as failing over when a server goes down. Or would it be the best solution here, letting you take one site down at a time? I'm open to any ideas.
I used to stress about minimal-impact releases too, but now we just take the site down. The reality is twofold:
You cannot guarantee that everything someone is working on right now is NOT something you're about to update. Consider this: A user is working on x.aspx and is in the middle of a postback. You drop a new x.aspx.
With enough notice, maintenance windows are a way of life. Users should expect that, from time to time, you need exclusive access to the application to make updates, etc.
It's just too hard to keep all the plates in the air when you really don't know what someone might be working on while you deploy. Especially if database updates are in the mix!
If there is a load balancer in the mix: you would remove the server(s) with the old code and add the server(s) with the new code. This lets the traffic die out on the old-code servers without kicking people off, while the new server(s) pick up the traffic.
We do this with a new release to one server at a time until we have replaced all of the code in our server farm. It gives the application time to bake in the real world. If issues come up, and they have, you only have to revert a single server. Using the load balancer makes this easier.
It is (usually) a seamless transition. Of course you need to make sure your app can handle any database changes, etc.
In our Application_Start event handler we're performing some actions that intermittently fail due to file locking issues. In this scenario we would like to return the application to an "un-started" state.
By this I mean that the user will be shown an error page, and then the next time a user hits the site the Application_Start event will be fired again.
We're using ASP.NET 3.5, WebForms and MVC.
AppDomain.Unload offers what you're looking for, but I wouldn't recommend it.
There are lots of catches; tearing down an app domain programmatically is not without its own set of issues (i.e. if a thread's blocked in native code you may see a CannotUnloadAppDomainException) and is generally a poor design, IMO.
What you're attempting to do is highly unconventional; I would reconsider the approach altogether. If you just need to execute some code once at the app-domain level, there are lots of better ways to do it, like statics, for instance, or a flag in the HttpRuntime cache. Just mind the web-garden and concurrency scenarios; a sketch of the flag approach follows.
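A minimal sketch of the flag approach, assuming the fragile work can be retried from a request entry point such as Application_BeginRequest; the names are hypothetical:

    using System;

    public static class StartupWork
    {
        private static readonly object Sync = new object();
        private static bool _initialized;

        public static void EnsureInitialized(Action performStartupWork)
        {
            if (_initialized) return;
            lock (Sync)
            {
                if (_initialized) return;
                performStartupWork(); // may throw on a file lock; the
                                      // next request simply retries
                _initialized = true;  // only flagged after success
            }
        }
    }

As noted above, in a web garden each worker process runs its own copy of this, so the static flag is per-process, not global.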
Good luck.
I'm pretty sure that there is no way to quit or force-stop a WebForms application from within the application itself. However, an ungentle way to prevent the application from starting is to let an unhandled error escape from your Application_Start. This prevents start-up from completing, and upon the next hit to your WebForms app, Application_Start will run again.
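A minimal sketch of that approach; DoFragileStartupWork is a hypothetical stand-in for the file-locking-prone code:

    using System;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // If this throws, the unhandled exception aborts start-up and
            // ASP.NET will run Application_Start again on the next request.
            DoFragileStartupWork();
        }

        private static void DoFragileStartupWork()
        {
            // initialization that may intermittently fail goes here
        }
    }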
I have a Silverlight application hosted in an ASP.NET page. I need to do some processing when the application first starts up and start up some background processes (various periodic checks).
I thought that the Global.asax Application_Start event would be a good place to do this, but I find that Application_Start fires multiple times, which I didn't expect. From what I've read, it seems that when the last user logs out of my application their session disappears and IIS unloads my application. When it's next requested it gets loaded again and Application_Start runs again, which is not really what I want.
Is this the expected behaviour? Is there any way to keep the application loaded and not have it restart like this?
Secondly, I have these periodic background processes that I want to run. Maybe a Windows Service would be a better place for them, but having a timer run from within a static class in my application is convenient. Is there a way I can keep these running even though there are no active users?
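For reference, a minimal sketch of the timer-in-a-static-class pattern being described, with hypothetical names and interval. Note that when IIS unloads the application, this timer dies with it, which is exactly the behaviour in question:

    using System;
    using System.Threading;

    public static class PeriodicChecks
    {
        // Keep the timer in a static field so it isn't garbage collected.
        private static Timer _timer;

        public static void Start()
        {
            _timer = new Timer(_ => RunChecks(), null,
                               TimeSpan.Zero, TimeSpan.FromMinutes(10));
        }

        private static void RunChecks()
        {
            // the periodic work goes here
        }
    }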
I think you are trying to achieve a behaviour which just doesn't fit the web server model well. Many CMSs try to perform periodic tasks etc. by having some user web requests initiate the work, but I have never seen it done with much success.
If you aren't restricted by deployment issues, access rights, etc., I would recommend going with the Windows Service approach. Just make sure to incorporate it in your build/deployment process so that it won't become a hassle.
We currently have a Live ASP.NET application (Basically a CMS) running on our IIS7 web-server.
Every once in a while (we're talking every few months) its app pool will go to 100% CPU usage and stay there until the page times out. We've tried increasing the timeout for the page to 30 minutes in the web.config, but it still just stayed at full CPU, so I'm presuming it's some form of infinite loop.
It is a massive application, one of the biggest we have, and far too large to blindly search for an issue. The prevailing opinion is that since it's so rare we can just restart the app-pool whenever it happens, but I'd much prefer to fix it.
I have access to the code and full administrator access to the hosting server, and the monitoring software we're running gives me plenty of time to be on the server while the issue is taking place, but I can't find any way to get useful data about what's going on at the time without adding a massive constant overhead to the site (which, given that it'll take months to happen again, isn't really viable).
I'm wondering if anyone has some advice as to how I could narrow down our search? A stack trace of the currently running threads would be spectacular, but even just a list of the pages that are actively being served would make a huge difference. I can add code to the project to make it more traceable, but logging everything in the hopes of catching it would be unrealistic (It gets a lot of traffic and we don't want to add significant overhead to page loads).
Tess's blog is an excellent resource on debugging production ASP.NET applications.
I think this blog post from her blog will be really helpful in getting started in debugging this problem: Hang debugging walkthrough.
Hope this helps
I recommend using the ASP.NET performance counters (such as the request queue length and the number of currently executing requests).
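A minimal sketch of reading those counters with System.Diagnostics.PerformanceCounter; the counter names below come from the standard machine-wide "ASP.NET" category:

    using System;
    using System.Diagnostics;

    class QueueMonitor
    {
        static void Main()
        {
            // These counters have no per-instance name; they are machine-wide.
            var queued = new PerformanceCounter("ASP.NET", "Requests Queued");
            var current = new PerformanceCounter("ASP.NET", "Requests Current");

            Console.WriteLine("Requests Queued:  {0}", queued.NextValue());
            Console.WriteLine("Requests Current: {0}", current.NextValue());
        }
    }

A sustained, climbing "Requests Queued" while CPU sits at 100% is a good sign that requests are stuck rather than merely slow.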
I've recently deployed an ASP.NET application to my shiny new VPS and while I'm happy with the general performance increase that a VPS can give over a shared hosting solution, I'm unhappy with the startup time of my application.
My web application takes a fair amount of time to start up when my client first hits it. I'm not running it in debug mode (disabled that in my web.config), and it doesn't have any real work to do on startup - I have no code in my application start event handler, I don't start any extra threads, nothing. The first time my client hits my application it takes a good 15-20 seconds to respond. Subsequent calls take 1-2 seconds, unless I wait a few minutes for my application to shut down. Then it's back to a 15-20 second startup time.
(I'm aware that my timing benchmark is very unscientific, those numbers should just give a feel for the performance on startup of my app).
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
So, after that book-sized preface, here are my questions:
Is my understanding of ASP.NET's compilation incorrect? How does it actually work?
Is there a way I can force IIS to cache my binaries, or keep my application alive indefinitely?
If it's a bad idea to do either of the things in my previous question, why is it a bad idea, and what can I do instead to increase startup performance?
Thanks!
Edit: it appears my question is a slight duplicate of this question (I thought I did a better job of searching for an answer to this on here, haha). I think, however, that my question is more comprehensive, and I'd appreciate if it wasn't closed as a duplicate unless there are better, already-asked questions on here that address this.
IIS also shuts down your web app after a given idle period, depending on its configuration. In IIS 7 this is the application pool's Idle Time-out setting (20 minutes by default), found under the pool's Advanced Settings.
Is it bad? Depends on how good your code is. If you're not leaking memory or resources, probably not.
The other solution is to precompile your website. This might be the better option for you. You'll have to check it out and see, however, as it may come with a downside, depending on how you interact with your website.
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
That is correct. Specifically, the assemblies are built as shadow copies (not to be confused with the volume snapshot service / shadow copy feature). This enables you to replace the code in the folder on the fly without affecting existing running sessions. ASP.NET will detect the change, and compile new versions into the target directory (typically Temporary ASP.NET Files). More on that process: Understanding ASP.NET Dynamic Compilation
If it's purely compilation time, then often the most efficient approach is to hit the website yourself after the recycle. Make a call at regular intervals to ensure that it is you who receives the 15-second delay, not your client.
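A minimal sketch of such a keep-alive pinger, run as a console app or scheduled task; the URL and interval are hypothetical:

    using System;
    using System.Net;
    using System.Threading;

    class KeepAlivePing
    {
        static void Main()
        {
            const string url = "http://example.com/"; // your site here

            while (true)
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        // Take the warm-up hit here instead of the client.
                        client.DownloadString(url);
                    }
                }
                catch (WebException ex)
                {
                    Console.WriteLine("Ping failed: " + ex.Message);
                }
                // Stay under the app pool idle timeout (20 minutes by default).
                Thread.Sleep(TimeSpan.FromMinutes(5));
            }
        }
    }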
I would be surprised if that is all compilation time, however (depending on hardware). Do you have a lot of static instances of classes? Do they do a lot of work on start-up?
Either with tracing or profiling you could probably quite quickly work out where the start-up time was spent.
As to why keeping a process around is a bad idea, I believe it's down to clean-up. No matter how well we look after our data or how well behaved the GC is, a lot of clean-up happens simply by restarting the process: fragmentation can go away, and any other resource issues that build up over time are cleared down. That is why keeping a server process running indefinitely is quite a bad idea.