Profiling warm-up of ASP.NET MVC3 Application on Azure

Throughout the development of my application, the first-response time has got worse and worse; it is now taking 10 minutes to load! I am using Web Deploy to speed up publishing my changes, and from what I've read on MSDN, I understand that this delay is due to compilation and assembly loading.
It's an ASP.NET MVC3 application which uses EF Code First, MVC-MiniProfiler, etc. I'm wondering if it's one of these assemblies that is slowing things down.
Is there a way to track down the long-running process that is plaguing my development/testing cycle?
As a side note, the issue is nowhere near as bad in the Azure Emulator.

With Windows Azure SDK 1.4 and later, you have the option to enable profiling for your application (besides IntelliTrace). You can read about some of the options available (in 1.5) in my blog post here, where you will also find a good screenshot showing the option to enable either IntelliTrace or profiling.
The catch is that you can only have one of them running at a time (either IntelliTrace or profiling). So I suggest you first run IntelliTrace and inspect the IntelliTrace logs for any exceptions during your application's execution. Then do another deployment with profiling enabled to find out which methods are the most time-consuming.
Please note that IntelliTrace/profiling can only be enabled during the deployment process and cannot be changed with a simple Web Deploy, so you'll have to make at least two full deployments for this test.

It's hard to say what the slowdown is - as Awais mentioned, IntelliTrace is your friend. However, the delay might be unavoidable (I have seen this a number of times). If this is the case, you can add a startup script that will "prime" IIS, preventing the problem when the first user hits the site.
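If priming turns out to be the answer, the warm-up can be as simple as issuing one HTTP request to the site after each deployment or recycle. A minimal sketch, assuming a small console app run as a startup task; the URL is a placeholder for your own site root:

// Warm-up sketch: one request so the first real visitor does not pay the
// compilation/assembly-loading cost. The URL below is an assumption.
using System;
using System.Diagnostics;
using System.Net;

class WarmUp
{
    static void Main()
    {
        var timer = Stopwatch.StartNew();
        using (var client = new WebClient())
        {
            client.DownloadString("http://yourapp.cloudapp.net/");
        }
        Console.WriteLine("Warm-up request completed in {0} ms", timer.ElapsedMilliseconds);
    }
}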

Related

Referencing an unstable DLL

We are referencing a 3rd party proprietary CLI DLL in our .net project. This DLL is only an interface to their proprietary C++ library. Our project is an asp.net (MVC4/Web API) web application.
The C++ unmanaged library is rather unstable. Sometimes it crashes due to, e.g., dangling pointers. We have no way of solving this, and using this library is a first-class customer requirement.
When the application crashes, the application pool in IIS no longer responds. We have to restart it, and doing so takes a couple of minutes (yes, that long!).
We would like to keep this unstable DLL from crashing our application. What's the best way of doing it? Can we keep the CLI DLL in a separate AppDomain? How?
Thanks in advance.
I think every answer to this question will be some kind of work around.
My workaround would be to not interact directly with the DLL from your web application.
Instead write your requests from the web application to either a Message Queue or a SQL table. You can then have another application such as a Windows Service which reads the requests, interacts with the DLL and then writes the results back for your web application to read.
I'm not saying that SQL or message queues are the right way; I'm thinking more of the general process flow.
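A minimal sketch of that flow, assuming MSMQ via System.Messaging (a SQL table would follow the same pattern); the queue path and the string payload are assumptions for illustration:

// Web application side: enqueue the request and return immediately.
// Windows Service side: dequeue, call the unstable DLL, write the result back.
using System.Messaging;

public static class NativeCallQueue
{
    private const string QueuePath = @".\Private$\NativeCalls"; // hypothetical queue name

    public static void EnqueueRequest(string payload)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(payload);
        }
    }

    public static string DequeueRequest()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message request = queue.Receive(); // blocks until a request arrives
            return (string)request.Body;
        }
    }
}

The web application never touches the DLL; if the Windows Service crashes, only the service needs restarting and the queued requests survive.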
I had this exact problem with a third party library that accessed protected memory for purposes of interacting with a hardware copy protection dongle. It worked fine in a console or winforms app, but crashed like crazy when called from an IIS application.
We tried several different things, some of which are mentioned in other answers on this page. But ultimately, the best solution for us was to use a very old technology: .NET Remoting. I know it's somewhat frowned on these days, but it fit this particular need quite well.
The unstable code was placed in a Windows Service application. The web application made remoting calls to this service, which relayed the commands to the third-party library.
Now I'm sure you could do the same thing with WCF, sockets, etc. But remoting was quick and easy to set up, and since we only talk to the same server it works without opening any ports. It just talks on a named pipe.
It does mean a second service to install besides the web application, but that was acceptable in my particular use case.
If you did something similar, and the third-party code actually crashed the service, you could probably write some code in your main application to bring it back up.
So perhaps a process boundary is more useful than an AppDomain when you have unstable code to wrangle.
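For illustration, a minimal sketch of that remoting setup over a named pipe; the interface, class, pipe, and object names are assumptions, not the poster's actual code:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Shared contract, referenced by both the Windows Service and the web application.
public interface INativeWrapper
{
    int Calculate(int input);
}

// Hosted inside the Windows Service; this is where the unstable third-party DLL is called.
public class NativeWrapper : MarshalByRefObject, INativeWrapper
{
    public int Calculate(int input)
    {
        return input; // stand-in for the real third-party call
    }
}

public static class RemotingHost
{
    // Called from the Windows Service's OnStart.
    public static void Start()
    {
        var channel = new IpcChannel("NativeWrapperPipe");
        ChannelServices.RegisterChannel(channel, false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(NativeWrapper), "NativeWrapper.rem", WellKnownObjectMode.Singleton);
    }
}

public static class RemotingClient
{
    // Called from the web application; no ports are opened, only a local named pipe.
    public static INativeWrapper Connect()
    {
        return (INativeWrapper)Activator.GetObject(
            typeof(INativeWrapper), "ipc://NativeWrapperPipe/NativeWrapper.rem");
    }
}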
I would first increase the IIS process recycling rate; maybe the DLL code fails after a certain number of calls, or after the process reaches a certain amount of memory usage.
You can find information on the configuration of IIS 7.0 recycling options here: http://technet.microsoft.com/en-us/library/cc753179(v=ws.10).aspx
In your case I would recycle the process at a specific time, when you know there is less load on the application, and also after a certain number of requests (lower than the default), to try to have a "fresh" process most of the time.
The recycling process is graceful in the sense that the old process is not terminated until the one that will replace it is ready, so there should be no noticeable downtime.
More information about the recycling mechanism here: http://technet.microsoft.com/en-us/library/cc745955.aspx
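Those settings can be changed in IIS Manager, via appcmd, or programmatically. A hedged sketch using Microsoft.Web.Administration (run with administrative rights); the pool name, schedule, and request limit are assumptions:

using System;
using Microsoft.Web.Administration;

class ConfigureRecycling
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"]; // hypothetical pool name

            // Recycle every night at 03:00, when load on the application is lowest.
            pool.Recycling.PeriodicRestart.Schedule.Clear();
            pool.Recycling.PeriodicRestart.Schedule.Add(new TimeSpan(3, 0, 0));

            // Also recycle after a fixed number of requests, lower than the default,
            // so a "fresh" process is serving most of the time.
            pool.Recycling.PeriodicRestart.Requests = 5000;

            serverManager.CommitChanges();
        }
    }
}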
If the above does not solve the problem I would wrap the calls in my own code that manages the unstable DLL execution.
This code should recover from failures, for example by repeating the failing calls until a result is obtained, and fail with a graceful error if that is not possible after a number of attempts.
Internally, the calls to the unstable DLL could be made in a spawned thread, or the code could even live in a new external executable that you launch with Process.Start.
This last option has more overhead but it might be your only option. See this SO question for more information on this: How do you handle a thread that has a hung call?
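A minimal sketch of such a wrapper, assuming a simple retry-with-backoff policy; the delay, attempt count, and the exception type you catch should be tuned to how the DLL actually fails:

using System;
using System.Threading;

public static class UnstableDllGuard
{
    // Repeats the failing call a few times before giving up with a graceful error.
    public static T CallWithRetry<T>(Func<T> unstableCall, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                return unstableCall();
            }
            catch (Exception)
            {
                if (attempt == maxAttempts)
                    throw; // let the caller turn this into a friendly error message
                Thread.Sleep(500); // brief back-off before the next attempt
            }
        }
        throw new InvalidOperationException("Unreachable"); // satisfies the compiler
    }
}

Usage would look like UnstableDllGuard.CallWithRetry(() => thirdPartyCall(input)), where thirdPartyCall stands in for the vendor API. Keep in mind that a retry only helps with transient failures; a hard crash of the process still needs one of the process-boundary approaches above.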
I suggest the following solution.
Wrap this DLL with another web application. It can be one of the following; since you already use Web API, that is the most suitable option for you (a sketch follows this list).
Simple ASMX web service
WCF service
ASP.NET MVC Web API service
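A hedged sketch of the Web API option: the vendor assembly is referenced only by this small wrapper service, which you then host in its own site and application pool. NativeController and ThirdPartyWrapper are hypothetical stand-ins for your real types:

using System.Web.Http;

public class NativeController : ApiController
{
    // The call into the third-party code stays isolated in this process.
    // If it crashes, only this application pool dies; the main application
    // keeps serving requests and can retry or report a friendly error.
    [HttpGet]
    public int Calculate(int input)
    {
        return ThirdPartyWrapper.Calculate(input);
    }
}

// Stub standing in for the vendor's C++/CLI wrapper; in the real project you
// reference the vendor assembly here instead.
internal static class ThirdPartyWrapper
{
    internal static int Calculate(int input)
    {
        return input;
    }
}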
Review your P/Invoke code so that you do not have any bugs there. See the following articles:
The Black Art of P/Invoke and Marshaling in .NET
P/Invoke Revisited
Publish this application to IIS under a different application pool.
Use the standard techniques suggested before. I suggest configuring IIS recycling for both memory limits and scheduled times:
IIS process recycling rate
How to limit the memory used by an application in IIS?

High user volume, production web server experiences performance issues, DB server is running OK. How to find the bottleneck

We are experiencing a series of performance issues under high user volume with our ASP.NET application on production servers. The SQL database runs fine.
I suspect the issue relates to thread synchronization and the locking mechanism (the C# lock statement).
Is there any profiling tool that I can run on the production server (against release code) to find which C# functions are taking the longest, or otherwise pinpoint the performance issue?
Thank you,
One way is to use performance counters - see:
http://support.microsoft.com/kb/815159
http://msdn.microsoft.com/en-us/library/3xxk09t8.aspx
http://msdn.microsoft.com/en-us/library/wz3d1wc5%28v=VS.90%29.aspx
http://msdn.microsoft.com/en-us/library/ms972959.aspx
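Since lock contention is the suspicion, the .NET CLR LocksAndThreads counters are worth watching alongside the ASP.NET ones. A small sketch of reading one programmatically (category and counter names can vary by machine, so confirm them in perfmon first); the worker-process instance name is an assumption:

using System;
using System.Diagnostics;
using System.Threading;

class ContentionMonitor
{
    static void Main()
    {
        // "w3wp" is an assumption; pick the actual worker process instance from perfmon.
        var contention = new PerformanceCounter(
            ".NET CLR LocksAndThreads", "Contention Rate / sec", "w3wp", true); // read-only

        while (true)
        {
            // The first NextValue() returns 0; subsequent calls give the real rate.
            Console.WriteLine("Lock contentions/sec: {0:F1}", contention.NextValue());
            Thread.Sleep(1000);
        }
    }
}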
You can also use ASP.NET tracing to see at which events you are losing time. For that matter, you can use your own custom logging throughout the application to see where you are spending the time; however, that requires modifying your code. ASP.NET tracing will let you use the existing code to see the time spent at the various ASP.NET request life-cycle events.
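If you go the tracing route, a minimal sketch of timing a suspect section from existing code (enable tracing in web.config and view the output on Trace.axd); the helper name is an assumption:

using System;
using System.Diagnostics;
using System.Web;

public static class TimedSection
{
    // Wrap a suspect block of code and write how long it took to the ASP.NET trace log.
    public static void Run(HttpContext context, string name, Action work)
    {
        var timer = Stopwatch.StartNew();
        context.Trace.Write("Timing", name + " started");
        work();
        context.Trace.Write("Timing",
            string.Format("{0} finished in {1} ms", name, timer.ElapsedMilliseconds));
    }
}

Called as TimedSection.Run(HttpContext.Current, "order lookup", () => LoadOrders()), it narrows down where the time goes without attaching a full profiler to the production box.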
Lastly, from a profiling perspective, you can use tools such as JetBrains dotTrace or Redgate ANTS Profiler.

How can I profile an ASP.NET application running on a production server?

I have an ASP.NET application that is consistently using 75% - 100% of the CPU on a production server. How can I profile the application to figure out what part of the code is using up the most CPU? I have looked at a couple of different tools (Xte Profiler, EQATEC, dotTrace), but they all seem to want you to load and run the application within their tool and run tests locally (not in production). I want to profile the application while it is running in production, with people hitting it, to see what is actually going on. Is this possible?
I am a newbie to application profiling so forgive me if I have missed something obvious or am not thinking about this correctly.
Thanks,
Corey
Sam Saffron (one of the Stack Overflow creators) wrote a great command-line tool a while ago, but unfortunately has since abandoned it.
A friend of mine forked the code to make it work in 2015:
https://github.com/jitbit/cpu-analyzer
(the page has a link to Sam's post explaining how to use it)
The great thing about this tool (besides its no-install portability, command-line interface, etc.) is that APM packages like New Relic only monitor HTTP requests. If your app has background threads, they won't help much.
You should consider taking a memory dump on the production server while it's experiencing high CPU. Check out ADPlus and take a hang dump of the ASP.NET worker process. This can then be analyzed with WinDbg or other tools.
I just went through a similar experience where our production servers were experiencing excessive CPU load - a scenario we could not recreate locally or in test/staging environments. It had nothing to do with the database (database CPU was normal). Analyzing the dump file is what clued us in on what was causing the problem (excessive compilation of regex objects by some library we were using).
This answer would be incomplete without Tess' blog, so here's the link.
My guess is that it has to do with long-running database queries rather than the ASP.NET application itself. In my experience this is what I see 9 times out of 10, and it brings the application server down to a crawl as resources are consumed and the app has to wait for each query to finish before moving on. Take a look at SQL Profiler on the DB server and see if there are any queries that are taking a long time to execute.
It could be as simple as adding an index to a column or some other minor optimization. Once you know the query, you can then also go back to your code and tweak that section as well.
For those who stumble upon this question still, it really depends on what you are trying to accomplish.
If a server is running that high on CPU, odds are a standard profiler will bring it to a grinding halt due to its additional overhead.
There are actually three different types of profilers. Standard profilers, lightweight transaction profilers, and APM tools. You can read more about this in my blog post that discusses all 3:
.NET Profilers: 3 types and why you need all of them
It's certainly possible to profile ASP.NET with the EQATEC Profiler. See:
Profiling ASP.NET websites with EQATEC Profiler
EQATEC Profiler instruments your app in a separate step that enables the app itself to collect its own profiling info; the profiler then merely displays that timing data afterwards.
That means that you can run your instrumented ASP.NET app completely independent of the profiler itself.
You could, for example, instrument your app, mail it to your test site in India, have them run it on their server for some days where it will generate timing reports all on its own, and have them mail those reports back to you, which you can then view in the profiler. Pretty neat.
Note: to have the profiled app generate timing snapshots "on its own", it must know when to generate them. By default this is when the method Application_End is called in an ASP.NET app. You can programmatically dump snapshots when it suits you by using the EQATEC Profiler API. See the user guide or check out this thread.
You can read about this on Microsoft Developer Network.
You can select the documentation according to the version of your Visual Studio. You should verify that profiling functionality is provided for your edition of Visual Studio.
How to: Profile a Web Site or Web Application Using the Performance Wizard
Your best bet is to profile your code on your own machine to identify where it is spending time.
Grab a ten day free trial of this:
http://www.jetbrains.com/profiler/
Here are some links to get you going:
Link
http://msdn.microsoft.com/en-us/library/ms178643(v=VS.100).aspx
http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx

Speeding up a Web Service

I have a web service running and I consume it from my desktop application, which is written on the Compact Framework.
It takes 13 seconds to retrieve 8 results, which is kinda slow. I also expect to be retrieving more results in the future. The database query itself runs fast.
Two questions: how do I detect where the slowdown occurs? Do I put timers in the web service code?
I would like to detect whether it is the network or the application code.
This is my first exposure to web services in a real environment so please bear with me.
I used ASP.NET 2.0 and C# to write a simple web service.
Another good profiler is the EQATEC Profiler. I did a write up on it here: http://elegantcode.com/2009/07/02/eqatec-profiler-and-net-cf-profiling-and-regular-net/
And it works fine for .NET CF projects. It will allow you to see if there are performance issues in unexpected places.
You're already on the right track with adding event logging; include timers in it. Note that doing so will add to the overall time the calls take, so you'll want to remove them once you've tracked down the culprit. Also look into making the same web service call multiple times without re-initiating the connection; that may be a cause as well.
-Jay
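A minimal sketch of what those timers might look like inside the web method; the service and method names are assumptions standing in for your existing code:

using System.Diagnostics;
using System.Web.Services;

public class ResultsService : WebService
{
    [WebMethod]
    public string[] GetResults()
    {
        var total = Stopwatch.StartNew();

        var dbTimer = Stopwatch.StartNew();
        string[] results = LoadFromDatabase(); // your existing query code
        dbTimer.Stop();

        total.Stop();
        // Log the numbers (trace, event log, file) rather than returning them.
        Debug.WriteLine(string.Format("DB: {0} ms, total: {1} ms",
            dbTimer.ElapsedMilliseconds, total.ElapsedMilliseconds));

        return results;
    }

    private string[] LoadFromDatabase()
    {
        return new string[0]; // stand-in for the real data access
    }
}

If the server-side numbers come out small, the remaining time is being spent on serialization and the network, which points the investigation at the Compact Framework client instead.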
A starting point is to profile your web service to see where the delay is coming from.
Do you know the CLR Profiler? There are some tools you can use to see what is happening:
http://msdn.microsoft.com/en-us/library/ms998579.aspx
The database connectivity from your service to the DB could be a possible cause of the slowdown. Adding timers should do the trick. If the code isn't too huge, you can look at the coding constructs to come up with an informed guess about where exactly things could be slow, then add the timers there. You will get a fair idea of where things are slowing down.
The two biggest pain points are going to be instantiating the web service reference and transferring all the data over the network. Unless something turns up that points at an obvious blunder, I would look at ways of reducing the size of your XML and ways of better handling your web service reference.
All I know about the Compact Framework is that it is a pain to work in. I've worked on a number of web projects, though, and profiling your server and putting in logging to record the time taken will be helpful. If all the time is being taken after the server responds, however, it won't do much more than prove your server is working quickly.
SoapUI is a fantastic Java application for consuming web services. It has a lot of functionality, including time metrics. I would start with that and see how long it takes to consume the same thing your client would. Failing issues there, start with what I recommended above.

How to keep my ASP.NET app always "alive", and if it's a bad idea, why shouldn't I do it?

I've recently deployed an ASP.NET application to my shiny new VPS and while I'm happy with the general performance increase that a VPS can give over a shared hosting solution, I'm unhappy with the startup time of my application.
My web application takes a fair amount of time to start up when my client first hits it. I'm not running it in debug mode (disabled that in my web.config), and it doesn't have any real work to do on startup - I have no code in my application start event handler, I don't start any extra threads, nothing. The first time my client hits my application it takes a good 15-20 seconds to respond. Subsequent calls take 1-2 seconds, unless I wait a few minutes for my application to shut down. Then it's back to a 15-20 second startup time.
(I'm aware that my timing benchmark is very unscientific, those numbers should just give a feel for the performance on startup of my app).
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
So, after that book-sized preface, here are my questions:
Is my understanding of ASP.NET's compilation incorrect? How does it actually work?
Is there a way I can force IIS to cache my binaries, or keep my application alive indefinitely?
If it's a bad idea to do either of the things in my previous question, why is it a bad idea, and what can I do instead to increase startup performance?
Thanks!
Edit: it appears my question is a slight duplicate of this question (I thought I did a better job of searching for an answer to this on here, haha). I think, however, that my question is more comprehensive, and I'd appreciate if it wasn't closed as a duplicate unless there are better, already-asked questions on here that address this.
IIS also shuts down your web app after a given idle period, depending on its configuration. I'm not as familiar with IIS 7 and where this is configured, so you might want to do a little research on it (starting point?).
Is it bad? Depends on how good your code is. If you're not leaking memory or resources, probably not.
The other solution is to precompile your website. This might be the better option for you. You'll have to check it out and see, however, as it may come with a downside, depending on how you interact with your website.
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
That is correct. Specifically, the assemblies are built as shadow copies (not to be confused with the volume snapshot service / shadow copy feature). This enables you to replace the code in the folder on the fly without affecting existing running sessions. ASP.NET will detect the change, and compile new versions into the target directory (typically Temporary ASP.NET Files). More on that process: Understanding ASP.NET Dynamic Compilation
If it's purely the compilation time, then often the most efficient approach is to hit the website yourself after the recycle. Make a call at regular intervals to ensure that it is you who receives the 15-second delay, not your client.
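A minimal sketch of such a keep-alive pinger, assuming it runs as a console app or scheduled task on the VPS; the URL and interval are assumptions (the interval just needs to be shorter than the application pool's idle timeout, 20 minutes by default):

using System;
using System.Net;
using System.Threading;

class KeepAlive
{
    static void Main()
    {
        while (true)
        {
            try
            {
                using (var client = new WebClient())
                {
                    client.DownloadString("http://www.example.com/"); // your site's URL
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("Ping failed: " + ex.Message);
            }
            Thread.Sleep(TimeSpan.FromMinutes(5)); // shorter than the idle timeout
        }
    }
}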
I would be surprised if that is all compilation time however (depending on hardware) - do you have a lot of static instances of classes? Do they do a lot of work on start-up?
Either with tracing or profiling you could probably quite quickly work out where the start-up time was spent.
As to why keeping a process around is a bad idea, I believe it's due to clear-up. No matter how well we look after our data or how well behaved the GC is, a good clear-up is performed by restarting the process. Things like fragmentation can go away and any other resource issues that build up over time are cleared down. Therefore it is quite a bad idea to keep a server process running indefinitely.
