App Insights availability tests do not take think times into account - azure-application-insights

I am executing web tests in App Insights as availability tests.
The problem is that those web tests contain requests with certain think times.
For the tests I am doing the think time is crucial.
The problem I have is that Application Insights does not seem to take the think time values into account, so I don't see any way to pause between request calls within a web test.
Is there any way to make think times work in App Insights? Is a fix planned soon? Is there any recommendation or workaround?

This question was answered here on MSDN.
The answer provided there:
"At this point - we do not have plans on supporting arbitrary think times. We ourselves, and some customers work around this by calling a controller that can take a parameter on the duration it waits on before responding, from the web test.
Hope this helps."

Related

Performance tip: x% of this request was spent in waiting

When reviewing Application Insights for slow API requests I noticed a message stating: "98.49% of this request was spent in waiting.". I'm finding next to no explanation about this online.
What does this mean? What is it waiting for?
How can I fix it?
Application Insights collects performance details for the different operations in your application. By identifying those operations with the longest duration, you can diagnose potential problems or best target your ongoing development to improve the overall performance of the application.
The Performance Tip at the top of the screen supports the assessment that the excessive duration is due to waiting. Click the waiting link for documentation on interpreting the different types of events.
These are all indications of slow server operations.
You can read more about it here. Also, look for the event that is causing the waiting time and address it accordingly.
Let me know if you need any help fixing the performance issue.
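
As an illustration (not from the answer above), a request like the following spends almost all of its wall-clock time awaiting an external dependency rather than executing code, which is exactly the kind of operation Application Insights reports as time spent waiting; the URL and types are made up:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class ReportController : ApiController
{
    private static readonly HttpClient Http = new HttpClient();

    [HttpGet]
    public async Task<IHttpActionResult> Get()
    {
        // Nearly the entire request duration is spent waiting on the downstream
        // service; Application Insights attributes it to the dependency call.
        var payload = await Http.GetStringAsync("https://downstream.example.com/api/data");
        return Ok(payload);
    }
}
```

Checking the dependency telemetry for that operation will usually tell you which call the request was waiting on and whether the fix belongs in your code or in the downstream service.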

Slow Transactions - WebTransaction taking the hit. What does this mean?

Trying to work out why some of my application servers have crept up to over 1 s response times, using New Relic. We're using WebApi 2.0 and MVC5.
As you can see below the bulk of the time is spent under 'WebTransaction'. The throughput figures aren't particularly high - what could be causing this, and what are the steps I can take to reduce it down?
Thanks
EDIT I added transactional tracing to this function to get some further analysis - see below:
Over 1 second waiting in System.Web.HttpApplication.BeginRequest().
Any insight into this would be appreciated.
Ok - I have now solved the issue.
Cause
One of my logging handlers, which syncs its data to cloud storage, was initializing every time it was instantiated, which also involved a call to Azure Table Storage. As it was passed into the controller in question, every call to the API triggered this instantiation.
It was a blocking call, so it added ~1 s to every call. Once I configured this initialization to be server-lifecycle-wide, that overhead went away.
Observations
As the blocking call was made at the time the controller was built (due to Unity resolving the dependencies at that point), New Relic reported this as
System.Web.HttpApplication.BeginRequest()
Although I would love to see this a little more granular, as we can see from the transactional trace above it was in fact the 7 calls to table storage (still not quite sure why it was 7) that led me down this path.
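
For reference, the lifetime change amounts to something like the following; the type names are illustrative stand-ins, not the actual handler from this project:

```csharp
using Microsoft.Practices.Unity;

// Illustrative stand-ins for the logging handler described above.
public interface ICloudLogHandler { void Write(string message); }

public class CloudLogHandler : ICloudLogHandler
{
    public CloudLogHandler()
    {
        // Expensive one-time setup (e.g. the blocking call to table storage) goes here.
    }

    public void Write(string message) { /* sync the entry to cloud storage */ }
}

public static class UnityConfig
{
    public static IUnityContainer BuildContainer()
    {
        var container = new UnityContainer();

        // ContainerControlledLifetimeManager makes the handler a singleton for the
        // life of the container, so the costly initialization runs once per app
        // lifetime instead of on every controller resolution.
        container.RegisterType<ICloudLogHandler, CloudLogHandler>(
            new ContainerControlledLifetimeManager());
        return container;
    }
}
```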
Nice tool - my new relic subscription is starting to pay for itself.
It appears that the bulk of the time is being spent in Account.NewSession, but it is difficult to say without drilling down into your data. If you need more insight into a block of code, you may want to consider adding Custom Instrumentation.
If you would like us to investigate this in more depth, please reach out to us at support.newrelic.com where we will have your account information on hand.
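
If you do go the custom instrumentation route, the .NET agent's API lets you mark methods so they show up as their own segments in transaction traces; a rough sketch, assuming the NewRelic.Api.Agent package (check the current agent docs for exact usage):

```csharp
using NewRelic.Api.Agent;

public class AccountService
{
    // [Trace] asks the New Relic agent to time this method as its own segment
    // inside the surrounding web transaction, so blocks like Account.NewSession
    // are broken out individually instead of being lumped into the parent.
    [Trace]
    public void NewSession(string accountId)
    {
        // ... the code you want to see separately in the trace ...
    }
}
```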

Determining what is putting pressure on IIS

I have a dedicated server running both IIS 7.5 and SQL Server 2010. Server CPU load is often near 100%. SQL Server does not take too much, but the w3wp process is consuming a significant amount of CPU (often 70%+).
I'd like to find out, what is causing this pressure:
* Too many requests of static files (a CDN could be added)
* Too many ajax requests (I am thinking about comet/web sockets anyways)
* Single asp.net pages consuming too much processing power (should be easy to optimize)
Where would you start looking to find out what to optimize first?
The easiest possible way is to profile the app in production. Not sure if that is possible in your case. Some options:
* Look at the IIS logs and the duration (time-taken) of each request. Long-running requests are likely to be putting load on the system (a rough sketch of scanning the logs this way follows below).
* Remote debug w3wp with Visual Studio and pause the debugger 10 times to see where it stops most often. That is the hot spot.
* Use XPerf or PerfView to capture (managed) stacks. This has almost no impact on production performance.
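
For the first option, here is a rough sketch of scanning an IIS W3C log for slow requests; the log path and threshold are illustrative, and it assumes the time-taken field is enabled in the log configuration:

```csharp
using System;
using System.IO;

// Flag any logged request that took longer than one second.
class SlowRequestScan
{
    static void Main()
    {
        var logPath = @"C:\inetpub\logs\LogFiles\W3SVC1\u_ex120101.log";  // adjust to your site
        string[] fields = null;

        foreach (var line in File.ReadLines(logPath))
        {
            if (line.StartsWith("#Fields:"))
            {
                // The field list tells us which column holds which value.
                fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                continue;
            }
            if (fields == null || line.StartsWith("#")) continue;

            var values = line.Split(' ');
            var uri = values[Array.IndexOf(fields, "cs-uri-stem")];
            var timeTakenMs = int.Parse(values[Array.IndexOf(fields, "time-taken")]);

            if (timeTakenMs > 1000)
                Console.WriteLine($"{timeTakenMs,8} ms  {uri}");
        }
    }
}
```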
A good starting point would be to fire up the development tools (F12 in IE / Chrome) and look at the timings under the network tab. That will show you a waterfall-style diagram for how the page has loaded and should help you identify any particularly slow-loading static files which might be sensibly moved off to a cdn, any unnecessary requests being made, how much time is being spent getting the actual page itself, etc.
After that, profile the application with a performance profiler. A good profiler like ANTS Performance Profiler will let you look at things like execution time / hit counts for different methods, as well as what database queries are being run and how long they’re taking. A new version of ANTS (currently in EAP) will also group that activity by http request so you can see if specific pages need optimisation or are being hit too many times.
You'd also do well to check that caching is working as you intend it so that users aren’t unnecessarily re-requesting pages.
There's also a nice article on ASP.NET performance which you might want to read at http://aspalliance.com/1533_ASPNET_Performance_Tips.7.
Disclaimer: I work for Red Gate which makes ANTS.
I found an easy way to see what's going on on the server.
Nevertheless, the professional way is probably to go and use a profiling tool.
What did I do?
In the IIS Manager console you can get a list of all current worker processes, and if you choose one you can see which requests it is currently handling. That way I was able to see that it was handling 100 requests in parallel, 70 of which traced back to the same AJAX call.
The immediate solution was to reduce the frequency of that call (from every 10 seconds to every 30 seconds). The next step will be to further optimize the call on the server side, since I have other AJAX calls with the same frequency (every 10 seconds) that almost never show up in the active requests list because they are so fast.
Probably the easiest way to figure it out would be to install New Relic on the server. The trial lasts 30 days I think so it should give you enough time to get to the bottom of this. It'll show you long-running SQL queries, .NET methods, as well as just about everything else you can think of. It makes it very easy to identify bottlenecks.
By the way, I suggested New Relic because it sounds like your problem is in a production environment. New Relic isn't an incredibly detailed profiler. It gathers enough information to be helpful, but not so much as to slow down the server. That makes it well suited to this purpose.
If, however, you could reproduce the problem in a development environment you might try something like the free Eqatec profiler.

Can't find page request per second statistics for IIS/asp.net?

Does anyone have any literature regarding page requests per second for asp.net (preferably MVC) applications?
I realize it is application specific, but I'm just looking to read up on case studies, experiences, etc.
Is 1K per second possible?
It could be anywhere from less than one to hundreds of thousands per second. Hope that helps.
If you need more specific answers, try to include something (anything) about the setup or type of app: one machine or a farm of 10,000 machines, a simple app serving mostly static data, a scientific data-processing app, and so on.
I was looking for something similar when I saw this question, and here's a useful blog post I came across at the same time: Golden Rule of Web Caching, which has some stats that may be relevant for you.
Admittedly, your first request will include the latency of hitting the database, but if you're caching correctly afterwards, you'll get something fairly similar to Gojko's results.
So, in answer, yes: 1k hits per second is definitely achievable.
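
As an illustration of the "caching correctly" point, ASP.NET MVC output caching lets repeat requests skip the expensive work entirely; the controller, duration, and data access below are made up:

```csharp
using System.Web.Mvc;

public class ProductsController : Controller
{
    // Cache the rendered result per id for 60 seconds, so only the first request
    // in each window pays the database cost.
    [OutputCache(Duration = 60, VaryByParam = "id")]
    public ActionResult Details(int id)
    {
        var model = LoadProductFromDatabase(id);
        return View(model);
    }

    private object LoadProductFromDatabase(int id)
    {
        // Stand-in for the expensive data access the first request pays for.
        return new { Id = id };
    }
}
```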
Another thing I've been trying is WCAT (Web Capacity Analysis Tool) for testing application performance. It is not difficult to get 1k+ pages per second from a modest IIS setup if you focus on nice, light pages.

IIS and Threads

We are starting to write more and more code for an ASP.NET web application that uses new threads to complete long-running tasks. I can find no solid documentation that gives any useful guidance on the limitations or restrictions of using threads within IIS (6). Any advice to this end would be appreciated - specifically the following:
What (if any) is the max number of threads
Is there a recommended max number
Are there any pitfalls of using threads within an ASP.Net IIS web application?
Thanks for any advice
I assume you have already looked into Asynchronous ASP.NET page processing?
Improving .NET Application Performance and Scalability
http://msdn.microsoft.com/en-us/library/ms998530.aspx
10 Tips for Writing High-Performance Web Applications
http://msdn.microsoft.com/en-us/magazine/cc163854.aspx
I can find no solid documentation that gives any useful guidance on the limitations or restrictions of using threads within IIS (6).
Mainly because this is a bad idea. Long-running processes should be converted into Windows services that either run continuously and periodically check the database (or some other queue) for work to do, or that can be woken up by your ASP.NET app.
I have frequently done the same thing myself. What I found was that there is a maximum based on a number of threads per CPU; these can be adjusted and fine-tuned in the web.config and machine.config files. This post has a reasonable explanation of this.
The recommended maximum would be the default setting, at least according to the documentation I read from Microsoft on this topic some time ago.
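
For reference, these are the kinds of per-CPU settings being described; the values below are purely illustrative, and leaving autoConfig="true" (the default) is what the Microsoft guidance generally recommends:

```xml
<!-- machine.config, inside <system.web> (IIS 6 / classic mode); values are per CPU -->
<processModel
    autoConfig="false"
    maxWorkerThreads="100"
    maxIoThreads="100"
    minWorkerThreads="1"
    minIoThreads="1" />

<!-- web.config, inside <system.web>: keep some threads free for incoming requests -->
<httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
```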
The biggest hurdle you will need to get over is how to report progress or results back to the user. I typically use a polling mechanism from the client that calls back to a page which checks the session state for progress; the session state is, of course, being updated from the background thread. If you want to see this approach working in real life, see the House of Travel website and do a search for flights.
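
A rough sketch of that polling pattern; it uses a static dictionary keyed by a job id rather than session state purely to keep the example self-contained, and all names are made up:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Mvc;

public class LongTaskController : Controller
{
    // Shared between the background work and the polling action.
    private static readonly ConcurrentDictionary<Guid, int> Progress =
        new ConcurrentDictionary<Guid, int>();

    [HttpPost]
    public ActionResult Start()
    {
        var jobId = Guid.NewGuid();
        Progress[jobId] = 0;

        // Kick off the long-running work; the request returns immediately.
        // Note: work started this way dies if the app pool recycles, which is
        // exactly the pitfall the other answers warn about.
        Task.Run(async () =>
        {
            for (var step = 1; step <= 10; step++)
            {
                await Task.Delay(1000);       // stand-in for a chunk of real work
                Progress[jobId] = step * 10;  // percent complete
            }
        });

        return Json(new { jobId });
    }

    // The client polls this action every few seconds to update a progress bar.
    [HttpGet]
    public ActionResult Status(Guid jobId)
    {
        Progress.TryGetValue(jobId, out var percent);
        return Json(new { percent }, JsonRequestBehavior.AllowGet);
    }
}
```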
Was going to make this a comment, but I realized it was more relevant than I thought. Eric Lippert has heard this set of questions before, and states that it is unanswerable.
So, in short, don't even go there.
Come up with a design that uses a small number of threads and tune that.
Make sure that you only use threads when you're going to benefit. If your long-running code is CPU-intensive, then you won't actually benefit from making the call asynchronous (in fact, performance will decrease as there is an overhead).
Use threads for I/O operations or calling Web Services.
Each application is different. Simply setting the ThreadPool to max isn't the answer, or it would already be set at this level!
The higher you set the ThreadPool limits, the more you'll saturate the CPU, so if you have CPU-intensive code this will just compound the problem even more.
Of course, you could off-load these CPU-intensive calls onto another machine.
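
For the I/O case, the win comes from not holding a request thread while the external call is in flight; a minimal sketch (the URL and controller are illustrative):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class QuoteController : ApiController
{
    private static readonly HttpClient Http = new HttpClient();

    // While the web service call is in flight this action holds no thread at all,
    // so the thread pool stays free to serve other requests. CPU-bound work gets
    // no such benefit: it still needs a thread for the whole duration.
    public async Task<IHttpActionResult> Get()
    {
        var body = await Http.GetStringAsync("https://quotes.example.com/api/today");
        return Ok(body);
    }
}
```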
http://msdn.microsoft.com/en-gb/magazine/cc163327.aspx
http://msdn.microsoft.com/en-us/magazine/cc163725.aspx
