Intermittent delays in System.Web.HttpApplication.BeginRequest(), not SessionState related - asp.net

We have two web apps (Azure web roles) that both suffer occasional long delays (40 to 60 seconds) during System.Web.HttpApplication.BeginRequest. We know this because we use New Relic to monitor the apps. The usual culprit is a thread-agility issue caused by ASP.NET's Session State locking mechanism, but we don't use ASP.NET Session State, and on one of the sites we don't use sessions at all.
One app is much more complex than the other and suffers more delays, but I'll use the simple app in this question to hopefully narrow down the root cause.
The simple web app is a set of ServiceStack-based web services. It does not use sessions. It acts purely as an intermediary to a WCF-based service layer, channeling requests to the WCF services and mapping the responses onto views to send back to the user agent. The servers don't even break a sweat at the loads they are running (max 2.5% CPU).
So, what is the likely cause please?
My best guess is that it's a thread-agility issue, since the request appears to be waiting on something, which suggests a lock somewhere. But what is it waiting for, if not Session State? Could New Relic or ServiceStack be causing the lock?
Or is New Relic's reporting simply wrong and there is no problem? That seems unlikely: New Relic correctly reported these delays back when we did use ASP.NET Session State and were seeing far more of them.

New Relic doesn't have instrumentation specific to ServiceStack, and its WCF instrumentation is very basic without custom extensions, so without more information it's difficult to offer advice. Thread agility is still a prime suspect, and I'd recommend investigating that route first.
It is possible New Relic is attributing the time to a method it shouldn't. I would start by opening a ticket with New Relic support and including everything you have: agent logs, IIS/ASP.NET configuration, details of any custom handlers, and permalinks to your New Relic graphs.
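In the meantime, one way to confirm the delay independently of New Relic's attribution is a small diagnostic IHttpModule that timestamps each request. This is only a sketch; the module name and the 10-second threshold are ours, not anything New Relic provides:

```csharp
using System;
using System.Diagnostics;
using System.Web;

// Logs any request whose BeginRequest-to-EndRequest span exceeds a threshold,
// so slow requests show up in the trace output regardless of how the
// profiler attributes the time.
public class SlowRequestLogModule : IHttpModule
{
    static readonly TimeSpan Threshold = TimeSpan.FromSeconds(10);

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (s, e) =>
            app.Context.Items["slowlog-stopwatch"] = Stopwatch.StartNew();

        app.EndRequest += (s, e) =>
        {
            var sw = app.Context.Items["slowlog-stopwatch"] as Stopwatch;
            if (sw != null && sw.Elapsed > Threshold)
                Trace.WriteLine(string.Format("Slow request: {0} took {1}",
                    app.Context.Request.RawUrl, sw.Elapsed));
        };
    }

    public void Dispose() { }
}
```

Register it under <system.webServer>/<modules> in web.config. If slow requests log large elapsed times here too, the delay is real and not a reporting artifact.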

Related

Responsibility of the HTTP server vs. responsibility of the web app it hosts

I'm evaluating various hosting options for an ASP.NET Core application.
In ASP.NET Core's new programming model you process a request with a set of middleware components (a mixture of the older IHttpModule & IHttpHandler concepts).
You can have a middleware responsible for authentication, for serving static files, or for compressing the response before sending it (just to name a few).
Here comes the confusion.
Where should the boundary of responsibility between the server and the app be drawn?
Which side should be responsible for compressing the response? With IIS this was handled by the server and configured in web.config. Kestrel doesn't provide this functionality AFAIK, so you need to implement custom middleware in the app to handle it. Which approach is more appropriate?
What about authentication? IIS provides settings for authentication (anonymous, impersonation, forms auth). On the other hand, in ASP.NET Core we can also write app middleware to handle this for us.
OK, SSL is handled by the server, because it sits lower in the protocol stack and the app operates on HTTP(S) only.
What responsibilities should server have? What responsibilities should an app have?
The server is responsible for implementing the base HTTP protocol, managing connections, etc. It may also choose to offer other features (e.g. Windows auth), but we recommend against that unless it can provide a distinct advantage over a middleware implementation. E.g. Windows auth could be implemented in middleware, but it would be much more difficult due to some of the connection-management constraints. Compression could be implemented in middleware just as easily as in the server.
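To make that concrete, here is a minimal sketch of doing compression in app middleware rather than in the server, using the stock Microsoft.AspNetCore.ResponseCompression package (the inline "hello" endpoint is just a placeholder):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers the default compression providers (gzip among them).
        services.AddResponseCompression();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Must run before any middleware that writes the response body.
        app.UseResponseCompression();

        app.Run(async context =>
            await context.Response.WriteAsync("hello, compressed world"));
    }
}
```

Because it is ordinary middleware, it is unit-testable and behaves the same under Kestrel, which has no compression of its own.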
As stated on Wikipedia:
"The primary function of a web server is to store, process and deliver web pages to clients"
The thing is that all the famous HTTP servers (nginx, Apache, IIS, ...) come with lots of modules that can handle many different tasks, including the ones you mentioned in your question (authentication, compression, ...).
It's quite likely that the more modules you add, the slower your HTTP server will be. IIS, for instance, is by far not known as the fastest HTTP server around, but if you remove all the modules and use it just for serving resources, it becomes really fast, because that is what it was built for back in the day!
The same question of responsibility arises with every kind of software.
Think about databases, whose main role is to store data. RDBMSs like Oracle or SQL Server are pretty good at it. But every new version also ships new functionality that has nothing to do with storing data. And people use it! ;-)
How many times have people used their DB as a search engine? I've seen people sending mail with SQL Server! But the worst was some guys trying to call web services from within stored procedures ;-)
It's always tempting to have one tool do everything, but keep in mind that no tool was built for every purpose. I'd rather use a bunch of lightweight tools, each with a single responsibility that it handles correctly.
Now, back to your question: I think making use of middleware is a good approach. That way you control the entire pipeline and know exactly what your request has been through. Middleware is also testable! Getting rid of all the unnecessary modules will definitely leave you with a more lightweight HTTP server.
The classic "it depends" answer is also acceptable: if you run some tests and find that the gzip compression module is 10x faster than the middleware, go with the module! Don't be dogmatic either!

How to Design a Database Monitoring Application

I'm designing a database monitoring application. Basically, the database will be hosted in the cloud, and record-level access to it will be provided via custom-written clients for Windows, iOS, Android, etc. The basic scenario can be implemented via web services (ASP.NET Web API). For example, the client makes a GET request to the web service to fetch an entry. However, one of the requirements is that the client should automatically refresh its UI if another user (using a different instance of the client) updates the same record, and the refresh needs to happen within a second of the record being updated, so that the info is always up to date.
Polling could be an option, but the active clients could number in the hundreds of thousands, so I'm looking for a more robust solution that is also lightweight on the server. I'm versed in .NET and C++/Windows, and I could roll out a complete solution in C++/Windows using I/O completion ports, but that feels like overkill and would take too much development time. I looked into ASP.NET Web API, but its limitation is that it can't push notifications out to clients. Are there any frameworks/technologies in the Windows ecosystem that can address this scenario and scale easily? Any good options outside the Windows ecosystem, e.g. Node.js?
You did not specify which database you can use, so if you are able to use MS SQL Server, you may want to look up the SqlDependency feature. If configured and used correctly, you will be notified of any changes in the database.
Pair this with SignalR or any real-time front-end framework of your choice and you'll have real-time updates as you described.
One catch, though: SqlDependency only tells you that something changed. You are responsible for tracking down which record it was. That adds an extra layer of difficulty, but it's still much better than polling.
You may want to search through the sqldependency tag here on SO to get from here to where you want your app to be.
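A minimal sketch of the SqlDependency side, assuming SQL Server with Service Broker enabled and a hypothetical dbo.Orders table (notification queries must use a two-part table name and an explicit column list):

```csharp
using System;
using System.Data.SqlClient;

public class OrderChangeListener
{
    readonly string connStr;

    public OrderChangeListener(string connStr)
    {
        this.connStr = connStr;
        SqlDependency.Start(connStr); // once per app domain
        Subscribe();
    }

    void Subscribe()
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT Id, Status FROM dbo.Orders", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once per change; e.Info says insert/update/delete,
                // but not which row, so re-query and then re-subscribe.
                Subscribe();
                // hubContext.Clients.All.refreshRecords(); // hypothetical SignalR push
            };

            conn.Open();
            cmd.ExecuteReader().Dispose(); // executing the command registers the subscription
        }
    }
}
```

Note that the subscription is one-shot: it has to be re-created inside OnChange, and as said above, you still have to work out for yourself which record changed.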
My first thought was a web service call that "stays alive", or the HTML5 protocol called WebSockets. You can maintain lots of connections that way, but hundreds of thousands seems too large. So the server needs a way to contact the clients over stateless connections: build a web service into the client that the server can call back. That may run into firewall issues, though.
If firewalls are not an issue, you may not need a web service in the client; you can instead implement a server socket on the client.
For mobile clients, if implementing a server socket is not a possibility, use push notifications. Perhaps look at https://stackoverflow.com/a/6676586/4350148 for a similar issue.
Finally you may want to consider a content delivery network.
One last point: hopefully you don't need to contact all 100,000 users within 1 second. I am assuming that with so many users you have quite a few servers.
Take a look at "Maximum concurrent Socket.IO connections" regarding the maximum number of open WebSocket connections.
Also consider whether your estimate of on the order of 100,000 simultaneous users is accurate.

Can an ASP.NET application handle NServiceBus events?

Most if not all of the NSB examples for ASP.NET (or MVC) have the web application sending a message using Bus.Send and possibly registering for a simple callback, which is essentially how I'm using it in my application.
What I'm wondering is if it's possible and/or makes any sense to handle messages in the same ASP.NET application.
The main reason I'm asking is caching. The process might go something like this:
1. A user initiates a request from the web app.
2. The web app sends a message to a standalone app server and logs the change in a local database.
3. On future page requests from the same user, the web app is aware of the change and lists it in a "pending" status.
4. A bunch of stuff happens on the back-end, and eventually the request gets approved or rejected. An event is published referencing the original request.
5. At this point, the web app should start displaying the most recent information.
Now, in a real web app, it's almost certain that this pending request is going to be cached, quite possibly for a long time, because otherwise the app has to query the database for pending changes every time the user asks for the current info.
So when the request finally completes on the back-end - which might take a minute or a day - the web app needs, at a minimum, to invalidate this cache entry and do another DB lookup.
Now I realize that this can be managed with SqlDependency objects and so on, but let's assume that they aren't available - perhaps it's not a SQL Server back-end or perhaps the current-info query goes to a web service, whatever. The question is, how does the web app become aware of the change in status?
If it is possible to handle NServiceBus messages in an ASP.NET application, what is the context of the handler? In other words, the IoC container is going to have to inject a bunch of dependencies, but what is their scope? Does this all execute in the context of an HTTP request? Or does everything need to be static/singleton for the message handler?
Is there a better/recommended approach to this type of problem?
I've wondered the same thing myself - what's an appropriate level of coupling for a web app with the NServiceBus infrastructure? In my domain, I have a similar problem to solve involving the use of SignalR in place of a cache. Like you, I've not found a lot of documentation about this particular pattern. However, I think it's possible to reason through some of the implications of following it, then decide if it makes sense in your environment.
In short, I would say that I believe it is entirely possible to have a web application subscribe to NServiceBus events. I don't think there would be any technical roadblocks, though I have to confess I have not actually tried it - if you have the time, by all means give it a shot. I just get the strong feeling that if one starts needing to do this, then there is probably a better overall design waiting to be discovered. Here's why I think this is so:
A relevant question to ask relates to your cache implementation. If it's a distributed or centralized model (think SQL, MongoDB, Memcached, etc.), then the approach @Adam Fyles suggests sounds like a good idea. You wouldn't need to notify every web application: updating the cache can be done by a single NServiceBus endpoint that's not part of your web application. In other words, every instance of your web application and the "cache-update" endpoint would access the same shared cache. If your cache is in-process, however, like Microsoft's web cache, then you are left with a much trickier problem to solve, unless you can lean on eventual consistency as was suggested.
If your web app subscribes to a particular NServiceBus event, then it becomes necessary for you to have a unique input queue for each instance of your web app. Since it's best practice to consider scale-out of your web app using a load balancer, that means that you could end up with N queues and at least N subscriptions, which is more to worry about than a constant number of subscriptions. Again, not a technical roadblock, just something that would make me raise an eyebrow.
The David Boike article that was linked raises an interesting point about app pools and how their lifetimes can be uncertain. Also, if you have multiple worker processes running simultaneously for the same application on a server (a common scenario), they will all be trying to read from the same message queue, and there's no good way to determine which one will actually handle the message. More often than not, that will matter. Sending commands, in contrast, does not require an input queue, according to this post by Udi Dahan. This is why I think one-way commands sent by web apps are much more commonly seen in practice.
There's a lot to be said for the Single Responsibility Principle here. In general, I would say that if you can delegate the "expertise" of sending and receiving messages to an NServiceBus Host as much as possible, your overall architecture will be cleaner and more manageable. Through experience, I've found that if I treat my web farm as a single entity, i.e. strip away all acknowledgement of individual web server identity, that I tend to have less to worry about. Having each web server be an endpoint on the bus kind of breaks that notion, because now "which server" comes up again in the form of message queues.
Does this help clarify things?
An NServiceBus endpoint can be created to subscribe to the published event and update the cache. The event shouldn't be published until the actual update has been made, so you don't get out of sync. The web app would continue to pull data from the cache on the next request, or you could build in some kind of delay.
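If it helps, here is a sketch of what that endpoint's handler could look like (NServiceBus v6-style API; RequestApproved, RequestId, and ICache are hypothetical names used for illustration):

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical event published by the back-end once the update is committed.
public class RequestApproved : IEvent
{
    public Guid RequestId { get; set; }
}

// Minimal cache abstraction for the sketch.
public interface ICache
{
    Task InvalidateAsync(string key);
}

// Runs in the standalone cache-update endpoint, not in the web app itself,
// so the web farm needs no input queues of its own.
public class RequestApprovedHandler : IHandleMessages<RequestApproved>
{
    readonly ICache cache; // injected by the endpoint's container

    public RequestApprovedHandler(ICache cache)
    {
        this.cache = cache;
    }

    public Task Handle(RequestApproved message, IMessageHandlerContext context)
    {
        // Evict the stale "pending" entry; the web app's next read
        // repopulates the shared cache from the database.
        return cache.InvalidateAsync("request:" + message.RequestId);
    }
}
```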

Is Silverlight more friendly to load balancing than ASP.NET?

I was discussing load balancing with a colleague over lunch. I admit that I know very little about this topic. We were discussing the various ways of maintaining session state in an ASP.NET application, none of which suited the high-performance load balancing he was looking for.
"What about Silverlight?" says I. As far as I know it is stateless: you've got the app running in the browser, and you've got services on the server that feed/process data.
Does Silverlight totally negate the need for Session state management? Is it more friendly to load-balancing? Is it something in between?
I would say that Silverlight is likely to be a little more load-balancer friendly than ASP.NET. You have much more sophisticated mechanisms for maintaining state (such as isolated local storage), and pretty much, you only need to talk to the server when (a) you initially download the application, and then (b) when you make a web service call to retrieve or update data. It's analogous in this sense to an Ajax application written entirely in C#.
To put it another way, so long as either (a) your server-side persistence layer knows who your client is, or (b) you pass in all relevant data on each WCF call, it doesn't matter which web server instance the call goes to. You don't have to muck about with firewall-level persistence to make sure your HTTP call goes back to the right web server.
I'd say it depends on your application. If it's a banking application, then yes, I want something timing out after 5 minutes and asking for my password again. If it's Facebook, then not so much.
Silverlight depends on XMLHttpRequest like any other Ajax implementation and is therefore capable of maintaining a session, forms authentication, roles, profiles, etc.
The benefit you're getting is eliminating virtually all of the page-serving traffic: JSON requests are negligible compared to serving full pages. Even the .xap can be cached on the client.
I would say you are getting the best of both worlds in regards to your question.

Retrieving active session information from IIS 7

I'm running several ASP.NET web sites with InProc session state, and I would like to retrieve the number of active sessions per web site, and ideally some details about each session (e.g. client connection details).
My end goal is to be able to see who is connected to the web site so that I can notify them when deploying an update.
Is there any way to do this in .NET without resorting to SQL session state? I looked at Microsoft.Web.Administration but couldn't find a way to do it. And the "Sessions Active" performance counter in perfmon just gives the total sessions for the whole server (as well as not giving any metadata about the sessions).
EDIT: In my tests with performance counters I tested with total Sessions Active when I should have tested with the instance of Sessions Active for my web site. This gets me a little closer but I'd still like to actually retrieve the session information for the web site if possible.
Session is a concept, not an actuality. You can use the ASP.NET Global.asax pseudo-events for session start/end (Session_Start and Session_End) to track this concept, but it will still only be an approximation. I think your best bet is to flip on your "maintenance in progress" flag and put something in the request pipeline that handles it for all incoming requests.
I'm not sure exactly what you'd do with this information, but I think you're going to be rolling some custom code here.
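For what it's worth, the approximation route looks roughly like this in Global.asax.cs. This sketch assumes InProc session state (Session_End only fires in InProc mode), and the count drifts because Session_End fires on timeout or Abandon, not when the user actually leaves:

```csharp
using System;
using System.Threading;
using System.Web;

public class Global : HttpApplication
{
    static int activeSessions;

    protected void Session_Start(object sender, EventArgs e)
    {
        Interlocked.Increment(ref activeSessions);
    }

    // Only raised for InProc session state, and only when the session
    // times out or is abandoned - hence "approximation".
    protected void Session_End(object sender, EventArgs e)
    {
        Interlocked.Decrement(ref activeSessions);
    }

    public static int ActiveSessions
    {
        get { return Interlocked.CompareExchange(ref activeSessions, 0, 0); }
    }
}
```

This counts sessions per application (per app domain), which matches the per-web-site number you're after, but it still gives no per-client metadata.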
