Can I intercept after a response has been sent to a client in an IHttpModule - asp.net

I have a custom IHttpModule that is used to log all HTTP requests and responses, and this is currently working very well, but I'd love to extend it so I can determine how long a response actually takes.
The response is logged in the HttpApplication.EndRequest event, but this event fires before the response is actually sent to the web client. While this allows me to determine how long it took for the server to process the request, I'd also love to be able to time how long it actually took for the client to receive the response.
Is there an event, or some other mechanism, which will allow me to intercept after the client has finished receiving the response?

Measuring that would require client-side code. But it's not entirely clear what you want to measure. From smallest to largest scope, the timings could be:
Time inside the server application - measured by the code you already have. Your code can take its start time either from DateTime.Now when it begins, or from the HTTP objects. The first call to a site would show a big difference between those two start times; otherwise they should be almost identical.
Time on the server website - I believe this is already measured by most web servers, such as IIS.
Time on the server machine - I believe this is what "mo" is referring to. You would need some kind of external monitoring on the server machine, such as Wireshark.
Time on the client machine - again, you would need some kind of external monitoring on the client machine. This would be the hardest to get, but I think it is really what you are asking for.
Time in the client application - this is what you can measure with JavaScript.
Unless this is the "first call" (see Slow first page load on asp.net site or ASP.NET application on IIS7 - very slow startup after iisreset), I believe all of these times will be so close that you can use a "good enough" approach instead.
If you must measure this particular call's client time, then you are stuck in a bad spot. But if you just want better numbers, continue to measure 1 (application time) with what you already have, and make sure to also record the size of the request and the response.
Then establish a baseline for adjusting that time by testing against various target client machines:
Measure ping times from the client to your server
Measure transfer times of moderately large content - both upload and download
Finagle the numbers to get your average adjustment
You should end up with a formula like:
[AdjustedTime] = [PingTime] + [ServerTime]
               + ([RequestSize] / [UploadSpeed])
               + ([ResponseSize] / [DownloadSpeed]);
This would be the expected client response time.
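For the server-side number (timing 1), here's a minimal sketch of how that measurement could look in an IHttpModule, using a Stopwatch stashed in HttpContext.Items; the module name and item key are made up for illustration:

using System.Diagnostics;
using System.Web;

// Minimal sketch: times each request from BeginRequest to EndRequest.
// "TimingModule" and the "RequestTimer" key are hypothetical names.
public class TimingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            ctx.Items["RequestTimer"] = Stopwatch.StartNew();
        };

        app.EndRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            var timer = (Stopwatch)ctx.Items["RequestTimer"];
            timer.Stop();
            // Log timer.ElapsedMilliseconds together with the request
            // and response sizes, as suggested above.
        };
    }

    public void Dispose() { }
}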

Yes, you could handle HttpApplication.EndRequest.

Another way would be to hook into your web server (IIS) and trace those events - for example, with a Windows service that writes response times to a database - if you want to analyse how long a client needs to receive your content.
But I think IIS is already able to do this.
It depends a little on what you want to do.
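For example, IIS's W3C logs can record a time-taken field per request (which, as far as I know, generally includes the time spent sending the response to the client). A minimal sketch of pulling it out, assuming the default layout where time-taken is the last enabled field, and a hypothetical log path:

using System;
using System.IO;

class LogScan
{
    static void Main()
    {
        // Hypothetical log path; time-taken (in milliseconds) is assumed
        // to be the last field, the default W3C layout when it is enabled.
        foreach (var line in File.ReadLines(@"C:\inetpub\logs\LogFiles\W3SVC1\u_ex250101.log"))
        {
            if (line.StartsWith("#")) continue;     // skip directive lines
            var fields = line.Split(' ');
            Console.WriteLine("time-taken: {0} ms", fields[fields.Length - 1]);
        }
    }
}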

Related

SignalR 'long polling' sends multiple requests simultaneously

I have written a simple 'analytics' tracking tool for my site, which has boolean columns such as
Visited_Store
Visited_Homepage
Checkout_Started
MainVideo_Played
MainVideo_Completed
I am also using Google Analytics but wanted a secondary place to monitor activity.
I had been testing my application primarily in Chrome which of course will use web sockets by default. I switched to long polling because I wanted to be able to monitor the requests in Fiddler.
The way the hub works is pretty simple. The SignalR client sends events which set flags (columns) when a particular event has completed. So on invocation it does the following:
Find row for user - or create if non existent
Set flags
Save row
I had no concurrency issues until I switched to long polling, at which point I ran into instant deadlocks.
My client will often send multiple events simultaneously (a separate issue to fix - yes), and when using WebSockets they are nicely queued and executed one by one, so deadlocks are extremely unlikely.
Long polling is a different story: I suddenly found that my hub method was being entered multiple times, trying to create multiple rows, with deadlocks and 'row modified' errors all over the place.
One simple solution is just to lock(lockObj) when making a request, but if I have many clients I'd rather not do that. Another is to catch the deadlock and re-execute the request which right now occurs on just about every page load.
Is there perhaps a way to configure SignalR long polling to not send requests all at once? Or some other way to execute requests in turn (like ASP.NET does when you use SessionState).
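One middle ground between a single global lock and no locking at all would be to serialize work per user rather than per server. A minimal sketch of that idea, with hypothetical TrackEvent/SaveFlags names (the SignalR Hub base class is omitted so the snippet stands alone):

using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class AnalyticsHub /* : Hub */
{
    // One gate per user, so concurrent users don't block each other.
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Gates =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public async Task TrackEvent(string userId, string eventName)
    {
        var gate = Gates.GetOrAdd(userId, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            // Find-or-create the row, set the flag, save - now serialized
            // per user, so simultaneous long-polling requests can't race.
            SaveFlags(userId, eventName);   // hypothetical persistence call
        }
        finally
        {
            gate.Release();
        }
    }

    private void SaveFlags(string userId, string eventName)
    {
        /* EF/SQL work goes here */
    }
}

Note that this only serializes calls within a single server process; a web farm would still need a database-level guard (e.g. an upsert or a unique key) to be safe.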

Something to trace HTTP requests to, and responses from, the server

We're having a problem in production (IIS + ASP.NET Web Forms, a form with a DevExpress callback panel - a kind of substitute for Microsoft's UpdatePanel): sometimes the server won't respond to a callback, and the callback panel just waits for a response forever.
This occurs from time to time without any reproducible scenario, and we have never encountered such a problem in our testing environments.
So, apparently, the cause is something configuration-related. I've added some logging, and here's what I see:
The customer reports this 'timeout' (let's call it that) at, say, 12:00.
I see in the logs that a request was received by the server and successfully processed in ~0.5 sec (i.e. the time from page creation to unload).
For some reason the response did not reach the client.
They say it happens about 8-10 times per working day for each operator. They also say there are no firewalls or other software that may block responses. I'm stuck.
Perhaps there's a tool I can ask them to install on the production environment that will trace and log all HTTP requests and responses from the server, with useful diagnostic information I can inspect later to see where the blocking occurs?
Pleeeease =))

asp.net infinite loop - can this be done?

This question is about limits imposed on me by ASP.NET (like script timeout, etc.).
I have a service running under ASP.NET and I want to create a counterpart service for monitoring.
The main service's data is located at a database.
I was thinking about having the monitor service query the database at 1-second intervals, within a loop started by an HTTP request from the remote client.
The actual serving of this monitoring will be done by a client HTTP request, which will make the script (written in C#) loop; when new data is detected, it will aggregate that data into that one looping request's output buffer, send it, and exit the loop, thus finishing the request.
The client will have to issue a new request in order to keep getting updates.
This is actually exactly like TCP (precisely like Windows IOCP): you request data from the service and wait for it, and when it arrives you fire another request.
My actual question is: have you done this before? How did it go? Am I constrained by some (configurable) limits imposed by IIS/ASP.NET? What are my limits in such a situation, and are there better options that don't complicate things too much?
Note that I do not expect many such monitoring requests at a time, maybe a few dozens.
This means, however, that 10 such concurrent monitoring requests will keep 10 threads busy, and the question is: can that hurt IIS/performance? How will IIS handle 10 busy threads? Will it spawn more? What are the limits? This is just one example of a limit I can think of.
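In outline, the loop described above could look something like this as an IHttpHandler - CheckForNewData is a hypothetical stand-in for the database query, and the 30-second deadline keeps the request well under ASP.NET's executionTimeout (110 seconds by default):

using System;
using System.Threading;
using System.Web;

public class MonitorHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // Give up before ASP.NET's executionTimeout can kill the request.
        DateTime deadline = DateTime.UtcNow.AddSeconds(30);
        while (DateTime.UtcNow < deadline)
        {
            string data = CheckForNewData();     // hypothetical DB query
            if (data != null)
            {
                context.Response.ContentType = "application/json";
                context.Response.Write(data);
                return;                          // finish this request
            }
            Thread.Sleep(1000);                  // the 1-second interval
        }
        context.Response.StatusCode = 204;       // no data: client re-issues
    }

    private string CheckForNewData()
    {
        // Query the database for new monitoring data; null means "nothing yet".
        return null;
    }
}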
I think your main concern in this situation would be timeouts, which are pretty much configurable. But I think it is the wrong solution - you'd be better off with some background service, running constantly/periodically, writing the monitoring data to some data store; your monitoring page would then just return it upon request.
If you want your page to display something only when monitoring data is available, implement it with AJAX - on page load, query the monitoring service; if some monitoring events are available, render them; if not, sleep and query again.
IMO this would be a much better solution than really long-running requests.
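A minimal sketch of that background approach, using an in-process System.Threading.Timer and a hypothetical QueryDatabase helper (a Windows service would work the same way, just outside IIS):

using System;
using System.Threading;

public static class MonitorCache
{
    private static volatile string _latest = "[]";

    // Poll the database once per second in the background; requests then
    // just read the cached value instead of looping themselves.
    private static readonly Timer Poller =
        new Timer(_ => _latest = QueryDatabase(), null,
                  TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));

    public static string Latest { get { return _latest; } }

    private static string QueryDatabase()
    {
        // Query the monitoring tables and serialize the result to JSON.
        return "[]";
    }
}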
I think it won't be a very good idea to monitor a service using ASP.NET, for the following reasons...
What happens when your application pool crashes?
What if you decide to do an IISReset? Which application will come up first... the main app or the monitoring app?
What if the monitoring application hangs due to load?
What if the load is already high on the main service? Wouldn't monitoring it every second increase the load on the primary service, as well as on IIS?
You get the idea...

How to handle long running web service operations?

I have to create a Java EE application which converts large documents into different formats. Each conversion takes between 10 seconds and 2 minutes.
The SOAP requests will be made from a client application which I also have to create.
What's the best way to handle these long-running requests? Clearly the process takes too much time to run without any feedback to the user.
I can think of the following ways to provide some kind of feedback, but I'm not sure if there isn't a better way, perhaps something standardized.
The client performs the request from a thread and the server sends the document in the response, which can take a few minutes. Until then the client shows a "Please wait" message, progress spinner, etc. (This seems to be simple to implement.)
The client sends a "Start conversion" command. The server returns some kind of job ID which the client can use to frequently poll for a status update or the final document. (This seems to be user friendly, because I can display a progress, but also requires the server to be stateful.)
The client sends a "Start conversion" command. The server somehow notifies the client when it is done. (Here I don't even know how to do this)
Are there other approaches? Which one is the best in terms of performance, stability, fault tolerance, user-friendliness, etc.?
Thank you for your answers.
Since this is almost all done server-side, there isn't much a client can do besides poll the server somehow for updates on the status.
#1 is OK, but users get impatient really fast. "A few minutes" is a bit too long for most people. You'd need HTTP Streaming to implement #3, but I think that's overkill.
I would just go with #2.
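In outline, #2 looks like this from the client's side (sketched in C# for brevity; StartConversion, GetProgress and GetResult are hypothetical service operations):

using System;
using System.Threading;

// Hypothetical shape of the conversion service's contract.
public interface IConversionService
{
    string StartConversion(byte[] document);   // returns a job ID immediately
    int GetProgress(string jobId);             // 0-100, or -1 on failure
    byte[] GetResult(string jobId);            // only valid once at 100%
}

public static class ConversionClient
{
    public static byte[] Convert(IConversionService service, byte[] document)
    {
        string jobId = service.StartConversion(document);   // fast call
        int progress;
        while ((progress = service.GetProgress(jobId)) < 100)
        {
            if (progress < 0)
                throw new InvalidOperationException("Conversion failed.");
            Console.WriteLine("Converting... {0}%", progress);
            Thread.Sleep(2000);                             // poll every 2 s
        }
        return service.GetResult(jobId);                    // fetch the document
    }
}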
For option 3, the server should return a unique ID to the client, and using that ID the client then has to ask the server for the result at a later time.
Option 4, for those who want to use WebSockets: your request is answered with a job ID, and you then receive progress updates over the WebSocket.

What response time overhead would you expect for a webservice request compared to a simple file request?

I'm developing an asp.net webservice application to provide json-formatted data to a widget that uses jQuery.ajax to make the request. I've been using the FireBug Net view to check how long the requests for data take.
In my initial prototype I was simply requesting static json data files, which on my dev machine were obviously returned very quickly by IIS - in around 2 to 5ms, even if not present in the browser's cache.
Now that I've connected to the web service, I'm concerned that the data requests are far too slow, as they consistently take around 200ms to return. (This is even after the first request, which is obviously compiling things and taking around 6 whole seconds.) I have removed all database/processing overhead from the web request, so it should take very little time to process, and this is also still on the local dev machine, so there is no network latency. The overhead is no better with a release build, or on a production server.
My question is this:
Is this response time of around 200ms the best I can expect from a .net web service that is simply returning 'Hello World'? If it is possible to do much better, then what on earth might I be doing wrong? If it isn't possible, what would you do instead?
If it's really doing nothing in terms of connecting to a database etc., then you should be able to get a much better response time than 200ms.
If you measure the time at the server side instead of the client side, what do you see? Have you tried using WireShark to see what's happening in the network?
Basically you want to be able to create a timeline as accurately as possible, showing when the client sent the request, when the request hit the server, when your server-side code received the request, when your server-side code finished processing the request, when the server actually sent the response, and when the client actually received the response.
At that point you can work out where the bottleneck is.
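The two ends of that timeline are cheap to capture. Here is a minimal client-side sketch using Stopwatch (the endpoint URL is hypothetical); the same Stopwatch pattern inside the service method gives you the server-side processing time, and the difference between the two is your network plus framework overhead:

using System;
using System.Diagnostics;
using System.Net;

class TimingProbe
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        using (var client = new WebClient())
        {
            // Hypothetical endpoint returning the 'Hello World' response.
            client.DownloadString("http://localhost/MyService.asmx/HelloWorld");
        }
        sw.Stop();
        Console.WriteLine("Round trip: {0} ms", sw.ElapsedMilliseconds);
    }
}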
