I have written a simple 'analytics' tracking tool for my site, which has boolean columns such as:
Visited_Store
Visited_Homepage
Checkout_Started
MainVideo_Played
MainVideo_Completed
I am also using Google Analytics but wanted a secondary place to monitor activity.
I had been testing my application primarily in Chrome, which of course uses WebSockets by default. I switched to long polling because I wanted to be able to monitor the requests in Fiddler.
The way the hub works is pretty simple. The SignalR client sends events that set flags (columns) when a particular event has completed. So on invocation it does the following (a rough sketch in code follows the list):
Find row for user - or create if non existent
Set flags
Save row
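In code, that upsert might look something like this minimal sketch, assuming ASP.NET SignalR 2.x, an authenticated user, and an Entity Framework context; AnalyticsContext, UserActivity and TrackEvent are hypothetical stand-ins for the asker's code:

    using System.Linq;
    using Microsoft.AspNet.SignalR;

    public class AnalyticsHub : Hub
    {
        public void TrackEvent(string eventName)
        {
            using (var db = new AnalyticsContext()) // hypothetical EF context
            {
                // Find row for user - or create if non-existent.
                // Note: this find-or-create is exactly the race the question
                // describes once invocations overlap under long polling.
                var userId = Context.User.Identity.Name;
                var row = db.UserActivities.SingleOrDefault(u => u.UserId == userId);
                if (row == null)
                {
                    row = new UserActivity { UserId = userId };
                    db.UserActivities.Add(row);
                }

                // Set flags
                if (eventName == "Visited_Store") row.Visited_Store = true;
                if (eventName == "MainVideo_Played") row.MainVideo_Played = true;
                // ...remaining flag columns

                // Save row
                db.SaveChanges();
            }
        }
    }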
I had no concurrency issues until I switched to long polling - when I found instant deadlocks.
My client will often send multiple events simultaneously (separate issue to fix - yes) and when using web sockets they are nicely queued and executed one by one. So obviously any deadlocks are going to be extremely unlikely.
Long polling is a different story - I suddenly found that my hub method was being entered multiple times, trying to create multiple rows, and throwing deadlocks and 'row modified' errors all over the place.
One simple solution is just to lock(lockObj) when making a request, but if I have many clients I'd rather not do that. Another is to catch the deadlock and re-execute the request; right now that happens on just about every page load.
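One middle ground between a global lock and per-request retries is to serialize work per user rather than across all clients. A rough sketch, assuming the hypothetical AnalyticsHub above and using SemaphoreSlim so the wait can be asynchronous (the dictionary is never trimmed here, which a production version would need to address):

    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class AnalyticsHub : Hub
    {
        // One gate per user, so concurrent users never block each other.
        private static readonly ConcurrentDictionary<string, SemaphoreSlim> Gates =
            new ConcurrentDictionary<string, SemaphoreSlim>();

        public async Task TrackEvent(string eventName)
        {
            var gate = Gates.GetOrAdd(Context.User.Identity.Name,
                                      _ => new SemaphoreSlim(1, 1));
            await gate.WaitAsync();
            try
            {
                // find-or-create row, set flags, save - as in the earlier sketch
            }
            finally
            {
                gate.Release();
            }
        }
    }

This keeps simultaneous events from one browser strictly ordered while leaving other users unaffected, at the cost of tracking one semaphore per active user.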
Is there perhaps a way to configure SignalR long polling not to send requests all at once? Or some other way to execute requests in turn (like ASP.NET does when you use SessionState)?
Related
I am using SignalR in my web api to provide real-time functionality to my client apps (mobile and web). Everything works ok but there is something that worries me a bit:
The clients get updated when different things happen in the backend. For example, when one of the clients performs a CRUD operation on a resource, the others are notified via SignalR. But what happens when something happens on a client, say the mobile app, and the device's data connection is dropped?
It could happen that another client has performed some action on a resource, and when SignalR broadcasts the message it doesn't reach that client. So that client will have a stale view state.
As I have read, it seems there's no way to know whether a message has been sent and received OK by all the clients. So, besides checking the network state and doing a full reload of the resource list when this happens, is there any way to be sure message synchronization has been accomplished correctly on all the clients?
As you've suggested, ASP.NET Core SignalR places the responsibility on the application for managing message buffering if that's required.
If an eventually consistent view is an issue (because order of operations is important, for example) and the full reload proves to be an expensive operation, you could manage some persistent queue of message events as far back as it makes sense to do so (until a full reload would be preferable) and take a page from message buses and event sourcing, with an onus on the client in a "dumb broker/smart consumer"-style approach.
It's not an exact match for your case, but credit where credit is due, there's a well thought out example of queuing up SignalR events here: https://stackoverflow.com/a/56984518/13374279 You'd have to adapt that some and give a numerical order to the queued events.
The initial state load and any subsequent events could have an aggregate version attached to them. Any time the client receives an event from SignalR, it can compare its currently known state against what was received and determine whether it has missed events, whether from a disconnection or from a delay in the hub connection starting up after the initial fetch. If the client's version is out of date but still within the depth of your queue, you can ask the server to replay the missed events out to that connection to bring the client back into sync; a rough sketch follows.
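For illustration, a minimal server-side sketch of that version check, assuming ASP.NET Core SignalR; VersionedEvent, SyncHub and the buffer management are hypothetical:

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.AspNetCore.SignalR;

    // Hypothetical event envelope with a monotonically increasing version.
    public class VersionedEvent
    {
        public long Version { get; set; }
        public string Payload { get; set; }
    }

    public class SyncHub : Hub
    {
        // Bounded buffer of recent events; eviction of old entries is elided.
        private static readonly ConcurrentQueue<VersionedEvent> Recent =
            new ConcurrentQueue<VersionedEvent>();

        // The client calls this after (re)connecting with the last version it saw.
        public IEnumerable<VersionedEvent> GetEventsSince(long clientVersion)
        {
            var missed = Recent.Where(e => e.Version > clientVersion)
                               .OrderBy(e => e.Version)
                               .ToList();

            // If the client has fallen behind the buffer's depth, the
            // contiguous chain is broken and a full reload is the only
            // safe option.
            if (missed.Count > 0 && missed[0].Version != clientVersion + 1)
                throw new HubException("Too far behind; perform a full reload.");

            return missed;
        }
    }

The client would call GetEventsSince with its local version whenever it reconnects or notices a gap, and fall back to a full reload when the hub reports it is too far behind.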
Some reading into immediate consistency vs eventual consistency may be helpful to come up with a plan. Hope this helps!
I have an application where requests to a controller take a while to process. The controller starts a thread per request and eventually writes some data to a database. I need to limit how many requests can be processed. So let's say our limit is 100: if the controller is already processing 100 requests, the 101st request should get a 503 status until at least one request has completed.
I could use an application-wide static counter to keep count of current processes, but is there a better way to do this?
EDIT:
The reason the controller takes a while to respond is that it calls another API, backed by a large database spanning several TB of geostationary data. Even if I could optimize this in theory, it's not something I have control over. To make matters worse, the third-party API simply times out if I have more than 10 concurrent requests. I am already dropping incoming requests onto a Service Bus queue. I just need a good way, on my API controller, to keep a global count of how many requests are in flight and return 503 whenever it exceeds a set number; a sketch of one way to do that follows.
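For what it's worth, that global count can be had without a hand-rolled counter by using SemaphoreSlim as a slot pool. A minimal sketch, assuming ASP.NET Web API 2; GeoDataController and CallThirdPartyApiAsync are hypothetical:

    using System.Net;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class GeoDataController : ApiController
    {
        // 10 slots, matching the third-party API's concurrency ceiling.
        private static readonly SemaphoreSlim Slots = new SemaphoreSlim(10, 10);

        public async Task<IHttpActionResult> Get(string query)
        {
            // Wait(0) takes a slot only if one is free; otherwise refuse at once.
            if (!Slots.Wait(0))
                return StatusCode(HttpStatusCode.ServiceUnavailable);

            try
            {
                var result = await CallThirdPartyApiAsync(query);
                return Ok(result);
            }
            finally
            {
                Slots.Release();
            }
        }

        // Stand-in for the slow third-party call described in the question.
        private static Task<string> CallThirdPartyApiAsync(string query)
        {
            return Task.FromResult("result for " + query);
        }
    }

Wait(0) makes the refusal immediate rather than queuing, which matches the "return 503 until a slot frees up" requirement.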
The requests to the API controller should not be limited. An idea would be to accept the requests and store the list of work items that need completing (in a database, a queue, etc.).
Then create something outside the web request that processes this work; this is where you can manage how many items are processed at once, using parallel processing/multi-threading (in a Windows service, a Worker Role, Hangfire, etc.). A sketch follows below.
Once an item is processed, you could communicate back to the page via SignalR to fetch the data to display, or to show status.
The benefit of this is that you can always go back to the page or refresh and get some kind of status, without re-running the whole process.
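A minimal sketch of that shape, using a plain in-process worker purely for illustration (in practice this would live in a Windows service, Worker Role, or Hangfire job, as noted above); WorkItem, Process and NotifyClient are hypothetical:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public class WorkItem { public string Payload { get; set; } }

    public class WorkProcessor
    {
        private readonly BlockingCollection<WorkItem> _queue =
            new BlockingCollection<WorkItem>();

        // Called from the controller: cheap, never blocks the web request.
        public void Enqueue(WorkItem item)
        {
            _queue.Add(item);
        }

        // Start a fixed number of consumers; this is where concurrency is capped.
        public void Start(int maxParallel)
        {
            for (int i = 0; i < maxParallel; i++)
            {
                Task.Run(() =>
                {
                    foreach (var item in _queue.GetConsumingEnumerable())
                    {
                        Process(item);       // the slow work (DB writes etc.)
                        NotifyClient(item);  // e.g. a SignalR broadcast of status
                    }
                });
            }
        }

        private void Process(WorkItem item) { /* hypothetical */ }
        private void NotifyClient(WorkItem item) { /* hypothetical */ }
    }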
Task:
I'm using static classes, so everyone shares data that's already been loaded. If someone makes a change, that request puts an item in a list with an incremental ID, and my idea is that every client keeps its own version on the client side and asks whether there's any change.
My solution: I use a $.post with a timeout of 5000 ms, sending the client's version. On the server side I have a 500-cycle for loop that checks whether there's anything newer, breaks out of the loop and returns the changes if so, and does a 10 ms Thread.Sleep in every cycle so it doesn't hog the CPU. On the client, if the post times out or errors, I call it again; if it succeeds, I process the returned data and then call it again. This way I should always get the changes almost instantly without an overwhelming number of requests, and if something fails I only need to wait 5 seconds for it to resume.
My problem is that while this loop runs, other requests aren't handled. With the ASP.NET development server that's expected, because it's single-threaded, but the same thing happens on IIS 7.5 on Windows 7 Home Premium.
What I tried: setting MaxConcurrentRequestsPerCPU in the registry (HKLM\SOFTWARE\Microsoft\ASP.NET\4.0.30319.0\MaxConcurrentRequestsPerCPU), increasing the worker threads for the application pool, and updating the aspnet.config file with maxConcurrentRequestsPerCPU="12" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000". I also read that my Windows 7 Home Premium should be able to use three threads. I also suspected it was an optimization because I use the same variables within one request (so it queues the others), so I commented those lines out and left only the for loop with the sleep, but got the same result.
Don't use Thread.Sleep on threads handling requests in ASP.NET. It essentially consumes a thread and prevents more requests from being started. There is a restriction on the number of threads ASP.NET will create to handle requests, which you've tried to change, but a high thread count makes the process less responsive and can easily cause an OutOfMemoryException in 32-bit processes, so it is not a good route.
There are several other threads discussing implementing long-poll requests with ASP.NET, like this one - Can ASP.NET MVC's AsyncController be used to service large number of concurrent hanging requests (long poll)? - and obviously Comet questions like this - Comet implementation for ASP.NET?.
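To make the contrast concrete, here is a rough sketch of a long poll that awaits instead of sleeping, so no request thread is held while waiting. It assumes ASP.NET Web API; Change and ChangeBroker are hypothetical:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class Change { public long Version { get; set; } public string Data { get; set; } }

    public static class ChangeBroker
    {
        private static readonly List<Change> Changes = new List<Change>();
        private static readonly object Sync = new object();
        private static TaskCompletionSource<bool> _signal =
            new TaskCompletionSource<bool>();

        // Producers call this when something changes.
        public static void Publish(Change change)
        {
            TaskCompletionSource<bool> toRelease;
            lock (Sync)
            {
                Changes.Add(change);
                toRelease = _signal;
                _signal = new TaskCompletionSource<bool>();
            }
            toRelease.TrySetResult(true); // wake all parked polls
        }

        public static async Task<List<Change>> WaitAsync(long version, TimeSpan timeout)
        {
            Task signal;
            lock (Sync)
            {
                var newer = Changes.Where(c => c.Version > version).ToList();
                if (newer.Count > 0) return newer;  // answer immediately
                signal = _signal.Task;              // otherwise park without a thread
            }
            if (await Task.WhenAny(signal, Task.Delay(timeout)) != signal)
                return null;                        // timed out; client re-polls
            lock (Sync) return Changes.Where(c => c.Version > version).ToList();
        }
    }

    public class UpdatesController : ApiController
    {
        public async Task<IHttpActionResult> Get(long clientVersion)
        {
            var changes = await ChangeBroker.WaitAsync(
                clientVersion, TimeSpan.FromSeconds(25));
            if (changes == null)
                return StatusCode(HttpStatusCode.NoContent);
            return Ok(changes);
        }
    }

Publish completes the TaskCompletionSource that the parked requests are awaiting, so they resume on thread-pool threads only when there is actually something to send.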
I have a custom IHttpModule that is used to log all HTTP requests and responses, and this is currently working very well, but I'd love to extend it so I can determine how long a response actually takes.
The response is logged in the HttpApplication.EndRequest event, but this event fires before the request is actually sent to the web client. While this allows me to determine how long it took for the server to process the response, I'd also love to be able to time how long it actually took for the client to receive the response.
Is there an event, or some other mechanism, which will allow me to intercept after the client has finished receiving the response?
So that would require client-side code. But it's not entirely clear what you want to measure. From smallest to largest, the timings could be:
time inside server application - measured by code which you already have.
Your code can set the start from either the "Now()" when it begins, or using the HTTP objects. The first call to a site would see a big difference between these start times, otherwise they should be almost identical.
time on server website - I believe this is already measured by most web servers, IIS included.
server machine - I believe this is what "mo" is referring to. You would have to have some kind of external monitoring on the server machine, à la Wireshark.
client machine - again, you would have to have some kind of external monitoring on the client machine. This would be the hardest to get, but I think it is really what you are asking for.
client application - this is what you can measure with javascript.
Unless this is the "first call" (see Slow first page load on asp.net site or ASP.NET application on IIS7 - very slow startup after iisreset), I believe that all of these times will be so close that you can use a "good enough" approach instead.
If you must have a measure of this call's client time, then you are stuck in a bad spot. But if you just want better numbers, continue to measure 1. (application time) with what you already have, and make sure to also measure the size of the request and response.
Then set a base-line for adjusting that time, by testing on various target client machines.
Measure ping times from the client to your server
Measure transfer times of moderately large content - both upload and download
Finagle the numbers to get your average adjustment
You should end up with a formula like:
[AdjustedTime] = [PingTime] + [ServerTime]
               + ([RequestSize] / [UploadSpeed])
               + ([ResponseSize] / [DownloadSpeed]);
This would be the expected client response time.
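To feed [ServerTime] and the request size into that formula, the existing module could look something like this rough sketch (classic ASP.NET assumed; the Log sink is hypothetical, and counting response bytes would need a filter stream, elided here):

    using System.Diagnostics;
    using System.Web;

    public class TimingModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (s, e) =>
            {
                app.Context.Items["Timer"] = Stopwatch.StartNew();
            };

            app.EndRequest += (s, e) =>
            {
                var timer = app.Context.Items["Timer"] as Stopwatch;
                if (timer == null) return;
                timer.Stop();

                // [ServerTime] and [RequestSize] for the formula above.
                Log(app.Context.Request.RawUrl,
                    timer.ElapsedMilliseconds,
                    app.Context.Request.ContentLength);
            };
        }

        public void Dispose() { }

        private static void Log(string url, long ms, int requestBytes)
        {
            Trace.WriteLine(url + ": " + ms + " ms, " + requestBytes + " bytes in");
        }
    }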
Yes, you could handle HttpApplication.EndRequest.
Another way would be to hook into your web server (IIS) and trace those events, for example with a Windows service that writes response times to a database, if you want to analyse the time a client needs to get your content.
But I think IIS is already able to do that.
It depends a little bit on what you want to do.
This question is about limits imposed on me by ASP.NET (like script timeout, etc.).
I have a service running under ASP.NET and I want to create a counterpart service for monitoring.
The main service's data is located at a database.
I was thinking about having the monitor service query the database at intervals of 1 second, within a loop driven by an HTTP request issued by the remote client.
The actual serving of this monitoring will be done by a client HTTP request, which makes the server-side code (written in C#) loop; when new data is detected, it aggregates that data into the looping request's output buffer, sends it, and exits the loop, thus finishing the request.
The client will have to issue a new request in order to keep getting updates.
This is actually just like TCP (and precisely like Windows IOCP): you request data from the service and wait for it; when it arrives, you fire another request.
My actual question is: Have you done it before? How did it go? Am I limited by some (configurable) limits imposed by the IIS/ASP.NET framework? What are my limits in such situation, or, what are better options without complicating things too much?
Note that I do not expect many such monitoring requests at a time, maybe a few dozens.
This means, however, that 10 such concurrent monitoring requests will keep 10 threads busy, and the question is: can that hurt IIS performance? How will IIS handle 10 busy threads? Will it spawn more? What are the limits? This is just one example of a limit I can think of.
I think your main concern in this situation would be timeouts, which are pretty much configurable. But I think it is the wrong solution: you'd be better off with a background service, running constantly or periodically, that writes the monitoring data to some data store; your monitoring page would then just return that data upon request.
If you want your page to display something only when monitoring data is available, implement it with AJAX: on page load, query the monitoring service; if monitoring events are available, render them; if not, sleep and query again.
IMO this would be a much better solution than really long-running requests. A bare-bones sketch of the background part follows.
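The sketch assumes a Windows service or similar long-running host; QueryMainServiceDatabase and SaveToMonitoringStore are hypothetical:

    using System;
    using System.Threading;

    public class MonitorWorker : IDisposable
    {
        private readonly Timer _timer;

        public MonitorWorker()
        {
            // Poll once per second, as the question proposes, but from a
            // dedicated process instead of a held-open web request.
            _timer = new Timer(_ => Poll(), null,
                               TimeSpan.Zero, TimeSpan.FromSeconds(1));
        }

        private void Poll()
        {
            var snapshot = QueryMainServiceDatabase(); // hypothetical
            SaveToMonitoringStore(snapshot);           // hypothetical; the
            // monitoring page then just reads this store on request.
        }

        private object QueryMainServiceDatabase() { return null; }
        private void SaveToMonitoringStore(object snapshot) { }

        public void Dispose() { _timer.Dispose(); }
    }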
I don't think it's a very good idea to monitor a service using ASP.NET, for the following reasons:
What happens when your application pool crashes?
What if you decide to do IISReset? Which application will come up first... the main app, or the monitoring app?
What if the monitoring application hangs due to load?
What if the load on the main service is already high? Wouldn't polling it every second increase the load on the primary service, as well as on IIS?
You get the idea...