I'm trying to figure out why the "Connection Count" of my Azure SignalR resource basically never decreases. On the rare occasions it does, it drops drastically, as if the resource had been recycled or something similar.
We automatically connect the user to SignalR when they log in to our website, and I presumed that SignalR would pick up the disconnect when the user leaves the website. But it doesn't look like it does.
I would have suspected that we simply have up to 1,000 users visiting the site at the same time, but that can't be the case: after a drop they would have reconnected (we have reconnect functionality implemented) and the count would have climbed right back up.
What I'm looking for is help on how to debug this. How can I figure out why the connection count doesn't decrease?
Npm: "#aspnet/signalr": "1.0.3"
Nuget: <PackageReference Include="Microsoft.Azure.SignalR" Version="1.0.8" />
I figured it out. It's not only the frontend that uses SignalR clients, but also some of our background processes (which push messages out to the frontend). I forgot about these, and it seems they aren't being disposed of correctly, so the connections remain open.
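For anyone hitting the same thing: the fix amounts to making sure every client connection is explicitly stopped or disposed when its work is done. A minimal sketch with the @aspnet/signalr JavaScript client (the hub URL and method name are hypothetical; for the .NET client the equivalent is disposing the HubConnection when you're finished with it):

import * as signalR from "@aspnet/signalr";

const connection = new signalR.HubConnectionBuilder()
  .withUrl("/notificationHub") // hypothetical hub endpoint
  .build();

async function pushMessages(messages) {
  await connection.start();
  try {
    for (const m of messages) {
      await connection.invoke("SendMessage", m); // hypothetical hub method
    }
  } finally {
    // Without this, the connection stays open and keeps counting
    // against the Azure SignalR "Connection Count".
    await connection.stop();
  }
}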
I have a web app, and I'm having a timing problem with Meteor.logout() and Meteor.call(). When I call Meteor.logout(), it takes about 30-40 seconds, and the same goes for Meteor.call(). About 200-250 clients use this system at the same time.
If a client has about 100-200 items on their app screen, the delay is severe; with 10-20 items it's a little better. Each item receives new data every 5-10 seconds, at times staggered from one another. I mean, it's a live screen.
Incidentally, I don't get this problem when I run the same code against the same database on a different port where I'm the only user.
I can't figure it out. What could the reason be? I need your ideas and help.
The logout function waits for a callback from the server, so there is likely something wrong with the way you have configured your server.
Try running the same code on another machine; it should not happen there.
You can use this.unblock() in every method and publication.
By default, Meteor processes requests from a client one by one; it queues incoming requests while one is being processed.
This may be because some functions doing bigger work need more time, and every other request to the server has to wait until they finish.
Simply place this.unblock() at the start of every method and publication, and they will no longer block your requests.
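A minimal sketch of what that looks like in a method (the method name and its work are hypothetical):

import { Meteor } from 'meteor/meteor';

Meteor.methods({
  'reports.generate'(reportId) {
    // Without this, every other method call from the same client
    // queues up behind this one until it finishes.
    this.unblock();
    // ...long-running work here...
  },
});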
Thanks
I solved my problem.
While collection updates were being performed on one side, Meteor publishes were running on the other, and as the number of clients increased the server became unresponsive. I solved it with MongoDB's oplog tailing feature (pointing Meteor at the replica set's oplog via the MONGO_OPLOG_URL environment variable, so it no longer has to poll-and-diff).
Thank you for your interest.
There could be multiple reasons.
There could be unsubscription from collections, which means the client and server exchange the list of ids being unsubscribed.
You may have a reactive UI which suddenly gets overwhelmed by the amount of data being transferred and has to update itself (for example, the Angular digest cycle always runs after a Meteor sub/unsub).
The Chrome Inspector's Network tab (WebSocket frames) is your best tool to understand how soon the Meteor logout fires and whether any messages are passed back and forth before the server returns the result of the logout request.
You may also use this.unblock() in your publications. This way your subscriptions run in parallel and don't block each other; a sketch follows.
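A minimal sketch for the publication case (the collection and publication names are hypothetical; depending on your Meteor version, this.unblock() in publications may come from core or from the meteorhacks:unblock package):

import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Items = new Mongo.Collection('items'); // hypothetical collection

Meteor.publish('items.live', function () {
  // Later subscriptions from the same client can start immediately
  // instead of waiting for this one to become ready.
  this.unblock();
  return Items.find({}, { limit: 200 });
});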
I've had Application Insights set up on my ASP.NET project for a couple months with no issues. I use Custom Events for logging certain events.
Recently, I tried to add a Custom Event after a user has authenticated in order to track the login behavior. My custom event DOES log to application insights debug session. I know this because I can see it in the telemetry when paused on a breakpoint just after the event.
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
I cannot understand what the issue is. Does anyone familiar with this have any (application) insights? I couldn't help myself ;)
There are some things to check:
are you logging to one resource (iKey) and searching on another? A lot of people send data to one resource in dev/debug and a different resource in release/prod environments, so make sure you're sending to the place you expect and searching the place you expect (see the sketch after this list).
is the data actually going out successfully? You may need to use Fiddler or some other tool to watch your outbound HTTP for calls to dc.services.visualstudio.com. It could be that there's something wrong with the data you're sending, or maybe you're getting capped or throttled by the service. If that's the case, the outbound requests will have responses other than 200 and will generally tell you the reason any rejected items weren't accepted.
if the data is getting sent successfully and is going where you expect it to go, there might just be a delay in backend processing. You can always check aka.ms/aistatus to see whether there are any current issues with the service.
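On the first point, it can help to pin the resource explicitly at initialization time. A hedged sketch using the Application Insights JavaScript SDK (the question uses the ASP.NET server SDK, where the rough analogue is setting the instrumentation key on TelemetryConfiguration and calling TelemetryClient.Flush(); the keys, event name, and environment check below are placeholders):

import { ApplicationInsights } from '@microsoft/applicationinsights-web';

// Placeholder keys: dev/debug and release/prod may be pointing at
// different resources without you noticing.
const instrumentationKey =
  process.env.NODE_ENV === 'production' ? 'PROD-IKEY' : 'DEV-IKEY';

const appInsights = new ApplicationInsights({ config: { instrumentationKey } });
appInsights.loadAppInsights();

appInsights.trackEvent({ name: 'UserLoggedIn' }); // hypothetical event name
appInsights.flush(); // send the batch now instead of waiting for the timer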
I am confused, however, by what you mean when you say
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
What do you mean by "it just disappears"? If you see it in the output window, then the SDK saw it, and it will get sent, precluding any of the above three items. Where is it "disappearing" from? Unless you clear the output window, it's never gone from there. If you're talking about the VS search tools that show data sent by the AI SDK during debug, that tool currently caps at the most recent 250 items that have occurred during the debug session.
When you have a Meteor cluster (let's say two boxes) and a server stops responding (goes down), does all the traffic get re-routed to the other "live" server? I'm building an application for someone that will very likely be a fire-and-forget application (it runs and just provides updates when they come in).
My concern is that if one server goes down, there won't be any traffic to any of the clients that were attached to that box.
Info about app:
The app will be a fire and forget (load page and walk away). Likely someone won't refresh the page or anything.
This app is mission critical and someone not getting a notification is really, really bad, and a difference of a few seconds does matter.
WebSockets must be used. The 10-second delay of falling back to polling is unacceptable.
Most Importantly....
The app must auto-recover. If a server goes down, the client must switch to a good box without a page refresh or someone having to walk over to the box and refresh it.
Meteor will always try to reconnect to the server when a connection is lost, so if the server comes back online the client will reconnect. If you need custom logic to retry a connection to a different cluster when the user disconnects, that should also be easy to code; the docs have a reactive API for the connection status (Meteor.status). Here is a new package I found that can be a great place to see how it should work: https://github.com/nspangler/autoreconnect
Also, if you're using meteorhacks:cluster it's possible it will retry connecting to a different server. The docs don't really say anything about it, but if not, I think arunoda might add that just by being asked on git.
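A hedged sketch of that kind of custom retry logic, assuming a hypothetical backup deployment URL:

import { Meteor } from 'meteor/meteor';
import { Tracker } from 'meteor/tracker';
import { DDP } from 'meteor/ddp-client';

const BACKUP_URL = 'https://backup.example.com'; // hypothetical

let backup = null;
Tracker.autorun(() => {
  const { status, retryCount } = Meteor.status();
  // Meteor keeps retrying the primary on its own; if the retries
  // pile up, open a DDP connection to the backup box instead.
  if (!backup && status === 'waiting' && retryCount > 3) {
    backup = DDP.connect(BACKUP_URL);
    // Re-point subscriptions/method calls at `backup` from here on.
  }
});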
good luck :)
In case of TL;DR: I basically need guidance on what tools are available to debug requests which are issued to IIS and stall inside a module.
I have a problem with an old ASP.NET 2.0 app at the moment whereby it periodically becomes unavailable, and recycling the app pool (horrible as that may be) doesn't bring it back up 100% of the time.
It first presents itself as requests entering the app pool and becoming trapped in the 'BeginRequest' state in the RewriteModule.
It is not one specific request that is always the first to experience the issue, and the issue cannot easily be reproduced either.
Eventually more requests join this backlog, and when it becomes 70+ deep the app pool fails to respond to pings from WAS and is forcibly recycled. Predictably it doesn't stop on time, and the old app pool is forced to stop. When the new app pool comes up, it either works just fine or instantly experiences the same issue as the outgoing one, and requests begin to queue again.
In issues like this all the official guidance is understandably focused on why the RewriteModule may choke.
I have validated my redirections, and though they are complex there are no obvious issues with syntax (the XML validates).
Likewise, loading the URL Rewrite module in inetmgr parses the configs fine and shows them visually.
Basic stuff like permissions is all fine.
When the app is working normally I also used Failed Request Tracing/Logging to look at the request pipeline for a sample URL which had stalled, and I can confirm that there is no circular logic or weird errors presenting; the request is handled just fine. This also showed me how early in the pipeline the RewriteModule is invoked, and from this I really don't see how the issue could be app-related, as .NET isn't invoked at that point.
Annoyingly, when an app pool is experiencing this issue and I can throw in requests which just stall, Failed Request Tracing is no good, because you actually need a request to get to the end of its journey and fail, otherwise it refuses to log anything out.
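One thing that does work against a live, wedged app pool is asking IIS for its currently executing requests, which shows the module and pipeline state each stuck request is sitting in (the elapsed threshold below is arbitrary):

%windir%\system32\inetsrv\appcmd list requests /elapsed:30000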
I also resorted to taking process dumps of the affected w3wp.exe processes and running them through DebugDiag. Unfortunately, the only thing I see is that threads are open accessing the RewriteModule, but precious little about what they are stuck on.
As anyone else would, I've tried to track the start of the issue back to any recently installed patches or code changes, but nothing matches. Likewise this is happening on three servers, otherwise I would try reinstalling the RewriteModule. Other sites on the same servers which invoke the RewriteModule are unaffected.
Has anyone else experienced issues like this? The net seems to have relatively little info on this case. Perhaps you can recommend further debugging tools or approaches for IIS which I can adapt to this scenario? This is sort of a cry for help from someone more used to Apache/Nginx - sorry for the long post.
I'm encountering a situation where it takes ASP.NET a very long time to generate the reply with the web page (more than 2 hours), because the code-behind runs a very long, slow loop.
The browser (both IE and Firefox) stops waiting for the reply (after about an hour) and gives a generic "cannot display webpage" error (similar to what you would see if you tried to navigate to a non-existent server).
At the same time, the ASP.NET app keeps going (I can see it in the debugger) and eventually completes.
Why does this happen? Are there any settings in web.config that influence this? I'm hoping there's a timeout setting I'm missing that's causing this.
Maybe a setting in IE or Firefox? But I think they keep waiting while the server is keeping the connection alive.
I'm experiencing this even when I launch app in debug mode (with compilation debug="true") on my local machine from VS (so it's not running on IIS, but on ASP.NET Dev Server).
I know it's bad that it takes so long to generate the page, but it doesn't matter at this stage. Speeding it up would take a lot of extra work and the delay doesn't really matter. This is used internally.
I realize I can redesign around this issue running logic to a background process and getting notified when it's done through AJAX, or pull it to a desktop app or service or whatever. Something along those lines will be done eventually, but that's not what I'm asking about right now.
Sounds like you're using IE, and it is timing out while waiting for a response from the server.
There is a Microsoft support article on adjusting this limit:
http://support.microsoft.com/kb/181050
CAUSE
By design, Internet Explorer imposes a time-out limit for the server to return data. The time-out limit is five minutes for versions 4.0 and 4.01 and is 60 minutes for versions 5.x, 6, and 7. As a result, Internet Explorer does not wait endlessly for the server to come back with data when the server has a problem.
RESOLUTION
In general, if a page does not return within a few minutes, many users perceive that a problem has occurred and stop the process. Therefore, design your server processes to return data within 5 minutes so that users do not have to wait for an extensive period of time.
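On the server side there is also a web.config knob worth knowing about: httpRuntime's executionTimeout (in seconds, default 110). Note that it is only enforced when compilation debug="false", which is presumably why the poster's debug run keeps going for hours. A hedged sketch:

<system.web>
  <!-- executionTimeout is in seconds; the default is 110.
       It is only enforced when debug="false". -->
  <httpRuntime executionTimeout="7200" />
  <compilation debug="false" />
</system.web>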
The entire paradigm of the Web is of request/response. Not request, wait two hours, response!
If the work takes so long to do, then have the page request trigger the work, and then not wait for it. Put the long-running code into a Windows service, and have the service listen to an MSMQ queue (or use WCF with an MSMQ endpoint). Have the page send requests for work to this queue. The service will read a request, maybe start up a new thread to process it, then write a response to another queue, file, or whatever.
The same page, or a different, "progress" page can poll the response queue or file for responses, and update the user, assuming the user still cares after two hours.
For something that takes this long, I would figure out a way to kick it off via AJAX and then periodically check on its status. The background process should update a status variable on a regular basis and store its data in the cache or session when complete. When it completes and the browser detects this (via AJAX), have the browser do a real postback (or a GET by changing location.href), pick up the saved data, and generate the page.
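A hedged sketch of that kick-off-and-poll pattern (the endpoints and JSON shapes are hypothetical):

async function runLongJob() {
  // Kick off the work; the server returns immediately with a job id.
  const { jobId } = await (await fetch('/StartJob', { method: 'POST' })).json();

  const timer = setInterval(async () => {
    const { done } = await (await fetch('/JobStatus?id=' + jobId)).json();
    if (done) {
      clearInterval(timer);
      // A real navigation picks up the data saved in cache/session.
      window.location.href = '/JobResult?id=' + jobId;
    }
  }, 5000); // check status every 5 seconds
}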
I have a process that can take a few minutes, so I spin off a separate thread and send the result via FTP. If an error occurs in the process, I send myself an error message including the stack trace. You may want to consider sending the results via email or some channel other than the browser, and using a thread as well.