We have an SPA (Angular) with ASP.NET Core on the backend. We are leveraging the Azure SignalR Service for communication.
Problem: the SignalR client drops its WebSocket connection every 15 minutes. This is confirmed by the browser's Network tab as well as by the Azure SignalR Service logs.
It sounds like either the SignalR client library has some timeout, or the WebSocket connection itself does.
Tested: on different environments (different Azure SignalR Service instances), on different browsers (Chrome, Firefox), from different client locations (behind different networks), and with different ASP.NET hosting options (IIS, IIS Express, Azure App Service). The result is always the same: the WebSocket connection lasts exactly 15 minutes.
One interesting fact: it fails not only at a fixed interval, but also at specific times: the 0th, 15th, 30th, and 45th minute of every hour.
I guess this can be fixed with some configuration, but the default keep-alive and other timeout settings look fine.
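For reference, these are the knobs I mean by "KeepAlive and other timeouts"; below is a minimal sketch of the server-side configuration, assuming ASP.NET Core SignalR with the Microsoft.Azure.SignalR package (the values are illustrative, not copied from our code):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR(options =>
        {
            // The server pings each client at this interval so the socket never looks idle.
            options.KeepAliveInterval = TimeSpan.FromSeconds(15);
            // How long the server tolerates silence from a client before closing the connection.
            options.ClientTimeoutInterval = TimeSpan.FromSeconds(30);
        })
        .AddAzureSignalR(); // connection string comes from configuration (Azure:SignalR:ConnectionString)
    }
}
```

With these defaults the connection should never sit idle for anywhere near 15 minutes, which is part of why the drop at fixed wall-clock times puzzles me.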
Browser logs
Azure SignalR logs
Related
I have a service running in Azure (in the same service plan as other services).
When this service runs locally (with the exact same settings as the one in Azure), my service endpoint returns within 2 seconds. However, when it runs in Azure it takes up to a minute.
The service endpoint itself calls a bunch of external APIs.
Looking at Application Insights, it seems like the external APIs are taking forever to return (~10 s apiece). Hitting the same external endpoints manually confirms that they return immediately. Application Insights also shows that the service spends only 6.8 ms doing work and then spends the rest of the time waiting.
My intuition says it's some form of connection starvation, where the Azure app is waiting for a thread or connection to become available, but checking the Azure metrics shows nothing out of the ordinary.
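To illustrate what I mean by connection starvation: the usual .NET suspect for this symptom is creating and disposing an HttpClient per request, which exhausts outbound sockets and makes every call wait on connection setup. A minimal sketch of the pattern I'm auditing for versus the shared-client alternative (names are illustrative, not from our code):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class ExternalApiCaller
{
    // Anti-pattern being checked for:
    //   using (var client = new HttpClient()) { ... }   // created per request
    // leaves sockets in TIME_WAIT and can starve outbound connections under load.

    // Preferred: share one HttpClient (or use IHttpClientFactory) so connections are pooled.
    private static readonly HttpClient SharedClient = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(10)
    };

    public Task<string> GetExternalAsync(string url) => SharedClient.GetStringAsync(url);
}
```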
My web app has 200 online users. But when I check SignalR connections after one day, the count is nearly 5,000, most of them 2-10 hours old.
It starts out okay, but grows by 500 connections per hour. It seems like some connections just never close.
And when I try to send a message to all SignalR clients, my app hangs with the CPU at 100%.
What could the issue be? SignalR version 2.2.0.
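For context, these are the only connection-lifetime settings I know of in SignalR 2.x; a minimal sketch of where they live in the OWIN startup (values are illustrative, not my production settings):

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // After a client stops answering keep-alives, the server waits this long
        // before firing OnDisconnected and releasing the connection.
        GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
        // Keep-alive ping interval; must be no more than a third of DisconnectTimeout.
        GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);

        app.MapSignalR();
    }
}
```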
If you're using any kind of reverse proxy or tunnel between IIS and the public internet, make sure everything is up to date.
For me it turned out to be caused by an out-of-date Cloudflare Argo tunnel.
SignalR Core connections not being closed and bringing down IIS
I am writing a Meteor application that has two components - a frontend Meteor app, hosted on one server, and a chat app hosted on another. The chat application uses socket.io to do the actual messaging (because I wanted to use Redis pub/sub and that isn't supported by Meteor yet), and of course SockJS for the rest.
I am hosting the two on Kubernetes. At their network IPs, WebSockets are working.
However, I want to use Cloudflare, where WebSockets won't work, so I have the DISABLE_WEBSOCKETS environment variable set to 1. Additionally, the transports for socket.io should have just defaulted to XHR polling.
The only problem is this:
- when I get the conversations, the app hangs because the frontend web app makes a huge number of repeated XHR requests to the chat app.
- after a while, the chat app manages to respond and send down the information, but it takes about 10 seconds when it should take less than 0.5 seconds.
- a huge number of SockJS XHR requests are being made to the chat app, whereas the number of SockJS XHR requests to the normal frontend app is small.
- in development, this issue doesn't arise even with DISABLE_WEBSOCKETS set to 1.
On Cloudflare, I tried the following (from this page: https://modulus.desk.com/customer/portal/articles/1929796-cloudflare-configuration-with-meteor-xhr-polling):
- Set "Pseudo IPv4" to "Overwrite Headers"
Is there a special Meteor configuration I need to get XHR polling working with Cloudflare? Additionally, I have another service in the app, and it works completely fine. Could socket.io somehow be interfering with SockJS in the chat service?
My team is in the middle of deciding the architecture of our backend system:
Webserver A is an ASP.NET MVC application with an ASP.NET Web API component, hosted in an Azure Website.
Windows Service B is a self-hosted OWIN server that will periodically push notifications to clients that subscribe to them, hosted in an Azure VM.
Windows Service C is a client that subscribes to notifications from B, hosted in an Azure VM.
Since we are more or less entrenched in the .NET stack, we implemented B as a SignalR server with C as the SignalR client. This part seems to work well.
Now comes a point where we also want A to subscribe to B, but I realize that this means an ASP.NET web server is going to act as a SignalR CLIENT, instead of the typical scenario where it acts as the SignalR server.
I presume we can initialize the SignalR connection in Global.asax and keep the process ever-running to avoid AppDomain recycles. However, I feel a bit iffy about a web server being made to do something other than serving web requests. This solution also makes the web server stateful, since it needs to keep the WebSocket connection alive.
Is there something fundamentally wrong with making an ASP.NET application a SignalR client? Is there any possible gotcha with this setup?
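To be concrete, this is roughly what I have in mind for the client side of A; a minimal sketch, assuming the Microsoft.AspNet.SignalR.Client package (the URL, hub name, method name, and reconnect delay are placeholders):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public class NotificationSubscriber
{
    private HubConnection _connection;
    private IHubProxy _proxy;

    // Called once from Application_Start in Global.asax.
    public async Task StartAsync()
    {
        _connection = new HubConnection("http://service-b.example.com/signalr");
        _proxy = _connection.CreateHubProxy("NotificationHub");
        _proxy.On<string>("notify", payload =>
        {
            // Handle the pushed notification here.
        });

        // AppDomain recycles or dropped sockets mean we have to reconnect ourselves.
        _connection.Closed += async () =>
        {
            await Task.Delay(5000);
            await _connection.Start();
        };

        await _connection.Start();
    }
}
```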
In Azure you cannot guarantee that your AppDomain will not recycle. For any number of reasons it can restart itself to heal, and then you will end up making a new connection to the SignalR server. Is that OK for you?
Also, SignalR is mostly used to improve web functionality, where polling and refreshing on web clients is made simple. But since your requirement seems to be entirely back-end, I would suggest you go with some other event-driven pattern. Check the Azure Service Bus topic/subscription model to have different components listen for various events and act accordingly.
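For example, here is a minimal sketch of a topic/subscription listener, assuming the Azure.Messaging.ServiceBus package (the connection string, topic name, and subscription name are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class NotificationListener
{
    static async Task Main()
    {
        await using var client = new ServiceBusClient("<service-bus-connection-string>");

        // Each back-end component gets its own subscription on the shared topic.
        ServiceBusProcessor processor = client.CreateProcessor("notifications", "service-c");

        processor.ProcessMessageAsync += async args =>
        {
            Console.WriteLine($"Received: {args.Message.Body}");
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += args =>
        {
            Console.WriteLine(args.Exception);
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        Console.ReadKey();
        await processor.StopProcessingAsync();
    }
}
```

Publishers (B in your setup) just send to the topic, and each subscriber processes messages independently, so no component has to hold a long-lived connection to another component's process.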
Suppose I have 3 applications:
WebApp 1 - a NancyFX app that serves HTML pages. There's also a SignalR hub for messaging between the users of that app (and it sometimes sends messages to WebApp 2).
WebApp 2 - a NancyFX app that serves HTML pages. There's a SignalR hub that receives messages from WebApp 1 and updates the users of WebApp 2.
WebApp 3 - a self-hosted Web API that doesn't have a SignalR hub, but sends messages to WebApp 2 in order to update its connected clients.
So my question - is keeping two hubs, one in WebApp 1 and one in WebApp 2, the way to go, or should I have a (scalable) dedicated SignalR server that hosts the hubs of WebApp 1 and WebApp 2 to facilitate communication?
Thanks..
Tough to say what's best for you, since we have no details about your load requirements or how authentication/authorization works in your application. However, I'll say this:
Your scenario could be viewed as similar to a more typical SignalR scale-out situation, where you have a single application deployed to a web farm behind a load-balancer. In this scenario, you use SignalR's scaleout ("backplane") feature for server-to-server communication so that outgoing messages reach clients no matter which server they happen to be connected to. Your situation is really no different, except you have three different applications in play. As long as all three of your applications are hosting the same hub class (via a shared hub assembly) and are connected to the same scaleout backplane, it ought to work fine.
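To make that concrete, here's a minimal sketch of the backplane wiring that would go into each app's OWIN startup, assuming classic ASP.NET SignalR 2.x with the Microsoft.AspNet.SignalR.Redis package (the Redis host, password, and event key are placeholders):

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    // The same wiring (plus the shared hub assembly) goes into WebApp 1, WebApp 2,
    // and WebApp 3, all pointing at the same Redis instance.
    public void Configuration(IAppBuilder app)
    {
        GlobalHost.DependencyResolver.UseRedis("redis.example.com", 6379, "password", "SharedBackplane");
        app.MapSignalR();
    }
}
```

Any of the apps can then broadcast through the shared hub, and the backplane takes care of delivering the message to clients connected to the other apps.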