SignalR HubProvider.Connection.Start() is taking about 60 seconds - ASP.NET

I have an ASP.NET Core server that uses Microsoft.AspNet.SignalR (2.2.1) for communication with a WPF client. Because I use AspNet.SignalR and AspNet.SignalR.Owin on the server side, I host this app as a Kestrel console application. The application has been deployed to 3 different servers, and there are problems with the ones hosted on Amazon EC2. The first connection (running the Connection.Start() method) takes a very long time (about 60 seconds!). Invoking hub methods and sending requests to the controller work at normal speed. I tried to investigate this; of course I configured the security group and firewall for HTTP traffic, but I really have no idea what is going on. Has anyone else had this problem? Does Amazon use any network monitor for EC2?
Update
Oh vey, I figured it out. The problem was too much logic in the OnConnected() hub method. Everything is OK, but some of my services on Amazon start very slowly. Thank you!
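For anyone hitting the same symptom, the fix is to keep OnConnected() fast. A minimal sketch, assuming a hypothetical StatusHub and a hypothetical SlowServices.WarmUp standing in for the slow service startup:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class StatusHub : Hub
{
    public override Task OnConnected()
    {
        // Defer the slow warm-up to a background task so the SignalR
        // handshake completes immediately instead of waiting ~60 seconds.
        Task.Run(() => SlowServices.WarmUp(Context.ConnectionId));
        return base.OnConnected();
    }
}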

Related

Configuring Azure's Event Hub to receive events from ASP.NET MVC web application

Can someone point me in the right direction on how to configure network settings within Event Hub so I can successfully send data from the ASP.NET MVC application while running locally (localhost) as well as when I deploy the application to Azure's dev/qa/production web environments?
I have built a proof-of-concept console application in .NET locally and, on the Azure Event Hub side, added my IP address within the Networking/Firewall settings; I have no issue sending and receiving data from my local machine.
But when I try the same code in the ASP.NET MVC web application, the page just hangs on the CreateBatchAsync() method and does not return any exception.
var producerClient = new EventHubProducerClient(connectionString, eventHubName);
// Hangs here: CreateBatchAsync() is the first network operation, so it is
// the call that actually opens the AMQP connection to Event Hubs.
EventDataBatch eventBatch = await producerClient.CreateBatchAsync();
eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Event 1z at " + DateTime.Now.ToString())));
await producerClient.SendAsync(eventBatch);
Any help would be appreciated.
Thanks
The call to CreateBatchAsync is the first point in your code to request a network operation and, consequently, it triggers creation of the connection and link to the Event Hubs service. The connection attempt has a timeout associated with it, which is 60 seconds in the default configuration you're using. Depending on the error encountered, you may see retries take place, each with its own 60-second timeout. With the default configuration, this would look like a 3-minute hang (60 seconds * 3 attempts).
The most common connection issue in an enterprise environment is that the ports needed for AMQP over TCP (5671/5672) are not open. Changing the transport to AMQP over WebSockets often helps, as it will use port 443 and may be routed through a proxy, if needed.
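As a rough sketch of both knobs with the Azure.Messaging.EventHubs package (the timeout and retry values below are illustrative, not recommendations):

using System;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

var options = new EventHubProducerClientOptions
{
    ConnectionOptions = new EventHubConnectionOptions
    {
        // Tunnel AMQP over port 443 instead of 5671/5672.
        TransportType = EventHubsTransportType.AmqpWebSockets
    },
    RetryOptions = new EventHubsRetryOptions
    {
        // Surface connection failures faster than the 60-second default.
        TryTimeout = TimeSpan.FromSeconds(15),
        MaximumRetries = 1
    }
};

var producerClient = new EventHubProducerClient(connectionString, eventHubName, options);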
For more information, you may want to look at the sample for configuring Event Hubs clients and the Event Hubs network troubleshooting guide.

When should we use SignalR self-hosted, and when should we not?

I am at the stage of using SignalR in my project, and I don't understand when to use the self-hosted option and when not to. For example, if I am planning to host my web application in a server farm:
There will be separate hosting servers
Separate SignalR hubs in each IIS server
If we want to broadcast a message to every client, how does this work in SignalR?
The problem with SignalR running on multiple instances is that clients connected to instance A cannot get messages from clients connected to instance B.
From the SignalR scaleout documentation: "However, when you scale out, clients can get routed to different servers. A client that is connected to one server will not receive messages sent from another server."
The solution to this is using a backplane: every time a server receives a message, it forwards it to all other servers. You can do this using Azure Service Bus, Redis, or SQL Server.
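For example, wiring up the Redis backplane from the Microsoft.AspNet.SignalR.Redis package is a one-liner in the OWIN startup; the Redis server, port, password, and app name below are placeholders:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every server in the farm registers against the same Redis
        // instance, which forwards messages to all the other servers.
        GlobalHost.DependencyResolver.UseRedis("redis.example.com", 6379, "password", "MyApp");
        app.MapSignalR();
    }
}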
The way I see it, you use the self-host option when you either don't want the full IIS pipeline running (because you have some lightweight operations that don't require all of IIS's heaviness) or you don't want a web server at all (for example, you want to add real-time functionality to an existing application, say a Windows Forms app, or to any other process).
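For reference, a minimal self-host sketch using the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages (the URL is just an example):

using System;
using Microsoft.Owin.Hosting;
using Owin;

class Program
{
    static void Main()
    {
        // Host SignalR in a plain console process; no IIS involved.
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("SignalR server running on http://localhost:8080");
            Console.ReadLine();
        }
    }
}

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR();
    }
}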
Be sure to read the documentation for self-hosting SignalR and decide whether you actually need to self-host it.
If you are developing a web application under IIS, I don't see any reason why you would want to self-host SignalR.
Hope this helps. Best of luck!

Using ASP.NET Web application as SignalR client

My team is in the middle of deciding the architecture of our backend system:
Webserver A is an ASP.NET MVC application with ASP.NET Web API component, hosted in Azure Website.
Windows Service B is a self-hosted OWIN server that will periodically push notifications to clients who subscribe to them, hosted in an Azure VM.
Windows Service C is a client that subscribes to notifications from B, hosted in an Azure VM.
Since we are more-or-less entrenched in .NET stack, we implemented B as SignalR server with C being the SignalR client. This part seems to work well.
Now comes a point where we also want A to subscribe to B, but I realize that means an ASP.NET web server is going to act as a SignalR CLIENT, instead of the typical scenario where it acts as a SignalR server.
I presume we can initialize the SignalR connection in Global.asax and make the process ever-running to avoid AppDomain recycling. However, I feel a bit iffy when a web server is made to do something other than serve web requests. This solution also makes the web server not stateless, since it needs to keep the WebSocket connection alive.
Is there something fundamentally wrong with making an ASP.NET application a SignalR client? Is there any possible gotcha with this setup?
In Azure you cannot guarantee that your AppDomain will not recycle. For any of many reasons, it can restart itself to heal, and then you will end up making a new connection to the SignalR server. Is that OK for you?
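If that's acceptable, the client can at least heal transient drops itself. A minimal sketch with Microsoft.AspNet.SignalR.Client, started for example from Application_Start; the URL, hub name, and event name are hypothetical:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public static class NotificationSubscriber
{
    private static HubConnection _connection;

    public static async Task StartAsync()
    {
        _connection = new HubConnection("http://service-b.example.com/signalr");
        IHubProxy proxy = _connection.CreateHubProxy("NotificationHub");
        proxy.On<string>("notify", message => { /* handle the pushed notification */ });

        // An AppDomain recycle re-runs Application_Start, which re-subscribes;
        // the Closed handler covers transient drops in between.
        _connection.Closed += () => _connection.Start();
        await _connection.Start();
    }
}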
Also, SignalR is mostly used for improving web functionality, where it makes polling and refreshing on web clients simple. But since your requirement seems to be all back-end stuff, I would suggest you go with some other event-driven pattern. Check the Azure Service Bus topic/subscription model to have different components listen for various events and act accordingly.
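A rough sketch of that model with the current Azure.Messaging.ServiceBus package, assuming connectionString is defined; the topic and subscription names are placeholders:

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Publisher side (service B): send an event to a topic.
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("notifications");
await sender.SendMessageAsync(new ServiceBusMessage("event payload"));

// Subscriber side (A or C): each component listens on its own subscription.
ServiceBusProcessor processor = client.CreateProcessor("notifications", "subscriber-a");
processor.ProcessMessageAsync += async args =>
{
    // React to the event, then settle the message.
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args => Task.CompletedTask; // log for real
await processor.StartProcessingAsync();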

Monitoring cluster of micro services (web,queue,db,ha proxy)

I am designing an architecture where all the microservices are clustered.
For instance: 5 web servers, 1 clustered DB, 1 clustered queue system, and 8 clustered workers (for sending email, SMS, ...) that consume from the queue (tasks are pushed by the web servers).
I am wondering about the best practice for detecting that each cluster of microservices is healthy, and how to 'fail fast' the whole service in case one of the microservices becomes unavailable.
The whole service sits behind nginx as an HA proxy - should it be nginx that monitors everything and triggers the failure? How can I check the health of all the microservices?
You should use an external monitoring service like Pingometer.
This lets you set up simple health checks (HTTP, HTTPS, Ping, etc.) at regular intervals and receive alerts if a node fails, is unavailable, or is not responding with the correct content.
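A minimal sketch of the endpoint such a check could poll on each service, assuming ASP.NET Web API with attribute routing and placeholder dependency probes:

using System.Net;
using System.Web.Http;

public class HealthController : ApiController
{
    [HttpGet]
    [Route("health")]
    public IHttpActionResult Get()
    {
        // Placeholder probes: replace with real checks of your db/queue.
        bool healthy = CanReachDatabase() && CanReachQueue();
        return healthy
            ? (IHttpActionResult)Ok("OK")
            : Content(HttpStatusCode.ServiceUnavailable, "DEGRADED");
    }

    private bool CanReachDatabase() { return true; }
    private bool CanReachQueue() { return true; }
}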
Among your alert contacts, you can set up a webhook that fires when a service goes down. You can use the webhook to trigger a failover, change DNS records, etc.
We set up something similar and it's working quite well.
You can also use something internally to monitor nginx itself (e.g., watching worker processes and respawning them), but this doesn't tell you that a service is reachable externally (as a monitoring service would).

Where to host SignalR when long-running service via WCF is backend

I'm sure that was a confusing enough title.
I have a long-running Windows service dealing with things happening in the world. This service is my canonical source of truth for the rest of my system. Now I want to slap a web interface onto it so the clients can see what is actually going on. At first this would simply be an MVC5 application with some Web API stuff. Then I plan to use SignalR 2.0 and Ember.js to make the application more interactive and "realtime".
The client communicates with the Windows Service over named pipes using WCF. A client (such as a web app) could request an instance of for example IEventService, would be given a WCF proxy client, and could read about events through this interface. Simple enough.
However, a web application basically just exists in the sense that it responds to requests from users. The way I understand it, this is not the optimal environment for a long-lived WCF client proxy to raise events in, and thus I wonder where to host my SignalR stuff. Keep in mind that a user would log in to the MVC5 site, but through the magic of SignalR, they would keep interacting with the service without necessarily making further requests to the website.
The way I see it, there are two options:
1) Host SignalR stuff as part of the web app. Find a way to keep it "long-running" while it has active clients, so that it can react to events on the WCF client proxy by passing information out to the connected web users.
2) Host SignalR stuff as part of my Windows service. This is already long-running, but I know nada about OWIN and what this would mean for my project. Also the SignalR client will have to connect to a different port than where the web app was served from, I assume.
Any advice on which is the right direction to go in? Keep in mind that in extreme cases, a web user would log in when they get to work in the morning and have only SignalR traffic going back and forth (i.e., no web requests) for a full work day before logging out. I need them to keep up with realtime events all that time.
Any takers? :)
The benefit of self-hosting as part of your Windows service is that you can integrate the calls to clients directly with your existing code and events. If you host the SignalR server separately, you'd have another layer of communication between your service and the SignalR server.
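As an illustration of that direct integration when self-hosting in the Windows service, events can be pushed straight through the hub context. A rough sketch, where IEventService comes from the question but the EventOccurred event and its Description property are hypothetical:

using Microsoft.AspNet.SignalR;

public class EventsHub : Hub { }

public class EventBridge
{
    // Forward events from the long-running service directly to all
    // connected browsers; no extra communication layer in between.
    public void Attach(IEventService eventService)
    {
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<EventsHub>();
        eventService.EventOccurred += (sender, args) =>
            hubContext.Clients.All.eventOccurred(args.Description);
    }
}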
If you've already decided on using WCF named pipes for that, then it probably won't make a difference whether you self-host or host in IIS (as long as it's on the same machine). The SignalR server itself is always "long-running" in the sense that as long as a client is connected, it will receive updates. It doesn't require manual requests from the user.
In any case, you'll probably need a web server to serve the HTML, scripts and images.
Having clients connected for a day shouldn't be a problem either way, as far as I can see.
