I am building a web application that currently uses two SignalR hubs:
ChatHub - User communication
ControlHub - User manipulates controls and receives responses from server
I want to add a third hub: GuideHub, which will be responsible for determining whether or not a user has completed a set of tasks assigned to them on the website. Technically, this hub will be active whenever ChatHub is active (they share a page element), but they serve thematically different purposes. Generally, users will only be actively communicating across one hub at a time.
I know that premature optimization is usually no good, but in this scenario I need to plan ahead for how I am going to make these features scale well. Is this architecture scalable, or should I combine ControlHub and GuideHub to reduce the number of open connections users will have?
SignalR 2.x supports multiple hubs over one connection:
http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server#multiplehubs
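To make the sharing concrete, here is a minimal sketch with the SignalR 2.x .NET client (your hub names are taken from the question; the URL and the client-side callback names are assumptions):

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

class Client
{
    static async Task Main()
    {
        // One physical connection for the whole app (URL is an assumption)...
        var connection = new HubConnection("http://example.com/signalr");

        // ...shared by every hub proxy; proxies must be created before Start().
        IHubProxy chatProxy = connection.CreateHubProxy("ChatHub");
        IHubProxy controlProxy = connection.CreateHubProxy("ControlHub");
        IHubProxy guideProxy = connection.CreateHubProxy("GuideHub");

        // "newMessage" and "taskCompleted" are made-up client-side callbacks.
        chatProxy.On<string>("newMessage", msg => Console.WriteLine(msg));
        guideProxy.On<string>("taskCompleted", id => Console.WriteLine(id));

        await connection.Start(); // a single negotiate/connect serves all three hubs
    }
}

Adding GuideHub therefore does not add a connection per user; it only adds another proxy on the connection that already exists.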
I have a .NET Core application that consists of some background tasks (hosted services) and Web APIs (which control and get the statuses of those background tasks). Other applications (e.g. clients) communicate with this service through these Web API endpoints. We want this service to be highly available, i.e. if a service instance crashes, another instance should start doing the work automatically. The client applications should also be able to switch to the next service automatically (clients should call the APIs of the new instance instead of the old one).
The other important requirement is that the task (computation) this service performs in the background can't be shared between two instances. We have to make sure only one instance does this task at a given time.
What I have done up to now is run two instances of the same service and use a SQL Server-based distributed locking mechanism (SqlDistributedLock) to acquire a lock (sketched below). If a service acquires the lock, it goes and does the operation while the other node waits to acquire the lock. If one service crashes, the next node is able to acquire the lock. On the client side, I used a Polly-based retry mechanism to switch the calling URL to the next node to find the working node.
But this design has an issue: if the node that acquired the lock loses connectivity to the SQL server, the second service manages to acquire the lock and starts doing the work while the first service is still in the middle of doing the same.
I think I need some sort of leader election (it seems I have done it wrongly). Can anyone help me with a better solution for this kind of problem?
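For reference, a minimal sketch of the locking pattern described above, assuming the DistributedLock NuGet package (which is where a class named SqlDistributedLock comes from; the lock name and work method are made up):

using Medallion.Threading.SqlServer; // assuming the DistributedLock package

class TaskRunner
{
    public void RunExclusively(string connectionString)
    {
        // Every instance competes for the same named lock; only the winner works.
        var distributedLock = new SqlDistributedLock("background-task", connectionString);

        using (distributedLock.Acquire()) // blocks until this node holds the lock
        {
            DoBackgroundWork(); // hypothetical work method
        }
        // Failure mode from the question: if this node loses SQL connectivity
        // mid-work, the lock is released while the work here continues.
    }

    void DoBackgroundWork() { }
}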
This problem is not specific to .NET or any other framework, so please make your question more general to make it more accessible. Generally, the solution to this kind of problem lies in the domain of Enterprise Integration Patterns, so consult the references, as the status quo may change.
At first sight and based on my own experience developing distributed systems, I suggest two solutions:
use a load balancer or gateway to distribute requests between your service instances.
use a shared message queue broker to put requests in and let each service instance dequeue a request for processing.
Either is fine, and I use both in my own designs; a sketch of the queue-based option follows.
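A minimal sketch of the second option, assuming RabbitMQ and its .NET client (queue name and handler are made up): with a prefetch of 1 and manual acks, a given task is only ever in flight on one instance at a time, and a message whose consumer dies mid-task is redelivered to another instance.

using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Worker
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // assumption
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // Durable work queue; every service instance runs this same consumer.
        channel.QueueDeclare(queue: "background-tasks", durable: true,
                             exclusive: false, autoDelete: false, arguments: null);

        // At most one unacknowledged message per consumer.
        channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            ProcessTask(ea.Body.ToArray());                    // hypothetical handler
            channel.BasicAck(ea.DeliveryTag, multiple: false); // ack only after success
        };
        channel.BasicConsume(queue: "background-tasks", autoAck: false, consumer: consumer);

        Console.ReadLine(); // keep the worker alive
    }

    static void ProcessTask(byte[] body) { /* do the work */ }
}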
I have a DDD application and I am trying to understand where SignalR fits in my layers:
1. Presentation (Angular)
2. Distributed Services (Web API)
3. Application
4. Domain
5. Data
Basically, my SignalR hub notifies clients (the Angular web app) when there is new data. To do this, I run a background service on a background thread that checks the database on an interval and notifies clients when there is new data.
I am inclined to think in this way:
The SignalR hub belongs to the Presentation layer. Given that my presentation project is purely client-side (Angular), I would add a new project under Presentation just for the hub.
The background service that checks the database on an interval seems appropriate for the Application layer. I would inject an INotify interface with a Notify method, which I would implement with SignalR.
Is this in line with DDD principles?
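For concreteness, a minimal sketch of the INotify idea above, assuming ASP.NET Core SignalR (the hub name, method names, and payload are made up):

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// The Application layer depends only on this abstraction.
public interface INotify
{
    Task NotifyAsync(object newData);
}

// Presentation-side implementation: the only code that knows about SignalR.
public class SignalRNotifier : INotify
{
    private readonly IHubContext<DataHub> _hubContext;

    public SignalRNotifier(IHubContext<DataHub> hubContext) => _hubContext = hubContext;

    public Task NotifyAsync(object newData) =>
        _hubContext.Clients.All.SendAsync("newData", newData);
}

public class DataHub : Hub { }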
DDD is all about ensuring that changes to your data happen only in a well-defined way, and that the code executing those changes is expressed in terms from a Ubiquitous Language which is well understood throughout the whole business (not just the dev team).
DDD is silent on the mechanism used to interface with your users and other systems, other than recommending a layered architecture - which it sounds like you're already doing.
So - I wouldn't worry too much about DDD here - but it is worth considering your overall architectural approach - and in terms of layered architectural patterns, one that matches your approach well is called Ports & Adaptors, or Onion architecture (see 1 and 2).
In this architecture, the outside of your system is considered as a set of adaptors that adapt between specific technology and your application layer. In your case your WebAPI layer is an example of an adaptor.
I would recommend creating a new SignalR adaptor - you can consider it at the same 'level' as the WebAPI adaptor (although in ports and adaptor parlance it's an 'output' adaptor, so you might diagram it on the bottom right of the onion).
In terms of the location of your background process - personally I would not consider that a part of the application layer, as it does not guide any use cases or process flows in your application. So, you could put it in your SignalR adaptor, or create a new dedicated component for it.
That said, you may find another concept from DDD useful - DomainEvents - as they could remove the need for the background thread altogether. In your SignalR adaptor, include event handlers that register to handle DomainEvents, and in those handlers propagate the information about the event via SignalR to your client-side presentation layer - no need to poll the database at all! (Warning - depending on your domain event implementation, you may need to consider the risk of advertising events via SignalR before the aggregate is successfully persisted... but that's a whole 'nother topic.)
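A minimal sketch of such a handler, assuming a simple in-process dispatcher (IDomainEventHandler, OrderShipped, and the client method name are all made-up names, and DataHub reuses the ASP.NET Core SignalR hub context from the sketch above):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical dispatcher contract and domain event.
public interface IDomainEventHandler<in TEvent>
{
    Task HandleAsync(TEvent evt);
}

public record OrderShipped(Guid OrderId);

// Lives in the SignalR adaptor; pushes the event to clients as it happens.
public class OrderShippedSignalRHandler : IDomainEventHandler<OrderShipped>
{
    private readonly IHubContext<DataHub> _hubContext;

    public OrderShippedSignalRHandler(IHubContext<DataHub> hubContext) =>
        _hubContext = hubContext;

    public Task HandleAsync(OrderShipped evt) =>
        _hubContext.Clients.All.SendAsync("orderShipped", evt.OrderId); // no polling
}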
I'm writing a Single Page Application with Durandal, and I'm planning on using SignalR for some functionality. First of all, I have a top bar that listens for notifications that the server may send. The site starts a connection to the "TopBarNotificationHub".
On one of the pages I want to connect to another hub, as two users might edit the data on this page simultaneously, and I want to notify them if someone updated the data. No problem, this works fine.
But when leaving that page I want to disconnect from ONLY the second hub, and I can't find a way to accomplish this. If I just say hub.connection.stop(); the connection to the TopBarNotificationHub also stops (as it's shared).
Is there a way to tear down just one hub proxy and let the other keep working?
As this is a SPA, the "shell" is never reloaded, so it doesn't connect to the hub again... I might be able to force a reconnect every time another page disconnects from a hub, but there might be a cleaner solution...
Thanks in advance.
//J
If you use multiple hubs on a single page, that's fine; they share the same connection, so the extra hub doesn't take up more resources on the client beyond receiving its updates.
Therefore to "connect and disconnect to/from a hub" you need to slightly rearchitect. If you use Groups instead of Clients on the server side you can "register" with a Hub by calling a (for example) Hub1.Register method and sticking the client in the relevant group in that method. To "deregister" you call a (for example) Hub1.DeRegister and remove the client's ConnectionId from the group in that method. Then, when you push updates to clients, you can just use the Group instead of Clients.All.
(C# assumed for server language as you didn't specify in your tag)
To add a client to the hub group: Groups.Add(Context.ConnectionId, groupNameForHub);
To remove a client from the hub group: Groups.Remove(Context.ConnectionId, groupNameForHub);
To broadcast to that Hub's clients: Clients.Group(groupNameForHub).clientMethodName(param1, param2);
Just to make it confusing, though, note that the group named "myGroup" in Hub1 is separate from the group named "myGroup" in Hub2.
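Pulling those snippets together, a sketch of what such a hub might look like on the server (SignalR 2.x; the hub name, group name, and client method are just examples):

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class PageDataHub : Hub
{
    private const string GroupNameForHub = "PageDataSubscribers";

    // The client calls this when entering the page.
    public Task Register()
    {
        return Groups.Add(Context.ConnectionId, GroupNameForHub);
    }

    // The client calls this when leaving the page; the shared connection stays up.
    public Task DeRegister()
    {
        return Groups.Remove(Context.ConnectionId, GroupNameForHub);
    }

    // Broadcast only to clients currently "registered" with this hub.
    public void NotifyDataChanged(int recordId)
    {
        Clients.Group(GroupNameForHub).dataChanged(recordId);
    }
}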
This is the exact approach recommended in the documents (copied here in case they move/change in later versions):
Multiple Hubs
You can define multiple Hub classes in an application. When you do that, the connection is shared but groups are separate:
• All clients will use the same URL to establish a SignalR connection with your service ("/signalr" or your custom URL if you specified one), and that connection is used for all Hubs defined by the service.
There is no performance difference for multiple Hubs compared to defining all Hub functionality in a single class.
• All Hubs get the same HTTP request information.
Since all Hubs share the same connection, the only HTTP request information that the server gets is what comes in the original HTTP request that establishes the SignalR connection. If you use the connection request to pass information from the client to the server by specifying a query string, you can't provide different query strings to different Hubs. All Hubs will receive the same information.
• The generated JavaScript proxies file will contain proxies for all Hubs in one file.
For information about JavaScript proxies, see SignalR Hubs API Guide - JavaScript Client - The generated proxy and what it does for you.
• Groups are defined within Hubs.
In SignalR you can define named groups to broadcast to subsets of connected clients. Groups are maintained separately for each Hub. For example, a group named "Administrators" would include one set of clients for your ContosoChatHub class, and the same group name would refer to a different set of clients for your StockTickerHub class.
I am thinking about the scenario where hundreds of calls need to be made to the same web service in rapid succession, with differing parameters. Both the client and the service are written in ASP.NET. The client class is the one automatically generated from the WSDL.
Leaving aside the questions of whether to make asynchronous calls, use parallel threads, or whether the service can handle that many hits, I have a question about performance.
Re-using an instance of the web service client class for all the calls will save the cost of re-creating and tearing down the client instance for each call. I already know that. But are there any other performance advantages to re-using that instance? Does anything about the communication with the service (or processing the results) run more quickly if the same instance of the client is used for every call?
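For concreteness, the reuse pattern in question might look like this (the client type, operation, and data are hypothetical; generated WCF clients derive from ClientBase, so Close/Abort behave as shown):

using System.ServiceModel;

var cities = new[] { "Oslo", "Bergen" };  // placeholder parameters
var client = new WeatherServiceClient();  // hypothetical generated client
try
{
    foreach (var city in cities)
    {
        // Same instance, so the underlying channel (and, over HTTP,
        // the keep-alive connection) can be reused for every call.
        var forecast = client.GetForecast(city);
    }
    client.Close();
}
catch (CommunicationException)
{
    client.Abort(); // a faulted proxy can't be closed normally
}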
Are you creating a dynamic client that can invoke any WSDL at runtime? If not, it isn't necessary to maintain a client instance on the server; the client instance by itself does not affect the server's performance.
I've been working on building a set of enterprise services using WCF 4 within my organization and could use some guidance. The setup/architecture I've designed thus far is similar to a lightweight custom ESB. I have one main "broker" service (using wsHttp), that connects to three underlying netTcp services. Both the broker and underlying services share a common assembly that contains the model, as well as the contract interfaces. In the broker service I can choose which operations from the underlying services I want to expose. The idea is that potentially we can have a core of set of services and few different brokers on top of them depending on the business need. We plan on hosting everything (including the netTcp services) in IIS 7.5 leveraging AppFabric and WAS.
Here's my question: is such a design good practice, and will it scale? These services should be able to handle thousands of transactions per day.
I've played around with the routing in WCF 4 in lieu of the broker service concept I've mentioned; however, I have not seen much value in it, as it simply does a redirect.
I'm also trying to figure out how to optimize the proxies that the broker service (assuming this practice is advisable) has to the underlying services. Right now I simply have the proxies as private members within the brokers main class. Example:
private UnderlyingServiceClient _underlyingServiceClient = new UnderlyingServiceClient();
I've considered caching the proxy; however, I am concerned that if I run into a fault, the entire proxy at that point would be faulted and could not be reused (unless I catch the fault and simply re-instantiate, as sketched below).
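A minimal catch-and-reinstantiate sketch, reusing the private member above (checking the channel state before each call is an assumption about where you'd hook this in):

private UnderlyingServiceClient GetClient()
{
    if (_underlyingServiceClient.State == System.ServiceModel.CommunicationState.Faulted)
    {
        _underlyingServiceClient.Abort(); // a faulted proxy can't be Close()d
        _underlyingServiceClient = new UnderlyingServiceClient();
    }
    return _underlyingServiceClient;
}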
My goal with these services is to ensure that the client consuming them can get "in and out" as quickly as possible. A quick request-reply.
Any input/feedback would be greatly appreciated.
If I understand you correctly, you have a handful of "backend" services, possibly on separate computers. Then you have one "frontend" service, which basically acts like a proxy to the backend, but is fully customizable in code. We are doing this exact setup with a few computers in a rack. Our frontend is IIS7; the backend is a bunch of WCF services on several machines.
One: will it scale? Well, adding more processing power on the backend is pretty easy, and writing some load-balancing code isn't too bad either. For us, the problem was that the frontend was getting bogged down, even though it was only acting as a proxy. We ended up adding a couple more frontend computers, "brokers" as you call them. That works very well. People have suggested that I use Microsoft Forefront for automatic load balancing, but I have not researched it yet.
Two: should you cache the proxy? I would say definitely yes, but it kind of sucks. These channels DO fault occasionally. I have a thread always running in the background; every 3 seconds it wakes up and checks all the WCF services and WCF clients in the app. Any that are faulted get destroyed and recreated.
Check host channels: ...
while (true)
{
    try
    {
        // Recreate the service host if it is no longer open (e.g. faulted).
        if (MyServiceHost.State != System.ServiceModel.CommunicationState.Opened) { ReCreate(); }
    }
    catch { } // ignore and retry on the next pass
    System.Threading.Thread.Sleep(3000);
}
Check client channels: ...
// The channel factory is cached once and reused to recreate channels cheaply.
private static ChannelFactory<IMath> mathClientFactory =
    new ChannelFactory<IMath>(bindingHttpBin);

while (true)
{
    try
    {
        // If the cached client channel has faulted, build a replacement
        // from the cached factory.
        if (MyServiceClient.State == System.ServiceModel.CommunicationState.Faulted)
        {
            EndpointAddress ea = new EndpointAddress(ub.Uri);
            ch = mathClientFactory.CreateChannel(ea);
        }
    }
    catch { } // ignore and retry on the next pass
    System.Threading.Thread.Sleep(3000);
}
On the client, I not only cache the channel but also cache the ChannelFactory. This is just for convenience, though, to make the code for creating a new channel shorter.