SignalR backplane scaleout

I am using a database named MyServiceDB for SignalR scaleout in my application.
Another application that also uses SignalR wants to use my database for scaleout.
Will there be any performance loss or delay when different applications share a database for scaleout?
Should each app use its own database for scaleout?

Don't use the same database to scale out separate SignalR applications. Each application will try to initialize the database and may drop tables the other application created. SignalR also assumes that there is a global, monotonically increasing cursor pointing to the last message; I don't think you can guarantee this with two separate applications. You may also hit strange issues such as messages meant for one application being delivered to the other, missing messages, or seeing the same message multiple times.
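To make the separation concrete, here is a minimal sketch of pointing each application at its own scaleout database with the SignalR 2.x SQL Server backplane. The connection string name MyServiceDB comes from the question; the Startup shape is the usual OWIN boilerplate, not code from this thread:

```csharp
using System.Configuration;
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Each application gets its own scaleout database, so neither
        // app's backplane initialization can drop the other's tables
        // and each keeps its own message cursor.
        string scaleoutDb = ConfigurationManager
            .ConnectionStrings["MyServiceDB"].ConnectionString;
        GlobalHost.DependencyResolver.UseSqlServer(scaleoutDb);
        app.MapSignalR();
    }
}
```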

Related

Make a .NET Core service run on multiple machines to make it highly available, but have the work done by only one node

I have a .NET Core application that consists of some background tasks (hosted services) and Web APIs (which control and get the statuses of those background tasks). Other applications (e.g. clients) communicate with this service through these Web API endpoints. We want this service to be highly available, i.e. if one instance crashes then another instance should take over the work automatically. The client applications should also switch to the next service automatically (clients should call the APIs of the new instance instead of the old one).
The other important requirement is that the task (computation) this service performs in the background can't be shared between two instances. We have to make sure only one instance does this task at a given time.
What I have done so far: I run two instances of the same service and use a SQL Server-based distributed locking mechanism (SqlDistributedLock) to acquire a lock. If a service can acquire the lock, it goes and does the operation while the other node waits to acquire the lock. If one service crashes, the next node is able to acquire the lock. On the client side, I used a Polly-based retry mechanism to switch the calling URL to the next node until it finds the working node.
But this design has an issue: if the node that acquired the lock loses connectivity to the SQL server, the second service manages to acquire the lock and starts doing the work while the first service is still in the middle of doing the same.
I think I need some sort of leader election (it seems I have done it wrongly). Can anyone help me with a better solution for this kind of problem?
This problem is not specific to .NET or any other framework, so please make your question more general to make it more accessible. Generally, the solution to this problem lies in the domain of Enterprise Integration Patterns, so consult those references, as the status quo may change.
At first sight, and based on my own experience developing distributed systems, I suggest two solutions:
use a load balancer or gateway to distribute requests between your service instances;
use a shared message queue broker to put requests in and let each service instance dequeue a request for processing.
Either is fine; I use both in my own designs.
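Neither option by itself closes the split-brain window the question describes (a node that loses SQL connectivity while still working). For that, the lock needs to behave like a lease the holder must keep renewing. Below is a minimal sketch of that pattern, not from the thread; ILeaseStore and its methods are hypothetical placeholders for a SQL- or Redis-backed lease store:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public interface ILeaseStore
{
    Task<bool> TryAcquireAsync(string name, TimeSpan duration);
    Task<bool> TryRenewAsync(string name, TimeSpan duration);
}

public class LeaderWorker
{
    private static readonly TimeSpan LeaseDuration = TimeSpan.FromSeconds(30);
    private static readonly TimeSpan RenewInterval = TimeSpan.FromSeconds(10);

    public async Task RunAsync(ILeaseStore leases, Func<CancellationToken, Task> work)
    {
        while (true)
        {
            // Try to become the leader by acquiring the lease.
            if (!await leases.TryAcquireAsync("background-task", LeaseDuration))
            {
                await Task.Delay(RenewInterval);
                continue;
            }

            using (var cts = new CancellationTokenSource())
            {
                Task workTask = work(cts.Token);

                // Renew well before the lease expires; if a renewal fails
                // (e.g. we lost connectivity to SQL Server), cancel our own
                // work *before* another node can acquire the expired lease.
                while (!workTask.IsCompleted)
                {
                    await Task.Delay(RenewInterval);
                    if (!await leases.TryRenewAsync("background-task", LeaseDuration))
                    {
                        cts.Cancel();
                        break;
                    }
                }

                try { await workTask; } catch (OperationCanceledException) { }
            }
        }
    }
}
```

The key design point is that the renew interval is much shorter than the lease duration, so a node that cannot reach the lease store stops its work before its old lease can expire and be taken over by another node.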

SignalR: stop "old" connections

I am facing an issue with old SignalR connections. The workflow is like this:
1. While debugging a web project in VS2015, SignalR creates some websockets which send data (status information and data). Nothing fancy.
2. I change some code, rebuild the project and load the web project again (in a new tab; the old tab still exists).
3. An initialization method gets called in the code (setting some database connection strings, loading values, ...).
Here is my problem: just before #3 happens (the database connection strings are not initialized yet, ...), a websocket poll comes in from the old Chrome tab. The poll tries to get some data, and the application crashes because the initialization isn't done yet: a database connection cannot be made yet, and so on.
How would you handle this? Simply use more "if-initialization-is-done-then-..." statements? Or is there a neat trick to handle this in SignalR?
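One neat trick (a minimal sketch, not from the thread): expose the one-time initialization as a Task and have hub methods await it, instead of scattering "if-initialized" checks. AppState, the hub, and its method names are all hypothetical:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public static class AppState
{
    // Set once at application start to the task performing the one-time
    // initialization (connection strings, preloaded values, ...).
    public static Task Initialization { get; set; } = Task.CompletedTask;
}

public class StatusHub : Hub
{
    public async Task<string> GetData()
    {
        // A poll from an "old" tab that arrives during a rebuild waits
        // here until initialization has finished, instead of crashing
        // on a missing database connection string.
        await AppState.Initialization;
        return LoadData();
    }

    private static string LoadData() => "..."; // hypothetical data access
}
```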

Performance impacts of running three simultaneous hub connections?

I am building a web application that currently utilizes two SignalR hubs:
ChatHub - User communication
ControlHub - User manipulates controls and receives responses from server
I want to add a third hub, GuideHub, that will be responsible for determining whether or not a user has completed a set of tasks they are assigned on the website. Technically, this hub will be active whenever ChatHub is active (they share a page element), but they serve thematically different purposes. Generally, users will only be actively communicating across one hub at a time.
I know that premature optimization is usually no good, but in this scenario I need to plan ahead for how these features will scale. Is this architecture scalable, or should I combine ControlHub and GuideHub to reduce the number of open connections users will have?
SignalR 2.x supports multiple hubs over one connection:
http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server#multiplehubs
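In other words, adding GuideHub does not open a third connection: all hub proxies created on one HubConnection share a single physical connection. A minimal sketch with the SignalR 2.x .NET client (hub names are from the question; the URL is a placeholder):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public static class HubClient
{
    public static async Task ConnectAllHubsAsync()
    {
        // One HubConnection = one physical connection; all three hub
        // proxies are multiplexed over it. Proxies must be created
        // before Start() is called.
        var connection = new HubConnection("http://example.com/");
        IHubProxy chat = connection.CreateHubProxy("ChatHub");
        IHubProxy control = connection.CreateHubProxy("ControlHub");
        IHubProxy guide = connection.CreateHubProxy("GuideHub");
        await connection.Start();
    }
}
```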

SignalR webfarm with backplane out of sync

We have a SignalR application that we built and tested for use on a single web server. The requirements have changed and we need to deploy the application to a web farm. SignalR supports several backplanes; since the application already uses SQL Server, that is the one we implemented. With the introduction of a second web node, we ran into an issue keeping the data that is cached within the hub synced between the nodes.
Each hub has an internal cache in the form of a DataSet:

```csharp
private static DataSet _cache;
```

The cache gets populated when a client first requests data; from there, any interaction updates the local cache and the SQL server, then notifies all connected clients of the changes.
The backplane handles broadcasting the messages to the clients, but the hub on the other node never receives such a message, so its cache goes stale.
Our first thought was that there might be a method we could wire up that would be triggered when the backplane sends a message to a node's clients, but we have not seen such a thing mentioned in the documentation.
Our second thought was to create a .NET client within the hub:
```csharp
private HubConnection _hubConnection;

private async Task ConnectHubProxy()
{
    // Connect back to this application's own hub as a .NET client.
    _hubConnection = new HubConnection("http://localhost/"); // this node's URL
    IHubProxy eventViewerHubProxy = _hubConnection.CreateHubProxy("EventViewerHub");
    eventViewerHubProxy.On<string>("BroadcastAcknowledgedEvents", eventIDs => UpdateCacheIsAcknowledged(eventIDs));
    eventViewerHubProxy.On<string>("BroadcastDeletedEvents", eventIDs => UpdateCacheIsDeleted(eventIDs));
    ServicePointManager.DefaultConnectionLimit = 10;
    await _hubConnection.Start();
}
```
Our questions:
How do we keep the cache in sync?
Is the first thought possible and we missed it in the documentation?
Are there any issues or concerns with having a hub connect to itself?
The recommended way to have "state" in a scaleout scenario is to have a source of persistence. So in your case, if you're looking to have a "global-like" cache, one way you can implement that is via a database.
With a shared database, all your nodes can write to and read from the same source and therefore share a global cache.
The reason an in-memory cache is not a good idea is that in a web farm, if a node goes down, it loses its entire in-memory cache. With persistence, it doesn't matter whether a node has been up for days or has just recovered from a "shutdown"-based failure; the persistence layer is always there.
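As a minimal sketch of that suggestion (not the answerer's code; EventRepository is a hypothetical data-access class, and the hub/method names come from the question), the hub writes to the shared database first and broadcasts second, so no node-local cache needs to be kept in sync:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class EventRepository
{
    // Hypothetical data access; in practice this runs an UPDATE against
    // the shared SQL Server database that every node reads from.
    public Task MarkAcknowledgedAsync(string eventIDs) => Task.CompletedTask;
}

public class EventViewerHub : Hub
{
    private static readonly EventRepository Events = new EventRepository();

    public async Task AcknowledgeEvents(string eventIDs)
    {
        // Persist first, so every node sees current state on its next read...
        await Events.MarkAcknowledgedAsync(eventIDs);
        // ...then broadcast; the backplane delivers this to clients on
        // every node, and no server-side in-memory state is involved.
        Clients.All.BroadcastAcknowledgedEvents(eventIDs);
    }
}
```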

What's the best paradigm or design pattern for exception handling in a mission-critical DB-driven web application?

I want to design a bullet-proof, fault-tolerant, DB-driven web application and was wondering how to architect it.
The system will have an ASP.NET UI, a web services middle tier, and a SQL 2005 back end. The UI and services will communicate using JSON calls.
I was wondering how to ensure transactions are committed and, if not, how to make the error bubble up and be logged. Ideally the action would be retried a couple of times at 5-minute intervals, the way an email app retries sending.
I was planning to use TRY...CATCH blocks in SQL and was wondering what the interface (or contract, if you will) between the SQL stored procs and the services that call them would look like and how it would function. This interface will play two roles: one is to pass params for the proc to function and return the expected results; the other is for the proc to return error information, maybe something like an error number and an error message.
My quagmire is how to structure this intelligently so that the services expect and react accordingly to both the data and the error info returned from the procs, and handle each appropriately.
Is there a framework for this? It seems very boilerplate.
You might consider looking into SQL Server Service Broker:
http://msdn.microsoft.com/en-us/library/ms345108%28v=sql.90%29.aspx
The unique features of Service Broker and its deep database integration make it an ideal platform for building a new class of loosely coupled services for database applications. Service Broker not only brings asynchronous, queued messaging to database applications but significantly expands the state of the art for reliable messaging.
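For the retry-and-error-contract part of the question, here is a minimal service-side sketch under stated assumptions: the proc uses TRY...CATCH and re-raises its errors, ADO.NET surfaces them as SqlException (whose Number and Message carry the proc's error info), and the service logs and retries a few times at 5-minute intervals. The proc name, parameter, and connection handling are hypothetical:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public static class OrderService
{
    public static void SaveOrder(int orderId, string connectionString)
    {
        const int maxAttempts = 3;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.SaveOrder", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@OrderId", orderId);
                    conn.Open();
                    cmd.ExecuteNonQuery(); // proc commits, or raises an error
                }
                return; // success
            }
            catch (SqlException ex) when (attempt < maxAttempts)
            {
                // The proc's "error contract": number and message arrive
                // on the exception. Log, wait, and retry, as the question
                // describes for email-style retry behavior.
                Console.Error.WriteLine($"Attempt {attempt}: {ex.Number} {ex.Message}");
                Thread.Sleep(TimeSpan.FromMinutes(5));
            }
        }
    }
}
```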
