How does Azure SignalR Service handle application server scaling?

We have an existing web service that we are modifying so that when certain events happen within the service, they can be published to interested users. We are using Azure SignalR Service as our mechanism for relaying messages from our service to those users. Currently, our architecture looks like this:
Our SignalR application server has only one hub, and we are currently running three instances of the application server. I have labeled these Hub Instance 01, Hub Instance 02, and Hub Instance 03 in the diagram above.
Each instance of our existing web service opens one connection to the Azure SignalR Service. After reading the Azure SignalR Service internals docs, I have come to understand that each client connection to the Azure SignalR Service goes through a one-time mapping to an application server (or Hub Instance, in this case). In the diagram, I have modeled that by showing a colored link coming from either an existing web service instance or a user, and another link of the same color and style coming out of the Azure SignalR Service and into a single Hub Instance.
Our primary concern is that the connection from each existing web service instance into the Azure SignalR Service (the solid green and solid blue links in the diagram) could become saturated if we try to send too many events. Our plan to mitigate that was to open multiple connections from each web service instance to the Azure SignalR Service. Then, within our code, we would simply round-robin through the connections as we send messages.
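For illustration, here is roughly what we had in mind, sketched with the ASP.NET Core SignalR client (Microsoft.AspNetCore.SignalR.Client); the hub URL and method name below are placeholders, not our real ones:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

public class RoundRobinPublisher
{
    private readonly HubConnection[] _connections;
    private int _next = -1;

    private RoundRobinPublisher(HubConnection[] connections) => _connections = connections;

    public static async Task<RoundRobinPublisher> CreateAsync(string hubUrl, int count)
    {
        var connections = new HubConnection[count];
        for (var i = 0; i < count; i++)
        {
            connections[i] = new HubConnectionBuilder()
                .WithUrl(hubUrl)          // e.g. "https://our-app/hubs/events" (hypothetical)
                .WithAutomaticReconnect()
                .Build();
            await connections[i].StartAsync();
        }
        return new RoundRobinPublisher(connections);
    }

    public Task PublishAsync(string eventName, object payload)
    {
        // Rotate through the connections; Interlocked keeps the index thread-safe.
        var index = (uint)Interlocked.Increment(ref _next) % (uint)_connections.Length;
        return _connections[index].InvokeAsync("PublishEvent", eventName, payload);
    }
}
```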
Our concern with that approach is that we don't know how those connections to the Azure SignalR Service will be mapped to Hub Instances. We could end up in a situation like the one below, where one or two Hub Instances take the brunt of our traffic.
In this diagram we can see:
Each instance of the existing web service has opened multiple connections to the Azure SignalR Service. Unfortunately, Hub Instance 01 and Hub Instance 03 have been assigned the majority of those connections. That means they'll take the brunt of our traffic and will eventually start to run hot.
This leads me to the following questions:
Is there anything we can do in our existing web service to make sure that the connections we establish to the Azure SignalR Service are spread evenly across the Hub Instances?
What can we do if one of our Hub Instances starts running hot? It seems like just adding another Hub Instance isn't going to help, because only new clients will be assigned to that instance. Is there a way to have the Azure SignalR Service rebalance connections when a new Hub Instance comes online?
How are client connections affected if an application server instance goes down (e.g., when deploying updates)? Are the client connections terminated, and is the client then expected to reconnect?
Within the Azure SignalR Service, how are connections balanced if the SignalR Service cluster itself needs to scale up or down?

We're facing a similar issue, and from what I've read in the Microsoft docs, they suggest incorporating a backplane using Redis or Service Bus into the architecture to manage the connections.
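For what it's worth, if you self-host ASP.NET Core SignalR (rather than letting the managed Azure service fan out messages for you), the Redis backplane is a one-line registration. A minimal sketch, assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis package; the connection string and hub are placeholders:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

public class EventsHub : Hub { }   // hypothetical hub

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR()
                .AddStackExchangeRedis("my-redis:6379"); // placeholder connection string
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapHub<EventsHub>("/hubs/events"); // placeholder route
        });
    }
}
```

With this in place, every server instance publishes through Redis, so a message sent from one instance reaches clients connected to any other instance.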

Related

Is there a limit on the number of stateless services in one Service Fabric application

I have to create 28 stateless services under one Service Fabric application. The role of each stateless service is just to listen to a Service Bus queue, retrieve the messages, and post them to a REST endpoint. Is there a hard limit on how many stateless services can be created in a single Azure Service Fabric application? Would we run into any memory issues by having numerous stateless services?
EDIT: We looked on the server itself and saw that each app takes up about 250 MB of memory.
Nothing comes for free. Each stateless service will use some memory, largely depending on what it is doing. There is no hard limit, but the size of the underlying machines and the scale of the cluster define the practical limits. What is the instance count of each service, anyway?
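For reference, the per-service loop the question describes might look roughly like the sketch below; the "what it is doing" part is what drives the memory footprint. This assumes the Azure.Messaging.ServiceBus package, and the connection string, queue name, and endpoint are all placeholders:

```csharp
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public class QueueForwarder
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task RunAsync()
    {
        await using var client = new ServiceBusClient("<service-bus-connection-string>");
        var processor = client.CreateProcessor("my-queue"); // placeholder queue name

        // Forward each queue message to the REST endpoint, then complete it.
        processor.ProcessMessageAsync += async args =>
        {
            var body = args.Message.Body.ToString();
            var content = new StringContent(body, Encoding.UTF8, "application/json");
            var response = await Http.PostAsync("https://example.com/api/ingest", content); // placeholder
            response.EnsureSuccessStatusCode();
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += args => Task.CompletedTask; // log in a real service

        await processor.StartProcessingAsync();
        await Task.Delay(Timeout.Infinite); // keep the service alive
    }
}
```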

Server to Server Communication

I'd like to know if there's a way to communicate directly between two (or more) Flask-SocketIO servers. I want to pass information between servers and have clients connect to a single WebSocket server, which would have all the combined logic and data from the other servers.
I found this example in JS Socket IO Server to Server, where the solution was to use a socket.io-client to connect to another server.
I've looked through the Flask-SocketIO documentation, as well as other resources; however, it doesn't appear that Flask-SocketIO has a client component.
Any suggestions or ideas?
Flask-SocketIO 2.0 can (maybe) do what you want. This is explained in the Using Multiple Workers section of the documentation.
Basically, the servers are configured to connect to a shared message queue service (redis, for example), and then a load balancer in front of them assigns clients to any of the servers in the pool using sticky sessions. Broadcasting operations are coordinated automatically among the servers by passing messages on the queue.
As an additional feature, if you use this setup, you can have any process connect to the message queue to post messages for clients; for example, you can emit events to clients from a worker or other auxiliary process that is not a SocketIO server.
From your question, it is unclear whether you were looking to implement something like this, or whether you wanted the servers to communicate for a different reason. Sending custom messages on the queue is currently not supported, but your question gave me the idea; it might be useful for some scenarios.
As far as using a SocketIO client, as in the question you referenced, that should also work. You can use this Python package: https://pypi.python.org/pypi/socketIO-client. If you go this route, you can have a server act as a client to receive events or join rooms.

Solution for simple grid computing in local network

I'd like to develop a simple solution using .NET for the following problem:
We have several computers in a local network:
10 client computers that may need to execute a program that is only installed on two workstations
The two workstations that are only used to execute the defined program
A server that can be used to install a service available from all previously described computers
When a client computer needs to execute the program, it would send a request to the server; the server would distribute the job to a workstation when one is available for execution, and inform the client computer when the execution has been performed.
I'm not very experienced with web and service development, so I'm not sure if this is the best way to go, but below is a possible solution I thought about:
A web service on the server stores the list of tasks, with their status, in queues or in a database
The client computer calls the web service to execute a program and gets a task ID. It then calls the web service every second with the task ID to find out whether the execution has been performed.
The workstations that are available call the web service every second to ask whether there is something to execute. If there is, the server assigns the task, and the workstation calls the web service when the execution is complete.
I summarized this in the below figure:
Can you think of a simpler solution?
Have a look at SignalR! You could use it as a messaging framework, and you would not need to poll the service from two different directions. With SignalR you would be able to push execution orders to the service, and the service would notify the client once the execution has been processed. The workstations would be connected with SignalR, too. They would not need to ask for execution orders, as the web service would be able to push execution orders to either all workstations or a specific one.
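A rough sketch of what that hub could look like with ASP.NET Core SignalR; all names (hub, group, methods) are illustrative, and a real implementation would track which workstation is free and target it with Clients.Client(connectionId) instead of broadcasting to the whole group:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class JobHub : Hub
{
    // Workstations register themselves so the server can push work to them.
    public Task RegisterWorkstation() =>
        Groups.AddToGroupAsync(Context.ConnectionId, "workstations");

    // A client computer asks for a program run; the server pushes the order
    // to the workstations instead of having them poll every second.
    public Task RequestExecution(string taskId) =>
        Clients.Group("workstations").SendAsync("ExecuteJob", taskId);

    // A workstation reports completion; the server notifies the clients.
    public Task ReportCompleted(string taskId) =>
        Clients.All.SendAsync("JobCompleted", taskId);
}
```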

SAP receive adapter high availability

We have an active-active BizTalk cluster with Windows Server as a software load balancer. The solution includes a SAP receive adapter accepting inbound RFC calls. The goal is to make the SAP adapter highly available.
Reading the documentation, it does say 'You must always cluster the SAP receive adapter to accommodate a two-phase commit scenario' and 'hosts running the receive handlers for FTP, MSMQ, POP3, SQL, and SAP require a clustering mechanism to provide high availability.'
Currently, we have a host instance enabled on both nodes of the active-active BizTalk cluster. Referring to the documentation above, does that mean we did it incorrectly? Should we use a clustered host instance instead of the active-active deployment?
Thanks for all the help in advance.
You need to cluster the host that handles the SAP receive. What this means is that you will always have only one instance of the adapter running at any given time, and if one of the servers goes down, the other will pick up.
Compare this with your scenario, where you simply have two (non-clustered) instances running concurrently: yes, this gives you high availability, but also deadlocks! The two will run independently of each other. With the clustered scenario above, they will run one at a time.
To cluster the SAP receive host: open the admin console, find the host, right-click it, and choose Cluster.

How to make a Windows Service listen for additional requests while it is already processing the current request?

I need to build a Windows service in VB.NET under Visual Studio 2003. This Windows service should read a flat file (a huge file of about a million records) from a local folder and upload it to the corresponding database table. This should be done in rollback mode (a database transaction). While transferring data to the table, the service should also listen for additional client requests, so that if a client requests a cancel operation partway through, the service can roll back the transaction and give feedback to the client. The Windows service also continuously writes status and error records to two log files.
My client is an ASPX page (a website).
Can somebody explain how to organize and achieve this functionality in a Windows service (processing and listening for additional client requests simultaneously, e.g. a cancellation request)?
Could you also suggest the ideal way of achieving this (whether it is best to implement it as a web service, a Windows service, a remote object, or some other way)?
Thank you all for your help in advance!
You can architect your service to spawn "worker threads" that do the heavy lifting, while the main service simply listens for additional requests. Because future calls are likely to have to deal with the current worker, this may work better than, say, architecting it as a web service using IIS.
The way I would set it up: the service's main thread listens on a port or pipe for communication. When it gets a call to process data, it spawns a worker thread, giving it a "status token" (which could be as simple as a reference to a boolean variable) that the worker checks at regular intervals to make sure it should still be running. The worker thread kicks off, and the service goes back to listening (network classes maintain a buffer of received data, so calls will only fail if they time out).
If the service receives a call to abort, it sets the token to a "cancel" value. The worker thread reads this value on its next poll, gets the message, rolls back the transaction, and dies.
This can be set up with multiple workers processing multiple files at once, with each worker tied to a caller keyed by IP address or by some unique "session" identifier you pass back and forth.
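Here is a sketch of that status-token pattern in modern C#. The question targets VB.NET on VS2003, where the token would just be a shared boolean field; the CancellationTokenSource, names, and unit of work here are illustrative:

```csharp
using System.Threading;
using System.Threading.Tasks;

public class UploadService
{
    private CancellationTokenSource _cts;

    // Called when the "process file" request arrives.
    public void StartUpload()
    {
        _cts = new CancellationTokenSource();
        var token = _cts.Token;

        Task.Run(() =>
        {
            // using var tx = BeginTransaction();  // open the DB transaction here (hypothetical)
            for (var i = 0; i < 1_000_000; i++)    // one iteration per record
            {
                if (token.IsCancellationRequested)
                {
                    // tx.Rollback();              // undo everything done so far
                    return;                        // worker dies; caller gets feedback
                }
                // insert record i, write status to the log files, etc.
            }
            // tx.Commit();
        });
        // The main thread returns immediately and keeps listening for requests.
    }

    // Called when a cancel request arrives on the listener thread.
    public void CancelUpload() => _cts?.Cancel();
}
```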
You can design your service the way FTP does. FTP uses two ports: one for commands and another for data transfer.
You could have two classes, one for command parsing and another for data transfer, each running on its own thread.
Use a communication channel (such as a shared queue) between the threads. You can use System.Collections.Concurrent if you move to .NET 4.0, along with newer threading features like CancellationTokens.
WCF has advantages over a plain web service, but comparing it to a Windows service requires more details about your project. In general, WCF is easier to implement than a Windows service.
