Does SignalR (v2+) work with Sticky Sessions without a Backplane?

Is using Sticky Sessions a supported scale out scenario? Has anyone deployed SignalR with Sticky Sessions and were there any unexpected issues?
We're investigating SignalR for a load-balanced, broadcast-based project (similar to a stock ticker) where message latency is an important factor. After reading through the Scale Out documentation, it seems the backplane model could introduce significant latency, especially when message rates are high.
I've found some references implying that it works, with some side effects, but nothing on what the reliability and performance implications are.
Thanks!

If you use SignalR without a backplane, any client method invocation will only be able to reach clients connected directly to the server making the invocation.
This might be fine if you are only using Clients.Caller, since the caller should always come back to the same server given sticky sessions. It becomes an issue if you are using Clients.All, Clients.Others, Clients.Client(connectionId), Clients.User(userName), Clients.Group(groupName), etc. In these cases, any client connected to a server other than the one executing the Clients... code will not receive the invocation, regardless of whether that client is connected to the same Hub, has the right connectionId, and so on.
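To make the distinction concrete, here is a minimal SignalR 2 hub sketch; the hub name, method names, and client callbacks are hypothetical:

```csharp
using Microsoft.AspNet.SignalR;

public class TickerHub : Hub
{
    public void RequestSnapshot()
    {
        // Safe without a backplane: the response travels back over the
        // caller's own connection, which sticky sessions pin to this server.
        Clients.Caller.receiveSnapshot(GetSnapshot());
    }

    public void PublishUpdate(string payload)
    {
        // NOT safe without a backplane: only clients connected to THIS
        // server instance receive the broadcast; clients attached to other
        // servers behind the load balancer are silently skipped.
        Clients.All.receiveUpdate(payload);
    }

    // Placeholder for whatever state the application would snapshot.
    private object GetSnapshot() { return new { }; }
}
```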

Related

How to call BizTalk orchestrations without using the MessageBox

Is there any way to call a BizTalk orchestration without placing a message in the MessageBox? The point is to use an orchestration that is stored and configured in BizTalk but avoid the performance loss of using a database to trigger it.
The message box is an integral part of BizTalk Server; no transaction can occur without it. In most cases it works great, and the message box provides many benefits for message delivery and processing. If you are having performance issues, I would recommend measuring your solution's performance and identifying the bottlenecks. Some key areas to look at:
Orchestration persistence points.
BizTalk host settings: reducing the polling intervals for messaging and orchestration from 500 ms to 50 ms does help.
If the message box is a bottleneck (which is usually not the case unless your volume is very high), add slave message boxes. BizTalk allows you to scale out the message box by adding slave message boxes: one message box serves as the master and the rest as slaves to process requests. (See "Scale out message box" in the documentation.)
The answer to the question is no, but you are probably laboring under a false assumption.
There is no "performance loss" due to the MessageBox. If you can prove the MessageBox causes you to miss an SLA, then you should be considering a completely different app platform, such as a Windows Service. However, many of us have implemented very low-latency apps with BizTalk without issue.
So, unless your SLA approaches the definition of "real-time", I wouldn't worry about it.

Load balanced SignalR fails on start. Will Redis backplane fix?

I'm having issues with SignalR failing to complete its connection cycle when running in a load balanced environment. I’m exploring Redis as a means of addressing this, but want a quick sanity check that I’m not overlooking something obvious.
Symptoms –
Looking at the network traffic, I can see the negotiate and connect requests made, via XHR and websockets respectively, which is what I’d expect. However, the start request fails with
Error occurred while subscribing to realtime feed. Error: Invalid start response: ''. Stopping the connection.
And an error message of ({"source":null,"context":{"readyState":4, "responseText":"","status":200, "statusText":"OK"}})
As expected, this occurs when the connect and start requests are made to different servers. It also works 100% of the time in a non-load-balanced environment.
Is this something that a Redis backplane will fix? It seems to make sense, but most of the rationale I've seen for adding a backplane is around hub messages getting lost, not connections failing to be made, so I'm wondering if I'm overlooking something basic.
Thanks!
I know this is a little late, but I believe the backplane only allows you to send messages between the different servers' user pools; it doesn't have any effect on how connections are made or closed.
Yes, you are right. The backplane acts as a message bus to pass messages to the other servers behind the load balancer. Connection affinity on the load balancer is a different and tricky topic, for which I am also looking for an answer.
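For reference, registering the Redis backplane in SignalR 2 looks roughly like this (the server name, port, password, and event key below are placeholders). As the answers note, this distributes hub messages across servers; it does not remove the load balancer's need for session affinity during the negotiate/connect/start handshake.

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Register the Redis backplane before mapping SignalR.
        // "redis-server", 6379, "password" and "AppName" are placeholders.
        GlobalHost.DependencyResolver.UseRedis(
            "redis-server", 6379, "password", "AppName");
        app.MapSignalR();
    }
}
```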

SignalR scaleout/backplane in server broadcast implementation - Will it not cause duplicate messages to clients?

The SignalR documentation says that scaleout/backplane works well for server-broadcast types of load/implementation. However, I suspect that in the case of pure server broadcast it will cause duplicate messages to be sent to the clients. Consider the following scenario:
I have two instances of my hub sitting on two web servers behind a load balancer on my web farm.
The hub on each server implements a timer for database polling to fetch some updates and broadcast to clients in groups, grouped on a topic id.
The clients for a group/topic might be divided between the two servers.
Both the hub instances will fetch the same or overlapping updates from the database.
Now as each hub sends the updates to clients via the backplane, will it not result in duplicate updates sent to the clients?
Please suggest.
The problem is not with SignalR, but with your database polling living inside your hubs. A backplane deals correctly with broadcast replication, but if you add another responsibility to your hubs, it's a different story. That part is what duplicates messages, not SignalR, because you now have N pollers broadcasting across all server instances.
You could, for example, move that logic out of the hubs into something else, and let just one instance of your server application use this new piece to generate messages by polling, perhaps using a piece of configuration to decide which instance. That way you would send messages only from there, and SignalR's backplane would take care of the replication. It's a very basic suggestion and could be done differently, but the key point is that your poller should not be replicated, and that is not directly related to SignalR.
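A minimal sketch of that suggestion, assuming a single designated instance hosts the poller and broadcasts through the hub context (the class, hub, and method names here are hypothetical, and the database query is a placeholder):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using Microsoft.AspNet.SignalR;

// Hypothetical poller hosted by exactly ONE server instance (chosen, for
// example, via a configuration flag); the other instances run SignalR only.
public class TopicPoller
{
    private readonly Timer _timer;

    public TopicPoller(TimeSpan interval)
    {
        _timer = new Timer(Poll, null, TimeSpan.Zero, interval);
    }

    private void Poll(object state)
    {
        // Broadcast from outside the hub class via the hub context.
        var context = GlobalHost.ConnectionManager.GetHubContext<TopicHub>();
        foreach (var update in FetchUpdatesFromDatabase()) // placeholder query
        {
            // The backplane forwards this to every server, so each group
            // member receives exactly one copy.
            context.Clients.Group(update.TopicId).receiveUpdate(update.Payload);
        }
    }

    // Placeholder for the existing database-polling query.
    private IEnumerable<(string TopicId, string Payload)> FetchUpdatesFromDatabase()
    {
        yield break;
    }
}
```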
It's also true that polling might not be the best way to deal with your scenario, but IMO that would be answering a different question.

SignalR pinging clients

I feel like I'm getting mixed messages about SignalR's disconnection functionality, and I'm trying to figure out who is right (these packages move so fast that it's hard to tell what information is current; something you find online could be two months old and outdated).
I've seen many people set up pinging code to tell whether a client is still connected. Yet I see others talking about the Disconnect() function that gets fired on the Hub when a client disconnects. Then some say the Disconnect() method isn't reliable?
Does anyone have the details on this as it stands today? Should I avoid the Disconnect() method because in some cases (which maybe I haven't run into yet) it's not reliable? It's confusing trying to search for information when these things change so often, invalidating older information you find on the web.
There might be a couple of edge cases where you don't get timely notifications, but in general it is reliable. We also raise disconnect events on the client, and we have keep-alive functionality which ensures that if the client doesn't hear from the server within a specified timeout, we will try to reconnect and ultimately disconnect if the reconnects fail. You can therefore take appropriate action on the client.
You can read more about this here http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server#connectionlifetime
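The keep-alive and disconnect timeouts described above are configurable at application startup. A minimal SignalR 2 sketch; the values are illustrative, not recommendations:

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // DisconnectTimeout must be assigned before KeepAlive, and
        // KeepAlive cannot exceed one third of DisconnectTimeout.
        GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
        GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);
        app.MapSignalR();
    }
}
```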

Sending data between .net applications over the internet

I am sending small messages consisting of XML (about 1–2 KB each) across the internet from a Windows application to an ASP.NET web service.
99% of the time this works fine, but sometimes a message will take an inordinate amount of time to arrive: 25–30 seconds instead of the usual 4–5 seconds. This delay also causes messages to arrive out of sequence.
Is there any way I can solve this issue so that all the messages arrive quickly and in sequence, or is that impossible to guarantee when using a web service in this manner?
If it's not possible to resolve, can I please get recommendations for a low-latency messaging framework that can deliver messages in order over the internet?
Thanks.
"Is there any way I can solve this issue so that all the messages arrive quickly and in sequence, or is that impossible to guarantee when using a web service in this manner?"
Using just web services this is not possible. You will always run into situations where something occasionally takes much longer than it "should". This is the nature of network programming, and you have to work around it.
I would also recommend using XMPP for something like this. Have a look at xmpp.org for info on the standard, and at jabber-net for a set of client libraries for .NET.
Well, this is a little off target, but have you looked into the XMPP (Jabber) protocol?
It's the messaging system that GTalk uses, and it's quite simple to use. The only downside is that you will need a stateful service to receive and process the messages.
I also agree with @Mat's comment. It was the first solution that came to mind; then I remembered that I used XMPP in the past to accomplish fast, small, and reliable messaging between servers.
http://xmpp.org/about-xmpp/
If you search Google you will easily find .NET libraries that support this protocol,
and there are plenty of free Jabber servers out there.
One way to ensure your messages are sent in sequence and resolved together is to make one call to the web service with all interdependent messages as a single batch.
Traditionally, when you make a call to a web service, you do not expect other calls to that service to occur in a specific order. It sounds like there is an implicit sequence the data needs to arrive in at the destination application, which makes me think you need to group your messages together and send them together to guarantee that order.
No matter how fast the messaging framework, you cannot prevent a race condition that sends messages out of order unless you send one message that carries your data in the correct order.
If you are sending messages in a sequence across the internet, you can never know how long a message will take to travel from one point to another. One possible solution is to include in each message its position in the sequence, and to implement logic at each endpoint to order the messages before processing them. If you receive a message out of sequence, you can wait for the missing message, or ask the other endpoint to resend it.
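The sequence-number idea above can be sketched as a small re-ordering buffer on the receiving side; the class and member names here are illustrative:

```csharp
using System.Collections.Generic;

// Minimal re-ordering buffer: each incoming message carries a sequence
// number, and the receiver releases messages strictly in order, holding
// any that arrive early until the gap before them is filled.
public class SequenceBuffer<T>
{
    private readonly SortedDictionary<long, T> _pending =
        new SortedDictionary<long, T>();
    private long _next = 1; // next sequence number to release

    // Accepts one message and yields every message that is now
    // deliverable in order (possibly none, possibly several).
    public IEnumerable<T> Accept(long sequence, T message)
    {
        _pending[sequence] = message;
        while (_pending.TryGetValue(_next, out var value))
        {
            _pending.Remove(_next);
            _next++;
            yield return value;
        }
    }
}
```

If a gap persists (a missing sequence number never turns up), the caller would add a timeout to request a resend from the sender, as suggested above.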
