SignalR: managing connections in an external datastore

We are looking for a way to have a background process push out messages to the connected clients.
The approach we are taking is that whenever a new connection is established (OnConnected), we store the connectionId along with some request metadata (for later filtering) in our MongoDB. When an event happens (triggered from a client or a backend process), a worker role (another background process) listens for those events (via messaging or whatever) and, based on the event details, filters the connected clients using the captured metadata.
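In code, the wiring looks roughly like this (a sketch of the approach described above, assuming SignalR 2.x and the MongoDB C# driver; the collection and field names are invented for illustration):

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;
    using MongoDB.Bson;
    using MongoDB.Driver;

    public class NotificationHub : Hub
    {
        private static readonly IMongoCollection<BsonDocument> Connections =
            new MongoClient("mongodb://localhost").GetDatabase("app")
                .GetCollection<BsonDocument>("connections");

        public override async Task OnConnected()
        {
            // Store the connectionId along with request metadata for later filtering.
            await Connections.InsertOneAsync(new BsonDocument
            {
                { "connectionId", Context.ConnectionId },
                { "userAgent", Context.Headers["User-Agent"] ?? "" },
                { "lastSeen", DateTime.UtcNow }
            });
            await base.OnConnected();
        }

        public override async Task OnDisconnected(bool stopCalled)
        {
            // Best effort only: this never runs if the server itself dies,
            // which is exactly the stale-entry problem described below.
            await Connections.DeleteOneAsync(
                Builders<BsonDocument>.Filter.Eq("connectionId", Context.ConnectionId));
            await base.OnDisconnected(stopCalled);
        }
    }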
The approach seems to be OK, but we have a problem when:
1. the SignalR server goes down,
2. before the server comes back up, the client disconnects (closes the browser or whatever), and
3. the SignalR server comes back up.
We are then left with connections in MongoDB whose connection status we don't know.
I am wondering if there is a better way to do this. The goal is to be able to target specific connected clients to push messages to from a backend service (worker role).
By the way, we are using the scaleout option with the Service Bus backplane.

The guide Mapping SignalR Users to Connections goes over several options you have for managing connections.
The approach you are currently taking falls under the "Permanent, external storage" option.
If you want/need to stick with that option, you could add some sort of cleanup procedure that periodically removes connections from your database that have been inactive for longer than a specified time. Of course, you can also proactively remove old entries when a client with matching metadata reconnects with a new connectionId.
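For example (a sketch that leans on MongoDB's TTL indexes; it assumes a "lastSeen" field that the app refreshes on activity or reconnect, and the names are illustrative):

    using System;
    using System.Threading.Tasks;
    using MongoDB.Bson;
    using MongoDB.Driver;

    public static class ConnectionCleanup
    {
        // Let MongoDB expire stale connection entries on its own.
        public static Task EnsureTtlIndex(IMongoCollection<BsonDocument> connections)
        {
            return connections.Indexes.CreateOneAsync(new CreateIndexModel<BsonDocument>(
                Builders<BsonDocument>.IndexKeys.Ascending("lastSeen"),
                new CreateIndexOptions { ExpireAfter = TimeSpan.FromMinutes(30) }));
        }

        // Alternative: sweep manually from a timer in the worker role.
        public static Task Sweep(IMongoCollection<BsonDocument> connections)
        {
            return connections.DeleteManyAsync(
                Builders<BsonDocument>.Filter.Lt("lastSeen", DateTime.UtcNow.AddMinutes(-30)));
        }
    }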
I think the better alternative is to use an IUserIdProvider or (single-user?) groups, assuming your filtering requirements aren't too complex. Using either of these options should make it unnecessary to store connectionIds in your database. These options also make it fairly easy to send messages to multiple devices/tabs that a single user could have open simultaneously.
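A minimal IUserIdProvider sketch (SignalR 2.x; the hub name, user ids, and client method names are placeholders):

    using System;
    using Microsoft.AspNet.SignalR;

    // Map each connection to a stable user id so the backend can address
    // users instead of connectionIds.
    public class NameUserIdProvider : IUserIdProvider
    {
        public string GetUserId(IRequest request)
        {
            // The identity name is used here; a claim or token would also work.
            return request.User?.Identity?.Name;
        }
    }

    public static class SignalRConfig
    {
        // Call once at startup to register the provider.
        public static void Configure()
        {
            GlobalHost.DependencyResolver.Register(
                typeof(IUserIdProvider), () => new NameUserIdProvider());
        }

        // A worker role / backend process can then target a user on any
        // server behind the backplane without knowing any connectionIds.
        public static void NotifyUser(string userId, string message)
        {
            var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
            hub.Clients.User(userId).notify(message);
        }
    }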

Related

Web sockets with Redis backplane scaleout - multiple Redis channels per user or one Redis channel for all users

I am connecting clients to our servers using SignalR (similar to socket.io websockets) so I can send them notifications about activities in the system. It is NOT a chat application, so messages, when sent, will be for one particular user only.
These clients are connected on multiple web servers, and these servers are subscribed to a Redis backplane, as described in this article: http://www.asp.net/signalr/overview/performance/scaleout-in-signalr
My question: for this kind of notification system, should the Redis pub/sub backplane have multiple channels (one per user, with the app server listening on each user's notification channel), or one channel for all notifications, with the app server parsing each message, figuring out whether that userId is connected to it, and delivering the message to that user?
Based on the little I know about the details of your application, I think you should create channels/lists in the backplane/Redis on a per-client basis. This is cheap in Redis, and it gives the server-side process handling a specific client only the notifications that client is supposed to receive.
This saves your application from iterating over or handling irrelevant data, which could have performance implications at scale; and if security is at all a concern (I don't know what the domain or application is), it is best never to retrieve/receive information that wasn't intended for a particular client.
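As a rough illustration of the per-client-channel approach (StackExchange.Redis; the channel naming scheme is invented):

    using System;
    using StackExchange.Redis;

    class PerUserChannels
    {
        static void Main()
        {
            var redis = ConnectionMultiplexer.Connect("localhost");
            ISubscriber sub = redis.GetSubscriber();

            // When user 42 connects to THIS app server, subscribe to their channel
            // (and unsubscribe again when they disconnect):
            sub.Subscribe("notifications:user:42", (channel, message) =>
            {
                // Forward to that user's open connection(s) on this server.
                Console.WriteLine($"deliver to user 42: {message}");
            });

            // Any backend process can publish without knowing which server
            // currently hosts the user:
            sub.Publish("notifications:user:42", "you have a new follower");
            Console.ReadLine();
        }
    }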
I will pose a final question and some thoughts which I think support my opinion. If you don't do this on a client-by-client basis, how will you handle the case where the user is not present to receive a message? You would either have to throw that message away, or have the application server handle that un-received message for every single client, every time they poll or otherwise receive information from Redis. This could really add up. Although, without knowing the details of the application, I'm not sure if this paragraph is relevant.
At the end of the day, though approaches and opinions may vary depending on the application, I would think about the architecture in terms of the entities you outlined. You have clients, and they send and receive messages. Those messages should be associated with each of the parties involved somehow, and they should be stored in a manner that is efficient for lookup and that helps define the structure of the application.
Hope my 2c helps!

SockJS and Meteor: what if the load balancer does not support sticky sessions?

I'm exploring balancing options for Meteor. This article looks very cool and it says that the following should be supported to load balance Meteor:
Mongo oplog tailing. Otherwise, it may take up to ten seconds for one instance of Meteor to get updates from another, because the polling Mongo driver will be used, which polls-and-diffs the DB every ten seconds.
Websocket. It's clear too: otherwise clients will fall back to HTTP and long polling, which will work, but it's not as cool as Websocket.
Sticky sessions, 'which are required by SockJS'. Here is where my question comes in:
As I understand it, 'sticky sessions support' is something that assigns a client to the same server for the duration of his session. Is it essential? What may happen if I don't configure sticky sessions at all?
Here's what I came up with by myself:
Because Meteor stores in memory all data sent to a client, if a client connects to X servers, then X times more memory will be consumed.
Some minor (or major, if there is no oplog) lag may appear for the same user in, say, different tabs or windows, which may be surprising.
If SockJS reconnects and wants some data to persist across reconnections, it's going to have a bad time. I'm not sure how SockJS works; is this point valid?
What bad things can happen? These three points don't look very bad: data is valid and available, maybe at the cost of extra memory consumption.
Basics
Sticky sessions are required to ensure that the browser's in-memory session can be managed correctly by the server.
First let me explain why you need sticky sessions:
Each publish that uses an ordinary publish cursor keeps track of whatever collections the client may have, so when something changes it knows what to send back down to the client. This applies to every Meteor app that needs a DDP connection, which is the case with both websockets and sockjs.
Additionally, there may be other client session state stored in variables, but those would be edge cases (e.g. you store the user's state in a variable).
The problem happens when the client disconnects and reconnects, and the connection somehow gets transferred to another node (without re-establishing a new connection) which has no idea about the client's data, so the behaviour can turn out a bit weird.
The issue with SockJS & Long Polling
With SockJS there is an additional issue. SockJS uses websocket emulation when it falls back to long polling.
With long polling, a new connection attempt/new HTTP request is made every time new data is available.
If sticky sessions are not enabled, each of these connections can be randomly assigned to a different node/dyno.
So (with two nodes, if assignment is random) you have a 50% chance that the server has no idea about the client's DDP session each time new data is available.
It would then force the client to re-negotiate a connection or ignore the client's DDP commands, and you would end up with very weird behaviour on the client.
Half of these requests would go to the wrong node.

Can SignalR reuse active connections to serve other users?

I want to know SignalR's per-server connection limitations. Let's say my mobile app starts a connection to the server, and the app is then idle for, say, 5 minutes (no data is sent from that specific client to the server, nor from the server to that specific client). Can SignalR use that connection to serve other users, or does SignalR create a separate connection for each user?
I want to know whether I should use SignalR or just call the server every few seconds. My app will be running in the background on the user's mobile phone and might be active all day long.
SignalR has one connection for every user, and the number of connections you can have open at a given time depends entirely on the server implementation, hardware, etc.
If your app does not rely on real-time data, then polling is an appropriate approach. However, if you do want nearly real-time data, then I'd argue that polling every 2-3 seconds can be just as taxing as maintaining an open connection.
As a final note, SignalR can be configured to poll via its long-polling transport, but it will still maintain a connection object on the server; the request just won't be held onto. That way SignalR can keep track of all the users and ensure that users get the messages that were sent to them.
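For illustration, forcing the .NET client onto the long-polling transport looks something like this (a sketch; the hub name and URL are placeholders):

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR.Client;
    using Microsoft.AspNet.SignalR.Client.Transports;

    class Program
    {
        static async Task Main()
        {
            var connection = new HubConnection("http://yourserver/signalr");
            var hub = connection.CreateHubProxy("NotificationHub");
            hub.On<string>("notify", msg => Console.WriteLine(msg));

            // Pass an explicit transport instead of letting SignalR negotiate one:
            await connection.Start(new LongPollingTransport());
            Console.ReadLine(); // keep the process alive to receive messages
        }
    }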

Remote server push notification to arduino (Ethernet)

I want to send a message actively from the server, for example over UDP/TCP-IP, to a client Arduino. It is known that this is possible if the user has port-forwarded the specific port to the device on the local network. However, I don't want the user to have to set up port forwarding manually. Would this be possible, perhaps by using another protocol?
1 Arduino Side
I think the closest you can get to this is opening a connection to the server from the Arduino, then using available() to wait for the server to stream some data to the Arduino. Your code will be polling the open connection, but you avoid all the back-and-forth communication of opening and closing the connection, passing headers back and forth, etc.
2 Server Side
This means the bulk of the work will be on the server side, where you will need to manage open connections so you can instantly write to them when a user triggers some event which requires a message to be pushed to the Arduino. How to do this varies a bit depending on what type of server application you are running.
2.1 Node.js "walk-through" of main issues
In Node.js, for example, you can res.write() on a connection without closing it; this gives a similar effect to having an open serial connection to the Arduino. That leaves you with the issue of managing the connection: should the server periodically check a database for messages for the Arduino? That merely removes one link from the arduino -> server -> database polling chain, so we should be able to do better.
We can attach a function triggered by the event of a message being added to the database. node-orm2 is an Object Relational Mapping driver for Node.js, and it offers hooks such as afterSave and afterCreate which you can use for this kind of thing. Depending on your application, you may be better off not using a database at all and simply using JavaScript objects.
The only remaining issue is: once the hook fires, how do we get the correct connection into scope so we can write to it? You can save all the relevant data you have about the request in some global data structure, perhaps a dictionary with an Arduino ID as the index; in the triggered function you fetch that data, i.e. the request context, and write to it.
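For what it's worth, here is the same registry idea sketched in C# terms rather than Node.js (the types and names are invented for illustration; the Node.js version would use a plain object keyed by ID):

    using System.Collections.Concurrent;
    using System.IO;

    // Global registry of open device connections, keyed by Arduino ID.
    public static class DeviceRegistry
    {
        public static readonly ConcurrentDictionary<string, StreamWriter> Open =
            new ConcurrentDictionary<string, StreamWriter>();

        // Called from an "after save"-style hook when a message is stored:
        public static void OnMessageSaved(string arduinoId, string payload)
        {
            if (Open.TryGetValue(arduinoId, out var writer))
            {
                writer.WriteLine(payload); // push down the held-open connection
                writer.Flush();
            }
            // else: device not connected; leave the message for its next poll
        }
    }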
See this blog post for a great example, including Node.js code which manages open connections, closing them properly and clearing them from memory on timeout, etc.
3 Conclusion
I haven't tested this myself, but I plan to, since I already have an application using Arduino and Node.js which is currently implemented with normal polling. Hopefully I will get around to it soon and report back with results.
Typically in long polling (from what I've read) the connection is closed once data is sent back to the client (the Arduino), although I don't see why this would be necessary. I plan to try keeping the same connection open for multiple messages, only closing it after a fixed time interval to re-establish the connection, and I hope to set this interval fairly high, maybe 5-15 minutes.
We use Pubnub to send notifications to a client web browser so a user can know immediately when they have received a "message" and stuff like that. It works great.
This seems to have the same constraints that you are looking at: no static IP, no port forwarding. The user can theoretically just plug the thing in...
It looks like Pubnub has an Arduino library:
https://github.com/pubnub/arduino

Is polling the way to go for live chat on the web?

I'm trying to implement a custom live chat program on the web, but I'm not sure how to handle the real-time (or near real-time) updates for users. Would it make more sense to send Ajax requests from the client side every second or so, polling the database for new comments?
Is there a way to somehow broadcast from the database each time a comment is added? If this is possible, how would that work? I'm using SQL Server 2008 with ASP.NET (C#).
Thanks!
Use long polling/server side push/comet:
http://en.wikipedia.org/wiki/Comet_(programming)
Also see:
http://en.wikipedia.org/wiki/Push_technology
I think when you use long polling you'll also want your web server to provide some support in the form of non-blocking I/O for requests, so that you aren't holding a thread per connection.
You could have each client poll the server, and on the server side keep the connection open without responding.
As soon as a message is detected on the server side, the data is returned through the already-open connection. On receipt, the client immediately issues a new request.
There's some complexity, as you need to keep track on the server side of which connection is associated with which session, and which connections should be responded to in order to prevent timeouts.
I never actually did this, but it should be the most resource-efficient way.
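A bare-bones sketch of that idea in C# with HttpListener (the endpoints, session handling, and timeout are invented for illustration; a production server would use proper non-blocking I/O as noted above):

    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Net;
    using System.Text;
    using System.Threading.Tasks;

    class LongPollServer
    {
        // One parked request per session, waiting for a message.
        static readonly ConcurrentDictionary<string, TaskCompletionSource<string>> Waiters =
            new ConcurrentDictionary<string, TaskCompletionSource<string>>();

        static async Task Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/");
            listener.Start();
            while (true)
            {
                var ctx = await listener.GetContextAsync();
                _ = Task.Run(() => Handle(ctx)); // don't block the accept loop
            }
        }

        static async Task Handle(HttpListenerContext ctx)
        {
            string session = ctx.Request.QueryString["session"] ?? "anonymous";
            if (ctx.Request.Url.AbsolutePath == "/poll")
            {
                // Hold the request open until a message arrives or 30s elapse.
                var tcs = Waiters.GetOrAdd(session, _ => new TaskCompletionSource<string>());
                var done = await Task.WhenAny(tcs.Task, Task.Delay(TimeSpan.FromSeconds(30)));
                Waiters.TryRemove(session, out _);
                string body = done == tcs.Task ? tcs.Task.Result : ""; // empty on timeout
                await Respond(ctx.Response, body);
            }
            else if (ctx.Request.Url.AbsolutePath == "/send")
            {
                // Complete the waiting poll for this session, if there is one.
                string msg = new StreamReader(ctx.Request.InputStream).ReadToEnd();
                if (Waiters.TryGetValue(session, out var tcs)) tcs.TrySetResult(msg);
                await Respond(ctx.Response, "ok");
            }
        }

        static async Task Respond(HttpListenerResponse res, string body)
        {
            byte[] bytes = Encoding.UTF8.GetBytes(body);
            res.ContentLength64 = bytes.Length;
            await res.OutputStream.WriteAsync(bytes, 0, bytes.Length);
            res.Close();
        }
    }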
Nope. Use queuing systems like RabbitMQ or ActiveMQ. Check MongoDB too.
A queuing system will give you publish-subscribe facilities.
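A rough publish-subscribe sketch with the RabbitMQ .NET client (the exchange name and message contents are invented; each web server would forward consumed messages to its connected browsers):

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class ChatBus
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using (var conn = factory.CreateConnection())
            using (var channel = conn.CreateModel())
            {
                // Fanout exchange: every bound queue gets every chat message.
                channel.ExchangeDeclare("chat", ExchangeType.Fanout);

                // Each web server gets its own transient queue bound to the exchange.
                string queue = channel.QueueDeclare().QueueName;
                channel.QueueBind(queue, "chat", "");

                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += (s, ea) =>
                    Console.WriteLine(Encoding.UTF8.GetString(ea.Body.ToArray()));
                channel.BasicConsume(queue, true, consumer);

                // Publishing a new comment:
                channel.BasicPublish("chat", "", null, Encoding.UTF8.GetBytes("alice: hello"));
                Console.ReadLine();
            }
        }
    }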
