Can SignalR reuse active connections to serve other users? - asp.net

I want to understand SignalR's per-server connection limits. Say my mobile app opens a connection to the server and then sits idle for, say, 5 minutes (no data is sent from that client to the server, nor from the server to that client). Can SignalR use that connection to serve other users, or does SignalR create a separate connection for each user?
I want to know whether I should use SignalR or just poll the server every few seconds. My mobile app will run in the background on the user's phone and might be active all day long.

SignalR maintains one connection per user, and the number of connections you can have open at a given time depends entirely on the server implementation, hardware, etc.
If your app does not rely on real-time data, then polling is an appropriate approach. However, if you do want near-real-time data, I'd argue that polling every 2-3 seconds can be just as taxing as maintaining an open connection.
As a final note, SignalR can be configured to poll via its Long Polling transport, but it will still maintain a connection object on the server; the request just won't be held open. That way SignalR can keep track of all the users and ensure that they get the messages sent to them.
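For illustration, here is a minimal sketch of forcing the Long Polling transport from a .NET client; the URL, hub name, and "notify" method are hypothetical:

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR.Client;
    using Microsoft.AspNet.SignalR.Client.Transports;

    class Program
    {
        static async Task Main()
        {
            // Skip the WebSockets/SSE transports and poll instead; the server
            // still tracks this client as one logical connection.
            var connection = new HubConnection("https://example.com/signalr");
            var hub = connection.CreateHubProxy("NotificationHub");
            hub.On<string>("notify", msg => System.Console.WriteLine(msg));
            await connection.Start(new LongPollingTransport());
        }
    }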

Related

TCP Connection in Web application

I really need your advice on this.
I have many TCP client devices. The web application will be accessed by many users after authentication.
The problems I need to solve are:
Problem 1 - Create a TCP listener for these client machines that every user can access.
My tentative solution: create a TCP connection on a page, so every user opens a new TCP connection from their own device (localhost) once the page loads. This works because every user's PC is different, so each would be an entirely separate connection. But this solution doesn't satisfy problem 2.
Problem 2 - The machine broadcasts data every 30 seconds, so my application should catch that data and update it on the page.
This, I think, is the main problem.
I know live data updates on a web page can be done using SignalR, but SignalR does not connect to a TCP client machine directly. So here is what I tried:
1st - I made a WCF service act as the TCP listener. The WCF service would get data from the machine and save it into a database, from where SignalR would do its job. But whenever I use the WCF service my system hangs, so I don't know if that's the right way to do it.
2nd - I tried creating a Windows service as the TCP listener. But I don't think that's going to work with a web application.
To be very frank, I don't understand what I should do to get this functionality.
From my side: I just want a TCP connection at the application level that is persistent, independent of any user, and doesn't close on a page reload.
Whenever that connection receives data, it should be pushed to every user's web browser without any reload.
I cannot use a timer; it needs to be real-time.
From what I've found and understood, the TCP connection is with a device, so SignalR cannot be used directly. We need something else (like a service) in between to make it work with SignalR.
So, in the end, what should I do?
I hope I have stated my problem clearly enough.
I just want to discuss this so my doubts get cleared up and I can reach the desired result.
EDIT 1:
With WebSockets and SignalR, "client" normally refers to a web browser.
Can a device (a multimedia control device) be a SignalR client in the same way a web browser is?
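To make the "service in between" idea concrete, here is a minimal sketch assuming classic ASP.NET SignalR 2.x: a background TcpListener (hosted in the web app itself or a Windows service in the same process) accepts device connections and pushes whatever it reads to all browsers through a hub context. DeviceHub, port 9000, and the deviceData client method are hypothetical names.

    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class DeviceHub : Hub { }  // browsers connect to this hub as usual

    public class DeviceListenerService
    {
        // Runs once per application, independent of any page or user session.
        public async Task RunAsync()
        {
            var hub = GlobalHost.ConnectionManager.GetHubContext<DeviceHub>();
            var listener = new TcpListener(IPAddress.Any, 9000); // hypothetical port
            listener.Start();

            while (true)
            {
                var device = await listener.AcceptTcpClientAsync();
                var _ = Task.Run(async () =>
                {
                    using (var stream = device.GetStream())
                    {
                        var buffer = new byte[4096];
                        int read;
                        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                        {
                            var payload = Encoding.UTF8.GetString(buffer, 0, read);
                            // Push to every connected browser, no reload needed;
                            // "deviceData" is a hypothetical client-side handler.
                            hub.Clients.All.deviceData(payload);
                        }
                    }
                });
            }
        }
    }

Since the listener lives at the application level rather than on a page, it survives page reloads, and the browsers only ever talk to SignalR.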

Web sockets with Redis backplane scaleout - one Redis channel per user or one channel for all users

I am connecting clients to our servers using SignalR (similar to Socket.IO websockets) so I can send them notifications for activities in the system. It is NOT a chat application; messages, when sent, are for a particular user only.
These clients are connected across multiple web servers, and those servers are subscribed to a Redis backplane, as described in this article - http://www.asp.net/signalr/overview/performance/scaleout-in-signalr
My question: for this kind of notification system with Redis pub/sub, should I have multiple channels in the backplane, one per user, with the app server listening to each user's notification channel? Or one channel for all notifications, with the app server parsing each message, figuring out whether it has that userId connected, and sending the message to that user?
Based on the little I know about the details of your application, I think you should create channels/lists in the backplane/Redis on a per-client basis. This is cheap in Redis, and it gives the server-side process handling a specific client only the notifications that client is supposed to have.
This saves your application from iterating over or handling irrelevant data, which could have performance implications at scale. And if security is at all a concern (I don't know what the domain or application is), it is best never to receive information that wasn't intended for a particular client.
I will pose a final question and some thoughts which I think support my opinion. If you don't do this on a client-by-client basis, how will you handle the case where a user is not present to receive a message? You would either have to throw that message away, or have the application server handle that unreceived message for every single client, every time it polls or otherwise receives information from Redis. That could really add up. Although, without knowing the details of the application, I'm not sure this paragraph is relevant.
At the end of the day, though approaches and opinions may vary depending on the application, I would think about the architecture in terms of the entities you outlined. You have clients, and messages sent to and from them; those messages should be associated with each of the parties involved somehow, and stored in a manner that is efficient to look up and that helps define the structure of the application.
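As a rough sketch of the per-user channel idea with StackExchange.Redis (the notifications:{userId} naming scheme and the localhost endpoint are assumptions):

    using System;
    using StackExchange.Redis;

    // Each app server subscribes only for the users it currently hosts,
    // and publishers address users directly by channel name.
    public static class NotificationChannels
    {
        static readonly ConnectionMultiplexer Redis =
            ConnectionMultiplexer.Connect("localhost"); // hypothetical endpoint

        public static void SubscribeUser(string userId, Action<string> deliver)
        {
            Redis.GetSubscriber().Subscribe(
                "notifications:" + userId,
                (channel, message) => deliver(message));
        }

        public static void UnsubscribeUser(string userId)
        {
            Redis.GetSubscriber().Unsubscribe("notifications:" + userId);
        }

        public static void Publish(string userId, string payload)
        {
            Redis.GetSubscriber().Publish("notifications:" + userId, payload);
        }
    }

A server would call SubscribeUser in its connect handler and UnsubscribeUser on disconnect, so no server ever sees traffic for users it doesn't host.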
Hope my 2c helps!

Is it possible to improve this zmq architecture?

Intro:
In the architecture below, there are three key components:
Users - machines where the user application runs.
Applications - which run inside the remote server.
Gateway/Broker - required for isolation between user devices and server applications.
Message flow between a user device and a server application should happen as below:
The user shall transmit a message to the remote server, to be consumed by one or more server applications.
An application shall broadcast/publish a message to all connected users.
An application shall send a message to a particular user device (unicast).
In addition, one or more users will connect to or disconnect from the server arbitrarily, and one or more applications will be spawned or terminated arbitrarily.
For the above problem statement, I have designed the zmq architecture below.
The Gateway/Broker handles arbitrary arrivals and departures of users and applications and provides the required isolation. It publishes user messages to all applications, and it aggregates all messages that applications need sent to users via a SUB socket.
An application sends a two-part message: the first part is the user identity and the second part is the actual payload. The Gateway/Broker transmits the message to a user based on that identity. A special identity is reserved for broadcasts; if the gateway receives the broadcast identity, it publishes the message to all users via its PUB socket.
A user connects to both the ROUTER and PUB sockets on the gateway and receives fair-queued data from both. When sending, a user's messages go only to the gateway's ROUTER socket, never the PUB socket.
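For what it's worth, here is a minimal sketch of that gateway wiring using NetMQ (the C# ZeroMQ port); the port numbers and the reserved BROADCAST identity are assumptions, and error handling plus the lossless retransmission from Q2 are omitted:

    using NetMQ;
    using NetMQ.Sockets;

    class Gateway
    {
        const string Broadcast = "BROADCAST"; // hypothetical reserved identity

        static void Main()
        {
            // Users connect DEALER sockets here and send payload frames.
            var userRouter = new RouterSocket();
            userRouter.Bind("tcp://*:5550");
            // Users SUB here for broadcast traffic.
            var userPub = new PublisherSocket();
            userPub.Bind("tcp://*:5551");
            // Applications SUB here to receive all user messages.
            var appPub = new PublisherSocket();
            appPub.Bind("tcp://*:5560");
            // Applications PUB [identity, payload] here; the gateway aggregates.
            var appSub = new SubscriberSocket();
            appSub.Bind("tcp://*:5561");
            appSub.SubscribeToAnyTopic();

            userRouter.ReceiveReady += (s, e) =>
            {
                // [identity, payload] from a user: fan out to every application.
                appPub.SendMultipartMessage(e.Socket.ReceiveMultipartMessage());
            };

            appSub.ReceiveReady += (s, e) =>
            {
                var msg = e.Socket.ReceiveMultipartMessage(); // [identity, payload]
                if (msg[0].ConvertToString() == Broadcast)
                    userPub.SendFrame(msg[1].ToByteArray());  // publish to all users
                else
                    userRouter.SendMultipartMessage(msg);     // unicast by identity
            };

            using (var poller = new NetMQPoller { userRouter, appSub })
                poller.Run();
        }
    }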
Questions:
Q1: Is there any flaw in the above architecture?
Q2: Is it possible to improve it further?
Metrics assumed for Q2:
The users and applications are dynamic in nature; they connect and disconnect on their own, and the design should withstand that.
A user reports its status periodically to the server; the design should support a latency of less than 333 ms (for a user connected to the server over the internet, the WAN connectivity between user and server provides a latency much lower than 333 ms).
Lossless transmission between server and users (ACKing at the backend, retransmission if lost).
You can try Malamute, which gives you what you need and more, such as credit-based flow control, keep-alives, and tracking.
Malamute is a small broker based on ZeroMQ and part of the ZeroMQ community. You can run Malamute as a component inside your application; you don't need a dedicated service or daemon for it.
If you are using C or C++ it's a no-brainer, as it integrates naturally. It also has bindings for many more languages.
https://github.com/zeromq/malamute

SignalR: managing connections in an external datastore

We are looking for a way to have a background process push messages out to connected clients.
The approach we are taking: whenever a new connection is established (OnConnected), we store the connectionId along with some request metadata (for later filtering) in our MongoDB. When an event happens (triggered from a client or a backend process), a worker role (another background process) listens for those events (via messaging or whatever) and, based on the event details, filters the connected clients using the captured metadata.
The approach seems OK, but we have a problem when:
the SignalR server goes down,
before the server comes back up, the client disconnects (closes the browser or whatever),
the SignalR server comes back up.
We are then left with connections in MongoDB whose connection status we don't know.
I am wondering if there is a better way to do this. The goal is to be able to target a specific connected client from a backend service (worker role) and push messages to it.
By the way, we are using the scaleout option with a Service Bus backplane.
The guide on Mapping SignalR Users to Connections goes over several options you have for managing connections.
The approach you are currently taking falls under the "Permanent, external storage" option.
If you want/need to stick with that option, you could add some sort of cleanup procedure that periodically removes connections from your database that have been inactive for longer than a specified time. Of course, you can also proactively remove old entries when a client with matching metadata reconnects with a new connectionId.
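For the cleanup idea, one low-effort sketch is to let MongoDB expire stale entries itself, using a TTL index on a last-activity timestamp that your OnConnected/OnReconnected handlers refresh; the collection and field names here are hypothetical:

    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;

    // Mongo deletes any document whose "lastSeen" is older than 30 minutes,
    // so orphaned connectionIds disappear without an app-side sweeper.
    var db = new MongoClient("mongodb://localhost").GetDatabase("signalr");
    var connections = db.GetCollection<BsonDocument>("connections");

    var keys = Builders<BsonDocument>.IndexKeys.Ascending("lastSeen");
    var options = new CreateIndexOptions { ExpireAfter = TimeSpan.FromMinutes(30) };
    connections.Indexes.CreateOne(new CreateIndexModel<BsonDocument>(keys, options));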
I think the better alternative is to use an IUserIdProvider or (single-user?) groups, assuming your filtering requirements aren't too complex. Either option should make it unnecessary to store connectionIds in your database, and both make it fairly easy to send messages to the multiple devices/tabs that a single user could have open simultaneously.
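A minimal sketch of the IUserIdProvider route, assuming SignalR 2.x and authenticated users; the hub and the notify client method are hypothetical names:

    using Microsoft.AspNet.SignalR;

    public class NotificationHub : Hub { }

    // Maps each connection to a stable user id (here, the authenticated user
    // name) so the backend can target users instead of raw connectionIds.
    public class NameUserIdProvider : IUserIdProvider
    {
        public string GetUserId(IRequest request)
        {
            return request.User != null ? request.User.Identity.Name : null;
        }
    }

    public static class Notifications
    {
        // Call once at startup, before MapSignalR.
        public static void Register()
        {
            GlobalHost.DependencyResolver.Register(
                typeof(IUserIdProvider), () => new NameUserIdProvider());
        }

        // The worker role can now address a user directly; SignalR routes the
        // message to every connection that user has, across the backplane.
        public static void Send(string userId, string payload)
        {
            var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
            hub.Clients.User(userId).notify(payload);
        }
    }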

Why does Gmail not reconnect faster?

When Gmail loses the connection, it displays messages such as:
Not connected. Connecting in 3:36… [Try now]
Would faster reconnect intervals really be that big of a deal?
I am asking because I am developing a Socket.IO-based mobile web app, and I want to avoid having a message like Gmail's. Instead, I imagine a scheme such as:
reconnect at fast random intervals between one second and a minute, plus
reconnect on certain user interactions, plus
reconnect on changes of browser state.
One reason your application might lose its connection to the server is that the server, or the path to it, is overloaded. Spamming it with reconnection attempts could make the situation worse.
In the end, it depends on your usability requirements. When a user spends a long time in an email program, they are usually not interacting with it constantly but reading a single email. Also, a mail client can tolerate being disconnected for several minutes, because it isn't unusual for emails to be read hours after they were sent. So Gmail can live with longer delays before attempting to reconnect. In an application where the user is constantly interacting with the server, you might prefer shorter delays between reconnection attempts.
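As an illustration of the asker's scheme, here is a sketch of a jittered, capped reconnect loop; the growth factor and the 60-second cap are arbitrary choices:

    using System;
    using System.Threading.Tasks;

    static class Reconnector
    {
        static readonly Random Rng = new Random();

        // tryConnect returns true once a connection is established.
        public static async Task RunAsync(Func<Task<bool>> tryConnect)
        {
            var capSeconds = 8.0;
            while (!await tryConnect())
            {
                // Random wait between 1s and the current cap, then grow the cap,
                // so a fleet of clients doesn't hammer a struggling server in sync.
                var delay = TimeSpan.FromSeconds(1 + Rng.NextDouble() * (capSeconds - 1));
                await Task.Delay(delay);
                capSeconds = Math.Min(capSeconds * 2, 60); // never wait over a minute
            }
        }
    }

The randomness spreads retries out across clients, which addresses the answer's concern about reconnection attempts piling onto an overloaded server.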
