SignalR - Handling disconnected users

Hi,
I'm using the SignalR library in a project to handle notification and chat modules. I have a table in a database to keep track of online users.
The chat hub inherits IDisconnect, where I disconnect the user. After disconnecting the user, I notify the other users about that event. At this point I check whether the disconnected user is the current client; if it is, I call a method on the hub to reconnect the user (it just updates the table).
I do this because, with the current implementation, closing a single browser tab fires the Disconnect task even though the user may still have another tab open.
I haven't tested this module under heavier load yet, but on my development server there can be a gap of a few seconds between the IDisconnect event and the user's request to connect again.
I'm not comfortable with my implementation for handling disconnected users in the chat, but I can't see another way to improve it.
If possible, could someone give me some advice on this, or is this the only solution I have?
Update: I ended up using a singleton class to store all the users and their SignalR connection IDs. This way I can get the user's ID during the disconnect task (at that point there is no HttpContext to get the user information from, but you can always map the SignalR connection ID back to the user ID via the collection in the singleton class).
Update 20-02-2013: Although the solution above did the job, I later needed to scale the project. My solution was to use Redis to store all user connections and take advantage of key expiration on disconnect events. During a reconnect I check whether the key is in a pending state (about to expire within a few minutes).
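For what it's worth, a rough sketch of that Redis approach, assuming StackExchange.Redis as the client; the key names and the two-minute grace period are illustrative, not the exact values I used:

    // Presence tracking with a per-user key that is allowed to expire on disconnect.
    using System;
    using StackExchange.Redis;

    public class PresenceStore
    {
        private static readonly TimeSpan DisconnectGrace = TimeSpan.FromMinutes(2);
        private readonly IDatabase _db;

        public PresenceStore(IConnectionMultiplexer redis)
        {
            _db = redis.GetDatabase();
        }

        // Called on connect/reconnect: store the latest connection id and
        // cancel any pending expiration left over from a recent disconnect.
        public void MarkOnline(string userId, string connectionId)
        {
            var key = "presence:" + userId;
            _db.StringSet(key, connectionId);
            _db.KeyPersist(key);
        }

        // Called on disconnect: instead of deleting the key, let it expire.
        // A reconnect within the grace period (new tab, page navigation)
        // calls MarkOnline again and keeps the user "online".
        public void MarkDisconnected(string userId)
        {
            _db.KeyExpire("presence:" + userId, DisconnectGrace);
        }

        // The user counts as online as long as the key still exists.
        public bool IsOnline(string userId)
        {
            return _db.KeyExists("presence:" + userId);
        }
    }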

You can check out how JabbR, a multi-room chat application built on top of SignalR, solves this problem: https://github.com/JabbR/JabbR/blob/master/JabbR/Hubs/Chat.cs
It basically keeps a 1:N mapping of user to connection IDs, so that when the last connection disconnects the user can be marked as "offline".
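A minimal in-memory sketch of that kind of mapping (not JabbR's actual code; the class and method names here are illustrative):

    // Tracks which connection ids belong to which user, so "offline" is only
    // raised when the last connection goes away.
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    public class ConnectionMapping
    {
        private readonly ConcurrentDictionary<string, HashSet<string>> _connections =
            new ConcurrentDictionary<string, HashSet<string>>();

        // Returns true if this is the user's first connection (user just came online).
        public bool Add(string userName, string connectionId)
        {
            var set = _connections.GetOrAdd(userName, _ => new HashSet<string>());
            lock (set)
            {
                set.Add(connectionId);
                return set.Count == 1;
            }
        }

        // Returns true if this was the user's last connection (user is now offline).
        public bool Remove(string userName, string connectionId)
        {
            HashSet<string> set;
            if (!_connections.TryGetValue(userName, out set))
                return false;
            lock (set)
            {
                set.Remove(connectionId);
                if (set.Count > 0)
                    return false;
            }
            _connections.TryRemove(userName, out set);
            return true;
        }
    }

The hub's connect handler would call Add and broadcast "user online" when it returns true; the disconnect handler would call Remove and broadcast "user offline" when it returns true.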

Related

Microservices client acknowledgement and Event Sourcing

Scenario
I am building a courier service system using microservices. I am not sure about a few things, so here is my scenario:
Booking API - this is where the customer places an order.
Payment API - this is where we process the payment for a booking.
Notification API - this service is responsible for sending notifications after everything is completed.
The system uses an event-driven architecture. When a customer places a booking order, I commit a local transaction in the Booking API and publish an event. The Payment API and Notification API are subscribed to their respective events. Once done, the Payment and Notification APIs need to acknowledge back to the Booking API.
My questions are:
After publishing the event, my booking service can't block the call and returns to the client (front end). How will my client app check the status of the transaction, or know that the transaction is completed? Does it poll every couple of seconds? Since this is a distributed transaction, any service can go down and fail to acknowledge back; in that case, how would my client (front end) know, since it will keep waiting? I am considering a saga for the distributed transactions.
What's the best way to achieve all of this?
Event Sourcing
I want to implement event sourcing to track the complete history of a booking order. Do I have to implement this in my Booking API with an event store? Or is the event store shared between services, since I am supposed to capture all the events from the different services? What's the best way to implement this?
Many thanks.
The way I visualize this is as follows (influenced by Martin Kleppmann's talks here and here).
The end user places an order. The order is written to a Kafka topic. Since Kafka has log-structured storage, the order details will be saved in the least possible time. It's an atomic operation ('A' in 'ACID') - all or nothing.
Now, as soon as the user places the order, they will want to read it back (read-your-writes). To achieve this we can write the order data to a distributed cache as well. Although a dual write is usually not a good idea, because it may cause a partial failure (e.g. writing to Kafka succeeds but writing to the cache fails), we can mitigate this risk by ensuring that one of the Kafka consumers writes the data to a database. So, even in the rare scenario of a cache failure, the user can eventually read the data back from the DB (a sketch of these first two steps follows at the end of this walkthrough).
The status of the order written to the cache at the time of order creation is "in progress".
One or more Kafka consumer groups are then used to handle the events: the payment and notification are processed, and the final status is written back to the cache and the database.
A separate Kafka consumer will then receive the responses from the payment and notification APIs and write the updates to the cache, the DB and a websocket.
The websocket will then update the UI model, and the changes will be reflected in the UI through event sourcing.
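For concreteness, a rough sketch of the first two steps above, assuming the Confluent.Kafka .NET client; the topic name, the cache interface and the service shape are illustrative assumptions, not a definitive implementation:

    // Booking service: append the order event to Kafka, then write the initial
    // "in progress" status to a cache so the user can read their write back.
    using System.Threading.Tasks;
    using Confluent.Kafka;

    public interface IOrderCache
    {
        Task SetStatusAsync(string orderId, string status);
    }

    public class BookingService
    {
        private readonly IProducer<string, string> _producer;
        private readonly IOrderCache _cache;

        public BookingService(IOrderCache cache)
        {
            _cache = cache;
            var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
            _producer = new ProducerBuilder<string, string>(config).Build();
        }

        public async Task PlaceOrderAsync(string orderId, string orderJson)
        {
            // 1. Append the order to the log: it is either written or it is not.
            await _producer.ProduceAsync("orders",
                new Message<string, string> { Key = orderId, Value = orderJson });

            // 2. Write the initial status so the UI can show it immediately.
            await _cache.SetStatusAsync(orderId, "in progress");
        }
    }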
Further clarifications based on comment
The basic idea here is that we build a cache, using streaming, for every service holding the data it needs. For example, the account service needs feedback from the payment and notification services. Therefore, we have these services write their responses to some Kafka topic, which has consumers that write the responses back into the order service's cache.
Based on the atomicity and durability properties of Kafka (or any similar technology), the message will never be lost. Eventually we will get all or nothing; that's atomicity. If the order service fails to write the order, an error response is sent back to the client synchronously and the user can retry after some time. If the order service succeeds, the responses from the other services must eventually flow back to its cache. If one of the services is down for a while, the response will be delayed, but it will be sent eventually when the service resumes.
The clients need not poll. The result will be propagated to them through streaming over a websocket, and the UI page will listen on that websocket. As the consumer writes the feedback to the cache, it can also write to the websocket, which notifies the UI. If you use something like Angular or ReactJS, the appropriate section of the UI can then be refreshed with the value received over the websocket.
Until that happens, the user keeps seeing the status "in progress" that was written to the cache at order creation time. Even if the user refreshes the page, the same status is retrieved from the cache. If the cache value expires under an LRU mechanism, the same value will be fetched from the DB and written back to the cache to serve future requests. Once the feedback from the other services is available, the new result will be streamed over the websocket; on a page refresh, the new status will be available from the cache or the DB.
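A sketch of that consumer, again assuming the Confluent.Kafka client; the topic name and the two delegates stand in for whatever cache and websocket layer you use:

    // Reads payment/notification results and fans them out to the cache and the UI.
    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Confluent.Kafka;

    public class OrderStatusConsumer
    {
        public async Task RunAsync(
            Func<string, string, Task> updateCache,   // e.g. write status to the cache and DB
            Func<string, string, Task> pushToUi,      // e.g. send over a websocket
            CancellationToken ct)
        {
            var config = new ConsumerConfig
            {
                BootstrapServers = "localhost:9092",
                GroupId = "order-status-updater",
                AutoOffsetReset = AutoOffsetReset.Earliest
            };

            using (var consumer = new ConsumerBuilder<string, string>(config).Build())
            {
                consumer.Subscribe("order-results");
                while (!ct.IsCancellationRequested)
                {
                    // key = orderId, value = new status (e.g. "paid", "notified")
                    var result = consumer.Consume(ct);
                    await updateCache(result.Message.Key, result.Message.Value);
                    await pushToUi(result.Message.Key, result.Message.Value);
                }
            }
        }
    }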
You can pass an identifier back to the client once the booking is completed, and the client can use that identifier to query the status of the subsequent actions, provided you can connect them on the back end. You can also send a notification to the client when the other events are completed. You can do long polling, or you can do notifications.
Thanks skjagini. Part of my question is how to handle the case where the other microservices don't get back in time, or never do. Let's say the Payment API has finished its work and charged the client, but didn't notify my order service in time, or only after a very long time. How does my client wait? If we time out the client, the backend may have processed the request after the timeout.
In CQRS, you separate the commands and the querying. Considering your scenario, you can implement all interactions with queues. (There are multiple implementations of CQRS with event sourcing, but in the simplest form:)
Client sends a request --> Payment API receives the request --> validates the request (if validation fails, an error is returned to the user) --> on successful validation --> generates a GUID and writes the message to a queue --> returns the GUID to the user.
The Payment API subscribes to the payment queue --> after processing the request --> writes to the order queue or any other queues.
The Order API subscribes to the order queue and processes the request.
The user has a GUID which can get them the data for all of these interactions (a rough sketch follows below).
If you use pub/sub, as in Kafka, instead of plain queues, all the other downstream systems can read from the same topic and you don't need to write to each queue separately.
If any of the services fail to process a message, they should be able to pick up where they left off once they are restarted. If a service goes down in the middle of a transaction, then as long as it rolls back its respective changes, your system should remain in a stable condition.
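A minimal sketch of that command flow; IMessageQueue and IStatusStore are illustrative abstractions rather than any particular library:

    using System;
    using System.Threading.Tasks;

    public interface IMessageQueue
    {
        Task PublishAsync(string queue, string payload);
    }

    public interface IStatusStore
    {
        Task SetStatusAsync(Guid id, string status);
        Task<string> GetStatusAsync(Guid id);
    }

    public class PaymentCommandService
    {
        private readonly IMessageQueue _queue;
        private readonly IStatusStore _status;

        public PaymentCommandService(IMessageQueue queue, IStatusStore status)
        {
            _queue = queue;
            _status = status;
        }

        // Command side: validate, assign a GUID, enqueue, return the GUID immediately.
        public async Task<Guid> AcceptAsync(string paymentRequestJson)
        {
            if (string.IsNullOrWhiteSpace(paymentRequestJson))
                throw new ArgumentException("Invalid request"); // validation errors go straight back

            var correlationId = Guid.NewGuid();
            await _status.SetStatusAsync(correlationId, "accepted");
            await _queue.PublishAsync("payments", correlationId + "|" + paymentRequestJson);
            return correlationId; // the client queries later with this id
        }

        // Query side: the client asks "where is my transaction?" using the GUID.
        public Task<string> GetStatusAsync(Guid correlationId)
        {
            return _status.GetStatusAsync(correlationId);
        }
    }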
I'm not 100% sure what you are asking, but it sounds like you should be using a messaging service. As @Saptarshi Basu mentioned, Kafka is good. I would really recommend NATS - although I'm biased because that's the one I work with.
With NATS you can create request-reply messages to interface between the client and the booking service. That's 1-1 communication.
If you have multiple instances of each of your services running, you can use the queueing feature to automatically load balance: NATS will just randomly select a server for you.
And then you can use pub-sub feeds for communication between all of your services.
This will give you a very resilient and scalable architecture, and NATS makes it all incredibly easy
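A hedged sketch of the request-reply plus queue-group idea, assuming the official NATS .NET client (NATS.Client); the subject names and payloads are made up:

    using System;
    using System.Text;
    using NATS.Client;

    class NatsRequestReplyExample
    {
        static void Main()
        {
            var cf = new ConnectionFactory();
            using (var conn = cf.CreateConnection("nats://localhost:4222"))
            {
                // Booking service side: subscribe with a queue group so multiple
                // instances share the load (NATS delivers each message to one member).
                conn.SubscribeAsync("booking.create", "booking-workers", (sender, args) =>
                {
                    // ... create the booking, then reply to the requester
                    conn.Publish(args.Message.Reply, Encoding.UTF8.GetBytes("booking-accepted"));
                });

                // Client side: 1-1 request-reply with a timeout in milliseconds.
                Msg reply = conn.Request("booking.create",
                    Encoding.UTF8.GetBytes("{\"item\":\"parcel\"}"), 2000);
                Console.WriteLine(Encoding.UTF8.GetString(reply.Data));
            }
        }
    }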

Firebase - Automatically sign out user onDisconnect

I have noticed that once a user signs in with email and password, the session has not expired when the application is reopened, so there is no need to authenticate again. I wish to avoid this.
I want to automatically .signOut() a user when .onDisconnect is triggered. How can I achieve this? I have tried the following code, but without success:
firebase.auth().onDisconnect().signOut();
When you say "onDisconnect", I'm assuming that you mean Realtime Database onDisconnect triggers.
The first thing to know about onDisconnect is that it triggers when the socket connection between Realtime Database and the client app is closed. This could happen for any number of reasons, and it can happen at any time, even if the app seemingly has a good internet connection. So, be careful about what you're trying to do here.
Also, onDisconnect triggers can only affect data in the database directly, and nothing else. So this limits what you can effectively accomplish. You can't perform any action in the client app in response to an onDisconnect.
Between these two facts, what you're trying to do isn't really possible, and I don't think it's desirable. You could end up logging out the user just because their train went underground momentarily, or because they simply switched out of the application for some time. This would be massively inconvenient for the user.
If you want to automatically log out the user, I strongly suggest finding some other way to do it, such as writing some code to remember how long it's been since the user last used your app, and forcing the logout on the client app based on your preferred logic.
onDisconnect() is related to the database connection and has little to do with your authenticated user. As in: onDisconnect() may fire while your user is signed in, simply because the connection to the database drops temporarily.
But more importantly: onDisconnect handlers run server-side, once the server detects that the client has disappeared. When this is because of a dirty disconnect (e.g. the app crashes), there is no way for the client to detect it anymore.
The most likely approach you'll want is to simply sign the user out when they close the app.
Alternatively, you might want to attach a listener to .info/connected in your client. This is a client-side listener that fires when the client detects that it is connected or disconnected.

How SignalR manages concurrent user updates

Can anyone please tell me how SignalR manages concurrent user updates? What happens when two users click the "send" button at the same time? How does SignalR identify which call is associated with which client?
Each connection has a unique connectionId, which is exchanged between server and clients (usually with each call), so there is no issue with that. The server would receive two parallel calls, but each one will be correctly associated with the right client through the exchanged connectionId. You should check the official documentation here.
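For illustration, a small hub sketch (SignalR 2.x style); the client-side callback names are assumptions:

    using Microsoft.AspNet.SignalR;

    public class ChatHub : Hub
    {
        public void Send(string message)
        {
            // Context.ConnectionId uniquely identifies the connection that made this call,
            // so two users clicking "send" at the same moment are two independent calls.
            string senderId = Context.ConnectionId;

            // Acknowledge to the caller only...
            Clients.Caller.messageSent(message);

            // ...and broadcast to everyone else, tagged with the sender's connection id.
            Clients.Others.newMessage(senderId, message);
        }
    }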

SignalR just for checking if user is online or not

I would like to ask whether it is a good idea to use SignalR just for knowing if the current user is online or not.
For example, I have a small website with a login system, and somewhere on the side I would like to show the logged-in members.
Is it a good idea to use SignalR for that?
And if it is, should I then start the connection with the hub on each page? (In this case, when the user navigates between pages, will the Reconnected method be called on the hub, or OnDisconnected and OnConnected?)
I'm just starting with SignalR, so I'm curious what people think.
You could use SignalR, though there might be better methods to do this. When a user logs in, logs out or becomes inactive, you would have some sort of message sent from the client to the server indicating the change in status. You can store that information in a temporary database, and whenever a value in the database changes you can use SignalR to relay that information to all the connected clients.
SignalR will reconnect when the user moves from one page to another. Whenever a user logs into a website, the user's security details are persisted in a cookie, assuming you are using cookie-based authentication. So the cookie stays active until the user logs out or the session times out. So there is no real need for SignalR here.
I have been investigating the same thing. From my research, I would say that you COULD do this, but I'm on the fence about whether it's the best way to go about it. I would expect a LOT of disconnecting, connecting and reconnecting. If you're persisting this data in a database, you should anticipate a lot of database traffic. If you're only on a single server though, you could just keep this in memory.
Something to also note is that the ConnectionId changes with each page refresh. At first I thought that was dumb, because I wanted the connection id to be consistent so I could keep a handle on a user with it. However, if you open a link in a new tab and then close one of them, you still have to keep the other connection in storage. If the id were the same, you would remove it on disconnect even though the other tab was still open, and your user would incorrectly be marked as offline.
However, the other issue I'm thinking about is that if you're just browsing around the site in a single tab, you will disconnect for a split second between each page load. So you might run into connection consistency issues there as well.
I'd say online presence with SignalR is more commonly used for a chat room or a game lobby. So I'd say this is possible, but whether it's a good solution, I'm unsure.
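If you do go the in-memory, single-server route mentioned above, a sketch along these lines counts connections per user so that a second tab or a quick page navigation doesn't mark the user offline (SignalR 2.x hub API; the client callback names are illustrative):

    using System.Collections.Concurrent;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class PresenceHub : Hub
    {
        // userName -> number of currently open connections (tabs, windows)
        private static readonly ConcurrentDictionary<string, int> Online =
            new ConcurrentDictionary<string, int>();

        public override Task OnConnected()
        {
            var user = Context.User.Identity.Name;
            if (Online.AddOrUpdate(user, 1, (_, count) => count + 1) == 1)
                Clients.Others.userOnline(user);   // first connection: user just came online
            return base.OnConnected();
        }

        public override Task OnDisconnected(bool stopCalled)
        {
            var user = Context.User.Identity.Name;
            if (Online.AddOrUpdate(user, 0, (_, count) => count - 1) <= 0)
            {
                int ignored;
                Online.TryRemove(user, out ignored);
                Clients.Others.userOffline(user);  // last connection gone: user is offline
            }
            return base.OnDisconnected(stopCalled);
        }
    }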

Check database for changes via long polling

I'm creating a chat app in ASP.NET MVC3.
I'm using long polling and an AsyncController to do so.
When a user posts a chat message it is saved in the database. To retrieve new messages, should I constantly check the database for changed records, or check after a definite interval?
Or is there a better/more efficient way of doing it?
I came across this question but could not get a usable answer.
You may take a look at SignalR for an efficient way. Contrary to the standard polling mechanism (in which you are sending requests at regular intervals to check for changes), SignalR uses a push mechanism in which the server sends notifications to connected clients to notify them about changes.
Since you're already using long polling and an AsyncController, why not create a message pool? Take a look at this solution.
In a nutshell, instead of just writing the new chat message to the database, you also put it in some sort of queue. Each user's async thread listens to that pool, waiting for a message to appear. When one appears, you return the data to the user through your normal operation. When all listening threads have picked up the message, it can be removed from the queue. This will prevent several threads from hammering your database looking for a new message.
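A rough sketch of such a pool using TaskCompletionSource; it is plain Task-based code with illustrative names, and with MVC3's AsyncController you would adapt it to that controller's async pattern:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public static class ChatMessagePool
    {
        private static readonly ConcurrentQueue<TaskCompletionSource<string>> Waiters =
            new ConcurrentQueue<TaskCompletionSource<string>>();

        // Long-poll request: wait until a message arrives or the timeout passes.
        public static async Task<string> WaitForMessageAsync(TimeSpan timeout)
        {
            var tcs = new TaskCompletionSource<string>();
            Waiters.Enqueue(tcs);

            var completed = await Task.WhenAny(tcs.Task, Task.Delay(timeout));
            return completed == tcs.Task ? tcs.Task.Result : null; // null means "poll again"
        }

        // Called right after the chat message is saved to the database:
        // release every request that is currently waiting, no polling needed.
        public static void Publish(string message)
        {
            TaskCompletionSource<string> tcs;
            while (Waiters.TryDequeue(out tcs))
                tcs.TrySetResult(message);
        }
    }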
You can give PServiceBus (http://pservicebus.codeplex.com/) a try; here is a sample web chat app (http://74.208.226.12/ChatApp/chat.html) running that does not need a database in between to pass messages between two web clients. If you want to persist data in the database for logging's sake, you can always subscribe to the chat messages and log them to the database.
