I'm developing a real-time application using SignalR, where SignalR will be hosted in an ASP.NET application (VB.NET, Visual Studio 2010).
I have the following questions regarding SignalR availability:
In which cases might a client fail to connect to SignalR?
Can SignalR be trusted to support real-time applications?
If I keep a static (Shared) array in the hub, will it affect performance if the array is very big?
Since the client app connects to the ASP.NET app via a web service, are there cases where the client app can consume the web service but cannot connect to SignalR?
Can SignalR stay alive for a long time, since my app will be running 24/7?
In which cases might a client fail to connect to SignalR?
Network failure will do it, though SignalR will attempt to reconnect after a transient failure. Your app being down will do it too. :-)
Can SignalR be trusted to support real-time applications?
Yep. I'm currently working on an app that has around 3K continuously active users and we never have any widespread connectivity problems. In fact, I don't even recall seeing a support ticket related to SignalR connectivity.
If I keep a static (Shared) array in the hub, will it affect performance if the array is very big?
Within the limits of available memory, you should be fine. Be careful with threading: if you lock to access an array that is frequently updated, watch out for lock contention.
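For illustration, here is a minimal sketch of thread-safe shared state in a classic SignalR hub (the hub and method names are hypothetical). Hubs are instantiated per request, so shared state must be static; a ConcurrentDictionary sidesteps a single lock that every call would contend on:

```csharp
using System.Collections.Concurrent;
using Microsoft.AspNet.SignalR;

public class StatusHub : Hub
{
    // Shared across all hub instances. A concurrent collection avoids
    // taking one global lock on every read/write, which is where lock
    // contention would show up under frequent updates.
    private static readonly ConcurrentDictionary<string, string> Statuses =
        new ConcurrentDictionary<string, string>();

    public void UpdateStatus(string key, string value)
    {
        Statuses[key] = value;                 // lock-free update
        Clients.All.statusChanged(key, value); // broadcast the change
    }
}
```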
Since the client app connects to the ASP.NET app via a web service, are there cases where the client app can consume the web service but cannot connect to SignalR?
I can't envision such a scenario. A proper SignalR implementation should be reachable whenever network connectivity is available.
Can SignalR stay alive for a long time, since my app will be running 24/7?
Yep. SignalR will attempt to reconnect after transient network interruptions. You can also handle the client-side disconnect event to retry the connection after SignalR gives up.
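As a minimal sketch of that client-side retry, using the .NET client (Microsoft.AspNet.SignalR.Client); the URL and the five-second delay are assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public class ReconnectingClient
{
    private readonly HubConnection _connection;

    public ReconnectingClient(string url)
    {
        _connection = new HubConnection(url);

        // SignalR retries transient failures on its own; Closed fires only
        // after it has given up, so wait briefly and start again ourselves.
        _connection.Closed += async () =>
        {
            await Task.Delay(TimeSpan.FromSeconds(5));
            await _connection.Start();
        };
    }

    public Task ConnectAsync()
    {
        return _connection.Start();
    }
}
```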
I have a web service that uses NServiceBus message handling between the controller and the domain logic. For new functionality, I have to implement SignalR real-time communication between clients and the server, but there are performance problems. I thought the problems came from a defect in my design, but I haven't been able to find the cause yet.
Before the SignalR integration, my web service was responding in 20-30 ms. But when a SignalR client is connected (a single client is enough), the response time climbs to around 10,000 ms. When I remove the NServiceBus implementation between my controller and domain logic (operating the logic directly in the controller and returning), the response time drops back to a reasonable value (around 20-30 ms).
My web service is a .NET Core project, and I'm using the long-polling transport for SignalR.
There is a Windows service that I need to communicate with (in a duplex way) from ASP.NET. Is it safe to turn the Windows service into a WCF service and organize two-way communication?
I'm concerned about a scenario when the service is trying to communicate but ASP.NET process is getting reloaded and the message gets lost. Though it's unlikely during development, I guess it's quite likely in production with many clients.
I'm leaning towards a solution that involves some kind of persistence:
Both the Windows service and ASP.NET write data to SQL Server and get notified via SqlDependency (see the sketch after this list)
They exchange messages via RabbitMQ
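For the SqlDependency option from the first bullet, a rough sketch of the watcher side; the table, columns, and connection string are hypothetical, and Service Broker must be enabled on the database:

```csharp
using System.Data.SqlClient;

public class MessageWatcher
{
    private readonly string _connectionString;

    public MessageWatcher(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString); // requires Service Broker
    }

    public void Watch()
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Body FROM dbo.Messages WHERE Processed = 0", connection))
        {
            // The dependency must be attached before the command executes.
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) => Watch(); // one-shot: re-subscribe

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // hand each unprocessed message to the application
                }
            }
        }
    }
}
```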
Here are a couple of ideas regarding the general case where two independent systems (processes, servers, etc.) need to communicate reliably:
Transaction model, where the transmitting party initiates communication and waits for acknowledgment from the recipient before marking the message as delivered. In case of transmission failure/timeout, it's the sender's responsibility to persist the message and retry later. For instance, Webhook architectures rely on this model.
Publish/Subscribe model, used by a lot of distributed systems, where both parties rely on a third-party message broker (a message queue/service bus) such as RabbitMQ. In this architecture the sender is only responsible for making sure the message has been successfully queued; the responsibility for delivering the message to the recipient lies with the broker. In that case, you need to make sure your broker satisfies your reliability needs: is it in-memory only, or does it also persist to disk and recover not just from a process recycle but from a power/system recycle? A minimal RabbitMQ publisher sketch follows below.
And like you said, you can build your own messaging infrastructure too: the sender writes to a local or cloud database or to a cloud queue/service bus, and the receiver polls and consumes the messages.
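For the RabbitMQ option mentioned above, a minimal publisher sketch using the RabbitMQ.Client package; the queue name and broker address are assumptions. Declaring the queue durable and marking the message persistent is what lets the broker survive a restart without losing the message:

```csharp
using System.Text;
using RabbitMQ.Client;

public static class ReliablePublisher
{
    public static void Publish(string message)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // durable: true makes the queue itself survive a broker restart
            channel.QueueDeclare(queue: "service-messages", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);

            var properties = channel.CreateBasicProperties();
            properties.Persistent = true; // write the message to disk

            channel.BasicPublish(exchange: "", routingKey: "service-messages",
                                 basicProperties: properties,
                                 body: Encoding.UTF8.GetBytes(message));
        }
    }
}
```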
So, a few guidelines:
If you ever need to scale out (have multiple servers) and they need to somehow collaborate on these messages, then make your initial investment in a database or cloud-queue solution (such as Azure SQL or Azure Queues).
Otherwise, if your services only need to communicate within one server, then you can use a database approach or use a queue service that satisfies your persistence/reliability requirements. RabbitMQ seems like a robust solution for this scenario.
The Problem
My application works as follows:
Multiple (< 20) device clients (Android) are running at a single location.
Thousands of locations exist (therefore tens or hundreds of thousands of device clients exist).
A web portal client also exists that works in sync with each location's data and its device clients.
New data generated on a device is posted to the server (cloud) via a REST API (ASP.NET Web API).
So far this application is a pretty standard application with a mobile device client and web portal client.
However, due to requirements on each device client that are out of my control (device clients need to function in offline mode, reduce network latency, etc.), each device client does not use the server database as its immediate source of record. Each device client has its own local database (SQLite) that stays in sync with all data for its location. For example: when I make a data change on device client A, that change needs to be propagated to device client B and to web portal client C.
The web portal client reads directly from the server database since it does not need offline functionality.
As you can see, the problem here is that we now need a way to keep all device client databases in sync with each other in real time. Brief delays in data being in sync between two device clients are expected and considered okay.
Proposed Solution
My proposed solution is as follows:
When a new client device comes online initially, it receives from the server, via the REST API, a data dump of everything it has missed since it was last online.
Each new data item posted/updated/deleted from client devices via REST API is propagated through to the server database. The server database houses all data for all locations and should be considered as the permanent source of record.
The web portal works directly with the server database since it has no offline type requirements.
A connection from each client device is established to a data sync stream service via SignalR.
A worker service is "tailing" the server database for new Create/Update/Delete operations. When a CUD operation is detected, a message is dispatched to an Azure Service Bus queue/subscription (via fan-out topic) for each data sync service instance. This allows for horizontal scaling of the SignalR data sync service (with an Azure Service Bus backplane) since thousands of device client connections will exist.
The data sync service reads from its message queue/subscription and pushes a sync message (containing all data needed for the sync) via SignalR to all connected client devices for the location related to the data (a minimal hub sketch follows below).
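To make the fan-out concrete, here is a minimal sketch of a classic SignalR hub that uses one group per location, so sync messages reach only the devices that need them (all type, method, and message names are hypothetical):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class DataSyncHub : Hub
{
    // Each device joins the group for its location after connecting.
    public Task JoinLocation(string locationId)
    {
        return Groups.Add(Context.ConnectionId, locationId);
    }
}

// Called by the worker that consumes the Service Bus queue/subscription.
public class SyncDispatcher
{
    public void Dispatch(string locationId, object syncMessage)
    {
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<DataSyncHub>();
        hubContext.Clients.Group(locationId).sync(syncMessage);
    }
}
```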
The following diagram illustrates this solution:
Large blocks depict servers (gray squares are HTTP web servers that can be horizontally scaled)
Arrows depict the direction of data flowing through the application.
Questions
Is SignalR the right technology for this problem/solution? Originally my solution involved each client device establishing its own Azure Service Bus queue/subscription that collected messages from the database-tailing worker (sync river). The problem with that solution is that I would be pushing lots of wasted messages to offline device clients that may not come back online for a very long time, if ever. By dumping the delta data back when a device client comes online initially, and streaming data via SignalR thereafter, I can solve this.
I have not used SignalR extensively in a production environment before, so I am a bit new with it. What problems/challenges can I expect to experience with it for this solution?
The following article states that "There are some scenarios where a backplane can become a bottleneck. Here are some typical SignalR scenarios: High-frequency realtime (e.g., real-time games): A backplane is not recommended for this scenario.". Would this solution fall into this category? What problems could the backplane of Azure Service Bus messaging introduce? How else would I scale this solution if not in this way?
Your general opinions and recommendations for this solution are also welcome and appreciated.
You have a requirement for real-time communication with devices when they are online. One of the most promising ways to do this is WebSockets.
Using WebSockets directly is not practical, so there are popular libraries for them such as SignalR and Socket.IO. These libraries absorb many of the difficulties faced in production and in development, and they even support scaling.
Since your stack is .NET-based, SignalR is the choice here.
SignalR will work well in most cases. Here you don't have to worry about the backplane becoming a bottleneck, as you would in real-time games.
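For reference, wiring up the Azure Service Bus backplane in classic SignalR is a one-liner at startup (from the Microsoft.AspNet.SignalR.ServiceBus package); this is only a sketch, and the connection string and topic prefix are assumptions:

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every web server configured this way shares messages
        // through the same Service Bus topic.
        string connectionString = "Endpoint=sb://..."; // placeholder
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "datasync");

        app.MapSignalR();
    }
}
```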
But maintaining a self-hosted real-time solution such as SignalR comes with a cost. Delivery will not be highly reliable in stock SignalR, so you will have to implement your own monitoring mechanisms and failover processes, and geo-distribution is not supported either. So the next choice for a highly reliable real-time system that addresses all the mentioned issues is a hosted service such as PubNub.
I have an existing application (WPF) that monitors OPC Servers and alarms. There is a requirement for this to be accessible via a browser so that users can view the status of alarms etc remotely. I'm feeling out of my depth (I'm not a Web developer) and I just need some advice on the best technology to accomplish this.
I've written several WCF Services, but all these have done is, via a function call, crunch some data sending back a result.
This 'service' will have to be persistent and able to be interrogated by any number of clients. For example, a client will need to be able to connect, stay connected, and be informed of events as and when they happen. This has been a major problem in the past when I've developed WCF services (channel faults etc.), and I've learnt to keep a connection open only for as long as it's needed. Is a WCF service the best option in this case (as opposed to a normal Windows service)?
I need to be able to 'push' information from the service to clients: someone navigates to a webpage, and the page shows in real time what is happening in the service. Do I need to use timers? That could be a big problem if session state cannot be maintained.
I've read about the Observer design pattern, but can this be implemented in ASP.NET, and how would ASP.NET connect (and remain connected) to a remote Windows service? Again, do I have to resort to timers?
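For what it's worth, a duplex WCF contract expresses exactly this observer idea: the callback contract is the service's channel for pushing events back to subscribed clients. A minimal sketch with hypothetical names (duplex also requires a session-capable binding such as netTcpBinding or wsDualHttpBinding):

```csharp
using System.ServiceModel;

// The callback contract: how the service pushes events to clients.
public interface IAlarmCallback
{
    [OperationContract(IsOneWay = true)]
    void AlarmRaised(string alarmName, string details);
}

[ServiceContract(CallbackContract = typeof(IAlarmCallback))]
public interface IAlarmMonitor
{
    // The service stores OperationContext.Current.GetCallbackChannel<IAlarmCallback>()
    // for each subscriber and invokes it when an alarm fires.
    [OperationContract]
    void Subscribe();
}
```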
I apologise if this appears vague, but the situation boils down to the following:
A process that's continually running (somewhere), receiving connections from remote clients (desktop/web), and then keeping the clients informed as events (alarms going off etc) occur.
I am interested in the pub/sub paradigm in order to provide a notifications system (i.e., like Facebook's), especially in a web application that has publishers (several web applications on the same IIS web server) and one or more subscribers responsible for displaying the notifications to the front-end user.
I found Redis; it seems to be a great server that provides interesting features: caching (like Memcached), pub/sub, and queues.
Unfortunately, I didn't find any examples in a web context (ASP.NET with Ajax/jQuery), except for WebSockets and Node.js, but I don't want to use those (too early). I guess I need a process (subscriber) that receives messages from the publishers, but I don't see how to do that in a web application (pub/sub works fine in unit tests).
EDIT: we currently use .NET (ASP.NET Web Forms) and are trying out the ServiceStack.Redis library (http://www.servicestack.net/).
Actually Redis Pub/Sub handles this scenario quite well, as Redis is an async non-blocking server it can hold many connections cheaply and it scales well.
Salvatore (aka Mr Redis :) describes the O(1) time complexity of Publish and Subscribe operations:
You can consider the work of subscribing/unsubscribing as a constant time operation, O(1) for both subscribing and unsubscribing (actually PSUBSCRIBE does more work than this if you are subscribed already to many patterns with the same client).

...

About memory, it is similar or smaller than the one used by a key, so you should not have problems to subscribe to millions of channels even in a small server.
So Redis is more than capable and designed for this scenario, but the problem, as Tom pointed out, is that in order to maintain a persistent connection users will need long-running connections (aka http-push / long-poll), and each active user will take its own thread. Holding a thread isn't great for scalability; technologically you would be better off using a non-blocking HTTP server like Manos de Mono or node.js, which are both async and non-blocking and can handle this scenario. Note: WebSockets are more efficient for real-time notifications over HTTP, so ideally you would use them if the user's browser supports them and fall back to regular HTTP if it doesn't (or fall back to Flash for WebSockets on the client).
So it's not Redis or its Pub/Sub that doesn't scale here; the limit is the number of concurrent connections that a threaded HTTP server like IIS or Apache can hold. That said, you can still support a fair number of concurrent users with IIS (this post suggests 3000), and since IIS is the bottleneck and not Redis, you can easily add an extra IIS server into the mix and distribute the load.
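As a point of reference for the worker-process case, a minimal subscriber sketch with the ServiceStack.Redis library the question mentions; the host and channel name are assumptions. Note that SubscribeToChannels blocks and holds a persistent connection, which is why this belongs in a dedicated worker rather than a web request:

```csharp
using ServiceStack.Redis;

public class NotificationSubscriber
{
    public void Run()
    {
        using (var redisClient = new RedisClient("localhost"))
        using (var subscription = redisClient.CreateSubscription())
        {
            subscription.OnMessage = (channel, message) =>
            {
                // forward the notification to connected web clients
            };

            subscription.SubscribeToChannels("notifications"); // blocks here
        }
    }
}
```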
For this application, I would strongly suggest using SignalR, which is a .Net framework that enables real-time push to connected clients.
Redis publish/subscribe is not designed for this scenario - it requires a persistent connection to Redis, which you have if you are writing a worker process, but not when you are working with stateless web requests.
A publish/subscribe system that works for end users over http takes a little more work, but not too much - the simplest approach is to use a sorted set for each channel and record the time a user last got notifications. You could also do it with a list recording subscribers for each channel and write to the inbox list of each of those users whenever a notification is added.
With either of those methods a user can retrieve their new notifications very quickly. It will be a form of polling rather than true push notifications, but you aren't really going to get away from that due to the nature of http.
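A rough sketch of the sorted-set approach, again with ServiceStack.Redis (key naming and method choices are illustrative): each notification is scored with a timestamp, so a poll fetches only entries newer than the user's last check.

```csharp
using System;
using System.Collections.Generic;
using ServiceStack.Redis;

public class NotificationStore
{
    private readonly IRedisClient _redis;

    public NotificationStore(IRedisClient redis)
    {
        _redis = redis;
    }

    public void Publish(string channel, string message)
    {
        // Unix-millisecond timestamp as the sorted-set score.
        double score = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
        _redis.AddItemToSortedSet("channel:" + channel, message, score);
    }

    public List<string> GetSince(string channel, double lastSeenMs)
    {
        // Range query: only entries scored after the last check.
        return _redis.GetRangeFromSortedSetByLowestScore(
            "channel:" + channel, lastSeenMs + 1, double.MaxValue);
    }
}
```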
Technically you could use Redis pub/sub with long-running HTTP connections, but if every user needs their own thread with active Redis and HTTP connections, scalability won't be very good.