Maintaining an open Redis PubSub subscription with Booksleeve - asp.net

I am using a Redis pubsub channel to send messages from a pool of worker processes to my ASP.NET application. When a message is received, my application forwards the message to a client's browser with SignalR.
I found this solution to maintaining an open connection to Redis, but it doesn't account for subscriptions when it recreates the connection.
I'm currently handling Redis pubsub messages in my Global.asax file:
public class Application : HttpApplication
{
    protected void Application_Start()
    {
        var gateway = Resolve<RedisConnectionGateway>();
        var connection = gateway.GetConnection();
        var channel = connection.GetOpenSubscriberChannel();
        channel.PatternSubscribe("workers:job-done:*", OnExecutionCompleted);
    }

    /// <summary>
    /// Handle messages received from workers through Redis.
    /// </summary>
    private static void OnExecutionCompleted(string key, byte[] message)
    {
        /* forward the response to the client that requested it */
    }
}
The problem occurs when the current RedisConnection is closed for whatever reason. The simplest solution to the problem would be to fire an event from the RedisConnectionGateway class when the connection has been reset, and resubscribe using a new RedisSubscriberChannel. However, any messages published to the channel while the connection is being reset would be lost.
Are there any examples of recommended ways to handle this situation?

Yes, if the connection dies (network instability, re-mastering, whatever) then you will need to re-apply any subscriptions you have made. An event to reconnect and resubscribe is pretty normal, and not very different to what we use here on SE/SO (except we typically track more granular subscriptions, and have some wrapper code that handles all that).
Yes, any events published while your connection was broken are gone. That is the nature of redis pub/sub; it does not guarantee delivery to disconnected clients. Either use a tool that does promise this, or use redis to drive a queue instead - pushing/popping to/from opposite ends of a list is usually a reasonable alternative, and ensures nothing is lost (as long as your software doesn't drop it after popping it from the list). If it helps, I have on my list a request to add the blocking pop methods - they totally destroy the multiplexer intent, but they have genuine use in some cases, so I'm not against adding them.
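To illustrate the reconnect-and-resubscribe pattern together with the list-based safety net, here is a rough sketch using StackExchange.Redis (Booksleeve's successor). The channel and list names, and the assumption that workers also LPUSH a copy of each message onto a backlog list, are illustrative, not part of the original setup:

```csharp
using System;
using StackExchange.Redis;

public static class ResilientSubscriber
{
    private static ConnectionMultiplexer _muxer;

    public static void Start(string configuration)
    {
        _muxer = ConnectionMultiplexer.Connect(configuration);
        Subscribe();

        // Re-apply the subscription whenever the connection comes back.
        _muxer.ConnectionRestored += (sender, args) => Subscribe();
    }

    private static void Subscribe()
    {
        var sub = _muxer.GetSubscriber();
        sub.Subscribe(
            new RedisChannel("workers:job-done:*", RedisChannel.PatternMode.Pattern),
            (channel, message) => OnExecutionCompleted(message));

        // Drain anything the workers pushed to a backing list while we
        // were disconnected (assumes workers LPUSH a copy of each message).
        var db = _muxer.GetDatabase();
        RedisValue missed;
        while ((missed = db.ListRightPop("workers:job-done:backlog")).HasValue)
            OnExecutionCompleted(missed);
    }

    private static void OnExecutionCompleted(RedisValue message)
    {
        // forward the response to the client via SignalR
    }
}
```

The pub/sub path stays fast for the normal case, while the list acts as the durable record that survives a broken connection.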

Related

Do I need a Redis ConnectionMultiplexer for each pub/sub subscription?

I have a .Net Core Web Api setup where I expose an endpoint that is basically a forever-frame. I am constrained by an API contract that forces me to expose it as such.
That forever-frame pushes data that is received from a Redis pub/sub channel. I will have multiple listeners on this endpoint, and they should basically be individual subscribers to the same channel.
I use StackExchange.Redis.
There is one thing I cannot wrap my head around, and that is how to use the ConnectionMultiplexer in this scenario. Everywhere I read about it I am told to have one global ConnectionMultiplexer. But if I do that, won't I unsubscribe all subscribers when one leaves and shuts down a subscription to the channel that they are all listening to?
If I don't, I am sure I will run into a memory leak.
A global ConnectionMultiplexer keeps the number of connections to Redis at a minimum, but I don't see any way to avoid it here.
Is there something I have misunderstood?
Always use the same instance of ConnectionMultiplexer, or you will lose the benefits of using a multiplexer.
I had a similar issue when calling unsubscribe on a channel caused all subscribers to unsubscribe too.
If you look at the ISubscriber interface, there are two ways to subscribe to a channel:
void Subscribe(RedisChannel channel, Action<RedisChannel, RedisValue> handler, CommandFlags flags = CommandFlags.None);
ChannelMessageQueue Subscribe(RedisChannel channel, CommandFlags flags = CommandFlags.None);
I took the second one and it solved my problem.
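As a sketch of that second overload: each caller gets its own ChannelMessageQueue, and unsubscribing one queue detaches only that subscriber, leaving the others attached to the same channel. The channel name and the push delegate below are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class ForeverFrameSubscription
{
    // One shared multiplexer for the whole application.
    private static readonly ConnectionMultiplexer Muxer =
        ConnectionMultiplexer.Connect("localhost");

    public static async Task StreamAsync(Func<string, Task> pushToClient)
    {
        // Each call gets its own queue; unsubscribing it does not
        // affect other subscribers on the same channel.
        ChannelMessageQueue queue = await Muxer.GetSubscriber()
            .SubscribeAsync("updates");
        try
        {
            while (true)
            {
                ChannelMessage msg = await queue.ReadAsync();
                await pushToClient(msg.Message);
            }
        }
        finally
        {
            await queue.UnsubscribeAsync(); // detaches only this queue
        }
    }
}
```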

SignalR - Sending from server-to-client from latest Hub context

I have a simple demo with only 1 client and the server. I can send messages back and forth, very trivial stuff.
The server Hub has a timer which sends a message to the client(s) every 1000 milliseconds. Now I have a button, where when clicked, sends a message to the server (via signalR).
Problem:
When the button is clicked (and the message sent to the server), the Hub is instantiated each time (I read about the SignalR lifecycle here).
Of course, when the Hub is instantiated the Timer is also instantiated. So the side effect (i.e. bug) that I am seeing is that messages are being sent to the client from multiple Hub instances.
What I would like:
I would like the client to receive messages (from the Timer that is running on the Hub), but only 1 set of messages from a single Hub (latest Hub instance?). I do not want simultaneous/multiple messages that were spawned from each Hub that was instantiated.
But perhaps I am doing something drastically wrong in design here.
You shouldn't set the timer in the hub instance, because hubs are re-created on every request.
Just create a Singleton class to handle the timer and actions. Then access that singleton from your hub instance.
The singleton instance will persist during the whole live cycle of your application, thus you will create only one timer.
To avoid concurrency problems, your singleton should be created via Lazy<T>.
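A minimal sketch of that advice, modeled on the standard SignalR broadcast-over-a-timer pattern; the hub name and client method names are illustrative:

```csharp
using System;
using System.Threading;
using Microsoft.AspNet.SignalR;

public class Broadcaster
{
    // Lazy<T> guarantees exactly one instance, even under concurrent access.
    private static readonly Lazy<Broadcaster> _instance =
        new Lazy<Broadcaster>(() => new Broadcaster());

    private readonly Timer _timer;
    private readonly IHubContext _context;

    private Broadcaster()
    {
        // Resolve the hub context once; hub instances come and go,
        // but this singleton (and its single timer) survive.
        _context = GlobalHost.ConnectionManager.GetHubContext<DemoHub>();
        _timer = new Timer(Tick, null, TimeSpan.Zero, TimeSpan.FromMilliseconds(1000));
    }

    public static Broadcaster Instance => _instance.Value;

    private void Tick(object state)
    {
        _context.Clients.All.tick(DateTime.UtcNow);
    }
}

public class DemoHub : Hub
{
    public void Send(string message)
    {
        // Touching the singleton ensures the timer is created exactly once,
        // no matter how many hub instances are spun up.
        var broadcaster = Broadcaster.Instance;
        Clients.All.echo(message);
    }
}
```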

SignalR in multiple instances in Azure

I'd like to use Azure to host my web application, for instance CloudService web role or Azure Websites, inside the application I use SignalR to connect client and server.
Since I scaled my web role to two instances, it seems I came across a very common problem: SignalR could not find the correct original instance. The client JavaScript said it was already started, but the server hub's OnConnected event was randomly not raised, and neither were the server methods intended to be called by clients. All these strange issues happened randomly.
Once I changed the instance to be one, all the problems gone. So can anyone explain what happened when the client call server method, why sometimes the server seems not response properly?
I found the post, can Azure Service Bus solve this issue?
Yes, you need to use the Azure Service Bus backplane. Otherwise the connections are stored in memory on the given server, and the other server will know nothing about them. Once you create the service bus, just reference it in the startup class:
public void Configuration(IAppBuilder app)
{
    System.Diagnostics.Trace.TraceInformation("SignalR Startup > Configuration start");
    // Any connection or hub wire up and configuration should go here
    string connectionString = "XXX";
    GlobalHost.DependencyResolver.UseServiceBus(connectionString, "TopicName");
    ...
}
You will also need to get a reference to the context in each of your hub methods:
var context = GlobalHost.ConnectionManager.GetHubContext<HubName>();
It's easy peasy :)

Signalr webfarm with Backplane out of sync

We have a SignalR application that we built and tested for use on a single web server. The requirements have changed and we need to deploy the application to a webfarm. SignalR supports several backplanes, since the application already uses Sql Server that is what we have implemented. With the introduction of a second web node we ran into an issue with keeping the data that is cached within the Hub synced between the nodes.
Each hub has an internal cache in the form of a DataSet:
private static DataSet _cache;
The cache gets populated when a client first requests data, and from there any interaction updates the local cache and the SQL server, then notifies all connected clients of the changes.
The backplane handles the broadcast messages between the clients but the other node does not receive a message.
Our first thought was that there might be a method we could wire up that would be triggered by the backplane sending a message to a node's clients, but we have not seen such a thing mentioned in the documentation.
Our second thought was to create a .net client within the hub.
private async void ConnectHubProxy()
{
    IHubProxy EventViewerHubProxy = _hubConnection.CreateHubProxy("EventViewerHub");
    EventViewerHubProxy.On<string>("BroadcastAcknowledgedEvents", EventIDs => UpdateCacheIsAcknowledged(EventIDs));
    EventViewerHubProxy.On<string>("BroadcastDeletedEvents", EventIDs => UpdateCacheIsDeleted(EventIDs));
    ServicePointManager.DefaultConnectionLimit = 10;
    await _hubConnection.Start();
}
Our questions:
How do we keep the cache in sync?
Is the first thought possible and we missed it in the documentation?
Are there any issues or concerns with having a hub connect to itself?
The recommended way to have "state" in a scaleout scenario is to have a source of persistence. So in your case, if you're looking to have a "global-like" cache one way you can implement that is via a database.
By creating a database all your nodes can write/read from the same source and therefore have a global cache.
The reason why having an in-memory cache is not a good idea is that in a web farm, if a node goes down, it loses its entire in-memory cache. With persistence, it doesn't matter whether a node has been up for days or has just recovered from a shutdown-based failure; the persistence layer is always there.
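A rough sketch of what replacing the static DataSet with a read-through/write-through store backed by SQL Server might look like; the table and column names are hypothetical:

```csharp
using System.Data;
using System.Data.SqlClient;

public static class SharedCache
{
    // In a real app this would come from configuration.
    private const string ConnectionString = "Server=...;Database=...;Integrated Security=true";

    // Read-through: every node queries the same table instead of
    // holding its own static DataSet.
    public static DataTable GetEvents()
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var adapter = new SqlDataAdapter(
            "SELECT EventID, IsAcknowledged, IsDeleted FROM Events", conn))
        {
            var table = new DataTable();
            adapter.Fill(table);
            return table;
        }
    }

    // Write-through: updates go straight to the shared store, so the
    // other node sees them on its next read.
    public static void Acknowledge(string eventId)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Events SET IsAcknowledged = 1 WHERE EventID = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", eventId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

The backplane then only carries the "something changed" notification to clients, while the database remains the single source of truth for state.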

doing database write after the response

I have a web service that receives requests from users and returns some json. I need to save the json string in the database so for the moment, the write query occurs before the response is sent back.
Is there a way to send the response first and then do the write query, after the response left the web service?
Thanks.
There's a couple of different options here - they all have tradeoffs, though, and would be pretty esoteric. You don't mention why you want to do this, so I'm guessing performance. If that's the case, I think you're barking up the wrong tree - a simple write is almost certainly not your performance problem.
So, off the top of my head:
1. Queuing, as Ragesh mentions, would be a nice approach. This gets you similar semantics to a transaction, while offloading the write. You still have to write to the queue, though, which may be about the same overhead as writing to the DB.
2. You could spawn a new thread (using either the ThreadPool or System.Threading.Thread; there is some debate about which is preferable in ASP.NET) to handle the write. This can generally work, but you may have issues with unhandled exceptions, app domain restarts, etc.
3. You could store the JSON data in a static or Application variable, then use a Timer to periodically write it to the DB. This will be multithreaded code, so you will need to synchronize reads/writes to the collection.
4. Similar to #3, store the JSON data in Cache and use the invalidation callback to write to the DB.
5. Lots of variations on store somewhere (memory, disk, flat DB table, etc.), process later (ASP.NET, scheduled task, Windows Service, Sql Agent, etc.).
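Option 4 above might look roughly like this, using HttpRuntime.Cache with a removal callback; the 30-second expiration window and the database helper are illustrative:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class DeferredWriter
{
    // Park the JSON in the ASP.NET cache and persist it when the
    // entry expires and is evicted.
    public static void Defer(string key, string json)
    {
        HttpRuntime.Cache.Insert(
            key, json,
            null,                               // no dependencies
            DateTime.UtcNow.AddSeconds(30),     // flush to DB after 30s
            Cache.NoSlidingExpiration,
            CacheItemPriority.Normal,
            OnRemoved);
    }

    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // Runs on a background thread after eviction, off the request path.
        WriteToDatabase(key, (string)value);
    }

    private static void WriteToDatabase(string key, string json)
    {
        /* INSERT the json here */
    }
}
```

Note the usual caveats apply: an app domain recycle can still evict (or lose) entries before the callback has a chance to persist them.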
#frenchie says: a response starts by reading the json string from the db and ends with writing it back. In other words, if the user sends a request, the json string that's going to be read must be the one that was written in the previous response.
That complicates things, since inherent in async work is not knowing when something is done. If you require the async portion (writing back to the DB) to be done before handling the next request, you'll have to execute a wait to make sure it actually completed. In order to do that, you'll need to keep server side state on the client - not exactly a best practice as far as services go (though, it sounds like you're already doing that with these JSON request/response pairs).
Given the complications, I would make sure that you've done your profiling and determined it is indeed a performance problem.
You can schedule the work like this:
ThreadPool.QueueUserWorkItem(state =>
    AsynchronousExecuteReference());

// and run
static void AsynchronousExecuteReference()
{
    // run your sql update here
}
Another example uses a Thread inside a class, so you can pass parameters to it.
public class RunThreadProcess
{
    // Some parameters
    public int cProductID;

    // my thread
    private Thread t = null;

    // start it
    public Thread Start()
    {
        t = new Thread(new ThreadStart(this.work));
        t.IsBackground = true;
        t.SetApartmentState(ApartmentState.MTA);
        t.Start();
        return t;
    }

    // the actual work
    private void work()
    {
        // do thread work; all parameters are available here
    }
}
And here is how I run it:
var OneAction = new RunThreadProcess();
OneAction.cProductID = 100;
OneAction.Start();
Do not worry about memory; the GC knows the object is in use until the thread ends. I have checked this, and the GC does not collect it while the thread is running.
You should look at using message queues like MSMQ, ActiveMQ or RabbitMQ to do this. When you receive your request, you'll put the relevant data in to the queue, and send your response to the client. At the other end of the queue, you'll have some process that reads from the queue and inserts data in to your database.
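A minimal sketch of that approach with MSMQ (System.Messaging); the queue path and the database helper are assumptions for illustration:

```csharp
using System.Messaging;

public static class ResponseFirstWriter
{
    private const string QueuePath = @".\private$\json-writes"; // assumed local queue

    // Called inside the request: enqueue and return immediately,
    // so the response goes out without waiting on the database.
    public static void EnqueueForWrite(string json)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(json);
        }
    }

    // Runs in a separate worker process or Windows Service,
    // completely off the request path.
    public static void DrainLoop()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            while (true)
            {
                var message = queue.Receive();     // blocks until a message arrives
                SaveToDatabase((string)message.Body);
            }
        }
    }

    private static void SaveToDatabase(string json)
    {
        /* INSERT the json here */
    }
}
```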
This misses the point of request/response. Unless you want to get into async commands like a service bus, but that's pub/sub, not request/response. The point of request/response is to do the work on the server after receiving the request and before sending the response, even if that work is sending an async message to a service bus.
You could try moving your web service URL to an ASPX page, where the page lifecycle comes into play.
In the code-behind, call your routine that does the main portion of the work in Page_Load or Page_Prerender (or whenever is appropriate prior to the response being sent) and then do your DB work in the Page_Unload event which occurs after the response has been sent (http://msdn.microsoft.com/en-us/library/ie/ms178472.aspx).
