We have a SignalR application that we built and tested for use on a single web server. The requirements have changed and we need to deploy the application to a web farm. SignalR supports several backplanes; since the application already uses SQL Server, that is the backplane we have implemented. With the introduction of a second web node we ran into an issue keeping the data that is cached within the hub in sync between the nodes.
Each hub has an internal cache in the form of a DataSet.
private static DataSet _cache;

The cache gets populated when a client first requests data, and from there any interaction updates the local cache and the SQL Server database, then notifies all connected clients of the changes.
The backplane handles broadcasting the messages to the clients, but the other node itself never receives a message, so its cache is never updated.
Our first thought was that there might be a method we could wire up that would be triggered when the backplane delivers a message to a node's clients, but we have not seen such a thing mentioned in the documentation.
Our second thought was to create a .NET client within the hub.
private async void ConnectHubProxy()
{
    // _hubConnection is created elsewhere, e.g. new HubConnection("<node URL>/signalr")
    IHubProxy EventViewerHubProxy = _hubConnection.CreateHubProxy("EventViewerHub");
    EventViewerHubProxy.On<string>("BroadcastAcknowledgedEvents", EventIDs => UpdateCacheIsAcknowledged(EventIDs));
    EventViewerHubProxy.On<string>("BroadcastDeletedEvents", EventIDs => UpdateCacheIsDeleted(EventIDs));
    ServicePointManager.DefaultConnectionLimit = 10;
    await _hubConnection.Start();
}
Our questions:
How do we keep the cache in sync?
Is the first thought possible and we missed it in the documentation?
Are there any issues or concerns with having a hub connect to itself?
The recommended way to have "state" in a scaleout scenario is to have a source of persistence. So in your case, if you're looking to have a "global-like" cache, one way you can implement that is via a database.
By using a database, all your nodes can write to and read from the same source and therefore share a global cache.
The reason an in-memory cache is not a good idea is that in a web farm, if a node goes down, it loses all of its in-memory cache. With persistence, it doesn't matter whether a node has been up for days or has just recovered from a shutdown-style failure; the persistence layer is always there.
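As a rough illustration of that idea (not from the original post), the hub can read through to the shared SQL Server database on each request instead of holding a static DataSet; the connection string, table, and column names below are assumptions:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class EventDto
{
    public int EventID { get; set; }
    public bool IsAcknowledged { get; set; }
    public bool IsDeleted { get; set; }
}

public class EventViewerHub : Hub
{
    // Assumed connection string and schema -- adjust to your database.
    private const string ConnectionString = "Data Source=.;Initial Catalog=Events;Integrated Security=True";

    // Every node reads the same table, so there is no per-node DataSet to keep in sync.
    public async Task<List<EventDto>> GetEvents()
    {
        var events = new List<EventDto>();
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SELECT EventID, IsAcknowledged, IsDeleted FROM dbo.Events", connection))
        {
            await connection.OpenAsync();
            using (var reader = await command.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                {
                    events.Add(new EventDto
                    {
                        EventID = reader.GetInt32(0),
                        IsAcknowledged = reader.GetBoolean(1),
                        IsDeleted = reader.GetBoolean(2)
                    });
                }
            }
        }
        return events;
    }
}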
I am using a database named MyServiceDB for SignalR scaleout in my application.
Another application that also uses SignalR wants to use my database for its scaleout.
Will there be any performance loss or delay when different applications share a database for scaleout?
Should each app use its own database for scaleout?
Don't use the same database to scale out separate SignalR applications. Each application will try to initialize the database and may drop tables the other application created. SignalR also assumes there is a global, monotonically increasing cursor pointing to the last message, and I don't think you can guarantee that with two separate applications. You may also get odd issues such as messages meant for one application being delivered to the other, missing messages, or seeing the same message multiple times.
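For what it's worth, keeping the applications on separate scaleout databases is just a matter of pointing each application's startup at its own connection string; the database name and startup shape below are only an example:

// App 1's Startup (App 2 would do the same with its own database, e.g. OtherServiceDB_Scaleout).
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        string connectionString = "Data Source=.;Initial Catalog=MyServiceDB_Scaleout;Integrated Security=True";
        GlobalHost.DependencyResolver.UseSqlServer(connectionString); // SQL Server backplane for this app only
        app.MapSignalR();
    }
}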
I am trying to understand what session consistency actually means when working with Azure DocumentDB via the .NET client SDK, i.e., what defines (and bounds) a session. Is a new session created each time we create a new instance of DocumentClient, and if so, does the behavior change if we are using the IReliableReadWriteDocumentClient wrapper?
Yes, a new session is created each time you create a new instance of the DocumentClient class. Each DocumentClient instance maintains a map of collection -> session token mapping. The client saves the latest session token received from the server, and echoes it as a header (x-ms-sessiontoken) during read requests. This enables DocumentDB to locate an up-to-date replica of your collection to serve session (or read-your-writes) consistency. This is the same with IReliableReadWriteDocumentClient, since it's a wrapper over the DocumentClient.
Note: the easiest way to achieve session consistency is to have a single DocumentClient instance manage it for you automatically. You can also manage a logical session across multiple DocumentClient instances with a little more complexity. For example, let's say that you have a load balanced Web API with two servers each with a DocumentClient instance, and you want session consistency across these servers.
client writes -> App Server 1 -> DocumentDB
client reads -> App Server 2 -> DocumentDB
You can implement this by taking the x-ms-sessiontoken returned from the write in step 1, saving it as a cookie on the client, and then echoing that x-ms-sessiontoken on the read request. By round-tripping the session token, you get session consistency.
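A rough sketch of that round trip with the DocumentDB .NET SDK; the database/collection names and how the token is shuttled through the cookie are assumptions:

// using Microsoft.Azure.Documents; using Microsoft.Azure.Documents.Client;

// App Server 1: perform the write and capture the session token to hand back to the client.
ResourceResponse<Document> writeResponse = await client.CreateDocumentAsync(
    UriFactory.CreateDocumentCollectionUri("mydb", "mycoll"),
    new { id = "1", value = 42 });
string sessionToken = writeResponse.SessionToken; // e.g. store this in a cookie on the client

// App Server 2: echo the token so the read is served at least as fresh as the earlier write.
ResourceResponse<Document> readResponse = await client.ReadDocumentAsync(
    UriFactory.CreateDocumentUri("mydb", "mycoll", "1"),
    new RequestOptions { SessionToken = sessionToken });
Document doc = readResponse.Resource;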
I have the following scenario:
A user requests a certain resource on the server. This request is a long-running task, most likely taking anywhere from 2-3 seconds up to 10 seconds. We issue a JobTicket to the user, as the user wants to wait.
On receiving the request, we store it in persistent storage and issue a token (a GUID) to the user as the JobTicket.
The user connects to the Hub to get information about that GUID.
In the background:
We have a WAS-hosted service as well as a Windows Service that perform some operation on that request.
On completion, the WAS-hosted service/Windows Service calls our web application to report that the job has been completed.
From there, based on the JobTicket, we identify which user it belongs to and, on that user's connection, let the user know the job has been completed.
Now we have a farm of servers. We are using on-premises Windows Server Service Bus 1.1, which is working fine, but the challenge is that we are not able to intercept the Service Bus backplane broadcast, so the message goes to every client. Because we have a farm, a user may drop a connection and be reconnected to another server by the load balancer, so we need to scale out via Service Bus; it is fairly seamless to integrate and we already use it internally in our application, so we don't want to mix in another, more complex solution.
I have tried using an IHubPipelineModule, but scale-out broadcasts do not pass through it. I tried hooking into the SignalR code directly and debugging through it, but that is taking a long time, and I don't want to arbitrarily mess something up in the actual code. I can see in OnReceive that messages are coming in, but I am not able to follow them any further. I just need a small mechanism to intercept a broadcast message and make sure it goes only to the client it is intended for rather than to all clients, which wastes resources and is also a security concern.
Please help me with this issue. I have been stuck on it for the last four days and have not been able to come to any solution. At the same time I want to stick with an established pattern and don't want to fork a special build for this kind of small issue, which I am sure one of you experts knows how to do seamlessly.
Thanks,
Shrenik
After a lot of struggling and not finding a straightforward way, I found the approach below; I'm writing it up in case it helps someone else in the future.
Scenario:
1. Web farm: hosts the external, user-facing web pages
2. Backend process: a mix of WebApi, SharePoint, Windows Services, etc.
A user submits a request from a web page and gets a unique id back. Internally, on receiving the request, we queue it to Service Bus using a TopicClient for processing.
A pool of Windows Services watches for messages on Service Bus using a SubscriptionClient and processes them. Processing can take anywhere from 5 seconds to 30 seconds, and in some cases even more. We need to inform the client that its job is done if it is waiting on the web page or waiting for a completion notification.
In this story, we are using SignalR to push the job-completion notification to the client.
My earlier problem was how to let the web application know, from the Windows Service, that the job is done, so it can send the notification to the client who submitted the request.
One way is to host another hub internally in the web application. The Windows Service acts as a client and calls this internally hosted hub, and that hub method in turn calls the external-facing hub method to propagate the message to the specific client who submitted the request, for which we use a single-user group.
And because we have registered Service Bus as the backplane, the message propagates to the other servers and the appropriate client gets the notification. So this is the ideal solution and should work in most cases.
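A minimal sketch of that first approach, purely for illustration (the internal hub name, URL, credentials handling, and group naming are assumptions, not part of the original setup):

// Inside the Windows Service: connect to the internally hosted hub and report completion.
var connection = new HubConnection("https://internal-web-app/signalr"); // assumed internal URL
IHubProxy proxy = connection.CreateHubProxy("InternalJobHub");          // assumed hub name
await connection.Start();
await proxy.Invoke("NotifyJobCompleted", jobTicket);

// Inside the web application: the internal hub relays to the external-facing hub.
public class InternalJobHub : Hub
{
    public void NotifyJobCompleted(string jobTicket)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<JobStatusHub>();
        // Single-user group keyed by the job ticket / user (assumption).
        context.Clients.Group(jobTicket).onJobStatus("Completed");
    }
}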
This approach has one limitation: how does the Windows Service connect to the web application? We don't have Windows authentication; we have OpenID-based auth with ADFS. In that case the web application needs special code to provide a separate user id and password for the Windows Service to communicate with, or Windows authentication has to be allowed on that hub for the Windows Service's service account.
I kept trying to figure out how to remove all these hops of inter-server communication, and the management of the extra security that comes with them.
So I did the following, which is simpler, though it took me a whole night digging through the internals of SignalR. But it works:
The approach is to send the message directly to the Service Bus backplane; since all the web servers are already hooked up to the backplane, they will receive the message.
Unfortunately SignalR doesn't provide a mechanism to send a message directly to the backplane. I think because it's a pub/sub model they don't want somebody hacking into their system :), or it's a violation of their pattern, but it makes sense. In my case, because of the different roles and security, I simplified the code as below:
Create a ServiceBusMessageBus instance, as shown below. I create a single instance and keep it for the lifetime of the Windows Service, so I don't create a new instance every time:
ServiceBusMessageBus serviceBusBackplane = new ServiceBusMessageBus(new DefaultDependencyResolver(), new ServiceBusScaleoutConfiguration(connectionString, appName));
Create a ClientHubInvocation object. This is the message that the SignalR infrastructure actually creates when a backplane-based broadcast happens:
ClientHubInvocation hubData = new ClientHubInvocation
{
    Args = new object[] { msg },
    Hub = "JobStatusHub",
    Method = "onJobStatus",
    State = null,
};
Create a Message object, which is what ServiceBusMessageBus.Publish accepts. Yes, this is the method that actually gets called on the base class, ScaleoutMessageBus.Publish, which is responsible for sending the message to the topic and on to the subscribers on the other server nodes, so why not use it directly? To create the Message object you need the following code:
Message backplaneMessage = new Message(
    sourceId,
    "hg-JobStatusHub." + name,
    new ArraySegment<byte>(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(hubData))));
The second parameter above is the interesting one. If you want to publish to all clients of a hub, the syntax is "h-<HubName>"; in my case I target a specific user group, so the syntax is "hg-<HubName>.<GroupName>". You can check the code here: https://github.com/SignalR/SignalR/blob/bc9412bcab0f5ef097c7dc919e3ea1b37fc8718c/src/Microsoft.AspNet.SignalR.Core/Infrastructure/PrefixHelper.cs
Publish your message to the backplane directly, as below:
await serviceBusBackplane.Publish(backplaneMessage);
I wish this PrefixHelper class were public.
Remember: this is not the recommended way, and it doesn't insulate you from future SignalR upgrades; since these are internals, they may change, so an upgrade might require a small hassle of changing this code. But in summary, it works. I hope the SignalR team eventually provides an out-of-the-box mechanism to send messages directly to the backplane.
Thanks
I'd like to use Azure to host my web application, for instance in a Cloud Service web role or Azure Websites; inside the application I use SignalR to connect client and server.
Since I scaled my web role to two instances, I seem to have hit a very common problem: SignalR cannot find the correct original instance. The client JavaScript says the connection is already started, but the server hub's OnConnected event is randomly not raised, and neither are the server methods intended to be called by clients; all these strange issues happen randomly.
Once I changed the instance count back to one, all the problems went away. Can anyone explain what happens when a client calls a server method, and why the server sometimes doesn't seem to respond properly?
I found this post; can Azure Service Bus solve this issue?
Yes, you need to use Azure Service Bus. Otherwise the connections are stored in memory on a given server, and the other server knows nothing about them. Once you create the Service Bus namespace, just reference it in the startup class.
public void Configuration(IAppBuilder app)
{
    System.Diagnostics.Trace.TraceInformation("SignalR Startup > Configuration start");
    // Any connection or hub wire up and configuration should go here
    string connectionString = "XXX";
    GlobalHost.DependencyResolver.UseServiceBus(connectionString, "TopicName");
    ...
}
You will also need to get a reference to the context in each of your hub methods:
var context = GlobalHost.ConnectionManager.GetHubContext<HubName>();
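For example, with that context you can push to connected clients from anywhere in the server code; the hub name and client-side callback below are just placeholders:

var context = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>(); // hypothetical hub
context.Clients.All.broadcastMessage("Server says hi");          // invoke a client-side handler
context.Clients.Group("admins").broadcastMessage("Admins only"); // or target a specific group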
It's easy peasy :)
We have a SOAP web service that encapsulates calls to a 3rd-party API, so our application can simply call my service and my service handles all the various calls to the API. This works just fine.
However, we've hit a problem where the API we're connecting to allows a max of 10 connections at any given time for a given set of credentials.
Connections take at most a couple of seconds to process, but when we go live we could, in theory, have enough users to max this out. So we've created multiple accounts (5) for the API, giving us 50 connections across the 5 users.
How does ASP.NET handle connections to the web service? I know it works asynchronously, but does it spawn multiple instances of my class or reuse the same instance? Will variables persist across instances (i.e., will static variables work)?
What I need to do is: if a call to the API fails on Client1, roll over to Client2 (or Clients[0], Clients[1], etc.). Sadly I have no way to detect whether a given client is out of connections at any given moment. I could poll it with a test call, but that would take time and there is no guarantee that the client still has a connection available when I make the real call.
The API I'm calling is accessed via an XML-RPC proxy class (CookComputing). Is the "connection" made when the client is created or when the call is made, passing along the credentials?
public static IVoicestar GetClient(string userID, string password)
{
    IVoicestar client = XmlRpcProxyGen.Create<IVoicestar>();
    client.Credentials = new NetworkCredential(userID, password);
    return client;
}
It seems from this that the credentials simply "ride along" until I make a call via Client.MethodCall(), and only then is the connection made.
If you are using ASP.NET Web Services (asmx), a new instance of your web service class is spawned for each request. In the case of WCF-based services, you can control instancing/concurrency using attributes/configuration (see this article); three instancing modes are possible: per call, per session, and singleton.
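For reference, in WCF the instancing mode is set with an attribute on the service class; the service and contract names below are hypothetical:

using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)] // or PerSession / Single
public class VoicestarGatewayService : IVoicestarGateway
{
    // service operations...
}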
Irrespective of which you are using, you can always implement your own pooling mechanism for your API connections. You already have a factory method to get the API client; just put the call to the pooling layer inside that method.
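A minimal sketch of such a pool, assuming the GetClient factory from the question is accessible here and that the account credentials shown are placeholders; the round-robin and failover logic are deliberately simplified:

using System;
using System.Collections.Generic;
using System.Threading;

// Thread-safe round-robin over the five API accounts (illustrative only).
public static class VoicestarClientPool
{
    private static readonly List<IVoicestar> Clients = new List<IVoicestar>
    {
        GetClient("user1", "pass1"), // GetClient is the factory method from the question
        GetClient("user2", "pass2"),
        // ... one entry per API account
    };

    private static int _next = -1;

    public static IVoicestar GetNext()
    {
        // Interlocked keeps the index safe across concurrent ASP.NET requests.
        uint ticket = unchecked((uint)Interlocked.Increment(ref _next));
        return Clients[(int)(ticket % (uint)Clients.Count)];
    }

    // Simple failover: try the next client in the pool when a call fails.
    public static T CallWithFailover<T>(Func<IVoicestar, T> call, int attempts = 5)
    {
        Exception last = null;
        for (int i = 0; i < attempts; i++)
        {
            try { return call(GetNext()); }
            catch (Exception ex) { last = ex; }
        }
        throw last;
    }
}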
Normally Windows XP and Windows 7 have a limit of 10 concurrent TCP/IP connections; maybe that's it. Be sure to run on a Windows Server version.