On one server, I have 2 web applications. One of them is a Web API, and the other one is SignalR. Both apps are hosted in IIS, under 2 different application pools.
What is the best way to communicate between these 2 web applications? Would using either SignalR or REST calls be viable, for example?
You can use several ways:
1) A message queue system would work. Since your server is IIS (Windows), you can use MSMQ.
2) As an alternative to MSMQ, you can use RabbitMQ.
3) As you mentioned, you can use HTTP calls.
4) You already have SignalR. You can use it for communication: write a hub that both servers join.
Which option is best depends on your requirements. Backend servers mostly communicate through a message queue system, but HTTP calls are also acceptable.
The biggest difference between HTTP and a message queue is asynchrony. When an HTTP call tries to reach an endpoint, it waits for a response, and if the server is down you have to retry until it is up again. A message queue system, on the other hand, lets you fire and forget the data; the other side of the connection can pick it up whenever it is ready.
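For illustration, here is what fire-and-forget publishing looks like with the RabbitMQ .NET client (a minimal sketch; the queue name and message are made up, not from the question):
using System.Text;
using RabbitMQ.Client;

class Publisher
{
    static void Main()
    {
        // Connect to a local broker (the host name is an assumption).
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Declare a durable queue; "app-events" is an illustrative name.
            channel.QueueDeclare(queue: "app-events", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);

            var body = Encoding.UTF8.GetBytes("hello from the Web API");

            // Fire and forget: the other application consumes this whenever it is ready.
            channel.BasicPublish(exchange: "", routingKey: "app-events",
                                 basicProperties: null, body: body);
        }
    }
}
The publisher returns immediately; if the consuming app is down, the message simply waits in the queue.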
SignalR is too risky for this job: it offers no delivery guarantees, so messages sent while a server is down or reconnecting are simply lost.
I have some questions regarding SignalR Core on the server side:
My server is written in ASP.NET Core, and it uses SignalR for sending notifications to users. The server uses Controllers with endpoints that clients interact with.
1) Can I host the entire thing in Azure App Service and add the SignalR service to it? Or would it be better to split the SignalR code out to its own server, which is called from the "main" server when needed?
2) The SignalR Service has an option for "Serverless" mode, which according to the documentation doesn't support clients calling any server RPCs while in that mode. Could I run this thing in Serverless mode, since I'm only using the sockets for sending notifications to the clients? Or is it reserved for Azure Functions?
3) Is there a way to get the number of connections for a user in a SignalR hub? I would like to send a push message to the user if he doesn't have any connections to the server. If not, what is the recommended way of handling this? I was thinking of adding a singleton service that keeps count, but I'm unsure whether this would work at scale, especially with the SignalR Service.
Thanks.
1) Better to use Azure SignalR.
2) Use it with your hub (the default mode); Serverless mode is meant for Azure Functions.
3) If you use Azure SignalR, you can see the connection count right in the portal. In code, whether you use Azure SignalR or not, you can save the user ID somewhere and count the connections (see the sketch below). If you have multiple hubs and servers, you need to do more (if using a Redis backplane, for example).
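A minimal sketch of that counting idea in ASP.NET Core follows; the hub and tracker names are invented for illustration, and the counts are per server instance, so a scaled-out deployment still needs a shared store or the Azure SignalR APIs:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Register with services.AddSingleton<ConnectionTracker>() so all hub instances share it.
public class ConnectionTracker
{
    private readonly ConcurrentDictionary<string, int> _counts = new ConcurrentDictionary<string, int>();

    public void Increment(string userId) =>
        _counts.AddOrUpdate(userId, 1, (_, n) => n + 1);

    public void Decrement(string userId) =>
        _counts.AddOrUpdate(userId, 0, (_, n) => Math.Max(0, n - 1));

    // True if the user has at least one live connection on this server.
    public bool HasConnections(string userId) =>
        _counts.TryGetValue(userId, out var n) && n > 0;
}

public class NotificationHub : Hub
{
    private readonly ConnectionTracker _tracker;

    public NotificationHub(ConnectionTracker tracker) => _tracker = tracker;

    public override Task OnConnectedAsync()
    {
        _tracker.Increment(Context.UserIdentifier ?? Context.ConnectionId);
        return base.OnConnectedAsync();
    }

    public override Task OnDisconnectedAsync(Exception exception)
    {
        _tracker.Decrement(Context.UserIdentifier ?? Context.ConnectionId);
        return base.OnDisconnectedAsync(exception);
    }
}
Check HasConnections before sending and fall back to a push message when it returns false.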
I am at the stage of using SignalR in my project, and I don't understand when to use the self-hosted option and when not to. For example, if I am going to host my web application in a server farm:
There will be separate hosting servers
Separate SignalR hubs in each IIS server
If we want to broadcast a message to every client, how does this work in SignalR?
The issue with SignalR running in multiple instances is that clients connected to instance A cannot get messages from clients connected to instance B.
From the SignalR scaleout documentation:
However, when you scale out, clients can get routed to different servers. A client that is connected to one server will not receive messages sent from another server.
The solution to this is using a backplane: every time a server receives a message, it forwards it to all other servers. You can do this using Azure Service Bus, Redis, or SQL Server.
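With classic (pre-Core) SignalR on IIS, wiring up the Redis backplane is a one-liner in the OWIN startup class; the server name, port, password, and event key below are placeholders:
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Forward every message through Redis so all servers in the farm see it
        // (requires the Microsoft.AspNet.SignalR.Redis package).
        GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "password", "MyApp");
        app.MapSignalR();
    }
}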
The way I see it, you use the self-host option when you either don't want the full IIS running (because you have some lightweight operations that don't require all of IIS's heaviness) or you don't want a web server at all (for example, you want to add real-time functionality to an existing Windows Forms application, or to any other process).
Be sure to read the documentation for self-hosting SignalR and decide whether you actually need to self-host it.
If you are developing a web application under IIS, I don't see any reason why you would want to self-host SignalR.
Hope this helps. Best of luck!
I have a WCF service, called by an ASP.NET web application. When there is more than one call per page request, is it better to keep the client open and share the instance across the whole request, or is it better to create and dispose of the client per service call, as shown below?
using (var client = new WcfClient())
{
    var result = client.Method();
}
If you're using webHttpBinding, wsHttpBinding, or basicHttpBinding, the default behavior is for each client request (call) to get its own unique connection and instance of the web service object(s). This means that when client A and client B send requests to your web service, each will get its own instance of the service, instantiated by the hosting program and then disposed of neatly (hopefully) when the response is sent back to the client. The WCF .NET infrastructure and the hosting program take care of all of the creation and destruction of the connections and objects for you, unless you hijack the process and do something fancy.
It's possible to create persistent client sessions that leave a connection open and the service in memory, but I've never tried it. Here's a link to an explanation of how to do it:
WCF sessions with a wsHttpBinding and without windows security
For the last two years, I've worked entirely on WCF client and host software on an industrial scale, and there's not much reason to worry about the efficiency of continuously opening and closing connections on a WCF web service. I've benchmarked our test services with hundreds of concurrent client connections, each uploading and downloading files, and it barely stresses the WCF server's CPU. During our tests, the majority of the stress (as usual) fell on the database side.
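One caveat if you create and dispose a client per call: Dispose on a faulted WCF channel can itself throw and mask the original exception, so the commonly recommended pattern is an explicit Close with Abort on failure (WcfClient and Method are the asker's placeholder names):
// using System.ServiceModel; for CommunicationException
var client = new WcfClient();
try
{
    var result = client.Method();
    client.Close();  // graceful shutdown of the channel
}
catch (CommunicationException)
{
    client.Abort();  // the channel is faulted; Close would throw
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}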
I am learning SignalR, and this is something that has been on my mind the whole time:
How does SignalR fit into the IIS/ASP.NET life cycle?
How long do the hubs live (I see they have reconnection semantics)?
Does IIS prevent the shutdown of an AppDomain that has a persistent connection?
It is my understanding that IIS is designed to handle request-response scenarios. A request hits IIS, which finds the AppDomain, activates it, and then passes the request to it. After some idle time, it shuts the AppDomain down. If the request takes too long, a timeout exception is thrown.
Now let's imagine that I have another application that broadcasts information through a TCP socket. I want my JavaScript clients to get that information in real time, so I create a SignalR web application. I can create a TCP client on application start, but what guarantees that IIS is not going to shut the whole thing down after some period of inactivity?
I could self-host the SignalR app in a Windows service, but then I would have to use a different port, enable cross-domain requests, etc. Many problems for deployment. But I am concerned about using an ASP.NET MVC application for this, since it looks to me like fitting a steering wheel on a motorbike.
Cheers.
SignalR in IIS/ASP.NET Lifecycle
SignalR uses OWIN: http://owin.org/
A good article on OWIN here: http://msdn.microsoft.com/en-us/magazine/dn451439.aspx
Hub object lifetime
From the SignalR docs: http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server#transience:
You don't instantiate the Hub class or call its methods from your own code on the server; all that is done for you by the SignalR Hubs pipeline. SignalR creates a new instance of your Hub class each time it needs to handle a Hub operation such as when a client connects, disconnects, or makes a method call to the server.
Because instances of the Hub class are transient, you can't use them to maintain state from one method call to the next. Each time the server receives a method call from a client, a new instance of your Hub class processes the message. To maintain state through multiple connections and method calls, use some other method such as a database, or a static variable on the Hub class, or a different class that does not derive from Hub. If you persist data in memory, using a method such as a static variable on the Hub class, the data will be lost when the app domain recycles.
Your long-running TCP client
This is not a problem with SignalR. Your TCP client can be shut down by IIS: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
I would rather make the TCP client run in a Windows service. The TCP client receives the TCP broadcast messages and forwards them to the hub using the SignalR .NET client.
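A minimal sketch of that forwarding with the SignalR 2.x .NET client; the URL, hub name, and method name are assumptions for illustration:
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public class HubForwarder
{
    private HubConnection _connection;
    private IHubProxy _proxy;

    public async Task ConnectAsync()
    {
        // Connect to the SignalR endpoint hosted by the web app.
        _connection = new HubConnection("http://localhost/signalr");
        _proxy = _connection.CreateHubProxy("EventHub");
        await _connection.Start();
    }

    // Call this from the TCP client whenever a broadcast message arrives.
    public Task ForwardAsync(string message) =>
        _proxy.Invoke("Broadcast", message);
}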
Hubs are recreated on each SignalR request, so if you need to persist anything you may have to look into using static variables or a dictionary to hold that state. But as you point out, ASP.NET can restart for a variety of reasons.
It depends on what persistence you really need. If you have a connection that MUST stay alive at all times and cannot be torn down and re-established, then hosting in IIS is not the right choice. However, if you can re-establish the same connection after a shutdown, then maybe this can still work.
You can do quite a bit in making sure that ASP.NET apps don't shut down in recent versions of IIS:
http://weblog.west-wind.com/posts/2013/Oct/02/Use-IIS-Application-Initialization-for-keeping-ASPNET-Apps-alive
If that's not enough for you, running as a separate service is an option. If you run as a service on the same IP address, there are no cross-domain concerns. Here's more info on running SignalR in a Windows service:
http://weblog.west-wind.com/posts/2013/Sep/04/SelfHosting-SignalR-in-a-Windows-Service
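For reference, self-hosting boils down to a few lines with the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Cors packages; the port below is a placeholder:
using System;
using Microsoft.Owin.Cors;
using Microsoft.Owin.Hosting;
using Owin;

class Program
{
    static void Main()
    {
        // Host SignalR on its own port, outside IIS.
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("SignalR server running on http://localhost:8080");
            Console.ReadLine();
        }
    }
}

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseCors(CorsOptions.AllowAll); // needed when pages come from another origin
        app.MapSignalR();
    }
}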
I'm sure that was a confusing enough title.
I have a long-running Windows service dealing with things happening in the world. This service is my canonical source of truth for the rest of my system. Now I want to slap a web interface onto this so the clients can see what is actually going on. At first this would simply be an MVC5 application with some Web API stuff. Then I plan to use SignalR 2.0 and Ember.js to make this application more interactive and "realtime".
The client communicates with the Windows service over named pipes using WCF. A client (such as a web app) could request an instance of, for example, IEventService, would be given a WCF proxy client, and could read about events through this interface. Simple enough.
However, a web application basically just exists in the sense that it responds to requests from the user. The way I understand it, this is not the optimal environment for a long-lived WCF client proxy to raise events in, and thus I wonder how to host my SignalR stuff. Keep in mind that a user would log in to the MVC5 site, but through the magic of SignalR, they would keep interacting with the service without necessarily making further requests to the website.
The way I see it, there are two options:
1) Host SignalR stuff as part of the web app. Find a way to keep it "long-running" while it has active clients, so that it can react to events on the WCF client proxy by passing information out to the connected web users.
2) Host SignalR stuff as part of my Windows service. This is already long-running, but I know nada about OWIN and what this would mean for my project. Also the SignalR client will have to connect to a different port than where the web app was served from, I assume.
Any advice on which is the right direction to go in? Keep in mind that in extreme cases, a web user would log in when they get to work in the morning and have only SignalR traffic going back and forth (i.e., no web requests) for a full work day before logging out. I need them to keep up with real-time events all that time.
Any takers? :)
The benefit of self-hosting as part of your Windows service is that you can integrate the calls to clients directly with your existing code and events. If you host the SignalR server separately, you'd have another layer of communication between your service and the SignalR server.
If you've already decided on using WCF named pipes for that, then it probably won't make a difference whether you self-host or host in IIS (as long as it's on the same machine). The SignalR server itself is always "long-running" in the sense that as long as a client is connected, it will receive updates. It doesn't require manual requests from the user.
In any case, you'll probably need a web server to serve the HTML, scripts and images.
Having clients connected for a day shouldn't be a problem either way, as far as I can see.
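To give a flavor of that integration when self-hosting: with SignalR 2.0 you can push to connected clients from anywhere in the service process through the hub context. The hub name EventHub and the client method newEvent below are made-up examples:
using Microsoft.AspNet.SignalR;

public class EventHub : Hub { }  // clients listen for "newEvent" on this hub

public class EventBroadcaster
{
    // Call this from the Windows service's existing event handlers.
    public void OnWorldEvent(string payload)
    {
        // Resolve the hub context and broadcast to every connected client.
        var context = GlobalHost.ConnectionManager.GetHubContext<EventHub>();
        context.Clients.All.newEvent(payload);
    }
}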