We host several MVC5 web applications for our intranet (500 employees) on server farms. We want to make light use of SignalR 2.2 with a SQL Server Service Broker backplane, mainly for server broadcast. We want to use the same backplane database for the different applications, all of which have access to the backplane database server.
Questions:
1. Should this be avoided for performance reasons? I didn't find any best-practice guidance, and it seems to work technically.
2. If a message is broadcast to Application1's clients, will it also be sent to Application2's clients?
3. What would be the advantages of using a separate backplane database for each application?
Up to version 2.x, I don't think that is a good idea, because it would probably be inefficient. It might work, but the current mechanism broadcasts all messages to every app using the same connection string (same server + same database); there is no way to segregate applications within a single backplane database. It looks like there is a plan for this in future versions, but as of today it's probably not recommended.
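For reference, this is roughly what wiring up the SQL Server backplane looks like in SignalR 2.x (a minimal sketch; the server and database names are placeholders). The connection string is the only scoping mechanism, which is why every application pointing at the same database sees the same message stream:

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every app configured with this same connection string joins the same backplane
        // and therefore receives every broadcast, regardless of which app sent it.
        var backplaneConnectionString =
            "Data Source=BACKPLANE-SQL;Initial Catalog=SignalRBackplane;Integrated Security=True";
        GlobalHost.DependencyResolver.UseSqlServer(backplaneConnectionString);
        app.MapSignalR();
    }
}
```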
Related
I have some questions regarding SignalR Core on the server side:
My server is written in ASP.NET Core, and it uses SignalR for sending notifications to users. The server uses Controllers with endpoints that clients interact with.
1) Can I host the entire thing in Azure App Service and add the SignalR service to it? Or would it be better to split the SignalR code out to its own server, which is called from the "main" server when needed?
2) The SignalR Service has an option for "Serverless", which according to the documentation doesn't support clients calling any server RPCs when in that mode. Could I run this in Serverless mode, since I'm only using the sockets for sending notifications to the clients? Or is it reserved for Azure Functions?
3) Is there a way to get the number of connections for a user in a SignalR hub? I would like to send a push message to the user if he doesn't have any connections to the server. If not - what is the recommended way of handling this? I was thinking of adding a singleton service that keeps count, but am unsure if this would work at scale, especially with the SignalR service.
Thanks.
1) You're better off using Azure SignalR Service.
2) Use it with a hub (the default mode), not Serverless.
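A minimal sketch of that default (hub) mode with Azure SignalR Service in ASP.NET Core; NotificationHub and the /notifications route are illustrative names, and the service connection string is assumed to be configured under Azure:SignalR:ConnectionString:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

public class NotificationHub : Hub { }

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // The app keeps its hub classes; Azure SignalR Service handles the client connections.
        services.AddSignalR().AddAzureSignalR();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapHub<NotificationHub>("/notifications"));
    }
}
```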
3) If you use Azure SignalR, you can see the connection count in the portal. In code, whether you use Azure SignalR or not, you can record the user ID on connect/disconnect and count the connections yourself. If you have multiple hubs and servers, you need to do more (for example, if you're using a Redis backplane).
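A sketch of that single-instance counting approach, extending the NotificationHub above; ConnectionTracker is a hypothetical class registered as a singleton, and it assumes authenticated users so that Context.UserIdentifier is populated. It only counts connections on one app instance, so with multiple servers (or the SignalR Service in between) you would need shared storage instead:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Register with: services.AddSingleton<ConnectionTracker>();
public class ConnectionTracker
{
    private readonly ConcurrentDictionary<string, int> _counts = new ConcurrentDictionary<string, int>();

    public int Increment(string userId) => _counts.AddOrUpdate(userId, 1, (_, n) => n + 1);
    public int Decrement(string userId) => _counts.AddOrUpdate(userId, 0, (_, n) => Math.Max(0, n - 1));
    public int CountFor(string userId) => _counts.TryGetValue(userId, out var n) ? n : 0;
}

public class NotificationHub : Hub
{
    private readonly ConnectionTracker _tracker;

    public NotificationHub(ConnectionTracker tracker) => _tracker = tracker;

    public override async Task OnConnectedAsync()
    {
        _tracker.Increment(Context.UserIdentifier);
        await base.OnConnectedAsync();
    }

    public override async Task OnDisconnectedAsync(Exception exception)
    {
        if (_tracker.Decrement(Context.UserIdentifier) == 0)
        {
            // No connections left for this user on this instance:
            // this is where a push notification could be sent instead.
        }
        await base.OnDisconnectedAsync(exception);
    }
}
```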
Situation
I'm thinking about building an ASP.NET Core website and hosting it with a Linux-based hosting provider. But I still want to use an MSSQL database, so the best choice for that would be Microsoft Azure.
My Question
Now my question is rather security-related, since I know that hosting them with different providers is entirely possible (regarding this question).
But if I do that, how will my data be encrypted? If I use the default HTTP protocol, I assume it isn't, but if I use HTTPS, will it be encrypted? Or how does it work; do I need to set up some other protocols or security measures for that?
My Thoughts
Since the client isn't directly involved in the web-site-to-database connection, there is little chance that this connection would be eavesdropped on, though "probably won't be listened to" isn't much of a guarantee. And if HTTPS is used, then all client connections should be encrypted, so I'd expect the same to apply to the web-server-to-database connection.
You can access Azure SQL from anywhere as long as the client's IP address is allowed by the firewall rules. Since communication with Azure SQL always runs over SSL/TCP, the data is already encrypted in transit.
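In practice that just means using a normal connection string; a minimal sketch (server, database, and credentials are placeholders) that makes the encryption explicit:

```csharp
using System.Data.SqlClient;

// Encrypt=True forces TLS on the connection (Azure SQL requires it anyway), and
// TrustServerCertificate=False makes the client validate the server's certificate.
var connectionString =
    "Server=tcp:yourserver.database.windows.net,1433;" +
    "Database=yourdatabase;User ID=youruser;Password=yourpassword;" +
    "Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // run queries here
}
```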
Ideally, you want to host Azure SQL and the web application in the same region, not to mention with the same provider. The main reason is that your website will be dramatically slower due to network latency if you host them in different locations.
Azure recently introduced App Service on Linux. It is definitely worth a try before considering an alternative route.
FYI: Web Apps on Linux does not yet support deployment of .NET Core apps from uncompiled source. You need to publish/compile your .NET Core app locally first, and then push the published site bits to your app.
I am at the stage of adding SignalR to my project, and I don't understand when to use the self-hosted option and when not to. For example, if I want to host my web application in a server farm:
There will be separate hosting servers
Separate SignalR hubs in each IIS server
If we want to broadcast a message to every client, how does this work in SignalR?
The issue with SignalR running on multiple instances is that clients connected to instance A cannot get messages from clients connected to instance B.
(SignalR scaleout documentation)
"However, when you scale out, clients can get routed to different servers. A client that is connected to one server will not receive messages sent from another server."
The solution to this is a backplane: every time a server receives a message, it forwards it to all the other servers. You can do this using Azure Service Bus, Redis, or SQL Server.
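As an illustration, this is roughly what the Redis variant looks like in SignalR 2.x (a minimal sketch; the server name, password, and event key are placeholders, and the SQL Server and Service Bus variants follow the same pattern with UseSqlServer/UseServiceBus):

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Each server publishes every message it receives to Redis, and all servers
        // subscribed under the same event key forward it on to their own clients.
        GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "password", "MyApp");
        app.MapSignalR();
    }
}
```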
The way I see it, you use the self-host option when you either don't want the full IIS running (because you have some lightweight operations that don't require all of IIS's overhead) or you don't want a web server at all (for example, to add real-time functionality to an existing application, say a Windows Forms app, or to any other process).
Be sure to read the documentation for self-hosting SignalR and decide whether you actually need to self-host it.
If you are developing a web application under IIS, I don't see any reason why you would want to self-host SignalR.
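For context, self-hosting just means starting the OWIN pipeline yourself in whatever process you like; a minimal sketch using the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages (the URL and EchoHub are placeholders):

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;

public class EchoHub : Hub
{
    public void Send(string message)
    {
        Clients.All.receive(message);
    }
}

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR();
    }
}

class Program
{
    static void Main()
    {
        // Hosts the SignalR pipeline in this console process instead of IIS.
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("SignalR listening on http://localhost:8080");
            Console.ReadLine();
        }
    }
}
```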
Hope this helps. Best of luck!
I'm sure that was a confusing enough title.
I have a long-running Windows service dealing with things happening in the world. This service is my canonical source of truth for the rest of my system. Now I want to slap a web interface onto this so the clients can see what is actually going on. At first this would simply be an MVC5 application with some Web API stuff. Then I plan to use SignalR 2.0 and Ember.js to make this application more interactive and "realtime".
The client communicates with the Windows service over named pipes using WCF. A client (such as a web app) could request an instance of, for example, IEventService, would be given a WCF proxy client, and could read about events through this interface. Simple enough.
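Roughly, getting hold of such a proxy looks like this (a sketch assuming a NetNamedPipeBinding and a placeholder net.pipe address; if the service raises events through a callback contract, a DuplexChannelFactory with an InstanceContext would be needed instead):

```csharp
using System.ServiceModel;

// IEventService is the existing service contract; the address is illustrative.
var factory = new ChannelFactory<IEventService>(
    new NetNamedPipeBinding(),
    new EndpointAddress("net.pipe://localhost/EventService"));

IEventService events = factory.CreateChannel();
// ...read about events through the proxy here...
```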
However, a web application basically just exists in the sense that it responds to requests from the user. The way I understand it, this is not the optimal environment for a long-lived WCF client proxy to raise events in, and thus I wonder how to host my SignalR stuff. Keep in mind that a user would log in to the MVC5 site, but through the magic of SignalR, they will keep interacting with the service without necessarily making further requests to the website.
The way I see it, there are two options:
1) Host SignalR stuff as part of the web app. Find a way to keep it "long-running" while it has active clients, so that it can react to events on the WCF client proxy by passing information out to the connected web users.
2) Host the SignalR stuff as part of my Windows service. This is already long-running, but I know nada about OWIN and what this would mean for my project. Also, the SignalR clients will have to connect to a different port than the one the web app is served from, I assume.
Any advice on which is the right direction to go in? Keep in mind that in extreme cases, a web user would log in when they get to work in the morning and only have SignalR traffic going back and forth (i.e. no web requests) for a full work day before logging out. I need them to keep up with real-time events all that time.
Any takers? :)
The benefit of self-hosting as part of your Windows service is that you can integrate the calls to clients directly with your existing code and events. If you host the SignalR server separately, you'd have another layer of communication between your service and the SignalR server.
If you've already decided on using WCF named pipes for that, then it probably won't make a difference whether you self-host or host in IIS (as long as it's on the same machine). The SignalR server itself is always "long-running" in the sense that as long as a client is connected, it will receive updates. It doesn't require manual requests from the user.
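To illustrate the integration benefit: when SignalR is self-hosted inside the Windows service, the service's existing event-handling code can push straight to the connected browsers. A minimal sketch (EventsHub, EventPublisher, and onWorldEvent are illustrative names):

```csharp
using Microsoft.AspNet.SignalR;

// Hub the web clients connect to.
public class EventsHub : Hub
{
}

// Called from the service's existing event handlers (e.g. when IEventService raises something).
public class EventPublisher
{
    public void PublishWorldEvent(object payload)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<EventsHub>();
        context.Clients.All.onWorldEvent(payload);
    }
}
```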
In any case, you'll probably need a web server to serve the HTML, scripts and images.
Having clients connected for a day shouldn't be a problem either way, as far as I can see.
I have seen variations of this question but couldn't find any that dealt with our particular scenario.
We have an existing ASP.NET website that links to a SQL Server database.
The database has CLR user-defined types, hence it can only be hosted in an Azure VM, since Cloud Services don't support said types.
We initially wanted to use a VM for the database and a Cloud Service for the front-end, but then some issues arose:
We use StateServer for storing session state, but Azure doesn't support that. We would need to configure Table storage, SQL Database, or a worker role dedicated to state management (a new worker role is an added cost). Table storage wouldn't be ideal due to performance. The other two options are preferable, but they introduce cost or app-reconfiguration disadvantages.
We use SimpleMembership for user management. We would need to migrate the membership tables from the SQL Server on our VM instance to Azure SQL Database. This is an inconvenience, as we want to keep all our tables in the same database, and splitting the two up may require making some code changes.
We are looking for a quick solution to have this app live as soon as possible, and at manageable cost. We are desperately trying to avoid re-factoring our code just to accommodate hosting part of the app in Azure Cloud services.
Questions:
Should we just go the VM route for hosting everything?
Is there any cost benefit in leveraging a VM instance (for sql server) and a Cloud Service instance (for the front-end)?
It seems to me that every "background process" added to a Cloud Service will require a new worker role. For example, if we wanted to enable SMTP for email services, this would require a new role, and hence more cost. Is this correct?
To run SQL Server with CLR etc, you'll need to run SQL Server in a Virtual Machine.
For the web tier, there are advantages to Cloud Services (web roles), as they are stateless - very easy to scale out/in without worrying about OS setup. And app setup is done through startup scripts upon bootup. If you can host your session content appropriately, the stateless model will be simpler to scale and maintain. However: If you have any type of complex installations to perform that take a while (or manual intervention), then a Virtual Machine may indeed be the better route, since you can build the VM out, and then create a master image from that VM. You'll still have OS and app maintenance issues to contend with, just as you would in an on-premises environment.
Let me correct you on your 3rd bullet regarding background processes. A cloud service's web role (or worker role) instances are merely Windows Server VMs with some scaffolding code for startup and process monitoring. You don't need a separate role for each background process. Feel free to run your entire app on a single web role and scale out; you'll just be scaling at a very coarse-grained level.
Some things to consider...
If you want to keep costs down, you can have your web and worker roles share the same code on a single machine by adding a RoleEntryPoint. Here is a post that shows how to do exactly what you are trying to do with sending email:
http://blog.maartenballiauw.be/post/2012/11/12/Sending-e-mail-from-Windows-Azure.aspx
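In outline, that looks something like the following (a sketch only; SendPendingEmails is a hypothetical helper, and the Run loop is where the post's email-sending logic would live):

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// Runs alongside the web role's IIS site, so no extra worker role (and no extra cost) is needed.
public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            SendPendingEmails();
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    private void SendPendingEmails()
    {
        // Drain an email queue and relay the messages through an SMTP provider here.
    }
}
```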
Session management is painfully slow with SQL Azure; use the Azure Cache if you can, as it is fast.
SQL Server in a VM is going to cause problems for you, because you will also need to create a virtual network between it and any cloud services. This is really stupid, but if you deploy a cloud service AND a VM, they communicate over the PUBLIC LOAD BALANCER, causing a potential security concern and network latency. So first you need to put them on a virtual network (that is an extra cost), then you also need to host a DNS server to address the SQL Server VM. Yes, this is really stupid, unless you are OK with your web/worker roles communicating with your SQL Server over the internet :)
EDIT: changed "public internet" to "public load balancer" (and noted latency)
EDIT: The above information is 100% correct, contrary to the comment by David below. Please read the guidance from Microsoft here:
http://msdn.microsoft.com/library/windowsazure/dn133152.aspx#scenario
DIRECTLY FROM MICROSOFT GUIDANCE, speaking about cross-Cloud-Service communication (VM -> web/worker roles):
"We recommend that you implement the first option as the connection process would not need to go through the public Internet. Therefore, it would provide a better network performance."
As of today (8/29/2013) Azure VMs and Worker/Web Roles are deployed into DIFFERENT "Cloud Services". Therefore communication between them needs to be secured via a Virtual Network that exposes private IP addresses between the instances.
To follow up on David's point below about adding an ACL: you are still sending packets over the internet using TDS (the SQL Server protocol). That can be encrypted, but no sane architect, enterprise governance, or security governance team would "allow" this scenario in a production environment.