I'm using the SignalR Service Bus package to let SignalR coordinate activity across several Azure web role instances within a deployment.
I used to be able to choose the Service Bus topic name to use. After updating SignalR and the Service Bus package to RC1, that option is no longer available. Instead, SignalR uses the name of the web role as its topic name.
The problem is that when multiple azure deployments are running simultaneously (i.e. Production and Staging), they fight over the single service bus topic that's automatically named after the web role. I wind up getting large numbers of duplicate messages. I want each deployment to have its own service bus topic.
How can I use a single service bus namespace to manage SignalR connections across multiple deployments of the same project? Or even two deployments of two different projects that happen to have web roles of the same name?
Can you log a bug on https://github.com/signalr/signalr/issues for this, please? We'll look at it in an upcoming release.
I solved my problem. Instead of using
GlobalHost.DependencyResolver.UseWindowsAzureServiceBus()
I'm now using
GlobalHost.DependencyResolver.UseServiceBus()
to which I can pass a topic prefix.
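For reference, here's a minimal sketch of the wiring, assuming a web role. The exact UseServiceBus overload has shifted between SignalR releases, so treat the parameter names as assumptions; RoleEnvironment.DeploymentId is one way to get a prefix that differs between Production and Staging:

using System.Web.Routing;
using Microsoft.AspNet.SignalR;
using Microsoft.WindowsAzure.ServiceRuntime;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Placeholder connection string; the overload/arguments vary by SignalR release.
        string connectionString = "Endpoint=sb://...";
        // DeploymentId differs between Production and Staging, so each
        // deployment gets its own set of topics.
        string topicPrefix = RoleEnvironment.DeploymentId;
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, topicPrefix);
        RouteTable.Routes.MapHubs();   // SignalR 1.x-era hub routing
    }
}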
UseWindowsAzureServiceBus still causes problems when I have Production and Staging deployments running simultaneously, but I'm marking this question as solved.
I'm currently fighting my way through Event Hubs and EventProcessorHost. All the guidance I've found so far suggests running an EventProcessor in an Azure Cloud Service worker role. Since those are very slow to deploy and update, I was wondering: is there an Azure service that lets me run an EventProcessor in a more agile environment?
So far my rough architecture looks like this
Device > IoT Hub > Stream Analytics Job > Event Hub > [MyEventProcessor] > SignalR > Clients...
Or maybe there is another way to get from Stream Analytics to firing SignalR messages?
Any recommendations are highly appreciated.
Thanks, Philipp
You could use an Azure Web App with SignalR enabled and merge your pipeline steps [MyEventProcessor] and SignalR into one.
I have done that a few times, starting from the simple SignalR chat demo and adding the Event Hub receiver functionality to the SignalR processing. That article is close to what I mean in terms of approach.
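To make that concrete, here is a minimal sketch of merging the [MyEventProcessor] and SignalR steps: an IEventProcessor (from the Microsoft.ServiceBus.Messaging package) that pushes each Event Hub batch straight to connected SignalR clients. EventsHub and onEvent are assumed names for your hub and client callback:

using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Microsoft.ServiceBus.Messaging;

public class EventsHub : Hub { }   // hypothetical hub; clients subscribe to this

public class ForwardingEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context)
    {
        return Task.FromResult(0);
    }

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<EventsHub>();
        foreach (var eventData in messages)
        {
            string body = Encoding.UTF8.GetString(eventData.GetBytes());
            hub.Clients.All.onEvent(body);   // fan out to all connected clients
        }
        await context.CheckpointAsync();     // record progress for this partition
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        return Task.FromResult(0);
    }
}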
You may take a look at Azure WebJobs as well. Basically, a WebJob can work as a background service running your logic, and the WebJobs SDK has support for Event Hubs.
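A rough sketch of the WebJobs route, assuming the WebJobs SDK Event Hubs extension (the hub name, setting names, and ProcessEvent are placeholders):

using System;
using System.Configuration;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.ServiceBus;

class Program
{
    static void Main()
    {
        var eventHubConfig = new EventHubConfiguration();
        eventHubConfig.AddReceiver("myhub",
            ConfigurationManager.AppSettings["EventHubConnectionString"]);

        var config = new JobHostConfiguration();   // storage account comes from AzureWebJobsStorage
        config.UseEventHub(eventHubConfig);
        new JobHost(config).RunAndBlock();          // keeps the WebJob running continuously
    }

    // Invoked by the SDK for each event; you could call into SignalR from here.
    public static void ProcessEvent([EventHubTrigger("myhub")] string message)
    {
        Console.WriteLine(message);
    }
}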
You can run an EventProcessorHost in any Azure offering that will run arbitrary C# code and keep running. The options for where you should run it end up depending on how much you want to spend and what you need. So Azure Container Service may be the fancy new deployment system, but its minimum cost may not be suitable for you. I'm running my binaries that read data from Event Hubs on normal Azure Virtual Machines, with our deployment system in charge of managing them.
If your front-end processes that use SignalR to talk to clients stay around for a while, you could just make each one of them its own logical consumer (consumer group) and have it consume the entire stream. Even if they don't stay around (i.e., you're using an Azure hosting option that turns the process off when idle), you could write your receiver to start at the end of the stream rather than reprocessing older data, if that's what your scenario requires.
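A sketch of both ideas, reusing the ForwardingEventProcessor sketched earlier (hub path, consumer group name, and connection strings are placeholders): each front end registers under its own consumer group, and InitialOffsetProvider makes a reader with no checkpoint start at the end of the stream instead of replaying history.

using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static class ReceiverSetup
{
    public static async Task StartAsync(string eventHubConnectionString, string storageConnectionString)
    {
        var host = new EventProcessorHost(
            Environment.MachineName,   // host name, unique per process
            "myhub",                   // Event Hub path
            "frontend-1",              // one consumer group per logical consumer
            eventHubConnectionString,
            storageConnectionString);  // blob storage for leases/checkpoints

        var options = new EventProcessorOptions
        {
            // With no prior checkpoint, start at "now" instead of offset zero.
            InitialOffsetProvider = partitionId => DateTime.UtcNow
        };

        await host.RegisterEventProcessorAsync<ForwardingEventProcessor>(options);
    }
}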
I'm working on getting Microsoft Orleans "Grains" to put events onto a SignalR bus. There's an example project that does this, and I've linked to SignalR integration below.
It looks to me like this sample uses metadata from the Azure web and worker roles to enumerate all the web roles and explicitly publish messages to each one. It seems to me that if SignalR's backplane is configured properly on the Azure web roles, this shouldn't be necessary; one HubConnection/HubProxy should do it. Is that right?
In fact, when I look closely at the file linked to below, and see some of the odd logic in the Hub itself, I wonder if the sample functions as a rudimentary backplane.
I'm hoping someone with deeper SignalR experience can clarify this for me.
SignalR integration example: https://orleans.codeplex.com/SourceControl/latest#src/samples/GPSTracker/GPSTracker.GrainImplementation/PushNotifierGrain.cs
The sample is a rudimentary backplane, in that it sends the message to all web role instances present in the deployment and therefore doesn't require a complete backplane (such as Redis). However, it won't propagate client-originated messages to the other servers.
A more complete Orleans backplane for SignalR is available here: https://github.com/OrleansContrib/OrleansR
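For contrast, here's roughly what wiring in a full backplane looks like (Redis shown as an example; server, port, password, and event key are placeholders). Once it's registered at startup on every web role, a single server-side send reaches clients connected to any instance, with no per-role HubConnection fan-out:

using System.Web;
using Microsoft.AspNet.SignalR;

public class Global : HttpApplication
{
    protected void Application_Start()
    {
        // Register the backplane before mapping the SignalR hubs/routes.
        GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "password", "GPSTracker");
    }
}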
What is the benefit of keeping a WCF solution's web host project and service implementation project separate? For example:
Service contract library
Service implementation library
Service Host project
I understand that keeping the contract and implementation separate helps with the separation-of-concerns principle and allows the contracts to be reused in other applications that need to implement the interfaces.
But I do not understand why the service host and service implementation projects should be kept separate.
I went through the link below, but I did not understand the benefit of keeping these separate.
http://www.devx.com/codemag/Article/39837 (Page 4,5)
If anyone can offer guidance here, it would be helpful.
Thank You
As the article said:
Decoupling the services from the host lets you host your services in whatever type of host you want, and to change that host any time. Now, the host could be an IIS application, Windows Activation Services, or any self-hosting application including console applications, Windows Forms applications, Windows Services, etc. - WCF the Manual Way…the Right Way : Page 3
Test mocking, though important, arguably applies to most things programming-wise. What is more useful here, however, is how service separation helps to deploy said services in production, not how it helps developer-level testing. The latter is only useful for a short period compared to the operational life of the system in production, where operations staff may change how the service is hosted. Operations, from an ALM perspective, continues long after the SDLC completes.
Though off topic here, one can go further and decouple service logic itself not only from the service's contract but also from anything WCF-related. As mentioned in Thomas Erl's book SOA Design Patterns -
Facade logic is placed in between the contract and the core service logic. This allows the core service logic to remain decoupled from the contract. - Service Façade
Keeping the WCF implementation and WCF host process separate allows you to change how it is hosted later
Advanced: Keeping the WCF implementation and service processing logic separate ensures the latter is free to change without impacting users of the exposed service contract
In addition to Micky's answer, here are some deployment examples.
1. If you are going to host the service in IIS, you don't need the service host project, since IIS/WAS/the .NET runtime will create a service host for you upon the first client request.
2. If you want to host the service in a Windows service or a console app, you may create the service host in the Windows service project or the console application project, because creating a ServiceHost takes just a few lines of code, unless you have complex service-host management logic.
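For example, here are those "few lines" of self-hosting, sketched with placeholder contract and service names. The same MyService type could instead be hosted in IIS via a .svc file, which is the point of keeping it out of the host project:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Ping();
}

public class MyService : IMyService
{
    public string Ping() { return "pong"; }
}

class Program
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(MyService),
                                          new Uri("http://localhost:8000/MyService")))
        {
            host.AddServiceEndpoint(typeof(IMyService), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }   // disposing the host closes it
    }
}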
This may not be new, but I hope someone can put me on the right track, as it's a bit confusing during Azure deployment. I'm in the process of planning a deployment on Azure. This is what I have:
1. A public-facing ASP.NET MVC app (web role), plus a WCF service (web role) accessible only to this ASP.NET app, plus a WCF service (worker role), again accessible to 1. over a message queue.
2. A custom STS, i.e. an ASP.NET MVC app (web role) acting as an identity provider (for 1., which is a relying party), plus a WCF service (web role) to expose some of the STS functionality to RPs such as 1.
3. SQL Azure: accessed by 1 and 2.
Note: 1. will eventually grow to become a portal, with multiple WCF services hosted on web and worker roles for both internal and external access.
My question: if 1. is going to be the app exposed to the public, and 2. is for 1. to federate security (internal), how should I plan my deployment on Azure, keeping in mind that 1. will require scale-out sometime later, along with the two WCF services? Do I publish everything to one cloud service, or how should I split it?
My understanding is that a cloud service is a logical container for n web/worker roles.
But when you have two web roles, as in this case (both ASP.NET apps), which one becomes the default?
Best Regards
Satish
By default, all web roles in the solution are public. You can change this by going into the service definition and removing HTTP endpoints if you wish; you can also define internal HTTP endpoints that are only available within the cloud service, so nothing is exposed through the load balancer. The advantage of having all web roles in the same project is that it's easy to dynamically inspect the RoleEnvironment and each web role; in other words, all roles in a solution are "aware" of the other roles and their available ports. It's also easy to deploy one package.
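As an illustration of that role awareness, here's a sketch using the service runtime API (role and endpoint names are assumptions):

using System.Collections.Generic;
using System.Linq;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class RoleDiscovery
{
    // From any role instance, enumerate the STS web role's instances and
    // resolve their internal endpoints, e.g. to address its WCF services.
    public static List<IPEndPoint> GetStsEndpoints()
    {
        return RoleEnvironment.Roles["StsWcfService"].Instances
            .Select(i => i.InstanceEndpoints["InternalHttp"].IPEndpoint)
            .ToList();
    }
}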
All roles share the same DNS name (.cloudapp.net) (however you could use host headers to differentiate), but they are typically exposed by using different ports via the load balancer on your .cloudapp.net service. You can see this when the service is running in the cloud, there are links in the portal that point to each role that has a public HTTP endpoint that specifies the port. The one that is port 80 (as defined by the external HTTP endpoint) is the "default" site.
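Here's a trimmed ServiceDefinition.csdef sketch tying those two points together (role names are placeholders): the role with the port-80 input endpoint is the "default" site, and the internal endpoint never goes through the load balancer.

<ServiceDefinition name="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="PublicMvcApp">
    <Endpoints>
      <!-- Port 80: the default site on yourservice.cloudapp.net -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
  <WebRole name="StsWcfService">
    <Endpoints>
      <!-- Reachable by other roles in the deployment, never exposed publicly -->
      <InternalEndpoint name="InternalHttp" protocol="http" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>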
You could also create multiple cloud projects, and deploy them separately. In this case, each would have its own DNS name, and each is managed separately. Whether this is a good thing or not depends on how tightly coupled the applications are, and if you will typically be deploying the entire solution, or just updating individual roles within that solution. But there's no cost or scalability difference.
If you are going to frequently redeploy only one of the roles, I'd favor breaking them out.
To deploy multiple web roles under the same cloud service, have a look at this:
Best practice for Deployment of Web site into a cloud service
Multiple worker roles will be trickier to implement:
Run multiple WorkerRoles per instance
Is it better to have lots of small deployments with a few web services per war, or to have one big deployment with lots of web services per war?
In this case, assume that all of the web services share a common backend and will benefit from code sharing. For small wars, shared code would have to be put into a jar project and included from all the smaller deployments. Then each war can be tested and deployed separately, but if the backend changes, they all need to be updated rather than only one.
The backend in this case is yet another web service provided by a vendor. Updates to it are usually backwards compatible but not always.
I know there is no clear-cut answer but any experience shared will be helpful.
As a rule, you'd want one war per service. The point is that a service does not have to be a single web service (in fact, some of the endpoints can use other technologies, not just web services). A service can expose multiple endpoints and contracts.
You'd group related contracts together, e.g. a service that handles user management can have APIs for both users and groups. However, APIs related to orders probably belong in a different service (and thus a different war).
If you slice the services into pieces that are too small, you can get what I call the nanoservice antipattern, where the overhead of a service is more than the utility you get from it.