DotNet5 Hangfire microservice running on multiple servers - .net-core

We have a microservice with a Hangfire scheduler deployed across multiple servers. Currently the Hangfire job triggers from all 3 servers simultaneously, resulting in the process being duplicated on all 3 servers. Is there any way to restrict a Hangfire job to execute on only 1 server at a time?

In your web application, add logic to enable Hangfire on only one physical server. This is what I have in my startup.cs (run when the application restarts):
// if "isHangfireOn" = 1 inside web.config, enable hangfire on server.
// ONLY one physical server should run with this setting enabled.
if (("1".Equals(ConfigurationManager.AppSettings["isHangfireOn"].ToString())))
{
//Specify the use of Sqlserver for timed task persistence
GlobalConfiguration.Configuration.UseSqlServerStorage("hangfireconfig");
//enable server
app.UseHangfireServer();
//enable Dashboard panel
app.UseHangfireDashboard();
//Cyclic execution of tasks
RecurringJob.AddOrUpdate(() => Email.SendEmail(), Cron.Daily(amHour, amMins), TimeZoneInfo.Local);
}
Then in your web.config, enable "isHangfireOn" on ONE server only, and disable it on all other servers.
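For reference, a minimal sketch of the web.config entry the code above reads; the key name matches the "isHangfireOn" lookup in startup.cs, and only the value differs per server:

<appSettings>
  <!-- Set to "1" on exactly ONE server; set to "0" on all others. -->
  <add key="isHangfireOn" value="1" />
</appSettings>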

Related

Configuring Azure's Event Hub to receive events from ASP.NET MVC web application

Can someone point me in the right direction on how to configure network settings within Event Hub so I can successfully send data from the ASP.NET MVC application while running locally (localhost) as well as when I deploy the application to Azure's dev/qa/production web environments?
I have built a proof-of-concept console application in .NET locally, added my IP address within the Networking/Firewall settings on the Azure Event Hub side, and have no issue sending and receiving data from my local machine.
But when I try the same code in the ASP.NET MVC web application, the page just hangs on the CreateBatchAsync() method and does not return any exception.
using System.Text;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

// Create the client, batch a single event, and send it.
var producerClient = new EventHubProducerClient(connectionString, eventHubName);
EventDataBatch eventBatch = await producerClient.CreateBatchAsync();
eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Event 1z at " + DateTime.Now.ToString())));
await producerClient.SendAsync(eventBatch);
Any help would be appreciated.
Thanks
The call to CreateBatchAsync is the first point in your code to request a network operation and, consequently, triggers creation of the connection and link to the Event Hubs service. The connection attempt has a timeout associated with it, which is 60 seconds in the default configuration that you're using. Depending on the error being encountered, you may see retries take place, each of which would have its own 60-second timeout. With the default configuration, this would look like a 3-minute hang (60 seconds * 3 attempts).
The most common connection issue in an enterprise environment is that the ports needed for AMQP over TCP (5671/5672) are not open. Changing the transport to AMQP over WebSockets often helps, as it will use port 443 and may be routed through a proxy, if needed.
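For illustration, a minimal sketch of switching the producer to WebSockets using the options types from the Azure.Messaging.EventHubs package (connectionString and eventHubName are the same values as in the question):

var producerClient = new EventHubProducerClient(connectionString, eventHubName,
    new EventHubProducerClientOptions
    {
        ConnectionOptions = new EventHubConnectionOptions
        {
            // AMQP over WebSockets uses port 443 rather than 5671/5672.
            TransportType = EventHubsTransportType.AmqpWebSockets
        }
    });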
For more information, you may want to look at the sample for configuring Event Hubs clients and the Event Hubs network troubleshooting guide.

NServiceBus doesn't start picking up messages until an endpoint is "touched"

So when I run my two services locally, I can hit service A, which sends a command to service B, which picks it up and processes it. Pretty straightforward. However, when I publish these to my web server and send a request to service A, it sends the command to service B (I can see the message in service B's queue), but it won't get picked up and processed. I created an endpoint on service B that simply returns an OK response; if I call this endpoint, effectively "touching" the service, everything kicks in and the messages get processed from that point on.
I figured maybe this had something to do with late compiling, so I changed the publish to precompile on publish, but I get the same result.
Is there a way to have the service start processing as soon as it is published? Also worth noting that both services are WebAPI 2.
Another option (probably more “standard”) would be to move the “handlers” into a Windows Service instead of a web application.
For this Windows Service you can leverage the NServiceBus Host which will turn a standard class library into a Windows Service for you. They have a good amount of documentation about this here: https://docs.particular.net/nservicebus/hosting/nservicebus-host/?version=Host_6
I would argue that this is more stable as you can separate the processing of sending commands (Web Application / WebApi) and processing commands / publishing events (NSB Host). The host can sit on the web server itself or you can put these on a different server.
Our default architecture is to have a separate server run our NSB Hosts, since web applications and NSB Hosts scale differently. If you run the NSB Host on the web server, you can get issues where a web app gets too much traffic and takes the NSB Host processing down. You can always start simple by using 1 server for both, monitor the server, and then move things around as traffic increases.
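To give a flavour of what the NServiceBus Host expects, here is a rough sketch based on the Host_6 documentation linked above (the configuration parameter type varies between Host versions, so treat this as illustrative rather than exact):

// The Host scans the class library for a class implementing IConfigureThisEndpoint.
// AsA_Server is a built-in role suited to long-running message processing.
public class EndpointConfig : IConfigureThisEndpoint, AsA_Server
{
    public void Customize(BusConfiguration configuration)
    {
        // Endpoint-specific choices (transport, persistence, etc.) go here.
        configuration.UsePersistence<InMemoryPersistence>();
    }
}

Installing it as a Windows Service is then a matter of running NServiceBus.Host.exe /install.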
Not sure if this is the "right way" to do things, but what I ended up doing is setting up each site to be always running and auto initialize.
App pool set to "Always Running"
Website set preload enabled = true
Web.config has a webServer entry for application initialization with doAppInitAfterRestart="true" (sketched below)
Web Role added to server "Application Initialization"
With those things being set, the deployment process is basically to publish the site and iisreset. If there are better options, I'm still looking :)
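For reference, a minimal sketch of the web.config portion of this setup, assuming the IIS Application Initialization module is installed (the initializationPage value here is just an illustrative warm-up URL):

<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <!-- Request this path on startup so the app warms up without a real visitor. -->
    <add initializationPage="/" />
  </applicationInitialization>
</system.webServer>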

Create Windows Task Scheduler on Web Clusters

I have an ASP.NET web application hosted on two different IIS web servers (Server A and Server B) for HTTP request clustering. I have designed the web application so that users can create and kick off (run) Windows Task Scheduler tasks manually on the IIS web server where the website is hosted (but in my case it is hosted on two different web servers for load balancing).
The first time the user creates a scheduled task from the web UI, the HTTP request goes to Server A, so the task is created on Server A. But the next time, when the user tries to kick off the task, the HTTP request goes to Server B, where the task does not exist (the first request created it on Server A). The second request is unable to find the task on Server B and displays an alert saying no scheduled task was found.
As below, Server A has one task, MyScheduler, but Server B does not have any task with the same name.
How can I overcome this challenge?
After a lot of research and development I found that the Windows Task Scheduler allows us to define the target server where we want to create, read, run, and delete tasks.
The TaskService.TargetServer property gets the name of the computer running the Task Scheduler service that the user is connected to. You can find more details about TaskService here:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa383474(v=vs.85).aspx
Suppose we have Server A and Server B as mentioned in the question above. We can then define Server A as the target server, so the task is created on a single machine in the clustered environment.
Example:
TaskService taskService = new TaskService(targetServer: "Server A");
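Expanding on that, a rough sketch of creating and then running a task against the target server, assuming the Task Scheduler Managed Wrapper (the Microsoft.Win32.TaskScheduler NuGet package) that the TaskService type above comes from; the task name and action are illustrative:

using Microsoft.Win32.TaskScheduler;

// Connect to the Task Scheduler service on Server A, regardless of
// which web server handled the HTTP request.
using (var taskService = new TaskService(targetServer: "Server A"))
{
    // Create (or overwrite) the task definition on Server A.
    TaskDefinition definition = taskService.NewTask();
    definition.RegistrationInfo.Description = "Created from the web UI";
    definition.Actions.Add(new ExecAction("notepad.exe"));
    taskService.RootFolder.RegisterTaskDefinition("MyScheduler", definition);

    // Later requests, arriving at either web server, can find and run it.
    Task task = taskService.GetTask("MyScheduler");
    task?.Run();
}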

how to use signalr in isolated app server?

I have a 3-tier application:
1. web servers: publicly available, serve web pages and host logic
2. app servers: accessible only from web servers, running long-running processes
3. database servers: accessible only from web and app servers
I would like to use SignalR to update users about the progress of long-running processes. These processes are kicked off by users through the web servers (user -> web server -> app server). However, since the processes run on the app servers, those servers need to send updates back to the browsers.
How should this be implemented so that SignalR can be used to push updates from app servers to browsers, given that browsers do not communicate directly with the app servers?
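One common pattern (an assumption on my part, not something stated in the question) is for the web tier to host the SignalR hub and expose an internal relay endpoint that only the app servers can reach; the app server reports progress over HTTP, and the endpoint pushes it to browsers via the hub context. A sketch using classic ASP.NET SignalR 2 and WebAPI, with a hypothetical ProgressHub and ProgressController:

using Microsoft.AspNet.SignalR;
using System.Web.Http;

public class ProgressHub : Hub { }

// Internal endpoint on the web servers; app servers POST progress here.
public class ProgressController : ApiController
{
    [HttpPost]
    public IHttpActionResult Post(string jobId, int percent)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
        // One group per job, so only the browsers watching that job are updated.
        hub.Clients.Group(jobId).progressUpdated(percent);
        return Ok();
    }
}

With more than one web server behind the load balancer, a SignalR backplane (e.g. Redis or SQL Server) is also needed so an update arriving at one web server reaches browsers connected to another.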

Asp.net "background service" listening to MSMQ not working after IIS site is stopped/started

We've implemented a "background service" in our ASP.NET web app that receives messages from MSMQ at random intervals without requiring an HTTP request to start up the application. We auto-start this web app using a serviceAutoStartProvider.
Everything works when IIS initially starts up, the server is rebooted, and so on; we receive messages just fine. BUT if we just stop the site in IIS (not touching the application or app pool), the application stops receiving MSMQ messages. And when we start the web site again, the serviceAutoStartProvider is not called again, so our app does not start listening for MSMQ messages again!
If we issue an HTTP request against the web app after the IIS site has been stopped and started again, it starts listening for MSMQ messages again.
Shouldn't our "background service" web app continue to listen to MSMQ messages even if the IIS site is stopped? It won't get any requests, but I think it should continue to run.
What exactly happens in an ASP.NET application/app pool when the IIS site is stopped? Are any events fired that we can hook into? The app pool claims to be "started" in IIS Manager, but code is not running in it.
Why isn't our serviceAutoStartProvider called when the site is started again? I believe it is "by design", since the application isn't really stopped. But the application isn't running either; it has to be woken up by an actual HTTP request.
When the IIS web app shuts down (e.g. because no new HTTP(S) requests arrive within the idle timeout), the .NET app domain (within the app pool worker process) is completely closed and unloaded. This includes all background threads, including those used by the .NET thread pool.
A web app can be configured with a longer (or no) idle timeout, so that background worker threads can continue to process work.
But it would be better to run such workers in a specialist service process managed completely separately.
Or, even better, use IIS application hosting with WCF to create the MSMQ listener. I understand that in this case the integration of Windows Process Activation Services (WAS) with IIS would restart the web app if a new message arrived after it had been shut down.
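To give a flavour of that approach, a sketch only, with hypothetical names (the contract/binding pieces are standard WCF; the queue and service would be your own): a one-way contract bound to netMsmqBinding and hosted in IIS/WAS, so that message arrival activates the application:

using System.ServiceModel;

[ServiceContract]
public interface IMessageProcessor
{
    // MSMQ transports require one-way operations.
    [OperationContract(IsOneWay = true)]
    void Process(string payload);
}

public class MessageProcessor : IMessageProcessor
{
    public void Process(string payload)
    {
        // Handle the dequeued message here.
    }
}

The service is then exposed through an endpoint with binding="netMsmqBinding" in web.config, with the net.msmq protocol enabled on the site so WAS can activate the application when a message arrives in the queue.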
I would host the MSMQ listener in a Windows Service. Why couple it to IIS?
UPDATE
Actually what I mean is: why couple MSMQ and ASP.NET in the same app pool?
You can now use the "Application Initialization" feature of IIS 8 (and IIS 7.5); more information, including version availability and usage documentation, can be found at:
http://www.iis.net/learn/get-started/whats-new-in-iis-8/iis-80-application-initialization
This replaces the "Application Warm-Up Module", which is no longer supported, and provides proper control over component/service initialization in an "always running" scenario.
