How does Hangfire work for a web farm? - asp.net

I'm trying to understand exactly how Hangfire will behave if used in a web farm, where each ASP.NET application is configured identically, and there are N instances using the same shared SQL Server database for Hangfire storage.
The documentation just says that distributed locks are used to prevent race conditions, but this is a bit low-level; I need to understand what that means in practice.
Example:
If I have 5 web server instances, and I create a background job with a schedule that will run once a day at 5pm, does this mean that the first instance to obtain a 'lock' on the job will end up running it, and all other instances will ignore the job while it is locked?
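For concreteness, this is roughly how such a recurring job would be registered (just a sketch; NotificationJobs and SendDailyDigest are made-up names):

    using Hangfire;

    GlobalConfiguration.Configuration.UseSqlServerStorage("<hangfire-connection-string>");

    // The schedule is persisted in the shared SQL Server storage, so every
    // server in the farm sees the same recurring job definition.
    RecurringJob.AddOrUpdate<NotificationJobs>(
        "daily-digest",               // recurring job id
        j => j.SendDailyDigest(),
        Cron.Daily(17));              // once a day at 17:00

    public class NotificationJobs     // made-up job class for illustration
    {
        public void SendDailyDigest() { /* send the 5pm email, etc. */ }
    }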
I'm assuming that Hangfire will only allow one instance to process a job at a time, but I haven't confirmed it.
What about if I actually wanted to run a job on each server instance at the same time?
If anyone has any practical experience with Hangfire in a web farm, I'm all ears.

These are the basic rules that I've established after more research and testing:
Hangfire will execute a background job on the first Hangfire server that has available capacity, based on the number of worker threads on that server.
Hangfire will continue to execute background jobs on that server until its worker pool is saturated, at which point it will move on to the next available server in the farm, and so on.
Hangfire servers are automatically included in a web farm if they use the same Hangfire storage instance, so generally speaking no extra configuration is required.
If you want to run specific background jobs on specific servers, or use a different workload distribution algorithm, you can use named queues: a queue is given a name and potentially assigned to a specific server instance, and background jobs must then be scheduled to run on that queue as well, as sketched below.
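For illustration, here is roughly what that looks like with the OWIN-based setup (a sketch, not the only way to host Hangfire; the queue names, connection string and job class are made up):

    using System;
    using Hangfire;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            GlobalConfiguration.Configuration.UseSqlServerStorage("<hangfire-connection-string>");

            // This server instance only listens on the "reports" and "default" queues.
            app.UseHangfireServer(new BackgroundJobServerOptions
            {
                Queues = new[] { "reports", "default" },
                WorkerCount = Environment.ProcessorCount * 5
            });
        }
    }

    public class ReportJobs
    {
        // Jobs decorated this way are only picked up by servers listening on "reports".
        [Queue("reports")]
        public void GenerateNightlyReport() { /* ... */ }
    }

A server whose Queues array omits "reports" will simply never fetch those jobs, which is how you pin particular work to particular machines.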

Related

.NET Core worker process on virtual kubelet on ACI... how to start the job?

I am creating a .NET Core worker process to run some background tasks. This job will run as an Azure Container Instance hosted on a virtual kubelet of an existing AKS cluster, as described in https://medium.com/@fbeltrao/scheduling-jobs-in-aks-with-virtual-kubelet-97f59c466c2d
The problem is that this job is not a timer/schedule-based job; it should run when a particular event happens (like an HTTP POST). I am not sure how to invoke this job. Is my approach correct?
PS: This job will process around 25k rows, and each row's processing involves lots of business logic and multiple DB updates.

Load balance background tasks in Azure Web App

I am developing an ASP.NET application that will be hosted as an Azure web app. Part of the app will continuously record multiple web-based cameras by retrieving a snapshot every N seconds. I would like to design the app so that the processes that record the cameras can be run on multiple instances. I would like it to load balance between all instances, but not duplicate effort for any one camera.
For example, if I have 100 cameras, and am running on 2 instances, I want each instance to get 50 cameras to process. If I have 5 instances, each instance should get 20 cameras to process. As I add cameras or scale instances up/down I would like for the system to load balance the work evenly.
If it's feasible, I would rather not spin up dedicated VMs just for processing cameras, due to increased cost.
I'm somewhat familiar with Akka.NET, Hangfire, and WebJobs, but am unclear if these will help in this scenario. I have used Hangfire and WebJobs to do background processing, but not with this sort of load-balancing requirement. Will these or some other framework or tool help me load balance these background tasks evenly across Azure Web App Instances? How should I go about setting up these or another framework to do this?
I honestly don't think you want to try to "balance" the servers. I think you just want to make sure the work is well distributed. If I were you, I would use a queue system like SQS to queue up all of the cameras that need a snapshot and let each instance worker dequeue one at a time and process it.
A good approach could be to have a master server responsible for queueing up the snapshots, and then have all of your worker servers simply work out of this shared queue. Even if one server happens to process more than the others, that is fine, since they were all working out of the same queue. It just means that this server was able to process its jobs more quickly than the others.
To be honest, there are a lot of ways to approach this. You could do something as simple as having a shared list of your cameras, with a timestamp for the last snapshot, and work off of that. Each server would request a camera, look at the list, find one that was stale, then update the timestamp and take the snapshot for that camera. The downside to something like this is that you will struggle with non-atomic operations and the possibility of multiple workers making the request at the same time and both working on the same camera. These are the kinds of things a queue system helps you with, because as soon as one of those queue items is in flight, it is no longer available. And because each server is responsible for invalidating its items once it is finished, if a server were to crash mid-snapshot, the work would simply go back into the queue.
No matter which solution you choose, it is going to boil down to having a central system/list for serving up stale cameras.
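A minimal sketch of that kind of worker loop, assuming an Azure Storage queue and the Azure.Storage.Queues client (the queue name, connection string and TakeSnapshotAsync are placeholders):

    using System;
    using System.Threading.Tasks;
    using Azure.Storage.Queues;
    using Azure.Storage.Queues.Models;

    var queue = new QueueClient("<storage-connection-string>", "camera-snapshots");
    await queue.CreateIfNotExistsAsync();

    // Stand-in for the actual capture-and-store logic.
    static Task TakeSnapshotAsync(string cameraId) => Task.CompletedTask;

    while (true)
    {
        // The visibility timeout acts as the lease: while this instance works on
        // a camera, no other instance can receive the same message.
        QueueMessage[] messages = await queue.ReceiveMessagesAsync(
            maxMessages: 1, visibilityTimeout: TimeSpan.FromMinutes(2));

        foreach (QueueMessage message in messages)
        {
            await TakeSnapshotAsync(message.MessageText);

            // Deleting the message marks it done; if this instance crashed before
            // reaching this point, the message would reappear and another
            // instance would pick it up.
            await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
        }

        if (messages.Length == 0)
            await Task.Delay(TimeSpan.FromSeconds(5));   // queue drained, idle briefly
    }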
The Azure WebJob SDK uses the Storage Account you set up to balance the work between the various instances that are running your Jobs. You can gain finer control by using a Queue to divide up the work that needs doing and then scale your App Service Plan based on the Queue length.
(A rough diagram of that architecture accompanied the original answer.)
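In code, that WebJobs pattern is typically a function bound to the queue; a sketch assuming the WebJobs SDK 3.x storage bindings (the queue name and message contents are illustrative):

    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class SnapshotFunctions
    {
        // The SDK leases each queue message to exactly one instance, so a camera
        // is handled once even when the App Service Plan is scaled out.
        public static void TakeSnapshot(
            [QueueTrigger("camera-snapshots")] string cameraId,
            ILogger log)
        {
            log.LogInformation("Taking snapshot for camera {CameraId}", cameraId);
            // ...capture and store the snapshot here...
        }
    }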

Does a web application run faster when Maximum Worker Processes is more than one?

For IIS 7: does a web application run faster when Maximum Worker Processes is more than one?
By increasing the Maximum Worker Processes over 1 you're creating a Web Garden. So the short answer is: likely no... unless:
To quote ex-IIS PM Chris Adams' article "I have flowers... should I get a Web Garden?":
Web gardens was designed for one single reason – Offering applications that are not CPU-bound but execute long running requests the ability to scale and not use up all threads available in the worker process.
The examples might be things like -
Applications that make long running database requests (e.g. high computational database transaction)
Applications that have threads occupied by long-running synchronous and network intensive transactions
The question that you must ask yourself -
What is the current CPU usage of the server?
What is the application’s threads executing and what type of requests are they?
Based on the criteria above, you should understand better when to use Web Gardens. Web Gardens, in the metabase, equals the MaxProcesses metabase property if you are not using the User Interface to configure this feature.
cscript adsutil.vbs set w3svc/apppools/defaultapppool/maxprocesses 4
I hope that I get some mileage out of having this blog and more importantly I hope it helps you understand this better…
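That adsutil.vbs call targets the IIS 6-style metabase; on IIS 7 the same setting lives under the application pool's processModel and can be changed with appcmd, something like this (assuming the default pool name):

    %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /processModel.maxProcesses:4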
You may want to look at "What is Web Garden?" from Deploying ASP.NET Websites on IIS 7.0 [codeproject.com] which says:
By default each Application Pool runs with a Single Worker Process (W3Wp.exe). We can assign multiple Worker Processes With a Single Application Pool. An Application Pool with multiple Worker process is called "Web Gardens". Many worker processes with the same Application Pool can sometimes provide better throughput performance and application response time. And each worker process should have their own Thread and Own Memory space.
A web garden is faster than a single worker process if the application contains locks that prevent parallelization within one process, for example GDI+-based image processing.
See this and this question for more info.

Pros and Cons of running Quartz.NET embedded or as a windows service

I want to add quartz scheduling to an ASP.NET application.
It will be used to send queued up emails.
What are the pros and cons of running Quartz.NET as a Windows service vs. embedded?
My main concern is how Quartz.NET in embedded mode handles a variable number of worker processes in IIS.
Here are some things you can consider while deciding whether to run embedded or not:
If you are going to be creating jobs ONLY from within the hosting application, then run embedded. Otherwise, run as a service.
If your jobs might need permissions that are different from the permissions that the web app has, run as a service.
If your jobs are long-running jobs, or jobs that use a lot of memory, run as a service.
If you need to run your jobs in a clustered environment for either performance, scalability or fault tolerance, run as a service.
From the items above you can deduce that my preference is to run it as a service. This is because if you are going to go through the trouble of setting up a job scheduler, this means that you have jobs that need to run on a schedule, or long running jobs. A service is usually the better choice for this type of work.
Quartz.NET can be instantiated on a per-application basis (in a web farm, the configuration dictates the number of schedulers). You can safely run multiple schedulers if your jobs are backed by a database and Quartz.NET is configured in clustered mode (with clocks synced, naturally).
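A minimal sketch of that clustered, database-backed configuration (assuming the synchronous Quartz.NET 2.x-era API; the instance name and connection string are placeholders, and the exact ADO provider name varies by Quartz.NET version):

    using System.Collections.Specialized;
    using Quartz;
    using Quartz.Impl;

    var properties = new NameValueCollection();
    properties["quartz.scheduler.instanceName"] = "EmailScheduler";
    properties["quartz.scheduler.instanceId"] = "AUTO";   // each node gets a unique id
    properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
    properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
    properties["quartz.jobStore.dataSource"] = "default";
    properties["quartz.jobStore.clustered"] = "true";     // nodes coordinate through the shared job store
    properties["quartz.dataSource.default.connectionString"] = "<quartz-db-connection-string>";
    properties["quartz.dataSource.default.provider"] = "SqlServer";

    IScheduler scheduler = new StdSchedulerFactory(properties).GetScheduler();
    scheduler.Start();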
The main concern is application pool handling prior to IIS 7.5. Without constant keep-alive requests, your application's worker process can get recycled, and your scheduler will be down until someone issues a web request that spins the application pool up again. IIS 7.5 added a feature (the AlwaysRunning start mode) to keep application pools running all the time.
Otherwise there should not be a big difference between the two models.

Webservice Applicationpool

I have two different web services (running on the local machine) pointing to one application pool. 1. Can I do that? Is there any performance concern? I don't know much about how application pools work.
Another .NET application uses the two web services, but one of them, which is called internally by an SSIS package within the .NET application, frequently stops responding.
What might be the reason, how can I make sure it responds all the time, and is there a better way to improve performance?
If I'm missing anything or you need further information, comments are welcome.
Yes, you can have multiple web applications using the same application pool.
Is it a performance concern? If there is really high traffic or faulty code, then perhaps.
Application pools allow pushing sites to different processes, reducing the risk of each affecting the other. If one app pool contains an application/web application that has a memory leak, the leak will only affect that particular process, at least directly. Each process can be recycled either by time or system parameters, which mitigates risks of having something in a bad state.
Performance? Another benefit of app pools is the ability to have multiple instances running simultaneously (a similar thing to putting each app in its own pool). The benefit is that more requests can be handled at a time. The downside is that you cannot use in-process session state, and your application state will be duplicated for each instance of the process. You would need to consider how much 'stuff' you keep in session and how your caching scheme would be affected, but it has the potential to give a web application more scalability.
You mention calling SSIS... I am assuming that is a long-running operation, so you would probably want to push that call to some sort of queue that can be processed outside of the web service request. MSMQ might work for you. If using a queue like that, you would initiate the running of the code and then have a way of checking the status of the call to see if it is done.
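A rough sketch of what handing that work to MSMQ could look like (the queue name and message payload are made up; System.Messaging is the classic .NET Framework API):

    using System.Messaging;

    const string queuePath = @".\private$\ssis-requests";

    // Create the queue on first use.
    if (!MessageQueue.Exists(queuePath))
        MessageQueue.Create(queuePath);

    // The web service only enqueues the request and returns immediately;
    // a separate worker process dequeues it and kicks off the SSIS package.
    using (var queue = new MessageQueue(queuePath))
    {
        queue.Send("run-package:LoadCustomers", "SSIS request");
    }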
I agree with Greg Ogle, but one more point I think is worth mentioning. Splitting the applications into multiple app pools will also give you an added benefit when it comes to troubleshooting. If the applications are split out, you can tell exactly which app pool is tied to which w3wp.exe process when you need to, like, say, when that w3wp.exe process is taking 98% of your CPU.
