How does Hangfire work for a web farm?

I'm trying to understand exactly how Hangfire will behave if used in a web farm, where each ASP.NET application is configured identically, and there are N instances using the same shared SQL Server database for Hangfire storage.
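For context, each instance bootstraps Hangfire in essentially the same way; a minimal OWIN-style sketch of what I mean (the "HangfireDb" connection string name is just a placeholder):

    using Hangfire;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Every instance in the farm points at the same SQL Server storage
            GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireDb");

            app.UseHangfireDashboard();
            app.UseHangfireServer();   // starts a Hangfire server inside this web app
        }
    }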

The documentation just says that distributed locks are used to prevent race conditions, but this is a bit low-level; I need to understand what that means in practice.

Example:

If I have 5 web server instances, and I create a background job with a schedule that will run once a day at 5pm, does this mean that the first instance to obtain a 'lock' on the job will end up running it, and all other instances will ignore the job while it is locked?

I'm assuming that Hangfire will only allow one instance to process a job at a time, but I haven't confirmed it.
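To make the scenario concrete, the registration I have in mind looks roughly like this, executed identically at startup by every one of the 5 instances (the job id, class name, and cron expression are just placeholders):

    using Hangfire;

    public class ReportService
    {
        public void Generate() { /* the actual work */ }
    }

    public static class RecurringJobs
    {
        // Called from Startup on every instance; AddOrUpdate is idempotent for a
        // given job id, so re-registering it from each server is safe.
        public static void Register()
        {
            RecurringJob.AddOrUpdate(
                "daily-report",                        // shared recurring job id
                () => new ReportService().Generate(),  // method Hangfire will invoke
                "0 17 * * *");                         // cron: every day at 17:00
        }
    }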

What if I actually wanted to run a job on each server instance at the same time?

If anyone has any practical experience with Hangfire in a web farm, I'm all ears.

Asymmetric answered 6/10, 2016 at 6:44

These are the basic rules that I've established after more research and testing:

  • Hangfire will execute a background job on the first Hangfire server that has available capacity, where capacity is determined by the number of worker threads configured on that server

  • Hangfire will continue to execute background jobs on that server until the worker pool is saturated, at which point it will move to the next available server in the farm, and so on

  • Hangfire servers are automatically included in a web farm if they use the same Hangfire storage instance, so generally speaking no extra configuration is required.

  • If you want to run specific background jobs on specific servers, or use a different workload distribution scheme, you can use named queues: a queue is given a name and assigned to one or more specific server instances, and background jobs must then be scheduled onto that queue as well (see the sketch after this list).
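To make that last point concrete, here is a rough sketch of the named-queue approach as I understand it (the "web1" queue name, connection string name, and console job are placeholders; the options go wherever you call UseHangfireServer):

    using System;
    using Hangfire;
    using Hangfire.States;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireDb");

            // This instance listens on its own queue plus the shared default queue.
            // Queue names must be lowercase; "web1" would differ per server.
            app.UseHangfireServer(new BackgroundJobServerOptions
            {
                Queues = new[] { "web1", "default" }
            });

            // A job is then pinned to this server by enqueueing it on that queue.
            var client = new BackgroundJobClient();
            client.Create(
                () => Console.WriteLine("Runs only on the server consuming 'web1'"),
                new EnqueuedState("web1"));
        }
    }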

Asymmetric answered 18/1, 2017 at 7:21
