Hangfire DisableConcurrentExecution: What happens when the timeout expires?
Per the Hangfire 0.8.2 announcement post, Hangfire has a DisableConcurrentExecution filter which, when applied to a method, prevents multiple instances of the method from executing concurrently.

The DisableConcurrentExecution filter takes a timeoutInSeconds int parameter. From the example in the linked article:

[DisableConcurrentExecution(timeoutInSeconds: 10 * 60)]
public void SomeMethod()
{
    // Operations performed inside a distributed lock
}

My question: given a job that is waiting to obtain the lock for a DisableConcurrentExecution-filtered method, what happens when its waiting time exceeds the timeoutInSeconds value?
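For context, the filter is usually applied to the job method and the job registered separately. A minimal sketch (class, method, and job id names here are illustrative, not from the original post; requires the Hangfire package):

```csharp
using Hangfire;

public class ReportJobs
{
    // Only one instance of this method may run at a time. A worker that
    // picks up a second instance will wait up to 10 minutes for the
    // distributed lock before giving up.
    [DisableConcurrentExecution(timeoutInSeconds: 10 * 60)]
    public void GenerateReport()
    {
        // Long-running work protected by the distributed lock.
    }
}

// Registration, e.g. at application startup:
// RecurringJob.AddOrUpdate<ReportJobs>(
//     "generate-report", x => x.GenerateReport(), Cron.Hourly);
```

Note that while a worker waits on the lock it is occupied, which is what the comment below warns about.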

Nonesuch answered 9/11, 2016 at 21:2
Note: jobs waiting for the lock will block worker threads until it is released. So be careful if you rely, deliberately or accidentally, on this attribute to run items in sequence, because you may also be preventing other tasks from running. – Reniti
I tested this recently. The job instance was recorded as failed in the dashboard, with an exception indicating that the timeout had expired while waiting for an exclusive lock.

You'll see the following exception:

Hangfire.Storage.DistributedLockTimeoutException: Timeout expired. The timeout elapsed prior to obtaining a distributed lock on the 'xxx' resource.
    at Hangfire.SqlServer.SqlServerDistributedLock.Acquire(IDbConnection connection, String resource, TimeSpan timeout)
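So the waiting job fails with DistributedLockTimeoutException, and by default Hangfire's AutomaticRetry filter will then reschedule it. If you would rather not accumulate failed/retried records for lock timeouts (as the commenters below describe), one option is to disable retries on that job. A hedged sketch (method name is illustrative; requires the Hangfire package):

```csharp
using Hangfire;

public class EmailJobs
{
    // If the lock cannot be acquired within 30 seconds, the job throws
    // DistributedLockTimeoutException. Attempts = 0 stops Hangfire from
    // retrying it, and Delete removes the job instead of leaving a
    // failed record in the dashboard.
    [DisableConcurrentExecution(timeoutInSeconds: 30)]
    [AutomaticRetry(Attempts = 0, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
    public void SendPendingEmails()
    {
        // Work that must not run concurrently.
    }
}
```

Whether deleting the overlapping run is acceptable depends on the job: for an idempotent periodic sweep it usually is, since the next scheduled run will pick up the remaining work.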
Fancier answered 9/11, 2016 at 21:28
I'm having a very similar issue; sadly, it's a common problem with recurring jobs. – Definitely
Is there an easier way to avoid this exception in cases where we don't need a retry? – Barber
I have a similar issue. I run an email sender every 5 minutes; if it has to send 1,000 emails it can take longer, so I don't want the next job to start, nor a failed record to be created. See stackoverflow.com/a/52211678 – Otherworld
