How to accurately detect if a SQL Server job is running and deal with the job already running?

I'm currently using code like this to detect whether a SQL Server job is running (this is SQL Server 2005, with all service packs applied):

return (select isnull(  
(select top 1 CASE 
    WHEN current_execution_status = 4 THEN 0
    ELSE 1
    END
from openquery(devtestvm, 'EXEC msdb.dbo.sp_help_job')
where current_execution_status = 4 and
    name = 'WQCheckQueueJob' + cast(@Index as varchar(10))
), 1)
)

No problems there, and generally speaking, it works just fine.

But.... (always a but)

On occasion, I'll invoke this, get back a "job is not running" result, and then try to start the job via

exec msdb.dbo.sp_start_job @JobName

and SQL will return that "SQLAgent has refused to start the job because it already has a pending request".

OK, also not a problem. It's conceivable that there's a small window where the target job could be started by something else after my check but before my call to start it. However, I can just wrap the call in a TRY/CATCH and ignore the error, right?

begin try
if dbo.WQIsQueueJobActive(@index) = 0 begin
    exec msdb.dbo.sp_start_job @JobName
    break
end         
end try begin catch
    -- nothing here
end catch
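Since an empty CATCH block hides every failure, a slightly safer variant swallows only the one error I expect and re-raises anything else. This is a sketch; 22022 is the error number SQL Agent raises for the pending-request refusal on the builds I've seen, so verify it on yours:

```sql
begin try
    if dbo.WQIsQueueJobActive(@index) = 0
        exec msdb.dbo.sp_start_job @JobName
end try
begin catch
    -- swallow only the "already has a pending request" refusal (error 22022)
    -- and re-raise anything unexpected
    if error_number() <> 22022
    begin
        declare @msg nvarchar(2048)
        select @msg = error_message()
        raiserror(@msg, 16, 1)
    end
end catch
```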

Here's the problem, though.

9 times out of 10, this works just fine. SQL Agent will raise the error, it's caught, and processing just continues on; since the job is already running, no harm, no foul.

But occasionally, I'll get a message in the Job History view (keep in mind the above code to detect if a specific job is running and start it if not is actually running from another job) saying that the job failed because "SQLAgent has refused to start the job because it already has a pending request".

Of course, this is the exact error that TRY CATCH is supposed to be handling!

When this happens, the executing job just dies; not immediately, from what I can tell, but pretty close. I've put logging all over the place and there's no consistency: one time it fails at place A, the next time at place B. In some cases, place A and place B have nothing but a

select @var = 'message'

in between them. Very strange. Basically, the job appears to be unceremoniously dumped, and anything left to execute in the job is simply not executed at all.

However, if I remove the "exec StartJob" (or have it invoked exactly one time, when I KNOW that the target job can't already be running), everything works perfectly and all my processing in the job runs through.

The purpose behind all this is to have a job started as a result of a trigger (among other things), and, if the job is already started, there's really no need to "start it again".

Anyone ever run into behavior like this with SQL Agent's Job handling?

EDIT: Current flow of control is like so:

  1. Change to a table (update or insert)...
  2. fires trigger which calls...
  3. a stored proc which calls...
  4. sp_Start_Job which...
  5. starts a specific job which...
  6. calls another stored proc (called CheckQueue) which...
  7. performs some processing and...
  8. checks several tables and depending on their contents might...
  9. invoke sp_start_job on another job to start up a second, simultaneous job to process the additional work (this second job calls the CheckQueue sproc also but the two invocations operate on completely separate sets of data)
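Steps 1-4 above might be sketched like this (table, trigger, and proc names here are hypothetical, not the poster's actual objects):

```sql
create trigger trgQueueChanged on dbo.WorkQueue
after insert, update
as
begin
    set nocount on
    -- steps 3/4: the proc just hands off to SQL Agent and returns immediately;
    -- it wraps: exec msdb.dbo.sp_start_job 'WQCheckQueueJob1'
    exec dbo.WQStartQueueJob
end
```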
Zing answered 2/5, 2011 at 19:2 Comment(3)
Not the answer I'm really looking for, but as a workaround, I created a "monitor" table that contains a flag indicating whether each particular job is "really" running or not, plus an expiry time to guard against failures. If this flag indicates a job is not running, there's no way it can be running, so it's safe to start it. If it indicates it IS running, I use the sp_help_job method to check whether the job is running; if that returns that it's NOT running, I bump a counter and continue. – Zing
If the counter is bumped x times in a row with some amount of time between them, then I can be reasonably certain that the job is NOT in fact running, so I clear its status and make it available to run again. Pain in the butt, but it works consistently. – Zing
Still hoping for a less roundabout solution, though. – Zing
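The monitor-table workaround described in the comments above might look roughly like this (the schema is illustrative, not the poster's actual table):

```sql
create table dbo.WQJobMonitor (
    JobName   sysname  not null primary key,
    IsRunning bit      not null default 0,  -- the "really" running flag
    ExpiresAt datetime null,                -- guards against a job dying without clearing its flag
    MissCount int      not null default 0   -- bumped each time sp_help_job disagrees with IsRunning
)
```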

First of all, have you had a chance to look at Service Broker? From your description, it sounds like that's what you actually want.

The difference would be that instead of starting a job, you put your data into a Service Broker queue, and SB will call your processing proc asynchronously, completely side-stepping issues with already-running jobs. It will automatically spawn and terminate additional threads as demand dictates, it takes care of ordering, etc.

Here's a good (and vaguely related) tutorial: http://www.sqlteam.com/article/centralized-asynchronous-auditing-with-service-broker

Let's assume that you can't use SB for whatever reason (but seriously, do!).

If not, what about using the job spid's CONTEXT_INFO?

  1. Your job calls a wrapper proc that execs each step individually.
  2. The first statement inside the wrapper proc is

    DECLARE @context_info VARBINARY(30)
    SET @context_info = CAST('MyJob1' AS VARBINARY)
    SET CONTEXT_INFO @context_info
    
  3. When your proc finishes (or in your catch block)

    SET CONTEXT_INFO 0x0
    
  4. When you are looking at calling your job, do this:

    IF NOT EXISTS (SELECT * FROM master..sysprocesses WITH (NOLOCK) WHERE context_info=CAST('MyJob1' AS VARBINARY))
        EXEC StartJob
    

When your wrapper proc terminates or the connection is closed, your context_info goes away.
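Putting steps 1-4 together, the wrapper proc might look like this (a sketch; the proc and step names are assumptions):

```sql
create procedure dbo.MyJob1Wrapper
as
begin
    declare @context_info varbinary(128)
    set @context_info = cast('MyJob1' as varbinary(128))
    set context_info @context_info     -- tag this spid as "MyJob1 is running"

    begin try
        exec dbo.MyJob1Step1           -- each job step proc, executed in order
        exec dbo.MyJob1Step2
        set context_info 0x0           -- clear the tag on success
    end try
    begin catch
        set context_info 0x0           -- and on failure
    end catch
end
```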

You could also use a global temp table (e.g. ##JobStatus). It will disappear when all spids that reference it disconnect, or when it's explicitly dropped.
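A minimal sketch of the global-temp-table variant (names are illustrative; note that each Agent job step runs on its own spid, so the table only lives as long as the session that created it):

```sql
-- inside the job's wrapper proc: CREATE TABLE fails if another spid
-- already created it, which doubles as the "already running" signal
create table ##JobStatus_MyJob1 (started datetime not null)
insert into ##JobStatus_MyJob1 (started) values (getdate())

-- elsewhere, before trying to start the job:
if object_id('tempdb..##JobStatus_MyJob1') is null
    exec msdb.dbo.sp_start_job @job_name = 'MyJob1'
```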

Just a few thoughts.

Undrape answered 3/8, 2011 at 17:35 Comment(3)
I did not know about the CONTEXT_INFO feature. Interesting, although, at the end of the day, that approach is basically the same thing as using a table with rows containing the SPIDs of the jobs and some tracking info. Possibly a little simpler, though, since the context info goes away automatically when the job finishes. – Zing
I looked into Service Broker initially, but the client nixed that as they had no one on staff who knew anything about it. Still, Service Broker is definitely on my mind for similar future projects. Thanks! – Zing
I'm giving this answer the nod, mainly because 1) it mentions CONTEXT_INFO, which I'd not used before, and 2) it is essentially what I ended up with using a job-tracking table, although the global temp table might be a little cleaner, in that it tends to clean up after itself automatically, whereas with a normal table, I had to make sure things got cleaned up explicitly. – Zing

I have a query that returns the currently running jobs; maybe it can help you. It has been working for me, but if you find any fault in it, let me know and I will try to rectify it. Cheers.

-- get the running jobs
-- marcelo miorelli
-- 10-dec-2013

SELECT sj.name
      ,DATEDIFF(SECOND,aj.start_execution_date,GetDate()) AS Seconds
 FROM msdb..sysjobactivity aj
 JOIN msdb..sysjobs sj on sj.job_id = aj.job_id
WHERE aj.stop_execution_date IS NULL -- job hasn't stopped running
 AND aj.start_execution_date IS NOT NULL -- job is currently running
--AND sj.name = 'JobName'
and not exists( -- make sure this is the most recent run
    select 1
    from msdb..sysjobactivity new
    where new.job_id = aj.job_id
      and new.start_execution_date > aj.start_execution_date )
Couscous answered 27/9, 2014 at 14:52 Comment(0)

To deal with a job already running:

  1. Open Task Manager.
  2. Check whether a process with image name "DTExec.exe" is running.
  3. If that process is running and it is the problematic job, execute "End Process".

Agamogenesis answered 10/7, 2013 at 11:16 Comment(0)
