the job repository (the database implementation) reflects the last 'known' state. so if the JVM crashed while a job was running, the row for that execution will never be updated and it stays marked as in-flight in the database.
when the database is 'out-of-sync' with the JVM like this, recovery has to be a manual process; there doesn't appear to be an out-of-the-box solution for it. the simplest approach would be to execute a script on startup that checks the batch tables for any executions that still look like they are running and 'fails' them:
update batch_job_execution
   set status = 'FAILED',
       exit_code = 'FAILED',
       exit_message = 'FORCED UPDATE',
       end_time = current_timestamp
 where status in ('STARTING', 'STARTED');

note that the status column holds BatchStatus values, which have no RUNNING entry; an in-flight execution is STARTING or STARTED. setting end_time matters as well, because spring batch identifies running executions by a null end_time.
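the same crash leaves the corresponding step executions stuck in an in-flight state, so you may want to fail those too. a minimal companion sketch, assuming spring batch's default table names:

update batch_step_execution
   set status = 'FAILED',
       exit_code = 'FAILED',
       exit_message = 'FORCED UPDATE',
       end_time = current_timestamp
 where status in ('STARTING', 'STARTED');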
one thing you will want to consider in this situation is whether the JobRepository tables, and the jobs associated with them, are shared with another JVM. in that case a blanket update could fail jobs that are legitimately running elsewhere, so you may want a pass that only fails an execution once it has been running longer than any execution of the same job_name in its history (a subselect with max(end_time - create_time) for the same job_name).
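a minimal sketch of that guard, assuming a postgresql dialect (interval arithmetic is database-specific) and the default table names; note that job_name lives on batch_job_instance, so the subselect needs a join:

update batch_job_execution e
   set status = 'FAILED',
       exit_code = 'FAILED',
       exit_message = 'FORCED UPDATE',
       end_time = current_timestamp
 where e.status in ('STARTING', 'STARTED')
   and current_timestamp - e.create_time > (
       -- longest runtime of any completed execution of the same job
       select max(h.end_time - h.create_time)
         from batch_job_execution h
         join batch_job_instance hi on hi.job_instance_id = h.job_instance_id
        where h.end_time is not null
          and hi.job_name = (select i.job_name
                               from batch_job_instance i
                              where i.job_instance_id = e.job_instance_id));

if a job_name has no completed history, max() comes back null and the row is left alone, which is the conservative behavior you want here.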