The transaction log for the database is full
I have a long running process that holds open a transaction for the full duration.

I have no control over the way this is executed.

Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.

So the process fails with the error "The transaction log for database 'xxx' is full".

I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.

Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.

Any ideas?

If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.

There is plenty of disk space, we have the log in simple logging mode and have backed up the log prior to kicking off the process.

-=-=-=-=- UPDATE -=-=-=-=-

Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:

I am getting the following error...

Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

So following that advice I checked the "log_reuse_wait_desc column in sys.databases" and it held the value "ACTIVE_TRANSACTION".
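(For anyone following along, that check is just a query against sys.databases; 'xxx' is a placeholder for the database name.)

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'xxx';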

According to Microsoft: http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx

That means the following:

A transaction is active (all recovery models).

• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.

• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.

Have I misunderstood something?

-=-=-=- UPDATE 2 -=-=-=-

Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.

-=-=-=- Final UPDATE -=-=-=-

The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.

I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.

Thanks all for your input.

Holston answered 16/7, 2013 at 11:9 Comment(3)
re "...and have backed up the log"....if the database is in Simple mode, you wouldn't be able to backup up the log, log backups are not applicable for simple mode. Is it bulk-logged?Courthouse
I backed up the entire DB and shrunk it which resulted in the Log shrinking to 1MB. I then increased the size of the Log file to 20GB initially, and now 30 GB.Holston
Related post - TempDB Log Space and ACTIVE_TRANSACTIONOutput
To fix this problem, change the Recovery Model to Simple and then shrink the log file:

1. Database Properties > Options > Recovery Model > Simple

2. Database Tasks > Shrink > Files > Log

Done.

Then check your db log file size at Database Properties > Files > Database Files > Path

To check the full SQL Server log, open the Log File Viewer at SSMS > Database > Management > SQL Server Logs > Current
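If you prefer T-SQL to the SSMS dialogs, the log size can also be checked with the standard DBCC command, shown here for convenience:

-- Reports log file size and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE);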

Gannon answered 14/7, 2014 at 3:16 Comment(9)
No, that doesn't fix the problem. The problem was that the log file grew during a long running process until it ran out of disk space. It was corrected by temporarily moving the log file to another drive that had 1TB of space available. You can't shrink the log file while a long running process - that is holding open a transaction - is in progress. That process was solely responsible for the file growth.Holston
As @Holston already said, this does not fix the OP's problem. It may free up some currently unused space, but as soon as a long transaction is running again, the space will be taken up again (and probably fail even earlier)Organicism
That didn't fix the problem. My log has only 500 bytes. I think this problem started after I did a backup yesterday.Boding
This is definitely the fix if you have a couple megabytes left to spare on the full drive.Collaborative
It might fix A problem. It does not fix the problem reported in the OP.Holston
For me it fixed the problem. Simply it was test database and I could change the recovery model. Otherwise it's not suggested solution for production databases.Josefinejoseito
It fixed "a" problem for you, but the problem you were experiencing is not the problem reported in the OP.Holston
that didn't fix the problem. I still have "is full due to 'LOG_BACKUP'"Colicroot
I get "ACTIVE_TRANSACTION" error, so I can't follow this procedureAstrosphere

I had this error once and it ended up being the server's hard drive that had run out of disk space.

Acaroid answered 5/6, 2014 at 13:59 Comment(2)
Read the OP's updates. This turned out to be the issue.Vasilek
That was the case for me as well. I found a completely unrelated program had created a ridiculously huge log file (over 400 GB, holy smoke!) and filled up all remaining hard drive space! Once I cleared that up, the issue went away.Miletus

Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files"
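For reference, the same settings can be applied from T-SQL; a minimal sketch, assuming the database is called YourDB and the logical log file name is YourDB_log (both names are placeholders):

-- Grow the log in fixed 1 GB increments with no upper size limit
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_log, FILEGROWTH = 1GB, MAXSIZE = UNLIMITED);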

Oke answered 16/7, 2013 at 11:21 Comment(6)
Yes. It's set to autogrow 10%, unrestricted. The issue is that autogrow won't work while there is an open transaction.Holston
Do you have any idea how big the transaction will be? Try to set the transaction log size bigger than that estimate. If disk allocation is not an issue, allocate plenty of space at the beginning, for data and log alike; it improves performance. Don't autogrow by 10%, grow by a few GB instead, so performance will be good enough.Guardianship
SQL Server will autogrow the log during a transaction if it needs more space to complete that transaction.Oke
Hi Ross, I've provided my logic for thinking the open transaction is preventing the growth in an update to the question. Am I incorrect in my reasoning?Holston
Hi Luis, to be honest I am surprised it needs anything close to the size. in the last run I increased it to 20GB. I have just increased it to 30GB, but have not kicked off that process pending any ideas you guys might throw my way.Holston
@Holston SQL Server doesn't require you to have it reserved. If you have autogrow, SQL Server does the autogrow during the transaction. Having it big enough can save a lot of time, but shouldn't affect the process.Guardianship

Is this a one time script, or regularly occurring job?

In the past, for special projects that temporarily required lots of space for the log file, I created a second log file and made it huge. Once the project was complete, we removed the extra log file.
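A rough T-SQL sketch of that approach (database name, file name, path, and size are all placeholders):

-- Add a temporary second log file on a drive with more space
ALTER DATABASE YourDB
ADD LOG FILE (NAME = YourDB_log2, FILENAME = 'E:\SQLLogs\YourDB_log2.ldf', SIZE = 50GB);

-- Once the project is done, empty the extra file and drop it
DBCC SHRINKFILE (YourDB_log2, EMPTYFILE);
ALTER DATABASE YourDB REMOVE FILE YourDB_log2;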

Scouting answered 16/7, 2013 at 11:35 Comment(6)
I wouldn't say it's a one-time job, but it is rare that we have to do it. I didn't create a second log file, but I did increase the initial size of my current log file to 30GB. During my last run it was set to 20GB and it still failed.Holston
Would having a second log file be better somehow than having one big one given that I only have one drive to work with?Holston
As I recall now, the additional file mostly enabled us to access another, bigger drive.Scouting
How big is the data being imported? If you're importing 30 GB of data, your log file may need to be at least as big.Scouting
The process is taking the database users, mapping them to active directory users, creating GUIDs for each of them and updating those GUIDs in tables across the database where the original user's GUID was referenced. It's a very slow process, but I have no real way of quantifying the transaction load other than to say I only have about 30 users, and the data portion of the DB is only 26 GB.Holston
Log size is the key. The current task failed again and I couldn't believe my eyes when I saw the size of the log file at the point that it failed. It only processed half of the accounts and was already at 53GB. It looks like I'm going to have to clear somewhere in the vicinity of another 60-70GB to be able to complete this process.Holston

This is an old school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) call "checkpoint". Calling "checkpoint" causes SQL to write to disk all of those memory-only changes (dirty pages, they're called) and items stored in the transaction log. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
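A minimal sketch of that pattern, for cases where you do control the script (the table and flag column are invented for illustration):

WHILE 1 = 1
BEGIN
    -- Each batch commits on its own, so the log only has to hold one batch
    UPDATE TOP (10000) dbo.SomeTable
    SET Processed = 1
    WHERE Processed = 0;

    IF @@ROWCOUNT = 0 BREAK;

    -- Write dirty pages to disk; in SIMPLE recovery this lets log space be reused
    CHECKPOINT;
END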

Matilda answered 16/7, 2013 at 12:17 Comment(1)
Unfortunately I have no control over the way the process is performed. Dynamics CRM is a Microsoft application and the organisation import process is part of that application.Holston

I met the error "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION'" while deleting old rows from tables of my database to free disk space. I realized that this error would occur if the number of rows to be deleted was bigger than 1000000 in my case. So instead of using one DELETE statement, I divided the delete task by using DELETE TOP (1000000)... statements.

For example:

instead of using this statement:

DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())

using following statement repeatedly:

DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
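If running that by hand is tedious, the repetition can be scripted; a minimal sketch using the same table:

WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE());
    IF @@ROWCOUNT = 0 BREAK; -- stop once nothing is left to delete
END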
Debbi answered 14/8, 2018 at 4:14 Comment(1)
That did the trick, easy solution!Nievesniflheim

Try this:

USE YourDB;  
GO  
-- Truncate the log by changing the database recovery model to SIMPLE.  
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;  
GO  
-- Shrink the truncated log file to 50 MB.  
DBCC SHRINKFILE (YourDB_log, 50);  
GO  
-- Reset the database recovery model.  
ALTER DATABASE YourDB
SET RECOVERY FULL;  
GO 

I hope it helps.

Bermuda answered 10/6, 2019 at 15:33 Comment(1)
This was one of the first things attempted and it's even mentioned in the question text. This does not solve the problem of a single open transaction filling up the log and using all of the available disk space. This entire process took place in simple mode. However, you are one of many people who offered this exact answer having not read the question...Holston

The following will truncate the log.

USE [yourdbname] 
GO

-- TRUNCATE TRANSACTION LOG --
DBCC SHRINKFILE(yourdbname_log, 1)
BACKUP LOG yourdbname WITH TRUNCATE_ONLY
DBCC SHRINKFILE(yourdbname_log, 1)
GO

-- CHECK DATABASE HEALTH --
ALTER FUNCTION [dbo].[checker]() RETURNS int AS BEGIN  RETURN 0 END
GO
Adolphadolphe answered 5/4, 2014 at 14:43 Comment(2)
Hey Pinal, this functionality was removed completely from SQL Server 2008 and above: brentozar.com/archive/2009/08/…Spurgeon
With later versions, try BACKUP LOG <myDB> TO DISK=N'NUL:'Huskamp

If your database recovery model is full and you didn't have a log backup maintenance plan, you will get this error because the transaction log becomes full due to LOG_BACKUP.

This will prevent any action on this database (e.g. shrink), and the SQL Server Database Engine will raise a 9002 error.

To overcome this behavior, I advise you to check this post: "The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP", which shows detailed steps to solve the issue.
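The core of the fix is simply taking a log backup so the space can be reused; a minimal sketch (database name and backup path are placeholders):

BACKUP LOG YourDB TO DISK = N'D:\Backups\YourDB_log.trn';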

Operand answered 6/7, 2016 at 0:4 Comment(0)

Adding to the answers above, I also want to mention that, if possible, you can free up the server to fix this issue. If the server is already full due to the database overflow, you can delete some unnecessary files from the server where your DB sits. At least this temporarily fixes the issue and lets you query the DB.

Goblet answered 14/7, 2021 at 5:59 Comment(1)
True. In my case the server didn't have sufficient resources so we had to plug in an external drive, which did the trick. Mostly, this was a difficult to investigate issue because we were surprised to see the log needed so much space, the process in question was black boxed and the whole thing was rolled back before we had a chance to check on it.Holston

My problem was solved with multiple executions of limited deletes, like this:

Before

DELETE FROM TableName WHERE Condition

After

DELETE TOP(1000) FROM TableName WHERE Condition
Vashtee answered 30/6, 2019 at 5:45 Comment(0)

Solved: as per the error, the free space left in the drive is not sufficient. To resolve it, either extend the drive space or move the MDF/LDF/log file to a drive with enough space (a T-SQL sketch of the move follows). Note: check the existing path via the steps below:

Database properties -> Select File option
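If you choose to move the log file, a hedged sketch of the usual procedure (names and paths are placeholders; the physical .ldf must be moved at the OS level while the database is offline):

-- Point the database metadata at the new log location
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_log, FILENAME = 'E:\SQLLogs\YourDB_log.ldf');

ALTER DATABASE YourDB SET OFFLINE;
-- ...move the .ldf file to E:\SQLLogs\ here...
ALTER DATABASE YourDB SET ONLINE;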

Belting answered 1/9, 2022 at 12:13 Comment(1)
Duplicate answer.Holston

The answer to the question is not deleting rows from a table; it is the tempDB space being taken up by an active transaction. This happens mostly when a merge (upsert) is being run, where we try to insert, update and delete in one statement. The only option is to make sure the DB is set to the simple recovery model and also to increase the file to the maximum space (add another file group). Although this has its own advantages and disadvantages, these are the only options.

The other option that you have is to split the merge (upsert) into two operations: one that does the insert and the other that does the update and delete, roughly as sketched below.
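A rough illustration of that split (table and column names are invented):

-- 1) Insert rows that are new to the target
INSERT INTO dbo.Target (Id, Val)
SELECT s.Id, s.Val
FROM dbo.Source AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);

-- 2) Update rows that already exist
UPDATE t
SET t.Val = s.Val
FROM dbo.Target AS t
JOIN dbo.Source AS s ON s.Id = t.Id;

-- 3) Delete target rows that are gone from the source
DELETE t
FROM dbo.Target AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Source AS s WHERE s.Id = t.Id);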

Harney answered 7/10, 2018 at 19:0 Comment(1)
If you read the question you'd know that the DB was already in simple recovery mode as this was happening. This does not help when you have one long running open transaction. The file continues to grow until the transaction is committed or rolled back. Read the first line of the question "I have a long running process that holds open a transaction for the full duration."Holston

Here's my hero code. I faced this problem and used this code to fix it.

USE master;

-- Check why log space cannot be reused, and whether CDC is enabled
SELECT name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM sys.databases
WHERE name = 'XX_System';

-- Check whether the database is published for replication
SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');

-- Mark all pending replicated transactions as distributed, then remove replication
USE XX_System;
EXEC sp_repldone NULL, NULL, 0, 0, 1;
EXEC sp_removedbreplication XX_System;

-- Verify: oldest active transaction, log space usage, replication counters
DBCC OPENTRAN;
DBCC SQLPERF(LOGSPACE);
EXEC sp_replcounters;

-- Check log space again after cleanup
DBCC SQLPERF(LOGSPACE);
Mona answered 13/11, 2019 at 6:36 Comment(1)
Please put your answer always in context instead of just pasting code. See here for more details.Interceptor

In my case, this was on the local machine I use for development.
I had not performed a transaction log backup in about 6 months.
Once I performed the transaction log backup, the issue was resolved.

Wittman answered 29/11, 2023 at 18:26 Comment(1)
You didn't have the same issue. You had a completely different issue that resulted in the same error message.Holston

Try this:

If possible, restart the MSSQLSERVER and SQLSERVERAGENT services.

Cottle answered 26/2, 2020 at 17:46 Comment(0)
