Firebird backup/restore is frustrating; is there a way to avoid it?

I am using Firebird, but lately the database has been growing seriously. A lot of delete statements run, as well as updates and inserts, and the database file size grows really fast. After deleting tons of records the database size does not decrease, and even worse, I have the feeling that queries are actually getting a bit slower. To work around this, a daily backup/restore process has been put in place, but because of the time it takes to complete, I have to say that using Firebird is becoming really frustrating.
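For reference, the daily cycle is the usual gbak backup-and-restore sequence, roughly like the following (file names and credentials here are just placeholders):

# back up the live database into an archive file
gbak -b -user SYSDBA -password masterkey /data/mydb.fdb /backup/mydb.fbk

# restore into a fresh file; this rebuilds all tables and indices without any garbage
gbak -c -user SYSDBA -password masterkey /backup/mydb.fbk /data/mydb.new.fdb

# the restored file then replaces the production database while users are disconnected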

  • Any ideas on workarounds or solutions for this are welcome.

  • I am also considering switching to InterBase, because I heard from a friend that it doesn't have this issue. Is that so?

Congratulatory answered 3/6, 2011 at 8:32 Comment(16)
@Yordan I agree, this is one of the most frustrating problems in Firebird.Pass
@JustMe, it's most probably working as designed. Be careful with your transactions.Wraf
The transactions are fine - everything is optimised, as we have checked and rechecked - but with over 200 GB of data passing through daily, the garbage grows so high that it has become really ugly lately.Congratulatory
Can you post database statistics before backup/restore?Wraf
200 GB daily? How much of this data do you delete? The database file not shrinking is a feature, not a bug. It prevents the server from giving space back to the disk just to ask the disk for more space again. It works for you, not against you. You are probably fighting against it when you should work with it instead. Please comment on the answers below, as they will give us more information about your case. Also, provide the database statistics as asked.Tal
@Wraf - I don't have any statistics directly; all I see is what shows up in the process and resource viewer. Just to note that the server is not loaded with anything other than the database itself.Congratulatory
@Tal - we usually delete up to about 0.5-1 GB of data directly... while users delete about the same amount... it is a bit random, but sometimes it is a lot.Congratulatory
True, 0.5 GB seems like nothing next to 200 GB. But it is still a lot of data, and as WarmBooter said below, it can trigger the garbage collector (GC). You should get database statistics using gstat; the minimal statistics can be obtained with gstat -h (gstat example of use). Also, I would like to know: what version of Firebird? What kind of server (Classic, SuperServer or SuperClassic)? What kind of transactions do you have? How many? How long do they last? How many users?Tal
Firebird 2.5, Classic; the transactions last up to a few seconds (mostly), and there are about 20 users.Congratulatory
@Yordan: You can create the statistics with the gstat tool. Documentation here: firebirdsql.org/manual/gstat.html However, I see you marked my answer as accepted; did you find the problem? Let us know what the real reason was, so someone else can also benefit from your question.Wraf
Well, nothing has changed - everything is as it was. We have done all the things that were proposed, but the database still needs the backup/restore procedure. That's all. It seems we are cursed with this Firebird :|. Hopefully Firebird 3 will come out soon; otherwise banging my head against the wall seems like a good solution right now :DCongratulatory
"we have done all the things which is proposed" IDK... I still think you should discuss this more. As Harriv said, someone else can benefit of your problem someday. I know some of the things you already tried, and maybe you are tired of this. But we don't know what exactly you did unless you say it. So we are trying to help... gstat statistics would help to get a better picture. Also, FB 3 will be out, but how will you be sure of this change if the most part of the users doesn't have a problem with it at all? Unless you did asked for this feature already, what I am not aware...Tal
Fine then. I removed the accepted answer. I hope we will dig deeper and end up solving the problem completely. :|Congratulatory
:| (this smiley is not very happy, you know...) Well, I read your other question; now I see that this is not a new topic. #6135399 Did you analyze the gstat statistics as proposed? Did you use Sinatica Monitor as Mariuz suggested in the other topic? If you have already done everything you think is possible, maybe you would like to ask for help on the Firebird support list or developer list. That could help them tune FB3 to help you. :)Tal
Yep - Firebird 3 will bring a lot of new things that are "A CHRISTMAS WISH" for our development. But the Firebird 3 release date is the biggest mystery after the Big Bang... :DCongratulatory
Here are some test results with a 1 TB database: ib-aid.com/articles/item104Wraf

We have a lot of huge Firebird databases in production but have never had an issue with database growth. Yes, every time a record is deleted or updated, an old version of it is kept in the file, but sooner or later the garbage collector will sweep it away. Once the two processes balance each other, the database file will grow only by the size of new data and indices.

As a general precaution against enormous database growth, try to make your transactions as short as possible. In our applications we use one READ ONLY transaction for reading all the data; this transaction stays open for the whole application lifetime. For every batch of insert/update/delete statements we use short, separate transactions.
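A minimal isql sketch of that pattern (database path, credentials, and the LOG_ARCHIVE table are placeholders; in a real application the same idea applies through whatever driver you use):

isql -user SYSDBA -password masterkey /data/mydb.fdb <<'EOF'
COMMIT; -- end isql's default transaction first

-- long-lived reader: READ ONLY + READ COMMITTED is the combination that does not
-- hold back garbage collection, so it can stay open for the application lifetime
SET TRANSACTION READ ONLY READ COMMITTED;
SELECT COUNT(*) FROM RDB$RELATIONS;
COMMIT;

-- every batch of changes runs in its own short transaction and commits right away
SET TRANSACTION READ WRITE;
DELETE FROM LOG_ARCHIVE WHERE CREATED < CURRENT_DATE - 30; -- LOG_ARCHIVE is a hypothetical table
COMMIT;
EOF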

Slow database operations can also result from obsolete index statistics. Here you can find an example of how to recalculate statistics for all indices: http://www.firebirdfaq.org/faq167/
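The statement involved is SET STATISTICS INDEX. A sketch (index name, path and credentials are placeholders; the linked FAQ shows a complete script):

isql -user SYSDBA -password masterkey /data/mydb.fdb <<'EOF'
-- recalculate selectivity for one index (IDX_ORDERS_CUSTOMER is a hypothetical name)
SET STATISTICS INDEX IDX_ORDERS_CUSTOMER;
COMMIT;

-- or generate one statement per user index and run the output as a script
SELECT 'SET STATISTICS INDEX ' || TRIM(RDB$INDEX_NAME) || ';'
FROM RDB$INDICES
WHERE COALESCE(RDB$SYSTEM_FLAG, 0) = 0;
EOF

This can be scheduled after large data changes or as part of the nightly maintenance.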

Baxy answered 3/6, 2011 at 9:55 Comment(1)
I sometimes put in a scheduled sweep once a day and that seems to keep Firebird databases peachy.Bewick

Check whether you have unfinished transactions in your applications. If a transaction is started but never committed or rolled back, the database has to keep its own record versions for every transaction after the oldest active one.

You can check the database statistics (with gstat or an external tool): they show the oldest transaction and the next transaction. If the difference between those numbers keeps growing, you have a stuck transaction problem.
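For example (path and credentials are placeholders, and the counter values are made up):

# print the header page, which includes the transaction counters
gstat -h -user SYSDBA -password masterkey /data/mydb.fdb

# relevant lines of the output:
#   Oldest transaction    123400
#   Oldest active         123410
#   Oldest snapshot       123410
#   Next transaction      987654

If "Next transaction" keeps pulling away from "Oldest active", look for a connection that never commits or rolls back.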

There are also monitoring tools that check the situation; one I've used is Sinatica Monitor for Firebird.

Edit: Also, the database file never shrinks automatically. Parts of it get marked as unused (after a sweep operation) and will be reused. http://www.firebirdfaq.org/faq41/
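A sweep can be run by hand, and the automatic sweep interval can be tuned; a sketch, assuming default SYSDBA credentials and a hypothetical path:

# sweep now: removes stale record versions and marks the space as reusable
# (the file itself will not shrink)
gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb

# the automatic sweep interval defaults to 20000; 0 disables automatic sweeps
gfix -housekeeping 0 -user SYSDBA -password masterkey /data/mydb.fdb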

Wraf answered 3/6, 2011 at 9:28 Comment(0)

The space occupied by deleted records will be reused as soon as it has been garbage collected by Firebird. If GC is not happening (transaction problems?), the database will keep growing until GC can do its job.

Also, there is a problem when you do a massive delete in a table (e.g. millions of records): the next select on that table will "trigger" the garbage collection, and performance will drop until GC finishes. The only way to work around this is to do the massive deletes at a time when the server is not heavily used, and run a sweep after that, making sure there are no stuck transactions.
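A sketch of that off-peak pattern (the AUDIT_LOG table, the 90-day cut-off, path and credentials are placeholders):

# run the mass delete in an off-peak window
echo 'DELETE FROM AUDIT_LOG WHERE CREATED < CURRENT_DATE - 90; COMMIT;' | isql -user SYSDBA -password masterkey /data/mydb.fdb

# sweep right after, so the first daytime SELECT does not pay the garbage collection cost
gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb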

Also, keep in mind that if you are using "standard" tables to hold temporary data (i.e. information that is inserted and deleted many times), you can end up with a corrupted database in some circumstances. I strongly suggest you start using the Global Temporary Tables feature.
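A minimal example of declaring one (the table name and columns are hypothetical; Global Temporary Tables are available from Firebird 2.1 on):

echo '
CREATE GLOBAL TEMPORARY TABLE TMP_IMPORT (
    ID      INTEGER NOT NULL,
    PAYLOAD VARCHAR(200)
) ON COMMIT DELETE ROWS;
COMMIT;' | isql -user SYSDBA -password masterkey /data/mydb.fdb

Rows in a GTT are private to the transaction (or to the connection, with ON COMMIT PRESERVE ROWS) and disappear automatically, so no DELETE statements are needed for the temporary data.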

Viquelia answered 5/6, 2011 at 0:38 Comment(2)
You can also disable automatic garbage collection and execute it manually when other usage is low.Wraf
@Harriv, you can also disable it on a per-connection basis, as explained by Dmitri K. on the IBDeveloper blog. It's not common... but you can. (:Tal
