Programmatically determine available free space on a database?
Ok, I have tried searching around for this answer, but with no luck. I have about 50 databases on our SQL Server at work that I have to check almost every day to see if any space can be freed up via shrinking. Our drives tend to fill up a lot, so this is why it is an almost daily task.

Is there a quick way to check what databases actually have free space available? Does anyone know of a system/custom stored procedure or system view that can generate a list of every database and how much free space is available to be shrunk on that database?

By the way, we're using SQL Server 2005.

Etrem answered 29/5, 2009 at 13:33 Comment(0)

Run sp_spaceused for each database:

--temporary table to hold database names
CREATE TABLE #Databases (name sysname)

INSERT INTO #Databases
SELECT name FROM master..sysdatabases

DECLARE abc CURSOR FOR
   SELECT name FROM #Databases

DECLARE @name sysname
DECLARE @sql nvarchar(1024)

OPEN abc

FETCH NEXT FROM abc INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
   --the USE must be in the same batch as sp_spaceused:
   --a standalone EXECUTE('USE ...') changes context only
   --for the duration of that one batch
   SET @sql = 'USE ' + QUOTENAME(@name)
            + '; EXECUTE sp_spaceused @UpdateUsage = ''TRUE'''
   PRINT @sql
   EXECUTE (@sql)

   FETCH NEXT FROM abc INTO @name
END

CLOSE abc
DEALLOCATE abc
DROP TABLE #Databases

Sample results for a single database

Execute T-SQL:

USE Contoso
EXECUTE sp_spaceused @UpdateUsage=True

Results:

database_name: Contoso
database_size:     85.13 MB
unallocated_space: 15.41 MB

reserved:         70,368 KB (68.7 MB)
data:             42,944 KB (41.9 MB)
index_size:       24,200 KB (23.6 MB)
unused:            3,224 KB ( 3.1 MB)

Shrink:

DBCC SHRINKDATABASE (Contoso)

Check space again:

EXECUTE sp_spaceused @UpdateUsage=True

Results:

database_name: Contoso
database_size:     69.81 MB
unallocated_space:  0.20 MB

reserved:         70,256 KB (68.6 MB)
data:             43,024 KB (42.0 MB)
index_size:       24,200 KB (23.6 MB)
unused:            3,032 KB ( 3.0 MB)
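If you would rather get numbers you can filter than a stack of sp_spaceused result sets, a sketch of an alternative: FILEPROPERTY with the 'SpaceUsed' property reports used pages per file of the current database (so you would still run it per database, e.g. from the cursor above in place of sp_spaceused). Sizes here are in 8 KB pages, hence the division by 128 to get MB:

```sql
-- Free space per file of the CURRENT database.
-- FILEPROPERTY only sees files of the database you are in.
SELECT name,
       size / 128.0 AS size_mb,
       size / 128.0 - FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS free_mb
FROM sys.database_files
```

The free_mb column is roughly what DBCC SHRINKFILE could give back for that file.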
Reggi answered 29/5, 2009 at 14:9 Comment(1)
The code didn't work completely, but it got me going in the right direction. Thanks. Sorry about the delay on selecting the best answer. – Etrem

The answer is not to worry about how much free space there is in each database, but instead to run a nightly or weekly maintenance task that does the compaction. Odds are you're not compacting your log files; that's where I usually saw the most growth.

Run a full backup, then a transaction log backup WITH TRUNCATE_ONLY, and once that completes, run DBCC SHRINKFILE() on your log files and databases.

Then you only have to monitor total available diskspace, and not individual database growth.

SQL Server Shrink Database: you absolutely HAVE to perform a full backup and a transaction log backup before this will work.
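A sketch of that sequence, assuming SQL Server 2005 (the database name Contoso, the backup path, and the logical log file name Contoso_log are placeholders for your own; note that WITH TRUNCATE_ONLY was deprecated after 2005 and removed in 2008):

```sql
BACKUP DATABASE Contoso TO DISK = N'D:\Backups\Contoso.bak'

-- Deprecated, but still available on SQL Server 2005:
BACKUP LOG Contoso WITH TRUNCATE_ONLY

USE Contoso
DBCC SHRINKFILE (Contoso_log)   -- logical name of the log file
DBCC SHRINKDATABASE (Contoso)
```

On 2008 and later you would switch the database to SIMPLE recovery instead of truncating the log, but that is outside what this answer covers.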

You can get information about the individual files by running 'select * from master.dbo.sysdatabases' and then 'select * from [dbname]..sysfiles'. I'm not 100% sure; I haven't touched a SQL Server instance in almost a year, and sysfiles might only be in the 'master' database.
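For example (the database name Contoso is just a placeholder; the size column in sysfiles is in 8 KB pages, so dividing by 128 gives MB):

```sql
-- List all databases on the instance:
SELECT name FROM master.dbo.sysdatabases

-- Then, for each database of interest, list its files:
SELECT name, filename, size / 128.0 AS size_mb
FROM Contoso..sysfiles
```

sysfiles does exist in every database (it is a per-database compatibility view), so the second query works from inside any database.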

Loth answered 29/5, 2009 at 14:4 Comment(1)
Scheduled shrink jobs are not a good idea, even on log files. See sqlskills.com/BLOGS/PAUL/post/… – Orlina

Probably something in WMI or SQL views. But I gotta ask - how does the cost of your time poking around with this every day compare to buying some disk drives?

Upstage answered 29/5, 2009 at 13:39 Comment(2)
That's what I have asked, but apparently it costs less than buying space :). Also, if we buy more space, our users will most likely just consume it quickly again, even with us telling them to delete old tables, etc. It's kind of a vicious cycle. – Etrem
I agree with Ryanulit. Although there are some issues which can be solved with more space, you're just hiding the original problem, which is the management of the space available. – Coating

You can either use sp_spaceused and consume its result, or look in the sys.allocation_units view yourself and count the total used pages. You shouldn't look at the 'database' level but at the file level, because some databases have multiple NDFs. Start with sys.database_files to see all files in the database; its size column contains the total number of pages in each file. Then sum up total_pages from sys.allocation_units to get the used pages:

with sum_au as (
select data_space_id
    , sum(total_pages) as total_pages
    from sys.allocation_units au 
    group by data_space_id)
select ds.name
    , df.size
    , total_pages as in_use 
    from sys.database_files df
    join sys.data_spaces ds on df.data_space_id = ds.data_space_id
    join sum_au au on au.data_space_id = ds.data_space_id
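Subtracting the two columns gives an estimate of the reclaimable space per file. A sketch extending the query above (the derived column names and the /128.0 page-to-MB conversion are my additions, not part of the original answer):

```sql
-- Same join as above, plus a free-space column;
-- a page is 8 KB, so pages / 128 = MB.
with sum_au as (
    select data_space_id
        , sum(total_pages) as total_pages
        from sys.allocation_units au
        group by data_space_id)
select ds.name
    , df.size / 128.0                    as size_mb
    , au.total_pages / 128.0             as in_use_mb
    , (df.size - au.total_pages) / 128.0 as free_mb
    from sys.database_files df
    join sys.data_spaces ds on df.data_space_id = ds.data_space_id
    join sum_au au on au.data_space_id = ds.data_space_id
```

Run in each database, ordering by free_mb descending would surface the files most worth shrinking.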
Eaton answered 29/5, 2009 at 16:12 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.