
SQL Server Backup, Index, Stats and Maintenance Checks

Updated: Mar 25, 2023


As an integral part of any business, the reliable operation and longevity of SQL Server databases are crucial. To keep your SQL Server environment running smoothly, appropriate preventative maintenance tasks should be conducted on a regular basis to help maintain system performance, increase stability, and reduce downtime. This blog post provides a comprehensive list of suggested maintenance tasks and checks for DBAs and CEOs alike, so that proper upkeep can be ensured for their SQL Server environment.


This blog post will help you query and ascertain whether your SQL Server is properly maintained.


A List of SQL Server Maintenance Tasks

SQL Server maintenance tasks are crucial for ensuring the health and performance of your SQL Server database. Here are brief definitions of some of the common SQL Server database maintenance tasks:


Re-indexing:

This task involves rebuilding the indexes of a database, which can improve the database's performance by reducing the fragmentation of data and making it easier for the database engine to retrieve data from the database.


DBCC CHECKDB:

This command performs a consistency check on a database to identify logical and physical inconsistencies; with the appropriate repair options, it can also attempt to fix them.


Update statistics:

This task updates the statistics for a database, which helps SQL Server optimize query execution plans by providing accurate information about the data distribution in the database.


Backups:

This task involves creating regular backups of the various database files, which is crucial for disaster recovery and data protection.


While all of these maintenance tasks are important for SQL Server, it's worth noting that shrinking databases frequently is generally not a good idea. Shrinking a database involves reclaiming unused space in the database, which can be useful if you have a database that is growing rapidly and you need to free up disk space. However, shrinking databases frequently can have several negative consequences:

  • Performance degradation: Shrinking a database involves moving data around on disk, which can cause fragmentation and slow down query performance.

  • Increased file fragmentation: Shrinking a database can also cause the physical files that make up the database to become fragmented, which can also slow down performance.

  • Wasted shrink/grow cycles: a database that needed the space once will usually need it again, so a shrink is often followed by expensive autogrow events when the database expands back.

In summary, while re-indexing, DBCC CHECKDB, statistics updates, and backups are all important maintenance tasks for SQL Server, it's generally not a good idea to shrink databases frequently. Instead, focus on optimizing your database's performance and monitoring its growth to ensure that you have enough disk space available.

Re-indexing - Check If Indexes Are Maintained

Indexes are database objects that help optimize database queries by allowing the database engine to quickly locate data rows in a table based on the values of one or more columns. An index consists of a data structure that organizes the values of the indexed columns into a tree-like structure, making it faster to search for specific values.


When a query is executed against a table, the database engine can use an index to quickly find the subset of rows that match the query's conditions, rather than having to scan the entire table. This can significantly improve query performance, especially for large tables or tables with complex queries.


However, as data is inserted, updated, and deleted from a table, the index can become fragmented or out of date, which can lead to reduced query performance. Therefore, it is important to maintain indexes regularly to ensure that they remain optimized for the queries that are executed against them.


There are several tasks involved in maintaining indexes, including:


Rebuilding indexes:

This involves dropping and recreating an index to remove fragmentation and update statistics. This is typically done when an index is heavily fragmented or has a large number of deleted rows.


Reorganizing indexes:

This involves physically reordering the data pages in an index to remove fragmentation and improve query performance. This is typically done when an index has moderate fragmentation.


Updating index statistics:

This involves updating the statistics that the database engine uses to determine the most efficient query execution plan for a given index. This is typically done when a large amount of data has been added, modified, or deleted from a table.
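Each of these tasks maps to a short T-SQL statement. As a sketch, with a hypothetical table and index (dbo.Orders and IX_Orders_CustomerId are placeholder names):

```sql
-- Rebuild: drops and recreates the index structure (heavy fragmentation)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;

-- Reorganize: shuffles leaf pages in place (moderate fragmentation)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

-- Refresh the statistics associated with the index
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerId;
```

Note that a REBUILD also updates the index's statistics as a side effect, while a REORGANIZE does not.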

By regularly performing these tasks, you can help ensure that your indexes remain optimized for your database queries, which can improve overall database performance and user experience.


You can use the following T-SQL query to determine the percentage of index fragmentation per table in a database:

SELECT 
    DB_NAME() AS DatabaseName, 
    t.NAME AS TableName, 
    i.name AS IndexName, 
    index_type_desc AS IndexType, 
    avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS ps
INNER JOIN sys.tables t ON ps.object_id = t.object_id
INNER JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE ps.index_id > 0
ORDER BY DatabaseName, TableName, IndexName;

This query uses the sys.dm_db_index_physical_stats dynamic management function to retrieve information about the physical fragmentation of indexes in the current database. The query joins the results with the sys.tables and sys.indexes system tables to retrieve the table and index names, as well as the index type.


The avg_fragmentation_in_percent column in the sys.dm_db_index_physical_stats function provides the percentage of fragmentation for each index. The query orders the results by database name, table name, and index name.


Note that this query will return results for all indexes in the current database, including system tables and indexes. If you only want to retrieve results for user-defined tables, you can add a filter to the sys.tables join, like this:

INNER JOIN sys.tables t ON ps.object_id = t.object_id AND t.is_ms_shipped = 0

This will exclude system tables from the results.
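In practice you will usually also want to ignore very small or barely fragmented indexes, since reorganizing a tiny index buys nothing. A common (though arbitrary) convention is to look only at indexes with more than a few hundred pages and more than 5% fragmentation; a sketch of that variant:

```sql
SELECT
    t.name AS TableName,
    i.name AS IndexName,
    ps.avg_fragmentation_in_percent,
    ps.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS ps
INNER JOIN sys.tables t ON ps.object_id = t.object_id AND t.is_ms_shipped = 0
INNER JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE ps.index_id > 0
  AND ps.page_count > 500                    -- skip tiny indexes (threshold is a judgment call)
  AND ps.avg_fragmentation_in_percent > 5    -- skip near-pristine indexes
ORDER BY ps.avg_fragmentation_in_percent DESC;
```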


Re-indexing - Rebuild Indexes In A Database

Here is a T-SQL script to rebuild or reorganize all indexes in a database:

DECLARE @SQL NVARCHAR(MAX)

SET @SQL = ''

-- Run this in the context of the database whose indexes you want to maintain.
-- Note: ONLINE = ON requires Enterprise edition; remove it on Standard edition.
SELECT @SQL = @SQL + 
    CASE 
        WHEN ps.avg_fragmentation_in_percent > 30 THEN 
            'ALTER INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ' REBUILD WITH (FILLFACTOR = 80, ONLINE = ON);' 
        ELSE 
            'ALTER INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ' REORGANIZE WITH (LOB_COMPACTION = ON);'
    END + CHAR(13) + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) ps
INNER JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
INNER JOIN sys.objects o ON ps.object_id = o.object_id AND o.is_ms_shipped = 0
WHERE ps.index_id > 0 -- skip heaps
  AND i.name IS NOT NULL

EXEC (@SQL)

This script uses the sys.dm_db_index_physical_stats dynamic management function to retrieve the fragmentation level of each index in the database. It then generates a dynamic SQL statement to either rebuild or reorganize the index based on its fragmentation level.

Indexes with a fragmentation level greater than 30% are rebuilt using the ALTER INDEX ... REBUILD statement, while indexes with a fragmentation level less than or equal to 30% are reorganized using the ALTER INDEX ... REORGANIZE statement.


For more information about re-indexing and alternative solutions, check out this blog.


DBCC CHECKDB


DBCC CHECKDB is a command in SQL Server that checks the logical and physical consistency of a database. It can detect a wide range of corruption issues and, with the appropriate REPAIR options, attempt to fix many of them. It is recommended to run DBCC CHECKDB regularly to maintain database integrity and ensure that the database is healthy and free of any corruption that may cause data loss or performance issues.


Running DBCC CHECKDB on a regular basis can help identify and fix a wide range of issues, such as:

  • Allocation and structural errors: These include page-level errors, index-related errors, and other structural issues that affect the integrity of the database.

  • File system errors: These include file system-level errors that may impact the ability of the database to read and write data.

  • Database consistency errors: These include issues related to the consistency of the database, such as mismatched metadata, incorrect internal pointers, and so on.

To determine when the last clean DBCC CHECKDB completed on a specific database, you can use the following (undocumented) DBCC command:

DBCC DBINFO ('<database_name>') WITH TABLERESULTS;

This command returns a result set of Field/Value pairs; the row whose Field is dbi_dbccLastKnownGood indicates the date and time of the last DBCC CHECKDB that completed without finding corruption. A value of 1900-01-01 means CHECKDB has never completed successfully on the database. Because DBCC DBINFO is undocumented, its output may change between SQL Server versions.
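To check this date for every database at once, one option is to capture the output of the undocumented DBCC DBINFO command into a temporary table via sp_MSforeachdb (also undocumented, so treat this as a sketch rather than a supported pattern):

```sql
CREATE TABLE #dbinfo (ParentObject nvarchar(255), [Object] nvarchar(255),
                      Field nvarchar(255), [Value] nvarchar(255));
CREATE TABLE #lastgood (DatabaseName sysname, LastKnownGood nvarchar(255));

-- sp_MSforeachdb substitutes ? with each database name
EXEC sp_MSforeachdb '
    INSERT INTO #dbinfo EXEC (''DBCC DBINFO (''''?'''') WITH TABLERESULTS'');
    INSERT INTO #lastgood
    SELECT ''?'', MAX([Value]) FROM #dbinfo WHERE Field = ''dbi_dbccLastKnownGood'';
    TRUNCATE TABLE #dbinfo;';

SELECT * FROM #lastgood ORDER BY DatabaseName;

DROP TABLE #dbinfo;
DROP TABLE #lastgood;
```

A LastKnownGood of 1900-01-01 indicates CHECKDB has never completed cleanly on that database.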


The script below uses a cursor to iterate through all user databases in the SQL Server instance (excluding the system databases). For each database, it sets the database context using the USE statement and then executes the DBCC CHECKDB command with the WITH ALL_ERRORMSGS, NO_INFOMSGS options.


The WITH ALL_ERRORMSGS option instructs SQL Server to display all error messages generated during the check, while the NO_INFOMSGS option suppresses informational messages, which can help reduce the amount of output generated by the command.

The PRINT statement is included to display a message in the query output for each database being checked, so you can easily track the progress of the script.

Once the script has completed, you can review the output to check for any errors or inconsistencies in the databases.


Here's an example T-SQL code to run DBCC CHECKDB on all user databases in an instance of SQL Server:

DECLARE @db_name nvarchar(128)
DECLARE @sql nvarchar(max)

DECLARE db_cursor CURSOR FOR
SELECT name
FROM sys.databases
WHERE database_id > 4 -- exclude system databases
  AND state_desc = 'ONLINE' -- skip offline/restoring databases

OPEN db_cursor

FETCH NEXT FROM db_cursor INTO @db_name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'USE ' + QUOTENAME(@db_name) + '; DBCC CHECKDB WITH ALL_ERRORMSGS, NO_INFOMSGS;'
    PRINT 'Running DBCC CHECKDB on database [' + @db_name + ']...'
    EXEC sp_executesql @sql
    FETCH NEXT FROM db_cursor INTO @db_name
END

CLOSE db_cursor
DEALLOCATE db_cursor

This code uses a cursor to loop through all user databases (excluding system databases) and runs DBCC CHECKDB with the ALL_ERRORMSGS and NO_INFOMSGS options on each database. The PRINT statement is optional and can be removed if you don't want to see the messages printed in the output window.


Note: Running DBCC CHECKDB on large databases or during peak hours can affect server performance, so it's recommended to schedule this operation during off-peak hours.


Statistics

Statistics in SQL Server are used by the query optimizer to estimate the cardinality (number of rows) of a table or an index. The query optimizer uses these estimates to create a query plan that is most efficient in terms of execution time.

Statistics are created automatically by SQL Server when an index is created, or when the query optimizer determines that the existing statistics are outdated or not accurate. However, there are times when it may be necessary to manually create or update statistics to ensure optimal query performance.

Here are some ways to maintain statistics in SQL Server:


Automatic Updates:

SQL Server can automatically update statistics when a threshold of changes to the data has been reached. This threshold is determined by the "Auto Update Statistics" database option, which is enabled by default. However, this may not always be enough to ensure optimal query performance.
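You can verify whether the automatic statistics options are enabled per database by querying sys.databases, and enable them with ALTER DATABASE if needed (YourDatabase below is a placeholder name):

```sql
-- Check the automatic statistics settings for every database
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases;

-- Enable automatic statistics updates for one database
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON;
```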


Manual Updates:

Manually updating statistics can be done using the UPDATE STATISTICS command. This command can be used to update statistics on a specific table or index, or on all tables in a database.


For example, to update statistics on a specific table:

UPDATE STATISTICS table_name;

To update statistics on a specific index:

UPDATE STATISTICS table_name index_name;

To update statistics on all tables in a database:

EXEC sp_updatestats;


Full Scan:

By default, SQL Server updates statistics using a sample of the data. However, if the sample size is not large enough, it can result in inaccurate estimates. In such cases, a full scan can be done to update the statistics using all the data. This can be done using the WITH FULLSCAN option of the UPDATE STATISTICS command.


For example, to update statistics on a specific table using a full scan:

UPDATE STATISTICS table_name WITH FULLSCAN;

Filtered Statistics:

In some cases, queries may only access a subset of the data in a table. In such cases, creating filtered statistics on the subset of the data can improve query performance. This can be done using the CREATE STATISTICS command.


For example, to create filtered statistics on a specific column in a table:

CREATE STATISTICS stats_name ON table_name (column_name) WHERE filter_expression;


Column Statistics:

In addition to index statistics, column-level statistics can also be created for specific columns in a table. This can be done using the CREATE STATISTICS command.


For example, to create column statistics on a specific column in a table:

CREATE STATISTICS stats_name ON table_name (column_name);

It is important to maintain statistics in SQL Server to ensure optimal query performance. Outdated or inaccurate statistics can result in poor query performance, and can cause the query optimizer to choose inefficient query plans.


You can use the following T-SQL query to find how old the statistics are for user tables in the current database:

SELECT 
    DB_NAME() AS DatabaseName, 
    OBJECT_SCHEMA_NAME(s.object_id) AS SchemaName, 
    OBJECT_NAME(s.object_id) AS TableName, 
    s.name AS StatisticName, 
    STATS_DATE(s.object_id, s.stats_id) AS LastUpdated, 
    DATEDIFF(DAY, STATS_DATE(s.object_id, s.stats_id), GETDATE()) AS DaysSinceLastUpdate
FROM sys.stats s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY DatabaseName, TableName, StatisticName;

This query uses the sys.stats system catalog view to retrieve information about statistics for user tables in the current database. The STATS_DATE function is used to retrieve the date and time when the statistics were last updated for each statistic, and the DATEDIFF function is used to calculate the number of days since the statistics were last updated.

The OBJECTPROPERTY function is used to filter out system tables from the results, since statistics are not typically updated for system tables.


The query orders the results by database name, table name, and statistic name.

Note that the results returned by this query may not be entirely accurate, since statistics may not be updated regularly, or may be updated automatically by SQL Server when certain conditions are met. However, this query can give you a general idea of how old your statistics are, which can help you identify tables that may need more frequent updates to maintain optimal query performance.
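Building on the query above, the following sketch generates and runs UPDATE STATISTICS statements for every statistic in the current database that has not been updated in more than 30 days (the threshold is arbitrary; adjust it to your workload):

```sql
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'UPDATE STATISTICS '
    + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + N'.'
    + QUOTENAME(OBJECT_NAME(s.object_id)) + N' '
    + QUOTENAME(s.name) + N';' + CHAR(13) + CHAR(10)
FROM sys.stats s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND DATEDIFF(DAY, STATS_DATE(s.object_id, s.stats_id), GETDATE()) > 30;

PRINT @sql;   -- review the generated statements before running them
EXEC sp_executesql @sql;
```

Note that statistics that have never been updated have a NULL STATS_DATE and are skipped by this filter; you may want to handle them separately.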


Backups:

Taking backups is a crucial part of any serious SQL Server database maintenance plan, for the following reasons:


Disaster Recovery: Backups are the primary means of recovering a database after a disaster such as hardware failure, natural disasters, human errors, or cyberattacks. In such cases, the database can be restored from the most recent backup and transaction logs can be applied to bring the database up to the point of failure.

Data Loss Prevention: Backups are a means of preventing data loss due to accidental deletion, corruption, or any other unexpected issues. Without regular backups, there is a high risk of losing important data permanently.

Compliance: Many organizations are required to maintain backups for regulatory compliance reasons. For example, financial institutions may need to keep transactional data for a specific period of time, and backups are the only way to meet those requirements.

Database Migration: Backups can be used to move a database from one server to another, or to upgrade to a new version of SQL Server. Without backups, the process of migrating a database can be difficult and risky.

Testing and Development: Backups can be used for testing and development purposes, as they allow a database to be restored to a specific point in time. This can be useful for testing new code, patches, or configurations before applying them to a production environment.


In summary, taking backups is crucial for ensuring data availability, disaster recovery, compliance, and minimizing risks associated with your database maintenance operations. It is recommended to develop a backup and restore strategy that meets the organization's needs and to regularly test restoring your backups to ensure they can be successfully recovered.


You can use the following T-SQL script to find when the last backup was taken for all databases on a SQL Server instance:

SELECT 
    database_name,
    backup_type = 
        CASE [type]
            WHEN 'D' THEN 'Full' 
            WHEN 'I' THEN 'Differential' 
            WHEN 'L' THEN 'Transaction Log' 
            ELSE 'Unknown' 
        END,
    backup_finish_date = MAX(backup_finish_date)
FROM 
    msdb.dbo.backupset
GROUP BY 
    database_name, [type];

This script queries the msdb.dbo.backupset table in the msdb system database, which contains information about all backups taken on the SQL Server instance. It groups the results by database name and backup type, and then selects the maximum backup_finish_date for each group.


The CASE statement is used to translate the backup type code (D for full, I for differential, L for transaction log) into a more readable format.

The output of this script will show the name of each database on the instance, along with the type of the last backup taken (Full, Differential, or Transaction Log) and the date and time that the backup finished. If a database has never been backed up, it will not appear in the output.


Note that this script assumes that backups are being taken regularly on the instance and that the msdb database is being maintained properly. If backups are not being taken, or if the msdb database has been corrupted or is not being maintained properly, the results of this script may be inaccurate or incomplete.
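A related check is to list databases that have no recent full backup at all, including databases that have never been backed up, by left-joining sys.databases to msdb.dbo.backupset (the 7-day window below is an arbitrary example threshold):

```sql
SELECT d.name AS DatabaseName,
       MAX(b.backup_finish_date) AS LastFullBackup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name
      AND b.[type] = 'D'          -- full backups only
WHERE d.name <> 'tempdb'          -- tempdb cannot be backed up
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(DAY, -7, GETDATE());
```

Databases with a NULL LastFullBackup have never had a full backup recorded in msdb.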


Here's an example T-SQL code to backup all user databases on an instance of SQL Server to the D drive:

DECLARE @db_name nvarchar(128)
DECLARE @backup_path nvarchar(max)
DECLARE @backup_file nvarchar(max)

SET @backup_path = 'D:\SQLBackups\' -- change to your desired backup path

DECLARE db_cursor CURSOR FOR
SELECT name
FROM sys.databases
WHERE database_id > 4 -- exclude system databases
  AND state_desc = 'ONLINE' -- skip offline/restoring databases

OPEN db_cursor

FETCH NEXT FROM db_cursor INTO @db_name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @backup_file = @backup_path + @db_name + '_' + CONVERT(varchar(8), GETDATE(), 112) + '.bak'
    BACKUP DATABASE @db_name TO DISK = @backup_file
    PRINT 'Backup of database [' + @db_name + '] complete.'
    FETCH NEXT FROM db_cursor INTO @db_name
END

CLOSE db_cursor
DEALLOCATE db_cursor

This code uses a cursor to loop through all user databases (excluding system databases) and backs up each database to a file in the specified backup path on the D drive. The backup filename includes the database name and the current date in YYYYMMDD format.


Note: Make sure that the backup path specified exists and that the SQL Server service account has write permissions to it. Also, be aware that backing up large databases can take a significant amount of time and may affect server performance. It's recommended to schedule backups during off-peak hours.
