
Learn SQL Server

This page contains instructions and tips on Microsoft SQL Server. If you need help with a specific setting or topic, reach out by clicking below about consulting hours. I can quickly help you correct bugs, optimize your system, and make that pesky index script work as expected.


Detect Fragmented Indexes


Connect to the SQL instance and open a new query window. If you need help connecting, please see this blog.



You can use the following T-SQL query to determine the percentage of index fragmentation per table in a database. The critical column in the output is avg_fragmentation_in_percent, the average fragmentation.


Mike's Take: index fragmentation > 30% = bad.


If you would like detailed information about what indexes are, what b-trees are, and how exactly we are defragmenting, please see What Are Indexes.


Paste this into the new query window, select the database you would like to audit, and click Execute.

SELECT 
    DB_NAME() AS DatabaseName, 
    t.NAME AS TableName, 
    i.name AS IndexName, 
    index_type_desc AS IndexType, 
    avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS ps
INNER JOIN sys.tables t ON ps.object_id = t.object_id
INNER JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE ps.index_id > 0
ORDER BY DatabaseName, TableName, IndexName;

If you want to include the results in an email or an analysis, you can right-click the results grid and select "Copy with Headers."
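
Once fragmented indexes have been identified, the fix is a reorganize or a rebuild. Here is a minimal sketch with placeholder index and table names; the 5-30% reorganize / over-30% rebuild split is the common rule of thumb, in line with Mike's take above:

-- Placeholder names: swap in the fragmented index and table from the query above
ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;   -- lighter touch, roughly 5-30% fragmentation

ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD;      -- over 30% fragmentation
-- Add WITH (ONLINE = ON) to the rebuild on Enterprise edition to keep the table available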



The first question your supervisor will ask after you say "we need a maintenance window with downtime" is: how long will this take?


To estimate how long a re-index operation will take on a database in T-SQL, you can use the following script:


USE YourDatabaseName;

-- Create a temporary table to store the index information
CREATE TABLE #IndexInfo
(
    TableName NVARCHAR(255),
    IndexName NVARCHAR(255),
    IndexSizeMB DECIMAL(18,2)
);

-- Insert index information into the temporary table
INSERT INTO #IndexInfo
(
    TableName,
    IndexName,
    IndexSizeMB
)
SELECT
    OBJECT_NAME(i.object_id) AS TableName,
    i.name AS IndexName,
    (SUM(a.used_pages) * 8) / 1024.0 AS IndexSizeMB
FROM
    sys.indexes i
    INNER JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
    INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
WHERE
    i.name IS NOT NULL
GROUP BY
    i.object_id,
    i.name
ORDER BY
    SUM(a.used_pages) DESC;

-- Calculate the estimated time for re-indexing all indexes
DECLARE @TotalSizeMB DECIMAL(18,2),
        @ThroughputMBPerMinute DECIMAL(18,2) = 200, -- set your own throughput here
        @EstimatedMinutes INT;

-- SUM() is not allowed in a DECLARE initializer, so assign it from the temp table
SELECT @TotalSizeMB = SUM(IndexSizeMB) FROM #IndexInfo;
SET @EstimatedMinutes = ROUND(@TotalSizeMB / @ThroughputMBPerMinute, 0);

-- Display the estimated time
SELECT 
    'The estimated time for re-indexing all indexes in ' + DB_NAME() + ' is ' + CAST(@EstimatedMinutes AS VARCHAR(10)) + ' minutes.';
 
-- Drop the temporary table
DROP TABLE #IndexInfo;

Once you have determined that defragmentation is needed and roughly how long it will take, you can send this e-mail to the person who will ultimately make the decision.


Subject: Fragmentation in the Database


[Supervisor's Name],

I am writing to bring to your attention an issue that we have detected in our database. During a routine check, we identified that there is fragmentation in our database, which can potentially impact its performance and efficiency. We have run the following SQL code to determine the level of fragmentation in the database -- Code from above.


Based on the results of this query, we have found that there is indeed fragmentation in our database. This can have a negative impact on the performance of our system, which can ultimately affect our operations.


To resolve this issue, I recommend that we schedule a maintenance window to perform database defragmentation. This will take about XYZ hours. We also should review our current database maintenance plan to ensure that we are regularly performing such tasks to prevent fragmentation from occurring in the future.


Also, please see the attached index fragmentation details by database; over 30% fragmentation is considered bad and should be addressed as soon as possible.


Please let us know how we can proceed with resolving this issue. We look forward to your guidance and support. Thank you for your attention to this matter.


Mike Bennyhoff


 

Check Statistics

Once again, open a new query window and execute this query. This query will show you the last time your statistics were updated.


You can use the following T-SQL query to check SQL Server statistics for each table in a database:

SELECT 
    SCHEMA_NAME(t.schema_id) AS schema_name,
    t.name AS table_name,
    s.name AS statistics_name,
    s.stats_id AS statistics_id,
    sp.last_updated AS last_updated
FROM sys.stats s
INNER JOIN sys.tables t ON s.object_id = t.object_id
OUTER APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE t.is_ms_shipped = 0
ORDER BY schema_name, table_name, statistics_name;

Here is the e-mail that I would send to my manager to tell them what I am planning on doing.


Subject: SQL Server Statistics Not Updated Recently


[Supervisor's Name],


I have identified a potential issue with our SQL Server database. I have discovered that the statistics for some of the tables have not been updated recently, which can potentially impact the performance and efficiency of our system. As you know, statistics are important for SQL Server to generate efficient query execution plans. They provide valuable information about the distribution of data in the tables, which helps the query optimizer to make informed decisions about how to execute the queries.


We have run a script to identify the tables whose statistics have not been updated recently, and the results are attached to this email. Based on this analysis, I recommend that we schedule a maintenance window to update the statistics for these tables as soon as possible.


I estimate that this will take about an hour, and it can be done during production hours with minimal impact.


Updating the statistics is a routine maintenance task that Microsoft recommends, and it should be performed regularly to ensure that the query optimizer has the most up-to-date information about the data in the tables. By updating the statistics, we can improve query performance, reduce response times, and ultimately provide better service to our customers.


Please let me know when I can proceed with updating the statistics.


Mike Bennyhoff
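
Once the window is approved, the update itself can be a short script. A minimal sketch, where YourDatabaseName and dbo.YourTable are placeholders: sp_updatestats uses sampling and only refreshes statistics with pending row modifications, while UPDATE STATISTICS ... WITH FULLSCAN is the heavier, more precise option for a single table.

USE YourDatabaseName;

-- Refresh every statistic in the database that has row modifications (uses sampling)
EXEC sp_updatestats;

-- Or target a single table with a full scan (placeholder table name)
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;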

 

Check Backups

You can use the following T-SQL script to check backups for each database in the instance:

SELECT
    database_name = bs.database_name,
    backup_start_date = MAX(bs.backup_start_date),
    backup_type = bs.type, -- D = full, I = differential, L = log
    physical_device_name = bmf.physical_device_name,
    backup_size_mb = CAST(CAST(SUM(bs.backup_size) AS FLOAT) / (1024 * 1024) AS DECIMAL(12,2)),
    compressed_backup_size_mb = CAST(CAST(SUM(bs.compressed_backup_size) AS FLOAT) / (1024 * 1024) AS DECIMAL(12,2))
FROM
    msdb.dbo.backupset bs
    -- the physical device name lives in backupmediafamily, not backupset
    INNER JOIN msdb.dbo.backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id
WHERE
    bs.database_name NOT IN ('master', 'model', 'msdb', 'tempdb')
GROUP BY
    bs.database_name, bs.type, bmf.physical_device_name
ORDER BY
    bs.database_name;

This script retrieves backup information for each database in the instance, including the database name, most recent backup start date, backup type, physical device name, backup size in MB, and compressed backup size in MB. The msdb.dbo.backupset table contains information about the backup sets for all databases in the instance, and msdb.dbo.backupmediafamily supplies the physical device name.


The WHERE clause is used to exclude system databases (master, model, msdb, and tempdb) from the result set. The GROUP BY clause groups the results by database name, backup type, and physical device name, while the ORDER BY clause sorts the results by database name.


To estimate how long a backup will take, you can use the following script:


USE YourDatabaseName;

-- Calculate the total size of the database
DECLARE @TotalSizeMB DECIMAL(18,2) = (SELECT SUM(size) * 8.0 / 1024 FROM sys.master_files WHERE database_id = DB_ID());

-- Calculate the estimated time for a full backup
DECLARE @ThroughputMBPerMinute DECIMAL(18,2) = 200; -- set your own throughput here
-- A variable cannot be referenced in the same DECLARE statement that defines it,
-- so the estimate goes in a second DECLARE
DECLARE @EstimatedMinutes INT = ROUND(@TotalSizeMB / @ThroughputMBPerMinute, 0);

-- Display the estimated time
SELECT 
    'The estimated time for taking a full backup of ' + DB_NAME() + ' is ' + CAST(@EstimatedMinutes AS VARCHAR(10)) + ' minutes.';

This script calculates the total size of the database and then estimates the time for taking a full backup based on a throughput value that you can set yourself (in this example, it is set to 200 MB per minute). Finally, it displays the estimated time in minutes. You can adjust the throughput value to match the performance of your system.


Please note that this is an estimate and the actual backup time can vary depending on factors such as disk speed, network speed, and the workload on the system at the time of the backup.
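
For reference, the core of the proposed backup script is a single BACKUP DATABASE statement. A minimal sketch with a placeholder database name and path (COMPRESSION requires a supported edition; point TO DISK at your own backup target):

BACKUP DATABASE YourDatabaseName
TO DISK = N'D:\Backups\YourDatabaseName_Full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;  -- STATS = 10 prints progress every 10 percent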


We can put this all together in an e-mail asking for approval to create a backup script.


Subject: Alert: SQL Server Backups Not Taken


[Supervisor's Name],


I am writing to inform you that SQL Server backups have not been taken for our databases, which can have significant consequences in case of data loss or system failure. As you are aware, backups are critical for ensuring the recoverability of databases in case of any disaster, such as system crashes, database corruption, or accidental data deletion.


I propose that we create a backup script that will manage this process and ensure that backups are taken regularly and automatically without any manual intervention. The script will be scheduled to run at specific intervals, such as daily or weekly, and will take full backups.


I estimate that the backup process will take approximately 3 hours to complete, considering the size of our databases. To ensure that the backups are taken successfully and do not interfere with the system's normal operation, I request a daily backup window from 11:00 PM to 2:00 AM.


During this time, we can run the backup script and ensure that the backups are completed before the start of the next business day. I am confident that this approach will provide us with a reliable and consistent backup strategy and ensure that we can recover our databases quickly and efficiently in case of any disaster.


Please let me know your thoughts on this proposal, and we can discuss the next steps to implement this solution.


Thank you for your attention to this matter.


Mike Bennyhoff

 

Check Full Or Simple Mode


In SQL Server, Full and Simple modes refer to the recovery models that determine how the database engine handles the transaction logs.


Full mode allows full, differential, and transaction log backups, and enables point-in-time recovery to a specific transaction. In Full mode, the transaction log is not automatically truncated, meaning it will continue to grow until a transaction log backup is taken, which truncates it.


Simple mode does not allow transaction log backups and therefore does not support point-in-time recovery. In Simple mode, the transaction log is automatically truncated, which means it is cleared out after each checkpoint. As a result, you can only recover the database to the point of the last full (or differential) backup.


In general, Full mode is recommended for production environments where data loss cannot be tolerated, such as in financial or healthcare applications. This mode enables point-in-time recovery, which means you can recover the database to a specific point in time, such as just before a system crash or data corruption.


Full mode also lets you take frequent transaction log backups between full backups, which shrinks the window of potential data loss without the cost of a full backup each time.


Simple mode is recommended for environments where some data loss is acceptable, since you can only recover to the last backup. It is also useful for databases that are rarely updated, such as read-only databases, because the transaction log is automatically truncated, which can free up disk space.


Here is a video of how to check if each database is in full or simple mode.
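
If you prefer a query to a video, sys.databases exposes the recovery model directly:

SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;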



If you would like to change the recovery model, you can right-click on the database, select Properties, and edit the recovery model on the Options page.
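
The same change can also be scripted; YourDatabaseName is a placeholder:

-- Switch to simple; use SET RECOVERY FULL to go the other way
ALTER DATABASE YourDatabaseName SET RECOVERY SIMPLE;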




 

Check RAM Settings

When it comes to setting RAM for SQL Server, there are several factors to consider. Here are some general recommendations:

  • Reserve RAM for the OS: It's important to reserve a portion of the system's RAM for the operating system to ensure that it has enough resources to operate efficiently. In general, it's recommended to reserve at least 4GB of RAM for the operating system, regardless of the total amount of RAM available.

  • Allocate RAM for SQL Server: The amount of RAM allocated to SQL Server depends on several factors, including the size of the database, the number of users, and the workload. As a general rule of thumb, SQL Server should be allocated at least 50% of the available RAM, but not more than 90% (see the sketch after this list).

  • Use lock pages in memory: To prevent the operating system from paging out SQL Server memory, it's recommended to configure the SQL Server service account to use the lock pages in memory privilege. This ensures that SQL Server memory is not paged out, which can improve performance.

  • Enable AWE or 64-bit architecture: If the server has more than 4GB of RAM, it's recommended to use either Address Windowing Extensions (AWE) or a 64-bit architecture to take advantage of the additional RAM. AWE is used in 32-bit architecture to allow applications to address more than 4GB of memory. However, AWE is not supported in newer versions of SQL Server and 64-bit architecture is now the preferred option.

  • Monitor performance: It's important to monitor the server's performance regularly to ensure that the allocated RAM is sufficient for the workload. If performance issues arise, consider increasing the amount of allocated RAM or optimizing the workload.
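
To apply the allocation guideline above, the cap is set with sp_configure. A sketch assuming a 32 GB server where roughly 4 GB is left to the OS; the 28672 MB figure is an example only, so size it for your own hardware:

-- 'max server memory (MB)' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example only: cap SQL Server at 28 GB (28672 MB) on a 32 GB server
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;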


Here are some queries you can use to determine if SQL Server is using the RAM allocated to it:


Query to show SQL Server memory usage:

SELECT 
    (physical_memory_in_use_kb/1024) AS Memory_used_MB,
    (locked_page_allocations_kb/1024) AS Locked_pages_used_MB,
    (virtual_address_space_reserved_kb/1024) AS VAS_reserved_MB,
    (virtual_address_space_committed_kb/1024) AS VAS_committed_MB
FROM 
    sys.dm_os_process_memory;

This query returns information about the amount of memory used by SQL Server, including physical memory in use, locked page allocations, virtual address space reserved, and virtual address space committed.


Query to show buffer pool usage:

SELECT
    (COUNT(*) * 8) / 1024.0 AS Buffer_Pool_MB
FROM 
    sys.dm_os_buffer_descriptors;

This query returns information about the buffer pool usage, which is the portion of memory used by SQL Server to cache data pages.


Query to show memory usage by individual sessions:

SELECT 
    session_id, 
    (memory_usage * 8) / 1024.0 AS Memory_used_MB -- memory_usage is counted in 8-KB pages
FROM 
    sys.dm_exec_sessions
WHERE 
    is_user_process = 1;

This query returns the amount of memory used by each user session in SQL Server.

By using these queries, you can determine if SQL Server is using the RAM allocated to it and monitor the server's memory usage over time to identify any potential issues. It's recommended to schedule these queries to run periodically and store the results in a monitoring tool for analysis.


Check RAM settings in SSMS by right-clicking on the instance name and selecting Properties, then the Memory page.




 

Parallelism:

Parallelism is the ability of SQL Server to split a single query into multiple threads that can be executed simultaneously on multiple processors. This can improve query performance by leveraging multiple CPUs to process queries in parallel. However, if the parallelism setting is too high, it can result in increased CPU and memory usage, and potentially slower performance.


To set the parallelism value, you can use the MAXDOP (maximum degree of parallelism) setting, which limits the number of processors that can be used for parallel processing. The default setting is 0, which means SQL Server will automatically determine the number of processors to use. In general, it's recommended to set MAXDOP to a value between 1 and the number of available CPUs, depending on the workload.
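
Before changing anything, you can check the values currently in effect through sys.configurations:

SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism', 'cost threshold for parallelism');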


To set the MAXDOP value, you can use the following command:

-- 'max degree of parallelism' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', <value>;
RECONFIGURE;

Cost Threshold for Parallelism:

The cost threshold for parallelism is a value that determines the minimum cost of a query before it is considered for parallel execution. The cost is a measure of the estimated CPU, I/O, and memory resources required to execute a query.


To set the cost threshold for parallelism value, you can use the following command:

-- Also an advanced option; run sp_configure 'show advanced options', 1 first if you have not already
EXEC sp_configure 'cost threshold for parallelism', <value>;
RECONFIGURE;

The default value for cost threshold for parallelism is 5, which means any query with an estimated cost of 5 or higher will be considered for parallel execution. In general, it's recommended to set the cost threshold for parallelism to a value that is appropriate for your workload; I recommend a value between 50 and 80. You can read more about these settings here.


A higher value may result in fewer queries being considered for parallel execution but may also reduce the overhead of managing parallelism. Conversely, a lower value may result in more queries being considered for parallel execution, but may also increase the overhead of managing parallelism.


To set these properties in SSMS, right-click on the instance name and select Properties.

