
Instance And Database Settings In SQL Server

SQL Server Instance Settings are configurations applied at the server level that affect the behavior of every database running on that instance. They cover areas such as memory allocation, processor usage, security, and backup defaults; examples include Max Server Memory, the server authentication mode, and the database default file locations. SQL Server Database Settings, on the other hand, are specific to an individual database and affect only that database; examples include the recovery model, compatibility level, collation, and Auto Shrink. In summary, SQL Server Instance Settings affect the behavior of all databases running on an instance, while SQL Server Database Settings apply only to individual databases.


Here are some common SQL Server instance and database settings and a brief explanation of what they do:


Max Server Memory - Sets the maximum amount of memory that SQL Server can use.


The default value for Max Server Memory is 2,147,483,647 MB, which is effectively unlimited: SQL Server will consume as much memory as the operating system makes available. (Edition limits still apply; for example, SQL Server 2019 Standard Edition caps buffer pool memory at 128 GB, while Enterprise Edition is limited only by the operating system maximum.) The default is rarely appropriate, because an uncapped SQL Server can starve other applications running on the same server of memory, resulting in poor overall performance.

The right value for Max Server Memory depends on a number of factors, such as the amount of memory available on the server, the size and activity level of the databases running on SQL Server, and the memory requirements of other applications running on the same server. It is generally recommended to allocate 50-70% of the total physical memory on the server to SQL Server's Max Server Memory setting, then monitor the memory usage of SQL Server regularly and adjust the setting as needed to ensure optimal performance. In any case, it is good practice to set a specific value for Max Server Memory rather than relying on the default, as this provides more granular control over the amount of memory used by SQL Server.
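For example, to cap SQL Server at 52 GB on a 64 GB server, you can set the option with sp_configure (the value below is illustrative; size it to your own hardware):

sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'max server memory (MB)', 53248;  -- illustrative value: 52 GB expressed in MB
RECONFIGURE;
GO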



Server Authentication Mode

In SQL Server, Server Authentication mode is a security setting that determines how users are authenticated when they connect to an instance of SQL Server. There are two modes of authentication: Windows Authentication mode and SQL Server and Windows Authentication mode (also known as Mixed Mode).

In Windows Authentication mode, users are authenticated through Windows Active Directory. This mode is recommended for environments where all users have Windows domain accounts and can connect to SQL Server using their Windows credentials. With this mode, there is no need for users to remember separate login credentials for SQL Server.

In SQL Server and Windows Authentication mode, users can be authenticated either through Windows Authentication or SQL Server Authentication. With SQL Server Authentication, users are authenticated using a username and password stored in SQL Server. This mode is useful in scenarios where non-Windows clients need to access SQL Server, or where a separate set of credentials is needed for accessing SQL Server.

The Server Authentication mode can be changed in SQL Server Management Studio or by using Transact-SQL commands. It is important to choose the appropriate authentication mode for your environment to ensure the security of your SQL Server instance.
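A quick way to check the current authentication mode from T-SQL is the SERVERPROPERTY function:

SELECT CASE SERVERPROPERTY('IsIntegratedSecurityOnly')
         WHEN 1 THEN 'Windows Authentication mode'
         WHEN 0 THEN 'SQL Server and Windows Authentication mode (Mixed Mode)'
       END AS AuthenticationMode;

Changing the mode itself is done through SSMS (Server Properties > Security) and requires a restart of the SQL Server service to take effect.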


Network Configuration

You can start the SQL Server Network Configuration screen using the SQL Server Configuration Manager. Here are the steps to start the Network Configuration screen:

  • Open the SQL Server Configuration Manager. You can find it in the SQL Server program group in the Windows Start menu, or by searching for "SQL Server Configuration Manager" in the Windows search bar.

  • In the SQL Server Configuration Manager, click on "SQL Server Network Configuration" in the left-hand pane.

  • This will display a list of network protocols supported by SQL Server, such as TCP/IP, Named Pipes, and Shared Memory. You can select a protocol to view its properties and configuration settings, or right-click on it to enable, disable, or restart the protocol.

The Network Configuration screen allows you to manage various aspects of SQL Server network connectivity, such as enabling or disabling network protocols, specifying TCP port numbers, and configuring the IP addresses on which the instance listens. It is also a useful starting point when troubleshooting network connectivity issues. Note that changes to protocol settings take effect only after the SQL Server service is restarted.


The recommended network protocols to enable for SQL Server depend on your specific environment and connectivity requirements. However, in general, the following network protocols are commonly used and recommended for SQL Server:

  • TCP/IP: This protocol is the most commonly used network protocol for SQL Server connections. It is used for remote connections over the internet or a network. TCP/IP supports multiple simultaneous connections and provides good performance and security.

  • Named Pipes: This protocol is designed for local area networks. It can perform well on a fast LAN, but it is generally slower than TCP/IP over congested or wide-area networks, so it is usually enabled only when a specific application requires it.

  • Shared Memory: This protocol is used only for connections from clients running on the same computer as SQL Server. Because it bypasses the network stack entirely, it provides the fastest connections of the three protocols.

Overall, the SQL Server Network Configuration screen is an important tool for managing SQL Server network connectivity and ensuring that your SQL Server instances are properly configured for secure and reliable network communication.
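To see which protocol existing sessions are actually using, you can query the sys.dm_exec_connections dynamic management view:

SELECT session_id, net_transport, auth_scheme, client_net_address
FROM sys.dm_exec_connections;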


Backup Options

SQL Server 2022 offers various backup options to ensure that your databases are protected against data loss and corruption. The following are some of the backup options available in SQL Server 2022, with example commands after the list:

  • Full backup: This option creates a complete backup of the database and all its objects. It is recommended to take full backups regularly to ensure that you have a complete and up-to-date backup of your database.

  • Differential backup: This option backs up only the changes made to the database since the last full backup. It is useful for reducing the time and storage space required for backups.

  • Transaction log backup: This option backs up the transaction log of the database, which contains information about all the transactions made on the database. It is useful for recovering the database to a specific point in time.

  • File or filegroup backup: This option allows you to back up individual files or filegroups of the database. It is useful for large databases where backup times can be reduced by backing up only the required files.

  • Copy-only backup: This option creates a backup without affecting the backup chain. It is useful for creating ad-hoc backups without interrupting the regular backup schedule.

  • Backup compression: This option compresses the backup to reduce the backup size and the time required to complete the backup.

  • Backup encryption: This option encrypts the backup to ensure that the backup data is secure and cannot be accessed by unauthorized users.
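The commands below sketch the most common backup types in T-SQL; the database name and file paths are illustrative and should be replaced with your own:

-- Full backup with compression and checksum verification (names/paths are illustrative)
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Full.bak'
WITH COMPRESSION, CHECKSUM;

-- Differential backup of changes since the last full backup
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Diff.bak'
WITH DIFFERENTIAL;

-- Transaction log backup (requires the Full or Bulk-Logged recovery model)
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_Log.trn';

-- Copy-only backup that does not affect the backup chain
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_CopyOnly.bak'
WITH COPY_ONLY;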


Query Store

The Query Store is a feature in SQL Server that provides a way to track query performance over time and troubleshoot performance issues. It captures query execution plans, runtime statistics, and other related information and stores them inside the database itself, so the history survives restarts and failovers. This data can be used to identify performance regressions, optimize queries, and troubleshoot performance issues.

To use the Query Store, you first need to enable it on your SQL Server instance. This can be done using the SQL Server Management Studio or by running T-SQL commands. Once enabled, the Query Store automatically starts capturing query performance data.
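For example, enabling the Query Store on a database takes a single ALTER DATABASE statement (the database name is illustrative):

ALTER DATABASE SalesDB
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);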

You can view the Query Store data by using various built-in reports, such as the Top Resource-Consuming Queries report or the Regressed Queries report. You can also use T-SQL queries to access the Query Store data directly and perform custom analysis.


The Query Store provides several benefits, including:

  • Query performance troubleshooting: The Query Store makes it easy to identify queries with poor performance, compare query performance before and after changes, and identify performance regressions.

  • Query optimization: The Query Store provides insights into query execution plans, statistics, and other related information that can be used to optimize query performance.

  • Query tuning: The Query Store can be used to fine-tune queries by providing information on execution plans, query statistics, and query resource consumption.

  • Historical analysis: The Query Store stores data over time, allowing you to track query performance trends and compare performance across different time periods.


Cost Threshold for Parallelism

The Cost Threshold for Parallelism is a configuration setting in SQL Server that determines the minimum estimated query cost required to trigger parallel query execution. When a query's estimated cost exceeds the value set for the Cost Threshold for Parallelism, SQL Server may use parallelism to execute the query. The cost is an abstract unit calculated from the estimated CPU and I/O resources required to execute the query. The default value is 5, meaning that queries with an estimated cost of 5 or higher will be considered for parallel execution; this default dates from much older hardware and is widely regarded as too low for modern systems.

The appropriate value for the Cost Threshold for Parallelism depends on the workload and hardware configuration of your SQL Server instance. In general, if your workload consists of small and simple queries, you may want to raise the value to avoid the overhead of parallelizing cheap queries; if your workload consists of complex and resource-intensive queries, a lower value lets more of them qualify for parallelism. A good starting point is to set the Cost Threshold for Parallelism to a value between 60 and 80 and monitor query performance. If you find that queries are not being parallelized when you expect them to be, you may want to lower the value. If you find that too many queries are being parallelized, causing performance issues, you may want to raise the value.
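A minimal sketch of changing the setting (the value follows the 60-80 starting point above; it is an advanced option):

sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'cost threshold for parallelism', 60;
RECONFIGURE;
GO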



Maximum Degree of Parallelism

The Maximum Degree of Parallelism (MaxDOP) is a configuration setting in SQL Server that determines the maximum number of processors that can be used to execute a single query in parallel. When a query is executed, SQL Server may use multiple processors to divide the work and process it in parallel. The MaxDOP setting limits the maximum number of processors that can be used for this purpose.

The appropriate value for MaxDOP depends on the workload and hardware configuration of your SQL Server instance. In general, the MaxDOP setting should be set to a value that balances query execution speed with system resource utilization. Setting MaxDOP too high can cause excessive CPU and memory usage, while setting it too low can result in slower query execution times.

A good starting point for setting MaxDOP is to use the default value of 0, which allows SQL Server to use all available processors for parallel query execution. However, in some cases, it may be necessary to lower this value to prevent excessive resource usage and maintain system stability. This may be necessary if you are running other resource-intensive applications on the same server, or if you have a large number of concurrent queries executing simultaneously.

On the other hand, if you have a high-performance server with a large number of processors, you may benefit from increasing the MaxDOP value to take advantage of the additional processing power.
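A sketch of setting MaxDOP with sp_configure (the value of 8 is illustrative; 0 means use all available processors):

sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
GO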

CLR Integration

CLR (Common Language Runtime) Integration is a feature in SQL Server that allows developers to write database objects (such as stored procedures, functions, and triggers) using the .NET Framework and other CLR languages, instead of Transact-SQL (T-SQL).
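CLR Integration is disabled by default. A minimal sketch of enabling it:

sp_configure 'clr enabled', 1;
RECONFIGURE;
GO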


Once CLR Integration is enabled and the assembly is registered, you can use .NET languages to write database objects in SQL Server. CLR Integration provides developers with a powerful toolset for creating complex and feature-rich database objects that can be used to solve a wide range of business problems.


Ad Hoc Distributed Queries

The Ad Hoc Distributed Queries setting is a configuration option in Microsoft SQL Server that allows the server to execute queries that reference external data sources, such as other SQL Server instances or non-SQL Server data sources like Oracle or Excel. When Ad Hoc Distributed Queries is enabled, SQL Server can execute queries that use the OPENROWSET or OPENDATASOURCE functions, which enable the server to retrieve data from external sources.

The recommended setting for Ad Hoc Distributed Queries is to disable it unless you specifically need to execute queries that reference external data sources, because enabling this feature can introduce security risks and potential performance issues. If you do need to execute such queries, it is recommended to enable Ad Hoc Distributed Queries temporarily, execute the necessary queries, and then disable it again to mitigate security risks.


Enable Ad Hoc Distributed Queries


sp_configure 'show advanced options', 1;  
RECONFIGURE;
GO 
sp_configure 'Ad Hoc Distributed Queries', 1;  
RECONFIGURE;  
GO  

Database Default Locations

The Database Default Locations setting in SQL Server refers to the default file locations for database data and log files. When you create a new database in SQL Server, you must specify where to store the data and log files that make up the database. The default file locations determine where these files are stored if you do not specify a different location during database creation. There are two types of default locations:

  • Default data file location: This is the default location where SQL Server stores the data files for new databases. By default, this is set to the "...\MSSQL\DATA" folder on the SQL Server instance's local drive.

  • Default log file location: This is the default location where SQL Server stores the log files for new databases. By default, this is set to the "...\MSSQL\DATA" folder on the SQL Server instance's local drive.

You can modify the default locations for data and log files in SQL Server Management Studio (Server Properties > Database Settings) or by updating the instance's registry values; the change takes effect after the SQL Server service is restarted. By changing the default file locations, you can ensure that new databases are stored in a location that best suits your needs, such as a separate drive for better performance or to meet storage requirements.
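You can read the current defaults from T-SQL with the SERVERPROPERTY function:

SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS DefaultDataPath,
       SERVERPROPERTY('InstanceDefaultLogPath')  AS DefaultLogPath;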



Auto Close

The Auto Close setting in SQL Server is a database option that determines whether the database should be automatically closed, and its resources freed, when there are no more connections to the database. When the Auto Close option is enabled, SQL Server automatically closes the database when there are no active connections. This can free up system resources and memory, but it also introduces a delay when the next connection is made, as SQL Server has to reopen the database and load it into memory.

The recommended setting for the Auto Close option is to disable it. Enabling it introduces performance overhead from constantly opening and closing the database, which can result in slower query execution times and increased disk I/O. Additionally, if there are frequent connections to the database, enabling Auto Close can increase resource usage as SQL Server repeatedly opens and closes the database.

If you have a database with infrequent connections or limited resource availability, enabling Auto Close may be appropriate. However, for most databases, it is recommended to disable the Auto Close option for optimal performance and stability. You can disable the option using SQL Server Management Studio or the T-SQL command ALTER DATABASE.
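For example (database name is illustrative):

ALTER DATABASE SalesDB SET AUTO_CLOSE OFF;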


Auto Shrink

The Auto Shrink setting in SQL Server is a database option that determines whether the database files should be automatically shrunk to free up disk space when the database size decreases. When the Auto Shrink option is enabled, SQL Server automatically shrinks the database files when there is free space available in the file. This can help reclaim disk space, but it can also cause performance issues and fragmentation of the database files.

The recommended setting for the Auto Shrink option is to disable it. Enabling it leads to cycles of file growth and shrink operations, which cause performance issues and fragmentation of the database files. Additionally, shrinking database files can fragment data pages, resulting in slower query performance.

Instead of using Auto Shrink, it is recommended to manage database file size and disk space manually: monitor the database size and file growth, and perform periodic maintenance tasks such as rebuilding indexes, defragmenting file systems, and archiving old data to free up space. By manually managing the database files, you can ensure optimal performance and stability of the database. You can disable the Auto Shrink option using SQL Server Management Studio or the T-SQL command ALTER DATABASE.
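For example (database name is illustrative):

ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;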



Recovery Model

The Recovery Model setting in SQL Server is a database-level option that determines how much transaction log data is retained and how the database can be restored in case of a failure. There are three recovery models in SQL Server:

  • Simple Recovery Model: In this mode, SQL Server automatically reclaims space in the transaction log to keep it from growing too large. The transaction log backups are not required in this mode and only full backups are necessary to restore the database in case of a failure. However, point-in-time recovery is not possible with this mode.

  • Full Recovery Model: In this mode, SQL Server does not automatically reclaim space in the transaction log, and it must be backed up regularly to prevent it from growing too large. This mode supports point-in-time recovery, but requires both full and transaction log backups to restore the database in case of a failure.

  • Bulk-Logged Recovery Model: This mode is similar to the Full Recovery Model, but it minimally logs bulk operations such as bulk inserts or SELECT INTO. This makes those operations faster, at the cost of point-in-time recovery: you cannot restore to a point in time within a log backup that contains a minimally logged operation.

The recommended Recovery Model setting depends on the business requirements and recovery objectives. For example, if the database contains critical data and requires point-in-time recovery, the Full Recovery Model may be necessary. On the other hand, if the database contains less critical data and does not require point-in-time recovery, the Simple Recovery Model may be appropriate. You can change the Recovery Model setting for a database using SQL Server Management Studio or the T-SQL command ALTER DATABASE.
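For example, to switch a database to the Full Recovery Model (the database name is illustrative):

ALTER DATABASE SalesDB SET RECOVERY FULL;   -- or SIMPLE / BULK_LOGGED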

Backup Compression Default

Backup Compression is a feature in SQL Server that allows you to compress your database backups to reduce their size and save disk space. Creating compressed backups is supported in the Standard and higher editions of SQL Server, and every edition can restore a compressed backup.

When you enable backup compression, SQL Server compresses the backup data before writing it to the backup file. The compression ratio depends on the amount of redundancy in the database, such as empty space or duplicated data. Generally, databases with more redundant data will achieve a higher compression ratio than those with less redundant data.

Backup compression can typically reduce the size of a database backup file by 40-90%, depending on the type of data being backed up and the level of redundancy. This can significantly reduce the storage requirements for backup files, as well as the time and bandwidth required to transfer them to other locations.

It is worth noting that backup compression does require additional CPU resources to perform the compression, so there may be a small performance overhead during the backup process. However, this is typically outweighed by the benefits of reduced storage requirements and faster backup times. Additionally, modern hardware and software have made backup compression a more viable option than it may have been in the past.



To enable backup compression and set it as the default option for all backup operations on the server, execute the following T-SQL command:

sp_configure 'backup compression default', 1;
RECONFIGURE;
GO

User Connections

The "User Connections" setting in SQL Server refers to the maximum number of simultaneous user connections that can be made to a particular instance of SQL Server. This setting determines the maximum number of users that can access a database at the same time.

The recommended setting for the "User Connections" setting depends on the hardware and resources available on the SQL Server machine, as well as the workload and usage patterns of the database.

As a general rule of thumb, the "User Connections" setting should be set high enough to accommodate the expected number of concurrent users accessing the database, but not so high that it causes performance issues or resource contention on the server.

The default value for the "User Connections" setting in SQL Server is 0, which means there is no specific limit set on the number of user connections. Instead, the maximum number of user connections is determined by the available resources on the server.

If you need to change the "User Connections" setting in SQL Server, you can do so using SQL Server Management Studio or by executing the following T-SQL command:

sp_configure 'user connections', <max_user_connections>;

Replace <max_user_connections> with the maximum number of user connections that you want to allow (from 0 up to 32,767). Note that "user connections" is an advanced option, so 'show advanced options' must be enabled first. After changing the setting, run the "RECONFIGURE" command; the new limit takes effect once the SQL Server service is restarted:

RECONFIGURE;

Lock Timeout

The "lock timeout" setting in SQL Server refers to the amount of time that a transaction will wait for a lock to be released before it is canceled and rolled back.

The recommended setting for the "lock timeout" depends on the application and the workload that the database is supporting. In general, the lock timeout setting should be set high enough to allow for normal transaction processing, but not so high that it causes excessive blocking and concurrency issues.

The default value for the "lock timeout" setting in SQL Server is -1, which means that transactions will wait indefinitely for a lock to be released. This can lead to blocking and concurrency issues, and is generally not recommended for production environments.

If you need to change the "lock timeout" setting in SQL Server, you can do so using SQL Server Management Studio or by executing the following T-SQL command:

SET LOCK_TIMEOUT <timeout_in_milliseconds>;

Replace <timeout_in_milliseconds> with the desired lock timeout value in milliseconds. The maximum lock timeout value is 2,147,483,647 milliseconds (or 24.8 days).

It is important to note that changing the lock timeout setting can have significant impacts on the performance and concurrency of the database. It is recommended to thoroughly test any changes to the lock timeout setting in a non-production environment before making changes to a production system.

Fill Factor

In SQL Server, the fill factor refers to the percentage of space on a database page that is initially filled with data when an index is created or rebuilt. The fill factor can be set for each index and determines how much free space should be left on each page to allow for future updates and inserts. For example, if the fill factor is set to 80%, each page of the index will initially be filled to 80% of its capacity, leaving 20% of the space free for future updates.

The recommended value for the fill factor depends on several factors, such as the frequency of updates and inserts to the table, the size of the table, and the available disk space. As a general guideline, a fill factor of 100% is appropriate for read-only tables, where no updates or inserts will occur. For tables that are frequently updated, a lower fill factor (such as 70-80%) is recommended to allow for future updates and inserts without causing excessive page splits. However, setting the fill factor too low can also lead to wasted disk space, as the index pages will have more free space than necessary. It is generally recommended to monitor the fill factor over time and adjust it as needed based on the actual usage patterns of the table.

Determining the optimal fill factor for an index involves a trade-off between space usage and performance. A lower fill factor will leave more free space on index pages, which can reduce the frequency of page splits and improve write performance, but will also increase the overall size of the index. On the other hand, a higher fill factor will reduce the size of the index but may also increase the frequency of page splits and degrade write performance.
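The fill factor for an individual index is applied when the index is created or rebuilt; a minimal sketch, with illustrative table and index names:

ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);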

Here are some tests you can perform to determine an appropriate fill factor for a specific index:

Monitor the index fragmentation: If the index is highly fragmented, this may indicate that the fill factor is set too high, causing frequent page splits. To monitor the fragmentation level of an index, you can use the following T-SQL query:

SELECT OBJECT_NAME(ind.OBJECT_ID) AS TableName, ind.name AS IndexName, indexstats.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
INNER JOIN sys.indexes ind ON ind.object_id = indexstats.object_id AND ind.index_id = indexstats.index_id
WHERE indexstats.avg_fragmentation_in_percent > 30 AND indexstats.index_type_desc = 'NONCLUSTERED'

If the fragmentation level is consistently high, you may want to consider lowering the fill factor to reduce page splits.


Monitor the index usage: If the index is frequently updated or inserted into, a lower fill factor may be more appropriate to reduce the frequency of page splits. To monitor the usage of an index, you can use the following T-SQL query:

SELECT OBJECT_NAME(object_id) AS TableName, name AS IndexName, user_updates, user_seeks, user_scans, user_lookups
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID() AND object_id = OBJECT_ID('tableName')

If the index is frequently updated (i.e., high user_updates count), you may want to consider lowering the fill factor to reduce page splits. If the index is frequently queried (i.e., high user_seeks or user_scans counts), a higher fill factor may be appropriate to reduce the size of the index.

Monitor disk space usage: If disk space is a concern, you may want to consider increasing the fill factor to reduce the overall size of the index. To monitor the disk space usage of an index, you can use the following T-SQL query:


SELECT OBJECT_NAME(ind.OBJECT_ID) AS TableName, ind.name AS IndexName, indexstats.page_count * 8 / 1024 AS IndexSizeMB
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
INNER JOIN sys.indexes ind ON ind.object_id = indexstats.object_id AND ind.index_id = indexstats.index_id
WHERE indexstats.avg_fragmentation_in_percent < 30 AND indexstats.index_type_desc = 'NONCLUSTERED'
ORDER BY IndexSizeMB DESC

If the index size is consistently large, you may want to consider increasing the fill factor to reduce the amount of wasted space on index pages.


Lightweight Pooling

Lightweight Pooling, also known as fiber mode, is a legacy scheduling option introduced in SQL Server 7.0. When it is enabled, SQL Server schedules its workers on Windows fibers instead of full operating system threads: each operating system thread hosts multiple fibers, which reduces the overhead of thread context switching. It was designed for very high-throughput systems with many CPUs, where context-switch overhead was a measurable bottleneck.

On modern hardware, Lightweight Pooling rarely provides a benefit, and it carries significant restrictions: most notably, CLR Integration is not supported while fiber mode is enabled. The recommended setting is to leave the option at its default value of 0 (disabled) unless testing on your specific workload demonstrates a clear gain and none of the incompatible features are in use.
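To confirm the current value (it is an advanced option, and changing it requires a service restart):

sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'lightweight pooling';   -- a run_value of 0 means fiber mode is off
GO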

Priority Boost

Priority Boost is a setting in SQL Server that allows the SQL Server process to run at a higher priority than other processes on the same system. This can be useful in situations where SQL Server is the primary workload on the system and needs to be given priority over other processes. However, it is generally not recommended to enable Priority Boost in SQL Server. Here are a few reasons why:

  • It can cause instability: Enabling Priority Boost can cause SQL Server to compete with other critical system processes for CPU time, potentially leading to system instability and crashes.

  • It can impact performance: Even in cases where SQL Server is the primary workload on the system, enabling Priority Boost can actually have a negative impact on performance. This is because other critical system processes, such as those related to I/O, memory management, and networking, may be starved of resources and unable to perform their duties effectively.

  • It is not necessary in most cases: In general, SQL Server is designed to run effectively and efficiently without the need for Priority Boost. If you are experiencing performance issues with SQL Server, there are typically other ways to address them, such as optimizing queries, tuning indexes, and configuring memory settings.

In short, enabling Priority Boost in SQL Server should be avoided in most cases; the option is also deprecated and will be removed in a future version of SQL Server. If you are experiencing performance issues with SQL Server, it is recommended to investigate other solutions before considering Priority Boost as a potential option.
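If Priority Boost has been enabled on an instance, a sketch of turning it back off (it is an advanced option, and the change takes effect after a service restart):

sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'priority boost', 0;
RECONFIGURE;
GO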

Other Links

Configure SQL Instance


Intro To Database Administration


SQL Server For IT Managers


SP_Who2




