
- Power BI Dashboards vs. Reports – Which One Should You Use?
What is the difference between a Power BI dashboard and a report? When it comes to data analytics, the terms dashboard and report might seem interchangeable. In practice the two are quite distinct, although there is plenty of overlap. Dashboards provide an overview or snapshot of a data set's key performance indicators (KPIs), while reports focus on providing detailed insight into particular findings or events. In this blog post, we'll dive into the key differences between dashboards and reports and how both can be employed in data analysis projects for maximum impact.

What is a Dashboard and a Report in Power BI Desktop and the Power BI Service?

A common mistake for new Power BI users is not knowing the difference between reports and dashboards. Although these terms are often used interchangeably, reports and dashboards have different functionality and serve different purposes. First and foremost, the content that you create within Power BI Desktop is a report. Even if it has a "dashboard" feel or contains multiple pages or tabs, it is still a report. A dashboard, however, can't be created within Power BI Desktop. A dashboard is a collection of visuals pinned from one or more reports, and it is assembled within the Power BI Service. Additional Info - The Power BI Service

Overview of Dashboards and Reports – Understand the purpose and features of each

Dashboards and reports are essential tools in the world of business analytics, serving as powerful instruments for decision-makers to visualize and comprehend complex data patterns. Dashboards offer a real-time snapshot of key performance indicators and other vital data through highly customizable visualizations, such as charts, graphs, and tables. This allows users to quickly assess the current state of the business or project, making it possible to identify trends, monitor progress, and respond to emerging situations. Reports, on the other hand, provide a more thorough, detailed, and typically historical view of business operations. They enable users to delve deeper into the data, revealing insights that may not be immediately apparent from a dashboard alone.

Power BI dashboards offer several advantages:

Data Visualization: Power BI allows you to create visually rich, interactive dashboards that help you quickly identify trends, patterns, and insights in your data. With over 100 pre-built visuals to choose from, you can create dashboards that are tailored to your specific needs.
Real-Time Insights: Power BI dashboards can be connected to a variety of data sources, giving you real-time insight into your business operations so you can make decisions based on up-to-date information.
Self-Service Analytics: Power BI allows business users to create and customize dashboards without relying on IT or data analysts, so you can quickly build and iterate on dashboards as your needs change.
Integration with other Microsoft products: Power BI integrates seamlessly with Excel, SharePoint, and Teams, making it easy to share dashboards, collaborate on data analysis, and embed dashboards into other applications.
Mobile Accessibility: Power BI dashboards can be accessed from anywhere using a mobile device, making it easy to stay on top of your business operations on the go.
Scalability: Power BI can handle large amounts of data and can scale to meet the needs of businesses of all sizes, so you can start small and grow your data analysis capabilities as your business grows.

In summary, Power BI dashboards provide powerful data visualization and analysis capabilities, real-time insights, self-service analytics, seamless integration with other Microsoft products, mobile accessibility, and scalability. All of these advantages can help businesses make more informed decisions and stay ahead of the competition.

What is a paginated report?

A paginated report is a type of Power BI report that is designed for printing or generating PDF files. It is an operational report optimized for presenting data in a structured, paginated format, typically for printing or for delivery to users in a fixed layout. A paginated report is usually designed for a specific audience and purpose, such as invoices, sales reports, or financial statements. Paginated reports are also often used for regulatory or compliance reporting, as they provide a structured and consistent format for presenting data.

Advantages of Power BI Paginated Reports:

Customization: Power BI reports can be customized to fit the specific needs of your business, allowing you to create tailored reports for different departments or use cases.
Data integration: Power BI reports can integrate data from a variety of sources, including Excel spreadsheets, cloud-based databases, and on-premises data sources, allowing you to create comprehensive reports that include all of your business data.
Data modeling: Power BI's data modeling features let you create relationships between different data sources and perform advanced calculations, making it easier to build complex reports.
Real-time updates: Power BI reports can be configured to refresh data automatically on a schedule, ensuring that your reports always include the latest data.
Sharing and collaboration: Power BI reports can be easily shared with colleagues, allowing for easy collaboration and communication.

How are paginated reports different from dashboards?

Dashboards are designed to provide a high-level overview of key performance indicators (KPIs) or metrics from one or more underlying reports in a visually appealing and interactive way. They allow users to explore data and drill down into specific details. Dashboards are optimized for real-time data and typically include interactive visualizations such as charts, tables, and maps. Paginated reports, on the other hand, are designed for presenting data in a structured, paginated format, typically for printing or for delivering to users in a fixed layout. They are optimized for static, printable output and typically include more detailed information than dashboards. Paginated reports are often used for operational or regulatory reporting, while dashboards are used for monitoring and analyzing data in real time. In summary, while both paginated reports and dashboards present Power BI data, they serve different purposes and are designed for different audiences: paginated reports are optimized for structured, paginated output, while dashboards are optimized for real-time, interactive data exploration.
When To Use Power BI Dashboards vs. Reports

The choice between a Power BI dashboard and a report depends on the specific needs of your organization and the purpose of the data presentation. Here are some general guidelines to consider:

Use a Power BI dashboard when:
You need to provide a high-level overview of key performance indicators (KPIs) or metrics.
You want to track real-time data and provide up-to-date insights.
You need to present data in an interactive and visually appealing way.
You want to allow end users to explore data and drill down into specific details.
You need to monitor a wide range of data from different sources.

Use a report when:
You need to provide detailed information on a specific topic or subject.
You want to present data in a structured and organized manner, such as tables and charts.
You need to provide a historical view of data and track trends over time.
You want to include additional information, such as commentary, analysis, or recommendations.
You need to provide a static, printable output.

In summary, a dashboard is best suited for presenting high-level, real-time data in an interactive and visually appealing way, while a report is best suited for providing detailed, structured, and historical data in a static, printable format. When deciding between a dashboard and a report, consider the specific needs of your organization and the purpose of the data presentation.

Uses of Dashboards - What makes a dashboard unique

A dashboard, often regarded as a crucial element in the realm of data visualization, possesses distinct characteristics that set it apart from other data representation tools. One such attribute is its ability to present complex information through comprehensible visual representations, making it accessible to a wide range of users. Furthermore, a well-designed dashboard consolidates the most essential data points from multiple reports and sources, enabling users to extract meaningful insights at a glance. Customizability is another defining aspect, allowing users to tailor a dashboard to their specific needs and keep attention on vital metrics. In real-time scenarios, the dynamic nature of dashboards is their most valuable feature: data is updated continuously, empowering users to make informed, timely decisions. The combination of these characteristics makes a dashboard an indispensable tool for data-driven decision-making.

Uses of Dashboards - Explore how dashboards are used to monitor data

Dashboards have emerged as a vital tool in contemporary data management, offering a comprehensive platform for monitoring pertinent information across various domains. Serving as a graphical user interface, dashboards bring together diverse data sources, function as a centralized repository, and enable users to make more informed decisions by simplifying complex data into easily comprehensible visuals. As a consequence of their versatility, dashboards play an indispensable role in various sectors, including business, governance, and finance.
For instance, in a business setting, they aid in tracking key performance indicators, facilitating rapid insights, and providing real-time feedback for decision-making. In the realm of governance, dashboards offer a transparent medium for governments to share information and important metrics, such as public expenditure, with citizens, bolstering trust and accountability. In the financial sector, they facilitate the exchange of critical metrics, such as trends, forecasts, and variances, which assist in performance management and risk analysis. Ultimately, dashboards empower organizations to make strategic choices by leveraging multiple datasets for a holistic understanding of multifaceted data.

Benefits of Reports for Decision-Making - Highlight why reports can be helpful when making decisions

The benefits of utilizing reports for decision-making cannot be overstated. Reports, when systematically and meticulously prepared, offer a structured and data-driven approach to assist organizations and individuals in making well-informed choices. By integrating in-depth analysis, expert insights, and clear visuals, these documents serve as invaluable tools to practitioners and stakeholders alike. Comprehensive reports can also uncover hidden trends, enabling decision-makers to anticipate potential risks and pinpoint opportunities with precision. Ultimately, the capacity of reports to synthesize complex information into actionable knowledge underscores their pivotal role in enhancing the quality of strategic decisions, boosting organizational success, and fostering a culture of continuous improvement.

Comparing Dashboards and Reports - Compare the benefits and drawbacks of both tools

Dashboards and reports are both essential tools in the realm of data analysis, each with unique advantages and disadvantages. Dashboards, being interactive, offer real-time insights and allow users to manipulate data to discern trends and make informed decisions quickly. However, this interactivity has an inherent drawback: as data gets updated, past information can disappear, making it difficult to keep a historical record. Reports, by contrast, provide thorough, in-depth analysis, often combining one or more summary pages with data from a variety of sources to yield valuable conclusions. While reports are comprehensive, their static nature can be limiting, as the data presented in a report may become outdated quickly, rendering it less relevant for decision-making. Although dashboards and reports have contrasting benefits and drawbacks, a synergistic approach that leverages the strengths of both tools can be instrumental in optimizing data-driven decision-making for an organization.

To summarize, dashboards and reports serve distinctly different purposes in the context of data analysis. Dashboards present relevant data points at a glance for easier recognition of patterns and are most often used to provide an ongoing visualization of the data sets being monitored. Reports are more suitable for gaining insight into a specific issue by providing additional detail and context over an extended timeframe.
Knowing how to effectively leverage both tools is an essential element of any business's decision-making process, as they play complementary roles in achieving desired results. With the advantages and disadvantages of dashboards and reports laid out, it's important to understand that choosing one or the other will depend on your organizational goals and needs. That is why it is so important to evaluate the distinct benefits of each option when deciding which to use to implement data-driven decisions at your organization.

Other Links
What is Power BI - https://www.bps-corp.com/post/what-is-power-bi
Table Vs Matrix - https://www.bps-corp.com/post/tables-and-matrices-in-power-bi
Paginated Reports - https://www.bps-corp.com/post/paginated-reports-power-bi
How Is Power BI Licensed - https://www.bps-corp.com/post/how-is-power-bi-licensed
SQL Reporting Services (Reports) - https://www.bps-corp.com/post/howtovisulizedatawithssrs
- Views in SQL Server
This article provides an overview of views in SQL Server. Views are created and managed with T-SQL, typically from SQL Server Management Studio (SSMS).

What is a View in SQL Server?

In SQL Server, a view is a virtual table based on the result of a SELECT statement. The view itself does not store data; instead it acts as a filter on top of one or more real tables, allowing you to see only the data that meets certain criteria. Once a view is created, you can treat it much like a regular table by querying it with SELECT, INSERT, UPDATE, and DELETE statements, although these operations may have some limitations based on the view's definition. Views can be used to simplify complex queries, abstract away unnecessary details, and provide a layer of security by restricting access to certain rows and columns. Views can also aggregate data, join multiple tables, and provide a customized representation of data (a simple join-view sketch follows the syntax example below). In addition, views can be nested within other views, allowing for even greater flexibility in data modeling and query construction.

What are the disadvantages of views in SQL Server?

While views can be useful in many situations, there are also some potential disadvantages to using them in SQL Server. These include:
Performance overhead: Depending on the complexity of the view, querying it may result in additional processing overhead and slower performance than querying a table directly.
Limited write capabilities: While views can be used to update, insert, and delete data, there are limitations based on the view's definition. For example, a view may not allow updates to certain columns or rows, or may not allow modifications at all if it contains certain joins or aggregations.
Maintenance overhead: Creating and maintaining views adds to the complexity of a database schema and increases the amount of work required to manage it.
Difficulty in optimization: Views may be difficult to optimize for performance, as the query optimizer must consider the underlying tables and the view's definition when generating a query plan.
Security concerns: Views can provide a layer of security by restricting access to certain data, but they may also be used to bypass other security measures if not configured correctly.
It's important to weigh the benefits and drawbacks of using views in each specific situation and to consider alternative approaches if necessary.

How to create a view - SQL Syntax

The basic syntax for creating a view in SQL Server is as follows:

CREATE VIEW view_name AS
SELECT column1, column2, ...
FROM table_name
WHERE condition;

In this syntax:
view_name is the name you want to give to the view.
column1, column2, etc. are the names of the columns you want to include in the view.
table_name is the name of the table (or tables) from which the view retrieves data.
condition is an optional WHERE clause that specifies which rows are included in the view.
Once you have created the view, you can use it in queries just like a regular table.
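To make the joining and nesting capabilities described above concrete, here is a minimal sketch of a view built over two hypothetical tables, dbo.Customers and dbo.Orders. These table and column names are assumptions for illustration only; they are not objects defined elsewhere in this article.

-- A view that joins two (hypothetical) tables into one customer-order result set
CREATE VIEW dbo.vw_CustomerOrders
AS
SELECT c.CustomerID,
       c.CustomerName,
       o.OrderID,
       o.OrderDate,
       o.TotalAmount
FROM dbo.Customers AS c
JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID;

-- The view can then be queried like a table, or referenced by another view
SELECT CustomerName, SUM(TotalAmount) AS TotalSpent
FROM dbo.vw_CustomerOrders
GROUP BY CustomerName;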
Note that the SELECT statement used to define the view can be any valid SELECT statement, including joins, aggregations, and subqueries.

How to create a view in SQL with a Single Table

Here is an example T-SQL script that creates a table, populates it with 10 rows of sample data, and then creates a simple view based on that table:

-- Create the table
CREATE TABLE MyTable (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    FirstName VARCHAR(50) NOT NULL,
    LastName VARCHAR(50) NOT NULL,
    Age INT NOT NULL,
    Email VARCHAR(100) NOT NULL
);

-- Insert sample data
INSERT INTO MyTable (FirstName, LastName, Age, Email)
VALUES
('John', 'Doe', 25, 'john.doe@example.com'),
('Jane', 'Doe', 32, 'jane.doe@example.com'),
('Bob', 'Smith', 45, 'bob.smith@example.com'),
('Alice', 'Johnson', 27, 'alice.johnson@example.com'),
('Tom', 'Brown', 40, 'tom.brown@example.com'),
('Sara', 'Williams', 35, 'sara.williams@example.com'),
('David', 'Lee', 28, 'david.lee@example.com'),
('Mary', 'Clark', 30, 'mary.clark@example.com'),
('Peter', 'Adams', 50, 'peter.adams@example.com'),
('Sarah', 'Davis', 42, 'sarah.davis@example.com');

-- Create the view
CREATE VIEW MyView AS
SELECT Id, FirstName, LastName, Age
FROM MyTable;

In this example, the table is called MyTable, with columns for Id, FirstName, LastName, Age, and Email. The table is populated with 10 rows of sample data using the INSERT INTO statement. Finally, a simple view called MyView is created that selects the Id, FirstName, LastName, and Age columns from MyTable.

ALTER a SQL VIEW

Here's how you could alter the MyView view from the previous example:

-- Alter the view to include the Email column
ALTER VIEW MyView AS
SELECT Id, FirstName, LastName, Age, Email
FROM MyTable;

This ALTER VIEW statement modifies the MyView view so that it now includes the Email column from the MyTable table. Note that you can add or remove columns from a view using the ALTER VIEW statement in much the same way you define them with the CREATE VIEW statement.

Updatable Views

Updatable views in SQL Server are views that allow modifications to the underlying data. In other words, they allow you to perform insert, update, and delete operations on the data as if the view were a regular table. To make a view updatable, several requirements must be met:
Any single modification through the view must reference columns from only one base table. If the view is based on multiple tables, there must be no ambiguity about which table a given modification targets.
For inserts, the view must expose all of the base table's columns that require a value (for example, NOT NULL columns without defaults); if the view omits them, the insert will fail.
The columns being modified must not be derived; they cannot be the result of aggregate functions, computations, or other expressions.
The columns being modified must not be affected by constructs such as DISTINCT, GROUP BY, or HAVING in the view's definition.
Here's an example of creating a table and an updatable view based on that table:

-- Create the base table
CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    FirstName VARCHAR(50) NOT NULL,
    LastName VARCHAR(50) NOT NULL,
    Age INT NOT NULL,
    Department VARCHAR(50) NOT NULL
);

-- Populate the base table with some sample data
INSERT INTO Employees (EmployeeID, FirstName, LastName, Age, Department)
VALUES
(1, 'John', 'Doe', 25, 'Sales'),
(2, 'Jane', 'Doe', 32, 'Marketing'),
(3, 'Bob', 'Smith', 45, 'IT'),
(4, 'Alice', 'Johnson', 27, 'HR'),
(5, 'Tom', 'Brown', 40, 'IT'),
(6, 'Sara', 'Williams', 35, 'Marketing');

-- Create an updatable view based on the base table
CREATE VIEW EmployeeView AS
SELECT EmployeeID, FirstName, LastName, Age, Department
FROM Employees
WHERE Department = 'IT';

In this example, we first create a base table called Employees with columns for EmployeeID, FirstName, LastName, Age, and Department. We then populate the table with some sample data using the INSERT INTO statement. Finally, we create an updatable view called EmployeeView that selects only the employees in the IT department. This view meets the requirements for updatable views: it selects from a single base table, exposes all of the columns needed for modifications, and does not contain any aggregate functions or subqueries in the SELECT clause. Now, if we perform an insert, update, or delete operation against EmployeeView, the corresponding operation is performed on the Employees table, allowing us to modify the underlying data through the view.

Here are some examples of how you could use the updatable EmployeeView view from the previous example to perform insert, update, and delete operations on the underlying Employees table:

-- Insert a new employee into the IT department using the view
INSERT INTO EmployeeView (EmployeeID, FirstName, LastName, Age, Department)
VALUES (7, 'Alex', 'Lee', 30, 'IT');

-- Update the age of an existing employee in the IT department using the view
UPDATE EmployeeView
SET Age = 31
WHERE EmployeeID = 3;

-- Delete an employee from the IT department using the view
DELETE FROM EmployeeView
WHERE EmployeeID = 5;

In these examples, we use the INSERT INTO, UPDATE, and DELETE statements to modify the data in the Employees table through the EmployeeView view. Note that the modifications are performed on the base table, not the view itself; the view simply provides a different perspective on the data in the base table.

Conditions for Creating Partitioned Views

A partitioned view in SQL Server is a database object that logically groups together multiple tables with similar structures and allows users to query the data in these tables as if they were a single table. The tables included in the partitioned view can be partitioned horizontally, vertically, or both. Horizontal partitioning divides the data based on rows, while vertical partitioning divides the data based on columns. Both techniques can be used to optimize query performance and manage large amounts of data. A partitioned view appears as a single table to users and applications, even though it is composed of multiple underlying tables. When a query is executed against the partitioned view, SQL Server routes the query to the appropriate underlying table or tables to retrieve the required data.
Partitioned views have several advantages and disadvantages that should be considered before implementing them in a SQL Server database:

Advantages:
Simplified Data Management: Partitioned views allow you to logically group multiple tables with similar structures and query them as a single object. This can simplify data management by reducing the need for complex joins and other query techniques.
Improved Query Performance: Partitioned views can improve query performance by routing queries to the appropriate underlying table or tables. This can help to reduce the amount of data that needs to be processed and improve query response times.
Flexible Partitioning: Partitioned views support both horizontal and vertical partitioning, which can help to optimize performance and manage large amounts of data. This flexibility allows you to tailor the partitioning strategy to the specific needs of your application.
Transparent to Applications: Partitioned views are transparent to applications, which means you can implement partitioning without changing your existing code or queries. This can simplify the process of introducing partitioning into an existing database.

Disadvantages:
Limited Functionality: Partitioned views have some limitations and are not suitable for all scenarios. For example, they cannot be used with certain SQL Server features, such as indexed views and full-text search.
Complex Design: Partitioned views can be complex to design and implement, especially when dealing with large amounts of data. Careful planning and testing are required to ensure that the partitioning strategy is effective and optimized for your specific needs.
Maintenance Overhead: Partitioned views require additional maintenance, such as managing the underlying tables and ensuring that the partitioning strategy remains effective over time. This can add complexity and increase the workload of database administrators.
Here's an example of a partitioned view in SQL Server using two tables named "Orders2019" and "Orders2020":

CREATE TABLE Orders2019 (
    OrderID int PRIMARY KEY,
    CustomerID int,
    OrderDate datetime,
    TotalAmount decimal(10,2)
);

CREATE TABLE Orders2020 (
    OrderID int PRIMARY KEY,
    CustomerID int,
    OrderDate datetime,
    TotalAmount decimal(10,2)
);

-- Sample data for Orders2019 table
INSERT INTO Orders2019 (OrderID, CustomerID, OrderDate, TotalAmount)
VALUES
(1, 1001, '2019-01-01', 100.00),
(2, 1002, '2019-01-05', 200.00),
(3, 1001, '2019-01-10', 300.00),
(4, 1003, '2019-01-15', 400.00);

-- Sample data for Orders2020 table
INSERT INTO Orders2020 (OrderID, CustomerID, OrderDate, TotalAmount)
VALUES
(5, 1002, '2020-01-01', 500.00),
(6, 1001, '2020-01-05', 600.00),
(7, 1003, '2020-01-10', 700.00),
(8, 1002, '2020-01-15', 800.00);

-- Create a Partitioned View
CREATE VIEW OrdersView AS
SELECT * FROM Orders2019
UNION ALL
SELECT * FROM Orders2020;

Now we can query OrdersView just like a regular table to retrieve data from both the Orders2019 and Orders2020 tables at the same time:

SELECT * FROM OrdersView;

A filtered-query sketch follows the resource links below.

Additional Resources
Delete Data In Tables - https://www.bps-corp.com/post/sql-delete-deleting-data-in-a-table-or-multiple-tables
Search Text of Views and Tables - https://www.bps-corp.com/post/sql-search-text-of-views-stored-procs-and-tables
SQL Server Data Types - https://www.bps-corp.com/post/sql-server-data-types-1
SQL Update - https://www.bps-corp.com/post/sql-update-statement-updating-data-in-a-table
Create Tables - https://www.bps-corp.com/post/sql-server-create-table-statement
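As a follow-on to the OrdersView example above, here is a hedged sketch of querying the partitioned view with a filter. The CHECK constraints shown in the comments are an optional refinement not covered in the article: when the partitioning column carries a CHECK constraint on each member table, the optimizer can skip tables that cannot contain qualifying rows.

-- Query the partitioned view with a date filter
SELECT OrderID, CustomerID, OrderDate, TotalAmount
FROM OrdersView
WHERE OrderDate >= '2020-01-01';

-- Optional: CHECK constraints on the partitioning column enable partition elimination
-- ALTER TABLE Orders2019 ADD CONSTRAINT CK_Orders2019_OrderDate
--     CHECK (OrderDate >= '2019-01-01' AND OrderDate < '2020-01-01');
-- ALTER TABLE Orders2020 ADD CONSTRAINT CK_Orders2020_OrderDate
--     CHECK (OrderDate >= '2020-01-01' AND OrderDate < '2021-01-01');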
- The SQL Server Resource Governor
The SQL Server Resource Governor is available in the Enterprise and Developer editions of SQL Server. It is not available in the Standard, Web, or Express editions. The Enterprise edition of SQL Server provides the full set of Resource Governor features, including the ability to manage CPU, memory, and I/O resources, and the ability to configure resource pools, workload groups, and resource allocation policies. The Developer edition of SQL Server provides the same Resource Governor features as the Enterprise edition, but it is intended for non-production environments such as development, testing, and training. If you require Resource Governor functionality and have a version of SQL Server that does not support it, you may want to consider upgrading to the Enterprise edition or moving to a cloud-based solution that provides this feature.

The SQL Server Resource Governor has evolved over time, with new features and capabilities added in different versions of SQL Server. Here are some of the key differences in Resource Governor across different versions:

SQL Server 2008: The Resource Governor was first introduced in SQL Server 2008 Enterprise edition. It provided the ability to limit CPU usage by defining resource pools and workload groups, and setting maximum CPU usage for each group. However, it did not support memory or I/O resource management.
SQL Server 2012: In SQL Server 2012, the Resource Governor added support for memory resource management. This allowed you to control memory usage for different workloads and applications by setting maximum memory limits for each resource pool and workload group.
SQL Server 2014: The Resource Governor in SQL Server 2014 introduced the ability to manage I/O resources. This allowed you to limit the amount of I/O that different workloads and applications could consume, helping to prevent I/O contention and improve overall server performance.
SQL Server 2016: In SQL Server 2016, the Resource Governor introduced the ability to control the amount of physical I/Os per database file. This allowed you to prevent a single database from consuming too many physical I/Os and impacting the performance of other databases on the server.
SQL Server 2017: The Resource Governor in SQL Server 2017 added support for the Intelligent Query Processing feature, which uses machine learning to improve query performance. This allowed you to apply Resource Governor policies to Intelligent Query Processing features, such as batch mode memory grant feedback and adaptive query processing.
SQL Server 2019: The Resource Governor in SQL Server 2019 added support for Kubernetes containerization, which allows you to manage and allocate resources for SQL Server workloads running in containers.

To enable the SQL Server Resource Governor using T-SQL, you can use the following steps:

Enable the Resource Governor by running the following command:

ALTER RESOURCE GOVERNOR RECONFIGURE;

This command will enable the Resource Governor feature on your SQL Server instance.

Create a resource pool by running the following command:

CREATE RESOURCE POOL [PoolName]
WITH (
    MAX_CPU_PERCENT = 30,    -- Maximum CPU usage percentage allowed for the pool
    MAX_MEMORY_PERCENT = 50  -- Maximum memory usage percentage allowed for the pool
);

This command will create a resource pool named "PoolName" with a maximum CPU usage of 30% and a maximum memory usage of 50%.
Create a workload group by running the following command:

CREATE WORKLOAD GROUP [GroupName]
USING [PoolName];

This command will create a workload group named "GroupName" that uses the "PoolName" resource pool.

Note that logins are not assigned to workload groups directly; there is no ALTER LOGIN option for this. Instead, incoming sessions are routed to workload groups by a Resource Governor classifier function, which maps each new connection to a group based on criteria you define, such as the user name, the application name, or other factors:

CREATE FUNCTION [dbo].[ClassifierFunctionName]()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @workload_group_name AS SYSNAME;
    SET @workload_group_name = (
        SELECT [GroupName]
        FROM [dbo].[MyClassifierTable]
        WHERE [Criteria] = SUSER_SNAME()
    );
    RETURN @workload_group_name;
END;

This command will create a classifier function named "ClassifierFunctionName" that assigns sessions to the appropriate workload group based on the criteria you define in the [dbo].[MyClassifierTable] table. The classifier function must be created in the master database and, once created, must be registered with the Resource Governor before it takes effect (see the short sketch after this section). Once you have completed these steps, the Resource Governor will be enabled and configured on your SQL Server instance, and new sessions will be automatically routed to the appropriate workload group based on the criteria you have defined.

Here are some additional queries that you can use to manage the SQL Server Resource Governor using T-SQL:

To view the current Resource Governor configuration, including whether it is enabled and which classifier function is registered, you can run the following command:

SELECT * FROM sys.resource_governor_configuration;

To list the resource pools and workload groups that have been configured, along with their settings, query the sys.resource_governor_resource_pools and sys.resource_governor_workload_groups catalog views.

To modify the settings of an existing resource pool or workload group, you can use the ALTER RESOURCE POOL or ALTER WORKLOAD GROUP command, respectively. For example, to change the maximum CPU usage for a resource pool named "PoolName" to 40%, you could run the following commands:

ALTER RESOURCE POOL [PoolName] WITH (MAX_CPU_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;

This will modify the "PoolName" resource pool to allow a maximum CPU usage of 40%; the ALTER RESOURCE GOVERNOR RECONFIGURE statement applies the pending change.

To remove a resource pool or workload group from the Resource Governor, you can use the DROP RESOURCE POOL or DROP WORKLOAD GROUP command, respectively. For example, to remove a workload group named "GroupName", you could run the following command:

DROP WORKLOAD GROUP [GroupName];

This command will remove the "GroupName" workload group from the Resource Governor (again followed by ALTER RESOURCE GOVERNOR RECONFIGURE to apply the change).

To view the current runtime statistics for each resource pool and workload group, including CPU and memory usage, you can run the following commands:

SELECT * FROM sys.dm_resource_governor_resource_pools;
SELECT * FROM sys.dm_resource_governor_workload_groups;

These commands will display information about the current resource usage for each resource pool and workload group, allowing you to monitor and optimize your SQL Server resource allocation. By using these and other T-SQL commands, you can effectively manage the SQL Server Resource Governor and optimize the performance of your SQL Server instances.
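The walkthrough above creates the classifier function but does not show the step that wires it into the Resource Governor. Assuming the ClassifierFunctionName example above, a minimal registration sketch looks like this:

-- Register the classifier function and apply the configuration
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = [dbo].[ClassifierFunctionName]);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- To stop classifying new sessions (they then land in the default group):
-- ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL);
-- ALTER RESOURCE GOVERNOR RECONFIGURE;

Classification happens only when a session connects, so existing sessions keep their current workload group until they reconnect.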
To view the workload group that each user session has been assigned to, you can run the following command:

SELECT session_id, login_name, group_id
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

This command will display the workload group ID (group_id) for each user session, along with the login that owns it.

To view the currently active requests and their associated workload groups, you can run the following command:

SELECT session_id, request_id, group_id
FROM sys.dm_exec_requests
WHERE group_id IS NOT NULL;

This command will display a list of all currently active requests on the SQL Server instance, along with their associated workload groups.

To view the resource usage accumulated by each session, you can run the following command:

SELECT session_id, login_name, cpu_time, memory_usage, reads, writes, logical_reads
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

This command will display a list of all currently active user sessions on the SQL Server instance, along with their resource usage statistics.

Note that there is no T-SQL command to move an existing session into a different workload group. Sessions are classified only when they connect, via the classifier function, so to change the group a login ends up in, update the classifier logic (for example, the mapping table it reads) and have the user reconnect. By using these and other T-SQL commands, you can effectively manage the SQL Server Resource Governor and optimize the performance of your SQL Server instances.
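To make the monitoring queries above a little more readable, here is a hedged sketch that resolves group_id to the workload group and resource pool names, using only the dynamic management views discussed in this article:

-- Which workload group and resource pool is each user session running in?
SELECT s.session_id,
       s.login_name,
       g.name AS workload_group,
       p.name AS resource_pool
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS g
    ON s.group_id = g.group_id
JOIN sys.dm_resource_governor_resource_pools AS p
    ON g.pool_id = p.pool_id
WHERE s.is_user_process = 1;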
- SQL Encryption
SQL Server Encryption is a feature in Microsoft SQL Server that enables you to encrypt data to protect it from unauthorized access. SQL Server offers several encryption options, including cell-level encryption, Transparent Data Encryption (TDE), and backup encryption. Cell-level encryption allows you to encrypt individual columns of data within a table, and it offers more granular control than database-level encryption. TDE encrypts the entire database and its backups, making it more suitable for larger databases that require protection at rest. Backup encryption is used to encrypt SQL Server backups, protecting them from unauthorized access. SQL Server Encryption uses symmetric encryption algorithms, such as the Advanced Encryption Standard (AES) and Triple Data Encryption Standard (3DES), to encrypt the data. The encryption key is then protected by a certificate or an asymmetric key, which is stored in the SQL Server master database. The certificate or asymmetric key is in turn protected by the database master key, which you create in the master database. SQL Server Encryption is an essential component of any data security strategy, as it helps to ensure that sensitive data is protected from unauthorized access. However, implementing SQL Server Encryption requires careful planning and consideration of performance implications, as encrypting and decrypting data can affect the performance of database operations.

SQL Server Encryption is available in several editions of Microsoft SQL Server, but the availability of specific encryption features varies by edition and version. Broadly:

Enterprise and Developer Editions: These editions include all of the encryption features available in SQL Server, including Transparent Data Encryption (TDE), cell-level encryption, backup encryption, Extensible Key Management (EKM), and Always Encrypted.
Standard Edition: This edition includes cell-level encryption, backup encryption, and Always Encrypted (since SQL Server 2016 SP1); TDE was added to Standard Edition in SQL Server 2019.
Web and Express Editions: These editions include cell-level encryption and Always Encrypted (since SQL Server 2016 SP1) but not TDE; they can restore encrypted backups, but creating encrypted backups generally requires Standard Edition or higher.

Note that the availability of SQL Server Encryption features may also depend on the version and service pack level of SQL Server. It's important to review the documentation for your specific version and edition of SQL Server to determine which encryption features are available and how to implement them. There are differences in encryption features and capabilities among versions of SQL Server. Here are some of the key differences:

Transparent Data Encryption (TDE): TDE was introduced in SQL Server 2008 Enterprise Edition and remained an Enterprise-only feature until SQL Server 2019, which added it to Standard Edition. TDE encrypts the entire database, including data files, log files, and backups.
Cell-level encryption: Cell-level encryption was introduced in SQL Server 2005 and is available in all editions of SQL Server. It allows you to encrypt individual columns of data within a table.
Backup encryption: Backup encryption was introduced in SQL Server 2014 and is available in Standard Edition and higher. It allows you to encrypt database backups.
Always Encrypted: Always Encrypted was introduced in SQL Server 2016 and, since SQL Server 2016 SP1, is available in all editions.
It allows you to encrypt sensitive data such as credit card numbers and personally identifiable information (PII) at rest and in transit. Extensible Key Management (EKM): EKM was introduced in SQL Server 2008 Enterprise Edition and is available in all editions of SQL Server. It allows you to store and manage encryption keys in an external hardware security module (HSM). Transport Layer Security (TLS): SQL Server supports different versions of TLS for encrypting network traffic between client applications and the SQL Server instance. TLS 1.2 is the default version starting from SQL Server 2016, while older versions of SQL Server support older versions of TLS. It's important to note that some encryption features are only available in specific editions of SQL Server. Therefore, it's important to review the documentation for your specific version and edition of SQL Server to determine which encryption features are available and how to implement them. SQL Server provides various encryption options to secure data and protect it from unauthorized access. Here are the different types of encryption available in SQL Server and their pros and cons: Transparent Data Encryption (TDE): TDE encrypts the entire database and is transparent to applications and end-users. Pros include the ability to protect data at rest and secure backups. However, TDE can have performance overhead and may not protect against all attack vectors. Cell-level encryption: Cell-level encryption encrypts individual columns or cells within a table. Pros include fine-grained control over data access and the ability to encrypt only sensitive data. However, cell-level encryption can have performance overhead, and it may not protect against all attack vectors. Backup encryption: Backup encryption encrypts database backups. Pros include the ability to secure backups and protect against unauthorized access. However, backup encryption can have a performance overhead and may require additional storage space. Always Encrypted: Always Encrypted encrypts data at rest and in transit between the client and server. Pros include the ability to secure sensitive data even when it's in use and the ability to delegate key management to the client. However, Always Encrypted may have a performance overhead and may not be suitable for all types of data. Transport Layer Security (TLS): TLS encrypts data in transit between the client and server. Pros include the ability to secure data in transit and protect against eavesdropping and man-in-the-middle attacks. However, TLS may have a performance overhead and may require additional configuration. Extensible Key Management (EKM): EKM allows you to store and manage encryption keys in an external hardware security module (HSM). Pros include the ability to meet compliance requirements and the ability to protect keys from being compromised. However, EKM may require additional hardware and configuration, and it may not be suitable for all environments. Overall, the pros and cons of each encryption type will depend on your specific use case, performance requirements, security needs, and compliance requirements. It's important to review the documentation for your specific version and edition of SQL Server and consult with security experts to determine which encryption options are suitable for your environment. 
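Before implementing any of the options below, it can help to see which encryption objects already exist on the instance. This is a small, hedged inventory sketch using standard catalog and dynamic management views; adjust it to the databases you care about:

-- Symmetric keys and certificates in the current database
SELECT name, algorithm_desc, create_date FROM sys.symmetric_keys;
SELECT name, subject, expiry_date FROM sys.certificates;

-- Databases that already have a TDE encryption key, and their state
-- (encryption_state = 3 means the database is encrypted)
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       key_algorithm,
       key_length
FROM sys.dm_database_encryption_keys;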
To implement Transparent Data Encryption (TDE) in SQL Server, you can follow these general steps: create or obtain a database master key (DMK) and a certificate or asymmetric key to protect the TDE encryption key; create a database encryption key (DEK) for the database that you want to encrypt; and enable TDE for the database by setting encryption on the database to ON. Here are more detailed steps:

Create or obtain a database master key (DMK) and a certificate or asymmetric key to protect the TDE encryption key:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password';  -- password for the DMK
GO
CREATE CERTIFICATE TDE_Cert WITH SUBJECT = 'TDE certificate';  -- certificate or asymmetric key to protect the TDE encryption key
GO

Create a database encryption key (DEK) for the database that you want to encrypt:

USE [database_name];
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256  -- encryption algorithm
ENCRYPTION BY SERVER CERTIFICATE TDE_Cert;  -- certificate or asymmetric key to protect the DEK
GO

Enable TDE for the database by setting encryption on the database to ON:

ALTER DATABASE [database_name] SET ENCRYPTION ON;
GO

After TDE is enabled, all database files, including the data files, log file, and any other filegroups, will be encrypted, as will subsequent backups of the database. Note that enabling TDE can have a performance overhead, so it's important to test and monitor the performance of your database after enabling TDE. Also, TDE only protects data at rest, not data in transit or data that is being processed by the application. Therefore, it's important to use other encryption options, such as Always Encrypted or SSL/TLS, to protect data in transit or in use.

To implement cell-level encryption in SQL Server, you can follow these general steps: create a certificate (or asymmetric key) to protect the encryption key; create a symmetric key protected by that certificate; add a varbinary column to hold the ciphertext; and encrypt the data with the ENCRYPTBYKEY function. (The CREATE COLUMN MASTER KEY and CREATE COLUMN ENCRYPTION KEY objects sometimes associated with column encryption belong to the Always Encrypted feature, which is covered separately below.) Here are more detailed steps:

Create a certificate to protect the symmetric key (this assumes the database already has a database master key):

USE [database_name];
CREATE CERTIFICATE Cell_Encryption_Cert WITH SUBJECT = 'Cell-level encryption certificate';
GO

Create a symmetric key protected by the certificate:

CREATE SYMMETRIC KEY Cell_SymKey
WITH ALGORITHM = AES_256
ENCRYPTION BY CERTIFICATE Cell_Encryption_Cert;
GO

Add a varbinary column to hold the encrypted values and encrypt the data:

ALTER TABLE [schema_name].[table_name] ADD [encrypted_column] varbinary(max);
GO
OPEN SYMMETRIC KEY Cell_SymKey DECRYPTION BY CERTIFICATE Cell_Encryption_Cert;
UPDATE [schema_name].[table_name]
SET [encrypted_column] = ENCRYPTBYKEY(KEY_GUID('Cell_SymKey'), [plain_column]);  -- [plain_column] is the existing column holding the sensitive data
CLOSE SYMMETRIC KEY Cell_SymKey;
GO

After cell-level encryption is applied, only the specified columns are encrypted, and other columns in the table remain unencrypted (the plaintext column can be dropped or restricted once the encrypted data has been verified). It's important to note that cell-level encryption can have a performance overhead and may not protect against all attack vectors.
It's recommended to use it only for sensitive data and to test and monitor the performance of your database after enabling cell-level encryption.

How to implement Backup encryption

To implement backup encryption in SQL Server, you can follow these general steps: create (or reuse) a database master key in the master database; create a certificate or asymmetric key to protect the backups; and back up the database using that certificate. Here are more detailed steps:

Create a database master key in master (if one does not already exist) and a certificate to protect backups:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password';
GO
CREATE CERTIFICATE Backup_Encryption_Cert WITH SUBJECT = 'Backup encryption certificate';
GO

Back up the database using the backup encryption certificate:

BACKUP DATABASE [database_name]
TO DISK = 'backup_file_name'
WITH INIT, FORMAT, COMPRESSION, STATS = 10,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = Backup_Encryption_Cert);
GO

After backup encryption is enabled, the backup file is encrypted and can only be restored on an instance that has the backup encryption certificate, so back up the certificate and its private key and store them safely. It's important to note that backup encryption can have a performance overhead and may increase backup time. It's recommended to use it for sensitive data and to test and monitor the performance of your backup process after enabling backup encryption.

Here are the general steps and some T-SQL commands for implementing SQL Server Always Encrypted:

Create a column master key: Use the following T-SQL command to create a column master key:

CREATE COLUMN MASTER KEY [CMK_Name]
WITH (
    KEY_STORE_PROVIDER_NAME = 'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = 'CurrentUser/My/<certificate_thumbprint>'
);

Replace CMK_Name with the name of your column master key, and <certificate_thumbprint> with the certificate-store path of the certificate you want to use.

Create a column encryption key: Use the following T-SQL command to create a column encryption key:

CREATE COLUMN ENCRYPTION KEY [CEK_Name]
WITH VALUES (
    COLUMN_MASTER_KEY = [CMK_Name],
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = <encrypted_value>
);

Replace CEK_Name with the name of your column encryption key, CMK_Name with the name of your column master key, and <encrypted_value> with the value of the column encryption key encrypted under the column master key. In practice, these key objects are usually generated through the SSMS Always Encrypted wizard or PowerShell, which compute the encrypted value for you.

Define column encryption settings: Use the following T-SQL command to define column encryption settings:

ALTER TABLE [Table_Name]
ALTER COLUMN [Column_Name] [Data_Type]
ENCRYPTED WITH (
    COLUMN_ENCRYPTION_KEY = [CEK_Name],
    ENCRYPTION_TYPE = [Encryption_Type],
    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
);

Replace Table_Name with the name of your table, Column_Name with the name of your column, Data_Type with the data type of your column, CEK_Name with the name of your column encryption key, and Encryption_Type with the encryption type you want to use (DETERMINISTIC or RANDOMIZED). Note that encrypting an existing, populated column is typically done through the SSMS Always Encrypted wizard rather than a direct ALTER COLUMN.

Modify your application: To modify your application, you need to use an Always Encrypted-enabled client driver, such as the .NET Framework Data Provider for SQL Server in .NET Framework 4.6 or later.
You also need to change your connection string to enable Always Encrypted by adding the following setting:

Column Encryption Setting=Enabled

The client driver then retrieves the column master key from the certificate store location recorded in the key metadata (for example, CurrentUser/My), so the certificate must be installed on the client machine for the application's user.

Test and deploy: Test your application thoroughly to ensure that it works correctly with Always Encrypted. Once you are satisfied that everything is working correctly, deploy your changes to your production environment. These are the general steps and T-SQL commands for implementing SQL Server Always Encrypted. Keep in mind that this is a complex feature that requires careful planning and implementation. Consult the SQL Server documentation and seek expert advice before implementing it in a production environment.

Here are the general steps and some T-SQL commands for implementing Extensible Key Management (EKM) in SQL Server:

Install the EKM provider: Install the EKM provider software from your third-party vendor onto your SQL Server machine, and enable EKM providers on the instance:

sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'EKM provider enabled', 1;
RECONFIGURE;
GO

Register the EKM provider: Use the following T-SQL commands to register the provider's cryptographic DLL and a credential for it (the file path, identity, and secret are placeholders supplied by your EKM vendor):

USE [master];
GO
CREATE CRYPTOGRAPHIC PROVIDER [EKM_Provider_Name]
FROM FILE = 'C:\Path\To\EKM_Provider.dll';
GO
CREATE CREDENTIAL [EKM_Credential]
WITH IDENTITY = 'EKM_Account', SECRET = 'EKM_Password'
FOR CRYPTOGRAPHIC PROVIDER [EKM_Provider_Name];
GO

Replace EKM_Provider_Name with the name you want to use for the provider, and the file path, identity, and secret with the values supplied by your vendor.

Create a database master key: Use the following T-SQL command to create a database master key:

USE [database_name];
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password';
GO

Create a certificate or asymmetric key: Use the following T-SQL command to create a certificate or asymmetric key:

USE [database_name];
GO
CREATE CERTIFICATE [Certificate_Name] WITH SUBJECT = 'Certificate_Subject';
GO

Replace Certificate_Name with the name you want to use for the certificate, and Certificate_Subject with a description of the certificate.

Create a symmetric key: Use the following T-SQL command to create a symmetric key:

USE [database_name];
GO
CREATE SYMMETRIC KEY [Symmetric_Key_Name]
WITH ALGORITHM = AES_256,
     IDENTITY_VALUE = 'identity_value',
     KEY_SOURCE = 'key_source'
ENCRYPTION BY CERTIFICATE [Certificate_Name];
GO

Replace Symmetric_Key_Name with the name you want to use for the symmetric key, identity_value and key_source with the phrases used to derive the key, and Certificate_Name with the name of the certificate you created in the previous step. (To have the key created and held by the EKM device itself, the key can instead be created FROM PROVIDER; see your vendor's documentation.)

Use the symmetric key to encrypt data: Use the following T-SQL commands to encrypt data using the symmetric key:

USE [database_name];
GO
OPEN SYMMETRIC KEY [Symmetric_Key_Name] DECRYPTION BY CERTIFICATE [Certificate_Name];
GO
UPDATE [table_name]
SET [column_name] = ENCRYPTBYKEY(KEY_GUID('Symmetric_Key_Name'), [column_name]);  -- the target column must be varbinary and large enough to hold the ciphertext
GO
CLOSE SYMMETRIC KEY [Symmetric_Key_Name];
GO

Replace table_name with the name of the table containing the column you want to encrypt, column_name with the name of the column you want to encrypt, and Symmetric_Key_Name and Certificate_Name with the names of the symmetric key and certificate you created in the previous steps. These are the general steps and T-SQL commands for implementing Extensible Key Management (EKM) in SQL Server.
Keep in mind that this is a complex feature that requires careful planning and implementation. Consult the SQL Server documentation and seek expert advice before implementing it in a production environment.
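To round out the cell-level encryption example earlier, here is a hedged sketch of reading the encrypted data back with DECRYPTBYKEY. It assumes the Cell_SymKey and Cell_Encryption_Cert objects and the placeholder table and column names used in that example:

-- Decrypt the cell-level encrypted column for authorized readers
OPEN SYMMETRIC KEY Cell_SymKey DECRYPTION BY CERTIFICATE Cell_Encryption_Cert;

SELECT CONVERT(varchar(100), DECRYPTBYKEY([encrypted_column])) AS decrypted_value
FROM [schema_name].[table_name];

CLOSE SYMMETRIC KEY Cell_SymKey;

DECRYPTBYKEY returns NULL when the key is not open or the caller lacks permission on it, which is one way cell-level encryption limits who can see the plaintext.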
- Alerts 19-25
In SQL Server, Alerts are used to monitor specific events or errors that occur within the SQL Server engine, and severity levels 19 through 25 are the most critical. Errors at these levels are fatal errors that can indicate resource problems, damaged data, or hardware failure, so they warrant immediate attention. These are the general meanings of severity levels 19-25:

Severity 19: A non-configurable Database Engine resource limit has been exceeded and the current batch has been terminated. These errors are fatal and occur rarely.
Severity 20: The current statement has encountered a fatal error. Only the current task is affected, and the database itself is unlikely to be damaged.
Severity 21: A fatal error has occurred that affects all tasks in the current database, although the database itself is unlikely to be damaged.
Severity 22: The table or index specified in the message has been damaged by a software or hardware problem (table integrity suspect).
Severity 23: The integrity of the entire database is suspect due to a hardware or software problem.
Severity 24: A media (hardware) failure has occurred; you may need to restore the database or contact your hardware vendor.
Severity 25: A fatal system error has occurred.

When an alert is triggered in SQL Server, it can be configured to perform specific actions like sending an email, executing a job, or running a PowerShell script. Alert monitoring can help to prevent downtime, optimize performance, and reduce technical debt by allowing database administrators to respond to critical issues and resolve them proactively.

How To Setup Alerts T-SQL

USE [msdb] GO EXEC msdb.dbo.sp_add_alert @name=N'Severity 016', @message_id=0, @severity=16, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000'; GO EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 016', @operator_name=N'The DBA Team', @notification_method = 7; GO EXEC msdb.dbo.sp_add_alert @name=N'Severity 017', @message_id=0, @severity=17, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000'; GO EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 017', @operator_name=N'The DBA Team', @notification_method = 7; GO EXEC msdb.dbo.sp_add_alert @name=N'Severity 018', @message_id=0, @severity=18, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000'; GO EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 018', @operator_name=N'The DBA Team', @notification_method = 7; GO EXEC msdb.dbo.sp_add_alert @name=N'Severity 019', @message_id=0, @severity=19, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000'; GO EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 019', @operator_name=N'The DBA Team', @notification_method = 7; GO EXEC msdb.dbo.sp_add_alert @name=N'Severity 020', @message_id=0, @severity=20, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000'; GO EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 020', @operator_name=N'The DBA Team', @notification_method = 7; GO EXEC msdb.dbo.sp_add_alert @name=N'Severity 021', @message_id=0, @severity=21, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000'; GO EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 021', @operator_name=N'The DBA Team', @notification_method = 7; GO EXEC 
EXEC msdb.dbo.sp_add_alert @name=N'Severity 022', @message_id=0, @severity=22, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 022', @operator_name=N'The DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 023', @message_id=0, @severity=23, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 023', @operator_name=N'The DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 024', @message_id=0, @severity=24, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 024', @operator_name=N'The DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 025', @message_id=0, @severity=25, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 025', @operator_name=N'The DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Error Number 823', @message_id=823, @severity=0, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 823', @operator_name=N'The DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Error Number 824', @message_id=824, @severity=0, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 824', @operator_name=N'The DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Error Number 825', @message_id=825, @severity=0, @enabled=1, @delay_between_responses=60, @include_event_description_in=1, @job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 825', @operator_name=N'The DBA Team', @notification_method = 7;
GO
Setting Up Operators
SQL Agent operators are used to define the recipients for alerts and notifications generated by SQL Server Agent jobs. Here are the steps to create SQL Agent operators:
1. Open SQL Server Management Studio and connect to the instance.
2. Expand the "SQL Server Agent" node in the Object Explorer and right-click on the "Operators" folder.
3. Select "New Operator" to open the "New Operator" dialog box.
4. In the "General" section, enter the name of the operator in the "Name" field.
5. Enter the email address of the operator in the "E-mail name" field. You can add multiple email addresses by separating them with semicolons.
6. In the "Pager" section, enter the pager email address of the operator (if applicable).
7. In the "Net send" section, enter the computer name of the operator (if applicable).
8. Click "OK" to create the operator.
Once the operator is created, you can assign them to SQL Server Agent jobs to receive alerts and notifications. Here are the steps to assign operators to SQL Server Agent jobs:
1. Right-click on the SQL Server Agent job in the Object Explorer and select "Properties".
2. Select the "Notifications" tab.
3. Select the "E-mail" checkbox and select the operator from the drop-down list.
4. If applicable, select the "Page" checkbox and select the operator from the drop-down list.
5. If applicable, select the "Net send" checkbox and select the operator from the drop-down list.
6. Click "OK" to save the changes.
Now, the SQL Server Agent job will send notifications to the assigned operators when it completes or when an error occurs.
Setup Database Mail
Here's an example T-SQL code to set up Database Mail, create a new mail profile, and make it the default profile for public use:
-- Enable Database Mail
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Database Mail XPs', 1;
GO
RECONFIGURE;
GO
-- Create a new mail profile
EXECUTE msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'MyMailProfile',
    @description = 'My Mail Profile Description';
-- Create an SMTP account
EXECUTE msdb.dbo.sysmail_add_account_sp
    @account_name = 'MyMailAccount',
    @email_address = 'myemail@example.com',
    @mailserver_name = 'smtp.example.com',
    @port = 25,
    @username = 'myusername',
    @password = 'mypassword',
    @use_default_credentials = 0;
-- Add the account to the mail profile
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'MyMailProfile',
    @account_name = 'MyMailAccount',
    @sequence_number = 1;
-- Grant the public principal access to the profile and make it the default
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
    @principal_name = 'public',
    @profile_name = 'MyMailProfile',
    @is_default = 1;
In this code, we first enable the Database Mail feature, then create a new mail profile named "MyMailProfile" and an SMTP account. We then add the account to the profile and grant the public database principal access to it, making it the default profile. You can modify this code as per your requirements, such as adding multiple accounts to the profile or restricting the profile to specific principals instead of public.
Setting Up An SMTP Server
The choice of SMTP server for use with SQL Server Database Mail depends on various factors such as the organization's infrastructure, email service provider, and specific requirements. I have an account with Mailgun and think it works well. Here are some general recommendations for SMTP servers that are commonly used with SQL Server Database Mail: Microsoft Exchange Server: If your organization uses Microsoft Exchange Server, it can be a good option for setting up SMTP for Database Mail. Exchange Server is a widely used mail server that supports sending emails from SQL Server. Gmail: Gmail is a popular email service provider that offers an SMTP relay service. You can configure SQL Server Database Mail to use the Gmail SMTP server for sending emails. Amazon SES: Amazon Simple Email Service (SES) is a cloud-based email service that provides a reliable and cost-effective option for sending emails from SQL Server. SendGrid: SendGrid is a cloud-based email service provider that offers a reliable SMTP relay service for sending emails from SQL Server. Mailgun: Mailgun is another cloud-based email service provider that offers a powerful SMTP relay service for sending transactional emails from SQL Server. These SMTP servers have different pricing plans and feature sets, so it's important to evaluate them based on your specific needs and requirements.
Additionally, some SMTP servers may require additional configuration or authentication settings to work with SQL Server Database Mail.
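To round out the notification pipeline without leaving T-SQL, here is a hedged sketch that creates the operator referenced by the alert examples above and sends a test message through the mail profile from the Database Mail section; the e-mail address is a placeholder you would replace with your team's distribution list:
USE msdb;
GO
-- Create the operator that the sp_add_notification calls above expect to exist
EXEC dbo.sp_add_operator
    @name = N'The DBA Team',
    @enabled = 1,
    @email_address = N'dba-team@example.com';  -- placeholder address
GO
-- Send a test message through the Database Mail profile configured earlier
EXEC dbo.sp_send_dbmail
    @profile_name = 'MyMailProfile',
    @recipients = 'dba-team@example.com',      -- placeholder address
    @subject = 'Database Mail test',
    @body = 'If this message arrives, alert notifications can reach the DBA team.';
GO
Creating the operator before running the alert script avoids sp_add_notification failing because the operator name does not yet exist.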
- Table Optimization Strategies in SQL Server
Are you looking for ways to improve the performance of your database tables? If so, then table optimization in SQL Server is an important skill to master. Table optimization refers to the process of analyzing and improving the way data is stored and retrieved. This article will provide an overview of several table optimization strategies that can help you get the most out of your database tables. Indexing Indexing is a strategy that can be used to improve the speed of data retrieval operations on a table. An index is a data structure that stores values from one or more columns in a table, and it enables fast lookup of rows when compared with searching the entire table. In addition, partitioning a table means dividing it into smaller, more manageable pieces called partitions. Partitioning can be used to improve query performance by reducing the amount of data that needs to be searched through when executing queries. Data compression Data compression can also reduce the amount of storage space required by a table, which can improve query performance by allowing more data to fit into memory at once. Another strategy is to denormalize a table by adding redundant data to it. This can improve query performance by reducing the number of joins required to retrieve the data. Finally, SQL Server collects statistics on table columns to help it make better decisions about how to execute queries. These statistics are updated periodically and should be monitored regularly for accuracy. SQL Server compression is a feature that allows you to reduce the storage space required for your data by compressing it. This can result in significant space savings and can also improve query performance by reducing the amount of data that needs to be read from disk. There are two types of compression available in SQL Server: Row Compression: This type of compression works by eliminating redundant data within each row of a table. This can result in a compression ratio of up to 50% for certain types of data, such as strings that contain repeated values. Row compression is best suited for tables that have a high degree of redundancy within each row. Row compression in SQL Server is most effective when you have tables with a high degree of redundancy within each row. Some examples of situations where row compression can be effective include: Tables with long string columns: If you have tables with long string columns that contain repeated values, such as address or description columns, row compression can be very effective in reducing the storage space required for these columns. Tables with many NULL values: If you have tables with many columns that contain NULL values, row compression can be effective in reducing the storage space required for these columns, as NULL values take up space in the data file. Tables with repetitive data: If you have tables with columns that contain repetitive data, such as flag or status columns, row compression can be effective in reducing the storage space required for these columns. Large tables: If you have large tables that are consuming a significant amount of storage space, row compression can be effective in reducing the overall size of the table. Page Compression: This type of compression works by compressing entire data pages, rather than individual rows. This can result in a compression ratio of up to 90% for certain types of data. 
Page compression is best suited for tables that have a high degree of redundancy between rows, such as tables that contain many columns with similar data. You would typically use compression when you have a large amount of data that is consuming a significant amount of storage space, or when you need to improve the performance of queries that access that data. Compression can also be useful in situations where you have limited storage capacity and need to make the most efficient use of available space. Page compression in SQL Server is most effective when you have tables with a high degree of redundancy between rows, where rows have similar data. Some examples of situations where page compression can be effective include: Large tables with high data redundancy: If you have large tables with a high degree of redundancy between rows, such as tables with many columns containing similar data, page compression can be very effective in reducing the storage space required for the table. History tables: If you have history tables that contain many rows with similar data, such as transaction or audit tables, page compression can be effective in reducing the storage space required for these tables. Fact tables in data warehouses: Fact tables in data warehouses often contain a large number of rows with repetitive data, and page compression can be effective in reducing the storage space required for these tables. Read-intensive workloads: Page compression can be effective in improving query performance for read-intensive workloads, as compressed data can be read more quickly from disk. Downsides To Compression While compression can provide many benefits, such as reducing storage space and improving query performance, there are also some downsides to consider: Increased CPU usage: Compression requires additional processing power to compress and decompress data, which can result in increased CPU usage on the server. This can potentially impact the performance of other applications running on the same server. Slower write performance: Compressed data requires more processing to write to disk, which can result in slower write performance for tables that are frequently updated. Higher resource requirements: Compression can increase the resource requirements for your SQL Server instance, including CPU, memory, and disk I/O. Increased complexity: Compression adds complexity to your database architecture, including managing compressed and uncompressed data, monitoring compression performance, and troubleshooting compression-related issues. Limited applicability: Compression may not be appropriate for all types of data, such as data that is already highly compressed or data that is frequently updated. It's important to carefully evaluate the potential downsides of compression and to test it thoroughly before implementing it in a production environment. It's also recommended to monitor the performance impact of compression regularly to ensure that it continues to provide the desired benefits without causing any unintended consequences. 
Here are some T-SQL queries that can help you manage compression in SQL Server:
Check compression status of a table:
SELECT OBJECT_NAME(p.object_id) AS TableName,
       i.name AS IndexName,
       p.index_id,
       i.type_desc AS IndexType,
       p.partition_number,
       p.data_compression_desc AS CompressionType
FROM sys.partitions p
INNER JOIN sys.indexes i ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE p.object_id = OBJECT_ID('YourTableName');
Compression is tracked per partition in sys.partitions, so this query shows the current compression setting (NONE, ROW, or PAGE) for every index and partition of the specified table.
Estimate compression savings for a table:
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'YourTableName',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
This procedure samples the table and reports the current size and the estimated size after compression for each index and partition, which lets you gauge the savings before rebuilding anything.
Enable row compression for a table:
ALTER TABLE YourTableName REBUILD WITH (DATA_COMPRESSION = ROW);
This statement enables row compression for the specified table. The REBUILD option rebuilds the heap or clustered index with the new setting; nonclustered indexes keep their own compression setting and must be rebuilt separately with ALTER INDEX if you want them compressed as well.
Enable page compression for a table:
ALTER TABLE YourTableName REBUILD WITH (DATA_COMPRESSION = PAGE);
This statement enables page compression for the specified table, with the same rebuild behavior as above.
Check compression information for all tables:
SELECT OBJECT_NAME(p.object_id) AS TableName,
       p.index_id,
       p.partition_number,
       p.data_compression_desc AS CompressionType,
       SUM(ps.used_page_count) AS UsedPages,
       SUM(ps.row_count) AS [RowCount]
FROM sys.partitions p
INNER JOIN sys.dm_db_partition_stats ps ON p.partition_id = ps.partition_id
WHERE OBJECTPROPERTY(p.object_id, 'IsUserTable') = 1
GROUP BY p.object_id, p.index_id, p.partition_number, p.data_compression_desc
ORDER BY UsedPages DESC;
This query shows the compression setting, page usage, and row count for every user table in the current database, which can help you identify large, uncompressed tables that may benefit the most from compression.
Here's how to implement compression in SQL Server using both T-SQL and SSMS:
Implementing compression using T-SQL:
a) To enable row compression on a table: ALTER TABLE TableName REBUILD WITH (DATA_COMPRESSION = ROW);
b) To enable page compression on a table: ALTER TABLE TableName REBUILD WITH (DATA_COMPRESSION = PAGE);
Implementing compression using SSMS:
a) In SSMS Object Explorer, right-click on the table you want to compress, point to "Storage", and select "Manage Compression" to launch the Data Compression Wizard.
b) Choose "Row" or "Page" as the compression type, review the estimated space savings, and either apply the change immediately or script it for later.
Note that when compression is applied, the table (or index) is rebuilt to apply the compression setting, so plan the change for a maintenance window on large tables.
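Compression also does not have to be applied to an entire table at once. If only certain indexes or partitions are large or hot, you can target them individually; the sketch below is a hedged example in which the index name, table name, and partition number are placeholders rather than objects from this article:
-- Compress a single nonclustered index
ALTER INDEX IX_YourIndex ON dbo.YourTableName
    REBUILD WITH (DATA_COMPRESSION = ROW);
-- Compress one partition of a partitioned index
ALTER INDEX IX_YourIndex ON dbo.YourTableName
    REBUILD PARTITION = 3 WITH (DATA_COMPRESSION = PAGE);
Targeting individual indexes or partitions keeps the rebuild window and CPU cost much smaller than compressing a large table in one pass.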
Partitioning: Partitioning in SQL Server is the process of dividing a large table into smaller, more manageable pieces called partitions. Each partition is stored separately and can be accessed and managed independently of the others. Partitioning can improve the performance of queries that access large tables by reducing the amount of data that needs to be searched. For example, if you have a table with billions of rows and you partition it by date, queries that only need to access data from a specific date range can be targeted to the appropriate partition. This can greatly reduce the amount of data that needs to be scanned, resulting in faster query performance. SQL Server supports two partitioning approaches: Partitioned Tables: In partitioned tables, the table is divided into individual partitions based on a partition function. A partition function defines how the data is divided into partitions based on a specific column, such as a date or a geographical region. Each partition can be mapped to its own filegroup in the database, or all partitions can share a single filegroup. A partitioned table in SQL Server is a large table that has been divided into smaller, more manageable pieces called partitions. Each partition contains a subset of the table's data and can be stored separately, allowing the table to be spread across multiple filegroups and/or physical storage devices. Partitioning is often used to improve performance and manageability of large tables, by allowing data to be quickly accessed and modified without having to scan the entire table. It also makes it easier to manage the storage and backup of large tables, as well as allowing for more efficient queries and parallel processing. SQL Server's native partition functions are range-based (RANGE LEFT or RANGE RIGHT). Hash and list partitioning, which are familiar from other database platforms, are not built-in partition types in SQL Server, but they can be approximated: hash partitioning by partitioning on a persisted computed column that hashes the key, and list partitioning by defining one range boundary per discrete value. Also note that prior to SQL Server 2016 SP1, table partitioning was an Enterprise Edition feature; from SQL Server 2016 SP1 onward it is available in all editions, including Standard and Express. Here are some details about the different partitioning patterns and when to use each: Range Partitioning: Range partitioning divides a table or index into partitions based on a range of values in a specific column, called the partitioning column. Each partition contains a range of values that fall within a specified range of the partitioning column. Range partitioning is typically used for data that has natural ranges, such as dates, and allows for efficient querying of a specific range of data. This is the pattern that SQL Server's partition functions implement directly. When to use range partitioning: When the data can be logically divided into ranges based on a partitioning column, such as dates or numeric values. When there is a need to efficiently query or maintain data within specific ranges. Hash Partitioning: Hash partitioning divides a table or index into partitions based on a hashing algorithm applied to a specific column, called the partitioning column. The hashing algorithm is used to distribute the rows across the partitions in a random or pseudo-random manner. In SQL Server this pattern is typically implemented by adding a persisted computed column that hashes the key and using that column as the partitioning column. Hash partitioning is typically used when there is no natural way to divide the data into ranges and when a more even distribution of data across the partitions is desired.
When to use hash partitioning: When the data cannot be logically divided into ranges based on a partitioning column, such as with random strings or complex data. When a more even distribution of data across the partitions is desired. List Partitioning: List partitioning divides a table or index into partitions based on a list of values in a specific column, called the partitioning column. Each partition contains a set of values that match a specified list of values in the partitioning column. In SQL Server this is emulated with a range partition function that declares one boundary value per discrete value. List partitioning is typically used when the data has discrete values that can be easily partitioned into separate groups. When to use list partitioning: When the data can be divided into separate groups based on a specific set of values. When the number of distinct values in the partitioning column is relatively small. Note that the choice of partitioning pattern depends on the characteristics of the data being partitioned and the specific use case for the data. A combination of patterns can also be used for more complex data scenarios. Partitioned Views: In partitioned views, the table is not physically partitioned; instead, a view is created on top of multiple tables that each hold a slice of the data. The view then presents the data from the member tables as if it were a single table. Partitioned views can be used to partition data that cannot be partitioned using a partition function, such as data that is not easily divisible into discrete ranges. A partitioned view in SQL Server is a view that combines the data from multiple tables, each of which holds a distinct subset of the rows. The partitioned view appears to the user as a single virtual table, but the data is actually stored in separate physical tables. Partitioned views allow you to divide large tables into smaller, more manageable pieces without having to modify the existing table structure. This can be useful in situations where the table is too large to fit into a single physical location or when there is a need to distribute the data across multiple filegroups. To create a partitioned view, you must first create the individual tables that will be used to store the data. Each table should have the same structure and should carry a CHECK constraint on the partitioning column so that SQL Server knows which rows belong to which table. Once the tables have been created, you can create a view that selects data from all of the tables with UNION ALL and combines them into a single result set (a minimal sketch follows below). Partitioned views can be useful in a variety of scenarios, including: Archiving data: You can partition a large table by date or some other criterion, and then use a partitioned view to present the archived data as a single table. Security: You can use partitioned views to restrict access to specific portions of a table, based on the partitioning scheme. Performance: You can use partitioned views to improve query performance by selecting data from only the relevant member tables, rather than scanning the entire data set. Note that partitioned views have some limitations: they do not support every query construct, and inserts or updates through the view are only allowed when strict requirements are met (for example, the partitioning column must be part of the primary key and enforced with CHECK constraints on each member table). Additionally, some partitioned view capabilities, particularly distributed partitioned views that span multiple servers, vary by SQL Server edition and version. Partitioning can also be combined with other optimization techniques, such as indexing, to further improve query performance. However, partitioning should be used judiciously and only for tables that are truly large and require such optimization.
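To make the partitioned view concept concrete, here is a minimal sketch. The table and column names are hypothetical rather than objects referenced elsewhere in this article; the CHECK constraints on the partitioning column are what allow SQL Server to skip member tables that cannot contain the requested rows:
-- Member tables: identical structure, CHECK constraints on the partitioning column
CREATE TABLE dbo.Orders2023 (
    order_id   INT           NOT NULL,
    order_date DATETIME      NOT NULL CHECK (order_date >= '2023-01-01' AND order_date < '2024-01-01'),
    amount     DECIMAL(10,2) NOT NULL,
    CONSTRAINT PK_Orders2023 PRIMARY KEY (order_id, order_date)
);
CREATE TABLE dbo.Orders2024 (
    order_id   INT           NOT NULL,
    order_date DATETIME      NOT NULL CHECK (order_date >= '2024-01-01' AND order_date < '2025-01-01'),
    amount     DECIMAL(10,2) NOT NULL,
    CONSTRAINT PK_Orders2024 PRIMARY KEY (order_id, order_date)
);
GO
-- The view presents the member tables as one logical table
CREATE VIEW dbo.OrdersAll
AS
SELECT order_id, order_date, amount FROM dbo.Orders2023
UNION ALL
SELECT order_id, order_date, amount FROM dbo.Orders2024;
GO
-- Queries filtered on order_date only touch the relevant member table(s)
SELECT SUM(amount) FROM dbo.OrdersAll WHERE order_date >= '2024-01-01';
Because order_date is part of each primary key and covered by a CHECK constraint, this sketch also satisfies the main requirements for modifying data through the view, although the full list of rules is longer.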
It is important to carefully evaluate the performance of your queries and choose the partitioning strategy that is best suited to your specific needs. Here are some of the limitations of partitioned views in SQL Server: Limited query support: Partitioned views do not support every query construct, and the member SELECT statements of an updatable partitioned view are tightly restricted (UNION ALL only, no aggregates, and so on). This can make it difficult to write complex queries that involve multiple tables. Restricted updates and inserts: You can only insert or update data through a partitioned view when strict requirements are met, such as the partitioning column being part of the primary key and enforced with CHECK constraints on every member table; otherwise, you must modify the underlying tables directly. Complexity: Partitioned views can add complexity to your database design, especially if you need to create multiple views to handle different scenarios or partitioning schemes. Maintenance: Partitioned views require more maintenance than regular views, since you need to manage the underlying tables and ensure that they are properly partitioned. Edition differences: Some partitioned view capabilities, notably distributed partitioned views that span multiple servers, vary by SQL Server edition and version, so verify support in your environment. Despite these limitations, partitioned views can still be a useful tool for managing large tables and improving query performance. However, it's important to carefully consider the tradeoffs and limitations before deciding to use partitioned views in your database design. Here are the general steps to implement partitioning in SQL Server: Determine the partitioning key: The partitioning key is the column or columns that will be used to divide the table into partitions. This could be a date column, a geographical region column, or any other column that makes sense for your specific use case. Create a partition function: A partition function is a function that maps the partitioning key to a specific partition number. You can create a partition function using the CREATE PARTITION FUNCTION statement. For example, the following statement creates a partition function that partitions a table based on a date column:
CREATE PARTITION FUNCTION myPartitionFunction (datetime) AS RANGE LEFT FOR VALUES ('2019-01-01', '2020-01-01', '2021-01-01');
This creates a partition function that divides the data into four partitions: dates on or before 2019-01-01; dates after 2019-01-01 up to and including 2020-01-01; dates after 2020-01-01 up to and including 2021-01-01; and dates after 2021-01-01 (with RANGE LEFT, each boundary value belongs to the partition on its left). Create a partition scheme: A partition scheme maps the partitions defined by the partition function to physical filegroups in the database. You can create a partition scheme using the CREATE PARTITION SCHEME statement. For example, the following statement creates a partition scheme that maps the partitions defined by the myPartitionFunction partition function to four filegroups named fg1, fg2, fg3, and fg4 (the filegroups must already exist in the database):
CREATE PARTITION SCHEME myPartitionScheme AS PARTITION myPartitionFunction TO (fg1, fg2, fg3, fg4);
Create the partitioned table: You can create a partitioned table using the CREATE TABLE statement, specifying the partition scheme and partitioning key. Keep in mind that the partitioning column must be part of every unique index on the table, including the primary key. For example, the following statement creates a partitioned table named myTable that is partitioned based on a date column:
CREATE TABLE myTable (
    id INT NOT NULL,
    myDate DATETIME NOT NULL,
    otherColumn VARCHAR(50),
    CONSTRAINT PK_myTable PRIMARY KEY CLUSTERED (id, myDate)
) ON myPartitionScheme (myDate);
This creates a partitioned table named myTable with an id column, a date column named myDate (included in the primary key because it is the partitioning column), and another column named otherColumn.
The table is partitioned based on the myDate column using the myPartitionScheme partition scheme. Insert data into the partitioned table: You can insert data into the partitioned table as you would any other table. The data will be automatically partitioned based on the partitioning key. Partitioning is a complex topic and there are many additional details and considerations to take into account when implementing it. It is recommended to carefully evaluate your specific use case and consult the SQL Server documentation for more detailed guidance.
Denormalization:
Denormalization is the process of intentionally adding redundant data to a database in order to improve performance or simplify queries. It involves breaking with the principles of normalization, which is the practice of designing a database with a logical structure that minimizes redundancy. The idea behind denormalization is to reduce the number of joins required to answer common queries by duplicating data that is frequently accessed or joined. By doing so, queries can be executed more quickly, at the expense of increased storage requirements and a more complex data model. There are several ways to denormalize a database, including: Adding redundant columns: This involves duplicating data that is frequently used in queries across multiple tables. For example, if you frequently join a customer table with an orders table to retrieve the customer name, you might add a "customer_name" column to the orders table to avoid the join. Duplicating entire tables: This involves creating a copy of an existing table that contains only the data needed for a specific set of queries. For example, if you have a large orders table, you might create a smaller, denormalized version of the table that contains only the most frequently accessed data. Creating precomputed aggregates: This involves creating summary tables that contain precomputed aggregates such as totals, averages, or counts. For example, if you frequently need to calculate the total sales for each customer, you might create a summary table that contains the total sales for each customer, rather than calculating the total dynamically each time (an indexed view, sketched at the end of this section, can automate this). Denormalization can be a powerful tool for improving performance in certain situations, but it also has its drawbacks. One of the main risks of denormalization is the potential for data inconsistency, since redundant data can become out of sync if not properly maintained. Additionally, denormalization can make the data model more complex and harder to maintain, especially as the database grows in size and complexity. There are several types of queries that can be helpful when implementing denormalization in a database. Here are a few examples: Query to identify the most heavily read tables and indexes:
SELECT OBJECT_NAME(ius.object_id) AS table_name,
       i.name AS index_name,
       ius.user_seeks + ius.user_scans + ius.user_lookups AS read_count,
       ius.user_updates AS write_count
FROM sys.dm_db_index_usage_stats ius
INNER JOIN sys.indexes i ON i.object_id = ius.object_id AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID()
  AND OBJECTPROPERTY(ius.object_id, 'IsUserTable') = 1
ORDER BY read_count DESC;
This query can help you identify the tables and indexes that are read most heavily since the instance last started, which can help you determine which data is worth denormalizing.
Query to gauge how much a joined-in column repeats (a candidate for denormalization):
SELECT COUNT(*) AS order_rows,
       COUNT(DISTINCT c.customer_name) AS distinct_customer_names
FROM dbo.orders o
INNER JOIN dbo.customers c ON c.customer_id = o.customer_id;
If the number of order rows is far larger than the number of distinct customer names, the same values are being looked up through the join over and over, and copying customer_name into the orders table may simplify and speed up common queries. Query to create a denormalized table:
CREATE TABLE dbo.denormalized_orders (
    order_id INT PRIMARY KEY,
    customer_name VARCHAR(50),
    order_date DATETIME,
    total_cost DECIMAL(10, 2)
);
INSERT INTO dbo.denormalized_orders (order_id, customer_name, order_date, total_cost)
SELECT o.order_id, c.customer_name, o.order_date, SUM(od.unit_price * od.quantity) AS total_cost
FROM dbo.orders o
JOIN dbo.order_details od ON o.order_id = od.order_id
JOIN dbo.customers c ON c.customer_id = o.customer_id
GROUP BY o.order_id, c.customer_name, o.order_date;
This query creates a denormalized table that contains redundant data from multiple tables. Note that this example uses a simplified data model for illustrative purposes. These queries can help you get started with denormalization in SQL Server, but it's important to carefully consider the implications of denormalization before implementing it in a production environment.
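As an alternative to maintaining a summary table by hand, SQL Server can keep a precomputed aggregate current automatically through an indexed view. The sketch below reuses the order tables from the example above and assumes unit_price and quantity are declared NOT NULL (indexed views do not allow SUM over a nullable expression); treat it as an illustration rather than a drop-in script:
CREATE VIEW dbo.v_CustomerSales
WITH SCHEMABINDING  -- required for an indexed view
AS
SELECT o.customer_id,
       SUM(od.unit_price * od.quantity) AS total_sales,
       COUNT_BIG(*) AS order_detail_rows  -- COUNT_BIG(*) is mandatory when GROUP BY is used
FROM dbo.orders o
INNER JOIN dbo.order_details od ON od.order_id = o.order_id
GROUP BY o.customer_id;
GO
-- Materialize the aggregate by creating a unique clustered index on the view
CREATE UNIQUE CLUSTERED INDEX IX_v_CustomerSales ON dbo.v_CustomerSales (customer_id);
Unlike a manually maintained summary table, the indexed view cannot drift out of sync with the base data, although every insert and update to the underlying tables pays a small maintenance cost; on editions without automatic indexed view matching, query it with the NOEXPAND hint to be sure the index is used.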
- A Guide to the System Databases in SQL Server
SQL Server is a powerful database system that provides an array of features and capabilities. One of these features is the system databases, which are essential for managing and maintaining the system. In this blog post, we will take a look at the different system databases in SQL Server and how they are used.
Master Database
The master database is the main repository for all data related to the entire SQL Server instance. It stores information about logins, configuration settings, and other system-level information such as linked servers, linked server logins, and server certificates. The master database should never be modified directly; instead, any changes should be done via stored procedures or built-in commands. The master database also contains metadata about all the other databases on the SQL Server instance, including their file locations and configuration settings. If the master database becomes corrupt or is lost, the entire SQL Server instance may become unusable. To prevent this, it is important to back up the master database regularly and to keep a copy of the backup in a safe location. Additionally, any changes made to the master database should be carefully planned and tested before implementation to avoid any unintended consequences. The following are some important Dynamic Management Views (DMVs) that are useful for monitoring the instance: sys.dm_tran_active_transactions - This DMV provides information about all currently active transactions in the SQL Server instance. sys.dm_db_index_usage_stats - This DMV provides information about the usage of indexes in a database, such as when an index was last used, and how many times it has been used. sys.dm_exec_query_stats - This DMV provides information about the performance of queries that have been executed on the SQL Server instance. sys.dm_os_wait_stats - This DMV provides information about the wait statistics of the SQL Server instance, such as how long tasks have been waiting for a particular resource. sys.dm_exec_connections - This DMV provides information about the active connections to the SQL Server instance, including the client network address, protocol, and connection time. sys.dm_db_file_space_usage - This DMV provides information about the space usage of the database's data files, such as how many pages are allocated and how many are unused. sys.dm_db_partition_stats - This DMV provides information about the space usage of individual partitions in a database, including the number of rows and pages. These DMVs can be used to monitor and optimize the performance of SQL Server, as well as troubleshoot issues with the system. The following operations cannot be performed on the master database:
Adding files or filegroups.
Taking any backup other than a full database backup (only full backups of master are supported).
Changing collation. The default collation is the server collation.
Changing the database owner. master is owned by sa.
Creating a full-text catalog or full-text index.
Creating triggers on system tables in the database.
Dropping the database.
Dropping the guest user from the database.
Enabling change data capture.
Participating in database mirroring.
Removing the primary filegroup, primary data file, or log file.
Renaming the database or primary filegroup.
Setting the database to OFFLINE.
Setting the database or primary filegroup to READ_ONLY.
Recommendations
When you work with the master database, consider the following recommendations: Always have a current backup of the master database available.
Back up the master database as soon as possible after the following operations:
Creating, modifying, or dropping any database
Changing server or database configuration values
Modifying or adding logon accounts
Do not create user objects in master. If you do, master must be backed up more frequently. Do not set the TRUSTWORTHY option to ON for the master database.
What to Do If master Becomes Unusable
If the master database becomes unusable in SQL Server, it can be a critical situation because the master database stores information about all other databases and server-level objects, such as logins and endpoints. Here are the steps to follow in case the master database becomes unusable:
1. Try to restore a backup: If you have a recent backup of the master database, you can restore it to a different instance and extract the necessary information. In this case, you might lose any changes that were made after the last backup.
2. Rebuild the master database: If you don't have a recent backup, you can rebuild the master database by running the setup program for SQL Server and choosing the "Rebuild the system databases" option. This will create a new master database with the default settings.
3. Reattach user databases: After rebuilding the master database, you need to reattach any user databases that were detached before the rebuild.
4. Restore server-level objects: If you have a script or backup of the server-level objects, you can restore them using that script or backup. This will help to restore the server to its original state.
5. Recreate logins: If you don't have a script or backup of the server-level objects, you will need to recreate any logins, endpoints, and other server-level objects manually.
It is important to note that rebuilding the master database should be done only as a last resort, as it can cause significant downtime and might result in data loss. Therefore, it is recommended to have regular backups of the master database and to test the backup and restore process to ensure that it works correctly.
Model Database
The model database acts as a template for all new databases created on the SQL Server instance. When a new database is created, it inherits all of its settings from the model database. This includes default filegroups and default file sizes as well as other options such as page checksums and auto-growth settings. If you need to apply certain configurations to all new databases, then it is best to modify the model database instead of manually setting them up with each new database creation.
MSDB Database
The MSDB database contains information about backups, replication activities, SQL Agent jobs, alert notifications and operators, SSIS packages, and other maintenance activities. This information can be accessed by using views or stored procedures that are provided by Microsoft.
Recommendations
When you work with the msdb database, consider the following recommendations: Always have a current backup of the msdb database available. Back up the msdb database as soon as possible after the following operations:
Creating, modifying, or deleting any jobs, alerts, proxies, or maintenance plans
Adding, changing, or deleting Database Mail profiles
Adding, modifying, or deleting Policy-Based Management policies
Do not create user objects in msdb. If you do, msdb must be backed up more frequently. Treat the msdb database as highly sensitive and do not grant access to anyone without a proper need.
In particular, keep in mind that SQL Server Agent jobs are often owned by members of the sysadmin role, so make sure that the code they execute cannot be tampered with, and audit any changes to objects in msdb. The following operations cannot be performed on the msdb database:
Changing collation. The default collation is the server collation.
Dropping the database.
Dropping the guest user from the database.
Enabling change data capture.
Participating in database mirroring.
Removing the primary filegroup, primary data file, or log file.
Renaming the database or primary filegroup.
Setting the database to OFFLINE.
Setting the primary filegroup to READ_ONLY.
Here are some common queries to manage the MSDB database in SQL Server:
To view a list of all the jobs in the MSDB database:
USE msdb; SELECT * FROM dbo.sysjobs;
To view the job history for a specific job:
USE msdb; SELECT * FROM dbo.sysjobhistory WHERE job_id = 'job_id_here';
To view the job schedules for a specific job:
USE msdb; SELECT * FROM dbo.sysjobschedules WHERE job_id = 'job_id_here';
To view the backup history for a specific database:
USE msdb; SELECT * FROM dbo.backupset WHERE database_name = 'database_name_here';
To view current SQL Server Agent job activity:
USE msdb; EXEC dbo.sp_help_jobactivity;
To view the SQL Server Agent operators:
USE msdb; SELECT * FROM dbo.sysoperators;
Some of the important tables in the MSDB database are: sysjobs - This table contains information about the SQL Server Agent jobs that have been defined on the instance, such as the job name, owner, and job category. sysjobsteps - This table contains information about the steps that are defined for a SQL Server Agent job, such as the step name, command, and output file. sysjobschedules - This table contains information about the schedules that are associated with SQL Server Agent jobs. sysjobservers - This table contains information about the servers that SQL Server Agent jobs are associated with, along with the last run date and outcome for each job. sysalerts - This table contains information about the alerts that have been defined on the instance, such as the alert name, condition, and response. sysschedules - This table contains information about the schedules that are defined on the instance, such as the start and end times, and the frequency of the schedule. sysoperators - This table contains information about the operators who are defined on the instance, such as the operator name, email address, and pager number. These tables and others in the MSDB database can be queried to retrieve information about SQL Server Agent jobs, alerts, and schedules, as well as to monitor and troubleshoot issues with the SQL Server Agent service.
Resource Database
The Resource database is a read-only database in SQL Server that contains all the system objects that are included with the installation of SQL Server. These objects include system stored procedures, system functions, views, and other system-defined database objects. The purpose of the Resource database is to provide a dedicated location for these system objects so that they can be easily accessed and maintained by the SQL Server instance. The Resource database is created during the installation of SQL Server and is located in the SQL Server installation directory. It is not meant to be modified directly by users or administrators, and attempts to modify its contents may result in system instability or errors.
The Resource database is a vital component of SQL Server and should not be deleted or modified without the guidance of Microsoft support. When a user or application references a system object in SQL Server, the SQL Server engine checks the Resource database first to see if the object exists. If the object is not found in the Resource database, the SQL Server engine then searches the user-defined databases for the object. This allows SQL Server to efficiently manage and organize its system objects and provide a consistent and reliable experience for users and applications. Other DB's In addition to the Resource, Master, Model, MSDB, and TempDB databases, there are a few other system databases that may be present in a SQL Server instance. Distribution database: This database is used in SQL Server replication to store and manage replication data. It is created when replication is configured and is used to store the snapshot files, transaction logs, and other metadata that is necessary for replication to function properly. ReportServer and ReportServerTempDB databases: These databases are used by SQL Server Reporting Services (SSRS) to store and manage report data. The ReportServer database contains metadata about reports and their execution history, while the ReportServerTempDB database is used to store temporary data that is generated during report processing. SSISDB: This database is used by SQL Server Integration Services (SSIS) to store and manage SSIS packages, project files, and other artifacts. It provides a centralized location for SSIS administrators to manage and deploy packages across the enterprise. FileStream Filegroup: This is not a database, but rather a filegroup that can be added to a database. It is used to store large binary data such as images, audio, and video files. FileStream data is stored as files on the file system rather than in the database itself, which can improve performance and reduce storage costs. Overall, these system databases and features play an important role in the functionality and performance of SQL Server and should be carefully managed and maintained by database administrators.
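A quick way to keep an eye on the system databases described above is to query sys.databases directly; this is a small sketch, and note that the hidden Resource database will not appear in the results:
SELECT name,
       database_id,
       recovery_model_desc,
       state_desc,
       is_trustworthy_on
FROM sys.databases
WHERE database_id <= 4;   -- 1 = master, 2 = tempdb, 3 = model, 4 = msdb
Checking state_desc and recovery_model_desc here is an easy sanity check before digging into any individual system database.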
- SQL Server Backup Index and Stats & Maintenance Checks
As an integral part of any business, the reliable operation and longevity of SQL Server databases are crucial. To ensure the smooth running of your SQL Server operations, appropriate preventative maintenance tasks should be conducted on a regular basis to help maintain system performance, increase stability and reduce downtime. This blog post provides details on maintenance plans and a comprehensive list of suggested maintenance tasks for DBAs and CEOs alike so that proper upkeep can be ensured for their SQL Server environment, and it will help you query your instance to ascertain whether it is properly maintained.
List Of SQL Server Maintenance Tasks
SQL Server maintenance tasks are crucial for ensuring the health and performance of your SQL Server database. Here are brief definitions of some of the common SQL Server database maintenance tasks: Re-indexing: This task involves rebuilding the indexes of a database, which can improve the database's performance by reducing the fragmentation of data and making it easier for the SQL Server engine to retrieve data from the database. DBCC CHECKDB: This command performs a consistency check on a database to identify and fix any logical and physical inconsistencies in the database. Statistics updates: This task updates the statistics for a database, which helps SQL Server optimize query execution plans by providing accurate information about the data distribution in the database. Backups: This task involves creating regular backups of your databases, which is crucial for disaster recovery and data protection. While all of these maintenance tasks are important for SQL Server, it's worth noting that shrinking databases frequently is generally not a good idea. Shrinking a database involves reclaiming unused space in the database, which can be useful if you have a database that is growing rapidly and you need to free up disk space. However, shrinking databases frequently can have several negative consequences: Performance degradation: Shrinking a database involves moving data around on disk, which can cause fragmentation and slow down query performance. Increased file fragmentation: Shrinking a database can also cause the physical files that make up the database to become fragmented, which can also slow down performance. Increased risk of data loss: Shrinking a database involves moving data around on disk, which increases the risk of data loss if something goes wrong during the process. In summary, while re-indexing, DBCC CHECKDB, statistics updates, and backups are all important maintenance tasks for SQL Server, it's generally not a good idea to shrink databases frequently. Instead, focus on optimizing your database's performance and monitoring its growth to ensure that you have enough disk space available.
Re-indexing - Check If Indexes Are Maintained
Indexes are database objects that help optimize database queries by allowing the database engine to quickly locate data rows in a table based on the values of one or more columns. An index consists of a data structure that organizes the values of the indexed columns into a tree-like structure, making it faster to search for specific values. When a query is executed against a table, the database engine can use an index to quickly find the subset of rows that match the query's conditions, rather than having to scan the entire table. This can significantly improve query performance, especially for large tables or tables with complex queries.
However, as data is inserted, updated, and deleted from a table, the index can become fragmented or out of date, which can lead to reduced query performance. Therefore, it is important to maintain indexes regularly to ensure that they remain optimized for the queries that are executed against them. There are several tasks involved in maintaining indexes, including: Rebuilding indexes: This involves dropping and recreating an index to remove fragmentation and update statistics. This is typically done when an index is heavily fragmented or has a large number of deleted rows. Reorganizing indexes: This involves physically reordering the data pages in an index to remove fragmentation and improve query performance. This is typically done when an index has moderate fragmentation. Updating index statistics: This involves updating the statistics that the database engine uses to determine the most efficient query execution plan for a given index. This is typically done when a large amount of data has been added, modified, or deleted from a table. By regularly performing these tasks, you can help ensure that your indexes remain optimized for your database queries, which can improve overall database performance and user experience. You can use the following T-SQL query to determine the percentage of index fragmentation per table in a database:
SELECT DB_NAME() AS DatabaseName,
       t.name AS TableName,
       i.name AS IndexName,
       index_type_desc AS IndexType,
       avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS ps
INNER JOIN sys.tables t ON ps.object_id = t.object_id
INNER JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE ps.index_id > 0
ORDER BY DatabaseName, TableName, IndexName;
This query uses the sys.dm_db_index_physical_stats dynamic management function to retrieve information about the physical fragmentation of indexes in the current database. The query joins the results with the sys.tables and sys.indexes catalog views to retrieve the table and index names, as well as the index type. The avg_fragmentation_in_percent column in the sys.dm_db_index_physical_stats function provides the percentage of fragmentation for each index. The query orders the results by database name, table name, and index name. Note that this query will return results for all indexes in the current database, including system tables and indexes. If you only want to retrieve results for user-defined tables, you can add a filter to the sys.tables join, like this:
INNER JOIN sys.tables t ON ps.object_id = t.object_id AND t.is_ms_shipped = 0
This will exclude system tables from the results.
Re-indexing - Rebuild Indexes In A Database
Here is a T-SQL script to rebuild or reorganize all indexes in the current database:
DECLARE @SQL NVARCHAR(MAX) = N'';
SELECT @SQL = @SQL +
    CASE
        WHEN ps.avg_fragmentation_in_percent > 30
            THEN 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ' REBUILD WITH (FILLFACTOR = 80, ONLINE = ON);'
        ELSE 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ' REORGANIZE WITH (LOB_COMPACTION = ON);'
    END + CHAR(13) + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) ps
INNER JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
INNER JOIN sys.objects o ON ps.object_id = o.object_id
WHERE ps.index_id > 0
  AND ps.avg_fragmentation_in_percent > 5; -- ignore indexes that are barely fragmented
EXEC (@SQL);
This script uses the sys.dm_db_index_physical_stats dynamic management function to retrieve the fragmentation level of each index in the current database. It then generates a dynamic SQL statement to either rebuild or reorganize each index based on its fragmentation level. Indexes with a fragmentation level greater than 30% are rebuilt using the ALTER INDEX ... REBUILD statement, indexes with a fragmentation level between 5% and 30% are reorganized using the ALTER INDEX ... REORGANIZE statement, and indexes below the 5% threshold are left alone. Note that the ONLINE = ON option requires an edition that supports online index rebuilds (such as Enterprise Edition or Azure SQL Database); remove it if you are running Standard Edition. For more information about re-indexing and alternative solutions check out this blog https://www.bps-corp.com/post/implementing-ola-hallengren-indexing
DBCC Check DB
DBCC CHECKDB is a command in SQL Server that checks the logical and physical consistency of a database, and can detect and repair a wide range of database corruption issues. It is recommended to run DBCC CHECKDB regularly to verify database integrity and ensure that the database is healthy and free of any corruption that may cause data loss or performance issues. Running DBCC CHECKDB on a regular basis can help identify and fix a wide range of issues, such as: Allocation and structural errors: These include page-level errors, index-related errors, and other structural issues that affect the integrity of the database. File system errors: These include file system-level errors that may impact the ability of the database to read and write data. Database consistency errors: These include issues related to the consistency of the database, such as mismatched metadata, incorrect internal pointers, and so on. To determine when the last clean DBCC CHECKDB was completed on a specific database, you can use the following command (DBCC DBINFO is undocumented but widely used):
DBCC DBINFO ('YourDatabaseName') WITH TABLERESULTS;
Look for the dbi_dbccLastKnownGood field in the output, which shows the date and time of the last DBCC CHECKDB that completed without finding corruption. Note that this value travels with the database, so on a database restored from a backup it reflects the last check performed on the source database; in such cases, you should consider running DBCC CHECKDB on the restored database to ensure its health. The script below uses a cursor to iterate through all user databases in the SQL Server instance (excluding the system databases). For each database, it sets the database context using the USE statement and then executes the DBCC CHECKDB command with the WITH ALL_ERRORMSGS, NO_INFOMSGS options. The WITH ALL_ERRORMSGS option instructs SQL Server to display all error messages generated during the check, while the NO_INFOMSGS option suppresses informational messages, which can help reduce the amount of output generated by the command. The PRINT statement is included to display a message in the query output for each database being checked, so you can easily track the progress of the script. Once the script has completed, you can review the output to check for any errors or inconsistencies in the databases.
Here's an example T-SQL code to run DBCC CHECKDB on all user databases in an instance of SQL Server:
DECLARE @db_name nvarchar(128)
DECLARE @sql nvarchar(max)
DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases WHERE database_id > 4 -- exclude system databases
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @db_name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'USE [' + @db_name + ']; DBCC CHECKDB WITH ALL_ERRORMSGS, NO_INFOMSGS;'
    PRINT 'Running DBCC CHECKDB on database [' + @db_name + ']...'
    EXEC sp_executesql @sql
    FETCH NEXT FROM db_cursor INTO @db_name
END
CLOSE db_cursor
DEALLOCATE db_cursor
This code uses a cursor to loop through all user databases (excluding system databases) and runs DBCC CHECKDB with the WITH ALL_ERRORMSGS and NO_INFOMSGS options on each database. The PRINT statement is optional and can be removed if you don't want to see the messages printed in the output window. Note: Running DBCC CHECKDB on large databases or during peak hours can affect server performance, so it's recommended to schedule this operation during off-peak hours.
Statistics
Statistics in SQL Server are used by the query optimizer to estimate the cardinality (number of rows) of a table or an index. The query optimizer uses these estimates to create a query plan that is most efficient in terms of execution time. Statistics are created automatically by SQL Server when an index is created, or when the query optimizer determines that the existing statistics are outdated or not accurate. However, there are times when it may be necessary to manually create or update statistics to ensure optimal query performance. Here are some ways to maintain statistics in SQL Server: Automatic Updates: SQL Server can automatically update statistics when a threshold of changes to the data has been reached. This threshold is determined by the "Auto Update Statistics" database option, which is enabled by default. However, this may not always be enough to ensure optimal query performance. Manual Updates: Manually updating statistics can be done using the UPDATE STATISTICS command. This command can be used to update statistics on a specific table or index, or on all tables in a database. For example, to update statistics on a specific table: UPDATE STATISTICS table_name; To update statistics on a specific index: UPDATE STATISTICS table_name index_name; To update statistics on all tables in a database: EXEC sp_updatestats; Full Scan: By default, SQL Server updates statistics using a sample of the data. However, if the sample size is not large enough, it can result in inaccurate estimates. In such cases, a full scan can be done to update the statistics using all the data. This can be done using the WITH FULLSCAN option of the UPDATE STATISTICS command. For example, to update statistics on a specific table using a full scan: UPDATE STATISTICS table_name WITH FULLSCAN; Filtered Statistics: In some cases, queries may only access a subset of the data in a table. In such cases, creating filtered statistics on the subset of the data can improve query performance. This can be done using the CREATE STATISTICS command. For example, to create filtered statistics on a specific column in a table: CREATE STATISTICS stats_name ON table_name (column_name) WHERE filter_expression; Column Statistics: In addition to the statistics that accompany indexes, column-level statistics can also be created for specific columns in a table. This can be done using the CREATE STATISTICS command.
For example, to create column statistics on a specific column in a table: CREATE STATISTICS stats_name ON table_name (column_name);

It is important to maintain statistics in SQL Server to ensure optimal query performance. Outdated or inaccurate statistics can result in poor query performance, and can cause the query optimizer to choose inefficient query plans. You can use the following T-SQL query to find how old the statistics are for user tables in the current database:

SELECT DB_NAME() AS DatabaseName,
       OBJECT_SCHEMA_NAME(s.object_id) AS SchemaName,
       OBJECT_NAME(s.object_id) AS TableName,
       s.name AS StatisticName,
       STATS_DATE(s.object_id, s.stats_id) AS LastUpdated,
       DATEDIFF(DAY, STATS_DATE(s.object_id, s.stats_id), GETDATE()) AS DaysSinceLastUpdate
FROM sys.stats s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY DatabaseName, TableName, StatisticName;

This query uses the sys.stats system catalog view to retrieve information about statistics for user tables in the current database (run it in each database you want to check). The STATS_DATE function is used to retrieve the date and time when the statistics were last updated for each statistic, and the DATEDIFF function is used to calculate the number of days since the statistics were last updated. The OBJECTPROPERTY function is used to filter out system tables from the results, since statistics are not typically updated for system tables. The query orders the results by database name, table name, and statistic name. Note that the results returned by this query may not be entirely accurate, since statistics may not be updated regularly, or may be updated automatically by SQL Server when certain conditions are met. However, this query can give you a general idea of how old your statistics are, which can help you identify tables that may need more frequent updates to maintain optimal query performance.

Backups:

Taking backups is crucial for any serious SQL Server database maintenance plan, for the following reasons:

Disaster Recovery: Backups are the primary means of recovering a database after a disaster such as hardware failure, natural disasters, human errors, or cyberattacks. In such cases, the database can be restored from the most recent backup and transaction logs can be applied to bring the database up to the point of failure.

Data Loss Prevention: Backups are a means of preventing data loss due to accidental deletion, corruption, or any other unexpected issues. Without regular backups, there is a high risk of losing important data permanently.

Compliance: Many organizations are required to maintain backups for regulatory compliance reasons. For example, financial institutions may need to keep transactional data for a specific period of time, and backups are the only way to meet those requirements.

Database Migration: Backups can be used to move a database from one server to another, or to upgrade to a new version of SQL Server. Without backups, the process of migrating a database can be difficult and risky.

Testing and Development: Backups can be used for testing and development purposes, as they allow for a database to be restored to a specific point in time. This can be useful for testing new code, patches, or configurations before applying them to a production environment.

In summary, taking backups is crucial for ensuring data availability, disaster recovery, compliance, and minimizing risks associated with your database maintenance operations.
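Before looking at instance-wide scripts, it helps to see what the two basic backup types look like for a single database. This is a minimal sketch; the database name YourDatabaseName and the D:\SQLBackups\ path are placeholders, and the log backup only applies to databases in the FULL or BULK_LOGGED recovery model.

-- Full backup of a single database
BACKUP DATABASE YourDatabaseName
TO DISK = N'D:\SQLBackups\YourDatabaseName_Full.bak'
WITH CHECKSUM, STATS = 10;

-- Transaction log backup (FULL or BULK_LOGGED recovery model only)
BACKUP LOG YourDatabaseName
TO DISK = N'D:\SQLBackups\YourDatabaseName_Log.trn'
WITH CHECKSUM, STATS = 10;

A full backup plus an unbroken chain of log backups is what makes point-in-time recovery possible.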
It is recommended to develop a backup and restore strategy that meets the organization's needs and to regularly test backups to ensure they can be successfully restored. You can use the following T-SQL script to find when the last backup was taken for all databases on a SQL Server instance:

SELECT database_name,
       backup_type = CASE
                         WHEN type = 'D' THEN 'Full'
                         WHEN type = 'I' THEN 'Differential'
                         WHEN type = 'L' THEN 'Transaction Log'
                         ELSE 'Unknown'
                     END,
       backup_finish_date = MAX(backup_finish_date)
FROM msdb.dbo.backupset
GROUP BY database_name, type;

This script queries the msdb.dbo.backupset table in the msdb system database, which contains information about all backups taken on the SQL Server instance. It groups the results by database name and backup type, and then selects the maximum backup_finish_date for each group. The CASE statement is used to translate the backup type code (D for full, I for differential, L for transaction log) into a more readable format. The output of this script will show the name of each database on the instance, along with the type of the last backup taken (Full, Differential, or Transaction Log) and the date and time that the backup finished. If a database has never been backed up, it will not appear in the output. Note that this script assumes that backups are being taken regularly on the instance and that the msdb database is being maintained properly. If backups are not being taken, or if the msdb database has been corrupted or is not being maintained properly, the results of this script may be inaccurate or incomplete.

Here's an example T-SQL code to backup all user databases on an instance of SQL Server to the D drive:

DECLARE @db_name nvarchar(128)
DECLARE @backup_path nvarchar(max)

SET @backup_path = 'D:\SQLBackups\' -- change to your desired backup path

DECLARE db_cursor CURSOR FOR
SELECT name FROM sys.databases WHERE database_id > 4 -- exclude system databases

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @db_name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @backup_path = @backup_path + @db_name + '_' + CONVERT(varchar(8), GETDATE(), 112) + '.bak'
    BACKUP DATABASE @db_name TO DISK = @backup_path
    PRINT 'Backup of database [' + @db_name + '] complete.'
    SET @backup_path = 'D:\SQLBackups\' -- reset backup path for next database
    FETCH NEXT FROM db_cursor INTO @db_name
END

CLOSE db_cursor
DEALLOCATE db_cursor

This code uses a cursor to loop through all user databases (excluding system databases) and backs up each database to a file in the specified backup path on the D drive. The backup filename includes the database name and the current date in YYYYMMDD format. Note: Make sure that the backup path specified exists and that the SQL Server service account has write permissions to it. Also, be aware that backing up large databases can take a significant amount of time and may affect server performance. It's recommended to schedule backups during off-peak hours.
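The post above recommends regularly testing backups; a lightweight first check, short of a full test restore, is to verify the backup file itself. This is a minimal sketch with placeholder file names that follow the YYYYMMDD naming used in the script above; RESTORE VERIFYONLY confirms the backup set is readable and complete, but only an actual RESTORE proves the database can be brought back.

-- Verify that the backup set is complete and readable
RESTORE VERIFYONLY
FROM DISK = N'D:\SQLBackups\YourDatabaseName_20230101.bak'
WITH CHECKSUM;

-- List the backups contained in a backup file (useful when several backups share one file)
RESTORE HEADERONLY
FROM DISK = N'D:\SQLBackups\YourDatabaseName_20230101.bak';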
- Configure A SQL Instance
SQL Server Mistakes to Avoid Blog

Introduction: As a database administrator (DBA), there are certain mistakes that you should be aware of and avoid when it comes to managing your SQL Server. Over the past 20 years of consulting on SQL Server, I’ve seen a few common errors that can lead to significant issues down the line if not addressed. In this blog post I will cover some of these key mistakes and how you can make sure your system is running as efficiently as possible.

No Backups or Full Backups with no Logs

One of the most important steps for a DBA is to ensure that backups are taken regularly and correctly. If backups are not taken at all, or if full backups are taken without logs, this means that there is no way to restore data in the event of an emergency. It is essential to have regular backup schedules in place and use both full and log backups to make sure your data is safe and secure. Here's a T-SQL script to show all databases and their recovery model:

SELECT [name] AS [Database Name],
       CASE
           WHEN [recovery_model] = 1 THEN 'Full'
           WHEN [recovery_model] = 2 THEN 'Bulk-Logged'
           WHEN [recovery_model] = 3 THEN 'Simple'
           ELSE 'Unknown'
       END AS [Recovery Model]
FROM [master].[sys].[databases]
WHERE [database_id] > 4 -- exclude system databases
ORDER BY [name] ASC;

This script retrieves the name of all databases except the system databases and displays their recovery model as either Full, Bulk-Logged, Simple, or Unknown (if an invalid recovery model is set). You can run this script in SQL Server Management Studio or any other SQL client. In SQL Server, transaction log backups are required in the Full and Bulk-Logged recovery models. These recovery models enable point-in-time recovery, which means that you can recover a database to a specific point in time using transaction log backups.

In the Full recovery model, all transaction log records are retained until they are backed up or truncated. This means that you can recover a database to any point in time since the last full or differential backup by applying the relevant transaction log backups. To ensure that you have all necessary transaction log backups, you should perform them at regular intervals, based on your recovery point objective (RPO) and recovery time objective (RTO).

In the Bulk-Logged recovery model, transaction log backups are still required, but the behavior is slightly different. In this model, large-scale bulk operations are minimally logged, which can improve performance but also increases the risk of data loss in the event of a failure. As a result, it's important to back up the transaction log more frequently in the Bulk-Logged model to minimize the amount of data that could be lost.

In the Simple recovery model, transaction log backups are not required because the transaction log is truncated after each checkpoint. This means that you can only recover a database to the time of the most recent full or differential backup, which may result in some data loss. The Simple recovery model is suitable for databases that can be easily recreated or restored from a backup.

Mikes View: Only use FULL mode when necessary, and always have transaction log backups when running in FULL mode.

No Security or Too Many Individual Users in System vs Groups

Another key mistake that DBAs often make is not setting up proper security measures or having too many individual users in the system rather than groups. Having too many individual user accounts can slow down performance and increase the risk of unauthorized access.
To combat this issue, it's important to set up appropriate security protocols such as password expiration, account lockout policies, and two-factor authentication. Additionally, consolidating individual user accounts into groups can help improve efficiency and reduce administrative overhead. When it comes to security in SQL Server, there are two main approaches: using groups for security or using individual accounts. Both approaches have their advantages and disadvantages, and the choice depends on the specific needs and requirements of your organization.

Using groups for security can simplify management and improve security by reducing the number of permissions that need to be managed. By creating groups that correspond to different roles within your organization (such as database administrators, developers, or report writers), you can assign permissions to the group rather than individual accounts. This can save time and reduce the risk of errors or inconsistencies in permissions management.

Using individual accounts for security can provide more granular control over permissions and can be useful in situations where different users need different levels of access. By assigning permissions to individual accounts, you can ensure that each user only has access to the specific resources they need. However, managing a large number of individual accounts can be time-consuming and increase the risk of errors or inconsistencies.

In general, using groups for security is recommended for most organizations because it simplifies management and improves security. However, there may be situations where individual accounts are necessary to provide the required level of access control. Ultimately, the choice between using groups or individual accounts for security depends on the specific needs and requirements of your organization.

Mikes View: Working with the systems admins can be a pain, but it lets you outsource adding people to groups to the systems team.

Maximum Degree of Parallelism

Maximum Degree of Parallelism (MAXDOP) in SQL Server is a setting that determines the maximum number of processors that can be used to execute a single query. In other words, it limits the number of processors that can be used to parallelize a single query execution. By default, the MAXDOP setting is set to 0, which means that SQL Server can use all available processors to parallelize a query. However, in some cases, using all available processors may not be optimal and can lead to performance issues.

The MAXDOP setting can be configured at the server level or the query level. At the server level, you can set the maximum degree of parallelism for all queries by configuring the "Max Degree of Parallelism" option in the Server Properties dialog box. At the query level, you can override the server-level setting by using the "OPTION (MAXDOP n)" syntax, where "n" is the maximum degree of parallelism you want to use for that query.

Setting the MAXDOP setting to an appropriate value can help optimize query performance by balancing parallelism with resource utilization. However, it's important to note that setting the MAXDOP setting too low can result in slower query performance, while setting it too high can result in resource contention and performance issues. Therefore, it's important to carefully evaluate and test different values for MAXDOP to find the optimal setting for your specific workload and hardware configuration.
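To make the two configuration levels concrete, here is a minimal sketch of checking and changing MAXDOP with sp_configure and overriding it for a single query. The value 4 and the table name dbo.SalesOrders are purely illustrative placeholders, not recommendations.

-- View the current server-level setting ('max degree of parallelism' is an advanced option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';

-- Change the server-level setting (example value only)
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Override the server-level setting for a single query
SELECT COUNT(*)
FROM dbo.SalesOrders
OPTION (MAXDOP 1);

The same sp_configure pattern applies to the related 'cost threshold for parallelism' setting discussed next.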
The optimal setting for Maximum Degree of Parallelism (MAXDOP) in SQL Server depends on several factors, including the hardware configuration, the workload characteristics, and the overall system performance goals. However, here are some general recommendations for setting the MAXDOP value:

For OLTP workloads: It's generally recommended to set MAXDOP to 1 for OLTP (Online Transaction Processing) workloads, which typically involve many small transactions with short execution times. This is because parallelizing queries may not provide significant performance benefits and can instead lead to resource contention problems.

For data warehouse workloads: For data warehouse workloads, which typically involve complex queries and large data sets, setting MAXDOP to a value between 4 and 8 is often recommended. This can help balance parallelism with resource utilization and provide good performance benefits without causing resource contention issues.

Consider the hardware configuration: The optimal value for MAXDOP also depends on the number of processors and cores available on the server. As a general rule of thumb, it's recommended to set MAXDOP to no more than the number of physical cores on the server.

Test and evaluate: It's important to carefully evaluate and test different MAXDOP settings to find the optimal value for your specific workload and hardware configuration. This can involve running workload simulations with different MAXDOP settings and monitoring performance metrics such as CPU usage, query execution time, and resource utilization.

Cost Threshold for Parallelism Set Incorrectly

The Cost Threshold for Parallelism is a configuration option in SQL Server that specifies the threshold at which SQL Server decides to use parallel execution plans for queries. When a query is submitted to SQL Server, the Query Optimizer generates one or more execution plans to determine the most efficient way to execute the query. Parallelism is a feature that allows SQL Server to divide a query into multiple smaller tasks and execute them simultaneously on multiple processors, which can provide significant performance benefits for certain types of queries.

The Cost Threshold for Parallelism specifies the minimum estimated cost required for a query to be considered for parallel execution. The cost of a query represents an estimated measure of the resources (such as CPU, I/O, and memory) required to execute the query. By default, the Cost Threshold for Parallelism is set to 5, which means that any query with an estimated cost of 5 or higher will be considered for parallel execution. However, this default value may not be optimal for all workloads, and it's important to evaluate and adjust the setting based on the specific workload characteristics.

Setting the Cost Threshold for Parallelism too low can cause SQL Server to generate parallel execution plans for queries that would perform better using a serial plan, which can lead to performance degradation and resource contention issues. On the other hand, setting the Cost Threshold for Parallelism too high can prevent SQL Server from using parallelism for queries that could benefit from it, which can also result in suboptimal performance. In general, it's recommended to set the Cost Threshold for Parallelism to 50 and see how things work out.

Things You Should Configure

Server Memory Configuration: This setting determines how much memory SQL Server can use.
You should ensure that the server has enough memory to handle the workload and that the memory settings are configured correctly. For example, you may need to adjust the min server memory and max server memory settings to ensure optimal performance. TempDB Configuration: TempDB is a system database that SQL Server uses to store temporary data, such as temporary tables and indexes. You should ensure that TempDB is configured correctly based on the workload characteristics. For example, you may need to adjust the number of TempDB files and the size of each file to optimize performance. TempDB is a system database in SQL Server that is used to store temporary user objects, internal objects, and version stores. It is important to configure TempDB properly to ensure optimal performance of SQL Server. Here are the steps to configure TempDB in SQL Server: Determine the number of processor cores: The number of processor cores on the SQL Server machine should be determined. This can be done by running the following command in SQL Server Management Studio: SELECT cpu_count FROM sys.dm_os_sys_info Determine the initial size and growth settings: The initial size and growth settings for TempDB should be determined based on the size of the workload and the number of processor cores. A general rule of thumb is to set the initial size to 8 MB per processor core and the growth increment to 64 MB. Configure multiple data files: Multiple data files should be created for TempDB to improve performance. The number of data files should be equal to the number of processor cores. The files should be located on separate disks to improve disk I/O performance. Set the autogrowth settings: The autogrowth settings for TempDB should be configured to ensure that the files do not run out of space. A recommended setting is to set the autogrowth increment to 64 MB and the maximum size to the size of the disk. Set the recovery model: The recovery model for TempDB should be set to Simple to reduce the overhead of transaction logging. Restart SQL Server: After making the configuration changes, SQL Server should be restarted for the changes to take effect. Here is an example T-SQL script to configure TempDB: USE master; GO ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8MB, FILEGROWTH = 64MB); GO ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'D:\SQLData\tempdb2.ndf', SIZE = 8MB, FILEGROWTH = 64MB), FILE (NAME = tempdev3, FILENAME = 'E:\SQLData\tempdb3.ndf', SIZE = 8MB, FILEGROWTH = 64MB), FILE (NAME = tempdev4, FILENAME = 'F:\SQLData\tempdb4.ndf', SIZE = 8MB, FILEGROWTH = 64MB); GO ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 8MB, FILEGROWTH = 64MB); GO ALTER DATABASE tempdb SET RECOVERY SIMPLE; GO Note: The above example assumes that the initial size of each data file is 8MB and that the growth increment is 64MB. The file locations and autogrowth settings should be adjusted based on the hardware configuration and workload characteristics of the SQL Server machine. Database Compatibility Level: The database compatibility level determines the version of SQL Server that the database is compatible with. You should ensure that the compatibility level is set correctly based on the version of SQL Server that you are using. You can read more here: https://www.bps-corp.com/post/sql-server-compatibility-levels Security Configuration: You should ensure that the security settings are configured correctly to protect your data and prevent unauthorized access. 
This includes configuring server-level security settings, database-level security settings, and auditing. Here are some common queries you can use to check the security of your SQL Server:

List all logins:
SELECT name, type_desc, create_date, modify_date
FROM sys.server_principals
WHERE type IN ('U', 'G', 'S')
ORDER BY type, name;

List all users in a database:
SELECT name, type_desc, create_date, modify_date
FROM sys.database_principals
WHERE type IN ('U', 'G', 'S')
ORDER BY type, name;

List all server roles:
SELECT name, create_date, modify_date
FROM sys.server_principals
WHERE type = 'R';

List all database roles:
SELECT name, create_date, modify_date
FROM sys.database_principals
WHERE type = 'R';

List all users and their server roles:
SELECT p.name AS UserName, r.name AS RoleName
FROM sys.server_role_members m
JOIN sys.server_principals r ON m.role_principal_id = r.principal_id
JOIN sys.server_principals p ON m.member_principal_id = p.principal_id
WHERE r.type = 'R'
ORDER BY UserName;

List all users and their database roles:
SELECT p.name AS UserName, r.name AS RoleName
FROM sys.database_role_members m
JOIN sys.database_principals r ON m.role_principal_id = r.principal_id
JOIN sys.database_principals p ON m.member_principal_id = p.principal_id
WHERE r.type = 'R'
ORDER BY UserName;

List all users and their permissions on a specific object:
SELECT USER_NAME(p.grantee_principal_id) AS UserName, p.permission_name, p.state_desc
FROM sys.database_permissions p
WHERE p.major_id = OBJECT_ID('schema_name.table_name')
ORDER BY UserName;
Note: Replace schema_name and table_name with the schema name and table name of the object you want to check.

List all logins and the start time of their most recent session (SQL Server does not keep a permanent last-login history by default, so this only reflects sessions opened since the last SQL Server restart):
SELECT p.name, p.create_date, p.modify_date, MAX(s.login_time) AS last_session_start
FROM sys.server_principals p
LEFT JOIN sys.dm_exec_sessions s ON s.login_name = p.name
WHERE p.type IN ('U', 'G', 'S')
  AND p.name NOT LIKE '##%'
  AND p.name NOT LIKE 'NT AUTHORITY\%'
GROUP BY p.name, p.create_date, p.modify_date
ORDER BY p.name;

These queries can help you identify security-related issues in your SQL Server environment and take appropriate action to address them. You can use them in combination with other security tools and best practices to ensure the security and integrity of your data.

Auto Growth Settings: This setting determines how the database files grow when they run out of free space. You should ensure that the auto growth settings are configured correctly to prevent the database from running out of space. This includes setting the growth increment and the maximum size of the files.

Filegroups: Filegroups are used to group database objects and data files. You should ensure that the filegroups are configured correctly based on the size and type of data that is stored in the database. For example, you may want to create separate filegroups for frequently accessed data to improve performance.

Index Settings: Indexes are used to improve query performance by allowing SQL Server to locate data quickly. You should ensure that the index settings are configured correctly based on the database schema and query patterns. This includes creating appropriate indexes, setting fill factor, and updating statistics.

Locking and Concurrency Settings: Locking and concurrency settings determine how SQL Server manages database transactions and concurrency. You should ensure that the locking and concurrency settings are configured correctly to prevent blocking and deadlocks. This includes setting the isolation level, configuring lock escalation, and optimizing query performance (a quick way to check the isolation-related settings is sketched below).
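As mentioned above, a quick way to see how each database is configured for snapshot isolation and read committed snapshot, and what isolation level your current session is using, is a couple of simple read-only checks. This is a minimal sketch that only inspects settings; it does not change anything.

-- Snapshot isolation settings per user database
SELECT name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on
FROM sys.databases
WHERE database_id > 4; -- exclude system databases

-- Isolation level and other SET options for the current session
DBCC USEROPTIONS;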
Query Store Settings: The Query Store is a feature in SQL Server that allows you to track query performance over time. You should ensure that the Query Store settings are configured correctly to capture query performance data and improve query performance. This includes enabling the Query Store, configuring data retention, and using the Query Store reports to analyze query performance.

Database Owner: The database owner is the security principal that owns the database. You should ensure that the database owner is set correctly and that appropriate permissions are assigned to the database owner.

Mikes Take: Set Database Owner to SA - unless there is a reason not to.

SQL Server Agent Settings: SQL Server Agent is a service that is used to automate tasks such as backups, maintenance, and notifications. You should ensure that the SQL Server Agent settings are configured correctly to ensure that jobs are executed successfully. Here are some SQL Agent settings to check when installing a new instance:

Service Account: Make sure that the SQL Agent service is running under a domain account with appropriate permissions. It is recommended to use a separate domain account for each SQL Server instance.
Startup Type: Configure the SQL Agent service to start automatically when the server starts.
Error Log: Check the SQL Agent error log to make sure that there are no errors related to the service.
Database Mail: Configure Database Mail to enable alerts and notifications from SQL Agent jobs.
Proxy Accounts: Create proxy accounts for SQL Agent jobs that require elevated privileges. Use the principle of least privilege when assigning permissions to proxy accounts.
Job Schedules: Configure job schedules to run at appropriate times and intervals. Avoid running jobs during peak hours or when system resources are limited.
Alerts: Configure alerts to notify you when specific events occur, such as job failures or high CPU usage.
Operators: Configure operators to receive notifications from SQL Agent jobs. You can create multiple operators and assign them to different groups or job categories.
Backup Jobs: Create backup jobs to ensure that your databases are backed up regularly. Schedule the backup jobs to run at appropriate intervals and verify that the backups are completed successfully.

Database Mail Settings: Database Mail is a feature in SQL Server that allows you to send email notifications from the database engine. You should ensure that the Database Mail settings are configured correctly to enable email notifications for important events such as failed backups or database corruption.
Here are the steps to set up Database Mail in SQL Server using T-SQL scripts:

Enable Database Mail: Run the following command to enable Database Mail:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Database Mail XPs', 1;
GO
RECONFIGURE;
GO

Create a Mail Profile: Run the following script to create a mail profile:
EXECUTE msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'YourMailProfileName',
    @description = 'Description of your mail profile';
GO
Note: To make the profile private (available only to specific users), grant access separately with msdb.dbo.sysmail_add_principalprofile_sp.

Add Mail Accounts: Run the following script to add a mail account:
EXECUTE msdb.dbo.sysmail_add_account_sp
    @account_name = 'YourMailAccountName',
    @email_address = 'YourEmailAddress',
    @display_name = 'YourDisplayName',
    @mailserver_name = 'YourMailServerName',
    @port = YourMailServerPortNumber,
    @username = 'YourMailServerUsername',
    @password = 'YourMailServerPassword',
    @enable_ssl = 0;
GO
Note: Set the @enable_ssl parameter to 1 if your mail server requires SSL encryption.

Add Mail Profile to Account: Run the following script to associate the mail account with the mail profile:
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'YourMailProfileName',
    @account_name = 'YourMailAccountName',
    @sequence_number = 1;
GO

Add Operators: Run the following script to add operators:
EXECUTE msdb.dbo.sp_add_operator
    @name = 'YourOperatorName',
    @enabled = 1,
    @email_address = 'YourEmailAddress';
GO

Add Alerts: Run the following script to add alerts (an alert must be defined on either a specific message ID or a severity level; this example fires on severity 16 errors):
EXECUTE msdb.dbo.sp_add_alert
    @name = 'YourAlertName',
    @message_id = 0,
    @severity = 16,
    @enabled = 1,
    @delay_between_responses = 0,
    @include_event_description_in = 1,
    @category_name = 'YourCategoryName';
GO

Add Alert Notifications: Run the following script to notify the operator when the alert fires:
EXECUTE msdb.dbo.sp_add_notification
    @alert_name = 'YourAlertName',
    @operator_name = 'YourOperatorName',
    @notification_method = 1; -- 1 = email
GO

Once you have executed these scripts, Database Mail will be set up in SQL Server, and you can use it to send email notifications from SQL Server.
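Once the profile and account are in place, it's worth sending a test message and checking the Database Mail log. This is a minimal sketch that reuses the placeholder profile name from the steps above; the recipient address is also a placeholder.

-- Send a test email through the new profile
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'YourMailProfileName',
    @recipients = 'dba@yourcompany.com',
    @subject = 'Database Mail test',
    @body = 'If you received this, Database Mail is working.';

-- Check queued/sent items and any errors
SELECT TOP (10) mailitem_id, recipients, subject, sent_status, sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC;

SELECT TOP (10) log_date, event_type, description
FROM msdb.dbo.sysmail_event_log
ORDER BY log_date DESC;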
- Master Data Services (MDS)
Master Data Services (MDS) is a feature of SQL Server that provides a platform for managing master data, which is the core data used to support business operations and decision-making. Master data includes information such as customers, products, suppliers, employees, and financial data, and is typically shared across multiple systems and business units. MDS enables organizations to create a central repository for managing master data, and provides tools for defining data models, managing data quality, and enforcing data governance policies. Some of the key features of MDS include: Data modeling: Allows users to define the structure of master data, including entities, attributes, and relationships between data elements. Data management: Provides tools for managing master data, including importing, exporting, and updating data, as well as tracking changes and maintaining data history. Data quality: Includes data validation rules, data cleansing, and data matching capabilities to ensure that master data is accurate, consistent, and up-to-date. Security and governance: Provides role-based security and access controls, as well as workflow and approval processes to ensure that master data is managed in accordance with organizational policies and standards. MDS is designed to be flexible and customizable, and can be integrated with other Microsoft technologies such as SQL Server Integration Services (SSIS) and SQL Server Reporting Services (SSRS). It can also be extended using custom code and third-party tools to meet specific business requirements. Overall, MDS is a powerful tool for managing master data and improving data governance and quality. It can help organizations to streamline their data management processes, reduce errors and inconsistencies, and ensure that data is accurate and reliable across the organization.
- Data Quality Component
The Data Quality Client is a component of SQL Server Data Quality Services (DQS), which is a feature of SQL Server Enterprise and Business Intelligence editions. The Data Quality Client provides a graphical user interface for configuring and managing data quality projects, which are used to identify and resolve data quality issues in large datasets. The Data Quality Client allows users to create and manage knowledge bases, which are collections of rules, data domains, and reference data that are used to perform data validation and enrichment. It also provides tools for profiling data, discovering relationships between data elements, and cleansing data by applying standardized values or correcting errors. Some of the key features of the Data Quality Client include: Domain management: Allows users to define and manage data domains, which represent specific types of data (such as postal codes, phone numbers, or email addresses) and the rules for validating and cleansing them. Matching and deduplication: Provides tools for identifying and resolving duplicate records in a dataset by comparing data elements and applying matching rules. Reference data management: Allows users to import and manage reference data, such as lookup tables or external data sources, that can be used to enrich or validate data. Integration with SSIS: Provides integration with SQL Server Integration Services (SSIS), allowing users to create data quality projects as part of an SSIS package and incorporate data quality processes into ETL (Extract, Transform, Load) workflows. Overall, the Data Quality Client is a powerful tool for improving the accuracy and consistency of data in large datasets, and can help organizations to reduce errors, improve data governance, and make better-informed decisions based on high-quality data. Mikes Take -- Not Many Organizations Use This Skip If Possible!
- What is SSIS and Why You Should Use It for Your Database Projects
SQL Server Integration Services (SSIS) is a powerful data integration and ETL (Extract, Transform, Load) tool provided by Microsoft. It enables database administrators to quickly create custom ETL solutions by providing them with a user-friendly graphical design environment and a wide range of data transformation tools. In this blog post, we’ll discuss some of the best features of SSIS and why it should be your go-to tool for your database projects.

Integration with Other Technologies

One of the biggest advantages of SSIS is its tight integration with other technologies such as Oracle, Excel, and Flat Files. This allows you to easily import data from disparate sources and transform it into meaningful information that can be used to drive business insights. Additionally, SSIS can easily integrate with other Microsoft products such as Microsoft Dynamics, SharePoint, and Azure. This makes it easier for developers to create database imports (ETL packages) that are compatible across multiple platforms.

User-Friendly Graphical Design Environment

SSIS provides a user-friendly graphical design environment that simplifies the process of creating ETL solutions. It allows developers to quickly build robust data pipelines using drag-and-drop components without having to write complex code. Additionally, SSIS includes a wide range of data transformation tools such as data conversion, aggregate, merge, lookup, and conditional split which makes it easier for developers to manipulate raw data into usable information.

Optimized Performance & Scalability

Another great feature of SSIS is its optimized performance and scalability. The software is designed in such a way that it takes full advantage of multi-core processors which enhances the speed at which tasks are completed. Furthermore, SSIS also provides task scheduling capabilities which allow you to automate processes so they can run on their own without any manual intervention or supervision required from your end.

One of the key features of SSIS is its ability to connect to a wide variety of data sources, including Oracle, Excel, and Flat Files. This means that developers can easily extract data from these sources and transform it into a format that can be loaded into SQL Server for further analysis or processing. SSIS provides a variety of built-in connectors that enable it to connect to various data sources, including ODBC, OLE DB, ADO.NET, and XML. Additionally, SSIS provides a flexible architecture that allows developers to create custom connectors to connect to other data sources not supported out of the box. Here is a list of connectors in SSIS:

ODBC connector: Enables SSIS to connect to any database that supports the Open Database Connectivity (ODBC) standard.
OLE DB connector: Enables SSIS to connect to any database that supports the Object Linking and Embedding Database (OLE DB) standard.
ADO.NET connector: Enables SSIS to connect to any database that supports the ADO.NET framework, including SQL Server, Oracle, and MySQL.
Flat File connector: Enables SSIS to read and write data from flat files, including CSV, TXT, and fixed-width formats.
Excel connector: Enables SSIS to read and write data from Microsoft Excel spreadsheets.
XML connector: Enables SSIS to read and write data from XML files.
SharePoint connector: Enables SSIS to connect to SharePoint lists and libraries.
Dynamics CRM connector: Enables SSIS to connect to Microsoft Dynamics CRM.
SAP connector: Enables SSIS to connect to SAP systems.
Salesforce connector: Enables SSIS to connect to Salesforce.
HTTP connector: Enables SSIS to connect to web services and REST APIs.
FTP connector: Enables SSIS to transfer files over FTP.
SMTP connector: Enables SSIS to send email messages.
WMI connector: Enables SSIS to connect to Windows Management Instrumentation (WMI) data sources.
Analysis Services connector: Enables SSIS to connect to SQL Server Analysis Services (SSAS) databases.

Once data is extracted from a source system, SSIS provides a wide range of data transformation tools that enable developers to clean, manipulate, and enrich the data before loading it into SQL Server. SSIS includes over 40 data transformations, such as data conversion, merge, lookup, and aggregate.

There are several editions of SQL Server and SSIS, and each edition has different features and licensing options. Here are some of the key differences between editions of SQL Server and SSIS:

SQL Server and SSIS Editions:
Standard Edition: This edition includes all the basic data integration capabilities of SSIS, including connectors to various data sources and basic transformation tasks.
Enterprise Edition: This edition includes all the features of Standard Edition, as well as additional advanced data integration capabilities such as data profiling, data cleansing, and data quality services. It also includes support for high-performance and scalable data integration scenarios.
Developer Edition: This edition includes all the features of Enterprise Edition, but is licensed only for development and testing purposes.

Overall, the differences between editions of SQL Server and SSIS depend on the specific features and capabilities required for your organization's data management needs. The Enterprise Editions of both SQL Server and SSIS provide the most comprehensive set of features, but come with a higher licensing cost, while the Standard Editions provide basic functionality at a lower cost.

Conclusion: Overall, SQL Server Integration Services (SSIS) is an incredibly powerful tool that makes it easy for database administrators to create custom ETL solutions quickly and efficiently. Its tight integration with other technologies such as Oracle, Excel, Flat Files etc., combined with its user-friendly graphical design environment, makes it an ideal choice for any database project requiring quick turnaround times and maximum efficiency. Additionally, its optimized performance and scalability ensure that tasks are completed in record time while still ensuring the accuracy and reliability that customers have come to expect from Microsoft products over the years. If you’re looking for an efficient way to move large amounts of data between different databases or applications then look no further than SSIS!
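As a practical footnote to the automation point above: when packages are deployed to the SSIS catalog (SSISDB), you can start and monitor an execution from plain T-SQL, which is also what a SQL Agent job step ends up doing. This is a minimal sketch; the folder, project, and package names are placeholders and it assumes the project deployment model.

DECLARE @execution_id BIGINT;

-- Create an execution for a package deployed to the SSIS catalog
EXEC SSISDB.catalog.create_execution
    @folder_name = N'YourFolder',
    @project_name = N'YourProject',
    @package_name = N'YourPackage.dtsx',
    @reference_id = NULL,
    @use32bitruntime = 0,
    @execution_id = @execution_id OUTPUT;

-- Start the execution
EXEC SSISDB.catalog.start_execution @execution_id;

-- Check the outcome (status 7 = succeeded, 4 = failed)
SELECT execution_id, status, start_time, end_time
FROM SSISDB.catalog.executions
WHERE execution_id = @execution_id;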



