
- Insert Data into a Table Using SQL Insert Statement
Syntax of SQL INSERT statement

The SQL INSERT statement is used to add new records (rows) to a table, supplying values for specific columns. Here's the basic syntax:

INSERT INTO table_name (column1, column2, ...)
VALUES (value1, value2, ...);

Let's break down the components:

INSERT INTO: The keywords that indicate you want to insert data into a table.
table_name: The name of the table you want to insert data into.
(column1, column2, …): Optional list of columns you want to insert data into. If you specify columns, you need to provide values for each of them. If omitted, values for all columns must be provided in the same order as they appear in the table.
VALUES: This keyword introduces the values you want to insert into the table.
(value1, value2, …): The values you want to insert into the specified columns. The number of values must match the number of columns specified, or match the total number of columns in the table if no columns are specified.

Here's a simple example:

INSERT INTO Students (FirstName, LastName, Age, Grade)
VALUES ('John', 'Doe', 18, 'B');

This statement inserts a new record into the "Students" table with the value 'John' for the FirstName column, 'Doe' for the LastName column, 18 for the Age column, and 'B' for the Grade column.

The INSERT statement in SQL Server has evolved over different versions of the software, introducing new features and enhancements. Here's a brief overview of the different versions and their notable features related to the INSERT statement:

SQL Server 2000: Introduced the basic INSERT INTO syntax for adding new rows to a table. Supported the INSERT INTO SELECT statement for inserting data from one table into another based on a SELECT query. Allowed inserting data into tables with identity columns.
SQL Server 2005: Introduced the OUTPUT clause for capturing the results of an INSERT, UPDATE, DELETE, or MERGE statement. Added support for inserting multiple rows using a single INSERT INTO statement with multiple value lists. Introduced the ROW_NUMBER() function, which allowed for generating row numbers for inserted rows.
SQL Server 2008: Introduced the MERGE statement, which combines INSERT, UPDATE, and DELETE operations into a single statement based on a specified condition. Added support for table-valued parameters, allowing inserting multiple rows into a table using a single parameterized INSERT INTO statement.
SQL Server 2012: Introduced the SEQUENCE object, which allows for generating sequence numbers that can be used in INSERT statements. Added support for the OFFSET FETCH clause, which allows for paging through result sets, useful for batching inserts.
SQL Server 2016: Introduced support for the JSON data format, allowing for inserting JSON data into SQL Server tables. Added support for the STRING_SPLIT function, which splits a string into rows of substrings based on a specified separator.
SQL Server 2019: Introduced support for the APPROX_COUNT_DISTINCT function, which provides an approximate count of distinct values, useful for faster queries on large datasets. Added support for the INSERT INTO … VALUES clause with the DEFAULT keyword, allowing for explicitly inserting default values into columns.

These are some of the significant versions of SQL Server and the features related to the INSERT statement introduced in each version. Each version has brought improvements and new capabilities to the INSERT statement, enhancing its functionality and performance.
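Since the version list above mentions the SEQUENCE object without showing it in action, here is a minimal sketch of using a sequence in an INSERT statement (SQL Server 2012 and later). The sequence name dbo.CarNumbers and the "Showroom" table are hypothetical, used only for illustration:

-- Create a standalone sequence object
CREATE SEQUENCE dbo.CarNumbers AS INT START WITH 1 INCREMENT BY 1;

CREATE TABLE Showroom (
    CarNumber INT PRIMARY KEY,
    Make VARCHAR(50)
);

-- Each INSERT pulls the next number from the sequence
INSERT INTO Showroom (CarNumber, Make)
VALUES (NEXT VALUE FOR dbo.CarNumbers, 'Toyota');

INSERT INTO Showroom (CarNumber, Make)
VALUES (NEXT VALUE FOR dbo.CarNumbers, 'Honda');

Unlike an identity column, a sequence is a standalone object, so the same sequence can supply key values to more than one table.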
Simple SQL INSERT statement

Let's use an automotive example to demonstrate the INSERT INTO syntax. Suppose we have a table called "Cars" with columns for the make, model, year, and price of each car. Here's how you can use the INSERT INTO statement to add new records to this table:

Inserting a Single Record:

CREATE TABLE Cars (
    CarID INT PRIMARY KEY IDENTITY(1,1),
    Make VARCHAR(50),
    Model VARCHAR(50),
    Year INT,
    Price DECIMAL(10, 2) NULL
);

INSERT INTO Cars (Make, Model, Year, Price)
VALUES ('Toyota', 'Camry', 2022, 25000);

This statement inserts a new record into the "Cars" table with the make 'Toyota', model 'Camry', year 2022, and price $25,000.

Inserting Multiple Records:

INSERT INTO Cars (Make, Model, Year, Price)
VALUES ('Honda', 'Civic', 2021, 22000),
       ('Ford', 'Mustang', 2020, 35000),
       ('Chevrolet', 'Silverado', 2019, 30000);

This statement inserts multiple records into the "Cars" table. Each row specifies the make, model, year, and price of a car. In this example, we're inserting records for a Honda Civic, Ford Mustang, and Chevrolet Silverado.

Inserting Records with NULL Values:

INSERT INTO Cars (Make, Model, Year)
VALUES ('Tesla', 'Model S', 2023);

In this example, we're inserting a record for a Tesla Model S into the "Cars" table. We're not providing a value for the "Price" column, so it will default to NULL.

Inserting Records with Explicit Column Names:

INSERT INTO Cars (Make, Model)
VALUES ('BMW', 'X5');

If you don't specify values for all columns, you need to explicitly list the columns that you're inserting data into. In this example, we're inserting a record for a BMW X5 into the "Cars" table without specifying the year or price.

These examples demonstrate how to use the INSERT INTO statement to add new rows to the "Cars" table.

Insert With SELECT Statement

Let's create a new table called "CarModels" and demonstrate how to use the INSERT INTO statement with a SELECT query to insert data from an existing table, the "Cars" table, into the "CarModels" table.

-- Create the CarModels table
CREATE TABLE CarModels (
    ModelID INT PRIMARY KEY IDENTITY(1,1),
    ModelName VARCHAR(50),
    Make VARCHAR(50),
    Year INT
);

-- Insert data into the CarModels table from the Cars table
INSERT INTO CarModels (ModelName, Make, Year)
SELECT Model, Make, Year
FROM Cars;

In this example: We first create the "CarModels" table with columns for the model ID, model name, make, and year of each car model. The ModelID column is defined as the primary key with an identity (auto-increment) property. Then, we use the INSERT INTO statement with a SELECT query to insert data into the "CarModels" table from the "Cars" table. The SELECT query retrieves the model, make, and year from the "Cars" table. We don't explicitly specify a value for the ModelID column because it's auto-incremented and will generate unique values automatically.

This INSERT INTO statement with a SELECT query allows us to populate the "CarModels" table with data from the "Cars" table. Each row in the "CarModels" table will represent a car model extracted from the "Cars" table.
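A related question, touched on under a heading later in this post, is how to insert only a filtered or ordered subset of rows. Here is a small sketch using the same "Cars" and "CarModels" tables; the TOP and WHERE values are arbitrary, and note that SQL Server uses TOP or OFFSET ... FETCH rather than MySQL's LIMIT keyword:

-- Insert only the two newest cars built in 2020 or later
INSERT INTO CarModels (ModelName, Make, Year)
SELECT TOP (2) Model, Make, Year
FROM Cars
WHERE Year >= 2020      -- WHERE filters the source rows
ORDER BY Year DESC;     -- ORDER BY with TOP controls which matching rows are inserted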
Select Into

The SELECT INTO statement is used to create a new table based on the result set of a SELECT query. Here's how you can use it to copy existing records into a new table:

-- Create a new table "NewCars" and insert data into it from the "Cars" table
SELECT * INTO NewCars
FROM Cars;

In this example: We use the SELECT INTO statement to create a new table called "NewCars" and insert data into it from the "Cars" table. SELECT * retrieves all columns and rows from the "Cars" table. The INTO keyword specifies that the result set should be inserted into a new table called "NewCars". After executing this statement, a new table "NewCars" will be created with the same structure and data types as the "Cars" table from the previous examples.

It's important to note that the SELECT INTO statement creates a new table based on the result set of the SELECT query and doesn't require the "NewCars" table to exist beforehand. If the "NewCars" table already exists, this statement will result in an error.

How to Insert Multiple Records with the INSERT Statement

Inserting multiple records works the same way as the multi-row VALUES example shown earlier in this post.

Select Into with Temp Table

-- Create a new temporary table "#NewCars" and insert data into it from the "Cars" table
SELECT * INTO #NewCars
FROM Cars;

Inserting Data in Specific Columns

Inserting data into specific columns follows the pattern shown in the "Inserting Records with Explicit Column Names" example above.

How to Insert Records from a SELECT Statement with a WHERE Clause, an ORDER BY Clause, a LIMIT Clause, and an OFFSET Clause

See the filtered INSERT INTO ... SELECT sketch after the "Insert With SELECT Statement" section above; SQL Server uses TOP or OFFSET ... FETCH in place of a LIMIT clause.

Inserting data returned from an OUTPUT clause

Let's use the same "Cars" table data and demonstrate how to capture the data returned from an OUTPUT clause while inserting into another table. Suppose we have a table called "SoldCars" where we want to insert records for cars that have been sold. We'll use an OUTPUT clause to capture the inserted data.

-- Create the SoldCars table
CREATE TABLE SoldCars (
    SaleID INT PRIMARY KEY IDENTITY(1,1),
    Make VARCHAR(50),
    Model VARCHAR(50),
    Year INT,
    Price DECIMAL(10, 2)
);

-- Insert data into the SoldCars table and capture the inserted data using the OUTPUT clause
INSERT INTO SoldCars (Make, Model, Year, Price)
OUTPUT inserted.*
SELECT Make, Model, Year, Price
FROM Cars
WHERE Make = 'Toyota' AND Year = 2022;

In this example: We first create the "SoldCars" table with columns for the sale ID, make, model, year, and price of each sold car. Then, we use the INSERT INTO statement with an OUTPUT clause to capture the inserted data. The OUTPUT clause specifies inserted.*, which means all columns of the inserted rows are returned to the caller (OUTPUT ... INTO could instead route the captured rows into a separate audit table). The SELECT query retrieves data from the "Cars" table where the make is 'Toyota' and the year is 2022. Only the records that meet the specified conditions will be inserted into the "SoldCars" table, and the inserted data will also be returned as a result of the OUTPUT clause.

Inserting data into a table with columns that have default values

Let's create a new table with a default value for the "Age" column and then insert data into it:

-- Create a new table with a default value for the "Age" column
CREATE TABLE DefaultAgeTable (
    ID INT PRIMARY KEY,
    Name VARCHAR(50),
    Age INT DEFAULT 30
);

-- Insert data into the DefaultAgeTable, omitting the "Age" column
INSERT INTO DefaultAgeTable (ID, Name)
VALUES (1, 'John'),
       (2, 'Jane'),
       (3, 'Michael');

-- Verify the inserted data
SELECT * FROM DefaultAgeTable;

In this example: We create a new table called "DefaultAgeTable" with columns for ID, Name, and Age. The Age column has a default value of 30 specified.
When we insert data into the "DefaultAgeTable", we omit specifying a value for the Age column. Since the Age column has a default value defined, the database system will automatically assign that default value to the Age column for each inserted row. After executing the INSERT INTO statement, we use a SELECT query to verify that the data has been inserted correctly into the "DefaultAgeTable".

ROW_NUMBER function with the INSERT statement

In T-SQL, the ROW_NUMBER() function is typically used in SELECT statements to generate row numbers for result sets. It's not directly used in INSERT statements. However, you can use a CTE (Common Table Expression), or the SELECT query that feeds the INSERT, with the ROW_NUMBER() function to assign row numbers to the rows you're inserting. Here's an example:

Let's say we have a table called "Employees" with columns for EmployeeID, FirstName, and LastName. We want to insert new employees into this table, and we want to assign a unique EmployeeID to each new employee. We can use the ROW_NUMBER() function to generate these unique IDs:

-- Example of using ROW_NUMBER in an INSERT statement
INSERT INTO Employees (EmployeeID, FirstName, LastName)
SELECT ROW_NUMBER() OVER (ORDER BY FirstName, LastName) + (SELECT ISNULL(MAX(EmployeeID), 0) FROM Employees),
       FirstName,
       LastName
FROM NewEmployees; -- NewEmployees is a table or query that contains the new employee data

In this example: We use the SELECT query with the ROW_NUMBER() function to generate row numbers for each row in the result set. We use the ROW_NUMBER() function with the OVER clause to specify the ordering of the rows. You can order the rows based on any column(s) or expression(s) you want. We then add the generated row number to the maximum EmployeeID in the Employees table to ensure uniqueness. Finally, we insert the generated EmployeeID, FirstName, and LastName into the Employees table. This approach allows you to insert multiple new rows into a table while generating unique IDs for each row using the ROW_NUMBER() function.

STRING_SPLIT with the INSERT statement

The STRING_SPLIT function in T-SQL allows you to split a string into a table of substrings based on a specified separator. Here's an example of how you can use STRING_SPLIT in an INSERT statement:

Let's say we have a table called "Tags" with a single column "Tag" where we want to insert tags for a post. We have a string of tags separated by commas that we want to insert into this table.

-- Example of using STRING_SPLIT in an INSERT statement
DECLARE @TagsString VARCHAR(MAX) = 'SQL, T-SQL, Database';

INSERT INTO Tags (Tag)
SELECT value
FROM STRING_SPLIT(@TagsString, ',');

In this example: We declare a variable @TagsString and assign it a string containing multiple tags separated by commas. We use the STRING_SPLIT function to split @TagsString into individual substrings based on the comma separator. We then use the SELECT query to retrieve the individual substrings (tags) generated by STRING_SPLIT. Finally, we insert these individual substrings (tags) into the "Tags" table. After executing this INSERT statement, the "Tags" table will contain a separate row for each tag extracted from the @TagsString variable, with a single tag in the "Tag" column.

OFFSET FETCH Clause

The OFFSET FETCH clause in T-SQL is used for paging through result sets. It allows you to skip a specified number of rows from the beginning of the result set and then return a specified number of rows after that.
This can be useful for implementing pagination in applications, where you want to display data in chunks or pages. Here's an example of how to use OFFSET FETCH:

Suppose we have a table called "Products" with columns for ProductID, ProductName, and Price. We want to retrieve a paginated list of products sorted by ProductID.

-- Example of using OFFSET FETCH for pagination
SELECT ProductID, ProductName, Price
FROM Products
ORDER BY ProductID
OFFSET 10 ROWS          -- Skip the first 10 rows
FETCH NEXT 5 ROWS ONLY; -- Fetch the next 5 rows

In this example: We use the OFFSET clause to skip the first 10 rows of the result set. We use the FETCH NEXT clause to fetch the next 5 rows after skipping the offset rows. The result set will contain rows 11 through 15 of the sorted Products table.

When might this be useful?

Pagination: As mentioned earlier, OFFSET FETCH is commonly used for implementing pagination in web applications. It allows you to retrieve data in chunks or pages, improving performance by fetching only the necessary rows.
Displaying subsets of data: If you have a large result set and you want to display only a portion of it at a time, OFFSET FETCH allows you to fetch subsets of data based on specified criteria.
Analyzing trends: You can use OFFSET FETCH to analyze trends or patterns in your data by fetching subsets of data for analysis.

Overall, OFFSET FETCH is useful whenever you need to work with result sets in chunks or pages, or when you need to retrieve subsets of data for analysis or display purposes.

Inserting With JSON

In SQL Server, you can use the JSON data format to insert data into tables by converting JSON objects into rows. Here's an example:

Let's say we have a table called "Employees" with columns for EmployeeID, FirstName, LastName, and DepartmentID. We want to insert new employees into this table using JSON format.

-- Example of inserting data into a table using JSON in T-SQL
DECLARE @EmployeeData NVARCHAR(MAX) = '
[
    {"FirstName": "John", "LastName": "Doe", "DepartmentID": 1},
    {"FirstName": "Jane", "LastName": "Smith", "DepartmentID": 2},
    {"FirstName": "Michael", "LastName": "Johnson", "DepartmentID": 1}
]';

INSERT INTO Employees (FirstName, LastName, DepartmentID)
SELECT
    JSON_VALUE(value, '$.FirstName') AS FirstName,
    JSON_VALUE(value, '$.LastName') AS LastName,
    JSON_VALUE(value, '$.DepartmentID') AS DepartmentID
FROM OPENJSON(@EmployeeData);

In this example: We declare a variable @EmployeeData and assign it a JSON string containing information about new employees. We use the OPENJSON function to parse the JSON string and convert it into a tabular format; without a WITH clause, OPENJSON returns one row per array element with key, value, and type columns. We use the JSON_VALUE function against the value column to extract the FirstName, LastName, and DepartmentID properties from each JSON object. Finally, we insert the extracted values into the Employees table. After executing this INSERT statement, the "Employees" table will contain the new employees specified in the JSON string. Each row will represent a new employee with their respective FirstName, LastName, and DepartmentID.

Additional Resources

Here is a short video on Insert with SQL Server https://youtu.be/uWIYfbtZat0?si=_CANB6HFKZe6vqxz
- SQL Server: UPDATE Statement
The SQL UPDATE Statement

The UPDATE statement in T-SQL (Transact-SQL) is used to modify existing records in a table. Here's the basic syntax of the UPDATE statement:

UPDATE table_name
SET column1 = value1, column2 = value2, ...
WHERE condition;

Let's break down the components:

UPDATE: Keyword indicating that you want to update existing records.
table_name: The name of the table you want to update.
SET: Keyword indicating that you're specifying which columns you want to update and the new values you want to assign to them.
column1, column2, …: The columns you want to update.
value1, value2, …: The new values you want to assign to the columns.
WHERE: Optional keyword used to specify a condition that determines which rows will be updated. If omitted, all rows in the table will be updated.
condition: The condition that must be met for a row to be updated. Only rows that satisfy this condition will be updated.

Here's a simple example:

UPDATE Employees
SET Salary = 50000
WHERE Department = 'Finance';

This statement would update the "Salary" column of all employees in the "Finance" department to 50000.

Remember to always check and use the WHERE clause cautiously, as omitting it can result in serious, unintended updates to all rows in the table.

How to Use the UPDATE Query in SQL?

Updating a Single Column for All Rows: This type of update is useful when you need to apply the same change to all rows in a table.

UPDATE Employees
SET Department = 'HR';

This statement updates the "Department" column for all rows in the "Employees" table, setting them to 'HR'.

Updating Multiple Columns: Updating multiple columns allows you to modify various aspects of a row simultaneously.

UPDATE Students
SET Grade = 'A', Status = 'Pass'
WHERE Score >= 90;

This statement updates the "Grade" and "Status" columns in the "Students" table for all students who scored 90 or above, setting their grade to 'A' and status to 'Pass'.

Updating Based on a Subquery: You can use a subquery to determine which rows should be updated based on some condition.

UPDATE Orders
SET Status = 'Shipped'
WHERE OrderID IN (SELECT OrderID FROM PendingOrders);

This statement updates the "Status" column in the "Orders" table for orders that are pending (i.e., their OrderID exists in the "PendingOrders" table), setting their status to 'Shipped'.

Updating with Calculated Values: Calculated updates allow you to adjust column values based on expressions or calculations.

UPDATE Inventory
SET Quantity = Quantity - 10
WHERE ProductID = 123;

This statement updates the "Quantity" column in the "Inventory" table for the product with ID 123, subtracting 10 from its current quantity.

Updating Using Joins: Joins enable you to update rows based on related data from other tables.

UPDATE Employees
SET Department = Departments.NewDepartment
FROM Employees
INNER JOIN Departments ON Employees.DepartmentID = Departments.DepartmentID
WHERE Employees.YearsOfService > 5;

This statement updates the "Department" column in the "Employees" table for employees with more than 5 years of service, setting their department to the new department specified in the "Departments" table.

These examples illustrate different scenarios where the UPDATE statement can be applied to modify data in SQL databases, offering flexibility and precision in data manipulation.
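Because of the risk called out above (an UPDATE without a WHERE clause touches every row), a common habit is to dry-run the statement before committing it. Here is a minimal sketch of that approach, assuming the same "Employees" table used in the examples; the EmployeeID column is an assumption added for readability:

-- Preview the rows the WHERE clause will touch
SELECT EmployeeID, Department, Salary
FROM Employees
WHERE Department = 'Finance';

-- Run the update inside an explicit transaction so it can be undone
BEGIN TRANSACTION;

UPDATE Employees
SET Salary = 50000
WHERE Department = 'Finance';

-- Inspect the result, then either commit or roll back
SELECT EmployeeID, Department, Salary
FROM Employees
WHERE Department = 'Finance';

-- COMMIT TRANSACTION;   -- keep the change
ROLLBACK TRANSACTION;    -- or discard it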
SQL Update Multiple Columns

Let's create a new table called "Students" and insert some sample data. Then, I'll demonstrate an example where we update multiple columns of a record using this table.

-- Create the Students table
CREATE TABLE Students (
    StudentID INT PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    Age INT,
    Grade VARCHAR(2)
);

-- Insert sample data into the Students table
INSERT INTO Students (StudentID, FirstName, LastName, Age, Grade)
VALUES (1, 'John', 'Doe', 18, 'B'),
       (2, 'Jane', 'Smith', 20, 'C'),
       (3, 'Michael', 'Johnson', 22, 'A'),
       (4, 'Emily', 'Brown', 19, 'B');

-- Select the initial data from the Students table
SELECT * FROM Students;

The SELECT returns the four rows we just inserted. Now, let's say we want to update both the "Age" and "Grade" columns for a specific student (for example, StudentID = 2).

-- Update the Age and Grade columns for StudentID = 2
UPDATE Students
SET Age = 21, Grade = 'A'
WHERE StudentID = 2;

-- Select the updated data from the Students table
SELECT * FROM Students;

After executing these SQL commands, the "Age" and "Grade" columns for the record of the student with StudentID = 2 will be updated to 21 and 'A' respectively, and the rest of the data will remain unchanged.

Example – Update table with data from another table

Let's create a new table called "StudentScores" to store the scores of each student. Then, I'll demonstrate an example where we update the "Grade" column in the "Students" table based on the average score of each student from the newly created "StudentScores" table.

-- Create the StudentScores table
CREATE TABLE StudentScores (
    StudentID INT,
    Score INT
);

-- Insert sample data into the StudentScores table
INSERT INTO StudentScores (StudentID, Score)
VALUES (1, 85),
       (2, 92),
       (3, 98),
       (4, 79);

-- Select the initial data from the StudentScores table
SELECT * FROM StudentScores;

Now, let's demonstrate how to update the "Grade" column in the "Students" table based on the average score of each student from the "StudentScores" table.

-- Update the Grade column in the Students table based on average score
UPDATE Students
SET Grade = CASE
    WHEN (SELECT AVG(Score) FROM StudentScores WHERE StudentID = Students.StudentID) >= 90 THEN 'A'
    WHEN (SELECT AVG(Score) FROM StudentScores WHERE StudentID = Students.StudentID) >= 80 THEN 'B'
    WHEN (SELECT AVG(Score) FROM StudentScores WHERE StudentID = Students.StudentID) >= 70 THEN 'C'
    ELSE 'F'
END;

-- Select the updated data from the Students table
SELECT * FROM Students;

In this example, we update the "Grade" column in the "Students" table based on the average score of each student from the "StudentScores" table. Depending on the average score, we assign a grade ('A', 'B', 'C', or 'F') to each student.

Update With A Join

Let's use the same "Students" and "StudentScores" tables from the previous example and demonstrate how to update the "Grade" column in the "Students" table using a JOIN operation with the "StudentScores" table.
-- Create the Students table
CREATE TABLE Students (
    StudentID INT PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    Age INT,
    Grade VARCHAR(2)
);

-- Insert sample data into the Students table
INSERT INTO Students (StudentID, FirstName, LastName, Age, Grade)
VALUES (1, 'John', 'Doe', 18, NULL),
       (2, 'Jane', 'Smith', 20, NULL),
       (3, 'Michael', 'Johnson', 22, NULL),
       (4, 'Emily', 'Brown', 19, NULL);

-- Create the StudentScores table
CREATE TABLE StudentScores (
    StudentID INT,
    Score INT
);

-- Insert sample data into the StudentScores table
INSERT INTO StudentScores (StudentID, Score)
VALUES (1, 85),
       (2, 92),
       (3, 98),
       (4, 79);

-- Select the initial data from the Students table
SELECT * FROM Students;

-- Select the initial data from the StudentScores table
SELECT * FROM StudentScores;

Now, let's demonstrate how to update the "Grade" column in the "Students" table based on the average score of each student from the "StudentScores" table using a JOIN operation. Note that an aggregate such as AVG cannot appear directly in the SET clause, so the average is calculated in a derived table that is joined to "Students":

-- Update the Grade column in the Students table based on average score using a JOIN
UPDATE Students
SET Grade = CASE
    WHEN Scores.AvgScore >= 90 THEN 'A'
    WHEN Scores.AvgScore >= 80 THEN 'B'
    WHEN Scores.AvgScore >= 70 THEN 'C'
    ELSE 'F'
END
FROM Students
INNER JOIN (
    SELECT StudentID, AVG(Score) AS AvgScore
    FROM StudentScores
    GROUP BY StudentID
) AS Scores ON Students.StudentID = Scores.StudentID;

-- Select the updated data from the Students table
SELECT * FROM Students;

In this example, we update the "Grade" column in the "Students" table based on the average score of each student from the "StudentScores" table using a JOIN operation. The UPDATE statement joins the "Students" table to the grouped "StudentScores" rows on the "StudentID" column, with the derived table calculating the average score for each student. Then, it assigns a grade ('A', 'B', 'C', or 'F') based on the average score.

UPDATE With A WHERE Condition

Let's use the same "Students" and "StudentScores" data from the previous examples. This time, I'll demonstrate how to update the "Grade" column in the "Students" table based on a condition using the WHERE clause.

-- Update the Grade column in the Students table based on a condition using the WHERE clause
UPDATE Students
SET Grade = 'A'
WHERE StudentID = 3;

-- Select the updated data from the Students table
SELECT * FROM Students;

In this example, we update the "Grade" column in the "Students" table for a specific student, identified by StudentID = 3, and set their grade to 'A'. This UPDATE statement only affects the row(s) where the condition StudentID = 3 is true. In this case, it will update the value in the "Grade" column for the record of the student with StudentID = 3 to 'A'.

Update with Aliases for Table Names

We'll use the same "Students" and "StudentScores" tables as in the example before and demonstrate how to update the "Grade" column in the "Students" table using aliases for the table names.

-- Update the Grade column in the Students table based on average score using aliases
UPDATE s
SET Grade = CASE
    WHEN sc.AvgScore >= 90 THEN 'A'
    WHEN sc.AvgScore >= 80 THEN 'B'
    WHEN sc.AvgScore >= 70 THEN 'C'
    ELSE 'F'
END
FROM Students AS s
INNER JOIN (
    SELECT StudentID, AVG(Score) AS AvgScore
    FROM StudentScores
    GROUP BY StudentID
) AS sc ON s.StudentID = sc.StudentID;

-- Select the updated data from the Students table
SELECT * FROM Students;

In this example: We use aliases "s" for the "Students" table and "sc" for the aggregated "StudentScores" rows to make the query more readable.
We update the "Grade" column in the "Students" table based on the average score of each student from the "StudentScores" table using an INNER JOIN operation. We calculate the average score for each student using the AVG function and GROUP BY on the student's ID. Then, we use a CASE statement to assign a grade ('A', 'B', 'C', or 'F') based on the average score. This UPDATE statement will update the "Grade" column for each student in the "Students" table based on their average score in the "StudentScores" table.

Additional Resources

Here is a good video on the SQL UPDATE statement https://youtu.be/QB-2bChzt68?si=3RrIOogCDMtASY5i

Here is a quick and easy way to try out the SQL UPDATE statement without installing SQL Server: https://sqltest.net/
- I Wrote A Book!
Here is the link on Amazon https://a.co/d/9gzb1DI
- Powering Your Business Intelligence Getting Started With Mike Bennyhoff & Power BI:
In today's fast-paced business environment, decision-making is critical. To make informed and actionable decisions, you need the right tools. This is where Power BI comes in. Power BI is a cloud-based business intelligence solution from Microsoft that helps you turn data into insights. It is a powerful tool for data analysis that allows you to connect, transform, and visualize data in a way that makes sense. In this blog post, we'll walk you through the basic steps you need to take to get started with Power BI and working with Bennyhoff Products And Services (Mike Bennyhoff).

Are You Frustrated By:
• Skill Shortage - Lack of gravitas or experience among consultants
• High Turnover - On outsourcing platforms, here today, gone tomorrow
• Poor Communication - Missed deadlines, not sure about the next steps

Mike B - Experience. Results. Efficiency!

What Others Say About My Project Management Systems

"Mike is PROFICIENT in what he does and will go the extra mile to provide you with solutions and guidance. A++"

"Great communicator! Will definitely work with him again."

My Process: To communicate project status and billing, I use a tool called Jira. This tool is provided at no cost to you and will send an update each time I log hours, close a task, or ask you to deliver information. Each time we meet, I can move a card from "In Progress" to "Done". Once we have the project parameters and communication set up, I set up a VM (Virtual Machine) for your environment; this way there is NO cross-pollination between your data and other customers' data. I have a server with 20, 512 GB of RAM and 25 TB of storage; no other consultant offers anything that comes close to this level of service.

Power BI: Step 1: Get the Right Licenses

The first step in getting started with Power BI is ensuring you have the right licenses. You must sign up for a Power BI account using your corporate email address; nope, a Google e-mail address will not work! Depending on the size of your organization, there may be multiple ways to purchase Power BI. Different types of Power BI products are available, including Power BI Free, Power BI Pro, and Power BI Premium, each with specific features and limitations. I suggest starting with my blog on Power BI Licensing. Alternatively, Have Me Review And Consult On What Is Best For Your Organization. We can have a 1-hour conversation, and I can describe the best license to get you started without pointless spending and unnecessary services.

Step 2: Connect Your Data Sources

Once you have the right licenses, the next step is to connect your data sources to Power BI. Power BI can connect to a wide range of data sources, including Excel spreadsheets, SQL Server databases, SharePoint lists, Salesforce, and many others. The easiest way to connect your data sources is to use the Power BI Desktop application. This application allows you to connect to your data sources and create data models that can be used to analyze your data. Does this look daunting? Have BPS connect Power BI to APIs and other complex systems like Salesforce and Google Analytics; here is a list of the products with which I have connected Power BI.
Salesforce - https://www.salesforce.com/
Google Analytics - https://marketingplatform.google.com/about/analytics/
Viewpoint Construction - https://www.viewpoint.com/
SAP - https://www.sap.com/index.html
Microsoft Dynamics 365 - https://dynamics.microsoft.com/en-us/
NetSuite - https://www.netsuite.com/portal/home.shtml
Sage - https://www.sage.com/en-us/

Step 3: Create Reports and Visualizations

With your data sources connected, you can start creating reports and visualizations. Power BI makes it easy to create reports by providing a drag-and-drop interface that allows you to add visualizations to your report canvas. These visualizations can be customized to suit your needs and can include charts, tables, maps, and many others. You can also add filters, slicers, and drill-down capabilities to your reports to allow for greater interactivity. Once more, I provide full-service consulting: if you have the data, I can wrangle it, transform it, and make it useful.

Step 4: Share Your Reports and Dashboards

Once you've created your reports and visualizations, you can share them with others in your organization. Power BI allows you to share your reports and dashboards with individuals, groups, or the entire organization. You can also set up automatic data refresh to ensure that your reports are always up-to-date.

Step 5: Plan for Growth

As your organization grows, so will your data needs. It's essential to create a plan for how you will scale your Power BI implementation. This may include upgrading your licenses to support more users, investing in more powerful hardware, or developing a governance plan to ensure that data is managed and maintained properly.

Conclusion: Business intelligence is critical for decision-making in today's fast-paced business environment. With Power BI, you have a powerful tool that can help you turn data into insights. Getting started with Power BI is easy, but it's important to take the right steps to ensure that you set yourself up for success. By following the steps outlined in this guide, you'll be well on your way to creating powerful reports and visualizations that can help you make informed and actionable decisions. So, what are you waiting for? Connect with me today to get started! Power up your business intelligence with Power BI today!

Mike Bennyhoff - 916-303-3627 - Mike@BPS-Corp.com
- How to Create Login, User and Grant Permissions in SQL Server
In database environments, managing data rights, permissions, and privileges is essential for preserving security. This article details the process of granting and revoking user account access to particular objects in a database through SQL queries. It also examines how these actions safeguard an individual's personal information from being modified or viewed without authorization.

Create a SQL Server login using SQL Server Management Studio

To create a SQL Server login using SQL Server Management Studio (SSMS), follow these steps:

Open SSMS and connect to the SQL Server instance where you want to create the login.
Expand the Security folder in the Object Explorer pane.
Right-click on the Logins folder and select New Login.
In the Login - New dialog box, specify the following options:
Login name: specifies the name of the login. This is a required field.
Password and Confirm password: specify the password for the login. This is a required field for SQL Server logins, and is not used for Windows logins.
Default database: specifies the default database for the login. This is the database that the login will be connected to by default when logging in.
Default language: specifies the default language for the login. This determines the language that will be used for error messages and system messages for the login.
Enforce password policy: specifies whether to enforce password policy for the login. If this option is set to ON, the login's password must meet the SQL Server password policy requirements.
Enforce password expiration: specifies whether to enforce password expiration for the login. If this option is set to ON, the login's password will expire after a specified number of days.
Click OK to create the login.

Note that when creating a login using SSMS, the SID and Credential options are not visible. The SID is automatically generated by SQL Server, and the Credential option can be set separately using the CREATE CREDENTIAL statement.
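The same login can also be created with T-SQL instead of the SSMS dialog. Here is a minimal sketch; the login name, password, and default database are hypothetical placeholders, and the options mirror the dialog fields described above:

-- SQL Server authentication login
CREATE LOGIN ReportingUser
WITH PASSWORD = 'Str0ng!Passw0rd',      -- use your own strong password
     DEFAULT_DATABASE = AdventureWorks, -- hypothetical default database
     CHECK_POLICY = ON,                 -- Enforce password policy
     CHECK_EXPIRATION = ON;             -- Enforce password expiration

-- A Windows login uses FROM WINDOWS instead of a password, for example:
-- CREATE LOGIN [CONTOSO\jdoe] FROM WINDOWS;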
Server Roles In SQL Server Management Studio

Server roles in SQL Server are predefined roles that are used to grant permissions and control access to server-wide resources and settings. Server roles are intended to simplify the process of granting permissions and managing security in a SQL Server environment. There are several server roles that can be assigned to logins in SQL Server, including:

Sysadmin: Members of the sysadmin server role have full control over the SQL Server instance, including all databases and system objects. They can perform any action on the server without restrictions.
Serveradmin: Members of the serveradmin server role can manage the configuration and settings of the SQL Server instance. They can also shut down and restart the server, and manage linked servers.
Securityadmin: Members of the securityadmin server role can manage server-level security settings, including login and user creation and permission assignment.
Processadmin: Members of the processadmin server role can manage processes running on the SQL Server instance, including stopping and starting processes.
Setupadmin: Members of the setupadmin server role can manage SQL Server installation and configuration, including creating and modifying SQL Server instances and managing service accounts.
Bulkadmin: Members of the bulkadmin server role can perform bulk import and export operations on the SQL Server instance.
Diskadmin: Members of the diskadmin server role can manage disk files and filegroups on the SQL Server instance.
db_owner: Strictly speaking, db_owner is a database role rather than a server role; its members have full control over a specific database, including all objects and data within the database.
Public: The public server role is a default role that all logins are a member of. It provides basic access to the SQL Server instance and all databases, but does not grant any specific permissions.

Server roles are assigned to logins, and each login can be a member of multiple server roles. Server roles can be assigned using SQL Server Management Studio or by using Transact-SQL statements. It is important to carefully manage server role membership to ensure that users have the appropriate permissions and access to server resources.

User Mapping page

User mapping in SQL Server is the process of associating a SQL Server database user with a SQL Server login. A login is a security principal that allows a user to connect to a SQL Server instance, while a user is a security principal that is used to control access to a specific database. When creating a new user in SQL Server, user mapping is an important step to ensure that the user has the appropriate permissions to access the desired database. User mapping can be done in SQL Server Management Studio or using Transact-SQL statements.

To map a database user to a SQL Server login using SSMS, follow these steps:

Open SSMS and connect to the SQL Server instance.
Expand the Databases folder and select the database where you want to create the user.
Right-click on the Security folder and select New -> User.
In the User - New dialog box, specify the user name and login name for the new user.
Under "Securables", select the database objects that the user should have access to, and specify the appropriate permissions for each object.
Click OK to create the user.

During the user creation process, SSMS will automatically generate a script to create the user and map it to the specified login. This script can be reviewed and modified as necessary before executing it. In addition to specifying database object permissions, user mapping can also be used to specify the default schema and database role membership for the user. This can be done in the User Mapping tab of the New User dialog box. It is important to carefully manage user mapping to ensure that users have the appropriate permissions and access to database resources. Improperly mapped users can result in security vulnerabilities and data breaches.
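Here is what the mapping step looks like in T-SQL. This is a minimal sketch that reuses the hypothetical ReportingUser login and AdventureWorks database from the earlier example; the schema, role, and GRANT target are illustrative choices, not required values:

USE AdventureWorks;

-- Create a database user mapped to the login, with a default schema
CREATE USER ReportingUser FOR LOGIN ReportingUser WITH DEFAULT_SCHEMA = dbo;

-- Add the user to a fixed database role (read access to all tables)
ALTER ROLE db_datareader ADD MEMBER ReportingUser;

-- Or grant permissions on individual securables instead (dbo.Orders is hypothetical)
GRANT SELECT, INSERT ON dbo.Orders TO ReportingUser;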
Securables

The Securables page in the user setup in SQL Server allows you to specify the database objects that a user should have access to, and the specific permissions that the user should have on each object. This includes tables, views, stored procedures, functions, and other database objects. When creating a new user or modifying an existing user in SQL Server, the Securables page is used to control access to specific database objects. This page displays a list of all available database objects in the selected database, along with checkboxes to specify the permissions that the user should have on each object. The available permissions that can be granted to a user on a database object include:

Select: Allows the user to read data from the object.
Insert: Allows the user to add new data to the object.
Update: Allows the user to modify existing data in the object.
Delete: Allows the user to remove data from the object.
Execute: Allows the user to execute stored procedures and functions.
Alter: Allows the user to modify the structure of the object.
Control: Allows the user to perform administrative tasks on the object.

By selecting the appropriate checkboxes on the Securables page, you can grant the user the necessary permissions to perform the required actions on each database object. It is important to carefully manage object permissions to ensure that users have the appropriate level of access to database resources, while also protecting sensitive data from unauthorized access or modification. Note that granting excessive permissions to a user can result in security vulnerabilities and data breaches, so it is important to regularly review and update user permissions to ensure that they are aligned with business requirements and security best practices.

Status Page

The Status page in the user setup wizard in SQL Server displays information about the result of the user creation process. This page provides details about any errors or warnings that occurred during the user creation process, and also displays the Transact-SQL script that was executed to create the user. After you have specified the user properties, securable permissions, and role membership, you can click the OK button to create the user. SQL Server Management Studio will execute the necessary Transact-SQL statements to create the user and apply the specified permissions.

If any errors or warnings occur during the user creation process, they will be displayed on the Status page. You can review the error messages to determine the cause of the problem and take appropriate action to resolve the issue. The Status page also displays the Transact-SQL script that was executed to create the user. This script can be copied and saved for future reference or modification. If you need to create a similar user in another database or on another SQL Server instance, you can modify the script as necessary and execute it to create the new user. Overall, the Status page in the user setup wizard provides important feedback about the user creation process and allows you to quickly identify and resolve any issues that may occur.

Other User Creation Resources For Your SQL Server Instance

SQL Server Error Logs - Where you can find failed login attempts
SQL Server Operators
What Is Database Ownership
SQL Server Jobs and The SQL Agent
- SQL Server Restore Database From Backup
Can I restore a database from one version of SQL Server to another?

It is possible to restore a database from one version of SQL Server to another, but there are some limitations and considerations to keep in mind. When restoring a database from a previous version of SQL Server to a later version, the restore process automatically upgrades the database to the newer version. For example, you can restore a database backup from SQL Server 2008 to SQL Server 2019, but the restored database will be upgraded to the SQL Server 2019 format during the restore process. However, you cannot restore a database backup from a later version of SQL Server to an earlier version. For example, you cannot restore a full database backup taken on SQL Server 2019 to a SQL Server 2016 instance.

In addition to version compatibility, there may be other considerations to keep in mind when restoring a database from one server to another. For example, differences in hardware, operating system versions, and database server configurations may affect the restore process and the performance of the restored database. To minimize the risk of compatibility issues and ensure a smooth backup and restore process, it is recommended that you test your backup and restore process in a non-production environment and thoroughly review the SQL Server documentation and release notes for any version-specific considerations. Additionally, it may be helpful to seek the advice of a qualified SQL Server professional if you have any questions or concerns about restoring a database from one version of SQL Server to another.

What if my source servers and destination servers have different collations?

If the source and destination servers have different collations, you may encounter issues when restoring a database. Collation determines how character data is sorted and compared in SQL Server. If the collation of the source server is different from the collation of the destination server, the character data in the restored database may be sorted and compared differently than it was in the source database. This can cause data inconsistencies or errors.

To avoid collation conflicts during the restore process, you can change the collation of the destination server to match the collation of the source server before restoring the database. To change the collation on the server, you can use SQL Server Management Studio or the ALTER DATABASE statement. Here are the general steps to review and change the collation using SQL Server Management Studio:

Open SQL Server Management Studio and connect to the destination server.
Right-click on the server name in the Object Explorer and select "Properties".
In the "Server Properties" dialog box, select the "Collation" tab.
Select the desired collation from the list of available collations.
Click "OK" to save the changes and close the dialog box.

Once you have changed the collation of the destination server to match the collation of the source server, you can proceed with the restore process as usual. The restored database on the destination server should then have the same collation as the source database, and you should not encounter any collation conflicts or data inconsistencies.
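Before restoring, it can help to confirm what the collations actually are on each side. Here is a small sketch; the database name is a hypothetical placeholder, and the server-level check should be run on both the source and destination instances:

-- Collation of the SQL Server instance
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

-- Collation of a specific database (hypothetical name)
SELECT DATABASEPROPERTYEX('AdventureWorks', 'Collation') AS DatabaseCollation;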
Note that changing the collation on the server may affect other databases and applications running on the server. It is important to thoroughly test the impact of changing the collation and to consult the SQL Server documentation and release notes for any version-specific considerations before making any changes.

How can I tell how large a restored database will be on my server from just the backup file?

It is possible to estimate the size of a restored database from just the .bak file, but it may not be accurate, since there are many factors that can affect the final size of the database once it is back on the server. Here are the general steps to estimate the size of a restored database from a .bak file:

Check the size of the .bak file: The size of the .bak file can give you a rough idea of the size of the restored database. However, keep in mind that the size of the backup file does not necessarily reflect the size of the database on the server, as the backup file may be compressed or contain empty space.

Determine the compression and backup options used: If the backup file was compressed, it will likely be smaller than the original database. Additionally, if specific backup options were used, such as including or excluding filegroups, this can affect the size of the restored database.

Estimate the database size based on the data and log file sizes: The size of the restored database can be estimated based on the sizes of the data and log files in the .bak file. You can use the following formula to estimate the database size:

Estimated database size = total size of data files + total size of log files

You can find the sizes of the data and log files in the "Restore Files" section of the restore wizard or by using the RESTORE FILELISTONLY command. The RESTORE FILELISTONLY command is a Transact-SQL statement that retrieves a list of the files included in a database backup. The command does not actually restore the backup, but instead provides information about the backup file, including the names, sizes, and paths of the data and log files. Here is an example of how to use the RESTORE FILELISTONLY command:

RESTORE FILELISTONLY FROM DISK = 'C:\Backup\AdventureWorks.bak';

In this example, the command retrieves the file list from the AdventureWorks.bak backup file located in the C:\Backup folder. The output of the RESTORE FILELISTONLY command includes the following columns:

LogicalName: the logical name of the file as stored in the backup file
PhysicalName: the physical file name as stored in the backup file
Type: the file type, either "D" for data file or "L" for log file
FileGroupName: the name of the filegroup to which the file belongs, if applicable
Size: the size of the file in bytes
MaxSize: the maximum size of the file, if applicable
FileId: the file ID as stored in the backup file

The output of the RESTORE FILELISTONLY command can be used to determine the sizes and paths of the data and log files in the backup file, which can be helpful when restoring the database to a different server or location. Keep in mind that the actual size of the restored database may be larger than the estimated size due to factors such as database growth, indexes, statistics, and other metadata.
It is important to monitor the size of the restored database on the server and adjust the database settings as needed to optimize performance and manage storage.

Restore SQL Database using SQL Server Management Studio

Open SQL Server Management Studio: Open SQL Server Management Studio and connect to the SQL Server instance that you want to restore the database on.

Navigate to the Restore Database wizard: In the Object Explorer, right-click on the Databases folder and select "Restore Database...".

Select the backup file: In the "General" page of the Restore Database wizard, select the "Device" radio button and click on the ellipsis button to select the backup file. You can choose to either add a backup file or select a file from a backup media set.

Specify the destination database and restore options: On the "General" page, specify the database name that you want to restore the backup to. You can also select options such as which backup sets to restore.

Select the backup sets to restore: In the "Backup sets to restore" grid, select the backup sets that you want to restore. You can choose to restore the full backup, a specific differential backup, and/or additional transaction log backups.

Specify the restore options for the selected backup sets: On the "Options" page, select the restore options for the backup sets you have selected. For example, you can specify the location of the restored files, replication settings, and the recovery state of the database files.

Preview the restore script: In the "Summary" page, review the restore script that SSMS generates based on your selections. You can also choose to script the restore operation or schedule it.

Restore the database (replacing the existing database with the earlier backup if you chose that option): Click on the "OK" button to restore the database and recover the data from the backup. The restore operation may take some time depending on the size of the database and the number of backup sets selected.

Verify the restored database: Once the restore operation is complete, verify that the database is restored and functioning as expected.
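The same restore can be scripted in T-SQL, which is often easier to repeat and troubleshoot than the wizard. A minimal sketch, assuming a hypothetical backup file path, logical file names, and target data/log folders; the logical names used by WITH MOVE come from RESTORE FILELISTONLY, shown earlier in this post:

-- Optional: confirm the backup file is readable before restoring
RESTORE VERIFYONLY FROM DISK = 'C:\Backup\AdventureWorks.bak';

-- Restore the database, relocating the data and log files on the destination server
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\Backup\AdventureWorks.bak'
WITH MOVE 'AdventureWorks_Data' TO 'D:\SQLData\AdventureWorks.mdf',
     MOVE 'AdventureWorks_Log'  TO 'E:\SQLLogs\AdventureWorks_log.ldf',
     RECOVERY,        -- bring the database online; use NORECOVERY to apply more backups
     STATS = 10;      -- report progress every 10 percent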
Suggestions for troubleshooting restoring backup files to SQL Server

Verify the backup file: Before attempting to restore a backup file, verify that the file is valid and not corrupted. You can do this by using the RESTORE VERIFYONLY command or by checking the backup history in SQL Server Management Studio.

Check the SQL Server version: Make sure that the version of SQL Server that you are restoring the backup file to is compatible with the version of the backup file. Restoring a backup file to a different version of SQL Server can cause errors or data loss.

Check file paths and permissions: Make sure that the file paths and permissions for the data and log files in the backup file are correct and accessible on the destination server. If the file paths or permissions are incorrect, you may encounter errors or the restore process may fail.

Check database settings and options: Make sure that the database settings and options are compatible with the backup file. For example, if the backup file was created with compression, make sure that compression is supported on the destination server.

Check the restore sequence: Make sure that you are restoring the backup files in the correct order and using the correct options. For example, if the backup file includes multiple filegroups, you may need to restore them in a specific order.

Check for conflicting objects: If the restore process fails with an error about conflicting objects, make sure that the destination database does not already contain objects with the same name. You may need to modify the restore options to ignore conflicts or rename conflicting objects.

Review the SQL Server error log: If the restore process fails with an error message, review the SQL Server error log for more information about the error. The error log may provide clues about the cause of the error and suggest possible solutions.

Other Resources

System Databases In SQL Server
Transaction Log Backups and Full Mode
Snapshot Backups In SQL Server
SQL Server Backups And Indexing
- Instance And Database Settings In SQL Server
SQL Server Instance Settings refer to the configurations that are applied at the server level and affect the behavior of all databases running on that instance. These settings include things like memory allocation, processor usage, security settings, and backup options. Examples of SQL Server Instance Settings include the maximum server memory, server authentication mode, and database default locations. On the other hand, SQL Server Database Settings refer to configurations that are specific to an individual database, and they only affect that particular database. These settings include things like recovery model, compatibility level, collation, and auto-shrink. Examples of SQL Server Database Settings include the recovery model, compatibility level, and collation. In summary, SQL Server Instance Settings affect the behavior of all databases running on an instance, while SQL Server Database Settings are specific to individual databases. Here are some common SQL Server instance and database settings and a brief explanation of what they do: Max Server Memory - Sets the maximum amount of memory that SQL Server can use. The default value for Max Server Memory in SQL Server varies depending on the version and edition of SQL Server. For example, in SQL Server 2019 Enterprise Edition, the default value for Max Server Memory is the larger of either 2.88 TB or 80% of the total physical memory, up to a maximum of 24 TB. In contrast, the default value in SQL Server 2019 Standard Edition is 128 GB. However, it is important to note that the default value for Max Server Memory may not be appropriate for every scenario, as it is often set to a high value to ensure that SQL Server can take advantage of the available memory on the system. This can lead to other applications running on the same server being starved of memory, resulting in poor overall performance. As mentioned earlier, the recommended value for Max Server Memory depends on a number of factors, such as the amount of memory available on the server, the size and activity level of the databases running on SQL Server, and the memory requirements of other applications running on the same server. It is generally recommended to allocate 50-70% of the total physical memory on the server to SQL Server's Max Server Memory setting. It is important to monitor the memory usage of SQL Server regularly and adjust the Max Server Memory setting as needed to ensure optimal performance. In addition, it is a good practice to set a specific value for Max Server Memory rather than relying on the default value, as this provides more granular control over the amount of memory used by SQL Server. Server Authentication Mode In SQL Server, Server Authentication mode is a security setting that determines how users are authenticated when they connect to an instance of SQL Server. There are two modes of authentication in SQL Server: Windows Authentication mode and SQL Server and Windows Authentication mode (also known as Mixed Mode). In Windows Authentication mode, users are authenticated through Windows Active Directory. This mode is recommended for environments where all users have Windows domain accounts and need to connect to SQL Server using their Windows credentials. With this mode, there is no need for users to remember separate login credentials for SQL Server. In SQL Server and Windows Authentication mode, users can be authenticated either through Windows Authentication or SQL Server Authentication. 
With SQL Server Authentication, users are authenticated using a username and password that is stored in SQL Server. This mode is useful in scenarios where there are non-Windows clients that need to access SQL Server or where there is a need for a separate set of credentials for accessing SQL Server. The Server Authentication mode can be changed in SQL Server Management Studio or by using Transact-SQL commands. It is important to choose the appropriate authentication mode for your environment to ensure the security of your SQL Server instance. Network Configuration You can start the SQL Server Network Configuration screen using the SQL Server Configuration Manager. Here are the steps to start the Network Configuration screen: Open the SQL Server Configuration Manager. You can find it in the SQL Server program group in the Windows Start menu, or by searching for "SQL Server Configuration Manager" in the Windows search bar. In the SQL Server Configuration Manager, click on "SQL Server Network Configuration" in the left-hand pane. This will display a list of network protocols supported by SQL Server, such as TCP/IP, Named Pipes, and Shared Memory. You can select a protocol to view its properties and configuration settings, or right-click on it to enable, disable, or restart the protocol. The Network Configuration screen allows you to manage various aspects of SQL Server network connectivity, such as enabling or disabling network protocols, specifying port numbers, configuring IP addresses, setting authentication modes, and configuring remote connections. You can also use this screen to troubleshoot network connectivity issues and monitor network activity. The recommended network protocols to enable for SQL Server depend on your specific environment and connectivity requirements. However, in general, the following network protocols are commonly used and recommended for SQL Server: TCP/IP: This protocol is the most commonly used network protocol for SQL Server connections. It is used for remote connections over the internet or a network. TCP/IP supports multiple simultaneous connections and provides good performance and security. Named Pipes: This protocol is used for local connections within a network. It provides faster connections than TCP/IP and is useful when you need to transfer large amounts of data. Shared Memory: This protocol is used for local connections within the same computer. It provides the fastest connections and is useful when you need to transfer small amounts of data. Overall, the SQL Server Network Configuration screen is an important tool for managing SQL Server network connectivity and ensuring that your SQL Server instances are properly configured for secure and reliable network communication. Backup Options SQL Server 2022 offers various backup options to ensure that your databases are protected against data loss and corruption. The following are some of the backup options available in SQL Server 2022: Full backup: This option creates a complete backup of the database and all its objects. It is recommended to take full backups regularly to ensure that you have a complete and up-to-date backup of your database. Differential backup: This option backs up only the changes made to the database since the last full backup. It is useful for reducing the time and storage space required for backups. Transaction log backup: This option backs up the transaction log of the database, which contains information about all the transactions made on the database. 
It is useful for recovering the database to a specific point in time. File or filegroup backup: This option allows you to back up individual files or filegroups of the database. It is useful for large databases where backup times can be reduced by backing up only the required files. Copy-only backup: This option creates a backup without affecting the backup chain. It is useful for creating ad-hoc backups without interrupting the regular backup schedule. Backup compression: This option compresses the backup to reduce the backup size and the time required to complete the backup. Backup encryption: This option encrypts the backup to ensure that the backup data is secure and cannot be accessed by unauthorized users. Query Store The Query Store is a feature in SQL Server that provides a way to track query performance over time and troubleshoot performance issues. It captures query execution plans, runtime statistics, and other related information and stores it in a dedicated database called the Query Store. This data can be used to identify performance regressions, optimize queries, and troubleshoot performance issues. To use the Query Store, you first need to enable it on your SQL Server instance. This can be done using the SQL Server Management Studio or by running T-SQL commands. Once enabled, the Query Store automatically starts capturing query performance data. You can view the Query Store data by using various built-in reports, such as the Top Resource-Consuming Queries report or the Regressed Queries report. You can also use T-SQL queries to access the Query Store data directly and perform custom analysis. The Query Store provides several benefits, including: Query performance troubleshooting: The Query Store makes it easy to identify queries with poor performance, compare query performance before and after changes, and identify performance regressions. Query optimization: The Query Store provides insights into query execution plans, statistics, and other related information that can be used to optimize query performance. Query tuning: The Query Store can be used to fine-tune queries by providing information on execution plans, query statistics, and query resource consumption. Historical analysis: The Query Store stores data over time, allowing you to track query performance trends and compare performance across different time periods. Cost Threshold for Parallelism The Cost Threshold for Parallelism is a configuration setting in SQL Server that determines the minimum query execution cost required to trigger parallel query execution. When a query's estimated execution cost exceeds the value set for the Cost Threshold for Parallelism, SQL Server may use parallelism to execute the query. The Cost Threshold for Parallelism is measured in query execution plan units, which are calculated based on the estimated CPU and I/O resources required to execute the query. The default value for the Cost Threshold for Parallelism in SQL Server is 5, meaning that queries with an estimated execution cost of 5 or higher will be considered for parallel execution. The appropriate value for the Cost Threshold for Parallelism depends on the workload and hardware configuration of your SQL Server instance. In general, if your workload consists of small and simple queries, you may want to lower the value to reduce the overhead of parallel query execution. 
If your workload consists of complex and resource-intensive queries, you may want to raise the value to ensure that parallelism is only used for the most expensive queries. A good starting point for setting the Cost Threshold for Parallelism is to a value between 60 and 80 and monitor query performance. If you find that queries are not being parallelized when you expect them to be, you may want to lower the value. If you find that too many queries are being parallelized, causing performance issues, you may want to raise the value. Maximum Degree of Parallelism The Maximum Degree of Parallelism (MaxDOP) is a configuration setting in SQL Server that determines the maximum number of processors that can be used to execute a single query in parallel. When a query is executed, SQL Server may use multiple processors to divide the work and process it in parallel. The MaxDOP setting limits the maximum number of processors that can be used for this purpose. The appropriate value for MaxDOP depends on the workload and hardware configuration of your SQL Server instance. In general, the MaxDOP setting should be set to a value that balances query execution speed with system resource utilization. Setting MaxDOP too high can cause excessive CPU and memory usage, while setting it too low can result in slower query execution times. A good starting point for setting MaxDOP is to use the default value of 0, which allows SQL Server to use all available processors for parallel query execution. However, in some cases, it may be necessary to lower this value to prevent excessive resource usage and maintain system stability. This may be necessary if you are running other resource-intensive applications on the same server, or if you have a large number of concurrent queries executing simultaneously. On the other hand, if you have a high-performance server with a large number of processors, you may benefit from increasing the MaxDOP value to take advantage of the additional processing power. CLR Integration CLR (Common Language Runtime) Integration is a feature in SQL Server that allows developers to write database objects (such as stored procedures, functions, and triggers) using the .NET Framework and other CLR languages, instead of Transact-SQL (T-SQL). Once CLR Integration is enabled and the assembly is registered, you can use .NET languages to write database objects in SQL Server. CLR Integration provides developers with a powerful toolset for creating complex and feature-rich database objects that can be used to solve a wide range of business problems. Ad Hoc Distributed Queries The Ad Hoc Distributed Queries setting is a configuration option in Microsoft SQL Server that allows the server to execute queries that reference external data sources, such as other SQL Server instances or non-SQL Server data sources like Oracle or Excel. When Ad Hoc Distributed Queries is enabled, SQL Server can execute queries that use the OPENROWSET or OPENDATASOURCE functions, which enable the server to retrieve data from external sources. The recommended setting for Ad Hoc Distributed Queries is to disable it, unless you specifically need to execute queries that reference external data sources. This is because enabling this feature can introduce security risks and potential performance issues. If you do need to execute queries that reference external data sources, it is recommended to enable Ad Hoc Distributed Queries temporarily, execute the necessary queries, and then disable it again to mitigate security risks. 
Enable Ad Hoc Distributed Queries sp_configure 'show advanced options', 1; RECONFIGURE; GO sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE; GO Database Default Locations The Database Default Locations setting in SQL Server refers to the default file locations for database data and log files. When you create a new database in SQL Server, you must specify where to store the data and log files that make up the database. The default file locations determine where these files are stored if you do not specify a different location during database creation. There are two types of default locations: Default data file location: This is the default location where SQL Server stores the data files for new databases. By default, this is set to the "...\MSSQL\DATA" folder on the SQL Server instance's local drive. Default log file location: This is the default location where SQL Server stores the log files for new databases. By default, this is set to the "...\MSSQL\DATA" folder on the SQL Server instance's local drive. You can modify the default locations for data and log files by changing the server-level configuration options using SQL Server Management Studio or the T-SQL command ALTER SERVER CONFIGURATION. By changing the default file locations, you can ensure that new databases are stored in a location that best suits your needs, such as a separate drive for better performance or to meet storage requirements. Auto Close The Auto Close setting in SQL Server is a database option that determines whether the database should be automatically closed and its resources freed up when there are no more connections to the database. When the Auto Close option is enabled, SQL Server will automatically close the database when there are no active connections. This can free up system resources and memory, but it can also result in a delay when the next connection is made to the database, as SQL Server has to reopen the database and load it into memory. The recommended setting for the Auto Close option is to disable it. This is because enabling the Auto Close option can introduce performance overhead due to the need to constantly open and close the database, which can result in slower query execution times and increased disk I/O. Additionally, if there are frequent connections to the database, enabling Auto Close can result in increased resource usage as SQL Server has to repeatedly open and close the database. If you have a database with infrequent connections or limited resource availability, enabling Auto Close may be appropriate. However, for most databases, it is recommended to disable the Auto Close option for optimal performance and stability. You can disable the Auto Close option for a database using SQL Server Management Studio or the T-SQL command ALTER DATABASE. Auto Shrink The Auto Shrink setting in SQL Server is a database option that determines whether the database files should be automatically shrunk to free up disk space when the database size decreases. When the Auto Shrink option is enabled, SQL Server will automatically shrink the database files when there is free space available in the file. This can help to reclaim disk space, but it can also result in performance issues and fragmentation of the database files. The recommended setting for the Auto Shrink option is to disable it. This is because enabling Auto Shrink can result in frequent file growth and shrink operations, which can cause performance issues and fragmentation of the database files. 
Additionally, shrinking database files can cause data pages to become fragmented, which can result in slower query performance. Instead of using Auto Shrink, it is recommended to manually manage database file size and disk space. This can be done by monitoring the database size and file growth, and performing periodic maintenance tasks such as rebuilding indexes, defragmenting file systems, and archiving old data to free up space. By manually managing the database files, you can ensure optimal performance and stability of the database. You can disable the Auto Shrink option for a database using SQL Server Management Studio or the T-SQL command ALTER DATABASE. Recovery Model The Recovery Model setting in SQL Server is a database-level option that determines how much transaction log data is retained and how the database can be restored in case of a failure. There are three recovery models in SQL Server: Simple Recovery Model: In this mode, SQL Server automatically reclaims space in the transaction log to keep it from growing too large. The transaction log backups are not required in this mode and only full backups are necessary to restore the database in case of a failure. However, point-in-time recovery is not possible with this mode. Full Recovery Model: In this mode, SQL Server does not automatically reclaim space in the transaction log, and it must be backed up regularly to prevent it from growing too large. This mode supports point-in-time recovery, but requires both full and transaction log backups to restore the database in case of a failure. Bulk-Logged Recovery Model: This mode is similar to the Full Recovery Model, but it is designed for bulk operations such as bulk inserts or select into operations. It allows for faster logging of these operations, but at the cost of increased log size. The recommended Recovery Model setting depends on the business requirements and recovery objectives. For example, if the database contains critical data and requires point-in-time recovery, the Full Recovery Model may be necessary. On the other hand, if the database contains less critical data and does not require point-in-time recovery, the Simple Recovery Model may be appropriate. You can change the Recovery Model setting for a database using SQL Server Management Studio or the T-SQL command ALTER DATABASE. Backup Compression Default Backup Compression is a feature in SQL Server that allows you to compress your database backups to reduce their size and save disk space. This feature is available in all editions of SQL Server. When you enable backup compression, SQL Server compresses the backup data before writing it to the backup file. The compression ratio depends on the amount of redundancy in the database, such as empty space or duplicated data. Generally, databases with more redundant data will achieve a higher compression ratio than those with less redundant data. Backup compression can typically compress a database backup file by 40-90% of its original size, depending on the type of data being backed up and the level of redundancy. This can significantly reduce the storage requirements for backup files, as well as the time and bandwidth required to transfer them to other locations. It is worth noting that backup compression does require additional CPU resources to perform the compression, so there may be a small performance overhead during the backup process. However, this is typically outweighed by the benefits of reduced storage requirements and faster backup times. 
Additionally, modern hardware and software have made backup compression a more viable option than it may have been in the past. To enable backup compression and set it as the default option for all backup operations on the server, execute the following T-SQL commands: sp_configure 'backup compression default', 1; RECONFIGURE; User Connections The "User Connections" setting in SQL Server refers to the maximum number of simultaneous user connections that can be made to a particular instance of SQL Server. This setting determines the maximum number of users that can access a database at the same time. The recommended setting for "User Connections" depends on the hardware and resources available on the SQL Server machine, as well as the workload and usage patterns of the database. As a general rule of thumb, the "User Connections" setting should be set high enough to accommodate the expected number of concurrent users accessing the database, but not so high that it causes performance issues or resource contention on the server. The default value for the "User Connections" setting in SQL Server is 0, which means there is no specific limit set on the number of user connections; the maximum number of user connections is instead determined by the available resources on the server. If you need to change the "User Connections" setting in SQL Server, you can do so using SQL Server Management Studio or by executing the following T-SQL command: sp_configure 'user connections', <max_connections>; Replace <max_connections> with the maximum number of user connections that you want to allow. After changing the setting, you will need to run the "RECONFIGURE" command to apply the change: RECONFIGURE; Lock Timeout The "lock timeout" setting in SQL Server refers to the amount of time that a session will wait for a lock to be released before the waiting statement is canceled and rolled back. The recommended setting for the "lock timeout" depends on the application and the workload that the database is supporting. In general, the lock timeout setting should be set high enough to allow for normal transaction processing, but not so high that it causes excessive blocking and concurrency issues. The default value for the "lock timeout" setting in SQL Server is -1, which means that transactions will wait indefinitely for a lock to be released. This can lead to blocking and concurrency issues, and is generally not recommended for production environments. If you need to change the "lock timeout" setting for a session, you can do so by executing the following T-SQL command: SET LOCK_TIMEOUT <milliseconds>; Replace <milliseconds> with the desired lock timeout value in milliseconds. The maximum lock timeout value is 2,147,483,647 milliseconds (or roughly 24.8 days). It is important to note that changing the lock timeout setting can have significant impacts on the performance and concurrency of the database. It is recommended to thoroughly test any changes to the lock timeout setting in a non-production environment before making changes to a production system. Fill Factor In SQL Server, the fill factor refers to the percentage of space on a database page that is initially filled with data when an index is created or rebuilt. The fill factor can be set for each index and determines how much free space should be left on each page, to allow for future updates and inserts. For example, if the fill factor is set to 80%, each page of the index will initially be filled to 80% of its capacity, leaving 20% of the space free for future updates.
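To show how this setting is applied in practice, here is a minimal sketch of specifying a fill factor when an index is created or rebuilt; the table and index names (dbo.Orders, IX_Orders_CustomerID) are hypothetical placeholders rather than objects from this article:
-- Create a nonclustered index that leaves 20% free space on each leaf page
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (FILLFACTOR = 80);
-- Rebuild an existing index with a different fill factor later on
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90);
Rebuilding the index is also how you change the fill factor of an existing index after monitoring its behavior.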
The recommended value for the fill factor depends on several factors, such as the frequency of updates and inserts to the table, the size of the table, and the available disk space. As a general guideline, a fill factor of 100% is appropriate for read-only tables, where no updates or inserts will occur. For tables that are frequently updated, a lower fill factor (such as 70-80%) is recommended to allow for future updates and inserts without causing excessive page splits. However, setting the fill factor too low can also lead to wasted disk space, as the index pages will have more free space than necessary. It is generally recommended to monitor the fill factor over time and adjust it as needed based on the actual usage patterns of the table. Determining the optimal fill factor for an index involves a trade-off between space usage and performance. A lower fill factor will leave more free space on index pages, which can reduce the frequency of page splits and improve write performance, but will also increase the overall size of the index. On the other hand, a higher fill factor will reduce the size of the index but may also increase the frequency of page splits and degrade write performance. Here are some tests you can perform to determine an appropriate fill factor for a specific index: Monitor the index fragmentation: If the index is highly fragmented, this may indicate that the fill factor is set too high, causing frequent page splits. To monitor the fragmentation level of an index, you can use the following T-SQL query: SELECT OBJECT_NAME(ind.OBJECT_ID) AS TableName, ind.name AS IndexName, indexstats.avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats INNER JOIN sys.indexes ind ON ind.object_id = indexstats.object_id AND ind.index_id = indexstats.index_id WHERE indexstats.avg_fragmentation_in_percent > 30 AND indexstats.index_type_desc = 'NONCLUSTERED' If the fragmentation level is consistently high, you may want to consider lowering the fill factor to reduce page splits. Monitor the index usage: If the index is frequently updated or inserted into, a lower fill factor may be more appropriate to reduce the frequency of page splits. To monitor the usage of an index, you can use the following T-SQL query (joined to sys.indexes so the index name is available): SELECT OBJECT_NAME(us.object_id) AS TableName, ind.name AS IndexName, us.user_updates, us.user_seeks, us.user_scans, us.user_lookups FROM sys.dm_db_index_usage_stats us INNER JOIN sys.indexes ind ON ind.object_id = us.object_id AND ind.index_id = us.index_id WHERE us.database_id = DB_ID() AND us.object_id = OBJECT_ID('tableName') If the index is frequently updated (i.e., high user_updates count), you may want to consider lowering the fill factor to reduce page splits. If the index is frequently queried (i.e., high user_seeks or user_scans counts), a higher fill factor may be appropriate to reduce the size of the index. Monitor disk space usage: If disk space is a concern, you may want to consider increasing the fill factor to reduce the overall size of the index. To monitor the disk space usage of an index, you can use the following T-SQL query:
SELECT OBJECT_NAME(ind.OBJECT_ID) AS TableName, ind.name AS IndexName, indexstats.page_count * 8 / 1024 AS IndexSizeMB FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats INNER JOIN sys.indexes ind ON ind.object_id = indexstats.object_id AND ind.index_id = indexstats.index_id WHERE indexstats.avg_fragmentation_in_percent < 30 AND indexstats.index_type_desc = 'NONCLUSTERED'ORDER BY IndexSizeMB DESC If the index size is consistently large, you may want to consider increasing the fill factor to reduce the amount of wasted space on index pages. Lightweight Pooling Lightweight Pooling (LWP) is a legacy feature in SQL Server that was introduced in SQL Server 7.0 and deprecated in SQL Server 2016. LWP is a thread scheduling mechanism that allows multiple SQL Server user threads to share a single operating system thread. It was designed to improve the scalability of SQL Server by reducing the overhead associated with creating and managing operating system threads. When LWP is enabled, SQL Server creates a fixed number of operating system threads, each of which is associated with a lightweight worker thread. When a user thread needs to execute a query, it is assigned to one of the available lightweight worker threads. If no lightweight worker threads are available, the user thread is placed in a queue until a worker thread becomes available. LWP is typically used in environments where SQL Server is handling a large number of small transactions or connections, such as web applications. However, LWP can also introduce performance overhead if the number of user threads exceeds the number of available worker threads, causing contention for resources. In recent versions of SQL Server, LWP has been replaced by the "fiber mode" feature, which provides a similar mechanism for managing lightweight user threads. Unlike LWP, which uses a fixed number of worker threads, fiber mode dynamically adjusts the number of user threads based on system load and available resources, providing better scalability and performance. Priority Boost Priority Boost is a setting in SQL Server that allows the SQL Server process to run at a higher priority than other processes on the same system. This can be useful in situations where SQL Server is the primary workload on the system and needs to be given priority over other processes. However, it is generally not recommended to enable Priority Boost in SQL Server. Here are a few reasons why: It can cause instability: Enabling Priority Boost can cause SQL Server to compete with other critical system processes for CPU time, potentially leading to system instability and crashes. It can impact performance: Even in cases where SQL Server is the primary workload on the system, enabling Priority Boost can actually have a negative impact on performance. This is because other critical system processes, such as those related to I/O, memory management, and networking, may be starved of resources and unable to perform their duties effectively. It is not necessary in most cases: In general, SQL Server is designed to run effectively and efficiently without the need for Priority Boost. If you are experiencing performance issues with SQL Server, there are typically other ways to address them, such as optimizing queries, tuning indexes, and configuring memory settings. In short, enabling Priority Boost in SQL Server should be avoided in most cases. 
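To confirm whether Priority Boost is currently enabled on an instance (and turn it off if it was switched on at some point), a quick sp_configure check along the following lines should work; this is a general sketch rather than a command taken from this article:
-- 'priority boost' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Report the current value (run_value = 1 means it is enabled)
EXEC sp_configure 'priority boost';
-- Disable it; this change requires a SQL Server service restart to take effect
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;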
If you are experiencing performance issues with SQL Server, it is recommended to investigate other solutions before considering Priority Boost as a potential option. Other Links Configure SQL Instance https://www.bps-corp.com/post/configure-a-sql-instance Intro To Database Administration https://www.bps-corp.com/post/introduction-to-database-administration-in-sql-server SQL Server For IT Managers https://www.bps-corp.com/sql-sql-it-managers-home SP_Who2 https://www.bps-corp.com/post/sp_who2
- SQL Instance Aliasing In SQL Server
A SQL Instance alias is a name that is used to refer to a specific SQL Server instance. It is a user-defined name that can be used in place of the actual instance name when connecting to the SQL Server instance. Instance aliases are useful when you have multiple SQL Server instances running on the same machine, or when you need to change the name or location of a SQL Server instance without affecting the applications that are connecting to it. To create a SQL Instance alias, you can use the SQL Server Configuration Manager or the SQL Server Native Client Configuration utility. When creating an alias, you specify the alias name, the protocol (TCP/IP or Named Pipes), the server name (or IP address), and the instance name (if applicable). Once the alias is created, applications can use the alias name to connect to the SQL Server instance instead of the actual instance name. For example, suppose you have a SQL Server instance named "MyInstance" running on a machine with the hostname "MyServer". You could create an alias named "MyAlias" that points to this instance. Then, applications could use the alias name "MyAlias" to connect to the SQL Server instance, regardless of the actual instance name or machine name. Using an instance alias has several benefits: Easy maintenance: An instance alias can make it easier to manage multiple database instances. Instead of remembering the IP address or server name of each instance, you can simply refer to them by their alias. Flexibility: Using an instance alias allows you to move a database instance to a different server or IP address without having to update all the references to it in your code. Security: By using an instance alias, you can hide the actual IP address or server name of the database instance, which can help protect it from unauthorized access. Simplified code: When using an instance alias, your code can be simplified as you only need to reference the alias instead of the full server name or IP address. This can make your code more readable and easier to maintain. Overall, using an instance alias in SQL can simplify management, improve flexibility, enhance security, and make code more readable. Instance aliases are helpful in situations where you have multiple SQL Server instances installed on a server, or when you need to move an instance to a different server but want to maintain the same connection string. Instead of having to update all the references to the actual instance name, you can simply update the instance alias. In SQL Server 2019 and below, you can create a SQL instance alias in the SQL Server Configuration Manager. Here are the steps to create an instance alias: Open SQL Server Configuration Manager - you may need to open this as an administrator. Click on "SQL Server Native Client Configuration" in the left-hand pane. Right-click on "Aliases" in the right-hand pane and select "New Alias". Enter the desired name for the alias in the "Alias Name" field. Select the "Network Libraries" tab and choose the appropriate network protocol (e.g., TCP/IP). Enter the server name and instance name for the SQL Server instance you want to alias in the "Server" field. Click "OK" to save the alias.
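Once the alias is saved, anything that accepts a server name can use it. As a quick sanity check (using the "MyAlias" example above and a hypothetical database name), a sqlcmd test and an application-style connection string would look something like this:
sqlcmd -S MyAlias -E
Server=MyAlias;Database=MyDatabase;Integrated Security=True;
The -E switch tells sqlcmd to use Windows Authentication; substitute -U and -P (or SQL credentials in the connection string) if the instance runs in Mixed Mode.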
If the TCP/IP client protocol is not enabled, you will need to enable it. Troubleshooting - SQL 2022 + Versions: Aliases Is Grayed Out If you cannot type a name in the SQL Alias box because it is grayed out or not editable, you are missing the SQL Server Native Client; note, however, that the Native Client is no longer supported. You can download the Native Client here https://learn.microsoft.com/en-us/sql/relational-databases/native-client/applications/installing-sql-server-native-client?view=sql-server-ver16 Additionally, you will need the C++ Visual Studio installer, and this will require a reboot. If you cannot install the legacy Native Client, you will need to use the Client Network Utility to add a network library configuration: open the utility --> Alias --> Add. While SQL aliases can be helpful in simplifying database management and connectivity, they also have some limitations. Here are some limitations of SQL aliases: Not supported in some SQL Server components: Some SQL Server components, such as SQL Server Integration Services (SSIS), do not support SQL aliases. This can limit the usefulness of aliases in certain scenarios. Limited to a single server: SQL aliases are limited to a single server. If you need to connect to a different server, you will need to create a new alias or use the actual server name. Not available in some SQL Server editions: SQL aliases are not available in all editions of SQL Server. For example, SQL Server Express does not support aliases. Can cause confusion: If you have many aliases, it can be challenging to keep track of them all. Additionally, if you're working with other developers or administrators who are not familiar with your aliases, it can cause confusion and make it harder to troubleshoot issues. Potential performance impact: Using a SQL alias can add an additional layer of network traffic, which can impact performance. While the impact is generally minimal, it's important to be aware of this potential limitation. Instance aliases are supported in most versions of SQL Server, but there are some differences in how they are managed and used in different versions. Here are some major differences in SQL Server versions that support instance aliases: SQL Server 2000: This version of SQL Server introduced support for instance aliases. However, unlike later versions of SQL Server, instance aliases in SQL Server 2000 are managed using the Client Network Utility. SQL Server 2005-2008 R2: Instance aliases in SQL Server 2005-2008 R2 are managed using the SQL Server Configuration Manager. This version also introduced the ability to create 32-bit and 64-bit aliases. SQL Server 2012 and later versions: Starting with SQL Server 2012, the SQL Server Configuration Manager was updated to support both 32-bit and 64-bit aliases in the same interface. Additionally, SQL Server 2012 introduced the ability to use a fully qualified domain name (FQDN) in an alias. Azure SQL Database: Azure SQL Database supports instance aliases, but they are managed using the Azure portal or Azure PowerShell. Additionally, Azure SQL Database supports only TCP/IP as the network protocol for aliases. Does ODBC support aliases in SQL Server? Yes, ODBC (Open Database Connectivity) supports SQL Server instance aliases. ODBC is a standardized API that allows applications to access data from various database management systems, including SQL Server, using a common syntax and set of commands.
When creating a connection to a SQL Server instance using ODBC, you can specify an instance alias in the "Server" field of the connection string. For the following example below, if you created an instance alias called "MyAlias" for an instance named "MyInstance", the connection string would be: Driver={SQL Server};Server=MyAlias\MyInstance;Database=myDatabase;Uid=myUsername;Pwd=myPassword; When the application attempts to connect to the SQL Server instance using this connection string, ODBC resolves the alias name to the actual server and instance name, and establishes the connection accordingly. Overall, ODBC is a flexible and widely-used way to connect to SQL Server and many other database systems, and supports the optional use of instance aliases to simplify connection strings. Other Resources System Databases In SQL Server Configure SQL Server Instance Consulting Hours - If you just want me to do this for you Alias In T-SQL (This is different than an Instance Alias)
- SQL Server Error Logs
These logs are important because they provide a detailed record of server events that can be used for troubleshooting, performance tuning, and auditing purposes. Error log entries are generated by SQL Server whenever an error or warning occurs, and they can be viewed using SQL Server Management Studio or by querying the log from within SQL Server. The logs contain information such as the time the error occurred, the severity of the error, the source of the error, and a description of the error. Some of the reasons why error logs are important include: Troubleshooting: Error logs can be used to troubleshoot issues with SQL Server. By examining the logs, administrators can identify the source of errors and take corrective action. Performance tuning: Error logs can also be used to optimize SQL Server performance. By monitoring the logs, administrators can identify performance issues and take steps to improve performance. Auditing: Error logs can be used for auditing purposes. By reviewing the logs, administrators can track activity and identify any unauthorized access to data on the system. The location and format of error logs in SQL Server can vary depending on the version and edition of SQL Server being used. Here are the general locations of the error logs by version of SQL Server: SQL Server 2005 and earlier: Error logs are stored in the "Log" folder under the SQL Server instance's installation directory. SQL Server 2008 and later: Error logs are stored in the "Log" folder under the SQL Server instance's installation directory, but the default location for the error logs can be changed during installation. SQL Server 2012 and later: The default location for the error logs is "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log" for the default instance, where "MSSQL11" refers to the version of SQL Server being used (SQL Server 2012). SQL Server 2016 and later: The default location for the error logs is "C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Log" for the default instance, where "MSSQL13" refers to SQL Server 2016 (later versions use MSSQL14, MSSQL15, and MSSQL16 for SQL Server 2017, 2019, and 2022 respectively). Using SQL Server Management Studio (SSMS): Open SSMS and connect to the SQL Server instance. Expand the "Management" node, right-click on the "SQL Server Logs" node, and select "Configure". In the "Configure SQL Server Error Logs" window, you can see the current location of the error logs and modify the log settings if needed. Using T-SQL: Open a new query window in SSMS and execute the following query to retrieve the location of the current SQL Server Error log file: EXEC sp_readerrorlog 0, 1, N'Logging SQL Server messages in file' This will return a result set containing the current path of the SQL Server Error log file. Using Windows Explorer: Navigate to the default log directory for the SQL Server instance. By default, the location of the SQL Server Error logs is "%ProgramFiles%\Microsoft SQL Server\MSSQL{version_number}.{instance_name}\MSSQL\Log". Replace {version_number} with the version of SQL Server (e.g., "MSSQL12.MSSQLSERVER" for SQL Server 2014) and {instance_name} with the name of the SQL Server instance (e.g., "MSSQLSERVER" for the default instance). Using the registry: Open the Windows Registry Editor and navigate to the following key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\Parameters Look for the "SQLArg" value, which contains the startup parameters for the SQL Server instance.
The location of the error log file is specified with the "-e" parameter. Looking at the SQL Server Instance Properties In SQL Server Management Studio (SSMS), you can view and manage SQL Server error logs using the Object Explorer. Here are the general steps to find the error logs in SSMS by version: SQL Server 2005 and earlier: In Object Explorer, expand the SQL Server instance, and then expand Management. Right-click SQL Server Logs, and then select View SQL Server Log. SQL Server 2008 and later: In Object Explorer, expand the SQL Server instance, and then expand Management. Right-click SQL Server Logs, and then select View -> SQL Server Log. SQL Server 2012 and later: In Object Explorer, expand the SQL Server instance, and then expand Management. Right-click SQL Server Logs, and then select View -> SQL Server Log. Alternatively, you can use the keyboard shortcut "Ctrl+Shift+L" to open the SQL Server Log Viewer. SQL Server 2016 and later: In Object Explorer, expand the SQL Server instance, and then expand Management. Right-click SQL Server Logs, and then select View -> SQL Server Log. Alternatively, you can use the keyboard shortcut "Ctrl+Shift+L" to open the SQL Server Log Viewer. Log Properties SQL Server Error Log Properties are settings that can be configured to control the behavior of error logs in SQL Server. These settings affect how often the logs are logged and cycled, how many logs are retained, and the maximum size of each log file. Here's a brief overview of each setting and how it can affect the server and the transaction log logs: Maximum number of error log files: This setting determines the maximum number of error logs that can be retained before the oldest log is overwritten. The default value is 6, meaning that the current error log and the previous 5 logs are retained. If this value is set too low, important error information may be lost. If it is set too high, it may consume more disk space than necessary. Maximum size (MB) of each error log file: This setting determines the maximum size of each error log file. When the log file reaches the maximum size, a new log file is created. The default value is 20 MB. If this value is set too low, log files may be created and rotated too frequently, which can lead to performance issues. If it is set too high, log files may consume more disk space than necessary. Maximum size (MB) of all error log files: This setting determines the maximum size of all error log files combined. When the total size of all log files reaches the maximum size, the oldest log file is deleted and a new log file is created. The default value is 2,147,483,647 MB (or 2 terabytes), which is essentially unlimited. If this value is set too low, log files may be deleted before important error information can be reviewed. If it is set too high, log files may consume more disk space than necessary. Recycle error logs: This setting determines how often error logs are cycled. By default, error logs are cycled when the SQL Server service is restarted. However, this setting can be configured to cycle logs on a regular basis (e.g., daily, weekly, or monthly). If logs are not cycled often enough, log files may consume more disk space than necessary. If they are cycled too often, important error information may be lost. Why are some errors also saved the Windows Event Log Centralized logging: The Windows Event Log is a centralized logging mechanism that allows administrators to view system events from multiple sources in one location. 
By logging SQL Server errors to the Windows Event Log, administrators can see important events related to SQL Server along with other system events in the same place. Compliance: Some compliance standards, such as PCI DSS, require that logs be written to the Windows Event Log in addition to the application-specific log files. This ensures that events are captured and retained in a secure, centralized location. Critical errors: In some cases, SQL Server may encounter critical errors that prevent it from writing to its own error log. In these cases, SQL Server will write the error to the Windows Event Log so that it can be captured and reviewed by administrators. Windows monitoring tools: Windows monitoring tools, such as Microsoft System Center Operations Manager (SCOM), can be configured to monitor the Windows Event Log for specific events, including SQL Server errors. By logging SQL Server errors to the Windows Event Log, these tools can provide real-time alerts and notifications when critical events occur. Reviewing Logs and The DBA There are several ways to proactively automate monitoring error logs in SQL Server. Here are some examples: SQL Server Agent Alerts: SQL Server Agent can be configured to generate alerts for specific errors or error severity levels. These alerts can be configured to trigger an email notification or run a specific job when the alert is triggered. This allows DBAs to proactively monitor and respond to errors as they occur. Third-party monitoring tools: There are several third-party monitoring tools available that can automate the monitoring of SQL Server error logs. These tools can be configured to monitor specific errors, error severity levels, or even patterns in the error log entries. They can also be set up to generate alerts or notifications when specific conditions are met. PowerShell scripts: PowerShell scripts can be used to automate the monitoring of SQL Server error logs. The scripts can be scheduled to run at specific intervals and can be configured to check for specific errors or error severity levels. The scripts can also be set up to generate alerts or notifications when specific conditions are met. Custom T-SQL scripts: Custom T-SQL scripts can be written to monitor the SQL Server error log and generate alerts or notifications when specific conditions are met. These scripts can be scheduled to run at specific intervals using SQL Server Agent, or they can be run manually as needed. Additional Resources SQL Server Management and Maintenance (if you just want me to manage your Logs) Configure A SQL Instance T-Log Backups Setting Failure Alerts In SQL Agent Video (2008) but the concepts are still good
- Learning About and Implementing Isolation Levels In SQL Server
Are you looking to boost your SQL Server's concurrency and consistency? SQL Server has your back with various transaction isolation levels. Each level provides a unique balance between consistency and concurrency, making SQL Server's job of managing concurrency a whole lot easier. Wondering which level is the right fit for your needs? We've got your back with a brief overview of each isolation level. And if you're looking for more detailed examples using SQL Server, keep reading. Default Isolation Level and NOLOCK The default isolation level in SQL Server is "Read Committed" isolation level. This means that by default, SQL Server will use the Read Committed isolation level to control concurrency and ensure data consistency. However, it's important to note that the default isolation level can be changed at the database level or query level using the SET TRANSACTION ISOLATION LEVEL statement. Additionally, some operations such as index creation or bulk loading may temporarily change the isolation level to improve performance. It's important to carefully consider the implications of changing the isolation level and to test any changes thoroughly before implementing them in a production environment. You can check the current isolation level setting in SQL Server by running the following query: DBCC USEROPTIONS; This query will return a result set with various options and settings, including the current transaction isolation level. Look for the "isolation level" option to see the current setting. What is the difference between setting an SQL Server isolation level and setting Nolock? Setting an isolation level determines how concurrent transactions interact with each other, including how they acquire locks and read data. The isolation level affects the consistency and concurrency of the data in a database. For example, setting the isolation level to Read Committed means that a transaction can read only committed data and that it will acquire shared locks on the data it reads, which can prevent other transactions from modifying that data until the first transaction releases its locks. On the other hand, setting NOLOCK is a query hint that tells SQL Server to read data without acquiring locks. It allows a SELECT statement to read data that is currently being modified by another transaction, which can improve performance, but may lead to inconsistent or incorrect results. For example, using the NOLOCK hint with a SELECT statement that reads a table can allow the query to return dirty or uncommitted data, which may not reflect the true state of the database. Here's an example of how these two concepts can be used in practice: Suppose you have a table called Customers with the columns ID, Name, and City, and two concurrent transactions, T1 and T2, that modify the data in the table. T1 updates the City column of a particular customer, while T2 reads the City column of the same customer. If the isolation level of both transactions is set to Read Committed, T1 will acquire an exclusive lock on the row it updates, which will prevent T2 from reading the data until T1 releases its lock. This ensures that T2 reads the updated data and not the old data. If the isolation level of T2 is set to Read Uncommitted, and the NOLOCK hint is used with the SELECT statement that reads the data, T2 will read the data without acquiring any locks. This means that T2 may read the old, unmodified data while T1 is updating the City column, which can result in inconsistent data. 
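To make the Customers example above concrete, here is a minimal two-session sketch of the same situation; the specific values (ID = 1, 'Chicago') are assumed purely for illustration:
-- Session 1 (T1): update the row but do not commit yet
BEGIN TRANSACTION;
UPDATE dbo.Customers
SET City = 'Chicago'
WHERE ID = 1;
-- (transaction deliberately left open)

-- Session 2 (T2), run while T1 is still open:
-- Under the default Read Committed level this SELECT is blocked until T1 commits or rolls back.
SELECT City FROM dbo.Customers WHERE ID = 1;

-- With NOLOCK (or READ UNCOMMITTED) it returns immediately,
-- but it may return 'Chicago' even if T1 later rolls back - a dirty read.
SELECT City FROM dbo.Customers WITH (NOLOCK) WHERE ID = 1;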
In summary, isolation levels and the NOLOCK hint are both used to control how concurrent transactions access data in a SQL Server database, but they are used in different ways and have different effects on data consistency and concurrency. Read uncommitted: This is the lowest isolation level in SQL Server and allows dirty reads, non-repeatable reads, and phantom reads. This means that transactions can read data that has been modified but not yet committed by other transactions, which can result in inconsistent data. You might use this isolation level in situations where data consistency is not critical, and high concurrency is required. Repeatable read: This isolation level prevents non-repeatable reads, but phantom reads can still occur. Under this isolation level, locks are placed on all data that is read by a transaction, and other transactions cannot modify the locked data until the first transaction completes. You might use this isolation level in situations where data consistency is important, and you need to prevent non-repeatable reads. Serializable: This is the highest isolation level in SQL Server and prevents all three types of anomalies (dirty reads, non-repeatable reads, and phantom reads). Under this isolation level, transactions acquire range locks on all data they read or modify, which prevents other transactions from modifying the same data. You might use this isolation level in situations where data consistency is critical, and you can tolerate a lower degree of concurrency. Snapshot: This isolation level is similar to the optimistic, versioning-based behavior of Read committed snapshot isolation, in that it uses a versioning mechanism to maintain multiple versions of each row. However, unlike Read committed snapshot isolation, the Snapshot isolation level provides a consistent view of the data for the entire duration of a transaction, without blocking other transactions that modify the same data. You might use this isolation level in situations where you need a consistent view of the data, but you don't want to use locking-based isolation levels. Read uncommitted Detail: Read uncommitted is the lowest isolation level in SQL Server, and it allows transactions to read data that has been modified but not yet committed by other transactions. This means that transactions can see "dirty" data, which can result in inconsistent results. Under Read uncommitted isolation, no shared locks are placed on data that is read, which allows transactions to read data without waiting for other transactions to release locks on that data. This can result in higher concurrency, but at the cost of data consistency. As a result, transactions operating under the Read uncommitted isolation level can also experience non-repeatable reads and phantom reads. A non-repeatable read occurs when a transaction reads the same row twice, but another transaction has modified the data in the row between the two reads. A phantom read occurs when a transaction reads a set of rows based on a certain condition, and another transaction inserts or deletes rows that satisfy the same condition before the first transaction completes. The Read uncommitted isolation level is generally not recommended for most applications since it can lead to inconsistent or incorrect data being returned.
However, it might be useful in some scenarios, such as when you need to run ad-hoc queries and do not require perfectly accurate results, or when you need to provide temporary access to a specific set of data without affecting other transactions. For example, consider a bank account with a balance of $50 and a pending automatic deposit of another $50 that has not yet committed. If we walk up to the ATM at that same moment and the balance query is allowed to read uncommitted data, we might see $100 even though the deposit has not actually committed and could still be rolled back. When To Use Read Uncommitted Read uncommitted is the lowest isolation level in SQL Server and is generally not recommended for most applications since it can lead to inconsistent data. However, there are some situations where it might be useful: Reporting: When generating reports that do not need to be 100% accurate or where data consistency is not important, the Read uncommitted isolation level can be used. In such scenarios, using Read uncommitted can help to improve the performance of the reports by allowing multiple users to read the same data simultaneously. Data analysis: When performing ad-hoc queries on a database, the Read uncommitted isolation level can be useful. Ad-hoc queries are typically one-time queries used to analyze data, and the results don't need to be 100% accurate. In such scenarios, using the Read uncommitted isolation level can improve query performance by allowing multiple users to run queries against the same data simultaneously. Data migration: When migrating large amounts of data from one database to another, the Read uncommitted isolation level can be used to speed up the process. Since data consistency is not critical during migration, using Read uncommitted can allow multiple transactions to read the same data simultaneously, which can speed up the migration process. In SQL Server, you can set the transaction isolation level to Read uncommitted at the connection level or at the transaction level. Here is an example of how you can implement the Read uncommitted isolation level in SQL Server: Connection level: You can set the transaction isolation level to Read uncommitted for the current connection by using the SET TRANSACTION ISOLATION LEVEL command. For example, the following T-SQL code sets the transaction isolation level to Read uncommitted for the current connection: SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED Transaction level: You can also set the transaction isolation level to Read uncommitted at the transaction level by using the SET TRANSACTION ISOLATION LEVEL command inside a transaction block. For example, the following T-SQL code sets the transaction isolation level to Read uncommitted for a specific transaction: BEGIN TRANSACTION SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED -- Perform some queries or modifications here COMMIT TRANSACTION To get Read uncommitted behavior for a single query, you can also use the WITH (NOLOCK) table hint. For example, the following T-SQL code uses the WITH (NOLOCK) hint to select data from the Sales table using the Read uncommitted isolation level: SELECT * FROM Sales WITH (NOLOCK) In SQL Server, you cannot set the transaction isolation level to Read uncommitted for the entire instance.
Transaction isolation levels are set at the connection or transaction level, and they apply only to the connection or transaction on which they are set. There is no supported server-wide option that changes the default isolation level for every new connection: the sp_configure 'user options' bitmask controls default SET options such as ANSI_NULLS, QUOTED_IDENTIFIER, and NOCOUNT, but it does not include a bit for the transaction isolation level. Every new connection therefore starts at the default Read committed level, and an application that needs Read uncommitted must issue SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED itself (or use NOLOCK hints) after connecting. This is usually the better design anyway: changing the isolation level broadly can have unintended consequences, as it affects every statement an application runs, so it is best to set the isolation level only on the specific connections or transactions that need it. Repeatable read Repeatable read is an isolation level in SQL Server that guarantees that any data read during a transaction will be the same if it is read again within the same transaction. This means that if a transaction reads a row and then reads the same row again later in the same transaction, it will see the same data both times, regardless of any changes made by other transactions in the meantime. In the Repeatable read isolation level, shared locks are held on all the data that is read by a transaction, and these locks are held until the transaction is completed. This prevents other transactions from modifying or deleting the data that is being read by the current transaction, thereby guaranteeing that reads within the transaction stay consistent. However, the Repeatable read isolation level does not prevent phantom reads. (A non-repeatable read, which this level does prevent, occurs when a row is read twice within a transaction but the second read returns different results because the row was modified by another transaction in the meantime.) A phantom read occurs when a transaction reads a set of rows that satisfy a certain condition, and then another transaction inserts or deletes a row that also satisfies the same condition, causing the first transaction to see a different set of rows when it reads with the same condition again. Repeatable read can be set using the SET TRANSACTION ISOLATION LEVEL command. For example, the following T-SQL code sets the transaction isolation level to Repeatable read: SET TRANSACTION ISOLATION LEVEL REPEATABLE READ It's important to note that the Repeatable read isolation level can lead to increased locking and blocking, which can negatively impact the performance of concurrent transactions. It should be used judiciously and only in cases where it is necessary to guarantee repeatable reads within a transaction.
To set the Repeatable read isolation level for a specific transaction, you can use the SET TRANSACTION ISOLATION LEVEL command in your SQL query. Here is an example of how to set the Repeatable read isolation level in a query:
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
-- your SQL statements here
COMMIT
In this example, the BEGIN TRAN statement starts a new transaction. The SET TRANSACTION ISOLATION LEVEL REPEATABLE READ command sets the transaction isolation level to Repeatable read for the current transaction. You can then execute your SQL statements between the BEGIN TRAN and COMMIT statements. Finally, the COMMIT statement ends the transaction. It's important to note that the Repeatable read isolation level can lead to increased locking and blocking, which can negatively impact the performance of concurrent transactions. Therefore, you should use it judiciously and only when necessary to guarantee repeatable reads within a transaction.
When Should You Use Repeatable Read
Repeatable Read isolation level can be useful in scenarios where you need to ensure that the data read during a transaction remains consistent throughout the transaction. For example, if you are running a financial transaction that involves multiple reads and writes, you may want to use Repeatable Read isolation level to ensure that the data being read remains consistent throughout the transaction. In general, Repeatable Read isolation level is suitable when: You need to ensure that the data being read in a transaction is not modified by other transactions during the transaction. You need to ensure that the data being read multiple times within a transaction remains consistent throughout the transaction. However, Repeatable Read isolation level can cause more blocking and deadlocks among active transactions, as it holds shared locks on all rows read until the transaction is completed. This can negatively impact the performance of concurrent transactions. Therefore, it is important to use Repeatable Read isolation level judiciously and only when necessary. In most cases, the Read Committed Snapshot isolation level can provide a good balance between data consistency and performance.
Serializable Isolation Level
Serializable isolation level is the highest level of transaction isolation in SQL Server. It provides the strongest guarantees of data consistency and prevents all concurrency issues, such as dirty reads, non-repeatable reads, and phantom reads. In Serializable isolation level, each transaction is executed as if it were the only transaction running on the system, even though there may be multiple transactions executing concurrently. It ensures that transactions execute in a serializable order, which means that the result of a set of concurrent transactions is equivalent to running them one after another in some serial order. Serializable isolation level works by placing range locks on all the data that is read during a transaction, preventing other transactions from modifying or inserting data within that range. This ensures that the data read by a transaction remains the same throughout the transaction. However, because of the locking mechanism used, Serializable isolation level can cause more blocking and deadlocks compared to other isolation levels. This can negatively impact the performance of concurrent transactions.
Serializable isolation level can be set using the SET TRANSACTION ISOLATION LEVEL command. For example:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
Serializable isolation level should only be used when it is absolutely necessary to ensure the strongest level of data consistency, and when the potential impact on performance is acceptable. You may consider using Serializable isolation level in scenarios where: You have critical transactions that must be executed with the highest level of data consistency and accuracy, such as financial transactions or healthcare data management. Your system has a low level of concurrency, and the potential impact on performance due to locking is acceptable. However, Serializable isolation level can cause more blocking and deadlocks compared to other isolation levels, which can negatively impact the performance of concurrent transactions. Therefore, it is important to use it judiciously and only when necessary. In most cases, a lower isolation level, such as Read Committed or Repeatable Read, can provide a good balance between data consistency and performance. It is important to carefully consider the requirements of your application before deciding on the appropriate isolation level to use.
Snapshot Isolation Level Detail
Snapshot isolation is a transaction isolation level in SQL Server that provides a high level of data consistency while minimizing the potential for blocking and deadlocks caused by locking. It works by using row versioning: rather than taking shared locks, a transaction reads the committed version of the data as it existed at the start of the transaction, even while other transactions are modifying that data. In Snapshot isolation, each transaction gets a consistent view of the data that it reads, and any changes made by other transactions after the transaction started are not visible to it. This means that the data read by a transaction remains consistent throughout the transaction, even if other transactions modify or insert data in the same tables. Snapshot isolation uses row versioning to keep track of changes to the data: when a transaction reads data, it gets the row versions that were committed as of the start of the transaction, and any changes made by other transactions after that point are not visible to it. Snapshot isolation can be enabled at the database level using the ALTER DATABASE command, as follows:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
Once enabled, you can set the Snapshot isolation level for a transaction using the SET TRANSACTION ISOLATION LEVEL command, as follows:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
Snapshot isolation is useful in scenarios where you need high data consistency while minimizing blocking and deadlocks caused by locking. However, it can increase the overhead of the system, as it requires additional storage (the version store in tempdb) to maintain the row versions.
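As a minimal sketch of how Snapshot isolation behaves in practice (assuming ALLOW_SNAPSHOT_ISOLATION has been turned on as shown above, and using the Sales table created later in this article), a reader under SNAPSHOT keeps seeing the data as it was when its transaction started:
-- Session 1: start a snapshot transaction and read a row
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
SELECT total_amount FROM Sales WHERE order_id = 1   -- returns the value committed before the transaction began

-- Session 2: change and commit the same row while Session 1 is still open
UPDATE Sales SET total_amount = 400.00 WHERE order_id = 1

-- Session 1: the same read still returns the original value, not 400.00
SELECT total_amount FROM Sales WHERE order_id = 1
COMMIT TRANSACTION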
Summary Graphic
Setting A Combination READ COMMITTED & SNAPSHOT Isolation
Read Committed Snapshot Isolation (RCSI) is an isolation option available in SQL Server that provides a higher level of concurrency than traditional Read Committed isolation. Under RCSI, transactions can read data that is being modified by other transactions without waiting for those transactions to commit; readers see the most recently committed version of the data. This is achieved by maintaining a snapshot of the data as it existed at the start of each statement, and allowing the statement to read from this snapshot rather than the live data. When a transaction modifies data, the changes are made to a new version of the row rather than overwriting the version readers are using, and other transactions can continue to read the previously committed data until the changes are committed. This allows multiple transactions to read and write the same data without readers and writers blocking each other, improving concurrency and performance. RCSI is particularly useful in high-concurrency environments where multiple transactions are accessing the same data simultaneously, as it can reduce the likelihood of lock contention and deadlocks. However, it's important to note that RCSI can increase the overhead of maintaining and managing the row versions, and may not be appropriate for all scenarios. The advantages and disadvantages of Read Committed Snapshot Isolation (RCSI) in SQL Server include:
Improved concurrency: RCSI allows multiple transactions to read and write the same data simultaneously without readers and writers blocking each other, reducing lock contention and deadlocks and improving overall concurrency.
Consistent results: RCSI provides consistent results for queries, as each statement reads from a consistent snapshot of committed data rather than the live data, so a query never sees partially committed changes.
Reduced blocking: Since RCSI uses row versioning instead of shared locks for reads, it can reduce blocking and improve performance by allowing transactions to continue to read data even while it is being modified by another transaction.
Faster queries: RCSI can improve query performance by reducing the overhead of acquiring and releasing locks, and by reducing the need for expensive locking mechanisms such as table locks.
Improved scalability: By improving concurrency and reducing blocking and contention, RCSI can help to improve the scalability of a database and support more users and transactions.
While Read Committed Snapshot Isolation (RCSI) offers several advantages, it also has some potential disadvantages to consider:
Increased storage requirements: RCSI uses row versioning to maintain snapshots of the data, which can increase the storage requirements of the database.
Increased overhead: RCSI can also increase the overhead of managing and maintaining the row versions, which may affect the overall performance of the database.
Increased complexity: RCSI can add complexity to the design and implementation of the database, particularly for applications that require complex transaction processing or handling of large amounts of data.
Increased risk of conflicts: RCSI may increase the risk of write conflicts, particularly in applications that have a high rate of write operations.
Potential for inconsistent data: While each statement sees consistent results, it is possible for data to appear inconsistent across statements if multiple transactions modify the same data simultaneously. This can result in non-deterministic behavior, which may be difficult to diagnose and resolve.
A sketch of how RCSI is turned on for a database follows below.
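Here is a minimal sketch of enabling RCSI at the database level. The database name MyDatabase is illustrative, and WITH ROLLBACK IMMEDIATE is one common way to obtain the exclusive access the command needs; whether that is acceptable in your environment is a judgment call:
-- Turn on statement-level snapshot reads for the default READ COMMITTED level
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;   -- rolls back open transactions so the option can take effect

-- Optionally also allow explicit SNAPSHOT transactions in the same database
ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON;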
Read Committed Snapshot Isolation (RCSI) can have some effects on the TempDB database in SQL Server. When RCSI is enabled for a database, SQL Server uses versioning to maintain snapshots of the data, in a version store which is kept in TempDB. This can increase the usage of TempDB and potentially impact its performance, particularly if the database has a high rate of read and write operations. In particular, the version store in TempDB can grow very large if there are long-running transactions or if there is a high rate of updates or deletes on heavily used tables. This can lead to contention and performance issues, particularly if the TempDB database is not sized appropriately. To mitigate these issues, it's important to monitor the size and usage of TempDB when using RCSI, and to ensure that the database is sized appropriately to handle the workload. Best practices for managing TempDB include:
Monitoring TempDB growth: Monitor the size and growth of TempDB, and proactively manage space allocation to avoid running out of disk space.
Sizing TempDB appropriately: Size TempDB appropriately based on the workload and usage patterns of the database.
Separating TempDB from user databases: Consider placing TempDB on a separate disk or storage device to avoid contention with user databases.
Configuring TempDB for optimal performance: Configure TempDB for optimal performance by setting appropriate file growth settings, enabling trace flags, and using SSDs or other high-performance storage devices.
Here is a query to help you determine how much of TempDB is being utilized:
-- Show space usage in tempdb
SELECT DB_NAME(vsu.database_id) AS DatabaseName,
vsu.reserved_page_count,
vsu.reserved_space_kb,
tu.total_page_count AS tempdb_pages,
vsu.reserved_page_count * 100. / tu.total_page_count AS [Snapshot %],
tu.allocated_extent_page_count * 100. / tu.total_page_count AS [tempdb % used]
FROM sys.dm_tran_version_store_space_usage vsu
CROSS JOIN tempdb.sys.dm_db_file_space_usage tu
WHERE vsu.database_id = DB_ID(DB_NAME());
Let's create a Sales table with 10 sample rows using T-SQL:
CREATE TABLE Sales (
order_id INT,
part_name VARCHAR(50),
date DATE,
status VARCHAR(20),
total_amount DECIMAL(10, 2)
);
INSERT INTO Sales VALUES
(1, 'Engine', '2022-04-01', 'Shipped', 350.25),
(1, 'Wheels', '2022-04-01', 'Shipped', 150.00),
(2, 'Tail Assembly', '2022-04-02', 'Pending', 225.50),
(3, 'Landing Gear', '2022-04-03', 'Shipped', 500.00),
(3, 'Wings', '2022-04-03', 'Shipped', 750.00),
(4, 'Propeller', '2022-04-04', 'Pending', 125.50),
(5, 'Seats', '2022-04-05', 'Shipped', 175.00),
(6, 'Avionics', '2022-04-06', 'Shipped', 450.75),
(7, 'Cabinetry', '2022-04-07', 'Pending', 320.25),
(8, 'Fuel Tanks', '2022-04-08', 'Shipped', 800.00);
Execute Query #1, stopping before the COMMIT:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRANSACTION
SELECT TOP 3 * FROM Sales
COMMIT
Execute Query #2, also stopping before the COMMIT:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT TOP 3 * FROM Sales
COMMIT
Both queries return their rows without blocking each other. By enabling RCSI, it's possible to run simultaneous queries that involve the same data with different isolation levels. Notably, when running two queries without committing either, the one with a SERIALIZABLE isolation level would normally block other users from accessing the same table. However, with the READ COMMITTED isolation level using snapshot scans (RCSI), such blocking doesn't occur.
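To confirm which snapshot-related options are actually enabled on a database, a quick check against sys.databases can be used (a small illustrative sketch, not part of the original walkthrough):
-- Check snapshot isolation and RCSI settings for every database
SELECT name,
snapshot_isolation_state_desc,      -- reflects ALLOW_SNAPSHOT_ISOLATION
is_read_committed_snapshot_on       -- 1 when READ_COMMITTED_SNAPSHOT is ON
FROM sys.databases
ORDER BY name;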
This design is effective for systems that predominantly read but minimally write data. The extra perk is that it doesn't necessitate modifying existing queries.
Implement Snapshot Isolation In SQL Server Databases
Turn on Snapshot isolation:
ALTER DATABASE MyOrders SET ALLOW_SNAPSHOT_ISOLATION ON;
Query #1, run without committing:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SET LOCK_TIMEOUT 10000;
BEGIN TRAN;
UPDATE Sales SET [status] = 'Expedited' WHERE order_id = 1;
WAITFOR DELAY '00:00:05';
COMMIT;
Query #2 follows the same pattern but sets a different status value for the same order. The two queries were executed simultaneously to change the order's status, but with different values. To ensure both were active at the same time, they were run with a small delay and under the Snapshot isolation level. The first query was successful, but the result of the second one was quite different: it failed with an error message indicating that something went wrong. The delicate balance of updating data in SQL Server can oftentimes result in conflicts, ultimately leading to transaction termination. In this case, the transaction was aborted due to an update conflict. The issue arose from two queries both trying to update the same row. Upon trying to commit the updated version of the row in the second query, SQL Server detected that the first transaction had also updated the same row. In such cases, it is not within the database engine's purview to decide which query should take precedence; that is a decision best left to business logic. Accordingly, SQL Server raised an update conflict error and terminated one of the transactions, ultimately preventing the potential for data inaccuracies.
Other Resources And Links
Update conflicts and locking and blocking
Stored procedures
Database Engine checks
Lock Hints
Set Lock Escalation
Create SQL Server Databases
Finally, if you need help, please reach out for a free sales meeting. If you need help beyond that, please schedule consulting hours.
- What Is Crystal Reports
Crystal Reports is a powerful business intelligence application that unlocks the value of data from multiple sources. From Oracle and SQL Server databases to Excel spreadsheets - no stone goes unturned in Crystal's comprehensive reporting capabilities. The software comes with an intuitive report designer, a high-performance engine, plus a viewer for output previewing - all integrated seamlessly into existing development environments and programming languages, such as C#, Java, and Visual Basic. Crystal Reports can be used to create a wide range of report types, including summary reports, cross-tab reports, and subreports. The Best Features Of Crystal Reports include: Pixel-Perfect Documents: Achieve flawless accuracy and visual consistency with perfectly formatted documents and forms. Flexible data connectivity: Crystal Reports can connect to a wide range of data sources, including relational databases, OLAP data sources, and XML files. It also supports direct data access and the creation of custom data connectors. Supported data sources include: Relational databases: Crystal Reports can connect to various relational databases such as Microsoft SQL Server, Oracle, MySQL, and IBM DB2. OLAP data sources: Crystal Reports can connect to online analytical processing (OLAP) data sources such as Microsoft SQL Server Analysis Services and SAP BusinessObjects BI. XML files: Crystal Reports can connect to and retrieve data from XML files. Custom data connectors: Crystal Reports allows the creation of custom data connectors to connect to data sources not natively supported. Web services: Crystal Reports can connect to and retrieve data from web services such as SOAP and REST. Excel files: Crystal Reports can connect to and retrieve data from Excel files. Flat files: Crystal Reports can connect to and retrieve data from flat files such as CSV, comma-delimited, and tab-delimited files. ODBC (Open Database Connectivity) and OLE DB (Object Linking and Embedding, Database) data sources: Crystal Reports can connect to a wide range of data sources via ODBC and OLE DB, including other databases and data sources not natively supported. Comprehensive report design: Crystal Reports provides a wide range of design options, including a variety of report templates, a drag-and-drop report design interface, and a wide range of formatting options. Advanced data visualization: Crystal Reports provides a wide range of charting options and data visualization tools, such as bar charts, pie charts, and line graphs, to help users better understand their data. Scheduled Reporting With Crystal Server: Crystal Reports can be scheduled to automatically run and deliver reports to specific users or groups at specified intervals with Crystal Server. You can get more info here from SAP. Crystal Reports Server is a server-based solution for creating, managing, and delivering reports and business intelligence (BI) content. It is built on the Crystal Reports engine, and it allows organizations to create, view, and distribute reports over the web or via email. Overall, Crystal Reports Server is an ideal solution for organizations that need to create, manage, and distribute large numbers of reports, and want to provide their users with a centralized location for accessing those reports. Business intelligence capabilities: Crystal Reports provides advanced business intelligence functionality, such as data grouping and sorting, cross-tabulation, and sub-reporting.
Integration: Crystal Reports can be integrated with a wide range of programming languages and development environments, such as C#, Java, and Visual Basic. Report distribution: Crystal Reports can be exported to a wide range of file formats, including PDF, Excel, and HTML, for distribution and sharing. Advanced security options: Crystal Reports provides advanced security options, such as user-level security and the ability to password-protect reports, to ensure that sensitive data is protected. Crystal Reports and Power BI are both business intelligence (BI) tools used for creating and analyzing reports, but they have some key differences: Crystal Reports Vs. Power BI - Purpose: Crystal Reports is primarily used for creating and generating reports from a wide range of data sources, while Power BI is focused on providing a self-service BI solution for data visualization and exploration. Data Connectivity: Crystal Reports can connect to a wide range of data sources such as relational databases, OLAP data sources, and XML files. Power BI can also connect to various data sources including cloud services, databases, and Excel files. Report Design: Crystal Reports provides a wide range of design options, including a variety of report templates, a drag-and-drop report design interface, and a wide range of formatting options. Power BI is more focused on providing interactive data visualization options; it has a more modern UI and user-friendly drag-and-drop interface, and it also allows you to create interactive and shareable dashboards. Data Analysis: Crystal Reports provides advanced business intelligence functionality, such as data grouping and sorting, cross-tabulation, and sub-reporting. Power BI has more advanced data analysis options, including machine learning and natural language processing. Integration: Crystal Reports can be integrated with a wide range of programming languages and development environments, such as C#, Java, and Visual Basic. Power BI can be integrated with other Microsoft products, such as Excel and SharePoint, as well as other platforms like PowerApps and Azure. Crystal Reports Vs. Power BI - Pricing: Crystal Reports and Power BI are two different business intelligence tools that have different pricing models. Crystal Reports is standalone software that requires a one-time purchase of a perpetual license, starting at around $300. The pricing of Crystal Reports depends on the version of the software and the number of users. Crystal Reports Server is also available as an add-on to the Crystal Reports software, starting at around $800, which allows for scheduling, distribution, and management of reports. On the other hand, Power BI is a cloud-based solution that is offered as a part of the Microsoft Power Platform, which includes other tools such as Power Apps and Power Automate. Power BI is available in several pricing tiers (see the Microsoft website for current Power BI pricing and the SAP website for current Crystal Reports pricing): Power BI Free: This is a free version that allows users to create and share reports with a limited number of data refreshes and data volume. Power BI Pro - $9.95: This is a paid version that allows users to create and share reports with more data refreshes and data volume, as well as collaborate with other users and schedule data refreshes. The pricing for Power BI Pro is based on a monthly or annual subscription.
Power BI Premium - $19.95: This is a more advanced version of Power BI Pro with more features, such as dedicated cloud resources and the ability to share and collaborate with external users. Power BI Premium is based on a monthly or annual subscription and is offered in two types: P1 and P2. In summary, Crystal Reports is one-time purchase software while Power BI is cloud-based, subscription-based software with different pricing options. Power BI has more features and is more flexible, but Crystal Reports may be more cost-effective for small businesses with limited needs. Other Blogs That Might Interest You: Power BI Dashboards Vs Reports Crystal Reports Server Paginated Reports In Power BI How Is Power BI Licensed
- What Is Power BI?
Power BI is a business intelligence (BI) and data visualization tool developed by Microsoft. It allows users to connect to various data sources, create and publish interactive dashboards and reports, and share them with others. With Power BI, you can easily connect to various data sources such as Excel, SQL Server, SharePoint, and many more, and create interactive visualizations and reports using a drag-and-drop interface. Power BI has a powerful data modeling feature that allows you to create relationships, hierarchies, and calculated columns, which can be used to create more advanced and accurate reports. Power BI also provides a wide range of pre-built visualization options, such as charts, tables, maps, and gauges, which can be used to create rich, interactive visualizations. Additionally, Power BI has a cloud-based sharing and collaboration feature that allows you to share your reports and dashboards with others. You can also create a Power BI portal and share it with others, giving them access to all the reports and dashboards in one place. Power BI can also be integrated with other Microsoft products such as Excel, SharePoint, and SQL Server Reporting Services (SSRS), which allows you to create and share Power BI reports within these applications. There Are Currently Five Main Versions Of Power BI: Power BI Desktop: This is a free Windows application that allows you to create, design, and publish interactive reports and dashboards. It is used to create and design reports, and the reports can be shared on the Power BI service, where they can be viewed and interacted with by others. Power BI Service: This is a cloud-based service that allows you to view and interact with reports and dashboards that were created using Power BI Desktop. It also includes collaboration features such as sharing and commenting, and the ability to create and view dashboards on mobile devices. Power BI Report Server: This version allows you to host and manage Power BI reports on-premises. It's designed for organizations that want to keep sensitive data on their own servers; this version is an on-premises solution that provides the capability to share and view Power BI reports within a corporate network. Power BI Premium: This is a paid version of Power BI that provides additional features and capabilities such as dedicated cloud resources, larger data capacity, and the ability to share reports with external users. It also allows for using Power BI Report Server and Power BI Embedded. Power BI Embedded: This is a service that allows developers to embed Power BI reports and dashboards into other applications, such as SharePoint or custom web applications. It is used by ISVs and organizations that want to embed Power BI reports within their custom applications. All the versions of Power BI can be integrated with other Microsoft products such as Excel, SharePoint, and SQL Server Reporting Services (SSRS), which allows you to create and share Power BI reports within these applications. The Power BI Desktop: Power BI Desktop is a Windows application that allows you to create, design, and publish interactive reports and dashboards. It is a powerful tool that enables you to connect to various data sources, build and shape data models, and create visually appealing and informative reports that can be shared with others.
With Power BI Desktop, you can: Connect to various data sources: You can easily connect to a wide variety of data sources, such as Excel, SQL Server, SharePoint, and many more, and bring data into Power BI for further analysis. Build and shape data models: Power BI Desktop has a powerful data modeling feature that allows you to create relationships, hierarchies, and calculated columns, which can be used to create more advanced and accurate reports. Create visually appealing reports: Power BI Desktop provides a wide range of pre-built visualization options, such as charts, tables, maps, and gauges, which can be used to create rich, interactive visualizations. Publish reports: Once you have created your reports and dashboards, you can publish them to the Power BI service, where they can be shared with others. Use advanced features such as R and Python scripts: Power BI Desktop allows you to use R and Python scripts within the report to extract insights from data and create calculations and predictions. Schedule data refresh: Power BI Desktop enables you to schedule data refresh for your reports and dashboards, so that the data is always up-to-date when others view them. Power BI Desktop is a free application that can be downloaded from the Microsoft website. It is used to create and design reports, and the reports can be shared on the Power BI service, where they can be viewed and interacted with by others. The Power BI Service: Power BI Service is a cloud-based service that allows you to view and interact with reports and dashboards that were created using Power BI Desktop. It is a web-based platform that allows users to access and analyze data from various sources in a visual and interactive way. With Power BI Service, you can: View and interact with reports and dashboards: You can view and interact with reports and dashboards that have been published to the Power BI service, including filtering and drilling down into data. Collaborate and share reports: You can share reports and dashboards with others within your organization, and collaborate on them by adding comments and annotations. Create and view dashboards: You can create and view interactive dashboards that can be customized and shared with others. Access data on mobile devices: You can access reports and dashboards on mobile devices using the Power BI mobile app, which is available for iOS and Android. Schedule data refresh: Power BI Service allows you to schedule data refresh for your reports and dashboards, so that the data is always up-to-date when others view them. Advanced features and capabilities with Power BI Premium: Power BI Service can be upgraded to Power BI Premium, which provides additional features and capabilities such as dedicated cloud resources, larger data capacity, and the ability to share reports with external users. Power BI Service is available as a part of the Office 365 subscription, and it can be accessed via the web from anywhere, making it easy for teams to collaborate and share insights. It is also integrated with other Microsoft products such as Excel, SharePoint, and SQL Server Reporting Services (SSRS), which allows you to create and share Power BI reports within these applications. Power BI Report Server: Power BI Report Server is an on-premises version of the Power BI service that allows you to host and manage Power BI reports within your own corporate network. It is designed for organizations that want to keep sensitive data on their own servers and need to share and view Power BI reports within a corporate network.
With Power BI Report Server, you can: Host and manage reports on-premises: You can host and manage Power BI reports on your own servers, within your own corporate network, which can provide an additional layer of security for sensitive data. View and interact with reports: You can view and interact with reports and dashboards that have been published to the Power BI Report Server, including filtering and drilling down into data. Share and collaborate on reports: You can share and collaborate on reports with others within your organization, and add comments and annotations. Create and view dashboards: You can create and view interactive dashboards that can be customized and shared with others. Schedule data refresh: Power BI Report Server allows you to schedule data refresh for your reports and dashboards, so that the data is always up-to-date when others view them. Advanced features and capabilities with Power BI Premium: Power BI Report Server can be upgraded to Power BI Premium, which provides additional features and capabilities such as dedicated cloud resources, larger data capacity, and the ability to share reports with external users. Power BI Report Server requires a separate installation and setup, but it can use the same Power BI Desktop to create, design, and publish reports. It's important to note that Power BI Report Server has a limited set of features compared to Power BI Service and Power BI Premium, and it can only be used to view and share Power BI reports within a corporate network. Power BI Premium: Power BI Premium is a paid version of Power BI that provides additional features and capabilities beyond the standard version of Power BI. It is designed for organizations that have a large number of users, or require more advanced features and capabilities. With Power BI Premium, you can: Assign dedicated cloud resources: Power BI Premium provides dedicated cloud resources, which means that you can assign a specific amount of resources to your organization, making sure that your reports and dashboards always perform well, even during peak usage times. Share reports with external users: Power BI Premium allows you to share reports with external users, such as customers, partners, or suppliers, without requiring them to have a Power BI Pro or Power BI Premium license. Create and share content packs: You can create and share content packs, which are pre-built collections of reports and dashboards, with others in your organization. Access to Power BI Report Server and Power BI Embedded: Power BI Premium allows you to use Power BI Report Server and Power BI Embedded, which are the on-premises and embedded versions of Power BI. Large data capacity: Power BI Premium provides a large data capacity, which allows you to handle large amounts of data and create more complex reports and dashboards. Schedule data refresh: Power BI Premium allows you to schedule data refresh for your reports and dashboards, so that the data is always up-to-date when others view them. Advanced features such as R and Python scripts: Power BI Premium allows you to use R and Python scripts within the report to extract insights from data and create calculations and predictions. Power BI Embedded: Power BI Embedded is a service that allows developers to embed Power BI reports and dashboards into other applications, such as SharePoint or custom web applications. It is used by ISVs (independent software vendors) and organizations that want to embed Power BI reports within their custom applications.
With Power BI Embedded, you can: Embed Power BI reports and dashboards into other applications: You can embed Power BI reports and dashboards into other applications, such as SharePoint or custom web applications, and allow users to view and interact with them without needing to leave the application. Access Power BI features and capabilities: You can access all the features and capabilities of Power BI, such as data visualization, data modeling, and data exploration, within the embedded reports and dashboards. Securely share with external users: You can share the embedded reports and dashboards with external users, such as customers, partners, or suppliers, without requiring them to have a Power BI Pro or Power BI Premium license. Control user access: You can control user access to the embedded reports and dashboards, and set permissions for different users and groups. Create and share content packs: You can create and share content packs, which are pre-built collections of reports and dashboards, with others in your organization. Schedule data refresh: Power BI Embedded allows you to schedule data refresh for your reports and dashboards, so that the data is always up-to-date when others view them. Power BI Embedded is a part of Power BI Premium, and it can be used in conjunction with Power BI Report Server and Power BI Service to provide a complete solution for creating, sharing, and viewing reports and dashboards. It is available on a capacity-based pricing model, which means you pay for the amount of capacity you need to run your embedded reports and dashboards. Other Related Blogs: How Power BI Is Licensed Dashboards Vs Reports In Power BI Buttons and Bookmarks In Power BI Why Choose Power BI











