
The Relational Model And Edgar F. Codd's 12 Rules

EF Codd's 12 Rules By Mike Bennyhoff


The relational model for databases was first proposed by Edgar F. Codd in 1970. Codd's model was based on the idea of representing data in the form of rows and columns in a table, with each row representing a unique instance of an entity and each column representing an attribute of that instance.

In the late 1970s and early 1980s, various companies and researchers began to develop implementations of the relational model, including Oracle and IBM. The Structured Query Language (SQL) was developed as a standard programming language for working with relational databases. SQL has become the standard language for interacting with relational databases, and is used by a wide range of database management systems (DBMSs). Today, the relational model and SQL are widely used in a variety of applications, including financial systems, customer relationship management systems, and many others.


Edgar F. Codd, a computer scientist who worked for IBM, formulated a set of 12 rules in 1985 to define what a fully relational database management system should provide. These rules are known as Codd's 12 rules, and they set the standard for relational databases. The rules are:

Edgar F. Codd's 12 Rules

Edgar Frank "Ted" Codd (19 August 1923 – 18 April 2003)


Edgar F. Codd Rule #1 - Information Rule

Rule #1, also known as the "information rule," states that all information in a database must be represented in one and only one way: as values in columns within rows of tables.

This rule emphasizes the importance of a consistent and unambiguous representation of data within a database. It ensures that the data is stored in a normalized manner, with no duplication of data, and that all data is represented in a single, consistent format. This makes it easier to understand, maintain and query the data, and also reduces data redundancy and inconsistencies.


It is important that this rule is followed in order to ensure data integrity, accuracy and reliability. It also makes it easier to update the data, as you only need to update the data in one location, and it will be reflected throughout the entire database.

Edgar F. Codd Rule #2 - The guaranteed access rule:

The guaranteed access rule, also known as Codd's Rule 2, states that every individual data value in a relational database must be logically accessible by combining a table name, a primary key value, and a column name. A primary key is a field or set of fields that uniquely identifies each record in a table and is used to access and retrieve a specific record. A primary key must be unique and non-null, and its values should not change over time.

This rule guarantees that every row in a table can be accessed by a unique identifier, which makes it possible to retrieve, update and delete specific data from the table. This rule allows the database to be able to identify the data in a table and helps the system to maintain the integrity of the data. It ensures that the data is consistent and accurate, and also helps to prevent duplicate data.

It also allows data to be linked across multiple tables through the use of foreign keys, which are fields that reference the primary key of another table. This allows for the creation of complex relationships between different tables, and enables the database to provide more powerful querying and reporting capabilities.


Primary Keys
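
To make this concrete, here is a minimal T-SQL sketch. The tables and columns (dbo.Customers, dbo.Orders, CustomerID, and so on) are hypothetical names chosen for illustration; the point is that the primary key uniquely identifies each row and the foreign key links the two tables.

CREATE TABLE dbo.Customers
(
    CustomerID   INT NOT NULL PRIMARY KEY,      -- unique, non-null identifier for each row
    CustomerName NVARCHAR(100) NOT NULL
);

CREATE TABLE dbo.Orders
(
    OrderID    INT NOT NULL PRIMARY KEY,
    CustomerID INT NOT NULL
        REFERENCES dbo.Customers (CustomerID),  -- foreign key linking back to Customers
    OrderDate  DATE NOT NULL
);

-- Any single value is reachable by table name + primary key value + column name:
SELECT CustomerName
FROM dbo.Customers
WHERE CustomerID = 42;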

Edgar F. Codd Rule #3 - Systematic treatment of null values:

The system must support null values, which represent missing or inapplicable data, in a systematic way that is independent of data type, and it must distinguish a null from a value of zero or an empty string.
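
A short sketch of what that distinction looks like in practice, using a hypothetical dbo.Products table:

CREATE TABLE dbo.Products
(
    ProductID       INT NOT NULL PRIMARY KEY,
    DiscountPercent DECIMAL(5, 2) NULL   -- NULL means "unknown", which is not the same as 0
);

INSERT INTO dbo.Products (ProductID, DiscountPercent)
VALUES (1, 0), (2, NULL);

-- NULL is filtered out by ordinary comparisons; you must test for it explicitly.
SELECT ProductID FROM dbo.Products WHERE DiscountPercent = 0;       -- returns product 1 only
SELECT ProductID FROM dbo.Products WHERE DiscountPercent IS NULL;   -- returns product 2 only

-- COALESCE substitutes a value when the column is NULL.
SELECT ProductID, COALESCE(DiscountPercent, 0) AS EffectiveDiscount
FROM dbo.Products;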


Edgar F. Codd Rule #4 - Dynamic online catalog:

The database must have a catalog that is accessible to authorized users, which contains metadata (data about data) describing the schema, domains, and constraints.


Also called the dynamic online catalog rule, Rule 4 requires that the description of the database itself be stored relationally, just like ordinary data, so that authorized users can query the catalog with the same relational language they use to query the data.


A catalog is a database that stores information about the structure and contents of other databases, including information about tables, fields, indexes, constraints, and relationships. The catalog is also known as the system catalog or data dictionary.

The catalog contains information about the structure of the database, such as the names and types of fields in each table, and information about the relationships between tables. It also contains information about domains, which are sets of valid values for a given field, and constraints, which are rules that ensure the consistency and integrity of the data.


The catalog is an important part of a relational database management system because it allows the system to understand and manage the structure of the data, and it also provides a way for users to understand the structure of the data in the database.


In SQL Server, there are several system catalog views and system tables that can be queried to retrieve information about tables, fields, indexes, constraints, and relationships.

To query information about tables, you can use the sys.tables view or the INFORMATION_SCHEMA.TABLES view. For example, to retrieve a list of all tables in a specific database, you can use the following query:


SELECT * FROM sys.tables;

To query information about fields, you can use the sys.columns view or the INFORMATION_SCHEMA.COLUMNS view. For example, to retrieve a list of all fields in a specific table, you can use the following query:


SELECT * FROM sys.columns WHERE object_id = OBJECT_ID('dbo.MyTable');
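
The prose above also mentions the INFORMATION_SCHEMA views and the catalog's information about constraints and relationships; here is a brief sketch of those queries as well ('MyTable' is a placeholder for your own table name):

-- ANSI-standard catalog views expose the same metadata.
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';

SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable';

-- Constraints and relationships are also described in the catalog.
SELECT name, parent_object_id, referenced_object_id
FROM sys.foreign_keys;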

Edgar F. Codd Rule #5 - Comprehensive Data Sublanguage:

The system must support at least one data manipulation language that has a well-defined syntax and semantics.


Codd Rule #5, also known as the "Comprehensive Data Sublanguage Rule," states that a relational database management system must support at least one language with a well-defined syntax and semantics that can be used both interactively and from within application programs. That one language must comprehensively cover data definition, view definition, data manipulation (retrieval, insertion, update, and deletion), integrity constraints, authorization, and transaction boundaries. In practice, SQL is the sublanguage that fills this role.
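
As a rough illustration, a single T-SQL script (with a made-up dbo.Accounts table) can define data, manipulate it, control a transaction, and grant access, all in the same sublanguage:

CREATE TABLE dbo.Accounts
(
    AccountID INT NOT NULL PRIMARY KEY,
    Balance   DECIMAL(12, 2) NOT NULL
);                                                  -- data definition

INSERT INTO dbo.Accounts (AccountID, Balance)
VALUES (1, 500.00), (2, 250.00);                    -- data manipulation

BEGIN TRANSACTION;                                  -- transaction boundaries
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
COMMIT TRANSACTION;

GRANT SELECT ON dbo.Accounts TO public;             -- authorization, in the same language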




Edgar F. Codd Rule #6 - View updating:


Codd Rule #6, also known as the "View Updating Rule," states that all views that are theoretically updatable must also be updatable by the system. A view is a virtual table that presents a specific subset of the data in the database in a specific format. Changes made through a view must be applied to the underlying base tables, and changes made to the base tables must be visible through any views that include that data. This allows users to update the database through a simplified, high-level interface rather than manipulating the base tables directly.
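
A minimal sketch, reusing the hypothetical dbo.Customers table from the earlier example: a simple single-table view is updatable, and an UPDATE issued against it changes the base table.

CREATE VIEW dbo.ActiveCustomers
AS
SELECT CustomerID, CustomerName
FROM dbo.Customers;
GO

-- Updating through the view changes the underlying base table...
UPDATE dbo.ActiveCustomers
SET CustomerName = 'Acme Corp'
WHERE CustomerID = 42;

-- ...and changes to the base table are visible through the view.
SELECT * FROM dbo.ActiveCustomers WHERE CustomerID = 42;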


Edgar F. Codd Rule #7 - High-level insert, update, and delete:

Codd Rule #7, also known as the "High-level Insert, Update, and Delete Rule," states that a relational database management system must support insert, update, and delete operations that act on whole sets of rows at a time, expressed in a high-level, non-procedural language. The user specifies the desired outcome of the operation rather than the step-by-step procedure for accomplishing it, which makes data manipulation more efficient and less error-prone.
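
For example, each of the following statements (using the hypothetical tables from earlier, plus a made-up dbo.OrderArchive table) describes an outcome for an entire set of rows, with no explicit loop:

-- Update every matching row in a single declarative statement.
UPDATE dbo.Products
SET DiscountPercent = 10
WHERE DiscountPercent IS NULL;

-- Copy a whole result set into another table, not one row at a time.
INSERT INTO dbo.OrderArchive (OrderID, CustomerID, OrderDate)
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE OrderDate < '2015-01-01';

-- Delete the same set of rows with one statement.
DELETE FROM dbo.Orders
WHERE OrderDate < '2015-01-01';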


Edgar F. Codd Rule #8 - Physical data independence:

Codd Rule #8, also known as the "Physical Data Independence Rule," states that application programs and ad hoc queries must remain logically unimpaired when changes are made to the way data is physically stored or accessed. The logical organization of the data is kept separate from its physical storage, so changes such as new hardware, a different file organization, or new access methods can be made without affecting the logical structure of the data or the way it is accessed. This makes it easier to change how data is stored without having to change the applications that rely on it.
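
A small illustration under the same hypothetical schema: adding an index is a purely physical change, and the query an application runs is identical before and after.

-- Physical change: a new access path for lookups by customer.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID);

-- The application's query does not change; only the storage/access method does.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;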


Edgar F. Codd Rule #9 - Logical data independence:

Codd Rule #9, also known as the "Logical Data Independence Rule," states that application programs must remain logically unimpaired when information-preserving changes are made to the base tables, such as splitting a table in two or combining tables. The logical structure of the data, such as the schema, is kept separate from the way applications access it, so the schema can evolve without changing the way the data is used or the applications that rely on it.

In other words, this rule allows the database design to be changed without affecting the application programs that access the database, and the application programs can continue to access the data in the same way as before, regardless of the changes made to the database design.
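
As a sketch of how this is typically achieved, suppose a hypothetical dbo.Employees table is later split into two base tables (dbo.EmployeeBase and dbo.EmployeeContact, both made-up names). Re-creating the original name as a view preserves the schema that existing applications expect.

CREATE VIEW dbo.Employees
AS
SELECT b.EmployeeID, b.EmployeeName, c.Email
FROM dbo.EmployeeBase AS b
JOIN dbo.EmployeeContact AS c
    ON c.EmployeeID = b.EmployeeID;
GO

-- Existing application queries run unchanged against the view.
SELECT EmployeeID, EmployeeName, Email
FROM dbo.Employees
WHERE EmployeeID = 7;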


Edgar F. Codd Rule #10 - Integrity independence:

Codd Rule #10, also known as the "Integrity Independence Rule," states that a relational database management system must support the ability to define integrity constraints, which are rules that specify the conditions under which the data in the database is considered valid. These constraints should be defined and enforced independently of the application programs that access the data. This means that the integrity constraints should be built into the database management system, rather than being specified and enforced by the application programs.


This allows the integrity of the data to be maintained automatically and consistently by the database management system itself, regardless of which application programs are used to access the data, rather than relying on each application to enforce the rules.
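
A brief sketch with made-up table and constraint names: the rules live in the database, so any application that violates them receives an error from the DBMS.

CREATE TABLE dbo.OrderLines
(
    OrderLineID INT NOT NULL PRIMARY KEY,
    OrderID     INT NOT NULL
        CONSTRAINT FK_OrderLines_Orders REFERENCES dbo.Orders (OrderID), -- referential integrity
    Quantity    INT NOT NULL
        CONSTRAINT CK_OrderLines_Quantity CHECK (Quantity > 0)           -- business rule in the schema
);

-- Fails the CHECK constraint no matter which program issues it
-- (and OrderID must also exist in dbo.Orders).
INSERT INTO dbo.OrderLines (OrderLineID, OrderID, Quantity)
VALUES (1, 1001, 0);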


Edgar F. Codd Rule #11 - Distribution independence:

Codd Rule #11, also known as the "Distribution Independence Rule," states that a relational database management system must support the ability to distribute the data across multiple physical locations, while maintaining the integrity and consistency of the data. This means that the database management system should be able to handle the distribution of data across different machines, networks, or storage devices, without affecting the way in which the data is accessed or the integrity of the data.

This rule allows data to be distributed, replicated, or partitioned across different machines, networks, or storage devices without changing the way applications use the data. This is particularly important in large, distributed systems, where spreading data across multiple machines improves scalability and availability.


Edgar F. Codd Rule #12 - Non-subversion rule:

If the system provides a low-level (record-at-a-time) interface, that interface must not be able to bypass the integrity rules and constraints expressed in the higher-level relational language. In other words, there is no back door that lets users or programs subvert the rules the database enforces.

These rules set the standard for relational database management systems, and modern systems such as MySQL, PostgreSQL, Oracle, and SQL Server are largely based on them, even though few products satisfy all twelve completely.


The Benefits Of The Relational Model

  • Data integrity: The relational model helps to ensure the integrity of data by using rules to specify how data can be stored and accessed. For example, foreign keys can be used to enforce relationships between tables and prevent data inconsistencies.

  • Scalability: Relational databases are designed to be scalable, meaning that they can handle large amounts of data and support a high number of users.

  • Ease of use: SQL, the standard language for interacting with relational databases, is relatively easy to learn and use, which makes it accessible to a wide range of users.

Limitations To The Relational Model:

  • Complexity: While the relational model is generally easy to understand, implementing it in a database can be complex, particularly when dealing with large amounts of data or a high number of relationships between entities.

  • Performance: Relational databases can be slower than other types of databases when it comes to certain types of queries or workloads.

  • Flexibility: The relational model is based on the concept of a fixed schema, which defines the structure of the data and the relationships between entities. This can make it difficult to handle more flexible or dynamic data structures.

The Relational Model Vs Big Data

The relational model is a way of organizing data in a database, while "big data" refers to very large datasets that are too large or complex to be processed using traditional database management systems. While the relational model is well-suited to many types of data and applications, it is not always the best choice for working with big data.

One of the key differences between the relational model and big data is the way that data is stored and processed. Relational databases typically store data in fixed-schema tables, where each row represents an instance of an entity and each column represents an attribute of that entity. In contrast, big data systems often use more flexible data storage and processing approaches, such as NoSQL databases or distributed file systems. These systems are designed to handle large volumes of data and support parallel processing, which can make them more suitable for working with big data.

Another difference is in the types of queries and workloads that are supported. Relational databases are optimized for transactional workloads, that is, many small reads and writes with strong consistency guarantees, and large-scale analytics over very large datasets can strain a single-node relational system. Big data systems, on the other hand, are often designed for a wider range of workloads, including batch processing, real-time analytics, and machine learning.


Some examples of relational database management systems (RDBMSs) include:

  • Oracle

  • MySQL

  • Microsoft SQL Server

  • PostgreSQL

NoSQL databases, also known as "not only SQL" databases, are a type of database that does not use the traditional relational model for storing and organizing data. Instead, NoSQL databases are designed to handle a wider range of data types and structures and to support horizontal scaling (i.e., the ability to add more nodes to a distributed system to handle increased workloads). Some examples of NoSQL databases include:

  • MongoDB

  • Cassandra

  • Redis

  • Couchbase


Distributed file systems are a type of file system that allows data to be stored and accessed across a distributed network of computers. These systems are designed to be scalable, fault-tolerant, and high-performance, and are often used in big data and cloud computing environments. Some examples of distributed file systems include:


  • HDFS (Hadoop Distributed File System)

  • GFS (Google File System)

  • GlusterFS

  • Ceph

SQL Server is not a NoSQL database and does not natively provide one. However, it does support some non-relational data types and scenarios. For example, it can store and query JSON (JavaScript Object Notation), a flexible, text-based format commonly used for storing and exchanging data. SQL Server also includes support for graph data and machine learning, and provides integration with Hadoop and other big data platforms.
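
A small sketch of the JSON support (available in SQL Server 2016 and later); the JSON document and its shape are made-up examples:

DECLARE @doc NVARCHAR(MAX) =
    N'{"customer": {"id": 42, "name": "Acme Corp"}, "items": [{"sku": "A1", "qty": 2}]}';

-- Validate the text and extract scalar values.
SELECT ISJSON(@doc)                        AS IsValidJson,
       JSON_VALUE(@doc, '$.customer.name') AS CustomerName;

-- Shred the JSON array into a relational rowset.
SELECT sku, qty
FROM OPENJSON(@doc, '$.items')
WITH (sku NVARCHAR(20) '$.sku',
      qty INT          '$.qty');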


There are also a number of third-party tools and services available that can be used to integrate SQL Server with NoSQL databases and other big data platforms. For example, it is possible to use SQL Server as a data source for NoSQL databases by using tools such as the MongoDB Connector for SQL Server.


Microsoft does offer a NoSQL database service in Azure. The service is called Azure Cosmos DB and it provides a fully managed, globally distributed database service that supports multiple data models, including document, key-value, graph, and column-family. Azure Cosmos DB is designed to be highly scalable, highly available, and low-latency, and it offers a variety of consistency levels to support different application requirements. It also provides integration with various Azure services, such as Azure Functions and Azure Stream Analytics, as well as with popular open source tools like Apache Spark.

Azure Cosmos DB is a fully managed service, which means that Microsoft takes care of the underlying infrastructure and maintenance tasks, allowing you to focus on building your applications. It is available on a pay-as-you-go basis, with a variety of pricing options to choose from.



Azure Synapse is a fully managed data integration, analytics, and data warehousing service that combines SQL with big data processing. It uses T-SQL, Microsoft's dialect of SQL, and integrates with big data technologies such as Apache Spark and Azure Machine Learning.

There are a few key differences between Azure Cosmos DB and Azure Synapse:

  • Data model: Azure Cosmos DB is a NoSQL database that supports multiple data models, while Azure Synapse is a SQL-based data platform.

  • Use cases: Azure Cosmos DB is well-suited for applications that require fast, scalable access to data, such as mobile apps and gaming platforms. Azure Synapse is more geared towards data integration, analytics, and data warehousing scenarios.

  • Integration: Azure Cosmos DB integrates with various Azure services, such as Azure Functions and Azure Stream Analytics, as well as with popular open source tools like Apache Spark. Azure Synapse also integrates with various Azure services, as well as with big data technologies such as Apache Spark and Azure Machine Learning.


