ACID Properties
ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee that database transactions are processed reliably.
Atomicity:
Atomicity requires that each transaction is “all or nothing”: if one part of the transaction fails, the entire transaction fails, and the database state is left unchanged.
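A minimal T-SQL sketch of atomicity (the `Accounts` table and its columns are hypothetical):

```sql
-- Move money between two accounts: either both updates persist, or neither does.
BEGIN TRANSACTION;
BEGIN TRY
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
    COMMIT TRANSACTION;   -- both changes become permanent together
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- any failure undoes both changes
END CATCH;
```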
Consistency:
The consistency property ensures that any transaction will bring the database from one valid state to another.
Isolation:
The isolation property ensures that the concurrent execution of transactions results in a system state that could have been obtained if transactions were executed serially.
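For example, the strictest isolation level can be requested explicitly in T-SQL (the `Orders` table is hypothetical):

```sql
-- Under SERIALIZABLE, this transaction's reads behave as if no other
-- transaction ran concurrently with it.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM Orders WHERE Status = 'Pending';
-- Rows that would change this result are locked until the transaction ends.
COMMIT TRANSACTION;
```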
Durability:
Durability means that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors.
Write Ahead Logging:
At the time a modification is made to a page in the buffer, a log record is built in the log cache recording the modification. This log record must be written to disk before the associated dirty page is flushed from the buffer cache to disk. SQL Server has logic that prevents a dirty page from being flushed before the associated log record. Because log records are always written ahead of the associated data pages, the process is called write-ahead logging.
Dirty pages:
When a page is read from disk into memory, it is regarded as a clean page because it is exactly the same as its counterpart on disk. Once the page has been modified in memory, however, it is marked as a dirty page: it now differs from the on-disk copy.
Checkpoints:
A checkpoint is a point in time, created by the checkpoint process, at which SQL Server can be sure that any committed transactions have had all their changes written to disk. This checkpoint then becomes the marker from which database recovery can start. The checkpoint process ensures that any dirty pages associated with a committed transaction are flushed to disk.
Automatic
An automatic checkpoint is the most common kind. It is issued automatically in the background, governed by the recovery interval server configuration option, which is defined at the server level. By default this option is 0 (zero), which yields a target recovery interval of about one minute. Automatic checkpoints are throttled based on the number of outstanding writes and on whether the database engine detects write latency rising above 20 milliseconds.
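The server-level recovery interval can be inspected and changed with `sp_configure`; a sketch (changing this setting on a production server is rarely advisable):

```sql
-- 'recovery interval' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 0 is the default, giving a target recovery interval of about one minute.
EXEC sp_configure 'recovery interval', 0;
RECONFIGURE;
```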
Indirect
Indirect checkpoints were added in SQL Server 2012. They also run in the background, but on the basis of a user-specified recovery target time for each database. If TARGET_RECOVERY_TIME has been set to a value greater than 0 with ALTER DATABASE, it completely overrides the server-level recovery interval and disables automatic checkpoints for that database. Indirect checkpoints tend to be faster and provide a more predictable recovery time than automatic checkpoints.
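Enabling indirect checkpoints for a database looks like this (the database name `SalesDB` is hypothetical):

```sql
-- Target recovery within roughly 60 seconds; a value > 0 switches the
-- database from automatic to indirect checkpoints.
ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;
-- Setting it back to 0 reverts the database to automatic checkpoints.
```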
Manual
As the name implies, a manual checkpoint is issued like any other T-SQL statement and, once issued, runs to completion. Note that a manual checkpoint runs in the current database only. An optional checkpoint_duration argument, specified in seconds, requests the amount of time in which the checkpoint should complete.
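A sketch of issuing a manual checkpoint (the database name is hypothetical):

```sql
USE SalesDB;   -- CHECKPOINT acts on the current database only
-- The optional argument requests completion within roughly 10 seconds.
CHECKPOINT 10;
```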
Internal
The user cannot control internal checkpoints. They are issued by various server operations, such as backups and database snapshot creation, to ensure that the image taken is synchronized with the current state of the log.
Lazy writer:
The lazy writer also flushes dirty pages to disk. SQL Server constantly monitors memory usage to assess resource contention (or availability); its job is to make sure that a certain amount of free space is available at all times. When it notices resource contention, it triggers the lazy writer to free up pages in memory by writing dirty pages out to disk. The lazy writer uses the Least Recently Used (LRU) algorithm to decide which pages to flush.
Log flush:
A log flush also writes to disk, but the difference is that it writes log records from the log cache into the transaction log file (the LDF file). Every transaction that changes data pages also generates corresponding log records in the log cache. When a transaction commits, those log records are flushed from the log cache down to the physical LDF file.
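Since SQL Server 2014, this flush-on-commit behavior can be relaxed with delayed durability, which lets transactions commit before their log records reach disk in exchange for a small window of potential data loss. A sketch (the database name is hypothetical):

```sql
-- Permit, but do not force, delayed-durable commits in this database.
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;
-- A transaction can then opt in on a per-commit basis:
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```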