
Delayed Durability in SQL Server 2014 – Part 2 – Performance Testing

Introduction

In Part 1 of this series we took a look at some background information on the introduction of Delayed Durability in SQL Server 2014. In this post we take a rough and ready look at the sorts of performance improvements one might expect with Delayed Durability enabled and in Part 3 we take a look at whether your workload is suitable for running in a Delayed Durability environment.

The Tests

The basic principle behind the test is to ensure consistency between test runs. To that end I ran the following script to set up a clean database before each run:


USE Master
GO

-- Drop any copy of the database left over from a previous run
IF DB_ID('DelayedDurability') IS NOT NULL
	DROP DATABASE DelayedDurability

CREATE DATABASE DelayedDurability ON PRIMARY(
	NAME = 'DelayedDurability_Data',

	-- You may need to change this
	FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\DelayedDurability.mdf',

	-- We want the data size to be large enough to not require an autogrowth op
	SIZE = 200MB,
	FILEGROWTH = 100MB

)
LOG ON (
	NAME = 'DelayedDurability_Log',

	-- You may need to change this
	FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\DelayedDurability.ldf',

	-- We want the log size to be large enough to not require an autogrowth op
	SIZE = 200MB,
	FILEGROWTH = 100MB

)
GO

USE DelayedDurability

GO

CREATE TABLE tblData(
	ID INT IDENTITY(1, 1),
	Data1 VARCHAR(50),
	Data2 INT
)

CREATE CLUSTERED INDEX PK_tblData ON tblData(ID)
GO

CREATE NONCLUSTERED INDEX IX_tblData_Data2 ON tblData(Data2)

GO

This creates a new, very simple database with one key attribute – the data file and log file are set larger than they need to be. This should ensure that no time is spent performing an auto growth operation on either the data file or log.
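
If you want to double-check that assumption, a quick query (my addition, not part of the original setup script) against sys.database_files after a run will confirm both files are still at their initial 200MB:

USE DelayedDurability
GO

-- Both files should still report 200MB, confirming no autogrowth occurred during the run
SELECT
	name,
	type_desc,
	size * 8 / 1024 AS size_mb	-- size is reported in 8KB pages
FROM sys.database_files
GO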

On test runs where Delayed Durability was enabled I set it at the database level to FORCED – meaning every commit returns without waiting for the log flush to disk, and there is no need to request delayed durability on each individual COMMIT. This was done to simplify the code and is not necessarily what you should do in the real world.
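
For reference, this is how the database-level setting is flipped between runs (the Delayed Durability runs used FORCED; the out-of-the-box runs simply keep the default of DISABLED). The ALLOWED option, where each transaction opts in at commit time, is shown commented out for completeness:

-- Delayed Durability runs: every commit in the database is delayed durable
ALTER DATABASE DelayedDurability SET DELAYED_DURABILITY = FORCED
GO

-- Fully durable runs: back to the default behaviour
-- ALTER DATABASE DelayedDurability SET DELAYED_DURABILITY = DISABLED

-- Alternatively, ALLOWED lets individual transactions opt in on the commit itself:
-- ALTER DATABASE DelayedDurability SET DELAYED_DURABILITY = ALLOWED
-- ...
-- COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)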

The test script was fairly primitive – but good enough for a simple brute force test:


USE DelayedDurability

SET NOCOUNT ON

DECLARE @counter AS INT = 0

PRINT (CONVERT(VARCHAR, SYSDATETIME(), 121))

	WHILE (@counter < 1000000)
		BEGIN

			INSERT INTO tblData (Data1, Data2)
			VALUES('My Data', @counter)

			SET @counter = @counter + 1

		END

GO
PRINT (CONVERT(VARCHAR, SYSDATETIME(), 121))
GO

In effect this fires 1 million individual INSERT statements, each committed as its own autocommit transaction, as quickly as the server can process them.
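
If you would rather not subtract the two printed timestamps by hand, a small variation on the script (my sketch, same workload) captures the elapsed time in a single batch:

USE DelayedDurability
GO

SET NOCOUNT ON

DECLARE @start DATETIME2 = SYSDATETIME()
DECLARE @counter INT = 0

	WHILE (@counter < 1000000)
		BEGIN

			INSERT INTO tblData (Data1, Data2)
			VALUES('My Data', @counter)

			SET @counter = @counter + 1

		END

-- Elapsed wall-clock time for the run, in seconds
PRINT CONCAT('Elapsed seconds: ', DATEDIFF(SECOND, @start, SYSDATETIME()))
GO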

The performance difference was pretty stark.

Operating out of the box in what I’ll call ACID-compliant mode, 1 million inserts took an average of:

360 seconds

With Delayed Durability Enabled the same operation took an average of:

27 seconds

which by my reckoning is roughly a 13x improvement in throughput.

Hey – I get far better/far worse results than these!

Now – I am not trying to convince you that if you enable Delayed Durability “you will see a 10-fold improvement in write performance”. In fact you almost certainly won't see that size of improvement, because I deliberately designed the test to maximise the impact of the DELAYED_DURABILITY setting. There are a huge number of variables at play that will affect your numbers. Off the top of my head these might include:

1. You are running the released version of SQL Server 2014 rather than the CTP2 build these tests ran on
2. You are not running your tests on my laptop!
3. Your write workload may be substantially different to my deliberately primitive test workload – you might already be bundling up large operations into single transactions, for example (see the sketch after this list)
4. You might have a snazzy fibre channel SAN with huge amounts of cache sitting in front of SSDs, in which case I’m pretty jealous of you right now.
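
To illustrate point 3, this is roughly what a batched version of the same workload looks like – one explicit transaction wrapped around the whole loop, so the session only waits for a synchronous log flush once, at the final commit:

USE DelayedDurability
GO

SET NOCOUNT ON

DECLARE @counter INT = 0

-- A single explicit transaction around all 1 million inserts:
-- the commit (and the wait for the log flush) happens once, not a million times
BEGIN TRANSACTION

	WHILE (@counter < 1000000)
		BEGIN

			INSERT INTO tblData (Data1, Data2)
			VALUES('My Data', @counter)

			SET @counter = @counter + 1

		END

COMMIT TRANSACTION
GO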

Clearly my tests are hardly scientific and were deliberately set up to maximise the effect of batching up large numbers of very small transactions. In the real world your mileage will vary considerably. That said, I think it's fair to say that under certain circumstances the performance improvement from Delayed Durability can be significant.
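
If you want to see the mechanism at work rather than just the elapsed time, one rough approach (again my addition, not part of the original tests) is to snapshot the log file's cumulative I/O statistics before and after a run and diff the two – with Delayed Durability enabled you should see far fewer, larger log writes for the same million inserts:

-- Cumulative write activity against the DelayedDurability log file
-- (file_id 2 is the log in this simple single-data-file database).
-- Run before and after a test and compare the two snapshots.
SELECT
	num_of_writes,
	num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID('DelayedDurability'), 2)
GO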

Be sure to read Part 1 and Part 3 for some background and guidance on using Delayed Durability.

Cheers


3 thoughts on “Delayed Durability in SQL Server 2014 – Part 2 – Performance Testing”

  1. Hello, thanks for a great post.
    If you enclose your insert loop in an explicit transaction, even with full durability it will be as fast as with delayed durability. Why?

    1. Hi Hermes

      If I understand your question correctly – it will be because only one large transaction is being committed, rather than 1 million individual transactions.

      This results in far less “management” overhead: SQL Server no longer has to track and control 1 million individual transactions, each one needing to complete before the next can begin.

      The other thing that springs to mind is that with one large commit, the IO system gets to write to the transaction log in fewer, larger chunks rather than as a stream of tiny, intermittent flush requests – and the session only waits for a synchronous log flush once, at the final commit. Fewer, larger writes are normally considerably faster than very high volumes of small ones, and a single large transaction is far more predisposed to producing them than a million tiny transactions.

      Best regards

      s
