SQL Server Memory Optimized Table - poor performance compared to temporary table
I'm trying to benchmark memory-optimized tables against classic temporary tables in Microsoft SQL Server 2016.

SQL Server version:

Microsoft SQL Server 2016 (SP2) (KB4052908) - 13.0.5026.0 (X64)  Mar 18 2018 09:11:49   
Copyright (c) Microsoft Corporation  
Developer Edition (64-bit) on Windows 10 Enterprise 10.0 <X64> (Build 17134: ) (Hypervisor) 

I'm following steps described here: https://learn.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/faster-temp-table-and-table-variable-by-using-memory-optimization?view=sql-server-ver15.

CrudTest_TempTable 1000, 100, 100
go 1000

versus

CrudTest_memopt_hash 1000, 100, 100
go 1000

What does this test do?

  • 1000 inserts
  • 100 random updates
  • 100 random deletes

And this is repeated 1000 times.

The first stored procedure, which uses a classic temporary table, takes about 6 seconds to run.

The second stored procedure takes at least 15 seconds and usually errors out:

Beginning execution loop

Msg 3998, Level 16, State 1, Line 3
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.

Msg 701, Level 17, State 103, Procedure CrudTest_memopt_hash, Line 16 [Batch Start Line 2]
There is insufficient system memory in resource pool 'default' to run this query.

I have made the following optimizations (before them it was even worse):

  • the hash index includes both Col1 and SpidFilter

  • doing everything in a single transaction makes it run faster (however, it would be nice to run without it)

  • I'm generating random ids - without this, records from every iteration ended up in the same buckets

I haven't created a natively compiled stored procedure yet, since my results are already awful (a sketch of what one would look like follows the in-memory code below).

I have plenty of free RAM on my box and SQL Server can consume it - in other scenarios it allocates a lot of memory, but in this test case it simply errors out.
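One way to check how much memory the memory-optimized table defined below is actually holding is the standard in-memory OLTP DMV (a sketch; run it in the test database):

-- Memory consumed by the memory-optimized table and its indexes.
SELECT OBJECT_NAME(object_id)            AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID('dbo.tempTable_memopt_hash');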

To me, these results mean that memory-optimized tables cannot replace temporary tables. Do you get similar results, or am I doing something wrong?

The code that uses temporary tables is:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

DROP PROCEDURE IF EXISTS CrudTest_TempTable;
GO

CREATE PROCEDURE CrudTest_TempTable 
    @InsertsCount INT, @UpdatesCount INT, @DeletesCount INT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;

    CREATE TABLE #tempTable  
    (  
        Col1 INT NOT NULL PRIMARY KEY CLUSTERED,  
        Col2 NVARCHAR(4000),
        Col3 NVARCHAR(4000),
        Col4 DATETIME2,
        Col5 INT NOT NULL
    ); 

    DECLARE @cnt INT = 0;
    DECLARE @currDate DATETIME2 = GETDATE();


    WHILE @cnt < @InsertsCount
    BEGIN
        INSERT INTO #tempTable (Col1, Col2, Col3, Col4, Col5)
        VALUES (@cnt, 
        'sdkfjsdjfksjvnvsanlknc kcsmksmk ms mvskldamvks mv kv al kvmsdklmsdkl mal mklasdmf kamfksam kfmasdk mfksamdfksafeowa fpmsad lak',
        'msfkjweojfijm  skmcksamepi  eisjfi ojsona npsejfeji a piejfijsidjfai  spfdjsidjfkjskdja kfjsdp fiejfisjd pfjsdiafjisdjfipjsdi s dfipjaiesjfijeasifjdskjksjdja sidjf pajfiaj pfsdj pidfe',
        @currDate, 100);

        SET @cnt = @cnt + 1;
    END
    SET @cnt = 0;

    WHILE @cnt < @UpdatesCount
    BEGIN
        UPDATE #tempTable SET Col5 = 101 WHERE Col1 = cast ((rand() * @InsertsCount) as int);

        SET @cnt = @cnt + 1;
    END
    SET @cnt = 0;

    WHILE @cnt < @DeletesCount
    BEGIN
        DELETE FROM #tempTable WHERE Col1 = cast ((rand() * @InsertsCount) as int);

        SET @cnt = @cnt + 1;
    END

    COMMIT;
END
GO

The objects used in the in-memory test are:

DROP PROCEDURE IF EXISTS CrudTest_memopt_hash;
GO

DROP SECURITY POLICY IF EXISTS tempTable_memopt_hash_SpidFilter_Policy;
GO

DROP TABLE IF EXISTS tempTable_memopt_hash;
GO

DROP FUNCTION IF EXISTS fn_SpidFilter;
GO

CREATE FUNCTION fn_SpidFilter(@SpidFilter smallint)  
    RETURNS TABLE  
    WITH SCHEMABINDING , NATIVE_COMPILATION  
AS  
    RETURN  
        SELECT 1 AS fn_SpidFilter  
            WHERE @SpidFilter = @@spid;
GO

CREATE TABLE tempTable_memopt_hash  
(  
    Col1 INT NOT NULL,
    Col2 NVARCHAR(4000),
    Col3 NVARCHAR(4000),
    Col4 DATETIME2,
    Col5 INT NOT NULL,

    SpidFilter  SMALLINT    NOT NULL   DEFAULT (@@spid),  
    INDEX ix_SpidFilter NONCLUSTERED (SpidFilter),
    INDEX ix_hash HASH (Col1, SpidFilter) WITH (BUCKET_COUNT=100000),
    CONSTRAINT CHK_SpidFilter CHECK ( SpidFilter = @@spid )
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO


CREATE SECURITY POLICY tempTable_memopt_hash_SpidFilter_Policy  
    ADD FILTER PREDICATE dbo.fn_SpidFilter(SpidFilter)  
    ON dbo.tempTable_memopt_hash  
    WITH (STATE = ON); 
GO

And the stored procedure that uses them is:

CREATE PROCEDURE CrudTest_memopt_hash 
    @InsertsCount INT, @UpdatesCount INT, @DeletesCount int
AS
BEGIN

    SET NOCOUNT ON;
    BEGIN TRAN;

    DECLARE @cnt INT = 0;
    DECLARE @currDate DATETIME2 = GETDATE();

    DECLARE @IdxStart INT = CAST ((rand() * 1000) AS INT);

    WHILE @cnt < @InsertsCount
    BEGIN
        INSERT INTO tempTable_memopt_hash(Col1, Col2, Col3, Col4, Col5)
        VALUES (@IdxStart + @cnt, 
        'sdkfjsdjfksjvnvsanlknc kcsmksmk ms mvskldamvks mv kv al kvmsdklmsdkl mal mklasdmf kamfksam kfmasdk mfksamdfksafeowa fpmsad lak',
        'msfkjweojfijm  skmcksamepi  eisjfi ojsona npsejfeji a piejfijsidjfai  spfdjsidjfkjskdja kfjsdp fiejfisjd pfjsdiafjisdjfipjsdi s dfipjaiesjfijeasifjdskjksjdja sidjf pajfiaj pfsdj pidfe',
        @currDate, 100);

        SET @cnt = @cnt + 1;
    END
    SET @cnt = 0;

    WHILE @cnt < @UpdatesCount
    BEGIN
        UPDATE tempTable_memopt_hash 
        SET Col5 = 101 
        WHERE Col1 = @IdxStart + cast ((rand() * @InsertsCount) as int);

        SET @cnt = @cnt + 1;
    END
    SET @cnt = 0;

    WHILE @cnt < @DeletesCount
    BEGIN
        DELETE FROM tempTable_memopt_hash 
        WHERE Col1 = @IdxStart + cast ((rand() * @InsertsCount) as int);

        SET @cnt = @cnt + 1;
    END

    DELETE FROM tempTable_memopt_hash;
    COMMIT;
END
GO
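For reference, the natively compiled variant mentioned earlier (which I haven't actually created or tested) would look roughly like this - a minimal sketch of the insert loop against the table above; the procedure name and filler strings are illustrative:

CREATE PROCEDURE dbo.CrudTest_memopt_native
    @InsertsCount INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Natively compiled modules run as a single atomic block (no explicit
    -- BEGIN TRAN/COMMIT) and require schema-qualified object names.
    DECLARE @cnt INT = 0;
    WHILE @cnt < @InsertsCount
    BEGIN
        INSERT INTO dbo.tempTable_memopt_hash (Col1, Col2, Col3, Col4, Col5)
        VALUES (@cnt, N'filler2', N'filler3', SYSDATETIME(), 100);

        SET @cnt = @cnt + 1;
    END

    -- Clean up this session's rows, as in the interpreted version.
    DELETE FROM dbo.tempTable_memopt_hash;
END
GO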

Index stats:

table:                [dbo].[tempTable_memopt_hash]
index:                PK__tempTabl__3ED0478731BB5AF0
total_bucket_count:   131072
empty_bucket_count:   130076
empty_bucket_percent: 99
avg_chain_length:     1
max_chain_length:     3
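These stats come from sys.dm_db_xtp_hash_index_stats, via a query along the lines of Example A in the DMV documentation (column aliases are illustrative):

SELECT OBJECT_NAME(hs.object_id) AS table_name,
       i.name                    AS index_name,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       FLOOR(100.0 * hs.empty_bucket_count / hs.total_bucket_count) AS empty_bucket_percent,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
    ON i.object_id = hs.object_id AND i.index_id = hs.index_id;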

UPDATE

I'm including my final test cases and the SQL code for creating the procedures, tables, etc. I performed the test on an empty database.

SQL Code: https://pastebin.com/9K6SgAqZ

Test cases: https://pastebin.com/ckSTnVqA
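The Start/Finish lines below were presumably produced by a harness along these lines (a sketch; the actual script is in the pastebin above):

PRINT CONCAT('Start CrudTest_TempTable ', SYSDATETIME());
GO
CrudTest_TempTable 1000, 100, 100
go 1000
PRINT CONCAT('Finish CrudTest_TempTable ', SYSDATETIME());
GO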

My last run looks like this (the temp table is the fastest when it comes to tables, but I am able to achieve the fastest times overall using a memory-optimized table variable):

Start CrudTest_TempTable 2019-11-18 10:45:02.983
Beginning execution loop
Batch execution completed 1000 times.
Finish CrudTest_TempTable 2019-11-18 10:45:09.537
Start CrudTest_SpidFilter_memopt_hash 2019-11-18 10:45:09.537
Beginning execution loop
Batch execution completed 1000 times.
Finish CrudTest_SpidFilter_memopt_hash 2019-11-18 10:45:27.747
Start CrudTest_memopt_hash 2019-11-18 10:45:27.747
Beginning execution loop
Batch execution completed 1000 times.
Finish CrudTest_memopt_hash 2019-11-18 10:45:46.100
Start CrudTest_tableVar 2019-11-18 10:45:46.100
Beginning execution loop
Batch execution completed 1000 times.
Finish CrudTest_tableVar 2019-11-18 10:45:47.497
Fisticuffs answered 14/11, 2019 at 13:59 Comment(21)
Please update your question with the result of select @@version – Overblown
Where's the code that uses in-memory tables? How were they created? The fact that you get an error is a very strong indication that something is wrong with that code – Sisak
@Overblown Added the SQL version at the beginning of the question – Fisticuffs
@PanagiotisKanavos Check create table tempTable_memopt_hash – Fisticuffs
@Fisticuffs Posting that much code means most of it is hidden. – Sisak
@Fisticuffs There's no primary key in the in-memory table? What's the purpose of fn_SpidFilter? – Sisak
@PanagiotisKanavos I haven't created a PK because it would have to be a compound primary key (Col1, SpidFilter) - I can try adding it – Fisticuffs
@PanagiotisKanavos Please check the MS docs I linked - the point is that you need a single global memory-optimized table to replace temporary tables. – Fisticuffs
@Fisticuffs You can't compare performance when the tables are different, and when you add a CHECK constraint on a column with very low selectivity. – Sisak
@PanagiotisKanavos - Changing the ix_hash line to PRIMARY KEY NONCLUSTERED HASH (Col1, SpidFilter) WITH (BUCKET_COUNT=100000) does not make any difference in my tests. About the CHECK constraint - that's what Microsoft recommends. I'm just following the MS docs for replacing temporary tables with memory-optimized tables - do you know of other recommendations? – Fisticuffs
Please post in your question the result of the query from Example A of this BOL article: learn.microsoft.com/en-us/sql/relational-databases/… – Overblown
You have a wrong bucket count and a wrong hash index. I'll elaborate more on it, but first please add the result of the query from the link above – Overblown
@empti That column is added to emulate per-session tables. In-memory tables aren't temporary tables. They are a different type of object. In fact, the huge number of buckets and the very low selectivity of that index result in a lot of scanning over empty buckets. The article you follow is not a recommendation; it's an example for SQL Server 2019 that shows how you can emulate temp tables with in-memory tables. That's not what in-memory tables are for. – Sisak
@empti The actual guidelines are here - guidelines for hash indexes and nonclustered in-memory indexes. Empty buckets impact the performance of full index scans. The in-memory table has 100K buckets when only 1000 will ever be used – Sisak
@Overblown Stats added – Fisticuffs
@empti Empty bucket percent: 99 – Sisak
@Overblown - Changed the bucket count to 1000 - still got 22 seconds – Fisticuffs
Your index cannot be used at all: it's a HASH index, and you should filter on BOTH fields; otherwise it cannot be used and you always get a scan – Overblown
@Overblown - If I change to PRIMARY KEY NONCLUSTERED (Col1, SpidFilter), I get 21 seconds. – Fisticuffs
To be able to search on Col1, your hash index should be defined on Col1 only – Overblown
@Overblown - It's filtered on both fields due to the security policy. Without it (hash only on Col1) it worked even slower. If I drop the policy (remove it from the table definition) and add explicit conditions on SpidFilter, I get about 17 seconds. Do you think that by using memory-optimized tables I can get performance improvements over temp tables, or are they just for limiting IO on tempdb? I think the article claimed it should be faster (it's even in the title). – Fisticuffs

IMHO, the test in the OP cannot show the advantages of memory-optimized tables, because the greatest advantage of these tables is that they are lock- and latch-free. This means your inserts/updates/deletes take no locks at all, which permits concurrent changes to these tables.

But the test does not include concurrent changes at all; the code shown makes all the changes in one session.
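To actually exercise that concurrency, you could drive the procedure from many sessions at once, for example with ostress from the RML Utilities (a sketch, assuming RML Utilities is installed; the server and database names are illustrative):

REM 8 concurrent connections, each running the procedure 100 times
ostress.exe -S. -E -dTestDb -n8 -r100 -Q"EXEC dbo.CrudTest_memopt_hash 1000, 100, 100"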

Another observation: the hash index defined on the table is wrong, since you search on only one column while the hash index is defined on two. A hash index on two columns means the hash function is applied to both values together; because you filter on only one of them, the hash index simply cannot be used for the seek.
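A sketch of a hash index that can seek on Col1 alone (SQL Server 2016 supports adding indexes to memory-optimized tables via ALTER TABLE; the index name and bucket count are illustrative, sized near the ~1000 rows that live in the table at a time):

-- A point lookup on Col1 can only use a hash index whose key is exactly Col1.
-- Bucket counts are rounded up to the next power of 2 anyway.
ALTER TABLE dbo.tempTable_memopt_hash
    ADD INDEX ix_hash_col1 HASH (Col1) WITH (BUCKET_COUNT = 1024);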

Do you think that by using memory-optimized tables I can get performance improvements over temp tables, or are they just for limiting IO on tempdb?

Memory-optimized tables are not meant to be a substitute for temporary tables. As already mentioned, you'll see the benefit in a highly concurrent OLTP environment, whereas a temporary table is visible only to your session, so there is no concurrency at all.

Eliminate latches and locks. All In-Memory OLTP internal data structures are latch- and lock-free. In-Memory OLTP uses a new multi-version concurrency control (MVCC) to provide transaction consistency. From a user standpoint, it behaves in a way similar to the regular SNAPSHOT transaction isolation level; however, it does not use locking under the hood. This schema allows multiple sessions to work with the same data without locking and blocking each other and improves the scalability of the system allowing fully utilize modern multi-CPU/multi-core hardware.

Cited book: Pro SQL Server Internals by Dmitri Korotkevitch

What do you think about the title "Faster temp table and table variable by using memory optimization"?

I opened this article and saw these examples (in the order they appear in the article):

  • A. Basics of memory-optimized table variables
  • B. Scenario: Replace global tempdb ##table
  • C. Scenario: Replace session tempdb #table

A. I use table variables only in cases where they contain very few rows. Why should I even care about those few rows? (A sketch of this scenario follows below.)

B. Replace global tempdb ##table. I just don't use global temp tables at all.

C. Replace session tempdb #table. As already mentioned, a session tempdb #table is not visible to any other session, so what is the gain? That the data doesn't go to disk? Maybe you should think about a faster SSD for your tempdb if you really have problems with it. Starting with 2014, tempdb objects don't necessarily go to disk even in the case of bulk inserts; in any case, I even have RCSI enabled on my databases and have no problems with tempdb.
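For scenario A, a memory-optimized table variable is declared through a memory-optimized table type; a minimal sketch (the type name and columns are illustrative, and the database needs a MEMORY_OPTIMIZED_DATA filegroup):

-- Memory-optimized table types must declare at least one index.
CREATE TYPE dbo.CrudTestType AS TABLE
(
    Col1 INT NOT NULL,
    Col5 INT NOT NULL,
    INDEX ix_col1 HASH (Col1) WITH (BUCKET_COUNT = 1024)
) WITH (MEMORY_OPTIMIZED = ON);
GO

-- Usage: the variable lives entirely in memory, with no tempdb IO.
DECLARE @t dbo.CrudTestType;
INSERT INTO @t (Col1, Col5) VALUES (1, 100);

Notably, this matches the OP's update, where the memory-optimized table variable test (CrudTest_tableVar) was the fastest of all four.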

Overblown answered 14/11, 2019 at 15:8 Comment(4)
What do you think about the title "Faster temp table and table variable by using memory optimization" learn.microsoft.com/en-us/sql/relational-databases/…? It specifically says: "If you use temporary tables, table variables, or table-valued parameters, consider conversions of them to leverage memory-optimized tables and table variables to improve performance." Do you think it's possible? – Fisticuffs
>>>Do you think it's possible<<< I didn't see any test confirming it, but I saw very good results with concurrent changes to the same table. I'll update my answer if I find the results of those tests. At the moment I see only a lot of limitations vs temp table usage... – Overblown
I see the benefits of memory-optimized tables. I'm just focusing now on the issue of replacing temp tables with memory-optimized tables as the article describes. In my test case they are simply a lot slower, and it's not worth it (except maybe if you really need less IO on tempdb). – Fisticuffs
@Fisticuffs >>>but I don't think it's cool to start with "Your problem is that you do not understand ..."<<< You are right, I'll change it – Overblown

You will likely not see a performance improvement, except in very special applications. SQL Server dabbled with things like 'pin table' in the past, but the optimizer, choosing which pages stay in memory based on real activity, is probably as good as it gets for almost all cases. This has been performance-tuned over decades. I think 'in-memory' is more a marketing touchpoint than of practical use. Prove me wrong, please.

Regimen answered 22/6, 2021 at 16:23 Comment(0)
