Why does Azure Database perform better with transactions
We decided to use a micro-ORM against an Azure SQL Database. As our business only needs inserts and selects, we removed all code-managed SqlTransactions (there are no concurrency issues on the data).

Then we noticed that our Azure SQL Database instance responded very slowly. The "rpc completed" event occurred with delays hundreds of times longer than the time needed to run a simple SQL statement.

Next, we benchmarked our code with EF6 and saw that the server responded very quickly. Since EF6 implements a built-in transaction, we restored the SqlTransaction (ReadCommitted) in the micro-ORM, and everything was fine again.

Does Azure SQL Database require an explicit, code-managed SqlTransaction? How does the SqlTransaction influence Azure SQL Database performance? Why was it implemented that way?

EDIT: I am going to post more precise information about the way we collected traces. Our Azure event logs seem to be expressed sometimes in nanoseconds and sometimes in milliseconds, which seems odd.

Youngyoungblood answered 5/2, 2016 at 18:51 Comment(5)
Are you batching any queries inside one transaction? Is the transaction isolation level different from the default (RC / RCSI) when using explicit transactions? – Incogitant
What is the database tier you are using? – Adulterine
The type of transaction we start against the SQL Database is Read Committed, and we have only one statement per transaction. – Melancholia
Add OPTION (READCOMMITTEDLOCK) at the end of the query and try again. Every SQL statement runs in an implicit transaction if you don't manually create one. By default, SQL Database runs with SNAPSHOT ISOLATION (which in your case you override to READ COMMITTED when you wrap the statement in a transaction). You might be running into resource contention because of the versioning overhead of SNAPSHOT ISOLATION mode. – Torticollis
Right now I don't think an answer can be given. Can you demonstrate the performance difference with code? Also, what kind of latency numbers are you seeing? Please post the benchmark code as well (many benchmarks are broken). We need more information. – Koodoo
If I understand what you are asking correctly, batching multiple SQL statements into one transaction will give you better results on any DBMS. Committing after every insert/update/delete has a huge overhead on a DBMS that is not designed for it (such as MyISAM on MySQL).

It can even cause bad flushes to disk and thrashing if you do it too much. I once had a programmer committing thousands of entries to one of my databases every minute, each as its own transaction, and it brought the server to a halt.

InnoDB, one of the two most popular storage engines for MySQL, can only commit 20-30 transactions a second (or maybe it was 2-3; it's been a long time), as each one is flushed to disk at the end for ACID compliance.
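The commit-per-statement overhead described above can be sketched with Python's built-in sqlite3 module. This is not Azure SQL Database or MySQL, and the table name and row counts are purely illustrative, but the same principle applies: every commit forces durable work, so one commit per statement is far more expensive than one commit per batch.

```python
import os
import sqlite3
import tempfile
import time

# File-backed database so each commit actually hits the disk.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit; we manage transactions ourselves
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

# Pattern 1: one transaction (and one flush to disk) per insert.
start = time.perf_counter()
for i in range(500):
    conn.execute("BEGIN")
    conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x"))
    conn.execute("COMMIT")
per_statement = time.perf_counter() - start

# Pattern 2: all inserts batched into a single transaction.
start = time.perf_counter()
conn.execute("BEGIN")
for i in range(500):
    conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x"))
conn.execute("COMMIT")
batched = time.perf_counter() - start

print(f"500 per-statement commits: {per_statement:.3f}s")
print(f"500 inserts, one commit:   {batched:.3f}s")
```

On a file-backed database the per-statement loop is typically slower by an order of magnitude or more, because each COMMIT must be made durable before the next statement runs.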

Conventionalize answered 13/2, 2016 at 8:31 Comment(0)
