Entity Framework 6 and pessimistic concurrency

I'm working on a project to gradually phase out a legacy application. In the process, as a temporary solution, we integrate with the legacy application through the database.

The legacy application uses transactions with the serializable isolation level. Because of the database integration with the legacy application, I am, for the moment, best off using the same pessimistic concurrency model and serializable isolation level.

These serializable transactions should not only wrap the SaveChanges statement but also include some reads of data.

I do this by (see the sketch after this list):

  • Creating a TransactionScope with the serializable isolation level around my DbContext
  • Creating a DbContext
  • Doing some reads
  • Making some changes to objects
  • Calling SaveChanges on the DbContext
  • Committing the transaction scope (thus saving the changes)
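
A minimal sketch of that pattern, assuming EF6 with System.Transactions (LegacyContext, Orders and Status are hypothetical names, for illustration only):

using System.Linq;
using System.Transactions;

private static void DoWork()
{
    var options = new TransactionOptions
    {
        IsolationLevel = System.Transactions.IsolationLevel.Serializable
    };

    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    using (var db = new LegacyContext()) // hypothetical DbContext
    {
        // 1. Reads happen inside the serializable transaction.
        var order = db.Orders.Single(o => o.Id == 42);

        // 2. Changes to tracked objects.
        order.Status = "Processed";

        // 3. Save, then commit by completing the scope.
        db.SaveChanges();
        scope.Complete();
    }
}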

I am under the impression that this wraps all my reads and writes in one serializable transaction, which is then committed.

I consider this a form of pessimistic concurrency.

However, this article, https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application, states that EF does not support pessimistic concurrency.

My questions are:

  • A: Does EF support my way of using a serializable transaction around reads and writes?
  • B: Does wrapping the reads and writes in one transaction guarantee that my read data is unchanged when committing the transaction?
  • C: This is a form of pessimistic concurrency, right?
Spier answered 11/5, 2017 at 10:5 Comment(0)

One way to achieve pessimistic concurrency is to use something like this:

var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.Serializable,
    Timeout = TimeSpan.FromSeconds(10)
};

using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
{
    // ... stuff here ...

    scope.Complete(); // commit; without this the transaction rolls back on Dispose
}

In VS2017 it seems you have to right-click TransactionScope and have it add a reference to: Reference Assemblies\Microsoft\Framework.NETFramework\v4.6.1\System.Transactions.dll

However, if two threads attempt to increment the same counter, one succeeds while the other throws a timeout after 10 seconds. The reason is that when both proceed to save their changes, each needs to upgrade its lock to exclusive, but neither can, because the other transaction is still holding a shared lock on the same row. SQL Server detects the deadlock after a while and fails one of the transactions to resolve it. Failing one transaction releases its shared lock, so the second transaction can upgrade its shared lock to an exclusive lock and proceed with execution.
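
For illustration, a minimal sketch of the increment pattern that deadlocks this way (Context, Counters and Value are hypothetical names), run concurrently on two threads:

// Both threads take a shared lock on the row when reading; each then blocks
// trying to upgrade to an exclusive lock at SaveChanges, so neither proceeds
// until SQL Server kills one of them.
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
using (var db = new Context())
{
    var counter = db.Counters.Single(c => c.Id == 1); // shared lock taken here
    counter.Value++;
    db.SaveChanges(); // needs an exclusive lock -> deadlock with the other thread
    scope.Complete();
}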

The way out of this deadlocking is to give the database an UPDLOCK hint, using something such as:

private static TestEntity GetFirstEntity(Context context)
{
    // UPDLOCK takes an update lock at read time, so the later upgrade to an
    // exclusive lock for the write cannot deadlock with another reader.
    return context.TestEntities
                  .SqlQuery("SELECT TOP 1 Id, Value FROM TestEntities WITH (UPDLOCK)")
                  .Single();
}
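
A hedged usage sketch: called inside the serializable scope above, the UPDLOCK read takes an update lock up front, so the second transaction blocks at the read instead of deadlocking on the lock upgrade later:

var entity = GetFirstEntity(context); // update lock taken at read time
entity.Value++;
context.SaveChanges();                // upgrade to exclusive succeeds
scope.Complete();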

This code came from Ladislav Mrnka's blog, which now appears to be unavailable. The other alternative is to resort to optimistic locking.

Divalent answered 26/7, 2017 at 10:27 Comment(2)
Link is malware or something. – Cawnpore
Thanks @Cawnpore, I've replaced the link with a quick summary. – Divalent

The document states that EF does not have built-in pessimistic concurrency support. But this does not mean you can't have pessimistic locking with EF. So YOU CAN HAVE PESSIMISTIC LOCKING WITH EF!

The recipe is simple:

  • Use transactions (not necessarily serializable, since that leads to poor performance); read committed is OK to use... but it depends.
  • Make your changes and call dbContext.SaveChanges().
  • Lock your table: execute the T-SQL manually, or feel free to use the code attached below (a usage sketch follows it).
  • The given T-SQL command with those hints will keep the table locked for the duration of the given transaction.
  • There's one thing you need to take care of: your loaded entities might be stale by the time you take the lock, so all entities from the locked table should be re-fetched (reloaded).

I did a lot of pessimistic locking, but optimistic locking is better. You can't go wrong with it.

A typical example where pessimistic locking can't help is a parent-child relation, where you might lock the parent and treat it like an aggregate (so you assume you are the only one with access to the child too). If another thread tries to access the parent object, it will be blocked until the other thread releases the lock on the parent table. But with an ORM, any other coder can load the child independently, and from that point two threads will make changes to the child object... With pessimistic locking you might mess up the data; with optimistic locking you'll get an exception, and you can reload valid data and try to save again...
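
As a hedged sketch of that optimistic alternative in EF6 (the Child entity and the db/child variables are illustrative; a rowversion column is assumed):

using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure;

public class Child
{
    public int Id { get; set; }
    public string Name { get; set; }

    [Timestamp] // EF6 adds this column to the WHERE clause of every UPDATE
    public byte[] RowVersion { get; set; }
}

// If someone else changed the row meanwhile, the UPDATE matches zero rows:
try
{
    child.Name = "new name";
    db.SaveChanges();
}
catch (DbUpdateConcurrencyException)
{
    db.Entry(child).Reload(); // reload valid data, then try to save again
}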

So, the table-locking code:

using System;
using System.Data.Entity;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Linq;

public static class DbContextSqlExtensions
{
    // Takes an exclusive table lock (TABLOCKX) that HOLDLOCK keeps until the
    // current transaction ends, so this must be called inside a transaction.
    public static void LockTable<Entity>(this DbContext context) where Entity : class
    {
        var tableWithSchema = context.GetTableNameWithSchema<Entity>();
        context.Database.ExecuteSqlCommand(
            string.Format("SELECT null as dummy FROM {0} WITH (TABLOCKX, HOLDLOCK)", tableWithSchema));
    }
}

public static class DbContextExtensions
{
    // Resolves the physical "schema.table" name of an entity from the EF
    // storage model metadata.
    public static string GetTableNameWithSchema<T>(this DbContext context)
        where T : class
    {
        var entitySet = GetEntitySet<T>(context);
        if (entitySet == null)
            throw new Exception(string.Format("Unable to find entity set '{0}' in edm metadata", typeof(T).Name));

        return GetStringProperty(entitySet, "Schema") + "." + GetStringProperty(entitySet, "Table");
    }

    private static EntitySet GetEntitySet<T>(DbContext context)
    {
        var entityName = typeof(T).Name;
        var metadata = ((IObjectContextAdapter)context).ObjectContext.MetadataWorkspace;

        // Search the storage-space entity sets that map to real tables.
        var entitySets = metadata.GetItemCollection(DataSpace.SSpace)
                                 .GetItems<EntityContainer>()
                                 .Single()
                                 .BaseEntitySets
                                 .OfType<EntitySet>()
                                 .Where(s => !s.MetadataProperties.Contains("Type")
                                             || s.MetadataProperties["Type"].ToString() == "Tables");
        return entitySets.FirstOrDefault(t => t.Name == entityName);
    }

    private static string GetStringProperty(MetadataItem entitySet, string propertyName)
    {
        if (entitySet == null)
            throw new ArgumentNullException("entitySet");

        MetadataProperty property;
        if (entitySet.MetadataProperties.TryGetValue(propertyName, false, out property))
        {
            var str = property != null ? property.Value as string : null;
            if (!string.IsNullOrEmpty(str))
                return str;
        }
        return string.Empty;
    }
}
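
For completeness, a hedged usage sketch of the recipe (Context, MyEntity, MyEntities and someId are hypothetical names); as the comments below note, the lock is taken inside the transaction, before loading:

using (var scope = new TransactionScope())
using (var db = new Context())
{
    // Lock first, then load, so the loaded entities cannot be stale.
    db.LockTable<MyEntity>();

    var entity = db.MyEntities.Single(e => e.Id == someId);
    entity.Value++;

    db.SaveChanges();
    scope.Complete(); // committing the transaction releases the table lock
}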
Papillose answered 11/5, 2017 at 19:19 Comment(5)
Thanks for responding to my question. I was under the impression that serializable isolation would do the locking of the tables for me. When I ran some tests this also seemed to work. Do I understand you correctly in saying that it does not, and I would need to do it manually? Thanks. – Spier
Serializable does the locking. But it's the most expensive locking you can get, so it should be avoided. In a web app, if 10+ users make requests simultaneously with serializable isolation, they will wait for each other... If there will be more than 5 concurrent users, avoid serializable, or at least restrict it to special use cases only! I really do advise you to try optimistic locking; it needs just an additional field. The only downside is that on changes you might get an exception you need to deal with. But optimistic locking can be mixed with the table locks (the code I've attached). – Papillose
Thanks for your answer. I understand the disadvantages of this expensive isolation level. In fact, my preferred way is optimistic locking indeed. It's just that I'm in a transition period, with two applications on one database, and this is only a temporary solution. Until we strangle the old application off, I have no need to change the locking level, and changing it would only introduce a risk to the project. Thanks again for answering my question. – Spier
Hi @baHI, the workaround looks good, but I have a doubt: where or when do I need to call dbContext.LockTable<EntityName>()? Before or after modifying the entity? Can you share some example, please? Thanks in advance. – Anatolian
@JoaquinAlberto You usually need to lock before loading the entity; this ensures that no one else can alter the entity after you load it and until you save it. But you need to keep the lock very short-lived. It works well if you're dealing with simple entities with no or minimal relations, where just one main item needs to be locked and something atomic ensured on it. But if there's a hierarchy to "lock" or any dependency between multiple pieces of data, the best way is to use optimistic concurrency. – Papillose
