Concurrency control

Hello
I would like to know the best way to implement concurrency control in a 3-tier application. My first thought is:

  1. A client wants to edit a record from a dataset.
  2. Send a request to the server asking for a lock on that record.
  3. The server accepts or denies the edit request based on a lock table.

Based on this scenario the locks should have a reference to both the record locked and client using that record.
The client has to send periodic keep-alive messages to the server. The keep-alive is used to free locked records in case we lose the client in the middle of an editing operation.
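
If I go this route, I imagine each server-side lock entry would look something like this minimal sketch (hypothetical names), with the keep-alive refreshing a timestamp so stale locks can be freed:

type
  TRecordLock = class(TObject)
  public
    RecordPK: Integer;        // primary key of the locked record
    ClientID: Integer;        // client currently holding the lock
    LastKeepAlive: TDateTime; // refreshed on every keep-alive message
  end;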

I will be using Delphi with DataSnap. Maybe this is a novice question, but I have to ask!

Eiderdown answered 10/3, 2011 at 22:8 Comment(0)

I design my applications with Optimistic Concurrency Control in mind: no record is locked when a user wants to edit it, and no explicit lock management is attempted.

Important calculations and updates are done server side (application server or database), relying on the database's built-in locking while the updates applied by the client are processed. DataSnap's automatic transaction rollback prevents these locks from blocking other concurrent users in case of failure.

With DataSnap, you have full control to prevent data loss when two users' edits collide, by using the ProviderFlags of your fields appropriately. Set pfInWhere on any field you want checked automatically: the update or delete only succeeds if the field still has the same value it had when the record was read.
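
With persistent fields these flags can also be set at design time in the Object Inspector; done in code on the application server it might look like this minimal sketch (TServerModule and the qryProducts dataset are hypothetical names):

procedure TServerModule.DataModuleCreate(Sender: TObject);
begin
  // Include these fields in the WHERE clause of the generated UPDATE/DELETE,
  // so the statement only matches if the originally read values are still there.
  qryProducts.FieldByName('Name').ProviderFlags :=
    qryProducts.FieldByName('Name').ProviderFlags + [pfInWhere];
  qryProducts.FieldByName('Price').ProviderFlags :=
    qryProducts.FieldByName('Price').ProviderFlags + [pfInWhere];
end;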

Additionally, when a collision occurs you can react programmatically at the application server (the provider's OnUpdateError event), at the client (the TClientDataSet OnReconcileError event), or even ask the user to resolve the conflict (take a look at the Reconcile Error Dialog in the New Items repository).
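
As a rough sketch of the client side (TClientForm and cdsOrders are hypothetical names; HandleReconcileError comes from the RecError unit that the repository item adds to your project):

uses RecError; // unit added by the Reconcile Error Dialog repository item

procedure TClientForm.cdsOrdersReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  // Show the standard reconcile dialog so the user can skip, correct, merge,
  // cancel or refresh the conflicting record instead of silently losing edits.
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;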

Meanwhile, IMHO, by avoiding the complexity required to maintain lock lists, client lists, locks-per-client lists, keep-alive messages, robust application server failure recovery and all the possible glitches, you'll end up with a cleaner and better solution.

Boffa answered 10/3, 2011 at 23:1 Comment(12)
Just what I was about to suggest. Note that this does work better with low contention, but I've found that's often the case in business apps. If you can, use OCC and fall back only when actual use demonstrates a need. You may find you never actually fall back, so don't write the locking code until you have to. The fallback pattern I've used is row-level OCC, then field-level OCC, then explicit locking.Trotline
I was thinking of a pessimistic lock in my case, because editing just one record can take some time and change a lot of values. It would be bad to end up with an error like "Sorry, we can't save the changes you have just made."Eiderdown
@Najem: Re-read my answer: you don't have to end up with that kind of error, you just have to think the other way around and react when the collision occurs. If it is possible to programmatically determine what to do, you can do it transparently to the user; if not, ask the user what to do without losing her edits. IMHO it is better in a lot of ways, for example because the complexity to code and maintain it is much lower than writing a proper locking mechanism, just to mention one. You'll get fewer end-user complaints because someone went to lunch while editing a much-needed record.Boffa
@moz: I think you can effectively use OCC in highly concurrent environments with proper design (in fact a mix, but locks are always managed via the robust database mechanism under transaction control), and it performs and scales much better than pessimistic locking. The more concurrency, the less likely you are to fall back to pessimistic locking in your own app server.Boffa
@jachguate: I've seen the pathological case of high concurrency, high dependency a few times so I'm cautious. It's better to tell a user "Sam Jones has this record locked" than try to get them to resolve highly conflicted edits afterwards (think user one setting up manufacturing process details when the customer rings user two to modify the job).Trotline
@moz: Can you please explain the mix idea?Eiderdown
@Boffa: I agree with moz: "Sam Jones has this record locked".Eiderdown
@Najem: I'll write an answer.Trotline
@moz, you have a point; as always, there are no absolute answers for broad things like OCC or multi-tier.Boffa
@jachguate: which also makes it a more interesting question to answer. Especially when "user one" is prone to ranting about how they hate being interrupted while they're setting the process up and the customer is ringing every customer-facing staff member they can find to get their 27th set of modifications made before manufacturing starts. And they all blame the stupid programmer who can't "just make it work".Trotline
@moz, <joke>it looks like this really hurt you... </joke> :D. IMHO, there are things that belong to the process before the software, and for very specific things like this you can use very specific solutions rather than a system-wide locking mechanism, which seems like overkill, at least to me. :)Boffa
I'm not sure what you mean by "system-wide locking mechanism", but yes, specific situations need specific solutions. The trick is to generalise existing solutions when you can.Trotline

I'm building on jachguate's Optimistic Concurrency Control answer to answer a question posed in comments.

I prefer to use OCC wherever I can because the implementation is easier. I'm going to talk about a three-tier app using an object persistence framework. There are three levels to my preferred scheme:

  1. row or object level control, where a unique version ID is stored on each object. Whenever the object is updated, the version ID changes. If your version ID doesn't match what's already there, your update fails (see the sketch after this list).

  2. field or column level locking. You send a complete copy of the original object as well as the updated one. For each field in your update, the value currently stored is compared with your original value before the new value is applied. It's possible to ask the user to resolve the conflicts rather than discarding them, but this becomes messy as the amount of data in the commit increases.

  3. pessimistic locking. Each object has a lock owner which is usually null (the object is not locked). When you want to edit the object you first lock it. The problem here is that locks need to be tidied up and the business rules around that can be ugly (what timeout is desirable).
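
As an illustration of the first level, here's a minimal sketch of a version-ID check, assuming a hypothetical Products table with a Version column and a query component (qryUpdate) that exposes RowsAffected:

// Row-level optimistic check: the UPDATE only succeeds if the row still has
// the version we originally read; otherwise someone else changed it first.
qryUpdate.SQL.Text :=
  'UPDATE Products SET Name = :NewName, Version = Version + 1 ' +
  'WHERE ID = :ID AND Version = :OldVersion';
qryUpdate.ParamByName('NewName').AsString := NewName;
qryUpdate.ParamByName('ID').AsInteger := ProductID;
qryUpdate.ParamByName('OldVersion').AsInteger := OriginalVersion;
qryUpdate.ExecSQL;
if qryUpdate.RowsAffected = 0 then
  raise Exception.Create('Update failed: the record was changed by another user');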

The advantage of this is that most of the time the low-cost OCC path is taken. For things that happen a lot but with low contention the benefits are significant. Think of product tracking in a warehouse - products move all the time, but very rarely do identical items move at the same time, and when they do resolving is easy (quantity left = original less my removal and your removal). For the complex case where (say) a product is relocated it probably makes sense to lock the product while it's in transit (because that mirrors the physical situation).

When you do have to fall back to locking, it's often useful to be able to notify both users and have a communication channel. At least notify the user who wants the lock when it's available, preferably allow them to send a message to the lock holder, and possibly even allow them to force the lock. Then notify the lock loser that "Jo Smith has taken your lock, and you lose your changes". Let office politics sort that one out :)

I usually drive the fallback process by user complaints rather than bug reports. If users are complaining that they lose their edits too often in a particular process, change it. If users complain that records are locked too often, you will have to refactor your object mappings to increase lock granularity or make business process changes.

Janelljanella answered 11/3, 2011 at 0:27 Comment(2)
Nice answer, but it seems unrelated to DataSnap, which is stated in the question. DataSnap is (or can be) a stateless application server with no built-in support for the locking schemes you mention here.Boffa
That's because I've not used DataSnap, but I think the design-level details apply. A stateless server can be made implicitly stateful if you need to, but it will take more work.Trotline

The approach given by jachguate is great, and probably better, but in case you do want to implement this, you will need a TThreadList on the server that is created when the service is started. Use a TThreadList because it's thread-safe. You can have one TThreadList per table to minimize the performance hit of navigating the lists. To keep track of what's locked, you'll need an object that's created and added to the list:

  TLockedItem = class(TObject)
  public
    iPK: Integer;       // primary key of the locked record
    iClientID: Integer; // ID of the client holding the lock
  end;

To do the actual locking, you'd need something like this:

var
  oLockedList: TThreadList; // created when the server starts, one per table

function LockItem(pPK, pClientID: Integer): Boolean;
var
  oLockedItem: TLockedItem;
  oInternalList: TList;
  iCont: Integer;
  bExists: Boolean;
begin
  Result := False;
  bExists := False;
  if (Assigned(oLockedList)) then
  begin
    // Keep the list locked for the whole check-then-add sequence, so two
    // clients cannot both pass the check and both acquire the same lock.
    oInternalList := oLockedList.LockList;
    try
      iCont := 0;
      while ((not bExists) and (iCont < oInternalList.Count)) do
      begin
        oLockedItem := TLockedItem(oInternalList[iCont]);
        if (oLockedItem.iPK = pPK) then
          bExists := True
        else
          Inc(iCont);
      end;
      if (not bExists) then
      begin
        oLockedItem := TLockedItem.Create;
        oLockedItem.iPK := pPK;
        oLockedItem.iClientID := pClientID;
        oInternalList.Add(oLockedItem);
        Result := True; // lock acquired by this client
      end;
    finally
      oLockedList.UnlockList;
    end;
  end;
end;

That's just an idea of what you'd need. You would have to write an unlock method with similar logic (see the sketch below). You'd probably also need a list of clients, keeping a pointer to each TLockedItem held by each client, in case of a lost connection. This is not a definitive answer, just a push in the right direction, in case you want to implement this approach.
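
For completeness, here's a minimal sketch of the matching unlock, under the same assumptions (the same oLockedList, and only the client holding the lock may release it):

procedure UnlockItem(pPK, pClientID: Integer);
var
  oInternalList: TList;
  iCont: Integer;
  oLockedItem: TLockedItem;
begin
  if (Assigned(oLockedList)) then
  begin
    oInternalList := oLockedList.LockList;
    try
      // Walk backwards so deleting entries doesn't disturb the iteration.
      for iCont := oInternalList.Count - 1 downto 0 do
      begin
        oLockedItem := TLockedItem(oInternalList[iCont]);
        if ((oLockedItem.iPK = pPK) and (oLockedItem.iClientID = pClientID)) then
        begin
          oInternalList.Delete(iCont);
          oLockedItem.Free;
        end;
      end;
    finally
      oLockedList.UnlockList;
    end;
  end;
end;
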
Good luck

Lon answered 10/3, 2011 at 23:27 Comment(5)
sorry, what are you implementing here? With an OPF and pessimistic locking you might lock the objects using something like this, but your answer is too much code, not enough explanation at this stage.Trotline
@pascal Can you explain how this approach will scale in a failover/load-balancing scenario? I think with this you are stuck with a one-app-server limit.Boffa
@moz and @jachguate I'm just showing @Eiderdown something he might take a look at. As I stated in my own post, your implementation is the way to go. I'm just showing some possibilities so he can decide what to do on his own. It's not for a failover/load-balancing scenario, but we don't know what @Eiderdown needs. The more ideas we throw at someone, the better decision he can make. :)Lon
@Eiderdown you're welcome. Theoretical discussion is great, and very important, but some code really helps as well. If you need anything, feel free to let me know. ;-)Lon
Downvotes without commenting why are downright rude. The guy who asked the question thanked me for the code, saying it will be helpful. You don't see him thanking any of the others...Lon
