Lock-based Concurrency Control
COMP9315 21T1 ♢ Locking ♢ [0/10]
❖ Lock-based Concurrency Control
Requires read/write lock operations which act on database objects.
Synchronise access to shared DB objects via these rules:
- before reading X, get read (shared) lock on X
- before writing X, get write (exclusive) lock on X
- a tx attempting to get a read lock on X is blocked if another tx already holds a write lock on X
- a tx attempting to get a write lock on X is blocked if another tx holds any kind of lock on X
These rules alone do not guarantee serializability.
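The rules above can be sketched as a lock-compatibility check. This is a minimal illustration (function and variable names are hypothetical, not part of any real DBMS API), and it ignores the case where the requester already holds a lock on X (upgrades are covered later):

```python
def can_grant(requested: str, holders: dict) -> bool:
    """holders maps tx id -> lock type ('r' or 'w') currently held on X."""
    if not holders:
        return True                      # no locks on X: grant anything
    if requested == 'r':
        # a read lock is blocked only if some tx holds a write lock
        return all(t == 'r' for t in holders.values())
    # a write lock is blocked if any tx holds any kind of lock
    return False

assert can_grant('r', {})                # read lock granted on unlocked X
assert can_grant('r', {1: 'r'})          # shared: a second reader is allowed
assert not can_grant('w', {1: 'r'})      # writer blocked by a reader
assert not can_grant('r', {1: 'w'})      # reader blocked by a writer
```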
❖ Lock-based Concurrency Control (cont)
Locks introduce an additional mechanism into the DBMS:
the lock manager, which manages the locks requested by the scheduler
❖ Lock-based Concurrency Control (cont)
Lock table entries contain:
- object being locked (DB, table, tuple, field)
- type of lock: read/shared, write/exclusive
- FIFO queue of tx's requesting this lock
- count of tx's currently holding the lock (max 1 for write locks)
Lock and unlock operations must be atomic.
Lock upgrade:
- if a tx holds a read lock, and it is the only tx holding that lock
- then the lock can be converted into a write lock
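A lock-table entry of this kind, with the upgrade rule, might be sketched as follows (a hypothetical illustration; field and class names are not from any real DBMS):

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LockEntry:
    obj: str                                     # object being locked (DB, table, tuple, field)
    mode: Optional[str] = None                   # 'r' (shared) or 'w' (exclusive); None if free
    holders: set = field(default_factory=set)    # tx's currently holding the lock
    queue: deque = field(default_factory=deque)  # FIFO queue of waiting tx's

    def try_upgrade(self, tx) -> bool:
        # upgrade allowed only if tx is the sole holder of a read lock
        if self.mode == 'r' and self.holders == {tx}:
            self.mode = 'w'
            return True
        return False

e = LockEntry('tuple#42', mode='r', holders={7})
assert e.try_upgrade(7)        # sole reader: upgraded to a write lock
assert e.mode == 'w'
e2 = LockEntry('tuple#43', mode='r', holders={1, 2})
assert not e2.try_upgrade(1)   # another tx also holds the read lock
```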
❖ Lock-based Concurrency Control (cont)
Consider the following schedule, using locks:
T1(a): Lr(Y) R(Y)
T2(a): Lr(X) R(X) U(X)
T1(b): U(Y) Lw(X) W(X) U(X)
T2(b): Lw(Y)....W(Y) U(Y)
(where Lr = read-lock, Lw = write-lock, U = unlock)
Locks correctly ensure controlled access to X and Y.
Despite this, the schedule is not serializable. (Ex: prove this)
To guarantee serializability, we require an additional constraint:
- in every tx, all lock requests precede all unlock requests
Each transaction is then structured as:
- growing phase where locks are acquired
- action phase where "real work" is done
- shrinking phase where locks are released
Clearly, this reduces potential concurrency ...
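The two-phase rule ("all lock requests precede all unlock requests") can be sketched as a simple guard on a transaction object (illustrative names only, assuming the first unlock ends the growing phase):

```python
class TwoPhaseTx:
    def __init__(self):
        self.held = set()
        self.shrinking = False      # becomes True at the first unlock

    def lock(self, obj):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock requested after unlock")
        self.held.add(obj)          # growing phase: acquire locks

    def unlock(self, obj):
        self.shrinking = True       # shrinking phase: only releases allowed
        self.held.discard(obj)

t = TwoPhaseTx()
t.lock('X'); t.lock('Y')            # growing phase
t.unlock('X')                       # shrinking phase begins
try:
    t.lock('Z')                     # illegal under 2PL
except RuntimeError:
    pass                            # request correctly rejected
```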
Appropriate locking can guarantee serializability.
However, it also introduces potential undesirable effects:
- Deadlock
- No transaction can proceed; each is waiting on a lock held by another.
- Starvation
- One transaction is permanently "frozen out" of access to data.
- Reduced performance
- Locking introduces delays while waiting for locks to be released.
Deadlock occurs when two tx's wait for a lock on an item held by the other.
Example:
T1: Lw(A) R(A) Lw(B) ......
T2: Lw(B) R(B) Lw(A) .....
How to deal with deadlock?
- prevent it happening in the first place
- let it happen, detect it, recover from it
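The "detect it" option is usually based on a waits-for graph (described below): an edge Tj → Tk means Tj is waiting on a lock held by Tk, and a cycle means deadlock. A minimal sketch of the cycle check (hypothetical representation of the graph as a dict):

```python
def has_cycle(waits_for: dict) -> bool:
    """waits_for maps each tx to the set of tx's it is waiting on."""
    visited, on_path = set(), set()

    def dfs(tx):
        visited.add(tx)
        on_path.add(tx)
        for nxt in waits_for.get(tx, ()):
            if nxt in on_path or (nxt not in visited and dfs(nxt)):
                return True             # back edge: cycle found
        on_path.discard(tx)
        return False

    return any(dfs(tx) for tx in list(waits_for) if tx not in visited)

# The example above: T1 waits for T2 (lock on B), T2 waits for T1 (lock on A)
assert has_cycle({'T1': {'T2'}, 'T2': {'T1'}})
assert not has_cycle({'T1': {'T2'}, 'T2': set()})
```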
Handling deadlock involves forcing a transaction to "back off"
- select process to roll back
- choose on basis of how far tx has progressed, # locks held, ...
- roll back the selected process
- how far does it need to be rolled back?
- worst-case scenario: abort one transaction, then retry
- prevent starvation
- need methods to ensure that same tx isn't always chosen
Methods for managing deadlock
- timeout : set max time limit for each tx
- waits-for graph : records that Tj is waiting on a lock held by Tk
- prevent deadlock by checking for a new cycle on each wait ⇒ abort the tx that introduced it
- detect deadlock by periodic check for cycles ⇒ abort some tx in the cycle
- timestamps : use tx start times as basis for priority (smaller timestamp = older tx = higher priority)
- scenario: Tj tries to get lock held by Tk ...
- wait-die: if Tj < Tk, then Tj waits, else Tj rolls back
- wound-wait: if Tj < Tk, then Tk rolls back, else Tj waits
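The two timestamp schemes can be sketched as decision functions (an illustration of the rules above; comparing start-time timestamps, so Tj < Tk means Tj is the older tx):

```python
def wait_die(tj: int, tk: int) -> str:
    # older requester waits; younger requester dies (rolls back)
    return "Tj waits" if tj < tk else "Tj rolls back"

def wound_wait(tj: int, tk: int) -> str:
    # older requester wounds the holder (Tk rolls back); younger requester waits
    return "Tk rolls back" if tj < tk else "Tj waits"

assert wait_die(1, 2) == "Tj waits"         # older Tj waits for younger Tk
assert wait_die(2, 1) == "Tj rolls back"    # younger Tj dies
assert wound_wait(1, 2) == "Tk rolls back"  # older Tj wounds younger Tk
assert wound_wait(2, 1) == "Tj waits"       # younger Tj waits
```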
Properties of deadlock handling methods:
- both wait-die and wound-wait are fair
- wait-die tends to
- roll back tx's that have done little work
- but rolls back tx's more often
- wound-wait tends to
- roll back tx's that may have done significant work
- but rolls back tx's less often
- timestamps are easier to implement than a waits-for graph
- waits-for minimises rollbacks due to deadlock
Produced: 14 Apr 2021