pgrowlocks.c

      Optimize locking a tuple already locked by another subxact
      Commit 27846f02, authored by Alvaro Herrera

      Locking and updating the same tuple repeatedly led to some strange
      multixacts being created which had several subtransactions of the same
      parent transaction holding locks of the same strength.  However,
      once a subxact of the current transaction holds a lock of a given
      strength, it's not necessary to acquire the same lock again.  This made
      some coding patterns much slower than required.
      
      The fix is twofold.  First we change HeapTupleSatisfiesUpdate to return
      HeapTupleBeingUpdated for the case where the current transaction is
      already a single-xid locker for the given tuple; it used to return
      HeapTupleMayBeUpdated for that case.  The new logic is simpler, and the
      change to pgrowlocks is a testament to that: previously we needed to
      check for the single-xid locker separately in a very ugly way.  That
      test is simpler now.
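
      As a minimal illustration, here is a self-contained C sketch of the
      new decision.  The type and helper names are simplified stand-ins
      invented for this sketch, not the real PostgreSQL definitions; the
      actual check also consults the tuple infomask and handles multixact
      xmax values:

          #include <stdbool.h>

          /* Simplified stand-ins; NOT the real PostgreSQL types. */
          typedef unsigned int TransactionId;
          typedef enum { MAY_BE_UPDATED, BEING_UPDATED } UpdateResult;

          typedef struct
          {
              TransactionId xmax;        /* xid holding the lock or update */
              bool          locked_only; /* xmax only locked, did not update */
          } TupleState;

          static TransactionId current_xid = 100;  /* stand-in for our own xid */

          /*
           * Sketch of the API change: when the current transaction is itself
           * the single-xid locker, report BEING_UPDATED rather than
           * MAY_BE_UPDATED, so callers handle "locked by me" on the same
           * path as "locked by someone else".
           */
          static UpdateResult
          satisfies_update_sketch(const TupleState *tup)
          {
              if (tup->locked_only && tup->xmax == current_xid)
                  return BEING_UPDATED;   /* used to be MAY_BE_UPDATED */
              return MAY_BE_UPDATED;
          }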
      
      As fallout from the HTSU change, some of its callers need to be
      amended so that tuple-locked-by-own-transaction is taken into account
      in the BeingUpdated case rather than the MayBeUpdated case.  For many
      of them there is no difference; but heap_delete() and heap_update()
      now check explicitly and do not grab the tuple lock in that case.
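
      Below is a hedged sketch of the amended caller pattern, using the
      same simplified stand-ins as above; in the real heap_delete() and
      heap_update() the lock and wait steps go through LockTuple() and the
      xact/multixact wait routines:

          #include <stdbool.h>

          /* Same simplified stand-ins as in the previous sketch. */
          typedef unsigned int TransactionId;
          typedef enum { MAY_BE_UPDATED, BEING_UPDATED } UpdateResult;
          typedef struct
          {
              TransactionId xmax;
              bool          locked_only;
          } TupleState;

          static TransactionId current_xid = 100;

          static void acquire_tuple_lock(void) { /* models LockTuple() */ }
          static void wait_for_locker(void)    { /* models waiting on the locker */ }

          /*
           * Sketch of the amended caller: on BEING_UPDATED, first check
           * whether the locker is our own transaction; if so, skip the
           * heavyweight tuple lock and the wait, and proceed right away.
           */
          static void
          delete_tuple_sketch(const TupleState *tup, UpdateResult htsu)
          {
              if (htsu == BEING_UPDATED)
              {
                  if (tup->locked_only && tup->xmax == current_xid)
                  {
                      /* locked only by ourselves: nothing to wait for */
                  }
                  else
                  {
                      acquire_tuple_lock();
                      wait_for_locker();
                  }
              }
              /* ... proceed to mark the tuple deleted (or updated) ... */
          }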
      
      The HTSU change also means that the routine
      MultiXactHasRunningRemoteMembers introduced in commit 11ac4c73 is no
      longer necessary and can be removed; the case that used to require it
      is now handled naturally as a result of the changes to heap_delete()
      and heap_update().
      
      The second part of the fix to the performance issue is to adjust
      heap_lock_tuple to avoid the slowness (a sketch of the resulting
      logic follows the numbered list):
      
      1. Previously we checked whether our own transaction already held a
      strong-enough lock and returned MayBeUpdated, but only in the
      multixact case.  Now we do it for the plain Xid case as well, which
      saves having to LockTuple.
      
      2. If the current transaction is the only locker of the tuple (but
      with a lock not as strong as what we need; otherwise it would have
      been caught by the check above), we can skip sleeping on the
      multixact and instead go straight to creating an updated multixact
      with the additional lock strength.
      
      3. Most importantly, make sure that both the single-xid-locker and
      the multixact-locker optimizations are always applied.  We do this by
      checking both in a single place, rather than in two separate portions
      of the routine -- something made possible by the
      HeapTupleSatisfiesUpdate API change.  Previously we would only check
      for the single-xid case when HTSU returned MayBeUpdated, and only
      checked for the multixact case when HTSU returned BeingUpdated.  This
      was at odds with what HTSU actually returned in one case: if our own
      transaction was a locker in a multixact, it returned MayBeUpdated, so
      the optimization never applied.  This is what led to the large
      multixacts in the first place.
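
      The following self-contained C sketch models the resulting control
      flow.  All names, and the simple total order on lock strengths, are
      stand-ins for illustration; the real code reads the tuple infomask
      and multixact member array and uses a lock-conflict table rather
      than a plain comparison:

          #include <stdbool.h>

          /* Self-contained model; NOT the real PostgreSQL definitions. */
          typedef unsigned int TransactionId;

          /* Stand-ins for tuple lock strengths, weakest to strongest. */
          typedef enum { KEY_SHARE, SHARE, NO_KEY_UPDATE, UPDATE_LOCK } LockMode;

          #define MAX_MEMBERS 4

          typedef struct
          {
              bool          is_multi;       /* xmax is a multixact */
              TransactionId xmax;           /* plain-xid locker when !is_multi */
              LockMode      xmax_mode;
              int           nmembers;       /* multixact members when is_multi */
              TransactionId member_xid[MAX_MEMBERS];
              LockMode      member_mode[MAX_MEMBERS];
          } TupleLockState;

          static TransactionId current_xid = 100;

          /* Models TransactionIdIsCurrentTransactionId(); the real test
           * also matches subtransactions of the current transaction. */
          static bool
          is_our_xact(TransactionId xid)
          {
              return xid == current_xid;
          }

          /* Point 3: one place that checks both the plain-xid and the
           * multixact case; returns the strongest lock our transaction
           * already holds, or -1 for none. */
          static int
          own_lock_strength(const TupleLockState *t)
          {
              int strongest = -1;

              if (!t->is_multi)
              {
                  if (is_our_xact(t->xmax))
                      strongest = (int) t->xmax_mode;
              }
              else
              {
                  for (int i = 0; i < t->nmembers; i++)
                      if (is_our_xact(t->member_xid[i]) &&
                          (int) t->member_mode[i] > strongest)
                          strongest = (int) t->member_mode[i];
              }
              return strongest;
          }

          static bool
          has_other_lockers(const TupleLockState *t)
          {
              if (!t->is_multi)
                  return !is_our_xact(t->xmax);
              for (int i = 0; i < t->nmembers; i++)
                  if (!is_our_xact(t->member_xid[i]))
                      return true;
              return false;
          }

          typedef enum { DONE, UPGRADE_NO_SLEEP, MUST_SLEEP } LockOutcome;

          static LockOutcome
          lock_tuple_sketch(const TupleLockState *t, LockMode wanted)
          {
              /* Point 1: we already hold a strong-enough lock, whether as
               * a plain xid or as a multixact member; return at once,
               * with no LockTuple() and no sleep. */
              if (own_lock_strength(t) >= (int) wanted)
                  return DONE;

              /* Point 2: we are the only locker, just at a weaker
               * strength; skip sleeping and go straight to creating the
               * upgraded multixact. */
              if (!has_other_lockers(t))
                  return UPGRADE_NO_SLEEP;

              /* Someone else holds a lock: fall through to the wait path. */
              return MUST_SLEEP;
          }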
      
      Per bug report #8470 by Oskari Saarenmaa.