
syscache.c

Commit 8b9bc234 by Tom Lane

Remove the limit on the number of entries allowed in catcaches, and
      remove the infrastructure needed to enforce the limit, ie, the global
      LRU list of cache entries.  On small-to-middling databases this wins
      because maintaining the LRU list is a waste of time.  On large databases
      this wins because it's better to keep more cache entries (we assume
      such users can afford to use some more per-backend memory than was
      contemplated in the Berkeley-era catcache design).  This provides a
      noticeable improvement in the speed of psql \d on a 10000-table
      database, though it doesn't make it instantaneous.
      
      While at it, use per-catcache settings for the number of hash buckets
      per catcache, rather than the former one-size-fits-all value.  It's a
      bit silly to be using the same number of hash buckets for, eg, pg_am
      and pg_attribute.  The specific values I used might need some tuning,
      but they seem to be in the right ballpark based on CATCACHE_STATS
      results from the standard regression tests.