  1. May 04, 2007
  2. May 03, 2007
  3. May 02, 2007
    • Fix things so that when CREATE INDEX CONCURRENTLY sets pg_index.indisvalid · 8ec94385
      Tom Lane authored
      true at the very end of its processing, the update is broadcast via a
      shared-cache-inval message for the index; without this, existing backends that
      already have relcache entries for the index might never see it become valid.
      Also, force a relcache inval on the index's parent table at the same time,
      so that any cached plans for that table are re-planned; this ensures that
      the newly valid index will be used if appropriate.  Aside from making
      C.I.C. behave more reasonably, this is necessary infrastructure for some
      aspects of the HOT patch.  Pavan Deolasee, with a little further stuff from
      me.
    • Use the new TimestampDifferenceExceeds API instead of timestamp_cmp_internal · 229d3380
      Alvaro Herrera authored
      and TimestampDifference, to make coding clearer.  I think this should also fix
      the failure to start workers on platforms with low-resolution timers, as
      reported by Itagaki Takahiro.
    • Fix failure to check for INVALID worker entry in the new autovacuum code, which · a115bfe3
      Alvaro Herrera authored
      could happen when a worker took too long to start and was thus "aborted" by the
      launcher.  Noticed by lionfish buildfarm member.
    • Fix oversight in PG_RE_THROW processing: it's entirely possible that there · 88f1fd29
      Tom Lane authored
      isn't any place to throw the error to.  If so, we should treat the error
      as FATAL, just as we would have if it'd been thrown outside the PG_TRY
      block to begin with.
      
      Although this is clearly a *potential* source of bugs, it is not clear
      at the moment whether it is an *actual* source of bugs; there may not
      presently be any PG_TRY blocks in code that can be reached with no outer
      longjmp catcher.  So for the moment I'm going to be conservative and not
      back-patch this.  The change breaks ABI for users of PG_RE_THROW and hence
      might create compatibility problems for loadable modules, so we should not
      put it into released branches without proof that it's needed.
  4. May 01, 2007
  5. Apr 30, 2007
    • Change the timestamps recorded in transaction commit/abort xlog records · c4320619
      Tom Lane authored
      from time_t to TimestampTz representation.  This provides full gettimeofday()
      resolution of the timestamps, which might be useful when attempting to
      do point-in-time recovery --- previously it was not possible to specify
      the stop point with sub-second resolution.  But mostly this is to get
      rid of TimestampTz-to-time_t conversion overhead during commit.  Per my
      proposal of a day or two back.
    • Fix oversight in my patch of yesterday: forgot to ensure that stats would · 641912b4
      Tom Lane authored
      still be forced out at backend exit.
    • Implement rate-limiting logic on how often backends will attempt to send · 957d08c8
      Tom Lane authored
      messages to the stats collector.  This avoids the problem that enabling
      stats_row_level for autovacuum has a significant overhead for short
      read-only transactions, as noted by Arjen van der Meijden.  We can avoid
      an extra gettimeofday call by piggybacking on the one done for WAL-logging
      xact commit or abort (although that doesn't help read-only transactions,
      since they don't WAL-log anything).
      
      In my proposal for this, I noted that we could change the WAL log entries
      for commit/abort to record full TimestampTz precision, instead of only
      time_t as at present.  That's not done in this patch, but will be committed
      separately.
    • Marginal performance hack: use a dedicated routine instead of copyObject · 57b82bf3
      Tom Lane authored
      to copy nodes that are known to be Vars during plan reference adjustment.
      Saves useless memzero operation as well as the big switch in copyObject.
    • Marginal performance hack: avoid unnecessary work in expression_tree_mutator. · afaa6b98
      Tom Lane authored
      We can just palloc, instead of using makeNode, when we are going to
      overwrite the whole node anyway in the FLATCOPY macro.  Also, use
      FLATCOPY instead of copyObject for common node types Var and Const.
    • Marginal performance hack: remove the loop that used to be needed to · 39a333aa
      Tom Lane authored
      look through a freelist for a chunk of adequate size.  For a long time
      now, all elements of a given freelist have been exactly the same
      allocated size, so we don't need a loop.  Since the loop never iterated
      more than once, you'd think this wouldn't matter much, but it makes a
      noticeable savings in a simple test --- perhaps because the compiler
      isn't optimizing on a mistaken assumption that the loop would repeat.
      AllocSetAlloc is called often enough that saving even a couple of
      instructions is worthwhile.
  6. Apr 29, 2007
  7. Apr 28, 2007
    • Modify processing of DECLARE CURSOR and EXPLAIN so that they can resolve the · bbbe825f
      Tom Lane authored
      types of unspecified parameters when submitted via extended query protocol.
      This worked in 8.2 but I had broken it during plancache changes.  DECLARE
      CURSOR is now treated almost exactly like a plain SELECT through parse
      analysis, rewrite, and planning; only just before sending to the executor
      do we divert it away to ProcessUtility.  This requires a special-case check
      in a number of places, but practically all of them were already special-casing
      SELECT INTO, so it's not too ugly.  (Maybe it would be a good idea to merge
      the two by treating IntoClause as a form of utility statement?  Not going to
      worry about that now, though.)  That approach doesn't work for EXPLAIN,
      however, so for that I punted and used a klugy solution of running parse
      analysis an extra time if under extended query protocol.
  8. Apr 27, 2007
  9. Apr 26, 2007